diff --git a/SOL004/LICENSE b/SOL004/LICENSE new file mode 100644 index 0000000000000000000000000000000000000000..8dada3edaf50dbc082c9a125058f25def75e625a --- /dev/null +++ b/SOL004/LICENSE @@ -0,0 +1,201 @@ + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. 
Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. 
Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "{}" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright {yyyy} {name of copyright owner} + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. 
+   You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
diff --git a/SOL004/README.md b/SOL004/README.md
new file mode 100644
index 0000000000000000000000000000000000000000..efd6db060fb1e89fd061befb421de88ffc15a5c5
--- /dev/null
+++ b/SOL004/README.md
@@ -0,0 +1,170 @@
+# README
+
+## Cloning the repo
+
+You need to clone the repo with the "--recursive" option to fetch all the dependent git submodules:
+
+```bash
+# Clone with SSH
+git clone --recursive ssh://git@osm.etsi.org:29419/vnf-onboarding/osm-packages.git
+# Clone with HTTPS
+git clone --recursive https://osm.etsi.org/gitlab/vnf-onboarding/osm-packages.git
+```
+
+## Structure of the repo
+
+This repo contains OSM packages used for testing and for the hackfests.
+
+There is typically one folder for each VNF under test and another for the NS that runs the test, for instance:
+
+- `hackfest_basic_vnf`: the VNF package to run the hackfest basic test
+- `hackfest_basic_ns`: the NS package to run the hackfest basic test
+
+Please use the following conventions:
+
+- VNF folders should end with "_vnf"
+- NS folders should end with "_ns"
+
+Charm sources must be stored in the package folder, under `charms/layers`. If relations are in place, the charm interfaces must be stored in the `charms/interfaces` folder.
+
+Thanks to the new way of building and generating packages, charm sources can be placed directly in the package folder under `charms/layers/`, thus simplifying the structure. When the package is built, the charm is compiled and only the binaries are placed in the tar.gz file.
+
+## Working with packages
+
+Osmclient now includes all the utilities required to validate and build packages, so the single `osm` command-line client can do everything you need to work with VNF and NS packages.
+
+### Uploading a package from a folder
+
+This is by far the simplest way of working. The following command will validate the package, build the charm from source if it exists under `PKGFOLDER/charms/layers`, build the tar.gz, and onboard/upload it to OSM.
+
+```bash
+osm nfpkg-create PKGFOLDER
+```
+
+The same applies to NS packages:
+
+```bash
+osm nspkg-create PKGFOLDER
+```
+
+Full help for both commands is shown below:
+
+```bash
+$ osm nfpkg-create --help
+Usage: osm nfpkg-create [OPTIONS] FILENAME
+
+  onboards a new NFpkg (alias of nfpkg-create)
+
+  FILENAME: NF Package tar.gz file, NF Descriptor YAML file or NF Package folder
+  If FILENAME is a file (NF Package tar.gz or NF Descriptor YAML), it is onboarded.
+  If FILENAME is an NF Package folder, it is built and then onboarded.
+
+Options:
+  --overwrite TEXT     Deprecated. Use override
+  --override TEXT      overrides fields in descriptor, format: "key1.key2...=value[;key3...=value;...]"
+  --skip-charm-build   The charm will not be compiled, it is assumed to already exist
+  --override-epa       adds guest-epa parameters to all VDU
+  --override-nonepa    removes all guest-epa parameters from all VDU
+  --override-paravirt  overrides all VDU interfaces to PARAVIRT
+  -h, --help           Show this message and exit.
+```
+
+```bash
+$ osm nspkg-create --help
+Usage: osm nspkg-create [OPTIONS] FILENAME
+
+  onboards a new NSpkg
+
+  FILENAME: NF Package tar.gz file, NF Descriptor YAML file or NF Package folder
+  If FILENAME is a file (NF Package tar.gz or NF Descriptor YAML), it is onboarded.
+  If FILENAME is an NF Package folder, it is built and then onboarded.
+
+Options:
+  --overwrite TEXT    Deprecated. Use override
+  --override TEXT     overrides fields in descriptor, format: "key1.key2...=value[;key3...=value;...]"
+  --skip-charm-build  The charm will not be compiled, it is assumed to already exist
+  -h, --help          Show this message and exit.
+```
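+For example, a field of the descriptor can be overridden at onboarding time with `--override`. A minimal sketch; the folder name and key path below are illustrative and depend on your descriptor:
+
+```bash
+# Sketch: onboard a VNF package folder while overriding its description
+osm nfpkg-create --override 'vnfd.description=Test build' hackfest_basic_vnf
+```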
+### Validating a descriptor
+
+The following command will look for all the descriptor files in a folder and validate them:
+
+```bash
+osm package-validate PKGFOLDER
+```
+
+Some relevant options are shown below:
+
+```bash
+$ osm package-validate --help
+Usage: osm package-validate [OPTIONS] [BASE_DIRECTORY]
+
+  Validate descriptors given a base directory.
+
+  BASE_DIRECTORY: Stub folder for NS, VNF or NST package.
+
+Options:
+  --recursive / --no-recursive  The activated recursive option will validate the yaml files within the indicated directory and in its subdirectories
+  -h, --help                    Show this message and exit.
+```
+
+### Building a package
+
+The following command will validate the package and build the tar.gz file. It will also build the charms from source if they are present in the package folder.
+
+```bash
+osm package-build PKGFOLDER
+```
+
+Some relevant options are shown below:
+
+```bash
+$ osm package-build --help
+Usage: osm package-build [OPTIONS] PACKAGE_FOLDER
+
+  Build the package NS, VNF given the package_folder.
+
+  PACKAGE_FOLDER: Folder of the NS, VNF or NST to be packaged
+
+Options:
+  --skip-validation   skip package validation
+  --skip-charm-build  the charm will not be compiled, it is assumed to already exist
+  -h, --help          Show this message and exit.
+```
+
+### Creating a package structure
+
+The best way to start is by copying an existing, up-to-date package. But you can also create a first skeleton using this command:
+
+```bash
+$ osm package-create --help
+Usage: osm package-create [OPTIONS] PACKAGE_TYPE PACKAGE_NAME
+
+  Creates an OSM NS, VNF, NST package
+
+  PACKAGE_TYPE: Package to be created: NS, VNF or NST.
+  PACKAGE_NAME: Name of the package to create the folder with the content.
+
+Options:
+  --base-directory TEXT       (NS/VNF/NST) Set the location for package creation. Default: "."
+  --image TEXT                (VNF) Set the name of the vdu image. Default "image-name"
+  --vdus INTEGER              (VNF) Set the number of vdus in a VNF. Default 1
+  --vcpu INTEGER              (VNF) Set the number of virtual CPUs in a vdu. Default 1
+  --memory INTEGER            (VNF) Set the memory size (MB) of the vdu. Default 1024
+  --storage INTEGER           (VNF) Set the disk size (GB) of the vdu. Default 10
+  --interfaces INTEGER        (VNF) Set the number of additional interfaces apart from the management interface. Default 0
+  --vendor TEXT               (NS/VNF) Set the descriptor vendor. Default "OSM"
+  --override                  (NS/VNF/NST) Flag for overriding the package if exists.
+  --detailed                  (NS/VNF/NST) Flag for generating descriptor .yaml with all possible commented options
+  --netslice-subnets INTEGER  (NST) Number of netslice subnets. Default 1
+  --netslice-vlds INTEGER     (NST) Number of netslice vlds. Default 1
+  -h, --help                  Show this message and exit.
+```
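+Putting these commands together, a typical first-time workflow could look like this (the package name is illustrative, and the skeleton folder is assumed to match it):
+
+```bash
+# Create a VNF package skeleton with 2 VDUs
+osm package-create --vdus 2 --memory 2048 vnf my_test_vnf
+# Validate the generated descriptors
+osm package-validate my_test_vnf
+# Build the tar.gz
+osm package-build my_test_vnf
+# Onboard the resulting package
+osm nfpkg-create my_test_vnf
+```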
+
+
diff --git a/SOL004/charm-packages/ha_proxy_charm_ns/ha_proxy_charm_nsd.yaml b/SOL004/charm-packages/ha_proxy_charm_ns/ha_proxy_charm_nsd.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..b9cb16a0ebbb47d9b22179263c2d9124683af21c
--- /dev/null
+++ b/SOL004/charm-packages/ha_proxy_charm_ns/ha_proxy_charm_nsd.yaml
@@ -0,0 +1,37 @@
+nsd:
+  nsd:
+  - id: ha_proxy_charm-ns
+    name: ha_proxy_charm-ns
+    version: '1.0'
+    description: NS with 2 VNFs with cloudinit connected by datanet and mgmtnet VLs
+    df:
+    - id: default-df
+      vnf-profile:
+      - id: '1'
+        virtual-link-connectivity:
+        - constituent-cpd-id:
+          - constituent-base-element-id: '1'
+            constituent-cpd-id: vnf-mgmt-ext
+          virtual-link-profile-id: mgmtnet
+        - constituent-cpd-id:
+          - constituent-base-element-id: '1'
+            constituent-cpd-id: vnf-data-ext
+          virtual-link-profile-id: datanet
+        vnfd-id: ha_proxy_charm-vnf
+      - id: '2'
+        virtual-link-connectivity:
+        - constituent-cpd-id:
+          - constituent-base-element-id: '2'
+            constituent-cpd-id: vnf-mgmt-ext
+          virtual-link-profile-id: mgmtnet
+        - constituent-cpd-id:
+          - constituent-base-element-id: '2'
+            constituent-cpd-id: vnf-data-ext
+          virtual-link-profile-id: datanet
+        vnfd-id: ha_proxy_charm-vnf
+    virtual-link-desc:
+    - id: mgmtnet
+      mgmt-network: 'true'
+    - id: datanet
+    vnfd-id:
+    - ha_proxy_charm-vnf
diff --git a/SOL004/charm-packages/ha_proxy_charm_ns/icons/osm.png b/SOL004/charm-packages/ha_proxy_charm_ns/icons/osm.png
new file mode 100644
index 0000000000000000000000000000000000000000..62012d2a2b491bdcd536d62c3c3c863c0d8c1b33
Binary files /dev/null and b/SOL004/charm-packages/ha_proxy_charm_ns/icons/osm.png differ
diff --git a/SOL004/charm-packages/ha_proxy_charm_vnf/charms/simple/actions.yaml b/SOL004/charm-packages/ha_proxy_charm_vnf/charms/simple/actions.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..f9882c47bba960254e0381e5b5ab1c0688de6cf5
--- /dev/null
+++ b/SOL004/charm-packages/ha_proxy_charm_vnf/charms/simple/actions.yaml
@@ -0,0 +1,54 @@
+##
+# Copyright 2020 Canonical Ltd.
+# All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+##
+touch:
+  description: "Touch a file on the VNF."
+  params:
+    filename:
+      description: "The name of the file to touch."
+      type: string
+      default: ""
+  required:
+    - filename
+
+# Standard OSM functions
+start:
+  description: "Start the service on the VNF."
+stop:
+  description: "Stop the service on the VNF."
+restart:
+  description: "Restart the service on the VNF."
+reboot:
+  description: "Reboot the VNF virtual machine."
+upgrade:
+  description: "Upgrade the software on the VNF."
+
+# Required by charms.osm.sshproxy
+run:
+  description: "Run an arbitrary command"
+  params:
+    command:
+      description: "The command to execute."
+      type: string
+      default: ""
+  required:
+    - command
+generate-ssh-key:
+  description: "Generate a new SSH keypair for this unit. This will replace any existing previously generated keypair."
+verify-ssh-credentials:
+  description: "Verify that this unit can authenticate with the server specified by ssh-hostname and ssh-username."
+get-ssh-public-key:
+  description: "Get the public SSH key for this unit."
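Once an NS built from this VNF is instantiated, the `touch` primitive declared above can be triggered as a day-2 action from the OSM client. A minimal sketch, assuming a hypothetical NS instance named `hackfest-ns` and VNF member index `1`:

```bash
# Invoke the touch action on member VNF 1 (instance name and member index are assumptions)
osm ns-action hackfest-ns --vnf_name 1 --action_name touch --params '{filename: /home/ubuntu/touched}'
```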
diff --git a/SOL004/charm-packages/ha_proxy_charm_vnf/charms/simple/config.yaml b/SOL004/charm-packages/ha_proxy_charm_vnf/charms/simple/config.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..93e3cab01c0843418c57e4bc918eb933f2daf934
--- /dev/null
+++ b/SOL004/charm-packages/ha_proxy_charm_vnf/charms/simple/config.yaml
@@ -0,0 +1,41 @@
+##
+# Copyright 2020 Canonical Ltd.
+# All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+##
+options:
+  ssh-hostname:
+    type: string
+    default: ""
+    description: "The hostname or IP address of the machine to connect to."
+  ssh-username:
+    type: string
+    default: ""
+    description: "The username to login as."
+  ssh-password:
+    type: string
+    default: ""
+    description: "The password used to authenticate."
+  ssh-public-key:
+    type: string
+    default: ""
+    description: "The public key of this unit."
+  ssh-key-type:
+    type: string
+    default: "rsa"
+    description: "The type of encryption to use for the SSH key."
+  ssh-key-bits:
+    type: int
+    default: 4096
+    description: "The number of bits to use for the SSH key."
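The options above are normally filled in by OSM through the `config` initial primitive of the VNFD. For debugging a locally deployed copy of the charm, they can also be set by hand with juju; a sketch assuming the application was deployed under the name `simple` and the target VM's IP is 192.0.2.10:

```bash
# Point the proxy charm at the target VM (application name and IP are assumptions)
juju config simple ssh-hostname=192.0.2.10 ssh-username=ubuntu ssh-password=osm4u
```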
diff --git a/SOL004/charm-packages/ha_proxy_charm_vnf/charms/simple/hooks/install b/SOL004/charm-packages/ha_proxy_charm_vnf/charms/simple/hooks/install new file mode 120000 index 0000000000000000000000000000000000000000..25b1f68fa39d58d33c08ca420c3d439d19be0c55 --- /dev/null +++ b/SOL004/charm-packages/ha_proxy_charm_vnf/charms/simple/hooks/install @@ -0,0 +1 @@ +../src/charm.py \ No newline at end of file diff --git a/SOL004/charm-packages/ha_proxy_charm_vnf/charms/simple/hooks/start b/SOL004/charm-packages/ha_proxy_charm_vnf/charms/simple/hooks/start new file mode 120000 index 0000000000000000000000000000000000000000..25b1f68fa39d58d33c08ca420c3d439d19be0c55 --- /dev/null +++ b/SOL004/charm-packages/ha_proxy_charm_vnf/charms/simple/hooks/start @@ -0,0 +1 @@ +../src/charm.py \ No newline at end of file diff --git a/SOL004/charm-packages/ha_proxy_charm_vnf/charms/simple/hooks/upgrade-charm b/SOL004/charm-packages/ha_proxy_charm_vnf/charms/simple/hooks/upgrade-charm new file mode 120000 index 0000000000000000000000000000000000000000..25b1f68fa39d58d33c08ca420c3d439d19be0c55 --- /dev/null +++ b/SOL004/charm-packages/ha_proxy_charm_vnf/charms/simple/hooks/upgrade-charm @@ -0,0 +1 @@ +../src/charm.py \ No newline at end of file diff --git a/SOL004/charm-packages/ha_proxy_charm_vnf/charms/simple/lib/charms/osm b/SOL004/charm-packages/ha_proxy_charm_vnf/charms/simple/lib/charms/osm new file mode 120000 index 0000000000000000000000000000000000000000..48bb95e0a5f6ba4f472c2d761cf84d3f9734fea4 --- /dev/null +++ b/SOL004/charm-packages/ha_proxy_charm_vnf/charms/simple/lib/charms/osm @@ -0,0 +1 @@ +../../mod/charms.osm/charms/osm/ \ No newline at end of file diff --git a/SOL004/charm-packages/ha_proxy_charm_vnf/charms/simple/lib/ops b/SOL004/charm-packages/ha_proxy_charm_vnf/charms/simple/lib/ops new file mode 120000 index 0000000000000000000000000000000000000000..d93419320c2e0d3133a0bc8059e2f259bf5bb213 --- /dev/null +++ b/SOL004/charm-packages/ha_proxy_charm_vnf/charms/simple/lib/ops @@ -0,0 +1 @@ +../mod/operator/ops \ No newline at end of file diff --git a/SOL004/charm-packages/ha_proxy_charm_vnf/charms/simple/metadata.yaml b/SOL004/charm-packages/ha_proxy_charm_vnf/charms/simple/metadata.yaml new file mode 100644 index 0000000000000000000000000000000000000000..d6d0cd70360fb011c2b2a36442d967b104050810 --- /dev/null +++ b/SOL004/charm-packages/ha_proxy_charm_vnf/charms/simple/metadata.yaml @@ -0,0 +1,27 @@ +## +# Copyright 2020 Canonical Ltd. +# All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. 
+## + +name: simple-ha-proxy +summary: A simple example proxy charm +description: | + Simple proxy charm is an example charm used in OSM Hackfests +series: + - xenial + - bionic +peers: + proxypeer: + interface: proxypeer diff --git a/SOL004/charm-packages/ha_proxy_charm_vnf/charms/simple/src/charm.py b/SOL004/charm-packages/ha_proxy_charm_vnf/charms/simple/src/charm.py new file mode 100755 index 0000000000000000000000000000000000000000..e23b12b7bcfddfd49d4efbc1b0ac72d92579778a --- /dev/null +++ b/SOL004/charm-packages/ha_proxy_charm_vnf/charms/simple/src/charm.py @@ -0,0 +1,65 @@ +#!/usr/bin/env python3 +## +# Copyright 2020 Canonical Ltd. +# All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. +## + +import sys + +sys.path.append("lib") + +from charms.osm.sshproxy import SSHProxyCharm +from ops.main import main + +class MySSHProxyCharm(SSHProxyCharm): + + def __init__(self, framework, key): + super().__init__(framework, key) + + # Listen to charm events + self.framework.observe(self.on.config_changed, self.on_config_changed) + self.framework.observe(self.on.install, self.on_install) + self.framework.observe(self.on.start, self.on_start) + + # Listen to the touch action event + self.framework.observe(self.on.touch_action, self.on_touch_action) + + def on_config_changed(self, event): + """Handle changes in configuration""" + super().on_config_changed(event) + + def on_install(self, event): + """Called when the charm is being installed""" + super().on_install(event) + + def on_start(self, event): + """Called when the charm is being started""" + super().on_start(event) + + def on_touch_action(self, event): + """Touch a file.""" + + if self.model.unit.is_leader(): + filename = event.params["filename"] + proxy = self.get_ssh_proxy() + stdout, stderr = proxy.run("touch {}".format(filename)) + event.set_results({"output": stdout}) + else: + event.fail("Unit is not leader") + return + +if __name__ == "__main__": + main(MySSHProxyCharm) + diff --git a/SOL004/charm-packages/ha_proxy_charm_vnf/cloud_init/cloud-config.txt b/SOL004/charm-packages/ha_proxy_charm_vnf/cloud_init/cloud-config.txt new file mode 100755 index 0000000000000000000000000000000000000000..36c8d1bf2cdebbc4e50d1e8348003f64f419cd0b --- /dev/null +++ b/SOL004/charm-packages/ha_proxy_charm_vnf/cloud_init/cloud-config.txt @@ -0,0 +1,12 @@ +#cloud-config +password: osm4u +chpasswd: { expire: False } +ssh_pwauth: True + +write_files: +- content: | + # My new helloworld file + + owner: root:root + permissions: '0644' + path: /root/helloworld.txt diff --git a/SOL004/charm-packages/ha_proxy_charm_vnf/ha_proxy_charm_vnfd.yaml b/SOL004/charm-packages/ha_proxy_charm_vnf/ha_proxy_charm_vnfd.yaml new file mode 100644 index 0000000000000000000000000000000000000000..2b6ebd280767b74ac002568c9314005da9c13493 --- /dev/null +++ b/SOL004/charm-packages/ha_proxy_charm_vnf/ha_proxy_charm_vnfd.yaml @@ -0,0 +1,97 @@ +vnfd: + id: ha_proxy_charm-vnf + mgmt-cp: vnf-mgmt-ext + product-name: ha_proxy_charm-vnf + description: A VNF consisting of 1 
VDU connected to two external VLs, one for
+    data and another one for management
+  version: 1.0
+  df:
+  - id: default-df
+    instantiation-level:
+    - id: default-instantiation-level
+      vdu-level:
+      - number-of-instances: 1
+        vdu-id: mgmtVM
+    vdu-profile:
+    - id: mgmtVM
+      min-number-of-instances: 1
+    lcm-operations-configuration:
+      operate-vnf-op-config:
+        day1-2:
+        - id: ha_proxy_charm-vnf
+          execution-environment-list:
+          - id: simple-ee
+            juju:
+              charm: simple
+          config-access:
+            ssh-access:
+              default-user: ubuntu
+              required: true
+          config-primitive:
+          - name: touch
+            execution-environment-ref: simple-ee
+            parameter:
+            - data-type: STRING
+              default-value: /home/ubuntu/touched
+              name: filename
+          initial-config-primitive:
+          - name: config
+            execution-environment-ref: simple-ee
+            parameter:
+            - name: ssh-hostname
+              value: <rw_mgmt_ip>
+            - name: ssh-username
+              value: ubuntu
+            - name: ssh-password
+              value: osm4u
+            seq: 1
+          - name: touch
+            execution-environment-ref: simple-ee
+            parameter:
+            - data-type: STRING
+              name: filename
+              value: /home/ubuntu/first-touch
+            seq: 2
+  ext-cpd:
+  - id: vnf-mgmt-ext
+    int-cpd:
+      cpd: mgmtVM-eth0-int
+      vdu-id: mgmtVM
+  - id: vnf-data-ext
+    int-cpd:
+      cpd: dataVM-xe0-int
+      vdu-id: mgmtVM
+  sw-image-desc:
+  - id: ubuntu18.04
+    image: ubuntu18.04
+    name: ubuntu18.04
+  vdu:
+  - cloud-init-file: cloud-config.txt
+    id: mgmtVM
+    int-cpd:
+    - id: mgmtVM-eth0-int
+      virtual-network-interface-requirement:
+      - name: mgmtVM-eth0
+        position: 1
+        virtual-interface:
+          type: PARAVIRT
+    - id: dataVM-xe0-int
+      virtual-network-interface-requirement:
+      - name: dataVM-xe0
+        position: 2
+        virtual-interface:
+          type: PARAVIRT
+    name: mgmtVM
+    sw-image-desc: ubuntu18.04
+    virtual-compute-desc: mgmtVM-compute
+    virtual-storage-desc:
+    - mgmtVM-storage
+  virtual-compute-desc:
+  - id: mgmtVM-compute
+    virtual-cpu:
+      num-virtual-cpu: "1"
+    virtual-memory:
+      size: "1.0"
+  virtual-storage-desc:
+  - id: mgmtVM-storage
+    size-of-storage: "10"
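With the NSD and VNFD of this package pair onboarded, the service can be instantiated from the OSM client; a sketch assuming a VIM account named `myVIM`:

```bash
# Instantiate the NS (instance name and VIM account are assumptions)
osm ns-create --ns_name ha-proxy-test --nsd_name ha_proxy_charm-ns --vim_account myVIM --ssh_keys ~/.ssh/id_rsa.pub
```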
diff --git a/SOL004/charm-packages/ha_proxy_charm_vnf/icons/osm.png b/SOL004/charm-packages/ha_proxy_charm_vnf/icons/osm.png
new file mode 100644
index 0000000000000000000000000000000000000000..62012d2a2b491bdcd536d62c3c3c863c0d8c1b33
Binary files /dev/null and b/SOL004/charm-packages/ha_proxy_charm_vnf/icons/osm.png differ
diff --git a/SOL004/charm-packages/k8s_proxy_charm_ns/icons/osm.png b/SOL004/charm-packages/k8s_proxy_charm_ns/icons/osm.png
new file mode 100644
index 0000000000000000000000000000000000000000..62012d2a2b491bdcd536d62c3c3c863c0d8c1b33
Binary files /dev/null and b/SOL004/charm-packages/k8s_proxy_charm_ns/icons/osm.png differ
diff --git a/SOL004/charm-packages/k8s_proxy_charm_ns/k8s_proxy_charm_nsd.yaml b/SOL004/charm-packages/k8s_proxy_charm_ns/k8s_proxy_charm_nsd.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..beae3243b71a197f06f18c76c3fb3d277842df19
--- /dev/null
+++ b/SOL004/charm-packages/k8s_proxy_charm_ns/k8s_proxy_charm_nsd.yaml
@@ -0,0 +1,37 @@
+nsd:
+  nsd:
+  - description: NS with 2 VNFs with cloudinit connected by datanet and mgmtnet VLs
+    df:
+    - id: default-df
+      vnf-profile:
+      - id: '1'
+        virtual-link-connectivity:
+        - constituent-cpd-id:
+          - constituent-base-element-id: '1'
+            constituent-cpd-id: vnf-mgmt-ext
+          virtual-link-profile-id: mgmtnet
+        - constituent-cpd-id:
+          - constituent-base-element-id: '1'
+            constituent-cpd-id: vnf-data-ext
+          virtual-link-profile-id: datanet
+        vnfd-id: k8s_proxy_charm-vnf
+      - id: '2'
+        virtual-link-connectivity:
+        - constituent-cpd-id:
+          - constituent-base-element-id: '2'
+            constituent-cpd-id: vnf-mgmt-ext
+          virtual-link-profile-id: mgmtnet
+        - constituent-cpd-id:
+          - constituent-base-element-id: '2'
+            constituent-cpd-id: vnf-data-ext
+          virtual-link-profile-id: datanet
+        vnfd-id: k8s_proxy_charm-vnf
+    id: k8s_proxy_charm-ns
+    name: k8s_proxy_charm-ns
+    version: '1.0'
+    virtual-link-desc:
+    - id: mgmtnet
+      mgmt-network: 'true'
+    - id: datanet
+    vnfd-id:
+    - k8s_proxy_charm-vnf
diff --git a/SOL004/charm-packages/k8s_proxy_charm_vnf/charms/simple/actions.yaml b/SOL004/charm-packages/k8s_proxy_charm_vnf/charms/simple/actions.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..f9882c47bba960254e0381e5b5ab1c0688de6cf5
--- /dev/null
+++ b/SOL004/charm-packages/k8s_proxy_charm_vnf/charms/simple/actions.yaml
@@ -0,0 +1,54 @@
+##
+# Copyright 2020 Canonical Ltd.
+# All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+##
+touch:
+  description: "Touch a file on the VNF."
+  params:
+    filename:
+      description: "The name of the file to touch."
+      type: string
+      default: ""
+  required:
+    - filename
+
+# Standard OSM functions
+start:
+  description: "Start the service on the VNF."
+stop:
+  description: "Stop the service on the VNF."
+restart:
+  description: "Restart the service on the VNF."
+reboot:
+  description: "Reboot the VNF virtual machine."
+upgrade:
+  description: "Upgrade the software on the VNF."
+
+# Required by charms.osm.sshproxy
+run:
+  description: "Run an arbitrary command"
+  params:
+    command:
+      description: "The command to execute."
+      type: string
+      default: ""
+  required:
+    - command
+generate-ssh-key:
+  description: "Generate a new SSH keypair for this unit. This will replace any existing previously generated keypair."
+verify-ssh-credentials:
+  description: "Verify that this unit can authenticate with the server specified by ssh-hostname and ssh-username."
+get-ssh-public-key:
+  description: "Get the public SSH key for this unit."
diff --git a/SOL004/charm-packages/k8s_proxy_charm_vnf/charms/simple/config.yaml b/SOL004/charm-packages/k8s_proxy_charm_vnf/charms/simple/config.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..93e3cab01c0843418c57e4bc918eb933f2daf934
--- /dev/null
+++ b/SOL004/charm-packages/k8s_proxy_charm_vnf/charms/simple/config.yaml
@@ -0,0 +1,41 @@
+##
+# Copyright 2020 Canonical Ltd.
+# All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+##
+options:
+  ssh-hostname:
+    type: string
+    default: ""
+    description: "The hostname or IP address of the machine to connect to."
+  ssh-username:
+    type: string
+    default: ""
+    description: "The username to login as."
+  ssh-password:
+    type: string
+    default: ""
+    description: "The password used to authenticate."
+  ssh-public-key:
+    type: string
+    default: ""
+    description: "The public key of this unit."
+  ssh-key-type:
+    type: string
+    default: "rsa"
+    description: "The type of encryption to use for the SSH key."
+  ssh-key-bits:
+    type: int
+    default: 4096
+    description: "The number of bits to use for the SSH key."
diff --git a/SOL004/charm-packages/k8s_proxy_charm_vnf/charms/simple/hooks/install b/SOL004/charm-packages/k8s_proxy_charm_vnf/charms/simple/hooks/install
new file mode 120000
index 0000000000000000000000000000000000000000..25b1f68fa39d58d33c08ca420c3d439d19be0c55
--- /dev/null
+++ b/SOL004/charm-packages/k8s_proxy_charm_vnf/charms/simple/hooks/install
@@ -0,0 +1 @@
+../src/charm.py
\ No newline at end of file
diff --git a/SOL004/charm-packages/k8s_proxy_charm_vnf/charms/simple/hooks/start b/SOL004/charm-packages/k8s_proxy_charm_vnf/charms/simple/hooks/start
new file mode 120000
index 0000000000000000000000000000000000000000..25b1f68fa39d58d33c08ca420c3d439d19be0c55
--- /dev/null
+++ b/SOL004/charm-packages/k8s_proxy_charm_vnf/charms/simple/hooks/start
@@ -0,0 +1 @@
+../src/charm.py
\ No newline at end of file
diff --git a/SOL004/charm-packages/k8s_proxy_charm_vnf/charms/simple/hooks/upgrade-charm b/SOL004/charm-packages/k8s_proxy_charm_vnf/charms/simple/hooks/upgrade-charm
new file mode 120000
index 0000000000000000000000000000000000000000..25b1f68fa39d58d33c08ca420c3d439d19be0c55
--- /dev/null
+++ b/SOL004/charm-packages/k8s_proxy_charm_vnf/charms/simple/hooks/upgrade-charm
@@ -0,0 +1 @@
+../src/charm.py
\ No newline at end of file
diff --git a/SOL004/charm-packages/k8s_proxy_charm_vnf/charms/simple/lib/charms b/SOL004/charm-packages/k8s_proxy_charm_vnf/charms/simple/lib/charms
new file mode 120000
index 0000000000000000000000000000000000000000..bbb3079ba31017d6958b659dc899d072b1c8e724
--- /dev/null
+++ b/SOL004/charm-packages/k8s_proxy_charm_vnf/charms/simple/lib/charms
@@ -0,0 +1 @@
+../mod/charms.osm/charms
\ No newline at end of file
diff --git a/SOL004/charm-packages/k8s_proxy_charm_vnf/charms/simple/lib/ops b/SOL004/charm-packages/k8s_proxy_charm_vnf/charms/simple/lib/ops
new file mode 120000
index 0000000000000000000000000000000000000000..d93419320c2e0d3133a0bc8059e2f259bf5bb213
--- /dev/null
+++ b/SOL004/charm-packages/k8s_proxy_charm_vnf/charms/simple/lib/ops
@@ -0,0 +1 @@
+../mod/operator/ops
\ No newline at end of file
diff --git a/SOL004/charm-packages/k8s_proxy_charm_vnf/charms/simple/metadata.yaml b/SOL004/charm-packages/k8s_proxy_charm_vnf/charms/simple/metadata.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..4b5b352850eb98a4e2028942957cebc622a2395e
--- /dev/null
+++ b/SOL004/charm-packages/k8s_proxy_charm_vnf/charms/simple/metadata.yaml
@@ -0,0 +1,28 @@
+##
+# Copyright 2020 Canonical Ltd.
+# All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License.
You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. +## + +name: simple-k8s-proxy +summary: A simple example proxy charm +description: | + Simple proxy charm is an example charm used in OSM Hackfests +series: + - kubernetes +peers: + proxypeer: + interface: proxypeer +deployment: + mode: operator \ No newline at end of file diff --git a/SOL004/charm-packages/k8s_proxy_charm_vnf/charms/simple/src/charm.py b/SOL004/charm-packages/k8s_proxy_charm_vnf/charms/simple/src/charm.py new file mode 100755 index 0000000000000000000000000000000000000000..e23b12b7bcfddfd49d4efbc1b0ac72d92579778a --- /dev/null +++ b/SOL004/charm-packages/k8s_proxy_charm_vnf/charms/simple/src/charm.py @@ -0,0 +1,65 @@ +#!/usr/bin/env python3 +## +# Copyright 2020 Canonical Ltd. +# All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. +## + +import sys + +sys.path.append("lib") + +from charms.osm.sshproxy import SSHProxyCharm +from ops.main import main + +class MySSHProxyCharm(SSHProxyCharm): + + def __init__(self, framework, key): + super().__init__(framework, key) + + # Listen to charm events + self.framework.observe(self.on.config_changed, self.on_config_changed) + self.framework.observe(self.on.install, self.on_install) + self.framework.observe(self.on.start, self.on_start) + + # Listen to the touch action event + self.framework.observe(self.on.touch_action, self.on_touch_action) + + def on_config_changed(self, event): + """Handle changes in configuration""" + super().on_config_changed(event) + + def on_install(self, event): + """Called when the charm is being installed""" + super().on_install(event) + + def on_start(self, event): + """Called when the charm is being started""" + super().on_start(event) + + def on_touch_action(self, event): + """Touch a file.""" + + if self.model.unit.is_leader(): + filename = event.params["filename"] + proxy = self.get_ssh_proxy() + stdout, stderr = proxy.run("touch {}".format(filename)) + event.set_results({"output": stdout}) + else: + event.fail("Unit is not leader") + return + +if __name__ == "__main__": + main(MySSHProxyCharm) + diff --git a/SOL004/charm-packages/k8s_proxy_charm_vnf/cloud_init/cloud-config.txt b/SOL004/charm-packages/k8s_proxy_charm_vnf/cloud_init/cloud-config.txt new file mode 100755 index 0000000000000000000000000000000000000000..36c8d1bf2cdebbc4e50d1e8348003f64f419cd0b --- /dev/null +++ b/SOL004/charm-packages/k8s_proxy_charm_vnf/cloud_init/cloud-config.txt @@ -0,0 +1,12 @@ +#cloud-config +password: osm4u +chpasswd: { expire: False } +ssh_pwauth: True + +write_files: +- content: | + # My new helloworld file + + owner: root:root + permissions: '0644' + path: /root/helloworld.txt diff --git 
a/SOL004/charm-packages/k8s_proxy_charm_vnf/icons/osm.png b/SOL004/charm-packages/k8s_proxy_charm_vnf/icons/osm.png
new file mode 100644
index 0000000000000000000000000000000000000000..62012d2a2b491bdcd536d62c3c3c863c0d8c1b33
Binary files /dev/null and b/SOL004/charm-packages/k8s_proxy_charm_vnf/icons/osm.png differ
diff --git a/SOL004/charm-packages/k8s_proxy_charm_vnf/k8s_proxy_charm_vnfd.yaml b/SOL004/charm-packages/k8s_proxy_charm_vnf/k8s_proxy_charm_vnfd.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..30f617e8ade1d467db87a2120bd15f31c259cf30
--- /dev/null
+++ b/SOL004/charm-packages/k8s_proxy_charm_vnf/k8s_proxy_charm_vnfd.yaml
@@ -0,0 +1,98 @@
+vnfd:
+  description: A VNF consisting of 1 VDU connected to two external VLs, one for
+    data and another one for management
+  df:
+  - id: default-df
+    instantiation-level:
+    - id: default-instantiation-level
+      vdu-level:
+      - number-of-instances: 1
+        vdu-id: mgmtVM
+    vdu-profile:
+    - id: mgmtVM
+      min-number-of-instances: 1
+    lcm-operations-configuration:
+      operate-vnf-op-config:
+        day1-2:
+        - config-primitive:
+          - name: touch
+            execution-environment-ref: simple-ee
+            parameter:
+            - data-type: STRING
+              default-value: /home/ubuntu/touched
+              name: filename
+          config-access:
+            ssh-access:
+              default-user: ubuntu
+              required: true
+          execution-environment-list:
+          - id: simple-ee
+            juju:
+              charm: simple
+              cloud: k8s
+          id: k8s_proxy_charm-vnf
+          initial-config-primitive:
+          - name: config
+            execution-environment-ref: simple-ee
+            parameter:
+            - name: ssh-hostname
+              value: <rw_mgmt_ip>
+            - name: ssh-username
+              value: ubuntu
+            - name: ssh-password
+              value: osm4u
+            seq: 1
+          - name: touch
+            execution-environment-ref: simple-ee
+            parameter:
+            - data-type: STRING
+              name: filename
+              value: /home/ubuntu/first-touch
+            seq: 2
+  ext-cpd:
+  - id: vnf-mgmt-ext
+    int-cpd:
+      cpd: mgmtVM-eth0-int
+      vdu-id: mgmtVM
+  - id: vnf-data-ext
+    int-cpd:
+      cpd: dataVM-xe0-int
+      vdu-id: mgmtVM
+  id: k8s_proxy_charm-vnf
+  mgmt-cp: vnf-mgmt-ext
+  product-name: k8s_proxy_charm-vnf
+  sw-image-desc:
+  - id: ubuntu18.04
+    image: ubuntu18.04
+    name: ubuntu18.04
+  vdu:
+  - cloud-init-file: cloud-config.txt
+    id: mgmtVM
+    int-cpd:
+    - id: mgmtVM-eth0-int
+      virtual-network-interface-requirement:
+      - name: mgmtVM-eth0
+        position: 1
+        virtual-interface:
+          type: PARAVIRT
+    - id: dataVM-xe0-int
+      virtual-network-interface-requirement:
+      - name: dataVM-xe0
+        position: 2
+        virtual-interface:
+          type: PARAVIRT
+    name: mgmtVM
+    sw-image-desc: ubuntu18.04
+    virtual-compute-desc: mgmtVM-compute
+    virtual-storage-desc:
+    - mgmtVM-storage
+  version: 1.0
+  virtual-compute-desc:
+  - id: mgmtVM-compute
+    virtual-cpu:
+      num-virtual-cpu: 1
+    virtual-memory:
+      size: 1.0
+  virtual-storage-desc:
+  - id: mgmtVM-storage
+    size-of-storage: 10
diff --git a/SOL004/charm-packages/native_charm_ns/icons/osm.png b/SOL004/charm-packages/native_charm_ns/icons/osm.png
new file mode 100644
index 0000000000000000000000000000000000000000..62012d2a2b491bdcd536d62c3c3c863c0d8c1b33
Binary files /dev/null and b/SOL004/charm-packages/native_charm_ns/icons/osm.png differ
diff --git a/SOL004/charm-packages/native_charm_ns/native_charm_nsd.yaml b/SOL004/charm-packages/native_charm_ns/native_charm_nsd.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..43633090c6a3e9289bef2fdb161aec40e3fbd0e6
--- /dev/null
+++ b/SOL004/charm-packages/native_charm_ns/native_charm_nsd.yaml
@@ -0,0 +1,37 @@
+nsd:
+  nsd:
+  - description: NS with 2 VNFs with cloudinit connected by datanet and mgmtnet VLs
+    df:
+    - id:
default-df + vnf-profile: + - id: '1' + virtual-link-connectivity: + - constituent-cpd-id: + - constituent-base-element-id: '1' + constituent-cpd-id: vnf-mgmt-ext + virtual-link-profile-id: mgmtnet + - constituent-cpd-id: + - constituent-base-element-id: '1' + constituent-cpd-id: vnf-data-ext + virtual-link-profile-id: datanet + vnfd-id: native_charm-vnf + - id: '2' + virtual-link-connectivity: + - constituent-cpd-id: + - constituent-base-element-id: '2' + constituent-cpd-id: vnf-mgmt-ext + virtual-link-profile-id: mgmtnet + - constituent-cpd-id: + - constituent-base-element-id: '2' + constituent-cpd-id: vnf-data-ext + virtual-link-profile-id: datanet + vnfd-id: native_charm-vnf + id: native_charm-ns + name: native_charm-ns + version: '1.0' + virtual-link-desc: + - id: mgmtnet + mgmt-network: 'true' + - id: datanet + vnfd-id: + - native_charm-vnf diff --git a/SOL004/charm-packages/native_charm_vnf/charms/simple/actions.yaml b/SOL004/charm-packages/native_charm_vnf/charms/simple/actions.yaml new file mode 100644 index 0000000000000000000000000000000000000000..53a706b4eba7347764c4711b7077146e72241de4 --- /dev/null +++ b/SOL004/charm-packages/native_charm_vnf/charms/simple/actions.yaml @@ -0,0 +1,25 @@ +## +# Copyright 2020 Canonical Ltd. +# All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. +## +touch: + description: "Touch a file on the VNF." + params: + filename: + description: "The name of the file to touch." + type: string + default: "" + required: + - filename diff --git a/SOL004/charm-packages/native_charm_vnf/charms/simple/config.yaml b/SOL004/charm-packages/native_charm_vnf/charms/simple/config.yaml new file mode 100644 index 0000000000000000000000000000000000000000..2be62318c6ce27156e4af585494b36d0c2b802f8 --- /dev/null +++ b/SOL004/charm-packages/native_charm_vnf/charms/simple/config.yaml @@ -0,0 +1,17 @@ +## +# Copyright 2020 Canonical Ltd. +# All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. 
+## +options: {} \ No newline at end of file diff --git a/SOL004/charm-packages/native_charm_vnf/charms/simple/hooks/install b/SOL004/charm-packages/native_charm_vnf/charms/simple/hooks/install new file mode 120000 index 0000000000000000000000000000000000000000..25b1f68fa39d58d33c08ca420c3d439d19be0c55 --- /dev/null +++ b/SOL004/charm-packages/native_charm_vnf/charms/simple/hooks/install @@ -0,0 +1 @@ +../src/charm.py \ No newline at end of file diff --git a/SOL004/charm-packages/native_charm_vnf/charms/simple/hooks/start b/SOL004/charm-packages/native_charm_vnf/charms/simple/hooks/start new file mode 120000 index 0000000000000000000000000000000000000000..25b1f68fa39d58d33c08ca420c3d439d19be0c55 --- /dev/null +++ b/SOL004/charm-packages/native_charm_vnf/charms/simple/hooks/start @@ -0,0 +1 @@ +../src/charm.py \ No newline at end of file diff --git a/SOL004/charm-packages/native_charm_vnf/charms/simple/hooks/upgrade-charm b/SOL004/charm-packages/native_charm_vnf/charms/simple/hooks/upgrade-charm new file mode 120000 index 0000000000000000000000000000000000000000..25b1f68fa39d58d33c08ca420c3d439d19be0c55 --- /dev/null +++ b/SOL004/charm-packages/native_charm_vnf/charms/simple/hooks/upgrade-charm @@ -0,0 +1 @@ +../src/charm.py \ No newline at end of file diff --git a/SOL004/charm-packages/native_charm_vnf/charms/simple/lib/ops b/SOL004/charm-packages/native_charm_vnf/charms/simple/lib/ops new file mode 120000 index 0000000000000000000000000000000000000000..d93419320c2e0d3133a0bc8059e2f259bf5bb213 --- /dev/null +++ b/SOL004/charm-packages/native_charm_vnf/charms/simple/lib/ops @@ -0,0 +1 @@ +../mod/operator/ops \ No newline at end of file diff --git a/SOL004/charm-packages/native_charm_vnf/charms/simple/metadata.yaml b/SOL004/charm-packages/native_charm_vnf/charms/simple/metadata.yaml new file mode 100644 index 0000000000000000000000000000000000000000..5e832fb61f0c21dd8c17f06822679164f855e0c8 --- /dev/null +++ b/SOL004/charm-packages/native_charm_vnf/charms/simple/metadata.yaml @@ -0,0 +1,25 @@ +## +# Copyright 2020 Canonical Ltd. +# All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. +## + +name: simple-native +summary: A simple native charm +description: | + Simple native charm +series: + - bionic + - xenial + - focal \ No newline at end of file diff --git a/SOL004/charm-packages/native_charm_vnf/charms/simple/src/charm.py b/SOL004/charm-packages/native_charm_vnf/charms/simple/src/charm.py new file mode 100755 index 0000000000000000000000000000000000000000..9ef7d07286257338ea1ef940fb34f67a9052f998 --- /dev/null +++ b/SOL004/charm-packages/native_charm_vnf/charms/simple/src/charm.py @@ -0,0 +1,67 @@ +#!/usr/bin/env python3 +## +# Copyright 2020 Canonical Ltd. +# All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. 
You may obtain
+# a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+##
+
+import sys
+import subprocess
+
+sys.path.append("lib")
+
+from ops.charm import CharmBase
+from ops.main import main
+from ops.model import ActiveStatus
+
+class MyNativeCharm(CharmBase):
+
+    def __init__(self, framework, key):
+        super().__init__(framework, key)
+
+        # Listen to charm events
+        self.framework.observe(self.on.config_changed, self.on_config_changed)
+        self.framework.observe(self.on.install, self.on_install)
+        self.framework.observe(self.on.start, self.on_start)
+
+        # Listen to the touch action event
+        self.framework.observe(self.on.touch_action, self.on_touch_action)
+
+    def on_config_changed(self, event):
+        """Handle changes in configuration"""
+        self.model.unit.status = ActiveStatus()
+
+    def on_install(self, event):
+        """Called when the charm is being installed"""
+        self.model.unit.status = ActiveStatus()
+
+    def on_start(self, event):
+        """Called when the charm is being started"""
+        self.model.unit.status = ActiveStatus()
+
+    def on_touch_action(self, event):
+        """Touch a file."""
+
+        filename = event.params["filename"]
+        try:
+            subprocess.run(["touch", filename], check=True)
+            event.set_results({"created": True, "filename": filename})
+        except subprocess.CalledProcessError as e:
+            event.fail("Action failed: {}".format(e))
+        self.model.unit.status = ActiveStatus()
+
+
+if __name__ == "__main__":
+    main(MyNativeCharm)
+
diff --git a/SOL004/charm-packages/native_charm_vnf/cloud_init/cloud-config.txt b/SOL004/charm-packages/native_charm_vnf/cloud_init/cloud-config.txt
new file mode 100755
index 0000000000000000000000000000000000000000..36c8d1bf2cdebbc4e50d1e8348003f64f419cd0b
--- /dev/null
+++ b/SOL004/charm-packages/native_charm_vnf/cloud_init/cloud-config.txt
@@ -0,0 +1,12 @@
+#cloud-config
+password: osm4u
+chpasswd: { expire: False }
+ssh_pwauth: True
+
+write_files:
+- content: |
+    # My new helloworld file
+
+  owner: root:root
+  permissions: '0644'
+  path: /root/helloworld.txt
diff --git a/SOL004/charm-packages/native_charm_vnf/icons/osm.png b/SOL004/charm-packages/native_charm_vnf/icons/osm.png
new file mode 100644
index 0000000000000000000000000000000000000000..62012d2a2b491bdcd536d62c3c3c863c0d8c1b33
Binary files /dev/null and b/SOL004/charm-packages/native_charm_vnf/icons/osm.png differ
diff --git a/SOL004/charm-packages/native_charm_vnf/native_charm_vnfd.yaml b/SOL004/charm-packages/native_charm_vnf/native_charm_vnfd.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..e037d32a5c2f53f533917b2b8352c5b6a200e6a7
--- /dev/null
+++ b/SOL004/charm-packages/native_charm_vnf/native_charm_vnfd.yaml
@@ -0,0 +1,88 @@
+vnfd:
+  description: A VNF consisting of 1 VDU connected to two external VLs, one for
+    data and another one for management
+  df:
+  - id: default-df
+    instantiation-level:
+    - id: default-instantiation-level
+      vdu-level:
+      - number-of-instances: 1
+        vdu-id: mgmtVM
+    vdu-profile:
+    - id: mgmtVM
+      min-number-of-instances: 1
+    lcm-operations-configuration:
+      operate-vnf-op-config:
+        day1-2:
+        - config-access:
+            ssh-access:
+              default-user: ubuntu
+              required: true
+          config-primitive:
+          - name: touch
+            execution-environment-ref: simple-ee
+            parameter:
+            - data-type: STRING
+              default-value: /home/ubuntu/touched
+              name: filename
+          id: mgmtVM
+          execution-environment-list:
+          - id: simple-ee
+            juju:
+              charm: simple
+              proxy: false
+          initial-config-primitive:
+          - name: touch
+            execution-environment-ref: simple-ee
+            parameter:
+            - data-type: STRING
+              name: filename
+              value: /home/ubuntu/first-touch
+            seq: 1
+  ext-cpd:
+  - id: vnf-mgmt-ext
+    int-cpd:
+      cpd: mgmtVM-eth0-int
+      vdu-id: mgmtVM
+  - id: vnf-data-ext
+    int-cpd:
+      cpd: dataVM-xe0-int
+      vdu-id: mgmtVM
+  id: native_charm-vnf
+  mgmt-cp: vnf-mgmt-ext
+  product-name: native_charm-vnf
+  sw-image-desc:
+  - id: ubuntu18.04
+    image: ubuntu18.04
+    name: ubuntu18.04
+  vdu:
+  - cloud-init-file: cloud-config.txt
+    id: mgmtVM
+    int-cpd:
+    - id: mgmtVM-eth0-int
+      virtual-network-interface-requirement:
+      - name: mgmtVM-eth0
+        position: 1
+        virtual-interface:
+          type: PARAVIRT
+    - id: dataVM-xe0-int
+      virtual-network-interface-requirement:
+      - name: dataVM-xe0
+        position: 2
+        virtual-interface:
+          type: PARAVIRT
+    name: mgmtVM
+    sw-image-desc: ubuntu18.04
+    virtual-compute-desc: mgmtVM-compute
+    virtual-storage-desc:
+    - mgmtVM-storage
+  version: 1.0
+  virtual-compute-desc:
+  - id: mgmtVM-compute
+    virtual-cpu:
+      num-virtual-cpu: 1
+    virtual-memory:
+      size: 1.0
+  virtual-storage-desc:
+  - id: mgmtVM-storage
+    size-of-storage: 10
diff --git a/SOL004/charm-packages/native_k8s_charm_ns/native_k8s_charm_nsd.yaml b/SOL004/charm-packages/native_k8s_charm_ns/native_k8s_charm_nsd.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..96897b9434bcd9a54bf16d5207f9ff45010fee0a
--- /dev/null
+++ b/SOL004/charm-packages/native_k8s_charm_ns/native_k8s_charm_nsd.yaml
@@ -0,0 +1,21 @@
+nsd:
+  nsd:
+  - description: NS with 1 KDU connected to the mgmtnet VL
+    df:
+    - id: default-df
+      vnf-profile:
+      - id: native_k8s_charm-vnf
+        virtual-link-connectivity:
+        - constituent-cpd-id:
+          - constituent-base-element-id: native_k8s_charm-vnf
+            constituent-cpd-id: mgmt-ext
+          virtual-link-profile-id: mgmtnet
+        vnfd-id: native_k8s_charm-vnf
+    id: native_k8s_charm-ns
+    name: native_k8s_charm-ns
+    version: '1.0'
+    virtual-link-desc:
+    - id: mgmtnet
+      mgmt-network: true
+    vnfd-id:
+    - native_k8s_charm-vnf
diff --git a/SOL004/charm-packages/native_k8s_charm_vnf/charms/nginx-k8s/actions.yaml b/SOL004/charm-packages/native_k8s_charm_vnf/charms/nginx-k8s/actions.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..191e5552be1c28345ee1edb03945baa249ac5539
--- /dev/null
+++ b/SOL004/charm-packages/native_k8s_charm_vnf/charms/nginx-k8s/actions.yaml
@@ -0,0 +1,8 @@
+changecontent:
+  description: "Change content of default html"
+  params:
+    customtitle:
+      description: "New Title"
+      default: ""
+  required:
+    - customtitle
diff --git a/SOL004/charm-packages/native_k8s_charm_vnf/charms/nginx-k8s/actions/changecontent b/SOL004/charm-packages/native_k8s_charm_vnf/charms/nginx-k8s/actions/changecontent
new file mode 100755
index 0000000000000000000000000000000000000000..030b8ecb9782014d271512022a7b59f798f0ae90
--- /dev/null
+++ b/SOL004/charm-packages/native_k8s_charm_vnf/charms/nginx-k8s/actions/changecontent
@@ -0,0 +1,31 @@
+#!/bin/bash
+
+CUSTOM_TITLE=`action-get customtitle`
+
+cat > /usr/share/nginx/html/index.html <<EOF
+<html>
+<head>
+<title>$CUSTOM_TITLE</title>
+</head>
+<body>
+<h1>$CUSTOM_TITLE</h1>
+<p>If you see this page, the nginx web server is successfully installed and
+working. Further configuration is required.</p>
+
+<p>For online documentation and support please refer to
+<a href="http://nginx.org/">nginx.org</a>.<br/>
+Commercial support is available at
+<a href="http://nginx.com/">nginx.com</a>.</p>
+
+<p><em>Thank you for using nginx.</em></p>
+</body>
+</html>

+ + +EOF diff --git a/SOL004/charm-packages/native_k8s_charm_vnf/charms/nginx-k8s/config.yaml b/SOL004/charm-packages/native_k8s_charm_vnf/charms/nginx-k8s/config.yaml new file mode 100644 index 0000000000000000000000000000000000000000..406861de0424ab9c8c75a80dda11040f4439815e --- /dev/null +++ b/SOL004/charm-packages/native_k8s_charm_vnf/charms/nginx-k8s/config.yaml @@ -0,0 +1,9 @@ +options: + port: + description: Zookeeper client port + type: int + default: 80 + image: + description: Docker image name + type: string + default: nginx diff --git a/SOL004/charm-packages/native_k8s_charm_vnf/charms/nginx-k8s/hooks/start b/SOL004/charm-packages/native_k8s_charm_vnf/charms/nginx-k8s/hooks/start new file mode 100755 index 0000000000000000000000000000000000000000..2afc6cb6d058d843ffa9ec9a7a88d63b2fba9c19 --- /dev/null +++ b/SOL004/charm-packages/native_k8s_charm_vnf/charms/nginx-k8s/hooks/start @@ -0,0 +1,86 @@ +#!/usr/bin/env python3 + +import sys +import logging + +sys.path.append("lib") + +from ops.charm import CharmBase +from ops.framework import StoredState +from ops.main import main +from ops.model import ( + ActiveStatus, + MaintenanceStatus, +) + +logger = logging.getLogger(__name__) + + +class NginxK8sCharm(CharmBase): + state = StoredState() + + def __init__(self, framework, key): + super().__init__(framework, key) + self.state.set_default(spec=None) + + # Observe Charm related events + self.framework.observe(self.on.config_changed, self.on_config_changed) + self.framework.observe(self.on.start, self.on_start) + self.framework.observe(self.on.upgrade_charm, self.on_upgrade_charm) + + def _apply_spec(self): + # Only apply the spec if this unit is a leader. + if not self.framework.model.unit.is_leader(): + return + new_spec = self.make_pod_spec() + if new_spec == self.state.spec: + return + self.framework.model.pod.set_spec(new_spec) + self.state.spec = new_spec + + def make_pod_spec(self): + config = self.framework.model.config + + ports = [ + { + "name": "port", + "containerPort": config["port"], + "protocol": "TCP", + } + ] + + spec = { + "version": 2, + "containers": [ + { + "name": self.framework.model.app.name, + "image": "{}".format(config["image"]), + "ports": ports, + } + ], + } + + return spec + + def on_config_changed(self, event): + """Handle changes in configuration""" + unit = self.model.unit + unit.status = MaintenanceStatus("Applying new pod spec") + self._apply_spec() + unit.status = ActiveStatus("Ready") + + def on_start(self, event): + """Called when the charm is being installed""" + unit = self.model.unit + unit.status = MaintenanceStatus("Applying pod spec") + self._apply_spec() + unit.status = ActiveStatus("Ready") + + def on_upgrade_charm(self, event): + """Upgrade the charm.""" + unit = self.model.unit + unit.status = MaintenanceStatus("Upgrading charm") + self.on_start(event) + +if __name__ == "__main__": + main(NginxK8sCharm) diff --git a/SOL004/charm-packages/native_k8s_charm_vnf/charms/nginx-k8s/lib/ops/__init__.py b/SOL004/charm-packages/native_k8s_charm_vnf/charms/nginx-k8s/lib/ops/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..e3207f5650949581c4610d830e543430d4776db7 --- /dev/null +++ b/SOL004/charm-packages/native_k8s_charm_vnf/charms/nginx-k8s/lib/ops/__init__.py @@ -0,0 +1,20 @@ +# Copyright 2020 Canonical Ltd. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
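For reference, with the defaults from config.yaml above (port: 80, image: nginx), make_pod_spec() evaluates to roughly the following dict (a sketch; the container name comes from the deployed application name, shown here as "nginx-k8s" for illustration):

    spec = {
        "version": 2,
        "containers": [
            {
                "name": "nginx-k8s",          # self.framework.model.app.name
                "image": "nginx",             # config["image"]
                "ports": [
                    {"name": "port", "containerPort": 80, "protocol": "TCP"}
                ],
            }
        ],
    }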
diff --git a/SOL004/charm-packages/native_k8s_charm_vnf/charms/nginx-k8s/lib/ops/__init__.py b/SOL004/charm-packages/native_k8s_charm_vnf/charms/nginx-k8s/lib/ops/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..e3207f5650949581c4610d830e543430d4776db7
--- /dev/null
+++ b/SOL004/charm-packages/native_k8s_charm_vnf/charms/nginx-k8s/lib/ops/__init__.py
@@ -0,0 +1,20 @@
+# Copyright 2020 Canonical Ltd.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""The Operator Framework."""
+
+__version__ = '0.6.1'
+
+# Import here the bare minimum to break the circular import between modules
+from . import charm  # NOQA
diff --git a/SOL004/charm-packages/native_k8s_charm_vnf/charms/nginx-k8s/lib/ops/charm.py b/SOL004/charm-packages/native_k8s_charm_vnf/charms/nginx-k8s/lib/ops/charm.py
new file mode 100755
index 0000000000000000000000000000000000000000..3d98b6ec321f3478cbf3410136abeca91054f1e9
--- /dev/null
+++ b/SOL004/charm-packages/native_k8s_charm_vnf/charms/nginx-k8s/lib/ops/charm.py
@@ -0,0 +1,574 @@
+# Copyright 2019-2020 Canonical Ltd.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import enum
+import os
+import pathlib
+import typing
+
+import yaml
+
+from ops.framework import Object, EventSource, EventBase, Framework, ObjectEvents
+from ops import model
+
+
+class HookEvent(EventBase):
+    """A base class for events that trigger because of a Juju hook firing."""
+
+
+class ActionEvent(EventBase):
+    """A base class for events that trigger when a user asks for an Action to be run.
+
+    To read the parameters for the action, see the instance variable `params`.
+    To respond with the result of the action, call `set_results`. To add progress
+    messages that are visible as the action is progressing use `log`.
+
+    :ivar params: The parameters passed to the action (read by action-get)
+    """
+
+    def defer(self):
+        """Action events are not deferrable like other events.
+
+        This is because an action runs synchronously and the user is waiting for the result.
+        """
+        raise RuntimeError('cannot defer action events')
+
+    def restore(self, snapshot: dict) -> None:
+        """Used by the operator framework to record the action.
+
+        Not meant to be called directly by Charm code.
+        """
+        env_action_name = os.environ.get('JUJU_ACTION_NAME')
+        event_action_name = self.handle.kind[:-len('_action')].replace('_', '-')
+        if event_action_name != env_action_name:
+            # This could only happen if the dev manually emits the action, or from a bug.
+            raise RuntimeError('action event kind does not match current action')
+        # Params are loaded at restore rather than __init__ because
+        # the model is not available in __init__.
+        self.params = self.framework.model._backend.action_get()
+
+    def set_results(self, results: typing.Mapping) -> None:
+        """Report the result of the action.
+
+        Args:
+            results: The result of the action as a Dict
+        """
+        self.framework.model._backend.action_set(results)
+
+    def log(self, message: str) -> None:
+        """Send a message that a user will see while the action is running.
+
+        Args:
+            message: The message for the user.
+ """ + self.framework.model._backend.action_log(message) + + def fail(self, message: str = '') -> None: + """Report that this action has failed. + + Args: + message: Optional message to record why it has failed. + """ + self.framework.model._backend.action_fail(message) + + +class InstallEvent(HookEvent): + """Represents the `install` hook from Juju.""" + + +class StartEvent(HookEvent): + """Represents the `start` hook from Juju.""" + + +class StopEvent(HookEvent): + """Represents the `stop` hook from Juju.""" + + +class RemoveEvent(HookEvent): + """Represents the `remove` hook from Juju. """ + + +class ConfigChangedEvent(HookEvent): + """Represents the `config-changed` hook from Juju.""" + + +class UpdateStatusEvent(HookEvent): + """Represents the `update-status` hook from Juju.""" + + +class UpgradeCharmEvent(HookEvent): + """Represents the `upgrade-charm` hook from Juju. + + This will be triggered when a user has run `juju upgrade-charm`. It is run after Juju + has unpacked the upgraded charm code, and so this event will be handled with new code. + """ + + +class PreSeriesUpgradeEvent(HookEvent): + """Represents the `pre-series-upgrade` hook from Juju. + + This happens when a user has run `juju upgrade-series MACHINE prepare` and + will fire for each unit that is running on the machine, telling them that + the user is preparing to upgrade the Machine's series (eg trusty->bionic). + The charm should take actions to prepare for the upgrade (a database charm + would want to write out a version-independent dump of the database, so that + when a new version of the database is available in a new series, it can be + used.) + Once all units on a machine have run `pre-series-upgrade`, the user will + initiate the steps to actually upgrade the machine (eg `do-release-upgrade`). + When the upgrade has been completed, the :class:`PostSeriesUpgradeEvent` will fire. + """ + + +class PostSeriesUpgradeEvent(HookEvent): + """Represents the `post-series-upgrade` hook from Juju. + + This is run after the user has done a distribution upgrade (or rolled back + and kept the same series). It is called in response to + `juju upgrade-series MACHINE complete`. Charms are expected to do whatever + steps are necessary to reconfigure their applications for the new series. + """ + + +class LeaderElectedEvent(HookEvent): + """Represents the `leader-elected` hook from Juju. + + Juju will trigger this when a new lead unit is chosen for a given application. + This represents the leader of the charm information (not necessarily the primary + of a running application). The main utility is that charm authors can know + that only one unit will be a leader at any given time, so they can do + configuration, etc, that would otherwise require coordination between units. + (eg, selecting a password for a new relation) + """ + + +class LeaderSettingsChangedEvent(HookEvent): + """Represents the `leader-settings-changed` hook from Juju. + + Deprecated. This represents when a lead unit would call `leader-set` to inform + the other units of an application that they have new information to handle. + This has been deprecated in favor of using a Peer relation, and having the + leader set a value in the Application data bag for that peer relation. + (see :class:`RelationChangedEvent`). + """ + + +class CollectMetricsEvent(HookEvent): + """Represents the `collect-metrics` hook from Juju. + + Note that events firing during a CollectMetricsEvent are currently + sandboxed in how they can interact with Juju. 
To report metrics + use :meth:`.add_metrics`. + """ + + def add_metrics(self, metrics: typing.Mapping, labels: typing.Mapping = None) -> None: + """Record metrics that have been gathered by the charm for this unit. + + Args: + metrics: A collection of {key: float} pairs that contains the + metrics that have been gathered + labels: {key:value} strings that can be applied to the + metrics that are being gathered + """ + self.framework.model._backend.add_metrics(metrics, labels) + + +class RelationEvent(HookEvent): + """A base class representing the various relation lifecycle events. + + Charmers should not be creating RelationEvents directly. The events will be + generated by the framework from Juju related events. Users can observe them + from the various `CharmBase.on[relation_name].relation_*` events. + + Attributes: + relation: The Relation involved in this event + app: The remote application that has triggered this event + unit: The remote unit that has triggered this event. This may be None + if the relation event was triggered as an Application level event + """ + + def __init__(self, handle, relation, app=None, unit=None): + super().__init__(handle) + + if unit is not None and unit.app != app: + raise RuntimeError( + 'cannot create RelationEvent with application {} and unit {}'.format(app, unit)) + + self.relation = relation + self.app = app + self.unit = unit + + def snapshot(self) -> dict: + """Used by the framework to serialize the event to disk. + + Not meant to be called by Charm code. + """ + snapshot = { + 'relation_name': self.relation.name, + 'relation_id': self.relation.id, + } + if self.app: + snapshot['app_name'] = self.app.name + if self.unit: + snapshot['unit_name'] = self.unit.name + return snapshot + + def restore(self, snapshot: dict) -> None: + """Used by the framework to deserialize the event from disk. + + Not meant to be called by Charm code. + """ + self.relation = self.framework.model.get_relation( + snapshot['relation_name'], snapshot['relation_id']) + + app_name = snapshot.get('app_name') + if app_name: + self.app = self.framework.model.get_app(app_name) + else: + self.app = None + + unit_name = snapshot.get('unit_name') + if unit_name: + self.unit = self.framework.model.get_unit(unit_name) + else: + self.unit = None + + +class RelationCreatedEvent(RelationEvent): + """Represents the `relation-created` hook from Juju. + + This is triggered when a new relation to another app is added in Juju. This + can occur before units for those applications have started. All existing + relations should be established before start. + """ + + +class RelationJoinedEvent(RelationEvent): + """Represents the `relation-joined` hook from Juju. + + This is triggered whenever a new unit of a related application joins the relation. + (eg, a unit was added to an existing related app, or a new relation was established + with an application that already had units.) + """ + + +class RelationChangedEvent(RelationEvent): + """Represents the `relation-changed` hook from Juju. + + This is triggered whenever there is a change to the data bucket for a related + application or unit. Look at `event.relation.data[event.unit/app]` to see the + new information. + """ + + +class RelationDepartedEvent(RelationEvent): + """Represents the `relation-departed` hook from Juju. + + This is the inverse of the RelationJoinedEvent, representing when a unit + is leaving the relation (the unit is being removed, the app is being removed, + the relation is being removed). 
It is fired once for each unit that is
+    going away.
+    """
+
+
+class RelationBrokenEvent(RelationEvent):
+    """Represents the `relation-broken` hook from Juju.
+
+    If a relation is being removed (`juju remove-relation` or `juju remove-application`),
+    once all the units have been removed, RelationBrokenEvent will fire to signal
+    that the relationship has been fully terminated.
+    """
+
+
+class StorageEvent(HookEvent):
+    """Base class representing Storage related events."""
+
+
+class StorageAttachedEvent(StorageEvent):
+    """Represents the `storage-attached` hook from Juju.
+
+    Called when new storage is available for the charm to use.
+    """
+
+
+class StorageDetachingEvent(StorageEvent):
+    """Represents the `storage-detaching` hook from Juju.
+
+    Called when storage a charm has been using is going away.
+    """
+
+
+class CharmEvents(ObjectEvents):
+    """The events that are generated by Juju in response to the lifecycle of an application."""
+
+    install = EventSource(InstallEvent)
+    start = EventSource(StartEvent)
+    stop = EventSource(StopEvent)
+    remove = EventSource(RemoveEvent)
+    update_status = EventSource(UpdateStatusEvent)
+    config_changed = EventSource(ConfigChangedEvent)
+    upgrade_charm = EventSource(UpgradeCharmEvent)
+    pre_series_upgrade = EventSource(PreSeriesUpgradeEvent)
+    post_series_upgrade = EventSource(PostSeriesUpgradeEvent)
+    leader_elected = EventSource(LeaderElectedEvent)
+    leader_settings_changed = EventSource(LeaderSettingsChangedEvent)
+    collect_metrics = EventSource(CollectMetricsEvent)
+
+
+class CharmBase(Object):
+    """Base class that represents the Charm overall.
+
+    Usually this initialization is done by ops.main.main() rather than Charm authors
+    directly instantiating a Charm.
+
+    Args:
+        framework: The framework responsible for managing the Model and events for this
+            Charm.
+        key: Arbitrary key to distinguish this instance of CharmBase from another.
+            Generally is None when initialized by the framework. For charms instantiated by
+            main.main(), this is currently None.
+    """
+
+    on = CharmEvents()
+
+    def __init__(self, framework: Framework, key: typing.Optional[str]):
+        """Initialize the Charm with its framework and application name.
+
+        """
+        super().__init__(framework, key)
+
+        for relation_name in self.framework.meta.relations:
+            relation_name = relation_name.replace('-', '_')
+            self.on.define_event(relation_name + '_relation_created', RelationCreatedEvent)
+            self.on.define_event(relation_name + '_relation_joined', RelationJoinedEvent)
+            self.on.define_event(relation_name + '_relation_changed', RelationChangedEvent)
+            self.on.define_event(relation_name + '_relation_departed', RelationDepartedEvent)
+            self.on.define_event(relation_name + '_relation_broken', RelationBrokenEvent)
+
+        for storage_name in self.framework.meta.storages:
+            storage_name = storage_name.replace('-', '_')
+            self.on.define_event(storage_name + '_storage_attached', StorageAttachedEvent)
+            self.on.define_event(storage_name + '_storage_detaching', StorageDetachingEvent)
+
+        for action_name in self.framework.meta.actions:
+            action_name = action_name.replace('-', '_')
+            self.on.define_event(action_name + '_action', ActionEvent)
+
+    @property
+    def app(self) -> model.Application:
+        """Application that this unit is part of."""
+        return self.framework.model.app
+
+    @property
+    def unit(self) -> model.Unit:
+        """Unit that this execution is responsible for."""
+        return self.framework.model.unit
+
+    @property
+    def meta(self) -> 'CharmMeta':
+        """CharmMeta of this charm.
+ """ + return self.framework.meta + + @property + def charm_dir(self) -> pathlib.Path: + """Root directory of the Charm as it is running. + """ + return self.framework.charm_dir + + +class CharmMeta: + """Object containing the metadata for the charm. + + This is read from metadata.yaml and/or actions.yaml. Generally charms will + define this information, rather than reading it at runtime. This class is + mostly for the framework to understand what the charm has defined. + + The maintainers, tags, terms, series, and extra_bindings attributes are all + lists of strings. The requires, provides, peers, relations, storage, + resources, and payloads attributes are all mappings of names to instances + of the respective RelationMeta, StorageMeta, ResourceMeta, or PayloadMeta. + + The relations attribute is a convenience accessor which includes all of the + requires, provides, and peers RelationMeta items. If needed, the role of + the relation definition can be obtained from its role attribute. + + Attributes: + name: The name of this charm + summary: Short description of what this charm does + description: Long description for this charm + maintainers: A list of strings of the email addresses of the maintainers + of this charm. + tags: Charm store tag metadata for categories associated with this charm. + terms: Charm store terms that should be agreed to before this charm can + be deployed. (Used for things like licensing issues.) + series: The list of supported OS series that this charm can support. + The first entry in the list is the default series that will be + used by deploy if no other series is requested by the user. + subordinate: True/False whether this charm is intended to be used as a + subordinate charm. + min_juju_version: If supplied, indicates this charm needs features that + are not available in older versions of Juju. + requires: A dict of {name: :class:`RelationMeta` } for each 'requires' relation. + provides: A dict of {name: :class:`RelationMeta` } for each 'provides' relation. + peers: A dict of {name: :class:`RelationMeta` } for each 'peer' relation. + relations: A dict containing all :class:`RelationMeta` attributes (merged from other + sections) + storages: A dict of {name: :class:`StorageMeta`} for each defined storage. + resources: A dict of {name: :class:`ResourceMeta`} for each defined resource. + payloads: A dict of {name: :class:`PayloadMeta`} for each defined payload. + extra_bindings: A dict of additional named bindings that a charm can use + for network configuration. + actions: A dict of {name: :class:`ActionMeta`} for actions that the charm has defined. 
Args:
+        raw: a mapping containing the contents of metadata.yaml
+        actions_raw: a mapping containing the contents of actions.yaml
+    """
+
+    def __init__(self, raw: dict = {}, actions_raw: dict = {}):
+        self.name = raw.get('name', '')
+        self.summary = raw.get('summary', '')
+        self.description = raw.get('description', '')
+        self.maintainers = []
+        if 'maintainer' in raw:
+            self.maintainers.append(raw['maintainer'])
+        if 'maintainers' in raw:
+            self.maintainers.extend(raw['maintainers'])
+        self.tags = raw.get('tags', [])
+        self.terms = raw.get('terms', [])
+        self.series = raw.get('series', [])
+        self.subordinate = raw.get('subordinate', False)
+        self.min_juju_version = raw.get('min-juju-version')
+        self.requires = {name: RelationMeta(RelationRole.requires, name, rel)
+                         for name, rel in raw.get('requires', {}).items()}
+        self.provides = {name: RelationMeta(RelationRole.provides, name, rel)
+                         for name, rel in raw.get('provides', {}).items()}
+        self.peers = {name: RelationMeta(RelationRole.peer, name, rel)
+                      for name, rel in raw.get('peers', {}).items()}
+        self.relations = {}
+        self.relations.update(self.requires)
+        self.relations.update(self.provides)
+        self.relations.update(self.peers)
+        self.storages = {name: StorageMeta(name, storage)
+                         for name, storage in raw.get('storage', {}).items()}
+        self.resources = {name: ResourceMeta(name, res)
+                          for name, res in raw.get('resources', {}).items()}
+        self.payloads = {name: PayloadMeta(name, payload)
+                         for name, payload in raw.get('payloads', {}).items()}
+        self.extra_bindings = raw.get('extra-bindings', {})
+        self.actions = {name: ActionMeta(name, action) for name, action in actions_raw.items()}
+
+    @classmethod
+    def from_yaml(
+            cls, metadata: typing.Union[str, typing.TextIO],
+            actions: typing.Optional[typing.Union[str, typing.TextIO]] = None):
+        """Instantiate a CharmMeta from a YAML description of metadata.yaml.
+
+        Args:
+            metadata: A YAML description of charm metadata (name, relations, etc.)
+                This can be a simple string, or a file-like object. (passed to `yaml.safe_load`).
+            actions: YAML description of Actions for this charm (eg actions.yaml)
+        """
+        meta = yaml.safe_load(metadata)
+        raw_actions = {}
+        if actions is not None:
+            raw_actions = yaml.safe_load(actions)
+        return cls(meta, raw_actions)
+
+
+class RelationRole(enum.Enum):
+    peer = 'peer'
+    requires = 'requires'
+    provides = 'provides'
+
+    def is_peer(self) -> bool:
+        """Return whether the current role is peer.
+
+        A convenience to avoid having to import charm.
+        """
+        return self is RelationRole.peer
+
+
+class RelationMeta:
+    """Object containing metadata about a relation definition.
+
+    Should not be constructed directly by Charm code. It is obtained from one of
+    :attr:`CharmMeta.peers`, :attr:`CharmMeta.requires`, :attr:`CharmMeta.provides`,
+    or :attr:`CharmMeta.relations`.
+
+    Attributes:
+        role: This is one of peer/requires/provides
+        relation_name: Name of this relation from metadata.yaml
+        interface_name: Optional definition of the interface protocol.
+        scope: "global" or "container" scope based on how the relation should be used.
+    """
+
+    def __init__(self, role: RelationRole, relation_name: str, raw: dict):
+        if not isinstance(role, RelationRole):
+            raise TypeError("role should be a Role, not {!r}".format(role))
+        self.role = role
+        self.relation_name = relation_name
+        self.interface_name = raw['interface']
+        self.scope = raw.get('scope')
+
+
+class StorageMeta:
+    """Object containing metadata about a storage definition."""
+
+    def __init__(self, name, raw):
+        self.storage_name = name
+        self.type = raw['type']
+        self.description = raw.get('description', '')
+        self.shared = raw.get('shared', False)
+        self.read_only = raw.get('read-only', False)
+        self.minimum_size = raw.get('minimum-size')
+        self.location = raw.get('location')
+        self.multiple_range = None
+        if 'multiple' in raw:
+            range = raw['multiple']['range']
+            if '-' not in range:
+                self.multiple_range = (int(range), int(range))
+            else:
+                range = range.split('-')
+                self.multiple_range = (int(range[0]), int(range[1]) if range[1] else None)
+
+
+class ResourceMeta:
+    """Object containing metadata about a resource definition."""
+
+    def __init__(self, name, raw):
+        self.resource_name = name
+        self.type = raw['type']
+        self.filename = raw.get('filename', None)
+        self.description = raw.get('description', '')
+
+
+class PayloadMeta:
+    """Object containing metadata about a payload definition."""
+
+    def __init__(self, name, raw):
+        self.payload_name = name
+        self.type = raw['type']
+
+
+class ActionMeta:
+    """Object containing metadata about an action's definition."""
+
+    def __init__(self, name, raw=None):
+        raw = raw or {}
+        self.name = name
+        self.title = raw.get('title', '')
+        self.description = raw.get('description', '')
+        self.parameters = raw.get('params', {})  # {<parameter name>: <YAML parameter schema>}
+        self.required = raw.get('required', [])  # [<parameter name>, ...]
diff --git a/SOL004/charm-packages/native_k8s_charm_vnf/charms/nginx-k8s/lib/ops/framework.py b/SOL004/charm-packages/native_k8s_charm_vnf/charms/nginx-k8s/lib/ops/framework.py
new file mode 100755
index 0000000000000000000000000000000000000000..54945cb10e0ffa5f352381aefa2fa443048ba3ae
--- /dev/null
+++ b/SOL004/charm-packages/native_k8s_charm_vnf/charms/nginx-k8s/lib/ops/framework.py
@@ -0,0 +1,1132 @@
+# Copyright 2019-2020 Canonical Ltd.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import collections
+import collections.abc
+import inspect
+import keyword
+import marshal
+import os
+import pdb
+import pickle
+import re
+import sqlite3
+import sys
+import types
+import weakref
+from datetime import timedelta
+
+from ops import charm
+
+
+class Handle:
+    """Handle defines a name for an object in the form of a hierarchical path.
+
+    The provided parent is the object (or that object's handle) that this handle
+    sits under, or None if the object identified by this handle stands by itself
+    as the root of its own hierarchy.
+
+    The handle kind is a string that defines a namespace so objects with the
+    same parent and kind will have unique keys.
+
+    The handle key is a string uniquely identifying the object. No other objects
+    under the same parent and kind may have the same key.
+    """
+
+    def __init__(self, parent, kind, key):
+        if parent and not isinstance(parent, Handle):
+            parent = parent.handle
+        self._parent = parent
+        self._kind = kind
+        self._key = key
+        if parent:
+            if key:
+                self._path = "{}/{}[{}]".format(parent, kind, key)
+            else:
+                self._path = "{}/{}".format(parent, kind)
+        else:
+            if key:
+                self._path = "{}[{}]".format(kind, key)
+            else:
+                self._path = "{}".format(kind)
+
+    def nest(self, kind, key):
+        return Handle(self, kind, key)
+
+    def __hash__(self):
+        return hash((self.parent, self.kind, self.key))
+
+    def __eq__(self, other):
+        return (self.parent, self.kind, self.key) == (other.parent, other.kind, other.key)
+
+    def __str__(self):
+        return self.path
+
+    @property
+    def parent(self):
+        return self._parent
+
+    @property
+    def kind(self):
+        return self._kind
+
+    @property
+    def key(self):
+        return self._key
+
+    @property
+    def path(self):
+        return self._path
+
+    @classmethod
+    def from_path(cls, path):
+        handle = None
+        for pair in path.split("/"):
+            pair = pair.split("[")
+            good = False
+            if len(pair) == 1:
+                kind, key = pair[0], None
+                good = True
+            elif len(pair) == 2:
+                kind, key = pair
+                if key and key[-1] == ']':
+                    key = key[:-1]
+                    good = True
+            if not good:
+                raise RuntimeError("attempted to restore invalid handle path {}".format(path))
+            handle = Handle(handle, kind, key)
+        return handle
+
+
+class EventBase:
+
+    def __init__(self, handle):
+        self.handle = handle
+        self.deferred = False
+
+    def defer(self):
+        self.deferred = True
+
+    def snapshot(self):
+        """Return the snapshot data that should be persisted.
+
+        Subclasses must override to save any custom state.
+        """
+        return None
+
+    def restore(self, snapshot):
+        """Restore the value state from the given snapshot.
+
+        Subclasses must override to restore their custom state.
+        """
+        self.deferred = False
+
+
+class EventSource:
+    """EventSource wraps an event type with a descriptor to facilitate observing and emitting.
+
+    It is generally used as:
+
+        class SomethingHappened(EventBase):
+            pass
+
+        class SomeObject(Object):
+            something_happened = EventSource(SomethingHappened)
+
+    With that, instances of that type will offer the someobj.something_happened
+    attribute which is a BoundEvent and may be used to emit and observe the event.
+    """
+
+    def __init__(self, event_type):
+        if not isinstance(event_type, type) or not issubclass(event_type, EventBase):
+            raise RuntimeError(
+                'Event requires a subclass of EventBase as an argument, got {}'.format(event_type))
+        self.event_type = event_type
+        self.event_kind = None
+        self.emitter_type = None
+
+    def _set_name(self, emitter_type, event_kind):
+        if self.event_kind is not None:
+            raise RuntimeError(
+                'EventSource({}) reused as {}.{} and {}.{}'.format(
+                    self.event_type.__name__,
+                    self.emitter_type.__name__,
+                    self.event_kind,
+                    emitter_type.__name__,
+                    event_kind,
+                ))
+        self.event_kind = event_kind
+        self.emitter_type = emitter_type
+
+    def __get__(self, emitter, emitter_type=None):
+        if emitter is None:
+            return self
+        # Framework might not be available if accessed as CharmClass.on.event
+        # rather than charm_instance.on.event, but in that case it couldn't be
+        # emitted anyway, so there's no point to registering it.
+        framework = getattr(emitter, 'framework', None)
+        if framework is not None:
+            framework.register_type(self.event_type, emitter, self.event_kind)
+        return BoundEvent(emitter, self.event_type, self.event_kind)
+
+
+class BoundEvent:
+
+    def __repr__(self):
+        return '<BoundEvent {} bound to {}.{} at {}>'.format(
+            self.event_type.__name__,
+            type(self.emitter).__name__,
+            self.event_kind,
+            hex(id(self)),
+        )
+
+    def __init__(self, emitter, event_type, event_kind):
+        self.emitter = emitter
+        self.event_type = event_type
+        self.event_kind = event_kind
+
+    def emit(self, *args, **kwargs):
+        """Emit event to all registered observers.
+
+        The current storage state is committed before and after each observer is notified.
+        """
+        framework = self.emitter.framework
+        key = framework._next_event_key()
+        event = self.event_type(Handle(self.emitter, self.event_kind, key), *args, **kwargs)
+        framework._emit(event)
+
+
+class HandleKind:
+    """Helper descriptor to define the Object.handle_kind field.
+
+    The handle_kind for an object defaults to its type name, but it may
+    be explicitly overridden if desired.
+    """
+
+    def __get__(self, obj, obj_type):
+        kind = obj_type.__dict__.get("handle_kind")
+        if kind:
+            return kind
+        return obj_type.__name__
+
+
+class _Metaclass(type):
+    """Helper class to ensure proper instantiation of Object-derived classes.
+
+    This class currently has a single purpose: events derived from EventSource
+    that are class attributes of Object-derived classes need to be told what
+    their name is in that class. For example, in
+
+        class SomeObject(Object):
+            something_happened = EventSource(SomethingHappened)
+
+    the instance of EventSource needs to know it's called 'something_happened'.
+
+    Starting from python 3.6 we could use __set_name__ on EventSource for this,
+    but until then this (meta)class does the equivalent work.
+
+    TODO: when we drop support for 3.5 drop this class, and rename _set_name in
+    EventSource to __set_name__; everything should continue to work.
+
+    """
+
+    def __new__(typ, *a, **kw):
+        k = super().__new__(typ, *a, **kw)
+        # k is now the Object-derived class; loop over its class attributes
+        for n, v in vars(k).items():
+            # we could do duck typing here if we want to support
+            # non-EventSource-derived shenanigans. We don't.
+            if isinstance(v, EventSource):
+                # this is what 3.6+ does automatically for us:
+                v._set_name(k, n)
+        return k
+
+
+class Object(metaclass=_Metaclass):
+
+    handle_kind = HandleKind()
+
+    def __init__(self, parent, key):
+        kind = self.handle_kind
+        if isinstance(parent, Framework):
+            self.framework = parent
+            # Avoid Framework instances having a circular reference to themselves.
+            if self.framework is self:
+                self.framework = weakref.proxy(self.framework)
+            self.handle = Handle(None, kind, key)
+        else:
+            self.framework = parent.framework
+            self.handle = Handle(parent, kind, key)
+        self.framework._track(self)
+
+        # TODO Detect conflicting handles here.
+
+    @property
+    def model(self):
+        return self.framework.model
+
+
+class ObjectEvents(Object):
+    """Convenience type to allow defining .on attributes at class level."""
+
+    handle_kind = "on"
+
+    def __init__(self, parent=None, key=None):
+        if parent is not None:
+            super().__init__(parent, key)
+        else:
+            self._cache = weakref.WeakKeyDictionary()
+
+    def __get__(self, emitter, emitter_type):
+        if emitter is None:
+            return self
+        instance = self._cache.get(emitter)
+        if instance is None:
+            # Same type, different instance, more data.
Doing this unusual construct + # means people can subclass just this one class to have their own 'on'. + instance = self._cache[emitter] = type(self)(emitter) + return instance + + @classmethod + def define_event(cls, event_kind, event_type): + """Define an event on this type at runtime. + + cls: a type to define an event on. + + event_kind: an attribute name that will be used to access the + event. Must be a valid python identifier, not be a keyword + or an existing attribute. + + event_type: a type of the event to define. + + """ + prefix = 'unable to define an event with event_kind that ' + if not event_kind.isidentifier(): + raise RuntimeError(prefix + 'is not a valid python identifier: ' + event_kind) + elif keyword.iskeyword(event_kind): + raise RuntimeError(prefix + 'is a python keyword: ' + event_kind) + try: + getattr(cls, event_kind) + raise RuntimeError( + prefix + 'overlaps with an existing type {} attribute: {}'.format(cls, event_kind)) + except AttributeError: + pass + + event_descriptor = EventSource(event_type) + event_descriptor._set_name(cls, event_kind) + setattr(cls, event_kind, event_descriptor) + + def events(self): + """Return a mapping of event_kinds to bound_events for all available events. + """ + events_map = {} + # We have to iterate over the class rather than instance to allow for properties which + # might call this method (e.g., event views), leading to infinite recursion. + for attr_name, attr_value in inspect.getmembers(type(self)): + if isinstance(attr_value, EventSource): + # We actually care about the bound_event, however, since it + # provides the most info for users of this method. + event_kind = attr_name + bound_event = getattr(self, event_kind) + events_map[event_kind] = bound_event + return events_map + + def __getitem__(self, key): + return PrefixedEvents(self, key) + + +class PrefixedEvents: + + def __init__(self, emitter, key): + self._emitter = emitter + self._prefix = key.replace("-", "_") + '_' + + def __getattr__(self, name): + return getattr(self._emitter, self._prefix + name) + + +class PreCommitEvent(EventBase): + pass + + +class CommitEvent(EventBase): + pass + + +class FrameworkEvents(ObjectEvents): + pre_commit = EventSource(PreCommitEvent) + commit = EventSource(CommitEvent) + + +class NoSnapshotError(Exception): + + def __init__(self, handle_path): + self.handle_path = handle_path + + def __str__(self): + return 'no snapshot data found for {} object'.format(self.handle_path) + + +class NoTypeError(Exception): + + def __init__(self, handle_path): + self.handle_path = handle_path + + def __str__(self): + return "cannot restore {} since no class was registered for it".format(self.handle_path) + + +class SQLiteStorage: + + DB_LOCK_TIMEOUT = timedelta(hours=1) + + def __init__(self, filename): + # The isolation_level argument is set to None such that the implicit + # transaction management behavior of the sqlite3 module is disabled. + self._db = sqlite3.connect(str(filename), + isolation_level=None, + timeout=self.DB_LOCK_TIMEOUT.total_seconds()) + self._setup() + + def _setup(self): + # Make sure that the database is locked until the connection is closed, + # not until the transaction ends. + self._db.execute("PRAGMA locking_mode=EXCLUSIVE") + c = self._db.execute("BEGIN") + c.execute("SELECT count(name) FROM sqlite_master WHERE type='table' AND name='snapshot'") + if c.fetchone()[0] == 0: + # Keep in mind what might happen if the process dies somewhere below. + # The system must not be rendered permanently broken by that. 
+            self._db.execute("CREATE TABLE snapshot (handle TEXT PRIMARY KEY, data BLOB)")
+            self._db.execute('''
+                CREATE TABLE notice (
+                  sequence INTEGER PRIMARY KEY AUTOINCREMENT,
+                  event_path TEXT,
+                  observer_path TEXT,
+                  method_name TEXT)
+                ''')
+            self._db.commit()
+
+    def close(self):
+        self._db.close()
+
+    def commit(self):
+        self._db.commit()
+
+    # There's commit but no rollback. For abort to be supported, we'll need logic that
+    # can rollback decisions made by third-party code in terms of the internal state
+    # of objects that have been snapshotted, and hooks to let them know about it and
+    # take the needed actions to undo their logic until the last snapshot.
+    # This is doable but will increase significantly the chances for mistakes.
+
+    def save_snapshot(self, handle_path, snapshot_data):
+        self._db.execute("REPLACE INTO snapshot VALUES (?, ?)", (handle_path, snapshot_data))
+
+    def load_snapshot(self, handle_path):
+        c = self._db.cursor()
+        c.execute("SELECT data FROM snapshot WHERE handle=?", (handle_path,))
+        row = c.fetchone()
+        if row:
+            return row[0]
+        return None
+
+    def drop_snapshot(self, handle_path):
+        self._db.execute("DELETE FROM snapshot WHERE handle=?", (handle_path,))
+
+    def save_notice(self, event_path, observer_path, method_name):
+        self._db.execute('INSERT INTO notice VALUES (NULL, ?, ?, ?)',
+                         (event_path, observer_path, method_name))
+
+    def drop_notice(self, event_path, observer_path, method_name):
+        self._db.execute('''
+            DELETE FROM notice
+             WHERE event_path=?
+               AND observer_path=?
+               AND method_name=?
+            ''', (event_path, observer_path, method_name))
+
+    def notices(self, event_path):
+        if event_path:
+            c = self._db.execute('''
+                SELECT event_path, observer_path, method_name
+                  FROM notice
+                 WHERE event_path=?
+              ORDER BY sequence
+                ''', (event_path,))
+        else:
+            c = self._db.execute('''
+                SELECT event_path, observer_path, method_name
+                  FROM notice
+              ORDER BY sequence
+                ''')
+        while True:
+            rows = c.fetchmany()
+            if not rows:
+                break
+            for row in rows:
+                yield tuple(row)
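For reference, a standalone sketch of the snapshot/notice contract implemented above (the handle and observer paths are made up for illustration; SQLiteStorage accepts any path sqlite3 understands, including ":memory:"):

    import pickle

    store = SQLiteStorage(":memory:")
    # Snapshots: raw bytes keyed by handle path, replaced on each save.
    store.save_snapshot("MyCharm/StoredStateData[_stored]",
                        pickle.dumps({"event_count": 3}))
    data = pickle.loads(store.load_snapshot("MyCharm/StoredStateData[_stored]"))
    assert data == {"event_count": 3}
    # Notices: an ordered queue of (event_path, observer_path, method_name).
    store.save_notice("MyCharm/on/config_changed[1]", "MyCharm", "on_config_changed")
    assert list(store.notices(None)) == [
        ("MyCharm/on/config_changed[1]", "MyCharm", "on_config_changed")]
    store.close()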
+
+
+# the message to show to the user when a pdb breakpoint goes active
+_BREAKPOINT_WELCOME_MESSAGE = """
+Starting pdb to debug charm operator.
+Run `h` for help, `c` to continue, or `exit`/CTRL-d to abort.
+Future breakpoints may interrupt execution again.
+More details at https://discourse.jujucharms.com/t/debugging-charm-hooks
+
+"""
+
+
+class Framework(Object):
+
+    on = FrameworkEvents()
+
+    # Override properties from Object so that we can set them in __init__.
+    model = None
+    meta = None
+    charm_dir = None
+
+    def __init__(self, data_path, charm_dir, meta, model):
+
+        super().__init__(self, None)
+
+        self._data_path = data_path
+        self.charm_dir = charm_dir
+        self.meta = meta
+        self.model = model
+        self._observers = []      # [(observer_path, method_name, parent_path, event_key)]
+        self._observer = weakref.WeakValueDictionary()   # {observer_path: observer}
+        self._objects = weakref.WeakValueDictionary()
+        self._type_registry = {}  # {(parent_path, kind): cls}
+        self._type_known = set()  # {cls}
+
+        self._storage = SQLiteStorage(data_path)
+
+        # We can't use the higher-level StoredState because it relies on events.
+        self.register_type(StoredStateData, None, StoredStateData.handle_kind)
+        stored_handle = Handle(None, StoredStateData.handle_kind, '_stored')
+        try:
+            self._stored = self.load_snapshot(stored_handle)
+        except NoSnapshotError:
+            self._stored = StoredStateData(self, '_stored')
+            self._stored['event_count'] = 0
+
+        # Hook into builtin breakpoint, so if Python >= 3.7, devs will be able to just do
+        # breakpoint(); if Python < 3.7, this doesn't affect anything
+        sys.breakpointhook = self.breakpoint
+
+        # Flag to indicate that we already presented the welcome message in a debugger breakpoint
+        self._breakpoint_welcomed = False
+
+        # Parse once the env var, which may be used multiple times later
+        debug_at = os.environ.get('JUJU_DEBUG_AT')
+        self._juju_debug_at = debug_at.split(',') if debug_at else ()
+
+    def close(self):
+        self._storage.close()
+
+    def _track(self, obj):
+        """Track object and ensure it is the only object created using its handle path."""
+        if obj is self:
+            # Framework objects don't track themselves
+            return
+        if obj.handle.path in self.framework._objects:
+            raise RuntimeError(
+                'two objects claiming to be {} have been created'.format(obj.handle.path))
+        self._objects[obj.handle.path] = obj
+
+    def _forget(self, obj):
+        """Stop tracking the given object. See also _track."""
+        self._objects.pop(obj.handle.path, None)
+
+    def commit(self):
+        # Give a chance for objects to persist data they want to before a commit is made.
+        self.on.pre_commit.emit()
+        # Make sure snapshots are saved by instances of StoredStateData. Any possible state
+        # modifications in on_commit handlers of instances of other classes will not be persisted.
+        self.on.commit.emit()
+        # Save our event count after all events have been emitted.
+        self.save_snapshot(self._stored)
+        self._storage.commit()
+
+    def register_type(self, cls, parent, kind=None):
+        if parent and not isinstance(parent, Handle):
+            parent = parent.handle
+        if parent:
+            parent_path = parent.path
+        else:
+            parent_path = None
+        if not kind:
+            kind = cls.handle_kind
+        self._type_registry[(parent_path, kind)] = cls
+        self._type_known.add(cls)
+
+    def save_snapshot(self, value):
+        """Save a persistent snapshot of the provided value.
+
+        The provided value must implement the following interface:
+
+        value.handle = Handle(...)
+        value.snapshot() => {...}  # Simple builtin types only.
+        value.restore(snapshot)    # Restore custom state from prior snapshot.
+        """
+        if type(value) not in self._type_known:
+            raise RuntimeError(
+                'cannot save {} values before registering that type'.format(type(value).__name__))
+        data = value.snapshot()
+
+        # Use marshal as a validator, enforcing the use of simple types, since the
+        # information is later pickled, which is too error prone for future evolution of the
+        # stored data (e.g. if the developer stores a custom object and later changes its
+        # class name; when unpickling the original class will not be there and event
+        # data loading will fail).
+        try:
+            marshal.dumps(data)
+        except ValueError:
+            msg = "unable to save the data for {}, it must contain only simple types: {!r}"
+            raise ValueError(msg.format(value.__class__.__name__, data))
+
+        # Use pickle for serialization, so the value remains portable.
+ raw_data = pickle.dumps(data) + self._storage.save_snapshot(value.handle.path, raw_data) + + def load_snapshot(self, handle): + parent_path = None + if handle.parent: + parent_path = handle.parent.path + cls = self._type_registry.get((parent_path, handle.kind)) + if not cls: + raise NoTypeError(handle.path) + raw_data = self._storage.load_snapshot(handle.path) + if not raw_data: + raise NoSnapshotError(handle.path) + data = pickle.loads(raw_data) + obj = cls.__new__(cls) + obj.framework = self + obj.handle = handle + obj.restore(data) + self._track(obj) + return obj + + def drop_snapshot(self, handle): + self._storage.drop_snapshot(handle.path) + + def observe(self, bound_event: BoundEvent, observer: types.MethodType): + """Register observer to be called when bound_event is emitted. + + The bound_event is generally provided as an attribute of the object that emits + the event, and is created in this style: + + class SomeObject: + something_happened = Event(SomethingHappened) + + That event may be observed as: + + framework.observe(someobj.something_happened, self._on_something_happened) + + Raises: + RuntimeError: if bound_event or observer are the wrong type. + """ + if not isinstance(bound_event, BoundEvent): + raise RuntimeError( + 'Framework.observe requires a BoundEvent as second parameter, got {}'.format( + bound_event)) + if not isinstance(observer, types.MethodType): + # help users of older versions of the framework + if isinstance(observer, charm.CharmBase): + raise TypeError( + 'observer methods must now be explicitly provided;' + ' please replace observe(self.on.{0}, self)' + ' with e.g. observe(self.on.{0}, self._on_{0})'.format( + bound_event.event_kind)) + raise RuntimeError( + 'Framework.observe requires a method as third parameter, got {}'.format(observer)) + + event_type = bound_event.event_type + event_kind = bound_event.event_kind + emitter = bound_event.emitter + + self.register_type(event_type, emitter, event_kind) + + if hasattr(emitter, "handle"): + emitter_path = emitter.handle.path + else: + raise RuntimeError( + 'event emitter {} must have a "handle" attribute'.format(type(emitter).__name__)) + + # Validate that the method has an acceptable call signature. + sig = inspect.signature(observer) + # Self isn't included in the params list, so the first arg will be the event. + extra_params = list(sig.parameters.values())[1:] + + method_name = observer.__name__ + observer = observer.__self__ + if not sig.parameters: + raise TypeError( + '{}.{} must accept event parameter'.format(type(observer).__name__, method_name)) + elif any(param.default is inspect.Parameter.empty for param in extra_params): + # Allow for additional optional params, since there's no reason to exclude them, but + # required params will break. + raise TypeError( + '{}.{} has extra required parameter'.format(type(observer).__name__, method_name)) + + # TODO Prevent the exact same parameters from being registered more than once. + + self._observer[observer.handle.path] = observer + self._observers.append((observer.handle.path, method_name, emitter_path, event_kind)) + + def _next_event_key(self): + """Return the next event key that should be used, incrementing the internal counter.""" + # Increment the count first; this means the keys will start at 1, and 0 + # means no events have been emitted. 
+        self._stored['event_count'] += 1
+        return str(self._stored['event_count'])
+
+    def _emit(self, event):
+        """See BoundEvent.emit for the public way to call this."""
+
+        # Save the event for all known observers before the first notification
+        # takes place, so that either everyone interested sees it, or nobody does.
+        self.save_snapshot(event)
+        event_path = event.handle.path
+        event_kind = event.handle.kind
+        parent_path = event.handle.parent.path
+        # TODO Track observers by (parent_path, event_kind) rather than as a list of
+        # all observers. Avoiding linear search through all observers for every event.
+        for observer_path, method_name, _parent_path, _event_kind in self._observers:
+            if _parent_path != parent_path:
+                continue
+            if _event_kind and _event_kind != event_kind:
+                continue
+            # Again, only commit this after all notices are saved.
+            self._storage.save_notice(event_path, observer_path, method_name)
+        self._reemit(event_path)
+
+    def reemit(self):
+        """Reemit previously deferred events to the observers that deferred them.
+
+        Only the specific observers that have previously deferred the event will be
+        notified again. Observers that asked to be notified about events after it's
+        been first emitted won't be notified, as that would mean potentially observing
+        events out of order.
+        """
+        self._reemit()
+
+    def _reemit(self, single_event_path=None):
+        last_event_path = None
+        deferred = True
+        for event_path, observer_path, method_name in self._storage.notices(single_event_path):
+            event_handle = Handle.from_path(event_path)
+
+            if last_event_path != event_path:
+                if not deferred:
+                    self._storage.drop_snapshot(last_event_path)
+                last_event_path = event_path
+                deferred = False
+
+            try:
+                event = self.load_snapshot(event_handle)
+            except NoTypeError:
+                self._storage.drop_notice(event_path, observer_path, method_name)
+                continue
+
+            event.deferred = False
+            observer = self._observer.get(observer_path)
+            if observer:
+                custom_handler = getattr(observer, method_name, None)
+                if custom_handler:
+                    event_is_from_juju = isinstance(event, charm.HookEvent)
+                    event_is_action = isinstance(event, charm.ActionEvent)
+                    if (event_is_from_juju or event_is_action) and 'hook' in self._juju_debug_at:
+                        # Present the welcome message and run under PDB.
+                        self._show_debug_code_message()
+                        pdb.runcall(custom_handler, event)
+                    else:
+                        # Regular call to the registered method.
+                        custom_handler(event)
+
+            if event.deferred:
+                deferred = True
+            else:
+                self._storage.drop_notice(event_path, observer_path, method_name)
+            # We intentionally consider this event to be dead and reload it from
+            # scratch in the next path.
+            self.framework._forget(event)
+
+        if not deferred:
+            self._storage.drop_snapshot(last_event_path)
+
+    def _show_debug_code_message(self):
+        """Present the welcome message (only once!) when using debugger functionality."""
+        if not self._breakpoint_welcomed:
+            self._breakpoint_welcomed = True
+            print(_BREAKPOINT_WELCOME_MESSAGE, file=sys.stderr, end='')
+
+    def breakpoint(self, name=None):
+        """Add breakpoint, optionally named, at the place where this method is called.
+
+        For the breakpoint to be activated the JUJU_DEBUG_AT environment variable
+        must be set to "all" or to the specific name parameter provided, if any. In every
+        other situation calling this method does nothing.
+
+        The framework also provides a standard breakpoint named "hook", that will
+        stop execution when a hook event is about to be handled.
+
+        For those reasons, the "all" and "hook" breakpoint names are reserved.
+        """
+        # If given, validate that the name complies with all the rules
+        if name is not None:
+            if not isinstance(name, str):
+                raise TypeError('breakpoint names must be strings')
+            if name in ('hook', 'all'):
+                raise ValueError('breakpoint names "all" and "hook" are reserved')
+            if not re.match(r'^[a-z0-9]([a-z0-9\-]*[a-z0-9])?$', name):
+                raise ValueError('breakpoint names must look like "foo" or "foo-bar"')
+
+        indicated_breakpoints = self._juju_debug_at
+        if 'all' in indicated_breakpoints or name in indicated_breakpoints:
+            self._show_debug_code_message()
+
+            # If we call set_trace() directly it will open the debugger *here*, so indicating
+            # it to use our caller's frame
+            code_frame = inspect.currentframe().f_back
+            pdb.Pdb().set_trace(code_frame)
+
+
+class StoredStateData(Object):
+
+    def __init__(self, parent, attr_name):
+        super().__init__(parent, attr_name)
+        self._cache = {}
+        self.dirty = False
+
+    def __getitem__(self, key):
+        return self._cache.get(key)
+
+    def __setitem__(self, key, value):
+        self._cache[key] = value
+        self.dirty = True
+
+    def __contains__(self, key):
+        return key in self._cache
+
+    def snapshot(self):
+        return self._cache
+
+    def restore(self, snapshot):
+        self._cache = snapshot
+        self.dirty = False
+
+    def on_commit(self, event):
+        if self.dirty:
+            self.framework.save_snapshot(self)
+            self.dirty = False
+
+
+class BoundStoredState:
+
+    def __init__(self, parent, attr_name):
+        parent.framework.register_type(StoredStateData, parent)
+
+        handle = Handle(parent, StoredStateData.handle_kind, attr_name)
+        try:
+            data = parent.framework.load_snapshot(handle)
+        except NoSnapshotError:
+            data = StoredStateData(parent, attr_name)
+
+        # __dict__ is used to avoid infinite recursion.
+        self.__dict__["_data"] = data
+        self.__dict__["_attr_name"] = attr_name
+
+        parent.framework.observe(parent.framework.on.commit, self._data.on_commit)
+
+    def __getattr__(self, key):
+        # "on" is the only reserved key that can't be used in the data map.
+        if key == "on":
+            return self._data.on
+        if key not in self._data:
+            raise AttributeError("attribute '{}' is not stored".format(key))
+        return _wrap_stored(self._data, self._data[key])
+
+    def __setattr__(self, key, value):
+        if key == "on":
+            raise AttributeError("attribute 'on' is reserved and cannot be set")
+
+        value = _unwrap_stored(self._data, value)
+
+        if not isinstance(value, (type(None), int, float, str, bytes, list, dict, set)):
+            raise AttributeError(
+                'attribute {!r} cannot be a {}: must be int/float/dict/list/etc'.format(
+                    key, type(value).__name__))
+
+        self._data[key] = _unwrap_stored(self._data, value)
+
+    def set_default(self, **kwargs):
+        """Set the value of any given key if it has not already been set"""
+        for k, v in kwargs.items():
+            if k not in self._data:
+                self._data[k] = v
+
+
+class StoredState:
+    """A class used to store data the charm needs persisted across invocations.
+
+    Example::
+
+        class MyClass(Object):
+            _stored = StoredState()
+
+    Instances of `MyClass` can transparently save state between invocations by
+    setting attributes on `_stored`.
Initial state should be set with + `set_default` on the bound object, that is:: + + class MyClass(Object): + _stored = StoredState() + + def __init__(self, parent, key): + super().__init__(parent, key) + self._stored.set_default(seen=set()) + self.framework.observe(self.on.seen, self._on_seen) + + def _on_seen(self, event): + self._stored.seen.add(event.uuid) + + """ + + def __init__(self): + self.parent_type = None + self.attr_name = None + + def __get__(self, parent, parent_type=None): + if self.parent_type is not None and self.parent_type not in parent_type.mro(): + # the StoredState instance is being shared between two unrelated classes + # -> unclear what is exepcted of us -> bail out + raise RuntimeError( + 'StoredState shared by {} and {}'.format( + self.parent_type.__name__, parent_type.__name__)) + + if parent is None: + # accessing via the class directly (e.g. MyClass.stored) + return self + + bound = None + if self.attr_name is not None: + bound = parent.__dict__.get(self.attr_name) + if bound is not None: + # we already have the thing from a previous pass, huzzah + return bound + + # need to find ourselves amongst the parent's bases + for cls in parent_type.mro(): + for attr_name, attr_value in cls.__dict__.items(): + if attr_value is not self: + continue + # we've found ourselves! is it the first time? + if bound is not None: + # the StoredState instance is being stored in two different + # attributes -> unclear what is expected of us -> bail out + raise RuntimeError("StoredState shared by {0}.{1} and {0}.{2}".format( + cls.__name__, self.attr_name, attr_name)) + # we've found ourselves for the first time; save where, and bind the object + self.attr_name = attr_name + self.parent_type = cls + bound = BoundStoredState(parent, attr_name) + + if bound is not None: + # cache the bound object to avoid the expensive lookup the next time + # (don't use setattr, to keep things symmetric with the fast-path lookup above) + parent.__dict__[self.attr_name] = bound + return bound + + raise AttributeError( + 'cannot find {} attribute in type {}'.format( + self.__class__.__name__, parent_type.__name__)) + + +def _wrap_stored(parent_data, value): + t = type(value) + if t is dict: + return StoredDict(parent_data, value) + if t is list: + return StoredList(parent_data, value) + if t is set: + return StoredSet(parent_data, value) + return value + + +def _unwrap_stored(parent_data, value): + t = type(value) + if t is StoredDict or t is StoredList or t is StoredSet: + return value._under + return value + + +class StoredDict(collections.abc.MutableMapping): + + def __init__(self, stored_data, under): + self._stored_data = stored_data + self._under = under + + def __getitem__(self, key): + return _wrap_stored(self._stored_data, self._under[key]) + + def __setitem__(self, key, value): + self._under[key] = _unwrap_stored(self._stored_data, value) + self._stored_data.dirty = True + + def __delitem__(self, key): + del self._under[key] + self._stored_data.dirty = True + + def __iter__(self): + return self._under.__iter__() + + def __len__(self): + return len(self._under) + + def __eq__(self, other): + if isinstance(other, StoredDict): + return self._under == other._under + elif isinstance(other, collections.abc.Mapping): + return self._under == other + else: + return NotImplemented + + +class StoredList(collections.abc.MutableSequence): + + def __init__(self, stored_data, under): + self._stored_data = stored_data + self._under = under + + def __getitem__(self, index): + return 
_wrap_stored(self._stored_data, self._under[index]) + + def __setitem__(self, index, value): + self._under[index] = _unwrap_stored(self._stored_data, value) + self._stored_data.dirty = True + + def __delitem__(self, index): + del self._under[index] + self._stored_data.dirty = True + + def __len__(self): + return len(self._under) + + def insert(self, index, value): + self._under.insert(index, value) + self._stored_data.dirty = True + + def append(self, value): + self._under.append(value) + self._stored_data.dirty = True + + def __eq__(self, other): + if isinstance(other, StoredList): + return self._under == other._under + elif isinstance(other, collections.abc.Sequence): + return self._under == other + else: + return NotImplemented + + def __lt__(self, other): + if isinstance(other, StoredList): + return self._under < other._under + elif isinstance(other, collections.abc.Sequence): + return self._under < other + else: + return NotImplemented + + def __le__(self, other): + if isinstance(other, StoredList): + return self._under <= other._under + elif isinstance(other, collections.abc.Sequence): + return self._under <= other + else: + return NotImplemented + + def __gt__(self, other): + if isinstance(other, StoredList): + return self._under > other._under + elif isinstance(other, collections.abc.Sequence): + return self._under > other + else: + return NotImplemented + + def __ge__(self, other): + if isinstance(other, StoredList): + return self._under >= other._under + elif isinstance(other, collections.abc.Sequence): + return self._under >= other + else: + return NotImplemented + + +class StoredSet(collections.abc.MutableSet): + + def __init__(self, stored_data, under): + self._stored_data = stored_data + self._under = under + + def add(self, key): + self._under.add(key) + self._stored_data.dirty = True + + def discard(self, key): + self._under.discard(key) + self._stored_data.dirty = True + + def __contains__(self, key): + return key in self._under + + def __iter__(self): + return self._under.__iter__() + + def __len__(self): + return len(self._under) + + @classmethod + def _from_iterable(cls, it): + """Construct an instance of the class from any iterable input. + + Per https://docs.python.org/3/library/collections.abc.html + if the Set mixin is being used in a class with a different constructor signature, + you will need to override _from_iterable() with a classmethod that can construct + new instances from an iterable argument. 
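+ + As an illustrative consequence of returning a plain ``set`` here, mixin-derived operations such as ``stored_set | {'extra'}`` yield ordinary sets rather than ``StoredSet`` instances (an observation based on the ``collections.abc.Set`` mixin contract, not a documented guarantee).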
+ """ + return set(it) + + def __le__(self, other): + if isinstance(other, StoredSet): + return self._under <= other._under + elif isinstance(other, collections.abc.Set): + return self._under <= other + else: + return NotImplemented + + def __ge__(self, other): + if isinstance(other, StoredSet): + return self._under >= other._under + elif isinstance(other, collections.abc.Set): + return self._under >= other + else: + return NotImplemented + + def __eq__(self, other): + if isinstance(other, StoredSet): + return self._under == other._under + elif isinstance(other, collections.abc.Set): + return self._under == other + else: + return NotImplemented diff --git a/SOL004/charm-packages/native_k8s_charm_vnf/charms/nginx-k8s/lib/ops/jujuversion.py b/SOL004/charm-packages/native_k8s_charm_vnf/charms/nginx-k8s/lib/ops/jujuversion.py new file mode 100755 index 0000000000000000000000000000000000000000..4517886218c143f8c0249ac7285dc594976f9b01 --- /dev/null +++ b/SOL004/charm-packages/native_k8s_charm_vnf/charms/nginx-k8s/lib/ops/jujuversion.py @@ -0,0 +1,85 @@ +# Copyright 2020 Canonical Ltd. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import re +from functools import total_ordering + + +@total_ordering +class JujuVersion: + + PATTERN = r'''^ + (?P\d{1,9})\.(?P\d{1,9}) # and numbers are always there + ((?:\.|-(?P[a-z]+))(?P\d{1,9}))? # sometimes with . or - + (\.(?P\d{1,9}))?$ # and sometimes with a number. 
+ ''' + + def __init__(self, version): + m = re.match(self.PATTERN, version, re.VERBOSE) + if not m: + raise RuntimeError('"{}" is not a valid Juju version string'.format(version)) + + d = m.groupdict() + self.major = int(m.group('major')) + self.minor = int(m.group('minor')) + self.tag = d['tag'] or '' + self.patch = int(d['patch'] or 0) + self.build = int(d['build'] or 0) + + def __repr__(self): + if self.tag: + s = '{}.{}-{}{}'.format(self.major, self.minor, self.tag, self.patch) + else: + s = '{}.{}.{}'.format(self.major, self.minor, self.patch) + if self.build > 0: + s += '.{}'.format(self.build) + return s + + def __eq__(self, other): + if self is other: + return True + if isinstance(other, str): + other = type(self)(other) + elif not isinstance(other, JujuVersion): + raise RuntimeError('cannot compare Juju version "{}" with "{}"'.format(self, other)) + return ( + self.major == other.major + and self.minor == other.minor + and self.tag == other.tag + and self.build == other.build + and self.patch == other.patch) + + def __lt__(self, other): + if self is other: + return False + if isinstance(other, str): + other = type(self)(other) + elif not isinstance(other, JujuVersion): + raise RuntimeError('cannot compare Juju version "{}" with "{}"'.format(self, other)) + + if self.major != other.major: + return self.major < other.major + elif self.minor != other.minor: + return self.minor < other.minor + elif self.tag != other.tag: + if not self.tag: + return False + elif not other.tag: + return True + return self.tag < other.tag + elif self.patch != other.patch: + return self.patch < other.patch + elif self.build != other.build: + return self.build < other.build + return False diff --git a/SOL004/charm-packages/native_k8s_charm_vnf/charms/nginx-k8s/lib/ops/lib/__init__.py b/SOL004/charm-packages/native_k8s_charm_vnf/charms/nginx-k8s/lib/ops/lib/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..716cf8c85e15a5b425bf68bfaae2810a11c8d2ed --- /dev/null +++ b/SOL004/charm-packages/native_k8s_charm_vnf/charms/nginx-k8s/lib/ops/lib/__init__.py @@ -0,0 +1,185 @@ +# Copyright 2020 Canonical Ltd. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import sys +import os +import re + +from ast import literal_eval +from importlib.util import module_from_spec +from importlib.machinery import ModuleSpec +from pkgutil import get_importer +from types import ModuleType +from typing import Tuple, Dict, List, Iterator, Optional + + +_libraries = {} # type: Dict[Tuple[str,str], List[_Lib]] + +_libline_re = re.compile(r'''^LIB([A-Z]+)\s+=\s+([0-9]+|['"][a-zA-Z0-9_.-@]+['"])\s*$''') +_libname_re = re.compile(r'''^[a-z][a-z0-9]+$''') + +# Not perfect, but should do for now. +_libauthor_re = re.compile(r'''^[A-Za-z0-9_+.-]+@[a-z0-9_-]+(?:\.[a-z0-9_-]+)*\.[a-z]{2,3}$''') + + +def use(name: str, api: int, author: str) -> ModuleType: + """Use a library from the ops libraries. + + Args: + name: the name of the library requested. + api: the API version of the library. 
+ author: the author of the library. If not given, requests the + one in the standard library. + Raises: + ImportError: if the library cannot be found. + TypeError: if the name, api, or author are the wrong type. + ValueError: if the name, api, or author are invalid. + """ + if not isinstance(name, str): + raise TypeError("invalid library name: {!r} (must be a str)".format(name)) + if not isinstance(author, str): + raise TypeError("invalid library author: {!r} (must be a str)".format(author)) + if not isinstance(api, int): + raise TypeError("invalid library API: {!r} (must be an int)".format(api)) + if api < 0: + raise ValueError('invalid library api: {} (must be ≥0)'.format(api)) + if not _libname_re.match(name): + raise ValueError("invalid library name: {!r} (chars and digits only)".format(name)) + if not _libauthor_re.match(author): + raise ValueError("invalid library author email: {!r}".format(author)) + + versions = _libraries.get((name, author), ()) + for lib in versions: + if lib.api == api: + return lib.import_module() + + others = ', '.join(str(lib.api) for lib in versions) + if others: + msg = 'cannot find "{}" from "{}" with API version {} (have {})'.format( + name, author, api, others) + else: + msg = 'cannot find library "{}" from "{}"'.format(name, author) + + raise ImportError(msg, name=name) + + +def autoimport(): + _libraries.clear() + for spec in _find_all_specs(sys.path): + lib = _parse_lib(spec) + if lib is None: + continue + + versions = _libraries.setdefault((lib.name, lib.author), []) + versions.append(lib) + versions.sort(reverse=True) + + +def _find_all_specs(path: List[str]) -> Iterator[ModuleSpec]: + for sys_dir in path: + if sys_dir == "": + sys_dir = "." + try: + top_dirs = os.listdir(sys_dir) + except OSError: + continue + for top_dir in top_dirs: + opslib = os.path.join(sys_dir, top_dir, 'opslib') + try: + lib_dirs = os.listdir(opslib) + except OSError: + continue + finder = get_importer(opslib) + if finder is None or not hasattr(finder, 'find_spec'): + continue + for lib_dir in lib_dirs: + spec = finder.find_spec(lib_dir) + if spec is None: + continue + if spec.loader is None: + # a namespace package; not supported + continue + yield spec + + +# only the first this many lines of a file are looked at for the LIB* constants +_MAX_LIB_LINES = 99 + + +def _parse_lib(spec: ModuleSpec) -> Optional['_Lib']: + if spec.origin is None: + return None + + _expected = {'NAME': str, 'AUTHOR': str, 'API': int, 'PATCH': int} + + try: + with open(spec.origin, 'rt', encoding='utf-8') as f: + libinfo = {} + for n, line in enumerate(f): + if len(libinfo) == len(_expected): + break + if n > _MAX_LIB_LINES: + return None + m = _libline_re.match(line) + if m is None: + continue + key, value = m.groups() + if key in _expected: + value = literal_eval(value) + if not isinstance(value, _expected[key]): + return None + libinfo[key] = value + else: + if len(libinfo) != len(_expected): + return None + except Exception: + return None + + return _Lib(spec, libinfo['NAME'], libinfo['AUTHOR'], libinfo['API'], libinfo['PATCH']) + + +class _Lib: + + def __init__(self, spec: ModuleSpec, name: str, author: str, api: int, patch: int): + self.spec = spec + self.name = name + self.author = author + self.api = api + self.patch = patch + + self._module = None # type: Optional[ModuleType] + + def __repr__(self): + return "<_Lib {0.name} by {0.author}, API {0.api}, patch {0.patch}>".format(self) + + def import_module(self) -> ModuleType: + if self._module is None: + module = module_from_spec(self.spec) 
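+ # Note: module_from_spec() only creates the module object from the spec; + # exec_module() below actually runs the module body. Caching the result + # in self._module means a given library version is executed at most once + # per process, however many times use() returns it.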
+ self.spec.loader.exec_module(module) + self._module = module + return self._module + + def __eq__(self, other): + if not isinstance(other, _Lib): + return NotImplemented + a = (self.name, self.author, self.api, self.patch) + b = (other.name, other.author, other.api, other.patch) + return a == b + + def __lt__(self, other): + if not isinstance(other, _Lib): + return NotImplemented + a = (self.name, self.author, self.api, self.patch) + b = (other.name, other.author, other.api, other.patch) + return a < b diff --git a/SOL004/charm-packages/native_k8s_charm_vnf/charms/nginx-k8s/lib/ops/log.py b/SOL004/charm-packages/native_k8s_charm_vnf/charms/nginx-k8s/lib/ops/log.py new file mode 100644 index 0000000000000000000000000000000000000000..a3f76a375a98e23c718e47bcde5c33b49f4031c7 --- /dev/null +++ b/SOL004/charm-packages/native_k8s_charm_vnf/charms/nginx-k8s/lib/ops/log.py @@ -0,0 +1,47 @@ +# Copyright 2020 Canonical Ltd. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import logging + + +class JujuLogHandler(logging.Handler): + """A handler for sending logs to Juju via juju-log.""" + + def __init__(self, model_backend, level=logging.DEBUG): + super().__init__(level) + self.model_backend = model_backend + + def emit(self, record): + self.model_backend.juju_log(record.levelname, self.format(record)) + + +def setup_root_logging(model_backend, debug=False): + """Set up Python logging to forward messages to juju-log. + + By default, logging is set to DEBUG level, and messages will be filtered by Juju. + Charmers can also set their own default log level with:: + + logging.getLogger().setLevel(logging.INFO) + + model_backend -- a ModelBackend to use for juju-log + debug -- if True, write logs to stderr as well as to juju-log. + """ + logger = logging.getLogger() + logger.setLevel(logging.DEBUG) + logger.addHandler(JujuLogHandler(model_backend)) + if debug: + handler = logging.StreamHandler() + formatter = logging.Formatter('%(asctime)s %(levelname)-8s %(message)s') + handler.setFormatter(formatter) + logger.addHandler(handler) diff --git a/SOL004/charm-packages/native_k8s_charm_vnf/charms/nginx-k8s/lib/ops/main.py b/SOL004/charm-packages/native_k8s_charm_vnf/charms/nginx-k8s/lib/ops/main.py new file mode 100755 index 0000000000000000000000000000000000000000..c38727168ebc223e1f0ed0a4c1d9b53f24118156 --- /dev/null +++ b/SOL004/charm-packages/native_k8s_charm_vnf/charms/nginx-k8s/lib/ops/main.py @@ -0,0 +1,317 @@ +#!/usr/bin/env python3 +# Copyright 2019 Canonical Ltd. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and +# limitations under the License. + +import os +import subprocess +import sys +from pathlib import Path + +import yaml + +import ops.charm +import ops.framework +import ops.model +import logging + +from ops.log import setup_root_logging + +CHARM_STATE_FILE = '.unit-state.db' + + +logger = logging.getLogger() + + +def _get_charm_dir(): + charm_dir = os.environ.get("JUJU_CHARM_DIR") + if charm_dir is None: + # Assume $JUJU_CHARM_DIR/lib/ops/main.py structure. + charm_dir = Path('{}/../../..'.format(__file__)).resolve() + else: + charm_dir = Path(charm_dir).resolve() + return charm_dir + + +def _load_metadata(charm_dir): + metadata = yaml.safe_load((charm_dir / 'metadata.yaml').read_text()) + + actions_meta = charm_dir / 'actions.yaml' + if actions_meta.exists(): + actions_metadata = yaml.safe_load(actions_meta.read_text()) + else: + actions_metadata = {} + return metadata, actions_metadata + + +def _create_event_link(charm, bound_event): + """Create a symlink for a particular event. + + charm -- A charm object. + bound_event -- An event for which to create a symlink. + """ + if issubclass(bound_event.event_type, ops.charm.HookEvent): + event_dir = charm.framework.charm_dir / 'hooks' + event_path = event_dir / bound_event.event_kind.replace('_', '-') + elif issubclass(bound_event.event_type, ops.charm.ActionEvent): + if not bound_event.event_kind.endswith("_action"): + raise RuntimeError( + 'action event name {} needs _action suffix'.format(bound_event.event_kind)) + event_dir = charm.framework.charm_dir / 'actions' + # The event_kind is suffixed with "_action" while the executable is not. + event_path = event_dir / bound_event.event_kind[:-len('_action')].replace('_', '-') + else: + raise RuntimeError( + 'cannot create a symlink: unsupported event type {}'.format(bound_event.event_type)) + + event_dir.mkdir(exist_ok=True) + if not event_path.exists(): + # CPython has different implementations for populating sys.argv[0] for Linux and Windows. + # For Windows it is always an absolute path (any symlinks are resolved) + # while for Linux it can be a relative path. + target_path = os.path.relpath(os.path.realpath(sys.argv[0]), str(event_dir)) + + # Ignore the non-symlink files or directories + # assuming the charm author knows what they are doing. + logger.debug( + 'Creating a new relative symlink at %s pointing to %s', + event_path, target_path) + event_path.symlink_to(target_path) + + +def _setup_event_links(charm_dir, charm): + """Set up links for supported events that originate from Juju. + + Whether a charm can handle an event or not can be determined by + introspecting which events are defined on it. + + Hooks or actions are created as symlinks to the charm code file + which is determined by inspecting symlinks provided by the charm + author at hooks/install or hooks/start. + + charm_dir -- A root directory of the charm. + charm -- An instance of the Charm class. + + """ + for bound_event in charm.on.events().values(): + # Only events that originate from Juju need symlinks. + if issubclass(bound_event.event_type, (ops.charm.HookEvent, ops.charm.ActionEvent)): + _create_event_link(charm, bound_event) + + +def _emit_charm_event(charm, event_name): + """Emit a charm event based on a Juju event name. + + charm -- A charm instance to emit an event from. + event_name -- A Juju event name to emit on a charm.
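+ + For example (an illustrative name), an event_name of 'config_changed' resolves to charm.on.config_changed and is emitted via its emit() method.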
+ """ + event_to_emit = None + try: + event_to_emit = getattr(charm.on, event_name) + except AttributeError: + logger.debug("Event %s not defined for %s.", event_name, charm) + + # If the event is not supported by the charm implementation, do + # not error out or try to emit it. This is to support rollbacks. + if event_to_emit is not None: + args, kwargs = _get_event_args(charm, event_to_emit) + logger.debug('Emitting Juju event %s.', event_name) + event_to_emit.emit(*args, **kwargs) + + +def _get_event_args(charm, bound_event): + event_type = bound_event.event_type + model = charm.framework.model + + if issubclass(event_type, ops.charm.RelationEvent): + relation_name = os.environ['JUJU_RELATION'] + relation_id = int(os.environ['JUJU_RELATION_ID'].split(':')[-1]) + relation = model.get_relation(relation_name, relation_id) + else: + relation = None + + remote_app_name = os.environ.get('JUJU_REMOTE_APP', '') + remote_unit_name = os.environ.get('JUJU_REMOTE_UNIT', '') + if remote_app_name or remote_unit_name: + if not remote_app_name: + if '/' not in remote_unit_name: + raise RuntimeError('invalid remote unit name: {}'.format(remote_unit_name)) + remote_app_name = remote_unit_name.split('/')[0] + args = [relation, model.get_app(remote_app_name)] + if remote_unit_name: + args.append(model.get_unit(remote_unit_name)) + return args, {} + elif relation: + return [relation], {} + return [], {} + + +class _Dispatcher: + """Encapsulate how to figure out what event Juju wants us to run. + + Also knows how to run “legacy” hooks when Juju called us via a top-level + ``dispatch`` binary. + + Args: + charm_dir: the toplevel directory of the charm + + Attributes: + event_name: the name of the event to run + is_dispatch_aware: are we running under a Juju that knows about the + dispatch binary? + + """ + + def __init__(self, charm_dir: Path): + self._charm_dir = charm_dir + self._exec_path = Path(sys.argv[0]) + + if 'JUJU_DISPATCH_PATH' in os.environ and (charm_dir / 'dispatch').exists(): + self._init_dispatch() + else: + self._init_legacy() + + def ensure_event_links(self, charm): + """Make sure necessary symlinks are present on disk""" + + if self.is_dispatch_aware: + # links aren't needed + return + + # When a charm is force-upgraded and a unit is in an error state Juju + # does not run upgrade-charm and instead runs the failed hook followed + # by config-changed. Given the nature of force-upgrading the hook setup + # code is not triggered on config-changed. + # + # 'start' event is included as Juju does not fire the install event for + # K8s charms (see LP: #1854635). + if (self.event_name in ('install', 'start', 'upgrade_charm') + or self.event_name.endswith('_storage_attached')): + _setup_event_links(self._charm_dir, charm) + + def run_any_legacy_hook(self): + """Run any extant legacy hook. + + If there is both a dispatch file and a legacy hook for the + current event, run the wanted legacy hook. 
+ """ + + if not self.is_dispatch_aware: + # we *are* the legacy hook + return + + dispatch_path = self._charm_dir / self._dispatch_path + if not dispatch_path.exists(): + logger.debug("Legacy %s does not exist.", self._dispatch_path) + return + + # super strange that there isn't an is_executable + if not os.access(str(dispatch_path), os.X_OK): + logger.warning("Legacy %s exists but is not executable.", self._dispatch_path) + return + + if dispatch_path.resolve() == self._exec_path.resolve(): + logger.debug("Legacy %s is just a link to ourselves.", self._dispatch_path) + return + + argv = sys.argv.copy() + argv[0] = str(dispatch_path) + logger.info("Running legacy %s.", self._dispatch_path) + try: + subprocess.run(argv, check=True) + except subprocess.CalledProcessError as e: + logger.warning( + "Legacy %s exited with status %d.", + self._dispatch_path, e.returncode) + sys.exit(e.returncode) + else: + logger.debug("Legacy %s exited with status 0.", self._dispatch_path) + + def _set_name_from_path(self, path: Path): + """Sets the name attribute to that which can be inferred from the given path.""" + name = path.name.replace('-', '_') + if path.parent.name == 'actions': + name = '{}_action'.format(name) + self.event_name = name + + def _init_legacy(self): + """Set up the 'legacy' dispatcher. + + The current Juju doesn't know about 'dispatch' and calls hooks + explicitly. + """ + self.is_dispatch_aware = False + self._set_name_from_path(self._exec_path) + + def _init_dispatch(self): + """Set up the new 'dispatch' dispatcher. + + The current Juju will run 'dispatch' if it exists, and otherwise fall + back to the old behaviour. + + JUJU_DISPATCH_PATH will be set to the wanted hook, e.g. hooks/install, + in both cases. + """ + self._dispatch_path = Path(os.environ['JUJU_DISPATCH_PATH']) + + if 'OPERATOR_DISPATCH' in os.environ: + logger.debug("Charm called itself via %s.", self._dispatch_path) + sys.exit(0) + os.environ['OPERATOR_DISPATCH'] = '1' + + self.is_dispatch_aware = True + self._set_name_from_path(self._dispatch_path) + + +def main(charm_class): + """Setup the charm and dispatch the observed event. + + The event name is based on the way this executable was called (argv[0]). + """ + charm_dir = _get_charm_dir() + + model_backend = ops.model._ModelBackend() + debug = ('JUJU_DEBUG' in os.environ) + setup_root_logging(model_backend, debug=debug) + + dispatcher = _Dispatcher(charm_dir) + dispatcher.run_any_legacy_hook() + + metadata, actions_metadata = _load_metadata(charm_dir) + meta = ops.charm.CharmMeta(metadata, actions_metadata) + unit_name = os.environ['JUJU_UNIT_NAME'] + model_name = os.environ.get('JUJU_MODEL_NAME') + model = ops.model.Model(unit_name, meta, model_backend, model_name=model_name) + + # TODO: If Juju unit agent crashes after exit(0) from the charm code + # the framework will commit the snapshot but Juju will not commit its + # operation. + charm_state_path = charm_dir / CHARM_STATE_FILE + framework = ops.framework.Framework(charm_state_path, charm_dir, meta, model) + try: + charm = charm_class(framework, None) + dispatcher.ensure_event_links(charm) + + # TODO: Remove the collect_metrics check below as soon as the relevant + # Juju changes are made. + # + # Skip reemission of deferred events for collect-metrics events because + # they do not have the full access to all hook tools. 
+ if dispatcher.event_name != 'collect_metrics': + framework.reemit() + + _emit_charm_event(charm, dispatcher.event_name) + + framework.commit() + finally: + framework.close() diff --git a/SOL004/charm-packages/native_k8s_charm_vnf/charms/nginx-k8s/lib/ops/model.py b/SOL004/charm-packages/native_k8s_charm_vnf/charms/nginx-k8s/lib/ops/model.py new file mode 100644 index 0000000000000000000000000000000000000000..10ae5343d05495e4558e37023507e95891cff6da --- /dev/null +++ b/SOL004/charm-packages/native_k8s_charm_vnf/charms/nginx-k8s/lib/ops/model.py @@ -0,0 +1,1213 @@ +# Copyright 2019 Canonical Ltd. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import datetime +import decimal +import ipaddress +import json +import os +import re +import shutil +import tempfile +import time +import typing +import weakref + +from abc import ABC, abstractmethod +from collections.abc import Mapping, MutableMapping +from pathlib import Path +from subprocess import run, PIPE, CalledProcessError + +import ops + + +class Model: + """Represents the Juju Model as seen from this unit. + + Attributes: + unit: A :class:`Unit` that represents the unit that is running this code (eg yourself) + app: A :class:`Application` that represents the application this unit is a part of. + relations: Mapping of endpoint to list of :class:`Relation` answering the question + "what am I currently related to". See also :meth:`.get_relation` + config: A dict of the config for the current application. + resources: Access to resources for this charm. Use ``model.resources.fetch(resource_name)`` + to get the path on disk where the resource can be found. + storages: Mapping of storage_name to :class:`Storage` for the storage points defined in + metadata.yaml + pod: Used to get access to ``model.pod.set_spec`` to set the container specification + for Kubernetes charms. + """ + + def __init__( + self, unit_name: str, meta: 'ops.charm.CharmMeta', backend: '_ModelBackend', *, + model_name: str = None): + self._cache = _ModelCache(backend) + self._backend = backend + self.unit = self.get_unit(unit_name) + self.app = self.unit.app + self.relations = RelationMapping(meta.relations, self.unit, self._backend, self._cache) + self.config = ConfigData(self._backend) + self.resources = Resources(list(meta.resources), self._backend) + self.pod = Pod(self._backend) + self.storages = StorageMapping(list(meta.storages), self._backend) + self._bindings = BindingMapping(self._backend) + self._model_name = model_name + + @property + def name(self) -> str: + """Return the name of the Model that this unit is running in. + + This is read from the environment variable ``JUJU_MODEL_NAME``. + """ + return self._model_name + + def get_unit(self, unit_name: str) -> 'Unit': + """Get an arbitrary unit by name. + + Internally this uses a cache, so asking for the same unit two times will + return the same object. + """ + return self._cache.get(Unit, unit_name) + + def get_app(self, app_name: str) -> 'Application': + """Get an application by name. 
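+ + For example (the application name is illustrative): ``db_app = self.model.get_app('mysql')``.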
+ + Internally this uses a cache, so asking for the same application two times will + return the same object. + """ + return self._cache.get(Application, app_name) + + def get_relation( + self, relation_name: str, + relation_id: typing.Optional[int] = None) -> 'Relation': + """Get a specific Relation instance. + + If relation_id is not given, this will return the Relation instance if the + relation is established only once or None if it is not established. If this + same relation is established multiple times the error TooManyRelatedAppsError is raised. + + Args: + relation_name: The name of the endpoint for this charm + relation_id: An identifier for a specific relation. Used to disambiguate when a + given application has more than one relation on a given endpoint. + Raises: + TooManyRelatedAppsError: is raised if there is more than one relation to the + supplied relation_name and no relation_id was supplied + """ + return self.relations._get_unique(relation_name, relation_id) + + def get_binding(self, binding_key: typing.Union[str, 'Relation']) -> 'Binding': + """Get a network space binding. + + Args: + binding_key: The relation name or instance to obtain bindings for. + Returns: + If ``binding_key`` is a relation name, the method returns the default binding + for that relation. If a relation instance is provided, the method first looks + up a more specific binding for that specific relation ID, and if none is found + falls back to the default binding for the relation name. + """ + return self._bindings.get(binding_key) + + +class _ModelCache: + + def __init__(self, backend): + self._backend = backend + self._weakrefs = weakref.WeakValueDictionary() + + def get(self, entity_type, *args): + key = (entity_type,) + args + entity = self._weakrefs.get(key) + if entity is None: + entity = entity_type(*args, backend=self._backend, cache=self) + self._weakrefs[key] = entity + return entity + + +class Application: + """Represents a named application in the model. + + This might be your application, or might be an application that you are related to. + Charmers should not instantiate Application objects directly, but should use + :meth:`Model.get_app` if they need a reference to a given application. + + Attributes: + name: The name of this application (eg, 'mysql'). This name may differ from the name of + the charm, if the user has deployed it to a different name. + """ + + def __init__(self, name, backend, cache): + self.name = name + self._backend = backend + self._cache = cache + self._is_our_app = self.name == self._backend.app_name + self._status = None + + def _invalidate(self): + self._status = None + + @property + def status(self) -> 'StatusBase': + """Used to report or read the status of the overall application. + + Can only be read and set by the lead unit of the application. + + The status of remote units is always Unknown. + + Raises: + RuntimeError: if you try to set the status of another application, or if you try to + set the status of this application as a unit that is not the leader. 
+ InvalidStatusError: if you try to set the status to something that is not a + :class:`StatusBase` + + Example:: + + self.model.app.status = BlockedStatus('I need a human to come help me') + """ + if not self._is_our_app: + return UnknownStatus() + + if not self._backend.is_leader(): + raise RuntimeError('cannot get application status as a non-leader unit') + + if self._status: + return self._status + + s = self._backend.status_get(is_app=True) + self._status = StatusBase.from_name(s['status'], s['message']) + return self._status + + @status.setter + def status(self, value: 'StatusBase'): + if not isinstance(value, StatusBase): + raise InvalidStatusError( + 'invalid value provided for application {} status: {}'.format(self, value) + ) + + if not self._is_our_app: + raise RuntimeError('cannot set status for a remote application {}'.format(self)) + + if not self._backend.is_leader(): + raise RuntimeError('cannot set application status as a non-leader unit') + + self._backend.status_set(value.name, value.message, is_app=True) + self._status = value + + def __repr__(self): + return '<{}.{} {}>'.format(type(self).__module__, type(self).__name__, self.name) + + +class Unit: + """Represents a named unit in the model. + + This might be your unit, another unit of your application, or a unit of another application + that you are related to. + + Attributes: + name: The name of the unit (eg, 'mysql/0') + app: The Application the unit is a part of. + """ + + def __init__(self, name, backend, cache): + self.name = name + + app_name = name.split('/')[0] + self.app = cache.get(Application, app_name) + + self._backend = backend + self._cache = cache + self._is_our_unit = self.name == self._backend.unit_name + self._status = None + + def _invalidate(self): + self._status = None + + @property + def status(self) -> 'StatusBase': + """Used to report or read the status of a specific unit. + + The status of any unit other than yourself is always Unknown. + + Raises: + RuntimeError: if you try to set the status of a unit other than yourself. + InvalidStatusError: if you try to set the status to something other than + a :class:`StatusBase` + Example:: + + self.model.unit.status = MaintenanceStatus('reconfiguring the frobnicators') + """ + if not self._is_our_unit: + return UnknownStatus() + + if self._status: + return self._status + + s = self._backend.status_get(is_app=False) + self._status = StatusBase.from_name(s['status'], s['message']) + return self._status + + @status.setter + def status(self, value: 'StatusBase'): + if not isinstance(value, StatusBase): + raise InvalidStatusError( + 'invalid value provided for unit {} status: {}'.format(self, value) + ) + + if not self._is_our_unit: + raise RuntimeError('cannot set status for a remote unit {}'.format(self)) + + self._backend.status_set(value.name, value.message, is_app=False) + self._status = value + + def __repr__(self): + return '<{}.{} {}>'.format(type(self).__module__, type(self).__name__, self.name) + + def is_leader(self) -> bool: + """Return whether this unit is the leader of its application. + + This can only be called for your own unit. + Returns: + True if you are the leader, False otherwise + Raises: + RuntimeError: if called for a unit that is not yourself + """ + if self._is_our_unit: + # This value is not cached as it is not guaranteed to persist for the whole duration + # of a hook execution.
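+ # (Juju only asserts leadership for a lease window, 30 seconds in the + # backend below, so callers should re-check rather than hold on to an + # earlier result.)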
+ return self._backend.is_leader() + else: + raise RuntimeError( + 'leadership status of remote units ({}) is not visible to other' + ' applications'.format(self) + ) + + def set_workload_version(self, version: str) -> None: + """Record the version of the software running as the workload. + + This shouldn't be confused with the revision of the charm. This is informative only; + shown in the output of 'juju status'. + """ + if not isinstance(version, str): + raise TypeError("workload version must be a str, not {}: {!r}".format( + type(version).__name__, version)) + self._backend.application_version_set(version) + + +class LazyMapping(Mapping, ABC): + """Represents a dict that isn't populated until it is accessed. + + Charm authors should generally never need to use this directly, but it forms + the basis for many of the dicts that the framework tracks. + """ + + _lazy_data = None + + @abstractmethod + def _load(self): + raise NotImplementedError() + + @property + def _data(self): + data = self._lazy_data + if data is None: + data = self._lazy_data = self._load() + return data + + def _invalidate(self): + self._lazy_data = None + + def __contains__(self, key): + return key in self._data + + def __len__(self): + return len(self._data) + + def __iter__(self): + return iter(self._data) + + def __getitem__(self, key): + return self._data[key] + + +class RelationMapping(Mapping): + """Map of relation names to lists of :class:`Relation` instances.""" + + def __init__(self, relations_meta, our_unit, backend, cache): + self._peers = set() + for name, relation_meta in relations_meta.items(): + if relation_meta.role.is_peer(): + self._peers.add(name) + self._our_unit = our_unit + self._backend = backend + self._cache = cache + self._data = {relation_name: None for relation_name in relations_meta} + + def __contains__(self, key): + return key in self._data + + def __len__(self): + return len(self._data) + + def __iter__(self): + return iter(self._data) + + def __getitem__(self, relation_name): + is_peer = relation_name in self._peers + relation_list = self._data[relation_name] + if relation_list is None: + relation_list = self._data[relation_name] = [] + for rid in self._backend.relation_ids(relation_name): + relation = Relation(relation_name, rid, is_peer, + self._our_unit, self._backend, self._cache) + relation_list.append(relation) + return relation_list + + def _invalidate(self, relation_name): + """Used to wipe the cache of a given relation_name. + + Not meant to be used by Charm authors. The content of relation data is + static for the lifetime of a hook, so it is safe to cache in memory once + accessed. + """ + self._data[relation_name] = None + + def _get_unique(self, relation_name, relation_id=None): + if relation_id is not None: + if not isinstance(relation_id, int): + raise ModelError('relation id {} must be int or None not {}'.format( + relation_id, + type(relation_id).__name__)) + for relation in self[relation_name]: + if relation.id == relation_id: + return relation + else: + # The relation may be dead, but it is not forgotten. + is_peer = relation_name in self._peers + return Relation(relation_name, relation_id, is_peer, + self._our_unit, self._backend, self._cache) + num_related = len(self[relation_name]) + if num_related == 0: + return None + elif num_related == 1: + return self[relation_name][0] + else: + # TODO: We need something in the framework to catch and gracefully handle + # errors, ideally integrating the error catching with Juju's mechanisms. 
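+ # (Illustrative) charm code can avoid this error by passing an explicit + # relation id when an endpoint may have several relations, e.g. + # self.model.get_relation('db', event.relation.id), where 'db' is a + # hypothetical endpoint name.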
+ raise TooManyRelatedAppsError(relation_name, num_related, 1) + + +class BindingMapping: + """Mapping of endpoints to network bindings. + + Charm authors should not instantiate this directly, but access it via + :meth:`Model.get_binding` + """ + + def __init__(self, backend): + self._backend = backend + self._data = {} + + def get(self, binding_key: typing.Union[str, 'Relation']) -> 'Binding': + """Get a specific Binding for an endpoint/relation. + + Not used directly by Charm authors. See :meth:`Model.get_binding` + """ + if isinstance(binding_key, Relation): + binding_name = binding_key.name + relation_id = binding_key.id + elif isinstance(binding_key, str): + binding_name = binding_key + relation_id = None + else: + raise ModelError('binding key must be str or relation instance, not {}' + ''.format(type(binding_key).__name__)) + binding = self._data.get(binding_key) + if binding is None: + binding = Binding(binding_name, relation_id, self._backend) + self._data[binding_key] = binding + return binding + + +class Binding: + """Binding to a network space. + + Attributes: + name: The name of the endpoint this binding represents (eg, 'db') + """ + + def __init__(self, name, relation_id, backend): + self.name = name + self._relation_id = relation_id + self._backend = backend + self._network = None + + @property + def network(self) -> 'Network': + """The network information for this binding.""" + if self._network is None: + try: + self._network = Network(self._backend.network_get(self.name, self._relation_id)) + except RelationNotFoundError: + if self._relation_id is None: + raise + # If a relation is dead, we can still get network info associated with an + # endpoint itself + self._network = Network(self._backend.network_get(self.name)) + return self._network + + +class Network: + """Network space details. + + Charm authors should not instantiate this directly, but should get access to the Network + definition from :meth:`Model.get_binding` and its ``network`` attribute. + + Attributes: + interfaces: A list of :class:`NetworkInterface` details. This includes the + information about how your application should be configured (eg, what + IP addresses should you bind to.) + Note that multiple addresses for a single interface are represented as multiple + interfaces. (eg, ``[NetworkInterface('ens1', '10.1.1.1/32'), + NetworkInterface('ens1', '10.1.2.1/32')]``) + ingress_addresses: A list of :class:`ipaddress.ip_address` objects representing the IP + addresses that other units should use to get in touch with you. + egress_subnets: A list of :class:`ipaddress.ip_network` representing the subnets that + other units will see you connecting from. Due to things like NAT it isn't always + possible to narrow it down to a single address, but when it is clear, the CIDRs + will be constrained to a single address. (eg, 10.0.0.1/32) + Args: + network_info: A dict of network information as returned by ``network-get``. + """ + + def __init__(self, network_info: dict): + self.interfaces = [] + # Treat multiple addresses on an interface as multiple logical + # interfaces with the same name.
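+ # The network-get payload parsed below looks roughly like this (an + # abridged sketch; the values are illustrative): + # {'bind-addresses': [{'interface-name': 'ens1', + # 'addresses': [{'value': '10.1.1.1', 'cidr': '10.1.1.0/24'}]}], + # 'ingress-addresses': ['10.1.1.1'], + # 'egress-subnets': ['10.1.1.0/24']}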
+ for interface_info in network_info['bind-addresses']: + interface_name = interface_info['interface-name'] + for address_info in interface_info['addresses']: + self.interfaces.append(NetworkInterface(interface_name, address_info)) + self.ingress_addresses = [] + for address in network_info['ingress-addresses']: + self.ingress_addresses.append(ipaddress.ip_address(address)) + self.egress_subnets = [] + for subnet in network_info['egress-subnets']: + self.egress_subnets.append(ipaddress.ip_network(subnet)) + + @property + def bind_address(self): + """A single address that your application should bind() to. + + For the common case where there is a single answer. This represents a single + address from :attr:`.interfaces` that can be used to configure where your + application should bind() and listen(). + """ + return self.interfaces[0].address + + @property + def ingress_address(self): + """The address other applications should use to connect to your unit. + + Due to things like public/private addresses, NAT and tunneling, the address you bind() + to is not always the address other people can use to connect() to you. + This is just the first address from :attr:`.ingress_addresses`. + """ + return self.ingress_addresses[0] + + +class NetworkInterface: + """Represents a single network interface that the charm needs to know about. + + Charmers should not instantiate this type directly. Instead use :meth:`Model.get_binding` + to get the network information for a given endpoint. + + Attributes: + name: The name of the interface (eg. 'eth0', or 'ens1') + subnet: An :class:`ipaddress.ip_network` representation of the IP for the network + interface. This may be a single address (eg '10.0.1.2/32') + """ + + def __init__(self, name: str, address_info: dict): + self.name = name + # TODO: expose a hardware address here, see LP: #1864070. + self.address = ipaddress.ip_address(address_info['value']) + cidr = address_info['cidr'] + if not cidr: + # The cidr field may be empty, see LP: #1864102. + # In this case, make it a /32 or /128 IP network. + self.subnet = ipaddress.ip_network(address_info['value']) + else: + self.subnet = ipaddress.ip_network(cidr) + # TODO: expose a hostname/canonical name for the address here, see LP: #1864086. + + +class Relation: + """Represents an established relation between this application and another application. + + This class should not be instantiated directly, instead use :meth:`Model.get_relation` + or :attr:`RelationEvent.relation`. + + Attributes: + name: The name of the local endpoint of the relation (eg 'db') + id: The identifier for a particular relation (integer) + app: An :class:`Application` representing the remote application of this relation. + For peer relations this will be the local application. + units: A set of :class:`Unit` for units that have started and joined this relation. + data: A :class:`RelationData` holding the data buckets for each entity + of a relation. Accessed via eg Relation.data[unit]['foo'] + """ + + def __init__( + self, relation_name: str, relation_id: int, is_peer: bool, our_unit: Unit, + backend: '_ModelBackend', cache: '_ModelCache'): + self.name = relation_name + self.id = relation_id + self.app = None + self.units = set() + + # For peer relations, both the remote and the local app are the same. 
+ if is_peer: + self.app = our_unit.app + try: + for unit_name in backend.relation_list(self.id): + unit = cache.get(Unit, unit_name) + self.units.add(unit) + if self.app is None: + self.app = unit.app + except RelationNotFoundError: + # If the relation is dead, just treat it as if it has no remote units. + pass + self.data = RelationData(self, our_unit, backend) + + def __repr__(self): + return '<{}.{} {}:{}>'.format(type(self).__module__, + type(self).__name__, + self.name, + self.id) + + +class RelationData(Mapping): + """Represents the various data buckets of a given relation. + + Each unit and application involved in a relation has their own data bucket. + Eg: ``{entity: RelationDataContent}`` + where entity can be either a :class:`Unit` or a :class:`Application`. + + Units can read and write their own data, and if they are the leader, + they can read and write their application data. They are allowed to read + remote unit and application data. + + This class should not be created directly. It should be accessed via + :attr:`Relation.data` + """ + + def __init__(self, relation: Relation, our_unit: Unit, backend: '_ModelBackend'): + self.relation = weakref.proxy(relation) + self._data = { + our_unit: RelationDataContent(self.relation, our_unit, backend), + our_unit.app: RelationDataContent(self.relation, our_unit.app, backend), + } + self._data.update({ + unit: RelationDataContent(self.relation, unit, backend) + for unit in self.relation.units}) + # The relation might be dead so avoid a None key here. + if self.relation.app is not None: + self._data.update({ + self.relation.app: RelationDataContent(self.relation, self.relation.app, backend), + }) + + def __contains__(self, key): + return key in self._data + + def __len__(self): + return len(self._data) + + def __iter__(self): + return iter(self._data) + + def __getitem__(self, key): + return self._data[key] + + +# We mix in MutableMapping here to get some convenience implementations, but whether it's actually +# mutable or not is controlled by the flag. +class RelationDataContent(LazyMapping, MutableMapping): + + def __init__(self, relation, entity, backend): + self.relation = relation + self._entity = entity + self._backend = backend + self._is_app = isinstance(entity, Application) + + def _load(self): + try: + return self._backend.relation_get(self.relation.id, self._entity.name, self._is_app) + except RelationNotFoundError: + # Dead relations tell no tales (and have no data). + return {} + + def _is_mutable(self): + if self._is_app: + is_our_app = self._backend.app_name == self._entity.name + if not is_our_app: + return False + # Whether the application data bag is mutable or not depends on + # whether this unit is a leader or not, but this is not guaranteed + # to be always true during the same hook execution. + return self._backend.is_leader() + else: + is_our_unit = self._backend.unit_name == self._entity.name + if is_our_unit: + return True + return False + + def __setitem__(self, key, value): + if not self._is_mutable(): + raise RelationDataError('cannot set relation data for {}'.format(self._entity.name)) + if not isinstance(value, str): + raise RelationDataError('relation data values must be strings') + + self._backend.relation_set(self.relation.id, key, value, self._is_app) + + # Don't load data unnecessarily if we're only updating. + if self._lazy_data is not None: + if value == '': + # Match the behavior of Juju, which is that setting the value to an + # empty string will remove the key entirely from the relation data. 
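+ # (e.g. relation.data[self.unit]['hostname'] = '' behaves like a delete; + # 'hostname' is an illustrative key.)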
+ del self._data[key] + else: + self._data[key] = value + + def __delitem__(self, key): + # Match the behavior of Juju, which is that setting the value to an empty + # string will remove the key entirely from the relation data. + self.__setitem__(key, '') + + +class ConfigData(LazyMapping): + + def __init__(self, backend): + self._backend = backend + + def _load(self): + return self._backend.config_get() + + +class StatusBase: + """Status values specific to applications and units. + + To access a status by name, see :meth:`StatusBase.from_name`, most use cases will just + directly use the child class to indicate their status. + """ + + _statuses = {} + name = None + + def __init__(self, message: str): + self.message = message + + def __new__(cls, *args, **kwargs): + if cls is StatusBase: + raise TypeError("cannot instantiate a base class") + return super().__new__(cls) + + def __eq__(self, other): + if not isinstance(self, type(other)): + return False + return self.message == other.message + + def __repr__(self): + return "{.__class__.__name__}({!r})".format(self, self.message) + + @classmethod + def from_name(cls, name: str, message: str): + if name == 'unknown': + # unknown is special + return UnknownStatus() + else: + return cls._statuses[name](message) + + @classmethod + def register(cls, child): + if child.name is None: + raise AttributeError('cannot register a Status which has no name') + cls._statuses[child.name] = child + return child + + +@StatusBase.register +class UnknownStatus(StatusBase): + """The unit status is unknown. + + A unit-agent has finished calling install, config-changed and start, but the + charm has not called status-set yet. + + """ + name = 'unknown' + + def __init__(self): + # Unknown status cannot be set and does not have a message associated with it. + super().__init__('') + + def __repr__(self): + return "UnknownStatus()" + + +@StatusBase.register +class ActiveStatus(StatusBase): + """The unit is ready. + + The unit believes it is correctly offering all the services it has been asked to offer. + """ + name = 'active' + + def __init__(self, message: str = ''): + super().__init__(message) + + +@StatusBase.register +class BlockedStatus(StatusBase): + """The unit requires manual intervention. + + An operator has to manually intervene to unblock the unit and let it proceed. + """ + name = 'blocked' + + +@StatusBase.register +class MaintenanceStatus(StatusBase): + """The unit is performing maintenance tasks. + + The unit is not yet providing services, but is actively doing work in preparation + for providing those services. This is a "spinning" state, not an error state. It + reflects activity on the unit itself, not on peers or related units. + + """ + name = 'maintenance' + + +@StatusBase.register +class WaitingStatus(StatusBase): + """A unit is unable to progress. + + The unit is unable to progress to an active state because an application to which + it is related is not running. + + """ + name = 'waiting' + + +class Resources: + """Object representing resources for the charm. + """ + + def __init__(self, names: typing.Iterable[str], backend: '_ModelBackend'): + self._backend = backend + self._paths = {name: None for name in names} + + def fetch(self, name: str) -> Path: + """Fetch the resource from the controller or store. + + If successfully fetched, this returns a Path object to where the resource is stored + on disk, otherwise it raises a ModelError. 
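+ + For example (the resource name is illustrative): ``path = self.model.resources.fetch('workload-image')``.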
+ """ + if name not in self._paths: + raise RuntimeError('invalid resource name: {}'.format(name)) + if self._paths[name] is None: + self._paths[name] = Path(self._backend.resource_get(name)) + return self._paths[name] + + +class Pod: + """Represents the definition of a pod spec in Kubernetes models. + + Currently only supports simple access to setting the Juju pod spec via :attr:`.set_spec`. + """ + + def __init__(self, backend: '_ModelBackend'): + self._backend = backend + + def set_spec(self, spec: typing.Mapping, k8s_resources: typing.Mapping = None): + """Set the specification for pods that Juju should start in kubernetes. + + See `juju help-tool pod-spec-set` for details of what should be passed. + Args: + spec: The mapping defining the pod specification + k8s_resources: Additional kubernetes specific specification. + + Returns: + """ + if not self._backend.is_leader(): + raise ModelError('cannot set a pod spec as this unit is not a leader') + self._backend.pod_spec_set(spec, k8s_resources) + + +class StorageMapping(Mapping): + """Map of storage names to lists of Storage instances.""" + + def __init__(self, storage_names: typing.Iterable[str], backend: '_ModelBackend'): + self._backend = backend + self._storage_map = {storage_name: None for storage_name in storage_names} + + def __contains__(self, key: str): + return key in self._storage_map + + def __len__(self): + return len(self._storage_map) + + def __iter__(self): + return iter(self._storage_map) + + def __getitem__(self, storage_name: str) -> typing.List['Storage']: + storage_list = self._storage_map[storage_name] + if storage_list is None: + storage_list = self._storage_map[storage_name] = [] + for storage_id in self._backend.storage_list(storage_name): + storage_list.append(Storage(storage_name, storage_id, self._backend)) + return storage_list + + def request(self, storage_name: str, count: int = 1): + """Requests new storage instances of a given name. + + Uses storage-add tool to request additional storage. Juju will notify the unit + via -storage-attached events when it becomes available. + """ + if storage_name not in self._storage_map: + raise ModelError(('cannot add storage {!r}:' + ' it is not present in the charm metadata').format(storage_name)) + self._backend.storage_add(storage_name, count) + + +class Storage: + """"Represents a storage as defined in metadata.yaml + + Attributes: + name: Simple string name of the storage + id: The provider id for storage + """ + + def __init__(self, storage_name, storage_id, backend): + self.name = storage_name + self.id = storage_id + self._backend = backend + self._location = None + + @property + def location(self): + if self._location is None: + raw = self._backend.storage_get('{}/{}'.format(self.name, self.id), "location") + self._location = Path(raw) + return self._location + + +class ModelError(Exception): + """Base class for exceptions raised when interacting with the Model.""" + pass + + +class TooManyRelatedAppsError(ModelError): + """Raised by :meth:`Model.get_relation` if there is more than one related application.""" + + def __init__(self, relation_name, num_related, max_supported): + super().__init__('Too many remote applications on {} ({} > {})'.format( + relation_name, num_related, max_supported)) + self.relation_name = relation_name + self.num_related = num_related + self.max_supported = max_supported + + +class RelationDataError(ModelError): + """Raised by ``Relation.data[entity][key] = 'foo'`` if the data is invalid. 
+ + This is raised if you're either trying to set a value to something that isn't a string, + or if you are trying to set a value in a bucket that you don't have access to. (eg, + another application/unit or setting your application data but you aren't the leader.) + """ + + +class RelationNotFoundError(ModelError): + """Backend error when querying juju for a given relation and that relation doesn't exist.""" + + +class InvalidStatusError(ModelError): + """Raised if trying to set an Application or Unit status to something invalid.""" + + +class _ModelBackend: + """Represents the connection between the Model representation and talking to Juju. + + Charm authors should not directly interact with the ModelBackend, it is a private + implementation of Model. + """ + + LEASE_RENEWAL_PERIOD = datetime.timedelta(seconds=30) + + def __init__(self): + self.unit_name = os.environ['JUJU_UNIT_NAME'] + self.app_name = self.unit_name.split('/')[0] + + self._is_leader = None + self._leader_check_time = None + + def _run(self, *args, return_output=False, use_json=False): + kwargs = dict(stdout=PIPE, stderr=PIPE) + if use_json: + args += ('--format=json',) + try: + result = run(args, check=True, **kwargs) + except CalledProcessError as e: + raise ModelError(e.stderr) + if return_output: + if result.stdout is None: + return '' + else: + text = result.stdout.decode('utf8') + if use_json: + return json.loads(text) + else: + return text + + def relation_ids(self, relation_name): + relation_ids = self._run('relation-ids', relation_name, return_output=True, use_json=True) + return [int(relation_id.split(':')[-1]) for relation_id in relation_ids] + + def relation_list(self, relation_id): + try: + return self._run('relation-list', '-r', str(relation_id), + return_output=True, use_json=True) + except ModelError as e: + if 'relation not found' in str(e): + raise RelationNotFoundError() from e + raise + + def relation_get(self, relation_id, member_name, is_app): + if not isinstance(is_app, bool): + raise TypeError('is_app parameter to relation_get must be a boolean') + + try: + return self._run('relation-get', '-r', str(relation_id), + '-', member_name, '--app={}'.format(is_app), + return_output=True, use_json=True) + except ModelError as e: + if 'relation not found' in str(e): + raise RelationNotFoundError() from e + raise + + def relation_set(self, relation_id, key, value, is_app): + if not isinstance(is_app, bool): + raise TypeError('is_app parameter to relation_set must be a boolean') + + try: + return self._run('relation-set', '-r', str(relation_id), + '{}={}'.format(key, value), '--app={}'.format(is_app)) + except ModelError as e: + if 'relation not found' in str(e): + raise RelationNotFoundError() from e + raise + + def config_get(self): + return self._run('config-get', return_output=True, use_json=True) + + def is_leader(self): + """Obtain the current leadership status for the unit the charm code is executing on. + + The value is cached for the duration of a lease which is 30s in Juju. + """ + now = time.monotonic() + if self._leader_check_time is None: + check = True + else: + time_since_check = datetime.timedelta(seconds=now - self._leader_check_time) + check = (time_since_check > self.LEASE_RENEWAL_PERIOD or self._is_leader is None) + if check: + # Current time MUST be saved before running is-leader to ensure the cache + # is only used inside the window that is-leader itself asserts. 
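+ # (Recording the timestamp after the call instead would stretch the cached + # window by however long is-leader takes to run, letting a stale result + # outlive the lease.)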
+            self._leader_check_time = now
+            self._is_leader = self._run('is-leader', return_output=True, use_json=True)
+
+        return self._is_leader
+
+    def resource_get(self, resource_name):
+        return self._run('resource-get', resource_name, return_output=True).strip()
+
+    def pod_spec_set(self, spec, k8s_resources):
+        tmpdir = Path(tempfile.mkdtemp('-pod-spec-set'))
+        try:
+            spec_path = tmpdir / 'spec.json'
+            spec_path.write_text(json.dumps(spec))
+            args = ['--file', str(spec_path)]
+            if k8s_resources:
+                k8s_res_path = tmpdir / 'k8s-resources.json'
+                k8s_res_path.write_text(json.dumps(k8s_resources))
+                args.extend(['--k8s-resources', str(k8s_res_path)])
+            self._run('pod-spec-set', *args)
+        finally:
+            shutil.rmtree(str(tmpdir))
+
+    def status_get(self, *, is_app=False):
+        """Get the status of a unit or an application.
+
+        Args:
+            is_app: A boolean indicating whether the status should be retrieved for a unit
+                or an application.
+        """
+        content = self._run(
+            'status-get', '--include-data', '--application={}'.format(is_app),
+            use_json=True,
+            return_output=True)
+        # Unit status looks like (in YAML):
+        # message: 'load: 0.28 0.26 0.26'
+        # status: active
+        # status-data: {}
+        # Application status looks like (in YAML):
+        # application-status:
+        #   message: 'load: 0.28 0.26 0.26'
+        #   status: active
+        #   status-data: {}
+        #   units:
+        #     uo/0:
+        #       message: 'load: 0.28 0.26 0.26'
+        #       status: active
+        #       status-data: {}
+
+        if is_app:
+            return {'status': content['application-status']['status'],
+                    'message': content['application-status']['message']}
+        else:
+            return content
+
+    def status_set(self, status, message='', *, is_app=False):
+        """Set the status of a unit or an application.
+
+        Args:
+            is_app: A boolean indicating whether the status should be set for a unit or an
+                application.
+        """
+        if not isinstance(is_app, bool):
+            raise TypeError('is_app parameter must be boolean')
+        return self._run('status-set', '--application={}'.format(is_app), status, message)
+
+    def storage_list(self, name):
+        return [int(s.split('/')[1]) for s in self._run('storage-list', name,
+                                                        return_output=True, use_json=True)]
+
+    def storage_get(self, storage_name_id, attribute):
+        return self._run('storage-get', '-s', storage_name_id, attribute,
+                         return_output=True, use_json=True)
+
+    def storage_add(self, name, count=1):
+        if not isinstance(count, int) or isinstance(count, bool):
+            raise TypeError('storage count must be integer, got: {} ({})'.format(count,
+                                                                                 type(count)))
+        self._run('storage-add', '{}={}'.format(name, count))
+
+    def action_get(self):
+        return self._run('action-get', return_output=True, use_json=True)
+
+    def action_set(self, results):
+        self._run('action-set', *["{}={}".format(k, v) for k, v in results.items()])
+
+    def action_log(self, message):
+        self._run('action-log', message)
+
+    def action_fail(self, message=''):
+        self._run('action-fail', message)
+
+    def application_version_set(self, version):
+        self._run('application-version-set', '--', version)
+
+    def juju_log(self, level, message):
+        self._run('juju-log', '--log-level', level, message)
+
+    def network_get(self, binding_name, relation_id=None):
+        """Return network info provided by network-get for a given binding.
+
+        Args:
+            binding_name: A name of a binding (relation name or extra-binding name).
+            relation_id: An optional relation id to get network info for.
+ """ + cmd = ['network-get', binding_name] + if relation_id is not None: + cmd.extend(['-r', str(relation_id)]) + try: + return self._run(*cmd, return_output=True, use_json=True) + except ModelError as e: + if 'relation not found' in str(e): + raise RelationNotFoundError() from e + raise + + def add_metrics(self, metrics, labels=None): + cmd = ['add-metric'] + + if labels: + label_args = [] + for k, v in labels.items(): + _ModelBackendValidator.validate_metric_label(k) + _ModelBackendValidator.validate_label_value(k, v) + label_args.append('{}={}'.format(k, v)) + cmd.extend(['--labels', ','.join(label_args)]) + + metric_args = [] + for k, v in metrics.items(): + _ModelBackendValidator.validate_metric_key(k) + metric_value = _ModelBackendValidator.format_metric_value(v) + metric_args.append('{}={}'.format(k, metric_value)) + cmd.extend(metric_args) + self._run(*cmd) + + +class _ModelBackendValidator: + """Provides facilities for validating inputs and formatting them for model backends.""" + + METRIC_KEY_REGEX = re.compile(r'^[a-zA-Z](?:[a-zA-Z0-9-_]*[a-zA-Z0-9])?$') + + @classmethod + def validate_metric_key(cls, key): + if cls.METRIC_KEY_REGEX.match(key) is None: + raise ModelError( + 'invalid metric key {!r}: must match {}'.format( + key, cls.METRIC_KEY_REGEX.pattern)) + + @classmethod + def validate_metric_label(cls, label_name): + if cls.METRIC_KEY_REGEX.match(label_name) is None: + raise ModelError( + 'invalid metric label name {!r}: must match {}'.format( + label_name, cls.METRIC_KEY_REGEX.pattern)) + + @classmethod + def format_metric_value(cls, value): + try: + decimal_value = decimal.Decimal.from_float(value) + except TypeError as e: + e2 = ModelError('invalid metric value {!r} provided:' + ' must be a positive finite float'.format(value)) + raise e2 from e + if decimal_value.is_nan() or decimal_value.is_infinite() or decimal_value < 0: + raise ModelError('invalid metric value {!r} provided:' + ' must be a positive finite float'.format(value)) + return str(decimal_value) + + @classmethod + def validate_label_value(cls, label, value): + # Label values cannot be empty, contain commas or equal signs as those are + # used by add-metric as separators. + if not value: + raise ModelError( + 'metric label {} has an empty value, which is not allowed'.format(label)) + v = str(value) + if re.search('[,=]', v) is not None: + raise ModelError( + 'metric label values must not contain "," or "=": {}={!r}'.format(label, value)) diff --git a/SOL004/charm-packages/native_k8s_charm_vnf/charms/nginx-k8s/lib/ops/testing.py b/SOL004/charm-packages/native_k8s_charm_vnf/charms/nginx-k8s/lib/ops/testing.py new file mode 100644 index 0000000000000000000000000000000000000000..53e729ea757086d4715a18692eef28a10e0f7551 --- /dev/null +++ b/SOL004/charm-packages/native_k8s_charm_vnf/charms/nginx-k8s/lib/ops/testing.py @@ -0,0 +1,520 @@ +# Copyright 2020 Canonical Ltd. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+
+import inspect
+import pathlib
+from textwrap import dedent
+import typing
+
+from ops import charm, framework, model
+
+
+# OptionalYAML is something like metadata.yaml or actions.yaml. You can
+# pass in a file-like object or the string directly.
+OptionalYAML = typing.Optional[typing.Union[str, typing.TextIO]]
+
+
+# noinspection PyProtectedMember
+class Harness:
+    """This class represents a way to build up the model that will drive a test suite.
+
+    The model that is created is from the viewpoint of the charm that you are testing.
+
+    Example::
+
+        harness = Harness(MyCharm)
+        # Do initial setup here
+        relation_id = harness.add_relation('db', 'postgresql')
+        # Now instantiate the charm to see events as the model changes
+        harness.begin()
+        harness.add_relation_unit(relation_id, 'postgresql/0')
+        harness.update_relation_data(relation_id, 'postgresql/0', {'key': 'val'})
+        # Check that charm has properly handled the relation_joined event for postgresql/0
+        self.assertEqual(harness.charm. ...)
+
+    Args:
+        charm_cls: The Charm class that you'll be testing.
+        meta: A string or file-like object containing the contents of
+            metadata.yaml. If not supplied, we will look for a 'metadata.yaml' file in the
+            parent directory of the Charm, and if not found fall back to a trivial
+            'name: test-charm' metadata.
+        actions: A string or file-like object containing the contents of
+            actions.yaml. If not supplied, we will look for an 'actions.yaml' file in the
+            parent directory of the Charm.
+    """
+
+    def __init__(
+            self,
+            charm_cls: typing.Type[charm.CharmBase],
+            *,
+            meta: OptionalYAML = None,
+            actions: OptionalYAML = None):
+        # TODO: jam 2020-03-05 We probably want to take config as a parameter as well, since
+        #       it would define the default values of config that the charm would see.
+        self._charm_cls = charm_cls
+        self._charm = None
+        self._charm_dir = 'no-disk-path'  # this may be updated by _create_meta
+        self._meta = self._create_meta(meta, actions)
+        self._unit_name = self._meta.name + '/0'
+        self._framework = None
+        self._hooks_enabled = True
+        self._relation_id_counter = 0
+        self._backend = _TestingModelBackend(self._unit_name, self._meta)
+        self._model = model.Model(self._unit_name, self._meta, self._backend)
+        self._framework = framework.Framework(":memory:", self._charm_dir, self._meta, self._model)
+
+    @property
+    def charm(self) -> charm.CharmBase:
+        """Return the instance of the charm class that was passed to __init__.
+
+        Note that the Charm is not instantiated until you have called
+        :meth:`.begin()`.
+        """
+        return self._charm
+
+    @property
+    def model(self) -> model.Model:
+        """Return the :class:`~ops.model.Model` that is being driven by this Harness."""
+        return self._model
+
+    @property
+    def framework(self) -> framework.Framework:
+        """Return the Framework that is being driven by this Harness."""
+        return self._framework
+
+    def begin(self) -> None:
+        """Instantiate the Charm and start handling events.
+
+        Before calling begin(), there is no Charm instance, so changes to the Model won't emit
+        events. You must call begin before :attr:`.charm` is valid.
+        """
+        if self._charm is not None:
+            raise RuntimeError('cannot call the begin method on the harness more than once')
+
+        # The Framework adds attributes to class objects for events, etc. As such, we can't re-use
+        # the original class against multiple Frameworks. So create a locally defined class
+        # and register it.
+        # TODO: jam 2020-03-16 We are looking to change this to instance attributes instead of
+        #       class attributes, which should clean up this ugliness. The API can stay the same.
+        class TestEvents(self._charm_cls.on.__class__):
+            pass
+
+        TestEvents.__name__ = self._charm_cls.on.__class__.__name__
+
+        class TestCharm(self._charm_cls):
+            on = TestEvents()
+
+        # Note: jam 2020-03-01 This is so that errors in testing say MyCharm has no attribute foo,
+        # rather than TestCharm has no attribute foo.
+        TestCharm.__name__ = self._charm_cls.__name__
+        self._charm = TestCharm(self._framework, self._framework.meta.name)
+
+    def _create_meta(self, charm_metadata, action_metadata):
+        """Create a CharmMeta object.
+
+        Handle the cases where a user doesn't supply explicit metadata snippets.
+        """
+        filename = inspect.getfile(self._charm_cls)
+        charm_dir = pathlib.Path(filename).parents[1]
+
+        if charm_metadata is None:
+            metadata_path = charm_dir / 'metadata.yaml'
+            if metadata_path.is_file():
+                charm_metadata = metadata_path.read_text()
+                self._charm_dir = charm_dir
+            else:
+                # The simplest of metadata that the framework can support
+                charm_metadata = 'name: test-charm'
+        elif isinstance(charm_metadata, str):
+            charm_metadata = dedent(charm_metadata)
+
+        if action_metadata is None:
+            actions_path = charm_dir / 'actions.yaml'
+            if actions_path.is_file():
+                action_metadata = actions_path.read_text()
+                self._charm_dir = charm_dir
+        elif isinstance(action_metadata, str):
+            action_metadata = dedent(action_metadata)
+
+        return charm.CharmMeta.from_yaml(charm_metadata, action_metadata)
+
+    def disable_hooks(self) -> None:
+        """Stop emitting hook events when the model changes.
+
+        This can be used by developers to stop changes to the model from emitting events that
+        the charm will react to. Call :meth:`.enable_hooks`
+        to re-enable them.
+        """
+        self._hooks_enabled = False
+
+    def enable_hooks(self) -> None:
+        """Re-enable hook events from charm.on when the model is changed.
+
+        By default hook events are enabled once you call :meth:`.begin`,
+        but if you have used :meth:`.disable_hooks`, this can be used to
+        enable them again.
+        """
+        self._hooks_enabled = True
+
+    def _next_relation_id(self):
+        rel_id = self._relation_id_counter
+        self._relation_id_counter += 1
+        return rel_id
+
+    def add_relation(self, relation_name: str, remote_app: str) -> int:
+        """Declare that there is a new relation between this app and `remote_app`.
+
+        Args:
+            relation_name: The relation on Charm that is being related to
+            remote_app: The name of the application that is being related to
+
+        Return:
+            The relation_id created by this add_relation.
+        """
+        rel_id = self._next_relation_id()
+        self._backend._relation_ids_map.setdefault(relation_name, []).append(rel_id)
+        self._backend._relation_names[rel_id] = relation_name
+        self._backend._relation_list_map[rel_id] = []
+        self._backend._relation_data[rel_id] = {
+            remote_app: {},
+            self._backend.unit_name: {},
+            self._backend.app_name: {},
+        }
+        # Reload the relation_ids list
+        if self._model is not None:
+            self._model.relations._invalidate(relation_name)
+        if self._charm is None or not self._hooks_enabled:
+            return rel_id
+        relation = self._model.get_relation(relation_name, rel_id)
+        app = self._model.get_app(remote_app)
+        self._charm.on[relation_name].relation_created.emit(
+            relation, app)
+        return rel_id
+
+    def add_relation_unit(self, relation_id: int, remote_unit_name: str) -> None:
+        """Add a new unit to a relation.
+ + Example:: + + rel_id = harness.add_relation('db', 'postgresql') + harness.add_relation_unit(rel_id, 'postgresql/0') + + This will trigger a `relation_joined` event and a `relation_changed` event. + + Args: + relation_id: The integer relation identifier (as returned by add_relation). + remote_unit_name: A string representing the remote unit that is being added. + Return: + None + """ + self._backend._relation_list_map[relation_id].append(remote_unit_name) + self._backend._relation_data[relation_id][remote_unit_name] = {} + relation_name = self._backend._relation_names[relation_id] + # Make sure that the Model reloads the relation_list for this relation_id, as well as + # reloading the relation data for this unit. + if self._model is not None: + remote_unit = self._model.get_unit(remote_unit_name) + relation = self._model.get_relation(relation_name, relation_id) + unit_cache = relation.data.get(remote_unit, None) + if unit_cache is not None: + unit_cache._invalidate() + self._model.relations._invalidate(relation_name) + if self._charm is None or not self._hooks_enabled: + return + self._charm.on[relation_name].relation_joined.emit( + relation, remote_unit.app, remote_unit) + + def get_relation_data(self, relation_id: int, app_or_unit: str) -> typing.Mapping: + """Get the relation data bucket for a single app or unit in a given relation. + + This ignores all of the safety checks of who can and can't see data in relations (eg, + non-leaders can't read their own application's relation data because there are no events + that keep that data up-to-date for the unit). + + Args: + relation_id: The relation whose content we want to look at. + app_or_unit: The name of the application or unit whose data we want to read + Return: + a dict containing the relation data for `app_or_unit` or None. + Raises: + KeyError: if relation_id doesn't exist + """ + return self._backend._relation_data[relation_id].get(app_or_unit, None) + + def get_workload_version(self) -> str: + """Read the workload version that was set by the unit.""" + return self._backend._workload_version + + def update_relation_data( + self, + relation_id: int, + app_or_unit: str, + key_values: typing.Mapping, + ) -> None: + """Update the relation data for a given unit or application in a given relation. + + This also triggers the `relation_changed` event for this relation_id. + + Args: + relation_id: The integer relation_id representing this relation. + app_or_unit: The unit or application name that is being updated. + This can be the local or remote application. + key_values: Each key/value will be updated in the relation data. + """ + relation_name = self._backend._relation_names[relation_id] + relation = self._model.get_relation(relation_name, relation_id) + if '/' in app_or_unit: + entity = self._model.get_unit(app_or_unit) + else: + entity = self._model.get_app(app_or_unit) + rel_data = relation.data.get(entity, None) + if rel_data is not None: + # rel_data may have cached now-stale data, so _invalidate() it. + # Note, this won't cause the data to be loaded if it wasn't already. 
+ rel_data._invalidate() + + new_values = self._backend._relation_data[relation_id][app_or_unit].copy() + for k, v in key_values.items(): + if v == '': + new_values.pop(k, None) + else: + new_values[k] = v + self._backend._relation_data[relation_id][app_or_unit] = new_values + + if app_or_unit == self._model.unit.name: + # No events for our own unit + return + if app_or_unit == self._model.app.name: + # updating our own app only generates an event if it is a peer relation and we + # aren't the leader + is_peer = self._meta.relations[relation_name].role.is_peer() + if not is_peer: + return + if self._model.unit.is_leader(): + return + self._emit_relation_changed(relation_id, app_or_unit) + + def _emit_relation_changed(self, relation_id, app_or_unit): + if self._charm is None or not self._hooks_enabled: + return + rel_name = self._backend._relation_names[relation_id] + relation = self.model.get_relation(rel_name, relation_id) + if '/' in app_or_unit: + app_name = app_or_unit.split('/')[0] + unit_name = app_or_unit + app = self.model.get_app(app_name) + unit = self.model.get_unit(unit_name) + args = (relation, app, unit) + else: + app_name = app_or_unit + app = self.model.get_app(app_name) + args = (relation, app) + self._charm.on[rel_name].relation_changed.emit(*args) + + def update_config( + self, + key_values: typing.Mapping[str, str] = None, + unset: typing.Iterable[str] = (), + ) -> None: + """Update the config as seen by the charm. + + This will trigger a `config_changed` event. + + Args: + key_values: A Mapping of key:value pairs to update in config. + unset: An iterable of keys to remove from Config. (Note that this does + not currently reset the config values to the default defined in config.yaml.) + """ + config = self._backend._config + if key_values is not None: + for key, value in key_values.items(): + config[key] = value + for key in unset: + config.pop(key, None) + # NOTE: jam 2020-03-01 Note that this sort of works "by accident". Config + # is a LazyMapping, but its _load returns a dict and this method mutates + # the dict that Config is caching. Arguably we should be doing some sort + # of charm.framework.model.config._invalidate() + if self._charm is None or not self._hooks_enabled: + return + self._charm.on.config_changed.emit() + + def set_leader(self, is_leader: bool = True) -> None: + """Set whether this unit is the leader or not. + + If this charm becomes a leader then `leader_elected` will be triggered. + + Args: + is_leader: True/False as to whether this unit is the leader. + """ + was_leader = self._backend._is_leader + self._backend._is_leader = is_leader + # Note: jam 2020-03-01 currently is_leader is cached at the ModelBackend level, not in + # the Model objects, so this automatically gets noticed. + if is_leader and not was_leader and self._charm is not None and self._hooks_enabled: + self._charm.on.leader_elected.emit() + + def _get_backend_calls(self, reset: bool = True) -> list: + """Return the calls that we have made to the TestingModelBackend. + + This is useful mostly for testing the framework itself, so that we can assert that we + do/don't trigger extra calls. + + Args: + reset: If True, reset the calls list back to empty, if false, the call list is + preserved. + Return: + ``[(call1, args...), (call2, args...)]`` + """ + calls = self._backend._calls.copy() + if reset: + self._backend._calls.clear() + return calls + + +def _record_calls(cls): + """Replace methods on cls with methods that record that they have been called. 
+
+    Iterate all attributes of cls, and for public methods, replace them with a wrapped method
+    that records the method called along with the arguments and keyword arguments.
+    """
+    for meth_name, orig_method in cls.__dict__.items():
+        if meth_name.startswith('_'):
+            continue
+
+        def decorator(orig_method):
+            def wrapped(self, *args, **kwargs):
+                full_args = (orig_method.__name__,) + args
+                if kwargs:
+                    full_args = full_args + (kwargs,)
+                self._calls.append(full_args)
+                return orig_method(self, *args, **kwargs)
+            return wrapped
+
+        setattr(cls, meth_name, decorator(orig_method))
+    return cls
+
+
+@_record_calls
+class _TestingModelBackend:
+    """This conforms to the interface for ModelBackend but provides canned data.
+
+    DO NOT use this class directly, it is used by `Harness`_ to drive the model.
+    `Harness`_ is responsible for maintaining the internal consistency of the values here,
+    as the only public methods of this type are for implementing ModelBackend.
+    """
+
+    def __init__(self, unit_name, meta):
+        self.unit_name = unit_name
+        self.app_name = self.unit_name.split('/')[0]
+        self._calls = []
+        self._meta = meta
+        self._relation_ids_map = {}  # relation name to [relation_ids,...]
+        self._relation_names = {}  # reverse map from relation_id to relation_name
+        self._relation_list_map = {}  # relation_id: [unit_name,...]
+        self._relation_data = {}  # {relation_id: {name: data}}
+        self._config = {}
+        self._is_leader = False
+        self._resources_map = {}
+        self._pod_spec = None
+        self._app_status = {'status': 'unknown', 'message': ''}
+        self._unit_status = {'status': 'maintenance', 'message': ''}
+        self._workload_version = None
+
+    def relation_ids(self, relation_name):
+        try:
+            return self._relation_ids_map[relation_name]
+        except KeyError as e:
+            if relation_name not in self._meta.relations:
+                raise model.ModelError('{} is not a known relation'.format(relation_name)) from e
+            return []
+
+    def relation_list(self, relation_id):
+        try:
+            return self._relation_list_map[relation_id]
+        except KeyError as e:
+            raise model.RelationNotFoundError from e
+
+    def relation_get(self, relation_id, member_name, is_app):
+        if is_app and '/' in member_name:
+            member_name = member_name.split('/')[0]
+        if relation_id not in self._relation_data:
+            raise model.RelationNotFoundError()
+        return self._relation_data[relation_id][member_name].copy()
+
+    def relation_set(self, relation_id, key, value, is_app):
+        relation = self._relation_data[relation_id]
+        if is_app:
+            bucket_key = self.app_name
+        else:
+            bucket_key = self.unit_name
+        if bucket_key not in relation:
+            relation[bucket_key] = {}
+        bucket = relation[bucket_key]
+        if value == '':
+            bucket.pop(key, None)
+        else:
+            bucket[key] = value
+
+    def config_get(self):
+        return self._config
+
+    def is_leader(self):
+        return self._is_leader
+
+    def application_version_set(self, version):
+        self._workload_version = version
+
+    def resource_get(self, resource_name):
+        return self._resources_map[resource_name]
+
+    def pod_spec_set(self, spec, k8s_resources):
+        self._pod_spec = (spec, k8s_resources)
+
+    def status_get(self, *, is_app=False):
+        if is_app:
+            return self._app_status
+        else:
+            return self._unit_status
+
+    def status_set(self, status, message='', *, is_app=False):
+        if is_app:
+            self._app_status = {'status': status, 'message': message}
+        else:
+            self._unit_status = {'status': status, 'message': message}
+
+    def storage_list(self, name):
+        raise NotImplementedError(self.storage_list)
+
+    def storage_get(self, storage_name_id, attribute):
+ raise NotImplementedError(self.storage_get) + + def storage_add(self, name, count=1): + raise NotImplementedError(self.storage_add) + + def action_get(self): + raise NotImplementedError(self.action_get) + + def action_set(self, results): + raise NotImplementedError(self.action_set) + + def action_log(self, message): + raise NotImplementedError(self.action_log) + + def action_fail(self, message=''): + raise NotImplementedError(self.action_fail) + + def network_get(self, endpoint_name, relation_id=None): + raise NotImplementedError(self.network_get) diff --git a/SOL004/charm-packages/native_k8s_charm_vnf/charms/nginx-k8s/metadata.yaml b/SOL004/charm-packages/native_k8s_charm_vnf/charms/nginx-k8s/metadata.yaml new file mode 100644 index 0000000000000000000000000000000000000000..21c168deb5d8a722072ffa0271a8a86eee70c0f0 --- /dev/null +++ b/SOL004/charm-packages/native_k8s_charm_vnf/charms/nginx-k8s/metadata.yaml @@ -0,0 +1,10 @@ +name: nginx-k8s +summary: A nginx charm +description: | + Describe your charm here +series: + - kubernetes +min-juju-version: 2.7.0 +deployment: + type: stateless + service: loadbalancer diff --git a/SOL004/charm-packages/native_k8s_charm_vnf/charms/nginx-k8s/src/charm.py b/SOL004/charm-packages/native_k8s_charm_vnf/charms/nginx-k8s/src/charm.py new file mode 100755 index 0000000000000000000000000000000000000000..2afc6cb6d058d843ffa9ec9a7a88d63b2fba9c19 --- /dev/null +++ b/SOL004/charm-packages/native_k8s_charm_vnf/charms/nginx-k8s/src/charm.py @@ -0,0 +1,86 @@ +#!/usr/bin/env python3 + +import sys +import logging + +sys.path.append("lib") + +from ops.charm import CharmBase +from ops.framework import StoredState +from ops.main import main +from ops.model import ( + ActiveStatus, + MaintenanceStatus, +) + +logger = logging.getLogger(__name__) + + +class NginxK8sCharm(CharmBase): + state = StoredState() + + def __init__(self, framework, key): + super().__init__(framework, key) + self.state.set_default(spec=None) + + # Observe Charm related events + self.framework.observe(self.on.config_changed, self.on_config_changed) + self.framework.observe(self.on.start, self.on_start) + self.framework.observe(self.on.upgrade_charm, self.on_upgrade_charm) + + def _apply_spec(self): + # Only apply the spec if this unit is a leader. 
+        if not self.framework.model.unit.is_leader():
+            return
+        new_spec = self.make_pod_spec()
+        if new_spec == self.state.spec:
+            return
+        self.framework.model.pod.set_spec(new_spec)
+        self.state.spec = new_spec
+
+    def make_pod_spec(self):
+        config = self.framework.model.config
+
+        ports = [
+            {
+                "name": "port",
+                "containerPort": config["port"],
+                "protocol": "TCP",
+            }
+        ]
+
+        spec = {
+            "version": 2,
+            "containers": [
+                {
+                    "name": self.framework.model.app.name,
+                    "image": config["image"],
+                    "ports": ports,
+                }
+            ],
+        }
+
+        return spec
+
+    def on_config_changed(self, event):
+        """Handle changes in configuration"""
+        unit = self.model.unit
+        unit.status = MaintenanceStatus("Applying new pod spec")
+        self._apply_spec()
+        unit.status = ActiveStatus("Ready")
+
+    def on_start(self, event):
+        """Called when the charm is started"""
+        unit = self.model.unit
+        unit.status = MaintenanceStatus("Applying pod spec")
+        self._apply_spec()
+        unit.status = ActiveStatus("Ready")
+
+    def on_upgrade_charm(self, event):
+        """Upgrade the charm."""
+        unit = self.model.unit
+        unit.status = MaintenanceStatus("Upgrading charm")
+        self.on_start(event)
+
+if __name__ == "__main__":
+    main(NginxK8sCharm)
diff --git a/SOL004/charm-packages/native_k8s_charm_vnf/juju-bundles/bundle.yaml b/SOL004/charm-packages/native_k8s_charm_vnf/juju-bundles/bundle.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..8115e959ac4502789c9ecd9d014e3139a8b636a2
--- /dev/null
+++ b/SOL004/charm-packages/native_k8s_charm_vnf/juju-bundles/bundle.yaml
@@ -0,0 +1,6 @@
+description: Nginx Bundle
+bundle: kubernetes
+applications:
+  nginx:
+    charm: '../charms/nginx-k8s'
+    scale: 1
diff --git a/SOL004/charm-packages/native_k8s_charm_vnf/native_k8s_charm_vnfd.yaml b/SOL004/charm-packages/native_k8s_charm_vnf/native_k8s_charm_vnfd.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..35d47caece1a1e997851b5da1f1ed5e417850524
--- /dev/null
+++ b/SOL004/charm-packages/native_k8s_charm_vnf/native_k8s_charm_vnfd.yaml
@@ -0,0 +1,38 @@
+vnfd:
+  df:
+  - id: default-df
+    lcm-operations-configuration:
+      operate-vnf-op-config:
+        day1-2:
+        - id: native-kdu
+          config-primitive:
+          - name: changecontent
+            parameter:
+            - data-type: STRING
+              default-value: nginx
+              name: application-name
+            - data-type: STRING
+              default-value: ''
+              name: customtitle
+          initial-config-primitive:
+          - name: changecontent
+            parameter:
+            - data-type: STRING
+              name: application-name
+              value: nginx
+            - data-type: STRING
+              name: customtitle
+              value: Initial Config Primitive
+            seq: 1
+  ext-cpd:
+  - id: mgmt-ext
+    k8s-cluster-net: mgmtnet
+  id: native_k8s_charm-vnf
+  k8s-cluster:
+    nets:
+    - id: mgmtnet
+  kdu:
+  - name: native-kdu
+    juju-bundle: bundle.yaml
+  mgmt-cp: mgmt-ext
+  product-name: native_k8s_charm-vnf
diff --git a/SOL004/charm-packages/nopasswd_k8s_proxy_charm_ns/icons/osm.png b/SOL004/charm-packages/nopasswd_k8s_proxy_charm_ns/icons/osm.png
new file mode 100644
index 0000000000000000000000000000000000000000..62012d2a2b491bdcd536d62c3c3c863c0d8c1b33
Binary files /dev/null and b/SOL004/charm-packages/nopasswd_k8s_proxy_charm_ns/icons/osm.png differ
diff --git a/SOL004/charm-packages/nopasswd_k8s_proxy_charm_ns/nopasswd_k8s_proxy_charm_nsd.yaml b/SOL004/charm-packages/nopasswd_k8s_proxy_charm_ns/nopasswd_k8s_proxy_charm_nsd.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..8b75bb26f06e33963b54f2ba00e3a56c93d80121
--- /dev/null
+++ b/SOL004/charm-packages/nopasswd_k8s_proxy_charm_ns/nopasswd_k8s_proxy_charm_nsd.yaml
@@ -0,0 +1,37 @@
+nsd:
+  nsd:
+  - description: NS with 2 VNFs with cloudinit connected by datanet and mgmtnet VLs
+    df:
+    - id: default-df
+      vnf-profile:
+      - id: '1'
+        virtual-link-connectivity:
+        - constituent-cpd-id:
+          - constituent-base-element-id: '1'
+            constituent-cpd-id: vnf-mgmt-ext
+          virtual-link-profile-id: mgmtnet
+        - constituent-cpd-id:
+          - constituent-base-element-id: '1'
+            constituent-cpd-id: vnf-data-ext
+          virtual-link-profile-id: datanet
+        vnfd-id: nopasswd_k8s_proxy_charm-vnf
+      - id: '2'
+        virtual-link-connectivity:
+        - constituent-cpd-id:
+          - constituent-base-element-id: '2'
+            constituent-cpd-id: vnf-mgmt-ext
+          virtual-link-profile-id: mgmtnet
+        - constituent-cpd-id:
+          - constituent-base-element-id: '2'
+            constituent-cpd-id: vnf-data-ext
+          virtual-link-profile-id: datanet
+        vnfd-id: nopasswd_k8s_proxy_charm-vnf
+    id: nopasswd_k8s_proxy_charm-ns
+    name: nopasswd_k8s_proxy_charm-ns
+    version: '1.0'
+    virtual-link-desc:
+    - id: mgmtnet
+      mgmt-network: 'true'
+    - id: datanet
+    vnfd-id:
+    - nopasswd_k8s_proxy_charm-vnf
diff --git a/SOL004/charm-packages/nopasswd_k8s_proxy_charm_vnf/charms/simple/actions.yaml b/SOL004/charm-packages/nopasswd_k8s_proxy_charm_vnf/charms/simple/actions.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..f9882c47bba960254e0381e5b5ab1c0688de6cf5
--- /dev/null
+++ b/SOL004/charm-packages/nopasswd_k8s_proxy_charm_vnf/charms/simple/actions.yaml
@@ -0,0 +1,54 @@
+##
+# Copyright 2020 Canonical Ltd.
+# All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+##
+touch:
+  description: "Touch a file on the VNF."
+  params:
+    filename:
+      description: "The name of the file to touch."
+      type: string
+      default: ""
+  required:
+    - filename
+
+# Standard OSM functions
+start:
+  description: "Start the service on the VNF."
+stop:
+  description: "Stop the service on the VNF."
+restart:
+  description: "Restart the service on the VNF."
+reboot:
+  description: "Reboot the VNF virtual machine."
+upgrade:
+  description: "Upgrade the software on the VNF."
+
+# Required by charms.osm.sshproxy
+run:
+  description: "Run an arbitrary command"
+  params:
+    command:
+      description: "The command to execute."
+      type: string
+      default: ""
+  required:
+    - command
+generate-ssh-key:
+  description: "Generate a new SSH keypair for this unit. This will replace any existing previously generated keypair."
+verify-ssh-credentials:
+  description: "Verify that this unit can authenticate with server specified by ssh-hostname and ssh-username."
+get-ssh-public-key:
+  description: "Get the public SSH key for this unit."
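The `touch` action declared above reaches charm code as an ops `ActionEvent` whose `params` mirror the YAML schema. A minimal sketch of a matching handler, assuming only the standard ops action API (`event.params`, `event.set_results`, `event.fail`); the package's actual implementation is the `on_touch_action` method in src/charm.py further below:

    # Hypothetical standalone handler for the 'touch' action above.
    def _on_touch_action(self, event):
        filename = event.params["filename"]  # populated from actions.yaml params
        if not filename:
            event.fail("filename must not be empty")  # reported as action failure
            return
        event.set_results({"output": "touched {}".format(filename)})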
diff --git a/SOL004/charm-packages/nopasswd_k8s_proxy_charm_vnf/charms/simple/config.yaml b/SOL004/charm-packages/nopasswd_k8s_proxy_charm_vnf/charms/simple/config.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..93e3cab01c0843418c57e4bc918eb933f2daf934
--- /dev/null
+++ b/SOL004/charm-packages/nopasswd_k8s_proxy_charm_vnf/charms/simple/config.yaml
@@ -0,0 +1,41 @@
+##
+# Copyright 2020 Canonical Ltd.
+# All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+##
+options:
+  ssh-hostname:
+    type: string
+    default: ""
+    description: "The hostname or IP address of the machine to connect to."
+  ssh-username:
+    type: string
+    default: ""
+    description: "The username to login as."
+  ssh-password:
+    type: string
+    default: ""
+    description: "The password used to authenticate."
+  ssh-public-key:
+    type: string
+    default: ""
+    description: "The public key of this unit."
+  ssh-key-type:
+    type: string
+    default: "rsa"
+    description: "The type of encryption to use for the SSH key."
+  ssh-key-bits:
+    type: int
+    default: 4096
+    description: "The number of bits to use for the SSH key."
diff --git a/SOL004/charm-packages/nopasswd_k8s_proxy_charm_vnf/charms/simple/hooks/install b/SOL004/charm-packages/nopasswd_k8s_proxy_charm_vnf/charms/simple/hooks/install
new file mode 120000
index 0000000000000000000000000000000000000000..25b1f68fa39d58d33c08ca420c3d439d19be0c55
--- /dev/null
+++ b/SOL004/charm-packages/nopasswd_k8s_proxy_charm_vnf/charms/simple/hooks/install
@@ -0,0 +1 @@
+../src/charm.py
\ No newline at end of file
diff --git a/SOL004/charm-packages/nopasswd_k8s_proxy_charm_vnf/charms/simple/hooks/start b/SOL004/charm-packages/nopasswd_k8s_proxy_charm_vnf/charms/simple/hooks/start
new file mode 120000
index 0000000000000000000000000000000000000000..25b1f68fa39d58d33c08ca420c3d439d19be0c55
--- /dev/null
+++ b/SOL004/charm-packages/nopasswd_k8s_proxy_charm_vnf/charms/simple/hooks/start
@@ -0,0 +1 @@
+../src/charm.py
\ No newline at end of file
diff --git a/SOL004/charm-packages/nopasswd_k8s_proxy_charm_vnf/charms/simple/hooks/upgrade-charm b/SOL004/charm-packages/nopasswd_k8s_proxy_charm_vnf/charms/simple/hooks/upgrade-charm
new file mode 120000
index 0000000000000000000000000000000000000000..25b1f68fa39d58d33c08ca420c3d439d19be0c55
--- /dev/null
+++ b/SOL004/charm-packages/nopasswd_k8s_proxy_charm_vnf/charms/simple/hooks/upgrade-charm
@@ -0,0 +1 @@
+../src/charm.py
\ No newline at end of file
diff --git a/SOL004/charm-packages/nopasswd_k8s_proxy_charm_vnf/charms/simple/lib/charms b/SOL004/charm-packages/nopasswd_k8s_proxy_charm_vnf/charms/simple/lib/charms
new file mode 120000
index 0000000000000000000000000000000000000000..5abf77aa5a04fddc6d4eb9d0d6d17e36fd00272a
--- /dev/null
+++ b/SOL004/charm-packages/nopasswd_k8s_proxy_charm_vnf/charms/simple/lib/charms
@@ -0,0 +1 @@
+../mod/charms.osm/charms/
\ No newline at end of file
diff --git a/SOL004/charm-packages/nopasswd_k8s_proxy_charm_vnf/charms/simple/lib/ops
b/SOL004/charm-packages/nopasswd_k8s_proxy_charm_vnf/charms/simple/lib/ops new file mode 120000 index 0000000000000000000000000000000000000000..c36dab1ed4a93aee97a31b0eba0438852c5e05bf --- /dev/null +++ b/SOL004/charm-packages/nopasswd_k8s_proxy_charm_vnf/charms/simple/lib/ops @@ -0,0 +1 @@ +../mod/operator/ops/ \ No newline at end of file diff --git a/SOL004/charm-packages/nopasswd_k8s_proxy_charm_vnf/charms/simple/metadata.yaml b/SOL004/charm-packages/nopasswd_k8s_proxy_charm_vnf/charms/simple/metadata.yaml new file mode 100644 index 0000000000000000000000000000000000000000..4b5b352850eb98a4e2028942957cebc622a2395e --- /dev/null +++ b/SOL004/charm-packages/nopasswd_k8s_proxy_charm_vnf/charms/simple/metadata.yaml @@ -0,0 +1,28 @@ +## +# Copyright 2020 Canonical Ltd. +# All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. +## + +name: simple-k8s-proxy +summary: A simple example proxy charm +description: | + Simple proxy charm is an example charm used in OSM Hackfests +series: + - kubernetes +peers: + proxypeer: + interface: proxypeer +deployment: + mode: operator \ No newline at end of file diff --git a/SOL004/charm-packages/nopasswd_k8s_proxy_charm_vnf/charms/simple/src/charm.py b/SOL004/charm-packages/nopasswd_k8s_proxy_charm_vnf/charms/simple/src/charm.py new file mode 100755 index 0000000000000000000000000000000000000000..e23b12b7bcfddfd49d4efbc1b0ac72d92579778a --- /dev/null +++ b/SOL004/charm-packages/nopasswd_k8s_proxy_charm_vnf/charms/simple/src/charm.py @@ -0,0 +1,65 @@ +#!/usr/bin/env python3 +## +# Copyright 2020 Canonical Ltd. +# All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. 
+##
+
+import sys
+
+sys.path.append("lib")
+
+from charms.osm.sshproxy import SSHProxyCharm
+from ops.main import main
+
+class MySSHProxyCharm(SSHProxyCharm):
+
+    def __init__(self, framework, key):
+        super().__init__(framework, key)
+
+        # Listen to charm events
+        self.framework.observe(self.on.config_changed, self.on_config_changed)
+        self.framework.observe(self.on.install, self.on_install)
+        self.framework.observe(self.on.start, self.on_start)
+
+        # Listen to the touch action event
+        self.framework.observe(self.on.touch_action, self.on_touch_action)
+
+    def on_config_changed(self, event):
+        """Handle changes in configuration"""
+        super().on_config_changed(event)
+
+    def on_install(self, event):
+        """Called when the charm is being installed"""
+        super().on_install(event)
+
+    def on_start(self, event):
+        """Called when the charm is being started"""
+        super().on_start(event)
+
+    def on_touch_action(self, event):
+        """Touch a file."""
+
+        if self.model.unit.is_leader():
+            filename = event.params["filename"]
+            proxy = self.get_ssh_proxy()
+            stdout, stderr = proxy.run("touch {}".format(filename))
+            event.set_results({"output": stdout})
+        else:
+            event.fail("Unit is not leader")
+            return
+
+if __name__ == "__main__":
+    main(MySSHProxyCharm)
+
diff --git a/SOL004/charm-packages/nopasswd_k8s_proxy_charm_vnf/cloud_init/cloud-config.txt b/SOL004/charm-packages/nopasswd_k8s_proxy_charm_vnf/cloud_init/cloud-config.txt
new file mode 100755
index 0000000000000000000000000000000000000000..36c8d1bf2cdebbc4e50d1e8348003f64f419cd0b
--- /dev/null
+++ b/SOL004/charm-packages/nopasswd_k8s_proxy_charm_vnf/cloud_init/cloud-config.txt
@@ -0,0 +1,12 @@
+#cloud-config
+password: osm4u
+chpasswd: { expire: False }
+ssh_pwauth: True
+
+write_files:
+- content: |
+    # My new helloworld file
+
+  owner: root:root
+  permissions: '0644'
+  path: /root/helloworld.txt
diff --git a/SOL004/charm-packages/nopasswd_k8s_proxy_charm_vnf/icons/osm.png b/SOL004/charm-packages/nopasswd_k8s_proxy_charm_vnf/icons/osm.png
new file mode 100644
index 0000000000000000000000000000000000000000..62012d2a2b491bdcd536d62c3c3c863c0d8c1b33
Binary files /dev/null and b/SOL004/charm-packages/nopasswd_k8s_proxy_charm_vnf/icons/osm.png differ
diff --git a/SOL004/charm-packages/nopasswd_k8s_proxy_charm_vnf/nopasswd_k8s_proxy_charm_vnfd.yaml b/SOL004/charm-packages/nopasswd_k8s_proxy_charm_vnf/nopasswd_k8s_proxy_charm_vnfd.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..bf32376cd9ac6f66e1fbf1887757a0cd1295400a
--- /dev/null
+++ b/SOL004/charm-packages/nopasswd_k8s_proxy_charm_vnf/nopasswd_k8s_proxy_charm_vnfd.yaml
@@ -0,0 +1,96 @@
+vnfd:
+  description: A VNF consisting of 1 VDU connected to two external VLs, one for
+    data and another one for management
+  df:
+  - id: default-df
+    instantiation-level:
+    - id: default-instantiation-level
+      vdu-level:
+      - number-of-instances: 1
+        vdu-id: mgmtVM
+    vdu-profile:
+    - id: mgmtVM
+      min-number-of-instances: 1
+    lcm-operations-configuration:
+      operate-vnf-op-config:
+        day1-2:
+        - config-access:
+            ssh-access:
+              default-user: ubuntu
+              required: true
+          config-primitive:
+          - name: touch
+            execution-environment-ref: simple-ee
+            parameter:
+            - data-type: STRING
+              default-value: /home/ubuntu/touched
+              name: filename
+          id: nopasswd_k8s_proxy_charm-vnf
+          execution-environment-list:
+          - id: simple-ee
+            juju:
+              charm: simple
+              cloud: k8s
+          initial-config-primitive:
+          - name: config
+            execution-environment-ref: simple-ee
+            parameter:
+            - name: ssh-hostname
+              value:
+            - name: ssh-username
+
value: ubuntu + seq: 1 + - name: touch + execution-environment-ref: simple-ee + parameter: + - data-type: STRING + name: filename + value: /home/ubuntu/first-touch + seq: 2 + ext-cpd: + - id: vnf-mgmt-ext + int-cpd: + cpd: mgmtVM-eth0-int + vdu-id: mgmtVM + - id: vnf-data-ext + int-cpd: + cpd: dataVM-xe0-int + vdu-id: mgmtVM + id: nopasswd_k8s_proxy_charm-vnf + mgmt-cp: vnf-mgmt-ext + product-name: nopasswd_k8s_proxy_charm-vnf + sw-image-desc: + - id: ubuntu18.04 + image: ubuntu18.04 + name: ubuntu18.04 + vdu: + - cloud-init-file: cloud-config.txt + id: mgmtVM + int-cpd: + - id: mgmtVM-eth0-int + virtual-network-interface-requirement: + - name: mgmtVM-eth0 + position: 1 + virtual-interface: + type: PARAVIRT + - id: dataVM-xe0-int + virtual-network-interface-requirement: + - name: dataVM-xe0 + position: 2 + virtual-interface: + type: PARAVIRT + name: mgmtVM + sw-image-desc: ubuntu18.04 + virtual-compute-desc: mgmtVM-compute + virtual-storage-desc: + - mgmtVM-storage + version: 1.0 + virtual-compute-desc: + - id: mgmtVM-compute + virtual-cpu: + num-virtual-cpu: 1 + virtual-memory: + size: 1.0 + virtual-storage-desc: + - id: mgmtVM-storage + size-of-storage: 10 diff --git a/SOL004/charm-packages/nopasswd_proxy_charm_ns/icons/osm.png b/SOL004/charm-packages/nopasswd_proxy_charm_ns/icons/osm.png new file mode 100644 index 0000000000000000000000000000000000000000..62012d2a2b491bdcd536d62c3c3c863c0d8c1b33 Binary files /dev/null and b/SOL004/charm-packages/nopasswd_proxy_charm_ns/icons/osm.png differ diff --git a/SOL004/charm-packages/nopasswd_proxy_charm_ns/nopasswd_proxy_charm_nsd.yaml b/SOL004/charm-packages/nopasswd_proxy_charm_ns/nopasswd_proxy_charm_nsd.yaml new file mode 100644 index 0000000000000000000000000000000000000000..859525bedcdb4e23f0fd438840f8c5ab6dc8f262 --- /dev/null +++ b/SOL004/charm-packages/nopasswd_proxy_charm_ns/nopasswd_proxy_charm_nsd.yaml @@ -0,0 +1,37 @@ +nsd: + nsd: + - description: NS with 2 VNFs with cloudinit connected by datanet and mgmtnet VLs + df: + - id: default-df + vnf-profile: + - id: '1' + virtual-link-connectivity: + - constituent-cpd-id: + - constituent-base-element-id: '1' + constituent-cpd-id: vnf-mgmt-ext + virtual-link-profile-id: mgmtnet + - constituent-cpd-id: + - constituent-base-element-id: '1' + constituent-cpd-id: vnf-data-ext + virtual-link-profile-id: datanet + vnfd-id: nopasswd_proxy_charm-vnf + - id: '2' + virtual-link-connectivity: + - constituent-cpd-id: + - constituent-base-element-id: '2' + constituent-cpd-id: vnf-mgmt-ext + virtual-link-profile-id: mgmtnet + - constituent-cpd-id: + - constituent-base-element-id: '2' + constituent-cpd-id: vnf-data-ext + virtual-link-profile-id: datanet + vnfd-id: nopasswd_proxy_charm-vnf + id: nopasswd_proxy_charm-ns + name: nopasswd_proxy_charm-ns + version: '1.0' + virtual-link-desc: + - id: mgmtnet + mgmt-network: 'true' + - id: datanet + vnfd-id: + - nopasswd_proxy_charm-vnf diff --git a/SOL004/charm-packages/nopasswd_proxy_charm_vnf/charms/simple/actions.yaml b/SOL004/charm-packages/nopasswd_proxy_charm_vnf/charms/simple/actions.yaml new file mode 100644 index 0000000000000000000000000000000000000000..f9882c47bba960254e0381e5b5ab1c0688de6cf5 --- /dev/null +++ b/SOL004/charm-packages/nopasswd_proxy_charm_vnf/charms/simple/actions.yaml @@ -0,0 +1,54 @@ +## +# Copyright 2020 Canonical Ltd. +# All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. 
You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+##
+touch:
+  description: "Touch a file on the VNF."
+  params:
+    filename:
+      description: "The name of the file to touch."
+      type: string
+      default: ""
+  required:
+    - filename
+
+# Standard OSM functions
+start:
+  description: "Start the service on the VNF."
+stop:
+  description: "Stop the service on the VNF."
+restart:
+  description: "Restart the service on the VNF."
+reboot:
+  description: "Reboot the VNF virtual machine."
+upgrade:
+  description: "Upgrade the software on the VNF."
+
+# Required by charms.osm.sshproxy
+run:
+  description: "Run an arbitrary command"
+  params:
+    command:
+      description: "The command to execute."
+      type: string
+      default: ""
+  required:
+    - command
+generate-ssh-key:
+  description: "Generate a new SSH keypair for this unit. This will replace any existing previously generated keypair."
+verify-ssh-credentials:
+  description: "Verify that this unit can authenticate with server specified by ssh-hostname and ssh-username."
+get-ssh-public-key:
+  description: "Get the public SSH key for this unit."
diff --git a/SOL004/charm-packages/nopasswd_proxy_charm_vnf/charms/simple/config.yaml b/SOL004/charm-packages/nopasswd_proxy_charm_vnf/charms/simple/config.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..93e3cab01c0843418c57e4bc918eb933f2daf934
--- /dev/null
+++ b/SOL004/charm-packages/nopasswd_proxy_charm_vnf/charms/simple/config.yaml
@@ -0,0 +1,41 @@
+##
+# Copyright 2020 Canonical Ltd.
+# All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+##
+options:
+  ssh-hostname:
+    type: string
+    default: ""
+    description: "The hostname or IP address of the machine to connect to."
+  ssh-username:
+    type: string
+    default: ""
+    description: "The username to login as."
+  ssh-password:
+    type: string
+    default: ""
+    description: "The password used to authenticate."
+  ssh-public-key:
+    type: string
+    default: ""
+    description: "The public key of this unit."
+  ssh-key-type:
+    type: string
+    default: "rsa"
+    description: "The type of encryption to use for the SSH key."
+  ssh-key-bits:
+    type: int
+    default: 4096
+    description: "The number of bits to use for the SSH key."
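These options surface in charm code through `self.model.config`. A minimal sketch of reading them, assuming only the standard ops config API; the guard shown here is illustrative and not part of this package:

    # Illustrative check that the proxy credentials have been configured.
    def _ssh_config_ready(self):
        cfg = self.model.config  # mapping populated from the config.yaml options
        # Both values default to "" until the operator sets them.
        return bool(cfg["ssh-hostname"]) and bool(cfg["ssh-username"])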
diff --git a/SOL004/charm-packages/nopasswd_proxy_charm_vnf/charms/simple/hooks/install b/SOL004/charm-packages/nopasswd_proxy_charm_vnf/charms/simple/hooks/install new file mode 120000 index 0000000000000000000000000000000000000000..25b1f68fa39d58d33c08ca420c3d439d19be0c55 --- /dev/null +++ b/SOL004/charm-packages/nopasswd_proxy_charm_vnf/charms/simple/hooks/install @@ -0,0 +1 @@ +../src/charm.py \ No newline at end of file diff --git a/SOL004/charm-packages/nopasswd_proxy_charm_vnf/charms/simple/hooks/start b/SOL004/charm-packages/nopasswd_proxy_charm_vnf/charms/simple/hooks/start new file mode 120000 index 0000000000000000000000000000000000000000..25b1f68fa39d58d33c08ca420c3d439d19be0c55 --- /dev/null +++ b/SOL004/charm-packages/nopasswd_proxy_charm_vnf/charms/simple/hooks/start @@ -0,0 +1 @@ +../src/charm.py \ No newline at end of file diff --git a/SOL004/charm-packages/nopasswd_proxy_charm_vnf/charms/simple/hooks/upgrade-charm b/SOL004/charm-packages/nopasswd_proxy_charm_vnf/charms/simple/hooks/upgrade-charm new file mode 120000 index 0000000000000000000000000000000000000000..25b1f68fa39d58d33c08ca420c3d439d19be0c55 --- /dev/null +++ b/SOL004/charm-packages/nopasswd_proxy_charm_vnf/charms/simple/hooks/upgrade-charm @@ -0,0 +1 @@ +../src/charm.py \ No newline at end of file diff --git a/SOL004/charm-packages/nopasswd_proxy_charm_vnf/charms/simple/lib/charms b/SOL004/charm-packages/nopasswd_proxy_charm_vnf/charms/simple/lib/charms new file mode 120000 index 0000000000000000000000000000000000000000..5abf77aa5a04fddc6d4eb9d0d6d17e36fd00272a --- /dev/null +++ b/SOL004/charm-packages/nopasswd_proxy_charm_vnf/charms/simple/lib/charms @@ -0,0 +1 @@ +../mod/charms.osm/charms/ \ No newline at end of file diff --git a/SOL004/charm-packages/nopasswd_proxy_charm_vnf/charms/simple/lib/ops b/SOL004/charm-packages/nopasswd_proxy_charm_vnf/charms/simple/lib/ops new file mode 120000 index 0000000000000000000000000000000000000000..c36dab1ed4a93aee97a31b0eba0438852c5e05bf --- /dev/null +++ b/SOL004/charm-packages/nopasswd_proxy_charm_vnf/charms/simple/lib/ops @@ -0,0 +1 @@ +../mod/operator/ops/ \ No newline at end of file diff --git a/SOL004/charm-packages/nopasswd_proxy_charm_vnf/charms/simple/metadata.yaml b/SOL004/charm-packages/nopasswd_proxy_charm_vnf/charms/simple/metadata.yaml new file mode 100644 index 0000000000000000000000000000000000000000..d6d0cd70360fb011c2b2a36442d967b104050810 --- /dev/null +++ b/SOL004/charm-packages/nopasswd_proxy_charm_vnf/charms/simple/metadata.yaml @@ -0,0 +1,27 @@ +## +# Copyright 2020 Canonical Ltd. +# All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. 
+## + +name: simple-ha-proxy +summary: A simple example proxy charm +description: | + Simple proxy charm is an example charm used in OSM Hackfests +series: + - xenial + - bionic +peers: + proxypeer: + interface: proxypeer diff --git a/SOL004/charm-packages/nopasswd_proxy_charm_vnf/charms/simple/src/charm.py b/SOL004/charm-packages/nopasswd_proxy_charm_vnf/charms/simple/src/charm.py new file mode 100755 index 0000000000000000000000000000000000000000..e23b12b7bcfddfd49d4efbc1b0ac72d92579778a --- /dev/null +++ b/SOL004/charm-packages/nopasswd_proxy_charm_vnf/charms/simple/src/charm.py @@ -0,0 +1,65 @@ +#!/usr/bin/env python3 +## +# Copyright 2020 Canonical Ltd. +# All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. +## + +import sys + +sys.path.append("lib") + +from charms.osm.sshproxy import SSHProxyCharm +from ops.main import main + +class MySSHProxyCharm(SSHProxyCharm): + + def __init__(self, framework, key): + super().__init__(framework, key) + + # Listen to charm events + self.framework.observe(self.on.config_changed, self.on_config_changed) + self.framework.observe(self.on.install, self.on_install) + self.framework.observe(self.on.start, self.on_start) + + # Listen to the touch action event + self.framework.observe(self.on.touch_action, self.on_touch_action) + + def on_config_changed(self, event): + """Handle changes in configuration""" + super().on_config_changed(event) + + def on_install(self, event): + """Called when the charm is being installed""" + super().on_install(event) + + def on_start(self, event): + """Called when the charm is being started""" + super().on_start(event) + + def on_touch_action(self, event): + """Touch a file.""" + + if self.model.unit.is_leader(): + filename = event.params["filename"] + proxy = self.get_ssh_proxy() + stdout, stderr = proxy.run("touch {}".format(filename)) + event.set_results({"output": stdout}) + else: + event.fail("Unit is not leader") + return + +if __name__ == "__main__": + main(MySSHProxyCharm) + diff --git a/SOL004/charm-packages/nopasswd_proxy_charm_vnf/cloud_init/cloud-config.txt b/SOL004/charm-packages/nopasswd_proxy_charm_vnf/cloud_init/cloud-config.txt new file mode 100755 index 0000000000000000000000000000000000000000..36c8d1bf2cdebbc4e50d1e8348003f64f419cd0b --- /dev/null +++ b/SOL004/charm-packages/nopasswd_proxy_charm_vnf/cloud_init/cloud-config.txt @@ -0,0 +1,12 @@ +#cloud-config +password: osm4u +chpasswd: { expire: False } +ssh_pwauth: True + +write_files: +- content: | + # My new helloworld file + + owner: root:root + permissions: '0644' + path: /root/helloworld.txt diff --git a/SOL004/charm-packages/nopasswd_proxy_charm_vnf/icons/osm.png b/SOL004/charm-packages/nopasswd_proxy_charm_vnf/icons/osm.png new file mode 100644 index 0000000000000000000000000000000000000000..62012d2a2b491bdcd536d62c3c3c863c0d8c1b33 Binary files /dev/null and b/SOL004/charm-packages/nopasswd_proxy_charm_vnf/icons/osm.png differ diff --git a/SOL004/charm-packages/nopasswd_proxy_charm_vnf/nopasswd_proxy_charm_vnfd.yaml 
b/SOL004/charm-packages/nopasswd_proxy_charm_vnf/nopasswd_proxy_charm_vnfd.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..0777117a3badad9fed67ad3ba15fcb0a4d8db28e
--- /dev/null
+++ b/SOL004/charm-packages/nopasswd_proxy_charm_vnf/nopasswd_proxy_charm_vnfd.yaml
@@ -0,0 +1,95 @@
+vnfd:
+  description: A VNF consisting of 1 VDU connected to two external VLs, one for
+    data and another one for management
+  df:
+  - id: default-df
+    instantiation-level:
+    - id: default-instantiation-level
+      vdu-level:
+      - number-of-instances: 1
+        vdu-id: mgmtVM
+    vdu-profile:
+    - id: mgmtVM
+      min-number-of-instances: 1
+    lcm-operations-configuration:
+      operate-vnf-op-config:
+        day1-2:
+        - config-access:
+            ssh-access:
+              default-user: ubuntu
+              required: true
+          config-primitive:
+          - name: touch
+            execution-environment-ref: simple-ee
+            parameter:
+            - data-type: STRING
+              default-value: /home/ubuntu/touched
+              name: filename
+          id: nopasswd_proxy_charm-vnf
+          execution-environment-list:
+          - id: simple-ee
+            juju:
+              charm: simple
+          initial-config-primitive:
+          - name: config
+            execution-environment-ref: simple-ee
+            parameter:
+            - name: ssh-hostname
+              value:
+            - name: ssh-username
+              value: ubuntu
+            seq: 1
+          - name: touch
+            execution-environment-ref: simple-ee
+            parameter:
+            - data-type: STRING
+              name: filename
+              value: /home/ubuntu/first-touch
+            seq: 2
+  ext-cpd:
+  - id: vnf-mgmt-ext
+    int-cpd:
+      cpd: mgmtVM-eth0-int
+      vdu-id: mgmtVM
+  - id: vnf-data-ext
+    int-cpd:
+      cpd: dataVM-xe0-int
+      vdu-id: mgmtVM
+  id: nopasswd_proxy_charm-vnf
+  mgmt-cp: vnf-mgmt-ext
+  product-name: nopasswd_proxy_charm-vnf
+  sw-image-desc:
+  - id: ubuntu18.04
+    image: ubuntu18.04
+    name: ubuntu18.04
+  vdu:
+  - cloud-init-file: cloud-config.txt
+    id: mgmtVM
+    int-cpd:
+    - id: mgmtVM-eth0-int
+      virtual-network-interface-requirement:
+      - name: mgmtVM-eth0
+        position: 1
+        virtual-interface:
+          type: PARAVIRT
+    - id: dataVM-xe0-int
+      virtual-network-interface-requirement:
+      - name: dataVM-xe0
+        position: 2
+        virtual-interface:
+          type: PARAVIRT
+    name: mgmtVM
+    sw-image-desc: ubuntu18.04
+    virtual-compute-desc: mgmtVM-compute
+    virtual-storage-desc:
+    - mgmtVM-storage
+  version: 1.0
+  virtual-compute-desc:
+  - id: mgmtVM-compute
+    virtual-cpu:
+      num-virtual-cpu: 1
+    virtual-memory:
+      size: 1.0
+  virtual-storage-desc:
+  - id: mgmtVM-storage
+    size-of-storage: 10
diff --git a/SOL004/charm-packages/ns_relations_ns/icons/osm.png b/SOL004/charm-packages/ns_relations_ns/icons/osm.png
new file mode 100644
index 0000000000000000000000000000000000000000..62012d2a2b491bdcd536d62c3c3c863c0d8c1b33
Binary files /dev/null and b/SOL004/charm-packages/ns_relations_ns/icons/osm.png differ
diff --git a/SOL004/charm-packages/ns_relations_ns/ns_relations_nsd.yaml b/SOL004/charm-packages/ns_relations_ns/ns_relations_nsd.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..6a10331c5c314038f0f7c1ed04e496f45b72450a
--- /dev/null
+++ b/SOL004/charm-packages/ns_relations_ns/ns_relations_nsd.yaml
@@ -0,0 +1,37 @@
+nsd:
+  nsd:
+  - description: NS with 2 VNFs with cloudinit connected by a mgmtnet VL
+    df:
+    - id: default-df
+      vnf-profile:
+      - id: '1'
+        virtual-link-connectivity:
+        - constituent-cpd-id:
+          - constituent-base-element-id: '1'
+            constituent-cpd-id: provides-mgmt-ext
+          virtual-link-profile-id: mgmtnet
+        vnfd-id: ns_relations_provides-vnf
+      - id: '2'
+        virtual-link-connectivity:
+        - constituent-cpd-id:
+          - constituent-base-element-id: '2'
+            constituent-cpd-id: requires-mgmt-ext
+          virtual-link-profile-id: mgmtnet
+
vnfd-id: ns_relations_requires-vnf + id: ns_relations-ns + name: ns_relations-ns + version: '1.0' + virtual-link-desc: + - id: mgmtnet + mgmt-network: 'true' + vnfd-id: + - ns_relations_provides-vnf + - ns_relations_requires-vnf + ns-configuration: + relation: + - name: relation + entities: + - id: '1' + endpoint: interface + - id: '2' + endpoint: interface diff --git a/SOL004/charm-packages/ns_relations_provides_vnf/charms/simple_provides/actions.yaml b/SOL004/charm-packages/ns_relations_provides_vnf/charms/simple_provides/actions.yaml new file mode 100644 index 0000000000000000000000000000000000000000..53a706b4eba7347764c4711b7077146e72241de4 --- /dev/null +++ b/SOL004/charm-packages/ns_relations_provides_vnf/charms/simple_provides/actions.yaml @@ -0,0 +1,25 @@ +## +# Copyright 2020 Canonical Ltd. +# All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. +## +touch: + description: "Touch a file on the VNF." + params: + filename: + description: "The name of the file to touch." + type: string + default: "" + required: + - filename diff --git a/SOL004/charm-packages/ns_relations_provides_vnf/charms/simple_provides/config.yaml b/SOL004/charm-packages/ns_relations_provides_vnf/charms/simple_provides/config.yaml new file mode 100644 index 0000000000000000000000000000000000000000..2be62318c6ce27156e4af585494b36d0c2b802f8 --- /dev/null +++ b/SOL004/charm-packages/ns_relations_provides_vnf/charms/simple_provides/config.yaml @@ -0,0 +1,17 @@ +## +# Copyright 2020 Canonical Ltd. +# All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. 
+## +options: {} \ No newline at end of file diff --git a/SOL004/charm-packages/ns_relations_provides_vnf/charms/simple_provides/hooks/install b/SOL004/charm-packages/ns_relations_provides_vnf/charms/simple_provides/hooks/install new file mode 120000 index 0000000000000000000000000000000000000000..25b1f68fa39d58d33c08ca420c3d439d19be0c55 --- /dev/null +++ b/SOL004/charm-packages/ns_relations_provides_vnf/charms/simple_provides/hooks/install @@ -0,0 +1 @@ +../src/charm.py \ No newline at end of file diff --git a/SOL004/charm-packages/ns_relations_provides_vnf/charms/simple_provides/hooks/start b/SOL004/charm-packages/ns_relations_provides_vnf/charms/simple_provides/hooks/start new file mode 120000 index 0000000000000000000000000000000000000000..25b1f68fa39d58d33c08ca420c3d439d19be0c55 --- /dev/null +++ b/SOL004/charm-packages/ns_relations_provides_vnf/charms/simple_provides/hooks/start @@ -0,0 +1 @@ +../src/charm.py \ No newline at end of file diff --git a/SOL004/charm-packages/ns_relations_provides_vnf/charms/simple_provides/hooks/upgrade-charm b/SOL004/charm-packages/ns_relations_provides_vnf/charms/simple_provides/hooks/upgrade-charm new file mode 120000 index 0000000000000000000000000000000000000000..25b1f68fa39d58d33c08ca420c3d439d19be0c55 --- /dev/null +++ b/SOL004/charm-packages/ns_relations_provides_vnf/charms/simple_provides/hooks/upgrade-charm @@ -0,0 +1 @@ +../src/charm.py \ No newline at end of file diff --git a/SOL004/charm-packages/ns_relations_provides_vnf/charms/simple_provides/lib/ops b/SOL004/charm-packages/ns_relations_provides_vnf/charms/simple_provides/lib/ops new file mode 120000 index 0000000000000000000000000000000000000000..d93419320c2e0d3133a0bc8059e2f259bf5bb213 --- /dev/null +++ b/SOL004/charm-packages/ns_relations_provides_vnf/charms/simple_provides/lib/ops @@ -0,0 +1 @@ +../mod/operator/ops \ No newline at end of file diff --git a/SOL004/charm-packages/ns_relations_provides_vnf/charms/simple_provides/metadata.yaml b/SOL004/charm-packages/ns_relations_provides_vnf/charms/simple_provides/metadata.yaml new file mode 100644 index 0000000000000000000000000000000000000000..db97d146a1734356dc46ec8bd549b4d1154aa418 --- /dev/null +++ b/SOL004/charm-packages/ns_relations_provides_vnf/charms/simple_provides/metadata.yaml @@ -0,0 +1,28 @@ +## +# Copyright 2020 Canonical Ltd. +# All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. 
+## + +name: simple-native-provides +summary: A simple native charm +description: | + Simple native charm +series: + - bionic + - xenial + - focal +provides: + interface: + interface: interface \ No newline at end of file diff --git a/SOL004/charm-packages/ns_relations_provides_vnf/charms/simple_provides/src/charm.py b/SOL004/charm-packages/ns_relations_provides_vnf/charms/simple_provides/src/charm.py new file mode 100755 index 0000000000000000000000000000000000000000..b15f847604d7ea6af2d4d1467cce6c7d1e64bf7e --- /dev/null +++ b/SOL004/charm-packages/ns_relations_provides_vnf/charms/simple_provides/src/charm.py @@ -0,0 +1,75 @@ +#!/usr/bin/env python3 +## +# Copyright 2020 Canonical Ltd. +# All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. +## + +import sys +import subprocess + +sys.path.append("lib") + +from ops.charm import CharmBase +from ops.main import main +from ops.model import ActiveStatus + +class MyNativeCharm(CharmBase): + + def __init__(self, framework, key): + super().__init__(framework, key) + + # Listen to charm events + self.framework.observe(self.on.config_changed, self.on_config_changed) + self.framework.observe(self.on.install, self.on_install) + self.framework.observe(self.on.start, self.on_start) + + # Listen to the touch action event + self.framework.observe(self.on.touch_action, self.on_touch_action) + + # Listen to relation changed + self.framework.observe( + self.on.interface_relation_changed, self.on_interface_relation_changed + ) + def on_config_changed(self, event): + """Handle changes in configuration""" + self.model.unit.status = ActiveStatus() + + def on_install(self, event): + """Called when the charm is being installed""" + self.model.unit.status = ActiveStatus() + + def on_start(self, event): + """Called when the charm is being started""" + self.model.unit.status = ActiveStatus() + + def on_touch_action(self, event): + """Touch a file.""" + filename = event.params["filename"] + try: + subprocess.run(["touch", filename], check=True) + event.set_results({"created": True, "filename": filename}) + except subprocess.CalledProcessError as e: + event.fail("Action failed: {}".format(e)) + self.model.unit.status = ActiveStatus() + + def on_interface_relation_changed(self, event): + parameter = "Hello" + event.relation.data[self.model.unit]["parameter"] = parameter + self.model.unit.status = ActiveStatus("Parameter sent: {}".format(parameter)) + + +if __name__ == "__main__": + main(MyNativeCharm) + diff --git a/SOL004/charm-packages/ns_relations_provides_vnf/cloud_init/cloud-config.txt b/SOL004/charm-packages/ns_relations_provides_vnf/cloud_init/cloud-config.txt new file mode 100755 index 0000000000000000000000000000000000000000..36c8d1bf2cdebbc4e50d1e8348003f64f419cd0b --- /dev/null +++ b/SOL004/charm-packages/ns_relations_provides_vnf/cloud_init/cloud-config.txt @@ -0,0 +1,12 @@ +#cloud-config +password: osm4u +chpasswd: { expire: False } +ssh_pwauth: True + +write_files: +- content: | + # My new helloworld file + + owner: root:root + permissions: 
'0644' + path: /root/helloworld.txt diff --git a/SOL004/charm-packages/ns_relations_provides_vnf/icons/osm.png b/SOL004/charm-packages/ns_relations_provides_vnf/icons/osm.png new file mode 100644 index 0000000000000000000000000000000000000000..62012d2a2b491bdcd536d62c3c3c863c0d8c1b33 Binary files /dev/null and b/SOL004/charm-packages/ns_relations_provides_vnf/icons/osm.png differ diff --git a/SOL004/charm-packages/ns_relations_provides_vnf/ns_relations_provides_vnfd.yaml b/SOL004/charm-packages/ns_relations_provides_vnf/ns_relations_provides_vnfd.yaml new file mode 100644 index 0000000000000000000000000000000000000000..ceb8d2953d130cd2bcface10873b9566779f508c --- /dev/null +++ b/SOL004/charm-packages/ns_relations_provides_vnf/ns_relations_provides_vnfd.yaml @@ -0,0 +1,77 @@ +vnfd: + description: A VNF consisting of 1 VDU connected to an external management VL + df: + - id: default-df + instantiation-level: + - id: default-instantiation-level + vdu-level: + - number-of-instances: 1 + vdu-id: simple_provides + vdu-profile: + - id: simple_provides + min-number-of-instances: 1 + lcm-operations-configuration: + operate-vnf-op-config: + day1-2: + - config-access: + ssh-access: + default-user: ubuntu + required: true + config-primitive: + - name: touch + execution-environment-ref: simple-provides-ee + parameter: + - data-type: STRING + default-value: /home/ubuntu/touched + name: filename + id: ns_relations_provides-vnf + execution-environment-list: + - id: simple-provides-ee + juju: + charm: simple_provides + proxy: false + initial-config-primitive: + - name: touch + execution-environment-ref: simple-provides-ee + parameter: + - data-type: STRING + name: filename + value: /home/ubuntu/first-touch + seq: 1 + ext-cpd: + - id: provides-mgmt-ext + int-cpd: + cpd: simple_provides-eth0-int + vdu-id: simple_provides + id: ns_relations_provides-vnf + mgmt-cp: provides-mgmt-ext + product-name: ns_relations_provides-vnf + sw-image-desc: + - id: ubuntu18.04 + image: ubuntu18.04 + name: ubuntu18.04 + vdu: + - cloud-init-file: cloud-config.txt + id: simple_provides + int-cpd: + - id: simple_provides-eth0-int + virtual-network-interface-requirement: + - name: simple_provides-eth0 + position: 1 + virtual-interface: + type: PARAVIRT + name: simple_provides + sw-image-desc: ubuntu18.04 + virtual-compute-desc: simple_provides-compute + virtual-storage-desc: + - simple_provides-storage + version: 1.0 + virtual-compute-desc: + - id: simple_provides-compute + virtual-cpu: + num-virtual-cpu: 1 + virtual-memory: + size: 1.0 + virtual-storage-desc: + - id: simple_provides-storage + size-of-storage: 10 diff --git a/SOL004/charm-packages/ns_relations_requires_vnf/charms/simple_requires/actions.yaml b/SOL004/charm-packages/ns_relations_requires_vnf/charms/simple_requires/actions.yaml new file mode 100644 index 0000000000000000000000000000000000000000..53a706b4eba7347764c4711b7077146e72241de4 --- /dev/null +++ b/SOL004/charm-packages/ns_relations_requires_vnf/charms/simple_requires/actions.yaml @@ -0,0 +1,25 @@ +## +# Copyright 2020 Canonical Ltd. +# All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. 
You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. +## +touch: + description: "Touch a file on the VNF." + params: + filename: + description: "The name of the file to touch." + type: string + default: "" + required: + - filename diff --git a/SOL004/charm-packages/ns_relations_requires_vnf/charms/simple_requires/config.yaml b/SOL004/charm-packages/ns_relations_requires_vnf/charms/simple_requires/config.yaml new file mode 100644 index 0000000000000000000000000000000000000000..2be62318c6ce27156e4af585494b36d0c2b802f8 --- /dev/null +++ b/SOL004/charm-packages/ns_relations_requires_vnf/charms/simple_requires/config.yaml @@ -0,0 +1,17 @@ +## +# Copyright 2020 Canonical Ltd. +# All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. +## +options: {} \ No newline at end of file diff --git a/SOL004/charm-packages/ns_relations_requires_vnf/charms/simple_requires/hooks/install b/SOL004/charm-packages/ns_relations_requires_vnf/charms/simple_requires/hooks/install new file mode 120000 index 0000000000000000000000000000000000000000..25b1f68fa39d58d33c08ca420c3d439d19be0c55 --- /dev/null +++ b/SOL004/charm-packages/ns_relations_requires_vnf/charms/simple_requires/hooks/install @@ -0,0 +1 @@ +../src/charm.py \ No newline at end of file diff --git a/SOL004/charm-packages/ns_relations_requires_vnf/charms/simple_requires/hooks/start b/SOL004/charm-packages/ns_relations_requires_vnf/charms/simple_requires/hooks/start new file mode 120000 index 0000000000000000000000000000000000000000..25b1f68fa39d58d33c08ca420c3d439d19be0c55 --- /dev/null +++ b/SOL004/charm-packages/ns_relations_requires_vnf/charms/simple_requires/hooks/start @@ -0,0 +1 @@ +../src/charm.py \ No newline at end of file diff --git a/SOL004/charm-packages/ns_relations_requires_vnf/charms/simple_requires/hooks/upgrade-charm b/SOL004/charm-packages/ns_relations_requires_vnf/charms/simple_requires/hooks/upgrade-charm new file mode 120000 index 0000000000000000000000000000000000000000..25b1f68fa39d58d33c08ca420c3d439d19be0c55 --- /dev/null +++ b/SOL004/charm-packages/ns_relations_requires_vnf/charms/simple_requires/hooks/upgrade-charm @@ -0,0 +1 @@ +../src/charm.py \ No newline at end of file diff --git a/SOL004/charm-packages/ns_relations_requires_vnf/charms/simple_requires/lib/ops b/SOL004/charm-packages/ns_relations_requires_vnf/charms/simple_requires/lib/ops new file mode 120000 index 0000000000000000000000000000000000000000..d93419320c2e0d3133a0bc8059e2f259bf5bb213 --- /dev/null +++ b/SOL004/charm-packages/ns_relations_requires_vnf/charms/simple_requires/lib/ops @@ -0,0 +1 @@ +../mod/operator/ops \ No newline at end of file diff --git 
a/SOL004/charm-packages/ns_relations_requires_vnf/charms/simple_requires/metadata.yaml b/SOL004/charm-packages/ns_relations_requires_vnf/charms/simple_requires/metadata.yaml new file mode 100644 index 0000000000000000000000000000000000000000..071f8fa8d2b605f85a78f2322ba8c096c65e1326 --- /dev/null +++ b/SOL004/charm-packages/ns_relations_requires_vnf/charms/simple_requires/metadata.yaml @@ -0,0 +1,28 @@ +## +# Copyright 2020 Canonical Ltd. +# All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. +## + +name: simple-native-requires +summary: A simple native charm +description: | + Simple native charm +series: + - bionic + - xenial + - focal +requires: + interface: + interface: interface \ No newline at end of file diff --git a/SOL004/charm-packages/ns_relations_requires_vnf/charms/simple_requires/src/charm.py b/SOL004/charm-packages/ns_relations_requires_vnf/charms/simple_requires/src/charm.py new file mode 100755 index 0000000000000000000000000000000000000000..c243dc90a31ba2fb5c14143a5d658f7b8dfd2dcc --- /dev/null +++ b/SOL004/charm-packages/ns_relations_requires_vnf/charms/simple_requires/src/charm.py @@ -0,0 +1,80 @@ +#!/usr/bin/env python3 +## +# Copyright 2020 Canonical Ltd. +# All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. 
+## + +import sys +import subprocess +import logging + +sys.path.append("lib") + +from ops.charm import CharmBase +from ops.main import main +from ops.model import ActiveStatus + +logger = logging.getLogger(__name__) + +class MyNativeCharm(CharmBase): + + def __init__(self, framework, key): + super().__init__(framework, key) + + # Listen to charm events + self.framework.observe(self.on.config_changed, self.on_config_changed) + self.framework.observe(self.on.install, self.on_install) + self.framework.observe(self.on.start, self.on_start) + + # Listen to the touch action event + self.framework.observe(self.on.touch_action, self.on_touch_action) + + # Listen to relation changed + self.framework.observe( + self.on.interface_relation_changed, self.on_interface_relation_changed + ) + def on_config_changed(self, event): + """Handle changes in configuration""" + self.model.unit.status = ActiveStatus() + + def on_install(self, event): + """Called when the charm is being installed""" + self.model.unit.status = ActiveStatus() + + def on_start(self, event): + """Called when the charm is being started""" + self.model.unit.status = ActiveStatus() + + def on_touch_action(self, event): + """Touch a file.""" + + filename = event.params["filename"] + try: + subprocess.run(["touch", filename], check=True) + event.set_results({"created": True, "filename": filename}) + except subprocess.CalledProcessError as e: + event.fail("Action failed: {}".format(e)) + self.model.unit.status = ActiveStatus() + + def on_interface_relation_changed(self, event): + logger.debug("RELATION DATA: {}".format(dict(event.relation.data[event.unit]))) + parameter = event.relation.data[event.unit].get("parameter") + if parameter: + self.model.unit.status = ActiveStatus("Parameter received: {}".format(parameter)) + + +if __name__ == "__main__": + main(MyNativeCharm) + diff --git a/SOL004/charm-packages/ns_relations_requires_vnf/cloud_init/cloud-config.txt b/SOL004/charm-packages/ns_relations_requires_vnf/cloud_init/cloud-config.txt new file mode 100755 index 0000000000000000000000000000000000000000..36c8d1bf2cdebbc4e50d1e8348003f64f419cd0b --- /dev/null +++ b/SOL004/charm-packages/ns_relations_requires_vnf/cloud_init/cloud-config.txt @@ -0,0 +1,12 @@ +#cloud-config +password: osm4u +chpasswd: { expire: False } +ssh_pwauth: True + +write_files: +- content: | + # My new helloworld file + + owner: root:root + permissions: '0644' + path: /root/helloworld.txt diff --git a/SOL004/charm-packages/ns_relations_requires_vnf/icons/osm.png b/SOL004/charm-packages/ns_relations_requires_vnf/icons/osm.png new file mode 100644 index 0000000000000000000000000000000000000000..62012d2a2b491bdcd536d62c3c3c863c0d8c1b33 Binary files /dev/null and b/SOL004/charm-packages/ns_relations_requires_vnf/icons/osm.png differ diff --git a/SOL004/charm-packages/ns_relations_requires_vnf/ns_relations_requires_vnfd.yaml b/SOL004/charm-packages/ns_relations_requires_vnf/ns_relations_requires_vnfd.yaml new file mode 100644 index 0000000000000000000000000000000000000000..b4c45c0411925f63587156717ce13170700953f1 --- /dev/null +++ b/SOL004/charm-packages/ns_relations_requires_vnf/ns_relations_requires_vnfd.yaml @@ -0,0 +1,77 @@ +vnfd: + description: A VNF consisting of 1 VDU connected to an external management VL + df: + - id: default-df + instantiation-level: + - id: default-instantiation-level + vdu-level: + - number-of-instances: 1 + vdu-id: simple_requires + vdu-profile: + - id: simple_requires + min-number-of-instances: 1 + 
lcm-operations-configuration: + operate-vnf-op-config: + day1-2: + - config-access: + ssh-access: + default-user: ubuntu + required: true + config-primitive: + - name: touch + execution-environment-ref: simple-requires-ee + parameter: + - data-type: STRING + default-value: /home/ubuntu/touched + name: filename + id: ns_relations_requires-vnf + execution-environment-list: + - id: simple-requires-ee + juju: + charm: simple_requires + proxy: false + initial-config-primitive: + - name: touch + execution-environment-ref: simple-requires-ee + parameter: + - data-type: STRING + name: filename + value: /home/ubuntu/first-touch + seq: 1 + ext-cpd: + - id: requires-mgmt-ext + int-cpd: + cpd: simple_requires-eth0-int + vdu-id: simple_requires + id: ns_relations_requires-vnf + mgmt-cp: requires-mgmt-ext + product-name: ns_relations_requires-vnf + sw-image-desc: + - id: ubuntu18.04 + image: ubuntu18.04 + name: ubuntu18.04 + vdu: + - cloud-init-file: cloud-config.txt + id: simple_requires + int-cpd: + - id: simple_requires-eth0-int + virtual-network-interface-requirement: + - name: simple_requires-eth0 + position: 1 + virtual-interface: + type: PARAVIRT + name: simple_requires + sw-image-desc: ubuntu18.04 + virtual-compute-desc: simple_requires-compute + virtual-storage-desc: + - simple_requires-storage + version: 1.0 + virtual-compute-desc: + - id: simple_requires-compute + virtual-cpu: + num-virtual-cpu: 1 + virtual-memory: + size: 1.0 + virtual-storage-desc: + - id: simple_requires-storage + size-of-storage: 10 diff --git a/SOL004/charm-packages/params.yaml b/SOL004/charm-packages/params.yaml new file mode 100644 index 0000000000000000000000000000000000000000..3dc9c71198c1ab2b2d1639f9a17ed7871485d03a --- /dev/null +++ b/SOL004/charm-packages/params.yaml @@ -0,0 +1,3 @@ +additionalParamsForVnf: + - member-vnf-index: "1" + config-units: 2 diff --git a/SOL004/charm-packages/proxy_native_relation_ns/icons/osm.png b/SOL004/charm-packages/proxy_native_relation_ns/icons/osm.png new file mode 100644 index 0000000000000000000000000000000000000000..62012d2a2b491bdcd536d62c3c3c863c0d8c1b33 Binary files /dev/null and b/SOL004/charm-packages/proxy_native_relation_ns/icons/osm.png differ diff --git a/SOL004/charm-packages/proxy_native_relation_ns/proxy_native_relation_nsd.yaml b/SOL004/charm-packages/proxy_native_relation_ns/proxy_native_relation_nsd.yaml new file mode 100644 index 0000000000000000000000000000000000000000..4039eb76a10c88965107ccfcd42fc468f1c7adce --- /dev/null +++ b/SOL004/charm-packages/proxy_native_relation_ns/proxy_native_relation_nsd.yaml @@ -0,0 +1,23 @@ +nsd: + nsd: + - description: NS with 1 VNF (2 VDUs) with cloudinit connected by a mgmtnet VL + df: + - id: default-df + vnf-profile: + - id: '1' + virtual-link-connectivity: + - constituent-cpd-id: + - constituent-base-element-id: '1' + constituent-cpd-id: provides-mgmt-ext + - constituent-base-element-id: '1' + constituent-cpd-id: requires-mgmt-ext + virtual-link-profile-id: mgmtnet + vnfd-id: proxy_native_relation-vnf + id: proxy_native_relation-ns + name: proxy_native_relation-ns + version: '1.0' + virtual-link-desc: + - id: mgmtnet + mgmt-network: 'true' + vnfd-id: + - proxy_native_relation-vnf diff --git a/SOL004/charm-packages/proxy_native_relation_vnf/charms/simple_provides_proxy/actions.yaml b/SOL004/charm-packages/proxy_native_relation_vnf/charms/simple_provides_proxy/actions.yaml new file mode 100644 index 0000000000000000000000000000000000000000..f9882c47bba960254e0381e5b5ab1c0688de6cf5 --- /dev/null +++ 
b/SOL004/charm-packages/proxy_native_relation_vnf/charms/simple_provides_proxy/actions.yaml @@ -0,0 +1,54 @@ +## +# Copyright 2020 Canonical Ltd. +# All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. +## +touch: + description: "Touch a file on the VNF." + params: + filename: + description: "The name of the file to touch." + type: string + default: "" + required: + - filename + +# Standard OSM functions +start: + description: "Start the service on the VNF." +stop: + description: "Stop the service on the VNF." +restart: + description: "Restart the service on the VNF." +reboot: + description: "Reboot the VNF virtual machine." +upgrade: + description: "Upgrade the software on the VNF." + +# Required by charms.osm.sshproxy +run: + description: "Run an arbitrary command." + params: + command: + description: "The command to execute." + type: string + default: "" + required: + - command +generate-ssh-key: + description: "Generate a new SSH keypair for this unit. This will replace any existing previously generated keypair." +verify-ssh-credentials: + description: "Verify that this unit can authenticate with the server specified by ssh-hostname and ssh-username." +get-ssh-public-key: + description: "Get the public SSH key for this unit." diff --git a/SOL004/charm-packages/proxy_native_relation_vnf/charms/simple_provides_proxy/config.yaml b/SOL004/charm-packages/proxy_native_relation_vnf/charms/simple_provides_proxy/config.yaml new file mode 100644 index 0000000000000000000000000000000000000000..93e3cab01c0843418c57e4bc918eb933f2daf934 --- /dev/null +++ b/SOL004/charm-packages/proxy_native_relation_vnf/charms/simple_provides_proxy/config.yaml @@ -0,0 +1,41 @@ +## +# Copyright 2020 Canonical Ltd. +# All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. +## +options: + ssh-hostname: + type: string + default: "" + description: "The hostname or IP address of the machine to connect to." + ssh-username: + type: string + default: "" + description: "The username to login as." + ssh-password: + type: string + default: "" + description: "The password used to authenticate." + ssh-public-key: + type: string + default: "" + description: "The public key of this unit." + ssh-key-type: + type: string + default: "rsa" + description: "The type of encryption to use for the SSH key." + ssh-key-bits: + type: int + default: 4096 + description: "The number of bits to use for the SSH key." 
diff --git a/SOL004/charm-packages/proxy_native_relation_vnf/charms/simple_provides_proxy/hooks/install b/SOL004/charm-packages/proxy_native_relation_vnf/charms/simple_provides_proxy/hooks/install new file mode 100755 index 0000000000000000000000000000000000000000..e23b12b7bcfddfd49d4efbc1b0ac72d92579778a --- /dev/null +++ b/SOL004/charm-packages/proxy_native_relation_vnf/charms/simple_provides_proxy/hooks/install @@ -0,0 +1,65 @@ +#!/usr/bin/env python3 +## +# Copyright 2020 Canonical Ltd. +# All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. +## + +import sys + +sys.path.append("lib") + +from charms.osm.sshproxy import SSHProxyCharm +from ops.main import main + +class MySSHProxyCharm(SSHProxyCharm): + + def __init__(self, framework, key): + super().__init__(framework, key) + + # Listen to charm events + self.framework.observe(self.on.config_changed, self.on_config_changed) + self.framework.observe(self.on.install, self.on_install) + self.framework.observe(self.on.start, self.on_start) + + # Listen to the touch action event + self.framework.observe(self.on.touch_action, self.on_touch_action) + + def on_config_changed(self, event): + """Handle changes in configuration""" + super().on_config_changed(event) + + def on_install(self, event): + """Called when the charm is being installed""" + super().on_install(event) + + def on_start(self, event): + """Called when the charm is being started""" + super().on_start(event) + + def on_touch_action(self, event): + """Touch a file.""" + + if self.model.unit.is_leader(): + filename = event.params["filename"] + proxy = self.get_ssh_proxy() + stdout, stderr = proxy.run("touch {}".format(filename)) + event.set_results({"output": stdout}) + else: + event.fail("Unit is not leader") + return + +if __name__ == "__main__": + main(MySSHProxyCharm) + diff --git a/SOL004/charm-packages/proxy_native_relation_vnf/charms/simple_provides_proxy/hooks/start b/SOL004/charm-packages/proxy_native_relation_vnf/charms/simple_provides_proxy/hooks/start new file mode 100755 index 0000000000000000000000000000000000000000..e23b12b7bcfddfd49d4efbc1b0ac72d92579778a --- /dev/null +++ b/SOL004/charm-packages/proxy_native_relation_vnf/charms/simple_provides_proxy/hooks/start @@ -0,0 +1,65 @@ +#!/usr/bin/env python3 +## +# Copyright 2020 Canonical Ltd. +# All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. 
+## + +import sys + +sys.path.append("lib") + +from charms.osm.sshproxy import SSHProxyCharm +from ops.main import main + +class MySSHProxyCharm(SSHProxyCharm): + + def __init__(self, framework, key): + super().__init__(framework, key) + + # Listen to charm events + self.framework.observe(self.on.config_changed, self.on_config_changed) + self.framework.observe(self.on.install, self.on_install) + self.framework.observe(self.on.start, self.on_start) + + # Listen to the touch action event + self.framework.observe(self.on.touch_action, self.on_touch_action) + + def on_config_changed(self, event): + """Handle changes in configuration""" + super().on_config_changed(event) + + def on_install(self, event): + """Called when the charm is being installed""" + super().on_install(event) + + def on_start(self, event): + """Called when the charm is being started""" + super().on_start(event) + + def on_touch_action(self, event): + """Touch a file.""" + + if self.model.unit.is_leader(): + filename = event.params["filename"] + proxy = self.get_ssh_proxy() + stdout, stderr = proxy.run("touch {}".format(filename)) + event.set_results({"output": stdout}) + else: + event.fail("Unit is not leader") + return + +if __name__ == "__main__": + main(MySSHProxyCharm) + diff --git a/SOL004/charm-packages/proxy_native_relation_vnf/charms/simple_provides_proxy/hooks/upgrade-charm b/SOL004/charm-packages/proxy_native_relation_vnf/charms/simple_provides_proxy/hooks/upgrade-charm new file mode 100755 index 0000000000000000000000000000000000000000..e23b12b7bcfddfd49d4efbc1b0ac72d92579778a --- /dev/null +++ b/SOL004/charm-packages/proxy_native_relation_vnf/charms/simple_provides_proxy/hooks/upgrade-charm @@ -0,0 +1,65 @@ +#!/usr/bin/env python3 +## +# Copyright 2020 Canonical Ltd. +# All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. 
+## + +import sys + +sys.path.append("lib") + +from charms.osm.sshproxy import SSHProxyCharm +from ops.main import main + +class MySSHProxyCharm(SSHProxyCharm): + + def __init__(self, framework, key): + super().__init__(framework, key) + + # Listen to charm events + self.framework.observe(self.on.config_changed, self.on_config_changed) + self.framework.observe(self.on.install, self.on_install) + self.framework.observe(self.on.start, self.on_start) + + # Listen to the touch action event + self.framework.observe(self.on.touch_action, self.on_touch_action) + + def on_config_changed(self, event): + """Handle changes in configuration""" + super().on_config_changed(event) + + def on_install(self, event): + """Called when the charm is being installed""" + super().on_install(event) + + def on_start(self, event): + """Called when the charm is being started""" + super().on_start(event) + + def on_touch_action(self, event): + """Touch a file.""" + + if self.model.unit.is_leader(): + filename = event.params["filename"] + proxy = self.get_ssh_proxy() + stdout, stderr = proxy.run("touch {}".format(filename)) + event.set_results({"output": stdout}) + else: + event.fail("Unit is not leader") + return + +if __name__ == "__main__": + main(MySSHProxyCharm) + diff --git a/SOL004/charm-packages/proxy_native_relation_vnf/charms/simple_provides_proxy/lib/charms b/SOL004/charm-packages/proxy_native_relation_vnf/charms/simple_provides_proxy/lib/charms new file mode 120000 index 0000000000000000000000000000000000000000..5abf77aa5a04fddc6d4eb9d0d6d17e36fd00272a --- /dev/null +++ b/SOL004/charm-packages/proxy_native_relation_vnf/charms/simple_provides_proxy/lib/charms @@ -0,0 +1 @@ +../mod/charms.osm/charms/ \ No newline at end of file diff --git a/SOL004/charm-packages/proxy_native_relation_vnf/charms/simple_provides_proxy/lib/ops b/SOL004/charm-packages/proxy_native_relation_vnf/charms/simple_provides_proxy/lib/ops new file mode 120000 index 0000000000000000000000000000000000000000..c36dab1ed4a93aee97a31b0eba0438852c5e05bf --- /dev/null +++ b/SOL004/charm-packages/proxy_native_relation_vnf/charms/simple_provides_proxy/lib/ops @@ -0,0 +1 @@ +../mod/operator/ops/ \ No newline at end of file diff --git a/SOL004/charm-packages/proxy_native_relation_vnf/charms/simple_provides_proxy/metadata.yaml b/SOL004/charm-packages/proxy_native_relation_vnf/charms/simple_provides_proxy/metadata.yaml new file mode 100644 index 0000000000000000000000000000000000000000..76b268bfca697808056fb7386913e592df4b508d --- /dev/null +++ b/SOL004/charm-packages/proxy_native_relation_vnf/charms/simple_provides_proxy/metadata.yaml @@ -0,0 +1,30 @@ +## +# Copyright 2020 Canonical Ltd. +# All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. 
+## + +name: simple-ha-proxy +summary: A simple example proxy charm +description: | + Simple proxy charm is an example charm used in OSM Hackfests +series: + - xenial + - bionic +peers: + proxypeer: + interface: proxypeer +provides: + interface: + interface: interface \ No newline at end of file diff --git a/SOL004/charm-packages/proxy_native_relation_vnf/charms/simple_provides_proxy/src/charm.py b/SOL004/charm-packages/proxy_native_relation_vnf/charms/simple_provides_proxy/src/charm.py new file mode 100755 index 0000000000000000000000000000000000000000..1092fc34023c21e26eeb7f5cb81876c9d5b21c44 --- /dev/null +++ b/SOL004/charm-packages/proxy_native_relation_vnf/charms/simple_provides_proxy/src/charm.py @@ -0,0 +1,74 @@ +#!/usr/bin/env python3 +## +# Copyright 2020 Canonical Ltd. +# All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. +## + +import sys + +sys.path.append("lib") + +from charms.osm.sshproxy import SSHProxyCharm +from ops.main import main +from ops.model import ActiveStatus + +class MySSHProxyCharm(SSHProxyCharm): + + def __init__(self, framework, key): + super().__init__(framework, key) + + # Listen to charm events + self.framework.observe(self.on.config_changed, self.on_config_changed) + self.framework.observe(self.on.install, self.on_install) + self.framework.observe(self.on.start, self.on_start) + + # Listen to the touch action event + self.framework.observe(self.on.touch_action, self.on_touch_action) + self.framework.observe( + self.on.interface_relation_changed, self.on_interface_relation_changed + ) + + def on_config_changed(self, event): + """Handle changes in configuration""" + super().on_config_changed(event) + + def on_install(self, event): + """Called when the charm is being installed""" + super().on_install(event) + + def on_start(self, event): + """Called when the charm is being started""" + super().on_start(event) + + def on_touch_action(self, event): + """Touch a file.""" + + if self.model.unit.is_leader(): + filename = event.params["filename"] + proxy = self.get_ssh_proxy() + stdout, stderr = proxy.run("touch {}".format(filename)) + event.set_results({"output": stdout}) + else: + event.fail("Unit is not leader") + return + + def on_interface_relation_changed(self, event): + parameter = "Hello" + event.relation.data[self.model.unit]["parameter"] = parameter + self.model.unit.status = ActiveStatus("Parameter sent: {}".format(parameter)) + + +if __name__ == "__main__": + main(MySSHProxyCharm) diff --git a/SOL004/charm-packages/proxy_native_relation_vnf/charms/simple_requires/actions.yaml b/SOL004/charm-packages/proxy_native_relation_vnf/charms/simple_requires/actions.yaml new file mode 100644 index 0000000000000000000000000000000000000000..53a706b4eba7347764c4711b7077146e72241de4 --- /dev/null +++ b/SOL004/charm-packages/proxy_native_relation_vnf/charms/simple_requires/actions.yaml @@ -0,0 +1,25 @@ +## +# Copyright 2020 Canonical Ltd. +# All rights reserved. 
+# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. +## +touch: + description: "Touch a file on the VNF." + params: + filename: + description: "The name of the file to touch." + type: string + default: "" + required: + - filename diff --git a/SOL004/charm-packages/proxy_native_relation_vnf/charms/simple_requires/config.yaml b/SOL004/charm-packages/proxy_native_relation_vnf/charms/simple_requires/config.yaml new file mode 100644 index 0000000000000000000000000000000000000000..2be62318c6ce27156e4af585494b36d0c2b802f8 --- /dev/null +++ b/SOL004/charm-packages/proxy_native_relation_vnf/charms/simple_requires/config.yaml @@ -0,0 +1,17 @@ +## +# Copyright 2020 Canonical Ltd. +# All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. +## +options: {} \ No newline at end of file diff --git a/SOL004/charm-packages/proxy_native_relation_vnf/charms/simple_requires/hooks/install b/SOL004/charm-packages/proxy_native_relation_vnf/charms/simple_requires/hooks/install new file mode 100755 index 0000000000000000000000000000000000000000..c243dc90a31ba2fb5c14143a5d658f7b8dfd2dcc --- /dev/null +++ b/SOL004/charm-packages/proxy_native_relation_vnf/charms/simple_requires/hooks/install @@ -0,0 +1,80 @@ +#!/usr/bin/env python3 +## +# Copyright 2020 Canonical Ltd. +# All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. 
+## + +import sys +import subprocess +import logging + +sys.path.append("lib") + +from ops.charm import CharmBase +from ops.main import main +from ops.model import ActiveStatus + +logger = logging.getLogger(__name__) + +class MyNativeCharm(CharmBase): + + def __init__(self, framework, key): + super().__init__(framework, key) + + # Listen to charm events + self.framework.observe(self.on.config_changed, self.on_config_changed) + self.framework.observe(self.on.install, self.on_install) + self.framework.observe(self.on.start, self.on_start) + + # Listen to the touch action event + self.framework.observe(self.on.touch_action, self.on_touch_action) + + # Listen to relation changed + self.framework.observe( + self.on.interface_relation_changed, self.on_interface_relation_changed + ) + def on_config_changed(self, event): + """Handle changes in configuration""" + self.model.unit.status = ActiveStatus() + + def on_install(self, event): + """Called when the charm is being installed""" + self.model.unit.status = ActiveStatus() + + def on_start(self, event): + """Called when the charm is being started""" + self.model.unit.status = ActiveStatus() + + def on_touch_action(self, event): + """Touch a file.""" + + filename = event.params["filename"] + try: + subprocess.run(["touch", filename], check=True) + event.set_results({"created": True, "filename": filename}) + except subprocess.CalledProcessError as e: + event.fail("Action failed: {}".format(e)) + self.model.unit.status = ActiveStatus() + + def on_interface_relation_changed(self, event): + logger.debug("RELATION DATA: {}".format(dict(event.relation.data[event.unit]))) + parameter = event.relation.data[event.unit].get("parameter") + if parameter: + self.model.unit.status = ActiveStatus("Parameter received: {}".format(parameter)) + + +if __name__ == "__main__": + main(MyNativeCharm) + diff --git a/SOL004/charm-packages/proxy_native_relation_vnf/charms/simple_requires/hooks/start b/SOL004/charm-packages/proxy_native_relation_vnf/charms/simple_requires/hooks/start new file mode 100755 index 0000000000000000000000000000000000000000..c243dc90a31ba2fb5c14143a5d658f7b8dfd2dcc --- /dev/null +++ b/SOL004/charm-packages/proxy_native_relation_vnf/charms/simple_requires/hooks/start @@ -0,0 +1,80 @@ +#!/usr/bin/env python3 +## +# Copyright 2020 Canonical Ltd. +# All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. 
+## + +import sys +import subprocess +import logging + +sys.path.append("lib") + +from ops.charm import CharmBase +from ops.main import main +from ops.model import ActiveStatus + +logger = logging.getLogger(__name__) + +class MyNativeCharm(CharmBase): + + def __init__(self, framework, key): + super().__init__(framework, key) + + # Listen to charm events + self.framework.observe(self.on.config_changed, self.on_config_changed) + self.framework.observe(self.on.install, self.on_install) + self.framework.observe(self.on.start, self.on_start) + + # Listen to the touch action event + self.framework.observe(self.on.touch_action, self.on_touch_action) + + # Listen to relation changed + self.framework.observe( + self.on.interface_relation_changed, self.on_interface_relation_changed + ) + def on_config_changed(self, event): + """Handle changes in configuration""" + self.model.unit.status = ActiveStatus() + + def on_install(self, event): + """Called when the charm is being installed""" + self.model.unit.status = ActiveStatus() + + def on_start(self, event): + """Called when the charm is being started""" + self.model.unit.status = ActiveStatus() + + def on_touch_action(self, event): + """Touch a file.""" + + filename = event.params["filename"] + try: + subprocess.run(["touch", filename], check=True) + event.set_results({"created": True, "filename": filename}) + except subprocess.CalledProcessError as e: + event.fail("Action failed: {}".format(e)) + self.model.unit.status = ActiveStatus() + + def on_interface_relation_changed(self, event): + logger.debug("RELATION DATA: {}".format(dict(event.relation.data[event.unit]))) + parameter = event.relation.data[event.unit].get("parameter") + if parameter: + self.model.unit.status = ActiveStatus("Parameter received: {}".format(parameter)) + + +if __name__ == "__main__": + main(MyNativeCharm) + diff --git a/SOL004/charm-packages/proxy_native_relation_vnf/charms/simple_requires/hooks/upgrade-charm b/SOL004/charm-packages/proxy_native_relation_vnf/charms/simple_requires/hooks/upgrade-charm new file mode 100755 index 0000000000000000000000000000000000000000..c243dc90a31ba2fb5c14143a5d658f7b8dfd2dcc --- /dev/null +++ b/SOL004/charm-packages/proxy_native_relation_vnf/charms/simple_requires/hooks/upgrade-charm @@ -0,0 +1,80 @@ +#!/usr/bin/env python3 +## +# Copyright 2020 Canonical Ltd. +# All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. 
+## + +import sys +import subprocess +import logging + +sys.path.append("lib") + +from ops.charm import CharmBase +from ops.main import main +from ops.model import ActiveStatus + +logger = logging.getLogger(__name__) + +class MyNativeCharm(CharmBase): + + def __init__(self, framework, key): + super().__init__(framework, key) + + # Listen to charm events + self.framework.observe(self.on.config_changed, self.on_config_changed) + self.framework.observe(self.on.install, self.on_install) + self.framework.observe(self.on.start, self.on_start) + + # Listen to the touch action event + self.framework.observe(self.on.touch_action, self.on_touch_action) + + # Listen to relation changed + self.framework.observe( + self.on.interface_relation_changed, self.on_interface_relation_changed + ) + def on_config_changed(self, event): + """Handle changes in configuration""" + self.model.unit.status = ActiveStatus() + + def on_install(self, event): + """Called when the charm is being installed""" + self.model.unit.status = ActiveStatus() + + def on_start(self, event): + """Called when the charm is being started""" + self.model.unit.status = ActiveStatus() + + def on_touch_action(self, event): + """Touch a file.""" + + filename = event.params["filename"] + try: + subprocess.run(["touch", filename], check=True) + event.set_results({"created": True, "filename": filename}) + except subprocess.CalledProcessError as e: + event.fail("Action failed: {}".format(e)) + self.model.unit.status = ActiveStatus() + + def on_interface_relation_changed(self, event): + logger.debug("RELATION DATA: {}".format(dict(event.relation.data[event.unit]))) + parameter = event.relation.data[event.unit].get("parameter") + if parameter: + self.model.unit.status = ActiveStatus("Parameter received: {}".format(parameter)) + + +if __name__ == "__main__": + main(MyNativeCharm) + diff --git a/SOL004/charm-packages/proxy_native_relation_vnf/charms/simple_requires/lib/ops b/SOL004/charm-packages/proxy_native_relation_vnf/charms/simple_requires/lib/ops new file mode 120000 index 0000000000000000000000000000000000000000..c36dab1ed4a93aee97a31b0eba0438852c5e05bf --- /dev/null +++ b/SOL004/charm-packages/proxy_native_relation_vnf/charms/simple_requires/lib/ops @@ -0,0 +1 @@ +../mod/operator/ops/ \ No newline at end of file diff --git a/SOL004/charm-packages/proxy_native_relation_vnf/charms/simple_requires/metadata.yaml b/SOL004/charm-packages/proxy_native_relation_vnf/charms/simple_requires/metadata.yaml new file mode 100644 index 0000000000000000000000000000000000000000..071f8fa8d2b605f85a78f2322ba8c096c65e1326 --- /dev/null +++ b/SOL004/charm-packages/proxy_native_relation_vnf/charms/simple_requires/metadata.yaml @@ -0,0 +1,28 @@ +## +# Copyright 2020 Canonical Ltd. +# All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. 
+## + +name: simple-native-requires +summary: A simple native charm +description: | + Simple native charm +series: + - bionic + - xenial + - focal +requires: + interface: + interface: interface \ No newline at end of file diff --git a/SOL004/charm-packages/proxy_native_relation_vnf/charms/simple_requires/src/charm.py b/SOL004/charm-packages/proxy_native_relation_vnf/charms/simple_requires/src/charm.py new file mode 100755 index 0000000000000000000000000000000000000000..c243dc90a31ba2fb5c14143a5d658f7b8dfd2dcc --- /dev/null +++ b/SOL004/charm-packages/proxy_native_relation_vnf/charms/simple_requires/src/charm.py @@ -0,0 +1,80 @@ +#!/usr/bin/env python3 +## +# Copyright 2020 Canonical Ltd. +# All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. +## + +import sys +import subprocess +import logging + +sys.path.append("lib") + +from ops.charm import CharmBase +from ops.main import main +from ops.model import ActiveStatus + +logger = logging.getLogger(__name__) + +class MyNativeCharm(CharmBase): + + def __init__(self, framework, key): + super().__init__(framework, key) + + # Listen to charm events + self.framework.observe(self.on.config_changed, self.on_config_changed) + self.framework.observe(self.on.install, self.on_install) + self.framework.observe(self.on.start, self.on_start) + + # Listen to the touch action event + self.framework.observe(self.on.touch_action, self.on_touch_action) + + # Listen to relation changed + self.framework.observe( + self.on.interface_relation_changed, self.on_interface_relation_changed + ) + def on_config_changed(self, event): + """Handle changes in configuration""" + self.model.unit.status = ActiveStatus() + + def on_install(self, event): + """Called when the charm is being installed""" + self.model.unit.status = ActiveStatus() + + def on_start(self, event): + """Called when the charm is being started""" + self.model.unit.status = ActiveStatus() + + def on_touch_action(self, event): + """Touch a file.""" + + filename = event.params["filename"] + try: + subprocess.run(["touch", filename], check=True) + event.set_results({"created": True, "filename": filename}) + except subprocess.CalledProcessError as e: + event.fail("Action failed: {}".format(e)) + self.model.unit.status = ActiveStatus() + + def on_interface_relation_changed(self, event): + logger.debug("RELATION DATA: {}".format(dict(event.relation.data[event.unit]))) + parameter = event.relation.data[event.unit].get("parameter") + if parameter: + self.model.unit.status = ActiveStatus("Parameter received: {}".format(parameter)) + + +if __name__ == "__main__": + main(MyNativeCharm) + diff --git a/SOL004/charm-packages/proxy_native_relation_vnf/cloud_init/cloud-config.txt b/SOL004/charm-packages/proxy_native_relation_vnf/cloud_init/cloud-config.txt new file mode 100755 index 0000000000000000000000000000000000000000..36c8d1bf2cdebbc4e50d1e8348003f64f419cd0b --- /dev/null +++ b/SOL004/charm-packages/proxy_native_relation_vnf/cloud_init/cloud-config.txt @@ -0,0 +1,12 @@ +#cloud-config +password: osm4u 
+chpasswd: { expire: False } +ssh_pwauth: True + +write_files: +- content: | + # My new helloworld file + + owner: root:root + permissions: '0644' + path: /root/helloworld.txt diff --git a/SOL004/charm-packages/proxy_native_relation_vnf/icons/osm.png b/SOL004/charm-packages/proxy_native_relation_vnf/icons/osm.png new file mode 100644 index 0000000000000000000000000000000000000000..62012d2a2b491bdcd536d62c3c3c863c0d8c1b33 Binary files /dev/null and b/SOL004/charm-packages/proxy_native_relation_vnf/icons/osm.png differ diff --git a/SOL004/charm-packages/proxy_native_relation_vnf/proxy_native_relation_vnfd.yaml b/SOL004/charm-packages/proxy_native_relation_vnf/proxy_native_relation_vnfd.yaml new file mode 100644 index 0000000000000000000000000000000000000000..ffb49af9dc2b5279e8ed5d7440c25afd67094400 --- /dev/null +++ b/SOL004/charm-packages/proxy_native_relation_vnf/proxy_native_relation_vnfd.yaml @@ -0,0 +1,148 @@ +vnfd: + description: A VNF consisting of 2 VDUs, each connected to an external management VL + df: + - id: default-df + instantiation-level: + - id: default-instantiation-level + vdu-level: + - number-of-instances: 1 + vdu-id: simple_requires + - number-of-instances: 1 + vdu-id: simple_provides + vdu-profile: + - id: simple_requires + min-number-of-instances: 1 + - id: simple_provides + min-number-of-instances: 1 + lcm-operations-configuration: + operate-vnf-op-config: + day1-2: + - config-access: + ssh-access: + default-user: ubuntu + required: true + config-primitive: + - name: touch + execution-environment-ref: simple-provides-proxy-ee + parameter: + - data-type: STRING + default-value: /home/ubuntu/touched + name: filename + id: proxy_native_relation-vnf + execution-environment-list: + - id: simple-provides-proxy-ee + juju: + charm: simple_provides_proxy + proxy: true + external-connection-point-ref: provides-mgmt-ext + initial-config-primitive: + - name: config + execution-environment-ref: simple-provides-proxy-ee + parameter: + - name: ssh-hostname + value: + - name: ssh-username + value: ubuntu + seq: 1 + - name: touch + execution-environment-ref: simple-provides-proxy-ee + parameter: + - data-type: STRING + name: filename + value: /home/ubuntu/first-touch + seq: 2 + relation: + - entities: + - endpoint: interface + id: proxy_native_relation-vnf + - endpoint: interface + id: simple_requires + name: relation + - config-access: + ssh-access: + default-user: ubuntu + required: true + config-primitive: + - name: touch + execution-environment-ref: simple-requires-ee + parameter: + - data-type: STRING + default-value: /home/ubuntu/touched + name: filename + id: simple_requires + execution-environment-list: + - id: simple-requires-ee + juju: + charm: simple_requires + proxy: false + external-connection-point-ref: requires-mgmt-ext + initial-config-primitive: + - name: touch + execution-environment-ref: simple-requires-ee + parameter: + - data-type: STRING + name: filename + value: /home/ubuntu/first-touch + seq: 1 + ext-cpd: + - id: requires-mgmt-ext + int-cpd: + cpd: simple_requires-eth0-int + vdu-id: simple_requires + - id: provides-mgmt-ext + int-cpd: + cpd: simple_provides-eth0-int + vdu-id: simple_provides + id: proxy_native_relation-vnf + mgmt-cp: provides-mgmt-ext + product-name: proxy_native_relation-vnf + sw-image-desc: + - id: ubuntu18.04 + image: ubuntu18.04 + name: ubuntu18.04 + vdu: + - cloud-init-file: cloud-config.txt + id: simple_requires + int-cpd: + - id: simple_requires-eth0-int + virtual-network-interface-requirement: + - name: 
simple_requires-eth0 + position: 1 + virtual-interface: + type: PARAVIRT + name: simple_requires + sw-image-desc: ubuntu18.04 + virtual-compute-desc: simple_requires-compute + virtual-storage-desc: + - simple_requires-storage + - cloud-init-file: cloud-config.txt + id: simple_provides + int-cpd: + - id: simple_provides-eth0-int + virtual-network-interface-requirement: + - name: simple_provides-eth0 + position: 1 + virtual-interface: + type: PARAVIRT + name: simple_provides + sw-image-desc: ubuntu18.04 + virtual-compute-desc: simple_provides-compute + virtual-storage-desc: + - simple_provides-storage + version: 1.0 + virtual-compute-desc: + - id: simple_requires-compute + virtual-cpu: + num-virtual-cpu: 1 + virtual-memory: + size: 1.0 + - id: simple_provides-compute + virtual-cpu: + num-virtual-cpu: 1 + virtual-memory: + size: 1.0 + virtual-storage-desc: + - id: simple_requires-storage + size-of-storage: 10 + - id: simple_provides-storage + size-of-storage: 10 diff --git a/SOL004/charm-packages/vnf_relations_ns/icons/osm.png b/SOL004/charm-packages/vnf_relations_ns/icons/osm.png new file mode 100644 index 0000000000000000000000000000000000000000..62012d2a2b491bdcd536d62c3c3c863c0d8c1b33 Binary files /dev/null and b/SOL004/charm-packages/vnf_relations_ns/icons/osm.png differ diff --git a/SOL004/charm-packages/vnf_relations_ns/vnf_relations_nsd.yaml b/SOL004/charm-packages/vnf_relations_ns/vnf_relations_nsd.yaml new file mode 100644 index 0000000000000000000000000000000000000000..a4578c5be4ce175423d60c1fb430db96b35a1d0e --- /dev/null +++ b/SOL004/charm-packages/vnf_relations_ns/vnf_relations_nsd.yaml @@ -0,0 +1,23 @@ +nsd: + nsd: + - description: NS with a single vnf_relations-vnf connected to the mgmtnet VL + df: + - id: default-df + vnf-profile: + - id: '1' + virtual-link-connectivity: + - constituent-cpd-id: + - constituent-base-element-id: '1' + constituent-cpd-id: provides-mgmt-ext + - constituent-base-element-id: '1' + constituent-cpd-id: requires-mgmt-ext + virtual-link-profile-id: mgmtnet + vnfd-id: vnf_relations-vnf + id: vnf_relations-ns + name: vnf_relations-ns + version: '1.0' + virtual-link-desc: + - id: mgmtnet + mgmt-network: true + vnfd-id: + - vnf_relations-vnf diff --git a/SOL004/charm-packages/vnf_relations_vnf/charms/simple_provides/actions.yaml b/SOL004/charm-packages/vnf_relations_vnf/charms/simple_provides/actions.yaml new file mode 100644 index 0000000000000000000000000000000000000000..53a706b4eba7347764c4711b7077146e72241de4 --- /dev/null +++ b/SOL004/charm-packages/vnf_relations_vnf/charms/simple_provides/actions.yaml @@ -0,0 +1,25 @@ +## +# Copyright 2020 Canonical Ltd. +# All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. +## +touch: + description: "Touch a file on the VNF." + params: + filename: + description: "The name of the file to touch."
+ type: string + default: "" + required: + - filename diff --git a/SOL004/charm-packages/vnf_relations_vnf/charms/simple_provides/config.yaml b/SOL004/charm-packages/vnf_relations_vnf/charms/simple_provides/config.yaml new file mode 100644 index 0000000000000000000000000000000000000000..2be62318c6ce27156e4af585494b36d0c2b802f8 --- /dev/null +++ b/SOL004/charm-packages/vnf_relations_vnf/charms/simple_provides/config.yaml @@ -0,0 +1,17 @@ +## +# Copyright 2020 Canonical Ltd. +# All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. +## +options: {} \ No newline at end of file diff --git a/SOL004/charm-packages/vnf_relations_vnf/charms/simple_provides/hooks/install b/SOL004/charm-packages/vnf_relations_vnf/charms/simple_provides/hooks/install new file mode 120000 index 0000000000000000000000000000000000000000..25b1f68fa39d58d33c08ca420c3d439d19be0c55 --- /dev/null +++ b/SOL004/charm-packages/vnf_relations_vnf/charms/simple_provides/hooks/install @@ -0,0 +1 @@ +../src/charm.py \ No newline at end of file diff --git a/SOL004/charm-packages/vnf_relations_vnf/charms/simple_provides/hooks/start b/SOL004/charm-packages/vnf_relations_vnf/charms/simple_provides/hooks/start new file mode 120000 index 0000000000000000000000000000000000000000..25b1f68fa39d58d33c08ca420c3d439d19be0c55 --- /dev/null +++ b/SOL004/charm-packages/vnf_relations_vnf/charms/simple_provides/hooks/start @@ -0,0 +1 @@ +../src/charm.py \ No newline at end of file diff --git a/SOL004/charm-packages/vnf_relations_vnf/charms/simple_provides/hooks/upgrade-charm b/SOL004/charm-packages/vnf_relations_vnf/charms/simple_provides/hooks/upgrade-charm new file mode 120000 index 0000000000000000000000000000000000000000..25b1f68fa39d58d33c08ca420c3d439d19be0c55 --- /dev/null +++ b/SOL004/charm-packages/vnf_relations_vnf/charms/simple_provides/hooks/upgrade-charm @@ -0,0 +1 @@ +../src/charm.py \ No newline at end of file diff --git a/SOL004/charm-packages/vnf_relations_vnf/charms/simple_provides/lib/ops b/SOL004/charm-packages/vnf_relations_vnf/charms/simple_provides/lib/ops new file mode 120000 index 0000000000000000000000000000000000000000..d93419320c2e0d3133a0bc8059e2f259bf5bb213 --- /dev/null +++ b/SOL004/charm-packages/vnf_relations_vnf/charms/simple_provides/lib/ops @@ -0,0 +1 @@ +../mod/operator/ops \ No newline at end of file diff --git a/SOL004/charm-packages/vnf_relations_vnf/charms/simple_provides/metadata.yaml b/SOL004/charm-packages/vnf_relations_vnf/charms/simple_provides/metadata.yaml new file mode 100644 index 0000000000000000000000000000000000000000..db97d146a1734356dc46ec8bd549b4d1154aa418 --- /dev/null +++ b/SOL004/charm-packages/vnf_relations_vnf/charms/simple_provides/metadata.yaml @@ -0,0 +1,28 @@ +## +# Copyright 2020 Canonical Ltd. +# All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. 
You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. +## + +name: simple-native-provides +summary: A simple native charm +description: | + Simple native charm +series: + - bionic + - xenial + - focal +provides: + interface: + interface: interface \ No newline at end of file diff --git a/SOL004/charm-packages/vnf_relations_vnf/charms/simple_provides/src/charm.py b/SOL004/charm-packages/vnf_relations_vnf/charms/simple_provides/src/charm.py new file mode 100755 index 0000000000000000000000000000000000000000..b15f847604d7ea6af2d4d1467cce6c7d1e64bf7e --- /dev/null +++ b/SOL004/charm-packages/vnf_relations_vnf/charms/simple_provides/src/charm.py @@ -0,0 +1,75 @@ +#!/usr/bin/env python3 +## +# Copyright 2020 Canonical Ltd. +# All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. +## + +import sys +import subprocess + +sys.path.append("lib") + +from ops.charm import CharmBase +from ops.main import main +from ops.model import ActiveStatus + +class MyNativeCharm(CharmBase): + + def __init__(self, framework, key): + super().__init__(framework, key) + + # Listen to charm events + self.framework.observe(self.on.config_changed, self.on_config_changed) + self.framework.observe(self.on.install, self.on_install) + self.framework.observe(self.on.start, self.on_start) + + # Listen to the touch action event + self.framework.observe(self.on.touch_action, self.on_touch_action) + + # Listen to relation changed + self.framework.observe( + self.on.interface_relation_changed, self.on_interface_relation_changed + ) + def on_config_changed(self, event): + """Handle changes in configuration""" + self.model.unit.status = ActiveStatus() + + def on_install(self, event): + """Called when the charm is being installed""" + self.model.unit.status = ActiveStatus() + + def on_start(self, event): + """Called when the charm is being started""" + self.model.unit.status = ActiveStatus() + + def on_touch_action(self, event): + """Touch a file.""" + filename = event.params["filename"] + try: + subprocess.run(["touch", filename], check=True) + event.set_results({"created": True, "filename": filename}) + except subprocess.CalledProcessError as e: + event.fail("Action failed: {}".format(e)) + self.model.unit.status = ActiveStatus() + + def on_interface_relation_changed(self, event): + parameter = "Hello" + event.relation.data[self.model.unit]["parameter"] = parameter + self.model.unit.status = ActiveStatus("Parameter sent: {}".format(parameter)) + + +if __name__ == "__main__": + main(MyNativeCharm) + diff --git a/SOL004/charm-packages/vnf_relations_vnf/charms/simple_requires/actions.yaml b/SOL004/charm-packages/vnf_relations_vnf/charms/simple_requires/actions.yaml new 
file mode 100644 index 0000000000000000000000000000000000000000..53a706b4eba7347764c4711b7077146e72241de4 --- /dev/null +++ b/SOL004/charm-packages/vnf_relations_vnf/charms/simple_requires/actions.yaml @@ -0,0 +1,25 @@ +## +# Copyright 2020 Canonical Ltd. +# All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. +## +touch: + description: "Touch a file on the VNF." + params: + filename: + description: "The name of the file to touch." + type: string + default: "" + required: + - filename diff --git a/SOL004/charm-packages/vnf_relations_vnf/charms/simple_requires/config.yaml b/SOL004/charm-packages/vnf_relations_vnf/charms/simple_requires/config.yaml new file mode 100644 index 0000000000000000000000000000000000000000..2be62318c6ce27156e4af585494b36d0c2b802f8 --- /dev/null +++ b/SOL004/charm-packages/vnf_relations_vnf/charms/simple_requires/config.yaml @@ -0,0 +1,17 @@ +## +# Copyright 2020 Canonical Ltd. +# All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. 
+## +options: {} \ No newline at end of file diff --git a/SOL004/charm-packages/vnf_relations_vnf/charms/simple_requires/hooks/install b/SOL004/charm-packages/vnf_relations_vnf/charms/simple_requires/hooks/install new file mode 120000 index 0000000000000000000000000000000000000000..25b1f68fa39d58d33c08ca420c3d439d19be0c55 --- /dev/null +++ b/SOL004/charm-packages/vnf_relations_vnf/charms/simple_requires/hooks/install @@ -0,0 +1 @@ +../src/charm.py \ No newline at end of file diff --git a/SOL004/charm-packages/vnf_relations_vnf/charms/simple_requires/hooks/start b/SOL004/charm-packages/vnf_relations_vnf/charms/simple_requires/hooks/start new file mode 120000 index 0000000000000000000000000000000000000000..25b1f68fa39d58d33c08ca420c3d439d19be0c55 --- /dev/null +++ b/SOL004/charm-packages/vnf_relations_vnf/charms/simple_requires/hooks/start @@ -0,0 +1 @@ +../src/charm.py \ No newline at end of file diff --git a/SOL004/charm-packages/vnf_relations_vnf/charms/simple_requires/hooks/upgrade-charm b/SOL004/charm-packages/vnf_relations_vnf/charms/simple_requires/hooks/upgrade-charm new file mode 120000 index 0000000000000000000000000000000000000000..25b1f68fa39d58d33c08ca420c3d439d19be0c55 --- /dev/null +++ b/SOL004/charm-packages/vnf_relations_vnf/charms/simple_requires/hooks/upgrade-charm @@ -0,0 +1 @@ +../src/charm.py \ No newline at end of file diff --git a/SOL004/charm-packages/vnf_relations_vnf/charms/simple_requires/lib/ops b/SOL004/charm-packages/vnf_relations_vnf/charms/simple_requires/lib/ops new file mode 120000 index 0000000000000000000000000000000000000000..d93419320c2e0d3133a0bc8059e2f259bf5bb213 --- /dev/null +++ b/SOL004/charm-packages/vnf_relations_vnf/charms/simple_requires/lib/ops @@ -0,0 +1 @@ +../mod/operator/ops \ No newline at end of file diff --git a/SOL004/charm-packages/vnf_relations_vnf/charms/simple_requires/metadata.yaml b/SOL004/charm-packages/vnf_relations_vnf/charms/simple_requires/metadata.yaml new file mode 100644 index 0000000000000000000000000000000000000000..071f8fa8d2b605f85a78f2322ba8c096c65e1326 --- /dev/null +++ b/SOL004/charm-packages/vnf_relations_vnf/charms/simple_requires/metadata.yaml @@ -0,0 +1,28 @@ +## +# Copyright 2020 Canonical Ltd. +# All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. +## + +name: simple-native-requires +summary: A simple native charm +description: | + Simple native charm +series: + - bionic + - xenial + - focal +requires: + interface: + interface: interface \ No newline at end of file diff --git a/SOL004/charm-packages/vnf_relations_vnf/charms/simple_requires/src/charm.py b/SOL004/charm-packages/vnf_relations_vnf/charms/simple_requires/src/charm.py new file mode 100755 index 0000000000000000000000000000000000000000..c243dc90a31ba2fb5c14143a5d658f7b8dfd2dcc --- /dev/null +++ b/SOL004/charm-packages/vnf_relations_vnf/charms/simple_requires/src/charm.py @@ -0,0 +1,80 @@ +#!/usr/bin/env python3 +## +# Copyright 2020 Canonical Ltd. +# All rights reserved. 
+# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. +## + +import sys +import subprocess +import logging + +sys.path.append("lib") + +from ops.charm import CharmBase +from ops.main import main +from ops.model import ActiveStatus + +logger = logging.getLogger(__name__) + +class MyNativeCharm(CharmBase): + + def __init__(self, framework, key): + super().__init__(framework, key) + + # Listen to charm events + self.framework.observe(self.on.config_changed, self.on_config_changed) + self.framework.observe(self.on.install, self.on_install) + self.framework.observe(self.on.start, self.on_start) + + # Listen to the touch action event + self.framework.observe(self.on.touch_action, self.on_touch_action) + + # Listen to relation changed + self.framework.observe( + self.on.interface_relation_changed, self.on_interface_relation_changed + ) + def on_config_changed(self, event): + """Handle changes in configuration""" + self.model.unit.status = ActiveStatus() + + def on_install(self, event): + """Called when the charm is being installed""" + self.model.unit.status = ActiveStatus() + + def on_start(self, event): + """Called when the charm is being started""" + self.model.unit.status = ActiveStatus() + + def on_touch_action(self, event): + """Touch a file.""" + + filename = event.params["filename"] + try: + subprocess.run(["touch", filename], check=True) + event.set_results({"created": True, "filename": filename}) + except subprocess.CalledProcessError as e: + event.fail("Action failed: {}".format(e)) + self.model.unit.status = ActiveStatus() + + def on_interface_relation_changed(self, event): + logger.debug("RELATION DATA: {}".format(dict(event.relation.data[event.unit]))) + parameter = event.relation.data[event.unit].get("parameter") + if parameter: + self.model.unit.status = ActiveStatus("Parameter received: {}".format(parameter)) + + +if __name__ == "__main__": + main(MyNativeCharm) + diff --git a/SOL004/charm-packages/vnf_relations_vnf/cloud_init/cloud-config.txt b/SOL004/charm-packages/vnf_relations_vnf/cloud_init/cloud-config.txt new file mode 100755 index 0000000000000000000000000000000000000000..36c8d1bf2cdebbc4e50d1e8348003f64f419cd0b --- /dev/null +++ b/SOL004/charm-packages/vnf_relations_vnf/cloud_init/cloud-config.txt @@ -0,0 +1,12 @@ +#cloud-config +password: osm4u +chpasswd: { expire: False } +ssh_pwauth: True + +write_files: +- content: | + # My new helloworld file + + owner: root:root + permissions: '0644' + path: /root/helloworld.txt diff --git a/SOL004/charm-packages/vnf_relations_vnf/icons/osm.png b/SOL004/charm-packages/vnf_relations_vnf/icons/osm.png new file mode 100644 index 0000000000000000000000000000000000000000..62012d2a2b491bdcd536d62c3c3c863c0d8c1b33 Binary files /dev/null and b/SOL004/charm-packages/vnf_relations_vnf/icons/osm.png differ diff --git a/SOL004/charm-packages/vnf_relations_vnf/vnf_relations_vnfd.yaml b/SOL004/charm-packages/vnf_relations_vnf/vnf_relations_vnfd.yaml new file mode 100644 index 
0000000000000000000000000000000000000000..f0a5b25e7dbd9bb31717f476256097be05d53b96 --- /dev/null +++ b/SOL004/charm-packages/vnf_relations_vnf/vnf_relations_vnfd.yaml @@ -0,0 +1,142 @@ +vnfd: + description: A VNF consisting of 2 VDUs running native charms that are related to + each other, both exposed through management external CPs + df: + - id: default-df + instantiation-level: + - id: default-instantiation-level + vdu-level: + - number-of-instances: 1 + vdu-id: simple_requires + - number-of-instances: 1 + vdu-id: simple_provides + vdu-profile: + - id: simple_requires + min-number-of-instances: 1 + - id: simple_provides + min-number-of-instances: 1 + lcm-operations-configuration: + operate-vnf-op-config: + day1-2: + - id: vnf_relations-vnf + relation: + - entities: + - endpoint: interface + id: simple_provides + - endpoint: interface + id: simple_requires + name: relation + - config-access: + ssh-access: + default-user: ubuntu + required: true + config-primitive: + - name: touch + execution-environment-ref: simple-requires-ee + parameter: + - data-type: STRING + default-value: /home/ubuntu/touched + name: filename + id: simple_requires + execution-environment-list: + - id: simple-requires-ee + juju: + charm: simple_requires + proxy: false + external-connection-point-ref: requires-mgmt-ext + initial-config-primitive: + - name: touch + execution-environment-ref: simple-requires-ee + parameter: + - data-type: STRING + name: filename + value: /home/ubuntu/first-touch + seq: 1 + - config-access: + ssh-access: + default-user: ubuntu + required: true + config-primitive: + - name: touch + execution-environment-ref: simple-provides-ee + parameter: + - data-type: STRING + default-value: /home/ubuntu/touched + name: filename + id: simple_provides + execution-environment-list: + - id: simple-provides-ee + juju: + charm: simple_provides + proxy: false + external-connection-point-ref: provides-mgmt-ext + initial-config-primitive: + - name: touch + execution-environment-ref: simple-provides-ee + parameter: + - data-type: STRING + name: filename + value: /home/ubuntu/first-touch + seq: 1 + ext-cpd: + - id: requires-mgmt-ext + int-cpd: + cpd: simple_requires-eth0-int + vdu-id: simple_requires + - id: provides-mgmt-ext + int-cpd: + cpd: simple_provides-eth0-int + vdu-id: simple_provides + id: vnf_relations-vnf + mgmt-cp: requires-mgmt-ext + product-name: vnf_relations-vnf + sw-image-desc: + - id: ubuntu18.04 + image: ubuntu18.04 + name: ubuntu18.04 + vdu: + - cloud-init-file: cloud-config.txt + id: simple_requires + int-cpd: + - id: simple_requires-eth0-int + virtual-network-interface-requirement: + - name: simple_requires-eth0 + position: 1 + virtual-interface: + type: PARAVIRT + name: simple_requires + sw-image-desc: ubuntu18.04 + virtual-compute-desc: simple_requires-compute + virtual-storage-desc: + - simple_requires-storage + - cloud-init-file: cloud-config.txt + id: simple_provides + int-cpd: + - id: simple_provides-eth0-int + virtual-network-interface-requirement: + - name: simple_provides-eth0 + position: 1 + virtual-interface: + type: PARAVIRT + name: simple_provides + sw-image-desc: ubuntu18.04 + virtual-compute-desc: simple_provides-compute + virtual-storage-desc: + - simple_provides-storage + version: 1.0 + virtual-compute-desc: + - id: simple_requires-compute + virtual-cpu: + num-virtual-cpu: 1 + virtual-memory: + size: 1.0 + - id: simple_provides-compute + virtual-cpu: + num-virtual-cpu: 1 + virtual-memory: + size: 1.0 + virtual-storage-desc: + - id: simple_requires-storage + size-of-storage: 10 + - id:
simple_provides-storage + size-of-storage: 10 diff --git a/SOL004/cirros_alarm_ns/cirros_alarm_nsd.yaml b/SOL004/cirros_alarm_ns/cirros_alarm_nsd.yaml new file mode 100644 index 0000000000000000000000000000000000000000..40030c295259ca34aadbe2bc66068630e274b928 --- /dev/null +++ b/SOL004/cirros_alarm_ns/cirros_alarm_nsd.yaml @@ -0,0 +1,22 @@ +nsd: + nsd: + - description: Simple NS example with a cirros_alarm-vnf + designer: OSM + df: + - id: default-df + vnf-profile: + - id: '1' + virtual-link-connectivity: + - constituent-cpd-id: + - constituent-base-element-id: '1' + constituent-cpd-id: eth0-ext + virtual-link-profile-id: mgmtnet + vnfd-id: cirros_alarm-vnf + id: cirros_alarm-ns + name: cirros_alarm-ns + version: '1.0' + virtual-link-desc: + - id: mgmtnet + mgmt-network: true + vnfd-id: + - cirros_alarm-vnf diff --git a/SOL004/cirros_alarm_vnf/cirros_alarm_vnfd.yaml b/SOL004/cirros_alarm_vnf/cirros_alarm_vnfd.yaml new file mode 100644 index 0000000000000000000000000000000000000000..8a301b5b39528af1c9c574c7aaa5a08090ccca33 --- /dev/null +++ b/SOL004/cirros_alarm_vnf/cirros_alarm_vnfd.yaml @@ -0,0 +1,70 @@ +vnfd: + description: Simple VNF example with a cirros and a VNF alarm + df: + - id: default-df + instantiation-level: + - id: default-instantiation-level + vdu-level: + - number-of-instances: 1 + vdu-id: cirros_vnfd-VM + vdu-profile: + - id: cirros_vnfd-VM + min-number-of-instances: 1 + ext-cpd: + - id: eth0-ext + int-cpd: + cpd: eth0-int + vdu-id: cirros_vnfd-VM + id: cirros_alarm-vnf + mgmt-cp: eth0-ext + product-name: cirros_alarm-vnf + provider: OSM + sw-image-desc: + - id: cirros-0.3.5-x86_64-disk.img + image: cirros-0.3.5-x86_64-disk.img + name: cirros-0.3.5-x86_64-disk.img + vdu: + - alarm: + - actions: + alarm: + - url: ${WEBHOOK_URL} + insufficient-data: + - url: ${WEBHOOK_URL} + ok: + - url: ${WEBHOOK_URL} + alarm-id: alarm-1 + operation: LT + value: 20 + vnf-monitoring-param-ref: cirros_vnf_cpu_util + description: cirros_vnfd-VM + id: cirros_vnfd-VM + int-cpd: + - id: eth0-int + virtual-network-interface-requirement: + - name: eth0 + virtual-interface: + bandwidth: 0 + type: VIRTIO + vpci: 0000:00:0a.0 + monitoring-parameter: + - id: cirros_vnf_cpu_util + name: cirros_vnf_cpu_util + performance-metric: cpu_utilization + - id: cirros_vnf_average_memory_utilization + name: cirros_vnf_average_memory_utilization + performance-metric: average_memory_utilization + name: cirros_vnfd-VM + sw-image-desc: cirros-0.3.5-x86_64-disk.img + virtual-compute-desc: cirros_vnfd-VM-compute + virtual-storage-desc: + - cirros_vnfd-VM-storage + version: '1.0' + virtual-compute-desc: + - id: cirros_vnfd-VM-compute + virtual-cpu: + num-virtual-cpu: 1 + virtual-memory: + size: 1 + virtual-storage-desc: + - id: cirros_vnfd-VM-storage + size-of-storage: 2 diff --git a/SOL004/epa_1vm_passthrough_ns/epa_1vm_passthrough_nsd.yaml b/SOL004/epa_1vm_passthrough_ns/epa_1vm_passthrough_nsd.yaml new file mode 100755 index 0000000000000000000000000000000000000000..35bcd0c31836c469f88e1f430bba429aa1472466 --- /dev/null +++ b/SOL004/epa_1vm_passthrough_ns/epa_1vm_passthrough_nsd.yaml @@ -0,0 +1,38 @@ +nsd: + nsd: + - description: NS with 2 VNFs epa_1vm_passthrough-vnf connected by datanet and mgmtnet + VLs + df: + - id: default-df + vnf-profile: + - id: '1' + virtual-link-connectivity: + - constituent-cpd-id: + - constituent-base-element-id: '1' + constituent-cpd-id: vnf-mgmt-ext + virtual-link-profile-id: mgmtnet + - constituent-cpd-id: + - constituent-base-element-id: '1' + constituent-cpd-id: vnf-data-ext + 
virtual-link-profile-id: datanet + vnfd-id: epa_1vm_passthrough-vnf + - id: '2' + virtual-link-connectivity: + - constituent-cpd-id: + - constituent-base-element-id: '2' + constituent-cpd-id: vnf-mgmt-ext + virtual-link-profile-id: mgmtnet + - constituent-cpd-id: + - constituent-base-element-id: '2' + constituent-cpd-id: vnf-data-ext + virtual-link-profile-id: datanet + vnfd-id: epa_1vm_passthrough-vnf + id: epa_1vm_passthrough-ns + name: epa_1vm_passthrough-ns + version: '1.0' + virtual-link-desc: + - id: mgmtnet + mgmt-network: true + - id: datanet + vnfd-id: + - epa_1vm_passthrough-vnf diff --git a/SOL004/epa_1vm_passthrough_ns/icons/osm.png b/SOL004/epa_1vm_passthrough_ns/icons/osm.png new file mode 100755 index 0000000000000000000000000000000000000000..62012d2a2b491bdcd536d62c3c3c863c0d8c1b33 Binary files /dev/null and b/SOL004/epa_1vm_passthrough_ns/icons/osm.png differ diff --git a/SOL004/epa_1vm_passthrough_vnf/cloud_init/cloud-config.txt b/SOL004/epa_1vm_passthrough_vnf/cloud_init/cloud-config.txt new file mode 100755 index 0000000000000000000000000000000000000000..25b63cc647effa3d8c9ee922beaac1782339ba4f --- /dev/null +++ b/SOL004/epa_1vm_passthrough_vnf/cloud_init/cloud-config.txt @@ -0,0 +1,4 @@ +#cloud-config +password: osm4u +chpasswd: { expire: False } +ssh_pwauth: True diff --git a/SOL004/epa_1vm_passthrough_vnf/epa_1vm_passthrough_vnfd.yaml b/SOL004/epa_1vm_passthrough_vnf/epa_1vm_passthrough_vnfd.yaml new file mode 100755 index 0000000000000000000000000000000000000000..d7512e3d155a5c5096cbded464950a9f0731bdc8 --- /dev/null +++ b/SOL004/epa_1vm_passthrough_vnf/epa_1vm_passthrough_vnfd.yaml @@ -0,0 +1,69 @@ +vnfd: + description: A VNF consisting of 1 VDU with 1 paravirt and 1 passthrough interface + df: + - id: default-df + instantiation-level: + - id: default-instantiation-level + vdu-level: + - number-of-instances: 1 + vdu-id: dataVM + vdu-profile: + - id: dataVM + min-number-of-instances: 1 + ext-cpd: + - id: vnf-mgmt-ext + int-cpd: + cpd: eth0-int + vdu-id: dataVM + - id: vnf-data-ext + int-cpd: + cpd: xe0-int + vdu-id: dataVM + id: epa_1vm_passthrough-vnf + mgmt-cp: vnf-mgmt-ext + product-name: epa_1vm_passthrough-vnf + sw-image-desc: + - id: ubuntu20.04 + image: ubuntu20.04 + name: ubuntu20.04 + vdu: + - cloud-init-file: cloud-config.txt + id: dataVM + int-cpd: + - id: eth0-int + virtual-network-interface-requirement: + - name: eth0 + position: 1 + virtual-interface: + type: PARAVIRT + - id: xe0-int + virtual-network-interface-requirement: + - name: xe0 + position: 2 + virtual-interface: + type: PCI-PASSTHROUGH + name: dataVM + sw-image-desc: ubuntu20.04 + virtual-compute-desc: dataVM-compute + virtual-storage-desc: + - dataVM-storage + version: '1.0' + virtual-compute-desc: + - id: dataVM-compute + virtual-cpu: + num-virtual-cpu: 2 + pinning: + policy: static + thread-policy: PREFER + virtual-memory: + mempage-size: LARGE + numa-enabled: true + numa-node-policy: + mem-policy: STRICT + node: + - id: 1 + node-cnt: 1 + size: 4.0 + virtual-storage-desc: + - id: dataVM-storage + size-of-storage: 10 diff --git a/SOL004/epa_1vm_passthrough_vnf/icons/osm.png b/SOL004/epa_1vm_passthrough_vnf/icons/osm.png new file mode 100755 index 0000000000000000000000000000000000000000..62012d2a2b491bdcd536d62c3c3c863c0d8c1b33 Binary files /dev/null and b/SOL004/epa_1vm_passthrough_vnf/icons/osm.png differ diff --git a/SOL004/epa_1vm_sriov_ns/epa_1vm_sriov_nsd.yaml b/SOL004/epa_1vm_sriov_ns/epa_1vm_sriov_nsd.yaml new file mode 100644 index 
0000000000000000000000000000000000000000..a0c66e51b953615d0b4ec761103e2fd5614fe582 --- /dev/null +++ b/SOL004/epa_1vm_sriov_ns/epa_1vm_sriov_nsd.yaml @@ -0,0 +1,38 @@ +nsd: + nsd: + - description: NS with 2 VNFs epa_1vm_sriov-vnf connected by datanet and mgmtnet + VLs + df: + - id: default-df + vnf-profile: + - id: '1' + virtual-link-connectivity: + - constituent-cpd-id: + - constituent-base-element-id: '1' + constituent-cpd-id: vnf-mgmt-ext + virtual-link-profile-id: mgmtnet + - constituent-cpd-id: + - constituent-base-element-id: '1' + constituent-cpd-id: vnf-data-ext + virtual-link-profile-id: datanet + vnfd-id: epa_1vm_sriov-vnf + - id: '2' + virtual-link-connectivity: + - constituent-cpd-id: + - constituent-base-element-id: '2' + constituent-cpd-id: vnf-mgmt-ext + virtual-link-profile-id: mgmtnet + - constituent-cpd-id: + - constituent-base-element-id: '2' + constituent-cpd-id: vnf-data-ext + virtual-link-profile-id: datanet + vnfd-id: epa_1vm_sriov-vnf + id: epa_1vm_sriov-ns + name: epa_1vm_sriov-ns + version: '1.0' + virtual-link-desc: + - id: mgmtnet + mgmt-network: true + - id: datanet + vnfd-id: + - epa_1vm_sriov-vnf diff --git a/SOL004/epa_1vm_sriov_ns/icons/osm.png b/SOL004/epa_1vm_sriov_ns/icons/osm.png new file mode 100644 index 0000000000000000000000000000000000000000..62012d2a2b491bdcd536d62c3c3c863c0d8c1b33 Binary files /dev/null and b/SOL004/epa_1vm_sriov_ns/icons/osm.png differ diff --git a/SOL004/epa_1vm_sriov_vnf/cloud_init/cloud-config.txt b/SOL004/epa_1vm_sriov_vnf/cloud_init/cloud-config.txt new file mode 100644 index 0000000000000000000000000000000000000000..25b63cc647effa3d8c9ee922beaac1782339ba4f --- /dev/null +++ b/SOL004/epa_1vm_sriov_vnf/cloud_init/cloud-config.txt @@ -0,0 +1,4 @@ +#cloud-config +password: osm4u +chpasswd: { expire: False } +ssh_pwauth: True diff --git a/SOL004/epa_1vm_sriov_vnf/epa_1vm_sriov_vnfd.yaml b/SOL004/epa_1vm_sriov_vnf/epa_1vm_sriov_vnfd.yaml new file mode 100644 index 0000000000000000000000000000000000000000..f374fae17b9073c6490bff80911862c712ac54b4 --- /dev/null +++ b/SOL004/epa_1vm_sriov_vnf/epa_1vm_sriov_vnfd.yaml @@ -0,0 +1,69 @@ +vnfd: + description: A VNF consisting of 1 VDU with 1 paravirt and 1 SRIOV interface + df: + - id: default-df + instantiation-level: + - id: default-instantiation-level + vdu-level: + - number-of-instances: 1 + vdu-id: dataVM + vdu-profile: + - id: dataVM + min-number-of-instances: 1 + ext-cpd: + - id: vnf-mgmt-ext + int-cpd: + cpd: eth0-int + vdu-id: dataVM + - id: vnf-data-ext + int-cpd: + cpd: xe0-int + vdu-id: dataVM + id: epa_1vm_sriov-vnf + mgmt-cp: vnf-mgmt-ext + product-name: epa_1vm_sriov-vnf + sw-image-desc: + - id: ubuntu20.04 + image: ubuntu20.04 + name: ubuntu20.04 + vdu: + - cloud-init-file: cloud-config.txt + id: dataVM + int-cpd: + - id: eth0-int + virtual-network-interface-requirement: + - name: eth0 + position: 1 + virtual-interface: + type: PARAVIRT + - id: xe0-int + virtual-network-interface-requirement: + - name: xe0 + position: 2 + virtual-interface: + type: SR-IOV + name: dataVM + sw-image-desc: ubuntu20.04 + virtual-compute-desc: dataVM-compute + virtual-storage-desc: + - dataVM-storage + version: '1.0' + virtual-compute-desc: + - id: dataVM-compute + virtual-cpu: + num-virtual-cpu: 2 + pinning: + policy: static + thread-policy: PREFER + virtual-memory: + mempage-size: LARGE + numa-enabled: true + numa-node-policy: + mem-policy: STRICT + node: + - id: 1 + node-cnt: 1 + size: 4.0 + virtual-storage-desc: + - id: dataVM-storage + size-of-storage: 10 diff --git 
a/SOL004/epa_1vm_sriov_vnf/icons/osm.png b/SOL004/epa_1vm_sriov_vnf/icons/osm.png new file mode 100644 index 0000000000000000000000000000000000000000..62012d2a2b491bdcd536d62c3c3c863c0d8c1b33 Binary files /dev/null and b/SOL004/epa_1vm_sriov_vnf/icons/osm.png differ diff --git a/SOL004/epa_2vm_sriov_ns/epa_2vm_sriov_nsd.yaml b/SOL004/epa_2vm_sriov_ns/epa_2vm_sriov_nsd.yaml new file mode 100644 index 0000000000000000000000000000000000000000..57ae239725b33b9ab46ce11bed72c2f69a119d93 --- /dev/null +++ b/SOL004/epa_2vm_sriov_ns/epa_2vm_sriov_nsd.yaml @@ -0,0 +1,38 @@ +nsd: + nsd: + - description: NS with 2 VNFs epa_2vm_sriov-vnf connected by datanet and mgmtnet + VLs + df: + - id: default-df + vnf-profile: + - id: '1' + virtual-link-connectivity: + - constituent-cpd-id: + - constituent-base-element-id: '1' + constituent-cpd-id: vnf-mgmt-ext + virtual-link-profile-id: mgmtnet + - constituent-cpd-id: + - constituent-base-element-id: '1' + constituent-cpd-id: vnf-data-ext + virtual-link-profile-id: datanet + vnfd-id: epa_2vm_sriov-vnf + - id: '2' + virtual-link-connectivity: + - constituent-cpd-id: + - constituent-base-element-id: '2' + constituent-cpd-id: vnf-mgmt-ext + virtual-link-profile-id: mgmtnet + - constituent-cpd-id: + - constituent-base-element-id: '2' + constituent-cpd-id: vnf-data-ext + virtual-link-profile-id: datanet + vnfd-id: epa_2vm_sriov-vnf + id: epa_2vm_sriov-ns + name: epa_2vm_sriov-ns + version: '1.0' + virtual-link-desc: + - id: mgmtnet + mgmt-network: true + - id: datanet + vnfd-id: + - epa_2vm_sriov-vnf diff --git a/SOL004/epa_2vm_sriov_ns/icons/osm.png b/SOL004/epa_2vm_sriov_ns/icons/osm.png new file mode 100644 index 0000000000000000000000000000000000000000..62012d2a2b491bdcd536d62c3c3c863c0d8c1b33 Binary files /dev/null and b/SOL004/epa_2vm_sriov_ns/icons/osm.png differ diff --git a/SOL004/epa_2vm_sriov_vnf/cloud_init/cloud-config-data.txt b/SOL004/epa_2vm_sriov_vnf/cloud_init/cloud-config-data.txt new file mode 100644 index 0000000000000000000000000000000000000000..cf1e7a5ebe6047de6c5ee5dbbdc9d5d2e46f36ed --- /dev/null +++ b/SOL004/epa_2vm_sriov_vnf/cloud_init/cloud-config-data.txt @@ -0,0 +1,6 @@ +#cloud-config +password: osm4u +chpasswd: { expire: False } +ssh_pwauth: True +ssh_authorized_keys: + - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC49EBX7pW4ogqGMqT4y/SBDGXibz+XSgH8q5IEgt4BHlsHM2tWTtWFs/ULQ9Rq5BF781h96cludcLL9LnFAH8LVJCfiqfVnXEvwVNsVZqUVcTRyOnUMWGiJMptOu6JQkwVkPAnk0bMRjerWUhySCduyAQOhVE7DBcE6VjCu3q8cqAURzAcbBk8AtCPSvKCMhE45+9h+0V9joPV+MpsgHdqsoASiHIXTGhybcH5cS25epPpYuyTatQTBzz5WGiwUz//FsoI4KHZ+bcXXq4b+HkecplPgUtu+tVvRre/JoYmr6YrqCk9Qqf7bpEC6i4rjU7BzMWTuys7rrY8nAB4ytAN diff --git a/SOL004/epa_2vm_sriov_vnf/cloud_init/cloud-config-mgmt.txt b/SOL004/epa_2vm_sriov_vnf/cloud_init/cloud-config-mgmt.txt new file mode 100644 index 0000000000000000000000000000000000000000..ea35add978bab82daa29a5bac425d7debe1d4410 --- /dev/null +++ b/SOL004/epa_2vm_sriov_vnf/cloud_init/cloud-config-mgmt.txt @@ -0,0 +1,41 @@ +#cloud-config +password: osm4u +chpasswd: { expire: False } +ssh_pwauth: True +write_files: +- content: | + -----BEGIN RSA PRIVATE KEY----- + MIIEowIBAAKCAQEAuPRAV+6VuKIKhjKk+Mv0gQxl4m8/l0oB/KuSBILeAR5bBzNr + Vk7VhbP1C0PUauQRe/NYfenJbnXCy/S5xQB/C1SQn4qn1Z1xL8FTbFWalFXE0cjp + 1DFhoiTKbTruiUJMFZDwJ5NGzEY3q1lIckgnbsgEDoVROwwXBOlYwrt6vHKgFEcw + HGwZPALQj0rygjIROOfvYftFfY6D1fjKbIB3arKAEohyF0xocm3B+XEtuXqT6WLs + k2rUEwc8+VhosFM//xbKCOCh2fm3F16uG/h5HnKZT4FLbvrVb0a3vyaGJq+mK6gp + PUKn+26RAuouK41OwczFk7srO662PJwAeMrQDQIDAQABAoIBAEm73jbv/7JWr1lm + 
sUwai0QzPB70eOaIc5hWkeTIg1bw0hthzWXgEdG2t3chOHrQp4PBtMKlxE8MFXeX + /cNi+kL7GJjx/wzzMl92dBqZWgHO26NCtK2KbkDk5+p59wSzcH+cg5FTboMbbzAZ + bP9acpYAmVVsosCmhjrICEHB2iFr5sopbHgq9MEq1vQhV8fZJf8NDjmAPjug1XFa + 2vi3G+bKuCLKBabsFVMabcCKlTlSrjUcwijJDatbJkmiz3Sg7IcczQbz++BXdhCU + UkjpLA5tAvIyncQpX5pAU467rO2E+Wcrq9NHl43vN2XHM3g6wIRjSBkmgndBFLds + En5WloUCgYEA71/Am0RCjyoF1IUJ7DpkjWxinkhzn6P0vIIAIRvMMwcEKPHoclBv + 2k0ZdFkGfI4PDZgWwbFsvMysF/lB6bmw8tFxDL2g3cGEdVFBPC0L1qsf5jOKKe1d + eqtarToIgcya4/9ALNeci4DiH6coZrZ3Pg4/6rkhhhm5yWjl1prgFqsCgYEAxczf + VANZVg9JsbXjRnygRxOx735Ug17+dxaDdfdzYviH1XEqWMFyGCmpnk5aT8aHJpIY + DroaAvgAOEkVje5Pv9PpU6GuAh2USL11l4Hp8z3I1YJNkmABUa0aPe5ZQEG5NruH + dUTwJhXuo1buKbhg8j81DrPF/HAQOTMjpASmFCcCgYAjZmm5jJK7UV+jWjlpcglE + 0O6UsepHhZu/9QnH27CLK1J2K7BQl4yzIAvPpQrMiMW5IPDcnDyUf0BEG1ygEBIX + Fto9JB4SLXhFUUrwd0j06kHBbYXVRYE5jvXOAHMZnwSZCzqWZxyDlP/b5oCXEAz6 + ZGkXcWF/z3YmTxkOb8EMGQKBgAlHHMqSBaS1vt0pDOoHenUbBWOYZ1pqIEFxuuTT + oIPp6GUok7XqDPH5Mk2Mm0vWogw7GgnGzOWKwGfjdbFclj0HMZCMqt7KiGQJDvT2 + UQTpxvvMytqsyiMMnYh+H42WB5v7m9TyUNlYegMLVsx6K4HxWQnBgO3gS8KDFY7h + 8PBNAoGBAN+92Vayc9CD42Kg8nRN0Gy15zE4x8R6x4bxys47qC7ST3xVJgQHlOk4 + weBWzLb+eW2bvKogRm5xS4nmXzg/fFWaLzwDGRHawKPO8LpSOdXNeV54YncaFuCZ + LPMO7xo6dYNtdI3mIk+K3lSjxgeIE1fMmFVm4ZJWgY5qeKPnMzfc + -----END RSA PRIVATE KEY----- + + + owner: root:root + permissions: '0600' + path: /home/ubuntu/test.pem +runcmd: +- [ sh, -c, "chown ubuntu:ubuntu /home/ubuntu/test.pem" ] + diff --git a/SOL004/epa_2vm_sriov_vnf/epa_2vm_sriov_vnfd.yaml b/SOL004/epa_2vm_sriov_vnf/epa_2vm_sriov_vnfd.yaml new file mode 100644 index 0000000000000000000000000000000000000000..cdd4140bb9a77bd10e7e819576542acb927de583 --- /dev/null +++ b/SOL004/epa_2vm_sriov_vnf/epa_2vm_sriov_vnfd.yaml @@ -0,0 +1,115 @@ +vnfd: + description: A VNF consisting of 2 VDUs with EPA capabilities connected to an internal + VL, 1 external CP based on paravirt, 1 external CP based on SR-IOV + df: + - id: default-df + instantiation-level: + - id: default-instantiation-level + vdu-level: + - number-of-instances: 1 + vdu-id: mgmtVM + - number-of-instances: 1 + vdu-id: dataVM + vdu-profile: + - id: mgmtVM + min-number-of-instances: 1 + - id: dataVM + min-number-of-instances: 1 + ext-cpd: + - id: vnf-mgmt-ext + int-cpd: + cpd: mgmtVM-eth0-int + vdu-id: mgmtVM + - id: vnf-data-ext + int-cpd: + cpd: xe0-int + vdu-id: dataVM + id: epa_2vm_sriov-vnf + int-virtual-link-desc: + - id: internal + mgmt-cp: vnf-mgmt-ext + product-name: epa_2vm_sriov-vnf + sw-image-desc: + - id: ubuntu20.04 + image: ubuntu20.04 + name: ubuntu20.04 + vdu: + - cloud-init-file: cloud-config-mgmt.txt + id: mgmtVM + int-cpd: + - id: mgmtVM-eth0-int + virtual-network-interface-requirement: + - name: mgmtVM-eth0 + position: 1 + virtual-interface: + type: PARAVIRT + - id: mgmtVM-eth1-int + int-virtual-link-desc: internal + virtual-network-interface-requirement: + - name: mgmtVM-eth1 + position: 2 + virtual-interface: + type: PARAVIRT + name: mgmtVM + sw-image-desc: ubuntu20.04 + virtual-compute-desc: mgmtVM-compute + virtual-storage-desc: + - mgmtVM-storage + - cloud-init-file: cloud-config-data.txt + id: dataVM + int-cpd: + - id: eth0-int + int-virtual-link-desc: internal + virtual-network-interface-requirement: + - name: eth0 + position: 1 + virtual-interface: + type: PARAVIRT + - id: xe0-int + virtual-network-interface-requirement: + - name: xe0 + position: 2 + virtual-interface: + type: SR-IOV + name: dataVM + sw-image-desc: ubuntu20.04 + virtual-compute-desc: dataVM-compute + virtual-storage-desc: + - dataVM-storage + version: '1.0' + 
virtual-compute-desc: + - id: mgmtVM-compute + virtual-cpu: + num-virtual-cpu: 1 + pinning: + policy: static + thread-policy: PREFER + virtual-memory: + mempage-size: LARGE + numa-enabled: true + numa-node-policy: + mem-policy: STRICT + node: + - id: 1 + node-cnt: 1 + size: 1.0 + - id: dataVM-compute + virtual-cpu: + num-virtual-cpu: 8 + pinning: + policy: static + thread-policy: PREFER + virtual-memory: + mempage-size: LARGE + numa-enabled: true + numa-node-policy: + mem-policy: STRICT + node: + - id: 1 + node-cnt: 1 + size: 4.0 + virtual-storage-desc: + - id: mgmtVM-storage + size-of-storage: 10 + - id: dataVM-storage + size-of-storage: 10 diff --git a/SOL004/epa_2vm_sriov_vnf/icons/osm.png b/SOL004/epa_2vm_sriov_vnf/icons/osm.png new file mode 100644 index 0000000000000000000000000000000000000000..62012d2a2b491bdcd536d62c3c3c863c0d8c1b33 Binary files /dev/null and b/SOL004/epa_2vm_sriov_vnf/icons/osm.png differ diff --git a/SOL004/epa_quota_ns/epa_quota_nsd.yaml b/SOL004/epa_quota_ns/epa_quota_nsd.yaml new file mode 100644 index 0000000000000000000000000000000000000000..c5d4a0cb8fb914ce6c081f95d7829302d6796c7e --- /dev/null +++ b/SOL004/epa_quota_ns/epa_quota_nsd.yaml @@ -0,0 +1,21 @@ +nsd: + nsd: + - description: Simple NS with a single VNF and a single VL + df: + - id: default-df + vnf-profile: + - id: '1' + virtual-link-connectivity: + - constituent-cpd-id: + - constituent-base-element-id: '1' + constituent-cpd-id: vnf-cp0-ext + virtual-link-profile-id: mgmtnet + vnfd-id: epa_quota-vnf + id: epa_quota-ns + name: epa_quota-ns + version: '1.0' + virtual-link-desc: + - id: mgmtnet + mgmt-network: true + vnfd-id: + - epa_quota-vnf diff --git a/SOL004/epa_quota_ns/icons/osm.png b/SOL004/epa_quota_ns/icons/osm.png new file mode 100644 index 0000000000000000000000000000000000000000..62012d2a2b491bdcd536d62c3c3c863c0d8c1b33 Binary files /dev/null and b/SOL004/epa_quota_ns/icons/osm.png differ diff --git a/SOL004/epa_quota_vnf/epa_quota_vnfd.yaml b/SOL004/epa_quota_vnf/epa_quota_vnfd.yaml new file mode 100644 index 0000000000000000000000000000000000000000..1b1bfb5306eec8926ea48576b8942ebf148e66a9 --- /dev/null +++ b/SOL004/epa_quota_vnf/epa_quota_vnfd.yaml @@ -0,0 +1,56 @@ +vnfd: + description: A basic VNF descriptor w/ one VDU and quota EPA + df: + - id: default-df + instantiation-level: + - id: default-instantiation-level + vdu-level: + - number-of-instances: 1 + vdu-id: epa_quota-VM + vdu-profile: + - id: epa_quota-VM + min-number-of-instances: 1 + ext-cpd: + - id: vnf-cp0-ext + int-cpd: + cpd: vdu-eth0-int + vdu-id: epa_quota-VM + id: epa_quota-vnf + mgmt-cp: vnf-cp0-ext + product-name: epa_quota-vnf + sw-image-desc: + - id: US1604 + image: US1604 + name: US1604 + vdu: + - id: epa_quota-VM + int-cpd: + - id: vdu-eth0-int + virtual-network-interface-requirement: + - name: vdu-eth0 + virtual-interface: + type: PARAVIRT + name: epa_quota-VM + sw-image-desc: US1604 + virtual-compute-desc: epa_quota-VM-compute + virtual-storage-desc: + - epa_quota-VM-storage + version: '1.0' + virtual-compute-desc: + - id: epa_quota-VM-compute + virtual-cpu: + cpu-quota: + reserve: 512 + shares: 3000 + num-virtual-cpu: 2 + virtual-memory: + mem-quota: + limit: 2048 + reserve: 512 + shares: 20480 + size: 2.0 + virtual-storage-desc: + - disk-io-quota: + shares: 1000 + id: epa_quota-VM-storage + size-of-storage: 10 diff --git a/SOL004/epa_quota_vnf/icons/osm.png b/SOL004/epa_quota_vnf/icons/osm.png new file mode 100644 index 0000000000000000000000000000000000000000..62012d2a2b491bdcd536d62c3c3c863c0d8c1b33 Binary 
files /dev/null and b/SOL004/epa_quota_vnf/icons/osm.png differ diff --git a/SOL004/generate-packages.sh b/SOL004/generate-packages.sh new file mode 100755 index 0000000000000000000000000000000000000000..20f8f72d99f243c1dd75509340752291bf27f63c --- /dev/null +++ b/SOL004/generate-packages.sh @@ -0,0 +1,9 @@ +#!/bin/bash +set -eux +for d in *vnf; do + osm package-build $d +done +for d in *ns; do + osm package-build $d +done + diff --git a/SOL004/hackfest_basic_metrics_ns/hackfest_basic_nsd.yaml b/SOL004/hackfest_basic_metrics_ns/hackfest_basic_nsd.yaml new file mode 100644 index 0000000000000000000000000000000000000000..a9f95c74c750cf58cec9598492a21ea88de0ef55 --- /dev/null +++ b/SOL004/hackfest_basic_metrics_ns/hackfest_basic_nsd.yaml @@ -0,0 +1,21 @@ +nsd: + nsd: + - description: Simple NS with a single VNF and a single VL and Metrics + df: + - id: default-df + vnf-profile: + - id: '1' + virtual-link-connectivity: + - constituent-cpd-id: + - constituent-base-element-id: '1' + constituent-cpd-id: vnf-cp0-ext + virtual-link-profile-id: mgmtnet + vnfd-id: hackfest_basic_metrics-vnf + id: hackfest_basic-ns-metrics + name: hackfest_basic-ns-metrics + version: '1.0' + virtual-link-desc: + - id: mgmtnet + mgmt-network: true + vnfd-id: + - hackfest_basic_metrics-vnf diff --git a/SOL004/hackfest_basic_metrics_ns/icons/osm.png b/SOL004/hackfest_basic_metrics_ns/icons/osm.png new file mode 100644 index 0000000000000000000000000000000000000000..62012d2a2b491bdcd536d62c3c3c863c0d8c1b33 Binary files /dev/null and b/SOL004/hackfest_basic_metrics_ns/icons/osm.png differ diff --git a/SOL004/hackfest_basic_metrics_vnf/cloud_init/cloud-config b/SOL004/hackfest_basic_metrics_vnf/cloud_init/cloud-config new file mode 100644 index 0000000000000000000000000000000000000000..25b63cc647effa3d8c9ee922beaac1782339ba4f --- /dev/null +++ b/SOL004/hackfest_basic_metrics_vnf/cloud_init/cloud-config @@ -0,0 +1,4 @@ +#cloud-config +password: osm4u +chpasswd: { expire: False } +ssh_pwauth: True diff --git a/SOL004/hackfest_basic_metrics_vnf/hackfest_basic_metrics_vnfd.yaml b/SOL004/hackfest_basic_metrics_vnf/hackfest_basic_metrics_vnfd.yaml new file mode 100644 index 0000000000000000000000000000000000000000..0da9e96134fcee393aad34cd32882b701d745141 --- /dev/null +++ b/SOL004/hackfest_basic_metrics_vnf/hackfest_basic_metrics_vnfd.yaml @@ -0,0 +1,84 @@ +vnfd: + description: A basic VNF descriptor w/ one VDU and VIM metrics + df: + - id: default-df + instantiation-level: + - id: default-instantiation-level + vdu-level: + - number-of-instances: "1" + vdu-id: hackfest_basic_metrics-VM + scaling-aspect: + - aspect-delta-details: + deltas: + - id: vdu_autoscale-delta + vdu-delta: + - id: hackfest_basic_metrics-VM + number-of-instances: "1" + id: vdu_autoscale + max-scale-level: 1 + name: vdu_autoscale + scaling-policy: + - cooldown-time: 120 + name: cpu_util_above_threshold + scaling-criteria: + - name: cpu_util_above_threshold + scale-in-relational-operation: LT + scale-in-threshold: 10 + scale-out-relational-operation: GT + scale-out-threshold: 60 + vnf-monitoring-param-ref: vnf_cpu_util + scaling-type: automatic + threshold-time: 10 + vdu-profile: + - id: hackfest_basic_metrics-VM + max-number-of-instances: "2" + min-number-of-instances: "1" + ext-cpd: + - id: vnf-cp0-ext + int-cpd: + cpd: vdu-eth0-int + vdu-id: hackfest_basic_metrics-VM + id: hackfest_basic_metrics-vnf + mgmt-cp: vnf-cp0-ext + product-name: hackfest_basic_metrics-vnf + sw-image-desc: + - id: ubuntu16.04 + image: ubuntu16.04 + name: ubuntu16.04 + vdu: + - 
cloud-init-file: cloud-config + id: hackfest_basic_metrics-VM + int-cpd: + - id: vdu-eth0-int + virtual-network-interface-requirement: + - name: vdu-eth0 + virtual-interface: + type: PARAVIRT + monitoring-parameter: + - id: vnf_cpu_util + name: vnf_cpu_util + performance-metric: cpu_utilization + - id: vnf_memory_util + name: vnf_memory_util + performance-metric: average_memory_utilization + - id: vnf_packets_sent + name: vnf_packets_sent + performance-metric: packets_sent + - id: vnf_packets_received + name: vnf_packets_received + performance-metric: packets_received + name: hackfest_basic_metrics-VM + sw-image-desc: ubuntu16.04 + virtual-compute-desc: hackfest_basic_metrics-VM-compute + virtual-storage-desc: + - hackfest_basic_metrics-VM-storage + version: '1.0' + virtual-compute-desc: + - id: hackfest_basic_metrics-VM-compute + virtual-cpu: + num-virtual-cpu: 1 + virtual-memory: + size: 1.0 + virtual-storage-desc: + - id: hackfest_basic_metrics-VM-storage + size-of-storage: 10 diff --git a/SOL004/hackfest_basic_metrics_vnf/icons/osm.png b/SOL004/hackfest_basic_metrics_vnf/icons/osm.png new file mode 100644 index 0000000000000000000000000000000000000000..62012d2a2b491bdcd536d62c3c3c863c0d8c1b33 Binary files /dev/null and b/SOL004/hackfest_basic_metrics_vnf/icons/osm.png differ diff --git a/SOL004/hackfest_basic_ns/hackfest_basic_nsd.yaml b/SOL004/hackfest_basic_ns/hackfest_basic_nsd.yaml new file mode 100644 index 0000000000000000000000000000000000000000..be76b264172ba0f906d2b6d84e48c7542154e9e4 --- /dev/null +++ b/SOL004/hackfest_basic_ns/hackfest_basic_nsd.yaml @@ -0,0 +1,21 @@ +nsd: + nsd: + - description: Simple NS with a single VNF and a single VL + df: + - id: default-df + vnf-profile: + - id: '1' + virtual-link-connectivity: + - constituent-cpd-id: + - constituent-base-element-id: '1' + constituent-cpd-id: vnf-cp0-ext + virtual-link-profile-id: mgmtnet + vnfd-id: hackfest_basic-vnf + id: hackfest_basic-ns + name: hackfest_basic-ns + version: '1.0' + virtual-link-desc: + - id: mgmtnet + mgmt-network: true + vnfd-id: + - hackfest_basic-vnf diff --git a/SOL004/hackfest_basic_ns/icons/osm.png b/SOL004/hackfest_basic_ns/icons/osm.png new file mode 100644 index 0000000000000000000000000000000000000000..62012d2a2b491bdcd536d62c3c3c863c0d8c1b33 Binary files /dev/null and b/SOL004/hackfest_basic_ns/icons/osm.png differ diff --git a/SOL004/hackfest_basic_sriov_ns/hackfest_basic_sriov_nsd.yaml b/SOL004/hackfest_basic_sriov_ns/hackfest_basic_sriov_nsd.yaml new file mode 100644 index 0000000000000000000000000000000000000000..7f7397b02d6d34db89132da39cdb8dd646e3ecf2 --- /dev/null +++ b/SOL004/hackfest_basic_sriov_ns/hackfest_basic_sriov_nsd.yaml @@ -0,0 +1,26 @@ +nsd: + nsd: + - description: Simple NS with a single VNF with SRIOV port + df: + - id: default-df + vnf-profile: + - id: '1' + virtual-link-connectivity: + - constituent-cpd-id: + - constituent-base-element-id: '1' + constituent-cpd-id: vnf-cp0-ext + virtual-link-profile-id: mgmtnet + - constituent-cpd-id: + - constituent-base-element-id: '1' + constituent-cpd-id: vnf-cp1-ext + virtual-link-profile-id: sriov_vld + vnfd-id: hackfest_basic_sriov-vnf + id: hackfest_basic_sriov-ns + name: hackfest_basic_sriov-ns + version: '1.0' + virtual-link-desc: + - id: mgmtnet + mgmt-network: true + - id: sriov_vld + vnfd-id: + - hackfest_basic_sriov-vnf diff --git a/SOL004/hackfest_basic_sriov_ns/icons/osm.png b/SOL004/hackfest_basic_sriov_ns/icons/osm.png new file mode 100644 index 
0000000000000000000000000000000000000000..62012d2a2b491bdcd536d62c3c3c863c0d8c1b33 Binary files /dev/null and b/SOL004/hackfest_basic_sriov_ns/icons/osm.png differ diff --git a/SOL004/hackfest_basic_sriov_vnf/cloud_init/cloud.config b/SOL004/hackfest_basic_sriov_vnf/cloud_init/cloud.config new file mode 100644 index 0000000000000000000000000000000000000000..25b63cc647effa3d8c9ee922beaac1782339ba4f --- /dev/null +++ b/SOL004/hackfest_basic_sriov_vnf/cloud_init/cloud.config @@ -0,0 +1,4 @@ +#cloud-config +password: osm4u +chpasswd: { expire: False } +ssh_pwauth: True diff --git a/SOL004/hackfest_basic_sriov_vnf/hackfest_basic_sriov_vnfd.yaml b/SOL004/hackfest_basic_sriov_vnf/hackfest_basic_sriov_vnfd.yaml new file mode 100644 index 0000000000000000000000000000000000000000..a6631397394010c0cd9031958d0ec4b21ddad188 --- /dev/null +++ b/SOL004/hackfest_basic_sriov_vnf/hackfest_basic_sriov_vnfd.yaml @@ -0,0 +1,57 @@ +vnfd: + description: A basic VNF descriptor w/ one VDU and SRIOV + df: + - id: default-df + instantiation-level: + - id: default-instantiation-level + vdu-level: + - number-of-instances: 1 + vdu-id: hackfest_basic-VM + vdu-profile: + - id: hackfest_basic-VM + min-number-of-instances: 1 + ext-cpd: + - id: vnf-cp0-ext + int-cpd: + cpd: vdu-eth0-int + vdu-id: hackfest_basic-VM + - id: vnf-cp1-ext + int-cpd: + cpd: vdu-eth1-int + vdu-id: hackfest_basic-VM + id: hackfest_basic_sriov-vnf + mgmt-cp: vnf-cp0-ext + product-name: hackfest_basic_sriov-vnf + sw-image-desc: + - id: ubuntu18.04 + image: ubuntu18.04 + name: ubuntu18.04 + vdu: + - cloud-init-file: cloud.config + id: hackfest_basic-VM + int-cpd: + - id: vdu-eth0-int + virtual-network-interface-requirement: + - name: vdu-eth0 + virtual-interface: + type: PARAVIRT + - id: vdu-eth1-int + virtual-network-interface-requirement: + - name: vdu-eth1 + virtual-interface: + type: SR-IOV + name: hackfest_basic-VM + sw-image-desc: ubuntu18.04 + virtual-compute-desc: hackfest_basic-VM-compute + virtual-storage-desc: + - hackfest_basic-VM-storage + version: '1.0' + virtual-compute-desc: + - id: hackfest_basic-VM-compute + virtual-cpu: + num-virtual-cpu: 2 + virtual-memory: + size: 2.0 + virtual-storage-desc: + - id: hackfest_basic-VM-storage + size-of-storage: 30 diff --git a/SOL004/hackfest_basic_sriov_vnf/icons/osm.png b/SOL004/hackfest_basic_sriov_vnf/icons/osm.png new file mode 100644 index 0000000000000000000000000000000000000000..62012d2a2b491bdcd536d62c3c3c863c0d8c1b33 Binary files /dev/null and b/SOL004/hackfest_basic_sriov_vnf/icons/osm.png differ diff --git a/SOL004/hackfest_basic_vnf/hackfest_basic_vnfd.yaml b/SOL004/hackfest_basic_vnf/hackfest_basic_vnfd.yaml new file mode 100644 index 0000000000000000000000000000000000000000..04832554aed141fbfb9898758218612c0bf84ed4 --- /dev/null +++ b/SOL004/hackfest_basic_vnf/hackfest_basic_vnfd.yaml @@ -0,0 +1,53 @@ +vnfd: + description: A basic VNF descriptor w/ one VDU + df: + - id: default-df + instantiation-level: + - id: default-instantiation-level + vdu-level: + - number-of-instances: 1 + vdu-id: hackfest_basic-VM + vdu-profile: + - id: hackfest_basic-VM + min-number-of-instances: 1 + ext-cpd: + - id: vnf-cp0-ext + int-cpd: + cpd: vdu-eth0-int + vdu-id: hackfest_basic-VM + id: hackfest_basic-vnf + mgmt-cp: vnf-cp0-ext + product-name: hackfest_basic-vnf + sw-image-desc: + - id: ubuntu16.04 + name: ubuntu16.04 + image: ubuntu16.04 + - id: ubuntu16.04-aws + name: ubuntu16.04-aws + image: ubuntu/images/hvm-ssd/ubuntu-artful-17.10-amd64-server-20180509 + vim-type: aws + vdu: + - id: 
hackfest_basic-VM + name: hackfest_basic-VM + sw-image-desc: ubuntu16.04 + alternative-sw-image-desc: + - ubuntu16.04-aws + virtual-compute-desc: hackfest_basic-VM-compute + virtual-storage-desc: + - hackfest_basic-VM-storage + int-cpd: + - id: vdu-eth0-int + virtual-network-interface-requirement: + - name: vdu-eth0 + virtual-interface: + type: PARAVIRT + version: '1.0' + virtual-compute-desc: + - id: hackfest_basic-VM-compute + virtual-cpu: + num-virtual-cpu: "1" + virtual-memory: + size: "1.0" + virtual-storage-desc: + - id: hackfest_basic-VM-storage + size-of-storage: "10" diff --git a/SOL004/hackfest_basic_vnf/icons/osm.png b/SOL004/hackfest_basic_vnf/icons/osm.png new file mode 100644 index 0000000000000000000000000000000000000000..62012d2a2b491bdcd536d62c3c3c863c0d8c1b33 Binary files /dev/null and b/SOL004/hackfest_basic_vnf/icons/osm.png differ diff --git a/SOL004/hackfest_cloudinit_ns/hackfest_cloudinit_nsd.yaml b/SOL004/hackfest_cloudinit_ns/hackfest_cloudinit_nsd.yaml new file mode 100644 index 0000000000000000000000000000000000000000..bd8283cdb74d1eebae2ea2878909e393ff385567 --- /dev/null +++ b/SOL004/hackfest_cloudinit_ns/hackfest_cloudinit_nsd.yaml @@ -0,0 +1,37 @@ +nsd: + nsd: + - description: NS with 2 VNFs with cloudinit connected by datanet and mgmtnet VLs + df: + - id: default-df + vnf-profile: + - id: '1' + virtual-link-connectivity: + - constituent-cpd-id: + - constituent-base-element-id: '1' + constituent-cpd-id: vnf-mgmt-ext + virtual-link-profile-id: mgmtnet + - constituent-cpd-id: + - constituent-base-element-id: '1' + constituent-cpd-id: vnf-data-ext + virtual-link-profile-id: datanet + vnfd-id: hackfest_cloudinit-vnf + - id: '2' + virtual-link-connectivity: + - constituent-cpd-id: + - constituent-base-element-id: '2' + constituent-cpd-id: vnf-mgmt-ext + virtual-link-profile-id: mgmtnet + - constituent-cpd-id: + - constituent-base-element-id: '2' + constituent-cpd-id: vnf-data-ext + virtual-link-profile-id: datanet + vnfd-id: hackfest_cloudinit-vnf + id: hackfest_cloudinit-ns + name: hackfest_cloudinit-ns + version: '1.0' + virtual-link-desc: + - id: mgmtnet + mgmt-network: true + - id: datanet + vnfd-id: + - hackfest_cloudinit-vnf diff --git a/SOL004/hackfest_cloudinit_ns/icons/osm.png b/SOL004/hackfest_cloudinit_ns/icons/osm.png new file mode 100644 index 0000000000000000000000000000000000000000..62012d2a2b491bdcd536d62c3c3c863c0d8c1b33 Binary files /dev/null and b/SOL004/hackfest_cloudinit_ns/icons/osm.png differ diff --git a/SOL004/hackfest_cloudinit_vnf/cloud_init/cloud-config.txt b/SOL004/hackfest_cloudinit_vnf/cloud_init/cloud-config.txt new file mode 100755 index 0000000000000000000000000000000000000000..36c8d1bf2cdebbc4e50d1e8348003f64f419cd0b --- /dev/null +++ b/SOL004/hackfest_cloudinit_vnf/cloud_init/cloud-config.txt @@ -0,0 +1,12 @@ +#cloud-config +password: osm4u +chpasswd: { expire: False } +ssh_pwauth: True + +write_files: +- content: | + # My new helloworld file + + owner: root:root + permissions: '0644' + path: /root/helloworld.txt diff --git a/SOL004/hackfest_cloudinit_vnf/hackfest_cloudinit_vnfd.yaml b/SOL004/hackfest_cloudinit_vnf/hackfest_cloudinit_vnfd.yaml new file mode 100644 index 0000000000000000000000000000000000000000..cfa7d77631096fbd10fb238b3f45eb9c0be2bfea --- /dev/null +++ b/SOL004/hackfest_cloudinit_vnf/hackfest_cloudinit_vnfd.yaml @@ -0,0 +1,94 @@ +vnfd: + description: A VNF consisting of 2 VDUs connected to an internal VL, and one VDU + with cloud-init + df: + - id: default-df + instantiation-level: + - id: 
default-instantiation-level + vdu-level: + - number-of-instances: 1 + vdu-id: mgmtVM + - number-of-instances: 1 + vdu-id: dataVM + vdu-profile: + - id: mgmtVM + min-number-of-instances: 1 + - id: dataVM + min-number-of-instances: 1 + ext-cpd: + - id: vnf-mgmt-ext + int-cpd: + cpd: mgmtVM-eth0-int + vdu-id: mgmtVM + - id: vnf-data-ext + int-cpd: + cpd: dataVM-xe0-int + vdu-id: dataVM + id: hackfest_cloudinit-vnf + int-virtual-link-desc: + - id: internal + mgmt-cp: vnf-mgmt-ext + product-name: hackfest_cloudinit-vnf + sw-image-desc: + - id: ubuntu16.04 + image: ubuntu16.04 + name: ubuntu16.04 + vdu: + - cloud-init-file: cloud-config.txt + id: mgmtVM + int-cpd: + - id: mgmtVM-eth0-int + virtual-network-interface-requirement: + - name: mgmtVM-eth0 + position: 1 + virtual-interface: + type: PARAVIRT + - id: mgmtVM-eth1-int + int-virtual-link-desc: internal + virtual-network-interface-requirement: + - name: mgmtVM-eth1 + position: 2 + virtual-interface: + type: PARAVIRT + name: mgmtVM + sw-image-desc: ubuntu16.04 + virtual-compute-desc: mgmtVM-compute + virtual-storage-desc: + - mgmtVM-storage + - id: dataVM + int-cpd: + - id: dataVM-eth0-int + int-virtual-link-desc: internal + virtual-network-interface-requirement: + - name: dataVM-eth0 + position: 1 + virtual-interface: + type: PARAVIRT + - id: dataVM-xe0-int + virtual-network-interface-requirement: + - name: dataVM-xe0 + position: 2 + virtual-interface: + type: PARAVIRT + name: dataVM + sw-image-desc: ubuntu16.04 + virtual-compute-desc: dataVM-compute + virtual-storage-desc: + - dataVM-storage + version: '1.0' + virtual-compute-desc: + - id: mgmtVM-compute + virtual-cpu: + num-virtual-cpu: "1" + virtual-memory: + size: "1.0" + - id: dataVM-compute + virtual-cpu: + num-virtual-cpu: "1" + virtual-memory: + size: "1.0" + virtual-storage-desc: + - id: mgmtVM-storage + size-of-storage: "10" + - id: dataVM-storage + size-of-storage: "10" diff --git a/SOL004/hackfest_cloudinit_vnf/icons/osm.png b/SOL004/hackfest_cloudinit_vnf/icons/osm.png new file mode 100644 index 0000000000000000000000000000000000000000..62012d2a2b491bdcd536d62c3c3c863c0d8c1b33 Binary files /dev/null and b/SOL004/hackfest_cloudinit_vnf/icons/osm.png differ diff --git a/SOL004/hackfest_epasriov_ns/hackfest_epasriov_nsd.yaml b/SOL004/hackfest_epasriov_ns/hackfest_epasriov_nsd.yaml new file mode 100644 index 0000000000000000000000000000000000000000..74a896aa3beac73bb84a4c35a320ce8f2def4a56 --- /dev/null +++ b/SOL004/hackfest_epasriov_ns/hackfest_epasriov_nsd.yaml @@ -0,0 +1,38 @@ +nsd: + nsd: + - description: NS with 2 VNFs hackfest_epasriov-vnf connected by datanet and mgmtnet + VLs + df: + - id: default-df + vnf-profile: + - id: '1' + virtual-link-connectivity: + - constituent-cpd-id: + - constituent-base-element-id: '1' + constituent-cpd-id: vnf-mgmt-ext + virtual-link-profile-id: mgmtnet + - constituent-cpd-id: + - constituent-base-element-id: '1' + constituent-cpd-id: vnf-data-ext + virtual-link-profile-id: datanet + vnfd-id: hackfest_epasriov-vnf + - id: '2' + virtual-link-connectivity: + - constituent-cpd-id: + - constituent-base-element-id: '2' + constituent-cpd-id: vnf-mgmt-ext + virtual-link-profile-id: mgmtnet + - constituent-cpd-id: + - constituent-base-element-id: '2' + constituent-cpd-id: vnf-data-ext + virtual-link-profile-id: datanet + vnfd-id: hackfest_epasriov-vnf + id: hackfest_epasriov-ns + name: hackfest_epasriov-ns + version: '1.0' + virtual-link-desc: + - id: mgmtnet + mgmt-network: true + - id: datanet + vnfd-id: + - hackfest_epasriov-vnf diff --git 
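As an aside on the NSD just shown: each vnf-profile entry binds one constituent VNF's external connection points to the mgmtnet and datanet virtual links. A minimal sketch of walking those bindings with PyYAML follows; the file name is an assumption, and this is plain YAML handling, not an OSM API.

```python
# Sketch: list the virtual-link bindings of the NSD above (PyYAML only).
import yaml

with open("hackfest_epasriov_nsd.yaml") as f:  # assumed local copy
    nsd = yaml.safe_load(f)["nsd"]["nsd"][0]

print(nsd["id"], nsd["version"])
for profile in nsd["df"][0]["vnf-profile"]:
    for vlc in profile["virtual-link-connectivity"]:
        for cpd in vlc["constituent-cpd-id"]:
            print("member", cpd["constituent-base-element-id"],
                  cpd["constituent-cpd-id"], "->",
                  vlc["virtual-link-profile-id"])
```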
a/SOL004/hackfest_epasriov_ns/icons/osm.png b/SOL004/hackfest_epasriov_ns/icons/osm.png new file mode 100644 index 0000000000000000000000000000000000000000..62012d2a2b491bdcd536d62c3c3c863c0d8c1b33 Binary files /dev/null and b/SOL004/hackfest_epasriov_ns/icons/osm.png differ diff --git a/SOL004/hackfest_epasriov_vnf/cloud_init/cloud-config.txt b/SOL004/hackfest_epasriov_vnf/cloud_init/cloud-config.txt new file mode 100755 index 0000000000000000000000000000000000000000..3e8d7b217417d1edb20e763b22e2b91255355310 --- /dev/null +++ b/SOL004/hackfest_epasriov_vnf/cloud_init/cloud-config.txt @@ -0,0 +1,18 @@ +#cloud-config +password: osm4u +chpasswd: { expire: False } +ssh_pwauth: True +write_files: +- encoding: b64 + permissions: '0600' + path: /home/ubuntu/test4.pem + content: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBMTJGVDh5Q09wY0F2NFc5VHg3SUNYSDlkWm1DaU5hUkhoODNJMFBGL0QrMk1EbzlvCm9ZcEljdWlFYlp0L0I4eis3blRGbzBLSlQxR3lNelFacFhiZkg4dWhCWkdSS1grMGQwTTRnMjZzTnhSK3NDRzAKSnk2UW1WbXZUZ2hrY2o4ajNRV1AveXRHR3VXL3lBWHh1TGZpcUNGY0pDYmtodnFHRVJvUW1JWndNRVdmeUZNTQpZYld6VmJFdHVzRGZveHJHbENpRmdoQVcwdVhIQlhtL1pOM3FMeUJaQkN6UXlWVmxTYklkcmNocTFIZmgwTmZzClVCclFWSEo0cVNWZStaY0M5bWRxNmE1ekNDTGxYUGdOYlV6VmdLeDNIQzBzbjhYOXBBRGN1WXhncE1EUUI5UkUKNmQ2L0FPUVlLQldIZXpVaThTdDZHOTE4bkY3Mkw3ZjF6SzFBendJREFRQUJBb0lCQUc3bU5ZUzlvZFdrMU1LZQpRU1JVK3pSSGZIOG5pTDVZSFdER3kvMFNMQnUyYytSWFlVZTBYVU9WaUFLc0MwZW4vU2dwUms2ZkJ2YXBtVGtXClBaSmVWOXNXVFk0QmV4NUVIRmRBYkl2NFk0Sms4aXFjNEJkQXVjSE1WU0MzMzRpWURFNVUrK1Vta2cxdGVVZDAKRUJmTnowNUZCeDJ5VFA5WFpjck9nZmNYV2hMd0JObWQ5OStnVjZjUUI2TkYwWkZWbTZUTHVBN2dIM2pveWlwaQpLZmhvZDd1b25GUlZYU2c2dmlwWHMzK29INkJrcGdZeFE1RUM5VTIxcTdZUzQvSDVueUJKRFF5bWY4OEx2bTBsCklrWmFzS0M4UktrSEp1cFJPeVVVck9SVzBuNlpWNVAzalBDdHNhdi9uNlFJUVN0cUNuaVIxU2x1T05IbEkvU20KcXk3MVZwRUNnWUVBN0FQdUVzU1NSL0FRdHhNcDA1NytqcXg2VkUrVWpJeXJ0a0xCRnpIbDZHeFU0U1J6OWZJUwpMeHZJOGZUdjNBTXBKVElRY2Y3OGw4Q05CczBscHhPeE9odW9GUWN3cVpzTWIvSGc1d2RTNnIwdDZnZ0tNTWhPCmlnMHhJVHpHRVc1Y1ZNUTJ0M21FMXNIUDkyTlQ4QlR4MGVEOG5HOWI0aHhERnVqck5Pb1RNaWNDZ1lFQTZaNFgKc0ltNUxjRlJQZS9tTmhhc0x4WXVoK2Q1c2F0MjBPbTJZdE5YREEzNDlxUTllNG1DMzNtS0xSMS9FVkE1ZER1ZAp5dmp3T3Z3dmVmTEV1Qy8xVzhkOEF4VWFZVU1RSHRvUEI0cUcvL0QvY0lHaG0zWXR2VXVhSm9YRVAwa2Q0N21xClByNk51L1c1YjdqTy9yQ0lCT1pONlQraDJkeENsQ0xBcFo5dnJSa0NnWUVBMitWcGVya0ZaZHNwWjdtR0hmS2sKVUVBcEZjYXpyQ1FnbEljcnFyWEY5TUNDY09acTJIcjdNRU1kL1RsdUJibzRLcnl6ajlLNGU1ZGVqaml6WFRDKwp6bG9ZUjhkVU1xSVFlM2lNU0JTTno4SUZObWpaUGN4VFNOS3p0TGtQL2d1cUlSeFRzcXlZOVJMTTlqem9adWJNCnkvUm95RVFGQXUyOElHdFJRaExaWWI4Q2dZQkY2ZThUQVJSdkVnU2JNWmxHcEtCZzh4VjN6SmxKeDVPbVQ5c3EKVmk4ZHgyeXplMUYvRUJjZmhBTUxIMkd3cjc2Ui8ybG9uZmxlM2F2anBmaWpXbzdtS1p2K1hDbHA1Q1VGNXFKSwowblUyVVV4UXdpcTRHTFQxaXBPV1piL21aSjVTVVhVV2svWmN3dHY5Q0dUQ0tkaDdCdVZZSVpmeFdBNkF3S25BCnB5ZEh1UUtCZ1FDTUtCRk5IMFc1YlhwKzQweG1tNUNZdXZSMmhOckRsZ2pMMGc0WDRSQllFQmNnNnlDQ3ExZm8KZ3YwT0JxTmUxTDcyTFdoSTZvSTJxbEtQdmVOMWZkaDBqc0F2UHBRcGNlLzE2WTBmN3hzbDVOM3VwcCt5SEhOawp4cnVET0c3bnBTQ3lIS2ZjUWpEdGFIT1BLWWlRLzNDdVhwN09KT3ZqYk5aRmYwR0paQTlYUUE9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo= +- content: | + # My new helloworld file + + owner: root:root + permissions: '0644' + path: /root/helloworld.txt +runcmd: +- [ sh, -c, "chown ubuntu:ubuntu /home/ubuntu/test4.pem" ] + diff --git a/SOL004/hackfest_epasriov_vnf/hackfest_epasriov_vnfd.yaml b/SOL004/hackfest_epasriov_vnf/hackfest_epasriov_vnfd.yaml new file mode 100644 index 0000000000000000000000000000000000000000..87bc4aa1b20ac0499b3427e98a5664eecd0f1941 --- /dev/null +++ b/SOL004/hackfest_epasriov_vnf/hackfest_epasriov_vnfd.yaml @@ -0,0 +1,117 @@ +vnfd: + 
description: A VNF consisting of 2 VDUs with EPA capabilities connected to an internal + VL, mgmtVM VDU with cloud-init + df: + - id: default-df + instantiation-level: + - id: default-instantiation-level + vdu-level: + - number-of-instances: 1 + vdu-id: mgmtVM + - number-of-instances: 1 + vdu-id: dataVM + vdu-profile: + - id: mgmtVM + min-number-of-instances: 1 + - id: dataVM + min-number-of-instances: 1 + ext-cpd: + - id: vnf-mgmt-ext + int-cpd: + cpd: mgmtVM-eth0-int + vdu-id: mgmtVM + - id: vnf-data-ext + int-cpd: + cpd: xe0-int + vdu-id: dataVM + id: hackfest_epasriov-vnf + int-virtual-link-desc: + - id: internal + mgmt-cp: vnf-mgmt-ext + product-name: hackfest_epasriov-vnf + sw-image-desc: + - id: ubuntu20.04 + image: ubuntu20.04 + name: ubuntu20.04 + - id: hackfest-pktgen + image: hackfest-pktgen + name: hackfest-pktgen + vdu: + - cloud-init-file: cloud-config.txt + id: mgmtVM + int-cpd: + - id: mgmtVM-eth0-int + virtual-network-interface-requirement: + - name: mgmtVM-eth0 + position: 1 + virtual-interface: + type: PARAVIRT + - id: mgmtVM-eth1-int + int-virtual-link-desc: internal + virtual-network-interface-requirement: + - name: mgmtVM-eth1 + position: 2 + virtual-interface: + type: PARAVIRT + name: mgmtVM + sw-image-desc: ubuntu20.04 + virtual-compute-desc: mgmtVM-compute + virtual-storage-desc: + - mgmtVM-storage + - id: dataVM + int-cpd: + - id: eth0-int + int-virtual-link-desc: internal + virtual-network-interface-requirement: + - name: eth0 + position: 1 + virtual-interface: + type: PARAVIRT + - id: xe0-int + virtual-network-interface-requirement: + - name: xe0 + position: 2 + virtual-interface: + type: SR-IOV + name: dataVM + sw-image-desc: hackfest-pktgen + virtual-compute-desc: dataVM-compute + virtual-storage-desc: + - dataVM-storage + version: '1.0' + virtual-compute-desc: + - id: mgmtVM-compute + virtual-cpu: + num-virtual-cpu: 1 + pinning: + policy: static + thread-policy: PREFER + virtual-memory: + mempage-size: LARGE + numa-enabled: true + numa-node-policy: + mem-policy: STRICT + node: + - id: 1 + node-cnt: 1 + size: 1.0 + - id: dataVM-compute + virtual-cpu: + num-virtual-cpu: 8 + pinning: + policy: static + thread-policy: PREFER + virtual-memory: + mempage-size: LARGE + numa-enabled: true + numa-node-policy: + mem-policy: STRICT + node: + - id: 1 + node-cnt: 1 + size: 4.0 + virtual-storage-desc: + - id: mgmtVM-storage + size-of-storage: 10 + - id: dataVM-storage + size-of-storage: 10 diff --git a/SOL004/hackfest_epasriov_vnf/icons/osm.png b/SOL004/hackfest_epasriov_vnf/icons/osm.png new file mode 100644 index 0000000000000000000000000000000000000000..62012d2a2b491bdcd536d62c3c3c863c0d8c1b33 Binary files /dev/null and b/SOL004/hackfest_epasriov_vnf/icons/osm.png differ diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/README.md b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/README.md new file mode 100644 index 0000000000000000000000000000000000000000..2bdcab20c1a6d39aaffe9a450ba3bdbc44044b42 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/README.md @@ -0,0 +1,3 @@ +# Vyos-config + +This is a proxy charm used by Open Source Mano (OSM) to configure the VyOS Router PNF, written in the [Python Operator Framework](https://github.com/canonical/operator) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/actions.yaml b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/actions.yaml new file mode 100644 index 0000000000000000000000000000000000000000..80497e105a10016e2dec0329f6157dbad382a298 --- /dev/null +++
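The EPA knobs of this VNFD sit in the two virtual-compute-desc entries above: static CPU pinning, LARGE hugepages, and a strict single-node NUMA policy. A small sketch that surfaces those settings for a pre-onboarding sanity check (the file name is assumed; plain PyYAML, no OSM API):

```python
# Sketch: print the EPA-related compute settings from the VNFD above.
import yaml

with open("hackfest_epasriov_vnfd.yaml") as f:  # assumed local copy
    vnfd = yaml.safe_load(f)["vnfd"]

for vc in vnfd["virtual-compute-desc"]:
    cpu, mem = vc["virtual-cpu"], vc["virtual-memory"]
    print(vc["id"],
          "| vCPUs:", cpu["num-virtual-cpu"],
          "| pinning:", cpu.get("pinning", {}).get("policy"),
          "| hugepages:", mem.get("mempage-size"),
          "| NUMA:", mem.get("numa-enabled"))
```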
b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/actions.yaml @@ -0,0 +1,46 @@ +# VyOS Action +add-port-forward: + description: "Adds a port forwarding rule" + params: + ruleNumber: + description: "Rule number, must be unique and needed to remove the rule later" + type: "string" + default: "10" + sourcePort: + description: "Source port to listen on" + type: "string" + destinationPort: + description: "Target port number on remote host to forward" + type: "string" + destinationAddress: + description: "Target host or IP address to forward traffic" + type: "string" + required: + - sourcePort + - destinationPort + - destinationAddress + +remove-port-forward: + description: "Removes a port forwarding rule by number" + params: + ruleNumber: + description: "Rule number to remove" + type: "string" + default: "10" + +# Required by charms.osm.sshproxy +run: + description: "Run an arbitrary command" + params: + command: + description: "The command to execute." + type: string + default: "" + required: + - command +generate-ssh-key: + description: "Generate a new SSH keypair for this unit. This will replace any existing previously generated keypair." +verify-ssh-credentials: + description: "Verify that this unit can authenticate with server specified by ssh-hostname and ssh-username." +get-ssh-public-key: + description: "Get the public SSH key for this unit." diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/config.yaml b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/config.yaml new file mode 100644 index 0000000000000000000000000000000000000000..7f6d4408bec64af0b5c3ebf0e8cae30b33609f74 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/config.yaml @@ -0,0 +1,25 @@ +options: + ssh-hostname: + type: string + default: "" + description: "The hostname or IP address of the machine to" + ssh-username: + type: string + default: "" + description: "The username to login as." + ssh-password: + type: string + default: "" + description: "The password used to authenticate." + ssh-public-key: + type: string + default: "" + description: "The public key of this unit." + ssh-key-type: + type: string + default: "rsa" + description: "The type of encryption to use for the SSH key." + ssh-key-bits: + type: int + default: 4096 + description: "The number of bits to use for the SSH key." diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..61ef90719b5d5d759de1a6b80a1ea748d8bb0911 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/__init__.py @@ -0,0 +1,97 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# Bootstrap charm-helpers, installing its dependencies if necessary using +# only standard libraries. 
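The add-port-forward action declared in actions.yaml above takes ruleNumber, sourcePort, destinationPort, and destinationAddress as string parameters. As a rough illustration only, a handler for it in the Python Operator Framework (which the charm's README names) might look like the sketch below. The _run_on_router helper is hypothetical, standing in for however the proxy charm pushes commands over SSH, and the VyOS command strings are illustrative rather than taken from this charm's source.

```python
# Illustrative only: an ops-framework handler for the add-port-forward
# action defined above. _run_on_router is a hypothetical stand-in for the
# charm's SSH transport to the VyOS router.
from ops.charm import CharmBase
from ops.main import main


class VyosConfigCharm(CharmBase):
    def __init__(self, *args):
        super().__init__(*args)
        self.framework.observe(self.on.add_port_forward_action,
                               self._on_add_port_forward)

    def _on_add_port_forward(self, event):
        rule = event.params.get("ruleNumber", "10")
        cmds = [
            "configure",
            "set nat destination rule {} destination port {}".format(
                rule, event.params["sourcePort"]),
            "set nat destination rule {} translation address {}".format(
                rule, event.params["destinationAddress"]),
            "set nat destination rule {} translation port {}".format(
                rule, event.params["destinationPort"]),
            "commit", "exit",
        ]
        try:
            self._run_on_router(cmds)  # hypothetical SSH helper
            event.set_results({"applied-rule": rule})
        except Exception as e:
            event.fail("could not apply rule {}: {}".format(rule, e))

    def _run_on_router(self, cmds):
        raise NotImplementedError("SSH transport is charm-specific")


if __name__ == "__main__":
    main(VyosConfigCharm)
```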
+from __future__ import print_function +from __future__ import absolute_import + +import functools +import inspect +import subprocess +import sys + +try: + import six # NOQA:F401 +except ImportError: + if sys.version_info.major == 2: + subprocess.check_call(['apt-get', 'install', '-y', 'python-six']) + else: + subprocess.check_call(['apt-get', 'install', '-y', 'python3-six']) + import six # NOQA:F401 + +try: + import yaml # NOQA:F401 +except ImportError: + if sys.version_info.major == 2: + subprocess.check_call(['apt-get', 'install', '-y', 'python-yaml']) + else: + subprocess.check_call(['apt-get', 'install', '-y', 'python3-yaml']) + import yaml # NOQA:F401 + + +# Holds a mapping of mangled function names that have been deprecated +# using the @deprecate decorator below. This is so that the warning is only +# printed once for each usage of the function. +__deprecated_functions = {} + + +def deprecate(warning, date=None, log=None): + """Add a deprecation warning the first time the function is used. + The date, which is a string in semi-ISO8601 format, indicates the year-month + that the function is officially going to be removed. + + usage: + + @deprecate('use core/fetch/add_source() instead', '2017-04') + def contributed_add_source_thing(...): + ... + + And it then prints to the log ONCE that the function is deprecated. + The reason for passing the logging function (log) is so that hookenv.log + can be used for a charm if needed. + + :param warning: String to indicate where it has moved to. + :param date: optional string, in YYYY-MM format to indicate when the + function will definitely (probably) be removed. + :param log: The log function to call to log. If not provided, logs to stdout + """ + def wrap(f): + + @functools.wraps(f) + def wrapped_f(*args, **kwargs): + try: + module = inspect.getmodule(f) + file = inspect.getsourcefile(f) + lines = inspect.getsourcelines(f) + f_name = "{}-{}-{}..{}-{}".format( + module.__name__, file, lines[0], lines[-1], f.__name__) + except (IOError, TypeError): + # assume it was local, so just use the name of the function + f_name = f.__name__ + if f_name not in __deprecated_functions: + __deprecated_functions[f_name] = True + s = "DEPRECATION WARNING: Function {} is being removed".format( + f.__name__) + if date: + s = "{} on/around {}".format(s, date) + if warning: + s = "{} : {}".format(s, warning) + if log: + log(s) + else: + print(s) + return f(*args, **kwargs) + return wrapped_f + return wrap diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/cli/README.rst b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/cli/README.rst new file mode 100644 index 0000000000000000000000000000000000000000..f7901c09c79c268d353825776a74072a7ee4dee7 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/cli/README.rst @@ -0,0 +1,57 @@ +========== +Commandant +========== + +----------------------------------------------------- +Automatic command-line interfaces to Python functions +----------------------------------------------------- + +One of the benefits of ``libvirt`` is the uniformity of the interface: the C API (as well as the bindings in other languages) is a set of functions that accept parameters that are nearly identical to the command-line arguments. If you run ``virsh``, you get an interactive command prompt that supports all of the same commands that your shell scripts use as ``virsh`` subcommands.
+ +Command execution and stdio manipulation is the greatest common factor across all development systems in the POSIX environment. By exposing your functions as commands that manipulate streams of text, you can make life easier for all the Ruby and Erlang and Go programmers in your life. + +Goals +===== + +* Single decorator to expose a function as a command. + * now two decorators - one "automatic" and one that allows authors to manipulate the arguments for fine-grained control. (MW) +* Automatic analysis of function signature through ``inspect.getargspec()`` +* Command argument parser built automatically with ``argparse`` +* Interactive interpreter loop object made with ``Cmd`` +* Options to output structured return value data via ``pprint``, ``yaml`` or ``json`` dumps. + +Other Important Features that need writing +------------------------------------------ + +* Help and Usage documentation can be automatically generated, but it will be important to let users override this behaviour +* The decorator should allow specifying further parameters to the parser's add_argument() calls, to specify types or to make arguments behave as boolean flags, etc. + - Filename arguments are important, as good practice is for functions to accept file objects as parameters. + - choices arguments help to limit bad input before the function is called +* Some automatic behaviour could make for better defaults, once the user can override them. + - We could automatically detect arguments that default to False or True, and automatically support --no-foo for foo=True. + - We could automatically support hyphens as alternates for underscores + - Arguments defaulting to sequence types could support the ``append`` action. + + +----------------------------------------------------- +Implementing subcommands +----------------------------------------------------- + +(WIP) + +So as to avoid dependencies on the cli module, subcommands should be defined separately from their implementations. The recommendation would be to place definitions into separate modules near the implementations which they expose. + +Some examples:: + + from charmhelpers.cli import CommandLine + from charmhelpers.payload import execd + from charmhelpers.foo import bar + + cli = CommandLine() + + cli.subcommand()(execd.execd_run) + + @cli.subcommand_builder("bar", description="Bar baz qux") + def barcmd_builder(subparser): + subparser.add_argument('argument1', help="yackety") + return bar diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/cli/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/cli/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..389b490f4eec27f18118ab6a5f3f529dbf2e9ecc --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/cli/__init__.py @@ -0,0 +1,189 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License.
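Tying the README's pieces to the module implemented just below: a hedged sketch of how a script can register and run subcommands through the shared cmdline instance. The greet and twice commands are invented for illustration; the decorators and the --format flag are the ones defined in cli/__init__.py.

```python
# Sketch: expose plain functions as CLI subcommands via charmhelpers.cli.
from charmhelpers.cli import cmdline


@cmdline.subcommand()
def greet(name):
    """Print a greeting for NAME."""
    return "hello, {}".format(name)


@cmdline.subcommand_builder("twice", description="Repeat a word twice")
def twice_builder(subparser):
    subparser.add_argument("word", help="word to repeat")
    return lambda word: word * 2


if __name__ == "__main__":
    cmdline.run()   # e.g. `python script.py --format json greet world`
```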
+ +import inspect +import argparse +import sys + +from six.moves import zip + +import charmhelpers.core.unitdata + + +class OutputFormatter(object): + def __init__(self, outfile=sys.stdout): + self.formats = ( + "raw", + "json", + "py", + "yaml", + "csv", + "tab", + ) + self.outfile = outfile + + def add_arguments(self, argument_parser): + formatgroup = argument_parser.add_mutually_exclusive_group() + choices = self.supported_formats + formatgroup.add_argument("--format", metavar='FMT', + help="Select output format for returned data, " + "where FMT is one of: {}".format(choices), + choices=choices, default='raw') + for fmt in self.formats: + fmtfunc = getattr(self, fmt) + formatgroup.add_argument("-{}".format(fmt[0]), + "--{}".format(fmt), action='store_const', + const=fmt, dest='format', + help=fmtfunc.__doc__) + + @property + def supported_formats(self): + return self.formats + + def raw(self, output): + """Output data as raw string (default)""" + if isinstance(output, (list, tuple)): + output = '\n'.join(map(str, output)) + self.outfile.write(str(output)) + + def py(self, output): + """Output data as a nicely-formatted python data structure""" + import pprint + pprint.pprint(output, stream=self.outfile) + + def json(self, output): + """Output data in JSON format""" + import json + json.dump(output, self.outfile) + + def yaml(self, output): + """Output data in YAML format""" + import yaml + yaml.safe_dump(output, self.outfile) + + def csv(self, output): + """Output data as excel-compatible CSV""" + import csv + csvwriter = csv.writer(self.outfile) + csvwriter.writerows(output) + + def tab(self, output): + """Output data in excel-compatible tab-delimited format""" + import csv + csvwriter = csv.writer(self.outfile, dialect=csv.excel_tab) + csvwriter.writerows(output) + + def format_output(self, output, fmt='raw'): + fmtfunc = getattr(self, fmt) + fmtfunc(output) + + +class CommandLine(object): + argument_parser = None + subparsers = None + formatter = None + exit_code = 0 + + def __init__(self): + if not self.argument_parser: + self.argument_parser = argparse.ArgumentParser(description='Perform common charm tasks') + if not self.formatter: + self.formatter = OutputFormatter() + self.formatter.add_arguments(self.argument_parser) + if not self.subparsers: + self.subparsers = self.argument_parser.add_subparsers(help='Commands') + + def subcommand(self, command_name=None): + """ + Decorate a function as a subcommand. Use its arguments as the + command-line arguments""" + def wrapper(decorated): + cmd_name = command_name or decorated.__name__ + subparser = self.subparsers.add_parser(cmd_name, + description=decorated.__doc__) + for args, kwargs in describe_arguments(decorated): + subparser.add_argument(*args, **kwargs) + subparser.set_defaults(func=decorated) + return decorated + return wrapper + + def test_command(self, decorated): + """ + Subcommand is a boolean test function, so bool return values should be + converted to a 0/1 exit code. + """ + decorated._cli_test_command = True + return decorated + + def no_output(self, decorated): + """ + Subcommand is not expected to return a value, so don't print a spurious None. + """ + decorated._cli_no_output = True + return decorated + + def subcommand_builder(self, command_name, description=None): + """ + Decorate a function that builds a subcommand. 
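For completeness, the OutputFormatter defined above can also be used standalone; a small sketch with made-up data (the format names are the ones listed in self.formats):

```python
# Sketch: render the same data in several OutputFormatter formats.
import sys
from charmhelpers.cli import OutputFormatter

fmt = OutputFormatter(outfile=sys.stdout)
data = [["eth0", "up"], ["eth1", "down"]]   # plain lists for YAML safety
fmt.format_output(data, "csv")    # excel-compatible CSV
fmt.format_output(data, "yaml")   # YAML via yaml.safe_dump
fmt.format_output(data, "raw")    # newline-joined str() of each row
```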
Builders should accept a + single argument (the subparser instance) and return the function to be + run as the command.""" + def wrapper(decorated): + subparser = self.subparsers.add_parser(command_name) + func = decorated(subparser) + subparser.set_defaults(func=func) + subparser.description = description or func.__doc__ + return wrapper + + def run(self): + "Run cli, processing arguments and executing subcommands." + arguments = self.argument_parser.parse_args() + argspec = inspect.getargspec(arguments.func) + vargs = [] + for arg in argspec.args: + vargs.append(getattr(arguments, arg)) + if argspec.varargs: + vargs.extend(getattr(arguments, argspec.varargs)) + output = arguments.func(*vargs) + if getattr(arguments.func, '_cli_test_command', False): + self.exit_code = 0 if output else 1 + output = '' + if getattr(arguments.func, '_cli_no_output', False): + output = '' + self.formatter.format_output(output, arguments.format) + if charmhelpers.core.unitdata._KV: + charmhelpers.core.unitdata._KV.flush() + + +cmdline = CommandLine() + + +def describe_arguments(func): + """ + Analyze a function's signature and return a data structure suitable for + passing in as arguments to an argparse parser's add_argument() method.""" + + argspec = inspect.getargspec(func) + # we should probably raise an exception somewhere if func includes **kwargs + if argspec.defaults: + positional_args = argspec.args[:-len(argspec.defaults)] + keyword_names = argspec.args[-len(argspec.defaults):] + for arg, default in zip(keyword_names, argspec.defaults): + yield ('--{}'.format(arg),), {'default': default} + else: + positional_args = argspec.args + + for arg in positional_args: + yield (arg,), {} + if argspec.varargs: + yield (argspec.varargs,), {'nargs': '*'} diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/cli/benchmark.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/cli/benchmark.py new file mode 100644 index 0000000000000000000000000000000000000000..303af14b607d31e338aefff0df593609b7b45feb --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/cli/benchmark.py @@ -0,0 +1,34 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +from . 
import cmdline +from charmhelpers.contrib.benchmark import Benchmark + + +@cmdline.subcommand(command_name='benchmark-start') +def start(): + Benchmark.start() + + +@cmdline.subcommand(command_name='benchmark-finish') +def finish(): + Benchmark.finish() + + +@cmdline.subcommand_builder('benchmark-composite', description="Set the benchmark composite score") +def service(subparser): + subparser.add_argument("value", help="The composite score.") + subparser.add_argument("units", help="The units the composite score represents, i.e., 'reads/sec'.") + subparser.add_argument("direction", help="'asc' if a lower score is better, 'desc' if a higher score is better.") + return Benchmark.set_composite_score diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/cli/commands.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/cli/commands.py new file mode 100644 index 0000000000000000000000000000000000000000..b93105650be8226aa390fda46aa08afb23ebc7bc --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/cli/commands.py @@ -0,0 +1,30 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +""" +This module loads sub-modules into the python runtime so they can be +discovered via the inspect module. In order to prevent flake8 from (rightfully) +telling us these are unused modules, throw a ' # noqa' at the end of each import +so that the warning is suppressed. +""" + +from . import CommandLine # noqa + +""" +Import the sub-modules which have decorated subcommands to register with chlp. +""" +from . import host # noqa +from . import benchmark # noqa +from . import unitdata # noqa +from . import hookenv # noqa diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/cli/hookenv.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/cli/hookenv.py new file mode 100644 index 0000000000000000000000000000000000000000..bd72f448bf0092251a454ff6dd3145f09048ae72 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/cli/hookenv.py @@ -0,0 +1,21 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +from . 
import cmdline +from charmhelpers.core import hookenv + + +cmdline.subcommand('relation-id')(hookenv.relation_id._wrapped) +cmdline.subcommand('service-name')(hookenv.service_name) +cmdline.subcommand('remote-service-name')(hookenv.remote_service_name._wrapped) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/cli/host.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/cli/host.py new file mode 100644 index 0000000000000000000000000000000000000000..40396849907976fd077cc9c53a0852c4380c6266 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/cli/host.py @@ -0,0 +1,29 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +from . import cmdline +from charmhelpers.core import host + + +@cmdline.subcommand() +def mounts(): + "List mounts" + return host.mounts() + + +@cmdline.subcommand_builder('service', description="Control system services") +def service(subparser): + subparser.add_argument("action", help="The action to perform (start, stop, etc...)") + subparser.add_argument("service_name", help="Name of the service to control") + return host.service diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/cli/unitdata.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/cli/unitdata.py new file mode 100644 index 0000000000000000000000000000000000000000..acce846f84ef32ed0b5829cf08e67ad33f0eb5d1 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/cli/unitdata.py @@ -0,0 +1,46 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +from . 
import cmdline +from charmhelpers.core import unitdata + + +@cmdline.subcommand_builder('unitdata', description="Store and retrieve data") +def unitdata_cmd(subparser): + nested = subparser.add_subparsers() + + get_cmd = nested.add_parser('get', help='Retrieve data') + get_cmd.add_argument('key', help='Key to retrieve the value of') + get_cmd.set_defaults(action='get', value=None) + + getrange_cmd = nested.add_parser('getrange', help='Retrieve multiple data') + getrange_cmd.add_argument('key', metavar='prefix', + help='Prefix of the keys to retrieve') + getrange_cmd.set_defaults(action='getrange', value=None) + + set_cmd = nested.add_parser('set', help='Store data') + set_cmd.add_argument('key', help='Key to set') + set_cmd.add_argument('value', help='Value to store') + set_cmd.set_defaults(action='set') + + def _unitdata_cmd(action, key, value): + if action == 'get': + return unitdata.kv().get(key) + elif action == 'getrange': + return unitdata.kv().getrange(key) + elif action == 'set': + unitdata.kv().set(key, value) + unitdata.kv().flush() + return '' + return _unitdata_cmd diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/context.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/context.py new file mode 100644 index 0000000000000000000000000000000000000000..01864740e89633f6c65f43c32ed89ef9d5e857c9 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/context.py @@ -0,0 +1,205 @@ +# Copyright 2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +''' +A Pythonic API to interact with the charm hook environment. + +:author: Stuart Bishop +''' + +import six + +from charmhelpers.core import hookenv + +from collections import OrderedDict +if six.PY3: + from collections import UserDict # pragma: nocover +else: + from UserDict import IterableUserDict as UserDict # pragma: nocover + + +class Relations(OrderedDict): + '''Mapping relation name -> relation id -> Relation. + + >>> rels = Relations() + >>> rels['sprog']['sprog:12']['client/6']['widget'] + 'remote widget' + >>> rels['sprog']['sprog:12'].local['widget'] = 'local widget' + >>> rels['sprog']['sprog:12'].local['widget'] + 'local widget' + >>> rels.peer.local['widget'] + 'local widget on the peer relation' + ''' + def __init__(self): + super(Relations, self).__init__() + for relname in sorted(hookenv.relation_types()): + self[relname] = OrderedDict() + relids = hookenv.relation_ids(relname) + relids.sort(key=lambda x: int(x.split(':', 1)[-1])) + for relid in relids: + self[relname][relid] = Relation(relid) + + @property + def peer(self): + peer_relid = hookenv.peer_relation_id() + for rels in self.values(): + if peer_relid in rels: + return rels[peer_relid] + + +class Relation(OrderedDict): + '''Mapping of unit -> remote RelationInfo for a relation. + + This is an OrderedDict mapping, ordered numerically by + by unit number. 
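The unitdata subcommand above is a thin shim over charmhelpers.core.unitdata; the same get/getrange/set surface is available directly from Python. A short sketch (the key names are invented):

```python
# Sketch: the same key-value store the `unitdata` subcommand wraps.
from charmhelpers.core import unitdata

kv = unitdata.kv()
kv.set("vyos.last-rule", "10")
print(kv.get("vyos.last-rule"))    # -> '10'
print(kv.getrange("vyos."))        # all keys under the prefix
kv.flush()                         # persist, as the CLI does after `set`
```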
+ + Also provides access to the local RelationInfo, and peer RelationInfo + instances by the 'local' and 'peers' attributes. + + >>> r = Relation('sprog:12') + >>> r.keys() + ['client/9', 'client/10'] # Ordered numerically + >>> r['client/10']['widget'] # A remote RelationInfo setting + 'remote widget' + >>> r.local['widget'] # The local RelationInfo setting + 'local widget' + ''' + relid = None # The relation id. + relname = None # The relation name (also known as relation type). + service = None # The remote service name, if known. + local = None # The local end's RelationInfo. + peers = None # Map of peer -> RelationInfo. None if no peer relation. + + def __init__(self, relid): + remote_units = hookenv.related_units(relid) + remote_units.sort(key=lambda u: int(u.split('/', 1)[-1])) + super(Relation, self).__init__((unit, RelationInfo(relid, unit)) + for unit in remote_units) + + self.relname = relid.split(':', 1)[0] + self.relid = relid + self.local = RelationInfo(relid, hookenv.local_unit()) + + for relinfo in self.values(): + self.service = relinfo.service + break + + # If we have peers, and they have joined both the provided peer + # relation and this relation, we can peek at their data too. + # This is useful for creating consensus without leadership. + peer_relid = hookenv.peer_relation_id() + if peer_relid and peer_relid != relid: + peers = hookenv.related_units(peer_relid) + if peers: + peers.sort(key=lambda u: int(u.split('/', 1)[-1])) + self.peers = OrderedDict((peer, RelationInfo(relid, peer)) + for peer in peers) + else: + self.peers = OrderedDict() + else: + self.peers = None + + def __str__(self): + return '{} ({})'.format(self.relid, self.service) + + +class RelationInfo(UserDict): + '''The bag of data at an end of a relation. + + Every unit participating in a relation has a single bag of + data associated with that relation. This is that bag. + + The bag of data for the local unit may be updated. Remote data + is immutable and will remain static for the duration of the hook. + + Changes made to the local units relation data only become visible + to other units after the hook completes successfully. If the hook + does not complete successfully, the changes are rolled back. + + Unlike standard Python mappings, setting an item to None is the + same as deleting it. + + >>> relinfo = RelationInfo('db:12') # Default is the local unit. + >>> relinfo['user'] = 'fred' + >>> relinfo['user'] + 'fred' + >>> relinfo['user'] = None + >>> 'fred' in relinfo + False + + This class wraps hookenv.relation_get and hookenv.relation_set. + All caching is left up to these two methods to avoid synchronization + issues. Data is only loaded on demand. + ''' + relid = None # The relation id. + relname = None # The relation name (also know as the relation type). + unit = None # The unit id. + number = None # The unit number (integer). + service = None # The service name. + + def __init__(self, relid, unit): + self.relname = relid.split(':', 1)[0] + self.relid = relid + self.unit = unit + self.service, num = self.unit.split('/', 1) + self.number = int(num) + + def __str__(self): + return '{} ({})'.format(self.relid, self.unit) + + @property + def data(self): + return hookenv.relation_get(rid=self.relid, unit=self.unit) + + def __setitem__(self, key, value): + if self.unit != hookenv.local_unit(): + raise TypeError('Attempting to set {} on remote unit {}' + ''.format(key, self.unit)) + if value is not None and not isinstance(value, six.string_types): + # We don't do implicit casting. 
This would cause simple + # types like integers to be read back as strings in subsequent + # hooks, and mutable types would require a lot of wrapping + # to ensure relation-set gets called when they are mutated. + raise ValueError('Only string values allowed') + hookenv.relation_set(self.relid, {key: value}) + + def __delitem__(self, key): + # Deleting a key and setting it to null is the same thing in + # Juju relations. + self[key] = None + + +class Leader(UserDict): + def __init__(self): + pass # Don't call superclass initializer, as it will nuke self.data + + @property + def data(self): + return hookenv.leader_get() + + def __setitem__(self, key, value): + if not hookenv.is_leader(): + raise TypeError('Not the leader. Cannot change leader settings.') + if value is not None and not isinstance(value, six.string_types): + # We don't do implicit casting. This would cause simple + # types like integers to be read back as strings in subsequent + # hooks, and mutable types would require a lot of wrapping + # to ensure leader-set gets called when they are mutated. + raise ValueError('Only string values allowed') + hookenv.leader_set({key: value}) + + def __delitem__(self, key): + # Deleting a key and setting it to null is the same thing in + # Juju leadership settings. + self[key] = None diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..d7567b863e3a5ad2b7a7f44958b4166e0c3d346b --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/__init__.py @@ -0,0 +1,13 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/amulet/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/amulet/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..d7567b863e3a5ad2b7a7f44958b4166e0c3d346b --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/amulet/__init__.py @@ -0,0 +1,13 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
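Before moving on to the amulet test helpers, a short sketch of the context API above; it is only meaningful inside a running Juju hook, the relation and key names are made up, and, as the docstrings note, only string values may be stored.

```python
# Sketch: reading and writing relation data through charmhelpers.context.
# Runs only in a Juju hook environment; names below are illustrative.
from charmhelpers.context import Relations

rels = Relations()
for relid, relation in rels.get("db", {}).items():
    relation.local["username"] = "fred"           # visible after the hook
    for unit, info in relation.items():
        print(relid, unit, info.get("hostname"))  # remote data, read-only
    relation.local["username"] = None             # same as deleting the key
```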
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/amulet/deployment.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/amulet/deployment.py new file mode 100644 index 0000000000000000000000000000000000000000..d21d01d8ffe242d686283b0ed977b88be6bfc74e --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/amulet/deployment.py @@ -0,0 +1,99 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import amulet +import os +import six + + +class AmuletDeployment(object): + """Amulet deployment. + + This class provides generic Amulet deployment and test runner + methods. + """ + + def __init__(self, series=None): + """Initialize the deployment environment.""" + self.series = None + + if series: + self.series = series + self.d = amulet.Deployment(series=self.series) + else: + self.d = amulet.Deployment() + + def _add_services(self, this_service, other_services): + """Add services. + + Add services to the deployment where this_service is the local charm + that we're testing and other_services are the other services that + are being used in the local amulet tests. + """ + if this_service['name'] != os.path.basename(os.getcwd()): + s = this_service['name'] + msg = "The charm's root directory name needs to be {}".format(s) + amulet.raise_status(amulet.FAIL, msg=msg) + + if 'units' not in this_service: + this_service['units'] = 1 + + self.d.add(this_service['name'], units=this_service['units'], + constraints=this_service.get('constraints'), + storage=this_service.get('storage')) + + for svc in other_services: + if 'location' in svc: + branch_location = svc['location'] + elif self.series: + branch_location = 'cs:{}/{}'.format(self.series, svc['name']), + else: + branch_location = None + + if 'units' not in svc: + svc['units'] = 1 + + self.d.add(svc['name'], charm=branch_location, units=svc['units'], + constraints=svc.get('constraints'), + storage=svc.get('storage')) + + def _add_relations(self, relations): + """Add all of the relations for the services.""" + for k, v in six.iteritems(relations): + self.d.relate(k, v) + + def _configure_services(self, configs): + """Configure all of the services.""" + for service, config in six.iteritems(configs): + self.d.configure(service, config) + + def _deploy(self): + """Deploy environment and wait for all hooks to finish executing.""" + timeout = int(os.environ.get('AMULET_SETUP_TIMEOUT', 900)) + try: + self.d.setup(timeout=timeout) + self.d.sentry.wait(timeout=timeout) + except amulet.helpers.TimeoutError: + amulet.raise_status( + amulet.FAIL, + msg="Deployment timed out ({}s)".format(timeout) + ) + except Exception: + raise + + def run_tests(self): + """Run all of the methods that are prefixed with 'test_'.""" + for test in dir(self): + if test.startswith('test_'): + getattr(self, test)() diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/amulet/utils.py 
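A hedged sketch of the AmuletDeployment flow above, in the shape its subclasses use (amulet must be installed, the underscore methods are the ones test classes call internally, and the service names and config are placeholders):

```python
# Sketch: drive AmuletDeployment the way its test subclasses do.
from charmhelpers.contrib.amulet.deployment import AmuletDeployment

d = AmuletDeployment(series="xenial")
d._add_services({"name": "vyos-config", "units": 1},
                [{"name": "mysql"}])                  # placeholder services
d._add_relations({"vyos-config:db": "mysql:db"})      # placeholder relation
d._configure_services({"mysql": {"dataset-size": "50%"}})
d._deploy()      # waits for hooks to settle, raises amulet.FAIL on timeout
d.run_tests()    # runs every method whose name starts with test_
```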
b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/amulet/utils.py new file mode 100644 index 0000000000000000000000000000000000000000..54283088e1ddd53bc85008bc7446620e87b42278 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/amulet/utils.py @@ -0,0 +1,820 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import io +import json +import logging +import os +import re +import socket +import subprocess +import sys +import time +import uuid + +import amulet +import distro_info +import six +from six.moves import configparser +if six.PY3: + from urllib import parse as urlparse +else: + import urlparse + + +class AmuletUtils(object): + """Amulet utilities. + + This class provides common utility functions that are used by Amulet + tests. + """ + + def __init__(self, log_level=logging.ERROR): + self.log = self.get_logger(level=log_level) + self.ubuntu_releases = self.get_ubuntu_releases() + + def get_logger(self, name="amulet-logger", level=logging.DEBUG): + """Get a logger object that will log to stdout.""" + log = logging + logger = log.getLogger(name) + fmt = log.Formatter("%(asctime)s %(funcName)s " + "%(levelname)s: %(message)s") + + handler = log.StreamHandler(stream=sys.stdout) + handler.setLevel(level) + handler.setFormatter(fmt) + + logger.addHandler(handler) + logger.setLevel(level) + + return logger + + def valid_ip(self, ip): + if re.match(r"^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$", ip): + return True + else: + return False + + def valid_url(self, url): + p = re.compile( + r'^(?:http|ftp)s?://' + r'(?:(?:[A-Z0-9](?:[A-Z0-9-]{0,61}[A-Z0-9])?\.)+(?:[A-Z]{2,6}\.?|[A-Z0-9-]{2,}\.?)|' # noqa + r'localhost|' + r'\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})' + r'(?::\d+)?' + r'(?:/?|[/?]\S+)$', + re.IGNORECASE) + if p.match(url): + return True + else: + return False + + def get_ubuntu_release_from_sentry(self, sentry_unit): + """Get Ubuntu release codename from sentry unit. + + :param sentry_unit: amulet sentry/service unit pointer + :returns: list of strings - release codename, failure message + """ + msg = None + cmd = 'lsb_release -cs' + release, code = sentry_unit.ssh(cmd) + if code == 0: + self.log.debug('{} lsb_release: {}'.format( + sentry_unit.info['unit_name'], release)) + else: + msg = ('{} `{}` returned {} ' + '{}'.format(sentry_unit.info['unit_name'], + cmd, release, code)) + if release not in self.ubuntu_releases: + msg = ("Release ({}) not found in Ubuntu releases " + "({})".format(release, self.ubuntu_releases)) + return release, msg + + def validate_services(self, commands): + """Validate that lists of commands succeed on service units. Can be + used to verify system services are running on the corresponding + service units. 
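The valid_ip and valid_url helpers defined a little earlier in this class are plain regex checks and can be exercised standalone (assuming the amulet and distro-info dependencies are importable); note that valid_ip only checks the dotted-quad shape, not octet ranges:

```python
# Sketch: the regex-based validators from AmuletUtils.
from charmhelpers.contrib.amulet.utils import AmuletUtils

u = AmuletUtils()
assert u.valid_ip("10.0.0.1")
assert not u.valid_ip("localhost")            # hostname, not a dotted quad
assert u.valid_url("https://osm.etsi.org/")
assert not u.valid_url("ftp//missing-colon")  # no scheme separator
```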
+ + :param commands: dict with sentry keys and arbitrary command list vals + :returns: None if successful, Failure string message otherwise + """ + self.log.debug('Checking status of system services...') + + # /!\ DEPRECATION WARNING (beisner): + # New and existing tests should be rewritten to use + # validate_services_by_name() as it is aware of init systems. + self.log.warn('DEPRECATION WARNING: use ' + 'validate_services_by_name instead of validate_services ' + 'due to init system differences.') + + for k, v in six.iteritems(commands): + for cmd in v: + output, code = k.run(cmd) + self.log.debug('{} `{}` returned ' + '{}'.format(k.info['unit_name'], + cmd, code)) + if code != 0: + return "command `{}` returned {}".format(cmd, str(code)) + return None + + def validate_services_by_name(self, sentry_services): + """Validate system service status by service name, automatically + detecting init system based on Ubuntu release codename. + + :param sentry_services: dict with sentry keys and svc list values + :returns: None if successful, Failure string message otherwise + """ + self.log.debug('Checking status of system services...') + + # Point at which systemd became a thing + systemd_switch = self.ubuntu_releases.index('vivid') + + for sentry_unit, services_list in six.iteritems(sentry_services): + # Get lsb_release codename from unit + release, ret = self.get_ubuntu_release_from_sentry(sentry_unit) + if ret: + return ret + + for service_name in services_list: + if (self.ubuntu_releases.index(release) >= systemd_switch or + service_name in ['rabbitmq-server', 'apache2', + 'memcached']): + # init is systemd (or regular sysv) + cmd = 'sudo service {} status'.format(service_name) + output, code = sentry_unit.run(cmd) + service_running = code == 0 + elif self.ubuntu_releases.index(release) < systemd_switch: + # init is upstart + cmd = 'sudo status {}'.format(service_name) + output, code = sentry_unit.run(cmd) + service_running = code == 0 and "start/running" in output + + self.log.debug('{} `{}` returned ' + '{}'.format(sentry_unit.info['unit_name'], + cmd, code)) + if not service_running: + return u"command `{}` returned {} {}".format( + cmd, output, str(code)) + return None + + def _get_config(self, unit, filename): + """Get a ConfigParser object for parsing a unit's config file.""" + file_contents = unit.file_contents(filename) + + # NOTE(beisner): by default, ConfigParser does not handle options + # with no value, such as the flags used in the mysql my.cnf file. + # https://bugs.python.org/issue7005 + config = configparser.ConfigParser(allow_no_value=True) + config.readfp(io.StringIO(file_contents)) + return config + + def validate_config_data(self, sentry_unit, config_file, section, + expected): + """Validate config file data. + + Verify that the specified section of the config file contains + the expected option key:value pairs. + + Compare expected dictionary data vs actual dictionary data. + The values in the 'expected' dictionary can be strings, bools, ints, + longs, or can be a function that evaluates a variable and returns a + bool. 
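As that docstring says, the `expected` values may be literals or predicate functions. A sketch of the dictionary shape validate_config_data (and _validate_dict_data below) consume; the section and option names are invented:

```python
# Sketch: an `expected` mapping mixing literals with predicate functions.
from charmhelpers.contrib.amulet.utils import AmuletUtils

u = AmuletUtils()
expected = {
    "bind-host": u.valid_ip,   # predicate: any well-formed dotted quad
    "debug": "False",          # literal string comparison
    "workers": u.not_null,     # predicate (defined just below): present
}
# result = u.validate_config_data(sentry_unit, "/etc/svc/svc.conf",
#                                 "DEFAULT", expected)  # None means OK
```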
+ """ + self.log.debug('Validating config file data ({} in {} on {})' + '...'.format(section, config_file, + sentry_unit.info['unit_name'])) + config = self._get_config(sentry_unit, config_file) + + if section != 'DEFAULT' and not config.has_section(section): + return "section [{}] does not exist".format(section) + + for k in expected.keys(): + if not config.has_option(section, k): + return "section [{}] is missing option {}".format(section, k) + + actual = config.get(section, k) + v = expected[k] + if (isinstance(v, six.string_types) or + isinstance(v, bool) or + isinstance(v, six.integer_types)): + # handle explicit values + if actual != v: + return "section [{}] {}:{} != expected {}:{}".format( + section, k, actual, k, expected[k]) + # handle function pointers, such as not_null or valid_ip + elif not v(actual): + return "section [{}] {}:{} != expected {}:{}".format( + section, k, actual, k, expected[k]) + return None + + def _validate_dict_data(self, expected, actual): + """Validate dictionary data. + + Compare expected dictionary data vs actual dictionary data. + The values in the 'expected' dictionary can be strings, bools, ints, + longs, or can be a function that evaluates a variable and returns a + bool. + """ + self.log.debug('actual: {}'.format(repr(actual))) + self.log.debug('expected: {}'.format(repr(expected))) + + for k, v in six.iteritems(expected): + if k in actual: + if (isinstance(v, six.string_types) or + isinstance(v, bool) or + isinstance(v, six.integer_types)): + # handle explicit values + if v != actual[k]: + return "{}:{}".format(k, actual[k]) + # handle function pointers, such as not_null or valid_ip + elif not v(actual[k]): + return "{}:{}".format(k, actual[k]) + else: + return "key '{}' does not exist".format(k) + return None + + def validate_relation_data(self, sentry_unit, relation, expected): + """Validate actual relation data based on expected relation data.""" + actual = sentry_unit.relation(relation[0], relation[1]) + return self._validate_dict_data(expected, actual) + + def _validate_list_data(self, expected, actual): + """Compare expected list vs actual list data.""" + for e in expected: + if e not in actual: + return "expected item {} not found in actual list".format(e) + return None + + def not_null(self, string): + if string is not None: + return True + else: + return False + + def _get_file_mtime(self, sentry_unit, filename): + """Get last modification time of file.""" + return sentry_unit.file_stat(filename)['mtime'] + + def _get_dir_mtime(self, sentry_unit, directory): + """Get last modification time of directory.""" + return sentry_unit.directory_stat(directory)['mtime'] + + def _get_proc_start_time(self, sentry_unit, service, pgrep_full=None): + """Get start time of a process based on the last modification time + of the /proc/pid directory. 
+ + :sentry_unit: The sentry unit to check for the service on + :service: service name to look for in process table + :pgrep_full: [Deprecated] Use full command line search mode with pgrep + :returns: epoch time of service process start + :param commands: list of bash commands + :param sentry_units: list of sentry unit pointers + :returns: None if successful; Failure message otherwise + """ + pid_list = self.get_process_id_list( + sentry_unit, service, pgrep_full=pgrep_full) + pid = pid_list[0] + proc_dir = '/proc/{}'.format(pid) + self.log.debug('Pid for {} on {}: {}'.format( + service, sentry_unit.info['unit_name'], pid)) + + return self._get_dir_mtime(sentry_unit, proc_dir) + + def service_restarted(self, sentry_unit, service, filename, + pgrep_full=None, sleep_time=20): + """Check if service was restarted. + + Compare a service's start time vs a file's last modification time + (such as a config file for that service) to determine if the service + has been restarted. + """ + # /!\ DEPRECATION WARNING (beisner): + # This method is prone to races in that no before-time is known. + # Use validate_service_config_changed instead. + + # NOTE(beisner) pgrep_full is no longer implemented, as pidof is now + # used instead of pgrep. pgrep_full is still passed through to ensure + # deprecation WARNS. lp1474030 + self.log.warn('DEPRECATION WARNING: use ' + 'validate_service_config_changed instead of ' + 'service_restarted due to known races.') + + time.sleep(sleep_time) + if (self._get_proc_start_time(sentry_unit, service, pgrep_full) >= + self._get_file_mtime(sentry_unit, filename)): + return True + else: + return False + + def service_restarted_since(self, sentry_unit, mtime, service, + pgrep_full=None, sleep_time=20, + retry_count=30, retry_sleep_time=10): + """Check if service was been started after a given time. + + Args: + sentry_unit (sentry): The sentry unit to check for the service on + mtime (float): The epoch time to check against + service (string): service name to look for in process table + pgrep_full: [Deprecated] Use full command line search mode with pgrep + sleep_time (int): Initial sleep time (s) before looking for file + retry_sleep_time (int): Time (s) to sleep between retries + retry_count (int): If file is not found, how many times to retry + + Returns: + bool: True if service found and its start time it newer than mtime, + False if service is older than mtime or if service was + not found. + """ + # NOTE(beisner) pgrep_full is no longer implemented, as pidof is now + # used instead of pgrep. pgrep_full is still passed through to ensure + # deprecation WARNS. lp1474030 + + unit_name = sentry_unit.info['unit_name'] + self.log.debug('Checking that %s service restarted since %s on ' + '%s' % (service, mtime, unit_name)) + time.sleep(sleep_time) + proc_start_time = None + tries = 0 + while tries <= retry_count and not proc_start_time: + try: + proc_start_time = self._get_proc_start_time(sentry_unit, + service, + pgrep_full) + self.log.debug('Attempt {} to get {} proc start time on {} ' + 'OK'.format(tries, service, unit_name)) + except IOError as e: + # NOTE(beisner) - race avoidance, proc may not exist yet. 
+ # https://bugs.launchpad.net/charm-helpers/+bug/1474030 + self.log.debug('Attempt {} to get {} proc start time on {} ' + 'failed\n{}'.format(tries, service, + unit_name, e)) + time.sleep(retry_sleep_time) + tries += 1 + + if not proc_start_time: + self.log.warn('No proc start time found, assuming service did ' + 'not start') + return False + if proc_start_time >= mtime: + self.log.debug('Proc start time is newer than provided mtime' + '(%s >= %s) on %s (OK)' % (proc_start_time, + mtime, unit_name)) + return True + else: + self.log.warn('Proc start time (%s) is older than provided mtime ' + '(%s) on %s, service did not ' + 'restart' % (proc_start_time, mtime, unit_name)) + return False + + def config_updated_since(self, sentry_unit, filename, mtime, + sleep_time=20, retry_count=30, + retry_sleep_time=10): + """Check if file was modified after a given time. + + Args: + sentry_unit (sentry): The sentry unit to check the file mtime on + filename (string): The file to check mtime of + mtime (float): The epoch time to check against + sleep_time (int): Initial sleep time (s) before looking for file + retry_sleep_time (int): Time (s) to sleep between retries + retry_count (int): If file is not found, how many times to retry + + Returns: + bool: True if file was modified more recently than mtime, False if + file was modified before mtime, or if file not found. + """ + unit_name = sentry_unit.info['unit_name'] + self.log.debug('Checking that %s updated since %s on ' + '%s' % (filename, mtime, unit_name)) + time.sleep(sleep_time) + file_mtime = None + tries = 0 + while tries <= retry_count and not file_mtime: + try: + file_mtime = self._get_file_mtime(sentry_unit, filename) + self.log.debug('Attempt {} to get {} file mtime on {} ' + 'OK'.format(tries, filename, unit_name)) + except IOError as e: + # NOTE(beisner) - race avoidance, file may not exist yet. + # https://bugs.launchpad.net/charm-helpers/+bug/1474030 + self.log.debug('Attempt {} to get {} file mtime on {} ' + 'failed\n{}'.format(tries, filename, + unit_name, e)) + time.sleep(retry_sleep_time) + tries += 1 + + if not file_mtime: + self.log.warn('Could not determine file mtime, assuming ' + 'file does not exist') + return False + + if file_mtime >= mtime: + self.log.debug('File mtime is newer than provided mtime ' + '(%s >= %s) on %s (OK)' % (file_mtime, + mtime, unit_name)) + return True + else: + self.log.warn('File mtime is older than provided mtime' + '(%s < on %s) on %s' % (file_mtime, + mtime, unit_name)) + return False + + def validate_service_config_changed(self, sentry_unit, mtime, service, + filename, pgrep_full=None, + sleep_time=20, retry_count=30, + retry_sleep_time=10): + """Check service and file were updated after mtime + + Args: + sentry_unit (sentry): The sentry unit to check for the service on + mtime (float): The epoch time to check against + service (string): service name to look for in process table + filename (string): The file to check mtime of + pgrep_full: [Deprecated] Use full command line search mode with pgrep + sleep_time (int): Initial sleep in seconds to pass to test helpers + retry_count (int): If service is not found, how many times to retry + retry_sleep_time (int): Time in seconds to wait between retries + + Typical Usage: + u = OpenStackAmuletUtils(ERROR) + ... 
+ mtime = u.get_sentry_time(self.cinder_sentry) + self.d.configure('cinder', {'verbose': 'True', 'debug': 'True'}) + if not u.validate_service_config_changed(self.cinder_sentry, + mtime, + 'cinder-api', + '/etc/cinder/cinder.conf') + amulet.raise_status(amulet.FAIL, msg='update failed') + Returns: + bool: True if both service and file where updated/restarted after + mtime, False if service is older than mtime or if service was + not found or if filename was modified before mtime. + """ + + # NOTE(beisner) pgrep_full is no longer implemented, as pidof is now + # used instead of pgrep. pgrep_full is still passed through to ensure + # deprecation WARNS. lp1474030 + + service_restart = self.service_restarted_since( + sentry_unit, mtime, + service, + pgrep_full=pgrep_full, + sleep_time=sleep_time, + retry_count=retry_count, + retry_sleep_time=retry_sleep_time) + + config_update = self.config_updated_since( + sentry_unit, + filename, + mtime, + sleep_time=sleep_time, + retry_count=retry_count, + retry_sleep_time=retry_sleep_time) + + return service_restart and config_update + + def get_sentry_time(self, sentry_unit): + """Return current epoch time on a sentry""" + cmd = "date +'%s'" + return float(sentry_unit.run(cmd)[0]) + + def relation_error(self, name, data): + return 'unexpected relation data in {} - {}'.format(name, data) + + def endpoint_error(self, name, data): + return 'unexpected endpoint data in {} - {}'.format(name, data) + + def get_ubuntu_releases(self): + """Return a list of all Ubuntu releases in order of release.""" + _d = distro_info.UbuntuDistroInfo() + _release_list = _d.all + return _release_list + + def file_to_url(self, file_rel_path): + """Convert a relative file path to a file URL.""" + _abs_path = os.path.abspath(file_rel_path) + return urlparse.urlparse(_abs_path, scheme='file').geturl() + + def check_commands_on_units(self, commands, sentry_units): + """Check that all commands in a list exit zero on all + sentry units in a list. + + :param commands: list of bash commands + :param sentry_units: list of sentry unit pointers + :returns: None if successful; Failure message otherwise + """ + self.log.debug('Checking exit codes for {} commands on {} ' + 'sentry units...'.format(len(commands), + len(sentry_units))) + for sentry_unit in sentry_units: + for cmd in commands: + output, code = sentry_unit.run(cmd) + if code == 0: + self.log.debug('{} `{}` returned {} ' + '(OK)'.format(sentry_unit.info['unit_name'], + cmd, code)) + else: + return ('{} `{}` returned {} ' + '{}'.format(sentry_unit.info['unit_name'], + cmd, code, output)) + return None + + def get_process_id_list(self, sentry_unit, process_name, + expect_success=True, pgrep_full=False): + """Get a list of process ID(s) from a single sentry juju unit + for a single process name. + + :param sentry_unit: Amulet sentry instance (juju unit) + :param process_name: Process name + :param expect_success: If False, expect the PID to be missing, + raise if it is present. 
+ :returns: List of process IDs + """ + if pgrep_full: + cmd = 'pgrep -f "{}"'.format(process_name) + else: + cmd = 'pidof -x "{}"'.format(process_name) + if not expect_success: + cmd += " || exit 0 && exit 1" + output, code = sentry_unit.run(cmd) + if code != 0: + msg = ('{} `{}` returned {} ' + '{}'.format(sentry_unit.info['unit_name'], + cmd, code, output)) + amulet.raise_status(amulet.FAIL, msg=msg) + return str(output).split() + + def get_unit_process_ids( + self, unit_processes, expect_success=True, pgrep_full=False): + """Construct a dict containing unit sentries, process names, and + process IDs. + + :param unit_processes: A dictionary of Amulet sentry instance + to list of process names. + :param expect_success: if False expect the processes to not be + running, raise if they are. + :returns: Dictionary of Amulet sentry instance to dictionary + of process names to PIDs. + """ + pid_dict = {} + for sentry_unit, process_list in six.iteritems(unit_processes): + pid_dict[sentry_unit] = {} + for process in process_list: + pids = self.get_process_id_list( + sentry_unit, process, expect_success=expect_success, + pgrep_full=pgrep_full) + pid_dict[sentry_unit].update({process: pids}) + return pid_dict + + def validate_unit_process_ids(self, expected, actual): + """Validate process id quantities for services on units.""" + self.log.debug('Checking units for running processes...') + self.log.debug('Expected PIDs: {}'.format(expected)) + self.log.debug('Actual PIDs: {}'.format(actual)) + + if len(actual) != len(expected): + return ('Unit count mismatch. expected, actual: {}, ' + '{} '.format(len(expected), len(actual))) + + for (e_sentry, e_proc_names) in six.iteritems(expected): + e_sentry_name = e_sentry.info['unit_name'] + if e_sentry in actual.keys(): + a_proc_names = actual[e_sentry] + else: + return ('Expected sentry ({}) not found in actual dict data.' + '{}'.format(e_sentry_name, e_sentry)) + + if len(e_proc_names.keys()) != len(a_proc_names.keys()): + return ('Process name count mismatch. expected, actual: {}, ' + '{}'.format(len(expected), len(actual))) + + for (e_proc_name, e_pids), (a_proc_name, a_pids) in \ + zip(e_proc_names.items(), a_proc_names.items()): + if e_proc_name != a_proc_name: + return ('Process name mismatch. expected, actual: {}, ' + '{}'.format(e_proc_name, a_proc_name)) + + a_pids_length = len(a_pids) + fail_msg = ('PID count mismatch. 
{} ({}) expected, actual: ' + '{}, {} ({})'.format(e_sentry_name, e_proc_name, + e_pids, a_pids_length, + a_pids)) + + # If expected is a list, ensure at least one PID quantity match + if isinstance(e_pids, list) and \ + a_pids_length not in e_pids: + return fail_msg + # If expected is not bool and not list, + # ensure PID quantities match + elif not isinstance(e_pids, bool) and \ + not isinstance(e_pids, list) and \ + a_pids_length != e_pids: + return fail_msg + # If expected is bool True, ensure 1 or more PIDs exist + elif isinstance(e_pids, bool) and \ + e_pids is True and a_pids_length < 1: + return fail_msg + # If expected is bool False, ensure 0 PIDs exist + elif isinstance(e_pids, bool) and \ + e_pids is False and a_pids_length != 0: + return fail_msg + else: + self.log.debug('PID check OK: {} {} {}: ' + '{}'.format(e_sentry_name, e_proc_name, + e_pids, a_pids)) + return None + + def validate_list_of_identical_dicts(self, list_of_dicts): + """Check that all dicts within a list are identical.""" + hashes = [] + for _dict in list_of_dicts: + hashes.append(hash(frozenset(_dict.items()))) + + self.log.debug('Hashes: {}'.format(hashes)) + if len(set(hashes)) == 1: + self.log.debug('Dicts within list are identical') + else: + return 'Dicts within list are not identical' + + return None + + def validate_sectionless_conf(self, file_contents, expected): + """A crude conf parser. Useful to inspect configuration files which + do not have section headers (as would be necessary in order to use + the configparser). Such as openstack-dashboard or rabbitmq confs.""" + for line in file_contents.split('\n'): + if '=' in line: + args = line.split('=') + if len(args) <= 1: + continue + key = args[0].strip() + value = args[1].strip() + if key in expected.keys(): + if expected[key] != value: + msg = ('Config mismatch. Expected, actual: {}, ' + '{}'.format(expected[key], value)) + amulet.raise_status(amulet.FAIL, msg=msg) + + def get_unit_hostnames(self, units): + """Return a dict of juju unit names to hostnames.""" + host_names = {} + for unit in units: + host_names[unit.info['unit_name']] = \ + str(unit.file_contents('/etc/hostname').strip()) + self.log.debug('Unit host names: {}'.format(host_names)) + return host_names + + def run_cmd_unit(self, sentry_unit, cmd): + """Run a command on a unit, return the output and exit code.""" + output, code = sentry_unit.run(cmd) + if code == 0: + self.log.debug('{} `{}` command returned {} ' + '(OK)'.format(sentry_unit.info['unit_name'], + cmd, code)) + else: + msg = ('{} `{}` command returned {} ' + '{}'.format(sentry_unit.info['unit_name'], + cmd, code, output)) + amulet.raise_status(amulet.FAIL, msg=msg) + return str(output), code + + def file_exists_on_unit(self, sentry_unit, file_name): + """Check if a file exists on a unit.""" + try: + sentry_unit.file_stat(file_name) + return True + except IOError: + return False + except Exception as e: + msg = 'Error checking file {}: {}'.format(file_name, e) + amulet.raise_status(amulet.FAIL, msg=msg) + + def file_contents_safe(self, sentry_unit, file_name, + max_wait=60, fatal=False): + """Get file contents from a sentry unit. Wrap amulet file_contents + with retry logic to address races where a file checks as existing, + but no longer exists by the time file_contents is called. + Return None if file not found. 
Optionally raise if fatal is True.""" + unit_name = sentry_unit.info['unit_name'] + file_contents = False + tries = 0 + while not file_contents and tries < (max_wait / 4): + try: + file_contents = sentry_unit.file_contents(file_name) + except IOError: + self.log.debug('Attempt {} to open file {} from {} ' + 'failed'.format(tries, file_name, + unit_name)) + time.sleep(4) + tries += 1 + + if file_contents: + return file_contents + elif not fatal: + return None + elif fatal: + msg = 'Failed to get file contents from unit.' + amulet.raise_status(amulet.FAIL, msg) + + def port_knock_tcp(self, host="localhost", port=22, timeout=15): + """Open a TCP socket to check for a listening sevice on a host. + + :param host: host name or IP address, default to localhost + :param port: TCP port number, default to 22 + :param timeout: Connect timeout, default to 15 seconds + :returns: True if successful, False if connect failed + """ + + # Resolve host name if possible + try: + connect_host = socket.gethostbyname(host) + host_human = "{} ({})".format(connect_host, host) + except socket.error as e: + self.log.warn('Unable to resolve address: ' + '{} ({}) Trying anyway!'.format(host, e)) + connect_host = host + host_human = connect_host + + # Attempt socket connection + try: + knock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) + knock.settimeout(timeout) + knock.connect((connect_host, port)) + knock.close() + self.log.debug('Socket connect OK for host ' + '{} on port {}.'.format(host_human, port)) + return True + except socket.error as e: + self.log.debug('Socket connect FAIL for' + ' {} port {} ({})'.format(host_human, port, e)) + return False + + def port_knock_units(self, sentry_units, port=22, + timeout=15, expect_success=True): + """Open a TCP socket to check for a listening sevice on each + listed juju unit. + + :param sentry_units: list of sentry unit pointers + :param port: TCP port number, default to 22 + :param timeout: Connect timeout, default to 15 seconds + :expect_success: True by default, set False to invert logic + :returns: None if successful, Failure message otherwise + """ + for unit in sentry_units: + host = unit.info['public-address'] + connected = self.port_knock_tcp(host, port, timeout) + if not connected and expect_success: + return 'Socket connect failed.' + elif connected and not expect_success: + return 'Socket connected unexpectedly.' + + def get_uuid_epoch_stamp(self): + """Returns a stamp string based on uuid4 and epoch time. Useful in + generating test messages which need to be unique-ish.""" + return '[{}-{}]'.format(uuid.uuid4(), time.time()) + + # amulet juju action helpers: + def run_action(self, unit_sentry, action, + _check_output=subprocess.check_output, + params=None): + """Translate to amulet's built in run_action(). Deprecated. + + Run the named action on a given unit sentry. + + params a dict of parameters to use + _check_output parameter is no longer used + + @return action_id. + """ + self.log.warn('charmhelpers.contrib.amulet.utils.run_action has been ' + 'deprecated for amulet.run_action') + return unit_sentry.run_action(action, action_args=params) + + def wait_on_action(self, action_id, _check_output=subprocess.check_output): + """Wait for a given action, returning if it completed or not. 
+ + action_id a string action uuid + _check_output parameter is no longer used + """ + data = amulet.actions.get_action_output(action_id, full_output=True) + return data.get(u"status") == "completed" + + def status_get(self, unit): + """Return the current service status of this unit.""" + raw_status, return_code = unit.run( + "status-get --format=json --include-data") + if return_code != 0: + return ("unknown", "") + status = json.loads(raw_status) + return (status["status"], status["message"]) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/ansible/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/ansible/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..824780aca93c406f7331ec3f0da8f22bee1272ec --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/ansible/__init__.py @@ -0,0 +1,303 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# Copyright 2013 Canonical Ltd. +# +# Authors: +# Charm Helpers Developers +""" +The ansible package enables you to easily use the configuration management +tool `Ansible`_ to setup and configure your charm. All of your charm +configuration options and relation-data are available as regular Ansible +variables which can be used in your playbooks and templates. + +.. _Ansible: https://www.ansible.com/ + +Usage +===== + +Here is an example directory structure for a charm to get you started:: + + charm-ansible-example/ + |-- ansible + | |-- playbook.yaml + | `-- templates + | `-- example.j2 + |-- config.yaml + |-- copyright + |-- icon.svg + |-- layer.yaml + |-- metadata.yaml + |-- reactive + | `-- example.py + |-- README.md + +Running a playbook called ``playbook.yaml`` when the ``install`` hook is run +can be as simple as:: + + from charmhelpers.contrib import ansible + from charms.reactive import hook + + @hook('install') + def install(): + ansible.install_ansible_support() + ansible.apply_playbook('ansible/playbook.yaml') + +Here is an example playbook that uses the ``template`` module to template the +file ``example.j2`` to the charm host and then uses the ``debug`` module to +print out all the host and Juju variables that you can use in your playbooks. +Note that you must target ``localhost`` as the playbook is run locally on the +charm host:: + + --- + - hosts: localhost + tasks: + - name: Template a file + template: + src: templates/example.j2 + dest: /tmp/example.j2 + + - name: Print all variables available to Ansible + debug: + var: vars + +Read more online about `playbooks`_ and standard Ansible `modules`_. + +.. _playbooks: https://docs.ansible.com/ansible/latest/user_guide/playbooks.html +.. _modules: https://docs.ansible.com/ansible/latest/user_guide/modules.html + +A further feature of the Ansible hooks is to provide a light weight "action" +scripting tool. 
This is a decorator that you apply to a function, and that +function can now receive cli args, and can pass extra args to the playbook:: + + @hooks.action() + def some_action(amount, force="False"): + "Usage: some-action AMOUNT [force=True]" # <-- shown on error + # process the arguments + # do some calls + # return extra-vars to be passed to ansible-playbook + return { + 'amount': int(amount), + 'type': force, + } + +You can now create a symlink to hooks.py that can be invoked like a hook, but +with cli params:: + + # link actions/some-action to hooks/hooks.py + + actions/some-action amount=10 force=true + +Install Ansible via pip +======================= + +If you want to install a specific version of Ansible via pip instead of +``install_ansible_support`` which uses APT, consider using the layer options +of `layer-basic`_ to install Ansible in a virtualenv:: + + options: + basic: + python_packages: ['ansible==2.9.0'] + include_system_packages: true + use_venv: true + +.. _layer-basic: https://charmsreactive.readthedocs.io/en/latest/layer-basic.html#layer-configuration + +""" +import os +import json +import stat +import subprocess +import functools + +import charmhelpers.contrib.templating.contexts +import charmhelpers.core.host +import charmhelpers.core.hookenv +import charmhelpers.fetch + + +charm_dir = os.environ.get('CHARM_DIR', '') +ansible_hosts_path = '/etc/ansible/hosts' +# Ansible will automatically include any vars in the following +# file in its inventory when run locally. +ansible_vars_path = '/etc/ansible/host_vars/localhost' + + +def install_ansible_support(from_ppa=True, ppa_location='ppa:ansible/ansible'): + """Installs Ansible via APT. + + By default this installs Ansible from the `PPA`_ linked from + the Ansible `website`_ or from a PPA set in ``ppa_location``. + + .. _PPA: https://launchpad.net/~ansible/+archive/ubuntu/ansible + .. _website: http://docs.ansible.com/intro_installation.html#latest-releases-via-apt-ubuntu + + If ``from_ppa`` is ``False``, then Ansible will be installed from + Ubuntu's Universe repositories. + """ + if from_ppa: + charmhelpers.fetch.add_source(ppa_location) + charmhelpers.fetch.apt_update(fatal=True) + charmhelpers.fetch.apt_install('ansible') + with open(ansible_hosts_path, 'w+') as hosts_file: + hosts_file.write('localhost ansible_connection=local ansible_remote_tmp=/root/.ansible/tmp') + + +def apply_playbook(playbook, tags=None, extra_vars=None): + """Run a playbook. + + This helper runs a playbook with juju state variables as context, + therefore variables set in application config can be used directly. + List of tags (--tags) and dictionary with extra_vars (--extra-vars) + can be passed as additional parameters. + + Read more about playbook `_variables`_ online. + + .. 
_variables: https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html + + Example:: + + # Run ansible/playbook.yaml with tag install and pass extra + # variables var_a and var_b + apply_playbook( + playbook='ansible/playbook.yaml', + tags=['install'], + extra_vars={'var_a': 'val_a', 'var_b': 'val_b'} + ) + + # Run ansible/playbook.yaml with tag config and extra variable nested, + # which is passed as json and can be used as dictionary in playbook + apply_playbook( + playbook='ansible/playbook.yaml', + tags=['config'], + extra_vars={'nested': {'a': 'value1', 'b': 'value2'}} + ) + + # Custom config file can be passed within extra_vars + apply_playbook( + playbook='ansible/playbook.yaml', + extra_vars="@some_file.json" + ) + + """ + tags = tags or [] + tags = ",".join(tags) + charmhelpers.contrib.templating.contexts.juju_state_to_yaml( + ansible_vars_path, namespace_separator='__', + allow_hyphens_in_keys=False, mode=(stat.S_IRUSR | stat.S_IWUSR)) + + # we want ansible's log output to be unbuffered + env = os.environ.copy() + env['PYTHONUNBUFFERED'] = "1" + call = [ + 'ansible-playbook', + '-c', + 'local', + playbook, + ] + if tags: + call.extend(['--tags', '{}'.format(tags)]) + if extra_vars: + call.extend(['--extra-vars', json.dumps(extra_vars)]) + subprocess.check_call(call, env=env) + + +class AnsibleHooks(charmhelpers.core.hookenv.Hooks): + """Run a playbook with the hook-name as the tag. + + This helper builds on the standard hookenv.Hooks helper, + but additionally runs the playbook with the hook-name specified + using --tags (ie. running all the tasks tagged with the hook-name). + + Example:: + + hooks = AnsibleHooks(playbook_path='ansible/my_machine_state.yaml') + + # All the tasks within my_machine_state.yaml tagged with 'install' + # will be run automatically after do_custom_work() + @hooks.hook() + def install(): + do_custom_work() + + # For most of your hooks, you won't need to do anything other + # than run the tagged tasks for the hook: + @hooks.hook('config-changed', 'start', 'stop') + def just_use_playbook(): + pass + + # As a convenience, you can avoid the above noop function by specifying + # the hooks which are handled by ansible-only and they'll be registered + # for you: + # hooks = AnsibleHooks( + # 'ansible/my_machine_state.yaml', + # default_hooks=['config-changed', 'start', 'stop']) + + if __name__ == "__main__": + # execute a hook based on the name the program is called by + hooks.execute(sys.argv) + """ + + def __init__(self, playbook_path, default_hooks=None): + """Register any hooks handled by ansible.""" + super(AnsibleHooks, self).__init__() + + self._actions = {} + self.playbook_path = playbook_path + + default_hooks = default_hooks or [] + + def noop(*args, **kwargs): + pass + + for hook in default_hooks: + self.register(hook, noop) + + def register_action(self, name, function): + """Register a hook""" + self._actions[name] = function + + def execute(self, args): + """Execute the hook followed by the playbook using the hook as tag.""" + hook_name = os.path.basename(args[0]) + extra_vars = None + if hook_name in self._actions: + extra_vars = self._actions[hook_name](args[1:]) + else: + super(AnsibleHooks, self).execute(args) + + charmhelpers.contrib.ansible.apply_playbook( + self.playbook_path, tags=[hook_name], extra_vars=extra_vars) + + def action(self, *action_names): + """Decorator, registering them as actions""" + def action_wrapper(decorated): + + @functools.wraps(decorated) + def wrapper(argv): + kwargs = dict(arg.split('=') for arg in 
argv) + try: + return decorated(**kwargs) + except TypeError as e: + if decorated.__doc__: + e.args += (decorated.__doc__,) + raise + + self.register_action(decorated.__name__, wrapper) + if '_' in decorated.__name__: + self.register_action( + decorated.__name__.replace('_', '-'), wrapper) + + return wrapper + + return action_wrapper diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/benchmark/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/benchmark/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..c35f7fe785a29519a257f1ed5aa33fdfa19129c7 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/benchmark/__init__.py @@ -0,0 +1,124 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import subprocess +import time +import os +from distutils.spawn import find_executable + +from charmhelpers.core.hookenv import ( + in_relation_hook, + relation_ids, + relation_set, + relation_get, +) + + +def action_set(key, val): + if find_executable('action-set'): + action_cmd = ['action-set'] + + if isinstance(val, dict): + for k, v in iter(val.items()): + action_set('%s.%s' % (key, k), v) + return True + + action_cmd.append('%s=%s' % (key, val)) + subprocess.check_call(action_cmd) + return True + return False + + +class Benchmark(): + """ + Helper class for the `benchmark` interface. + + :param list actions: Define the actions that are also benchmarks + + From inside the benchmark-relation-changed hook, you would + Benchmark(['memory', 'cpu', 'disk', 'smoke', 'custom']) + + Examples: + + siege = Benchmark(['siege']) + siege.start() + [... run siege ...] + # The higher the score, the better the benchmark + siege.set_composite_score(16.70, 'trans/sec', 'desc') + siege.finish() + + + """ + + BENCHMARK_CONF = '/etc/benchmark.conf' # Replaced in testing + + required_keys = [ + 'hostname', + 'port', + 'graphite_port', + 'graphite_endpoint', + 'api_port' + ] + + def __init__(self, benchmarks=None): + if in_relation_hook(): + if benchmarks is not None: + for rid in sorted(relation_ids('benchmark')): + relation_set(relation_id=rid, relation_settings={ + 'benchmarks': ",".join(benchmarks) + }) + + # Check the relation data + config = {} + for key in self.required_keys: + val = relation_get(key) + if val is not None: + config[key] = val + else: + # We don't have all of the required keys + config = {} + break + + if len(config): + with open(self.BENCHMARK_CONF, 'w') as f: + for key, val in iter(config.items()): + f.write("%s=%s\n" % (key, val)) + + @staticmethod + def start(): + action_set('meta.start', time.strftime('%Y-%m-%dT%H:%M:%SZ')) + + """ + If the collectd charm is also installed, tell it to send a snapshot + of the current profile data. 
+ """ + COLLECT_PROFILE_DATA = '/usr/local/bin/collect-profile-data' + if os.path.exists(COLLECT_PROFILE_DATA): + subprocess.check_output([COLLECT_PROFILE_DATA]) + + @staticmethod + def finish(): + action_set('meta.stop', time.strftime('%Y-%m-%dT%H:%M:%SZ')) + + @staticmethod + def set_composite_score(value, units, direction='asc'): + """ + Set the composite score for a benchmark run. This is a single number + representative of the benchmark results. This could be the most + important metric, or an amalgamation of metric scores. + """ + return action_set( + "meta.composite", + {'value': value, 'units': units, 'direction': direction} + ) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/charmhelpers/IMPORT b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/charmhelpers/IMPORT new file mode 100644 index 0000000000000000000000000000000000000000..d41cb041f45cb19b87aee7d3f5fd4d47a3d3fb2d --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/charmhelpers/IMPORT @@ -0,0 +1,4 @@ +Source lp:charm-tools/trunk + +charm-tools/helpers/python/charmhelpers/__init__.py -> charmhelpers/charmhelpers/contrib/charmhelpers/__init__.py +charm-tools/helpers/python/charmhelpers/tests/test_charmhelpers.py -> charmhelpers/tests/contrib/charmhelpers/test_charmhelpers.py diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/charmhelpers/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/charmhelpers/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..ed63e8121e23a8e01ccb8ddcbf9aea34543d3666 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/charmhelpers/__init__.py @@ -0,0 +1,203 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +import warnings +warnings.warn("contrib.charmhelpers is deprecated", DeprecationWarning) # noqa + +import operator +import tempfile +import time +import yaml +import subprocess + +import six +if six.PY3: + from urllib.request import urlopen + from urllib.error import (HTTPError, URLError) +else: + from urllib2 import (urlopen, HTTPError, URLError) + +"""Helper functions for writing Juju charms in Python.""" + +__metaclass__ = type +__all__ = [ + # 'get_config', # core.hookenv.config() + # 'log', # core.hookenv.log() + # 'log_entry', # core.hookenv.log() + # 'log_exit', # core.hookenv.log() + # 'relation_get', # core.hookenv.relation_get() + # 'relation_set', # core.hookenv.relation_set() + # 'relation_ids', # core.hookenv.relation_ids() + # 'relation_list', # core.hookenv.relation_units() + # 'config_get', # core.hookenv.config() + # 'unit_get', # core.hookenv.unit_get() + # 'open_port', # core.hookenv.open_port() + # 'close_port', # core.hookenv.close_port() + # 'service_control', # core.host.service() + 'unit_info', # client-side, NOT IMPLEMENTED + 'wait_for_machine', # client-side, NOT IMPLEMENTED + 'wait_for_page_contents', # client-side, NOT IMPLEMENTED + 'wait_for_relation', # client-side, NOT IMPLEMENTED + 'wait_for_unit', # client-side, NOT IMPLEMENTED +] + + +SLEEP_AMOUNT = 0.1 + + +# We create a juju_status Command here because it makes testing much, +# much easier. +def juju_status(): + subprocess.check_call(['juju', 'status']) + +# re-implemented as charmhelpers.fetch.configure_sources() +# def configure_source(update=False): +# source = config_get('source') +# if ((source.startswith('ppa:') or +# source.startswith('cloud:') or +# source.startswith('http:'))): +# run('add-apt-repository', source) +# if source.startswith("http:"): +# run('apt-key', 'import', config_get('key')) +# if update: +# run('apt-get', 'update') + + +# DEPRECATED: client-side only +def make_charm_config_file(charm_config): + charm_config_file = tempfile.NamedTemporaryFile(mode='w+') + charm_config_file.write(yaml.dump(charm_config)) + charm_config_file.flush() + # The NamedTemporaryFile instance is returned instead of just the name + # because we want to take advantage of garbage collection-triggered + # deletion of the temp file when it goes out of scope in the caller. + return charm_config_file + + +# DEPRECATED: client-side only +def unit_info(service_name, item_name, data=None, unit=None): + if data is None: + data = yaml.safe_load(juju_status()) + service = data['services'].get(service_name) + if service is None: + # XXX 2012-02-08 gmb: + # This allows us to cope with the race condition that we + # have between deploying a service and having it come up in + # `juju status`. We could probably do with cleaning it up so + # that it fails a bit more noisily after a while. + return '' + units = service['units'] + if unit is not None: + item = units[unit][item_name] + else: + # It might seem odd to sort the units here, but we do it to + # ensure that when no unit is specified, the first unit for the + # service (or at least the one with the lowest number) is the + # one whose data gets returned. + sorted_unit_names = sorted(units.keys()) + item = units[sorted_unit_names[0]][item_name] + return item + + +# DEPRECATED: client-side only +def get_machine_data(): + return yaml.safe_load(juju_status())['machines'] + + +# DEPRECATED: client-side only +def wait_for_machine(num_machines=1, timeout=300): + """Wait `timeout` seconds for `num_machines` machines to come up. + + This wait_for... 
function can be called by other wait_for functions + whose timeouts might be too short in situations where only a bare + Juju setup has been bootstrapped. + + :return: A tuple of (num_machines, time_taken). This is used for + testing. + """ + # You may think this is a hack, and you'd be right. The easiest way + # to tell what environment we're working in (LXC vs EC2) is to check + # the dns-name of the first machine. If it's localhost we're in LXC + # and we can just return here. + if get_machine_data()[0]['dns-name'] == 'localhost': + return 1, 0 + start_time = time.time() + while True: + # Drop the first machine, since it's the Zookeeper and that's + # not a machine that we need to wait for. This will only work + # for EC2 environments, which is why we return early above if + # we're in LXC. + machine_data = get_machine_data() + non_zookeeper_machines = [ + machine_data[key] for key in list(machine_data.keys())[1:]] + if len(non_zookeeper_machines) >= num_machines: + all_machines_running = True + for machine in non_zookeeper_machines: + if machine.get('instance-state') != 'running': + all_machines_running = False + break + if all_machines_running: + break + if time.time() - start_time >= timeout: + raise RuntimeError('timeout waiting for service to start') + time.sleep(SLEEP_AMOUNT) + return num_machines, time.time() - start_time + + +# DEPRECATED: client-side only +def wait_for_unit(service_name, timeout=480): + """Wait `timeout` seconds for a given service name to come up.""" + wait_for_machine(num_machines=1) + start_time = time.time() + while True: + state = unit_info(service_name, 'agent-state') + if 'error' in state or state == 'started': + break + if time.time() - start_time >= timeout: + raise RuntimeError('timeout waiting for service to start') + time.sleep(SLEEP_AMOUNT) + if state != 'started': + raise RuntimeError('unit did not start, agent-state: ' + state) + + +# DEPRECATED: client-side only +def wait_for_relation(service_name, relation_name, timeout=120): + """Wait `timeout` seconds for a given relation to come up.""" + start_time = time.time() + while True: + relation = unit_info(service_name, 'relations').get(relation_name) + if relation is not None and relation['state'] == 'up': + break + if time.time() - start_time >= timeout: + raise RuntimeError('timeout waiting for relation to be up') + time.sleep(SLEEP_AMOUNT) + + +# DEPRECATED: client-side only +def wait_for_page_contents(url, contents, timeout=120, validate=None): + if validate is None: + validate = operator.contains + start_time = time.time() + while True: + try: + stream = urlopen(url) + except (HTTPError, URLError): + pass + else: + page = stream.read() + if validate(page, contents): + return page + if time.time() - start_time >= timeout: + raise RuntimeError('timeout waiting for contents of ' + url) + time.sleep(SLEEP_AMOUNT) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/charmsupport/IMPORT b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/charmsupport/IMPORT new file mode 100644 index 0000000000000000000000000000000000000000..554fddda9f48d6d17abaff942834493b2cca2dbe --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/charmsupport/IMPORT @@ -0,0 +1,14 @@ +Source: lp:charmsupport/trunk + +charmsupport/charmsupport/execd.py -> charm-helpers/charmhelpers/contrib/charmsupport/execd.py +charmsupport/charmsupport/hookenv.py -> charm-helpers/charmhelpers/contrib/charmsupport/hookenv.py 
+charmsupport/charmsupport/host.py -> charm-helpers/charmhelpers/contrib/charmsupport/host.py +charmsupport/charmsupport/nrpe.py -> charm-helpers/charmhelpers/contrib/charmsupport/nrpe.py +charmsupport/charmsupport/volumes.py -> charm-helpers/charmhelpers/contrib/charmsupport/volumes.py + +charmsupport/tests/test_execd.py -> charm-helpers/tests/contrib/charmsupport/test_execd.py +charmsupport/tests/test_hookenv.py -> charm-helpers/tests/contrib/charmsupport/test_hookenv.py +charmsupport/tests/test_host.py -> charm-helpers/tests/contrib/charmsupport/test_host.py +charmsupport/tests/test_nrpe.py -> charm-helpers/tests/contrib/charmsupport/test_nrpe.py + +charmsupport/bin/charmsupport -> charm-helpers/bin/contrib/charmsupport/charmsupport diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/charmsupport/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/charmsupport/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..d7567b863e3a5ad2b7a7f44958b4166e0c3d346b --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/charmsupport/__init__.py @@ -0,0 +1,13 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/charmsupport/nrpe.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/charmsupport/nrpe.py new file mode 100644 index 0000000000000000000000000000000000000000..d775861b0868a174a3f0f4a8d4a42320272a4cb0 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/charmsupport/nrpe.py @@ -0,0 +1,500 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +"""Compatibility with the nrpe-external-master charm""" +# Copyright 2012 Canonical Ltd. +# +# Authors: +# Matthew Wedgwood + +import subprocess +import pwd +import grp +import os +import glob +import shutil +import re +import shlex +import yaml + +from charmhelpers.core.hookenv import ( + config, + hook_name, + local_unit, + log, + relation_get, + relation_ids, + relation_set, + relations_of_type, +) + +from charmhelpers.core.host import service +from charmhelpers.core import host + +# This module adds compatibility with the nrpe-external-master and plain nrpe +# subordinate charms. To use it in your charm: +# +# 1. Update metadata.yaml +# +# provides: +# (...) 
+# nrpe-external-master: +# interface: nrpe-external-master +# scope: container +# +# and/or +# +# provides: +# (...) +# local-monitors: +# interface: local-monitors +# scope: container + +# +# 2. Add the following to config.yaml +# +# nagios_context: +# default: "juju" +# type: string +# description: | +# Used by the nrpe subordinate charms. +# A string that will be prepended to instance name to set the host name +# in nagios. So for instance the hostname would be something like: +# juju-myservice-0 +# If you're running multiple environments with the same services in them +# this allows you to differentiate between them. +# nagios_servicegroups: +# default: "" +# type: string +# description: | +# A comma-separated list of nagios servicegroups. +# If left empty, the nagios_context will be used as the servicegroup +# +# 3. Add custom checks (Nagios plugins) to files/nrpe-external-master +# +# 4. Update your hooks.py with something like this: +# +# from charmsupport.nrpe import NRPE +# (...) +# def update_nrpe_config(): +# nrpe_compat = NRPE() +# nrpe_compat.add_check( +# shortname = "myservice", +# description = "Check MyService", +# check_cmd = "check_http -w 2 -c 10 http://localhost" +# ) +# nrpe_compat.add_check( +# "myservice_other", +# "Check for widget failures", +# check_cmd = "/srv/myapp/scripts/widget_check" +# ) +# nrpe_compat.write() +# +# def config_changed(): +# (...) +# update_nrpe_config() +# +# def nrpe_external_master_relation_changed(): +# update_nrpe_config() +# +# def local_monitors_relation_changed(): +# update_nrpe_config() +# +# 4.a If your charm is a subordinate charm set primary=False +# +# from charmsupport.nrpe import NRPE +# (...) +# def update_nrpe_config(): +# nrpe_compat = NRPE(primary=False) +# +# 5. ln -s hooks.py nrpe-external-master-relation-changed +# ln -s hooks.py local-monitors-relation-changed + + +class CheckException(Exception): + pass + + +class Check(object): + shortname_re = '[A-Za-z0-9-_.@]+$' + service_template = (""" +#--------------------------------------------------- +# This file is Juju managed +#--------------------------------------------------- +define service {{ + use active-service + host_name {nagios_hostname} + service_description {nagios_hostname}[{shortname}] """ + """{description} + check_command check_nrpe!{command} + servicegroups {nagios_servicegroup} +}} +""") + + def __init__(self, shortname, description, check_cmd): + super(Check, self).__init__() + # XXX: could be better to calculate this from the service name + if not re.match(self.shortname_re, shortname): + raise CheckException("shortname must match {}".format( + Check.shortname_re)) + self.shortname = shortname + self.command = "check_{}".format(shortname) + # Note: a set of invalid characters is defined by the + # Nagios server config + # The default is: illegal_object_name_chars=`~!$%^&*"|'<>?,()= + self.description = description + self.check_cmd = self._locate_cmd(check_cmd) + + def _get_check_filename(self): + return os.path.join(NRPE.nrpe_confdir, '{}.cfg'.format(self.command)) + + def _get_service_filename(self, hostname): + return os.path.join(NRPE.nagios_exportdir, + 'service__{}_{}.cfg'.format(hostname, self.command)) + + def _locate_cmd(self, check_cmd): + search_path = ( + '/usr/lib/nagios/plugins', + '/usr/local/lib/nagios/plugins', + ) + parts = shlex.split(check_cmd) + for path in search_path: + if os.path.exists(os.path.join(path, parts[0])): + command = os.path.join(path, parts[0]) + if len(parts) > 1: + command += " " + " ".join(parts[1:]) + return 
command + log('Check command not found: {}'.format(parts[0])) + return '' + + def _remove_service_files(self): + if not os.path.exists(NRPE.nagios_exportdir): + return + for f in os.listdir(NRPE.nagios_exportdir): + if f.endswith('_{}.cfg'.format(self.command)): + os.remove(os.path.join(NRPE.nagios_exportdir, f)) + + def remove(self, hostname): + nrpe_check_file = self._get_check_filename() + if os.path.exists(nrpe_check_file): + os.remove(nrpe_check_file) + self._remove_service_files() + + def write(self, nagios_context, hostname, nagios_servicegroups): + nrpe_check_file = self._get_check_filename() + with open(nrpe_check_file, 'w') as nrpe_check_config: + nrpe_check_config.write("# check {}\n".format(self.shortname)) + if nagios_servicegroups: + nrpe_check_config.write( + "# The following header was added automatically by juju\n") + nrpe_check_config.write( + "# Modifying it will affect nagios monitoring and alerting\n") + nrpe_check_config.write( + "# servicegroups: {}\n".format(nagios_servicegroups)) + nrpe_check_config.write("command[{}]={}\n".format( + self.command, self.check_cmd)) + + if not os.path.exists(NRPE.nagios_exportdir): + log('Not writing service config as {} is not accessible'.format( + NRPE.nagios_exportdir)) + else: + self.write_service_config(nagios_context, hostname, + nagios_servicegroups) + + def write_service_config(self, nagios_context, hostname, + nagios_servicegroups): + self._remove_service_files() + + templ_vars = { + 'nagios_hostname': hostname, + 'nagios_servicegroup': nagios_servicegroups, + 'description': self.description, + 'shortname': self.shortname, + 'command': self.command, + } + nrpe_service_text = Check.service_template.format(**templ_vars) + nrpe_service_file = self._get_service_filename(hostname) + with open(nrpe_service_file, 'w') as nrpe_service_config: + nrpe_service_config.write(str(nrpe_service_text)) + + def run(self): + subprocess.call(self.check_cmd) + + +class NRPE(object): + nagios_logdir = '/var/log/nagios' + nagios_exportdir = '/var/lib/nagios/export' + nrpe_confdir = '/etc/nagios/nrpe.d' + homedir = '/var/lib/nagios' # home dir provided by nagios-nrpe-server + + def __init__(self, hostname=None, primary=True): + super(NRPE, self).__init__() + self.config = config() + self.primary = primary + self.nagios_context = self.config['nagios_context'] + if 'nagios_servicegroups' in self.config and self.config['nagios_servicegroups']: + self.nagios_servicegroups = self.config['nagios_servicegroups'] + else: + self.nagios_servicegroups = self.nagios_context + self.unit_name = local_unit().replace('/', '-') + if hostname: + self.hostname = hostname + else: + nagios_hostname = get_nagios_hostname() + if nagios_hostname: + self.hostname = nagios_hostname + else: + self.hostname = "{}-{}".format(self.nagios_context, self.unit_name) + self.checks = [] + # Iff in an nrpe-external-master relation hook, set primary status + relation = relation_ids('nrpe-external-master') + if relation: + log("Setting charm primary status {}".format(primary)) + for rid in relation: + relation_set(relation_id=rid, relation_settings={'primary': self.primary}) + self.remove_check_queue = set() + + def add_check(self, *args, **kwargs): + shortname = None + if kwargs.get('shortname') is None: + if len(args) > 0: + shortname = args[0] + else: + shortname = kwargs['shortname'] + + self.checks.append(Check(*args, **kwargs)) + try: + self.remove_check_queue.remove(shortname) + except KeyError: + pass + + def remove_check(self, *args, **kwargs): + if kwargs.get('shortname') is 
None: + raise ValueError('shortname of check must be specified') + + # Use sensible defaults if they're not specified - these are not + # actually used during removal, but they're required for constructing + # the Check object; check_disk is chosen because it's part of the + # nagios-plugins-basic package. + if kwargs.get('check_cmd') is None: + kwargs['check_cmd'] = 'check_disk' + if kwargs.get('description') is None: + kwargs['description'] = '' + + check = Check(*args, **kwargs) + check.remove(self.hostname) + self.remove_check_queue.add(kwargs['shortname']) + + def write(self): + try: + nagios_uid = pwd.getpwnam('nagios').pw_uid + nagios_gid = grp.getgrnam('nagios').gr_gid + except Exception: + log("Nagios user not set up, nrpe checks not updated") + return + + if not os.path.exists(NRPE.nagios_logdir): + os.mkdir(NRPE.nagios_logdir) + os.chown(NRPE.nagios_logdir, nagios_uid, nagios_gid) + + nrpe_monitors = {} + monitors = {"monitors": {"remote": {"nrpe": nrpe_monitors}}} + for nrpecheck in self.checks: + nrpecheck.write(self.nagios_context, self.hostname, + self.nagios_servicegroups) + nrpe_monitors[nrpecheck.shortname] = { + "command": nrpecheck.command, + } + + # update-status hooks are configured to firing every 5 minutes by + # default. When nagios-nrpe-server is restarted, the nagios server + # reports checks failing causing unnecessary alerts. Let's not restart + # on update-status hooks. + if not hook_name() == 'update-status': + service('restart', 'nagios-nrpe-server') + + monitor_ids = relation_ids("local-monitors") + \ + relation_ids("nrpe-external-master") + for rid in monitor_ids: + reldata = relation_get(unit=local_unit(), rid=rid) + if 'monitors' in reldata: + # update the existing set of monitors with the new data + old_monitors = yaml.safe_load(reldata['monitors']) + old_nrpe_monitors = old_monitors['monitors']['remote']['nrpe'] + # remove keys that are in the remove_check_queue + old_nrpe_monitors = {k: v for k, v in old_nrpe_monitors.items() + if k not in self.remove_check_queue} + # update/add nrpe_monitors + old_nrpe_monitors.update(nrpe_monitors) + old_monitors['monitors']['remote']['nrpe'] = old_nrpe_monitors + # write back to the relation + relation_set(relation_id=rid, monitors=yaml.dump(old_monitors)) + else: + # write a brand new set of monitors, as no existing ones. 
+ relation_set(relation_id=rid, monitors=yaml.dump(monitors)) + + self.remove_check_queue.clear() + + +def get_nagios_hostcontext(relation_name='nrpe-external-master'): + """ + Query relation with nrpe subordinate, return the nagios_host_context + + :param str relation_name: Name of relation nrpe sub joined to + """ + for rel in relations_of_type(relation_name): + if 'nagios_host_context' in rel: + return rel['nagios_host_context'] + + +def get_nagios_hostname(relation_name='nrpe-external-master'): + """ + Query relation with nrpe subordinate, return the nagios_hostname + + :param str relation_name: Name of relation nrpe sub joined to + """ + for rel in relations_of_type(relation_name): + if 'nagios_hostname' in rel: + return rel['nagios_hostname'] + + +def get_nagios_unit_name(relation_name='nrpe-external-master'): + """ + Return the nagios unit name prepended with host_context if needed + + :param str relation_name: Name of relation nrpe sub joined to + """ + host_context = get_nagios_hostcontext(relation_name) + if host_context: + unit = "%s:%s" % (host_context, local_unit()) + else: + unit = local_unit() + return unit + + +def add_init_service_checks(nrpe, services, unit_name, immediate_check=True): + """ + Add checks for each service in list + + :param NRPE nrpe: NRPE object to add check to + :param list services: List of services to check + :param str unit_name: Unit name to use in check description + :param bool immediate_check: For sysv init, run the service check immediately + """ + for svc in services: + # Don't add a check for these services from neutron-gateway + if svc in ['ext-port', 'os-charm-phy-nic-mtu']: + next + + upstart_init = '/etc/init/%s.conf' % svc + sysv_init = '/etc/init.d/%s' % svc + + if host.init_is_systemd(): + nrpe.add_check( + shortname=svc, + description='process check {%s}' % unit_name, + check_cmd='check_systemd.py %s' % svc + ) + elif os.path.exists(upstart_init): + nrpe.add_check( + shortname=svc, + description='process check {%s}' % unit_name, + check_cmd='check_upstart_job %s' % svc + ) + elif os.path.exists(sysv_init): + cronpath = '/etc/cron.d/nagios-service-check-%s' % svc + checkpath = '%s/service-check-%s.txt' % (nrpe.homedir, svc) + croncmd = ( + '/usr/local/lib/nagios/plugins/check_exit_status.pl ' + '-e -s /etc/init.d/%s status' % svc + ) + cron_file = '*/5 * * * * root %s > %s\n' % (croncmd, checkpath) + f = open(cronpath, 'w') + f.write(cron_file) + f.close() + nrpe.add_check( + shortname=svc, + description='service check {%s}' % unit_name, + check_cmd='check_status_file.py -f %s' % checkpath, + ) + # if /var/lib/nagios doesn't exist open(checkpath, 'w') will fail + # (LP: #1670223). 
+ if immediate_check and os.path.isdir(nrpe.homedir): + f = open(checkpath, 'w') + subprocess.call( + croncmd.split(), + stdout=f, + stderr=subprocess.STDOUT + ) + f.close() + os.chmod(checkpath, 0o644) + + +def copy_nrpe_checks(nrpe_files_dir=None): + """ + Copy the nrpe checks into place + + """ + NAGIOS_PLUGINS = '/usr/local/lib/nagios/plugins' + if nrpe_files_dir is None: + # determine if "charmhelpers" is in CHARMDIR or CHARMDIR/hooks + for segment in ['.', 'hooks']: + nrpe_files_dir = os.path.abspath(os.path.join( + os.getenv('CHARM_DIR'), + segment, + 'charmhelpers', + 'contrib', + 'openstack', + 'files')) + if os.path.isdir(nrpe_files_dir): + break + else: + raise RuntimeError("Couldn't find charmhelpers directory") + if not os.path.exists(NAGIOS_PLUGINS): + os.makedirs(NAGIOS_PLUGINS) + for fname in glob.glob(os.path.join(nrpe_files_dir, "check_*")): + if os.path.isfile(fname): + shutil.copy2(fname, + os.path.join(NAGIOS_PLUGINS, os.path.basename(fname))) + + +def add_haproxy_checks(nrpe, unit_name): + """ + Add checks for each service in list + + :param NRPE nrpe: NRPE object to add check to + :param str unit_name: Unit name to use in check description + """ + nrpe.add_check( + shortname='haproxy_servers', + description='Check HAProxy {%s}' % unit_name, + check_cmd='check_haproxy.sh') + nrpe.add_check( + shortname='haproxy_queue', + description='Check HAProxy queue depth {%s}' % unit_name, + check_cmd='check_haproxy_queue_depth.sh') + + +def remove_deprecated_check(nrpe, deprecated_services): + """ + Remove checks fro deprecated services in list + + :param nrpe: NRPE object to remove check from + :type nrpe: NRPE + :param deprecated_services: List of deprecated services that are removed + :type deprecated_services: list + """ + for dep_svc in deprecated_services: + log('Deprecated service: {}'.format(dep_svc)) + nrpe.remove_check(shortname=dep_svc) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/charmsupport/volumes.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/charmsupport/volumes.py new file mode 100644 index 0000000000000000000000000000000000000000..7ea43f0888cd92ac4344f908aa7c9d0afe7568ed --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/charmsupport/volumes.py @@ -0,0 +1,173 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +''' +Functions for managing volumes in juju units. One volume is supported per unit. +Subordinates may have their own storage, provided it is on its own partition. + +Configuration stanzas:: + + volume-ephemeral: + type: boolean + default: true + description: > + If false, a volume is mounted as sepecified in "volume-map" + If true, ephemeral storage will be used, meaning that log data + will only exist as long as the machine. YOU HAVE BEEN WARNED. 
+    volume-map:
+        type: string
+        default: {}
+        description: >
+            YAML map of units to device names, e.g:
+            "{ rsyslog/0: /dev/vdb, rsyslog/1: /dev/vdb }"
+            Service units will raise a configure-error if volume-ephemeral
+            is 'false' and no volume-map value is set. Use 'juju set' to set a
+            value and 'juju resolved' to complete configuration.
+
+Usage::
+
+    from charmsupport.volumes import configure_volume, VolumeConfigurationError
+    from charmsupport.hookenv import log, ERROR
+    def pre_mount_hook():
+        stop_service('myservice')
+    def post_mount_hook():
+        start_service('myservice')
+
+    if __name__ == '__main__':
+        try:
+            configure_volume(before_change=pre_mount_hook,
+                             after_change=post_mount_hook)
+        except VolumeConfigurationError:
+            log('Storage could not be configured', ERROR)
+
+'''
+
+# XXX: Known limitations
+# - fstab is neither consulted nor updated
+
+import os
+from charmhelpers.core import hookenv
+from charmhelpers.core import host
+import yaml
+
+
+MOUNT_BASE = '/srv/juju/volumes'
+
+
+class VolumeConfigurationError(Exception):
+    '''Volume configuration data is missing or invalid'''
+    pass
+
+
+def get_config():
+    '''Gather and sanity-check volume configuration data'''
+    volume_config = {}
+    config = hookenv.config()
+
+    errors = False
+
+    if config.get('volume-ephemeral') in (True, 'True', 'true', 'Yes', 'yes'):
+        volume_config['ephemeral'] = True
+    else:
+        volume_config['ephemeral'] = False
+
+    # Default so that volume_map is always bound, even if parsing fails
+    volume_map = {}
+    try:
+        volume_map = yaml.safe_load(config.get('volume-map', '{}'))
+    except yaml.YAMLError as e:
+        hookenv.log("Error parsing YAML volume-map: {}".format(e),
+                    hookenv.ERROR)
+        errors = True
+    if volume_map is None:
+        # probably an empty string
+        volume_map = {}
+    elif not isinstance(volume_map, dict):
+        hookenv.log("Volume-map should be a dictionary, not {}".format(
+            type(volume_map)), hookenv.ERROR)
+        errors = True
+
+    volume_config['device'] = volume_map.get(os.environ['JUJU_UNIT_NAME'])
+    if volume_config['device'] and volume_config['ephemeral']:
+        # asked for ephemeral storage but also defined a volume ID
+        hookenv.log('A volume is defined for this unit, but ephemeral '
+                    'storage was requested', hookenv.ERROR)
+        errors = True
+    elif not volume_config['device'] and not volume_config['ephemeral']:
+        # asked for persistent storage but did not define volume ID
+        hookenv.log('Persistent storage was requested, but there is no volume '
+                    'defined for this unit.', hookenv.ERROR)
+        errors = True
+
+    unit_mount_name = hookenv.local_unit().replace('/', '-')
+    volume_config['mountpoint'] = os.path.join(MOUNT_BASE, unit_mount_name)
+
+    if errors:
+        return None
+    return volume_config
+
+
+def mount_volume(config):
+    if os.path.exists(config['mountpoint']):
+        if not os.path.isdir(config['mountpoint']):
+            hookenv.log('Not a directory: {}'.format(config['mountpoint']))
+            raise VolumeConfigurationError()
+    else:
+        host.mkdir(config['mountpoint'])
+    if os.path.ismount(config['mountpoint']):
+        unmount_volume(config)
+    if not host.mount(config['device'], config['mountpoint'], persist=True):
+        raise VolumeConfigurationError()
+
+
+def unmount_volume(config):
+    if os.path.ismount(config['mountpoint']):
+        if not host.umount(config['mountpoint'], persist=True):
+            raise VolumeConfigurationError()
+
+
+def managed_mounts():
+    '''List of all mounted managed volumes'''
+    return filter(lambda mount: mount[0].startswith(MOUNT_BASE), host.mounts())
+
+
+def configure_volume(before_change=lambda: None, after_change=lambda: None):
+    '''Set up storage (or don't) according to the charm's volume configuration.
+ Returns the mount point or "ephemeral". before_change and after_change + are optional functions to be called if the volume configuration changes. + ''' + + config = get_config() + if not config: + hookenv.log('Failed to read volume configuration', hookenv.CRITICAL) + raise VolumeConfigurationError() + + if config['ephemeral']: + if os.path.ismount(config['mountpoint']): + before_change() + unmount_volume(config) + after_change() + return 'ephemeral' + else: + # persistent storage + if os.path.ismount(config['mountpoint']): + mounts = dict(managed_mounts()) + if mounts.get(config['mountpoint']) != config['device']: + before_change() + unmount_volume(config) + mount_volume(config) + after_change() + else: + before_change() + mount_volume(config) + after_change() + return config['mountpoint'] diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/database/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/database/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..64fac9def6101c1de83cd5dafe7fdb5213d6a955 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/database/__init__.py @@ -0,0 +1,11 @@ +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/database/mysql.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/database/mysql.py new file mode 100644 index 0000000000000000000000000000000000000000..c9ecce5f793eedeb963557d109deaf5e1271e65f --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/database/mysql.py @@ -0,0 +1,821 @@ +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
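+
+# Example (hypothetical usage sketch, not part of the library; the file
+# paths below are illustrative only):
+#
+#     helper = MySQLHelper(rpasswdf_template='/var/lib/charm/mysql.passwd',
+#                          upasswdf_template='/var/lib/charm/mysql-{}.passwd')
+#     password = helper.configure_db('10.0.0.5', 'mydb', 'myuser')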
+
+"""Helper for working with a MySQL database"""
+import collections
+import copy
+import json
+import re
+import sys
+import platform
+import os
+import glob
+import six
+
+from charmhelpers.core.host import (
+    CompareHostReleases,
+    lsb_release,
+    mkdir,
+    pwgen,
+    write_file
+)
+from charmhelpers.core.hookenv import (
+    config as config_get,
+    relation_get,
+    related_units,
+    unit_get,
+    log,
+    DEBUG,
+    INFO,
+    WARNING,
+    leader_get,
+    leader_set,
+    is_leader,
+)
+from charmhelpers.fetch import (
+    apt_install,
+    apt_update,
+    filter_installed_packages,
+)
+from charmhelpers.contrib.network.ip import get_host_ip
+
+try:
+    import MySQLdb
+except ImportError:
+    apt_update(fatal=True)
+    if six.PY2:
+        apt_install(filter_installed_packages(['python-mysqldb']), fatal=True)
+    else:
+        apt_install(filter_installed_packages(['python3-mysqldb']), fatal=True)
+    import MySQLdb
+
+
+class MySQLSetPasswordError(Exception):
+    pass
+
+
+class MySQLHelper(object):
+
+    def __init__(self, rpasswdf_template, upasswdf_template, host='localhost',
+                 migrate_passwd_to_leader_storage=True,
+                 delete_ondisk_passwd_file=True, user="root", password=None, port=None):
+        self.user = user
+        self.host = host
+        self.password = password
+        self.port = port
+
+        # Password file path templates
+        self.root_passwd_file_template = rpasswdf_template
+        self.user_passwd_file_template = upasswdf_template
+
+        self.migrate_passwd_to_leader_storage = migrate_passwd_to_leader_storage
+        # If we migrate we have the option to delete local copy of root passwd
+        self.delete_ondisk_passwd_file = delete_ondisk_passwd_file
+        self.connection = None
+
+    def connect(self, user='root', password=None, host=None, port=None):
+        _connection_info = {
+            "user": user or self.user,
+            "passwd": password or self.password,
+            "host": host or self.host
+        }
+        # port cannot be None but we also do not want to specify it unless it
+        # has been explicitly set.
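+        # e.g. (illustrative) connect(port=3306) adds "port" to the kwargs
+        # passed to MySQLdb.connect(), while a bare connect() omits it and
+        # the driver default applies.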
+        port = port or self.port
+        if port is not None:
+            _connection_info["port"] = port
+
+        log("Opening db connection for %s@%s" % (_connection_info["user"],
+                                                 _connection_info["host"]),
+            level=DEBUG)
+        self.connection = MySQLdb.connect(**_connection_info)
+
+    def database_exists(self, db_name):
+        cursor = self.connection.cursor()
+        try:
+            cursor.execute("SHOW DATABASES")
+            databases = [i[0] for i in cursor.fetchall()]
+        finally:
+            cursor.close()
+
+        return db_name in databases
+
+    def create_database(self, db_name):
+        cursor = self.connection.cursor()
+        try:
+            cursor.execute("CREATE DATABASE `{}` CHARACTER SET UTF8"
+                           .format(db_name))
+        finally:
+            cursor.close()
+
+    def grant_exists(self, db_name, db_user, remote_ip):
+        cursor = self.connection.cursor()
+        priv_string = "GRANT ALL PRIVILEGES ON `{}`.* " \
+                      "TO '{}'@'{}'".format(db_name, db_user, remote_ip)
+        try:
+            cursor.execute("SHOW GRANTS for '{}'@'{}'".format(db_user,
+                                                              remote_ip))
+            grants = [i[0] for i in cursor.fetchall()]
+        except MySQLdb.OperationalError:
+            return False
+        finally:
+            cursor.close()
+
+        # TODO: review for different grants
+        return priv_string in grants
+
+    def create_grant(self, db_name, db_user, remote_ip, password):
+        cursor = self.connection.cursor()
+        try:
+            # TODO: review for different grants
+            cursor.execute("GRANT ALL PRIVILEGES ON `{}`.* TO '{}'@'{}' "
+                           "IDENTIFIED BY '{}'".format(db_name,
+                                                       db_user,
+                                                       remote_ip,
+                                                       password))
+        finally:
+            cursor.close()
+
+    def create_admin_grant(self, db_user, remote_ip, password):
+        cursor = self.connection.cursor()
+        try:
+            cursor.execute("GRANT ALL PRIVILEGES ON *.* TO '{}'@'{}' "
+                           "IDENTIFIED BY '{}'".format(db_user,
+                                                       remote_ip,
+                                                       password))
+        finally:
+            cursor.close()
+
+    def cleanup_grant(self, db_user, remote_ip):
+        cursor = self.connection.cursor()
+        try:
+            cursor.execute("DELETE FROM mysql.user WHERE user='{}' "
+                           "AND HOST='{}'".format(db_user,
+                                                  remote_ip))
+        finally:
+            cursor.close()
+
+    def flush_priviledges(self):
+        cursor = self.connection.cursor()
+        try:
+            cursor.execute("FLUSH PRIVILEGES")
+        finally:
+            cursor.close()
+
+    def execute(self, sql):
+        """Execute arbitrary SQL against the database."""
+        cursor = self.connection.cursor()
+        try:
+            cursor.execute(sql)
+        finally:
+            cursor.close()
+
+    def select(self, sql):
+        """
+        Execute arbitrary SQL select query against the database
+        and return the results.
+
+        :param sql: SQL select query to execute
+        :type sql: string
+        :returns: SQL select query result
+        :rtype: list of lists
+        :raises: MySQLdb.Error
+        """
+        cursor = self.connection.cursor()
+        try:
+            cursor.execute(sql)
+            results = [list(i) for i in cursor.fetchall()]
+        finally:
+            cursor.close()
+        return results
+
+    def migrate_passwords_to_leader_storage(self, excludes=None):
+        """Migrate any passwords stored on disk to leader storage."""
+        if not is_leader():
+            log("Skipping password migration as not the lead unit",
+                level=DEBUG)
+            return
+        dirname = os.path.dirname(self.root_passwd_file_template)
+        path = os.path.join(dirname, '*.passwd')
+        for f in glob.glob(path):
+            if excludes and f in excludes:
+                log("Excluding %s from leader storage migration" % (f),
+                    level=DEBUG)
+                continue
+
+            key = os.path.basename(f)
+            with open(f, 'r') as passwd:
+                _value = passwd.read().strip()
+
+            try:
+                leader_set(settings={key: _value})
+
+                if self.delete_ondisk_passwd_file:
+                    os.unlink(f)
+            except ValueError:
+                # NOTE cluster relation not yet ready - skip for now
+                pass
+
+    def get_mysql_password_on_disk(self, username=None, password=None):
+        """Retrieve, generate or store a mysql password for the provided
+        username on disk."""
+        if username:
+            template = self.user_passwd_file_template
+            passwd_file = template.format(username)
+        else:
+            passwd_file = self.root_passwd_file_template
+
+        _password = None
+        if os.path.exists(passwd_file):
+            log("Using existing password file '%s'" % passwd_file, level=DEBUG)
+            with open(passwd_file, 'r') as passwd:
+                _password = passwd.read().strip()
+        else:
+            log("Generating new password file '%s'" % passwd_file, level=DEBUG)
+            if not os.path.isdir(os.path.dirname(passwd_file)):
+                # NOTE: need to ensure this is not mysql root dir (which needs
+                # to be mysql readable)
+                mkdir(os.path.dirname(passwd_file), owner='root', group='root',
+                      perms=0o770)
+                # Force permissions - for some reason the chmod in makedirs
+                # fails
+                os.chmod(os.path.dirname(passwd_file), 0o770)
+
+            _password = password or pwgen(length=32)
+            write_file(passwd_file, _password, owner='root', group='root',
+                       perms=0o660)
+
+        return _password
+
+    def passwd_keys(self, username):
+        """Generator to return keys used to store passwords in peer store.
+
+        NOTE: we support both legacy and new format to support mysql
+        charm prior to refactor. This is necessary to avoid LP 1451890.
+        """
+        keys = []
+        if username == 'mysql':
+            log("Bad username '%s'" % (username), level=WARNING)
+
+        if username:
+            # IMPORTANT: *newer* format must be returned first
+            keys.append('mysql-%s.passwd' % (username))
+            keys.append('%s.passwd' % (username))
+        else:
+            keys.append('mysql.passwd')
+
+        for key in keys:
+            yield key
+
+    def get_mysql_password(self, username=None, password=None):
+        """Retrieve, generate or store a mysql password for the provided
+        username using the peer relation cluster."""
+        excludes = []
+
+        # First check peer relation.
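+        # Illustration (assumed flow): for username 'nova' this checks the
+        # leader storage keys 'mysql-nova.passwd' and then 'nova.passwd',
+        # using the first value found.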
+ try: + for key in self.passwd_keys(username): + _password = leader_get(key) + if _password: + break + + # If root password available don't update peer relation from local + if _password and not username: + excludes.append(self.root_passwd_file_template) + + except ValueError: + # cluster relation is not yet started; use on-disk + _password = None + + # If none available, generate new one + if not _password: + _password = self.get_mysql_password_on_disk(username, password) + + # Put on wire if required + if self.migrate_passwd_to_leader_storage: + self.migrate_passwords_to_leader_storage(excludes=excludes) + + return _password + + def get_mysql_root_password(self, password=None): + """Retrieve or generate mysql root password for service units.""" + return self.get_mysql_password(username=None, password=password) + + def set_mysql_password(self, username, password, current_password=None): + """Update a mysql password for the provided username changing the + leader settings + + To update root's password pass `None` in the username + + :param username: Username to change password of + :type username: str + :param password: New password for user. + :type password: str + :param current_password: Existing password for user. + :type current_password: str + """ + + if username is None: + username = 'root' + + # get root password via leader-get, it may be that in the past (when + # changes to root-password were not supported) the user changed the + # password, so leader-get is more reliable source than + # config.previous('root-password'). + rel_username = None if username == 'root' else username + if not current_password: + current_password = self.get_mysql_password(rel_username) + + # password that needs to be set + new_passwd = password + + # update password for all users (e.g. root@localhost, root@::1, etc) + try: + self.connect(user=username, password=current_password) + cursor = self.connection.cursor() + except MySQLdb.OperationalError as ex: + raise MySQLSetPasswordError(('Cannot connect using password in ' + 'leader settings (%s)') % ex, ex) + + try: + # NOTE(freyes): Due to skip-name-resolve root@$HOSTNAME account + # fails when using SET PASSWORD so using UPDATE against the + # mysql.user table is needed, but changes to this table are not + # replicated across the cluster, so this update needs to run in + # all the nodes. 
More info at + # http://galeracluster.com/documentation-webpages/userchanges.html + release = CompareHostReleases(lsb_release()['DISTRIB_CODENAME']) + if release < 'bionic': + SQL_UPDATE_PASSWD = ("UPDATE mysql.user SET password = " + "PASSWORD( %s ) WHERE user = %s;") + else: + # PXC 5.7 (introduced in Bionic) uses authentication_string + SQL_UPDATE_PASSWD = ("UPDATE mysql.user SET " + "authentication_string = " + "PASSWORD( %s ) WHERE user = %s;") + cursor.execute(SQL_UPDATE_PASSWD, (new_passwd, username)) + cursor.execute('FLUSH PRIVILEGES;') + self.connection.commit() + except MySQLdb.OperationalError as ex: + raise MySQLSetPasswordError('Cannot update password: %s' % str(ex), + ex) + finally: + cursor.close() + + # check the password was changed + try: + self.connect(user=username, password=new_passwd) + self.execute('select 1;') + except MySQLdb.OperationalError as ex: + raise MySQLSetPasswordError(('Cannot connect using new password: ' + '%s') % str(ex), ex) + + if not is_leader(): + log('Only the leader can set a new password in the relation', + level=DEBUG) + return + + for key in self.passwd_keys(rel_username): + _password = leader_get(key) + if _password: + log('Updating password for %s (%s)' % (key, rel_username), + level=DEBUG) + leader_set(settings={key: new_passwd}) + + def set_mysql_root_password(self, password, current_password=None): + """Update mysql root password changing the leader settings + + :param password: New password for user. + :type password: str + :param current_password: Existing password for user. + :type current_password: str + """ + self.set_mysql_password( + 'root', + password, + current_password=current_password) + + def normalize_address(self, hostname): + """Ensure that address returned is an IP address (i.e. not fqdn)""" + if config_get('prefer-ipv6'): + # TODO: add support for ipv6 dns + return hostname + + if hostname != unit_get('private-address'): + return get_host_ip(hostname, fallback=hostname) + + # Otherwise assume localhost + return '127.0.0.1' + + def get_allowed_units(self, database, username, relation_id=None): + """Get list of units with access grants for database with username. + + This is typically used to provide shared-db relations with a list of + which units have been granted access to the given database. 
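+
+        Example (illustrative; the relation id is hypothetical)::
+
+            allowed = helper.get_allowed_units('nova', 'nova',
+                                               relation_id='shared-db:1')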
+ """ + self.connect(password=self.get_mysql_root_password()) + allowed_units = set() + for unit in related_units(relation_id): + settings = relation_get(rid=relation_id, unit=unit) + # First check for setting with prefix, then without + for attr in ["%s_hostname" % (database), 'hostname']: + hosts = settings.get(attr, None) + if hosts: + break + + if hosts: + # hostname can be json-encoded list of hostnames + try: + hosts = json.loads(hosts) + except ValueError: + hosts = [hosts] + else: + hosts = [settings['private-address']] + + if hosts: + for host in hosts: + host = self.normalize_address(host) + if self.grant_exists(database, username, host): + log("Grant exists for host '%s' on db '%s'" % + (host, database), level=DEBUG) + if unit not in allowed_units: + allowed_units.add(unit) + else: + log("Grant does NOT exist for host '%s' on db '%s'" % + (host, database), level=DEBUG) + else: + log("No hosts found for grant check", level=INFO) + + return allowed_units + + def configure_db(self, hostname, database, username, admin=False): + """Configure access to database for username from hostname.""" + self.connect(password=self.get_mysql_root_password()) + if not self.database_exists(database): + self.create_database(database) + + remote_ip = self.normalize_address(hostname) + password = self.get_mysql_password(username) + if not self.grant_exists(database, username, remote_ip): + if not admin: + self.create_grant(database, username, remote_ip, password) + else: + self.create_admin_grant(username, remote_ip, password) + self.flush_priviledges() + + return password + + +# `_singleton_config_helper` stores the instance of the helper class that is +# being used during a hook invocation. +_singleton_config_helper = None + + +def get_mysql_config_helper(): + global _singleton_config_helper + if _singleton_config_helper is None: + _singleton_config_helper = MySQLConfigHelper() + return _singleton_config_helper + + +class MySQLConfigHelper(object): + """Base configuration helper for MySQL.""" + + # Going for the biggest page size to avoid wasted bytes. 
+    # InnoDB page size is 16MB
+
+    DEFAULT_PAGE_SIZE = 16 * 1024 * 1024
+    DEFAULT_INNODB_BUFFER_FACTOR = 0.50
+    DEFAULT_INNODB_BUFFER_SIZE_MAX = 512 * 1024 * 1024
+
+    # Validation and lookups for InnoDB configuration
+    INNODB_VALID_BUFFERING_VALUES = [
+        'none',
+        'inserts',
+        'deletes',
+        'changes',
+        'purges',
+        'all'
+    ]
+    INNODB_FLUSH_CONFIG_VALUES = {
+        'fast': 2,
+        'safest': 1,
+        'unsafe': 0,
+    }
+
+    def human_to_bytes(self, human):
+        """Convert human readable configuration options to bytes."""
+        num_re = re.compile('^[0-9]+$')
+        if num_re.match(human):
+            # Already a plain number of bytes; return an int so callers can
+            # compare it numerically.
+            return int(human)
+
+        factors = {
+            'K': 1024,
+            'M': 1048576,
+            'G': 1073741824,
+            'T': 1099511627776
+        }
+        modifier = human[-1]
+        if modifier in factors:
+            return int(human[:-1]) * factors[modifier]
+
+        if modifier == '%':
+            total_ram = self.human_to_bytes(self.get_mem_total())
+            if self.is_32bit_system() and total_ram > self.sys_mem_limit():
+                total_ram = self.sys_mem_limit()
+            factor = int(human[:-1]) * 0.01
+            pctram = total_ram * factor
+            return int(pctram - (pctram % self.DEFAULT_PAGE_SIZE))
+
+        raise ValueError("Can only convert K, M, G, or T")
+
+    def is_32bit_system(self):
+        """Determine whether system is 32 or 64 bit."""
+        try:
+            return sys.maxsize < 2 ** 32
+        except OverflowError:
+            return False
+
+    def sys_mem_limit(self):
+        """Determine the default memory limit for the current service unit."""
+        if platform.machine() in ['armv7l']:
+            _mem_limit = self.human_to_bytes('2700M')  # experimentally determined
+        else:
+            # Limit for x86 based 32bit systems
+            _mem_limit = self.human_to_bytes('4G')
+
+        return _mem_limit
+
+    def get_mem_total(self):
+        """Calculate the total memory in the current service unit."""
+        with open('/proc/meminfo') as meminfo_file:
+            for line in meminfo_file:
+                key, mem = line.split(':', 2)
+                if key == 'MemTotal':
+                    mtot, modifier = mem.strip().split(' ')
+                    return '%s%s' % (mtot, modifier[0].upper())
+
+    def get_innodb_flush_log_at_trx_commit(self):
+        """Get value for innodb_flush_log_at_trx_commit.
+
+        Use the innodb-flush-log-at-trx-commit or the tuning-level setting
+        translated by INNODB_FLUSH_CONFIG_VALUES to get the
+        innodb_flush_log_at_trx_commit value.
+
+        :returns: Numeric value for innodb_flush_log_at_trx_commit
+        :rtype: Union[None, int]
+        """
+        _iflatc = config_get('innodb-flush-log-at-trx-commit')
+        _tuning_level = config_get('tuning-level')
+        if _iflatc:
+            return _iflatc
+        elif _tuning_level:
+            return self.INNODB_FLUSH_CONFIG_VALUES.get(_tuning_level, 1)
+
+    def get_innodb_change_buffering(self):
+        """Get value for innodb_change_buffering.
+
+        Use the innodb-change-buffering validated against
+        INNODB_VALID_BUFFERING_VALUES to get the innodb_change_buffering value.
+
+        :returns: String value for innodb_change_buffering.
+        :rtype: Union[None, str]
+        """
+        _icb = config_get('innodb-change-buffering')
+        if _icb and _icb in self.INNODB_VALID_BUFFERING_VALUES:
+            return _icb
+
+    def get_innodb_buffer_pool_size(self):
+        """Get value for innodb_buffer_pool_size.
+
+        Return the number value of innodb-buffer-pool-size or dataset-size. If
+        neither is set, calculate a sane default based on total memory.
+
+        :returns: Numeric value for innodb_buffer_pool_size.
+        :rtype: int
+        """
+        total_memory = self.human_to_bytes(self.get_mem_total())
+
+        dataset_bytes = config_get('dataset-size')
+        innodb_buffer_pool_size = config_get('innodb-buffer-pool-size')
+
+        if innodb_buffer_pool_size:
+            innodb_buffer_pool_size = self.human_to_bytes(
+                innodb_buffer_pool_size)
+        elif dataset_bytes:
+            log("Option 'dataset-size' has been deprecated, please use "
+                "the 'innodb-buffer-pool-size' option instead", level="WARN")
+            innodb_buffer_pool_size = self.human_to_bytes(
+                dataset_bytes)
+        else:
+            # NOTE(jamespage): pick the smallest of 50% of RAM or 512MB
+            #                  to ensure that deployments in containers
+            #                  without constraints don't try to consume
+            #                  silly amounts of memory.
+            innodb_buffer_pool_size = min(
+                int(total_memory * self.DEFAULT_INNODB_BUFFER_FACTOR),
+                self.DEFAULT_INNODB_BUFFER_SIZE_MAX
+            )
+
+        if innodb_buffer_pool_size > total_memory:
+            log("innodb_buffer_pool_size {} is greater than system available "
+                "memory {}".format(
+                    innodb_buffer_pool_size,
+                    total_memory), level='WARN')
+
+        return innodb_buffer_pool_size
+
+
+class PerconaClusterHelper(MySQLConfigHelper):
+    """Percona-cluster specific configuration helper."""
+
+    def parse_config(self):
+        """Parse charm configuration and calculate values for config files."""
+        config = config_get()
+        mysql_config = {}
+        if 'max-connections' in config:
+            mysql_config['max_connections'] = config['max-connections']
+
+        if 'wait-timeout' in config:
+            mysql_config['wait_timeout'] = config['wait-timeout']
+
+        if self.get_innodb_flush_log_at_trx_commit() is not None:
+            mysql_config['innodb_flush_log_at_trx_commit'] = \
+                self.get_innodb_flush_log_at_trx_commit()
+
+        if self.get_innodb_change_buffering() is not None:
+            # use the validated value rather than the raw config setting
+            mysql_config['innodb_change_buffering'] = \
+                self.get_innodb_change_buffering()
+
+        if 'innodb-io-capacity' in config:
+            mysql_config['innodb_io_capacity'] = config['innodb-io-capacity']
+
+        # Set a sane default key_buffer size
+        mysql_config['key_buffer'] = self.human_to_bytes('32M')
+        mysql_config['innodb_buffer_pool_size'] = self.get_innodb_buffer_pool_size()
+        return mysql_config
+
+
+class MySQL8Helper(MySQLHelper):
+
+    def grant_exists(self, db_name, db_user, remote_ip):
+        cursor = self.connection.cursor()
+        priv_string = ("GRANT ALL PRIVILEGES ON {}.* "
+                       "TO {}@{}".format(db_name, db_user, remote_ip))
+        try:
+            cursor.execute("SHOW GRANTS FOR '{}'@'{}'".format(db_user,
+                                                              remote_ip))
+            grants = [i[0] for i in cursor.fetchall()]
+        except MySQLdb.OperationalError:
+            return False
+        finally:
+            cursor.close()
+
+        # Different versions of MySQL use ' or `. Ignore these in the check.
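+        # e.g. (illustrative) "GRANT ALL PRIVILEGES ON `foo`.* TO `bar`@`10.0.0.9`"
+        # is compared as "GRANT ALL PRIVILEGES ON foo.* TO bar@10.0.0.9".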
+ return priv_string in [ + i.replace("'", "").replace("`", "") for i in grants] + + def create_grant(self, db_name, db_user, remote_ip, password): + if self.grant_exists(db_name, db_user, remote_ip): + return + + # Make sure the user exists + # MySQL8 must create the user before the grant + self.create_user(db_user, remote_ip, password) + + cursor = self.connection.cursor() + try: + cursor.execute("GRANT ALL PRIVILEGES ON `{}`.* TO '{}'@'{}'" + .format(db_name, db_user, remote_ip)) + finally: + cursor.close() + + def create_user(self, db_user, remote_ip, password): + + SQL_USER_CREATE = ( + "CREATE USER '{db_user}'@'{remote_ip}' " + "IDENTIFIED BY '{password}'") + + cursor = self.connection.cursor() + try: + cursor.execute(SQL_USER_CREATE.format( + db_user=db_user, + remote_ip=remote_ip, + password=password) + ) + except MySQLdb._exceptions.OperationalError: + log("DB user {} already exists.".format(db_user), + "WARNING") + finally: + cursor.close() + + def create_router_grant(self, db_user, remote_ip, password): + + # Make sure the user exists + # MySQL8 must create the user before the grant + self.create_user(db_user, remote_ip, password) + + # Mysql-Router specific grants + cursor = self.connection.cursor() + try: + cursor.execute("GRANT CREATE USER ON *.* TO '{}'@'{}' WITH GRANT " + "OPTION".format(db_user, remote_ip)) + cursor.execute("GRANT SELECT, INSERT, UPDATE, DELETE, EXECUTE ON " + "mysql_innodb_cluster_metadata.* TO '{}'@'{}'" + .format(db_user, remote_ip)) + cursor.execute("GRANT SELECT ON mysql.user TO '{}'@'{}'" + .format(db_user, remote_ip)) + cursor.execute("GRANT SELECT ON " + "performance_schema.replication_group_members " + "TO '{}'@'{}'".format(db_user, remote_ip)) + cursor.execute("GRANT SELECT ON " + "performance_schema.replication_group_member_stats " + "TO '{}'@'{}'".format(db_user, remote_ip)) + cursor.execute("GRANT SELECT ON " + "performance_schema.global_variables " + "TO '{}'@'{}'".format(db_user, remote_ip)) + finally: + cursor.close() + + def configure_router(self, hostname, username): + + if self.connection is None: + self.connect(password=self.get_mysql_root_password()) + + remote_ip = self.normalize_address(hostname) + password = self.get_mysql_password(username) + self.create_user(username, remote_ip, password) + self.create_router_grant(username, remote_ip, password) + + return password + + +def get_prefix(requested, keys=None): + """Return existing prefix or None. + + :param requested: Request string. i.e. novacell0_username + :type requested: str + :param keys: Keys to determine prefix. Defaults set in function. + :type keys: List of str keys + :returns: String prefix i.e. novacell0 + :rtype: Union[None, str] + """ + if keys is None: + # Shared-DB default keys + keys = ["_database", "_username", "_hostname"] + for key in keys: + if requested.endswith(key): + return requested[:-len(key)] + + +def get_db_data(relation_data, unprefixed): + """Organize database requests into a collections.OrderedDict + + :param relation_data: shared-db relation data + :type relation_data: dict + :param unprefixed: Prefix to use for requests without a prefix. This should + be unique for each side of the relation to avoid + conflicts. 
+    :type unprefixed: str
+    :returns: Ordered dict of databases and users
+    :rtype: collections.OrderedDict
+    """
+    # Deep copy to avoid unintentionally changing relation data
+    settings = copy.deepcopy(relation_data)
+    databases = collections.OrderedDict()
+
+    # Clear non-db related elements
+    if "egress-subnets" in settings.keys():
+        settings.pop("egress-subnets")
+    if "ingress-address" in settings.keys():
+        settings.pop("ingress-address")
+    if "private-address" in settings.keys():
+        settings.pop("private-address")
+
+    singleset = {"database", "username", "hostname"}
+    if singleset.issubset(settings):
+        settings["{}_{}".format(unprefixed, "hostname")] = (
+            settings["hostname"])
+        settings.pop("hostname")
+        settings["{}_{}".format(unprefixed, "database")] = (
+            settings["database"])
+        settings.pop("database")
+        settings["{}_{}".format(unprefixed, "username")] = (
+            settings["username"])
+        settings.pop("username")
+
+    for k, v in settings.items():
+        db = k.split("_")[0]
+        x = "_".join(k.split("_")[1:])
+        if db not in databases:
+            databases[db] = collections.OrderedDict()
+        databases[db][x] = v
+
+    return databases
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hahelpers/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hahelpers/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..d7567b863e3a5ad2b7a7f44958b4166e0c3d346b
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hahelpers/__init__.py
@@ -0,0 +1,13 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hahelpers/apache.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hahelpers/apache.py
new file mode 100644
index 0000000000000000000000000000000000000000..2c1e371e179bb6926f227331f86cb14a38c229a2
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hahelpers/apache.py
@@ -0,0 +1,86 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+#
+# Copyright 2012 Canonical Ltd.
+# +# This file is sourced from lp:openstack-charm-helpers +# +# Authors: +# James Page +# Adam Gandelman +# + +import os + +from charmhelpers.core import host +from charmhelpers.core.hookenv import ( + config as config_get, + relation_get, + relation_ids, + related_units as relation_list, + log, + INFO, +) + + +def get_cert(cn=None): + # TODO: deal with multiple https endpoints via charm config + cert = config_get('ssl_cert') + key = config_get('ssl_key') + if not (cert and key): + log("Inspecting identity-service relations for SSL certificate.", + level=INFO) + cert = key = None + if cn: + ssl_cert_attr = 'ssl_cert_{}'.format(cn) + ssl_key_attr = 'ssl_key_{}'.format(cn) + else: + ssl_cert_attr = 'ssl_cert' + ssl_key_attr = 'ssl_key' + for r_id in relation_ids('identity-service'): + for unit in relation_list(r_id): + if not cert: + cert = relation_get(ssl_cert_attr, + rid=r_id, unit=unit) + if not key: + key = relation_get(ssl_key_attr, + rid=r_id, unit=unit) + return (cert, key) + + +def get_ca_cert(): + ca_cert = config_get('ssl_ca') + if ca_cert is None: + log("Inspecting identity-service relations for CA SSL certificate.", + level=INFO) + for r_id in (relation_ids('identity-service') + + relation_ids('identity-credentials')): + for unit in relation_list(r_id): + if ca_cert is None: + ca_cert = relation_get('ca_cert', + rid=r_id, unit=unit) + return ca_cert + + +def retrieve_ca_cert(cert_file): + cert = None + if os.path.isfile(cert_file): + with open(cert_file, 'rb') as crt: + cert = crt.read() + return cert + + +def install_ca_cert(ca_cert): + host.install_ca_cert(ca_cert, 'keystone_juju_ca_cert') diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hahelpers/cluster.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hahelpers/cluster.py new file mode 100644 index 0000000000000000000000000000000000000000..ba34fba0cafa21d15a6a27946544b2c99fbd3663 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hahelpers/cluster.py @@ -0,0 +1,451 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# +# Copyright 2012 Canonical Ltd. +# +# Authors: +# James Page +# Adam Gandelman +# + +""" +Helpers for clustering and determining "cluster leadership" and other +clustering-related helpers. 
+"""
+
+import functools
+import subprocess
+import os
+import time
+
+from socket import gethostname as get_unit_hostname
+
+import six
+
+from charmhelpers.core.hookenv import (
+    log,
+    relation_ids,
+    related_units as relation_list,
+    relation_get,
+    config as config_get,
+    INFO,
+    DEBUG,
+    WARNING,
+    unit_get,
+    is_leader as juju_is_leader,
+    status_set,
+)
+from charmhelpers.core.host import (
+    modulo_distribution,
+)
+from charmhelpers.core.decorators import (
+    retry_on_exception,
+)
+from charmhelpers.core.strutils import (
+    bool_from_string,
+)
+
+DC_RESOURCE_NAME = 'DC'
+
+
+class HAIncompleteConfig(Exception):
+    pass
+
+
+class HAIncorrectConfig(Exception):
+    pass
+
+
+class CRMResourceNotFound(Exception):
+    pass
+
+
+class CRMDCNotFound(Exception):
+    pass
+
+
+def is_elected_leader(resource):
+    """
+    Returns True if the charm executing this is the elected cluster leader.
+
+    It relies on three mechanisms to determine leadership:
+    1. If juju is sufficiently new and leadership election is supported,
+       the is_leader command will be used.
+    2. If the charm is part of a corosync cluster, call corosync to
+       determine leadership.
+    3. If the charm is not part of a corosync cluster, the leader is
+       determined as being "the alive unit with the lowest unit number". In
+       other words, the oldest surviving unit.
+    """
+    try:
+        return juju_is_leader()
+    except NotImplementedError:
+        log('Juju leadership election feature not enabled'
+            ', using fallback support',
+            level=WARNING)
+
+    if is_clustered():
+        if not is_crm_leader(resource):
+            log('Deferring action to CRM leader.', level=INFO)
+            return False
+    else:
+        peers = peer_units()
+        if peers and not oldest_peer(peers):
+            log('Deferring action to oldest service unit.', level=INFO)
+            return False
+    return True
+
+
+def is_clustered():
+    for r_id in (relation_ids('ha') or []):
+        for unit in (relation_list(r_id) or []):
+            clustered = relation_get('clustered',
+                                     rid=r_id,
+                                     unit=unit)
+            if clustered:
+                return True
+    return False
+
+
+def is_crm_dc():
+    """
+    Determine leadership by querying the pacemaker Designated Controller
+    """
+    cmd = ['crm', 'status']
+    try:
+        status = subprocess.check_output(cmd, stderr=subprocess.STDOUT)
+        if not isinstance(status, six.text_type):
+            status = six.text_type(status, "utf-8")
+    except subprocess.CalledProcessError as ex:
+        raise CRMDCNotFound(str(ex))
+
+    current_dc = ''
+    for line in status.split('\n'):
+        if line.startswith('Current DC'):
+            # Current DC: juju-lytrusty-machine-2 (168108163) - partition with quorum
+            current_dc = line.split(':')[1].split()[0]
+    if current_dc == get_unit_hostname():
+        return True
+    elif current_dc == 'NONE':
+        raise CRMDCNotFound('Current DC: NONE')
+
+    return False
+
+
+@retry_on_exception(5, base_delay=2,
+                    exc_type=(CRMResourceNotFound, CRMDCNotFound))
+def is_crm_leader(resource, retry=False):
+    """
+    Returns True if the charm calling this is the elected corosync leader,
+    as returned by calling the external "crm" command.
+
+    We allow this operation to be retried to avoid the possibility of getting a
+    false negative. See LP #1396246 for more info.
+    """
+    if resource == DC_RESOURCE_NAME:
+        return is_crm_dc()
+    cmd = ['crm', 'resource', 'show', resource]
+    try:
+        status = subprocess.check_output(cmd, stderr=subprocess.STDOUT)
+        if not isinstance(status, six.text_type):
+            status = six.text_type(status, "utf-8")
+    except subprocess.CalledProcessError:
+        status = None
+
+    if status and get_unit_hostname() in status:
+        return True
+
+    if status and "resource %s is NOT running" % (resource) in status:
+        raise CRMResourceNotFound("CRM resource %s not found" % (resource))
+
+    return False
+
+
+def is_leader(resource):
+    log("is_leader is deprecated. Please consider using is_crm_leader "
+        "instead.", level=WARNING)
+    return is_crm_leader(resource)
+
+
+def peer_units(peer_relation="cluster"):
+    peers = []
+    for r_id in (relation_ids(peer_relation) or []):
+        for unit in (relation_list(r_id) or []):
+            peers.append(unit)
+    return peers
+
+
+def peer_ips(peer_relation='cluster', addr_key='private-address'):
+    '''Return a dict of peers and their private-address'''
+    peers = {}
+    for r_id in relation_ids(peer_relation):
+        for unit in relation_list(r_id):
+            peers[unit] = relation_get(addr_key, rid=r_id, unit=unit)
+    return peers
+
+
+def oldest_peer(peers):
+    """Determines who the oldest peer is by comparing unit numbers."""
+    local_unit_no = int(os.getenv('JUJU_UNIT_NAME').split('/')[1])
+    for peer in peers:
+        remote_unit_no = int(peer.split('/')[1])
+        if remote_unit_no < local_unit_no:
+            return False
+    return True
+
+
+def eligible_leader(resource):
+    log("eligible_leader is deprecated. Please consider using "
+        "is_elected_leader instead.", level=WARNING)
+    return is_elected_leader(resource)
+
+
+def https():
+    '''
+    Determines whether enough data has been provided in configuration
+    or relation data to configure HTTPS.
+
+    returns: boolean
+    '''
+    use_https = config_get('use-https')
+    if use_https and bool_from_string(use_https):
+        return True
+    if config_get('ssl_cert') and config_get('ssl_key'):
+        return True
+    for r_id in relation_ids('certificates'):
+        for unit in relation_list(r_id):
+            ca = relation_get('ca', rid=r_id, unit=unit)
+            if ca:
+                return True
+    for r_id in relation_ids('identity-service'):
+        for unit in relation_list(r_id):
+            # TODO - needs fixing for new helper as ssl_cert/key suffixes with CN
+            rel_state = [
+                relation_get('https_keystone', rid=r_id, unit=unit),
+                relation_get('ca_cert', rid=r_id, unit=unit),
+            ]
+            # NOTE: works around (LP: #1203241)
+            if (None not in rel_state) and ('' not in rel_state):
+                return True
+    return False
+
+
+def determine_api_port(public_port, singlenode_mode=False):
+    '''
+    Determine correct API server listening port based on
+    existence of HTTPS reverse proxy and/or haproxy.
+
+    public_port: int: standard public port for given service
+
+    singlenode_mode: boolean: Shuffle ports when only a single unit is present
+
+    returns: int: the correct listening port for the API service
+    '''
+    i = 0
+    if singlenode_mode:
+        i += 1
+    elif len(peer_units()) > 0 or is_clustered():
+        i += 1
+    if https():
+        i += 1
+    return public_port - (i * 10)
+
+
+def determine_apache_port(public_port, singlenode_mode=False):
+    '''
+    Description: Determine correct apache listening port based on public port
+    and state of the cluster.
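+
+    Example (illustrative): public_port=80 on a clustered unit gives 70.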
+
+    public_port: int: standard public port for given service
+
+    singlenode_mode: boolean: Shuffle ports when only a single unit is present
+
+    returns: int: the correct listening port for the HAProxy service
+    '''
+    i = 0
+    if singlenode_mode:
+        i += 1
+    elif len(peer_units()) > 0 or is_clustered():
+        i += 1
+    return public_port - (i * 10)
+
+
+determine_apache_port_single = functools.partial(
+    determine_apache_port, singlenode_mode=True)
+
+
+def get_hacluster_config(exclude_keys=None):
+    '''
+    Obtains all relevant configuration from charm configuration required
+    for initiating a relation to hacluster:
+
+        ha-bindiface, ha-mcastport, vip, os-internal-hostname,
+        os-admin-hostname, os-public-hostname, os-access-hostname
+
+    param: exclude_keys: list of setting key(s) to be excluded.
+    returns: dict: A dict containing settings keyed by setting name.
+    raises: HAIncorrectConfig or HAIncompleteConfig if settings are
+            incorrect or missing.
+    '''
+    settings = ['ha-bindiface', 'ha-mcastport', 'vip', 'os-internal-hostname',
+                'os-admin-hostname', 'os-public-hostname', 'os-access-hostname']
+    conf = {}
+    for setting in settings:
+        if exclude_keys and setting in exclude_keys:
+            continue
+
+        conf[setting] = config_get(setting)
+
+    if not valid_hacluster_config():
+        raise HAIncorrectConfig('Insufficient or incorrect config data to '
+                                'configure hacluster.')
+    return conf
+
+
+def valid_hacluster_config():
+    '''
+    Check that either vip or dns-ha is set. If dns-ha then one of os-*-hostname
+    must be set.
+
+    Note: ha-bindiface and ha-mcastport both have defaults and will always
+    be set. We only care that either vip or dns-ha is set.
+
+    :returns: boolean: valid config returns true.
+    raises: HAIncorrectConfig if settings conflict.
+    raises: HAIncompleteConfig if settings are missing.
+    '''
+    vip = config_get('vip')
+    dns = config_get('dns-ha')
+    if not(bool(vip) ^ bool(dns)):
+        msg = ('HA: Either vip or dns-ha must be set but not both in order to '
+               'use high availability')
+        status_set('blocked', msg)
+        raise HAIncorrectConfig(msg)
+
+    # If dns-ha then one of os-*-hostname must be set
+    if dns:
+        dns_settings = ['os-internal-hostname', 'os-admin-hostname',
+                        'os-public-hostname', 'os-access-hostname']
+        # At this point it is unknown if one or all of the possible
+        # network spaces are in HA. Validate at least one is set which is
+        # the minimum required.
+        for setting in dns_settings:
+            if config_get(setting):
+                log('DNS HA: At least one hostname is set {}: {}'
+                    ''.format(setting, config_get(setting)),
+                    level=DEBUG)
+                return True
+
+        msg = ('DNS HA: At least one os-*-hostname(s) must be set to use '
+               'DNS HA')
+        status_set('blocked', msg)
+        raise HAIncompleteConfig(msg)
+
+    log('VIP HA: VIP is set {}'.format(vip), level=DEBUG)
+    return True
+
+
+def canonical_url(configs, vip_setting='vip'):
+    '''
+    Returns the correct HTTP URL to this host given the state of HTTPS
+    configuration and hacluster.
+
+    :configs    : OSTemplateRenderer: A config templating object to inspect
+                  for a complete https context.
+
+    :vip_setting: str: Setting in charm config that specifies
+                  VIP address.
+    '''
+    scheme = 'http'
+    if 'https' in configs.complete_contexts():
+        scheme = 'https'
+    if is_clustered():
+        addr = config_get(vip_setting)
+    else:
+        addr = unit_get('private-address')
+    return '%s://%s' % (scheme, addr)
+
+
+def distributed_wait(modulo=None, wait=None, operation_name='operation'):
+    ''' Distribute operations by waiting based on modulo_distribution
+
+    If modulo and/or wait are not set, check config_get for those values.
+    If config values are not set, default to modulo=3 and wait=30.
+
+    :param modulo: int The modulo number creates the group distribution
+    :param wait: int The constant time wait value
+    :param operation_name: string Operation name for status message
+                           i.e. 'restart'
+    :side effect: Calls config_get()
+    :side effect: Calls log()
+    :side effect: Calls status_set()
+    :side effect: Calls time.sleep()
+    '''
+    if modulo is None:
+        modulo = config_get('modulo-nodes') or 3
+    if wait is None:
+        wait = config_get('known-wait') or 30
+    if juju_is_leader():
+        # The leader should never wait
+        calculated_wait = 0
+    else:
+        # non_zero_wait=True guarantees the non-leader who gets modulo 0
+        # will still wait
+        calculated_wait = modulo_distribution(modulo=modulo, wait=wait,
+                                              non_zero_wait=True)
+    msg = "Waiting {} seconds for {} ...".format(calculated_wait,
+                                                 operation_name)
+    log(msg, DEBUG)
+    status_set('maintenance', msg)
+    time.sleep(calculated_wait)
+
+
+def get_managed_services_and_ports(services, external_ports,
+                                   external_services=None,
+                                   port_conv_f=determine_apache_port_single):
+    """Get the services and ports managed by this charm.
+
+    Return only the services and corresponding ports that are managed by this
+    charm. This excludes haproxy when there is a relation with hacluster. This
+    is because this charm passes responsibility for stopping and starting
+    haproxy to hacluster.
+
+    Similarly, if a relation with hacluster exists then the ports returned by
+    this method correspond to those managed by the apache server rather than
+    haproxy.
+
+    :param services: List of services.
+    :type services: List[str]
+    :param external_ports: List of ports managed by external services.
+    :type external_ports: List[int]
+    :param external_services: List of services to be removed if ha relation is
+                              present.
+    :type external_services: List[str]
+    :param port_conv_f: Function to apply to ports to calculate the ports
+                        managed by services controlled by this charm.
+    :type port_conv_f: Callable[[int], int]
+    :returns: A tuple containing a list of services first followed by a list of
+              ports.
+    :rtype: Tuple[List[str], List[int]]
+    """
+    if external_services is None:
+        external_services = ['haproxy']
+    if relation_ids('ha'):
+        for svc in external_services:
+            try:
+                services.remove(svc)
+            except ValueError:
+                pass
+        external_ports = [port_conv_f(p) for p in external_ports]
+    return services, external_ports
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/README.hardening.md b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/README.hardening.md
new file mode 100644
index 0000000000000000000000000000000000000000..91280c03e6d7b5d75b356cd94614fc821abc2644
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/README.hardening.md
@@ -0,0 +1,38 @@
+# Juju charm-helpers hardening library
+
+## Description
+
+This library provides multiple implementations of system and application
+hardening that conform to the standards of http://hardening.io/.
+
+Current implementations include:
+
+ * OS
+ * SSH
+ * MySQL
+ * Apache
+
+## Requirements
+
+* Juju Charms
+
+## Usage
+
+1. Synchronise this library into your charm and add the harden() decorator
+   (from contrib.hardening.harden) to any functions or methods you want to use
+   to trigger hardening of your application/system.
+
+2. Add a config option called 'harden' to your charm config.yaml and set it to
+   a space-delimited list of hardening modules you want to run e.g. "os ssh"
+
+3. Override any config defaults (contrib.hardening.defaults) by adding a file
+   called hardening.yaml to your charm root containing the name(s) of the
+   modules whose settings you want to override at root level and then any
+   settings with overrides e.g.
+
+   os:
+     general:
+       desktop_enable: True
+
+4. Now just run your charm as usual and hardening will be applied each time the
+   hook runs.
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..30a3e94359e94011cd247de7ade76667346e7379
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/__init__.py
@@ -0,0 +1,13 @@
+# Copyright 2016 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/apache/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/apache/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..58bebd846bd6fa648cfab6ab1056ad10d8415453
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/apache/__init__.py
@@ -0,0 +1,17 @@
+# Copyright 2016 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from os import path
+
+TEMPLATES_DIR = path.join(path.dirname(__file__), 'templates')
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/apache/checks/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/apache/checks/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..3bc2ebd4760124e23c128868e098aceac610260f
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/apache/checks/__init__.py
@@ -0,0 +1,29 @@
+# Copyright 2016 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +from charmhelpers.core.hookenv import ( + log, + DEBUG, +) +from charmhelpers.contrib.hardening.apache.checks import config + + +def run_apache_checks(): + log("Starting Apache hardening checks.", level=DEBUG) + checks = config.get_audits() + for check in checks: + log("Running '%s' check" % (check.__class__.__name__), level=DEBUG) + check.ensure_compliance() + + log("Apache hardening checks complete.", level=DEBUG) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/apache/checks/config.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/apache/checks/config.py new file mode 100644 index 0000000000000000000000000000000000000000..341da9eee10f73cbe3d7e7e5cf91b57b4d2a89b4 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/apache/checks/config.py @@ -0,0 +1,104 @@ +# Copyright 2016 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import os +import re +import six +import subprocess + + +from charmhelpers.core.hookenv import ( + log, + INFO, +) +from charmhelpers.contrib.hardening.audits.file import ( + FilePermissionAudit, + DirectoryPermissionAudit, + NoReadWriteForOther, + TemplatedFile, + DeletedFile +) +from charmhelpers.contrib.hardening.audits.apache import DisabledModuleAudit +from charmhelpers.contrib.hardening.apache import TEMPLATES_DIR +from charmhelpers.contrib.hardening import utils + + +def get_audits(): + """Get Apache hardening config audits. 
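+
+    When apache2 is not installed this simply returns an empty list, so
+    callers can iterate over the result unconditionally.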
+
+    :returns: list of audits
+    """
+    if subprocess.call(['which', 'apache2'], stdout=subprocess.PIPE) != 0:
+        log("Apache server does not appear to be installed on this node - "
+            "skipping apache hardening", level=INFO)
+        return []
+
+    context = ApacheConfContext()
+    settings = utils.get_settings('apache')
+    audits = [
+        FilePermissionAudit(paths=os.path.join(
+                            settings['common']['apache_dir'], 'apache2.conf'),
+                            user='root', group='root', mode=0o0640),
+
+        TemplatedFile(os.path.join(settings['common']['apache_dir'],
+                                   'mods-available/alias.conf'),
+                      context,
+                      TEMPLATES_DIR,
+                      mode=0o0640,
+                      user='root',
+                      service_actions=[{'service': 'apache2',
+                                        'actions': ['restart']}]),
+
+        TemplatedFile(os.path.join(settings['common']['apache_dir'],
+                                   'conf-enabled/99-hardening.conf'),
+                      context,
+                      TEMPLATES_DIR,
+                      mode=0o0640,
+                      user='root',
+                      service_actions=[{'service': 'apache2',
+                                        'actions': ['restart']}]),
+
+        DirectoryPermissionAudit(settings['common']['apache_dir'],
+                                 user='root',
+                                 group='root',
+                                 mode=0o0750),
+
+        DisabledModuleAudit(settings['hardening']['modules_to_disable']),
+
+        NoReadWriteForOther(settings['common']['apache_dir']),
+
+        DeletedFile(['/var/www/html/index.html'])
+    ]
+
+    return audits
+
+
+class ApacheConfContext(object):
+    """Defines the set of key/value pairs to set in an apache config file.
+
+    This context, when called, will return a dictionary containing the
+    key/value pairs of settings to specify in the
+    /etc/apache2/conf-enabled/99-hardening.conf file.
+    """
+    def __call__(self):
+        settings = utils.get_settings('apache')
+        ctxt = settings['hardening']
+
+        out = subprocess.check_output(['apache2', '-v'])
+        if six.PY3:
+            out = out.decode('utf-8')
+        ctxt['apache_version'] = re.search(r'.+version: Apache/(.+?)\s.+',
+                                           out).group(1)
+        ctxt['apache_icondir'] = '/usr/share/apache2/icons/'
+        return ctxt
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/apache/templates/99-hardening.conf b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/apache/templates/99-hardening.conf
new file mode 100644
index 0000000000000000000000000000000000000000..22b68041d50ff753284bbb4b41a21e8f2bd8c18a
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/apache/templates/99-hardening.conf
@@ -0,0 +1,32 @@
+###############################################################################
+# WARNING: This configuration file is maintained by Juju. Local changes may
+# be overwritten.
+###############################################################################
+
+<Location / >
+    # http://httpd.apache.org/docs/2.4/upgrading.html
+    {% if apache_version > '2.2' -%}
+    Require all granted
+    {% else -%}
+    Order Allow,Deny
+    Deny from all
+    {% endif %}
+</Location>
+
+<Directory />
+    Options -Indexes -FollowSymLinks
+    AllowOverride None
+</Directory>
+
+<Directory /var/www/>
+    Options -Indexes -FollowSymLinks
+    AllowOverride None
+</Directory>
+
+TraceEnable {{ traceenable }}
+ServerTokens {{ servertokens }}
+
+SSLHonorCipherOrder {{ honor_cipher_order }}
+SSLCipherSuite {{ cipher_suite }}
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/apache/templates/alias.conf b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/apache/templates/alias.conf
new file mode 100644
index 0000000000000000000000000000000000000000..e46a58a30dadbb6ccffa02d82593c63a9cbf52df
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/apache/templates/alias.conf
@@ -0,0 +1,31 @@
+###############################################################################
+# WARNING: This configuration file is maintained by Juju. Local changes may
+# be overwritten.
+###############################################################################
+<IfModule alias_module>
+  #
+  # Aliases: Add here as many aliases as you need (with no limit). The format is
+  # Alias fakename realname
+  #
+  # Note that if you include a trailing / on fakename then the server will
+  # require it to be present in the URL. So "/icons" isn't aliased in this
+  # example, only "/icons/". If the fakename is slash-terminated, then the
+  # realname must also be slash terminated, and if the fakename omits the
+  # trailing slash, the realname must also omit it.
+  #
+  # We include the /icons/ alias for FancyIndexed directory listings. If
+  # you do not use FancyIndexing, you may comment this out.
+  #
+  Alias /icons/ "{{ apache_icondir }}/"
+
+  <Directory "{{ apache_icondir }}">
+    Options -Indexes -MultiViews -FollowSymLinks
+    AllowOverride None
+{% if apache_version == '2.4' -%}
+    Require all granted
+{% else -%}
+    Order allow,deny
+    Allow from all
+{% endif %}
+  </Directory>
+</IfModule>
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/audits/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/audits/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..6dd5b05fec4ffcfcdb4378a06dfda4e8ac7e8371
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/audits/__init__.py
@@ -0,0 +1,54 @@
+# Copyright 2016 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+class BaseAudit(object):  # NO-QA
+    """Base class for hardening checks.
+
+    The lifecycle of a hardening check is to first check to see if the system
+    is in compliance for the specified check. If it is not in compliance, the
+    check is expected to bring the system back into compliance when its
+    ensure_compliance method is run.
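+
+    The optional ``unless`` keyword argument vetoes enforcement: if it is
+    truthy, or a callable returning a truthy value, the corrective action is
+    skipped (see _take_action below). Illustrative sketch with a hypothetical
+    subclass::
+
+        class MyAudit(BaseAudit):
+            def ensure_compliance(self):
+                if self._take_action():
+                    pass  # apply the hardening change here
+
+        MyAudit(unless=lambda: True).ensure_compliance()  # takes no action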
+ """ + def __init__(self, *args, **kwargs): + self.unless = kwargs.get('unless', None) + super(BaseAudit, self).__init__() + + def ensure_compliance(self): + """Checks to see if the current hardening check is in compliance or + not. + + If the check that is performed is not in compliance, then an exception + should be raised. + """ + pass + + def _take_action(self): + """Determines whether to perform the action or not. + + Checks whether or not an action should be taken. This is determined by + the truthy value for the unless parameter. If unless is a callback + method, it will be invoked with no parameters in order to determine + whether or not the action should be taken. Otherwise, the truthy value + of the unless attribute will determine if the action should be + performed. + """ + # Do the action if there isn't an unless override. + if self.unless is None: + return True + + # Invoke the callback if there is one. + if hasattr(self.unless, '__call__'): + return not self.unless() + + return not self.unless diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/audits/apache.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/audits/apache.py new file mode 100644 index 0000000000000000000000000000000000000000..04825f5ada0c5b0bb9fc0955baa9a10fa199184d --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/audits/apache.py @@ -0,0 +1,100 @@ +# Copyright 2016 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import re +import subprocess + +import six + +from charmhelpers.core.hookenv import ( + log, + INFO, + ERROR, +) + +from charmhelpers.contrib.hardening.audits import BaseAudit + + +class DisabledModuleAudit(BaseAudit): + """Audits Apache2 modules. + + Determines if the apache2 modules are enabled. If the modules are enabled + then they are removed in the ensure_compliance. + """ + def __init__(self, modules): + if modules is None: + self.modules = [] + elif isinstance(modules, six.string_types): + self.modules = [modules] + else: + self.modules = modules + + def ensure_compliance(self): + """Ensures that the modules are not loaded.""" + if not self.modules: + return + + try: + loaded_modules = self._get_loaded_modules() + non_compliant_modules = [] + for module in self.modules: + if module in loaded_modules: + log("Module '%s' is enabled but should not be." % + (module), level=INFO) + non_compliant_modules.append(module) + + if len(non_compliant_modules) == 0: + return + + for module in non_compliant_modules: + self._disable_module(module) + self._restart_apache() + except subprocess.CalledProcessError as e: + log('Error occurred auditing apache module compliance. ' + 'This may have been already reported. 
' + 'Output is: %s' % e.output, level=ERROR) + + @staticmethod + def _get_loaded_modules(): + """Returns the modules which are enabled in Apache.""" + output = subprocess.check_output(['apache2ctl', '-M']) + if six.PY3: + output = output.decode('utf-8') + modules = [] + for line in output.splitlines(): + # Each line of the enabled module output looks like: + # module_name (static|shared) + # Plus a header line at the top of the output which is stripped + # out by the regex. + matcher = re.search(r'^ (\S*)_module (\S*)', line) + if matcher: + modules.append(matcher.group(1)) + return modules + + @staticmethod + def _disable_module(module): + """Disables the specified module in Apache.""" + try: + subprocess.check_call(['a2dismod', module]) + except subprocess.CalledProcessError as e: + # Note: catch error here to allow the attempt of disabling + # multiple modules in one go rather than failing after the + # first module fails. + log('Error occurred disabling module %s. ' + 'Output is: %s' % (module, e.output), level=ERROR) + + @staticmethod + def _restart_apache(): + """Restarts the apache process""" + subprocess.check_output(['service', 'apache2', 'restart']) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/audits/apt.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/audits/apt.py new file mode 100644 index 0000000000000000000000000000000000000000..cad7bf7376d6f22ce8feccd843b070c399887aa9 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/audits/apt.py @@ -0,0 +1,104 @@ +# Copyright 2016 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
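+
+# Illustrative usage sketch (mirrors host/checks/apt.py elsewhere in this
+# tree): the checks modules build audit lists from these classes and then
+# enforce them:
+#
+#     audits = [AptConfig([{'key': 'APT::Get::AllowUnauthenticated',
+#                           'expected': 'false'}])]
+#     for audit in audits:
+#         audit.ensure_compliance()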
+ +from __future__ import absolute_import # required for external apt import +from six import string_types + +from charmhelpers.fetch import ( + apt_cache, + apt_purge +) +from charmhelpers.core.hookenv import ( + log, + DEBUG, + WARNING, +) +from charmhelpers.contrib.hardening.audits import BaseAudit +from charmhelpers.fetch import ubuntu_apt_pkg as apt_pkg + + +class AptConfig(BaseAudit): + + def __init__(self, config, **kwargs): + self.config = config + + def verify_config(self): + apt_pkg.init() + for cfg in self.config: + value = apt_pkg.config.get(cfg['key'], cfg.get('default', '')) + if value and value != cfg['expected']: + log("APT config '%s' has unexpected value '%s' " + "(expected='%s')" % + (cfg['key'], value, cfg['expected']), level=WARNING) + + def ensure_compliance(self): + self.verify_config() + + +class RestrictedPackages(BaseAudit): + """Class used to audit restricted packages on the system.""" + + def __init__(self, pkgs, **kwargs): + super(RestrictedPackages, self).__init__(**kwargs) + if isinstance(pkgs, string_types) or not hasattr(pkgs, '__iter__'): + self.pkgs = pkgs.split() + else: + self.pkgs = pkgs + + def ensure_compliance(self): + cache = apt_cache() + + for p in self.pkgs: + if p not in cache: + continue + + pkg = cache[p] + if not self.is_virtual_package(pkg): + if not pkg.current_ver: + log("Package '%s' is not installed." % pkg.name, + level=DEBUG) + continue + else: + log("Restricted package '%s' is installed" % pkg.name, + level=WARNING) + self.delete_package(cache, pkg) + else: + log("Checking restricted virtual package '%s' provides" % + pkg.name, level=DEBUG) + self.delete_package(cache, pkg) + + def delete_package(self, cache, pkg): + """Deletes the package from the system. + + Deletes the package form the system, properly handling virtual + packages. + + :param cache: the apt cache + :param pkg: the package to remove + """ + if self.is_virtual_package(pkg): + log("Package '%s' appears to be virtual - purging provides" % + pkg.name, level=DEBUG) + for _p in pkg.provides_list: + self.delete_package(cache, _p[2].parent_pkg) + elif not pkg.current_ver: + log("Package '%s' not installed" % pkg.name, level=DEBUG) + return + else: + log("Purging package '%s'" % pkg.name, level=DEBUG) + apt_purge(pkg.name) + + def is_virtual_package(self, pkg): + return (pkg.get('has_provides', False) and + not pkg.get('has_versions', False)) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/audits/file.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/audits/file.py new file mode 100644 index 0000000000000000000000000000000000000000..257c6351a0b0d244273013faef913f52349f2486 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/audits/file.py @@ -0,0 +1,550 @@ +# Copyright 2016 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
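+
+# Illustrative usage sketch (mirrors the apache and host checks modules in
+# this tree):
+#
+#     ReadOnly(['/usr/local/sbin', '/usr/local/bin']).ensure_compliance()
+#     FilePermissionAudit('/etc/shadow', 'root', 'root',
+#                         0o0600).ensure_compliance()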
+
+import grp
+import os
+import pwd
+import re
+
+from subprocess import (
+    CalledProcessError,
+    check_output,
+    check_call,
+)
+from traceback import format_exc
+from six import string_types
+from stat import (
+    S_ISGID,
+    S_ISUID
+)
+
+from charmhelpers.core.hookenv import (
+    log,
+    DEBUG,
+    INFO,
+    WARNING,
+    ERROR,
+)
+from charmhelpers.core import unitdata
+from charmhelpers.core.host import file_hash
+from charmhelpers.contrib.hardening.audits import BaseAudit
+from charmhelpers.contrib.hardening.templating import (
+    get_template_path,
+    render_and_write,
+)
+from charmhelpers.contrib.hardening import utils
+
+
+class BaseFileAudit(BaseAudit):
+    """Base class for file audits.
+
+    Provides API stubs for the compliance check flow that must be implemented
+    by any class that extends this one.
+    """
+
+    def __init__(self, paths, always_comply=False, *args, **kwargs):
+        """
+        :param paths: string path or list of paths of files to which we want
+                      to apply the compliance criteria.
+        :param always_comply: if True, the compliance criteria are always
+                              applied; otherwise compliance is skipped for
+                              non-existent paths.
+        """
+        super(BaseFileAudit, self).__init__(*args, **kwargs)
+        self.always_comply = always_comply
+        if isinstance(paths, string_types) or not hasattr(paths, '__iter__'):
+            self.paths = [paths]
+        else:
+            self.paths = paths
+
+    def ensure_compliance(self):
+        """Ensure that all registered files comply with the registered
+        criteria.
+        """
+        for p in self.paths:
+            if os.path.exists(p):
+                if self.is_compliant(p):
+                    continue
+
+                log('File %s is not in compliance.' % p, level=INFO)
+            else:
+                if not self.always_comply:
+                    log("Non-existent path '%s' - skipping compliance check"
+                        % (p), level=INFO)
+                    continue
+
+            if self._take_action():
+                log("Applying compliance criteria to '%s'" % (p), level=INFO)
+                self.comply(p)
+
+    def is_compliant(self, path):
+        """Audits the path to see if it is in compliance.
+
+        :param path: the path to the file that should be checked.
+        """
+        raise NotImplementedError
+
+    def comply(self, path):
+        """Enforces the compliance of a path.
+
+        :param path: the path to the file that should be enforced.
+        """
+        raise NotImplementedError
+
+    @classmethod
+    def _get_stat(cls, path):
+        """Returns the POSIX stat information for the specified file path.
+
+        :param path: the path to get the stat information for.
+        :returns: a stat result object for the path (os.stat raises OSError
+                  if the path does not exist).
+        """
+        return os.stat(path)
+
+
+class FilePermissionAudit(BaseFileAudit):
+    """Implements an audit for file permissions and ownership for a user.
+
+    This class ensures that a specific user/group will own the file(s)
+    specified and that the permissions specified are applied properly to
+    the file.
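+
+    If no group is supplied, the primary group of the given user is used
+    (see the ``group`` setter below). Illustrative example (hypothetical
+    path)::
+
+        FilePermissionAudit('/etc/fstab', user='root',
+                            mode=0o644).ensure_compliance()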
+ """ + def __init__(self, paths, user, group=None, mode=0o600, **kwargs): + self.user = user + self.group = group + self.mode = mode + super(FilePermissionAudit, self).__init__(paths, user, group, mode, + **kwargs) + + @property + def user(self): + return self._user + + @user.setter + def user(self, name): + try: + user = pwd.getpwnam(name) + except KeyError: + log('Unknown user %s' % name, level=ERROR) + user = None + self._user = user + + @property + def group(self): + return self._group + + @group.setter + def group(self, name): + try: + group = None + if name: + group = grp.getgrnam(name) + else: + group = grp.getgrgid(self.user.pw_gid) + except KeyError: + log('Unknown group %s' % name, level=ERROR) + self._group = group + + def is_compliant(self, path): + """Checks if the path is in compliance. + + Used to determine if the path specified meets the necessary + requirements to be in compliance with the check itself. + + :param path: the file path to check + :returns: True if the path is compliant, False otherwise. + """ + stat = self._get_stat(path) + user = self.user + group = self.group + + compliant = True + if stat.st_uid != user.pw_uid or stat.st_gid != group.gr_gid: + log('File %s is not owned by %s:%s.' % (path, user.pw_name, + group.gr_name), + level=INFO) + compliant = False + + # POSIX refers to the st_mode bits as corresponding to both the + # file type and file permission bits, where the least significant 12 + # bits (o7777) are the suid (11), sgid (10), sticky bits (9), and the + # file permission bits (8-0) + perms = stat.st_mode & 0o7777 + if perms != self.mode: + log('File %s has incorrect permissions, currently set to %s' % + (path, oct(stat.st_mode & 0o7777)), level=INFO) + compliant = False + + return compliant + + def comply(self, path): + """Issues a chown and chmod to the file paths specified.""" + utils.ensure_permissions(path, self.user.pw_name, self.group.gr_name, + self.mode) + + +class DirectoryPermissionAudit(FilePermissionAudit): + """Performs a permission check for the specified directory path.""" + + def __init__(self, paths, user, group=None, mode=0o600, + recursive=True, **kwargs): + super(DirectoryPermissionAudit, self).__init__(paths, user, group, + mode, **kwargs) + self.recursive = recursive + + def is_compliant(self, path): + """Checks if the directory is compliant. + + Used to determine if the path specified and all of its children + directories are in compliance with the check itself. + + :param path: the directory path to check + :returns: True if the directory tree is compliant, otherwise False. + """ + if not os.path.isdir(path): + log('Path specified %s is not a directory.' % path, level=ERROR) + raise ValueError("%s is not a directory." 
% path)
+
+        if not self.recursive:
+            return super(DirectoryPermissionAudit, self).is_compliant(path)
+
+        compliant = True
+        for root, dirs, _ in os.walk(path):
+            if len(dirs) > 0:
+                continue
+
+            if not super(DirectoryPermissionAudit, self).is_compliant(root):
+                compliant = False
+                continue
+
+        return compliant
+
+    def comply(self, path):
+        for root, dirs, _ in os.walk(path):
+            if len(dirs) > 0:
+                super(DirectoryPermissionAudit, self).comply(root)
+
+
+class ReadOnly(BaseFileAudit):
+    """Audits that files and folders are read only."""
+    def __init__(self, paths, *args, **kwargs):
+        super(ReadOnly, self).__init__(paths=paths, *args, **kwargs)
+
+    def is_compliant(self, path):
+        try:
+            output = check_output(['find', path, '-perm', '-go+w',
+                                   '-type', 'f']).strip()
+
+            # The find above will find any files which have permission sets
+            # which allow too broad of write access. As such, the path is
+            # compliant if there is no output.
+            if output:
+                return False
+
+            return True
+        except CalledProcessError as e:
+            log('Error occurred while finding writable files for %s. '
+                'Error information is: command %s failed with returncode '
+                '%d and output %s.\n%s' % (path, e.cmd, e.returncode, e.output,
+                                           format_exc(e)), level=ERROR)
+            return False
+
+    def comply(self, path):
+        try:
+            check_output(['chmod', 'go-w', '-R', path])
+        except CalledProcessError as e:
+            log('Error occurred removing writable permissions for %s. '
+                'Error information is: command %s failed with returncode '
+                '%d and output %s.\n%s' % (path, e.cmd, e.returncode, e.output,
+                                           format_exc(e)), level=ERROR)
+
+
+class NoReadWriteForOther(BaseFileAudit):
+    """Ensures that files found under the base path are not readable or
+    writable by anyone other than the owner or the group.
+    """
+    def __init__(self, paths):
+        super(NoReadWriteForOther, self).__init__(paths)
+
+    def is_compliant(self, path):
+        try:
+            cmd = ['find', path, '-perm', '-o+r', '-type', 'f', '-o',
+                   '-perm', '-o+w', '-type', 'f']
+            output = check_output(cmd).strip()
+
+            # The find above will find any files which have read or write
+            # permissions for other, meaning there is too broad of access
+            # to read/write the file. As such, the path is compliant if
+            # there's no output.
+            if output:
+                return False
+
+            return True
+        except CalledProcessError as e:
+            log('Error occurred while finding files which are readable or '
+                'writable to the world in %s. '
+                'Command output is: %s.' % (path, e.output), level=ERROR)
+
+    def comply(self, path):
+        try:
+            check_output(['chmod', '-R', 'o-rw', path])
+        except CalledProcessError as e:
+            log('Error occurred attempting to change modes of files under '
+                'path %s. Output of command is: %s' % (path, e.output))
+
+
+class NoSUIDSGIDAudit(BaseFileAudit):
+    """Audits that specified files do not have SUID/SGID bits set."""
+    def __init__(self, paths, *args, **kwargs):
+        super(NoSUIDSGIDAudit, self).__init__(paths=paths, *args, **kwargs)
+
+    def is_compliant(self, path):
+        stat = self._get_stat(path)
+        if (stat.st_mode & (S_ISGID | S_ISUID)) != 0:
+            return False
+
+        return True
+
+    def comply(self, path):
+        try:
+            log('Removing suid/sgid from %s.' % path, level=DEBUG)
+            check_output(['chmod', '-s', path])
+        except CalledProcessError as e:
+            log('Error occurred removing suid/sgid from %s. '
+ 'Error information is: command %s failed with returncode ' + '%d and output %s.\n%s' % (path, e.cmd, e.returncode, e.output, + format_exc(e)), level=ERROR) + + +class TemplatedFile(BaseFileAudit): + """The TemplatedFileAudit audits the contents of a templated file. + + This audit renders a file from a template, sets the appropriate file + permissions, then generates a hashsum with which to check the content + changed. + """ + def __init__(self, path, context, template_dir, mode, user='root', + group='root', service_actions=None, **kwargs): + self.context = context + self.user = user + self.group = group + self.mode = mode + self.template_dir = template_dir + self.service_actions = service_actions + super(TemplatedFile, self).__init__(paths=path, always_comply=True, + **kwargs) + + def is_compliant(self, path): + """Determines if the templated file is compliant. + + A templated file is only compliant if it has not changed (as + determined by its sha256 hashsum) AND its file permissions are set + appropriately. + + :param path: the path to check compliance. + """ + same_templates = self.templates_match(path) + same_content = self.contents_match(path) + same_permissions = self.permissions_match(path) + + if same_content and same_permissions and same_templates: + return True + + return False + + def run_service_actions(self): + """Run any actions on services requested.""" + if not self.service_actions: + return + + for svc_action in self.service_actions: + name = svc_action['service'] + actions = svc_action['actions'] + log("Running service '%s' actions '%s'" % (name, actions), + level=DEBUG) + for action in actions: + cmd = ['service', name, action] + try: + check_call(cmd) + except CalledProcessError as exc: + log("Service name='%s' action='%s' failed - %s" % + (name, action, exc), level=WARNING) + + def comply(self, path): + """Ensures the contents and the permissions of the file. + + :param path: the path to correct + """ + dirname = os.path.dirname(path) + if not os.path.exists(dirname): + os.makedirs(dirname) + + self.pre_write() + render_and_write(self.template_dir, path, self.context()) + utils.ensure_permissions(path, self.user, self.group, self.mode) + self.run_service_actions() + self.save_checksum(path) + self.post_write() + + def pre_write(self): + """Invoked prior to writing the template.""" + pass + + def post_write(self): + """Invoked after writing the template.""" + pass + + def templates_match(self, path): + """Determines if the template files are the same. + + The template file equality is determined by the hashsum of the + template files themselves. If there is no hashsum, then the content + cannot be sure to be the same so treat it as if they changed. + Otherwise, return whether or not the hashsums are the same. + + :param path: the path to check + :returns: boolean + """ + template_path = get_template_path(self.template_dir, path) + key = 'hardening:template:%s' % template_path + template_checksum = file_hash(template_path) + kv = unitdata.kv() + stored_tmplt_checksum = kv.get(key) + if not stored_tmplt_checksum: + kv.set(key, template_checksum) + kv.flush() + log('Saved template checksum for %s.' % template_path, + level=DEBUG) + # Since we don't have a template checksum, then assume it doesn't + # match and return that the template is different. + return False + elif stored_tmplt_checksum != template_checksum: + kv.set(key, template_checksum) + kv.flush() + log('Updated template checksum for %s.' 
% template_path,
+                level=DEBUG)
+            return False
+
+        # Here the template hasn't changed based upon the calculated
+        # checksum of the template and what was previously stored.
+        return True
+
+    def contents_match(self, path):
+        """Determines if the file content is the same.
+
+        This is determined by comparing the hashsum of the file contents with
+        the saved hashsum. If there is no saved hashsum, the content cannot be
+        assumed to be unchanged, so treat the file as if it had changed.
+        Otherwise, return True if the hashsums are the same, False if they
+        are not the same.
+
+        :param path: the file to check.
+        """
+        checksum = file_hash(path)
+
+        kv = unitdata.kv()
+        stored_checksum = kv.get('hardening:%s' % path)
+        if not stored_checksum:
+            # If the checksum hasn't been generated, return False to ensure
+            # the file is written and the checksum stored.
+            log('Checksum for %s has not been calculated.' % path, level=DEBUG)
+            return False
+        elif stored_checksum != checksum:
+            log('Checksum mismatch for %s.' % path, level=DEBUG)
+            return False
+
+        return True
+
+    def permissions_match(self, path):
+        """Determines if the file owner and permissions match.
+
+        :param path: the path to check.
+        """
+        audit = FilePermissionAudit(path, self.user, self.group, self.mode)
+        return audit.is_compliant(path)
+
+    def save_checksum(self, path):
+        """Calculates and saves the checksum for the path specified.
+
+        :param path: the path of the file to save the checksum for.
+        """
+        checksum = file_hash(path)
+        kv = unitdata.kv()
+        kv.set('hardening:%s' % path, checksum)
+        kv.flush()
+
+
+class DeletedFile(BaseFileAudit):
+    """Audit to ensure that a file is deleted."""
+    def __init__(self, paths):
+        super(DeletedFile, self).__init__(paths)
+
+    def is_compliant(self, path):
+        return not os.path.exists(path)
+
+    def comply(self, path):
+        os.remove(path)
+
+
+class FileContentAudit(BaseFileAudit):
+    """Audit the contents of a file."""
+    def __init__(self, paths, cases, **kwargs):
+        # Cases we expect to pass
+        self.pass_cases = cases.get('pass', [])
+        # Cases we expect to fail
+        self.fail_cases = cases.get('fail', [])
+        super(FileContentAudit, self).__init__(paths, **kwargs)
+
+    def is_compliant(self, path):
+        """
+        Given a set of content matching cases, i.e. tuple(regex, bool) where
+        the bool value denotes whether or not the regex is expected to match,
+        check that all cases match as expected against the contents of the
+        file. Cases can be expected to either pass or fail.
+
+        :param path: Path of file to check.
+        :returns: Boolean value representing whether or not all cases are
+                  found to be compliant.
+        """
+        log("Auditing contents of file '%s'" % (path), level=DEBUG)
+        with open(path, 'r') as fd:
+            contents = fd.read()
+
+        matches = 0
+        for pattern in self.pass_cases:
+            key = re.compile(pattern, flags=re.MULTILINE)
+            results = re.search(key, contents)
+            if results:
+                matches += 1
+            else:
+                log("Pattern '%s' was expected to pass but instead it failed"
+                    % (pattern), level=WARNING)
+
+        for pattern in self.fail_cases:
+            key = re.compile(pattern, flags=re.MULTILINE)
+            results = re.search(key, contents)
+            if not results:
+                matches += 1
+            else:
+                log("Pattern '%s' was expected to fail but instead it passed"
+                    % (pattern), level=WARNING)
+
+        total = len(self.pass_cases) + len(self.fail_cases)
+        log("Checked %s cases and %s passed" % (total, matches), level=DEBUG)
+        return matches == total
+
+    def comply(self, *args, **kwargs):
+        """NOOP since we just issue warnings. This is to avoid the
+        NotImplementedError.
+ """ + log("Not applying any compliance criteria, only checks.", level=INFO) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/defaults/apache.yaml b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/defaults/apache.yaml new file mode 100644 index 0000000000000000000000000000000000000000..0f940d4cfa85ca7051dd60a4805d84bb6aebed6d --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/defaults/apache.yaml @@ -0,0 +1,16 @@ +# NOTE: this file contains the default configuration for the 'apache' hardening +# code. If you want to override any settings you must add them to a file +# called hardening.yaml in the root directory of your charm using the +# name 'apache' as the root key followed by any of the following with new +# values. + +common: + apache_dir: '/etc/apache2' + +hardening: + traceenable: 'off' + allowed_http_methods: "GET POST" + modules_to_disable: [ cgi, cgid ] + servertokens: 'Prod' + honor_cipher_order: 'on' + cipher_suite: 'ALL:+MEDIUM:+HIGH:!LOW:!MD5:!RC4:!eNULL:!aNULL:!3DES' diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/defaults/apache.yaml.schema b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/defaults/apache.yaml.schema new file mode 100644 index 0000000000000000000000000000000000000000..c112137cb45c4b63cb05384145b3edf8c443e2b8 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/defaults/apache.yaml.schema @@ -0,0 +1,12 @@ +# NOTE: this schema must contain all valid keys from it's associated defaults +# file. It is used to validate user-provided overrides. +common: + apache_dir: + traceenable: + +hardening: + allowed_http_methods: + modules_to_disable: + servertokens: + honor_cipher_order: + cipher_suite: diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/defaults/mysql.yaml b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/defaults/mysql.yaml new file mode 100644 index 0000000000000000000000000000000000000000..682d22bf3ded32eb1c8d6188486ec4468d9ec457 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/defaults/mysql.yaml @@ -0,0 +1,38 @@ +# NOTE: this file contains the default configuration for the 'mysql' hardening +# code. If you want to override any settings you must add them to a file +# called hardening.yaml in the root directory of your charm using the +# name 'mysql' as the root key followed by any of the following with new +# values. 
+ +hardening: + mysql-conf: /etc/mysql/my.cnf + hardening-conf: /etc/mysql/conf.d/hardening.cnf + +security: + # @see http://www.symantec.com/connect/articles/securing-mysql-step-step + # @see http://dev.mysql.com/doc/refman/5.7/en/server-options.html#option_mysqld_chroot + chroot: None + + # @see http://dev.mysql.com/doc/refman/5.7/en/server-options.html#option_mysqld_safe-user-create + safe-user-create: 1 + + # @see http://dev.mysql.com/doc/refman/5.7/en/server-options.html#option_mysqld_secure-auth + secure-auth: 1 + + # @see http://dev.mysql.com/doc/refman/5.7/en/server-options.html#option_mysqld_symbolic-links + skip-symbolic-links: 1 + + # @see http://dev.mysql.com/doc/refman/5.7/en/server-options.html#option_mysqld_skip-show-database + skip-show-database: True + + # @see http://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_local_infile + local-infile: 0 + + # @see https://dev.mysql.com/doc/refman/5.7/en/server-options.html#option_mysqld_allow-suspicious-udfs + allow-suspicious-udfs: 0 + + # @see https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_automatic_sp_privileges + automatic-sp-privileges: 0 + + # @see https://dev.mysql.com/doc/refman/5.7/en/server-options.html#option_mysqld_secure-file-priv + secure-file-priv: /tmp diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/defaults/mysql.yaml.schema b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/defaults/mysql.yaml.schema new file mode 100644 index 0000000000000000000000000000000000000000..2edf325c311c6fbb062a072083b4d12cebc3d9c1 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/defaults/mysql.yaml.schema @@ -0,0 +1,15 @@ +# NOTE: this schema must contain all valid keys from it's associated defaults +# file. It is used to validate user-provided overrides. +hardening: + mysql-conf: + hardening-conf: +security: + chroot: + safe-user-create: + secure-auth: + skip-symbolic-links: + skip-show-database: + local-infile: + allow-suspicious-udfs: + automatic-sp-privileges: + secure-file-priv: diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/defaults/os.yaml b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/defaults/os.yaml new file mode 100644 index 0000000000000000000000000000000000000000..9a8627b5ed2803828e1e4d78260c6b5f90cae659 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/defaults/os.yaml @@ -0,0 +1,68 @@ +# NOTE: this file contains the default configuration for the 'os' hardening +# code. If you want to override any settings you must add them to a file +# called hardening.yaml in the root directory of your charm using the +# name 'os' as the root key followed by any of the following with new +# values. 
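+#
+#       For example (illustrative):
+#
+#           os:
+#             auth:
+#               retries: 3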
+ +general: + desktop_enable: False # (type:boolean) + +environment: + extra_user_paths: [] + umask: 027 + root_path: / + +auth: + pw_max_age: 60 + # discourage password cycling + pw_min_age: 7 + retries: 5 + lockout_time: 600 + timeout: 60 + allow_homeless: False # (type:boolean) + pam_passwdqc_enable: True # (type:boolean) + pam_passwdqc_options: 'min=disabled,disabled,16,12,8' + root_ttys: + console + tty1 + tty2 + tty3 + tty4 + tty5 + tty6 + uid_min: 1000 + gid_min: 1000 + sys_uid_min: 100 + sys_uid_max: 999 + sys_gid_min: 100 + sys_gid_max: 999 + chfn_restrict: + +security: + users_allow: [] + suid_sgid_enforce: True # (type:boolean) + # user-defined blacklist and whitelist + suid_sgid_blacklist: [] + suid_sgid_whitelist: [] + # if this is True, remove any suid/sgid bits from files that were not in the whitelist + suid_sgid_dry_run_on_unknown: False # (type:boolean) + suid_sgid_remove_from_unknown: False # (type:boolean) + # remove packages with known issues + packages_clean: True # (type:boolean) + packages_list: + xinetd + inetd + ypserv + telnet-server + rsh-server + rsync + kernel_enable_module_loading: True # (type:boolean) + kernel_enable_core_dump: False # (type:boolean) + ssh_tmout: 300 + +sysctl: + kernel_secure_sysrq: 244 # 4 + 16 + 32 + 64 + 128 + kernel_enable_sysrq: False # (type:boolean) + forwarding: False # (type:boolean) + ipv6_enable: False # (type:boolean) + arp_restricted: True # (type:boolean) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/defaults/os.yaml.schema b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/defaults/os.yaml.schema new file mode 100644 index 0000000000000000000000000000000000000000..cc3b9c206eae56cbe68826cb76748e2deb9483e1 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/defaults/os.yaml.schema @@ -0,0 +1,43 @@ +# NOTE: this schema must contain all valid keys from it's associated defaults +# file. It is used to validate user-provided overrides. +general: + desktop_enable: +environment: + extra_user_paths: + umask: + root_path: +auth: + pw_max_age: + pw_min_age: + retries: + lockout_time: + timeout: + allow_homeless: + pam_passwdqc_enable: + pam_passwdqc_options: + root_ttys: + uid_min: + gid_min: + sys_uid_min: + sys_uid_max: + sys_gid_min: + sys_gid_max: + chfn_restrict: +security: + users_allow: + suid_sgid_enforce: + suid_sgid_blacklist: + suid_sgid_whitelist: + suid_sgid_dry_run_on_unknown: + suid_sgid_remove_from_unknown: + packages_clean: + packages_list: + kernel_enable_module_loading: + kernel_enable_core_dump: + ssh_tmout: +sysctl: + kernel_secure_sysrq: + kernel_enable_sysrq: + forwarding: + ipv6_enable: + arp_restricted: diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/defaults/ssh.yaml b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/defaults/ssh.yaml new file mode 100644 index 0000000000000000000000000000000000000000..cd529bcae1ec00fef2e969f43dc3cf530b46ef9a --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/defaults/ssh.yaml @@ -0,0 +1,49 @@ +# NOTE: this file contains the default configuration for the 'ssh' hardening +# code. 
If you want to override any settings you must add them to a file +# called hardening.yaml in the root directory of your charm using the +# name 'ssh' as the root key followed by any of the following with new +# values. + +common: + service_name: 'ssh' + network_ipv6_enable: False # (type:boolean) + ports: [22] + remote_hosts: [] + +client: + package: 'openssh-client' + cbc_required: False # (type:boolean) + weak_hmac: False # (type:boolean) + weak_kex: False # (type:boolean) + roaming: False + password_authentication: 'no' + +server: + host_key_files: ['/etc/ssh/ssh_host_rsa_key', '/etc/ssh/ssh_host_dsa_key', + '/etc/ssh/ssh_host_ecdsa_key'] + cbc_required: False # (type:boolean) + weak_hmac: False # (type:boolean) + weak_kex: False # (type:boolean) + allow_root_with_key: False # (type:boolean) + allow_tcp_forwarding: 'no' + allow_agent_forwarding: 'no' + allow_x11_forwarding: 'no' + use_privilege_separation: 'sandbox' + listen_to: ['0.0.0.0'] + use_pam: 'no' + package: 'openssh-server' + password_authentication: 'no' + alive_interval: '600' + alive_count: '3' + sftp_enable: False # (type:boolean) + sftp_group: 'sftponly' + sftp_chroot: '/home/%u' + deny_users: [] + allow_users: [] + deny_groups: [] + allow_groups: [] + print_motd: 'no' + print_last_log: 'no' + use_dns: 'no' + max_auth_tries: 2 + max_sessions: 10 diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/defaults/ssh.yaml.schema b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/defaults/ssh.yaml.schema new file mode 100644 index 0000000000000000000000000000000000000000..d05e054bc234015206bb1195152fa9ffd6a33151 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/defaults/ssh.yaml.schema @@ -0,0 +1,42 @@ +# NOTE: this schema must contain all valid keys from it's associated defaults +# file. It is used to validate user-provided overrides. +common: + service_name: + network_ipv6_enable: + ports: + remote_hosts: +client: + package: + cbc_required: + weak_hmac: + weak_kex: + roaming: + password_authentication: +server: + host_key_files: + cbc_required: + weak_hmac: + weak_kex: + allow_root_with_key: + allow_tcp_forwarding: + allow_agent_forwarding: + allow_x11_forwarding: + use_privilege_separation: + listen_to: + use_pam: + package: + password_authentication: + alive_interval: + alive_count: + sftp_enable: + sftp_group: + sftp_chroot: + deny_users: + allow_users: + deny_groups: + allow_groups: + print_motd: + print_last_log: + use_dns: + max_auth_tries: + max_sessions: diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/harden.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/harden.py new file mode 100644 index 0000000000000000000000000000000000000000..63f21b9c9855065da3be875c01a2c94db7df47b4 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/harden.py @@ -0,0 +1,96 @@ +# Copyright 2016 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+# See the License for the specific language governing permissions and +# limitations under the License. + +import six + +from collections import OrderedDict + +from charmhelpers.core.hookenv import ( + config, + log, + DEBUG, + WARNING, +) +from charmhelpers.contrib.hardening.host.checks import run_os_checks +from charmhelpers.contrib.hardening.ssh.checks import run_ssh_checks +from charmhelpers.contrib.hardening.mysql.checks import run_mysql_checks +from charmhelpers.contrib.hardening.apache.checks import run_apache_checks + +_DISABLE_HARDENING_FOR_UNIT_TEST = False + + +def harden(overrides=None): + """Hardening decorator. + + This is the main entry point for running the hardening stack. In order to + run modules of the stack you must add this decorator to charm hook(s) and + ensure that your charm config.yaml contains the 'harden' option set to + one or more of the supported modules. Setting these will cause the + corresponding hardening code to be run when the hook fires. + + This decorator can and should be applied to more than one hook or function + such that hardening modules are called multiple times. This is because + subsequent calls will perform auditing checks that will report any changes + to resources hardened by the first run (and possibly perform compliance + actions as a result of any detected infractions). + + :param overrides: Optional list of stack modules used to override those + provided with 'harden' config. + :returns: Returns value returned by decorated function once executed. + """ + if overrides is None: + overrides = [] + + def _harden_inner1(f): + # As this has to be py2.7 compat, we can't use nonlocal. Use a trick + # to capture the dictionary that can then be updated. + _logged = {'done': False} + + def _harden_inner2(*args, **kwargs): + # knock out hardening via a config var; normally it won't get + # disabled. + if _DISABLE_HARDENING_FOR_UNIT_TEST: + return f(*args, **kwargs) + if not _logged['done']: + log("Hardening function '%s'" % (f.__name__), level=DEBUG) + _logged['done'] = True + RUN_CATALOG = OrderedDict([('os', run_os_checks), + ('ssh', run_ssh_checks), + ('mysql', run_mysql_checks), + ('apache', run_apache_checks)]) + + enabled = overrides[:] or (config("harden") or "").split() + if enabled: + modules_to_run = [] + # modules will always be performed in the following order + for module, func in six.iteritems(RUN_CATALOG): + if module in enabled: + enabled.remove(module) + modules_to_run.append(func) + + if enabled: + log("Unknown hardening modules '%s' - ignoring" % + (', '.join(enabled)), level=WARNING) + + for hardener in modules_to_run: + log("Executing hardening module '%s'" % + (hardener.__name__), level=DEBUG) + hardener() + else: + log("No hardening applied to '%s'" % (f.__name__), level=DEBUG) + + return f(*args, **kwargs) + return _harden_inner2 + + return _harden_inner1 diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/host/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/host/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..58bebd846bd6fa648cfab6ab1056ad10d8415453 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/host/__init__.py @@ -0,0 +1,17 @@ +# Copyright 2016 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +from os import path + +TEMPLATES_DIR = path.join(path.dirname(__file__), 'templates') diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/host/checks/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/host/checks/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..0e7e409f3c7e0406b40353f48acfc3479e4c1a24 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/host/checks/__init__.py @@ -0,0 +1,48 @@ +# Copyright 2016 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +from charmhelpers.core.hookenv import ( + log, + DEBUG, +) +from charmhelpers.contrib.hardening.host.checks import ( + apt, + limits, + login, + minimize_access, + pam, + profile, + securetty, + suid_sgid, + sysctl +) + + +def run_os_checks(): + log("Starting OS hardening checks.", level=DEBUG) + checks = apt.get_audits() + checks.extend(limits.get_audits()) + checks.extend(login.get_audits()) + checks.extend(minimize_access.get_audits()) + checks.extend(pam.get_audits()) + checks.extend(profile.get_audits()) + checks.extend(securetty.get_audits()) + checks.extend(suid_sgid.get_audits()) + checks.extend(sysctl.get_audits()) + + for check in checks: + log("Running '%s' check" % (check.__class__.__name__), level=DEBUG) + check.ensure_compliance() + + log("OS hardening checks complete.", level=DEBUG) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/host/checks/apt.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/host/checks/apt.py new file mode 100644 index 0000000000000000000000000000000000000000..7ce41b0043134e256d9c20ee729f1c4345faa3f9 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/host/checks/apt.py @@ -0,0 +1,37 @@ +# Copyright 2016 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +from charmhelpers.contrib.hardening.utils import get_settings +from charmhelpers.contrib.hardening.audits.apt import ( + AptConfig, + RestrictedPackages, +) + + +def get_audits(): + """Get OS hardening apt audits. + + :returns: dictionary of audits + """ + audits = [AptConfig([{'key': 'APT::Get::AllowUnauthenticated', + 'expected': 'false'}])] + + settings = get_settings('os') + clean_packages = settings['security']['packages_clean'] + if clean_packages: + security_packages = settings['security']['packages_list'] + if security_packages: + audits.append(RestrictedPackages(security_packages)) + + return audits diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/host/checks/limits.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/host/checks/limits.py new file mode 100644 index 0000000000000000000000000000000000000000..e94f5ebef360c7c80c35eba8243d3e7f7dcbb14d --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/host/checks/limits.py @@ -0,0 +1,53 @@ +# Copyright 2016 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +from charmhelpers.contrib.hardening.audits.file import ( + DirectoryPermissionAudit, + TemplatedFile, +) +from charmhelpers.contrib.hardening.host import TEMPLATES_DIR +from charmhelpers.contrib.hardening import utils + + +def get_audits(): + """Get OS hardening security limits audits. + + :returns: dictionary of audits + """ + audits = [] + settings = utils.get_settings('os') + + # Ensure that the /etc/security/limits.d directory is only writable + # by the root user, but others can execute and read. + audits.append(DirectoryPermissionAudit('/etc/security/limits.d', + user='root', group='root', + mode=0o755)) + + # If core dumps are not enabled, then don't allow core dumps to be + # created as they may contain sensitive information. + if not settings['security']['kernel_enable_core_dump']: + audits.append(TemplatedFile('/etc/security/limits.d/10.hardcore.conf', + SecurityLimitsContext(), + template_dir=TEMPLATES_DIR, + user='root', group='root', mode=0o0440)) + return audits + + +class SecurityLimitsContext(object): + + def __call__(self): + settings = utils.get_settings('os') + ctxt = {'disable_core_dump': + not settings['security']['kernel_enable_core_dump']} + return ctxt diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/host/checks/login.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/host/checks/login.py new file mode 100644 index 0000000000000000000000000000000000000000..fe2bc6ef34a0dae612c2617dc1d13390f651e419 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/host/checks/login.py @@ -0,0 +1,65 @@ +# Copyright 2016 Canonical Limited. 
+# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +from six import string_types + +from charmhelpers.contrib.hardening.audits.file import TemplatedFile +from charmhelpers.contrib.hardening.host import TEMPLATES_DIR +from charmhelpers.contrib.hardening import utils + + +def get_audits(): + """Get OS hardening login.defs audits. + + :returns: dictionary of audits + """ + audits = [TemplatedFile('/etc/login.defs', LoginContext(), + template_dir=TEMPLATES_DIR, + user='root', group='root', mode=0o0444)] + return audits + + +class LoginContext(object): + + def __call__(self): + settings = utils.get_settings('os') + + # Octal numbers in yaml end up being turned into decimal, + # so check if the umask is entered as a string (e.g. '027') + # or as an octal umask as we know it (e.g. 002). If its not + # a string assume it to be octal and turn it into an octal + # string. + umask = settings['environment']['umask'] + if not isinstance(umask, string_types): + umask = '%s' % oct(umask) + + ctxt = { + 'additional_user_paths': + settings['environment']['extra_user_paths'], + 'umask': umask, + 'pwd_max_age': settings['auth']['pw_max_age'], + 'pwd_min_age': settings['auth']['pw_min_age'], + 'uid_min': settings['auth']['uid_min'], + 'sys_uid_min': settings['auth']['sys_uid_min'], + 'sys_uid_max': settings['auth']['sys_uid_max'], + 'gid_min': settings['auth']['gid_min'], + 'sys_gid_min': settings['auth']['sys_gid_min'], + 'sys_gid_max': settings['auth']['sys_gid_max'], + 'login_retries': settings['auth']['retries'], + 'login_timeout': settings['auth']['timeout'], + 'chfn_restrict': settings['auth']['chfn_restrict'], + 'allow_login_without_home': settings['auth']['allow_homeless'] + } + + return ctxt diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/host/checks/minimize_access.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/host/checks/minimize_access.py new file mode 100644 index 0000000000000000000000000000000000000000..6e64be003be0b89d1416b22c35c43b6024979361 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/host/checks/minimize_access.py @@ -0,0 +1,50 @@ +# Copyright 2016 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +from charmhelpers.contrib.hardening.audits.file import ( + FilePermissionAudit, + ReadOnly, +) +from charmhelpers.contrib.hardening import utils + + +def get_audits(): + """Get OS hardening access audits. 
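+
+    Illustrative override (in a charm's hardening.yaml) that keeps /bin/su
+    available to non-root users, via the users_allow check below::
+
+        os:
+          security:
+            users_allow: [change_user]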
+ + :returns: dictionary of audits + """ + audits = [] + settings = utils.get_settings('os') + + # Remove write permissions from $PATH folders for all regular users. + # This prevents changing system-wide commands from normal users. + path_folders = {'/usr/local/sbin', + '/usr/local/bin', + '/usr/sbin', + '/usr/bin', + '/bin'} + extra_user_paths = settings['environment']['extra_user_paths'] + path_folders.update(extra_user_paths) + audits.append(ReadOnly(path_folders)) + + # Only allow the root user to have access to the shadow file. + audits.append(FilePermissionAudit('/etc/shadow', 'root', 'root', 0o0600)) + + if 'change_user' not in settings['security']['users_allow']: + # su should only be accessible to user and group root, unless it is + # expressly defined to allow users to change to root via the + # security_users_allow config option. + audits.append(FilePermissionAudit('/bin/su', 'root', 'root', 0o750)) + + return audits diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/host/checks/pam.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/host/checks/pam.py new file mode 100644 index 0000000000000000000000000000000000000000..9b38d5f0cf0b16282968825b79b44d80a1a7f577 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/host/checks/pam.py @@ -0,0 +1,132 @@ +# Copyright 2016 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +from subprocess import ( + check_output, + CalledProcessError, +) + +from charmhelpers.core.hookenv import ( + log, + DEBUG, + ERROR, +) +from charmhelpers.fetch import ( + apt_install, + apt_purge, + apt_update, +) +from charmhelpers.contrib.hardening.audits.file import ( + TemplatedFile, + DeletedFile, +) +from charmhelpers.contrib.hardening import utils +from charmhelpers.contrib.hardening.host import TEMPLATES_DIR + + +def get_audits(): + """Get OS hardening PAM authentication audits. + + :returns: dictionary of audits + """ + audits = [] + + settings = utils.get_settings('os') + + if settings['auth']['pam_passwdqc_enable']: + audits.append(PasswdqcPAM('/etc/passwdqc.conf')) + + if settings['auth']['retries']: + audits.append(Tally2PAM('/usr/share/pam-configs/tally2')) + else: + audits.append(DeletedFile('/usr/share/pam-configs/tally2')) + + return audits + + +class PasswdqcPAMContext(object): + + def __call__(self): + ctxt = {} + settings = utils.get_settings('os') + + ctxt['auth_pam_passwdqc_options'] = \ + settings['auth']['pam_passwdqc_options'] + + return ctxt + + +class PasswdqcPAM(TemplatedFile): + """The PAM Audit verifies the linux PAM settings.""" + def __init__(self, path): + super(PasswdqcPAM, self).__init__(path=path, + template_dir=TEMPLATES_DIR, + context=PasswdqcPAMContext(), + user='root', + group='root', + mode=0o0640) + + def pre_write(self): + # Always remove? 
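+        # libpam-ccreds and libpam-cracklib ship PAM profiles that would
+        # compete with passwdqc-based password checking, so (presumably for
+        # that reason) they are purged before libpam-passwdqc is installed.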
+ for pkg in ['libpam-ccreds', 'libpam-cracklib']: + log("Purging package '%s'" % pkg, level=DEBUG), + apt_purge(pkg) + + apt_update(fatal=True) + for pkg in ['libpam-passwdqc']: + log("Installing package '%s'" % pkg, level=DEBUG), + apt_install(pkg) + + def post_write(self): + """Updates the PAM configuration after the file has been written""" + try: + check_output(['pam-auth-update', '--package']) + except CalledProcessError as e: + log('Error calling pam-auth-update: %s' % e, level=ERROR) + + +class Tally2PAMContext(object): + + def __call__(self): + ctxt = {} + settings = utils.get_settings('os') + + ctxt['auth_lockout_time'] = settings['auth']['lockout_time'] + ctxt['auth_retries'] = settings['auth']['retries'] + + return ctxt + + +class Tally2PAM(TemplatedFile): + """The PAM Audit verifies the linux PAM settings.""" + def __init__(self, path): + super(Tally2PAM, self).__init__(path=path, + template_dir=TEMPLATES_DIR, + context=Tally2PAMContext(), + user='root', + group='root', + mode=0o0640) + + def pre_write(self): + # Always remove? + apt_purge('libpam-ccreds') + apt_update(fatal=True) + apt_install('libpam-modules') + + def post_write(self): + """Updates the PAM configuration after the file has been written""" + try: + check_output(['pam-auth-update', '--package']) + except CalledProcessError as e: + log('Error calling pam-auth-update: %s' % e, level=ERROR) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/host/checks/profile.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/host/checks/profile.py new file mode 100644 index 0000000000000000000000000000000000000000..2727428da9241ccf88a60843d05dffb26cebac96 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/host/checks/profile.py @@ -0,0 +1,49 @@ +# Copyright 2016 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +from charmhelpers.contrib.hardening.audits.file import TemplatedFile +from charmhelpers.contrib.hardening.host import TEMPLATES_DIR +from charmhelpers.contrib.hardening import utils + + +def get_audits(): + """Get OS hardening profile audits. + + :returns: dictionary of audits + """ + audits = [] + + settings = utils.get_settings('os') + # If core dumps are not enabled, then don't allow core dumps to be + # created as they may contain sensitive information. 
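+    # Both templates rendered below act at the shell level rather than being
+    # kernel-enforced: pinerolo_profile.sh applies a soft "ulimit -c 0"
+    # (which users may raise again up to any hard limit) and 99-hardening.sh
+    # exports a read-only TMOUT for idle session timeout.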
+ if not settings['security']['kernel_enable_core_dump']: + audits.append(TemplatedFile('/etc/profile.d/pinerolo_profile.sh', + ProfileContext(), + template_dir=TEMPLATES_DIR, + mode=0o0755, user='root', group='root')) + if settings['security']['ssh_tmout']: + audits.append(TemplatedFile('/etc/profile.d/99-hardening.sh', + ProfileContext(), + template_dir=TEMPLATES_DIR, + mode=0o0644, user='root', group='root')) + return audits + + +class ProfileContext(object): + + def __call__(self): + settings = utils.get_settings('os') + ctxt = {'ssh_tmout': + settings['security']['ssh_tmout']} + return ctxt diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/host/checks/securetty.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/host/checks/securetty.py new file mode 100644 index 0000000000000000000000000000000000000000..34cd02178c1fbf1c7d467af0814ad9fd4199dc3d --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/host/checks/securetty.py @@ -0,0 +1,37 @@ +# Copyright 2016 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +from charmhelpers.contrib.hardening.audits.file import TemplatedFile +from charmhelpers.contrib.hardening.host import TEMPLATES_DIR +from charmhelpers.contrib.hardening import utils + + +def get_audits(): + """Get OS hardening Secure TTY audits. + + :returns: dictionary of audits + """ + audits = [] + audits.append(TemplatedFile('/etc/securetty', SecureTTYContext(), + template_dir=TEMPLATES_DIR, + mode=0o0400, user='root', group='root')) + return audits + + +class SecureTTYContext(object): + + def __call__(self): + settings = utils.get_settings('os') + ctxt = {'ttys': settings['auth']['root_ttys']} + return ctxt diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/host/checks/suid_sgid.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/host/checks/suid_sgid.py new file mode 100644 index 0000000000000000000000000000000000000000..bcbe3fde07ea0716e2de6d1d4e103fcb19166c14 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/host/checks/suid_sgid.py @@ -0,0 +1,129 @@ +# Copyright 2016 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
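+
+"""Audits which strip the setuid/setgid bits from binaries that should not
+carry them (see BLACKLIST below), optionally scanning the filesystem for
+unknown files with these bits set.
+
+A minimal sketch of how these audits are consumed, assuming the common
+ensure_compliance() entry point used by the other check runners in
+charmhelpers.contrib.hardening:
+
+    for audit in get_audits():
+        audit.ensure_compliance()
+"""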
+
+import subprocess
+
+from charmhelpers.core.hookenv import (
+    log,
+    INFO,
+)
+from charmhelpers.contrib.hardening.audits.file import NoSUIDSGIDAudit
+from charmhelpers.contrib.hardening import utils
+
+
+BLACKLIST = ['/usr/bin/rcp', '/usr/bin/rlogin', '/usr/bin/rsh',
+             '/usr/libexec/openssh/ssh-keysign',
+             '/usr/lib/openssh/ssh-keysign',
+             '/sbin/netreport',
+             '/usr/sbin/usernetctl',
+             '/usr/sbin/userisdnctl',
+             '/usr/sbin/pppd',
+             '/usr/bin/lockfile',
+             '/usr/bin/mail-lock',
+             '/usr/bin/mail-unlock',
+             '/usr/bin/mail-touchlock',
+             '/usr/bin/dotlockfile',
+             '/usr/bin/arping',
+             '/usr/sbin/uuidd',
+             '/usr/bin/mtr',
+             '/usr/lib/evolution/camel-lock-helper-1.2',
+             '/usr/lib/pt_chown',
+             '/usr/lib/eject/dmcrypt-get-device',
+             '/usr/lib/mc/cons.saver']
+
+WHITELIST = ['/bin/mount', '/bin/ping', '/bin/su', '/bin/umount',
+             '/sbin/pam_timestamp_check', '/sbin/unix_chkpwd', '/usr/bin/at',
+             '/usr/bin/gpasswd', '/usr/bin/locate', '/usr/bin/newgrp',
+             '/usr/bin/passwd', '/usr/bin/ssh-agent',
+             '/usr/libexec/utempter/utempter', '/usr/sbin/lockdev',
+             '/usr/sbin/sendmail.sendmail', '/usr/bin/expiry',
+             '/bin/ping6', '/usr/bin/traceroute6.iputils',
+             '/sbin/mount.nfs', '/sbin/umount.nfs',
+             '/sbin/mount.nfs4', '/sbin/umount.nfs4',
+             '/usr/bin/crontab',
+             '/usr/bin/wall', '/usr/bin/write',
+             '/usr/bin/screen',
+             '/usr/bin/mlocate',
+             '/usr/bin/chage', '/usr/bin/chfn', '/usr/bin/chsh',
+             '/bin/fusermount',
+             '/usr/bin/pkexec',
+             '/usr/bin/sudo', '/usr/bin/sudoedit',
+             '/usr/sbin/postdrop', '/usr/sbin/postqueue',
+             '/usr/sbin/suexec',
+             '/usr/lib/squid/ncsa_auth', '/usr/lib/squid/pam_auth',
+             '/usr/kerberos/bin/ksu',
+             '/usr/sbin/ccreds_validate',
+             '/usr/bin/Xorg',
+             '/usr/bin/X',
+             '/usr/lib/dbus-1.0/dbus-daemon-launch-helper',
+             '/usr/lib/vte/gnome-pty-helper',
+             '/usr/lib/libvte9/gnome-pty-helper',
+             '/usr/lib/libvte-2.90-9/gnome-pty-helper']
+
+
+def get_audits():
+    """Get OS hardening suid/sgid audits.
+
+    :returns: dictionary of audits
+    """
+    checks = []
+    settings = utils.get_settings('os')
+    if not settings['security']['suid_sgid_enforce']:
+        log("Skipping suid/sgid hardening", level=INFO)
+        return checks
+
+    # Build the blacklist and whitelist of files for suid/sgid checks.
+    # There are a total of 4 lists:
+    # 1. the system blacklist
+    # 2. the system whitelist
+    # 3. the user blacklist
+    # 4. the user whitelist
+    #
+    # The blacklist is the set of paths which should NOT have the suid/sgid
+    # bit set and the whitelist is the set of paths which MAY have the
+    # suid/sgid bit set. The user whitelist/blacklist effectively override
+    # the system whitelist/blacklist.
+    u_b = settings['security']['suid_sgid_blacklist']
+    u_w = settings['security']['suid_sgid_whitelist']
+
+    blacklist = set(BLACKLIST) - set(u_w + u_b)
+    whitelist = set(WHITELIST) - set(u_b + u_w)
+
+    checks.append(NoSUIDSGIDAudit(blacklist))
+
+    dry_run = settings['security']['suid_sgid_dry_run_on_unknown']
+
+    if settings['security']['suid_sgid_remove_from_unknown'] or dry_run:
+        # If the policy is a dry_run (e.g. complain only) or remove unknown
+        # suid/sgid bits then find all of the paths which have the suid/sgid
+        # bit set and then remove the whitelisted paths.
+        root_path = settings['environment']['root_path']
+        unknown_paths = find_paths_with_suid_sgid(root_path) - set(whitelist)
+        checks.append(NoSUIDSGIDAudit(unknown_paths, unless=dry_run))
+
+    return checks
+
+
+def find_paths_with_suid_sgid(root_path):
+    """Finds all paths/files which have an suid/sgid bit enabled.
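+
+    Implemented with find(1) over -perm -4000 / -perm -2000, skipping
+    anything under /proc.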
+ + Starting with the root_path, this will recursively find all paths which + have an suid or sgid bit set. + """ + cmd = ['find', root_path, '-perm', '-4000', '-o', '-perm', '-2000', + '-type', 'f', '!', '-path', '/proc/*', '-print'] + + p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE) + out, _ = p.communicate() + return set(out.split('\n')) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/host/checks/sysctl.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/host/checks/sysctl.py new file mode 100644 index 0000000000000000000000000000000000000000..f1ea5813036b11893e8b9a986bf30a2f7a541b5d --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/host/checks/sysctl.py @@ -0,0 +1,209 @@ +# Copyright 2016 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import os +import platform +import re +import six +import subprocess + +from charmhelpers.core.hookenv import ( + log, + INFO, + WARNING, +) +from charmhelpers.contrib.hardening import utils +from charmhelpers.contrib.hardening.audits.file import ( + FilePermissionAudit, + TemplatedFile, +) +from charmhelpers.contrib.hardening.host import TEMPLATES_DIR + + +SYSCTL_DEFAULTS = """net.ipv4.ip_forward=%(net_ipv4_ip_forward)s +net.ipv6.conf.all.forwarding=%(net_ipv6_conf_all_forwarding)s +net.ipv4.conf.all.rp_filter=1 +net.ipv4.conf.default.rp_filter=1 +net.ipv4.icmp_echo_ignore_broadcasts=1 +net.ipv4.icmp_ignore_bogus_error_responses=1 +net.ipv4.icmp_ratelimit=100 +net.ipv4.icmp_ratemask=88089 +net.ipv6.conf.all.disable_ipv6=%(net_ipv6_conf_all_disable_ipv6)s +net.ipv4.tcp_timestamps=%(net_ipv4_tcp_timestamps)s +net.ipv4.conf.all.arp_ignore=%(net_ipv4_conf_all_arp_ignore)s +net.ipv4.conf.all.arp_announce=%(net_ipv4_conf_all_arp_announce)s +net.ipv4.tcp_rfc1337=1 +net.ipv4.tcp_syncookies=1 +net.ipv4.conf.all.shared_media=1 +net.ipv4.conf.default.shared_media=1 +net.ipv4.conf.all.accept_source_route=0 +net.ipv4.conf.default.accept_source_route=0 +net.ipv4.conf.all.accept_redirects=0 +net.ipv4.conf.default.accept_redirects=0 +net.ipv6.conf.all.accept_redirects=0 +net.ipv6.conf.default.accept_redirects=0 +net.ipv4.conf.all.secure_redirects=0 +net.ipv4.conf.default.secure_redirects=0 +net.ipv4.conf.all.send_redirects=0 +net.ipv4.conf.default.send_redirects=0 +net.ipv4.conf.all.log_martians=0 +net.ipv6.conf.default.router_solicitations=0 +net.ipv6.conf.default.accept_ra_rtr_pref=0 +net.ipv6.conf.default.accept_ra_pinfo=0 +net.ipv6.conf.default.accept_ra_defrtr=0 +net.ipv6.conf.default.autoconf=0 +net.ipv6.conf.default.dad_transmits=0 +net.ipv6.conf.default.max_addresses=1 +net.ipv6.conf.all.accept_ra=0 +net.ipv6.conf.default.accept_ra=0 +kernel.modules_disabled=%(kernel_modules_disabled)s +kernel.sysrq=%(kernel_sysrq)s +fs.suid_dumpable=%(fs_suid_dumpable)s +kernel.randomize_va_space=2 +""" + + +def get_audits(): + """Get OS hardening sysctl audits. 
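+
+    Renders the defaults above into /etc/sysctl.d/99-juju-hardening.conf,
+    locks down the permissions of /etc/sysctl.conf and, when module loading
+    is disabled, regenerates the initramfs module list.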
+
+    :returns: dictionary of audits
+    """
+    audits = []
+    settings = utils.get_settings('os')
+
+    # Apply the sysctl settings which are configured to be applied.
+    audits.append(SysctlConf())
+    # Make sure that only root has access to the sysctl.conf file, and
+    # that it is read-only.
+    audits.append(FilePermissionAudit('/etc/sysctl.conf',
+                                      user='root',
+                                      group='root', mode=0o0440))
+    # If module loading is not enabled, then ensure that the modules
+    # file has the appropriate permissions and rebuild the initramfs
+    if not settings['security']['kernel_enable_module_loading']:
+        audits.append(ModulesTemplate())
+
+    return audits
+
+
+class ModulesContext(object):
+
+    def __call__(self):
+        settings = utils.get_settings('os')
+        with open('/proc/cpuinfo', 'r') as fd:
+            cpuinfo = fd.readlines()
+
+        # vendor_id may be absent (e.g. non-x86 CPUs) so default it first
+        vendor = ''
+        for line in cpuinfo:
+            match = re.search(r"^vendor_id\s+:\s+(.+)", line)
+            if match:
+                vendor = match.group(1)
+
+        if vendor == "GenuineIntel":
+            vendor = "intel"
+        elif vendor == "AuthenticAMD":
+            vendor = "amd"
+
+        ctxt = {'arch': platform.processor(),
+                'cpuVendor': vendor,
+                'desktop_enable': settings['general']['desktop_enable']}
+
+        return ctxt
+
+
+class ModulesTemplate(TemplatedFile):
+
+    def __init__(self):
+        super(ModulesTemplate, self).__init__('/etc/initramfs-tools/modules',
+                                              ModulesContext(),
+                                              template_dir=TEMPLATES_DIR,
+                                              user='root', group='root',
+                                              mode=0o0440)
+
+    def post_write(self):
+        subprocess.check_call(['update-initramfs', '-u'])
+
+
+class SysCtlHardeningContext(object):
+    def __call__(self):
+        settings = utils.get_settings('os')
+        ctxt = {'sysctl': {}}
+
+        log("Applying sysctl settings", level=INFO)
+        extras = {'net_ipv4_ip_forward': 0,
+                  'net_ipv6_conf_all_forwarding': 0,
+                  'net_ipv6_conf_all_disable_ipv6': 1,
+                  'net_ipv4_tcp_timestamps': 0,
+                  'net_ipv4_conf_all_arp_ignore': 0,
+                  'net_ipv4_conf_all_arp_announce': 0,
+                  'kernel_sysrq': 0,
+                  'fs_suid_dumpable': 0,
+                  'kernel_modules_disabled': 1}
+
+        if settings['sysctl']['ipv6_enable']:
+            extras['net_ipv6_conf_all_disable_ipv6'] = 0
+
+        if settings['sysctl']['forwarding']:
+            extras['net_ipv4_ip_forward'] = 1
+            extras['net_ipv6_conf_all_forwarding'] = 1
+
+        if settings['sysctl']['arp_restricted']:
+            extras['net_ipv4_conf_all_arp_ignore'] = 1
+            extras['net_ipv4_conf_all_arp_announce'] = 2
+
+        if settings['security']['kernel_enable_module_loading']:
+            extras['kernel_modules_disabled'] = 0
+
+        if settings['sysctl']['kernel_enable_sysrq']:
+            sysrq_val = settings['sysctl']['kernel_secure_sysrq']
+            extras['kernel_sysrq'] = sysrq_val
+
+        if settings['security']['kernel_enable_core_dump']:
+            extras['fs_suid_dumpable'] = 1
+
+        settings.update(extras)
+        for d in (SYSCTL_DEFAULTS % settings).split():
+            d = d.strip().partition('=')
+            key = d[0].strip()
+            path = os.path.join('/proc/sys', key.replace('.', '/'))
+            if not os.path.exists(path):
+                log("Skipping '%s' since '%s' does not exist" % (key, path),
+                    level=WARNING)
+                continue
+
+            ctxt['sysctl'][key] = d[2] or None
+
+        # Translate for python3
+        return {'sysctl_settings':
+                [(k, v) for k, v in six.iteritems(ctxt['sysctl'])]}
+
+
+class SysctlConf(TemplatedFile):
+    """An audit check for sysctl settings."""
+    def __init__(self):
+        self.conffile = '/etc/sysctl.d/99-juju-hardening.conf'
+        super(SysctlConf, self).__init__(self.conffile,
+                                         SysCtlHardeningContext(),
+                                         template_dir=TEMPLATES_DIR,
+                                         user='root', group='root',
+                                         mode=0o0440)
+
+    def post_write(self):
+        try:
+            subprocess.check_call(['sysctl', '-p', self.conffile])
+        except subprocess.CalledProcessError as e:
+            # NOTE: on some systems
if sysctl cannot apply all settings it + # will return non-zero as well. + log("sysctl command returned an error (maybe some " + "keys could not be set) - %s" % (e), + level=WARNING) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/host/templates/10.hardcore.conf b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/host/templates/10.hardcore.conf new file mode 100644 index 0000000000000000000000000000000000000000..0014191fc8152fd9147b3fb5446987e6e62f2d77 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/host/templates/10.hardcore.conf @@ -0,0 +1,8 @@ +############################################################################### +# WARNING: This configuration file is maintained by Juju. Local changes may +# be overwritten. +############################################################################### +{% if disable_core_dump -%} +# Prevent core dumps for all users. These are usually only needed by developers and may contain sensitive information. +* hard core 0 +{% endif %} \ No newline at end of file diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/host/templates/99-hardening.sh b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/host/templates/99-hardening.sh new file mode 100644 index 0000000000000000000000000000000000000000..616cef46f492f682aca28c71a6e20176870a36f2 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/host/templates/99-hardening.sh @@ -0,0 +1,5 @@ +TMOUT={{ tmout }} +readonly TMOUT +export TMOUT + +readonly HISTFILE diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/host/templates/99-juju-hardening.conf b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/host/templates/99-juju-hardening.conf new file mode 100644 index 0000000000000000000000000000000000000000..101f1e1d709c268890553957f30c93259681ce59 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/host/templates/99-juju-hardening.conf @@ -0,0 +1,7 @@ +############################################################################### +# WARNING: This configuration file is maintained by Juju. Local changes may +# be overwritten. +############################################################################### +{% for key, value in sysctl_settings -%} +{{ key }}={{ value }} +{% endfor -%} diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/host/templates/login.defs b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/host/templates/login.defs new file mode 100644 index 0000000000000000000000000000000000000000..db137d6dbb7a3a850294407199225392a880cfc2 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/host/templates/login.defs @@ -0,0 +1,349 @@ +############################################################################### +# WARNING: This configuration file is maintained by Juju. Local changes may +# be overwritten. +############################################################################### +# +# /etc/login.defs - Configuration control definitions for the login package. +# +# Three items must be defined: MAIL_DIR, ENV_SUPATH, and ENV_PATH. 
+# If unspecified, some arbitrary (and possibly incorrect) value will +# be assumed. All other items are optional - if not specified then +# the described action or option will be inhibited. +# +# Comment lines (lines beginning with "#") and blank lines are ignored. +# +# Modified for Linux. --marekm + +# REQUIRED for useradd/userdel/usermod +# Directory where mailboxes reside, _or_ name of file, relative to the +# home directory. If you _do_ define MAIL_DIR and MAIL_FILE, +# MAIL_DIR takes precedence. +# +# Essentially: +# - MAIL_DIR defines the location of users mail spool files +# (for mbox use) by appending the username to MAIL_DIR as defined +# below. +# - MAIL_FILE defines the location of the users mail spool files as the +# fully-qualified filename obtained by prepending the user home +# directory before $MAIL_FILE +# +# NOTE: This is no more used for setting up users MAIL environment variable +# which is, starting from shadow 4.0.12-1 in Debian, entirely the +# job of the pam_mail PAM modules +# See default PAM configuration files provided for +# login, su, etc. +# +# This is a temporary situation: setting these variables will soon +# move to /etc/default/useradd and the variables will then be +# no more supported +MAIL_DIR /var/mail +#MAIL_FILE .mail + +# +# Enable logging and display of /var/log/faillog login failure info. +# This option conflicts with the pam_tally PAM module. +# +FAILLOG_ENAB yes + +# +# Enable display of unknown usernames when login failures are recorded. +# +# WARNING: Unknown usernames may become world readable. +# See #290803 and #298773 for details about how this could become a security +# concern +LOG_UNKFAIL_ENAB no + +# +# Enable logging of successful logins +# +LOG_OK_LOGINS yes + +# +# Enable "syslog" logging of su activity - in addition to sulog file logging. +# SYSLOG_SG_ENAB does the same for newgrp and sg. +# +SYSLOG_SU_ENAB yes +SYSLOG_SG_ENAB yes + +# +# If defined, all su activity is logged to this file. +# +#SULOG_FILE /var/log/sulog + +# +# If defined, file which maps tty line to TERM environment parameter. +# Each line of the file is in a format something like "vt100 tty01". +# +#TTYTYPE_FILE /etc/ttytype + +# +# If defined, login failures will be logged here in a utmp format +# last, when invoked as lastb, will read /var/log/btmp, so... +# +FTMP_FILE /var/log/btmp + +# +# If defined, the command name to display when running "su -". For +# example, if this is defined as "su" then a "ps" will display the +# command is "-su". If not defined, then "ps" would display the +# name of the shell actually being run, e.g. something like "-sh". +# +SU_NAME su + +# +# If defined, file which inhibits all the usual chatter during the login +# sequence. If a full pathname, then hushed mode will be enabled if the +# user's name or shell are found in the file. If not a full pathname, then +# hushed mode will be enabled if the file exists in the user's home directory. +# +HUSHLOGIN_FILE .hushlogin +#HUSHLOGIN_FILE /etc/hushlogins + +# +# *REQUIRED* The default PATH settings, for superuser and normal users. +# +# (they are minimal, add the rest in the shell startup files) +ENV_SUPATH PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin +ENV_PATH PATH=/usr/local/bin:/usr/bin:/bin{% if additional_user_paths %}{{ additional_user_paths }}{% endif %} + +# +# Terminal permissions +# +# TTYGROUP Login tty will be assigned this group ownership. +# TTYPERM Login tty will be set to this permission. 
+#
+# If you have a "write" program which is "setgid" to a special group
+# which owns the terminals, define TTYGROUP to the group number and
+# TTYPERM to 0620. Otherwise leave TTYGROUP commented out and assign
+# TTYPERM to either 622 or 600.
+#
+# In Debian /usr/bin/bsd-write or similar programs are setgid tty
+# However, the default and recommended value for TTYPERM is still 0600
+# so that no one can write to anyone else's console or terminal
+
+# Users can still allow other people to write to them by issuing
+# the "mesg y" command.
+
+TTYGROUP tty
+TTYPERM 0600
+
+#
+# Login configuration initializations:
+#
+# ERASECHAR Terminal ERASE character ('\010' = backspace).
+# KILLCHAR Terminal KILL character ('\025' = CTRL/U).
+# UMASK Default "umask" value.
+#
+# The ERASECHAR and KILLCHAR are used only on System V machines.
+#
+# UMASK is the default umask value for pam_umask and is used by
+# useradd and newusers to set the mode of the new home directories.
+# 022 is the "historical" value in Debian for UMASK
+# 027, or even 077, could be considered better for privacy
+# There is no One True Answer here: each sysadmin must make up his/her
+# mind.
+#
+# If USERGROUPS_ENAB is set to "yes", that will modify this UMASK default value
+# for private user groups, i.e. the uid is the same as gid, and username is
+# the same as the primary group name: for these, the user permissions will be
+# used as group permissions, e.g. 022 will become 002.
+#
+# Prefix these values with "0" to get octal, "0x" to get hexadecimal.
+#
+ERASECHAR 0177
+KILLCHAR 025
+UMASK {{ umask }}
+
+# Enable setting of the umask group bits to be the same as owner bits
+# (examples: 022 -> 002, 077 -> 007) for non-root users, if the uid is the
+# same as gid, and username is the same as the primary group name.
+# If set to yes, userdel will remove the user's group if it contains no more
+# members, and useradd will create by default a group with the name of the
+# user.
+USERGROUPS_ENAB yes
+
+#
+# Password aging controls:
+#
+# PASS_MAX_DAYS Maximum number of days a password may be used.
+# PASS_MIN_DAYS Minimum number of days allowed between password changes.
+# PASS_WARN_AGE Number of days warning given before a password expires.
+#
+PASS_MAX_DAYS {{ pwd_max_age }}
+PASS_MIN_DAYS {{ pwd_min_age }}
+PASS_WARN_AGE 7
+
+#
+# Min/max values for automatic uid selection in useradd
+#
+UID_MIN {{ uid_min }}
+UID_MAX 60000
+# System accounts
+SYS_UID_MIN {{ sys_uid_min }}
+SYS_UID_MAX {{ sys_uid_max }}
+
+# Min/max values for automatic gid selection in groupadd
+GID_MIN {{ gid_min }}
+GID_MAX 60000
+# System accounts
+SYS_GID_MIN {{ sys_gid_min }}
+SYS_GID_MAX {{ sys_gid_max }}
+
+#
+# Max number of login retries if password is bad. This will most likely be
+# overridden by PAM, since the default pam_unix module has its own built-in
+# limit of 3 retries. However, this is a safe fallback in case you are using
+# an authentication module that does not enforce PAM_MAXTRIES.
+#
+LOGIN_RETRIES {{ login_retries }}
+
+#
+# Max time in seconds for login
+#
+LOGIN_TIMEOUT {{ login_timeout }}
+
+#
+# Which fields may be changed by regular users using chfn - use
+# any combination of letters "frwh" (full name, room number, work
+# phone, home phone). If not defined, no changes are allowed.
+# For backward compatibility, "yes" = "rwh" and "no" = "frwh".
+#
+{% if chfn_restrict %}
+CHFN_RESTRICT {{ chfn_restrict }}
+{% endif %}
+
+#
+# Should login be allowed if we can't cd to the home directory?
+# Default is no.
+#
+DEFAULT_HOME {% if allow_login_without_home %} yes {% else %} no {% endif %}
+
+#
+# If defined, this command is run when removing a user.
+# It should remove any at/cron/print jobs etc. owned by
+# the user to be removed (passed as the first argument).
+#
+#USERDEL_CMD /usr/sbin/userdel_local
+
+#
+# Instead of the real user shell, the program specified by this parameter
+# will be launched, although its visible name (argv[0]) will be the shell's.
+# The program may do whatever it wants (logging, additional authentication,
+# banner, ...) before running the actual shell.
+#
+# FAKE_SHELL /bin/fakeshell
+
+#
+# If defined, either full pathname of a file containing device names or
+# a ":" delimited list of device names. Root logins will be allowed only
+# upon these devices.
+#
+# This variable is used by login and su.
+#
+#CONSOLE /etc/consoles
+#CONSOLE console:tty01:tty02:tty03:tty04
+
+#
+# List of groups to add to the user's supplementary group set
+# when logging in on the console (as determined by the CONSOLE
+# setting). Default is none.
+#
+# Use with caution - it is possible for users to gain permanent
+# access to these groups, even when not logged in on the console.
+# How to do it is left as an exercise for the reader...
+#
+# This variable is used by login and su.
+#
+#CONSOLE_GROUPS floppy:audio:cdrom
+
+#
+# If set to "yes", new passwords will be encrypted using the MD5-based
+# algorithm compatible with the one used by recent releases of FreeBSD.
+# It supports passwords of unlimited length and longer salt strings.
+# Set to "no" if you need to copy encrypted passwords to other systems
+# which don't understand the new algorithm. Default is "no".
+#
+# This variable is deprecated. You should use ENCRYPT_METHOD.
+#
+MD5_CRYPT_ENAB no
+
+#
+# If set to MD5, the MD5-based algorithm will be used for encrypting passwords
+# If set to SHA256, the SHA256-based algorithm will be used for encrypting passwords
+# If set to SHA512, the SHA512-based algorithm will be used for encrypting passwords
+# If set to DES, the DES-based algorithm will be used for encrypting passwords (default)
+# Overrides the MD5_CRYPT_ENAB option
+#
+# Note: It is recommended to use a value consistent with
+# the PAM modules configuration.
+#
+ENCRYPT_METHOD SHA512
+
+#
+# Only used if ENCRYPT_METHOD is set to SHA256 or SHA512.
+#
+# Define the number of SHA rounds.
+# With a lot of rounds, it is more difficult to brute-force the password.
+# But note also that more CPU resources will be needed to authenticate
+# users.
+#
+# If not specified, the libc will choose the default number of rounds (5000).
+# The values must be inside the 1000-999999999 range.
+# If only one of the MIN or MAX values is set, then this value will be used.
+# If MIN > MAX, the highest value will be used.
+#
+# SHA_CRYPT_MIN_ROUNDS 5000
+# SHA_CRYPT_MAX_ROUNDS 5000
+
+################# OBSOLETED BY PAM ##############
+# #
+# These options are now handled by PAM. Please #
+# edit the appropriate file in /etc/pam.d/ to #
+# enable the equivalents of them.
+# +############### + +#MOTD_FILE +#DIALUPS_CHECK_ENAB +#LASTLOG_ENAB +#MAIL_CHECK_ENAB +#OBSCURE_CHECKS_ENAB +#PORTTIME_CHECKS_ENAB +#SU_WHEEL_ONLY +#CRACKLIB_DICTPATH +#PASS_CHANGE_TRIES +#PASS_ALWAYS_WARN +#ENVIRON_FILE +#NOLOGINS_FILE +#ISSUE_FILE +#PASS_MIN_LEN +#PASS_MAX_LEN +#ULIMIT +#ENV_HZ +#CHFN_AUTH +#CHSH_AUTH +#FAIL_DELAY + +################# OBSOLETED ####################### +# # +# These options are no more handled by shadow. # +# # +# Shadow utilities will display a warning if they # +# still appear. # +# # +################################################### + +# CLOSE_SESSIONS +# LOGIN_STRING +# NO_PASSWORD_CONSOLE +# QMAIL_DIR + + + diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/host/templates/modules b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/host/templates/modules new file mode 100644 index 0000000000000000000000000000000000000000..ef0354ee35fa363b303bb22c6ed0d2d1196aed52 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/host/templates/modules @@ -0,0 +1,117 @@ +############################################################################### +# WARNING: This configuration file is maintained by Juju. Local changes may +# be overwritten. +############################################################################### +# /etc/modules: kernel modules to load at boot time. +# +# This file contains the names of kernel modules that should be loaded +# at boot time, one per line. Lines beginning with "#" are ignored. +# Parameters can be specified after the module name. + +# Arch +# ---- +# +# Modules for certains builds, contains support modules and some CPU-specific optimizations. + +{% if arch == "x86_64" -%} +# Optimize for x86_64 cryptographic features +twofish-x86_64-3way +twofish-x86_64 +aes-x86_64 +salsa20-x86_64 +blowfish-x86_64 +{% endif -%} + +{% if cpuVendor == "intel" -%} +# Intel-specific optimizations +ghash-clmulni-intel +aesni-intel +kvm-intel +{% endif -%} + +{% if cpuVendor == "amd" -%} +# AMD-specific optimizations +kvm-amd +{% endif -%} + +kvm + + +# Crypto +# ------ + +# Some core modules which comprise strong cryptography. +blowfish_common +blowfish_generic +ctr +cts +lrw +lzo +rmd160 +rmd256 +rmd320 +serpent +sha512_generic +twofish_common +twofish_generic +xts +zlib + + +# Drivers +# ------- + +# Basics +lp +rtc +loop + +# Filesystems +ext2 +btrfs + +{% if desktop_enable -%} +# Desktop +psmouse +snd +snd_ac97_codec +snd_intel8x0 +snd_page_alloc +snd_pcm +snd_timer +soundcore +usbhid +{% endif -%} + +# Lib +# --- +xz + + +# Net +# --- + +# All packets needed for netfilter rules (ie iptables, ebtables). 
+ip_tables +x_tables +iptable_filter +iptable_nat + +# Targets +ipt_LOG +ipt_REJECT + +# Modules +xt_connlimit +xt_tcpudp +xt_recent +xt_limit +xt_conntrack +nf_conntrack +nf_conntrack_ipv4 +nf_defrag_ipv4 +xt_state +nf_nat + +# Addons +xt_pknock \ No newline at end of file diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/host/templates/passwdqc.conf b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/host/templates/passwdqc.conf new file mode 100644 index 0000000000000000000000000000000000000000..f98d14e57428c106692e0f57e8b381f2b0a12c44 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/host/templates/passwdqc.conf @@ -0,0 +1,11 @@ +############################################################################### +# WARNING: This configuration file is maintained by Juju. Local changes may +# be overwritten. +############################################################################### +Name: passwdqc password strength enforcement +Default: yes +Priority: 1024 +Conflicts: cracklib +Password-Type: Primary +Password: + requisite pam_passwdqc.so {{ auth_pam_passwdqc_options }} diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/host/templates/pinerolo_profile.sh b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/host/templates/pinerolo_profile.sh new file mode 100644 index 0000000000000000000000000000000000000000..fd2de791b96fbb8889811daf7340d1f2ca2ab3a6 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/host/templates/pinerolo_profile.sh @@ -0,0 +1,8 @@ +############################################################################### +# WARNING: This configuration file is maintained by Juju. Local changes may +# be overwritten. +############################################################################### +# Disable core dumps via soft limits for all users. Compliance to this setting +# is voluntary and can be modified by users up to a hard limit. This setting is +# a sane default. +ulimit -S -c 0 > /dev/null 2>&1 diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/host/templates/securetty b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/host/templates/securetty new file mode 100644 index 0000000000000000000000000000000000000000..15b18d4e2f45747845d0b65c06997f154ef674a4 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/host/templates/securetty @@ -0,0 +1,11 @@ +############################################################################### +# WARNING: This configuration file is maintained by Juju. Local changes may +# be overwritten. 
+############################################################################### +# A list of TTYs, from which root can log in +# see `man securetty` for reference +{% if ttys -%} +{% for tty in ttys -%} +{{ tty }} +{% endfor -%} +{% endif -%} diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/host/templates/tally2 b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/host/templates/tally2 new file mode 100644 index 0000000000000000000000000000000000000000..d9620299c55e51abbee1017a227c217cd4a9fd33 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/host/templates/tally2 @@ -0,0 +1,14 @@ +############################################################################### +# WARNING: This configuration file is maintained by Juju. Local changes may +# be overwritten. +############################################################################### +Name: tally2 lockout after failed attempts enforcement +Default: yes +Priority: 1024 +Conflicts: cracklib +Auth-Type: Primary +Auth-Initial: + required pam_tally2.so deny={{ auth_retries }} onerr=fail unlock_time={{ auth_lockout_time }} +Account-Type: Primary +Account-Initial: + required pam_tally2.so diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/mysql/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/mysql/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..58bebd846bd6fa648cfab6ab1056ad10d8415453 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/mysql/__init__.py @@ -0,0 +1,17 @@ +# Copyright 2016 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +from os import path + +TEMPLATES_DIR = path.join(path.dirname(__file__), 'templates') diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/mysql/checks/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/mysql/checks/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..1990d8513bbeef067a8d9a2168e1952efb2961dc --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/mysql/checks/__init__.py @@ -0,0 +1,29 @@ +# Copyright 2016 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
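+
+"""Checks for hardening a MySQL installation.
+
+Typically driven via run_mysql_checks() below; a minimal sketch:
+
+    from charmhelpers.contrib.hardening.mysql.checks import run_mysql_checks
+
+    run_mysql_checks()
+"""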
+ +from charmhelpers.core.hookenv import ( + log, + DEBUG, +) +from charmhelpers.contrib.hardening.mysql.checks import config + + +def run_mysql_checks(): + log("Starting MySQL hardening checks.", level=DEBUG) + checks = config.get_audits() + for check in checks: + log("Running '%s' check" % (check.__class__.__name__), level=DEBUG) + check.ensure_compliance() + + log("MySQL hardening checks complete.", level=DEBUG) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/mysql/checks/config.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/mysql/checks/config.py new file mode 100644 index 0000000000000000000000000000000000000000..a79f33b74a5c2972a82f0b4d8de8d1073dc293ed --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/mysql/checks/config.py @@ -0,0 +1,87 @@ +# Copyright 2016 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import six +import subprocess + +from charmhelpers.core.hookenv import ( + log, + WARNING, +) +from charmhelpers.contrib.hardening.audits.file import ( + FilePermissionAudit, + DirectoryPermissionAudit, + TemplatedFile, +) +from charmhelpers.contrib.hardening.mysql import TEMPLATES_DIR +from charmhelpers.contrib.hardening import utils + + +def get_audits(): + """Get MySQL hardening config audits. + + :returns: dictionary of audits + """ + if subprocess.call(['which', 'mysql'], stdout=subprocess.PIPE) != 0: + log("MySQL does not appear to be installed on this node - " + "skipping mysql hardening", level=WARNING) + return [] + + settings = utils.get_settings('mysql') + hardening_settings = settings['hardening'] + my_cnf = hardening_settings['mysql-conf'] + + audits = [ + FilePermissionAudit(paths=[my_cnf], user='root', + group='root', mode=0o0600), + + TemplatedFile(hardening_settings['hardening-conf'], + MySQLConfContext(), + TEMPLATES_DIR, + mode=0o0750, + user='mysql', + group='root', + service_actions=[{'service': 'mysql', + 'actions': ['restart']}]), + + # MySQL and Percona charms do not allow configuration of the + # data directory, so use the default. + DirectoryPermissionAudit('/var/lib/mysql', + user='mysql', + group='mysql', + recursive=False, + mode=0o755), + + DirectoryPermissionAudit('/etc/mysql', + user='root', + group='root', + recursive=False, + mode=0o700), + ] + + return audits + + +class MySQLConfContext(object): + """Defines the set of key/value pairs to set in a mysql config file. + + This context, when called, will return a dictionary containing the + key/value pairs of setting to specify in the + /etc/mysql/conf.d/hardening.cnf file. 
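+
+    Values are rendered by templates/hardening.cnf: a value of 'True' emits
+    the bare option name, while 'None' values are skipped entirely.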
+ """ + def __call__(self): + settings = utils.get_settings('mysql') + # Translate for python3 + return {'mysql_settings': + [(k, v) for k, v in six.iteritems(settings['security'])]} diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/mysql/templates/hardening.cnf b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/mysql/templates/hardening.cnf new file mode 100644 index 0000000000000000000000000000000000000000..8242586cd66360b7e6ae33f13018363b95cd4ea9 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/mysql/templates/hardening.cnf @@ -0,0 +1,12 @@ +############################################################################### +# WARNING: This configuration file is maintained by Juju. Local changes may +# be overwritten. +############################################################################### +[mysqld] +{% for setting, value in mysql_settings -%} +{% if value == 'True' -%} +{{ setting }} +{% elif value != 'None' and value != None -%} +{{ setting }} = {{ value }} +{% endif -%} +{% endfor -%} diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/ssh/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/ssh/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..58bebd846bd6fa648cfab6ab1056ad10d8415453 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/ssh/__init__.py @@ -0,0 +1,17 @@ +# Copyright 2016 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +from os import path + +TEMPLATES_DIR = path.join(path.dirname(__file__), 'templates') diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/ssh/checks/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/ssh/checks/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..edaf484b39f8c7353cebb2f4b68944c6493ba7b3 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/ssh/checks/__init__.py @@ -0,0 +1,29 @@ +# Copyright 2016 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +from charmhelpers.core.hookenv import ( + log, + DEBUG, +) +from charmhelpers.contrib.hardening.ssh.checks import config + + +def run_ssh_checks(): + log("Starting SSH hardening checks.", level=DEBUG) + checks = config.get_audits() + for check in checks: + log("Running '%s' check" % (check.__class__.__name__), level=DEBUG) + check.ensure_compliance() + + log("SSH hardening checks complete.", level=DEBUG) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/ssh/checks/config.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/ssh/checks/config.py new file mode 100644 index 0000000000000000000000000000000000000000..41bed2d1e7b031182edcf62710876e4073dfbc6e --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/ssh/checks/config.py @@ -0,0 +1,435 @@ +# Copyright 2016 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import os + +from charmhelpers.contrib.network.ip import ( + get_address_in_network, + get_iface_addr, + is_ip, +) +from charmhelpers.core.hookenv import ( + log, + DEBUG, +) +from charmhelpers.fetch import ( + apt_install, + apt_update, +) +from charmhelpers.core.host import ( + lsb_release, + CompareHostReleases, +) +from charmhelpers.contrib.hardening.audits.file import ( + TemplatedFile, + FileContentAudit, +) +from charmhelpers.contrib.hardening.ssh import TEMPLATES_DIR +from charmhelpers.contrib.hardening import utils + + +def get_audits(): + """Get SSH hardening config audits. 
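+
+    Builds TemplatedFile audits which render /etc/ssh/ssh_config and
+    /etc/ssh/sshd_config, plus content audits which verify the rendered
+    files.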
+
+    :returns: dictionary of audits
+    """
+    audits = [SSHConfig(), SSHDConfig(), SSHConfigFileContentAudit(),
+              SSHDConfigFileContentAudit()]
+    return audits
+
+
+class SSHConfigContext(object):
+
+    type = 'client'
+
+    def get_macs(self, allow_weak_mac):
+        if allow_weak_mac:
+            weak_macs = 'weak'
+        else:
+            weak_macs = 'default'
+
+        default = 'hmac-sha2-512,hmac-sha2-256,hmac-ripemd160'
+        macs = {'default': default,
+                'weak': default + ',hmac-sha1'}
+
+        default = ('hmac-sha2-512-etm@openssh.com,'
+                   'hmac-sha2-256-etm@openssh.com,'
+                   'hmac-ripemd160-etm@openssh.com,umac-128-etm@openssh.com,'
+                   'hmac-sha2-512,hmac-sha2-256,hmac-ripemd160')
+        macs_66 = {'default': default,
+                   'weak': default + ',hmac-sha1'}
+
+        # Use newer MACs on Ubuntu Trusty and above
+        _release = lsb_release()['DISTRIB_CODENAME'].lower()
+        if CompareHostReleases(_release) >= 'trusty':
+            log("Detected Ubuntu 14.04 or newer, using new macs", level=DEBUG)
+            macs = macs_66
+
+        return macs[weak_macs]
+
+    def get_kexs(self, allow_weak_kex):
+        if allow_weak_kex:
+            weak_kex = 'weak'
+        else:
+            weak_kex = 'default'
+
+        default = 'diffie-hellman-group-exchange-sha256'
+        weak = (default + ',diffie-hellman-group14-sha1,'
+                'diffie-hellman-group-exchange-sha1,'
+                'diffie-hellman-group1-sha1')
+        kex = {'default': default,
+               'weak': weak}
+
+        default = ('curve25519-sha256@libssh.org,'
+                   'diffie-hellman-group-exchange-sha256')
+        weak = (default + ',diffie-hellman-group14-sha1,'
+                'diffie-hellman-group-exchange-sha1,'
+                'diffie-hellman-group1-sha1')
+        kex_66 = {'default': default,
+                  'weak': weak}
+
+        # Use newer kex on Ubuntu Trusty and above
+        _release = lsb_release()['DISTRIB_CODENAME'].lower()
+        if CompareHostReleases(_release) >= 'trusty':
+            log('Detected Ubuntu 14.04 or newer, using new key exchange '
+                'algorithms', level=DEBUG)
+            kex = kex_66
+
+        return kex[weak_kex]
+
+    def get_ciphers(self, cbc_required):
+        if cbc_required:
+            weak_ciphers = 'weak'
+        else:
+            weak_ciphers = 'default'
+
+        default = 'aes256-ctr,aes192-ctr,aes128-ctr'
+        cipher = {'default': default,
+                  'weak': default + ',aes256-cbc,aes192-cbc,aes128-cbc'}
+
+        default = ('chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,'
+                   'aes128-gcm@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr')
+        ciphers_66 = {'default': default,
+                      'weak': default + ',aes256-cbc,aes192-cbc,aes128-cbc'}
+
+        # Use newer ciphers on Ubuntu Trusty and above
+        _release = lsb_release()['DISTRIB_CODENAME'].lower()
+        if CompareHostReleases(_release) >= 'trusty':
+            log('Detected Ubuntu 14.04 or newer, using new ciphers',
+                level=DEBUG)
+            cipher = ciphers_66
+
+        return cipher[weak_ciphers]
+
+    def get_listening(self, listen=['0.0.0.0']):
+        """Returns a list of addresses SSH can listen on
+
+        Turns input into a sensible list of IPs SSH can listen on. Input
+        must be a Python list of interface names, IPs and/or CIDRs.
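+        For example, listen=['eth0', '10.0.0.0/24', '1.2.3.4'] resolves to
+        one IP per entry where possible, falling back to ['0.0.0.0'] if
+        nothing could be resolved.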
+
+        :param listen: list of IPs, CIDRs, interface names
+
+        :returns: list of IPs available on the host
+        """
+        if listen == ['0.0.0.0']:
+            return listen
+
+        value = []
+        for network in listen:
+            try:
+                ip = get_address_in_network(network=network, fatal=True)
+            except ValueError:
+                if is_ip(network):
+                    ip = network
+                else:
+                    try:
+                        ip = get_iface_addr(iface=network, fatal=False)[0]
+                    except IndexError:
+                        continue
+            value.append(ip)
+        if value == []:
+            return ['0.0.0.0']
+        return value
+
+    def __call__(self):
+        settings = utils.get_settings('ssh')
+        if settings['common']['network_ipv6_enable']:
+            addr_family = 'any'
+        else:
+            addr_family = 'inet'
+
+        ctxt = {
+            'addr_family': addr_family,
+            'remote_hosts': settings['common']['remote_hosts'],
+            'password_auth_allowed':
+            settings['client']['password_authentication'],
+            'ports': settings['common']['ports'],
+            'ciphers': self.get_ciphers(settings['client']['cbc_required']),
+            'macs': self.get_macs(settings['client']['weak_hmac']),
+            'kexs': self.get_kexs(settings['client']['weak_kex']),
+            'roaming': settings['client']['roaming'],
+        }
+        return ctxt
+
+
+class SSHConfig(TemplatedFile):
+    def __init__(self):
+        path = '/etc/ssh/ssh_config'
+        super(SSHConfig, self).__init__(path=path,
+                                        template_dir=TEMPLATES_DIR,
+                                        context=SSHConfigContext(),
+                                        user='root',
+                                        group='root',
+                                        mode=0o0644)
+
+    def pre_write(self):
+        settings = utils.get_settings('ssh')
+        apt_update(fatal=True)
+        apt_install(settings['client']['package'])
+        if not os.path.exists('/etc/ssh'):
+            os.makedirs('/etc/ssh')
+        # NOTE: don't recurse
+        utils.ensure_permissions('/etc/ssh', 'root', 'root', 0o0755,
+                                 maxdepth=0)
+
+    def post_write(self):
+        # NOTE: don't recurse
+        utils.ensure_permissions('/etc/ssh', 'root', 'root', 0o0755,
+                                 maxdepth=0)
+
+
+class SSHDConfigContext(SSHConfigContext):
+
+    type = 'server'
+
+    def __call__(self):
+        settings = utils.get_settings('ssh')
+        if settings['common']['network_ipv6_enable']:
+            addr_family = 'any'
+        else:
+            addr_family = 'inet'
+
+        ctxt = {
+            'ssh_ip': self.get_listening(settings['server']['listen_to']),
+            'password_auth_allowed':
+            settings['server']['password_authentication'],
+            'ports': settings['common']['ports'],
+            'addr_family': addr_family,
+            'ciphers': self.get_ciphers(settings['server']['cbc_required']),
+            'macs': self.get_macs(settings['server']['weak_hmac']),
+            'kexs': self.get_kexs(settings['server']['weak_kex']),
+            'host_key_files': settings['server']['host_key_files'],
+            'allow_root_with_key': settings['server']['allow_root_with_key'],
+            'password_authentication':
+            settings['server']['password_authentication'],
+            'use_priv_sep': settings['server']['use_privilege_separation'],
+            'use_pam': settings['server']['use_pam'],
+            'allow_x11_forwarding': settings['server']['allow_x11_forwarding'],
+            'print_motd': settings['server']['print_motd'],
+            'print_last_log': settings['server']['print_last_log'],
+            'client_alive_interval':
+            settings['server']['alive_interval'],
+            'client_alive_count': settings['server']['alive_count'],
+            'allow_tcp_forwarding': settings['server']['allow_tcp_forwarding'],
+            'allow_agent_forwarding':
+            settings['server']['allow_agent_forwarding'],
+            'deny_users': settings['server']['deny_users'],
+            'allow_users': settings['server']['allow_users'],
+            'deny_groups': settings['server']['deny_groups'],
+            'allow_groups': settings['server']['allow_groups'],
+            'use_dns': settings['server']['use_dns'],
+            'sftp_enable': settings['server']['sftp_enable'],
+            'sftp_group': settings['server']['sftp_group'],
+            'sftp_chroot':
settings['server']['sftp_chroot'],
+            'max_auth_tries': settings['server']['max_auth_tries'],
+            'max_sessions': settings['server']['max_sessions'],
+        }
+        return ctxt
+
+
+class SSHDConfig(TemplatedFile):
+    def __init__(self):
+        path = '/etc/ssh/sshd_config'
+        super(SSHDConfig, self).__init__(path=path,
+                                         template_dir=TEMPLATES_DIR,
+                                         context=SSHDConfigContext(),
+                                         user='root',
+                                         group='root',
+                                         mode=0o0600,
+                                         service_actions=[{'service': 'ssh',
+                                                           'actions':
+                                                           ['restart']}])
+
+    def pre_write(self):
+        settings = utils.get_settings('ssh')
+        apt_update(fatal=True)
+        apt_install(settings['server']['package'])
+        if not os.path.exists('/etc/ssh'):
+            os.makedirs('/etc/ssh')
+        # NOTE: don't recurse
+        utils.ensure_permissions('/etc/ssh', 'root', 'root', 0o0755,
+                                 maxdepth=0)
+
+    def post_write(self):
+        # NOTE: don't recurse
+        utils.ensure_permissions('/etc/ssh', 'root', 'root', 0o0755,
+                                 maxdepth=0)
+
+
+class SSHConfigFileContentAudit(FileContentAudit):
+    def __init__(self):
+        self.path = '/etc/ssh/ssh_config'
+        super(SSHConfigFileContentAudit, self).__init__(self.path, {})
+
+    def is_compliant(self, *args, **kwargs):
+        self.pass_cases = []
+        self.fail_cases = []
+        settings = utils.get_settings('ssh')
+
+        _release = lsb_release()['DISTRIB_CODENAME'].lower()
+        if CompareHostReleases(_release) >= 'trusty':
+            if not settings['server']['weak_hmac']:
+                self.pass_cases.append(r'^MACs.+,hmac-ripemd160$')
+            else:
+                self.pass_cases.append(r'^MACs.+,hmac-sha1$')
+
+            if settings['server']['weak_kex']:
+                self.fail_cases.append(r'^KexAlgorithms\sdiffie-hellman-group-exchange-sha256[,\s]?')  # noqa
+                self.pass_cases.append(r'^KexAlgorithms\sdiffie-hellman-group14-sha1[,\s]?')  # noqa
+                self.pass_cases.append(r'^KexAlgorithms\sdiffie-hellman-group-exchange-sha1[,\s]?')  # noqa
+                self.pass_cases.append(r'^KexAlgorithms\sdiffie-hellman-group1-sha1[,\s]?')  # noqa
+            else:
+                self.pass_cases.append(r'^KexAlgorithms.+,diffie-hellman-group-exchange-sha256$')  # noqa
+                self.fail_cases.append(r'^KexAlgorithms.*diffie-hellman-group14-sha1[,\s]?')  # noqa
+
+            if settings['server']['cbc_required']:
+                self.pass_cases.append(r'^Ciphers\s.*-cbc[,\s]?')
+                self.fail_cases.append(r'^Ciphers\s.*aes128-ctr[,\s]?')
+                self.fail_cases.append(r'^Ciphers\s.*aes192-ctr[,\s]?')
+                self.fail_cases.append(r'^Ciphers\s.*aes256-ctr[,\s]?')
+            else:
+                self.fail_cases.append(r'^Ciphers\s.*-cbc[,\s]?')
+                self.pass_cases.append(r'^Ciphers\schacha20-poly1305@openssh.com,.+')  # noqa
+                self.pass_cases.append(r'^Ciphers\s.*aes128-ctr$')
+                self.pass_cases.append(r'^Ciphers\s.*aes192-ctr[,\s]?')
+                self.pass_cases.append(r'^Ciphers\s.*aes256-ctr[,\s]?')
+        else:
+            if not settings['client']['weak_hmac']:
+                self.fail_cases.append(r'^MACs.+,hmac-sha1$')
+            else:
+                self.pass_cases.append(r'^MACs.+,hmac-sha1$')
+
+            if settings['client']['weak_kex']:
+                self.fail_cases.append(r'^KexAlgorithms\sdiffie-hellman-group-exchange-sha256[,\s]?')  # noqa
+                self.pass_cases.append(r'^KexAlgorithms\sdiffie-hellman-group14-sha1[,\s]?')  # noqa
+                self.pass_cases.append(r'^KexAlgorithms\sdiffie-hellman-group-exchange-sha1[,\s]?')  # noqa
+                self.pass_cases.append(r'^KexAlgorithms\sdiffie-hellman-group1-sha1[,\s]?')  # noqa
+            else:
+                self.pass_cases.append(r'^KexAlgorithms\sdiffie-hellman-group-exchange-sha256$')  # noqa
+                self.fail_cases.append(r'^KexAlgorithms\sdiffie-hellman-group14-sha1[,\s]?')  # noqa
+                self.fail_cases.append(r'^KexAlgorithms\sdiffie-hellman-group-exchange-sha1[,\s]?')  # noqa
+                self.fail_cases.append(r'^KexAlgorithms\sdiffie-hellman-group1-sha1[,\s]?')  # noqa
+
+            if
settings['client']['cbc_required']: + self.pass_cases.append(r'^Ciphers\s.*-cbc[,\s]?') + self.fail_cases.append(r'^Ciphers\s.*aes128-ctr[,\s]?') + self.fail_cases.append(r'^Ciphers\s.*aes192-ctr[,\s]?') + self.fail_cases.append(r'^Ciphers\s.*aes256-ctr[,\s]?') + else: + self.fail_cases.append(r'^Ciphers\s.*-cbc[,\s]?') + self.pass_cases.append(r'^Ciphers\s.*aes128-ctr[,\s]?') + self.pass_cases.append(r'^Ciphers\s.*aes192-ctr[,\s]?') + self.pass_cases.append(r'^Ciphers\s.*aes256-ctr[,\s]?') + + if settings['client']['roaming']: + self.pass_cases.append(r'^UseRoaming yes$') + else: + self.fail_cases.append(r'^UseRoaming yes$') + + return super(SSHConfigFileContentAudit, self).is_compliant(*args, + **kwargs) + + +class SSHDConfigFileContentAudit(FileContentAudit): + def __init__(self): + self.path = '/etc/ssh/sshd_config' + super(SSHDConfigFileContentAudit, self).__init__(self.path, {}) + + def is_compliant(self, *args, **kwargs): + self.pass_cases = [] + self.fail_cases = [] + settings = utils.get_settings('ssh') + + _release = lsb_release()['DISTRIB_CODENAME'].lower() + if CompareHostReleases(_release) >= 'trusty': + if not settings['server']['weak_hmac']: + self.pass_cases.append(r'^MACs.+,hmac-ripemd160$') + else: + self.pass_cases.append(r'^MACs.+,hmac-sha1$') + + if settings['server']['weak_kex']: + self.fail_cases.append(r'^KexAlgorithms\sdiffie-hellman-group-exchange-sha256[,\s]?') # noqa + self.pass_cases.append(r'^KexAlgorithms\sdiffie-hellman-group14-sha1[,\s]?') # noqa + self.pass_cases.append(r'^KexAlgorithms\sdiffie-hellman-group-exchange-sha1[,\s]?') # noqa + self.pass_cases.append(r'^KexAlgorithms\sdiffie-hellman-group1-sha1[,\s]?') # noqa + else: + self.pass_cases.append(r'^KexAlgorithms.+,diffie-hellman-group-exchange-sha256$') # noqa + self.fail_cases.append(r'^KexAlgorithms.*diffie-hellman-group14-sha1[,\s]?') # noqa + + if settings['server']['cbc_required']: + self.pass_cases.append(r'^Ciphers\s.*-cbc[,\s]?') + self.fail_cases.append(r'^Ciphers\s.*aes128-ctr[,\s]?') + self.fail_cases.append(r'^Ciphers\s.*aes192-ctr[,\s]?') + self.fail_cases.append(r'^Ciphers\s.*aes256-ctr[,\s]?') + else: + self.fail_cases.append(r'^Ciphers\s.*-cbc[,\s]?') + self.pass_cases.append(r'^Ciphers\schacha20-poly1305@openssh.com,.+') # noqa + self.pass_cases.append(r'^Ciphers\s.*aes128-ctr$') + self.pass_cases.append(r'^Ciphers\s.*aes192-ctr[,\s]?') + self.pass_cases.append(r'^Ciphers\s.*aes256-ctr[,\s]?') + else: + if not settings['server']['weak_hmac']: + self.pass_cases.append(r'^MACs.+,hmac-ripemd160$') + else: + self.pass_cases.append(r'^MACs.+,hmac-sha1$') + + if settings['server']['weak_kex']: + self.fail_cases.append(r'^KexAlgorithms\sdiffie-hellman-group-exchange-sha256[,\s]?') # noqa + self.pass_cases.append(r'^KexAlgorithms\sdiffie-hellman-group14-sha1[,\s]?') # noqa + self.pass_cases.append(r'^KexAlgorithms\sdiffie-hellman-group-exchange-sha1[,\s]?') # noqa + self.pass_cases.append(r'^KexAlgorithms\sdiffie-hellman-group1-sha1[,\s]?') # noqa + else: + self.pass_cases.append(r'^KexAlgorithms\sdiffie-hellman-group-exchange-sha256$') # noqa + self.fail_cases.append(r'^KexAlgorithms\sdiffie-hellman-group14-sha1[,\s]?') # noqa + self.fail_cases.append(r'^KexAlgorithms\sdiffie-hellman-group-exchange-sha1[,\s]?') # noqa + self.fail_cases.append(r'^KexAlgorithms\sdiffie-hellman-group1-sha1[,\s]?') # noqa + + if settings['server']['cbc_required']: + self.pass_cases.append(r'^Ciphers\s.*-cbc[,\s]?') + self.fail_cases.append(r'^Ciphers\s.*aes128-ctr[,\s]?') + 
self.fail_cases.append(r'^Ciphers\s.*aes192-ctr[,\s]?') + self.fail_cases.append(r'^Ciphers\s.*aes256-ctr[,\s]?') + else: + self.fail_cases.append(r'^Ciphers\s.*-cbc[,\s]?') + self.pass_cases.append(r'^Ciphers\s.*aes128-ctr[,\s]?') + self.pass_cases.append(r'^Ciphers\s.*aes192-ctr[,\s]?') + self.pass_cases.append(r'^Ciphers\s.*aes256-ctr[,\s]?') + + if settings['server']['sftp_enable']: + self.pass_cases.append(r'^Subsystem\ssftp') + else: + self.fail_cases.append(r'^Subsystem\ssftp') + + return super(SSHDConfigFileContentAudit, self).is_compliant(*args, + **kwargs) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/ssh/templates/ssh_config b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/ssh/templates/ssh_config new file mode 100644 index 0000000000000000000000000000000000000000..9742d8e2a32cd5da01a9dcb691a5a1201ed93050 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/ssh/templates/ssh_config @@ -0,0 +1,70 @@ +############################################################################### +# WARNING: This configuration file is maintained by Juju. Local changes may +# be overwritten. +############################################################################### +# This is the ssh client system-wide configuration file. See +# ssh_config(5) for more information. This file provides defaults for +# users, and the values can be changed in per-user configuration files +# or on the command line. + +# Configuration data is parsed as follows: +# 1. command line options +# 2. user-specific file +# 3. system-wide file +# Any configuration value is only changed the first time it is set. +# Thus, host-specific definitions should be at the beginning of the +# configuration file, and defaults at the end. + +# Site-wide defaults for some commonly used options. For a comprehensive +# list of available options, their meanings and defaults, please see the +# ssh_config(5) man page. + +# Restrict the following configuration to be limited to this Host. 
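+# Example rendering (illustrative values): with remote_hosts set to
+# ['10.0.0.1', '10.0.0.2'] the stanza below renders as
+# "Host 10.0.0.1 10.0.0.2".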
+{% if remote_hosts -%} +Host {{ ' '.join(remote_hosts) }} +{% endif %} +ForwardAgent no +ForwardX11 no +ForwardX11Trusted yes +RhostsRSAAuthentication no +RSAAuthentication yes +PasswordAuthentication {{ password_auth_allowed }} +HostbasedAuthentication no +GSSAPIAuthentication no +GSSAPIDelegateCredentials no +GSSAPIKeyExchange no +GSSAPITrustDNS no +BatchMode no +CheckHostIP yes +AddressFamily {{ addr_family }} +ConnectTimeout 0 +StrictHostKeyChecking ask +IdentityFile ~/.ssh/identity +IdentityFile ~/.ssh/id_rsa +IdentityFile ~/.ssh/id_dsa +# The port at the destination should be defined +{% for port in ports -%} +Port {{ port }} +{% endfor %} +Protocol 2 +Cipher 3des +{% if ciphers -%} +Ciphers {{ ciphers }} +{%- endif %} +{% if macs -%} +MACs {{ macs }} +{%- endif %} +{% if kexs -%} +KexAlgorithms {{ kexs }} +{%- endif %} +EscapeChar ~ +Tunnel no +TunnelDevice any:any +PermitLocalCommand no +VisualHostKey no +RekeyLimit 1G 1h +SendEnv LANG LC_* +HashKnownHosts yes +{% if roaming -%} +UseRoaming {{ roaming }} +{% endif %} diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/ssh/templates/sshd_config b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/ssh/templates/sshd_config new file mode 100644 index 0000000000000000000000000000000000000000..5f87298a8119bcab1d2578bcaefd068e5af167c4 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/ssh/templates/sshd_config @@ -0,0 +1,159 @@ +############################################################################### +# WARNING: This configuration file is maintained by Juju. Local changes may +# be overwritten. +############################################################################### +# Package generated configuration file +# See the sshd_config(5) manpage for details + +# What ports, IPs and protocols we listen for +{% for port in ports -%} +Port {{ port }} +{% endfor -%} +AddressFamily {{ addr_family }} +# Use these options to restrict which interfaces/protocols sshd will bind to +{% if ssh_ip -%} +{% for ip in ssh_ip -%} +ListenAddress {{ ip }} +{% endfor %} +{%- else -%} +ListenAddress :: +ListenAddress 0.0.0.0 +{% endif -%} +Protocol 2 +{% if ciphers -%} +Ciphers {{ ciphers }} +{% endif -%} +{% if macs -%} +MACs {{ macs }} +{% endif -%} +{% if kexs -%} +KexAlgorithms {{ kexs }} +{% endif -%} +# HostKeys for protocol version 2 +{% for keyfile in host_key_files -%} +HostKey {{ keyfile }} +{% endfor -%} + +# Privilege Separation is turned on for security +{% if use_priv_sep -%} +UsePrivilegeSeparation {{ use_priv_sep }} +{% endif -%} + +# Lifetime and size of ephemeral version 1 server key +KeyRegenerationInterval 3600 +ServerKeyBits 1024 + +# Logging +SyslogFacility AUTH +LogLevel VERBOSE + +# Authentication: +LoginGraceTime 30s +{% if allow_root_with_key -%} +PermitRootLogin without-password +{% else -%} +PermitRootLogin no +{% endif %} +PermitTunnel no +PermitUserEnvironment no +StrictModes yes + +RSAAuthentication yes +PubkeyAuthentication yes +AuthorizedKeysFile %h/.ssh/authorized_keys + +# Don't read the user's ~/.rhosts and ~/.shosts files +IgnoreRhosts yes +# For this to work you will also need host keys in /etc/ssh_known_hosts +RhostsRSAAuthentication no +# similar for protocol version 2 +HostbasedAuthentication no +# Uncomment if you don't trust ~/.ssh/known_hosts for RhostsRSAAuthentication +IgnoreUserKnownHosts yes + +# To enable empty passwords, change to yes (NOT RECOMMENDED) 
+PermitEmptyPasswords no
+
+# Change to yes to enable challenge-response passwords (beware issues with
+# some PAM modules and threads)
+ChallengeResponseAuthentication no
+
+# Change to no to disable tunnelled clear text passwords
+PasswordAuthentication {{ password_authentication }}
+
+# Kerberos options
+KerberosAuthentication no
+KerberosGetAFSToken no
+KerberosOrLocalPasswd no
+KerberosTicketCleanup yes
+
+# GSSAPI options
+GSSAPIAuthentication no
+GSSAPICleanupCredentials yes
+
+X11Forwarding {{ allow_x11_forwarding }}
+X11DisplayOffset 10
+X11UseLocalhost yes
+GatewayPorts no
+PrintMotd {{ print_motd }}
+PrintLastLog {{ print_last_log }}
+TCPKeepAlive no
+UseLogin no
+
+ClientAliveInterval {{ client_alive_interval }}
+ClientAliveCountMax {{ client_alive_count }}
+AllowTcpForwarding {{ allow_tcp_forwarding }}
+AllowAgentForwarding {{ allow_agent_forwarding }}
+
+MaxStartups 10:30:100
+#Banner /etc/issue.net
+
+# Allow client to pass locale environment variables
+AcceptEnv LANG LC_*
+
+# Set this to 'yes' to enable PAM authentication, account processing,
+# and session processing. If this is enabled, PAM authentication will
+# be allowed through the ChallengeResponseAuthentication and
+# PasswordAuthentication. Depending on your PAM configuration,
+# PAM authentication via ChallengeResponseAuthentication may bypass
+# the setting of "PermitRootLogin without-password".
+# If you just want the PAM account and session checks to run without
+# PAM authentication, then enable this but set PasswordAuthentication
+# and ChallengeResponseAuthentication to 'no'.
+UsePAM {{ use_pam }}
+
+{% if deny_users -%}
+DenyUsers {{ deny_users }}
+{% endif -%}
+{% if allow_users -%}
+AllowUsers {{ allow_users }}
+{% endif -%}
+{% if deny_groups -%}
+DenyGroups {{ deny_groups }}
+{% endif -%}
+{% if allow_groups -%}
+AllowGroups {{ allow_groups }}
+{% endif -%}
+UseDNS {{ use_dns }}
+MaxAuthTries {{ max_auth_tries }}
+MaxSessions {{ max_sessions }}
+
+{% if sftp_enable -%}
+# Configuration, in case SFTP is used
+## override default of no subsystems
+## Subsystem sftp /opt/app/openssh5/libexec/sftp-server
+Subsystem sftp internal-sftp -l VERBOSE
+
+## These lines must appear at the *end* of sshd_config
+Match Group {{ sftp_group }}
+ForceCommand internal-sftp -l VERBOSE
+ChrootDirectory {{ sftp_chroot }}
+{% else -%}
+# Configuration, in case SFTP is used
+## override default of no subsystems
+## Subsystem sftp /opt/app/openssh5/libexec/sftp-server
+## These lines must appear at the *end* of sshd_config
+Match Group sftponly
+ForceCommand internal-sftp -l VERBOSE
+ChrootDirectory /sftpchroot/home/%u
+{% endif %}
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/templating.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/templating.py
new file mode 100644
index 0000000000000000000000000000000000000000..5b6765f7edeee4bed739fd354c6f7bdf0a8c952e
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/templating.py
@@ -0,0 +1,73 @@
+# Copyright 2016 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import os +import six + +from charmhelpers.core.hookenv import ( + log, + DEBUG, + WARNING, +) + +try: + from jinja2 import FileSystemLoader, Environment +except ImportError: + from charmhelpers.fetch import apt_install + from charmhelpers.fetch import apt_update + apt_update(fatal=True) + if six.PY2: + apt_install('python-jinja2', fatal=True) + else: + apt_install('python3-jinja2', fatal=True) + from jinja2 import FileSystemLoader, Environment + + +# NOTE: function separated from main rendering code to facilitate easier +# mocking in unit tests. +def write(path, data): + with open(path, 'wb') as out: + out.write(data) + + +def get_template_path(template_dir, path): + """Returns the template file which would be used to render the path. + + The path to the template file is returned. + :param template_dir: the directory the templates are located in + :param path: the file path to be written to. + :returns: path to the template file + """ + return os.path.join(template_dir, os.path.basename(path)) + + +def render_and_write(template_dir, path, context): + """Renders the specified template into the file. + + :param template_dir: the directory to load the template from + :param path: the path to write the templated contents to + :param context: the parameters to pass to the rendering engine + """ + env = Environment(loader=FileSystemLoader(template_dir)) + template_file = os.path.basename(path) + template = env.get_template(template_file) + log('Rendering from template: %s' % template.name, level=DEBUG) + rendered_content = template.render(context) + if not rendered_content: + log("Render returned None - skipping '%s'" % path, + level=WARNING) + return + + write(path, rendered_content.encode('utf-8').strip()) + log('Wrote template %s' % path, level=DEBUG) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/utils.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/utils.py new file mode 100644 index 0000000000000000000000000000000000000000..ff7485c28c8748ba366dba54f1e3b8f7e6a7c619 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/hardening/utils.py @@ -0,0 +1,155 @@ +# Copyright 2016 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import glob +import grp +import os +import pwd +import six +import yaml + +from charmhelpers.core.hookenv import ( + log, + DEBUG, + INFO, + WARNING, + ERROR, +) + + +# Global settings cache. 
Since each hook fire entails a fresh module import it
+# is safe to hold this in memory and not risk missing config changes (since
+# they will result in a new hook fire and thus re-import).
+__SETTINGS__ = {}
+
+
+def _get_defaults(modules):
+    """Load the default config for the provided modules.
+
+    :param modules: stack modules config defaults to lookup.
+    :returns: modules default config dictionary.
+    """
+    default = os.path.join(os.path.dirname(__file__),
+                           'defaults/%s.yaml' % (modules))
+    return yaml.safe_load(open(default))
+
+
+def _get_schema(modules):
+    """Load the config schema for the provided modules.
+
+    NOTE: this schema is intended to have a 1-1 relationship with the keys
+    in the default config and is used as a means to verify valid overrides
+    provided by the user.
+
+    :param modules: stack modules config schema to lookup.
+    :returns: modules default schema dictionary.
+    """
+    schema = os.path.join(os.path.dirname(__file__),
+                          'defaults/%s.yaml.schema' % (modules))
+    return yaml.safe_load(open(schema))
+
+
+def _get_user_provided_overrides(modules):
+    """Load user-provided config overrides.
+
+    :param modules: stack modules to lookup in user overrides yaml file.
+    :returns: overrides dictionary.
+    """
+    overrides = os.path.join(os.environ['JUJU_CHARM_DIR'],
+                             'hardening.yaml')
+    if os.path.exists(overrides):
+        log("Found user-provided config overrides file '%s'" %
+            (overrides), level=DEBUG)
+        settings = yaml.safe_load(open(overrides))
+        if settings and settings.get(modules):
+            log("Applying '%s' overrides" % (modules), level=DEBUG)
+            return settings.get(modules)
+
+        log("No overrides found for '%s'" % (modules), level=DEBUG)
+    else:
+        log("No hardening config overrides file '%s' found in charm "
+            "root dir" % (overrides), level=DEBUG)
+
+    return {}
+
+
+def _apply_overrides(settings, overrides, schema):
+    """Get overrides config overlaid onto modules defaults.
+
+    :param settings: modules default config.
+    :param overrides: user-provided config overrides.
+    :param schema: config schema used to validate the overrides.
+    :returns: dictionary of modules config with user overrides applied.
+    """
+    if overrides:
+        for k, v in six.iteritems(overrides):
+            if k in schema:
+                if schema[k] is None:
+                    settings[k] = v
+                elif type(schema[k]) is dict:
+                    settings[k] = _apply_overrides(settings[k], overrides[k],
+                                                   schema[k])
+                else:
+                    log("Unexpected type found in schema '%s'" %
+                        type(schema[k]), level=ERROR)
+                    raise Exception("Unexpected type found in schema '%s'" %
+                                    type(schema[k]))
+            else:
+                log("Unknown override key '%s' - ignoring" % (k), level=INFO)
+
+    return settings
+
+
+def get_settings(modules):
+    global __SETTINGS__
+    if modules in __SETTINGS__:
+        return __SETTINGS__[modules]
+
+    schema = _get_schema(modules)
+    settings = _get_defaults(modules)
+    overrides = _get_user_provided_overrides(modules)
+    __SETTINGS__[modules] = _apply_overrides(settings, overrides, schema)
+    return __SETTINGS__[modules]
+
+
+def ensure_permissions(path, user, group, permissions, maxdepth=-1):
+    """Ensure permissions for path.
+
+    If path is a file, apply to file and return. If path is a directory,
+    apply recursively (if required) to directory contents and return.
+
+    :param path: path of the file or directory to act on
+    :param user: user name
+    :param group: group name
+    :param permissions: octal permissions
+    :param maxdepth: maximum recursion depth. A negative maxdepth allows
+                     infinite recursion and maxdepth=0 means no recursion.
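+
+    Example (illustrative)::
+
+        ensure_permissions('/etc/ssh', 'root', 'root', 0o0755, maxdepth=0)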
+ :returns: None + """ + if not os.path.exists(path): + log("File '%s' does not exist - cannot set permissions" % (path), + level=WARNING) + return + + _user = pwd.getpwnam(user) + os.chown(path, _user.pw_uid, grp.getgrnam(group).gr_gid) + os.chmod(path, permissions) + + if maxdepth == 0: + log("Max recursion depth reached - skipping further recursion", + level=DEBUG) + return + elif maxdepth > 0: + maxdepth -= 1 + + if os.path.isdir(path): + contents = glob.glob("%s/*" % (path)) + for c in contents: + ensure_permissions(c, user=user, group=group, + permissions=permissions, maxdepth=maxdepth) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/mellanox/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/mellanox/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..9b088de84e4b288b551603816fc10eebfa7b1503 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/mellanox/__init__.py @@ -0,0 +1,13 @@ +# Copyright 2016 Canonical Ltd +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/mellanox/infiniband.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/mellanox/infiniband.py new file mode 100644 index 0000000000000000000000000000000000000000..0edb2314114738c79859f51e505de05e5c7c8fcc --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/mellanox/infiniband.py @@ -0,0 +1,153 @@ +#!/usr/bin/env python +# -*- coding: utf-8 -*- + +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+
+__author__ = "Jorge Niedbalski "
+
+import six
+
+from charmhelpers.fetch import (
+    apt_install,
+    apt_update,
+)
+
+from charmhelpers.core.hookenv import (
+    log,
+    INFO,
+)
+
+try:
+    from netifaces import interfaces as network_interfaces
+except ImportError:
+    if six.PY2:
+        apt_install('python-netifaces')
+    else:
+        apt_install('python3-netifaces')
+    from netifaces import interfaces as network_interfaces
+
+import os
+import re
+import subprocess
+
+from charmhelpers.core.kernel import modprobe
+
+REQUIRED_MODULES = (
+    "mlx4_ib",
+    "mlx4_en",
+    "mlx4_core",
+    "ib_ipath",
+    "ib_mthca",
+    "ib_srpt",
+    "ib_srp",
+    "ib_ucm",
+    "ib_isert",
+    "ib_iser",
+    "ib_ipoib",
+    "ib_cm",
+    "ib_uverbs",
+    "ib_umad",
+    "ib_sa",
+    "ib_mad",
+    "ib_core",
+    "ib_addr",
+    "rdma_ucm",
+)
+
+REQUIRED_PACKAGES = (
+    "ibutils",
+    "infiniband-diags",
+    "ibverbs-utils",
+)
+
+IPOIB_DRIVERS = (
+    "ib_ipoib",
+)
+
+ABI_VERSION_FILE = "/sys/class/infiniband_mad/abi_version"
+
+
+class DeviceInfo(object):
+    pass
+
+
+def install_packages():
+    apt_update()
+    apt_install(REQUIRED_PACKAGES, fatal=True)
+
+
+def load_modules():
+    for module in REQUIRED_MODULES:
+        modprobe(module, persist=True)
+
+
+def is_enabled():
+    """Check if infiniband is loaded on the system"""
+    return os.path.exists(ABI_VERSION_FILE)
+
+
+def stat():
+    """Return full output of ibstat"""
+    return subprocess.check_output(["ibstat"])
+
+
+def devices():
+    """Returns a list of IB enabled devices"""
+    return subprocess.check_output(['ibstat', '-l']).splitlines()
+
+
+def device_info(device):
+    """Returns a DeviceInfo object with the current device settings"""
+
+    status = subprocess.check_output(
+        ['ibstat', device, '-s']).decode('UTF-8').splitlines()
+
+    regexes = {
+        "CA type: (.*)": "device_type",
+        "Number of ports: (.*)": "num_ports",
+        "Firmware version: (.*)": "fw_ver",
+        "Hardware version: (.*)": "hw_ver",
+        "Node GUID: (.*)": "node_guid",
+        "System image GUID: (.*)": "sys_guid",
+    }
+
+    device = DeviceInfo()
+
+    for line in status:
+        for expression, key in regexes.items():
+            matches = re.search(expression, line)
+            if matches:
+                setattr(device, key, matches.group(1))
+
+    return device
+
+
+def ipoib_interfaces():
+    """Return a list of IPOIB capable ethernet interfaces"""
+    interfaces = []
+
+    for interface in network_interfaces():
+        try:
+            driver = re.search('^driver: (.+)$', subprocess.check_output([
+                'ethtool', '-i',
+                interface]).decode('UTF-8'), re.M).group(1)
+
+            if driver in IPOIB_DRIVERS:
+                interfaces.append(interface)
+        except Exception:
+            log("Skipping interface %s" % interface, level=INFO)
+            continue
+
+    return interfaces
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/network/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/network/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..d7567b863e3a5ad2b7a7f44958b4166e0c3d346b
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/network/__init__.py
@@ -0,0 +1,13 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and +# limitations under the License. diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/network/ip.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/network/ip.py new file mode 100644 index 0000000000000000000000000000000000000000..b13277bb57c9227b1d9dfecf4f6750740e5a262a --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/network/ip.py @@ -0,0 +1,602 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import glob +import re +import subprocess +import six +import socket + +from functools import partial + +from charmhelpers.fetch import apt_install, apt_update +from charmhelpers.core.hookenv import ( + config, + log, + network_get_primary_address, + unit_get, + WARNING, + NoNetworkBinding, +) + +from charmhelpers.core.host import ( + lsb_release, + CompareHostReleases, +) + +try: + import netifaces +except ImportError: + apt_update(fatal=True) + if six.PY2: + apt_install('python-netifaces', fatal=True) + else: + apt_install('python3-netifaces', fatal=True) + import netifaces + +try: + import netaddr +except ImportError: + apt_update(fatal=True) + if six.PY2: + apt_install('python-netaddr', fatal=True) + else: + apt_install('python3-netaddr', fatal=True) + import netaddr + + +def _validate_cidr(network): + try: + netaddr.IPNetwork(network) + except (netaddr.core.AddrFormatError, ValueError): + raise ValueError("Network (%s) is not in CIDR presentation format" % + network) + + +def no_ip_found_error_out(network): + errmsg = ("No IP address found in network(s): %s" % network) + raise ValueError(errmsg) + + +def _get_ipv6_network_from_address(address): + """Get an netaddr.IPNetwork for the given IPv6 address + :param address: a dict as returned by netifaces.ifaddresses + :returns netaddr.IPNetwork: None if the address is a link local or loopback + address + """ + if address['addr'].startswith('fe80') or address['addr'] == "::1": + return None + + prefix = address['netmask'].split("/") + if len(prefix) > 1: + netmask = prefix[1] + else: + netmask = address['netmask'] + return netaddr.IPNetwork("%s/%s" % (address['addr'], + netmask)) + + +def get_address_in_network(network, fallback=None, fatal=False): + """Get an IPv4 or IPv6 address within the network from the host. + + :param network (str): CIDR presentation format. For example, + '192.168.1.0/24'. Supports multiple networks as a space-delimited list. + :param fallback (str): If no address is found, return fallback. + :param fatal (boolean): If no address is found, fallback is not + set and fatal is True then exit(1). 
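+
+    Example (illustrative; the result depends on the host's interfaces)::
+
+        get_address_in_network('192.168.1.0/24', fallback='127.0.0.1')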
+ """ + if network is None: + if fallback is not None: + return fallback + + if fatal: + no_ip_found_error_out(network) + else: + return None + + networks = network.split() or [network] + for network in networks: + _validate_cidr(network) + network = netaddr.IPNetwork(network) + for iface in netifaces.interfaces(): + try: + addresses = netifaces.ifaddresses(iface) + except ValueError: + # If an instance was deleted between + # netifaces.interfaces() run and now, its interfaces are gone + continue + if network.version == 4 and netifaces.AF_INET in addresses: + for addr in addresses[netifaces.AF_INET]: + cidr = netaddr.IPNetwork("%s/%s" % (addr['addr'], + addr['netmask'])) + if cidr in network: + return str(cidr.ip) + + if network.version == 6 and netifaces.AF_INET6 in addresses: + for addr in addresses[netifaces.AF_INET6]: + cidr = _get_ipv6_network_from_address(addr) + if cidr and cidr in network: + return str(cidr.ip) + + if fallback is not None: + return fallback + + if fatal: + no_ip_found_error_out(network) + + return None + + +def is_ipv6(address): + """Determine whether provided address is IPv6 or not.""" + try: + address = netaddr.IPAddress(address) + except netaddr.AddrFormatError: + # probably a hostname - so not an address at all! + return False + + return address.version == 6 + + +def is_address_in_network(network, address): + """ + Determine whether the provided address is within a network range. + + :param network (str): CIDR presentation format. For example, + '192.168.1.0/24'. + :param address: An individual IPv4 or IPv6 address without a net + mask or subnet prefix. For example, '192.168.1.1'. + :returns boolean: Flag indicating whether address is in network. + """ + try: + network = netaddr.IPNetwork(network) + except (netaddr.core.AddrFormatError, ValueError): + raise ValueError("Network (%s) is not in CIDR presentation format" % + network) + + try: + address = netaddr.IPAddress(address) + except (netaddr.core.AddrFormatError, ValueError): + raise ValueError("Address (%s) is not in correct presentation format" % + address) + + if address in network: + return True + else: + return False + + +def _get_for_address(address, key): + """Retrieve an attribute of or the physical interface that + the IP address provided could be bound to. + + :param address (str): An individual IPv4 or IPv6 address without a net + mask or subnet prefix. For example, '192.168.1.1'. + :param key: 'iface' for the physical interface name or an attribute + of the configured interface, for example 'netmask'. + :returns str: Requested attribute or None if address is not bindable. 
+ """ + address = netaddr.IPAddress(address) + for iface in netifaces.interfaces(): + addresses = netifaces.ifaddresses(iface) + if address.version == 4 and netifaces.AF_INET in addresses: + addr = addresses[netifaces.AF_INET][0]['addr'] + netmask = addresses[netifaces.AF_INET][0]['netmask'] + network = netaddr.IPNetwork("%s/%s" % (addr, netmask)) + cidr = network.cidr + if address in cidr: + if key == 'iface': + return iface + else: + return addresses[netifaces.AF_INET][0][key] + + if address.version == 6 and netifaces.AF_INET6 in addresses: + for addr in addresses[netifaces.AF_INET6]: + network = _get_ipv6_network_from_address(addr) + if not network: + continue + + cidr = network.cidr + if address in cidr: + if key == 'iface': + return iface + elif key == 'netmask' and cidr: + return str(cidr).split('/')[1] + else: + return addr[key] + return None + + +get_iface_for_address = partial(_get_for_address, key='iface') + + +get_netmask_for_address = partial(_get_for_address, key='netmask') + + +def resolve_network_cidr(ip_address): + ''' + Resolves the full address cidr of an ip_address based on + configured network interfaces + ''' + netmask = get_netmask_for_address(ip_address) + return str(netaddr.IPNetwork("%s/%s" % (ip_address, netmask)).cidr) + + +def format_ipv6_addr(address): + """If address is IPv6, wrap it in '[]' otherwise return None. + + This is required by most configuration files when specifying IPv6 + addresses. + """ + if is_ipv6(address): + return "[%s]" % address + + return None + + +def is_ipv6_disabled(): + try: + result = subprocess.check_output( + ['sysctl', 'net.ipv6.conf.all.disable_ipv6'], + stderr=subprocess.STDOUT, + universal_newlines=True) + except subprocess.CalledProcessError: + return True + + return "net.ipv6.conf.all.disable_ipv6 = 1" in result + + +def get_iface_addr(iface='eth0', inet_type='AF_INET', inc_aliases=False, + fatal=True, exc_list=None): + """Return the assigned IP address for a given interface, if any. + + :param iface: network interface on which address(es) are expected to + be found. + :param inet_type: inet address family + :param inc_aliases: include alias interfaces in search + :param fatal: if True, raise exception if address not found + :param exc_list: list of addresses to ignore + :return: list of ip addresses + """ + # Extract nic if passed /dev/ethX + if '/' in iface: + iface = iface.split('/')[-1] + + if not exc_list: + exc_list = [] + + try: + inet_num = getattr(netifaces, inet_type) + except AttributeError: + raise Exception("Unknown inet type '%s'" % str(inet_type)) + + interfaces = netifaces.interfaces() + if inc_aliases: + ifaces = [] + for _iface in interfaces: + if iface == _iface or _iface.split(':')[0] == iface: + ifaces.append(_iface) + + if fatal and not ifaces: + raise Exception("Invalid interface '%s'" % iface) + + ifaces.sort() + else: + if iface not in interfaces: + if fatal: + raise Exception("Interface '%s' not found " % (iface)) + else: + return [] + + else: + ifaces = [iface] + + addresses = [] + for netiface in ifaces: + net_info = netifaces.ifaddresses(netiface) + if inet_num in net_info: + for entry in net_info[inet_num]: + if 'addr' in entry and entry['addr'] not in exc_list: + addresses.append(entry['addr']) + + if fatal and not addresses: + raise Exception("Interface '%s' doesn't have any %s addresses." 
% + (iface, inet_type)) + + return sorted(addresses) + + +get_ipv4_addr = partial(get_iface_addr, inet_type='AF_INET') + + +def get_iface_from_addr(addr): + """Work out on which interface the provided address is configured.""" + for iface in netifaces.interfaces(): + addresses = netifaces.ifaddresses(iface) + for inet_type in addresses: + for _addr in addresses[inet_type]: + _addr = _addr['addr'] + # link local + ll_key = re.compile("(.+)%.*") + raw = re.match(ll_key, _addr) + if raw: + _addr = raw.group(1) + + if _addr == addr: + log("Address '%s' is configured on iface '%s'" % + (addr, iface)) + return iface + + msg = "Unable to infer net iface on which '%s' is configured" % (addr) + raise Exception(msg) + + +def sniff_iface(f): + """Ensure decorated function is called with a value for iface. + + If no iface provided, inject net iface inferred from unit private address. + """ + def iface_sniffer(*args, **kwargs): + if not kwargs.get('iface', None): + kwargs['iface'] = get_iface_from_addr(unit_get('private-address')) + + return f(*args, **kwargs) + + return iface_sniffer + + +@sniff_iface +def get_ipv6_addr(iface=None, inc_aliases=False, fatal=True, exc_list=None, + dynamic_only=True): + """Get assigned IPv6 address for a given interface. + + Returns list of addresses found. If no address found, returns empty list. + + If iface is None, we infer the current primary interface by doing a reverse + lookup on the unit private-address. + + We currently only support scope global IPv6 addresses i.e. non-temporary + addresses. If no global IPv6 address is found, return the first one found + in the ipv6 address list. + + :param iface: network interface on which ipv6 address(es) are expected to + be found. + :param inc_aliases: include alias interfaces in search + :param fatal: if True, raise exception if address not found + :param exc_list: list of addresses to ignore + :param dynamic_only: only recognise dynamic addresses + :return: list of ipv6 addresses + """ + addresses = get_iface_addr(iface=iface, inet_type='AF_INET6', + inc_aliases=inc_aliases, fatal=fatal, + exc_list=exc_list) + + if addresses: + global_addrs = [] + for addr in addresses: + key_scope_link_local = re.compile("^fe80::..(.+)%(.+)") + m = re.match(key_scope_link_local, addr) + if m: + eui_64_mac = m.group(1) + iface = m.group(2) + else: + global_addrs.append(addr) + + if global_addrs: + # Make sure any found global addresses are not temporary + cmd = ['ip', 'addr', 'show', iface] + out = subprocess.check_output(cmd).decode('UTF-8') + if dynamic_only: + key = re.compile("inet6 (.+)/[0-9]+ scope global.* dynamic.*") + else: + key = re.compile("inet6 (.+)/[0-9]+ scope global.*") + + addrs = [] + for line in out.split('\n'): + line = line.strip() + m = re.match(key, line) + if m and 'temporary' not in line: + # Return the first valid address we find + for addr in global_addrs: + if m.group(1) == addr: + if not dynamic_only or \ + m.group(1).endswith(eui_64_mac): + addrs.append(addr) + + if addrs: + return addrs + + if fatal: + raise Exception("Interface '%s' does not have a scope global " + "non-temporary ipv6 address." 
% iface) + + return [] + + +def get_bridges(vnic_dir='/sys/devices/virtual/net'): + """Return a list of bridges on the system.""" + b_regex = "%s/*/bridge" % vnic_dir + return [x.replace(vnic_dir, '').split('/')[1] for x in glob.glob(b_regex)] + + +def get_bridge_nics(bridge, vnic_dir='/sys/devices/virtual/net'): + """Return a list of nics comprising a given bridge on the system.""" + brif_regex = "%s/%s/brif/*" % (vnic_dir, bridge) + return [x.split('/')[-1] for x in glob.glob(brif_regex)] + + +def is_bridge_member(nic): + """Check if a given nic is a member of a bridge.""" + for bridge in get_bridges(): + if nic in get_bridge_nics(bridge): + return True + + return False + + +def is_ip(address): + """ + Returns True if address is a valid IP address. + """ + try: + # Test to see if already an IPv4/IPv6 address + address = netaddr.IPAddress(address) + return True + except (netaddr.AddrFormatError, ValueError): + return False + + +def ns_query(address): + try: + import dns.resolver + except ImportError: + if six.PY2: + apt_install('python-dnspython', fatal=True) + else: + apt_install('python3-dnspython', fatal=True) + import dns.resolver + + if isinstance(address, dns.name.Name): + rtype = 'PTR' + elif isinstance(address, six.string_types): + rtype = 'A' + else: + return None + + try: + answers = dns.resolver.query(address, rtype) + except dns.resolver.NXDOMAIN: + return None + + if answers: + return str(answers[0]) + return None + + +def get_host_ip(hostname, fallback=None): + """ + Resolves the IP for a given hostname, or returns + the input if it is already an IP. + """ + if is_ip(hostname): + return hostname + + ip_addr = ns_query(hostname) + if not ip_addr: + try: + ip_addr = socket.gethostbyname(hostname) + except Exception: + log("Failed to resolve hostname '%s'" % (hostname), + level=WARNING) + return fallback + return ip_addr + + +def get_hostname(address, fqdn=True): + """ + Resolves hostname for given IP, or returns the input + if it is already a hostname. + """ + if is_ip(address): + try: + import dns.reversename + except ImportError: + if six.PY2: + apt_install("python-dnspython", fatal=True) + else: + apt_install("python3-dnspython", fatal=True) + import dns.reversename + + rev = dns.reversename.from_address(address) + result = ns_query(rev) + + if not result: + try: + result = socket.gethostbyaddr(address)[0] + except Exception: + return None + else: + result = address + + if fqdn: + # strip trailing . + if result.endswith('.'): + return result[:-1] + else: + return result + else: + return result.split('.')[0] + + +def port_has_listener(address, port): + """ + Returns True if the address:port is open and being listened to, + else False. + + @param address: an IP address or hostname + @param port: integer port + + Note calls 'zc' via a subprocess shell + """ + cmd = ['nc', '-z', address, str(port)] + result = subprocess.call(cmd) + return not(bool(result)) + + +def assert_charm_supports_ipv6(): + """Check whether we are able to support charms ipv6.""" + release = lsb_release()['DISTRIB_CODENAME'].lower() + if CompareHostReleases(release) < "trusty": + raise Exception("IPv6 is not supported in the charms for Ubuntu " + "versions less than Trusty 14.04") + + +def get_relation_ip(interface, cidr_network=None): + """Return this unit's IP for the given interface. + + Allow for an arbitrary interface to use with network-get to select an IP. + Handle all address selection options including passed cidr network and + IPv6. 
+
+    Usage: get_relation_ip('amqp', cidr_network='10.0.0.0/8')
+
+    @param interface: string name of the relation.
+    @param cidr_network: string CIDR Network to select an address from.
+    @raises Exception if prefer-ipv6 is configured but IPv6 unsupported.
+    @returns IPv6 or IPv4 address
+    """
+    # Select the interface address first
+    # For possible use as a fallback below with get_address_in_network
+    try:
+        # Get the interface specific IP
+        address = network_get_primary_address(interface)
+    except NotImplementedError:
+        # If network-get is not available
+        address = get_host_ip(unit_get('private-address'))
+    except NoNetworkBinding:
+        log("No network binding for {}".format(interface), WARNING)
+        address = get_host_ip(unit_get('private-address'))
+
+    if config('prefer-ipv6'):
+        # Currently IPv6 has priority, eventually we want IPv6 to just be
+        # another network space.
+        assert_charm_supports_ipv6()
+        return get_ipv6_addr()[0]
+    elif cidr_network:
+        # If a specific CIDR network is passed get the address from that
+        # network.
+        return get_address_in_network(cidr_network, address)
+
+    # Return the interface address
+    return address
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/network/ovs/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/network/ovs/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..fd001bc2eaa9b1b511bbd1816e1089521935a50a
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/network/ovs/__init__.py
@@ -0,0 +1,541 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+''' Helpers for interacting with OpenvSwitch '''
+import hashlib
+import subprocess
+import os
+import six
+
+from charmhelpers.fetch import apt_install
+
+
+from charmhelpers.core.hookenv import (
+    log, WARNING, INFO, DEBUG
+)
+from charmhelpers.core.host import (
+    service
+)
+
+
+BRIDGE_TEMPLATE = """\
+# This veth pair is required when neutron data-port is mapped to an existing linux bridge.
lp:1635067 + +auto {linuxbridge_port} +iface {linuxbridge_port} inet manual + pre-up ip link add name {linuxbridge_port} type veth peer name {ovsbridge_port} + pre-up ip link set {ovsbridge_port} master {bridge} + pre-up ip link set {ovsbridge_port} up + up ip link set {linuxbridge_port} up + down ip link del {linuxbridge_port} +""" + +MAX_KERNEL_INTERFACE_NAME_LEN = 15 + + +def get_bridges(): + """Return list of the bridges on the default openvswitch + + :returns: List of bridge names + :rtype: List[str] + :raises: subprocess.CalledProcessError if ovs-vsctl fails + """ + cmd = ["ovs-vsctl", "list-br"] + lines = subprocess.check_output(cmd).decode('utf-8').split("\n") + maybe_bridges = [l.strip() for l in lines] + return [b for b in maybe_bridges if b] + + +def get_bridge_ports(name): + """Return a list the ports on a named bridge + + :param name: the name of the bridge to list + :type name: str + :returns: List of ports on the named bridge + :rtype: List[str] + :raises: subprocess.CalledProcessError if the ovs-vsctl command fails. If + the named bridge doesn't exist, then the exception will be raised. + """ + cmd = ["ovs-vsctl", "--", "list-ports", name] + lines = subprocess.check_output(cmd).decode('utf-8').split("\n") + maybe_ports = [l.strip() for l in lines] + return [p for p in maybe_ports if p] + + +def get_bridges_and_ports_map(): + """Return dictionary of bridge to ports for the default openvswitch + + :returns: a mapping of bridge name to a list of ports. + :rtype: Dict[str, List[str]] + :raises: subprocess.CalledProcessError if any of the underlying ovs-vsctl + command fail. + """ + return {b: get_bridge_ports(b) for b in get_bridges()} + + +def _dict_to_vsctl_set(data, table, entity): + """Helper that takes dictionary and provides ``ovs-vsctl set`` commands + + :param data: Additional data to attach to interface + The keys in the data dictionary map directly to column names in the + OpenvSwitch table specified as defined in DB-SCHEMA [0] referenced in + RFC 7047 [1] + + There are some established conventions for keys in the external-ids + column of various tables, consult the OVS Integration Guide [2] for + more details. + + NOTE(fnordahl): Technically the ``external-ids`` column is called + ``external_ids`` (with an underscore) and we rely on ``ovs-vsctl``'s + behaviour of transforming dashes to underscores for us [3] so we can + have a more pleasant data structure. 
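+
+    Example (illustrative): data={'external-ids': {'mycharm': 'x'}} with
+    table='Interface' and entity='eth0' yields the single command tuple
+    ('--', 'set', 'Interface', 'eth0', 'external-ids:mycharm=x').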
+ + 0: http://www.openvswitch.org/ovs-vswitchd.conf.db.5.pdf + 1: https://tools.ietf.org/html/rfc7047 + 2: http://docs.openvswitch.org/en/latest/topics/integration/ + 3: https://github.com/openvswitch/ovs/blob/ + 20dac08fdcce4b7fda1d07add3b346aa9751cfbc/ + lib/db-ctl-base.c#L189-L215 + :type data: Optional[Dict[str,Union[str,Dict[str,str]]]] + :param table: Name of table to operate on + :type table: str + :param entity: Name of entity to operate on + :type entity: str + :returns: '--' separated ``ovs-vsctl set`` commands + :rtype: Iterator[Tuple[str, str, str, str, str]] + """ + for (k, v) in data.items(): + if isinstance(v, dict): + entries = { + '{}:{}'.format(k, dk): dv for (dk, dv) in v.items()} + else: + entries = {k: v} + for (colk, colv) in entries.items(): + yield ('--', 'set', table, entity, '{}={}'.format(colk, colv)) + + +def add_bridge(name, datapath_type=None, brdata=None, exclusive=False): + """Add the named bridge to openvswitch and set/update bridge data for it + + :param name: Name of bridge to create + :type name: str + :param datapath_type: Add datapath_type to bridge (DEPRECATED, use brdata) + :type datapath_type: Optional[str] + :param brdata: Additional data to attach to bridge + The keys in the brdata dictionary map directly to column names in the + OpenvSwitch bridge table as defined in DB-SCHEMA [0] referenced in + RFC 7047 [1] + + There are some established conventions for keys in the external-ids + column of various tables, consult the OVS Integration Guide [2] for + more details. + + NOTE(fnordahl): Technically the ``external-ids`` column is called + ``external_ids`` (with an underscore) and we rely on ``ovs-vsctl``'s + behaviour of transforming dashes to underscores for us [3] so we can + have a more pleasant data structure. 
+ + 0: http://www.openvswitch.org/ovs-vswitchd.conf.db.5.pdf + 1: https://tools.ietf.org/html/rfc7047 + 2: http://docs.openvswitch.org/en/latest/topics/integration/ + 3: https://github.com/openvswitch/ovs/blob/ + 20dac08fdcce4b7fda1d07add3b346aa9751cfbc/ + lib/db-ctl-base.c#L189-L215 + :type brdata: Optional[Dict[str,Union[str,Dict[str,str]]]] + :param exclusive: If True, raise exception if bridge exists + :type exclusive: bool + :raises: subprocess.CalledProcessError + """ + log('Creating bridge {}'.format(name)) + cmd = ['ovs-vsctl', '--'] + if not exclusive: + cmd.append('--may-exist') + cmd.extend(('add-br', name)) + if brdata: + for setcmd in _dict_to_vsctl_set(brdata, 'bridge', name): + cmd.extend(setcmd) + if datapath_type is not None: + log('DEPRECATION WARNING: add_bridge called with datapath_type, ' + 'please use the brdata keyword argument instead.') + cmd += ['--', 'set', 'bridge', name, + 'datapath_type={}'.format(datapath_type)] + subprocess.check_call(cmd) + + +def del_bridge(name): + """Delete the named bridge from openvswitch + + :param name: Name of bridge to remove + :type name: str + :raises: subprocess.CalledProcessError + """ + log('Deleting bridge {}'.format(name)) + subprocess.check_call(["ovs-vsctl", "--", "--if-exists", "del-br", name]) + + +def add_bridge_port(name, port, promisc=False, ifdata=None, exclusive=False, + linkup=True, portdata=None): + """Add port to bridge and optionally set/update interface data for it + + :param name: Name of bridge to attach port to + :type name: str + :param port: Name of port as represented in netdev + :type port: str + :param promisc: Whether to set promiscuous mode on interface + True=on, False=off, None leave untouched + :type promisc: Optional[bool] + :param ifdata: Additional data to attach to interface + The keys in the ifdata dictionary map directly to column names in the + OpenvSwitch Interface table as defined in DB-SCHEMA [0] referenced in + RFC 7047 [1] + + There are some established conventions for keys in the external-ids + column of various tables, consult the OVS Integration Guide [2] for + more details. + + NOTE(fnordahl): Technically the ``external-ids`` column is called + ``external_ids`` (with an underscore) and we rely on ``ovs-vsctl``'s + behaviour of transforming dashes to underscores for us [3] so we can + have a more pleasant data structure. + + 0: http://www.openvswitch.org/ovs-vswitchd.conf.db.5.pdf + 1: https://tools.ietf.org/html/rfc7047 + 2: http://docs.openvswitch.org/en/latest/topics/integration/ + 3: https://github.com/openvswitch/ovs/blob/ + 20dac08fdcce4b7fda1d07add3b346aa9751cfbc/ + lib/db-ctl-base.c#L189-L215 + :type ifdata: Optional[Dict[str,Union[str,Dict[str,str]]]] + :param exclusive: If True, raise exception if port exists + :type exclusive: bool + :param linkup: Bring link up + :type linkup: bool + :param portdata: Additional data to attach to port. Similar to ifdata. 
+ :type portdata: Optional[Dict[str,Union[str,Dict[str,str]]]] + :raises: subprocess.CalledProcessError + """ + cmd = ['ovs-vsctl', '--'] + if not exclusive: + cmd.append('--may-exist') + cmd.extend(('add-port', name, port)) + for ovs_table, data in (('Interface', ifdata), ('Port', portdata)): + if data: + for setcmd in _dict_to_vsctl_set(data, ovs_table, port): + cmd.extend(setcmd) + + log('Adding port {} to bridge {}'.format(port, name)) + subprocess.check_call(cmd) + if linkup: + # This is mostly a workaround for CI environments, in the real world + # the bare metal provider would most likely have configured and brought + # up the link for us. + subprocess.check_call(["ip", "link", "set", port, "up"]) + if promisc: + subprocess.check_call(["ip", "link", "set", port, "promisc", "on"]) + elif promisc is False: + subprocess.check_call(["ip", "link", "set", port, "promisc", "off"]) + + +def del_bridge_port(name, port): + """Delete a port from the named openvswitch bridge + + :param name: Name of bridge to remove port from + :type name: str + :param port: Name of port to remove + :type port: str + :raises: subprocess.CalledProcessError + """ + log('Deleting port {} from bridge {}'.format(port, name)) + subprocess.check_call(["ovs-vsctl", "--", "--if-exists", "del-port", + name, port]) + subprocess.check_call(["ip", "link", "set", port, "down"]) + subprocess.check_call(["ip", "link", "set", port, "promisc", "off"]) + + +def add_bridge_bond(bridge, port, interfaces, portdata=None, ifdatamap=None, + exclusive=False): + """Add bonded port in bridge from interfaces. + + :param bridge: Name of bridge to add bonded port to + :type bridge: str + :param port: Name of created port + :type port: str + :param interfaces: Underlying interfaces that make up the bonded port + :type interfaces: Iterator[str] + :param portdata: Additional data to attach to the created bond port + See _dict_to_vsctl_set() for detailed description. + Example: + { + 'bond-mode': 'balance-tcp', + 'lacp': 'active', + 'other-config': { + 'lacp-time': 'fast', + }, + } + :type portdata: Optional[Dict[str,Union[str,Dict[str,str]]]] + :param ifdatamap: Map of data to attach to created bond interfaces + See _dict_to_vsctl_set() for detailed description. 
+ Example: + { + 'eth0': { + 'type': 'dpdk', + 'mtu-request': '9000', + 'options': { + 'dpdk-devargs': '0000:01:00.0', + }, + }, + } + :type ifdatamap: Optional[Dict[str,Dict[str,Union[str,Dict[str,str]]]]] + :param exclusive: If True, raise exception if port exists + :type exclusive: bool + :raises: subprocess.CalledProcessError + """ + cmd = ['ovs-vsctl', '--'] + if not exclusive: + cmd.append('--may-exist') + cmd.extend(('add-bond', bridge, port)) + cmd.extend(interfaces) + if portdata: + for setcmd in _dict_to_vsctl_set(portdata, 'port', port): + cmd.extend(setcmd) + if ifdatamap: + for ifname, ifdata in ifdatamap.items(): + for setcmd in _dict_to_vsctl_set(ifdata, 'Interface', ifname): + cmd.extend(setcmd) + subprocess.check_call(cmd) + + +def add_ovsbridge_linuxbridge(name, bridge, ifdata=None): + """Add linux bridge to the named openvswitch bridge + + :param name: Name of ovs bridge to be added to Linux bridge + :type name: str + :param bridge: Name of Linux bridge to be added to ovs bridge + :type name: str + :param ifdata: Additional data to attach to interface + The keys in the ifdata dictionary map directly to column names in the + OpenvSwitch Interface table as defined in DB-SCHEMA [0] referenced in + RFC 7047 [1] + + There are some established conventions for keys in the external-ids + column of various tables, consult the OVS Integration Guide [2] for + more details. + + NOTE(fnordahl): Technically the ``external-ids`` column is called + ``external_ids`` (with an underscore) and we rely on ``ovs-vsctl``'s + behaviour of transforming dashes to underscores for us [3] so we can + have a more pleasant data structure. + + 0: http://www.openvswitch.org/ovs-vswitchd.conf.db.5.pdf + 1: https://tools.ietf.org/html/rfc7047 + 2: http://docs.openvswitch.org/en/latest/topics/integration/ + 3: https://github.com/openvswitch/ovs/blob/ + 20dac08fdcce4b7fda1d07add3b346aa9751cfbc/ + lib/db-ctl-base.c#L189-L215 + :type ifdata: Optional[Dict[str,Union[str,Dict[str,str]]]] + """ + try: + import netifaces + except ImportError: + if six.PY2: + apt_install('python-netifaces', fatal=True) + else: + apt_install('python3-netifaces', fatal=True) + import netifaces + + # NOTE(jamespage): + # Older code supported addition of a linuxbridge directly + # to an OVS bridge; ensure we don't break uses on upgrade + existing_ovs_bridge = port_to_br(bridge) + if existing_ovs_bridge is not None: + log('Linuxbridge {} is already directly in use' + ' by OVS bridge {}'.format(bridge, existing_ovs_bridge), + level=INFO) + return + + # NOTE(jamespage): + # preserve existing naming because interfaces may already exist. 
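+    # Illustrative: name='br-ex', bridge='lxdbr0' gives 'veth-br-ex' and
+    # 'veth-lxdbr0'; names over 15 chars fall back to the hashed cvo/cvb
+    # scheme below.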
+ ovsbridge_port = "veth-" + name + linuxbridge_port = "veth-" + bridge + if (len(ovsbridge_port) > MAX_KERNEL_INTERFACE_NAME_LEN or + len(linuxbridge_port) > MAX_KERNEL_INTERFACE_NAME_LEN): + # NOTE(jamespage): + # use parts of hashed bridgename (openstack style) when + # a bridge name exceeds 15 chars + hashed_bridge = hashlib.sha256(bridge.encode('UTF-8')).hexdigest() + base = '{}-{}'.format(hashed_bridge[:8], hashed_bridge[-2:]) + ovsbridge_port = "cvo{}".format(base) + linuxbridge_port = "cvb{}".format(base) + + interfaces = netifaces.interfaces() + for interface in interfaces: + if interface == ovsbridge_port or interface == linuxbridge_port: + log('Interface {} already exists'.format(interface), level=INFO) + return + + log('Adding linuxbridge {} to ovsbridge {}'.format(bridge, name), + level=INFO) + + check_for_eni_source() + + with open('/etc/network/interfaces.d/{}.cfg'.format( + linuxbridge_port), 'w') as config: + config.write(BRIDGE_TEMPLATE.format(linuxbridge_port=linuxbridge_port, + ovsbridge_port=ovsbridge_port, + bridge=bridge)) + + subprocess.check_call(["ifup", linuxbridge_port]) + add_bridge_port(name, linuxbridge_port, ifdata=ifdata) + + +def is_linuxbridge_interface(port): + ''' Check if the interface is a linuxbridge bridge + :param port: Name of an interface to check whether it is a Linux bridge + :returns: True if port is a Linux bridge''' + + if os.path.exists('/sys/class/net/' + port + '/bridge'): + log('Interface {} is a Linux bridge'.format(port), level=DEBUG) + return True + else: + log('Interface {} is not a Linux bridge'.format(port), level=DEBUG) + return False + + +def set_manager(manager): + ''' Set the controller for the local openvswitch ''' + log('Setting manager for local ovs to {}'.format(manager)) + subprocess.check_call(['ovs-vsctl', 'set-manager', + 'ssl:{}'.format(manager)]) + + +def set_Open_vSwitch_column_value(column_value): + """ + Calls ovs-vsctl and sets the 'column_value' in the Open_vSwitch table. + + :param column_value: + See http://www.openvswitch.org//ovs-vswitchd.conf.db.5.pdf for + details of the relevant values. 
+    :type column_value: str
+    :raises subprocess.CalledProcessError: possibly ovsdb-server is not running
+    """
+    log('Setting {} in the Open_vSwitch table'.format(column_value))
+    subprocess.check_call(['ovs-vsctl', 'set', 'Open_vSwitch', '.', column_value])
+
+
+CERT_PATH = '/etc/openvswitch/ovsclient-cert.pem'
+
+
+def get_certificate():
+    ''' Read openvswitch certificate from disk '''
+    if os.path.exists(CERT_PATH):
+        log('Reading ovs certificate from {}'.format(CERT_PATH))
+        with open(CERT_PATH, 'r') as cert:
+            full_cert = cert.read()
+            begin_marker = "-----BEGIN CERTIFICATE-----"
+            end_marker = "-----END CERTIFICATE-----"
+            begin_index = full_cert.find(begin_marker)
+            end_index = full_cert.rfind(end_marker)
+            if end_index == -1 or begin_index == -1:
+                raise RuntimeError("Certificate does not contain valid begin"
+                                   " and end markers.")
+            full_cert = full_cert[begin_index:(end_index + len(end_marker))]
+            return full_cert
+    else:
+        log('Certificate not found', level=WARNING)
+        return None
+
+
+def check_for_eni_source():
+    ''' Juju removes the source line when setting up interfaces,
+    replace if missing '''
+
+    with open('/etc/network/interfaces', 'r') as eni:
+        for line in eni:
+            # strip the trailing newline so the comparison can match
+            if line.strip() == 'source /etc/network/interfaces.d/*':
+                return
+    with open('/etc/network/interfaces', 'a') as eni:
+        eni.write('\nsource /etc/network/interfaces.d/*')
+
+
+def full_restart():
+    ''' Full restart and reload of openvswitch '''
+    if os.path.exists('/etc/init/openvswitch-force-reload-kmod.conf'):
+        service('start', 'openvswitch-force-reload-kmod')
+    else:
+        service('force-reload-kmod', 'openvswitch-switch')
+
+
+def enable_ipfix(bridge, target,
+                 cache_active_timeout=60,
+                 cache_max_flows=128,
+                 sampling=64):
+    '''Enable IPFIX on bridge to target.
+    :param bridge: Bridge to monitor
+    :param target: IPFIX remote endpoint
+    :param cache_active_timeout: The maximum period in seconds for
+                                 which an IPFIX flow record is cached
+                                 and aggregated before being sent
+    :param cache_max_flows: The maximum number of IPFIX flow records
+                            that can be cached at a time
+    :param sampling: The rate at which packets should be sampled and
+                     sent to each target collector
+    '''
+    cmd = [
+        'ovs-vsctl', 'set', 'Bridge', bridge, 'ipfix=@i', '--',
+        '--id=@i', 'create', 'IPFIX',
+        'targets="{}"'.format(target),
+        'sampling={}'.format(sampling),
+        'cache_active_timeout={}'.format(cache_active_timeout),
+        'cache_max_flows={}'.format(cache_max_flows),
+    ]
+    log('Enabling IPFIX on {}.'.format(bridge))
+    subprocess.check_call(cmd)
+
+
+def disable_ipfix(bridge):
+    '''Disable IPFIX on target bridge.
+    :param bridge: Bridge to modify
+    '''
+    cmd = ['ovs-vsctl', 'clear', 'Bridge', bridge, 'ipfix']
+    subprocess.check_call(cmd)
+
+
+def port_to_br(port):
+    '''Determine the bridge that contains a port
+    :param port: Name of port to check for
+    :returns str: OVS bridge containing port or None if not found
+    '''
+    try:
+        return subprocess.check_output(
+            ['ovs-vsctl', 'port-to-br', port]
+        ).decode('UTF-8').strip()
+    except subprocess.CalledProcessError:
+        return None
+
+
+def ovs_appctl(target, args):
+    """Run `ovs-appctl` for target with args and return output.
+
+    :param target: Name of daemon to contact.  Unless target begins with '/',
+                   `ovs-appctl` looks for a pidfile and will build the path to
+                   a /var/run/openvswitch/target.pid.ctl for you.
+    :type target: str
+    :param args: Command and arguments to pass to `ovs-appctl`
+    :type args: Tuple[str, ...]
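+
+    A minimal invocation sketch (the daemon name and command shown are
+    illustrative)::
+
+        ovs_appctl('ovs-vswitchd', ('coverage/show',))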
+ :returns: Output from command + :rtype: str + :raises: subprocess.CalledProcessError + """ + cmd = ['ovs-appctl', '-t', target] + cmd.extend(args) + return subprocess.check_output(cmd, universal_newlines=True) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/network/ovs/ovn.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/network/ovs/ovn.py new file mode 100644 index 0000000000000000000000000000000000000000..2075f11acfbeb3d0d614cf3db5a1b535bf128824 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/network/ovs/ovn.py @@ -0,0 +1,233 @@ +# Copyright 2019 Canonical Ltd +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +import os +import subprocess +import uuid + +from . import utils + + +OVN_RUNDIR = '/var/run/ovn' +OVN_SYSCONFDIR = '/etc/ovn' + + +def ovn_appctl(target, args, rundir=None, use_ovs_appctl=False): + """Run ovn/ovs-appctl for target with args and return output. + + :param target: Name of daemon to contact. Unless target begins with '/', + `ovn-appctl` looks for a pidfile and will build the path to + a /var/run/ovn/target.pid.ctl for you. + :type target: str + :param args: Command and arguments to pass to `ovn-appctl` + :type args: Tuple[str, ...] + :param rundir: Override path to sockets + :type rundir: Optional[str] + :param use_ovs_appctl: The ``ovn-appctl`` command appeared in OVN 20.03, + set this to True to use ``ovs-appctl`` instead. + :type use_ovs_appctl: bool + :returns: Output from command + :rtype: str + :raises: subprocess.CalledProcessError + """ + # NOTE(fnordahl): The ovsdb-server processes for the OVN databases use a + # non-standard naming scheme for their daemon control socket and we need + # to pass the full path to the socket. + if target in ('ovnnb_db', 'ovnsb_db',): + target = os.path.join(rundir or OVN_RUNDIR, target + '.ctl') + + if use_ovs_appctl: + tool = 'ovs-appctl' + else: + tool = 'ovn-appctl' + + return utils._run(tool, '-t', target, *args) + + +class OVNClusterStatus(object): + + def __init__(self, name, cluster_id, server_id, address, status, role, + term, leader, vote, election_timer, log, + entries_not_yet_committed, entries_not_yet_applied, + connections, servers): + """Initialize and populate OVNClusterStatus object. + + Use class initializer so we can define types in a compatible manner. 
+
+        :param name: Name of schema used for database
+        :type name: str
+        :param cluster_id: UUID of cluster
+        :type cluster_id: uuid.UUID
+        :param server_id: UUID of server
+        :type server_id: uuid.UUID
+        :param address: OVSDB connection method
+        :type address: str
+        :param status: Status text
+        :type status: str
+        :param role: Role of server
+        :type role: str
+        :param term: Election term
+        :type term: int
+        :param leader: Short form UUID of leader
+        :type leader: str
+        :param vote: Vote
+        :type vote: str
+        :param election_timer: Current value of election timer
+        :type election_timer: int
+        :param log: Log
+        :type log: str
+        :param entries_not_yet_committed: Entries not yet committed
+        :type entries_not_yet_committed: int
+        :param entries_not_yet_applied: Entries not yet applied
+        :type entries_not_yet_applied: int
+        :param connections: Connections
+        :type connections: str
+        :param servers: Servers in the cluster
+                        [('0ea6', 'ssl:192.0.2.42:6643')]
+        :type servers: List[Tuple[str,str]]
+        """
+        self.name = name
+        self.cluster_id = cluster_id
+        self.server_id = server_id
+        self.address = address
+        self.status = status
+        self.role = role
+        self.term = term
+        self.leader = leader
+        self.vote = vote
+        self.election_timer = election_timer
+        self.log = log
+        self.entries_not_yet_committed = entries_not_yet_committed
+        self.entries_not_yet_applied = entries_not_yet_applied
+        self.connections = connections
+        self.servers = servers
+
+    def __eq__(self, other):
+        return (
+            self.name == other.name and
+            self.cluster_id == other.cluster_id and
+            self.server_id == other.server_id and
+            self.address == other.address and
+            self.status == other.status and
+            self.role == other.role and
+            self.term == other.term and
+            self.leader == other.leader and
+            self.vote == other.vote and
+            self.election_timer == other.election_timer and
+            self.log == other.log and
+            self.entries_not_yet_committed == other.entries_not_yet_committed and
+            self.entries_not_yet_applied == other.entries_not_yet_applied and
+            self.connections == other.connections and
+            self.servers == other.servers)
+
+    @property
+    def is_cluster_leader(self):
+        """Determine whether this server is the cluster leader.
+
+        :returns: Whether target is cluster leader
+        :rtype: bool
+        """
+        return self.leader == 'self'
+
+
+def cluster_status(target, schema=None, use_ovs_appctl=False, rundir=None):
+    """Retrieve status information from clustered OVSDB.
+
+    :param target: Usually one of 'ovsdb-server', 'ovnnb_db', 'ovnsb_db', can
+                   also be full path to control socket.
+    :type target: str
+    :param schema: Database schema name, deduced from target if not provided
+    :type schema: Optional[str]
+    :param use_ovs_appctl: The ``ovn-appctl`` command appeared in OVN 20.03,
+                           set this to True to use ``ovs-appctl`` instead.
+    :type use_ovs_appctl: bool
+    :param rundir: Override path to sockets
+    :type rundir: Optional[str]
+    :returns: cluster status data object
+    :rtype: OVNClusterStatus
+    :raises: subprocess.CalledProcessError, KeyError, RuntimeError
+    """
+    schema_map = {
+        'ovnnb_db': 'OVN_Northbound',
+        'ovnsb_db': 'OVN_Southbound',
+    }
+    if schema and schema not in schema_map.keys():
+        raise RuntimeError('Unknown schema provided: "{}"'.format(schema))
+
+    status = {}
+    k = ''
+    for line in ovn_appctl(target,
+                           ('cluster/status', schema or schema_map[target]),
+                           rundir=rundir,
+                           use_ovs_appctl=use_ovs_appctl).splitlines():
+        if k and line.startswith(' '):
+            # there is no key which means this is an instance of a multi-line/
+            # multi-value item, populate the List which is already stored under
+            # the key.
+            if k == 'servers':
+                status[k].append(
+                    tuple(line.replace(')', '').lstrip().split()[0:4:3]))
+            else:
+                status[k].append(line.lstrip())
+        elif ':' in line:
+            # this is a line with a key
+            k, v = line.split(':', 1)
+            k = k.lower()
+            k = k.replace(' ', '_')
+            if v:
+                # this is a line with both key and value
+                if k in ('cluster_id', 'server_id',):
+                    v = v.replace('(', '')
+                    v = v.replace(')', '')
+                    status[k] = tuple(v.split())
+                else:
+                    status[k] = v.lstrip()
+            else:
+                # this is a line with only key which means a multi-line/
+                # multi-value item. Store key as List which will be
+                # populated on subsequent iterations.
+                status[k] = []
+    return OVNClusterStatus(
+        status['name'],
+        uuid.UUID(status['cluster_id'][1]),
+        uuid.UUID(status['server_id'][1]),
+        status['address'],
+        status['status'],
+        status['role'],
+        int(status['term']),
+        status['leader'],
+        status['vote'],
+        int(status['election_timer']),
+        status['log'],
+        int(status['entries_not_yet_committed']),
+        int(status['entries_not_yet_applied']),
+        status['connections'],
+        status['servers'])
+
+
+def is_northd_active():
+    """Query `ovn-northd` for active status.
+
+    Note that the active status information for ovn-northd is available for
+    OVN 20.03 and onward.
+
+    :returns: True if local `ovn-northd` instance is active, False otherwise
+    :rtype: bool
+    """
+    try:
+        for line in ovn_appctl('ovn-northd', ('status',)).splitlines():
+            if line.startswith('Status:') and 'active' in line:
+                return True
+    except subprocess.CalledProcessError:
+        pass
+    return False
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/network/ovs/ovsdb.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/network/ovs/ovsdb.py
new file mode 100644
index 0000000000000000000000000000000000000000..5e50bc36333bde56674a82ad6c88e0d5de44ee07
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/network/ovs/ovsdb.py
@@ -0,0 +1,206 @@
+# Copyright 2019 Canonical Ltd
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+import json
+import uuid
+
+from . import utils
+
+
+class SimpleOVSDB(object):
+    """Simple interface to OVSDB through the use of command line tools.
+
+    OVS and OVN are managed through a set of databases.  These databases have
+    similar command line tools to manage them.  We make use of the similarity
+    to provide a generic class that can be used to manage them.
+
+    The OpenvSwitch project does provide a Python API, but on the surface it
+    appears to be a bit too involved for our simple use case.
+
+    Examples:
+    sbdb = SimpleOVSDB('ovn-sbctl')
+    for chs in sbdb.chassis:
+        print(chs)
+
+    ovsdb = SimpleOVSDB('ovs-vsctl')
+    for br in ovsdb.bridge:
+        if br['name'] == 'br-test':
+            ovsdb.bridge.set(br['uuid'], 'external_ids:charm', 'managed')
+    """
+
+    # For validation we keep a complete map of currently known good tool and
+    # table combinations.  This requires maintenance down the line whenever
+    # upstream adds things that downstream wants, and the cost of maintaining
+    # that will most likely be lower than the cost of finding the needle in
+    # the haystack whenever downstream code misspells something.
+    _tool_table_map = {
+        'ovs-vsctl': (
+            'autoattach',
+            'bridge',
+            'ct_timeout_policy',
+            'ct_zone',
+            'controller',
+            'datapath',
+            'flow_sample_collector_set',
+            'flow_table',
+            'ipfix',
+            'interface',
+            'manager',
+            'mirror',
+            'netflow',
+            'open_vswitch',
+            'port',
+            'qos',
+            'queue',
+            'ssl',
+            'sflow',
+        ),
+        'ovn-nbctl': (
+            'acl',
+            'address_set',
+            'connection',
+            'dhcp_options',
+            'dns',
+            'forwarding_group',
+            'gateway_chassis',
+            'ha_chassis',
+            'ha_chassis_group',
+            'load_balancer',
+            'load_balancer_health_check',
+            'logical_router',
+            'logical_router_policy',
+            'logical_router_port',
+            'logical_router_static_route',
+            'logical_switch',
+            'logical_switch_port',
+            'meter',
+            'meter_band',
+            'nat',
+            'nb_global',
+            'port_group',
+            'qos',
+            'ssl',
+        ),
+        'ovn-sbctl': (
+            'address_set',
+            'chassis',
+            'connection',
+            'controller_event',
+            'dhcp_options',
+            'dhcpv6_options',
+            'dns',
+            'datapath_binding',
+            'encap',
+            'gateway_chassis',
+            'ha_chassis',
+            'ha_chassis_group',
+            'igmp_group',
+            'ip_multicast',
+            'logical_flow',
+            'mac_binding',
+            'meter',
+            'meter_band',
+            'multicast_group',
+            'port_binding',
+            'port_group',
+            'rbac_permission',
+            'rbac_role',
+            'sb_global',
+            'ssl',
+            'service_monitor',
+        ),
+    }
+
+    def __init__(self, tool):
+        """SimpleOVSDB constructor.
+
+        :param tool: Which tool with database commands to operate on.
+                     Usually one of `ovs-vsctl`, `ovn-nbctl`, `ovn-sbctl`
+        :type tool: str
+        """
+        if tool not in self._tool_table_map:
+            raise RuntimeError(
+                'tool must be one of "{}"'.format(self._tool_table_map.keys()))
+        self._tool = tool
+
+    def __getattr__(self, table):
+        if table not in self._tool_table_map[self._tool]:
+            raise AttributeError(
+                'table "{}" not known for use with "{}"'
+                .format(table, self._tool))
+        return self.Table(self._tool, table)
+
+    class Table(object):
+        """Methods to interact with contents of OVSDB tables.
+
+        NOTE: At the time of this writing ``find`` is the only command
+        line argument to OVSDB manipulating tools that actually supports
+        JSON output.
+        """
+
+        def __init__(self, tool, table):
+            """SimpleOVSDBTable constructor.
+
+            :param tool: Which of the OVSDB command line tools to use
+            :type tool: str
+            :param table: Which table to operate on
+            :type table: str
+            """
+            self._tool = tool
+            self._table = table
+
+        def _find_tbl(self, condition=None):
+            """Run and parse output of OVSDB `find` command.
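+
+            A condition sketch (column and value are illustrative; the
+            format is 'column=value' as accepted by the `find` command)::
+
+                ovsdb.bridge._find_tbl('datapath_type=netdev')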
+
+            :param condition: An optional RFC 7047 5.1 match condition
+            :type condition: Optional[str]
+            :returns: Iterator over dictionaries with data, one per row
+            :rtype: Iterator[Dict[str, any]]
+            """
+            # When using JSON-formatted output from the OVS commands,
+            # internal OVSDB notation may occur that requires further
+            # deserializing.
+            # Reference: https://tools.ietf.org/html/rfc7047#section-5.1
+            ovs_type_cb_map = {
+                'uuid': uuid.UUID,
+                # FIXME sets also appear to sometimes contain type/value tuples
+                'set': list,
+                'map': dict,
+            }
+            cmd = [self._tool, '-f', 'json', 'find', self._table]
+            if condition:
+                cmd.append(condition)
+            output = utils._run(*cmd)
+            data = json.loads(output)
+            for row in data['data']:
+                values = []
+                for col in row:
+                    if isinstance(col, list):
+                        f = ovs_type_cb_map.get(col[0], str)
+                        values.append(f(col[1]))
+                    else:
+                        values.append(col)
+                yield dict(zip(data['headings'], values))
+
+        def __iter__(self):
+            return self._find_tbl()
+
+        def clear(self, rec, col):
+            utils._run(self._tool, 'clear', self._table, rec, col)
+
+        def find(self, condition):
+            return self._find_tbl(condition=condition)
+
+        def remove(self, rec, col, value):
+            utils._run(self._tool, 'remove', self._table, rec, col, value)
+
+        def set(self, rec, col, value):
+            utils._run(self._tool, 'set', self._table, rec,
+                       '{}={}'.format(col, value))
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/network/ovs/utils.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/network/ovs/utils.py
new file mode 100644
index 0000000000000000000000000000000000000000..53c9b4ddab6d68d2478d4161e658c70e2caa6a74
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/network/ovs/utils.py
@@ -0,0 +1,26 @@
+# Copyright 2019 Canonical Ltd
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+import subprocess
+
+
+def _run(*args):
+    """Run a process, check result, capture decoded output from STDOUT.
+
+    :param args: Command and arguments to run
+    :type args: Tuple[str, ...]
+    :returns: Decoded output captured from STDOUT
+    :rtype: str
+    :raises: subprocess.CalledProcessError
+    """
+    return subprocess.check_output(args, universal_newlines=True)
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/network/ufw.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/network/ufw.py
new file mode 100644
index 0000000000000000000000000000000000000000..b9bf7c9df5576615e23225a7fdb6c11a28931ba4
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/network/ufw.py
@@ -0,0 +1,386 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""
+This module contains helpers to add and remove ufw rules.
+
+Examples:
+
+- open SSH port for subnet 10.0.3.0/24:
+
+    >>> from charmhelpers.contrib.network import ufw
+    >>> ufw.enable()
+    >>> ufw.grant_access(src='10.0.3.0/24', dst='any', port='22', proto='tcp')
+
+- open service by name as defined in /etc/services:
+
+    >>> from charmhelpers.contrib.network import ufw
+    >>> ufw.enable()
+    >>> ufw.service('ssh', 'open')
+
+- close service by port number:
+
+    >>> from charmhelpers.contrib.network import ufw
+    >>> ufw.enable()
+    >>> ufw.service('4949', 'close')  # munin
+"""
+import os
+import re
+import subprocess
+
+from charmhelpers.core import hookenv
+from charmhelpers.core.kernel import modprobe, is_module_loaded
+
+__author__ = "Felipe Reyes <felipe.reyes@canonical.com>"
+
+
+class UFWError(Exception):
+    pass
+
+
+class UFWIPv6Error(UFWError):
+    pass
+
+
+def is_enabled():
+    """
+    Check if `ufw` is enabled
+
+    :returns: True if ufw is enabled
+    """
+    output = subprocess.check_output(['ufw', 'status'],
+                                     universal_newlines=True,
+                                     env={'LANG': 'en_US',
+                                          'PATH': os.environ['PATH']})
+
+    m = re.findall(r'^Status: active\n', output, re.M)
+
+    return len(m) >= 1
+
+
+def is_ipv6_ok(soft_fail=False):
+    """
+    Check if IPv6 support is present and ip6tables functional
+
+    :param soft_fail: If set to True and IPv6 support is broken, then reports
+                      that the host doesn't have IPv6 support, otherwise a
+                      UFWIPv6Error exception is raised.
+    :returns: True if IPv6 is working, False otherwise
+    """
+
+    # do we have IPv6 in the machine?
+    if os.path.isdir('/proc/sys/net/ipv6'):
+        # is ip6tables kernel module loaded?
+        if not is_module_loaded('ip6_tables'):
+            # ip6tables support isn't complete, let's try to load it
+            try:
+                modprobe('ip6_tables')
+                # great, we can load the module
+                return True
+            except subprocess.CalledProcessError as ex:
+                hookenv.log("Couldn't load ip6_tables module: %s" % ex.output,
+                            level="WARN")
+                # we are in a world where ip6tables isn't working
+                if soft_fail:
+                    # so we inform that the machine doesn't have IPv6
+                    return False
+                else:
+                    raise UFWIPv6Error("IPv6 firewall support broken")
+        else:
+            # the module is present :)
+            return True
+
+    else:
+        # the system doesn't have IPv6
+        return False
+
+
+def disable_ipv6():
+    """
+    Disable ufw IPv6 support in /etc/default/ufw
+    """
+    exit_code = subprocess.call(['sed', '-i', 's/IPV6=.*/IPV6=no/g',
+                                 '/etc/default/ufw'])
+    if exit_code == 0:
+        hookenv.log('IPv6 support in ufw disabled', level='INFO')
+    else:
+        hookenv.log("Couldn't disable IPv6 support in ufw", level="ERROR")
+        raise UFWError("Couldn't disable IPv6 support in ufw")
+
+
+def enable(soft_fail=False):
+    """
+    Enable ufw
+
+    :param soft_fail: If set to True silently disables IPv6 support in ufw,
+                      otherwise a UFWIPv6Error exception is raised when IPv6
+                      support is broken.
+    :returns: True if ufw is successfully enabled
+    """
+    if is_enabled():
+        return True
+
+    if not is_ipv6_ok(soft_fail):
+        disable_ipv6()
+
+    output = subprocess.check_output(['ufw', 'enable'],
+                                     universal_newlines=True,
+                                     env={'LANG': 'en_US',
+                                          'PATH': os.environ['PATH']})
+
+    m = re.findall('^Firewall is active and enabled on system startup\n',
+                   output, re.M)
+    hookenv.log(output, level='DEBUG')
+
+    if len(m) == 0:
+        hookenv.log("ufw couldn't be enabled", level='WARN')
+        return False
+    else:
+        hookenv.log("ufw enabled", level='INFO')
+        return True
+
+
+def reload():
+    """
+    Reload ufw
+
+    :returns: True if ufw is successfully reloaded
+    """
+    output = subprocess.check_output(['ufw', 'reload'],
+                                     universal_newlines=True,
+                                     env={'LANG': 'en_US',
+                                          'PATH': os.environ['PATH']})
+
+    m = re.findall('^Firewall reloaded\n',
+                   output, re.M)
+    hookenv.log(output, level='DEBUG')
+
+    if len(m) == 0:
+        hookenv.log("ufw couldn't be reloaded", level='WARN')
+        return False
+    else:
+        hookenv.log("ufw reloaded", level='INFO')
+        return True
+
+
+def disable():
+    """
+    Disable ufw
+
+    :returns: True if ufw is successfully disabled
+    """
+    if not is_enabled():
+        return True
+
+    output = subprocess.check_output(['ufw', 'disable'],
+                                     universal_newlines=True,
+                                     env={'LANG': 'en_US',
+                                          'PATH': os.environ['PATH']})
+
+    m = re.findall(r'^Firewall stopped and disabled on system startup\n',
+                   output, re.M)
+    hookenv.log(output, level='DEBUG')
+
+    if len(m) == 0:
+        hookenv.log("ufw couldn't be disabled", level='WARN')
+        return False
+    else:
+        hookenv.log("ufw disabled", level='INFO')
+        return True
+
+
+def default_policy(policy='deny', direction='incoming'):
+    """
+    Changes the default policy for traffic `direction`
+
+    :param policy: allow, deny or reject
+    :param direction: traffic direction, possible values: incoming, outgoing,
+                      routed
+    """
+    if policy not in ['allow', 'deny', 'reject']:
+        raise UFWError(('Unknown policy %s, valid values: '
+                        'allow, deny, reject') % policy)
+
+    if direction not in ['incoming', 'outgoing', 'routed']:
+        raise UFWError(('Unknown direction %s, valid values: '
+                        'incoming, outgoing, routed') % direction)
+
+    output = subprocess.check_output(['ufw', 'default', policy, direction],
+                                     universal_newlines=True,
+                                     env={'LANG': 'en_US',
+                                          'PATH': os.environ['PATH']})
+    hookenv.log(output, level='DEBUG')
+
+    m = re.findall("^Default %s policy changed to '%s'\n" % (direction,
+                                                             policy),
+                   output, re.M)
+    if len(m) == 0:
+        hookenv.log("ufw couldn't change the default policy to %s for %s"
+                    % (policy, direction), level='WARN')
+        return False
+    else:
+        hookenv.log("ufw default policy for %s changed to %s"
+                    % (direction, policy), level='INFO')
+        return True
+
+
+def modify_access(src, dst='any', port=None, proto=None, action='allow',
+                  index=None, prepend=False, comment=None):
+    """
+    Add or remove an access rule for an address or subnet
+
+    :param src: address (e.g. 192.168.1.234) or subnet
+                (e.g. 192.168.1.0/24).
+    :type src: Optional[str]
+    :param dst: destination of the connection; if the machine has multiple
+                IPs and connections to only one of them should be accepted,
+                this field has to be set.
+    :type dst: Optional[str]
+    :param port: destination port
+    :type port: Optional[int]
+    :param proto: protocol (tcp or udp)
+    :type proto: Optional[str]
+    :param action: `allow` or `delete`
+    :type action: str
+    :param index: if different from None the rule is inserted at the given
+                  `index`.
+    :type index: Optional[int]
+    :param prepend: Whether to insert the rule before all other rules matching
+                    the rule's IP type.
+    :type prepend: bool
+    :param comment: Create the rule with a comment
+    :type comment: Optional[str]
+    """
+    if not is_enabled():
+        hookenv.log('ufw is disabled, skipping modify_access()', level='WARN')
+        return
+
+    if action == 'delete':
+        if index is not None:
+            cmd = ['ufw', '--force', 'delete', str(index)]
+        else:
+            cmd = ['ufw', 'delete', 'allow']
+    elif index is not None:
+        cmd = ['ufw', 'insert', str(index), action]
+    elif prepend:
+        cmd = ['ufw', 'prepend', action]
+    else:
+        cmd = ['ufw', action]
+
+    if src is not None:
+        cmd += ['from', src]
+
+    if dst is not None:
+        cmd += ['to', dst]
+
+    if port is not None:
+        cmd += ['port', str(port)]
+
+    if proto is not None:
+        cmd += ['proto', proto]
+
+    if comment:
+        cmd.extend(['comment', comment])
+
+    hookenv.log('ufw {}: {}'.format(action, ' '.join(cmd)), level='DEBUG')
+    # capture stderr as well so the error branch below has output to log
+    p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
+    (stdout, stderr) = p.communicate()
+
+    hookenv.log(stdout, level='INFO')
+
+    if p.returncode != 0:
+        hookenv.log(stderr, level='ERROR')
+        hookenv.log('Error running: {}, exit code: {}'.format(' '.join(cmd),
+                                                              p.returncode),
+                    level='ERROR')
+
+
+def grant_access(src, dst='any', port=None, proto=None, index=None):
+    """
+    Grant access to an address or subnet
+
+    :param src: address (e.g. 192.168.1.234) or subnet
+                (e.g. 192.168.1.0/24).
+    :param dst: destination of the connection; if the machine has multiple
+                IPs and connections to only one of them should be accepted,
+                this field has to be set.
+    :param port: destination port
+    :param proto: protocol (tcp or udp)
+    :param index: if different from None the rule is inserted at the given
+                  `index`.
+    """
+    return modify_access(src, dst=dst, port=port, proto=proto, action='allow',
+                         index=index)
+
+
+def revoke_access(src, dst='any', port=None, proto=None):
+    """
+    Revoke access to an address or subnet
+
+    :param src: address (e.g. 192.168.1.234) or subnet
+                (e.g. 192.168.1.0/24).
+    :param dst: destination of the connection; if the machine has multiple
+                IPs and connections to only one of them should be accepted,
+                this field has to be set.
+    :param port: destination port
+    :param proto: protocol (tcp or udp)
+    """
+    return modify_access(src, dst=dst, port=port, proto=proto, action='delete')
+
+
+def service(name, action):
+    """
+    Open/close access to a service
+
+    :param name: could be a service name defined in `/etc/services` or a port
+                 number.
+    :param action: `open` or `close`
+    """
+    if action == 'open':
+        subprocess.check_output(['ufw', 'allow', str(name)],
+                                universal_newlines=True)
+    elif action == 'close':
+        subprocess.check_output(['ufw', 'delete', 'allow', str(name)],
+                                universal_newlines=True)
+    else:
+        raise UFWError(("'{}' not supported, use 'open' "
+                        "or 'close'").format(action))
+
+
+def status():
+    """Retrieve firewall rules as represented by UFW.
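+
+    A minimal usage sketch (illustrative)::
+
+        for num, rule in status():
+            print(num, rule['to'], rule['action'])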
+
+    :returns: Tuples with rule number and data
+              (1, {'to': '', 'action': '', 'from': '', 'ipv6': True,
+                   'comment': ''})
+    :rtype: Iterator[Tuple[int, Dict[str, Union[bool, str]]]]
+    """
+    cp = subprocess.check_output(('ufw', 'status', 'numbered',),
+                                 stderr=subprocess.STDOUT,
+                                 universal_newlines=True)
+    for line in cp.splitlines():
+        if not line.startswith('['):
+            continue
+        ipv6 = '(v6)' in line
+        line = line.replace('(v6)', '')
+        line = line.replace('[', '')
+        line = line.replace(']', '')
+        line = line.replace('Anywhere', 'any')
+        row = line.split()
+        yield (int(row[0]), {
+            'to': row[1],
+            'action': ' '.join(row[2:4]).lower(),
+            'from': row[4],
+            'ipv6': ipv6,
+            'comment': row[6] if len(row) > 5 and row[5] == '#' else '',
+        })
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..d7567b863e3a5ad2b7a7f44958b4166e0c3d346b
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/__init__.py
@@ -0,0 +1,13 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/alternatives.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/alternatives.py
new file mode 100644
index 0000000000000000000000000000000000000000..547de09c6d818772191b519618fa32b08b0e6eff
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/alternatives.py
@@ -0,0 +1,44 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+ +''' Helper for managing alternatives for file conflict resolution ''' + +import subprocess +import shutil +import os + + +def install_alternative(name, target, source, priority=50): + ''' Install alternative configuration ''' + if (os.path.exists(target) and not os.path.islink(target)): + # Move existing file/directory away before installing + shutil.move(target, '{}.bak'.format(target)) + cmd = [ + 'update-alternatives', '--force', '--install', + target, name, source, str(priority) + ] + subprocess.check_call(cmd) + + +def remove_alternative(name, source): + """Remove an installed alternative configuration file + + :param name: string name of the alternative to remove + :param source: string full path to alternative to remove + """ + cmd = [ + 'update-alternatives', '--remove', + name, source + ] + subprocess.check_call(cmd) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/amulet/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/amulet/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..d7567b863e3a5ad2b7a7f44958b4166e0c3d346b --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/amulet/__init__.py @@ -0,0 +1,13 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/amulet/deployment.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/amulet/deployment.py new file mode 100644 index 0000000000000000000000000000000000000000..dd3aebe97dd04bf42a2c4ee7c7b7b83b1917a678 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/amulet/deployment.py @@ -0,0 +1,384 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import logging +import os +import re +import sys +import six +from collections import OrderedDict +from charmhelpers.contrib.amulet.deployment import ( + AmuletDeployment +) +from charmhelpers.contrib.openstack.amulet.utils import ( + OPENSTACK_RELEASES_PAIRS +) + +DEBUG = logging.DEBUG +ERROR = logging.ERROR + + +class OpenStackAmuletDeployment(AmuletDeployment): + """OpenStack amulet deployment. + + This class inherits from AmuletDeployment and has additional support + that is specifically for use by OpenStack charms. 
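+
+    A minimal construction sketch (series and origin values are
+    illustrative)::
+
+        d = OpenStackAmuletDeployment(series='xenial',
+                                      openstack='cloud:xenial-ocata')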
+    """
+
+    def __init__(self, series=None, openstack=None, source=None,
+                 stable=True, log_level=DEBUG):
+        """Initialize the deployment environment."""
+        super(OpenStackAmuletDeployment, self).__init__(series)
+        self.log = self.get_logger(level=log_level)
+        self.log.info('OpenStackAmuletDeployment: init')
+        self.openstack = openstack
+        self.source = source
+        self.stable = stable
+
+    def get_logger(self, name="deployment-logger", level=logging.DEBUG):
+        """Get a logger object that will log to stdout."""
+        log = logging
+        logger = log.getLogger(name)
+        fmt = log.Formatter("%(asctime)s %(funcName)s "
+                            "%(levelname)s: %(message)s")
+
+        handler = log.StreamHandler(stream=sys.stdout)
+        handler.setLevel(level)
+        handler.setFormatter(fmt)
+
+        logger.addHandler(handler)
+        logger.setLevel(level)
+
+        return logger
+
+    def _determine_branch_locations(self, other_services):
+        """Determine the branch locations for the other services.
+
+        Determine if the local branch being tested is derived from its
+        stable or next (dev) branch, and based on this, use the corresponding
+        stable or next branches for the other_services."""
+
+        self.log.info('OpenStackAmuletDeployment: determine branch locations')
+
+        # Charms outside the ~openstack-charmers namespace
+        base_charms = {
+            'mysql': ['trusty'],
+            'mongodb': ['trusty'],
+            'nrpe': ['trusty', 'xenial'],
+        }
+
+        for svc in other_services:
+            # If a location has been explicitly set, use it
+            if svc.get('location'):
+                continue
+            if svc['name'] in base_charms:
+                # NOTE: not all charms have support for all series we
+                #       want/need to test against, so fix to most recent
+                #       that each base charm supports
+                target_series = self.series
+                if self.series not in base_charms[svc['name']]:
+                    target_series = base_charms[svc['name']][-1]
+                svc['location'] = 'cs:{}/{}'.format(target_series,
+                                                    svc['name'])
+            elif self.stable:
+                svc['location'] = 'cs:{}/{}'.format(self.series,
+                                                    svc['name'])
+            else:
+                svc['location'] = 'cs:~openstack-charmers-next/{}/{}'.format(
+                    self.series,
+                    svc['name']
+                )
+
+        return other_services
+
+    def _add_services(self, this_service, other_services, use_source=None,
+                      no_origin=None):
+        """Add services to the deployment and optionally set
+        openstack-origin/source.
+
+        :param this_service dict: Service dictionary describing the service
+                                  whose amulet tests are being run
+        :param other_services dict: List of service dictionaries describing
+                                    the services needed to support the target
+                                    service
+        :param use_source list: List of services which use the 'source' config
+                                option rather than 'openstack-origin'
+        :param no_origin list: List of services which do not support setting
+                               the Cloud Archive.
+        Service Dict:
+            {
+                'name': str charm-name,
+                'units': int number of units,
+                'constraints': dict of juju constraints,
+                'location': str location of charm,
+            }
+        e.g.
+        this_service = {
+            'name': 'openvswitch-odl',
+            'constraints': {'mem': '8G'},
+        }
+        other_services = [
+            {
+                'name': 'nova-compute',
+                'units': 2,
+                'constraints': {'mem': '4G'},
+                'location': 'cs:~bob/xenial/nova-compute',
+            },
+            {
+                'name': 'mysql',
+                'constraints': {'mem': '2G'},
+            },
+            {'name': 'neutron-api-odl'}]
+        use_source = ['mysql']
+        no_origin = ['neutron-api-odl']
+        """
+        self.log.info('OpenStackAmuletDeployment: adding services')
+
+        other_services = self._determine_branch_locations(other_services)
+
+        super(OpenStackAmuletDeployment, self)._add_services(this_service,
+                                                             other_services)
+
+        services = other_services
+        services.append(this_service)
+
+        use_source = use_source or []
+        no_origin = no_origin or []
+
+        # Charms which should use the source config option
+        use_source = list(set(
+            use_source + ['mysql', 'mongodb', 'rabbitmq-server', 'ceph',
+                          'ceph-osd', 'ceph-radosgw', 'ceph-mon',
+                          'ceph-proxy', 'percona-cluster', 'lxd']))
+
+        # Charms which can not use openstack-origin, i.e. many subordinates
+        no_origin = list(set(
+            no_origin + ['cinder-ceph', 'hacluster', 'neutron-openvswitch',
+                         'nrpe', 'openvswitch-odl', 'neutron-api-odl',
+                         'odl-controller', 'cinder-backup', 'nexentaedge-data',
+                         'nexentaedge-iscsi-gw', 'nexentaedge-swift-gw',
+                         'cinder-nexentaedge', 'nexentaedge-mgmt',
+                         'ceilometer-agent']))
+
+        if self.openstack:
+            for svc in services:
+                if svc['name'] not in use_source + no_origin:
+                    config = {'openstack-origin': self.openstack}
+                    self.d.configure(svc['name'], config)
+
+        if self.source:
+            for svc in services:
+                if svc['name'] in use_source and svc['name'] not in no_origin:
+                    config = {'source': self.source}
+                    self.d.configure(svc['name'], config)
+
+    def _configure_services(self, configs):
+        """Configure all of the services."""
+        self.log.info('OpenStackAmuletDeployment: configure services')
+        for service, config in six.iteritems(configs):
+            self.d.configure(service, config)
+
+    def _auto_wait_for_status(self, message=None, exclude_services=None,
+                              include_only=None, timeout=None):
+        """Wait for all units to have a specific extended status, except
+        for any defined as excluded.  Unless specified via message, any
+        status containing any case of 'ready' will be considered a match.
+
+        Examples of message usage:
+
+        Wait for all unit status to CONTAIN any case of 'ready' or 'ok':
+            message = re.compile('.*ready.*|.*ok.*', re.IGNORECASE)
+
+        Wait for all units to reach this status (exact match):
+            message = re.compile('^Unit is ready and clustered$')
+
+        Wait for all units to reach any one of these (exact match):
+            message = re.compile('Unit is ready|OK|Ready')
+
+        Wait for at least one unit to reach this status (exact match):
+            message = {'ready'}
+
+        See Amulet's sentry.wait_for_messages() for message usage detail.
+        https://github.com/juju/amulet/blob/master/amulet/sentry.py
+
+        :param message: Expected status match
+        :param exclude_services: List of juju service names to ignore,
+            not to be used in conjunction with include_only.
+        :param include_only: List of juju service names to exclusively check,
+            not to be used in conjunction with exclude_services.
+        :param timeout: Maximum time in seconds to wait for status match
+        :returns: None.  Raises if timeout is hit.
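+
+        A typical call sketch (the service name is illustrative)::
+
+            self._auto_wait_for_status(exclude_services=['mysql'])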
+        """
+        if not timeout:
+            timeout = int(os.environ.get('AMULET_SETUP_TIMEOUT', 1800))
+        self.log.info('Waiting for extended status on units for {}s...'
+                      ''.format(timeout))
+
+        all_services = self.d.services.keys()
+
+        if exclude_services and include_only:
+            raise ValueError('exclude_services can not be used '
+                             'with include_only')
+
+        if message:
+            if isinstance(message, re._pattern_type):
+                match = message.pattern
+            else:
+                match = message
+
+            self.log.debug('Custom extended status wait match: '
+                           '{}'.format(match))
+        else:
+            self.log.debug('Default extended status wait match: contains '
+                           'READY (case-insensitive)')
+            message = re.compile('.*ready.*', re.IGNORECASE)
+
+        if exclude_services:
+            self.log.debug('Excluding services from extended status match: '
+                           '{}'.format(exclude_services))
+        else:
+            exclude_services = []
+
+        if include_only:
+            services = include_only
+        else:
+            services = list(set(all_services) - set(exclude_services))
+
+        self.log.debug('Waiting up to {}s for extended status on services: '
+                       '{}'.format(timeout, services))
+        service_messages = {service: message for service in services}
+
+        # Check for idleness
+        self.d.sentry.wait(timeout=timeout)
+        # Check for error states and bail early
+        self.d.sentry.wait_for_status(self.d.juju_env, services, timeout=timeout)
+        # Check for ready messages
+        self.d.sentry.wait_for_messages(service_messages, timeout=timeout)
+
+        self.log.info('OK')
+
+    def _get_openstack_release(self):
+        """Get openstack release.
+
+        Return an integer representing the enum value of the openstack
+        release.
+        """
+        # Must be ordered by OpenStack release (not by Ubuntu release):
+        for i, os_pair in enumerate(OPENSTACK_RELEASES_PAIRS):
+            setattr(self, os_pair, i)
+
+        releases = {
+            ('trusty', None): self.trusty_icehouse,
+            ('trusty', 'cloud:trusty-kilo'): self.trusty_kilo,
+            ('trusty', 'cloud:trusty-liberty'): self.trusty_liberty,
+            ('trusty', 'cloud:trusty-mitaka'): self.trusty_mitaka,
+            ('xenial', None): self.xenial_mitaka,
+            ('xenial', 'cloud:xenial-newton'): self.xenial_newton,
+            ('xenial', 'cloud:xenial-ocata'): self.xenial_ocata,
+            ('xenial', 'cloud:xenial-pike'): self.xenial_pike,
+            ('xenial', 'cloud:xenial-queens'): self.xenial_queens,
+            ('yakkety', None): self.yakkety_newton,
+            ('zesty', None): self.zesty_ocata,
+            ('artful', None): self.artful_pike,
+            ('bionic', None): self.bionic_queens,
+            ('bionic', 'cloud:bionic-rocky'): self.bionic_rocky,
+            ('bionic', 'cloud:bionic-stein'): self.bionic_stein,
+            ('bionic', 'cloud:bionic-train'): self.bionic_train,
+            ('bionic', 'cloud:bionic-ussuri'): self.bionic_ussuri,
+            ('cosmic', None): self.cosmic_rocky,
+            ('disco', None): self.disco_stein,
+            ('eoan', None): self.eoan_train,
+            ('focal', None): self.focal_ussuri,
+        }
+        return releases[(self.series, self.openstack)]
+
+    def _get_openstack_release_string(self):
+        """Get openstack release string.
+
+        Return a string representing the openstack release.
+        """
+        releases = OrderedDict([
+            ('trusty', 'icehouse'),
+            ('xenial', 'mitaka'),
+            ('yakkety', 'newton'),
+            ('zesty', 'ocata'),
+            ('artful', 'pike'),
+            ('bionic', 'queens'),
+            ('cosmic', 'rocky'),
+            ('disco', 'stein'),
+            ('eoan', 'train'),
+            ('focal', 'ussuri'),
+        ])
+        if self.openstack:
+            os_origin = self.openstack.split(':')[1]
+            return os_origin.split('%s-' % self.series)[1].split('/')[0]
+        else:
+            return releases[self.series]
+
+    def get_percona_service_entry(self, memory_constraint=None):
+        """Return an amulet service entry for percona cluster.
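+
+        For example, with the default constraint the returned entry is
+        (illustrative) {'name': 'percona-cluster',
+        'constraints': {'mem': '3072M'}}, plus a 'location' key on
+        releases up to trusty-mitaka.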
+ + :param memory_constraint: Override the default memory constraint + in the service entry. + :type memory_constraint: str + :returns: Amulet service entry. + :rtype: dict + """ + memory_constraint = memory_constraint or '3072M' + svc_entry = { + 'name': 'percona-cluster', + 'constraints': {'mem': memory_constraint}} + if self._get_openstack_release() <= self.trusty_mitaka: + svc_entry['location'] = 'cs:trusty/percona-cluster' + return svc_entry + + def get_ceph_expected_pools(self, radosgw=False): + """Return a list of expected ceph pools in a ceph + cinder + glance + test scenario, based on OpenStack release and whether ceph radosgw + is flagged as present or not.""" + + if self._get_openstack_release() == self.trusty_icehouse: + # Icehouse + pools = [ + 'data', + 'metadata', + 'rbd', + 'cinder-ceph', + 'glance' + ] + elif (self.trusty_kilo <= self._get_openstack_release() <= + self.zesty_ocata): + # Kilo through Ocata + pools = [ + 'rbd', + 'cinder-ceph', + 'glance' + ] + else: + # Pike and later + pools = [ + 'cinder-ceph', + 'glance' + ] + + if radosgw: + pools.extend([ + '.rgw.root', + '.rgw.control', + '.rgw', + '.rgw.gc', + '.users.uid' + ]) + + return pools diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/amulet/utils.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/amulet/utils.py new file mode 100644 index 0000000000000000000000000000000000000000..14864198926b82afe382d07263034b020ad43c09 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/amulet/utils.py @@ -0,0 +1,1593 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +import amulet +import json +import logging +import os +import re +import six +import time +import urllib +import urlparse + +import cinderclient.v1.client as cinder_client +import cinderclient.v2.client as cinder_clientv2 +import glanceclient.v1 as glance_client +import glanceclient.v2 as glance_clientv2 +import heatclient.v1.client as heat_client +from keystoneclient.v2_0 import client as keystone_client +from keystoneauth1.identity import ( + v3, + v2, +) +from keystoneauth1 import session as keystone_session +from keystoneclient.v3 import client as keystone_client_v3 +from novaclient import exceptions + +import novaclient.client as nova_client +import novaclient +import pika +import swiftclient + +from charmhelpers.core.decorators import retry_on_exception +from charmhelpers.contrib.amulet.utils import ( + AmuletUtils +) +from charmhelpers.core.host import CompareHostReleases + +DEBUG = logging.DEBUG +ERROR = logging.ERROR + +NOVA_CLIENT_VERSION = "2" + +OPENSTACK_RELEASES_PAIRS = [ + 'trusty_icehouse', 'trusty_kilo', 'trusty_liberty', + 'trusty_mitaka', 'xenial_mitaka', + 'xenial_newton', 'yakkety_newton', + 'xenial_ocata', 'zesty_ocata', + 'xenial_pike', 'artful_pike', + 'xenial_queens', 'bionic_queens', + 'bionic_rocky', 'cosmic_rocky', + 'bionic_stein', 'disco_stein', + 'bionic_train', 'eoan_train', + 'bionic_ussuri', 'focal_ussuri', +] + + +class OpenStackAmuletUtils(AmuletUtils): + """OpenStack amulet utilities. + + This class inherits from AmuletUtils and has additional support + that is specifically for use by OpenStack charm tests. + """ + + def __init__(self, log_level=ERROR): + """Initialize the deployment environment.""" + super(OpenStackAmuletUtils, self).__init__(log_level) + + def validate_endpoint_data(self, endpoints, admin_port, internal_port, + public_port, expected, openstack_release=None): + """Validate endpoint data. Pick the correct validator based on + OpenStack release. Expected data should be in the v2 format: + { + 'id': id, + 'region': region, + 'adminurl': adminurl, + 'internalurl': internalurl, + 'publicurl': publicurl, + 'service_id': service_id} + + """ + validation_function = self.validate_v2_endpoint_data + xenial_queens = OPENSTACK_RELEASES_PAIRS.index('xenial_queens') + if openstack_release and openstack_release >= xenial_queens: + validation_function = self.validate_v3_endpoint_data + expected = { + 'id': expected['id'], + 'region': expected['region'], + 'region_id': 'RegionOne', + 'url': self.valid_url, + 'interface': self.not_null, + 'service_id': expected['service_id']} + return validation_function(endpoints, admin_port, internal_port, + public_port, expected) + + def validate_v2_endpoint_data(self, endpoints, admin_port, internal_port, + public_port, expected): + """Validate endpoint data. + + Validate actual endpoint data vs expected endpoint data. The ports + are used to find the matching endpoint. 
+        """
+        self.log.debug('Validating endpoint data...')
+        self.log.debug('actual: {}'.format(repr(endpoints)))
+        found = False
+        for ep in endpoints:
+            self.log.debug('endpoint: {}'.format(repr(ep)))
+            if (admin_port in ep.adminurl and
+                    internal_port in ep.internalurl and
+                    public_port in ep.publicurl):
+                found = True
+                actual = {'id': ep.id,
+                          'region': ep.region,
+                          'adminurl': ep.adminurl,
+                          'internalurl': ep.internalurl,
+                          'publicurl': ep.publicurl,
+                          'service_id': ep.service_id}
+                ret = self._validate_dict_data(expected, actual)
+                if ret:
+                    return 'unexpected endpoint data - {}'.format(ret)
+
+        if not found:
+            return 'endpoint not found'
+
+    def validate_v3_endpoint_data(self, endpoints, admin_port, internal_port,
+                                  public_port, expected, expected_num_eps=3):
+        """Validate keystone v3 endpoint data.
+
+        Validate the v3 endpoint data which has changed from v2.  The
+        ports are used to find the matching endpoint.
+
+        The new v3 endpoint data looks like:
+
+        [<Endpoint id=...,
+                   interface=...,
+                   links={u'self': u'...'},
+                   region=RegionOne,
+                   region_id=RegionOne,
+                   service_id=17f842a0dc084b928e476fafe67e4095,
+                   url=http://10.5.6.5:9312>,
+         <Endpoint id=...,
+                   interface=...,
+                   links={u'self': u'...'},
+                   region=RegionOne,
+                   region_id=RegionOne,
+                   service_id=72fc8736fb41435e8b3584205bb2cfa3,
+                   url=http://10.5.6.6:35357/v3>,
+         ... ]
+        """
+        self.log.debug('Validating v3 endpoint data...')
+        self.log.debug('actual: {}'.format(repr(endpoints)))
+        found = []
+        for ep in endpoints:
+            self.log.debug('endpoint: {}'.format(repr(ep)))
+            if ((admin_port in ep.url and ep.interface == 'admin') or
+                    (internal_port in ep.url and ep.interface == 'internal') or
+                    (public_port in ep.url and ep.interface == 'public')):
+                found.append(ep.interface)
+                # note we ignore the links member.
+                actual = {'id': ep.id,
+                          'region': ep.region,
+                          'region_id': ep.region_id,
+                          'interface': self.not_null,
+                          'url': ep.url,
+                          'service_id': ep.service_id, }
+                ret = self._validate_dict_data(expected, actual)
+                if ret:
+                    return 'unexpected endpoint data - {}'.format(ret)
+
+        if len(found) != expected_num_eps:
+            return 'Unexpected number of endpoints found'
+
+    def convert_svc_catalog_endpoint_data_to_v3(self, ep_data):
+        """Convert v2 endpoint data into v3.
+
+        {
+            'service_name1': [
+                {
+                    'adminURL': adminURL,
+                    'id': id,
+                    'region': region,
+                    'publicURL': publicURL,
+                    'internalURL': internalURL
+                }],
+            'service_name2': [
+                {
+                    'adminURL': adminURL,
+                    'id': id,
+                    'region': region,
+                    'publicURL': publicURL,
+                    'internalURL': internalURL
+                }],
+        }
+        """
+        self.log.warn("Endpoint ID and Region ID validation is limited to not "
+                      "null checks after v2 to v3 conversion")
+        for svc in ep_data.keys():
+            assert len(ep_data[svc]) == 1, "Unknown data format"
+            svc_ep_data = ep_data[svc][0]
+            ep_data[svc] = [
+                {
+                    'url': svc_ep_data['adminURL'],
+                    'interface': 'admin',
+                    'region': svc_ep_data['region'],
+                    'region_id': self.not_null,
+                    'id': self.not_null},
+                {
+                    'url': svc_ep_data['publicURL'],
+                    'interface': 'public',
+                    'region': svc_ep_data['region'],
+                    'region_id': self.not_null,
+                    'id': self.not_null},
+                {
+                    'url': svc_ep_data['internalURL'],
+                    'interface': 'internal',
+                    'region': svc_ep_data['region'],
+                    'region_id': self.not_null,
+                    'id': self.not_null}]
+        return ep_data
+
+    def validate_svc_catalog_endpoint_data(self, expected, actual,
+                                           openstack_release=None):
+        """Validate service catalog endpoint data.  Pick the correct validator
+        for the OpenStack version.  Expected data should be in the v2 format:
+        {
+            'service_name1': [
+                {
+                    'adminURL': adminURL,
+                    'id': id,
+                    'region': region,
+                    'publicURL': publicURL,
+                    'internalURL': internalURL
+                }],
+            'service_name2': [
+                {
+                    'adminURL': adminURL,
+                    'id': id,
+                    'region': region,
+                    'publicURL': publicURL,
+                    'internalURL': internalURL
+                }],
+        }
+
+        """
+        validation_function = self.validate_v2_svc_catalog_endpoint_data
+        xenial_queens = OPENSTACK_RELEASES_PAIRS.index('xenial_queens')
+        if openstack_release and openstack_release >= xenial_queens:
+            validation_function = self.validate_v3_svc_catalog_endpoint_data
+            expected = self.convert_svc_catalog_endpoint_data_to_v3(expected)
+        return validation_function(expected, actual)
+
+    def validate_v2_svc_catalog_endpoint_data(self, expected, actual):
+        """Validate service catalog endpoint data.
+
+        Validate a list of actual service catalog endpoints vs a list of
+        expected service catalog endpoints.
+        """
+        self.log.debug('Validating service catalog endpoint data...')
+        self.log.debug('actual: {}'.format(repr(actual)))
+        for k, v in six.iteritems(expected):
+            if k in actual:
+                ret = self._validate_dict_data(expected[k][0], actual[k][0])
+                if ret:
+                    return self.endpoint_error(k, ret)
+            else:
+                return "endpoint {} does not exist".format(k)
+        return ret
+
+    def validate_v3_svc_catalog_endpoint_data(self, expected, actual):
+        """Validate the keystone v3 catalog endpoint data.
+
+        Validate a list of dictionaries that make up the keystone v3 service
+        catalogue.
+
+        It is in the form of:
+
+
+        {u'identity': [{u'id': u'48346b01c6804b298cdd7349aadb732e',
+                        u'interface': u'admin',
+                        u'region': u'RegionOne',
+                        u'region_id': u'RegionOne',
+                        u'url': u'http://10.5.5.224:35357/v3'},
+                       {u'id': u'8414f7352a4b47a69fddd9dbd2aef5cf',
+                        u'interface': u'public',
+                        u'region': u'RegionOne',
+                        u'region_id': u'RegionOne',
+                        u'url': u'http://10.5.5.224:5000/v3'},
+                       {u'id': u'd5ca31440cc24ee1bf625e2996fb6a5b',
+                        u'interface': u'internal',
+                        u'region': u'RegionOne',
+                        u'region_id': u'RegionOne',
+                        u'url': u'http://10.5.5.224:5000/v3'}],
+         u'key-manager': [{u'id': u'68ebc17df0b045fcb8a8a433ebea9e62',
+                           u'interface': u'public',
+                           u'region': u'RegionOne',
+                           u'region_id': u'RegionOne',
+                           u'url': u'http://10.5.5.223:9311'},
+                          {u'id': u'9cdfe2a893c34afd8f504eb218cd2f9d',
+                           u'interface': u'internal',
+                           u'region': u'RegionOne',
+                           u'region_id': u'RegionOne',
+                           u'url': u'http://10.5.5.223:9311'},
+                          {u'id': u'f629388955bc407f8b11d8b7ca168086',
+                           u'interface': u'admin',
+                           u'region': u'RegionOne',
+                           u'region_id': u'RegionOne',
+                           u'url': u'http://10.5.5.223:9312'}]}
+
+        Note that an added complication is that the order of admin, public
+        and internal endpoints (the 'interface' key) is not guaranteed
+        within each region.
+
+        Thus, the function sorts the expected and actual lists using the
+        interface key as a sort key, prior to the comparison.
+ """ + self.log.debug('Validating v3 service catalog endpoint data...') + self.log.debug('actual: {}'.format(repr(actual))) + for k, v in six.iteritems(expected): + if k in actual: + l_expected = sorted(v, key=lambda x: x['interface']) + l_actual = sorted(actual[k], key=lambda x: x['interface']) + if len(l_actual) != len(l_expected): + return ("endpoint {} has differing number of interfaces " + " - expected({}), actual({})" + .format(k, len(l_expected), len(l_actual))) + for i_expected, i_actual in zip(l_expected, l_actual): + self.log.debug("checking interface {}" + .format(i_expected['interface'])) + ret = self._validate_dict_data(i_expected, i_actual) + if ret: + return self.endpoint_error(k, ret) + else: + return "endpoint {} does not exist".format(k) + return ret + + def validate_tenant_data(self, expected, actual): + """Validate tenant data. + + Validate a list of actual tenant data vs list of expected tenant + data. + """ + self.log.debug('Validating tenant data...') + self.log.debug('actual: {}'.format(repr(actual))) + for e in expected: + found = False + for act in actual: + a = {'enabled': act.enabled, 'description': act.description, + 'name': act.name, 'id': act.id} + if e['name'] == a['name']: + found = True + ret = self._validate_dict_data(e, a) + if ret: + return "unexpected tenant data - {}".format(ret) + if not found: + return "tenant {} does not exist".format(e['name']) + return ret + + def validate_role_data(self, expected, actual): + """Validate role data. + + Validate a list of actual role data vs a list of expected role + data. + """ + self.log.debug('Validating role data...') + self.log.debug('actual: {}'.format(repr(actual))) + for e in expected: + found = False + for act in actual: + a = {'name': act.name, 'id': act.id} + if e['name'] == a['name']: + found = True + ret = self._validate_dict_data(e, a) + if ret: + return "unexpected role data - {}".format(ret) + if not found: + return "role {} does not exist".format(e['name']) + return ret + + def validate_user_data(self, expected, actual, api_version=None): + """Validate user data. + + Validate a list of actual user data vs a list of expected user + data. + """ + self.log.debug('Validating user data...') + self.log.debug('actual: {}'.format(repr(actual))) + for e in expected: + found = False + for act in actual: + if e['name'] == act.name: + a = {'enabled': act.enabled, 'name': act.name, + 'email': act.email, 'id': act.id} + if api_version == 3: + a['default_project_id'] = getattr(act, + 'default_project_id', + 'none') + else: + a['tenantId'] = act.tenantId + found = True + ret = self._validate_dict_data(e, a) + if ret: + return "unexpected user data - {}".format(ret) + if not found: + return "user {} does not exist".format(e['name']) + return ret + + def validate_flavor_data(self, expected, actual): + """Validate flavor data. + + Validate a list of actual flavors vs a list of expected flavors. 
+ """ + self.log.debug('Validating flavor data...') + self.log.debug('actual: {}'.format(repr(actual))) + act = [a.name for a in actual] + return self._validate_list_data(expected, act) + + def tenant_exists(self, keystone, tenant): + """Return True if tenant exists.""" + self.log.debug('Checking if tenant exists ({})...'.format(tenant)) + return tenant in [t.name for t in keystone.tenants.list()] + + @retry_on_exception(num_retries=5, base_delay=1) + def keystone_wait_for_propagation(self, sentry_relation_pairs, + api_version): + """Iterate over list of sentry and relation tuples and verify that + api_version has the expected value. + + :param sentry_relation_pairs: list of sentry, relation name tuples used + for monitoring propagation of relation + data + :param api_version: api_version to expect in relation data + :returns: None if successful. Raise on error. + """ + for (sentry, relation_name) in sentry_relation_pairs: + rel = sentry.relation('identity-service', + relation_name) + self.log.debug('keystone relation data: {}'.format(rel)) + if rel.get('api_version') != str(api_version): + raise Exception("api_version not propagated through relation" + " data yet ('{}' != '{}')." + "".format(rel.get('api_version'), api_version)) + + def keystone_configure_api_version(self, sentry_relation_pairs, deployment, + api_version): + """Configure preferred-api-version of keystone in deployment and + monitor provided list of relation objects for propagation + before returning to caller. + + :param sentry_relation_pairs: list of sentry, relation tuples used for + monitoring propagation of relation data + :param deployment: deployment to configure + :param api_version: value preferred-api-version will be set to + :returns: None if successful. Raise on error. + """ + self.log.debug("Setting keystone preferred-api-version: '{}'" + "".format(api_version)) + + config = {'preferred-api-version': api_version} + deployment.d.configure('keystone', config) + deployment._auto_wait_for_status() + self.keystone_wait_for_propagation(sentry_relation_pairs, api_version) + + def authenticate_cinder_admin(self, keystone, api_version=2): + """Authenticates admin user with cinder.""" + self.log.debug('Authenticating cinder admin...') + _clients = { + 1: cinder_client.Client, + 2: cinder_clientv2.Client} + return _clients[api_version](session=keystone.session) + + def authenticate_keystone(self, keystone_ip, username, password, + api_version=False, admin_port=False, + user_domain_name=None, domain_name=None, + project_domain_name=None, project_name=None): + """Authenticate with Keystone""" + self.log.debug('Authenticating with keystone...') + if not api_version: + api_version = 2 + sess, auth = self.get_keystone_session( + keystone_ip=keystone_ip, + username=username, + password=password, + api_version=api_version, + admin_port=admin_port, + user_domain_name=user_domain_name, + domain_name=domain_name, + project_domain_name=project_domain_name, + project_name=project_name + ) + if api_version == 2: + client = keystone_client.Client(session=sess) + else: + client = keystone_client_v3.Client(session=sess) + # This populates the client.service_catalog + client.auth_ref = auth.get_access(sess) + return client + + def get_keystone_session(self, keystone_ip, username, password, + api_version=False, admin_port=False, + user_domain_name=None, domain_name=None, + project_domain_name=None, project_name=None): + """Return a keystone session object""" + ep = self.get_keystone_endpoint(keystone_ip, + api_version=api_version, + 
                                        admin_port=admin_port)
+        if api_version == 2:
+            auth = v2.Password(
+                username=username,
+                password=password,
+                tenant_name=project_name,
+                auth_url=ep
+            )
+            sess = keystone_session.Session(auth=auth)
+        else:
+            auth = v3.Password(
+                user_domain_name=user_domain_name,
+                username=username,
+                password=password,
+                domain_name=domain_name,
+                project_domain_name=project_domain_name,
+                project_name=project_name,
+                auth_url=ep
+            )
+            sess = keystone_session.Session(auth=auth)
+        return (sess, auth)
+
+    def get_keystone_endpoint(self, keystone_ip, api_version=None,
+                              admin_port=False):
+        """Return keystone endpoint"""
+        port = 5000
+        if admin_port:
+            port = 35357
+        base_ep = "http://{}:{}".format(keystone_ip.strip().decode('utf-8'),
+                                        port)
+        if api_version == 2:
+            ep = base_ep + "/v2.0"
+        else:
+            ep = base_ep + "/v3"
+        return ep
+
+    def get_default_keystone_session(self, keystone_sentry,
+                                     openstack_release=None, api_version=2):
+        """Return a keystone session object and client object assuming standard
+        default settings
+
+        Example call in amulet tests:
+            self.keystone_session, self.keystone = u.get_default_keystone_session(
+                self.keystone_sentry,
+                openstack_release=self._get_openstack_release())
+
+        The session can then be used to auth other clients:
+            neutronclient.Client(session=session)
+            aodh_client.Client(session=session)
+            etc.
+        """
+        self.log.debug('Authenticating keystone admin...')
+        # 11 => xenial_queens
+        if api_version == 3 or (openstack_release and openstack_release >= 11):
+            client_class = keystone_client_v3.Client
+            api_version = 3
+        else:
+            client_class = keystone_client.Client
+        keystone_ip = keystone_sentry.info['public-address']
+        session, auth = self.get_keystone_session(
+            keystone_ip,
+            api_version=api_version,
+            username='admin',
+            password='openstack',
+            project_name='admin',
+            user_domain_name='admin_domain',
+            project_domain_name='admin_domain')
+        client = client_class(session=session)
+        # This populates the client.service_catalog
+        client.auth_ref = auth.get_access(session)
+        return session, client
+
+    def authenticate_keystone_admin(self, keystone_sentry, user, password,
+                                    tenant=None, api_version=None,
+                                    keystone_ip=None, user_domain_name=None,
+                                    project_domain_name=None,
+                                    project_name=None):
+        """Authenticates admin user with the keystone admin endpoint."""
+        self.log.debug('Authenticating keystone admin...')
+        if not keystone_ip:
+            keystone_ip = keystone_sentry.info['public-address']
+
+        # To support backward compatibility usage of this function
+        if not project_name:
+            project_name = tenant
+        if api_version == 3 and not user_domain_name:
+            user_domain_name = 'admin_domain'
+        if api_version == 3 and not project_domain_name:
+            project_domain_name = 'admin_domain'
+        if api_version == 3 and not project_name:
+            project_name = 'admin'
+
+        return self.authenticate_keystone(
+            keystone_ip, user, password,
+            api_version=api_version,
+            user_domain_name=user_domain_name,
+            project_domain_name=project_domain_name,
+            project_name=project_name,
+            admin_port=True)
+
+    def authenticate_keystone_user(self, keystone, user, password, tenant):
+        """Authenticates a regular user with the keystone public endpoint."""
+        self.log.debug('Authenticating keystone user ({})...'.format(user))
+        ep = keystone.service_catalog.url_for(service_type='identity',
+                                              interface='publicURL')
+        keystone_ip = urlparse.urlparse(ep).hostname
+
+        return self.authenticate_keystone(keystone_ip, user, password,
+                                          project_name=tenant)
+
+    def authenticate_glance_admin(self, keystone, force_v1_client=False):
"""Authenticates admin user with glance.""" + self.log.debug('Authenticating glance admin...') + ep = keystone.service_catalog.url_for(service_type='image', + interface='adminURL') + if not force_v1_client and keystone.session: + return glance_clientv2.Client("2", session=keystone.session) + else: + return glance_client.Client(ep, token=keystone.auth_token) + + def authenticate_heat_admin(self, keystone): + """Authenticates the admin user with heat.""" + self.log.debug('Authenticating heat admin...') + ep = keystone.service_catalog.url_for(service_type='orchestration', + interface='publicURL') + if keystone.session: + return heat_client.Client(endpoint=ep, session=keystone.session) + else: + return heat_client.Client(endpoint=ep, token=keystone.auth_token) + + def authenticate_nova_user(self, keystone, user, password, tenant): + """Authenticates a regular user with nova-api.""" + self.log.debug('Authenticating nova user ({})...'.format(user)) + ep = keystone.service_catalog.url_for(service_type='identity', + interface='publicURL') + if keystone.session: + return nova_client.Client(NOVA_CLIENT_VERSION, + session=keystone.session, + auth_url=ep) + elif novaclient.__version__[0] >= "7": + return nova_client.Client(NOVA_CLIENT_VERSION, + username=user, password=password, + project_name=tenant, auth_url=ep) + else: + return nova_client.Client(NOVA_CLIENT_VERSION, + username=user, api_key=password, + project_id=tenant, auth_url=ep) + + def authenticate_swift_user(self, keystone, user, password, tenant): + """Authenticates a regular user with swift api.""" + self.log.debug('Authenticating swift user ({})...'.format(user)) + ep = keystone.service_catalog.url_for(service_type='identity', + interface='publicURL') + if keystone.session: + return swiftclient.Connection(session=keystone.session) + else: + return swiftclient.Connection(authurl=ep, + user=user, + key=password, + tenant_name=tenant, + auth_version='2.0') + + def create_flavor(self, nova, name, ram, vcpus, disk, flavorid="auto", + ephemeral=0, swap=0, rxtx_factor=1.0, is_public=True): + """Create the specified flavor.""" + try: + nova.flavors.find(name=name) + except (exceptions.NotFound, exceptions.NoUniqueMatch): + self.log.debug('Creating flavor ({})'.format(name)) + nova.flavors.create(name, ram, vcpus, disk, flavorid, + ephemeral, swap, rxtx_factor, is_public) + + def glance_create_image(self, glance, image_name, image_url, + download_dir='tests', + hypervisor_type=None, + disk_format='qcow2', + architecture='x86_64', + container_format='bare'): + """Download an image and upload it to glance, validate its status + and return an image object pointer. KVM defaults, can override for + LXD. 
+ + :param glance: pointer to authenticated glance api connection + :param image_name: display name for new image + :param image_url: url to retrieve + :param download_dir: directory to store downloaded image file + :param hypervisor_type: glance image hypervisor property + :param disk_format: glance image disk format + :param architecture: glance image architecture property + :param container_format: glance image container format + :returns: glance image pointer + """ + self.log.debug('Creating glance image ({}) from ' + '{}...'.format(image_name, image_url)) + + # Download image + http_proxy = os.getenv('OS_TEST_HTTP_PROXY') + self.log.debug('OS_TEST_HTTP_PROXY: {}'.format(http_proxy)) + if http_proxy: + proxies = {'http': http_proxy} + opener = urllib.FancyURLopener(proxies) + else: + opener = urllib.FancyURLopener() + + abs_file_name = os.path.join(download_dir, image_name) + if not os.path.exists(abs_file_name): + opener.retrieve(image_url, abs_file_name) + + # Create glance image + glance_properties = { + 'architecture': architecture, + } + if hypervisor_type: + glance_properties['hypervisor_type'] = hypervisor_type + # Create glance image + if float(glance.version) < 2.0: + with open(abs_file_name) as f: + image = glance.images.create( + name=image_name, + is_public=True, + disk_format=disk_format, + container_format=container_format, + properties=glance_properties, + data=f) + else: + image = glance.images.create( + name=image_name, + visibility="public", + disk_format=disk_format, + container_format=container_format) + glance.images.upload(image.id, open(abs_file_name, 'rb')) + glance.images.update(image.id, **glance_properties) + + # Wait for image to reach active status + img_id = image.id + ret = self.resource_reaches_status(glance.images, img_id, + expected_stat='active', + msg='Image status wait') + if not ret: + msg = 'Glance image failed to reach expected state.' + amulet.raise_status(amulet.FAIL, msg=msg) + + # Re-validate new image + self.log.debug('Validating image attributes...') + val_img_name = glance.images.get(img_id).name + val_img_stat = glance.images.get(img_id).status + val_img_cfmt = glance.images.get(img_id).container_format + val_img_dfmt = glance.images.get(img_id).disk_format + + if float(glance.version) < 2.0: + val_img_pub = glance.images.get(img_id).is_public + else: + val_img_pub = glance.images.get(img_id).visibility == "public" + + msg_attr = ('Image attributes - name:{} public:{} id:{} stat:{} ' + 'container fmt:{} disk fmt:{}'.format( + val_img_name, val_img_pub, img_id, + val_img_stat, val_img_cfmt, val_img_dfmt)) + + if val_img_name == image_name and val_img_stat == 'active' \ + and val_img_pub is True and val_img_cfmt == container_format \ + and val_img_dfmt == disk_format: + self.log.debug(msg_attr) + else: + msg = ('Image validation failed, {}'.format(msg_attr)) + amulet.raise_status(amulet.FAIL, msg=msg) + + return image + + def create_cirros_image(self, glance, image_name, hypervisor_type=None): + """Download the latest cirros image and upload it to glance, + validate and return a resource pointer. 
+ + :param glance: pointer to authenticated glance connection + :param image_name: display name for new image + :param hypervisor_type: glance image hypervisor property + :returns: glance image pointer + """ + # /!\ DEPRECATION WARNING + self.log.warn('/!\\ DEPRECATION WARNING: use ' + 'glance_create_image instead of ' + 'create_cirros_image.') + + self.log.debug('Creating glance cirros image ' + '({})...'.format(image_name)) + + # Get cirros image URL + http_proxy = os.getenv('OS_TEST_HTTP_PROXY') + self.log.debug('OS_TEST_HTTP_PROXY: {}'.format(http_proxy)) + if http_proxy: + proxies = {'http': http_proxy} + opener = urllib.FancyURLopener(proxies) + else: + opener = urllib.FancyURLopener() + + f = opener.open('http://download.cirros-cloud.net/version/released') + version = f.read().strip() + cirros_img = 'cirros-{}-x86_64-disk.img'.format(version) + cirros_url = 'http://{}/{}/{}'.format('download.cirros-cloud.net', + version, cirros_img) + f.close() + + return self.glance_create_image( + glance, + image_name, + cirros_url, + hypervisor_type=hypervisor_type) + + def delete_image(self, glance, image): + """Delete the specified image.""" + + # /!\ DEPRECATION WARNING + self.log.warn('/!\\ DEPRECATION WARNING: use ' + 'delete_resource instead of delete_image.') + self.log.debug('Deleting glance image ({})...'.format(image)) + return self.delete_resource(glance.images, image, msg='glance image') + + def create_instance(self, nova, image_name, instance_name, flavor): + """Create the specified instance.""" + self.log.debug('Creating instance ' + '({}|{}|{})'.format(instance_name, image_name, flavor)) + image = nova.glance.find_image(image_name) + flavor = nova.flavors.find(name=flavor) + instance = nova.servers.create(name=instance_name, image=image, + flavor=flavor) + + count = 1 + status = instance.status + while status != 'ACTIVE' and count < 60: + time.sleep(3) + instance = nova.servers.get(instance.id) + status = instance.status + self.log.debug('instance status: {}'.format(status)) + count += 1 + + if status != 'ACTIVE': + self.log.error('instance creation timed out') + return None + + return instance + + def delete_instance(self, nova, instance): + """Delete the specified instance.""" + + # /!\ DEPRECATION WARNING + self.log.warn('/!\\ DEPRECATION WARNING: use ' + 'delete_resource instead of delete_instance.') + self.log.debug('Deleting instance ({})...'.format(instance)) + return self.delete_resource(nova.servers, instance, + msg='nova instance') + + def create_or_get_keypair(self, nova, keypair_name="testkey"): + """Create a new keypair, or return pointer if it already exists.""" + try: + _keypair = nova.keypairs.get(keypair_name) + self.log.debug('Keypair ({}) already exists, ' + 'using it.'.format(keypair_name)) + return _keypair + except Exception: + self.log.debug('Keypair ({}) does not exist, ' + 'creating it.'.format(keypair_name)) + + _keypair = nova.keypairs.create(name=keypair_name) + return _keypair + + def _get_cinder_obj_name(self, cinder_object): + """Retrieve name of cinder object. 
+ + :param cinder_object: cinder snapshot or volume object + :returns: str cinder object name + """ + # v1 objects store name in 'display_name' attr but v2+ use 'name' + try: + return cinder_object.display_name + except AttributeError: + return cinder_object.name + + def create_cinder_volume(self, cinder, vol_name="demo-vol", vol_size=1, + img_id=None, src_vol_id=None, snap_id=None): + """Create cinder volume, optionally from a glance image, OR + optionally as a clone of an existing volume, OR optionally + from a snapshot. Wait for the new volume status to reach + the expected status, validate and return a resource pointer. + + :param vol_name: cinder volume display name + :param vol_size: size in gigabytes + :param img_id: optional glance image id + :param src_vol_id: optional source volume id to clone + :param snap_id: optional snapshot id to use + :returns: cinder volume pointer + """ + # Handle parameter input and avoid impossible combinations + if img_id and not src_vol_id and not snap_id: + # Create volume from image + self.log.debug('Creating cinder volume from glance image...') + bootable = 'true' + elif src_vol_id and not img_id and not snap_id: + # Clone an existing volume + self.log.debug('Cloning cinder volume...') + bootable = cinder.volumes.get(src_vol_id).bootable + elif snap_id and not src_vol_id and not img_id: + # Create volume from snapshot + self.log.debug('Creating cinder volume from snapshot...') + snap = cinder.volume_snapshots.find(id=snap_id) + vol_size = snap.size + snap_vol_id = cinder.volume_snapshots.get(snap_id).volume_id + bootable = cinder.volumes.get(snap_vol_id).bootable + elif not img_id and not src_vol_id and not snap_id: + # Create volume + self.log.debug('Creating cinder volume...') + bootable = 'false' + else: + # Impossible combination of parameters + msg = ('Invalid method use - name:{} size:{} img_id:{} ' + 'src_vol_id:{} snap_id:{}'.format(vol_name, vol_size, + img_id, src_vol_id, + snap_id)) + amulet.raise_status(amulet.FAIL, msg=msg) + + # Create new volume + try: + vol_new = cinder.volumes.create(display_name=vol_name, + imageRef=img_id, + size=vol_size, + source_volid=src_vol_id, + snapshot_id=snap_id) + vol_id = vol_new.id + except TypeError: + vol_new = cinder.volumes.create(name=vol_name, + imageRef=img_id, + size=vol_size, + source_volid=src_vol_id, + snapshot_id=snap_id) + vol_id = vol_new.id + except Exception as e: + msg = 'Failed to create volume: {}'.format(e) + amulet.raise_status(amulet.FAIL, msg=msg) + + # Wait for volume to reach available status + ret = self.resource_reaches_status(cinder.volumes, vol_id, + expected_stat="available", + msg="Volume status wait") + if not ret: + msg = 'Cinder volume failed to reach expected state.' 
+ amulet.raise_status(amulet.FAIL, msg=msg) + + # Re-validate new volume + self.log.debug('Validating volume attributes...') + val_vol_name = self._get_cinder_obj_name(cinder.volumes.get(vol_id)) + val_vol_boot = cinder.volumes.get(vol_id).bootable + val_vol_stat = cinder.volumes.get(vol_id).status + val_vol_size = cinder.volumes.get(vol_id).size + msg_attr = ('Volume attributes - name:{} id:{} stat:{} boot:' + '{} size:{}'.format(val_vol_name, vol_id, + val_vol_stat, val_vol_boot, + val_vol_size)) + + if val_vol_boot == bootable and val_vol_stat == 'available' \ + and val_vol_name == vol_name and val_vol_size == vol_size: + self.log.debug(msg_attr) + else: + msg = ('Volume validation failed, {}'.format(msg_attr)) + amulet.raise_status(amulet.FAIL, msg=msg) + + return vol_new + + def delete_resource(self, resource, resource_id, + msg="resource", max_wait=120): + """Delete one openstack resource, such as one instance, keypair, + image, volume, stack, etc., and confirm deletion within max wait time. + + :param resource: pointer to os resource type, ex:glance_client.images + :param resource_id: unique name or id for the openstack resource + :param msg: text to identify purpose in logging + :param max_wait: maximum wait time in seconds + :returns: True if successful, otherwise False + """ + self.log.debug('Deleting OpenStack resource ' + '{} ({})'.format(resource_id, msg)) + num_before = len(list(resource.list())) + resource.delete(resource_id) + + tries = 0 + num_after = len(list(resource.list())) + while num_after != (num_before - 1) and tries < (max_wait / 4): + self.log.debug('{} delete check: ' + '{} [{}:{}] {}'.format(msg, tries, + num_before, + num_after, + resource_id)) + time.sleep(4) + num_after = len(list(resource.list())) + tries += 1 + + self.log.debug('{}: expected, actual count = {}, ' + '{}'.format(msg, num_before - 1, num_after)) + + if num_after == (num_before - 1): + return True + else: + self.log.error('{} delete timed out'.format(msg)) + return False + + def resource_reaches_status(self, resource, resource_id, + expected_stat='available', + msg='resource', max_wait=120): + """Wait for an openstack resources status to reach an + expected status within a specified time. Useful to confirm that + nova instances, cinder vols, snapshots, glance images, heat stacks + and other resources eventually reach the expected status. 
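+
+        A hypothetical example (the volume id is illustrative):
+
+            ret = self.resource_reaches_status(cinder.volumes, vol_id,
+                                               expected_stat='available',
+                                               msg='Volume status wait')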
+ + :param resource: pointer to os resource type, ex: heat_client.stacks + :param resource_id: unique id for the openstack resource + :param expected_stat: status to expect resource to reach + :param msg: text to identify purpose in logging + :param max_wait: maximum wait time in seconds + :returns: True if successful, False if status is not reached + """ + + tries = 0 + resource_stat = resource.get(resource_id).status + while resource_stat != expected_stat and tries < (max_wait / 4): + self.log.debug('{} status check: ' + '{} [{}:{}] {}'.format(msg, tries, + resource_stat, + expected_stat, + resource_id)) + time.sleep(4) + resource_stat = resource.get(resource_id).status + tries += 1 + + self.log.debug('{}: expected, actual status = {}, ' + '{}'.format(msg, resource_stat, expected_stat)) + + if resource_stat == expected_stat: + return True + else: + self.log.debug('{} never reached expected status: ' + '{}'.format(resource_id, expected_stat)) + return False + + def get_ceph_osd_id_cmd(self, index): + """Produce a shell command that will return a ceph-osd id.""" + return ("`initctl list | grep 'ceph-osd ' | " + "awk 'NR=={} {{ print $2 }}' | " + "grep -o '[0-9]*'`".format(index + 1)) + + def get_ceph_pools(self, sentry_unit): + """Return a dict of ceph pools from a single ceph unit, with + pool name as keys, pool id as vals.""" + pools = {} + cmd = 'sudo ceph osd lspools' + output, code = sentry_unit.run(cmd) + if code != 0: + msg = ('{} `{}` returned {} ' + '{}'.format(sentry_unit.info['unit_name'], + cmd, code, output)) + amulet.raise_status(amulet.FAIL, msg=msg) + + # For mimic ceph osd lspools output + output = output.replace("\n", ",") + + # Example output: 0 data,1 metadata,2 rbd,3 cinder,4 glance, + for pool in str(output).split(','): + pool_id_name = pool.split(' ') + if len(pool_id_name) == 2: + pool_id = pool_id_name[0] + pool_name = pool_id_name[1] + pools[pool_name] = int(pool_id) + + self.log.debug('Pools on {}: {}'.format(sentry_unit.info['unit_name'], + pools)) + return pools + + def get_ceph_df(self, sentry_unit): + """Return dict of ceph df json output, including ceph pool state. + + :param sentry_unit: Pointer to amulet sentry instance (juju unit) + :returns: Dict of ceph df output + """ + cmd = 'sudo ceph df --format=json' + output, code = sentry_unit.run(cmd) + if code != 0: + msg = ('{} `{}` returned {} ' + '{}'.format(sentry_unit.info['unit_name'], + cmd, code, output)) + amulet.raise_status(amulet.FAIL, msg=msg) + return json.loads(output) + + def get_ceph_pool_sample(self, sentry_unit, pool_id=0): + """Take a sample of attributes of a ceph pool, returning ceph + pool name, object count and disk space used for the specified + pool ID number. + + :param sentry_unit: Pointer to amulet sentry instance (juju unit) + :param pool_id: Ceph pool ID + :returns: List of pool name, object count, kb disk space used + """ + df = self.get_ceph_df(sentry_unit) + for pool in df['pools']: + if pool['id'] == pool_id: + pool_name = pool['name'] + obj_count = pool['stats']['objects'] + kb_used = pool['stats']['kb_used'] + + self.log.debug('Ceph {} pool (ID {}): {} objects, ' + '{} kb used'.format(pool_name, pool_id, + obj_count, kb_used)) + return pool_name, obj_count, kb_used + + def validate_ceph_pool_samples(self, samples, sample_type="resource pool"): + """Validate ceph pool samples taken over time, such as pool + object counts or pool kb used, before adding, after adding, and + after deleting items which affect those pool attributes. 
The + 2nd element is expected to be greater than the 1st; 3rd is expected + to be less than the 2nd. + + :param samples: List containing 3 data samples + :param sample_type: String for logging and usage context + :returns: None if successful, Failure message otherwise + """ + original, created, deleted = range(3) + if samples[created] <= samples[original] or \ + samples[deleted] >= samples[created]: + return ('Ceph {} samples ({}) ' + 'unexpected.'.format(sample_type, samples)) + else: + self.log.debug('Ceph {} samples (OK): ' + '{}'.format(sample_type, samples)) + return None + + # rabbitmq/amqp specific helpers: + + def rmq_wait_for_cluster(self, deployment, init_sleep=15, timeout=1200): + """Wait for rmq units extended status to show cluster readiness, + after an optional initial sleep period. Initial sleep is likely + necessary to be effective following a config change, as status + message may not instantly update to non-ready.""" + + if init_sleep: + time.sleep(init_sleep) + + message = re.compile('^Unit is ready and clustered$') + deployment._auto_wait_for_status(message=message, + timeout=timeout, + include_only=['rabbitmq-server']) + + def add_rmq_test_user(self, sentry_units, + username="testuser1", password="changeme"): + """Add a test user via the first rmq juju unit, check connection as + the new user against all sentry units. + + :param sentry_units: list of sentry unit pointers + :param username: amqp user name, default to testuser1 + :param password: amqp user password + :returns: None if successful. Raise on error. + """ + self.log.debug('Adding rmq user ({})...'.format(username)) + + # Check that user does not already exist + cmd_user_list = 'rabbitmqctl list_users' + output, _ = self.run_cmd_unit(sentry_units[0], cmd_user_list) + if username in output: + self.log.warning('User ({}) already exists, returning ' + 'gracefully.'.format(username)) + return + + perms = '".*" ".*" ".*"' + cmds = ['rabbitmqctl add_user {} {}'.format(username, password), + 'rabbitmqctl set_permissions {} {}'.format(username, perms)] + + # Add user via first unit + for cmd in cmds: + output, _ = self.run_cmd_unit(sentry_units[0], cmd) + + # Check connection against the other sentry_units + self.log.debug('Checking user connect against units...') + for sentry_unit in sentry_units: + connection = self.connect_amqp_by_unit(sentry_unit, ssl=False, + username=username, + password=password) + connection.close() + + def delete_rmq_test_user(self, sentry_units, username="testuser1"): + """Delete a rabbitmq user via the first rmq juju unit. + + :param sentry_units: list of sentry unit pointers + :param username: amqp user name, default to testuser1 + :param password: amqp user password + :returns: None if successful or no such user. + """ + self.log.debug('Deleting rmq user ({})...'.format(username)) + + # Check that the user exists + cmd_user_list = 'rabbitmqctl list_users' + output, _ = self.run_cmd_unit(sentry_units[0], cmd_user_list) + + if username not in output: + self.log.warning('User ({}) does not exist, returning ' + 'gracefully.'.format(username)) + return + + # Delete the user + cmd_user_del = 'rabbitmqctl delete_user {}'.format(username) + output, _ = self.run_cmd_unit(sentry_units[0], cmd_user_del) + + def get_rmq_cluster_status(self, sentry_unit): + """Execute rabbitmq cluster status command on a unit and return + the full output. 
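+
+        A sketch of the output fragment that get_rmq_cluster_running_nodes
+        later parses (node names are illustrative):
+
+            {running_nodes,['rabbit@juju-host-0','rabbit@juju-host-1']}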
+ + :param unit: sentry unit + :returns: String containing console output of cluster status command + """ + cmd = 'rabbitmqctl cluster_status' + output, _ = self.run_cmd_unit(sentry_unit, cmd) + self.log.debug('{} cluster_status:\n{}'.format( + sentry_unit.info['unit_name'], output)) + return str(output) + + def get_rmq_cluster_running_nodes(self, sentry_unit): + """Parse rabbitmqctl cluster_status output string, return list of + running rabbitmq cluster nodes. + + :param unit: sentry unit + :returns: List containing node names of running nodes + """ + # NOTE(beisner): rabbitmqctl cluster_status output is not + # json-parsable, do string chop foo, then json.loads that. + str_stat = self.get_rmq_cluster_status(sentry_unit) + if 'running_nodes' in str_stat: + pos_start = str_stat.find("{running_nodes,") + 15 + pos_end = str_stat.find("]},", pos_start) + 1 + str_run_nodes = str_stat[pos_start:pos_end].replace("'", '"') + run_nodes = json.loads(str_run_nodes) + return run_nodes + else: + return [] + + def validate_rmq_cluster_running_nodes(self, sentry_units): + """Check that all rmq unit hostnames are represented in the + cluster_status output of all units. + + :param host_names: dict of juju unit names to host names + :param units: list of sentry unit pointers (all rmq units) + :returns: None if successful, otherwise return error message + """ + host_names = self.get_unit_hostnames(sentry_units) + errors = [] + + # Query every unit for cluster_status running nodes + for query_unit in sentry_units: + query_unit_name = query_unit.info['unit_name'] + running_nodes = self.get_rmq_cluster_running_nodes(query_unit) + + # Confirm that every unit is represented in the queried unit's + # cluster_status running nodes output. + for validate_unit in sentry_units: + val_host_name = host_names[validate_unit.info['unit_name']] + val_node_name = 'rabbit@{}'.format(val_host_name) + + if val_node_name not in running_nodes: + errors.append('Cluster member check failed on {}: {} not ' + 'in {}\n'.format(query_unit_name, + val_node_name, + running_nodes)) + if errors: + return ''.join(errors) + + def rmq_ssl_is_enabled_on_unit(self, sentry_unit, port=None): + """Check a single juju rmq unit for ssl and port in the config file.""" + host = sentry_unit.info['public-address'] + unit_name = sentry_unit.info['unit_name'] + + conf_file = '/etc/rabbitmq/rabbitmq.config' + conf_contents = str(self.file_contents_safe(sentry_unit, + conf_file, max_wait=16)) + # Checks + conf_ssl = 'ssl' in conf_contents + conf_port = str(port) in conf_contents + + # Port explicitly checked in config + if port and conf_port and conf_ssl: + self.log.debug('SSL is enabled @{}:{} ' + '({})'.format(host, port, unit_name)) + return True + elif port and not conf_port and conf_ssl: + self.log.debug('SSL is enabled @{} but not on port {} ' + '({})'.format(host, port, unit_name)) + return False + # Port not checked (useful when checking that ssl is disabled) + elif not port and conf_ssl: + self.log.debug('SSL is enabled @{}:{} ' + '({})'.format(host, port, unit_name)) + return True + elif not conf_ssl: + self.log.debug('SSL not enabled @{}:{} ' + '({})'.format(host, port, unit_name)) + return False + else: + msg = ('Unknown condition when checking SSL status @{}:{} ' + '({})'.format(host, port, unit_name)) + amulet.raise_status(amulet.FAIL, msg) + + def validate_rmq_ssl_enabled_units(self, sentry_units, port=None): + """Check that ssl is enabled on rmq juju sentry units. 
+
+        :param sentry_units: list of all rmq sentry units
+        :param port: optional ssl port override to validate
+        :returns: None if successful, otherwise return error message
+        """
+        for sentry_unit in sentry_units:
+            if not self.rmq_ssl_is_enabled_on_unit(sentry_unit, port=port):
+                return ('Unexpected condition: ssl is disabled on unit '
+                        '({})'.format(sentry_unit.info['unit_name']))
+        return None
+
+    def validate_rmq_ssl_disabled_units(self, sentry_units):
+        """Check that ssl is disabled on listed rmq juju sentry units.
+
+        :param sentry_units: list of all rmq sentry units
+        :returns: None if successful, otherwise return error message
+        """
+        for sentry_unit in sentry_units:
+            if self.rmq_ssl_is_enabled_on_unit(sentry_unit):
+                return ('Unexpected condition: ssl is enabled on unit '
+                        '({})'.format(sentry_unit.info['unit_name']))
+        return None
+
+    def configure_rmq_ssl_on(self, sentry_units, deployment,
+                             port=None, max_wait=60):
+        """Turn ssl charm config option on, with optional non-default
+        ssl port specification. Confirm that it is enabled on every
+        unit.
+
+        :param sentry_units: list of sentry units
+        :param deployment: amulet deployment object pointer
+        :param port: amqp port, use defaults if None
+        :param max_wait: maximum time to wait in seconds to confirm
+        :returns: None if successful. Raise on error.
+        """
+        self.log.debug('Setting ssl charm config option: on')
+
+        # Enable RMQ SSL
+        config = {'ssl': 'on'}
+        if port:
+            config['ssl_port'] = port
+
+        deployment.d.configure('rabbitmq-server', config)
+
+        # Wait for unit status
+        self.rmq_wait_for_cluster(deployment)
+
+        # Confirm
+        tries = 0
+        ret = self.validate_rmq_ssl_enabled_units(sentry_units, port=port)
+        while ret and tries < (max_wait / 4):
+            time.sleep(4)
+            self.log.debug('Attempt {}: {}'.format(tries, ret))
+            ret = self.validate_rmq_ssl_enabled_units(sentry_units, port=port)
+            tries += 1
+
+        if ret:
+            amulet.raise_status(amulet.FAIL, ret)
+
+    def configure_rmq_ssl_off(self, sentry_units, deployment, max_wait=60):
+        """Turn ssl charm config option off, confirm that it is disabled
+        on every unit.
+
+        :param sentry_units: list of sentry units
+        :param deployment: amulet deployment object pointer
+        :param max_wait: maximum time to wait in seconds to confirm
+        :returns: None if successful. Raise on error.
+        """
+        self.log.debug('Setting ssl charm config option: off')
+
+        # Disable RMQ SSL
+        config = {'ssl': 'off'}
+        deployment.d.configure('rabbitmq-server', config)
+
+        # Wait for unit status
+        self.rmq_wait_for_cluster(deployment)
+
+        # Confirm
+        tries = 0
+        ret = self.validate_rmq_ssl_disabled_units(sentry_units)
+        while ret and tries < (max_wait / 4):
+            time.sleep(4)
+            self.log.debug('Attempt {}: {}'.format(tries, ret))
+            ret = self.validate_rmq_ssl_disabled_units(sentry_units)
+            tries += 1
+
+        if ret:
+            amulet.raise_status(amulet.FAIL, ret)
+
+    def connect_amqp_by_unit(self, sentry_unit, ssl=False,
+                             port=None, fatal=True,
+                             username="testuser1", password="changeme"):
+        """Establish and return a pika amqp connection to the rabbitmq service
+        running on a rmq juju unit.
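+
+        A hypothetical example (the sentry pointer is illustrative):
+
+            conn = u.connect_amqp_by_unit(rmq_unit, ssl=True, port=5671)
+            conn.close()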
+ + :param sentry_unit: sentry unit pointer + :param ssl: boolean, default to False + :param port: amqp port, use defaults if None + :param fatal: boolean, default to True (raises on connect error) + :param username: amqp user name, default to testuser1 + :param password: amqp user password + :returns: pika amqp connection pointer or None if failed and non-fatal + """ + host = sentry_unit.info['public-address'] + unit_name = sentry_unit.info['unit_name'] + + # Default port logic if port is not specified + if ssl and not port: + port = 5671 + elif not ssl and not port: + port = 5672 + + self.log.debug('Connecting to amqp on {}:{} ({}) as ' + '{}...'.format(host, port, unit_name, username)) + + try: + credentials = pika.PlainCredentials(username, password) + parameters = pika.ConnectionParameters(host=host, port=port, + credentials=credentials, + ssl=ssl, + connection_attempts=3, + retry_delay=5, + socket_timeout=1) + connection = pika.BlockingConnection(parameters) + assert connection.is_open is True + assert connection.is_closing is False + self.log.debug('Connect OK') + return connection + except Exception as e: + msg = ('amqp connection failed to {}:{} as ' + '{} ({})'.format(host, port, username, str(e))) + if fatal: + amulet.raise_status(amulet.FAIL, msg) + else: + self.log.warn(msg) + return None + + def publish_amqp_message_by_unit(self, sentry_unit, message, + queue="test", ssl=False, + username="testuser1", + password="changeme", + port=None): + """Publish an amqp message to a rmq juju unit. + + :param sentry_unit: sentry unit pointer + :param message: amqp message string + :param queue: message queue, default to test + :param username: amqp user name, default to testuser1 + :param password: amqp user password + :param ssl: boolean, default to False + :param port: amqp port, use defaults if None + :returns: None. Raises exception if publish failed. + """ + self.log.debug('Publishing message to {} queue:\n{}'.format(queue, + message)) + connection = self.connect_amqp_by_unit(sentry_unit, ssl=ssl, + port=port, + username=username, + password=password) + + # NOTE(beisner): extra debug here re: pika hang potential: + # https://github.com/pika/pika/issues/297 + # https://groups.google.com/forum/#!topic/rabbitmq-users/Ja0iyfF0Szw + self.log.debug('Defining channel...') + channel = connection.channel() + self.log.debug('Declaring queue...') + channel.queue_declare(queue=queue, auto_delete=False, durable=True) + self.log.debug('Publishing message...') + channel.basic_publish(exchange='', routing_key=queue, body=message) + self.log.debug('Closing channel...') + channel.close() + self.log.debug('Closing connection...') + connection.close() + + def get_amqp_message_by_unit(self, sentry_unit, queue="test", + username="testuser1", + password="changeme", + ssl=False, port=None): + """Get an amqp message from a rmq juju unit. + + :param sentry_unit: sentry unit pointer + :param queue: message queue, default to test + :param username: amqp user name, default to testuser1 + :param password: amqp user password + :param ssl: boolean, default to False + :param port: amqp port, use defaults if None + :returns: amqp message body as string. Raise if get fails. 
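+
+        A hypothetical publish/get round trip with the helper above (the
+        unit pointer is illustrative):
+
+            u.publish_amqp_message_by_unit(rmq_unit, 'test message')
+            body = u.get_amqp_message_by_unit(rmq_unit)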
+ """ + connection = self.connect_amqp_by_unit(sentry_unit, ssl=ssl, + port=port, + username=username, + password=password) + channel = connection.channel() + method_frame, _, body = channel.basic_get(queue) + + if method_frame: + self.log.debug('Retreived message from {} queue:\n{}'.format(queue, + body)) + channel.basic_ack(method_frame.delivery_tag) + channel.close() + connection.close() + return body + else: + msg = 'No message retrieved.' + amulet.raise_status(amulet.FAIL, msg) + + def validate_memcache(self, sentry_unit, conf, os_release, + earliest_release=5, section='keystone_authtoken', + check_kvs=None): + """Check Memcache is running and is configured to be used + + Example call from Amulet test: + + def test_110_memcache(self): + u.validate_memcache(self.neutron_api_sentry, + '/etc/neutron/neutron.conf', + self._get_openstack_release()) + + :param sentry_unit: sentry unit + :param conf: OpenStack config file to check memcache settings + :param os_release: Current OpenStack release int code + :param earliest_release: Earliest Openstack release to check int code + :param section: OpenStack config file section to check + :param check_kvs: Dict of settings to check in config file + :returns: None + """ + if os_release < earliest_release: + self.log.debug('Skipping memcache checks for deployment. {} <' + 'mitaka'.format(os_release)) + return + _kvs = check_kvs or {'memcached_servers': 'inet6:[::1]:11211'} + self.log.debug('Checking memcached is running') + ret = self.validate_services_by_name({sentry_unit: ['memcached']}) + if ret: + amulet.raise_status(amulet.FAIL, msg='Memcache running check' + 'failed {}'.format(ret)) + else: + self.log.debug('OK') + self.log.debug('Checking memcache url is configured in {}'.format( + conf)) + if self.validate_config_data(sentry_unit, conf, section, _kvs): + message = "Memcache config error in: {}".format(conf) + amulet.raise_status(amulet.FAIL, msg=message) + else: + self.log.debug('OK') + self.log.debug('Checking memcache configuration in ' + '/etc/memcached.conf') + contents = self.file_contents_safe(sentry_unit, '/etc/memcached.conf', + fatal=True) + ubuntu_release, _ = self.run_cmd_unit(sentry_unit, 'lsb_release -cs') + if CompareHostReleases(ubuntu_release) <= 'trusty': + memcache_listen_addr = 'ip6-localhost' + else: + memcache_listen_addr = '::1' + expected = { + '-p': '11211', + '-l': memcache_listen_addr} + found = [] + for key, value in expected.items(): + for line in contents.split('\n'): + if line.startswith(key): + self.log.debug('Checking {} is set to {}'.format( + key, + value)) + assert value == line.split()[-1] + self.log.debug(line.split()[-1]) + found.append(key) + if sorted(found) == sorted(expected.keys()): + self.log.debug('OK') + else: + message = "Memcache config error in: /etc/memcached.conf" + amulet.raise_status(amulet.FAIL, msg=message) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/audits/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/audits/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..7f7e5f79a5d5fe3cb374814e32ea16f6060f4f27 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/audits/__init__.py @@ -0,0 +1,212 @@ +# Copyright 2019 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""OpenStack Security Audit code"""
+
+import collections
+from enum import Enum
+import traceback
+
+from charmhelpers.core.host import cmp_pkgrevno
+import charmhelpers.contrib.openstack.utils as openstack_utils
+import charmhelpers.core.hookenv as hookenv
+
+
+class AuditType(Enum):
+    OpenStackSecurityGuide = 1
+
+
+_audits = {}
+
+Audit = collections.namedtuple('Audit', 'func filters')
+
+
+def audit(*args):
+    """Decorator to register an audit.
+
+    These are used to generate audits that can be run on a
+    deployed system that matches the given configuration
+
+    :param args: List of functions to filter tests against
+    :type args: List[Callable[Dict]]
+    """
+    def wrapper(f):
+        test_name = f.__name__
+        if _audits.get(test_name):
+            raise RuntimeError(
+                "Test name '{}' used more than once"
+                .format(test_name))
+        non_callables = [fn for fn in args if not callable(fn)]
+        if non_callables:
+            raise RuntimeError(
+                "Configuration includes non-callable filters: {}"
+                .format(non_callables))
+        _audits[test_name] = Audit(func=f, filters=args)
+        return f
+    return wrapper
+
+
+def is_audit_type(*args):
+    """This audit is included in the specified kinds of audits.
+
+    :param *args: List of AuditTypes to include this audit in
+    :type args: List[AuditType]
+    :rtype: Callable[Dict]
+    """
+    def _is_audit_type(audit_options):
+        if audit_options.get('audit_type') in args:
+            return True
+        else:
+            return False
+    return _is_audit_type
+
+
+def since_package(pkg, pkg_version):
+    """This audit should be run after the specified package version (incl).
+
+    :param pkg: Package name to compare
+    :type pkg: str
+    :param pkg_version: The package version
+    :type pkg_version: str
+    :rtype: Callable[Dict]
+    """
+    def _since_package(audit_options=None):
+        return cmp_pkgrevno(pkg, pkg_version) >= 0
+
+    return _since_package
+
+
+def before_package(pkg, pkg_version):
+    """This audit should be run before the specified package version (excl).
+
+    :param pkg: Package name to compare
+    :type pkg: str
+    :param pkg_version: The package version
+    :type pkg_version: str
+    :rtype: Callable[Dict]
+    """
+    def _before_package(audit_options=None):
+        return not since_package(pkg, pkg_version)()
+
+    return _before_package
+
+
+def since_openstack_release(pkg, release):
+    """This audit should run after the specified OpenStack version (incl).
+
+    :param pkg: Package name to compare
+    :type pkg: str
+    :param release: The OpenStack release codename
+    :type release: str
+    :rtype: Callable[Dict]
+    """
+    def _since_openstack_release(audit_options=None):
+        _release = openstack_utils.get_os_codename_package(pkg)
+        return openstack_utils.CompareOpenStackReleases(_release) >= release
+
+    return _since_openstack_release
+
+
+def before_openstack_release(pkg, release):
+    """This audit should run before the specified OpenStack version (excl).
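+
+    A hypothetical composition sketch showing how these filters combine
+    with the audit decorator (the audited function is illustrative):
+
+        @audit(is_audit_type(AuditType.OpenStackSecurityGuide),
+               before_openstack_release('keystone', 'queens'))
+        def check_something(audit_options):
+            assert True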
+
+    :param pkg: Package name to compare
+    :type pkg: str
+    :param release: The OpenStack release codename
+    :type release: str
+    :rtype: Callable[Dict]
+    """
+    def _before_openstack_release(audit_options=None):
+        return not since_openstack_release(pkg, release)()
+
+    return _before_openstack_release
+
+
+def it_has_config(config_key):
+    """This audit should be run based on specified config keys.
+
+    :param config_key: Config key to look for
+    :type config_key: str
+    :rtype: Callable[Dict]
+    """
+    def _it_has_config(audit_options):
+        return audit_options.get(config_key) is not None
+
+    return _it_has_config
+
+
+def run(audit_options):
+    """Run the configured audits with the specified audit_options.
+
+    :param audit_options: Configuration for the audit
+    :type audit_options: Config
+
+    :rtype: Dict[str, str]
+    """
+    errors = {}
+    results = {}
+    for name, audit in sorted(_audits.items()):
+        result_name = name.replace('_', '-')
+        if result_name in audit_options.get('excludes', []):
+            print(
+                "Skipping {} because it is "
+                "excluded in audit config"
+                .format(result_name))
+            continue
+        if all(p(audit_options) for p in audit.filters):
+            try:
+                audit.func(audit_options)
+                print("{}: PASS".format(name))
+                results[result_name] = {
+                    'success': True,
+                }
+            except AssertionError as e:
+                print("{}: FAIL ({})".format(name, e))
+                results[result_name] = {
+                    'success': False,
+                    'message': e,
+                }
+            except Exception as e:
+                print("{}: ERROR ({})".format(name, e))
+                errors[name] = e
+                results[result_name] = {
+                    'success': False,
+                    'message': e,
+                }
+    for name, error in errors.items():
+        print("=" * 20)
+        print("Error in {}: ".format(name))
+        traceback.print_tb(error.__traceback__)
+        print()
+    return results
+
+
+def action_parse_results(result):
+    """Parse the result of `run` in the context of an action.
+
+    :param result: The result of running the security-checklist
+        action on a unit
+    :type result: Dict[str, Dict[str, str]]
+    :rtype: int
+    """
+    passed = True
+    for test, result in result.items():
+        if result['success']:
+            hookenv.action_set({test: 'PASS'})
+        else:
+            hookenv.action_set({test: 'FAIL - {}'.format(result['message'])})
+            passed = False
+    if not passed:
+        hookenv.action_fail("One or more tests failed")
+    return 0 if passed else 1
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/audits/openstack_security_guide.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/audits/openstack_security_guide.py
new file mode 100644
index 0000000000000000000000000000000000000000..79740ed0c103e841b6e280af920f8be65e3d1d0d
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/audits/openstack_security_guide.py
@@ -0,0 +1,270 @@
+# Copyright 2019 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import collections
+import configparser
+import glob
+import os.path
+import subprocess
+
+from charmhelpers.contrib.openstack.audits import (
+    audit,
+    AuditType,
+    # filters
+    is_audit_type,
+    it_has_config,
+)
+
+from charmhelpers.core.hookenv import (
+    cached,
+)
+
+"""
+The Security Guide suggests that a specific list of files inside the
+config directory for the service be mode 640. By instead ensuring the
+containing directory is 750, only the owner can write, and only the
+group can read, files within the directory.
+
+By restricting access to the containing directory, we can more
+effectively ensure that there is no accidental leakage if a new
+file is added to the service without being added to the security
+guide, and to this check.
+"""
+FILE_ASSERTIONS = {
+    'barbican': {
+        '/etc/barbican': {'group': 'barbican', 'mode': '750'},
+    },
+    'ceph-mon': {
+        '/var/lib/charm/ceph-mon/ceph.conf':
+            {'owner': 'root', 'group': 'root', 'mode': '644'},
+        '/etc/ceph/ceph.client.admin.keyring':
+            {'owner': 'ceph', 'group': 'ceph'},
+        '/etc/ceph/rbdmap': {'mode': '644'},
+        '/var/lib/ceph': {'owner': 'ceph', 'group': 'ceph', 'mode': '750'},
+        '/var/lib/ceph/bootstrap-*/ceph.keyring':
+            {'owner': 'ceph', 'group': 'ceph', 'mode': '600'}
+    },
+    'ceph-osd': {
+        '/var/lib/charm/ceph-osd/ceph.conf':
+            {'owner': 'ceph', 'group': 'ceph', 'mode': '644'},
+        '/var/lib/ceph': {'owner': 'ceph', 'group': 'ceph', 'mode': '750'},
+        '/var/lib/ceph/*': {'owner': 'ceph', 'group': 'ceph', 'mode': '755'},
+        '/var/lib/ceph/bootstrap-*/ceph.keyring':
+            {'owner': 'ceph', 'group': 'ceph', 'mode': '600'},
+        '/var/lib/ceph/radosgw':
+            {'owner': 'ceph', 'group': 'ceph', 'mode': '755'},
+    },
+    'cinder': {
+        '/etc/cinder': {'group': 'cinder', 'mode': '750'},
+    },
+    'glance': {
+        '/etc/glance': {'group': 'glance', 'mode': '750'},
+    },
+    'keystone': {
+        '/etc/keystone':
+            {'owner': 'keystone', 'group': 'keystone', 'mode': '750'},
+    },
+    'manila': {
+        '/etc/manila': {'group': 'manila', 'mode': '750'},
+    },
+    'neutron-gateway': {
+        '/etc/neutron': {'group': 'neutron', 'mode': '750'},
+    },
+    'neutron-api': {
+        '/etc/neutron/': {'group': 'neutron', 'mode': '750'},
+    },
+    'nova-cloud-controller': {
+        '/etc/nova': {'group': 'nova', 'mode': '750'},
+    },
+    'nova-compute': {
+        '/etc/nova/': {'group': 'nova', 'mode': '750'},
+    },
+    'openstack-dashboard': {
+        # From security guide
+        '/etc/openstack-dashboard/local_settings.py':
+            {'group': 'horizon', 'mode': '640'},
+    },
+}
+
+Ownership = collections.namedtuple('Ownership', 'owner group mode')
+
+
+@cached
+def _stat(file):
+    """
+    Get the Ownership information from a file.
+
+    :param file: The path to a file to stat
+    :type file: str
+    :returns: owner, group, and mode of the specified file
+    :rtype: Ownership
+    :raises subprocess.CalledProcessError: If the underlying stat fails
+    """
+    out = subprocess.check_output(
+        ['stat', '-c', '%U %G %a', file]).decode('utf-8')
+    return Ownership(*out.strip().split(' '))
+
+
+@cached
+def _config_ini(path):
+    """
+    Parse an ini file
+
+    :param path: The path to a file to parse
+    :type path: str
+    :returns: Configuration contained in path
+    :rtype: Dict
+    """
+    # When strict is enabled, duplicate options are not allowed in the
+    # parsed INI; however, Oslo allows duplicate values. This change
+    # causes us to ignore the duplicate values which is acceptable as
+    # long as we don't validate any multi-value options
+    conf = configparser.ConfigParser(strict=False)
+    conf.read(path)
+    return dict(conf)
+
+
+def _validate_file_ownership(owner, group, file_name, optional=False):
+    """
+    Validate that a specified file is owned by `owner:group`.
+
+    :param owner: Name of the owner
+    :type owner: str
+    :param group: Name of the group
+    :type group: str
+    :param file_name: Path to the file to verify
+    :type file_name: str
+    :param optional: Is this file optional,
+        ie: Should this test fail when it's missing
+    :type optional: bool
+    """
+    try:
+        ownership = _stat(file_name)
+    except subprocess.CalledProcessError as e:
+        print("Error reading file: {}".format(e))
+        if not optional:
+            assert False, "Specified file does not exist: {}".format(file_name)
+        # Optional file is missing; nothing further to validate
+        return
+    assert owner == ownership.owner, \
+        "{} has an incorrect owner: {} should be {}".format(
+            file_name, ownership.owner, owner)
+    assert group == ownership.group, \
+        "{} has an incorrect group: {} should be {}".format(
+            file_name, ownership.group, group)
+    print("Validate ownership of {}: PASS".format(file_name))
+
+
+def _validate_file_mode(mode, file_name, optional=False):
+    """
+    Validate that a specified file has the specified permissions.
+
+    :param mode: The desired file mode
+    :type mode: str
+    :param file_name: Path to the file to verify
+    :type file_name: str
+    :param optional: Is this file optional,
+        ie: Should this test fail when it's missing
+    :type optional: bool
+    """
+    try:
+        ownership = _stat(file_name)
+    except subprocess.CalledProcessError as e:
+        print("Error reading file: {}".format(e))
+        if not optional:
+            assert False, "Specified file does not exist: {}".format(file_name)
+        # Optional file is missing; nothing further to validate
+        return
+    assert mode == ownership.mode, \
+        "{} has an incorrect mode: {} should be {}".format(
+            file_name, ownership.mode, mode)
+    print("Validate mode of {}: PASS".format(file_name))
+
+
+@cached
+def _config_section(config, section):
+    """Read the configuration file and return a section."""
+    path = os.path.join(config.get('config_path'), config.get('config_file'))
+    conf = _config_ini(path)
+    return conf.get(section)
+
+
+@audit(is_audit_type(AuditType.OpenStackSecurityGuide),
+       it_has_config('files'))
+def validate_file_ownership(config):
+    """Verify that configuration files are owned by the correct user/group."""
+    files = config.get('files', {})
+    for file_name, options in files.items():
+        for key in options.keys():
+            if key not in ["owner", "group", "mode"]:
+                raise RuntimeError(
+                    "Invalid ownership configuration: {}".format(key))
+        owner = options.get('owner', config.get('owner', 'root'))
+        group = options.get('group', config.get('group', 'root'))
+        optional = options.get('optional', config.get('optional', False))
+        if '*' in file_name:
+            for file in glob.glob(file_name):
+                if file not in files.keys():
+                    if os.path.isfile(file):
+                        _validate_file_ownership(owner, group, file, optional)
+        else:
+            if os.path.isfile(file_name):
+                _validate_file_ownership(owner, group, file_name, optional)
+
+
+@audit(is_audit_type(AuditType.OpenStackSecurityGuide),
+       it_has_config('files'))
+def validate_file_permissions(config):
+    """Verify that permissions on configuration files are secure enough."""
+    files = config.get('files', {})
+    for file_name, options in files.items():
+        for key in options.keys():
+            if key not in ["owner", "group", "mode"]:
+                raise RuntimeError(
+                    "Invalid ownership configuration: {}".format(key))
+        mode = options.get('mode',
+                           config.get('permissions', '600'))
+        optional = options.get('optional', config.get('optional', False))
+        if '*' in file_name:
+            for file in glob.glob(file_name):
+                if file not in files.keys():
+                    if os.path.isfile(file):
+                        _validate_file_mode(mode, file, optional)
+        else:
+            if os.path.isfile(file_name):
+                _validate_file_mode(mode, file_name, optional)
+
+
+@audit(is_audit_type(AuditType.OpenStackSecurityGuide))
+def validate_uses_keystone(audit_options):
+    """Validate that the service uses Keystone for authentication."""
+    section = _config_section(audit_options, 'api') or \
+        _config_section(audit_options, 'DEFAULT')
+    assert section is not None, "Missing section 'api / DEFAULT'"
+    assert section.get('auth_strategy') == "keystone", \
+        "Application is not using Keystone"
+
+
+@audit(is_audit_type(AuditType.OpenStackSecurityGuide))
+def validate_uses_tls_for_keystone(audit_options):
+    """Verify that TLS is used to communicate with Keystone."""
+    section = _config_section(audit_options, 'keystone_authtoken')
+    assert section is not None, "Missing section 'keystone_authtoken'"
+    assert not section.get('insecure') and \
+        "https://" in section.get("auth_uri"), \
+        "TLS is not used for Keystone"
+
+
+@audit(is_audit_type(AuditType.OpenStackSecurityGuide))
+def validate_uses_tls_for_glance(audit_options):
+    """Verify that TLS is used to communicate with Glance."""
+    section = _config_section(audit_options, 'glance')
+    assert section is not None, "Missing section 'glance'"
+    assert not section.get('insecure') and \
+        "https://" in section.get("api_servers"), \
+        "TLS is not used for Glance"
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/cert_utils.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/cert_utils.py
new file mode 100644
index 0000000000000000000000000000000000000000..b494af64aeae55db44b669990725de81705104b2
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/cert_utils.py
@@ -0,0 +1,289 @@
+# Copyright 2014-2018 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# Common python helper functions used for OpenStack charm certificates.
+
+import os
+import json
+
+from charmhelpers.contrib.network.ip import (
+    get_hostname,
+    resolve_network_cidr,
+)
+from charmhelpers.core.hookenv import (
+    local_unit,
+    network_get_primary_address,
+    config,
+    related_units,
+    relation_get,
+    relation_ids,
+    unit_get,
+    NoNetworkBinding,
+    log,
+    WARNING,
+)
+from charmhelpers.contrib.openstack.ip import (
+    ADMIN,
+    resolve_address,
+    get_vip_in_network,
+    INTERNAL,
+    PUBLIC,
+    ADDRESS_MAP)
+
+from charmhelpers.core.host import (
+    mkdir,
+    write_file,
+)
+
+from charmhelpers.contrib.hahelpers.apache import (
+    install_ca_cert
+)
+
+
+class CertRequest(object):
+
+    """Create a request for certificates to be generated."""
+
+    def __init__(self, json_encode=True):
+        self.entries = []
+        self.hostname_entry = None
+        self.json_encode = json_encode
+
+    def add_entry(self, net_type, cn, addresses):
+        """Add a request to the batch
+
+        :param net_type: str network space name the request is for
+        :param cn: str Canonical Name for certificate
+        :param addresses: [] List of addresses to be used as SANs
+        """
+        self.entries.append({
+            'cn': cn,
+            'addresses': addresses})
+
+    def add_hostname_cn(self):
+        """Add a request for the hostname of the machine"""
+        ip = unit_get('private-address')
+        addresses = [ip]
+        # If a vip is being used without os-hostname config or
+        # network spaces then we need to ensure the local unit's
+        # cert has the appropriate vip in the SAN list
+        vip = get_vip_in_network(resolve_network_cidr(ip))
+        if vip:
+            addresses.append(vip)
+        self.hostname_entry = {
+            'cn': get_hostname(ip),
+            'addresses': addresses}
+
+    def add_hostname_cn_ip(self, addresses):
+        """Add addresses to the SAN list for the hostname request
+
+        :param addresses: [] List of addresses to be added
+        """
+        for addr in addresses:
+            if addr not in self.hostname_entry['addresses']:
+                self.hostname_entry['addresses'].append(addr)
+
+    def get_request(self):
+        """Generate the request from the batched up entries."""
+        if self.hostname_entry:
+            self.entries.append(self.hostname_entry)
+        request = {}
+        for entry in self.entries:
+            sans = sorted(list(set(entry['addresses'])))
+            request[entry['cn']] = {'sans': sans}
+        if self.json_encode:
+            req = {'cert_requests': json.dumps(request, sort_keys=True)}
+        else:
+            req = {'cert_requests': request}
+        req['unit_name'] = local_unit().replace('/', '_')
+        return req
+
+
+def get_certificate_request(json_encode=True):
+    """Generate a certificate request based on the network configuration."""
+    req = CertRequest(json_encode=json_encode)
+    req.add_hostname_cn()
+    # Add os-hostname entries
+    for net_type in [INTERNAL, ADMIN, PUBLIC]:
+        net_config = config(ADDRESS_MAP[net_type]['override'])
+        try:
+            net_addr = resolve_address(endpoint_type=net_type)
+            ip = network_get_primary_address(
+                ADDRESS_MAP[net_type]['binding'])
+            addresses = [net_addr, ip]
+            vip = get_vip_in_network(resolve_network_cidr(ip))
+            if vip:
+                addresses.append(vip)
+            if net_config:
+                req.add_entry(
+                    net_type,
+                    net_config,
+                    addresses)
+            else:
+                # There is a network address with no corresponding
+                # hostname. Add the ip to the hostname cert to allow
+                # for this.
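+                # For example (hypothetical values): with no
+                # os-internal-hostname set, an internal address such as
+                # 10.20.0.5 simply joins the SAN list of the hostname
+                # entry rather than forming its own CN entry.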
+ req.add_hostname_cn_ip(addresses) + except NoNetworkBinding: + log("Skipping request for certificate for ip in {} space, no " + "local address found".format(net_type), WARNING) + return req.get_request() + + +def create_ip_cert_links(ssl_dir, custom_hostname_link=None): + """Create symlinks for SAN records + + :param ssl_dir: str Directory to create symlinks in + :param custom_hostname_link: str Additional link to be created + """ + hostname = get_hostname(unit_get('private-address')) + hostname_cert = os.path.join( + ssl_dir, + 'cert_{}'.format(hostname)) + hostname_key = os.path.join( + ssl_dir, + 'key_{}'.format(hostname)) + # Add links to hostname cert, used if os-hostname vars not set + for net_type in [INTERNAL, ADMIN, PUBLIC]: + try: + addr = resolve_address(endpoint_type=net_type) + cert = os.path.join(ssl_dir, 'cert_{}'.format(addr)) + key = os.path.join(ssl_dir, 'key_{}'.format(addr)) + if os.path.isfile(hostname_cert) and not os.path.isfile(cert): + os.symlink(hostname_cert, cert) + os.symlink(hostname_key, key) + except NoNetworkBinding: + log("Skipping creating cert symlink for ip in {} space, no " + "local address found".format(net_type), WARNING) + if custom_hostname_link: + custom_cert = os.path.join( + ssl_dir, + 'cert_{}'.format(custom_hostname_link)) + custom_key = os.path.join( + ssl_dir, + 'key_{}'.format(custom_hostname_link)) + if os.path.isfile(hostname_cert) and not os.path.isfile(custom_cert): + os.symlink(hostname_cert, custom_cert) + os.symlink(hostname_key, custom_key) + + +def install_certs(ssl_dir, certs, chain=None, user='root', group='root'): + """Install the certs passed into the ssl dir and append the chain if + provided. + + :param ssl_dir: str Directory to create symlinks in + :param certs: {} {'cn': {'cert': 'CERT', 'key': 'KEY'}} + :param chain: str Chain to be appended to certs + :param user: (Optional) Owner of certificate files. Defaults to 'root' + :type user: str + :param group: (Optional) Group of certificate files. Defaults to 'root' + :type group: str + """ + for cn, bundle in certs.items(): + cert_filename = 'cert_{}'.format(cn) + key_filename = 'key_{}'.format(cn) + cert_data = bundle['cert'] + if chain: + # Append chain file so that clients that trust the root CA will + # trust certs signed by an intermediate in the chain + cert_data = cert_data + os.linesep + chain + write_file( + path=os.path.join(ssl_dir, cert_filename), owner=user, group=group, + content=cert_data, perms=0o640) + write_file( + path=os.path.join(ssl_dir, key_filename), owner=user, group=group, + content=bundle['key'], perms=0o640) + + +def process_certificates(service_name, relation_id, unit, + custom_hostname_link=None, user='root', group='root'): + """Process the certificates supplied down the relation + + :param service_name: str Name of service the certifcates are for. + :param relation_id: str Relation id providing the certs + :param unit: str Unit providing the certs + :param custom_hostname_link: str Name of custom link to create + :param user: (Optional) Owner of certificate files. Defaults to 'root' + :type user: str + :param group: (Optional) Group of certificate files. 
Defaults to 'root' + :type group: str + :returns: True if certificates processed for local unit or False + :rtype: bool + """ + data = relation_get(rid=relation_id, unit=unit) + ssl_dir = os.path.join('/etc/apache2/ssl/', service_name) + mkdir(path=ssl_dir) + name = local_unit().replace('/', '_') + certs = data.get('{}.processed_requests'.format(name)) + chain = data.get('chain') + ca = data.get('ca') + if certs: + certs = json.loads(certs) + install_ca_cert(ca.encode()) + install_certs(ssl_dir, certs, chain, user=user, group=group) + create_ip_cert_links( + ssl_dir, + custom_hostname_link=custom_hostname_link) + return True + return False + + +def get_requests_for_local_unit(relation_name=None): + """Extract any certificates data targeted at this unit down relation_name. + + :param relation_name: str Name of relation to check for data. + :returns: List of bundles of certificates. + :rtype: List of dicts + """ + local_name = local_unit().replace('/', '_') + raw_certs_key = '{}.processed_requests'.format(local_name) + relation_name = relation_name or 'certificates' + bundles = [] + for rid in relation_ids(relation_name): + for unit in related_units(rid): + data = relation_get(rid=rid, unit=unit) + if data.get(raw_certs_key): + bundles.append({ + 'ca': data['ca'], + 'chain': data.get('chain'), + 'certs': json.loads(data[raw_certs_key])}) + return bundles + + +def get_bundle_for_cn(cn, relation_name=None): + """Extract certificates for the given cn. + + :param cn: str Canonical Name on certificate. + :param relation_name: str Relation to check for certificates down. + :returns: Dictionary of certificate data, + :rtype: dict. + """ + entries = get_requests_for_local_unit(relation_name) + cert_bundle = {} + for entry in entries: + for _cn, bundle in entry['certs'].items(): + if _cn == cn: + cert_bundle = { + 'cert': bundle['cert'], + 'key': bundle['key'], + 'chain': entry['chain'], + 'ca': entry['ca']} + break + if cert_bundle: + break + return cert_bundle diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/context.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/context.py new file mode 100644 index 0000000000000000000000000000000000000000..42abccf7cb9eabc063bc8d87f016e59f4dc8f898 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/context.py @@ -0,0 +1,3177 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
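+# Editor's note -- a minimal sketch (not part of this module) of the
+# pattern implemented below: a context generator is a callable returning
+# a dict of variables for template rendering, e.g.
+#
+#     class FooContext(OSContextGenerator):
+#         interfaces = ['foo']  # relations this context depends on
+#
+#         def __call__(self):
+#             return {'foo_host': '10.0.0.1'}  # hypothetical value
+#
+# By convention an empty dict marks the context incomplete so templates
+# that depend on it are not rendered with partial data.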
+ +import collections +import copy +import enum +import glob +import hashlib +import json +import math +import os +import re +import socket +import time + +from base64 import b64decode +from subprocess import check_call, CalledProcessError + +import six + +from charmhelpers.contrib.openstack.audits.openstack_security_guide import ( + _config_ini as config_ini +) + +from charmhelpers.fetch import ( + apt_install, + filter_installed_packages, +) +from charmhelpers.core.hookenv import ( + NoNetworkBinding, + config, + is_relation_made, + local_unit, + log, + relation_get, + relation_ids, + related_units, + relation_set, + unit_get, + unit_private_ip, + charm_name, + DEBUG, + INFO, + ERROR, + status_set, + network_get_primary_address, + WARNING, +) + +from charmhelpers.core.sysctl import create as sysctl_create +from charmhelpers.core.strutils import bool_from_string +from charmhelpers.contrib.openstack.exceptions import OSContextError + +from charmhelpers.core.host import ( + get_bond_master, + is_phy_iface, + list_nics, + get_nic_hwaddr, + mkdir, + write_file, + pwgen, + lsb_release, + CompareHostReleases, + is_container, +) +from charmhelpers.contrib.hahelpers.cluster import ( + determine_apache_port, + determine_api_port, + https, + is_clustered, +) +from charmhelpers.contrib.hahelpers.apache import ( + get_cert, + get_ca_cert, + install_ca_cert, +) +from charmhelpers.contrib.openstack.neutron import ( + neutron_plugin_attribute, + parse_data_port_mappings, +) +from charmhelpers.contrib.openstack.ip import ( + resolve_address, + INTERNAL, + ADMIN, + PUBLIC, + ADDRESS_MAP, +) +from charmhelpers.contrib.network.ip import ( + get_address_in_network, + get_ipv4_addr, + get_ipv6_addr, + get_netmask_for_address, + format_ipv6_addr, + is_bridge_member, + is_ipv6_disabled, + get_relation_ip, +) +from charmhelpers.contrib.openstack.utils import ( + config_flags_parser, + get_os_codename_install_source, + enable_memcache, + CompareOpenStackReleases, + os_release, +) +from charmhelpers.core.unitdata import kv + +try: + from sriov_netplan_shim import pci +except ImportError: + # The use of the function and contexts that require the pci module is + # optional. + pass + +try: + import psutil +except ImportError: + if six.PY2: + apt_install('python-psutil', fatal=True) + else: + apt_install('python3-psutil', fatal=True) + import psutil + +CA_CERT_PATH = '/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt' +ADDRESS_TYPES = ['admin', 'internal', 'public'] +HAPROXY_RUN_DIR = '/var/run/haproxy/' +DEFAULT_OSLO_MESSAGING_DRIVER = "messagingv2" + + +def ensure_packages(packages): + """Install but do not upgrade required plugin packages.""" + required = filter_installed_packages(packages) + if required: + apt_install(required, fatal=True) + + +def context_complete(ctxt): + _missing = [] + for k, v in six.iteritems(ctxt): + if v is None or v == '': + _missing.append(k) + + if _missing: + log('Missing required data: %s' % ' '.join(_missing), level=INFO) + return False + + return True + + +class OSContextGenerator(object): + """Base class for all context generators.""" + interfaces = [] + related = False + complete = False + missing_data = [] + + def __call__(self): + raise NotImplementedError + + def context_complete(self, ctxt): + """Check for missing data for the required context data. + Set self.missing_data if it exists and return False. + Set self.complete if no missing data and return True. 
+ """ + # Fresh start + self.complete = False + self.missing_data = [] + for k, v in six.iteritems(ctxt): + if v is None or v == '': + if k not in self.missing_data: + self.missing_data.append(k) + + if self.missing_data: + self.complete = False + log('Missing required data: %s' % ' '.join(self.missing_data), + level=INFO) + else: + self.complete = True + return self.complete + + def get_related(self): + """Check if any of the context interfaces have relation ids. + Set self.related and return True if one of the interfaces + has relation ids. + """ + # Fresh start + self.related = False + try: + for interface in self.interfaces: + if relation_ids(interface): + self.related = True + return self.related + except AttributeError as e: + log("{} {}" + "".format(self, e), 'INFO') + return self.related + + +class SharedDBContext(OSContextGenerator): + interfaces = ['shared-db'] + + def __init__(self, database=None, user=None, relation_prefix=None, + ssl_dir=None, relation_id=None): + """Allows inspecting relation for settings prefixed with + relation_prefix. This is useful for parsing access for multiple + databases returned via the shared-db interface (eg, nova_password, + quantum_password) + """ + self.relation_prefix = relation_prefix + self.database = database + self.user = user + self.ssl_dir = ssl_dir + self.rel_name = self.interfaces[0] + self.relation_id = relation_id + + def __call__(self): + self.database = self.database or config('database') + self.user = self.user or config('database-user') + if None in [self.database, self.user]: + log("Could not generate shared_db context. Missing required charm " + "config options. (database name and user)", level=ERROR) + raise OSContextError + + ctxt = {} + + # NOTE(jamespage) if mysql charm provides a network upon which + # access to the database should be made, reconfigure relation + # with the service units local address and defer execution + access_network = relation_get('access-network') + if access_network is not None: + if self.relation_prefix is not None: + hostname_key = "{}_hostname".format(self.relation_prefix) + else: + hostname_key = "hostname" + access_hostname = get_address_in_network( + access_network, + unit_get('private-address')) + set_hostname = relation_get(attribute=hostname_key, + unit=local_unit()) + if set_hostname != access_hostname: + relation_set(relation_settings={hostname_key: access_hostname}) + return None # Defer any further hook execution for now.... + + password_setting = 'password' + if self.relation_prefix: + password_setting = self.relation_prefix + '_password' + + if self.relation_id: + rids = [self.relation_id] + else: + rids = relation_ids(self.interfaces[0]) + + rel = (get_os_codename_install_source(config('openstack-origin')) or + 'icehouse') + for rid in rids: + self.related = True + for unit in related_units(rid): + rdata = relation_get(rid=rid, unit=unit) + host = rdata.get('db_host') + host = format_ipv6_addr(host) or host + ctxt = { + 'database_host': host, + 'database': self.database, + 'database_user': self.user, + 'database_password': rdata.get(password_setting), + 'database_type': 'mysql+pymysql' + } + # Port is being introduced with LP Bug #1876188 + # but it not currently required and may not be set in all + # cases, particularly in classic charms. 
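+                # Illustration (hypothetical values): a complete context
+                # lets templates render a SQLAlchemy-style URL such as
+                #     mysql+pymysql://nova:secret@10.0.5.20/nova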
+ port = rdata.get('db_port') + if port: + ctxt['database_port'] = port + if CompareOpenStackReleases(rel) < 'queens': + ctxt['database_type'] = 'mysql' + if self.context_complete(ctxt): + db_ssl(rdata, ctxt, self.ssl_dir) + return ctxt + return {} + + +class PostgresqlDBContext(OSContextGenerator): + interfaces = ['pgsql-db'] + + def __init__(self, database=None): + self.database = database + + def __call__(self): + self.database = self.database or config('database') + if self.database is None: + log('Could not generate postgresql_db context. Missing required ' + 'charm config options. (database name)', level=ERROR) + raise OSContextError + + ctxt = {} + for rid in relation_ids(self.interfaces[0]): + self.related = True + for unit in related_units(rid): + rel_host = relation_get('host', rid=rid, unit=unit) + rel_user = relation_get('user', rid=rid, unit=unit) + rel_passwd = relation_get('password', rid=rid, unit=unit) + ctxt = {'database_host': rel_host, + 'database': self.database, + 'database_user': rel_user, + 'database_password': rel_passwd, + 'database_type': 'postgresql'} + if self.context_complete(ctxt): + return ctxt + + return {} + + +def db_ssl(rdata, ctxt, ssl_dir): + if 'ssl_ca' in rdata and ssl_dir: + ca_path = os.path.join(ssl_dir, 'db-client.ca') + with open(ca_path, 'wb') as fh: + fh.write(b64decode(rdata['ssl_ca'])) + + ctxt['database_ssl_ca'] = ca_path + elif 'ssl_ca' in rdata: + log("Charm not setup for ssl support but ssl ca found", level=INFO) + return ctxt + + if 'ssl_cert' in rdata: + cert_path = os.path.join( + ssl_dir, 'db-client.cert') + if not os.path.exists(cert_path): + log("Waiting 1m for ssl client cert validity", level=INFO) + time.sleep(60) + + with open(cert_path, 'wb') as fh: + fh.write(b64decode(rdata['ssl_cert'])) + + ctxt['database_ssl_cert'] = cert_path + key_path = os.path.join(ssl_dir, 'db-client.key') + with open(key_path, 'wb') as fh: + fh.write(b64decode(rdata['ssl_key'])) + + ctxt['database_ssl_key'] = key_path + + return ctxt + + +class IdentityServiceContext(OSContextGenerator): + + def __init__(self, + service=None, + service_user=None, + rel_name='identity-service'): + self.service = service + self.service_user = service_user + self.rel_name = rel_name + self.interfaces = [self.rel_name] + + def _setup_pki_cache(self): + if self.service and self.service_user: + # This is required for pki token signing if we don't want /tmp to + # be used. + cachedir = '/var/cache/%s' % (self.service) + if not os.path.isdir(cachedir): + log("Creating service cache dir %s" % (cachedir), level=DEBUG) + mkdir(path=cachedir, owner=self.service_user, + group=self.service_user, perms=0o700) + + return cachedir + return None + + def _get_pkg_name(self, python_name='keystonemiddleware'): + """Get corresponding distro installed package for python + package name. + + :param python_name: nameof the python package + :type: string + """ + pkg_names = map(lambda x: x + python_name, ('python3-', 'python-')) + + for pkg in pkg_names: + if not filter_installed_packages((pkg,)): + return pkg + + return None + + def _get_keystone_authtoken_ctxt(self, ctxt, keystonemiddleware_os_rel): + """Build Jinja2 context for full rendering of [keystone_authtoken] + section with variable names included. Re-constructed from former + template 'section-keystone-auth-mitaka'. 
+ + :param ctxt: Jinja2 context returned from self.__call__() + :type: dict + :param keystonemiddleware_os_rel: OpenStack release name of + keystonemiddleware package installed + """ + c = collections.OrderedDict((('auth_type', 'password'),)) + + # 'www_authenticate_uri' replaced 'auth_uri' since Stein, + # see keystonemiddleware upstream sources for more info + if CompareOpenStackReleases(keystonemiddleware_os_rel) >= 'stein': + c.update(( + ('www_authenticate_uri', "{}://{}:{}/v3".format( + ctxt.get('service_protocol', ''), + ctxt.get('service_host', ''), + ctxt.get('service_port', ''))),)) + else: + c.update(( + ('auth_uri', "{}://{}:{}/v3".format( + ctxt.get('service_protocol', ''), + ctxt.get('service_host', ''), + ctxt.get('service_port', ''))),)) + + c.update(( + ('auth_url', "{}://{}:{}/v3".format( + ctxt.get('auth_protocol', ''), + ctxt.get('auth_host', ''), + ctxt.get('auth_port', ''))), + ('project_domain_name', ctxt.get('admin_domain_name', '')), + ('user_domain_name', ctxt.get('admin_domain_name', '')), + ('project_name', ctxt.get('admin_tenant_name', '')), + ('username', ctxt.get('admin_user', '')), + ('password', ctxt.get('admin_password', '')), + ('signing_dir', ctxt.get('signing_dir', '')),)) + + return c + + def __call__(self): + log('Generating template context for ' + self.rel_name, level=DEBUG) + ctxt = {} + + keystonemiddleware_os_release = None + if self._get_pkg_name(): + keystonemiddleware_os_release = os_release(self._get_pkg_name()) + + cachedir = self._setup_pki_cache() + if cachedir: + ctxt['signing_dir'] = cachedir + + for rid in relation_ids(self.rel_name): + self.related = True + for unit in related_units(rid): + rdata = relation_get(rid=rid, unit=unit) + serv_host = rdata.get('service_host') + serv_host = format_ipv6_addr(serv_host) or serv_host + auth_host = rdata.get('auth_host') + auth_host = format_ipv6_addr(auth_host) or auth_host + svc_protocol = rdata.get('service_protocol') or 'http' + auth_protocol = rdata.get('auth_protocol') or 'http' + api_version = rdata.get('api_version') or '2.0' + ctxt.update({'service_port': rdata.get('service_port'), + 'service_host': serv_host, + 'auth_host': auth_host, + 'auth_port': rdata.get('auth_port'), + 'admin_tenant_name': rdata.get('service_tenant'), + 'admin_user': rdata.get('service_username'), + 'admin_password': rdata.get('service_password'), + 'service_protocol': svc_protocol, + 'auth_protocol': auth_protocol, + 'api_version': api_version}) + + if float(api_version) > 2: + ctxt.update({ + 'admin_domain_name': rdata.get('service_domain'), + 'service_project_id': rdata.get('service_tenant_id'), + 'service_domain_id': rdata.get('service_domain_id')}) + + # we keep all veriables in ctxt for compatibility and + # add nested dictionary for keystone_authtoken generic + # templating + if keystonemiddleware_os_release: + ctxt['keystone_authtoken'] = \ + self._get_keystone_authtoken_ctxt( + ctxt, keystonemiddleware_os_release) + + if self.context_complete(ctxt): + # NOTE(jamespage) this is required for >= icehouse + # so a missing value just indicates keystone needs + # upgrading + ctxt['admin_tenant_id'] = rdata.get('service_tenant_id') + ctxt['admin_domain_id'] = rdata.get('service_domain_id') + return ctxt + + return {} + + +class IdentityCredentialsContext(IdentityServiceContext): + '''Context for identity-credentials interface type''' + + def __init__(self, + service=None, + service_user=None, + rel_name='identity-credentials'): + super(IdentityCredentialsContext, self).__init__(service, + service_user, + 
rel_name) + + def __call__(self): + log('Generating template context for ' + self.rel_name, level=DEBUG) + ctxt = {} + + cachedir = self._setup_pki_cache() + if cachedir: + ctxt['signing_dir'] = cachedir + + for rid in relation_ids(self.rel_name): + self.related = True + for unit in related_units(rid): + rdata = relation_get(rid=rid, unit=unit) + credentials_host = rdata.get('credentials_host') + credentials_host = ( + format_ipv6_addr(credentials_host) or credentials_host + ) + auth_host = rdata.get('auth_host') + auth_host = format_ipv6_addr(auth_host) or auth_host + svc_protocol = rdata.get('credentials_protocol') or 'http' + auth_protocol = rdata.get('auth_protocol') or 'http' + api_version = rdata.get('api_version') or '2.0' + ctxt.update({ + 'service_port': rdata.get('credentials_port'), + 'service_host': credentials_host, + 'auth_host': auth_host, + 'auth_port': rdata.get('auth_port'), + 'admin_tenant_name': rdata.get('credentials_project'), + 'admin_tenant_id': rdata.get('credentials_project_id'), + 'admin_user': rdata.get('credentials_username'), + 'admin_password': rdata.get('credentials_password'), + 'service_protocol': svc_protocol, + 'auth_protocol': auth_protocol, + 'api_version': api_version + }) + + if float(api_version) > 2: + ctxt.update({'admin_domain_name': + rdata.get('domain')}) + + if self.context_complete(ctxt): + return ctxt + + return {} + + +class NovaVendorMetadataContext(OSContextGenerator): + """Context used for configuring nova vendor metadata on nova.conf file.""" + + def __init__(self, os_release_pkg, interfaces=None): + """Initialize the NovaVendorMetadataContext object. + + :param os_release_pkg: the package name to extract the OpenStack + release codename from. + :type os_release_pkg: str + :param interfaces: list of string values to be used as the Context's + relation interfaces. + :type interfaces: List[str] + """ + self.os_release_pkg = os_release_pkg + if interfaces is not None: + self.interfaces = interfaces + + def __call__(self): + cmp_os_release = CompareOpenStackReleases( + os_release(self.os_release_pkg)) + ctxt = {'vendor_data': False} + + vdata_providers = [] + vdata = config('vendor-data') + vdata_url = config('vendor-data-url') + + if vdata: + try: + # validate the JSON. If invalid, we do not set anything here + json.loads(vdata) + except (TypeError, ValueError) as e: + log('Error decoding vendor-data. {}'.format(e), level=ERROR) + else: + ctxt['vendor_data'] = True + # Mitaka does not support DynamicJSON + # so vendordata_providers is not needed + if cmp_os_release > 'mitaka': + vdata_providers.append('StaticJSON') + + if vdata_url: + if cmp_os_release > 'mitaka': + ctxt['vendor_data_url'] = vdata_url + vdata_providers.append('DynamicJSON') + else: + log('Dynamic vendor data unsupported' + ' for {}.'.format(cmp_os_release), level=ERROR) + if vdata_providers: + ctxt['vendordata_providers'] = ','.join(vdata_providers) + + return ctxt + + +class NovaVendorMetadataJSONContext(OSContextGenerator): + """Context used for writing nova vendor metadata json file.""" + + def __init__(self, os_release_pkg): + """Initialize the NovaVendorMetadataJSONContext object. + + :param os_release_pkg: the package name to extract the OpenStack + release codename from. + :type os_release_pkg: str + """ + self.os_release_pkg = os_release_pkg + + def __call__(self): + ctxt = {'vendor_data_json': '{}'} + + vdata = config('vendor-data') + if vdata: + try: + # validate the JSON. If invalid, we return empty. 
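+                # e.g. vendor-data='{"cloud": "example"}' (hypothetical)
+                # is accepted and written verbatim, while a malformed
+                # value only logs an error and leaves the default '{}'.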
+ json.loads(vdata) + except (TypeError, ValueError) as e: + log('Error decoding vendor-data. {}'.format(e), level=ERROR) + else: + ctxt['vendor_data_json'] = vdata + + return ctxt + + +class AMQPContext(OSContextGenerator): + + def __init__(self, ssl_dir=None, rel_name='amqp', relation_prefix=None, + relation_id=None): + self.ssl_dir = ssl_dir + self.rel_name = rel_name + self.relation_prefix = relation_prefix + self.interfaces = [rel_name] + self.relation_id = relation_id + + def __call__(self): + log('Generating template context for amqp', level=DEBUG) + conf = config() + if self.relation_prefix: + user_setting = '%s-rabbit-user' % (self.relation_prefix) + vhost_setting = '%s-rabbit-vhost' % (self.relation_prefix) + else: + user_setting = 'rabbit-user' + vhost_setting = 'rabbit-vhost' + + try: + username = conf[user_setting] + vhost = conf[vhost_setting] + except KeyError as e: + log('Could not generate shared_db context. Missing required charm ' + 'config options: %s.' % e, level=ERROR) + raise OSContextError + + ctxt = {} + if self.relation_id: + rids = [self.relation_id] + else: + rids = relation_ids(self.rel_name) + for rid in rids: + ha_vip_only = False + self.related = True + transport_hosts = None + rabbitmq_port = '5672' + for unit in related_units(rid): + if relation_get('clustered', rid=rid, unit=unit): + ctxt['clustered'] = True + vip = relation_get('vip', rid=rid, unit=unit) + vip = format_ipv6_addr(vip) or vip + ctxt['rabbitmq_host'] = vip + transport_hosts = [vip] + else: + host = relation_get('private-address', rid=rid, unit=unit) + host = format_ipv6_addr(host) or host + ctxt['rabbitmq_host'] = host + transport_hosts = [host] + + ctxt.update({ + 'rabbitmq_user': username, + 'rabbitmq_password': relation_get('password', rid=rid, + unit=unit), + 'rabbitmq_virtual_host': vhost, + }) + + ssl_port = relation_get('ssl_port', rid=rid, unit=unit) + if ssl_port: + ctxt['rabbit_ssl_port'] = ssl_port + rabbitmq_port = ssl_port + + ssl_ca = relation_get('ssl_ca', rid=rid, unit=unit) + if ssl_ca: + ctxt['rabbit_ssl_ca'] = ssl_ca + + if relation_get('ha_queues', rid=rid, unit=unit) is not None: + ctxt['rabbitmq_ha_queues'] = True + + ha_vip_only = relation_get('ha-vip-only', + rid=rid, unit=unit) is not None + + if self.context_complete(ctxt): + if 'rabbit_ssl_ca' in ctxt: + if not self.ssl_dir: + log("Charm not setup for ssl support but ssl ca " + "found", level=INFO) + break + + ca_path = os.path.join( + self.ssl_dir, 'rabbit-client-ca.pem') + with open(ca_path, 'wb') as fh: + fh.write(b64decode(ctxt['rabbit_ssl_ca'])) + ctxt['rabbit_ssl_ca'] = ca_path + + # Sufficient information found = break out! 
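+                    # Illustration (hypothetical values): once complete,
+                    # this context later yields e.g.
+                    #     transport_url = rabbit://user:pass@10.0.6.30:5672/vhost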
+ break + + # Used for active/active rabbitmq >= grizzly + if (('clustered' not in ctxt or ha_vip_only) and + len(related_units(rid)) > 1): + rabbitmq_hosts = [] + for unit in related_units(rid): + host = relation_get('private-address', rid=rid, unit=unit) + if not relation_get('password', rid=rid, unit=unit): + log( + ("Skipping {} password not sent which indicates " + "unit is not ready.".format(host)), + level=DEBUG) + continue + host = format_ipv6_addr(host) or host + rabbitmq_hosts.append(host) + + rabbitmq_hosts = sorted(rabbitmq_hosts) + ctxt['rabbitmq_hosts'] = ','.join(rabbitmq_hosts) + transport_hosts = rabbitmq_hosts + + if transport_hosts: + transport_url_hosts = ','.join([ + "{}:{}@{}:{}".format(ctxt['rabbitmq_user'], + ctxt['rabbitmq_password'], + host_, + rabbitmq_port) + for host_ in transport_hosts]) + ctxt['transport_url'] = "rabbit://{}/{}".format( + transport_url_hosts, vhost) + + oslo_messaging_flags = conf.get('oslo-messaging-flags', None) + if oslo_messaging_flags: + ctxt['oslo_messaging_flags'] = config_flags_parser( + oslo_messaging_flags) + + oslo_messaging_driver = conf.get( + 'oslo-messaging-driver', DEFAULT_OSLO_MESSAGING_DRIVER) + if oslo_messaging_driver: + ctxt['oslo_messaging_driver'] = oslo_messaging_driver + + notification_format = conf.get('notification-format', None) + if notification_format: + ctxt['notification_format'] = notification_format + + notification_topics = conf.get('notification-topics', None) + if notification_topics: + ctxt['notification_topics'] = notification_topics + + send_notifications_to_logs = conf.get('send-notifications-to-logs', None) + if send_notifications_to_logs: + ctxt['send_notifications_to_logs'] = send_notifications_to_logs + + if not self.complete: + return {} + + return ctxt + + +class CephContext(OSContextGenerator): + """Generates context for /etc/ceph/ceph.conf templates.""" + interfaces = ['ceph'] + + def __call__(self): + if not relation_ids('ceph'): + return {} + + log('Generating template context for ceph', level=DEBUG) + mon_hosts = [] + ctxt = { + 'use_syslog': str(config('use-syslog')).lower() + } + for rid in relation_ids('ceph'): + for unit in related_units(rid): + if not ctxt.get('auth'): + ctxt['auth'] = relation_get('auth', rid=rid, unit=unit) + if not ctxt.get('key'): + ctxt['key'] = relation_get('key', rid=rid, unit=unit) + if not ctxt.get('rbd_features'): + default_features = relation_get('rbd-features', rid=rid, unit=unit) + if default_features is not None: + ctxt['rbd_features'] = default_features + + ceph_addrs = relation_get('ceph-public-address', rid=rid, + unit=unit) + if ceph_addrs: + for addr in ceph_addrs.split(' '): + mon_hosts.append(format_ipv6_addr(addr) or addr) + else: + priv_addr = relation_get('private-address', rid=rid, + unit=unit) + mon_hosts.append(format_ipv6_addr(priv_addr) or priv_addr) + + ctxt['mon_hosts'] = ' '.join(sorted(mon_hosts)) + + if not os.path.isdir('/etc/ceph'): + os.mkdir('/etc/ceph') + + if not self.context_complete(ctxt): + return {} + + ensure_packages(['ceph-common']) + return ctxt + + def context_complete(self, ctxt): + """Overridden here to ensure the context is actually complete. + + We set `key` and `auth` to None here, by default, to ensure + that the context will always evaluate to incomplete until the + Ceph relation has actually sent these details; otherwise, + there is a potential race condition between the relation + appearing and the first unit actually setting this data on the + relation. 
+ + :param ctxt: The current context members + :type ctxt: Dict[str, ANY] + :returns: True if the context is complete + :rtype: bool + """ + if 'auth' not in ctxt or 'key' not in ctxt: + return False + return super(CephContext, self).context_complete(ctxt) + + +class HAProxyContext(OSContextGenerator): + """Provides half a context for the haproxy template, which describes + all peers to be included in the cluster. Each charm needs to include + its own context generator that describes the port mapping. + + :side effect: mkdir is called on HAPROXY_RUN_DIR + """ + interfaces = ['cluster'] + + def __init__(self, singlenode_mode=False, + address_types=ADDRESS_TYPES): + self.address_types = address_types + self.singlenode_mode = singlenode_mode + + def __call__(self): + if not os.path.isdir(HAPROXY_RUN_DIR): + mkdir(path=HAPROXY_RUN_DIR) + if not relation_ids('cluster') and not self.singlenode_mode: + return {} + + l_unit = local_unit().replace('/', '-') + cluster_hosts = collections.OrderedDict() + + # NOTE(jamespage): build out map of configured network endpoints + # and associated backends + for addr_type in self.address_types: + cfg_opt = 'os-{}-network'.format(addr_type) + # NOTE(thedac) For some reason the ADDRESS_MAP uses 'int' rather + # than 'internal' + if addr_type == 'internal': + _addr_map_type = INTERNAL + else: + _addr_map_type = addr_type + # Network spaces aware + laddr = get_relation_ip(ADDRESS_MAP[_addr_map_type]['binding'], + config(cfg_opt)) + if laddr: + netmask = get_netmask_for_address(laddr) + cluster_hosts[laddr] = { + 'network': "{}/{}".format(laddr, + netmask), + 'backends': collections.OrderedDict([(l_unit, + laddr)]) + } + for rid in relation_ids('cluster'): + for unit in sorted(related_units(rid)): + # API Charms will need to set {addr_type}-address with + # get_relation_ip(addr_type) + _laddr = relation_get('{}-address'.format(addr_type), + rid=rid, unit=unit) + if _laddr: + _unit = unit.replace('/', '-') + cluster_hosts[laddr]['backends'][_unit] = _laddr + + # NOTE(jamespage) add backend based on get_relation_ip - this + # will either be the only backend or the fallback if no acls + # match in the frontend + # Network spaces aware + addr = get_relation_ip('cluster') + cluster_hosts[addr] = {} + netmask = get_netmask_for_address(addr) + cluster_hosts[addr] = { + 'network': "{}/{}".format(addr, netmask), + 'backends': collections.OrderedDict([(l_unit, + addr)]) + } + for rid in relation_ids('cluster'): + for unit in sorted(related_units(rid)): + # API Charms will need to set their private-address with + # get_relation_ip('cluster') + _laddr = relation_get('private-address', + rid=rid, unit=unit) + if _laddr: + _unit = unit.replace('/', '-') + cluster_hosts[addr]['backends'][_unit] = _laddr + + ctxt = { + 'frontends': cluster_hosts, + 'default_backend': addr + } + + if config('haproxy-server-timeout'): + ctxt['haproxy_server_timeout'] = config('haproxy-server-timeout') + + if config('haproxy-client-timeout'): + ctxt['haproxy_client_timeout'] = config('haproxy-client-timeout') + + if config('haproxy-queue-timeout'): + ctxt['haproxy_queue_timeout'] = config('haproxy-queue-timeout') + + if config('haproxy-connect-timeout'): + ctxt['haproxy_connect_timeout'] = config('haproxy-connect-timeout') + + if config('prefer-ipv6'): + ctxt['local_host'] = 'ip6-localhost' + ctxt['haproxy_host'] = '::' + else: + ctxt['local_host'] = '127.0.0.1' + ctxt['haproxy_host'] = '0.0.0.0' + + ctxt['ipv6_enabled'] = not is_ipv6_disabled() + + ctxt['stat_port'] = '8888' + + db = kv() + 
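+        # The haproxy stats password is minted once with pwgen(32),
+        # persisted in the unit's local kv store, and reused on every
+        # subsequent render rather than regenerated.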
ctxt['stat_password'] = db.get('stat-password') + if not ctxt['stat_password']: + ctxt['stat_password'] = db.set('stat-password', + pwgen(32)) + db.flush() + + for frontend in cluster_hosts: + if (len(cluster_hosts[frontend]['backends']) > 1 or + self.singlenode_mode): + # Enable haproxy when we have enough peers. + log('Ensuring haproxy enabled in /etc/default/haproxy.', + level=DEBUG) + with open('/etc/default/haproxy', 'w') as out: + out.write('ENABLED=1\n') + + return ctxt + + log('HAProxy context is incomplete, this unit has no peers.', + level=INFO) + return {} + + +class ImageServiceContext(OSContextGenerator): + interfaces = ['image-service'] + + def __call__(self): + """Obtains the glance API server from the image-service relation. + Useful in nova and cinder (currently). + """ + log('Generating template context for image-service.', level=DEBUG) + rids = relation_ids('image-service') + if not rids: + return {} + + for rid in rids: + for unit in related_units(rid): + api_server = relation_get('glance-api-server', + rid=rid, unit=unit) + if api_server: + return {'glance_api_servers': api_server} + + log("ImageService context is incomplete. Missing required relation " + "data.", level=INFO) + return {} + + +class ApacheSSLContext(OSContextGenerator): + """Generates a context for an apache vhost configuration that configures + HTTPS reverse proxying for one or many endpoints. Generated context + looks something like:: + + { + 'namespace': 'cinder', + 'private_address': 'iscsi.mycinderhost.com', + 'endpoints': [(8776, 8766), (8777, 8767)] + } + + The endpoints list consists of a tuples mapping external ports + to internal ports. + """ + interfaces = ['https'] + + # charms should inherit this context and set external ports + # and service namespace accordingly. + external_ports = [] + service_namespace = None + user = group = 'root' + + def enable_modules(self): + cmd = ['a2enmod', 'ssl', 'proxy', 'proxy_http', 'headers'] + check_call(cmd) + + def configure_cert(self, cn=None): + ssl_dir = os.path.join('/etc/apache2/ssl/', self.service_namespace) + mkdir(path=ssl_dir) + cert, key = get_cert(cn) + if cert and key: + if cn: + cert_filename = 'cert_{}'.format(cn) + key_filename = 'key_{}'.format(cn) + else: + cert_filename = 'cert' + key_filename = 'key' + + write_file(path=os.path.join(ssl_dir, cert_filename), + content=b64decode(cert), owner=self.user, + group=self.group, perms=0o640) + write_file(path=os.path.join(ssl_dir, key_filename), + content=b64decode(key), owner=self.user, + group=self.group, perms=0o640) + + def configure_ca(self): + ca_cert = get_ca_cert() + if ca_cert: + install_ca_cert(b64decode(ca_cert)) + + def canonical_names(self): + """Figure out which canonical names clients will access this service. + """ + cns = [] + for r_id in relation_ids('identity-service'): + for unit in related_units(r_id): + rdata = relation_get(rid=r_id, unit=unit) + for k in rdata: + if k.startswith('ssl_key_'): + cns.append(k.lstrip('ssl_key_')) + + return sorted(list(set(cns))) + + def get_network_addresses(self): + """For each network configured, return corresponding address and + hostnamr or vip (if available). + + Returns a list of tuples of the form: + + [(address_in_net_a, hostname_in_net_a), + (address_in_net_b, hostname_in_net_b), + ...] + + or, if no hostnames(s) available: + + [(address_in_net_a, vip_in_net_a), + (address_in_net_b, vip_in_net_b), + ...] + + or, if no vip(s) available: + + [(address_in_net_a, address_in_net_a), + (address_in_net_b, address_in_net_b), + ...] 
+ """ + addresses = [] + for net_type in [INTERNAL, ADMIN, PUBLIC]: + net_config = config(ADDRESS_MAP[net_type]['config']) + # NOTE(jamespage): Fallback must always be private address + # as this is used to bind services on the + # local unit. + fallback = unit_get("private-address") + if net_config: + addr = get_address_in_network(net_config, + fallback) + else: + try: + addr = network_get_primary_address( + ADDRESS_MAP[net_type]['binding'] + ) + except (NotImplementedError, NoNetworkBinding): + addr = fallback + + endpoint = resolve_address(net_type) + addresses.append((addr, endpoint)) + + return sorted(set(addresses)) + + def __call__(self): + if isinstance(self.external_ports, six.string_types): + self.external_ports = [self.external_ports] + + if not self.external_ports or not https(): + return {} + + use_keystone_ca = True + for rid in relation_ids('certificates'): + if related_units(rid): + use_keystone_ca = False + + if use_keystone_ca: + self.configure_ca() + + self.enable_modules() + + ctxt = {'namespace': self.service_namespace, + 'endpoints': [], + 'ext_ports': []} + + if use_keystone_ca: + cns = self.canonical_names() + if cns: + for cn in cns: + self.configure_cert(cn) + else: + # Expect cert/key provided in config (currently assumed that ca + # uses ip for cn) + for net_type in (INTERNAL, ADMIN, PUBLIC): + cn = resolve_address(endpoint_type=net_type) + self.configure_cert(cn) + + addresses = self.get_network_addresses() + for address, endpoint in addresses: + for api_port in self.external_ports: + ext_port = determine_apache_port(api_port, + singlenode_mode=True) + int_port = determine_api_port(api_port, singlenode_mode=True) + portmap = (address, endpoint, int(ext_port), int(int_port)) + ctxt['endpoints'].append(portmap) + ctxt['ext_ports'].append(int(ext_port)) + + ctxt['ext_ports'] = sorted(list(set(ctxt['ext_ports']))) + return ctxt + + +class NeutronContext(OSContextGenerator): + interfaces = [] + + @property + def plugin(self): + return None + + @property + def network_manager(self): + return None + + @property + def packages(self): + return neutron_plugin_attribute(self.plugin, 'packages', + self.network_manager) + + @property + def neutron_security_groups(self): + return None + + def _ensure_packages(self): + for pkgs in self.packages: + ensure_packages(pkgs) + + def ovs_ctxt(self): + driver = neutron_plugin_attribute(self.plugin, 'driver', + self.network_manager) + config = neutron_plugin_attribute(self.plugin, 'config', + self.network_manager) + ovs_ctxt = {'core_plugin': driver, + 'neutron_plugin': 'ovs', + 'neutron_security_groups': self.neutron_security_groups, + 'local_ip': unit_private_ip(), + 'config': config} + + return ovs_ctxt + + def nuage_ctxt(self): + driver = neutron_plugin_attribute(self.plugin, 'driver', + self.network_manager) + config = neutron_plugin_attribute(self.plugin, 'config', + self.network_manager) + nuage_ctxt = {'core_plugin': driver, + 'neutron_plugin': 'vsp', + 'neutron_security_groups': self.neutron_security_groups, + 'local_ip': unit_private_ip(), + 'config': config} + + return nuage_ctxt + + def nvp_ctxt(self): + driver = neutron_plugin_attribute(self.plugin, 'driver', + self.network_manager) + config = neutron_plugin_attribute(self.plugin, 'config', + self.network_manager) + nvp_ctxt = {'core_plugin': driver, + 'neutron_plugin': 'nvp', + 'neutron_security_groups': self.neutron_security_groups, + 'local_ip': unit_private_ip(), + 'config': config} + + return nvp_ctxt + + def n1kv_ctxt(self): + driver = 
neutron_plugin_attribute(self.plugin, 'driver', + self.network_manager) + n1kv_config = neutron_plugin_attribute(self.plugin, 'config', + self.network_manager) + n1kv_user_config_flags = config('n1kv-config-flags') + restrict_policy_profiles = config('n1kv-restrict-policy-profiles') + n1kv_ctxt = {'core_plugin': driver, + 'neutron_plugin': 'n1kv', + 'neutron_security_groups': self.neutron_security_groups, + 'local_ip': unit_private_ip(), + 'config': n1kv_config, + 'vsm_ip': config('n1kv-vsm-ip'), + 'vsm_username': config('n1kv-vsm-username'), + 'vsm_password': config('n1kv-vsm-password'), + 'restrict_policy_profiles': restrict_policy_profiles} + + if n1kv_user_config_flags: + flags = config_flags_parser(n1kv_user_config_flags) + n1kv_ctxt['user_config_flags'] = flags + + return n1kv_ctxt + + def calico_ctxt(self): + driver = neutron_plugin_attribute(self.plugin, 'driver', + self.network_manager) + config = neutron_plugin_attribute(self.plugin, 'config', + self.network_manager) + calico_ctxt = {'core_plugin': driver, + 'neutron_plugin': 'Calico', + 'neutron_security_groups': self.neutron_security_groups, + 'local_ip': unit_private_ip(), + 'config': config} + + return calico_ctxt + + def neutron_ctxt(self): + if https(): + proto = 'https' + else: + proto = 'http' + + if is_clustered(): + host = config('vip') + else: + host = unit_get('private-address') + + ctxt = {'network_manager': self.network_manager, + 'neutron_url': '%s://%s:%s' % (proto, host, '9696')} + return ctxt + + def pg_ctxt(self): + driver = neutron_plugin_attribute(self.plugin, 'driver', + self.network_manager) + config = neutron_plugin_attribute(self.plugin, 'config', + self.network_manager) + ovs_ctxt = {'core_plugin': driver, + 'neutron_plugin': 'plumgrid', + 'neutron_security_groups': self.neutron_security_groups, + 'local_ip': unit_private_ip(), + 'config': config} + return ovs_ctxt + + def midonet_ctxt(self): + driver = neutron_plugin_attribute(self.plugin, 'driver', + self.network_manager) + midonet_config = neutron_plugin_attribute(self.plugin, 'config', + self.network_manager) + mido_ctxt = {'core_plugin': driver, + 'neutron_plugin': 'midonet', + 'neutron_security_groups': self.neutron_security_groups, + 'local_ip': unit_private_ip(), + 'config': midonet_config} + + return mido_ctxt + + def __call__(self): + if self.network_manager not in ['quantum', 'neutron']: + return {} + + if not self.plugin: + return {} + + ctxt = self.neutron_ctxt() + + if self.plugin == 'ovs': + ctxt.update(self.ovs_ctxt()) + elif self.plugin in ['nvp', 'nsx']: + ctxt.update(self.nvp_ctxt()) + elif self.plugin == 'n1kv': + ctxt.update(self.n1kv_ctxt()) + elif self.plugin == 'Calico': + ctxt.update(self.calico_ctxt()) + elif self.plugin == 'vsp': + ctxt.update(self.nuage_ctxt()) + elif self.plugin == 'plumgrid': + ctxt.update(self.pg_ctxt()) + elif self.plugin == 'midonet': + ctxt.update(self.midonet_ctxt()) + + alchemy_flags = config('neutron-alchemy-flags') + if alchemy_flags: + flags = config_flags_parser(alchemy_flags) + ctxt['neutron_alchemy_flags'] = flags + + return ctxt + + +class NeutronPortContext(OSContextGenerator): + + def resolve_ports(self, ports): + """Resolve NICs not yet bound to bridge(s) + + If hwaddress provided then returns resolved hwaddress otherwise NIC. 
+ """ + if not ports: + return None + + hwaddr_to_nic = {} + hwaddr_to_ip = {} + extant_nics = list_nics() + + for nic in extant_nics: + # Ignore virtual interfaces (bond masters will be identified from + # their slaves) + if not is_phy_iface(nic): + continue + + _nic = get_bond_master(nic) + if _nic: + log("Replacing iface '%s' with bond master '%s'" % (nic, _nic), + level=DEBUG) + nic = _nic + + hwaddr = get_nic_hwaddr(nic) + hwaddr_to_nic[hwaddr] = nic + addresses = get_ipv4_addr(nic, fatal=False) + addresses += get_ipv6_addr(iface=nic, fatal=False) + hwaddr_to_ip[hwaddr] = addresses + + resolved = [] + mac_regex = re.compile(r'([0-9A-F]{2}[:-]){5}([0-9A-F]{2})', re.I) + for entry in ports: + if re.match(mac_regex, entry): + # NIC is in known NICs and does NOT hace an IP address + if entry in hwaddr_to_nic and not hwaddr_to_ip[entry]: + # If the nic is part of a bridge then don't use it + if is_bridge_member(hwaddr_to_nic[entry]): + continue + + # Entry is a MAC address for a valid interface that doesn't + # have an IP address assigned yet. + resolved.append(hwaddr_to_nic[entry]) + elif entry in extant_nics: + # If the passed entry is not a MAC address and the interface + # exists, assume it's a valid interface, and that the user put + # it there on purpose (we can trust it to be the real external + # network). + resolved.append(entry) + + # Ensure no duplicates + return list(set(resolved)) + + +class OSConfigFlagContext(OSContextGenerator): + """Provides support for user-defined config flags. + + Users can define a comma-seperated list of key=value pairs + in the charm configuration and apply them at any point in + any file by using a template flag. + + Sometimes users might want config flags inserted within a + specific section so this class allows users to specify the + template flag name, allowing for multiple template flags + (sections) within the same context. + + NOTE: the value of config-flags may be a comma-separated list of + key=value pairs and some Openstack config files support + comma-separated lists as values. + """ + + def __init__(self, charm_flag='config-flags', + template_flag='user_config_flags'): + """ + :param charm_flag: config flags in charm configuration. + :param template_flag: insert point for user-defined flags in template + file. + """ + super(OSConfigFlagContext, self).__init__() + self._charm_flag = charm_flag + self._template_flag = template_flag + + def __call__(self): + config_flags = config(self._charm_flag) + if not config_flags: + return {} + + return {self._template_flag: + config_flags_parser(config_flags)} + + +class LibvirtConfigFlagsContext(OSContextGenerator): + """ + This context provides support for extending + the libvirt section through user-defined flags. + """ + def __call__(self): + ctxt = {} + libvirt_flags = config('libvirt-flags') + if libvirt_flags: + ctxt['libvirt_flags'] = config_flags_parser( + libvirt_flags) + return ctxt + + +class SubordinateConfigContext(OSContextGenerator): + + """ + Responsible for inspecting relations to subordinates that + may be exporting required config via a json blob. + + The subordinate interface allows subordinates to export their + configuration requirements to the principle for multiple config + files and multiple services. 
Ie, a subordinate that has interfaces + to both glance and nova may export to following yaml blob as json:: + + glance: + /etc/glance/glance-api.conf: + sections: + DEFAULT: + - [key1, value1] + /etc/glance/glance-registry.conf: + MYSECTION: + - [key2, value2] + nova: + /etc/nova/nova.conf: + sections: + DEFAULT: + - [key3, value3] + + + It is then up to the principle charms to subscribe this context to + the service+config file it is interestd in. Configuration data will + be available in the template context, in glance's case, as:: + + ctxt = { + ... other context ... + 'subordinate_configuration': { + 'DEFAULT': { + 'key1': 'value1', + }, + 'MYSECTION': { + 'key2': 'value2', + }, + } + } + """ + + def __init__(self, service, config_file, interface): + """ + :param service : Service name key to query in any subordinate + data found + :param config_file : Service's config file to query sections + :param interface : Subordinate interface to inspect + """ + self.config_file = config_file + if isinstance(service, list): + self.services = service + else: + self.services = [service] + if isinstance(interface, list): + self.interfaces = interface + else: + self.interfaces = [interface] + + def __call__(self): + ctxt = {'sections': {}} + rids = [] + for interface in self.interfaces: + rids.extend(relation_ids(interface)) + for rid in rids: + for unit in related_units(rid): + sub_config = relation_get('subordinate_configuration', + rid=rid, unit=unit) + if sub_config and sub_config != '': + try: + sub_config = json.loads(sub_config) + except Exception: + log('Could not parse JSON from ' + 'subordinate_configuration setting from %s' + % rid, level=ERROR) + continue + + for service in self.services: + if service not in sub_config: + log('Found subordinate_configuration on %s but it ' + 'contained nothing for %s service' + % (rid, service), level=INFO) + continue + + sub_config = sub_config[service] + if self.config_file not in sub_config: + log('Found subordinate_configuration on %s but it ' + 'contained nothing for %s' + % (rid, self.config_file), level=INFO) + continue + + sub_config = sub_config[self.config_file] + for k, v in six.iteritems(sub_config): + if k == 'sections': + for section, config_list in six.iteritems(v): + log("adding section '%s'" % (section), + level=DEBUG) + if ctxt[k].get(section): + ctxt[k][section].extend(config_list) + else: + ctxt[k][section] = config_list + else: + ctxt[k] = v + log("%d section(s) found" % (len(ctxt['sections'])), level=DEBUG) + return ctxt + + +class LogLevelContext(OSContextGenerator): + + def __call__(self): + ctxt = {} + ctxt['debug'] = \ + False if config('debug') is None else config('debug') + ctxt['verbose'] = \ + False if config('verbose') is None else config('verbose') + + return ctxt + + +class SyslogContext(OSContextGenerator): + + def __call__(self): + ctxt = {'use_syslog': config('use-syslog')} + return ctxt + + +class BindHostContext(OSContextGenerator): + + def __call__(self): + if config('prefer-ipv6'): + return {'bind_host': '::'} + else: + return {'bind_host': '0.0.0.0'} + + +MAX_DEFAULT_WORKERS = 4 +DEFAULT_MULTIPLIER = 2 + + +def _calculate_workers(): + ''' + Determine the number of worker processes based on the CPU + count of the unit containing the application. + + Workers will be limited to MAX_DEFAULT_WORKERS in + container environments where no worker-multipler configuration + option been set. 
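+
+    For example (illustrative): on a 4-core host the default multiplier
+    of 2 yields 8 workers, while the same unconfigured charm inside a
+    container is capped at MAX_DEFAULT_WORKERS (4).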
+ + @returns int: number of worker processes to use + ''' + multiplier = config('worker-multiplier') or DEFAULT_MULTIPLIER + count = int(_num_cpus() * multiplier) + if multiplier > 0 and count == 0: + count = 1 + + if config('worker-multiplier') is None and is_container(): + # NOTE(jamespage): Limit unconfigured worker-multiplier + # to MAX_DEFAULT_WORKERS to avoid insane + # worker configuration in LXD containers + # on large servers + # Reference: https://pad.lv/1665270 + count = min(count, MAX_DEFAULT_WORKERS) + + return count + + +def _num_cpus(): + ''' + Compatibility wrapper for calculating the number of CPU's + a unit has. + + @returns: int: number of CPU cores detected + ''' + try: + return psutil.cpu_count() + except AttributeError: + return psutil.NUM_CPUS + + +class WorkerConfigContext(OSContextGenerator): + + def __call__(self): + ctxt = {"workers": _calculate_workers()} + return ctxt + + +class WSGIWorkerConfigContext(WorkerConfigContext): + + def __init__(self, name=None, script=None, admin_script=None, + public_script=None, user=None, group=None, + process_weight=1.00, + admin_process_weight=0.25, public_process_weight=0.75): + self.service_name = name + self.user = user or name + self.group = group or name + self.script = script + self.admin_script = admin_script + self.public_script = public_script + self.process_weight = process_weight + self.admin_process_weight = admin_process_weight + self.public_process_weight = public_process_weight + + def __call__(self): + total_processes = _calculate_workers() + ctxt = { + "service_name": self.service_name, + "user": self.user, + "group": self.group, + "script": self.script, + "admin_script": self.admin_script, + "public_script": self.public_script, + "processes": int(math.ceil(self.process_weight * total_processes)), + "admin_processes": int(math.ceil(self.admin_process_weight * + total_processes)), + "public_processes": int(math.ceil(self.public_process_weight * + total_processes)), + "threads": 1, + } + return ctxt + + +class ZeroMQContext(OSContextGenerator): + interfaces = ['zeromq-configuration'] + + def __call__(self): + ctxt = {} + if is_relation_made('zeromq-configuration', 'host'): + for rid in relation_ids('zeromq-configuration'): + for unit in related_units(rid): + ctxt['zmq_nonce'] = relation_get('nonce', unit, rid) + ctxt['zmq_host'] = relation_get('host', unit, rid) + ctxt['zmq_redis_address'] = relation_get( + 'zmq_redis_address', unit, rid) + + return ctxt + + +class NotificationDriverContext(OSContextGenerator): + + def __init__(self, zmq_relation='zeromq-configuration', + amqp_relation='amqp'): + """ + :param zmq_relation: Name of Zeromq relation to check + """ + self.zmq_relation = zmq_relation + self.amqp_relation = amqp_relation + + def __call__(self): + ctxt = {'notifications': 'False'} + if is_relation_made(self.amqp_relation): + ctxt['notifications'] = "True" + + return ctxt + + +class SysctlContext(OSContextGenerator): + """This context check if the 'sysctl' option exists on configuration + then creates a file with the loaded contents""" + def __call__(self): + sysctl_dict = config('sysctl') + if sysctl_dict: + sysctl_create(sysctl_dict, + '/etc/sysctl.d/50-{0}.conf'.format(charm_name())) + return {'sysctl': sysctl_dict} + + +class NeutronAPIContext(OSContextGenerator): + ''' + Inspects current neutron-plugin-api relation for neutron settings. Return + defaults if it is not present. 
+ ''' + interfaces = ['neutron-plugin-api'] + + def __call__(self): + self.neutron_defaults = { + 'l2_population': { + 'rel_key': 'l2-population', + 'default': False, + }, + 'overlay_network_type': { + 'rel_key': 'overlay-network-type', + 'default': 'gre', + }, + 'neutron_security_groups': { + 'rel_key': 'neutron-security-groups', + 'default': False, + }, + 'network_device_mtu': { + 'rel_key': 'network-device-mtu', + 'default': None, + }, + 'enable_dvr': { + 'rel_key': 'enable-dvr', + 'default': False, + }, + 'enable_l3ha': { + 'rel_key': 'enable-l3ha', + 'default': False, + }, + 'dns_domain': { + 'rel_key': 'dns-domain', + 'default': None, + }, + 'polling_interval': { + 'rel_key': 'polling-interval', + 'default': 2, + }, + 'rpc_response_timeout': { + 'rel_key': 'rpc-response-timeout', + 'default': 60, + }, + 'report_interval': { + 'rel_key': 'report-interval', + 'default': 30, + }, + 'enable_qos': { + 'rel_key': 'enable-qos', + 'default': False, + }, + 'enable_nsg_logging': { + 'rel_key': 'enable-nsg-logging', + 'default': False, + }, + 'enable_nfg_logging': { + 'rel_key': 'enable-nfg-logging', + 'default': False, + }, + 'enable_port_forwarding': { + 'rel_key': 'enable-port-forwarding', + 'default': False, + }, + 'global_physnet_mtu': { + 'rel_key': 'global-physnet-mtu', + 'default': 1500, + }, + 'physical_network_mtus': { + 'rel_key': 'physical-network-mtus', + 'default': None, + }, + } + ctxt = self.get_neutron_options({}) + for rid in relation_ids('neutron-plugin-api'): + for unit in related_units(rid): + rdata = relation_get(rid=rid, unit=unit) + # The l2-population key is used by the context as a way of + # checking if the api service on the other end is sending data + # in a recent format. + if 'l2-population' in rdata: + ctxt.update(self.get_neutron_options(rdata)) + + extension_drivers = [] + + if ctxt['enable_qos']: + extension_drivers.append('qos') + + if ctxt['enable_nsg_logging']: + extension_drivers.append('log') + + ctxt['extension_drivers'] = ','.join(extension_drivers) + + l3_extension_plugins = [] + + if ctxt['enable_port_forwarding']: + l3_extension_plugins.append('port_forwarding') + + ctxt['l3_extension_plugins'] = l3_extension_plugins + + return ctxt + + def get_neutron_options(self, rdata): + settings = {} + for nkey in self.neutron_defaults.keys(): + defv = self.neutron_defaults[nkey]['default'] + rkey = self.neutron_defaults[nkey]['rel_key'] + if rkey in rdata.keys(): + if type(defv) is bool: + settings[nkey] = bool_from_string(rdata[rkey]) + else: + settings[nkey] = rdata[rkey] + else: + settings[nkey] = defv + return settings + + +class ExternalPortContext(NeutronPortContext): + + def __call__(self): + ctxt = {} + ports = config('ext-port') + if ports: + ports = [p.strip() for p in ports.split()] + ports = self.resolve_ports(ports) + if ports: + ctxt = {"ext_port": ports[0]} + napi_settings = NeutronAPIContext()() + mtu = napi_settings.get('network_device_mtu') + if mtu: + ctxt['ext_port_mtu'] = mtu + + return ctxt + + +class DataPortContext(NeutronPortContext): + + def __call__(self): + ports = config('data-port') + if ports: + # Map of {bridge:port/mac} + portmap = parse_data_port_mappings(ports) + ports = portmap.keys() + # Resolve provided ports or mac addresses and filter out those + # already attached to a bridge. + resolved = self.resolve_ports(ports) + # Rebuild port index using resolved and filtered ports. 
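+            # e.g. data-port='br-ex:eth1 br-data:aa:bb:cc:dd:ee:ff'
+            # (hypothetical) resolves to {'eth1': 'br-ex', 'ens5': 'br-data'}
+            # once the MAC entry is mapped back to its NIC name (ens5 here).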
+ normalized = {get_nic_hwaddr(port): port for port in resolved + if port not in ports} + normalized.update({port: port for port in resolved + if port in ports}) + if resolved: + return {normalized[port]: bridge for port, bridge in + six.iteritems(portmap) if port in normalized.keys()} + + return None + + +class PhyNICMTUContext(DataPortContext): + + def __call__(self): + ctxt = {} + mappings = super(PhyNICMTUContext, self).__call__() + if mappings and mappings.keys(): + ports = sorted(mappings.keys()) + napi_settings = NeutronAPIContext()() + mtu = napi_settings.get('network_device_mtu') + all_ports = set() + # If any of ports is a vlan device, its underlying device must have + # mtu applied first. + for port in ports: + for lport in glob.glob("/sys/class/net/%s/lower_*" % port): + lport = os.path.basename(lport) + all_ports.add(lport.split('_')[1]) + + all_ports = list(all_ports) + all_ports.extend(ports) + if mtu: + ctxt["devs"] = '\\n'.join(all_ports) + ctxt['mtu'] = mtu + + return ctxt + + +class NetworkServiceContext(OSContextGenerator): + + def __init__(self, rel_name='quantum-network-service'): + self.rel_name = rel_name + self.interfaces = [rel_name] + + def __call__(self): + for rid in relation_ids(self.rel_name): + for unit in related_units(rid): + rdata = relation_get(rid=rid, unit=unit) + ctxt = { + 'keystone_host': rdata.get('keystone_host'), + 'service_port': rdata.get('service_port'), + 'auth_port': rdata.get('auth_port'), + 'service_tenant': rdata.get('service_tenant'), + 'service_username': rdata.get('service_username'), + 'service_password': rdata.get('service_password'), + 'quantum_host': rdata.get('quantum_host'), + 'quantum_port': rdata.get('quantum_port'), + 'quantum_url': rdata.get('quantum_url'), + 'region': rdata.get('region'), + 'service_protocol': + rdata.get('service_protocol') or 'http', + 'auth_protocol': + rdata.get('auth_protocol') or 'http', + 'api_version': + rdata.get('api_version') or '2.0', + } + if self.context_complete(ctxt): + return ctxt + return {} + + +class InternalEndpointContext(OSContextGenerator): + """Internal endpoint context. + + This context provides the endpoint type used for communication between + services e.g. between Nova and Cinder internally. Openstack uses Public + endpoints by default so this allows admins to optionally use internal + endpoints. + """ + def __call__(self): + return {'use_internal_endpoints': config('use-internal-endpoints')} + + +class VolumeAPIContext(InternalEndpointContext): + """Volume API context. + + This context provides information regarding the volume endpoint to use + when communicating between services. It determines which version of the + API is appropriate for use. + + This value will be determined in the resulting context dictionary + returned from calling the VolumeAPIContext object. Information provided + by this context is as follows: + + volume_api_version: the volume api version to use, currently + 'v2' or 'v3' + volume_catalog_info: the information to use for a cinder client + configuration that consumes API endpoints from the keystone + catalog. This is defined as the type:name:endpoint_type string. + """ + # FIXME(wolsen) This implementation is based on the provider being able + # to specify the package version to check but does not guarantee that the + # volume service api version selected is available. In practice, it is + # quite likely the volume service *is* providing the v3 volume service. + # This should be resolved when the service-discovery spec is implemented. 
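# [Editor's note] Worked example of the catalog_info string assembled by
# _determine_ctxt() below, assuming a Pike-or-later release with
# use-internal-endpoints set to True:
version = '3'                                  # pike or later -> v3
service_type = 'volumev{}'.format(version)     # 'volumev3'
service_name = 'cinderv{}'.format(version)     # 'cinderv3'
endpoint_type = 'internalURL'                  # use-internal-endpoints=True
catalog_info = '{}:{}:{}'.format(service_type, service_name, endpoint_type)
assert catalog_info == 'volumev3:cinderv3:internalURL'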
+ def __init__(self, pkg): + """ + Creates a new VolumeAPIContext for use in determining which version + of the Volume API should be used for communication. A package codename + should be supplied for determining the currently installed OpenStack + version. + + :param pkg: the package codename to use in order to determine the + component version (e.g. nova-common). See + charmhelpers.contrib.openstack.utils.PACKAGE_CODENAMES for more. + """ + super(VolumeAPIContext, self).__init__() + self._ctxt = None + if not pkg: + raise ValueError('package name must be provided in order to ' + 'determine current OpenStack version.') + self.pkg = pkg + + @property + def ctxt(self): + if self._ctxt is not None: + return self._ctxt + self._ctxt = self._determine_ctxt() + return self._ctxt + + def _determine_ctxt(self): + """Determines the Volume API endpoint information. + + Determines the appropriate version of the API that should be used + as well as the catalog_info string that would be supplied. Returns + a dict containing the volume_api_version and the volume_catalog_info. + """ + rel = os_release(self.pkg) + version = '2' + if CompareOpenStackReleases(rel) >= 'pike': + version = '3' + + service_type = 'volumev{version}'.format(version=version) + service_name = 'cinderv{version}'.format(version=version) + endpoint_type = 'publicURL' + if config('use-internal-endpoints'): + endpoint_type = 'internalURL' + catalog_info = '{type}:{name}:{endpoint}'.format( + type=service_type, name=service_name, endpoint=endpoint_type) + + return { + 'volume_api_version': version, + 'volume_catalog_info': catalog_info, + } + + def __call__(self): + return self.ctxt + + +class AppArmorContext(OSContextGenerator): + """Base class for apparmor contexts.""" + + def __init__(self, profile_name=None): + self._ctxt = None + self.aa_profile = profile_name + self.aa_utils_packages = ['apparmor-utils'] + + @property + def ctxt(self): + if self._ctxt is not None: + return self._ctxt + self._ctxt = self._determine_ctxt() + return self._ctxt + + def _determine_ctxt(self): + """ + Validate aa-profile-mode settings is disable, enforce, or complain. + + :return ctxt: Dictionary of the apparmor profile or None + """ + if config('aa-profile-mode') in ['disable', 'enforce', 'complain']: + ctxt = {'aa_profile_mode': config('aa-profile-mode'), + 'ubuntu_release': lsb_release()['DISTRIB_RELEASE']} + if self.aa_profile: + ctxt['aa_profile'] = self.aa_profile + else: + ctxt = None + return ctxt + + def __call__(self): + return self.ctxt + + def install_aa_utils(self): + """ + Install packages required for apparmor configuration. + """ + log("Installing apparmor utils.") + ensure_packages(self.aa_utils_packages) + + def manually_disable_aa_profile(self): + """ + Manually disable an apparmor profile. + + If aa-profile-mode is set to disabled (default) this is required as the + template has been written but apparmor is yet unaware of the profile + and aa-disable aa-profile fails. Without this the profile would kick + into enforce mode on the next service restart. + + """ + profile_path = '/etc/apparmor.d' + disable_path = '/etc/apparmor.d/disable' + if not os.path.lexists(os.path.join(disable_path, self.aa_profile)): + os.symlink(os.path.join(profile_path, self.aa_profile), + os.path.join(disable_path, self.aa_profile)) + + def setup_aa_profile(self): + """ + Setup an apparmor profile. + The ctxt dictionary will contain the apparmor profile mode and + the apparmor profile name. 
+ Makes calls out to aa-disable, aa-complain, or aa-enforce to setup + the apparmor profile. + """ + self() + if not self.ctxt: + log("Not enabling apparmor Profile") + return + self.install_aa_utils() + cmd = ['aa-{}'.format(self.ctxt['aa_profile_mode'])] + cmd.append(self.ctxt['aa_profile']) + log("Setting up the apparmor profile for {} in {} mode." + "".format(self.ctxt['aa_profile'], self.ctxt['aa_profile_mode'])) + try: + check_call(cmd) + except CalledProcessError as e: + # If aa-profile-mode is set to disabled (default) manual + # disabling is required as the template has been written but + # apparmor is yet unaware of the profile and aa-disable aa-profile + # fails. If aa-disable learns to read profile files first this can + # be removed. + if self.ctxt['aa_profile_mode'] == 'disable': + log("Manually disabling the apparmor profile for {}." + "".format(self.ctxt['aa_profile'])) + self.manually_disable_aa_profile() + return + status_set('blocked', "Apparmor profile {} failed to be set to {}." + "".format(self.ctxt['aa_profile'], + self.ctxt['aa_profile_mode'])) + raise e + + +class MemcacheContext(OSContextGenerator): + """Memcache context + + This context provides options for configuring a local memcache client and + server for both IPv4 and IPv6 + """ + + def __init__(self, package=None): + """ + @param package: Package to examine to extrapolate OpenStack release. + Used when charms have no openstack-origin config + option (ie subordinates) + """ + self.package = package + + def __call__(self): + ctxt = {} + ctxt['use_memcache'] = enable_memcache(package=self.package) + if ctxt['use_memcache']: + # Trusty version of memcached does not support ::1 as a listen + # address so use host file entry instead + release = lsb_release()['DISTRIB_CODENAME'].lower() + if is_ipv6_disabled(): + if CompareHostReleases(release) > 'trusty': + ctxt['memcache_server'] = '127.0.0.1' + else: + ctxt['memcache_server'] = 'localhost' + ctxt['memcache_server_formatted'] = '127.0.0.1' + ctxt['memcache_port'] = '11211' + ctxt['memcache_url'] = '{}:{}'.format( + ctxt['memcache_server_formatted'], + ctxt['memcache_port']) + else: + if CompareHostReleases(release) > 'trusty': + ctxt['memcache_server'] = '::1' + else: + ctxt['memcache_server'] = 'ip6-localhost' + ctxt['memcache_server_formatted'] = '[::1]' + ctxt['memcache_port'] = '11211' + ctxt['memcache_url'] = 'inet6:{}:{}'.format( + ctxt['memcache_server_formatted'], + ctxt['memcache_port']) + return ctxt + + +class EnsureDirContext(OSContextGenerator): + ''' + Serves as a generic context to create a directory as a side-effect. + + Useful for software that supports drop-in files (.d) in conjunction + with config option-based templates. Examples include: + * OpenStack oslo.policy drop-in files; + * systemd drop-in config files; + * other software that supports overriding defaults with .d files + + Another use-case is when a subordinate generates a configuration for + primary to render in a separate directory. + + Some software requires a user to create a target directory to be + scanned for drop-in files with a specific format. This is why this + context is needed to do that before rendering a template. + ''' + + def __init__(self, dirname, **kwargs): + '''Used merely to ensure that a given directory exists.''' + self.dirname = dirname + self.kwargs = kwargs + + def __call__(self): + mkdir(self.dirname, **self.kwargs) + return {} + + +class VersionsContext(OSContextGenerator): + """Context to return the openstack and operating system versions. 
+ + """ + def __init__(self, pkg='python-keystone'): + """Initialise context. + + :param pkg: Package to extrapolate openstack version from. + :type pkg: str + """ + self.pkg = pkg + + def __call__(self): + ostack = os_release(self.pkg) + osystem = lsb_release()['DISTRIB_CODENAME'].lower() + return { + 'openstack_release': ostack, + 'operating_system_release': osystem} + + +class LogrotateContext(OSContextGenerator): + """Common context generator for logrotate.""" + + def __init__(self, location, interval, count): + """ + :param location: Absolute path for the logrotate config file + :type location: str + :param interval: The interval for the rotations. Valid values are + 'daily', 'weekly', 'monthly', 'yearly' + :type interval: str + :param count: The logrotate count option configures the 'count' times + the log files are being rotated before being + :type count: int + """ + self.location = location + self.interval = interval + self.count = 'rotate {}'.format(count) + + def __call__(self): + ctxt = { + 'logrotate_logs_location': self.location, + 'logrotate_interval': self.interval, + 'logrotate_count': self.count, + } + return ctxt + + +class HostInfoContext(OSContextGenerator): + """Context to provide host information.""" + + def __init__(self, use_fqdn_hint_cb=None): + """Initialize HostInfoContext + + :param use_fqdn_hint_cb: Callback whose return value used to populate + `use_fqdn_hint` + :type use_fqdn_hint_cb: Callable[[], bool] + """ + # Store callback used to get hint for whether FQDN should be used + + # Depending on the workload a charm manages, the use of FQDN vs. + # shortname may be a deploy-time decision, i.e. behaviour can not + # change on charm upgrade or post-deployment configuration change. + + # The hint is passed on as a flag in the context to allow the decision + # to be made in the Jinja2 configuration template. + self.use_fqdn_hint_cb = use_fqdn_hint_cb + + def _get_canonical_name(self, name=None): + """Get the official FQDN of the host + + The implementation of ``socket.getfqdn()`` in the standard Python + library does not exhaust all methods of getting the official name + of a host ref Python issue https://bugs.python.org/issue5004 + + This function mimics the behaviour of a call to ``hostname -f`` to + get the official FQDN but returns an empty string if it is + unsuccessful. + + :param name: Shortname to get FQDN on + :type name: Optional[str] + :returns: The official FQDN for host or empty string ('') + :rtype: str + """ + name = name or socket.gethostname() + fqdn = '' + + if six.PY2: + exc = socket.error + else: + exc = OSError + + try: + addrs = socket.getaddrinfo( + name, None, 0, socket.SOCK_DGRAM, 0, socket.AI_CANONNAME) + except exc: + pass + else: + for addr in addrs: + if addr[3]: + if '.' in addr[3]: + fqdn = addr[3] + break + return fqdn + + def __call__(self): + name = socket.gethostname() + ctxt = { + 'host_fqdn': self._get_canonical_name(name) or name, + 'host': name, + 'use_fqdn_hint': ( + self.use_fqdn_hint_cb() if self.use_fqdn_hint_cb else False) + } + return ctxt + + +def validate_ovs_use_veth(*args, **kwargs): + """Validate OVS use veth setting for dhcp agents + + The ovs_use_veth setting is considered immutable as it will break existing + deployments. Historically, we set ovs_use_veth=True in dhcp_agent.ini. It + turns out this is no longer necessary. Ideally, all new deployments would + have this set to False. + + This function validates that the config value does not conflict with + previously deployed settings in dhcp_agent.ini. 
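# [Editor's note] Net precedence implemented by get_ovs_use_veth() in
# DHCPAgentContext below (an existing dhcp_agent.ini value always wins over
# juju config, and an unset value falls back to the new default of False):
#
#   existing ini value   juju ovs-use-veth    effective ovs_use_veth
#   True / False         anything             existing ini value
#   None (absent)        'true'/'yes'/...     True
#   None (absent)        'false'/'no'/...     False
#   None (absent)        unset or ''          False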
+ + See LP Bug#1831935 for details. + + :returns: Status state and message + :rtype: Union[(None, None), (string, string)] + """ + existing_ovs_use_veth = ( + DHCPAgentContext.get_existing_ovs_use_veth()) + config_ovs_use_veth = DHCPAgentContext.parse_ovs_use_veth() + + # Check settings are set and not None + if existing_ovs_use_veth is not None and config_ovs_use_veth is not None: + # Check for mismatch between existing config ini and juju config + if existing_ovs_use_veth != config_ovs_use_veth: + # Stop the line to avoid breakage + msg = ( + "The existing setting for dhcp_agent.ini ovs_use_veth, {}, " + "does not match the juju config setting, {}. This may lead to " + "VMs being unable to receive a DHCP IP. Either change the " + "juju config setting or dhcp agents may need to be recreated." + .format(existing_ovs_use_veth, config_ovs_use_veth)) + log(msg, ERROR) + return ( + "blocked", + "Mismatched existing and configured ovs-use-veth. See log.") + + # Everything is OK + return None, None + + +class DHCPAgentContext(OSContextGenerator): + + def __call__(self): + """Return the DHCPAGentContext. + + Return all DHCP Agent INI related configuration. + ovs unit is attached to (as a subordinate) and the 'dns_domain' from + the neutron-plugin-api relations (if one is set). + + :returns: Dictionary context + :rtype: Dict + """ + + ctxt = {} + dnsmasq_flags = config('dnsmasq-flags') + if dnsmasq_flags: + ctxt['dnsmasq_flags'] = config_flags_parser(dnsmasq_flags) + ctxt['dns_servers'] = config('dns-servers') + + neutron_api_settings = NeutronAPIContext()() + + ctxt['debug'] = config('debug') + ctxt['instance_mtu'] = config('instance-mtu') + ctxt['ovs_use_veth'] = self.get_ovs_use_veth() + + ctxt['enable_metadata_network'] = config('enable-metadata-network') + ctxt['enable_isolated_metadata'] = config('enable-isolated-metadata') + + if neutron_api_settings.get('dns_domain'): + ctxt['dns_domain'] = neutron_api_settings.get('dns_domain') + + # Override user supplied config for these plugins as these settings are + # mandatory + if config('plugin') in ['nvp', 'nsx', 'n1kv']: + ctxt['enable_metadata_network'] = True + ctxt['enable_isolated_metadata'] = True + + return ctxt + + @staticmethod + def get_existing_ovs_use_veth(): + """Return existing ovs_use_veth setting from dhcp_agent.ini. + + :returns: Boolean value of existing ovs_use_veth setting or None + :rtype: Optional[Bool] + """ + DHCP_AGENT_INI = "/etc/neutron/dhcp_agent.ini" + existing_ovs_use_veth = None + # If there is a dhcp_agent.ini file read the current setting + if os.path.isfile(DHCP_AGENT_INI): + # config_ini does the right thing and returns None if the setting is + # commented. + existing_ovs_use_veth = ( + config_ini(DHCP_AGENT_INI)["DEFAULT"].get("ovs_use_veth")) + # Convert to Bool if necessary + if isinstance(existing_ovs_use_veth, six.string_types): + return bool_from_string(existing_ovs_use_veth) + return existing_ovs_use_veth + + @staticmethod + def parse_ovs_use_veth(): + """Parse the ovs-use-veth config setting. + + Parse the string config setting for ovs-use-veth and return a boolean + or None. + + bool_from_string will raise a ValueError if the string is not falsy or + truthy. + + :raises: ValueError for invalid input + :returns: Boolean value of ovs-use-veth or None + :rtype: Optional[Bool] + """ + _config = config("ovs-use-veth") + # An unset parameter returns None. Just in case we will also check for + # an empty string: "". Ironically, (the problem we are trying to avoid) + # "False" returns True and "" returns False. 
+        if _config is None or not _config:
+            # Return None
+            return
+        # bool_from_string handles many variations of true and false strings
+        # as well as upper and lowercases including:
+        # ['y', 'yes', 'true', 't', 'on', 'n', 'no', 'false', 'f', 'off']
+        return bool_from_string(_config)
+
+    def get_ovs_use_veth(self):
+        """Return correct ovs_use_veth setting for use in dhcp_agent.ini.
+
+        Get the right value from config or existing dhcp_agent.ini file.
+        Existing has precedence. Attempt to default to "False" without
+        disrupting existing deployments. Handle existing deployments and
+        upgrades safely. See LP Bug#1831935.
+
+        :returns: Value to use for ovs_use_veth setting
+        :rtype: Bool
+        """
+        _existing = self.get_existing_ovs_use_veth()
+        if _existing is not None:
+            return _existing
+
+        _config = self.parse_ovs_use_veth()
+        if _config is None:
+            # New better default
+            return False
+        else:
+            return _config
+
+
+EntityMac = collections.namedtuple('EntityMac', ['entity', 'mac'])
+
+
+def resolve_pci_from_mapping_config(config_key):
+    """Resolve local PCI devices from MAC addresses in mapping config.
+
+    Note that this function keeps record of mac->PCI address lookups
+    in the local unit db as the devices will disappear from the system
+    once bound.
+
+    :param config_key: Configuration option key to parse data from
+    :type config_key: str
+    :returns: PCI device address to Tuple(entity, mac) map
+    :rtype: collections.OrderedDict[str,Tuple[str,str]]
+    """
+    devices = pci.PCINetDevices()
+    resolved_devices = collections.OrderedDict()
+    db = kv()
+    # Note that ``parse_data_port_mappings`` returns Dict regardless of input
+    for mac, entity in parse_data_port_mappings(config(config_key)).items():
+        pcidev = devices.get_device_from_mac(mac)
+        if pcidev:
+            # NOTE: store mac->pci allocation as post binding
+            #       it disappears from PCIDevices.
+            db.set(mac, pcidev.pci_address)
+            db.flush()
+
+        pci_address = db.get(mac)
+        if pci_address:
+            resolved_devices[pci_address] = EntityMac(entity, mac)
+
+    return resolved_devices
+
+
+class DPDKDeviceContext(OSContextGenerator):
+
+    def __init__(self, driver_key=None, bridges_key=None, bonds_key=None):
+        """Initialize DPDKDeviceContext.
+
+        :param driver_key: Key to use when retrieving driver config.
+        :type driver_key: str
+        :param bridges_key: Key to use when retrieving bridge config.
+        :type bridges_key: str
+        :param bonds_key: Key to use when retrieving bonds config.
+        :type bonds_key: str
+        """
+        self.driver_key = driver_key or 'dpdk-driver'
+        self.bridges_key = bridges_key or 'data-port'
+        self.bonds_key = bonds_key or 'dpdk-bond-mappings'
+
+    def __call__(self):
+        """Populate context.
+
+        :returns: context
+        :rtype: Dict[str,Union[str,collections.OrderedDict[str,str]]]
+        """
+        driver = config(self.driver_key)
+        if driver is None:
+            return {}
+        # Resolve PCI devices for both directly used devices (_bridges)
+        # and devices for use in dpdk bonds (_bonds)
+        pci_devices = resolve_pci_from_mapping_config(self.bridges_key)
+        pci_devices.update(resolve_pci_from_mapping_config(self.bonds_key))
+        return {'devices': pci_devices,
+                'driver': driver}
+
+
+class OVSDPDKDeviceContext(OSContextGenerator):
+
+    def __init__(self, bridges_key=None, bonds_key=None):
+        """Initialize OVSDPDKDeviceContext.
+
+        :param bridges_key: Key to use when retrieving bridge config.
+        :type bridges_key: str
+        :param bonds_key: Key to use when retrieving bonds config.
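# [Editor's note] Shape of the OrderedDict returned by
# resolve_pci_from_mapping_config() above, shown with hypothetical PCI
# addresses, MACs and entity names:
import collections
EntityMacDemo = collections.namedtuple('EntityMac', ['entity', 'mac'])
resolved = collections.OrderedDict([
    ('0000:03:00.0', EntityMacDemo('br-ex', 'aa:bb:cc:dd:ee:f0')),
    ('0000:03:00.1', EntityMacDemo('dpdk-bond0', 'aa:bb:cc:dd:ee:f1')),
])
assert resolved['0000:03:00.0'].entity == 'br-ex'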
+ :type bonds_key: str + """ + self.bridges_key = bridges_key or 'data-port' + self.bonds_key = bonds_key or 'dpdk-bond-mappings' + + @staticmethod + def _parse_cpu_list(cpulist): + """Parses a linux cpulist for a numa node + + :returns: list of cores + :rtype: List[int] + """ + cores = [] + ranges = cpulist.split(',') + for cpu_range in ranges: + if "-" in cpu_range: + cpu_min_max = cpu_range.split('-') + cores += range(int(cpu_min_max[0]), + int(cpu_min_max[1]) + 1) + else: + cores.append(int(cpu_range)) + return cores + + def _numa_node_cores(self): + """Get map of numa node -> cpu core + + :returns: map of numa node -> cpu core + :rtype: Dict[str,List[int]] + """ + nodes = {} + node_regex = '/sys/devices/system/node/node*' + for node in glob.glob(node_regex): + index = node.lstrip('/sys/devices/system/node/node') + with open(os.path.join(node, 'cpulist')) as cpulist: + nodes[index] = self._parse_cpu_list(cpulist.read().strip()) + return nodes + + def cpu_mask(self): + """Get hex formatted CPU mask + + The mask is based on using the first config:dpdk-socket-cores + cores of each NUMA node in the unit. + :returns: hex formatted CPU mask + :rtype: str + """ + num_cores = config('dpdk-socket-cores') + mask = 0 + for cores in self._numa_node_cores().values(): + for core in cores[:num_cores]: + mask = mask | 1 << core + return format(mask, '#04x') + + def socket_memory(self): + """Formatted list of socket memory configuration per NUMA node + + :returns: socket memory configuration per NUMA node + :rtype: str + """ + sm_size = config('dpdk-socket-memory') + node_regex = '/sys/devices/system/node/node*' + mem_list = [str(sm_size) for _ in glob.glob(node_regex)] + if mem_list: + return ','.join(mem_list) + else: + return str(sm_size) + + def devices(self): + """List of PCI devices for use by DPDK + + :returns: List of PCI devices for use by DPDK + :rtype: collections.OrderedDict[str,str] + """ + pci_devices = resolve_pci_from_mapping_config(self.bridges_key) + pci_devices.update(resolve_pci_from_mapping_config(self.bonds_key)) + return pci_devices + + def _formatted_whitelist(self, flag): + """Flag formatted list of devices to whitelist + + :param flag: flag format to use + :type flag: str + :rtype: str + """ + whitelist = [] + for device in self.devices(): + whitelist.append(flag.format(device=device)) + return ' '.join(whitelist) + + def device_whitelist(self): + """Formatted list of devices to whitelist for dpdk + + using the old style '-w' flag + + :returns: devices to whitelist prefixed by '-w ' + :rtype: str + """ + return self._formatted_whitelist('-w {device}') + + def pci_whitelist(self): + """Formatted list of devices to whitelist for dpdk + + using the new style '--pci-whitelist' flag + + :returns: devices to whitelist prefixed by '--pci-whitelist ' + :rtype: str + """ + return self._formatted_whitelist('--pci-whitelist {device}') + + def __call__(self): + """Populate context. + + :returns: context + :rtype: Dict[str,Union[bool,str]] + """ + ctxt = {} + whitelist = self.device_whitelist() + if whitelist: + ctxt['dpdk_enabled'] = config('enable-dpdk') + ctxt['device_whitelist'] = self.device_whitelist() + ctxt['socket_memory'] = self.socket_memory() + ctxt['cpu_mask'] = self.cpu_mask() + return ctxt + + +class BridgePortInterfaceMap(object): + """Build a map of bridge ports and interaces from charm configuration. + + NOTE: the handling of this detail in the charm is pre-deprecated. 
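# [Editor's note] Worked example of the _parse_cpu_list()/cpu_mask() logic
# above, for a hypothetical unit with one NUMA node, cpulist '0-3,8' and
# dpdk-socket-cores=2: only cores [0, 1] are taken and OR'ed into the mask.
cores = []
for cpu_range in '0-3,8'.split(','):
    if '-' in cpu_range:
        lo, hi = cpu_range.split('-')
        cores += range(int(lo), int(hi) + 1)
    else:
        cores.append(int(cpu_range))
assert cores == [0, 1, 2, 3, 8]

mask = 0
for core in cores[:2]:          # dpdk-socket-cores=2
    mask |= 1 << core
assert format(mask, '#04x') == '0x03'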
+ + The long term goal is for network connectivity detail to be modelled in + the server provisioning layer (such as MAAS) which in turn will provide + a Netplan YAML description that will be used to drive Open vSwitch. + + Until we get to that reality the charm will need to configure this + detail based on application level configuration options. + + There is a established way of mapping interfaces to ports and bridges + in the ``neutron-openvswitch`` and ``neutron-gateway`` charms and we + will carry that forward. + + The relationship between bridge, port and interface(s). + +--------+ + | bridge | + +--------+ + | + +----------------+ + | port aka. bond | + +----------------+ + | | + +-+ +-+ + |i| |i| + |n| |n| + |t| |t| + |0| |N| + +-+ +-+ + """ + class interface_type(enum.Enum): + """Supported interface types. + + Supported interface types can be found in the ``iface_types`` column + in the ``Open_vSwitch`` table on a running system. + """ + dpdk = 'dpdk' + internal = 'internal' + system = 'system' + + def __str__(self): + """Return string representation of value. + + :returns: string representation of value. + :rtype: str + """ + return self.value + + def __init__(self, bridges_key=None, bonds_key=None, enable_dpdk_key=None, + global_mtu=None): + """Initialize map. + + :param bridges_key: Name of bridge:interface/port map config key + (default: 'data-port') + :type bridges_key: Optional[str] + :param bonds_key: Name of port-name:interface map config key + (default: 'dpdk-bond-mappings') + :type bonds_key: Optional[str] + :param enable_dpdk_key: Name of DPDK toggle config key + (default: 'enable-dpdk') + :type enable_dpdk_key: Optional[str] + :param global_mtu: Set a MTU on all interfaces at map initialization. + + The default is to have Open vSwitch get this from the underlying + interface as set up by bare metal provisioning. + + Note that you can augment the MTU on an individual interface basis + like this: + + ifdatamap = bpi.get_ifdatamap(bridge, port) + ifdatamap = { + port: { + **ifdata, + **{'mtu-request': my_individual_mtu_map[port]}, + } + for port, ifdata in ifdatamap.items() + } + :type global_mtu: Optional[int] + """ + bridges_key = bridges_key or 'data-port' + bonds_key = bonds_key or 'dpdk-bond-mappings' + enable_dpdk_key = enable_dpdk_key or 'enable-dpdk' + self._map = collections.defaultdict( + lambda: collections.defaultdict(dict)) + self._ifname_mac_map = collections.defaultdict(list) + self._mac_ifname_map = {} + self._mac_pci_address_map = {} + + # First we iterate over the list of physical interfaces visible to the + # system and update interface name to mac and mac to interface name map + for ifname in list_nics(): + if not is_phy_iface(ifname): + continue + mac = get_nic_hwaddr(ifname) + self._ifname_mac_map[ifname] = [mac] + self._mac_ifname_map[mac] = ifname + + # check if interface is part of a linux bond + _bond_name = get_bond_master(ifname) + if _bond_name and _bond_name != ifname: + log('Add linux bond "{}" to map for physical interface "{}" ' + 'with mac "{}".'.format(_bond_name, ifname, mac), + level=DEBUG) + # for bonds we want to be able to get a list of the mac + # addresses for the physical interfaces the bond is made up of. 
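# [Editor's note] The synthetic 'dpdk-*' interface names used a few lines
# below are derived from the PCI address; a quick sketch with a hypothetical
# address:
import hashlib
pci_address = '0000:03:00.0'
ifname = 'dpdk-{}'.format(
    hashlib.sha1(pci_address.encode('UTF-8')).hexdigest()[:7])
assert ifname.startswith('dpdk-') and len(ifname) == 12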
+ if self._ifname_mac_map.get(_bond_name): + self._ifname_mac_map[_bond_name].append(mac) + else: + self._ifname_mac_map[_bond_name] = [mac] + + # In light of the pre-deprecation notice in the docstring of this + # class we will expose the ability to configure OVS bonds as a + # DPDK-only feature, but generally use the data structures internally. + if config(enable_dpdk_key): + # resolve PCI address of interfaces listed in the bridges and bonds + # charm configuration options. Note that for already bound + # interfaces the helper will retrieve MAC address from the unit + # KV store as the information is no longer available in sysfs. + _pci_bridge_mac = resolve_pci_from_mapping_config( + bridges_key) + _pci_bond_mac = resolve_pci_from_mapping_config( + bonds_key) + + for pci_address, bridge_mac in _pci_bridge_mac.items(): + if bridge_mac.mac in self._mac_ifname_map: + # if we already have the interface name in our map it is + # visible to the system and therefore not bound to DPDK + continue + ifname = 'dpdk-{}'.format( + hashlib.sha1( + pci_address.encode('UTF-8')).hexdigest()[:7]) + self._ifname_mac_map[ifname] = [bridge_mac.mac] + self._mac_ifname_map[bridge_mac.mac] = ifname + self._mac_pci_address_map[bridge_mac.mac] = pci_address + + for pci_address, bond_mac in _pci_bond_mac.items(): + # for bonds we want to be able to get a list of macs from + # the bond name and also get at the interface name made up + # of the hash of the PCI address + ifname = 'dpdk-{}'.format( + hashlib.sha1( + pci_address.encode('UTF-8')).hexdigest()[:7]) + self._ifname_mac_map[bond_mac.entity].append(bond_mac.mac) + self._mac_ifname_map[bond_mac.mac] = ifname + self._mac_pci_address_map[bond_mac.mac] = pci_address + + config_bridges = config(bridges_key) or '' + for bridge, ifname_or_mac in ( + pair.split(':', 1) + for pair in config_bridges.split()): + if ':' in ifname_or_mac: + try: + ifname = self.ifname_from_mac(ifname_or_mac) + except KeyError: + # The interface is destined for a different unit in the + # deployment. + continue + macs = [ifname_or_mac] + else: + ifname = ifname_or_mac + macs = self.macs_from_ifname(ifname_or_mac) + + portname = ifname + for mac in macs: + try: + pci_address = self.pci_address_from_mac(mac) + iftype = self.interface_type.dpdk + ifname = self.ifname_from_mac(mac) + except KeyError: + pci_address = None + iftype = self.interface_type.system + + self.add_interface( + bridge, portname, ifname, iftype, pci_address, global_mtu) + + if not macs: + # We have not mapped the interface and it is probably some sort + # of virtual interface. Our user have put it in the config with + # a purpose so let's carry out their wish. LP: #1884743 + log('Add unmapped interface from config: name "{}" bridge "{}"' + .format(ifname, bridge), + level=DEBUG) + self.add_interface( + bridge, ifname, ifname, self.interface_type.system, None, + global_mtu) + + def __getitem__(self, key): + """Provide a Dict-like interface, get value of item. + + :param key: Key to look up value from. + :type key: any + :returns: Value + :rtype: any + """ + return self._map.__getitem__(key) + + def __iter__(self): + """Provide a Dict-like interface, iterate over keys. + + :returns: Iterator + :rtype: Iterator[any] + """ + return self._map.__iter__() + + def __len__(self): + """Provide a Dict-like interface, measure the length of internal map. + + :returns: Length + :rtype: int + """ + return len(self._map) + + def items(self): + """Provide a Dict-like interface, iterate over items. 
+ + :returns: Key Value pairs + :rtype: Iterator[any, any] + """ + return self._map.items() + + def keys(self): + """Provide a Dict-like interface, iterate over keys. + + :returns: Iterator + :rtype: Iterator[any] + """ + return self._map.keys() + + def ifname_from_mac(self, mac): + """ + :returns: Name of interface + :rtype: str + :raises: KeyError + """ + return (get_bond_master(self._mac_ifname_map[mac]) or + self._mac_ifname_map[mac]) + + def macs_from_ifname(self, ifname): + """ + :returns: List of hardware address (MAC) of interface + :rtype: List[str] + :raises: KeyError + """ + return self._ifname_mac_map[ifname] + + def pci_address_from_mac(self, mac): + """ + :param mac: Hardware address (MAC) of interface + :type mac: str + :returns: PCI address of device associated with mac + :rtype: str + :raises: KeyError + """ + return self._mac_pci_address_map[mac] + + def add_interface(self, bridge, port, ifname, iftype, + pci_address, mtu_request): + """Add an interface to the map. + + :param bridge: Name of bridge on which the bond will be added + :type bridge: str + :param port: Name of port which will represent the bond on bridge + :type port: str + :param ifname: Name of interface that will make up the bonded port + :type ifname: str + :param iftype: Type of interface + :type iftype: BridgeBondMap.interface_type + :param pci_address: PCI address of interface + :type pci_address: Optional[str] + :param mtu_request: MTU to request for interface + :type mtu_request: Optional[int] + """ + self._map[bridge][port][ifname] = { + 'type': str(iftype), + } + if pci_address: + self._map[bridge][port][ifname].update({ + 'pci-address': pci_address, + }) + if mtu_request is not None: + self._map[bridge][port][ifname].update({ + 'mtu-request': str(mtu_request) + }) + + def get_ifdatamap(self, bridge, port): + """Get structure suitable for charmhelpers.contrib.network.ovs helpers. + + :param bridge: Name of bridge on which the port will be added + :type bridge: str + :param port: Name of port which will represent one or more interfaces + :type port: str + """ + for _bridge, _ports in self.items(): + for _port, _interfaces in _ports.items(): + if _bridge == bridge and _port == port: + ifdatamap = {} + for name, data in _interfaces.items(): + ifdatamap.update({ + name: { + 'type': data['type'], + }, + }) + if data.get('mtu-request') is not None: + ifdatamap[name].update({ + 'mtu_request': data['mtu-request'], + }) + if data.get('pci-address'): + ifdatamap[name].update({ + 'options': { + 'dpdk-devargs': data['pci-address'], + }, + }) + return ifdatamap + + +class BondConfig(object): + """Container and helpers for bond configuration options. + + Data is put into a dictionary and a convenient config get interface is + provided. + """ + + DEFAULT_LACP_CONFIG = { + 'mode': 'balance-tcp', + 'lacp': 'active', + 'lacp-time': 'fast' + } + ALL_BONDS = 'ALL_BONDS' + + BOND_MODES = ['active-backup', 'balance-slb', 'balance-tcp'] + BOND_LACP = ['active', 'passive', 'off'] + BOND_LACP_TIME = ['fast', 'slow'] + + def __init__(self, config_key=None): + """Parse specified configuration option. 
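# [Editor's note] How the str.partition() chain below unpacks a
# dpdk-bond-config entry; 'dpdk-bond0:active-backup:off:slow' is a
# hypothetical value. partition(':')[0:3:2] yields (head, rest-of-string).
entry = 'dpdk-bond0:active-backup:off:slow'
bond, entry = entry.partition(':')[0:3:2]
mode, entry = entry.partition(':')[0:3:2]
lacp, entry = entry.partition(':')[0:3:2]
lacp_time, entry = entry.partition(':')[0:3:2]
assert (bond, mode, lacp, lacp_time) == (
    'dpdk-bond0', 'active-backup', 'off', 'slow')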
+
+        :param config_key: Configuration key to retrieve data from
+                           (default: ``dpdk-bond-config``)
+        :type config_key: Optional[str]
+        """
+        self.config_key = config_key or 'dpdk-bond-config'
+
+        self.lacp_config = {
+            self.ALL_BONDS: copy.deepcopy(self.DEFAULT_LACP_CONFIG)
+        }
+
+        lacp_config = config(self.config_key)
+        if lacp_config:
+            lacp_config_map = lacp_config.split()
+            for entry in lacp_config_map:
+                bond, entry = entry.partition(':')[0:3:2]
+                if not bond:
+                    bond = self.ALL_BONDS
+
+                mode, entry = entry.partition(':')[0:3:2]
+                if not mode:
+                    mode = self.DEFAULT_LACP_CONFIG['mode']
+                assert mode in self.BOND_MODES, \
+                    "Bond mode {} is invalid".format(mode)
+
+                lacp, entry = entry.partition(':')[0:3:2]
+                if not lacp:
+                    lacp = self.DEFAULT_LACP_CONFIG['lacp']
+                assert lacp in self.BOND_LACP, \
+                    "Bond lacp {} is invalid".format(lacp)
+
+                lacp_time, entry = entry.partition(':')[0:3:2]
+                if not lacp_time:
+                    lacp_time = self.DEFAULT_LACP_CONFIG['lacp-time']
+                assert lacp_time in self.BOND_LACP_TIME, \
+                    "Bond lacp-time {} is invalid".format(lacp_time)
+
+                self.lacp_config[bond] = {
+                    'mode': mode,
+                    'lacp': lacp,
+                    'lacp-time': lacp_time
+                }
+
+    def get_bond_config(self, bond):
+        """Get the LACP configuration for a bond
+
+        :param bond: the bond name
+        :return: a dictionary with the configuration of the bond
+        :rtype: Dict[str,Dict[str,str]]
+        """
+        return self.lacp_config.get(bond, self.lacp_config[self.ALL_BONDS])
+
+    def get_ovs_portdata(self, bond):
+        """Get structure suitable for charmhelpers.contrib.network.ovs helpers.
+
+        :param bond: the bond name
+        :return: a dictionary with the configuration of the bond
+        :rtype: Dict[str,Union[str,Dict[str,str]]]
+        """
+        bond_config = self.get_bond_config(bond)
+        return {
+            'bond_mode': bond_config['mode'],
+            'lacp': bond_config['lacp'],
+            'other_config': {
+                'lacp-time': bond_config['lacp-time'],
+            },
+        }
+
+
+class SRIOVContext(OSContextGenerator):
+    """Provide context for configuring SR-IOV devices."""
+
+    class sriov_config_mode(enum.Enum):
+        """Mode in which SR-IOV is configured.
+
+        The configuration option identified by the ``numvfs_key`` parameter
+        is overloaded and defines in which mode the charm should interpret
+        the other SR-IOV-related configuration options.
+        """
+        auto = 'auto'
+        blanket = 'blanket'
+        explicit = 'explicit'
+
+    def _determine_numvfs(self, device, sriov_numvfs):
+        """Determine number of Virtual Functions (VFs) configured for device.
+
+        :param device: Object describing a PCI Network interface card (NIC).
+        :type device: sriov_netplan_shim.pci.PCINetDevice
+        :param sriov_numvfs: Number of VFs requested for blanket configuration.
+        :type sriov_numvfs: int
+        :returns: Number of VFs to configure for device
+        :rtype: Optional[int]
+        """
+
+        def _get_capped_numvfs(requested):
+            """Get a number of VFs that does not exceed individual card limits.
+
+            Depending on make and model of NIC, the number of VFs supported
+            varies. Requesting more VFs than a card supports would be a fatal
+            error; cap the requested number at the total number of VFs each
+            individual card supports.
+
+            :param requested: Number of VFs requested
+            :type requested: int
+            :returns: Number of VFs allowed
+            :rtype: int
+            """
+            actual = min(int(requested), int(device.sriov_totalvfs))
+            if actual < int(requested):
+                log('Requested VFs ({}) too high for device {}. Falling back '
+                    'to value supported by device: {}'
+                    .format(requested, device.interface_name,
+                            device.sriov_totalvfs),
+                    level=WARNING)
+            return actual
+
+        if self._sriov_config_mode == self.sriov_config_mode.auto:
+            # auto-mode
+            #
+            # If device mapping configuration is present, return information
+            # on cards with mapping.
+            #
+            # If no device mapping configuration is present, return information
+            # for all cards.
+            #
+            # The maximum number of VFs supported by card will be used.
+            if (self._sriov_mapped_devices and
+                    device.interface_name not in self._sriov_mapped_devices):
+                log('SR-IOV configured in auto mode: No device mapping for {}'
+                    .format(device.interface_name),
+                    level=DEBUG)
+                return
+            return _get_capped_numvfs(device.sriov_totalvfs)
+        elif self._sriov_config_mode == self.sriov_config_mode.blanket:
+            # blanket-mode
+            #
+            # User has specified a number of VFs that should apply to all
+            # cards with support for VFs.
+            return _get_capped_numvfs(sriov_numvfs)
+        elif self._sriov_config_mode == self.sriov_config_mode.explicit:
+            # explicit-mode
+            #
+            # User has given a list of interface names and associated number of
+            # VFs
+            if device.interface_name not in self._sriov_config_devices:
+                log('SR-IOV configured in explicit mode: No device:numvfs '
+                    'pair for device {}, skipping.'
+                    .format(device.interface_name),
+                    level=DEBUG)
+                return
+            return _get_capped_numvfs(
+                self._sriov_config_devices[device.interface_name])
+        else:
+            raise RuntimeError('This should not be reached')
+
+    def __init__(self, numvfs_key=None, device_mappings_key=None):
+        """Initialize map from PCI devices and configuration options.
+
+        :param numvfs_key: Config key for numvfs (default: 'sriov-numvfs')
+        :type numvfs_key: Optional[str]
+        :param device_mappings_key: Config key for device mappings
+                                    (default: 'sriov-device-mappings')
+        :type device_mappings_key: Optional[str]
+        :raises: RuntimeError
+        """
+        numvfs_key = numvfs_key or 'sriov-numvfs'
+        device_mappings_key = device_mappings_key or 'sriov-device-mappings'
+
+        devices = pci.PCINetDevices()
+        charm_config = config()
+        sriov_numvfs = charm_config.get(numvfs_key) or ''
+        sriov_device_mappings = charm_config.get(device_mappings_key) or ''
+
+        # create list of devices from sriov_device_mappings config option
+        self._sriov_mapped_devices = [
+            pair.split(':', 1)[1]
+            for pair in sriov_device_mappings.split()
+        ]
+
+        # create map of device:numvfs from sriov_numvfs config option
+        self._sriov_config_devices = {
+            ifname: numvfs for ifname, numvfs in (
+                pair.split(':', 1) for pair in sriov_numvfs.split()
+                if ':' in pair)
+        }
+
+        # determine configuration mode from contents of sriov_numvfs
+        if sriov_numvfs == 'auto':
+            self._sriov_config_mode = self.sriov_config_mode.auto
+        elif sriov_numvfs.isdigit():
+            self._sriov_config_mode = self.sriov_config_mode.blanket
+        elif ':' in sriov_numvfs:
+            self._sriov_config_mode = self.sriov_config_mode.explicit
+        else:
+            raise RuntimeError('Unable to determine mode of SR-IOV '
+                               'configuration.')
+
+        self._map = {
+            device.interface_name: self._determine_numvfs(device, sriov_numvfs)
+            for device in devices.pci_devices
+            if device.sriov and
+            self._determine_numvfs(device, sriov_numvfs) is not None
+        }
+
+    def __call__(self):
+        """Provide SR-IOV context.
+
+        :returns: Map interface name: min(configured, max) virtual functions.
+ Example: + { + 'eth0': 16, + 'eth1': 32, + 'eth2': 64, + } + :rtype: Dict[str,int] + """ + return self._map diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/exceptions.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/exceptions.py new file mode 100644 index 0000000000000000000000000000000000000000..f85ae4f4cdbb6567cbdd896338bf88fbf3c9c0ec --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/exceptions.py @@ -0,0 +1,21 @@ +# Copyright 2016 Canonical Ltd +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + + +class OSContextError(Exception): + """Raised when an error occurs during context generation. + + This exception is principally used in contrib.openstack.context + """ + pass diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/files/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/files/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..9df5f746fbdf5491c640a77df907b71817cbc5af --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/files/__init__.py @@ -0,0 +1,16 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# dummy __init__.py to fool syncer into thinking this is a syncable python +# module diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/files/check_haproxy.sh b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/files/check_haproxy.sh new file mode 100755 index 0000000000000000000000000000000000000000..1df55db4816ec51d6732d68ea0a1e25e6f7b116e --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/files/check_haproxy.sh @@ -0,0 +1,34 @@ +#!/bin/bash +#-------------------------------------------- +# This file is managed by Juju +#-------------------------------------------- +# +# Copyright 2009,2012 Canonical Ltd. 
+# Author: Tom Haddon
+
+CRITICAL=0
+NOTACTIVE=''
+LOGFILE=/var/log/nagios/check_haproxy.log
+AUTH=$(grep -r "stats auth" /etc/haproxy/haproxy.cfg | awk 'NR==1{print $3}')
+
+typeset -i N_INSTANCES=0
+for appserver in $(awk '/^\s+server/{print $2}' /etc/haproxy/haproxy.cfg)
+do
+    N_INSTANCES=N_INSTANCES+1
+    output=$(/usr/lib/nagios/plugins/check_http -a ${AUTH} -I 127.0.0.1 -p 8888 -u '/;csv' --regex=",${appserver},.*,UP.*" -e ' 200 OK')
+    if [ $? != 0 ]; then
+        date >> $LOGFILE
+        echo $output >> $LOGFILE
+        /usr/lib/nagios/plugins/check_http -a ${AUTH} -I 127.0.0.1 -p 8888 -u '/;csv' -v | grep ",${appserver}," >> $LOGFILE 2>&1
+        CRITICAL=1
+        NOTACTIVE="${NOTACTIVE} $appserver"
+    fi
+done
+
+if [ $CRITICAL = 1 ]; then
+    echo "CRITICAL:${NOTACTIVE}"
+    exit 2
+fi
+
+echo "OK: All haproxy instances ($N_INSTANCES) looking good"
+exit 0
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/files/check_haproxy_queue_depth.sh b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/files/check_haproxy_queue_depth.sh
new file mode 100755
index 0000000000000000000000000000000000000000..91ce0246e66115994c3f518b36448f70100ecfc7
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/files/check_haproxy_queue_depth.sh
@@ -0,0 +1,30 @@
+#!/bin/bash
+#--------------------------------------------
+# This file is managed by Juju
+#--------------------------------------------
+#
+# Copyright 2009,2012 Canonical Ltd.
+# Author: Tom Haddon
+
+# These should be config options at some stage
+CURRQthrsh=0
+MAXQthrsh=100
+
+AUTH=$(grep -r "stats auth" /etc/haproxy/haproxy.cfg | awk 'NR==1{print $3}')
+
+HAPROXYSTATS=$(/usr/lib/nagios/plugins/check_http -a ${AUTH} -I 127.0.0.1 -p 8888 -u '/;csv' -v)
+
+for BACKEND in $(echo $HAPROXYSTATS| xargs -n1 | grep BACKEND | awk -F , '{print $1}')
+do
+    CURRQ=$(echo "$HAPROXYSTATS" | grep $BACKEND | grep BACKEND | cut -d , -f 3)
+    MAXQ=$(echo "$HAPROXYSTATS" | grep $BACKEND | grep BACKEND | cut -d , -f 4)
+
+    if [[ $CURRQ -gt $CURRQthrsh || $MAXQ -gt $MAXQthrsh ]] ; then
+        echo "CRITICAL: queue depth for $BACKEND - CURRENT:$CURRQ MAX:$MAXQ"
+        exit 2
+    fi
+done
+
+echo "OK: All haproxy queue depths looking good"
+exit 0
+
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/ha/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/ha/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..9b088de84e4b288b551603816fc10eebfa7b1503
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/ha/__init__.py
@@ -0,0 +1,13 @@
+# Copyright 2016 Canonical Ltd
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/ha/utils.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/ha/utils.py new file mode 100644 index 0000000000000000000000000000000000000000..a5cbdf535d495a09a0b91f41fdda09862e34140d --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/ha/utils.py @@ -0,0 +1,348 @@ +# Copyright 2014-2016 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# +# Copyright 2016 Canonical Ltd. +# +# Authors: +# Openstack Charmers < +# + +""" +Helpers for high availability. +""" + +import hashlib +import json + +import re + +from charmhelpers.core.hookenv import ( + expected_related_units, + log, + relation_set, + charm_name, + config, + status_set, + DEBUG, +) + +from charmhelpers.core.host import ( + lsb_release +) + +from charmhelpers.contrib.openstack.ip import ( + resolve_address, + is_ipv6, +) + +from charmhelpers.contrib.network.ip import ( + get_iface_for_address, + get_netmask_for_address, +) + +from charmhelpers.contrib.hahelpers.cluster import ( + get_hacluster_config +) + +JSON_ENCODE_OPTIONS = dict( + sort_keys=True, + allow_nan=False, + indent=None, + separators=(',', ':'), +) + +VIP_GROUP_NAME = 'grp_{service}_vips' +DNSHA_GROUP_NAME = 'grp_{service}_hostnames' + + +class DNSHAException(Exception): + """Raised when an error occurs setting up DNS HA + """ + + pass + + +def update_dns_ha_resource_params(resources, resource_params, + relation_id=None, + crm_ocf='ocf:maas:dns'): + """ Configure DNS-HA resources based on provided configuration and + update resource dictionaries for the HA relation. + + @param resources: Pointer to dictionary of resources. + Usually instantiated in ha_joined(). + @param resource_params: Pointer to dictionary of resource parameters. + Usually instantiated in ha_joined() + @param relation_id: Relation ID of the ha relation + @param crm_ocf: Corosync Open Cluster Framework resource agent to use for + DNS HA + """ + _relation_data = {'resources': {}, 'resource_params': {}} + update_hacluster_dns_ha(charm_name(), + _relation_data, + crm_ocf) + resources.update(_relation_data['resources']) + resource_params.update(_relation_data['resource_params']) + relation_set(relation_id=relation_id, groups=_relation_data['groups']) + + +def assert_charm_supports_dns_ha(): + """Validate prerequisites for DNS HA + The MAAS client is only available on Xenial or greater + + :raises DNSHAException: if release is < 16.04 + """ + if lsb_release().get('DISTRIB_RELEASE') < '16.04': + msg = ('DNS HA is only supported on 16.04 and greater ' + 'versions of Ubuntu.') + status_set('blocked', msg) + raise DNSHAException(msg) + return True + + +def expect_ha(): + """ Determine if the unit expects to be in HA + + Check juju goal-state if ha relation is expected, check for VIP or dns-ha + settings which indicate the unit should expect to be related to hacluster. 
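# [Editor's note] Effect of JSON_ENCODE_OPTIONS above: compact, key-sorted
# output so that repeated renders of the same data compare equal on the
# relation. A small standalone demonstration:
import json
demo = json.dumps({'b': 1, 'a': [1, 2]}, sort_keys=True, allow_nan=False,
                  indent=None, separators=(',', ':'))
assert demo == '{"a":[1,2],"b":1}'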
+ + @returns boolean + """ + ha_related_units = [] + try: + ha_related_units = list(expected_related_units(reltype='ha')) + except (NotImplementedError, KeyError): + pass + return len(ha_related_units) > 0 or config('vip') or config('dns-ha') + + +def generate_ha_relation_data(service, + extra_settings=None, + haproxy_enabled=True): + """ Generate relation data for ha relation + + Based on configuration options and unit interfaces, generate a json + encoded dict of relation data items for the hacluster relation, + providing configuration for DNS HA or VIP's + haproxy clone sets. + + Example of supplying additional settings:: + + COLO_CONSOLEAUTH = 'inf: res_nova_consoleauth grp_nova_vips' + AGENT_CONSOLEAUTH = 'ocf:openstack:nova-consoleauth' + AGENT_CA_PARAMS = 'op monitor interval="5s"' + + ha_console_settings = { + 'colocations': {'vip_consoleauth': COLO_CONSOLEAUTH}, + 'init_services': {'res_nova_consoleauth': 'nova-consoleauth'}, + 'resources': {'res_nova_consoleauth': AGENT_CONSOLEAUTH}, + 'resource_params': {'res_nova_consoleauth': AGENT_CA_PARAMS}) + generate_ha_relation_data('nova', extra_settings=ha_console_settings) + + + @param service: Name of the service being configured + @param extra_settings: Dict of additional resource data + @returns dict: json encoded data for use with relation_set + """ + _relation_data = {'resources': {}, 'resource_params': {}} + + if haproxy_enabled: + _meta = 'meta migration-threshold="INFINITY" failure-timeout="5s"' + _haproxy_res = 'res_{}_haproxy'.format(service) + _relation_data['resources'] = {_haproxy_res: 'lsb:haproxy'} + _relation_data['resource_params'] = { + _haproxy_res: '{} op monitor interval="5s"'.format(_meta) + } + _relation_data['init_services'] = {_haproxy_res: 'haproxy'} + _relation_data['clones'] = { + 'cl_{}_haproxy'.format(service): _haproxy_res + } + + if extra_settings: + for k, v in extra_settings.items(): + if _relation_data.get(k): + _relation_data[k].update(v) + else: + _relation_data[k] = v + + if config('dns-ha'): + update_hacluster_dns_ha(service, _relation_data) + else: + update_hacluster_vip(service, _relation_data) + + return { + 'json_{}'.format(k): json.dumps(v, **JSON_ENCODE_OPTIONS) + for k, v in _relation_data.items() if v + } + + +def update_hacluster_dns_ha(service, relation_data, + crm_ocf='ocf:maas:dns'): + """ Configure DNS-HA resources based on provided configuration + + @param service: Name of the service being configured + @param relation_data: Pointer to dictionary of relation data. + @param crm_ocf: Corosync Open Cluster Framework resource agent to use for + DNS HA + """ + # Validate the charm environment for DNS HA + assert_charm_supports_dns_ha() + + settings = ['os-admin-hostname', 'os-internal-hostname', + 'os-public-hostname', 'os-access-hostname'] + + # Check which DNS settings are set and update dictionaries + hostname_group = [] + for setting in settings: + hostname = config(setting) + if hostname is None: + log('DNS HA: Hostname setting {} is None. Ignoring.' + ''.format(setting), + DEBUG) + continue + m = re.search('os-(.+?)-hostname', setting) + if m: + endpoint_type = m.group(1) + # resolve_address's ADDRESS_MAP uses 'int' not 'internal' + if endpoint_type == 'internal': + endpoint_type = 'int' + else: + msg = ('Unexpected DNS hostname setting: {}. 
' + 'Cannot determine endpoint_type name' + ''.format(setting)) + status_set('blocked', msg) + raise DNSHAException(msg) + + hostname_key = 'res_{}_{}_hostname'.format(service, endpoint_type) + if hostname_key in hostname_group: + log('DNS HA: Resource {}: {} already exists in ' + 'hostname group - skipping'.format(hostname_key, hostname), + DEBUG) + continue + + hostname_group.append(hostname_key) + relation_data['resources'][hostname_key] = crm_ocf + relation_data['resource_params'][hostname_key] = ( + 'params fqdn="{}" ip_address="{}"' + .format(hostname, resolve_address(endpoint_type=endpoint_type, + override=False))) + + if len(hostname_group) >= 1: + log('DNS HA: Hostname group is set with {} as members. ' + 'Informing the ha relation'.format(' '.join(hostname_group)), + DEBUG) + relation_data['groups'] = { + DNSHA_GROUP_NAME.format(service=service): ' '.join(hostname_group) + } + else: + msg = 'DNS HA: Hostname group has no members.' + status_set('blocked', msg) + raise DNSHAException(msg) + + +def get_vip_settings(vip): + """Calculate which nic is on the correct network for the given vip. + + If nic or netmask discovery fail then fallback to using charm supplied + config. If fallback is used this is indicated via the fallback variable. + + @param vip: VIP to lookup nic and cidr for. + @returns (str, str, bool): eg (iface, netmask, fallback) + """ + iface = get_iface_for_address(vip) + netmask = get_netmask_for_address(vip) + fallback = False + if iface is None: + iface = config('vip_iface') + fallback = True + if netmask is None: + netmask = config('vip_cidr') + fallback = True + return iface, netmask, fallback + + +def update_hacluster_vip(service, relation_data): + """ Configure VIP resources based on provided configuration + + @param service: Name of the service being configured + @param relation_data: Pointer to dictionary of relation data. + """ + cluster_config = get_hacluster_config() + vip_group = [] + vips_to_delete = [] + for vip in cluster_config['vip'].split(): + if is_ipv6(vip): + res_vip = 'ocf:heartbeat:IPv6addr' + vip_params = 'ipv6addr' + else: + res_vip = 'ocf:heartbeat:IPaddr2' + vip_params = 'ip' + + iface, netmask, fallback = get_vip_settings(vip) + + vip_monitoring = 'op monitor timeout="20s" interval="10s" depth="0"' + if iface is not None: + # NOTE(jamespage): Delete old VIP resources + # Old style naming encoding iface in name + # does not work well in environments where + # interface/subnet wiring is not consistent + vip_key = 'res_{}_{}_vip'.format(service, iface) + if vip_key in vips_to_delete: + vip_key = '{}_{}'.format(vip_key, vip_params) + vips_to_delete.append(vip_key) + + vip_key = 'res_{}_{}_vip'.format( + service, + hashlib.sha1(vip.encode('UTF-8')).hexdigest()[:7]) + + relation_data['resources'][vip_key] = res_vip + # NOTE(jamespage): + # Use option provided vip params if these where used + # instead of auto-detected values + if fallback: + relation_data['resource_params'][vip_key] = ( + 'params {ip}="{vip}" cidr_netmask="{netmask}" ' + 'nic="{iface}" {vip_monitoring}'.format( + ip=vip_params, + vip=vip, + iface=iface, + netmask=netmask, + vip_monitoring=vip_monitoring)) + else: + # NOTE(jamespage): + # let heartbeat figure out which interface and + # netmask to configure, which works nicely + # when network interface naming is not + # consistent across units. 
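# [Editor's note] Example of the hashed resource-key scheme used above for a
# hypothetical service/VIP pair; the hash keeps the key stable regardless of
# interface naming, unlike the old iface-based keys queued for deletion:
import hashlib
service, vip = 'keystone', '10.0.0.100'
vip_key = 'res_{}_{}_vip'.format(
    service, hashlib.sha1(vip.encode('UTF-8')).hexdigest()[:7])
assert vip_key.startswith('res_keystone_') and vip_key.endswith('_vip')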
+ relation_data['resource_params'][vip_key] = ( + 'params {ip}="{vip}" {vip_monitoring}'.format( + ip=vip_params, + vip=vip, + vip_monitoring=vip_monitoring)) + + vip_group.append(vip_key) + + if vips_to_delete: + try: + relation_data['delete_resources'].extend(vips_to_delete) + except KeyError: + relation_data['delete_resources'] = vips_to_delete + + if len(vip_group) >= 1: + key = VIP_GROUP_NAME.format(service=service) + try: + relation_data['groups'][key] = ' '.join(vip_group) + except KeyError: + relation_data['groups'] = { + key: ' '.join(vip_group) + } diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/ip.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/ip.py new file mode 100644 index 0000000000000000000000000000000000000000..723aebc172e94a5b00c385d9861dd4d45c1bc753 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/ip.py @@ -0,0 +1,197 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +from charmhelpers.core.hookenv import ( + NoNetworkBinding, + config, + unit_get, + service_name, + network_get_primary_address, +) +from charmhelpers.contrib.network.ip import ( + get_address_in_network, + is_address_in_network, + is_ipv6, + get_ipv6_addr, + resolve_network_cidr, +) +from charmhelpers.contrib.hahelpers.cluster import is_clustered + +PUBLIC = 'public' +INTERNAL = 'int' +ADMIN = 'admin' +ACCESS = 'access' + +ADDRESS_MAP = { + PUBLIC: { + 'binding': 'public', + 'config': 'os-public-network', + 'fallback': 'public-address', + 'override': 'os-public-hostname', + }, + INTERNAL: { + 'binding': 'internal', + 'config': 'os-internal-network', + 'fallback': 'private-address', + 'override': 'os-internal-hostname', + }, + ADMIN: { + 'binding': 'admin', + 'config': 'os-admin-network', + 'fallback': 'private-address', + 'override': 'os-admin-hostname', + }, + ACCESS: { + 'binding': 'access', + 'config': 'access-network', + 'fallback': 'private-address', + 'override': 'os-access-hostname', + }, +} + + +def canonical_url(configs, endpoint_type=PUBLIC): + """Returns the correct HTTP URL to this host given the state of HTTPS + configuration, hacluster and charm configuration. + + :param configs: OSTemplateRenderer config templating object to inspect + for a complete https context. + :param endpoint_type: str endpoint type to resolve. + :param returns: str base URL for services on the current service unit. + """ + scheme = _get_scheme(configs) + + address = resolve_address(endpoint_type) + if is_ipv6(address): + address = "[{}]".format(address) + + return '%s://%s' % (scheme, address) + + +def _get_scheme(configs): + """Returns the scheme to use for the url (either http or https) + depending upon whether https is in the configs value. + + :param configs: OSTemplateRenderer config templating object to inspect + for a complete https context. 
+ :returns: either 'http' or 'https' depending on whether https is + configured within the configs context. + """ + scheme = 'http' + if configs and 'https' in configs.complete_contexts(): + scheme = 'https' + return scheme + + +def _get_address_override(endpoint_type=PUBLIC): + """Returns any address overrides that the user has defined based on the + endpoint type. + + Note: this function allows for the service name to be inserted into the + address if the user specifies {service_name}.somehost.org. + + :param endpoint_type: the type of endpoint to retrieve the override + value for. + :returns: any endpoint address or hostname that the user has overridden + or None if an override is not present. + """ + override_key = ADDRESS_MAP[endpoint_type]['override'] + addr_override = config(override_key) + if not addr_override: + return None + else: + return addr_override.format(service_name=service_name()) + + +def resolve_address(endpoint_type=PUBLIC, override=True): + """Return unit address depending on net config. + + If unit is clustered with vip(s) and has net splits defined, return vip on + correct network. If clustered with no nets defined, return primary vip. + + If not clustered, return unit address ensuring address is on configured net + split if one is configured, or a Juju 2.0 extra-binding has been used. + + :param endpoint_type: Network endpoing type + :param override: Accept hostname overrides or not + """ + resolved_address = None + if override: + resolved_address = _get_address_override(endpoint_type) + if resolved_address: + return resolved_address + + vips = config('vip') + if vips: + vips = vips.split() + + net_type = ADDRESS_MAP[endpoint_type]['config'] + net_addr = config(net_type) + net_fallback = ADDRESS_MAP[endpoint_type]['fallback'] + binding = ADDRESS_MAP[endpoint_type]['binding'] + clustered = is_clustered() + + if clustered and vips: + if net_addr: + for vip in vips: + if is_address_in_network(net_addr, vip): + resolved_address = vip + break + else: + # NOTE: endeavour to check vips against network space + # bindings + try: + bound_cidr = resolve_network_cidr( + network_get_primary_address(binding) + ) + for vip in vips: + if is_address_in_network(bound_cidr, vip): + resolved_address = vip + break + except (NotImplementedError, NoNetworkBinding): + # If no net-splits configured and no support for extra + # bindings/network spaces so we expect a single vip + resolved_address = vips[0] + else: + if config('prefer-ipv6'): + fallback_addr = get_ipv6_addr(exc_list=vips)[0] + else: + fallback_addr = unit_get(net_fallback) + + if net_addr: + resolved_address = get_address_in_network(net_addr, fallback_addr) + else: + # NOTE: only try to use extra bindings if legacy network + # configuration is not in use + try: + resolved_address = network_get_primary_address(binding) + except (NotImplementedError, NoNetworkBinding): + resolved_address = fallback_addr + + if resolved_address is None: + raise ValueError("Unable to resolve a suitable IP address based on " + "charm state and configuration. 
(net_type=%s, " + "clustered=%s)" % (net_type, clustered)) + + return resolved_address + + +def get_vip_in_network(network): + matching_vip = None + vips = config('vip') + if vips: + for vip in vips.split(): + if is_address_in_network(network, vip): + matching_vip = vip + return matching_vip diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/keystone.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/keystone.py new file mode 100644 index 0000000000000000000000000000000000000000..d7e02ccd90155710901e444482b589aa264158e6 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/keystone.py @@ -0,0 +1,178 @@ +#!/usr/bin/python +# +# Copyright 2017 Canonical Ltd +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import six +from charmhelpers.fetch import apt_install +from charmhelpers.contrib.openstack.context import IdentityServiceContext +from charmhelpers.core.hookenv import ( + log, + ERROR, +) + + +def get_api_suffix(api_version): + """Return the formatted api suffix for the given version + @param api_version: version of the keystone endpoint + @returns the api suffix formatted according to the given api + version + """ + return 'v2.0' if api_version in (2, "2", "2.0") else 'v3' + + +def format_endpoint(schema, addr, port, api_version): + """Return a formatted keystone endpoint + @param schema: http or https + @param addr: ipv4/ipv6 host of the keystone service + @param port: port of the keystone service + @param api_version: 2 or 3 + @returns a fully formatted keystone endpoint + """ + return '{}://{}:{}/{}/'.format(schema, addr, port, + get_api_suffix(api_version)) + + +def get_keystone_manager(endpoint, api_version, **kwargs): + """Return a keystonemanager for the correct API version + + @param endpoint: the keystone endpoint to point client at + @param api_version: version of the keystone api the client should use + @param kwargs: token or username/tenant/password information + @returns keystonemanager class used for interrogating keystone + """ + if api_version == 2: + return KeystoneManager2(endpoint, **kwargs) + if api_version == 3: + return KeystoneManager3(endpoint, **kwargs) + raise ValueError('No manager found for api version {}'.format(api_version)) + + +def get_keystone_manager_from_identity_service_context(): + """Return a keystonmanager generated from a + instance of charmhelpers.contrib.openstack.context.IdentityServiceContext + @returns keystonamenager instance + """ + context = IdentityServiceContext()() + if not context: + msg = "Identity service context cannot be generated" + log(msg, level=ERROR) + raise ValueError(msg) + + endpoint = format_endpoint(context['service_protocol'], + context['service_host'], + context['service_port'], + context['api_version']) + + if context['api_version'] in (2, "2.0"): + api_version = 2 + else: + api_version = 3 + + return get_keystone_manager(endpoint, api_version, + username=context['admin_user'], + 
password=context['admin_password'], + tenant_name=context['admin_tenant_name']) + + +class KeystoneManager(object): + + def resolve_service_id(self, service_name=None, service_type=None): + """Find the service_id of a given service""" + services = [s._info for s in self.api.services.list()] + + service_name = service_name.lower() + for s in services: + name = s['name'].lower() + if service_type and service_name: + if (service_name == name and service_type == s['type']): + return s['id'] + elif service_name and service_name == name: + return s['id'] + elif service_type and service_type == s['type']: + return s['id'] + return None + + def service_exists(self, service_name=None, service_type=None): + """Determine if the given service exists on the service list""" + return self.resolve_service_id(service_name, service_type) is not None + + +class KeystoneManager2(KeystoneManager): + + def __init__(self, endpoint, **kwargs): + try: + from keystoneclient.v2_0 import client + from keystoneclient.auth.identity import v2 + from keystoneclient import session + except ImportError: + if six.PY2: + apt_install(["python-keystoneclient"], fatal=True) + else: + apt_install(["python3-keystoneclient"], fatal=True) + + from keystoneclient.v2_0 import client + from keystoneclient.auth.identity import v2 + from keystoneclient import session + + self.api_version = 2 + + token = kwargs.get("token", None) + if token: + api = client.Client(endpoint=endpoint, token=token) + else: + auth = v2.Password(username=kwargs.get("username"), + password=kwargs.get("password"), + tenant_name=kwargs.get("tenant_name"), + auth_url=endpoint) + sess = session.Session(auth=auth) + api = client.Client(session=sess) + + self.api = api + + +class KeystoneManager3(KeystoneManager): + + def __init__(self, endpoint, **kwargs): + try: + from keystoneclient.v3 import client + from keystoneclient.auth import token_endpoint + from keystoneclient import session + from keystoneclient.auth.identity import v3 + except ImportError: + if six.PY2: + apt_install(["python-keystoneclient"], fatal=True) + else: + apt_install(["python3-keystoneclient"], fatal=True) + + from keystoneclient.v3 import client + from keystoneclient.auth import token_endpoint + from keystoneclient import session + from keystoneclient.auth.identity import v3 + + self.api_version = 3 + + token = kwargs.get("token", None) + if token: + auth = token_endpoint.Token(endpoint=endpoint, + token=token) + sess = session.Session(auth=auth) + else: + auth = v3.Password(auth_url=endpoint, + user_id=kwargs.get("username"), + password=kwargs.get("password"), + project_id=kwargs.get("tenant_name")) + sess = session.Session(auth=auth) + + self.api = client.Client(session=sess) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/neutron.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/neutron.py new file mode 100644 index 0000000000000000000000000000000000000000..fb5607f3e73159d90236b2d7a4051aa82119e889 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/neutron.py @@ -0,0 +1,359 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# Various utilies for dealing with Neutron and the renaming from Quantum. + +import six +from subprocess import check_output + +from charmhelpers.core.hookenv import ( + config, + log, + ERROR, +) + +from charmhelpers.contrib.openstack.utils import ( + os_release, + CompareOpenStackReleases, +) + + +def headers_package(): + """Ensures correct linux-headers for running kernel are installed, + for building DKMS package""" + kver = check_output(['uname', '-r']).decode('UTF-8').strip() + return 'linux-headers-%s' % kver + + +QUANTUM_CONF_DIR = '/etc/quantum' + + +def kernel_version(): + """ Retrieve the current major kernel version as a tuple e.g. (3, 13) """ + kver = check_output(['uname', '-r']).decode('UTF-8').strip() + kver = kver.split('.') + return (int(kver[0]), int(kver[1])) + + +def determine_dkms_package(): + """ Determine which DKMS package should be used based on kernel version """ + # NOTE: 3.13 kernels have support for GRE and VXLAN native + if kernel_version() >= (3, 13): + return [] + else: + return [headers_package(), 'openvswitch-datapath-dkms'] + + +# legacy + + +def quantum_plugins(): + return { + 'ovs': { + 'config': '/etc/quantum/plugins/openvswitch/' + 'ovs_quantum_plugin.ini', + 'driver': 'quantum.plugins.openvswitch.ovs_quantum_plugin.' + 'OVSQuantumPluginV2', + 'contexts': [], + 'services': ['quantum-plugin-openvswitch-agent'], + 'packages': [determine_dkms_package(), + ['quantum-plugin-openvswitch-agent']], + 'server_packages': ['quantum-server', + 'quantum-plugin-openvswitch'], + 'server_services': ['quantum-server'] + }, + 'nvp': { + 'config': '/etc/quantum/plugins/nicira/nvp.ini', + 'driver': 'quantum.plugins.nicira.nicira_nvp_plugin.' + 'QuantumPlugin.NvpPluginV2', + 'contexts': [], + 'services': [], + 'packages': [], + 'server_packages': ['quantum-server', + 'quantum-plugin-nicira'], + 'server_services': ['quantum-server'] + } + } + + +NEUTRON_CONF_DIR = '/etc/neutron' + + +def neutron_plugins(): + release = os_release('nova-common') + plugins = { + 'ovs': { + 'config': '/etc/neutron/plugins/openvswitch/' + 'ovs_neutron_plugin.ini', + 'driver': 'neutron.plugins.openvswitch.ovs_neutron_plugin.' + 'OVSNeutronPluginV2', + 'contexts': [], + 'services': ['neutron-plugin-openvswitch-agent'], + 'packages': [determine_dkms_package(), + ['neutron-plugin-openvswitch-agent']], + 'server_packages': ['neutron-server', + 'neutron-plugin-openvswitch'], + 'server_services': ['neutron-server'] + }, + 'nvp': { + 'config': '/etc/neutron/plugins/nicira/nvp.ini', + 'driver': 'neutron.plugins.nicira.nicira_nvp_plugin.' 
+ 'NeutronPlugin.NvpPluginV2', + 'contexts': [], + 'services': [], + 'packages': [], + 'server_packages': ['neutron-server', + 'neutron-plugin-nicira'], + 'server_services': ['neutron-server'] + }, + 'nsx': { + 'config': '/etc/neutron/plugins/vmware/nsx.ini', + 'driver': 'vmware', + 'contexts': [], + 'services': [], + 'packages': [], + 'server_packages': ['neutron-server', + 'neutron-plugin-vmware'], + 'server_services': ['neutron-server'] + }, + 'n1kv': { + 'config': '/etc/neutron/plugins/cisco/cisco_plugins.ini', + 'driver': 'neutron.plugins.cisco.network_plugin.PluginV2', + 'contexts': [], + 'services': [], + 'packages': [determine_dkms_package(), + ['neutron-plugin-cisco']], + 'server_packages': ['neutron-server', + 'neutron-plugin-cisco'], + 'server_services': ['neutron-server'] + }, + 'Calico': { + 'config': '/etc/neutron/plugins/ml2/ml2_conf.ini', + 'driver': 'neutron.plugins.ml2.plugin.Ml2Plugin', + 'contexts': [], + 'services': ['calico-felix', + 'bird', + 'neutron-dhcp-agent', + 'nova-api-metadata', + 'etcd'], + 'packages': [determine_dkms_package(), + ['calico-compute', + 'bird', + 'neutron-dhcp-agent', + 'nova-api-metadata', + 'etcd']], + 'server_packages': ['neutron-server', 'calico-control', 'etcd'], + 'server_services': ['neutron-server', 'etcd'] + }, + 'vsp': { + 'config': '/etc/neutron/plugins/nuage/nuage_plugin.ini', + 'driver': 'neutron.plugins.nuage.plugin.NuagePlugin', + 'contexts': [], + 'services': [], + 'packages': [], + 'server_packages': ['neutron-server', 'neutron-plugin-nuage'], + 'server_services': ['neutron-server'] + }, + 'plumgrid': { + 'config': '/etc/neutron/plugins/plumgrid/plumgrid.ini', + 'driver': ('neutron.plugins.plumgrid.plumgrid_plugin' + '.plumgrid_plugin.NeutronPluginPLUMgridV2'), + 'contexts': [], + 'services': [], + 'packages': ['plumgrid-lxc', + 'iovisor-dkms'], + 'server_packages': ['neutron-server', + 'neutron-plugin-plumgrid'], + 'server_services': ['neutron-server'] + }, + 'midonet': { + 'config': '/etc/neutron/plugins/midonet/midonet.ini', + 'driver': 'midonet.neutron.plugin.MidonetPluginV2', + 'contexts': [], + 'services': [], + 'packages': [determine_dkms_package()], + 'server_packages': ['neutron-server', + 'python-neutron-plugin-midonet'], + 'server_services': ['neutron-server'] + } + } + if CompareOpenStackReleases(release) >= 'icehouse': + # NOTE: patch in ml2 plugin for icehouse onwards + plugins['ovs']['config'] = '/etc/neutron/plugins/ml2/ml2_conf.ini' + plugins['ovs']['driver'] = 'neutron.plugins.ml2.plugin.Ml2Plugin' + plugins['ovs']['server_packages'] = ['neutron-server', + 'neutron-plugin-ml2'] + # NOTE: patch in vmware renames nvp->nsx for icehouse onwards + plugins['nvp'] = plugins['nsx'] + if CompareOpenStackReleases(release) >= 'kilo': + plugins['midonet']['driver'] = ( + 'neutron.plugins.midonet.plugin.MidonetPluginV2') + if CompareOpenStackReleases(release) >= 'liberty': + plugins['midonet']['driver'] = ( + 'midonet.neutron.plugin_v1.MidonetPluginV2') + plugins['midonet']['server_packages'].remove( + 'python-neutron-plugin-midonet') + plugins['midonet']['server_packages'].append( + 'python-networking-midonet') + plugins['plumgrid']['driver'] = ( + 'networking_plumgrid.neutron.plugins' + '.plugin.NeutronPluginPLUMgridV2') + plugins['plumgrid']['server_packages'].remove( + 'neutron-plugin-plumgrid') + if CompareOpenStackReleases(release) >= 'mitaka': + plugins['nsx']['server_packages'].remove('neutron-plugin-vmware') + plugins['nsx']['server_packages'].append('python-vmware-nsx') + plugins['nsx']['config'] = 
'/etc/neutron/nsx.ini'
+        plugins['vsp']['driver'] = (
+            'nuage_neutron.plugins.nuage.plugin.NuagePlugin')
+    if CompareOpenStackReleases(release) >= 'newton':
+        plugins['vsp']['config'] = '/etc/neutron/plugins/ml2/ml2_conf.ini'
+        plugins['vsp']['driver'] = 'neutron.plugins.ml2.plugin.Ml2Plugin'
+        plugins['vsp']['server_packages'] = ['neutron-server',
+                                             'neutron-plugin-ml2']
+    return plugins
+
+
+def neutron_plugin_attribute(plugin, attr, net_manager=None):
+    manager = net_manager or network_manager()
+    if manager == 'quantum':
+        plugins = quantum_plugins()
+    elif manager == 'neutron':
+        plugins = neutron_plugins()
+    else:
+        log("Network manager '%s' does not support plugins." % (manager),
+            level=ERROR)
+        raise Exception
+
+    try:
+        _plugin = plugins[plugin]
+    except KeyError:
+        log('Unrecognised plugin for %s: %s' % (manager, plugin), level=ERROR)
+        raise Exception
+
+    try:
+        return _plugin[attr]
+    except KeyError:
+        return None
+
+
+def network_manager():
+    '''
+    Deals with the renaming of Quantum to Neutron in H and any situations
+    that require compatibility (eg, deploying H with network-manager=quantum,
+    upgrading from G).
+    '''
+    release = os_release('nova-common')
+    manager = config('network-manager').lower()
+
+    if manager not in ['quantum', 'neutron']:
+        return manager
+
+    if release in ['essex']:
+        # E does not support neutron
+        log('Neutron networking not supported in Essex.', level=ERROR)
+        raise Exception
+    elif release in ['folsom', 'grizzly']:
+        # neutron is named quantum in F and G
+        return 'quantum'
+    else:
+        # ensure accurate naming for all releases post-H
+        return 'neutron'
+
+
+def parse_mappings(mappings, key_rvalue=False):
+    """By default mappings are lvalue keyed.
+
+    If key_rvalue is True, the mapping will be reversed to allow multiple
+    configs for the same lvalue.
+    """
+    parsed = {}
+    if mappings:
+        mappings = mappings.split()
+        for m in mappings:
+            p = m.partition(':')
+
+            if key_rvalue:
+                key_index = 2
+                val_index = 0
+                # if there is no rvalue skip to next
+                if not p[1]:
+                    continue
+            else:
+                key_index = 0
+                val_index = 2
+
+            key = p[key_index].strip()
+            parsed[key] = p[val_index].strip()
+
+    return parsed
+
+
+def parse_bridge_mappings(mappings):
+    """Parse bridge mappings.
+
+    Mappings must be a space-delimited list of provider:bridge mappings.
+
+    Returns dict of the form {provider:bridge}.
+    """
+    return parse_mappings(mappings)
+
+
+def parse_data_port_mappings(mappings, default_bridge='br-data'):
+    """Parse data port mappings.
+
+    Mappings must be a space-delimited list of bridge:port.
+
+    Returns dict of the form {port:bridge} where ports may be mac addresses or
+    interface names.
+    """
+
+    # NOTE(dosaboy): we use the rvalue as the key since the port may be a mac
+    # address, which will differ across units; keying on it allows a
+    # first-known-good value to be chosen from multiple proposals.
+    _mappings = parse_mappings(mappings, key_rvalue=True)
+    if not _mappings or list(_mappings.values()) == ['']:
+        if not mappings:
+            return {}
+
+        # For backwards-compatibility we need to support port-only provided in
+        # config.
+        _mappings = {mappings.split()[0]: default_bridge}
+
+    ports = _mappings.keys()
+    if len(set(ports)) != len(ports):
+        raise Exception("It is not allowed to have the same port configured "
+                        "on more than one bridge")
+
+    return _mappings
+
+
+def parse_vlan_range_mappings(mappings):
+    """Parse vlan range mappings.
+
+    Mappings must be a space-delimited list of provider:start:end mappings.
+
+    The start:end range is optional and may be omitted.
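+
+    For example (an illustrative input, not from the upstream docs), the
+    string "phys1:100:200 phys2" parses to {'phys1': ('100', '200'),
+    'phys2': ('',)}.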
+ + Returns dict of the form {provider: (start, end)}. + """ + _mappings = parse_mappings(mappings) + if not _mappings: + return {} + + mappings = {} + for p, r in six.iteritems(_mappings): + mappings[p] = tuple(r.split(':')) + + return mappings diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/policyd.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/policyd.py new file mode 100644 index 0000000000000000000000000000000000000000..f2bb21e9db926bd2c4de8ab3e8d10d0837af563a --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/policyd.py @@ -0,0 +1,801 @@ +# Copyright 2019 Canonical Ltd +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import collections +import contextlib +import os +import six +import shutil +import yaml +import zipfile + +import charmhelpers +import charmhelpers.core.hookenv as hookenv +import charmhelpers.core.host as ch_host + +# Features provided by this module: + +""" +Policy.d helper functions +========================= + +The functions in this module are designed, as a set, to provide an easy-to-use +set of hooks for classic charms to add in /etc//policy.d/ +directory override YAML files. + +(For charms.openstack charms, a mixin class is provided for this +functionality). + +In order to "hook" this functionality into a (classic) charm, two functions are +provided: + + maybe_do_policyd_overrides(openstack_release, + service, + blacklist_paths=none, + blacklist_keys=none, + template_function=none, + restart_handler=none) + + maybe_do_policyd_overrides_on_config_changed(openstack_release, + service, + blacklist_paths=None, + blacklist_keys=None, + template_function=None, + restart_handler=None + +(See the docstrings for details on the parameters) + +The functions should be called from the install and upgrade hooks in the charm. +The `maybe_do_policyd_overrides_on_config_changed` function is designed to be +called on the config-changed hook, in that it does an additional check to +ensure that an already overriden policy.d in an upgrade or install hooks isn't +repeated. + +In order the *enable* this functionality, the charm's install, config_changed, +and upgrade_charm hooks need to be modified, and a new config option (see +below) needs to be added. The README for the charm should also be updated. + +Examples from the keystone charm are: + +@hooks.hook('install.real') +@harden() +def install(): + ... + # call the policy overrides handler which will install any policy overrides + maybe_do_policyd_overrides(os_release('keystone'), 'keystone') + + +@hooks.hook('config-changed') +@restart_on_change(restart_map(), restart_functions=restart_function_map()) +@harden() +def config_changed(): + ... 
+ # call the policy overrides handler which will install any policy overrides + maybe_do_policyd_overrides_on_config_changed(os_release('keystone'), + 'keystone') + +@hooks.hook('upgrade-charm') +@restart_on_change(restart_map(), stopstart=True) +@harden() +def upgrade_charm(): + ... + # call the policy overrides handler which will install any policy overrides + maybe_do_policyd_overrides(os_release('keystone'), 'keystone') + +Status Line +=========== + +The workload status code in charm-helpers has been modified to detect if +policy.d override code has been incorporated into the charm by checking for the +new config variable (in the config.yaml). If it has been, then the workload +status line will automatically show "PO:" at the beginning of the workload +status for that unit/service if the config option is set. If the policy +override is broken, the "PO (broken):" will be shown. No changes to the charm +(apart from those already mentioned) are needed to enable this functionality. +(charms.openstack charms also get this functionality, but please see that +library for further details). +""" + +# The config.yaml for the charm should contain the following for the config +# option: + +""" + use-policyd-override: + type: boolean + default: False + description: | + If True then use the resource file named 'policyd-override' to install + override YAML files in the service's policy.d directory. The resource + file should be a ZIP file containing at least one yaml file with a .yaml + or .yml extension. If False then remove the overrides. +""" + +# The metadata.yaml for the charm should contain the following: +""" +resources: + policyd-override: + type: file + filename: policyd-override.zip + description: The policy.d overrides file +""" + +# The README for the charm should contain the following: +""" +Policy Overrides +---------------- + +This feature allows for policy overrides using the `policy.d` directory. This +is an **advanced** feature and the policies that the OpenStack service supports +should be clearly and unambiguously understood before trying to override, or +add to, the default policies that the service uses. The charm also has some +policy defaults. They should also be understood before being overridden. + +> **Caution**: It is possible to break the system (for tenants and other + services) if policies are incorrectly applied to the service. + +Policy overrides are YAML files that contain rules that will add to, or +override, existing policy rules in the service. The `policy.d` directory is +a place to put the YAML override files. This charm owns the +`/etc/keystone/policy.d` directory, and as such, any manual changes to it will +be overwritten on charm upgrades. + +Overrides are provided to the charm using a Juju resource called +`policyd-override`. The resource is a ZIP file. This file, say +`overrides.zip`, is attached to the charm by: + + + juju attach-resource policyd-override=overrides.zip + +The policy override is enabled in the charm using: + + juju config use-policyd-override=true + +When `use-policyd-override` is `True` the status line of the charm will be +prefixed with `PO:` indicating that policies have been overridden. If the +installation of the policy override YAML files failed for any reason then the +status line will be prefixed with `PO (broken):`. The log file for the charm +will indicate the reason. No policy override files are installed if the `PO +(broken):` is shown. 
The status line indicates that the overrides are broken, +not that the policy for the service has failed. The policy will be the defaults +for the charm and service. + +Policy overrides on one service may affect the functionality of another +service. Therefore, it may be necessary to provide policy overrides for +multiple service charms to achieve a consistent set of policies across the +OpenStack system. The charms for the other services that may need overrides +should be checked to ensure that they support overrides before proceeding. +""" + +POLICYD_VALID_EXTS = ['.yaml', '.yml', '.j2', '.tmpl', '.tpl'] +POLICYD_TEMPLATE_EXTS = ['.j2', '.tmpl', '.tpl'] +POLICYD_RESOURCE_NAME = "policyd-override" +POLICYD_CONFIG_NAME = "use-policyd-override" +POLICYD_SUCCESS_FILENAME = "policyd-override-success" +POLICYD_LOG_LEVEL_DEFAULT = hookenv.INFO +POLICYD_ALWAYS_BLACKLISTED_KEYS = ("admin_required", "cloud_admin") + + +class BadPolicyZipFile(Exception): + + def __init__(self, log_message): + self.log_message = log_message + + def __str__(self): + return self.log_message + + +class BadPolicyYamlFile(Exception): + + def __init__(self, log_message): + self.log_message = log_message + + def __str__(self): + return self.log_message + + +if six.PY2: + BadZipFile = zipfile.BadZipfile +else: + BadZipFile = zipfile.BadZipFile + + +def is_policyd_override_valid_on_this_release(openstack_release): + """Check that the charm is running on at least Ubuntu Xenial, and at + least the queens release. + + :param openstack_release: the release codename that is installed. + :type openstack_release: str + :returns: True if okay + :rtype: bool + """ + # NOTE(ajkavanagh) circular import! This is because the status message + # generation code in utils has to call into this module, but this function + # needs the CompareOpenStackReleases() function. The only way to solve + # this is either to put ALL of this module into utils, or refactor one or + # other of the CompareOpenStackReleases or status message generation code + # into a 3rd module. + import charmhelpers.contrib.openstack.utils as ch_utils + return ch_utils.CompareOpenStackReleases(openstack_release) >= 'queens' + + +def maybe_do_policyd_overrides(openstack_release, + service, + blacklist_paths=None, + blacklist_keys=None, + template_function=None, + restart_handler=None, + user=None, + group=None, + config_changed=False): + """If the config option is set, get the resource file and process it to + enable the policy.d overrides for the service passed. + + The param `openstack_release` is required as the policyd overrides feature + is only supported on openstack_release "queens" or later, and on ubuntu + "xenial" or later. Prior to these versions, this feature is a NOP. + + The optional template_function is a function that accepts a string and has + an opportunity to modify the loaded file prior to it being read by + yaml.safe_load(). This allows the charm to perform "templating" using + charm derived data. + + The param blacklist_paths are paths (that are in the service's policy.d + directory that should not be touched). + + The param blacklist_keys are keys that must not appear in the yaml file. + If they do, then the whole policy.d file fails. + + The yaml file extracted from the resource_file (which is a zipped file) has + its file path reconstructed. This, also, must not match any path in the + black list. + + The param restart_handler is an optional Callable that is called to perform + the service restart if the policy.d file is changed. 
This should normally + be None as oslo.policy automatically picks up changes in the policy.d + directory. However, for any services where this is buggy then a + restart_handler can be used to force the policy.d files to be read. + + If the config_changed param is True, then the handling is slightly + different: It will only perform the policyd overrides if the config is True + and the success file doesn't exist. Otherwise, it does nothing as the + resource file has already been processed. + + :param openstack_release: The openstack release that is installed. + :type openstack_release: str + :param service: the service name to construct the policy.d directory for. + :type service: str + :param blacklist_paths: optional list of paths to leave alone + :type blacklist_paths: Union[None, List[str]] + :param blacklist_keys: optional list of keys that mustn't appear in the + yaml file's + :type blacklist_keys: Union[None, List[str]] + :param template_function: Optional function that can modify the string + prior to being processed as a Yaml document. + :type template_function: Union[None, Callable[[str], str]] + :param restart_handler: The function to call if the service should be + restarted. + :type restart_handler: Union[None, Callable[]] + :param user: The user to create/write files/directories as + :type user: Union[None, str] + :param group: the group to create/write files/directories as + :type group: Union[None, str] + :param config_changed: Set to True for config_changed hook. + :type config_changed: bool + """ + _user = service if user is None else user + _group = service if group is None else group + if not is_policyd_override_valid_on_this_release(openstack_release): + return + hookenv.log("Running maybe_do_policyd_overrides", + level=POLICYD_LOG_LEVEL_DEFAULT) + config = hookenv.config() + try: + if not config.get(POLICYD_CONFIG_NAME, False): + clean_policyd_dir_for(service, + blacklist_paths, + user=_user, + group=_group) + if (os.path.isfile(_policy_success_file()) and + restart_handler is not None and + callable(restart_handler)): + restart_handler() + remove_policy_success_file() + return + except Exception as e: + hookenv.log("... ERROR: Exception is: {}".format(str(e)), + level=POLICYD_CONFIG_NAME) + import traceback + hookenv.log(traceback.format_exc(), level=POLICYD_LOG_LEVEL_DEFAULT) + return + # if the policyd overrides have been performed when doing config_changed + # just return + if config_changed and is_policy_success_file_set(): + hookenv.log("... already setup, so skipping.", + level=POLICYD_LOG_LEVEL_DEFAULT) + return + # from now on it should succeed; if it doesn't then status line will show + # broken. + resource_filename = get_policy_resource_filename() + restart = process_policy_resource_file( + resource_filename, service, blacklist_paths, blacklist_keys, + template_function) + if restart and restart_handler is not None and callable(restart_handler): + restart_handler() + + +@charmhelpers.deprecate("Use maybe_do_poliyd_overrrides instead") +def maybe_do_policyd_overrides_on_config_changed(*args, **kwargs): + """This function is designed to be called from the config changed hook. + + DEPRECATED: please use maybe_do_policyd_overrides() with the param + `config_changed` as `True`. + + See maybe_do_policyd_overrides() for more details on the params. 
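+
+    A sketch of a call from a config-changed hook, mirroring the keystone
+    example in the module docstring (service name assumed)::
+
+        maybe_do_policyd_overrides_on_config_changed(
+            os_release('keystone'), 'keystone')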
+ """ + if 'config_changed' not in kwargs.keys(): + kwargs['config_changed'] = True + return maybe_do_policyd_overrides(*args, **kwargs) + + +def get_policy_resource_filename(): + """Function to extract the policy resource filename + + :returns: The filename of the resource, if set, otherwise, if an error + occurs, then None is returned. + :rtype: Union[str, None] + """ + try: + return hookenv.resource_get(POLICYD_RESOURCE_NAME) + except Exception: + return None + + +@contextlib.contextmanager +def open_and_filter_yaml_files(filepath, has_subdirs=False): + """Validate that the filepath provided is a zip file and contains at least + one (.yaml|.yml) file, and that the files are not duplicated when the zip + file is flattened. Note that the yaml files are not checked. This is the + first stage in validating the policy zipfile; individual yaml files are not + checked for validity or black listed keys. + + If the has_subdirs param is True, then the files are flattened to the first + directory, and the files in the root are ignored. + + An example of use is: + + with open_and_filter_yaml_files(some_path) as zfp, g: + for zipinfo in g: + # do something with zipinfo ... + + :param filepath: a filepath object that can be opened by zipfile + :type filepath: Union[AnyStr, os.PathLike[AntStr]] + :param has_subdirs: Keep first level of subdirectories in yaml file. + :type has_subdirs: bool + :returns: (zfp handle, + a generator of the (name, filename, ZipInfo object) tuples) as a + tuple. + :rtype: ContextManager[(zipfile.ZipFile, + Generator[(name, str, str, zipfile.ZipInfo)])] + :raises: zipfile.BadZipFile + :raises: BadPolicyZipFile if duplicated yaml or missing + :raises: IOError if the filepath is not found + """ + with zipfile.ZipFile(filepath, 'r') as zfp: + # first pass through; check for duplicates and at least one yaml file. + names = collections.defaultdict(int) + yamlfiles = _yamlfiles(zfp, has_subdirs) + for name, _, _, _ in yamlfiles: + names[name] += 1 + # There must be at least 1 yaml file. + if len(names.keys()) == 0: + raise BadPolicyZipFile("contains no yaml files with {} extensions." + .format(", ".join(POLICYD_VALID_EXTS))) + # There must be no duplicates + duplicates = [n for n, c in names.items() if c > 1] + if duplicates: + raise BadPolicyZipFile("{} have duplicates in the zip file." + .format(", ".join(duplicates))) + # Finally, let's yield the generator + yield (zfp, yamlfiles) + + +def _yamlfiles(zipfile, has_subdirs=False): + """Helper to get a yaml file (according to POLICYD_VALID_EXTS extensions) + and the infolist item from a zipfile. + + If the `has_subdirs` param is True, the the only yaml files that have a + directory component are read, and then first part of the directory + component is kept, along with the filename in the name. e.g. an entry with + a filename of: + + compute/someotherdir/override.yaml + + is returned as: + + compute/override, yaml, override.yaml, + + This is to help with the special, additional, processing that the dashboard + charm requires. + + :param zipfile: the zipfile to read zipinfo items from + :type zipfile: zipfile.ZipFile + :param has_subdirs: Keep first level of subdirectories in yaml file. + :type has_subdirs: bool + :returns: generator of (name, ext, filename, info item) for each + self-identified yaml file. 
+    :rtype: List[(str, str, str, zipfile.ZipInfo)]
+    """
+    files = []
+    for infolist_item in zipfile.infolist():
+        try:
+            if infolist_item.is_dir():
+                continue
+        except AttributeError:
+            # fallback to "old" way to determine dir entry for pre-py36
+            if infolist_item.filename.endswith('/'):
+                continue
+        _dir, name_ext = os.path.split(infolist_item.filename)
+        name, ext = os.path.splitext(name_ext)
+        if has_subdirs and _dir != "":
+            name = os.path.join(_dir.split(os.path.sep)[0], name)
+        ext = ext.lower()
+        if ext and ext in POLICYD_VALID_EXTS:
+            files.append((name, ext, name_ext, infolist_item))
+    return files
+
+
+def read_and_validate_yaml(stream_or_doc, blacklist_keys=None):
+    """Read, validate and return the (first) yaml document from the stream.
+
+    The doc is read, and checked for a yaml file. Then the top-level keys are
+    checked against the blacklist_keys provided. If there are problems then an
+    Exception is raised. Otherwise the yaml document is returned as a Python
+    object that can be dumped back as a yaml file on the system.
+
+    The yaml file must only consist of a str:str mapping, and if not then the
+    yaml file is rejected.
+
+    :param stream_or_doc: the file object to read the yaml from
+    :type stream_or_doc: Union[AnyStr, IO[AnyStr]]
+    :param blacklist_keys: Any keys, which if in the yaml file, should cause
+        an error.
+    :type blacklist_keys: Union[None, List[str]]
+    :returns: the yaml file as a python document
+    :rtype: Dict[str, str]
+    :raises: yaml.YAMLError if there is a problem with the document
+    :raises: BadPolicyYamlFile if file doesn't look right or there are
+        blacklisted keys in the file.
+    """
+    blacklist_keys = blacklist_keys or []
+    # NOTE: extend() adds the always-blacklisted keys as individual strings;
+    # append() would add the whole tuple as a single element, which would
+    # never match a string key in the intersection check below.
+    blacklist_keys.extend(POLICYD_ALWAYS_BLACKLISTED_KEYS)
+    doc = yaml.safe_load(stream_or_doc)
+    if not isinstance(doc, dict):
+        raise BadPolicyYamlFile("doesn't look like a policy file?")
+    keys = set(doc.keys())
+    blacklisted_keys_present = keys.intersection(blacklist_keys)
+    if blacklisted_keys_present:
+        raise BadPolicyYamlFile("blacklisted keys {} present."
+                                .format(", ".join(blacklisted_keys_present)))
+    if not all(isinstance(k, six.string_types) for k in keys):
+        raise BadPolicyYamlFile("keys in yaml aren't all strings?")
+    # check that the dictionary looks like a mapping of str to str
+    if not all(isinstance(v, six.string_types) for v in doc.values()):
+        raise BadPolicyYamlFile("values in yaml aren't all strings?")
+    return doc
+
+
+def policyd_dir_for(service):
+    """Return the policy directory for the named service.
+
+    :param service: str
+    :returns: the policy.d override directory.
+    :rtype: os.PathLike[str]
+    """
+    return os.path.join("/", "etc", service, "policy.d")
+
+
+def clean_policyd_dir_for(service, keep_paths=None, user=None, group=None):
+    """Clean out the policyd directory except for items that should be kept.
+
+    The keep_paths, if used, should be set to the full path of the files that
+    should be kept in the policyd directory for the service. Note that the
+    service name is passed in, and then the policyd_dir_for() function is used.
+    This is so that a coding error doesn't result in a sudden deletion of the
+    charm (say).
+
+    :param service: the service name to use to construct the policy.d dir.
+    :type service: str
+    :param keep_paths: optional list of paths to not delete.
+ :type keep_paths: Union[None, List[str]] + :param user: The user to create/write files/directories as + :type user: Union[None, str] + :param group: the group to create/write files/directories as + :type group: Union[None, str] + """ + _user = service if user is None else user + _group = service if group is None else group + keep_paths = keep_paths or [] + path = policyd_dir_for(service) + hookenv.log("Cleaning path: {}".format(path), level=hookenv.DEBUG) + if not os.path.exists(path): + ch_host.mkdir(path, owner=_user, group=_group, perms=0o775) + _scanner = os.scandir if hasattr(os, 'scandir') else _fallback_scandir + for direntry in _scanner(path): + # see if the path should be kept. + if direntry.path in keep_paths: + continue + # we remove any directories; it's ours and there shouldn't be any + if direntry.is_dir(): + shutil.rmtree(direntry.path) + else: + os.remove(direntry.path) + + +def maybe_create_directory_for(path, user, group): + """For the filename 'path', ensure that the directory for that path exists. + + Note that if the directory already exists then the permissions are NOT + changed. + + :param path: the filename including the path to it. + :type path: str + :param user: the user to create the directory as + :param group: the group to create the directory as + """ + _dir, _ = os.path.split(path) + if not os.path.exists(_dir): + ch_host.mkdir(_dir, owner=user, group=group, perms=0o775) + + +@contextlib.contextmanager +def _fallback_scandir(path): + """Fallback os.scandir implementation. + + provide a fallback implementation of os.scandir if this module ever gets + used in a py2 or py34 charm. Uses os.listdir() to get the names in the path, + and then mocks the is_dir() function using os.path.isdir() to check for + directory. + + :param path: the path to list the directories for + :type path: str + :returns: Generator that provides _FBDirectory objects + :rtype: ContextManager[_FBDirectory] + """ + for f in os.listdir(path): + yield _FBDirectory(f) + + +class _FBDirectory(object): + """Mock a scandir Directory object with enough to use in + clean_policyd_dir_for + """ + + def __init__(self, path): + self.path = path + + def is_dir(self): + return os.path.isdir(self.path) + + +def path_for_policy_file(service, name): + """Return the full path for a policy.d file that will be written to the + service's policy.d directory. + + It is constructed using policyd_dir_for(), the name and the ".yaml" + extension. + + For horizon, for example, it's a bit more complicated. The name param is + actually "override_service_dir/a_name", where target_service needs to be + one the allowed horizon override services. This translation and check is + done in the _yamlfiles() function. + + :param service: the service name + :type service: str + :param name: the name for the policy override + :type name: str + :returns: the full path name for the file + :rtype: os.PathLike[str] + """ + return os.path.join(policyd_dir_for(service), name + ".yaml") + + +def _policy_success_file(): + """Return the file name for a successful drop of policy.d overrides + + :returns: the path name for the file. 
+ :rtype: str + """ + return os.path.join(hookenv.charm_dir(), POLICYD_SUCCESS_FILENAME) + + +def remove_policy_success_file(): + """Remove the file that indicates successful policyd override.""" + try: + os.remove(_policy_success_file()) + except Exception: + pass + + +def set_policy_success_file(): + """Set the file that indicates successful policyd override.""" + open(_policy_success_file(), "w").close() + + +def is_policy_success_file_set(): + """Returns True if the policy success file has been set. + + This indicates that policies are overridden and working properly. + + :returns: True if the policy file is set + :rtype: bool + """ + return os.path.isfile(_policy_success_file()) + + +def policyd_status_message_prefix(): + """Return the prefix str for the status line. + + "PO:" indicating that the policy overrides are in place, or "PO (broken):" + if the policy is supposed to be working but there is no success file. + + :returns: the prefix + :rtype: str + """ + if is_policy_success_file_set(): + return "PO:" + return "PO (broken):" + + +def process_policy_resource_file(resource_file, + service, + blacklist_paths=None, + blacklist_keys=None, + template_function=None, + preserve_topdir=False, + preprocess_filename=None, + user=None, + group=None): + """Process the resource file (which should contain at least one yaml file) + and write those files to the service's policy.d directory. + + The optional template_function is a function that accepts a python + string and has an opportunity to modify the document + prior to it being read by the yaml.safe_load() function and written to + disk. Note that this function does *not* say how the templating is done - + this is up to the charm to implement its chosen method. + + The param blacklist_paths are paths (that are in the service's policy.d + directory that should not be touched). + + The param blacklist_keys are keys that must not appear in the yaml file. + If they do, then the whole policy.d file fails. + + The yaml file extracted from the resource_file (which is a zipped file) has + its file path reconstructed. This, also, must not match any path in the + black list. + + The yaml filename can be modified in two ways. If the `preserve_topdir` + param is True, then files will be flattened to the top dir. This allows + for creating sets of files that can be grouped into a single level tree + structure. + + Secondly, if the `preprocess_filename` param is not None and callable() + then the name is passed to that function for preprocessing before being + converted to the end location. This is to allow munging of the filename + prior to being tested for a blacklist path. + + If any error occurs, then the policy.d directory is cleared, the error is + written to the log, and the status line will eventually show as failed. + + :param resource_file: The zipped file to open and extract yaml files form. + :type resource_file: Union[AnyStr, os.PathLike[AnyStr]] + :param service: the service name to construct the policy.d directory for. + :type service: str + :param blacklist_paths: optional list of paths to leave alone + :type blacklist_paths: Union[None, List[str]] + :param blacklist_keys: optional list of keys that mustn't appear in the + yaml file's + :type blacklist_keys: Union[None, List[str]] + :param template_function: Optional function that can modify the yaml + document. 
+ :type template_function: Union[None, Callable[[AnyStr], AnyStr]] + :param preserve_topdir: Keep the toplevel subdir + :type preserve_topdir: bool + :param preprocess_filename: Optional function to use to process filenames + extracted from the resource file. + :type preprocess_filename: Union[None, Callable[[AnyStr]. AnyStr]] + :param user: The user to create/write files/directories as + :type user: Union[None, str] + :param group: the group to create/write files/directories as + :type group: Union[None, str] + :returns: True if the processing was successful, False if not. + :rtype: boolean + """ + hookenv.log("Running process_policy_resource_file", level=hookenv.DEBUG) + blacklist_paths = blacklist_paths or [] + completed = False + _preprocess = None + if preprocess_filename is not None and callable(preprocess_filename): + _preprocess = preprocess_filename + _user = service if user is None else user + _group = service if group is None else group + try: + with open_and_filter_yaml_files( + resource_file, preserve_topdir) as (zfp, gen): + # first clear out the policy.d directory and clear success + remove_policy_success_file() + clean_policyd_dir_for(service, + blacklist_paths, + user=_user, + group=_group) + for name, ext, filename, zipinfo in gen: + # See if the name should be preprocessed. + if _preprocess is not None: + name = _preprocess(name) + # construct a name for the output file. + yaml_filename = path_for_policy_file(service, name) + if yaml_filename in blacklist_paths: + raise BadPolicyZipFile("policy.d name {} is blacklisted" + .format(yaml_filename)) + with zfp.open(zipinfo) as fp: + doc = fp.read() + # if template_function is not None, then offer the document + # to the template function + if ext in POLICYD_TEMPLATE_EXTS: + if (template_function is None or not + callable(template_function)): + raise BadPolicyZipFile( + "Template {} but no template_function is " + "available".format(filename)) + doc = template_function(doc) + yaml_doc = read_and_validate_yaml(doc, blacklist_keys) + # we may have to create the directory + maybe_create_directory_for(yaml_filename, _user, _group) + ch_host.write_file(yaml_filename, + yaml.dump(yaml_doc).encode('utf-8'), + _user, + _group) + # Every thing worked, so we mark up a success. + completed = True + except (BadZipFile, BadPolicyZipFile, BadPolicyYamlFile) as e: + hookenv.log("Processing {} failed: {}".format(resource_file, str(e)), + level=POLICYD_LOG_LEVEL_DEFAULT) + except IOError as e: + # technically this shouldn't happen; it would be a programming error as + # the filename comes from Juju and thus, should exist. + hookenv.log( + "File {} failed with IOError. 
This really shouldn't happen" + " -- error: {}".format(resource_file, str(e)), + level=POLICYD_LOG_LEVEL_DEFAULT) + except Exception as e: + import traceback + hookenv.log("General Exception({}) during policyd processing" + .format(str(e)), + level=POLICYD_LOG_LEVEL_DEFAULT) + hookenv.log(traceback.format_exc()) + finally: + if not completed: + hookenv.log("Processing {} failed: cleaning policy.d directory" + .format(resource_file), + level=POLICYD_LOG_LEVEL_DEFAULT) + clean_policyd_dir_for(service, + blacklist_paths, + user=_user, + group=_group) + else: + # touch the success filename + hookenv.log("policy.d overrides installed.", + level=POLICYD_LOG_LEVEL_DEFAULT) + set_policy_success_file() + return completed diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/ssh_migrations.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/ssh_migrations.py new file mode 100644 index 0000000000000000000000000000000000000000..96b9f71d42d1c81539f78b8e1c4761f81d84c304 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/ssh_migrations.py @@ -0,0 +1,412 @@ +# Copyright 2018 Canonical Ltd +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import os +import subprocess + +from charmhelpers.core.hookenv import ( + ERROR, + log, + relation_get, +) +from charmhelpers.contrib.network.ip import ( + is_ipv6, + ns_query, +) +from charmhelpers.contrib.openstack.utils import ( + get_hostname, + get_host_ip, + is_ip, +) + +NOVA_SSH_DIR = '/etc/nova/compute_ssh/' + + +def ssh_directory_for_unit(application_name, user=None): + """Return the directory used to store ssh assets for the application. + + :param application_name: Name of application eg nova-compute-something + :type application_name: str + :param user: The user that the ssh asserts are for. + :type user: str + :returns: Fully qualified directory path. + :rtype: str + """ + if user: + application_name = "{}_{}".format(application_name, user) + _dir = os.path.join(NOVA_SSH_DIR, application_name) + for d in [NOVA_SSH_DIR, _dir]: + if not os.path.isdir(d): + os.mkdir(d) + for f in ['authorized_keys', 'known_hosts']: + f = os.path.join(_dir, f) + if not os.path.isfile(f): + open(f, 'w').close() + return _dir + + +def known_hosts(application_name, user=None): + """Return the known hosts file for the application. + + :param application_name: Name of application eg nova-compute-something + :type application_name: str + :param user: The user that the ssh asserts are for. + :type user: str + :returns: Fully qualified path to file. + :rtype: str + """ + return os.path.join( + ssh_directory_for_unit(application_name, user), + 'known_hosts') + + +def authorized_keys(application_name, user=None): + """Return the authorized keys file for the application. + + :param application_name: Name of application eg nova-compute-something + :type application_name: str + :param user: The user that the ssh asserts are for. 
+ :type user: str + :returns: Fully qualified path to file. + :rtype: str + """ + return os.path.join( + ssh_directory_for_unit(application_name, user), + 'authorized_keys') + + +def ssh_known_host_key(host, application_name, user=None): + """Return the first entry in known_hosts for host. + + :param host: hostname to lookup in file. + :type host: str + :param application_name: Name of application eg nova-compute-something + :type application_name: str + :param user: The user that the ssh asserts are for. + :type user: str + :returns: Host key + :rtype: str or None + """ + cmd = [ + 'ssh-keygen', + '-f', known_hosts(application_name, user), + '-H', + '-F', + host] + try: + # The first line of output is like '# Host xx found: line 1 type RSA', + # which should be excluded. + output = subprocess.check_output(cmd) + except subprocess.CalledProcessError as e: + # RC of 1 seems to be legitimate for most ssh-keygen -F calls. + if e.returncode == 1: + output = e.output + else: + raise + output = output.strip() + + if output: + # Bug #1500589 cmd has 0 rc on precise if entry not present + lines = output.split('\n') + if len(lines) >= 1: + return lines[0] + + return None + + +def remove_known_host(host, application_name, user=None): + """Remove the entry in known_hosts for host. + + :param host: hostname to lookup in file. + :type host: str + :param application_name: Name of application eg nova-compute-something + :type application_name: str + :param user: The user that the ssh asserts are for. + :type user: str + """ + log('Removing SSH known host entry for compute host at %s' % host) + cmd = ['ssh-keygen', '-f', known_hosts(application_name, user), '-R', host] + subprocess.check_call(cmd) + + +def is_same_key(key_1, key_2): + """Extract the key from two host entries and compare them. + + :param key_1: Host key + :type key_1: str + :param key_2: Host key + :type key_2: str + """ + # The key format get will be like '|1|2rUumCavEXWVaVyB5uMl6m85pZo=|Cp' + # 'EL6l7VTY37T/fg/ihhNb/GPgs= ssh-rsa AAAAB', we only need to compare + # the part start with 'ssh-rsa' followed with '= ', because the hash + # value in the beginning will change each time. + k_1 = key_1.split('= ')[1] + k_2 = key_2.split('= ')[1] + return k_1 == k_2 + + +def add_known_host(host, application_name, user=None): + """Add the given host key to the known hosts file. + + :param host: host name + :type host: str + :param application_name: Name of application eg nova-compute-something + :type application_name: str + :param user: The user that the ssh asserts are for. + :type user: str + """ + cmd = ['ssh-keyscan', '-H', '-t', 'rsa', host] + try: + remote_key = subprocess.check_output(cmd).strip() + except Exception as e: + log('Could not obtain SSH host key from %s' % host, level=ERROR) + raise e + + current_key = ssh_known_host_key(host, application_name, user) + if current_key and remote_key: + if is_same_key(remote_key, current_key): + log('Known host key for compute host %s up to date.' % host) + return + else: + remove_known_host(host, application_name, user) + + log('Adding SSH host key to known hosts for compute node at %s.' % host) + with open(known_hosts(application_name, user), 'a') as out: + out.write("{}\n".format(remote_key)) + + +def ssh_authorized_key_exists(public_key, application_name, user=None): + """Check if given key is in the authorized_key file. + + :param public_key: Public key. 
+    :type public_key: str
+    :param application_name: Name of application eg nova-compute-something
+    :type application_name: str
+    :param user: The user that the ssh asserts are for.
+    :type user: str
+    :returns: Whether given key is in the authorized_key file.
+    :rtype: boolean
+    """
+    with open(authorized_keys(application_name, user)) as keys:
+        return ('%s' % public_key) in keys.read()
+
+
+def add_authorized_key(public_key, application_name, user=None):
+    """Add given key to the authorized_key file.
+
+    :param public_key: Public key.
+    :type public_key: str
+    :param application_name: Name of application eg nova-compute-something
+    :type application_name: str
+    :param user: The user that the ssh asserts are for.
+    :type user: str
+    """
+    with open(authorized_keys(application_name, user), 'a') as keys:
+        keys.write("{}\n".format(public_key))
+
+
+def ssh_compute_add_host_and_key(public_key, hostname, private_address,
+                                 application_name, user=None):
+    """Add a compute node's ssh details to the local cache.
+
+    Collect various hostname variations and add the corresponding host keys to
+    the local known hosts file. Finally, add the supplied public key to the
+    authorized_key file.
+
+    :param public_key: Public key.
+    :type public_key: str
+    :param hostname: Hostname to collect host keys from.
+    :type hostname: str
+    :param private_address: Corresponding private address for hostname
+    :type private_address: str
+    :param application_name: Name of application eg nova-compute-something
+    :type application_name: str
+    :param user: The user that the ssh asserts are for.
+    :type user: str
+    """
+    # If remote compute node hands us a hostname, ensure we have a
+    # known hosts entry for its IP, hostname and FQDN.
+    hosts = [private_address]
+
+    if not is_ipv6(private_address):
+        if hostname:
+            hosts.append(hostname)
+
+        if is_ip(private_address):
+            hn = get_hostname(private_address)
+            if hn:
+                hosts.append(hn)
+                short = hn.split('.')[0]
+                if ns_query(short):
+                    hosts.append(short)
+        else:
+            hosts.append(get_host_ip(private_address))
+            short = private_address.split('.')[0]
+            if ns_query(short):
+                hosts.append(short)
+
+    for host in list(set(hosts)):
+        add_known_host(host, application_name, user)
+
+    if not ssh_authorized_key_exists(public_key, application_name, user):
+        log('Saving SSH authorized key for compute host at %s.' %
+            private_address)
+        add_authorized_key(public_key, application_name, user)
+
+
+def ssh_compute_add(public_key, application_name, rid=None, unit=None,
+                    user=None):
+    """Add a compute node's ssh details to the local cache.
+
+    Collect various hostname variations and add the corresponding host keys to
+    the local known hosts file. Finally, add the supplied public key to the
+    authorized_key file.
+
+    :param public_key: Public key.
+    :type public_key: str
+    :param application_name: Name of application eg nova-compute-something
+    :type application_name: str
+    :param rid: Relation id of the relation between this charm and the app. If
+                none is supplied it is assumed it's the relation relating to
+                the current hook context.
+    :type rid: str
+    :param unit: Unit to add ssh asserts for. If none is supplied it is
+                 assumed it's the unit relating to the current hook context.
+    :type unit: str
+    :param user: The user that the ssh asserts are for.
+    :type user: str
+    """
+    relation_data = relation_get(rid=rid, unit=unit)
+    ssh_compute_add_host_and_key(
+        public_key,
+        relation_data.get('hostname'),
+        relation_data.get('private-address'),
+        application_name,
+        user=user)
+
+
+def ssh_known_hosts_lines(application_name, user=None):
+    """Return contents of known_hosts file for given application.
+
+    :param application_name: Name of application eg nova-compute-something
+    :type application_name: str
+    :param user: The user that the ssh asserts are for.
+    :type user: str
+    """
+    known_hosts_list = []
+    with open(known_hosts(application_name, user)) as hosts:
+        for hosts_line in hosts:
+            if hosts_line.rstrip():
+                known_hosts_list.append(hosts_line.rstrip())
+    return(known_hosts_list)
+
+
+def ssh_authorized_keys_lines(application_name, user=None):
+    """Return contents of authorized_keys file for given application.
+
+    :param application_name: Name of application eg nova-compute-something
+    :type application_name: str
+    :param user: The user that the ssh asserts are for.
+    :type user: str
+    """
+    authorized_keys_list = []
+
+    with open(authorized_keys(application_name, user)) as keys:
+        for authkey_line in keys:
+            if authkey_line.rstrip():
+                authorized_keys_list.append(authkey_line.rstrip())
+    return(authorized_keys_list)
+
+
+def ssh_compute_remove(public_key, application_name, user=None):
+    """Remove given public key from authorized_keys file.
+
+    :param public_key: Public key.
+    :type public_key: str
+    :param application_name: Name of application eg nova-compute-something
+    :type application_name: str
+    :param user: The user that the ssh asserts are for.
+    :type user: str
+    """
+    if not (os.path.isfile(authorized_keys(application_name, user)) or
+            os.path.isfile(known_hosts(application_name, user))):
+        return
+
+    # read the keys for the same user whose file is rewritten below
+    keys = ssh_authorized_keys_lines(application_name, user=user)
+    keys = [k.strip() for k in keys]
+
+    if public_key not in keys:
+        return
+
+    # rebuild the list rather than mutating it while iterating over it
+    keys = [key for key in keys if key != public_key]
+
+    with open(authorized_keys(application_name, user), 'w') as _keys:
+        keys = '\n'.join(keys)
+        if not keys.endswith('\n'):
+            keys += '\n'
+        _keys.write(keys)
+
+
+def get_ssh_settings(application_name, user=None):
+    """Retrieve the known host entries and public keys for application
+
+    Retrieve the known host entries and public keys for application for all
+    units of the given application related to this application for the
+    app + user combination.
+
+    :param application_name: Name of application eg nova-compute-something
+    :type application_name: str
+    :param user: The user that the ssh asserts are for.
+    :type user: str
+    :returns: Public keys + host keys for all units for app + user combination.
+ :rtype: dict + """ + settings = {} + keys = {} + prefix = '' + if user: + prefix = '{}_'.format(user) + + for i, line in enumerate(ssh_known_hosts_lines( + application_name=application_name, user=user)): + settings['{}known_hosts_{}'.format(prefix, i)] = line + if settings: + settings['{}known_hosts_max_index'.format(prefix)] = len( + settings.keys()) + + for i, line in enumerate(ssh_authorized_keys_lines( + application_name=application_name, user=user)): + keys['{}authorized_keys_{}'.format(prefix, i)] = line + if keys: + keys['{}authorized_keys_max_index'.format(prefix)] = len(keys.keys()) + settings.update(keys) + return settings + + +def get_all_user_ssh_settings(application_name): + """Retrieve the known host entries and public keys for application + + Retrieve the known host entries and public keys for application for all + units of the given application related to this application for root user + and nova user. + + :param application_name: Name of application eg nova-compute-something + :type application_name: str + :returns: Public keys + host keys for all units for app + user combination. + :rtype: dict + """ + settings = get_ssh_settings(application_name) + settings.update(get_ssh_settings(application_name, user='nova')) + return settings diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/templates/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/templates/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..9df5f746fbdf5491c640a77df907b71817cbc5af --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/templates/__init__.py @@ -0,0 +1,16 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# dummy __init__.py to fool syncer into thinking this is a syncable python +# module diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/templates/ceph.conf b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/templates/ceph.conf new file mode 100644 index 0000000000000000000000000000000000000000..a11ce8ab85654a4d838e448f36fe3698b59f9531 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/templates/ceph.conf @@ -0,0 +1,24 @@ +############################################################################### +# [ WARNING ] +# ceph configuration file maintained by Juju +# local changes may be overwritten. 
+############################################################################### +[global] +{% if auth -%} +auth_supported = {{ auth }} +keyring = /etc/ceph/$cluster.$name.keyring +mon host = {{ mon_hosts }} +{% endif -%} +log to syslog = {{ use_syslog }} +err to syslog = {{ use_syslog }} +clog to syslog = {{ use_syslog }} +{% if rbd_features %} +rbd default features = {{ rbd_features }} +{% endif %} + +[client] +{% if rbd_client_cache_settings -%} +{% for key, value in rbd_client_cache_settings.items() -%} +{{ key }} = {{ value }} +{% endfor -%} +{%- endif %} diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/templates/git.upstart b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/templates/git.upstart new file mode 100644 index 0000000000000000000000000000000000000000..4bed404bc01087c4dec6a44a56d17ed122e1d1e3 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/templates/git.upstart @@ -0,0 +1,17 @@ +description "{{ service_description }}" +author "Juju {{ service_name }} Charm " + +start on runlevel [2345] +stop on runlevel [!2345] + +respawn + +exec start-stop-daemon --start --chuid {{ user_name }} \ + --chdir {{ start_dir }} --name {{ process_name }} \ + --exec {{ executable_name }} -- \ + {% for config_file in config_files -%} + --config-file={{ config_file }} \ + {% endfor -%} + {% if log_file -%} + --log-file={{ log_file }} + {% endif -%} diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/templates/haproxy.cfg b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/templates/haproxy.cfg new file mode 100644 index 0000000000000000000000000000000000000000..d36af2aa86f91b2ee1249594fffd106843dd0676 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/templates/haproxy.cfg @@ -0,0 +1,77 @@ +global + log /var/lib/haproxy/dev/log local0 + log /var/lib/haproxy/dev/log local1 notice + maxconn 20000 + user haproxy + group haproxy + spread-checks 0 + stats socket /var/run/haproxy/admin.sock mode 600 level admin + stats timeout 2m + +defaults + log global + mode tcp + option tcplog + option dontlognull + retries 3 +{%- if haproxy_queue_timeout %} + timeout queue {{ haproxy_queue_timeout }} +{%- else %} + timeout queue 9000 +{%- endif %} +{%- if haproxy_connect_timeout %} + timeout connect {{ haproxy_connect_timeout }} +{%- else %} + timeout connect 9000 +{%- endif %} +{%- if haproxy_client_timeout %} + timeout client {{ haproxy_client_timeout }} +{%- else %} + timeout client 90000 +{%- endif %} +{%- if haproxy_server_timeout %} + timeout server {{ haproxy_server_timeout }} +{%- else %} + timeout server 90000 +{%- endif %} + +listen stats + bind {{ local_host }}:{{ stat_port }} + mode http + stats enable + stats hide-version + stats realm Haproxy\ Statistics + stats uri / + stats auth admin:{{ stat_password }} + +{% if frontends -%} +{% for service, ports in service_ports.items() -%} +frontend tcp-in_{{ service }} + bind *:{{ ports[0] }} + {% if ipv6_enabled -%} + bind :::{{ ports[0] }} + {% endif -%} + {% for frontend in frontends -%} + acl net_{{ frontend }} dst {{ frontends[frontend]['network'] }} + use_backend {{ service }}_{{ frontend }} if net_{{ frontend }} + {% endfor -%} + default_backend {{ service }}_{{ default_backend }} + +{% for frontend in frontends -%} +backend {{ service }}_{{ frontend }} + balance 
leastconn + {% if backend_options -%} + {% if backend_options[service] -%} + {% for option in backend_options[service] -%} + {% for key, value in option.items() -%} + {{ key }} {{ value }} + {% endfor -%} + {% endfor -%} + {% endif -%} + {% endif -%} + {% for unit, address in frontends[frontend]['backends'].items() -%} + server {{ unit }} {{ address }}:{{ ports[1] }} check + {% endfor %} +{% endfor -%} +{% endfor -%} +{% endif -%} diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/templates/logrotate b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/templates/logrotate new file mode 100644 index 0000000000000000000000000000000000000000..b2900d09a4ec2d04152ed7ce25bdc7346c349675 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/templates/logrotate @@ -0,0 +1,9 @@ +/var/log/{{ logrotate_logs_location }}/*.log { + {{ logrotate_interval }} + {{ logrotate_count }} + compress + delaycompress + missingok + notifempty + copytruncate +} diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/templates/memcached.conf b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/templates/memcached.conf new file mode 100644 index 0000000000000000000000000000000000000000..26cb037c72beccdb1a4f9269844abdbac2406369 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/templates/memcached.conf @@ -0,0 +1,53 @@ +############################################################################### +# [ WARNING ] +# memcached configuration file maintained by Juju +# local changes may be overwritten. +############################################################################### + +# memcached default config file +# 2003 - Jay Bonci +# This configuration file is read by the start-memcached script provided as +# part of the Debian GNU/Linux distribution. + +# Run memcached as a daemon. This command is implied, and is not needed for the +# daemon to run. See the README.Debian that comes with this package for more +# information. +-d + +# Log memcached's output to /var/log/memcached +logfile /var/log/memcached.log + +# Be verbose +# -v + +# Be even more verbose (print client commands as well) +# -vv + +# Start with a cap of 64 megs of memory. It's reasonable, and the daemon default +# Note that the daemon will grow to this size, but does not start out holding this much +# memory +-m 64 + +# Default connection port is 11211 +-p {{ memcache_port }} + +# Run the daemon as root. The start-memcached will default to running as root if no +# -u command is present in this config file +-u memcache + +# Specify which IP address to listen on. The default is to listen on all IP addresses +# This parameter is one of the only security measures that memcached has, so make sure +# it's listening on a firewalled interface. +-l {{ memcache_server }} + +# Limit the number of simultaneous incoming connections. The daemon default is 1024 +# -c 1024 + +# Lock down all paged memory. 
Consult with the README and homepage before you do this
+# -k
+
+# Return error when memory is exhausted (rather than removing items)
+# -M
+
+# Maximize core file limit
+# -r
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/templates/openstack_https_frontend b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/templates/openstack_https_frontend
new file mode 100644
index 0000000000000000000000000000000000000000..f614b3fa71928b615a1f058c6928d3c98bed9575
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/templates/openstack_https_frontend
@@ -0,0 +1,29 @@
+{% if endpoints -%}
+{% for ext_port in ext_ports -%}
+Listen {{ ext_port }}
+{% endfor -%}
+{% for address, endpoint, ext, int in endpoints -%}
+<VirtualHost {{ address }}:{{ ext }}>
+    ServerName {{ endpoint }}
+    SSLEngine on
+    SSLProtocol +TLSv1 +TLSv1.1 +TLSv1.2
+    SSLCipherSuite HIGH:!RC4:!MD5:!aNULL:!eNULL:!EXP:!LOW:!MEDIUM
+    SSLCertificateFile /etc/apache2/ssl/{{ namespace }}/cert_{{ endpoint }}
+    # See LP 1484489 - this is to support <= 2.4.7 and >= 2.4.8
+    SSLCertificateChainFile /etc/apache2/ssl/{{ namespace }}/cert_{{ endpoint }}
+    SSLCertificateKeyFile /etc/apache2/ssl/{{ namespace }}/key_{{ endpoint }}
+    ProxyPass / http://localhost:{{ int }}/
+    ProxyPassReverse / http://localhost:{{ int }}/
+    ProxyPreserveHost on
+    RequestHeader set X-Forwarded-Proto "https"
+</VirtualHost>
+{% endfor -%}
+<Proxy *>
+    Order deny,allow
+    Allow from all
+</Proxy>
+<Location />
+    Order allow,deny
+    Allow from all
+</Location>
+{% endif -%}
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/templates/openstack_https_frontend.conf b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/templates/openstack_https_frontend.conf
new file mode 100644
index 0000000000000000000000000000000000000000..f614b3fa71928b615a1f058c6928d3c98bed9575
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/templates/openstack_https_frontend.conf
@@ -0,0 +1,29 @@
+{% if endpoints -%}
+{% for ext_port in ext_ports -%}
+Listen {{ ext_port }}
+{% endfor -%}
+{% for address, endpoint, ext, int in endpoints -%}
+<VirtualHost {{ address }}:{{ ext }}>
+    ServerName {{ endpoint }}
+    SSLEngine on
+    SSLProtocol +TLSv1 +TLSv1.1 +TLSv1.2
+    SSLCipherSuite HIGH:!RC4:!MD5:!aNULL:!eNULL:!EXP:!LOW:!MEDIUM
+    SSLCertificateFile /etc/apache2/ssl/{{ namespace }}/cert_{{ endpoint }}
+    # See LP 1484489 - this is to support <= 2.4.7 and >= 2.4.8
+    SSLCertificateChainFile /etc/apache2/ssl/{{ namespace }}/cert_{{ endpoint }}
+    SSLCertificateKeyFile /etc/apache2/ssl/{{ namespace }}/key_{{ endpoint }}
+    ProxyPass / http://localhost:{{ int }}/
+    ProxyPassReverse / http://localhost:{{ int }}/
+    ProxyPreserveHost on
+    RequestHeader set X-Forwarded-Proto "https"
+</VirtualHost>
+{% endfor -%}
+<Proxy *>
+    Order deny,allow
+    Allow from all
+</Proxy>
+<Location />
+    Order allow,deny
+    Allow from all
+</Location>
+{% endif -%}
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/templates/section-keystone-authtoken b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/templates/section-keystone-authtoken
new file mode 100644
index 0000000000000000000000000000000000000000..5dcebe7c863728b040337616b492296f536b3ef3
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/templates/section-keystone-authtoken
@@ -0,0 +1,12 @@
+{% if auth_host -%}
+[keystone_authtoken]
+auth_uri = {{
service_protocol }}://{{ service_host }}:{{ service_port }} +auth_url = {{ auth_protocol }}://{{ auth_host }}:{{ auth_port }} +auth_plugin = password +project_domain_id = default +user_domain_id = default +project_name = {{ admin_tenant_name }} +username = {{ admin_user }} +password = {{ admin_password }} +signing_dir = {{ signing_dir }} +{% endif -%} diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/templates/section-keystone-authtoken-legacy b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/templates/section-keystone-authtoken-legacy new file mode 100644 index 0000000000000000000000000000000000000000..9356b2be4e7dabe089b4d7d39793146bb92f3f48 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/templates/section-keystone-authtoken-legacy @@ -0,0 +1,10 @@ +{% if auth_host -%} +[keystone_authtoken] +# Juno specific config (Bug #1557223) +auth_uri = {{ service_protocol }}://{{ service_host }}:{{ service_port }}/{{ service_admin_prefix }} +identity_uri = {{ auth_protocol }}://{{ auth_host }}:{{ auth_port }} +admin_tenant_name = {{ admin_tenant_name }} +admin_user = {{ admin_user }} +admin_password = {{ admin_password }} +signing_dir = {{ signing_dir }} +{% endif -%} diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/templates/section-keystone-authtoken-mitaka b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/templates/section-keystone-authtoken-mitaka new file mode 100644 index 0000000000000000000000000000000000000000..c281868b16a885cd01a234984974af0349a5d242 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/templates/section-keystone-authtoken-mitaka @@ -0,0 +1,22 @@ +{% if auth_host -%} +[keystone_authtoken] +auth_type = password +{% if api_version == "3" -%} +auth_uri = {{ service_protocol }}://{{ service_host }}:{{ service_port }}/v3 +auth_url = {{ auth_protocol }}://{{ auth_host }}:{{ auth_port }}/v3 +project_domain_name = {{ admin_domain_name }} +user_domain_name = {{ admin_domain_name }} +{% else -%} +auth_uri = {{ service_protocol }}://{{ service_host }}:{{ service_port }} +auth_url = {{ auth_protocol }}://{{ auth_host }}:{{ auth_port }} +project_domain_name = default +user_domain_name = default +{% endif -%} +project_name = {{ admin_tenant_name }} +username = {{ admin_user }} +password = {{ admin_password }} +signing_dir = {{ signing_dir }} +{% if use_memcache == true %} +memcached_servers = {{ memcache_url }} +{% endif -%} +{% endif -%} diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/templates/section-keystone-authtoken-v3only b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/templates/section-keystone-authtoken-v3only new file mode 100644 index 0000000000000000000000000000000000000000..d26a91fe1f00e2b12d094be727a53eddc87b829d --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/templates/section-keystone-authtoken-v3only @@ -0,0 +1,9 @@ +{% if auth_host -%} +[keystone_authtoken] +{% for option_name, option_value in keystone_authtoken.items() -%} +{{ option_name }} = {{ option_value }} +{% endfor -%} +{% if use_memcache == true %} +memcached_servers = {{ memcache_url }} +{% endif -%} +{% endif -%} diff --git 
a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/templates/section-oslo-cache b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/templates/section-oslo-cache new file mode 100644 index 0000000000000000000000000000000000000000..e056a32aafe32413cefb40b9414d064f2b0c8fd2 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/templates/section-oslo-cache @@ -0,0 +1,6 @@ +[cache] +{% if memcache_url %} +enabled = true +backend = oslo_cache.memcache_pool +memcache_servers = {{ memcache_url }} +{% endif %} diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/templates/section-oslo-messaging-rabbit b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/templates/section-oslo-messaging-rabbit new file mode 100644 index 0000000000000000000000000000000000000000..bed2216aba7217022ded17dec4cdb0871f513b40 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/templates/section-oslo-messaging-rabbit @@ -0,0 +1,10 @@ +[oslo_messaging_rabbit] +{% if rabbitmq_ha_queues -%} +rabbit_ha_queues = True +{% endif -%} +{% if rabbit_ssl_port -%} +ssl = True +{% endif -%} +{% if rabbit_ssl_ca -%} +ssl_ca_file = {{ rabbit_ssl_ca }} +{% endif -%} diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/templates/section-oslo-messaging-rabbit-ocata b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/templates/section-oslo-messaging-rabbit-ocata new file mode 100644 index 0000000000000000000000000000000000000000..365f43757719b2de3c601ea6a9752dac8a8b3545 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/templates/section-oslo-messaging-rabbit-ocata @@ -0,0 +1,10 @@ +[oslo_messaging_rabbit] +{% if rabbitmq_ha_queues -%} +rabbit_ha_queues = True +{% endif -%} +{% if rabbit_ssl_port -%} +rabbit_use_ssl = True +{% endif -%} +{% if rabbit_ssl_ca -%} +ssl_ca_file = {{ rabbit_ssl_ca }} +{% endif -%} diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/templates/section-oslo-middleware b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/templates/section-oslo-middleware new file mode 100644 index 0000000000000000000000000000000000000000..dd73230a42aa037582989979c1bc8132d30b9b38 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/templates/section-oslo-middleware @@ -0,0 +1,5 @@ +[oslo_middleware] + +# Bug #1758675 +enable_proxy_headers_parsing = true + diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/templates/section-oslo-notifications b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/templates/section-oslo-notifications new file mode 100644 index 0000000000000000000000000000000000000000..71c7eb068eace94e8986d7a868bded236f75128c --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/templates/section-oslo-notifications @@ -0,0 +1,15 @@ +{% if transport_url -%} +[oslo_messaging_notifications] +driver = {{ oslo_messaging_driver }} +transport_url = {{ transport_url }} +{% if send_notifications_to_logs %} +driver = log +{% endif %} +{% if notification_topics -%} 
+topics = {{ notification_topics }} +{% endif -%} +{% if notification_format -%} +[notifications] +notification_format = {{ notification_format }} +{% endif -%} +{% endif -%} diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/templates/section-placement b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/templates/section-placement new file mode 100644 index 0000000000000000000000000000000000000000..97724bdb5af6e0352d0b920600d7cbbc8318fa7c --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/templates/section-placement @@ -0,0 +1,19 @@ +[placement] +{% if auth_host -%} +auth_url = {{ auth_protocol }}://{{ auth_host }}:{{ auth_port }} +auth_type = password +{% if api_version == "3" -%} +project_domain_name = {{ admin_domain_name }} +user_domain_name = {{ admin_domain_name }} +{% else -%} +project_domain_name = default +user_domain_name = default +{% endif -%} +project_name = {{ admin_tenant_name }} +username = {{ admin_user }} +password = {{ admin_password }} +{% endif -%} +{% if region -%} +os_region_name = {{ region }} +{% endif -%} +randomize_allocation_candidates = true diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/templates/section-rabbitmq-oslo b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/templates/section-rabbitmq-oslo new file mode 100644 index 0000000000000000000000000000000000000000..b444c9c99bd179eaa0be31af462576e4ca9c45b5 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/templates/section-rabbitmq-oslo @@ -0,0 +1,22 @@ +{% if rabbitmq_host or rabbitmq_hosts -%} +[oslo_messaging_rabbit] +rabbit_userid = {{ rabbitmq_user }} +rabbit_virtual_host = {{ rabbitmq_virtual_host }} +rabbit_password = {{ rabbitmq_password }} +{% if rabbitmq_hosts -%} +rabbit_hosts = {{ rabbitmq_hosts }} +{% if rabbitmq_ha_queues -%} +rabbit_ha_queues = True +rabbit_durable_queues = False +{% endif -%} +{% else -%} +rabbit_host = {{ rabbitmq_host }} +{% endif -%} +{% if rabbit_ssl_port -%} +rabbit_use_ssl = True +rabbit_port = {{ rabbit_ssl_port }} +{% if rabbit_ssl_ca -%} +kombu_ssl_ca_certs = {{ rabbit_ssl_ca }} +{% endif -%} +{% endif -%} +{% endif -%} diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/templates/section-zeromq b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/templates/section-zeromq new file mode 100644 index 0000000000000000000000000000000000000000..95f1a76ce87f9babdcba58da79c68d34c2776eea --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/templates/section-zeromq @@ -0,0 +1,14 @@ +{% if zmq_host -%} +# ZeroMQ configuration (restart-nonce: {{ zmq_nonce }}) +rpc_backend = zmq +rpc_zmq_host = {{ zmq_host }} +{% if zmq_redis_address -%} +rpc_zmq_matchmaker = redis +matchmaker_heartbeat_freq = 15 +matchmaker_heartbeat_ttl = 30 +[matchmaker_redis] +host = {{ zmq_redis_address }} +{% else -%} +rpc_zmq_matchmaker = ring +{% endif -%} +{% endif -%} diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/templates/vendor_data.json b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/templates/vendor_data.json new file mode 100644 index 
0000000000000000000000000000000000000000..904f612a7f74490d4508920400d18d493d056477
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/templates/vendor_data.json
@@ -0,0 +1 @@
+{{ vendor_data_json }}
\ No newline at end of file
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/templates/wsgi-openstack-api.conf b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/templates/wsgi-openstack-api.conf
new file mode 100644
index 0000000000000000000000000000000000000000..23b62a385283e6b8f1a6af0fcebccac747031b26
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/templates/wsgi-openstack-api.conf
@@ -0,0 +1,91 @@
+# Configuration file maintained by Juju. Local changes may be overwritten.
+
+{% if port -%}
+Listen {{ port }}
+{% endif -%}
+
+{% if admin_port -%}
+Listen {{ admin_port }}
+{% endif -%}
+
+{% if public_port -%}
+Listen {{ public_port }}
+{% endif -%}
+
+{% if port -%}
+<VirtualHost *:{{ port }}>
+    WSGIDaemonProcess {{ service_name }} processes={{ processes }} threads={{ threads }} user={{ user }} group={{ group }} \
+                      display-name=%{GROUP}
+    WSGIProcessGroup {{ service_name }}
+    WSGIScriptAlias / {{ script }}
+    WSGIApplicationGroup %{GLOBAL}
+    WSGIPassAuthorization On
+    <IfVersion >= 2.4>
+      ErrorLogFormat "%{cu}t %M"
+    </IfVersion>
+    ErrorLog /var/log/apache2/{{ service_name }}_error.log
+    CustomLog /var/log/apache2/{{ service_name }}_access.log combined
+
+    <Directory /usr/bin>
+        <IfVersion >= 2.4>
+            Require all granted
+        </IfVersion>
+        <IfVersion < 2.4>
+            Order allow,deny
+            Allow from all
+        </IfVersion>
+    </Directory>
+</VirtualHost>
+{% endif -%}
+
+{% if admin_port -%}
+<VirtualHost *:{{ admin_port }}>
+    WSGIDaemonProcess {{ service_name }}-admin processes={{ admin_processes }} threads={{ threads }} user={{ user }} group={{ group }} \
+                      display-name=%{GROUP}
+    WSGIProcessGroup {{ service_name }}-admin
+    WSGIScriptAlias / {{ admin_script }}
+    WSGIApplicationGroup %{GLOBAL}
+    WSGIPassAuthorization On
+    <IfVersion >= 2.4>
+      ErrorLogFormat "%{cu}t %M"
+    </IfVersion>
+    ErrorLog /var/log/apache2/{{ service_name }}_error.log
+    CustomLog /var/log/apache2/{{ service_name }}_access.log combined
+
+    <Directory /usr/bin>
+        <IfVersion >= 2.4>
+            Require all granted
+        </IfVersion>
+        <IfVersion < 2.4>
+            Order allow,deny
+            Allow from all
+        </IfVersion>
+    </Directory>
+</VirtualHost>
+{% endif -%}
+
+{% if public_port -%}
+<VirtualHost *:{{ public_port }}>
+    WSGIDaemonProcess {{ service_name }}-public processes={{ public_processes }} threads={{ threads }} user={{ user }} group={{ group }} \
+                      display-name=%{GROUP}
+    WSGIProcessGroup {{ service_name }}-public
+    WSGIScriptAlias / {{ public_script }}
+    WSGIApplicationGroup %{GLOBAL}
+    WSGIPassAuthorization On
+    <IfVersion >= 2.4>
+      ErrorLogFormat "%{cu}t %M"
+    </IfVersion>
+    ErrorLog /var/log/apache2/{{ service_name }}_error.log
+    CustomLog /var/log/apache2/{{ service_name }}_access.log combined
+
+    <Directory /usr/bin>
+        <IfVersion >= 2.4>
+            Require all granted
+        </IfVersion>
+        <IfVersion < 2.4>
+            Order allow,deny
+            Allow from all
+        </IfVersion>
+    </Directory>
+</VirtualHost>
+{% endif -%}
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/templates/wsgi-openstack-metadata.conf b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/templates/wsgi-openstack-metadata.conf
new file mode 100644
index 0000000000000000000000000000000000000000..23b62a385283e6b8f1a6af0fcebccac747031b26
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/templates/wsgi-openstack-metadata.conf
@@ -0,0 +1,91 @@
+# Configuration file maintained by Juju. Local changes may be overwritten.
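+#
+# A hypothetical rendering of the first block below, assuming port=8775,
+# service_name='nova-metadata', processes=3 and threads=10 (all values
+# illustrative; they are supplied by the charm's context generators):
+#
+#   Listen 8775
+#   <VirtualHost *:8775>
+#       WSGIDaemonProcess nova-metadata processes=3 threads=10 ...
+#       WSGIProcessGroup nova-metadata
+#       ...
+#   </VirtualHost>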
+
+{% if port -%}
+Listen {{ port }}
+{% endif -%}
+
+{% if admin_port -%}
+Listen {{ admin_port }}
+{% endif -%}
+
+{% if public_port -%}
+Listen {{ public_port }}
+{% endif -%}
+
+{% if port -%}
+<VirtualHost *:{{ port }}>
+    WSGIDaemonProcess {{ service_name }} processes={{ processes }} threads={{ threads }} user={{ user }} group={{ group }} \
+                      display-name=%{GROUP}
+    WSGIProcessGroup {{ service_name }}
+    WSGIScriptAlias / {{ script }}
+    WSGIApplicationGroup %{GLOBAL}
+    WSGIPassAuthorization On
+    <IfVersion >= 2.4>
+      ErrorLogFormat "%{cu}t %M"
+    </IfVersion>
+    ErrorLog /var/log/apache2/{{ service_name }}_error.log
+    CustomLog /var/log/apache2/{{ service_name }}_access.log combined
+
+    <Directory /usr/bin>
+        <IfVersion >= 2.4>
+            Require all granted
+        </IfVersion>
+        <IfVersion < 2.4>
+            Order allow,deny
+            Allow from all
+        </IfVersion>
+    </Directory>
+</VirtualHost>
+{% endif -%}
+
+{% if admin_port -%}
+<VirtualHost *:{{ admin_port }}>
+    WSGIDaemonProcess {{ service_name }}-admin processes={{ admin_processes }} threads={{ threads }} user={{ user }} group={{ group }} \
+                      display-name=%{GROUP}
+    WSGIProcessGroup {{ service_name }}-admin
+    WSGIScriptAlias / {{ admin_script }}
+    WSGIApplicationGroup %{GLOBAL}
+    WSGIPassAuthorization On
+    <IfVersion >= 2.4>
+      ErrorLogFormat "%{cu}t %M"
+    </IfVersion>
+    ErrorLog /var/log/apache2/{{ service_name }}_error.log
+    CustomLog /var/log/apache2/{{ service_name }}_access.log combined
+
+    <Directory /usr/bin>
+        <IfVersion >= 2.4>
+            Require all granted
+        </IfVersion>
+        <IfVersion < 2.4>
+            Order allow,deny
+            Allow from all
+        </IfVersion>
+    </Directory>
+</VirtualHost>
+{% endif -%}
+
+{% if public_port -%}
+<VirtualHost *:{{ public_port }}>
+    WSGIDaemonProcess {{ service_name }}-public processes={{ public_processes }} threads={{ threads }} user={{ user }} group={{ group }} \
+                      display-name=%{GROUP}
+    WSGIProcessGroup {{ service_name }}-public
+    WSGIScriptAlias / {{ public_script }}
+    WSGIApplicationGroup %{GLOBAL}
+    WSGIPassAuthorization On
+    <IfVersion >= 2.4>
+      ErrorLogFormat "%{cu}t %M"
+    </IfVersion>
+    ErrorLog /var/log/apache2/{{ service_name }}_error.log
+    CustomLog /var/log/apache2/{{ service_name }}_access.log combined
+
+    <Directory /usr/bin>
+        <IfVersion >= 2.4>
+            Require all granted
+        </IfVersion>
+        <IfVersion < 2.4>
+            Order allow,deny
+            Allow from all
+        </IfVersion>
+    </Directory>
+</VirtualHost>
+{% endif -%}
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/templating.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/templating.py
new file mode 100644
index 0000000000000000000000000000000000000000..050f8af5c9135db4fafdbf5098edcd22c7157ff8
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/templating.py
@@ -0,0 +1,379 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
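+
+# A sketch of the loader search order built by get_loader() below, assuming
+# templates_dir='/tmp/templates' and os_release='queens' (paths illustrative):
+#
+#   /tmp/templates/queens/    <- newest matching release dir, searched first
+#   ...                       <- older release dirs, newest to oldest
+#   /tmp/templates/           <- the base templates_dir
+#   <this module's dir>/templates/  <- helper-shipped templates, searched last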
+
+import os
+
+import six
+
+from charmhelpers.fetch import apt_install, apt_update
+from charmhelpers.core.hookenv import (
+    log,
+    ERROR,
+    INFO,
+    TRACE
+)
+from charmhelpers.contrib.openstack.utils import OPENSTACK_CODENAMES
+
+try:
+    from jinja2 import FileSystemLoader, ChoiceLoader, Environment, exceptions
+except ImportError:
+    apt_update(fatal=True)
+    if six.PY2:
+        apt_install('python-jinja2', fatal=True)
+    else:
+        apt_install('python3-jinja2', fatal=True)
+    from jinja2 import FileSystemLoader, ChoiceLoader, Environment, exceptions
+
+
+class OSConfigException(Exception):
+    pass
+
+
+def get_loader(templates_dir, os_release):
+    """
+    Create a jinja2.ChoiceLoader containing template dirs up to
+    and including os_release. If a release-specific template directory
+    is missing under templates_dir, it is omitted from the loader.
+    templates_dir is added to the bottom of the search list as a base
+    loading dir.
+
+    A charm may also ship a templates dir with this module
+    and it will be appended to the bottom of the search list, eg::
+
+        hooks/charmhelpers/contrib/openstack/templates
+
+    :param templates_dir (str): Base template directory containing release
+        sub-directories.
+    :param os_release (str): OpenStack release codename to construct template
+        loader.
+    :returns: jinja2.ChoiceLoader constructed with a list of
+        jinja2.FilesystemLoaders, ordered in descending
+        order by OpenStack release.
+    """
+    tmpl_dirs = [(rel, os.path.join(templates_dir, rel))
+                 for rel in six.itervalues(OPENSTACK_CODENAMES)]
+
+    if not os.path.isdir(templates_dir):
+        log('Templates directory not found @ %s.' % templates_dir,
+            level=ERROR)
+        raise OSConfigException
+
+    # the bottom contains templates_dir and possibly a common templates dir
+    # shipped with the helper.
+    loaders = [FileSystemLoader(templates_dir)]
+    helper_templates = os.path.join(os.path.dirname(__file__), 'templates')
+    if os.path.isdir(helper_templates):
+        loaders.append(FileSystemLoader(helper_templates))
+
+    for rel, tmpl_dir in tmpl_dirs:
+        if os.path.isdir(tmpl_dir):
+            loaders.insert(0, FileSystemLoader(tmpl_dir))
+        if rel == os_release:
+            break
+    # demote this log to the lowest level; we don't really need to see these
+    # logs in production even when debugging.
+    log('Creating choice loader with dirs: %s' %
+        [l.searchpath for l in loaders], level=TRACE)
+    return ChoiceLoader(loaders)
+
+
+class OSConfigTemplate(object):
+    """
+    Associates a config file template with a list of context generators.
+    Responsible for constructing a template context based on those generators.
+    """
+
+    def __init__(self, config_file, contexts, config_template=None):
+        self.config_file = config_file
+
+        if hasattr(contexts, '__call__'):
+            self.contexts = [contexts]
+        else:
+            self.contexts = contexts
+
+        self._complete_contexts = []
+
+        self.config_template = config_template
+
+    def context(self):
+        ctxt = {}
+        for context in self.contexts:
+            _ctxt = context()
+            if _ctxt:
+                ctxt.update(_ctxt)
+                # track interfaces for every complete context.
+                [self._complete_contexts.append(interface)
+                 for interface in context.interfaces
+                 if interface not in self._complete_contexts]
+        return ctxt
+
+    def complete_contexts(self):
+        '''
+        Return a list of interfaces that have satisfied contexts.
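+
+        A hypothetical example, assuming this template was registered with
+        two context generators whose interfaces are 'shared-db' and 'amqp',
+        and both relations are complete::
+
+            >>> tmpl.complete_contexts()  # doctest: +SKIP
+            ['shared-db', 'amqp']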
+        '''
+        if self._complete_contexts:
+            return self._complete_contexts
+        self.context()
+        return self._complete_contexts
+
+    @property
+    def is_string_template(self):
+        """:returns: Boolean if this instance is a template initialised with a string"""
+        return self.config_template is not None
+
+
+class OSConfigRenderer(object):
+    """
+    This class provides a common templating system to be used by OpenStack
+    charms. It is intended to help charms share common code and templates,
+    and ease the burden of managing config templates across multiple OpenStack
+    releases.
+
+    Basic usage::
+
+        # import some common context generators from charmhelpers
+        from charmhelpers.contrib.openstack import context
+
+        # Create a renderer object for a specific OS release.
+        configs = OSConfigRenderer(templates_dir='/tmp/templates',
+                                   openstack_release='folsom')
+        # register some config files with context generators.
+        configs.register(config_file='/etc/nova/nova.conf',
+                         contexts=[context.SharedDBContext(),
+                                   context.AMQPContext()])
+        configs.register(config_file='/etc/nova/api-paste.ini',
+                         contexts=[context.IdentityServiceContext()])
+        configs.register(config_file='/etc/haproxy/haproxy.conf',
+                         contexts=[context.HAProxyContext()])
+        configs.register(config_file='/etc/keystone/policy.d/extra.cfg',
+                         contexts=[context.ExtraPolicyContext(),
+                                   context.KeystoneContext()],
+                         config_template=hookenv.config('extra-policy'))
+        # write out a single config
+        configs.write('/etc/nova/nova.conf')
+        # write out all registered configs
+        configs.write_all()
+
+    **OpenStack Releases and template loading**
+
+    When the object is instantiated, it is associated with a specific OS
+    release. This dictates how the template loader will be constructed.
+
+    The constructed loader attempts to load the template from several places
+    in the following order:
+    - from the most recent OS release-specific template dir (if one exists)
+    - the base templates_dir
+    - a template directory shipped in the charm with this helper file.
+
+    For the example above, '/tmp/templates' contains the following structure::
+
+        /tmp/templates/nova.conf
+        /tmp/templates/api-paste.ini
+        /tmp/templates/grizzly/api-paste.ini
+        /tmp/templates/havana/api-paste.ini
+
+    Since it was registered with the grizzly release, it first searches
+    the grizzly directory for nova.conf, then the templates dir.
+
+    When writing api-paste.ini, it will find the template in the grizzly
+    directory.
+
+    If the object were created with folsom, it would fall back to the
+    base templates dir for its api-paste.ini template.
+
+    This system should help manage changes in config files through
+    openstack releases, allowing charms to fall back to the most recently
+    updated config template for a given release.
+
+    The haproxy.conf, since it is not shipped in the templates dir, will
+    be loaded from the module directory's template directory, eg
+    $CHARM/hooks/charmhelpers/contrib/openstack/templates. This allows
+    us to ship common templates (haproxy, apache) with the helpers.
+
+    **Context generators**
+
+    Context generators are used to generate template contexts during hook
+    execution. Doing so may require inspecting service relations, charm
+    config, etc. When registered, a config file is associated with a list
+    of generators. When a template is rendered and written, all context
+    generators are called in a chain to generate the context dictionary
+    passed to the jinja2 template. See context.py for more info.
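+
+    A release upgrade can later re-target the template loader without
+    re-registering any config files (a sketch; release name illustrative)::
+
+        configs.set_release(openstack_release='havana')
+        configs.write_all()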
+ """ + def __init__(self, templates_dir, openstack_release): + if not os.path.isdir(templates_dir): + log('Could not locate templates dir %s' % templates_dir, + level=ERROR) + raise OSConfigException + + self.templates_dir = templates_dir + self.openstack_release = openstack_release + self.templates = {} + self._tmpl_env = None + + if None in [Environment, ChoiceLoader, FileSystemLoader]: + # if this code is running, the object is created pre-install hook. + # jinja2 shouldn't get touched until the module is reloaded on next + # hook execution, with proper jinja2 bits successfully imported. + if six.PY2: + apt_install('python-jinja2') + else: + apt_install('python3-jinja2') + + def register(self, config_file, contexts, config_template=None): + """ + Register a config file with a list of context generators to be called + during rendering. + config_template can be used to load a template from a string instead of + using template loaders and template files. + :param config_file (str): a path where a config file will be rendered + :param contexts (list): a list of context dictionaries with kv pairs + :param config_template (str): an optional template string to use + """ + self.templates[config_file] = OSConfigTemplate( + config_file=config_file, + contexts=contexts, + config_template=config_template + ) + log('Registered config file: {}'.format(config_file), + level=INFO) + + def _get_tmpl_env(self): + if not self._tmpl_env: + loader = get_loader(self.templates_dir, self.openstack_release) + self._tmpl_env = Environment(loader=loader) + + def _get_template(self, template): + self._get_tmpl_env() + template = self._tmpl_env.get_template(template) + log('Loaded template from {}'.format(template.filename), + level=INFO) + return template + + def _get_template_from_string(self, ostmpl): + ''' + Get a jinja2 template object from a string. + :param ostmpl: OSConfigTemplate to use as a data source. + ''' + self._get_tmpl_env() + template = self._tmpl_env.from_string(ostmpl.config_template) + log('Loaded a template from a string for {}'.format( + ostmpl.config_file), + level=INFO) + return template + + def render(self, config_file): + if config_file not in self.templates: + log('Config not registered: {}'.format(config_file), level=ERROR) + raise OSConfigException + + ostmpl = self.templates[config_file] + ctxt = ostmpl.context() + + if ostmpl.is_string_template: + template = self._get_template_from_string(ostmpl) + log('Rendering from a string template: ' + '{}'.format(config_file), + level=INFO) + else: + _tmpl = os.path.basename(config_file) + try: + template = self._get_template(_tmpl) + except exceptions.TemplateNotFound: + # if no template is found with basename, try looking + # for it using a munged full path, eg: + # /etc/apache2/apache2.conf -> etc_apache2_apache2.conf + _tmpl = '_'.join(config_file.split('/')[1:]) + try: + template = self._get_template(_tmpl) + except exceptions.TemplateNotFound as e: + log('Could not load template from {} by {} or {}.' + ''.format( + self.templates_dir, + os.path.basename(config_file), + _tmpl + ), + level=ERROR) + raise e + + log('Rendering from template: {}'.format(config_file), + level=INFO) + return template.render(ctxt) + + def write(self, config_file): + """ + Write a single config file, raises if config file is not registered. 
+ """ + if config_file not in self.templates: + log('Config not registered: %s' % config_file, level=ERROR) + raise OSConfigException + + _out = self.render(config_file) + if six.PY3: + _out = _out.encode('UTF-8') + + with open(config_file, 'wb') as out: + out.write(_out) + + log('Wrote template %s.' % config_file, level=INFO) + + def write_all(self): + """ + Write out all registered config files. + """ + [self.write(k) for k in six.iterkeys(self.templates)] + + def set_release(self, openstack_release): + """ + Resets the template environment and generates a new template loader + based on a the new openstack release. + """ + self._tmpl_env = None + self.openstack_release = openstack_release + self._get_tmpl_env() + + def complete_contexts(self): + ''' + Returns a list of context interfaces that yield a complete context. + ''' + interfaces = [] + [interfaces.extend(i.complete_contexts()) + for i in six.itervalues(self.templates)] + return interfaces + + def get_incomplete_context_data(self, interfaces): + ''' + Return dictionary of relation status of interfaces and any missing + required context data. Example: + {'amqp': {'missing_data': ['rabbitmq_password'], 'related': True}, + 'zeromq-configuration': {'related': False}} + ''' + incomplete_context_data = {} + + for i in six.itervalues(self.templates): + for context in i.contexts: + for interface in interfaces: + related = False + if interface in context.interfaces: + related = context.get_related() + missing_data = context.missing_data + if missing_data: + incomplete_context_data[interface] = {'missing_data': missing_data} + if related: + if incomplete_context_data.get(interface): + incomplete_context_data[interface].update({'related': True}) + else: + incomplete_context_data[interface] = {'related': True} + else: + incomplete_context_data[interface] = {'related': False} + return incomplete_context_data diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/utils.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/utils.py new file mode 100644 index 0000000000000000000000000000000000000000..fbf0156108537f9caae35e193c346b0e4ee237b9 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/utils.py @@ -0,0 +1,2352 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# Common python helper functions used for OpenStack charms. 
+from collections import OrderedDict, namedtuple +from functools import wraps + +import subprocess +import json +import os +import sys +import re +import itertools +import functools + +import six +import traceback +import uuid +import yaml + +from charmhelpers import deprecate + +from charmhelpers.contrib.network import ip + +from charmhelpers.core import unitdata + +from charmhelpers.core.hookenv import ( + WORKLOAD_STATES, + action_fail, + action_set, + config, + expected_peer_units, + expected_related_units, + log as juju_log, + charm_dir, + INFO, + ERROR, + metadata, + related_units, + relation_get, + relation_id, + relation_ids, + relation_set, + status_set, + hook_name, + application_version_set, + cached, + leader_set, + leader_get, + local_unit, +) + +from charmhelpers.core.strutils import ( + BasicStringComparator, + bool_from_string, +) + +from charmhelpers.contrib.storage.linux.lvm import ( + deactivate_lvm_volume_group, + is_lvm_physical_volume, + remove_lvm_physical_volume, +) + +from charmhelpers.contrib.network.ip import ( + get_ipv6_addr, + is_ipv6, + port_has_listener, +) + +from charmhelpers.core.host import ( + lsb_release, + mounts, + umount, + service_running, + service_pause, + service_resume, + service_stop, + service_start, + restart_on_change_helper, +) +from charmhelpers.fetch import ( + apt_cache, + import_key as fetch_import_key, + add_source as fetch_add_source, + SourceConfigError, + GPGKeyError, + get_upstream_version, + filter_missing_packages, + ubuntu_apt_pkg as apt, +) + +from charmhelpers.fetch.snap import ( + snap_install, + snap_refresh, + valid_snap_channel, +) + +from charmhelpers.contrib.storage.linux.utils import is_block_device, zap_disk +from charmhelpers.contrib.storage.linux.loopback import ensure_loopback_device +from charmhelpers.contrib.openstack.exceptions import OSContextError +from charmhelpers.contrib.openstack.policyd import ( + policyd_status_message_prefix, + POLICYD_CONFIG_NAME, +) + +from charmhelpers.contrib.openstack.ha.utils import ( + expect_ha, +) + +CLOUD_ARCHIVE_URL = "http://ubuntu-cloud.archive.canonical.com/ubuntu" +CLOUD_ARCHIVE_KEY_ID = '5EDB1B62EC4926EA' + +DISTRO_PROPOSED = ('deb http://archive.ubuntu.com/ubuntu/ %s-proposed ' + 'restricted main multiverse universe') + +OPENSTACK_RELEASES = ( + 'diablo', + 'essex', + 'folsom', + 'grizzly', + 'havana', + 'icehouse', + 'juno', + 'kilo', + 'liberty', + 'mitaka', + 'newton', + 'ocata', + 'pike', + 'queens', + 'rocky', + 'stein', + 'train', + 'ussuri', +) + +UBUNTU_OPENSTACK_RELEASE = OrderedDict([ + ('oneiric', 'diablo'), + ('precise', 'essex'), + ('quantal', 'folsom'), + ('raring', 'grizzly'), + ('saucy', 'havana'), + ('trusty', 'icehouse'), + ('utopic', 'juno'), + ('vivid', 'kilo'), + ('wily', 'liberty'), + ('xenial', 'mitaka'), + ('yakkety', 'newton'), + ('zesty', 'ocata'), + ('artful', 'pike'), + ('bionic', 'queens'), + ('cosmic', 'rocky'), + ('disco', 'stein'), + ('eoan', 'train'), + ('focal', 'ussuri'), +]) + + +OPENSTACK_CODENAMES = OrderedDict([ + ('2011.2', 'diablo'), + ('2012.1', 'essex'), + ('2012.2', 'folsom'), + ('2013.1', 'grizzly'), + ('2013.2', 'havana'), + ('2014.1', 'icehouse'), + ('2014.2', 'juno'), + ('2015.1', 'kilo'), + ('2015.2', 'liberty'), + ('2016.1', 'mitaka'), + ('2016.2', 'newton'), + ('2017.1', 'ocata'), + ('2017.2', 'pike'), + ('2018.1', 'queens'), + ('2018.2', 'rocky'), + ('2019.1', 'stein'), + ('2019.2', 'train'), + ('2020.1', 'ussuri'), +]) + +# The ugly duckling - must list releases oldest to newest +SWIFT_CODENAMES = OrderedDict([ + 
('diablo', + ['1.4.3']), + ('essex', + ['1.4.8']), + ('folsom', + ['1.7.4']), + ('grizzly', + ['1.7.6', '1.7.7', '1.8.0']), + ('havana', + ['1.9.0', '1.9.1', '1.10.0']), + ('icehouse', + ['1.11.0', '1.12.0', '1.13.0', '1.13.1']), + ('juno', + ['2.0.0', '2.1.0', '2.2.0']), + ('kilo', + ['2.2.1', '2.2.2']), + ('liberty', + ['2.3.0', '2.4.0', '2.5.0']), + ('mitaka', + ['2.5.0', '2.6.0', '2.7.0']), + ('newton', + ['2.8.0', '2.9.0', '2.10.0']), + ('ocata', + ['2.11.0', '2.12.0', '2.13.0']), + ('pike', + ['2.13.0', '2.15.0']), + ('queens', + ['2.16.0', '2.17.0']), + ('rocky', + ['2.18.0', '2.19.0']), + ('stein', + ['2.20.0', '2.21.0']), + ('train', + ['2.22.0', '2.23.0']), + ('ussuri', + ['2.24.0', '2.25.0']), +]) + +# >= Liberty version->codename mapping +PACKAGE_CODENAMES = { + 'nova-common': OrderedDict([ + ('12', 'liberty'), + ('13', 'mitaka'), + ('14', 'newton'), + ('15', 'ocata'), + ('16', 'pike'), + ('17', 'queens'), + ('18', 'rocky'), + ('19', 'stein'), + ('20', 'train'), + ('21', 'ussuri'), + ]), + 'neutron-common': OrderedDict([ + ('7', 'liberty'), + ('8', 'mitaka'), + ('9', 'newton'), + ('10', 'ocata'), + ('11', 'pike'), + ('12', 'queens'), + ('13', 'rocky'), + ('14', 'stein'), + ('15', 'train'), + ('16', 'ussuri'), + ]), + 'cinder-common': OrderedDict([ + ('7', 'liberty'), + ('8', 'mitaka'), + ('9', 'newton'), + ('10', 'ocata'), + ('11', 'pike'), + ('12', 'queens'), + ('13', 'rocky'), + ('14', 'stein'), + ('15', 'train'), + ('16', 'ussuri'), + ]), + 'keystone': OrderedDict([ + ('8', 'liberty'), + ('9', 'mitaka'), + ('10', 'newton'), + ('11', 'ocata'), + ('12', 'pike'), + ('13', 'queens'), + ('14', 'rocky'), + ('15', 'stein'), + ('16', 'train'), + ('17', 'ussuri'), + ]), + 'horizon-common': OrderedDict([ + ('8', 'liberty'), + ('9', 'mitaka'), + ('10', 'newton'), + ('11', 'ocata'), + ('12', 'pike'), + ('13', 'queens'), + ('14', 'rocky'), + ('15', 'stein'), + ('16', 'train'), + ('18', 'ussuri'), + ]), + 'ceilometer-common': OrderedDict([ + ('5', 'liberty'), + ('6', 'mitaka'), + ('7', 'newton'), + ('8', 'ocata'), + ('9', 'pike'), + ('10', 'queens'), + ('11', 'rocky'), + ('12', 'stein'), + ('13', 'train'), + ('14', 'ussuri'), + ]), + 'heat-common': OrderedDict([ + ('5', 'liberty'), + ('6', 'mitaka'), + ('7', 'newton'), + ('8', 'ocata'), + ('9', 'pike'), + ('10', 'queens'), + ('11', 'rocky'), + ('12', 'stein'), + ('13', 'train'), + ('14', 'ussuri'), + ]), + 'glance-common': OrderedDict([ + ('11', 'liberty'), + ('12', 'mitaka'), + ('13', 'newton'), + ('14', 'ocata'), + ('15', 'pike'), + ('16', 'queens'), + ('17', 'rocky'), + ('18', 'stein'), + ('19', 'train'), + ('20', 'ussuri'), + ]), + 'openstack-dashboard': OrderedDict([ + ('8', 'liberty'), + ('9', 'mitaka'), + ('10', 'newton'), + ('11', 'ocata'), + ('12', 'pike'), + ('13', 'queens'), + ('14', 'rocky'), + ('15', 'stein'), + ('16', 'train'), + ('18', 'ussuri'), + ]), +} + +DEFAULT_LOOPBACK_SIZE = '5G' + +DB_SERIES_UPGRADING_KEY = 'cluster-series-upgrading' + +DB_MAINTENANCE_KEYS = [DB_SERIES_UPGRADING_KEY] + + +class CompareOpenStackReleases(BasicStringComparator): + """Provide comparisons of OpenStack releases. + + Use in the form of + + if CompareOpenStackReleases(release) > 'mitaka': + # do something with mitaka + """ + _list = OPENSTACK_RELEASES + + +def error_out(msg): + juju_log("FATAL ERROR: %s" % msg, level='ERROR') + sys.exit(1) + + +def get_installed_semantic_versioned_packages(): + '''Get a list of installed packages which have OpenStack semantic versioning + + :returns List of installed packages + :rtype: [pkg1, pkg2, ...] 
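+
+    A hypothetical result (depends entirely on what is installed)::
+
+        >>> get_installed_semantic_versioned_packages()  # doctest: +SKIP
+        ['keystone', 'nova-common']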
+ ''' + return filter_missing_packages(PACKAGE_CODENAMES.keys()) + + +def get_os_codename_install_source(src): + '''Derive OpenStack release codename from a given installation source.''' + ubuntu_rel = lsb_release()['DISTRIB_CODENAME'] + rel = '' + if src is None: + return rel + if src in ['distro', 'distro-proposed', 'proposed']: + try: + rel = UBUNTU_OPENSTACK_RELEASE[ubuntu_rel] + except KeyError: + e = 'Could not derive openstack release for '\ + 'this Ubuntu release: %s' % ubuntu_rel + error_out(e) + return rel + + if src.startswith('cloud:'): + ca_rel = src.split(':')[1] + ca_rel = ca_rel.split('-')[1].split('/')[0] + return ca_rel + + # Best guess match based on deb string provided + if (src.startswith('deb') or + src.startswith('ppa') or + src.startswith('snap')): + for v in OPENSTACK_CODENAMES.values(): + if v in src: + return v + + +def get_os_version_install_source(src): + codename = get_os_codename_install_source(src) + return get_os_version_codename(codename) + + +def get_os_codename_version(vers): + '''Determine OpenStack codename from version number.''' + try: + return OPENSTACK_CODENAMES[vers] + except KeyError: + e = 'Could not determine OpenStack codename for version %s' % vers + error_out(e) + + +def get_os_version_codename(codename, version_map=OPENSTACK_CODENAMES): + '''Determine OpenStack version number from codename.''' + for k, v in six.iteritems(version_map): + if v == codename: + return k + e = 'Could not derive OpenStack version for '\ + 'codename: %s' % codename + error_out(e) + + +def get_os_version_codename_swift(codename): + '''Determine OpenStack version number of swift from codename.''' + for k, v in six.iteritems(SWIFT_CODENAMES): + if k == codename: + return v[-1] + e = 'Could not derive swift version for '\ + 'codename: %s' % codename + error_out(e) + + +def get_swift_codename(version): + '''Determine OpenStack codename that corresponds to swift version.''' + codenames = [k for k, v in six.iteritems(SWIFT_CODENAMES) if version in v] + + if len(codenames) > 1: + # If more than one release codename contains this version we determine + # the actual codename based on the highest available install source. + for codename in reversed(codenames): + releases = UBUNTU_OPENSTACK_RELEASE + release = [k for k, v in six.iteritems(releases) if codename in v] + ret = subprocess.check_output(['apt-cache', 'policy', 'swift']) + if six.PY3: + ret = ret.decode('UTF-8') + if codename in ret or release[0] in ret: + return codename + elif len(codenames) == 1: + return codenames[0] + + # NOTE: fallback - attempt to match with just major.minor version + match = re.match(r'^(\d+)\.(\d+)', version) + if match: + major_minor_version = match.group(0) + for codename, versions in six.iteritems(SWIFT_CODENAMES): + for release_version in versions: + if release_version.startswith(major_minor_version): + return codename + + return None + + +def get_os_codename_package(package, fatal=True): + '''Derive OpenStack release codename from an installed package.''' + + if snap_install_requested(): + cmd = ['snap', 'list', package] + try: + out = subprocess.check_output(cmd) + if six.PY3: + out = out.decode('UTF-8') + except subprocess.CalledProcessError: + return None + lines = out.split('\n') + for line in lines: + if package in line: + # Second item in list is Version + return line.split()[1] + + cache = apt_cache() + + try: + pkg = cache[package] + except Exception: + if not fatal: + return None + # the package is unknown to the current apt cache. 
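+        # (e.g. a package name that exists in no configured apt source;
+        # with fatal=True, the default, this is treated as a hard error)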
+        e = 'Could not determine version of package with no installation '\
+            'candidate: %s' % package
+        error_out(e)
+
+    if not pkg.current_ver:
+        if not fatal:
+            return None
+        # package is known, but no version is currently installed.
+        e = 'Could not determine version of uninstalled package: %s' % package
+        error_out(e)
+
+    vers = apt.upstream_version(pkg.current_ver.ver_str)
+    if 'swift' in pkg.name:
+        # Full x.y.z match for swift versions
+        match = re.match(r'^(\d+)\.(\d+)\.(\d+)', vers)
+    else:
+        # x.y match only for 20XX.X
+        # and ignore patch level for other packages
+        match = re.match(r'^(\d+)\.(\d+)', vers)
+
+    if match:
+        vers = match.group(0)
+
+    # Generate a major version number for newer semantic
+    # versions of openstack projects
+    major_vers = vers.split('.')[0]
+    # >= Liberty independent project versions
+    if (package in PACKAGE_CODENAMES and
+            major_vers in PACKAGE_CODENAMES[package]):
+        return PACKAGE_CODENAMES[package][major_vers]
+    else:
+        # < Liberty co-ordinated project versions
+        try:
+            if 'swift' in pkg.name:
+                return get_swift_codename(vers)
+            else:
+                return OPENSTACK_CODENAMES[vers]
+        except KeyError:
+            if not fatal:
+                return None
+            e = 'Could not determine OpenStack codename for version %s' % vers
+            error_out(e)
+
+
+def get_os_version_package(pkg, fatal=True):
+    '''Derive OpenStack version number from an installed package.'''
+    codename = get_os_codename_package(pkg, fatal=fatal)
+
+    if not codename:
+        return None
+
+    if 'swift' in pkg:
+        vers_map = SWIFT_CODENAMES
+        for cname, version in six.iteritems(vers_map):
+            if cname == codename:
+                return version[-1]
+    else:
+        vers_map = OPENSTACK_CODENAMES
+        for version, cname in six.iteritems(vers_map):
+            if cname == codename:
+                return version
+    # e = "Could not determine OpenStack version for package: %s" % pkg
+    # error_out(e)
+
+
+# Module local cache variable for the os_release.
+_os_rel = None
+
+
+def reset_os_release():
+    '''Unset the cached os_release version'''
+    global _os_rel
+    _os_rel = None
+
+
+def os_release(package, base=None, reset_cache=False, source_key=None):
+    """Return the OpenStack release codename from a cached global.
+
+    If reset_cache is True then unset the cached os_release version and
+    return the freshly determined version.
+
+    If the codename cannot be determined from either an installed package or
+    the installation source, the earliest release supported by the charm
+    should be returned.
+
+    :param package: Name of package to determine release from
+    :type package: str
+    :param base: Fallback codename if endeavours to determine from package
+                 fail
+    :type base: Optional[str]
+    :param reset_cache: Reset any cached codename value
+    :type reset_cache: bool
+    :param source_key: Name of source configuration option
+                       (default: 'openstack-origin')
+    :type source_key: Optional[str]
+    :returns: OpenStack release codename
+    :rtype: str
+    """
+    source_key = source_key or 'openstack-origin'
+    if not base:
+        base = UBUNTU_OPENSTACK_RELEASE[lsb_release()['DISTRIB_CODENAME']]
+    global _os_rel
+    if reset_cache:
+        reset_os_release()
+    if _os_rel:
+        return _os_rel
+    _os_rel = (
+        get_os_codename_package(package, fatal=False) or
+        get_os_codename_install_source(config(source_key)) or
+        base)
+    return _os_rel
+
+
+@deprecate("moved to charmhelpers.fetch.import_key()", "2017-07", log=juju_log)
+def import_key(keyid):
+    """Import a key, either ASCII armored, or a GPG key id.
+
+    @param keyid: the key in ASCII armor format, or a GPG key id.
+    @raises SystemExit() via sys.exit() on failure.
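+
+    Illustrative call (hedged; the key id shown is hypothetical)::
+
+        import_key('A1B2C3D4E5F6A7B8')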
+    """
+    try:
+        return fetch_import_key(keyid)
+    except GPGKeyError as e:
+        error_out("Could not import key: {}".format(str(e)))
+
+
+def get_source_and_pgp_key(source_and_key):
+    """Look for a pgp key ID or ascii-armor key in the given input.
+
+    :param source_and_key: String, "source_spec|keyid" where '|keyid' is
+        optional.
+    :returns: (source_spec, key_id OR None) as a tuple.  Returns None for
+        key_id if there was no '|' in the source_and_key string.
+    """
+    try:
+        source, key = source_and_key.split('|', 2)
+        return source, key or None
+    except ValueError:
+        return source_and_key, None
+
+
+@deprecate("use charmhelpers.fetch.add_source() instead.",
+           "2017-07", log=juju_log)
+def configure_installation_source(source_plus_key):
+    """Configure an installation source.
+
+    The functionality is provided by charmhelpers.fetch.add_source()
+    The difference between the two functions is that add_source() requires
+    the key to be passed directly, whereas this function passes an optional
+    key by appending '|<key>' to the end of the source specification
+    'source'.
+
+    Another difference from add_source() is that this function calls
+    sys.exit(1) if the configuration fails, whereas add_source() raises
+    SourceConfigError().  Also, add_source() silently fails (with a
+    juju_log command) if there is no matching source to configure, whereas
+    this function exits via sys.exit(1).
+
+    :param source_plus_key: "source_spec|keyid" string -- see above for
+        details.
+
+    Note that the behaviour on error is to log the error to the juju log and
+    then call sys.exit(1).
+    """
+    if source_plus_key.startswith('snap'):
+        # Do nothing for snap installs
+        return
+    # extract the key if there is one, denoted by a '|' in the rel
+    source, key = get_source_and_pgp_key(source_plus_key)
+
+    # handle the ordinary sources via add_source
+    try:
+        fetch_add_source(source, key, fail_invalid=True)
+    except SourceConfigError as se:
+        error_out(str(se))
+
+
+def config_value_changed(option):
+    """
+    Determine if config value changed since last call to this function.
+    """
+    hook_data = unitdata.HookData()
+    with hook_data():
+        db = unitdata.kv()
+        current = config(option)
+        saved = db.get(option)
+        db.set(option, current)
+        if saved is None:
+            return False
+        return current != saved
+
+
+def get_endpoint_key(service_name, relation_id, unit_name):
+    """Return the key used to refer to an ep changed notification from a unit.
+
+    :param service_name: Service name eg nova, neutron, placement etc
+    :type service_name: str
+    :param relation_id: The id of the relation the unit is on.
+    :type relation_id: str
+    :param unit_name: The name of the unit publishing the notification.
+    :type unit_name: str
+    :returns: The key used to refer to an ep changed notification from a unit
+    :rtype: str
+    """
+    return '{}-{}-{}'.format(
+        service_name,
+        relation_id.replace(':', '_'),
+        unit_name.replace('/', '_'))
+
+
+def get_endpoint_notifications(service_names, rel_name='identity-service'):
+    """Return all notifications for the given services.
+
+    :param service_names: List of service names.
+    :type service_names: List[str]
+    :param rel_name: Name of the relation to query
+    :type rel_name: str
+    :returns: A dict containing the source of the notification and its nonce.
+    :rtype: Dict[str, str]
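+
+    Example of the returned structure (hedged; values are illustrative)::
+
+        {'placement-identity-service_27-keystone_0': '1a2b3c4d'}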
+    """
+    notifications = {}
+    for rid in relation_ids(rel_name):
+        for unit in related_units(relid=rid):
+            ep_changed_json = relation_get(
+                rid=rid,
+                unit=unit,
+                attribute='ep_changed')
+            if ep_changed_json:
+                ep_changed = json.loads(ep_changed_json)
+                for service in service_names:
+                    if ep_changed.get(service):
+                        key = get_endpoint_key(service, rid, unit)
+                        notifications[key] = ep_changed[service]
+    return notifications
+
+
+def endpoint_changed(service_name, rel_name='identity-service'):
+    """Whether a new notification has been received for an endpoint.
+
+    :param service_name: Service name eg nova, neutron, placement etc
+    :type service_name: str
+    :param rel_name: Name of the relation to query
+    :type rel_name: str
+    :returns: Whether endpoint has changed
+    :rtype: bool
+    """
+    changed = False
+    with unitdata.HookData()() as t:
+        db = t[0]
+        notifications = get_endpoint_notifications(
+            [service_name],
+            rel_name=rel_name)
+        for key, nonce in notifications.items():
+            if db.get(key) != nonce:
+                juju_log(('New endpoint change notification found: '
+                          '{}={}').format(key, nonce),
+                         'INFO')
+                changed = True
+                break
+    return changed
+
+
+def save_endpoint_changed_triggers(service_names, rel_name='identity-service'):
+    """Save the endpoint triggers in the db so changes can be tracked.
+
+    :param service_names: List of service names.
+    :type service_names: List[str]
+    :param rel_name: Name of the relation to query
+    :type rel_name: str
+    """
+    with unitdata.HookData()() as t:
+        db = t[0]
+        notifications = get_endpoint_notifications(
+            service_names,
+            rel_name=rel_name)
+        for key, nonce in notifications.items():
+            db.set(key, nonce)
+
+
+def save_script_rc(script_path="scripts/scriptrc", **env_vars):
+    """
+    Write an rc file in the charm-delivered directory containing
+    exported environment variables provided by env_vars. Any charm scripts run
+    outside the juju hook environment can source this scriptrc to obtain
+    updated config information necessary to perform health checks or
+    service changes.
+    """
+    juju_rc_path = "%s/%s" % (charm_dir(), script_path)
+    if not os.path.exists(os.path.dirname(juju_rc_path)):
+        os.mkdir(os.path.dirname(juju_rc_path))
+    with open(juju_rc_path, 'wt') as rc_script:
+        rc_script.write(
+            "#!/bin/bash\n")
+        [rc_script.write('export %s=%s\n' % (u, p))
+         for u, p in six.iteritems(env_vars) if u != "script_path"]
+
+
+def openstack_upgrade_available(package):
+    """
+    Determines if an OpenStack upgrade is available from installation
+    source, based on version of installed package.
+
+    :param package: str: Name of installed package.
+
+    :returns: bool: True if configured installation source offers a newer
+              version of package.
+    """
+
+    src = config('openstack-origin')
+    cur_vers = get_os_version_package(package)
+    if not cur_vers:
+        # The package has not been installed yet; do not attempt an upgrade
+        return False
+    if "swift" in package:
+        codename = get_os_codename_install_source(src)
+        avail_vers = get_os_version_codename_swift(codename)
+    else:
+        try:
+            avail_vers = get_os_version_install_source(src)
+        except Exception:
+            avail_vers = cur_vers
+    apt.init()
+    return apt.version_compare(avail_vers, cur_vers) >= 1
+
+
+def ensure_block_device(block_device):
+    '''
+    Confirm block_device, creating it as a loopback device if necessary.
+
+    :param block_device: str: Full path of block device to ensure.
+
+    :returns: str: Full path of ensured block device.
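+
+    Examples (illustrative)::
+
+        ensure_block_device('vdb')            # -> '/dev/vdb'
+        ensure_block_device('/tmp/vol1|5G')   # loopback device is created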
+ ''' + _none = ['None', 'none', None] + if (block_device in _none): + error_out('prepare_storage(): Missing required input: block_device=%s.' + % block_device) + + if block_device.startswith('/dev/'): + bdev = block_device + elif block_device.startswith('/'): + _bd = block_device.split('|') + if len(_bd) == 2: + bdev, size = _bd + else: + bdev = block_device + size = DEFAULT_LOOPBACK_SIZE + bdev = ensure_loopback_device(bdev, size) + else: + bdev = '/dev/%s' % block_device + + if not is_block_device(bdev): + error_out('Failed to locate valid block device at %s' % bdev) + + return bdev + + +def clean_storage(block_device): + ''' + Ensures a block device is clean. That is: + - unmounted + - any lvm volume groups are deactivated + - any lvm physical device signatures removed + - partition table wiped + + :param block_device: str: Full path to block device to clean. + ''' + for mp, d in mounts(): + if d == block_device: + juju_log('clean_storage(): %s is mounted @ %s, unmounting.' % + (d, mp), level=INFO) + umount(mp, persist=True) + + if is_lvm_physical_volume(block_device): + deactivate_lvm_volume_group(block_device) + remove_lvm_physical_volume(block_device) + else: + zap_disk(block_device) + + +is_ip = ip.is_ip +ns_query = ip.ns_query +get_host_ip = ip.get_host_ip +get_hostname = ip.get_hostname + + +def get_matchmaker_map(mm_file='/etc/oslo/matchmaker_ring.json'): + mm_map = {} + if os.path.isfile(mm_file): + with open(mm_file, 'r') as f: + mm_map = json.load(f) + return mm_map + + +def sync_db_with_multi_ipv6_addresses(database, database_user, + relation_prefix=None): + hosts = get_ipv6_addr(dynamic_only=False) + + if config('vip'): + vips = config('vip').split() + for vip in vips: + if vip and is_ipv6(vip): + hosts.append(vip) + + kwargs = {'database': database, + 'username': database_user, + 'hostname': json.dumps(hosts)} + + if relation_prefix: + for key in list(kwargs.keys()): + kwargs["%s_%s" % (relation_prefix, key)] = kwargs[key] + del kwargs[key] + + for rid in relation_ids('shared-db'): + relation_set(relation_id=rid, **kwargs) + + +def os_requires_version(ostack_release, pkg): + """ + Decorator for hook to specify minimum supported release + """ + def wrap(f): + @wraps(f) + def wrapped_f(*args): + if os_release(pkg) < ostack_release: + raise Exception("This hook is not supported on releases" + " before %s" % ostack_release) + f(*args) + return wrapped_f + return wrap + + +def os_workload_status(configs, required_interfaces, charm_func=None): + """ + Decorator to set workload status based on complete contexts + """ + def wrap(f): + @wraps(f) + def wrapped_f(*args, **kwargs): + # Run the original function first + f(*args, **kwargs) + # Set workload status now that contexts have been + # acted on + set_os_workload_status(configs, required_interfaces, charm_func) + return wrapped_f + return wrap + + +def set_os_workload_status(configs, required_interfaces, charm_func=None, + services=None, ports=None): + """Set the state of the workload status for the charm. + + This calls _determine_os_workload_status() to get the new state, message + and sets the status using status_set() + + @param configs: a templating.OSConfigRenderer() object + @param required_interfaces: {generic: [specific, specific2, ...]} + @param charm_func: a callable function that returns state, message. The + signature is charm_func(configs) -> (state, message) + @param services: list of strings OR dictionary specifying services/ports + @param ports: OPTIONAL list of port numbers. 
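+
+    Example (sketch; CONFIGS is an assumed OSConfigRenderer instance)::
+
+        set_os_workload_status(CONFIGS, {'database': ['shared-db']})
+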
+ @returns state, message: the new workload status, user message + """ + state, message = _determine_os_workload_status( + configs, required_interfaces, charm_func, services, ports) + status_set(state, message) + + +def _determine_os_workload_status( + configs, required_interfaces, charm_func=None, + services=None, ports=None): + """Determine the state of the workload status for the charm. + + This function returns the new workload status for the charm based + on the state of the interfaces, the paused state and whether the + services are actually running and any specified ports are open. + + This checks: + + 1. if the unit should be paused, that it is actually paused. If so the + state is 'maintenance' + message, else 'broken'. + 2. that the interfaces/relations are complete. If they are not then + it sets the state to either 'broken' or 'waiting' and an appropriate + message. + 3. If all the relation data is set, then it checks that the actual + services really are running. If not it sets the state to 'broken'. + + If everything is okay then the state returns 'active'. + + @param configs: a templating.OSConfigRenderer() object + @param required_interfaces: {generic: [specific, specific2, ...]} + @param charm_func: a callable function that returns state, message. The + signature is charm_func(configs) -> (state, message) + @param services: list of strings OR dictionary specifying services/ports + @param ports: OPTIONAL list of port numbers. + @returns state, message: the new workload status, user message + """ + state, message = _ows_check_if_paused(services, ports) + + if state is None: + state, message = _ows_check_generic_interfaces( + configs, required_interfaces) + + if state != 'maintenance' and charm_func: + # _ows_check_charm_func() may modify the state, message + state, message = _ows_check_charm_func( + state, message, lambda: charm_func(configs)) + + if state is None: + state, message = _ows_check_services_running(services, ports) + + if state is None: + state = 'active' + message = "Unit is ready" + juju_log(message, 'INFO') + + try: + if config(POLICYD_CONFIG_NAME): + message = "{} {}".format(policyd_status_message_prefix(), message) + except Exception: + pass + + return state, message + + +def _ows_check_if_paused(services=None, ports=None): + """Check if the unit is supposed to be paused, and if so check that the + services/ports (if passed) are actually stopped/not being listened to. + + If the unit isn't supposed to be paused, just return None, None + + If the unit is performing a series upgrade, return a message indicating + this. + + @param services: OPTIONAL services spec or list of service names. + @param ports: OPTIONAL list of port numbers. + @returns state, message or None, None + """ + if is_unit_upgrading_set(): + state, message = check_actually_paused(services=services, + ports=ports) + if state is None: + # we're paused okay, so set maintenance and return + state = "blocked" + message = ("Ready for do-release-upgrade and reboot. " + "Set complete when finished.") + return state, message + + if is_unit_paused_set(): + state, message = check_actually_paused(services=services, + ports=ports) + if state is None: + # we're paused okay, so set maintenance and return + state = "maintenance" + message = "Paused. Use 'resume' action to resume normal service." + return state, message + return None, None + + +def _ows_check_generic_interfaces(configs, required_interfaces): + """Check the complete contexts to determine the workload status. 
+
+    - Checks for missing or incomplete contexts
+    - juju log details of missing required data.
+    - determines the correct workload status
+    - creates an appropriate message for status_set(...)
+
+    if there are no problems then the function returns None, None
+
+    @param configs: a templating.OSConfigRenderer() object
+    @param required_interfaces: {generic_interface: [specific_interface], }
+    @returns state, message or None, None
+    """
+    incomplete_rel_data = incomplete_relation_data(configs,
+                                                   required_interfaces)
+    state = None
+    message = None
+    missing_relations = set()
+    incomplete_relations = set()
+
+    for generic_interface, relations_states in incomplete_rel_data.items():
+        related_interface = None
+        missing_data = {}
+        # Related or not?
+        for interface, relation_state in relations_states.items():
+            if relation_state.get('related'):
+                related_interface = interface
+                missing_data = relation_state.get('missing_data')
+                break
+        # No relation ID for the generic_interface?
+        if not related_interface:
+            juju_log("{} relation is missing and must be related for "
+                     "functionality. ".format(generic_interface), 'WARN')
+            state = 'blocked'
+            missing_relations.add(generic_interface)
+        else:
+            # Relation ID exists but no related unit
+            if not missing_data:
+                # Edge case - relation ID exists but the relation is
+                # departing or broken
+                _hook_name = hook_name()
+                if (('departed' in _hook_name or 'broken' in _hook_name) and
+                        related_interface in _hook_name):
+                    state = 'blocked'
+                    missing_relations.add(generic_interface)
+                    juju_log("{} relation's interface, {}, "
+                             "relationship is departed or broken "
+                             "and is required for functionality."
+                             "".format(generic_interface, related_interface),
+                             "WARN")
+                # Normal case - relation ID exists but no related unit
+                # (joining)
+                else:
+                    juju_log("{} relation's interface, {}, is related but has"
+                             " no units in the relation."
+                             "".format(generic_interface, related_interface),
+                             "INFO")
+            # Related unit exists and data missing on the relation
+            else:
+                juju_log("{} relation's interface, {}, is related awaiting "
+                         "the following data from the relationship: {}. "
+                         "".format(generic_interface, related_interface,
+                                   ", ".join(missing_data)), "INFO")
+            if state != 'blocked':
+                state = 'waiting'
+            if generic_interface not in missing_relations:
+                incomplete_relations.add(generic_interface)
+
+    if missing_relations:
+        message = "Missing relations: {}".format(", ".join(missing_relations))
+        if incomplete_relations:
+            message += "; incomplete relations: {}" \
+                       "".format(", ".join(incomplete_relations))
+        state = 'blocked'
+    elif incomplete_relations:
+        message = "Incomplete relations: {}" \
+                  "".format(", ".join(incomplete_relations))
+        state = 'waiting'
+
+    return state, message
+
+
+def _ows_check_charm_func(state, message, charm_func_with_configs):
+    """Run a custom check function for the charm to see if it wants to
+    change the state.  This is only run if not in 'maintenance' and
+    tests to see if the new state is more important than the previous
+    one determined by the interfaces/relations check.
+
+    @param state: the previously determined state so far.
+    @param message: the user oriented message so far.
+    @param charm_func: a callable function that returns state, message
+    @returns state, message strings.
+ """ + if charm_func_with_configs: + charm_state, charm_message = charm_func_with_configs() + if (charm_state != 'active' and + charm_state != 'unknown' and + charm_state is not None): + state = workload_state_compare(state, charm_state) + if message: + charm_message = charm_message.replace("Incomplete relations: ", + "") + message = "{}, {}".format(message, charm_message) + else: + message = charm_message + return state, message + + +def _ows_check_services_running(services, ports): + """Check that the services that should be running are actually running + and that any ports specified are being listened to. + + @param services: list of strings OR dictionary specifying services/ports + @param ports: list of ports + @returns state, message: strings or None, None + """ + messages = [] + state = None + if services is not None: + services = _extract_services_list_helper(services) + services_running, running = _check_running_services(services) + if not all(running): + messages.append( + "Services not running that should be: {}" + .format(", ".join(_filter_tuples(services_running, False)))) + state = 'blocked' + # also verify that the ports that should be open are open + # NB, that ServiceManager objects only OPTIONALLY have ports + map_not_open, ports_open = ( + _check_listening_on_services_ports(services)) + if not all(ports_open): + # find which service has missing ports. They are in service + # order which makes it a bit easier. + message_parts = {service: ", ".join([str(v) for v in open_ports]) + for service, open_ports in map_not_open.items()} + message = ", ".join( + ["{}: [{}]".format(s, sp) for s, sp in message_parts.items()]) + messages.append( + "Services with ports not open that should be: {}" + .format(message)) + state = 'blocked' + + if ports is not None: + # and we can also check ports which we don't know the service for + ports_open, ports_open_bools = _check_listening_on_ports_list(ports) + if not all(ports_open_bools): + messages.append( + "Ports which should be open, but are not: {}" + .format(", ".join([str(p) for p, v in ports_open + if not v]))) + state = 'blocked' + + if state is not None: + message = "; ".join(messages) + return state, message + + return None, None + + +def _extract_services_list_helper(services): + """Extract a OrderedDict of {service: [ports]} of the supplied services + for use by the other functions. + + The services object can either be: + - None : no services were passed (an empty dict is returned) + - a list of strings + - A dictionary (optionally OrderedDict) {service_name: {'service': ..}} + - An array of [{'service': service_name, ...}, ...] + + @param services: see above + @returns OrderedDict(service: [ports], ...) + """ + if services is None: + return {} + if isinstance(services, dict): + services = services.values() + # either extract the list of services from the dictionary, or if + # it is a simple string, use that. i.e. works with mixed lists. + _s = OrderedDict() + for s in services: + if isinstance(s, dict) and 'service' in s: + _s[s['service']] = s.get('ports', []) + if isinstance(s, str): + _s[s] = [] + return _s + + +def _check_running_services(services): + """Check that the services dict provided is actually running and provide + a list of (service, boolean) tuples for each service. + + Returns both a zipped list of (service, boolean) and a list of booleans + in the same order as the services. + + @param services: OrderedDict of strings: [ports], one for each service to + check. 
+    @returns [(service, boolean), ...], : results for checks
+             [boolean]                  : just the result of the service checks
+    """
+    services_running = [service_running(s) for s in services]
+    return list(zip(services, services_running)), services_running
+
+
+def _check_listening_on_services_ports(services, test=False):
+    """Check that the unit is actually listening (has the port open) on the
+    ports that the service specifies are open. If test is True then the
+    function returns the services with ports that are open rather than
+    closed.
+
+    Returns an OrderedDict of service: ports and a list of booleans
+
+    @param services: OrderedDict(service: [port, ...], ...)
+    @param test: default=False, if False, test for closed, otherwise open.
+    @returns OrderedDict(service: [port-not-open, ...]...), [boolean]
+    """
+    test = not(not(test))  # ensure test is True or False
+    all_ports = list(itertools.chain(*services.values()))
+    ports_states = [port_has_listener('0.0.0.0', p) for p in all_ports]
+    map_ports = OrderedDict()
+    matched_ports = [p for p, opened in zip(all_ports, ports_states)
+                     if opened == test]  # essentially opened xor test
+    for service, ports in services.items():
+        set_ports = set(ports).intersection(matched_ports)
+        if set_ports:
+            map_ports[service] = set_ports
+    return map_ports, ports_states
+
+
+def _check_listening_on_ports_list(ports):
+    """Check that the given list of ports is being listened to.
+
+    Returns a list of (port, boolean) tuples and a list of the
+    booleans.
+
+    @param ports: LIST of port numbers.
+    @returns [(port_num, boolean), ...], [boolean]
+    """
+    ports_open = [port_has_listener('0.0.0.0', p) for p in ports]
+    return zip(ports, ports_open), ports_open
+
+
+def _filter_tuples(services_states, state):
+    """Return a simple list from a list of tuples according to the condition
+
+    @param services_states: LIST of (string, boolean): service and running
+        state.
+    @param state: Boolean to match the tuple against.
+    @returns [LIST of strings] that matched the tuple RHS.
+    """
+    return [s for s, b in services_states if b == state]
+
+
+def workload_state_compare(current_workload_state, workload_state):
+    """ Return highest priority of two states"""
+    hierarchy = {'unknown': -1,
+                 'active': 0,
+                 'maintenance': 1,
+                 'waiting': 2,
+                 'blocked': 3,
+                 }
+
+    if hierarchy.get(workload_state) is None:
+        workload_state = 'unknown'
+    if hierarchy.get(current_workload_state) is None:
+        current_workload_state = 'unknown'
+
+    # Set workload_state based on hierarchy of statuses
+    if hierarchy.get(current_workload_state) > hierarchy.get(workload_state):
+        return current_workload_state
+    else:
+        return workload_state
+
+
+def incomplete_relation_data(configs, required_interfaces):
+    """Check complete contexts against required_interfaces
+    Return dictionary of incomplete relation data.
+
+    configs is an OSConfigRenderer object with configs registered
+
+    required_interfaces is a dictionary of required general interfaces
+    with dictionary values of possible specific interfaces.
+    Example:
+       required_interfaces = {'database': ['shared-db', 'pgsql-db']}
+
+    The interface is said to be satisfied if any one of the interfaces in the
+    list has a complete context.
+
+    Return dictionary of incomplete or missing required contexts with relation
+    status of interfaces and any missing data points.
Example:
+            {'message':
+                 {'amqp': {'missing_data': ['rabbitmq_password'], 'related': True},
+                  'zeromq-configuration': {'related': False}},
+             'identity':
+                 {'identity-service': {'related': False}},
+             'database':
+                 {'pgsql-db': {'related': False},
+                  'shared-db': {'related': True}}}
+    """
+    complete_ctxts = configs.complete_contexts()
+    incomplete_relations = [
+        svc_type
+        for svc_type, interfaces in required_interfaces.items()
+        if not set(interfaces).intersection(complete_ctxts)]
+    return {
+        i: configs.get_incomplete_context_data(required_interfaces[i])
+        for i in incomplete_relations}
+
+
+def do_action_openstack_upgrade(package, upgrade_callback, configs):
+    """Perform action-managed OpenStack upgrade.
+
+    Upgrades packages to the configured openstack-origin version and sets
+    the corresponding action status as a result.
+
+    If the charm was installed from source we cannot upgrade it.
+    For backwards compatibility a config flag (action-managed-upgrade) must
+    be set for this code to run, otherwise a full service level upgrade will
+    fire on config-changed.
+
+    @param package: package name for determining if upgrade available
+    @param upgrade_callback: function callback to charm's upgrade function
+    @param configs: templating object derived from OSConfigRenderer class
+
+    @return: True if upgrade successful; False if upgrade failed or skipped
+    """
+    ret = False
+
+    if openstack_upgrade_available(package):
+        if config('action-managed-upgrade'):
+            juju_log('Upgrading OpenStack release')
+
+            try:
+                upgrade_callback(configs=configs)
+                action_set({'outcome': 'success, upgrade completed.'})
+                ret = True
+            except Exception:
+                action_set({'outcome': 'upgrade failed, see traceback.'})
+                action_set({'traceback': traceback.format_exc()})
+                action_fail('do_openstack_upgrade resulted in an '
+                            'unexpected error')
+        else:
+            action_set({'outcome': 'action-managed-upgrade config is '
+                                   'False, skipped upgrade.'})
+    else:
+        action_set({'outcome': 'no upgrade available.'})
+
+    return ret
+
+
+def remote_restart(rel_name, remote_service=None):
+    """Set a restart trigger on all relations of type rel_name."""
+    trigger = {
+        'restart-trigger': str(uuid.uuid4()),
+    }
+    if remote_service:
+        trigger['remote-service'] = remote_service
+    for rid in relation_ids(rel_name):
+        # This subordinate can be related to two separate services using
+        # different subordinate relations, so only issue the restart if
+        # the principal is connected down the relation we think it is
+        if related_units(relid=rid):
+            relation_set(relation_id=rid,
+                         relation_settings=trigger,
+                         )
+
+
+def check_actually_paused(services=None, ports=None):
+    """Check that services listed in the services object and ports
+    are actually closed (not listened to), to verify that the unit is
+    properly paused.
+ + @param services: See _extract_services_list_helper + @returns status, : string for status (None if okay) + message : string for problem for status_set + """ + state = None + message = None + messages = [] + if services is not None: + services = _extract_services_list_helper(services) + services_running, services_states = _check_running_services(services) + if any(services_states): + # there shouldn't be any running so this is a problem + messages.append("these services running: {}" + .format(", ".join( + _filter_tuples(services_running, True)))) + state = "blocked" + ports_open, ports_open_bools = ( + _check_listening_on_services_ports(services, True)) + if any(ports_open_bools): + message_parts = {service: ", ".join([str(v) for v in open_ports]) + for service, open_ports in ports_open.items()} + message = ", ".join( + ["{}: [{}]".format(s, sp) for s, sp in message_parts.items()]) + messages.append( + "these service:ports are open: {}".format(message)) + state = 'blocked' + if ports is not None: + ports_open, bools = _check_listening_on_ports_list(ports) + if any(bools): + messages.append( + "these ports which should be closed, but are open: {}" + .format(", ".join([str(p) for p, v in ports_open if v]))) + state = 'blocked' + if messages: + message = ("Services should be paused but {}" + .format(", ".join(messages))) + return state, message + + +def set_unit_paused(): + """Set the unit to a paused state in the local kv() store. + This does NOT actually pause the unit + """ + with unitdata.HookData()() as t: + kv = t[0] + kv.set('unit-paused', True) + + +def clear_unit_paused(): + """Clear the unit from a paused state in the local kv() store + This does NOT actually restart any services - it only clears the + local state. + """ + with unitdata.HookData()() as t: + kv = t[0] + kv.set('unit-paused', False) + + +def is_unit_paused_set(): + """Return the state of the kv().get('unit-paused'). + This does NOT verify that the unit really is paused. + + To help with units that don't have HookData() (testing) + if it excepts, return False + """ + try: + with unitdata.HookData()() as t: + kv = t[0] + # transform something truth-y into a Boolean. + return not(not(kv.get('unit-paused'))) + except Exception: + return False + + +def manage_payload_services(action, services=None, charm_func=None): + """Run an action against all services. + + An optional charm_func() can be called. It should raise an Exception to + indicate that the function failed. If it was succesfull it should return + None or an optional message. + + The signature for charm_func is: + charm_func() -> message: str + + charm_func() is executed after any services are stopped, if supplied. + + The services object can either be: + - None : no services were passed (an empty dict is returned) + - a list of strings + - A dictionary (optionally OrderedDict) {service_name: {'service': ..}} + - An array of [{'service': service_name, ...}, ...] + + :param action: Action to run: pause, resume, start or stop. + :type action: str + :param services: See above + :type services: See above + :param charm_func: function to run for custom charm pausing. 
+ :type charm_func: f() + :returns: Status boolean and list of messages + :rtype: (bool, []) + :raises: RuntimeError + """ + actions = { + 'pause': service_pause, + 'resume': service_resume, + 'start': service_start, + 'stop': service_stop} + action = action.lower() + if action not in actions.keys(): + raise RuntimeError( + "action: {} must be one of: {}".format(action, + ', '.join(actions.keys()))) + services = _extract_services_list_helper(services) + messages = [] + success = True + if services: + for service in services.keys(): + rc = actions[action](service) + if not rc: + success = False + messages.append("{} didn't {} cleanly.".format(service, + action)) + if charm_func: + try: + message = charm_func() + if message: + messages.append(message) + except Exception as e: + success = False + messages.append(str(e)) + return success, messages + + +def pause_unit(assess_status_func, services=None, ports=None, + charm_func=None): + """Pause a unit by stopping the services and setting 'unit-paused' + in the local kv() store. + + Also checks that the services have stopped and ports are no longer + being listened to. + + An optional charm_func() can be called that can either raise an + Exception or return non None, None to indicate that the unit + didn't pause cleanly. + + The signature for charm_func is: + charm_func() -> message: string + + charm_func() is executed after any services are stopped, if supplied. + + The services object can either be: + - None : no services were passed (an empty dict is returned) + - a list of strings + - A dictionary (optionally OrderedDict) {service_name: {'service': ..}} + - An array of [{'service': service_name, ...}, ...] + + @param assess_status_func: (f() -> message: string | None) or None + @param services: OPTIONAL see above + @param ports: OPTIONAL list of port + @param charm_func: function to run for custom charm pausing. + @returns None + @raises Exception(message) on an error for action_fail(). + """ + _, messages = manage_payload_services( + 'pause', + services=services, + charm_func=charm_func) + set_unit_paused() + if assess_status_func: + message = assess_status_func() + if message: + messages.append(message) + if messages and not is_unit_upgrading_set(): + raise Exception("Couldn't pause: {}".format("; ".join(messages))) + + +def resume_unit(assess_status_func, services=None, ports=None, + charm_func=None): + """Resume a unit by starting the services and clearning 'unit-paused' + in the local kv() store. + + Also checks that the services have started and ports are being listened to. + + An optional charm_func() can be called that can either raise an + Exception or return non None to indicate that the unit + didn't resume cleanly. + + The signature for charm_func is: + charm_func() -> message: string + + charm_func() is executed after any services are started, if supplied. + + The services object can either be: + - None : no services were passed (an empty dict is returned) + - a list of strings + - A dictionary (optionally OrderedDict) {service_name: {'service': ..}} + - An array of [{'service': service_name, ...}, ...] + + @param assess_status_func: (f() -> message: string | None) or None + @param services: OPTIONAL see above + @param ports: OPTIONAL list of port + @param charm_func: function to run for custom charm resuming. + @returns None + @raises Exception(message) on an error for action_fail(). 
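+
+    Sketch of a resume action handler (hedged; assess_status and SERVICES
+    are assumed to be provided by the charm)::
+
+        resume_unit(assess_status, services=SERVICES)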
+ """ + _, messages = manage_payload_services( + 'resume', + services=services, + charm_func=charm_func) + clear_unit_paused() + if assess_status_func: + message = assess_status_func() + if message: + messages.append(message) + if messages: + raise Exception("Couldn't resume: {}".format("; ".join(messages))) + + +def make_assess_status_func(*args, **kwargs): + """Creates an assess_status_func() suitable for handing to pause_unit() + and resume_unit(). + + This uses the _determine_os_workload_status(...) function to determine + what the workload_status should be for the unit. If the unit is + not in maintenance or active states, then the message is returned to + the caller. This is so an action that doesn't result in either a + complete pause or complete resume can signal failure with an action_fail() + """ + def _assess_status_func(): + state, message = _determine_os_workload_status(*args, **kwargs) + status_set(state, message) + if state not in ['maintenance', 'active']: + return message + return None + + return _assess_status_func + + +def pausable_restart_on_change(restart_map, stopstart=False, + restart_functions=None): + """A restart_on_change decorator that checks to see if the unit is + paused. If it is paused then the decorated function doesn't fire. + + This is provided as a helper, as the @restart_on_change(...) decorator + is in core.host, yet the openstack specific helpers are in this file + (contrib.openstack.utils). Thus, this needs to be an optional feature + for openstack charms (or charms that wish to use the openstack + pause/resume type features). + + It is used as follows: + + from contrib.openstack.utils import ( + pausable_restart_on_change as restart_on_change) + + @restart_on_change(restart_map, stopstart=) + def some_hook(...): + pass + + see core.utils.restart_on_change() for more details. + + Note restart_map can be a callable, in which case, restart_map is only + evaluated at runtime. This means that it is lazy and the underlying + function won't be called if the decorated function is never called. Note, + retains backwards compatibility for passing a non-callable dictionary. + + @param f: the function to decorate + @param restart_map: (optionally callable, which then returns the + restart_map) the restart map {conf_file: [services]} + @param stopstart: DEFAULT false; whether to stop, start or just restart + @returns decorator to use a restart_on_change with pausability + """ + def wrap(f): + # py27 compatible nonlocal variable. When py3 only, replace with + # nonlocal keyword + __restart_map_cache = {'cache': None} + + @functools.wraps(f) + def wrapped_f(*args, **kwargs): + if is_unit_paused_set(): + return f(*args, **kwargs) + if __restart_map_cache['cache'] is None: + __restart_map_cache['cache'] = restart_map() \ + if callable(restart_map) else restart_map + # otherwise, normal restart_on_change functionality + return restart_on_change_helper( + (lambda: f(*args, **kwargs)), __restart_map_cache['cache'], + stopstart, restart_functions) + return wrapped_f + return wrap + + +def ordered(orderme): + """Converts the provided dictionary into a collections.OrderedDict. + + The items in the returned OrderedDict will be inserted based on the + natural sort order of the keys. Nested dictionaries will also be sorted + in order to ensure fully predictable ordering. + + :param orderme: the dict to order + :return: collections.OrderedDict + :raises: ValueError: if `orderme` isn't a dict instance. 
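+
+    Example::
+
+        >>> ordered({'b': 1, 'a': {'d': 2, 'c': 3}})
+        OrderedDict([('a', OrderedDict([('c', 3), ('d', 2)])), ('b', 1)])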
+    """
+    if not isinstance(orderme, dict):
+        raise ValueError('argument must be a dict type')
+
+    result = OrderedDict()
+    for k, v in sorted(six.iteritems(orderme), key=lambda x: x[0]):
+        if isinstance(v, dict):
+            result[k] = ordered(v)
+        else:
+            result[k] = v
+
+    return result
+
+
+def config_flags_parser(config_flags):
+    """Parses config flags string into dict.
+
+    This parsing method supports a few different formats for the config
+    flag values to be parsed:
+
+      1. A string in the simple format of key=value pairs, with the possibility
+         of specifying multiple key value pairs within the same string. For
+         example, a string in the format of 'key1=value1, key2=value2' will
+         return a dict of:
+
+             {'key1': 'value1', 'key2': 'value2'}.
+
+      2. A string in the above format, but supporting a comma-delimited list
+         of values for the same key. For example, a string in the format of
+         'key1=value1, key2=value3,value4,value5' will return a dict of:
+
+             {'key1': 'value1', 'key2': 'value3,value4,value5'}
+
+      3. A string containing a colon character (:) prior to an equal
+         character (=) will be treated as yaml and parsed as such. This can be
+         used to specify more complex key value pairs. For example,
+         a string in the format of 'key1: subkey1=value1, subkey2=value2' will
+         return a dict of:
+
+             {'key1': 'subkey1=value1, subkey2=value2'}
+
+    The provided config_flags string may be a list of comma-separated values
+    which themselves may be comma-separated lists of values.
+    """
+    # If we find a colon before an equals sign then treat it as yaml.
+    # Note: limit it to finding the colon first since this indicates assignment
+    # for inline yaml.
+    colon = config_flags.find(':')
+    equals = config_flags.find('=')
+    if colon > 0:
+        if colon < equals or equals < 0:
+            return ordered(yaml.safe_load(config_flags))
+
+    if config_flags.find('==') >= 0:
+        juju_log("config_flags is not in expected format (key=value)",
+                 level=ERROR)
+        raise OSContextError
+
+    # strip the following from each value.
+    post_strippers = ' ,'
+    # we strip any leading/trailing '=' or ' ' from the string then
+    # split on '='.
+    split = config_flags.strip(' =').split('=')
+    limit = len(split)
+    flags = OrderedDict()
+    for i in range(0, limit - 1):
+        current = split[i]
+        next = split[i + 1]
+        vindex = next.rfind(',')
+        if (i == limit - 2) or (vindex < 0):
+            value = next
+        else:
+            value = next[:vindex]
+
+        if i == 0:
+            key = current
+        else:
+            # if this is not the first entry, expect an embedded key.
+            index = current.rfind(',')
+            if index < 0:
+                juju_log("Invalid config value(s) at index %s" % (i),
+                         level=ERROR)
+                raise OSContextError
+            key = current[index + 1:]
+
+        # Add to collection.
+        flags[key.strip(post_strippers)] = value.rstrip(post_strippers)
+
+    return flags
+
+
+def os_application_version_set(package):
+    '''Set version of application for Juju 2.0 and later'''
+    application_version = get_upstream_version(package)
+    # NOTE(jamespage) if not able to figure out package version, fallback to
+    # openstack codename version detection.
+    if not application_version:
+        application_version_set(os_release(package))
+    else:
+        application_version_set(application_version)
+
+
+def os_application_status_set(check_function):
+    """Run the supplied function and set the application status accordingly.
+
+    :param check_function: Function to run to get app states and messages.
+ :type check_function: function + """ + state, message = check_function() + status_set(state, message, application=True) + + +def enable_memcache(source=None, release=None, package=None): + """Determine if memcache should be enabled on the local unit + + @param release: release of OpenStack currently deployed + @param package: package to derive OpenStack version deployed + @returns boolean Whether memcache should be enabled + """ + _release = None + if release: + _release = release + else: + _release = os_release(package) + if not _release: + _release = get_os_codename_install_source(source) + + return CompareOpenStackReleases(_release) >= 'mitaka' + + +def token_cache_pkgs(source=None, release=None): + """Determine additional packages needed for token caching + + @param source: source string for charm + @param release: release of OpenStack currently deployed + @returns List of package to enable token caching + """ + packages = [] + if enable_memcache(source=source, release=release): + packages.extend(['memcached', 'python-memcache']) + return packages + + +def update_json_file(filename, items): + """Updates the json `filename` with a given dict. + :param filename: path to json file (e.g. /etc/glance/policy.json) + :param items: dict of items to update + """ + if not items: + return + + with open(filename) as fd: + policy = json.load(fd) + + # Compare before and after and if nothing has changed don't write the file + # since that could cause unnecessary service restarts. + before = json.dumps(policy, indent=4, sort_keys=True) + policy.update(items) + after = json.dumps(policy, indent=4, sort_keys=True) + if before == after: + return + + with open(filename, "w") as fd: + fd.write(after) + + +@cached +def snap_install_requested(): + """ Determine if installing from snaps + + If openstack-origin is of the form snap:track/channel[/branch] + and channel is in SNAPS_CHANNELS return True. + """ + origin = config('openstack-origin') or "" + if not origin.startswith('snap:'): + return False + + _src = origin[5:] + if '/' in _src: + channel = _src.split('/')[1] + else: + # Handle snap:track with no channel + channel = 'stable' + return valid_snap_channel(channel) + + +def get_snaps_install_info_from_origin(snaps, src, mode='classic'): + """Generate a dictionary of snap install information from origin + + @param snaps: List of snaps + @param src: String of openstack-origin or source of the form + snap:track/channel + @param mode: String classic, devmode or jailmode + @returns: Dictionary of snaps with channels and modes + """ + + if not src.startswith('snap:'): + juju_log("Snap source is not a snap origin", 'WARN') + return {} + + _src = src[5:] + channel = '--channel={}'.format(_src) + + return {snap: {'channel': channel, 'mode': mode} + for snap in snaps} + + +def install_os_snaps(snaps, refresh=False): + """Install OpenStack snaps from channel and with mode + + @param snaps: Dictionary of snaps with channels and modes of the form: + {'snap_name': {'channel': 'snap_channel', + 'mode': 'snap_mode'}} + Where channel is a snapstore channel and mode is --classic, --devmode + or --jailmode. 
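+
+        Example value (illustrative only)::
+
+            {'openstackclients': {'channel': '--channel=stable',
+                                  'mode': '--jailmode'}}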
+    @param refresh: Use snap refresh (rather than snap install) for snaps
+        that are already installed
+    """
+
+    def _ensure_flag(flag):
+        if flag.startswith('--'):
+            return flag
+        return '--{}'.format(flag)
+
+    if refresh:
+        for snap in snaps.keys():
+            snap_refresh(snap,
+                         _ensure_flag(snaps[snap]['channel']),
+                         _ensure_flag(snaps[snap]['mode']))
+    else:
+        for snap in snaps.keys():
+            snap_install(snap,
+                         _ensure_flag(snaps[snap]['channel']),
+                         _ensure_flag(snaps[snap]['mode']))
+
+
+def set_unit_upgrading():
+    """Set the unit to an upgrading state in the local kv() store.
+    """
+    with unitdata.HookData()() as t:
+        kv = t[0]
+        kv.set('unit-upgrading', True)
+
+
+def clear_unit_upgrading():
+    """Clear the unit from an upgrading state in the local kv() store
+    """
+    with unitdata.HookData()() as t:
+        kv = t[0]
+        kv.set('unit-upgrading', False)
+
+
+def is_unit_upgrading_set():
+    """Return the state of the kv().get('unit-upgrading').
+
+    To help with units that don't have HookData() (testing)
+    if it excepts, return False
+    """
+    try:
+        with unitdata.HookData()() as t:
+            kv = t[0]
+            # transform something truth-y into a Boolean.
+            return not(not(kv.get('unit-upgrading')))
+    except Exception:
+        return False
+
+
+def series_upgrade_prepare(pause_unit_helper=None, configs=None):
+    """ Run common series upgrade prepare tasks.
+
+    :param pause_unit_helper: function: Function to pause unit
+    :param configs: OSConfigRenderer object: Configurations
+    :returns None:
+    """
+    set_unit_upgrading()
+    if pause_unit_helper and configs:
+        if not is_unit_paused_set():
+            pause_unit_helper(configs)
+
+
+def series_upgrade_complete(resume_unit_helper=None, configs=None):
+    """ Run common series upgrade complete tasks.
+
+    :param resume_unit_helper: function: Function to resume unit
+    :param configs: OSConfigRenderer object: Configurations
+    :returns None:
+    """
+    clear_unit_paused()
+    clear_unit_upgrading()
+    if configs:
+        configs.write_all()
+        if resume_unit_helper:
+            resume_unit_helper(configs)
+
+
+def is_db_initialised():
+    """Check leader storage to see if database has been initialised.
+
+    :returns: Whether DB has been initialised
+    :rtype: bool
+    """
+    db_initialised = None
+    if leader_get('db-initialised') is None:
+        juju_log(
+            'db-initialised key missing, assuming db is not initialised',
+            'DEBUG')
+        db_initialised = False
+    else:
+        db_initialised = bool_from_string(leader_get('db-initialised'))
+    juju_log('Database initialised: {}'.format(db_initialised), 'DEBUG')
+    return db_initialised
+
+
+def set_db_initialised():
+    """Add flag to leader storage to indicate database has been initialised.
+    """
+    juju_log('Setting db-initialised to True', 'DEBUG')
+    leader_set({'db-initialised': True})
+
+
+def is_db_maintenance_mode(relid=None):
+    """Check relation data from notifications of db in maintenance mode.
+
+    :returns: Whether db has notified it is in maintenance mode.
+ :rtype: bool + """ + juju_log('Checking for maintenance notifications', 'DEBUG') + if relid: + r_ids = [relid] + else: + r_ids = relation_ids('shared-db') + rids_units = [(r, u) for r in r_ids for u in related_units(r)] + notifications = [] + for r_id, unit in rids_units: + settings = relation_get(unit=unit, rid=r_id) + for key, value in settings.items(): + if value and key in DB_MAINTENANCE_KEYS: + juju_log( + 'Unit: {}, Key: {}, Value: {}'.format(unit, key, value), + 'DEBUG') + try: + notifications.append(bool_from_string(value)) + except ValueError: + juju_log( + 'Could not discern bool from {}'.format(value), + 'WARN') + pass + return True in notifications + + +@cached +def container_scoped_relations(): + """Get all the container scoped relations + + :returns: List of relation names + :rtype: List + """ + md = metadata() + relations = [] + for relation_type in ('provides', 'requires', 'peers'): + for relation in md.get(relation_type, []): + if md[relation_type][relation].get('scope') == 'container': + relations.append(relation) + return relations + + +def is_db_ready(use_current_context=False, rel_name=None): + """Check remote database is ready to be used. + + Database relations are expected to provide a list of 'allowed' units to + confirm that the database is ready for use by those units. + + If db relation has provided this information and local unit is a member, + returns True otherwise False. + + :param use_current_context: Whether to limit checks to current hook + context. + :type use_current_context: bool + :param rel_name: Name of relation to check + :type rel_name: string + :returns: Whether remote db is ready. + :rtype: bool + :raises: Exception + """ + key = 'allowed_units' + + rel_name = rel_name or 'shared-db' + this_unit = local_unit() + + if use_current_context: + if relation_id() in relation_ids(rel_name): + rids_units = [(None, None)] + else: + raise Exception("use_current_context=True but not in {} " + "rel hook contexts (currently in {})." + .format(rel_name, relation_id())) + else: + rids_units = [(r_id, u) + for r_id in relation_ids(rel_name) + for u in related_units(r_id)] + + for rid, unit in rids_units: + allowed_units = relation_get(rid=rid, unit=unit, attribute=key) + if allowed_units and this_unit in allowed_units.split(): + juju_log("This unit ({}) is in allowed unit list from {}".format( + this_unit, + unit), 'DEBUG') + return True + + juju_log("This unit was not found in any allowed unit list") + return False + + +def is_expected_scale(peer_relation_name='cluster'): + """Query juju goal-state to determine whether our peer- and dependency- + relations are at the expected scale. + + Useful for deferring per unit per relation housekeeping work until we are + ready to complete it successfully and without unnecessary repetiton. + + Always returns True if version of juju used does not support goal-state. + + :param peer_relation_name: Name of peer relation + :type rel_name: string + :returns: True or False + :rtype: bool + """ + def _get_relation_id(rel_type): + return next((rid for rid in relation_ids(reltype=rel_type)), None) + + Relation = namedtuple('Relation', 'rel_type rel_id') + peer_rid = _get_relation_id(peer_relation_name) + # Units with no peers should still have a peer relation. 
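+    # (juju creates the peer relation even for a single-unit application,
+    # so a missing peer relation id implies relation data is not yet
+    # available rather than a genuine scale of one)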
+ if not peer_rid: + juju_log('Not at expected scale, no peer relation found', 'DEBUG') + return False + expected_relations = [ + Relation(rel_type='shared-db', rel_id=_get_relation_id('shared-db'))] + if expect_ha(): + expected_relations.append( + Relation( + rel_type='ha', + rel_id=_get_relation_id('ha'))) + juju_log( + 'Checking scale of {} relations'.format( + ','.join([r.rel_type for r in expected_relations])), + 'DEBUG') + try: + if (len(related_units(relid=peer_rid)) < + len(list(expected_peer_units()))): + return False + for rel in expected_relations: + if not rel.rel_id: + juju_log( + 'Expected to find {} relation, but it is missing'.format( + rel.rel_type), + 'DEBUG') + return False + # Goal state returns every unit even for container scoped + # relations but the charm only ever has a relation with + # the local unit. + if rel.rel_type in container_scoped_relations(): + expected_count = 1 + else: + expected_count = len( + list(expected_related_units(reltype=rel.rel_type))) + if len(related_units(relid=rel.rel_id)) < expected_count: + juju_log( + ('Not at expected scale, not enough units on {} ' + 'relation'.format(rel.rel_type)), + 'DEBUG') + return False + except NotImplementedError: + return True + juju_log('All checks have passed, unit is at expected scale', 'DEBUG') + return True + + +def get_peer_key(unit_name): + """Get the peer key for this unit. + + The peer key is the key a unit uses to publish its status down the peer + relation + + :param unit_name: Name of unit + :type unit_name: string + :returns: Peer key for given unit + :rtype: string + """ + return 'unit-state-{}'.format(unit_name.replace('/', '-')) + + +UNIT_READY = 'READY' +UNIT_NOTREADY = 'NOTREADY' +UNIT_UNKNOWN = 'UNKNOWN' +UNIT_STATES = [UNIT_READY, UNIT_NOTREADY, UNIT_UNKNOWN] + + +def inform_peers_unit_state(state, relation_name='cluster'): + """Inform peers of the state of this unit. + + :param state: State of unit to publish + :type state: string + :param relation_name: Name of relation to publish state on + :type relation_name: string + """ + if state not in UNIT_STATES: + raise ValueError( + "Setting invalid state {} for unit".format(state)) + for r_id in relation_ids(relation_name): + relation_set(relation_id=r_id, + relation_settings={ + get_peer_key(local_unit()): state}) + + +def get_peers_unit_state(relation_name='cluster'): + """Get the state of all peers. + + :param relation_name: Name of relation to check peers on. + :type relation_name: string + :returns: Unit states keyed on unit name. + :rtype: dict + :raises: ValueError + """ + r_ids = relation_ids(relation_name) + rids_units = [(r, u) for r in r_ids for u in related_units(r)] + unit_states = {} + for r_id, unit in rids_units: + settings = relation_get(unit=unit, rid=r_id) + unit_states[unit] = settings.get(get_peer_key(unit), UNIT_UNKNOWN) + if unit_states[unit] not in UNIT_STATES: + raise ValueError( + "Unit in unknown state {}".format(unit_states[unit])) + return unit_states + + +def are_peers_ready(relation_name='cluster'): + """Check if all peers are ready. + + :param relation_name: Name of relation to check peers on. + :type relation_name: string + :returns: Whether all units are ready. + :rtype: bool + """ + unit_states = get_peers_unit_state(relation_name) + return all(v == UNIT_READY for v in unit_states.values()) + + +def inform_peers_if_ready(check_unit_ready_func, relation_name='cluster'): + """Inform peers if this unit is ready. + + The check function should return a tuple (state, message). 
A state + of 'READY' indicates the unit is READY. + + :param check_unit_ready_func: Function to run to check readiness + :type check_unit_ready_func: function + :param relation_name: Name of relation to check peers on. + :type relation_name: string + """ + unit_ready, msg = check_unit_ready_func() + if unit_ready: + state = UNIT_READY + else: + state = UNIT_NOTREADY + juju_log('Telling peers this unit is: {}'.format(state), 'DEBUG') + inform_peers_unit_state(state, relation_name) + + +def check_api_unit_ready(check_db_ready=True): + """Check if this unit is ready. + + :param check_db_ready: Include checks of database readiness. + :type check_db_ready: bool + :returns: Whether unit state is ready and status message + :rtype: (bool, str) + """ + unit_state, msg = get_api_unit_status(check_db_ready=check_db_ready) + return unit_state == WORKLOAD_STATES.ACTIVE, msg + + +def get_api_unit_status(check_db_ready=True): + """Return a workload status and message for this unit. + + :param check_db_ready: Include checks of database readiness. + :type check_db_ready: bool + :returns: Workload state and message + :rtype: (bool, str) + """ + unit_state = WORKLOAD_STATES.ACTIVE + msg = 'Unit is ready' + if is_db_maintenance_mode(): + unit_state = WORKLOAD_STATES.MAINTENANCE + msg = 'Database in maintenance mode.' + elif is_unit_paused_set(): + unit_state = WORKLOAD_STATES.BLOCKED + msg = 'Unit paused.' + elif check_db_ready and not is_db_ready(): + unit_state = WORKLOAD_STATES.WAITING + msg = 'Allowed_units list provided but this unit not present' + elif not is_db_initialised(): + unit_state = WORKLOAD_STATES.WAITING + msg = 'Database not initialised' + elif not is_expected_scale(): + unit_state = WORKLOAD_STATES.WAITING + msg = 'Charm and its dependencies not yet at expected scale' + juju_log(msg, 'DEBUG') + return unit_state, msg + + +def check_api_application_ready(): + """Check if this application is ready. + + :returns: Whether application state is ready and status message + :rtype: (bool, str) + """ + app_state, msg = get_api_application_status() + return app_state == WORKLOAD_STATES.ACTIVE, msg + + +def get_api_application_status(): + """Return a workload status and message for this application. + + :returns: Workload state and message + :rtype: (bool, str) + """ + app_state, msg = get_api_unit_status() + if app_state == WORKLOAD_STATES.ACTIVE: + if are_peers_ready(): + return WORKLOAD_STATES.ACTIVE, 'Application Ready' + else: + return WORKLOAD_STATES.WAITING, 'Some units are not ready' + return app_state, msg diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/vaultlocker.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/vaultlocker.py new file mode 100644 index 0000000000000000000000000000000000000000..4ee6c1dba910b824467c2c34e0a3d3d0f3fd906d --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/openstack/vaultlocker.py @@ -0,0 +1,179 @@ +# Copyright 2018 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import json
+import os
+
+import charmhelpers.contrib.openstack.alternatives as alternatives
+import charmhelpers.contrib.openstack.context as context
+
+import charmhelpers.core.hookenv as hookenv
+import charmhelpers.core.host as host
+import charmhelpers.core.templating as templating
+import charmhelpers.core.unitdata as unitdata
+
+VAULTLOCKER_BACKEND = 'charm-vaultlocker'
+
+
+class VaultKVContext(context.OSContextGenerator):
+    """Vault KV context for interaction with vault-kv interfaces"""
+    interfaces = ['secrets-storage']
+
+    def __init__(self, secret_backend=None):
+        super(context.OSContextGenerator, self).__init__()
+        self.secret_backend = (
+            secret_backend or 'charm-{}'.format(hookenv.service_name())
+        )
+
+    def __call__(self):
+        try:
+            import hvac
+        except ImportError:
+            # BUG: #1862085 - if the relation is made to vault, but the
+            # 'encrypt' option is not made, then the charm errors with an
+            # import warning. This catches that, logs a warning, and returns
+            # with an empty context.
+            hookenv.log("VaultKVContext: trying to use hvac python module "
+                        "but it's not available. Is secrets-storage relation "
+                        "made, but encrypt option not set?",
+                        level=hookenv.WARNING)
+            # return an empty context on hvac import error
+            return {}
+        ctxt = {}
+        # NOTE(hopem): see https://bugs.launchpad.net/charm-helpers/+bug/1849323
+        db = unitdata.kv()
+        # currently known-good secret-id
+        secret_id = db.get('secret-id')
+
+        for relation_id in hookenv.relation_ids(self.interfaces[0]):
+            for unit in hookenv.related_units(relation_id):
+                data = hookenv.relation_get(unit=unit,
+                                            rid=relation_id)
+                vault_url = data.get('vault_url')
+                role_id = data.get('{}_role_id'.format(hookenv.local_unit()))
+                token = data.get('{}_token'.format(hookenv.local_unit()))
+
+                if all([vault_url, role_id, token]):
+                    token = json.loads(token)
+                    vault_url = json.loads(vault_url)
+
+                    # Tokens may change when secret_id's are being
+                    # reissued - if so use token to get new secret_id
+                    token_success = False
+                    try:
+                        secret_id = retrieve_secret_id(
+                            url=vault_url,
+                            token=token
+                        )
+                        token_success = True
+                    except hvac.exceptions.InvalidRequest:
+                        # Try next
+                        pass
+
+                    if token_success:
+                        db.set('secret-id', secret_id)
+                        db.flush()
+
+                        ctxt['vault_url'] = vault_url
+                        ctxt['role_id'] = json.loads(role_id)
+                        ctxt['secret_id'] = secret_id
+                        ctxt['secret_backend'] = self.secret_backend
+                        vault_ca = data.get('vault_ca')
+                        if vault_ca:
+                            ctxt['vault_ca'] = json.loads(vault_ca)
+
+                        self.complete = True
+                        break
+                    else:
+                        if secret_id:
+                            ctxt['vault_url'] = vault_url
+                            ctxt['role_id'] = json.loads(role_id)
+                            ctxt['secret_id'] = secret_id
+                            ctxt['secret_backend'] = self.secret_backend
+                            vault_ca = data.get('vault_ca')
+                            if vault_ca:
+                                ctxt['vault_ca'] = json.loads(vault_ca)
+
+            if self.complete:
+                break
+
+        if ctxt:
+            self.complete = True
+
+        return ctxt
+
+
+def write_vaultlocker_conf(context, priority=100):
+    """Write vaultlocker configuration to disk and install alternative
+
+    :param context: Dict of data from vault-kv relation
+    :ptype: context: dict
+    :param priority: Priority of alternative configuration
+    :ptype: priority: int"""
+    charm_vl_path = "/var/lib/charm/{}/vaultlocker.conf".format(
+        hookenv.service_name()
+    )
+    host.mkdir(os.path.dirname(charm_vl_path), perms=0o700)
+    templating.render(source='vaultlocker.conf.j2',
+                      target=charm_vl_path,
+                      context=context, perms=0o600)
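+    # The config is rendered to a per-charm path under /var/lib/charm and
+    # then registered with update-alternatives below, so that the shared
+    # /etc/vaultlocker/vaultlocker.conf path resolves to this charm's copy.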
alternatives.install_alternative('vaultlocker.conf', + '/etc/vaultlocker/vaultlocker.conf', + charm_vl_path, priority) + + +def vault_relation_complete(backend=None): + """Determine whether vault relation is complete + + :param backend: Name of secrets backend requested + :ptype backend: string + :returns: whether the relation to vault is complete + :rtype: bool""" + try: + import hvac + except ImportError: + return False + try: + vault_kv = VaultKVContext(secret_backend=backend or VAULTLOCKER_BACKEND) + vault_kv() + return vault_kv.complete + except hvac.exceptions.InvalidRequest: + return False + + +# TODO: contrib a high level unwrap method to hvac that works +def retrieve_secret_id(url, token): + """Retrieve a response-wrapped secret_id from Vault + + :param url: URL to Vault Server + :ptype url: str + :param token: One shot Token to use + :ptype token: str + :returns: secret_id to use for Vault Access + :rtype: str""" + import hvac + try: + # hvac 0.10.1 changed default adapter to JSONAdapter + client = hvac.Client(url=url, token=token, adapter=hvac.adapters.Request) + except AttributeError: + # hvac < 0.6.2 doesn't have adapter but uses the same response interface + client = hvac.Client(url=url, token=token) + else: + # hvac < 0.9.2 assumes adapter is an instance, so doesn't instantiate + if not isinstance(client.adapter, hvac.adapters.Request): + client.adapter = hvac.adapters.Request(base_uri=url, token=token) + response = client._post('/v1/sys/wrapping/unwrap') + if response.status_code == 200: + data = response.json() + return data['data']['secret_id'] diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/peerstorage/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/peerstorage/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..a8fa60c2a3080d088b8e0abae370aa355c80ec56 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/peerstorage/__init__.py @@ -0,0 +1,267 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import json +import six + +from charmhelpers.core.hookenv import relation_id as current_relation_id +from charmhelpers.core.hookenv import ( + is_relation_made, + relation_ids, + relation_get as _relation_get, + local_unit, + relation_set as _relation_set, + leader_get as _leader_get, + leader_set, + is_leader, +) + + +""" +This helper provides functions to support use of a peer relation +for basic key/value storage, with the added benefit that all storage +can be replicated across peer units. 
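+(Internally each key is stored on the peer relation, or in Juju leader
+storage when the installed Juju supports leadership, so any peer unit
+can read the value back.)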
+
+Requirement to use:
+
+To use this, the "peer_echo()" method has to be called from the peer
+relation's relation-changed hook:
+
+@hooks.hook("cluster-relation-changed")  # Adapt this to your peer relation name
+def cluster_relation_changed():
+    peer_echo()
+
+Once this is done, you can use peer storage from anywhere:
+
+@hooks.hook("some-hook")
+def some_hook():
+    # You can store and retrieve key/values this way:
+    if is_relation_made("cluster"):  # from charmhelpers.core.hookenv
+        # There are peers available so we can work with peer storage
+        peer_store("mykey", "myvalue")
+        value = peer_retrieve("mykey")
+        print(value)
+    else:
+        print("No peers joined the relation, cannot share key/values :(")
+"""
+
+
+def leader_get(attribute=None, rid=None):
+    """Wrapper to ensure that settings are migrated from the peer relation.
+
+    This is to support upgrading an environment that does not support
+    Juju leadership election to one that does.
+
+    If a setting is not extant in the leader-get but is on the relation-get
+    peer rel, it is migrated and marked as such so that it is not re-migrated.
+    """
+    migration_key = '__leader_get_migrated_settings__'
+    if not is_leader():
+        return _leader_get(attribute=attribute)
+
+    settings_migrated = False
+    leader_settings = _leader_get(attribute=attribute)
+    previously_migrated = _leader_get(attribute=migration_key)
+
+    if previously_migrated:
+        migrated = set(json.loads(previously_migrated))
+    else:
+        migrated = set([])
+
+    try:
+        if migration_key in leader_settings:
+            del leader_settings[migration_key]
+    except TypeError:
+        pass
+
+    if attribute:
+        if attribute in migrated:
+            return leader_settings
+
+        # If attribute not present in leader db, check if this unit has set
+        # the attribute in the peer relation
+        if not leader_settings:
+            peer_setting = _relation_get(attribute=attribute, unit=local_unit(),
+                                         rid=rid)
+            if peer_setting:
+                leader_set(settings={attribute: peer_setting})
+                leader_settings = peer_setting
+
+        if leader_settings:
+            settings_migrated = True
+            migrated.add(attribute)
+    else:
+        r_settings = _relation_get(unit=local_unit(), rid=rid)
+        if r_settings:
+            for key in set(r_settings.keys()).difference(migrated):
+                # Leader setting wins
+                if not leader_settings.get(key):
+                    leader_settings[key] = r_settings[key]
+
+                settings_migrated = True
+                migrated.add(key)
+
+            if settings_migrated:
+                leader_set(**leader_settings)
+
+    if migrated and settings_migrated:
+        migrated = json.dumps(list(migrated))
+        leader_set(settings={migration_key: migrated})
+
+    return leader_settings
+
+
+def relation_set(relation_id=None, relation_settings=None, **kwargs):
+    """Attempt to use leader-set if supported in the current version of Juju,
+    otherwise falls back on relation-set.
+
+    Note that we only attempt to use leader-set if the provided relation_id is
+    a peer relation id or no relation id is provided (in which case we assume
+    we are within the peer relation context).
+    """
+    try:
+        if relation_id in relation_ids('cluster'):
+            return leader_set(settings=relation_settings, **kwargs)
+        else:
+            raise NotImplementedError
+    except NotImplementedError:
+        return _relation_set(relation_id=relation_id,
+                             relation_settings=relation_settings, **kwargs)
+
+
+def relation_get(attribute=None, unit=None, rid=None):
+    """Attempt to use leader-get if supported in the current version of Juju,
+    otherwise falls back on relation-get.
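+
+    For example (a sketch, assuming a peer relation named 'cluster')::
+
+        value = relation_get(attribute='mykey', unit=local_unit(),
+                             rid=relation_ids('cluster')[0])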
+
+    Note that we only attempt to use leader-get if the provided rid is a peer
+    relation id or no relation id is provided (in which case we assume we are
+    within the peer relation context).
+    """
+    try:
+        if rid in relation_ids('cluster'):
+            return leader_get(attribute, rid)
+        else:
+            raise NotImplementedError
+    except NotImplementedError:
+        return _relation_get(attribute=attribute, rid=rid, unit=unit)
+
+
+def peer_retrieve(key, relation_name='cluster'):
+    """Retrieve a named key from peer relation `relation_name`."""
+    cluster_rels = relation_ids(relation_name)
+    if len(cluster_rels) > 0:
+        cluster_rid = cluster_rels[0]
+        return relation_get(attribute=key, rid=cluster_rid,
+                            unit=local_unit())
+    else:
+        raise ValueError('Unable to detect '
+                         'peer relation {}'.format(relation_name))
+
+
+def peer_retrieve_by_prefix(prefix, relation_name='cluster', delimiter='_',
+                            inc_list=None, exc_list=None):
+    """Retrieve k/v pairs given a prefix and filter using {inc,exc}_list"""
+    inc_list = inc_list if inc_list else []
+    exc_list = exc_list if exc_list else []
+    peerdb_settings = peer_retrieve('-', relation_name=relation_name)
+    matched = {}
+    if peerdb_settings is None:
+        return matched
+    for k, v in peerdb_settings.items():
+        full_prefix = prefix + delimiter
+        if k.startswith(full_prefix):
+            new_key = k.replace(full_prefix, '')
+            if new_key in exc_list:
+                continue
+            if new_key in inc_list or len(inc_list) == 0:
+                matched[new_key] = v
+    return matched
+
+
+def peer_store(key, value, relation_name='cluster'):
+    """Store the key/value pair on the named peer relation `relation_name`."""
+    cluster_rels = relation_ids(relation_name)
+    if len(cluster_rels) > 0:
+        cluster_rid = cluster_rels[0]
+        relation_set(relation_id=cluster_rid,
+                     relation_settings={key: value})
+    else:
+        raise ValueError('Unable to detect '
+                         'peer relation {}'.format(relation_name))
+
+
+def peer_echo(includes=None, force=False):
+    """Echo filtered attributes back onto the same relation for storage.
+
+    This is a requirement to use the peerstorage module - it needs to be called
+    from the peer relation's changed hook.
+
+    If Juju leader support exists this will be a noop unless force is True.
+    """
+    try:
+        is_leader()
+    except NotImplementedError:
+        pass
+    else:
+        if not force:
+            return  # NOOP if leader-election is supported
+
+    # Use original non-leader calls
+    relation_get = _relation_get
+    relation_set = _relation_set
+
+    rdata = relation_get()
+    echo_data = {}
+    if includes is None:
+        echo_data = rdata.copy()
+        for ex in ['private-address', 'public-address']:
+            if ex in echo_data:
+                echo_data.pop(ex)
+    else:
+        for attribute, value in six.iteritems(rdata):
+            for include in includes:
+                if include in attribute:
+                    echo_data[attribute] = value
+    if len(echo_data) > 0:
+        relation_set(relation_settings=echo_data)
+
+
+def peer_store_and_set(relation_id=None, peer_relation_name='cluster',
+                       peer_store_fatal=False, relation_settings=None,
+                       delimiter='_', **kwargs):
+    """Store passed-in arguments both on the given relation and in peer storage.
+
+    It functions like doing relation_set() and peer_store() at the same time,
+    with the same data.
+
+    @param relation_id: the id of the relation to store the data on. Defaults
+                        to the current relation.
+    @param peer_store_fatal: If set to True, the function will raise an
+                             exception should the peer storage not be
+                             available."""
+
+    relation_settings = relation_settings if relation_settings else {}
+    relation_set(relation_id=relation_id,
+                 relation_settings=relation_settings,
+                 **kwargs)
+    if is_relation_made(peer_relation_name):
+        for key, value in six.iteritems(dict(list(kwargs.items()) +
+                                             list(relation_settings.items()))):
+            key_prefix = relation_id or current_relation_id()
+            peer_store(key_prefix + delimiter + key,
+                       value,
+                       relation_name=peer_relation_name)
+    else:
+        if peer_store_fatal:
+            raise ValueError('Unable to detect '
+                             'peer relation {}'.format(peer_relation_name))
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/python.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/python.py
new file mode 100644
index 0000000000000000000000000000000000000000..84cba8c4eba34fdd705f4ee39628ebd33b5175a2
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/python.py
@@ -0,0 +1,21 @@
+# Copyright 2014-2019 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import absolute_import
+
+# deprecated aliases for backwards compatibility
+from charmhelpers.fetch.python import debug  # noqa
+from charmhelpers.fetch.python import packages  # noqa
+from charmhelpers.fetch.python import rpdb  # noqa
+from charmhelpers.fetch.python import version  # noqa
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/saltstack/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/saltstack/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..d74f4039379b608b5c076fd96c906ff5237f1f83
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/saltstack/__init__.py
@@ -0,0 +1,116 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""Charm Helpers saltstack - declare the state of your machines.
+
+This helper enables you to declare your machine state, rather than
+program it procedurally (and have to test each change to your procedures).
+
+Your install hook can be as simple as::
+
+    {{{
+    from charmhelpers.contrib.saltstack import (
+        install_salt_support,
+        update_machine_state,
+    )
+
+
+    def install():
+        install_salt_support()
+        update_machine_state('machine_states/dependencies.yaml')
+        update_machine_state('machine_states/installed.yaml')
+    }}}
+
+and won't need to change (nor will its tests) when you change the machine
+state.
+
+It uses a package called salt-minion which allows various formats for
+specifying resources, such as::
+
+    {{{
+    /srv/{{ basedir }}:
+        file.directory:
+            - group: ubunet
+            - user: ubunet
+            - require:
+                - user: ubunet
+            - recurse:
+                - user
+                - group
+
+    ubunet:
+        group.present:
+            - gid: 1500
+        user.present:
+            - uid: 1500
+            - gid: 1500
+            - createhome: False
+            - require:
+                - group: ubunet
+    }}}
+
+The docs for all the different state definitions are at:
+    http://docs.saltstack.com/ref/states/all/
+
+
+TODO:
+  * Add test helpers which will ensure that machine state definitions
+    are functionally (but not necessarily logically) correct (i.e. getting
+    salt to parse all state defs).
+  * Add a link to a public bootstrap charm example / blogpost.
+  * Find a way to obviate the need to use the grains['charm_dir'] syntax
+    in templates.
+"""
+# Copyright 2013 Canonical Ltd.
+#
+# Authors:
+#  Charm Helpers Developers
+import subprocess
+
+import charmhelpers.contrib.templating.contexts
+import charmhelpers.core.host
+import charmhelpers.core.hookenv
+import charmhelpers.fetch
+
+
+salt_grains_path = '/etc/salt/grains'
+
+
+def install_salt_support(from_ppa=True):
+    """Installs the salt-minion helper for machine state.
+
+    By default the salt-minion package is installed from
+    the saltstack PPA. If from_ppa is False you must ensure
+    that the salt-minion package is available in the apt cache.
+    """
+    if from_ppa:
+        subprocess.check_call([
+            '/usr/bin/add-apt-repository',
+            '--yes',
+            'ppa:saltstack/salt',
+        ])
+        subprocess.check_call(['/usr/bin/apt-get', 'update'])
+    # We install salt-common as salt-minion would run the salt-minion
+    # daemon.
+    charmhelpers.fetch.apt_install('salt-common')
+
+
+def update_machine_state(state_path):
+    """Update the machine state using the provided state declaration."""
+    charmhelpers.contrib.templating.contexts.juju_state_to_yaml(
+        salt_grains_path)
+    subprocess.check_call([
+        'salt-call',
+        '--local',
+        'state.template',
+        state_path,
+    ])
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/ssl/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/ssl/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..1d238b529e44ad2761d86e96b4845589f8e951c4
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/ssl/__init__.py
@@ -0,0 +1,92 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import subprocess
+from charmhelpers.core import hookenv
+
+
+def generate_selfsigned(keyfile, certfile, keysize="1024", config=None, subject=None, cn=None):
+    """Generate selfsigned SSL keypair
+
+    You must provide one of the 3 optional arguments:
+    config, subject or cn
+    If more than one is provided the leftmost will be used
+
+    Arguments:
+    keyfile -- (required) full path to the keyfile to be created
+    certfile -- (required) full path to the certfile to be created
+    keysize -- (optional) SSL key length
+    config -- (optional) openssl configuration file
+    subject -- (optional) dictionary with SSL subject variables
+    cn -- (optional) certificate common name
+
+    Required keys in subject dict:
+    cn -- Common name (e.g. FQDN)
+
+    Optional keys in subject dict
+    country -- Country Name (2 letter code)
+    state -- State or Province Name (full name)
+    locality -- Locality Name (eg, city)
+    organization -- Organization Name (eg, company)
+    organizational_unit -- Organizational Unit Name (eg, section)
+    email -- Email Address
+    """
+
+    cmd = []
+    if config:
+        cmd = ["/usr/bin/openssl", "req", "-new", "-newkey",
+               "rsa:{}".format(keysize), "-days", "365", "-nodes", "-x509",
+               "-keyout", keyfile,
+               "-out", certfile, "-config", config]
+    elif subject:
+        ssl_subject = ""
+        if "country" in subject:
+            ssl_subject = ssl_subject + "/C={}".format(subject["country"])
+        if "state" in subject:
+            ssl_subject = ssl_subject + "/ST={}".format(subject["state"])
+        if "locality" in subject:
+            ssl_subject = ssl_subject + "/L={}".format(subject["locality"])
+        if "organization" in subject:
+            ssl_subject = ssl_subject + "/O={}".format(subject["organization"])
+        if "organizational_unit" in subject:
+            ssl_subject = ssl_subject + "/OU={}".format(subject["organizational_unit"])
+        if "cn" in subject:
+            ssl_subject = ssl_subject + "/CN={}".format(subject["cn"])
+        else:
+            hookenv.log("When using \"subject\" argument you must "
+                        "provide \"cn\" field at very least")
+            return False
+        if "email" in subject:
+            ssl_subject = ssl_subject + "/emailAddress={}".format(subject["email"])
+
+        cmd = ["/usr/bin/openssl", "req", "-new", "-newkey",
+               "rsa:{}".format(keysize), "-days", "365", "-nodes", "-x509",
+               "-keyout", keyfile,
+               "-out", certfile, "-subj", ssl_subject]
+    elif cn:
+        cmd = ["/usr/bin/openssl", "req", "-new", "-newkey",
+               "rsa:{}".format(keysize), "-days", "365", "-nodes", "-x509",
+               "-keyout", keyfile,
+               "-out", certfile, "-subj", "/CN={}".format(cn)]
+
+    if not cmd:
+        hookenv.log("No config, subject or cn provided, "
+                    "unable to generate self signed SSL certificates")
+        return False
+    try:
+        subprocess.check_call(cmd)
+        return True
+    except Exception as e:
+        print("Execution of openssl command failed:\n{}".format(e))
+        return False
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/ssl/service.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/ssl/service.py
new file mode 100644
index 0000000000000000000000000000000000000000..06b534ffa3b0de97a56bac4060c39a9a4485b0c2
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/ssl/service.py
@@ -0,0 +1,277 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import os
+from os.path import join as path_join
+from os.path import exists
+import subprocess
+
+from charmhelpers.core.hookenv import log, DEBUG
+
+STD_CERT = "standard"
+
+# Mysql server is fairly picky about cert creation and types,
+# so spec its creation separately for now.
+MYSQL_CERT = "mysql"
+
+
+class ServiceCA(object):
+
+    default_expiry = str(365 * 2)
+    default_ca_expiry = str(365 * 6)
+
+    def __init__(self, name, ca_dir, cert_type=STD_CERT):
+        self.name = name
+        self.ca_dir = ca_dir
+        self.cert_type = cert_type
+
+    ###############
+    # Hook Helper API
+    @staticmethod
+    def get_ca(type=STD_CERT):
+        service_name = os.environ['JUJU_UNIT_NAME'].split('/')[0]
+        ca_path = os.path.join(os.environ['CHARM_DIR'], 'ca')
+        ca = ServiceCA(service_name, ca_path, type)
+        ca.init()
+        return ca
+
+    @classmethod
+    def get_service_cert(cls, type=STD_CERT):
+        service_name = os.environ['JUJU_UNIT_NAME'].split('/')[0]
+        ca = cls.get_ca(type)
+        crt, key = ca.get_or_create_cert(service_name)
+        return crt, key, ca.get_ca_bundle()
+
+    ###############
+
+    def init(self):
+        log("initializing service ca", level=DEBUG)
+        if not exists(self.ca_dir):
+            self._init_ca_dir(self.ca_dir)
+            self._init_ca()
+
+    @property
+    def ca_key(self):
+        return path_join(self.ca_dir, 'private', 'cacert.key')
+
+    @property
+    def ca_cert(self):
+        return path_join(self.ca_dir, 'cacert.pem')
+
+    @property
+    def ca_conf(self):
+        return path_join(self.ca_dir, 'ca.cnf')
+
+    @property
+    def signing_conf(self):
+        return path_join(self.ca_dir, 'signing.cnf')
+
+    def _init_ca_dir(self, ca_dir):
+        os.mkdir(ca_dir)
+        for i in ['certs', 'crl', 'newcerts', 'private']:
+            sd = path_join(ca_dir, i)
+            if not exists(sd):
+                os.mkdir(sd)
+
+        if not exists(path_join(ca_dir, 'serial')):
+            with open(path_join(ca_dir, 'serial'), 'w') as fh:
+                fh.write('02\n')
+
+        if not exists(path_join(ca_dir, 'index.txt')):
+            with open(path_join(ca_dir, 'index.txt'), 'w') as fh:
+                fh.write('')
+
+    def _init_ca(self):
+        """Generate the root ca's cert and key.
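+
+        A sketch of what this sets up: ca.cnf and signing.cnf are written
+        from the built-in templates if not already present, then a single
+        'openssl req -x509' call issues the self-signed root certificate;
+        a RuntimeError is raised if a CA cert or key already exists.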
+        """
+        if not exists(path_join(self.ca_dir, 'ca.cnf')):
+            with open(path_join(self.ca_dir, 'ca.cnf'), 'w') as fh:
+                fh.write(
+                    CA_CONF_TEMPLATE % (self.get_conf_variables()))
+
+        if not exists(path_join(self.ca_dir, 'signing.cnf')):
+            with open(path_join(self.ca_dir, 'signing.cnf'), 'w') as fh:
+                fh.write(
+                    SIGNING_CONF_TEMPLATE % (self.get_conf_variables()))
+
+        if exists(self.ca_cert) or exists(self.ca_key):
+            raise RuntimeError("init() called when CA already exists")
+        cmd = ['openssl', 'req', '-config', self.ca_conf,
+               '-x509', '-nodes', '-newkey', 'rsa',
+               '-days', self.default_ca_expiry,
+               '-keyout', self.ca_key, '-out', self.ca_cert,
+               '-outform', 'PEM']
+        output = subprocess.check_output(cmd, stderr=subprocess.STDOUT)
+        log("CA Init:\n %s" % output, level=DEBUG)
+
+    def get_conf_variables(self):
+        return dict(
+            org_name="juju",
+            org_unit_name="%s service" % self.name,
+            common_name=self.name,
+            ca_dir=self.ca_dir)
+
+    def get_or_create_cert(self, common_name):
+        if common_name in self:
+            return self.get_certificate(common_name)
+        return self.create_certificate(common_name)
+
+    def create_certificate(self, common_name):
+        if common_name in self:
+            return self.get_certificate(common_name)
+        key_p = path_join(self.ca_dir, "certs", "%s.key" % common_name)
+        crt_p = path_join(self.ca_dir, "certs", "%s.crt" % common_name)
+        csr_p = path_join(self.ca_dir, "certs", "%s.csr" % common_name)
+        self._create_certificate(common_name, key_p, csr_p, crt_p)
+        return self.get_certificate(common_name)
+
+    def get_certificate(self, common_name):
+        if common_name not in self:
+            raise ValueError("No certificate for %s" % common_name)
+        key_p = path_join(self.ca_dir, "certs", "%s.key" % common_name)
+        crt_p = path_join(self.ca_dir, "certs", "%s.crt" % common_name)
+        with open(crt_p) as fh:
+            crt = fh.read()
+        with open(key_p) as fh:
+            key = fh.read()
+        return crt, key
+
+    def __contains__(self, common_name):
+        crt_p = path_join(self.ca_dir, "certs", "%s.crt" % common_name)
+        return exists(crt_p)
+
+    def _create_certificate(self, common_name, key_p, csr_p, crt_p):
+        template_vars = self.get_conf_variables()
+        template_vars['common_name'] = common_name
+        subj = '/O=%(org_name)s/OU=%(org_unit_name)s/CN=%(common_name)s' % (
+            template_vars)
+
+        log("CA Create Cert %s" % common_name, level=DEBUG)
+        cmd = ['openssl', 'req', '-sha1', '-newkey', 'rsa:2048',
+               '-nodes', '-days', self.default_expiry,
+               '-keyout', key_p, '-out', csr_p, '-subj', subj]
+        subprocess.check_call(cmd, stderr=subprocess.PIPE)
+        cmd = ['openssl', 'rsa', '-in', key_p, '-out', key_p]
+        subprocess.check_call(cmd, stderr=subprocess.PIPE)
+
+        log("CA Sign Cert %s" % common_name, level=DEBUG)
+        if self.cert_type == MYSQL_CERT:
+            cmd = ['openssl', 'x509', '-req',
+                   '-in', csr_p, '-days', self.default_expiry,
+                   '-CA', self.ca_cert, '-CAkey', self.ca_key,
+                   '-set_serial', '01', '-out', crt_p]
+        else:
+            cmd = ['openssl', 'ca', '-config', self.signing_conf,
+                   '-extensions', 'req_extensions',
+                   '-days', self.default_expiry, '-notext',
+                   '-in', csr_p, '-out', crt_p, '-subj', subj, '-batch']
+        log("running %s" % " ".join(cmd), level=DEBUG)
+        subprocess.check_call(cmd, stderr=subprocess.PIPE)
+
+    def get_ca_bundle(self):
+        with open(self.ca_cert) as fh:
+            return fh.read()
+
+
+CA_CONF_TEMPLATE = """
+[ ca ]
+default_ca = CA_default
+
+[ CA_default ]
+dir = %(ca_dir)s
+policy = policy_match
+database = $dir/index.txt
+serial = $dir/serial
+certs = $dir/certs
+crl_dir = $dir/crl
+new_certs_dir = $dir/newcerts
+certificate = $dir/cacert.pem
+private_key = $dir/private/cacert.key +RANDFILE = $dir/private/.rand +default_md = default + +[ req ] +default_bits = 1024 +default_md = sha1 + +prompt = no +distinguished_name = ca_distinguished_name + +x509_extensions = ca_extensions + +[ ca_distinguished_name ] +organizationName = %(org_name)s +organizationalUnitName = %(org_unit_name)s Certificate Authority + + +[ policy_match ] +countryName = optional +stateOrProvinceName = optional +organizationName = match +organizationalUnitName = optional +commonName = supplied + +[ ca_extensions ] +basicConstraints = critical,CA:true +subjectKeyIdentifier = hash +authorityKeyIdentifier = keyid:always, issuer +keyUsage = cRLSign, keyCertSign +""" + + +SIGNING_CONF_TEMPLATE = """ +[ ca ] +default_ca = CA_default + +[ CA_default ] +dir = %(ca_dir)s +policy = policy_match +database = $dir/index.txt +serial = $dir/serial +certs = $dir/certs +crl_dir = $dir/crl +new_certs_dir = $dir/newcerts +certificate = $dir/cacert.pem +private_key = $dir/private/cacert.key +RANDFILE = $dir/private/.rand +default_md = default + +[ req ] +default_bits = 1024 +default_md = sha1 + +prompt = no +distinguished_name = req_distinguished_name + +x509_extensions = req_extensions + +[ req_distinguished_name ] +organizationName = %(org_name)s +organizationalUnitName = %(org_unit_name)s machine resources +commonName = %(common_name)s + +[ policy_match ] +countryName = optional +stateOrProvinceName = optional +organizationName = match +organizationalUnitName = optional +commonName = supplied + +[ req_extensions ] +basicConstraints = CA:false +subjectKeyIdentifier = hash +authorityKeyIdentifier = keyid:always, issuer +keyUsage = digitalSignature, keyEncipherment, keyAgreement +extendedKeyUsage = serverAuth, clientAuth +""" diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/storage/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/storage/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..d7567b863e3a5ad2b7a7f44958b4166e0c3d346b --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/storage/__init__.py @@ -0,0 +1,13 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/storage/linux/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/storage/linux/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..d7567b863e3a5ad2b7a7f44958b4166e0c3d346b --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/storage/linux/__init__.py @@ -0,0 +1,13 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/storage/linux/bcache.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/storage/linux/bcache.py new file mode 100644 index 0000000000000000000000000000000000000000..605991e16a4238f4ed46fbf0791048187f375c5c --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/storage/linux/bcache.py @@ -0,0 +1,74 @@ +# Copyright 2017 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +import os +import json + +from charmhelpers.core.hookenv import log + +stats_intervals = ['stats_day', 'stats_five_minute', + 'stats_hour', 'stats_total'] + +SYSFS = '/sys' + + +class Bcache(object): + """Bcache behaviour + """ + + def __init__(self, cachepath): + self.cachepath = cachepath + + @classmethod + def fromdevice(cls, devname): + return cls('{}/block/{}/bcache'.format(SYSFS, devname)) + + def __str__(self): + return self.cachepath + + def get_stats(self, interval): + """Get cache stats + """ + intervaldir = 'stats_{}'.format(interval) + path = "{}/{}".format(self.cachepath, intervaldir) + out = dict() + for elem in os.listdir(path): + out[elem] = open('{}/{}'.format(path, elem)).read().strip() + return out + + +def get_bcache_fs(): + """Return all cache sets + """ + cachesetroot = "{}/fs/bcache".format(SYSFS) + try: + dirs = os.listdir(cachesetroot) + except OSError: + log("No bcache fs found") + return [] + cacheset = set([Bcache('{}/{}'.format(cachesetroot, d)) for d in dirs if not d.startswith('register')]) + return cacheset + + +def get_stats_action(cachespec, interval): + """Action for getting bcache statistics for a given cachespec. + Cachespec can either be a device name, eg. 'sdb', which will retrieve + cache stats for the given device, or 'global', which will retrieve stats + for all cachesets + """ + if cachespec == 'global': + caches = get_bcache_fs() + else: + caches = [Bcache.fromdevice(cachespec)] + res = dict((c.cachepath, c.get_stats(interval)) for c in caches) + return json.dumps(res, indent=4, separators=(',', ': ')) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/storage/linux/ceph.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/storage/linux/ceph.py new file mode 100644 index 0000000000000000000000000000000000000000..814d5c72bc246c9536d3bb5975ade1c7fbe989e4 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/storage/linux/ceph.py @@ -0,0 +1,1810 @@ +# Copyright 2014-2015 Canonical Limited. 
+# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# +# Copyright 2012 Canonical Ltd. +# +# This file is sourced from lp:openstack-charm-helpers +# +# Authors: +# James Page +# Adam Gandelman +# + +import collections +import errno +import hashlib +import math +import six + +import os +import shutil +import json +import time +import uuid + +from subprocess import ( + check_call, + check_output, + CalledProcessError, +) +from charmhelpers.core.hookenv import ( + config, + service_name, + local_unit, + relation_get, + relation_ids, + relation_set, + related_units, + log, + DEBUG, + INFO, + WARNING, + ERROR, +) +from charmhelpers.core.host import ( + mount, + mounts, + service_start, + service_stop, + service_running, + umount, + cmp_pkgrevno, +) +from charmhelpers.fetch import ( + apt_install, +) +from charmhelpers.core.unitdata import kv + +from charmhelpers.core.kernel import modprobe +from charmhelpers.contrib.openstack.utils import config_flags_parser + +KEYRING = '/etc/ceph/ceph.client.{}.keyring' +KEYFILE = '/etc/ceph/ceph.client.{}.key' + +CEPH_CONF = """[global] +auth supported = {auth} +keyring = {keyring} +mon host = {mon_hosts} +log to syslog = {use_syslog} +err to syslog = {use_syslog} +clog to syslog = {use_syslog} +""" + +# The number of placement groups per OSD to target for placement group +# calculations. This number is chosen as 100 due to the ceph PG Calc +# documentation recommending to choose 100 for clusters which are not +# expected to increase in the foreseeable future. Since the majority of the +# calculations are done on deployment, target the case of non-expanding +# clusters as the default. +DEFAULT_PGS_PER_OSD_TARGET = 100 +DEFAULT_POOL_WEIGHT = 10.0 +LEGACY_PG_COUNT = 200 +DEFAULT_MINIMUM_PGS = 2 +AUTOSCALER_DEFAULT_PGS = 32 + + +class OsdPostUpgradeError(Exception): + """Error class for OSD post-upgrade operations.""" + pass + + +class OSDSettingConflict(Exception): + """Error class for conflicting osd setting requests.""" + pass + + +class OSDSettingNotAllowed(Exception): + """Error class for a disallowed setting.""" + pass + + +OSD_SETTING_EXCEPTIONS = (OSDSettingConflict, OSDSettingNotAllowed) + +OSD_SETTING_WHITELIST = [ + 'osd heartbeat grace', + 'osd heartbeat interval', +] + + +def _order_dict_by_key(rdict): + """Convert a dictionary into an OrderedDict sorted by key. + + :param rdict: Dictionary to be ordered. + :type rdict: dict + :returns: Ordered Dictionary. + :rtype: collections.OrderedDict + """ + return collections.OrderedDict(sorted(rdict.items(), key=lambda k: k[0])) + + +def get_osd_settings(relation_name): + """Consolidate requested osd settings from all clients. + + Consolidate requested osd settings from all clients. Check that the + requested setting is on the whitelist and it does not conflict with + any other requested settings. 
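+
+    For example, a client's requested settings might deserialize to
+    (a sketch; both keys are on the whitelist)::
+
+        {'osd heartbeat grace': 20, 'osd heartbeat interval': 5}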
+ + :returns: Dictionary of settings + :rtype: dict + + :raises: OSDSettingNotAllowed + :raises: OSDSettingConflict + """ + rel_ids = relation_ids(relation_name) + osd_settings = {} + for relid in rel_ids: + for unit in related_units(relid): + unit_settings = relation_get('osd-settings', unit, relid) or '{}' + unit_settings = json.loads(unit_settings) + for key, value in unit_settings.items(): + if key not in OSD_SETTING_WHITELIST: + msg = 'Illegal settings "{}"'.format(key) + raise OSDSettingNotAllowed(msg) + if key in osd_settings: + if osd_settings[key] != unit_settings[key]: + msg = 'Conflicting settings for "{}"'.format(key) + raise OSDSettingConflict(msg) + else: + osd_settings[key] = value + return _order_dict_by_key(osd_settings) + + +def send_osd_settings(): + """Pass on requested OSD settings to osd units.""" + try: + settings = get_osd_settings('client') + except OSD_SETTING_EXCEPTIONS as e: + # There is a problem with the settings, not passing them on. Update + # status will notify the user. + log(e, level=ERROR) + return + data = { + 'osd-settings': json.dumps(settings, sort_keys=True)} + for relid in relation_ids('osd'): + relation_set(relation_id=relid, + relation_settings=data) + + +def validator(value, valid_type, valid_range=None): + """ + Used to validate these: http://docs.ceph.com/docs/master/rados/operations/pools/#set-pool-values + Example input: + validator(value=1, + valid_type=int, + valid_range=[0, 2]) + This says I'm testing value=1. It must be an int inclusive in [0,2] + + :param value: The value to validate + :param valid_type: The type that value should be. + :param valid_range: A range of values that value can assume. + :return: + """ + assert isinstance(value, valid_type), "{} is not a {}".format( + value, + valid_type) + if valid_range is not None: + assert isinstance(valid_range, list), \ + "valid_range must be a list, was given {}".format(valid_range) + # If we're dealing with strings + if isinstance(value, six.string_types): + assert value in valid_range, \ + "{} is not in the list {}".format(value, valid_range) + # Integer, float should have a min and max + else: + if len(valid_range) != 2: + raise ValueError( + "Invalid valid_range list of {} for {}. " + "List must be [min,max]".format(valid_range, value)) + assert value >= valid_range[0], \ + "{} is less than minimum allowed value of {}".format( + value, valid_range[0]) + assert value <= valid_range[1], \ + "{} is greater than maximum allowed value of {}".format( + value, valid_range[1]) + + +class PoolCreationError(Exception): + """ + A custom error to inform the caller that a pool creation failed. Provides an error message + """ + + def __init__(self, message): + super(PoolCreationError, self).__init__(message) + + +class Pool(object): + """ + An object oriented approach to Ceph pool creation. This base class is inherited by ReplicatedPool and ErasurePool. + Do not call create() on this base class as it will not do anything. Instantiate a child class and call create(). + """ + + def __init__(self, service, name): + self.service = service + self.name = name + + # Create the pool if it doesn't exist already + # To be implemented by subclasses + def create(self): + pass + + def add_cache_tier(self, cache_pool, mode): + """ + Adds a new cache tier to an existing pool. + :param cache_pool: six.string_types. The cache tier pool name to add. + :param mode: six.string_types. The caching mode to use for this pool. 
valid range = ["readonly", "writeback"] + :return: None + """ + # Check the input types and values + validator(value=cache_pool, valid_type=six.string_types) + validator(value=mode, valid_type=six.string_types, valid_range=["readonly", "writeback"]) + + check_call(['ceph', '--id', self.service, 'osd', 'tier', 'add', self.name, cache_pool]) + check_call(['ceph', '--id', self.service, 'osd', 'tier', 'cache-mode', cache_pool, mode]) + check_call(['ceph', '--id', self.service, 'osd', 'tier', 'set-overlay', self.name, cache_pool]) + check_call(['ceph', '--id', self.service, 'osd', 'pool', 'set', cache_pool, 'hit_set_type', 'bloom']) + + def remove_cache_tier(self, cache_pool): + """ + Removes a cache tier from Ceph. Flushes all dirty objects from writeback pools and waits for that to complete. + :param cache_pool: six.string_types. The cache tier pool name to remove. + :return: None + """ + # read-only is easy, writeback is much harder + mode = get_cache_mode(self.service, cache_pool) + if mode == 'readonly': + check_call(['ceph', '--id', self.service, 'osd', 'tier', 'cache-mode', cache_pool, 'none']) + check_call(['ceph', '--id', self.service, 'osd', 'tier', 'remove', self.name, cache_pool]) + + elif mode == 'writeback': + pool_forward_cmd = ['ceph', '--id', self.service, 'osd', 'tier', + 'cache-mode', cache_pool, 'forward'] + if cmp_pkgrevno('ceph-common', '10.1') >= 0: + # Jewel added a mandatory flag + pool_forward_cmd.append('--yes-i-really-mean-it') + + check_call(pool_forward_cmd) + # Flush the cache and wait for it to return + check_call(['rados', '--id', self.service, '-p', cache_pool, 'cache-flush-evict-all']) + check_call(['ceph', '--id', self.service, 'osd', 'tier', 'remove-overlay', self.name]) + check_call(['ceph', '--id', self.service, 'osd', 'tier', 'remove', self.name, cache_pool]) + + def get_pgs(self, pool_size, percent_data=DEFAULT_POOL_WEIGHT, + device_class=None): + """Return the number of placement groups to use when creating the pool. + + Returns the number of placement groups which should be specified when + creating the pool. This is based upon the calculation guidelines + provided by the Ceph Placement Group Calculator (located online at + http://ceph.com/pgcalc/). + + The number of placement groups are calculated using the following: + + (Target PGs per OSD) * (OSD #) * (%Data) + ---------------------------------------- + (Pool size) + + Per the upstream guidelines, the OSD # should really be considered + based on the number of OSDs which are eligible to be selected by the + pool. Since the pool creation doesn't specify any of CRUSH set rules, + the default rule will be dependent upon the type of pool being + created (replicated or erasure). + + This code makes no attempt to determine the number of OSDs which can be + selected for the specific rule, rather it is left to the user to tune + in the form of 'expected-osd-count' config option. + + :param pool_size: int. pool_size is either the number of replicas for + replicated pools or the K+M sum for erasure coded pools + :param percent_data: float. the percentage of data that is expected to + be contained in the pool for the specific OSD set. Default value + is to assume 10% of the data is for this pool, which is a + relatively low % of the data but allows for the pg_num to be + increased. NOTE: the default is primarily to handle the scenario + where related charms requiring pools has not been upgraded to + include an update to indicate their relative usage of the pools. + :param device_class: str. 
class of storage to use for basis of pgs + calculation; ceph supports nvme, ssd and hdd by default based + on presence of devices of each type in the deployment. + :return: int. The number of pgs to use. + """ + + # Note: This calculation follows the approach that is provided + # by the Ceph PG Calculator located at http://ceph.com/pgcalc/. + validator(value=pool_size, valid_type=int) + + # Ensure that percent data is set to something - even with a default + # it can be set to None, which would wreak havoc below. + if percent_data is None: + percent_data = DEFAULT_POOL_WEIGHT + + # If the expected-osd-count is specified, then use the max between + # the expected-osd-count and the actual osd_count + osd_list = get_osds(self.service, device_class) + expected = config('expected-osd-count') or 0 + + if osd_list: + if device_class: + osd_count = len(osd_list) + else: + osd_count = max(expected, len(osd_list)) + + # Log a message to provide some insight if the calculations claim + # to be off because someone is setting the expected count and + # there are more OSDs in reality. Try to make a proper guess + # based upon the cluster itself. + if not device_class and expected and osd_count != expected: + log("Found more OSDs than provided expected count. " + "Using the actual count instead", INFO) + elif expected: + # Use the expected-osd-count in older ceph versions to allow for + # a more accurate pg calculations + osd_count = expected + else: + # NOTE(james-page): Default to 200 for older ceph versions + # which don't support OSD query from cli + return LEGACY_PG_COUNT + + percent_data /= 100.0 + target_pgs_per_osd = config('pgs-per-osd') or DEFAULT_PGS_PER_OSD_TARGET + num_pg = (target_pgs_per_osd * osd_count * percent_data) // pool_size + + # NOTE: ensure a sane minimum number of PGS otherwise we don't get any + # reasonable data distribution in minimal OSD configurations + if num_pg < DEFAULT_MINIMUM_PGS: + num_pg = DEFAULT_MINIMUM_PGS + + # The CRUSH algorithm has a slight optimization for placement groups + # with powers of 2 so find the nearest power of 2. If the nearest + # power of 2 is more than 25% below the original value, the next + # highest value is used. To do this, find the nearest power of 2 such + # that 2^n <= num_pg, check to see if its within the 25% tolerance. + exponent = math.floor(math.log(num_pg, 2)) + nearest = 2 ** exponent + if (num_pg - nearest) > (num_pg * 0.25): + # Choose the next highest power of 2 since the nearest is more + # than 25% below the original value. + return int(nearest * 2) + else: + return int(nearest) + + +class ReplicatedPool(Pool): + def __init__(self, service, name, pg_num=None, replicas=2, + percent_data=10.0, app_name=None): + super(ReplicatedPool, self).__init__(service=service, name=name) + self.replicas = replicas + self.percent_data = percent_data + if pg_num: + # Since the number of placement groups were specified, ensure + # that there aren't too many created. 
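+            # (A pg_num larger than what get_pgs() yields for 100% of the
+            # data weight is clamped by the min() below, so operators
+            # cannot oversize the pool's placement groups.)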
+ max_pgs = self.get_pgs(self.replicas, 100.0) + self.pg_num = min(pg_num, max_pgs) + else: + self.pg_num = self.get_pgs(self.replicas, percent_data) + if app_name: + self.app_name = app_name + else: + self.app_name = 'unknown' + + def create(self): + if not pool_exists(self.service, self.name): + nautilus_or_later = cmp_pkgrevno('ceph-common', '14.2.0') >= 0 + # Create it + if nautilus_or_later: + cmd = [ + 'ceph', '--id', self.service, 'osd', 'pool', 'create', + '--pg-num-min={}'.format( + min(AUTOSCALER_DEFAULT_PGS, self.pg_num) + ), + self.name, str(self.pg_num) + ] + else: + cmd = [ + 'ceph', '--id', self.service, 'osd', 'pool', 'create', + self.name, str(self.pg_num) + ] + + try: + check_call(cmd) + # Set the pool replica size + update_pool(client=self.service, + pool=self.name, + settings={'size': str(self.replicas)}) + if nautilus_or_later: + # Ensure we set the expected pool ratio + update_pool(client=self.service, + pool=self.name, + settings={'target_size_ratio': str(self.percent_data / 100.0)}) + try: + set_app_name_for_pool(client=self.service, + pool=self.name, + name=self.app_name) + except CalledProcessError: + log('Could not set app name for pool {}'.format(self.name), level=WARNING) + if 'pg_autoscaler' in enabled_manager_modules(): + try: + enable_pg_autoscale(self.service, self.name) + except CalledProcessError as e: + log('Could not configure auto scaling for pool {}: {}'.format( + self.name, e), level=WARNING) + except CalledProcessError: + raise + + +# Default jerasure erasure coded pool +class ErasurePool(Pool): + def __init__(self, service, name, erasure_code_profile="default", + percent_data=10.0, app_name=None): + super(ErasurePool, self).__init__(service=service, name=name) + self.erasure_code_profile = erasure_code_profile + self.percent_data = percent_data + if app_name: + self.app_name = app_name + else: + self.app_name = 'unknown' + + def create(self): + if not pool_exists(self.service, self.name): + # Try to find the erasure profile information in order to properly + # size the number of placement groups. The size of an erasure + # coded placement group is calculated as k+m. 
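+            # (For example, a jerasure profile with k=3 data chunks and
+            # m=2 coding chunks sizes placement groups as if the pool
+            # had 5 replicas.)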
+            erasure_profile = get_erasure_profile(self.service,
+                                                  self.erasure_code_profile)
+
+            # Check for errors
+            if erasure_profile is None:
+                msg = ("Failed to discover erasure profile named "
+                       "{}".format(self.erasure_code_profile))
+                log(msg, level=ERROR)
+                raise PoolCreationError(msg)
+            if 'k' not in erasure_profile or 'm' not in erasure_profile:
+                # Error
+                msg = ("Unable to find k (data chunks) or m (coding chunks) "
+                       "in erasure profile {}".format(erasure_profile))
+                log(msg, level=ERROR)
+                raise PoolCreationError(msg)
+
+            k = int(erasure_profile['k'])
+            m = int(erasure_profile['m'])
+            pgs = self.get_pgs(k + m, self.percent_data)
+            nautilus_or_later = cmp_pkgrevno('ceph-common', '14.2.0') >= 0
+            # Create it
+            if nautilus_or_later:
+                cmd = [
+                    'ceph', '--id', self.service, 'osd', 'pool', 'create',
+                    '--pg-num-min={}'.format(
+                        min(AUTOSCALER_DEFAULT_PGS, pgs)
+                    ),
+                    self.name, str(pgs), str(pgs),
+                    'erasure', self.erasure_code_profile
+                ]
+            else:
+                cmd = [
+                    'ceph', '--id', self.service, 'osd', 'pool', 'create',
+                    self.name, str(pgs), str(pgs),
+                    'erasure', self.erasure_code_profile
+                ]
+
+            try:
+                check_call(cmd)
+                try:
+                    set_app_name_for_pool(client=self.service,
+                                          pool=self.name,
+                                          name=self.app_name)
+                except CalledProcessError:
+                    log('Could not set app name for pool {}'.format(self.name), level=WARNING)
+                if nautilus_or_later:
+                    # Ensure we set the expected pool ratio
+                    update_pool(client=self.service,
+                                pool=self.name,
+                                settings={'target_size_ratio': str(self.percent_data / 100.0)})
+                if 'pg_autoscaler' in enabled_manager_modules():
+                    try:
+                        enable_pg_autoscale(self.service, self.name)
+                    except CalledProcessError as e:
+                        log('Could not configure auto scaling for pool {}: {}'.format(
+                            self.name, e), level=WARNING)
+            except CalledProcessError:
+                raise
+
+
+def enabled_manager_modules():
+    """Return a list of enabled manager modules.
+
+    :rtype: List[str]
+    """
+    cmd = ['ceph', 'mgr', 'module', 'ls']
+    try:
+        modules = check_output(cmd)
+        if six.PY3:
+            modules = modules.decode('UTF-8')
+    except CalledProcessError as e:
+        log("Failed to list ceph modules: {}".format(e), WARNING)
+        return []
+    modules = json.loads(modules)
+    return modules['enabled_modules']
+
+
+def enable_pg_autoscale(service, pool_name):
+    """
+    Enable Ceph's PG autoscaler for the specified pool.
+
+    :param service: six.string_types. The Ceph user name to run the command under
+    :param pool_name: six.string_types. The name of the pool to enable autoscaling on
+    :raise: CalledProcessError if the command fails
+    """
+    check_call(['ceph', '--id', service, 'osd', 'pool', 'set', pool_name, 'pg_autoscale_mode', 'on'])
+
+
+def get_mon_map(service):
+    """
+    Returns the current monitor map.
+    :param service: six.string_types. The Ceph user name to run the command under
+    :return: json string. :raise: ValueError if the monmap fails to parse.
+        Also raises CalledProcessError if our ceph command fails
+    """
+    try:
+        mon_status = check_output(['ceph', '--id', service,
+                                   'mon_status', '--format=json'])
+        if six.PY3:
+            mon_status = mon_status.decode('UTF-8')
+        try:
+            return json.loads(mon_status)
+        except ValueError as v:
+            log("Unable to parse mon_status json: {}. Error: {}"
+                .format(mon_status, str(v)))
+            raise
+    except CalledProcessError as e:
+        log("mon_status command failed with message: {}"
+            .format(str(e)))
+        raise
+
+
+def hash_monitor_names(service):
+    """
+    Uses the get_mon_map() function to get information about the monitor
+    cluster.
+    Hash the name of each monitor. Return a sorted list of monitor hashes
+    in an ascending order.
+    :param service: six.string_types. The Ceph user name to run the command under
+    :rtype: dict. json dict of monitor name, ip address and rank
+    example: {
+        'name': 'ip-172-31-13-165',
+        'rank': 0,
+        'addr': '172.31.13.165:6789/0'}
+    """
+    try:
+        hash_list = []
+        monitor_list = get_mon_map(service=service)
+        if monitor_list['monmap']['mons']:
+            for mon in monitor_list['monmap']['mons']:
+                hash_list.append(
+                    hashlib.sha224(mon['name'].encode('utf-8')).hexdigest())
+            return sorted(hash_list)
+        else:
+            return None
+    except (ValueError, CalledProcessError):
+        raise
+
+
+def monitor_key_delete(service, key):
+    """
+    Delete a key and value pair from the monitor cluster.
+    :param service: six.string_types. The Ceph user name to run the command under
+    :param key: six.string_types. The key to delete.
+    """
+    try:
+        check_output(
+            ['ceph', '--id', service,
+             'config-key', 'del', str(key)])
+    except CalledProcessError as e:
+        log("Monitor config-key del failed with message: {}".format(
+            e.output))
+        raise
+
+
+def monitor_key_set(service, key, value):
+    """
+    Sets a key value pair on the monitor cluster.
+    :param service: six.string_types. The Ceph user name to run the command under
+    :param key: six.string_types. The key to set.
+    :param value: The value to set. This will be converted to a string
+        before setting
+    """
+    try:
+        check_output(
+            ['ceph', '--id', service,
+             'config-key', 'put', str(key), str(value)])
+    except CalledProcessError as e:
+        log("Monitor config-key put failed with message: {}".format(
+            e.output))
+        raise
+
+
+def monitor_key_get(service, key):
+    """
+    Gets the value of an existing key in the monitor cluster.
+    :param service: six.string_types. The Ceph user name to run the command under
+    :param key: six.string_types. The key to search for.
+    :return: Returns the value of that key or None if not found.
+    """
+    try:
+        output = check_output(
+            ['ceph', '--id', service,
+             'config-key', 'get', str(key)]).decode('UTF-8')
+        return output
+    except CalledProcessError as e:
+        log("Monitor config-key get failed with message: {}".format(
+            e.output))
+        return None
+
+
+def monitor_key_exists(service, key):
+    """
+    Searches for the existence of a key in the monitor cluster.
+    :param service: six.string_types. The Ceph user name to run the command under
+    :param key: six.string_types. The key to search for
+    :return: Returns True if the key exists, False if not and raises an
+        exception if an unknown error occurs. :raise: CalledProcessError if
+        an unknown error occurs
+    """
+    try:
+        check_call(
+            ['ceph', '--id', service,
+             'config-key', 'exists', str(key)])
+        # I can return true here regardless because Ceph returns
+        # ENOENT if the key wasn't found
+        return True
+    except CalledProcessError as e:
+        if e.returncode == errno.ENOENT:
+            return False
+        else:
+            log("Unknown error from ceph config-get exists: {} {}".format(
+                e.returncode, e.output))
+            raise
+
+
+def get_erasure_profile(service, name):
+    """
+    :param service: six.string_types. The Ceph user name to run the command under
The Ceph user name to run the command under + :param name: + :return: + """ + try: + out = check_output(['ceph', '--id', service, + 'osd', 'erasure-code-profile', 'get', + name, '--format=json']) + if six.PY3: + out = out.decode('UTF-8') + return json.loads(out) + except (CalledProcessError, OSError, ValueError): + return None + + +def pool_set(service, pool_name, key, value): + """ + Sets a value for a RADOS pool in ceph. + :param service: six.string_types. The Ceph user name to run the command under + :param pool_name: six.string_types + :param key: six.string_types + :param value: + :return: None. Can raise CalledProcessError + """ + cmd = ['ceph', '--id', service, 'osd', 'pool', 'set', pool_name, key, + str(value).lower()] + try: + check_call(cmd) + except CalledProcessError: + raise + + +def snapshot_pool(service, pool_name, snapshot_name): + """ + Snapshots a RADOS pool in ceph. + :param service: six.string_types. The Ceph user name to run the command under + :param pool_name: six.string_types + :param snapshot_name: six.string_types + :return: None. Can raise CalledProcessError + """ + cmd = ['ceph', '--id', service, 'osd', 'pool', 'mksnap', pool_name, snapshot_name] + try: + check_call(cmd) + except CalledProcessError: + raise + + +def remove_pool_snapshot(service, pool_name, snapshot_name): + """ + Remove a snapshot from a RADOS pool in ceph. + :param service: six.string_types. The Ceph user name to run the command under + :param pool_name: six.string_types + :param snapshot_name: six.string_types + :return: None. Can raise CalledProcessError + """ + cmd = ['ceph', '--id', service, 'osd', 'pool', 'rmsnap', pool_name, snapshot_name] + try: + check_call(cmd) + except CalledProcessError: + raise + + +def set_pool_quota(service, pool_name, max_bytes=None, max_objects=None): + """ + :param service: The Ceph user name to run the command under + :type service: str + :param pool_name: Name of pool + :type pool_name: str + :param max_bytes: Maximum bytes quota to apply + :type max_bytes: int + :param max_objects: Maximum objects quota to apply + :type max_objects: int + :raises: subprocess.CalledProcessError + """ + cmd = ['ceph', '--id', service, 'osd', 'pool', 'set-quota', pool_name] + if max_bytes: + cmd = cmd + ['max_bytes', str(max_bytes)] + if max_objects: + cmd = cmd + ['max_objects', str(max_objects)] + check_call(cmd) + + +def remove_pool_quota(service, pool_name): + """ + Set a byte quota on a RADOS pool in ceph. + :param service: six.string_types. The Ceph user name to run the command under + :param pool_name: six.string_types + :return: None. Can raise CalledProcessError + """ + cmd = ['ceph', '--id', service, 'osd', 'pool', 'set-quota', pool_name, 'max_bytes', '0'] + try: + check_call(cmd) + except CalledProcessError: + raise + + +def remove_erasure_profile(service, profile_name): + """ + Create a new erasure code profile if one does not already exist for it. Updates + the profile if it exists. Please see http://docs.ceph.com/docs/master/rados/operations/erasure-code-profile/ + for more details + :param service: six.string_types. The Ceph user name to run the command under + :param profile_name: six.string_types + :return: None. 
Can raise CalledProcessError + """ + cmd = ['ceph', '--id', service, 'osd', 'erasure-code-profile', 'rm', + profile_name] + try: + check_call(cmd) + except CalledProcessError: + raise + + +def create_erasure_profile(service, profile_name, erasure_plugin_name='jerasure', + failure_domain='host', + data_chunks=2, coding_chunks=1, + locality=None, durability_estimator=None, + device_class=None): + """ + Create a new erasure code profile if one does not already exist for it. Updates + the profile if it exists. Please see http://docs.ceph.com/docs/master/rados/operations/erasure-code-profile/ + for more details + :param service: six.string_types. The Ceph user name to run the command under + :param profile_name: six.string_types + :param erasure_plugin_name: six.string_types + :param failure_domain: six.string_types. One of ['chassis', 'datacenter', 'host', 'osd', 'pdu', 'pod', 'rack', 'region', + 'room', 'root', 'row']) + :param data_chunks: int + :param coding_chunks: int + :param locality: int + :param durability_estimator: int + :param device_class: six.string_types + :return: None. Can raise CalledProcessError + """ + # Ensure this failure_domain is allowed by Ceph + validator(failure_domain, six.string_types, + ['chassis', 'datacenter', 'host', 'osd', 'pdu', 'pod', 'rack', 'region', 'room', 'root', 'row']) + + cmd = ['ceph', '--id', service, 'osd', 'erasure-code-profile', 'set', profile_name, + 'plugin=' + erasure_plugin_name, 'k=' + str(data_chunks), 'm=' + str(coding_chunks) + ] + if locality is not None and durability_estimator is not None: + raise ValueError("create_erasure_profile should be called with k, m and one of l or c but not both.") + + luminous_or_later = cmp_pkgrevno('ceph-common', '12.0.0') >= 0 + # failure_domain changed in luminous + if luminous_or_later: + cmd.append('crush-failure-domain=' + failure_domain) + else: + cmd.append('ruleset-failure-domain=' + failure_domain) + + # device class new in luminous + if luminous_or_later and device_class: + cmd.append('crush-device-class={}'.format(device_class)) + else: + log('Skipping device class configuration (ceph < 12.0.0)', + level=DEBUG) + + # Add plugin specific information + if locality is not None: + # For local erasure codes + cmd.append('l=' + str(locality)) + if durability_estimator is not None: + # For Shec erasure codes + cmd.append('c=' + str(durability_estimator)) + + if erasure_profile_exists(service, profile_name): + cmd.append('--force') + + try: + check_call(cmd) + except CalledProcessError: + raise + + +def rename_pool(service, old_name, new_name): + """ + Rename a Ceph pool from old_name to new_name + :param service: six.string_types. The Ceph user name to run the command under + :param old_name: six.string_types + :param new_name: six.string_types + :return: None + """ + validator(value=old_name, valid_type=six.string_types) + validator(value=new_name, valid_type=six.string_types) + + cmd = ['ceph', '--id', service, 'osd', 'pool', 'rename', old_name, new_name] + check_call(cmd) + + +def erasure_profile_exists(service, name): + """ + Check to see if an Erasure code profile already exists. + :param service: six.string_types. 
The Ceph user name to run the command under
+    :param name: six.string_types
+    :return: bool. True if the profile exists, otherwise False
+    """
+    validator(value=name, valid_type=six.string_types)
+    try:
+        check_call(['ceph', '--id', service,
+                    'osd', 'erasure-code-profile', 'get',
+                    name])
+        return True
+    except CalledProcessError:
+        return False
+
+
+def get_cache_mode(service, pool_name):
+    """
+    Find the current caching mode of the pool_name given.
+    :param service: six.string_types. The Ceph user name to run the command under
+    :param pool_name: six.string_types
+    :return: six.string_types. The cache mode, or None if the pool is not found
+    """
+    validator(value=service, valid_type=six.string_types)
+    validator(value=pool_name, valid_type=six.string_types)
+    out = check_output(['ceph', '--id', service,
+                        'osd', 'dump', '--format=json'])
+    if six.PY3:
+        out = out.decode('UTF-8')
+    # json.loads raises ValueError if the osd dump fails to parse
+    osd_json = json.loads(out)
+    for pool in osd_json['pools']:
+        if pool['pool_name'] == pool_name:
+            return pool['cache_mode']
+    return None
+
+
+def pool_exists(service, name):
+    """Check to see if a RADOS pool already exists."""
+    try:
+        out = check_output(['rados', '--id', service, 'lspools'])
+        if six.PY3:
+            out = out.decode('UTF-8')
+    except CalledProcessError:
+        return False
+
+    return name in out.split()
+
+
+def get_osds(service, device_class=None):
+    """Return a list of all Ceph Object Storage Daemons currently in the
+    cluster (optionally filtered by storage device class).
+
+    :param device_class: Class of storage device for OSDs
+    :type device_class: str
+    """
+    luminous_or_later = cmp_pkgrevno('ceph-common', '12.0.0') >= 0
+    if luminous_or_later and device_class:
+        out = check_output(['ceph', '--id', service,
+                            'osd', 'crush', 'class',
+                            'ls-osd', device_class,
+                            '--format=json'])
+    else:
+        out = check_output(['ceph', '--id', service,
+                            'osd', 'ls',
+                            '--format=json'])
+    if six.PY3:
+        out = out.decode('UTF-8')
+    return json.loads(out)
+
+
+def install():
+    """Basic Ceph client installation."""
+    ceph_dir = "/etc/ceph"
+    if not os.path.exists(ceph_dir):
+        os.mkdir(ceph_dir)
+
+    apt_install('ceph-common', fatal=True)
+
+
+def rbd_exists(service, pool, rbd_img):
+    """Check to see if a RADOS block device exists."""
+    try:
+        out = check_output(['rbd', 'list', '--id',
+                            service, '--pool', pool])
+        if six.PY3:
+            out = out.decode('UTF-8')
+    except CalledProcessError:
+        return False
+
+    return rbd_img in out
+
+
+def create_rbd_image(service, pool, image, sizemb):
+    """Create a new RADOS block device."""
+    cmd = ['rbd', 'create', image, '--size', str(sizemb), '--id', service,
+           '--pool', pool]
+    check_call(cmd)
+
+
+def update_pool(client, pool, settings):
+    """Apply a dictionary of settings to a RADOS pool via 'osd pool set'."""
+    cmd = ['ceph', '--id', client, 'osd', 'pool', 'set', pool]
+    for k, v in six.iteritems(settings):
+        cmd.append(k)
+        cmd.append(v)
+
+    check_call(cmd)
+
+
+def set_app_name_for_pool(client, pool, name):
+    """
+    Calls `osd pool application enable` for the specified pool name
+
+    :param client: Name of the ceph client to use
+    :type client: str
+    :param pool: Pool to set app name for
+    :type pool: str
+    :param name: app name for the specified pool
+    :type name: str
+
+    :raises: CalledProcessError if ceph call fails
+    """
+    if cmp_pkgrevno('ceph-common', '12.0.0') >= 0:
+        cmd = ['ceph', '--id', client, 'osd', 'pool',
+               'application', 'enable', pool, name]
+        check_call(cmd)
+
+
+def create_pool(service, name, replicas=3, pg_num=None):
+    """Create a new RADOS pool."""
+    if pool_exists(service, name):
+        log("Ceph pool {} already exists, skipping creation".format(name),
+            level=WARNING)
+        return
+
+    if not pg_num:
+        # 
Calculate the number of placement groups based + # on upstream recommended best practices. + osds = get_osds(service) + if osds: + pg_num = (len(osds) * 100 // replicas) + else: + # NOTE(james-page): Default to 200 for older ceph versions + # which don't support OSD query from cli + pg_num = 200 + + cmd = ['ceph', '--id', service, 'osd', 'pool', 'create', name, str(pg_num)] + check_call(cmd) + + update_pool(service, name, settings={'size': str(replicas)}) + + +def delete_pool(service, name): + """Delete a RADOS pool from ceph.""" + cmd = ['ceph', '--id', service, 'osd', 'pool', 'delete', name, + '--yes-i-really-really-mean-it'] + check_call(cmd) + + +def _keyfile_path(service): + return KEYFILE.format(service) + + +def _keyring_path(service): + return KEYRING.format(service) + + +def add_key(service, key): + """ + Add a key to a keyring. + + Creates the keyring if it doesn't already exist. + + Logs and returns if the key is already in the keyring. + """ + keyring = _keyring_path(service) + if os.path.exists(keyring): + with open(keyring, 'r') as ring: + if key in ring.read(): + log('Ceph keyring exists at %s and has not changed.' % keyring, + level=DEBUG) + return + log('Updating existing keyring %s.' % keyring, level=DEBUG) + + cmd = ['ceph-authtool', keyring, '--create-keyring', + '--name=client.{}'.format(service), '--add-key={}'.format(key)] + check_call(cmd) + log('Created new ceph keyring at %s.' % keyring, level=DEBUG) + + +def create_keyring(service, key): + """Deprecated. Please use the more accurately named 'add_key'""" + return add_key(service, key) + + +def delete_keyring(service): + """Delete an existing Ceph keyring.""" + keyring = _keyring_path(service) + if not os.path.exists(keyring): + log('Keyring does not exist at %s' % keyring, level=WARNING) + return + + os.remove(keyring) + log('Deleted ring at %s.' % keyring, level=INFO) + + +def create_key_file(service, key): + """Create a file containing key.""" + keyfile = _keyfile_path(service) + if os.path.exists(keyfile): + log('Keyfile exists at %s.' % keyfile, level=WARNING) + return + + with open(keyfile, 'w') as fd: + fd.write(key) + + log('Created new keyfile at %s.' 
% keyfile, level=INFO) + + +def get_ceph_nodes(relation='ceph'): + """Query named relation to determine current nodes.""" + hosts = [] + for r_id in relation_ids(relation): + for unit in related_units(r_id): + hosts.append(relation_get('private-address', unit=unit, rid=r_id)) + + return hosts + + +def configure(service, key, auth, use_syslog): + """Perform basic configuration of Ceph.""" + add_key(service, key) + create_key_file(service, key) + hosts = get_ceph_nodes() + with open('/etc/ceph/ceph.conf', 'w') as ceph_conf: + ceph_conf.write(CEPH_CONF.format(auth=auth, + keyring=_keyring_path(service), + mon_hosts=",".join(map(str, hosts)), + use_syslog=use_syslog)) + modprobe('rbd') + + +def image_mapped(name): + """Determine whether a RADOS block device is mapped locally.""" + try: + out = check_output(['rbd', 'showmapped']) + if six.PY3: + out = out.decode('UTF-8') + except CalledProcessError: + return False + + return name in out + + +def map_block_storage(service, pool, image): + """Map a RADOS block device for local use.""" + cmd = [ + 'rbd', + 'map', + '{}/{}'.format(pool, image), + '--user', + service, + '--secret', + _keyfile_path(service), + ] + check_call(cmd) + + +def filesystem_mounted(fs): + """Determine whether a filesytems is already mounted.""" + return fs in [f for f, m in mounts()] + + +def make_filesystem(blk_device, fstype='ext4', timeout=10): + """Make a new filesystem on the specified block device.""" + count = 0 + e_noent = errno.ENOENT + while not os.path.exists(blk_device): + if count >= timeout: + log('Gave up waiting on block device %s' % blk_device, + level=ERROR) + raise IOError(e_noent, os.strerror(e_noent), blk_device) + + log('Waiting for block device %s to appear' % blk_device, + level=DEBUG) + count += 1 + time.sleep(1) + else: + log('Formatting block device %s as filesystem %s.' % + (blk_device, fstype), level=INFO) + check_call(['mkfs', '-t', fstype, blk_device]) + + +def place_data_on_block_device(blk_device, data_src_dst): + """Migrate data in data_src_dst to blk_device and then remount.""" + # mount block device into /mnt + mount(blk_device, '/mnt') + # copy data to /mnt + copy_files(data_src_dst, '/mnt') + # umount block device + umount('/mnt') + # Grab user/group ID's from original source + _dir = os.stat(data_src_dst) + uid = _dir.st_uid + gid = _dir.st_gid + # re-mount where the data should originally be + # TODO: persist is currently a NO-OP in core.host + mount(blk_device, data_src_dst, persist=True) + # ensure original ownership of new mount. + os.chown(data_src_dst, uid, gid) + + +def copy_files(src, dst, symlinks=False, ignore=None): + """Copy files from src to dst.""" + for item in os.listdir(src): + s = os.path.join(src, item) + d = os.path.join(dst, item) + if os.path.isdir(s): + shutil.copytree(s, d, symlinks, ignore) + else: + shutil.copy2(s, d) + + +def ensure_ceph_storage(service, pool, rbd_img, sizemb, mount_point, + blk_device, fstype, system_services=[], + replicas=3): + """NOTE: This function must only be called from a single service unit for + the same rbd_img otherwise data loss will occur. + + Ensures given pool and RBD image exists, is mapped to a block device, + and the device is formatted and mounted at the given mount_point. + + If formatting a device for the first time, data existing at mount_point + will be migrated to the RBD device before being re-mounted. + + All services listed in system_services will be stopped prior to data + migration and restarted when complete. 
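+
+    Illustrative call (all names are examples only; 'mysql' must match an
+    existing Ceph client keyring and service on the unit, and /dev/rbd1
+    must be the device the rbd map produces):
+
+        ensure_ceph_storage(service='mysql', pool='mysql',
+                            rbd_img='mysql-data', sizemb=10240,
+                            mount_point='/var/lib/mysql',
+                            blk_device='/dev/rbd1', fstype='ext4',
+                            system_services=['mysql'])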
+ """ + # Ensure pool, RBD image, RBD mappings are in place. + if not pool_exists(service, pool): + log('Creating new pool {}.'.format(pool), level=INFO) + create_pool(service, pool, replicas=replicas) + + if not rbd_exists(service, pool, rbd_img): + log('Creating RBD image ({}).'.format(rbd_img), level=INFO) + create_rbd_image(service, pool, rbd_img, sizemb) + + if not image_mapped(rbd_img): + log('Mapping RBD Image {} as a Block Device.'.format(rbd_img), + level=INFO) + map_block_storage(service, pool, rbd_img) + + # make file system + # TODO: What happens if for whatever reason this is run again and + # the data is already in the rbd device and/or is mounted?? + # When it is mounted already, it will fail to make the fs + # XXX: This is really sketchy! Need to at least add an fstab entry + # otherwise this hook will blow away existing data if its executed + # after a reboot. + if not filesystem_mounted(mount_point): + make_filesystem(blk_device, fstype) + + for svc in system_services: + if service_running(svc): + log('Stopping services {} prior to migrating data.' + .format(svc), level=DEBUG) + service_stop(svc) + + place_data_on_block_device(blk_device, mount_point) + + for svc in system_services: + log('Starting service {} after migrating data.' + .format(svc), level=DEBUG) + service_start(svc) + + +def ensure_ceph_keyring(service, user=None, group=None, + relation='ceph', key=None): + """Ensures a ceph keyring is created for a named service and optionally + ensures user and group ownership. + + @returns boolean: Flag to indicate whether a key was successfully written + to disk based on either relation data or a supplied key + """ + if not key: + for rid in relation_ids(relation): + for unit in related_units(rid): + key = relation_get('key', rid=rid, unit=unit) + if key: + break + + if not key: + return False + + add_key(service=service, key=key) + keyring = _keyring_path(service) + if user and group: + check_call(['chown', '%s.%s' % (user, group), keyring]) + + return True + + +class CephBrokerRq(object): + """Ceph broker request. + + Multiple operations can be added to a request and sent to the Ceph broker + to be executed. + + Request is json-encoded for sending over the wire. + + The API is versioned and defaults to version 1. + """ + + def __init__(self, api_version=1, request_id=None): + self.api_version = api_version + if request_id: + self.request_id = request_id + else: + self.request_id = str(uuid.uuid1()) + self.ops = [] + + def add_op(self, op): + """Add an op if it is not already in the list. + + :param op: Operation to add. + :type op: dict + """ + if op not in self.ops: + self.ops.append(op) + + def add_op_request_access_to_group(self, name, namespace=None, + permission=None, key_name=None, + object_prefix_permissions=None): + """ + Adds the requested permissions to the current service's Ceph key, + allowing the key to access only the specified pools or + object prefixes. object_prefix_permissions should be a dictionary + keyed on the permission with the corresponding value being a list + of prefixes to apply that permission to. 
+            {
+                'rwx': ['prefix1', 'prefix2'],
+                'class-read': ['prefix3']}
+        """
+        self.add_op({
+            'op': 'add-permissions-to-key', 'group': name,
+            'namespace': namespace,
+            'name': key_name or service_name(),
+            'group-permission': permission,
+            'object-prefix-permissions': object_prefix_permissions})
+
+    def add_op_create_pool(self, name, replica_count=3, pg_num=None,
+                           weight=None, group=None, namespace=None,
+                           app_name=None, max_bytes=None, max_objects=None):
+        """DEPRECATED: Use ``add_op_create_replicated_pool()`` or
+        ``add_op_create_erasure_pool()`` instead.
+        """
+        return self.add_op_create_replicated_pool(
+            name, replica_count=replica_count, pg_num=pg_num, weight=weight,
+            group=group, namespace=namespace, app_name=app_name,
+            max_bytes=max_bytes, max_objects=max_objects)
+
+    def add_op_create_replicated_pool(self, name, replica_count=3, pg_num=None,
+                                      weight=None, group=None, namespace=None,
+                                      app_name=None, max_bytes=None,
+                                      max_objects=None):
+        """Adds an operation to create a replicated pool.
+
+        :param name: Name of pool to create
+        :type name: str
+        :param replica_count: Number of copies Ceph should keep of your data.
+        :type replica_count: int
+        :param pg_num: Request specific number of Placement Groups to create
+                       for pool.
+        :type pg_num: int
+        :param weight: The percentage of data that is expected to be contained
+                       in the pool from the total available space on the OSDs.
+                       Used to calculate number of Placement Groups to create
+                       for pool.
+        :type weight: float
+        :param group: Group to add pool to
+        :type group: str
+        :param namespace: Group namespace
+        :type namespace: str
+        :param app_name: (Optional) Tag pool with application name. Note that
+                         there are certain protocols emerging upstream with
+                         regard to meaningful application names to use.
+                         Examples are ``rbd`` and ``rgw``.
+        :type app_name: str
+        :param max_bytes: Maximum bytes quota to apply
+        :type max_bytes: int
+        :param max_objects: Maximum objects quota to apply
+        :type max_objects: int
+        """
+        if pg_num and weight:
+            raise ValueError('pg_num and weight are mutually exclusive')
+
+        self.add_op({'op': 'create-pool', 'name': name,
+                     'replicas': replica_count, 'pg_num': pg_num,
+                     'weight': weight, 'group': group,
+                     'group-namespace': namespace, 'app-name': app_name,
+                     'max-bytes': max_bytes, 'max-objects': max_objects})
+
+    def add_op_create_erasure_pool(self, name, erasure_profile=None,
+                                   weight=None, group=None, app_name=None,
+                                   max_bytes=None, max_objects=None):
+        """Adds an operation to create an erasure coded pool.
+
+        :param name: Name of pool to create
+        :type name: str
+        :param erasure_profile: Name of erasure code profile to use. If not
+                                set the ceph-mon unit handling the broker
+                                request will set its default value.
+        :type erasure_profile: str
+        :param weight: The percentage of data that is expected to be contained
+                       in the pool from the total available space on the OSDs.
+        :type weight: float
+        :param group: Group to add pool to
+        :type group: str
+        :param app_name: (Optional) Tag pool with application name. Note that
+                         there are certain protocols emerging upstream with
+                         regard to meaningful application names to use.
+                         Examples are ``rbd`` and ``rgw``. 
+ :type app_name: str + :param max_bytes: Maximum bytes quota to apply + :type max_bytes: int + :param max_objects: Maximum objects quota to apply + :type max_objects: int + """ + self.add_op({'op': 'create-pool', 'name': name, + 'pool-type': 'erasure', + 'erasure-profile': erasure_profile, + 'weight': weight, + 'group': group, 'app-name': app_name, + 'max-bytes': max_bytes, 'max-objects': max_objects}) + + def set_ops(self, ops): + """Set request ops to provided value. + + Useful for injecting ops that come from a previous request + to allow comparisons to ensure validity. + """ + self.ops = ops + + @property + def request(self): + return json.dumps({'api-version': self.api_version, 'ops': self.ops, + 'request-id': self.request_id}) + + def _ops_equal(self, other): + if len(self.ops) == len(other.ops): + for req_no in range(0, len(self.ops)): + for key in [ + 'replicas', 'name', 'op', 'pg_num', 'weight', + 'group', 'group-namespace', 'group-permission', + 'object-prefix-permissions']: + if self.ops[req_no].get(key) != other.ops[req_no].get(key): + return False + else: + return False + return True + + def __eq__(self, other): + if not isinstance(other, self.__class__): + return False + if self.api_version == other.api_version and \ + self._ops_equal(other): + return True + else: + return False + + def __ne__(self, other): + return not self.__eq__(other) + + +class CephBrokerRsp(object): + """Ceph broker response. + + Response is json-decoded and contents provided as methods/properties. + + The API is versioned and defaults to version 1. + """ + + def __init__(self, encoded_rsp): + self.api_version = None + self.rsp = json.loads(encoded_rsp) + + @property + def request_id(self): + return self.rsp.get('request-id') + + @property + def exit_code(self): + return self.rsp.get('exit-code') + + @property + def exit_msg(self): + return self.rsp.get('stderr') + + +# Ceph Broker Conversation: +# If a charm needs an action to be taken by ceph it can create a CephBrokerRq +# and send that request to ceph via the ceph relation. The CephBrokerRq has a +# unique id so that the client can identity which CephBrokerRsp is associated +# with the request. Ceph will also respond to each client unit individually +# creating a response key per client unit eg glance/0 will get a CephBrokerRsp +# via key broker-rsp-glance-0 +# +# To use this the charm can just do something like: +# +# from charmhelpers.contrib.storage.linux.ceph import ( +# send_request_if_needed, +# is_request_complete, +# CephBrokerRq, +# ) +# +# @hooks.hook('ceph-relation-changed') +# def ceph_changed(): +# rq = CephBrokerRq() +# rq.add_op_create_pool(name='poolname', replica_count=3) +# +# if is_request_complete(rq): +# +# else: +# send_request_if_needed(get_ceph_request()) +# +# CephBrokerRq and CephBrokerRsp are serialized into JSON. 
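+#
+# A slightly fuller sketch of that pattern (illustrative only; the body of
+# the completion branch is charm-specific and made up here):
+#
+# @hooks.hook('ceph-relation-changed')
+# def ceph_changed():
+#     rq = CephBrokerRq()
+#     rq.add_op_create_replicated_pool(name='poolname', replica_count=3)
+#     if is_request_complete(rq):
+#         pass  # e.g. render ceph.conf and start the consuming service
+#     else:
+#         send_request_if_needed(rq)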
+# Below is an example of glance having sent a request to ceph which ceph has
+# successfully processed:
+#  'ceph:8': {
+#      'ceph/0': {
+#          'auth': 'cephx',
+#          'broker-rsp-glance-0': '{"request-id": "0bc7dc54", "exit-code": 0}',
+#          'broker_rsp': '{"request-id": "0da543b8", "exit-code": 0}',
+#          'ceph-public-address': '10.5.44.103',
+#          'key': 'AQCLDttVuHXINhAAvI144CB09dYchhHyTUY9BQ==',
+#          'private-address': '10.5.44.103',
+#      },
+#      'glance/0': {
+#          'broker_req': ('{"api-version": 1, "request-id": "0bc7dc54", '
+#                         '"ops": [{"replicas": 3, "name": "glance", '
+#                         '"op": "create-pool"}]}'),
+#          'private-address': '10.5.44.109',
+#      },
+#  }
+
+def get_previous_request(rid):
+    """Return the last ceph broker request sent on a given relation
+
+    @param rid: Relation id to query for request
+    """
+    request = None
+    broker_req = relation_get(attribute='broker_req', rid=rid,
+                              unit=local_unit())
+    if broker_req:
+        request_data = json.loads(broker_req)
+        request = CephBrokerRq(api_version=request_data['api-version'],
+                               request_id=request_data['request-id'])
+        request.set_ops(request_data['ops'])
+
+    return request
+
+
+def get_request_states(request, relation='ceph'):
+    """Return a dict of requests per relation id with their corresponding
+    completion state.
+
+    This allows a charm, which has a request for ceph, to see whether there is
+    an equivalent request already being processed and if so what state that
+    request is in.
+
+    @param request: A CephBrokerRq object
+    """
+    requests = {}
+    for rid in relation_ids(relation):
+        previous_request = get_previous_request(rid)
+        if request == previous_request:
+            sent = True
+            complete = is_request_complete_for_rid(previous_request, rid)
+        else:
+            sent = False
+            complete = False
+
+        requests[rid] = {
+            'sent': sent,
+            'complete': complete,
+        }
+
+    return requests
+
+
+def is_request_sent(request, relation='ceph'):
+    """Check to see if a functionally equivalent request has already been sent
+
+    Returns True if a similar request has been sent
+
+    @param request: A CephBrokerRq object
+    """
+    states = get_request_states(request, relation=relation)
+    for rid in states.keys():
+        if not states[rid]['sent']:
+            return False
+
+    return True
+
+
+def is_request_complete(request, relation='ceph'):
+    """Check to see if a functionally equivalent request has already been
+    completed
+
+    Returns True if a similar request has been completed
+
+    @param request: A CephBrokerRq object
+    """
+    states = get_request_states(request, relation=relation)
+    for rid in states.keys():
+        if not states[rid]['complete']:
+            return False
+
+    return True
+
+
+def is_request_complete_for_rid(request, rid):
+    """Check if a given request has been completed on the given relation
+
+    @param request: A CephBrokerRq object
+    @param rid: Relation ID
+    """
+    broker_key = get_broker_rsp_key()
+    for unit in related_units(rid):
+        rdata = relation_get(rid=rid, unit=unit)
+        if rdata.get(broker_key):
+            rsp = CephBrokerRsp(rdata.get(broker_key))
+            if rsp.request_id == request.request_id:
+                if not rsp.exit_code:
+                    return True
+        else:
+            # The remote unit sent no reply targeted at this unit so either the
+            # remote ceph cluster does not support unit targeted replies or it
+            # has not processed our request yet. 
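+            # An illustrative legacy reply shape (assumed, for orientation):
+            #   rdata = {'broker_rsp':
+            #            '{"request-id": "0da543b8", "exit-code": 0}'}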
+            if rdata.get('broker_rsp'):
+                request_data = json.loads(rdata['broker_rsp'])
+                if request_data.get('request-id'):
+                    log('Ignoring legacy broker_rsp without unit key as remote '
+                        'service supports unit specific replies', level=DEBUG)
+                else:
+                    log('Using legacy broker_rsp as remote service does not '
+                        'support unit specific replies', level=DEBUG)
+                    rsp = CephBrokerRsp(rdata['broker_rsp'])
+                    if not rsp.exit_code:
+                        return True
+
+    return False
+
+
+def get_broker_rsp_key():
+    """Return broker response key for this unit
+
+    This is the key that ceph is going to use to pass request status
+    information back to this unit
+    """
+    return 'broker-rsp-' + local_unit().replace('/', '-')
+
+
+def send_request_if_needed(request, relation='ceph'):
+    """Send broker request if an equivalent request has not already been sent
+
+    @param request: A CephBrokerRq object
+    """
+    if is_request_sent(request, relation=relation):
+        log('Request already sent but not complete, not sending new request',
+            level=DEBUG)
+    else:
+        for rid in relation_ids(relation):
+            log('Sending request {}'.format(request.request_id), level=DEBUG)
+            relation_set(relation_id=rid, broker_req=request.request)
+
+
+def has_broker_rsp(rid=None, unit=None):
+    """Return True if the broker_rsp key is 'truthy' (i.e. set to something) in the relation data.
+
+    :param rid: The relation to check (default of None means current relation)
+    :type rid: Union[str, None]
+    :param unit: The remote unit to check (default of None means current unit)
+    :type unit: Union[str, None]
+    :returns: True if broker key exists and is set to something 'truthy'
+    :rtype: bool
+    """
+    rdata = relation_get(rid=rid, unit=unit) or {}
+    broker_rsp = rdata.get(get_broker_rsp_key())
+    return bool(broker_rsp)
+
+
+def is_broker_action_done(action, rid=None, unit=None):
+    """Check whether broker action has completed yet.
+
+    @param action: name of action to be performed
+    @returns True if action complete otherwise False
+    """
+    rdata = relation_get(rid=rid, unit=unit) or {}
+    broker_rsp = rdata.get(get_broker_rsp_key())
+    if not broker_rsp:
+        return False
+
+    rsp = CephBrokerRsp(broker_rsp)
+    unit_name = local_unit().partition('/')[2]
+    key = "unit_{}_ceph_broker_action.{}".format(unit_name, action)
+    kvstore = kv()
+    val = kvstore.get(key=key)
+    if val and val == rsp.request_id:
+        return True
+
+    return False
+
+
+def mark_broker_action_done(action, rid=None, unit=None):
+    """Mark action as having been completed.
+
+    @param action: name of action to be performed
+    @returns None
+    """
+    rdata = relation_get(rid=rid, unit=unit) or {}
+    broker_rsp = rdata.get(get_broker_rsp_key())
+    if not broker_rsp:
+        return
+
+    rsp = CephBrokerRsp(broker_rsp)
+    unit_name = local_unit().partition('/')[2]
+    key = "unit_{}_ceph_broker_action.{}".format(unit_name, action)
+    kvstore = kv()
+    kvstore.set(key=key, value=rsp.request_id)
+    kvstore.flush()
+
+
+class CephConfContext(object):
+    """Ceph config (ceph.conf) context.
+
+    Supports user-provided Ceph configuration settings. Users can provide a
+    dictionary as the value for the config-flags charm option containing
+    Ceph configuration settings keyed by their section in ceph.conf. 
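+
+    An illustrative config-flags value (assumed shape; any valid ceph.conf
+    sections and options may appear):
+
+        {'global': {'debug osd': '1/5'},
+         'osd': {'osd journal size': '1024'}}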
+ """ + def __init__(self, permitted_sections=None): + self.permitted_sections = permitted_sections or [] + + def __call__(self): + conf = config('config-flags') + if not conf: + return {} + + conf = config_flags_parser(conf) + if not isinstance(conf, dict): + log("Provided config-flags is not a dictionary - ignoring", + level=WARNING) + return {} + + permitted = self.permitted_sections + if permitted: + diff = set(conf.keys()).difference(set(permitted)) + if diff: + log("Config-flags contains invalid keys '%s' - they will be " + "ignored" % (', '.join(diff)), level=WARNING) + + ceph_conf = {} + for key in conf: + if permitted and key not in permitted: + log("Ignoring key '%s'" % key, level=WARNING) + continue + + ceph_conf[key] = conf[key] + return ceph_conf + + +class CephOSDConfContext(CephConfContext): + """Ceph config (ceph.conf) context. + + Consolidates settings from config-flags via CephConfContext with + settings provided by the mons. The config-flag values are preserved in + conf['osd'], settings from the mons which do not clash with config-flag + settings are in conf['osd_from_client'] and finally settings which do + clash are in conf['osd_from_client_conflict']. Rather than silently drop + the conflicting settings they are provided in the context so they can be + rendered commented out to give some visability to the admin. + """ + + def __init__(self, permitted_sections=None): + super(CephOSDConfContext, self).__init__( + permitted_sections=permitted_sections) + try: + self.settings_from_mons = get_osd_settings('mon') + except OSDSettingConflict: + log( + "OSD settings from mons are inconsistent, ignoring them", + level=WARNING) + self.settings_from_mons = {} + + def filter_osd_from_mon_settings(self): + """Filter settings from client relation against config-flags. + + :returns: A tuple ( + ,config-flag values, + ,client settings which do not conflict with config-flag values, + ,client settings which confilct with config-flag values) + :rtype: (OrderedDict, OrderedDict, OrderedDict) + """ + ceph_conf = super(CephOSDConfContext, self).__call__() + conflicting_entries = {} + clear_entries = {} + for key, value in self.settings_from_mons.items(): + if key in ceph_conf.get('osd', {}): + if ceph_conf['osd'][key] != value: + conflicting_entries[key] = value + else: + clear_entries[key] = value + clear_entries = _order_dict_by_key(clear_entries) + conflicting_entries = _order_dict_by_key(conflicting_entries) + return ceph_conf, clear_entries, conflicting_entries + + def __call__(self): + """Construct OSD config context. + + Standard context with two additional special keys. + osd_from_client_conflict: client settings which confilct with + config-flag values + osd_from_client: settings which do not conflict with config-flag + values + + :returns: OSD config context dict. + :rtype: dict + """ + conf, osd_clear, osd_conflict = self.filter_osd_from_mon_settings() + conf['osd_from_client_conflict'] = osd_conflict + conf['osd_from_client'] = osd_clear + return conf diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/storage/linux/loopback.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/storage/linux/loopback.py new file mode 100644 index 0000000000000000000000000000000000000000..74bab40e43a978e3d9e1e2f9c8975368092145c0 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/storage/linux/loopback.py @@ -0,0 +1,92 @@ +# Copyright 2014-2015 Canonical Limited. 
+# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import os +import re +from subprocess import ( + check_call, + check_output, +) + +import six + + +################################################## +# loopback device helpers. +################################################## +def loopback_devices(): + ''' + Parse through 'losetup -a' output to determine currently mapped + loopback devices. Output is expected to look like: + + /dev/loop0: [0807]:961814 (/tmp/my.img) + + or: + + /dev/loop0: [0807]:961814 (/tmp/my.img (deleted)) + + :returns: dict: a dict mapping {loopback_dev: backing_file} + ''' + loopbacks = {} + cmd = ['losetup', '-a'] + output = check_output(cmd) + if six.PY3: + output = output.decode('utf-8') + devs = [d.strip().split(' ', 2) for d in output.splitlines() if d != ''] + for dev, _, f in devs: + loopbacks[dev.replace(':', '')] = re.search(r'\((.+)\)', f).groups()[0] + return loopbacks + + +def create_loopback(file_path): + ''' + Create a loopback device for a given backing file. + + :returns: str: Full path to new loopback device (eg, /dev/loop0) + ''' + file_path = os.path.abspath(file_path) + check_call(['losetup', '--find', file_path]) + for d, f in six.iteritems(loopback_devices()): + if f == file_path: + return d + + +def ensure_loopback_device(path, size): + ''' + Ensure a loopback device exists for a given backing file path and size. + If it a loopback device is not mapped to file, a new one will be created. + + TODO: Confirm size of found loopback device. + + :returns: str: Full path to the ensured loopback device (eg, /dev/loop0) + ''' + for d, f in six.iteritems(loopback_devices()): + if f == path: + return d + + if not os.path.exists(path): + cmd = ['truncate', '--size', size, path] + check_call(cmd) + + return create_loopback(path) + + +def is_mapped_loopback_device(device): + """ + Checks if a given device name is an existing/mapped loopback device. + :param device: str: Full path to the device (eg, /dev/loop1). + :returns: str: Path to the backing file if is a loopback device + empty string otherwise + """ + return loopback_devices().get(device, "") diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/storage/linux/lvm.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/storage/linux/lvm.py new file mode 100644 index 0000000000000000000000000000000000000000..c8bde69263f0e917d32d0e5d70abba1409b26012 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/storage/linux/lvm.py @@ -0,0 +1,182 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import functools +from subprocess import ( + CalledProcessError, + check_call, + check_output, + Popen, + PIPE, +) + + +################################################## +# LVM helpers. +################################################## +def deactivate_lvm_volume_group(block_device): + ''' + Deactivate any volume gruop associated with an LVM physical volume. + + :param block_device: str: Full path to LVM physical volume + ''' + vg = list_lvm_volume_group(block_device) + if vg: + cmd = ['vgchange', '-an', vg] + check_call(cmd) + + +def is_lvm_physical_volume(block_device): + ''' + Determine whether a block device is initialized as an LVM PV. + + :param block_device: str: Full path of block device to inspect. + + :returns: boolean: True if block device is a PV, False if not. + ''' + try: + check_output(['pvdisplay', block_device]) + return True + except CalledProcessError: + return False + + +def remove_lvm_physical_volume(block_device): + ''' + Remove LVM PV signatures from a given block device. + + :param block_device: str: Full path of block device to scrub. + ''' + p = Popen(['pvremove', '-ff', block_device], + stdin=PIPE) + p.communicate(input='y\n') + + +def list_lvm_volume_group(block_device): + ''' + List LVM volume group associated with a given block device. + + Assumes block device is a valid LVM PV. + + :param block_device: str: Full path of block device to inspect. + + :returns: str: Name of volume group associated with block device or None + ''' + vg = None + pvd = check_output(['pvdisplay', block_device]).splitlines() + for lvm in pvd: + lvm = lvm.decode('UTF-8') + if lvm.strip().startswith('VG Name'): + vg = ' '.join(lvm.strip().split()[2:]) + return vg + + +def create_lvm_physical_volume(block_device): + ''' + Initialize a block device as an LVM physical volume. + + :param block_device: str: Full path of block device to initialize. + + ''' + check_call(['pvcreate', block_device]) + + +def create_lvm_volume_group(volume_group, block_device): + ''' + Create an LVM volume group backed by a given block device. + + Assumes block device has already been initialized as an LVM PV. + + :param volume_group: str: Name of volume group to create. + :block_device: str: Full path of PV-initialized block device. 
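+
+    Illustrative sequence (assumes /dev/sdb is a spare, unused disk):
+
+        create_lvm_physical_volume('/dev/sdb')
+        create_lvm_volume_group('vg-data', '/dev/sdb')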
+ ''' + check_call(['vgcreate', volume_group, block_device]) + + +def list_logical_volumes(select_criteria=None, path_mode=False): + ''' + List logical volumes + + :param select_criteria: str: Limit list to those volumes matching this + criteria (see 'lvs -S help' for more details) + :param path_mode: bool: return logical volume name in 'vg/lv' format, this + format is required for some commands like lvextend + :returns: [str]: List of logical volumes + ''' + lv_diplay_attr = 'lv_name' + if path_mode: + # Parsing output logic relies on the column order + lv_diplay_attr = 'vg_name,' + lv_diplay_attr + cmd = ['lvs', '--options', lv_diplay_attr, '--noheadings'] + if select_criteria: + cmd.extend(['--select', select_criteria]) + lvs = [] + for lv in check_output(cmd).decode('UTF-8').splitlines(): + if not lv: + continue + if path_mode: + lvs.append('/'.join(lv.strip().split())) + else: + lvs.append(lv.strip()) + return lvs + + +list_thin_logical_volume_pools = functools.partial( + list_logical_volumes, + select_criteria='lv_attr =~ ^t') + +list_thin_logical_volumes = functools.partial( + list_logical_volumes, + select_criteria='lv_attr =~ ^V') + + +def extend_logical_volume_by_device(lv_name, block_device): + ''' + Extends the size of logical volume lv_name by the amount of free space on + physical volume block_device. + + :param lv_name: str: name of logical volume to be extended (vg/lv format) + :param block_device: str: name of block_device to be allocated to lv_name + ''' + cmd = ['lvextend', lv_name, block_device] + check_call(cmd) + + +def create_logical_volume(lv_name, volume_group, size=None): + ''' + Create a new logical volume in an existing volume group + + :param lv_name: str: name of logical volume to be created. + :param volume_group: str: Name of volume group to use for the new volume. + :param size: str: Size of logical volume to create (100% if not supplied) + :raises subprocess.CalledProcessError: in the event that the lvcreate fails. + ''' + if size: + check_call([ + 'lvcreate', + '--yes', + '-L', + '{}'.format(size), + '-n', lv_name, volume_group + ]) + # create the lv with all the space available, this is needed because the + # system call is different for LVM + else: + check_call([ + 'lvcreate', + '--yes', + '-l', + '100%FREE', + '-n', lv_name, volume_group + ]) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/storage/linux/utils.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/storage/linux/utils.py new file mode 100644 index 0000000000000000000000000000000000000000..a35617606cf52d7cffc04ac245811b770fd95e8e --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/storage/linux/utils.py @@ -0,0 +1,128 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +import os +import re +from stat import S_ISBLK + +from subprocess import ( + CalledProcessError, + check_call, + check_output, + call +) + + +def _luks_uuid(dev): + """ + Check to see if dev is a LUKS encrypted volume, returning the UUID + of volume if it is. + + :param: dev: path to block device to check. + :returns: str. UUID of LUKS device or None if not a LUKS device + """ + try: + cmd = ['cryptsetup', 'luksUUID', dev] + return check_output(cmd).decode('UTF-8').strip() + except CalledProcessError: + return None + + +def is_luks_device(dev): + """ + Determine if dev is a LUKS-formatted block device. + + :param: dev: A full path to a block device to check for LUKS header + presence + :returns: boolean: indicates whether a device is used based on LUKS header. + """ + return True if _luks_uuid(dev) else False + + +def is_mapped_luks_device(dev): + """ + Determine if dev is a mapped LUKS device + :param: dev: A full path to a block device to be checked + :returns: boolean: indicates whether a device is mapped + """ + _, dirs, _ = next(os.walk( + '/sys/class/block/{}/holders/' + .format(os.path.basename(os.path.realpath(dev)))) + ) + is_held = len(dirs) > 0 + return is_held and is_luks_device(dev) + + +def is_block_device(path): + ''' + Confirm device at path is a valid block device node. + + :returns: boolean: True if path is a block device, False if not. + ''' + if not os.path.exists(path): + return False + return S_ISBLK(os.stat(path).st_mode) + + +def zap_disk(block_device): + ''' + Clear a block device of partition table. Relies on sgdisk, which is + installed as pat of the 'gdisk' package in Ubuntu. + + :param block_device: str: Full path of block device to clean. + ''' + # https://github.com/ceph/ceph/commit/fdd7f8d83afa25c4e09aaedd90ab93f3b64a677b + # sometimes sgdisk exits non-zero; this is OK, dd will clean up + call(['sgdisk', '--zap-all', '--', block_device]) + call(['sgdisk', '--clear', '--mbrtogpt', '--', block_device]) + dev_end = check_output(['blockdev', '--getsz', + block_device]).decode('UTF-8') + gpt_end = int(dev_end.split()[0]) - 100 + check_call(['dd', 'if=/dev/zero', 'of=%s' % (block_device), + 'bs=1M', 'count=1']) + check_call(['dd', 'if=/dev/zero', 'of=%s' % (block_device), + 'bs=512', 'count=100', 'seek=%s' % (gpt_end)]) + + +def is_device_mounted(device): + '''Given a device path, return True if that device is mounted, and False + if it isn't. + + :param device: str: Full path of the device to check. + :returns: boolean: True if the path represents a mounted device, False if + it doesn't. + ''' + try: + out = check_output(['lsblk', '-P', device]).decode('UTF-8') + except Exception: + return False + return bool(re.search(r'MOUNTPOINT=".+"', out)) + + +def mkfs_xfs(device, force=False, inode_size=1024): + """Format device with XFS filesystem. + + By default this should fail if the device already has a filesystem on it. 
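+
+    Illustrative call (assumes /dev/vdb is a blank block device):
+
+        mkfs_xfs('/dev/vdb', inode_size=512)
+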
+    :param device: Full path to device to format
+    :ptype device: str
+    :param force: Force operation
+    :ptype force: boolean
+    :param inode_size: XFS inode size in bytes
+    :ptype inode_size: int
+    """
+    cmd = ['mkfs.xfs']
+    if force:
+        cmd.append("-f")
+
+    cmd += ['-i', "size={}".format(inode_size), device]
+    check_call(cmd)
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/templating/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/templating/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..d7567b863e3a5ad2b7a7f44958b4166e0c3d346b
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/templating/__init__.py
@@ -0,0 +1,13 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/templating/contexts.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/templating/contexts.py
new file mode 100644
index 0000000000000000000000000000000000000000..c1adf94b133f41a168727a3cdd3a536633ff62b3
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/templating/contexts.py
@@ -0,0 +1,137 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# Copyright 2013 Canonical Ltd.
+#
+# Authors:
+#  Charm Helpers Developers
+"""A helper to create a yaml cache of config with namespaced relation data."""
+import os
+import yaml
+
+import six
+
+import charmhelpers.core.hookenv
+
+
+charm_dir = os.environ.get('CHARM_DIR', '')
+
+
+def dict_keys_without_hyphens(a_dict):
+    """Return a new dict with underscores instead of hyphens in keys."""
+    return dict(
+        (key.replace('-', '_'), val) for key, val in a_dict.items())
+
+
+def update_relations(context, namespace_separator=':'):
+    """Update the context with the relation data."""
+    # Add any relation data prefixed with the relation type.
+    relation_type = charmhelpers.core.hookenv.relation_type()
+    relations = []
+    context['current_relation'] = {}
+    if relation_type is not None:
+        relation_data = charmhelpers.core.hookenv.relation_get()
+        context['current_relation'] = relation_data
+        # Deprecated: the following use of relation data as keys
+        # directly in the context will be removed. 
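+        # Illustrative: for a 'db' relation, a key like 'private-address'
+        # becomes 'db:private-address' here, then 'db:private_address'
+        # after the hyphen conversion below.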
+        relation_data = dict(
+            ("{relation_type}{namespace_separator}{key}".format(
+                relation_type=relation_type,
+                key=key,
+                namespace_separator=namespace_separator), val)
+            for key, val in relation_data.items())
+        relation_data = dict_keys_without_hyphens(relation_data)
+        context.update(relation_data)
+        relations = charmhelpers.core.hookenv.relations_of_type(relation_type)
+        relations = [dict_keys_without_hyphens(rel) for rel in relations]
+
+    context['relations_full'] = charmhelpers.core.hookenv.relations()
+
+    # the hookenv.relations() data structure is effectively unusable in
+    # templates and other contexts when trying to access relation data other
+    # than the current relation. So provide a more useful structure that works
+    # with any hook.
+    local_unit = charmhelpers.core.hookenv.local_unit()
+    relations = {}
+    for rname, rids in context['relations_full'].items():
+        relations[rname] = []
+        for rid, rdata in rids.items():
+            data = rdata.copy()
+            if local_unit in rdata:
+                data.pop(local_unit)
+            for unit_name, rel_data in data.items():
+                new_data = {'__relid__': rid, '__unit__': unit_name}
+                new_data.update(rel_data)
+                relations[rname].append(new_data)
+    context['relations'] = relations
+
+
+def juju_state_to_yaml(yaml_path, namespace_separator=':',
+                       allow_hyphens_in_keys=True, mode=None):
+    """Update the juju config and state in a yaml file.
+
+    This includes any current relation-get data, and the charm
+    directory.
+
+    This function was created for the ansible and saltstack
+    support, as those libraries can use a yaml file to supply
+    context to templates, but it may be useful generally to
+    create and update an on-disk cache of all the config, including
+    previous relation data.
+
+    By default, hyphens are allowed in keys as this is supported
+    by yaml, but for tools like ansible, hyphens are not valid [1].
+
+    [1] http://www.ansibleworks.com/docs/playbooks_variables.html#what-makes-a-valid-variable-name
+    """
+    config = charmhelpers.core.hookenv.config()
+
+    # Add the charm_dir which we will need to refer to charm
+    # file resources etc.
+    config['charm_dir'] = charm_dir
+    config['local_unit'] = charmhelpers.core.hookenv.local_unit()
+    config['unit_private_address'] = charmhelpers.core.hookenv.unit_private_ip()
+    config['unit_public_address'] = charmhelpers.core.hookenv.unit_get(
+        'public-address'
+    )
+
+    # Don't use non-standard tags for unicode which will not
+    # work when salt uses yaml.safe_load. 
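+    # (Registering a representer for text scalars keeps them as plain
+    # 'tag:yaml.org,2002:str' nodes rather than python-specific tags.)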
+ yaml.add_representer(six.text_type, + lambda dumper, value: dumper.represent_scalar( + six.u('tag:yaml.org,2002:str'), value)) + + yaml_dir = os.path.dirname(yaml_path) + if not os.path.exists(yaml_dir): + os.makedirs(yaml_dir) + + if os.path.exists(yaml_path): + with open(yaml_path, "r") as existing_vars_file: + existing_vars = yaml.load(existing_vars_file.read()) + else: + with open(yaml_path, "w+"): + pass + existing_vars = {} + + if mode is not None: + os.chmod(yaml_path, mode) + + if not allow_hyphens_in_keys: + config = dict_keys_without_hyphens(config) + existing_vars.update(config) + + update_relations(existing_vars, namespace_separator) + + with open(yaml_path, "w+") as fp: + fp.write(yaml.dump(existing_vars, default_flow_style=False)) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/templating/jinja.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/templating/jinja.py new file mode 100644 index 0000000000000000000000000000000000000000..38d4fba0e651399064a14112398a0ef43717f5cd --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/templating/jinja.py @@ -0,0 +1,38 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +""" +Templating using the python-jinja2 package. +""" +import six +from charmhelpers.fetch import apt_install, apt_update +try: + import jinja2 +except ImportError: + apt_update(fatal=True) + if six.PY3: + apt_install(["python3-jinja2"], fatal=True) + else: + apt_install(["python-jinja2"], fatal=True) + import jinja2 + + +DEFAULT_TEMPLATES_DIR = 'templates' + + +def render(template_name, context, template_dir=DEFAULT_TEMPLATES_DIR): + templates = jinja2.Environment( + loader=jinja2.FileSystemLoader(template_dir)) + template = templates.get_template(template_name) + return template.render(context) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/templating/pyformat.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/templating/pyformat.py new file mode 100644 index 0000000000000000000000000000000000000000..51a24dc42379fbee88d8952c578fc2498c852b0b --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/templating/pyformat.py @@ -0,0 +1,27 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +''' +Templating using standard Python str.format() method. 
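+
+Illustrative use (assumes a Juju hook execution environment is available
+to ``hookenv.execution_environment()``; 'greeting' is a made-up key
+supplied via ``extra``):
+
+    render('Hello {greeting}', extra={'greeting': 'world'})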
+''' + +from charmhelpers.core import hookenv + + +def render(template, extra={}, **kwargs): + """Return the template rendered using Python's str.format().""" + context = hookenv.execution_environment() + context.update(extra) + context.update(kwargs) + return template.format(**context) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/unison/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/unison/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..61409b14c843766aa10cb957bb1c3d83ba87c6a7 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/contrib/unison/__init__.py @@ -0,0 +1,314 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# Easy file synchronization among peer units using ssh + unison. +# +# For the -joined, -changed, and -departed peer relations, add a call to +# ssh_authorized_peers() describing the peer relation and the desired +# user + group. After all peer relations have settled, all hosts should +# be able to connect to on another via key auth'd ssh as the specified user. +# +# Other hooks are then free to synchronize files and directories using +# sync_to_peers(). +# +# For a peer relation named 'cluster', for example: +# +# cluster-relation-joined: +# ... +# ssh_authorized_peers(peer_interface='cluster', +# user='juju_ssh', group='juju_ssh', +# ensure_local_user=True) +# ... +# +# cluster-relation-changed: +# ... +# ssh_authorized_peers(peer_interface='cluster', +# user='juju_ssh', group='juju_ssh', +# ensure_local_user=True) +# ... +# +# cluster-relation-departed: +# ... +# ssh_authorized_peers(peer_interface='cluster', +# user='juju_ssh', group='juju_ssh', +# ensure_local_user=True) +# ... +# +# Hooks are now free to sync files as easily as: +# +# files = ['/etc/fstab', '/etc/apt.conf.d/'] +# sync_to_peers(peer_interface='cluster', +# user='juju_ssh, paths=[files]) +# +# It is assumed the charm itself has setup permissions on each unit +# such that 'juju_ssh' has read + write permissions. Also assumed +# that the calling charm takes care of leader delegation. 
+# +# Additionally files can be synchronized only to an specific unit: +# sync_to_peer(slave_address, user='juju_ssh', +# paths=[files], verbose=False) + +import os +import pwd + +from copy import copy +from subprocess import check_call, check_output + +from charmhelpers.core.host import ( + adduser, + add_user_to_group, + pwgen, + remove_password_expiry, +) + +from charmhelpers.core.hookenv import ( + log, + hook_name, + relation_ids, + related_units, + relation_set, + relation_get, + unit_private_ip, + INFO, + ERROR, +) + +BASE_CMD = ['unison', '-auto', '-batch=true', '-confirmbigdel=false', + '-fastcheck=true', '-group=false', '-owner=false', + '-prefer=newer', '-times=true'] + + +def get_homedir(user): + try: + user = pwd.getpwnam(user) + return user.pw_dir + except KeyError: + log('Could not get homedir for user %s: user exists?' % (user), ERROR) + raise Exception + + +def create_private_key(user, priv_key_path, key_type='rsa'): + types_bits = { + 'rsa': '2048', + 'ecdsa': '521', + } + if key_type not in types_bits: + log('Unknown ssh key type {}, using rsa'.format(key_type), ERROR) + key_type = 'rsa' + if not os.path.isfile(priv_key_path): + log('Generating new SSH key for user %s.' % user) + cmd = ['ssh-keygen', '-q', '-N', '', '-t', key_type, + '-b', types_bits[key_type], '-f', priv_key_path] + check_call(cmd) + else: + log('SSH key already exists at %s.' % priv_key_path) + check_call(['chown', user, priv_key_path]) + check_call(['chmod', '0600', priv_key_path]) + + +def create_public_key(user, priv_key_path, pub_key_path): + if not os.path.isfile(pub_key_path): + log('Generating missing ssh public key @ %s.' % pub_key_path) + cmd = ['ssh-keygen', '-y', '-f', priv_key_path] + p = check_output(cmd).strip() + with open(pub_key_path, 'wb') as out: + out.write(p) + check_call(['chown', user, pub_key_path]) + + +def get_keypair(user): + home_dir = get_homedir(user) + ssh_dir = os.path.join(home_dir, '.ssh') + priv_key = os.path.join(ssh_dir, 'id_rsa') + pub_key = '%s.pub' % priv_key + + if not os.path.isdir(ssh_dir): + os.mkdir(ssh_dir) + check_call(['chown', '-R', user, ssh_dir]) + + create_private_key(user, priv_key) + create_public_key(user, priv_key, pub_key) + + with open(priv_key, 'r') as p: + _priv = p.read().strip() + + with open(pub_key, 'r') as p: + _pub = p.read().strip() + + return (_priv, _pub) + + +def write_authorized_keys(user, keys): + home_dir = get_homedir(user) + ssh_dir = os.path.join(home_dir, '.ssh') + auth_keys = os.path.join(ssh_dir, 'authorized_keys') + log('Syncing authorized_keys @ %s.' % auth_keys) + with open(auth_keys, 'w') as out: + for k in keys: + out.write('%s\n' % k) + + +def write_known_hosts(user, hosts): + home_dir = get_homedir(user) + ssh_dir = os.path.join(home_dir, '.ssh') + known_hosts = os.path.join(ssh_dir, 'known_hosts') + khosts = [] + for host in hosts: + cmd = ['ssh-keyscan', host] + remote_key = check_output(cmd, universal_newlines=True).strip() + khosts.append(remote_key) + log('Syncing known_hosts @ %s.' % known_hosts) + with open(known_hosts, 'w') as out: + for host in khosts: + out.write('%s\n' % host) + + +def ensure_user(user, group=None): + adduser(user, pwgen()) + if group: + add_user_to_group(user, group) + # Remove password expiry (Bug #1686085) + remove_password_expiry(user) + + +def ssh_authorized_peers(peer_interface, user, group=None, + ensure_local_user=False): + """ + Main setup function, should be called from both peer -changed and -joined + hooks with the same parameters. 
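+
+    Example (names as in the module header above)::
+
+        ssh_authorized_peers(peer_interface='cluster',
+                             user='juju_ssh', group='juju_ssh',
+                             ensure_local_user=True)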
+ """ + if ensure_local_user: + ensure_user(user, group) + priv_key, pub_key = get_keypair(user) + hook = hook_name() + if hook == '%s-relation-joined' % peer_interface: + relation_set(ssh_pub_key=pub_key) + elif hook == '%s-relation-changed' % peer_interface or \ + hook == '%s-relation-departed' % peer_interface: + hosts = [] + keys = [] + + for r_id in relation_ids(peer_interface): + for unit in related_units(r_id): + ssh_pub_key = relation_get('ssh_pub_key', + rid=r_id, + unit=unit) + priv_addr = relation_get('private-address', + rid=r_id, + unit=unit) + if ssh_pub_key: + keys.append(ssh_pub_key) + hosts.append(priv_addr) + else: + log('ssh_authorized_peers(): ssh_pub_key ' + 'missing for unit %s, skipping.' % unit) + write_authorized_keys(user, keys) + write_known_hosts(user, hosts) + authed_hosts = ':'.join(hosts) + relation_set(ssh_authorized_hosts=authed_hosts) + + +def _run_as_user(user, gid=None): + try: + user = pwd.getpwnam(user) + except KeyError: + log('Invalid user: %s' % user) + raise Exception + uid = user.pw_uid + gid = gid or user.pw_gid + os.environ['HOME'] = user.pw_dir + + def _inner(): + os.setgid(gid) + os.setuid(uid) + return _inner + + +def run_as_user(user, cmd, gid=None): + return check_output(cmd, preexec_fn=_run_as_user(user, gid), cwd='/') + + +def collect_authed_hosts(peer_interface): + '''Iterate through the units on peer interface to find all that + have the calling host in its authorized hosts list''' + hosts = [] + for r_id in (relation_ids(peer_interface) or []): + for unit in related_units(r_id): + private_addr = relation_get('private-address', + rid=r_id, unit=unit) + authed_hosts = relation_get('ssh_authorized_hosts', + rid=r_id, unit=unit) + + if not authed_hosts: + log('Peer %s has not authorized *any* hosts yet, skipping.' % + (unit), level=INFO) + continue + + if unit_private_ip() in authed_hosts.split(':'): + hosts.append(private_addr) + else: + log('Peer %s has not authorized *this* host yet, skipping.' % + (unit), level=INFO) + return hosts + + +def sync_path_to_host(path, host, user, verbose=False, cmd=None, gid=None, + fatal=False): + """Sync path to an specific peer host + + Propagates exception if operation fails and fatal=True. + """ + cmd = cmd or copy(BASE_CMD) + if not verbose: + cmd.append('-silent') + + # removing trailing slash from directory paths, unison + # doesn't like these. + if path.endswith('/'): + path = path[:(len(path) - 1)] + + cmd = cmd + [path, 'ssh://%s@%s/%s' % (user, host, path)] + + try: + log('Syncing local path %s to %s@%s:%s' % (path, user, host, path)) + run_as_user(user, cmd, gid) + except Exception: + log('Error syncing remote files') + if fatal: + raise + + +def sync_to_peer(host, user, paths=None, verbose=False, cmd=None, gid=None, + fatal=False): + """Sync paths to an specific peer host + + Propagates exception if any operation fails and fatal=True. + """ + if paths: + for p in paths: + sync_path_to_host(p, host, user, verbose, cmd, gid, fatal) + + +def sync_to_peers(peer_interface, user, paths=None, verbose=False, cmd=None, + gid=None, fatal=False): + """Sync all hosts to an specific path + + The type of group is integer, it allows user has permissions to + operate a directory have a different group id with the user id. + + Propagates exception if any operation fails and fatal=True. 
+ """ + if paths: + for host in collect_authed_hosts(peer_interface): + sync_to_peer(host, user, paths, verbose, cmd, gid, fatal) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/coordinator.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/coordinator.py new file mode 100644 index 0000000000000000000000000000000000000000..59bee3e5b320f6bffa0d89eef7a20888cddb0521 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/coordinator.py @@ -0,0 +1,606 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +''' +The coordinator module allows you to use Juju's leadership feature to +coordinate operations between units of a service. + +Behavior is defined in subclasses of coordinator.BaseCoordinator. +One implementation is provided (coordinator.Serial), which allows an +operation to be run on a single unit at a time, on a first come, first +served basis. You can trivially define more complex behavior by +subclassing BaseCoordinator or Serial. + +:author: Stuart Bishop + + +Services Framework Usage +======================== + +Ensure a peers relation is defined in metadata.yaml. Instantiate a +BaseCoordinator subclass before invoking ServiceManager.manage(). +Ensure that ServiceManager.manage() is wired up to the leader-elected, +leader-settings-changed, peers relation-changed and peers +relation-departed hooks in addition to any other hooks you need, or your +service will deadlock. + +Ensure calls to acquire() are guarded, so that locks are only requested +when they are really needed (and thus hooks only triggered when necessary). +Failing to do this and calling acquire() unconditionally will put your unit +into a hook loop. Calls to granted() do not need to be guarded. + +For example:: + + from charmhelpers.core import hookenv, services + from charmhelpers import coordinator + + def maybe_restart(servicename): + serial = coordinator.Serial() + if needs_restart(): + serial.acquire('restart') + if serial.granted('restart'): + hookenv.service_restart(servicename) + + services = [dict(service='servicename', + data_ready=[maybe_restart])] + + if __name__ == '__main__': + _ = coordinator.Serial() # Must instantiate before manager.manage() + manager = services.ServiceManager(services) + manager.manage() + + +You can implement a similar pattern using a decorator. If the lock has +not been granted, an attempt to acquire() it will be made if the guard +function returns True. If the lock has been granted, the decorated function +is run as normal:: + + from charmhelpers.core import hookenv, services + from charmhelpers import coordinator + + serial = coordinator.Serial() # Global, instatiated on module import. + + def needs_restart(): + [ ... Introspect state. Return True if restart is needed ... 
]
+
+    @serial.require('restart', needs_restart)
+    def maybe_restart(servicename):
+        hookenv.service_restart(servicename)
+
+    services = [dict(service='servicename',
+                     data_ready=[maybe_restart])]
+
+    if __name__ == '__main__':
+        manager = services.ServiceManager(services)
+        manager.manage()
+
+
+Traditional Usage
+=================
+
+Ensure a peers relation is defined in metadata.yaml.
+
+If you are using charmhelpers.core.hookenv.Hooks, ensure that a
+BaseCoordinator subclass is instantiated before calling Hooks.execute.
+
+If you are not using charmhelpers.core.hookenv.Hooks, ensure
+that a BaseCoordinator subclass is instantiated and its handle()
+method called at the start of all your hooks.
+
+For example::
+
+    import sys
+    from charmhelpers.core import hookenv
+    from charmhelpers import coordinator
+
+    hooks = hookenv.Hooks()
+
+    def maybe_restart():
+        serial = coordinator.Serial()
+        if serial.granted('restart'):
+            hookenv.service_restart('myservice')
+
+    @hooks.hook
+    def config_changed():
+        update_config()
+        serial = coordinator.Serial()
+        if needs_restart():
+            serial.acquire('restart')
+        maybe_restart()
+
+    # Cluster hooks must be wired up.
+    @hooks.hook('cluster-relation-changed', 'cluster-relation-departed')
+    def cluster_relation_changed():
+        maybe_restart()
+
+    # Leader hooks must be wired up.
+    @hooks.hook('leader-elected', 'leader-settings-changed')
+    def leader_settings_changed():
+        maybe_restart()
+
+    [ ... repeat for *all* other hooks you are using ... ]
+
+    if __name__ == '__main__':
+        _ = coordinator.Serial()  # Must instantiate before execute()
+        hooks.execute(sys.argv)
+
+
+You can also use the require decorator. If the lock has not been granted,
+an attempt to acquire() it will be made if the guard function returns True.
+If the lock has been granted, the decorated function is run as normal::
+
+    import sys
+    from charmhelpers.core import hookenv
+    from charmhelpers import coordinator
+
+    hooks = hookenv.Hooks()
+    serial = coordinator.Serial()  # Must instantiate before execute()
+
+    @serial.require('restart', needs_restart)
+    def maybe_restart():
+        hookenv.service_restart('myservice')
+
+    @hooks.hook('install', 'config-changed', 'upgrade-charm',
+                # Peers and leader hooks must be wired up.
+                'cluster-relation-changed', 'cluster-relation-departed',
+                'leader-elected', 'leader-settings-changed')
+    def default_hook():
+        [...]
+        maybe_restart()
+
+    if __name__ == '__main__':
+        hooks.execute(sys.argv)
+
+
+Details
+=======
+
+A simple API is provided, similar to traditional locking APIs. A lock
+may be requested using the acquire() method, and the granted() method
+may be used to check if a lock previously requested by acquire() has
+been granted. It doesn't matter how many times acquire() is called in a
+hook.
+
+Locks are released at the end of the hook they are acquired in. This may
+be the current hook if the unit is the leader and the lock is free. It is
+more likely a future hook (probably leader-settings-changed, possibly
+the peers relation-changed or departed hook, potentially any hook).
+
+Whenever a charm needs to perform a coordinated action it will acquire()
+the lock and perform the action immediately if acquisition is
+successful. It will also need to perform the same action in every other
+hook if the lock has been granted.
+
+
+Grubby Details
+--------------
+
+Why do you need to be able to perform the same action in every hook?
+If the unit is the leader, then it may be able to grant its own lock
+and perform the action immediately in the source hook.
If the unit is +the leader and cannot immediately grant the lock, then its only +guaranteed chance of acquiring the lock is in the peers relation-joined, +relation-changed or peers relation-departed hooks when another unit has +released it (the only channel to communicate to the leader is the peers +relation). If the unit is not the leader, then it is unlikely the lock +is granted in the source hook (a previous hook must have also made the +request for this to happen). A non-leader is notified about the lock via +leader settings. These changes may be visible in any hook, even before +the leader-settings-changed hook has been invoked. Or the requesting +unit may be promoted to leader after making a request, in which case the +lock may be granted in leader-elected or in a future peers +relation-changed or relation-departed hook. + +This could be simpler if leader-settings-changed was invoked on the +leader. We could then never grant locks except in +leader-settings-changed hooks giving one place for the operation to be +performed. Unfortunately this is not the case with Juju 1.23 leadership. + +But of course, this doesn't really matter to most people as most people +seem to prefer the Services Framework or similar reset-the-world +approaches, rather than the twisty maze of attempting to deduce what +should be done based on what hook happens to be running (which always +seems to evolve into reset-the-world anyway when the charm grows beyond +the trivial). + +I chose not to implement a callback model, where a callback was passed +to acquire to be executed when the lock is granted, because the callback +may become invalid between making the request and the lock being granted +due to an upgrade-charm being run in the interim. And it would create +restrictions, such no lambdas, callback defined at the top level of a +module, etc. Still, we could implement it on top of what is here, eg. +by adding a defer decorator that stores a pickle of itself to disk and +have BaseCoordinator unpickle and execute them when the locks are granted. +''' +from datetime import datetime +from functools import wraps +import json +import os.path + +from six import with_metaclass + +from charmhelpers.core import hookenv + + +# We make BaseCoordinator and subclasses singletons, so that if we +# need to spill to local storage then only a single instance does so, +# rather than having multiple instances stomp over each other. +class Singleton(type): + _instances = {} + + def __call__(cls, *args, **kwargs): + if cls not in cls._instances: + cls._instances[cls] = super(Singleton, cls).__call__(*args, + **kwargs) + return cls._instances[cls] + + +class BaseCoordinator(with_metaclass(Singleton, object)): + relid = None # Peer relation-id, set by __init__ + relname = None + + grants = None # self.grants[unit][lock] == timestamp + requests = None # self.requests[unit][lock] == timestamp + + def __init__(self, relation_key='coordinator', peer_relation_name=None): + '''Instatiate a Coordinator. + + Data is stored on the peers relation and in leadership storage + under the provided relation_key. + + The peers relation is identified by peer_relation_name, and defaults + to the first one found in metadata.yaml. + ''' + # Most initialization is deferred, since invoking hook tools from + # the constructor makes testing hard. + self.key = relation_key + self.relname = peer_relation_name + hookenv.atstart(self.initialize) + + # Ensure that handle() is called, without placing that burden on + # the charm author. 
They still need to do this manually if they + # are not using a hook framework. + hookenv.atstart(self.handle) + + def initialize(self): + if self.requests is not None: + return # Already initialized. + + assert hookenv.has_juju_version('1.23'), 'Needs Juju 1.23+' + + if self.relname is None: + self.relname = _implicit_peer_relation_name() + + relids = hookenv.relation_ids(self.relname) + if relids: + self.relid = sorted(relids)[0] + + # Load our state, from leadership, the peer relationship, and maybe + # local state as a fallback. Populates self.requests and self.grants. + self._load_state() + self._emit_state() + + # Save our state if the hook completes successfully. + hookenv.atexit(self._save_state) + + # Schedule release of granted locks for the end of the hook. + # This needs to be the last of our atexit callbacks to ensure + # it will be run first when the hook is complete, because there + # is no point mutating our state after it has been saved. + hookenv.atexit(self._release_granted) + + def acquire(self, lock): + '''Acquire the named lock, non-blocking. + + The lock may be granted immediately, or in a future hook. + + Returns True if the lock has been granted. The lock will be + automatically released at the end of the hook in which it is + granted. + + Do not mindlessly call this method, as it triggers a cascade of + hooks. For example, if you call acquire() every time in your + peers relation-changed hook you will end up with an infinite loop + of hooks. It should almost always be guarded by some condition. + ''' + unit = hookenv.local_unit() + ts = self.requests[unit].get(lock) + if not ts: + # If there is no outstanding request on the peers relation, + # create one. + self.requests.setdefault(lock, {}) + self.requests[unit][lock] = _timestamp() + self.msg('Requested {}'.format(lock)) + + # If the leader has granted the lock, yay. + if self.granted(lock): + self.msg('Acquired {}'.format(lock)) + return True + + # If the unit making the request also happens to be the + # leader, it must handle the request now. Even though the + # request has been stored on the peers relation, the peers + # relation-changed hook will not be triggered. + if hookenv.is_leader(): + return self.grant(lock, unit) + + return False # Can't acquire lock, yet. Maybe next hook. + + def granted(self, lock): + '''Return True if a previously requested lock has been granted''' + unit = hookenv.local_unit() + ts = self.requests[unit].get(lock) + if ts and self.grants.get(unit, {}).get(lock) == ts: + return True + return False + + def requested(self, lock): + '''Return True if we are in the queue for the lock''' + return lock in self.requests[hookenv.local_unit()] + + def request_timestamp(self, lock): + '''Return the timestamp of our outstanding request for lock, or None. + + Returns a datetime.datetime() UTC timestamp, with no tzinfo attribute. + ''' + ts = self.requests[hookenv.local_unit()].get(lock, None) + if ts is not None: + return datetime.strptime(ts, _timestamp_format) + + def handle(self): + if not hookenv.is_leader(): + return # Only the leader can grant requests. + + self.msg('Leader handling coordinator requests') + + # Clear our grants that have been released. + for unit in self.grants.keys(): + for lock, grant_ts in list(self.grants[unit].items()): + req_ts = self.requests.get(unit, {}).get(lock) + if req_ts != grant_ts: + # The request timestamp does not match the granted + # timestamp. 
Several hooks on 'unit' may have run + # before the leader got a chance to make a decision, + # and 'unit' may have released its lock and attempted + # to reacquire it. This will change the timestamp, + # and we correctly revoke the old grant putting it + # to the end of the queue. + ts = datetime.strptime(self.grants[unit][lock], + _timestamp_format) + del self.grants[unit][lock] + self.released(unit, lock, ts) + + # Grant locks + for unit in self.requests.keys(): + for lock in self.requests[unit]: + self.grant(lock, unit) + + def grant(self, lock, unit): + '''Maybe grant the lock to a unit. + + The decision to grant the lock or not is made for $lock + by a corresponding method grant_$lock, which you may define + in a subclass. If no such method is defined, the default_grant + method is used. See Serial.default_grant() for details. + ''' + if not hookenv.is_leader(): + return False # Not the leader, so we cannot grant. + + # Set of units already granted the lock. + granted = set() + for u in self.grants: + if lock in self.grants[u]: + granted.add(u) + if unit in granted: + return True # Already granted. + + # Ordered list of units waiting for the lock. + reqs = set() + for u in self.requests: + if u in granted: + continue # In the granted set. Not wanted in the req list. + for _lock, ts in self.requests[u].items(): + if _lock == lock: + reqs.add((ts, u)) + queue = [t[1] for t in sorted(reqs)] + if unit not in queue: + return False # Unit has not requested the lock. + + # Locate custom logic, or fallback to the default. + grant_func = getattr(self, 'grant_{}'.format(lock), self.default_grant) + + if grant_func(lock, unit, granted, queue): + # Grant the lock. + self.msg('Leader grants {} to {}'.format(lock, unit)) + self.grants.setdefault(unit, {})[lock] = self.requests[unit][lock] + return True + + return False + + def released(self, unit, lock, timestamp): + '''Called on the leader when it has released a lock. + + By default, does nothing but log messages. Override if you + need to perform additional housekeeping when a lock is released, + for example recording timestamps. + ''' + interval = _utcnow() - timestamp + self.msg('Leader released {} from {}, held {}'.format(lock, unit, + interval)) + + def require(self, lock, guard_func, *guard_args, **guard_kw): + """Decorate a function to be run only when a lock is acquired. + + The lock is requested if the guard function returns True. + + The decorated function is called if the lock has been granted. + """ + def decorator(f): + @wraps(f) + def wrapper(*args, **kw): + if self.granted(lock): + self.msg('Granted {}'.format(lock)) + return f(*args, **kw) + if guard_func(*guard_args, **guard_kw) and self.acquire(lock): + return f(*args, **kw) + return None + return wrapper + return decorator + + def msg(self, msg): + '''Emit a message. Override to customize log spam.''' + hookenv.log('coordinator.{} {}'.format(self._name(), msg), + level=hookenv.INFO) + + def _name(self): + return self.__class__.__name__ + + def _load_state(self): + self.msg('Loading state') + + # All responses must be stored in the leadership settings. + # The leader cannot use local state, as a different unit may + # be leader next time. Which is fine, as the leadership + # settings are always available. + self.grants = json.loads(hookenv.leader_get(self.key) or '{}') + + local_unit = hookenv.local_unit() + + # All requests must be stored on the peers relation. This is + # the only channel units have to communicate with the leader. 
+ # Even the leader needs to store its requests here, as a + # different unit may be leader by the time the request can be + # granted. + if self.relid is None: + # The peers relation is not available. Maybe we are early in + # the units's lifecycle. Maybe this unit is standalone. + # Fallback to using local state. + self.msg('No peer relation. Loading local state') + self.requests = {local_unit: self._load_local_state()} + else: + self.requests = self._load_peer_state() + if local_unit not in self.requests: + # The peers relation has just been joined. Update any state + # loaded from our peers with our local state. + self.msg('New peer relation. Merging local state') + self.requests[local_unit] = self._load_local_state() + + def _emit_state(self): + # Emit this units lock status. + for lock in sorted(self.requests[hookenv.local_unit()].keys()): + if self.granted(lock): + self.msg('Granted {}'.format(lock)) + else: + self.msg('Waiting on {}'.format(lock)) + + def _save_state(self): + self.msg('Publishing state') + if hookenv.is_leader(): + # sort_keys to ensure stability. + raw = json.dumps(self.grants, sort_keys=True) + hookenv.leader_set({self.key: raw}) + + local_unit = hookenv.local_unit() + + if self.relid is None: + # No peers relation yet. Fallback to local state. + self.msg('No peer relation. Saving local state') + self._save_local_state(self.requests[local_unit]) + else: + # sort_keys to ensure stability. + raw = json.dumps(self.requests[local_unit], sort_keys=True) + hookenv.relation_set(self.relid, relation_settings={self.key: raw}) + + def _load_peer_state(self): + requests = {} + units = set(hookenv.related_units(self.relid)) + units.add(hookenv.local_unit()) + for unit in units: + raw = hookenv.relation_get(self.key, unit, self.relid) + if raw: + requests[unit] = json.loads(raw) + return requests + + def _local_state_filename(self): + # Include the class name. We allow multiple BaseCoordinator + # subclasses to be instantiated, and they are singletons, so + # this avoids conflicts (unless someone creates and uses two + # BaseCoordinator subclasses with the same class name, so don't + # do that). + return '.charmhelpers.coordinator.{}'.format(self._name()) + + def _load_local_state(self): + fn = self._local_state_filename() + if os.path.exists(fn): + with open(fn, 'r') as f: + return json.load(f) + return {} + + def _save_local_state(self, state): + fn = self._local_state_filename() + with open(fn, 'w') as f: + json.dump(state, f) + + def _release_granted(self): + # At the end of every hook, release all locks granted to + # this unit. If a hook neglects to make use of what it + # requested, it will just have to make the request again. + # Implicit release is the only way this will work, as + # if the unit is standalone there may be no future triggers + # called to do a manual release. + unit = hookenv.local_unit() + for lock in list(self.requests[unit].keys()): + if self.granted(lock): + self.msg('Released local {} lock'.format(lock)) + del self.requests[unit][lock] + + +class Serial(BaseCoordinator): + def default_grant(self, lock, unit, granted, queue): + '''Default logic to grant a lock to a unit. Unless overridden, + only one unit may hold the lock and it will be granted to the + earliest queued request. + + To define custom logic for $lock, create a subclass and + define a grant_$lock method. + + `unit` is the unit name making the request. + + `granted` is the set of units already granted the lock. It will + never include `unit`. It may be empty. 
+ + `queue` is the list of units waiting for the lock, ordered by time + of request. It will always include `unit`, but `unit` is not + necessarily first. + + Returns True if the lock should be granted to `unit`. + ''' + return unit == queue[0] and not granted + + +def _implicit_peer_relation_name(): + md = hookenv.metadata() + assert 'peers' in md, 'No peer relations in metadata.yaml' + return sorted(md['peers'].keys())[0] + + +# A human readable, sortable UTC timestamp format. +_timestamp_format = '%Y-%m-%d %H:%M:%S.%fZ' + + +def _utcnow(): # pragma: no cover + # This wrapper exists as mocking datetime methods is problematic. + return datetime.utcnow() + + +def _timestamp(): + return _utcnow().strftime(_timestamp_format) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/core/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/core/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..d7567b863e3a5ad2b7a7f44958b4166e0c3d346b --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/core/__init__.py @@ -0,0 +1,13 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/core/decorators.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/core/decorators.py new file mode 100644 index 0000000000000000000000000000000000000000..6ad41ee4121f4c0816935f8b16cd84f972aff22b --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/core/decorators.py @@ -0,0 +1,55 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# +# Copyright 2014 Canonical Ltd. +# +# Authors: +# Edward Hope-Morley +# + +import time + +from charmhelpers.core.hookenv import ( + log, + INFO, +) + + +def retry_on_exception(num_retries, base_delay=0, exc_type=Exception): + """If the decorated function raises exception exc_type, allow num_retries + retry attempts before raise the exception. 
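+
+    Example (illustrative; sleeps 2s, 4s, 6s between the three retries)::
+
+        @retry_on_exception(3, base_delay=2, exc_type=IOError)
+        def flaky_operation():
+            ...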
+ """ + def _retry_on_exception_inner_1(f): + def _retry_on_exception_inner_2(*args, **kwargs): + retries = num_retries + multiplier = 1 + while True: + try: + return f(*args, **kwargs) + except exc_type: + if not retries: + raise + + delay = base_delay * multiplier + multiplier += 1 + log("Retrying '%s' %d more times (delay=%s)" % + (f.__name__, retries, delay), level=INFO) + retries -= 1 + if delay: + time.sleep(delay) + + return _retry_on_exception_inner_2 + + return _retry_on_exception_inner_1 diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/core/files.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/core/files.py new file mode 100644 index 0000000000000000000000000000000000000000..fdd82b75709c13da0d534bf4962822984a3c1867 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/core/files.py @@ -0,0 +1,43 @@ +#!/usr/bin/env python +# -*- coding: utf-8 -*- + +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +__author__ = 'Jorge Niedbalski ' + +import os +import subprocess + + +def sed(filename, before, after, flags='g'): + """ + Search and replaces the given pattern on filename. + + :param filename: relative or absolute file path. + :param before: expression to be replaced (see 'man sed') + :param after: expression to replace with (see 'man sed') + :param flags: sed-compatible regex flags in example, to make + the search and replace case insensitive, specify ``flags="i"``. + The ``g`` flag is always specified regardless, so you do not + need to remember to include it when overriding this parameter. + :returns: If the sed command exit code was zero then return, + otherwise raise CalledProcessError. + """ + expression = r's/{0}/{1}/{2}'.format(before, + after, flags) + + return subprocess.check_call(["sed", "-i", "-r", "-e", + expression, + os.path.expanduser(filename)]) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/core/fstab.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/core/fstab.py new file mode 100644 index 0000000000000000000000000000000000000000..d9fa9152c765c538adad3fd9bc45a46018c89b72 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/core/fstab.py @@ -0,0 +1,132 @@ +#!/usr/bin/env python +# -*- coding: utf-8 -*- + +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +import io +import os + +__author__ = 'Jorge Niedbalski R. ' + + +class Fstab(io.FileIO): + """This class extends file in order to implement a file reader/writer + for file `/etc/fstab` + """ + + class Entry(object): + """Entry class represents a non-comment line on the `/etc/fstab` file + """ + def __init__(self, device, mountpoint, filesystem, + options, d=0, p=0): + self.device = device + self.mountpoint = mountpoint + self.filesystem = filesystem + + if not options: + options = "defaults" + + self.options = options + self.d = int(d) + self.p = int(p) + + def __eq__(self, o): + return str(self) == str(o) + + def __str__(self): + return "{} {} {} {} {} {}".format(self.device, + self.mountpoint, + self.filesystem, + self.options, + self.d, + self.p) + + DEFAULT_PATH = os.path.join(os.path.sep, 'etc', 'fstab') + + def __init__(self, path=None): + if path: + self._path = path + else: + self._path = self.DEFAULT_PATH + super(Fstab, self).__init__(self._path, 'rb+') + + def _hydrate_entry(self, line): + # NOTE: use split with no arguments to split on any + # whitespace including tabs + return Fstab.Entry(*filter( + lambda x: x not in ('', None), + line.strip("\n").split())) + + @property + def entries(self): + self.seek(0) + for line in self.readlines(): + line = line.decode('us-ascii') + try: + if line.strip() and not line.strip().startswith("#"): + yield self._hydrate_entry(line) + except ValueError: + pass + + def get_entry_by_attr(self, attr, value): + for entry in self.entries: + e_attr = getattr(entry, attr) + if e_attr == value: + return entry + return None + + def add_entry(self, entry): + if self.get_entry_by_attr('device', entry.device): + return False + + self.write((str(entry) + '\n').encode('us-ascii')) + self.truncate() + return entry + + def remove_entry(self, entry): + self.seek(0) + + lines = [l.decode('us-ascii') for l in self.readlines()] + + found = False + for index, line in enumerate(lines): + if line.strip() and not line.strip().startswith("#"): + if self._hydrate_entry(line) == entry: + found = True + break + + if not found: + return False + + lines.remove(line) + + self.seek(0) + self.write(''.join(lines).encode('us-ascii')) + self.truncate() + return True + + @classmethod + def remove_by_mountpoint(cls, mountpoint, path=None): + fstab = cls(path=path) + entry = fstab.get_entry_by_attr('mountpoint', mountpoint) + if entry: + return fstab.remove_entry(entry) + return False + + @classmethod + def add(cls, device, mountpoint, filesystem, options=None, path=None): + return cls(path=path).add_entry(Fstab.Entry(device, + mountpoint, filesystem, + options=options)) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/core/hookenv.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/core/hookenv.py new file mode 100644 index 0000000000000000000000000000000000000000..db7ce7282b4c96c8a33abf309a340377216922ec --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/core/hookenv.py @@ -0,0 +1,1613 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+# See the License for the specific language governing permissions and +# limitations under the License. + +"Interactions with the Juju environment" +# Copyright 2013 Canonical Ltd. +# +# Authors: +# Charm Helpers Developers + +from __future__ import print_function +import copy +from distutils.version import LooseVersion +from enum import Enum +from functools import wraps +from collections import namedtuple +import glob +import os +import json +import yaml +import re +import subprocess +import sys +import errno +import tempfile +from subprocess import CalledProcessError + +from charmhelpers import deprecate + +import six +if not six.PY3: + from UserDict import UserDict +else: + from collections import UserDict + + +CRITICAL = "CRITICAL" +ERROR = "ERROR" +WARNING = "WARNING" +INFO = "INFO" +DEBUG = "DEBUG" +TRACE = "TRACE" +MARKER = object() +SH_MAX_ARG = 131071 + + +RANGE_WARNING = ('Passing NO_PROXY string that includes a cidr. ' + 'This may not be compatible with software you are ' + 'running in your shell.') + + +class WORKLOAD_STATES(Enum): + ACTIVE = 'active' + BLOCKED = 'blocked' + MAINTENANCE = 'maintenance' + WAITING = 'waiting' + + +cache = {} + + +def cached(func): + """Cache return values for multiple executions of func + args + + For example:: + + @cached + def unit_get(attribute): + pass + + unit_get('test') + + will cache the result of unit_get + 'test' for future calls. + """ + @wraps(func) + def wrapper(*args, **kwargs): + global cache + key = json.dumps((func, args, kwargs), sort_keys=True, default=str) + try: + return cache[key] + except KeyError: + pass # Drop out of the exception handler scope. + res = func(*args, **kwargs) + cache[key] = res + return res + wrapper._wrapped = func + return wrapper + + +def flush(key): + """Flushes any entries from function cache where the + key is found in the function+args """ + flush_list = [] + for item in cache: + if key in item: + flush_list.append(item) + for item in flush_list: + del cache[item] + + +def log(message, level=None): + """Write a message to the juju log""" + command = ['juju-log'] + if level: + command += ['-l', level] + if not isinstance(message, six.string_types): + message = repr(message) + command += [message[:SH_MAX_ARG]] + # Missing juju-log should not cause failures in unit tests + # Send log output to stderr + try: + subprocess.call(command) + except OSError as e: + if e.errno == errno.ENOENT: + if level: + message = "{}: {}".format(level, message) + message = "juju-log: {}".format(message) + print(message, file=sys.stderr) + else: + raise + + +def function_log(message): + """Write a function progress message""" + command = ['function-log'] + if not isinstance(message, six.string_types): + message = repr(message) + command += [message[:SH_MAX_ARG]] + # Missing function-log should not cause failures in unit tests + # Send function_log output to stderr + try: + subprocess.call(command) + except OSError as e: + if e.errno == errno.ENOENT: + message = "function-log: {}".format(message) + print(message, file=sys.stderr) + else: + raise + + +class Serializable(UserDict): + """Wrapper, an object that can be serialized to yaml or json""" + + def __init__(self, obj): + # wrap the object + UserDict.__init__(self) + self.data = obj + + def __getattr__(self, attr): + # See if this object has attribute. + if attr in ("json", "yaml", "data"): + return self.__dict__[attr] + # Check for attribute in wrapped object. 
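+        # MARKER is a module-level sentinel; getattr() returning it means
+        # the wrapped object has no such attribute, so fall through to the
+        # dict-style lookup below.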
+ got = getattr(self.data, attr, MARKER) + if got is not MARKER: + return got + # Proxy to the wrapped object via dict interface. + try: + return self.data[attr] + except KeyError: + raise AttributeError(attr) + + def __getstate__(self): + # Pickle as a standard dictionary. + return self.data + + def __setstate__(self, state): + # Unpickle into our wrapper. + self.data = state + + def json(self): + """Serialize the object to json""" + return json.dumps(self.data) + + def yaml(self): + """Serialize the object to yaml""" + return yaml.dump(self.data) + + +def execution_environment(): + """A convenient bundling of the current execution context""" + context = {} + context['conf'] = config() + if relation_id(): + context['reltype'] = relation_type() + context['relid'] = relation_id() + context['rel'] = relation_get() + context['unit'] = local_unit() + context['rels'] = relations() + context['env'] = os.environ + return context + + +def in_relation_hook(): + """Determine whether we're running in a relation hook""" + return 'JUJU_RELATION' in os.environ + + +def relation_type(): + """The scope for the current relation hook""" + return os.environ.get('JUJU_RELATION', None) + + +@cached +def relation_id(relation_name=None, service_or_unit=None): + """The relation ID for the current or a specified relation""" + if not relation_name and not service_or_unit: + return os.environ.get('JUJU_RELATION_ID', None) + elif relation_name and service_or_unit: + service_name = service_or_unit.split('/')[0] + for relid in relation_ids(relation_name): + remote_service = remote_service_name(relid) + if remote_service == service_name: + return relid + else: + raise ValueError('Must specify neither or both of relation_name and service_or_unit') + + +def local_unit(): + """Local unit ID""" + return os.environ['JUJU_UNIT_NAME'] + + +def remote_unit(): + """The remote unit for the current relation hook""" + return os.environ.get('JUJU_REMOTE_UNIT', None) + + +def application_name(): + """ + The name of the deployed application this unit belongs to. + """ + return local_unit().split('/')[0] + + +def service_name(): + """ + .. deprecated:: 0.19.1 + Alias for :func:`application_name`. + """ + return application_name() + + +def model_name(): + """ + Name of the model that this unit is deployed in. + """ + return os.environ['JUJU_MODEL_NAME'] + + +def model_uuid(): + """ + UUID of the model that this unit is deployed in. + """ + return os.environ['JUJU_MODEL_UUID'] + + +def principal_unit(): + """Returns the principal unit of this unit, otherwise None""" + # Juju 2.2 and above provides JUJU_PRINCIPAL_UNIT + principal_unit = os.environ.get('JUJU_PRINCIPAL_UNIT', None) + # If it's empty, then this unit is the principal + if principal_unit == '': + return os.environ['JUJU_UNIT_NAME'] + elif principal_unit is not None: + return principal_unit + # For Juju 2.1 and below, let's try work out the principle unit by + # the various charms' metadata.yaml. 
+ for reltype in relation_types(): + for rid in relation_ids(reltype): + for unit in related_units(rid): + md = _metadata_unit(unit) + if not md: + continue + subordinate = md.pop('subordinate', None) + if not subordinate: + return unit + return None + + +@cached +def remote_service_name(relid=None): + """The remote service name for a given relation-id (or the current relation)""" + if relid is None: + unit = remote_unit() + else: + units = related_units(relid) + unit = units[0] if units else None + return unit.split('/')[0] if unit else None + + +def hook_name(): + """The name of the currently executing hook""" + return os.environ.get('JUJU_HOOK_NAME', os.path.basename(sys.argv[0])) + + +class Config(dict): + """A dictionary representation of the charm's config.yaml, with some + extra features: + + - See which values in the dictionary have changed since the previous hook. + - For values that have changed, see what the previous value was. + - Store arbitrary data for use in a later hook. + + NOTE: Do not instantiate this object directly - instead call + ``hookenv.config()``, which will return an instance of :class:`Config`. + + Example usage:: + + >>> # inside a hook + >>> from charmhelpers.core import hookenv + >>> config = hookenv.config() + >>> config['foo'] + 'bar' + >>> # store a new key/value for later use + >>> config['mykey'] = 'myval' + + + >>> # user runs `juju set mycharm foo=baz` + >>> # now we're inside subsequent config-changed hook + >>> config = hookenv.config() + >>> config['foo'] + 'baz' + >>> # test to see if this val has changed since last hook + >>> config.changed('foo') + True + >>> # what was the previous value? + >>> config.previous('foo') + 'bar' + >>> # keys/values that we add are preserved across hooks + >>> config['mykey'] + 'myval' + + """ + CONFIG_FILE_NAME = '.juju-persistent-config' + + def __init__(self, *args, **kw): + super(Config, self).__init__(*args, **kw) + self.implicit_save = True + self._prev_dict = None + self.path = os.path.join(charm_dir(), Config.CONFIG_FILE_NAME) + if os.path.exists(self.path) and os.stat(self.path).st_size: + self.load_previous() + atexit(self._implicit_save) + + def load_previous(self, path=None): + """Load previous copy of config from disk. + + In normal usage you don't need to call this method directly - it + is called automatically at object initialization. + + :param path: + + File path from which to load the previous config. If `None`, + config is loaded from the default location. If `path` is + specified, subsequent `save()` calls will write to the same + path. + + """ + self.path = path or self.path + with open(self.path) as f: + try: + self._prev_dict = json.load(f) + except ValueError as e: + log('Found but was unable to parse previous config data, ' + 'ignoring which will report all values as changed - {}' + .format(str(e)), level=ERROR) + return + for k, v in copy.deepcopy(self._prev_dict).items(): + if k not in self: + self[k] = v + + def changed(self, key): + """Return True if the current value for this key is different from + the previous value. + + """ + if self._prev_dict is None: + return True + return self.previous(key) != self.get(key) + + def previous(self, key): + """Return previous value for this key, or None if there + is no previous value. + + """ + if self._prev_dict: + return self._prev_dict.get(key) + return None + + def save(self): + """Save this config to disk. 
+ + If the charm is using the :mod:`Services Framework ` + or :meth:'@hook ' decorator, this + is called automatically at the end of successful hook execution. + Otherwise, it should be called directly by user code. + + To disable automatic saves, set ``implicit_save=False`` on this + instance. + + """ + with open(self.path, 'w') as f: + os.fchmod(f.fileno(), 0o600) + json.dump(self, f) + + def _implicit_save(self): + if self.implicit_save: + self.save() + + +_cache_config = None + + +def config(scope=None): + """ + Get the juju charm configuration (scope==None) or individual key, + (scope=str). The returned value is a Python data structure loaded as + JSON from the Juju config command. + + :param scope: If set, return the value for the specified key. + :type scope: Optional[str] + :returns: Either the whole config as a Config, or a key from it. + :rtype: Any + """ + global _cache_config + config_cmd_line = ['config-get', '--all', '--format=json'] + try: + # JSON Decode Exception for Python3.5+ + exc_json = json.decoder.JSONDecodeError + except AttributeError: + # JSON Decode Exception for Python2.7 through Python3.4 + exc_json = ValueError + try: + if _cache_config is None: + config_data = json.loads( + subprocess.check_output(config_cmd_line).decode('UTF-8')) + _cache_config = Config(config_data) + if scope is not None: + return _cache_config.get(scope) + return _cache_config + except (exc_json, UnicodeDecodeError) as e: + log('Unable to parse output from config-get: config_cmd_line="{}" ' + 'message="{}"' + .format(config_cmd_line, str(e)), level=ERROR) + return None + + +@cached +def relation_get(attribute=None, unit=None, rid=None): + """Get relation information""" + _args = ['relation-get', '--format=json'] + if rid: + _args.append('-r') + _args.append(rid) + _args.append(attribute or '-') + if unit: + _args.append(unit) + try: + return json.loads(subprocess.check_output(_args).decode('UTF-8')) + except ValueError: + return None + except CalledProcessError as e: + if e.returncode == 2: + return None + raise + + +def relation_set(relation_id=None, relation_settings=None, **kwargs): + """Set relation information for the current unit""" + relation_settings = relation_settings if relation_settings else {} + relation_cmd_line = ['relation-set'] + accepts_file = "--file" in subprocess.check_output( + relation_cmd_line + ["--help"], universal_newlines=True) + if relation_id is not None: + relation_cmd_line.extend(('-r', relation_id)) + settings = relation_settings.copy() + settings.update(kwargs) + for key, value in settings.items(): + # Force value to be a string: it always should, but some call + # sites pass in things like dicts or numbers. + if value is not None: + settings[key] = "{}".format(value) + if accepts_file: + # --file was introduced in Juju 1.23.2. Use it by default if + # available, since otherwise we'll break if the relation data is + # too big. Ideally we should tell relation-set to read the data from + # stdin, but that feature is broken in 1.23.2: Bug #1454678. 
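+        # The temp file handed to --file holds plain YAML of the settings,
+        # whose values were coerced to strings above, e.g.:
+        #   port: '8080'
+        #   private-address: 10.0.0.1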
+ with tempfile.NamedTemporaryFile(delete=False) as settings_file: + settings_file.write(yaml.safe_dump(settings).encode("utf-8")) + subprocess.check_call( + relation_cmd_line + ["--file", settings_file.name]) + os.remove(settings_file.name) + else: + for key, value in settings.items(): + if value is None: + relation_cmd_line.append('{}='.format(key)) + else: + relation_cmd_line.append('{}={}'.format(key, value)) + subprocess.check_call(relation_cmd_line) + # Flush cache of any relation-gets for local unit + flush(local_unit()) + + +def relation_clear(r_id=None): + ''' Clears any relation data already set on relation r_id ''' + settings = relation_get(rid=r_id, + unit=local_unit()) + for setting in settings: + if setting not in ['public-address', 'private-address']: + settings[setting] = None + relation_set(relation_id=r_id, + **settings) + + +@cached +def relation_ids(reltype=None): + """A list of relation_ids""" + reltype = reltype or relation_type() + relid_cmd_line = ['relation-ids', '--format=json'] + if reltype is not None: + relid_cmd_line.append(reltype) + return json.loads( + subprocess.check_output(relid_cmd_line).decode('UTF-8')) or [] + return [] + + +@cached +def related_units(relid=None): + """A list of related units""" + relid = relid or relation_id() + units_cmd_line = ['relation-list', '--format=json'] + if relid is not None: + units_cmd_line.extend(('-r', relid)) + return json.loads( + subprocess.check_output(units_cmd_line).decode('UTF-8')) or [] + + +def expected_peer_units(): + """Get a generator for units we expect to join peer relation based on + goal-state. + + The local unit is excluded from the result to make it easy to gauge + completion of all peers joining the relation with existing hook tools. + + Example usage: + log('peer {} of {} joined peer relation' + .format(len(related_units()), + len(list(expected_peer_units())))) + + This function will raise NotImplementedError if used with juju versions + without goal-state support. + + :returns: iterator + :rtype: types.GeneratorType + :raises: NotImplementedError + """ + if not has_juju_version("2.4.0"): + # goal-state first appeared in 2.4.0. + raise NotImplementedError("goal-state") + _goal_state = goal_state() + return (key for key in _goal_state['units'] + if '/' in key and key != local_unit()) + + +def expected_related_units(reltype=None): + """Get a generator for units we expect to join relation based on + goal-state. + + Note that you can not use this function for the peer relation, take a look + at expected_peer_units() for that. + + This function will raise KeyError if you request information for a + relation type for which juju goal-state does not have information. It will + raise NotImplementedError if used with juju versions without goal-state + support. + + Example usage: + log('participant {} of {} joined relation {}' + .format(len(related_units()), + len(list(expected_related_units())), + relation_type())) + + :param reltype: Relation type to list data for, default is to list data for + the realtion type we are currently executing a hook for. + :type reltype: str + :returns: iterator + :rtype: types.GeneratorType + :raises: KeyError, NotImplementedError + """ + if not has_juju_version("2.4.4"): + # goal-state existed in 2.4.0, but did not list individual units to + # join a relation in 2.4.1 through 2.4.3. 
(LP: #1794739) + raise NotImplementedError("goal-state relation unit count") + reltype = reltype or relation_type() + _goal_state = goal_state() + return (key for key in _goal_state['relations'][reltype] if '/' in key) + + +@cached +def relation_for_unit(unit=None, rid=None): + """Get the json represenation of a unit's relation""" + unit = unit or remote_unit() + relation = relation_get(unit=unit, rid=rid) + for key in relation: + if key.endswith('-list'): + relation[key] = relation[key].split() + relation['__unit__'] = unit + return relation + + +@cached +def relations_for_id(relid=None): + """Get relations of a specific relation ID""" + relation_data = [] + relid = relid or relation_ids() + for unit in related_units(relid): + unit_data = relation_for_unit(unit, relid) + unit_data['__relid__'] = relid + relation_data.append(unit_data) + return relation_data + + +@cached +def relations_of_type(reltype=None): + """Get relations of a specific type""" + relation_data = [] + reltype = reltype or relation_type() + for relid in relation_ids(reltype): + for relation in relations_for_id(relid): + relation['__relid__'] = relid + relation_data.append(relation) + return relation_data + + +@cached +def metadata(): + """Get the current charm metadata.yaml contents as a python object""" + with open(os.path.join(charm_dir(), 'metadata.yaml')) as md: + return yaml.safe_load(md) + + +def _metadata_unit(unit): + """Given the name of a unit (e.g. apache2/0), get the unit charm's + metadata.yaml. Very similar to metadata() but allows us to inspect + other units. Unit needs to be co-located, such as a subordinate or + principal/primary. + + :returns: metadata.yaml as a python object. + + """ + basedir = os.sep.join(charm_dir().split(os.sep)[:-2]) + unitdir = 'unit-{}'.format(unit.replace(os.sep, '-')) + joineddir = os.path.join(basedir, unitdir, 'charm', 'metadata.yaml') + if not os.path.exists(joineddir): + return None + with open(joineddir) as md: + return yaml.safe_load(md) + + +@cached +def relation_types(): + """Get a list of relation types supported by this charm""" + rel_types = [] + md = metadata() + for key in ('provides', 'requires', 'peers'): + section = md.get(key) + if section: + rel_types.extend(section.keys()) + return rel_types + + +@cached +def peer_relation_id(): + '''Get the peers relation id if a peers relation has been joined, else None.''' + md = metadata() + section = md.get('peers') + if section: + for key in section: + relids = relation_ids(key) + if relids: + return relids[0] + return None + + +@cached +def relation_to_interface(relation_name): + """ + Given the name of a relation, return the interface that relation uses. + + :returns: The interface name, or ``None``. + """ + return relation_to_role_and_interface(relation_name)[1] + + +@cached +def relation_to_role_and_interface(relation_name): + """ + Given the name of a relation, return the role and the name of the interface + that relation uses (where role is one of ``provides``, ``requires``, or ``peers``). + + :returns: A tuple containing ``(role, interface)``, or ``(None, None)``. 
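+
+    Example (the metadata shown is invented)::
+
+        # metadata.yaml:
+        #   peers:
+        #     cluster:
+        #       interface: my-ha
+        relation_to_role_and_interface('cluster')  # -> ('peers', 'my-ha')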
+ """ + _metadata = metadata() + for role in ('provides', 'requires', 'peers'): + interface = _metadata.get(role, {}).get(relation_name, {}).get('interface') + if interface: + return role, interface + return None, None + + +@cached +def role_and_interface_to_relations(role, interface_name): + """ + Given a role and interface name, return a list of relation names for the + current charm that use that interface under that role (where role is one + of ``provides``, ``requires``, or ``peers``). + + :returns: A list of relation names. + """ + _metadata = metadata() + results = [] + for relation_name, relation in _metadata.get(role, {}).items(): + if relation['interface'] == interface_name: + results.append(relation_name) + return results + + +@cached +def interface_to_relations(interface_name): + """ + Given an interface, return a list of relation names for the current + charm that use that interface. + + :returns: A list of relation names. + """ + results = [] + for role in ('provides', 'requires', 'peers'): + results.extend(role_and_interface_to_relations(role, interface_name)) + return results + + +@cached +def charm_name(): + """Get the name of the current charm as is specified on metadata.yaml""" + return metadata().get('name') + + +@cached +def relations(): + """Get a nested dictionary of relation data for all related units""" + rels = {} + for reltype in relation_types(): + relids = {} + for relid in relation_ids(reltype): + units = {local_unit(): relation_get(unit=local_unit(), rid=relid)} + for unit in related_units(relid): + reldata = relation_get(unit=unit, rid=relid) + units[unit] = reldata + relids[relid] = units + rels[reltype] = relids + return rels + + +@cached +def is_relation_made(relation, keys='private-address'): + ''' + Determine whether a relation is established by checking for + presence of key(s). If a list of keys is provided, they + must all be present for the relation to be identified as made + ''' + if isinstance(keys, str): + keys = [keys] + for r_id in relation_ids(relation): + for unit in related_units(r_id): + context = {} + for k in keys: + context[k] = relation_get(k, rid=r_id, + unit=unit) + if None not in context.values(): + return True + return False + + +def _port_op(op_name, port, protocol="TCP"): + """Open or close a service network port""" + _args = [op_name] + icmp = protocol.upper() == "ICMP" + if icmp: + _args.append(protocol) + else: + _args.append('{}/{}'.format(port, protocol)) + try: + subprocess.check_call(_args) + except subprocess.CalledProcessError: + # Older Juju pre 2.3 doesn't support ICMP + # so treat it as a no-op if it fails. 
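+        # Note: only the ICMP form is treated as best-effort here; for
+        # TCP/UDP the CalledProcessError is re-raised below, so genuine
+        # failures to open or close a port are not silently ignored.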
+ if not icmp: + raise + + +def open_port(port, protocol="TCP"): + """Open a service network port""" + _port_op('open-port', port, protocol) + + +def close_port(port, protocol="TCP"): + """Close a service network port""" + _port_op('close-port', port, protocol) + + +def open_ports(start, end, protocol="TCP"): + """Opens a range of service network ports""" + _args = ['open-port'] + _args.append('{}-{}/{}'.format(start, end, protocol)) + subprocess.check_call(_args) + + +def close_ports(start, end, protocol="TCP"): + """Close a range of service network ports""" + _args = ['close-port'] + _args.append('{}-{}/{}'.format(start, end, protocol)) + subprocess.check_call(_args) + + +def opened_ports(): + """Get the opened ports + + *Note that this will only show ports opened in a previous hook* + + :returns: Opened ports as a list of strings: ``['8080/tcp', '8081-8083/tcp']`` + """ + _args = ['opened-ports', '--format=json'] + return json.loads(subprocess.check_output(_args).decode('UTF-8')) + + +@cached +def unit_get(attribute): + """Get the unit ID for the remote unit""" + _args = ['unit-get', '--format=json', attribute] + try: + return json.loads(subprocess.check_output(_args).decode('UTF-8')) + except ValueError: + return None + + +def unit_public_ip(): + """Get this unit's public IP address""" + return unit_get('public-address') + + +def unit_private_ip(): + """Get this unit's private IP address""" + return unit_get('private-address') + + +@cached +def storage_get(attribute=None, storage_id=None): + """Get storage attributes""" + _args = ['storage-get', '--format=json'] + if storage_id: + _args.extend(('-s', storage_id)) + if attribute: + _args.append(attribute) + try: + return json.loads(subprocess.check_output(_args).decode('UTF-8')) + except ValueError: + return None + + +@cached +def storage_list(storage_name=None): + """List the storage IDs for the unit""" + _args = ['storage-list', '--format=json'] + if storage_name: + _args.append(storage_name) + try: + return json.loads(subprocess.check_output(_args).decode('UTF-8')) + except ValueError: + return None + except OSError as e: + import errno + if e.errno == errno.ENOENT: + # storage-list does not exist + return [] + raise + + +class UnregisteredHookError(Exception): + """Raised when an undefined hook is called""" + pass + + +class Hooks(object): + """A convenient handler for hook functions. + + Example:: + + hooks = Hooks() + + # register a hook, taking its name from the function name + @hooks.hook() + def install(): + pass # your code here + + # register a hook, providing a custom hook name + @hooks.hook("config-changed") + def config_changed(): + pass # your code here + + if __name__ == "__main__": + # execute a hook based on the name the program is called by + hooks.execute(sys.argv) + """ + + def __init__(self, config_save=None): + super(Hooks, self).__init__() + self._hooks = {} + + # For unknown reasons, we allow the Hooks constructor to override + # config().implicit_save. 
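+        # (Descriptive note, not upstream documentation: config() returns
+        # the charm's Config object; its implicit_save flag controls
+        # whether config state is automatically persisted on hook exit.)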
+ if config_save is not None: + config().implicit_save = config_save + + def register(self, name, function): + """Register a hook""" + self._hooks[name] = function + + def execute(self, args): + """Execute a registered hook based on args[0]""" + _run_atstart() + hook_name = os.path.basename(args[0]) + if hook_name in self._hooks: + try: + self._hooks[hook_name]() + except SystemExit as x: + if x.code is None or x.code == 0: + _run_atexit() + raise + _run_atexit() + else: + raise UnregisteredHookError(hook_name) + + def hook(self, *hook_names): + """Decorator, registering them as hooks""" + def wrapper(decorated): + for hook_name in hook_names: + self.register(hook_name, decorated) + else: + self.register(decorated.__name__, decorated) + if '_' in decorated.__name__: + self.register( + decorated.__name__.replace('_', '-'), decorated) + return decorated + return wrapper + + +class NoNetworkBinding(Exception): + pass + + +def charm_dir(): + """Return the root directory of the current charm""" + d = os.environ.get('JUJU_CHARM_DIR') + if d is not None: + return d + return os.environ.get('CHARM_DIR') + + +def cmd_exists(cmd): + """Return True if the specified cmd exists in the path""" + return any( + os.access(os.path.join(path, cmd), os.X_OK) + for path in os.environ["PATH"].split(os.pathsep) + ) + + +@cached +@deprecate("moved to function_get()", log=log) +def action_get(key=None): + """ + .. deprecated:: 0.20.7 + Alias for :func:`function_get`. + + Gets the value of an action parameter, or all key/value param pairs. + """ + cmd = ['action-get'] + if key is not None: + cmd.append(key) + cmd.append('--format=json') + action_data = json.loads(subprocess.check_output(cmd).decode('UTF-8')) + return action_data + + +@cached +def function_get(key=None): + """Gets the value of an action parameter, or all key/value param pairs""" + cmd = ['function-get'] + # Fallback for older charms. + if not cmd_exists('function-get'): + cmd = ['action-get'] + + if key is not None: + cmd.append(key) + cmd.append('--format=json') + function_data = json.loads(subprocess.check_output(cmd).decode('UTF-8')) + return function_data + + +@deprecate("moved to function_set()", log=log) +def action_set(values): + """ + .. deprecated:: 0.20.7 + Alias for :func:`function_set`. + + Sets the values to be returned after the action finishes. + """ + cmd = ['action-set'] + for k, v in list(values.items()): + cmd.append('{}={}'.format(k, v)) + subprocess.check_call(cmd) + + +def function_set(values): + """Sets the values to be returned after the function finishes""" + cmd = ['function-set'] + # Fallback for older charms. + if not cmd_exists('function-get'): + cmd = ['action-set'] + + for k, v in list(values.items()): + cmd.append('{}={}'.format(k, v)) + subprocess.check_call(cmd) + + +@deprecate("moved to function_fail()", log=log) +def action_fail(message): + """ + .. deprecated:: 0.20.7 + Alias for :func:`function_fail`. + + Sets the action status to failed and sets the error message. + + The results set by action_set are preserved. + """ + subprocess.check_call(['action-fail', message]) + + +def function_fail(message): + """Sets the function status to failed and sets the error message. + + The results set by function_set are preserved.""" + cmd = ['function-fail'] + # Fallback for older charms. 
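+    # (Descriptive note: if the function-fail hook tool is absent -- a
+    # Juju version predating the action/function rename -- fall back to
+    # the equivalent action-fail tool, mirroring function_set() above.)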
+ if not cmd_exists('function-fail'): + cmd = ['action-fail'] + cmd.append(message) + + subprocess.check_call(cmd) + + +def action_name(): + """Get the name of the currently executing action.""" + return os.environ.get('JUJU_ACTION_NAME') + + +def function_name(): + """Get the name of the currently executing function.""" + return os.environ.get('JUJU_FUNCTION_NAME') or action_name() + + +def action_uuid(): + """Get the UUID of the currently executing action.""" + return os.environ.get('JUJU_ACTION_UUID') + + +def function_id(): + """Get the ID of the currently executing function.""" + return os.environ.get('JUJU_FUNCTION_ID') or action_uuid() + + +def action_tag(): + """Get the tag for the currently executing action.""" + return os.environ.get('JUJU_ACTION_TAG') + + +def function_tag(): + """Get the tag for the currently executing function.""" + return os.environ.get('JUJU_FUNCTION_TAG') or action_tag() + + +def status_set(workload_state, message, application=False): + """Set the workload state with a message + + Use status-set to set the workload state with a message which is visible + to the user via juju status. If the status-set command is not found then + assume this is juju < 1.23 and juju-log the message instead. + + workload_state -- valid juju workload state. str or WORKLOAD_STATES + message -- status update message + application -- Whether this is an application state set + """ + bad_state_msg = '{!r} is not a valid workload state' + + if isinstance(workload_state, str): + try: + # Convert string to enum. + workload_state = WORKLOAD_STATES[workload_state.upper()] + except KeyError: + raise ValueError(bad_state_msg.format(workload_state)) + + if workload_state not in WORKLOAD_STATES: + raise ValueError(bad_state_msg.format(workload_state)) + + cmd = ['status-set'] + if application: + cmd.append('--application') + cmd.extend([workload_state.value, message]) + try: + ret = subprocess.call(cmd) + if ret == 0: + return + except OSError as e: + if e.errno != errno.ENOENT: + raise + log_message = 'status-set failed: {} {}'.format(workload_state.value, + message) + log(log_message, level='INFO') + + +def status_get(): + """Retrieve the previously set juju workload state and message + + If the status-get command is not found then assume this is juju < 1.23 and + return 'unknown', "" + + """ + cmd = ['status-get', "--format=json", "--include-data"] + try: + raw_status = subprocess.check_output(cmd) + except OSError as e: + if e.errno == errno.ENOENT: + return ('unknown', "") + else: + raise + else: + status = json.loads(raw_status.decode("UTF-8")) + return (status["status"], status["message"]) + + +def translate_exc(from_exc, to_exc): + def inner_translate_exc1(f): + @wraps(f) + def inner_translate_exc2(*args, **kwargs): + try: + return f(*args, **kwargs) + except from_exc: + raise to_exc + + return inner_translate_exc2 + + return inner_translate_exc1 + + +def application_version_set(version): + """Charm authors may trigger this command from any hook to output what + version of the application is running. This could be a package version, + for instance postgres version 9.5. It could also be a build number or + version control revision identifier, for instance git sha 6fb7ba68. 
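+
+    Example (a minimal sketch; the version string is illustrative)::
+
+        application_version_set('9.5.1')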
""" + + cmd = ['application-version-set'] + cmd.append(version) + try: + subprocess.check_call(cmd) + except OSError: + log("Application Version: {}".format(version)) + + +@translate_exc(from_exc=OSError, to_exc=NotImplementedError) +@cached +def goal_state(): + """Juju goal state values""" + cmd = ['goal-state', '--format=json'] + return json.loads(subprocess.check_output(cmd).decode('UTF-8')) + + +@translate_exc(from_exc=OSError, to_exc=NotImplementedError) +def is_leader(): + """Does the current unit hold the juju leadership + + Uses juju to determine whether the current unit is the leader of its peers + """ + cmd = ['is-leader', '--format=json'] + return json.loads(subprocess.check_output(cmd).decode('UTF-8')) + + +@translate_exc(from_exc=OSError, to_exc=NotImplementedError) +def leader_get(attribute=None): + """Juju leader get value(s)""" + cmd = ['leader-get', '--format=json'] + [attribute or '-'] + return json.loads(subprocess.check_output(cmd).decode('UTF-8')) + + +@translate_exc(from_exc=OSError, to_exc=NotImplementedError) +def leader_set(settings=None, **kwargs): + """Juju leader set value(s)""" + # Don't log secrets. + # log("Juju leader-set '%s'" % (settings), level=DEBUG) + cmd = ['leader-set'] + settings = settings or {} + settings.update(kwargs) + for k, v in settings.items(): + if v is None: + cmd.append('{}='.format(k)) + else: + cmd.append('{}={}'.format(k, v)) + subprocess.check_call(cmd) + + +@translate_exc(from_exc=OSError, to_exc=NotImplementedError) +def payload_register(ptype, klass, pid): + """ is used while a hook is running to let Juju know that a + payload has been started.""" + cmd = ['payload-register'] + for x in [ptype, klass, pid]: + cmd.append(x) + subprocess.check_call(cmd) + + +@translate_exc(from_exc=OSError, to_exc=NotImplementedError) +def payload_unregister(klass, pid): + """ is used while a hook is running to let Juju know + that a payload has been manually stopped. The and provided + must match a payload that has been previously registered with juju using + payload-register.""" + cmd = ['payload-unregister'] + for x in [klass, pid]: + cmd.append(x) + subprocess.check_call(cmd) + + +@translate_exc(from_exc=OSError, to_exc=NotImplementedError) +def payload_status_set(klass, pid, status): + """is used to update the current status of a registered payload. + The and provided must match a payload that has been previously + registered with juju using payload-register. The must be one of the + follow: starting, started, stopping, stopped""" + cmd = ['payload-status-set'] + for x in [klass, pid, status]: + cmd.append(x) + subprocess.check_call(cmd) + + +@translate_exc(from_exc=OSError, to_exc=NotImplementedError) +def resource_get(name): + """used to fetch the resource path of the given name. + + must match a name of defined resource in metadata.yaml + + returns either a path or False if resource not available + """ + if not name: + return False + + cmd = ['resource-get', name] + try: + return subprocess.check_output(cmd).decode('UTF-8') + except subprocess.CalledProcessError: + return False + + +@cached +def juju_version(): + """Full version string (eg. 
'1.23.3.1-trusty-amd64')""" + # Per https://bugs.launchpad.net/juju-core/+bug/1455368/comments/1 + jujud = glob.glob('/var/lib/juju/tools/machine-*/jujud')[0] + return subprocess.check_output([jujud, 'version'], + universal_newlines=True).strip() + + +def has_juju_version(minimum_version): + """Return True if the Juju version is at least the provided version""" + return LooseVersion(juju_version()) >= LooseVersion(minimum_version) + + +_atexit = [] +_atstart = [] + + +def atstart(callback, *args, **kwargs): + '''Schedule a callback to run before the main hook. + + Callbacks are run in the order they were added. + + This is useful for modules and classes to perform initialization + and inject behavior. In particular: + + - Run common code before all of your hooks, such as logging + the hook name or interesting relation data. + - Defer object or module initialization that requires a hook + context until we know there actually is a hook context, + making testing easier. + - Rather than requiring charm authors to include boilerplate to + invoke your helper's behavior, have it run automatically if + your object is instantiated or module imported. + + This is not at all useful after your hook framework as been launched. + ''' + global _atstart + _atstart.append((callback, args, kwargs)) + + +def atexit(callback, *args, **kwargs): + '''Schedule a callback to run on successful hook completion. + + Callbacks are run in the reverse order that they were added.''' + _atexit.append((callback, args, kwargs)) + + +def _run_atstart(): + '''Hook frameworks must invoke this before running the main hook body.''' + global _atstart + for callback, args, kwargs in _atstart: + callback(*args, **kwargs) + del _atstart[:] + + +def _run_atexit(): + '''Hook frameworks must invoke this after the main hook body has + successfully completed. Do not invoke it if the hook fails.''' + global _atexit + for callback, args, kwargs in reversed(_atexit): + callback(*args, **kwargs) + del _atexit[:] + + +@translate_exc(from_exc=OSError, to_exc=NotImplementedError) +def network_get_primary_address(binding): + ''' + Deprecated since Juju 2.3; use network_get() + + Retrieve the primary network address for a named binding + + :param binding: string. The name of a relation of extra-binding + :return: string. The primary IP address for the named binding + :raise: NotImplementedError if run on Juju < 2.0 + ''' + cmd = ['network-get', '--primary-address', binding] + try: + response = subprocess.check_output( + cmd, + stderr=subprocess.STDOUT).decode('UTF-8').strip() + except CalledProcessError as e: + if 'no network config found for binding' in e.output.decode('UTF-8'): + raise NoNetworkBinding("No network binding for {}" + .format(binding)) + else: + raise + return response + + +def network_get(endpoint, relation_id=None): + """ + Retrieve the network details for a relation endpoint + + :param endpoint: string. The name of a relation endpoint + :param relation_id: int. The ID of the relation for the current context. + :return: dict. The loaded YAML output of the network-get query. + :raise: NotImplementedError if request not supported by the Juju version. 
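+
+    Example (illustrative; ``db`` stands in for a real endpoint name, and
+    the key shown is one that network-get typically reports)::
+
+        details = network_get('db')
+        addresses = details.get('ingress-addresses', [])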
+ """ + if not has_juju_version('2.2'): + raise NotImplementedError(juju_version()) # earlier versions require --primary-address + if relation_id and not has_juju_version('2.3'): + raise NotImplementedError # 2.3 added the -r option + + cmd = ['network-get', endpoint, '--format', 'yaml'] + if relation_id: + cmd.append('-r') + cmd.append(relation_id) + response = subprocess.check_output( + cmd, + stderr=subprocess.STDOUT).decode('UTF-8').strip() + return yaml.safe_load(response) + + +def add_metric(*args, **kwargs): + """Add metric values. Values may be expressed with keyword arguments. For + metric names containing dashes, these may be expressed as one or more + 'key=value' positional arguments. May only be called from the collect-metrics + hook.""" + _args = ['add-metric'] + _kvpairs = [] + _kvpairs.extend(args) + _kvpairs.extend(['{}={}'.format(k, v) for k, v in kwargs.items()]) + _args.extend(sorted(_kvpairs)) + try: + subprocess.check_call(_args) + return + except EnvironmentError as e: + if e.errno != errno.ENOENT: + raise + log_message = 'add-metric failed: {}'.format(' '.join(_kvpairs)) + log(log_message, level='INFO') + + +def meter_status(): + """Get the meter status, if running in the meter-status-changed hook.""" + return os.environ.get('JUJU_METER_STATUS') + + +def meter_info(): + """Get the meter status information, if running in the meter-status-changed + hook.""" + return os.environ.get('JUJU_METER_INFO') + + +def iter_units_for_relation_name(relation_name): + """Iterate through all units in a relation + + Generator that iterates through all the units in a relation and yields + a named tuple with rid and unit field names. + + Usage: + data = [(u.rid, u.unit) + for u in iter_units_for_relation_name(relation_name)] + + :param relation_name: string relation name + :yield: Named Tuple with rid and unit field names + """ + RelatedUnit = namedtuple('RelatedUnit', 'rid, unit') + for rid in relation_ids(relation_name): + for unit in related_units(rid): + yield RelatedUnit(rid, unit) + + +def ingress_address(rid=None, unit=None): + """ + Retrieve the ingress-address from a relation when available. + Otherwise, return the private-address. + + When used on the consuming side of the relation (unit is a remote + unit), the ingress-address is the IP address that this unit needs + to use to reach the provided service on the remote unit. + + When used on the providing side of the relation (unit == local_unit()), + the ingress-address is the IP address that is advertised to remote + units on this relation. Remote units need to use this address to + reach the local provided service on this unit. + + Note that charms may document some other method to use in + preference to the ingress_address(), such as an address provided + on a different relation attribute or a service discovery mechanism. + This allows charms to redirect inbound connections to their peers + or different applications such as load balancers. + + Usage: + addresses = [ingress_address(rid=u.rid, unit=u.unit) + for u in iter_units_for_relation_name(relation_name)] + + :param rid: string relation id + :param unit: string unit name + :side effect: calls relation_get + :return: string IP address + """ + settings = relation_get(rid=rid, unit=unit) + return (settings.get('ingress-address') or + settings.get('private-address')) + + +def egress_subnets(rid=None, unit=None): + """ + Retrieve the egress-subnets from a relation. 
+ + This function is to be used on the providing side of the + relation, and provides the ranges of addresses that client + connections may come from. The result is uninteresting on + the consuming side of a relation (unit == local_unit()). + + Returns a stable list of subnets in CIDR format. + eg. ['192.168.1.0/24', '2001::F00F/128'] + + If egress-subnets is not available, falls back to using the published + ingress-address, or finally private-address. + + :param rid: string relation id + :param unit: string unit name + :side effect: calls relation_get + :return: list of subnets in CIDR format. eg. ['192.168.1.0/24', '2001::F00F/128'] + """ + def _to_range(addr): + if re.search(r'^(?:\d{1,3}\.){3}\d{1,3}$', addr) is not None: + addr += '/32' + elif ':' in addr and '/' not in addr: # IPv6 + addr += '/128' + return addr + + settings = relation_get(rid=rid, unit=unit) + if 'egress-subnets' in settings: + return [n.strip() for n in settings['egress-subnets'].split(',') if n.strip()] + if 'ingress-address' in settings: + return [_to_range(settings['ingress-address'])] + if 'private-address' in settings: + return [_to_range(settings['private-address'])] + return [] # Should never happen + + +def unit_doomed(unit=None): + """Determines if the unit is being removed from the model + + Requires Juju 2.4.1. + + :param unit: string unit name, defaults to local_unit + :side effect: calls goal_state + :side effect: calls local_unit + :side effect: calls has_juju_version + :return: True if the unit is being removed, already gone, or never existed + """ + if not has_juju_version("2.4.1"): + # We cannot risk blindly returning False for 'we don't know', + # because that could cause data loss; if call sites don't + # need an accurate answer, they likely don't need this helper + # at all. + # goal-state existed in 2.4.0, but did not handle removals + # correctly until 2.4.1. + raise NotImplementedError("is_doomed") + if unit is None: + unit = local_unit() + gs = goal_state() + units = gs.get('units', {}) + if unit not in units: + return True + # I don't think 'dead' units ever show up in the goal-state, but + # check anyway in addition to 'dying'. + return units[unit]['status'] in ('dying', 'dead') + + +def env_proxy_settings(selected_settings=None): + """Get proxy settings from process environment variables. + + Get charm proxy settings from environment variables that correspond to + juju-http-proxy, juju-https-proxy juju-no-proxy (available as of 2.4.2, see + lp:1782236) and juju-ftp-proxy in a format suitable for passing to an + application that reacts to proxy settings passed as environment variables. + Some applications support lowercase or uppercase notation (e.g. curl), some + support only lowercase (e.g. wget), there are also subjectively rare cases + of only uppercase notation support. no_proxy CIDR and wildcard support also + varies between runtimes and applications as there is no enforced standard. + + Some applications may connect to multiple destinations and expose config + options that would affect only proxy settings for a specific destination + these should be handled in charms in an application-specific manner. 
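+
+    Example (a sketch; the actual keys returned depend on which
+    juju-*-proxy model settings are present in the hook environment)::
+
+        proxy_env = env_proxy_settings(['http', 'https'])
+        # e.g. {'HTTP_PROXY': 'http://proxy:3128',
+        #       'http_proxy': 'http://proxy:3128', ...} or None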
+ + :param selected_settings: format only a subset of possible settings + :type selected_settings: list + :rtype: Option(None, dict[str, str]) + """ + SUPPORTED_SETTINGS = { + 'http': 'HTTP_PROXY', + 'https': 'HTTPS_PROXY', + 'no_proxy': 'NO_PROXY', + 'ftp': 'FTP_PROXY' + } + if selected_settings is None: + selected_settings = SUPPORTED_SETTINGS + + selected_vars = [v for k, v in SUPPORTED_SETTINGS.items() + if k in selected_settings] + proxy_settings = {} + for var in selected_vars: + var_val = os.getenv(var) + if var_val: + proxy_settings[var] = var_val + proxy_settings[var.lower()] = var_val + # Now handle juju-prefixed environment variables. The legacy vs new + # environment variable usage is mutually exclusive + charm_var_val = os.getenv('JUJU_CHARM_{}'.format(var)) + if charm_var_val: + proxy_settings[var] = charm_var_val + proxy_settings[var.lower()] = charm_var_val + if 'no_proxy' in proxy_settings: + if _contains_range(proxy_settings['no_proxy']): + log(RANGE_WARNING, level=WARNING) + return proxy_settings if proxy_settings else None + + +def _contains_range(addresses): + """Check for cidr or wildcard domain in a string. + + Given a string comprising a comma seperated list of ip addresses + and domain names, determine whether the string contains IP ranges + or wildcard domains. + + :param addresses: comma seperated list of domains and ip addresses. + :type addresses: str + """ + return ( + # Test for cidr (e.g. 10.20.20.0/24) + "/" in addresses or + # Test for wildcard domains (*.foo.com or .foo.com) + "*" in addresses or + addresses.startswith(".") or + ",." in addresses or + " ." in addresses) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/core/host.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/core/host.py new file mode 100644 index 0000000000000000000000000000000000000000..b33ac906d9eeb198210dceb069724ac9a35652ea --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/core/host.py @@ -0,0 +1,1104 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +"""Tools for working with the host system""" +# Copyright 2012 Canonical Ltd. 
+# +# Authors: +# Nick Moffitt +# Matthew Wedgwood + +import os +import re +import pwd +import glob +import grp +import random +import string +import subprocess +import hashlib +import functools +import itertools +import six + +from contextlib import contextmanager +from collections import OrderedDict +from .hookenv import log, INFO, DEBUG, local_unit, charm_name +from .fstab import Fstab +from charmhelpers.osplatform import get_platform + +__platform__ = get_platform() +if __platform__ == "ubuntu": + from charmhelpers.core.host_factory.ubuntu import ( # NOQA:F401 + service_available, + add_new_group, + lsb_release, + cmp_pkgrevno, + CompareHostReleases, + get_distrib_codename, + arch + ) # flake8: noqa -- ignore F401 for this import +elif __platform__ == "centos": + from charmhelpers.core.host_factory.centos import ( # NOQA:F401 + service_available, + add_new_group, + lsb_release, + cmp_pkgrevno, + CompareHostReleases, + ) # flake8: noqa -- ignore F401 for this import + +UPDATEDB_PATH = '/etc/updatedb.conf' + + +def service_start(service_name, **kwargs): + """Start a system service. + + The specified service name is managed via the system level init system. + Some init systems (e.g. upstart) require that additional arguments be + provided in order to directly control service instances whereas other init + systems allow for addressing instances of a service directly by name (e.g. + systemd). + + The kwargs allow for the additional parameters to be passed to underlying + init systems for those systems which require/allow for them. For example, + the ceph-osd upstart script requires the id parameter to be passed along + in order to identify which running daemon should be reloaded. The follow- + ing example stops the ceph-osd service for instance id=4: + + service_stop('ceph-osd', id=4) + + :param service_name: the name of the service to stop + :param **kwargs: additional parameters to pass to the init system when + managing services. These will be passed as key=value + parameters to the init system's commandline. kwargs + are ignored for systemd enabled systems. + """ + return service('start', service_name, **kwargs) + + +def service_stop(service_name, **kwargs): + """Stop a system service. + + The specified service name is managed via the system level init system. + Some init systems (e.g. upstart) require that additional arguments be + provided in order to directly control service instances whereas other init + systems allow for addressing instances of a service directly by name (e.g. + systemd). + + The kwargs allow for the additional parameters to be passed to underlying + init systems for those systems which require/allow for them. For example, + the ceph-osd upstart script requires the id parameter to be passed along + in order to identify which running daemon should be reloaded. The follow- + ing example stops the ceph-osd service for instance id=4: + + service_stop('ceph-osd', id=4) + + :param service_name: the name of the service to stop + :param **kwargs: additional parameters to pass to the init system when + managing services. These will be passed as key=value + parameters to the init system's commandline. kwargs + are ignored for systemd enabled systems. + """ + return service('stop', service_name, **kwargs) + + +def service_restart(service_name, **kwargs): + """Restart a system service. + + The specified service name is managed via the system level init system. + Some init systems (e.g. 
upstart) require that additional arguments be + provided in order to directly control service instances whereas other init + systems allow for addressing instances of a service directly by name (e.g. + systemd). + + The kwargs allow for the additional parameters to be passed to underlying + init systems for those systems which require/allow for them. For example, + the ceph-osd upstart script requires the id parameter to be passed along + in order to identify which running daemon should be restarted. The follow- + ing example restarts the ceph-osd service for instance id=4: + + service_restart('ceph-osd', id=4) + + :param service_name: the name of the service to restart + :param **kwargs: additional parameters to pass to the init system when + managing services. These will be passed as key=value + parameters to the init system's commandline. kwargs + are ignored for init systems not allowing additional + parameters via the commandline (systemd). + """ + return service('restart', service_name) + + +def service_reload(service_name, restart_on_failure=False, **kwargs): + """Reload a system service, optionally falling back to restart if + reload fails. + + The specified service name is managed via the system level init system. + Some init systems (e.g. upstart) require that additional arguments be + provided in order to directly control service instances whereas other init + systems allow for addressing instances of a service directly by name (e.g. + systemd). + + The kwargs allow for the additional parameters to be passed to underlying + init systems for those systems which require/allow for them. For example, + the ceph-osd upstart script requires the id parameter to be passed along + in order to identify which running daemon should be reloaded. The follow- + ing example restarts the ceph-osd service for instance id=4: + + service_reload('ceph-osd', id=4) + + :param service_name: the name of the service to reload + :param restart_on_failure: boolean indicating whether to fallback to a + restart if the reload fails. + :param **kwargs: additional parameters to pass to the init system when + managing services. These will be passed as key=value + parameters to the init system's commandline. kwargs + are ignored for init systems not allowing additional + parameters via the commandline (systemd). + """ + service_result = service('reload', service_name, **kwargs) + if not service_result and restart_on_failure: + service_result = service('restart', service_name, **kwargs) + return service_result + + +def service_pause(service_name, init_dir="/etc/init", initd_dir="/etc/init.d", + **kwargs): + """Pause a system service. + + Stop it, and prevent it from starting again at boot. + + :param service_name: the name of the service to pause + :param init_dir: path to the upstart init directory + :param initd_dir: path to the sysv init directory + :param **kwargs: additional parameters to pass to the init system when + managing services. These will be passed as key=value + parameters to the init system's commandline. kwargs + are ignored for init systems which do not support + key=value arguments via the commandline. 
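+
+    Example (illustrative service name)::
+
+        service_pause('apache2')  # stop now and disable at boot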
+ """ + stopped = True + if service_running(service_name, **kwargs): + stopped = service_stop(service_name, **kwargs) + upstart_file = os.path.join(init_dir, "{}.conf".format(service_name)) + sysv_file = os.path.join(initd_dir, service_name) + if init_is_systemd(): + service('disable', service_name) + service('mask', service_name) + elif os.path.exists(upstart_file): + override_path = os.path.join( + init_dir, '{}.override'.format(service_name)) + with open(override_path, 'w') as fh: + fh.write("manual\n") + elif os.path.exists(sysv_file): + subprocess.check_call(["update-rc.d", service_name, "disable"]) + else: + raise ValueError( + "Unable to detect {0} as SystemD, Upstart {1} or" + " SysV {2}".format( + service_name, upstart_file, sysv_file)) + return stopped + + +def service_resume(service_name, init_dir="/etc/init", + initd_dir="/etc/init.d", **kwargs): + """Resume a system service. + + Reenable starting again at boot. Start the service. + + :param service_name: the name of the service to resume + :param init_dir: the path to the init dir + :param initd dir: the path to the initd dir + :param **kwargs: additional parameters to pass to the init system when + managing services. These will be passed as key=value + parameters to the init system's commandline. kwargs + are ignored for systemd enabled systems. + """ + upstart_file = os.path.join(init_dir, "{}.conf".format(service_name)) + sysv_file = os.path.join(initd_dir, service_name) + if init_is_systemd(): + service('unmask', service_name) + service('enable', service_name) + elif os.path.exists(upstart_file): + override_path = os.path.join( + init_dir, '{}.override'.format(service_name)) + if os.path.exists(override_path): + os.unlink(override_path) + elif os.path.exists(sysv_file): + subprocess.check_call(["update-rc.d", service_name, "enable"]) + else: + raise ValueError( + "Unable to detect {0} as SystemD, Upstart {1} or" + " SysV {2}".format( + service_name, upstart_file, sysv_file)) + started = service_running(service_name, **kwargs) + + if not started: + started = service_start(service_name, **kwargs) + return started + + +def service(action, service_name, **kwargs): + """Control a system service. + + :param action: the action to take on the service + :param service_name: the name of the service to perform th action on + :param **kwargs: additional params to be passed to the service command in + the form of key=value. + """ + if init_is_systemd(): + cmd = ['systemctl', action, service_name] + else: + cmd = ['service', service_name, action] + for key, value in six.iteritems(kwargs): + parameter = '%s=%s' % (key, value) + cmd.append(parameter) + return subprocess.call(cmd) == 0 + + +_UPSTART_CONF = "/etc/init/{}.conf" +_INIT_D_CONF = "/etc/init.d/{}" + + +def service_running(service_name, **kwargs): + """Determine whether a system service is running. + + :param service_name: the name of the service + :param **kwargs: additional args to pass to the service command. This is + used to pass additional key=value arguments to the + service command line for managing specific instance + units (e.g. service ceph-osd status id=2). The kwargs + are ignored in systemd services. 
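+
+    Example (illustrative; the id kwarg only matters for upstart-managed
+    instance jobs)::
+
+        if service_running('ceph-osd', id=2):
+            pass  # instance 2 is up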
+ """ + if init_is_systemd(): + return service('is-active', service_name) + else: + if os.path.exists(_UPSTART_CONF.format(service_name)): + try: + cmd = ['status', service_name] + for key, value in six.iteritems(kwargs): + parameter = '%s=%s' % (key, value) + cmd.append(parameter) + output = subprocess.check_output( + cmd, stderr=subprocess.STDOUT).decode('UTF-8') + except subprocess.CalledProcessError: + return False + else: + # This works for upstart scripts where the 'service' command + # returns a consistent string to represent running + # 'start/running' + if ("start/running" in output or + "is running" in output or + "up and running" in output): + return True + elif os.path.exists(_INIT_D_CONF.format(service_name)): + # Check System V scripts init script return codes + return service('status', service_name) + return False + + +SYSTEMD_SYSTEM = '/run/systemd/system' + + +def init_is_systemd(): + """Return True if the host system uses systemd, False otherwise.""" + if lsb_release()['DISTRIB_CODENAME'] == 'trusty': + return False + return os.path.isdir(SYSTEMD_SYSTEM) + + +def adduser(username, password=None, shell='/bin/bash', + system_user=False, primary_group=None, + secondary_groups=None, uid=None, home_dir=None): + """Add a user to the system. + + Will log but otherwise succeed if the user already exists. + + :param str username: Username to create + :param str password: Password for user; if ``None``, create a system user + :param str shell: The default shell for the user + :param bool system_user: Whether to create a login or system user + :param str primary_group: Primary group for user; defaults to username + :param list secondary_groups: Optional list of additional groups + :param int uid: UID for user being created + :param str home_dir: Home directory for user + + :returns: The password database entry struct, as returned by `pwd.getpwnam` + """ + try: + user_info = pwd.getpwnam(username) + log('user {0} already exists!'.format(username)) + if uid: + user_info = pwd.getpwuid(int(uid)) + log('user with uid {0} already exists!'.format(uid)) + except KeyError: + log('creating user {0}'.format(username)) + cmd = ['useradd'] + if uid: + cmd.extend(['--uid', str(uid)]) + if home_dir: + cmd.extend(['--home', str(home_dir)]) + if system_user or password is None: + cmd.append('--system') + else: + cmd.extend([ + '--create-home', + '--shell', shell, + '--password', password, + ]) + if not primary_group: + try: + grp.getgrnam(username) + primary_group = username # avoid "group exists" error + except KeyError: + pass + if primary_group: + cmd.extend(['-g', primary_group]) + if secondary_groups: + cmd.extend(['-G', ','.join(secondary_groups)]) + cmd.append(username) + subprocess.check_call(cmd) + user_info = pwd.getpwnam(username) + return user_info + + +def user_exists(username): + """Check if a user exists""" + try: + pwd.getpwnam(username) + user_exists = True + except KeyError: + user_exists = False + return user_exists + + +def uid_exists(uid): + """Check if a uid exists""" + try: + pwd.getpwuid(uid) + uid_exists = True + except KeyError: + uid_exists = False + return uid_exists + + +def group_exists(groupname): + """Check if a group exists""" + try: + grp.getgrnam(groupname) + group_exists = True + except KeyError: + group_exists = False + return group_exists + + +def gid_exists(gid): + """Check if a gid exists""" + try: + grp.getgrgid(gid) + gid_exists = True + except KeyError: + gid_exists = False + return gid_exists + + +def add_group(group_name, system_group=False, gid=None): + 
"""Add a group to the system + + Will log but otherwise succeed if the group already exists. + + :param str group_name: group to create + :param bool system_group: Create system group + :param int gid: GID for user being created + + :returns: The password database entry struct, as returned by `grp.getgrnam` + """ + try: + group_info = grp.getgrnam(group_name) + log('group {0} already exists!'.format(group_name)) + if gid: + group_info = grp.getgrgid(gid) + log('group with gid {0} already exists!'.format(gid)) + except KeyError: + log('creating group {0}'.format(group_name)) + add_new_group(group_name, system_group, gid) + group_info = grp.getgrnam(group_name) + return group_info + + +def add_user_to_group(username, group): + """Add a user to a group""" + cmd = ['gpasswd', '-a', username, group] + log("Adding user {} to group {}".format(username, group)) + subprocess.check_call(cmd) + + +def chage(username, lastday=None, expiredate=None, inactive=None, + mindays=None, maxdays=None, root=None, warndays=None): + """Change user password expiry information + + :param str username: User to update + :param str lastday: Set when password was changed in YYYY-MM-DD format + :param str expiredate: Set when user's account will no longer be + accessible in YYYY-MM-DD format. + -1 will remove an account expiration date. + :param str inactive: Set the number of days of inactivity after a password + has expired before the account is locked. + -1 will remove an account's inactivity. + :param str mindays: Set the minimum number of days between password + changes to MIN_DAYS. + 0 indicates the password can be changed anytime. + :param str maxdays: Set the maximum number of days during which a + password is valid. + -1 as MAX_DAYS will remove checking maxdays + :param str root: Apply changes in the CHROOT_DIR directory + :param str warndays: Set the number of days of warning before a password + change is required + :raises subprocess.CalledProcessError: if call to chage fails + """ + cmd = ['chage'] + if root: + cmd.extend(['--root', root]) + if lastday: + cmd.extend(['--lastday', lastday]) + if expiredate: + cmd.extend(['--expiredate', expiredate]) + if inactive: + cmd.extend(['--inactive', inactive]) + if mindays: + cmd.extend(['--mindays', mindays]) + if maxdays: + cmd.extend(['--maxdays', maxdays]) + if warndays: + cmd.extend(['--warndays', warndays]) + cmd.append(username) + subprocess.check_call(cmd) + + +remove_password_expiry = functools.partial(chage, expiredate='-1', inactive='-1', mindays='0', maxdays='-1') + + +def rsync(from_path, to_path, flags='-r', options=None, timeout=None): + """Replicate the contents of a path""" + options = options or ['--delete', '--executability'] + cmd = ['/usr/bin/rsync', flags] + if timeout: + cmd = ['timeout', str(timeout)] + cmd + cmd.extend(options) + cmd.append(from_path) + cmd.append(to_path) + log(" ".join(cmd)) + return subprocess.check_output(cmd, stderr=subprocess.STDOUT).decode('UTF-8').strip() + + +def symlink(source, destination): + """Create a symbolic link""" + log("Symlinking {} as {}".format(source, destination)) + cmd = [ + 'ln', + '-sf', + source, + destination, + ] + subprocess.check_call(cmd) + + +def mkdir(path, owner='root', group='root', perms=0o555, force=False): + """Create a directory""" + log("Making dir {} {}:{} {:o}".format(path, owner, group, + perms)) + uid = pwd.getpwnam(owner).pw_uid + gid = grp.getgrnam(group).gr_gid + realpath = os.path.abspath(path) + path_exists = os.path.exists(realpath) + if path_exists and force: + if not 
os.path.isdir(realpath): + log("Removing non-directory file {} prior to mkdir()".format(path)) + os.unlink(realpath) + os.makedirs(realpath, perms) + elif not path_exists: + os.makedirs(realpath, perms) + os.chown(realpath, uid, gid) + os.chmod(realpath, perms) + + +def write_file(path, content, owner='root', group='root', perms=0o444): + """Create or overwrite a file with the contents of a byte string.""" + uid = pwd.getpwnam(owner).pw_uid + gid = grp.getgrnam(group).gr_gid + # lets see if we can grab the file and compare the context, to avoid doing + # a write. + existing_content = None + existing_uid, existing_gid, existing_perms = None, None, None + try: + with open(path, 'rb') as target: + existing_content = target.read() + stat = os.stat(path) + existing_uid, existing_gid, existing_perms = ( + stat.st_uid, stat.st_gid, stat.st_mode + ) + except Exception: + pass + if content != existing_content: + log("Writing file {} {}:{} {:o}".format(path, owner, group, perms), + level=DEBUG) + with open(path, 'wb') as target: + os.fchown(target.fileno(), uid, gid) + os.fchmod(target.fileno(), perms) + if six.PY3 and isinstance(content, six.string_types): + content = content.encode('UTF-8') + target.write(content) + return + # the contents were the same, but we might still need to change the + # ownership or permissions. + if existing_uid != uid: + log("Changing uid on already existing content: {} -> {}" + .format(existing_uid, uid), level=DEBUG) + os.chown(path, uid, -1) + if existing_gid != gid: + log("Changing gid on already existing content: {} -> {}" + .format(existing_gid, gid), level=DEBUG) + os.chown(path, -1, gid) + if existing_perms != perms: + log("Changing permissions on existing content: {} -> {}" + .format(existing_perms, perms), level=DEBUG) + os.chmod(path, perms) + + +def fstab_remove(mp): + """Remove the given mountpoint entry from /etc/fstab""" + return Fstab.remove_by_mountpoint(mp) + + +def fstab_add(dev, mp, fs, options=None): + """Adds the given device entry to the /etc/fstab file""" + return Fstab.add(dev, mp, fs, options=options) + + +def mount(device, mountpoint, options=None, persist=False, filesystem="ext3"): + """Mount a filesystem at a particular mountpoint""" + cmd_args = ['mount'] + if options is not None: + cmd_args.extend(['-o', options]) + cmd_args.extend([device, mountpoint]) + try: + subprocess.check_output(cmd_args) + except subprocess.CalledProcessError as e: + log('Error mounting {} at {}\n{}'.format(device, mountpoint, e.output)) + return False + + if persist: + return fstab_add(device, mountpoint, filesystem, options=options) + return True + + +def umount(mountpoint, persist=False): + """Unmount a filesystem""" + cmd_args = ['umount', mountpoint] + try: + subprocess.check_output(cmd_args) + except subprocess.CalledProcessError as e: + log('Error unmounting {}\n{}'.format(mountpoint, e.output)) + return False + + if persist: + return fstab_remove(mountpoint) + return True + + +def mounts(): + """Get a list of all mounted volumes as [[mountpoint,device],[...]]""" + with open('/proc/mounts') as f: + # [['/mount/point','/dev/path'],[...]] + system_mounts = [m[1::-1] for m in [l.strip().split() + for l in f.readlines()]] + return system_mounts + + +def fstab_mount(mountpoint): + """Mount filesystem using fstab""" + cmd_args = ['mount', mountpoint] + try: + subprocess.check_output(cmd_args) + except subprocess.CalledProcessError as e: + log('Error unmounting {}\n{}'.format(mountpoint, e.output)) + return False + return True + + +def file_hash(path, 
hash_type='md5'): + """Generate a hash checksum of the contents of 'path' or None if not found. + + :param str hash_type: Any hash alrgorithm supported by :mod:`hashlib`, + such as md5, sha1, sha256, sha512, etc. + """ + if os.path.exists(path): + h = getattr(hashlib, hash_type)() + with open(path, 'rb') as source: + h.update(source.read()) + return h.hexdigest() + else: + return None + + +def path_hash(path): + """Generate a hash checksum of all files matching 'path'. Standard + wildcards like '*' and '?' are supported, see documentation for the 'glob' + module for more information. + + :return: dict: A { filename: hash } dictionary for all matched files. + Empty if none found. + """ + return { + filename: file_hash(filename) + for filename in glob.iglob(path) + } + + +def check_hash(path, checksum, hash_type='md5'): + """Validate a file using a cryptographic checksum. + + :param str checksum: Value of the checksum used to validate the file. + :param str hash_type: Hash algorithm used to generate `checksum`. + Can be any hash alrgorithm supported by :mod:`hashlib`, + such as md5, sha1, sha256, sha512, etc. + :raises ChecksumError: If the file fails the checksum + + """ + actual_checksum = file_hash(path, hash_type) + if checksum != actual_checksum: + raise ChecksumError("'%s' != '%s'" % (checksum, actual_checksum)) + + +class ChecksumError(ValueError): + """A class derived from Value error to indicate the checksum failed.""" + pass + + +def restart_on_change(restart_map, stopstart=False, restart_functions=None): + """Restart services based on configuration files changing + + This function is used a decorator, for example:: + + @restart_on_change({ + '/etc/ceph/ceph.conf': [ 'cinder-api', 'cinder-volume' ] + '/etc/apache/sites-enabled/*': [ 'apache2' ] + }) + def config_changed(): + pass # your code here + + In this example, the cinder-api and cinder-volume services + would be restarted if /etc/ceph/ceph.conf is changed by the + ceph_client_changed function. The apache2 service would be + restarted if any file matching the pattern got changed, created + or removed. Standard wildcards are supported, see documentation + for the 'glob' module for more information. + + @param restart_map: {path_file_name: [service_name, ...] + @param stopstart: DEFAULT false; whether to stop, start OR restart + @param restart_functions: nonstandard functions to use to restart services + {svc: func, ...} + @returns result from decorated function + """ + def wrap(f): + @functools.wraps(f) + def wrapped_f(*args, **kwargs): + return restart_on_change_helper( + (lambda: f(*args, **kwargs)), restart_map, stopstart, + restart_functions) + return wrapped_f + return wrap + + +def restart_on_change_helper(lambda_f, restart_map, stopstart=False, + restart_functions=None): + """Helper function to perform the restart_on_change function. + + This is provided for decorators to restart services if files described + in the restart_map have changed after an invocation of lambda_f(). + + @param lambda_f: function to call. 
+ @param restart_map: {file: [service, ...]} + @param stopstart: whether to stop, start or restart a service + @param restart_functions: nonstandard functions to use to restart services + {svc: func, ...} + @returns result of lambda_f() + """ + if restart_functions is None: + restart_functions = {} + checksums = {path: path_hash(path) for path in restart_map} + r = lambda_f() + # create a list of lists of the services to restart + restarts = [restart_map[path] + for path in restart_map + if path_hash(path) != checksums[path]] + # create a flat list of ordered services without duplicates from lists + services_list = list(OrderedDict.fromkeys(itertools.chain(*restarts))) + if services_list: + actions = ('stop', 'start') if stopstart else ('restart',) + for service_name in services_list: + if service_name in restart_functions: + restart_functions[service_name](service_name) + else: + for action in actions: + service(action, service_name) + return r + + +def pwgen(length=None): + """Generate a random pasword.""" + if length is None: + # A random length is ok to use a weak PRNG + length = random.choice(range(35, 45)) + alphanumeric_chars = [ + l for l in (string.ascii_letters + string.digits) + if l not in 'l0QD1vAEIOUaeiou'] + # Use a crypto-friendly PRNG (e.g. /dev/urandom) for making the + # actual password + random_generator = random.SystemRandom() + random_chars = [ + random_generator.choice(alphanumeric_chars) for _ in range(length)] + return(''.join(random_chars)) + + +def is_phy_iface(interface): + """Returns True if interface is not virtual, otherwise False.""" + if interface: + sys_net = '/sys/class/net' + if os.path.isdir(sys_net): + for iface in glob.glob(os.path.join(sys_net, '*')): + if '/virtual/' in os.path.realpath(iface): + continue + + if interface == os.path.basename(iface): + return True + + return False + + +def get_bond_master(interface): + """Returns bond master if interface is bond slave otherwise None. 
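+
+    Example (illustrative; ``eth0`` stands in for a real slave device)::
+
+        master = get_bond_master('eth0')  # e.g. 'bond0', or None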
+ + NOTE: the provided interface is expected to be physical + """ + if interface: + iface_path = '/sys/class/net/%s' % (interface) + if os.path.exists(iface_path): + if '/virtual/' in os.path.realpath(iface_path): + return None + + master = os.path.join(iface_path, 'master') + if os.path.exists(master): + master = os.path.realpath(master) + # make sure it is a bond master + if os.path.exists(os.path.join(master, 'bonding')): + return os.path.basename(master) + + return None + + +def list_nics(nic_type=None): + """Return a list of nics of given type(s)""" + if isinstance(nic_type, six.string_types): + int_types = [nic_type] + else: + int_types = nic_type + + interfaces = [] + if nic_type: + for int_type in int_types: + cmd = ['ip', 'addr', 'show', 'label', int_type + '*'] + ip_output = subprocess.check_output(cmd).decode('UTF-8') + ip_output = ip_output.split('\n') + ip_output = (line for line in ip_output if line) + for line in ip_output: + if line.split()[1].startswith(int_type): + matched = re.search('.*: (' + int_type + + r'[0-9]+\.[0-9]+)@.*', line) + if matched: + iface = matched.groups()[0] + else: + iface = line.split()[1].replace(":", "") + + if iface not in interfaces: + interfaces.append(iface) + else: + cmd = ['ip', 'a'] + ip_output = subprocess.check_output(cmd).decode('UTF-8').split('\n') + ip_output = (line.strip() for line in ip_output if line) + + key = re.compile(r'^[0-9]+:\s+(.+):') + for line in ip_output: + matched = re.search(key, line) + if matched: + iface = matched.group(1) + iface = iface.partition("@")[0] + if iface not in interfaces: + interfaces.append(iface) + + return interfaces + + +def set_nic_mtu(nic, mtu): + """Set the Maximum Transmission Unit (MTU) on a network interface.""" + cmd = ['ip', 'link', 'set', nic, 'mtu', mtu] + subprocess.check_call(cmd) + + +def get_nic_mtu(nic): + """Return the Maximum Transmission Unit (MTU) for a network interface.""" + cmd = ['ip', 'addr', 'show', nic] + ip_output = subprocess.check_output(cmd).decode('UTF-8').split('\n') + mtu = "" + for line in ip_output: + words = line.split() + if 'mtu' in words: + mtu = words[words.index("mtu") + 1] + return mtu + + +def get_nic_hwaddr(nic): + """Return the Media Access Control (MAC) for a network interface.""" + cmd = ['ip', '-o', '-0', 'addr', 'show', nic] + ip_output = subprocess.check_output(cmd).decode('UTF-8') + hwaddr = "" + words = ip_output.split() + if 'link/ether' in words: + hwaddr = words[words.index('link/ether') + 1] + return hwaddr + + +@contextmanager +def chdir(directory): + """Change the current working directory to a different directory for a code + block and return the previous directory after the block exits. Useful to + run commands from a specificed directory. + + :param str directory: The directory path to change to for this context. + """ + cur = os.getcwd() + try: + yield os.chdir(directory) + finally: + os.chdir(cur) + + +def chownr(path, owner, group, follow_links=True, chowntopdir=False): + """Recursively change user and group ownership of files and directories + in given path. Doesn't chown path itself by default, only its children. + + :param str path: The string path to start changing ownership. + :param str owner: The owner string to use when looking up the uid. + :param str group: The group string to use when looking up the gid. 
+ :param bool follow_links: Also follow and chown links if True + :param bool chowntopdir: Also chown path itself if True + """ + uid = pwd.getpwnam(owner).pw_uid + gid = grp.getgrnam(group).gr_gid + if follow_links: + chown = os.chown + else: + chown = os.lchown + + if chowntopdir: + broken_symlink = os.path.lexists(path) and not os.path.exists(path) + if not broken_symlink: + chown(path, uid, gid) + for root, dirs, files in os.walk(path, followlinks=follow_links): + for name in dirs + files: + full = os.path.join(root, name) + broken_symlink = os.path.lexists(full) and not os.path.exists(full) + if not broken_symlink: + chown(full, uid, gid) + + +def lchownr(path, owner, group): + """Recursively change user and group ownership of files and directories + in a given path, not following symbolic links. See the documentation for + 'os.lchown' for more information. + + :param str path: The string path to start changing ownership. + :param str owner: The owner string to use when looking up the uid. + :param str group: The group string to use when looking up the gid. + """ + chownr(path, owner, group, follow_links=False) + + +def owner(path): + """Returns a tuple containing the username & groupname owning the path. + + :param str path: the string path to retrieve the ownership + :return tuple(str, str): A (username, groupname) tuple containing the + name of the user and group owning the path. + :raises OSError: if the specified path does not exist + """ + stat = os.stat(path) + username = pwd.getpwuid(stat.st_uid)[0] + groupname = grp.getgrgid(stat.st_gid)[0] + return username, groupname + + +def get_total_ram(): + """The total amount of system RAM in bytes. + + This is what is reported by the OS, and may be overcommitted when + there are multiple containers hosted on the same machine. + """ + with open('/proc/meminfo', 'r') as f: + for line in f.readlines(): + if line: + key, value, unit = line.split() + if key == 'MemTotal:': + assert unit == 'kB', 'Unknown unit' + return int(value) * 1024 # Classic, not KiB. + raise NotImplementedError() + + +UPSTART_CONTAINER_TYPE = '/run/container_type' + + +def is_container(): + """Determine whether unit is running in a container + + @return: boolean indicating if unit is in a container + """ + if init_is_systemd(): + # Detect using systemd-detect-virt + return subprocess.call(['systemd-detect-virt', + '--container']) == 0 + else: + # Detect using upstart container file marker + return os.path.exists(UPSTART_CONTAINER_TYPE) + + +def add_to_updatedb_prunepath(path, updatedb_path=UPDATEDB_PATH): + """Adds the specified path to the mlocate's udpatedb.conf PRUNEPATH list. + + This method has no effect if the path specified by updatedb_path does not + exist or is not a file. 
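+
+    Example (a sketch; the path is illustrative)::
+
+        add_to_updatedb_prunepath('/srv/backup')
+        # PRUNEPATHS in /etc/updatedb.conf then also lists /srv/backup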
+ + @param path: string the path to add to the updatedb.conf PRUNEPATHS value + @param updatedb_path: the path the updatedb.conf file + """ + if not os.path.exists(updatedb_path) or os.path.isdir(updatedb_path): + # If the updatedb.conf file doesn't exist then don't attempt to update + # the file as the package providing mlocate may not be installed on + # the local system + return + + with open(updatedb_path, 'r+') as f_id: + updatedb_text = f_id.read() + output = updatedb(updatedb_text, path) + f_id.seek(0) + f_id.write(output) + f_id.truncate() + + +def updatedb(updatedb_text, new_path): + lines = [line for line in updatedb_text.split("\n")] + for i, line in enumerate(lines): + if line.startswith("PRUNEPATHS="): + paths_line = line.split("=")[1].replace('"', '') + paths = paths_line.split(" ") + if new_path not in paths: + paths.append(new_path) + lines[i] = 'PRUNEPATHS="{}"'.format(' '.join(paths)) + output = "\n".join(lines) + return output + + +def modulo_distribution(modulo=3, wait=30, non_zero_wait=False): + """ Modulo distribution + + This helper uses the unit number, a modulo value and a constant wait time + to produce a calculated wait time distribution. This is useful in large + scale deployments to distribute load during an expensive operation such as + service restarts. + + If you have 1000 nodes that need to restart 100 at a time 1 minute at a + time: + + time.wait(modulo_distribution(modulo=100, wait=60)) + restart() + + If you need restarts to happen serially set modulo to the exact number of + nodes and set a high constant wait time: + + time.wait(modulo_distribution(modulo=10, wait=120)) + restart() + + @param modulo: int The modulo number creates the group distribution + @param wait: int The constant time wait value + @param non_zero_wait: boolean Override unit % modulo == 0, + return modulo * wait. Used to avoid collisions with + leader nodes which are often given priority. + @return: int Calculated time to wait for unit operation + """ + unit_number = int(local_unit().split('/')[1]) + calculated_wait_time = (unit_number % modulo) * wait + if non_zero_wait and calculated_wait_time == 0: + return modulo * wait + else: + return calculated_wait_time + + +def install_ca_cert(ca_cert, name=None): + """ + Install the given cert as a trusted CA. + + The ``name`` is the stem of the filename where the cert is written, and if + not provided, it will default to ``juju-{charm_name}``. + + If the cert is empty or None, or is unchanged, nothing is done. + """ + if not ca_cert: + return + if not isinstance(ca_cert, bytes): + ca_cert = ca_cert.encode('utf8') + if not name: + name = 'juju-{}'.format(charm_name()) + cert_file = '/usr/local/share/ca-certificates/{}.crt'.format(name) + new_hash = hashlib.md5(ca_cert).hexdigest() + if file_hash(cert_file) == new_hash: + return + log("Installing new CA cert at: {}".format(cert_file), level=INFO) + write_file(cert_file, ca_cert) + subprocess.check_call(['update-ca-certificates', '--fresh']) + + +def get_system_env(key, default=None): + """Get data from system environment as represented in ``/etc/environment``. + + :param key: Key to look up + :type key: str + :param default: Value to return if key is not found + :type default: any + :returns: Value for key if found or contents of default parameter + :rtype: any + :raises: subprocess.CalledProcessError + """ + env_file = '/etc/environment' + # use the shell and env(1) to parse the global environments file. 
This is + # done to get the correct result even if the user has shell variable + # substitutions or other shell logic in that file. + output = subprocess.check_output( + ['env', '-i', '/bin/bash', '-c', + 'set -a && source {} && env'.format(env_file)], + universal_newlines=True) + for k, v in (line.split('=', 1) + for line in output.splitlines() if '=' in line): + if k == key: + return v + else: + return default diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/core/host_factory/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/core/host_factory/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/core/host_factory/centos.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/core/host_factory/centos.py new file mode 100644 index 0000000000000000000000000000000000000000..7781a3961f23ce0b161ae08b11710466af8de814 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/core/host_factory/centos.py @@ -0,0 +1,72 @@ +import subprocess +import yum +import os + +from charmhelpers.core.strutils import BasicStringComparator + + +class CompareHostReleases(BasicStringComparator): + """Provide comparisons of Host releases. + + Use in the form of + + if CompareHostReleases(release) > 'trusty': + # do something with mitaka + """ + + def __init__(self, item): + raise NotImplementedError( + "CompareHostReleases() is not implemented for CentOS") + + +def service_available(service_name): + # """Determine whether a system service is available.""" + if os.path.isdir('/run/systemd/system'): + cmd = ['systemctl', 'is-enabled', service_name] + else: + cmd = ['service', service_name, 'is-enabled'] + return subprocess.call(cmd) == 0 + + +def add_new_group(group_name, system_group=False, gid=None): + cmd = ['groupadd'] + if gid: + cmd.extend(['--gid', str(gid)]) + if system_group: + cmd.append('-r') + cmd.append(group_name) + subprocess.check_call(cmd) + + +def lsb_release(): + """Return /etc/os-release in a dict.""" + d = {} + with open('/etc/os-release', 'r') as lsb: + for l in lsb: + s = l.split('=') + if len(s) != 2: + continue + d[s[0].strip()] = s[1].strip() + return d + + +def cmp_pkgrevno(package, revno, pkgcache=None): + """Compare supplied revno with the revno of the installed package. + + * 1 => Installed revno is greater than supplied arg + * 0 => Installed revno is the same as supplied arg + * -1 => Installed revno is less than supplied arg + + This function imports YumBase function if the pkgcache argument + is None. 
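+
+    For example, to gate configuration on an installed package's version
+    (the package name and version below are illustrative)::
+
+        if cmp_pkgrevno('httpd', '2.4.6') >= 0:
+            pass  # installed httpd is 2.4.6 or newer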
+ """ + if not pkgcache: + y = yum.YumBase() + packages = y.doPackageLists() + pkgcache = {i.Name: i.version for i in packages['installed']} + pkg = pkgcache[package] + if pkg > revno: + return 1 + if pkg < revno: + return -1 + return 0 diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/core/host_factory/ubuntu.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/core/host_factory/ubuntu.py new file mode 100644 index 0000000000000000000000000000000000000000..3edc0687275b29762b45ebf3fe1b045bc9f568b2 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/core/host_factory/ubuntu.py @@ -0,0 +1,116 @@ +import subprocess + +from charmhelpers.core.hookenv import cached +from charmhelpers.core.strutils import BasicStringComparator + + +UBUNTU_RELEASES = ( + 'lucid', + 'maverick', + 'natty', + 'oneiric', + 'precise', + 'quantal', + 'raring', + 'saucy', + 'trusty', + 'utopic', + 'vivid', + 'wily', + 'xenial', + 'yakkety', + 'zesty', + 'artful', + 'bionic', + 'cosmic', + 'disco', + 'eoan', + 'focal' +) + + +class CompareHostReleases(BasicStringComparator): + """Provide comparisons of Ubuntu releases. + + Use in the form of + + if CompareHostReleases(release) > 'trusty': + # do something with mitaka + """ + _list = UBUNTU_RELEASES + + +def service_available(service_name): + """Determine whether a system service is available""" + try: + subprocess.check_output( + ['service', service_name, 'status'], + stderr=subprocess.STDOUT).decode('UTF-8') + except subprocess.CalledProcessError as e: + return b'unrecognized service' not in e.output + else: + return True + + +def add_new_group(group_name, system_group=False, gid=None): + cmd = ['addgroup'] + if gid: + cmd.extend(['--gid', str(gid)]) + if system_group: + cmd.append('--system') + else: + cmd.extend([ + '--group', + ]) + cmd.append(group_name) + subprocess.check_call(cmd) + + +def lsb_release(): + """Return /etc/lsb-release in a dict""" + d = {} + with open('/etc/lsb-release', 'r') as lsb: + for l in lsb: + k, v = l.split('=') + d[k.strip()] = v.strip() + return d + + +def get_distrib_codename(): + """Return the codename of the distribution + :returns: The codename + :rtype: str + """ + return lsb_release()['DISTRIB_CODENAME'].lower() + + +def cmp_pkgrevno(package, revno, pkgcache=None): + """Compare supplied revno with the revno of the installed package. + + * 1 => Installed revno is greater than supplied arg + * 0 => Installed revno is the same as supplied arg + * -1 => Installed revno is less than supplied arg + + This function imports apt_cache function from charmhelpers.fetch if + the pkgcache argument is None. Be sure to add charmhelpers.fetch if + you call this function, or pass an apt_pkg.Cache() instance. + """ + from charmhelpers.fetch import apt_pkg + if not pkgcache: + from charmhelpers.fetch import apt_cache + pkgcache = apt_cache() + pkg = pkgcache[package] + return apt_pkg.version_compare(pkg.current_ver.ver_str, revno) + + +@cached +def arch(): + """Return the package architecture as a string. 
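+
+    For example, on a stock 64-bit Ubuntu host::
+
+        >>> arch()
+        'amd64'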
+ + :returns: the architecture + :rtype: str + :raises: subprocess.CalledProcessError if dpkg command fails + """ + return subprocess.check_output( + ['dpkg', '--print-architecture'] + ).rstrip().decode('UTF-8') diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/core/hugepage.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/core/hugepage.py new file mode 100644 index 0000000000000000000000000000000000000000..54b5b5e2fcf81eea5f2ebfbceb620ea68d725584 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/core/hugepage.py @@ -0,0 +1,69 @@ +# -*- coding: utf-8 -*- + +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import yaml +from charmhelpers.core import fstab +from charmhelpers.core import sysctl +from charmhelpers.core.host import ( + add_group, + add_user_to_group, + fstab_mount, + mkdir, +) +from charmhelpers.core.strutils import bytes_from_string +from subprocess import check_output + + +def hugepage_support(user, group='hugetlb', nr_hugepages=256, + max_map_count=65536, mnt_point='/run/hugepages/kvm', + pagesize='2MB', mount=True, set_shmmax=False): + """Enable hugepages on system. + + Args: + user (str) -- Username to allow access to hugepages to + group (str) -- Group name to own hugepages + nr_hugepages (int) -- Number of pages to reserve + max_map_count (int) -- Number of Virtual Memory Areas a process can own + mnt_point (str) -- Directory to mount hugepages on + pagesize (str) -- Size of hugepages + mount (bool) -- Whether to Mount hugepages + """ + group_info = add_group(group) + gid = group_info.gr_gid + add_user_to_group(user, group) + if max_map_count < 2 * nr_hugepages: + max_map_count = 2 * nr_hugepages + sysctl_settings = { + 'vm.nr_hugepages': nr_hugepages, + 'vm.max_map_count': max_map_count, + 'vm.hugetlb_shm_group': gid, + } + if set_shmmax: + shmmax_current = int(check_output(['sysctl', '-n', 'kernel.shmmax'])) + shmmax_minsize = bytes_from_string(pagesize) * nr_hugepages + if shmmax_minsize > shmmax_current: + sysctl_settings['kernel.shmmax'] = shmmax_minsize + sysctl.create(yaml.dump(sysctl_settings), '/etc/sysctl.d/10-hugepage.conf') + mkdir(mnt_point, owner='root', group='root', perms=0o755, force=False) + lfstab = fstab.Fstab() + fstab_entry = lfstab.get_entry_by_attr('mountpoint', mnt_point) + if fstab_entry: + lfstab.remove_entry(fstab_entry) + entry = lfstab.Entry('nodev', mnt_point, 'hugetlbfs', + 'mode=1770,gid={},pagesize={}'.format(gid, pagesize), 0, 0) + lfstab.add_entry(entry) + if mount: + fstab_mount(mnt_point) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/core/kernel.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/core/kernel.py new file mode 100644 index 0000000000000000000000000000000000000000..e01f4f8ba73ee0d5ab7553740c2590a50e42f96d --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/core/kernel.py @@ -0,0 +1,72 @@ 
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+
+# Copyright 2014-2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import re
+import subprocess
+
+from charmhelpers.osplatform import get_platform
+from charmhelpers.core.hookenv import (
+    log,
+    INFO
+)
+
+__platform__ = get_platform()
+if __platform__ == "ubuntu":
+    from charmhelpers.core.kernel_factory.ubuntu import (  # NOQA:F401
+        persistent_modprobe,
+        update_initramfs,
+    )  # flake8: noqa -- ignore F401 for this import
+elif __platform__ == "centos":
+    from charmhelpers.core.kernel_factory.centos import (  # NOQA:F401
+        persistent_modprobe,
+        update_initramfs,
+    )  # flake8: noqa -- ignore F401 for this import
+
+__author__ = "Jorge Niedbalski "
+
+
+def modprobe(module, persist=True):
+    """Load a kernel module and configure for auto-load on reboot."""
+    cmd = ['modprobe', module]
+
+    log('Loading kernel module %s' % module, level=INFO)
+
+    subprocess.check_call(cmd)
+    if persist:
+        persistent_modprobe(module)
+
+
+def rmmod(module, force=False):
+    """Remove a module from the linux kernel"""
+    cmd = ['rmmod']
+    if force:
+        cmd.append('-f')
+    cmd.append(module)
+    log('Removing kernel module %s' % module, level=INFO)
+    return subprocess.check_call(cmd)
+
+
+def lsmod():
+    """Shows what kernel modules are currently loaded"""
+    return subprocess.check_output(['lsmod'],
+                                   universal_newlines=True)
+
+
+def is_module_loaded(module):
+    """Checks if a kernel module is already loaded"""
+    matches = re.findall('^%s[ ]+' % module, lsmod(), re.M)
+    return len(matches) > 0
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/core/kernel_factory/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/core/kernel_factory/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/core/kernel_factory/centos.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/core/kernel_factory/centos.py
new file mode 100644
index 0000000000000000000000000000000000000000..1c402c1157900ff1ad5c6c296a409c9e8fb96d2b
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/core/kernel_factory/centos.py
@@ -0,0 +1,17 @@
+import subprocess
+import os
+
+
+def persistent_modprobe(module):
+    """Load a kernel module and configure for auto-load on reboot."""
+    if not os.path.exists('/etc/rc.modules'):
+        open('/etc/rc.modules', 'a')
+        os.chmod('/etc/rc.modules', 0o111)  # octal a+x; decimal 111 was a slip
+    with open('/etc/rc.modules', 'r+') as modules:
+        if module not in modules.read():
+            modules.write('modprobe %s\n' % module)
+
+
+def update_initramfs(version='all'):
+    """Updates an initramfs image."""
+    return subprocess.check_call(["dracut", "-f", version])
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/core/kernel_factory/ubuntu.py 
b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/core/kernel_factory/ubuntu.py new file mode 100644 index 0000000000000000000000000000000000000000..3de372fd3df38fe151cf79243f129cb504516f22 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/core/kernel_factory/ubuntu.py @@ -0,0 +1,13 @@ +import subprocess + + +def persistent_modprobe(module): + """Load a kernel module and configure for auto-load on reboot.""" + with open('/etc/modules', 'r+') as modules: + if module not in modules.read(): + modules.write(module + "\n") + + +def update_initramfs(version='all'): + """Updates an initramfs image.""" + return subprocess.check_call(["update-initramfs", "-k", version, "-u"]) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/core/services/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/core/services/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..61fd074edc09de434859e48ae1b36baef0503708 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/core/services/__init__.py @@ -0,0 +1,16 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +from .base import * # NOQA +from .helpers import * # NOQA diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/core/services/base.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/core/services/base.py new file mode 100644 index 0000000000000000000000000000000000000000..179ad4f0c367dd6b13c10b201c3752d1c8daf05e --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/core/services/base.py @@ -0,0 +1,362 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import os +import json +from inspect import getargspec +from collections import Iterable, OrderedDict + +from charmhelpers.core import host +from charmhelpers.core import hookenv + + +__all__ = ['ServiceManager', 'ManagerCallback', + 'PortManagerCallback', 'open_ports', 'close_ports', 'manage_ports', + 'service_restart', 'service_stop'] + + +class ServiceManager(object): + def __init__(self, services=None): + """ + Register a list of services, given their definitions. 
+
+        Service definitions are dicts in the following formats (all keys except
+        'service' are optional)::
+
+            {
+                "service": <service name>,
+                "required_data": <list of required data contexts>,
+                "provided_data": <list of provided data contexts>,
+                "data_ready": <one or more callbacks>,
+                "data_lost": <one or more callbacks>,
+                "start": <one or more callbacks>,
+                "stop": <one or more callbacks>,
+                "ports": <list of ports to manage>,
+            }
+
+        The 'required_data' list should contain dicts of required data (or
+        dependency managers that act like dicts and know how to collect the data).
+        Only when all items in the 'required_data' list are populated are the list
+        of 'data_ready' and 'start' callbacks executed. See `is_ready()` for more
+        information.
+
+        The 'provided_data' list should contain relation data providers, most likely
+        a subclass of :class:`charmhelpers.core.services.helpers.RelationContext`,
+        that will indicate a set of data to set on a given relation.
+
+        The 'data_ready' value should be either a single callback, or a list of
+        callbacks, to be called when all items in 'required_data' pass `is_ready()`.
+        Each callback will be called with the service name as the only parameter.
+        After all of the 'data_ready' callbacks are called, the 'start' callbacks
+        are fired.
+
+        The 'data_lost' value should be either a single callback, or a list of
+        callbacks, to be called when a 'required_data' item no longer passes
+        `is_ready()`. Each callback will be called with the service name as the
+        only parameter. After all of the 'data_lost' callbacks are called,
+        the 'stop' callbacks are fired.
+
+        The 'start' value should be either a single callback, or a list of
+        callbacks, to be called when starting the service, after the 'data_ready'
+        callbacks are complete. Each callback will be called with the service
+        name as the only parameter. This defaults to
+        `[host.service_start, services.open_ports]`.
+
+        The 'stop' value should be either a single callback, or a list of
+        callbacks, to be called when stopping the service. If the service is
+        being stopped because it no longer has all of its 'required_data', this
+        will be called after all of the 'data_lost' callbacks are complete.
+        Each callback will be called with the service name as the only parameter.
+        This defaults to `[services.close_ports, host.service_stop]`.
+
+        The 'ports' value should be a list of ports to manage. The default
+        'start' handler will open the ports after the service is started,
+        and the default 'stop' handler will close the ports prior to stopping
+        the service.
+
+
+        Examples:
+
+        The following registers an Upstart service called bingod that depends on
+        a mongodb relation and which runs a custom `db_migrate` function prior to
+        restarting the service, and a Runit service called spadesd::
+
+            manager = services.ServiceManager([
+                {
+                    'service': 'bingod',
+                    'ports': [80, 443],
+                    'required_data': [MongoRelation(), config(), {'my': 'data'}],
+                    'data_ready': [
+                        services.template(source='bingod.conf'),
+                        services.template(source='bingod.ini',
+                                          target='/etc/bingod.ini',
+                                          owner='bingo', perms=0400),
+                    ],
+                },
+                {
+                    'service': 'spadesd',
+                    'data_ready': services.template(source='spadesd_run.j2',
+                                                    target='/etc/sv/spadesd/run',
+                                                    perms=0555),
+                    'start': runit_start,
+                    'stop': runit_stop,
+                },
+            ])
+            manager.manage()
+        """
+        self._ready_file = os.path.join(hookenv.charm_dir(), 'READY-SERVICES.json')
+        self._ready = None
+        self.services = OrderedDict()
+        for service in services or []:
+            service_name = service['service']
+            self.services[service_name] = service
+
+    def manage(self):
+        """
+        Handle the current hook by doing The Right Thing with the registered services.
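+
+        This is normally the only entry point a hook script needs; a
+        typical hook file is just (a sketch, assuming the definitions
+        above live in a ``services.py`` module)::
+
+            #!/usr/bin/env python
+            from services import manager
+            manager.manage()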
+ """ + hookenv._run_atstart() + try: + hook_name = hookenv.hook_name() + if hook_name == 'stop': + self.stop_services() + else: + self.reconfigure_services() + self.provide_data() + except SystemExit as x: + if x.code is None or x.code == 0: + hookenv._run_atexit() + hookenv._run_atexit() + + def provide_data(self): + """ + Set the relation data for each provider in the ``provided_data`` list. + + A provider must have a `name` attribute, which indicates which relation + to set data on, and a `provide_data()` method, which returns a dict of + data to set. + + The `provide_data()` method can optionally accept two parameters: + + * ``remote_service`` The name of the remote service that the data will + be provided to. The `provide_data()` method will be called once + for each connected service (not unit). This allows the method to + tailor its data to the given service. + * ``service_ready`` Whether or not the service definition had all of + its requirements met, and thus the ``data_ready`` callbacks run. + + Note that the ``provided_data`` methods are now called **after** the + ``data_ready`` callbacks are run. This gives the ``data_ready`` callbacks + a chance to generate any data necessary for the providing to the remote + services. + """ + for service_name, service in self.services.items(): + service_ready = self.is_ready(service_name) + for provider in service.get('provided_data', []): + for relid in hookenv.relation_ids(provider.name): + units = hookenv.related_units(relid) + if not units: + continue + remote_service = units[0].split('/')[0] + argspec = getargspec(provider.provide_data) + if len(argspec.args) > 1: + data = provider.provide_data(remote_service, service_ready) + else: + data = provider.provide_data() + if data: + hookenv.relation_set(relid, data) + + def reconfigure_services(self, *service_names): + """ + Update all files for one or more registered services, and, + if ready, optionally restart them. + + If no service names are given, reconfigures all registered services. + """ + for service_name in service_names or self.services.keys(): + if self.is_ready(service_name): + self.fire_event('data_ready', service_name) + self.fire_event('start', service_name, default=[ + service_restart, + manage_ports]) + self.save_ready(service_name) + else: + if self.was_ready(service_name): + self.fire_event('data_lost', service_name) + self.fire_event('stop', service_name, default=[ + manage_ports, + service_stop]) + self.save_lost(service_name) + + def stop_services(self, *service_names): + """ + Stop one or more registered services, by name. + + If no service names are given, stops all registered services. + """ + for service_name in service_names or self.services.keys(): + self.fire_event('stop', service_name, default=[ + manage_ports, + service_stop]) + + def get_service(self, service_name): + """ + Given the name of a registered service, return its service definition. + """ + service = self.services.get(service_name) + if not service: + raise KeyError('Service not registered: %s' % service_name) + return service + + def fire_event(self, event_name, service_name, default=None): + """ + Fire a data_ready, data_lost, start, or stop event on a given service. 
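+
+        Mostly used internally by ``reconfigure_services`` and
+        ``stop_services``, but it can also be invoked directly, e.g.::
+
+            manager.fire_event('data_ready', 'bingod')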
+ """ + service = self.get_service(service_name) + callbacks = service.get(event_name, default) + if not callbacks: + return + if not isinstance(callbacks, Iterable): + callbacks = [callbacks] + for callback in callbacks: + if isinstance(callback, ManagerCallback): + callback(self, service_name, event_name) + else: + callback(service_name) + + def is_ready(self, service_name): + """ + Determine if a registered service is ready, by checking its 'required_data'. + + A 'required_data' item can be any mapping type, and is considered ready + if `bool(item)` evaluates as True. + """ + service = self.get_service(service_name) + reqs = service.get('required_data', []) + return all(bool(req) for req in reqs) + + def _load_ready_file(self): + if self._ready is not None: + return + if os.path.exists(self._ready_file): + with open(self._ready_file) as fp: + self._ready = set(json.load(fp)) + else: + self._ready = set() + + def _save_ready_file(self): + if self._ready is None: + return + with open(self._ready_file, 'w') as fp: + json.dump(list(self._ready), fp) + + def save_ready(self, service_name): + """ + Save an indicator that the given service is now data_ready. + """ + self._load_ready_file() + self._ready.add(service_name) + self._save_ready_file() + + def save_lost(self, service_name): + """ + Save an indicator that the given service is no longer data_ready. + """ + self._load_ready_file() + self._ready.discard(service_name) + self._save_ready_file() + + def was_ready(self, service_name): + """ + Determine if the given service was previously data_ready. + """ + self._load_ready_file() + return service_name in self._ready + + +class ManagerCallback(object): + """ + Special case of a callback that takes the `ServiceManager` instance + in addition to the service name. + + Subclasses should implement `__call__` which should accept three parameters: + + * `manager` The `ServiceManager` instance + * `service_name` The name of the service it's being triggered for + * `event_name` The name of the event that this callback is handling + """ + def __call__(self, manager, service_name, event_name): + raise NotImplementedError() + + +class PortManagerCallback(ManagerCallback): + """ + Callback class that will open or close ports, for use as either + a start or stop action. + """ + def __call__(self, manager, service_name, event_name): + service = manager.get_service(service_name) + # turn this generator into a list, + # as we'll be going over it multiple times + new_ports = list(service.get('ports', [])) + port_file = os.path.join(hookenv.charm_dir(), '.{}.ports'.format(service_name)) + if os.path.exists(port_file): + with open(port_file) as fp: + old_ports = fp.read().split(',') + for old_port in old_ports: + if bool(old_port) and not self.ports_contains(old_port, new_ports): + hookenv.close_port(old_port) + with open(port_file, 'w') as fp: + fp.write(','.join(str(port) for port in new_ports)) + for port in new_ports: + # A port is either a number or 'ICMP' + protocol = 'TCP' + if str(port).upper() == 'ICMP': + protocol = 'ICMP' + if event_name == 'start': + hookenv.open_port(port, protocol) + elif event_name == 'stop': + hookenv.close_port(port, protocol) + + def ports_contains(self, port, ports): + if not bool(port): + return False + if str(port).upper() != 'ICMP': + port = int(port) + return port in ports + + +def service_stop(service_name): + """ + Wrapper around host.service_stop to prevent spurious "unknown service" + messages in the logs. 
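+
+    For example::
+
+        service_stop('spadesd')  # silently a no-op if spadesd is stopped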
+ """ + if host.service_running(service_name): + host.service_stop(service_name) + + +def service_restart(service_name): + """ + Wrapper around host.service_restart to prevent spurious "unknown service" + messages in the logs. + """ + if host.service_available(service_name): + if host.service_running(service_name): + host.service_restart(service_name) + else: + host.service_start(service_name) + + +# Convenience aliases +open_ports = close_ports = manage_ports = PortManagerCallback() diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/core/services/helpers.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/core/services/helpers.py new file mode 100644 index 0000000000000000000000000000000000000000..3e6e30d2fe0d9c73ffdc42d70b77e864b6379c53 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/core/services/helpers.py @@ -0,0 +1,290 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import os +import yaml + +from charmhelpers.core import hookenv +from charmhelpers.core import host +from charmhelpers.core import templating + +from charmhelpers.core.services.base import ManagerCallback + + +__all__ = ['RelationContext', 'TemplateCallback', + 'render_template', 'template'] + + +class RelationContext(dict): + """ + Base class for a context generator that gets relation data from juju. + + Subclasses must provide the attributes `name`, which is the name of the + interface of interest, `interface`, which is the type of the interface of + interest, and `required_keys`, which is the set of keys required for the + relation to be considered complete. The data for all interfaces matching + the `name` attribute that are complete will used to populate the dictionary + values (see `get_data`, below). + + The generated context will be namespaced under the relation :attr:`name`, + to prevent potential naming conflicts. + + :param str name: Override the relation :attr:`name`, since it can vary from charm to charm + :param list additional_required_keys: Extend the list of :attr:`required_keys` + """ + name = None + interface = None + + def __init__(self, name=None, additional_required_keys=None): + if not hasattr(self, 'required_keys'): + self.required_keys = [] + + if name is not None: + self.name = name + if additional_required_keys: + self.required_keys.extend(additional_required_keys) + self.get_data() + + def __bool__(self): + """ + Returns True if all of the required_keys are available. + """ + return self.is_ready() + + __nonzero__ = __bool__ + + def __repr__(self): + return super(RelationContext, self).__repr__() + + def is_ready(self): + """ + Returns True if all of the `required_keys` are available from any units. 
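+
+        Because ``__bool__`` delegates to this method, a context can be
+        used directly in a truth test (``render_site_config`` below is
+        illustrative)::
+
+            db = MysqlRelation()
+            if db:
+                render_site_config(db)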
+ """ + ready = len(self.get(self.name, [])) > 0 + if not ready: + hookenv.log('Incomplete relation: {}'.format(self.__class__.__name__), hookenv.DEBUG) + return ready + + def _is_ready(self, unit_data): + """ + Helper method that tests a set of relation data and returns True if + all of the `required_keys` are present. + """ + return set(unit_data.keys()).issuperset(set(self.required_keys)) + + def get_data(self): + """ + Retrieve the relation data for each unit involved in a relation and, + if complete, store it in a list under `self[self.name]`. This + is automatically called when the RelationContext is instantiated. + + The units are sorted lexographically first by the service ID, then by + the unit ID. Thus, if an interface has two other services, 'db:1' + and 'db:2', with 'db:1' having two units, 'wordpress/0' and 'wordpress/1', + and 'db:2' having one unit, 'mediawiki/0', all of which have a complete + set of data, the relation data for the units will be stored in the + order: 'wordpress/0', 'wordpress/1', 'mediawiki/0'. + + If you only care about a single unit on the relation, you can just + access it as `{{ interface[0]['key'] }}`. However, if you can at all + support multiple units on a relation, you should iterate over the list, + like:: + + {% for unit in interface -%} + {{ unit['key'] }}{% if not loop.last %},{% endif %} + {%- endfor %} + + Note that since all sets of relation data from all related services and + units are in a single list, if you need to know which service or unit a + set of data came from, you'll need to extend this class to preserve + that information. + """ + if not hookenv.relation_ids(self.name): + return + + ns = self.setdefault(self.name, []) + for rid in sorted(hookenv.relation_ids(self.name)): + for unit in sorted(hookenv.related_units(rid)): + reldata = hookenv.relation_get(rid=rid, unit=unit) + if self._is_ready(reldata): + ns.append(reldata) + + def provide_data(self): + """ + Return data to be relation_set for this interface. + """ + return {} + + +class MysqlRelation(RelationContext): + """ + Relation context for the `mysql` interface. + + :param str name: Override the relation :attr:`name`, since it can vary from charm to charm + :param list additional_required_keys: Extend the list of :attr:`required_keys` + """ + name = 'db' + interface = 'mysql' + + def __init__(self, *args, **kwargs): + self.required_keys = ['host', 'user', 'password', 'database'] + RelationContext.__init__(self, *args, **kwargs) + + +class HttpRelation(RelationContext): + """ + Relation context for the `http` interface. + + :param str name: Override the relation :attr:`name`, since it can vary from charm to charm + :param list additional_required_keys: Extend the list of :attr:`required_keys` + """ + name = 'website' + interface = 'http' + + def __init__(self, *args, **kwargs): + self.required_keys = ['host', 'port'] + RelationContext.__init__(self, *args, **kwargs) + + def provide_data(self): + return { + 'host': hookenv.unit_get('private-address'), + 'port': 80, + } + + +class RequiredConfig(dict): + """ + Data context that loads config options with one or more mandatory options. + + Once the required options have been changed from their default values, all + config options will be available, namespaced under `config` to prevent + potential naming conflicts (for example, between a config option and a + relation property). + + :param list *args: List of options that must be changed from their default values. 
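+
+    For example (option names are illustrative)::
+
+        required = RequiredConfig('domain', 'admin-password')
+        # bool(required) stays False until both options have been
+        # changed from their config.yaml defaults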
+ """ + + def __init__(self, *args): + self.required_options = args + self['config'] = hookenv.config() + with open(os.path.join(hookenv.charm_dir(), 'config.yaml')) as fp: + self.config = yaml.load(fp).get('options', {}) + + def __bool__(self): + for option in self.required_options: + if option not in self['config']: + return False + current_value = self['config'][option] + default_value = self.config[option].get('default') + if current_value == default_value: + return False + if current_value in (None, '') and default_value in (None, ''): + return False + return True + + def __nonzero__(self): + return self.__bool__() + + +class StoredContext(dict): + """ + A data context that always returns the data that it was first created with. + + This is useful to do a one-time generation of things like passwords, that + will thereafter use the same value that was originally generated, instead + of generating a new value each time it is run. + """ + def __init__(self, file_name, config_data): + """ + If the file exists, populate `self` with the data from the file. + Otherwise, populate with the given data and persist it to the file. + """ + if os.path.exists(file_name): + self.update(self.read_context(file_name)) + else: + self.store_context(file_name, config_data) + self.update(config_data) + + def store_context(self, file_name, config_data): + if not os.path.isabs(file_name): + file_name = os.path.join(hookenv.charm_dir(), file_name) + with open(file_name, 'w') as file_stream: + os.fchmod(file_stream.fileno(), 0o600) + yaml.dump(config_data, file_stream) + + def read_context(self, file_name): + if not os.path.isabs(file_name): + file_name = os.path.join(hookenv.charm_dir(), file_name) + with open(file_name, 'r') as file_stream: + data = yaml.load(file_stream) + if not data: + raise OSError("%s is empty" % file_name) + return data + + +class TemplateCallback(ManagerCallback): + """ + Callback class that will render a Jinja2 template, for use as a ready + action. 
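+
+    Typically used through the ``template`` convenience alias inside a
+    service definition's ``data_ready`` list, e.g. (paths are
+    illustrative)::
+
+        'data_ready': [
+            template(source='haproxy.cfg.j2',
+                     target='/etc/haproxy/haproxy.cfg',
+                     perms=0o644),
+        ],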
+ + :param str source: The template source file, relative to + `$CHARM_DIR/templates` + + :param str target: The target to write the rendered template to (or None) + :param str owner: The owner of the rendered file + :param str group: The group of the rendered file + :param int perms: The permissions of the rendered file + :param partial on_change_action: functools partial to be executed when + rendered file changes + :param jinja2 loader template_loader: A jinja2 template loader + + :return str: The rendered template + """ + def __init__(self, source, target, + owner='root', group='root', perms=0o444, + on_change_action=None, template_loader=None): + self.source = source + self.target = target + self.owner = owner + self.group = group + self.perms = perms + self.on_change_action = on_change_action + self.template_loader = template_loader + + def __call__(self, manager, service_name, event_name): + pre_checksum = '' + if self.on_change_action and os.path.isfile(self.target): + pre_checksum = host.file_hash(self.target) + service = manager.get_service(service_name) + context = {'ctx': {}} + for ctx in service.get('required_data', []): + context.update(ctx) + context['ctx'].update(ctx) + + result = templating.render(self.source, self.target, context, + self.owner, self.group, self.perms, + template_loader=self.template_loader) + if self.on_change_action: + if pre_checksum == host.file_hash(self.target): + hookenv.log( + 'No change detected: {}'.format(self.target), + hookenv.DEBUG) + else: + self.on_change_action() + + return result + + +# Convenience aliases for templates +render_template = template = TemplateCallback diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/core/strutils.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/core/strutils.py new file mode 100644 index 0000000000000000000000000000000000000000..e8df0452f8203b53947eb137eed22d85ff62dff0 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/core/strutils.py @@ -0,0 +1,129 @@ +#!/usr/bin/env python +# -*- coding: utf-8 -*- + +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import six +import re + + +def bool_from_string(value): + """Interpret string value as boolean. + + Returns True if value translates to True otherwise False. + """ + if isinstance(value, six.string_types): + value = six.text_type(value) + else: + msg = "Unable to interpret non-string value '%s' as boolean" % (value) + raise ValueError(msg) + + value = value.strip().lower() + + if value in ['y', 'yes', 'true', 't', 'on']: + return True + elif value in ['n', 'no', 'false', 'f', 'off']: + return False + + msg = "Unable to interpret string value '%s' as boolean" % (value) + raise ValueError(msg) + + +def bytes_from_string(value): + """Interpret human readable string value as bytes. 
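+
+    For example, '1K' and '1KB' both yield 1024, '2M' yields 2097152, and
+    a bare '512' is taken to already be in bytes.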
+
+    Returns int
+    """
+    BYTE_POWER = {
+        'K': 1,
+        'KB': 1,
+        'M': 2,
+        'MB': 2,
+        'G': 3,
+        'GB': 3,
+        'T': 4,
+        'TB': 4,
+        'P': 5,
+        'PB': 5,
+    }
+    if isinstance(value, six.string_types):
+        value = six.text_type(value)
+    else:
+        msg = "Unable to interpret non-string value '%s' as bytes" % (value)
+        raise ValueError(msg)
+    matches = re.match("([0-9]+)([a-zA-Z]+)", value)
+    if matches:
+        size = int(matches.group(1)) * (1024 ** BYTE_POWER[matches.group(2)])
+    else:
+        # Assume that value passed in is bytes
+        try:
+            size = int(value)
+        except ValueError:
+            msg = "Unable to interpret string value '%s' as bytes" % (value)
+            raise ValueError(msg)
+    return size
+
+
+class BasicStringComparator(object):
+    """Provides a class that will compare strings from an iterator type object.
+    Used to provide > and < comparisons on strings that may not necessarily be
+    alphanumerically ordered. e.g. OpenStack or Ubuntu releases AFTER the
+    z-wrap.
+    """
+
+    _list = None
+
+    def __init__(self, item):
+        if self._list is None:
+            raise Exception("Must define the _list in the class definition!")
+        try:
+            self.index = self._list.index(item)
+        except Exception:
+            raise KeyError("Item '{}' is not in list '{}'"
+                           .format(item, self._list))
+
+    def __eq__(self, other):
+        assert isinstance(other, str) or isinstance(other, self.__class__)
+        return self.index == self._list.index(other)
+
+    def __ne__(self, other):
+        return not self.__eq__(other)
+
+    def __lt__(self, other):
+        assert isinstance(other, str) or isinstance(other, self.__class__)
+        return self.index < self._list.index(other)
+
+    def __ge__(self, other):
+        return not self.__lt__(other)
+
+    def __gt__(self, other):
+        assert isinstance(other, str) or isinstance(other, self.__class__)
+        return self.index > self._list.index(other)
+
+    def __le__(self, other):
+        return not self.__gt__(other)
+
+    def __str__(self):
+        """Always give back the item at the index so it can be used in
+        comparisons like:
+
+        s_mitaka = CompareOpenStack('mitaka')
+        s_newton = CompareOpenStack('newton')
+
+        assert s_newton > s_mitaka
+
+        @returns: <string>
+        """
+        return self._list[self.index]
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/core/sysctl.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/core/sysctl.py
new file mode 100644
index 0000000000000000000000000000000000000000..386428d619bc38edf02dc088bf7ec32767c0ab94
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/core/sysctl.py
@@ -0,0 +1,75 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+
+# Copyright 2014-2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import yaml
+
+from subprocess import check_call, CalledProcessError
+
+from charmhelpers.core.hookenv import (
+    log,
+    DEBUG,
+    ERROR,
+    WARNING,
+)
+
+from charmhelpers.core.host import is_container
+
+__author__ = 'Jorge Niedbalski R. 
' + + +def create(sysctl_dict, sysctl_file, ignore=False): + """Creates a sysctl.conf file from a YAML associative array + + :param sysctl_dict: a dict or YAML-formatted string of sysctl + options eg "{ 'kernel.max_pid': 1337 }" + :type sysctl_dict: str + :param sysctl_file: path to the sysctl file to be saved + :type sysctl_file: str or unicode + :param ignore: If True, ignore "unknown variable" errors. + :type ignore: bool + :returns: None + """ + if type(sysctl_dict) is not dict: + try: + sysctl_dict_parsed = yaml.safe_load(sysctl_dict) + except yaml.YAMLError: + log("Error parsing YAML sysctl_dict: {}".format(sysctl_dict), + level=ERROR) + return + else: + sysctl_dict_parsed = sysctl_dict + + with open(sysctl_file, "w") as fd: + for key, value in sysctl_dict_parsed.items(): + fd.write("{}={}\n".format(key, value)) + + log("Updating sysctl_file: {} values: {}".format(sysctl_file, + sysctl_dict_parsed), + level=DEBUG) + + call = ["sysctl", "-p", sysctl_file] + if ignore: + call.append("-e") + + try: + check_call(call) + except CalledProcessError as e: + if is_container(): + log("Error setting some sysctl keys in this container: {}".format(e.output), + level=WARNING) + else: + raise e diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/core/templating.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/core/templating.py new file mode 100644 index 0000000000000000000000000000000000000000..9014015c14ee0b48c775562cd4f0d30884944439 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/core/templating.py @@ -0,0 +1,93 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import os +import sys + +from charmhelpers.core import host +from charmhelpers.core import hookenv + + +def render(source, target, context, owner='root', group='root', + perms=0o444, templates_dir=None, encoding='UTF-8', + template_loader=None, config_template=None): + """ + Render a template. + + The `source` path, if not absolute, is relative to the `templates_dir`. + + The `target` path should be absolute. It can also be `None`, in which + case no file will be written. + + The context should be a dict containing the values to be replaced in the + template. + + config_template may be provided to render from a provided template instead + of loading from a file. + + The `owner`, `group`, and `perms` options will be passed to `write_file`. + + If omitted, `templates_dir` defaults to the `templates` folder in the charm. + + The rendered template will be written to the file as well as being returned + as a string. + + Note: Using this requires python-jinja2 or python3-jinja2; if it is not + installed, calling this will attempt to use charmhelpers.fetch.apt_install + to install it. 
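+
+    A minimal call (template name and context values are illustrative)::
+
+        render('vhost.conf.j2', '/etc/nginx/sites-available/default',
+               {'server_name': 'example.com'}, perms=0o640)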
+ """ + try: + from jinja2 import FileSystemLoader, Environment, exceptions + except ImportError: + try: + from charmhelpers.fetch import apt_install + except ImportError: + hookenv.log('Could not import jinja2, and could not import ' + 'charmhelpers.fetch to install it', + level=hookenv.ERROR) + raise + if sys.version_info.major == 2: + apt_install('python-jinja2', fatal=True) + else: + apt_install('python3-jinja2', fatal=True) + from jinja2 import FileSystemLoader, Environment, exceptions + + if template_loader: + template_env = Environment(loader=template_loader) + else: + if templates_dir is None: + templates_dir = os.path.join(hookenv.charm_dir(), 'templates') + template_env = Environment(loader=FileSystemLoader(templates_dir)) + + # load from a string if provided explicitly + if config_template is not None: + template = template_env.from_string(config_template) + else: + try: + source = source + template = template_env.get_template(source) + except exceptions.TemplateNotFound as e: + hookenv.log('Could not load template %s from %s.' % + (source, templates_dir), + level=hookenv.ERROR) + raise e + content = template.render(context) + if target is not None: + target_dir = os.path.dirname(target) + if not os.path.exists(target_dir): + # This is a terrible default directory permission, as the file + # or its siblings will often contain secrets. + host.mkdir(os.path.dirname(target), owner, group, perms=0o755) + host.write_file(target, content.encode(encoding), owner, group, perms) + return content diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/core/unitdata.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/core/unitdata.py new file mode 100644 index 0000000000000000000000000000000000000000..ab554327b343f896880523fc627c1abea84be29a --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/core/unitdata.py @@ -0,0 +1,525 @@ +#!/usr/bin/env python +# -*- coding: utf-8 -*- +# +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Authors: +# Kapil Thangavelu +# +""" +Intro +----- + +A simple way to store state in units. This provides a key value +storage with support for versioned, transactional operation, +and can calculate deltas from previous values to simplify unit logic +when processing changes. + + +Hook Integration +---------------- + +There are several extant frameworks for hook execution, including + + - charmhelpers.core.hookenv.Hooks + - charmhelpers.core.services.ServiceManager + +The storage classes are framework agnostic, one simple integration is +via the HookData contextmanager. It will record the current hook +execution environment (including relation data, config data, etc.), +setup a transaction and allow easy access to the changes from +previously seen values. One consequence of the integration is the +reservation of particular keys ('rels', 'unit', 'env', 'config', +'charm_revisions') for their respective values. 
+ +Here's a fully worked integration example using hookenv.Hooks:: + + from charmhelper.core import hookenv, unitdata + + hook_data = unitdata.HookData() + db = unitdata.kv() + hooks = hookenv.Hooks() + + @hooks.hook + def config_changed(): + # Print all changes to configuration from previously seen + # values. + for changed, (prev, cur) in hook_data.conf.items(): + print('config changed', changed, + 'previous value', prev, + 'current value', cur) + + # Get some unit specific bookeeping + if not db.get('pkg_key'): + key = urllib.urlopen('https://example.com/pkg_key').read() + db.set('pkg_key', key) + + # Directly access all charm config as a mapping. + conf = db.getrange('config', True) + + # Directly access all relation data as a mapping + rels = db.getrange('rels', True) + + if __name__ == '__main__': + with hook_data(): + hook.execute() + + +A more basic integration is via the hook_scope context manager which simply +manages transaction scope (and records hook name, and timestamp):: + + >>> from unitdata import kv + >>> db = kv() + >>> with db.hook_scope('install'): + ... # do work, in transactional scope. + ... db.set('x', 1) + >>> db.get('x') + 1 + + +Usage +----- + +Values are automatically json de/serialized to preserve basic typing +and complex data struct capabilities (dicts, lists, ints, booleans, etc). + +Individual values can be manipulated via get/set:: + + >>> kv.set('y', True) + >>> kv.get('y') + True + + # We can set complex values (dicts, lists) as a single key. + >>> kv.set('config', {'a': 1, 'b': True'}) + + # Also supports returning dictionaries as a record which + # provides attribute access. + >>> config = kv.get('config', record=True) + >>> config.b + True + + +Groups of keys can be manipulated with update/getrange:: + + >>> kv.update({'z': 1, 'y': 2}, prefix="gui.") + >>> kv.getrange('gui.', strip=True) + {'z': 1, 'y': 2} + +When updating values, its very helpful to understand which values +have actually changed and how have they changed. The storage +provides a delta method to provide for this:: + + >>> data = {'debug': True, 'option': 2} + >>> delta = kv.delta(data, 'config.') + >>> delta.debug.previous + None + >>> delta.debug.current + True + >>> delta + {'debug': (None, True), 'option': (None, 2)} + +Note the delta method does not persist the actual change, it needs to +be explicitly saved via 'update' method:: + + >>> kv.update(data, 'config.') + +Values modified in the context of a hook scope retain historical values +associated to the hookname. + + >>> with db.hook_scope('config-changed'): + ... db.set('x', 42) + >>> db.gethistory('x') + [(1, u'x', 1, u'install', u'2015-01-21T16:49:30.038372'), + (2, u'x', 42, u'config-changed', u'2015-01-21T16:49:30.038786')] + +""" + +import collections +import contextlib +import datetime +import itertools +import json +import os +import pprint +import sqlite3 +import sys + +__author__ = 'Kapil Thangavelu ' + + +class Storage(object): + """Simple key value database for local unit state within charms. + + Modifications are not persisted unless :meth:`flush` is called. + + To support dicts, lists, integer, floats, and booleans values + are automatically json encoded/decoded. + + Note: to facilitate unit testing, ':memory:' can be passed as the + path parameter which causes sqlite3 to only build the db in memory. + This should only be used for testing purposes. 
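+
+    A short self-contained example using the in-memory mode::
+
+        db = Storage(':memory:')
+        db.set('answer', 42)
+        assert db.get('answer') == 42
+        db.flush()   # nothing is persisted until flush() commits
+        db.close()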
+ """ + def __init__(self, path=None): + self.db_path = path + if path is None: + if 'UNIT_STATE_DB' in os.environ: + self.db_path = os.environ['UNIT_STATE_DB'] + else: + self.db_path = os.path.join( + os.environ.get('CHARM_DIR', ''), '.unit-state.db') + if self.db_path != ':memory:': + with open(self.db_path, 'a') as f: + os.fchmod(f.fileno(), 0o600) + self.conn = sqlite3.connect('%s' % self.db_path) + self.cursor = self.conn.cursor() + self.revision = None + self._closed = False + self._init() + + def close(self): + if self._closed: + return + self.flush(False) + self.cursor.close() + self.conn.close() + self._closed = True + + def get(self, key, default=None, record=False): + self.cursor.execute('select data from kv where key=?', [key]) + result = self.cursor.fetchone() + if not result: + return default + if record: + return Record(json.loads(result[0])) + return json.loads(result[0]) + + def getrange(self, key_prefix, strip=False): + """ + Get a range of keys starting with a common prefix as a mapping of + keys to values. + + :param str key_prefix: Common prefix among all keys + :param bool strip: Optionally strip the common prefix from the key + names in the returned dict + :return dict: A (possibly empty) dict of key-value mappings + """ + self.cursor.execute("select key, data from kv where key like ?", + ['%s%%' % key_prefix]) + result = self.cursor.fetchall() + + if not result: + return {} + if not strip: + key_prefix = '' + return dict([ + (k[len(key_prefix):], json.loads(v)) for k, v in result]) + + def update(self, mapping, prefix=""): + """ + Set the values of multiple keys at once. + + :param dict mapping: Mapping of keys to values + :param str prefix: Optional prefix to apply to all keys in `mapping` + before setting + """ + for k, v in mapping.items(): + self.set("%s%s" % (prefix, k), v) + + def unset(self, key): + """ + Remove a key from the database entirely. + """ + self.cursor.execute('delete from kv where key=?', [key]) + if self.revision and self.cursor.rowcount: + self.cursor.execute( + 'insert into kv_revisions values (?, ?, ?)', + [key, self.revision, json.dumps('DELETED')]) + + def unsetrange(self, keys=None, prefix=""): + """ + Remove a range of keys starting with a common prefix, from the database + entirely. + + :param list keys: List of keys to remove. + :param str prefix: Optional prefix to apply to all keys in ``keys`` + before removing. + """ + if keys is not None: + keys = ['%s%s' % (prefix, key) for key in keys] + self.cursor.execute('delete from kv where key in (%s)' % ','.join(['?'] * len(keys)), keys) + if self.revision and self.cursor.rowcount: + self.cursor.execute( + 'insert into kv_revisions values %s' % ','.join(['(?, ?, ?)'] * len(keys)), + list(itertools.chain.from_iterable((key, self.revision, json.dumps('DELETED')) for key in keys))) + else: + self.cursor.execute('delete from kv where key like ?', + ['%s%%' % prefix]) + if self.revision and self.cursor.rowcount: + self.cursor.execute( + 'insert into kv_revisions values (?, ?, ?)', + ['%s%%' % prefix, self.revision, json.dumps('DELETED')]) + + def set(self, key, value): + """ + Set a value in the database. 
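+
+        Returns the value as passed in, so calls can be chained; writing
+        a value identical to the stored one is skipped entirely.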
+ + :param str key: Key to set the value for + :param value: Any JSON-serializable value to be set + """ + serialized = json.dumps(value) + + self.cursor.execute('select data from kv where key=?', [key]) + exists = self.cursor.fetchone() + + # Skip mutations to the same value + if exists: + if exists[0] == serialized: + return value + + if not exists: + self.cursor.execute( + 'insert into kv (key, data) values (?, ?)', + (key, serialized)) + else: + self.cursor.execute(''' + update kv + set data = ? + where key = ?''', [serialized, key]) + + # Save + if not self.revision: + return value + + self.cursor.execute( + 'select 1 from kv_revisions where key=? and revision=?', + [key, self.revision]) + exists = self.cursor.fetchone() + + if not exists: + self.cursor.execute( + '''insert into kv_revisions ( + revision, key, data) values (?, ?, ?)''', + (self.revision, key, serialized)) + else: + self.cursor.execute( + ''' + update kv_revisions + set data = ? + where key = ? + and revision = ?''', + [serialized, key, self.revision]) + + return value + + def delta(self, mapping, prefix): + """ + return a delta containing values that have changed. + """ + previous = self.getrange(prefix, strip=True) + if not previous: + pk = set() + else: + pk = set(previous.keys()) + ck = set(mapping.keys()) + delta = DeltaSet() + + # added + for k in ck.difference(pk): + delta[k] = Delta(None, mapping[k]) + + # removed + for k in pk.difference(ck): + delta[k] = Delta(previous[k], None) + + # changed + for k in pk.intersection(ck): + c = mapping[k] + p = previous[k] + if c != p: + delta[k] = Delta(p, c) + + return delta + + @contextlib.contextmanager + def hook_scope(self, name=""): + """Scope all future interactions to the current hook execution + revision.""" + assert not self.revision + self.cursor.execute( + 'insert into hooks (hook, date) values (?, ?)', + (name or sys.argv[0], + datetime.datetime.utcnow().isoformat())) + self.revision = self.cursor.lastrowid + try: + yield self.revision + self.revision = None + except Exception: + self.flush(False) + self.revision = None + raise + else: + self.flush() + + def flush(self, save=True): + if save: + self.conn.commit() + elif self._closed: + return + else: + self.conn.rollback() + + def _init(self): + self.cursor.execute(''' + create table if not exists kv ( + key text, + data text, + primary key (key) + )''') + self.cursor.execute(''' + create table if not exists kv_revisions ( + key text, + revision integer, + data text, + primary key (key, revision) + )''') + self.cursor.execute(''' + create table if not exists hooks ( + version integer primary key autoincrement, + hook text, + date text + )''') + self.conn.commit() + + def gethistory(self, key, deserialize=False): + self.cursor.execute( + ''' + select kv.revision, kv.key, kv.data, h.hook, h.date + from kv_revisions kv, + hooks h + where kv.key=? + and kv.revision = h.version + ''', [key]) + if deserialize is False: + return self.cursor.fetchall() + return map(_parse_history, self.cursor.fetchall()) + + def debug(self, fh=sys.stderr): + self.cursor.execute('select * from kv') + pprint.pprint(self.cursor.fetchall(), stream=fh) + self.cursor.execute('select * from kv_revisions') + pprint.pprint(self.cursor.fetchall(), stream=fh) + + +def _parse_history(d): + return (d[0], d[1], json.loads(d[2]), d[3], + datetime.datetime.strptime(d[-1], "%Y-%m-%dT%H:%M:%S.%f")) + + +class HookData(object): + """Simple integration for existing hook exec frameworks. 
+
+    Records all unit information, and stores deltas for processing
+    by the hook.
+
+    Sample::
+
+       from charmhelpers.core import hookenv, unitdata
+
+       changes = unitdata.HookData()
+       db = unitdata.kv()
+       hooks = hookenv.Hooks()
+
+       @hooks.hook
+       def config_changed():
+           # View all changes to configuration
+           for changed, (prev, cur) in changes.conf.items():
+               print('config changed', changed,
+                     'previous value', prev,
+                     'current value', cur)
+
+           # Get some unit-specific bookkeeping
+           if not db.get('pkg_key'):
+               key = urllib.urlopen('https://example.com/pkg_key').read()
+               db.set('pkg_key', key)
+
+       if __name__ == '__main__':
+           with changes():
+               hooks.execute()
+
+    """
+    def __init__(self):
+        self.kv = kv()
+        self.conf = None
+        self.rels = None
+
+    @contextlib.contextmanager
+    def __call__(self):
+        from charmhelpers.core import hookenv
+        hook_name = hookenv.hook_name()
+
+        with self.kv.hook_scope(hook_name):
+            self._record_charm_version(hookenv.charm_dir())
+            delta_config, delta_relation = self._record_hook(hookenv)
+            yield self.kv, delta_config, delta_relation
+
+    def _record_charm_version(self, charm_dir):
+        # Record revisions. Charm revisions are meaningless to charm
+        # authors, as they don't control the revision, so logic
+        # dependent on revision is not particularly useful; it is,
+        # however, useful for debugging analysis.
+        charm_rev = open(
+            os.path.join(charm_dir, 'revision')).read().strip()
+        charm_rev = charm_rev or '0'
+        revs = self.kv.get('charm_revisions', [])
+        if charm_rev not in revs:
+            revs.append(charm_rev.strip() or '0')
+            self.kv.set('charm_revisions', revs)
+
+    def _record_hook(self, hookenv):
+        data = hookenv.execution_environment()
+        self.conf = conf_delta = self.kv.delta(data['conf'], 'config')
+        self.rels = rels_delta = self.kv.delta(data['rels'], 'rels')
+        self.kv.set('env', dict(data['env']))
+        self.kv.set('unit', data['unit'])
+        self.kv.set('relid', data.get('relid'))
+        return conf_delta, rels_delta
+
+
+class Record(dict):
+
+    __slots__ = ()
+
+    def __getattr__(self, k):
+        if k in self:
+            return self[k]
+        raise AttributeError(k)
+
+
+class DeltaSet(Record):
+
+    __slots__ = ()
+
+
+Delta = collections.namedtuple('Delta', ['previous', 'current'])
+
+
+_KV = None
+
+
+def kv():
+    global _KV
+    if _KV is None:
+        _KV = Storage()
+    return _KV
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/fetch/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/fetch/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..0cc7fc850a0632568ad78aae9716be718c9ff6b5
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/fetch/__init__.py
@@ -0,0 +1,209 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
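An editorial aside before the fetch package body: a minimal, self-contained sketch of how the unitdata pieces above compose. This is not part of the diff; the ':memory:' path and the key names are illustrative only::

    from charmhelpers.core.unitdata import Storage

    # ':memory:' keeps sqlite off disk (intended for tests only).
    db = Storage(':memory:')
    with db.hook_scope('install'):
        db.set('x', 1)  # recorded against the 'install' hook revision

    # delta() only compares a mapping against stored state; persisting
    # the new values still requires update() followed by flush().
    changes = db.delta({'debug': True}, 'config.')
    assert changes.debug.previous is None
    assert changes.debug.current is True
    db.update({'debug': True}, 'config.')
    db.flush()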
+ +import importlib +from charmhelpers.osplatform import get_platform +from yaml import safe_load +from charmhelpers.core.hookenv import ( + config, + log, +) + +import six +if six.PY3: + from urllib.parse import urlparse, urlunparse +else: + from urlparse import urlparse, urlunparse + + +# The order of this list is very important. Handlers should be listed in from +# least- to most-specific URL matching. +FETCH_HANDLERS = ( + 'charmhelpers.fetch.archiveurl.ArchiveUrlFetchHandler', + 'charmhelpers.fetch.bzrurl.BzrUrlFetchHandler', + 'charmhelpers.fetch.giturl.GitUrlFetchHandler', +) + + +class SourceConfigError(Exception): + pass + + +class UnhandledSource(Exception): + pass + + +class AptLockError(Exception): + pass + + +class GPGKeyError(Exception): + """Exception occurs when a GPG key cannot be fetched or used. The message + indicates what the problem is. + """ + pass + + +class BaseFetchHandler(object): + + """Base class for FetchHandler implementations in fetch plugins""" + + def can_handle(self, source): + """Returns True if the source can be handled. Otherwise returns + a string explaining why it cannot""" + return "Wrong source type" + + def install(self, source): + """Try to download and unpack the source. Return the path to the + unpacked files or raise UnhandledSource.""" + raise UnhandledSource("Wrong source type {}".format(source)) + + def parse_url(self, url): + return urlparse(url) + + def base_url(self, url): + """Return url without querystring or fragment""" + parts = list(self.parse_url(url)) + parts[4:] = ['' for i in parts[4:]] + return urlunparse(parts) + + +__platform__ = get_platform() +module = "charmhelpers.fetch.%s" % __platform__ +fetch = importlib.import_module(module) + +filter_installed_packages = fetch.filter_installed_packages +filter_missing_packages = fetch.filter_missing_packages +install = fetch.apt_install +upgrade = fetch.apt_upgrade +update = _fetch_update = fetch.apt_update +purge = fetch.apt_purge +add_source = fetch.add_source + +if __platform__ == "ubuntu": + apt_cache = fetch.apt_cache + apt_install = fetch.apt_install + apt_update = fetch.apt_update + apt_upgrade = fetch.apt_upgrade + apt_purge = fetch.apt_purge + apt_autoremove = fetch.apt_autoremove + apt_mark = fetch.apt_mark + apt_hold = fetch.apt_hold + apt_unhold = fetch.apt_unhold + import_key = fetch.import_key + get_upstream_version = fetch.get_upstream_version + apt_pkg = fetch.ubuntu_apt_pkg + get_apt_dpkg_env = fetch.get_apt_dpkg_env +elif __platform__ == "centos": + yum_search = fetch.yum_search + + +def configure_sources(update=False, + sources_var='install_sources', + keys_var='install_keys'): + """Configure multiple sources from charm configuration. + + The lists are encoded as yaml fragments in the configuration. + The fragment needs to be included as a string. Sources and their + corresponding keys are of the types supported by add_source(). + + Example config: + install_sources: | + - "ppa:foo" + - "http://example.com/repo precise main" + install_keys: | + - null + - "a1b2c3d4" + + Note that 'null' (a.k.a. None) should not be quoted. 
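+
+    With the two config keys above populated, a single call wires up
+    all sources and refreshes the package index (sketch)::
+
+        configure_sources(update=True)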
+ """ + sources = safe_load((config(sources_var) or '').strip()) or [] + keys = safe_load((config(keys_var) or '').strip()) or None + + if isinstance(sources, six.string_types): + sources = [sources] + + if keys is None: + for source in sources: + add_source(source, None) + else: + if isinstance(keys, six.string_types): + keys = [keys] + + if len(sources) != len(keys): + raise SourceConfigError( + 'Install sources and keys lists are different lengths') + for source, key in zip(sources, keys): + add_source(source, key) + if update: + _fetch_update(fatal=True) + + +def install_remote(source, *args, **kwargs): + """Install a file tree from a remote source. + + The specified source should be a url of the form: + scheme://[host]/path[#[option=value][&...]] + + Schemes supported are based on this modules submodules. + Options supported are submodule-specific. + Additional arguments are passed through to the submodule. + + For example:: + + dest = install_remote('http://example.com/archive.tgz', + checksum='deadbeef', + hash_type='sha1') + + This will download `archive.tgz`, validate it using SHA1 and, if + the file is ok, extract it and return the directory in which it + was extracted. If the checksum fails, it will raise + :class:`charmhelpers.core.host.ChecksumError`. + """ + # We ONLY check for True here because can_handle may return a string + # explaining why it can't handle a given source. + handlers = [h for h in plugins() if h.can_handle(source) is True] + for handler in handlers: + try: + return handler.install(source, *args, **kwargs) + except UnhandledSource as e: + log('Install source attempt unsuccessful: {}'.format(e), + level='WARNING') + raise UnhandledSource("No handler found for source {}".format(source)) + + +def install_from_config(config_var_name): + """Install a file from config.""" + charm_config = config() + source = charm_config[config_var_name] + return install_remote(source) + + +def plugins(fetch_handlers=None): + if not fetch_handlers: + fetch_handlers = FETCH_HANDLERS + plugin_list = [] + for handler_name in fetch_handlers: + package, classname = handler_name.rsplit('.', 1) + try: + handler_class = getattr( + importlib.import_module(package), + classname) + plugin_list.append(handler_class()) + except NotImplementedError: + # Skip missing plugins so that they can be ommitted from + # installation if desired + log("FetchHandler {} not found, skipping plugin".format( + handler_name)) + return plugin_list diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/fetch/archiveurl.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/fetch/archiveurl.py new file mode 100644 index 0000000000000000000000000000000000000000..d25587adeff102c3fc9e402f98746fccbd8a3693 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/fetch/archiveurl.py @@ -0,0 +1,165 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +import os +import hashlib +import re + +from charmhelpers.fetch import ( + BaseFetchHandler, + UnhandledSource +) +from charmhelpers.payload.archive import ( + get_archive_handler, + extract, +) +from charmhelpers.core.host import mkdir, check_hash + +import six +if six.PY3: + from urllib.request import ( + build_opener, install_opener, urlopen, urlretrieve, + HTTPPasswordMgrWithDefaultRealm, HTTPBasicAuthHandler, + ) + from urllib.parse import urlparse, urlunparse, parse_qs + from urllib.error import URLError +else: + from urllib import urlretrieve + from urllib2 import ( + build_opener, install_opener, urlopen, + HTTPPasswordMgrWithDefaultRealm, HTTPBasicAuthHandler, + URLError + ) + from urlparse import urlparse, urlunparse, parse_qs + + +def splituser(host): + '''urllib.splituser(), but six's support of this seems broken''' + _userprog = re.compile('^(.*)@(.*)$') + match = _userprog.match(host) + if match: + return match.group(1, 2) + return None, host + + +def splitpasswd(user): + '''urllib.splitpasswd(), but six's support of this is missing''' + _passwdprog = re.compile('^([^:]*):(.*)$', re.S) + match = _passwdprog.match(user) + if match: + return match.group(1, 2) + return user, None + + +class ArchiveUrlFetchHandler(BaseFetchHandler): + """ + Handler to download archive files from arbitrary URLs. + + Can fetch from http, https, ftp, and file URLs. + + Can install either tarballs (.tar, .tgz, .tbz2, etc) or zip files. + + Installs the contents of the archive in $CHARM_DIR/fetched/. + """ + def can_handle(self, source): + url_parts = self.parse_url(source) + if url_parts.scheme not in ('http', 'https', 'ftp', 'file'): + # XXX: Why is this returning a boolean and a string? It's + # doomed to fail since "bool(can_handle('foo://'))" will be True. + return "Wrong source type" + if get_archive_handler(self.base_url(source)): + return True + return False + + def download(self, source, dest): + """ + Download an archive file. + + :param str source: URL pointing to an archive file. + :param str dest: Local path location to download archive file to. + """ + # propagate all exceptions + # URLError, OSError, etc + proto, netloc, path, params, query, fragment = urlparse(source) + if proto in ('http', 'https'): + auth, barehost = splituser(netloc) + if auth is not None: + source = urlunparse((proto, barehost, path, params, query, fragment)) + username, password = splitpasswd(auth) + passman = HTTPPasswordMgrWithDefaultRealm() + # Realm is set to None in add_password to force the username and password + # to be used whatever the realm + passman.add_password(None, source, username, password) + authhandler = HTTPBasicAuthHandler(passman) + opener = build_opener(authhandler) + install_opener(opener) + response = urlopen(source) + try: + with open(dest, 'wb') as dest_file: + dest_file.write(response.read()) + except Exception as e: + if os.path.isfile(dest): + os.unlink(dest) + raise e + + # Mandatory file validation via Sha1 or MD5 hashing. + def download_and_validate(self, url, hashsum, validate="sha1"): + tempfile, headers = urlretrieve(url) + check_hash(tempfile, hashsum, validate) + return tempfile + + def install(self, source, dest=None, checksum=None, hash_type='sha1'): + """ + Download and install an archive file, with optional checksum validation. + + The checksum can also be given on the `source` URL's fragment. + For example:: + + handler.install('http://example.com/file.tgz#sha1=deadbeef') + + :param str source: URL pointing to an archive file. 
+ :param str dest: Local destination path to install to. If not given, + installs to `$CHARM_DIR/archives/archive_file_name`. + :param str checksum: If given, validate the archive file after download. + :param str hash_type: Algorithm used to generate `checksum`. + Can be any hash alrgorithm supported by :mod:`hashlib`, + such as md5, sha1, sha256, sha512, etc. + + """ + url_parts = self.parse_url(source) + dest_dir = os.path.join(os.environ.get('CHARM_DIR'), 'fetched') + if not os.path.exists(dest_dir): + mkdir(dest_dir, perms=0o755) + dld_file = os.path.join(dest_dir, os.path.basename(url_parts.path)) + try: + self.download(source, dld_file) + except URLError as e: + raise UnhandledSource(e.reason) + except OSError as e: + raise UnhandledSource(e.strerror) + options = parse_qs(url_parts.fragment) + for key, value in options.items(): + if not six.PY3: + algorithms = hashlib.algorithms + else: + algorithms = hashlib.algorithms_available + if key in algorithms: + if len(value) != 1: + raise TypeError( + "Expected 1 hash value, not %d" % len(value)) + expected = value[0] + check_hash(dld_file, expected, key) + if checksum: + check_hash(dld_file, checksum, hash_type) + return extract(dld_file, dest) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/fetch/bzrurl.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/fetch/bzrurl.py new file mode 100644 index 0000000000000000000000000000000000000000..c4ab3ff1e6bc7dde24e8ed568a3dc0c6012ddea6 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/fetch/bzrurl.py @@ -0,0 +1,76 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
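A sketch of the two checksum paths archiveurl.py just defined; the URL and digest are placeholders, and CHARM_DIR must be set, as it is during a hook run::

    from charmhelpers.fetch.archiveurl import ArchiveUrlFetchHandler

    handler = ArchiveUrlFetchHandler()

    # Fragment form: install() checks every recognised hash algorithm
    # named in the URL fragment before extracting.
    handler.install('http://example.com/app.tgz#sha1=0123456789abcdef')

    # Argument form: the digest and algorithm are passed explicitly.
    handler.install('http://example.com/app.tgz',
                    checksum='0123456789abcdef', hash_type='sha1')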
+ +import os +from subprocess import STDOUT, check_output +from charmhelpers.fetch import ( + BaseFetchHandler, + UnhandledSource, + filter_installed_packages, + install, +) +from charmhelpers.core.host import mkdir + + +if filter_installed_packages(['bzr']) != []: + install(['bzr']) + if filter_installed_packages(['bzr']) != []: + raise NotImplementedError('Unable to install bzr') + + +class BzrUrlFetchHandler(BaseFetchHandler): + """Handler for bazaar branches via generic and lp URLs.""" + + def can_handle(self, source): + url_parts = self.parse_url(source) + if url_parts.scheme not in ('bzr+ssh', 'lp', ''): + return False + elif not url_parts.scheme: + return os.path.exists(os.path.join(source, '.bzr')) + else: + return True + + def branch(self, source, dest, revno=None): + if not self.can_handle(source): + raise UnhandledSource("Cannot handle {}".format(source)) + cmd_opts = [] + if revno: + cmd_opts += ['-r', str(revno)] + if os.path.exists(dest): + cmd = ['bzr', 'pull'] + cmd += cmd_opts + cmd += ['--overwrite', '-d', dest, source] + else: + cmd = ['bzr', 'branch'] + cmd += cmd_opts + cmd += [source, dest] + check_output(cmd, stderr=STDOUT) + + def install(self, source, dest=None, revno=None): + url_parts = self.parse_url(source) + branch_name = url_parts.path.strip("/").split("/")[-1] + if dest: + dest_dir = os.path.join(dest, branch_name) + else: + dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched", + branch_name) + + if dest and not os.path.exists(dest): + mkdir(dest, perms=0o755) + + try: + self.branch(source, dest_dir, revno) + except OSError as e: + raise UnhandledSource(e.strerror) + return dest_dir diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/fetch/centos.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/fetch/centos.py new file mode 100644 index 0000000000000000000000000000000000000000..a91dcff0645ed541a79cd72af3112bdff393719a --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/fetch/centos.py @@ -0,0 +1,171 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import subprocess +import os +import time +import six +import yum + +from tempfile import NamedTemporaryFile +from charmhelpers.core.hookenv import log + +YUM_NO_LOCK = 1 # The return code for "couldn't acquire lock" in YUM. +YUM_NO_LOCK_RETRY_DELAY = 10 # Wait 10 seconds between apt lock checks. +YUM_NO_LOCK_RETRY_COUNT = 30 # Retry to acquire the lock X times. 
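The functions that follow all funnel through _run_yum_command, which uses these three constants to retry while the yum lock is held. A usage sketch, assuming a CentOS unit with the python yum bindings installed::

    from charmhelpers.fetch.centos import filter_installed_packages, install

    # Install only what is actually missing; fatal=True enables the
    # lock-retry loop (up to YUM_NO_LOCK_RETRY_COUNT attempts).
    missing = filter_installed_packages(['epel-release', 'nginx'])
    if missing:
        install(missing, fatal=True)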
+ + +def filter_installed_packages(packages): + """Return a list of packages that require installation.""" + yb = yum.YumBase() + package_list = yb.doPackageLists() + temp_cache = {p.base_package_name: 1 for p in package_list['installed']} + + _pkgs = [p for p in packages if not temp_cache.get(p, False)] + return _pkgs + + +def install(packages, options=None, fatal=False): + """Install one or more packages.""" + cmd = ['yum', '--assumeyes'] + if options is not None: + cmd.extend(options) + cmd.append('install') + if isinstance(packages, six.string_types): + cmd.append(packages) + else: + cmd.extend(packages) + log("Installing {} with options: {}".format(packages, + options)) + _run_yum_command(cmd, fatal) + + +def upgrade(options=None, fatal=False, dist=False): + """Upgrade all packages.""" + cmd = ['yum', '--assumeyes'] + if options is not None: + cmd.extend(options) + cmd.append('upgrade') + log("Upgrading with options: {}".format(options)) + _run_yum_command(cmd, fatal) + + +def update(fatal=False): + """Update local yum cache.""" + cmd = ['yum', '--assumeyes', 'update'] + log("Update with fatal: {}".format(fatal)) + _run_yum_command(cmd, fatal) + + +def purge(packages, fatal=False): + """Purge one or more packages.""" + cmd = ['yum', '--assumeyes', 'remove'] + if isinstance(packages, six.string_types): + cmd.append(packages) + else: + cmd.extend(packages) + log("Purging {}".format(packages)) + _run_yum_command(cmd, fatal) + + +def yum_search(packages): + """Search for a package.""" + output = {} + cmd = ['yum', 'search'] + if isinstance(packages, six.string_types): + cmd.append(packages) + else: + cmd.extend(packages) + log("Searching for {}".format(packages)) + result = subprocess.check_output(cmd) + for package in list(packages): + output[package] = package in result + return output + + +def add_source(source, key=None): + """Add a package source to this system. + + @param source: a URL with a rpm package + + @param key: A key to be added to the system's keyring and used + to verify the signatures on packages. Ideally, this should be an + ASCII format GPG public key including the block headers. A GPG key + id may also be used, but be aware that only insecure protocols are + available to retrieve the actual public key from a public keyserver + placing your Juju environment at risk. + """ + if source is None: + log('Source is not present. Skipping') + return + + if source.startswith('http'): + directory = '/etc/yum.repos.d/' + for filename in os.listdir(directory): + with open(directory + filename, 'r') as rpm_file: + if source in rpm_file.read(): + break + else: + log("Add source: {!r}".format(source)) + # write in the charms.repo + with open(directory + 'Charms.repo', 'a') as rpm_file: + rpm_file.write('[%s]\n' % source[7:].replace('/', '_')) + rpm_file.write('name=%s\n' % source[7:]) + rpm_file.write('baseurl=%s\n\n' % source) + else: + log("Unknown source: {!r}".format(source)) + + if key: + if '-----BEGIN PGP PUBLIC KEY BLOCK-----' in key: + with NamedTemporaryFile('w+') as key_file: + key_file.write(key) + key_file.flush() + key_file.seek(0) + subprocess.check_call(['rpm', '--import', key_file.name]) + else: + subprocess.check_call(['rpm', '--import', key]) + + +def _run_yum_command(cmd, fatal=False): + """Run an YUM command. + + Checks the output and retry if the fatal flag is set to True. + + :param: cmd: str: The yum command to run. + :param: fatal: bool: Whether the command's output should be checked and + retried. 
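+
+    Sketch::
+
+        _run_yum_command(['yum', '--assumeyes', 'install', 'nginx'],
+                         fatal=True)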
+ """ + env = os.environ.copy() + + if fatal: + retry_count = 0 + result = None + + # If the command is considered "fatal", we need to retry if the yum + # lock was not acquired. + + while result is None or result == YUM_NO_LOCK: + try: + result = subprocess.check_call(cmd, env=env) + except subprocess.CalledProcessError as e: + retry_count = retry_count + 1 + if retry_count > YUM_NO_LOCK_RETRY_COUNT: + raise + result = e.returncode + log("Couldn't acquire YUM lock. Will retry in {} seconds." + "".format(YUM_NO_LOCK_RETRY_DELAY)) + time.sleep(YUM_NO_LOCK_RETRY_DELAY) + + else: + subprocess.call(cmd, env=env) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/fetch/giturl.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/fetch/giturl.py new file mode 100644 index 0000000000000000000000000000000000000000..070ca9bb5c1a2fdef39f88606ffcaf39bb049410 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/fetch/giturl.py @@ -0,0 +1,69 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import os +from subprocess import check_output, CalledProcessError, STDOUT +from charmhelpers.fetch import ( + BaseFetchHandler, + UnhandledSource, + filter_installed_packages, + install, +) + +if filter_installed_packages(['git']) != []: + install(['git']) + if filter_installed_packages(['git']) != []: + raise NotImplementedError('Unable to install git') + + +class GitUrlFetchHandler(BaseFetchHandler): + """Handler for git branches via generic and github URLs.""" + + def can_handle(self, source): + url_parts = self.parse_url(source) + # TODO (mattyw) no support for ssh git@ yet + if url_parts.scheme not in ('http', 'https', 'git', ''): + return False + elif not url_parts.scheme: + return os.path.exists(os.path.join(source, '.git')) + else: + return True + + def clone(self, source, dest, branch="master", depth=None): + if not self.can_handle(source): + raise UnhandledSource("Cannot handle {}".format(source)) + + if os.path.exists(dest): + cmd = ['git', '-C', dest, 'pull', source, branch] + else: + cmd = ['git', 'clone', source, dest, '--branch', branch] + if depth: + cmd.extend(['--depth', depth]) + check_output(cmd, stderr=STDOUT) + + def install(self, source, branch="master", dest=None, depth=None): + url_parts = self.parse_url(source) + branch_name = url_parts.path.strip("/").split("/")[-1] + if dest: + dest_dir = os.path.join(dest, branch_name) + else: + dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched", + branch_name) + try: + self.clone(source, dest_dir, branch, depth) + except CalledProcessError as e: + raise UnhandledSource(e) + except OSError as e: + raise UnhandledSource(e.strerror) + return dest_dir diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/fetch/python/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/fetch/python/__init__.py new file mode 100644 index 
0000000000000000000000000000000000000000..bff99dc93c64f80716e2d5a2b6d0d4e8a2436955 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/fetch/python/__init__.py @@ -0,0 +1,13 @@ +# Copyright 2014-2019 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/fetch/python/debug.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/fetch/python/debug.py new file mode 100644 index 0000000000000000000000000000000000000000..757135ee4cf3b5ff4c02305126f5ca3940892afc --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/fetch/python/debug.py @@ -0,0 +1,54 @@ +#!/usr/bin/env python +# coding: utf-8 + +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +from __future__ import print_function + +import atexit +import sys + +from charmhelpers.fetch.python.rpdb import Rpdb +from charmhelpers.core.hookenv import ( + open_port, + close_port, + ERROR, + log +) + +__author__ = "Jorge Niedbalski " + +DEFAULT_ADDR = "0.0.0.0" +DEFAULT_PORT = 4444 + + +def _error(message): + log(message, level=ERROR) + + +def set_trace(addr=DEFAULT_ADDR, port=DEFAULT_PORT): + """ + Set a trace point using the remote debugger + """ + atexit.register(close_port, port) + try: + log("Starting a remote python debugger session on %s:%s" % (addr, + port)) + open_port(port) + debugger = Rpdb(addr=addr, port=port) + debugger.set_trace(sys._getframe().f_back) + except Exception: + _error("Cannot start a remote debug session on %s:%s" % (addr, + port)) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/fetch/python/packages.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/fetch/python/packages.py new file mode 100644 index 0000000000000000000000000000000000000000..6e95028bc540aace84a2ec6c1bcc4de2663e8a87 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/fetch/python/packages.py @@ -0,0 +1,154 @@ +#!/usr/bin/env python +# coding: utf-8 + +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import os +import six +import subprocess +import sys + +from charmhelpers.fetch import apt_install, apt_update +from charmhelpers.core.hookenv import charm_dir, log + +__author__ = "Jorge Niedbalski " + + +def pip_execute(*args, **kwargs): + """Overriden pip_execute() to stop sys.path being changed. + + The act of importing main from the pip module seems to cause add wheels + from the /usr/share/python-wheels which are installed by various tools. + This function ensures that sys.path remains the same after the call is + executed. + """ + try: + _path = sys.path + try: + from pip import main as _pip_execute + except ImportError: + apt_update() + if six.PY2: + apt_install('python-pip') + else: + apt_install('python3-pip') + from pip import main as _pip_execute + _pip_execute(*args, **kwargs) + finally: + sys.path = _path + + +def parse_options(given, available): + """Given a set of options, check if available""" + for key, value in sorted(given.items()): + if not value: + continue + if key in available: + yield "--{0}={1}".format(key, value) + + +def pip_install_requirements(requirements, constraints=None, **options): + """Install a requirements file. + + :param constraints: Path to pip constraints file. + http://pip.readthedocs.org/en/stable/user_guide/#constraints-files + """ + command = ["install"] + + available_options = ('proxy', 'src', 'log', ) + for option in parse_options(options, available_options): + command.append(option) + + command.append("-r {0}".format(requirements)) + if constraints: + command.append("-c {0}".format(constraints)) + log("Installing from file: {} with constraints {} " + "and options: {}".format(requirements, constraints, command)) + else: + log("Installing from file: {} with options: {}".format(requirements, + command)) + pip_execute(command) + + +def pip_install(package, fatal=False, upgrade=False, venv=None, + constraints=None, **options): + """Install a python package""" + if venv: + venv_python = os.path.join(venv, 'bin/pip') + command = [venv_python, "install"] + else: + command = ["install"] + + available_options = ('proxy', 'src', 'log', 'index-url', ) + for option in parse_options(options, available_options): + command.append(option) + + if upgrade: + command.append('--upgrade') + + if constraints: + command.extend(['-c', constraints]) + + if isinstance(package, list): + command.extend(package) + else: + command.append(package) + + log("Installing {} package with options: {}".format(package, + command)) + if venv: + subprocess.check_call(command) + else: + pip_execute(command) + + +def pip_uninstall(package, **options): + """Uninstall a python package""" + command = ["uninstall", "-q", "-y"] + + available_options = ('proxy', 'log', ) + for option in parse_options(options, available_options): + command.append(option) + + if isinstance(package, list): + command.extend(package) + else: + command.append(package) + + log("Uninstalling {} package with options: {}".format(package, + command)) + pip_execute(command) + + +def pip_list(): + """Returns the list of current python installed packages + """ + return pip_execute(["list"]) + + +def 
pip_create_virtualenv(path=None): + """Create an isolated Python environment.""" + if six.PY2: + apt_install('python-virtualenv') + else: + apt_install('python3-virtualenv') + + if path: + venv_path = path + else: + venv_path = os.path.join(charm_dir(), 'venv') + + if not os.path.exists(venv_path): + subprocess.check_call(['virtualenv', venv_path]) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/fetch/python/rpdb.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/fetch/python/rpdb.py new file mode 100644 index 0000000000000000000000000000000000000000..9b31610c22fc2d24fe5097016cf45728f87de4ae --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/fetch/python/rpdb.py @@ -0,0 +1,56 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +"""Remote Python Debugger (pdb wrapper).""" + +import pdb +import socket +import sys + +__author__ = "Bertrand Janin " +__version__ = "0.1.3" + + +class Rpdb(pdb.Pdb): + + def __init__(self, addr="127.0.0.1", port=4444): + """Initialize the socket and initialize pdb.""" + + # Backup stdin and stdout before replacing them by the socket handle + self.old_stdout = sys.stdout + self.old_stdin = sys.stdin + + # Open a 'reusable' socket to let the webapp reload on the same port + self.skt = socket.socket(socket.AF_INET, socket.SOCK_STREAM) + self.skt.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, True) + self.skt.bind((addr, port)) + self.skt.listen(1) + (clientsocket, address) = self.skt.accept() + handle = clientsocket.makefile('rw') + pdb.Pdb.__init__(self, completekey='tab', stdin=handle, stdout=handle) + sys.stdout = sys.stdin = handle + + def shutdown(self): + """Revert stdin and stdout, close the socket.""" + sys.stdout = self.old_stdout + sys.stdin = self.old_stdin + self.skt.close() + self.set_continue() + + def do_continue(self, arg): + """Stop all operation on ``continue``.""" + self.shutdown() + return 1 + + do_EOF = do_quit = do_exit = do_c = do_cont = do_continue diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/fetch/python/version.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/fetch/python/version.py new file mode 100644 index 0000000000000000000000000000000000000000..3eb421036ff737f8ff1684e85ff87703e30fe543 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/fetch/python/version.py @@ -0,0 +1,32 @@ +#!/usr/bin/env python +# coding: utf-8 + +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import sys
+
+__author__ = "Jorge Niedbalski "
+
+
+def current_version():
+    """Current system python version"""
+    return sys.version_info
+
+
+def current_version_string():
+    """Current system python version as string major.minor.micro"""
+    return "{0}.{1}.{2}".format(sys.version_info.major,
+                                sys.version_info.minor,
+                                sys.version_info.micro)
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/fetch/snap.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/fetch/snap.py
new file mode 100644
index 0000000000000000000000000000000000000000..fc70aa941bc4f0bb5ff126237db65705b9e4a10a
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/fetch/snap.py
@@ -0,0 +1,150 @@
+# Copyright 2014-2017 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+"""
+Charm helpers snap for classic charms.
+
+If writing reactive charms, use the snap layer:
+https://lists.ubuntu.com/archives/snapcraft/2016-September/001114.html
+"""
+import subprocess
+import os
+from time import sleep
+from charmhelpers.core.hookenv import log
+
+__author__ = 'Joseph Borg '
+
+# The return code for "couldn't acquire lock" in Snap
+# (hopefully this will be improved).
+SNAP_NO_LOCK = 1
+SNAP_NO_LOCK_RETRY_DELAY = 10  # Wait X seconds between Snap lock checks.
+SNAP_NO_LOCK_RETRY_COUNT = 30  # Retry to acquire the lock X times.
+SNAP_CHANNELS = [
+    'edge',
+    'beta',
+    'candidate',
+    'stable',
+]
+
+
+class CouldNotAcquireLockException(Exception):
+    pass
+
+
+class InvalidSnapChannel(Exception):
+    pass
+
+
+def _snap_exec(commands):
+    """
+    Execute snap commands.
+
+    :param commands: List commands
+    :return: Integer exit code
+    """
+    assert type(commands) == list
+
+    retry_count = 0
+    return_code = None
+
+    while return_code is None or return_code == SNAP_NO_LOCK:
+        try:
+            return_code = subprocess.check_call(['snap'] + commands,
+                                                env=os.environ)
+        except subprocess.CalledProcessError as e:
+            retry_count += 1
+            if retry_count > SNAP_NO_LOCK_RETRY_COUNT:
+                raise CouldNotAcquireLockException(
+                    'Could not acquire lock after {} attempts'
+                    .format(SNAP_NO_LOCK_RETRY_COUNT))
+            return_code = e.returncode
+            log('Snap failed to acquire lock, trying again in {} seconds.'
+                .format(SNAP_NO_LOCK_RETRY_DELAY), level='WARN')
+            sleep(SNAP_NO_LOCK_RETRY_DELAY)
+
+    return return_code
+
+
+def snap_install(packages, *flags):
+    """
+    Install a snap package.
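+
+    Sketch (flags are passed through to ``snap install``)::
+
+        snap_install('juju', '--classic', '--channel=stable')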
+ + :param packages: String or List String package name + :param flags: List String flags to pass to install command + :return: Integer return code from snap + """ + if type(packages) is not list: + packages = [packages] + + flags = list(flags) + + message = 'Installing snap(s) "%s"' % ', '.join(packages) + if flags: + message += ' with option(s) "%s"' % ', '.join(flags) + + log(message, level='INFO') + return _snap_exec(['install'] + flags + packages) + + +def snap_remove(packages, *flags): + """ + Remove a snap package. + + :param packages: String or List String package name + :param flags: List String flags to pass to remove command + :return: Integer return code from snap + """ + if type(packages) is not list: + packages = [packages] + + flags = list(flags) + + message = 'Removing snap(s) "%s"' % ', '.join(packages) + if flags: + message += ' with options "%s"' % ', '.join(flags) + + log(message, level='INFO') + return _snap_exec(['remove'] + flags + packages) + + +def snap_refresh(packages, *flags): + """ + Refresh / Update snap package. + + :param packages: String or List String package name + :param flags: List String flags to pass to refresh command + :return: Integer return code from snap + """ + if type(packages) is not list: + packages = [packages] + + flags = list(flags) + + message = 'Refreshing snap(s) "%s"' % ', '.join(packages) + if flags: + message += ' with options "%s"' % ', '.join(flags) + + log(message, level='INFO') + return _snap_exec(['refresh'] + flags + packages) + + +def valid_snap_channel(channel): + """ Validate snap channel exists + + :raises InvalidSnapChannel: When channel does not exist + :return: Boolean + """ + if channel.lower() in SNAP_CHANNELS: + return True + else: + raise InvalidSnapChannel("Invalid Snap Channel: {}".format(channel)) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/fetch/ubuntu.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/fetch/ubuntu.py new file mode 100644 index 0000000000000000000000000000000000000000..3ddaf0dd47f23cef60d7b7e59a83c989999d9f3f --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/fetch/ubuntu.py @@ -0,0 +1,805 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +from collections import OrderedDict +import platform +import re +import six +import subprocess +import sys +import time + +from charmhelpers.core.host import get_distrib_codename, get_system_env + +from charmhelpers.core.hookenv import ( + log, + DEBUG, + WARNING, + env_proxy_settings, +) +from charmhelpers.fetch import SourceConfigError, GPGKeyError +from charmhelpers.fetch import ubuntu_apt_pkg + +PROPOSED_POCKET = ( + "# Proposed\n" + "deb http://archive.ubuntu.com/ubuntu {}-proposed main universe " + "multiverse restricted\n") +PROPOSED_PORTS_POCKET = ( + "# Proposed\n" + "deb http://ports.ubuntu.com/ubuntu-ports {}-proposed main universe " + "multiverse restricted\n") +# Only supports 64bit and ppc64 at the moment. 
+ARCH_TO_PROPOSED_POCKET = { + 'x86_64': PROPOSED_POCKET, + 'ppc64le': PROPOSED_PORTS_POCKET, + 'aarch64': PROPOSED_PORTS_POCKET, + 's390x': PROPOSED_PORTS_POCKET, +} +CLOUD_ARCHIVE_URL = "http://ubuntu-cloud.archive.canonical.com/ubuntu" +CLOUD_ARCHIVE_KEY_ID = '5EDB1B62EC4926EA' +CLOUD_ARCHIVE = """# Ubuntu Cloud Archive +deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main +""" +CLOUD_ARCHIVE_POCKETS = { + # Folsom + 'folsom': 'precise-updates/folsom', + 'folsom/updates': 'precise-updates/folsom', + 'precise-folsom': 'precise-updates/folsom', + 'precise-folsom/updates': 'precise-updates/folsom', + 'precise-updates/folsom': 'precise-updates/folsom', + 'folsom/proposed': 'precise-proposed/folsom', + 'precise-folsom/proposed': 'precise-proposed/folsom', + 'precise-proposed/folsom': 'precise-proposed/folsom', + # Grizzly + 'grizzly': 'precise-updates/grizzly', + 'grizzly/updates': 'precise-updates/grizzly', + 'precise-grizzly': 'precise-updates/grizzly', + 'precise-grizzly/updates': 'precise-updates/grizzly', + 'precise-updates/grizzly': 'precise-updates/grizzly', + 'grizzly/proposed': 'precise-proposed/grizzly', + 'precise-grizzly/proposed': 'precise-proposed/grizzly', + 'precise-proposed/grizzly': 'precise-proposed/grizzly', + # Havana + 'havana': 'precise-updates/havana', + 'havana/updates': 'precise-updates/havana', + 'precise-havana': 'precise-updates/havana', + 'precise-havana/updates': 'precise-updates/havana', + 'precise-updates/havana': 'precise-updates/havana', + 'havana/proposed': 'precise-proposed/havana', + 'precise-havana/proposed': 'precise-proposed/havana', + 'precise-proposed/havana': 'precise-proposed/havana', + # Icehouse + 'icehouse': 'precise-updates/icehouse', + 'icehouse/updates': 'precise-updates/icehouse', + 'precise-icehouse': 'precise-updates/icehouse', + 'precise-icehouse/updates': 'precise-updates/icehouse', + 'precise-updates/icehouse': 'precise-updates/icehouse', + 'icehouse/proposed': 'precise-proposed/icehouse', + 'precise-icehouse/proposed': 'precise-proposed/icehouse', + 'precise-proposed/icehouse': 'precise-proposed/icehouse', + # Juno + 'juno': 'trusty-updates/juno', + 'juno/updates': 'trusty-updates/juno', + 'trusty-juno': 'trusty-updates/juno', + 'trusty-juno/updates': 'trusty-updates/juno', + 'trusty-updates/juno': 'trusty-updates/juno', + 'juno/proposed': 'trusty-proposed/juno', + 'trusty-juno/proposed': 'trusty-proposed/juno', + 'trusty-proposed/juno': 'trusty-proposed/juno', + # Kilo + 'kilo': 'trusty-updates/kilo', + 'kilo/updates': 'trusty-updates/kilo', + 'trusty-kilo': 'trusty-updates/kilo', + 'trusty-kilo/updates': 'trusty-updates/kilo', + 'trusty-updates/kilo': 'trusty-updates/kilo', + 'kilo/proposed': 'trusty-proposed/kilo', + 'trusty-kilo/proposed': 'trusty-proposed/kilo', + 'trusty-proposed/kilo': 'trusty-proposed/kilo', + # Liberty + 'liberty': 'trusty-updates/liberty', + 'liberty/updates': 'trusty-updates/liberty', + 'trusty-liberty': 'trusty-updates/liberty', + 'trusty-liberty/updates': 'trusty-updates/liberty', + 'trusty-updates/liberty': 'trusty-updates/liberty', + 'liberty/proposed': 'trusty-proposed/liberty', + 'trusty-liberty/proposed': 'trusty-proposed/liberty', + 'trusty-proposed/liberty': 'trusty-proposed/liberty', + # Mitaka + 'mitaka': 'trusty-updates/mitaka', + 'mitaka/updates': 'trusty-updates/mitaka', + 'trusty-mitaka': 'trusty-updates/mitaka', + 'trusty-mitaka/updates': 'trusty-updates/mitaka', + 'trusty-updates/mitaka': 'trusty-updates/mitaka', + 'mitaka/proposed': 'trusty-proposed/mitaka', + 
'trusty-mitaka/proposed': 'trusty-proposed/mitaka', + 'trusty-proposed/mitaka': 'trusty-proposed/mitaka', + # Newton + 'newton': 'xenial-updates/newton', + 'newton/updates': 'xenial-updates/newton', + 'xenial-newton': 'xenial-updates/newton', + 'xenial-newton/updates': 'xenial-updates/newton', + 'xenial-updates/newton': 'xenial-updates/newton', + 'newton/proposed': 'xenial-proposed/newton', + 'xenial-newton/proposed': 'xenial-proposed/newton', + 'xenial-proposed/newton': 'xenial-proposed/newton', + # Ocata + 'ocata': 'xenial-updates/ocata', + 'ocata/updates': 'xenial-updates/ocata', + 'xenial-ocata': 'xenial-updates/ocata', + 'xenial-ocata/updates': 'xenial-updates/ocata', + 'xenial-updates/ocata': 'xenial-updates/ocata', + 'ocata/proposed': 'xenial-proposed/ocata', + 'xenial-ocata/proposed': 'xenial-proposed/ocata', + 'xenial-proposed/ocata': 'xenial-proposed/ocata', + # Pike + 'pike': 'xenial-updates/pike', + 'xenial-pike': 'xenial-updates/pike', + 'xenial-pike/updates': 'xenial-updates/pike', + 'xenial-updates/pike': 'xenial-updates/pike', + 'pike/proposed': 'xenial-proposed/pike', + 'xenial-pike/proposed': 'xenial-proposed/pike', + 'xenial-proposed/pike': 'xenial-proposed/pike', + # Queens + 'queens': 'xenial-updates/queens', + 'xenial-queens': 'xenial-updates/queens', + 'xenial-queens/updates': 'xenial-updates/queens', + 'xenial-updates/queens': 'xenial-updates/queens', + 'queens/proposed': 'xenial-proposed/queens', + 'xenial-queens/proposed': 'xenial-proposed/queens', + 'xenial-proposed/queens': 'xenial-proposed/queens', + # Rocky + 'rocky': 'bionic-updates/rocky', + 'bionic-rocky': 'bionic-updates/rocky', + 'bionic-rocky/updates': 'bionic-updates/rocky', + 'bionic-updates/rocky': 'bionic-updates/rocky', + 'rocky/proposed': 'bionic-proposed/rocky', + 'bionic-rocky/proposed': 'bionic-proposed/rocky', + 'bionic-proposed/rocky': 'bionic-proposed/rocky', + # Stein + 'stein': 'bionic-updates/stein', + 'bionic-stein': 'bionic-updates/stein', + 'bionic-stein/updates': 'bionic-updates/stein', + 'bionic-updates/stein': 'bionic-updates/stein', + 'stein/proposed': 'bionic-proposed/stein', + 'bionic-stein/proposed': 'bionic-proposed/stein', + 'bionic-proposed/stein': 'bionic-proposed/stein', + # Train + 'train': 'bionic-updates/train', + 'bionic-train': 'bionic-updates/train', + 'bionic-train/updates': 'bionic-updates/train', + 'bionic-updates/train': 'bionic-updates/train', + 'train/proposed': 'bionic-proposed/train', + 'bionic-train/proposed': 'bionic-proposed/train', + 'bionic-proposed/train': 'bionic-proposed/train', + # Ussuri + 'ussuri': 'bionic-updates/ussuri', + 'bionic-ussuri': 'bionic-updates/ussuri', + 'bionic-ussuri/updates': 'bionic-updates/ussuri', + 'bionic-updates/ussuri': 'bionic-updates/ussuri', + 'ussuri/proposed': 'bionic-proposed/ussuri', + 'bionic-ussuri/proposed': 'bionic-proposed/ussuri', + 'bionic-proposed/ussuri': 'bionic-proposed/ussuri', +} + + +APT_NO_LOCK = 100 # The return code for "couldn't acquire lock" in APT. +CMD_RETRY_DELAY = 10 # Wait 10 seconds between command retries. +CMD_RETRY_COUNT = 3 # Retry a failing fatal command X times. 
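Those constants drive the retry loop in _run_apt_command further below. A typical charm idiom combining the pieces in this module, assuming an Ubuntu unit; the pocket name is one of the CLOUD_ARCHIVE_POCKETS keys above and the package is illustrative::

    from charmhelpers.fetch import (
        add_source, apt_update, apt_install, filter_installed_packages,
    )

    # 'cloud:bionic-train' resolves via CLOUD_ARCHIVE_POCKETS to
    # bionic-updates/train; the cloud archive keyring is added for us.
    add_source('cloud:bionic-train')
    apt_update(fatal=True)
    apt_install(filter_installed_packages(['openvswitch-switch']),
                fatal=True)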
+ + +def filter_installed_packages(packages): + """Return a list of packages that require installation.""" + cache = apt_cache() + _pkgs = [] + for package in packages: + try: + p = cache[package] + p.current_ver or _pkgs.append(package) + except KeyError: + log('Package {} has no installation candidate.'.format(package), + level='WARNING') + _pkgs.append(package) + return _pkgs + + +def filter_missing_packages(packages): + """Return a list of packages that are installed. + + :param packages: list of packages to evaluate. + :returns list: Packages that are installed. + """ + return list( + set(packages) - + set(filter_installed_packages(packages)) + ) + + +def apt_cache(*_, **__): + """Shim returning an object simulating the apt_pkg Cache. + + :param _: Accept arguments for compability, not used. + :type _: any + :param __: Accept keyword arguments for compability, not used. + :type __: any + :returns:Object used to interrogate the system apt and dpkg databases. + :rtype:ubuntu_apt_pkg.Cache + """ + if 'apt_pkg' in sys.modules: + # NOTE(fnordahl): When our consumer use the upstream ``apt_pkg`` module + # in conjunction with the apt_cache helper function, they may expect us + # to call ``apt_pkg.init()`` for them. + # + # Detect this situation, log a warning and make the call to + # ``apt_pkg.init()`` to avoid the consumer Python interpreter from + # crashing with a segmentation fault. + log('Support for use of upstream ``apt_pkg`` module in conjunction' + 'with charm-helpers is deprecated since 2019-06-25', level=WARNING) + sys.modules['apt_pkg'].init() + return ubuntu_apt_pkg.Cache() + + +def apt_install(packages, options=None, fatal=False): + """Install one or more packages. + + :param packages: Package(s) to install + :type packages: Option[str, List[str]] + :param options: Options to pass on to apt-get + :type options: Option[None, List[str]] + :param fatal: Whether the command's output should be checked and + retried. + :type fatal: bool + :raises: subprocess.CalledProcessError + """ + if options is None: + options = ['--option=Dpkg::Options::=--force-confold'] + + cmd = ['apt-get', '--assume-yes'] + cmd.extend(options) + cmd.append('install') + if isinstance(packages, six.string_types): + cmd.append(packages) + else: + cmd.extend(packages) + log("Installing {} with options: {}".format(packages, + options)) + _run_apt_command(cmd, fatal) + + +def apt_upgrade(options=None, fatal=False, dist=False): + """Upgrade all packages. + + :param options: Options to pass on to apt-get + :type options: Option[None, List[str]] + :param fatal: Whether the command's output should be checked and + retried. + :type fatal: bool + :param dist: Whether ``dist-upgrade`` should be used over ``upgrade`` + :type dist: bool + :raises: subprocess.CalledProcessError + """ + if options is None: + options = ['--option=Dpkg::Options::=--force-confold'] + + cmd = ['apt-get', '--assume-yes'] + cmd.extend(options) + if dist: + cmd.append('dist-upgrade') + else: + cmd.append('upgrade') + log("Upgrading with options: {}".format(options)) + _run_apt_command(cmd, fatal) + + +def apt_update(fatal=False): + """Update local apt cache.""" + cmd = ['apt-get', 'update'] + _run_apt_command(cmd, fatal) + + +def apt_purge(packages, fatal=False): + """Purge one or more packages. + + :param packages: Package(s) to install + :type packages: Option[str, List[str]] + :param fatal: Whether the command's output should be checked and + retried. 
+ :type fatal: bool + :raises: subprocess.CalledProcessError + """ + cmd = ['apt-get', '--assume-yes', 'purge'] + if isinstance(packages, six.string_types): + cmd.append(packages) + else: + cmd.extend(packages) + log("Purging {}".format(packages)) + _run_apt_command(cmd, fatal) + + +def apt_autoremove(purge=True, fatal=False): + """Purge one or more packages. + :param purge: Whether the ``--purge`` option should be passed on or not. + :type purge: bool + :param fatal: Whether the command's output should be checked and + retried. + :type fatal: bool + :raises: subprocess.CalledProcessError + """ + cmd = ['apt-get', '--assume-yes', 'autoremove'] + if purge: + cmd.append('--purge') + _run_apt_command(cmd, fatal) + + +def apt_mark(packages, mark, fatal=False): + """Flag one or more packages using apt-mark.""" + log("Marking {} as {}".format(packages, mark)) + cmd = ['apt-mark', mark] + if isinstance(packages, six.string_types): + cmd.append(packages) + else: + cmd.extend(packages) + + if fatal: + subprocess.check_call(cmd, universal_newlines=True) + else: + subprocess.call(cmd, universal_newlines=True) + + +def apt_hold(packages, fatal=False): + return apt_mark(packages, 'hold', fatal=fatal) + + +def apt_unhold(packages, fatal=False): + return apt_mark(packages, 'unhold', fatal=fatal) + + +def import_key(key): + """Import an ASCII Armor key. + + A Radix64 format keyid is also supported for backwards + compatibility. In this case Ubuntu keyserver will be + queried for a key via HTTPS by its keyid. This method + is less preferrable because https proxy servers may + require traffic decryption which is equivalent to a + man-in-the-middle attack (a proxy server impersonates + keyserver TLS certificates and has to be explicitly + trusted by the system). + + :param key: A GPG key in ASCII armor format, + including BEGIN and END markers or a keyid. + :type key: (bytes, str) + :raises: GPGKeyError if the key could not be imported + """ + key = key.strip() + if '-' in key or '\n' in key: + # Send everything not obviously a keyid to GPG to import, as + # we trust its validation better than our own. eg. handling + # comments before the key. + log("PGP key found (looks like ASCII Armor format)", level=DEBUG) + if ('-----BEGIN PGP PUBLIC KEY BLOCK-----' in key and + '-----END PGP PUBLIC KEY BLOCK-----' in key): + log("Writing provided PGP key in the binary format", level=DEBUG) + if six.PY3: + key_bytes = key.encode('utf-8') + else: + key_bytes = key + key_name = _get_keyid_by_gpg_key(key_bytes) + key_gpg = _dearmor_gpg_key(key_bytes) + _write_apt_gpg_keyfile(key_name=key_name, key_material=key_gpg) + else: + raise GPGKeyError("ASCII armor markers missing from GPG key") + else: + log("PGP key found (looks like Radix64 format)", level=WARNING) + log("SECURELY importing PGP key from keyserver; " + "full key not provided.", level=WARNING) + # as of bionic add-apt-repository uses curl with an HTTPS keyserver URL + # to retrieve GPG keys. `apt-key adv` command is deprecated as is + # apt-key in general as noted in its manpage. See lp:1433761 for more + # history. Instead, /etc/apt/trusted.gpg.d is used directly to drop + # gpg + key_asc = _get_key_by_keyid(key) + # write the key in GPG format so that apt-key list shows it + key_gpg = _dearmor_gpg_key(key_asc) + _write_apt_gpg_keyfile(key_name=key, key_material=key_gpg) + + +def _get_keyid_by_gpg_key(key_material): + """Get a GPG key fingerprint by GPG key material. 
+ Gets a GPG key fingerprint (40-digit, 160-bit) by the ASCII armor-encoded + or binary GPG key material. Can be used, for example, to generate file + names for keys passed via charm options. + + :param key_material: ASCII armor-encoded or binary GPG key material + :type key_material: bytes + :raises: GPGKeyError if invalid key material has been provided + :returns: A GPG key fingerprint + :rtype: str + """ + # Use the same gpg command for both Xenial and Bionic + cmd = 'gpg --with-colons --with-fingerprint' + ps = subprocess.Popen(cmd.split(), + stdout=subprocess.PIPE, + stderr=subprocess.PIPE, + stdin=subprocess.PIPE) + out, err = ps.communicate(input=key_material) + if six.PY3: + out = out.decode('utf-8') + err = err.decode('utf-8') + if 'gpg: no valid OpenPGP data found.' in err: + raise GPGKeyError('Invalid GPG key material provided') + # from gnupg2 docs: fpr :: Fingerprint (fingerprint is in field 10) + return re.search(r"^fpr:{9}([0-9A-F]{40}):$", out, re.MULTILINE).group(1) + + +def _get_key_by_keyid(keyid): + """Get a key via HTTPS from the Ubuntu keyserver. + Different key ID formats are supported by SKS keyservers (the longer ones + are more secure, see "dead beef attack" and https://evil32.com/). Since + HTTPS is used, if SSLBump-like HTTPS proxies are in place, they will + impersonate keyserver.ubuntu.com and generate a certificate with + keyserver.ubuntu.com in the CN field or in SubjAltName fields of a + certificate. If such proxy behavior is expected it is necessary to add the + CA certificate chain containing the intermediate CA of the SSLBump proxy to + every machine that this code runs on via ca-certs cloud-init directive (via + cloudinit-userdata model-config) or via other means (such as through a + custom charm option). Also note that DNS resolution for the hostname in a + URL is done at a proxy server - not at the client side. + + 8-digit (32 bit) key ID + https://keyserver.ubuntu.com/pks/lookup?search=0x4652B4E6 + 16-digit (64 bit) key ID + https://keyserver.ubuntu.com/pks/lookup?search=0x6E85A86E4652B4E6 + 40-digit key ID: + https://keyserver.ubuntu.com/pks/lookup?search=0x35F77D63B5CEC106C577ED856E85A86E4652B4E6 + + :param keyid: An 8, 16 or 40 hex digit keyid to find a key for + :type keyid: (bytes, str) + :returns: A key material for the specified GPG key id + :rtype: (str, bytes) + :raises: subprocess.CalledProcessError + """ + # options=mr - machine-readable output (disables html wrappers) + keyserver_url = ('https://keyserver.ubuntu.com' + '/pks/lookup?op=get&options=mr&exact=on&search=0x{}') + curl_cmd = ['curl', keyserver_url.format(keyid)] + # use proxy server settings in order to retrieve the key + return subprocess.check_output(curl_cmd, + env=env_proxy_settings(['https'])) + + +def _dearmor_gpg_key(key_asc): + """Converts a GPG key in the ASCII armor format to the binary format. + + :param key_asc: A GPG key in ASCII armor format. + :type key_asc: (str, bytes) + :returns: A GPG key in binary format + :rtype: (str, bytes) + :raises: GPGKeyError + """ + ps = subprocess.Popen(['gpg', '--dearmor'], + stdout=subprocess.PIPE, + stderr=subprocess.PIPE, + stdin=subprocess.PIPE) + out, err = ps.communicate(input=key_asc) + # no need to decode output as it is binary (invalid utf-8), only error + if six.PY3: + err = err.decode('utf-8') + if 'gpg: no valid OpenPGP data found.' in err: + raise GPGKeyError('Invalid GPG key material. 
Check your network setup'
+                          ' (MTU, routing, DNS) and/or proxy server settings'
+                          ' as well as destination keyserver status.')
+    else:
+        return out
+
+
+def _write_apt_gpg_keyfile(key_name, key_material):
+    """Writes GPG key material into a file at a provided path.
+
+    :param key_name: A key name to use for a key file (could be a fingerprint)
+    :type key_name: str
+    :param key_material: A GPG key material (binary)
+    :type key_material: (str, bytes)
+    """
+    with open('/etc/apt/trusted.gpg.d/{}.gpg'.format(key_name),
+              'wb') as keyf:
+        keyf.write(key_material)
+
+
+def add_source(source, key=None, fail_invalid=False):
+    """Add a package source to this system.
+
+    @param source: a URL or sources.list entry, as supported by
+    add-apt-repository(1). Examples::
+
+        ppa:charmers/example
+        deb https://stub:key@private.example.com/ubuntu trusty main
+
+    In addition:
+        'proposed:' may be used to enable the standard 'proposed'
+        pocket for the release.
+        'cloud:' may be used to activate official cloud archive pockets,
+        such as 'cloud:icehouse'
+        'distro' may be used as a noop
+
+    The full list of source specifications supported by the function is:
+
+    'distro': A NOP; i.e. it has no effect.
+    'proposed': the proposed deb spec [2] is written to
+      /etc/apt/sources.list.d/proposed.list
+    'distro-proposed': adds <version>-proposed to the debs [2]
+    'ppa:<ppa-name>': add-apt-repository --yes <ppa-name>
+    'deb <deb-spec>': add-apt-repository --yes deb <deb-spec>
+    'http://....': add-apt-repository --yes http://...
+    'cloud-archive:<spec>': add-apt-repository --yes cloud-archive:<spec>
+    'cloud:<release>[-staging]': specify a Cloud Archive pocket with
+        optional staging version. If staging is used then the staging PPA [2]
+        will be used. If staging is NOT used then the cloud archive [3] will
+        be added, and the 'ubuntu-cloud-keyring' package will be added for the
+        current distro.
+
+    Otherwise the source is not recognised and this is logged to the juju log.
+    However, no error is raised, unless fail_invalid is True.
+
+    [1] deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main
+        where {} is replaced with the derived pocket name.
+    [2] deb http://archive.ubuntu.com/ubuntu {}-proposed \
+        main universe multiverse restricted
+        where {} is replaced with the lsb_release codename (e.g. xenial)
+    [3] deb http://ubuntu-cloud.archive.canonical.com/ubuntu <pocket>
+        to /etc/apt/sources.list.d/cloud-archive-list
+
+    @param key: A key to be added to the system's APT keyring and used
+    to verify the signatures on packages. Ideally, this should be an
+    ASCII format GPG public key including the block headers. A GPG key
+    id may also be used, but be aware that only insecure protocols are
+    available to retrieve the actual public key from a public keyserver
+    placing your Juju environment at risk. ppa and cloud archive keys
+    are securely added automatically, so should not be provided.
+
+    @param fail_invalid: (boolean) if True, then the function raises a
+    SourceConfigError if there is no matching installation source.
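+
+    For illustration only, typical invocations might look like the following
+    (the PPA name, pocket and key value are hypothetical placeholders)::
+
+        add_source('ppa:charmers/example')
+        add_source('cloud:bionic-train', fail_invalid=True)
+        add_source('deb https://private.example.com/ubuntu bionic main',
+                   key='-----BEGIN PGP PUBLIC KEY BLOCK-----...')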
+
+    @raises SourceConfigError() if for cloud:<pocket>, the <pocket> is not a
+    valid pocket in CLOUD_ARCHIVE_POCKETS
+    """
+    _mapping = OrderedDict([
+        (r"^distro$", lambda: None),  # This is a NOP
+        (r"^(?:proposed|distro-proposed)$", _add_proposed),
+        (r"^cloud-archive:(.*)$", _add_apt_repository),
+        (r"^((?:deb |http:|https:|ppa:).*)$", _add_apt_repository),
+        (r"^cloud:(.*)-(.*)\/staging$", _add_cloud_staging),
+        (r"^cloud:(.*)-(.*)$", _add_cloud_distro_check),
+        (r"^cloud:(.*)$", _add_cloud_pocket),
+        (r"^snap:.*-(.*)-(.*)$", _add_cloud_distro_check),
+    ])
+    if source is None:
+        source = ''
+    for r, fn in six.iteritems(_mapping):
+        m = re.match(r, source)
+        if m:
+            if key:
+                # Import key before adding the source which depends on it,
+                # as refreshing packages could fail otherwise.
+                try:
+                    import_key(key)
+                except GPGKeyError as e:
+                    raise SourceConfigError(str(e))
+            # call the associated function with the captured groups
+            # raises SourceConfigError on error.
+            fn(*m.groups())
+            break
+    else:
+        # nothing matched. log an error and maybe sys.exit
+        err = "Unknown source: {!r}".format(source)
+        log(err)
+        if fail_invalid:
+            raise SourceConfigError(err)
+
+
+def _add_proposed():
+    """Add the PROPOSED_POCKET as /etc/apt/sources.list.d/proposed.list
+
+    Uses get_distrib_codename to determine the correct stanza for
+    the deb line.
+
+    For Intel architectures PROPOSED_POCKET is used for the release, but for
+    other architectures PROPOSED_PORTS_POCKET is used for the release.
+    """
+    release = get_distrib_codename()
+    arch = platform.machine()
+    if arch not in six.iterkeys(ARCH_TO_PROPOSED_POCKET):
+        raise SourceConfigError("Arch {} not supported for (distro-)proposed"
+                                .format(arch))
+    with open('/etc/apt/sources.list.d/proposed.list', 'w') as apt:
+        apt.write(ARCH_TO_PROPOSED_POCKET[arch].format(release))
+
+
+def _add_apt_repository(spec):
+    """Add the spec using add_apt_repository
+
+    :param spec: the parameter to pass to add_apt_repository
+    :type spec: str
+    """
+    if '{series}' in spec:
+        series = get_distrib_codename()
+        spec = spec.replace('{series}', series)
+    # software-properties package for bionic properly reacts to proxy settings
+    # passed as environment variables (See lp:1433761). This is not the case
+    # for LTS and non-LTS releases below bionic.
+    _run_with_retries(['add-apt-repository', '--yes', spec],
+                      cmd_env=env_proxy_settings(['https']))
+
+
+def _add_cloud_pocket(pocket):
+    """Add a cloud pocket as /etc/apt/sources.list.d/cloud-archive.list
+
+    Note that this overwrites the existing file if there is one.
+
+    This function also converts the simple pocket into the actual pocket using
+    the CLOUD_ARCHIVE_POCKETS mapping.
+
+    :param pocket: string representing the pocket to add a deb spec for.
+    :raises: SourceConfigError if the cloud pocket doesn't exist or the
+        requested release doesn't match the current distro version.
+    """
+    apt_install(filter_installed_packages(['ubuntu-cloud-keyring']),
+                fatal=True)
+    if pocket not in CLOUD_ARCHIVE_POCKETS:
+        raise SourceConfigError(
+            'Unsupported cloud: source option %s' %
+            pocket)
+    actual_pocket = CLOUD_ARCHIVE_POCKETS[pocket]
+    with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as apt:
+        apt.write(CLOUD_ARCHIVE.format(actual_pocket))
+
+
+def _add_cloud_staging(cloud_archive_release, openstack_release):
+    """Add the cloud staging repository which is in
+    ppa:ubuntu-cloud-archive/<openstack_release>-staging
+
+    This function checks that the cloud_archive_release matches the current
+    codename for the distro that charm is being installed on.
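+
+    For example (illustrative, assuming the unit runs Ubuntu bionic), a
+    source of ``cloud:bionic-train/staging`` would resolve to the PPA
+    ``ppa:ubuntu-cloud-archive/train-staging``.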
+
+    :param cloud_archive_release: string, codename for the release.
+    :param openstack_release: String, codename for the openstack release.
+    :raises: SourceConfigError if the cloud_archive_release doesn't match the
+        current version of the os.
+    """
+    _verify_is_ubuntu_rel(cloud_archive_release, openstack_release)
+    ppa = 'ppa:ubuntu-cloud-archive/{}-staging'.format(openstack_release)
+    cmd = 'add-apt-repository -y {}'.format(ppa)
+    _run_with_retries(cmd.split(' '))
+
+
+def _add_cloud_distro_check(cloud_archive_release, openstack_release):
+    """Add the cloud pocket, but also check the cloud_archive_release against
+    the current distro, and use the openstack_release as the full lookup.
+
+    This just calls _add_cloud_pocket() with the openstack_release as pocket
+    to get the correct cloud-archive.list for dpkg to work with.
+
+    :param cloud_archive_release: String, codename for the distro release.
+    :param openstack_release: String, spec for the release to look up in the
+        CLOUD_ARCHIVE_POCKETS
+    :raises: SourceConfigError if this is the wrong distro, or the pocket spec
+        doesn't exist.
+    """
+    _verify_is_ubuntu_rel(cloud_archive_release, openstack_release)
+    _add_cloud_pocket("{}-{}".format(cloud_archive_release, openstack_release))
+
+
+def _verify_is_ubuntu_rel(release, os_release):
+    """Verify that the release is the same as the current ubuntu release.
+
+    :param release: String, lowercase for the release.
+    :param os_release: String, the os_release being asked for
+    :raises: SourceConfigError if the release is not the same as the ubuntu
+        release.
+    """
+    ubuntu_rel = get_distrib_codename()
+    if release != ubuntu_rel:
+        raise SourceConfigError(
+            'Invalid Cloud Archive release specified: {}-{} on this Ubuntu '
+            'version ({})'.format(release, os_release, ubuntu_rel))
+
+
+def _run_with_retries(cmd, max_retries=CMD_RETRY_COUNT, retry_exitcodes=(1,),
+                      retry_message="", cmd_env=None):
+    """Run a command and retry until success or max_retries is reached.
+
+    :param cmd: The apt command to run.
+    :type cmd: List[str]
+    :param max_retries: The number of retries to attempt on a fatal
+        command. Defaults to CMD_RETRY_COUNT.
+    :type max_retries: int
+    :param retry_exitcodes: Optional additional exit codes to retry.
+        Defaults to retry on exit code 1.
+    :type retry_exitcodes: tuple
+    :param retry_message: Optional log prefix emitted during retries.
+    :type retry_message: str
+    :param cmd_env: Environment variables to add to the command run.
+    :type cmd_env: Option[None, Dict[str, str]]
+    """
+    env = get_apt_dpkg_env()
+    if cmd_env:
+        env.update(cmd_env)
+
+    if not retry_message:
+        retry_message = "Failed executing '{}'".format(" ".join(cmd))
+    retry_message += ". Will retry in {} seconds".format(CMD_RETRY_DELAY)
+
+    retry_count = 0
+    result = None
+
+    retry_results = (None,) + retry_exitcodes
+    while result in retry_results:
+        try:
+            result = subprocess.check_call(cmd, env=env)
+        except subprocess.CalledProcessError as e:
+            retry_count = retry_count + 1
+            if retry_count > max_retries:
+                raise
+            result = e.returncode
+            log(retry_message)
+            time.sleep(CMD_RETRY_DELAY)
+
+
+def _run_apt_command(cmd, fatal=False):
+    """Run an apt command with optional retries.
+
+    :param cmd: The apt command to run.
+    :type cmd: List[str]
+    :param fatal: Whether the command's output should be checked and
+                  retried.
+    :type fatal: bool
+    """
+    if fatal:
+        _run_with_retries(
+            cmd, retry_exitcodes=(1, APT_NO_LOCK,),
+            retry_message="Couldn't acquire DPKG lock")
+    else:
+        subprocess.call(cmd, env=get_apt_dpkg_env())
+
+
+def get_upstream_version(package):
+    """Determine upstream version based on installed package
+
+    @returns None (if not installed) or the upstream version
+    """
+    cache = apt_cache()
+    try:
+        pkg = cache[package]
+    except Exception:
+        # the package is unknown to the current apt cache.
+        return None
+
+    if not pkg.current_ver:
+        # package is known, but no version is currently installed.
+        return None
+
+    return ubuntu_apt_pkg.upstream_version(pkg.current_ver.ver_str)
+
+
+def get_apt_dpkg_env():
+    """Get environment suitable for execution of APT and DPKG tools.
+
+    We keep this in a helper function instead of in a global constant to
+    avoid execution on import of the library.
+
+    :returns: Environment suitable for execution of APT and DPKG tools.
+    :rtype: Dict[str, str]
+    """
+    # The fallback is used in the event of ``/etc/environment`` not containing
+    # a valid PATH variable.
+    return {'DEBIAN_FRONTEND': 'noninteractive',
+            'PATH': get_system_env('PATH', '/usr/sbin:/usr/bin:/sbin:/bin')}
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/fetch/ubuntu_apt_pkg.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/fetch/ubuntu_apt_pkg.py
new file mode 100644
index 0000000000000000000000000000000000000000..929a75d7a775921c32368e8cc7950eaa97390047
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/fetch/ubuntu_apt_pkg.py
@@ -0,0 +1,267 @@
+# Copyright 2019 Canonical Ltd
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""Provide a subset of the ``python-apt`` module API.
+
+Data collection is done through subprocess calls to ``apt-cache`` and
+``dpkg-query`` commands.
+
+The main purpose for this module is to avoid dependency on the
+``python-apt`` python module.
+
+The indicated python module is a wrapper around the ``apt`` C++ library
+which is tightly connected to the version of the distribution it was
+shipped on. It is not developed in a backward/forward compatible manner.
+
+This in turn makes it incredibly hard to distribute as a wheel for a piece
+of python software that supports a span of distro releases [0][1].
+
+Upstream feedback like [2] does not give confidence that this will ever
+change, so with this we get rid of the dependency.
+ +0: https://github.com/juju-solutions/layer-basic/pull/135 +1: https://bugs.launchpad.net/charm-octavia/+bug/1824112 +2: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=845330#10 +""" + +import locale +import os +import subprocess +import sys + + +class _container(dict): + """Simple container for attributes.""" + __getattr__ = dict.__getitem__ + __setattr__ = dict.__setitem__ + + +class Package(_container): + """Simple container for package attributes.""" + + +class Version(_container): + """Simple container for version attributes.""" + + +class Cache(object): + """Simulation of ``apt_pkg`` Cache object.""" + def __init__(self, progress=None): + pass + + def __contains__(self, package): + try: + pkg = self.__getitem__(package) + return pkg is not None + except KeyError: + return False + + def __getitem__(self, package): + """Get information about a package from apt and dpkg databases. + + :param package: Name of package + :type package: str + :returns: Package object + :rtype: object + :raises: KeyError, subprocess.CalledProcessError + """ + apt_result = self._apt_cache_show([package])[package] + apt_result['name'] = apt_result.pop('package') + pkg = Package(apt_result) + dpkg_result = self._dpkg_list([package]).get(package, {}) + current_ver = None + installed_version = dpkg_result.get('version') + if installed_version: + current_ver = Version({'ver_str': installed_version}) + pkg.current_ver = current_ver + pkg.architecture = dpkg_result.get('architecture') + return pkg + + def _dpkg_list(self, packages): + """Get data from system dpkg database for package. + + :param packages: Packages to get data from + :type packages: List[str] + :returns: Structured data about installed packages, keys like + ``dpkg-query --list`` + :rtype: dict + :raises: subprocess.CalledProcessError + """ + pkgs = {} + cmd = ['dpkg-query', '--list'] + cmd.extend(packages) + if locale.getlocale() == (None, None): + # subprocess calls out to locale.getpreferredencoding(False) to + # determine encoding. Workaround for Trusty where the + # environment appears to not be set up correctly. + locale.setlocale(locale.LC_ALL, 'en_US.UTF-8') + try: + output = subprocess.check_output(cmd, + stderr=subprocess.STDOUT, + universal_newlines=True) + except subprocess.CalledProcessError as cp: + # ``dpkg-query`` may return error and at the same time have + # produced useful output, for example when asked for multiple + # packages where some are not installed + if cp.returncode != 1: + raise + output = cp.output + headings = [] + for line in output.splitlines(): + if line.startswith('||/'): + headings = line.split() + headings.pop(0) + continue + elif (line.startswith('|') or line.startswith('+') or + line.startswith('dpkg-query:')): + continue + else: + data = line.split(None, 4) + status = data.pop(0) + if status != 'ii': + continue + pkg = {} + pkg.update({k.lower(): v for k, v in zip(headings, data)}) + if 'name' in pkg: + pkgs.update({pkg['name']: pkg}) + return pkgs + + def _apt_cache_show(self, packages): + """Get data from system apt cache for package. + + :param packages: Packages to get data from + :type packages: List[str] + :returns: Structured data about package, keys like + ``apt-cache show`` + :rtype: dict + :raises: subprocess.CalledProcessError + """ + pkgs = {} + cmd = ['apt-cache', 'show', '--no-all-versions'] + cmd.extend(packages) + if locale.getlocale() == (None, None): + # subprocess calls out to locale.getpreferredencoding(False) to + # determine encoding. 
Workaround for Trusty where the
+            # environment appears to not be set up correctly.
+            locale.setlocale(locale.LC_ALL, 'en_US.UTF-8')
+        try:
+            output = subprocess.check_output(cmd,
+                                             stderr=subprocess.STDOUT,
+                                             universal_newlines=True)
+            previous = None
+            pkg = {}
+            for line in output.splitlines():
+                if not line:
+                    if 'package' in pkg:
+                        pkgs.update({pkg['package']: pkg})
+                        pkg = {}
+                    continue
+                if line.startswith(' '):
+                    if previous and previous in pkg:
+                        pkg[previous] += os.linesep + line.lstrip()
+                    continue
+                if ':' in line:
+                    kv = line.split(':', 1)
+                    key = kv[0].lower()
+                    if key == 'n':
+                        continue
+                    previous = key
+                    pkg.update({key: kv[1].lstrip()})
+        except subprocess.CalledProcessError as cp:
+            # ``apt-cache`` returns 100 if none of the packages asked for
+            # exist in the apt cache.
+            if cp.returncode != 100:
+                raise
+        return pkgs
+
+
+class Config(_container):
+    def __init__(self):
+        super(Config, self).__init__(self._populate())
+
+    def _populate(self):
+        cfgs = {}
+        cmd = ['apt-config', 'dump']
+        output = subprocess.check_output(cmd,
+                                         stderr=subprocess.STDOUT,
+                                         universal_newlines=True)
+        for line in output.splitlines():
+            if not line.startswith("CommandLine"):
+                k, v = line.split(" ", 1)
+                cfgs[k] = v.strip(";").strip("\"")
+
+        return cfgs
+
+
+# Backwards compatibility with old apt_pkg module
+sys.modules[__name__].config = Config()
+
+
+def init():
+    """Compatibility shim that does nothing."""
+    pass
+
+
+def upstream_version(version):
+    """Extracts upstream version from a version string.
+
+    Upstream reference: https://salsa.debian.org/apt-team/apt/blob/master/
+                        apt-pkg/deb/debversion.cc#L259
+
+    :param version: Version string
+    :type version: str
+    :returns: Upstream version
+    :rtype: str
+    """
+    if version:
+        version = version.split(':')[-1]
+        version = version.split('-')[0]
+    return version
+
+
+def version_compare(a, b):
+    """Compare the given versions.
+
+    Call out to ``dpkg`` to make sure the code doing the comparison is
+    compatible with what the ``apt`` library would do. Mimic the return
+    values.
+
+    Upstream reference:
+    https://apt-team.pages.debian.net/python-apt/library/apt_pkg.html
+    ?highlight=version_compare#apt_pkg.version_compare
+
+    :param a: version string
+    :type a: str
+    :param b: version string
+    :type b: str
+    :returns: >0 if ``a`` is greater than ``b``, 0 if a equals b,
+              <0 if ``a`` is smaller than ``b``
+    :rtype: int
+    :raises: subprocess.CalledProcessError, RuntimeError
+    """
+    for op in ('gt', 1), ('eq', 0), ('lt', -1):
+        try:
+            subprocess.check_call(['dpkg', '--compare-versions',
+                                   a, op[0], b],
+                                  stderr=subprocess.STDOUT,
+                                  universal_newlines=True)
+            return op[1]
+        except subprocess.CalledProcessError as cp:
+            if cp.returncode == 1:
+                continue
+            raise
+    else:
+        raise RuntimeError('Unable to compare "{}" and "{}", according to '
+                           'our logic they are neither greater, equal nor '
+                           'less than each other.'.format(a, b))
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/osplatform.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/osplatform.py
new file mode 100644
index 0000000000000000000000000000000000000000..78c81af5955caee51271c830d58cacac2cab9bcc
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/osplatform.py
@@ -0,0 +1,46 @@
+import platform
+import os
+
+
+def get_platform():
+    """Return the current OS platform.
+
+    For example: if current os platform is Ubuntu then a string "ubuntu"
+    will be returned (which is the name of the module).
+
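+    (Illustrative values: on such a system ``platform.linux_distribution()``
+    returns a tuple like ``('Ubuntu', '18.04', 'bionic')``, and only the
+    first element is inspected below.)
+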
+ This string is used to decide which platform module should be imported. + """ + # linux_distribution is deprecated and will be removed in Python 3.7 + # Warnings *not* disabled, as we certainly need to fix this. + if hasattr(platform, 'linux_distribution'): + tuple_platform = platform.linux_distribution() + current_platform = tuple_platform[0] + else: + current_platform = _get_platform_from_fs() + + if "Ubuntu" in current_platform: + return "ubuntu" + elif "CentOS" in current_platform: + return "centos" + elif "debian" in current_platform: + # Stock Python does not detect Ubuntu and instead returns debian. + # Or at least it does in some build environments like Travis CI + return "ubuntu" + elif "elementary" in current_platform: + # ElementaryOS fails to run tests locally without this. + return "ubuntu" + else: + raise RuntimeError("This module is not supported on {}." + .format(current_platform)) + + +def _get_platform_from_fs(): + """Get Platform from /etc/os-release.""" + with open(os.path.join(os.sep, 'etc', 'os-release')) as fin: + content = dict( + line.split('=', 1) + for line in fin.read().splitlines() + if '=' in line + ) + for k, v in content.items(): + content[k] = v.strip('"') + return content["NAME"] diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/payload/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/payload/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..ee55cb3d2baddb556df910f1d41638c3c7f39c59 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/payload/__init__.py @@ -0,0 +1,15 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +"Tools for working with files injected into a charm just before deployment." diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/payload/archive.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/payload/archive.py new file mode 100644 index 0000000000000000000000000000000000000000..7fc453f523cf49f0c123839a347c4452c6d465ca --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/payload/archive.py @@ -0,0 +1,71 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
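+
+# A minimal usage sketch (the payload path below is hypothetical): by default
+# ``extract`` unpacks an archive into ``$CHARM_DIR/archives/<archive-name>``
+# and returns the destination path::
+#
+#     from charmhelpers.payload.archive import extract
+#     dest = extract('/tmp/payload.tar.gz')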
+ +import os +import tarfile +import zipfile +from charmhelpers.core import ( + host, + hookenv, +) + + +class ArchiveError(Exception): + pass + + +def get_archive_handler(archive_name): + if os.path.isfile(archive_name): + if tarfile.is_tarfile(archive_name): + return extract_tarfile + elif zipfile.is_zipfile(archive_name): + return extract_zipfile + else: + # look at the file name + for ext in ('.tar', '.tar.gz', '.tgz', 'tar.bz2', '.tbz2', '.tbz'): + if archive_name.endswith(ext): + return extract_tarfile + for ext in ('.zip', '.jar'): + if archive_name.endswith(ext): + return extract_zipfile + + +def archive_dest_default(archive_name): + archive_file = os.path.basename(archive_name) + return os.path.join(hookenv.charm_dir(), "archives", archive_file) + + +def extract(archive_name, destpath=None): + handler = get_archive_handler(archive_name) + if handler: + if not destpath: + destpath = archive_dest_default(archive_name) + if not os.path.isdir(destpath): + host.mkdir(destpath) + handler(archive_name, destpath) + return destpath + else: + raise ArchiveError("No handler for archive") + + +def extract_tarfile(archive_name, destpath): + "Unpack a tar archive, optionally compressed" + archive = tarfile.open(archive_name) + archive.extractall(destpath) + + +def extract_zipfile(archive_name, destpath): + "Unpack a zip file" + archive = zipfile.ZipFile(archive_name) + archive.extractall(destpath) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/payload/execd.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/payload/execd.py new file mode 100644 index 0000000000000000000000000000000000000000..1502aa0b596f0b1a2017ccb4543a35999774431d --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charmhelpers/payload/execd.py @@ -0,0 +1,65 @@ +#!/usr/bin/env python + +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import os +import sys +import subprocess +from charmhelpers.core import hookenv + + +def default_execd_dir(): + return os.path.join(os.environ['CHARM_DIR'], 'exec.d') + + +def execd_module_paths(execd_dir=None): + """Generate a list of full paths to modules within execd_dir.""" + if not execd_dir: + execd_dir = default_execd_dir() + + if not os.path.exists(execd_dir): + return + + for subpath in os.listdir(execd_dir): + module = os.path.join(execd_dir, subpath) + if os.path.isdir(module): + yield module + + +def execd_submodule_paths(command, execd_dir=None): + """Generate a list of full paths to the specified command within exec_dir. 
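+
+    For example, ``execd_submodule_paths('charm-pre-install')`` would yield
+    each executable ``charm-pre-install`` file found in the modules under
+    ``exec.d`` (name taken from ``execd_preinstall`` below; illustrative).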
+    """
+    for module_path in execd_module_paths(execd_dir):
+        path = os.path.join(module_path, command)
+        if os.access(path, os.X_OK) and os.path.isfile(path):
+            yield path
+
+
+def execd_run(command, execd_dir=None, die_on_error=True, stderr=subprocess.STDOUT):
+    """Run command for each module within execd_dir which defines it."""
+    for submodule_path in execd_submodule_paths(command, execd_dir):
+        try:
+            subprocess.check_output(submodule_path, stderr=stderr,
+                                    universal_newlines=True)
+        except subprocess.CalledProcessError as e:
+            hookenv.log("Error ({}) running {}. Output: {}".format(
+                e.returncode, e.cmd, e.output))
+            if die_on_error:
+                sys.exit(e.returncode)
+
+
+def execd_preinstall(execd_dir=None):
+    """Run charm-pre-install for each module within execd_dir."""
+    execd_run('charm-pre-install', execd_dir=execd_dir)
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charms/osm/libansible.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charms/osm/libansible.py
new file mode 100644
index 0000000000000000000000000000000000000000..811ec4441fdde7f424a6e29800f265a528d27da5
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charms/osm/libansible.py
@@ -0,0 +1,108 @@
+##
+# Copyright ETSI OSM
+# All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+##
+
+import fnmatch
+import os
+import subprocess
+import sys
+import yaml
+
+import charmhelpers.fetch
+
+
+sys.path.append("lib")
+
+
+ansible_hosts_path = "/etc/ansible/hosts"
+
+
+def install_ansible_support(from_ppa=True, ppa_location="ppa:ansible/ansible"):
+    """Installs the ansible package.
+
+    By default it is installed from the `PPA`_ linked from
+    the ansible `website`_, or from a PPA specified by a charm config.
+
+    .. _PPA: https://launchpad.net/~rquillo/+archive/ansible
+    .. _website: http://docs.ansible.com/intro_installation.html#latest-releases-via-apt-ubuntu
+
+    If from_ppa is False, you must ensure that the package is available
+    from a configured repository.
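+
+    A typical (illustrative) call is simply::
+
+        install_ansible_support()  # adds ppa:ansible/ansible, then installs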
+ """ + if from_ppa: + charmhelpers.fetch.add_source(ppa_location) + charmhelpers.fetch.apt_update(fatal=True) + charmhelpers.fetch.apt_install("ansible") + with open(ansible_hosts_path, "w+") as hosts_file: + hosts_file.write("localhost ansible_connection=local") + + +def create_hosts(hostname, username, password, hosts): + inventory_path = "/etc/ansible/hosts" + with open(inventory_path, 'w') as f: + f.write('[{}]\n'.format(hosts)) + h1 = '{0} ansible_connection=network_cli ansible_network_os=vyos ' \ + 'ansible_ssh_user={1} ansible_ssh_pass={2} ' \ + 'ansible_python_interpreter=/usr/bin/python\n'.format(hostname, username, password) + f.write(h1) + + +def create_ansible_cfg(): + ansible_config_path = "/etc/ansible/ansible.cfg" + with open(ansible_config_path, 'w') as f: + f.write('[defaults]\n') + f.write('host_key_checking = False\n') + # logs playbook execution attempts to the specified path + f.write('log_path = /var/log/ansible.log\n') + f.write('[ssh_connection]\n') + f.write('control_path=%(directory)s/%%h-%%r\n') + f.write('control_path_dir=~/.ansible/cp\n') + + +# Function to find the playbook path +def find(pattern, path): + result = "" + for root, dirs, files in os.walk(path): + for name in files: + if fnmatch.fnmatch(name, pattern): + result = os.path.join(root, name) + return result + + +def execute_playbook(playbook_file, hostname, user, password, vars_dict=None): + playbook_path = find(playbook_file, '/var/lib/juju/agents/') + print("Playbook " + playbook_file + " is " + playbook_path) + with open(playbook_path, 'r') as f: + playbook_data = yaml.load(f, Loader=yaml.SafeLoader) + hosts = 'all' + if 'hosts' in playbook_data[0].keys() and playbook_data[0]['hosts']: + hosts = playbook_data[0]['hosts'] + create_ansible_cfg() + create_hosts(hostname, user, password, hosts) + call = 'ansible-playbook -vvvv %s ' % playbook_path + if vars_dict and isinstance(vars_dict, dict) and len(vars_dict) > 0: + call += '--extra-vars ' + string_var = '' + for v in vars_dict.items(): + string_var += '%s=%s ' % v + string_var = string_var.strip() + call += '"%s"' % string_var + call = call.strip() + print(call) + result = subprocess.check_output(call, shell=True) + lastline = result.decode('utf-8').splitlines()[-2] + print("Result is " + lastline) + return lastline diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charms/osm/ns.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charms/osm/ns.py new file mode 100644 index 0000000000000000000000000000000000000000..25be4056282e48ce946632025be8b466557e3171 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charms/osm/ns.py @@ -0,0 +1,301 @@ +# A prototype of a library to aid in the development and operation of +# OSM Network Service charms + +import asyncio +import logging +import os +import os.path +import re +import subprocess +import sys +import time +import yaml + +try: + import juju +except ImportError: + # Not all cloud images are created equal + if not os.path.exists("/usr/bin/python3") or not os.path.exists("/usr/bin/pip3"): + # Update the apt cache + subprocess.check_call(["apt-get", "update"]) + + # Install the Python3 package + subprocess.check_call(["apt-get", "install", "-y", "python3", "python3-pip"],) + + + # Install the libjuju build dependencies + subprocess.check_call(["apt-get", "install", "-y", "libffi-dev", "libssl-dev"],) + + subprocess.check_call( + [sys.executable, "-m", "pip", "install", "juju"], + ) + +from juju.controller import Controller + +# Quiet the debug logging 
+
+logging.getLogger('websockets.protocol').setLevel(logging.INFO)
+logging.getLogger('juju.client.connection').setLevel(logging.WARN)
+logging.getLogger('juju.model').setLevel(logging.WARN)
+logging.getLogger('juju.machine').setLevel(logging.WARN)
+
+
+class NetworkService:
+    """A lightweight interface to the Juju controller.
+
+    This NetworkService client is specifically designed to allow a higher-level
+    "NS" charm to interoperate with "VNF" charms, allowing for the execution of
+    Primitives across other charms within the same model.
+    """
+    endpoint = None
+    user = 'admin'
+    secret = None
+    port = 17070
+    loop = None
+    client = None
+    model = None
+    cacert = None
+
+    def __init__(self, user, secret, endpoint=None):
+
+        self.user = user
+        self.secret = secret
+        if endpoint is None:
+            addresses = os.environ['JUJU_API_ADDRESSES']
+            for address in addresses.split(' '):
+                self.endpoint = address
+        else:
+            self.endpoint = endpoint
+
+        # Stash the name of the model
+        self.model = os.environ['JUJU_MODEL_NAME']
+
+        # Load the ca-cert from agent.conf
+        AGENT_PATH = os.path.dirname(os.environ['JUJU_CHARM_DIR'])
+        with open("{}/agent.conf".format(AGENT_PATH), "r") as f:
+            try:
+                y = yaml.safe_load(f)
+                self.cacert = y['cacert']
+            except yaml.YAMLError as exc:
+                print("Unable to find Juju ca-cert.")
+                raise exc
+
+        # Create our event loop
+        self.loop = asyncio.new_event_loop()
+        asyncio.set_event_loop(self.loop)
+
+    async def connect(self):
+        """Connect to the Juju controller."""
+        controller = Controller()
+
+        print(
+            "Connecting to controller... ws://{}:{} as {}/{}".format(
+                self.endpoint,
+                self.port,
+                self.user,
+                self.secret[-4:].rjust(len(self.secret), "*"),
+            )
+        )
+        await controller.connect(
+            endpoint=self.endpoint,
+            username=self.user,
+            password=self.secret,
+            cacert=self.cacert,
+        )
+
+        return controller
+
+    def __del__(self):
+        self.logout()
+
+    async def disconnect(self):
+        """Disconnect from the Juju controller."""
+        if self.client:
+            print("Disconnecting Juju controller")
+            await self.client.disconnect()
+
+    def login(self):
+        """Login to the Juju controller."""
+        if not self.client:
+            # Connect to the Juju API server
+            self.client = self.loop.run_until_complete(self.connect())
+        return self.client
+
+    def logout(self):
+        """Logout of the Juju controller."""
+
+        if self.loop:
+            print("Disconnecting from API")
+            self.loop.run_until_complete(self.disconnect())
+
+    def FormatApplicationName(self, *args):
+        """
+        Generate a Juju-compatible Application name
+
+        :param args tuple: Positional arguments to be used to construct the
+        application name.
+
+        Limitations::
+        - Only accepts characters a-z and non-consecutive dashes (-)
+        - Application name should not exceed 50 characters
+
+        Examples::
+
+            FormatApplicationName("ping_pong_ns", "ping_vnf", "a")
+        """
+        appname = ""
+        for c in "-".join(list(args)):
+            if c.isdigit():
+                c = chr(97 + int(c))
+            elif not c.isalpha():
+                c = "-"
+            appname += c
+
+        return re.sub('-+', '-', appname.lower())
+
+    def GetApplicationName(self, nsr_name, vnf_name, vnf_member_index):
+        """Get the runtime application name of a VNF/VDU.
+
+        This will generate an application name matching the name of the deployed charm,
+        given the right parameters.
+
+        :param nsr_name str: The name of the running Network Service, as specified at instantiation.
+        :param vnf_name str: The name of the VNF or VDU
+        :param vnf_member_index: The vnf-member-index as specified in the descriptor
+        """
+
+        application_name = self.FormatApplicationName(nsr_name, vnf_member_index, vnf_name)
+
+        # This matches the logic used by the LCM
+        application_name = application_name[0:48]
+        vca_index = int(vnf_member_index) - 1
+        application_name += '-' + chr(97 + vca_index // 26) + chr(97 + vca_index % 26)
+
+        return application_name
+
+    def ExecutePrimitiveGetOutput(self, application, primitive, params={}, timeout=600):
+        """Execute a single primitive and return its output.
+
+        This is a blocking method that will execute a single primitive and
+        wait for its completion before returning its output.
+
+        :param application str: The application name provided by `GetApplicationName`.
+        :param primitive str: The name of the primitive to execute.
+        :param params list: A list of parameters.
+        :param timeout int: A timeout, in seconds, to wait for the primitive to finish. Defaults to 600 seconds.
+        """
+        uuid = self.ExecutePrimitive(application, primitive, params)
+
+        status = None
+        output = None
+
+        starttime = time.time()
+        while time.time() < starttime + timeout:
+            status = self.GetPrimitiveStatus(uuid)
+            if status in ['completed', 'failed']:
+                break
+            time.sleep(10)
+
+        # When the primitive is done, get the output
+        if status in ['completed', 'failed']:
+            output = self.GetPrimitiveOutput(uuid)
+
+        return output
+
+    def ExecutePrimitive(self, application, primitive, params={}):
+        """Execute a primitive.
+
+        This is a non-blocking method to execute a primitive. It will return
+        the UUID of the queued primitive execution, which you can use
+        for subsequent calls to `GetPrimitiveStatus` and `GetPrimitiveOutput`.
+
+        :param application string: The name of the application
+        :param primitive string: The name of the Primitive.
+        :param params list: A list of parameters.
+
+        :returns uuid string: The UUID of the executed Primitive
+        """
+        uuid = None
+
+        if not self.client:
+            self.login()
+
+        model = self.loop.run_until_complete(
+            self.client.get_model(self.model)
+        )
+
+        # Get the application
+        if application in model.applications:
+            app = model.applications[application]
+
+            # Execute the primitive
+            unit = app.units[0]
+            if unit:
+                action = self.loop.run_until_complete(
+                    unit.run_action(primitive, **params)
+                )
+                uuid = action.id
+                print("Executing action: {}".format(uuid))
+            self.loop.run_until_complete(
+                model.disconnect()
+            )
+        else:
+            # Invalid mapping: application not found. Raise exception
+            raise Exception("Application not found: {}".format(application))
+
+        return uuid
+
+    def GetPrimitiveStatus(self, uuid):
+        """Get the status of a Primitive execution.
+
+        This will return one of the following strings:
+        - pending
+        - running
+        - completed
+        - failed
+
+        :param uuid string: The UUID of the executed Primitive.
+        :returns: The status of the executed Primitive
+        """
+        status = None
+
+        if not self.client:
+            self.login()
+
+        model = self.loop.run_until_complete(
+            self.client.get_model(self.model)
+        )
+
+        status = self.loop.run_until_complete(
+            model.get_action_status(uuid)
+        )
+
+        self.loop.run_until_complete(
+            model.disconnect()
+        )
+
+        return status[uuid]
+
+    def GetPrimitiveOutput(self, uuid):
+        """Get the output of a completed Primitive execution.
+
+        :param uuid string: The UUID of the executed Primitive.
+        :returns: The output of the execution, or None if it's still running.
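+
+        An illustrative end-to-end flow (the application and primitive names
+        are hypothetical)::
+
+            uuid = ns.ExecutePrimitive('ping-vnf-aa', 'restart', {})
+            while ns.GetPrimitiveStatus(uuid) not in ('completed', 'failed'):
+                time.sleep(10)
+            output = ns.GetPrimitiveOutput(uuid)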
+ """ + result = None + if not self.client: + self.login() + + model = self.loop.run_until_complete( + self.client.get_model(self.model) + ) + + result = self.loop.run_until_complete( + model.get_action_output(uuid) + ) + + self.loop.run_until_complete( + model.disconnect() + ) + + return result diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charms/osm/proxy_cluster.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charms/osm/proxy_cluster.py new file mode 100644 index 0000000000000000000000000000000000000000..f323a3af88f3011ef9fde794cdfe343be8665abb --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charms/osm/proxy_cluster.py @@ -0,0 +1,59 @@ +import socket + +from ops.framework import Object, StoredState + + +class ProxyCluster(Object): + + state = StoredState() + + def __init__(self, charm, relation_name): + super().__init__(charm, relation_name) + self._relation_name = relation_name + self._relation = self.framework.model.get_relation(self._relation_name) + + self.framework.observe(charm.on.ssh_keys_initialized, self.on_ssh_keys_initialized) + + self.state.set_default(ssh_public_key=None) + self.state.set_default(ssh_private_key=None) + + def on_ssh_keys_initialized(self, event): + if not self.framework.model.unit.is_leader(): + raise RuntimeError("The initial unit of a cluster must also be a leader.") + + self.state.ssh_public_key = event.ssh_public_key + self.state.ssh_private_key = event.ssh_private_key + if not self.is_joined: + event.defer() + return + + self._relation.data[self.model.app][ + "ssh_public_key" + ] = self.state.ssh_public_key + self._relation.data[self.model.app][ + "ssh_private_key" + ] = self.state.ssh_private_key + + @property + def is_joined(self): + return self._relation is not None + + @property + def ssh_public_key(self): + if self.is_joined: + return self._relation.data[self.model.app].get("ssh_public_key") + + @property + def ssh_private_key(self): + if self.is_joined: + return self._relation.data[self.model.app].get("ssh_private_key") + + @property + def is_cluster_initialized(self): + return ( + True + if self.is_joined + and self._relation.data[self.model.app].get("ssh_public_key") + and self._relation.data[self.model.app].get("ssh_private_key") + else False + ) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charms/osm/sshproxy.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charms/osm/sshproxy.py new file mode 100644 index 0000000000000000000000000000000000000000..e2c311e5be8515a7fe19c05dfed9f042ded0576b --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/charms/osm/sshproxy.py @@ -0,0 +1,375 @@ +"""Module to help with executing commands over SSH.""" +## +# Copyright 2016 Canonical Ltd. +# All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. 
+## + +# from charmhelpers.core import unitdata +# from charmhelpers.core.hookenv import log + +import io +import ipaddress +import subprocess +import os +import socket +import shlex +import traceback +import sys + +from subprocess import ( + check_call, + Popen, + CalledProcessError, + PIPE, +) + +from ops.charm import CharmBase, CharmEvents +from ops.framework import StoredState, EventBase, EventSource +from ops.main import main +from ops.model import ( + ActiveStatus, + BlockedStatus, + MaintenanceStatus, + WaitingStatus, + ModelError, +) +import os +import subprocess +from .proxy_cluster import ProxyCluster + +import logging + + +logger = logging.getLogger(__name__) + +class SSHKeysInitialized(EventBase): + def __init__(self, handle, ssh_public_key, ssh_private_key): + super().__init__(handle) + self.ssh_public_key = ssh_public_key + self.ssh_private_key = ssh_private_key + + def snapshot(self): + return { + "ssh_public_key": self.ssh_public_key, + "ssh_private_key": self.ssh_private_key, + } + + def restore(self, snapshot): + self.ssh_public_key = snapshot["ssh_public_key"] + self.ssh_private_key = snapshot["ssh_private_key"] + + +class ProxyClusterEvents(CharmEvents): + ssh_keys_initialized = EventSource(SSHKeysInitialized) + + +class SSHProxyCharm(CharmBase): + + state = StoredState() + on = ProxyClusterEvents() + + def __init__(self, framework, key): + super().__init__(framework, key) + + self.peers = ProxyCluster(self, "proxypeer") + + # SSH Proxy actions (primitives) + self.framework.observe(self.on.generate_ssh_key_action, self.on_generate_ssh_key_action) + self.framework.observe(self.on.get_ssh_public_key_action, self.on_get_ssh_public_key_action) + self.framework.observe(self.on.run_action, self.on_run_action) + self.framework.observe(self.on.verify_ssh_credentials_action, self.on_verify_ssh_credentials_action) + + self.framework.observe(self.on.proxypeer_relation_changed, self.on_proxypeer_relation_changed) + + def get_ssh_proxy(self): + """Get the SSHProxy instance""" + proxy = SSHProxy( + hostname=self.model.config["ssh-hostname"], + username=self.model.config["ssh-username"], + password=self.model.config["ssh-password"], + ) + return proxy + + def on_proxypeer_relation_changed(self, event): + if self.peers.is_cluster_initialized and not SSHProxy.has_ssh_key(): + pubkey = self.peers.ssh_public_key + privkey = self.peers.ssh_private_key + SSHProxy.write_ssh_keys(public=pubkey, private=privkey) + self.verify_credentials() + else: + event.defer() + + def on_config_changed(self, event): + """Handle changes in configuration""" + self.verify_credentials() + + def on_install(self, event): + SSHProxy.install() + + def on_start(self, event): + """Called when the charm is being installed""" + if not self.peers.is_joined: + event.defer() + return + + unit = self.model.unit + + if not SSHProxy.has_ssh_key(): + unit.status = MaintenanceStatus("Generating SSH keys...") + pubkey = None + privkey = None + if self.model.unit.is_leader(): + if self.peers.is_cluster_initialized: + SSHProxy.write_ssh_keys( + public=self.peers.ssh_public_key, + private=self.peers.ssh_private_key, + ) + else: + SSHProxy.generate_ssh_key() + self.on.ssh_keys_initialized.emit( + SSHProxy.get_ssh_public_key(), SSHProxy.get_ssh_private_key() + ) + self.verify_credentials() + + def verify_credentials(self): + unit = self.model.unit + + # Unit should go into a waiting state until verify_ssh_credentials is successful + unit.status = WaitingStatus("Waiting for SSH credentials") + proxy = self.get_ssh_proxy() + verified, 
_ = proxy.verify_credentials() + if verified: + unit.status = ActiveStatus() + else: + unit.status = BlockedStatus("Invalid SSH credentials.") + return verified + + ##################### + # SSH Proxy methods # + ##################### + def on_generate_ssh_key_action(self, event): + """Generate a new SSH keypair for this unit.""" + if self.model.unit.is_leader(): + if not SSHProxy.generate_ssh_key(): + event.fail("Unable to generate ssh key") + else: + event.fail("Unit is not leader") + return + + def on_get_ssh_public_key_action(self, event): + """Get the SSH public key for this unit.""" + if self.model.unit.is_leader(): + pubkey = SSHProxy.get_ssh_public_key() + event.set_results({"pubkey": SSHProxy.get_ssh_public_key()}) + else: + event.fail("Unit is not leader") + return + + def on_run_action(self, event): + """Run an arbitrary command on the remote host.""" + if self.model.unit.is_leader(): + cmd = event.params["command"] + proxy = self.get_ssh_proxy() + stdout, stderr = proxy.run(cmd) + event.set_results({"output": stdout}) + if len(stderr): + event.fail(stderr) + else: + event.fail("Unit is not leader") + return + + def on_verify_ssh_credentials_action(self, event): + """Verify the SSH credentials for this unit.""" + unit = self.model.unit + if unit.is_leader(): + proxy = self.get_ssh_proxy() + verified, stderr = proxy.verify_credentials() + if verified: + event.set_results({"verified": True}) + unit.status = ActiveStatus() + else: + event.set_results({"verified": False, "stderr": stderr}) + event.fail("Not verified") + unit.status = BlockedStatus("Invalid SSH credentials.") + + else: + event.fail("Unit is not leader") + return + + +class LeadershipError(ModelError): + def __init__(self): + super().__init__("not leader") + +class SSHProxy: + private_key_path = "/root/.ssh/id_sshproxy" + public_key_path = "/root/.ssh/id_sshproxy.pub" + key_type = "rsa" + key_bits = 4096 + + def __init__(self, hostname: str, username: str, password: str = ""): + self.hostname = hostname + self.username = username + self.password = password + + @staticmethod + def install(): + check_call("apt update && apt install -y openssh-client sshpass", shell=True) + + @staticmethod + def generate_ssh_key(): + """Generate a 4096-bit rsa keypair.""" + if not os.path.exists(SSHProxy.private_key_path): + cmd = "ssh-keygen -t {} -b {} -N '' -f {}".format( + SSHProxy.key_type, SSHProxy.key_bits, SSHProxy.private_key_path, + ) + + try: + check_call(cmd, shell=True) + except CalledProcessError: + return False + + return True + + @staticmethod + def write_ssh_keys(public, private): + """Write a 4096-bit rsa keypair.""" + with open(SSHProxy.public_key_path, "w") as f: + f.write(public) + f.close() + with open(SSHProxy.private_key_path, "w") as f: + f.write(private) + f.close() + + @staticmethod + def get_ssh_public_key(): + publickey = "" + if os.path.exists(SSHProxy.private_key_path): + with open(SSHProxy.public_key_path, "r") as f: + publickey = f.read() + return publickey + + @staticmethod + def get_ssh_private_key(): + privatekey = "" + if os.path.exists(SSHProxy.private_key_path): + with open(SSHProxy.private_key_path, "r") as f: + privatekey = f.read() + return privatekey + + @staticmethod + def has_ssh_key(): + return True if os.path.exists(SSHProxy.private_key_path) else False + + def run(self, cmd: str) -> (str, str): + """Run a command remotely via SSH. 
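+
+        For example (illustrative), ``proxy.run("hostname")`` returns a
+        ``(stdout, stderr)`` tuple; ``verify_credentials`` below issues
+        exactly this call to probe connectivity.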
+ + Note: The previous behavior was to run the command locally if SSH wasn't + configured, but that can lead to cases where execution succeeds when you'd + expect it not to. + """ + if isinstance(cmd, str): + cmd = shlex.split(cmd) + + host = self._get_hostname() + user = self.username + passwd = self.password + key = self.private_key_path + + # Make sure we have everything we need to connect + if host and user: + return self.ssh(cmd) + + raise Exception("Invalid SSH credentials.") + + def scp(self, source_file, destination_file): + """Execute an scp command. Requires a fully qualified source and + destination. + + :param str source_file: Path to the source file + :param str destination_file: Path to the destination file + :raises: :class:`CalledProcessError` if the command fails + """ + cmd = [ + "sshpass", + "-p", + self.password, + "scp", + "-i", + os.path.expanduser(self.private_key_path), + "-o", + "StrictHostKeyChecking=no", + "-q", + "-B", + ] + destination = "{}@{}:{}".format(self.username, self.hostname, destination_file) + cmd.extend([source_file, destination]) + subprocess.run(cmd, check=True) + + def ssh(self, command): + """Run a command remotely via SSH. + + :param list(str) command: The command to execute + :return: tuple: The stdout and stderr of the command execution + :raises: :class:`CalledProcessError` if the command fails + """ + + destination = "{}@{}".format(self.username, self.hostname) + cmd = [ + "sshpass", + "-p", + self.password, + "ssh", + "-i", + os.path.expanduser(self.private_key_path), + "-o", + "StrictHostKeyChecking=no", + "-q", + destination, + ] + cmd.extend(command) + output = subprocess.run(cmd, check=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE) + return (output.stdout.decode("utf-8").strip(), output.stderr.decode("utf-8").strip()) + + def verify_credentials(self): + """Verify the SSH credentials. + + :return (bool, str): Verified, Stderr + """ + verified = False + try: + (stdout, stderr) = self.run("hostname") + verified = True + except CalledProcessError as e: + stderr = "Command failed: {} ({})".format(" ".join(e.cmd), str(e.output)) + except (TimeoutError, socket.timeout): + stderr = "Timeout attempting to reach {}".format(self._get_hostname()) + except Exception as error: + tb = traceback.format_exc() + stderr = "Unhandled exception: {}".format(tb) + return verified, stderr + + ################### + # Private methods # + ################### + def _get_hostname(self): + """Get the hostname for the ssh target. + + HACK: This function was added to work around an issue where the + ssh-hostname was passed in the format of a.b.c.d;a.b.c.d, where the first + is the floating ip, and the second the non-floating ip, for an Openstack + instance. + """ + return self.hostname.split(";")[0] diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/ops/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/ops/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..f17b2969db298b21bc47bbe1d3614ccff93e9c6e --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/ops/__init__.py @@ -0,0 +1,20 @@ +# Copyright 2020 Canonical Ltd. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""The Operator Framework."""
+
+from .version import version as __version__  # noqa: F401 (imported but unused)
+
+# Import here the bare minimum to break the circular import between modules
+from . import charm  # noqa: F401 (imported but unused)
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/ops/charm.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/ops/charm.py
new file mode 100755
index 0000000000000000000000000000000000000000..d898de859fc444814bc19a7f8f0caaaec6f7e5f4
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/ops/charm.py
@@ -0,0 +1,575 @@
+# Copyright 2019-2020 Canonical Ltd.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import enum
+import os
+import pathlib
+import typing
+
+import yaml
+
+from ops.framework import Object, EventSource, EventBase, Framework, ObjectEvents
+from ops import model
+
+
+def _loadYaml(source):
+    if yaml.__with_libyaml__:
+        return yaml.load(source, Loader=yaml.CSafeLoader)
+    return yaml.load(source, Loader=yaml.SafeLoader)
+
+
+class HookEvent(EventBase):
+    """A base class for events that trigger because of a Juju hook firing."""
+
+
+class ActionEvent(EventBase):
+    """A base class for events that trigger when a user asks for an Action to be run.
+
+    To read the parameters for the action, see the instance variable `params`.
+    To respond with the result of the action, call `set_results`. To add progress
+    messages that are visible as the action is progressing use `log`.
+
+    :ivar params: The parameters passed to the action (read by action-get)
+    """
+
+    def defer(self):
+        """Action events are not deferrable like other events.
+
+        This is because an action runs synchronously and the user is waiting for the result.
+        """
+        raise RuntimeError('cannot defer action events')
+
+    def restore(self, snapshot: dict) -> None:
+        """Used by the operator framework to record the action.
+
+        Not meant to be called directly by Charm code.
+        """
+        env_action_name = os.environ.get('JUJU_ACTION_NAME')
+        event_action_name = self.handle.kind[:-len('_action')].replace('_', '-')
+        if event_action_name != env_action_name:
+            # This could only happen if the dev manually emits the action, or from a bug.
+            raise RuntimeError('action event kind does not match current action')
+        # Params are loaded at restore rather than __init__ because
+        # the model is not available in __init__.
+        self.params = self.framework.model._backend.action_get()
+
+    def set_results(self, results: typing.Mapping) -> None:
+        """Report the result of the action.
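+
+        For example (illustrative), ``event.set_results({'output': stdout})``
+        surfaces the dictionary to the operator via Juju's action results.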
+ + Args: + results: The result of the action as a Dict + """ + self.framework.model._backend.action_set(results) + + def log(self, message: str) -> None: + """Send a message that a user will see while the action is running. + + Args: + message: The message for the user. + """ + self.framework.model._backend.action_log(message) + + def fail(self, message: str = '') -> None: + """Report that this action has failed. + + Args: + message: Optional message to record why it has failed. + """ + self.framework.model._backend.action_fail(message) + + +class InstallEvent(HookEvent): + """Represents the `install` hook from Juju.""" + + +class StartEvent(HookEvent): + """Represents the `start` hook from Juju.""" + + +class StopEvent(HookEvent): + """Represents the `stop` hook from Juju.""" + + +class RemoveEvent(HookEvent): + """Represents the `remove` hook from Juju. """ + + +class ConfigChangedEvent(HookEvent): + """Represents the `config-changed` hook from Juju.""" + + +class UpdateStatusEvent(HookEvent): + """Represents the `update-status` hook from Juju.""" + + +class UpgradeCharmEvent(HookEvent): + """Represents the `upgrade-charm` hook from Juju. + + This will be triggered when a user has run `juju upgrade-charm`. It is run after Juju + has unpacked the upgraded charm code, and so this event will be handled with new code. + """ + + +class PreSeriesUpgradeEvent(HookEvent): + """Represents the `pre-series-upgrade` hook from Juju. + + This happens when a user has run `juju upgrade-series MACHINE prepare` and + will fire for each unit that is running on the machine, telling them that + the user is preparing to upgrade the Machine's series (eg trusty->bionic). + The charm should take actions to prepare for the upgrade (a database charm + would want to write out a version-independent dump of the database, so that + when a new version of the database is available in a new series, it can be + used.) + Once all units on a machine have run `pre-series-upgrade`, the user will + initiate the steps to actually upgrade the machine (eg `do-release-upgrade`). + When the upgrade has been completed, the :class:`PostSeriesUpgradeEvent` will fire. + """ + + +class PostSeriesUpgradeEvent(HookEvent): + """Represents the `post-series-upgrade` hook from Juju. + + This is run after the user has done a distribution upgrade (or rolled back + and kept the same series). It is called in response to + `juju upgrade-series MACHINE complete`. Charms are expected to do whatever + steps are necessary to reconfigure their applications for the new series. + """ + + +class LeaderElectedEvent(HookEvent): + """Represents the `leader-elected` hook from Juju. + + Juju will trigger this when a new lead unit is chosen for a given application. + This represents the leader of the charm information (not necessarily the primary + of a running application). The main utility is that charm authors can know + that only one unit will be a leader at any given time, so they can do + configuration, etc, that would otherwise require coordination between units. + (eg, selecting a password for a new relation) + """ + + +class LeaderSettingsChangedEvent(HookEvent): + """Represents the `leader-settings-changed` hook from Juju. + + Deprecated. This represents when a lead unit would call `leader-set` to inform + the other units of an application that they have new information to handle. + This has been deprecated in favor of using a Peer relation, and having the + leader set a value in the Application data bag for that peer relation. 
+ (see :class:`RelationChangedEvent`). + """ + + +class CollectMetricsEvent(HookEvent): + """Represents the `collect-metrics` hook from Juju. + + Note that events firing during a CollectMetricsEvent are currently + sandboxed in how they can interact with Juju. To report metrics + use :meth:`.add_metrics`. + """ + + def add_metrics(self, metrics: typing.Mapping, labels: typing.Mapping = None) -> None: + """Record metrics that have been gathered by the charm for this unit. + + Args: + metrics: A collection of {key: float} pairs that contains the + metrics that have been gathered + labels: {key:value} strings that can be applied to the + metrics that are being gathered + """ + self.framework.model._backend.add_metrics(metrics, labels) + + +class RelationEvent(HookEvent): + """A base class representing the various relation lifecycle events. + + Charmers should not be creating RelationEvents directly. The events will be + generated by the framework from Juju related events. Users can observe them + from the various `CharmBase.on[relation_name].relation_*` events. + + Attributes: + relation: The Relation involved in this event + app: The remote application that has triggered this event + unit: The remote unit that has triggered this event. This may be None + if the relation event was triggered as an Application level event + """ + + def __init__(self, handle, relation, app=None, unit=None): + super().__init__(handle) + + if unit is not None and unit.app != app: + raise RuntimeError( + 'cannot create RelationEvent with application {} and unit {}'.format(app, unit)) + + self.relation = relation + self.app = app + self.unit = unit + + def snapshot(self) -> dict: + """Used by the framework to serialize the event to disk. + + Not meant to be called by Charm code. + """ + snapshot = { + 'relation_name': self.relation.name, + 'relation_id': self.relation.id, + } + if self.app: + snapshot['app_name'] = self.app.name + if self.unit: + snapshot['unit_name'] = self.unit.name + return snapshot + + def restore(self, snapshot: dict) -> None: + """Used by the framework to deserialize the event from disk. + + Not meant to be called by Charm code. + """ + self.relation = self.framework.model.get_relation( + snapshot['relation_name'], snapshot['relation_id']) + + app_name = snapshot.get('app_name') + if app_name: + self.app = self.framework.model.get_app(app_name) + else: + self.app = None + + unit_name = snapshot.get('unit_name') + if unit_name: + self.unit = self.framework.model.get_unit(unit_name) + else: + self.unit = None + + +class RelationCreatedEvent(RelationEvent): + """Represents the `relation-created` hook from Juju. + + This is triggered when a new relation to another app is added in Juju. This + can occur before units for those applications have started. All existing + relations should be established before start. + """ + + +class RelationJoinedEvent(RelationEvent): + """Represents the `relation-joined` hook from Juju. + + This is triggered whenever a new unit of a related application joins the relation. + (eg, a unit was added to an existing related app, or a new relation was established + with an application that already had units.) + """ + + +class RelationChangedEvent(RelationEvent): + """Represents the `relation-changed` hook from Juju. + + This is triggered whenever there is a change to the data bucket for a related + application or unit. Look at `event.relation.data[event.unit/app]` to see the + new information. 
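+
+    A handler might read the remote databag roughly like this (an
+    illustrative sketch; the relation and key names are arbitrary, and
+    ``event.unit`` can be None for application-level changes)::
+
+        def _on_db_relation_changed(self, event):
+            host = event.relation.data[event.unit].get('host')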
+ """ + + +class RelationDepartedEvent(RelationEvent): + """Represents the `relation-departed` hook from Juju. + + This is the inverse of the RelationJoinedEvent, representing when a unit + is leaving the relation (the unit is being removed, the app is being removed, + the relation is being removed). It is fired once for each unit that is + going away. + """ + + +class RelationBrokenEvent(RelationEvent): + """Represents the `relation-broken` hook from Juju. + + If a relation is being removed (`juju remove-relation` or `juju remove-application`), + once all the units have been removed, RelationBrokenEvent will fire to signal + that the relationship has been fully terminated. + """ + + +class StorageEvent(HookEvent): + """Base class representing Storage related events.""" + + +class StorageAttachedEvent(StorageEvent): + """Represents the `storage-attached` hook from Juju. + + Called when new storage is available for the charm to use. + """ + + +class StorageDetachingEvent(StorageEvent): + """Represents the `storage-detaching` hook from Juju. + + Called when storage a charm has been using is going away. + """ + + +class CharmEvents(ObjectEvents): + """The events that are generated by Juju in response to the lifecycle of an application.""" + + install = EventSource(InstallEvent) + start = EventSource(StartEvent) + stop = EventSource(StopEvent) + remove = EventSource(RemoveEvent) + update_status = EventSource(UpdateStatusEvent) + config_changed = EventSource(ConfigChangedEvent) + upgrade_charm = EventSource(UpgradeCharmEvent) + pre_series_upgrade = EventSource(PreSeriesUpgradeEvent) + post_series_upgrade = EventSource(PostSeriesUpgradeEvent) + leader_elected = EventSource(LeaderElectedEvent) + leader_settings_changed = EventSource(LeaderSettingsChangedEvent) + collect_metrics = EventSource(CollectMetricsEvent) + + +class CharmBase(Object): + """Base class that represents the Charm overall. + + Usually this initialization is done by ops.main.main() rather than Charm authors + directly instantiating a Charm. + + Args: + framework: The framework responsible for managing the Model and events for this + Charm. + key: Ignored; will remove after deprecation period of the signature change. 
+ """ + + on = CharmEvents() + + def __init__(self, framework: Framework, key: typing.Optional = None): + super().__init__(framework, None) + + for relation_name in self.framework.meta.relations: + relation_name = relation_name.replace('-', '_') + self.on.define_event(relation_name + '_relation_created', RelationCreatedEvent) + self.on.define_event(relation_name + '_relation_joined', RelationJoinedEvent) + self.on.define_event(relation_name + '_relation_changed', RelationChangedEvent) + self.on.define_event(relation_name + '_relation_departed', RelationDepartedEvent) + self.on.define_event(relation_name + '_relation_broken', RelationBrokenEvent) + + for storage_name in self.framework.meta.storages: + storage_name = storage_name.replace('-', '_') + self.on.define_event(storage_name + '_storage_attached', StorageAttachedEvent) + self.on.define_event(storage_name + '_storage_detaching', StorageDetachingEvent) + + for action_name in self.framework.meta.actions: + action_name = action_name.replace('-', '_') + self.on.define_event(action_name + '_action', ActionEvent) + + @property + def app(self) -> model.Application: + """Application that this unit is part of.""" + return self.framework.model.app + + @property + def unit(self) -> model.Unit: + """Unit that this execution is responsible for.""" + return self.framework.model.unit + + @property + def meta(self) -> 'CharmMeta': + """CharmMeta of this charm. + """ + return self.framework.meta + + @property + def charm_dir(self) -> pathlib.Path: + """Root directory of the Charm as it is running. + """ + return self.framework.charm_dir + + +class CharmMeta: + """Object containing the metadata for the charm. + + This is read from metadata.yaml and/or actions.yaml. Generally charms will + define this information, rather than reading it at runtime. This class is + mostly for the framework to understand what the charm has defined. + + The maintainers, tags, terms, series, and extra_bindings attributes are all + lists of strings. The requires, provides, peers, relations, storage, + resources, and payloads attributes are all mappings of names to instances + of the respective RelationMeta, StorageMeta, ResourceMeta, or PayloadMeta. + + The relations attribute is a convenience accessor which includes all of the + requires, provides, and peers RelationMeta items. If needed, the role of + the relation definition can be obtained from its role attribute. + + Attributes: + name: The name of this charm + summary: Short description of what this charm does + description: Long description for this charm + maintainers: A list of strings of the email addresses of the maintainers + of this charm. + tags: Charm store tag metadata for categories associated with this charm. + terms: Charm store terms that should be agreed to before this charm can + be deployed. (Used for things like licensing issues.) + series: The list of supported OS series that this charm can support. + The first entry in the list is the default series that will be + used by deploy if no other series is requested by the user. + subordinate: True/False whether this charm is intended to be used as a + subordinate charm. + min_juju_version: If supplied, indicates this charm needs features that + are not available in older versions of Juju. + requires: A dict of {name: :class:`RelationMeta` } for each 'requires' relation. + provides: A dict of {name: :class:`RelationMeta` } for each 'provides' relation. + peers: A dict of {name: :class:`RelationMeta` } for each 'peer' relation. 
+ relations: A dict containing all :class:`RelationMeta` attributes (merged from other + sections) + storages: A dict of {name: :class:`StorageMeta`} for each defined storage. + resources: A dict of {name: :class:`ResourceMeta`} for each defined resource. + payloads: A dict of {name: :class:`PayloadMeta`} for each defined payload. + extra_bindings: A dict of additional named bindings that a charm can use + for network configuration. + actions: A dict of {name: :class:`ActionMeta`} for actions that the charm has defined. + Args: + raw: a mapping containing the contents of metadata.yaml + actions_raw: a mapping containing the contents of actions.yaml + """ + + def __init__(self, raw: dict = {}, actions_raw: dict = {}): + self.name = raw.get('name', '') + self.summary = raw.get('summary', '') + self.description = raw.get('description', '') + self.maintainers = [] + if 'maintainer' in raw: + self.maintainers.append(raw['maintainer']) + if 'maintainers' in raw: + self.maintainers.extend(raw['maintainers']) + self.tags = raw.get('tags', []) + self.terms = raw.get('terms', []) + self.series = raw.get('series', []) + self.subordinate = raw.get('subordinate', False) + self.min_juju_version = raw.get('min-juju-version') + self.requires = {name: RelationMeta(RelationRole.requires, name, rel) + for name, rel in raw.get('requires', {}).items()} + self.provides = {name: RelationMeta(RelationRole.provides, name, rel) + for name, rel in raw.get('provides', {}).items()} + self.peers = {name: RelationMeta(RelationRole.peer, name, rel) + for name, rel in raw.get('peers', {}).items()} + self.relations = {} + self.relations.update(self.requires) + self.relations.update(self.provides) + self.relations.update(self.peers) + self.storages = {name: StorageMeta(name, storage) + for name, storage in raw.get('storage', {}).items()} + self.resources = {name: ResourceMeta(name, res) + for name, res in raw.get('resources', {}).items()} + self.payloads = {name: PayloadMeta(name, payload) + for name, payload in raw.get('payloads', {}).items()} + self.extra_bindings = raw.get('extra-bindings', {}) + self.actions = {name: ActionMeta(name, action) for name, action in actions_raw.items()} + + @classmethod + def from_yaml( + cls, metadata: typing.Union[str, typing.TextIO], + actions: typing.Optional[typing.Union[str, typing.TextIO]] = None): + """Instantiate a CharmMeta from a YAML description of metadata.yaml. + + Args: + metadata: A YAML description of charm metadata (name, relations, etc.) + This can be a simple string, or a file-like object. (passed to `yaml.safe_load`). + actions: YAML description of Actions for this charm (eg actions.yaml) + """ + meta = _loadYaml(metadata) + raw_actions = {} + if actions is not None: + raw_actions = _loadYaml(actions) + return cls(meta, raw_actions) + + +class RelationRole(enum.Enum): + peer = 'peer' + requires = 'requires' + provides = 'provides' + + def is_peer(self) -> bool: + """Return whether the current role is peer. + + A convenience to avoid having to import charm. + """ + return self is RelationRole.peer + + +class RelationMeta: + """Object containing metadata about a relation definition. + + Should not be constructed directly by Charm code. Is gotten from one of + :attr:`CharmMeta.peers`, :attr:`CharmMeta.requires`, :attr:`CharmMeta.provides`, + or :attr:`CharmMeta.relations`. + + Attributes: + role: This is one of peer/requires/provides + relation_name: Name of this relation from metadata.yaml + interface_name: Optional definition of the interface protocol. 
+ scope: "global" or "container" scope based on how the relation should be used. + """ + + def __init__(self, role: RelationRole, relation_name: str, raw: dict): + if not isinstance(role, RelationRole): + raise TypeError("role should be a Role, not {!r}".format(role)) + self.role = role + self.relation_name = relation_name + self.interface_name = raw['interface'] + self.scope = raw.get('scope') + + +class StorageMeta: + """Object containing metadata about a storage definition.""" + + def __init__(self, name, raw): + self.storage_name = name + self.type = raw['type'] + self.description = raw.get('description', '') + self.shared = raw.get('shared', False) + self.read_only = raw.get('read-only', False) + self.minimum_size = raw.get('minimum-size') + self.location = raw.get('location') + self.multiple_range = None + if 'multiple' in raw: + range = raw['multiple']['range'] + if '-' not in range: + self.multiple_range = (int(range), int(range)) + else: + range = range.split('-') + self.multiple_range = (int(range[0]), int(range[1]) if range[1] else None) + + +class ResourceMeta: + """Object containing metadata about a resource definition.""" + + def __init__(self, name, raw): + self.resource_name = name + self.type = raw['type'] + self.filename = raw.get('filename', None) + self.description = raw.get('description', '') + + +class PayloadMeta: + """Object containing metadata about a payload definition.""" + + def __init__(self, name, raw): + self.payload_name = name + self.type = raw['type'] + + +class ActionMeta: + """Object containing metadata about an action's definition.""" + + def __init__(self, name, raw=None): + raw = raw or {} + self.name = name + self.title = raw.get('title', '') + self.description = raw.get('description', '') + self.parameters = raw.get('params', {}) # {: } + self.required = raw.get('required', []) # [, ...] diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/ops/framework.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/ops/framework.py new file mode 100755 index 0000000000000000000000000000000000000000..b7c4749ff2b5bfb4f354bf1a8d4cd6ed64cf0da5 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/ops/framework.py @@ -0,0 +1,1067 @@ +# Copyright 2020 Canonical Ltd. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import collections +import collections.abc +import inspect +import keyword +import logging +import marshal +import os +import pathlib +import pdb +import re +import sys +import types +import weakref + +from ops import charm +from ops.storage import ( + NoSnapshotError, + SQLiteStorage, +) + +logger = logging.getLogger(__name__) + + +class Handle: + """Handle defines a name for an object in the form of a hierarchical path. + + The provided parent is the object (or that object's handle) that this handle + sits under, or None if the object identified by this handle stands by itself + as the root of its own hierarchy. 
+ + The handle kind is a string that defines a namespace so objects with the + same parent and kind will have unique keys. + + The handle key is a string uniquely identifying the object. No other objects + under the same parent and kind may have the same key. + """ + + def __init__(self, parent, kind, key): + if parent and not isinstance(parent, Handle): + parent = parent.handle + self._parent = parent + self._kind = kind + self._key = key + if parent: + if key: + self._path = "{}/{}[{}]".format(parent, kind, key) + else: + self._path = "{}/{}".format(parent, kind) + else: + if key: + self._path = "{}[{}]".format(kind, key) + else: + self._path = "{}".format(kind) + + def nest(self, kind, key): + return Handle(self, kind, key) + + def __hash__(self): + return hash((self.parent, self.kind, self.key)) + + def __eq__(self, other): + return (self.parent, self.kind, self.key) == (other.parent, other.kind, other.key) + + def __str__(self): + return self.path + + @property + def parent(self): + return self._parent + + @property + def kind(self): + return self._kind + + @property + def key(self): + return self._key + + @property + def path(self): + return self._path + + @classmethod + def from_path(cls, path): + handle = None + for pair in path.split("/"): + pair = pair.split("[") + good = False + if len(pair) == 1: + kind, key = pair[0], None + good = True + elif len(pair) == 2: + kind, key = pair + if key and key[-1] == ']': + key = key[:-1] + good = True + if not good: + raise RuntimeError("attempted to restore invalid handle path {}".format(path)) + handle = Handle(handle, kind, key) + return handle + + +class EventBase: + + def __init__(self, handle): + self.handle = handle + self.deferred = False + + def defer(self): + self.deferred = True + + def snapshot(self): + """Return the snapshot data that should be persisted. + + Subclasses must override to save any custom state. + """ + return None + + def restore(self, snapshot): + """Restore the value state from the given snapshot. + + Subclasses must override to restore their custom state. + """ + self.deferred = False + + +class EventSource: + """EventSource wraps an event type with a descriptor to facilitate observing and emitting. + + It is generally used as: + + class SomethingHappened(EventBase): + pass + + class SomeObject(Object): + something_happened = EventSource(SomethingHappened) + + With that, instances of that type will offer the someobj.something_happened + attribute which is a BoundEvent and may be used to emit and observe the event. + """ + + def __init__(self, event_type): + if not isinstance(event_type, type) or not issubclass(event_type, EventBase): + raise RuntimeError( + 'Event requires a subclass of EventBase as an argument, got {}'.format(event_type)) + self.event_type = event_type + self.event_kind = None + self.emitter_type = None + + def _set_name(self, emitter_type, event_kind): + if self.event_kind is not None: + raise RuntimeError( + 'EventSource({}) reused as {}.{} and {}.{}'.format( + self.event_type.__name__, + self.emitter_type.__name__, + self.event_kind, + emitter_type.__name__, + event_kind, + )) + self.event_kind = event_kind + self.emitter_type = emitter_type + + def __get__(self, emitter, emitter_type=None): + if emitter is None: + return self + # Framework might not be available if accessed as CharmClass.on.event + # rather than charm_instance.on.event, but in that case it couldn't be + # emitted anyway, so there's no point to registering it. 
        framework = getattr(emitter, 'framework', None)
+        if framework is not None:
+            framework.register_type(self.event_type, emitter, self.event_kind)
+        return BoundEvent(emitter, self.event_type, self.event_kind)
+
+
+class BoundEvent:
+
+    def __repr__(self):
+        return '<BoundEvent {} bound to {}.{} at {}>'.format(
+            self.event_type.__name__,
+            type(self.emitter).__name__,
+            self.event_kind,
+            hex(id(self)),
+        )
+
+    def __init__(self, emitter, event_type, event_kind):
+        self.emitter = emitter
+        self.event_type = event_type
+        self.event_kind = event_kind
+
+    def emit(self, *args, **kwargs):
+        """Emit event to all registered observers.
+
+        The current storage state is committed before and after each observer is notified.
+        """
+        framework = self.emitter.framework
+        key = framework._next_event_key()
+        event = self.event_type(Handle(self.emitter, self.event_kind, key), *args, **kwargs)
+        framework._emit(event)
+
+
+class HandleKind:
+    """Helper descriptor to define the Object.handle_kind field.
+
+    The handle_kind for an object defaults to its type name, but it may
+    be explicitly overridden if desired.
+    """
+
+    def __get__(self, obj, obj_type):
+        kind = obj_type.__dict__.get("handle_kind")
+        if kind:
+            return kind
+        return obj_type.__name__
+
+
+class _Metaclass(type):
+    """Helper class to ensure proper instantiation of Object-derived classes.
+
+    This class currently has a single purpose: events derived from EventSource
+    that are class attributes of Object-derived classes need to be told what
+    their name is in that class. For example, in
+
+        class SomeObject(Object):
+            something_happened = EventSource(SomethingHappened)
+
+    the instance of EventSource needs to know it's called 'something_happened'.
+
+    Starting from python 3.6 we could use __set_name__ on EventSource for this,
+    but until then this (meta)class does the equivalent work.
+
+    TODO: when we drop support for 3.5 drop this class, and rename _set_name in
+    EventSource to __set_name__; everything should continue to work.
+
+    """
+
+    def __new__(typ, *a, **kw):
+        k = super().__new__(typ, *a, **kw)
+        # k is now the Object-derived class; loop over its class attributes
+        for n, v in vars(k).items():
+            # we could do duck typing here if we want to support
+            # non-EventSource-derived shenanigans. We don't.
+            if isinstance(v, EventSource):
+                # this is what 3.6+ does automatically for us:
+                v._set_name(k, n)
+        return k
+
+
+class Object(metaclass=_Metaclass):
+
+    handle_kind = HandleKind()
+
+    def __init__(self, parent, key):
+        kind = self.handle_kind
+        if isinstance(parent, Framework):
+            self.framework = parent
+            # Avoid Framework instances having a circular reference to themselves.
+            if self.framework is self:
+                self.framework = weakref.proxy(self.framework)
+            self.handle = Handle(None, kind, key)
+        else:
+            self.framework = parent.framework
+            self.handle = Handle(parent, kind, key)
+        self.framework._track(self)
+
+        # TODO Detect conflicting handles here.
+
+    @property
+    def model(self):
+        return self.framework.model
+
+
+class ObjectEvents(Object):
+    """Convenience type to allow defining .on attributes at class level."""
+
+    handle_kind = "on"
+
+    def __init__(self, parent=None, key=None):
+        if parent is not None:
+            super().__init__(parent, key)
+        else:
+            self._cache = weakref.WeakKeyDictionary()
+
+    def __get__(self, emitter, emitter_type):
+        if emitter is None:
+            return self
+        instance = self._cache.get(emitter)
+        if instance is None:
+            # Same type, different instance, more data. Doing this unusual construct
+            # means people can subclass just this one class to have their own 'on'.
+            instance = self._cache[emitter] = type(self)(emitter)
+        return instance
+
+    @classmethod
+    def define_event(cls, event_kind, event_type):
+        """Define an event on this type at runtime.
+
+        cls: a type to define an event on.
+
+        event_kind: an attribute name that will be used to access the
+            event. Must be a valid python identifier, not a keyword
+            or an existing attribute.
+
+        event_type: a type of the event to define.
+
+        """
+        prefix = 'unable to define an event with event_kind that '
+        if not event_kind.isidentifier():
+            raise RuntimeError(prefix + 'is not a valid python identifier: ' + event_kind)
+        elif keyword.iskeyword(event_kind):
+            raise RuntimeError(prefix + 'is a python keyword: ' + event_kind)
+        try:
+            getattr(cls, event_kind)
+            raise RuntimeError(
+                prefix + 'overlaps with an existing type {} attribute: {}'.format(cls, event_kind))
+        except AttributeError:
+            pass
+
+        event_descriptor = EventSource(event_type)
+        event_descriptor._set_name(cls, event_kind)
+        setattr(cls, event_kind, event_descriptor)
+
+    def events(self):
+        """Return a mapping of event_kinds to bound_events for all available events."""
+        events_map = {}
+        # We have to iterate over the class rather than the instance, so that
+        # properties which might themselves call this method (e.g. event views)
+        # are not evaluated, which would lead to infinite recursion.
+        for attr_name, attr_value in inspect.getmembers(type(self)):
+            if isinstance(attr_value, EventSource):
+                # We actually care about the bound_event, however, since it
+                # provides the most info for users of this method.
+                event_kind = attr_name
+                bound_event = getattr(self, event_kind)
+                events_map[event_kind] = bound_event
+        return events_map
+
+    def __getitem__(self, key):
+        return PrefixedEvents(self, key)
+
+
+class PrefixedEvents:
+
+    def __init__(self, emitter, key):
+        self._emitter = emitter
+        self._prefix = key.replace("-", "_") + '_'
+
+    def __getattr__(self, name):
+        return getattr(self._emitter, self._prefix + name)
+
+
+class PreCommitEvent(EventBase):
+    pass
+
+
+class CommitEvent(EventBase):
+    pass
+
+
+class FrameworkEvents(ObjectEvents):
+    pre_commit = EventSource(PreCommitEvent)
+    commit = EventSource(CommitEvent)
+
+
+class NoTypeError(Exception):
+
+    def __init__(self, handle_path):
+        self.handle_path = handle_path
+
+    def __str__(self):
+        return "cannot restore {} since no class was registered for it".format(self.handle_path)
+
+
+# the message to show to the user when a pdb breakpoint goes active
+_BREAKPOINT_WELCOME_MESSAGE = """
+Starting pdb to debug charm operator.
+Run `h` for help, `c` to continue, or `exit`/CTRL-d to abort.
+Future breakpoints may interrupt execution again.
+More details at https://discourse.jujucharms.com/t/debugging-charm-hooks
+
+"""
+
+
+_event_regex = r'^(|.*/)on/[a-zA-Z_]+\[\d+\]$'
+
+
+class Framework(Object):
+
+    on = FrameworkEvents()
+
+    # Override properties from Object so that we can set them in __init__.
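+    # (Object exposes `model` as a read-only property; declaring plain class
+    # attributes here shadows that, so instances can assign these directly.)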
+ model = None + meta = None + charm_dir = None + + def __init__(self, storage, charm_dir, meta, model): + + super().__init__(self, None) + + self.charm_dir = charm_dir + self.meta = meta + self.model = model + self._observers = [] # [(observer_path, method_name, parent_path, event_key)] + self._observer = weakref.WeakValueDictionary() # {observer_path: observer} + self._objects = weakref.WeakValueDictionary() + self._type_registry = {} # {(parent_path, kind): cls} + self._type_known = set() # {cls} + + if isinstance(storage, (str, pathlib.Path)): + logger.warning( + "deprecated: Framework now takes a Storage not a path") + storage = SQLiteStorage(storage) + self._storage = storage + + # We can't use the higher-level StoredState because it relies on events. + self.register_type(StoredStateData, None, StoredStateData.handle_kind) + stored_handle = Handle(None, StoredStateData.handle_kind, '_stored') + try: + self._stored = self.load_snapshot(stored_handle) + except NoSnapshotError: + self._stored = StoredStateData(self, '_stored') + self._stored['event_count'] = 0 + + # Hook into builtin breakpoint, so if Python >= 3.7, devs will be able to just do + # breakpoint(); if Python < 3.7, this doesn't affect anything + sys.breakpointhook = self.breakpoint + + # Flag to indicate that we already presented the welcome message in a debugger breakpoint + self._breakpoint_welcomed = False + + # Parse once the env var, which may be used multiple times later + debug_at = os.environ.get('JUJU_DEBUG_AT') + self._juju_debug_at = debug_at.split(',') if debug_at else () + + def close(self): + self._storage.close() + + def _track(self, obj): + """Track object and ensure it is the only object created using its handle path.""" + if obj is self: + # Framework objects don't track themselves + return + if obj.handle.path in self.framework._objects: + raise RuntimeError( + 'two objects claiming to be {} have been created'.format(obj.handle.path)) + self._objects[obj.handle.path] = obj + + def _forget(self, obj): + """Stop tracking the given object. See also _track.""" + self._objects.pop(obj.handle.path, None) + + def commit(self): + # Give a chance for objects to persist data they want to before a commit is made. + self.on.pre_commit.emit() + # Make sure snapshots are saved by instances of StoredStateData. Any possible state + # modifications in on_commit handlers of instances of other classes will not be persisted. + self.on.commit.emit() + # Save our event count after all events have been emitted. + self.save_snapshot(self._stored) + self._storage.commit() + + def register_type(self, cls, parent, kind=None): + if parent and not isinstance(parent, Handle): + parent = parent.handle + if parent: + parent_path = parent.path + else: + parent_path = None + if not kind: + kind = cls.handle_kind + self._type_registry[(parent_path, kind)] = cls + self._type_known.add(cls) + + def save_snapshot(self, value): + """Save a persistent snapshot of the provided value. + + The provided value must implement the following interface: + + value.handle = Handle(...) + value.snapshot() => {...} # Simple builtin types only. + value.restore(snapshot) # Restore custom state from prior snapshot. 
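+
+        (A note on types: the snapshot must survive a round trip through
+        ``marshal``, so only simple builtin types such as dicts, lists,
+        strings and numbers are acceptable; see the validation below.)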
+ """ + if type(value) not in self._type_known: + raise RuntimeError( + 'cannot save {} values before registering that type'.format(type(value).__name__)) + data = value.snapshot() + + # Use marshal as a validator, enforcing the use of simple types, as we later the + # information is really pickled, which is too error prone for future evolution of the + # stored data (e.g. if the developer stores a custom object and later changes its + # class name; when unpickling the original class will not be there and event + # data loading will fail). + try: + marshal.dumps(data) + except ValueError: + msg = "unable to save the data for {}, it must contain only simple types: {!r}" + raise ValueError(msg.format(value.__class__.__name__, data)) + + self._storage.save_snapshot(value.handle.path, data) + + def load_snapshot(self, handle): + parent_path = None + if handle.parent: + parent_path = handle.parent.path + cls = self._type_registry.get((parent_path, handle.kind)) + if not cls: + raise NoTypeError(handle.path) + data = self._storage.load_snapshot(handle.path) + obj = cls.__new__(cls) + obj.framework = self + obj.handle = handle + obj.restore(data) + self._track(obj) + return obj + + def drop_snapshot(self, handle): + self._storage.drop_snapshot(handle.path) + + def observe(self, bound_event: BoundEvent, observer: types.MethodType): + """Register observer to be called when bound_event is emitted. + + The bound_event is generally provided as an attribute of the object that emits + the event, and is created in this style: + + class SomeObject: + something_happened = Event(SomethingHappened) + + That event may be observed as: + + framework.observe(someobj.something_happened, self._on_something_happened) + + Raises: + RuntimeError: if bound_event or observer are the wrong type. + """ + if not isinstance(bound_event, BoundEvent): + raise RuntimeError( + 'Framework.observe requires a BoundEvent as second parameter, got {}'.format( + bound_event)) + if not isinstance(observer, types.MethodType): + # help users of older versions of the framework + if isinstance(observer, charm.CharmBase): + raise TypeError( + 'observer methods must now be explicitly provided;' + ' please replace observe(self.on.{0}, self)' + ' with e.g. observe(self.on.{0}, self._on_{0})'.format( + bound_event.event_kind)) + raise RuntimeError( + 'Framework.observe requires a method as third parameter, got {}'.format(observer)) + + event_type = bound_event.event_type + event_kind = bound_event.event_kind + emitter = bound_event.emitter + + self.register_type(event_type, emitter, event_kind) + + if hasattr(emitter, "handle"): + emitter_path = emitter.handle.path + else: + raise RuntimeError( + 'event emitter {} must have a "handle" attribute'.format(type(emitter).__name__)) + + # Validate that the method has an acceptable call signature. + sig = inspect.signature(observer) + # Self isn't included in the params list, so the first arg will be the event. + extra_params = list(sig.parameters.values())[1:] + + method_name = observer.__name__ + observer = observer.__self__ + if not sig.parameters: + raise TypeError( + '{}.{} must accept event parameter'.format(type(observer).__name__, method_name)) + elif any(param.default is inspect.Parameter.empty for param in extra_params): + # Allow for additional optional params, since there's no reason to exclude them, but + # required params will break. 
+ raise TypeError( + '{}.{} has extra required parameter'.format(type(observer).__name__, method_name)) + + # TODO Prevent the exact same parameters from being registered more than once. + + self._observer[observer.handle.path] = observer + self._observers.append((observer.handle.path, method_name, emitter_path, event_kind)) + + def _next_event_key(self): + """Return the next event key that should be used, incrementing the internal counter.""" + # Increment the count first; this means the keys will start at 1, and 0 + # means no events have been emitted. + self._stored['event_count'] += 1 + return str(self._stored['event_count']) + + def _emit(self, event): + """See BoundEvent.emit for the public way to call this.""" + + saved = False + event_path = event.handle.path + event_kind = event.handle.kind + parent_path = event.handle.parent.path + # TODO Track observers by (parent_path, event_kind) rather than as a list of + # all observers. Avoiding linear search through all observers for every event + for observer_path, method_name, _parent_path, _event_kind in self._observers: + if _parent_path != parent_path: + continue + if _event_kind and _event_kind != event_kind: + continue + if not saved: + # Save the event for all known observers before the first notification + # takes place, so that either everyone interested sees it, or nobody does. + self.save_snapshot(event) + saved = True + # Again, only commit this after all notices are saved. + self._storage.save_notice(event_path, observer_path, method_name) + if saved: + self._reemit(event_path) + + def reemit(self): + """Reemit previously deferred events to the observers that deferred them. + + Only the specific observers that have previously deferred the event will be + notified again. Observers that asked to be notified about events after it's + been first emitted won't be notified, as that would mean potentially observing + events out of order. + """ + self._reemit() + + def _reemit(self, single_event_path=None): + last_event_path = None + deferred = True + for event_path, observer_path, method_name in self._storage.notices(single_event_path): + event_handle = Handle.from_path(event_path) + + if last_event_path != event_path: + if not deferred and last_event_path is not None: + self._storage.drop_snapshot(last_event_path) + last_event_path = event_path + deferred = False + + try: + event = self.load_snapshot(event_handle) + except NoTypeError: + self._storage.drop_notice(event_path, observer_path, method_name) + continue + + event.deferred = False + observer = self._observer.get(observer_path) + if observer: + custom_handler = getattr(observer, method_name, None) + if custom_handler: + event_is_from_juju = isinstance(event, charm.HookEvent) + event_is_action = isinstance(event, charm.ActionEvent) + if (event_is_from_juju or event_is_action) and 'hook' in self._juju_debug_at: + # Present the welcome message and run under PDB. + self._show_debug_code_message() + pdb.runcall(custom_handler, event) + else: + # Regular call to the registered method. + custom_handler(event) + + if event.deferred: + deferred = True + else: + self._storage.drop_notice(event_path, observer_path, method_name) + # We intentionally consider this event to be dead and reload it from + # scratch in the next path. + self.framework._forget(event) + + if not deferred and last_event_path is not None: + self._storage.drop_snapshot(last_event_path) + + def _show_debug_code_message(self): + """Present the welcome message (only once!) 
when using debugger functionality.""" + if not self._breakpoint_welcomed: + self._breakpoint_welcomed = True + print(_BREAKPOINT_WELCOME_MESSAGE, file=sys.stderr, end='') + + def breakpoint(self, name=None): + """Add breakpoint, optionally named, at the place where this method is called. + + For the breakpoint to be activated the JUJU_DEBUG_AT environment variable + must be set to "all" or to the specific name parameter provided, if any. In every + other situation calling this method does nothing. + + The framework also provides a standard breakpoint named "hook", that will + stop execution when a hook event is about to be handled. + + For those reasons, the "all" and "hook" breakpoint names are reserved. + """ + # If given, validate the name comply with all the rules + if name is not None: + if not isinstance(name, str): + raise TypeError('breakpoint names must be strings') + if name in ('hook', 'all'): + raise ValueError('breakpoint names "all" and "hook" are reserved') + if not re.match(r'^[a-z0-9]([a-z0-9\-]*[a-z0-9])?$', name): + raise ValueError('breakpoint names must look like "foo" or "foo-bar"') + + indicated_breakpoints = self._juju_debug_at + if not indicated_breakpoints: + return + + if 'all' in indicated_breakpoints or name in indicated_breakpoints: + self._show_debug_code_message() + + # If we call set_trace() directly it will open the debugger *here*, so indicating + # it to use our caller's frame + code_frame = inspect.currentframe().f_back + pdb.Pdb().set_trace(code_frame) + else: + logger.warning( + "Breakpoint %r skipped (not found in the requested breakpoints: %s)", + name, indicated_breakpoints) + + def remove_unreferenced_events(self): + """Remove events from storage that are not referenced. + + In older versions of the framework, events that had no observers would get recorded but + never deleted. This makes a best effort to find these events and remove them from the + database. + """ + event_regex = re.compile(_event_regex) + to_remove = [] + for handle_path in self._storage.list_snapshots(): + if event_regex.match(handle_path): + notices = self._storage.notices(handle_path) + if next(notices, None) is None: + # There are no notices for this handle_path, it is valid to remove it + to_remove.append(handle_path) + for handle_path in to_remove: + self._storage.drop_snapshot(handle_path) + + +class StoredStateData(Object): + + def __init__(self, parent, attr_name): + super().__init__(parent, attr_name) + self._cache = {} + self.dirty = False + + def __getitem__(self, key): + return self._cache.get(key) + + def __setitem__(self, key, value): + self._cache[key] = value + self.dirty = True + + def __contains__(self, key): + return key in self._cache + + def snapshot(self): + return self._cache + + def restore(self, snapshot): + self._cache = snapshot + self.dirty = False + + def on_commit(self, event): + if self.dirty: + self.framework.save_snapshot(self) + self.dirty = False + + +class BoundStoredState: + + def __init__(self, parent, attr_name): + parent.framework.register_type(StoredStateData, parent) + + handle = Handle(parent, StoredStateData.handle_kind, attr_name) + try: + data = parent.framework.load_snapshot(handle) + except NoSnapshotError: + data = StoredStateData(parent, attr_name) + + # __dict__ is used to avoid infinite recursion. 
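+        # (a plain `self._data = data` would invoke __setattr__, which reads
+        # self._data via __getattr__ before it exists and would recurse forever)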
+ self.__dict__["_data"] = data + self.__dict__["_attr_name"] = attr_name + + parent.framework.observe(parent.framework.on.commit, self._data.on_commit) + + def __getattr__(self, key): + # "on" is the only reserved key that can't be used in the data map. + if key == "on": + return self._data.on + if key not in self._data: + raise AttributeError("attribute '{}' is not stored".format(key)) + return _wrap_stored(self._data, self._data[key]) + + def __setattr__(self, key, value): + if key == "on": + raise AttributeError("attribute 'on' is reserved and cannot be set") + + value = _unwrap_stored(self._data, value) + + if not isinstance(value, (type(None), int, float, str, bytes, list, dict, set)): + raise AttributeError( + 'attribute {!r} cannot be a {}: must be int/float/dict/list/etc'.format( + key, type(value).__name__)) + + self._data[key] = _unwrap_stored(self._data, value) + + def set_default(self, **kwargs): + """"Set the value of any given key if it has not already been set""" + for k, v in kwargs.items(): + if k not in self._data: + self._data[k] = v + + +class StoredState: + """A class used to store data the charm needs persisted across invocations. + + Example:: + + class MyClass(Object): + _stored = StoredState() + + Instances of `MyClass` can transparently save state between invocations by + setting attributes on `_stored`. Initial state should be set with + `set_default` on the bound object, that is:: + + class MyClass(Object): + _stored = StoredState() + + def __init__(self, parent, key): + super().__init__(parent, key) + self._stored.set_default(seen=set()) + self.framework.observe(self.on.seen, self._on_seen) + + def _on_seen(self, event): + self._stored.seen.add(event.uuid) + + """ + + def __init__(self): + self.parent_type = None + self.attr_name = None + + def __get__(self, parent, parent_type=None): + if self.parent_type is not None and self.parent_type not in parent_type.mro(): + # the StoredState instance is being shared between two unrelated classes + # -> unclear what is exepcted of us -> bail out + raise RuntimeError( + 'StoredState shared by {} and {}'.format( + self.parent_type.__name__, parent_type.__name__)) + + if parent is None: + # accessing via the class directly (e.g. MyClass.stored) + return self + + bound = None + if self.attr_name is not None: + bound = parent.__dict__.get(self.attr_name) + if bound is not None: + # we already have the thing from a previous pass, huzzah + return bound + + # need to find ourselves amongst the parent's bases + for cls in parent_type.mro(): + for attr_name, attr_value in cls.__dict__.items(): + if attr_value is not self: + continue + # we've found ourselves! is it the first time? 
+ if bound is not None: + # the StoredState instance is being stored in two different + # attributes -> unclear what is expected of us -> bail out + raise RuntimeError("StoredState shared by {0}.{1} and {0}.{2}".format( + cls.__name__, self.attr_name, attr_name)) + # we've found ourselves for the first time; save where, and bind the object + self.attr_name = attr_name + self.parent_type = cls + bound = BoundStoredState(parent, attr_name) + + if bound is not None: + # cache the bound object to avoid the expensive lookup the next time + # (don't use setattr, to keep things symmetric with the fast-path lookup above) + parent.__dict__[self.attr_name] = bound + return bound + + raise AttributeError( + 'cannot find {} attribute in type {}'.format( + self.__class__.__name__, parent_type.__name__)) + + +def _wrap_stored(parent_data, value): + t = type(value) + if t is dict: + return StoredDict(parent_data, value) + if t is list: + return StoredList(parent_data, value) + if t is set: + return StoredSet(parent_data, value) + return value + + +def _unwrap_stored(parent_data, value): + t = type(value) + if t is StoredDict or t is StoredList or t is StoredSet: + return value._under + return value + + +class StoredDict(collections.abc.MutableMapping): + + def __init__(self, stored_data, under): + self._stored_data = stored_data + self._under = under + + def __getitem__(self, key): + return _wrap_stored(self._stored_data, self._under[key]) + + def __setitem__(self, key, value): + self._under[key] = _unwrap_stored(self._stored_data, value) + self._stored_data.dirty = True + + def __delitem__(self, key): + del self._under[key] + self._stored_data.dirty = True + + def __iter__(self): + return self._under.__iter__() + + def __len__(self): + return len(self._under) + + def __eq__(self, other): + if isinstance(other, StoredDict): + return self._under == other._under + elif isinstance(other, collections.abc.Mapping): + return self._under == other + else: + return NotImplemented + + +class StoredList(collections.abc.MutableSequence): + + def __init__(self, stored_data, under): + self._stored_data = stored_data + self._under = under + + def __getitem__(self, index): + return _wrap_stored(self._stored_data, self._under[index]) + + def __setitem__(self, index, value): + self._under[index] = _unwrap_stored(self._stored_data, value) + self._stored_data.dirty = True + + def __delitem__(self, index): + del self._under[index] + self._stored_data.dirty = True + + def __len__(self): + return len(self._under) + + def insert(self, index, value): + self._under.insert(index, value) + self._stored_data.dirty = True + + def append(self, value): + self._under.append(value) + self._stored_data.dirty = True + + def __eq__(self, other): + if isinstance(other, StoredList): + return self._under == other._under + elif isinstance(other, collections.abc.Sequence): + return self._under == other + else: + return NotImplemented + + def __lt__(self, other): + if isinstance(other, StoredList): + return self._under < other._under + elif isinstance(other, collections.abc.Sequence): + return self._under < other + else: + return NotImplemented + + def __le__(self, other): + if isinstance(other, StoredList): + return self._under <= other._under + elif isinstance(other, collections.abc.Sequence): + return self._under <= other + else: + return NotImplemented + + def __gt__(self, other): + if isinstance(other, StoredList): + return self._under > other._under + elif isinstance(other, collections.abc.Sequence): + return self._under > other + else: + 
return NotImplemented
+
+    def __ge__(self, other):
+        if isinstance(other, StoredList):
+            return self._under >= other._under
+        elif isinstance(other, collections.abc.Sequence):
+            return self._under >= other
+        else:
+            return NotImplemented
+
+
+class StoredSet(collections.abc.MutableSet):
+
+    def __init__(self, stored_data, under):
+        self._stored_data = stored_data
+        self._under = under
+
+    def add(self, key):
+        self._under.add(key)
+        self._stored_data.dirty = True
+
+    def discard(self, key):
+        self._under.discard(key)
+        self._stored_data.dirty = True
+
+    def __contains__(self, key):
+        return key in self._under
+
+    def __iter__(self):
+        return self._under.__iter__()
+
+    def __len__(self):
+        return len(self._under)
+
+    @classmethod
+    def _from_iterable(cls, it):
+        """Construct an instance of the class from any iterable input.
+
+        Per https://docs.python.org/3/library/collections.abc.html
+        if the Set mixin is being used in a class with a different constructor signature,
+        you will need to override _from_iterable() with a classmethod that can construct
+        new instances from an iterable argument.
+        """
+        return set(it)
+
+    def __le__(self, other):
+        if isinstance(other, StoredSet):
+            return self._under <= other._under
+        elif isinstance(other, collections.abc.Set):
+            return self._under <= other
+        else:
+            return NotImplemented
+
+    def __ge__(self, other):
+        if isinstance(other, StoredSet):
+            return self._under >= other._under
+        elif isinstance(other, collections.abc.Set):
+            return self._under >= other
+        else:
+            return NotImplemented
+
+    def __eq__(self, other):
+        if isinstance(other, StoredSet):
+            return self._under == other._under
+        elif isinstance(other, collections.abc.Set):
+            return self._under == other
+        else:
+            return NotImplemented
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/ops/jujuversion.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/ops/jujuversion.py
new file mode 100755
index 0000000000000000000000000000000000000000..b2b8177dbe396f0d8c46b86e26af6b4e54ea046d
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/ops/jujuversion.py
@@ -0,0 +1,98 @@
+# Copyright 2020 Canonical Ltd.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import os
+import re
+from functools import total_ordering
+
+
+@total_ordering
+class JujuVersion:
+
+    PATTERN = r'''^
+    (?P<major>\d{1,9})\.(?P<minor>\d{1,9})        # <major> and <minor> numbers are always there
+    ((?:\.|-(?P<tag>[a-z]+))(?P<patch>\d{1,9}))?  # sometimes with .<patch> or -<tag><patch>
+    (\.(?P<build>\d{1,9}))?$                      # and sometimes with a .<build> number.
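+    # e.g. "2.8.1", "2.9-rc1.2" or "2.8.1.1" (illustrative strings this pattern accepts)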
+ ''' + + def __init__(self, version): + m = re.match(self.PATTERN, version, re.VERBOSE) + if not m: + raise RuntimeError('"{}" is not a valid Juju version string'.format(version)) + + d = m.groupdict() + self.major = int(m.group('major')) + self.minor = int(m.group('minor')) + self.tag = d['tag'] or '' + self.patch = int(d['patch'] or 0) + self.build = int(d['build'] or 0) + + def __repr__(self): + if self.tag: + s = '{}.{}-{}{}'.format(self.major, self.minor, self.tag, self.patch) + else: + s = '{}.{}.{}'.format(self.major, self.minor, self.patch) + if self.build > 0: + s += '.{}'.format(self.build) + return s + + def __eq__(self, other): + if self is other: + return True + if isinstance(other, str): + other = type(self)(other) + elif not isinstance(other, JujuVersion): + raise RuntimeError('cannot compare Juju version "{}" with "{}"'.format(self, other)) + return ( + self.major == other.major + and self.minor == other.minor + and self.tag == other.tag + and self.build == other.build + and self.patch == other.patch) + + def __lt__(self, other): + if self is other: + return False + if isinstance(other, str): + other = type(self)(other) + elif not isinstance(other, JujuVersion): + raise RuntimeError('cannot compare Juju version "{}" with "{}"'.format(self, other)) + + if self.major != other.major: + return self.major < other.major + elif self.minor != other.minor: + return self.minor < other.minor + elif self.tag != other.tag: + if not self.tag: + return False + elif not other.tag: + return True + return self.tag < other.tag + elif self.patch != other.patch: + return self.patch < other.patch + elif self.build != other.build: + return self.build < other.build + return False + + @classmethod + def from_environ(cls) -> 'JujuVersion': + """Build a JujuVersion from JUJU_VERSION.""" + v = os.environ.get('JUJU_VERSION') + if not v: + raise RuntimeError('environ has no JUJU_VERSION') + return cls(v) + + def has_app_data(self) -> bool: + """Determine whether this juju version knows about app data.""" + return (self.major, self.minor, self.patch) >= (2, 7, 0) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/ops/lib/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/ops/lib/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..edb9fcacea6f0173aed9f07ca8a683cfead989cc --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/ops/lib/__init__.py @@ -0,0 +1,194 @@ +# Copyright 2020 Canonical Ltd. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import sys +import os +import re + +from ast import literal_eval +from importlib.util import module_from_spec +from importlib.machinery import ModuleSpec +from pkgutil import get_importer +from types import ModuleType + + +_libraries = None + +_libline_re = re.compile(r'''^LIB([A-Z]+)\s*=\s*([0-9]+|['"][a-zA-Z0-9_.\-@]+['"])''') +_libname_re = re.compile(r'''^[a-z][a-z0-9]+$''') + +# Not perfect, but should do for now. 
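+# For instance, it accepts "maintainer@example.com" (an illustrative address).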
+_libauthor_re = re.compile(r'''^[A-Za-z0-9_+.-]+@[a-z0-9_-]+(?:\.[a-z0-9_-]+)*\.[a-z]{2,3}$''') + + +def use(name: str, api: int, author: str) -> ModuleType: + """Use a library from the ops libraries. + + Args: + name: the name of the library requested. + api: the API version of the library. + author: the author of the library. If not given, requests the + one in the standard library. + Raises: + ImportError: if the library cannot be found. + TypeError: if the name, api, or author are the wrong type. + ValueError: if the name, api, or author are invalid. + """ + if not isinstance(name, str): + raise TypeError("invalid library name: {!r} (must be a str)".format(name)) + if not isinstance(author, str): + raise TypeError("invalid library author: {!r} (must be a str)".format(author)) + if not isinstance(api, int): + raise TypeError("invalid library API: {!r} (must be an int)".format(api)) + if api < 0: + raise ValueError('invalid library api: {} (must be ≥0)'.format(api)) + if not _libname_re.match(name): + raise ValueError("invalid library name: {!r} (chars and digits only)".format(name)) + if not _libauthor_re.match(author): + raise ValueError("invalid library author email: {!r}".format(author)) + + if _libraries is None: + autoimport() + + versions = _libraries.get((name, author), ()) + for lib in versions: + if lib.api == api: + return lib.import_module() + + others = ', '.join(str(lib.api) for lib in versions) + if others: + msg = 'cannot find "{}" from "{}" with API version {} (have {})'.format( + name, author, api, others) + else: + msg = 'cannot find library "{}" from "{}"'.format(name, author) + + raise ImportError(msg, name=name) + + +def autoimport(): + """Find all libs in the path and enable use of them. + + You only need to call this if you've installed a package or + otherwise changed sys.path in the current run, and need to see the + changes. Otherwise libraries are found on first call of `use`. + """ + global _libraries + _libraries = {} + for spec in _find_all_specs(sys.path): + lib = _parse_lib(spec) + if lib is None: + continue + + versions = _libraries.setdefault((lib.name, lib.author), []) + versions.append(lib) + versions.sort(reverse=True) + + +def _find_all_specs(path): + for sys_dir in path: + if sys_dir == "": + sys_dir = "." 
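+            # an empty entry on sys.path means the current working directory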
+ try: + top_dirs = os.listdir(sys_dir) + except OSError: + continue + for top_dir in top_dirs: + opslib = os.path.join(sys_dir, top_dir, 'opslib') + try: + lib_dirs = os.listdir(opslib) + except OSError: + continue + finder = get_importer(opslib) + if finder is None or not hasattr(finder, 'find_spec'): + continue + for lib_dir in lib_dirs: + spec = finder.find_spec(lib_dir) + if spec is None: + continue + if spec.loader is None: + # a namespace package; not supported + continue + yield spec + + +# only the first this many lines of a file are looked at for the LIB* constants +_MAX_LIB_LINES = 99 + + +def _parse_lib(spec): + if spec.origin is None: + return None + + _expected = {'NAME': str, 'AUTHOR': str, 'API': int, 'PATCH': int} + + try: + with open(spec.origin, 'rt', encoding='utf-8') as f: + libinfo = {} + for n, line in enumerate(f): + if len(libinfo) == len(_expected): + break + if n > _MAX_LIB_LINES: + return None + m = _libline_re.match(line) + if m is None: + continue + key, value = m.groups() + if key in _expected: + value = literal_eval(value) + if not isinstance(value, _expected[key]): + return None + libinfo[key] = value + else: + if len(libinfo) != len(_expected): + return None + except Exception: + return None + + return _Lib(spec, libinfo['NAME'], libinfo['AUTHOR'], libinfo['API'], libinfo['PATCH']) + + +class _Lib: + + def __init__(self, spec: ModuleSpec, name: str, author: str, api: int, patch: int): + self.spec = spec + self.name = name + self.author = author + self.api = api + self.patch = patch + + self._module = None + + def __repr__(self): + return "<_Lib {0.name} by {0.author}, API {0.api}, patch {0.patch}>".format(self) + + def import_module(self) -> ModuleType: + if self._module is None: + module = module_from_spec(self.spec) + self.spec.loader.exec_module(module) + self._module = module + return self._module + + def __eq__(self, other): + if not isinstance(other, _Lib): + return NotImplemented + a = (self.name, self.author, self.api, self.patch) + b = (other.name, other.author, other.api, other.patch) + return a == b + + def __lt__(self, other): + if not isinstance(other, _Lib): + return NotImplemented + a = (self.name, self.author, self.api, self.patch) + b = (other.name, other.author, other.api, other.patch) + return a < b diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/ops/log.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/ops/log.py new file mode 100644 index 0000000000000000000000000000000000000000..4aac5543aec4d84dc393e79b772a30284712d6d4 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/ops/log.py @@ -0,0 +1,51 @@ +# Copyright 2020 Canonical Ltd. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +import sys +import logging + + +class JujuLogHandler(logging.Handler): + """A handler for sending logs to Juju via juju-log.""" + + def __init__(self, model_backend, level=logging.DEBUG): + super().__init__(level) + self.model_backend = model_backend + + def emit(self, record): + self.model_backend.juju_log(record.levelname, self.format(record)) + + +def setup_root_logging(model_backend, debug=False): + """Setup python logging to forward messages to juju-log. + + By default, logging is set to DEBUG level, and messages will be filtered by Juju. + Charmers can also set their own default log level with:: + + logging.getLogger().setLevel(logging.INFO) + + model_backend -- a ModelBackend to use for juju-log + debug -- if True, write logs to stderr as well as to juju-log. + """ + logger = logging.getLogger() + logger.setLevel(logging.DEBUG) + logger.addHandler(JujuLogHandler(model_backend)) + if debug: + handler = logging.StreamHandler() + formatter = logging.Formatter('%(asctime)s %(levelname)-8s %(message)s') + handler.setFormatter(formatter) + logger.addHandler(handler) + + sys.excepthook = lambda etype, value, tb: logger.error( + "Uncaught exception while in charm code:", exc_info=(etype, value, tb)) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/ops/main.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/ops/main.py new file mode 100755 index 0000000000000000000000000000000000000000..6dc31c3575044796e8fe1f61b8415395689d6339 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/ops/main.py @@ -0,0 +1,348 @@ +# Copyright 2019-2020 Canonical Ltd. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import inspect +import logging +import os +import subprocess +import sys +import warnings +from pathlib import Path + +import yaml + +import ops.charm +import ops.framework +import ops.model +import ops.storage + +from ops.log import setup_root_logging + +CHARM_STATE_FILE = '.unit-state.db' + + +logger = logging.getLogger() + + +def _get_charm_dir(): + charm_dir = os.environ.get("JUJU_CHARM_DIR") + if charm_dir is None: + # Assume $JUJU_CHARM_DIR/lib/op/main.py structure. + charm_dir = Path('{}/../../..'.format(__file__)).resolve() + else: + charm_dir = Path(charm_dir).resolve() + return charm_dir + + +def _create_event_link(charm, bound_event): + """Create a symlink for a particular event. + + charm -- A charm object. + bound_event -- An event for which to create a symlink. + """ + if issubclass(bound_event.event_type, ops.charm.HookEvent): + event_dir = charm.framework.charm_dir / 'hooks' + event_path = event_dir / bound_event.event_kind.replace('_', '-') + elif issubclass(bound_event.event_type, ops.charm.ActionEvent): + if not bound_event.event_kind.endswith("_action"): + raise RuntimeError( + 'action event name {} needs _action suffix'.format(bound_event.event_kind)) + event_dir = charm.framework.charm_dir / 'actions' + # The event_kind is suffixed with "_action" while the executable is not. 
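+        # For example, an event_kind of "fortune_action" (illustrative name)
+        # maps to the executable path "actions/fortune".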
+ event_path = event_dir / bound_event.event_kind[:-len('_action')].replace('_', '-') + else: + raise RuntimeError( + 'cannot create a symlink: unsupported event type {}'.format(bound_event.event_type)) + + event_dir.mkdir(exist_ok=True) + if not event_path.exists(): + # CPython has different implementations for populating sys.argv[0] for Linux and Windows. + # For Windows it is always an absolute path (any symlinks are resolved) + # while for Linux it can be a relative path. + target_path = os.path.relpath(os.path.realpath(sys.argv[0]), str(event_dir)) + + # Ignore the non-symlink files or directories + # assuming the charm author knows what they are doing. + logger.debug( + 'Creating a new relative symlink at %s pointing to %s', + event_path, target_path) + event_path.symlink_to(target_path) + + +def _setup_event_links(charm_dir, charm): + """Set up links for supported events that originate from Juju. + + Whether a charm can handle an event or not can be determined by + introspecting which events are defined on it. + + Hooks or actions are created as symlinks to the charm code file + which is determined by inspecting symlinks provided by the charm + author at hooks/install or hooks/start. + + charm_dir -- A root directory of the charm. + charm -- An instance of the Charm class. + + """ + for bound_event in charm.on.events().values(): + # Only events that originate from Juju need symlinks. + if issubclass(bound_event.event_type, (ops.charm.HookEvent, ops.charm.ActionEvent)): + _create_event_link(charm, bound_event) + + +def _emit_charm_event(charm, event_name): + """Emits a charm event based on a Juju event name. + + charm -- A charm instance to emit an event from. + event_name -- A Juju event name to emit on a charm. + """ + event_to_emit = None + try: + event_to_emit = getattr(charm.on, event_name) + except AttributeError: + logger.debug("Event %s not defined for %s.", event_name, charm) + + # If the event is not supported by the charm implementation, do + # not error out or try to emit it. This is to support rollbacks. + if event_to_emit is not None: + args, kwargs = _get_event_args(charm, event_to_emit) + logger.debug('Emitting Juju event %s.', event_name) + event_to_emit.emit(*args, **kwargs) + + +def _get_event_args(charm, bound_event): + event_type = bound_event.event_type + model = charm.framework.model + + if issubclass(event_type, ops.charm.RelationEvent): + relation_name = os.environ['JUJU_RELATION'] + relation_id = int(os.environ['JUJU_RELATION_ID'].split(':')[-1]) + relation = model.get_relation(relation_name, relation_id) + else: + relation = None + + remote_app_name = os.environ.get('JUJU_REMOTE_APP', '') + remote_unit_name = os.environ.get('JUJU_REMOTE_UNIT', '') + if remote_app_name or remote_unit_name: + if not remote_app_name: + if '/' not in remote_unit_name: + raise RuntimeError('invalid remote unit name: {}'.format(remote_unit_name)) + remote_app_name = remote_unit_name.split('/')[0] + args = [relation, model.get_app(remote_app_name)] + if remote_unit_name: + args.append(model.get_unit(remote_unit_name)) + return args, {} + elif relation: + return [relation], {} + return [], {} + + +class _Dispatcher: + """Encapsulate how to figure out what event Juju wants us to run. + + Also knows how to run “legacy” hooks when Juju called us via a top-level + ``dispatch`` binary. 
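+
+    For example (illustrative): being invoked as ``hooks/install`` yields an
+    event_name of "install", while ``actions/do-thing`` yields
+    "do_thing_action".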
+
+    Args:
+        charm_dir: the toplevel directory of the charm
+
+    Attributes:
+        event_name: the name of the event to run
+        is_dispatch_aware: are we running under a Juju that knows about the
+            dispatch binary?
+
+    """
+
+    def __init__(self, charm_dir: Path):
+        self._charm_dir = charm_dir
+        self._exec_path = Path(sys.argv[0])
+
+        if 'JUJU_DISPATCH_PATH' in os.environ and (charm_dir / 'dispatch').exists():
+            self._init_dispatch()
+        else:
+            self._init_legacy()
+
+    def ensure_event_links(self, charm):
+        """Make sure necessary symlinks are present on disk."""
+
+        if self.is_dispatch_aware:
+            # links aren't needed
+            return
+
+        # When a charm is force-upgraded and a unit is in an error state Juju
+        # does not run upgrade-charm and instead runs the failed hook followed
+        # by config-changed. Given the nature of force-upgrading the hook setup
+        # code is not triggered on config-changed.
+        #
+        # 'start' event is included as Juju does not fire the install event for
+        # K8s charms (see LP: #1854635).
+        if (self.event_name in ('install', 'start', 'upgrade_charm')
+                or self.event_name.endswith('_storage_attached')):
+            _setup_event_links(self._charm_dir, charm)
+
+    def run_any_legacy_hook(self):
+        """Run any extant legacy hook.
+
+        If there is both a dispatch file and a legacy hook for the
+        current event, run the wanted legacy hook.
+        """
+
+        if not self.is_dispatch_aware:
+            # we *are* the legacy hook
+            return
+
+        dispatch_path = self._charm_dir / self._dispatch_path
+        if not dispatch_path.exists():
+            logger.debug("Legacy %s does not exist.", self._dispatch_path)
+            return
+
+        # super strange that there isn't an is_executable
+        if not os.access(str(dispatch_path), os.X_OK):
+            logger.warning("Legacy %s exists but is not executable.", self._dispatch_path)
+            return
+
+        if dispatch_path.resolve() == self._exec_path.resolve():
+            logger.debug("Legacy %s is just a link to ourselves.", self._dispatch_path)
+            return
+
+        argv = sys.argv.copy()
+        argv[0] = str(dispatch_path)
+        logger.info("Running legacy %s.", self._dispatch_path)
+        try:
+            subprocess.run(argv, check=True)
+        except subprocess.CalledProcessError as e:
+            logger.warning(
+                "Legacy %s exited with status %d.",
+                self._dispatch_path, e.returncode)
+            sys.exit(e.returncode)
+        else:
+            logger.debug("Legacy %s exited with status 0.", self._dispatch_path)
+
+    def _set_name_from_path(self, path: Path):
+        """Set the name attribute to that which can be inferred from the given path."""
+        name = path.name.replace('-', '_')
+        if path.parent.name == 'actions':
+            name = '{}_action'.format(name)
+        self.event_name = name
+
+    def _init_legacy(self):
+        """Set up the 'legacy' dispatcher.
+
+        The current Juju doesn't know about 'dispatch' and calls hooks
+        explicitly.
+        """
+        self.is_dispatch_aware = False
+        self._set_name_from_path(self._exec_path)
+
+    def _init_dispatch(self):
+        """Set up the new 'dispatch' dispatcher.
+
+        The current Juju will run 'dispatch' if it exists, and otherwise fall
+        back to the old behaviour.
+
+        JUJU_DISPATCH_PATH will be set to the wanted hook, e.g. hooks/install,
+        in both cases.
+        """
+        self._dispatch_path = Path(os.environ['JUJU_DISPATCH_PATH'])
+
+        if 'OPERATOR_DISPATCH' in os.environ:
+            logger.debug("Charm called itself via %s.", self._dispatch_path)
+            sys.exit(0)
+        os.environ['OPERATOR_DISPATCH'] = '1'
+
+        self.is_dispatch_aware = True
+        self._set_name_from_path(self._dispatch_path)
+
+    def is_restricted_context(self):
+        """Return True if we are running in a restricted Juju context.
+ + When in a restricted context, most commands (relation-get, config-get, + state-get) are not available. As such, we change how we interact with + Juju. + """ + return self.event_name in ('collect_metrics',) + + +def main(charm_class, use_juju_for_storage=False): + """Setup the charm and dispatch the observed event. + + The event name is based on the way this executable was called (argv[0]). + """ + charm_dir = _get_charm_dir() + + model_backend = ops.model._ModelBackend() + debug = ('JUJU_DEBUG' in os.environ) + setup_root_logging(model_backend, debug=debug) + logger.debug("Operator Framework %s up and running.", ops.__version__) + + dispatcher = _Dispatcher(charm_dir) + dispatcher.run_any_legacy_hook() + + metadata = (charm_dir / 'metadata.yaml').read_text() + actions_meta = charm_dir / 'actions.yaml' + if actions_meta.exists(): + actions_metadata = actions_meta.read_text() + else: + actions_metadata = None + + if not yaml.__with_libyaml__: + logger.debug('yaml does not have libyaml extensions, using slower pure Python yaml loader') + meta = ops.charm.CharmMeta.from_yaml(metadata, actions_metadata) + model = ops.model.Model(meta, model_backend) + + # TODO: If Juju unit agent crashes after exit(0) from the charm code + # the framework will commit the snapshot but Juju will not commit its + # operation. + charm_state_path = charm_dir / CHARM_STATE_FILE + if use_juju_for_storage: + if dispatcher.is_restricted_context(): + # TODO: jam 2020-06-30 This unconditionally avoids running a collect metrics event + # Though we eventually expect that juju will run collect-metrics in a + # non-restricted context. Once we can determine that we are running collect-metrics + # in a non-restricted context, we should fire the event as normal. + logger.debug('"%s" is not supported when using Juju for storage\n' + 'see: https://github.com/canonical/operator/issues/348', + dispatcher.event_name) + # Note that we don't exit nonzero, because that would cause Juju to rerun the hook + return + store = ops.storage.JujuStorage() + else: + store = ops.storage.SQLiteStorage(charm_state_path) + framework = ops.framework.Framework(store, charm_dir, meta, model) + try: + sig = inspect.signature(charm_class) + try: + sig.bind(framework) + except TypeError: + msg = ( + "the second argument, 'key', has been deprecated and will be " + "removed after the 0.7 release") + warnings.warn(msg, DeprecationWarning) + charm = charm_class(framework, None) + else: + charm = charm_class(framework) + dispatcher.ensure_event_links(charm) + + # TODO: Remove the collect_metrics check below as soon as the relevant + # Juju changes are made. + # + # Skip reemission of deferred events for collect-metrics events because + # they do not have the full access to all hook tools. + if not dispatcher.is_restricted_context(): + framework.reemit() + + _emit_charm_event(charm, dispatcher.event_name) + + framework.commit() + finally: + framework.close() diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/ops/model.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/ops/model.py new file mode 100644 index 0000000000000000000000000000000000000000..b96e89154ea9cec2b62a4fab4649412e115c304e --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/ops/model.py @@ -0,0 +1,1237 @@ +# Copyright 2019 Canonical Ltd. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import datetime +import decimal +import ipaddress +import json +import os +import re +import shutil +import tempfile +import time +import typing +import weakref + +from abc import ABC, abstractmethod +from collections.abc import Mapping, MutableMapping +from pathlib import Path +from subprocess import run, PIPE, CalledProcessError + +import ops +from ops.jujuversion import JujuVersion + + +class Model: + """Represents the Juju Model as seen from this unit. + + This should not be instantiated directly by Charmers, but can be accessed as `self.model` + from any class that derives from Object. + + Attributes: + unit: A :class:`Unit` that represents the unit that is running this code (eg yourself) + app: A :class:`Application` that represents the application this unit is a part of. + relations: Mapping of endpoint to list of :class:`Relation` answering the question + "what am I currently related to". See also :meth:`.get_relation` + config: A dict of the config for the current application. + resources: Access to resources for this charm. Use ``model.resources.fetch(resource_name)`` + to get the path on disk where the resource can be found. + storages: Mapping of storage_name to :class:`Storage` for the storage points defined in + metadata.yaml + pod: Used to get access to ``model.pod.set_spec`` to set the container specification + for Kubernetes charms. + """ + + def __init__(self, meta: 'ops.charm.CharmMeta', backend: '_ModelBackend'): + self._cache = _ModelCache(backend) + self._backend = backend + self.unit = self.get_unit(self._backend.unit_name) + self.app = self.unit.app + self.relations = RelationMapping(meta.relations, self.unit, self._backend, self._cache) + self.config = ConfigData(self._backend) + self.resources = Resources(list(meta.resources), self._backend) + self.pod = Pod(self._backend) + self.storages = StorageMapping(list(meta.storages), self._backend) + self._bindings = BindingMapping(self._backend) + + @property + def name(self) -> str: + """Return the name of the Model that this unit is running in. + + This is read from the environment variable ``JUJU_MODEL_NAME``. + """ + return self._backend.model_name + + def get_unit(self, unit_name: str) -> 'Unit': + """Get an arbitrary unit by name. + + Internally this uses a cache, so asking for the same unit two times will + return the same object. + """ + return self._cache.get(Unit, unit_name) + + def get_app(self, app_name: str) -> 'Application': + """Get an application by name. + + Internally this uses a cache, so asking for the same application two times will + return the same object. + """ + return self._cache.get(Application, app_name) + + def get_relation( + self, relation_name: str, + relation_id: typing.Optional[int] = None) -> 'Relation': + """Get a specific Relation instance. + + If relation_id is not given, this will return the Relation instance if the + relation is established only once or None if it is not established. If this + same relation is established multiple times the error TooManyRelatedAppsError is raised. 
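+
+        Example (illustrative endpoint name)::
+
+            db = self.model.get_relation('db')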
+ + Args: + relation_name: The name of the endpoint for this charm + relation_id: An identifier for a specific relation. Used to disambiguate when a + given application has more than one relation on a given endpoint. + Raises: + TooManyRelatedAppsError: is raised if there is more than one relation to the + supplied relation_name and no relation_id was supplied + """ + return self.relations._get_unique(relation_name, relation_id) + + def get_binding(self, binding_key: typing.Union[str, 'Relation']) -> 'Binding': + """Get a network space binding. + + Args: + binding_key: The relation name or instance to obtain bindings for. + Returns: + If ``binding_key`` is a relation name, the method returns the default binding + for that relation. If a relation instance is provided, the method first looks + up a more specific binding for that specific relation ID, and if none is found + falls back to the default binding for the relation name. + """ + return self._bindings.get(binding_key) + + +class _ModelCache: + + def __init__(self, backend): + self._backend = backend + self._weakrefs = weakref.WeakValueDictionary() + + def get(self, entity_type, *args): + key = (entity_type,) + args + entity = self._weakrefs.get(key) + if entity is None: + entity = entity_type(*args, backend=self._backend, cache=self) + self._weakrefs[key] = entity + return entity + + +class Application: + """Represents a named application in the model. + + This might be your application, or might be an application that you are related to. + Charmers should not instantiate Application objects directly, but should use + :meth:`Model.get_app` if they need a reference to a given application. + + Attributes: + name: The name of this application (eg, 'mysql'). This name may differ from the name of + the charm, if the user has deployed it to a different name. + """ + + def __init__(self, name, backend, cache): + self.name = name + self._backend = backend + self._cache = cache + self._is_our_app = self.name == self._backend.app_name + self._status = None + + def _invalidate(self): + self._status = None + + @property + def status(self) -> 'StatusBase': + """Used to report or read the status of the overall application. + + Can only be read and set by the lead unit of the application. + + The status of remote units is always Unknown. + + Raises: + RuntimeError: if you try to set the status of another application, or if you try to + set the status of this application as a unit that is not the leader. 
+            InvalidStatusError: if you try to set the status to something that is not a
+                :class:`StatusBase`
+
+        Example::
+
+            self.model.app.status = BlockedStatus('I need a human to come help me')
+        """
+        if not self._is_our_app:
+            return UnknownStatus()
+
+        if not self._backend.is_leader():
+            raise RuntimeError('cannot get application status as a non-leader unit')
+
+        if self._status:
+            return self._status
+
+        s = self._backend.status_get(is_app=True)
+        self._status = StatusBase.from_name(s['status'], s['message'])
+        return self._status
+
+    @status.setter
+    def status(self, value: 'StatusBase'):
+        if not isinstance(value, StatusBase):
+            raise InvalidStatusError(
+                'invalid value provided for application {} status: {}'.format(self, value)
+            )
+
+        if not self._is_our_app:
+            raise RuntimeError('cannot set status for a remote application {}'.format(self))
+
+        if not self._backend.is_leader():
+            raise RuntimeError('cannot set application status as a non-leader unit')
+
+        self._backend.status_set(value.name, value.message, is_app=True)
+        self._status = value
+
+    def __repr__(self):
+        return '<{}.{} {}>'.format(type(self).__module__, type(self).__name__, self.name)
+
+
+class Unit:
+    """Represents a named unit in the model.
+
+    This might be your unit, another unit of your application, or a unit of another application
+    that you are related to.
+
+    Attributes:
+        name: The name of the unit (eg, 'mysql/0')
+        app: The Application the unit is a part of.
+    """
+
+    def __init__(self, name, backend, cache):
+        self.name = name
+
+        app_name = name.split('/')[0]
+        self.app = cache.get(Application, app_name)
+
+        self._backend = backend
+        self._cache = cache
+        self._is_our_unit = self.name == self._backend.unit_name
+        self._status = None
+
+    def _invalidate(self):
+        self._status = None
+
+    @property
+    def status(self) -> 'StatusBase':
+        """Used to report or read the status of a specific unit.
+
+        The status of any unit other than yourself is always Unknown.
+
+        Raises:
+            RuntimeError: if you try to set the status of a unit other than yourself.
+            InvalidStatusError: if you try to set the status to something other than
+                a :class:`StatusBase`
+
+        Example::
+
+            self.model.unit.status = MaintenanceStatus('reconfiguring the frobnicators')
+        """
+        if not self._is_our_unit:
+            return UnknownStatus()
+
+        if self._status:
+            return self._status
+
+        s = self._backend.status_get(is_app=False)
+        self._status = StatusBase.from_name(s['status'], s['message'])
+        return self._status
+
+    @status.setter
+    def status(self, value: 'StatusBase'):
+        if not isinstance(value, StatusBase):
+            raise InvalidStatusError(
+                'invalid value provided for unit {} status: {}'.format(self, value)
+            )
+
+        if not self._is_our_unit:
+            raise RuntimeError('cannot set status for a remote unit {}'.format(self))
+
+        self._backend.status_set(value.name, value.message, is_app=False)
+        self._status = value
+
+    def __repr__(self):
+        return '<{}.{} {}>'.format(type(self).__module__, type(self).__name__, self.name)
+
+    def is_leader(self) -> bool:
+        """Return whether this unit is the leader of its application.
+
+        This can only be called for your own unit.
+
+        Returns:
+            True if you are the leader, False otherwise
+        Raises:
+            RuntimeError: if called for a unit that is not yourself
+        """
+        if self._is_our_unit:
+            # This value is not cached as it is not guaranteed to persist for the whole duration
+            # of a hook execution.
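+            # (The 30 second lease is tracked below as
+            # _ModelBackend.LEASE_RENEWAL_PERIOD.)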
+ return self._backend.is_leader() + else: + raise RuntimeError( + 'leadership status of remote units ({}) is not visible to other' + ' applications'.format(self) + ) + + def set_workload_version(self, version: str) -> None: + """Record the version of the software running as the workload. + + This shouldn't be confused with the revision of the charm. This is informative only; + shown in the output of 'juju status'. + """ + if not isinstance(version, str): + raise TypeError("workload version must be a str, not {}: {!r}".format( + type(version).__name__, version)) + self._backend.application_version_set(version) + + +class LazyMapping(Mapping, ABC): + """Represents a dict that isn't populated until it is accessed. + + Charm authors should generally never need to use this directly, but it forms + the basis for many of the dicts that the framework tracks. + """ + + _lazy_data = None + + @abstractmethod + def _load(self): + raise NotImplementedError() + + @property + def _data(self): + data = self._lazy_data + if data is None: + data = self._lazy_data = self._load() + return data + + def _invalidate(self): + self._lazy_data = None + + def __contains__(self, key): + return key in self._data + + def __len__(self): + return len(self._data) + + def __iter__(self): + return iter(self._data) + + def __getitem__(self, key): + return self._data[key] + + +class RelationMapping(Mapping): + """Map of relation names to lists of :class:`Relation` instances.""" + + def __init__(self, relations_meta, our_unit, backend, cache): + self._peers = set() + for name, relation_meta in relations_meta.items(): + if relation_meta.role.is_peer(): + self._peers.add(name) + self._our_unit = our_unit + self._backend = backend + self._cache = cache + self._data = {relation_name: None for relation_name in relations_meta} + + def __contains__(self, key): + return key in self._data + + def __len__(self): + return len(self._data) + + def __iter__(self): + return iter(self._data) + + def __getitem__(self, relation_name): + is_peer = relation_name in self._peers + relation_list = self._data[relation_name] + if relation_list is None: + relation_list = self._data[relation_name] = [] + for rid in self._backend.relation_ids(relation_name): + relation = Relation(relation_name, rid, is_peer, + self._our_unit, self._backend, self._cache) + relation_list.append(relation) + return relation_list + + def _invalidate(self, relation_name): + """Used to wipe the cache of a given relation_name. + + Not meant to be used by Charm authors. The content of relation data is + static for the lifetime of a hook, so it is safe to cache in memory once + accessed. + """ + self._data[relation_name] = None + + def _get_unique(self, relation_name, relation_id=None): + if relation_id is not None: + if not isinstance(relation_id, int): + raise ModelError('relation id {} must be int or None not {}'.format( + relation_id, + type(relation_id).__name__)) + for relation in self[relation_name]: + if relation.id == relation_id: + return relation + else: + # The relation may be dead, but it is not forgotten. + is_peer = relation_name in self._peers + return Relation(relation_name, relation_id, is_peer, + self._our_unit, self._backend, self._cache) + num_related = len(self[relation_name]) + if num_related == 0: + return None + elif num_related == 1: + return self[relation_name][0] + else: + # TODO: We need something in the framework to catch and gracefully handle + # errors, ideally integrating the error catching with Juju's mechanisms. 
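+            # e.g. calling Model.get_relation('db') while two "db" relations
+            # are established ends up here.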
+            raise TooManyRelatedAppsError(relation_name, num_related, 1)
+
+
+class BindingMapping:
+    """Mapping of endpoints to network bindings.
+
+    Charm authors should not instantiate this directly, but access it via
+    :meth:`Model.get_binding`.
+    """
+
+    def __init__(self, backend):
+        self._backend = backend
+        self._data = {}
+
+    def get(self, binding_key: typing.Union[str, 'Relation']) -> 'Binding':
+        """Get a specific Binding for an endpoint/relation.
+
+        Not used directly by Charm authors. See :meth:`Model.get_binding`.
+        """
+        if isinstance(binding_key, Relation):
+            binding_name = binding_key.name
+            relation_id = binding_key.id
+        elif isinstance(binding_key, str):
+            binding_name = binding_key
+            relation_id = None
+        else:
+            raise ModelError('binding key must be str or relation instance, not {}'
+                             ''.format(type(binding_key).__name__))
+        binding = self._data.get(binding_key)
+        if binding is None:
+            binding = Binding(binding_name, relation_id, self._backend)
+            self._data[binding_key] = binding
+        return binding
+
+
+class Binding:
+    """Binding to a network space.
+
+    Attributes:
+        name: The name of the endpoint this binding represents (eg, 'db')
+    """
+
+    def __init__(self, name, relation_id, backend):
+        self.name = name
+        self._relation_id = relation_id
+        self._backend = backend
+        self._network = None
+
+    @property
+    def network(self) -> 'Network':
+        """The network information for this binding."""
+        if self._network is None:
+            try:
+                self._network = Network(self._backend.network_get(self.name, self._relation_id))
+            except RelationNotFoundError:
+                if self._relation_id is None:
+                    raise
+                # If a relation is dead, we can still get network info associated with an
+                # endpoint itself
+                self._network = Network(self._backend.network_get(self.name))
+        return self._network
+
+
+class Network:
+    """Network space details.
+
+    Charm authors should not instantiate this directly, but should get access to the Network
+    definition from :meth:`Model.get_binding` and its ``network`` attribute.
+
+    Attributes:
+        interfaces: A list of :class:`NetworkInterface` details. This includes the
+            information about how your application should be configured (eg, what
+            IP addresses should you bind to.)
+            Note that multiple addresses for a single interface are represented as multiple
+            interfaces. (eg, ``[NetworkInterface('ens1', '10.1.1.1/32'),
+            NetworkInterface('ens1', '10.1.2.1/32')]``)
+        ingress_addresses: A list of :class:`ipaddress.ip_address` objects representing the IP
+            addresses that other units should use to get in touch with you.
+        egress_subnets: A list of :class:`ipaddress.ip_network` representing the subnets that
+            other units will see you connecting from. Due to things like NAT it isn't always
+            possible to narrow it down to a single address, but when it is clear, the CIDRs
+            will be constrained to a single address. (eg, 10.0.0.1/32)
+
+    Args:
+        network_info: A dict of network information as returned by ``network-get``.
+    """
+
+    def __init__(self, network_info: dict):
+        self.interfaces = []
+        # Treat multiple addresses on an interface as multiple logical
+        # interfaces with the same name.
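+        # network_info is shaped like this (illustrative values):
+        #     {'bind-addresses': [{'interface-name': 'ens1',
+        #                          'addresses': [{'value': '10.1.1.1',
+        #                                         'cidr': '10.1.1.0/24'}]}],
+        #      'ingress-addresses': ['10.1.1.1'],
+        #      'egress-subnets': ['10.1.1.0/24']}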
+ for interface_info in network_info['bind-addresses']: + interface_name = interface_info['interface-name'] + for address_info in interface_info['addresses']: + self.interfaces.append(NetworkInterface(interface_name, address_info)) + self.ingress_addresses = [] + for address in network_info['ingress-addresses']: + self.ingress_addresses.append(ipaddress.ip_address(address)) + self.egress_subnets = [] + for subnet in network_info['egress-subnets']: + self.egress_subnets.append(ipaddress.ip_network(subnet)) + + @property + def bind_address(self): + """A single address that your application should bind() to. + + For the common case where there is a single answer. This represents a single + address from :attr:`.interfaces` that can be used to configure where your + application should bind() and listen(). + """ + return self.interfaces[0].address + + @property + def ingress_address(self): + """The address other applications should use to connect to your unit. + + Due to things like public/private addresses, NAT and tunneling, the address you bind() + to is not always the address other people can use to connect() to you. + This is just the first address from :attr:`.ingress_addresses`. + """ + return self.ingress_addresses[0] + + +class NetworkInterface: + """Represents a single network interface that the charm needs to know about. + + Charmers should not instantiate this type directly. Instead use :meth:`Model.get_binding` + to get the network information for a given endpoint. + + Attributes: + name: The name of the interface (eg. 'eth0', or 'ens1') + subnet: An :class:`ipaddress.ip_network` representation of the IP for the network + interface. This may be a single address (eg '10.0.1.2/32') + """ + + def __init__(self, name: str, address_info: dict): + self.name = name + # TODO: expose a hardware address here, see LP: #1864070. + self.address = ipaddress.ip_address(address_info['value']) + cidr = address_info['cidr'] + if not cidr: + # The cidr field may be empty, see LP: #1864102. + # In this case, make it a /32 or /128 IP network. + self.subnet = ipaddress.ip_network(address_info['value']) + else: + self.subnet = ipaddress.ip_network(cidr) + # TODO: expose a hostname/canonical name for the address here, see LP: #1864086. + + +class Relation: + """Represents an established relation between this application and another application. + + This class should not be instantiated directly, instead use :meth:`Model.get_relation` + or :attr:`RelationEvent.relation`. + + Attributes: + name: The name of the local endpoint of the relation (eg 'db') + id: The identifier for a particular relation (integer) + app: An :class:`Application` representing the remote application of this relation. + For peer relations this will be the local application. + units: A set of :class:`Unit` for units that have started and joined this relation. + data: A :class:`RelationData` holding the data buckets for each entity + of a relation. Accessed via eg Relation.data[unit]['foo'] + """ + + def __init__( + self, relation_name: str, relation_id: int, is_peer: bool, our_unit: Unit, + backend: '_ModelBackend', cache: '_ModelCache'): + self.name = relation_name + self.id = relation_id + self.app = None + self.units = set() + + # For peer relations, both the remote and the local app are the same. 
+ if is_peer: + self.app = our_unit.app + try: + for unit_name in backend.relation_list(self.id): + unit = cache.get(Unit, unit_name) + self.units.add(unit) + if self.app is None: + self.app = unit.app + except RelationNotFoundError: + # If the relation is dead, just treat it as if it has no remote units. + pass + self.data = RelationData(self, our_unit, backend) + + def __repr__(self): + return '<{}.{} {}:{}>'.format(type(self).__module__, + type(self).__name__, + self.name, + self.id) + + +class RelationData(Mapping): + """Represents the various data buckets of a given relation. + + Each unit and application involved in a relation has their own data bucket. + Eg: ``{entity: RelationDataContent}`` + where entity can be either a :class:`Unit` or a :class:`Application`. + + Units can read and write their own data, and if they are the leader, + they can read and write their application data. They are allowed to read + remote unit and application data. + + This class should not be created directly. It should be accessed via + :attr:`Relation.data` + """ + + def __init__(self, relation: Relation, our_unit: Unit, backend: '_ModelBackend'): + self.relation = weakref.proxy(relation) + self._data = { + our_unit: RelationDataContent(self.relation, our_unit, backend), + our_unit.app: RelationDataContent(self.relation, our_unit.app, backend), + } + self._data.update({ + unit: RelationDataContent(self.relation, unit, backend) + for unit in self.relation.units}) + # The relation might be dead so avoid a None key here. + if self.relation.app is not None: + self._data.update({ + self.relation.app: RelationDataContent(self.relation, self.relation.app, backend), + }) + + def __contains__(self, key): + return key in self._data + + def __len__(self): + return len(self._data) + + def __iter__(self): + return iter(self._data) + + def __getitem__(self, key): + return self._data[key] + + +# We mix in MutableMapping here to get some convenience implementations, but whether it's actually +# mutable or not is controlled by the flag. +class RelationDataContent(LazyMapping, MutableMapping): + + def __init__(self, relation, entity, backend): + self.relation = relation + self._entity = entity + self._backend = backend + self._is_app = isinstance(entity, Application) + + def _load(self): + try: + return self._backend.relation_get(self.relation.id, self._entity.name, self._is_app) + except RelationNotFoundError: + # Dead relations tell no tales (and have no data). + return {} + + def _is_mutable(self): + if self._is_app: + is_our_app = self._backend.app_name == self._entity.name + if not is_our_app: + return False + # Whether the application data bag is mutable or not depends on + # whether this unit is a leader or not, but this is not guaranteed + # to be always true during the same hook execution. + return self._backend.is_leader() + else: + is_our_unit = self._backend.unit_name == self._entity.name + if is_our_unit: + return True + return False + + def __setitem__(self, key, value): + if not self._is_mutable(): + raise RelationDataError('cannot set relation data for {}'.format(self._entity.name)) + if not isinstance(value, str): + raise RelationDataError('relation data values must be strings') + + self._backend.relation_set(self.relation.id, key, value, self._is_app) + + # Don't load data unnecessarily if we're only updating. + if self._lazy_data is not None: + if value == '': + # Match the behavior of Juju, which is that setting the value to an + # empty string will remove the key entirely from the relation data. 
+ del self._data[key] + else: + self._data[key] = value + + def __delitem__(self, key): + # Match the behavior of Juju, which is that setting the value to an empty + # string will remove the key entirely from the relation data. + self.__setitem__(key, '') + + +class ConfigData(LazyMapping): + + def __init__(self, backend): + self._backend = backend + + def _load(self): + return self._backend.config_get() + + +class StatusBase: + """Status values specific to applications and units. + + To access a status by name, see :meth:`StatusBase.from_name`, most use cases will just + directly use the child class to indicate their status. + """ + + _statuses = {} + name = None + + def __init__(self, message: str): + self.message = message + + def __new__(cls, *args, **kwargs): + if cls is StatusBase: + raise TypeError("cannot instantiate a base class") + return super().__new__(cls) + + def __eq__(self, other): + if not isinstance(self, type(other)): + return False + return self.message == other.message + + def __repr__(self): + return "{.__class__.__name__}({!r})".format(self, self.message) + + @classmethod + def from_name(cls, name: str, message: str): + if name == 'unknown': + # unknown is special + return UnknownStatus() + else: + return cls._statuses[name](message) + + @classmethod + def register(cls, child): + if child.name is None: + raise AttributeError('cannot register a Status which has no name') + cls._statuses[child.name] = child + return child + + +@StatusBase.register +class UnknownStatus(StatusBase): + """The unit status is unknown. + + A unit-agent has finished calling install, config-changed and start, but the + charm has not called status-set yet. + + """ + name = 'unknown' + + def __init__(self): + # Unknown status cannot be set and does not have a message associated with it. + super().__init__('') + + def __repr__(self): + return "UnknownStatus()" + + +@StatusBase.register +class ActiveStatus(StatusBase): + """The unit is ready. + + The unit believes it is correctly offering all the services it has been asked to offer. + """ + name = 'active' + + def __init__(self, message: str = ''): + super().__init__(message) + + +@StatusBase.register +class BlockedStatus(StatusBase): + """The unit requires manual intervention. + + An operator has to manually intervene to unblock the unit and let it proceed. + """ + name = 'blocked' + + +@StatusBase.register +class MaintenanceStatus(StatusBase): + """The unit is performing maintenance tasks. + + The unit is not yet providing services, but is actively doing work in preparation + for providing those services. This is a "spinning" state, not an error state. It + reflects activity on the unit itself, not on peers or related units. + + """ + name = 'maintenance' + + +@StatusBase.register +class WaitingStatus(StatusBase): + """A unit is unable to progress. + + The unit is unable to progress to an active state because an application to which + it is related is not running. + + """ + name = 'waiting' + + +class Resources: + """Object representing resources for the charm. + """ + + def __init__(self, names: typing.Iterable[str], backend: '_ModelBackend'): + self._backend = backend + self._paths = {name: None for name in names} + + def fetch(self, name: str) -> Path: + """Fetch the resource from the controller or store. + + If successfully fetched, this returns a Path object to where the resource is stored + on disk, otherwise it raises a ModelError. 
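+
+        Example (illustrative resource name)::
+
+            path = self.model.resources.fetch('software')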
+ """ + if name not in self._paths: + raise RuntimeError('invalid resource name: {}'.format(name)) + if self._paths[name] is None: + self._paths[name] = Path(self._backend.resource_get(name)) + return self._paths[name] + + +class Pod: + """Represents the definition of a pod spec in Kubernetes models. + + Currently only supports simple access to setting the Juju pod spec via :attr:`.set_spec`. + """ + + def __init__(self, backend: '_ModelBackend'): + self._backend = backend + + def set_spec(self, spec: typing.Mapping, k8s_resources: typing.Mapping = None): + """Set the specification for pods that Juju should start in kubernetes. + + See `juju help-tool pod-spec-set` for details of what should be passed. + Args: + spec: The mapping defining the pod specification + k8s_resources: Additional kubernetes specific specification. + + Returns: + """ + if not self._backend.is_leader(): + raise ModelError('cannot set a pod spec as this unit is not a leader') + self._backend.pod_spec_set(spec, k8s_resources) + + +class StorageMapping(Mapping): + """Map of storage names to lists of Storage instances.""" + + def __init__(self, storage_names: typing.Iterable[str], backend: '_ModelBackend'): + self._backend = backend + self._storage_map = {storage_name: None for storage_name in storage_names} + + def __contains__(self, key: str): + return key in self._storage_map + + def __len__(self): + return len(self._storage_map) + + def __iter__(self): + return iter(self._storage_map) + + def __getitem__(self, storage_name: str) -> typing.List['Storage']: + storage_list = self._storage_map[storage_name] + if storage_list is None: + storage_list = self._storage_map[storage_name] = [] + for storage_id in self._backend.storage_list(storage_name): + storage_list.append(Storage(storage_name, storage_id, self._backend)) + return storage_list + + def request(self, storage_name: str, count: int = 1): + """Requests new storage instances of a given name. + + Uses storage-add tool to request additional storage. Juju will notify the unit + via -storage-attached events when it becomes available. + """ + if storage_name not in self._storage_map: + raise ModelError(('cannot add storage {!r}:' + ' it is not present in the charm metadata').format(storage_name)) + self._backend.storage_add(storage_name, count) + + +class Storage: + """"Represents a storage as defined in metadata.yaml + + Attributes: + name: Simple string name of the storage + id: The provider id for storage + """ + + def __init__(self, storage_name, storage_id, backend): + self.name = storage_name + self.id = storage_id + self._backend = backend + self._location = None + + @property + def location(self): + if self._location is None: + raw = self._backend.storage_get('{}/{}'.format(self.name, self.id), "location") + self._location = Path(raw) + return self._location + + +class ModelError(Exception): + """Base class for exceptions raised when interacting with the Model.""" + pass + + +class TooManyRelatedAppsError(ModelError): + """Raised by :meth:`Model.get_relation` if there is more than one related application.""" + + def __init__(self, relation_name, num_related, max_supported): + super().__init__('Too many remote applications on {} ({} > {})'.format( + relation_name, num_related, max_supported)) + self.relation_name = relation_name + self.num_related = num_related + self.max_supported = max_supported + + +class RelationDataError(ModelError): + """Raised by ``Relation.data[entity][key] = 'foo'`` if the data is invalid. 
+ + This is raised if you're either trying to set a value to something that isn't a string, + or if you are trying to set a value in a bucket that you don't have access to. (eg, + another application/unit or setting your application data but you aren't the leader.) + """ + + +class RelationNotFoundError(ModelError): + """Backend error when querying juju for a given relation and that relation doesn't exist.""" + + +class InvalidStatusError(ModelError): + """Raised if trying to set an Application or Unit status to something invalid.""" + + +class _ModelBackend: + """Represents the connection between the Model representation and talking to Juju. + + Charm authors should not directly interact with the ModelBackend, it is a private + implementation of Model. + """ + + LEASE_RENEWAL_PERIOD = datetime.timedelta(seconds=30) + + def __init__(self, unit_name=None, model_name=None): + if unit_name is None: + self.unit_name = os.environ['JUJU_UNIT_NAME'] + else: + self.unit_name = unit_name + if model_name is None: + model_name = os.environ.get('JUJU_MODEL_NAME') + self.model_name = model_name + self.app_name = self.unit_name.split('/')[0] + + self._is_leader = None + self._leader_check_time = None + + def _run(self, *args, return_output=False, use_json=False): + kwargs = dict(stdout=PIPE, stderr=PIPE) + if use_json: + args += ('--format=json',) + try: + result = run(args, check=True, **kwargs) + except CalledProcessError as e: + raise ModelError(e.stderr) + if return_output: + if result.stdout is None: + return '' + else: + text = result.stdout.decode('utf8') + if use_json: + return json.loads(text) + else: + return text + + def relation_ids(self, relation_name): + relation_ids = self._run('relation-ids', relation_name, return_output=True, use_json=True) + return [int(relation_id.split(':')[-1]) for relation_id in relation_ids] + + def relation_list(self, relation_id): + try: + return self._run('relation-list', '-r', str(relation_id), + return_output=True, use_json=True) + except ModelError as e: + if 'relation not found' in str(e): + raise RelationNotFoundError() from e + raise + + def relation_get(self, relation_id, member_name, is_app): + if not isinstance(is_app, bool): + raise TypeError('is_app parameter to relation_get must be a boolean') + + if is_app: + version = JujuVersion.from_environ() + if not version.has_app_data(): + raise RuntimeError( + 'getting application data is not supported on Juju version {}'.format(version)) + + args = ['relation-get', '-r', str(relation_id), '-', member_name] + if is_app: + args.append('--app') + + try: + return self._run(*args, return_output=True, use_json=True) + except ModelError as e: + if 'relation not found' in str(e): + raise RelationNotFoundError() from e + raise + + def relation_set(self, relation_id, key, value, is_app): + if not isinstance(is_app, bool): + raise TypeError('is_app parameter to relation_set must be a boolean') + + if is_app: + version = JujuVersion.from_environ() + if not version.has_app_data(): + raise RuntimeError( + 'setting application data is not supported on Juju version {}'.format(version)) + + args = ['relation-set', '-r', str(relation_id), '{}={}'.format(key, value)] + if is_app: + args.append('--app') + + try: + return self._run(*args) + except ModelError as e: + if 'relation not found' in str(e): + raise RelationNotFoundError() from e + raise + + def config_get(self): + return self._run('config-get', return_output=True, use_json=True) + + def is_leader(self): + """Obtain the current leadership status for the unit the charm 
+        code is executing on.
+
+        The value is cached for the duration of a lease which is 30s in Juju.
+        """
+        now = time.monotonic()
+        if self._leader_check_time is None:
+            check = True
+        else:
+            time_since_check = datetime.timedelta(seconds=now - self._leader_check_time)
+            check = (time_since_check > self.LEASE_RENEWAL_PERIOD or self._is_leader is None)
+        if check:
+            # Current time MUST be saved before running is-leader to ensure the cache
+            # is only used inside the window that is-leader itself asserts.
+            self._leader_check_time = now
+            self._is_leader = self._run('is-leader', return_output=True, use_json=True)
+
+        return self._is_leader
+
+    def resource_get(self, resource_name):
+        return self._run('resource-get', resource_name, return_output=True).strip()
+
+    def pod_spec_set(self, spec, k8s_resources):
+        tmpdir = Path(tempfile.mkdtemp('-pod-spec-set'))
+        try:
+            spec_path = tmpdir / 'spec.json'
+            spec_path.write_text(json.dumps(spec))
+            args = ['--file', str(spec_path)]
+            if k8s_resources:
+                k8s_res_path = tmpdir / 'k8s-resources.json'
+                k8s_res_path.write_text(json.dumps(k8s_resources))
+                args.extend(['--k8s-resources', str(k8s_res_path)])
+            self._run('pod-spec-set', *args)
+        finally:
+            shutil.rmtree(str(tmpdir))
+
+    def status_get(self, *, is_app=False):
+        """Get the status of a unit or an application.
+
+        Args:
+            is_app: A boolean indicating whether the status should be retrieved for a unit
+                or an application.
+        """
+        content = self._run(
+            'status-get', '--include-data', '--application={}'.format(is_app),
+            use_json=True,
+            return_output=True)
+        # Unit status looks like (in YAML):
+        # message: 'load: 0.28 0.26 0.26'
+        # status: active
+        # status-data: {}
+        # Application status looks like (in YAML):
+        # application-status:
+        #   message: 'load: 0.28 0.26 0.26'
+        #   status: active
+        #   status-data: {}
+        #   units:
+        #     uo/0:
+        #       message: 'load: 0.28 0.26 0.26'
+        #       status: active
+        #       status-data: {}
+
+        if is_app:
+            return {'status': content['application-status']['status'],
+                    'message': content['application-status']['message']}
+        else:
+            return content
+
+    def status_set(self, status, message='', *, is_app=False):
+        """Set the status of a unit or an application.
+
+        Args:
+            is_app: A boolean indicating whether the status should be set for a unit or an
+                application.
+ """ + if not isinstance(is_app, bool): + raise TypeError('is_app parameter must be boolean') + return self._run('status-set', '--application={}'.format(is_app), status, message) + + def storage_list(self, name): + return [int(s.split('/')[1]) for s in self._run('storage-list', name, + return_output=True, use_json=True)] + + def storage_get(self, storage_name_id, attribute): + return self._run('storage-get', '-s', storage_name_id, attribute, + return_output=True, use_json=True) + + def storage_add(self, name, count=1): + if not isinstance(count, int) or isinstance(count, bool): + raise TypeError('storage count must be integer, got: {} ({})'.format(count, + type(count))) + self._run('storage-add', '{}={}'.format(name, count)) + + def action_get(self): + return self._run('action-get', return_output=True, use_json=True) + + def action_set(self, results): + self._run('action-set', *["{}={}".format(k, v) for k, v in results.items()]) + + def action_log(self, message): + self._run('action-log', message) + + def action_fail(self, message=''): + self._run('action-fail', message) + + def application_version_set(self, version): + self._run('application-version-set', '--', version) + + def juju_log(self, level, message): + self._run('juju-log', '--log-level', level, message) + + def network_get(self, binding_name, relation_id=None): + """Return network info provided by network-get for a given binding. + + Args: + binding_name: A name of a binding (relation name or extra-binding name). + relation_id: An optional relation id to get network info for. + """ + cmd = ['network-get', binding_name] + if relation_id is not None: + cmd.extend(['-r', str(relation_id)]) + try: + return self._run(*cmd, return_output=True, use_json=True) + except ModelError as e: + if 'relation not found' in str(e): + raise RelationNotFoundError() from e + raise + + def add_metrics(self, metrics, labels=None): + cmd = ['add-metric'] + + if labels: + label_args = [] + for k, v in labels.items(): + _ModelBackendValidator.validate_metric_label(k) + _ModelBackendValidator.validate_label_value(k, v) + label_args.append('{}={}'.format(k, v)) + cmd.extend(['--labels', ','.join(label_args)]) + + metric_args = [] + for k, v in metrics.items(): + _ModelBackendValidator.validate_metric_key(k) + metric_value = _ModelBackendValidator.format_metric_value(v) + metric_args.append('{}={}'.format(k, metric_value)) + cmd.extend(metric_args) + self._run(*cmd) + + +class _ModelBackendValidator: + """Provides facilities for validating inputs and formatting them for model backends.""" + + METRIC_KEY_REGEX = re.compile(r'^[a-zA-Z](?:[a-zA-Z0-9-_]*[a-zA-Z0-9])?$') + + @classmethod + def validate_metric_key(cls, key): + if cls.METRIC_KEY_REGEX.match(key) is None: + raise ModelError( + 'invalid metric key {!r}: must match {}'.format( + key, cls.METRIC_KEY_REGEX.pattern)) + + @classmethod + def validate_metric_label(cls, label_name): + if cls.METRIC_KEY_REGEX.match(label_name) is None: + raise ModelError( + 'invalid metric label name {!r}: must match {}'.format( + label_name, cls.METRIC_KEY_REGEX.pattern)) + + @classmethod + def format_metric_value(cls, value): + try: + decimal_value = decimal.Decimal.from_float(value) + except TypeError as e: + e2 = ModelError('invalid metric value {!r} provided:' + ' must be a positive finite float'.format(value)) + raise e2 from e + if decimal_value.is_nan() or decimal_value.is_infinite() or decimal_value < 0: + raise ModelError('invalid metric value {!r} provided:' + ' must be a positive finite float'.format(value)) + 
return str(decimal_value) + + @classmethod + def validate_label_value(cls, label, value): + # Label values cannot be empty, contain commas or equal signs as those are + # used by add-metric as separators. + if not value: + raise ModelError( + 'metric label {} has an empty value, which is not allowed'.format(label)) + v = str(value) + if re.search('[,=]', v) is not None: + raise ModelError( + 'metric label values must not contain "," or "=": {}={!r}'.format(label, value)) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/ops/storage.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/ops/storage.py new file mode 100755 index 0000000000000000000000000000000000000000..d4310ce1cfbb707c6278b70f84a9751da3ce07af --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/ops/storage.py @@ -0,0 +1,318 @@ +# Copyright 2019-2020 Canonical Ltd. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +from datetime import timedelta +import pickle +import shutil +import subprocess +import sqlite3 +import typing + +import yaml + + +class SQLiteStorage: + + DB_LOCK_TIMEOUT = timedelta(hours=1) + + def __init__(self, filename): + # The isolation_level argument is set to None such that the implicit + # transaction management behavior of the sqlite3 module is disabled. + self._db = sqlite3.connect(str(filename), + isolation_level=None, + timeout=self.DB_LOCK_TIMEOUT.total_seconds()) + self._setup() + + def _setup(self): + # Make sure that the database is locked until the connection is closed, + # not until the transaction ends. + self._db.execute("PRAGMA locking_mode=EXCLUSIVE") + c = self._db.execute("BEGIN") + c.execute("SELECT count(name) FROM sqlite_master WHERE type='table' AND name='snapshot'") + if c.fetchone()[0] == 0: + # Keep in mind what might happen if the process dies somewhere below. + # The system must not be rendered permanently broken by that. + self._db.execute("CREATE TABLE snapshot (handle TEXT PRIMARY KEY, data BLOB)") + self._db.execute(''' + CREATE TABLE notice ( + sequence INTEGER PRIMARY KEY AUTOINCREMENT, + event_path TEXT, + observer_path TEXT, + method_name TEXT) + ''') + self._db.commit() + + def close(self): + self._db.close() + + def commit(self): + self._db.commit() + + # There's commit but no rollback. For abort to be supported, we'll need logic that + # can rollback decisions made by third-party code in terms of the internal state + # of objects that have been snapshotted, and hooks to let them know about it and + # take the needed actions to undo their logic until the last snapshot. + # This is doable but will increase significantly the chances for mistakes. + + def save_snapshot(self, handle_path: str, snapshot_data: typing.Any) -> None: + """Part of the Storage API, persist a snapshot data under the given handle. + + Args: + handle_path: The string identifying the snapshot. + snapshot_data: The data to be persisted. (as returned by Object.snapshot()). This + might be a dict/tuple/int, but must only contain 'simple' python types. 
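+
+        Example (illustrative handle path)::
+
+            storage.save_snapshot('MyCharm/StoredStateData[_stored]', {'count': 1})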
+        """
+        # Use pickle for serialization, so the value remains portable.
+        raw_data = pickle.dumps(snapshot_data)
+        self._db.execute("REPLACE INTO snapshot VALUES (?, ?)", (handle_path, raw_data))
+
+    def load_snapshot(self, handle_path: str) -> typing.Any:
+        """Part of the Storage API, retrieve a snapshot that was previously saved.
+
+        Args:
+            handle_path: The string identifying the snapshot.
+        Raises:
+            NoSnapshotError: if there is no snapshot for the given handle_path.
+        """
+        c = self._db.cursor()
+        c.execute("SELECT data FROM snapshot WHERE handle=?", (handle_path,))
+        row = c.fetchone()
+        if row:
+            return pickle.loads(row[0])
+        raise NoSnapshotError(handle_path)
+
+    def drop_snapshot(self, handle_path: str):
+        """Part of the Storage API, remove a snapshot that was previously saved.
+
+        Dropping a snapshot that doesn't exist is treated as a no-op.
+        """
+        self._db.execute("DELETE FROM snapshot WHERE handle=?", (handle_path,))
+
+    def list_snapshots(self) -> typing.Generator[str, None, None]:
+        """Return the names of all snapshots that are currently saved."""
+        c = self._db.cursor()
+        c.execute("SELECT handle FROM snapshot")
+        while True:
+            rows = c.fetchmany()
+            if not rows:
+                break
+            for row in rows:
+                yield row[0]
+
+    def save_notice(self, event_path: str, observer_path: str, method_name: str) -> None:
+        """Part of the Storage API, record a notice (event and observer)."""
+        self._db.execute('INSERT INTO notice VALUES (NULL, ?, ?, ?)',
+                         (event_path, observer_path, method_name))
+
+    def drop_notice(self, event_path: str, observer_path: str, method_name: str) -> None:
+        """Part of the Storage API, remove a notice that was previously recorded."""
+        self._db.execute('''
+            DELETE FROM notice
+             WHERE event_path=?
+               AND observer_path=?
+               AND method_name=?
+            ''', (event_path, observer_path, method_name))
+
+    def notices(self, event_path: typing.Optional[str]) ->\
+            typing.Generator[typing.Tuple[str, str, str], None, None]:
+        """Part of the Storage API, return all notices that begin with event_path.
+
+        Args:
+            event_path: If supplied, will only yield events that match event_path. If not
+                supplied (or None/'') will return all events.
+        Returns:
+            Iterable of (event_path, observer_path, method_name) tuples
+        """
+        if event_path:
+            c = self._db.execute('''
+                SELECT event_path, observer_path, method_name
+                FROM notice
+                WHERE event_path=?
+                ORDER BY sequence
+                ''', (event_path,))
+        else:
+            c = self._db.execute('''
+                SELECT event_path, observer_path, method_name
+                FROM notice
+                ORDER BY sequence
+                ''')
+        while True:
+            rows = c.fetchmany()
+            if not rows:
+                break
+            for row in rows:
+                yield tuple(row)
+
+
+class JujuStorage:
+    """Storing the content tracked by the Framework in Juju.
+
+    This uses :class:`_JujuStorageBackend` to interact with state-get/state-set
+    as the way to store state for the framework and for components.
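+
+    Example (illustrative sketch; assumes a live Juju hook environment where
+    the state-get/state-set hook tools are on PATH)::
+
+        storage = JujuStorage()
+        storage.save_snapshot('MyCharm/StoredState[_stored]', {'seen': 1})
+        assert storage.load_snapshot('MyCharm/StoredState[_stored]') == {'seen': 1}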
+ """ + + NOTICE_KEY = "#notices#" + + def __init__(self, backend: '_JujuStorageBackend' = None): + self._backend = backend + if backend is None: + self._backend = _JujuStorageBackend() + + def close(self): + return + + def commit(self): + return + + def save_snapshot(self, handle_path: str, snapshot_data: typing.Any) -> None: + self._backend.set(handle_path, snapshot_data) + + def load_snapshot(self, handle_path): + try: + content = self._backend.get(handle_path) + except KeyError: + raise NoSnapshotError(handle_path) + return content + + def drop_snapshot(self, handle_path): + self._backend.delete(handle_path) + + def save_notice(self, event_path: str, observer_path: str, method_name: str): + notice_list = self._load_notice_list() + notice_list.append([event_path, observer_path, method_name]) + self._save_notice_list(notice_list) + + def drop_notice(self, event_path: str, observer_path: str, method_name: str): + notice_list = self._load_notice_list() + notice_list.remove([event_path, observer_path, method_name]) + self._save_notice_list(notice_list) + + def notices(self, event_path: str): + notice_list = self._load_notice_list() + for row in notice_list: + if row[0] != event_path: + continue + yield tuple(row) + + def _load_notice_list(self) -> typing.List[typing.Tuple[str]]: + try: + notice_list = self._backend.get(self.NOTICE_KEY) + except KeyError: + return [] + if notice_list is None: + return [] + return notice_list + + def _save_notice_list(self, notices: typing.List[typing.Tuple[str]]) -> None: + self._backend.set(self.NOTICE_KEY, notices) + + +class _SimpleLoader(getattr(yaml, 'CSafeLoader', yaml.SafeLoader)): + """Handle a couple basic python types. + + yaml.SafeLoader can handle all the basic int/float/dict/set/etc that we want. The only one + that it *doesn't* handle is tuples. We don't want to support arbitrary types, so we just + subclass SafeLoader and add tuples back in. + """ + # Taken from the example at: + # https://stackoverflow.com/questions/9169025/how-can-i-add-a-python-tuple-to-a-yaml-file-using-pyyaml + + construct_python_tuple = yaml.Loader.construct_python_tuple + + +_SimpleLoader.add_constructor( + u'tag:yaml.org,2002:python/tuple', + _SimpleLoader.construct_python_tuple) + + +class _SimpleDumper(getattr(yaml, 'CSafeDumper', yaml.SafeDumper)): + """Add types supported by 'marshal' + + YAML can support arbitrary types, but that is generally considered unsafe (like pickle). So + we want to only support dumping out types that are safe to load. + """ + + +_SimpleDumper.represent_tuple = yaml.Dumper.represent_tuple +_SimpleDumper.add_representer(tuple, _SimpleDumper.represent_tuple) + + +class _JujuStorageBackend: + """Implements the interface from the Operator framework to Juju's state-get/set/etc.""" + + @staticmethod + def is_available() -> bool: + """Check if Juju state storage is available. + + This checks if there is a 'state-get' executable in PATH. + """ + p = shutil.which('state-get') + return p is not None + + def set(self, key: str, value: typing.Any) -> None: + """Set a key to a given value. + + Args: + key: The string key that will be used to find the value later + value: Arbitrary content that will be returned by get(). + Raises: + CalledProcessError: if 'state-set' returns an error code. + """ + # default_flow_style=None means that it can use Block for + # complex types (types that have nested types) but use flow + # for simple types (like an array). Not all versions of PyYAML + # have the same default style. 
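+        # Illustrative note (added): set('mykey', [1, 2]) first YAML-encodes the
+        # value to '[1, 2]\n', then feeds state-set a block-literal document such
+        # as "mykey: |\n  [1, 2]" on stdin, i.e. the value is YAML-encoded twice.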
+        encoded_value = yaml.dump(value, Dumper=_SimpleDumper, default_flow_style=None)
+        content = yaml.dump(
+            {key: encoded_value}, encoding='utf-8', default_style='|',
+            default_flow_style=False,
+            Dumper=_SimpleDumper)
+        subprocess.run(["state-set", "--file", "-"], input=content, check=True)
+
+    def get(self, key: str) -> typing.Any:
+        """Get the value associated with a given key.
+
+        Args:
+            key: The string key that will be used to find the value
+        Raises:
+            CalledProcessError: if 'state-get' returns an error code.
+        """
+        # We don't capture stderr here so it can end up in debug logs.
+        p = subprocess.run(
+            ["state-get", key],
+            stdout=subprocess.PIPE,
+            check=True,
+        )
+        if p.stdout == b'' or p.stdout == b'\n':
+            raise KeyError(key)
+        return yaml.load(p.stdout, Loader=_SimpleLoader)
+
+    def delete(self, key: str) -> None:
+        """Remove a key from being tracked.
+
+        Args:
+            key: The key to stop storing
+        Raises:
+            CalledProcessError: if 'state-delete' returns an error code.
+        """
+        subprocess.run(["state-delete", key], check=True)
+
+
+class NoSnapshotError(Exception):
+
+    def __init__(self, handle_path):
+        self.handle_path = handle_path
+
+    def __str__(self):
+        return 'no snapshot data found for {} object'.format(self.handle_path)
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/ops/testing.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/ops/testing.py
new file mode 100755
index 0000000000000000000000000000000000000000..b4b3fe071216238007c9f3847ca9556be626bf6b
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/ops/testing.py
@@ -0,0 +1,586 @@
+# Copyright 2020 Canonical Ltd.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import inspect
+import pathlib
+from textwrap import dedent
+import tempfile
+import typing
+import yaml
+import weakref
+
+from ops import (
+    charm,
+    framework,
+    model,
+    storage,
+)
+
+
+# OptionalYAML is something like metadata.yaml or actions.yaml. You can
+# pass in a file-like object or the string directly.
+OptionalYAML = typing.Optional[typing.Union[str, typing.TextIO]]
+
+
+# noinspection PyProtectedMember
+class Harness:
+    """This class represents a way to build up the model that will drive a test suite.
+
+    The model that is created is from the viewpoint of the charm that you are testing.
+
+    Example::
+
+        harness = Harness(MyCharm)
+        # Do initial setup here
+        relation_id = harness.add_relation('db', 'postgresql')
+        # Now instantiate the charm to see events as the model changes
+        harness.begin()
+        harness.add_relation_unit(relation_id, 'postgresql/0')
+        harness.update_relation_data(relation_id, 'postgresql/0', {'key': 'val'})
+        # Check that charm has properly handled the relation_joined event for postgresql/0
+        self.assertEqual(harness.charm. ...)
+
+    Args:
+        charm_cls: The Charm class that you'll be testing.
+        meta: A string or file-like object containing the contents of
+            metadata.yaml. If not supplied, we will look for a 'metadata.yaml' file in the
+            parent directory of the Charm, and if not found fall back to a trivial
+            'name: test-charm' metadata.
+        actions: A string or file-like object containing the contents of
+            actions.yaml. If not supplied, we will look for an 'actions.yaml' file in the
+            parent directory of the Charm.
+    """
+
+    def __init__(
+            self,
+            charm_cls: typing.Type[charm.CharmBase],
+            *,
+            meta: OptionalYAML = None,
+            actions: OptionalYAML = None):
+        # TODO: jam 2020-03-05 We probably want to take config as a parameter as well, since
+        #       it would define the default values of config that the charm would see.
+        self._charm_cls = charm_cls
+        self._charm = None
+        self._charm_dir = 'no-disk-path'  # this may be updated by _create_meta
+        self._lazy_resource_dir = None
+        self._meta = self._create_meta(meta, actions)
+        self._unit_name = self._meta.name + '/0'
+        self._framework = None
+        self._hooks_enabled = True
+        self._relation_id_counter = 0
+        self._backend = _TestingModelBackend(self._unit_name, self._meta)
+        self._model = model.Model(self._meta, self._backend)
+        self._storage = storage.SQLiteStorage(':memory:')
+        self._framework = framework.Framework(
+            self._storage, self._charm_dir, self._meta, self._model)
+
+    @property
+    def charm(self) -> charm.CharmBase:
+        """Return the instance of the charm class that was passed to __init__.
+
+        Note that the Charm is not instantiated until you have called
+        :meth:`.begin()`.
+        """
+        return self._charm
+
+    @property
+    def model(self) -> model.Model:
+        """Return the :class:`~ops.model.Model` that is being driven by this Harness."""
+        return self._model
+
+    @property
+    def framework(self) -> framework.Framework:
+        """Return the Framework that is being driven by this Harness."""
+        return self._framework
+
+    @property
+    def _resource_dir(self) -> pathlib.Path:
+        if self._lazy_resource_dir is not None:
+            return self._lazy_resource_dir
+
+        self.__resource_dir = tempfile.TemporaryDirectory()
+        self._lazy_resource_dir = pathlib.Path(self.__resource_dir.name)
+        self._finalizer = weakref.finalize(self, self.__resource_dir.cleanup)
+        return self._lazy_resource_dir
+
+    def begin(self) -> None:
+        """Instantiate the Charm and start handling events.
+
+        Before calling begin(), there is no Charm instance, so changes to the Model won't emit
+        events. You must call begin before :attr:`.charm` is valid.
+        """
+        if self._charm is not None:
+            raise RuntimeError('cannot call the begin method on the harness more than once')
+
+        # The Framework adds attributes to class objects for events, etc. As such, we can't re-use
+        # the original class against multiple Frameworks. So create a locally defined class
+        # and register it.
+        # TODO: jam 2020-03-16 We are looking to change this to instance attributes instead of
+        #       class attributes, which should clean up this ugliness. The API can stay the same
+        class TestEvents(self._charm_cls.on.__class__):
+            pass
+
+        TestEvents.__name__ = self._charm_cls.on.__class__.__name__
+
+        class TestCharm(self._charm_cls):
+            on = TestEvents()
+
+        # Note: jam 2020-03-01 This is so that errors in testing say MyCharm has no attribute foo,
+        # rather than TestCharm has no attribute foo.
+        TestCharm.__name__ = self._charm_cls.__name__
+        self._charm = TestCharm(self._framework)
+
+    def _create_meta(self, charm_metadata, action_metadata):
+        """Create a CharmMeta object.
+
+        Handle the cases where a user doesn't supply explicit metadata snippets.
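+
+        Example (illustrative; metadata passed to Harness ends up here)::
+
+            harness = Harness(MyCharm, meta='name: my-charm')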
+        """
+        filename = inspect.getfile(self._charm_cls)
+        charm_dir = pathlib.Path(filename).parents[1]
+
+        if charm_metadata is None:
+            metadata_path = charm_dir / 'metadata.yaml'
+            if metadata_path.is_file():
+                charm_metadata = metadata_path.read_text()
+                self._charm_dir = charm_dir
+            else:
+                # The simplest of metadata that the framework can support
+                charm_metadata = 'name: test-charm'
+        elif isinstance(charm_metadata, str):
+            charm_metadata = dedent(charm_metadata)
+
+        if action_metadata is None:
+            actions_path = charm_dir / 'actions.yaml'
+            if actions_path.is_file():
+                action_metadata = actions_path.read_text()
+                self._charm_dir = charm_dir
+        elif isinstance(action_metadata, str):
+            action_metadata = dedent(action_metadata)
+
+        return charm.CharmMeta.from_yaml(charm_metadata, action_metadata)
+
+    def add_oci_resource(self, resource_name: str,
+                         contents: typing.Mapping[str, str] = None) -> None:
+        """Add oci resources to the backend.
+
+        This will register an oci resource and create a temporary file for processing metadata
+        about the resource. A default set of values will be used for all the file contents
+        unless a specific contents dict is provided.
+
+        Args:
+            resource_name: Name of the resource to add custom contents to.
+            contents: Optional custom dict to write for the named resource.
+        """
+        if not contents:
+            contents = {'registrypath': 'registrypath',
+                        'username': 'username',
+                        'password': 'password',
+                        }
+        if resource_name not in self._meta.resources.keys():
+            raise RuntimeError('Resource {} is not a defined resource'.format(resource_name))
+        if self._meta.resources[resource_name].type != "oci-image":
+            raise RuntimeError('Resource {} is not an OCI Image'.format(resource_name))
+        resource_dir = self._resource_dir / resource_name
+        resource_dir.mkdir(exist_ok=True)
+        resource_file = resource_dir / "contents.yaml"
+        with resource_file.open('wt', encoding='utf8') as resource_yaml:
+            yaml.dump(contents, resource_yaml)
+        self._backend._resources_map[resource_name] = resource_file
+
+    def populate_oci_resources(self) -> None:
+        """Populate all OCI resources."""
+        for name, data in self._meta.resources.items():
+            if data.type == "oci-image":
+                self.add_oci_resource(name)
+
+    def disable_hooks(self) -> None:
+        """Stop emitting hook events when the model changes.
+
+        This can be used by developers to stop changes to the model from emitting events that
+        the charm will react to. Call :meth:`.enable_hooks`
+        to re-enable them.
+        """
+        self._hooks_enabled = False
+
+    def enable_hooks(self) -> None:
+        """Re-enable hook events from charm.on when the model is changed.
+
+        By default hook events are enabled once you call :meth:`.begin`,
+        but if you have used :meth:`.disable_hooks`, this can be used to
+        enable them again.
+        """
+        self._hooks_enabled = True
+
+    def _next_relation_id(self):
+        rel_id = self._relation_id_counter
+        self._relation_id_counter += 1
+        return rel_id
+
+    def add_relation(self, relation_name: str, remote_app: str) -> int:
+        """Declare that there is a new relation between this app and `remote_app`.
+
+        Args:
+            relation_name: The relation on Charm that is being related to
+            remote_app: The name of the application that is being related to
+
+        Return:
+            The relation_id created by this add_relation.
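+
+        Example (illustrative)::
+
+            rel_id = harness.add_relation('db', 'postgresql')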
+ """ + rel_id = self._next_relation_id() + self._backend._relation_ids_map.setdefault(relation_name, []).append(rel_id) + self._backend._relation_names[rel_id] = relation_name + self._backend._relation_list_map[rel_id] = [] + self._backend._relation_data[rel_id] = { + remote_app: {}, + self._backend.unit_name: {}, + self._backend.app_name: {}, + } + # Reload the relation_ids list + if self._model is not None: + self._model.relations._invalidate(relation_name) + if self._charm is None or not self._hooks_enabled: + return rel_id + relation = self._model.get_relation(relation_name, rel_id) + app = self._model.get_app(remote_app) + self._charm.on[relation_name].relation_created.emit( + relation, app) + return rel_id + + def add_relation_unit(self, relation_id: int, remote_unit_name: str) -> None: + """Add a new unit to a relation. + + Example:: + + rel_id = harness.add_relation('db', 'postgresql') + harness.add_relation_unit(rel_id, 'postgresql/0') + + This will trigger a `relation_joined` event and a `relation_changed` event. + + Args: + relation_id: The integer relation identifier (as returned by add_relation). + remote_unit_name: A string representing the remote unit that is being added. + Return: + None + """ + self._backend._relation_list_map[relation_id].append(remote_unit_name) + self._backend._relation_data[relation_id][remote_unit_name] = {} + relation_name = self._backend._relation_names[relation_id] + # Make sure that the Model reloads the relation_list for this relation_id, as well as + # reloading the relation data for this unit. + if self._model is not None: + remote_unit = self._model.get_unit(remote_unit_name) + relation = self._model.get_relation(relation_name, relation_id) + unit_cache = relation.data.get(remote_unit, None) + if unit_cache is not None: + unit_cache._invalidate() + self._model.relations._invalidate(relation_name) + if self._charm is None or not self._hooks_enabled: + return + self._charm.on[relation_name].relation_joined.emit( + relation, remote_unit.app, remote_unit) + + def get_relation_data(self, relation_id: int, app_or_unit: str) -> typing.Mapping: + """Get the relation data bucket for a single app or unit in a given relation. + + This ignores all of the safety checks of who can and can't see data in relations (eg, + non-leaders can't read their own application's relation data because there are no events + that keep that data up-to-date for the unit). + + Args: + relation_id: The relation whose content we want to look at. + app_or_unit: The name of the application or unit whose data we want to read + Return: + a dict containing the relation data for `app_or_unit` or None. + Raises: + KeyError: if relation_id doesn't exist + """ + return self._backend._relation_data[relation_id].get(app_or_unit, None) + + def get_workload_version(self) -> str: + """Read the workload version that was set by the unit.""" + return self._backend._workload_version + + def set_model_name(self, name: str) -> None: + """Set the name of the Model that this is representing. + + This cannot be called once begin() has been called. But it lets you set the value that + will be returned by Model.name. + """ + if self._charm is not None: + raise RuntimeError('cannot set the Model name after begin()') + self._backend.model_name = name + + def update_relation_data( + self, + relation_id: int, + app_or_unit: str, + key_values: typing.Mapping, + ) -> None: + """Update the relation data for a given unit or application in a given relation. 
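+
+        Example (illustrative; ``rel_id`` as returned by add_relation)::
+
+            harness.update_relation_data(rel_id, 'postgresql/0', {'host': '10.0.0.1'})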
+ + This also triggers the `relation_changed` event for this relation_id. + + Args: + relation_id: The integer relation_id representing this relation. + app_or_unit: The unit or application name that is being updated. + This can be the local or remote application. + key_values: Each key/value will be updated in the relation data. + """ + relation_name = self._backend._relation_names[relation_id] + relation = self._model.get_relation(relation_name, relation_id) + if '/' in app_or_unit: + entity = self._model.get_unit(app_or_unit) + else: + entity = self._model.get_app(app_or_unit) + rel_data = relation.data.get(entity, None) + if rel_data is not None: + # rel_data may have cached now-stale data, so _invalidate() it. + # Note, this won't cause the data to be loaded if it wasn't already. + rel_data._invalidate() + + new_values = self._backend._relation_data[relation_id][app_or_unit].copy() + for k, v in key_values.items(): + if v == '': + new_values.pop(k, None) + else: + new_values[k] = v + self._backend._relation_data[relation_id][app_or_unit] = new_values + + if app_or_unit == self._model.unit.name: + # No events for our own unit + return + if app_or_unit == self._model.app.name: + # updating our own app only generates an event if it is a peer relation and we + # aren't the leader + is_peer = self._meta.relations[relation_name].role.is_peer() + if not is_peer: + return + if self._model.unit.is_leader(): + return + self._emit_relation_changed(relation_id, app_or_unit) + + def _emit_relation_changed(self, relation_id, app_or_unit): + if self._charm is None or not self._hooks_enabled: + return + rel_name = self._backend._relation_names[relation_id] + relation = self.model.get_relation(rel_name, relation_id) + if '/' in app_or_unit: + app_name = app_or_unit.split('/')[0] + unit_name = app_or_unit + app = self.model.get_app(app_name) + unit = self.model.get_unit(unit_name) + args = (relation, app, unit) + else: + app_name = app_or_unit + app = self.model.get_app(app_name) + args = (relation, app) + self._charm.on[rel_name].relation_changed.emit(*args) + + def update_config( + self, + key_values: typing.Mapping[str, str] = None, + unset: typing.Iterable[str] = (), + ) -> None: + """Update the config as seen by the charm. + + This will trigger a `config_changed` event. + + Args: + key_values: A Mapping of key:value pairs to update in config. + unset: An iterable of keys to remove from Config. (Note that this does + not currently reset the config values to the default defined in config.yaml.) + """ + config = self._backend._config + if key_values is not None: + for key, value in key_values.items(): + config[key] = value + for key in unset: + config.pop(key, None) + # NOTE: jam 2020-03-01 Note that this sort of works "by accident". Config + # is a LazyMapping, but its _load returns a dict and this method mutates + # the dict that Config is caching. Arguably we should be doing some sort + # of charm.framework.model.config._invalidate() + if self._charm is None or not self._hooks_enabled: + return + self._charm.on.config_changed.emit() + + def set_leader(self, is_leader: bool = True) -> None: + """Set whether this unit is the leader or not. + + If this charm becomes a leader then `leader_elected` will be triggered. + + Args: + is_leader: True/False as to whether this unit is the leader. 
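+
+        Example (illustrative)::
+
+            harness.set_leader(True)  # emits leader_elected once begin() has run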
+ """ + was_leader = self._backend._is_leader + self._backend._is_leader = is_leader + # Note: jam 2020-03-01 currently is_leader is cached at the ModelBackend level, not in + # the Model objects, so this automatically gets noticed. + if is_leader and not was_leader and self._charm is not None and self._hooks_enabled: + self._charm.on.leader_elected.emit() + + def _get_backend_calls(self, reset: bool = True) -> list: + """Return the calls that we have made to the TestingModelBackend. + + This is useful mostly for testing the framework itself, so that we can assert that we + do/don't trigger extra calls. + + Args: + reset: If True, reset the calls list back to empty, if false, the call list is + preserved. + Return: + ``[(call1, args...), (call2, args...)]`` + """ + calls = self._backend._calls.copy() + if reset: + self._backend._calls.clear() + return calls + + +def _record_calls(cls): + """Replace methods on cls with methods that record that they have been called. + + Iterate all attributes of cls, and for public methods, replace them with a wrapped method + that records the method called along with the arguments and keyword arguments. + """ + for meth_name, orig_method in cls.__dict__.items(): + if meth_name.startswith('_'): + continue + + def decorator(orig_method): + def wrapped(self, *args, **kwargs): + full_args = (orig_method.__name__,) + args + if kwargs: + full_args = full_args + (kwargs,) + self._calls.append(full_args) + return orig_method(self, *args, **kwargs) + return wrapped + + setattr(cls, meth_name, decorator(orig_method)) + return cls + + +@_record_calls +class _TestingModelBackend: + """This conforms to the interface for ModelBackend but provides canned data. + + DO NOT use this class directly, it is used by `Harness`_ to drive the model. + `Harness`_ is responsible for maintaining the internal consistency of the values here, + as the only public methods of this type are for implementing ModelBackend. + """ + + def __init__(self, unit_name, meta): + self.unit_name = unit_name + self.app_name = self.unit_name.split('/')[0] + self.model_name = None + self._calls = [] + self._meta = meta + self._is_leader = None + self._relation_ids_map = {} # relation name to [relation_ids,...] + self._relation_names = {} # reverse map from relation_id to relation_name + self._relation_list_map = {} # relation_id: [unit_name,...] 
+ self._relation_data = {} # {relation_id: {name: data}} + self._config = {} + self._is_leader = False + self._resources_map = {} + self._pod_spec = None + self._app_status = {'status': 'unknown', 'message': ''} + self._unit_status = {'status': 'maintenance', 'message': ''} + self._workload_version = None + + def relation_ids(self, relation_name): + try: + return self._relation_ids_map[relation_name] + except KeyError as e: + if relation_name not in self._meta.relations: + raise model.ModelError('{} is not a known relation'.format(relation_name)) from e + return [] + + def relation_list(self, relation_id): + try: + return self._relation_list_map[relation_id] + except KeyError as e: + raise model.RelationNotFoundError from e + + def relation_get(self, relation_id, member_name, is_app): + if is_app and '/' in member_name: + member_name = member_name.split('/')[0] + if relation_id not in self._relation_data: + raise model.RelationNotFoundError() + return self._relation_data[relation_id][member_name].copy() + + def relation_set(self, relation_id, key, value, is_app): + relation = self._relation_data[relation_id] + if is_app: + bucket_key = self.app_name + else: + bucket_key = self.unit_name + if bucket_key not in relation: + relation[bucket_key] = {} + bucket = relation[bucket_key] + if value == '': + bucket.pop(key, None) + else: + bucket[key] = value + + def config_get(self): + return self._config + + def is_leader(self): + return self._is_leader + + def application_version_set(self, version): + self._workload_version = version + + def resource_get(self, resource_name): + return self._resources_map[resource_name] + + def pod_spec_set(self, spec, k8s_resources): + self._pod_spec = (spec, k8s_resources) + + def status_get(self, *, is_app=False): + if is_app: + return self._app_status + else: + return self._unit_status + + def status_set(self, status, message='', *, is_app=False): + if is_app: + self._app_status = {'status': status, 'message': message} + else: + self._unit_status = {'status': status, 'message': message} + + def storage_list(self, name): + raise NotImplementedError(self.storage_list) + + def storage_get(self, storage_name_id, attribute): + raise NotImplementedError(self.storage_get) + + def storage_add(self, name, count=1): + raise NotImplementedError(self.storage_add) + + def action_get(self): + raise NotImplementedError(self.action_get) + + def action_set(self, results): + raise NotImplementedError(self.action_set) + + def action_log(self, message): + raise NotImplementedError(self.action_log) + + def action_fail(self, message=''): + raise NotImplementedError(self.action_fail) + + def network_get(self, endpoint_name, relation_id=None): + raise NotImplementedError(self.network_get) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/ops/version.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/ops/version.py new file mode 100644 index 0000000000000000000000000000000000000000..15e5478555ee0fa948bfb0ad57cc79ba7cef3721 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/lib/ops/version.py @@ -0,0 +1,50 @@ +# Copyright 2020 Canonical Ltd. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import subprocess
+from pathlib import Path
+
+__all__ = ('version',)
+
+_FALLBACK = '0.8'  # this gets bumped after release
+
+
+def _get_version():
+    version = _FALLBACK + ".dev0+unknown"
+
+    p = Path(__file__).parent
+    if (p.parent / '.git').exists():
+        try:
+            proc = subprocess.run(
+                ['git', 'describe', '--tags', '--dirty'],
+                stdout=subprocess.PIPE,
+                stderr=subprocess.DEVNULL,
+                cwd=p,
+                check=True)
+        except Exception:
+            pass
+        else:
+            version = proc.stdout.strip().decode('utf8')
+            if '-' in version:
+                # version will look like <tag>-<#commits>-g<sha>[-dirty]
+                # in terms of PEP 440, the <tag> we'll make sure is a 'public version identifier';
+                # everything after the first - needs to be a 'local version'
+                public, local = version.split('-', 1)
+                version = public + '+' + local.replace('-', '.')
+                # version now <tag>+<#commits>.g<sha>[.dirty]
+                # which is PEP440-compliant (as long as <tag> is :-)
+    return version
+
+
+version = _get_version()
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/metadata.yaml b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/metadata.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..c331b60d88dcfc3f1c675e24282dacdf8ec4ffdb
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/metadata.yaml
@@ -0,0 +1,11 @@
+name: vyos-config
+summary: A proxy charm to configure VyOS Router
+maintainer: David García
+description: |
+  Charm to configure VyOS PNF
+series:
+  - xenial
+  - bionic
+peers:
+  proxypeer:
+    interface: proxypeer
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/playbooks/add-port-forward.yaml b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/playbooks/add-port-forward.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..5d48d75870345d86b3aaa2ae06204bffe2dfe00e
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/playbooks/add-port-forward.yaml
@@ -0,0 +1,13 @@
+- hosts: vyos-routers
+  gather_facts: false
+  connection: local
+  tasks:
+    - name: add port-forward rule (vyos)
+      vyos_config:
+        lines:
+          - nat destination rule {{ ruleNumber }} destination port "{{ sourcePort }}"
+          - nat destination rule {{ ruleNumber }} inbound-interface "eth0"
+          - nat destination rule {{ ruleNumber }} protocol "tcp"
+          - nat destination rule {{ ruleNumber }} translation port "{{ destinationPort }}"
+          - nat destination rule {{ ruleNumber }} translation address "{{ destinationAddress }}"
+        save: yes
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/playbooks/backup.yaml b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/playbooks/backup.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..c83de0831c8271bdd5477e6fd3c012d204df7ac1
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/playbooks/backup.yaml
@@ -0,0 +1,9 @@
+- hosts: vyos-routers
+  gather_facts: false
+  connection: local
+  tasks:
+    - name: backup switch (vyos)
+      vyos_config:
+        backup: yes
+        backup_options:
+          filename: "{{ backupFile }}"
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/playbooks/remove-port-forward.yaml b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/playbooks/remove-port-forward.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..61c12f955f95d172f2da52a7049329470071bf3c
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/playbooks/remove-port-forward.yaml
@@ -0,0 +1,11 @@
+- hosts: vyos-routers
+  gather_facts: false
+  connection: local
+  tasks:
+    - name: remove port-forward rule (vyos)
+      vyos_command:
+        commands:
+          - config
+          - delete nat destination rule {{ ruleNumber }}
+          - commit
+          - save
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/requirements.txt b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/requirements.txt
new file mode 100644
index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/src/charm.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/src/charm.py
new file mode 100755
index 0000000000000000000000000000000000000000..6f97799ce1827c2da22ea51cd6453564bf83321c
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config-src/src/charm.py
@@ -0,0 +1,112 @@
+#!/usr/bin/env python3
+# Copyright 2020 Canonical Ltd.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import os
+import subprocess
+import sys
+import traceback
+
+# Make the bundled "lib" directory importable before pulling in the charm
+# libraries (the append only has an effect if it runs before these imports).
+sys.path.append("lib")
+
+from charms.osm import libansible
+from charms.osm.sshproxy import SSHProxyCharm
+from ops.charm import CharmBase, CharmEvents
+from ops.framework import StoredState, EventBase, EventSource
+from ops.main import main
+from ops.model import (
+    ActiveStatus,
+    BlockedStatus,
+    MaintenanceStatus,
+    WaitingStatus,
+    ModelError,
+)
+
+
+class VyosCharm(SSHProxyCharm):
+    def __init__(self, framework, key):
+        super().__init__(framework, key)
+
+        # Register all of the events we want to observe
+        self.framework.observe(self.on.config_changed, self.on_config_changed)
+        self.framework.observe(self.on.install, self.on_install)
+        self.framework.observe(self.on.start, self.on_start)
+        self.framework.observe(self.on.upgrade_charm, self.on_upgrade_charm)
+        # Charm actions (primitives)
+        self.framework.observe(
+            self.on.add_port_forward_action, self.on_add_port_forward_action
+        )
+        self.framework.observe(
+            self.on.remove_port_forward_action, self.on_remove_port_forward_action
+        )
+
+    def on_config_changed(self, event):
+        """Handle changes in configuration."""
+        super().on_config_changed(event)
+
+    def on_install(self, event):
+        """Called when the charm is being installed."""
+        super().on_install(event)
+        self.unit.status = MaintenanceStatus("Installing Ansible")
+        libansible.install_ansible_support()
+        self.unit.status = ActiveStatus()
+
+    def on_start(self, event):
+        """Called when the charm is being started."""
+        super().on_start(event)
+
+    def on_add_port_forward_action(self, event):
+        """Add a port-forward rule."""
+        self._run_playbook(
+            event,
+            "add-port-forward.yaml",
+            {
+                "sourcePort": event.params["sourcePort"],
+                "destinationPort": event.params["destinationPort"],
"destinationAddress": event.params["destinationAddress"], + "ruleNumber": event.params["ruleNumber"], + }) + + def on_remove_port_forward_action(self, event): + """Remove a port forward rule.""" + self._run_playbook( + event, + "remove-port-forward.yaml", + { + "ruleNumber": event.params["ruleNumber"], + }) + + def on_upgrade_charm(self, event): + """Upgrade the charm.""" + + def _run_playbook(self, event, playbook, variables): + try: + config = self.model.config + result = libansible.execute_playbook( + playbook, + config["ssh-hostname"], + config["ssh-username"], + config["ssh-password"], + variables, + ) + event.set_results({"output": result}) + except: + exc_type, exc_value, exc_traceback = sys.exc_info() + err = traceback.format_exception(exc_type, exc_value, exc_traceback) + event.fail(message="Playbook " + playbook + " failed: " + str(err)) + + +if __name__ == "__main__": + main(VyosCharm) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/README.md b/SOL004/hackfest_firewall_pnf/charms/vyos-config/README.md new file mode 100644 index 0000000000000000000000000000000000000000..2bdcab20c1a6d39aaffe9a450ba3bdbc44044b42 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/README.md @@ -0,0 +1,3 @@ +# Vyos-config + +This is a proxy charm used by Open Source Mano (OSM) to configure Vyos Router PNF, written in the [Python Operator Framwork](https://github.com/canonical/operator) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/actions.yaml b/SOL004/hackfest_firewall_pnf/charms/vyos-config/actions.yaml new file mode 100644 index 0000000000000000000000000000000000000000..80497e105a10016e2dec0329f6157dbad382a298 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/actions.yaml @@ -0,0 +1,46 @@ +# VyOS Action +add-port-forward: + description: "Adds a port forwarding rule" + params: + ruleNumber: + description: "Rule number, must be unique and needed to remove the rule later" + type: "string" + default: "10" + sourcePort: + description: "Source port to listen on" + type: "string" + destinationPort: + description: "Target port number on remote host to forward" + type: "string" + destinationAddress: + description: "Target host or IP address to forward traffic" + type: "string" + required: + - sourcePort + - destinationPort + - destinationAddress + +remove-port-forward: + description: "Removes a port forwarding rule by number" + params: + ruleNumber: + description: "Rule number to remove" + type: "string" + default: "10" + +# Required by charms.osm.sshproxy +run: + description: "Run an arbitrary command" + params: + command: + description: "The command to execute." + type: string + default: "" + required: + - command +generate-ssh-key: + description: "Generate a new SSH keypair for this unit. This will replace any existing previously generated keypair." +verify-ssh-credentials: + description: "Verify that this unit can authenticate with server specified by ssh-hostname and ssh-username." +get-ssh-public-key: + description: "Get the public SSH key for this unit." 
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/config.yaml b/SOL004/hackfest_firewall_pnf/charms/vyos-config/config.yaml new file mode 100644 index 0000000000000000000000000000000000000000..7f6d4408bec64af0b5c3ebf0e8cae30b33609f74 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/config.yaml @@ -0,0 +1,25 @@ +options: + ssh-hostname: + type: string + default: "" + description: "The hostname or IP address of the machine to" + ssh-username: + type: string + default: "" + description: "The username to login as." + ssh-password: + type: string + default: "" + description: "The password used to authenticate." + ssh-public-key: + type: string + default: "" + description: "The public key of this unit." + ssh-key-type: + type: string + default: "rsa" + description: "The type of encryption to use for the SSH key." + ssh-key-bits: + type: int + default: 4096 + description: "The number of bits to use for the SSH key." diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/dispatch b/SOL004/hackfest_firewall_pnf/charms/vyos-config/dispatch new file mode 100755 index 0000000000000000000000000000000000000000..fe31c0567bdce62a6542a6470997cb6a874e4bd8 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/dispatch @@ -0,0 +1,3 @@ +#!/bin/sh + +JUJU_DISPATCH_PATH="${JUJU_DISPATCH_PATH:-$0}" PYTHONPATH=lib:venv ./src/charm.py diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/hooks/install b/SOL004/hackfest_firewall_pnf/charms/vyos-config/hooks/install new file mode 120000 index 0000000000000000000000000000000000000000..8b970447af1decd19c27ca3c609fc97f56a233e3 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/hooks/install @@ -0,0 +1 @@ +../dispatch \ No newline at end of file diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/hooks/start b/SOL004/hackfest_firewall_pnf/charms/vyos-config/hooks/start new file mode 120000 index 0000000000000000000000000000000000000000..8b970447af1decd19c27ca3c609fc97f56a233e3 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/hooks/start @@ -0,0 +1 @@ +../dispatch \ No newline at end of file diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/hooks/upgrade-charm b/SOL004/hackfest_firewall_pnf/charms/vyos-config/hooks/upgrade-charm new file mode 120000 index 0000000000000000000000000000000000000000..8b970447af1decd19c27ca3c609fc97f56a233e3 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/hooks/upgrade-charm @@ -0,0 +1 @@ +../dispatch \ No newline at end of file diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..61ef90719b5d5d759de1a6b80a1ea748d8bb0911 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/__init__.py @@ -0,0 +1,97 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+
+# Bootstrap charm-helpers, installing its dependencies if necessary using
+# only standard libraries.
+from __future__ import print_function
+from __future__ import absolute_import
+
+import functools
+import inspect
+import subprocess
+import sys
+
+try:
+    import six  # NOQA:F401
+except ImportError:
+    if sys.version_info.major == 2:
+        subprocess.check_call(['apt-get', 'install', '-y', 'python-six'])
+    else:
+        subprocess.check_call(['apt-get', 'install', '-y', 'python3-six'])
+    import six  # NOQA:F401
+
+try:
+    import yaml  # NOQA:F401
+except ImportError:
+    if sys.version_info.major == 2:
+        subprocess.check_call(['apt-get', 'install', '-y', 'python-yaml'])
+    else:
+        subprocess.check_call(['apt-get', 'install', '-y', 'python3-yaml'])
+    import yaml  # NOQA:F401
+
+
+# Holds a list of mapping of mangled function names that have been deprecated
+# using the @deprecate decorator below. This is so that the warning is only
+# printed once for each usage of the function.
+__deprecated_functions = {}
+
+
+def deprecate(warning, date=None, log=None):
+    """Add a deprecation warning the first time the function is used.
+    The date, which is a string in semi-ISO8601 format, indicates the year-month
+    that the function is officially going to be removed.
+
+    usage:
+
+    @deprecate('use core/fetch/add_source() instead', '2017-04')
+    def contributed_add_source_thing(...):
+        ...
+
+    And it then prints to the log ONCE that the function is deprecated.
+    The reason for passing the logging function (log) is so that hookenv.log
+    can be used for a charm if needed.
+
+    :param warning: String to indicate where it has moved to.
+    :param date: optional string, in YYYY-MM format to indicate when the
+        function will definitely (probably) be removed.
+    :param log: The log function to call to log. If None, logs to stdout
+    """
+    def wrap(f):
+
+        @functools.wraps(f)
+        def wrapped_f(*args, **kwargs):
+            try:
+                module = inspect.getmodule(f)
+                file = inspect.getsourcefile(f)
+                lines = inspect.getsourcelines(f)
+                f_name = "{}-{}-{}..{}-{}".format(
+                    module.__name__, file, lines[0], lines[-1], f.__name__)
+            except (IOError, TypeError):
+                # assume it was local, so just use the name of the function
+                f_name = f.__name__
+            if f_name not in __deprecated_functions:
+                __deprecated_functions[f_name] = True
+                s = "DEPRECATION WARNING: Function {} is being removed".format(
+                    f.__name__)
+                if date:
+                    s = "{} on/around {}".format(s, date)
+                if warning:
+                    s = "{} : {}".format(s, warning)
+                if log:
+                    log(s)
+                else:
+                    print(s)
+            return f(*args, **kwargs)
+        return wrapped_f
+    return wrap
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/cli/README.rst b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/cli/README.rst
new file mode 100644
index 0000000000000000000000000000000000000000..f7901c09c79c268d353825776a74072a7ee4dee7
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/cli/README.rst
@@ -0,0 +1,57 @@
+==========
+Commandant
+==========
+
+-----------------------------------------------------
+Automatic command-line interfaces to Python functions
+-----------------------------------------------------
+
+One of the benefits of ``libvirt`` is the uniformity of the interface: the C API (as well as the bindings in other languages) is a set of functions that accept parameters that are nearly identical to the command-line arguments.
If you run ``virsh``, you get an interactive command prompt that supports all of the same commands that your shell scripts use as ``virsh`` subcommands.
+
+Command execution and stdio manipulation is the greatest common factor across all development systems in the POSIX environment. By exposing your functions as commands that manipulate streams of text, you can make life easier for all the Ruby and Erlang and Go programmers in your life.
+
+Goals
+=====
+
+* Single decorator to expose a function as a command.
+    * now two decorators - one "automatic" and one that allows authors to manipulate the arguments for fine-grained control. (MW)
+* Automatic analysis of function signature through ``inspect.getargspec()``
+* Command argument parser built automatically with ``argparse``
+* Interactive interpreter loop object made with ``Cmd``
+* Options to output structured return value data via ``pprint``, ``yaml`` or ``json`` dumps.
+
+Other Important Features that need writing
+------------------------------------------
+
+* Help and Usage documentation can be automatically generated, but it will be important to let users override this behaviour
+* The decorator should allow specifying further parameters to the parser's add_argument() calls, to specify types or to make arguments behave as boolean flags, etc.
+    - Filename arguments are important, as good practice is for functions to accept file objects as parameters.
+    - choices arguments help to limit bad input before the function is called
+* Some automatic behaviour could make for better defaults, once the user can override them.
+    - We could automatically detect arguments that default to False or True, and automatically support --no-foo for foo=True.
+    - We could automatically support hyphens as alternates for underscores
+    - Arguments defaulting to sequence types could support the ``append`` action.
+
+
+-----------------------------------------------------
+Implementing subcommands
+-----------------------------------------------------
+
+(WIP)
+
+So as to avoid dependencies on the cli module, subcommands should be defined separately from their implementations. The recommendation would be to place definitions into separate modules near the implementations which they expose.
+
+Some examples::
+
+    from charmhelpers.cli import CommandLine
+    from charmhelpers.payload import execd
+    from charmhelpers.foo import bar
+
+    cli = CommandLine()
+
+    cli.subcommand(execd.execd_run)
+
+    @cli.subcommand_builder("bar", help="Bar baz qux")
+    def barcmd_builder(subparser):
+        subparser.add_argument('argument1', help="yackety")
+        return bar
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/cli/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/cli/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..389b490f4eec27f18118ab6a5f3f529dbf2e9ecc
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/cli/__init__.py
@@ -0,0 +1,189 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and +# limitations under the License. + +import inspect +import argparse +import sys + +from six.moves import zip + +import charmhelpers.core.unitdata + + +class OutputFormatter(object): + def __init__(self, outfile=sys.stdout): + self.formats = ( + "raw", + "json", + "py", + "yaml", + "csv", + "tab", + ) + self.outfile = outfile + + def add_arguments(self, argument_parser): + formatgroup = argument_parser.add_mutually_exclusive_group() + choices = self.supported_formats + formatgroup.add_argument("--format", metavar='FMT', + help="Select output format for returned data, " + "where FMT is one of: {}".format(choices), + choices=choices, default='raw') + for fmt in self.formats: + fmtfunc = getattr(self, fmt) + formatgroup.add_argument("-{}".format(fmt[0]), + "--{}".format(fmt), action='store_const', + const=fmt, dest='format', + help=fmtfunc.__doc__) + + @property + def supported_formats(self): + return self.formats + + def raw(self, output): + """Output data as raw string (default)""" + if isinstance(output, (list, tuple)): + output = '\n'.join(map(str, output)) + self.outfile.write(str(output)) + + def py(self, output): + """Output data as a nicely-formatted python data structure""" + import pprint + pprint.pprint(output, stream=self.outfile) + + def json(self, output): + """Output data in JSON format""" + import json + json.dump(output, self.outfile) + + def yaml(self, output): + """Output data in YAML format""" + import yaml + yaml.safe_dump(output, self.outfile) + + def csv(self, output): + """Output data as excel-compatible CSV""" + import csv + csvwriter = csv.writer(self.outfile) + csvwriter.writerows(output) + + def tab(self, output): + """Output data in excel-compatible tab-delimited format""" + import csv + csvwriter = csv.writer(self.outfile, dialect=csv.excel_tab) + csvwriter.writerows(output) + + def format_output(self, output, fmt='raw'): + fmtfunc = getattr(self, fmt) + fmtfunc(output) + + +class CommandLine(object): + argument_parser = None + subparsers = None + formatter = None + exit_code = 0 + + def __init__(self): + if not self.argument_parser: + self.argument_parser = argparse.ArgumentParser(description='Perform common charm tasks') + if not self.formatter: + self.formatter = OutputFormatter() + self.formatter.add_arguments(self.argument_parser) + if not self.subparsers: + self.subparsers = self.argument_parser.add_subparsers(help='Commands') + + def subcommand(self, command_name=None): + """ + Decorate a function as a subcommand. Use its arguments as the + command-line arguments""" + def wrapper(decorated): + cmd_name = command_name or decorated.__name__ + subparser = self.subparsers.add_parser(cmd_name, + description=decorated.__doc__) + for args, kwargs in describe_arguments(decorated): + subparser.add_argument(*args, **kwargs) + subparser.set_defaults(func=decorated) + return decorated + return wrapper + + def test_command(self, decorated): + """ + Subcommand is a boolean test function, so bool return values should be + converted to a 0/1 exit code. + """ + decorated._cli_test_command = True + return decorated + + def no_output(self, decorated): + """ + Subcommand is not expected to return a value, so don't print a spurious None. + """ + decorated._cli_no_output = True + return decorated + + def subcommand_builder(self, command_name, description=None): + """ + Decorate a function that builds a subcommand. 
Builders should accept a + single argument (the subparser instance) and return the function to be + run as the command.""" + def wrapper(decorated): + subparser = self.subparsers.add_parser(command_name) + func = decorated(subparser) + subparser.set_defaults(func=func) + subparser.description = description or func.__doc__ + return wrapper + + def run(self): + "Run cli, processing arguments and executing subcommands." + arguments = self.argument_parser.parse_args() + argspec = inspect.getargspec(arguments.func) + vargs = [] + for arg in argspec.args: + vargs.append(getattr(arguments, arg)) + if argspec.varargs: + vargs.extend(getattr(arguments, argspec.varargs)) + output = arguments.func(*vargs) + if getattr(arguments.func, '_cli_test_command', False): + self.exit_code = 0 if output else 1 + output = '' + if getattr(arguments.func, '_cli_no_output', False): + output = '' + self.formatter.format_output(output, arguments.format) + if charmhelpers.core.unitdata._KV: + charmhelpers.core.unitdata._KV.flush() + + +cmdline = CommandLine() + + +def describe_arguments(func): + """ + Analyze a function's signature and return a data structure suitable for + passing in as arguments to an argparse parser's add_argument() method.""" + + argspec = inspect.getargspec(func) + # we should probably raise an exception somewhere if func includes **kwargs + if argspec.defaults: + positional_args = argspec.args[:-len(argspec.defaults)] + keyword_names = argspec.args[-len(argspec.defaults):] + for arg, default in zip(keyword_names, argspec.defaults): + yield ('--{}'.format(arg),), {'default': default} + else: + positional_args = argspec.args + + for arg in positional_args: + yield (arg,), {} + if argspec.varargs: + yield (argspec.varargs,), {'nargs': '*'} diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/cli/benchmark.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/cli/benchmark.py new file mode 100644 index 0000000000000000000000000000000000000000..303af14b607d31e338aefff0df593609b7b45feb --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/cli/benchmark.py @@ -0,0 +1,34 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +from . 
import cmdline +from charmhelpers.contrib.benchmark import Benchmark + + +@cmdline.subcommand(command_name='benchmark-start') +def start(): + Benchmark.start() + + +@cmdline.subcommand(command_name='benchmark-finish') +def finish(): + Benchmark.finish() + + +@cmdline.subcommand_builder('benchmark-composite', description="Set the benchmark composite score") +def service(subparser): + subparser.add_argument("value", help="The composite score.") + subparser.add_argument("units", help="The units the composite score represents, i.e., 'reads/sec'.") + subparser.add_argument("direction", help="'asc' if a lower score is better, 'desc' if a higher score is better.") + return Benchmark.set_composite_score diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/cli/commands.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/cli/commands.py new file mode 100644 index 0000000000000000000000000000000000000000..b93105650be8226aa390fda46aa08afb23ebc7bc --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/cli/commands.py @@ -0,0 +1,30 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +""" +This module loads sub-modules into the python runtime so they can be +discovered via the inspect module. In order to prevent flake8 from (rightfully) +telling us these are unused modules, throw a ' # noqa' at the end of each import +so that the warning is suppressed. +""" + +from . import CommandLine # noqa + +""" +Import the sub-modules which have decorated subcommands to register with chlp. +""" +from . import host # noqa +from . import benchmark # noqa +from . import unitdata # noqa +from . import hookenv # noqa diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/cli/hookenv.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/cli/hookenv.py new file mode 100644 index 0000000000000000000000000000000000000000..bd72f448bf0092251a454ff6dd3145f09048ae72 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/cli/hookenv.py @@ -0,0 +1,21 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +from . 
import cmdline +from charmhelpers.core import hookenv + + +cmdline.subcommand('relation-id')(hookenv.relation_id._wrapped) +cmdline.subcommand('service-name')(hookenv.service_name) +cmdline.subcommand('remote-service-name')(hookenv.remote_service_name._wrapped) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/cli/host.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/cli/host.py new file mode 100644 index 0000000000000000000000000000000000000000..40396849907976fd077cc9c53a0852c4380c6266 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/cli/host.py @@ -0,0 +1,29 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +from . import cmdline +from charmhelpers.core import host + + +@cmdline.subcommand() +def mounts(): + "List mounts" + return host.mounts() + + +@cmdline.subcommand_builder('service', description="Control system services") +def service(subparser): + subparser.add_argument("action", help="The action to perform (start, stop, etc...)") + subparser.add_argument("service_name", help="Name of the service to control") + return host.service diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/cli/unitdata.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/cli/unitdata.py new file mode 100644 index 0000000000000000000000000000000000000000..acce846f84ef32ed0b5829cf08e67ad33f0eb5d1 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/cli/unitdata.py @@ -0,0 +1,46 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +from . 
import cmdline
+from charmhelpers.core import unitdata
+
+
+@cmdline.subcommand_builder('unitdata', description="Store and retrieve data")
+def unitdata_cmd(subparser):
+    nested = subparser.add_subparsers()
+
+    get_cmd = nested.add_parser('get', help='Retrieve data')
+    get_cmd.add_argument('key', help='Key to retrieve the value of')
+    get_cmd.set_defaults(action='get', value=None)
+
+    getrange_cmd = nested.add_parser('getrange', help='Retrieve multiple data')
+    getrange_cmd.add_argument('key', metavar='prefix',
+                              help='Prefix of the keys to retrieve')
+    getrange_cmd.set_defaults(action='getrange', value=None)
+
+    set_cmd = nested.add_parser('set', help='Store data')
+    set_cmd.add_argument('key', help='Key to set')
+    set_cmd.add_argument('value', help='Value to store')
+    set_cmd.set_defaults(action='set')
+
+    def _unitdata_cmd(action, key, value):
+        if action == 'get':
+            return unitdata.kv().get(key)
+        elif action == 'getrange':
+            return unitdata.kv().getrange(key)
+        elif action == 'set':
+            unitdata.kv().set(key, value)
+            unitdata.kv().flush()
+        return ''
+    return _unitdata_cmd
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/context.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/context.py
new file mode 100644
index 0000000000000000000000000000000000000000..01864740e89633f6c65f43c32ed89ef9d5e857c9
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/context.py
@@ -0,0 +1,205 @@
+# Copyright 2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+'''
+A Pythonic API to interact with the charm hook environment.
+
+:author: Stuart Bishop
+'''
+
+import six
+
+from charmhelpers.core import hookenv
+
+from collections import OrderedDict
+if six.PY3:
+    from collections import UserDict  # pragma: nocover
+else:
+    from UserDict import IterableUserDict as UserDict  # pragma: nocover
+
+
+class Relations(OrderedDict):
+    '''Mapping relation name -> relation id -> Relation.
+
+    >>> rels = Relations()
+    >>> rels['sprog']['sprog:12']['client/6']['widget']
+    'remote widget'
+    >>> rels['sprog']['sprog:12'].local['widget'] = 'local widget'
+    >>> rels['sprog']['sprog:12'].local['widget']
+    'local widget'
+    >>> rels.peer.local['widget']
+    'local widget on the peer relation'
+    '''
+    def __init__(self):
+        super(Relations, self).__init__()
+        for relname in sorted(hookenv.relation_types()):
+            self[relname] = OrderedDict()
+            relids = hookenv.relation_ids(relname)
+            relids.sort(key=lambda x: int(x.split(':', 1)[-1]))
+            for relid in relids:
+                self[relname][relid] = Relation(relid)
+
+    @property
+    def peer(self):
+        peer_relid = hookenv.peer_relation_id()
+        for rels in self.values():
+            if peer_relid in rels:
+                return rels[peer_relid]
+
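For illustration only (not part of the vendored library): a hook can walk the `Relations` snapshot above and read or publish settings. The `website` relation name and `hostname` key are invented for this sketch:

```python
# A sketch only: the 'website' relation and 'hostname' key are invented.
from charmhelpers import context
from charmhelpers.core import hookenv

rels = context.Relations()
# relation name -> relation id -> remote unit -> RelationInfo (dict-like)
for relid, relation in rels.get('website', {}).items():
    for unit, relinfo in relation.items():
        hookenv.log('{} published hostname={}'.format(
            unit, relinfo.get('hostname')))
    # Publish our side of the conversation; strings only (see RelationInfo).
    relation.local['hostname'] = hookenv.unit_private_ip()
```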
+class Relation(OrderedDict):
+    '''Mapping of unit -> remote RelationInfo for a relation.
+
+    This is an OrderedDict mapping, ordered numerically by unit number.
+
+    Also provides access to the local RelationInfo, and peer RelationInfo
+    instances by the 'local' and 'peers' attributes.
+
+    >>> r = Relation('sprog:12')
+    >>> r.keys()
+    ['client/9', 'client/10']  # Ordered numerically
+    >>> r['client/10']['widget']  # A remote RelationInfo setting
+    'remote widget'
+    >>> r.local['widget']  # The local RelationInfo setting
+    'local widget'
+    '''
+    relid = None  # The relation id.
+    relname = None  # The relation name (also known as relation type).
+    service = None  # The remote service name, if known.
+    local = None  # The local end's RelationInfo.
+    peers = None  # Map of peer -> RelationInfo. None if no peer relation.
+
+    def __init__(self, relid):
+        remote_units = hookenv.related_units(relid)
+        remote_units.sort(key=lambda u: int(u.split('/', 1)[-1]))
+        super(Relation, self).__init__((unit, RelationInfo(relid, unit))
+                                       for unit in remote_units)
+
+        self.relname = relid.split(':', 1)[0]
+        self.relid = relid
+        self.local = RelationInfo(relid, hookenv.local_unit())
+
+        for relinfo in self.values():
+            self.service = relinfo.service
+            break
+
+        # If we have peers, and they have joined both the provided peer
+        # relation and this relation, we can peek at their data too.
+        # This is useful for creating consensus without leadership.
+        peer_relid = hookenv.peer_relation_id()
+        if peer_relid and peer_relid != relid:
+            peers = hookenv.related_units(peer_relid)
+            if peers:
+                peers.sort(key=lambda u: int(u.split('/', 1)[-1]))
+                self.peers = OrderedDict((peer, RelationInfo(relid, peer))
+                                         for peer in peers)
+            else:
+                self.peers = OrderedDict()
+        else:
+            self.peers = None
+
+    def __str__(self):
+        return '{} ({})'.format(self.relid, self.service)
+
+
+class RelationInfo(UserDict):
+    '''The bag of data at an end of a relation.
+
+    Every unit participating in a relation has a single bag of
+    data associated with that relation. This is that bag.
+
+    The bag of data for the local unit may be updated. Remote data
+    is immutable and will remain static for the duration of the hook.
+
+    Changes made to the local unit's relation data only become visible
+    to other units after the hook completes successfully. If the hook
+    does not complete successfully, the changes are rolled back.
+
+    Unlike standard Python mappings, setting an item to None is the
+    same as deleting it.
+
+    >>> relinfo = RelationInfo('db:12', hookenv.local_unit())  # The local bag.
+    >>> relinfo['user'] = 'fred'
+    >>> relinfo['user']
+    'fred'
+    >>> relinfo['user'] = None
+    >>> 'fred' in relinfo
+    False
+
+    This class wraps hookenv.relation_get and hookenv.relation_set.
+    All caching is left up to these two methods to avoid synchronization
+    issues. Data is only loaded on demand.
+    '''
+    relid = None  # The relation id.
+    relname = None  # The relation name (also known as the relation type).
+    unit = None  # The unit id.
+    number = None  # The unit number (integer).
+    service = None  # The service name.
+
+    def __init__(self, relid, unit):
+        self.relname = relid.split(':', 1)[0]
+        self.relid = relid
+        self.unit = unit
+        self.service, num = self.unit.split('/', 1)
+        self.number = int(num)
+
+    def __str__(self):
+        return '{} ({})'.format(self.relid, self.unit)
+
+    @property
+    def data(self):
+        return hookenv.relation_get(rid=self.relid, unit=self.unit)
+
+    def __setitem__(self, key, value):
+        if self.unit != hookenv.local_unit():
+            raise TypeError('Attempting to set {} on remote unit {}'
+                            ''.format(key, self.unit))
+        if value is not None and not isinstance(value, six.string_types):
+            # We don't do implicit casting.
This would cause simple + # types like integers to be read back as strings in subsequent + # hooks, and mutable types would require a lot of wrapping + # to ensure relation-set gets called when they are mutated. + raise ValueError('Only string values allowed') + hookenv.relation_set(self.relid, {key: value}) + + def __delitem__(self, key): + # Deleting a key and setting it to null is the same thing in + # Juju relations. + self[key] = None + + +class Leader(UserDict): + def __init__(self): + pass # Don't call superclass initializer, as it will nuke self.data + + @property + def data(self): + return hookenv.leader_get() + + def __setitem__(self, key, value): + if not hookenv.is_leader(): + raise TypeError('Not the leader. Cannot change leader settings.') + if value is not None and not isinstance(value, six.string_types): + # We don't do implicit casting. This would cause simple + # types like integers to be read back as strings in subsequent + # hooks, and mutable types would require a lot of wrapping + # to ensure leader-set gets called when they are mutated. + raise ValueError('Only string values allowed') + hookenv.leader_set({key: value}) + + def __delitem__(self, key): + # Deleting a key and setting it to null is the same thing in + # Juju leadership settings. + self[key] = None diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..d7567b863e3a5ad2b7a7f44958b4166e0c3d346b --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/__init__.py @@ -0,0 +1,13 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/amulet/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/amulet/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..d7567b863e3a5ad2b7a7f44958b4166e0c3d346b --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/amulet/__init__.py @@ -0,0 +1,13 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
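For illustration only (not part of the vendored library): the `Leader` helper that closes context.py above exposes leadership settings as a mapping that any unit may read but only the leader may write. The key and value here are invented:

```python
# A sketch only: the 'db-password' key and value are invented.
from charmhelpers import context
from charmhelpers.core import hookenv

leader = context.Leader()
if hookenv.is_leader():
    # Only the leader may write; other units would get a TypeError.
    leader['db-password'] = 'example-secret'
# Any unit may read the shared leadership settings.
password = leader.get('db-password')
```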
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/amulet/deployment.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/amulet/deployment.py
new file mode 100644
index 0000000000000000000000000000000000000000..d21d01d8ffe242d686283b0ed977b88be6bfc74e
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/amulet/deployment.py
@@ -0,0 +1,99 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import amulet
+import os
+import six
+
+
+class AmuletDeployment(object):
+    """Amulet deployment.
+
+    This class provides generic Amulet deployment and test runner
+    methods.
+    """
+
+    def __init__(self, series=None):
+        """Initialize the deployment environment."""
+        self.series = None
+
+        if series:
+            self.series = series
+            self.d = amulet.Deployment(series=self.series)
+        else:
+            self.d = amulet.Deployment()
+
+    def _add_services(self, this_service, other_services):
+        """Add services.
+
+        Add services to the deployment where this_service is the local charm
+        that we're testing and other_services are the other services that
+        are being used in the local amulet tests.
+        """
+        if this_service['name'] != os.path.basename(os.getcwd()):
+            s = this_service['name']
+            msg = "The charm's root directory name needs to be {}".format(s)
+            amulet.raise_status(amulet.FAIL, msg=msg)
+
+        if 'units' not in this_service:
+            this_service['units'] = 1
+
+        self.d.add(this_service['name'], units=this_service['units'],
+                   constraints=this_service.get('constraints'),
+                   storage=this_service.get('storage'))
+
+        for svc in other_services:
+            if 'location' in svc:
+                branch_location = svc['location']
+            elif self.series:
+                # Note: no trailing comma here; a stray comma would turn the
+                # charm store URL into a one-element tuple.
+                branch_location = 'cs:{}/{}'.format(self.series, svc['name'])
+            else:
+                branch_location = None
+
+            if 'units' not in svc:
+                svc['units'] = 1
+
+            self.d.add(svc['name'], charm=branch_location, units=svc['units'],
+                       constraints=svc.get('constraints'),
+                       storage=svc.get('storage'))
+
+    def _add_relations(self, relations):
+        """Add all of the relations for the services."""
+        for k, v in six.iteritems(relations):
+            self.d.relate(k, v)
+
+    def _configure_services(self, configs):
+        """Configure all of the services."""
+        for service, config in six.iteritems(configs):
+            self.d.configure(service, config)
+
+    def _deploy(self):
+        """Deploy environment and wait for all hooks to finish executing."""
+        timeout = int(os.environ.get('AMULET_SETUP_TIMEOUT', 900))
+        try:
+            self.d.setup(timeout=timeout)
+            self.d.sentry.wait(timeout=timeout)
+        except amulet.helpers.TimeoutError:
+            amulet.raise_status(
+                amulet.FAIL,
+                msg="Deployment timed out ({}s)".format(timeout)
+            )
+        except Exception:
+            raise
+
+    def run_tests(self):
+        """Run all of the methods that are prefixed with 'test_'."""
+        for test in dir(self):
+            if test.startswith('test_'):
+                getattr(self, test)()
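For illustration only (not part of the vendored library): a functional test might subclass `AmuletDeployment` roughly as follows. The charm name, related service, relation endpoints, and sentry indexing are assumptions made for this sketch:

```python
# A sketch only: charm/service names and the relation are invented.
import amulet

from charmhelpers.contrib.amulet.deployment import AmuletDeployment


class MyCharmDeployment(AmuletDeployment):
    """Deploy the charm under test ('mycharm') alongside mysql."""

    def __init__(self, series='xenial'):
        super(MyCharmDeployment, self).__init__(series)
        # 'mycharm' must match the charm's root directory name.
        self._add_services({'name': 'mycharm'}, [{'name': 'mysql'}])
        self._add_relations({'mycharm:db': 'mysql:db'})
        self._configure_services({'mycharm': {'debug': 'true'}})
        self._deploy()

    def test_100_service_running(self):
        # Picked up automatically by run_tests() via the 'test_' prefix.
        unit = self.d.sentry['mycharm'][0]
        output, code = unit.run('service mycharm status')
        if code != 0:
            amulet.raise_status(amulet.FAIL, msg='service not running')


if __name__ == '__main__':
    MyCharmDeployment().run_tests()
```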
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/amulet/utils.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/amulet/utils.py
new file mode 100644
index 0000000000000000000000000000000000000000..54283088e1ddd53bc85008bc7446620e87b42278
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/amulet/utils.py
@@ -0,0 +1,820 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import io
+import json
+import logging
+import os
+import re
+import socket
+import subprocess
+import sys
+import time
+import uuid
+
+import amulet
+import distro_info
+import six
+from six.moves import configparser
+if six.PY3:
+    from urllib import parse as urlparse
+else:
+    import urlparse
+
+
+class AmuletUtils(object):
+    """Amulet utilities.
+
+    This class provides common utility functions that are used by Amulet
+    tests.
+    """
+
+    def __init__(self, log_level=logging.ERROR):
+        self.log = self.get_logger(level=log_level)
+        self.ubuntu_releases = self.get_ubuntu_releases()
+
+    def get_logger(self, name="amulet-logger", level=logging.DEBUG):
+        """Get a logger object that will log to stdout."""
+        log = logging
+        logger = log.getLogger(name)
+        fmt = log.Formatter("%(asctime)s %(funcName)s "
+                            "%(levelname)s: %(message)s")
+
+        handler = log.StreamHandler(stream=sys.stdout)
+        handler.setLevel(level)
+        handler.setFormatter(fmt)
+
+        logger.addHandler(handler)
+        logger.setLevel(level)
+
+        return logger
+
+    def valid_ip(self, ip):
+        if re.match(r"^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$", ip):
+            return True
+        else:
+            return False
+
+    def valid_url(self, url):
+        p = re.compile(
+            r'^(?:http|ftp)s?://'
+            r'(?:(?:[A-Z0-9](?:[A-Z0-9-]{0,61}[A-Z0-9])?\.)+(?:[A-Z]{2,6}\.?|[A-Z0-9-]{2,}\.?)|'  # noqa
+            r'localhost|'
+            r'\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})'
+            r'(?::\d+)?'
+            r'(?:/?|[/?]\S+)$',
+            re.IGNORECASE)
+        if p.match(url):
+            return True
+        else:
+            return False
+
+    def get_ubuntu_release_from_sentry(self, sentry_unit):
+        """Get Ubuntu release codename from sentry unit.
+
+        :param sentry_unit: amulet sentry/service unit pointer
+        :returns: list of strings - release codename, failure message
+        """
+        msg = None
+        cmd = 'lsb_release -cs'
+        release, code = sentry_unit.ssh(cmd)
+        if code == 0:
+            self.log.debug('{} lsb_release: {}'.format(
+                sentry_unit.info['unit_name'], release))
+        else:
+            msg = ('{} `{}` returned {} '
+                   '{}'.format(sentry_unit.info['unit_name'],
+                               cmd, release, code))
+        if release not in self.ubuntu_releases:
+            msg = ("Release ({}) not found in Ubuntu releases "
+                   "({})".format(release, self.ubuntu_releases))
+        return release, msg
+
+    def validate_services(self, commands):
+        """Validate that lists of commands succeed on service units. Can be
+           used to verify system services are running on the corresponding
+           service units.
+ + :param commands: dict with sentry keys and arbitrary command list vals + :returns: None if successful, Failure string message otherwise + """ + self.log.debug('Checking status of system services...') + + # /!\ DEPRECATION WARNING (beisner): + # New and existing tests should be rewritten to use + # validate_services_by_name() as it is aware of init systems. + self.log.warn('DEPRECATION WARNING: use ' + 'validate_services_by_name instead of validate_services ' + 'due to init system differences.') + + for k, v in six.iteritems(commands): + for cmd in v: + output, code = k.run(cmd) + self.log.debug('{} `{}` returned ' + '{}'.format(k.info['unit_name'], + cmd, code)) + if code != 0: + return "command `{}` returned {}".format(cmd, str(code)) + return None + + def validate_services_by_name(self, sentry_services): + """Validate system service status by service name, automatically + detecting init system based on Ubuntu release codename. + + :param sentry_services: dict with sentry keys and svc list values + :returns: None if successful, Failure string message otherwise + """ + self.log.debug('Checking status of system services...') + + # Point at which systemd became a thing + systemd_switch = self.ubuntu_releases.index('vivid') + + for sentry_unit, services_list in six.iteritems(sentry_services): + # Get lsb_release codename from unit + release, ret = self.get_ubuntu_release_from_sentry(sentry_unit) + if ret: + return ret + + for service_name in services_list: + if (self.ubuntu_releases.index(release) >= systemd_switch or + service_name in ['rabbitmq-server', 'apache2', + 'memcached']): + # init is systemd (or regular sysv) + cmd = 'sudo service {} status'.format(service_name) + output, code = sentry_unit.run(cmd) + service_running = code == 0 + elif self.ubuntu_releases.index(release) < systemd_switch: + # init is upstart + cmd = 'sudo status {}'.format(service_name) + output, code = sentry_unit.run(cmd) + service_running = code == 0 and "start/running" in output + + self.log.debug('{} `{}` returned ' + '{}'.format(sentry_unit.info['unit_name'], + cmd, code)) + if not service_running: + return u"command `{}` returned {} {}".format( + cmd, output, str(code)) + return None + + def _get_config(self, unit, filename): + """Get a ConfigParser object for parsing a unit's config file.""" + file_contents = unit.file_contents(filename) + + # NOTE(beisner): by default, ConfigParser does not handle options + # with no value, such as the flags used in the mysql my.cnf file. + # https://bugs.python.org/issue7005 + config = configparser.ConfigParser(allow_no_value=True) + config.readfp(io.StringIO(file_contents)) + return config + + def validate_config_data(self, sentry_unit, config_file, section, + expected): + """Validate config file data. + + Verify that the specified section of the config file contains + the expected option key:value pairs. + + Compare expected dictionary data vs actual dictionary data. + The values in the 'expected' dictionary can be strings, bools, ints, + longs, or can be a function that evaluates a variable and returns a + bool. 
+ """ + self.log.debug('Validating config file data ({} in {} on {})' + '...'.format(section, config_file, + sentry_unit.info['unit_name'])) + config = self._get_config(sentry_unit, config_file) + + if section != 'DEFAULT' and not config.has_section(section): + return "section [{}] does not exist".format(section) + + for k in expected.keys(): + if not config.has_option(section, k): + return "section [{}] is missing option {}".format(section, k) + + actual = config.get(section, k) + v = expected[k] + if (isinstance(v, six.string_types) or + isinstance(v, bool) or + isinstance(v, six.integer_types)): + # handle explicit values + if actual != v: + return "section [{}] {}:{} != expected {}:{}".format( + section, k, actual, k, expected[k]) + # handle function pointers, such as not_null or valid_ip + elif not v(actual): + return "section [{}] {}:{} != expected {}:{}".format( + section, k, actual, k, expected[k]) + return None + + def _validate_dict_data(self, expected, actual): + """Validate dictionary data. + + Compare expected dictionary data vs actual dictionary data. + The values in the 'expected' dictionary can be strings, bools, ints, + longs, or can be a function that evaluates a variable and returns a + bool. + """ + self.log.debug('actual: {}'.format(repr(actual))) + self.log.debug('expected: {}'.format(repr(expected))) + + for k, v in six.iteritems(expected): + if k in actual: + if (isinstance(v, six.string_types) or + isinstance(v, bool) or + isinstance(v, six.integer_types)): + # handle explicit values + if v != actual[k]: + return "{}:{}".format(k, actual[k]) + # handle function pointers, such as not_null or valid_ip + elif not v(actual[k]): + return "{}:{}".format(k, actual[k]) + else: + return "key '{}' does not exist".format(k) + return None + + def validate_relation_data(self, sentry_unit, relation, expected): + """Validate actual relation data based on expected relation data.""" + actual = sentry_unit.relation(relation[0], relation[1]) + return self._validate_dict_data(expected, actual) + + def _validate_list_data(self, expected, actual): + """Compare expected list vs actual list data.""" + for e in expected: + if e not in actual: + return "expected item {} not found in actual list".format(e) + return None + + def not_null(self, string): + if string is not None: + return True + else: + return False + + def _get_file_mtime(self, sentry_unit, filename): + """Get last modification time of file.""" + return sentry_unit.file_stat(filename)['mtime'] + + def _get_dir_mtime(self, sentry_unit, directory): + """Get last modification time of directory.""" + return sentry_unit.directory_stat(directory)['mtime'] + + def _get_proc_start_time(self, sentry_unit, service, pgrep_full=None): + """Get start time of a process based on the last modification time + of the /proc/pid directory. 
+
+        :sentry_unit: The sentry unit to check for the service on
+        :service: service name to look for in process table
+        :pgrep_full: [Deprecated] Use full command line search mode with pgrep
+        :returns: epoch time of service process start
+        """
+        pid_list = self.get_process_id_list(
+            sentry_unit, service, pgrep_full=pgrep_full)
+        pid = pid_list[0]
+        proc_dir = '/proc/{}'.format(pid)
+        self.log.debug('Pid for {} on {}: {}'.format(
+            service, sentry_unit.info['unit_name'], pid))
+
+        return self._get_dir_mtime(sentry_unit, proc_dir)
+
+    def service_restarted(self, sentry_unit, service, filename,
+                          pgrep_full=None, sleep_time=20):
+        """Check if service was restarted.
+
+        Compare a service's start time vs a file's last modification time
+        (such as a config file for that service) to determine if the service
+        has been restarted.
+        """
+        # /!\ DEPRECATION WARNING (beisner):
+        # This method is prone to races in that no before-time is known.
+        # Use validate_service_config_changed instead.
+
+        # NOTE(beisner) pgrep_full is no longer implemented, as pidof is now
+        # used instead of pgrep. pgrep_full is still passed through to ensure
+        # deprecation WARNS. lp1474030
+        self.log.warn('DEPRECATION WARNING: use '
+                      'validate_service_config_changed instead of '
+                      'service_restarted due to known races.')
+
+        time.sleep(sleep_time)
+        if (self._get_proc_start_time(sentry_unit, service, pgrep_full) >=
+                self._get_file_mtime(sentry_unit, filename)):
+            return True
+        else:
+            return False
+    def service_restarted_since(self, sentry_unit, mtime, service,
+                                pgrep_full=None, sleep_time=20,
+                                retry_count=30, retry_sleep_time=10):
+        """Check if service was started after a given time.
+
+        Args:
+          sentry_unit (sentry): The sentry unit to check for the service on
+          mtime (float): The epoch time to check against
+          service (string): service name to look for in process table
+          pgrep_full: [Deprecated] Use full command line search mode with pgrep
+          sleep_time (int): Initial sleep time (s) before looking for file
+          retry_sleep_time (int): Time (s) to sleep between retries
+          retry_count (int): If file is not found, how many times to retry
+
+        Returns:
+          bool: True if service found and its start time is newer than mtime,
+                False if service is older than mtime or if service was
+                not found.
+        """
+        # NOTE(beisner) pgrep_full is no longer implemented, as pidof is now
+        # used instead of pgrep. pgrep_full is still passed through to ensure
+        # deprecation WARNS. lp1474030
+
+        unit_name = sentry_unit.info['unit_name']
+        self.log.debug('Checking that %s service restarted since %s on '
+                       '%s' % (service, mtime, unit_name))
+        time.sleep(sleep_time)
+        proc_start_time = None
+        tries = 0
+        while tries <= retry_count and not proc_start_time:
+            try:
+                proc_start_time = self._get_proc_start_time(sentry_unit,
+                                                            service,
+                                                            pgrep_full)
+                self.log.debug('Attempt {} to get {} proc start time on {} '
+                               'OK'.format(tries, service, unit_name))
+            except IOError as e:
+                # NOTE(beisner) - race avoidance, proc may not exist yet.
+                # https://bugs.launchpad.net/charm-helpers/+bug/1474030
+                self.log.debug('Attempt {} to get {} proc start time on {} '
+                               'failed\n{}'.format(tries, service,
+                                                   unit_name, e))
+                time.sleep(retry_sleep_time)
+                tries += 1
+
+        if not proc_start_time:
+            self.log.warn('No proc start time found, assuming service did '
+                          'not start')
+            return False
+        if proc_start_time >= mtime:
+            self.log.debug('Proc start time is newer than provided mtime '
+                           '(%s >= %s) on %s (OK)' % (proc_start_time,
+                                                      mtime, unit_name))
+            return True
+        else:
+            self.log.warn('Proc start time (%s) is older than provided mtime '
+                          '(%s) on %s, service did not '
+                          'restart' % (proc_start_time, mtime, unit_name))
+            return False
+
+    def config_updated_since(self, sentry_unit, filename, mtime,
+                             sleep_time=20, retry_count=30,
+                             retry_sleep_time=10):
+        """Check if file was modified after a given time.
+
+        Args:
+          sentry_unit (sentry): The sentry unit to check the file mtime on
+          filename (string): The file to check mtime of
+          mtime (float): The epoch time to check against
+          sleep_time (int): Initial sleep time (s) before looking for file
+          retry_sleep_time (int): Time (s) to sleep between retries
+          retry_count (int): If file is not found, how many times to retry
+
+        Returns:
+          bool: True if file was modified more recently than mtime, False if
+                file was modified before mtime, or if file not found.
+        """
+        unit_name = sentry_unit.info['unit_name']
+        self.log.debug('Checking that %s updated since %s on '
+                       '%s' % (filename, mtime, unit_name))
+        time.sleep(sleep_time)
+        file_mtime = None
+        tries = 0
+        while tries <= retry_count and not file_mtime:
+            try:
+                file_mtime = self._get_file_mtime(sentry_unit, filename)
+                self.log.debug('Attempt {} to get {} file mtime on {} '
+                               'OK'.format(tries, filename, unit_name))
+            except IOError as e:
+                # NOTE(beisner) - race avoidance, file may not exist yet.
+                # https://bugs.launchpad.net/charm-helpers/+bug/1474030
+                self.log.debug('Attempt {} to get {} file mtime on {} '
+                               'failed\n{}'.format(tries, filename,
+                                                   unit_name, e))
+                time.sleep(retry_sleep_time)
+                tries += 1
+
+        if not file_mtime:
+            self.log.warn('Could not determine file mtime, assuming '
+                          'file does not exist')
+            return False
+
+        if file_mtime >= mtime:
+            self.log.debug('File mtime is newer than provided mtime '
+                           '(%s >= %s) on %s (OK)' % (file_mtime,
+                                                      mtime, unit_name))
+            return True
+        else:
+            self.log.warn('File mtime (%s) is older than provided mtime '
+                          '(%s) on %s' % (file_mtime, mtime, unit_name))
+            return False
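For illustration only (not part of the vendored library): a typical pattern with the helpers above is to take a baseline with `get_sentry_time` (defined further down in this class), change the configuration, then assert the config file was rewritten. The `deployment` object, charm name, and path below are invented, reusing the deployment sketch shown earlier:

```python
# A sketch only: the deployment object, charm name and path are invented.
import amulet

u = AmuletUtils()
deployment = MyCharmDeployment()             # from the deployment sketch above
sentry = deployment.d.sentry['mycharm'][0]

mtime = u.get_sentry_time(sentry)            # baseline before the change
deployment.d.configure('mycharm', {'debug': 'true'})
if not u.config_updated_since(sentry, '/etc/mycharm/mycharm.conf', mtime):
    amulet.raise_status(amulet.FAIL, msg='config file was not rewritten')
```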
+    def validate_service_config_changed(self, sentry_unit, mtime, service,
+                                        filename, pgrep_full=None,
+                                        sleep_time=20, retry_count=30,
+                                        retry_sleep_time=10):
+        """Check service and file were updated after mtime
+
+        Args:
+          sentry_unit (sentry): The sentry unit to check for the service on
+          mtime (float): The epoch time to check against
+          service (string): service name to look for in process table
+          filename (string): The file to check mtime of
+          pgrep_full: [Deprecated] Use full command line search mode with pgrep
+          sleep_time (int): Initial sleep in seconds to pass to test helpers
+          retry_count (int): If service is not found, how many times to retry
+          retry_sleep_time (int): Time in seconds to wait between retries
+
+        Typical Usage:
+            u = OpenStackAmuletUtils(ERROR)
+            ...
+            mtime = u.get_sentry_time(self.cinder_sentry)
+            self.d.configure('cinder', {'verbose': 'True', 'debug': 'True'})
+            if not u.validate_service_config_changed(self.cinder_sentry,
+                                                     mtime,
+                                                     'cinder-api',
+                                                     '/etc/cinder/cinder.conf'):
+                amulet.raise_status(amulet.FAIL, msg='update failed')
+
+        Returns:
+          bool: True if both service and file were updated/restarted after
+                mtime, False if service is older than mtime or if service was
+                not found or if filename was modified before mtime.
+        """
+
+        # NOTE(beisner) pgrep_full is no longer implemented, as pidof is now
+        # used instead of pgrep. pgrep_full is still passed through to ensure
+        # deprecation WARNS. lp1474030
+
+        service_restart = self.service_restarted_since(
+            sentry_unit, mtime,
+            service,
+            pgrep_full=pgrep_full,
+            sleep_time=sleep_time,
+            retry_count=retry_count,
+            retry_sleep_time=retry_sleep_time)
+
+        config_update = self.config_updated_since(
+            sentry_unit,
+            filename,
+            mtime,
+            sleep_time=sleep_time,
+            retry_count=retry_count,
+            retry_sleep_time=retry_sleep_time)
+
+        return service_restart and config_update
+
+    def get_sentry_time(self, sentry_unit):
+        """Return current epoch time on a sentry"""
+        cmd = "date +'%s'"
+        return float(sentry_unit.run(cmd)[0])
+
+    def relation_error(self, name, data):
+        return 'unexpected relation data in {} - {}'.format(name, data)
+
+    def endpoint_error(self, name, data):
+        return 'unexpected endpoint data in {} - {}'.format(name, data)
+
+    def get_ubuntu_releases(self):
+        """Return a list of all Ubuntu releases in order of release."""
+        _d = distro_info.UbuntuDistroInfo()
+        _release_list = _d.all
+        return _release_list
+
+    def file_to_url(self, file_rel_path):
+        """Convert a relative file path to a file URL."""
+        _abs_path = os.path.abspath(file_rel_path)
+        return urlparse.urlparse(_abs_path, scheme='file').geturl()
+
+    def check_commands_on_units(self, commands, sentry_units):
+        """Check that all commands in a list exit zero on all
+        sentry units in a list.
+
+        :param commands: list of bash commands
+        :param sentry_units: list of sentry unit pointers
+        :returns: None if successful; Failure message otherwise
+        """
+        self.log.debug('Checking exit codes for {} commands on {} '
+                       'sentry units...'.format(len(commands),
+                                                len(sentry_units)))
+        for sentry_unit in sentry_units:
+            for cmd in commands:
+                output, code = sentry_unit.run(cmd)
+                if code == 0:
+                    self.log.debug('{} `{}` returned {} '
+                                   '(OK)'.format(sentry_unit.info['unit_name'],
+                                                 cmd, code))
+                else:
+                    return ('{} `{}` returned {} '
+                            '{}'.format(sentry_unit.info['unit_name'],
+                                        cmd, code, output))
+        return None
+
+    def get_process_id_list(self, sentry_unit, process_name,
+                            expect_success=True, pgrep_full=False):
+        """Get a list of process ID(s) from a single sentry juju unit
+        for a single process name.
+
+        :param sentry_unit: Amulet sentry instance (juju unit)
+        :param process_name: Process name
+        :param expect_success: If False, expect the PID to be missing,
+            raise if it is present.
+        :returns: List of process IDs
+        """
+        if pgrep_full:
+            cmd = 'pgrep -f "{}"'.format(process_name)
+        else:
+            cmd = 'pidof -x "{}"'.format(process_name)
+        if not expect_success:
+            cmd += " || exit 0 && exit 1"
+        output, code = sentry_unit.run(cmd)
+        if code != 0:
+            msg = ('{} `{}` returned {} '
+                   '{}'.format(sentry_unit.info['unit_name'],
+                               cmd, code, output))
+            amulet.raise_status(amulet.FAIL, msg=msg)
+        return str(output).split()
+
+    def get_unit_process_ids(
+            self, unit_processes, expect_success=True, pgrep_full=False):
+        """Construct a dict containing unit sentries, process names, and
+        process IDs.
+
+        :param unit_processes: A dictionary of Amulet sentry instance
+            to list of process names.
+        :param expect_success: if False expect the processes to not be
+            running, raise if they are.
+        :returns: Dictionary of Amulet sentry instance to dictionary
+            of process names to PIDs.
+        """
+        pid_dict = {}
+        for sentry_unit, process_list in six.iteritems(unit_processes):
+            pid_dict[sentry_unit] = {}
+            for process in process_list:
+                pids = self.get_process_id_list(
+                    sentry_unit, process, expect_success=expect_success,
+                    pgrep_full=pgrep_full)
+                pid_dict[sentry_unit].update({process: pids})
+        return pid_dict
+
+    def validate_unit_process_ids(self, expected, actual):
+        """Validate process id quantities for services on units."""
+        self.log.debug('Checking units for running processes...')
+        self.log.debug('Expected PIDs: {}'.format(expected))
+        self.log.debug('Actual PIDs: {}'.format(actual))
+
+        if len(actual) != len(expected):
+            return ('Unit count mismatch. expected, actual: {}, '
+                    '{} '.format(len(expected), len(actual)))
+
+        for (e_sentry, e_proc_names) in six.iteritems(expected):
+            e_sentry_name = e_sentry.info['unit_name']
+            if e_sentry in actual.keys():
+                a_proc_names = actual[e_sentry]
+            else:
+                return ('Expected sentry ({}) not found in actual dict data.'
+                        '{}'.format(e_sentry_name, e_sentry))
+
+            if len(e_proc_names.keys()) != len(a_proc_names.keys()):
+                return ('Process name count mismatch. expected, actual: {}, '
+                        '{}'.format(len(e_proc_names.keys()),
+                                    len(a_proc_names.keys())))
+
+            for (e_proc_name, e_pids), (a_proc_name, a_pids) in \
+                    zip(e_proc_names.items(), a_proc_names.items()):
+                if e_proc_name != a_proc_name:
+                    return ('Process name mismatch. expected, actual: {}, '
+                            '{}'.format(e_proc_name, a_proc_name))
+
+                a_pids_length = len(a_pids)
+                fail_msg = ('PID count mismatch.
{} ({}) expected, actual: ' + '{}, {} ({})'.format(e_sentry_name, e_proc_name, + e_pids, a_pids_length, + a_pids)) + + # If expected is a list, ensure at least one PID quantity match + if isinstance(e_pids, list) and \ + a_pids_length not in e_pids: + return fail_msg + # If expected is not bool and not list, + # ensure PID quantities match + elif not isinstance(e_pids, bool) and \ + not isinstance(e_pids, list) and \ + a_pids_length != e_pids: + return fail_msg + # If expected is bool True, ensure 1 or more PIDs exist + elif isinstance(e_pids, bool) and \ + e_pids is True and a_pids_length < 1: + return fail_msg + # If expected is bool False, ensure 0 PIDs exist + elif isinstance(e_pids, bool) and \ + e_pids is False and a_pids_length != 0: + return fail_msg + else: + self.log.debug('PID check OK: {} {} {}: ' + '{}'.format(e_sentry_name, e_proc_name, + e_pids, a_pids)) + return None + + def validate_list_of_identical_dicts(self, list_of_dicts): + """Check that all dicts within a list are identical.""" + hashes = [] + for _dict in list_of_dicts: + hashes.append(hash(frozenset(_dict.items()))) + + self.log.debug('Hashes: {}'.format(hashes)) + if len(set(hashes)) == 1: + self.log.debug('Dicts within list are identical') + else: + return 'Dicts within list are not identical' + + return None + + def validate_sectionless_conf(self, file_contents, expected): + """A crude conf parser. Useful to inspect configuration files which + do not have section headers (as would be necessary in order to use + the configparser). Such as openstack-dashboard or rabbitmq confs.""" + for line in file_contents.split('\n'): + if '=' in line: + args = line.split('=') + if len(args) <= 1: + continue + key = args[0].strip() + value = args[1].strip() + if key in expected.keys(): + if expected[key] != value: + msg = ('Config mismatch. Expected, actual: {}, ' + '{}'.format(expected[key], value)) + amulet.raise_status(amulet.FAIL, msg=msg) + + def get_unit_hostnames(self, units): + """Return a dict of juju unit names to hostnames.""" + host_names = {} + for unit in units: + host_names[unit.info['unit_name']] = \ + str(unit.file_contents('/etc/hostname').strip()) + self.log.debug('Unit host names: {}'.format(host_names)) + return host_names + + def run_cmd_unit(self, sentry_unit, cmd): + """Run a command on a unit, return the output and exit code.""" + output, code = sentry_unit.run(cmd) + if code == 0: + self.log.debug('{} `{}` command returned {} ' + '(OK)'.format(sentry_unit.info['unit_name'], + cmd, code)) + else: + msg = ('{} `{}` command returned {} ' + '{}'.format(sentry_unit.info['unit_name'], + cmd, code, output)) + amulet.raise_status(amulet.FAIL, msg=msg) + return str(output), code + + def file_exists_on_unit(self, sentry_unit, file_name): + """Check if a file exists on a unit.""" + try: + sentry_unit.file_stat(file_name) + return True + except IOError: + return False + except Exception as e: + msg = 'Error checking file {}: {}'.format(file_name, e) + amulet.raise_status(amulet.FAIL, msg=msg) + + def file_contents_safe(self, sentry_unit, file_name, + max_wait=60, fatal=False): + """Get file contents from a sentry unit. Wrap amulet file_contents + with retry logic to address races where a file checks as existing, + but no longer exists by the time file_contents is called. + Return None if file not found. 
Optionally raise if fatal is True."""
+        unit_name = sentry_unit.info['unit_name']
+        file_contents = False
+        tries = 0
+        while not file_contents and tries < (max_wait / 4):
+            try:
+                file_contents = sentry_unit.file_contents(file_name)
+            except IOError:
+                self.log.debug('Attempt {} to open file {} from {} '
+                               'failed'.format(tries, file_name,
+                                               unit_name))
+                time.sleep(4)
+                tries += 1
+
+        if file_contents:
+            return file_contents
+        elif not fatal:
+            return None
+        elif fatal:
+            msg = 'Failed to get file contents from unit.'
+            amulet.raise_status(amulet.FAIL, msg)
+
+    def port_knock_tcp(self, host="localhost", port=22, timeout=15):
+        """Open a TCP socket to check for a listening service on a host.
+
+        :param host: host name or IP address, default to localhost
+        :param port: TCP port number, default to 22
+        :param timeout: Connect timeout, default to 15 seconds
+        :returns: True if successful, False if connect failed
+        """
+
+        # Resolve host name if possible
+        try:
+            connect_host = socket.gethostbyname(host)
+            host_human = "{} ({})".format(connect_host, host)
+        except socket.error as e:
+            self.log.warn('Unable to resolve address: '
+                          '{} ({}) Trying anyway!'.format(host, e))
+            connect_host = host
+            host_human = connect_host
+
+        # Attempt socket connection
+        try:
+            knock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
+            knock.settimeout(timeout)
+            knock.connect((connect_host, port))
+            knock.close()
+            self.log.debug('Socket connect OK for host '
+                           '{} on port {}.'.format(host_human, port))
+            return True
+        except socket.error as e:
+            self.log.debug('Socket connect FAIL for'
+                           ' {} port {} ({})'.format(host_human, port, e))
+            return False
+
+    def port_knock_units(self, sentry_units, port=22,
+                         timeout=15, expect_success=True):
+        """Open a TCP socket to check for a listening service on each
+        listed juju unit.
+
+        :param sentry_units: list of sentry unit pointers
+        :param port: TCP port number, default to 22
+        :param timeout: Connect timeout, default to 15 seconds
+        :expect_success: True by default, set False to invert logic
+        :returns: None if successful, Failure message otherwise
+        """
+        for unit in sentry_units:
+            host = unit.info['public-address']
+            connected = self.port_knock_tcp(host, port, timeout)
+            if not connected and expect_success:
+                return 'Socket connect failed.'
+            elif connected and not expect_success:
+                return 'Socket connected unexpectedly.'
+
+    def get_uuid_epoch_stamp(self):
+        """Returns a stamp string based on uuid4 and epoch time. Useful in
+        generating test messages which need to be unique-ish."""
+        return '[{}-{}]'.format(uuid.uuid4(), time.time())
+
+    # amulet juju action helpers:
+    def run_action(self, unit_sentry, action,
+                   _check_output=subprocess.check_output,
+                   params=None):
+        """Translate to amulet's built in run_action(). Deprecated.
+
+        Run the named action on a given unit sentry.
+
+        params a dict of parameters to use
+        _check_output parameter is no longer used
+
+        @return action_id.
+        """
+        self.log.warn('charmhelpers.contrib.amulet.utils.run_action has been '
+                      'deprecated for amulet.run_action')
+        return unit_sentry.run_action(action, action_args=params)
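For illustration only (not part of the vendored library): `run_action` above pairs with `wait_on_action`, defined just below, to trigger a Juju action on a unit and block on its outcome. The `backup` action, its parameter, and the deployment object are invented:

```python
# A sketch only: the 'backup' action and its parameter are invented.
import amulet

u = AmuletUtils()
unit = deployment.d.sentry['mycharm'][0]   # from the deployment sketch above
action_id = u.run_action(unit, 'backup', params={'target': '/tmp'})
if not u.wait_on_action(action_id):
    amulet.raise_status(amulet.FAIL, msg='backup action did not complete')
```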
+    def wait_on_action(self, action_id, _check_output=subprocess.check_output):
+        """Wait for a given action, returning whether it completed.
+
+        action_id a string action uuid
+        _check_output parameter is no longer used
+        """
+        data = amulet.actions.get_action_output(action_id, full_output=True)
+        return data.get(u"status") == "completed"
+
+    def status_get(self, unit):
+        """Return the current service status of this unit."""
+        raw_status, return_code = unit.run(
+            "status-get --format=json --include-data")
+        if return_code != 0:
+            return ("unknown", "")
+        status = json.loads(raw_status)
+        return (status["status"], status["message"])
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/ansible/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/ansible/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..824780aca93c406f7331ec3f0da8f22bee1272ec
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/ansible/__init__.py
@@ -0,0 +1,303 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# Copyright 2013 Canonical Ltd.
+#
+# Authors:
+#  Charm Helpers Developers
+"""
+The ansible package enables you to easily use the configuration management
+tool `Ansible`_ to set up and configure your charm. All of your charm
+configuration options and relation-data are available as regular Ansible
+variables which can be used in your playbooks and templates.
+
+.. _Ansible: https://www.ansible.com/
+
+Usage
+=====
+
+Here is an example directory structure for a charm to get you started::
+
+    charm-ansible-example/
+    |-- ansible
+    |   |-- playbook.yaml
+    |   `-- templates
+    |       `-- example.j2
+    |-- config.yaml
+    |-- copyright
+    |-- icon.svg
+    |-- layer.yaml
+    |-- metadata.yaml
+    |-- reactive
+    |   `-- example.py
+    |-- README.md
+
+Running a playbook called ``playbook.yaml`` when the ``install`` hook is run
+can be as simple as::
+
+    from charmhelpers.contrib import ansible
+    from charms.reactive import hook
+
+    @hook('install')
+    def install():
+        ansible.install_ansible_support()
+        ansible.apply_playbook('ansible/playbook.yaml')
+
+Here is an example playbook that uses the ``template`` module to template the
+file ``example.j2`` to the charm host and then uses the ``debug`` module to
+print out all the host and Juju variables that you can use in your playbooks.
+Note that you must target ``localhost`` as the playbook is run locally on the
+charm host::
+
+    ---
+    - hosts: localhost
+      tasks:
+        - name: Template a file
+          template:
+            src: templates/example.j2
+            dest: /tmp/example.j2
+
+        - name: Print all variables available to Ansible
+          debug:
+            var: vars
+
+Read more online about `playbooks`_ and standard Ansible `modules`_.
+
+.. _playbooks: https://docs.ansible.com/ansible/latest/user_guide/playbooks.html
+.. _modules: https://docs.ansible.com/ansible/latest/user_guide/modules.html
+
+A further feature of the Ansible hooks is to provide a lightweight "action"
+scripting tool.
This is a decorator that you apply to a function, and that +function can now receive cli args, and can pass extra args to the playbook:: + + @hooks.action() + def some_action(amount, force="False"): + "Usage: some-action AMOUNT [force=True]" # <-- shown on error + # process the arguments + # do some calls + # return extra-vars to be passed to ansible-playbook + return { + 'amount': int(amount), + 'type': force, + } + +You can now create a symlink to hooks.py that can be invoked like a hook, but +with cli params:: + + # link actions/some-action to hooks/hooks.py + + actions/some-action amount=10 force=true + +Install Ansible via pip +======================= + +If you want to install a specific version of Ansible via pip instead of +``install_ansible_support`` which uses APT, consider using the layer options +of `layer-basic`_ to install Ansible in a virtualenv:: + + options: + basic: + python_packages: ['ansible==2.9.0'] + include_system_packages: true + use_venv: true + +.. _layer-basic: https://charmsreactive.readthedocs.io/en/latest/layer-basic.html#layer-configuration + +""" +import os +import json +import stat +import subprocess +import functools + +import charmhelpers.contrib.templating.contexts +import charmhelpers.core.host +import charmhelpers.core.hookenv +import charmhelpers.fetch + + +charm_dir = os.environ.get('CHARM_DIR', '') +ansible_hosts_path = '/etc/ansible/hosts' +# Ansible will automatically include any vars in the following +# file in its inventory when run locally. +ansible_vars_path = '/etc/ansible/host_vars/localhost' + + +def install_ansible_support(from_ppa=True, ppa_location='ppa:ansible/ansible'): + """Installs Ansible via APT. + + By default this installs Ansible from the `PPA`_ linked from + the Ansible `website`_ or from a PPA set in ``ppa_location``. + + .. _PPA: https://launchpad.net/~ansible/+archive/ubuntu/ansible + .. _website: http://docs.ansible.com/intro_installation.html#latest-releases-via-apt-ubuntu + + If ``from_ppa`` is ``False``, then Ansible will be installed from + Ubuntu's Universe repositories. + """ + if from_ppa: + charmhelpers.fetch.add_source(ppa_location) + charmhelpers.fetch.apt_update(fatal=True) + charmhelpers.fetch.apt_install('ansible') + with open(ansible_hosts_path, 'w+') as hosts_file: + hosts_file.write('localhost ansible_connection=local ansible_remote_tmp=/root/.ansible/tmp') + + +def apply_playbook(playbook, tags=None, extra_vars=None): + """Run a playbook. + + This helper runs a playbook with juju state variables as context, + therefore variables set in application config can be used directly. + List of tags (--tags) and dictionary with extra_vars (--extra-vars) + can be passed as additional parameters. + + Read more about playbook `_variables`_ online. + + .. 
_variables: https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html + + Example:: + + # Run ansible/playbook.yaml with tag install and pass extra + # variables var_a and var_b + apply_playbook( + playbook='ansible/playbook.yaml', + tags=['install'], + extra_vars={'var_a': 'val_a', 'var_b': 'val_b'} + ) + + # Run ansible/playbook.yaml with tag config and extra variable nested, + # which is passed as json and can be used as dictionary in playbook + apply_playbook( + playbook='ansible/playbook.yaml', + tags=['config'], + extra_vars={'nested': {'a': 'value1', 'b': 'value2'}} + ) + + # Custom config file can be passed within extra_vars + apply_playbook( + playbook='ansible/playbook.yaml', + extra_vars="@some_file.json" + ) + + """ + tags = tags or [] + tags = ",".join(tags) + charmhelpers.contrib.templating.contexts.juju_state_to_yaml( + ansible_vars_path, namespace_separator='__', + allow_hyphens_in_keys=False, mode=(stat.S_IRUSR | stat.S_IWUSR)) + + # we want ansible's log output to be unbuffered + env = os.environ.copy() + env['PYTHONUNBUFFERED'] = "1" + call = [ + 'ansible-playbook', + '-c', + 'local', + playbook, + ] + if tags: + call.extend(['--tags', '{}'.format(tags)]) + if extra_vars: + call.extend(['--extra-vars', json.dumps(extra_vars)]) + subprocess.check_call(call, env=env) + + +class AnsibleHooks(charmhelpers.core.hookenv.Hooks): + """Run a playbook with the hook-name as the tag. + + This helper builds on the standard hookenv.Hooks helper, + but additionally runs the playbook with the hook-name specified + using --tags (ie. running all the tasks tagged with the hook-name). + + Example:: + + hooks = AnsibleHooks(playbook_path='ansible/my_machine_state.yaml') + + # All the tasks within my_machine_state.yaml tagged with 'install' + # will be run automatically after do_custom_work() + @hooks.hook() + def install(): + do_custom_work() + + # For most of your hooks, you won't need to do anything other + # than run the tagged tasks for the hook: + @hooks.hook('config-changed', 'start', 'stop') + def just_use_playbook(): + pass + + # As a convenience, you can avoid the above noop function by specifying + # the hooks which are handled by ansible-only and they'll be registered + # for you: + # hooks = AnsibleHooks( + # 'ansible/my_machine_state.yaml', + # default_hooks=['config-changed', 'start', 'stop']) + + if __name__ == "__main__": + # execute a hook based on the name the program is called by + hooks.execute(sys.argv) + """ + + def __init__(self, playbook_path, default_hooks=None): + """Register any hooks handled by ansible.""" + super(AnsibleHooks, self).__init__() + + self._actions = {} + self.playbook_path = playbook_path + + default_hooks = default_hooks or [] + + def noop(*args, **kwargs): + pass + + for hook in default_hooks: + self.register(hook, noop) + + def register_action(self, name, function): + """Register a hook""" + self._actions[name] = function + + def execute(self, args): + """Execute the hook followed by the playbook using the hook as tag.""" + hook_name = os.path.basename(args[0]) + extra_vars = None + if hook_name in self._actions: + extra_vars = self._actions[hook_name](args[1:]) + else: + super(AnsibleHooks, self).execute(args) + + charmhelpers.contrib.ansible.apply_playbook( + self.playbook_path, tags=[hook_name], extra_vars=extra_vars) + + def action(self, *action_names): + """Decorator, registering them as actions""" + def action_wrapper(decorated): + + @functools.wraps(decorated) + def wrapper(argv): + kwargs = dict(arg.split('=') for arg in 
argv) + try: + return decorated(**kwargs) + except TypeError as e: + if decorated.__doc__: + e.args += (decorated.__doc__,) + raise + + self.register_action(decorated.__name__, wrapper) + if '_' in decorated.__name__: + self.register_action( + decorated.__name__.replace('_', '-'), wrapper) + + return wrapper + + return action_wrapper diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/benchmark/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/benchmark/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..c35f7fe785a29519a257f1ed5aa33fdfa19129c7 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/benchmark/__init__.py @@ -0,0 +1,124 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import subprocess +import time +import os +from distutils.spawn import find_executable + +from charmhelpers.core.hookenv import ( + in_relation_hook, + relation_ids, + relation_set, + relation_get, +) + + +def action_set(key, val): + if find_executable('action-set'): + action_cmd = ['action-set'] + + if isinstance(val, dict): + for k, v in iter(val.items()): + action_set('%s.%s' % (key, k), v) + return True + + action_cmd.append('%s=%s' % (key, val)) + subprocess.check_call(action_cmd) + return True + return False + + +class Benchmark(): + """ + Helper class for the `benchmark` interface. + + :param list actions: Define the actions that are also benchmarks + + From inside the benchmark-relation-changed hook, you would + Benchmark(['memory', 'cpu', 'disk', 'smoke', 'custom']) + + Examples: + + siege = Benchmark(['siege']) + siege.start() + [... run siege ...] + # The higher the score, the better the benchmark + siege.set_composite_score(16.70, 'trans/sec', 'desc') + siege.finish() + + + """ + + BENCHMARK_CONF = '/etc/benchmark.conf' # Replaced in testing + + required_keys = [ + 'hostname', + 'port', + 'graphite_port', + 'graphite_endpoint', + 'api_port' + ] + + def __init__(self, benchmarks=None): + if in_relation_hook(): + if benchmarks is not None: + for rid in sorted(relation_ids('benchmark')): + relation_set(relation_id=rid, relation_settings={ + 'benchmarks': ",".join(benchmarks) + }) + + # Check the relation data + config = {} + for key in self.required_keys: + val = relation_get(key) + if val is not None: + config[key] = val + else: + # We don't have all of the required keys + config = {} + break + + if len(config): + with open(self.BENCHMARK_CONF, 'w') as f: + for key, val in iter(config.items()): + f.write("%s=%s\n" % (key, val)) + + @staticmethod + def start(): + action_set('meta.start', time.strftime('%Y-%m-%dT%H:%M:%SZ')) + + """ + If the collectd charm is also installed, tell it to send a snapshot + of the current profile data. 
+ """ + COLLECT_PROFILE_DATA = '/usr/local/bin/collect-profile-data' + if os.path.exists(COLLECT_PROFILE_DATA): + subprocess.check_output([COLLECT_PROFILE_DATA]) + + @staticmethod + def finish(): + action_set('meta.stop', time.strftime('%Y-%m-%dT%H:%M:%SZ')) + + @staticmethod + def set_composite_score(value, units, direction='asc'): + """ + Set the composite score for a benchmark run. This is a single number + representative of the benchmark results. This could be the most + important metric, or an amalgamation of metric scores. + """ + return action_set( + "meta.composite", + {'value': value, 'units': units, 'direction': direction} + ) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/charmhelpers/IMPORT b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/charmhelpers/IMPORT new file mode 100644 index 0000000000000000000000000000000000000000..d41cb041f45cb19b87aee7d3f5fd4d47a3d3fb2d --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/charmhelpers/IMPORT @@ -0,0 +1,4 @@ +Source lp:charm-tools/trunk + +charm-tools/helpers/python/charmhelpers/__init__.py -> charmhelpers/charmhelpers/contrib/charmhelpers/__init__.py +charm-tools/helpers/python/charmhelpers/tests/test_charmhelpers.py -> charmhelpers/tests/contrib/charmhelpers/test_charmhelpers.py diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/charmhelpers/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/charmhelpers/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..ed63e8121e23a8e01ccb8ddcbf9aea34543d3666 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/charmhelpers/__init__.py @@ -0,0 +1,203 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+
+import warnings
+warnings.warn("contrib.charmhelpers is deprecated", DeprecationWarning)  # noqa
+
+import operator
+import tempfile
+import time
+import yaml
+import subprocess
+
+import six
+if six.PY3:
+    from urllib.request import urlopen
+    from urllib.error import (HTTPError, URLError)
+else:
+    from urllib2 import (urlopen, HTTPError, URLError)
+
+"""Helper functions for writing Juju charms in Python."""
+
+__metaclass__ = type
+__all__ = [
+    # 'get_config',             # core.hookenv.config()
+    # 'log',                    # core.hookenv.log()
+    # 'log_entry',              # core.hookenv.log()
+    # 'log_exit',               # core.hookenv.log()
+    # 'relation_get',           # core.hookenv.relation_get()
+    # 'relation_set',           # core.hookenv.relation_set()
+    # 'relation_ids',           # core.hookenv.relation_ids()
+    # 'relation_list',          # core.hookenv.relation_units()
+    # 'config_get',             # core.hookenv.config()
+    # 'unit_get',               # core.hookenv.unit_get()
+    # 'open_port',              # core.hookenv.open_port()
+    # 'close_port',             # core.hookenv.close_port()
+    # 'service_control',        # core.host.service()
+    'unit_info',                # client-side, NOT IMPLEMENTED
+    'wait_for_machine',         # client-side, NOT IMPLEMENTED
+    'wait_for_page_contents',   # client-side, NOT IMPLEMENTED
+    'wait_for_relation',        # client-side, NOT IMPLEMENTED
+    'wait_for_unit',            # client-side, NOT IMPLEMENTED
+]
+
+
+SLEEP_AMOUNT = 0.1
+
+
+# We define juju_status() as a function here because it makes testing much,
+# much easier. Note that callers parse its result with yaml.safe_load(), so
+# it must return the command's output rather than its exit code.
+def juju_status():
+    return subprocess.check_output(['juju', 'status'])
+
+# re-implemented as charmhelpers.fetch.configure_sources()
+# def configure_source(update=False):
+#     source = config_get('source')
+#     if ((source.startswith('ppa:') or
+#          source.startswith('cloud:') or
+#          source.startswith('http:'))):
+#         run('add-apt-repository', source)
+#     if source.startswith("http:"):
+#         run('apt-key', 'import', config_get('key'))
+#     if update:
+#         run('apt-get', 'update')
+
+
+# DEPRECATED: client-side only
+def make_charm_config_file(charm_config):
+    charm_config_file = tempfile.NamedTemporaryFile(mode='w+')
+    charm_config_file.write(yaml.dump(charm_config))
+    charm_config_file.flush()
+    # The NamedTemporaryFile instance is returned instead of just the name
+    # because we want to take advantage of garbage collection-triggered
+    # deletion of the temp file when it goes out of scope in the caller.
+    return charm_config_file
+
+
+# DEPRECATED: client-side only
+def unit_info(service_name, item_name, data=None, unit=None):
+    if data is None:
+        data = yaml.safe_load(juju_status())
+    service = data['services'].get(service_name)
+    if service is None:
+        # XXX 2012-02-08 gmb:
+        #     This allows us to cope with the race condition that we
+        #     have between deploying a service and having it come up in
+        #     `juju status`. We could probably do with cleaning it up so
+        #     that it fails a bit more noisily after a while.
+        return ''
+    units = service['units']
+    if unit is not None:
+        item = units[unit][item_name]
+    else:
+        # It might seem odd to sort the units here, but we do it to
+        # ensure that when no unit is specified, the first unit for the
+        # service (or at least the one with the lowest number) is the
+        # one whose data gets returned.
+        sorted_unit_names = sorted(units.keys())
+        item = units[sorted_unit_names[0]][item_name]
+    return item
+
+
+# DEPRECATED: client-side only
+def get_machine_data():
+    return yaml.safe_load(juju_status())['machines']
+
+
+# DEPRECATED: client-side only
+def wait_for_machine(num_machines=1, timeout=300):
+    """Wait `timeout` seconds for `num_machines` machines to come up.
+
+    This wait_for...
function can be called by other wait_for functions + whose timeouts might be too short in situations where only a bare + Juju setup has been bootstrapped. + + :return: A tuple of (num_machines, time_taken). This is used for + testing. + """ + # You may think this is a hack, and you'd be right. The easiest way + # to tell what environment we're working in (LXC vs EC2) is to check + # the dns-name of the first machine. If it's localhost we're in LXC + # and we can just return here. + if get_machine_data()[0]['dns-name'] == 'localhost': + return 1, 0 + start_time = time.time() + while True: + # Drop the first machine, since it's the Zookeeper and that's + # not a machine that we need to wait for. This will only work + # for EC2 environments, which is why we return early above if + # we're in LXC. + machine_data = get_machine_data() + non_zookeeper_machines = [ + machine_data[key] for key in list(machine_data.keys())[1:]] + if len(non_zookeeper_machines) >= num_machines: + all_machines_running = True + for machine in non_zookeeper_machines: + if machine.get('instance-state') != 'running': + all_machines_running = False + break + if all_machines_running: + break + if time.time() - start_time >= timeout: + raise RuntimeError('timeout waiting for service to start') + time.sleep(SLEEP_AMOUNT) + return num_machines, time.time() - start_time + + +# DEPRECATED: client-side only +def wait_for_unit(service_name, timeout=480): + """Wait `timeout` seconds for a given service name to come up.""" + wait_for_machine(num_machines=1) + start_time = time.time() + while True: + state = unit_info(service_name, 'agent-state') + if 'error' in state or state == 'started': + break + if time.time() - start_time >= timeout: + raise RuntimeError('timeout waiting for service to start') + time.sleep(SLEEP_AMOUNT) + if state != 'started': + raise RuntimeError('unit did not start, agent-state: ' + state) + + +# DEPRECATED: client-side only +def wait_for_relation(service_name, relation_name, timeout=120): + """Wait `timeout` seconds for a given relation to come up.""" + start_time = time.time() + while True: + relation = unit_info(service_name, 'relations').get(relation_name) + if relation is not None and relation['state'] == 'up': + break + if time.time() - start_time >= timeout: + raise RuntimeError('timeout waiting for relation to be up') + time.sleep(SLEEP_AMOUNT) + + +# DEPRECATED: client-side only +def wait_for_page_contents(url, contents, timeout=120, validate=None): + if validate is None: + validate = operator.contains + start_time = time.time() + while True: + try: + stream = urlopen(url) + except (HTTPError, URLError): + pass + else: + page = stream.read() + if validate(page, contents): + return page + if time.time() - start_time >= timeout: + raise RuntimeError('timeout waiting for contents of ' + url) + time.sleep(SLEEP_AMOUNT) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/charmsupport/IMPORT b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/charmsupport/IMPORT new file mode 100644 index 0000000000000000000000000000000000000000..554fddda9f48d6d17abaff942834493b2cca2dbe --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/charmsupport/IMPORT @@ -0,0 +1,14 @@ +Source: lp:charmsupport/trunk + +charmsupport/charmsupport/execd.py -> charm-helpers/charmhelpers/contrib/charmsupport/execd.py +charmsupport/charmsupport/hookenv.py -> charm-helpers/charmhelpers/contrib/charmsupport/hookenv.py 
+charmsupport/charmsupport/host.py -> charm-helpers/charmhelpers/contrib/charmsupport/host.py +charmsupport/charmsupport/nrpe.py -> charm-helpers/charmhelpers/contrib/charmsupport/nrpe.py +charmsupport/charmsupport/volumes.py -> charm-helpers/charmhelpers/contrib/charmsupport/volumes.py + +charmsupport/tests/test_execd.py -> charm-helpers/tests/contrib/charmsupport/test_execd.py +charmsupport/tests/test_hookenv.py -> charm-helpers/tests/contrib/charmsupport/test_hookenv.py +charmsupport/tests/test_host.py -> charm-helpers/tests/contrib/charmsupport/test_host.py +charmsupport/tests/test_nrpe.py -> charm-helpers/tests/contrib/charmsupport/test_nrpe.py + +charmsupport/bin/charmsupport -> charm-helpers/bin/contrib/charmsupport/charmsupport diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/charmsupport/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/charmsupport/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..d7567b863e3a5ad2b7a7f44958b4166e0c3d346b --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/charmsupport/__init__.py @@ -0,0 +1,13 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/charmsupport/nrpe.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/charmsupport/nrpe.py new file mode 100644 index 0000000000000000000000000000000000000000..d775861b0868a174a3f0f4a8d4a42320272a4cb0 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/charmsupport/nrpe.py @@ -0,0 +1,500 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +"""Compatibility with the nrpe-external-master charm""" +# Copyright 2012 Canonical Ltd. +# +# Authors: +# Matthew Wedgwood + +import subprocess +import pwd +import grp +import os +import glob +import shutil +import re +import shlex +import yaml + +from charmhelpers.core.hookenv import ( + config, + hook_name, + local_unit, + log, + relation_get, + relation_ids, + relation_set, + relations_of_type, +) + +from charmhelpers.core.host import service +from charmhelpers.core import host + +# This module adds compatibility with the nrpe-external-master and plain nrpe +# subordinate charms. To use it in your charm: +# +# 1. Update metadata.yaml +# +# provides: +# (...) 
+# nrpe-external-master: +# interface: nrpe-external-master +# scope: container +# +# and/or +# +# provides: +# (...) +# local-monitors: +# interface: local-monitors +# scope: container + +# +# 2. Add the following to config.yaml +# +# nagios_context: +# default: "juju" +# type: string +# description: | +# Used by the nrpe subordinate charms. +# A string that will be prepended to instance name to set the host name +# in nagios. So for instance the hostname would be something like: +# juju-myservice-0 +# If you're running multiple environments with the same services in them +# this allows you to differentiate between them. +# nagios_servicegroups: +# default: "" +# type: string +# description: | +# A comma-separated list of nagios servicegroups. +# If left empty, the nagios_context will be used as the servicegroup +# +# 3. Add custom checks (Nagios plugins) to files/nrpe-external-master +# +# 4. Update your hooks.py with something like this: +# +# from charmsupport.nrpe import NRPE +# (...) +# def update_nrpe_config(): +# nrpe_compat = NRPE() +# nrpe_compat.add_check( +# shortname = "myservice", +# description = "Check MyService", +# check_cmd = "check_http -w 2 -c 10 http://localhost" +# ) +# nrpe_compat.add_check( +# "myservice_other", +# "Check for widget failures", +# check_cmd = "/srv/myapp/scripts/widget_check" +# ) +# nrpe_compat.write() +# +# def config_changed(): +# (...) +# update_nrpe_config() +# +# def nrpe_external_master_relation_changed(): +# update_nrpe_config() +# +# def local_monitors_relation_changed(): +# update_nrpe_config() +# +# 4.a If your charm is a subordinate charm set primary=False +# +# from charmsupport.nrpe import NRPE +# (...) +# def update_nrpe_config(): +# nrpe_compat = NRPE(primary=False) +# +# 5. ln -s hooks.py nrpe-external-master-relation-changed +# ln -s hooks.py local-monitors-relation-changed + + +class CheckException(Exception): + pass + + +class Check(object): + shortname_re = '[A-Za-z0-9-_.@]+$' + service_template = (""" +#--------------------------------------------------- +# This file is Juju managed +#--------------------------------------------------- +define service {{ + use active-service + host_name {nagios_hostname} + service_description {nagios_hostname}[{shortname}] """ + """{description} + check_command check_nrpe!{command} + servicegroups {nagios_servicegroup} +}} +""") + + def __init__(self, shortname, description, check_cmd): + super(Check, self).__init__() + # XXX: could be better to calculate this from the service name + if not re.match(self.shortname_re, shortname): + raise CheckException("shortname must match {}".format( + Check.shortname_re)) + self.shortname = shortname + self.command = "check_{}".format(shortname) + # Note: a set of invalid characters is defined by the + # Nagios server config + # The default is: illegal_object_name_chars=`~!$%^&*"|'<>?,()= + self.description = description + self.check_cmd = self._locate_cmd(check_cmd) + + def _get_check_filename(self): + return os.path.join(NRPE.nrpe_confdir, '{}.cfg'.format(self.command)) + + def _get_service_filename(self, hostname): + return os.path.join(NRPE.nagios_exportdir, + 'service__{}_{}.cfg'.format(hostname, self.command)) + + def _locate_cmd(self, check_cmd): + search_path = ( + '/usr/lib/nagios/plugins', + '/usr/local/lib/nagios/plugins', + ) + parts = shlex.split(check_cmd) + for path in search_path: + if os.path.exists(os.path.join(path, parts[0])): + command = os.path.join(path, parts[0]) + if len(parts) > 1: + command += " " + " ".join(parts[1:]) + return 
command + log('Check command not found: {}'.format(parts[0])) + return '' + + def _remove_service_files(self): + if not os.path.exists(NRPE.nagios_exportdir): + return + for f in os.listdir(NRPE.nagios_exportdir): + if f.endswith('_{}.cfg'.format(self.command)): + os.remove(os.path.join(NRPE.nagios_exportdir, f)) + + def remove(self, hostname): + nrpe_check_file = self._get_check_filename() + if os.path.exists(nrpe_check_file): + os.remove(nrpe_check_file) + self._remove_service_files() + + def write(self, nagios_context, hostname, nagios_servicegroups): + nrpe_check_file = self._get_check_filename() + with open(nrpe_check_file, 'w') as nrpe_check_config: + nrpe_check_config.write("# check {}\n".format(self.shortname)) + if nagios_servicegroups: + nrpe_check_config.write( + "# The following header was added automatically by juju\n") + nrpe_check_config.write( + "# Modifying it will affect nagios monitoring and alerting\n") + nrpe_check_config.write( + "# servicegroups: {}\n".format(nagios_servicegroups)) + nrpe_check_config.write("command[{}]={}\n".format( + self.command, self.check_cmd)) + + if not os.path.exists(NRPE.nagios_exportdir): + log('Not writing service config as {} is not accessible'.format( + NRPE.nagios_exportdir)) + else: + self.write_service_config(nagios_context, hostname, + nagios_servicegroups) + + def write_service_config(self, nagios_context, hostname, + nagios_servicegroups): + self._remove_service_files() + + templ_vars = { + 'nagios_hostname': hostname, + 'nagios_servicegroup': nagios_servicegroups, + 'description': self.description, + 'shortname': self.shortname, + 'command': self.command, + } + nrpe_service_text = Check.service_template.format(**templ_vars) + nrpe_service_file = self._get_service_filename(hostname) + with open(nrpe_service_file, 'w') as nrpe_service_config: + nrpe_service_config.write(str(nrpe_service_text)) + + def run(self): + subprocess.call(self.check_cmd) + + +class NRPE(object): + nagios_logdir = '/var/log/nagios' + nagios_exportdir = '/var/lib/nagios/export' + nrpe_confdir = '/etc/nagios/nrpe.d' + homedir = '/var/lib/nagios' # home dir provided by nagios-nrpe-server + + def __init__(self, hostname=None, primary=True): + super(NRPE, self).__init__() + self.config = config() + self.primary = primary + self.nagios_context = self.config['nagios_context'] + if 'nagios_servicegroups' in self.config and self.config['nagios_servicegroups']: + self.nagios_servicegroups = self.config['nagios_servicegroups'] + else: + self.nagios_servicegroups = self.nagios_context + self.unit_name = local_unit().replace('/', '-') + if hostname: + self.hostname = hostname + else: + nagios_hostname = get_nagios_hostname() + if nagios_hostname: + self.hostname = nagios_hostname + else: + self.hostname = "{}-{}".format(self.nagios_context, self.unit_name) + self.checks = [] + # Iff in an nrpe-external-master relation hook, set primary status + relation = relation_ids('nrpe-external-master') + if relation: + log("Setting charm primary status {}".format(primary)) + for rid in relation: + relation_set(relation_id=rid, relation_settings={'primary': self.primary}) + self.remove_check_queue = set() + + def add_check(self, *args, **kwargs): + shortname = None + if kwargs.get('shortname') is None: + if len(args) > 0: + shortname = args[0] + else: + shortname = kwargs['shortname'] + + self.checks.append(Check(*args, **kwargs)) + try: + self.remove_check_queue.remove(shortname) + except KeyError: + pass + + def remove_check(self, *args, **kwargs): + if kwargs.get('shortname') is 
None:
+            raise ValueError('shortname of check must be specified')
+
+        # Use sensible defaults if they're not specified - these are not
+        # actually used during removal, but they're required for constructing
+        # the Check object; check_disk is chosen because it's part of the
+        # nagios-plugins-basic package.
+        if kwargs.get('check_cmd') is None:
+            kwargs['check_cmd'] = 'check_disk'
+        if kwargs.get('description') is None:
+            kwargs['description'] = ''
+
+        check = Check(*args, **kwargs)
+        check.remove(self.hostname)
+        self.remove_check_queue.add(kwargs['shortname'])
+
+    def write(self):
+        try:
+            nagios_uid = pwd.getpwnam('nagios').pw_uid
+            nagios_gid = grp.getgrnam('nagios').gr_gid
+        except Exception:
+            log("Nagios user not set up, nrpe checks not updated")
+            return
+
+        if not os.path.exists(NRPE.nagios_logdir):
+            os.mkdir(NRPE.nagios_logdir)
+            os.chown(NRPE.nagios_logdir, nagios_uid, nagios_gid)
+
+        nrpe_monitors = {}
+        monitors = {"monitors": {"remote": {"nrpe": nrpe_monitors}}}
+        for nrpecheck in self.checks:
+            nrpecheck.write(self.nagios_context, self.hostname,
+                            self.nagios_servicegroups)
+            nrpe_monitors[nrpecheck.shortname] = {
+                "command": nrpecheck.command,
+            }
+
+        # update-status hooks are configured to fire every 5 minutes by
+        # default. When nagios-nrpe-server is restarted, the nagios server
+        # reports checks as failing, causing unnecessary alerts. Let's not
+        # restart on update-status hooks.
+        if not hook_name() == 'update-status':
+            service('restart', 'nagios-nrpe-server')
+
+        monitor_ids = relation_ids("local-monitors") + \
+            relation_ids("nrpe-external-master")
+        for rid in monitor_ids:
+            reldata = relation_get(unit=local_unit(), rid=rid)
+            if 'monitors' in reldata:
+                # update the existing set of monitors with the new data
+                old_monitors = yaml.safe_load(reldata['monitors'])
+                old_nrpe_monitors = old_monitors['monitors']['remote']['nrpe']
+                # remove keys that are in the remove_check_queue
+                old_nrpe_monitors = {k: v for k, v in old_nrpe_monitors.items()
+                                     if k not in self.remove_check_queue}
+                # update/add nrpe_monitors
+                old_nrpe_monitors.update(nrpe_monitors)
+                old_monitors['monitors']['remote']['nrpe'] = old_nrpe_monitors
+                # write back to the relation
+                relation_set(relation_id=rid, monitors=yaml.dump(old_monitors))
+            else:
+                # write a brand new set of monitors, as none exist yet.
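+                # The remote monitoring charm is expected to parse this YAML
+                # and create one NRPE service definition per entry under
+                # monitors.remote.nrpe.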
+                relation_set(relation_id=rid, monitors=yaml.dump(monitors))
+
+        self.remove_check_queue.clear()
+
+
+def get_nagios_hostcontext(relation_name='nrpe-external-master'):
+    """
+    Query relation with nrpe subordinate, return the nagios_host_context
+
+    :param str relation_name: Name of relation nrpe sub joined to
+    """
+    for rel in relations_of_type(relation_name):
+        if 'nagios_host_context' in rel:
+            return rel['nagios_host_context']
+
+
+def get_nagios_hostname(relation_name='nrpe-external-master'):
+    """
+    Query relation with nrpe subordinate, return the nagios_hostname
+
+    :param str relation_name: Name of relation nrpe sub joined to
+    """
+    for rel in relations_of_type(relation_name):
+        if 'nagios_hostname' in rel:
+            return rel['nagios_hostname']
+
+
+def get_nagios_unit_name(relation_name='nrpe-external-master'):
+    """
+    Return the nagios unit name prepended with host_context if needed
+
+    :param str relation_name: Name of relation nrpe sub joined to
+    """
+    host_context = get_nagios_hostcontext(relation_name)
+    if host_context:
+        unit = "%s:%s" % (host_context, local_unit())
+    else:
+        unit = local_unit()
+    return unit
+
+
+def add_init_service_checks(nrpe, services, unit_name, immediate_check=True):
+    """
+    Add checks for each service in list
+
+    :param NRPE nrpe: NRPE object to add check to
+    :param list services: List of services to check
+    :param str unit_name: Unit name to use in check description
+    :param bool immediate_check: For sysv init, run the service check immediately
+    """
+    for svc in services:
+        # Don't add a check for these services from neutron-gateway
+        if svc in ['ext-port', 'os-charm-phy-nic-mtu']:
+            continue
+
+        upstart_init = '/etc/init/%s.conf' % svc
+        sysv_init = '/etc/init.d/%s' % svc
+
+        if host.init_is_systemd():
+            nrpe.add_check(
+                shortname=svc,
+                description='process check {%s}' % unit_name,
+                check_cmd='check_systemd.py %s' % svc
+            )
+        elif os.path.exists(upstart_init):
+            nrpe.add_check(
+                shortname=svc,
+                description='process check {%s}' % unit_name,
+                check_cmd='check_upstart_job %s' % svc
+            )
+        elif os.path.exists(sysv_init):
+            cronpath = '/etc/cron.d/nagios-service-check-%s' % svc
+            checkpath = '%s/service-check-%s.txt' % (nrpe.homedir, svc)
+            croncmd = (
+                '/usr/local/lib/nagios/plugins/check_exit_status.pl '
+                '-e -s /etc/init.d/%s status' % svc
+            )
+            cron_file = '*/5 * * * * root %s > %s\n' % (croncmd, checkpath)
+            f = open(cronpath, 'w')
+            f.write(cron_file)
+            f.close()
+            nrpe.add_check(
+                shortname=svc,
+                description='service check {%s}' % unit_name,
+                check_cmd='check_status_file.py -f %s' % checkpath,
+            )
+            # if /var/lib/nagios doesn't exist open(checkpath, 'w') will fail
+            # (LP: #1670223).
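+            # Running the check once now primes the status file, so that
+            # check_status_file.py does not report a missing file before
+            # the cron job first fires.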
+            if immediate_check and os.path.isdir(nrpe.homedir):
+                f = open(checkpath, 'w')
+                subprocess.call(
+                    croncmd.split(),
+                    stdout=f,
+                    stderr=subprocess.STDOUT
+                )
+                f.close()
+                os.chmod(checkpath, 0o644)
+
+
+def copy_nrpe_checks(nrpe_files_dir=None):
+    """
+    Copy the nrpe checks into place
+
+    """
+    NAGIOS_PLUGINS = '/usr/local/lib/nagios/plugins'
+    if nrpe_files_dir is None:
+        # determine if "charmhelpers" is in CHARMDIR or CHARMDIR/hooks
+        for segment in ['.', 'hooks']:
+            nrpe_files_dir = os.path.abspath(os.path.join(
+                os.getenv('CHARM_DIR'),
+                segment,
+                'charmhelpers',
+                'contrib',
+                'openstack',
+                'files'))
+            if os.path.isdir(nrpe_files_dir):
+                break
+        else:
+            raise RuntimeError("Couldn't find charmhelpers directory")
+    if not os.path.exists(NAGIOS_PLUGINS):
+        os.makedirs(NAGIOS_PLUGINS)
+    for fname in glob.glob(os.path.join(nrpe_files_dir, "check_*")):
+        if os.path.isfile(fname):
+            shutil.copy2(fname,
+                         os.path.join(NAGIOS_PLUGINS, os.path.basename(fname)))
+
+
+def add_haproxy_checks(nrpe, unit_name):
+    """
+    Add checks for each service in list
+
+    :param NRPE nrpe: NRPE object to add check to
+    :param str unit_name: Unit name to use in check description
+    """
+    nrpe.add_check(
+        shortname='haproxy_servers',
+        description='Check HAProxy {%s}' % unit_name,
+        check_cmd='check_haproxy.sh')
+    nrpe.add_check(
+        shortname='haproxy_queue',
+        description='Check HAProxy queue depth {%s}' % unit_name,
+        check_cmd='check_haproxy_queue_depth.sh')
+
+
+def remove_deprecated_check(nrpe, deprecated_services):
+    """
+    Remove checks for deprecated services in list
+
+    :param nrpe: NRPE object to remove check from
+    :type nrpe: NRPE
+    :param deprecated_services: List of deprecated services that are removed
+    :type deprecated_services: list
+    """
+    for dep_svc in deprecated_services:
+        log('Deprecated service: {}'.format(dep_svc))
+        nrpe.remove_check(shortname=dep_svc)
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/charmsupport/volumes.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/charmsupport/volumes.py
new file mode 100644
index 0000000000000000000000000000000000000000..7ea43f0888cd92ac4344f908aa7c9d0afe7568ed
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/charmsupport/volumes.py
@@ -0,0 +1,173 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+'''
+Functions for managing volumes in juju units. One volume is supported per unit.
+Subordinates may have their own storage, provided it is on its own partition.
+
+Configuration stanzas::
+
+    volume-ephemeral:
+        type: boolean
+        default: true
+        description: >
+            If false, a volume is mounted as specified in "volume-map"
+            If true, ephemeral storage will be used, meaning that log data
+            will only exist as long as the machine does. YOU HAVE BEEN WARNED.
+    volume-map:
+        type: string
+        default: {}
+        description: >
+            YAML map of units to device names, e.g.:
+                "{ rsyslog/0: /dev/vdb, rsyslog/1: /dev/vdb }"
+            Service units will raise a configure-error if volume-ephemeral
+            is 'true' and no volume-map value is set. Use 'juju set' to set a
+            value and 'juju resolved' to complete configuration.
+
+Usage::
+
+    from charmsupport.volumes import configure_volume, VolumeConfigurationError
+    from charmsupport.hookenv import log, ERROR
+    def pre_mount_hook():
+        stop_service('myservice')
+    def post_mount_hook():
+        start_service('myservice')
+
+    if __name__ == '__main__':
+        try:
+            configure_volume(before_change=pre_mount_hook,
+                             after_change=post_mount_hook)
+        except VolumeConfigurationError:
+            log('Storage could not be configured', ERROR)
+
+'''
+
+# XXX: Known limitations
+# - fstab is neither consulted nor updated
+
+import os
+from charmhelpers.core import hookenv
+from charmhelpers.core import host
+import yaml
+
+
+MOUNT_BASE = '/srv/juju/volumes'
+
+
+class VolumeConfigurationError(Exception):
+    '''Volume configuration data is missing or invalid'''
+    pass
+
+
+def get_config():
+    '''Gather and sanity-check volume configuration data'''
+    volume_config = {}
+    config = hookenv.config()
+
+    errors = False
+
+    if config.get('volume-ephemeral') in (True, 'True', 'true', 'Yes', 'yes'):
+        volume_config['ephemeral'] = True
+    else:
+        volume_config['ephemeral'] = False
+
+    volume_map = None
+    try:
+        volume_map = yaml.safe_load(config.get('volume-map', '{}'))
+    except yaml.YAMLError as e:
+        hookenv.log("Error parsing YAML volume-map: {}".format(e),
+                    hookenv.ERROR)
+        errors = True
+    if volume_map is None:
+        # probably an empty string, or the YAML failed to parse
+        volume_map = {}
+    elif not isinstance(volume_map, dict):
+        hookenv.log("Volume-map should be a dictionary, not {}".format(
+            type(volume_map)))
+        errors = True
+
+    volume_config['device'] = volume_map.get(os.environ['JUJU_UNIT_NAME'])
+    if volume_config['device'] and volume_config['ephemeral']:
+        # asked for ephemeral storage but also defined a volume ID
+        hookenv.log('A volume is defined for this unit, but ephemeral '
+                    'storage was requested', hookenv.ERROR)
+        errors = True
+    elif not volume_config['device'] and not volume_config['ephemeral']:
+        # asked for permanent storage but did not define volume ID
+        hookenv.log('Permanent storage was requested, but there is no volume '
+                    'defined for this unit.', hookenv.ERROR)
+        errors = True
+
+    unit_mount_name = hookenv.local_unit().replace('/', '-')
+    volume_config['mountpoint'] = os.path.join(MOUNT_BASE, unit_mount_name)
+
+    if errors:
+        return None
+    return volume_config
+
+
+def mount_volume(config):
+    if os.path.exists(config['mountpoint']):
+        if not os.path.isdir(config['mountpoint']):
+            hookenv.log('Not a directory: {}'.format(config['mountpoint']))
+            raise VolumeConfigurationError()
+    else:
+        host.mkdir(config['mountpoint'])
+    if os.path.ismount(config['mountpoint']):
+        unmount_volume(config)
+    if not host.mount(config['device'], config['mountpoint'], persist=True):
+        raise VolumeConfigurationError()
+
+
+def unmount_volume(config):
+    if os.path.ismount(config['mountpoint']):
+        if not host.umount(config['mountpoint'], persist=True):
+            raise VolumeConfigurationError()
+
+
+def managed_mounts():
+    '''List of all mounted managed volumes'''
+    return filter(lambda mount: mount[0].startswith(MOUNT_BASE), host.mounts())
+
+
+def configure_volume(before_change=lambda: None, after_change=lambda: None):
+    '''Set up storage (or don't) according to the charm's volume configuration.
+ Returns the mount point or "ephemeral". before_change and after_change + are optional functions to be called if the volume configuration changes. + ''' + + config = get_config() + if not config: + hookenv.log('Failed to read volume configuration', hookenv.CRITICAL) + raise VolumeConfigurationError() + + if config['ephemeral']: + if os.path.ismount(config['mountpoint']): + before_change() + unmount_volume(config) + after_change() + return 'ephemeral' + else: + # persistent storage + if os.path.ismount(config['mountpoint']): + mounts = dict(managed_mounts()) + if mounts.get(config['mountpoint']) != config['device']: + before_change() + unmount_volume(config) + mount_volume(config) + after_change() + else: + before_change() + mount_volume(config) + after_change() + return config['mountpoint'] diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/database/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/database/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..64fac9def6101c1de83cd5dafe7fdb5213d6a955 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/database/__init__.py @@ -0,0 +1,11 @@ +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/database/mysql.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/database/mysql.py new file mode 100644 index 0000000000000000000000000000000000000000..c9ecce5f793eedeb963557d109deaf5e1271e65f --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/database/mysql.py @@ -0,0 +1,821 @@ +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +"""Helper for working with a MySQL database""" +import collections +import copy +import json +import re +import sys +import platform +import os +import glob +import six + +# from string import upper + +from charmhelpers.core.host import ( + CompareHostReleases, + lsb_release, + mkdir, + pwgen, + write_file +) +from charmhelpers.core.hookenv import ( + config as config_get, + relation_get, + related_units, + unit_get, + log, + DEBUG, + INFO, + WARNING, + leader_get, + leader_set, + is_leader, +) +from charmhelpers.fetch import ( + apt_install, + apt_update, + filter_installed_packages, +) +from charmhelpers.contrib.network.ip import get_host_ip + +try: + import MySQLdb +except ImportError: + apt_update(fatal=True) + if six.PY2: + apt_install(filter_installed_packages(['python-mysqldb']), fatal=True) + else: + apt_install(filter_installed_packages(['python3-mysqldb']), fatal=True) + import MySQLdb + + +class MySQLSetPasswordError(Exception): + pass + + +class MySQLHelper(object): + + def __init__(self, rpasswdf_template, upasswdf_template, host='localhost', + migrate_passwd_to_leader_storage=True, + delete_ondisk_passwd_file=True, user="root", password=None, port=None): + self.user = user + self.host = host + self.password = password + self.port = port + + # Password file path templates + self.root_passwd_file_template = rpasswdf_template + self.user_passwd_file_template = upasswdf_template + + self.migrate_passwd_to_leader_storage = migrate_passwd_to_leader_storage + # If we migrate we have the option to delete local copy of root passwd + self.delete_ondisk_passwd_file = delete_ondisk_passwd_file + self.connection = None + + def connect(self, user='root', password=None, host=None, port=None): + _connection_info = { + "user": user or self.user, + "passwd": password or self.password, + "host": host or self.host + } + # port cannot be None but we also do not want to specify it unless it + # has been explicit set. 
+        port = port or self.port
+        if port is not None:
+            _connection_info["port"] = port
+
+        log("Opening db connection for %s@%s" % (user, host), level=DEBUG)
+        self.connection = MySQLdb.connect(**_connection_info)
+
+    def database_exists(self, db_name):
+        cursor = self.connection.cursor()
+        try:
+            cursor.execute("SHOW DATABASES")
+            databases = [i[0] for i in cursor.fetchall()]
+        finally:
+            cursor.close()
+
+        return db_name in databases
+
+    def create_database(self, db_name):
+        cursor = self.connection.cursor()
+        try:
+            cursor.execute("CREATE DATABASE `{}` CHARACTER SET UTF8"
+                           .format(db_name))
+        finally:
+            cursor.close()
+
+    def grant_exists(self, db_name, db_user, remote_ip):
+        cursor = self.connection.cursor()
+        priv_string = "GRANT ALL PRIVILEGES ON `{}`.* " \
+                      "TO '{}'@'{}'".format(db_name, db_user, remote_ip)
+        try:
+            cursor.execute("SHOW GRANTS for '{}'@'{}'".format(db_user,
+                                                              remote_ip))
+            grants = [i[0] for i in cursor.fetchall()]
+        except MySQLdb.OperationalError:
+            return False
+        finally:
+            cursor.close()
+
+        # TODO: review for different grants
+        return priv_string in grants
+
+    def create_grant(self, db_name, db_user, remote_ip, password):
+        cursor = self.connection.cursor()
+        try:
+            # TODO: review for different grants
+            cursor.execute("GRANT ALL PRIVILEGES ON `{}`.* TO '{}'@'{}' "
+                           "IDENTIFIED BY '{}'".format(db_name,
+                                                       db_user,
+                                                       remote_ip,
+                                                       password))
+        finally:
+            cursor.close()
+
+    def create_admin_grant(self, db_user, remote_ip, password):
+        cursor = self.connection.cursor()
+        try:
+            cursor.execute("GRANT ALL PRIVILEGES ON *.* TO '{}'@'{}' "
+                           "IDENTIFIED BY '{}'".format(db_user,
+                                                       remote_ip,
+                                                       password))
+        finally:
+            cursor.close()
+
+    def cleanup_grant(self, db_user, remote_ip):
+        cursor = self.connection.cursor()
+        try:
+            cursor.execute("DELETE FROM mysql.user WHERE user='{}' "
+                           "AND HOST='{}'".format(db_user,
+                                                  remote_ip))
+        finally:
+            cursor.close()
+
+    def flush_priviledges(self):
+        cursor = self.connection.cursor()
+        try:
+            cursor.execute("FLUSH PRIVILEGES")
+        finally:
+            cursor.close()
+
+    def execute(self, sql):
+        """Execute arbitrary SQL against the database."""
+        cursor = self.connection.cursor()
+        try:
+            cursor.execute(sql)
+        finally:
+            cursor.close()
+
+    def select(self, sql):
+        """
+        Execute arbitrary SQL select query against the database
+        and return the results.
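+
+        Example (illustrative; assumes a connected helper and uses a
+        made-up query)::
+
+            helper.connect(password=helper.get_mysql_root_password())
+            rows = helper.select("SELECT user, host FROM mysql.user")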
+ + :param sql: SQL select query to execute + :type sql: string + :returns: SQL select query result + :rtype: list of lists + :raises: MySQLdb.Error + """ + cursor = self.connection.cursor() + try: + cursor.execute(sql) + results = [list(i) for i in cursor.fetchall()] + finally: + cursor.close() + return results + + def migrate_passwords_to_leader_storage(self, excludes=None): + """Migrate any passwords storage on disk to leader storage.""" + if not is_leader(): + log("Skipping password migration as not the lead unit", + level=DEBUG) + return + dirname = os.path.dirname(self.root_passwd_file_template) + path = os.path.join(dirname, '*.passwd') + for f in glob.glob(path): + if excludes and f in excludes: + log("Excluding %s from leader storage migration" % (f), + level=DEBUG) + continue + + key = os.path.basename(f) + with open(f, 'r') as passwd: + _value = passwd.read().strip() + + try: + leader_set(settings={key: _value}) + + if self.delete_ondisk_passwd_file: + os.unlink(f) + except ValueError: + # NOTE cluster relation not yet ready - skip for now + pass + + def get_mysql_password_on_disk(self, username=None, password=None): + """Retrieve, generate or store a mysql password for the provided + username on disk.""" + if username: + template = self.user_passwd_file_template + passwd_file = template.format(username) + else: + passwd_file = self.root_passwd_file_template + + _password = None + if os.path.exists(passwd_file): + log("Using existing password file '%s'" % passwd_file, level=DEBUG) + with open(passwd_file, 'r') as passwd: + _password = passwd.read().strip() + else: + log("Generating new password file '%s'" % passwd_file, level=DEBUG) + if not os.path.isdir(os.path.dirname(passwd_file)): + # NOTE: need to ensure this is not mysql root dir (which needs + # to be mysql readable) + mkdir(os.path.dirname(passwd_file), owner='root', group='root', + perms=0o770) + # Force permissions - for some reason the chmod in makedirs + # fails + os.chmod(os.path.dirname(passwd_file), 0o770) + + _password = password or pwgen(length=32) + write_file(passwd_file, _password, owner='root', group='root', + perms=0o660) + + return _password + + def passwd_keys(self, username): + """Generator to return keys used to store passwords in peer store. + + NOTE: we support both legacy and new format to support mysql + charm prior to refactor. This is necessary to avoid LP 1451890. + """ + keys = [] + if username == 'mysql': + log("Bad username '%s'" % (username), level=WARNING) + + if username: + # IMPORTANT: *newer* format must be returned first + keys.append('mysql-%s.passwd' % (username)) + keys.append('%s.passwd' % (username)) + else: + keys.append('mysql.passwd') + + for key in keys: + yield key + + def get_mysql_password(self, username=None, password=None): + """Retrieve, generate or store a mysql password for the provided + username using peer relation cluster.""" + excludes = [] + + # First check peer relation. 
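+        # Lookup order: leader storage first (newest key format first),
+        # then the on-disk password file; a new password is generated
+        # only if neither source yields one.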
+ try: + for key in self.passwd_keys(username): + _password = leader_get(key) + if _password: + break + + # If root password available don't update peer relation from local + if _password and not username: + excludes.append(self.root_passwd_file_template) + + except ValueError: + # cluster relation is not yet started; use on-disk + _password = None + + # If none available, generate new one + if not _password: + _password = self.get_mysql_password_on_disk(username, password) + + # Put on wire if required + if self.migrate_passwd_to_leader_storage: + self.migrate_passwords_to_leader_storage(excludes=excludes) + + return _password + + def get_mysql_root_password(self, password=None): + """Retrieve or generate mysql root password for service units.""" + return self.get_mysql_password(username=None, password=password) + + def set_mysql_password(self, username, password, current_password=None): + """Update a mysql password for the provided username changing the + leader settings + + To update root's password pass `None` in the username + + :param username: Username to change password of + :type username: str + :param password: New password for user. + :type password: str + :param current_password: Existing password for user. + :type current_password: str + """ + + if username is None: + username = 'root' + + # get root password via leader-get, it may be that in the past (when + # changes to root-password were not supported) the user changed the + # password, so leader-get is more reliable source than + # config.previous('root-password'). + rel_username = None if username == 'root' else username + if not current_password: + current_password = self.get_mysql_password(rel_username) + + # password that needs to be set + new_passwd = password + + # update password for all users (e.g. root@localhost, root@::1, etc) + try: + self.connect(user=username, password=current_password) + cursor = self.connection.cursor() + except MySQLdb.OperationalError as ex: + raise MySQLSetPasswordError(('Cannot connect using password in ' + 'leader settings (%s)') % ex, ex) + + try: + # NOTE(freyes): Due to skip-name-resolve root@$HOSTNAME account + # fails when using SET PASSWORD so using UPDATE against the + # mysql.user table is needed, but changes to this table are not + # replicated across the cluster, so this update needs to run in + # all the nodes. 
More info at + # http://galeracluster.com/documentation-webpages/userchanges.html + release = CompareHostReleases(lsb_release()['DISTRIB_CODENAME']) + if release < 'bionic': + SQL_UPDATE_PASSWD = ("UPDATE mysql.user SET password = " + "PASSWORD( %s ) WHERE user = %s;") + else: + # PXC 5.7 (introduced in Bionic) uses authentication_string + SQL_UPDATE_PASSWD = ("UPDATE mysql.user SET " + "authentication_string = " + "PASSWORD( %s ) WHERE user = %s;") + cursor.execute(SQL_UPDATE_PASSWD, (new_passwd, username)) + cursor.execute('FLUSH PRIVILEGES;') + self.connection.commit() + except MySQLdb.OperationalError as ex: + raise MySQLSetPasswordError('Cannot update password: %s' % str(ex), + ex) + finally: + cursor.close() + + # check the password was changed + try: + self.connect(user=username, password=new_passwd) + self.execute('select 1;') + except MySQLdb.OperationalError as ex: + raise MySQLSetPasswordError(('Cannot connect using new password: ' + '%s') % str(ex), ex) + + if not is_leader(): + log('Only the leader can set a new password in the relation', + level=DEBUG) + return + + for key in self.passwd_keys(rel_username): + _password = leader_get(key) + if _password: + log('Updating password for %s (%s)' % (key, rel_username), + level=DEBUG) + leader_set(settings={key: new_passwd}) + + def set_mysql_root_password(self, password, current_password=None): + """Update mysql root password changing the leader settings + + :param password: New password for user. + :type password: str + :param current_password: Existing password for user. + :type current_password: str + """ + self.set_mysql_password( + 'root', + password, + current_password=current_password) + + def normalize_address(self, hostname): + """Ensure that address returned is an IP address (i.e. not fqdn)""" + if config_get('prefer-ipv6'): + # TODO: add support for ipv6 dns + return hostname + + if hostname != unit_get('private-address'): + return get_host_ip(hostname, fallback=hostname) + + # Otherwise assume localhost + return '127.0.0.1' + + def get_allowed_units(self, database, username, relation_id=None): + """Get list of units with access grants for database with username. + + This is typically used to provide shared-db relations with a list of + which units have been granted access to the given database. 
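+
+        Example (illustrative; the database and user names are made up)::
+
+            allowed = helper.get_allowed_units('nova', 'nova',
+                                               relation_id=relation_id)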
+ """ + self.connect(password=self.get_mysql_root_password()) + allowed_units = set() + for unit in related_units(relation_id): + settings = relation_get(rid=relation_id, unit=unit) + # First check for setting with prefix, then without + for attr in ["%s_hostname" % (database), 'hostname']: + hosts = settings.get(attr, None) + if hosts: + break + + if hosts: + # hostname can be json-encoded list of hostnames + try: + hosts = json.loads(hosts) + except ValueError: + hosts = [hosts] + else: + hosts = [settings['private-address']] + + if hosts: + for host in hosts: + host = self.normalize_address(host) + if self.grant_exists(database, username, host): + log("Grant exists for host '%s' on db '%s'" % + (host, database), level=DEBUG) + if unit not in allowed_units: + allowed_units.add(unit) + else: + log("Grant does NOT exist for host '%s' on db '%s'" % + (host, database), level=DEBUG) + else: + log("No hosts found for grant check", level=INFO) + + return allowed_units + + def configure_db(self, hostname, database, username, admin=False): + """Configure access to database for username from hostname.""" + self.connect(password=self.get_mysql_root_password()) + if not self.database_exists(database): + self.create_database(database) + + remote_ip = self.normalize_address(hostname) + password = self.get_mysql_password(username) + if not self.grant_exists(database, username, remote_ip): + if not admin: + self.create_grant(database, username, remote_ip, password) + else: + self.create_admin_grant(username, remote_ip, password) + self.flush_priviledges() + + return password + + +# `_singleton_config_helper` stores the instance of the helper class that is +# being used during a hook invocation. +_singleton_config_helper = None + + +def get_mysql_config_helper(): + global _singleton_config_helper + if _singleton_config_helper is None: + _singleton_config_helper = MySQLConfigHelper() + return _singleton_config_helper + + +class MySQLConfigHelper(object): + """Base configuration helper for MySQL.""" + + # Going for the biggest page size to avoid wasted bytes. 
+    # InnoDB page size is 16MB
+
+    DEFAULT_PAGE_SIZE = 16 * 1024 * 1024
+    DEFAULT_INNODB_BUFFER_FACTOR = 0.50
+    DEFAULT_INNODB_BUFFER_SIZE_MAX = 512 * 1024 * 1024
+
+    # Validation and lookups for InnoDB configuration
+    INNODB_VALID_BUFFERING_VALUES = [
+        'none',
+        'inserts',
+        'deletes',
+        'changes',
+        'purges',
+        'all'
+    ]
+    INNODB_FLUSH_CONFIG_VALUES = {
+        'fast': 2,
+        'safest': 1,
+        'unsafe': 0,
+    }
+
+    def human_to_bytes(self, human):
+        """Convert human readable configuration options to bytes."""
+        num_re = re.compile('^[0-9]+$')
+        if num_re.match(human):
+            return int(human)
+
+        factors = {
+            'K': 1024,
+            'M': 1048576,
+            'G': 1073741824,
+            'T': 1099511627776
+        }
+        modifier = human[-1]
+        if modifier in factors:
+            return int(human[:-1]) * factors[modifier]
+
+        if modifier == '%':
+            total_ram = self.human_to_bytes(self.get_mem_total())
+            if self.is_32bit_system() and total_ram > self.sys_mem_limit():
+                total_ram = self.sys_mem_limit()
+            factor = int(human[:-1]) * 0.01
+            pctram = total_ram * factor
+            return int(pctram - (pctram % self.DEFAULT_PAGE_SIZE))
+
+        raise ValueError("Can only convert K,M,G, or T")
+
+    def is_32bit_system(self):
+        """Determine whether system is 32 or 64 bit."""
+        try:
+            return sys.maxsize < 2 ** 32
+        except OverflowError:
+            return False
+
+    def sys_mem_limit(self):
+        """Determine the default memory limit for the current service unit."""
+        if platform.machine() in ['armv7l']:
+            _mem_limit = self.human_to_bytes('2700M')  # experimentally determined
+        else:
+            # Limit for x86 based 32bit systems
+            _mem_limit = self.human_to_bytes('4G')
+
+        return _mem_limit
+
+    def get_mem_total(self):
+        """Calculate the total memory in the current service unit."""
+        with open('/proc/meminfo') as meminfo_file:
+            for line in meminfo_file:
+                key, mem = line.split(':', 2)
+                if key == 'MemTotal':
+                    mtot, modifier = mem.strip().split(' ')
+                    return '%s%s' % (mtot, modifier[0].upper())
+
+    def get_innodb_flush_log_at_trx_commit(self):
+        """Get value for innodb_flush_log_at_trx_commit.
+
+        Use the innodb-flush-log-at-trx-commit or the tuning-level setting
+        translated by INNODB_FLUSH_CONFIG_VALUES to get the
+        innodb_flush_log_at_trx_commit value.
+
+        :returns: Numeric value for innodb_flush_log_at_trx_commit
+        :rtype: Union[None, int]
+        """
+        _iflatc = config_get('innodb-flush-log-at-trx-commit')
+        _tuning_level = config_get('tuning-level')
+        if _iflatc:
+            return _iflatc
+        elif _tuning_level:
+            return self.INNODB_FLUSH_CONFIG_VALUES.get(_tuning_level, 1)
+
+    def get_innodb_change_buffering(self):
+        """Get value for innodb_change_buffering.
+
+        Use the innodb-change-buffering validated against
+        INNODB_VALID_BUFFERING_VALUES to get the innodb_change_buffering value.
+
+        :returns: String value for innodb_change_buffering.
+        :rtype: Union[None, str]
+        """
+        _icb = config_get('innodb-change-buffering')
+        if _icb and _icb in self.INNODB_VALID_BUFFERING_VALUES:
+            return _icb
+
+    def get_innodb_buffer_pool_size(self):
+        """Get value for innodb_buffer_pool_size.
+
+        Return the number value of innodb-buffer-pool-size or dataset-size. If
+        neither is set, calculate a sane default based on total memory.
+
+        :returns: Numeric value for innodb_buffer_pool_size.
+        :rtype: int
+        """
+        total_memory = self.human_to_bytes(self.get_mem_total())
+
+        dataset_bytes = config_get('dataset-size')
+        innodb_buffer_pool_size = config_get('innodb-buffer-pool-size')
+
+        if innodb_buffer_pool_size:
+            innodb_buffer_pool_size = self.human_to_bytes(
+                innodb_buffer_pool_size)
+        elif dataset_bytes:
+            log("Option 'dataset-size' has been deprecated, please use "
+                "the 'innodb-buffer-pool-size' option instead", level="WARN")
+            innodb_buffer_pool_size = self.human_to_bytes(
+                dataset_bytes)
+        else:
+            # NOTE(jamespage): pick the smallest of 50% of RAM or 512MB
+            #                  to ensure that deployments in containers
+            #                  without constraints don't try to consume
+            #                  silly amounts of memory.
+            innodb_buffer_pool_size = min(
+                int(total_memory * self.DEFAULT_INNODB_BUFFER_FACTOR),
+                self.DEFAULT_INNODB_BUFFER_SIZE_MAX
+            )
+
+        if innodb_buffer_pool_size > total_memory:
+            log("innodb_buffer_pool_size {} is greater than system available memory: {}".format(
+                innodb_buffer_pool_size,
+                total_memory), level='WARN')
+
+        return innodb_buffer_pool_size
+
+
+class PerconaClusterHelper(MySQLConfigHelper):
+    """Percona-cluster specific configuration helper."""
+
+    def parse_config(self):
+        """Parse charm configuration and calculate values for config files."""
+        config = config_get()
+        mysql_config = {}
+        if 'max-connections' in config:
+            mysql_config['max_connections'] = config['max-connections']
+
+        if 'wait-timeout' in config:
+            mysql_config['wait_timeout'] = config['wait-timeout']
+
+        if self.get_innodb_flush_log_at_trx_commit() is not None:
+            mysql_config['innodb_flush_log_at_trx_commit'] = \
+                self.get_innodb_flush_log_at_trx_commit()
+
+        if self.get_innodb_change_buffering() is not None:
+            mysql_config['innodb_change_buffering'] = config['innodb-change-buffering']
+
+        if 'innodb-io-capacity' in config:
+            mysql_config['innodb_io_capacity'] = config['innodb-io-capacity']
+
+        # Set a sane default key_buffer size
+        mysql_config['key_buffer'] = self.human_to_bytes('32M')
+        mysql_config['innodb_buffer_pool_size'] = self.get_innodb_buffer_pool_size()
+        return mysql_config
+
+
+class MySQL8Helper(MySQLHelper):
+
+    def grant_exists(self, db_name, db_user, remote_ip):
+        cursor = self.connection.cursor()
+        priv_string = ("GRANT ALL PRIVILEGES ON {}.* "
+                       "TO {}@{}".format(db_name, db_user, remote_ip))
+        try:
+            cursor.execute("SHOW GRANTS FOR '{}'@'{}'".format(db_user,
+                                                              remote_ip))
+            grants = [i[0] for i in cursor.fetchall()]
+        except MySQLdb.OperationalError:
+            return False
+        finally:
+            cursor.close()
+
+        # Different versions of MySQL use ' or `. Ignore these in the check.
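+        # e.g. "GRANT ALL PRIVILEGES ON `foo`.* TO `bar`@`10.0.0.1`" and
+        # "GRANT ALL PRIVILEGES ON foo.* TO bar@10.0.0.1" compare equal
+        # once the quoting characters are stripped.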
+ return priv_string in [ + i.replace("'", "").replace("`", "") for i in grants] + + def create_grant(self, db_name, db_user, remote_ip, password): + if self.grant_exists(db_name, db_user, remote_ip): + return + + # Make sure the user exists + # MySQL8 must create the user before the grant + self.create_user(db_user, remote_ip, password) + + cursor = self.connection.cursor() + try: + cursor.execute("GRANT ALL PRIVILEGES ON `{}`.* TO '{}'@'{}'" + .format(db_name, db_user, remote_ip)) + finally: + cursor.close() + + def create_user(self, db_user, remote_ip, password): + + SQL_USER_CREATE = ( + "CREATE USER '{db_user}'@'{remote_ip}' " + "IDENTIFIED BY '{password}'") + + cursor = self.connection.cursor() + try: + cursor.execute(SQL_USER_CREATE.format( + db_user=db_user, + remote_ip=remote_ip, + password=password) + ) + except MySQLdb._exceptions.OperationalError: + log("DB user {} already exists.".format(db_user), + "WARNING") + finally: + cursor.close() + + def create_router_grant(self, db_user, remote_ip, password): + + # Make sure the user exists + # MySQL8 must create the user before the grant + self.create_user(db_user, remote_ip, password) + + # Mysql-Router specific grants + cursor = self.connection.cursor() + try: + cursor.execute("GRANT CREATE USER ON *.* TO '{}'@'{}' WITH GRANT " + "OPTION".format(db_user, remote_ip)) + cursor.execute("GRANT SELECT, INSERT, UPDATE, DELETE, EXECUTE ON " + "mysql_innodb_cluster_metadata.* TO '{}'@'{}'" + .format(db_user, remote_ip)) + cursor.execute("GRANT SELECT ON mysql.user TO '{}'@'{}'" + .format(db_user, remote_ip)) + cursor.execute("GRANT SELECT ON " + "performance_schema.replication_group_members " + "TO '{}'@'{}'".format(db_user, remote_ip)) + cursor.execute("GRANT SELECT ON " + "performance_schema.replication_group_member_stats " + "TO '{}'@'{}'".format(db_user, remote_ip)) + cursor.execute("GRANT SELECT ON " + "performance_schema.global_variables " + "TO '{}'@'{}'".format(db_user, remote_ip)) + finally: + cursor.close() + + def configure_router(self, hostname, username): + + if self.connection is None: + self.connect(password=self.get_mysql_root_password()) + + remote_ip = self.normalize_address(hostname) + password = self.get_mysql_password(username) + self.create_user(username, remote_ip, password) + self.create_router_grant(username, remote_ip, password) + + return password + + +def get_prefix(requested, keys=None): + """Return existing prefix or None. + + :param requested: Request string. i.e. novacell0_username + :type requested: str + :param keys: Keys to determine prefix. Defaults set in function. + :type keys: List of str keys + :returns: String prefix i.e. novacell0 + :rtype: Union[None, str] + """ + if keys is None: + # Shared-DB default keys + keys = ["_database", "_username", "_hostname"] + for key in keys: + if requested.endswith(key): + return requested[:-len(key)] + + +def get_db_data(relation_data, unprefixed): + """Organize database requests into a collections.OrderedDict + + :param relation_data: shared-db relation data + :type relation_data: dict + :param unprefixed: Prefix to use for requests without a prefix. This should + be unique for each side of the relation to avoid + conflicts. 
+ :type unprefixed: str + :returns: Order dict of databases and users + :rtype: collections.OrderedDict + """ + # Deep copy to avoid unintentionally changing relation data + settings = copy.deepcopy(relation_data) + databases = collections.OrderedDict() + + # Clear non-db related elements + if "egress-subnets" in settings.keys(): + settings.pop("egress-subnets") + if "ingress-address" in settings.keys(): + settings.pop("ingress-address") + if "private-address" in settings.keys(): + settings.pop("private-address") + + singleset = {"database", "username", "hostname"} + if singleset.issubset(settings): + settings["{}_{}".format(unprefixed, "hostname")] = ( + settings["hostname"]) + settings.pop("hostname") + settings["{}_{}".format(unprefixed, "database")] = ( + settings["database"]) + settings.pop("database") + settings["{}_{}".format(unprefixed, "username")] = ( + settings["username"]) + settings.pop("username") + + for k, v in settings.items(): + db = k.split("_")[0] + x = "_".join(k.split("_")[1:]) + if db not in databases: + databases[db] = collections.OrderedDict() + databases[db][x] = v + + return databases diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hahelpers/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hahelpers/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..d7567b863e3a5ad2b7a7f44958b4166e0c3d346b --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hahelpers/__init__.py @@ -0,0 +1,13 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hahelpers/apache.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hahelpers/apache.py new file mode 100644 index 0000000000000000000000000000000000000000..2c1e371e179bb6926f227331f86cb14a38c229a2 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hahelpers/apache.py @@ -0,0 +1,86 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# +# Copyright 2012 Canonical Ltd. 
+# +# This file is sourced from lp:openstack-charm-helpers +# +# Authors: +# James Page +# Adam Gandelman +# + +import os + +from charmhelpers.core import host +from charmhelpers.core.hookenv import ( + config as config_get, + relation_get, + relation_ids, + related_units as relation_list, + log, + INFO, +) + + +def get_cert(cn=None): + # TODO: deal with multiple https endpoints via charm config + cert = config_get('ssl_cert') + key = config_get('ssl_key') + if not (cert and key): + log("Inspecting identity-service relations for SSL certificate.", + level=INFO) + cert = key = None + if cn: + ssl_cert_attr = 'ssl_cert_{}'.format(cn) + ssl_key_attr = 'ssl_key_{}'.format(cn) + else: + ssl_cert_attr = 'ssl_cert' + ssl_key_attr = 'ssl_key' + for r_id in relation_ids('identity-service'): + for unit in relation_list(r_id): + if not cert: + cert = relation_get(ssl_cert_attr, + rid=r_id, unit=unit) + if not key: + key = relation_get(ssl_key_attr, + rid=r_id, unit=unit) + return (cert, key) + + +def get_ca_cert(): + ca_cert = config_get('ssl_ca') + if ca_cert is None: + log("Inspecting identity-service relations for CA SSL certificate.", + level=INFO) + for r_id in (relation_ids('identity-service') + + relation_ids('identity-credentials')): + for unit in relation_list(r_id): + if ca_cert is None: + ca_cert = relation_get('ca_cert', + rid=r_id, unit=unit) + return ca_cert + + +def retrieve_ca_cert(cert_file): + cert = None + if os.path.isfile(cert_file): + with open(cert_file, 'rb') as crt: + cert = crt.read() + return cert + + +def install_ca_cert(ca_cert): + host.install_ca_cert(ca_cert, 'keystone_juju_ca_cert') diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hahelpers/cluster.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hahelpers/cluster.py new file mode 100644 index 0000000000000000000000000000000000000000..ba34fba0cafa21d15a6a27946544b2c99fbd3663 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hahelpers/cluster.py @@ -0,0 +1,451 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# +# Copyright 2012 Canonical Ltd. +# +# Authors: +# James Page +# Adam Gandelman +# + +""" +Helpers for clustering and determining "cluster leadership" and other +clustering-related helpers. 
+""" + +import functools +import subprocess +import os +import time + +from socket import gethostname as get_unit_hostname + +import six + +from charmhelpers.core.hookenv import ( + log, + relation_ids, + related_units as relation_list, + relation_get, + config as config_get, + INFO, + DEBUG, + WARNING, + unit_get, + is_leader as juju_is_leader, + status_set, +) +from charmhelpers.core.host import ( + modulo_distribution, +) +from charmhelpers.core.decorators import ( + retry_on_exception, +) +from charmhelpers.core.strutils import ( + bool_from_string, +) + +DC_RESOURCE_NAME = 'DC' + + +class HAIncompleteConfig(Exception): + pass + + +class HAIncorrectConfig(Exception): + pass + + +class CRMResourceNotFound(Exception): + pass + + +class CRMDCNotFound(Exception): + pass + + +def is_elected_leader(resource): + """ + Returns True if the charm executing this is the elected cluster leader. + + It relies on two mechanisms to determine leadership: + 1. If juju is sufficiently new and leadership election is supported, + the is_leader command will be used. + 2. If the charm is part of a corosync cluster, call corosync to + determine leadership. + 3. If the charm is not part of a corosync cluster, the leader is + determined as being "the alive unit with the lowest unit numer". In + other words, the oldest surviving unit. + """ + try: + return juju_is_leader() + except NotImplementedError: + log('Juju leadership election feature not enabled' + ', using fallback support', + level=WARNING) + + if is_clustered(): + if not is_crm_leader(resource): + log('Deferring action to CRM leader.', level=INFO) + return False + else: + peers = peer_units() + if peers and not oldest_peer(peers): + log('Deferring action to oldest service unit.', level=INFO) + return False + return True + + +def is_clustered(): + for r_id in (relation_ids('ha') or []): + for unit in (relation_list(r_id) or []): + clustered = relation_get('clustered', + rid=r_id, + unit=unit) + if clustered: + return True + return False + + +def is_crm_dc(): + """ + Determine leadership by querying the pacemaker Designated Controller + """ + cmd = ['crm', 'status'] + try: + status = subprocess.check_output(cmd, stderr=subprocess.STDOUT) + if not isinstance(status, six.text_type): + status = six.text_type(status, "utf-8") + except subprocess.CalledProcessError as ex: + raise CRMDCNotFound(str(ex)) + + current_dc = '' + for line in status.split('\n'): + if line.startswith('Current DC'): + # Current DC: juju-lytrusty-machine-2 (168108163) - partition with quorum + current_dc = line.split(':')[1].split()[0] + if current_dc == get_unit_hostname(): + return True + elif current_dc == 'NONE': + raise CRMDCNotFound('Current DC: NONE') + + return False + + +@retry_on_exception(5, base_delay=2, + exc_type=(CRMResourceNotFound, CRMDCNotFound)) +def is_crm_leader(resource, retry=False): + """ + Returns True if the charm calling this is the elected corosync leader, + as returned by calling the external "crm" command. + + We allow this operation to be retried to avoid the possibility of getting a + false negative. See LP #1396246 for more info. 
+ """ + if resource == DC_RESOURCE_NAME: + return is_crm_dc() + cmd = ['crm', 'resource', 'show', resource] + try: + status = subprocess.check_output(cmd, stderr=subprocess.STDOUT) + if not isinstance(status, six.text_type): + status = six.text_type(status, "utf-8") + except subprocess.CalledProcessError: + status = None + + if status and get_unit_hostname() in status: + return True + + if status and "resource %s is NOT running" % (resource) in status: + raise CRMResourceNotFound("CRM resource %s not found" % (resource)) + + return False + + +def is_leader(resource): + log("is_leader is deprecated. Please consider using is_crm_leader " + "instead.", level=WARNING) + return is_crm_leader(resource) + + +def peer_units(peer_relation="cluster"): + peers = [] + for r_id in (relation_ids(peer_relation) or []): + for unit in (relation_list(r_id) or []): + peers.append(unit) + return peers + + +def peer_ips(peer_relation='cluster', addr_key='private-address'): + '''Return a dict of peers and their private-address''' + peers = {} + for r_id in relation_ids(peer_relation): + for unit in relation_list(r_id): + peers[unit] = relation_get(addr_key, rid=r_id, unit=unit) + return peers + + +def oldest_peer(peers): + """Determines who the oldest peer is by comparing unit numbers.""" + local_unit_no = int(os.getenv('JUJU_UNIT_NAME').split('/')[1]) + for peer in peers: + remote_unit_no = int(peer.split('/')[1]) + if remote_unit_no < local_unit_no: + return False + return True + + +def eligible_leader(resource): + log("eligible_leader is deprecated. Please consider using " + "is_elected_leader instead.", level=WARNING) + return is_elected_leader(resource) + + +def https(): + ''' + Determines whether enough data has been provided in configuration + or relation data to configure HTTPS + . + returns: boolean + ''' + use_https = config_get('use-https') + if use_https and bool_from_string(use_https): + return True + if config_get('ssl_cert') and config_get('ssl_key'): + return True + for r_id in relation_ids('certificates'): + for unit in relation_list(r_id): + ca = relation_get('ca', rid=r_id, unit=unit) + if ca: + return True + for r_id in relation_ids('identity-service'): + for unit in relation_list(r_id): + # TODO - needs fixing for new helper as ssl_cert/key suffixes with CN + rel_state = [ + relation_get('https_keystone', rid=r_id, unit=unit), + relation_get('ca_cert', rid=r_id, unit=unit), + ] + # NOTE: works around (LP: #1203241) + if (None not in rel_state) and ('' not in rel_state): + return True + return False + + +def determine_api_port(public_port, singlenode_mode=False): + ''' + Determine correct API server listening port based on + existence of HTTPS reverse proxy and/or haproxy. + + public_port: int: standard public port for given service + + singlenode_mode: boolean: Shuffle ports when only a single unit is present + + returns: int: the correct listening port for the API service + ''' + i = 0 + if singlenode_mode: + i += 1 + elif len(peer_units()) > 0 or is_clustered(): + i += 1 + if https(): + i += 1 + return public_port - (i * 10) + + +def determine_apache_port(public_port, singlenode_mode=False): + ''' + Description: Determine correct apache listening port based on public IP + + state of the cluster. 
+
+    public_port: int: standard public port for given service
+
+    singlenode_mode: boolean: Shuffle ports when only a single unit is present
+
+    returns: int: the correct listening port for the HAProxy service
+    '''
+    i = 0
+    if singlenode_mode:
+        i += 1
+    elif len(peer_units()) > 0 or is_clustered():
+        i += 1
+    return public_port - (i * 10)
+
+
+determine_apache_port_single = functools.partial(
+    determine_apache_port, singlenode_mode=True)
+
+
+def get_hacluster_config(exclude_keys=None):
+    '''
+    Obtains all relevant configuration from charm configuration required
+    for initiating a relation to hacluster:
+
+        ha-bindiface, ha-mcastport, vip, os-internal-hostname,
+        os-admin-hostname, os-public-hostname, os-access-hostname
+
+    param: exclude_keys: list of setting key(s) to be excluded.
+    returns: dict: A dict containing settings keyed by setting name.
+    raises: HAIncompleteConfig or HAIncorrectConfig if settings are missing
+            or incorrect.
+    '''
+    settings = ['ha-bindiface', 'ha-mcastport', 'vip', 'os-internal-hostname',
+                'os-admin-hostname', 'os-public-hostname',
+                'os-access-hostname']
+    conf = {}
+    for setting in settings:
+        if exclude_keys and setting in exclude_keys:
+            continue
+
+        conf[setting] = config_get(setting)
+
+    if not valid_hacluster_config():
+        raise HAIncorrectConfig('Insufficient or incorrect config data to '
+                                'configure hacluster.')
+    return conf
+
+
+def valid_hacluster_config():
+    '''
+    Check that either vip or dns-ha is set. If dns-ha then one of
+    os-*-hostname must be set.
+
+    Note: ha-bindiface and ha-mcastport both have defaults and will always
+    be set. We only care that either vip or dns-ha is set.
+
+    :returns: boolean: valid config returns true.
+    raises: HAIncorrectConfig if settings conflict.
+    raises: HAIncompleteConfig if settings are missing.
+    '''
+    vip = config_get('vip')
+    dns = config_get('dns-ha')
+    if not (bool(vip) ^ bool(dns)):
+        msg = ('HA: Either vip or dns-ha must be set but not both in order '
+               'to use high availability')
+        status_set('blocked', msg)
+        raise HAIncorrectConfig(msg)
+
+    # If dns-ha then one of os-*-hostname must be set
+    if dns:
+        dns_settings = ['os-internal-hostname', 'os-admin-hostname',
+                        'os-public-hostname', 'os-access-hostname']
+        # At this point it is unknown if one or all of the possible
+        # network spaces are in HA. Validate at least one is set which is
+        # the minimum required.
+        for setting in dns_settings:
+            if config_get(setting):
+                log('DNS HA: At least one hostname is set {}: {}'
+                    ''.format(setting, config_get(setting)),
+                    level=DEBUG)
+                return True
+
+        msg = ('DNS HA: At least one os-*-hostname(s) must be set to use '
+               'DNS HA')
+        status_set('blocked', msg)
+        raise HAIncompleteConfig(msg)
+
+    log('VIP HA: VIP is set {}'.format(vip), level=DEBUG)
+    return True
+
+
+def canonical_url(configs, vip_setting='vip'):
+    '''
+    Returns the correct HTTP URL to this host given the state of HTTPS
+    configuration and hacluster.
+
+    :configs    : OSTemplateRenderer: A config templating object to inspect
+                  for a complete https context.
+
+    :vip_setting: str: Setting in charm config that specifies
+                  VIP address.
+    '''
+    scheme = 'http'
+    if 'https' in configs.complete_contexts():
+        scheme = 'https'
+    if is_clustered():
+        addr = config_get(vip_setting)
+    else:
+        addr = unit_get('private-address')
+    return '%s://%s' % (scheme, addr)
+
+
+def distributed_wait(modulo=None, wait=None, operation_name='operation'):
+    ''' Distribute operations by waiting based on modulo_distribution
+
+    If modulo and/or wait are not set, check config_get for those values.
+    If config values are not set, default to modulo=3 and wait=30.
+
+    :param modulo: int The modulo number creates the group distribution
+    :param wait: int The constant time wait value
+    :param operation_name: string Operation name for status message
+                           i.e. 'restart'
+    :side effect: Calls config_get()
+    :side effect: Calls log()
+    :side effect: Calls status_set()
+    :side effect: Calls time.sleep()
+    '''
+    if modulo is None:
+        modulo = config_get('modulo-nodes') or 3
+    if wait is None:
+        wait = config_get('known-wait') or 30
+    if juju_is_leader():
+        # The leader should never wait
+        calculated_wait = 0
+    else:
+        # non_zero_wait=True guarantees the non-leader who gets modulo 0
+        # will still wait
+        calculated_wait = modulo_distribution(modulo=modulo, wait=wait,
+                                              non_zero_wait=True)
+    msg = "Waiting {} seconds for {} ...".format(calculated_wait,
+                                                 operation_name)
+    log(msg, DEBUG)
+    status_set('maintenance', msg)
+    time.sleep(calculated_wait)
+
+
+def get_managed_services_and_ports(services, external_ports,
+                                   external_services=None,
+                                   port_conv_f=determine_apache_port_single):
+    """Get the services and ports managed by this charm.
+
+    Return only the services and corresponding ports that are managed by this
+    charm. This excludes haproxy when there is a relation with hacluster. This
+    is because this charm passes responsibility for stopping and starting
+    haproxy to hacluster.
+
+    Similarly, if a relation with hacluster exists then the ports returned by
+    this method correspond to those managed by the apache server rather than
+    haproxy.
+
+    :param services: List of services.
+    :type services: List[str]
+    :param external_ports: List of ports managed by external services.
+    :type external_ports: List[int]
+    :param external_services: List of services to be removed if ha relation is
+                              present.
+    :type external_services: List[str]
+    :param port_conv_f: Function to apply to ports to calculate the ports
+                        managed by services controlled by this charm.
+    :type port_conv_f: Callable[[int], int]
+    :returns: A tuple containing a list of services first followed by a list
+              of ports.
+    :rtype: Tuple[List[str], List[int]]
+    """
+    if external_services is None:
+        external_services = ['haproxy']
+    if relation_ids('ha'):
+        for svc in external_services:
+            try:
+                services.remove(svc)
+            except ValueError:
+                pass
+        external_ports = [port_conv_f(p) for p in external_ports]
+    return services, external_ports
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/README.hardening.md b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/README.hardening.md
new file mode 100644
index 0000000000000000000000000000000000000000..91280c03e6d7b5d75b356cd94614fc821abc2644
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/README.hardening.md
@@ -0,0 +1,38 @@
+# Juju charm-helpers hardening library
+
+## Description
+
+This library provides multiple implementations of system and application
+hardening that conform to the standards of http://hardening.io/.
+
+Current implementations include:
+
+ * OS
+ * SSH
+ * MySQL
+ * Apache
+
+## Requirements
+
+* Juju Charms
+
+## Usage
+
+1. Synchronise this library into your charm and add the harden() decorator
+   (from contrib.hardening.harden) to any functions or methods you want to use
+   to trigger hardening of your application/system.
+
+2. Add a config option called 'harden' to your charm config.yaml and set it to
+   a space-delimited list of hardening modules you want to run e.g. "os ssh"
+
+3. Override any config defaults (contrib.hardening.defaults) by adding a file
+   called hardening.yaml to your charm root containing the name(s) of the
+   modules whose settings you want to override at root level and then any
+   settings with overrides e.g.
+
+    os:
+      general:
+        desktop_enable: True
+
+4. Now just run your charm as usual and hardening will be applied each time the
+   hook runs.
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..30a3e94359e94011cd247de7ade76667346e7379
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/__init__.py
@@ -0,0 +1,13 @@
+# Copyright 2016 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/apache/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/apache/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..58bebd846bd6fa648cfab6ab1056ad10d8415453
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/apache/__init__.py
@@ -0,0 +1,17 @@
+# Copyright 2016 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from os import path
+
+TEMPLATES_DIR = path.join(path.dirname(__file__), 'templates')
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/apache/checks/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/apache/checks/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..3bc2ebd4760124e23c128868e098aceac610260f
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/apache/checks/__init__.py
@@ -0,0 +1,29 @@
+# Copyright 2016 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +from charmhelpers.core.hookenv import ( + log, + DEBUG, +) +from charmhelpers.contrib.hardening.apache.checks import config + + +def run_apache_checks(): + log("Starting Apache hardening checks.", level=DEBUG) + checks = config.get_audits() + for check in checks: + log("Running '%s' check" % (check.__class__.__name__), level=DEBUG) + check.ensure_compliance() + + log("Apache hardening checks complete.", level=DEBUG) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/apache/checks/config.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/apache/checks/config.py new file mode 100644 index 0000000000000000000000000000000000000000..341da9eee10f73cbe3d7e7e5cf91b57b4d2a89b4 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/apache/checks/config.py @@ -0,0 +1,104 @@ +# Copyright 2016 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import os +import re +import six +import subprocess + + +from charmhelpers.core.hookenv import ( + log, + INFO, +) +from charmhelpers.contrib.hardening.audits.file import ( + FilePermissionAudit, + DirectoryPermissionAudit, + NoReadWriteForOther, + TemplatedFile, + DeletedFile +) +from charmhelpers.contrib.hardening.audits.apache import DisabledModuleAudit +from charmhelpers.contrib.hardening.apache import TEMPLATES_DIR +from charmhelpers.contrib.hardening import utils + + +def get_audits(): + """Get Apache hardening config audits. 
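+
+    If the apache2 binary is not found on the node, the checks are skipped
+    and an empty list is returned.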
+
+    :returns: list of audits
+    """
+    if subprocess.call(['which', 'apache2'], stdout=subprocess.PIPE) != 0:
+        log("Apache server does not appear to be installed on this node - "
+            "skipping apache hardening", level=INFO)
+        return []
+
+    context = ApacheConfContext()
+    settings = utils.get_settings('apache')
+    audits = [
+        FilePermissionAudit(paths=os.path.join(
+                            settings['common']['apache_dir'], 'apache2.conf'),
+                            user='root', group='root', mode=0o0640),
+
+        TemplatedFile(os.path.join(settings['common']['apache_dir'],
+                                   'mods-available/alias.conf'),
+                      context,
+                      TEMPLATES_DIR,
+                      mode=0o0640,
+                      user='root',
+                      service_actions=[{'service': 'apache2',
+                                        'actions': ['restart']}]),
+
+        TemplatedFile(os.path.join(settings['common']['apache_dir'],
+                                   'conf-enabled/99-hardening.conf'),
+                      context,
+                      TEMPLATES_DIR,
+                      mode=0o0640,
+                      user='root',
+                      service_actions=[{'service': 'apache2',
+                                        'actions': ['restart']}]),
+
+        DirectoryPermissionAudit(settings['common']['apache_dir'],
+                                 user='root',
+                                 group='root',
+                                 mode=0o0750),
+
+        DisabledModuleAudit(settings['hardening']['modules_to_disable']),
+
+        NoReadWriteForOther(settings['common']['apache_dir']),
+
+        DeletedFile(['/var/www/html/index.html'])
+    ]
+
+    return audits
+
+
+class ApacheConfContext(object):
+    """Defines the set of key/value pairs to set in an apache config file.
+
+    This context, when called, will return a dictionary containing the
+    key/value settings to specify in the
+    /etc/apache2/conf-enabled/99-hardening.conf file.
+    """
+    def __call__(self):
+        settings = utils.get_settings('apache')
+        ctxt = settings['hardening']
+
+        out = subprocess.check_output(['apache2', '-v'])
+        if six.PY3:
+            out = out.decode('utf-8')
+        ctxt['apache_version'] = re.search(r'.+version: Apache/(.+?)\s.+',
+                                           out).group(1)
+        ctxt['apache_icondir'] = '/usr/share/apache2/icons/'
+        return ctxt
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/apache/templates/99-hardening.conf b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/apache/templates/99-hardening.conf
new file mode 100644
index 0000000000000000000000000000000000000000..22b68041d50ff753284bbb4b41a21e8f2bd8c18a
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/apache/templates/99-hardening.conf
@@ -0,0 +1,32 @@
+###############################################################################
+# WARNING: This configuration file is maintained by Juju. Local changes may
+#          be overwritten.
+###############################################################################
+
+<Location / >
+  <LimitExcept {{ allowed_http_methods }} >
+    # http://httpd.apache.org/docs/2.4/upgrading.html
+    {% if apache_version > '2.2' -%}
+    Require all granted
+    {% else -%}
+    Order Allow,Deny
+    Deny from all
+    {% endif %}
+  </LimitExcept>
+</Location>
+
+<Directory / >
+  Options -Indexes -FollowSymLinks
+  AllowOverride None
+</Directory>
+
+<Directory /var/www/ >
+  Options -Indexes -FollowSymLinks
+  AllowOverride None
+</Directory>
+
+TraceEnable {{ traceenable }}
+ServerTokens {{ servertokens }}
+
+SSLHonorCipherOrder {{ honor_cipher_order }}
+SSLCipherSuite {{ cipher_suite }}
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/apache/templates/alias.conf b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/apache/templates/alias.conf
new file mode 100644
index 0000000000000000000000000000000000000000..e46a58a30dadbb6ccffa02d82593c63a9cbf52df
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/apache/templates/alias.conf
@@ -0,0 +1,31 @@
+###############################################################################
+# WARNING: This configuration file is maintained by Juju. Local changes may
+#          be overwritten.
+###############################################################################
+<IfModule alias_module>
+    #
+    # Aliases: Add here as many aliases as you need (with no limit). The format is
+    # Alias fakename realname
+    #
+    # Note that if you include a trailing / on fakename then the server will
+    # require it to be present in the URL. So "/icons" isn't aliased in this
+    # example, only "/icons/". If the fakename is slash-terminated, then the
+    # realname must also be slash terminated, and if the fakename omits the
+    # trailing slash, the realname must also omit it.
+    #
+    # We include the /icons/ alias for FancyIndexed directory listings. If
+    # you do not use FancyIndexing, you may comment this out.
+    #
+    Alias /icons/ "{{ apache_icondir }}/"
+
+    <Directory "{{ apache_icondir }}">
+        Options -Indexes -MultiViews -FollowSymLinks
+        AllowOverride None
+{% if apache_version == '2.4' -%}
+        Require all granted
+{% else -%}
+        Order allow,deny
+        Allow from all
+{% endif %}
+    </Directory>
+</IfModule>
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/audits/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/audits/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..6dd5b05fec4ffcfcdb4378a06dfda4e8ac7e8371
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/audits/__init__.py
@@ -0,0 +1,54 @@
+# Copyright 2016 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+class BaseAudit(object):  # NO-QA
+    """Base class for hardening checks.
+
+    The lifecycle of a hardening check is to first check to see if the system
+    is in compliance for the specified check. If it is not in compliance, the
+    check method will return a value which will be supplied to the comply
+    method to bring the system back into compliance.
+ """ + def __init__(self, *args, **kwargs): + self.unless = kwargs.get('unless', None) + super(BaseAudit, self).__init__() + + def ensure_compliance(self): + """Checks to see if the current hardening check is in compliance or + not. + + If the check that is performed is not in compliance, then an exception + should be raised. + """ + pass + + def _take_action(self): + """Determines whether to perform the action or not. + + Checks whether or not an action should be taken. This is determined by + the truthy value for the unless parameter. If unless is a callback + method, it will be invoked with no parameters in order to determine + whether or not the action should be taken. Otherwise, the truthy value + of the unless attribute will determine if the action should be + performed. + """ + # Do the action if there isn't an unless override. + if self.unless is None: + return True + + # Invoke the callback if there is one. + if hasattr(self.unless, '__call__'): + return not self.unless() + + return not self.unless diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/audits/apache.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/audits/apache.py new file mode 100644 index 0000000000000000000000000000000000000000..04825f5ada0c5b0bb9fc0955baa9a10fa199184d --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/audits/apache.py @@ -0,0 +1,100 @@ +# Copyright 2016 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import re +import subprocess + +import six + +from charmhelpers.core.hookenv import ( + log, + INFO, + ERROR, +) + +from charmhelpers.contrib.hardening.audits import BaseAudit + + +class DisabledModuleAudit(BaseAudit): + """Audits Apache2 modules. + + Determines if the apache2 modules are enabled. If the modules are enabled + then they are removed in the ensure_compliance. + """ + def __init__(self, modules): + if modules is None: + self.modules = [] + elif isinstance(modules, six.string_types): + self.modules = [modules] + else: + self.modules = modules + + def ensure_compliance(self): + """Ensures that the modules are not loaded.""" + if not self.modules: + return + + try: + loaded_modules = self._get_loaded_modules() + non_compliant_modules = [] + for module in self.modules: + if module in loaded_modules: + log("Module '%s' is enabled but should not be." % + (module), level=INFO) + non_compliant_modules.append(module) + + if len(non_compliant_modules) == 0: + return + + for module in non_compliant_modules: + self._disable_module(module) + self._restart_apache() + except subprocess.CalledProcessError as e: + log('Error occurred auditing apache module compliance. ' + 'This may have been already reported. 
' + 'Output is: %s' % e.output, level=ERROR) + + @staticmethod + def _get_loaded_modules(): + """Returns the modules which are enabled in Apache.""" + output = subprocess.check_output(['apache2ctl', '-M']) + if six.PY3: + output = output.decode('utf-8') + modules = [] + for line in output.splitlines(): + # Each line of the enabled module output looks like: + # module_name (static|shared) + # Plus a header line at the top of the output which is stripped + # out by the regex. + matcher = re.search(r'^ (\S*)_module (\S*)', line) + if matcher: + modules.append(matcher.group(1)) + return modules + + @staticmethod + def _disable_module(module): + """Disables the specified module in Apache.""" + try: + subprocess.check_call(['a2dismod', module]) + except subprocess.CalledProcessError as e: + # Note: catch error here to allow the attempt of disabling + # multiple modules in one go rather than failing after the + # first module fails. + log('Error occurred disabling module %s. ' + 'Output is: %s' % (module, e.output), level=ERROR) + + @staticmethod + def _restart_apache(): + """Restarts the apache process""" + subprocess.check_output(['service', 'apache2', 'restart']) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/audits/apt.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/audits/apt.py new file mode 100644 index 0000000000000000000000000000000000000000..cad7bf7376d6f22ce8feccd843b070c399887aa9 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/audits/apt.py @@ -0,0 +1,104 @@ +# Copyright 2016 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +from __future__ import absolute_import # required for external apt import +from six import string_types + +from charmhelpers.fetch import ( + apt_cache, + apt_purge +) +from charmhelpers.core.hookenv import ( + log, + DEBUG, + WARNING, +) +from charmhelpers.contrib.hardening.audits import BaseAudit +from charmhelpers.fetch import ubuntu_apt_pkg as apt_pkg + + +class AptConfig(BaseAudit): + + def __init__(self, config, **kwargs): + self.config = config + + def verify_config(self): + apt_pkg.init() + for cfg in self.config: + value = apt_pkg.config.get(cfg['key'], cfg.get('default', '')) + if value and value != cfg['expected']: + log("APT config '%s' has unexpected value '%s' " + "(expected='%s')" % + (cfg['key'], value, cfg['expected']), level=WARNING) + + def ensure_compliance(self): + self.verify_config() + + +class RestrictedPackages(BaseAudit): + """Class used to audit restricted packages on the system.""" + + def __init__(self, pkgs, **kwargs): + super(RestrictedPackages, self).__init__(**kwargs) + if isinstance(pkgs, string_types) or not hasattr(pkgs, '__iter__'): + self.pkgs = pkgs.split() + else: + self.pkgs = pkgs + + def ensure_compliance(self): + cache = apt_cache() + + for p in self.pkgs: + if p not in cache: + continue + + pkg = cache[p] + if not self.is_virtual_package(pkg): + if not pkg.current_ver: + log("Package '%s' is not installed." % pkg.name, + level=DEBUG) + continue + else: + log("Restricted package '%s' is installed" % pkg.name, + level=WARNING) + self.delete_package(cache, pkg) + else: + log("Checking restricted virtual package '%s' provides" % + pkg.name, level=DEBUG) + self.delete_package(cache, pkg) + + def delete_package(self, cache, pkg): + """Deletes the package from the system. + + Deletes the package form the system, properly handling virtual + packages. + + :param cache: the apt cache + :param pkg: the package to remove + """ + if self.is_virtual_package(pkg): + log("Package '%s' appears to be virtual - purging provides" % + pkg.name, level=DEBUG) + for _p in pkg.provides_list: + self.delete_package(cache, _p[2].parent_pkg) + elif not pkg.current_ver: + log("Package '%s' not installed" % pkg.name, level=DEBUG) + return + else: + log("Purging package '%s'" % pkg.name, level=DEBUG) + apt_purge(pkg.name) + + def is_virtual_package(self, pkg): + return (pkg.get('has_provides', False) and + not pkg.get('has_versions', False)) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/audits/file.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/audits/file.py new file mode 100644 index 0000000000000000000000000000000000000000..257c6351a0b0d244273013faef913f52349f2486 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/audits/file.py @@ -0,0 +1,550 @@ +# Copyright 2016 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+
+import grp
+import os
+import pwd
+import re
+
+from subprocess import (
+    CalledProcessError,
+    check_output,
+    check_call,
+)
+from traceback import format_exc
+from six import string_types
+from stat import (
+    S_ISGID,
+    S_ISUID
+)
+
+from charmhelpers.core.hookenv import (
+    log,
+    DEBUG,
+    INFO,
+    WARNING,
+    ERROR,
+)
+from charmhelpers.core import unitdata
+from charmhelpers.core.host import file_hash
+from charmhelpers.contrib.hardening.audits import BaseAudit
+from charmhelpers.contrib.hardening.templating import (
+    get_template_path,
+    render_and_write,
+)
+from charmhelpers.contrib.hardening import utils
+
+
+class BaseFileAudit(BaseAudit):
+    """Base class for file audits.
+
+    Provides API stubs for the compliance check flow that must be implemented
+    by any class that extends this one.
+    """
+
+    def __init__(self, paths, always_comply=False, *args, **kwargs):
+        """
+        :param paths: string path or list of paths of files to which we want
+                      to apply compliance checks.
+        :param always_comply: if true, compliance criteria are always applied;
+                              otherwise compliance is skipped for non-existent
+                              paths.
+        """
+        super(BaseFileAudit, self).__init__(*args, **kwargs)
+        self.always_comply = always_comply
+        if isinstance(paths, string_types) or not hasattr(paths, '__iter__'):
+            self.paths = [paths]
+        else:
+            self.paths = paths
+
+    def ensure_compliance(self):
+        """Ensure that all registered files comply with the registered
+        criteria.
+        """
+        for p in self.paths:
+            if os.path.exists(p):
+                if self.is_compliant(p):
+                    continue
+
+                log('File %s is not in compliance.' % p, level=INFO)
+            else:
+                if not self.always_comply:
+                    log("Non-existent path '%s' - skipping compliance check"
+                        % (p), level=INFO)
+                    continue
+
+            if self._take_action():
+                log("Applying compliance criteria to '%s'" % (p), level=INFO)
+                self.comply(p)
+
+    def is_compliant(self, path):
+        """Audits the path to see if it is in compliance.
+
+        :param path: the path to the file that should be checked.
+        """
+        raise NotImplementedError
+
+    def comply(self, path):
+        """Enforces the compliance of a path.
+
+        :param path: the path to the file that should be enforced.
+        """
+        raise NotImplementedError
+
+    @classmethod
+    def _get_stat(cls, path):
+        """Returns the POSIX st_stat information for the specified file path.
+
+        :param path: the path to get the st_stat information for.
+        :returns: an st_stat object for the path or None if the path doesn't
+                  exist.
+        """
+        return os.stat(path)
+
+
+class FilePermissionAudit(BaseFileAudit):
+    """Implements an audit for file permissions and ownership for a user.
+
+    This class implements functionality that ensures that a specific
+    user/group will own the file(s) specified and that the permissions
+    specified are applied properly to the file.
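+
+    e.g. FilePermissionAudit('/etc/mysql/my.cnf', user='root', group='root',
+    mode=0o0600) would enforce root:root ownership and 0600 permissions on
+    that path (an illustrative example; any path/owner/mode can be given).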
+ """ + def __init__(self, paths, user, group=None, mode=0o600, **kwargs): + self.user = user + self.group = group + self.mode = mode + super(FilePermissionAudit, self).__init__(paths, user, group, mode, + **kwargs) + + @property + def user(self): + return self._user + + @user.setter + def user(self, name): + try: + user = pwd.getpwnam(name) + except KeyError: + log('Unknown user %s' % name, level=ERROR) + user = None + self._user = user + + @property + def group(self): + return self._group + + @group.setter + def group(self, name): + try: + group = None + if name: + group = grp.getgrnam(name) + else: + group = grp.getgrgid(self.user.pw_gid) + except KeyError: + log('Unknown group %s' % name, level=ERROR) + self._group = group + + def is_compliant(self, path): + """Checks if the path is in compliance. + + Used to determine if the path specified meets the necessary + requirements to be in compliance with the check itself. + + :param path: the file path to check + :returns: True if the path is compliant, False otherwise. + """ + stat = self._get_stat(path) + user = self.user + group = self.group + + compliant = True + if stat.st_uid != user.pw_uid or stat.st_gid != group.gr_gid: + log('File %s is not owned by %s:%s.' % (path, user.pw_name, + group.gr_name), + level=INFO) + compliant = False + + # POSIX refers to the st_mode bits as corresponding to both the + # file type and file permission bits, where the least significant 12 + # bits (o7777) are the suid (11), sgid (10), sticky bits (9), and the + # file permission bits (8-0) + perms = stat.st_mode & 0o7777 + if perms != self.mode: + log('File %s has incorrect permissions, currently set to %s' % + (path, oct(stat.st_mode & 0o7777)), level=INFO) + compliant = False + + return compliant + + def comply(self, path): + """Issues a chown and chmod to the file paths specified.""" + utils.ensure_permissions(path, self.user.pw_name, self.group.gr_name, + self.mode) + + +class DirectoryPermissionAudit(FilePermissionAudit): + """Performs a permission check for the specified directory path.""" + + def __init__(self, paths, user, group=None, mode=0o600, + recursive=True, **kwargs): + super(DirectoryPermissionAudit, self).__init__(paths, user, group, + mode, **kwargs) + self.recursive = recursive + + def is_compliant(self, path): + """Checks if the directory is compliant. + + Used to determine if the path specified and all of its children + directories are in compliance with the check itself. + + :param path: the directory path to check + :returns: True if the directory tree is compliant, otherwise False. + """ + if not os.path.isdir(path): + log('Path specified %s is not a directory.' % path, level=ERROR) + raise ValueError("%s is not a directory." 
+                             % path)
+
+        if not self.recursive:
+            return super(DirectoryPermissionAudit, self).is_compliant(path)
+
+        compliant = True
+        for root, dirs, _ in os.walk(path):
+            if len(dirs) > 0:
+                continue
+
+            if not super(DirectoryPermissionAudit, self).is_compliant(root):
+                compliant = False
+                continue
+
+        return compliant
+
+    def comply(self, path):
+        for root, dirs, _ in os.walk(path):
+            if len(dirs) > 0:
+                super(DirectoryPermissionAudit, self).comply(root)
+
+
+class ReadOnly(BaseFileAudit):
+    """Audits that files and folders are read only."""
+    def __init__(self, paths, *args, **kwargs):
+        super(ReadOnly, self).__init__(paths=paths, *args, **kwargs)
+
+    def is_compliant(self, path):
+        try:
+            output = check_output(['find', path, '-perm', '-go+w',
+                                   '-type', 'f']).strip()
+
+            # The find above will find any files which have permission sets
+            # which allow too broad of write access. As such, the path is
+            # compliant if there is no output.
+            if output:
+                return False
+
+            return True
+        except CalledProcessError as e:
+            log('Error occurred while finding writable files for %s. '
+                'Error information is: command %s failed with returncode '
+                '%d and output %s.\n%s' % (path, e.cmd, e.returncode, e.output,
+                                           format_exc(e)), level=ERROR)
+            return False
+
+    def comply(self, path):
+        try:
+            check_output(['chmod', 'go-w', '-R', path])
+        except CalledProcessError as e:
+            log('Error occurred removing writeable permissions for %s. '
+                'Error information is: command %s failed with returncode '
+                '%d and output %s.\n%s' % (path, e.cmd, e.returncode, e.output,
+                                           format_exc(e)), level=ERROR)
+
+
+class NoReadWriteForOther(BaseFileAudit):
+    """Ensures that the files found under the base path are not readable or
+    writable by anyone other than the owner or the group.
+    """
+    def __init__(self, paths):
+        super(NoReadWriteForOther, self).__init__(paths)
+
+    def is_compliant(self, path):
+        try:
+            cmd = ['find', path, '-perm', '-o+r', '-type', 'f', '-o',
+                   '-perm', '-o+w', '-type', 'f']
+            output = check_output(cmd).strip()
+
+            # The find above here will find any files which have read or
+            # write permissions for other, meaning there is too broad of
+            # access to read/write the file. As such, the path is compliant
+            # if there's no output.
+            if output:
+                return False
+
+            return True
+        except CalledProcessError as e:
+            log('Error occurred while finding files which are readable or '
+                'writable to the world in %s. '
+                'Command output is: %s.' % (path, e.output), level=ERROR)
+
+    def comply(self, path):
+        try:
+            check_output(['chmod', '-R', 'o-rw', path])
+        except CalledProcessError as e:
+            log('Error occurred attempting to change modes of files under '
+                'path %s. Output of command is: %s' % (path, e.output),
+                level=ERROR)
+
+
+class NoSUIDSGIDAudit(BaseFileAudit):
+    """Audits that specified files do not have SUID/SGID bits set."""
+    def __init__(self, paths, *args, **kwargs):
+        super(NoSUIDSGIDAudit, self).__init__(paths=paths, *args, **kwargs)
+
+    def is_compliant(self, path):
+        stat = self._get_stat(path)
+        if (stat.st_mode & (S_ISGID | S_ISUID)) != 0:
+            return False
+
+        return True
+
+    def comply(self, path):
+        try:
+            log('Removing suid/sgid from %s.' % path, level=DEBUG)
+            check_output(['chmod', '-s', path])
+        except CalledProcessError as e:
+            log('Error occurred removing suid/sgid from %s. '
+ 'Error information is: command %s failed with returncode ' + '%d and output %s.\n%s' % (path, e.cmd, e.returncode, e.output, + format_exc(e)), level=ERROR) + + +class TemplatedFile(BaseFileAudit): + """The TemplatedFileAudit audits the contents of a templated file. + + This audit renders a file from a template, sets the appropriate file + permissions, then generates a hashsum with which to check the content + changed. + """ + def __init__(self, path, context, template_dir, mode, user='root', + group='root', service_actions=None, **kwargs): + self.context = context + self.user = user + self.group = group + self.mode = mode + self.template_dir = template_dir + self.service_actions = service_actions + super(TemplatedFile, self).__init__(paths=path, always_comply=True, + **kwargs) + + def is_compliant(self, path): + """Determines if the templated file is compliant. + + A templated file is only compliant if it has not changed (as + determined by its sha256 hashsum) AND its file permissions are set + appropriately. + + :param path: the path to check compliance. + """ + same_templates = self.templates_match(path) + same_content = self.contents_match(path) + same_permissions = self.permissions_match(path) + + if same_content and same_permissions and same_templates: + return True + + return False + + def run_service_actions(self): + """Run any actions on services requested.""" + if not self.service_actions: + return + + for svc_action in self.service_actions: + name = svc_action['service'] + actions = svc_action['actions'] + log("Running service '%s' actions '%s'" % (name, actions), + level=DEBUG) + for action in actions: + cmd = ['service', name, action] + try: + check_call(cmd) + except CalledProcessError as exc: + log("Service name='%s' action='%s' failed - %s" % + (name, action, exc), level=WARNING) + + def comply(self, path): + """Ensures the contents and the permissions of the file. + + :param path: the path to correct + """ + dirname = os.path.dirname(path) + if not os.path.exists(dirname): + os.makedirs(dirname) + + self.pre_write() + render_and_write(self.template_dir, path, self.context()) + utils.ensure_permissions(path, self.user, self.group, self.mode) + self.run_service_actions() + self.save_checksum(path) + self.post_write() + + def pre_write(self): + """Invoked prior to writing the template.""" + pass + + def post_write(self): + """Invoked after writing the template.""" + pass + + def templates_match(self, path): + """Determines if the template files are the same. + + The template file equality is determined by the hashsum of the + template files themselves. If there is no hashsum, then the content + cannot be sure to be the same so treat it as if they changed. + Otherwise, return whether or not the hashsums are the same. + + :param path: the path to check + :returns: boolean + """ + template_path = get_template_path(self.template_dir, path) + key = 'hardening:template:%s' % template_path + template_checksum = file_hash(template_path) + kv = unitdata.kv() + stored_tmplt_checksum = kv.get(key) + if not stored_tmplt_checksum: + kv.set(key, template_checksum) + kv.flush() + log('Saved template checksum for %s.' % template_path, + level=DEBUG) + # Since we don't have a template checksum, then assume it doesn't + # match and return that the template is different. + return False + elif stored_tmplt_checksum != template_checksum: + kv.set(key, template_checksum) + kv.flush() + log('Updated template checksum for %s.' 
+                % template_path, level=DEBUG)
+            return False
+
+        # Here the template hasn't changed based upon the calculated
+        # checksum of the template and what was previously stored.
+        return True
+
+    def contents_match(self, path):
+        """Determines if the file content is the same.
+
+        This is determined by comparing the hashsum of the file contents and
+        the saved hashsum. If there is no hashsum, then the content cannot be
+        known to be the same, so treat the files as if they are not the same.
+        Otherwise, return True if the hashsums are the same, False if they
+        are not the same.
+
+        :param path: the file to check.
+        """
+        checksum = file_hash(path)
+
+        kv = unitdata.kv()
+        stored_checksum = kv.get('hardening:%s' % path)
+        if not stored_checksum:
+            # If the checksum hasn't been generated, return False to ensure
+            # the file is written and the checksum stored.
+            log('Checksum for %s has not been calculated.' % path, level=DEBUG)
+            return False
+        elif stored_checksum != checksum:
+            log('Checksum mismatch for %s.' % path, level=DEBUG)
+            return False
+
+        return True
+
+    def permissions_match(self, path):
+        """Determines if the file owner and permissions match.
+
+        :param path: the path to check.
+        """
+        audit = FilePermissionAudit(path, self.user, self.group, self.mode)
+        return audit.is_compliant(path)
+
+    def save_checksum(self, path):
+        """Calculates and saves the checksum for the path specified.
+
+        :param path: the path of the file to save the checksum.
+        """
+        checksum = file_hash(path)
+        kv = unitdata.kv()
+        kv.set('hardening:%s' % path, checksum)
+        kv.flush()
+
+
+class DeletedFile(BaseFileAudit):
+    """Audit to ensure that a file is deleted."""
+    def __init__(self, paths):
+        super(DeletedFile, self).__init__(paths)
+
+    def is_compliant(self, path):
+        return not os.path.exists(path)
+
+    def comply(self, path):
+        os.remove(path)
+
+
+class FileContentAudit(BaseFileAudit):
+    """Audit the contents of a file."""
+    def __init__(self, paths, cases, **kwargs):
+        # Cases we expect to pass
+        self.pass_cases = cases.get('pass', [])
+        # Cases we expect to fail
+        self.fail_cases = cases.get('fail', [])
+        super(FileContentAudit, self).__init__(paths, **kwargs)
+
+    def is_compliant(self, path):
+        """
+        Given a set of content matching cases i.e. tuple(regex, bool) where
+        the bool value denotes whether or not the regex is expected to match,
+        check that all cases match as expected with the contents of the file.
+        Cases can be expected to pass or fail.
+
+        :param path: Path of file to check.
+        :returns: Boolean value representing whether or not all cases are
+                  found to be compliant.
+        """
+        log("Auditing contents of file '%s'" % (path), level=DEBUG)
+        with open(path, 'r') as fd:
+            contents = fd.read()
+
+        matches = 0
+        for pattern in self.pass_cases:
+            key = re.compile(pattern, flags=re.MULTILINE)
+            results = re.search(key, contents)
+            if results:
+                matches += 1
+            else:
+                log("Pattern '%s' was expected to pass but instead it failed"
+                    % (pattern), level=WARNING)
+
+        for pattern in self.fail_cases:
+            key = re.compile(pattern, flags=re.MULTILINE)
+            results = re.search(key, contents)
+            if not results:
+                matches += 1
+            else:
+                log("Pattern '%s' was expected to fail but instead it passed"
+                    % (pattern), level=WARNING)
+
+        total = len(self.pass_cases) + len(self.fail_cases)
+        log("Checked %s cases and %s passed" % (total, matches), level=DEBUG)
+        return matches == total
+
+    def comply(self, *args, **kwargs):
+        """NOOP since we just issue warnings. This is to avoid the
+        NotImplementedError.
+ """ + log("Not applying any compliance criteria, only checks.", level=INFO) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/defaults/apache.yaml b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/defaults/apache.yaml new file mode 100644 index 0000000000000000000000000000000000000000..0f940d4cfa85ca7051dd60a4805d84bb6aebed6d --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/defaults/apache.yaml @@ -0,0 +1,16 @@ +# NOTE: this file contains the default configuration for the 'apache' hardening +# code. If you want to override any settings you must add them to a file +# called hardening.yaml in the root directory of your charm using the +# name 'apache' as the root key followed by any of the following with new +# values. + +common: + apache_dir: '/etc/apache2' + +hardening: + traceenable: 'off' + allowed_http_methods: "GET POST" + modules_to_disable: [ cgi, cgid ] + servertokens: 'Prod' + honor_cipher_order: 'on' + cipher_suite: 'ALL:+MEDIUM:+HIGH:!LOW:!MD5:!RC4:!eNULL:!aNULL:!3DES' diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/defaults/apache.yaml.schema b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/defaults/apache.yaml.schema new file mode 100644 index 0000000000000000000000000000000000000000..c112137cb45c4b63cb05384145b3edf8c443e2b8 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/defaults/apache.yaml.schema @@ -0,0 +1,12 @@ +# NOTE: this schema must contain all valid keys from it's associated defaults +# file. It is used to validate user-provided overrides. +common: + apache_dir: + traceenable: + +hardening: + allowed_http_methods: + modules_to_disable: + servertokens: + honor_cipher_order: + cipher_suite: diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/defaults/mysql.yaml b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/defaults/mysql.yaml new file mode 100644 index 0000000000000000000000000000000000000000..682d22bf3ded32eb1c8d6188486ec4468d9ec457 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/defaults/mysql.yaml @@ -0,0 +1,38 @@ +# NOTE: this file contains the default configuration for the 'mysql' hardening +# code. If you want to override any settings you must add them to a file +# called hardening.yaml in the root directory of your charm using the +# name 'mysql' as the root key followed by any of the following with new +# values. 
+
+hardening:
+    mysql-conf: /etc/mysql/my.cnf
+    hardening-conf: /etc/mysql/conf.d/hardening.cnf
+
+security:
+    # @see http://www.symantec.com/connect/articles/securing-mysql-step-step
+    # @see http://dev.mysql.com/doc/refman/5.7/en/server-options.html#option_mysqld_chroot
+    chroot: None
+
+    # @see http://dev.mysql.com/doc/refman/5.7/en/server-options.html#option_mysqld_safe-user-create
+    safe-user-create: 1
+
+    # @see http://dev.mysql.com/doc/refman/5.7/en/server-options.html#option_mysqld_secure-auth
+    secure-auth: 1
+
+    # @see http://dev.mysql.com/doc/refman/5.7/en/server-options.html#option_mysqld_symbolic-links
+    skip-symbolic-links: 1
+
+    # @see http://dev.mysql.com/doc/refman/5.7/en/server-options.html#option_mysqld_skip-show-database
+    skip-show-database: True
+
+    # @see http://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_local_infile
+    local-infile: 0
+
+    # @see https://dev.mysql.com/doc/refman/5.7/en/server-options.html#option_mysqld_allow-suspicious-udfs
+    allow-suspicious-udfs: 0
+
+    # @see https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_automatic_sp_privileges
+    automatic-sp-privileges: 0
+
+    # @see https://dev.mysql.com/doc/refman/5.7/en/server-options.html#option_mysqld_secure-file-priv
+    secure-file-priv: /tmp
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/defaults/mysql.yaml.schema b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/defaults/mysql.yaml.schema
new file mode 100644
index 0000000000000000000000000000000000000000..2edf325c311c6fbb062a072083b4d12cebc3d9c1
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/defaults/mysql.yaml.schema
@@ -0,0 +1,15 @@
+# NOTE: this schema must contain all valid keys from its associated defaults
+#       file. It is used to validate user-provided overrides.
+hardening:
+    mysql-conf:
+    hardening-conf:
+security:
+    chroot:
+    safe-user-create:
+    secure-auth:
+    skip-symbolic-links:
+    skip-show-database:
+    local-infile:
+    allow-suspicious-udfs:
+    automatic-sp-privileges:
+    secure-file-priv:
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/defaults/os.yaml b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/defaults/os.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..9a8627b5ed2803828e1e4d78260c6b5f90cae659
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/defaults/os.yaml
@@ -0,0 +1,68 @@
+# NOTE: this file contains the default configuration for the 'os' hardening
+#       code. If you want to override any settings you must add them to a file
+#       called hardening.yaml in the root directory of your charm using the
+#       name 'os' as the root key followed by any of the following with new
+#       values.
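+#
+# (Overrides are validated against the keys listed in the companion
+# os.yaml.schema file; a key that does not appear in the schema is presumably
+# rejected rather than silently applied.)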
+
+general:
+    desktop_enable: False # (type:boolean)
+
+environment:
+    extra_user_paths: []
+    umask: 027
+    root_path: /
+
+auth:
+    pw_max_age: 60
+    # discourage password cycling
+    pw_min_age: 7
+    retries: 5
+    lockout_time: 600
+    timeout: 60
+    allow_homeless: False # (type:boolean)
+    pam_passwdqc_enable: True # (type:boolean)
+    pam_passwdqc_options: 'min=disabled,disabled,16,12,8'
+    root_ttys:
+        console
+        tty1
+        tty2
+        tty3
+        tty4
+        tty5
+        tty6
+    uid_min: 1000
+    gid_min: 1000
+    sys_uid_min: 100
+    sys_uid_max: 999
+    sys_gid_min: 100
+    sys_gid_max: 999
+    chfn_restrict:
+
+security:
+    users_allow: []
+    suid_sgid_enforce: True # (type:boolean)
+    # user-defined blacklist and whitelist
+    suid_sgid_blacklist: []
+    suid_sgid_whitelist: []
+    # if this is True, remove any suid/sgid bits from files that were not in the whitelist
+    suid_sgid_dry_run_on_unknown: False # (type:boolean)
+    suid_sgid_remove_from_unknown: False # (type:boolean)
+    # remove packages with known issues
+    packages_clean: True # (type:boolean)
+    packages_list:
+        xinetd
+        inetd
+        ypserv
+        telnet-server
+        rsh-server
+        rsync
+    kernel_enable_module_loading: True # (type:boolean)
+    kernel_enable_core_dump: False # (type:boolean)
+    ssh_tmout: 300
+
+sysctl:
+    kernel_secure_sysrq: 244 # 4 + 16 + 32 + 64 + 128
+    kernel_enable_sysrq: False # (type:boolean)
+    forwarding: False # (type:boolean)
+    ipv6_enable: False # (type:boolean)
+    arp_restricted: True # (type:boolean)
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/defaults/os.yaml.schema b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/defaults/os.yaml.schema
new file mode 100644
index 0000000000000000000000000000000000000000..cc3b9c206eae56cbe68826cb76748e2deb9483e1
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/defaults/os.yaml.schema
@@ -0,0 +1,43 @@
+# NOTE: this schema must contain all valid keys from its associated defaults
+#       file. It is used to validate user-provided overrides.
+general:
+    desktop_enable:
+environment:
+    extra_user_paths:
+    umask:
+    root_path:
+auth:
+    pw_max_age:
+    pw_min_age:
+    retries:
+    lockout_time:
+    timeout:
+    allow_homeless:
+    pam_passwdqc_enable:
+    pam_passwdqc_options:
+    root_ttys:
+    uid_min:
+    gid_min:
+    sys_uid_min:
+    sys_uid_max:
+    sys_gid_min:
+    sys_gid_max:
+    chfn_restrict:
+security:
+    users_allow:
+    suid_sgid_enforce:
+    suid_sgid_blacklist:
+    suid_sgid_whitelist:
+    suid_sgid_dry_run_on_unknown:
+    suid_sgid_remove_from_unknown:
+    packages_clean:
+    packages_list:
+    kernel_enable_module_loading:
+    kernel_enable_core_dump:
+    ssh_tmout:
+sysctl:
+    kernel_secure_sysrq:
+    kernel_enable_sysrq:
+    forwarding:
+    ipv6_enable:
+    arp_restricted:
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/defaults/ssh.yaml b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/defaults/ssh.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..cd529bcae1ec00fef2e969f43dc3cf530b46ef9a
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/defaults/ssh.yaml
@@ -0,0 +1,49 @@
+# NOTE: this file contains the default configuration for the 'ssh' hardening
+#       code. If you want to override any settings you must add them to a file
+#       called hardening.yaml in the root directory of your charm using the
+#       name 'ssh' as the root key followed by any of the following with new
+#       values.
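+#
+# (The "# (type:boolean)" annotations used below appear to mark the expected
+# type of a value so that user-supplied overrides can be validated or coerced
+# accordingly.)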
+
+common:
+    service_name: 'ssh'
+    network_ipv6_enable: False # (type:boolean)
+    ports: [22]
+    remote_hosts: []
+
+client:
+    package: 'openssh-client'
+    cbc_required: False # (type:boolean)
+    weak_hmac: False # (type:boolean)
+    weak_kex: False # (type:boolean)
+    roaming: False
+    password_authentication: 'no'
+
+server:
+    host_key_files: ['/etc/ssh/ssh_host_rsa_key', '/etc/ssh/ssh_host_dsa_key',
+                     '/etc/ssh/ssh_host_ecdsa_key']
+    cbc_required: False # (type:boolean)
+    weak_hmac: False # (type:boolean)
+    weak_kex: False # (type:boolean)
+    allow_root_with_key: False # (type:boolean)
+    allow_tcp_forwarding: 'no'
+    allow_agent_forwarding: 'no'
+    allow_x11_forwarding: 'no'
+    use_privilege_separation: 'sandbox'
+    listen_to: ['0.0.0.0']
+    use_pam: 'no'
+    package: 'openssh-server'
+    password_authentication: 'no'
+    alive_interval: '600'
+    alive_count: '3'
+    sftp_enable: False # (type:boolean)
+    sftp_group: 'sftponly'
+    sftp_chroot: '/home/%u'
+    deny_users: []
+    allow_users: []
+    deny_groups: []
+    allow_groups: []
+    print_motd: 'no'
+    print_last_log: 'no'
+    use_dns: 'no'
+    max_auth_tries: 2
+    max_sessions: 10
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/defaults/ssh.yaml.schema b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/defaults/ssh.yaml.schema
new file mode 100644
index 0000000000000000000000000000000000000000..d05e054bc234015206bb1195152fa9ffd6a33151
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/defaults/ssh.yaml.schema
@@ -0,0 +1,42 @@
+# NOTE: this schema must contain all valid keys from its associated defaults
+#       file. It is used to validate user-provided overrides.
+common:
+    service_name:
+    network_ipv6_enable:
+    ports:
+    remote_hosts:
+client:
+    package:
+    cbc_required:
+    weak_hmac:
+    weak_kex:
+    roaming:
+    password_authentication:
+server:
+    host_key_files:
+    cbc_required:
+    weak_hmac:
+    weak_kex:
+    allow_root_with_key:
+    allow_tcp_forwarding:
+    allow_agent_forwarding:
+    allow_x11_forwarding:
+    use_privilege_separation:
+    listen_to:
+    use_pam:
+    package:
+    password_authentication:
+    alive_interval:
+    alive_count:
+    sftp_enable:
+    sftp_group:
+    sftp_chroot:
+    deny_users:
+    allow_users:
+    deny_groups:
+    allow_groups:
+    print_motd:
+    print_last_log:
+    use_dns:
+    max_auth_tries:
+    max_sessions:
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/harden.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/harden.py
new file mode 100644
index 0000000000000000000000000000000000000000..63f21b9c9855065da3be875c01a2c94db7df47b4
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/harden.py
@@ -0,0 +1,96 @@
+# Copyright 2016 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
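+
+# Illustrative usage (a hypothetical charm hook, not part of this module):
+# with the charm config option 'harden' set to e.g. "os ssh", decorating a
+# hook runs the corresponding hardening modules before the hook body:
+#
+#     from charmhelpers.contrib.hardening.harden import harden
+#
+#     @harden()
+#     def config_changed():
+#         ...  # the os and ssh hardening checks have already run here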
+
+import six
+
+from collections import OrderedDict
+
+from charmhelpers.core.hookenv import (
+    config,
+    log,
+    DEBUG,
+    WARNING,
+)
+from charmhelpers.contrib.hardening.host.checks import run_os_checks
+from charmhelpers.contrib.hardening.ssh.checks import run_ssh_checks
+from charmhelpers.contrib.hardening.mysql.checks import run_mysql_checks
+from charmhelpers.contrib.hardening.apache.checks import run_apache_checks
+
+_DISABLE_HARDENING_FOR_UNIT_TEST = False
+
+
+def harden(overrides=None):
+    """Hardening decorator.
+
+    This is the main entry point for running the hardening stack. In order to
+    run modules of the stack you must add this decorator to charm hook(s) and
+    ensure that your charm config.yaml contains the 'harden' option set to
+    one or more of the supported modules. Setting these will cause the
+    corresponding hardening code to be run when the hook fires.
+
+    This decorator can and should be applied to more than one hook or function
+    such that hardening modules are called multiple times. This is because
+    subsequent calls will perform auditing checks that will report any changes
+    to resources hardened by the first run (and possibly perform compliance
+    actions as a result of any detected infractions).
+
+    :param overrides: Optional list of stack modules used to override those
+                      provided with 'harden' config.
+    :returns: Returns value returned by decorated function once executed.
+    """
+    if overrides is None:
+        overrides = []
+
+    def _harden_inner1(f):
+        # As this has to be py2.7 compat, we can't use nonlocal. Use a trick
+        # to capture the dictionary that can then be updated.
+        _logged = {'done': False}
+
+        def _harden_inner2(*args, **kwargs):
+            # knock out hardening via a config var; normally it won't get
+            # disabled.
+            if _DISABLE_HARDENING_FOR_UNIT_TEST:
+                return f(*args, **kwargs)
+            if not _logged['done']:
+                log("Hardening function '%s'" % (f.__name__), level=DEBUG)
+                _logged['done'] = True
+            RUN_CATALOG = OrderedDict([('os', run_os_checks),
+                                       ('ssh', run_ssh_checks),
+                                       ('mysql', run_mysql_checks),
+                                       ('apache', run_apache_checks)])
+
+            enabled = overrides[:] or (config("harden") or "").split()
+            if enabled:
+                modules_to_run = []
+                # modules will always be performed in the following order
+                for module, func in six.iteritems(RUN_CATALOG):
+                    if module in enabled:
+                        enabled.remove(module)
+                        modules_to_run.append(func)
+
+                if enabled:
+                    log("Unknown hardening modules '%s' - ignoring" %
+                        (', '.join(enabled)), level=WARNING)
+
+                for hardener in modules_to_run:
+                    log("Executing hardening module '%s'" %
+                        (hardener.__name__), level=DEBUG)
+                    hardener()
+            else:
+                log("No hardening applied to '%s'" % (f.__name__), level=DEBUG)
+
+            return f(*args, **kwargs)
+        return _harden_inner2
+
+    return _harden_inner1
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/host/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/host/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..58bebd846bd6fa648cfab6ab1056ad10d8415453
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/host/__init__.py
@@ -0,0 +1,17 @@
+# Copyright 2016 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from os import path
+
+TEMPLATES_DIR = path.join(path.dirname(__file__), 'templates')
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/host/checks/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/host/checks/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..0e7e409f3c7e0406b40353f48acfc3479e4c1a24
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/host/checks/__init__.py
@@ -0,0 +1,48 @@
+# Copyright 2016 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from charmhelpers.core.hookenv import (
+    log,
+    DEBUG,
+)
+from charmhelpers.contrib.hardening.host.checks import (
+    apt,
+    limits,
+    login,
+    minimize_access,
+    pam,
+    profile,
+    securetty,
+    suid_sgid,
+    sysctl
+)
+
+
+def run_os_checks():
+    log("Starting OS hardening checks.", level=DEBUG)
+    checks = apt.get_audits()
+    checks.extend(limits.get_audits())
+    checks.extend(login.get_audits())
+    checks.extend(minimize_access.get_audits())
+    checks.extend(pam.get_audits())
+    checks.extend(profile.get_audits())
+    checks.extend(securetty.get_audits())
+    checks.extend(suid_sgid.get_audits())
+    checks.extend(sysctl.get_audits())
+
+    for check in checks:
+        log("Running '%s' check" % (check.__class__.__name__), level=DEBUG)
+        check.ensure_compliance()
+
+    log("OS hardening checks complete.", level=DEBUG)
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/host/checks/apt.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/host/checks/apt.py
new file mode 100644
index 0000000000000000000000000000000000000000..7ce41b0043134e256d9c20ee729f1c4345faa3f9
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/host/checks/apt.py
@@ -0,0 +1,37 @@
+# Copyright 2016 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
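+
+# Each get_audits() helper in this package returns a list of audit objects;
+# run_os_checks() (defined in checks/__init__.py) simply iterates over them
+# and calls ensure_compliance() on each.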
+
+from charmhelpers.contrib.hardening.utils import get_settings
+from charmhelpers.contrib.hardening.audits.apt import (
+    AptConfig,
+    RestrictedPackages,
+)
+
+
+def get_audits():
+    """Get OS hardening apt audits.
+
+    :returns: list of audits
+    """
+    audits = [AptConfig([{'key': 'APT::Get::AllowUnauthenticated',
+                          'expected': 'false'}])]
+
+    settings = get_settings('os')
+    clean_packages = settings['security']['packages_clean']
+    if clean_packages:
+        security_packages = settings['security']['packages_list']
+        if security_packages:
+            audits.append(RestrictedPackages(security_packages))
+
+    return audits
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/host/checks/limits.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/host/checks/limits.py
new file mode 100644
index 0000000000000000000000000000000000000000..e94f5ebef360c7c80c35eba8243d3e7f7dcbb14d
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/host/checks/limits.py
@@ -0,0 +1,53 @@
+# Copyright 2016 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from charmhelpers.contrib.hardening.audits.file import (
+    DirectoryPermissionAudit,
+    TemplatedFile,
+)
+from charmhelpers.contrib.hardening.host import TEMPLATES_DIR
+from charmhelpers.contrib.hardening import utils
+
+
+def get_audits():
+    """Get OS hardening security limits audits.
+
+    :returns: list of audits
+    """
+    audits = []
+    settings = utils.get_settings('os')
+
+    # Ensure that the /etc/security/limits.d directory is only writable
+    # by the root user, but others can execute and read.
+    audits.append(DirectoryPermissionAudit('/etc/security/limits.d',
+                                           user='root', group='root',
+                                           mode=0o755))
+
+    # If core dumps are not enabled, then don't allow core dumps to be
+    # created as they may contain sensitive information.
+    if not settings['security']['kernel_enable_core_dump']:
+        audits.append(TemplatedFile('/etc/security/limits.d/10.hardcore.conf',
+                                    SecurityLimitsContext(),
+                                    template_dir=TEMPLATES_DIR,
+                                    user='root', group='root', mode=0o0440))
+    return audits
+
+
+class SecurityLimitsContext(object):
+
+    def __call__(self):
+        settings = utils.get_settings('os')
+        ctxt = {'disable_core_dump':
+                not settings['security']['kernel_enable_core_dump']}
+        return ctxt
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/host/checks/login.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/host/checks/login.py
new file mode 100644
index 0000000000000000000000000000000000000000..fe2bc6ef34a0dae612c2617dc1d13390f651e419
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/host/checks/login.py
@@ -0,0 +1,65 @@
+# Copyright 2016 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from six import string_types
+
+from charmhelpers.contrib.hardening.audits.file import TemplatedFile
+from charmhelpers.contrib.hardening.host import TEMPLATES_DIR
+from charmhelpers.contrib.hardening import utils
+
+
+def get_audits():
+    """Get OS hardening login.defs audits.
+
+    :returns: list of audits
+    """
+    audits = [TemplatedFile('/etc/login.defs', LoginContext(),
+                            template_dir=TEMPLATES_DIR,
+                            user='root', group='root', mode=0o0444)]
+    return audits
+
+
+class LoginContext(object):
+
+    def __call__(self):
+        settings = utils.get_settings('os')
+
+        # Octal numbers in yaml end up being turned into decimal,
+        # so check if the umask is entered as a string (e.g. '027')
+        # or as an octal umask as we know it (e.g. 002). If it's not
+        # a string, assume it to be octal and turn it into an octal
+        # string.
+        umask = settings['environment']['umask']
+        if not isinstance(umask, string_types):
+            umask = '%s' % oct(umask)
+
+        ctxt = {
+            'additional_user_paths':
+            settings['environment']['extra_user_paths'],
+            'umask': umask,
+            'pwd_max_age': settings['auth']['pw_max_age'],
+            'pwd_min_age': settings['auth']['pw_min_age'],
+            'uid_min': settings['auth']['uid_min'],
+            'sys_uid_min': settings['auth']['sys_uid_min'],
+            'sys_uid_max': settings['auth']['sys_uid_max'],
+            'gid_min': settings['auth']['gid_min'],
+            'sys_gid_min': settings['auth']['sys_gid_min'],
+            'sys_gid_max': settings['auth']['sys_gid_max'],
+            'login_retries': settings['auth']['retries'],
+            'login_timeout': settings['auth']['timeout'],
+            'chfn_restrict': settings['auth']['chfn_restrict'],
+            'allow_login_without_home': settings['auth']['allow_homeless']
+        }
+
+        return ctxt
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/host/checks/minimize_access.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/host/checks/minimize_access.py
new file mode 100644
index 0000000000000000000000000000000000000000..6e64be003be0b89d1416b22c35c43b6024979361
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/host/checks/minimize_access.py
@@ -0,0 +1,50 @@
+# Copyright 2016 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from charmhelpers.contrib.hardening.audits.file import (
+    FilePermissionAudit,
+    ReadOnly,
+)
+from charmhelpers.contrib.hardening import utils
+
+
+def get_audits():
+    """Get OS hardening access audits.
+
+    :returns: list of audits
+    """
+    audits = []
+    settings = utils.get_settings('os')
+
+    # Remove write permissions from $PATH folders for all regular users.
+    # This prevents changing system-wide commands from normal users.
+    path_folders = {'/usr/local/sbin',
+                    '/usr/local/bin',
+                    '/usr/sbin',
+                    '/usr/bin',
+                    '/bin'}
+    extra_user_paths = settings['environment']['extra_user_paths']
+    path_folders.update(extra_user_paths)
+    audits.append(ReadOnly(path_folders))
+
+    # Only allow the root user to have access to the shadow file.
+    audits.append(FilePermissionAudit('/etc/shadow', 'root', 'root', 0o0600))
+
+    if 'change_user' not in settings['security']['users_allow']:
+        # su should only be accessible to user and group root, unless it is
+        # expressly defined to allow users to change to root via the
+        # security_users_allow config option.
+        audits.append(FilePermissionAudit('/bin/su', 'root', 'root', 0o750))
+
+    return audits
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/host/checks/pam.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/host/checks/pam.py
new file mode 100644
index 0000000000000000000000000000000000000000..9b38d5f0cf0b16282968825b79b44d80a1a7f577
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/host/checks/pam.py
@@ -0,0 +1,132 @@
+# Copyright 2016 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from subprocess import (
+    check_output,
+    CalledProcessError,
+)
+
+from charmhelpers.core.hookenv import (
+    log,
+    DEBUG,
+    ERROR,
+)
+from charmhelpers.fetch import (
+    apt_install,
+    apt_purge,
+    apt_update,
+)
+from charmhelpers.contrib.hardening.audits.file import (
+    TemplatedFile,
+    DeletedFile,
+)
+from charmhelpers.contrib.hardening import utils
+from charmhelpers.contrib.hardening.host import TEMPLATES_DIR
+
+
+def get_audits():
+    """Get OS hardening PAM authentication audits.
+
+    :returns: list of audits
+    """
+    audits = []
+
+    settings = utils.get_settings('os')
+
+    if settings['auth']['pam_passwdqc_enable']:
+        audits.append(PasswdqcPAM('/etc/passwdqc.conf'))
+
+    if settings['auth']['retries']:
+        audits.append(Tally2PAM('/usr/share/pam-configs/tally2'))
+    else:
+        audits.append(DeletedFile('/usr/share/pam-configs/tally2'))
+
+    return audits
+
+
+class PasswdqcPAMContext(object):
+
+    def __call__(self):
+        ctxt = {}
+        settings = utils.get_settings('os')
+
+        ctxt['auth_pam_passwdqc_options'] = \
+            settings['auth']['pam_passwdqc_options']
+
+        return ctxt
+
+
+class PasswdqcPAM(TemplatedFile):
+    """The PAM Audit verifies the linux PAM settings."""
+    def __init__(self, path):
+        super(PasswdqcPAM, self).__init__(path=path,
+                                          template_dir=TEMPLATES_DIR,
+                                          context=PasswdqcPAMContext(),
+                                          user='root',
+                                          group='root',
+                                          mode=0o0640)
+
+    def pre_write(self):
+        # Always remove?
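+        # libpam-ccreds and libpam-cracklib are purged up front: cracklib
+        # provides a competing password-strength stack, and the passwdqc
+        # PAM profile shipped with this layer declares "Conflicts: cracklib".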
+        for pkg in ['libpam-ccreds', 'libpam-cracklib']:
+            log("Purging package '%s'" % pkg, level=DEBUG)
+            apt_purge(pkg)
+
+        apt_update(fatal=True)
+        for pkg in ['libpam-passwdqc']:
+            log("Installing package '%s'" % pkg, level=DEBUG)
+            apt_install(pkg)
+
+    def post_write(self):
+        """Updates the PAM configuration after the file has been written"""
+        try:
+            check_output(['pam-auth-update', '--package'])
+        except CalledProcessError as e:
+            log('Error calling pam-auth-update: %s' % e, level=ERROR)
+
+
+class Tally2PAMContext(object):
+
+    def __call__(self):
+        ctxt = {}
+        settings = utils.get_settings('os')
+
+        ctxt['auth_lockout_time'] = settings['auth']['lockout_time']
+        ctxt['auth_retries'] = settings['auth']['retries']
+
+        return ctxt
+
+
+class Tally2PAM(TemplatedFile):
+    """The PAM Audit verifies the linux PAM settings."""
+    def __init__(self, path):
+        super(Tally2PAM, self).__init__(path=path,
+                                        template_dir=TEMPLATES_DIR,
+                                        context=Tally2PAMContext(),
+                                        user='root',
+                                        group='root',
+                                        mode=0o0640)
+
+    def pre_write(self):
+        # Always remove?
+        apt_purge('libpam-ccreds')
+        apt_update(fatal=True)
+        apt_install('libpam-modules')
+
+    def post_write(self):
+        """Updates the PAM configuration after the file has been written"""
+        try:
+            check_output(['pam-auth-update', '--package'])
+        except CalledProcessError as e:
+            log('Error calling pam-auth-update: %s' % e, level=ERROR)
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/host/checks/profile.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/host/checks/profile.py
new file mode 100644
index 0000000000000000000000000000000000000000..2727428da9241ccf88a60843d05dffb26cebac96
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/host/checks/profile.py
@@ -0,0 +1,49 @@
+# Copyright 2016 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from charmhelpers.contrib.hardening.audits.file import TemplatedFile
+from charmhelpers.contrib.hardening.host import TEMPLATES_DIR
+from charmhelpers.contrib.hardening import utils
+
+
+def get_audits():
+    """Get OS hardening profile audits.
+
+    :returns: list of audits
+    """
+    audits = []
+
+    settings = utils.get_settings('os')
+    # If core dumps are not enabled, then don't allow core dumps to be
+    # created as they may contain sensitive information.
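+    # (Both audits below render /etc/profile.d scripts via ProfileContext:
+    # pinerolo_profile.sh applies a soft core-dump limit of 0, while
+    # 99-hardening.sh sets a read-only shell TMOUT.)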
+    if not settings['security']['kernel_enable_core_dump']:
+        audits.append(TemplatedFile('/etc/profile.d/pinerolo_profile.sh',
+                                    ProfileContext(),
+                                    template_dir=TEMPLATES_DIR,
+                                    mode=0o0755, user='root', group='root'))
+    if settings['security']['ssh_tmout']:
+        audits.append(TemplatedFile('/etc/profile.d/99-hardening.sh',
+                                    ProfileContext(),
+                                    template_dir=TEMPLATES_DIR,
+                                    mode=0o0644, user='root', group='root'))
+    return audits
+
+
+class ProfileContext(object):
+
+    def __call__(self):
+        settings = utils.get_settings('os')
+        ctxt = {'ssh_tmout':
+                settings['security']['ssh_tmout']}
+        return ctxt
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/host/checks/securetty.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/host/checks/securetty.py
new file mode 100644
index 0000000000000000000000000000000000000000..34cd02178c1fbf1c7d467af0814ad9fd4199dc3d
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/host/checks/securetty.py
@@ -0,0 +1,37 @@
+# Copyright 2016 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from charmhelpers.contrib.hardening.audits.file import TemplatedFile
+from charmhelpers.contrib.hardening.host import TEMPLATES_DIR
+from charmhelpers.contrib.hardening import utils
+
+
+def get_audits():
+    """Get OS hardening Secure TTY audits.
+
+    :returns: list of audits
+    """
+    audits = []
+    audits.append(TemplatedFile('/etc/securetty', SecureTTYContext(),
+                                template_dir=TEMPLATES_DIR,
+                                mode=0o0400, user='root', group='root'))
+    return audits
+
+
+class SecureTTYContext(object):
+
+    def __call__(self):
+        settings = utils.get_settings('os')
+        ctxt = {'ttys': settings['auth']['root_ttys']}
+        return ctxt
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/host/checks/suid_sgid.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/host/checks/suid_sgid.py
new file mode 100644
index 0000000000000000000000000000000000000000..bcbe3fde07ea0716e2de6d1d4e103fcb19166c14
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/host/checks/suid_sgid.py
@@ -0,0 +1,129 @@
+# Copyright 2016 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
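+
+# For illustration only (a hypothetical charm-side override, not part of this
+# module): a charm can exempt a path from the system blacklist below via its
+# hardening.yaml, since user whitelist entries are subtracted from BLACKLIST
+# in get_audits():
+#
+#     os:
+#         security:
+#             suid_sgid_whitelist: ['/usr/bin/rcp']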
+
+import subprocess
+
+from charmhelpers.core.hookenv import (
+    log,
+    INFO,
+)
+from charmhelpers.contrib.hardening.audits.file import NoSUIDSGIDAudit
+from charmhelpers.contrib.hardening import utils
+
+
+BLACKLIST = ['/usr/bin/rcp', '/usr/bin/rlogin', '/usr/bin/rsh',
+             '/usr/libexec/openssh/ssh-keysign',
+             '/usr/lib/openssh/ssh-keysign',
+             '/sbin/netreport',
+             '/usr/sbin/usernetctl',
+             '/usr/sbin/userisdnctl',
+             '/usr/sbin/pppd',
+             '/usr/bin/lockfile',
+             '/usr/bin/mail-lock',
+             '/usr/bin/mail-unlock',
+             '/usr/bin/mail-touchlock',
+             '/usr/bin/dotlockfile',
+             '/usr/bin/arping',
+             '/usr/sbin/uuidd',
+             '/usr/bin/mtr',
+             '/usr/lib/evolution/camel-lock-helper-1.2',
+             '/usr/lib/pt_chown',
+             '/usr/lib/eject/dmcrypt-get-device',
+             '/usr/lib/mc/cons.saver']
+
+WHITELIST = ['/bin/mount', '/bin/ping', '/bin/su', '/bin/umount',
+             '/sbin/pam_timestamp_check', '/sbin/unix_chkpwd', '/usr/bin/at',
+             '/usr/bin/gpasswd', '/usr/bin/locate', '/usr/bin/newgrp',
+             '/usr/bin/passwd', '/usr/bin/ssh-agent',
+             '/usr/libexec/utempter/utempter', '/usr/sbin/lockdev',
+             '/usr/sbin/sendmail.sendmail', '/usr/bin/expiry',
+             '/bin/ping6', '/usr/bin/traceroute6.iputils',
+             '/sbin/mount.nfs', '/sbin/umount.nfs',
+             '/sbin/mount.nfs4', '/sbin/umount.nfs4',
+             '/usr/bin/crontab',
+             '/usr/bin/wall', '/usr/bin/write',
+             '/usr/bin/screen',
+             '/usr/bin/mlocate',
+             '/usr/bin/chage', '/usr/bin/chfn', '/usr/bin/chsh',
+             '/bin/fusermount',
+             '/usr/bin/pkexec',
+             '/usr/bin/sudo', '/usr/bin/sudoedit',
+             '/usr/sbin/postdrop', '/usr/sbin/postqueue',
+             '/usr/sbin/suexec',
+             '/usr/lib/squid/ncsa_auth', '/usr/lib/squid/pam_auth',
+             '/usr/kerberos/bin/ksu',
+             '/usr/sbin/ccreds_validate',
+             '/usr/bin/Xorg',
+             '/usr/bin/X',
+             '/usr/lib/dbus-1.0/dbus-daemon-launch-helper',
+             '/usr/lib/vte/gnome-pty-helper',
+             '/usr/lib/libvte9/gnome-pty-helper',
+             '/usr/lib/libvte-2.90-9/gnome-pty-helper']
+
+
+def get_audits():
+    """Get OS hardening suid/sgid audits.
+
+    :returns: list of audits
+    """
+    checks = []
+    settings = utils.get_settings('os')
+    if not settings['security']['suid_sgid_enforce']:
+        log("Skipping suid/sgid hardening", level=INFO)
+        return checks
+
+    # Build the blacklist and whitelist of files for suid/sgid checks.
+    # There are a total of 4 lists:
+    #   1. the system blacklist
+    #   2. the system whitelist
+    #   3. the user blacklist
+    #   4. the user whitelist
+    #
+    # The blacklist is the set of paths which should NOT have the suid/sgid
+    # bit set and the whitelist is the set of paths which MAY have the
+    # suid/sgid bit set. The user whitelist/blacklist effectively override
+    # the system whitelist/blacklist.
+    u_b = settings['security']['suid_sgid_blacklist']
+    u_w = settings['security']['suid_sgid_whitelist']
+
+    blacklist = set(BLACKLIST) - set(u_w + u_b)
+    whitelist = set(WHITELIST) - set(u_b + u_w)
+
+    checks.append(NoSUIDSGIDAudit(blacklist))
+
+    dry_run = settings['security']['suid_sgid_dry_run_on_unknown']
+
+    if settings['security']['suid_sgid_remove_from_unknown'] or dry_run:
+        # If the policy is a dry_run (e.g. complain only) or remove unknown
+        # suid/sgid bits then find all of the paths which have the suid/sgid
+        # bit set and then remove the whitelisted paths.
+        root_path = settings['environment']['root_path']
+        unknown_paths = find_paths_with_suid_sgid(root_path) - set(whitelist)
+        checks.append(NoSUIDSGIDAudit(unknown_paths, unless=dry_run))
+
+    return checks
+
+
+def find_paths_with_suid_sgid(root_path):
+    """Finds all paths/files which have an suid/sgid bit enabled.
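+
+    (On a stock Ubuntu install this typically reports paths such as /bin/su
+    or /usr/bin/passwd, which are expected to be suid and therefore appear
+    in the whitelist above.)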
+
+    Starting with the root_path, this will recursively find all paths which
+    have an suid or sgid bit set.
+    """
+    cmd = ['find', root_path, '-perm', '-4000', '-o', '-perm', '-2000',
+           '-type', 'f', '!', '-path', '/proc/*', '-print']
+
+    # universal_newlines ensures communicate() returns text rather than
+    # bytes under Python 3.
+    p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE,
+                         universal_newlines=True)
+    out, _ = p.communicate()
+    return set(out.split('\n'))
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/host/checks/sysctl.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/host/checks/sysctl.py
new file mode 100644
index 0000000000000000000000000000000000000000..f1ea5813036b11893e8b9a986bf30a2f7a541b5d
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/host/checks/sysctl.py
@@ -0,0 +1,209 @@
+# Copyright 2016 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import os
+import platform
+import re
+import six
+import subprocess
+
+from charmhelpers.core.hookenv import (
+    log,
+    INFO,
+    WARNING,
+)
+from charmhelpers.contrib.hardening import utils
+from charmhelpers.contrib.hardening.audits.file import (
+    FilePermissionAudit,
+    TemplatedFile,
+)
+from charmhelpers.contrib.hardening.host import TEMPLATES_DIR
+
+
+SYSCTL_DEFAULTS = """net.ipv4.ip_forward=%(net_ipv4_ip_forward)s
+net.ipv6.conf.all.forwarding=%(net_ipv6_conf_all_forwarding)s
+net.ipv4.conf.all.rp_filter=1
+net.ipv4.conf.default.rp_filter=1
+net.ipv4.icmp_echo_ignore_broadcasts=1
+net.ipv4.icmp_ignore_bogus_error_responses=1
+net.ipv4.icmp_ratelimit=100
+net.ipv4.icmp_ratemask=88089
+net.ipv6.conf.all.disable_ipv6=%(net_ipv6_conf_all_disable_ipv6)s
+net.ipv4.tcp_timestamps=%(net_ipv4_tcp_timestamps)s
+net.ipv4.conf.all.arp_ignore=%(net_ipv4_conf_all_arp_ignore)s
+net.ipv4.conf.all.arp_announce=%(net_ipv4_conf_all_arp_announce)s
+net.ipv4.tcp_rfc1337=1
+net.ipv4.tcp_syncookies=1
+net.ipv4.conf.all.shared_media=1
+net.ipv4.conf.default.shared_media=1
+net.ipv4.conf.all.accept_source_route=0
+net.ipv4.conf.default.accept_source_route=0
+net.ipv4.conf.all.accept_redirects=0
+net.ipv4.conf.default.accept_redirects=0
+net.ipv6.conf.all.accept_redirects=0
+net.ipv6.conf.default.accept_redirects=0
+net.ipv4.conf.all.secure_redirects=0
+net.ipv4.conf.default.secure_redirects=0
+net.ipv4.conf.all.send_redirects=0
+net.ipv4.conf.default.send_redirects=0
+net.ipv4.conf.all.log_martians=0
+net.ipv6.conf.default.router_solicitations=0
+net.ipv6.conf.default.accept_ra_rtr_pref=0
+net.ipv6.conf.default.accept_ra_pinfo=0
+net.ipv6.conf.default.accept_ra_defrtr=0
+net.ipv6.conf.default.autoconf=0
+net.ipv6.conf.default.dad_transmits=0
+net.ipv6.conf.default.max_addresses=1
+net.ipv6.conf.all.accept_ra=0
+net.ipv6.conf.default.accept_ra=0
+kernel.modules_disabled=%(kernel_modules_disabled)s
+kernel.sysrq=%(kernel_sysrq)s
+fs.suid_dumpable=%(fs_suid_dumpable)s
+kernel.randomize_va_space=2
+"""
+
+
+def get_audits():
+    """Get OS hardening sysctl audits.
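+
+    The audits returned here render /etc/sysctl.d/99-juju-hardening.conf from
+    the configured settings and lock down the permissions of /etc/sysctl.conf.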
+
+    :returns: list of audits
+    """
+    audits = []
+    settings = utils.get_settings('os')
+
+    # Apply the sysctl settings which are configured to be applied.
+    audits.append(SysctlConf())
+    # Make sure that only root has access to the sysctl.conf file, and
+    # that it is read-only.
+    audits.append(FilePermissionAudit('/etc/sysctl.conf',
+                                      user='root',
+                                      group='root', mode=0o0440))
+    # If module loading is not enabled, then ensure that the modules
+    # file has the appropriate permissions and rebuild the initramfs
+    if not settings['security']['kernel_enable_module_loading']:
+        audits.append(ModulesTemplate())
+
+    return audits
+
+
+class ModulesContext(object):
+
+    def __call__(self):
+        settings = utils.get_settings('os')
+        with open('/proc/cpuinfo', 'r') as fd:
+            cpuinfo = fd.readlines()
+
+        # Default in case no vendor_id line is present (e.g. non-x86 CPUs).
+        vendor = None
+        for line in cpuinfo:
+            match = re.search(r"^vendor_id\s+:\s+(.+)", line)
+            if match:
+                vendor = match.group(1)
+
+        if vendor == "GenuineIntel":
+            vendor = "intel"
+        elif vendor == "AuthenticAMD":
+            vendor = "amd"
+
+        ctxt = {'arch': platform.processor(),
+                'cpuVendor': vendor,
+                'desktop_enable': settings['general']['desktop_enable']}
+
+        return ctxt
+
+
+class ModulesTemplate(TemplatedFile):
+    """Renders /etc/initramfs-tools/modules and rebuilds the initramfs."""
+
+    def __init__(self):
+        super(ModulesTemplate, self).__init__('/etc/initramfs-tools/modules',
+                                              ModulesContext(),
+                                              template_dir=TEMPLATES_DIR,
+                                              user='root', group='root',
+                                              mode=0o0440)
+
+    def post_write(self):
+        subprocess.check_call(['update-initramfs', '-u'])
+
+
+class SysCtlHardeningContext(object):
+    def __call__(self):
+        settings = utils.get_settings('os')
+        ctxt = {'sysctl': {}}
+
+        log("Applying sysctl settings", level=INFO)
+        extras = {'net_ipv4_ip_forward': 0,
+                  'net_ipv6_conf_all_forwarding': 0,
+                  'net_ipv6_conf_all_disable_ipv6': 1,
+                  'net_ipv4_tcp_timestamps': 0,
+                  'net_ipv4_conf_all_arp_ignore': 0,
+                  'net_ipv4_conf_all_arp_announce': 0,
+                  'kernel_sysrq': 0,
+                  'fs_suid_dumpable': 0,
+                  'kernel_modules_disabled': 1}
+
+        if settings['sysctl']['ipv6_enable']:
+            extras['net_ipv6_conf_all_disable_ipv6'] = 0
+
+        if settings['sysctl']['forwarding']:
+            extras['net_ipv4_ip_forward'] = 1
+            extras['net_ipv6_conf_all_forwarding'] = 1
+
+        if settings['sysctl']['arp_restricted']:
+            extras['net_ipv4_conf_all_arp_ignore'] = 1
+            extras['net_ipv4_conf_all_arp_announce'] = 2
+
+        if settings['security']['kernel_enable_module_loading']:
+            extras['kernel_modules_disabled'] = 0
+
+        if settings['sysctl']['kernel_enable_sysrq']:
+            sysrq_val = settings['sysctl']['kernel_secure_sysrq']
+            extras['kernel_sysrq'] = sysrq_val
+
+        if settings['security']['kernel_enable_core_dump']:
+            extras['fs_suid_dumpable'] = 1
+
+        settings.update(extras)
+        for d in (SYSCTL_DEFAULTS % settings).split():
+            d = d.strip().partition('=')
+            key = d[0].strip()
+            path = os.path.join('/proc/sys', key.replace('.', '/'))
+            if not os.path.exists(path):
+                log("Skipping '%s' since '%s' does not exist" % (key, path),
+                    level=WARNING)
+                continue
+
+            ctxt['sysctl'][key] = d[2] or None
+
+        # Translate for python3
+        return {'sysctl_settings':
+                [(k, v) for k, v in six.iteritems(ctxt['sysctl'])]}
+
+
+class SysctlConf(TemplatedFile):
+    """An audit check for sysctl settings."""
+    def __init__(self):
+        self.conffile = '/etc/sysctl.d/99-juju-hardening.conf'
+        super(SysctlConf, self).__init__(self.conffile,
+                                         SysCtlHardeningContext(),
+                                         template_dir=TEMPLATES_DIR,
+                                         user='root', group='root',
+                                         mode=0o0440)
+
+    def post_write(self):
+        try:
+            subprocess.check_call(['sysctl', '-p', self.conffile])
+        except subprocess.CalledProcessError as e:
+            # NOTE: on some systems if sysctl cannot apply all settings it
+            # will return non-zero as well.
+            log("sysctl command returned an error (maybe some "
+                "keys could not be set) - %s" % (e),
+                level=WARNING)
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/host/templates/10.hardcore.conf b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/host/templates/10.hardcore.conf
new file mode 100644
index 0000000000000000000000000000000000000000..0014191fc8152fd9147b3fb5446987e6e62f2d77
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/host/templates/10.hardcore.conf
@@ -0,0 +1,8 @@
+###############################################################################
+# WARNING: This configuration file is maintained by Juju. Local changes may
+#          be overwritten.
+###############################################################################
+{% if disable_core_dump -%}
+# Prevent core dumps for all users. These are usually only needed by developers and may contain sensitive information.
+* hard core 0
+{% endif %}
\ No newline at end of file
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/host/templates/99-hardening.sh b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/host/templates/99-hardening.sh
new file mode 100644
index 0000000000000000000000000000000000000000..616cef46f492f682aca28c71a6e20176870a36f2
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/host/templates/99-hardening.sh
@@ -0,0 +1,5 @@
+TMOUT={{ tmout }}
+readonly TMOUT
+export TMOUT
+
+readonly HISTFILE
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/host/templates/99-juju-hardening.conf b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/host/templates/99-juju-hardening.conf
new file mode 100644
index 0000000000000000000000000000000000000000..101f1e1d709c268890553957f30c93259681ce59
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/host/templates/99-juju-hardening.conf
@@ -0,0 +1,7 @@
+###############################################################################
+# WARNING: This configuration file is maintained by Juju. Local changes may
+#          be overwritten.
+###############################################################################
+{% for key, value in sysctl_settings -%}
+{{ key }}={{ value }}
+{% endfor -%}
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/host/templates/login.defs b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/host/templates/login.defs
new file mode 100644
index 0000000000000000000000000000000000000000..db137d6dbb7a3a850294407199225392a880cfc2
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/host/templates/login.defs
@@ -0,0 +1,349 @@
+###############################################################################
+# WARNING: This configuration file is maintained by Juju. Local changes may
+#          be overwritten.
+###############################################################################
+#
+# /etc/login.defs - Configuration control definitions for the login package.
+#
+# Three items must be defined: MAIL_DIR, ENV_SUPATH, and ENV_PATH.
+# If unspecified, some arbitrary (and possibly incorrect) value will
+# be assumed. All other items are optional - if not specified then
+# the described action or option will be inhibited.
+#
+# Comment lines (lines beginning with "#") and blank lines are ignored.
+#
+# Modified for Linux. --marekm
+
+# REQUIRED for useradd/userdel/usermod
+#   Directory where mailboxes reside, _or_ name of file, relative to the
+#   home directory. If you _do_ define MAIL_DIR and MAIL_FILE,
+#   MAIL_DIR takes precedence.
+#
+#   Essentially:
+#      - MAIL_DIR defines the location of users' mail spool files
+#        (for mbox use) by appending the username to MAIL_DIR as defined
+#        below.
+#      - MAIL_FILE defines the location of the users' mail spool files as the
+#        fully-qualified filename obtained by prepending the user home
+#        directory before $MAIL_FILE
+#
+# NOTE: This is no longer used for setting up users' MAIL environment variable
+#       which is, starting from shadow 4.0.12-1 in Debian, entirely the
+#       job of the pam_mail PAM modules
+#       See default PAM configuration files provided for
+#       login, su, etc.
+#
+# This is a temporary situation: setting these variables will soon
+# move to /etc/default/useradd and the variables will then
+# no longer be supported
+MAIL_DIR        /var/mail
+#MAIL_FILE      .mail
+
+#
+# Enable logging and display of /var/log/faillog login failure info.
+# This option conflicts with the pam_tally PAM module.
+#
+FAILLOG_ENAB            yes
+
+#
+# Enable display of unknown usernames when login failures are recorded.
+#
+# WARNING: Unknown usernames may become world readable.
+# See #290803 and #298773 for details about how this could become a security
+# concern
+LOG_UNKFAIL_ENAB        no
+
+#
+# Enable logging of successful logins
+#
+LOG_OK_LOGINS           yes
+
+#
+# Enable "syslog" logging of su activity - in addition to sulog file logging.
+# SYSLOG_SG_ENAB does the same for newgrp and sg.
+#
+SYSLOG_SU_ENAB          yes
+SYSLOG_SG_ENAB          yes
+
+#
+# If defined, all su activity is logged to this file.
+#
+#SULOG_FILE     /var/log/sulog
+
+#
+# If defined, file which maps tty line to TERM environment parameter.
+# Each line of the file is in a format something like "vt100 tty01".
+#
+#TTYTYPE_FILE   /etc/ttytype
+
+#
+# If defined, login failures will be logged here in a utmp format
+# last, when invoked as lastb, will read /var/log/btmp, so...
+#
+FTMP_FILE       /var/log/btmp
+
+#
+# If defined, the command name to display when running "su -". For
+# example, if this is defined as "su" then a "ps" will display the
+# command as "-su". If not defined, then "ps" would display the
+# name of the shell actually being run, e.g. something like "-sh".
+#
+SU_NAME         su
+
+#
+# If defined, file which inhibits all the usual chatter during the login
+# sequence. If a full pathname, then hushed mode will be enabled if the
+# user's name or shell are found in the file. If not a full pathname, then
+# hushed mode will be enabled if the file exists in the user's home directory.
+#
+HUSHLOGIN_FILE  .hushlogin
+#HUSHLOGIN_FILE /etc/hushlogins
+
+#
+# *REQUIRED*  The default PATH settings, for superuser and normal users.
+#
+# (they are minimal, add the rest in the shell startup files)
+ENV_SUPATH      PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
+ENV_PATH        PATH=/usr/local/bin:/usr/bin:/bin{% if additional_user_paths %}{{ additional_user_paths }}{% endif %}
+
+#
+# Terminal permissions
+#
+#       TTYGROUP        Login tty will be assigned this group ownership.
+#       TTYPERM         Login tty will be set to this permission.
+#
+# If you have a "write" program which is "setgid" to a special group
+# which owns the terminals, define TTYGROUP to the group number and
+# TTYPERM to 0620. Otherwise leave TTYGROUP commented out and assign
+# TTYPERM to either 622 or 600.
+#
+# In Debian /usr/bin/bsd-write or similar programs are setgid tty
+# However, the default and recommended value for TTYPERM is still 0600
+# to not allow anyone to write to anyone else's console or terminal
+
+# Users can still allow other people to write to them by issuing
+# the "mesg y" command.
+
+TTYGROUP        tty
+TTYPERM         0600
+
+#
+# Login configuration initializations:
+#
+#       ERASECHAR       Terminal ERASE character ('\010' = backspace).
+#       KILLCHAR        Terminal KILL character ('\025' = CTRL/U).
+#       UMASK           Default "umask" value.
+#
+# The ERASECHAR and KILLCHAR are used only on System V machines.
+#
+# UMASK is the default umask value for pam_umask and is used by
+# useradd and newusers to set the mode of the new home directories.
+# 022 is the "historical" value in Debian for UMASK
+# 027, or even 077, could be considered better for privacy
+# There is no One True Answer here: each sysadmin must make up his/her
+# mind.
+#
+# If USERGROUPS_ENAB is set to "yes", that will modify this UMASK default value
+# for private user groups, i. e. the uid is the same as gid, and username is
+# the same as the primary group name: for these, the user permissions will be
+# used as group permissions, e. g. 022 will become 002.
+#
+# Prefix these values with "0" to get octal, "0x" to get hexadecimal.
+#
+ERASECHAR       0177
+KILLCHAR        025
+UMASK           {{ umask }}
+
+# Enable setting of the umask group bits to be the same as owner bits
+# (examples: `022` -> `002`, `077` -> `007`) for non-root users, if the uid
+# is the same as gid, and username is the same as the primary group name.
+# If set to yes, userdel will remove the user's group if it contains no more
+# members, and useradd will create by default a group with the name of the
+# user.
+USERGROUPS_ENAB yes
+
+#
+# Password aging controls:
+#
+#       PASS_MAX_DAYS   Maximum number of days a password may be used.
+#       PASS_MIN_DAYS   Minimum number of days allowed between password changes.
+#       PASS_WARN_AGE   Number of days warning given before a password expires.
+#
+PASS_MAX_DAYS   {{ pwd_max_age }}
+PASS_MIN_DAYS   {{ pwd_min_age }}
+PASS_WARN_AGE   7
+
+#
+# Min/max values for automatic uid selection in useradd
+#
+UID_MIN                 {{ uid_min }}
+UID_MAX                 60000
+# System accounts
+SYS_UID_MIN             {{ sys_uid_min }}
+SYS_UID_MAX             {{ sys_uid_max }}
+
+# Min/max values for automatic gid selection in groupadd
+GID_MIN                 {{ gid_min }}
+GID_MAX                 60000
+# System accounts
+SYS_GID_MIN             {{ sys_gid_min }}
+SYS_GID_MAX             {{ sys_gid_max }}
+
+#
+# Max number of login retries if password is bad. This will most likely be
+# overridden by PAM, since the default pam_unix module has its own built-in
+# limit of 3 retries. However, this is a safe fallback in case you are using
+# an authentication module that does not enforce PAM_MAXTRIES.
+#
+LOGIN_RETRIES           {{ login_retries }}
+
+#
+# Max time in seconds for login
+#
+LOGIN_TIMEOUT           {{ login_timeout }}
+
+#
+# Which fields may be changed by regular users using chfn - use
+# any combination of letters "frwh" (full name, room number, work
+# phone, home phone). If not defined, no changes are allowed.
+# For backward compatibility, "yes" = "rwh" and "no" = "frwh".
+#
+{% if chfn_restrict %}
+CHFN_RESTRICT           {{ chfn_restrict }}
+{% endif %}
+
+#
+# Should login be allowed if we can't cd to the home directory?
+# Default is no.
+#
+DEFAULT_HOME    {% if allow_login_without_home %} yes {% else %} no {% endif %}
+
+#
+# If defined, this command is run when removing a user.
+# It should remove any at/cron/print jobs etc. owned by
+# the user to be removed (passed as the first argument).
+#
+#USERDEL_CMD    /usr/sbin/userdel_local
+
+#
+# Enable setting of the umask group bits to be the same as owner bits
+# (examples: 022 -> 002, 077 -> 007) for non-root users, if the uid is
+# the same as gid, and username is the same as the primary group name.
+#
+# If set to yes, userdel will remove the user's group if it contains no
+# more members, and useradd will create by default a group with the name
+# of the user.
+#
+USERGROUPS_ENAB yes
+
+#
+# Instead of the real user shell, the program specified by this parameter
+# will be launched, although its visible name (argv[0]) will be the shell's.
+# The program may do whatever it wants (logging, additional authentication,
+# banner, ...) before running the actual shell.
+#
+# FAKE_SHELL /bin/fakeshell
+
+#
+# If defined, either full pathname of a file containing device names or
+# a ":" delimited list of device names. Root logins will be allowed only
+# upon these devices.
+#
+# This variable is used by login and su.
+#
+#CONSOLE        /etc/consoles
+#CONSOLE        console:tty01:tty02:tty03:tty04
+
+#
+# List of groups to add to the user's supplementary group set
+# when logging in on the console (as determined by the CONSOLE
+# setting). Default is none.
+#
+# Use with caution - it is possible for users to gain permanent
+# access to these groups, even when not logged in on the console.
+# How to do it is left as an exercise for the reader...
+#
+# This variable is used by login and su.
+#
+#CONSOLE_GROUPS         floppy:audio:cdrom
+
+#
+# If set to "yes", new passwords will be encrypted using the MD5-based
+# algorithm compatible with the one used by recent releases of FreeBSD.
+# It supports passwords of unlimited length and longer salt strings.
+# Set to "no" if you need to copy encrypted passwords to other systems
+# which don't understand the new algorithm. Default is "no".
+#
+# This variable is deprecated. You should use ENCRYPT_METHOD.
+#
+MD5_CRYPT_ENAB  no
+
+#
+# If set to MD5, MD5-based algorithm will be used for encrypting password
+# If set to SHA256, SHA256-based algorithm will be used for encrypting password
+# If set to SHA512, SHA512-based algorithm will be used for encrypting password
+# If set to DES, DES-based algorithm will be used for encrypting password (default)
+# Overrides the MD5_CRYPT_ENAB option
+#
+# Note: It is recommended to use a value consistent with
+# the PAM modules configuration.
+#
+ENCRYPT_METHOD  SHA512
+
+#
+# Only used if ENCRYPT_METHOD is set to SHA256 or SHA512.
+#
+# Define the number of SHA rounds.
+# With a lot of rounds, it is more difficult to brute-force the password.
+# But note also that more CPU resources will be needed to authenticate
+# users.
+#
+# If not specified, the libc will choose the default number of rounds (5000).
+# The values must be inside the 1000-999999999 range.
+# If only one of the MIN or MAX values is set, then this value will be used.
+# If MIN > MAX, the highest value will be used.
+#
+# SHA_CRYPT_MIN_ROUNDS 5000
+# SHA_CRYPT_MAX_ROUNDS 5000
+
+################# OBSOLETED BY PAM ##############
+#                                              #
+# These options are now handled by PAM. Please #
+# edit the appropriate file in /etc/pam.d/ to  #
+# enable the equivalents of them.
+
+################# OBSOLETED BY PAM ###############
+#                                                #
+# These options are now handled by PAM. Please   #
+# edit the appropriate file in /etc/pam.d/ to    #
+# enable the equivalents of them.                #
+#                                                #
+##################################################
+
+#MOTD_FILE
+#DIALUPS_CHECK_ENAB
+#LASTLOG_ENAB
+#MAIL_CHECK_ENAB
+#OBSCURE_CHECKS_ENAB
+#PORTTIME_CHECKS_ENAB
+#SU_WHEEL_ONLY
+#CRACKLIB_DICTPATH
+#PASS_CHANGE_TRIES
+#PASS_ALWAYS_WARN
+#ENVIRON_FILE
+#NOLOGINS_FILE
+#ISSUE_FILE
+#PASS_MIN_LEN
+#PASS_MAX_LEN
+#ULIMIT
+#ENV_HZ
+#CHFN_AUTH
+#CHSH_AUTH
+#FAIL_DELAY
+
+################# OBSOLETED ########################
+#                                                  #
+# These options are no longer handled by shadow.   #
+#                                                  #
+# Shadow utilities will display a warning if they  #
+# still appear.                                    #
+#                                                  #
+####################################################
+
+# CLOSE_SESSIONS
+# LOGIN_STRING
+# NO_PASSWORD_CONSOLE
+# QMAIL_DIR
+
+
+
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/host/templates/modules b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/host/templates/modules
new file mode 100644
index 0000000000000000000000000000000000000000..ef0354ee35fa363b303bb22c6ed0d2d1196aed52
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/host/templates/modules
@@ -0,0 +1,117 @@
+###############################################################################
+# WARNING: This configuration file is maintained by Juju. Local changes may
+# be overwritten.
+###############################################################################
+# /etc/modules: kernel modules to load at boot time.
+#
+# This file contains the names of kernel modules that should be loaded
+# at boot time, one per line. Lines beginning with "#" are ignored.
+# Parameters can be specified after the module name.

+# Arch
+# ----
+#
+# Modules for certain builds, containing support modules and some
+# CPU-specific optimizations.
+
+{% if arch == "x86_64" -%}
+# Optimize for x86_64 cryptographic features
+twofish-x86_64-3way
+twofish-x86_64
+aes-x86_64
+salsa20-x86_64
+blowfish-x86_64
+{% endif -%}
+
+{% if cpuVendor == "intel" -%}
+# Intel-specific optimizations
+ghash-clmulni-intel
+aesni-intel
+kvm-intel
+{% endif -%}
+
+{% if cpuVendor == "amd" -%}
+# AMD-specific optimizations
+kvm-amd
+{% endif -%}
+
+kvm
+
+
+# Crypto
+# ------
+
+# Some core modules which comprise strong cryptography.
+blowfish_common
+blowfish_generic
+ctr
+cts
+lrw
+lzo
+rmd160
+rmd256
+rmd320
+serpent
+sha512_generic
+twofish_common
+twofish_generic
+xts
+zlib
+
+
+# Drivers
+# -------
+
+# Basics
+lp
+rtc
+loop
+
+# Filesystems
+ext2
+btrfs
+
+{% if desktop_enable -%}
+# Desktop
+psmouse
+snd
+snd_ac97_codec
+snd_intel8x0
+snd_page_alloc
+snd_pcm
+snd_timer
+soundcore
+usbhid
+{% endif -%}
+
+# Lib
+# ---
+xz
+
+
+# Net
+# ---
+
+# All modules needed for netfilter rules (i.e. iptables, ebtables).
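+# (Illustrative check, not part of the template: whether a module from this
+# list is actually loaded on the target host can be verified with e.g.
+# `lsmod | grep ip_tables`, and it can be loaded by hand with
+# `modprobe ip_tables`.)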
+ip_tables +x_tables +iptable_filter +iptable_nat + +# Targets +ipt_LOG +ipt_REJECT + +# Modules +xt_connlimit +xt_tcpudp +xt_recent +xt_limit +xt_conntrack +nf_conntrack +nf_conntrack_ipv4 +nf_defrag_ipv4 +xt_state +nf_nat + +# Addons +xt_pknock \ No newline at end of file diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/host/templates/passwdqc.conf b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/host/templates/passwdqc.conf new file mode 100644 index 0000000000000000000000000000000000000000..f98d14e57428c106692e0f57e8b381f2b0a12c44 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/host/templates/passwdqc.conf @@ -0,0 +1,11 @@ +############################################################################### +# WARNING: This configuration file is maintained by Juju. Local changes may +# be overwritten. +############################################################################### +Name: passwdqc password strength enforcement +Default: yes +Priority: 1024 +Conflicts: cracklib +Password-Type: Primary +Password: + requisite pam_passwdqc.so {{ auth_pam_passwdqc_options }} diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/host/templates/pinerolo_profile.sh b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/host/templates/pinerolo_profile.sh new file mode 100644 index 0000000000000000000000000000000000000000..fd2de791b96fbb8889811daf7340d1f2ca2ab3a6 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/host/templates/pinerolo_profile.sh @@ -0,0 +1,8 @@ +############################################################################### +# WARNING: This configuration file is maintained by Juju. Local changes may +# be overwritten. +############################################################################### +# Disable core dumps via soft limits for all users. Compliance to this setting +# is voluntary and can be modified by users up to a hard limit. This setting is +# a sane default. +ulimit -S -c 0 > /dev/null 2>&1 diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/host/templates/securetty b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/host/templates/securetty new file mode 100644 index 0000000000000000000000000000000000000000..15b18d4e2f45747845d0b65c06997f154ef674a4 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/host/templates/securetty @@ -0,0 +1,11 @@ +############################################################################### +# WARNING: This configuration file is maintained by Juju. Local changes may +# be overwritten. 
+############################################################################### +# A list of TTYs, from which root can log in +# see `man securetty` for reference +{% if ttys -%} +{% for tty in ttys -%} +{{ tty }} +{% endfor -%} +{% endif -%} diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/host/templates/tally2 b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/host/templates/tally2 new file mode 100644 index 0000000000000000000000000000000000000000..d9620299c55e51abbee1017a227c217cd4a9fd33 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/host/templates/tally2 @@ -0,0 +1,14 @@ +############################################################################### +# WARNING: This configuration file is maintained by Juju. Local changes may +# be overwritten. +############################################################################### +Name: tally2 lockout after failed attempts enforcement +Default: yes +Priority: 1024 +Conflicts: cracklib +Auth-Type: Primary +Auth-Initial: + required pam_tally2.so deny={{ auth_retries }} onerr=fail unlock_time={{ auth_lockout_time }} +Account-Type: Primary +Account-Initial: + required pam_tally2.so diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/mysql/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/mysql/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..58bebd846bd6fa648cfab6ab1056ad10d8415453 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/mysql/__init__.py @@ -0,0 +1,17 @@ +# Copyright 2016 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +from os import path + +TEMPLATES_DIR = path.join(path.dirname(__file__), 'templates') diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/mysql/checks/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/mysql/checks/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..1990d8513bbeef067a8d9a2168e1952efb2961dc --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/mysql/checks/__init__.py @@ -0,0 +1,29 @@ +# Copyright 2016 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +from charmhelpers.core.hookenv import ( + log, + DEBUG, +) +from charmhelpers.contrib.hardening.mysql.checks import config + + +def run_mysql_checks(): + log("Starting MySQL hardening checks.", level=DEBUG) + checks = config.get_audits() + for check in checks: + log("Running '%s' check" % (check.__class__.__name__), level=DEBUG) + check.ensure_compliance() + + log("MySQL hardening checks complete.", level=DEBUG) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/mysql/checks/config.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/mysql/checks/config.py new file mode 100644 index 0000000000000000000000000000000000000000..a79f33b74a5c2972a82f0b4d8de8d1073dc293ed --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/mysql/checks/config.py @@ -0,0 +1,87 @@ +# Copyright 2016 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import six +import subprocess + +from charmhelpers.core.hookenv import ( + log, + WARNING, +) +from charmhelpers.contrib.hardening.audits.file import ( + FilePermissionAudit, + DirectoryPermissionAudit, + TemplatedFile, +) +from charmhelpers.contrib.hardening.mysql import TEMPLATES_DIR +from charmhelpers.contrib.hardening import utils + + +def get_audits(): + """Get MySQL hardening config audits. + + :returns: dictionary of audits + """ + if subprocess.call(['which', 'mysql'], stdout=subprocess.PIPE) != 0: + log("MySQL does not appear to be installed on this node - " + "skipping mysql hardening", level=WARNING) + return [] + + settings = utils.get_settings('mysql') + hardening_settings = settings['hardening'] + my_cnf = hardening_settings['mysql-conf'] + + audits = [ + FilePermissionAudit(paths=[my_cnf], user='root', + group='root', mode=0o0600), + + TemplatedFile(hardening_settings['hardening-conf'], + MySQLConfContext(), + TEMPLATES_DIR, + mode=0o0750, + user='mysql', + group='root', + service_actions=[{'service': 'mysql', + 'actions': ['restart']}]), + + # MySQL and Percona charms do not allow configuration of the + # data directory, so use the default. + DirectoryPermissionAudit('/var/lib/mysql', + user='mysql', + group='mysql', + recursive=False, + mode=0o755), + + DirectoryPermissionAudit('/etc/mysql', + user='root', + group='root', + recursive=False, + mode=0o700), + ] + + return audits + + +class MySQLConfContext(object): + """Defines the set of key/value pairs to set in a mysql config file. + + This context, when called, will return a dictionary containing the + key/value pairs of setting to specify in the + /etc/mysql/conf.d/hardening.cnf file. 
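+
+    For illustration (hypothetical setting names and values), the returned
+    context has the shape::
+
+        {'mysql_settings': [('local-infile', '0'),
+                            ('automatic-sp-privileges', 'False')]}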
+ """ + def __call__(self): + settings = utils.get_settings('mysql') + # Translate for python3 + return {'mysql_settings': + [(k, v) for k, v in six.iteritems(settings['security'])]} diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/mysql/templates/hardening.cnf b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/mysql/templates/hardening.cnf new file mode 100644 index 0000000000000000000000000000000000000000..8242586cd66360b7e6ae33f13018363b95cd4ea9 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/mysql/templates/hardening.cnf @@ -0,0 +1,12 @@ +############################################################################### +# WARNING: This configuration file is maintained by Juju. Local changes may +# be overwritten. +############################################################################### +[mysqld] +{% for setting, value in mysql_settings -%} +{% if value == 'True' -%} +{{ setting }} +{% elif value != 'None' and value != None -%} +{{ setting }} = {{ value }} +{% endif -%} +{% endfor -%} diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/ssh/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/ssh/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..58bebd846bd6fa648cfab6ab1056ad10d8415453 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/ssh/__init__.py @@ -0,0 +1,17 @@ +# Copyright 2016 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +from os import path + +TEMPLATES_DIR = path.join(path.dirname(__file__), 'templates') diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/ssh/checks/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/ssh/checks/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..edaf484b39f8c7353cebb2f4b68944c6493ba7b3 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/ssh/checks/__init__.py @@ -0,0 +1,29 @@ +# Copyright 2016 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +from charmhelpers.core.hookenv import ( + log, + DEBUG, +) +from charmhelpers.contrib.hardening.ssh.checks import config + + +def run_ssh_checks(): + log("Starting SSH hardening checks.", level=DEBUG) + checks = config.get_audits() + for check in checks: + log("Running '%s' check" % (check.__class__.__name__), level=DEBUG) + check.ensure_compliance() + + log("SSH hardening checks complete.", level=DEBUG) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/ssh/checks/config.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/ssh/checks/config.py new file mode 100644 index 0000000000000000000000000000000000000000..41bed2d1e7b031182edcf62710876e4073dfbc6e --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/ssh/checks/config.py @@ -0,0 +1,435 @@ +# Copyright 2016 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import os + +from charmhelpers.contrib.network.ip import ( + get_address_in_network, + get_iface_addr, + is_ip, +) +from charmhelpers.core.hookenv import ( + log, + DEBUG, +) +from charmhelpers.fetch import ( + apt_install, + apt_update, +) +from charmhelpers.core.host import ( + lsb_release, + CompareHostReleases, +) +from charmhelpers.contrib.hardening.audits.file import ( + TemplatedFile, + FileContentAudit, +) +from charmhelpers.contrib.hardening.ssh import TEMPLATES_DIR +from charmhelpers.contrib.hardening import utils + + +def get_audits(): + """Get SSH hardening config audits. 
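+
+    The returned audits cover both the client (/etc/ssh/ssh_config) and the
+    server (/etc/ssh/sshd_config): a templated file plus a file content
+    audit for each.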
+
+    :returns: list of audits
+    """
+    audits = [SSHConfig(), SSHDConfig(), SSHConfigFileContentAudit(),
+              SSHDConfigFileContentAudit()]
+    return audits
+
+
+class SSHConfigContext(object):
+
+    type = 'client'
+
+    def get_macs(self, allow_weak_mac):
+        if allow_weak_mac:
+            weak_macs = 'weak'
+        else:
+            weak_macs = 'default'
+
+        default = 'hmac-sha2-512,hmac-sha2-256,hmac-ripemd160'
+        macs = {'default': default,
+                'weak': default + ',hmac-sha1'}
+
+        default = ('hmac-sha2-512-etm@openssh.com,'
+                   'hmac-sha2-256-etm@openssh.com,'
+                   'hmac-ripemd160-etm@openssh.com,umac-128-etm@openssh.com,'
+                   'hmac-sha2-512,hmac-sha2-256,hmac-ripemd160')
+        macs_66 = {'default': default,
+                   'weak': default + ',hmac-sha1'}
+
+        # Use newer MACs on Ubuntu Trusty and above
+        _release = lsb_release()['DISTRIB_CODENAME'].lower()
+        if CompareHostReleases(_release) >= 'trusty':
+            log("Detected Ubuntu 14.04 or newer, using new MACs", level=DEBUG)
+            macs = macs_66
+
+        return macs[weak_macs]
+
+    def get_kexs(self, allow_weak_kex):
+        if allow_weak_kex:
+            weak_kex = 'weak'
+        else:
+            weak_kex = 'default'
+
+        default = 'diffie-hellman-group-exchange-sha256'
+        weak = (default + ',diffie-hellman-group14-sha1,'
+                'diffie-hellman-group-exchange-sha1,'
+                'diffie-hellman-group1-sha1')
+        kex = {'default': default,
+               'weak': weak}
+
+        default = ('curve25519-sha256@libssh.org,'
+                   'diffie-hellman-group-exchange-sha256')
+        weak = (default + ',diffie-hellman-group14-sha1,'
+                'diffie-hellman-group-exchange-sha1,'
+                'diffie-hellman-group1-sha1')
+        kex_66 = {'default': default,
+                  'weak': weak}
+
+        # Use newer key exchange algorithms on Ubuntu Trusty and above
+        _release = lsb_release()['DISTRIB_CODENAME'].lower()
+        if CompareHostReleases(_release) >= 'trusty':
+            log('Detected Ubuntu 14.04 or newer, using new key exchange '
+                'algorithms', level=DEBUG)
+            kex = kex_66
+
+        return kex[weak_kex]
+
+    def get_ciphers(self, cbc_required):
+        if cbc_required:
+            weak_ciphers = 'weak'
+        else:
+            weak_ciphers = 'default'
+
+        default = 'aes256-ctr,aes192-ctr,aes128-ctr'
+        cipher = {'default': default,
+                  'weak': default + ',aes256-cbc,aes192-cbc,aes128-cbc'}
+
+        default = ('chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,'
+                   'aes128-gcm@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr')
+        ciphers_66 = {'default': default,
+                      'weak': default + ',aes256-cbc,aes192-cbc,aes128-cbc'}
+
+        # Use newer ciphers on Ubuntu Trusty and above
+        _release = lsb_release()['DISTRIB_CODENAME'].lower()
+        if CompareHostReleases(_release) >= 'trusty':
+            log('Detected Ubuntu 14.04 or newer, using new ciphers',
+                level=DEBUG)
+            cipher = ciphers_66
+
+        return cipher[weak_ciphers]
+
+    def get_listening(self, listen=['0.0.0.0']):
+        """Return a list of addresses that SSH can listen on.
+
+        Turns input into a sensible list of IPs SSH can listen on. Input
+        must be a python list of interface names, IPs and/or CIDRs.
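+
+        For example (illustrative values), ['eth0', '10.0.0.0/24'] would
+        resolve to something like ['10.0.0.5'] on a host whose eth0 holds
+        an address in that network.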
+
+        :param listen: list of IPs, CIDRs, interface names
+
+        :returns: list of IPs available on the host
+        """
+        if listen == ['0.0.0.0']:
+            return listen
+
+        value = []
+        for network in listen:
+            try:
+                ip = get_address_in_network(network=network, fatal=True)
+            except ValueError:
+                if is_ip(network):
+                    ip = network
+                else:
+                    try:
+                        ip = get_iface_addr(iface=network, fatal=False)[0]
+                    except IndexError:
+                        continue
+            value.append(ip)
+        if value == []:
+            return ['0.0.0.0']
+        return value
+
+    def __call__(self):
+        settings = utils.get_settings('ssh')
+        if settings['common']['network_ipv6_enable']:
+            addr_family = 'any'
+        else:
+            addr_family = 'inet'
+
+        ctxt = {
+            'addr_family': addr_family,
+            'remote_hosts': settings['common']['remote_hosts'],
+            'password_auth_allowed':
+            settings['client']['password_authentication'],
+            'ports': settings['common']['ports'],
+            'ciphers': self.get_ciphers(settings['client']['cbc_required']),
+            'macs': self.get_macs(settings['client']['weak_hmac']),
+            'kexs': self.get_kexs(settings['client']['weak_kex']),
+            'roaming': settings['client']['roaming'],
+        }
+        return ctxt
+
+
+class SSHConfig(TemplatedFile):
+    def __init__(self):
+        path = '/etc/ssh/ssh_config'
+        super(SSHConfig, self).__init__(path=path,
+                                        template_dir=TEMPLATES_DIR,
+                                        context=SSHConfigContext(),
+                                        user='root',
+                                        group='root',
+                                        mode=0o0644)
+
+    def pre_write(self):
+        settings = utils.get_settings('ssh')
+        apt_update(fatal=True)
+        apt_install(settings['client']['package'])
+        if not os.path.exists('/etc/ssh'):
+            os.makedirs('/etc/ssh')
+        # NOTE: don't recurse
+        utils.ensure_permissions('/etc/ssh', 'root', 'root', 0o0755,
+                                 maxdepth=0)
+
+    def post_write(self):
+        # NOTE: don't recurse
+        utils.ensure_permissions('/etc/ssh', 'root', 'root', 0o0755,
+                                 maxdepth=0)
+
+
+class SSHDConfigContext(SSHConfigContext):
+
+    type = 'server'
+
+    def __call__(self):
+        settings = utils.get_settings('ssh')
+        if settings['common']['network_ipv6_enable']:
+            addr_family = 'any'
+        else:
+            addr_family = 'inet'
+
+        ctxt = {
+            'ssh_ip': self.get_listening(settings['server']['listen_to']),
+            'password_auth_allowed':
+            settings['server']['password_authentication'],
+            'ports': settings['common']['ports'],
+            'addr_family': addr_family,
+            'ciphers': self.get_ciphers(settings['server']['cbc_required']),
+            'macs': self.get_macs(settings['server']['weak_hmac']),
+            'kexs': self.get_kexs(settings['server']['weak_kex']),
+            'host_key_files': settings['server']['host_key_files'],
+            'allow_root_with_key': settings['server']['allow_root_with_key'],
+            'password_authentication':
+            settings['server']['password_authentication'],
+            'use_priv_sep': settings['server']['use_privilege_separation'],
+            'use_pam': settings['server']['use_pam'],
+            'allow_x11_forwarding': settings['server']['allow_x11_forwarding'],
+            'print_motd': settings['server']['print_motd'],
+            'print_last_log': settings['server']['print_last_log'],
+            'client_alive_interval':
+            settings['server']['alive_interval'],
+            'client_alive_count': settings['server']['alive_count'],
+            'allow_tcp_forwarding': settings['server']['allow_tcp_forwarding'],
+            'allow_agent_forwarding':
+            settings['server']['allow_agent_forwarding'],
+            'deny_users': settings['server']['deny_users'],
+            'allow_users': settings['server']['allow_users'],
+            'deny_groups': settings['server']['deny_groups'],
+            'allow_groups': settings['server']['allow_groups'],
+            'use_dns': settings['server']['use_dns'],
+            'sftp_enable': settings['server']['sftp_enable'],
+            'sftp_group': settings['server']['sftp_group'],
+            'sftp_chroot':
                settings['server']['sftp_chroot'],
+            'max_auth_tries': settings['server']['max_auth_tries'],
+            'max_sessions': settings['server']['max_sessions'],
+        }
+        return ctxt
+
+
+class SSHDConfig(TemplatedFile):
+    def __init__(self):
+        path = '/etc/ssh/sshd_config'
+        super(SSHDConfig, self).__init__(path=path,
+                                         template_dir=TEMPLATES_DIR,
+                                         context=SSHDConfigContext(),
+                                         user='root',
+                                         group='root',
+                                         mode=0o0600,
+                                         service_actions=[{'service': 'ssh',
+                                                           'actions':
+                                                           ['restart']}])
+
+    def pre_write(self):
+        settings = utils.get_settings('ssh')
+        apt_update(fatal=True)
+        apt_install(settings['server']['package'])
+        if not os.path.exists('/etc/ssh'):
+            os.makedirs('/etc/ssh')
+        # NOTE: don't recurse
+        utils.ensure_permissions('/etc/ssh', 'root', 'root', 0o0755,
+                                 maxdepth=0)
+
+    def post_write(self):
+        # NOTE: don't recurse
+        utils.ensure_permissions('/etc/ssh', 'root', 'root', 0o0755,
+                                 maxdepth=0)
+
+
+class SSHConfigFileContentAudit(FileContentAudit):
+    def __init__(self):
+        self.path = '/etc/ssh/ssh_config'
+        super(SSHConfigFileContentAudit, self).__init__(self.path, {})
+
+    def is_compliant(self, *args, **kwargs):
+        self.pass_cases = []
+        self.fail_cases = []
+        settings = utils.get_settings('ssh')
+
+        _release = lsb_release()['DISTRIB_CODENAME'].lower()
+        if CompareHostReleases(_release) >= 'trusty':
+            if not settings['client']['weak_hmac']:
+                self.pass_cases.append(r'^MACs.+,hmac-ripemd160$')
+            else:
+                self.pass_cases.append(r'^MACs.+,hmac-sha1$')
+
+            if settings['client']['weak_kex']:
+                self.fail_cases.append(r'^KexAlgorithms\sdiffie-hellman-group-exchange-sha256[,\s]?')  # noqa
+                self.pass_cases.append(r'^KexAlgorithms\sdiffie-hellman-group14-sha1[,\s]?')  # noqa
+                self.pass_cases.append(r'^KexAlgorithms\sdiffie-hellman-group-exchange-sha1[,\s]?')  # noqa
+                self.pass_cases.append(r'^KexAlgorithms\sdiffie-hellman-group1-sha1[,\s]?')  # noqa
+            else:
+                self.pass_cases.append(r'^KexAlgorithms.+,diffie-hellman-group-exchange-sha256$')  # noqa
+                self.fail_cases.append(r'^KexAlgorithms.*diffie-hellman-group14-sha1[,\s]?')  # noqa
+
+            if settings['client']['cbc_required']:
+                self.pass_cases.append(r'^Ciphers\s.*-cbc[,\s]?')
+                self.fail_cases.append(r'^Ciphers\s.*aes128-ctr[,\s]?')
+                self.fail_cases.append(r'^Ciphers\s.*aes192-ctr[,\s]?')
+                self.fail_cases.append(r'^Ciphers\s.*aes256-ctr[,\s]?')
+            else:
+                self.fail_cases.append(r'^Ciphers\s.*-cbc[,\s]?')
+                self.pass_cases.append(r'^Ciphers\schacha20-poly1305@openssh.com,.+')  # noqa
+                self.pass_cases.append(r'^Ciphers\s.*aes128-ctr$')
+                self.pass_cases.append(r'^Ciphers\s.*aes192-ctr[,\s]?')
+                self.pass_cases.append(r'^Ciphers\s.*aes256-ctr[,\s]?')
+        else:
+            if not settings['client']['weak_hmac']:
+                self.fail_cases.append(r'^MACs.+,hmac-sha1$')
+            else:
+                self.pass_cases.append(r'^MACs.+,hmac-sha1$')
+
+            if settings['client']['weak_kex']:
+                self.fail_cases.append(r'^KexAlgorithms\sdiffie-hellman-group-exchange-sha256[,\s]?')  # noqa
+                self.pass_cases.append(r'^KexAlgorithms\sdiffie-hellman-group14-sha1[,\s]?')  # noqa
+                self.pass_cases.append(r'^KexAlgorithms\sdiffie-hellman-group-exchange-sha1[,\s]?')  # noqa
+                self.pass_cases.append(r'^KexAlgorithms\sdiffie-hellman-group1-sha1[,\s]?')  # noqa
+            else:
+                self.pass_cases.append(r'^KexAlgorithms\sdiffie-hellman-group-exchange-sha256$')  # noqa
+                self.fail_cases.append(r'^KexAlgorithms\sdiffie-hellman-group14-sha1[,\s]?')  # noqa
+                self.fail_cases.append(r'^KexAlgorithms\sdiffie-hellman-group-exchange-sha1[,\s]?')  # noqa
+                self.fail_cases.append(r'^KexAlgorithms\sdiffie-hellman-group1-sha1[,\s]?')  # noqa
+
+            if
settings['client']['cbc_required']: + self.pass_cases.append(r'^Ciphers\s.*-cbc[,\s]?') + self.fail_cases.append(r'^Ciphers\s.*aes128-ctr[,\s]?') + self.fail_cases.append(r'^Ciphers\s.*aes192-ctr[,\s]?') + self.fail_cases.append(r'^Ciphers\s.*aes256-ctr[,\s]?') + else: + self.fail_cases.append(r'^Ciphers\s.*-cbc[,\s]?') + self.pass_cases.append(r'^Ciphers\s.*aes128-ctr[,\s]?') + self.pass_cases.append(r'^Ciphers\s.*aes192-ctr[,\s]?') + self.pass_cases.append(r'^Ciphers\s.*aes256-ctr[,\s]?') + + if settings['client']['roaming']: + self.pass_cases.append(r'^UseRoaming yes$') + else: + self.fail_cases.append(r'^UseRoaming yes$') + + return super(SSHConfigFileContentAudit, self).is_compliant(*args, + **kwargs) + + +class SSHDConfigFileContentAudit(FileContentAudit): + def __init__(self): + self.path = '/etc/ssh/sshd_config' + super(SSHDConfigFileContentAudit, self).__init__(self.path, {}) + + def is_compliant(self, *args, **kwargs): + self.pass_cases = [] + self.fail_cases = [] + settings = utils.get_settings('ssh') + + _release = lsb_release()['DISTRIB_CODENAME'].lower() + if CompareHostReleases(_release) >= 'trusty': + if not settings['server']['weak_hmac']: + self.pass_cases.append(r'^MACs.+,hmac-ripemd160$') + else: + self.pass_cases.append(r'^MACs.+,hmac-sha1$') + + if settings['server']['weak_kex']: + self.fail_cases.append(r'^KexAlgorithms\sdiffie-hellman-group-exchange-sha256[,\s]?') # noqa + self.pass_cases.append(r'^KexAlgorithms\sdiffie-hellman-group14-sha1[,\s]?') # noqa + self.pass_cases.append(r'^KexAlgorithms\sdiffie-hellman-group-exchange-sha1[,\s]?') # noqa + self.pass_cases.append(r'^KexAlgorithms\sdiffie-hellman-group1-sha1[,\s]?') # noqa + else: + self.pass_cases.append(r'^KexAlgorithms.+,diffie-hellman-group-exchange-sha256$') # noqa + self.fail_cases.append(r'^KexAlgorithms.*diffie-hellman-group14-sha1[,\s]?') # noqa + + if settings['server']['cbc_required']: + self.pass_cases.append(r'^Ciphers\s.*-cbc[,\s]?') + self.fail_cases.append(r'^Ciphers\s.*aes128-ctr[,\s]?') + self.fail_cases.append(r'^Ciphers\s.*aes192-ctr[,\s]?') + self.fail_cases.append(r'^Ciphers\s.*aes256-ctr[,\s]?') + else: + self.fail_cases.append(r'^Ciphers\s.*-cbc[,\s]?') + self.pass_cases.append(r'^Ciphers\schacha20-poly1305@openssh.com,.+') # noqa + self.pass_cases.append(r'^Ciphers\s.*aes128-ctr$') + self.pass_cases.append(r'^Ciphers\s.*aes192-ctr[,\s]?') + self.pass_cases.append(r'^Ciphers\s.*aes256-ctr[,\s]?') + else: + if not settings['server']['weak_hmac']: + self.pass_cases.append(r'^MACs.+,hmac-ripemd160$') + else: + self.pass_cases.append(r'^MACs.+,hmac-sha1$') + + if settings['server']['weak_kex']: + self.fail_cases.append(r'^KexAlgorithms\sdiffie-hellman-group-exchange-sha256[,\s]?') # noqa + self.pass_cases.append(r'^KexAlgorithms\sdiffie-hellman-group14-sha1[,\s]?') # noqa + self.pass_cases.append(r'^KexAlgorithms\sdiffie-hellman-group-exchange-sha1[,\s]?') # noqa + self.pass_cases.append(r'^KexAlgorithms\sdiffie-hellman-group1-sha1[,\s]?') # noqa + else: + self.pass_cases.append(r'^KexAlgorithms\sdiffie-hellman-group-exchange-sha256$') # noqa + self.fail_cases.append(r'^KexAlgorithms\sdiffie-hellman-group14-sha1[,\s]?') # noqa + self.fail_cases.append(r'^KexAlgorithms\sdiffie-hellman-group-exchange-sha1[,\s]?') # noqa + self.fail_cases.append(r'^KexAlgorithms\sdiffie-hellman-group1-sha1[,\s]?') # noqa + + if settings['server']['cbc_required']: + self.pass_cases.append(r'^Ciphers\s.*-cbc[,\s]?') + self.fail_cases.append(r'^Ciphers\s.*aes128-ctr[,\s]?') + 
self.fail_cases.append(r'^Ciphers\s.*aes192-ctr[,\s]?') + self.fail_cases.append(r'^Ciphers\s.*aes256-ctr[,\s]?') + else: + self.fail_cases.append(r'^Ciphers\s.*-cbc[,\s]?') + self.pass_cases.append(r'^Ciphers\s.*aes128-ctr[,\s]?') + self.pass_cases.append(r'^Ciphers\s.*aes192-ctr[,\s]?') + self.pass_cases.append(r'^Ciphers\s.*aes256-ctr[,\s]?') + + if settings['server']['sftp_enable']: + self.pass_cases.append(r'^Subsystem\ssftp') + else: + self.fail_cases.append(r'^Subsystem\ssftp') + + return super(SSHDConfigFileContentAudit, self).is_compliant(*args, + **kwargs) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/ssh/templates/ssh_config b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/ssh/templates/ssh_config new file mode 100644 index 0000000000000000000000000000000000000000..9742d8e2a32cd5da01a9dcb691a5a1201ed93050 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/ssh/templates/ssh_config @@ -0,0 +1,70 @@ +############################################################################### +# WARNING: This configuration file is maintained by Juju. Local changes may +# be overwritten. +############################################################################### +# This is the ssh client system-wide configuration file. See +# ssh_config(5) for more information. This file provides defaults for +# users, and the values can be changed in per-user configuration files +# or on the command line. + +# Configuration data is parsed as follows: +# 1. command line options +# 2. user-specific file +# 3. system-wide file +# Any configuration value is only changed the first time it is set. +# Thus, host-specific definitions should be at the beginning of the +# configuration file, and defaults at the end. + +# Site-wide defaults for some commonly used options. For a comprehensive +# list of available options, their meanings and defaults, please see the +# ssh_config(5) man page. + +# Restrict the following configuration to be limited to this Host. 
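+# (Illustrative rendering, assuming remote_hosts = ['10.0.0.1', '10.0.0.2']:
+#     Host 10.0.0.1 10.0.0.2
+# the hardened defaults below then apply only to those hosts.)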
+{% if remote_hosts -%} +Host {{ ' '.join(remote_hosts) }} +{% endif %} +ForwardAgent no +ForwardX11 no +ForwardX11Trusted yes +RhostsRSAAuthentication no +RSAAuthentication yes +PasswordAuthentication {{ password_auth_allowed }} +HostbasedAuthentication no +GSSAPIAuthentication no +GSSAPIDelegateCredentials no +GSSAPIKeyExchange no +GSSAPITrustDNS no +BatchMode no +CheckHostIP yes +AddressFamily {{ addr_family }} +ConnectTimeout 0 +StrictHostKeyChecking ask +IdentityFile ~/.ssh/identity +IdentityFile ~/.ssh/id_rsa +IdentityFile ~/.ssh/id_dsa +# The port at the destination should be defined +{% for port in ports -%} +Port {{ port }} +{% endfor %} +Protocol 2 +Cipher 3des +{% if ciphers -%} +Ciphers {{ ciphers }} +{%- endif %} +{% if macs -%} +MACs {{ macs }} +{%- endif %} +{% if kexs -%} +KexAlgorithms {{ kexs }} +{%- endif %} +EscapeChar ~ +Tunnel no +TunnelDevice any:any +PermitLocalCommand no +VisualHostKey no +RekeyLimit 1G 1h +SendEnv LANG LC_* +HashKnownHosts yes +{% if roaming -%} +UseRoaming {{ roaming }} +{% endif %} diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/ssh/templates/sshd_config b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/ssh/templates/sshd_config new file mode 100644 index 0000000000000000000000000000000000000000..5f87298a8119bcab1d2578bcaefd068e5af167c4 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/ssh/templates/sshd_config @@ -0,0 +1,159 @@ +############################################################################### +# WARNING: This configuration file is maintained by Juju. Local changes may +# be overwritten. +############################################################################### +# Package generated configuration file +# See the sshd_config(5) manpage for details + +# What ports, IPs and protocols we listen for +{% for port in ports -%} +Port {{ port }} +{% endfor -%} +AddressFamily {{ addr_family }} +# Use these options to restrict which interfaces/protocols sshd will bind to +{% if ssh_ip -%} +{% for ip in ssh_ip -%} +ListenAddress {{ ip }} +{% endfor %} +{%- else -%} +ListenAddress :: +ListenAddress 0.0.0.0 +{% endif -%} +Protocol 2 +{% if ciphers -%} +Ciphers {{ ciphers }} +{% endif -%} +{% if macs -%} +MACs {{ macs }} +{% endif -%} +{% if kexs -%} +KexAlgorithms {{ kexs }} +{% endif -%} +# HostKeys for protocol version 2 +{% for keyfile in host_key_files -%} +HostKey {{ keyfile }} +{% endfor -%} + +# Privilege Separation is turned on for security +{% if use_priv_sep -%} +UsePrivilegeSeparation {{ use_priv_sep }} +{% endif -%} + +# Lifetime and size of ephemeral version 1 server key +KeyRegenerationInterval 3600 +ServerKeyBits 1024 + +# Logging +SyslogFacility AUTH +LogLevel VERBOSE + +# Authentication: +LoginGraceTime 30s +{% if allow_root_with_key -%} +PermitRootLogin without-password +{% else -%} +PermitRootLogin no +{% endif %} +PermitTunnel no +PermitUserEnvironment no +StrictModes yes + +RSAAuthentication yes +PubkeyAuthentication yes +AuthorizedKeysFile %h/.ssh/authorized_keys + +# Don't read the user's ~/.rhosts and ~/.shosts files +IgnoreRhosts yes +# For this to work you will also need host keys in /etc/ssh_known_hosts +RhostsRSAAuthentication no +# similar for protocol version 2 +HostbasedAuthentication no +# Uncomment if you don't trust ~/.ssh/known_hosts for RhostsRSAAuthentication +IgnoreUserKnownHosts yes + +# To enable empty passwords, change to yes (NOT RECOMMENDED) +PermitEmptyPasswords 
no
+
+# Change to yes to enable challenge-response passwords (beware issues with
+# some PAM modules and threads)
+ChallengeResponseAuthentication no
+
+# Change to no to disable tunnelled clear text passwords
+PasswordAuthentication {{ password_authentication }}
+
+# Kerberos options
+KerberosAuthentication no
+KerberosGetAFSToken no
+KerberosOrLocalPasswd no
+KerberosTicketCleanup yes
+
+# GSSAPI options
+GSSAPIAuthentication no
+GSSAPICleanupCredentials yes
+
+X11Forwarding {{ allow_x11_forwarding }}
+X11DisplayOffset 10
+X11UseLocalhost yes
+GatewayPorts no
+PrintMotd {{ print_motd }}
+PrintLastLog {{ print_last_log }}
+TCPKeepAlive no
+UseLogin no
+
+ClientAliveInterval {{ client_alive_interval }}
+ClientAliveCountMax {{ client_alive_count }}
+AllowTcpForwarding {{ allow_tcp_forwarding }}
+AllowAgentForwarding {{ allow_agent_forwarding }}
+
+MaxStartups 10:30:100
+#Banner /etc/issue.net
+
+# Allow client to pass locale environment variables
+AcceptEnv LANG LC_*
+
+# Set this to 'yes' to enable PAM authentication, account processing,
+# and session processing. If this is enabled, PAM authentication will
+# be allowed through the ChallengeResponseAuthentication and
+# PasswordAuthentication. Depending on your PAM configuration,
+# PAM authentication via ChallengeResponseAuthentication may bypass
+# the setting of "PermitRootLogin without-password".
+# If you just want the PAM account and session checks to run without
+# PAM authentication, then enable this but set PasswordAuthentication
+# and ChallengeResponseAuthentication to 'no'.
+UsePAM {{ use_pam }}
+
+{% if deny_users -%}
+DenyUsers {{ deny_users }}
+{% endif -%}
+{% if allow_users -%}
+AllowUsers {{ allow_users }}
+{% endif -%}
+{% if deny_groups -%}
+DenyGroups {{ deny_groups }}
+{% endif -%}
+{% if allow_groups -%}
+AllowGroups {{ allow_groups }}
+{% endif -%}
+UseDNS {{ use_dns }}
+MaxAuthTries {{ max_auth_tries }}
+MaxSessions {{ max_sessions }}
+
+{% if sftp_enable -%}
+# Configuration, in case SFTP is used
+## override default of no subsystems
+## Subsystem sftp /opt/app/openssh5/libexec/sftp-server
+Subsystem sftp internal-sftp -l VERBOSE
+
+## These lines must appear at the *end* of sshd_config
+Match Group {{ sftp_group }}
+ForceCommand internal-sftp -l VERBOSE
+ChrootDirectory {{ sftp_chroot }}
+{% else -%}
+# Configuration, in case SFTP is used
+## override default of no subsystems
+## Subsystem sftp /opt/app/openssh5/libexec/sftp-server
+## These lines must appear at the *end* of sshd_config
+Match Group sftponly
+ForceCommand internal-sftp -l VERBOSE
+ChrootDirectory /sftpchroot/home/%u
+{% endif %}
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/templating.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/templating.py
new file mode 100644
index 0000000000000000000000000000000000000000..5b6765f7edeee4bed739fd354c6f7bdf0a8c952e
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/templating.py
@@ -0,0 +1,73 @@
+# Copyright 2016 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and +# limitations under the License. + +import os +import six + +from charmhelpers.core.hookenv import ( + log, + DEBUG, + WARNING, +) + +try: + from jinja2 import FileSystemLoader, Environment +except ImportError: + from charmhelpers.fetch import apt_install + from charmhelpers.fetch import apt_update + apt_update(fatal=True) + if six.PY2: + apt_install('python-jinja2', fatal=True) + else: + apt_install('python3-jinja2', fatal=True) + from jinja2 import FileSystemLoader, Environment + + +# NOTE: function separated from main rendering code to facilitate easier +# mocking in unit tests. +def write(path, data): + with open(path, 'wb') as out: + out.write(data) + + +def get_template_path(template_dir, path): + """Returns the template file which would be used to render the path. + + The path to the template file is returned. + :param template_dir: the directory the templates are located in + :param path: the file path to be written to. + :returns: path to the template file + """ + return os.path.join(template_dir, os.path.basename(path)) + + +def render_and_write(template_dir, path, context): + """Renders the specified template into the file. + + :param template_dir: the directory to load the template from + :param path: the path to write the templated contents to + :param context: the parameters to pass to the rendering engine + """ + env = Environment(loader=FileSystemLoader(template_dir)) + template_file = os.path.basename(path) + template = env.get_template(template_file) + log('Rendering from template: %s' % template.name, level=DEBUG) + rendered_content = template.render(context) + if not rendered_content: + log("Render returned None - skipping '%s'" % path, + level=WARNING) + return + + write(path, rendered_content.encode('utf-8').strip()) + log('Wrote template %s' % path, level=DEBUG) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/utils.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/utils.py new file mode 100644 index 0000000000000000000000000000000000000000..ff7485c28c8748ba366dba54f1e3b8f7e6a7c619 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/hardening/utils.py @@ -0,0 +1,155 @@ +# Copyright 2016 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import glob +import grp +import os +import pwd +import six +import yaml + +from charmhelpers.core.hookenv import ( + log, + DEBUG, + INFO, + WARNING, + ERROR, +) + + +# Global settings cache. Since each hook fire entails a fresh module import it +# is safe to hold this in memory and not risk missing config changes (since +# they will result in a new hook fire and thus re-import). +__SETTINGS__ = {} + + +def _get_defaults(modules): + """Load the default config for the provided modules. + + :param modules: stack modules config defaults to lookup. + :returns: modules default config dictionary. 
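+
+    For example (path layout as created in this diff),
+    ``_get_defaults('mysql')`` reads ``defaults/mysql.yaml`` from the
+    directory containing this module.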
+    """
+    default = os.path.join(os.path.dirname(__file__),
+                           'defaults/%s.yaml' % (modules))
+    return yaml.safe_load(open(default))
+
+
+def _get_schema(modules):
+    """Load the config schema for the provided modules.
+
+    NOTE: this schema is intended to have a 1-1 relationship with the keys
+    in the default config and is used as a means to verify valid overrides
+    provided by the user.
+
+    :param modules: stack modules config schema to lookup.
+    :returns: modules default schema dictionary.
+    """
+    schema = os.path.join(os.path.dirname(__file__),
+                          'defaults/%s.yaml.schema' % (modules))
+    return yaml.safe_load(open(schema))
+
+
+def _get_user_provided_overrides(modules):
+    """Load user-provided config overrides.
+
+    :param modules: stack modules to lookup in user overrides yaml file.
+    :returns: overrides dictionary.
+    """
+    overrides = os.path.join(os.environ['JUJU_CHARM_DIR'],
+                             'hardening.yaml')
+    if os.path.exists(overrides):
+        log("Found user-provided config overrides file '%s'" %
+            (overrides), level=DEBUG)
+        settings = yaml.safe_load(open(overrides))
+        if settings and settings.get(modules):
+            log("Applying '%s' overrides" % (modules), level=DEBUG)
+            return settings.get(modules)
+
+        log("No overrides found for '%s'" % (modules), level=DEBUG)
+    else:
+        log("No hardening config overrides file '%s' found in charm "
+            "root dir" % (overrides), level=DEBUG)
+
+    return {}
+
+
+def _apply_overrides(settings, overrides, schema):
+    """Get overrides config overlayed onto modules defaults.
+
+    :param settings: modules default config.
+    :param overrides: user-provided config overrides.
+    :param schema: config schema used to validate override keys.
+    :returns: dictionary of modules config with user overrides applied.
+    """
+    if overrides:
+        for k, v in six.iteritems(overrides):
+            if k in schema:
+                if schema[k] is None:
+                    settings[k] = v
+                elif type(schema[k]) is dict:
+                    settings[k] = _apply_overrides(settings[k], overrides[k],
+                                                   schema[k])
+                else:
+                    msg = ("Unexpected type found in schema '%s'" %
+                           type(schema[k]))
+                    log(msg, level=ERROR)
+                    raise Exception(msg)
+            else:
+                log("Unknown override key '%s' - ignoring" % (k), level=INFO)
+
+    return settings
+
+
+def get_settings(modules):
+    global __SETTINGS__
+    if modules in __SETTINGS__:
+        return __SETTINGS__[modules]
+
+    schema = _get_schema(modules)
+    settings = _get_defaults(modules)
+    overrides = _get_user_provided_overrides(modules)
+    __SETTINGS__[modules] = _apply_overrides(settings, overrides, schema)
+    return __SETTINGS__[modules]
+
+
+def ensure_permissions(path, user, group, permissions, maxdepth=-1):
+    """Ensure permissions for path.
+
+    If path is a file, apply to file and return. If path is a directory,
+    apply recursively (if required) to directory contents and return.
+
+    :param path: path to the file or directory
+    :param user: user name
+    :param group: group name
+    :param permissions: octal permissions
+    :param maxdepth: maximum recursion depth. A negative maxdepth allows
+                     infinite recursion and maxdepth=0 means no recursion.
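+
+    Example (illustrative): ``ensure_permissions('/etc/ssh', 'root', 'root',
+    0o0755, maxdepth=0)`` applies owner, group and mode to /etc/ssh itself
+    without descending into its contents.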
+ :returns: None + """ + if not os.path.exists(path): + log("File '%s' does not exist - cannot set permissions" % (path), + level=WARNING) + return + + _user = pwd.getpwnam(user) + os.chown(path, _user.pw_uid, grp.getgrnam(group).gr_gid) + os.chmod(path, permissions) + + if maxdepth == 0: + log("Max recursion depth reached - skipping further recursion", + level=DEBUG) + return + elif maxdepth > 0: + maxdepth -= 1 + + if os.path.isdir(path): + contents = glob.glob("%s/*" % (path)) + for c in contents: + ensure_permissions(c, user=user, group=group, + permissions=permissions, maxdepth=maxdepth) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/mellanox/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/mellanox/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..9b088de84e4b288b551603816fc10eebfa7b1503 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/mellanox/__init__.py @@ -0,0 +1,13 @@ +# Copyright 2016 Canonical Ltd +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/mellanox/infiniband.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/mellanox/infiniband.py new file mode 100644 index 0000000000000000000000000000000000000000..0edb2314114738c79859f51e505de05e5c7c8fcc --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/mellanox/infiniband.py @@ -0,0 +1,153 @@ +#!/usr/bin/env python +# -*- coding: utf-8 -*- + +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+
+__author__ = "Jorge Niedbalski "
+
+import six
+
+from charmhelpers.fetch import (
+    apt_install,
+    apt_update,
+)
+
+from charmhelpers.core.hookenv import (
+    log,
+    INFO,
+)
+
+try:
+    from netifaces import interfaces as network_interfaces
+except ImportError:
+    if six.PY2:
+        apt_install('python-netifaces')
+    else:
+        apt_install('python3-netifaces')
+    from netifaces import interfaces as network_interfaces
+
+import os
+import re
+import subprocess
+
+from charmhelpers.core.kernel import modprobe
+
+REQUIRED_MODULES = (
+    "mlx4_ib",
+    "mlx4_en",
+    "mlx4_core",
+    "ib_ipath",
+    "ib_mthca",
+    "ib_srpt",
+    "ib_srp",
+    "ib_ucm",
+    "ib_isert",
+    "ib_iser",
+    "ib_ipoib",
+    "ib_cm",
+    "ib_uverbs",
+    "ib_umad",
+    "ib_sa",
+    "ib_mad",
+    "ib_core",
+    "ib_addr",
+    "rdma_ucm",
+)
+
+REQUIRED_PACKAGES = (
+    "ibutils",
+    "infiniband-diags",
+    "ibverbs-utils",
+)
+
+IPOIB_DRIVERS = (
+    "ib_ipoib",
+)
+
+ABI_VERSION_FILE = "/sys/class/infiniband_mad/abi_version"
+
+
+class DeviceInfo(object):
+    pass
+
+
+def install_packages():
+    apt_update()
+    apt_install(REQUIRED_PACKAGES, fatal=True)
+
+
+def load_modules():
+    for module in REQUIRED_MODULES:
+        modprobe(module, persist=True)
+
+
+def is_enabled():
+    """Check if infiniband is loaded on the system"""
+    return os.path.exists(ABI_VERSION_FILE)
+
+
+def stat():
+    """Return full output of ibstat"""
+    return subprocess.check_output(["ibstat"])
+
+
+def devices():
+    """Returns a list of IB enabled devices"""
+    return subprocess.check_output(['ibstat', '-l']).splitlines()
+
+
+def device_info(device):
+    """Returns a DeviceInfo object with the current device settings"""
+
+    status = subprocess.check_output(
+        ['ibstat', device, '-s']).decode('UTF-8').splitlines()
+
+    regexes = {
+        "CA type: (.*)": "device_type",
+        "Number of ports: (.*)": "num_ports",
+        "Firmware version: (.*)": "fw_ver",
+        "Hardware version: (.*)": "hw_ver",
+        "Node GUID: (.*)": "node_guid",
+        "System image GUID: (.*)": "sys_guid",
+    }
+
+    device = DeviceInfo()
+
+    for line in status:
+        for expression, key in regexes.items():
+            matches = re.search(expression, line)
+            if matches:
+                setattr(device, key, matches.group(1))
+
+    return device
+
+
+def ipoib_interfaces():
+    """Return a list of IPOIB capable ethernet interfaces"""
+    interfaces = []
+
+    for interface in network_interfaces():
+        try:
+            driver = re.search('^driver: (.+)$', subprocess.check_output([
+                'ethtool', '-i',
+                interface]).decode('UTF-8'), re.M).group(1)
+
+            if driver in IPOIB_DRIVERS:
+                interfaces.append(interface)
+        except Exception:
+            log("Skipping interface %s" % interface, level=INFO)
+            continue
+
+    return interfaces
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/network/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/network/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..d7567b863e3a5ad2b7a7f44958b4166e0c3d346b
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/network/__init__.py
@@ -0,0 +1,13 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and +# limitations under the License. diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/network/ip.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/network/ip.py new file mode 100644 index 0000000000000000000000000000000000000000..b13277bb57c9227b1d9dfecf4f6750740e5a262a --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/network/ip.py @@ -0,0 +1,602 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import glob +import re +import subprocess +import six +import socket + +from functools import partial + +from charmhelpers.fetch import apt_install, apt_update +from charmhelpers.core.hookenv import ( + config, + log, + network_get_primary_address, + unit_get, + WARNING, + NoNetworkBinding, +) + +from charmhelpers.core.host import ( + lsb_release, + CompareHostReleases, +) + +try: + import netifaces +except ImportError: + apt_update(fatal=True) + if six.PY2: + apt_install('python-netifaces', fatal=True) + else: + apt_install('python3-netifaces', fatal=True) + import netifaces + +try: + import netaddr +except ImportError: + apt_update(fatal=True) + if six.PY2: + apt_install('python-netaddr', fatal=True) + else: + apt_install('python3-netaddr', fatal=True) + import netaddr + + +def _validate_cidr(network): + try: + netaddr.IPNetwork(network) + except (netaddr.core.AddrFormatError, ValueError): + raise ValueError("Network (%s) is not in CIDR presentation format" % + network) + + +def no_ip_found_error_out(network): + errmsg = ("No IP address found in network(s): %s" % network) + raise ValueError(errmsg) + + +def _get_ipv6_network_from_address(address): + """Get an netaddr.IPNetwork for the given IPv6 address + :param address: a dict as returned by netifaces.ifaddresses + :returns netaddr.IPNetwork: None if the address is a link local or loopback + address + """ + if address['addr'].startswith('fe80') or address['addr'] == "::1": + return None + + prefix = address['netmask'].split("/") + if len(prefix) > 1: + netmask = prefix[1] + else: + netmask = address['netmask'] + return netaddr.IPNetwork("%s/%s" % (address['addr'], + netmask)) + + +def get_address_in_network(network, fallback=None, fatal=False): + """Get an IPv4 or IPv6 address within the network from the host. + + :param network (str): CIDR presentation format. For example, + '192.168.1.0/24'. Supports multiple networks as a space-delimited list. + :param fallback (str): If no address is found, return fallback. + :param fatal (boolean): If no address is found, fallback is not + set and fatal is True then exit(1). 
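+
+    Example (illustrative): ``get_address_in_network('192.168.1.0/24')``
+    returns e.g. ``'192.168.1.10'`` if an interface on this host holds an
+    address inside that network, and None (or the fallback) otherwise.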
+ """ + if network is None: + if fallback is not None: + return fallback + + if fatal: + no_ip_found_error_out(network) + else: + return None + + networks = network.split() or [network] + for network in networks: + _validate_cidr(network) + network = netaddr.IPNetwork(network) + for iface in netifaces.interfaces(): + try: + addresses = netifaces.ifaddresses(iface) + except ValueError: + # If an instance was deleted between + # netifaces.interfaces() run and now, its interfaces are gone + continue + if network.version == 4 and netifaces.AF_INET in addresses: + for addr in addresses[netifaces.AF_INET]: + cidr = netaddr.IPNetwork("%s/%s" % (addr['addr'], + addr['netmask'])) + if cidr in network: + return str(cidr.ip) + + if network.version == 6 and netifaces.AF_INET6 in addresses: + for addr in addresses[netifaces.AF_INET6]: + cidr = _get_ipv6_network_from_address(addr) + if cidr and cidr in network: + return str(cidr.ip) + + if fallback is not None: + return fallback + + if fatal: + no_ip_found_error_out(network) + + return None + + +def is_ipv6(address): + """Determine whether provided address is IPv6 or not.""" + try: + address = netaddr.IPAddress(address) + except netaddr.AddrFormatError: + # probably a hostname - so not an address at all! + return False + + return address.version == 6 + + +def is_address_in_network(network, address): + """ + Determine whether the provided address is within a network range. + + :param network (str): CIDR presentation format. For example, + '192.168.1.0/24'. + :param address: An individual IPv4 or IPv6 address without a net + mask or subnet prefix. For example, '192.168.1.1'. + :returns boolean: Flag indicating whether address is in network. + """ + try: + network = netaddr.IPNetwork(network) + except (netaddr.core.AddrFormatError, ValueError): + raise ValueError("Network (%s) is not in CIDR presentation format" % + network) + + try: + address = netaddr.IPAddress(address) + except (netaddr.core.AddrFormatError, ValueError): + raise ValueError("Address (%s) is not in correct presentation format" % + address) + + if address in network: + return True + else: + return False + + +def _get_for_address(address, key): + """Retrieve an attribute of or the physical interface that + the IP address provided could be bound to. + + :param address (str): An individual IPv4 or IPv6 address without a net + mask or subnet prefix. For example, '192.168.1.1'. + :param key: 'iface' for the physical interface name or an attribute + of the configured interface, for example 'netmask'. + :returns str: Requested attribute or None if address is not bindable. 
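+
+    Example (illustrative): ``_get_for_address('192.168.1.1', 'iface')``
+    might return ``'eth0'``; the partials below expose this as
+    get_iface_for_address and get_netmask_for_address.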
+ """ + address = netaddr.IPAddress(address) + for iface in netifaces.interfaces(): + addresses = netifaces.ifaddresses(iface) + if address.version == 4 and netifaces.AF_INET in addresses: + addr = addresses[netifaces.AF_INET][0]['addr'] + netmask = addresses[netifaces.AF_INET][0]['netmask'] + network = netaddr.IPNetwork("%s/%s" % (addr, netmask)) + cidr = network.cidr + if address in cidr: + if key == 'iface': + return iface + else: + return addresses[netifaces.AF_INET][0][key] + + if address.version == 6 and netifaces.AF_INET6 in addresses: + for addr in addresses[netifaces.AF_INET6]: + network = _get_ipv6_network_from_address(addr) + if not network: + continue + + cidr = network.cidr + if address in cidr: + if key == 'iface': + return iface + elif key == 'netmask' and cidr: + return str(cidr).split('/')[1] + else: + return addr[key] + return None + + +get_iface_for_address = partial(_get_for_address, key='iface') + + +get_netmask_for_address = partial(_get_for_address, key='netmask') + + +def resolve_network_cidr(ip_address): + ''' + Resolves the full address cidr of an ip_address based on + configured network interfaces + ''' + netmask = get_netmask_for_address(ip_address) + return str(netaddr.IPNetwork("%s/%s" % (ip_address, netmask)).cidr) + + +def format_ipv6_addr(address): + """If address is IPv6, wrap it in '[]' otherwise return None. + + This is required by most configuration files when specifying IPv6 + addresses. + """ + if is_ipv6(address): + return "[%s]" % address + + return None + + +def is_ipv6_disabled(): + try: + result = subprocess.check_output( + ['sysctl', 'net.ipv6.conf.all.disable_ipv6'], + stderr=subprocess.STDOUT, + universal_newlines=True) + except subprocess.CalledProcessError: + return True + + return "net.ipv6.conf.all.disable_ipv6 = 1" in result + + +def get_iface_addr(iface='eth0', inet_type='AF_INET', inc_aliases=False, + fatal=True, exc_list=None): + """Return the assigned IP address for a given interface, if any. + + :param iface: network interface on which address(es) are expected to + be found. + :param inet_type: inet address family + :param inc_aliases: include alias interfaces in search + :param fatal: if True, raise exception if address not found + :param exc_list: list of addresses to ignore + :return: list of ip addresses + """ + # Extract nic if passed /dev/ethX + if '/' in iface: + iface = iface.split('/')[-1] + + if not exc_list: + exc_list = [] + + try: + inet_num = getattr(netifaces, inet_type) + except AttributeError: + raise Exception("Unknown inet type '%s'" % str(inet_type)) + + interfaces = netifaces.interfaces() + if inc_aliases: + ifaces = [] + for _iface in interfaces: + if iface == _iface or _iface.split(':')[0] == iface: + ifaces.append(_iface) + + if fatal and not ifaces: + raise Exception("Invalid interface '%s'" % iface) + + ifaces.sort() + else: + if iface not in interfaces: + if fatal: + raise Exception("Interface '%s' not found " % (iface)) + else: + return [] + + else: + ifaces = [iface] + + addresses = [] + for netiface in ifaces: + net_info = netifaces.ifaddresses(netiface) + if inet_num in net_info: + for entry in net_info[inet_num]: + if 'addr' in entry and entry['addr'] not in exc_list: + addresses.append(entry['addr']) + + if fatal and not addresses: + raise Exception("Interface '%s' doesn't have any %s addresses." 
% + (iface, inet_type)) + + return sorted(addresses) + + +get_ipv4_addr = partial(get_iface_addr, inet_type='AF_INET') + + +def get_iface_from_addr(addr): + """Work out on which interface the provided address is configured.""" + for iface in netifaces.interfaces(): + addresses = netifaces.ifaddresses(iface) + for inet_type in addresses: + for _addr in addresses[inet_type]: + _addr = _addr['addr'] + # link local + ll_key = re.compile("(.+)%.*") + raw = re.match(ll_key, _addr) + if raw: + _addr = raw.group(1) + + if _addr == addr: + log("Address '%s' is configured on iface '%s'" % + (addr, iface)) + return iface + + msg = "Unable to infer net iface on which '%s' is configured" % (addr) + raise Exception(msg) + + +def sniff_iface(f): + """Ensure decorated function is called with a value for iface. + + If no iface provided, inject net iface inferred from unit private address. + """ + def iface_sniffer(*args, **kwargs): + if not kwargs.get('iface', None): + kwargs['iface'] = get_iface_from_addr(unit_get('private-address')) + + return f(*args, **kwargs) + + return iface_sniffer + + +@sniff_iface +def get_ipv6_addr(iface=None, inc_aliases=False, fatal=True, exc_list=None, + dynamic_only=True): + """Get assigned IPv6 address for a given interface. + + Returns list of addresses found. If no address found, returns empty list. + + If iface is None, we infer the current primary interface by doing a reverse + lookup on the unit private-address. + + We currently only support scope global IPv6 addresses i.e. non-temporary + addresses. If no global IPv6 address is found, return the first one found + in the ipv6 address list. + + :param iface: network interface on which ipv6 address(es) are expected to + be found. + :param inc_aliases: include alias interfaces in search + :param fatal: if True, raise exception if address not found + :param exc_list: list of addresses to ignore + :param dynamic_only: only recognise dynamic addresses + :return: list of ipv6 addresses + """ + addresses = get_iface_addr(iface=iface, inet_type='AF_INET6', + inc_aliases=inc_aliases, fatal=fatal, + exc_list=exc_list) + + if addresses: + global_addrs = [] + for addr in addresses: + key_scope_link_local = re.compile("^fe80::..(.+)%(.+)") + m = re.match(key_scope_link_local, addr) + if m: + eui_64_mac = m.group(1) + iface = m.group(2) + else: + global_addrs.append(addr) + + if global_addrs: + # Make sure any found global addresses are not temporary + cmd = ['ip', 'addr', 'show', iface] + out = subprocess.check_output(cmd).decode('UTF-8') + if dynamic_only: + key = re.compile("inet6 (.+)/[0-9]+ scope global.* dynamic.*") + else: + key = re.compile("inet6 (.+)/[0-9]+ scope global.*") + + addrs = [] + for line in out.split('\n'): + line = line.strip() + m = re.match(key, line) + if m and 'temporary' not in line: + # Return the first valid address we find + for addr in global_addrs: + if m.group(1) == addr: + if not dynamic_only or \ + m.group(1).endswith(eui_64_mac): + addrs.append(addr) + + if addrs: + return addrs + + if fatal: + raise Exception("Interface '%s' does not have a scope global " + "non-temporary ipv6 address." 
% iface) + + return [] + + +def get_bridges(vnic_dir='/sys/devices/virtual/net'): + """Return a list of bridges on the system.""" + b_regex = "%s/*/bridge" % vnic_dir + return [x.replace(vnic_dir, '').split('/')[1] for x in glob.glob(b_regex)] + + +def get_bridge_nics(bridge, vnic_dir='/sys/devices/virtual/net'): + """Return a list of nics comprising a given bridge on the system.""" + brif_regex = "%s/%s/brif/*" % (vnic_dir, bridge) + return [x.split('/')[-1] for x in glob.glob(brif_regex)] + + +def is_bridge_member(nic): + """Check if a given nic is a member of a bridge.""" + for bridge in get_bridges(): + if nic in get_bridge_nics(bridge): + return True + + return False + + +def is_ip(address): + """ + Returns True if address is a valid IP address. + """ + try: + # Test to see if already an IPv4/IPv6 address + address = netaddr.IPAddress(address) + return True + except (netaddr.AddrFormatError, ValueError): + return False + + +def ns_query(address): + try: + import dns.resolver + except ImportError: + if six.PY2: + apt_install('python-dnspython', fatal=True) + else: + apt_install('python3-dnspython', fatal=True) + import dns.resolver + + if isinstance(address, dns.name.Name): + rtype = 'PTR' + elif isinstance(address, six.string_types): + rtype = 'A' + else: + return None + + try: + answers = dns.resolver.query(address, rtype) + except dns.resolver.NXDOMAIN: + return None + + if answers: + return str(answers[0]) + return None + + +def get_host_ip(hostname, fallback=None): + """ + Resolves the IP for a given hostname, or returns + the input if it is already an IP. + """ + if is_ip(hostname): + return hostname + + ip_addr = ns_query(hostname) + if not ip_addr: + try: + ip_addr = socket.gethostbyname(hostname) + except Exception: + log("Failed to resolve hostname '%s'" % (hostname), + level=WARNING) + return fallback + return ip_addr + + +def get_hostname(address, fqdn=True): + """ + Resolves hostname for given IP, or returns the input + if it is already a hostname. + """ + if is_ip(address): + try: + import dns.reversename + except ImportError: + if six.PY2: + apt_install("python-dnspython", fatal=True) + else: + apt_install("python3-dnspython", fatal=True) + import dns.reversename + + rev = dns.reversename.from_address(address) + result = ns_query(rev) + + if not result: + try: + result = socket.gethostbyaddr(address)[0] + except Exception: + return None + else: + result = address + + if fqdn: + # strip trailing . + if result.endswith('.'): + return result[:-1] + else: + return result + else: + return result.split('.')[0] + + +def port_has_listener(address, port): + """ + Returns True if the address:port is open and being listened to, + else False. + + @param address: an IP address or hostname + @param port: integer port + + Note calls 'zc' via a subprocess shell + """ + cmd = ['nc', '-z', address, str(port)] + result = subprocess.call(cmd) + return not(bool(result)) + + +def assert_charm_supports_ipv6(): + """Check whether we are able to support charms ipv6.""" + release = lsb_release()['DISTRIB_CODENAME'].lower() + if CompareHostReleases(release) < "trusty": + raise Exception("IPv6 is not supported in the charms for Ubuntu " + "versions less than Trusty 14.04") + + +def get_relation_ip(interface, cidr_network=None): + """Return this unit's IP for the given interface. + + Allow for an arbitrary interface to use with network-get to select an IP. + Handle all address selection options including passed cidr network and + IPv6. 
+
+    Usage: get_relation_ip('amqp', cidr_network='10.0.0.0/8')
+
+    @param interface: string name of the relation.
+    @param cidr_network: string CIDR Network to select an address from.
+    @raises Exception if prefer-ipv6 is configured but IPv6 unsupported.
+    @returns IPv6 or IPv4 address
+    """
+    # Select the interface address first
+    # For possible use as a fallback below with get_address_in_network
+    try:
+        # Get the interface specific IP
+        address = network_get_primary_address(interface)
+    except NotImplementedError:
+        # If network-get is not available
+        address = get_host_ip(unit_get('private-address'))
+    except NoNetworkBinding:
+        log("No network binding for {}".format(interface), WARNING)
+        address = get_host_ip(unit_get('private-address'))
+
+    if config('prefer-ipv6'):
+        # Currently IPv6 has priority, eventually we want IPv6 to just be
+        # another network space.
+        assert_charm_supports_ipv6()
+        return get_ipv6_addr()[0]
+    elif cidr_network:
+        # If a specific CIDR network is passed get the address from that
+        # network.
+        return get_address_in_network(cidr_network, address)
+
+    # Return the interface address
+    return address
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/network/ovs/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/network/ovs/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..fd001bc2eaa9b1b511bbd1816e1089521935a50a
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/network/ovs/__init__.py
@@ -0,0 +1,541 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+''' Helpers for interacting with OpenvSwitch '''
+import hashlib
+import subprocess
+import os
+import six
+
+from charmhelpers.fetch import apt_install
+
+
+from charmhelpers.core.hookenv import (
+    log, WARNING, INFO, DEBUG
+)
+from charmhelpers.core.host import (
+    service
+)
+
+
+BRIDGE_TEMPLATE = """\
+# This veth pair is required when neutron data-port is mapped to an existing linux bridge.
lp:1635067 + +auto {linuxbridge_port} +iface {linuxbridge_port} inet manual + pre-up ip link add name {linuxbridge_port} type veth peer name {ovsbridge_port} + pre-up ip link set {ovsbridge_port} master {bridge} + pre-up ip link set {ovsbridge_port} up + up ip link set {linuxbridge_port} up + down ip link del {linuxbridge_port} +""" + +MAX_KERNEL_INTERFACE_NAME_LEN = 15 + + +def get_bridges(): + """Return list of the bridges on the default openvswitch + + :returns: List of bridge names + :rtype: List[str] + :raises: subprocess.CalledProcessError if ovs-vsctl fails + """ + cmd = ["ovs-vsctl", "list-br"] + lines = subprocess.check_output(cmd).decode('utf-8').split("\n") + maybe_bridges = [l.strip() for l in lines] + return [b for b in maybe_bridges if b] + + +def get_bridge_ports(name): + """Return a list the ports on a named bridge + + :param name: the name of the bridge to list + :type name: str + :returns: List of ports on the named bridge + :rtype: List[str] + :raises: subprocess.CalledProcessError if the ovs-vsctl command fails. If + the named bridge doesn't exist, then the exception will be raised. + """ + cmd = ["ovs-vsctl", "--", "list-ports", name] + lines = subprocess.check_output(cmd).decode('utf-8').split("\n") + maybe_ports = [l.strip() for l in lines] + return [p for p in maybe_ports if p] + + +def get_bridges_and_ports_map(): + """Return dictionary of bridge to ports for the default openvswitch + + :returns: a mapping of bridge name to a list of ports. + :rtype: Dict[str, List[str]] + :raises: subprocess.CalledProcessError if any of the underlying ovs-vsctl + command fail. + """ + return {b: get_bridge_ports(b) for b in get_bridges()} + + +def _dict_to_vsctl_set(data, table, entity): + """Helper that takes dictionary and provides ``ovs-vsctl set`` commands + + :param data: Additional data to attach to interface + The keys in the data dictionary map directly to column names in the + OpenvSwitch table specified as defined in DB-SCHEMA [0] referenced in + RFC 7047 [1] + + There are some established conventions for keys in the external-ids + column of various tables, consult the OVS Integration Guide [2] for + more details. + + NOTE(fnordahl): Technically the ``external-ids`` column is called + ``external_ids`` (with an underscore) and we rely on ``ovs-vsctl``'s + behaviour of transforming dashes to underscores for us [3] so we can + have a more pleasant data structure. 
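+
+    Illustrative example (hypothetical bridge and key; shows the yielded
+    command tuples):
+
+        >>> list(_dict_to_vsctl_set({'external-ids': {'charm': 'managed'}},
+        ...                         'bridge', 'br-ex'))
+        [('--', 'set', 'bridge', 'br-ex', 'external-ids:charm=managed')]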
+ + 0: http://www.openvswitch.org/ovs-vswitchd.conf.db.5.pdf + 1: https://tools.ietf.org/html/rfc7047 + 2: http://docs.openvswitch.org/en/latest/topics/integration/ + 3: https://github.com/openvswitch/ovs/blob/ + 20dac08fdcce4b7fda1d07add3b346aa9751cfbc/ + lib/db-ctl-base.c#L189-L215 + :type data: Optional[Dict[str,Union[str,Dict[str,str]]]] + :param table: Name of table to operate on + :type table: str + :param entity: Name of entity to operate on + :type entity: str + :returns: '--' separated ``ovs-vsctl set`` commands + :rtype: Iterator[Tuple[str, str, str, str, str]] + """ + for (k, v) in data.items(): + if isinstance(v, dict): + entries = { + '{}:{}'.format(k, dk): dv for (dk, dv) in v.items()} + else: + entries = {k: v} + for (colk, colv) in entries.items(): + yield ('--', 'set', table, entity, '{}={}'.format(colk, colv)) + + +def add_bridge(name, datapath_type=None, brdata=None, exclusive=False): + """Add the named bridge to openvswitch and set/update bridge data for it + + :param name: Name of bridge to create + :type name: str + :param datapath_type: Add datapath_type to bridge (DEPRECATED, use brdata) + :type datapath_type: Optional[str] + :param brdata: Additional data to attach to bridge + The keys in the brdata dictionary map directly to column names in the + OpenvSwitch bridge table as defined in DB-SCHEMA [0] referenced in + RFC 7047 [1] + + There are some established conventions for keys in the external-ids + column of various tables, consult the OVS Integration Guide [2] for + more details. + + NOTE(fnordahl): Technically the ``external-ids`` column is called + ``external_ids`` (with an underscore) and we rely on ``ovs-vsctl``'s + behaviour of transforming dashes to underscores for us [3] so we can + have a more pleasant data structure. 
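+
+    Illustrative call (hypothetical bridge name and data):
+
+        add_bridge('br-ex', brdata={'external-ids': {'charm': 'managed'}})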
+ + 0: http://www.openvswitch.org/ovs-vswitchd.conf.db.5.pdf + 1: https://tools.ietf.org/html/rfc7047 + 2: http://docs.openvswitch.org/en/latest/topics/integration/ + 3: https://github.com/openvswitch/ovs/blob/ + 20dac08fdcce4b7fda1d07add3b346aa9751cfbc/ + lib/db-ctl-base.c#L189-L215 + :type brdata: Optional[Dict[str,Union[str,Dict[str,str]]]] + :param exclusive: If True, raise exception if bridge exists + :type exclusive: bool + :raises: subprocess.CalledProcessError + """ + log('Creating bridge {}'.format(name)) + cmd = ['ovs-vsctl', '--'] + if not exclusive: + cmd.append('--may-exist') + cmd.extend(('add-br', name)) + if brdata: + for setcmd in _dict_to_vsctl_set(brdata, 'bridge', name): + cmd.extend(setcmd) + if datapath_type is not None: + log('DEPRECATION WARNING: add_bridge called with datapath_type, ' + 'please use the brdata keyword argument instead.') + cmd += ['--', 'set', 'bridge', name, + 'datapath_type={}'.format(datapath_type)] + subprocess.check_call(cmd) + + +def del_bridge(name): + """Delete the named bridge from openvswitch + + :param name: Name of bridge to remove + :type name: str + :raises: subprocess.CalledProcessError + """ + log('Deleting bridge {}'.format(name)) + subprocess.check_call(["ovs-vsctl", "--", "--if-exists", "del-br", name]) + + +def add_bridge_port(name, port, promisc=False, ifdata=None, exclusive=False, + linkup=True, portdata=None): + """Add port to bridge and optionally set/update interface data for it + + :param name: Name of bridge to attach port to + :type name: str + :param port: Name of port as represented in netdev + :type port: str + :param promisc: Whether to set promiscuous mode on interface + True=on, False=off, None leave untouched + :type promisc: Optional[bool] + :param ifdata: Additional data to attach to interface + The keys in the ifdata dictionary map directly to column names in the + OpenvSwitch Interface table as defined in DB-SCHEMA [0] referenced in + RFC 7047 [1] + + There are some established conventions for keys in the external-ids + column of various tables, consult the OVS Integration Guide [2] for + more details. + + NOTE(fnordahl): Technically the ``external-ids`` column is called + ``external_ids`` (with an underscore) and we rely on ``ovs-vsctl``'s + behaviour of transforming dashes to underscores for us [3] so we can + have a more pleasant data structure. + + 0: http://www.openvswitch.org/ovs-vswitchd.conf.db.5.pdf + 1: https://tools.ietf.org/html/rfc7047 + 2: http://docs.openvswitch.org/en/latest/topics/integration/ + 3: https://github.com/openvswitch/ovs/blob/ + 20dac08fdcce4b7fda1d07add3b346aa9751cfbc/ + lib/db-ctl-base.c#L189-L215 + :type ifdata: Optional[Dict[str,Union[str,Dict[str,str]]]] + :param exclusive: If True, raise exception if port exists + :type exclusive: bool + :param linkup: Bring link up + :type linkup: bool + :param portdata: Additional data to attach to port. Similar to ifdata. 
+ :type portdata: Optional[Dict[str,Union[str,Dict[str,str]]]] + :raises: subprocess.CalledProcessError + """ + cmd = ['ovs-vsctl', '--'] + if not exclusive: + cmd.append('--may-exist') + cmd.extend(('add-port', name, port)) + for ovs_table, data in (('Interface', ifdata), ('Port', portdata)): + if data: + for setcmd in _dict_to_vsctl_set(data, ovs_table, port): + cmd.extend(setcmd) + + log('Adding port {} to bridge {}'.format(port, name)) + subprocess.check_call(cmd) + if linkup: + # This is mostly a workaround for CI environments, in the real world + # the bare metal provider would most likely have configured and brought + # up the link for us. + subprocess.check_call(["ip", "link", "set", port, "up"]) + if promisc: + subprocess.check_call(["ip", "link", "set", port, "promisc", "on"]) + elif promisc is False: + subprocess.check_call(["ip", "link", "set", port, "promisc", "off"]) + + +def del_bridge_port(name, port): + """Delete a port from the named openvswitch bridge + + :param name: Name of bridge to remove port from + :type name: str + :param port: Name of port to remove + :type port: str + :raises: subprocess.CalledProcessError + """ + log('Deleting port {} from bridge {}'.format(port, name)) + subprocess.check_call(["ovs-vsctl", "--", "--if-exists", "del-port", + name, port]) + subprocess.check_call(["ip", "link", "set", port, "down"]) + subprocess.check_call(["ip", "link", "set", port, "promisc", "off"]) + + +def add_bridge_bond(bridge, port, interfaces, portdata=None, ifdatamap=None, + exclusive=False): + """Add bonded port in bridge from interfaces. + + :param bridge: Name of bridge to add bonded port to + :type bridge: str + :param port: Name of created port + :type port: str + :param interfaces: Underlying interfaces that make up the bonded port + :type interfaces: Iterator[str] + :param portdata: Additional data to attach to the created bond port + See _dict_to_vsctl_set() for detailed description. + Example: + { + 'bond-mode': 'balance-tcp', + 'lacp': 'active', + 'other-config': { + 'lacp-time': 'fast', + }, + } + :type portdata: Optional[Dict[str,Union[str,Dict[str,str]]]] + :param ifdatamap: Map of data to attach to created bond interfaces + See _dict_to_vsctl_set() for detailed description. 
+ Example: + { + 'eth0': { + 'type': 'dpdk', + 'mtu-request': '9000', + 'options': { + 'dpdk-devargs': '0000:01:00.0', + }, + }, + } + :type ifdatamap: Optional[Dict[str,Dict[str,Union[str,Dict[str,str]]]]] + :param exclusive: If True, raise exception if port exists + :type exclusive: bool + :raises: subprocess.CalledProcessError + """ + cmd = ['ovs-vsctl', '--'] + if not exclusive: + cmd.append('--may-exist') + cmd.extend(('add-bond', bridge, port)) + cmd.extend(interfaces) + if portdata: + for setcmd in _dict_to_vsctl_set(portdata, 'port', port): + cmd.extend(setcmd) + if ifdatamap: + for ifname, ifdata in ifdatamap.items(): + for setcmd in _dict_to_vsctl_set(ifdata, 'Interface', ifname): + cmd.extend(setcmd) + subprocess.check_call(cmd) + + +def add_ovsbridge_linuxbridge(name, bridge, ifdata=None): + """Add linux bridge to the named openvswitch bridge + + :param name: Name of ovs bridge to be added to Linux bridge + :type name: str + :param bridge: Name of Linux bridge to be added to ovs bridge + :type name: str + :param ifdata: Additional data to attach to interface + The keys in the ifdata dictionary map directly to column names in the + OpenvSwitch Interface table as defined in DB-SCHEMA [0] referenced in + RFC 7047 [1] + + There are some established conventions for keys in the external-ids + column of various tables, consult the OVS Integration Guide [2] for + more details. + + NOTE(fnordahl): Technically the ``external-ids`` column is called + ``external_ids`` (with an underscore) and we rely on ``ovs-vsctl``'s + behaviour of transforming dashes to underscores for us [3] so we can + have a more pleasant data structure. + + 0: http://www.openvswitch.org/ovs-vswitchd.conf.db.5.pdf + 1: https://tools.ietf.org/html/rfc7047 + 2: http://docs.openvswitch.org/en/latest/topics/integration/ + 3: https://github.com/openvswitch/ovs/blob/ + 20dac08fdcce4b7fda1d07add3b346aa9751cfbc/ + lib/db-ctl-base.c#L189-L215 + :type ifdata: Optional[Dict[str,Union[str,Dict[str,str]]]] + """ + try: + import netifaces + except ImportError: + if six.PY2: + apt_install('python-netifaces', fatal=True) + else: + apt_install('python3-netifaces', fatal=True) + import netifaces + + # NOTE(jamespage): + # Older code supported addition of a linuxbridge directly + # to an OVS bridge; ensure we don't break uses on upgrade + existing_ovs_bridge = port_to_br(bridge) + if existing_ovs_bridge is not None: + log('Linuxbridge {} is already directly in use' + ' by OVS bridge {}'.format(bridge, existing_ovs_bridge), + level=INFO) + return + + # NOTE(jamespage): + # preserve existing naming because interfaces may already exist. 
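+    # The kernel limits interface names to 15 characters (IFNAMSIZ minus the
+    # terminating NUL), so longer veth names would be rejected; the
+    # hash-based fallback below keeps generated names within that cap.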
+ ovsbridge_port = "veth-" + name + linuxbridge_port = "veth-" + bridge + if (len(ovsbridge_port) > MAX_KERNEL_INTERFACE_NAME_LEN or + len(linuxbridge_port) > MAX_KERNEL_INTERFACE_NAME_LEN): + # NOTE(jamespage): + # use parts of hashed bridgename (openstack style) when + # a bridge name exceeds 15 chars + hashed_bridge = hashlib.sha256(bridge.encode('UTF-8')).hexdigest() + base = '{}-{}'.format(hashed_bridge[:8], hashed_bridge[-2:]) + ovsbridge_port = "cvo{}".format(base) + linuxbridge_port = "cvb{}".format(base) + + interfaces = netifaces.interfaces() + for interface in interfaces: + if interface == ovsbridge_port or interface == linuxbridge_port: + log('Interface {} already exists'.format(interface), level=INFO) + return + + log('Adding linuxbridge {} to ovsbridge {}'.format(bridge, name), + level=INFO) + + check_for_eni_source() + + with open('/etc/network/interfaces.d/{}.cfg'.format( + linuxbridge_port), 'w') as config: + config.write(BRIDGE_TEMPLATE.format(linuxbridge_port=linuxbridge_port, + ovsbridge_port=ovsbridge_port, + bridge=bridge)) + + subprocess.check_call(["ifup", linuxbridge_port]) + add_bridge_port(name, linuxbridge_port, ifdata=ifdata) + + +def is_linuxbridge_interface(port): + ''' Check if the interface is a linuxbridge bridge + :param port: Name of an interface to check whether it is a Linux bridge + :returns: True if port is a Linux bridge''' + + if os.path.exists('/sys/class/net/' + port + '/bridge'): + log('Interface {} is a Linux bridge'.format(port), level=DEBUG) + return True + else: + log('Interface {} is not a Linux bridge'.format(port), level=DEBUG) + return False + + +def set_manager(manager): + ''' Set the controller for the local openvswitch ''' + log('Setting manager for local ovs to {}'.format(manager)) + subprocess.check_call(['ovs-vsctl', 'set-manager', + 'ssl:{}'.format(manager)]) + + +def set_Open_vSwitch_column_value(column_value): + """ + Calls ovs-vsctl and sets the 'column_value' in the Open_vSwitch table. + + :param column_value: + See http://www.openvswitch.org//ovs-vswitchd.conf.db.5.pdf for + details of the relevant values. 
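+        e.g. (illustrative, assumed value): 'other_config:max-idle=10000'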
+    :type column_value: str
+    :raises subprocess.CalledProcessError: possibly ovsdb-server is not
+                                           running
+    """
+    log('Setting {} in the Open_vSwitch table'.format(column_value))
+    subprocess.check_call(['ovs-vsctl', 'set', 'Open_vSwitch', '.', column_value])
+
+
+CERT_PATH = '/etc/openvswitch/ovsclient-cert.pem'
+
+
+def get_certificate():
+    ''' Read openvswitch certificate from disk '''
+    if os.path.exists(CERT_PATH):
+        log('Reading ovs certificate from {}'.format(CERT_PATH))
+        with open(CERT_PATH, 'r') as cert:
+            full_cert = cert.read()
+            begin_marker = "-----BEGIN CERTIFICATE-----"
+            end_marker = "-----END CERTIFICATE-----"
+            begin_index = full_cert.find(begin_marker)
+            end_index = full_cert.rfind(end_marker)
+            if end_index == -1 or begin_index == -1:
+                raise RuntimeError("Certificate does not contain valid begin"
+                                   " and end markers.")
+            full_cert = full_cert[begin_index:(end_index + len(end_marker))]
+            return full_cert
+    else:
+        log('Certificate not found', level=WARNING)
+        return None
+
+
+def check_for_eni_source():
+    ''' Juju removes the source line when setting up interfaces,
+    replace if missing '''
+
+    with open('/etc/network/interfaces', 'r') as eni:
+        for line in eni:
+            if line.strip() == 'source /etc/network/interfaces.d/*':
+                return
+    with open('/etc/network/interfaces', 'a') as eni:
+        eni.write('\nsource /etc/network/interfaces.d/*')
+
+
+def full_restart():
+    ''' Full restart and reload of openvswitch '''
+    if os.path.exists('/etc/init/openvswitch-force-reload-kmod.conf'):
+        service('start', 'openvswitch-force-reload-kmod')
+    else:
+        service('force-reload-kmod', 'openvswitch-switch')
+
+
+def enable_ipfix(bridge, target,
+                 cache_active_timeout=60,
+                 cache_max_flows=128,
+                 sampling=64):
+    '''Enable IPFIX on bridge to target.
+    :param bridge: Bridge to monitor
+    :param target: IPFIX remote endpoint
+    :param cache_active_timeout: The maximum period in seconds for
+                                 which an IPFIX flow record is cached
+                                 and aggregated before being sent
+    :param cache_max_flows: The maximum number of IPFIX flow records
+                            that can be cached at a time
+    :param sampling: The rate at which packets should be sampled and
+                     sent to each target collector
+    '''
+    cmd = [
+        'ovs-vsctl', 'set', 'Bridge', bridge, 'ipfix=@i', '--',
+        '--id=@i', 'create', 'IPFIX',
+        'targets="{}"'.format(target),
+        'sampling={}'.format(sampling),
+        'cache_active_timeout={}'.format(cache_active_timeout),
+        'cache_max_flows={}'.format(cache_max_flows),
+    ]
+    log('Enabling IPFIX on {}.'.format(bridge))
+    subprocess.check_call(cmd)
+
+
+def disable_ipfix(bridge):
+    '''Disable IPFIX on target bridge.
+    :param bridge: Bridge to modify
+    '''
+    cmd = ['ovs-vsctl', 'clear', 'Bridge', bridge, 'ipfix']
+    subprocess.check_call(cmd)
+
+
+def port_to_br(port):
+    '''Determine the bridge that contains a port
+    :param port: Name of port to check for
+    :returns str: OVS bridge containing port or None if not found
+    '''
+    try:
+        return subprocess.check_output(
+            ['ovs-vsctl', 'port-to-br', port]
+        ).decode('UTF-8').strip()
+    except subprocess.CalledProcessError:
+        return None
+
+
+def ovs_appctl(target, args):
+    """Run `ovs-appctl` for target with args and return output.
+
+    :param target: Name of daemon to contact. Unless target begins with '/',
+                   `ovs-appctl` looks for a pidfile and will build the path to
+                   a /var/run/openvswitch/target.pid.ctl for you.
+    :type target: str
+    :param args: Command and arguments to pass to `ovs-appctl`
+    :type args: Tuple[str, ...]
+ :returns: Output from command + :rtype: str + :raises: subprocess.CalledProcessError + """ + cmd = ['ovs-appctl', '-t', target] + cmd.extend(args) + return subprocess.check_output(cmd, universal_newlines=True) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/network/ovs/ovn.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/network/ovs/ovn.py new file mode 100644 index 0000000000000000000000000000000000000000..2075f11acfbeb3d0d614cf3db5a1b535bf128824 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/network/ovs/ovn.py @@ -0,0 +1,233 @@ +# Copyright 2019 Canonical Ltd +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +import os +import subprocess +import uuid + +from . import utils + + +OVN_RUNDIR = '/var/run/ovn' +OVN_SYSCONFDIR = '/etc/ovn' + + +def ovn_appctl(target, args, rundir=None, use_ovs_appctl=False): + """Run ovn/ovs-appctl for target with args and return output. + + :param target: Name of daemon to contact. Unless target begins with '/', + `ovn-appctl` looks for a pidfile and will build the path to + a /var/run/ovn/target.pid.ctl for you. + :type target: str + :param args: Command and arguments to pass to `ovn-appctl` + :type args: Tuple[str, ...] + :param rundir: Override path to sockets + :type rundir: Optional[str] + :param use_ovs_appctl: The ``ovn-appctl`` command appeared in OVN 20.03, + set this to True to use ``ovs-appctl`` instead. + :type use_ovs_appctl: bool + :returns: Output from command + :rtype: str + :raises: subprocess.CalledProcessError + """ + # NOTE(fnordahl): The ovsdb-server processes for the OVN databases use a + # non-standard naming scheme for their daemon control socket and we need + # to pass the full path to the socket. + if target in ('ovnnb_db', 'ovnsb_db',): + target = os.path.join(rundir or OVN_RUNDIR, target + '.ctl') + + if use_ovs_appctl: + tool = 'ovs-appctl' + else: + tool = 'ovn-appctl' + + return utils._run(tool, '-t', target, *args) + + +class OVNClusterStatus(object): + + def __init__(self, name, cluster_id, server_id, address, status, role, + term, leader, vote, election_timer, log, + entries_not_yet_committed, entries_not_yet_applied, + connections, servers): + """Initialize and populate OVNClusterStatus object. + + Use class initializer so we can define types in a compatible manner. 
+ + :param name: Name of schema used for database + :type name: str + :param cluster_id: UUID of cluster + :type cluster_id: uuid.UUID + :param server_id: UUID of server + :type server_id: uuid.UUID + :param address: OVSDB connection method + :type address: str + :param status: Status text + :type status: str + :param role: Role of server + :type role: str + :param term: Election term + :type term: int + :param leader: Short form UUID of leader + :type leader: str + :param vote: Vote + :type vote: str + :param election_timer: Current value of election timer + :type election_timer: int + :param log: Log + :type log: str + :param entries_not_yet_committed: Entries not yet committed + :type entries_not_yet_committed: int + :param entries_not_yet_applied: Entries not yet applied + :type entries_not_yet_applied: int + :param connections: Connections + :type connections: str + :param servers: Servers in the cluster + [('0ea6', 'ssl:192.0.2.42:6643')] + :type servers: List[Tuple[str,str]] + """ + self.name = name + self.cluster_id = cluster_id + self.server_id = server_id + self.address = address + self.status = status + self.role = role + self.term = term + self.leader = leader + self.vote = vote + self.election_timer = election_timer + self.log = log + self.entries_not_yet_committed = entries_not_yet_committed + self.entries_not_yet_applied = entries_not_yet_applied + self.connections = connections + self.servers = servers + + def __eq__(self, other): + return ( + self.name == other.name and + self.cluster_id == other.cluster_id and + self.server_id == other.server_id and + self.address == other.address and + self.status == other.status and + self.role == other.role and + self.term == other.term and + self.leader == other.leader and + self.vote == other.vote and + self.election_timer == other.election_timer and + self.log == other.log and + self.entries_not_yet_committed == other.entries_not_yet_committed and + self.entries_not_yet_applied == other.entries_not_yet_applied and + self.connections == other.connections and + self.servers == other.servers) + + @property + def is_cluster_leader(self): + """Retrieve status information from clustered OVSDB. + + :returns: Whether target is cluster leader + :rtype: bool + """ + return self.leader == 'self' + + +def cluster_status(target, schema=None, use_ovs_appctl=False, rundir=None): + """Retrieve status information from clustered OVSDB. + + :param target: Usually one of 'ovsdb-server', 'ovnnb_db', 'ovnsb_db', can + also be full path to control socket. + :type target: str + :param schema: Database schema name, deduced from target if not provided + :type schema: Optional[str] + :param use_ovs_appctl: The ``ovn-appctl`` command appeared in OVN 20.03, + set this to True to use ``ovs-appctl`` instead. 
+ :type use_ovs_appctl: bool + :param rundir: Override path to sockets + :type rundir: Optional[str] + :returns: cluster status data object + :rtype: OVNClusterStatus + :raises: subprocess.CalledProcessError, KeyError, RuntimeError + """ + schema_map = { + 'ovnnb_db': 'OVN_Northbound', + 'ovnsb_db': 'OVN_Southbound', + } + if schema and schema not in schema_map.keys(): + raise RuntimeError('Unknown schema provided: "{}"'.format(schema)) + + status = {} + k = '' + for line in ovn_appctl(target, + ('cluster/status', schema or schema_map[target]), + rundir=rundir, + use_ovs_appctl=use_ovs_appctl).splitlines(): + if k and line.startswith(' '): + # there is no key which means this is a instance of a multi-line/ + # multi-value item, populate the List which is already stored under + # the key. + if k == 'servers': + status[k].append( + tuple(line.replace(')', '').lstrip().split()[0:4:3])) + else: + status[k].append(line.lstrip()) + elif ':' in line: + # this is a line with a key + k, v = line.split(':', 1) + k = k.lower() + k = k.replace(' ', '_') + if v: + # this is a line with both key and value + if k in ('cluster_id', 'server_id',): + v = v.replace('(', '') + v = v.replace(')', '') + status[k] = tuple(v.split()) + else: + status[k] = v.lstrip() + else: + # this is a line with only key which means a multi-line/ + # multi-value item. Store key as List which will be + # populated on subsequent iterations. + status[k] = [] + return OVNClusterStatus( + status['name'], + uuid.UUID(status['cluster_id'][1]), + uuid.UUID(status['server_id'][1]), + status['address'], + status['status'], + status['role'], + int(status['term']), + status['leader'], + status['vote'], + int(status['election_timer']), + status['log'], + int(status['entries_not_yet_committed']), + int(status['entries_not_yet_applied']), + status['connections'], + status['servers']) + + +def is_northd_active(): + """Query `ovn-northd` for active status. + + Note that the active status information for ovn-northd is available for + OVN 20.03 and onward. + + :returns: True if local `ovn-northd` instance is active, False otherwise + :rtype: bool + """ + try: + for line in ovn_appctl('ovn-northd', ('status',)).splitlines(): + if line.startswith('Status:') and 'active' in line: + return True + except subprocess.CalledProcessError: + pass + return False diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/network/ovs/ovsdb.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/network/ovs/ovsdb.py new file mode 100644 index 0000000000000000000000000000000000000000..5e50bc36333bde56674a82ad6c88e0d5de44ee07 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/network/ovs/ovsdb.py @@ -0,0 +1,206 @@ +# Copyright 2019 Canonical Ltd +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +import json +import uuid + +from . import utils + + +class SimpleOVSDB(object): + """Simple interface to OVSDB through the use of command line tools. 
+ + OVS and OVN is managed through a set of databases. These databases have + similar command line tools to manage them. We make use of the similarity + to provide a generic class that can be used to manage them. + + The OpenvSwitch project does provide a Python API, but on the surface it + appears to be a bit too involved for our simple use case. + + Examples: + sbdb = SimpleOVSDB('ovn-sbctl') + for chs in sbdb.chassis: + print(chs) + + ovsdb = SimpleOVSDB('ovs-vsctl') + for br in ovsdb.bridge: + if br['name'] == 'br-test': + ovsdb.bridge.set(br['uuid'], 'external_ids:charm', 'managed') + """ + + # For validation we keep a complete map of currently known good tool and + # table combinations. This requires maintenance down the line whenever + # upstream adds things that downstream wants, and the cost of maintaining + # that will most likely be lower then the cost of finding the needle in + # the haystack whenever downstream code misspells something. + _tool_table_map = { + 'ovs-vsctl': ( + 'autoattach', + 'bridge', + 'ct_timeout_policy', + 'ct_zone', + 'controller', + 'datapath', + 'flow_sample_collector_set', + 'flow_table', + 'ipfix', + 'interface', + 'manager', + 'mirror', + 'netflow', + 'open_vswitch', + 'port', + 'qos', + 'queue', + 'ssl', + 'sflow', + ), + 'ovn-nbctl': ( + 'acl', + 'address_set', + 'connection', + 'dhcp_options', + 'dns', + 'forwarding_group', + 'gateway_chassis', + 'ha_chassis', + 'ha_chassis_group', + 'load_balancer', + 'load_balancer_health_check', + 'logical_router', + 'logical_router_policy', + 'logical_router_port', + 'logical_router_static_route', + 'logical_switch', + 'logical_switch_port', + 'meter', + 'meter_band', + 'nat', + 'nb_global', + 'port_group', + 'qos', + 'ssl', + ), + 'ovn-sbctl': ( + 'address_set', + 'chassis', + 'connection', + 'controller_event', + 'dhcp_options', + 'dhcpv6_options', + 'dns', + 'datapath_binding', + 'encap', + 'gateway_chassis', + 'ha_chassis', + 'ha_chassis_group', + 'igmp_group', + 'ip_multicast', + 'logical_flow', + 'mac_binding', + 'meter', + 'meter_band', + 'multicast_group', + 'port_binding', + 'port_group', + 'rbac_permission', + 'rbac_role', + 'sb_global', + 'ssl', + 'service_monitor', + ), + } + + def __init__(self, tool): + """SimpleOVSDB constructor. + + :param tool: Which tool with database commands to operate on. + Usually one of `ovs-vsctl`, `ovn-nbctl`, `ovn-sbctl` + :type tool: str + """ + if tool not in self._tool_table_map: + raise RuntimeError( + 'tool must be one of "{}"'.format(self._tool_table_map.keys())) + self._tool = tool + + def __getattr__(self, table): + if table not in self._tool_table_map[self._tool]: + raise AttributeError( + 'table "{}" not known for use with "{}"' + .format(table, self._tool)) + return self.Table(self._tool, table) + + class Table(object): + """Methods to interact with contents of OVSDB tables. + + NOTE: At the time of this writing ``find`` is the only command + line argument to OVSDB manipulating tools that actually supports + JSON output. + """ + + def __init__(self, tool, table): + """SimpleOVSDBTable constructor. + + :param table: Which table to operate on + :type table: str + """ + self._tool = tool + self._table = table + + def _find_tbl(self, condition=None): + """Run and parse output of OVSDB `find` command. 
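+
+            Illustrative example (hypothetical data):
+
+                >>> tbl = SimpleOVSDB('ovs-vsctl').bridge
+                >>> [b['name'] for b in tbl.find('datapath_type=netdev')]
+                ['br-ex']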
+ + :param condition: An optional RFC 7047 5.1 match condition + :type condition: Optional[str] + :returns: Dictionary with data + :rtype: Dict[str, any] + """ + # When using json formatted output to OVS commands Internal OVSDB + # notation may occur that require further deserializing. + # Reference: https://tools.ietf.org/html/rfc7047#section-5.1 + ovs_type_cb_map = { + 'uuid': uuid.UUID, + # FIXME sets also appear to sometimes contain type/value tuples + 'set': list, + 'map': dict, + } + cmd = [self._tool, '-f', 'json', 'find', self._table] + if condition: + cmd.append(condition) + output = utils._run(*cmd) + data = json.loads(output) + for row in data['data']: + values = [] + for col in row: + if isinstance(col, list): + f = ovs_type_cb_map.get(col[0], str) + values.append(f(col[1])) + else: + values.append(col) + yield dict(zip(data['headings'], values)) + + def __iter__(self): + return self._find_tbl() + + def clear(self, rec, col): + utils._run(self._tool, 'clear', self._table, rec, col) + + def find(self, condition): + return self._find_tbl(condition=condition) + + def remove(self, rec, col, value): + utils._run(self._tool, 'remove', self._table, rec, col, value) + + def set(self, rec, col, value): + utils._run(self._tool, 'set', self._table, rec, + '{}={}'.format(col, value)) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/network/ovs/utils.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/network/ovs/utils.py new file mode 100644 index 0000000000000000000000000000000000000000..53c9b4ddab6d68d2478d4161e658c70e2caa6a74 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/network/ovs/utils.py @@ -0,0 +1,26 @@ +# Copyright 2019 Canonical Ltd +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +import subprocess + + +def _run(*args): + """Run a process, check result, capture decoded output from STDOUT. + + :param args: Command and arguments to run + :type args: Tuple[str, ...] + :returns: Information about the completed process + :rtype: str + :raises subprocess.CalledProcessError + """ + return subprocess.check_output(args, universal_newlines=True) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/network/ufw.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/network/ufw.py new file mode 100644 index 0000000000000000000000000000000000000000..b9bf7c9df5576615e23225a7fdb6c11a28931ba4 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/network/ufw.py @@ -0,0 +1,386 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +""" +This module contains helpers to add and remove ufw rules. + +Examples: + +- open SSH port for subnet 10.0.3.0/24: + + >>> from charmhelpers.contrib.network import ufw + >>> ufw.enable() + >>> ufw.grant_access(src='10.0.3.0/24', dst='any', port='22', proto='tcp') + +- open service by name as defined in /etc/services: + + >>> from charmhelpers.contrib.network import ufw + >>> ufw.enable() + >>> ufw.service('ssh', 'open') + +- close service by port number: + + >>> from charmhelpers.contrib.network import ufw + >>> ufw.enable() + >>> ufw.service('4949', 'close') # munin +""" +import os +import re +import subprocess + +from charmhelpers.core import hookenv +from charmhelpers.core.kernel import modprobe, is_module_loaded + +__author__ = "Felipe Reyes " + + +class UFWError(Exception): + pass + + +class UFWIPv6Error(UFWError): + pass + + +def is_enabled(): + """ + Check if `ufw` is enabled + + :returns: True if ufw is enabled + """ + output = subprocess.check_output(['ufw', 'status'], + universal_newlines=True, + env={'LANG': 'en_US', + 'PATH': os.environ['PATH']}) + + m = re.findall(r'^Status: active\n', output, re.M) + + return len(m) >= 1 + + +def is_ipv6_ok(soft_fail=False): + """ + Check if IPv6 support is present and ip6tables functional + + :param soft_fail: If set to True and IPv6 support is broken, then reports + that the host doesn't have IPv6 support, otherwise a + UFWIPv6Error exception is raised. + :returns: True if IPv6 is working, False otherwise + """ + + # do we have IPv6 in the machine? + if os.path.isdir('/proc/sys/net/ipv6'): + # is ip6tables kernel module loaded? + if not is_module_loaded('ip6_tables'): + # ip6tables support isn't complete, let's try to load it + try: + modprobe('ip6_tables') + # great, we can load the module + return True + except subprocess.CalledProcessError as ex: + hookenv.log("Couldn't load ip6_tables module: %s" % ex.output, + level="WARN") + # we are in a world where ip6tables isn't working + if soft_fail: + # so we inform that the machine doesn't have IPv6 + return False + else: + raise UFWIPv6Error("IPv6 firewall support broken") + else: + # the module is present :) + return True + + else: + # the system doesn't have IPv6 + return False + + +def disable_ipv6(): + """ + Disable ufw IPv6 support in /etc/default/ufw + """ + exit_code = subprocess.call(['sed', '-i', 's/IPV6=.*/IPV6=no/g', + '/etc/default/ufw']) + if exit_code == 0: + hookenv.log('IPv6 support in ufw disabled', level='INFO') + else: + hookenv.log("Couldn't disable IPv6 support in ufw", level="ERROR") + raise UFWError("Couldn't disable IPv6 support in ufw") + + +def enable(soft_fail=False): + """ + Enable ufw + + :param soft_fail: If set to True silently disables IPv6 support in ufw, + otherwise a UFWIPv6Error exception is raised when IP6 + support is broken. 
+    :returns: True if ufw is successfully enabled
+    """
+    if is_enabled():
+        return True
+
+    if not is_ipv6_ok(soft_fail):
+        disable_ipv6()
+
+    output = subprocess.check_output(['ufw', 'enable'],
+                                     universal_newlines=True,
+                                     env={'LANG': 'en_US',
+                                          'PATH': os.environ['PATH']})
+
+    m = re.findall('^Firewall is active and enabled on system startup\n',
+                   output, re.M)
+    hookenv.log(output, level='DEBUG')
+
+    if len(m) == 0:
+        hookenv.log("ufw couldn't be enabled", level='WARN')
+        return False
+    else:
+        hookenv.log("ufw enabled", level='INFO')
+        return True
+
+
+def reload():
+    """
+    Reload ufw
+
+    :returns: True if ufw is successfully reloaded
+    """
+    output = subprocess.check_output(['ufw', 'reload'],
+                                     universal_newlines=True,
+                                     env={'LANG': 'en_US',
+                                          'PATH': os.environ['PATH']})
+
+    m = re.findall('^Firewall reloaded\n',
+                   output, re.M)
+    hookenv.log(output, level='DEBUG')
+
+    if len(m) == 0:
+        hookenv.log("ufw couldn't be reloaded", level='WARN')
+        return False
+    else:
+        hookenv.log("ufw reloaded", level='INFO')
+        return True
+
+
+def disable():
+    """
+    Disable ufw
+
+    :returns: True if ufw is successfully disabled
+    """
+    if not is_enabled():
+        return True
+
+    output = subprocess.check_output(['ufw', 'disable'],
+                                     universal_newlines=True,
+                                     env={'LANG': 'en_US',
+                                          'PATH': os.environ['PATH']})
+
+    m = re.findall(r'^Firewall stopped and disabled on system startup\n',
+                   output, re.M)
+    hookenv.log(output, level='DEBUG')
+
+    if len(m) == 0:
+        hookenv.log("ufw couldn't be disabled", level='WARN')
+        return False
+    else:
+        hookenv.log("ufw disabled", level='INFO')
+        return True
+
+
+def default_policy(policy='deny', direction='incoming'):
+    """
+    Changes the default policy for traffic `direction`
+
+    :param policy: allow, deny or reject
+    :param direction: traffic direction, possible values: incoming, outgoing,
+                      routed
+    """
+    if policy not in ['allow', 'deny', 'reject']:
+        raise UFWError(('Unknown policy %s, valid values: '
+                        'allow, deny, reject') % policy)
+
+    if direction not in ['incoming', 'outgoing', 'routed']:
+        raise UFWError(('Unknown direction %s, valid values: '
+                        'incoming, outgoing, routed') % direction)
+
+    output = subprocess.check_output(['ufw', 'default', policy, direction],
+                                     universal_newlines=True,
+                                     env={'LANG': 'en_US',
+                                          'PATH': os.environ['PATH']})
+    hookenv.log(output, level='DEBUG')
+
+    m = re.findall("^Default %s policy changed to '%s'\n" % (direction,
+                                                             policy),
+                   output, re.M)
+    if len(m) == 0:
+        hookenv.log("ufw couldn't change the default policy to %s for %s"
+                    % (policy, direction), level='WARN')
+        return False
+    else:
+        hookenv.log("ufw default policy for %s changed to %s"
+                    % (direction, policy), level='INFO')
+        return True
+
+
+def modify_access(src, dst='any', port=None, proto=None, action='allow',
+                  index=None, prepend=False, comment=None):
+    """
+    Add or delete a rule granting access to an address or subnet
+
+    :param src: address (e.g. 192.168.1.234) or subnet
+                (e.g. 192.168.1.0/24).
+    :type src: Optional[str]
+    :param dst: destination of the connection; if the machine has multiple
+                IPs and connections to only one of them should be accepted,
+                this field has to be set.
+    :type dst: Optional[str]
+    :param port: destination port
+    :type port: Optional[int]
+    :param proto: protocol (tcp or udp)
+    :type proto: Optional[str]
+    :param action: `allow` or `delete`
+    :type action: str
+    :param index: if different from None the rule is inserted at the given
+                  `index`.
+    :type index: Optional[int]
+    :param prepend: Whether to insert the rule before all other rules matching
+                    the rule's IP type.
+    :type prepend: bool
+    :param comment: Create the rule with a comment
+    :type comment: Optional[str]
+    """
+    if not is_enabled():
+        hookenv.log('ufw is disabled, skipping modify_access()', level='WARN')
+        return
+
+    if action == 'delete':
+        if index is not None:
+            cmd = ['ufw', '--force', 'delete', str(index)]
+        else:
+            cmd = ['ufw', 'delete', 'allow']
+    elif index is not None:
+        cmd = ['ufw', 'insert', str(index), action]
+    elif prepend:
+        cmd = ['ufw', 'prepend', action]
+    else:
+        cmd = ['ufw', action]
+
+    if src is not None:
+        cmd += ['from', src]
+
+    if dst is not None:
+        cmd += ['to', dst]
+
+    if port is not None:
+        cmd += ['port', str(port)]
+
+    if proto is not None:
+        cmd += ['proto', proto]
+
+    if comment:
+        cmd.extend(['comment', comment])
+
+    hookenv.log('ufw {}: {}'.format(action, ' '.join(cmd)), level='DEBUG')
+    p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
+    (stdout, stderr) = p.communicate()
+
+    hookenv.log(stdout, level='INFO')
+
+    if p.returncode != 0:
+        hookenv.log(stderr, level='ERROR')
+        hookenv.log('Error running: {}, exit code: {}'.format(' '.join(cmd),
+                                                              p.returncode),
+                    level='ERROR')
+
+
+def grant_access(src, dst='any', port=None, proto=None, index=None):
+    """
+    Grant access to an address or subnet
+
+    :param src: address (e.g. 192.168.1.234) or subnet
+                (e.g. 192.168.1.0/24).
+    :param dst: destination of the connection; if the machine has multiple
+                IPs and connections to only one of them should be accepted,
+                this field has to be set.
+    :param port: destination port
+    :param proto: protocol (tcp or udp)
+    :param index: if different from None the rule is inserted at the given
+                  `index`.
+    """
+    return modify_access(src, dst=dst, port=port, proto=proto, action='allow',
+                         index=index)
+
+
+def revoke_access(src, dst='any', port=None, proto=None):
+    """
+    Revoke access to an address or subnet
+
+    :param src: address (e.g. 192.168.1.234) or subnet
+                (e.g. 192.168.1.0/24).
+    :param dst: destination of the connection; if the machine has multiple
+                IPs and connections to only one of them should be accepted,
+                this field has to be set.
+    :param port: destination port
+    :param proto: protocol (tcp or udp)
+    """
+    return modify_access(src, dst=dst, port=port, proto=proto, action='delete')
+
+
+def service(name, action):
+    """
+    Open/close access to a service
+
+    :param name: could be a service name defined in `/etc/services` or a port
+                 number.
+    :param action: `open` or `close`
+    """
+    if action == 'open':
+        subprocess.check_output(['ufw', 'allow', str(name)],
+                                universal_newlines=True)
+    elif action == 'close':
+        subprocess.check_output(['ufw', 'delete', 'allow', str(name)],
+                                universal_newlines=True)
+    else:
+        raise UFWError(("'{}' not supported, use 'open' "
+                        "or 'close'").format(action))
+
+
+def status():
+    """Retrieve firewall rules as represented by UFW.
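+
+    Illustrative example (hypothetical rule set):
+
+        >>> for num, rule in status():
+        ...     print(num, rule['to'], rule['action'])
+        1 22/tcp allow in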
+
+    :returns: Tuples with rule number and data
+              (1, {'to': '', 'action': '', 'from': '', 'ipv6': True,
+                   'comment': ''})
+    :rtype: Iterator[Tuple[int, Dict[str, Union[bool, str]]]]
+    """
+    cp = subprocess.check_output(('ufw', 'status', 'numbered',),
+                                 stderr=subprocess.STDOUT,
+                                 universal_newlines=True)
+    for line in cp.splitlines():
+        if not line.startswith('['):
+            continue
+        ipv6 = True if '(v6)' in line else False
+        line = line.replace('(v6)', '')
+        line = line.replace('[', '')
+        line = line.replace(']', '')
+        line = line.replace('Anywhere', 'any')
+        row = line.split()
+        yield (int(row[0]), {
+            'to': row[1],
+            'action': ' '.join(row[2:4]).lower(),
+            'from': row[4],
+            'ipv6': ipv6,
+            'comment': row[6] if len(row) > 5 and row[5] == '#' else '',
+        })
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..d7567b863e3a5ad2b7a7f44958b4166e0c3d346b
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/__init__.py
@@ -0,0 +1,13 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/alternatives.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/alternatives.py
new file mode 100644
index 0000000000000000000000000000000000000000..547de09c6d818772191b519618fa32b08b0e6eff
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/alternatives.py
@@ -0,0 +1,44 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+ +''' Helper for managing alternatives for file conflict resolution ''' + +import subprocess +import shutil +import os + + +def install_alternative(name, target, source, priority=50): + ''' Install alternative configuration ''' + if (os.path.exists(target) and not os.path.islink(target)): + # Move existing file/directory away before installing + shutil.move(target, '{}.bak'.format(target)) + cmd = [ + 'update-alternatives', '--force', '--install', + target, name, source, str(priority) + ] + subprocess.check_call(cmd) + + +def remove_alternative(name, source): + """Remove an installed alternative configuration file + + :param name: string name of the alternative to remove + :param source: string full path to alternative to remove + """ + cmd = [ + 'update-alternatives', '--remove', + name, source + ] + subprocess.check_call(cmd) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/amulet/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/amulet/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..d7567b863e3a5ad2b7a7f44958b4166e0c3d346b --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/amulet/__init__.py @@ -0,0 +1,13 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/amulet/deployment.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/amulet/deployment.py new file mode 100644 index 0000000000000000000000000000000000000000..dd3aebe97dd04bf42a2c4ee7c7b7b83b1917a678 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/amulet/deployment.py @@ -0,0 +1,384 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import logging +import os +import re +import sys +import six +from collections import OrderedDict +from charmhelpers.contrib.amulet.deployment import ( + AmuletDeployment +) +from charmhelpers.contrib.openstack.amulet.utils import ( + OPENSTACK_RELEASES_PAIRS +) + +DEBUG = logging.DEBUG +ERROR = logging.ERROR + + +class OpenStackAmuletDeployment(AmuletDeployment): + """OpenStack amulet deployment. + + This class inherits from AmuletDeployment and has additional support + that is specifically for use by OpenStack charms. 
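+
+    A minimal sketch of intended use (the subclass name is illustrative,
+    not part of charmhelpers)::
+
+        class MyCharmDeployment(OpenStackAmuletDeployment):
+            def __init__(self, series, openstack=None, source=None):
+                super(MyCharmDeployment, self).__init__(series, openstack,
+                                                        source)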
+    """
+
+    def __init__(self, series=None, openstack=None, source=None,
+                 stable=True, log_level=DEBUG):
+        """Initialize the deployment environment."""
+        super(OpenStackAmuletDeployment, self).__init__(series)
+        self.log = self.get_logger(level=log_level)
+        self.log.info('OpenStackAmuletDeployment: init')
+        self.openstack = openstack
+        self.source = source
+        self.stable = stable
+
+    def get_logger(self, name="deployment-logger", level=logging.DEBUG):
+        """Get a logger object that will log to stdout."""
+        log = logging
+        logger = log.getLogger(name)
+        fmt = log.Formatter("%(asctime)s %(funcName)s "
+                            "%(levelname)s: %(message)s")
+
+        handler = log.StreamHandler(stream=sys.stdout)
+        handler.setLevel(level)
+        handler.setFormatter(fmt)
+
+        logger.addHandler(handler)
+        logger.setLevel(level)
+
+        return logger
+
+    def _determine_branch_locations(self, other_services):
+        """Determine the branch locations for the other services.
+
+        Determine if the local branch being tested is derived from its
+        stable or next (dev) branch, and based on this, use the corresponding
+        stable or next branches for the other_services."""
+
+        self.log.info('OpenStackAmuletDeployment: determine branch locations')
+
+        # Charms outside the ~openstack-charmers namespace
+        base_charms = {
+            'mysql': ['trusty'],
+            'mongodb': ['trusty'],
+            'nrpe': ['trusty', 'xenial'],
+        }
+
+        for svc in other_services:
+            # If a location has been explicitly set, use it
+            if svc.get('location'):
+                continue
+            if svc['name'] in base_charms:
+                # NOTE: not all charms have support for all series we
+                # want/need to test against, so fix to most recent
+                # that each base charm supports
+                target_series = self.series
+                if self.series not in base_charms[svc['name']]:
+                    target_series = base_charms[svc['name']][-1]
+                svc['location'] = 'cs:{}/{}'.format(target_series,
+                                                    svc['name'])
+            elif self.stable:
+                svc['location'] = 'cs:{}/{}'.format(self.series,
+                                                    svc['name'])
+            else:
+                svc['location'] = 'cs:~openstack-charmers-next/{}/{}'.format(
+                    self.series,
+                    svc['name']
+                )
+
+        return other_services
+
+    def _add_services(self, this_service, other_services, use_source=None,
+                      no_origin=None):
+        """Add services to the deployment and optionally set
+        openstack-origin/source.
+
+        :param this_service dict: Service dictionary describing the service
+                                  whose amulet tests are being run
+        :param other_services dict: List of service dictionaries describing
+                                    the services needed to support the target
+                                    service
+        :param use_source list: List of services which use the 'source' config
+                                option rather than 'openstack-origin'
+        :param no_origin list: List of services which do not support setting
+                               the Cloud Archive.
+        Service Dict:
+            {
+                'name': str charm-name,
+                'units': int number of units,
+                'constraints': dict of juju constraints,
+                'location': str location of charm,
+            }
+        e.g.
+        this_service = {
+            'name': 'openvswitch-odl',
+            'constraints': {'mem': '8G'},
+        }
+        other_services = [
+            {
+                'name': 'nova-compute',
+                'units': 2,
+                'constraints': {'mem': '4G'},
+                'location': 'cs:~bob/xenial/nova-compute'
+            },
+            {
+                'name': 'mysql',
+                'constraints': {'mem': '2G'},
+            },
+            {'name': 'neutron-api-odl'}]
+        use_source = ['mysql']
+        no_origin = ['neutron-api-odl']
+        """
+        self.log.info('OpenStackAmuletDeployment: adding services')
+
+        other_services = self._determine_branch_locations(other_services)
+
+        super(OpenStackAmuletDeployment, self)._add_services(this_service,
+                                                             other_services)
+
+        services = other_services
+        services.append(this_service)
+
+        use_source = use_source or []
+        no_origin = no_origin or []
+
+        # Charms which should use the source config option
+        use_source = list(set(
+            use_source + ['mysql', 'mongodb', 'rabbitmq-server', 'ceph',
+                          'ceph-osd', 'ceph-radosgw', 'ceph-mon',
+                          'ceph-proxy', 'percona-cluster', 'lxd']))
+
+        # Charms which can not use openstack-origin, i.e. many subordinates
+        no_origin = list(set(
+            no_origin + ['cinder-ceph', 'hacluster', 'neutron-openvswitch',
+                         'nrpe', 'openvswitch-odl', 'neutron-api-odl',
+                         'odl-controller', 'cinder-backup', 'nexentaedge-data',
+                         'nexentaedge-iscsi-gw', 'nexentaedge-swift-gw',
+                         'cinder-nexentaedge', 'nexentaedge-mgmt',
+                         'ceilometer-agent']))
+
+        if self.openstack:
+            for svc in services:
+                if svc['name'] not in use_source + no_origin:
+                    config = {'openstack-origin': self.openstack}
+                    self.d.configure(svc['name'], config)
+
+        if self.source:
+            for svc in services:
+                if svc['name'] in use_source and svc['name'] not in no_origin:
+                    config = {'source': self.source}
+                    self.d.configure(svc['name'], config)
+
+    def _configure_services(self, configs):
+        """Configure all of the services."""
+        self.log.info('OpenStackAmuletDeployment: configure services')
+        for service, config in six.iteritems(configs):
+            self.d.configure(service, config)
+
+    def _auto_wait_for_status(self, message=None, exclude_services=None,
+                              include_only=None, timeout=None):
+        """Wait for all units to have a specific extended status, except
+        for any defined as excluded. Unless specified via message, any
+        status containing any case of 'ready' will be considered a match.
+
+        Examples of message usage:
+
+        Wait for all unit status to CONTAIN any case of 'ready' or 'ok':
+            message = re.compile('.*ready.*|.*ok.*', re.IGNORECASE)
+
+        Wait for all units to reach this status (exact match):
+            message = re.compile('^Unit is ready and clustered$')
+
+        Wait for all units to reach any one of these (exact match):
+            message = re.compile('Unit is ready|OK|Ready')
+
+        Wait for at least one unit to reach this status (exact match):
+            message = {'ready'}
+
+        See Amulet's sentry.wait_for_messages() for message usage detail.
+        https://github.com/juju/amulet/blob/master/amulet/sentry.py
+
+        :param message: Expected status match
+        :param exclude_services: List of juju service names to ignore,
+            not to be used in conjunction with include_only.
+        :param include_only: List of juju service names to exclusively check,
+            not to be used in conjunction with exclude_services.
+        :param timeout: Maximum time in seconds to wait for status match
+        :returns: None. Raises if timeout is hit.
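+
+        Illustrative call (a sketch; the service name and timeout are
+        hypothetical)::
+
+            self._auto_wait_for_status(exclude_services=['mysql'],
+                                       timeout=900)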
+        """
+        if not timeout:
+            timeout = int(os.environ.get('AMULET_SETUP_TIMEOUT', 1800))
+        self.log.info('Waiting for extended status on units for {}s...'
+                      ''.format(timeout))
+
+        all_services = self.d.services.keys()
+
+        if exclude_services and include_only:
+            raise ValueError('exclude_services can not be used '
+                             'with include_only')
+
+        if message:
+            # type(re.compile('')) is the compiled-pattern type on both
+            # Python 2 and 3 (re._pattern_type was removed in Python 3.7)
+            if isinstance(message, type(re.compile(''))):
+                match = message.pattern
+            else:
+                match = message
+
+            self.log.debug('Custom extended status wait match: '
+                           '{}'.format(match))
+        else:
+            self.log.debug('Default extended status wait match: contains '
+                           'READY (case-insensitive)')
+            message = re.compile('.*ready.*', re.IGNORECASE)
+
+        if exclude_services:
+            self.log.debug('Excluding services from extended status match: '
+                           '{}'.format(exclude_services))
+        else:
+            exclude_services = []
+
+        if include_only:
+            services = include_only
+        else:
+            services = list(set(all_services) - set(exclude_services))
+
+        self.log.debug('Waiting up to {}s for extended status on services: '
+                       '{}'.format(timeout, services))
+        service_messages = {service: message for service in services}
+
+        # Check for idleness
+        self.d.sentry.wait(timeout=timeout)
+        # Check for error states and bail early
+        self.d.sentry.wait_for_status(self.d.juju_env, services, timeout=timeout)
+        # Check for ready messages
+        self.d.sentry.wait_for_messages(service_messages, timeout=timeout)
+
+        self.log.info('OK')
+
+    def _get_openstack_release(self):
+        """Get openstack release.
+
+        Return an integer representing the enum value of the openstack
+        release.
+        """
+        # Must be ordered by OpenStack release (not by Ubuntu release):
+        for i, os_pair in enumerate(OPENSTACK_RELEASES_PAIRS):
+            setattr(self, os_pair, i)
+
+        releases = {
+            ('trusty', None): self.trusty_icehouse,
+            ('trusty', 'cloud:trusty-kilo'): self.trusty_kilo,
+            ('trusty', 'cloud:trusty-liberty'): self.trusty_liberty,
+            ('trusty', 'cloud:trusty-mitaka'): self.trusty_mitaka,
+            ('xenial', None): self.xenial_mitaka,
+            ('xenial', 'cloud:xenial-newton'): self.xenial_newton,
+            ('xenial', 'cloud:xenial-ocata'): self.xenial_ocata,
+            ('xenial', 'cloud:xenial-pike'): self.xenial_pike,
+            ('xenial', 'cloud:xenial-queens'): self.xenial_queens,
+            ('yakkety', None): self.yakkety_newton,
+            ('zesty', None): self.zesty_ocata,
+            ('artful', None): self.artful_pike,
+            ('bionic', None): self.bionic_queens,
+            ('bionic', 'cloud:bionic-rocky'): self.bionic_rocky,
+            ('bionic', 'cloud:bionic-stein'): self.bionic_stein,
+            ('bionic', 'cloud:bionic-train'): self.bionic_train,
+            ('bionic', 'cloud:bionic-ussuri'): self.bionic_ussuri,
+            ('cosmic', None): self.cosmic_rocky,
+            ('disco', None): self.disco_stein,
+            ('eoan', None): self.eoan_train,
+            ('focal', None): self.focal_ussuri,
+        }
+        return releases[(self.series, self.openstack)]
+
+    def _get_openstack_release_string(self):
+        """Get openstack release string.
+
+        Return a string representing the openstack release.
+        """
+        releases = OrderedDict([
+            ('trusty', 'icehouse'),
+            ('xenial', 'mitaka'),
+            ('yakkety', 'newton'),
+            ('zesty', 'ocata'),
+            ('artful', 'pike'),
+            ('bionic', 'queens'),
+            ('cosmic', 'rocky'),
+            ('disco', 'stein'),
+            ('eoan', 'train'),
+            ('focal', 'ussuri'),
+        ])
+        if self.openstack:
+            os_origin = self.openstack.split(':')[1]
+            return os_origin.split('%s-' % self.series)[1].split('/')[0]
+        else:
+            return releases[self.series]
+
+    def get_percona_service_entry(self, memory_constraint=None):
+        """Return an amulet service entry for percona cluster.
+ + :param memory_constraint: Override the default memory constraint + in the service entry. + :type memory_constraint: str + :returns: Amulet service entry. + :rtype: dict + """ + memory_constraint = memory_constraint or '3072M' + svc_entry = { + 'name': 'percona-cluster', + 'constraints': {'mem': memory_constraint}} + if self._get_openstack_release() <= self.trusty_mitaka: + svc_entry['location'] = 'cs:trusty/percona-cluster' + return svc_entry + + def get_ceph_expected_pools(self, radosgw=False): + """Return a list of expected ceph pools in a ceph + cinder + glance + test scenario, based on OpenStack release and whether ceph radosgw + is flagged as present or not.""" + + if self._get_openstack_release() == self.trusty_icehouse: + # Icehouse + pools = [ + 'data', + 'metadata', + 'rbd', + 'cinder-ceph', + 'glance' + ] + elif (self.trusty_kilo <= self._get_openstack_release() <= + self.zesty_ocata): + # Kilo through Ocata + pools = [ + 'rbd', + 'cinder-ceph', + 'glance' + ] + else: + # Pike and later + pools = [ + 'cinder-ceph', + 'glance' + ] + + if radosgw: + pools.extend([ + '.rgw.root', + '.rgw.control', + '.rgw', + '.rgw.gc', + '.users.uid' + ]) + + return pools diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/amulet/utils.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/amulet/utils.py new file mode 100644 index 0000000000000000000000000000000000000000..14864198926b82afe382d07263034b020ad43c09 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/amulet/utils.py @@ -0,0 +1,1593 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +import amulet +import json +import logging +import os +import re +import six +import time +import urllib +import urlparse + +import cinderclient.v1.client as cinder_client +import cinderclient.v2.client as cinder_clientv2 +import glanceclient.v1 as glance_client +import glanceclient.v2 as glance_clientv2 +import heatclient.v1.client as heat_client +from keystoneclient.v2_0 import client as keystone_client +from keystoneauth1.identity import ( + v3, + v2, +) +from keystoneauth1 import session as keystone_session +from keystoneclient.v3 import client as keystone_client_v3 +from novaclient import exceptions + +import novaclient.client as nova_client +import novaclient +import pika +import swiftclient + +from charmhelpers.core.decorators import retry_on_exception +from charmhelpers.contrib.amulet.utils import ( + AmuletUtils +) +from charmhelpers.core.host import CompareHostReleases + +DEBUG = logging.DEBUG +ERROR = logging.ERROR + +NOVA_CLIENT_VERSION = "2" + +OPENSTACK_RELEASES_PAIRS = [ + 'trusty_icehouse', 'trusty_kilo', 'trusty_liberty', + 'trusty_mitaka', 'xenial_mitaka', + 'xenial_newton', 'yakkety_newton', + 'xenial_ocata', 'zesty_ocata', + 'xenial_pike', 'artful_pike', + 'xenial_queens', 'bionic_queens', + 'bionic_rocky', 'cosmic_rocky', + 'bionic_stein', 'disco_stein', + 'bionic_train', 'eoan_train', + 'bionic_ussuri', 'focal_ussuri', +] + + +class OpenStackAmuletUtils(AmuletUtils): + """OpenStack amulet utilities. + + This class inherits from AmuletUtils and has additional support + that is specifically for use by OpenStack charm tests. + """ + + def __init__(self, log_level=ERROR): + """Initialize the deployment environment.""" + super(OpenStackAmuletUtils, self).__init__(log_level) + + def validate_endpoint_data(self, endpoints, admin_port, internal_port, + public_port, expected, openstack_release=None): + """Validate endpoint data. Pick the correct validator based on + OpenStack release. Expected data should be in the v2 format: + { + 'id': id, + 'region': region, + 'adminurl': adminurl, + 'internalurl': internalurl, + 'publicurl': publicurl, + 'service_id': service_id} + + """ + validation_function = self.validate_v2_endpoint_data + xenial_queens = OPENSTACK_RELEASES_PAIRS.index('xenial_queens') + if openstack_release and openstack_release >= xenial_queens: + validation_function = self.validate_v3_endpoint_data + expected = { + 'id': expected['id'], + 'region': expected['region'], + 'region_id': 'RegionOne', + 'url': self.valid_url, + 'interface': self.not_null, + 'service_id': expected['service_id']} + return validation_function(endpoints, admin_port, internal_port, + public_port, expected) + + def validate_v2_endpoint_data(self, endpoints, admin_port, internal_port, + public_port, expected): + """Validate endpoint data. + + Validate actual endpoint data vs expected endpoint data. The ports + are used to find the matching endpoint. 
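+
+        Illustrative call (a sketch; `u` is an OpenStackAmuletUtils
+        instance and the ports shown are the usual keystone ports)::
+
+            ret = u.validate_v2_endpoint_data(endpoints, '35357', '5000',
+                                              '5000', expected)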
+        """
+        self.log.debug('Validating endpoint data...')
+        self.log.debug('actual: {}'.format(repr(endpoints)))
+        found = False
+        for ep in endpoints:
+            self.log.debug('endpoint: {}'.format(repr(ep)))
+            if (admin_port in ep.adminurl and
+                    internal_port in ep.internalurl and
+                    public_port in ep.publicurl):
+                found = True
+                actual = {'id': ep.id,
+                          'region': ep.region,
+                          'adminurl': ep.adminurl,
+                          'internalurl': ep.internalurl,
+                          'publicurl': ep.publicurl,
+                          'service_id': ep.service_id}
+                ret = self._validate_dict_data(expected, actual)
+                if ret:
+                    return 'unexpected endpoint data - {}'.format(ret)
+
+        if not found:
+            return 'endpoint not found'
+
+    def validate_v3_endpoint_data(self, endpoints, admin_port, internal_port,
+                                  public_port, expected, expected_num_eps=3):
+        """Validate keystone v3 endpoint data.
+
+        Validate the v3 endpoint data which has changed from v2. The
+        ports are used to find the matching endpoint.
+
+        The new v3 endpoint data looks like:
+
+        [<Endpoint ...,
+                   links={u'self': u'<RESTful URL of this endpoint>'},
+                   region=RegionOne,
+                   region_id=RegionOne,
+                   service_id=17f842a0dc084b928e476fafe67e4095,
+                   url=http://10.5.6.5:9312>,
+         <Endpoint ...,
+                   links={u'self': u'<RESTful URL of this endpoint>'},
+                   region=RegionOne,
+                   region_id=RegionOne,
+                   service_id=72fc8736fb41435e8b3584205bb2cfa3,
+                   url=http://10.5.6.6:35357/v3>,
+         ... ]
+        """
+        self.log.debug('Validating v3 endpoint data...')
+        self.log.debug('actual: {}'.format(repr(endpoints)))
+        found = []
+        for ep in endpoints:
+            self.log.debug('endpoint: {}'.format(repr(ep)))
+            if ((admin_port in ep.url and ep.interface == 'admin') or
+                    (internal_port in ep.url and ep.interface == 'internal') or
+                    (public_port in ep.url and ep.interface == 'public')):
+                found.append(ep.interface)
+                # note we ignore the links member.
+                actual = {'id': ep.id,
+                          'region': ep.region,
+                          'region_id': ep.region_id,
+                          'interface': self.not_null,
+                          'url': ep.url,
+                          'service_id': ep.service_id, }
+                ret = self._validate_dict_data(expected, actual)
+                if ret:
+                    return 'unexpected endpoint data - {}'.format(ret)
+
+        if len(found) != expected_num_eps:
+            return 'Unexpected number of endpoints found'
+
+    def convert_svc_catalog_endpoint_data_to_v3(self, ep_data):
+        """Convert v2 endpoint data into v3.
+
+        {
+            'service_name1': [
+                {
+                    'adminURL': adminURL,
+                    'id': id,
+                    'region': region,
+                    'publicURL': publicURL,
+                    'internalURL': internalURL
+                }],
+            'service_name2': [
+                {
+                    'adminURL': adminURL,
+                    'id': id,
+                    'region': region,
+                    'publicURL': publicURL,
+                    'internalURL': internalURL
+                }],
+        }
+        """
+        self.log.warn("Endpoint ID and Region ID validation is limited to not "
+                      "null checks after v2 to v3 conversion")
+        for svc in ep_data.keys():
+            assert len(ep_data[svc]) == 1, "Unknown data format"
+            svc_ep_data = ep_data[svc][0]
+            ep_data[svc] = [
+                {
+                    'url': svc_ep_data['adminURL'],
+                    'interface': 'admin',
+                    'region': svc_ep_data['region'],
+                    'region_id': self.not_null,
+                    'id': self.not_null},
+                {
+                    'url': svc_ep_data['publicURL'],
+                    'interface': 'public',
+                    'region': svc_ep_data['region'],
+                    'region_id': self.not_null,
+                    'id': self.not_null},
+                {
+                    'url': svc_ep_data['internalURL'],
+                    'interface': 'internal',
+                    'region': svc_ep_data['region'],
+                    'region_id': self.not_null,
+                    'id': self.not_null}]
+        return ep_data
+
+    def validate_svc_catalog_endpoint_data(self, expected, actual,
+                                           openstack_release=None):
+        """Validate service catalog endpoint data. Pick the correct validator
+        for the OpenStack version. Expected data should be in the v2 format:
+        {
+            'service_name1': [
+                {
+                    'adminURL': adminURL,
+                    'id': id,
+                    'region': region,
+                    'publicURL': publicURL,
+                    'internalURL': internalURL
+                }],
+            'service_name2': [
+                {
+                    'adminURL': adminURL,
+                    'id': id,
+                    'region': region,
+                    'publicURL': publicURL,
+                    'internalURL': internalURL
+                }],
+        }
+
+        """
+        validation_function = self.validate_v2_svc_catalog_endpoint_data
+        xenial_queens = OPENSTACK_RELEASES_PAIRS.index('xenial_queens')
+        if openstack_release and openstack_release >= xenial_queens:
+            validation_function = self.validate_v3_svc_catalog_endpoint_data
+            expected = self.convert_svc_catalog_endpoint_data_to_v3(expected)
+        return validation_function(expected, actual)
+
+    def validate_v2_svc_catalog_endpoint_data(self, expected, actual):
+        """Validate service catalog endpoint data.
+
+        Validate a list of actual service catalog endpoints vs a list of
+        expected service catalog endpoints.
+        """
+        self.log.debug('Validating service catalog endpoint data...')
+        self.log.debug('actual: {}'.format(repr(actual)))
+        for k, v in six.iteritems(expected):
+            if k in actual:
+                ret = self._validate_dict_data(expected[k][0], actual[k][0])
+                if ret:
+                    return self.endpoint_error(k, ret)
+            else:
+                return "endpoint {} does not exist".format(k)
+        return ret
+
+    def validate_v3_svc_catalog_endpoint_data(self, expected, actual):
+        """Validate the keystone v3 catalog endpoint data.
+
+        Validate a list of dictionaries that make up the keystone v3 service
+        catalogue.
+
+        It is in the form of:
+
+
+        {u'identity': [{u'id': u'48346b01c6804b298cdd7349aadb732e',
+                        u'interface': u'admin',
+                        u'region': u'RegionOne',
+                        u'region_id': u'RegionOne',
+                        u'url': u'http://10.5.5.224:35357/v3'},
+                       {u'id': u'8414f7352a4b47a69fddd9dbd2aef5cf',
+                        u'interface': u'public',
+                        u'region': u'RegionOne',
+                        u'region_id': u'RegionOne',
+                        u'url': u'http://10.5.5.224:5000/v3'},
+                       {u'id': u'd5ca31440cc24ee1bf625e2996fb6a5b',
+                        u'interface': u'internal',
+                        u'region': u'RegionOne',
+                        u'region_id': u'RegionOne',
+                        u'url': u'http://10.5.5.224:5000/v3'}],
+         u'key-manager': [{u'id': u'68ebc17df0b045fcb8a8a433ebea9e62',
+                           u'interface': u'public',
+                           u'region': u'RegionOne',
+                           u'region_id': u'RegionOne',
+                           u'url': u'http://10.5.5.223:9311'},
+                          {u'id': u'9cdfe2a893c34afd8f504eb218cd2f9d',
+                           u'interface': u'internal',
+                           u'region': u'RegionOne',
+                           u'region_id': u'RegionOne',
+                           u'url': u'http://10.5.5.223:9311'},
+                          {u'id': u'f629388955bc407f8b11d8b7ca168086',
+                           u'interface': u'admin',
+                           u'region': u'RegionOne',
+                           u'region_id': u'RegionOne',
+                           u'url': u'http://10.5.5.223:9312'}]}
+
+        Note that an added complication is that the order of admin, public
+        and internal endpoints (the 'interface' key) within each region is
+        not guaranteed.
+
+        Thus, the function sorts the expected and actual lists using the
+        interface key as a sort key, prior to the comparison.
+ """ + self.log.debug('Validating v3 service catalog endpoint data...') + self.log.debug('actual: {}'.format(repr(actual))) + for k, v in six.iteritems(expected): + if k in actual: + l_expected = sorted(v, key=lambda x: x['interface']) + l_actual = sorted(actual[k], key=lambda x: x['interface']) + if len(l_actual) != len(l_expected): + return ("endpoint {} has differing number of interfaces " + " - expected({}), actual({})" + .format(k, len(l_expected), len(l_actual))) + for i_expected, i_actual in zip(l_expected, l_actual): + self.log.debug("checking interface {}" + .format(i_expected['interface'])) + ret = self._validate_dict_data(i_expected, i_actual) + if ret: + return self.endpoint_error(k, ret) + else: + return "endpoint {} does not exist".format(k) + return ret + + def validate_tenant_data(self, expected, actual): + """Validate tenant data. + + Validate a list of actual tenant data vs list of expected tenant + data. + """ + self.log.debug('Validating tenant data...') + self.log.debug('actual: {}'.format(repr(actual))) + for e in expected: + found = False + for act in actual: + a = {'enabled': act.enabled, 'description': act.description, + 'name': act.name, 'id': act.id} + if e['name'] == a['name']: + found = True + ret = self._validate_dict_data(e, a) + if ret: + return "unexpected tenant data - {}".format(ret) + if not found: + return "tenant {} does not exist".format(e['name']) + return ret + + def validate_role_data(self, expected, actual): + """Validate role data. + + Validate a list of actual role data vs a list of expected role + data. + """ + self.log.debug('Validating role data...') + self.log.debug('actual: {}'.format(repr(actual))) + for e in expected: + found = False + for act in actual: + a = {'name': act.name, 'id': act.id} + if e['name'] == a['name']: + found = True + ret = self._validate_dict_data(e, a) + if ret: + return "unexpected role data - {}".format(ret) + if not found: + return "role {} does not exist".format(e['name']) + return ret + + def validate_user_data(self, expected, actual, api_version=None): + """Validate user data. + + Validate a list of actual user data vs a list of expected user + data. + """ + self.log.debug('Validating user data...') + self.log.debug('actual: {}'.format(repr(actual))) + for e in expected: + found = False + for act in actual: + if e['name'] == act.name: + a = {'enabled': act.enabled, 'name': act.name, + 'email': act.email, 'id': act.id} + if api_version == 3: + a['default_project_id'] = getattr(act, + 'default_project_id', + 'none') + else: + a['tenantId'] = act.tenantId + found = True + ret = self._validate_dict_data(e, a) + if ret: + return "unexpected user data - {}".format(ret) + if not found: + return "user {} does not exist".format(e['name']) + return ret + + def validate_flavor_data(self, expected, actual): + """Validate flavor data. + + Validate a list of actual flavors vs a list of expected flavors. 
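+
+        Illustrative call (a sketch; `u` is an OpenStackAmuletUtils
+        instance and `nova` an authenticated client)::
+
+            ret = u.validate_flavor_data(['m1.tiny'], nova.flavors.list())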
+ """ + self.log.debug('Validating flavor data...') + self.log.debug('actual: {}'.format(repr(actual))) + act = [a.name for a in actual] + return self._validate_list_data(expected, act) + + def tenant_exists(self, keystone, tenant): + """Return True if tenant exists.""" + self.log.debug('Checking if tenant exists ({})...'.format(tenant)) + return tenant in [t.name for t in keystone.tenants.list()] + + @retry_on_exception(num_retries=5, base_delay=1) + def keystone_wait_for_propagation(self, sentry_relation_pairs, + api_version): + """Iterate over list of sentry and relation tuples and verify that + api_version has the expected value. + + :param sentry_relation_pairs: list of sentry, relation name tuples used + for monitoring propagation of relation + data + :param api_version: api_version to expect in relation data + :returns: None if successful. Raise on error. + """ + for (sentry, relation_name) in sentry_relation_pairs: + rel = sentry.relation('identity-service', + relation_name) + self.log.debug('keystone relation data: {}'.format(rel)) + if rel.get('api_version') != str(api_version): + raise Exception("api_version not propagated through relation" + " data yet ('{}' != '{}')." + "".format(rel.get('api_version'), api_version)) + + def keystone_configure_api_version(self, sentry_relation_pairs, deployment, + api_version): + """Configure preferred-api-version of keystone in deployment and + monitor provided list of relation objects for propagation + before returning to caller. + + :param sentry_relation_pairs: list of sentry, relation tuples used for + monitoring propagation of relation data + :param deployment: deployment to configure + :param api_version: value preferred-api-version will be set to + :returns: None if successful. Raise on error. + """ + self.log.debug("Setting keystone preferred-api-version: '{}'" + "".format(api_version)) + + config = {'preferred-api-version': api_version} + deployment.d.configure('keystone', config) + deployment._auto_wait_for_status() + self.keystone_wait_for_propagation(sentry_relation_pairs, api_version) + + def authenticate_cinder_admin(self, keystone, api_version=2): + """Authenticates admin user with cinder.""" + self.log.debug('Authenticating cinder admin...') + _clients = { + 1: cinder_client.Client, + 2: cinder_clientv2.Client} + return _clients[api_version](session=keystone.session) + + def authenticate_keystone(self, keystone_ip, username, password, + api_version=False, admin_port=False, + user_domain_name=None, domain_name=None, + project_domain_name=None, project_name=None): + """Authenticate with Keystone""" + self.log.debug('Authenticating with keystone...') + if not api_version: + api_version = 2 + sess, auth = self.get_keystone_session( + keystone_ip=keystone_ip, + username=username, + password=password, + api_version=api_version, + admin_port=admin_port, + user_domain_name=user_domain_name, + domain_name=domain_name, + project_domain_name=project_domain_name, + project_name=project_name + ) + if api_version == 2: + client = keystone_client.Client(session=sess) + else: + client = keystone_client_v3.Client(session=sess) + # This populates the client.service_catalog + client.auth_ref = auth.get_access(sess) + return client + + def get_keystone_session(self, keystone_ip, username, password, + api_version=False, admin_port=False, + user_domain_name=None, domain_name=None, + project_domain_name=None, project_name=None): + """Return a keystone session object""" + ep = self.get_keystone_endpoint(keystone_ip, + api_version=api_version, + 
admin_port=admin_port)
+        if api_version == 2:
+            auth = v2.Password(
+                username=username,
+                password=password,
+                tenant_name=project_name,
+                auth_url=ep
+            )
+            sess = keystone_session.Session(auth=auth)
+        else:
+            auth = v3.Password(
+                user_domain_name=user_domain_name,
+                username=username,
+                password=password,
+                domain_name=domain_name,
+                project_domain_name=project_domain_name,
+                project_name=project_name,
+                auth_url=ep
+            )
+            sess = keystone_session.Session(auth=auth)
+        return (sess, auth)
+
+    def get_keystone_endpoint(self, keystone_ip, api_version=None,
+                              admin_port=False):
+        """Return keystone endpoint"""
+        port = 5000
+        if admin_port:
+            port = 35357
+        base_ep = "http://{}:{}".format(keystone_ip.strip().decode('utf-8'),
+                                        port)
+        if api_version == 2:
+            ep = base_ep + "/v2.0"
+        else:
+            ep = base_ep + "/v3"
+        return ep
+
+    def get_default_keystone_session(self, keystone_sentry,
+                                     openstack_release=None, api_version=2):
+        """Return a keystone session object and client object assuming standard
+        default settings
+
+        Example call in amulet tests:
+            self.keystone_session, self.keystone = u.get_default_keystone_session(
+                self.keystone_sentry,
+                openstack_release=self._get_openstack_release())
+
+        The session can then be used to auth other clients:
+            neutronclient.Client(session=session)
+            aodh_client.Client(session=session)
+            etc.
+        """
+        self.log.debug('Authenticating keystone admin...')
+        # 11 => xenial_queens
+        if api_version == 3 or (openstack_release and openstack_release >= 11):
+            client_class = keystone_client_v3.Client
+            api_version = 3
+        else:
+            client_class = keystone_client.Client
+        keystone_ip = keystone_sentry.info['public-address']
+        session, auth = self.get_keystone_session(
+            keystone_ip,
+            api_version=api_version,
+            username='admin',
+            password='openstack',
+            project_name='admin',
+            user_domain_name='admin_domain',
+            project_domain_name='admin_domain')
+        client = client_class(session=session)
+        # This populates the client.service_catalog
+        client.auth_ref = auth.get_access(session)
+        return session, client
+
+    def authenticate_keystone_admin(self, keystone_sentry, user, password,
+                                    tenant=None, api_version=None,
+                                    keystone_ip=None, user_domain_name=None,
+                                    project_domain_name=None,
+                                    project_name=None):
+        """Authenticates admin user with the keystone admin endpoint."""
+        self.log.debug('Authenticating keystone admin...')
+        if not keystone_ip:
+            keystone_ip = keystone_sentry.info['public-address']
+
+        # To support backward compatibility usage of this function
+        if not project_name:
+            project_name = tenant
+        if api_version == 3 and not user_domain_name:
+            user_domain_name = 'admin_domain'
+        if api_version == 3 and not project_domain_name:
+            project_domain_name = 'admin_domain'
+        if api_version == 3 and not project_name:
+            project_name = 'admin'
+
+        return self.authenticate_keystone(
+            keystone_ip, user, password,
+            api_version=api_version,
+            user_domain_name=user_domain_name,
+            project_domain_name=project_domain_name,
+            project_name=project_name,
+            admin_port=True)
+
+    def authenticate_keystone_user(self, keystone, user, password, tenant):
+        """Authenticates a regular user with the keystone public endpoint."""
+        self.log.debug('Authenticating keystone user ({})...'.format(user))
+        ep = keystone.service_catalog.url_for(service_type='identity',
+                                              interface='publicURL')
+        keystone_ip = urlparse.urlparse(ep).hostname
+
+        return self.authenticate_keystone(keystone_ip, user, password,
+                                          project_name=tenant)
+
+    def authenticate_glance_admin(self, keystone, force_v1_client=False):
+
"""Authenticates admin user with glance.""" + self.log.debug('Authenticating glance admin...') + ep = keystone.service_catalog.url_for(service_type='image', + interface='adminURL') + if not force_v1_client and keystone.session: + return glance_clientv2.Client("2", session=keystone.session) + else: + return glance_client.Client(ep, token=keystone.auth_token) + + def authenticate_heat_admin(self, keystone): + """Authenticates the admin user with heat.""" + self.log.debug('Authenticating heat admin...') + ep = keystone.service_catalog.url_for(service_type='orchestration', + interface='publicURL') + if keystone.session: + return heat_client.Client(endpoint=ep, session=keystone.session) + else: + return heat_client.Client(endpoint=ep, token=keystone.auth_token) + + def authenticate_nova_user(self, keystone, user, password, tenant): + """Authenticates a regular user with nova-api.""" + self.log.debug('Authenticating nova user ({})...'.format(user)) + ep = keystone.service_catalog.url_for(service_type='identity', + interface='publicURL') + if keystone.session: + return nova_client.Client(NOVA_CLIENT_VERSION, + session=keystone.session, + auth_url=ep) + elif novaclient.__version__[0] >= "7": + return nova_client.Client(NOVA_CLIENT_VERSION, + username=user, password=password, + project_name=tenant, auth_url=ep) + else: + return nova_client.Client(NOVA_CLIENT_VERSION, + username=user, api_key=password, + project_id=tenant, auth_url=ep) + + def authenticate_swift_user(self, keystone, user, password, tenant): + """Authenticates a regular user with swift api.""" + self.log.debug('Authenticating swift user ({})...'.format(user)) + ep = keystone.service_catalog.url_for(service_type='identity', + interface='publicURL') + if keystone.session: + return swiftclient.Connection(session=keystone.session) + else: + return swiftclient.Connection(authurl=ep, + user=user, + key=password, + tenant_name=tenant, + auth_version='2.0') + + def create_flavor(self, nova, name, ram, vcpus, disk, flavorid="auto", + ephemeral=0, swap=0, rxtx_factor=1.0, is_public=True): + """Create the specified flavor.""" + try: + nova.flavors.find(name=name) + except (exceptions.NotFound, exceptions.NoUniqueMatch): + self.log.debug('Creating flavor ({})'.format(name)) + nova.flavors.create(name, ram, vcpus, disk, flavorid, + ephemeral, swap, rxtx_factor, is_public) + + def glance_create_image(self, glance, image_name, image_url, + download_dir='tests', + hypervisor_type=None, + disk_format='qcow2', + architecture='x86_64', + container_format='bare'): + """Download an image and upload it to glance, validate its status + and return an image object pointer. KVM defaults, can override for + LXD. 
+ + :param glance: pointer to authenticated glance api connection + :param image_name: display name for new image + :param image_url: url to retrieve + :param download_dir: directory to store downloaded image file + :param hypervisor_type: glance image hypervisor property + :param disk_format: glance image disk format + :param architecture: glance image architecture property + :param container_format: glance image container format + :returns: glance image pointer + """ + self.log.debug('Creating glance image ({}) from ' + '{}...'.format(image_name, image_url)) + + # Download image + http_proxy = os.getenv('OS_TEST_HTTP_PROXY') + self.log.debug('OS_TEST_HTTP_PROXY: {}'.format(http_proxy)) + if http_proxy: + proxies = {'http': http_proxy} + opener = urllib.FancyURLopener(proxies) + else: + opener = urllib.FancyURLopener() + + abs_file_name = os.path.join(download_dir, image_name) + if not os.path.exists(abs_file_name): + opener.retrieve(image_url, abs_file_name) + + # Create glance image + glance_properties = { + 'architecture': architecture, + } + if hypervisor_type: + glance_properties['hypervisor_type'] = hypervisor_type + # Create glance image + if float(glance.version) < 2.0: + with open(abs_file_name) as f: + image = glance.images.create( + name=image_name, + is_public=True, + disk_format=disk_format, + container_format=container_format, + properties=glance_properties, + data=f) + else: + image = glance.images.create( + name=image_name, + visibility="public", + disk_format=disk_format, + container_format=container_format) + glance.images.upload(image.id, open(abs_file_name, 'rb')) + glance.images.update(image.id, **glance_properties) + + # Wait for image to reach active status + img_id = image.id + ret = self.resource_reaches_status(glance.images, img_id, + expected_stat='active', + msg='Image status wait') + if not ret: + msg = 'Glance image failed to reach expected state.' + amulet.raise_status(amulet.FAIL, msg=msg) + + # Re-validate new image + self.log.debug('Validating image attributes...') + val_img_name = glance.images.get(img_id).name + val_img_stat = glance.images.get(img_id).status + val_img_cfmt = glance.images.get(img_id).container_format + val_img_dfmt = glance.images.get(img_id).disk_format + + if float(glance.version) < 2.0: + val_img_pub = glance.images.get(img_id).is_public + else: + val_img_pub = glance.images.get(img_id).visibility == "public" + + msg_attr = ('Image attributes - name:{} public:{} id:{} stat:{} ' + 'container fmt:{} disk fmt:{}'.format( + val_img_name, val_img_pub, img_id, + val_img_stat, val_img_cfmt, val_img_dfmt)) + + if val_img_name == image_name and val_img_stat == 'active' \ + and val_img_pub is True and val_img_cfmt == container_format \ + and val_img_dfmt == disk_format: + self.log.debug(msg_attr) + else: + msg = ('Image validation failed, {}'.format(msg_attr)) + amulet.raise_status(amulet.FAIL, msg=msg) + + return image + + def create_cirros_image(self, glance, image_name, hypervisor_type=None): + """Download the latest cirros image and upload it to glance, + validate and return a resource pointer. 
+ + :param glance: pointer to authenticated glance connection + :param image_name: display name for new image + :param hypervisor_type: glance image hypervisor property + :returns: glance image pointer + """ + # /!\ DEPRECATION WARNING + self.log.warn('/!\\ DEPRECATION WARNING: use ' + 'glance_create_image instead of ' + 'create_cirros_image.') + + self.log.debug('Creating glance cirros image ' + '({})...'.format(image_name)) + + # Get cirros image URL + http_proxy = os.getenv('OS_TEST_HTTP_PROXY') + self.log.debug('OS_TEST_HTTP_PROXY: {}'.format(http_proxy)) + if http_proxy: + proxies = {'http': http_proxy} + opener = urllib.FancyURLopener(proxies) + else: + opener = urllib.FancyURLopener() + + f = opener.open('http://download.cirros-cloud.net/version/released') + version = f.read().strip() + cirros_img = 'cirros-{}-x86_64-disk.img'.format(version) + cirros_url = 'http://{}/{}/{}'.format('download.cirros-cloud.net', + version, cirros_img) + f.close() + + return self.glance_create_image( + glance, + image_name, + cirros_url, + hypervisor_type=hypervisor_type) + + def delete_image(self, glance, image): + """Delete the specified image.""" + + # /!\ DEPRECATION WARNING + self.log.warn('/!\\ DEPRECATION WARNING: use ' + 'delete_resource instead of delete_image.') + self.log.debug('Deleting glance image ({})...'.format(image)) + return self.delete_resource(glance.images, image, msg='glance image') + + def create_instance(self, nova, image_name, instance_name, flavor): + """Create the specified instance.""" + self.log.debug('Creating instance ' + '({}|{}|{})'.format(instance_name, image_name, flavor)) + image = nova.glance.find_image(image_name) + flavor = nova.flavors.find(name=flavor) + instance = nova.servers.create(name=instance_name, image=image, + flavor=flavor) + + count = 1 + status = instance.status + while status != 'ACTIVE' and count < 60: + time.sleep(3) + instance = nova.servers.get(instance.id) + status = instance.status + self.log.debug('instance status: {}'.format(status)) + count += 1 + + if status != 'ACTIVE': + self.log.error('instance creation timed out') + return None + + return instance + + def delete_instance(self, nova, instance): + """Delete the specified instance.""" + + # /!\ DEPRECATION WARNING + self.log.warn('/!\\ DEPRECATION WARNING: use ' + 'delete_resource instead of delete_instance.') + self.log.debug('Deleting instance ({})...'.format(instance)) + return self.delete_resource(nova.servers, instance, + msg='nova instance') + + def create_or_get_keypair(self, nova, keypair_name="testkey"): + """Create a new keypair, or return pointer if it already exists.""" + try: + _keypair = nova.keypairs.get(keypair_name) + self.log.debug('Keypair ({}) already exists, ' + 'using it.'.format(keypair_name)) + return _keypair + except Exception: + self.log.debug('Keypair ({}) does not exist, ' + 'creating it.'.format(keypair_name)) + + _keypair = nova.keypairs.create(name=keypair_name) + return _keypair + + def _get_cinder_obj_name(self, cinder_object): + """Retrieve name of cinder object. 
+ + :param cinder_object: cinder snapshot or volume object + :returns: str cinder object name + """ + # v1 objects store name in 'display_name' attr but v2+ use 'name' + try: + return cinder_object.display_name + except AttributeError: + return cinder_object.name + + def create_cinder_volume(self, cinder, vol_name="demo-vol", vol_size=1, + img_id=None, src_vol_id=None, snap_id=None): + """Create cinder volume, optionally from a glance image, OR + optionally as a clone of an existing volume, OR optionally + from a snapshot. Wait for the new volume status to reach + the expected status, validate and return a resource pointer. + + :param vol_name: cinder volume display name + :param vol_size: size in gigabytes + :param img_id: optional glance image id + :param src_vol_id: optional source volume id to clone + :param snap_id: optional snapshot id to use + :returns: cinder volume pointer + """ + # Handle parameter input and avoid impossible combinations + if img_id and not src_vol_id and not snap_id: + # Create volume from image + self.log.debug('Creating cinder volume from glance image...') + bootable = 'true' + elif src_vol_id and not img_id and not snap_id: + # Clone an existing volume + self.log.debug('Cloning cinder volume...') + bootable = cinder.volumes.get(src_vol_id).bootable + elif snap_id and not src_vol_id and not img_id: + # Create volume from snapshot + self.log.debug('Creating cinder volume from snapshot...') + snap = cinder.volume_snapshots.find(id=snap_id) + vol_size = snap.size + snap_vol_id = cinder.volume_snapshots.get(snap_id).volume_id + bootable = cinder.volumes.get(snap_vol_id).bootable + elif not img_id and not src_vol_id and not snap_id: + # Create volume + self.log.debug('Creating cinder volume...') + bootable = 'false' + else: + # Impossible combination of parameters + msg = ('Invalid method use - name:{} size:{} img_id:{} ' + 'src_vol_id:{} snap_id:{}'.format(vol_name, vol_size, + img_id, src_vol_id, + snap_id)) + amulet.raise_status(amulet.FAIL, msg=msg) + + # Create new volume + try: + vol_new = cinder.volumes.create(display_name=vol_name, + imageRef=img_id, + size=vol_size, + source_volid=src_vol_id, + snapshot_id=snap_id) + vol_id = vol_new.id + except TypeError: + vol_new = cinder.volumes.create(name=vol_name, + imageRef=img_id, + size=vol_size, + source_volid=src_vol_id, + snapshot_id=snap_id) + vol_id = vol_new.id + except Exception as e: + msg = 'Failed to create volume: {}'.format(e) + amulet.raise_status(amulet.FAIL, msg=msg) + + # Wait for volume to reach available status + ret = self.resource_reaches_status(cinder.volumes, vol_id, + expected_stat="available", + msg="Volume status wait") + if not ret: + msg = 'Cinder volume failed to reach expected state.' 
+            amulet.raise_status(amulet.FAIL, msg=msg)
+
+        # Re-validate new volume
+        self.log.debug('Validating volume attributes...')
+        val_vol_name = self._get_cinder_obj_name(cinder.volumes.get(vol_id))
+        val_vol_boot = cinder.volumes.get(vol_id).bootable
+        val_vol_stat = cinder.volumes.get(vol_id).status
+        val_vol_size = cinder.volumes.get(vol_id).size
+        msg_attr = ('Volume attributes - name:{} id:{} stat:{} boot:'
+                    '{} size:{}'.format(val_vol_name, vol_id,
+                                        val_vol_stat, val_vol_boot,
+                                        val_vol_size))
+
+        if val_vol_boot == bootable and val_vol_stat == 'available' \
+                and val_vol_name == vol_name and val_vol_size == vol_size:
+            self.log.debug(msg_attr)
+        else:
+            msg = ('Volume validation failed, {}'.format(msg_attr))
+            amulet.raise_status(amulet.FAIL, msg=msg)
+
+        return vol_new
+
+    def delete_resource(self, resource, resource_id,
+                        msg="resource", max_wait=120):
+        """Delete one openstack resource, such as one instance, keypair,
+        image, volume, stack, etc., and confirm deletion within max wait time.
+
+        :param resource: pointer to os resource type, ex:glance_client.images
+        :param resource_id: unique name or id for the openstack resource
+        :param msg: text to identify purpose in logging
+        :param max_wait: maximum wait time in seconds
+        :returns: True if successful, otherwise False
+        """
+        self.log.debug('Deleting OpenStack resource '
+                       '{} ({})'.format(resource_id, msg))
+        num_before = len(list(resource.list()))
+        resource.delete(resource_id)
+
+        tries = 0
+        num_after = len(list(resource.list()))
+        while num_after != (num_before - 1) and tries < (max_wait / 4):
+            self.log.debug('{} delete check: '
+                           '{} [{}:{}] {}'.format(msg, tries,
+                                                  num_before,
+                                                  num_after,
+                                                  resource_id))
+            time.sleep(4)
+            num_after = len(list(resource.list()))
+            tries += 1
+
+        self.log.debug('{}: expected, actual count = {}, '
+                       '{}'.format(msg, num_before - 1, num_after))
+
+        if num_after == (num_before - 1):
+            return True
+        else:
+            self.log.error('{} delete timed out'.format(msg))
+            return False
+
+    def resource_reaches_status(self, resource, resource_id,
+                                expected_stat='available',
+                                msg='resource', max_wait=120):
+        """Wait for an openstack resource's status to reach an
+        expected status within a specified time. Useful to confirm that
+        nova instances, cinder vols, snapshots, glance images, heat stacks
+        and other resources eventually reach the expected status.
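+
+        Illustrative call (a sketch, mirroring how create_cinder_volume
+        above uses this helper)::
+
+            ret = u.resource_reaches_status(cinder.volumes, vol_id,
+                                            expected_stat='available',
+                                            msg='Volume status wait')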
+ + :param resource: pointer to os resource type, ex: heat_client.stacks + :param resource_id: unique id for the openstack resource + :param expected_stat: status to expect resource to reach + :param msg: text to identify purpose in logging + :param max_wait: maximum wait time in seconds + :returns: True if successful, False if status is not reached + """ + + tries = 0 + resource_stat = resource.get(resource_id).status + while resource_stat != expected_stat and tries < (max_wait / 4): + self.log.debug('{} status check: ' + '{} [{}:{}] {}'.format(msg, tries, + resource_stat, + expected_stat, + resource_id)) + time.sleep(4) + resource_stat = resource.get(resource_id).status + tries += 1 + + self.log.debug('{}: expected, actual status = {}, ' + '{}'.format(msg, resource_stat, expected_stat)) + + if resource_stat == expected_stat: + return True + else: + self.log.debug('{} never reached expected status: ' + '{}'.format(resource_id, expected_stat)) + return False + + def get_ceph_osd_id_cmd(self, index): + """Produce a shell command that will return a ceph-osd id.""" + return ("`initctl list | grep 'ceph-osd ' | " + "awk 'NR=={} {{ print $2 }}' | " + "grep -o '[0-9]*'`".format(index + 1)) + + def get_ceph_pools(self, sentry_unit): + """Return a dict of ceph pools from a single ceph unit, with + pool name as keys, pool id as vals.""" + pools = {} + cmd = 'sudo ceph osd lspools' + output, code = sentry_unit.run(cmd) + if code != 0: + msg = ('{} `{}` returned {} ' + '{}'.format(sentry_unit.info['unit_name'], + cmd, code, output)) + amulet.raise_status(amulet.FAIL, msg=msg) + + # For mimic ceph osd lspools output + output = output.replace("\n", ",") + + # Example output: 0 data,1 metadata,2 rbd,3 cinder,4 glance, + for pool in str(output).split(','): + pool_id_name = pool.split(' ') + if len(pool_id_name) == 2: + pool_id = pool_id_name[0] + pool_name = pool_id_name[1] + pools[pool_name] = int(pool_id) + + self.log.debug('Pools on {}: {}'.format(sentry_unit.info['unit_name'], + pools)) + return pools + + def get_ceph_df(self, sentry_unit): + """Return dict of ceph df json output, including ceph pool state. + + :param sentry_unit: Pointer to amulet sentry instance (juju unit) + :returns: Dict of ceph df output + """ + cmd = 'sudo ceph df --format=json' + output, code = sentry_unit.run(cmd) + if code != 0: + msg = ('{} `{}` returned {} ' + '{}'.format(sentry_unit.info['unit_name'], + cmd, code, output)) + amulet.raise_status(amulet.FAIL, msg=msg) + return json.loads(output) + + def get_ceph_pool_sample(self, sentry_unit, pool_id=0): + """Take a sample of attributes of a ceph pool, returning ceph + pool name, object count and disk space used for the specified + pool ID number. + + :param sentry_unit: Pointer to amulet sentry instance (juju unit) + :param pool_id: Ceph pool ID + :returns: List of pool name, object count, kb disk space used + """ + df = self.get_ceph_df(sentry_unit) + for pool in df['pools']: + if pool['id'] == pool_id: + pool_name = pool['name'] + obj_count = pool['stats']['objects'] + kb_used = pool['stats']['kb_used'] + + self.log.debug('Ceph {} pool (ID {}): {} objects, ' + '{} kb used'.format(pool_name, pool_id, + obj_count, kb_used)) + return pool_name, obj_count, kb_used + + def validate_ceph_pool_samples(self, samples, sample_type="resource pool"): + """Validate ceph pool samples taken over time, such as pool + object counts or pool kb used, before adding, after adding, and + after deleting items which affect those pool attributes. 
The + 2nd element is expected to be greater than the 1st; 3rd is expected + to be less than the 2nd. + + :param samples: List containing 3 data samples + :param sample_type: String for logging and usage context + :returns: None if successful, Failure message otherwise + """ + original, created, deleted = range(3) + if samples[created] <= samples[original] or \ + samples[deleted] >= samples[created]: + return ('Ceph {} samples ({}) ' + 'unexpected.'.format(sample_type, samples)) + else: + self.log.debug('Ceph {} samples (OK): ' + '{}'.format(sample_type, samples)) + return None + + # rabbitmq/amqp specific helpers: + + def rmq_wait_for_cluster(self, deployment, init_sleep=15, timeout=1200): + """Wait for rmq units extended status to show cluster readiness, + after an optional initial sleep period. Initial sleep is likely + necessary to be effective following a config change, as status + message may not instantly update to non-ready.""" + + if init_sleep: + time.sleep(init_sleep) + + message = re.compile('^Unit is ready and clustered$') + deployment._auto_wait_for_status(message=message, + timeout=timeout, + include_only=['rabbitmq-server']) + + def add_rmq_test_user(self, sentry_units, + username="testuser1", password="changeme"): + """Add a test user via the first rmq juju unit, check connection as + the new user against all sentry units. + + :param sentry_units: list of sentry unit pointers + :param username: amqp user name, default to testuser1 + :param password: amqp user password + :returns: None if successful. Raise on error. + """ + self.log.debug('Adding rmq user ({})...'.format(username)) + + # Check that user does not already exist + cmd_user_list = 'rabbitmqctl list_users' + output, _ = self.run_cmd_unit(sentry_units[0], cmd_user_list) + if username in output: + self.log.warning('User ({}) already exists, returning ' + 'gracefully.'.format(username)) + return + + perms = '".*" ".*" ".*"' + cmds = ['rabbitmqctl add_user {} {}'.format(username, password), + 'rabbitmqctl set_permissions {} {}'.format(username, perms)] + + # Add user via first unit + for cmd in cmds: + output, _ = self.run_cmd_unit(sentry_units[0], cmd) + + # Check connection against the other sentry_units + self.log.debug('Checking user connect against units...') + for sentry_unit in sentry_units: + connection = self.connect_amqp_by_unit(sentry_unit, ssl=False, + username=username, + password=password) + connection.close() + + def delete_rmq_test_user(self, sentry_units, username="testuser1"): + """Delete a rabbitmq user via the first rmq juju unit. + + :param sentry_units: list of sentry unit pointers + :param username: amqp user name, default to testuser1 + :param password: amqp user password + :returns: None if successful or no such user. + """ + self.log.debug('Deleting rmq user ({})...'.format(username)) + + # Check that the user exists + cmd_user_list = 'rabbitmqctl list_users' + output, _ = self.run_cmd_unit(sentry_units[0], cmd_user_list) + + if username not in output: + self.log.warning('User ({}) does not exist, returning ' + 'gracefully.'.format(username)) + return + + # Delete the user + cmd_user_del = 'rabbitmqctl delete_user {}'.format(username) + output, _ = self.run_cmd_unit(sentry_units[0], cmd_user_del) + + def get_rmq_cluster_status(self, sentry_unit): + """Execute rabbitmq cluster status command on a unit and return + the full output. 
+ + :param unit: sentry unit + :returns: String containing console output of cluster status command + """ + cmd = 'rabbitmqctl cluster_status' + output, _ = self.run_cmd_unit(sentry_unit, cmd) + self.log.debug('{} cluster_status:\n{}'.format( + sentry_unit.info['unit_name'], output)) + return str(output) + + def get_rmq_cluster_running_nodes(self, sentry_unit): + """Parse rabbitmqctl cluster_status output string, return list of + running rabbitmq cluster nodes. + + :param unit: sentry unit + :returns: List containing node names of running nodes + """ + # NOTE(beisner): rabbitmqctl cluster_status output is not + # json-parsable, do string chop foo, then json.loads that. + str_stat = self.get_rmq_cluster_status(sentry_unit) + if 'running_nodes' in str_stat: + pos_start = str_stat.find("{running_nodes,") + 15 + pos_end = str_stat.find("]},", pos_start) + 1 + str_run_nodes = str_stat[pos_start:pos_end].replace("'", '"') + run_nodes = json.loads(str_run_nodes) + return run_nodes + else: + return [] + + def validate_rmq_cluster_running_nodes(self, sentry_units): + """Check that all rmq unit hostnames are represented in the + cluster_status output of all units. + + :param host_names: dict of juju unit names to host names + :param units: list of sentry unit pointers (all rmq units) + :returns: None if successful, otherwise return error message + """ + host_names = self.get_unit_hostnames(sentry_units) + errors = [] + + # Query every unit for cluster_status running nodes + for query_unit in sentry_units: + query_unit_name = query_unit.info['unit_name'] + running_nodes = self.get_rmq_cluster_running_nodes(query_unit) + + # Confirm that every unit is represented in the queried unit's + # cluster_status running nodes output. + for validate_unit in sentry_units: + val_host_name = host_names[validate_unit.info['unit_name']] + val_node_name = 'rabbit@{}'.format(val_host_name) + + if val_node_name not in running_nodes: + errors.append('Cluster member check failed on {}: {} not ' + 'in {}\n'.format(query_unit_name, + val_node_name, + running_nodes)) + if errors: + return ''.join(errors) + + def rmq_ssl_is_enabled_on_unit(self, sentry_unit, port=None): + """Check a single juju rmq unit for ssl and port in the config file.""" + host = sentry_unit.info['public-address'] + unit_name = sentry_unit.info['unit_name'] + + conf_file = '/etc/rabbitmq/rabbitmq.config' + conf_contents = str(self.file_contents_safe(sentry_unit, + conf_file, max_wait=16)) + # Checks + conf_ssl = 'ssl' in conf_contents + conf_port = str(port) in conf_contents + + # Port explicitly checked in config + if port and conf_port and conf_ssl: + self.log.debug('SSL is enabled @{}:{} ' + '({})'.format(host, port, unit_name)) + return True + elif port and not conf_port and conf_ssl: + self.log.debug('SSL is enabled @{} but not on port {} ' + '({})'.format(host, port, unit_name)) + return False + # Port not checked (useful when checking that ssl is disabled) + elif not port and conf_ssl: + self.log.debug('SSL is enabled @{}:{} ' + '({})'.format(host, port, unit_name)) + return True + elif not conf_ssl: + self.log.debug('SSL not enabled @{}:{} ' + '({})'.format(host, port, unit_name)) + return False + else: + msg = ('Unknown condition when checking SSL status @{}:{} ' + '({})'.format(host, port, unit_name)) + amulet.raise_status(amulet.FAIL, msg) + + def validate_rmq_ssl_enabled_units(self, sentry_units, port=None): + """Check that ssl is enabled on rmq juju sentry units. 
+ + :param sentry_units: list of all rmq sentry units + :param port: optional ssl port override to validate + :returns: None if successful, otherwise return error message + """ + for sentry_unit in sentry_units: + if not self.rmq_ssl_is_enabled_on_unit(sentry_unit, port=port): + return ('Unexpected condition: ssl is disabled on unit ' + '({})'.format(sentry_unit.info['unit_name'])) + return None + + def validate_rmq_ssl_disabled_units(self, sentry_units): + """Check that ssl is enabled on listed rmq juju sentry units. + + :param sentry_units: list of all rmq sentry units + :returns: True if successful. Raise on error. + """ + for sentry_unit in sentry_units: + if self.rmq_ssl_is_enabled_on_unit(sentry_unit): + return ('Unexpected condition: ssl is enabled on unit ' + '({})'.format(sentry_unit.info['unit_name'])) + return None + + def configure_rmq_ssl_on(self, sentry_units, deployment, + port=None, max_wait=60): + """Turn ssl charm config option on, with optional non-default + ssl port specification. Confirm that it is enabled on every + unit. + + :param sentry_units: list of sentry units + :param deployment: amulet deployment object pointer + :param port: amqp port, use defaults if None + :param max_wait: maximum time to wait in seconds to confirm + :returns: None if successful. Raise on error. + """ + self.log.debug('Setting ssl charm config option: on') + + # Enable RMQ SSL + config = {'ssl': 'on'} + if port: + config['ssl_port'] = port + + deployment.d.configure('rabbitmq-server', config) + + # Wait for unit status + self.rmq_wait_for_cluster(deployment) + + # Confirm + tries = 0 + ret = self.validate_rmq_ssl_enabled_units(sentry_units, port=port) + while ret and tries < (max_wait / 4): + time.sleep(4) + self.log.debug('Attempt {}: {}'.format(tries, ret)) + ret = self.validate_rmq_ssl_enabled_units(sentry_units, port=port) + tries += 1 + + if ret: + amulet.raise_status(amulet.FAIL, ret) + + def configure_rmq_ssl_off(self, sentry_units, deployment, max_wait=60): + """Turn ssl charm config option off, confirm that it is disabled + on every unit. + + :param sentry_units: list of sentry units + :param deployment: amulet deployment object pointer + :param max_wait: maximum time to wait in seconds to confirm + :returns: None if successful. Raise on error. + """ + self.log.debug('Setting ssl charm config option: off') + + # Disable RMQ SSL + config = {'ssl': 'off'} + deployment.d.configure('rabbitmq-server', config) + + # Wait for unit status + self.rmq_wait_for_cluster(deployment) + + # Confirm + tries = 0 + ret = self.validate_rmq_ssl_disabled_units(sentry_units) + while ret and tries < (max_wait / 4): + time.sleep(4) + self.log.debug('Attempt {}: {}'.format(tries, ret)) + ret = self.validate_rmq_ssl_disabled_units(sentry_units) + tries += 1 + + if ret: + amulet.raise_status(amulet.FAIL, ret) + + def connect_amqp_by_unit(self, sentry_unit, ssl=False, + port=None, fatal=True, + username="testuser1", password="changeme"): + """Establish and return a pika amqp connection to the rabbitmq service + running on a rmq juju unit. 
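# NOTE: illustrative sketch only. configure_rmq_ssl_on/off above confirm
# a config change by re-running the validator every 4 seconds until it
# returns None or roughly max_wait seconds elapse; the generic form of
# that poll-until-confirmed loop (hypothetical helper name):
import time

def wait_until_ok(check, max_wait=60, interval=4):
    tries = 0
    ret = check()
    while ret and tries < (max_wait / interval):
        time.sleep(interval)
        ret = check()
        tries += 1
    return ret  # None on success, last error message on timeout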
+ + :param sentry_unit: sentry unit pointer + :param ssl: boolean, default to False + :param port: amqp port, use defaults if None + :param fatal: boolean, default to True (raises on connect error) + :param username: amqp user name, default to testuser1 + :param password: amqp user password + :returns: pika amqp connection pointer or None if failed and non-fatal + """ + host = sentry_unit.info['public-address'] + unit_name = sentry_unit.info['unit_name'] + + # Default port logic if port is not specified + if ssl and not port: + port = 5671 + elif not ssl and not port: + port = 5672 + + self.log.debug('Connecting to amqp on {}:{} ({}) as ' + '{}...'.format(host, port, unit_name, username)) + + try: + credentials = pika.PlainCredentials(username, password) + parameters = pika.ConnectionParameters(host=host, port=port, + credentials=credentials, + ssl=ssl, + connection_attempts=3, + retry_delay=5, + socket_timeout=1) + connection = pika.BlockingConnection(parameters) + assert connection.is_open is True + assert connection.is_closing is False + self.log.debug('Connect OK') + return connection + except Exception as e: + msg = ('amqp connection failed to {}:{} as ' + '{} ({})'.format(host, port, username, str(e))) + if fatal: + amulet.raise_status(amulet.FAIL, msg) + else: + self.log.warn(msg) + return None + + def publish_amqp_message_by_unit(self, sentry_unit, message, + queue="test", ssl=False, + username="testuser1", + password="changeme", + port=None): + """Publish an amqp message to a rmq juju unit. + + :param sentry_unit: sentry unit pointer + :param message: amqp message string + :param queue: message queue, default to test + :param username: amqp user name, default to testuser1 + :param password: amqp user password + :param ssl: boolean, default to False + :param port: amqp port, use defaults if None + :returns: None. Raises exception if publish failed. + """ + self.log.debug('Publishing message to {} queue:\n{}'.format(queue, + message)) + connection = self.connect_amqp_by_unit(sentry_unit, ssl=ssl, + port=port, + username=username, + password=password) + + # NOTE(beisner): extra debug here re: pika hang potential: + # https://github.com/pika/pika/issues/297 + # https://groups.google.com/forum/#!topic/rabbitmq-users/Ja0iyfF0Szw + self.log.debug('Defining channel...') + channel = connection.channel() + self.log.debug('Declaring queue...') + channel.queue_declare(queue=queue, auto_delete=False, durable=True) + self.log.debug('Publishing message...') + channel.basic_publish(exchange='', routing_key=queue, body=message) + self.log.debug('Closing channel...') + channel.close() + self.log.debug('Closing connection...') + connection.close() + + def get_amqp_message_by_unit(self, sentry_unit, queue="test", + username="testuser1", + password="changeme", + ssl=False, port=None): + """Get an amqp message from a rmq juju unit. + + :param sentry_unit: sentry unit pointer + :param queue: message queue, default to test + :param username: amqp user name, default to testuser1 + :param password: amqp user password + :param ssl: boolean, default to False + :param port: amqp port, use defaults if None + :returns: amqp message body as string. Raise if get fails. 
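# NOTE: illustrative sketch only, with a made-up host and the default
# test credentials used above. A minimal pika publish/get round trip
# mirroring connect_amqp_by_unit plus the publish/get helpers:
import pika

params = pika.ConnectionParameters(
    host='10.0.0.1', port=5672,
    credentials=pika.PlainCredentials('testuser1', 'changeme'))
connection = pika.BlockingConnection(params)
channel = connection.channel()
channel.queue_declare(queue='test', auto_delete=False, durable=True)
channel.basic_publish(exchange='', routing_key='test', body='hello')
method_frame, _, body = channel.basic_get('test')
if method_frame:
    channel.basic_ack(method_frame.delivery_tag)
connection.close()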
+ """ + connection = self.connect_amqp_by_unit(sentry_unit, ssl=ssl, + port=port, + username=username, + password=password) + channel = connection.channel() + method_frame, _, body = channel.basic_get(queue) + + if method_frame: + self.log.debug('Retreived message from {} queue:\n{}'.format(queue, + body)) + channel.basic_ack(method_frame.delivery_tag) + channel.close() + connection.close() + return body + else: + msg = 'No message retrieved.' + amulet.raise_status(amulet.FAIL, msg) + + def validate_memcache(self, sentry_unit, conf, os_release, + earliest_release=5, section='keystone_authtoken', + check_kvs=None): + """Check Memcache is running and is configured to be used + + Example call from Amulet test: + + def test_110_memcache(self): + u.validate_memcache(self.neutron_api_sentry, + '/etc/neutron/neutron.conf', + self._get_openstack_release()) + + :param sentry_unit: sentry unit + :param conf: OpenStack config file to check memcache settings + :param os_release: Current OpenStack release int code + :param earliest_release: Earliest Openstack release to check int code + :param section: OpenStack config file section to check + :param check_kvs: Dict of settings to check in config file + :returns: None + """ + if os_release < earliest_release: + self.log.debug('Skipping memcache checks for deployment. {} <' + 'mitaka'.format(os_release)) + return + _kvs = check_kvs or {'memcached_servers': 'inet6:[::1]:11211'} + self.log.debug('Checking memcached is running') + ret = self.validate_services_by_name({sentry_unit: ['memcached']}) + if ret: + amulet.raise_status(amulet.FAIL, msg='Memcache running check' + 'failed {}'.format(ret)) + else: + self.log.debug('OK') + self.log.debug('Checking memcache url is configured in {}'.format( + conf)) + if self.validate_config_data(sentry_unit, conf, section, _kvs): + message = "Memcache config error in: {}".format(conf) + amulet.raise_status(amulet.FAIL, msg=message) + else: + self.log.debug('OK') + self.log.debug('Checking memcache configuration in ' + '/etc/memcached.conf') + contents = self.file_contents_safe(sentry_unit, '/etc/memcached.conf', + fatal=True) + ubuntu_release, _ = self.run_cmd_unit(sentry_unit, 'lsb_release -cs') + if CompareHostReleases(ubuntu_release) <= 'trusty': + memcache_listen_addr = 'ip6-localhost' + else: + memcache_listen_addr = '::1' + expected = { + '-p': '11211', + '-l': memcache_listen_addr} + found = [] + for key, value in expected.items(): + for line in contents.split('\n'): + if line.startswith(key): + self.log.debug('Checking {} is set to {}'.format( + key, + value)) + assert value == line.split()[-1] + self.log.debug(line.split()[-1]) + found.append(key) + if sorted(found) == sorted(expected.keys()): + self.log.debug('OK') + else: + message = "Memcache config error in: /etc/memcached.conf" + amulet.raise_status(amulet.FAIL, msg=message) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/audits/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/audits/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..7f7e5f79a5d5fe3cb374814e32ea16f6060f4f27 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/audits/__init__.py @@ -0,0 +1,212 @@ +# Copyright 2019 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +"""OpenStack Security Audit code""" + +import collections +from enum import Enum +import traceback + +from charmhelpers.core.host import cmp_pkgrevno +import charmhelpers.contrib.openstack.utils as openstack_utils +import charmhelpers.core.hookenv as hookenv + + +class AuditType(Enum): + OpenStackSecurityGuide = 1 + + +_audits = {} + +Audit = collections.namedtuple('Audit', 'func filters') + + +def audit(*args): + """Decorator to register an audit. + + These are used to generate audits that can be run on a + deployed system that matches the given configuration + + :param args: List of functions to filter tests against + :type args: List[Callable[Dict]] + """ + def wrapper(f): + test_name = f.__name__ + if _audits.get(test_name): + raise RuntimeError( + "Test name '{}' used more than once" + .format(test_name)) + non_callables = [fn for fn in args if not callable(fn)] + if non_callables: + raise RuntimeError( + "Configuration includes non-callable filters: {}" + .format(non_callables)) + _audits[test_name] = Audit(func=f, filters=args) + return f + return wrapper + + +def is_audit_type(*args): + """This audit is included in the specified kinds of audits. + + :param *args: List of AuditTypes to include this audit in + :type args: List[AuditType] + :rtype: Callable[Dict] + """ + def _is_audit_type(audit_options): + if audit_options.get('audit_type') in args: + return True + else: + return False + return _is_audit_type + + +def since_package(pkg, pkg_version): + """This audit should be run after the specified package version (incl). + + :param pkg: Package name to compare + :type pkg: str + :param release: The package version + :type release: str + :rtype: Callable[Dict] + """ + def _since_package(audit_options=None): + return cmp_pkgrevno(pkg, pkg_version) >= 0 + + return _since_package + + +def before_package(pkg, pkg_version): + """This audit should be run before the specified package version (excl). + + :param pkg: Package name to compare + :type pkg: str + :param release: The package version + :type release: str + :rtype: Callable[Dict] + """ + def _before_package(audit_options=None): + return not since_package(pkg, pkg_version)() + + return _before_package + + +def since_openstack_release(pkg, release): + """This audit should run after the specified OpenStack version (incl). + + :param pkg: Package name to compare + :type pkg: str + :param release: The OpenStack release codename + :type release: str + :rtype: Callable[Dict] + """ + def _since_openstack_release(audit_options=None): + _release = openstack_utils.get_os_codename_package(pkg) + return openstack_utils.CompareOpenStackReleases(_release) >= release + + return _since_openstack_release + + +def before_openstack_release(pkg, release): + """This audit should run before the specified OpenStack version (excl). 
+ + :param pkg: Package name to compare + :type pkg: str + :param release: The OpenStack release codename + :type release: str + :rtype: Callable[Dict] + """ + def _before_openstack_release(audit_options=None): + return not since_openstack_release(pkg, release)() + + return _before_openstack_release + + +def it_has_config(config_key): + """This audit should be run based on specified config keys. + + :param config_key: Config key to look for + :type config_key: str + :rtype: Callable[Dict] + """ + def _it_has_config(audit_options): + return audit_options.get(config_key) is not None + + return _it_has_config + + +def run(audit_options): + """Run the configured audits with the specified audit_options. + + :param audit_options: Configuration for the audit + :type audit_options: Config + + :rtype: Dict[str, str] + """ + errors = {} + results = {} + for name, audit in sorted(_audits.items()): + result_name = name.replace('_', '-') + if result_name in audit_options.get('excludes', []): + print( + "Skipping {} because it is" + "excluded in audit config" + .format(result_name)) + continue + if all(p(audit_options) for p in audit.filters): + try: + audit.func(audit_options) + print("{}: PASS".format(name)) + results[result_name] = { + 'success': True, + } + except AssertionError as e: + print("{}: FAIL ({})".format(name, e)) + results[result_name] = { + 'success': False, + 'message': e, + } + except Exception as e: + print("{}: ERROR ({})".format(name, e)) + errors[name] = e + results[result_name] = { + 'success': False, + 'message': e, + } + for name, error in errors.items(): + print("=" * 20) + print("Error in {}: ".format(name)) + traceback.print_tb(error.__traceback__) + print() + return results + + +def action_parse_results(result): + """Parse the result of `run` in the context of an action. + + :param result: The result of running the security-checklist + action on a unit + :type result: Dict[str, Dict[str, str]] + :rtype: int + """ + passed = True + for test, result in result.items(): + if result['success']: + hookenv.action_set({test: 'PASS'}) + else: + hookenv.action_set({test: 'FAIL - {}'.format(result['message'])}) + passed = False + if not passed: + hookenv.action_fail("One or more tests failed") + return 0 if passed else 1 diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/audits/openstack_security_guide.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/audits/openstack_security_guide.py new file mode 100644 index 0000000000000000000000000000000000000000..79740ed0c103e841b6e280af920f8be65e3d1d0d --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/audits/openstack_security_guide.py @@ -0,0 +1,270 @@ +# Copyright 2019 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
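# NOTE: illustrative sketch only, showing how the registry in
# audits/__init__.py above is typically exercised: an audit function is
# registered together with filter callables via the decorator, and run()
# returns a result dict keyed by the dashed test name. The audit body and
# options below are hypothetical.
from charmhelpers.contrib.openstack.audits import (
    audit, is_audit_type, AuditType, run)

@audit(is_audit_type(AuditType.OpenStackSecurityGuide))
def config_path_is_supplied(audit_options):
    assert audit_options.get('config_path'), 'config_path not supplied'

results = run({'audit_type': AuditType.OpenStackSecurityGuide,
               'config_path': '/etc/keystone'})
# results == {'config-path-is-supplied': {'success': True}}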
+ +import collections +import configparser +import glob +import os.path +import subprocess + +from charmhelpers.contrib.openstack.audits import ( + audit, + AuditType, + # filters + is_audit_type, + it_has_config, +) + +from charmhelpers.core.hookenv import ( + cached, +) + +""" +The Security Guide suggests a specific list of files inside the +config directory for the service having 640 specifically, but +by ensuring the containing directory is 750, only the owner can +write, and only the group can read files within the directory. + +By restricting access to the containing directory, we can more +effectively ensure that there is no accidental leakage if a new +file is added to the service without being added to the security +guide, and to this check. +""" +FILE_ASSERTIONS = { + 'barbican': { + '/etc/barbican': {'group': 'barbican', 'mode': '750'}, + }, + 'ceph-mon': { + '/var/lib/charm/ceph-mon/ceph.conf': + {'owner': 'root', 'group': 'root', 'mode': '644'}, + '/etc/ceph/ceph.client.admin.keyring': + {'owner': 'ceph', 'group': 'ceph'}, + '/etc/ceph/rbdmap': {'mode': '644'}, + '/var/lib/ceph': {'owner': 'ceph', 'group': 'ceph', 'mode': '750'}, + '/var/lib/ceph/bootstrap-*/ceph.keyring': + {'owner': 'ceph', 'group': 'ceph', 'mode': '600'} + }, + 'ceph-osd': { + '/var/lib/charm/ceph-osd/ceph.conf': + {'owner': 'ceph', 'group': 'ceph', 'mode': '644'}, + '/var/lib/ceph': {'owner': 'ceph', 'group': 'ceph', 'mode': '750'}, + '/var/lib/ceph/*': {'owner': 'ceph', 'group': 'ceph', 'mode': '755'}, + '/var/lib/ceph/bootstrap-*/ceph.keyring': + {'owner': 'ceph', 'group': 'ceph', 'mode': '600'}, + '/var/lib/ceph/radosgw': + {'owner': 'ceph', 'group': 'ceph', 'mode': '755'}, + }, + 'cinder': { + '/etc/cinder': {'group': 'cinder', 'mode': '750'}, + }, + 'glance': { + '/etc/glance': {'group': 'glance', 'mode': '750'}, + }, + 'keystone': { + '/etc/keystone': + {'owner': 'keystone', 'group': 'keystone', 'mode': '750'}, + }, + 'manilla': { + '/etc/manila': {'group': 'manilla', 'mode': '750'}, + }, + 'neutron-gateway': { + '/etc/neutron': {'group': 'neutron', 'mode': '750'}, + }, + 'neutron-api': { + '/etc/neutron/': {'group': 'neutron', 'mode': '750'}, + }, + 'nova-cloud-controller': { + '/etc/nova': {'group': 'nova', 'mode': '750'}, + }, + 'nova-compute': { + '/etc/nova/': {'group': 'nova', 'mode': '750'}, + }, + 'openstack-dashboard': { + # From security guide + '/etc/openstack-dashboard/local_settings.py': + {'group': 'horizon', 'mode': '640'}, + }, +} + +Ownership = collections.namedtuple('Ownership', 'owner group mode') + + +@cached +def _stat(file): + """ + Get the Ownership information from a file. + + :param file: The path to a file to stat + :type file: str + :returns: owner, group, and mode of the specified file + :rtype: Ownership + :raises subprocess.CalledProcessError: If the underlying stat fails + """ + out = subprocess.check_output( + ['stat', '-c', '%U %G %a', file]).decode('utf-8') + return Ownership(*out.strip().split(' ')) + + +@cached +def _config_ini(path): + """ + Parse an ini file + + :param path: The path to a file to parse + :type file: str + :returns: Configuration contained in path + :rtype: Dict + """ + # When strict is enabled, duplicate options are not allowed in the + # parsed INI; however, Oslo allows duplicate values. 
This change + # causes us to ignore the duplicate values which is acceptable as + # long as we don't validate any multi-value options + conf = configparser.ConfigParser(strict=False) + conf.read(path) + return dict(conf) + + +def _validate_file_ownership(owner, group, file_name, optional=False): + """ + Validate that a specified file is owned by `owner:group`. + + :param owner: Name of the owner + :type owner: str + :param group: Name of the group + :type group: str + :param file_name: Path to the file to verify + :type file_name: str + :param optional: Is this file optional, + ie: Should this test fail when it's missing + :type optional: bool + """ + try: + ownership = _stat(file_name) + except subprocess.CalledProcessError as e: + print("Error reading file: {}".format(e)) + if not optional: + assert False, "Specified file does not exist: {}".format(file_name) + assert owner == ownership.owner, \ + "{} has an incorrect owner: {} should be {}".format( + file_name, ownership.owner, owner) + assert group == ownership.group, \ + "{} has an incorrect group: {} should be {}".format( + file_name, ownership.group, group) + print("Validate ownership of {}: PASS".format(file_name)) + + +def _validate_file_mode(mode, file_name, optional=False): + """ + Validate that a specified file has the specified permissions. + + :param mode: file mode that is desires + :type owner: str + :param file_name: Path to the file to verify + :type file_name: str + :param optional: Is this file optional, + ie: Should this test fail when it's missing + :type optional: bool + """ + try: + ownership = _stat(file_name) + except subprocess.CalledProcessError as e: + print("Error reading file: {}".format(e)) + if not optional: + assert False, "Specified file does not exist: {}".format(file_name) + assert mode == ownership.mode, \ + "{} has an incorrect mode: {} should be {}".format( + file_name, ownership.mode, mode) + print("Validate mode of {}: PASS".format(file_name)) + + +@cached +def _config_section(config, section): + """Read the configuration file and return a section.""" + path = os.path.join(config.get('config_path'), config.get('config_file')) + conf = _config_ini(path) + return conf.get(section) + + +@audit(is_audit_type(AuditType.OpenStackSecurityGuide), + it_has_config('files')) +def validate_file_ownership(config): + """Verify that configuration files are owned by the correct user/group.""" + files = config.get('files', {}) + for file_name, options in files.items(): + for key in options.keys(): + if key not in ["owner", "group", "mode"]: + raise RuntimeError( + "Invalid ownership configuration: {}".format(key)) + owner = options.get('owner', config.get('owner', 'root')) + group = options.get('group', config.get('group', 'root')) + optional = options.get('optional', config.get('optional', False)) + if '*' in file_name: + for file in glob.glob(file_name): + if file not in files.keys(): + if os.path.isfile(file): + _validate_file_ownership(owner, group, file, optional) + else: + if os.path.isfile(file_name): + _validate_file_ownership(owner, group, file_name, optional) + + +@audit(is_audit_type(AuditType.OpenStackSecurityGuide), + it_has_config('files')) +def validate_file_permissions(config): + """Verify that permissions on configuration files are secure enough.""" + files = config.get('files', {}) + for file_name, options in files.items(): + for key in options.keys(): + if key not in ["owner", "group", "mode"]: + raise RuntimeError( + "Invalid ownership configuration: {}".format(key)) + mode = options.get('mode', 
config.get('permissions', '600')) + optional = options.get('optional', config.get('optional', False)) + if '*' in file_name: + for file in glob.glob(file_name): + if file not in files.keys(): + if os.path.isfile(file): + _validate_file_mode(mode, file, optional) + else: + if os.path.isfile(file_name): + _validate_file_mode(mode, file_name, optional) + + +@audit(is_audit_type(AuditType.OpenStackSecurityGuide)) +def validate_uses_keystone(audit_options): + """Validate that the service uses Keystone for authentication.""" + section = _config_section(audit_options, 'api') or _config_section(audit_options, 'DEFAULT') + assert section is not None, "Missing section 'api / DEFAULT'" + assert section.get('auth_strategy') == "keystone", \ + "Application is not using Keystone" + + +@audit(is_audit_type(AuditType.OpenStackSecurityGuide)) +def validate_uses_tls_for_keystone(audit_options): + """Verify that TLS is used to communicate with Keystone.""" + section = _config_section(audit_options, 'keystone_authtoken') + assert section is not None, "Missing section 'keystone_authtoken'" + assert not section.get('insecure') and \ + "https://" in section.get("auth_uri"), \ + "TLS is not used for Keystone" + + +@audit(is_audit_type(AuditType.OpenStackSecurityGuide)) +def validate_uses_tls_for_glance(audit_options): + """Verify that TLS is used to communicate with Glance.""" + section = _config_section(audit_options, 'glance') + assert section is not None, "Missing section 'glance'" + assert not section.get('insecure') and \ + "https://" in section.get("api_servers"), \ + "TLS is not used for Glance" diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/cert_utils.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/cert_utils.py new file mode 100644 index 0000000000000000000000000000000000000000..b494af64aeae55db44b669990725de81705104b2 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/cert_utils.py @@ -0,0 +1,289 @@ +# Copyright 2014-2018 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# Common python helper functions used for OpenStack charm certificats. 
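# NOTE: illustrative sketch only. The ownership and mode audits in
# openstack_security_guide.py above shell out to stat(1); the parsing is
# just a split of "owner group mode" into a namedtuple. `stat_path` and
# the path below are hypothetical stand-ins for the cached _stat helper.
import collections
import subprocess

Ownership = collections.namedtuple('Ownership', 'owner group mode')

def stat_path(path):
    # `stat -c '%U %G %a'` prints e.g. "keystone keystone 750"
    out = subprocess.check_output(
        ['stat', '-c', '%U %G %a', path]).decode('utf-8')
    return Ownership(*out.strip().split(' '))

print(stat_path('/etc/keystone'))
# Ownership(owner='keystone', group='keystone', mode='750')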
+ +import os +import json + +from charmhelpers.contrib.network.ip import ( + get_hostname, + resolve_network_cidr, +) +from charmhelpers.core.hookenv import ( + local_unit, + network_get_primary_address, + config, + related_units, + relation_get, + relation_ids, + unit_get, + NoNetworkBinding, + log, + WARNING, +) +from charmhelpers.contrib.openstack.ip import ( + ADMIN, + resolve_address, + get_vip_in_network, + INTERNAL, + PUBLIC, + ADDRESS_MAP) + +from charmhelpers.core.host import ( + mkdir, + write_file, +) + +from charmhelpers.contrib.hahelpers.apache import ( + install_ca_cert +) + + +class CertRequest(object): + + """Create a request for certificates to be generated + """ + + def __init__(self, json_encode=True): + self.entries = [] + self.hostname_entry = None + self.json_encode = json_encode + + def add_entry(self, net_type, cn, addresses): + """Add a request to the batch + + :param net_type: str netwrok space name request is for + :param cn: str Canonical Name for certificate + :param addresses: [] List of addresses to be used as SANs + """ + self.entries.append({ + 'cn': cn, + 'addresses': addresses}) + + def add_hostname_cn(self): + """Add a request for the hostname of the machine""" + ip = unit_get('private-address') + addresses = [ip] + # If a vip is being used without os-hostname config or + # network spaces then we need to ensure the local units + # cert has the approriate vip in the SAN list + vip = get_vip_in_network(resolve_network_cidr(ip)) + if vip: + addresses.append(vip) + self.hostname_entry = { + 'cn': get_hostname(ip), + 'addresses': addresses} + + def add_hostname_cn_ip(self, addresses): + """Add an address to the SAN list for the hostname request + + :param addr: [] List of address to be added + """ + for addr in addresses: + if addr not in self.hostname_entry['addresses']: + self.hostname_entry['addresses'].append(addr) + + def get_request(self): + """Generate request from the batched up entries + + """ + if self.hostname_entry: + self.entries.append(self.hostname_entry) + request = {} + for entry in self.entries: + sans = sorted(list(set(entry['addresses']))) + request[entry['cn']] = {'sans': sans} + if self.json_encode: + req = {'cert_requests': json.dumps(request, sort_keys=True)} + else: + req = {'cert_requests': request} + req['unit_name'] = local_unit().replace('/', '_') + return req + + +def get_certificate_request(json_encode=True): + """Generate a certificatee requests based on the network confioguration + + """ + req = CertRequest(json_encode=json_encode) + req.add_hostname_cn() + # Add os-hostname entries + for net_type in [INTERNAL, ADMIN, PUBLIC]: + net_config = config(ADDRESS_MAP[net_type]['override']) + try: + net_addr = resolve_address(endpoint_type=net_type) + ip = network_get_primary_address( + ADDRESS_MAP[net_type]['binding']) + addresses = [net_addr, ip] + vip = get_vip_in_network(resolve_network_cidr(ip)) + if vip: + addresses.append(vip) + if net_config: + req.add_entry( + net_type, + net_config, + addresses) + else: + # There is network address with no corresponding hostname. + # Add the ip to the hostname cert to allow for this. 
+ req.add_hostname_cn_ip(addresses) + except NoNetworkBinding: + log("Skipping request for certificate for ip in {} space, no " + "local address found".format(net_type), WARNING) + return req.get_request() + + +def create_ip_cert_links(ssl_dir, custom_hostname_link=None): + """Create symlinks for SAN records + + :param ssl_dir: str Directory to create symlinks in + :param custom_hostname_link: str Additional link to be created + """ + hostname = get_hostname(unit_get('private-address')) + hostname_cert = os.path.join( + ssl_dir, + 'cert_{}'.format(hostname)) + hostname_key = os.path.join( + ssl_dir, + 'key_{}'.format(hostname)) + # Add links to hostname cert, used if os-hostname vars not set + for net_type in [INTERNAL, ADMIN, PUBLIC]: + try: + addr = resolve_address(endpoint_type=net_type) + cert = os.path.join(ssl_dir, 'cert_{}'.format(addr)) + key = os.path.join(ssl_dir, 'key_{}'.format(addr)) + if os.path.isfile(hostname_cert) and not os.path.isfile(cert): + os.symlink(hostname_cert, cert) + os.symlink(hostname_key, key) + except NoNetworkBinding: + log("Skipping creating cert symlink for ip in {} space, no " + "local address found".format(net_type), WARNING) + if custom_hostname_link: + custom_cert = os.path.join( + ssl_dir, + 'cert_{}'.format(custom_hostname_link)) + custom_key = os.path.join( + ssl_dir, + 'key_{}'.format(custom_hostname_link)) + if os.path.isfile(hostname_cert) and not os.path.isfile(custom_cert): + os.symlink(hostname_cert, custom_cert) + os.symlink(hostname_key, custom_key) + + +def install_certs(ssl_dir, certs, chain=None, user='root', group='root'): + """Install the certs passed into the ssl dir and append the chain if + provided. + + :param ssl_dir: str Directory to create symlinks in + :param certs: {} {'cn': {'cert': 'CERT', 'key': 'KEY'}} + :param chain: str Chain to be appended to certs + :param user: (Optional) Owner of certificate files. Defaults to 'root' + :type user: str + :param group: (Optional) Group of certificate files. Defaults to 'root' + :type group: str + """ + for cn, bundle in certs.items(): + cert_filename = 'cert_{}'.format(cn) + key_filename = 'key_{}'.format(cn) + cert_data = bundle['cert'] + if chain: + # Append chain file so that clients that trust the root CA will + # trust certs signed by an intermediate in the chain + cert_data = cert_data + os.linesep + chain + write_file( + path=os.path.join(ssl_dir, cert_filename), owner=user, group=group, + content=cert_data, perms=0o640) + write_file( + path=os.path.join(ssl_dir, key_filename), owner=user, group=group, + content=bundle['key'], perms=0o640) + + +def process_certificates(service_name, relation_id, unit, + custom_hostname_link=None, user='root', group='root'): + """Process the certificates supplied down the relation + + :param service_name: str Name of service the certifcates are for. + :param relation_id: str Relation id providing the certs + :param unit: str Unit providing the certs + :param custom_hostname_link: str Name of custom link to create + :param user: (Optional) Owner of certificate files. Defaults to 'root' + :type user: str + :param group: (Optional) Group of certificate files. 
Defaults to 'root' + :type group: str + :returns: True if certificates processed for local unit or False + :rtype: bool + """ + data = relation_get(rid=relation_id, unit=unit) + ssl_dir = os.path.join('/etc/apache2/ssl/', service_name) + mkdir(path=ssl_dir) + name = local_unit().replace('/', '_') + certs = data.get('{}.processed_requests'.format(name)) + chain = data.get('chain') + ca = data.get('ca') + if certs: + certs = json.loads(certs) + install_ca_cert(ca.encode()) + install_certs(ssl_dir, certs, chain, user=user, group=group) + create_ip_cert_links( + ssl_dir, + custom_hostname_link=custom_hostname_link) + return True + return False + + +def get_requests_for_local_unit(relation_name=None): + """Extract any certificates data targeted at this unit down relation_name. + + :param relation_name: str Name of relation to check for data. + :returns: List of bundles of certificates. + :rtype: List of dicts + """ + local_name = local_unit().replace('/', '_') + raw_certs_key = '{}.processed_requests'.format(local_name) + relation_name = relation_name or 'certificates' + bundles = [] + for rid in relation_ids(relation_name): + for unit in related_units(rid): + data = relation_get(rid=rid, unit=unit) + if data.get(raw_certs_key): + bundles.append({ + 'ca': data['ca'], + 'chain': data.get('chain'), + 'certs': json.loads(data[raw_certs_key])}) + return bundles + + +def get_bundle_for_cn(cn, relation_name=None): + """Extract certificates for the given cn. + + :param cn: str Canonical Name on certificate. + :param relation_name: str Relation to check for certificates down. + :returns: Dictionary of certificate data, + :rtype: dict. + """ + entries = get_requests_for_local_unit(relation_name) + cert_bundle = {} + for entry in entries: + for _cn, bundle in entry['certs'].items(): + if _cn == cn: + cert_bundle = { + 'cert': bundle['cert'], + 'key': bundle['key'], + 'chain': entry['chain'], + 'ca': entry['ca']} + break + if cert_bundle: + break + return cert_bundle diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/context.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/context.py new file mode 100644 index 0000000000000000000000000000000000000000..42abccf7cb9eabc063bc8d87f016e59f4dc8f898 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/context.py @@ -0,0 +1,3177 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
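# NOTE: illustrative data shapes only, for the cert_utils helpers above;
# the unit names, CNs and addresses are made up. CertRequest.get_request()
# with json_encode=True produces relation data roughly like:
#
#   {'cert_requests': '{"juju-unit-0.maas": {"sans": ["10.0.0.10"]}}',
#    'unit_name': 'keystone_0'}
#
# and get_bundle_for_cn() returns a dict of PEM strings:
#
#   {'cert': '-----BEGIN CERTIFICATE-----...',
#    'key': '-----BEGIN PRIVATE KEY-----...',
#    'chain': None,
#    'ca': '-----BEGIN CERTIFICATE-----...'}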
+ +import collections +import copy +import enum +import glob +import hashlib +import json +import math +import os +import re +import socket +import time + +from base64 import b64decode +from subprocess import check_call, CalledProcessError + +import six + +from charmhelpers.contrib.openstack.audits.openstack_security_guide import ( + _config_ini as config_ini +) + +from charmhelpers.fetch import ( + apt_install, + filter_installed_packages, +) +from charmhelpers.core.hookenv import ( + NoNetworkBinding, + config, + is_relation_made, + local_unit, + log, + relation_get, + relation_ids, + related_units, + relation_set, + unit_get, + unit_private_ip, + charm_name, + DEBUG, + INFO, + ERROR, + status_set, + network_get_primary_address, + WARNING, +) + +from charmhelpers.core.sysctl import create as sysctl_create +from charmhelpers.core.strutils import bool_from_string +from charmhelpers.contrib.openstack.exceptions import OSContextError + +from charmhelpers.core.host import ( + get_bond_master, + is_phy_iface, + list_nics, + get_nic_hwaddr, + mkdir, + write_file, + pwgen, + lsb_release, + CompareHostReleases, + is_container, +) +from charmhelpers.contrib.hahelpers.cluster import ( + determine_apache_port, + determine_api_port, + https, + is_clustered, +) +from charmhelpers.contrib.hahelpers.apache import ( + get_cert, + get_ca_cert, + install_ca_cert, +) +from charmhelpers.contrib.openstack.neutron import ( + neutron_plugin_attribute, + parse_data_port_mappings, +) +from charmhelpers.contrib.openstack.ip import ( + resolve_address, + INTERNAL, + ADMIN, + PUBLIC, + ADDRESS_MAP, +) +from charmhelpers.contrib.network.ip import ( + get_address_in_network, + get_ipv4_addr, + get_ipv6_addr, + get_netmask_for_address, + format_ipv6_addr, + is_bridge_member, + is_ipv6_disabled, + get_relation_ip, +) +from charmhelpers.contrib.openstack.utils import ( + config_flags_parser, + get_os_codename_install_source, + enable_memcache, + CompareOpenStackReleases, + os_release, +) +from charmhelpers.core.unitdata import kv + +try: + from sriov_netplan_shim import pci +except ImportError: + # The use of the function and contexts that require the pci module is + # optional. + pass + +try: + import psutil +except ImportError: + if six.PY2: + apt_install('python-psutil', fatal=True) + else: + apt_install('python3-psutil', fatal=True) + import psutil + +CA_CERT_PATH = '/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt' +ADDRESS_TYPES = ['admin', 'internal', 'public'] +HAPROXY_RUN_DIR = '/var/run/haproxy/' +DEFAULT_OSLO_MESSAGING_DRIVER = "messagingv2" + + +def ensure_packages(packages): + """Install but do not upgrade required plugin packages.""" + required = filter_installed_packages(packages) + if required: + apt_install(required, fatal=True) + + +def context_complete(ctxt): + _missing = [] + for k, v in six.iteritems(ctxt): + if v is None or v == '': + _missing.append(k) + + if _missing: + log('Missing required data: %s' % ' '.join(_missing), level=INFO) + return False + + return True + + +class OSContextGenerator(object): + """Base class for all context generators.""" + interfaces = [] + related = False + complete = False + missing_data = [] + + def __call__(self): + raise NotImplementedError + + def context_complete(self, ctxt): + """Check for missing data for the required context data. + Set self.missing_data if it exists and return False. + Set self.complete if no missing data and return True. 
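# NOTE: illustrative sketch only; `WorkerContext` is a hypothetical
# subclass showing the OSContextGenerator contract defined above:
# __call__ returns a dict of template variables, or an empty dict when
# context_complete() reports missing data.
import psutil

class WorkerContext(OSContextGenerator):
    def __call__(self):
        ctxt = {'workers': psutil.cpu_count() or 1}
        if not self.context_complete(ctxt):
            return {}
        return ctxt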
+ """ + # Fresh start + self.complete = False + self.missing_data = [] + for k, v in six.iteritems(ctxt): + if v is None or v == '': + if k not in self.missing_data: + self.missing_data.append(k) + + if self.missing_data: + self.complete = False + log('Missing required data: %s' % ' '.join(self.missing_data), + level=INFO) + else: + self.complete = True + return self.complete + + def get_related(self): + """Check if any of the context interfaces have relation ids. + Set self.related and return True if one of the interfaces + has relation ids. + """ + # Fresh start + self.related = False + try: + for interface in self.interfaces: + if relation_ids(interface): + self.related = True + return self.related + except AttributeError as e: + log("{} {}" + "".format(self, e), 'INFO') + return self.related + + +class SharedDBContext(OSContextGenerator): + interfaces = ['shared-db'] + + def __init__(self, database=None, user=None, relation_prefix=None, + ssl_dir=None, relation_id=None): + """Allows inspecting relation for settings prefixed with + relation_prefix. This is useful for parsing access for multiple + databases returned via the shared-db interface (eg, nova_password, + quantum_password) + """ + self.relation_prefix = relation_prefix + self.database = database + self.user = user + self.ssl_dir = ssl_dir + self.rel_name = self.interfaces[0] + self.relation_id = relation_id + + def __call__(self): + self.database = self.database or config('database') + self.user = self.user or config('database-user') + if None in [self.database, self.user]: + log("Could not generate shared_db context. Missing required charm " + "config options. (database name and user)", level=ERROR) + raise OSContextError + + ctxt = {} + + # NOTE(jamespage) if mysql charm provides a network upon which + # access to the database should be made, reconfigure relation + # with the service units local address and defer execution + access_network = relation_get('access-network') + if access_network is not None: + if self.relation_prefix is not None: + hostname_key = "{}_hostname".format(self.relation_prefix) + else: + hostname_key = "hostname" + access_hostname = get_address_in_network( + access_network, + unit_get('private-address')) + set_hostname = relation_get(attribute=hostname_key, + unit=local_unit()) + if set_hostname != access_hostname: + relation_set(relation_settings={hostname_key: access_hostname}) + return None # Defer any further hook execution for now.... + + password_setting = 'password' + if self.relation_prefix: + password_setting = self.relation_prefix + '_password' + + if self.relation_id: + rids = [self.relation_id] + else: + rids = relation_ids(self.interfaces[0]) + + rel = (get_os_codename_install_source(config('openstack-origin')) or + 'icehouse') + for rid in rids: + self.related = True + for unit in related_units(rid): + rdata = relation_get(rid=rid, unit=unit) + host = rdata.get('db_host') + host = format_ipv6_addr(host) or host + ctxt = { + 'database_host': host, + 'database': self.database, + 'database_user': self.user, + 'database_password': rdata.get(password_setting), + 'database_type': 'mysql+pymysql' + } + # Port is being introduced with LP Bug #1876188 + # but it not currently required and may not be set in all + # cases, particularly in classic charms. 
+ port = rdata.get('db_port') + if port: + ctxt['database_port'] = port + if CompareOpenStackReleases(rel) < 'queens': + ctxt['database_type'] = 'mysql' + if self.context_complete(ctxt): + db_ssl(rdata, ctxt, self.ssl_dir) + return ctxt + return {} + + +class PostgresqlDBContext(OSContextGenerator): + interfaces = ['pgsql-db'] + + def __init__(self, database=None): + self.database = database + + def __call__(self): + self.database = self.database or config('database') + if self.database is None: + log('Could not generate postgresql_db context. Missing required ' + 'charm config options. (database name)', level=ERROR) + raise OSContextError + + ctxt = {} + for rid in relation_ids(self.interfaces[0]): + self.related = True + for unit in related_units(rid): + rel_host = relation_get('host', rid=rid, unit=unit) + rel_user = relation_get('user', rid=rid, unit=unit) + rel_passwd = relation_get('password', rid=rid, unit=unit) + ctxt = {'database_host': rel_host, + 'database': self.database, + 'database_user': rel_user, + 'database_password': rel_passwd, + 'database_type': 'postgresql'} + if self.context_complete(ctxt): + return ctxt + + return {} + + +def db_ssl(rdata, ctxt, ssl_dir): + if 'ssl_ca' in rdata and ssl_dir: + ca_path = os.path.join(ssl_dir, 'db-client.ca') + with open(ca_path, 'wb') as fh: + fh.write(b64decode(rdata['ssl_ca'])) + + ctxt['database_ssl_ca'] = ca_path + elif 'ssl_ca' in rdata: + log("Charm not setup for ssl support but ssl ca found", level=INFO) + return ctxt + + if 'ssl_cert' in rdata: + cert_path = os.path.join( + ssl_dir, 'db-client.cert') + if not os.path.exists(cert_path): + log("Waiting 1m for ssl client cert validity", level=INFO) + time.sleep(60) + + with open(cert_path, 'wb') as fh: + fh.write(b64decode(rdata['ssl_cert'])) + + ctxt['database_ssl_cert'] = cert_path + key_path = os.path.join(ssl_dir, 'db-client.key') + with open(key_path, 'wb') as fh: + fh.write(b64decode(rdata['ssl_key'])) + + ctxt['database_ssl_key'] = key_path + + return ctxt + + +class IdentityServiceContext(OSContextGenerator): + + def __init__(self, + service=None, + service_user=None, + rel_name='identity-service'): + self.service = service + self.service_user = service_user + self.rel_name = rel_name + self.interfaces = [self.rel_name] + + def _setup_pki_cache(self): + if self.service and self.service_user: + # This is required for pki token signing if we don't want /tmp to + # be used. + cachedir = '/var/cache/%s' % (self.service) + if not os.path.isdir(cachedir): + log("Creating service cache dir %s" % (cachedir), level=DEBUG) + mkdir(path=cachedir, owner=self.service_user, + group=self.service_user, perms=0o700) + + return cachedir + return None + + def _get_pkg_name(self, python_name='keystonemiddleware'): + """Get corresponding distro installed package for python + package name. + + :param python_name: nameof the python package + :type: string + """ + pkg_names = map(lambda x: x + python_name, ('python3-', 'python-')) + + for pkg in pkg_names: + if not filter_installed_packages((pkg,)): + return pkg + + return None + + def _get_keystone_authtoken_ctxt(self, ctxt, keystonemiddleware_os_rel): + """Build Jinja2 context for full rendering of [keystone_authtoken] + section with variable names included. Re-constructed from former + template 'section-keystone-auth-mitaka'. 
+ + :param ctxt: Jinja2 context returned from self.__call__() + :type: dict + :param keystonemiddleware_os_rel: OpenStack release name of + keystonemiddleware package installed + """ + c = collections.OrderedDict((('auth_type', 'password'),)) + + # 'www_authenticate_uri' replaced 'auth_uri' since Stein, + # see keystonemiddleware upstream sources for more info + if CompareOpenStackReleases(keystonemiddleware_os_rel) >= 'stein': + c.update(( + ('www_authenticate_uri', "{}://{}:{}/v3".format( + ctxt.get('service_protocol', ''), + ctxt.get('service_host', ''), + ctxt.get('service_port', ''))),)) + else: + c.update(( + ('auth_uri', "{}://{}:{}/v3".format( + ctxt.get('service_protocol', ''), + ctxt.get('service_host', ''), + ctxt.get('service_port', ''))),)) + + c.update(( + ('auth_url', "{}://{}:{}/v3".format( + ctxt.get('auth_protocol', ''), + ctxt.get('auth_host', ''), + ctxt.get('auth_port', ''))), + ('project_domain_name', ctxt.get('admin_domain_name', '')), + ('user_domain_name', ctxt.get('admin_domain_name', '')), + ('project_name', ctxt.get('admin_tenant_name', '')), + ('username', ctxt.get('admin_user', '')), + ('password', ctxt.get('admin_password', '')), + ('signing_dir', ctxt.get('signing_dir', '')),)) + + return c + + def __call__(self): + log('Generating template context for ' + self.rel_name, level=DEBUG) + ctxt = {} + + keystonemiddleware_os_release = None + if self._get_pkg_name(): + keystonemiddleware_os_release = os_release(self._get_pkg_name()) + + cachedir = self._setup_pki_cache() + if cachedir: + ctxt['signing_dir'] = cachedir + + for rid in relation_ids(self.rel_name): + self.related = True + for unit in related_units(rid): + rdata = relation_get(rid=rid, unit=unit) + serv_host = rdata.get('service_host') + serv_host = format_ipv6_addr(serv_host) or serv_host + auth_host = rdata.get('auth_host') + auth_host = format_ipv6_addr(auth_host) or auth_host + svc_protocol = rdata.get('service_protocol') or 'http' + auth_protocol = rdata.get('auth_protocol') or 'http' + api_version = rdata.get('api_version') or '2.0' + ctxt.update({'service_port': rdata.get('service_port'), + 'service_host': serv_host, + 'auth_host': auth_host, + 'auth_port': rdata.get('auth_port'), + 'admin_tenant_name': rdata.get('service_tenant'), + 'admin_user': rdata.get('service_username'), + 'admin_password': rdata.get('service_password'), + 'service_protocol': svc_protocol, + 'auth_protocol': auth_protocol, + 'api_version': api_version}) + + if float(api_version) > 2: + ctxt.update({ + 'admin_domain_name': rdata.get('service_domain'), + 'service_project_id': rdata.get('service_tenant_id'), + 'service_domain_id': rdata.get('service_domain_id')}) + + # we keep all veriables in ctxt for compatibility and + # add nested dictionary for keystone_authtoken generic + # templating + if keystonemiddleware_os_release: + ctxt['keystone_authtoken'] = \ + self._get_keystone_authtoken_ctxt( + ctxt, keystonemiddleware_os_release) + + if self.context_complete(ctxt): + # NOTE(jamespage) this is required for >= icehouse + # so a missing value just indicates keystone needs + # upgrading + ctxt['admin_tenant_id'] = rdata.get('service_tenant_id') + ctxt['admin_domain_id'] = rdata.get('service_domain_id') + return ctxt + + return {} + + +class IdentityCredentialsContext(IdentityServiceContext): + '''Context for identity-credentials interface type''' + + def __init__(self, + service=None, + service_user=None, + rel_name='identity-credentials'): + super(IdentityCredentialsContext, self).__init__(service, + service_user, + 
rel_name) + + def __call__(self): + log('Generating template context for ' + self.rel_name, level=DEBUG) + ctxt = {} + + cachedir = self._setup_pki_cache() + if cachedir: + ctxt['signing_dir'] = cachedir + + for rid in relation_ids(self.rel_name): + self.related = True + for unit in related_units(rid): + rdata = relation_get(rid=rid, unit=unit) + credentials_host = rdata.get('credentials_host') + credentials_host = ( + format_ipv6_addr(credentials_host) or credentials_host + ) + auth_host = rdata.get('auth_host') + auth_host = format_ipv6_addr(auth_host) or auth_host + svc_protocol = rdata.get('credentials_protocol') or 'http' + auth_protocol = rdata.get('auth_protocol') or 'http' + api_version = rdata.get('api_version') or '2.0' + ctxt.update({ + 'service_port': rdata.get('credentials_port'), + 'service_host': credentials_host, + 'auth_host': auth_host, + 'auth_port': rdata.get('auth_port'), + 'admin_tenant_name': rdata.get('credentials_project'), + 'admin_tenant_id': rdata.get('credentials_project_id'), + 'admin_user': rdata.get('credentials_username'), + 'admin_password': rdata.get('credentials_password'), + 'service_protocol': svc_protocol, + 'auth_protocol': auth_protocol, + 'api_version': api_version + }) + + if float(api_version) > 2: + ctxt.update({'admin_domain_name': + rdata.get('domain')}) + + if self.context_complete(ctxt): + return ctxt + + return {} + + +class NovaVendorMetadataContext(OSContextGenerator): + """Context used for configuring nova vendor metadata on nova.conf file.""" + + def __init__(self, os_release_pkg, interfaces=None): + """Initialize the NovaVendorMetadataContext object. + + :param os_release_pkg: the package name to extract the OpenStack + release codename from. + :type os_release_pkg: str + :param interfaces: list of string values to be used as the Context's + relation interfaces. + :type interfaces: List[str] + """ + self.os_release_pkg = os_release_pkg + if interfaces is not None: + self.interfaces = interfaces + + def __call__(self): + cmp_os_release = CompareOpenStackReleases( + os_release(self.os_release_pkg)) + ctxt = {'vendor_data': False} + + vdata_providers = [] + vdata = config('vendor-data') + vdata_url = config('vendor-data-url') + + if vdata: + try: + # validate the JSON. If invalid, we do not set anything here + json.loads(vdata) + except (TypeError, ValueError) as e: + log('Error decoding vendor-data. {}'.format(e), level=ERROR) + else: + ctxt['vendor_data'] = True + # Mitaka does not support DynamicJSON + # so vendordata_providers is not needed + if cmp_os_release > 'mitaka': + vdata_providers.append('StaticJSON') + + if vdata_url: + if cmp_os_release > 'mitaka': + ctxt['vendor_data_url'] = vdata_url + vdata_providers.append('DynamicJSON') + else: + log('Dynamic vendor data unsupported' + ' for {}.'.format(cmp_os_release), level=ERROR) + if vdata_providers: + ctxt['vendordata_providers'] = ','.join(vdata_providers) + + return ctxt + + +class NovaVendorMetadataJSONContext(OSContextGenerator): + """Context used for writing nova vendor metadata json file.""" + + def __init__(self, os_release_pkg): + """Initialize the NovaVendorMetadataJSONContext object. + + :param os_release_pkg: the package name to extract the OpenStack + release codename from. + :type os_release_pkg: str + """ + self.os_release_pkg = os_release_pkg + + def __call__(self): + ctxt = {'vendor_data_json': '{}'} + + vdata = config('vendor-data') + if vdata: + try: + # validate the JSON. If invalid, we return empty. 
+ json.loads(vdata) + except (TypeError, ValueError) as e: + log('Error decoding vendor-data. {}'.format(e), level=ERROR) + else: + ctxt['vendor_data_json'] = vdata + + return ctxt + + +class AMQPContext(OSContextGenerator): + + def __init__(self, ssl_dir=None, rel_name='amqp', relation_prefix=None, + relation_id=None): + self.ssl_dir = ssl_dir + self.rel_name = rel_name + self.relation_prefix = relation_prefix + self.interfaces = [rel_name] + self.relation_id = relation_id + + def __call__(self): + log('Generating template context for amqp', level=DEBUG) + conf = config() + if self.relation_prefix: + user_setting = '%s-rabbit-user' % (self.relation_prefix) + vhost_setting = '%s-rabbit-vhost' % (self.relation_prefix) + else: + user_setting = 'rabbit-user' + vhost_setting = 'rabbit-vhost' + + try: + username = conf[user_setting] + vhost = conf[vhost_setting] + except KeyError as e: + log('Could not generate shared_db context. Missing required charm ' + 'config options: %s.' % e, level=ERROR) + raise OSContextError + + ctxt = {} + if self.relation_id: + rids = [self.relation_id] + else: + rids = relation_ids(self.rel_name) + for rid in rids: + ha_vip_only = False + self.related = True + transport_hosts = None + rabbitmq_port = '5672' + for unit in related_units(rid): + if relation_get('clustered', rid=rid, unit=unit): + ctxt['clustered'] = True + vip = relation_get('vip', rid=rid, unit=unit) + vip = format_ipv6_addr(vip) or vip + ctxt['rabbitmq_host'] = vip + transport_hosts = [vip] + else: + host = relation_get('private-address', rid=rid, unit=unit) + host = format_ipv6_addr(host) or host + ctxt['rabbitmq_host'] = host + transport_hosts = [host] + + ctxt.update({ + 'rabbitmq_user': username, + 'rabbitmq_password': relation_get('password', rid=rid, + unit=unit), + 'rabbitmq_virtual_host': vhost, + }) + + ssl_port = relation_get('ssl_port', rid=rid, unit=unit) + if ssl_port: + ctxt['rabbit_ssl_port'] = ssl_port + rabbitmq_port = ssl_port + + ssl_ca = relation_get('ssl_ca', rid=rid, unit=unit) + if ssl_ca: + ctxt['rabbit_ssl_ca'] = ssl_ca + + if relation_get('ha_queues', rid=rid, unit=unit) is not None: + ctxt['rabbitmq_ha_queues'] = True + + ha_vip_only = relation_get('ha-vip-only', + rid=rid, unit=unit) is not None + + if self.context_complete(ctxt): + if 'rabbit_ssl_ca' in ctxt: + if not self.ssl_dir: + log("Charm not setup for ssl support but ssl ca " + "found", level=INFO) + break + + ca_path = os.path.join( + self.ssl_dir, 'rabbit-client-ca.pem') + with open(ca_path, 'wb') as fh: + fh.write(b64decode(ctxt['rabbit_ssl_ca'])) + ctxt['rabbit_ssl_ca'] = ca_path + + # Sufficient information found = break out! 
+ break + + # Used for active/active rabbitmq >= grizzly + if (('clustered' not in ctxt or ha_vip_only) and + len(related_units(rid)) > 1): + rabbitmq_hosts = [] + for unit in related_units(rid): + host = relation_get('private-address', rid=rid, unit=unit) + if not relation_get('password', rid=rid, unit=unit): + log( + ("Skipping {} password not sent which indicates " + "unit is not ready.".format(host)), + level=DEBUG) + continue + host = format_ipv6_addr(host) or host + rabbitmq_hosts.append(host) + + rabbitmq_hosts = sorted(rabbitmq_hosts) + ctxt['rabbitmq_hosts'] = ','.join(rabbitmq_hosts) + transport_hosts = rabbitmq_hosts + + if transport_hosts: + transport_url_hosts = ','.join([ + "{}:{}@{}:{}".format(ctxt['rabbitmq_user'], + ctxt['rabbitmq_password'], + host_, + rabbitmq_port) + for host_ in transport_hosts]) + ctxt['transport_url'] = "rabbit://{}/{}".format( + transport_url_hosts, vhost) + + oslo_messaging_flags = conf.get('oslo-messaging-flags', None) + if oslo_messaging_flags: + ctxt['oslo_messaging_flags'] = config_flags_parser( + oslo_messaging_flags) + + oslo_messaging_driver = conf.get( + 'oslo-messaging-driver', DEFAULT_OSLO_MESSAGING_DRIVER) + if oslo_messaging_driver: + ctxt['oslo_messaging_driver'] = oslo_messaging_driver + + notification_format = conf.get('notification-format', None) + if notification_format: + ctxt['notification_format'] = notification_format + + notification_topics = conf.get('notification-topics', None) + if notification_topics: + ctxt['notification_topics'] = notification_topics + + send_notifications_to_logs = conf.get('send-notifications-to-logs', None) + if send_notifications_to_logs: + ctxt['send_notifications_to_logs'] = send_notifications_to_logs + + if not self.complete: + return {} + + return ctxt + + +class CephContext(OSContextGenerator): + """Generates context for /etc/ceph/ceph.conf templates.""" + interfaces = ['ceph'] + + def __call__(self): + if not relation_ids('ceph'): + return {} + + log('Generating template context for ceph', level=DEBUG) + mon_hosts = [] + ctxt = { + 'use_syslog': str(config('use-syslog')).lower() + } + for rid in relation_ids('ceph'): + for unit in related_units(rid): + if not ctxt.get('auth'): + ctxt['auth'] = relation_get('auth', rid=rid, unit=unit) + if not ctxt.get('key'): + ctxt['key'] = relation_get('key', rid=rid, unit=unit) + if not ctxt.get('rbd_features'): + default_features = relation_get('rbd-features', rid=rid, unit=unit) + if default_features is not None: + ctxt['rbd_features'] = default_features + + ceph_addrs = relation_get('ceph-public-address', rid=rid, + unit=unit) + if ceph_addrs: + for addr in ceph_addrs.split(' '): + mon_hosts.append(format_ipv6_addr(addr) or addr) + else: + priv_addr = relation_get('private-address', rid=rid, + unit=unit) + mon_hosts.append(format_ipv6_addr(priv_addr) or priv_addr) + + ctxt['mon_hosts'] = ' '.join(sorted(mon_hosts)) + + if not os.path.isdir('/etc/ceph'): + os.mkdir('/etc/ceph') + + if not self.context_complete(ctxt): + return {} + + ensure_packages(['ceph-common']) + return ctxt + + def context_complete(self, ctxt): + """Overridden here to ensure the context is actually complete. + + We set `key` and `auth` to None here, by default, to ensure + that the context will always evaluate to incomplete until the + Ceph relation has actually sent these details; otherwise, + there is a potential race condition between the relation + appearing and the first unit actually setting this data on the + relation. 
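# NOTE: illustrative only; with made-up values, the transport URL
# assembled by AMQPContext above takes the usual oslo.messaging form of
# comma-separated user:pass@host:port entries followed by the vhost:
#
#   rabbit://user:pass@10.0.0.1:5672,user:pass@10.0.0.2:5672/openstack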
+ + :param ctxt: The current context members + :type ctxt: Dict[str, ANY] + :returns: True if the context is complete + :rtype: bool + """ + if 'auth' not in ctxt or 'key' not in ctxt: + return False + return super(CephContext, self).context_complete(ctxt) + + +class HAProxyContext(OSContextGenerator): + """Provides half a context for the haproxy template, which describes + all peers to be included in the cluster. Each charm needs to include + its own context generator that describes the port mapping. + + :side effect: mkdir is called on HAPROXY_RUN_DIR + """ + interfaces = ['cluster'] + + def __init__(self, singlenode_mode=False, + address_types=ADDRESS_TYPES): + self.address_types = address_types + self.singlenode_mode = singlenode_mode + + def __call__(self): + if not os.path.isdir(HAPROXY_RUN_DIR): + mkdir(path=HAPROXY_RUN_DIR) + if not relation_ids('cluster') and not self.singlenode_mode: + return {} + + l_unit = local_unit().replace('/', '-') + cluster_hosts = collections.OrderedDict() + + # NOTE(jamespage): build out map of configured network endpoints + # and associated backends + for addr_type in self.address_types: + cfg_opt = 'os-{}-network'.format(addr_type) + # NOTE(thedac) For some reason the ADDRESS_MAP uses 'int' rather + # than 'internal' + if addr_type == 'internal': + _addr_map_type = INTERNAL + else: + _addr_map_type = addr_type + # Network spaces aware + laddr = get_relation_ip(ADDRESS_MAP[_addr_map_type]['binding'], + config(cfg_opt)) + if laddr: + netmask = get_netmask_for_address(laddr) + cluster_hosts[laddr] = { + 'network': "{}/{}".format(laddr, + netmask), + 'backends': collections.OrderedDict([(l_unit, + laddr)]) + } + for rid in relation_ids('cluster'): + for unit in sorted(related_units(rid)): + # API Charms will need to set {addr_type}-address with + # get_relation_ip(addr_type) + _laddr = relation_get('{}-address'.format(addr_type), + rid=rid, unit=unit) + if _laddr: + _unit = unit.replace('/', '-') + cluster_hosts[laddr]['backends'][_unit] = _laddr + + # NOTE(jamespage) add backend based on get_relation_ip - this + # will either be the only backend or the fallback if no acls + # match in the frontend + # Network spaces aware + addr = get_relation_ip('cluster') + cluster_hosts[addr] = {} + netmask = get_netmask_for_address(addr) + cluster_hosts[addr] = { + 'network': "{}/{}".format(addr, netmask), + 'backends': collections.OrderedDict([(l_unit, + addr)]) + } + for rid in relation_ids('cluster'): + for unit in sorted(related_units(rid)): + # API Charms will need to set their private-address with + # get_relation_ip('cluster') + _laddr = relation_get('private-address', + rid=rid, unit=unit) + if _laddr: + _unit = unit.replace('/', '-') + cluster_hosts[addr]['backends'][_unit] = _laddr + + ctxt = { + 'frontends': cluster_hosts, + 'default_backend': addr + } + + if config('haproxy-server-timeout'): + ctxt['haproxy_server_timeout'] = config('haproxy-server-timeout') + + if config('haproxy-client-timeout'): + ctxt['haproxy_client_timeout'] = config('haproxy-client-timeout') + + if config('haproxy-queue-timeout'): + ctxt['haproxy_queue_timeout'] = config('haproxy-queue-timeout') + + if config('haproxy-connect-timeout'): + ctxt['haproxy_connect_timeout'] = config('haproxy-connect-timeout') + + if config('prefer-ipv6'): + ctxt['local_host'] = 'ip6-localhost' + ctxt['haproxy_host'] = '::' + else: + ctxt['local_host'] = '127.0.0.1' + ctxt['haproxy_host'] = '0.0.0.0' + + ctxt['ipv6_enabled'] = not is_ipv6_disabled() + + ctxt['stat_port'] = '8888' + + db = kv() + 
ctxt['stat_password'] = db.get('stat-password')
+        if not ctxt['stat_password']:
+            ctxt['stat_password'] = db.set('stat-password',
+                                           pwgen(32))
+            db.flush()
+
+        for frontend in cluster_hosts:
+            if (len(cluster_hosts[frontend]['backends']) > 1 or
+                    self.singlenode_mode):
+                # Enable haproxy when we have enough peers.
+                log('Ensuring haproxy enabled in /etc/default/haproxy.',
+                    level=DEBUG)
+                with open('/etc/default/haproxy', 'w') as out:
+                    out.write('ENABLED=1\n')
+
+                return ctxt
+
+        log('HAProxy context is incomplete, this unit has no peers.',
+            level=INFO)
+        return {}
+
+
+class ImageServiceContext(OSContextGenerator):
+    interfaces = ['image-service']
+
+    def __call__(self):
+        """Obtains the glance API server from the image-service relation.
+        Useful in nova and cinder (currently).
+        """
+        log('Generating template context for image-service.', level=DEBUG)
+        rids = relation_ids('image-service')
+        if not rids:
+            return {}
+
+        for rid in rids:
+            for unit in related_units(rid):
+                api_server = relation_get('glance-api-server',
+                                          rid=rid, unit=unit)
+                if api_server:
+                    return {'glance_api_servers': api_server}
+
+        log("ImageService context is incomplete. Missing required relation "
+            "data.", level=INFO)
+        return {}
+
+
+class ApacheSSLContext(OSContextGenerator):
+    """Generates a context for an apache vhost configuration that configures
+    HTTPS reverse proxying for one or many endpoints. Generated context
+    looks something like::
+
+        {
+            'namespace': 'cinder',
+            'private_address': 'iscsi.mycinderhost.com',
+            'endpoints': [(8776, 8766), (8777, 8767)]
+        }
+
+    The endpoints list consists of tuples mapping external ports
+    to internal ports.
+    """
+    interfaces = ['https']
+
+    # charms should inherit this context and set external ports
+    # and service namespace accordingly.
+    external_ports = []
+    service_namespace = None
+    user = group = 'root'
+
+    def enable_modules(self):
+        cmd = ['a2enmod', 'ssl', 'proxy', 'proxy_http', 'headers']
+        check_call(cmd)
+
+    def configure_cert(self, cn=None):
+        ssl_dir = os.path.join('/etc/apache2/ssl/', self.service_namespace)
+        mkdir(path=ssl_dir)
+        cert, key = get_cert(cn)
+        if cert and key:
+            if cn:
+                cert_filename = 'cert_{}'.format(cn)
+                key_filename = 'key_{}'.format(cn)
+            else:
+                cert_filename = 'cert'
+                key_filename = 'key'
+
+            write_file(path=os.path.join(ssl_dir, cert_filename),
+                       content=b64decode(cert), owner=self.user,
+                       group=self.group, perms=0o640)
+            write_file(path=os.path.join(ssl_dir, key_filename),
+                       content=b64decode(key), owner=self.user,
+                       group=self.group, perms=0o640)
+
+    def configure_ca(self):
+        ca_cert = get_ca_cert()
+        if ca_cert:
+            install_ca_cert(b64decode(ca_cert))
+
+    def canonical_names(self):
+        """Figure out which canonical names clients will access this service.
+        """
+        cns = []
+        for r_id in relation_ids('identity-service'):
+            for unit in related_units(r_id):
+                rdata = relation_get(rid=r_id, unit=unit)
+                for k in rdata:
+                    if k.startswith('ssl_key_'):
+                        # Use slicing rather than str.lstrip here: lstrip
+                        # removes a *set of characters*, not a prefix, and
+                        # would mangle CNs starting with those characters.
+                        cns.append(k[len('ssl_key_'):])
+
+        return sorted(list(set(cns)))
+
+    def get_network_addresses(self):
+        """For each network configured, return corresponding address and
+        hostname or vip (if available).
+
+        Returns a list of tuples of the form:
+
+            [(address_in_net_a, hostname_in_net_a),
+             (address_in_net_b, hostname_in_net_b),
+             ...]
+
+        or, if no hostname(s) available:
+
+            [(address_in_net_a, vip_in_net_a),
+             (address_in_net_b, vip_in_net_b),
+             ...]
+
+        or, if no vip(s) available:
+
+            [(address_in_net_a, address_in_net_a),
+             (address_in_net_b, address_in_net_b),
+             ...]
+ """ + addresses = [] + for net_type in [INTERNAL, ADMIN, PUBLIC]: + net_config = config(ADDRESS_MAP[net_type]['config']) + # NOTE(jamespage): Fallback must always be private address + # as this is used to bind services on the + # local unit. + fallback = unit_get("private-address") + if net_config: + addr = get_address_in_network(net_config, + fallback) + else: + try: + addr = network_get_primary_address( + ADDRESS_MAP[net_type]['binding'] + ) + except (NotImplementedError, NoNetworkBinding): + addr = fallback + + endpoint = resolve_address(net_type) + addresses.append((addr, endpoint)) + + return sorted(set(addresses)) + + def __call__(self): + if isinstance(self.external_ports, six.string_types): + self.external_ports = [self.external_ports] + + if not self.external_ports or not https(): + return {} + + use_keystone_ca = True + for rid in relation_ids('certificates'): + if related_units(rid): + use_keystone_ca = False + + if use_keystone_ca: + self.configure_ca() + + self.enable_modules() + + ctxt = {'namespace': self.service_namespace, + 'endpoints': [], + 'ext_ports': []} + + if use_keystone_ca: + cns = self.canonical_names() + if cns: + for cn in cns: + self.configure_cert(cn) + else: + # Expect cert/key provided in config (currently assumed that ca + # uses ip for cn) + for net_type in (INTERNAL, ADMIN, PUBLIC): + cn = resolve_address(endpoint_type=net_type) + self.configure_cert(cn) + + addresses = self.get_network_addresses() + for address, endpoint in addresses: + for api_port in self.external_ports: + ext_port = determine_apache_port(api_port, + singlenode_mode=True) + int_port = determine_api_port(api_port, singlenode_mode=True) + portmap = (address, endpoint, int(ext_port), int(int_port)) + ctxt['endpoints'].append(portmap) + ctxt['ext_ports'].append(int(ext_port)) + + ctxt['ext_ports'] = sorted(list(set(ctxt['ext_ports']))) + return ctxt + + +class NeutronContext(OSContextGenerator): + interfaces = [] + + @property + def plugin(self): + return None + + @property + def network_manager(self): + return None + + @property + def packages(self): + return neutron_plugin_attribute(self.plugin, 'packages', + self.network_manager) + + @property + def neutron_security_groups(self): + return None + + def _ensure_packages(self): + for pkgs in self.packages: + ensure_packages(pkgs) + + def ovs_ctxt(self): + driver = neutron_plugin_attribute(self.plugin, 'driver', + self.network_manager) + config = neutron_plugin_attribute(self.plugin, 'config', + self.network_manager) + ovs_ctxt = {'core_plugin': driver, + 'neutron_plugin': 'ovs', + 'neutron_security_groups': self.neutron_security_groups, + 'local_ip': unit_private_ip(), + 'config': config} + + return ovs_ctxt + + def nuage_ctxt(self): + driver = neutron_plugin_attribute(self.plugin, 'driver', + self.network_manager) + config = neutron_plugin_attribute(self.plugin, 'config', + self.network_manager) + nuage_ctxt = {'core_plugin': driver, + 'neutron_plugin': 'vsp', + 'neutron_security_groups': self.neutron_security_groups, + 'local_ip': unit_private_ip(), + 'config': config} + + return nuage_ctxt + + def nvp_ctxt(self): + driver = neutron_plugin_attribute(self.plugin, 'driver', + self.network_manager) + config = neutron_plugin_attribute(self.plugin, 'config', + self.network_manager) + nvp_ctxt = {'core_plugin': driver, + 'neutron_plugin': 'nvp', + 'neutron_security_groups': self.neutron_security_groups, + 'local_ip': unit_private_ip(), + 'config': config} + + return nvp_ctxt + + def n1kv_ctxt(self): + driver = 
neutron_plugin_attribute(self.plugin, 'driver', + self.network_manager) + n1kv_config = neutron_plugin_attribute(self.plugin, 'config', + self.network_manager) + n1kv_user_config_flags = config('n1kv-config-flags') + restrict_policy_profiles = config('n1kv-restrict-policy-profiles') + n1kv_ctxt = {'core_plugin': driver, + 'neutron_plugin': 'n1kv', + 'neutron_security_groups': self.neutron_security_groups, + 'local_ip': unit_private_ip(), + 'config': n1kv_config, + 'vsm_ip': config('n1kv-vsm-ip'), + 'vsm_username': config('n1kv-vsm-username'), + 'vsm_password': config('n1kv-vsm-password'), + 'restrict_policy_profiles': restrict_policy_profiles} + + if n1kv_user_config_flags: + flags = config_flags_parser(n1kv_user_config_flags) + n1kv_ctxt['user_config_flags'] = flags + + return n1kv_ctxt + + def calico_ctxt(self): + driver = neutron_plugin_attribute(self.plugin, 'driver', + self.network_manager) + config = neutron_plugin_attribute(self.plugin, 'config', + self.network_manager) + calico_ctxt = {'core_plugin': driver, + 'neutron_plugin': 'Calico', + 'neutron_security_groups': self.neutron_security_groups, + 'local_ip': unit_private_ip(), + 'config': config} + + return calico_ctxt + + def neutron_ctxt(self): + if https(): + proto = 'https' + else: + proto = 'http' + + if is_clustered(): + host = config('vip') + else: + host = unit_get('private-address') + + ctxt = {'network_manager': self.network_manager, + 'neutron_url': '%s://%s:%s' % (proto, host, '9696')} + return ctxt + + def pg_ctxt(self): + driver = neutron_plugin_attribute(self.plugin, 'driver', + self.network_manager) + config = neutron_plugin_attribute(self.plugin, 'config', + self.network_manager) + ovs_ctxt = {'core_plugin': driver, + 'neutron_plugin': 'plumgrid', + 'neutron_security_groups': self.neutron_security_groups, + 'local_ip': unit_private_ip(), + 'config': config} + return ovs_ctxt + + def midonet_ctxt(self): + driver = neutron_plugin_attribute(self.plugin, 'driver', + self.network_manager) + midonet_config = neutron_plugin_attribute(self.plugin, 'config', + self.network_manager) + mido_ctxt = {'core_plugin': driver, + 'neutron_plugin': 'midonet', + 'neutron_security_groups': self.neutron_security_groups, + 'local_ip': unit_private_ip(), + 'config': midonet_config} + + return mido_ctxt + + def __call__(self): + if self.network_manager not in ['quantum', 'neutron']: + return {} + + if not self.plugin: + return {} + + ctxt = self.neutron_ctxt() + + if self.plugin == 'ovs': + ctxt.update(self.ovs_ctxt()) + elif self.plugin in ['nvp', 'nsx']: + ctxt.update(self.nvp_ctxt()) + elif self.plugin == 'n1kv': + ctxt.update(self.n1kv_ctxt()) + elif self.plugin == 'Calico': + ctxt.update(self.calico_ctxt()) + elif self.plugin == 'vsp': + ctxt.update(self.nuage_ctxt()) + elif self.plugin == 'plumgrid': + ctxt.update(self.pg_ctxt()) + elif self.plugin == 'midonet': + ctxt.update(self.midonet_ctxt()) + + alchemy_flags = config('neutron-alchemy-flags') + if alchemy_flags: + flags = config_flags_parser(alchemy_flags) + ctxt['neutron_alchemy_flags'] = flags + + return ctxt + + +class NeutronPortContext(OSContextGenerator): + + def resolve_ports(self, ports): + """Resolve NICs not yet bound to bridge(s) + + If hwaddress provided then returns resolved hwaddress otherwise NIC. 
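+
+        For example (illustrative values): given
+        ports == ['eth0', '00:00:5e:00:53:2a'] where the MAC resolves to
+        an unbridged NIC 'eth1' with no IP address, the result would be
+        ['eth0', 'eth1'].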
+        """
+        if not ports:
+            return None
+
+        hwaddr_to_nic = {}
+        hwaddr_to_ip = {}
+        extant_nics = list_nics()
+
+        for nic in extant_nics:
+            # Ignore virtual interfaces (bond masters will be identified from
+            # their slaves)
+            if not is_phy_iface(nic):
+                continue
+
+            _nic = get_bond_master(nic)
+            if _nic:
+                log("Replacing iface '%s' with bond master '%s'" % (nic, _nic),
+                    level=DEBUG)
+                nic = _nic
+
+            hwaddr = get_nic_hwaddr(nic)
+            hwaddr_to_nic[hwaddr] = nic
+            addresses = get_ipv4_addr(nic, fatal=False)
+            addresses += get_ipv6_addr(iface=nic, fatal=False)
+            hwaddr_to_ip[hwaddr] = addresses
+
+        resolved = []
+        mac_regex = re.compile(r'([0-9A-F]{2}[:-]){5}([0-9A-F]{2})', re.I)
+        for entry in ports:
+            if re.match(mac_regex, entry):
+                # NIC is in known NICs and does NOT have an IP address
+                if entry in hwaddr_to_nic and not hwaddr_to_ip[entry]:
+                    # If the nic is part of a bridge then don't use it
+                    if is_bridge_member(hwaddr_to_nic[entry]):
+                        continue
+
+                    # Entry is a MAC address for a valid interface that doesn't
+                    # have an IP address assigned yet.
+                    resolved.append(hwaddr_to_nic[entry])
+            elif entry in extant_nics:
+                # If the passed entry is not a MAC address and the interface
+                # exists, assume it's a valid interface, and that the user put
+                # it there on purpose (we can trust it to be the real external
+                # network).
+                resolved.append(entry)
+
+        # Ensure no duplicates
+        return list(set(resolved))
+
+
+class OSConfigFlagContext(OSContextGenerator):
+    """Provides support for user-defined config flags.
+
+    Users can define a comma-separated list of key=value pairs
+    in the charm configuration and apply them at any point in
+    any file by using a template flag.
+
+    Sometimes users might want config flags inserted within a
+    specific section so this class allows users to specify the
+    template flag name, allowing for multiple template flags
+    (sections) within the same context.
+
+    NOTE: the value of config-flags may be a comma-separated list of
+          key=value pairs and some Openstack config files support
+          comma-separated lists as values.
+    """
+
+    def __init__(self, charm_flag='config-flags',
+                 template_flag='user_config_flags'):
+        """
+        :param charm_flag: config flags in charm configuration.
+        :param template_flag: insert point for user-defined flags in template
+                              file.
+        """
+        super(OSConfigFlagContext, self).__init__()
+        self._charm_flag = charm_flag
+        self._template_flag = template_flag
+
+    def __call__(self):
+        config_flags = config(self._charm_flag)
+        if not config_flags:
+            return {}
+
+        return {self._template_flag:
+                config_flags_parser(config_flags)}
+
+
+class LibvirtConfigFlagsContext(OSContextGenerator):
+    """
+    This context provides support for extending
+    the libvirt section through user-defined flags.
+    """
+    def __call__(self):
+        ctxt = {}
+        libvirt_flags = config('libvirt-flags')
+        if libvirt_flags:
+            ctxt['libvirt_flags'] = config_flags_parser(
+                libvirt_flags)
+        return ctxt
+
+
+class SubordinateConfigContext(OSContextGenerator):
+
+    """
+    Responsible for inspecting relations to subordinates that
+    may be exporting required config via a json blob.
+
+    The subordinate interface allows subordinates to export their
+    configuration requirements to the principal for multiple config
+    files and multiple services. I.e., a subordinate that has interfaces
+    to both glance and nova may export the following YAML blob as JSON::
+
+        glance:
+            /etc/glance/glance-api.conf:
+                sections:
+                    DEFAULT:
+                        - [key1, value1]
+            /etc/glance/glance-registry.conf:
+                    MYSECTION:
+                        - [key2, value2]
+        nova:
+            /etc/nova/nova.conf:
+                sections:
+                    DEFAULT:
+                        - [key3, value3]
+
+
+    It is then up to the principal charms to subscribe this context to
+    the service+config file it is interested in. Configuration data will
+    be available in the template context, in glance's case, as::
+
+        ctxt = {
+            ... other context ...
+            'subordinate_configuration': {
+                'DEFAULT': {
+                    'key1': 'value1',
+                },
+                'MYSECTION': {
+                    'key2': 'value2',
+                },
+            }
+        }
+    """
+
+    def __init__(self, service, config_file, interface):
+        """
+        :param service    : Service name key to query in any subordinate
+                            data found
+        :param config_file : Service's config file to query sections
+        :param interface  : Subordinate interface to inspect
+        """
+        self.config_file = config_file
+        if isinstance(service, list):
+            self.services = service
+        else:
+            self.services = [service]
+        if isinstance(interface, list):
+            self.interfaces = interface
+        else:
+            self.interfaces = [interface]
+
+    def __call__(self):
+        ctxt = {'sections': {}}
+        rids = []
+        for interface in self.interfaces:
+            rids.extend(relation_ids(interface))
+        for rid in rids:
+            for unit in related_units(rid):
+                sub_config = relation_get('subordinate_configuration',
+                                          rid=rid, unit=unit)
+                if sub_config and sub_config != '':
+                    try:
+                        sub_config = json.loads(sub_config)
+                    except Exception:
+                        log('Could not parse JSON from '
+                            'subordinate_configuration setting from %s'
+                            % rid, level=ERROR)
+                        continue
+
+                    for service in self.services:
+                        if service not in sub_config:
+                            log('Found subordinate_configuration on %s but it '
+                                'contained nothing for %s service'
+                                % (rid, service), level=INFO)
+                            continue
+
+                        sub_config = sub_config[service]
+                        if self.config_file not in sub_config:
+                            log('Found subordinate_configuration on %s but it '
+                                'contained nothing for %s'
+                                % (rid, self.config_file), level=INFO)
+                            continue
+
+                        sub_config = sub_config[self.config_file]
+                        for k, v in six.iteritems(sub_config):
+                            if k == 'sections':
+                                for section, config_list in six.iteritems(v):
+                                    log("adding section '%s'" % (section),
+                                        level=DEBUG)
+                                    if ctxt[k].get(section):
+                                        ctxt[k][section].extend(config_list)
+                                    else:
+                                        ctxt[k][section] = config_list
+                            else:
+                                ctxt[k] = v
+        log("%d section(s) found" % (len(ctxt['sections'])), level=DEBUG)
+        return ctxt
+
+
+class LogLevelContext(OSContextGenerator):
+
+    def __call__(self):
+        ctxt = {}
+        ctxt['debug'] = \
+            False if config('debug') is None else config('debug')
+        ctxt['verbose'] = \
+            False if config('verbose') is None else config('verbose')
+
+        return ctxt
+
+
+class SyslogContext(OSContextGenerator):
+
+    def __call__(self):
+        ctxt = {'use_syslog': config('use-syslog')}
+        return ctxt
+
+
+class BindHostContext(OSContextGenerator):
+
+    def __call__(self):
+        if config('prefer-ipv6'):
+            return {'bind_host': '::'}
+        else:
+            return {'bind_host': '0.0.0.0'}
+
+
+MAX_DEFAULT_WORKERS = 4
+DEFAULT_MULTIPLIER = 2
+
+
+def _calculate_workers():
+    '''
+    Determine the number of worker processes based on the CPU
+    count of the unit containing the application.
+
+    Workers will be limited to MAX_DEFAULT_WORKERS in
+    container environments where no worker-multiplier configuration
+    option has been set.
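+
+    For example (illustrative): with 4 CPU cores and the default
+    multiplier of 2 this returns 8 workers, while in a container with no
+    worker-multiplier set the result is capped at MAX_DEFAULT_WORKERS (4).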
+ + @returns int: number of worker processes to use + ''' + multiplier = config('worker-multiplier') or DEFAULT_MULTIPLIER + count = int(_num_cpus() * multiplier) + if multiplier > 0 and count == 0: + count = 1 + + if config('worker-multiplier') is None and is_container(): + # NOTE(jamespage): Limit unconfigured worker-multiplier + # to MAX_DEFAULT_WORKERS to avoid insane + # worker configuration in LXD containers + # on large servers + # Reference: https://pad.lv/1665270 + count = min(count, MAX_DEFAULT_WORKERS) + + return count + + +def _num_cpus(): + ''' + Compatibility wrapper for calculating the number of CPU's + a unit has. + + @returns: int: number of CPU cores detected + ''' + try: + return psutil.cpu_count() + except AttributeError: + return psutil.NUM_CPUS + + +class WorkerConfigContext(OSContextGenerator): + + def __call__(self): + ctxt = {"workers": _calculate_workers()} + return ctxt + + +class WSGIWorkerConfigContext(WorkerConfigContext): + + def __init__(self, name=None, script=None, admin_script=None, + public_script=None, user=None, group=None, + process_weight=1.00, + admin_process_weight=0.25, public_process_weight=0.75): + self.service_name = name + self.user = user or name + self.group = group or name + self.script = script + self.admin_script = admin_script + self.public_script = public_script + self.process_weight = process_weight + self.admin_process_weight = admin_process_weight + self.public_process_weight = public_process_weight + + def __call__(self): + total_processes = _calculate_workers() + ctxt = { + "service_name": self.service_name, + "user": self.user, + "group": self.group, + "script": self.script, + "admin_script": self.admin_script, + "public_script": self.public_script, + "processes": int(math.ceil(self.process_weight * total_processes)), + "admin_processes": int(math.ceil(self.admin_process_weight * + total_processes)), + "public_processes": int(math.ceil(self.public_process_weight * + total_processes)), + "threads": 1, + } + return ctxt + + +class ZeroMQContext(OSContextGenerator): + interfaces = ['zeromq-configuration'] + + def __call__(self): + ctxt = {} + if is_relation_made('zeromq-configuration', 'host'): + for rid in relation_ids('zeromq-configuration'): + for unit in related_units(rid): + ctxt['zmq_nonce'] = relation_get('nonce', unit, rid) + ctxt['zmq_host'] = relation_get('host', unit, rid) + ctxt['zmq_redis_address'] = relation_get( + 'zmq_redis_address', unit, rid) + + return ctxt + + +class NotificationDriverContext(OSContextGenerator): + + def __init__(self, zmq_relation='zeromq-configuration', + amqp_relation='amqp'): + """ + :param zmq_relation: Name of Zeromq relation to check + """ + self.zmq_relation = zmq_relation + self.amqp_relation = amqp_relation + + def __call__(self): + ctxt = {'notifications': 'False'} + if is_relation_made(self.amqp_relation): + ctxt['notifications'] = "True" + + return ctxt + + +class SysctlContext(OSContextGenerator): + """This context check if the 'sysctl' option exists on configuration + then creates a file with the loaded contents""" + def __call__(self): + sysctl_dict = config('sysctl') + if sysctl_dict: + sysctl_create(sysctl_dict, + '/etc/sysctl.d/50-{0}.conf'.format(charm_name())) + return {'sysctl': sysctl_dict} + + +class NeutronAPIContext(OSContextGenerator): + ''' + Inspects current neutron-plugin-api relation for neutron settings. Return + defaults if it is not present. 
+    '''
+    interfaces = ['neutron-plugin-api']
+
+    def __call__(self):
+        self.neutron_defaults = {
+            'l2_population': {
+                'rel_key': 'l2-population',
+                'default': False,
+            },
+            'overlay_network_type': {
+                'rel_key': 'overlay-network-type',
+                'default': 'gre',
+            },
+            'neutron_security_groups': {
+                'rel_key': 'neutron-security-groups',
+                'default': False,
+            },
+            'network_device_mtu': {
+                'rel_key': 'network-device-mtu',
+                'default': None,
+            },
+            'enable_dvr': {
+                'rel_key': 'enable-dvr',
+                'default': False,
+            },
+            'enable_l3ha': {
+                'rel_key': 'enable-l3ha',
+                'default': False,
+            },
+            'dns_domain': {
+                'rel_key': 'dns-domain',
+                'default': None,
+            },
+            'polling_interval': {
+                'rel_key': 'polling-interval',
+                'default': 2,
+            },
+            'rpc_response_timeout': {
+                'rel_key': 'rpc-response-timeout',
+                'default': 60,
+            },
+            'report_interval': {
+                'rel_key': 'report-interval',
+                'default': 30,
+            },
+            'enable_qos': {
+                'rel_key': 'enable-qos',
+                'default': False,
+            },
+            'enable_nsg_logging': {
+                'rel_key': 'enable-nsg-logging',
+                'default': False,
+            },
+            'enable_nfg_logging': {
+                'rel_key': 'enable-nfg-logging',
+                'default': False,
+            },
+            'enable_port_forwarding': {
+                'rel_key': 'enable-port-forwarding',
+                'default': False,
+            },
+            'global_physnet_mtu': {
+                'rel_key': 'global-physnet-mtu',
+                'default': 1500,
+            },
+            'physical_network_mtus': {
+                'rel_key': 'physical-network-mtus',
+                'default': None,
+            },
+        }
+        ctxt = self.get_neutron_options({})
+        for rid in relation_ids('neutron-plugin-api'):
+            for unit in related_units(rid):
+                rdata = relation_get(rid=rid, unit=unit)
+                # The l2-population key is used by the context as a way of
+                # checking if the api service on the other end is sending data
+                # in a recent format.
+                if 'l2-population' in rdata:
+                    ctxt.update(self.get_neutron_options(rdata))
+
+        extension_drivers = []
+
+        if ctxt['enable_qos']:
+            extension_drivers.append('qos')
+
+        if ctxt['enable_nsg_logging']:
+            extension_drivers.append('log')
+
+        ctxt['extension_drivers'] = ','.join(extension_drivers)
+
+        l3_extension_plugins = []
+
+        if ctxt['enable_port_forwarding']:
+            l3_extension_plugins.append('port_forwarding')
+
+        ctxt['l3_extension_plugins'] = l3_extension_plugins
+
+        return ctxt
+
+    def get_neutron_options(self, rdata):
+        settings = {}
+        for nkey in self.neutron_defaults.keys():
+            defv = self.neutron_defaults[nkey]['default']
+            rkey = self.neutron_defaults[nkey]['rel_key']
+            if rkey in rdata.keys():
+                if type(defv) is bool:
+                    settings[nkey] = bool_from_string(rdata[rkey])
+                else:
+                    settings[nkey] = rdata[rkey]
+            else:
+                settings[nkey] = defv
+        return settings
+
+
+class ExternalPortContext(NeutronPortContext):
+
+    def __call__(self):
+        ctxt = {}
+        ports = config('ext-port')
+        if ports:
+            ports = [p.strip() for p in ports.split()]
+            ports = self.resolve_ports(ports)
+            if ports:
+                ctxt = {"ext_port": ports[0]}
+                napi_settings = NeutronAPIContext()()
+                mtu = napi_settings.get('network_device_mtu')
+                if mtu:
+                    ctxt['ext_port_mtu'] = mtu
+
+        return ctxt
+
+
+class DataPortContext(NeutronPortContext):
+
+    def __call__(self):
+        ports = config('data-port')
+        if ports:
+            # Map of {port/mac:bridge} (parse_data_port_mappings returns
+            # port->bridge, which the iteration below relies on).
+            portmap = parse_data_port_mappings(ports)
+            ports = portmap.keys()
+            # Resolve provided ports or mac addresses and filter out those
+            # already attached to a bridge.
+            resolved = self.resolve_ports(ports)
+            # Rebuild port index using resolved and filtered ports.
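+            # Illustrative example: config 'data-port=br-ex:eth0' gives
+            # portmap {'eth0': 'br-ex'}; if eth0 resolves, the context
+            # returned below is {'eth0': 'br-ex'}.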
+ normalized = {get_nic_hwaddr(port): port for port in resolved + if port not in ports} + normalized.update({port: port for port in resolved + if port in ports}) + if resolved: + return {normalized[port]: bridge for port, bridge in + six.iteritems(portmap) if port in normalized.keys()} + + return None + + +class PhyNICMTUContext(DataPortContext): + + def __call__(self): + ctxt = {} + mappings = super(PhyNICMTUContext, self).__call__() + if mappings and mappings.keys(): + ports = sorted(mappings.keys()) + napi_settings = NeutronAPIContext()() + mtu = napi_settings.get('network_device_mtu') + all_ports = set() + # If any of ports is a vlan device, its underlying device must have + # mtu applied first. + for port in ports: + for lport in glob.glob("/sys/class/net/%s/lower_*" % port): + lport = os.path.basename(lport) + all_ports.add(lport.split('_')[1]) + + all_ports = list(all_ports) + all_ports.extend(ports) + if mtu: + ctxt["devs"] = '\\n'.join(all_ports) + ctxt['mtu'] = mtu + + return ctxt + + +class NetworkServiceContext(OSContextGenerator): + + def __init__(self, rel_name='quantum-network-service'): + self.rel_name = rel_name + self.interfaces = [rel_name] + + def __call__(self): + for rid in relation_ids(self.rel_name): + for unit in related_units(rid): + rdata = relation_get(rid=rid, unit=unit) + ctxt = { + 'keystone_host': rdata.get('keystone_host'), + 'service_port': rdata.get('service_port'), + 'auth_port': rdata.get('auth_port'), + 'service_tenant': rdata.get('service_tenant'), + 'service_username': rdata.get('service_username'), + 'service_password': rdata.get('service_password'), + 'quantum_host': rdata.get('quantum_host'), + 'quantum_port': rdata.get('quantum_port'), + 'quantum_url': rdata.get('quantum_url'), + 'region': rdata.get('region'), + 'service_protocol': + rdata.get('service_protocol') or 'http', + 'auth_protocol': + rdata.get('auth_protocol') or 'http', + 'api_version': + rdata.get('api_version') or '2.0', + } + if self.context_complete(ctxt): + return ctxt + return {} + + +class InternalEndpointContext(OSContextGenerator): + """Internal endpoint context. + + This context provides the endpoint type used for communication between + services e.g. between Nova and Cinder internally. Openstack uses Public + endpoints by default so this allows admins to optionally use internal + endpoints. + """ + def __call__(self): + return {'use_internal_endpoints': config('use-internal-endpoints')} + + +class VolumeAPIContext(InternalEndpointContext): + """Volume API context. + + This context provides information regarding the volume endpoint to use + when communicating between services. It determines which version of the + API is appropriate for use. + + This value will be determined in the resulting context dictionary + returned from calling the VolumeAPIContext object. Information provided + by this context is as follows: + + volume_api_version: the volume api version to use, currently + 'v2' or 'v3' + volume_catalog_info: the information to use for a cinder client + configuration that consumes API endpoints from the keystone + catalog. This is defined as the type:name:endpoint_type string. + """ + # FIXME(wolsen) This implementation is based on the provider being able + # to specify the package version to check but does not guarantee that the + # volume service api version selected is available. In practice, it is + # quite likely the volume service *is* providing the v3 volume service. + # This should be resolved when the service-discovery spec is implemented. 
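+    # Illustrative example: on a pike or later cloud with
+    # use-internal-endpoints=True, the resulting context would be
+    # {'volume_api_version': '3',
+    #  'volume_catalog_info': 'volumev3:cinderv3:internalURL'}.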
+ def __init__(self, pkg): + """ + Creates a new VolumeAPIContext for use in determining which version + of the Volume API should be used for communication. A package codename + should be supplied for determining the currently installed OpenStack + version. + + :param pkg: the package codename to use in order to determine the + component version (e.g. nova-common). See + charmhelpers.contrib.openstack.utils.PACKAGE_CODENAMES for more. + """ + super(VolumeAPIContext, self).__init__() + self._ctxt = None + if not pkg: + raise ValueError('package name must be provided in order to ' + 'determine current OpenStack version.') + self.pkg = pkg + + @property + def ctxt(self): + if self._ctxt is not None: + return self._ctxt + self._ctxt = self._determine_ctxt() + return self._ctxt + + def _determine_ctxt(self): + """Determines the Volume API endpoint information. + + Determines the appropriate version of the API that should be used + as well as the catalog_info string that would be supplied. Returns + a dict containing the volume_api_version and the volume_catalog_info. + """ + rel = os_release(self.pkg) + version = '2' + if CompareOpenStackReleases(rel) >= 'pike': + version = '3' + + service_type = 'volumev{version}'.format(version=version) + service_name = 'cinderv{version}'.format(version=version) + endpoint_type = 'publicURL' + if config('use-internal-endpoints'): + endpoint_type = 'internalURL' + catalog_info = '{type}:{name}:{endpoint}'.format( + type=service_type, name=service_name, endpoint=endpoint_type) + + return { + 'volume_api_version': version, + 'volume_catalog_info': catalog_info, + } + + def __call__(self): + return self.ctxt + + +class AppArmorContext(OSContextGenerator): + """Base class for apparmor contexts.""" + + def __init__(self, profile_name=None): + self._ctxt = None + self.aa_profile = profile_name + self.aa_utils_packages = ['apparmor-utils'] + + @property + def ctxt(self): + if self._ctxt is not None: + return self._ctxt + self._ctxt = self._determine_ctxt() + return self._ctxt + + def _determine_ctxt(self): + """ + Validate aa-profile-mode settings is disable, enforce, or complain. + + :return ctxt: Dictionary of the apparmor profile or None + """ + if config('aa-profile-mode') in ['disable', 'enforce', 'complain']: + ctxt = {'aa_profile_mode': config('aa-profile-mode'), + 'ubuntu_release': lsb_release()['DISTRIB_RELEASE']} + if self.aa_profile: + ctxt['aa_profile'] = self.aa_profile + else: + ctxt = None + return ctxt + + def __call__(self): + return self.ctxt + + def install_aa_utils(self): + """ + Install packages required for apparmor configuration. + """ + log("Installing apparmor utils.") + ensure_packages(self.aa_utils_packages) + + def manually_disable_aa_profile(self): + """ + Manually disable an apparmor profile. + + If aa-profile-mode is set to disabled (default) this is required as the + template has been written but apparmor is yet unaware of the profile + and aa-disable aa-profile fails. Without this the profile would kick + into enforce mode on the next service restart. + + """ + profile_path = '/etc/apparmor.d' + disable_path = '/etc/apparmor.d/disable' + if not os.path.lexists(os.path.join(disable_path, self.aa_profile)): + os.symlink(os.path.join(profile_path, self.aa_profile), + os.path.join(disable_path, self.aa_profile)) + + def setup_aa_profile(self): + """ + Setup an apparmor profile. + The ctxt dictionary will contain the apparmor profile mode and + the apparmor profile name. 
+ Makes calls out to aa-disable, aa-complain, or aa-enforce to setup + the apparmor profile. + """ + self() + if not self.ctxt: + log("Not enabling apparmor Profile") + return + self.install_aa_utils() + cmd = ['aa-{}'.format(self.ctxt['aa_profile_mode'])] + cmd.append(self.ctxt['aa_profile']) + log("Setting up the apparmor profile for {} in {} mode." + "".format(self.ctxt['aa_profile'], self.ctxt['aa_profile_mode'])) + try: + check_call(cmd) + except CalledProcessError as e: + # If aa-profile-mode is set to disabled (default) manual + # disabling is required as the template has been written but + # apparmor is yet unaware of the profile and aa-disable aa-profile + # fails. If aa-disable learns to read profile files first this can + # be removed. + if self.ctxt['aa_profile_mode'] == 'disable': + log("Manually disabling the apparmor profile for {}." + "".format(self.ctxt['aa_profile'])) + self.manually_disable_aa_profile() + return + status_set('blocked', "Apparmor profile {} failed to be set to {}." + "".format(self.ctxt['aa_profile'], + self.ctxt['aa_profile_mode'])) + raise e + + +class MemcacheContext(OSContextGenerator): + """Memcache context + + This context provides options for configuring a local memcache client and + server for both IPv4 and IPv6 + """ + + def __init__(self, package=None): + """ + @param package: Package to examine to extrapolate OpenStack release. + Used when charms have no openstack-origin config + option (ie subordinates) + """ + self.package = package + + def __call__(self): + ctxt = {} + ctxt['use_memcache'] = enable_memcache(package=self.package) + if ctxt['use_memcache']: + # Trusty version of memcached does not support ::1 as a listen + # address so use host file entry instead + release = lsb_release()['DISTRIB_CODENAME'].lower() + if is_ipv6_disabled(): + if CompareHostReleases(release) > 'trusty': + ctxt['memcache_server'] = '127.0.0.1' + else: + ctxt['memcache_server'] = 'localhost' + ctxt['memcache_server_formatted'] = '127.0.0.1' + ctxt['memcache_port'] = '11211' + ctxt['memcache_url'] = '{}:{}'.format( + ctxt['memcache_server_formatted'], + ctxt['memcache_port']) + else: + if CompareHostReleases(release) > 'trusty': + ctxt['memcache_server'] = '::1' + else: + ctxt['memcache_server'] = 'ip6-localhost' + ctxt['memcache_server_formatted'] = '[::1]' + ctxt['memcache_port'] = '11211' + ctxt['memcache_url'] = 'inet6:{}:{}'.format( + ctxt['memcache_server_formatted'], + ctxt['memcache_port']) + return ctxt + + +class EnsureDirContext(OSContextGenerator): + ''' + Serves as a generic context to create a directory as a side-effect. + + Useful for software that supports drop-in files (.d) in conjunction + with config option-based templates. Examples include: + * OpenStack oslo.policy drop-in files; + * systemd drop-in config files; + * other software that supports overriding defaults with .d files + + Another use-case is when a subordinate generates a configuration for + primary to render in a separate directory. + + Some software requires a user to create a target directory to be + scanned for drop-in files with a specific format. This is why this + context is needed to do that before rendering a template. + ''' + + def __init__(self, dirname, **kwargs): + '''Used merely to ensure that a given directory exists.''' + self.dirname = dirname + self.kwargs = kwargs + + def __call__(self): + mkdir(self.dirname, **self.kwargs) + return {} + + +class VersionsContext(OSContextGenerator): + """Context to return the openstack and operating system versions. 
+
+    """
+    def __init__(self, pkg='python-keystone'):
+        """Initialise context.
+
+        :param pkg: Package to extrapolate openstack version from.
+        :type pkg: str
+        """
+        self.pkg = pkg
+
+    def __call__(self):
+        ostack = os_release(self.pkg)
+        osystem = lsb_release()['DISTRIB_CODENAME'].lower()
+        return {
+            'openstack_release': ostack,
+            'operating_system_release': osystem}
+
+
+class LogrotateContext(OSContextGenerator):
+    """Common context generator for logrotate."""
+
+    def __init__(self, location, interval, count):
+        """
+        :param location: Absolute path for the logrotate config file
+        :type location: str
+        :param interval: The interval for the rotations. Valid values are
+                         'daily', 'weekly', 'monthly', 'yearly'
+        :type interval: str
+        :param count: The logrotate count option configures the 'count' times
+                      the log files are rotated before being removed
+        :type count: int
+        """
+        self.location = location
+        self.interval = interval
+        self.count = 'rotate {}'.format(count)
+
+    def __call__(self):
+        ctxt = {
+            'logrotate_logs_location': self.location,
+            'logrotate_interval': self.interval,
+            'logrotate_count': self.count,
+        }
+        return ctxt
+
+
+class HostInfoContext(OSContextGenerator):
+    """Context to provide host information."""
+
+    def __init__(self, use_fqdn_hint_cb=None):
+        """Initialize HostInfoContext
+
+        :param use_fqdn_hint_cb: Callback whose return value is used to
+                                 populate `use_fqdn_hint`
+        :type use_fqdn_hint_cb: Callable[[], bool]
+        """
+        # Store callback used to get hint for whether FQDN should be used
+
+        # Depending on the workload a charm manages, the use of FQDN vs.
+        # shortname may be a deploy-time decision, i.e. behaviour can not
+        # change on charm upgrade or post-deployment configuration change.
+
+        # The hint is passed on as a flag in the context to allow the decision
+        # to be made in the Jinja2 configuration template.
+        self.use_fqdn_hint_cb = use_fqdn_hint_cb
+
+    def _get_canonical_name(self, name=None):
+        """Get the official FQDN of the host
+
+        The implementation of ``socket.getfqdn()`` in the standard Python
+        library does not exhaust all methods of getting the official name
+        of a host; see Python issue https://bugs.python.org/issue5004
+
+        This function mimics the behaviour of a call to ``hostname -f`` to
+        get the official FQDN but returns an empty string if it is
+        unsuccessful.
+
+        :param name: Shortname to get FQDN on
+        :type name: Optional[str]
+        :returns: The official FQDN for host or empty string ('')
+        :rtype: str
+        """
+        name = name or socket.gethostname()
+        fqdn = ''
+
+        if six.PY2:
+            exc = socket.error
+        else:
+            exc = OSError
+
+        try:
+            addrs = socket.getaddrinfo(
+                name, None, 0, socket.SOCK_DGRAM, 0, socket.AI_CANONNAME)
+        except exc:
+            pass
+        else:
+            for addr in addrs:
+                if addr[3]:
+                    if '.' in addr[3]:
+                        fqdn = addr[3]
+                        break
+        return fqdn
+
+    def __call__(self):
+        name = socket.gethostname()
+        ctxt = {
+            'host_fqdn': self._get_canonical_name(name) or name,
+            'host': name,
+            'use_fqdn_hint': (
+                self.use_fqdn_hint_cb() if self.use_fqdn_hint_cb else False)
+        }
+        return ctxt
+
+
+def validate_ovs_use_veth(*args, **kwargs):
+    """Validate OVS use veth setting for dhcp agents
+
+    The ovs_use_veth setting is considered immutable as it will break existing
+    deployments. Historically, we set ovs_use_veth=True in dhcp_agent.ini. It
+    turns out this is no longer necessary. Ideally, all new deployments would
+    have this set to False.
+
+    This function validates that the config value does not conflict with
+    previously deployed settings in dhcp_agent.ini.
+
+    See LP Bug#1831935 for details.
+
+    :returns: Status state and message
+    :rtype: Union[(None, None), (string, string)]
+    """
+    existing_ovs_use_veth = (
+        DHCPAgentContext.get_existing_ovs_use_veth())
+    config_ovs_use_veth = DHCPAgentContext.parse_ovs_use_veth()
+
+    # Check settings are set and not None
+    if existing_ovs_use_veth is not None and config_ovs_use_veth is not None:
+        # Check for mismatch between existing config ini and juju config
+        if existing_ovs_use_veth != config_ovs_use_veth:
+            # Stop the line to avoid breakage
+            msg = (
+                "The existing setting for dhcp_agent.ini ovs_use_veth, {}, "
+                "does not match the juju config setting, {}. This may lead to "
+                "VMs being unable to receive a DHCP IP. Either change the "
+                "juju config setting or dhcp agents may need to be recreated."
+                .format(existing_ovs_use_veth, config_ovs_use_veth))
+            log(msg, ERROR)
+            return (
+                "blocked",
+                "Mismatched existing and configured ovs-use-veth. See log.")
+
+    # Everything is OK
+    return None, None
+
+
+class DHCPAgentContext(OSContextGenerator):
+
+    def __call__(self):
+        """Return the DHCPAgentContext.
+
+        Return all DHCP agent INI related configuration: settings from the
+        unit the OVS unit is attached to (as a subordinate) and the
+        'dns_domain' from the neutron-plugin-api relation (if one is set).
+
+        :returns: Dictionary context
+        :rtype: Dict
+        """
+
+        ctxt = {}
+        dnsmasq_flags = config('dnsmasq-flags')
+        if dnsmasq_flags:
+            ctxt['dnsmasq_flags'] = config_flags_parser(dnsmasq_flags)
+        ctxt['dns_servers'] = config('dns-servers')
+
+        neutron_api_settings = NeutronAPIContext()()
+
+        ctxt['debug'] = config('debug')
+        ctxt['instance_mtu'] = config('instance-mtu')
+        ctxt['ovs_use_veth'] = self.get_ovs_use_veth()
+
+        ctxt['enable_metadata_network'] = config('enable-metadata-network')
+        ctxt['enable_isolated_metadata'] = config('enable-isolated-metadata')
+
+        if neutron_api_settings.get('dns_domain'):
+            ctxt['dns_domain'] = neutron_api_settings.get('dns_domain')
+
+        # Override user supplied config for these plugins as these settings
+        # are mandatory
+        if config('plugin') in ['nvp', 'nsx', 'n1kv']:
+            ctxt['enable_metadata_network'] = True
+            ctxt['enable_isolated_metadata'] = True
+
+        return ctxt
+
+    @staticmethod
+    def get_existing_ovs_use_veth():
+        """Return existing ovs_use_veth setting from dhcp_agent.ini.
+
+        :returns: Boolean value of existing ovs_use_veth setting or None
+        :rtype: Optional[Bool]
+        """
+        DHCP_AGENT_INI = "/etc/neutron/dhcp_agent.ini"
+        existing_ovs_use_veth = None
+        # If there is a dhcp_agent.ini file read the current setting
+        if os.path.isfile(DHCP_AGENT_INI):
+            # config_ini does the right thing and returns None if the setting
+            # is commented.
+            existing_ovs_use_veth = (
+                config_ini(DHCP_AGENT_INI)["DEFAULT"].get("ovs_use_veth"))
+        # Convert to Bool if necessary
+        if isinstance(existing_ovs_use_veth, six.string_types):
+            return bool_from_string(existing_ovs_use_veth)
+        return existing_ovs_use_veth
+
+    @staticmethod
+    def parse_ovs_use_veth():
+        """Parse the ovs-use-veth config setting.
+
+        Parse the string config setting for ovs-use-veth and return a boolean
+        or None.
+
+        bool_from_string will raise a ValueError if the string is not falsy or
+        truthy.
+
+        :raises: ValueError for invalid input
+        :returns: Boolean value of ovs-use-veth or None
+        :rtype: Optional[Bool]
+        """
+        _config = config("ovs-use-veth")
+        # An unset parameter returns None. Just in case, we also check for an
+        # empty string: "". Note that the string "False" is truthy while ""
+        # is falsy, which is exactly the confusion we are trying to avoid.
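+        # Illustrative examples: "true" -> True, "off" -> False,
+        # None or "" -> None (handled by the guard below).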
+ if _config is None or not _config: + # Return None + return + # bool_from_string handles many variations of true and false strings + # as well as upper and lowercases including: + # ['y', 'yes', 'true', 't', 'on', 'n', 'no', 'false', 'f', 'off'] + return bool_from_string(_config) + + def get_ovs_use_veth(self): + """Return correct ovs_use_veth setting for use in dhcp_agent.ini. + + Get the right value from config or existing dhcp_agent.ini file. + Existing has precedence. Attempt to default to "False" without + disrupting existing deployments. Handle existing deployments and + upgrades safely. See LP Bug#1831935 + + :returns: Value to use for ovs_use_veth setting + :rtype: Bool + """ + _existing = self.get_existing_ovs_use_veth() + if _existing is not None: + return _existing + + _config = self.parse_ovs_use_veth() + if _config is None: + # New better default + return False + else: + return _config + + +EntityMac = collections.namedtuple('EntityMac', ['entity', 'mac']) + + +def resolve_pci_from_mapping_config(config_key): + """Resolve local PCI devices from MAC addresses in mapping config. + + Note that this function keeps record of mac->PCI address lookups + in the local unit db as the devices will disappaear from the system + once bound. + + :param config_key: Configuration option key to parse data from + :type config_key: str + :returns: PCI device address to Tuple(entity, mac) map + :rtype: collections.OrderedDict[str,Tuple[str,str]] + """ + devices = pci.PCINetDevices() + resolved_devices = collections.OrderedDict() + db = kv() + # Note that ``parse_data_port_mappings`` returns Dict regardless of input + for mac, entity in parse_data_port_mappings(config(config_key)).items(): + pcidev = devices.get_device_from_mac(mac) + if pcidev: + # NOTE: store mac->pci allocation as post binding + # it disappears from PCIDevices. + db.set(mac, pcidev.pci_address) + db.flush() + + pci_address = db.get(mac) + if pci_address: + resolved_devices[pci_address] = EntityMac(entity, mac) + + return resolved_devices + + +class DPDKDeviceContext(OSContextGenerator): + + def __init__(self, driver_key=None, bridges_key=None, bonds_key=None): + """Initialize DPDKDeviceContext. + + :param driver_key: Key to use when retrieving driver config. + :type driver_key: str + :param bridges_key: Key to use when retrieving bridge config. + :type bridges_key: str + :param bonds_key: Key to use when retrieving bonds config. + :type bonds_key: str + """ + self.driver_key = driver_key or 'dpdk-driver' + self.bridges_key = bridges_key or 'data-port' + self.bonds_key = bonds_key or 'dpdk-bond-mappings' + + def __call__(self): + """Populate context. + + :returns: context + :rtype: Dict[str,Union[str,collections.OrderedDict[str,str]]] + """ + driver = config(self.driver_key) + if driver is None: + return {} + # Resolve PCI devices for both directly used devices (_bridges) + # and devices for use in dpdk bonds (_bonds) + pci_devices = resolve_pci_from_mapping_config(self.bridges_key) + pci_devices.update(resolve_pci_from_mapping_config(self.bonds_key)) + return {'devices': pci_devices, + 'driver': driver} + + +class OVSDPDKDeviceContext(OSContextGenerator): + + def __init__(self, bridges_key=None, bonds_key=None): + """Initialize OVSDPDKDeviceContext. + + :param bridges_key: Key to use when retrieving bridge config. + :type bridges_key: str + :param bonds_key: Key to use when retrieving bonds config. 
+        :type bonds_key: str
+        """
+        self.bridges_key = bridges_key or 'data-port'
+        self.bonds_key = bonds_key or 'dpdk-bond-mappings'
+
+    @staticmethod
+    def _parse_cpu_list(cpulist):
+        """Parses a linux cpulist for a numa node
+
+        :returns: list of cores
+        :rtype: List[int]
+        """
+        cores = []
+        ranges = cpulist.split(',')
+        for cpu_range in ranges:
+            if "-" in cpu_range:
+                cpu_min_max = cpu_range.split('-')
+                cores += range(int(cpu_min_max[0]),
+                               int(cpu_min_max[1]) + 1)
+            else:
+                cores.append(int(cpu_range))
+        return cores
+
+    def _numa_node_cores(self):
+        """Get map of numa node -> cpu core
+
+        :returns: map of numa node -> cpu core
+        :rtype: Dict[str,List[int]]
+        """
+        nodes = {}
+        node_regex = '/sys/devices/system/node/node*'
+        for node in glob.glob(node_regex):
+            # Note: lstrip() strips a *set* of characters; this is safe here
+            # only because node indexes are numeric and no digits appear in
+            # the stripped path.
+            index = node.lstrip('/sys/devices/system/node/node')
+            with open(os.path.join(node, 'cpulist')) as cpulist:
+                nodes[index] = self._parse_cpu_list(cpulist.read().strip())
+        return nodes
+
+    def cpu_mask(self):
+        """Get hex formatted CPU mask
+
+        The mask is based on using the first config:dpdk-socket-cores
+        cores of each NUMA node in the unit.
+
+        :returns: hex formatted CPU mask
+        :rtype: str
+        """
+        num_cores = config('dpdk-socket-cores')
+        mask = 0
+        for cores in self._numa_node_cores().values():
+            for core in cores[:num_cores]:
+                mask = mask | 1 << core
+        return format(mask, '#04x')
+
+    def socket_memory(self):
+        """Formatted list of socket memory configuration per NUMA node
+
+        :returns: socket memory configuration per NUMA node
+        :rtype: str
+        """
+        sm_size = config('dpdk-socket-memory')
+        node_regex = '/sys/devices/system/node/node*'
+        mem_list = [str(sm_size) for _ in glob.glob(node_regex)]
+        if mem_list:
+            return ','.join(mem_list)
+        else:
+            return str(sm_size)
+
+    def devices(self):
+        """List of PCI devices for use by DPDK
+
+        :returns: List of PCI devices for use by DPDK
+        :rtype: collections.OrderedDict[str,str]
+        """
+        pci_devices = resolve_pci_from_mapping_config(self.bridges_key)
+        pci_devices.update(resolve_pci_from_mapping_config(self.bonds_key))
+        return pci_devices
+
+    def _formatted_whitelist(self, flag):
+        """Flag formatted list of devices to whitelist
+
+        :param flag: flag format to use
+        :type flag: str
+        :rtype: str
+        """
+        whitelist = []
+        for device in self.devices():
+            whitelist.append(flag.format(device=device))
+        return ' '.join(whitelist)
+
+    def device_whitelist(self):
+        """Formatted list of devices to whitelist for dpdk
+
+        using the old style '-w' flag
+
+        :returns: devices to whitelist prefixed by '-w '
+        :rtype: str
+        """
+        return self._formatted_whitelist('-w {device}')
+
+    def pci_whitelist(self):
+        """Formatted list of devices to whitelist for dpdk
+
+        using the new style '--pci-whitelist' flag
+
+        :returns: devices to whitelist prefixed by '--pci-whitelist '
+        :rtype: str
+        """
+        return self._formatted_whitelist('--pci-whitelist {device}')
+
+    def __call__(self):
+        """Populate context.
+
+        :returns: context
+        :rtype: Dict[str,Union[bool,str]]
+        """
+        ctxt = {}
+        whitelist = self.device_whitelist()
+        if whitelist:
+            ctxt['dpdk_enabled'] = config('enable-dpdk')
+            ctxt['device_whitelist'] = self.device_whitelist()
+            ctxt['socket_memory'] = self.socket_memory()
+            ctxt['cpu_mask'] = self.cpu_mask()
+        return ctxt
+
+
+class BridgePortInterfaceMap(object):
+    """Build a map of bridge ports and interfaces from charm configuration.
+
+    NOTE: the handling of this detail in the charm is pre-deprecated.
+ + The long term goal is for network connectivity detail to be modelled in + the server provisioning layer (such as MAAS) which in turn will provide + a Netplan YAML description that will be used to drive Open vSwitch. + + Until we get to that reality the charm will need to configure this + detail based on application level configuration options. + + There is a established way of mapping interfaces to ports and bridges + in the ``neutron-openvswitch`` and ``neutron-gateway`` charms and we + will carry that forward. + + The relationship between bridge, port and interface(s). + +--------+ + | bridge | + +--------+ + | + +----------------+ + | port aka. bond | + +----------------+ + | | + +-+ +-+ + |i| |i| + |n| |n| + |t| |t| + |0| |N| + +-+ +-+ + """ + class interface_type(enum.Enum): + """Supported interface types. + + Supported interface types can be found in the ``iface_types`` column + in the ``Open_vSwitch`` table on a running system. + """ + dpdk = 'dpdk' + internal = 'internal' + system = 'system' + + def __str__(self): + """Return string representation of value. + + :returns: string representation of value. + :rtype: str + """ + return self.value + + def __init__(self, bridges_key=None, bonds_key=None, enable_dpdk_key=None, + global_mtu=None): + """Initialize map. + + :param bridges_key: Name of bridge:interface/port map config key + (default: 'data-port') + :type bridges_key: Optional[str] + :param bonds_key: Name of port-name:interface map config key + (default: 'dpdk-bond-mappings') + :type bonds_key: Optional[str] + :param enable_dpdk_key: Name of DPDK toggle config key + (default: 'enable-dpdk') + :type enable_dpdk_key: Optional[str] + :param global_mtu: Set a MTU on all interfaces at map initialization. + + The default is to have Open vSwitch get this from the underlying + interface as set up by bare metal provisioning. + + Note that you can augment the MTU on an individual interface basis + like this: + + ifdatamap = bpi.get_ifdatamap(bridge, port) + ifdatamap = { + port: { + **ifdata, + **{'mtu-request': my_individual_mtu_map[port]}, + } + for port, ifdata in ifdatamap.items() + } + :type global_mtu: Optional[int] + """ + bridges_key = bridges_key or 'data-port' + bonds_key = bonds_key or 'dpdk-bond-mappings' + enable_dpdk_key = enable_dpdk_key or 'enable-dpdk' + self._map = collections.defaultdict( + lambda: collections.defaultdict(dict)) + self._ifname_mac_map = collections.defaultdict(list) + self._mac_ifname_map = {} + self._mac_pci_address_map = {} + + # First we iterate over the list of physical interfaces visible to the + # system and update interface name to mac and mac to interface name map + for ifname in list_nics(): + if not is_phy_iface(ifname): + continue + mac = get_nic_hwaddr(ifname) + self._ifname_mac_map[ifname] = [mac] + self._mac_ifname_map[mac] = ifname + + # check if interface is part of a linux bond + _bond_name = get_bond_master(ifname) + if _bond_name and _bond_name != ifname: + log('Add linux bond "{}" to map for physical interface "{}" ' + 'with mac "{}".'.format(_bond_name, ifname, mac), + level=DEBUG) + # for bonds we want to be able to get a list of the mac + # addresses for the physical interfaces the bond is made up of. 
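+                # Illustrative example: slaves eth0/eth1 of bond0 yield
+                # _ifname_mac_map['bond0'] == [<mac of eth0>, <mac of eth1>].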
+ if self._ifname_mac_map.get(_bond_name): + self._ifname_mac_map[_bond_name].append(mac) + else: + self._ifname_mac_map[_bond_name] = [mac] + + # In light of the pre-deprecation notice in the docstring of this + # class we will expose the ability to configure OVS bonds as a + # DPDK-only feature, but generally use the data structures internally. + if config(enable_dpdk_key): + # resolve PCI address of interfaces listed in the bridges and bonds + # charm configuration options. Note that for already bound + # interfaces the helper will retrieve MAC address from the unit + # KV store as the information is no longer available in sysfs. + _pci_bridge_mac = resolve_pci_from_mapping_config( + bridges_key) + _pci_bond_mac = resolve_pci_from_mapping_config( + bonds_key) + + for pci_address, bridge_mac in _pci_bridge_mac.items(): + if bridge_mac.mac in self._mac_ifname_map: + # if we already have the interface name in our map it is + # visible to the system and therefore not bound to DPDK + continue + ifname = 'dpdk-{}'.format( + hashlib.sha1( + pci_address.encode('UTF-8')).hexdigest()[:7]) + self._ifname_mac_map[ifname] = [bridge_mac.mac] + self._mac_ifname_map[bridge_mac.mac] = ifname + self._mac_pci_address_map[bridge_mac.mac] = pci_address + + for pci_address, bond_mac in _pci_bond_mac.items(): + # for bonds we want to be able to get a list of macs from + # the bond name and also get at the interface name made up + # of the hash of the PCI address + ifname = 'dpdk-{}'.format( + hashlib.sha1( + pci_address.encode('UTF-8')).hexdigest()[:7]) + self._ifname_mac_map[bond_mac.entity].append(bond_mac.mac) + self._mac_ifname_map[bond_mac.mac] = ifname + self._mac_pci_address_map[bond_mac.mac] = pci_address + + config_bridges = config(bridges_key) or '' + for bridge, ifname_or_mac in ( + pair.split(':', 1) + for pair in config_bridges.split()): + if ':' in ifname_or_mac: + try: + ifname = self.ifname_from_mac(ifname_or_mac) + except KeyError: + # The interface is destined for a different unit in the + # deployment. + continue + macs = [ifname_or_mac] + else: + ifname = ifname_or_mac + macs = self.macs_from_ifname(ifname_or_mac) + + portname = ifname + for mac in macs: + try: + pci_address = self.pci_address_from_mac(mac) + iftype = self.interface_type.dpdk + ifname = self.ifname_from_mac(mac) + except KeyError: + pci_address = None + iftype = self.interface_type.system + + self.add_interface( + bridge, portname, ifname, iftype, pci_address, global_mtu) + + if not macs: + # We have not mapped the interface and it is probably some sort + # of virtual interface. Our user have put it in the config with + # a purpose so let's carry out their wish. LP: #1884743 + log('Add unmapped interface from config: name "{}" bridge "{}"' + .format(ifname, bridge), + level=DEBUG) + self.add_interface( + bridge, ifname, ifname, self.interface_type.system, None, + global_mtu) + + def __getitem__(self, key): + """Provide a Dict-like interface, get value of item. + + :param key: Key to look up value from. + :type key: any + :returns: Value + :rtype: any + """ + return self._map.__getitem__(key) + + def __iter__(self): + """Provide a Dict-like interface, iterate over keys. + + :returns: Iterator + :rtype: Iterator[any] + """ + return self._map.__iter__() + + def __len__(self): + """Provide a Dict-like interface, measure the length of internal map. + + :returns: Length + :rtype: int + """ + return len(self._map) + + def items(self): + """Provide a Dict-like interface, iterate over items. 
+ + :returns: Key Value pairs + :rtype: Iterator[any, any] + """ + return self._map.items() + + def keys(self): + """Provide a Dict-like interface, iterate over keys. + + :returns: Iterator + :rtype: Iterator[any] + """ + return self._map.keys() + + def ifname_from_mac(self, mac): + """ + :returns: Name of interface + :rtype: str + :raises: KeyError + """ + return (get_bond_master(self._mac_ifname_map[mac]) or + self._mac_ifname_map[mac]) + + def macs_from_ifname(self, ifname): + """ + :returns: List of hardware address (MAC) of interface + :rtype: List[str] + :raises: KeyError + """ + return self._ifname_mac_map[ifname] + + def pci_address_from_mac(self, mac): + """ + :param mac: Hardware address (MAC) of interface + :type mac: str + :returns: PCI address of device associated with mac + :rtype: str + :raises: KeyError + """ + return self._mac_pci_address_map[mac] + + def add_interface(self, bridge, port, ifname, iftype, + pci_address, mtu_request): + """Add an interface to the map. + + :param bridge: Name of bridge on which the bond will be added + :type bridge: str + :param port: Name of port which will represent the bond on bridge + :type port: str + :param ifname: Name of interface that will make up the bonded port + :type ifname: str + :param iftype: Type of interface + :type iftype: BridgeBondMap.interface_type + :param pci_address: PCI address of interface + :type pci_address: Optional[str] + :param mtu_request: MTU to request for interface + :type mtu_request: Optional[int] + """ + self._map[bridge][port][ifname] = { + 'type': str(iftype), + } + if pci_address: + self._map[bridge][port][ifname].update({ + 'pci-address': pci_address, + }) + if mtu_request is not None: + self._map[bridge][port][ifname].update({ + 'mtu-request': str(mtu_request) + }) + + def get_ifdatamap(self, bridge, port): + """Get structure suitable for charmhelpers.contrib.network.ovs helpers. + + :param bridge: Name of bridge on which the port will be added + :type bridge: str + :param port: Name of port which will represent one or more interfaces + :type port: str + """ + for _bridge, _ports in self.items(): + for _port, _interfaces in _ports.items(): + if _bridge == bridge and _port == port: + ifdatamap = {} + for name, data in _interfaces.items(): + ifdatamap.update({ + name: { + 'type': data['type'], + }, + }) + if data.get('mtu-request') is not None: + ifdatamap[name].update({ + 'mtu_request': data['mtu-request'], + }) + if data.get('pci-address'): + ifdatamap[name].update({ + 'options': { + 'dpdk-devargs': data['pci-address'], + }, + }) + return ifdatamap + + +class BondConfig(object): + """Container and helpers for bond configuration options. + + Data is put into a dictionary and a convenient config get interface is + provided. + """ + + DEFAULT_LACP_CONFIG = { + 'mode': 'balance-tcp', + 'lacp': 'active', + 'lacp-time': 'fast' + } + ALL_BONDS = 'ALL_BONDS' + + BOND_MODES = ['active-backup', 'balance-slb', 'balance-tcp'] + BOND_LACP = ['active', 'passive', 'off'] + BOND_LACP_TIME = ['fast', 'slow'] + + def __init__(self, config_key=None): + """Parse specified configuration option. 
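+
+        The expected value is a space separated list of colon separated
+        ``bond:mode:lacp:lacp-time`` entries; an empty bond name applies the
+        entry to all bonds (``ALL_BONDS``) and empty fields fall back to
+        ``DEFAULT_LACP_CONFIG``. An illustrative value (editor's sketch):
+
+            :balance-slb:off:slow dpdk-bond0:active-backup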
+
+        :param config_key: Configuration key to retrieve data from
+                           (default: ``dpdk-bond-config``)
+        :type config_key: Optional[str]
+        """
+        self.config_key = config_key or 'dpdk-bond-config'
+
+        self.lacp_config = {
+            self.ALL_BONDS: copy.deepcopy(self.DEFAULT_LACP_CONFIG)
+        }
+
+        lacp_config = config(self.config_key)
+        if lacp_config:
+            lacp_config_map = lacp_config.split()
+            for entry in lacp_config_map:
+                bond, entry = entry.partition(':')[0:3:2]
+                if not bond:
+                    bond = self.ALL_BONDS
+
+                mode, entry = entry.partition(':')[0:3:2]
+                if not mode:
+                    mode = self.DEFAULT_LACP_CONFIG['mode']
+                assert mode in self.BOND_MODES, \
+                    "Bond mode {} is invalid".format(mode)
+
+                lacp, entry = entry.partition(':')[0:3:2]
+                if not lacp:
+                    lacp = self.DEFAULT_LACP_CONFIG['lacp']
+                assert lacp in self.BOND_LACP, \
+                    "Bond lacp {} is invalid".format(lacp)
+
+                lacp_time, entry = entry.partition(':')[0:3:2]
+                if not lacp_time:
+                    lacp_time = self.DEFAULT_LACP_CONFIG['lacp-time']
+                assert lacp_time in self.BOND_LACP_TIME, \
+                    "Bond lacp-time {} is invalid".format(lacp_time)
+
+                self.lacp_config[bond] = {
+                    'mode': mode,
+                    'lacp': lacp,
+                    'lacp-time': lacp_time
+                }
+
+    def get_bond_config(self, bond):
+        """Get the LACP configuration for a bond
+
+        :param bond: the bond name
+        :return: a dictionary with the configuration of the bond
+        :rtype: Dict[str,Dict[str,str]]
+        """
+        return self.lacp_config.get(bond, self.lacp_config[self.ALL_BONDS])
+
+    def get_ovs_portdata(self, bond):
+        """Get structure suitable for charmhelpers.contrib.network.ovs helpers.
+
+        :param bond: the bond name
+        :return: a dictionary with the configuration of the bond
+        :rtype: Dict[str,Union[str,Dict[str,str]]]
+        """
+        bond_config = self.get_bond_config(bond)
+        return {
+            'bond_mode': bond_config['mode'],
+            'lacp': bond_config['lacp'],
+            'other_config': {
+                'lacp-time': bond_config['lacp-time'],
+            },
+        }
+
+
+class SRIOVContext(OSContextGenerator):
+    """Provide context for configuring SR-IOV devices."""
+
+    class sriov_config_mode(enum.Enum):
+        """Mode in which SR-IOV is configured.
+
+        The configuration option identified by the ``numvfs_key`` parameter
+        is overloaded and defines in which mode the charm should interpret
+        the other SR-IOV-related configuration options.
+        """
+        auto = 'auto'
+        blanket = 'blanket'
+        explicit = 'explicit'
+
+    def _determine_numvfs(self, device, sriov_numvfs):
+        """Determine number of Virtual Functions (VFs) configured for device.
+
+        :param device: Object describing a PCI Network interface card (NIC).
+        :type device: sriov_netplan_shim.pci.PCINetDevice
+        :param sriov_numvfs: Number of VFs requested for blanket configuration.
+        :type sriov_numvfs: int
+        :returns: Number of VFs to configure for device
+        :rtype: Optional[int]
+        """
+
+        def _get_capped_numvfs(requested):
+            """Get a number of VFs that does not exceed individual card limits.
+
+            Depending on the make and model of NIC, the number of supported
+            VFs varies. Requesting more VFs than a card supports would be a
+            fatal error, so cap the requested number at the total number of
+            VFs each individual card supports.
+
+            :param requested: Number of VFs requested
+            :type requested: int
+            :returns: Number of VFs allowed
+            :rtype: int
+            """
+            actual = min(int(requested), int(device.sriov_totalvfs))
+            if actual < int(requested):
+                log('Requested VFs ({}) too high for device {}. Falling back '
+                    'to value supported by device: {}'
+                    .format(requested, device.interface_name,
+                            device.sriov_totalvfs),
+                    level=WARNING)
+            return actual
+
+        if self._sriov_config_mode == self.sriov_config_mode.auto:
+            # auto-mode
+            #
+            # If device mapping configuration is present, return information
+            # on cards with mapping.
+            #
+            # If no device mapping configuration is present, return information
+            # for all cards.
+            #
+            # The maximum number of VFs supported by card will be used.
+            if (self._sriov_mapped_devices and
+                    device.interface_name not in self._sriov_mapped_devices):
+                log('SR-IOV configured in auto mode: No device mapping for {}'
+                    .format(device.interface_name),
+                    level=DEBUG)
+                return
+            return _get_capped_numvfs(device.sriov_totalvfs)
+        elif self._sriov_config_mode == self.sriov_config_mode.blanket:
+            # blanket-mode
+            #
+            # User has specified a number of VFs that should apply to all
+            # cards with support for VFs.
+            return _get_capped_numvfs(sriov_numvfs)
+        elif self._sriov_config_mode == self.sriov_config_mode.explicit:
+            # explicit-mode
+            #
+            # User has given a list of interface names and associated number of
+            # VFs
+            if device.interface_name not in self._sriov_config_devices:
+                log('SR-IOV configured in explicit mode: No device:numvfs '
+                    'pair for device {}, skipping.'
+                    .format(device.interface_name),
+                    level=DEBUG)
+                return
+            return _get_capped_numvfs(
+                self._sriov_config_devices[device.interface_name])
+        else:
+            raise RuntimeError('This should not be reached')
+
+    def __init__(self, numvfs_key=None, device_mappings_key=None):
+        """Initialize map from PCI devices and configuration options.
+
+        :param numvfs_key: Config key for numvfs (default: 'sriov-numvfs')
+        :type numvfs_key: Optional[str]
+        :param device_mappings_key: Config key for device mappings
+                                    (default: 'sriov-device-mappings')
+        :type device_mappings_key: Optional[str]
+        :raises: RuntimeError
+        """
+        numvfs_key = numvfs_key or 'sriov-numvfs'
+        device_mappings_key = device_mappings_key or 'sriov-device-mappings'
+
+        devices = pci.PCINetDevices()
+        charm_config = config()
+        sriov_numvfs = charm_config.get(numvfs_key) or ''
+        sriov_device_mappings = charm_config.get(device_mappings_key) or ''
+
+        # create list of devices from sriov_device_mappings config option
+        self._sriov_mapped_devices = [
+            pair.split(':', 1)[1]
+            for pair in sriov_device_mappings.split()
+        ]
+
+        # create map of device:numvfs from sriov_numvfs config option
+        self._sriov_config_devices = {
+            ifname: numvfs for ifname, numvfs in (
+                pair.split(':', 1) for pair in sriov_numvfs.split()
+                if ':' in sriov_numvfs)
+        }
+
+        # determine configuration mode from contents of sriov_numvfs
+        if sriov_numvfs == 'auto':
+            self._sriov_config_mode = self.sriov_config_mode.auto
+        elif sriov_numvfs.isdigit():
+            self._sriov_config_mode = self.sriov_config_mode.blanket
+        elif ':' in sriov_numvfs:
+            self._sriov_config_mode = self.sriov_config_mode.explicit
+        else:
+            raise RuntimeError('Unable to determine mode of SR-IOV '
+                               'configuration.')
+
+        self._map = {
+            device.interface_name: self._determine_numvfs(device, sriov_numvfs)
+            for device in devices.pci_devices
+            if device.sriov and
+            self._determine_numvfs(device, sriov_numvfs) is not None
+        }
+
+    def __call__(self):
+        """Provide SR-IOV context.
+
+        :returns: Map interface name: min(configured, max) virtual functions.
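+                  The shape of this map follows the mode selected by the
+                  ``sriov-numvfs`` option; its three accepted forms are,
+                  illustratively (editor's note, hypothetical values):
+
+                      sriov-numvfs: "auto"            # auto: per-card max
+                      sriov-numvfs: "16"              # blanket: 16 VFs/card
+                      sriov-numvfs: "eth0:16 eth1:8"  # explicit per device
+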
+ Example: + { + 'eth0': 16, + 'eth1': 32, + 'eth2': 64, + } + :rtype: Dict[str,int] + """ + return self._map diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/exceptions.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/exceptions.py new file mode 100644 index 0000000000000000000000000000000000000000..f85ae4f4cdbb6567cbdd896338bf88fbf3c9c0ec --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/exceptions.py @@ -0,0 +1,21 @@ +# Copyright 2016 Canonical Ltd +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + + +class OSContextError(Exception): + """Raised when an error occurs during context generation. + + This exception is principally used in contrib.openstack.context + """ + pass diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/files/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/files/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..9df5f746fbdf5491c640a77df907b71817cbc5af --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/files/__init__.py @@ -0,0 +1,16 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# dummy __init__.py to fool syncer into thinking this is a syncable python +# module diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/files/check_haproxy.sh b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/files/check_haproxy.sh new file mode 100755 index 0000000000000000000000000000000000000000..1df55db4816ec51d6732d68ea0a1e25e6f7b116e --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/files/check_haproxy.sh @@ -0,0 +1,34 @@ +#!/bin/bash +#-------------------------------------------- +# This file is managed by Juju +#-------------------------------------------- +# +# Copyright 2009,2012 Canonical Ltd. 
+# Author: Tom Haddon
+
+CRITICAL=0
+NOTACTIVE=''
+LOGFILE=/var/log/nagios/check_haproxy.log
+AUTH=$(grep -r "stats auth" /etc/haproxy/haproxy.cfg | awk 'NR==1{print $3}')
+
+typeset -i N_INSTANCES=0
+for appserver in $(awk '/^\s+server/{print $2}' /etc/haproxy/haproxy.cfg)
+do
+    N_INSTANCES=N_INSTANCES+1
+    output=$(/usr/lib/nagios/plugins/check_http -a ${AUTH} -I 127.0.0.1 -p 8888 -u '/;csv' --regex=",${appserver},.*,UP.*" -e ' 200 OK')
+    if [ $? != 0 ]; then
+        date >> $LOGFILE
+        echo $output >> $LOGFILE
+        /usr/lib/nagios/plugins/check_http -a ${AUTH} -I 127.0.0.1 -p 8888 -u '/;csv' -v | grep ",${appserver}," >> $LOGFILE 2>&1
+        CRITICAL=1
+        NOTACTIVE="${NOTACTIVE} $appserver"
+    fi
+done
+
+if [ $CRITICAL = 1 ]; then
+    echo "CRITICAL:${NOTACTIVE}"
+    exit 2
+fi
+
+echo "OK: All haproxy instances ($N_INSTANCES) looking good"
+exit 0
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/files/check_haproxy_queue_depth.sh b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/files/check_haproxy_queue_depth.sh
new file mode 100755
index 0000000000000000000000000000000000000000..91ce0246e66115994c3f518b36448f70100ecfc7
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/files/check_haproxy_queue_depth.sh
@@ -0,0 +1,30 @@
+#!/bin/bash
+#--------------------------------------------
+# This file is managed by Juju
+#--------------------------------------------
+#
+# Copyright 2009,2012 Canonical Ltd.
+# Author: Tom Haddon
+
+# These should be config options at some stage
+CURRQthrsh=0
+MAXQthrsh=100
+
+AUTH=$(grep -r "stats auth" /etc/haproxy/haproxy.cfg | awk 'NR==1{print $3}')
+
+HAPROXYSTATS=$(/usr/lib/nagios/plugins/check_http -a ${AUTH} -I 127.0.0.1 -p 8888 -u '/;csv' -v)
+
+for BACKEND in $(echo $HAPROXYSTATS| xargs -n1 | grep BACKEND | awk -F , '{print $1}')
+do
+    CURRQ=$(echo "$HAPROXYSTATS" | grep $BACKEND | grep BACKEND | cut -d , -f 3)
+    MAXQ=$(echo "$HAPROXYSTATS" | grep $BACKEND | grep BACKEND | cut -d , -f 4)
+
+    if [[ $CURRQ -gt $CURRQthrsh || $MAXQ -gt $MAXQthrsh ]] ; then
+        echo "CRITICAL: queue depth for $BACKEND - CURRENT:$CURRQ MAX:$MAXQ"
+        exit 2
+    fi
+done
+
+echo "OK: All haproxy queue depths looking good"
+exit 0
+
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/ha/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/ha/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..9b088de84e4b288b551603816fc10eebfa7b1503
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/ha/__init__.py
@@ -0,0 +1,13 @@
+# Copyright 2016 Canonical Ltd
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
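(Editor's note) The two Nagios plugins above both scrape haproxy's CSV
status endpoint. A minimal Python sketch of the same health check, assuming
the same stats listener on 127.0.0.1:8888 and hypothetical credentials:

    import csv
    import io
    import urllib.request

    def haproxy_down_servers(auth=('admin', 'password')):
        """Return (proxy, server) pairs whose status is not UP."""
        mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
        mgr.add_password(None, 'http://127.0.0.1:8888', *auth)
        opener = urllib.request.build_opener(
            urllib.request.HTTPBasicAuthHandler(mgr))
        raw = opener.open('http://127.0.0.1:8888/;csv').read().decode()
        # the CSV header line starts with '# pxname,svname,...'
        rows = csv.DictReader(io.StringIO(raw.lstrip('# ')))
        return [(r['pxname'], r['svname']) for r in rows
                if r['svname'] not in ('FRONTEND', 'BACKEND')
                and not r['status'].startswith('UP')]
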
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/ha/utils.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/ha/utils.py new file mode 100644 index 0000000000000000000000000000000000000000..a5cbdf535d495a09a0b91f41fdda09862e34140d --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/ha/utils.py @@ -0,0 +1,348 @@ +# Copyright 2014-2016 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# +# Copyright 2016 Canonical Ltd. +# +# Authors: +# Openstack Charmers < +# + +""" +Helpers for high availability. +""" + +import hashlib +import json + +import re + +from charmhelpers.core.hookenv import ( + expected_related_units, + log, + relation_set, + charm_name, + config, + status_set, + DEBUG, +) + +from charmhelpers.core.host import ( + lsb_release +) + +from charmhelpers.contrib.openstack.ip import ( + resolve_address, + is_ipv6, +) + +from charmhelpers.contrib.network.ip import ( + get_iface_for_address, + get_netmask_for_address, +) + +from charmhelpers.contrib.hahelpers.cluster import ( + get_hacluster_config +) + +JSON_ENCODE_OPTIONS = dict( + sort_keys=True, + allow_nan=False, + indent=None, + separators=(',', ':'), +) + +VIP_GROUP_NAME = 'grp_{service}_vips' +DNSHA_GROUP_NAME = 'grp_{service}_hostnames' + + +class DNSHAException(Exception): + """Raised when an error occurs setting up DNS HA + """ + + pass + + +def update_dns_ha_resource_params(resources, resource_params, + relation_id=None, + crm_ocf='ocf:maas:dns'): + """ Configure DNS-HA resources based on provided configuration and + update resource dictionaries for the HA relation. + + @param resources: Pointer to dictionary of resources. + Usually instantiated in ha_joined(). + @param resource_params: Pointer to dictionary of resource parameters. + Usually instantiated in ha_joined() + @param relation_id: Relation ID of the ha relation + @param crm_ocf: Corosync Open Cluster Framework resource agent to use for + DNS HA + """ + _relation_data = {'resources': {}, 'resource_params': {}} + update_hacluster_dns_ha(charm_name(), + _relation_data, + crm_ocf) + resources.update(_relation_data['resources']) + resource_params.update(_relation_data['resource_params']) + relation_set(relation_id=relation_id, groups=_relation_data['groups']) + + +def assert_charm_supports_dns_ha(): + """Validate prerequisites for DNS HA + The MAAS client is only available on Xenial or greater + + :raises DNSHAException: if release is < 16.04 + """ + if lsb_release().get('DISTRIB_RELEASE') < '16.04': + msg = ('DNS HA is only supported on 16.04 and greater ' + 'versions of Ubuntu.') + status_set('blocked', msg) + raise DNSHAException(msg) + return True + + +def expect_ha(): + """ Determine if the unit expects to be in HA + + Check juju goal-state if ha relation is expected, check for VIP or dns-ha + settings which indicate the unit should expect to be related to hacluster. 
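+
+    An illustrative gate in a charm hook (editor's sketch; ``is_clustered``
+    lives in charmhelpers.contrib.hahelpers.cluster):
+
+        from charmhelpers.contrib.hahelpers.cluster import is_clustered
+
+        if expect_ha() and not is_clustered():
+            status_set('waiting', 'Unit expects HA; awaiting hacluster')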
+
+    @returns boolean
+    """
+    ha_related_units = []
+    try:
+        ha_related_units = list(expected_related_units(reltype='ha'))
+    except (NotImplementedError, KeyError):
+        pass
+    return len(ha_related_units) > 0 or config('vip') or config('dns-ha')
+
+
+def generate_ha_relation_data(service,
+                              extra_settings=None,
+                              haproxy_enabled=True):
+    """ Generate relation data for ha relation
+
+    Based on configuration options and unit interfaces, generate a json
+    encoded dict of relation data items for the hacluster relation,
+    providing configuration for DNS HA or VIPs + haproxy clone sets.
+
+    Example of supplying additional settings::
+
+        COLO_CONSOLEAUTH = 'inf: res_nova_consoleauth grp_nova_vips'
+        AGENT_CONSOLEAUTH = 'ocf:openstack:nova-consoleauth'
+        AGENT_CA_PARAMS = 'op monitor interval="5s"'
+
+        ha_console_settings = {
+            'colocations': {'vip_consoleauth': COLO_CONSOLEAUTH},
+            'init_services': {'res_nova_consoleauth': 'nova-consoleauth'},
+            'resources': {'res_nova_consoleauth': AGENT_CONSOLEAUTH},
+            'resource_params': {'res_nova_consoleauth': AGENT_CA_PARAMS}}
+        generate_ha_relation_data('nova', extra_settings=ha_console_settings)
+
+
+    @param service: Name of the service being configured
+    @param extra_settings: Dict of additional resource data
+    @returns dict: json encoded data for use with relation_set
+    """
+    _relation_data = {'resources': {}, 'resource_params': {}}
+
+    if haproxy_enabled:
+        _meta = 'meta migration-threshold="INFINITY" failure-timeout="5s"'
+        _haproxy_res = 'res_{}_haproxy'.format(service)
+        _relation_data['resources'] = {_haproxy_res: 'lsb:haproxy'}
+        _relation_data['resource_params'] = {
+            _haproxy_res: '{} op monitor interval="5s"'.format(_meta)
+        }
+        _relation_data['init_services'] = {_haproxy_res: 'haproxy'}
+        _relation_data['clones'] = {
+            'cl_{}_haproxy'.format(service): _haproxy_res
+        }
+
+    if extra_settings:
+        for k, v in extra_settings.items():
+            if _relation_data.get(k):
+                _relation_data[k].update(v)
+            else:
+                _relation_data[k] = v
+
+    if config('dns-ha'):
+        update_hacluster_dns_ha(service, _relation_data)
+    else:
+        update_hacluster_vip(service, _relation_data)
+
+    return {
+        'json_{}'.format(k): json.dumps(v, **JSON_ENCODE_OPTIONS)
+        for k, v in _relation_data.items() if v
+    }
+
+
+def update_hacluster_dns_ha(service, relation_data,
+                            crm_ocf='ocf:maas:dns'):
+    """ Configure DNS-HA resources based on provided configuration
+
+    @param service: Name of the service being configured
+    @param relation_data: Pointer to dictionary of relation data.
+    @param crm_ocf: Corosync Open Cluster Framework resource agent to use for
+                    DNS HA
+    """
+    # Validate the charm environment for DNS HA
+    assert_charm_supports_dns_ha()
+
+    settings = ['os-admin-hostname', 'os-internal-hostname',
+                'os-public-hostname', 'os-access-hostname']
+
+    # Check which DNS settings are set and update dictionaries
+    hostname_group = []
+    for setting in settings:
+        hostname = config(setting)
+        if hostname is None:
+            log('DNS HA: Hostname setting {} is None. Ignoring.'
+                ''.format(setting),
+                DEBUG)
+            continue
+        m = re.search('os-(.+?)-hostname', setting)
+        if m:
+            endpoint_type = m.group(1)
+            # resolve_address's ADDRESS_MAP uses 'int' not 'internal'
+            if endpoint_type == 'internal':
+                endpoint_type = 'int'
+        else:
+            msg = ('Unexpected DNS hostname setting: {}.
' + 'Cannot determine endpoint_type name' + ''.format(setting)) + status_set('blocked', msg) + raise DNSHAException(msg) + + hostname_key = 'res_{}_{}_hostname'.format(service, endpoint_type) + if hostname_key in hostname_group: + log('DNS HA: Resource {}: {} already exists in ' + 'hostname group - skipping'.format(hostname_key, hostname), + DEBUG) + continue + + hostname_group.append(hostname_key) + relation_data['resources'][hostname_key] = crm_ocf + relation_data['resource_params'][hostname_key] = ( + 'params fqdn="{}" ip_address="{}"' + .format(hostname, resolve_address(endpoint_type=endpoint_type, + override=False))) + + if len(hostname_group) >= 1: + log('DNS HA: Hostname group is set with {} as members. ' + 'Informing the ha relation'.format(' '.join(hostname_group)), + DEBUG) + relation_data['groups'] = { + DNSHA_GROUP_NAME.format(service=service): ' '.join(hostname_group) + } + else: + msg = 'DNS HA: Hostname group has no members.' + status_set('blocked', msg) + raise DNSHAException(msg) + + +def get_vip_settings(vip): + """Calculate which nic is on the correct network for the given vip. + + If nic or netmask discovery fail then fallback to using charm supplied + config. If fallback is used this is indicated via the fallback variable. + + @param vip: VIP to lookup nic and cidr for. + @returns (str, str, bool): eg (iface, netmask, fallback) + """ + iface = get_iface_for_address(vip) + netmask = get_netmask_for_address(vip) + fallback = False + if iface is None: + iface = config('vip_iface') + fallback = True + if netmask is None: + netmask = config('vip_cidr') + fallback = True + return iface, netmask, fallback + + +def update_hacluster_vip(service, relation_data): + """ Configure VIP resources based on provided configuration + + @param service: Name of the service being configured + @param relation_data: Pointer to dictionary of relation data. + """ + cluster_config = get_hacluster_config() + vip_group = [] + vips_to_delete = [] + for vip in cluster_config['vip'].split(): + if is_ipv6(vip): + res_vip = 'ocf:heartbeat:IPv6addr' + vip_params = 'ipv6addr' + else: + res_vip = 'ocf:heartbeat:IPaddr2' + vip_params = 'ip' + + iface, netmask, fallback = get_vip_settings(vip) + + vip_monitoring = 'op monitor timeout="20s" interval="10s" depth="0"' + if iface is not None: + # NOTE(jamespage): Delete old VIP resources + # Old style naming encoding iface in name + # does not work well in environments where + # interface/subnet wiring is not consistent + vip_key = 'res_{}_{}_vip'.format(service, iface) + if vip_key in vips_to_delete: + vip_key = '{}_{}'.format(vip_key, vip_params) + vips_to_delete.append(vip_key) + + vip_key = 'res_{}_{}_vip'.format( + service, + hashlib.sha1(vip.encode('UTF-8')).hexdigest()[:7]) + + relation_data['resources'][vip_key] = res_vip + # NOTE(jamespage): + # Use option provided vip params if these where used + # instead of auto-detected values + if fallback: + relation_data['resource_params'][vip_key] = ( + 'params {ip}="{vip}" cidr_netmask="{netmask}" ' + 'nic="{iface}" {vip_monitoring}'.format( + ip=vip_params, + vip=vip, + iface=iface, + netmask=netmask, + vip_monitoring=vip_monitoring)) + else: + # NOTE(jamespage): + # let heartbeat figure out which interface and + # netmask to configure, which works nicely + # when network interface naming is not + # consistent across units. 
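+                # Editor's illustration: for a hypothetical VIP 10.5.100.1
+                # this branch renders a resource_params string like:
+                #     params ip="10.5.100.1" op monitor timeout="20s"
+                #     interval="10s" depth="0"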
+ relation_data['resource_params'][vip_key] = ( + 'params {ip}="{vip}" {vip_monitoring}'.format( + ip=vip_params, + vip=vip, + vip_monitoring=vip_monitoring)) + + vip_group.append(vip_key) + + if vips_to_delete: + try: + relation_data['delete_resources'].extend(vips_to_delete) + except KeyError: + relation_data['delete_resources'] = vips_to_delete + + if len(vip_group) >= 1: + key = VIP_GROUP_NAME.format(service=service) + try: + relation_data['groups'][key] = ' '.join(vip_group) + except KeyError: + relation_data['groups'] = { + key: ' '.join(vip_group) + } diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/ip.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/ip.py new file mode 100644 index 0000000000000000000000000000000000000000..723aebc172e94a5b00c385d9861dd4d45c1bc753 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/ip.py @@ -0,0 +1,197 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +from charmhelpers.core.hookenv import ( + NoNetworkBinding, + config, + unit_get, + service_name, + network_get_primary_address, +) +from charmhelpers.contrib.network.ip import ( + get_address_in_network, + is_address_in_network, + is_ipv6, + get_ipv6_addr, + resolve_network_cidr, +) +from charmhelpers.contrib.hahelpers.cluster import is_clustered + +PUBLIC = 'public' +INTERNAL = 'int' +ADMIN = 'admin' +ACCESS = 'access' + +ADDRESS_MAP = { + PUBLIC: { + 'binding': 'public', + 'config': 'os-public-network', + 'fallback': 'public-address', + 'override': 'os-public-hostname', + }, + INTERNAL: { + 'binding': 'internal', + 'config': 'os-internal-network', + 'fallback': 'private-address', + 'override': 'os-internal-hostname', + }, + ADMIN: { + 'binding': 'admin', + 'config': 'os-admin-network', + 'fallback': 'private-address', + 'override': 'os-admin-hostname', + }, + ACCESS: { + 'binding': 'access', + 'config': 'access-network', + 'fallback': 'private-address', + 'override': 'os-access-hostname', + }, +} + + +def canonical_url(configs, endpoint_type=PUBLIC): + """Returns the correct HTTP URL to this host given the state of HTTPS + configuration, hacluster and charm configuration. + + :param configs: OSTemplateRenderer config templating object to inspect + for a complete https context. + :param endpoint_type: str endpoint type to resolve. + :param returns: str base URL for services on the current service unit. + """ + scheme = _get_scheme(configs) + + address = resolve_address(endpoint_type) + if is_ipv6(address): + address = "[{}]".format(address) + + return '%s://%s' % (scheme, address) + + +def _get_scheme(configs): + """Returns the scheme to use for the url (either http or https) + depending upon whether https is in the configs value. + + :param configs: OSTemplateRenderer config templating object to inspect + for a complete https context. 
+ :returns: either 'http' or 'https' depending on whether https is + configured within the configs context. + """ + scheme = 'http' + if configs and 'https' in configs.complete_contexts(): + scheme = 'https' + return scheme + + +def _get_address_override(endpoint_type=PUBLIC): + """Returns any address overrides that the user has defined based on the + endpoint type. + + Note: this function allows for the service name to be inserted into the + address if the user specifies {service_name}.somehost.org. + + :param endpoint_type: the type of endpoint to retrieve the override + value for. + :returns: any endpoint address or hostname that the user has overridden + or None if an override is not present. + """ + override_key = ADDRESS_MAP[endpoint_type]['override'] + addr_override = config(override_key) + if not addr_override: + return None + else: + return addr_override.format(service_name=service_name()) + + +def resolve_address(endpoint_type=PUBLIC, override=True): + """Return unit address depending on net config. + + If unit is clustered with vip(s) and has net splits defined, return vip on + correct network. If clustered with no nets defined, return primary vip. + + If not clustered, return unit address ensuring address is on configured net + split if one is configured, or a Juju 2.0 extra-binding has been used. + + :param endpoint_type: Network endpoing type + :param override: Accept hostname overrides or not + """ + resolved_address = None + if override: + resolved_address = _get_address_override(endpoint_type) + if resolved_address: + return resolved_address + + vips = config('vip') + if vips: + vips = vips.split() + + net_type = ADDRESS_MAP[endpoint_type]['config'] + net_addr = config(net_type) + net_fallback = ADDRESS_MAP[endpoint_type]['fallback'] + binding = ADDRESS_MAP[endpoint_type]['binding'] + clustered = is_clustered() + + if clustered and vips: + if net_addr: + for vip in vips: + if is_address_in_network(net_addr, vip): + resolved_address = vip + break + else: + # NOTE: endeavour to check vips against network space + # bindings + try: + bound_cidr = resolve_network_cidr( + network_get_primary_address(binding) + ) + for vip in vips: + if is_address_in_network(bound_cidr, vip): + resolved_address = vip + break + except (NotImplementedError, NoNetworkBinding): + # If no net-splits configured and no support for extra + # bindings/network spaces so we expect a single vip + resolved_address = vips[0] + else: + if config('prefer-ipv6'): + fallback_addr = get_ipv6_addr(exc_list=vips)[0] + else: + fallback_addr = unit_get(net_fallback) + + if net_addr: + resolved_address = get_address_in_network(net_addr, fallback_addr) + else: + # NOTE: only try to use extra bindings if legacy network + # configuration is not in use + try: + resolved_address = network_get_primary_address(binding) + except (NotImplementedError, NoNetworkBinding): + resolved_address = fallback_addr + + if resolved_address is None: + raise ValueError("Unable to resolve a suitable IP address based on " + "charm state and configuration. 
(net_type=%s, " + "clustered=%s)" % (net_type, clustered)) + + return resolved_address + + +def get_vip_in_network(network): + matching_vip = None + vips = config('vip') + if vips: + for vip in vips.split(): + if is_address_in_network(network, vip): + matching_vip = vip + return matching_vip diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/keystone.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/keystone.py new file mode 100644 index 0000000000000000000000000000000000000000..d7e02ccd90155710901e444482b589aa264158e6 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/keystone.py @@ -0,0 +1,178 @@ +#!/usr/bin/python +# +# Copyright 2017 Canonical Ltd +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import six +from charmhelpers.fetch import apt_install +from charmhelpers.contrib.openstack.context import IdentityServiceContext +from charmhelpers.core.hookenv import ( + log, + ERROR, +) + + +def get_api_suffix(api_version): + """Return the formatted api suffix for the given version + @param api_version: version of the keystone endpoint + @returns the api suffix formatted according to the given api + version + """ + return 'v2.0' if api_version in (2, "2", "2.0") else 'v3' + + +def format_endpoint(schema, addr, port, api_version): + """Return a formatted keystone endpoint + @param schema: http or https + @param addr: ipv4/ipv6 host of the keystone service + @param port: port of the keystone service + @param api_version: 2 or 3 + @returns a fully formatted keystone endpoint + """ + return '{}://{}:{}/{}/'.format(schema, addr, port, + get_api_suffix(api_version)) + + +def get_keystone_manager(endpoint, api_version, **kwargs): + """Return a keystonemanager for the correct API version + + @param endpoint: the keystone endpoint to point client at + @param api_version: version of the keystone api the client should use + @param kwargs: token or username/tenant/password information + @returns keystonemanager class used for interrogating keystone + """ + if api_version == 2: + return KeystoneManager2(endpoint, **kwargs) + if api_version == 3: + return KeystoneManager3(endpoint, **kwargs) + raise ValueError('No manager found for api version {}'.format(api_version)) + + +def get_keystone_manager_from_identity_service_context(): + """Return a keystonmanager generated from a + instance of charmhelpers.contrib.openstack.context.IdentityServiceContext + @returns keystonamenager instance + """ + context = IdentityServiceContext()() + if not context: + msg = "Identity service context cannot be generated" + log(msg, level=ERROR) + raise ValueError(msg) + + endpoint = format_endpoint(context['service_protocol'], + context['service_host'], + context['service_port'], + context['api_version']) + + if context['api_version'] in (2, "2.0"): + api_version = 2 + else: + api_version = 3 + + return get_keystone_manager(endpoint, api_version, + username=context['admin_user'], + 
password=context['admin_password'], + tenant_name=context['admin_tenant_name']) + + +class KeystoneManager(object): + + def resolve_service_id(self, service_name=None, service_type=None): + """Find the service_id of a given service""" + services = [s._info for s in self.api.services.list()] + + service_name = service_name.lower() + for s in services: + name = s['name'].lower() + if service_type and service_name: + if (service_name == name and service_type == s['type']): + return s['id'] + elif service_name and service_name == name: + return s['id'] + elif service_type and service_type == s['type']: + return s['id'] + return None + + def service_exists(self, service_name=None, service_type=None): + """Determine if the given service exists on the service list""" + return self.resolve_service_id(service_name, service_type) is not None + + +class KeystoneManager2(KeystoneManager): + + def __init__(self, endpoint, **kwargs): + try: + from keystoneclient.v2_0 import client + from keystoneclient.auth.identity import v2 + from keystoneclient import session + except ImportError: + if six.PY2: + apt_install(["python-keystoneclient"], fatal=True) + else: + apt_install(["python3-keystoneclient"], fatal=True) + + from keystoneclient.v2_0 import client + from keystoneclient.auth.identity import v2 + from keystoneclient import session + + self.api_version = 2 + + token = kwargs.get("token", None) + if token: + api = client.Client(endpoint=endpoint, token=token) + else: + auth = v2.Password(username=kwargs.get("username"), + password=kwargs.get("password"), + tenant_name=kwargs.get("tenant_name"), + auth_url=endpoint) + sess = session.Session(auth=auth) + api = client.Client(session=sess) + + self.api = api + + +class KeystoneManager3(KeystoneManager): + + def __init__(self, endpoint, **kwargs): + try: + from keystoneclient.v3 import client + from keystoneclient.auth import token_endpoint + from keystoneclient import session + from keystoneclient.auth.identity import v3 + except ImportError: + if six.PY2: + apt_install(["python-keystoneclient"], fatal=True) + else: + apt_install(["python3-keystoneclient"], fatal=True) + + from keystoneclient.v3 import client + from keystoneclient.auth import token_endpoint + from keystoneclient import session + from keystoneclient.auth.identity import v3 + + self.api_version = 3 + + token = kwargs.get("token", None) + if token: + auth = token_endpoint.Token(endpoint=endpoint, + token=token) + sess = session.Session(auth=auth) + else: + auth = v3.Password(auth_url=endpoint, + user_id=kwargs.get("username"), + password=kwargs.get("password"), + project_id=kwargs.get("tenant_name")) + sess = session.Session(auth=auth) + + self.api = client.Client(session=sess) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/neutron.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/neutron.py new file mode 100644 index 0000000000000000000000000000000000000000..fb5607f3e73159d90236b2d7a4051aa82119e889 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/neutron.py @@ -0,0 +1,359 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# Various utilies for dealing with Neutron and the renaming from Quantum. + +import six +from subprocess import check_output + +from charmhelpers.core.hookenv import ( + config, + log, + ERROR, +) + +from charmhelpers.contrib.openstack.utils import ( + os_release, + CompareOpenStackReleases, +) + + +def headers_package(): + """Ensures correct linux-headers for running kernel are installed, + for building DKMS package""" + kver = check_output(['uname', '-r']).decode('UTF-8').strip() + return 'linux-headers-%s' % kver + + +QUANTUM_CONF_DIR = '/etc/quantum' + + +def kernel_version(): + """ Retrieve the current major kernel version as a tuple e.g. (3, 13) """ + kver = check_output(['uname', '-r']).decode('UTF-8').strip() + kver = kver.split('.') + return (int(kver[0]), int(kver[1])) + + +def determine_dkms_package(): + """ Determine which DKMS package should be used based on kernel version """ + # NOTE: 3.13 kernels have support for GRE and VXLAN native + if kernel_version() >= (3, 13): + return [] + else: + return [headers_package(), 'openvswitch-datapath-dkms'] + + +# legacy + + +def quantum_plugins(): + return { + 'ovs': { + 'config': '/etc/quantum/plugins/openvswitch/' + 'ovs_quantum_plugin.ini', + 'driver': 'quantum.plugins.openvswitch.ovs_quantum_plugin.' + 'OVSQuantumPluginV2', + 'contexts': [], + 'services': ['quantum-plugin-openvswitch-agent'], + 'packages': [determine_dkms_package(), + ['quantum-plugin-openvswitch-agent']], + 'server_packages': ['quantum-server', + 'quantum-plugin-openvswitch'], + 'server_services': ['quantum-server'] + }, + 'nvp': { + 'config': '/etc/quantum/plugins/nicira/nvp.ini', + 'driver': 'quantum.plugins.nicira.nicira_nvp_plugin.' + 'QuantumPlugin.NvpPluginV2', + 'contexts': [], + 'services': [], + 'packages': [], + 'server_packages': ['quantum-server', + 'quantum-plugin-nicira'], + 'server_services': ['quantum-server'] + } + } + + +NEUTRON_CONF_DIR = '/etc/neutron' + + +def neutron_plugins(): + release = os_release('nova-common') + plugins = { + 'ovs': { + 'config': '/etc/neutron/plugins/openvswitch/' + 'ovs_neutron_plugin.ini', + 'driver': 'neutron.plugins.openvswitch.ovs_neutron_plugin.' + 'OVSNeutronPluginV2', + 'contexts': [], + 'services': ['neutron-plugin-openvswitch-agent'], + 'packages': [determine_dkms_package(), + ['neutron-plugin-openvswitch-agent']], + 'server_packages': ['neutron-server', + 'neutron-plugin-openvswitch'], + 'server_services': ['neutron-server'] + }, + 'nvp': { + 'config': '/etc/neutron/plugins/nicira/nvp.ini', + 'driver': 'neutron.plugins.nicira.nicira_nvp_plugin.' 
+ 'NeutronPlugin.NvpPluginV2', + 'contexts': [], + 'services': [], + 'packages': [], + 'server_packages': ['neutron-server', + 'neutron-plugin-nicira'], + 'server_services': ['neutron-server'] + }, + 'nsx': { + 'config': '/etc/neutron/plugins/vmware/nsx.ini', + 'driver': 'vmware', + 'contexts': [], + 'services': [], + 'packages': [], + 'server_packages': ['neutron-server', + 'neutron-plugin-vmware'], + 'server_services': ['neutron-server'] + }, + 'n1kv': { + 'config': '/etc/neutron/plugins/cisco/cisco_plugins.ini', + 'driver': 'neutron.plugins.cisco.network_plugin.PluginV2', + 'contexts': [], + 'services': [], + 'packages': [determine_dkms_package(), + ['neutron-plugin-cisco']], + 'server_packages': ['neutron-server', + 'neutron-plugin-cisco'], + 'server_services': ['neutron-server'] + }, + 'Calico': { + 'config': '/etc/neutron/plugins/ml2/ml2_conf.ini', + 'driver': 'neutron.plugins.ml2.plugin.Ml2Plugin', + 'contexts': [], + 'services': ['calico-felix', + 'bird', + 'neutron-dhcp-agent', + 'nova-api-metadata', + 'etcd'], + 'packages': [determine_dkms_package(), + ['calico-compute', + 'bird', + 'neutron-dhcp-agent', + 'nova-api-metadata', + 'etcd']], + 'server_packages': ['neutron-server', 'calico-control', 'etcd'], + 'server_services': ['neutron-server', 'etcd'] + }, + 'vsp': { + 'config': '/etc/neutron/plugins/nuage/nuage_plugin.ini', + 'driver': 'neutron.plugins.nuage.plugin.NuagePlugin', + 'contexts': [], + 'services': [], + 'packages': [], + 'server_packages': ['neutron-server', 'neutron-plugin-nuage'], + 'server_services': ['neutron-server'] + }, + 'plumgrid': { + 'config': '/etc/neutron/plugins/plumgrid/plumgrid.ini', + 'driver': ('neutron.plugins.plumgrid.plumgrid_plugin' + '.plumgrid_plugin.NeutronPluginPLUMgridV2'), + 'contexts': [], + 'services': [], + 'packages': ['plumgrid-lxc', + 'iovisor-dkms'], + 'server_packages': ['neutron-server', + 'neutron-plugin-plumgrid'], + 'server_services': ['neutron-server'] + }, + 'midonet': { + 'config': '/etc/neutron/plugins/midonet/midonet.ini', + 'driver': 'midonet.neutron.plugin.MidonetPluginV2', + 'contexts': [], + 'services': [], + 'packages': [determine_dkms_package()], + 'server_packages': ['neutron-server', + 'python-neutron-plugin-midonet'], + 'server_services': ['neutron-server'] + } + } + if CompareOpenStackReleases(release) >= 'icehouse': + # NOTE: patch in ml2 plugin for icehouse onwards + plugins['ovs']['config'] = '/etc/neutron/plugins/ml2/ml2_conf.ini' + plugins['ovs']['driver'] = 'neutron.plugins.ml2.plugin.Ml2Plugin' + plugins['ovs']['server_packages'] = ['neutron-server', + 'neutron-plugin-ml2'] + # NOTE: patch in vmware renames nvp->nsx for icehouse onwards + plugins['nvp'] = plugins['nsx'] + if CompareOpenStackReleases(release) >= 'kilo': + plugins['midonet']['driver'] = ( + 'neutron.plugins.midonet.plugin.MidonetPluginV2') + if CompareOpenStackReleases(release) >= 'liberty': + plugins['midonet']['driver'] = ( + 'midonet.neutron.plugin_v1.MidonetPluginV2') + plugins['midonet']['server_packages'].remove( + 'python-neutron-plugin-midonet') + plugins['midonet']['server_packages'].append( + 'python-networking-midonet') + plugins['plumgrid']['driver'] = ( + 'networking_plumgrid.neutron.plugins' + '.plugin.NeutronPluginPLUMgridV2') + plugins['plumgrid']['server_packages'].remove( + 'neutron-plugin-plumgrid') + if CompareOpenStackReleases(release) >= 'mitaka': + plugins['nsx']['server_packages'].remove('neutron-plugin-vmware') + plugins['nsx']['server_packages'].append('python-vmware-nsx') + plugins['nsx']['config'] = 
'/etc/neutron/nsx.ini' + plugins['vsp']['driver'] = ( + 'nuage_neutron.plugins.nuage.plugin.NuagePlugin') + if CompareOpenStackReleases(release) >= 'newton': + plugins['vsp']['config'] = '/etc/neutron/plugins/ml2/ml2_conf.ini' + plugins['vsp']['driver'] = 'neutron.plugins.ml2.plugin.Ml2Plugin' + plugins['vsp']['server_packages'] = ['neutron-server', + 'neutron-plugin-ml2'] + return plugins + + +def neutron_plugin_attribute(plugin, attr, net_manager=None): + manager = net_manager or network_manager() + if manager == 'quantum': + plugins = quantum_plugins() + elif manager == 'neutron': + plugins = neutron_plugins() + else: + log("Network manager '%s' does not support plugins." % (manager), + level=ERROR) + raise Exception + + try: + _plugin = plugins[plugin] + except KeyError: + log('Unrecognised plugin for %s: %s' % (manager, plugin), level=ERROR) + raise Exception + + try: + return _plugin[attr] + except KeyError: + return None + + +def network_manager(): + ''' + Deals with the renaming of Quantum to Neutron in H and any situations + that require compatability (eg, deploying H with network-manager=quantum, + upgrading from G). + ''' + release = os_release('nova-common') + manager = config('network-manager').lower() + + if manager not in ['quantum', 'neutron']: + return manager + + if release in ['essex']: + # E does not support neutron + log('Neutron networking not supported in Essex.', level=ERROR) + raise Exception + elif release in ['folsom', 'grizzly']: + # neutron is named quantum in F and G + return 'quantum' + else: + # ensure accurate naming for all releases post-H + return 'neutron' + + +def parse_mappings(mappings, key_rvalue=False): + """By default mappings are lvalue keyed. + + If key_rvalue is True, the mapping will be reversed to allow multiple + configs for the same lvalue. + """ + parsed = {} + if mappings: + mappings = mappings.split() + for m in mappings: + p = m.partition(':') + + if key_rvalue: + key_index = 2 + val_index = 0 + # if there is no rvalue skip to next + if not p[1]: + continue + else: + key_index = 0 + val_index = 2 + + key = p[key_index].strip() + parsed[key] = p[val_index].strip() + + return parsed + + +def parse_bridge_mappings(mappings): + """Parse bridge mappings. + + Mappings must be a space-delimited list of provider:bridge mappings. + + Returns dict of the form {provider:bridge}. + """ + return parse_mappings(mappings) + + +def parse_data_port_mappings(mappings, default_bridge='br-data'): + """Parse data port mappings. + + Mappings must be a space-delimited list of bridge:port. + + Returns dict of the form {port:bridge} where ports may be mac addresses or + interface names. + """ + + # NOTE(dosaboy): we use rvalue for key to allow multiple values to be + # proposed for since it may be a mac address which will differ + # across units this allowing first-known-good to be chosen. + _mappings = parse_mappings(mappings, key_rvalue=True) + if not _mappings or list(_mappings.values()) == ['']: + if not mappings: + return {} + + # For backwards-compatibility we need to support port-only provided in + # config. + _mappings = {mappings.split()[0]: default_bridge} + + ports = _mappings.keys() + if len(set(ports)) != len(ports): + raise Exception("It is not allowed to have the same port configured " + "on more than one bridge") + + return _mappings + + +def parse_vlan_range_mappings(mappings): + """Parse vlan range mappings. + + Mappings must be a space-delimited list of provider:start:end mappings. + + The start:end range is optional and may be omitted. 
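+
+    An illustrative parse (editor's note, hypothetical provider names):
+
+        >>> parse_vlan_range_mappings('physnet1:1000:2000 physnet2')
+        {'physnet1': ('1000', '2000'), 'physnet2': ('',)}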
+ + Returns dict of the form {provider: (start, end)}. + """ + _mappings = parse_mappings(mappings) + if not _mappings: + return {} + + mappings = {} + for p, r in six.iteritems(_mappings): + mappings[p] = tuple(r.split(':')) + + return mappings diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/policyd.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/policyd.py new file mode 100644 index 0000000000000000000000000000000000000000..f2bb21e9db926bd2c4de8ab3e8d10d0837af563a --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/policyd.py @@ -0,0 +1,801 @@ +# Copyright 2019 Canonical Ltd +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import collections +import contextlib +import os +import six +import shutil +import yaml +import zipfile + +import charmhelpers +import charmhelpers.core.hookenv as hookenv +import charmhelpers.core.host as ch_host + +# Features provided by this module: + +""" +Policy.d helper functions +========================= + +The functions in this module are designed, as a set, to provide an easy-to-use +set of hooks for classic charms to add in /etc//policy.d/ +directory override YAML files. + +(For charms.openstack charms, a mixin class is provided for this +functionality). + +In order to "hook" this functionality into a (classic) charm, two functions are +provided: + + maybe_do_policyd_overrides(openstack_release, + service, + blacklist_paths=none, + blacklist_keys=none, + template_function=none, + restart_handler=none) + + maybe_do_policyd_overrides_on_config_changed(openstack_release, + service, + blacklist_paths=None, + blacklist_keys=None, + template_function=None, + restart_handler=None + +(See the docstrings for details on the parameters) + +The functions should be called from the install and upgrade hooks in the charm. +The `maybe_do_policyd_overrides_on_config_changed` function is designed to be +called on the config-changed hook, in that it does an additional check to +ensure that an already overriden policy.d in an upgrade or install hooks isn't +repeated. + +In order the *enable* this functionality, the charm's install, config_changed, +and upgrade_charm hooks need to be modified, and a new config option (see +below) needs to be added. The README for the charm should also be updated. + +Examples from the keystone charm are: + +@hooks.hook('install.real') +@harden() +def install(): + ... + # call the policy overrides handler which will install any policy overrides + maybe_do_policyd_overrides(os_release('keystone'), 'keystone') + + +@hooks.hook('config-changed') +@restart_on_change(restart_map(), restart_functions=restart_function_map()) +@harden() +def config_changed(): + ... 
+ # call the policy overrides handler which will install any policy overrides + maybe_do_policyd_overrides_on_config_changed(os_release('keystone'), + 'keystone') + +@hooks.hook('upgrade-charm') +@restart_on_change(restart_map(), stopstart=True) +@harden() +def upgrade_charm(): + ... + # call the policy overrides handler which will install any policy overrides + maybe_do_policyd_overrides(os_release('keystone'), 'keystone') + +Status Line +=========== + +The workload status code in charm-helpers has been modified to detect if +policy.d override code has been incorporated into the charm by checking for the +new config variable (in the config.yaml). If it has been, then the workload +status line will automatically show "PO:" at the beginning of the workload +status for that unit/service if the config option is set. If the policy +override is broken, the "PO (broken):" will be shown. No changes to the charm +(apart from those already mentioned) are needed to enable this functionality. +(charms.openstack charms also get this functionality, but please see that +library for further details). +""" + +# The config.yaml for the charm should contain the following for the config +# option: + +""" + use-policyd-override: + type: boolean + default: False + description: | + If True then use the resource file named 'policyd-override' to install + override YAML files in the service's policy.d directory. The resource + file should be a ZIP file containing at least one yaml file with a .yaml + or .yml extension. If False then remove the overrides. +""" + +# The metadata.yaml for the charm should contain the following: +""" +resources: + policyd-override: + type: file + filename: policyd-override.zip + description: The policy.d overrides file +""" + +# The README for the charm should contain the following: +""" +Policy Overrides +---------------- + +This feature allows for policy overrides using the `policy.d` directory. This +is an **advanced** feature and the policies that the OpenStack service supports +should be clearly and unambiguously understood before trying to override, or +add to, the default policies that the service uses. The charm also has some +policy defaults. They should also be understood before being overridden. + +> **Caution**: It is possible to break the system (for tenants and other + services) if policies are incorrectly applied to the service. + +Policy overrides are YAML files that contain rules that will add to, or +override, existing policy rules in the service. The `policy.d` directory is +a place to put the YAML override files. This charm owns the +`/etc/keystone/policy.d` directory, and as such, any manual changes to it will +be overwritten on charm upgrades. + +Overrides are provided to the charm using a Juju resource called +`policyd-override`. The resource is a ZIP file. This file, say +`overrides.zip`, is attached to the charm by: + + + juju attach-resource policyd-override=overrides.zip + +The policy override is enabled in the charm using: + + juju config use-policyd-override=true + +When `use-policyd-override` is `True` the status line of the charm will be +prefixed with `PO:` indicating that policies have been overridden. If the +installation of the policy override YAML files failed for any reason then the +status line will be prefixed with `PO (broken):`. The log file for the charm +will indicate the reason. No policy override files are installed if the `PO +(broken):` is shown. 
The status line indicates that the overrides are broken, +not that the policy for the service has failed. The policy will be the defaults +for the charm and service. + +Policy overrides on one service may affect the functionality of another +service. Therefore, it may be necessary to provide policy overrides for +multiple service charms to achieve a consistent set of policies across the +OpenStack system. The charms for the other services that may need overrides +should be checked to ensure that they support overrides before proceeding. +""" + +POLICYD_VALID_EXTS = ['.yaml', '.yml', '.j2', '.tmpl', '.tpl'] +POLICYD_TEMPLATE_EXTS = ['.j2', '.tmpl', '.tpl'] +POLICYD_RESOURCE_NAME = "policyd-override" +POLICYD_CONFIG_NAME = "use-policyd-override" +POLICYD_SUCCESS_FILENAME = "policyd-override-success" +POLICYD_LOG_LEVEL_DEFAULT = hookenv.INFO +POLICYD_ALWAYS_BLACKLISTED_KEYS = ("admin_required", "cloud_admin") + + +class BadPolicyZipFile(Exception): + + def __init__(self, log_message): + self.log_message = log_message + + def __str__(self): + return self.log_message + + +class BadPolicyYamlFile(Exception): + + def __init__(self, log_message): + self.log_message = log_message + + def __str__(self): + return self.log_message + + +if six.PY2: + BadZipFile = zipfile.BadZipfile +else: + BadZipFile = zipfile.BadZipFile + + +def is_policyd_override_valid_on_this_release(openstack_release): + """Check that the charm is running on at least Ubuntu Xenial, and at + least the queens release. + + :param openstack_release: the release codename that is installed. + :type openstack_release: str + :returns: True if okay + :rtype: bool + """ + # NOTE(ajkavanagh) circular import! This is because the status message + # generation code in utils has to call into this module, but this function + # needs the CompareOpenStackReleases() function. The only way to solve + # this is either to put ALL of this module into utils, or refactor one or + # other of the CompareOpenStackReleases or status message generation code + # into a 3rd module. + import charmhelpers.contrib.openstack.utils as ch_utils + return ch_utils.CompareOpenStackReleases(openstack_release) >= 'queens' + + +def maybe_do_policyd_overrides(openstack_release, + service, + blacklist_paths=None, + blacklist_keys=None, + template_function=None, + restart_handler=None, + user=None, + group=None, + config_changed=False): + """If the config option is set, get the resource file and process it to + enable the policy.d overrides for the service passed. + + The param `openstack_release` is required as the policyd overrides feature + is only supported on openstack_release "queens" or later, and on ubuntu + "xenial" or later. Prior to these versions, this feature is a NOP. + + The optional template_function is a function that accepts a string and has + an opportunity to modify the loaded file prior to it being read by + yaml.safe_load(). This allows the charm to perform "templating" using + charm derived data. + + The param blacklist_paths are paths (that are in the service's policy.d + directory that should not be touched). + + The param blacklist_keys are keys that must not appear in the yaml file. + If they do, then the whole policy.d file fails. + + The yaml file extracted from the resource_file (which is a zipped file) has + its file path reconstructed. This, also, must not match any path in the + black list. + + The param restart_handler is an optional Callable that is called to perform + the service restart if the policy.d file is changed. 
This should normally
+    be None as oslo.policy automatically picks up changes in the policy.d
+    directory.  However, for any service where this is buggy, a
+    restart_handler can be used to force the policy.d files to be re-read.
+
+    If the config_changed param is True, then the handling is slightly
+    different: it will only perform the policyd overrides if the config is
+    True and the success file doesn't exist.  Otherwise, it does nothing as
+    the resource file has already been processed.
+
+    :param openstack_release: The openstack release that is installed.
+    :type openstack_release: str
+    :param service: the service name to construct the policy.d directory for.
+    :type service: str
+    :param blacklist_paths: optional list of paths to leave alone
+    :type blacklist_paths: Union[None, List[str]]
+    :param blacklist_keys: optional list of keys that mustn't appear in the
+                           yaml files
+    :type blacklist_keys: Union[None, List[str]]
+    :param template_function: Optional function that can modify the string
+                              prior to being processed as a Yaml document.
+    :type template_function: Union[None, Callable[[str], str]]
+    :param restart_handler: The function to call if the service should be
+                            restarted.
+    :type restart_handler: Union[None, Callable[[], None]]
+    :param user: The user to create/write files/directories as
+    :type user: Union[None, str]
+    :param group: the group to create/write files/directories as
+    :type group: Union[None, str]
+    :param config_changed: Set to True for the config_changed hook.
+    :type config_changed: bool
+    """
+    _user = service if user is None else user
+    _group = service if group is None else group
+    if not is_policyd_override_valid_on_this_release(openstack_release):
+        return
+    hookenv.log("Running maybe_do_policyd_overrides",
+                level=POLICYD_LOG_LEVEL_DEFAULT)
+    config = hookenv.config()
+    try:
+        if not config.get(POLICYD_CONFIG_NAME, False):
+            clean_policyd_dir_for(service,
+                                  blacklist_paths,
+                                  user=_user,
+                                  group=_group)
+            if (os.path.isfile(_policy_success_file()) and
+                    restart_handler is not None and
+                    callable(restart_handler)):
+                restart_handler()
+            remove_policy_success_file()
+            return
+    except Exception as e:
+        hookenv.log("... ERROR: Exception is: {}".format(str(e)),
+                    level=POLICYD_LOG_LEVEL_DEFAULT)
+        import traceback
+        hookenv.log(traceback.format_exc(), level=POLICYD_LOG_LEVEL_DEFAULT)
+        return
+    # if the policyd overrides have been performed when doing config_changed
+    # just return
+    if config_changed and is_policy_success_file_set():
+        hookenv.log("... already setup, so skipping.",
+                    level=POLICYD_LOG_LEVEL_DEFAULT)
+        return
+    # from now on it should succeed; if it doesn't then the status line will
+    # show as broken.
+    resource_filename = get_policy_resource_filename()
+    restart = process_policy_resource_file(
+        resource_filename, service, blacklist_paths, blacklist_keys,
+        template_function)
+    if restart and restart_handler is not None and callable(restart_handler):
+        restart_handler()
+
+
+@charmhelpers.deprecate("Use maybe_do_policyd_overrides instead")
+def maybe_do_policyd_overrides_on_config_changed(*args, **kwargs):
+    """This function is designed to be called from the config changed hook.
+
+    DEPRECATED: please use maybe_do_policyd_overrides() with the param
+    `config_changed` as `True`.
+
+    See maybe_do_policyd_overrides() for more details on the params.
+    """
+    if 'config_changed' not in kwargs.keys():
+        kwargs['config_changed'] = True
+    return maybe_do_policyd_overrides(*args, **kwargs)
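+
+
+# Example hook wiring for the non-deprecated entry point (an illustrative
+# sketch only: the hook decorators, os_release() call and restart command
+# below are assumptions modelled on the keystone example in the module
+# docstring, not a required API):
+"""
+from charmhelpers.contrib.openstack.policyd import maybe_do_policyd_overrides
+from charmhelpers.core.host import service_restart
+
+@hooks.hook('config-changed')
+def config_changed():
+    ...
+    # installs (or removes) the overrides and, if anything changed, forces
+    # the service to re-read its policy.d directory.
+    maybe_do_policyd_overrides(
+        os_release('keystone'), 'keystone',
+        restart_handler=lambda: service_restart('keystone'),
+        config_changed=True)
+"""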
+
+
+def get_policy_resource_filename():
+    """Function to extract the policy resource filename
+
+    :returns: The filename of the resource, if set, otherwise, if an error
+              occurs, then None is returned.
+    :rtype: Union[str, None]
+    """
+    try:
+        return hookenv.resource_get(POLICYD_RESOURCE_NAME)
+    except Exception:
+        return None
+
+
+@contextlib.contextmanager
+def open_and_filter_yaml_files(filepath, has_subdirs=False):
+    """Validate that the filepath provided is a zip file and contains at
+    least one (.yaml|.yml) file, and that the files are not duplicated when
+    the zip file is flattened.  Note that the yaml files are not checked.
+    This is the first stage in validating the policy zipfile; individual yaml
+    files are not checked for validity or blacklisted keys.
+
+    If the has_subdirs param is True, then the files are flattened to the
+    first directory, and the files in the root are ignored.
+
+    An example of use is:
+
+        with open_and_filter_yaml_files(some_path) as (zfp, g):
+            for zipinfo in g:
+                # do something with zipinfo ...
+
+    :param filepath: a filepath object that can be opened by zipfile
+    :type filepath: Union[AnyStr, os.PathLike[AnyStr]]
+    :param has_subdirs: Keep first level of subdirectories in yaml file.
+    :type has_subdirs: bool
+    :returns: (zfp handle,
+               a list of the (name, ext, filename, ZipInfo object) tuples)
+               as a tuple.
+    :rtype: ContextManager[Tuple[zipfile.ZipFile,
+                                 List[Tuple[str, str, str, zipfile.ZipInfo]]]]
+    :raises: zipfile.BadZipFile
+    :raises: BadPolicyZipFile if duplicated yaml or missing
+    :raises: IOError if the filepath is not found
+    """
+    with zipfile.ZipFile(filepath, 'r') as zfp:
+        # first pass through; check for duplicates and at least one yaml file.
+        names = collections.defaultdict(int)
+        yamlfiles = _yamlfiles(zfp, has_subdirs)
+        for name, _, _, _ in yamlfiles:
+            names[name] += 1
+        # There must be at least 1 yaml file.
+        if len(names.keys()) == 0:
+            raise BadPolicyZipFile("contains no yaml files with {} extensions."
+                                   .format(", ".join(POLICYD_VALID_EXTS)))
+        # There must be no duplicates
+        duplicates = [n for n, c in names.items() if c > 1]
+        if duplicates:
+            raise BadPolicyZipFile("{} have duplicates in the zip file."
+                                   .format(", ".join(duplicates)))
+        # Finally, let's yield the list alongside the open zip handle
+        yield (zfp, yamlfiles)
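+
+
+# Example of the calling convention (a sketch; 'overrides.zip' is built here
+# purely to show the expected layout of a valid resource file):
+"""
+import zipfile
+
+with zipfile.ZipFile('overrides.zip', 'w') as z:
+    z.writestr('99-volume.yaml', 'volume:create: rule:admin_or_owner\n')
+
+with open_and_filter_yaml_files('overrides.zip') as (zfp, files):
+    for name, ext, filename, zipinfo in files:
+        # name='99-volume', ext='.yaml', filename='99-volume.yaml'
+        with zfp.open(zipinfo) as fp:
+            print(name, fp.read())
+"""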
+
+
+def _yamlfiles(zipfile, has_subdirs=False):
+    """Helper to get the yaml files (according to the POLICYD_VALID_EXTS
+    extensions) and the infolist items from a zipfile.
+
+    If the `has_subdirs` param is True, then only yaml files that have a
+    directory component are read, and the first part of the directory
+    component is kept, along with the filename in the name.  e.g. an entry
+    with a filename of:
+
+        compute/someotherdir/override.yaml
+
+    is returned as:
+
+        compute/override, yaml, override.yaml,
+
+    This is to help with the special, additional, processing that the
+    dashboard charm requires.
+
+    :param zipfile: the zipfile to read zipinfo items from
+    :type zipfile: zipfile.ZipFile
+    :param has_subdirs: Keep first level of subdirectories in yaml file.
+    :type has_subdirs: bool
+    :returns: list of (name, ext, filename, info item) for each
+              self-identified yaml file.
+    :rtype: List[Tuple[str, str, str, zipfile.ZipInfo]]
+    """
+    files = []
+    for infolist_item in zipfile.infolist():
+        try:
+            if infolist_item.is_dir():
+                continue
+        except AttributeError:
+            # fallback to "old" way to determine dir entry for pre-py36
+            if infolist_item.filename.endswith('/'):
+                continue
+        _dir, name_ext = os.path.split(infolist_item.filename)
+        name, ext = os.path.splitext(name_ext)
+        if has_subdirs and _dir != "":
+            name = os.path.join(_dir.split(os.path.sep)[0], name)
+        ext = ext.lower()
+        if ext and ext in POLICYD_VALID_EXTS:
+            files.append((name, ext, name_ext, infolist_item))
+    return files
+
+
+def read_and_validate_yaml(stream_or_doc, blacklist_keys=None):
+    """Read, validate and return the (first) yaml document from the stream.
+
+    The doc is read, and checked for a yaml file.  The top-level keys are
+    checked against the blacklist_keys provided.  If there are problems then
+    an Exception is raised.  Otherwise the yaml document is returned as a
+    Python object that can be dumped back as a yaml file on the system.
+
+    The yaml file must only consist of a str:str mapping, and if not then the
+    yaml file is rejected.
+
+    :param stream_or_doc: the file object to read the yaml from
+    :type stream_or_doc: Union[AnyStr, IO[AnyStr]]
+    :param blacklist_keys: Any keys, which if in the yaml file, should cause
+                           an error.
+    :type blacklist_keys: Union[None, List[str]]
+    :returns: the yaml file as a python document
+    :rtype: Dict[str, str]
+    :raises: yaml.YAMLError if there is a problem with the document
+    :raises: BadPolicyYamlFile if file doesn't look right or there are
+             blacklisted keys in the file.
+    """
+    blacklist_keys = blacklist_keys or []
+    # extend, not append: POLICYD_ALWAYS_BLACKLISTED_KEYS is a tuple, and each
+    # of its keys must be matched individually by the intersection test below.
+    blacklist_keys.extend(POLICYD_ALWAYS_BLACKLISTED_KEYS)
+    doc = yaml.safe_load(stream_or_doc)
+    if not isinstance(doc, dict):
+        raise BadPolicyYamlFile("doesn't look like a policy file?")
+    keys = set(doc.keys())
+    blacklisted_keys_present = keys.intersection(blacklist_keys)
+    if blacklisted_keys_present:
+        raise BadPolicyYamlFile("blacklisted keys {} present."
+                                .format(", ".join(blacklisted_keys_present)))
+    if not all(isinstance(k, six.string_types) for k in keys):
+        raise BadPolicyYamlFile("keys in yaml aren't all strings?")
+    # check that the dictionary looks like a mapping of str to str
+    if not all(isinstance(v, six.string_types) for v in doc.values()):
+        raise BadPolicyYamlFile("values in yaml aren't all strings?")
+    return doc
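+
+
+# Behaviour sketch for read_and_validate_yaml() (inputs are illustrative):
+"""
+read_and_validate_yaml('compute:get: rule:admin_or_owner\n')
+# -> {'compute:get': 'rule:admin_or_owner'}
+
+read_and_validate_yaml('admin_required: role:Admin\n')
+# -> raises BadPolicyYamlFile: 'admin_required' is always blacklisted
+
+read_and_validate_yaml('- just\n- a\n- list\n')
+# -> raises BadPolicyYamlFile: not a str:str mapping
+"""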
+
+
+def policyd_dir_for(service):
+    """Return the policy directory for the named service.
+
+    :param service: the service name, e.g. "keystone".
+    :type service: str
+    :returns: the policy.d override directory.
+    :rtype: os.PathLike[str]
+    """
+    return os.path.join("/", "etc", service, "policy.d")
+
+
+def clean_policyd_dir_for(service, keep_paths=None, user=None, group=None):
+    """Clean out the policyd directory except for items that should be kept.
+
+    The keep_paths, if used, should be set to the full path of the files that
+    should be kept in the policyd directory for the service.  Note that the
+    service name is passed in, and then the policyd_dir_for() function is
+    used.  This is so that a coding error doesn't result in a sudden deletion
+    of the charm (say).
+
+    :param service: the service name to use to construct the policy.d dir.
+    :type service: str
+    :param keep_paths: optional list of paths to not delete.
+    :type keep_paths: Union[None, List[str]]
+    :param user: The user to create/write files/directories as
+    :type user: Union[None, str]
+    :param group: the group to create/write files/directories as
+    :type group: Union[None, str]
+    """
+    _user = service if user is None else user
+    _group = service if group is None else group
+    keep_paths = keep_paths or []
+    path = policyd_dir_for(service)
+    hookenv.log("Cleaning path: {}".format(path), level=hookenv.DEBUG)
+    if not os.path.exists(path):
+        ch_host.mkdir(path, owner=_user, group=_group, perms=0o775)
+    _scanner = os.scandir if hasattr(os, 'scandir') else _fallback_scandir
+    for direntry in _scanner(path):
+        # see if the path should be kept.
+        if direntry.path in keep_paths:
+            continue
+        # we remove any directories; it's ours and there shouldn't be any
+        if direntry.is_dir():
+            shutil.rmtree(direntry.path)
+        else:
+            os.remove(direntry.path)
+
+
+def maybe_create_directory_for(path, user, group):
+    """For the filename 'path', ensure that the directory for that path
+    exists.
+
+    Note that if the directory already exists then the permissions are NOT
+    changed.
+
+    :param path: the filename including the path to it.
+    :type path: str
+    :param user: the user to create the directory as
+    :type user: str
+    :param group: the group to create the directory as
+    :type group: str
+    """
+    _dir, _ = os.path.split(path)
+    if not os.path.exists(_dir):
+        ch_host.mkdir(_dir, owner=user, group=group, perms=0o775)
+
+
+def _fallback_scandir(path):
+    """Fallback os.scandir implementation.
+
+    Provide a fallback implementation of os.scandir if this module ever gets
+    used in a py2 or py34 charm.  Uses os.listdir() to get the names in the
+    path, and then mocks the is_dir() function using os.path.isdir() to check
+    for a directory.
+
+    :param path: the path to list the directories for
+    :type path: str
+    :returns: Generator that provides _FBDirectory objects
+    :rtype: Iterator[_FBDirectory]
+    """
+    for f in os.listdir(path):
+        # join with the parent path so that .path and .is_dir() behave like
+        # os.scandir's DirEntry regardless of the current working directory.
+        yield _FBDirectory(os.path.join(path, f))
+
+
+class _FBDirectory(object):
+    """Mock a scandir Directory object with enough to use in
+    clean_policyd_dir_for
+    """
+
+    def __init__(self, path):
+        self.path = path
+
+    def is_dir(self):
+        return os.path.isdir(self.path)
+
+
+def path_for_policy_file(service, name):
+    """Return the full path for a policy.d file that will be written to the
+    service's policy.d directory.
+
+    It is constructed using policyd_dir_for(), the name and the ".yaml"
+    extension.
+
+    For horizon, for example, it's a bit more complicated.  The name param is
+    actually "override_service_dir/a_name", where the directory component
+    needs to be one of the allowed horizon override services.  This
+    translation and check is done in the _yamlfiles() function.
+
+    :param service: the service name
+    :type service: str
+    :param name: the name for the policy override
+    :type name: str
+    :returns: the full path name for the file
+    :rtype: os.PathLike[str]
+    """
+    return os.path.join(policyd_dir_for(service), name + ".yaml")
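+
+
+# Path construction sketch (return values shown are illustrative):
+"""
+policyd_dir_for('keystone')
+# -> '/etc/keystone/policy.d'
+
+path_for_policy_file('keystone', '99-overrides')
+# -> '/etc/keystone/policy.d/99-overrides.yaml'
+
+# With a subdir component (as used by the dashboard charm):
+path_for_policy_file('openstack-dashboard', 'compute/99-overrides')
+# -> '/etc/openstack-dashboard/policy.d/compute/99-overrides.yaml'
+"""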
+
+
+def _policy_success_file():
+    """Return the file name for a successful drop of policy.d overrides.
+
+    :returns: the path name for the file.
+    :rtype: str
+    """
+    return os.path.join(hookenv.charm_dir(), POLICYD_SUCCESS_FILENAME)
+
+
+def remove_policy_success_file():
+    """Remove the file that indicates successful policyd override."""
+    try:
+        os.remove(_policy_success_file())
+    except Exception:
+        pass
+
+
+def set_policy_success_file():
+    """Set the file that indicates successful policyd override."""
+    open(_policy_success_file(), "w").close()
+
+
+def is_policy_success_file_set():
+    """Returns True if the policy success file has been set.
+
+    This indicates that policies are overridden and working properly.
+
+    :returns: True if the policy file is set
+    :rtype: bool
+    """
+    return os.path.isfile(_policy_success_file())
+
+
+def policyd_status_message_prefix():
+    """Return the prefix str for the status line.
+
+    "PO:" indicating that the policy overrides are in place, or "PO (broken):"
+    if the policy is supposed to be working but there is no success file.
+
+    :returns: the prefix
+    :rtype: str
+    """
+    if is_policy_success_file_set():
+        return "PO:"
+    return "PO (broken):"
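+
+
+# Sketch of how a charm might fold the prefix into its workload status (the
+# assess_status() helper here is hypothetical; only the prefix call and the
+# config option name come from this module):
+"""
+def assess_status():
+    message = "Unit is ready"
+    if hookenv.config(POLICYD_CONFIG_NAME):
+        message = "{} {}".format(policyd_status_message_prefix(), message)
+    hookenv.status_set('active', message)
+"""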
+
+
+def process_policy_resource_file(resource_file,
+                                 service,
+                                 blacklist_paths=None,
+                                 blacklist_keys=None,
+                                 template_function=None,
+                                 preserve_topdir=False,
+                                 preprocess_filename=None,
+                                 user=None,
+                                 group=None):
+    """Process the resource file (which should contain at least one yaml
+    file) and write those files to the service's policy.d directory.
+
+    The optional template_function is a function that accepts a python
+    string and has an opportunity to modify the document
+    prior to it being read by the yaml.safe_load() function and written to
+    disk.  Note that this function does *not* say how the templating is done
+    - this is up to the charm to implement its chosen method.
+
+    The param blacklist_paths are paths (that are in the service's policy.d
+    directory) that should not be touched.
+
+    The param blacklist_keys are keys that must not appear in the yaml file.
+    If they do, then the whole policy.d file fails.
+
+    The yaml file extracted from the resource_file (which is a zipped file)
+    has its file path reconstructed.  This, also, must not match any path in
+    the blacklist.
+
+    The yaml filename can be modified in two ways.  If the `preserve_topdir`
+    param is True, then files will be flattened to the top dir.  This allows
+    for creating sets of files that can be grouped into a single level tree
+    structure.
+
+    Secondly, if the `preprocess_filename` param is not None and callable()
+    then the name is passed to that function for preprocessing before being
+    converted to the end location.  This is to allow munging of the filename
+    prior to being tested for a blacklist path.
+
+    If any error occurs, then the policy.d directory is cleared, the error is
+    written to the log, and the status line will eventually show as failed.
+
+    :param resource_file: The zipped file to open and extract yaml files
+                          from.
+    :type resource_file: Union[AnyStr, os.PathLike[AnyStr]]
+    :param service: the service name to construct the policy.d directory for.
+    :type service: str
+    :param blacklist_paths: optional list of paths to leave alone
+    :type blacklist_paths: Union[None, List[str]]
+    :param blacklist_keys: optional list of keys that mustn't appear in the
+                           yaml files
+    :type blacklist_keys: Union[None, List[str]]
+    :param template_function: Optional function that can modify the yaml
+                              document.
+    :type template_function: Union[None, Callable[[AnyStr], AnyStr]]
+    :param preserve_topdir: Keep the toplevel subdir
+    :type preserve_topdir: bool
+    :param preprocess_filename: Optional function to use to process filenames
+                                extracted from the resource file.
+    :type preprocess_filename: Union[None, Callable[[AnyStr], AnyStr]]
+    :param user: The user to create/write files/directories as
+    :type user: Union[None, str]
+    :param group: the group to create/write files/directories as
+    :type group: Union[None, str]
+    :returns: True if the processing was successful, False if not.
+    :rtype: bool
+    """
+    hookenv.log("Running process_policy_resource_file", level=hookenv.DEBUG)
+    blacklist_paths = blacklist_paths or []
+    completed = False
+    _preprocess = None
+    if preprocess_filename is not None and callable(preprocess_filename):
+        _preprocess = preprocess_filename
+    _user = service if user is None else user
+    _group = service if group is None else group
+    try:
+        with open_and_filter_yaml_files(
+                resource_file, preserve_topdir) as (zfp, gen):
+            # first clear out the policy.d directory and clear success
+            remove_policy_success_file()
+            clean_policyd_dir_for(service,
+                                  blacklist_paths,
+                                  user=_user,
+                                  group=_group)
+            for name, ext, filename, zipinfo in gen:
+                # See if the name should be preprocessed.
+                if _preprocess is not None:
+                    name = _preprocess(name)
+                # construct a name for the output file.
+                yaml_filename = path_for_policy_file(service, name)
+                if yaml_filename in blacklist_paths:
+                    raise BadPolicyZipFile("policy.d name {} is blacklisted"
+                                           .format(yaml_filename))
+                with zfp.open(zipinfo) as fp:
+                    doc = fp.read()
+                    # if template_function is not None, then offer the
+                    # document to the template function
+                    if ext in POLICYD_TEMPLATE_EXTS:
+                        if (template_function is None or not
+                                callable(template_function)):
+                            raise BadPolicyZipFile(
+                                "Template {} but no template_function is "
+                                "available".format(filename))
+                        doc = template_function(doc)
+                yaml_doc = read_and_validate_yaml(doc, blacklist_keys)
+                # we may have to create the directory
+                maybe_create_directory_for(yaml_filename, _user, _group)
+                ch_host.write_file(yaml_filename,
+                                   yaml.dump(yaml_doc).encode('utf-8'),
+                                   _user,
+                                   _group)
+        # Everything worked, so we mark up a success.
+        completed = True
+    except (BadZipFile, BadPolicyZipFile, BadPolicyYamlFile) as e:
+        hookenv.log("Processing {} failed: {}".format(resource_file, str(e)),
+                    level=POLICYD_LOG_LEVEL_DEFAULT)
+    except IOError as e:
+        # technically this shouldn't happen; it would be a programming error
+        # as the filename comes from Juju and thus, should exist.
+        hookenv.log(
+            "File {} failed with IOError.
This really shouldn't happen" + " -- error: {}".format(resource_file, str(e)), + level=POLICYD_LOG_LEVEL_DEFAULT) + except Exception as e: + import traceback + hookenv.log("General Exception({}) during policyd processing" + .format(str(e)), + level=POLICYD_LOG_LEVEL_DEFAULT) + hookenv.log(traceback.format_exc()) + finally: + if not completed: + hookenv.log("Processing {} failed: cleaning policy.d directory" + .format(resource_file), + level=POLICYD_LOG_LEVEL_DEFAULT) + clean_policyd_dir_for(service, + blacklist_paths, + user=_user, + group=_group) + else: + # touch the success filename + hookenv.log("policy.d overrides installed.", + level=POLICYD_LOG_LEVEL_DEFAULT) + set_policy_success_file() + return completed diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/ssh_migrations.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/ssh_migrations.py new file mode 100644 index 0000000000000000000000000000000000000000..96b9f71d42d1c81539f78b8e1c4761f81d84c304 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/ssh_migrations.py @@ -0,0 +1,412 @@ +# Copyright 2018 Canonical Ltd +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import os +import subprocess + +from charmhelpers.core.hookenv import ( + ERROR, + log, + relation_get, +) +from charmhelpers.contrib.network.ip import ( + is_ipv6, + ns_query, +) +from charmhelpers.contrib.openstack.utils import ( + get_hostname, + get_host_ip, + is_ip, +) + +NOVA_SSH_DIR = '/etc/nova/compute_ssh/' + + +def ssh_directory_for_unit(application_name, user=None): + """Return the directory used to store ssh assets for the application. + + :param application_name: Name of application eg nova-compute-something + :type application_name: str + :param user: The user that the ssh asserts are for. + :type user: str + :returns: Fully qualified directory path. + :rtype: str + """ + if user: + application_name = "{}_{}".format(application_name, user) + _dir = os.path.join(NOVA_SSH_DIR, application_name) + for d in [NOVA_SSH_DIR, _dir]: + if not os.path.isdir(d): + os.mkdir(d) + for f in ['authorized_keys', 'known_hosts']: + f = os.path.join(_dir, f) + if not os.path.isfile(f): + open(f, 'w').close() + return _dir + + +def known_hosts(application_name, user=None): + """Return the known hosts file for the application. + + :param application_name: Name of application eg nova-compute-something + :type application_name: str + :param user: The user that the ssh asserts are for. + :type user: str + :returns: Fully qualified path to file. + :rtype: str + """ + return os.path.join( + ssh_directory_for_unit(application_name, user), + 'known_hosts') + + +def authorized_keys(application_name, user=None): + """Return the authorized keys file for the application. + + :param application_name: Name of application eg nova-compute-something + :type application_name: str + :param user: The user that the ssh asserts are for. 
+ :type user: str + :returns: Fully qualified path to file. + :rtype: str + """ + return os.path.join( + ssh_directory_for_unit(application_name, user), + 'authorized_keys') + + +def ssh_known_host_key(host, application_name, user=None): + """Return the first entry in known_hosts for host. + + :param host: hostname to lookup in file. + :type host: str + :param application_name: Name of application eg nova-compute-something + :type application_name: str + :param user: The user that the ssh asserts are for. + :type user: str + :returns: Host key + :rtype: str or None + """ + cmd = [ + 'ssh-keygen', + '-f', known_hosts(application_name, user), + '-H', + '-F', + host] + try: + # The first line of output is like '# Host xx found: line 1 type RSA', + # which should be excluded. + output = subprocess.check_output(cmd) + except subprocess.CalledProcessError as e: + # RC of 1 seems to be legitimate for most ssh-keygen -F calls. + if e.returncode == 1: + output = e.output + else: + raise + output = output.strip() + + if output: + # Bug #1500589 cmd has 0 rc on precise if entry not present + lines = output.split('\n') + if len(lines) >= 1: + return lines[0] + + return None + + +def remove_known_host(host, application_name, user=None): + """Remove the entry in known_hosts for host. + + :param host: hostname to lookup in file. + :type host: str + :param application_name: Name of application eg nova-compute-something + :type application_name: str + :param user: The user that the ssh asserts are for. + :type user: str + """ + log('Removing SSH known host entry for compute host at %s' % host) + cmd = ['ssh-keygen', '-f', known_hosts(application_name, user), '-R', host] + subprocess.check_call(cmd) + + +def is_same_key(key_1, key_2): + """Extract the key from two host entries and compare them. + + :param key_1: Host key + :type key_1: str + :param key_2: Host key + :type key_2: str + """ + # The key format get will be like '|1|2rUumCavEXWVaVyB5uMl6m85pZo=|Cp' + # 'EL6l7VTY37T/fg/ihhNb/GPgs= ssh-rsa AAAAB', we only need to compare + # the part start with 'ssh-rsa' followed with '= ', because the hash + # value in the beginning will change each time. + k_1 = key_1.split('= ')[1] + k_2 = key_2.split('= ')[1] + return k_1 == k_2 + + +def add_known_host(host, application_name, user=None): + """Add the given host key to the known hosts file. + + :param host: host name + :type host: str + :param application_name: Name of application eg nova-compute-something + :type application_name: str + :param user: The user that the ssh asserts are for. + :type user: str + """ + cmd = ['ssh-keyscan', '-H', '-t', 'rsa', host] + try: + remote_key = subprocess.check_output(cmd).strip() + except Exception as e: + log('Could not obtain SSH host key from %s' % host, level=ERROR) + raise e + + current_key = ssh_known_host_key(host, application_name, user) + if current_key and remote_key: + if is_same_key(remote_key, current_key): + log('Known host key for compute host %s up to date.' % host) + return + else: + remove_known_host(host, application_name, user) + + log('Adding SSH host key to known hosts for compute node at %s.' % host) + with open(known_hosts(application_name, user), 'a') as out: + out.write("{}\n".format(remote_key)) + + +def ssh_authorized_key_exists(public_key, application_name, user=None): + """Check if given key is in the authorized_key file. + + :param public_key: Public key. 
+    :type public_key: str
+    :param application_name: Name of application eg nova-compute-something
+    :type application_name: str
+    :param user: The user that the ssh asserts are for.
+    :type user: str
+    :returns: Whether given key is in the authorized_key file.
+    :rtype: boolean
+    """
+    with open(authorized_keys(application_name, user)) as keys:
+        return ('%s' % public_key) in keys.read()
+
+
+def add_authorized_key(public_key, application_name, user=None):
+    """Add given key to the authorized_key file.
+
+    :param public_key: Public key.
+    :type public_key: str
+    :param application_name: Name of application eg nova-compute-something
+    :type application_name: str
+    :param user: The user that the ssh asserts are for.
+    :type user: str
+    """
+    with open(authorized_keys(application_name, user), 'a') as keys:
+        keys.write("{}\n".format(public_key))
+
+
+def ssh_compute_add_host_and_key(public_key, hostname, private_address,
+                                 application_name, user=None):
+    """Add a compute node's ssh details to the local cache.
+
+    Collect various hostname variations and add the corresponding host keys
+    to the local known hosts file.  Finally, add the supplied public key to
+    the authorized_key file.
+
+    :param public_key: Public key.
+    :type public_key: str
+    :param hostname: Hostname to collect host keys from.
+    :type hostname: str
+    :param private_address: Corresponding private address for hostname
+    :type private_address: str
+    :param application_name: Name of application eg nova-compute-something
+    :type application_name: str
+    :param user: The user that the ssh asserts are for.
+    :type user: str
+    """
+    # If remote compute node hands us a hostname, ensure we have a
+    # known hosts entry for its IP, hostname and FQDN.
+    hosts = [private_address]
+
+    if not is_ipv6(private_address):
+        if hostname:
+            hosts.append(hostname)
+
+        if is_ip(private_address):
+            hn = get_hostname(private_address)
+            if hn:
+                hosts.append(hn)
+                short = hn.split('.')[0]
+                if ns_query(short):
+                    hosts.append(short)
+        else:
+            hosts.append(get_host_ip(private_address))
+            short = private_address.split('.')[0]
+            if ns_query(short):
+                hosts.append(short)
+
+    for host in list(set(hosts)):
+        add_known_host(host, application_name, user)
+
+    if not ssh_authorized_key_exists(public_key, application_name, user):
+        log('Saving SSH authorized key for compute host at %s.' %
+            private_address)
+        add_authorized_key(public_key, application_name, user)
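+
+
+# Illustrative call (all values are hypothetical):
+"""
+ssh_compute_add_host_and_key(
+    public_key='ssh-rsa AAAAB3... nova@compute-1',
+    hostname='compute-1.maas',
+    private_address='10.0.0.5',
+    application_name='nova-compute',
+    user='nova')
+# known_hosts gains entries for 10.0.0.5, compute-1.maas and, if the short
+# name resolves in DNS, compute-1; authorized_keys gains the public key.
+"""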
+
+
+def ssh_compute_add(public_key, application_name, rid=None, unit=None,
+                    user=None):
+    """Add a compute node's ssh details to the local cache.
+
+    Collect various hostname variations and add the corresponding host keys
+    to the local known hosts file.  Finally, add the supplied public key to
+    the authorized_key file.
+
+    :param public_key: Public key.
+    :type public_key: str
+    :param application_name: Name of application eg nova-compute-something
+    :type application_name: str
+    :param rid: Relation id of the relation between this charm and the app.
+                If none is supplied it is assumed it's the relation relating
+                to the current hook context.
+    :type rid: str
+    :param unit: Unit to add ssh asserts for; if none is supplied it is
+                 assumed it's the unit relating to the current hook context.
+    :type unit: str
+    :param user: The user that the ssh asserts are for.
+    :type user: str
+    """
+    relation_data = relation_get(rid=rid, unit=unit)
+    ssh_compute_add_host_and_key(
+        public_key,
+        relation_data.get('hostname'),
+        relation_data.get('private-address'),
+        application_name,
+        user=user)
+
+
+def ssh_known_hosts_lines(application_name, user=None):
+    """Return contents of known_hosts file for given application.
+
+    :param application_name: Name of application eg nova-compute-something
+    :type application_name: str
+    :param user: The user that the ssh asserts are for.
+    :type user: str
+    """
+    known_hosts_list = []
+    with open(known_hosts(application_name, user)) as hosts:
+        for hosts_line in hosts:
+            if hosts_line.rstrip():
+                known_hosts_list.append(hosts_line.rstrip())
+    return known_hosts_list
+
+
+def ssh_authorized_keys_lines(application_name, user=None):
+    """Return contents of authorized_keys file for given application.
+
+    :param application_name: Name of application eg nova-compute-something
+    :type application_name: str
+    :param user: The user that the ssh asserts are for.
+    :type user: str
+    """
+    authorized_keys_list = []
+
+    with open(authorized_keys(application_name, user)) as keys:
+        for authkey_line in keys:
+            if authkey_line.rstrip():
+                authorized_keys_list.append(authkey_line.rstrip())
+    return authorized_keys_list
+
+
+def ssh_compute_remove(public_key, application_name, user=None):
+    """Remove given public key from authorized_keys file.
+
+    :param public_key: Public key.
+    :type public_key: str
+    :param application_name: Name of application eg nova-compute-something
+    :type application_name: str
+    :param user: The user that the ssh asserts are for.
+    :type user: str
+    """
+    if not (os.path.isfile(authorized_keys(application_name, user)) or
+            os.path.isfile(known_hosts(application_name, user))):
+        return
+
+    # pass the user through so that the per-user authorized_keys file for
+    # this application is the one that is read and rewritten.
+    keys = ssh_authorized_keys_lines(application_name, user=user)
+    keys = [k.strip() for k in keys]
+
+    if public_key not in keys:
+        return
+
+    # filter out every copy of the key rather than mutating the list while
+    # iterating over it.
+    keys = [k for k in keys if k != public_key]
+
+    with open(authorized_keys(application_name, user), 'w') as _keys:
+        keys = '\n'.join(keys)
+        if not keys.endswith('\n'):
+            keys += '\n'
+        _keys.write(keys)
+
+
+def get_ssh_settings(application_name, user=None):
+    """Retrieve the known host entries and public keys for application
+
+    Retrieve the known host entries and public keys for application for all
+    units of the given application related to this application for the
+    app + user combination.
+
+    :param application_name: Name of application eg nova-compute-something
+    :type application_name: str
+    :param user: The user that the ssh asserts are for.
+    :type user: str
+    :returns: Public keys + host keys for all units for app + user combination.
+ :rtype: dict + """ + settings = {} + keys = {} + prefix = '' + if user: + prefix = '{}_'.format(user) + + for i, line in enumerate(ssh_known_hosts_lines( + application_name=application_name, user=user)): + settings['{}known_hosts_{}'.format(prefix, i)] = line + if settings: + settings['{}known_hosts_max_index'.format(prefix)] = len( + settings.keys()) + + for i, line in enumerate(ssh_authorized_keys_lines( + application_name=application_name, user=user)): + keys['{}authorized_keys_{}'.format(prefix, i)] = line + if keys: + keys['{}authorized_keys_max_index'.format(prefix)] = len(keys.keys()) + settings.update(keys) + return settings + + +def get_all_user_ssh_settings(application_name): + """Retrieve the known host entries and public keys for application + + Retrieve the known host entries and public keys for application for all + units of the given application related to this application for root user + and nova user. + + :param application_name: Name of application eg nova-compute-something + :type application_name: str + :returns: Public keys + host keys for all units for app + user combination. + :rtype: dict + """ + settings = get_ssh_settings(application_name) + settings.update(get_ssh_settings(application_name, user='nova')) + return settings diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/templates/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/templates/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..9df5f746fbdf5491c640a77df907b71817cbc5af --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/templates/__init__.py @@ -0,0 +1,16 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# dummy __init__.py to fool syncer into thinking this is a syncable python +# module diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/templates/ceph.conf b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/templates/ceph.conf new file mode 100644 index 0000000000000000000000000000000000000000..a11ce8ab85654a4d838e448f36fe3698b59f9531 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/templates/ceph.conf @@ -0,0 +1,24 @@ +############################################################################### +# [ WARNING ] +# ceph configuration file maintained by Juju +# local changes may be overwritten. 
+############################################################################### +[global] +{% if auth -%} +auth_supported = {{ auth }} +keyring = /etc/ceph/$cluster.$name.keyring +mon host = {{ mon_hosts }} +{% endif -%} +log to syslog = {{ use_syslog }} +err to syslog = {{ use_syslog }} +clog to syslog = {{ use_syslog }} +{% if rbd_features %} +rbd default features = {{ rbd_features }} +{% endif %} + +[client] +{% if rbd_client_cache_settings -%} +{% for key, value in rbd_client_cache_settings.items() -%} +{{ key }} = {{ value }} +{% endfor -%} +{%- endif %} diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/templates/git.upstart b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/templates/git.upstart new file mode 100644 index 0000000000000000000000000000000000000000..4bed404bc01087c4dec6a44a56d17ed122e1d1e3 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/templates/git.upstart @@ -0,0 +1,17 @@ +description "{{ service_description }}" +author "Juju {{ service_name }} Charm " + +start on runlevel [2345] +stop on runlevel [!2345] + +respawn + +exec start-stop-daemon --start --chuid {{ user_name }} \ + --chdir {{ start_dir }} --name {{ process_name }} \ + --exec {{ executable_name }} -- \ + {% for config_file in config_files -%} + --config-file={{ config_file }} \ + {% endfor -%} + {% if log_file -%} + --log-file={{ log_file }} + {% endif -%} diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/templates/haproxy.cfg b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/templates/haproxy.cfg new file mode 100644 index 0000000000000000000000000000000000000000..d36af2aa86f91b2ee1249594fffd106843dd0676 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/templates/haproxy.cfg @@ -0,0 +1,77 @@ +global + log /var/lib/haproxy/dev/log local0 + log /var/lib/haproxy/dev/log local1 notice + maxconn 20000 + user haproxy + group haproxy + spread-checks 0 + stats socket /var/run/haproxy/admin.sock mode 600 level admin + stats timeout 2m + +defaults + log global + mode tcp + option tcplog + option dontlognull + retries 3 +{%- if haproxy_queue_timeout %} + timeout queue {{ haproxy_queue_timeout }} +{%- else %} + timeout queue 9000 +{%- endif %} +{%- if haproxy_connect_timeout %} + timeout connect {{ haproxy_connect_timeout }} +{%- else %} + timeout connect 9000 +{%- endif %} +{%- if haproxy_client_timeout %} + timeout client {{ haproxy_client_timeout }} +{%- else %} + timeout client 90000 +{%- endif %} +{%- if haproxy_server_timeout %} + timeout server {{ haproxy_server_timeout }} +{%- else %} + timeout server 90000 +{%- endif %} + +listen stats + bind {{ local_host }}:{{ stat_port }} + mode http + stats enable + stats hide-version + stats realm Haproxy\ Statistics + stats uri / + stats auth admin:{{ stat_password }} + +{% if frontends -%} +{% for service, ports in service_ports.items() -%} +frontend tcp-in_{{ service }} + bind *:{{ ports[0] }} + {% if ipv6_enabled -%} + bind :::{{ ports[0] }} + {% endif -%} + {% for frontend in frontends -%} + acl net_{{ frontend }} dst {{ frontends[frontend]['network'] }} + use_backend {{ service }}_{{ frontend }} if net_{{ frontend }} + {% endfor -%} + default_backend {{ service }}_{{ default_backend }} + +{% for frontend in frontends -%} +backend {{ service }}_{{ frontend }} + balance leastconn + {% if 
backend_options -%} + {% if backend_options[service] -%} + {% for option in backend_options[service] -%} + {% for key, value in option.items() -%} + {{ key }} {{ value }} + {% endfor -%} + {% endfor -%} + {% endif -%} + {% endif -%} + {% for unit, address in frontends[frontend]['backends'].items() -%} + server {{ unit }} {{ address }}:{{ ports[1] }} check + {% endfor %} +{% endfor -%} +{% endfor -%} +{% endif -%} diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/templates/logrotate b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/templates/logrotate new file mode 100644 index 0000000000000000000000000000000000000000..b2900d09a4ec2d04152ed7ce25bdc7346c349675 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/templates/logrotate @@ -0,0 +1,9 @@ +/var/log/{{ logrotate_logs_location }}/*.log { + {{ logrotate_interval }} + {{ logrotate_count }} + compress + delaycompress + missingok + notifempty + copytruncate +} diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/templates/memcached.conf b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/templates/memcached.conf new file mode 100644 index 0000000000000000000000000000000000000000..26cb037c72beccdb1a4f9269844abdbac2406369 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/templates/memcached.conf @@ -0,0 +1,53 @@ +############################################################################### +# [ WARNING ] +# memcached configuration file maintained by Juju +# local changes may be overwritten. +############################################################################### + +# memcached default config file +# 2003 - Jay Bonci +# This configuration file is read by the start-memcached script provided as +# part of the Debian GNU/Linux distribution. + +# Run memcached as a daemon. This command is implied, and is not needed for the +# daemon to run. See the README.Debian that comes with this package for more +# information. +-d + +# Log memcached's output to /var/log/memcached +logfile /var/log/memcached.log + +# Be verbose +# -v + +# Be even more verbose (print client commands as well) +# -vv + +# Start with a cap of 64 megs of memory. It's reasonable, and the daemon default +# Note that the daemon will grow to this size, but does not start out holding this much +# memory +-m 64 + +# Default connection port is 11211 +-p {{ memcache_port }} + +# Run the daemon as root. The start-memcached will default to running as root if no +# -u command is present in this config file +-u memcache + +# Specify which IP address to listen on. The default is to listen on all IP addresses +# This parameter is one of the only security measures that memcached has, so make sure +# it's listening on a firewalled interface. +-l {{ memcache_server }} + +# Limit the number of simultaneous incoming connections. The daemon default is 1024 +# -c 1024 + +# Lock down all paged memory. 
Consult with the README and homepage before you do this
+# -k
+
+# Return error when memory is exhausted (rather than removing items)
+# -M
+
+# Maximize core file limit
+# -r
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/templates/openstack_https_frontend b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/templates/openstack_https_frontend
new file mode 100644
index 0000000000000000000000000000000000000000..f614b3fa71928b615a1f058c6928d3c98bed9575
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/templates/openstack_https_frontend
@@ -0,0 +1,29 @@
+{% if endpoints -%}
+{% for ext_port in ext_ports -%}
+Listen {{ ext_port }}
+{% endfor -%}
+{% for address, endpoint, ext, int in endpoints -%}
+<VirtualHost {{ address }}:{{ ext }}>
+    ServerName {{ endpoint }}
+    SSLEngine on
+    SSLProtocol +TLSv1 +TLSv1.1 +TLSv1.2
+    SSLCipherSuite HIGH:!RC4:!MD5:!aNULL:!eNULL:!EXP:!LOW:!MEDIUM
+    SSLCertificateFile /etc/apache2/ssl/{{ namespace }}/cert_{{ endpoint }}
+    # See LP 1484489 - this is to support <= 2.4.7 and >= 2.4.8
+    SSLCertificateChainFile /etc/apache2/ssl/{{ namespace }}/cert_{{ endpoint }}
+    SSLCertificateKeyFile /etc/apache2/ssl/{{ namespace }}/key_{{ endpoint }}
+    ProxyPass / http://localhost:{{ int }}/
+    ProxyPassReverse / http://localhost:{{ int }}/
+    ProxyPreserveHost on
+    RequestHeader set X-Forwarded-Proto "https"
+</VirtualHost>
+{% endfor -%}
+<Proxy *>
+    Order deny,allow
+    Allow from all
+</Proxy>
+<Location />
+    Order allow,deny
+    Allow from all
+</Location>
+{% endif -%}
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/templates/openstack_https_frontend.conf b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/templates/openstack_https_frontend.conf
new file mode 100644
index 0000000000000000000000000000000000000000..f614b3fa71928b615a1f058c6928d3c98bed9575
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/templates/openstack_https_frontend.conf
@@ -0,0 +1,29 @@
+{% if endpoints -%}
+{% for ext_port in ext_ports -%}
+Listen {{ ext_port }}
+{% endfor -%}
+{% for address, endpoint, ext, int in endpoints -%}
+<VirtualHost {{ address }}:{{ ext }}>
+    ServerName {{ endpoint }}
+    SSLEngine on
+    SSLProtocol +TLSv1 +TLSv1.1 +TLSv1.2
+    SSLCipherSuite HIGH:!RC4:!MD5:!aNULL:!eNULL:!EXP:!LOW:!MEDIUM
+    SSLCertificateFile /etc/apache2/ssl/{{ namespace }}/cert_{{ endpoint }}
+    # See LP 1484489 - this is to support <= 2.4.7 and >= 2.4.8
+    SSLCertificateChainFile /etc/apache2/ssl/{{ namespace }}/cert_{{ endpoint }}
+    SSLCertificateKeyFile /etc/apache2/ssl/{{ namespace }}/key_{{ endpoint }}
+    ProxyPass / http://localhost:{{ int }}/
+    ProxyPassReverse / http://localhost:{{ int }}/
+    ProxyPreserveHost on
+    RequestHeader set X-Forwarded-Proto "https"
+</VirtualHost>
+{% endfor -%}
+<Proxy *>
+    Order deny,allow
+    Allow from all
+</Proxy>
+<Location />
+    Order allow,deny
+    Allow from all
+</Location>
+{% endif -%}
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/templates/section-keystone-authtoken b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/templates/section-keystone-authtoken
new file mode 100644
index 0000000000000000000000000000000000000000..5dcebe7c863728b040337616b492296f536b3ef3
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/templates/section-keystone-authtoken
@@ -0,0 +1,12 @@
+{% if auth_host -%}
+[keystone_authtoken]
+auth_uri = {{ service_protocol }}://{{ service_host
}}:{{ service_port }} +auth_url = {{ auth_protocol }}://{{ auth_host }}:{{ auth_port }} +auth_plugin = password +project_domain_id = default +user_domain_id = default +project_name = {{ admin_tenant_name }} +username = {{ admin_user }} +password = {{ admin_password }} +signing_dir = {{ signing_dir }} +{% endif -%} diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/templates/section-keystone-authtoken-legacy b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/templates/section-keystone-authtoken-legacy new file mode 100644 index 0000000000000000000000000000000000000000..9356b2be4e7dabe089b4d7d39793146bb92f3f48 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/templates/section-keystone-authtoken-legacy @@ -0,0 +1,10 @@ +{% if auth_host -%} +[keystone_authtoken] +# Juno specific config (Bug #1557223) +auth_uri = {{ service_protocol }}://{{ service_host }}:{{ service_port }}/{{ service_admin_prefix }} +identity_uri = {{ auth_protocol }}://{{ auth_host }}:{{ auth_port }} +admin_tenant_name = {{ admin_tenant_name }} +admin_user = {{ admin_user }} +admin_password = {{ admin_password }} +signing_dir = {{ signing_dir }} +{% endif -%} diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/templates/section-keystone-authtoken-mitaka b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/templates/section-keystone-authtoken-mitaka new file mode 100644 index 0000000000000000000000000000000000000000..c281868b16a885cd01a234984974af0349a5d242 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/templates/section-keystone-authtoken-mitaka @@ -0,0 +1,22 @@ +{% if auth_host -%} +[keystone_authtoken] +auth_type = password +{% if api_version == "3" -%} +auth_uri = {{ service_protocol }}://{{ service_host }}:{{ service_port }}/v3 +auth_url = {{ auth_protocol }}://{{ auth_host }}:{{ auth_port }}/v3 +project_domain_name = {{ admin_domain_name }} +user_domain_name = {{ admin_domain_name }} +{% else -%} +auth_uri = {{ service_protocol }}://{{ service_host }}:{{ service_port }} +auth_url = {{ auth_protocol }}://{{ auth_host }}:{{ auth_port }} +project_domain_name = default +user_domain_name = default +{% endif -%} +project_name = {{ admin_tenant_name }} +username = {{ admin_user }} +password = {{ admin_password }} +signing_dir = {{ signing_dir }} +{% if use_memcache == true %} +memcached_servers = {{ memcache_url }} +{% endif -%} +{% endif -%} diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/templates/section-keystone-authtoken-v3only b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/templates/section-keystone-authtoken-v3only new file mode 100644 index 0000000000000000000000000000000000000000..d26a91fe1f00e2b12d094be727a53eddc87b829d --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/templates/section-keystone-authtoken-v3only @@ -0,0 +1,9 @@ +{% if auth_host -%} +[keystone_authtoken] +{% for option_name, option_value in keystone_authtoken.items() -%} +{{ option_name }} = {{ option_value }} +{% endfor -%} +{% if use_memcache == true %} +memcached_servers = {{ memcache_url }} +{% endif -%} +{% endif -%} diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/templates/section-oslo-cache 
b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/templates/section-oslo-cache new file mode 100644 index 0000000000000000000000000000000000000000..e056a32aafe32413cefb40b9414d064f2b0c8fd2 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/templates/section-oslo-cache @@ -0,0 +1,6 @@ +[cache] +{% if memcache_url %} +enabled = true +backend = oslo_cache.memcache_pool +memcache_servers = {{ memcache_url }} +{% endif %} diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/templates/section-oslo-messaging-rabbit b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/templates/section-oslo-messaging-rabbit new file mode 100644 index 0000000000000000000000000000000000000000..bed2216aba7217022ded17dec4cdb0871f513b40 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/templates/section-oslo-messaging-rabbit @@ -0,0 +1,10 @@ +[oslo_messaging_rabbit] +{% if rabbitmq_ha_queues -%} +rabbit_ha_queues = True +{% endif -%} +{% if rabbit_ssl_port -%} +ssl = True +{% endif -%} +{% if rabbit_ssl_ca -%} +ssl_ca_file = {{ rabbit_ssl_ca }} +{% endif -%} diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/templates/section-oslo-messaging-rabbit-ocata b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/templates/section-oslo-messaging-rabbit-ocata new file mode 100644 index 0000000000000000000000000000000000000000..365f43757719b2de3c601ea6a9752dac8a8b3545 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/templates/section-oslo-messaging-rabbit-ocata @@ -0,0 +1,10 @@ +[oslo_messaging_rabbit] +{% if rabbitmq_ha_queues -%} +rabbit_ha_queues = True +{% endif -%} +{% if rabbit_ssl_port -%} +rabbit_use_ssl = True +{% endif -%} +{% if rabbit_ssl_ca -%} +ssl_ca_file = {{ rabbit_ssl_ca }} +{% endif -%} diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/templates/section-oslo-middleware b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/templates/section-oslo-middleware new file mode 100644 index 0000000000000000000000000000000000000000..dd73230a42aa037582989979c1bc8132d30b9b38 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/templates/section-oslo-middleware @@ -0,0 +1,5 @@ +[oslo_middleware] + +# Bug #1758675 +enable_proxy_headers_parsing = true + diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/templates/section-oslo-notifications b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/templates/section-oslo-notifications new file mode 100644 index 0000000000000000000000000000000000000000..71c7eb068eace94e8986d7a868bded236f75128c --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/templates/section-oslo-notifications @@ -0,0 +1,15 @@ +{% if transport_url -%} +[oslo_messaging_notifications] +driver = {{ oslo_messaging_driver }} +transport_url = {{ transport_url }} +{% if send_notifications_to_logs %} +driver = log +{% endif %} +{% if notification_topics -%} +topics = {{ notification_topics }} +{% endif -%} +{% if notification_format -%} +[notifications] +notification_format = {{ notification_format }} +{% endif -%} +{% endif -%} 
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/templates/section-placement b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/templates/section-placement new file mode 100644 index 0000000000000000000000000000000000000000..97724bdb5af6e0352d0b920600d7cbbc8318fa7c --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/templates/section-placement @@ -0,0 +1,19 @@ +[placement] +{% if auth_host -%} +auth_url = {{ auth_protocol }}://{{ auth_host }}:{{ auth_port }} +auth_type = password +{% if api_version == "3" -%} +project_domain_name = {{ admin_domain_name }} +user_domain_name = {{ admin_domain_name }} +{% else -%} +project_domain_name = default +user_domain_name = default +{% endif -%} +project_name = {{ admin_tenant_name }} +username = {{ admin_user }} +password = {{ admin_password }} +{% endif -%} +{% if region -%} +os_region_name = {{ region }} +{% endif -%} +randomize_allocation_candidates = true diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/templates/section-rabbitmq-oslo b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/templates/section-rabbitmq-oslo new file mode 100644 index 0000000000000000000000000000000000000000..b444c9c99bd179eaa0be31af462576e4ca9c45b5 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/templates/section-rabbitmq-oslo @@ -0,0 +1,22 @@ +{% if rabbitmq_host or rabbitmq_hosts -%} +[oslo_messaging_rabbit] +rabbit_userid = {{ rabbitmq_user }} +rabbit_virtual_host = {{ rabbitmq_virtual_host }} +rabbit_password = {{ rabbitmq_password }} +{% if rabbitmq_hosts -%} +rabbit_hosts = {{ rabbitmq_hosts }} +{% if rabbitmq_ha_queues -%} +rabbit_ha_queues = True +rabbit_durable_queues = False +{% endif -%} +{% else -%} +rabbit_host = {{ rabbitmq_host }} +{% endif -%} +{% if rabbit_ssl_port -%} +rabbit_use_ssl = True +rabbit_port = {{ rabbit_ssl_port }} +{% if rabbit_ssl_ca -%} +kombu_ssl_ca_certs = {{ rabbit_ssl_ca }} +{% endif -%} +{% endif -%} +{% endif -%} diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/templates/section-zeromq b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/templates/section-zeromq new file mode 100644 index 0000000000000000000000000000000000000000..95f1a76ce87f9babdcba58da79c68d34c2776eea --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/templates/section-zeromq @@ -0,0 +1,14 @@ +{% if zmq_host -%} +# ZeroMQ configuration (restart-nonce: {{ zmq_nonce }}) +rpc_backend = zmq +rpc_zmq_host = {{ zmq_host }} +{% if zmq_redis_address -%} +rpc_zmq_matchmaker = redis +matchmaker_heartbeat_freq = 15 +matchmaker_heartbeat_ttl = 30 +[matchmaker_redis] +host = {{ zmq_redis_address }} +{% else -%} +rpc_zmq_matchmaker = ring +{% endif -%} +{% endif -%} diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/templates/vendor_data.json b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/templates/vendor_data.json new file mode 100644 index 0000000000000000000000000000000000000000..904f612a7f74490d4508920400d18d493d056477 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/templates/vendor_data.json @@ -0,0 +1 @@ +{{ vendor_data_json }} \ No newline at 
end of file
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/templates/wsgi-openstack-api.conf b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/templates/wsgi-openstack-api.conf
new file mode 100644
index 0000000000000000000000000000000000000000..23b62a385283e6b8f1a6af0fcebccac747031b26
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/templates/wsgi-openstack-api.conf
@@ -0,0 +1,91 @@
+# Configuration file maintained by Juju. Local changes may be overwritten.
+
+{% if port -%}
+Listen {{ port }}
+{% endif -%}
+
+{% if admin_port -%}
+Listen {{ admin_port }}
+{% endif -%}
+
+{% if public_port -%}
+Listen {{ public_port }}
+{% endif -%}
+
+{% if port -%}
+<VirtualHost *:{{ port }}>
+    WSGIDaemonProcess {{ service_name }} processes={{ processes }} threads={{ threads }} user={{ user }} group={{ group }} \
+                      display-name=%{GROUP}
+    WSGIProcessGroup {{ service_name }}
+    WSGIScriptAlias / {{ script }}
+    WSGIApplicationGroup %{GLOBAL}
+    WSGIPassAuthorization On
+    <IfVersion >= 2.4>
+      ErrorLogFormat "%{cu}t %M"
+    </IfVersion>
+    ErrorLog /var/log/apache2/{{ service_name }}_error.log
+    CustomLog /var/log/apache2/{{ service_name }}_access.log combined
+
+    <Directory /usr/bin>
+        <IfVersion >= 2.4>
+            Require all granted
+        </IfVersion>
+        <IfVersion < 2.4>
+            Order allow,deny
+            Allow from all
+        </IfVersion>
+    </Directory>
+</VirtualHost>
+{% endif -%}
+
+{% if admin_port -%}
+<VirtualHost *:{{ admin_port }}>
+    WSGIDaemonProcess {{ service_name }}-admin processes={{ admin_processes }} threads={{ threads }} user={{ user }} group={{ group }} \
+                      display-name=%{GROUP}
+    WSGIProcessGroup {{ service_name }}-admin
+    WSGIScriptAlias / {{ admin_script }}
+    WSGIApplicationGroup %{GLOBAL}
+    WSGIPassAuthorization On
+    <IfVersion >= 2.4>
+      ErrorLogFormat "%{cu}t %M"
+    </IfVersion>
+    ErrorLog /var/log/apache2/{{ service_name }}_error.log
+    CustomLog /var/log/apache2/{{ service_name }}_access.log combined
+
+    <Directory /usr/bin>
+        <IfVersion >= 2.4>
+            Require all granted
+        </IfVersion>
+        <IfVersion < 2.4>
+            Order allow,deny
+            Allow from all
+        </IfVersion>
+    </Directory>
+</VirtualHost>
+{% endif -%}
+
+{% if public_port -%}
+<VirtualHost *:{{ public_port }}>
+    WSGIDaemonProcess {{ service_name }}-public processes={{ public_processes }} threads={{ threads }} user={{ user }} group={{ group }} \
+                      display-name=%{GROUP}
+    WSGIProcessGroup {{ service_name }}-public
+    WSGIScriptAlias / {{ public_script }}
+    WSGIApplicationGroup %{GLOBAL}
+    WSGIPassAuthorization On
+    <IfVersion >= 2.4>
+      ErrorLogFormat "%{cu}t %M"
+    </IfVersion>
+    ErrorLog /var/log/apache2/{{ service_name }}_error.log
+    CustomLog /var/log/apache2/{{ service_name }}_access.log combined
+
+    <Directory /usr/bin>
+        <IfVersion >= 2.4>
+            Require all granted
+        </IfVersion>
+        <IfVersion < 2.4>
+            Order allow,deny
+            Allow from all
+        </IfVersion>
+    </Directory>
+</VirtualHost>
+{% endif -%}
+
+{% if port -%}
+Listen {{ port }}
+{% endif -%}
+
+{% if admin_port -%}
+Listen {{ admin_port }}
+{% endif -%}
+
+{% if public_port -%}
+Listen {{ public_port }}
+{% endif -%}
+
+{% if port -%}
+<VirtualHost *:{{ port }}>
+    WSGIDaemonProcess {{ service_name }} processes={{ processes }} threads={{ threads }} user={{ user }} group={{ group }} \
+                      display-name=%{GROUP}
+    WSGIProcessGroup {{ service_name }}
+    WSGIScriptAlias / {{ script }}
+    WSGIApplicationGroup %{GLOBAL}
+    WSGIPassAuthorization On
+    <IfVersion >= 2.4>
+      ErrorLogFormat "%{cu}t %M"
+    </IfVersion>
+    ErrorLog /var/log/apache2/{{ service_name }}_error.log
+    CustomLog /var/log/apache2/{{ service_name }}_access.log combined
+
+    <Directory /usr/bin>
+        <IfVersion >= 2.4>
+            Require all granted
+        </IfVersion>
+        <IfVersion < 2.4>
+            Order allow,deny
+            Allow from all
+        </IfVersion>
+    </Directory>
+</VirtualHost>
+{% endif -%}
+
+{% if admin_port -%}
+<VirtualHost *:{{ admin_port }}>
+    WSGIDaemonProcess {{ service_name }}-admin processes={{ admin_processes }} threads={{ threads }} user={{ user }} group={{ group }} \
+                      display-name=%{GROUP}
+    WSGIProcessGroup {{ service_name }}-admin
+    WSGIScriptAlias / {{ admin_script }}
+    WSGIApplicationGroup %{GLOBAL}
+    WSGIPassAuthorization On
+    <IfVersion >= 2.4>
+      ErrorLogFormat "%{cu}t %M"
+    </IfVersion>
+    ErrorLog /var/log/apache2/{{ service_name }}_error.log
+    CustomLog /var/log/apache2/{{ service_name }}_access.log combined
+
+    <Directory /usr/bin>
+        <IfVersion >= 2.4>
+            Require all granted
+        </IfVersion>
+        <IfVersion < 2.4>
+            Order allow,deny
+            Allow from all
+        </IfVersion>
+    </Directory>
+</VirtualHost>
+{% endif -%}
+
+{% if public_port -%}
+<VirtualHost *:{{ public_port }}>
+    WSGIDaemonProcess {{ service_name }}-public processes={{ public_processes }} threads={{ threads }} user={{ user }} group={{ group }} \
+                      display-name=%{GROUP}
+    WSGIProcessGroup {{ service_name }}-public
+    WSGIScriptAlias / {{ public_script }}
+    WSGIApplicationGroup %{GLOBAL}
+    WSGIPassAuthorization On
+    <IfVersion >= 2.4>
+      ErrorLogFormat "%{cu}t %M"
+    </IfVersion>
+    ErrorLog /var/log/apache2/{{ service_name }}_error.log
+    CustomLog /var/log/apache2/{{ service_name }}_access.log combined
+
+    <Directory /usr/bin>
+        <IfVersion >= 2.4>
+            Require all granted
+        </IfVersion>
+        <IfVersion < 2.4>
+            Order allow,deny
+            Allow from all
+        </IfVersion>
+    </Directory>
+</VirtualHost>
+{% endif -%}
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/templating.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/templating.py
new file mode 100644
index 0000000000000000000000000000000000000000..050f8af5c9135db4fafdbf5098edcd22c7157ff8
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/templating.py
@@ -0,0 +1,379 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
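+
+# Illustrative sketch (not part of the upstream module): charms normally use
+# this module through OSConfigRenderer, defined below.  The directory, release
+# and config file names here are assumptions for the example only:
+#
+#     configs = OSConfigRenderer(templates_dir='templates/',
+#                                openstack_release='ussuri')
+#     configs.register('/etc/nova/nova.conf', contexts=[])
+#     configs.write_all()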
+
+import os
+
+import six
+
+from charmhelpers.fetch import apt_install, apt_update
+from charmhelpers.core.hookenv import (
+    log,
+    ERROR,
+    INFO,
+    TRACE
+)
+from charmhelpers.contrib.openstack.utils import OPENSTACK_CODENAMES
+
+try:
+    from jinja2 import FileSystemLoader, ChoiceLoader, Environment, exceptions
+except ImportError:
+    apt_update(fatal=True)
+    if six.PY2:
+        apt_install('python-jinja2', fatal=True)
+    else:
+        apt_install('python3-jinja2', fatal=True)
+    from jinja2 import FileSystemLoader, ChoiceLoader, Environment, exceptions
+
+
+class OSConfigException(Exception):
+    pass
+
+
+def get_loader(templates_dir, os_release):
+    """
+    Create a jinja2.ChoiceLoader containing template dirs up to
+    and including os_release. If a release-specific template directory
+    is missing under templates_dir, it will be omitted from the loader.
+    templates_dir is added to the bottom of the search list as a base
+    loading dir.
+
+    A charm may also ship a templates dir with this module
+    and it will be appended to the bottom of the search list, eg::
+
+        hooks/charmhelpers/contrib/openstack/templates
+
+    :param templates_dir (str): Base template directory containing release
+        sub-directories.
+    :param os_release (str): OpenStack release codename to construct template
+        loader.
+    :returns: jinja2.ChoiceLoader constructed with a list of
+        jinja2.FilesystemLoaders, ordered in descending
+        order by OpenStack release.
+    """
+    tmpl_dirs = [(rel, os.path.join(templates_dir, rel))
+                 for rel in six.itervalues(OPENSTACK_CODENAMES)]
+
+    if not os.path.isdir(templates_dir):
+        log('Templates directory not found @ %s.' % templates_dir,
+            level=ERROR)
+        raise OSConfigException
+
+    # the bottom contains templates_dir and possibly a common templates dir
+    # shipped with the helper.
+    loaders = [FileSystemLoader(templates_dir)]
+    helper_templates = os.path.join(os.path.dirname(__file__), 'templates')
+    if os.path.isdir(helper_templates):
+        loaders.append(FileSystemLoader(helper_templates))
+
+    for rel, tmpl_dir in tmpl_dirs:
+        if os.path.isdir(tmpl_dir):
+            loaders.insert(0, FileSystemLoader(tmpl_dir))
+        if rel == os_release:
+            break
+    # demote this log to the lowest level; we don't really need to see these
+    # logs in production even when debugging.
+    log('Creating choice loader with dirs: %s' %
+        [l.searchpath for l in loaders], level=TRACE)
+    return ChoiceLoader(loaders)
+
+
+class OSConfigTemplate(object):
+    """
+    Associates a config file template with a list of context generators.
+    Responsible for constructing a template context based on those generators.
+    """
+
+    def __init__(self, config_file, contexts, config_template=None):
+        self.config_file = config_file
+
+        if hasattr(contexts, '__call__'):
+            self.contexts = [contexts]
+        else:
+            self.contexts = contexts
+
+        self._complete_contexts = []
+
+        self.config_template = config_template
+
+    def context(self):
+        ctxt = {}
+        for context in self.contexts:
+            _ctxt = context()
+            if _ctxt:
+                ctxt.update(_ctxt)
+                # track interfaces for every complete context.
+                [self._complete_contexts.append(interface)
+                 for interface in context.interfaces
+                 if interface not in self._complete_contexts]
+        return ctxt
+
+    def complete_contexts(self):
+        '''
+        Return a list of interfaces that have satisfied contexts.
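+
+        e.g. ['shared-db', 'amqp'] (illustrative example values).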
+        '''
+        if self._complete_contexts:
+            return self._complete_contexts
+        self.context()
+        return self._complete_contexts
+
+    @property
+    def is_string_template(self):
+        """:returns: Boolean if this instance is a template initialised with a string"""
+        return self.config_template is not None
+
+
+class OSConfigRenderer(object):
+    """
+    This class provides a common templating system to be used by OpenStack
+    charms. It is intended to help charms share common code and templates,
+    and ease the burden of managing config templates across multiple OpenStack
+    releases.
+
+    Basic usage::
+
+        # import some common context generators from charmhelpers
+        from charmhelpers.contrib.openstack import context
+
+        # Create a renderer object for a specific OS release.
+        configs = OSConfigRenderer(templates_dir='/tmp/templates',
+                                   openstack_release='folsom')
+        # register some config files with context generators.
+        configs.register(config_file='/etc/nova/nova.conf',
+                         contexts=[context.SharedDBContext(),
+                                   context.AMQPContext()])
+        configs.register(config_file='/etc/nova/api-paste.ini',
+                         contexts=[context.IdentityServiceContext()])
+        configs.register(config_file='/etc/haproxy/haproxy.conf',
+                         contexts=[context.HAProxyContext()])
+        configs.register(config_file='/etc/keystone/policy.d/extra.cfg',
+                         contexts=[context.ExtraPolicyContext(),
+                                   context.KeystoneContext()],
+                         config_template=hookenv.config('extra-policy'))
+        # write out a single config
+        configs.write('/etc/nova/nova.conf')
+        # write out all registered configs
+        configs.write_all()
+
+    **OpenStack Releases and template loading**
+
+    When the object is instantiated, it is associated with a specific OS
+    release. This dictates how the template loader will be constructed.
+
+    The constructed loader attempts to load the template from several places
+    in the following order:
+    - from the most recent OS release-specific template dir (if one exists)
+    - the base templates_dir
+    - a template directory shipped in the charm with this helper file.
+
+    For the example above, '/tmp/templates' contains the following structure::
+
+        /tmp/templates/nova.conf
+        /tmp/templates/api-paste.ini
+        /tmp/templates/grizzly/api-paste.ini
+        /tmp/templates/havana/api-paste.ini
+
+    Since it was registered with the grizzly release, it first searches
+    the grizzly directory for nova.conf, then the templates dir.
+
+    When writing api-paste.ini, it will find the template in the grizzly
+    directory.
+
+    If the object were created with folsom, it would fall back to the
+    base templates dir for its api-paste.ini template.
+
+    This system should help manage changes in config files through
+    openstack releases, allowing charms to fall back to the most recently
+    updated config template for a given release.
+
+    The haproxy.conf, since it is not shipped in the templates dir, will
+    be loaded from the module directory's template directory, eg
+    $CHARM/hooks/charmhelpers/contrib/openstack/templates. This allows
+    us to ship common templates (haproxy, apache) with the helpers.
+
+    **Context generators**
+
+    Context generators are used to generate template contexts during hook
+    execution. Doing so may require inspecting service relations, charm
+    config, etc. When registered, a config file is associated with a list
+    of generators. When a template is rendered and written, all context
+    generators are called in a chain to generate the context dictionary
+    passed to the jinja2 template. See context.py for more info.
+ """ + def __init__(self, templates_dir, openstack_release): + if not os.path.isdir(templates_dir): + log('Could not locate templates dir %s' % templates_dir, + level=ERROR) + raise OSConfigException + + self.templates_dir = templates_dir + self.openstack_release = openstack_release + self.templates = {} + self._tmpl_env = None + + if None in [Environment, ChoiceLoader, FileSystemLoader]: + # if this code is running, the object is created pre-install hook. + # jinja2 shouldn't get touched until the module is reloaded on next + # hook execution, with proper jinja2 bits successfully imported. + if six.PY2: + apt_install('python-jinja2') + else: + apt_install('python3-jinja2') + + def register(self, config_file, contexts, config_template=None): + """ + Register a config file with a list of context generators to be called + during rendering. + config_template can be used to load a template from a string instead of + using template loaders and template files. + :param config_file (str): a path where a config file will be rendered + :param contexts (list): a list of context dictionaries with kv pairs + :param config_template (str): an optional template string to use + """ + self.templates[config_file] = OSConfigTemplate( + config_file=config_file, + contexts=contexts, + config_template=config_template + ) + log('Registered config file: {}'.format(config_file), + level=INFO) + + def _get_tmpl_env(self): + if not self._tmpl_env: + loader = get_loader(self.templates_dir, self.openstack_release) + self._tmpl_env = Environment(loader=loader) + + def _get_template(self, template): + self._get_tmpl_env() + template = self._tmpl_env.get_template(template) + log('Loaded template from {}'.format(template.filename), + level=INFO) + return template + + def _get_template_from_string(self, ostmpl): + ''' + Get a jinja2 template object from a string. + :param ostmpl: OSConfigTemplate to use as a data source. + ''' + self._get_tmpl_env() + template = self._tmpl_env.from_string(ostmpl.config_template) + log('Loaded a template from a string for {}'.format( + ostmpl.config_file), + level=INFO) + return template + + def render(self, config_file): + if config_file not in self.templates: + log('Config not registered: {}'.format(config_file), level=ERROR) + raise OSConfigException + + ostmpl = self.templates[config_file] + ctxt = ostmpl.context() + + if ostmpl.is_string_template: + template = self._get_template_from_string(ostmpl) + log('Rendering from a string template: ' + '{}'.format(config_file), + level=INFO) + else: + _tmpl = os.path.basename(config_file) + try: + template = self._get_template(_tmpl) + except exceptions.TemplateNotFound: + # if no template is found with basename, try looking + # for it using a munged full path, eg: + # /etc/apache2/apache2.conf -> etc_apache2_apache2.conf + _tmpl = '_'.join(config_file.split('/')[1:]) + try: + template = self._get_template(_tmpl) + except exceptions.TemplateNotFound as e: + log('Could not load template from {} by {} or {}.' + ''.format( + self.templates_dir, + os.path.basename(config_file), + _tmpl + ), + level=ERROR) + raise e + + log('Rendering from template: {}'.format(config_file), + level=INFO) + return template.render(ctxt) + + def write(self, config_file): + """ + Write a single config file, raises if config file is not registered. 
+ """ + if config_file not in self.templates: + log('Config not registered: %s' % config_file, level=ERROR) + raise OSConfigException + + _out = self.render(config_file) + if six.PY3: + _out = _out.encode('UTF-8') + + with open(config_file, 'wb') as out: + out.write(_out) + + log('Wrote template %s.' % config_file, level=INFO) + + def write_all(self): + """ + Write out all registered config files. + """ + [self.write(k) for k in six.iterkeys(self.templates)] + + def set_release(self, openstack_release): + """ + Resets the template environment and generates a new template loader + based on a the new openstack release. + """ + self._tmpl_env = None + self.openstack_release = openstack_release + self._get_tmpl_env() + + def complete_contexts(self): + ''' + Returns a list of context interfaces that yield a complete context. + ''' + interfaces = [] + [interfaces.extend(i.complete_contexts()) + for i in six.itervalues(self.templates)] + return interfaces + + def get_incomplete_context_data(self, interfaces): + ''' + Return dictionary of relation status of interfaces and any missing + required context data. Example: + {'amqp': {'missing_data': ['rabbitmq_password'], 'related': True}, + 'zeromq-configuration': {'related': False}} + ''' + incomplete_context_data = {} + + for i in six.itervalues(self.templates): + for context in i.contexts: + for interface in interfaces: + related = False + if interface in context.interfaces: + related = context.get_related() + missing_data = context.missing_data + if missing_data: + incomplete_context_data[interface] = {'missing_data': missing_data} + if related: + if incomplete_context_data.get(interface): + incomplete_context_data[interface].update({'related': True}) + else: + incomplete_context_data[interface] = {'related': True} + else: + incomplete_context_data[interface] = {'related': False} + return incomplete_context_data diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/utils.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/utils.py new file mode 100644 index 0000000000000000000000000000000000000000..fbf0156108537f9caae35e193c346b0e4ee237b9 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/utils.py @@ -0,0 +1,2352 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# Common python helper functions used for OpenStack charms. 
+from collections import OrderedDict, namedtuple +from functools import wraps + +import subprocess +import json +import os +import sys +import re +import itertools +import functools + +import six +import traceback +import uuid +import yaml + +from charmhelpers import deprecate + +from charmhelpers.contrib.network import ip + +from charmhelpers.core import unitdata + +from charmhelpers.core.hookenv import ( + WORKLOAD_STATES, + action_fail, + action_set, + config, + expected_peer_units, + expected_related_units, + log as juju_log, + charm_dir, + INFO, + ERROR, + metadata, + related_units, + relation_get, + relation_id, + relation_ids, + relation_set, + status_set, + hook_name, + application_version_set, + cached, + leader_set, + leader_get, + local_unit, +) + +from charmhelpers.core.strutils import ( + BasicStringComparator, + bool_from_string, +) + +from charmhelpers.contrib.storage.linux.lvm import ( + deactivate_lvm_volume_group, + is_lvm_physical_volume, + remove_lvm_physical_volume, +) + +from charmhelpers.contrib.network.ip import ( + get_ipv6_addr, + is_ipv6, + port_has_listener, +) + +from charmhelpers.core.host import ( + lsb_release, + mounts, + umount, + service_running, + service_pause, + service_resume, + service_stop, + service_start, + restart_on_change_helper, +) +from charmhelpers.fetch import ( + apt_cache, + import_key as fetch_import_key, + add_source as fetch_add_source, + SourceConfigError, + GPGKeyError, + get_upstream_version, + filter_missing_packages, + ubuntu_apt_pkg as apt, +) + +from charmhelpers.fetch.snap import ( + snap_install, + snap_refresh, + valid_snap_channel, +) + +from charmhelpers.contrib.storage.linux.utils import is_block_device, zap_disk +from charmhelpers.contrib.storage.linux.loopback import ensure_loopback_device +from charmhelpers.contrib.openstack.exceptions import OSContextError +from charmhelpers.contrib.openstack.policyd import ( + policyd_status_message_prefix, + POLICYD_CONFIG_NAME, +) + +from charmhelpers.contrib.openstack.ha.utils import ( + expect_ha, +) + +CLOUD_ARCHIVE_URL = "http://ubuntu-cloud.archive.canonical.com/ubuntu" +CLOUD_ARCHIVE_KEY_ID = '5EDB1B62EC4926EA' + +DISTRO_PROPOSED = ('deb http://archive.ubuntu.com/ubuntu/ %s-proposed ' + 'restricted main multiverse universe') + +OPENSTACK_RELEASES = ( + 'diablo', + 'essex', + 'folsom', + 'grizzly', + 'havana', + 'icehouse', + 'juno', + 'kilo', + 'liberty', + 'mitaka', + 'newton', + 'ocata', + 'pike', + 'queens', + 'rocky', + 'stein', + 'train', + 'ussuri', +) + +UBUNTU_OPENSTACK_RELEASE = OrderedDict([ + ('oneiric', 'diablo'), + ('precise', 'essex'), + ('quantal', 'folsom'), + ('raring', 'grizzly'), + ('saucy', 'havana'), + ('trusty', 'icehouse'), + ('utopic', 'juno'), + ('vivid', 'kilo'), + ('wily', 'liberty'), + ('xenial', 'mitaka'), + ('yakkety', 'newton'), + ('zesty', 'ocata'), + ('artful', 'pike'), + ('bionic', 'queens'), + ('cosmic', 'rocky'), + ('disco', 'stein'), + ('eoan', 'train'), + ('focal', 'ussuri'), +]) + + +OPENSTACK_CODENAMES = OrderedDict([ + ('2011.2', 'diablo'), + ('2012.1', 'essex'), + ('2012.2', 'folsom'), + ('2013.1', 'grizzly'), + ('2013.2', 'havana'), + ('2014.1', 'icehouse'), + ('2014.2', 'juno'), + ('2015.1', 'kilo'), + ('2015.2', 'liberty'), + ('2016.1', 'mitaka'), + ('2016.2', 'newton'), + ('2017.1', 'ocata'), + ('2017.2', 'pike'), + ('2018.1', 'queens'), + ('2018.2', 'rocky'), + ('2019.1', 'stein'), + ('2019.2', 'train'), + ('2020.1', 'ussuri'), +]) + +# The ugly duckling - must list releases oldest to newest +SWIFT_CODENAMES = OrderedDict([ + 
('diablo', + ['1.4.3']), + ('essex', + ['1.4.8']), + ('folsom', + ['1.7.4']), + ('grizzly', + ['1.7.6', '1.7.7', '1.8.0']), + ('havana', + ['1.9.0', '1.9.1', '1.10.0']), + ('icehouse', + ['1.11.0', '1.12.0', '1.13.0', '1.13.1']), + ('juno', + ['2.0.0', '2.1.0', '2.2.0']), + ('kilo', + ['2.2.1', '2.2.2']), + ('liberty', + ['2.3.0', '2.4.0', '2.5.0']), + ('mitaka', + ['2.5.0', '2.6.0', '2.7.0']), + ('newton', + ['2.8.0', '2.9.0', '2.10.0']), + ('ocata', + ['2.11.0', '2.12.0', '2.13.0']), + ('pike', + ['2.13.0', '2.15.0']), + ('queens', + ['2.16.0', '2.17.0']), + ('rocky', + ['2.18.0', '2.19.0']), + ('stein', + ['2.20.0', '2.21.0']), + ('train', + ['2.22.0', '2.23.0']), + ('ussuri', + ['2.24.0', '2.25.0']), +]) + +# >= Liberty version->codename mapping +PACKAGE_CODENAMES = { + 'nova-common': OrderedDict([ + ('12', 'liberty'), + ('13', 'mitaka'), + ('14', 'newton'), + ('15', 'ocata'), + ('16', 'pike'), + ('17', 'queens'), + ('18', 'rocky'), + ('19', 'stein'), + ('20', 'train'), + ('21', 'ussuri'), + ]), + 'neutron-common': OrderedDict([ + ('7', 'liberty'), + ('8', 'mitaka'), + ('9', 'newton'), + ('10', 'ocata'), + ('11', 'pike'), + ('12', 'queens'), + ('13', 'rocky'), + ('14', 'stein'), + ('15', 'train'), + ('16', 'ussuri'), + ]), + 'cinder-common': OrderedDict([ + ('7', 'liberty'), + ('8', 'mitaka'), + ('9', 'newton'), + ('10', 'ocata'), + ('11', 'pike'), + ('12', 'queens'), + ('13', 'rocky'), + ('14', 'stein'), + ('15', 'train'), + ('16', 'ussuri'), + ]), + 'keystone': OrderedDict([ + ('8', 'liberty'), + ('9', 'mitaka'), + ('10', 'newton'), + ('11', 'ocata'), + ('12', 'pike'), + ('13', 'queens'), + ('14', 'rocky'), + ('15', 'stein'), + ('16', 'train'), + ('17', 'ussuri'), + ]), + 'horizon-common': OrderedDict([ + ('8', 'liberty'), + ('9', 'mitaka'), + ('10', 'newton'), + ('11', 'ocata'), + ('12', 'pike'), + ('13', 'queens'), + ('14', 'rocky'), + ('15', 'stein'), + ('16', 'train'), + ('18', 'ussuri'), + ]), + 'ceilometer-common': OrderedDict([ + ('5', 'liberty'), + ('6', 'mitaka'), + ('7', 'newton'), + ('8', 'ocata'), + ('9', 'pike'), + ('10', 'queens'), + ('11', 'rocky'), + ('12', 'stein'), + ('13', 'train'), + ('14', 'ussuri'), + ]), + 'heat-common': OrderedDict([ + ('5', 'liberty'), + ('6', 'mitaka'), + ('7', 'newton'), + ('8', 'ocata'), + ('9', 'pike'), + ('10', 'queens'), + ('11', 'rocky'), + ('12', 'stein'), + ('13', 'train'), + ('14', 'ussuri'), + ]), + 'glance-common': OrderedDict([ + ('11', 'liberty'), + ('12', 'mitaka'), + ('13', 'newton'), + ('14', 'ocata'), + ('15', 'pike'), + ('16', 'queens'), + ('17', 'rocky'), + ('18', 'stein'), + ('19', 'train'), + ('20', 'ussuri'), + ]), + 'openstack-dashboard': OrderedDict([ + ('8', 'liberty'), + ('9', 'mitaka'), + ('10', 'newton'), + ('11', 'ocata'), + ('12', 'pike'), + ('13', 'queens'), + ('14', 'rocky'), + ('15', 'stein'), + ('16', 'train'), + ('18', 'ussuri'), + ]), +} + +DEFAULT_LOOPBACK_SIZE = '5G' + +DB_SERIES_UPGRADING_KEY = 'cluster-series-upgrading' + +DB_MAINTENANCE_KEYS = [DB_SERIES_UPGRADING_KEY] + + +class CompareOpenStackReleases(BasicStringComparator): + """Provide comparisons of OpenStack releases. + + Use in the form of + + if CompareOpenStackReleases(release) > 'mitaka': + # do something with mitaka + """ + _list = OPENSTACK_RELEASES + + +def error_out(msg): + juju_log("FATAL ERROR: %s" % msg, level='ERROR') + sys.exit(1) + + +def get_installed_semantic_versioned_packages(): + '''Get a list of installed packages which have OpenStack semantic versioning + + :returns List of installed packages + :rtype: [pkg1, pkg2, ...] 
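+
+    Example (illustrative): on a nova-compute unit this might return
+    ['nova-common', 'neutron-common'].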
+ ''' + return filter_missing_packages(PACKAGE_CODENAMES.keys()) + + +def get_os_codename_install_source(src): + '''Derive OpenStack release codename from a given installation source.''' + ubuntu_rel = lsb_release()['DISTRIB_CODENAME'] + rel = '' + if src is None: + return rel + if src in ['distro', 'distro-proposed', 'proposed']: + try: + rel = UBUNTU_OPENSTACK_RELEASE[ubuntu_rel] + except KeyError: + e = 'Could not derive openstack release for '\ + 'this Ubuntu release: %s' % ubuntu_rel + error_out(e) + return rel + + if src.startswith('cloud:'): + ca_rel = src.split(':')[1] + ca_rel = ca_rel.split('-')[1].split('/')[0] + return ca_rel + + # Best guess match based on deb string provided + if (src.startswith('deb') or + src.startswith('ppa') or + src.startswith('snap')): + for v in OPENSTACK_CODENAMES.values(): + if v in src: + return v + + +def get_os_version_install_source(src): + codename = get_os_codename_install_source(src) + return get_os_version_codename(codename) + + +def get_os_codename_version(vers): + '''Determine OpenStack codename from version number.''' + try: + return OPENSTACK_CODENAMES[vers] + except KeyError: + e = 'Could not determine OpenStack codename for version %s' % vers + error_out(e) + + +def get_os_version_codename(codename, version_map=OPENSTACK_CODENAMES): + '''Determine OpenStack version number from codename.''' + for k, v in six.iteritems(version_map): + if v == codename: + return k + e = 'Could not derive OpenStack version for '\ + 'codename: %s' % codename + error_out(e) + + +def get_os_version_codename_swift(codename): + '''Determine OpenStack version number of swift from codename.''' + for k, v in six.iteritems(SWIFT_CODENAMES): + if k == codename: + return v[-1] + e = 'Could not derive swift version for '\ + 'codename: %s' % codename + error_out(e) + + +def get_swift_codename(version): + '''Determine OpenStack codename that corresponds to swift version.''' + codenames = [k for k, v in six.iteritems(SWIFT_CODENAMES) if version in v] + + if len(codenames) > 1: + # If more than one release codename contains this version we determine + # the actual codename based on the highest available install source. + for codename in reversed(codenames): + releases = UBUNTU_OPENSTACK_RELEASE + release = [k for k, v in six.iteritems(releases) if codename in v] + ret = subprocess.check_output(['apt-cache', 'policy', 'swift']) + if six.PY3: + ret = ret.decode('UTF-8') + if codename in ret or release[0] in ret: + return codename + elif len(codenames) == 1: + return codenames[0] + + # NOTE: fallback - attempt to match with just major.minor version + match = re.match(r'^(\d+)\.(\d+)', version) + if match: + major_minor_version = match.group(0) + for codename, versions in six.iteritems(SWIFT_CODENAMES): + for release_version in versions: + if release_version.startswith(major_minor_version): + return codename + + return None + + +def get_os_codename_package(package, fatal=True): + '''Derive OpenStack release codename from an installed package.''' + + if snap_install_requested(): + cmd = ['snap', 'list', package] + try: + out = subprocess.check_output(cmd) + if six.PY3: + out = out.decode('UTF-8') + except subprocess.CalledProcessError: + return None + lines = out.split('\n') + for line in lines: + if package in line: + # Second item in list is Version + return line.split()[1] + + cache = apt_cache() + + try: + pkg = cache[package] + except Exception: + if not fatal: + return None + # the package is unknown to the current apt cache. 
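+        # (no installation candidate generally means the configured apt
+        # sources do not provide this package for the running series)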
+        e = 'Could not determine version of package with no installation '\
+            'candidate: %s' % package
+        error_out(e)
+
+    if not pkg.current_ver:
+        if not fatal:
+            return None
+        # package is known, but no version is currently installed.
+        e = 'Could not determine version of uninstalled package: %s' % package
+        error_out(e)
+
+    vers = apt.upstream_version(pkg.current_ver.ver_str)
+    if 'swift' in pkg.name:
+        # Fully x.y.z match for swift versions
+        match = re.match(r'^(\d+)\.(\d+)\.(\d+)', vers)
+    else:
+        # x.y match only for 20XX.X
+        # and ignore patch level for other packages
+        match = re.match(r'^(\d+)\.(\d+)', vers)
+
+    if match:
+        vers = match.group(0)
+
+    # Generate a major version number for newer semantic
+    # versions of openstack projects
+    major_vers = vers.split('.')[0]
+    # >= Liberty independent project versions
+    if (package in PACKAGE_CODENAMES and
+            major_vers in PACKAGE_CODENAMES[package]):
+        return PACKAGE_CODENAMES[package][major_vers]
+    else:
+        # < Liberty co-ordinated project versions
+        try:
+            if 'swift' in pkg.name:
+                return get_swift_codename(vers)
+            else:
+                return OPENSTACK_CODENAMES[vers]
+        except KeyError:
+            if not fatal:
+                return None
+            e = 'Could not determine OpenStack codename for version %s' % vers
+            error_out(e)
+
+
+def get_os_version_package(pkg, fatal=True):
+    '''Derive OpenStack version number from an installed package.'''
+    codename = get_os_codename_package(pkg, fatal=fatal)
+
+    if not codename:
+        return None
+
+    if 'swift' in pkg:
+        vers_map = SWIFT_CODENAMES
+        for cname, version in six.iteritems(vers_map):
+            if cname == codename:
+                return version[-1]
+    else:
+        vers_map = OPENSTACK_CODENAMES
+        for version, cname in six.iteritems(vers_map):
+            if cname == codename:
+                return version
+    # e = "Could not determine OpenStack version for package: %s" % pkg
+    # error_out(e)
+
+
+# Module local cache variable for the os_release.
+_os_rel = None
+
+
+def reset_os_release():
+    '''Unset the cached os_release version'''
+    global _os_rel
+    _os_rel = None
+
+
+def os_release(package, base=None, reset_cache=False, source_key=None):
+    """Returns OpenStack release codename from a cached global.
+
+    If reset_cache then unset the cached os_release version and return the
+    freshly determined version.
+
+    If the codename cannot be determined from either an installed package or
+    the installation source, the earliest release supported by the charm should
+    be returned.
+
+    :param package: Name of package to determine release from
+    :type package: str
+    :param base: Fallback codename if endeavours to determine from package fail
+    :type base: Optional[str]
+    :param reset_cache: Reset any cached codename value
+    :type reset_cache: bool
+    :param source_key: Name of source configuration option
+                       (default: 'openstack-origin')
+    :type source_key: Optional[str]
+    :returns: OpenStack release codename
+    :rtype: str
+    """
+    source_key = source_key or 'openstack-origin'
+    if not base:
+        base = UBUNTU_OPENSTACK_RELEASE[lsb_release()['DISTRIB_CODENAME']]
+    global _os_rel
+    if reset_cache:
+        reset_os_release()
+    if _os_rel:
+        return _os_rel
+    _os_rel = (
+        get_os_codename_package(package, fatal=False) or
+        get_os_codename_install_source(config(source_key)) or
+        base)
+    return _os_rel
+
+
+@deprecate("moved to charmhelpers.fetch.import_key()", "2017-07", log=juju_log)
+def import_key(keyid):
+    """Import a key, either ASCII armored, or a GPG key id.
+
+    @param keyid: the key in ASCII armor format, or a GPG key id.
+    @raises SystemExit() via sys.exit() on failure.
+ """ + try: + return fetch_import_key(keyid) + except GPGKeyError as e: + error_out("Could not import key: {}".format(str(e))) + + +def get_source_and_pgp_key(source_and_key): + """Look for a pgp key ID or ascii-armor key in the given input. + + :param source_and_key: Sting, "source_spec|keyid" where '|keyid' is + optional. + :returns (source_spec, key_id OR None) as a tuple. Returns None for key_id + if there was no '|' in the source_and_key string. + """ + try: + source, key = source_and_key.split('|', 2) + return source, key or None + except ValueError: + return source_and_key, None + + +@deprecate("use charmhelpers.fetch.add_source() instead.", + "2017-07", log=juju_log) +def configure_installation_source(source_plus_key): + """Configure an installation source. + + The functionality is provided by charmhelpers.fetch.add_source() + The difference between the two functions is that add_source() signature + requires the key to be passed directly, whereas this function passes an + optional key by appending '|' to the end of the source specificiation + 'source'. + + Another difference from add_source() is that the function calls sys.exit(1) + if the configuration fails, whereas add_source() raises + SourceConfigurationError(). Another difference, is that add_source() + silently fails (with a juju_log command) if there is no matching source to + configure, whereas this function fails with a sys.exit(1) + + :param source: String_plus_key -- see above for details. + + Note that the behaviour on error is to log the error to the juju log and + then call sys.exit(1). + """ + if source_plus_key.startswith('snap'): + # Do nothing for snap installs + return + # extract the key if there is one, denoted by a '|' in the rel + source, key = get_source_and_pgp_key(source_plus_key) + + # handle the ordinary sources via add_source + try: + fetch_add_source(source, key, fail_invalid=True) + except SourceConfigError as se: + error_out(str(se)) + + +def config_value_changed(option): + """ + Determine if config value changed since last call to this function. + """ + hook_data = unitdata.HookData() + with hook_data(): + db = unitdata.kv() + current = config(option) + saved = db.get(option) + db.set(option, current) + if saved is None: + return False + return current != saved + + +def get_endpoint_key(service_name, relation_id, unit_name): + """Return the key used to refer to an ep changed notification from a unit. + + :param service_name: Service name eg nova, neutron, placement etc + :type service_name: str + :param relation_id: The id of the relation the unit is on. + :type relation_id: str + :param unit_name: The name of the unit publishing the notification. + :type unit_name: str + :returns: The key used to refer to an ep changed notification from a unit + :rtype: str + """ + return '{}-{}-{}'.format( + service_name, + relation_id.replace(':', '_'), + unit_name.replace('/', '_')) + + +def get_endpoint_notifications(service_names, rel_name='identity-service'): + """Return all notifications for the given services. + + :param service_names: List of service name. + :type service_name: List + :param rel_name: Name of the relation to query + :type rel_name: str + :returns: A dict containing the source of the notification and its nonce. 
+    :rtype: Dict[str, str]
+    """
+    notifications = {}
+    for rid in relation_ids(rel_name):
+        for unit in related_units(relid=rid):
+            ep_changed_json = relation_get(
+                rid=rid,
+                unit=unit,
+                attribute='ep_changed')
+            if ep_changed_json:
+                ep_changed = json.loads(ep_changed_json)
+                for service in service_names:
+                    if ep_changed.get(service):
+                        key = get_endpoint_key(service, rid, unit)
+                        notifications[key] = ep_changed[service]
+    return notifications
+
+
+def endpoint_changed(service_name, rel_name='identity-service'):
+    """Whether a new notification has been received for an endpoint.
+
+    :param service_name: Service name eg nova, neutron, placement etc
+    :type service_name: str
+    :param rel_name: Name of the relation to query
+    :type rel_name: str
+    :returns: Whether endpoint has changed
+    :rtype: bool
+    """
+    changed = False
+    with unitdata.HookData()() as t:
+        db = t[0]
+        notifications = get_endpoint_notifications(
+            [service_name],
+            rel_name=rel_name)
+        for key, nonce in notifications.items():
+            if db.get(key) != nonce:
+                juju_log(('New endpoint change notification found: '
+                          '{}={}').format(key, nonce),
+                         'INFO')
+                changed = True
+                break
+    return changed
+
+
+def save_endpoint_changed_triggers(service_names, rel_name='identity-service'):
+    """Save the endpoint triggers in db so that changes can be tracked.
+
+    :param service_names: List of service name.
+    :type service_name: List
+    :param rel_name: Name of the relation to query
+    :type rel_name: str
+    """
+    with unitdata.HookData()() as t:
+        db = t[0]
+        notifications = get_endpoint_notifications(
+            service_names,
+            rel_name=rel_name)
+        for key, nonce in notifications.items():
+            db.set(key, nonce)
+
+
+def save_script_rc(script_path="scripts/scriptrc", **env_vars):
+    """
+    Write an rc file in the charm-delivered directory containing
+    exported environment variables provided by env_vars. Any charm scripts run
+    outside the juju hook environment can source this scriptrc to obtain
+    updated config information necessary to perform health checks or
+    service changes.
+    """
+    juju_rc_path = "%s/%s" % (charm_dir(), script_path)
+    if not os.path.exists(os.path.dirname(juju_rc_path)):
+        os.mkdir(os.path.dirname(juju_rc_path))
+    with open(juju_rc_path, 'wt') as rc_script:
+        rc_script.write(
+            "#!/bin/bash\n")
+        [rc_script.write('export %s=%s\n' % (u, p))
+         for u, p in six.iteritems(env_vars) if u != "script_path"]
+
+
+def openstack_upgrade_available(package):
+    """
+    Determines if an OpenStack upgrade is available from installation
+    source, based on version of installed package.
+
+    :param package: str: Name of installed package.
+
+    :returns: bool: True if configured installation source offers a newer
+                    version of package.
+    """
+
+    src = config('openstack-origin')
+    cur_vers = get_os_version_package(package)
+    if not cur_vers:
+        # The package has not been installed yet; do not attempt upgrade
+        return False
+    if "swift" in package:
+        codename = get_os_codename_install_source(src)
+        avail_vers = get_os_version_codename_swift(codename)
+    else:
+        try:
+            avail_vers = get_os_version_install_source(src)
+        except Exception:
+            avail_vers = cur_vers
+    apt.init()
+    return apt.version_compare(avail_vers, cur_vers) >= 1
+
+
+def ensure_block_device(block_device):
+    '''
+    Confirm block_device, create as loopback if necessary.
+
+    :param block_device: str: Full path of block device to ensure.
+
+    :returns: str: Full path of ensured block device.
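+
+    Example (illustrative)::
+
+        ensure_block_device('vdb')             # -> '/dev/vdb'
+        ensure_block_device('/srv/space|10G')  # -> loopback device, eg '/dev/loop0'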
+ ''' + _none = ['None', 'none', None] + if (block_device in _none): + error_out('prepare_storage(): Missing required input: block_device=%s.' + % block_device) + + if block_device.startswith('/dev/'): + bdev = block_device + elif block_device.startswith('/'): + _bd = block_device.split('|') + if len(_bd) == 2: + bdev, size = _bd + else: + bdev = block_device + size = DEFAULT_LOOPBACK_SIZE + bdev = ensure_loopback_device(bdev, size) + else: + bdev = '/dev/%s' % block_device + + if not is_block_device(bdev): + error_out('Failed to locate valid block device at %s' % bdev) + + return bdev + + +def clean_storage(block_device): + ''' + Ensures a block device is clean. That is: + - unmounted + - any lvm volume groups are deactivated + - any lvm physical device signatures removed + - partition table wiped + + :param block_device: str: Full path to block device to clean. + ''' + for mp, d in mounts(): + if d == block_device: + juju_log('clean_storage(): %s is mounted @ %s, unmounting.' % + (d, mp), level=INFO) + umount(mp, persist=True) + + if is_lvm_physical_volume(block_device): + deactivate_lvm_volume_group(block_device) + remove_lvm_physical_volume(block_device) + else: + zap_disk(block_device) + + +is_ip = ip.is_ip +ns_query = ip.ns_query +get_host_ip = ip.get_host_ip +get_hostname = ip.get_hostname + + +def get_matchmaker_map(mm_file='/etc/oslo/matchmaker_ring.json'): + mm_map = {} + if os.path.isfile(mm_file): + with open(mm_file, 'r') as f: + mm_map = json.load(f) + return mm_map + + +def sync_db_with_multi_ipv6_addresses(database, database_user, + relation_prefix=None): + hosts = get_ipv6_addr(dynamic_only=False) + + if config('vip'): + vips = config('vip').split() + for vip in vips: + if vip and is_ipv6(vip): + hosts.append(vip) + + kwargs = {'database': database, + 'username': database_user, + 'hostname': json.dumps(hosts)} + + if relation_prefix: + for key in list(kwargs.keys()): + kwargs["%s_%s" % (relation_prefix, key)] = kwargs[key] + del kwargs[key] + + for rid in relation_ids('shared-db'): + relation_set(relation_id=rid, **kwargs) + + +def os_requires_version(ostack_release, pkg): + """ + Decorator for hook to specify minimum supported release + """ + def wrap(f): + @wraps(f) + def wrapped_f(*args): + if os_release(pkg) < ostack_release: + raise Exception("This hook is not supported on releases" + " before %s" % ostack_release) + f(*args) + return wrapped_f + return wrap + + +def os_workload_status(configs, required_interfaces, charm_func=None): + """ + Decorator to set workload status based on complete contexts + """ + def wrap(f): + @wraps(f) + def wrapped_f(*args, **kwargs): + # Run the original function first + f(*args, **kwargs) + # Set workload status now that contexts have been + # acted on + set_os_workload_status(configs, required_interfaces, charm_func) + return wrapped_f + return wrap + + +def set_os_workload_status(configs, required_interfaces, charm_func=None, + services=None, ports=None): + """Set the state of the workload status for the charm. + + This calls _determine_os_workload_status() to get the new state, message + and sets the status using status_set() + + @param configs: a templating.OSConfigRenderer() object + @param required_interfaces: {generic: [specific, specific2, ...]} + @param charm_func: a callable function that returns state, message. The + signature is charm_func(configs) -> (state, message) + @param services: list of strings OR dictionary specifying services/ports + @param ports: OPTIONAL list of port numbers. 
+    @returns state, message: the new workload status, user message
+    """
+    state, message = _determine_os_workload_status(
+        configs, required_interfaces, charm_func, services, ports)
+    status_set(state, message)
+
+
+def _determine_os_workload_status(
+        configs, required_interfaces, charm_func=None,
+        services=None, ports=None):
+    """Determine the state of the workload status for the charm.
+
+    This function returns the new workload status for the charm based
+    on the state of the interfaces, the paused state and whether the
+    services are actually running and any specified ports are open.
+
+    This checks:
+
+     1. if the unit should be paused, that it is actually paused.  If so the
+        state is 'maintenance' + message, else 'broken'.
+     2. that the interfaces/relations are complete.  If they are not then
+        it sets the state to either 'broken' or 'waiting' and an appropriate
+        message.
+     3. If all the relation data is set, then it checks that the actual
+        services really are running.  If not it sets the state to 'broken'.
+
+    If everything is okay then the state returns 'active'.
+
+    @param configs: a templating.OSConfigRenderer() object
+    @param required_interfaces: {generic: [specific, specific2, ...]}
+    @param charm_func: a callable function that returns state, message. The
+                       signature is charm_func(configs) -> (state, message)
+    @param services: list of strings OR dictionary specifying services/ports
+    @param ports: OPTIONAL list of port numbers.
+    @returns state, message: the new workload status, user message
+    """
+    state, message = _ows_check_if_paused(services, ports)
+
+    if state is None:
+        state, message = _ows_check_generic_interfaces(
+            configs, required_interfaces)
+
+    if state != 'maintenance' and charm_func:
+        # _ows_check_charm_func() may modify the state, message
+        state, message = _ows_check_charm_func(
+            state, message, lambda: charm_func(configs))
+
+    if state is None:
+        state, message = _ows_check_services_running(services, ports)
+
+    if state is None:
+        state = 'active'
+        message = "Unit is ready"
+        juju_log(message, 'INFO')
+
+    try:
+        if config(POLICYD_CONFIG_NAME):
+            message = "{} {}".format(policyd_status_message_prefix(), message)
+    except Exception:
+        pass
+
+    return state, message
+
+
+def _ows_check_if_paused(services=None, ports=None):
+    """Check if the unit is supposed to be paused, and if so check that the
+    services/ports (if passed) are actually stopped/not being listened to.
+
+    If the unit isn't supposed to be paused, just return None, None
+
+    If the unit is performing a series upgrade, return a message indicating
+    this.
+
+    @param services: OPTIONAL services spec or list of service names.
+    @param ports: OPTIONAL list of port numbers.
+    @returns state, message or None, None
+    """
+    if is_unit_upgrading_set():
+        state, message = check_actually_paused(services=services,
+                                               ports=ports)
+        if state is None:
+            # we're paused okay, so set blocked (series upgrade) and return
+            state = "blocked"
+            message = ("Ready for do-release-upgrade and reboot. "
+                       "Set complete when finished.")
+        return state, message
+
+    if is_unit_paused_set():
+        state, message = check_actually_paused(services=services,
+                                               ports=ports)
+        if state is None:
+            # we're paused okay, so set maintenance and return
+            state = "maintenance"
+            message = "Paused. Use 'resume' action to resume normal service."
+        return state, message
+    return None, None
+
+
+def _ows_check_generic_interfaces(configs, required_interfaces):
+    """Check the complete contexts to determine the workload status.
+
+    - Checks for missing or incomplete contexts
+    - juju log details of missing required data.
+    - determines the correct workload status
+    - creates an appropriate message for status_set(...)
+
+    if there are no problems then the function returns None, None
+
+    @param configs: a templating.OSConfigRenderer() object
+    @param required_interfaces: {generic_interface: [specific_interface], }
+    @returns state, message or None, None
+    """
+    incomplete_rel_data = incomplete_relation_data(configs,
+                                                   required_interfaces)
+    state = None
+    message = None
+    missing_relations = set()
+    incomplete_relations = set()
+
+    for generic_interface, relations_states in incomplete_rel_data.items():
+        related_interface = None
+        missing_data = {}
+        # Related or not?
+        for interface, relation_state in relations_states.items():
+            if relation_state.get('related'):
+                related_interface = interface
+                missing_data = relation_state.get('missing_data')
+                break
+        # No relation ID for the generic_interface?
+        if not related_interface:
+            juju_log("{} relation is missing and must be related for "
+                     "functionality. ".format(generic_interface), 'WARN')
+            state = 'blocked'
+            missing_relations.add(generic_interface)
+        else:
+            # Relation ID exists but no related unit
+            if not missing_data:
+                # Edge case - relation ID exists but departing
+                _hook_name = hook_name()
+                if (('departed' in _hook_name or 'broken' in _hook_name) and
+                        related_interface in _hook_name):
+                    state = 'blocked'
+                    missing_relations.add(generic_interface)
+                    juju_log("{} relation's interface, {}, "
+                             "relationship is departed or broken "
+                             "and is required for functionality."
+                             "".format(generic_interface, related_interface),
+                             "WARN")
+                # Normal case relation ID exists but no related unit
+                # (joining)
+                else:
+                    juju_log("{} relation's interface, {}, is related but has"
+                             " no units in the relation."
+                             "".format(generic_interface, related_interface),
+                             "INFO")
+            # Related unit exists and data missing on the relation
+            else:
+                juju_log("{} relation's interface, {}, is related awaiting "
+                         "the following data from the relationship: {}. "
+                         "".format(generic_interface, related_interface,
+                                   ", ".join(missing_data)), "INFO")
+            if state != 'blocked':
+                state = 'waiting'
+            if generic_interface not in missing_relations:
+                incomplete_relations.add(generic_interface)
+
+    if missing_relations:
+        message = "Missing relations: {}".format(", ".join(missing_relations))
+        if incomplete_relations:
+            message += "; incomplete relations: {}" \
+                       "".format(", ".join(incomplete_relations))
+        state = 'blocked'
+    elif incomplete_relations:
+        message = "Incomplete relations: {}" \
+                  "".format(", ".join(incomplete_relations))
+        state = 'waiting'
+
+    return state, message
+
+
+def _ows_check_charm_func(state, message, charm_func_with_configs):
+    """Run a custom check function for the charm to see if it wants to
+    change the state.  This is only run if not in 'maintenance' and
+    tests to see if the new state is more important than the previous
+    one determined by the interfaces/relations check.
+
+    @param state: the previously determined state so far.
+    @param message: the user orientated message so far.
+    @param charm_func: a callable function that returns state, message
+    @returns state, message strings.
+ """ + if charm_func_with_configs: + charm_state, charm_message = charm_func_with_configs() + if (charm_state != 'active' and + charm_state != 'unknown' and + charm_state is not None): + state = workload_state_compare(state, charm_state) + if message: + charm_message = charm_message.replace("Incomplete relations: ", + "") + message = "{}, {}".format(message, charm_message) + else: + message = charm_message + return state, message + + +def _ows_check_services_running(services, ports): + """Check that the services that should be running are actually running + and that any ports specified are being listened to. + + @param services: list of strings OR dictionary specifying services/ports + @param ports: list of ports + @returns state, message: strings or None, None + """ + messages = [] + state = None + if services is not None: + services = _extract_services_list_helper(services) + services_running, running = _check_running_services(services) + if not all(running): + messages.append( + "Services not running that should be: {}" + .format(", ".join(_filter_tuples(services_running, False)))) + state = 'blocked' + # also verify that the ports that should be open are open + # NB, that ServiceManager objects only OPTIONALLY have ports + map_not_open, ports_open = ( + _check_listening_on_services_ports(services)) + if not all(ports_open): + # find which service has missing ports. They are in service + # order which makes it a bit easier. + message_parts = {service: ", ".join([str(v) for v in open_ports]) + for service, open_ports in map_not_open.items()} + message = ", ".join( + ["{}: [{}]".format(s, sp) for s, sp in message_parts.items()]) + messages.append( + "Services with ports not open that should be: {}" + .format(message)) + state = 'blocked' + + if ports is not None: + # and we can also check ports which we don't know the service for + ports_open, ports_open_bools = _check_listening_on_ports_list(ports) + if not all(ports_open_bools): + messages.append( + "Ports which should be open, but are not: {}" + .format(", ".join([str(p) for p, v in ports_open + if not v]))) + state = 'blocked' + + if state is not None: + message = "; ".join(messages) + return state, message + + return None, None + + +def _extract_services_list_helper(services): + """Extract a OrderedDict of {service: [ports]} of the supplied services + for use by the other functions. + + The services object can either be: + - None : no services were passed (an empty dict is returned) + - a list of strings + - A dictionary (optionally OrderedDict) {service_name: {'service': ..}} + - An array of [{'service': service_name, ...}, ...] + + @param services: see above + @returns OrderedDict(service: [ports], ...) + """ + if services is None: + return {} + if isinstance(services, dict): + services = services.values() + # either extract the list of services from the dictionary, or if + # it is a simple string, use that. i.e. works with mixed lists. + _s = OrderedDict() + for s in services: + if isinstance(s, dict) and 'service' in s: + _s[s['service']] = s.get('ports', []) + if isinstance(s, str): + _s[s] = [] + return _s + + +def _check_running_services(services): + """Check that the services dict provided is actually running and provide + a list of (service, boolean) tuples for each service. + + Returns both a zipped list of (service, boolean) and a list of booleans + in the same order as the services. + + @param services: OrderedDict of strings: [ports], one for each service to + check. 
+    @returns [(service, boolean), ...], : results for checks
+             [boolean]                  : just the result of the service checks
+    """
+    services_running = [service_running(s) for s in services]
+    return list(zip(services, services_running)), services_running
+
+
+def _check_listening_on_services_ports(services, test=False):
+    """Check that the unit is actually listening (has the port open) on the
+    ports that the service specifies are open. If test is True then the
+    function returns the services with ports that are open rather than
+    closed.
+
+    Returns an OrderedDict of service: ports and a list of booleans
+
+    @param services: OrderedDict(service: [port, ...], ...)
+    @param test: default=False, if False, test for closed, otherwise open.
+    @returns OrderedDict(service: [port-not-open, ...]...), [boolean]
+    """
+    test = not(not(test))  # ensure test is True or False
+    all_ports = list(itertools.chain(*services.values()))
+    ports_states = [port_has_listener('0.0.0.0', p) for p in all_ports]
+    map_ports = OrderedDict()
+    matched_ports = [p for p, opened in zip(all_ports, ports_states)
+                     if opened == test]  # essentially opened xor test
+    for service, ports in services.items():
+        set_ports = set(ports).intersection(matched_ports)
+        if set_ports:
+            map_ports[service] = set_ports
+    return map_ports, ports_states
+
+
+def _check_listening_on_ports_list(ports):
+    """Check that the ports list given are being listened to
+
+    Returns a list of ports being listened to and a list of the
+    booleans.
+
+    @param ports: LIST of port numbers.
+    @returns [(port_num, boolean), ...], [boolean]
+    """
+    ports_open = [port_has_listener('0.0.0.0', p) for p in ports]
+    return zip(ports, ports_open), ports_open
+
+
+def _filter_tuples(services_states, state):
+    """Return a simple list from a list of tuples according to the condition
+
+    @param services_states: LIST of (string, boolean): service and running
+        state.
+    @param state: Boolean to match the tuple against.
+    @returns [LIST of strings] that matched the tuple RHS.
+    """
+    return [s for s, b in services_states if b == state]
+
+
+def workload_state_compare(current_workload_state, workload_state):
+    """ Return highest priority of two states"""
+    hierarchy = {'unknown': -1,
+                 'active': 0,
+                 'maintenance': 1,
+                 'waiting': 2,
+                 'blocked': 3,
+                 }
+
+    if hierarchy.get(workload_state) is None:
+        workload_state = 'unknown'
+    if hierarchy.get(current_workload_state) is None:
+        current_workload_state = 'unknown'
+
+    # Set workload_state based on hierarchy of statuses
+    if hierarchy.get(current_workload_state) > hierarchy.get(workload_state):
+        return current_workload_state
+    else:
+        return workload_state
+
+
+def incomplete_relation_data(configs, required_interfaces):
+    """Check complete contexts against required_interfaces
+    Return dictionary of incomplete relation data.
+
+    configs is an OSConfigRenderer object with configs registered
+
+    required_interfaces is a dictionary of required general interfaces
+    with dictionary values of possible specific interfaces.
+    Example:
+        required_interfaces = {'database': ['shared-db', 'pgsql-db']}
+
+    The interface is said to be satisfied if any one of the interfaces in the
+    list has a complete context.
+
+    Return dictionary of incomplete or missing required contexts with relation
+    status of interfaces and any missing data points.
+    Example:
+        {'message':
+             {'amqp': {'missing_data': ['rabbitmq_password'], 'related': True},
+              'zeromq-configuration': {'related': False}},
+         'identity':
+             {'identity-service': {'related': False}},
+         'database':
+             {'pgsql-db': {'related': False},
+              'shared-db': {'related': True}}}
+    """
+    complete_ctxts = configs.complete_contexts()
+    incomplete_relations = [
+        svc_type
+        for svc_type, interfaces in required_interfaces.items()
+        if not set(interfaces).intersection(complete_ctxts)]
+    return {
+        i: configs.get_incomplete_context_data(required_interfaces[i])
+        for i in incomplete_relations}
+
+
+def do_action_openstack_upgrade(package, upgrade_callback, configs):
+    """Perform action-managed OpenStack upgrade.
+
+    Upgrades packages to the configured openstack-origin version and sets
+    the corresponding action status as a result.
+
+    If the charm was installed from source we cannot upgrade it.
+    For backwards compatibility a config flag (action-managed-upgrade) must
+    be set for this code to run, otherwise a full service level upgrade will
+    fire on config-changed.
+
+    @param package: package name for determining if upgrade available
+    @param upgrade_callback: function callback to charm's upgrade function
+    @param configs: templating object derived from OSConfigRenderer class
+
+    @return: True if upgrade successful; False if upgrade failed or skipped
+    """
+    ret = False
+
+    if openstack_upgrade_available(package):
+        if config('action-managed-upgrade'):
+            juju_log('Upgrading OpenStack release')
+
+            try:
+                upgrade_callback(configs=configs)
+                action_set({'outcome': 'success, upgrade completed.'})
+                ret = True
+            except Exception:
+                action_set({'outcome': 'upgrade failed, see traceback.'})
+                action_set({'traceback': traceback.format_exc()})
+                action_fail('do_openstack_upgrade resulted in an '
+                            'unexpected error')
+        else:
+            action_set({'outcome': 'action-managed-upgrade config is '
+                                   'False, skipped upgrade.'})
+    else:
+        action_set({'outcome': 'no upgrade available.'})
+
+    return ret
+
+
+def remote_restart(rel_name, remote_service=None):
+    trigger = {
+        'restart-trigger': str(uuid.uuid4()),
+    }
+    if remote_service:
+        trigger['remote-service'] = remote_service
+    for rid in relation_ids(rel_name):
+        # This subordinate can be related to two separate services using
+        # different subordinate relations so only issue the restart if
+        # the principal is connected down the relation we think it is
+        if related_units(relid=rid):
+            relation_set(relation_id=rid,
+                         relation_settings=trigger,
+                         )
+
+
+def check_actually_paused(services=None, ports=None):
+    """Check that services listed in the services object and ports
+    are actually closed (not listened to), to verify that the unit is
+    properly paused.
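+
+    For example, a nominally paused unit with ssh still running would come
+    back as ('blocked', "Services should be paused but these services
+    running: ssh") (illustrative).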
+
+    @param services: See _extract_services_list_helper
+    @returns status, : string for status (None if okay)
+             message : string for problem for status_set
+    """
+    state = None
+    message = None
+    messages = []
+    if services is not None:
+        services = _extract_services_list_helper(services)
+        services_running, services_states = _check_running_services(services)
+        if any(services_states):
+            # there shouldn't be any running so this is a problem
+            messages.append("these services running: {}"
+                            .format(", ".join(
+                                _filter_tuples(services_running, True))))
+            state = "blocked"
+        ports_open, ports_open_bools = (
+            _check_listening_on_services_ports(services, True))
+        if any(ports_open_bools):
+            message_parts = {service: ", ".join([str(v) for v in open_ports])
+                             for service, open_ports in ports_open.items()}
+            message = ", ".join(
+                ["{}: [{}]".format(s, sp) for s, sp in message_parts.items()])
+            messages.append(
+                "these service:ports are open: {}".format(message))
+            state = 'blocked'
+    if ports is not None:
+        ports_open, bools = _check_listening_on_ports_list(ports)
+        if any(bools):
+            messages.append(
+                "these ports which should be closed, but are open: {}"
+                .format(", ".join([str(p) for p, v in ports_open if v])))
+            state = 'blocked'
+    if messages:
+        message = ("Services should be paused but {}"
+                   .format(", ".join(messages)))
+    return state, message
+
+
+def set_unit_paused():
+    """Set the unit to a paused state in the local kv() store.
+    This does NOT actually pause the unit
+    """
+    with unitdata.HookData()() as t:
+        kv = t[0]
+        kv.set('unit-paused', True)
+
+
+def clear_unit_paused():
+    """Clear the unit from a paused state in the local kv() store
+    This does NOT actually restart any services - it only clears the
+    local state.
+    """
+    with unitdata.HookData()() as t:
+        kv = t[0]
+        kv.set('unit-paused', False)
+
+
+def is_unit_paused_set():
+    """Return the state of the kv().get('unit-paused').
+    This does NOT verify that the unit really is paused.
+
+    To help with units that don't have HookData() (testing)
+    if it excepts, return False
+    """
+    try:
+        with unitdata.HookData()() as t:
+            kv = t[0]
+            # transform something truth-y into a Boolean.
+            return not(not(kv.get('unit-paused')))
+    except Exception:
+        return False
+
+
+def manage_payload_services(action, services=None, charm_func=None):
+    """Run an action against all services.
+
+    An optional charm_func() can be called. It should raise an Exception to
+    indicate that the function failed. If it was successful it should return
+    None or an optional message.
+
+    The signature for charm_func is:
+        charm_func() -> message: str
+
+    charm_func() is executed after any services are stopped, if supplied.
+
+    The services object can either be:
+      - None : no services were passed (an empty dict is returned)
+      - a list of strings
+      - A dictionary (optionally OrderedDict) {service_name: {'service': ..}}
+      - An array of [{'service': service_name, ...}, ...]
+
+    :param action: Action to run: pause, resume, start or stop.
+    :type action: str
+    :param services: See above
+    :type services: See above
+    :param charm_func: function to run for custom charm pausing.
+    :type charm_func: f()
+    :returns: Status boolean and list of messages
+    :rtype: (bool, [])
+    :raises: RuntimeError
+    """
+    actions = {
+        'pause': service_pause,
+        'resume': service_resume,
+        'start': service_start,
+        'stop': service_stop}
+    action = action.lower()
+    if action not in actions.keys():
+        raise RuntimeError(
+            "action: {} must be one of: {}".format(action,
+                                                   ', '.join(actions.keys())))
+    services = _extract_services_list_helper(services)
+    messages = []
+    success = True
+    if services:
+        for service in services.keys():
+            rc = actions[action](service)
+            if not rc:
+                success = False
+                messages.append("{} didn't {} cleanly.".format(service,
+                                                               action))
+    if charm_func:
+        try:
+            message = charm_func()
+            if message:
+                messages.append(message)
+        except Exception as e:
+            success = False
+            messages.append(str(e))
+    return success, messages
+
+
+def pause_unit(assess_status_func, services=None, ports=None,
+               charm_func=None):
+    """Pause a unit by stopping the services and setting 'unit-paused'
+    in the local kv() store.
+
+    Also checks that the services have stopped and ports are no longer
+    being listened to.
+
+    An optional charm_func() can be called that can either raise an
+    Exception or return a non-None message to indicate that the unit
+    didn't pause cleanly.
+
+    The signature for charm_func is:
+    charm_func() -> message: string
+
+    charm_func() is executed after any services are stopped, if supplied.
+
+    The services object can either be:
+      - None : no services were passed (an empty dict is returned)
+      - a list of strings
+      - A dictionary (optionally OrderedDict) {service_name: {'service': ..}}
+      - An array of [{'service': service_name, ...}, ...]
+
+    @param assess_status_func: (f() -> message: string | None) or None
+    @param services: OPTIONAL see above
+    @param ports: OPTIONAL list of ports
+    @param charm_func: function to run for custom charm pausing.
+    @returns None
+    @raises Exception(message) on an error for action_fail().
+    """
+    _, messages = manage_payload_services(
+        'pause',
+        services=services,
+        charm_func=charm_func)
+    set_unit_paused()
+    if assess_status_func:
+        message = assess_status_func()
+        if message:
+            messages.append(message)
+    if messages and not is_unit_upgrading_set():
+        raise Exception("Couldn't pause: {}".format("; ".join(messages)))
+
+
+def resume_unit(assess_status_func, services=None, ports=None,
+                charm_func=None):
+    """Resume a unit by starting the services and clearing 'unit-paused'
+    in the local kv() store.
+
+    Also checks that the services have started and ports are being
+    listened to.
+
+    An optional charm_func() can be called that can either raise an
+    Exception or return a non-None message to indicate that the unit
+    didn't resume cleanly.
+
+    The signature for charm_func is:
+    charm_func() -> message: string
+
+    charm_func() is executed after any services are started, if supplied.
+
+    The services object can either be:
+      - None : no services were passed (an empty dict is returned)
+      - a list of strings
+      - A dictionary (optionally OrderedDict) {service_name: {'service': ..}}
+      - An array of [{'service': service_name, ...}, ...]
+
+    @param assess_status_func: (f() -> message: string | None) or None
+    @param services: OPTIONAL see above
+    @param ports: OPTIONAL list of ports
+    @param charm_func: function to run for custom charm resuming.
+    @returns None
+    @raises Exception(message) on an error for action_fail().
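+
+    Example (illustrative only; the service name and the arguments to
+    make_assess_status_func() are hypothetical and charm-specific)::
+
+        resume_unit(make_assess_status_func(configs, required_interfaces),
+                    services=['my-service'])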
+    """
+    _, messages = manage_payload_services(
+        'resume',
+        services=services,
+        charm_func=charm_func)
+    clear_unit_paused()
+    if assess_status_func:
+        message = assess_status_func()
+        if message:
+            messages.append(message)
+    if messages:
+        raise Exception("Couldn't resume: {}".format("; ".join(messages)))
+
+
+def make_assess_status_func(*args, **kwargs):
+    """Creates an assess_status_func() suitable for handing to pause_unit()
+    and resume_unit().
+
+    This uses the _determine_os_workload_status(...) function to determine
+    what the workload_status should be for the unit. If the unit is
+    not in maintenance or active states, then the message is returned to
+    the caller. This is so an action that doesn't result in either a
+    complete pause or complete resume can signal failure with an action_fail()
+    """
+    def _assess_status_func():
+        state, message = _determine_os_workload_status(*args, **kwargs)
+        status_set(state, message)
+        if state not in ['maintenance', 'active']:
+            return message
+        return None
+
+    return _assess_status_func
+
+
+def pausable_restart_on_change(restart_map, stopstart=False,
+                               restart_functions=None):
+    """A restart_on_change decorator that checks to see if the unit is
+    paused. If it is paused then the decorated function doesn't fire.
+
+    This is provided as a helper, as the @restart_on_change(...) decorator
+    is in core.host, yet the openstack specific helpers are in this file
+    (contrib.openstack.utils). Thus, this needs to be an optional feature
+    for openstack charms (or charms that wish to use the openstack
+    pause/resume type features).
+
+    It is used as follows:
+
+        from contrib.openstack.utils import (
+            pausable_restart_on_change as restart_on_change)
+
+        @restart_on_change(restart_map, stopstart=<boolean>)
+        def some_hook(...):
+            pass
+
+    see core.host.restart_on_change() for more details.
+
+    Note restart_map can be a callable, in which case, restart_map is only
+    evaluated at runtime. This means that it is lazy and the underlying
+    function won't be called if the decorated function is never called. Note,
+    retains backwards compatibility for passing a non-callable dictionary.
+
+    @param restart_map: (optionally callable, which then returns the
+        restart_map) the restart map {conf_file: [services]}
+    @param stopstart: DEFAULT false; whether to stop, start or just restart
+    @returns decorator to use a restart_on_change with pausability
+    """
+    def wrap(f):
+        # py27 compatible nonlocal variable. When py3 only, replace with
+        # nonlocal keyword
+        __restart_map_cache = {'cache': None}
+
+        @functools.wraps(f)
+        def wrapped_f(*args, **kwargs):
+            if is_unit_paused_set():
+                return f(*args, **kwargs)
+            if __restart_map_cache['cache'] is None:
+                __restart_map_cache['cache'] = restart_map() \
+                    if callable(restart_map) else restart_map
+            # otherwise, normal restart_on_change functionality
+            return restart_on_change_helper(
+                (lambda: f(*args, **kwargs)), __restart_map_cache['cache'],
+                stopstart, restart_functions)
+        return wrapped_f
+    return wrap
+
+
+def ordered(orderme):
+    """Converts the provided dictionary into a collections.OrderedDict.
+
+    The items in the returned OrderedDict will be inserted based on the
+    natural sort order of the keys. Nested dictionaries will also be sorted
+    in order to ensure fully predictable ordering.
+
+    :param orderme: the dict to order
+    :return: collections.OrderedDict
+    :raises: ValueError: if `orderme` isn't a dict instance.
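+
+    Example (illustrating the recursive key ordering)::
+
+        >>> od = ordered({'b': 1, 'a': {'d': 2, 'c': 3}})
+        >>> list(od.keys())
+        ['a', 'b']
+        >>> list(od['a'].keys())
+        ['c', 'd']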
+    """
+    if not isinstance(orderme, dict):
+        raise ValueError('argument must be a dict type')
+
+    result = OrderedDict()
+    for k, v in sorted(six.iteritems(orderme), key=lambda x: x[0]):
+        if isinstance(v, dict):
+            result[k] = ordered(v)
+        else:
+            result[k] = v
+
+    return result
+
+
+def config_flags_parser(config_flags):
+    """Parses config flags string into dict.
+
+    This parsing method supports a few different formats for the config
+    flag values to be parsed:
+
+      1. A string in the simple format of key=value pairs, with the possibility
+         of specifying multiple key value pairs within the same string. For
+         example, a string in the format of 'key1=value1, key2=value2' will
+         return a dict of:
+
+             {'key1': 'value1', 'key2': 'value2'}.
+
+      2. A string in the above format, but supporting a comma-delimited list
+         of values for the same key. For example, a string in the format of
+         'key1=value1, key2=value3,value4,value5' will return a dict of:
+
+             {'key1': 'value1', 'key2': 'value3,value4,value5'}
+
+      3. A string containing a colon character (:) prior to an equal
+         character (=) will be treated as yaml and parsed as such. This can be
+         used to specify more complex key value pairs. For example,
+         a string in the format of 'key1: subkey1=value1, subkey2=value2' will
+         return a dict of:
+
+             {'key1': 'subkey1=value1, subkey2=value2'}
+
+    The provided config_flags string is therefore a comma-separated list of
+    key=value pairs, whose values may themselves be comma-separated lists.
+    """
+    # If we find a colon before an equals sign then treat it as yaml.
+    # Note: limit it to finding the colon first since this indicates assignment
+    # for inline yaml.
+    colon = config_flags.find(':')
+    equals = config_flags.find('=')
+    if colon > 0:
+        if colon < equals or equals < 0:
+            return ordered(yaml.safe_load(config_flags))
+
+    if config_flags.find('==') >= 0:
+        juju_log("config_flags is not in expected format (key=value)",
+                 level=ERROR)
+        raise OSContextError
+
+    # strip the following from each value.
+    post_strippers = ' ,'
+    # we strip any leading/trailing '=' or ' ' from the string then
+    # split on '='.
+    split = config_flags.strip(' =').split('=')
+    limit = len(split)
+    flags = OrderedDict()
+    for i in range(0, limit - 1):
+        current = split[i]
+        # avoid shadowing the builtin next()
+        next_entry = split[i + 1]
+        vindex = next_entry.rfind(',')
+        if (i == limit - 2) or (vindex < 0):
+            value = next_entry
+        else:
+            value = next_entry[:vindex]
+
+        if i == 0:
+            key = current
+        else:
+            # if this is not the first entry, expect an embedded key.
+            index = current.rfind(',')
+            if index < 0:
+                juju_log("Invalid config value(s) at index %s" % (i),
+                         level=ERROR)
+                raise OSContextError
+            key = current[index + 1:]
+
+        # Add to collection.
+        flags[key.strip(post_strippers)] = value.rstrip(post_strippers)
+
+    return flags
+
+
+def os_application_version_set(package):
+    '''Set version of application for Juju 2.0 and later'''
+    application_version = get_upstream_version(package)
+    # NOTE(jamespage) if not able to figure out package version, fallback to
+    # openstack codename version detection.
+    if not application_version:
+        application_version_set(os_release(package))
+    else:
+        application_version_set(application_version)
+
+
+def os_application_status_set(check_function):
+    """Run the supplied function and set the application status accordingly.
+
+    :param check_function: Function to run to get app states and messages.
+    :type check_function: function
+    """
+    state, message = check_function()
+    status_set(state, message, application=True)
+
+
+def enable_memcache(source=None, release=None, package=None):
+    """Determine if memcache should be enabled on the local unit
+
+    @param source: source string for charm
+    @param release: release of OpenStack currently deployed
+    @param package: package to derive OpenStack version deployed
+    @returns boolean Whether memcache should be enabled
+    """
+    _release = None
+    if release:
+        _release = release
+    else:
+        _release = os_release(package)
+    if not _release:
+        _release = get_os_codename_install_source(source)
+
+    return CompareOpenStackReleases(_release) >= 'mitaka'
+
+
+def token_cache_pkgs(source=None, release=None):
+    """Determine additional packages needed for token caching
+
+    @param source: source string for charm
+    @param release: release of OpenStack currently deployed
+    @returns List of package to enable token caching
+    """
+    packages = []
+    if enable_memcache(source=source, release=release):
+        packages.extend(['memcached', 'python-memcache'])
+    return packages
+
+
+def update_json_file(filename, items):
+    """Updates the json `filename` with a given dict.
+
+    :param filename: path to json file (e.g. /etc/glance/policy.json)
+    :param items: dict of items to update
+    """
+    if not items:
+        return
+
+    with open(filename) as fd:
+        policy = json.load(fd)
+
+    # Compare before and after and if nothing has changed don't write the file
+    # since that could cause unnecessary service restarts.
+    before = json.dumps(policy, indent=4, sort_keys=True)
+    policy.update(items)
+    after = json.dumps(policy, indent=4, sort_keys=True)
+    if before == after:
+        return
+
+    with open(filename, "w") as fd:
+        fd.write(after)
+
+
+@cached
+def snap_install_requested():
+    """Determine if installing from snaps
+
+    If openstack-origin is of the form snap:track/channel[/branch]
+    and channel is in SNAPS_CHANNELS return True.
+    """
+    origin = config('openstack-origin') or ""
+    if not origin.startswith('snap:'):
+        return False
+
+    _src = origin[5:]
+    if '/' in _src:
+        channel = _src.split('/')[1]
+    else:
+        # Handle snap:track with no channel
+        channel = 'stable'
+    return valid_snap_channel(channel)
+
+
+def get_snaps_install_info_from_origin(snaps, src, mode='classic'):
+    """Generate a dictionary of snap install information from origin
+
+    @param snaps: List of snaps
+    @param src: String of openstack-origin or source of the form
+        snap:track/channel
+    @param mode: String classic, devmode or jailmode
+    @returns: Dictionary of snaps with channels and modes
+    """
+
+    if not src.startswith('snap:'):
+        juju_log("Snap source is not a snap origin", 'WARN')
+        return {}
+
+    _src = src[5:]
+    channel = '--channel={}'.format(_src)
+
+    return {snap: {'channel': channel, 'mode': mode}
+            for snap in snaps}
+
+
+def install_os_snaps(snaps, refresh=False):
+    """Install OpenStack snaps from channel and with mode
+
+    @param snaps: Dictionary of snaps with channels and modes of the form:
+        {'snap_name': {'channel': 'snap_channel',
+                       'mode': 'snap_mode'}}
+        Where channel is a snapstore channel and mode is --classic, --devmode
+        or --jailmode.
+    @param refresh: If True, refresh the snaps rather than installing them
+    """
+
+    def _ensure_flag(flag):
+        if flag.startswith('--'):
+            return flag
+        return '--{}'.format(flag)
+
+    if refresh:
+        for snap in snaps.keys():
+            snap_refresh(snap,
+                         _ensure_flag(snaps[snap]['channel']),
+                         _ensure_flag(snaps[snap]['mode']))
+    else:
+        for snap in snaps.keys():
+            snap_install(snap,
+                         _ensure_flag(snaps[snap]['channel']),
+                         _ensure_flag(snaps[snap]['mode']))
+
+
+def set_unit_upgrading():
+    """Set the unit to an upgrading state in the local kv() store.
+    """
+    with unitdata.HookData()() as t:
+        kv = t[0]
+        kv.set('unit-upgrading', True)
+
+
+def clear_unit_upgrading():
+    """Clear the unit from an upgrading state in the local kv() store
+    """
+    with unitdata.HookData()() as t:
+        kv = t[0]
+        kv.set('unit-upgrading', False)
+
+
+def is_unit_upgrading_set():
+    """Return the state of the kv().get('unit-upgrading').
+
+    To help with units that don't have HookData() (testing)
+    if it raises an exception, return False
+    """
+    try:
+        with unitdata.HookData()() as t:
+            kv = t[0]
+            # transform something truth-y into a Boolean.
+            return bool(kv.get('unit-upgrading'))
+    except Exception:
+        return False
+
+
+def series_upgrade_prepare(pause_unit_helper=None, configs=None):
+    """Run common series upgrade prepare tasks.
+
+    :param pause_unit_helper: function: Function to pause unit
+    :param configs: OSConfigRenderer object: Configurations
+    :returns None:
+    """
+    set_unit_upgrading()
+    if pause_unit_helper and configs:
+        if not is_unit_paused_set():
+            pause_unit_helper(configs)
+
+
+def series_upgrade_complete(resume_unit_helper=None, configs=None):
+    """Run common series upgrade complete tasks.
+
+    :param resume_unit_helper: function: Function to resume unit
+    :param configs: OSConfigRenderer object: Configurations
+    :returns None:
+    """
+    clear_unit_paused()
+    clear_unit_upgrading()
+    if configs:
+        configs.write_all()
+        if resume_unit_helper:
+            resume_unit_helper(configs)
+
+
+def is_db_initialised():
+    """Check leader storage to see if database has been initialised.
+
+    :returns: Whether DB has been initialised
+    :rtype: bool
+    """
+    db_initialised = None
+    if leader_get('db-initialised') is None:
+        juju_log(
+            'db-initialised key missing, assuming db is not initialised',
+            'DEBUG')
+        db_initialised = False
+    else:
+        db_initialised = bool_from_string(leader_get('db-initialised'))
+    juju_log('Database initialised: {}'.format(db_initialised), 'DEBUG')
+    return db_initialised
+
+
+def set_db_initialised():
+    """Add flag to leader storage to indicate database has been initialised.
+    """
+    juju_log('Setting db-initialised to True', 'DEBUG')
+    leader_set({'db-initialised': True})
+
+
+def is_db_maintenance_mode(relid=None):
+    """Check relation data from notifications of db in maintenance mode.
+
+    :returns: Whether db has notified it is in maintenance mode.
+    :rtype: bool
+    """
+    juju_log('Checking for maintenance notifications', 'DEBUG')
+    if relid:
+        r_ids = [relid]
+    else:
+        r_ids = relation_ids('shared-db')
+    rids_units = [(r, u) for r in r_ids for u in related_units(r)]
+    notifications = []
+    for r_id, unit in rids_units:
+        settings = relation_get(unit=unit, rid=r_id)
+        for key, value in settings.items():
+            if value and key in DB_MAINTENANCE_KEYS:
+                juju_log(
+                    'Unit: {}, Key: {}, Value: {}'.format(unit, key, value),
+                    'DEBUG')
+                try:
+                    notifications.append(bool_from_string(value))
+                except ValueError:
+                    juju_log(
+                        'Could not discern bool from {}'.format(value),
+                        'WARN')
+    return True in notifications
+
+
+@cached
+def container_scoped_relations():
+    """Get all the container scoped relations
+
+    :returns: List of relation names
+    :rtype: List
+    """
+    md = metadata()
+    relations = []
+    for relation_type in ('provides', 'requires', 'peers'):
+        for relation in md.get(relation_type, []):
+            if md[relation_type][relation].get('scope') == 'container':
+                relations.append(relation)
+    return relations
+
+
+def is_db_ready(use_current_context=False, rel_name=None):
+    """Check remote database is ready to be used.
+
+    Database relations are expected to provide a list of 'allowed' units to
+    confirm that the database is ready for use by those units.
+
+    If db relation has provided this information and local unit is a member,
+    returns True otherwise False.
+
+    :param use_current_context: Whether to limit checks to current hook
+        context.
+    :type use_current_context: bool
+    :param rel_name: Name of relation to check
+    :type rel_name: string
+    :returns: Whether remote db is ready.
+    :rtype: bool
+    :raises: Exception
+    """
+    key = 'allowed_units'
+
+    rel_name = rel_name or 'shared-db'
+    this_unit = local_unit()
+
+    if use_current_context:
+        if relation_id() in relation_ids(rel_name):
+            rids_units = [(None, None)]
+        else:
+            raise Exception("use_current_context=True but not in {} "
+                            "rel hook contexts (currently in {})."
+                            .format(rel_name, relation_id()))
+    else:
+        rids_units = [(r_id, u)
+                      for r_id in relation_ids(rel_name)
+                      for u in related_units(r_id)]
+
+    for rid, unit in rids_units:
+        allowed_units = relation_get(rid=rid, unit=unit, attribute=key)
+        if allowed_units and this_unit in allowed_units.split():
+            juju_log("This unit ({}) is in allowed unit list from {}".format(
+                this_unit,
+                unit), 'DEBUG')
+            return True
+
+    juju_log("This unit was not found in any allowed unit list")
+    return False
+
+
+def is_expected_scale(peer_relation_name='cluster'):
+    """Query juju goal-state to determine whether our peer- and dependency-
+    relations are at the expected scale.
+
+    Useful for deferring per unit per relation housekeeping work until we are
+    ready to complete it successfully and without unnecessary repetition.
+
+    Always returns True if version of juju used does not support goal-state.
+
+    :param peer_relation_name: Name of peer relation
+    :type peer_relation_name: string
+    :returns: True or False
+    :rtype: bool
+    """
+    def _get_relation_id(rel_type):
+        return next((rid for rid in relation_ids(reltype=rel_type)), None)
+
+    Relation = namedtuple('Relation', 'rel_type rel_id')
+    peer_rid = _get_relation_id(peer_relation_name)
+    # Units with no peers should still have a peer relation.
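+    # (goal-state reports peers that have not yet joined, so even a single
+    # unit sees its peer relation; a missing relation id therefore means we
+    # cannot be at the expected scale yet.)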
+ if not peer_rid: + juju_log('Not at expected scale, no peer relation found', 'DEBUG') + return False + expected_relations = [ + Relation(rel_type='shared-db', rel_id=_get_relation_id('shared-db'))] + if expect_ha(): + expected_relations.append( + Relation( + rel_type='ha', + rel_id=_get_relation_id('ha'))) + juju_log( + 'Checking scale of {} relations'.format( + ','.join([r.rel_type for r in expected_relations])), + 'DEBUG') + try: + if (len(related_units(relid=peer_rid)) < + len(list(expected_peer_units()))): + return False + for rel in expected_relations: + if not rel.rel_id: + juju_log( + 'Expected to find {} relation, but it is missing'.format( + rel.rel_type), + 'DEBUG') + return False + # Goal state returns every unit even for container scoped + # relations but the charm only ever has a relation with + # the local unit. + if rel.rel_type in container_scoped_relations(): + expected_count = 1 + else: + expected_count = len( + list(expected_related_units(reltype=rel.rel_type))) + if len(related_units(relid=rel.rel_id)) < expected_count: + juju_log( + ('Not at expected scale, not enough units on {} ' + 'relation'.format(rel.rel_type)), + 'DEBUG') + return False + except NotImplementedError: + return True + juju_log('All checks have passed, unit is at expected scale', 'DEBUG') + return True + + +def get_peer_key(unit_name): + """Get the peer key for this unit. + + The peer key is the key a unit uses to publish its status down the peer + relation + + :param unit_name: Name of unit + :type unit_name: string + :returns: Peer key for given unit + :rtype: string + """ + return 'unit-state-{}'.format(unit_name.replace('/', '-')) + + +UNIT_READY = 'READY' +UNIT_NOTREADY = 'NOTREADY' +UNIT_UNKNOWN = 'UNKNOWN' +UNIT_STATES = [UNIT_READY, UNIT_NOTREADY, UNIT_UNKNOWN] + + +def inform_peers_unit_state(state, relation_name='cluster'): + """Inform peers of the state of this unit. + + :param state: State of unit to publish + :type state: string + :param relation_name: Name of relation to publish state on + :type relation_name: string + """ + if state not in UNIT_STATES: + raise ValueError( + "Setting invalid state {} for unit".format(state)) + for r_id in relation_ids(relation_name): + relation_set(relation_id=r_id, + relation_settings={ + get_peer_key(local_unit()): state}) + + +def get_peers_unit_state(relation_name='cluster'): + """Get the state of all peers. + + :param relation_name: Name of relation to check peers on. + :type relation_name: string + :returns: Unit states keyed on unit name. + :rtype: dict + :raises: ValueError + """ + r_ids = relation_ids(relation_name) + rids_units = [(r, u) for r in r_ids for u in related_units(r)] + unit_states = {} + for r_id, unit in rids_units: + settings = relation_get(unit=unit, rid=r_id) + unit_states[unit] = settings.get(get_peer_key(unit), UNIT_UNKNOWN) + if unit_states[unit] not in UNIT_STATES: + raise ValueError( + "Unit in unknown state {}".format(unit_states[unit])) + return unit_states + + +def are_peers_ready(relation_name='cluster'): + """Check if all peers are ready. + + :param relation_name: Name of relation to check peers on. + :type relation_name: string + :returns: Whether all units are ready. + :rtype: bool + """ + unit_states = get_peers_unit_state(relation_name) + return all(v == UNIT_READY for v in unit_states.values()) + + +def inform_peers_if_ready(check_unit_ready_func, relation_name='cluster'): + """Inform peers if this unit is ready. + + The check function should return a tuple (state, message). 
A state
+    of 'READY' indicates that the unit is ready.
+
+    :param check_unit_ready_func: Function to run to check readiness
+    :type check_unit_ready_func: function
+    :param relation_name: Name of relation to check peers on.
+    :type relation_name: string
+    """
+    unit_ready, msg = check_unit_ready_func()
+    if unit_ready:
+        state = UNIT_READY
+    else:
+        state = UNIT_NOTREADY
+    juju_log('Telling peers this unit is: {}'.format(state), 'DEBUG')
+    inform_peers_unit_state(state, relation_name)
+
+
+def check_api_unit_ready(check_db_ready=True):
+    """Check if this unit is ready.
+
+    :param check_db_ready: Include checks of database readiness.
+    :type check_db_ready: bool
+    :returns: Whether unit state is ready and status message
+    :rtype: (bool, str)
+    """
+    unit_state, msg = get_api_unit_status(check_db_ready=check_db_ready)
+    return unit_state == WORKLOAD_STATES.ACTIVE, msg
+
+
+def get_api_unit_status(check_db_ready=True):
+    """Return a workload status and message for this unit.
+
+    :param check_db_ready: Include checks of database readiness.
+    :type check_db_ready: bool
+    :returns: Workload state and message
+    :rtype: (WORKLOAD_STATES, str)
+    """
+    unit_state = WORKLOAD_STATES.ACTIVE
+    msg = 'Unit is ready'
+    if is_db_maintenance_mode():
+        unit_state = WORKLOAD_STATES.MAINTENANCE
+        msg = 'Database in maintenance mode.'
+    elif is_unit_paused_set():
+        unit_state = WORKLOAD_STATES.BLOCKED
+        msg = 'Unit paused.'
+    elif check_db_ready and not is_db_ready():
+        unit_state = WORKLOAD_STATES.WAITING
+        msg = 'Allowed_units list provided but this unit not present'
+    elif not is_db_initialised():
+        unit_state = WORKLOAD_STATES.WAITING
+        msg = 'Database not initialised'
+    elif not is_expected_scale():
+        unit_state = WORKLOAD_STATES.WAITING
+        msg = 'Charm and its dependencies not yet at expected scale'
+    juju_log(msg, 'DEBUG')
+    return unit_state, msg
+
+
+def check_api_application_ready():
+    """Check if this application is ready.
+
+    :returns: Whether application state is ready and status message
+    :rtype: (bool, str)
+    """
+    app_state, msg = get_api_application_status()
+    return app_state == WORKLOAD_STATES.ACTIVE, msg
+
+
+def get_api_application_status():
+    """Return a workload status and message for this application.
+
+    :returns: Workload state and message
+    :rtype: (WORKLOAD_STATES, str)
+    """
+    app_state, msg = get_api_unit_status()
+    if app_state == WORKLOAD_STATES.ACTIVE:
+        if are_peers_ready():
+            return WORKLOAD_STATES.ACTIVE, 'Application Ready'
+        else:
+            return WORKLOAD_STATES.WAITING, 'Some units are not ready'
+    return app_state, msg
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/vaultlocker.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/vaultlocker.py
new file mode 100644
index 0000000000000000000000000000000000000000..4ee6c1dba910b824467c2c34e0a3d3d0f3fd906d
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/openstack/vaultlocker.py
@@ -0,0 +1,179 @@
+# Copyright 2018 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import json
+import os
+
+import charmhelpers.contrib.openstack.alternatives as alternatives
+import charmhelpers.contrib.openstack.context as context
+
+import charmhelpers.core.hookenv as hookenv
+import charmhelpers.core.host as host
+import charmhelpers.core.templating as templating
+import charmhelpers.core.unitdata as unitdata
+
+VAULTLOCKER_BACKEND = 'charm-vaultlocker'
+
+
+class VaultKVContext(context.OSContextGenerator):
+    """Vault KV context for interaction with vault-kv interfaces"""
+    interfaces = ['secrets-storage']
+
+    def __init__(self, secret_backend=None):
+        super(VaultKVContext, self).__init__()
+        self.secret_backend = (
+            secret_backend or 'charm-{}'.format(hookenv.service_name())
+        )
+
+    def __call__(self):
+        try:
+            import hvac
+        except ImportError:
+            # BUG: #1862085 - if the relation is made to vault, but the
+            # 'encrypt' option is not set, then the charm errors with an
+            # import warning. This catches that, logs a warning, and returns
+            # with an empty context.
+            hookenv.log("VaultKVContext: trying to use the hvac python "
+                        "module but it's not available. Is the "
+                        "secrets-storage relation made, but the encrypt "
+                        "option not set?",
+                        level=hookenv.WARNING)
+            # return an empty context on hvac import error
+            return {}
+        ctxt = {}
+        # NOTE(hopem): see https://bugs.launchpad.net/charm-helpers/+bug/1849323
+        db = unitdata.kv()
+        # currently known-good secret-id
+        secret_id = db.get('secret-id')
+
+        for relation_id in hookenv.relation_ids(self.interfaces[0]):
+            for unit in hookenv.related_units(relation_id):
+                data = hookenv.relation_get(unit=unit,
+                                            rid=relation_id)
+                vault_url = data.get('vault_url')
+                role_id = data.get('{}_role_id'.format(hookenv.local_unit()))
+                token = data.get('{}_token'.format(hookenv.local_unit()))
+
+                if all([vault_url, role_id, token]):
+                    token = json.loads(token)
+                    vault_url = json.loads(vault_url)
+
+                    # Tokens may change when secret_id's are being
+                    # reissued - if so use token to get new secret_id
+                    token_success = False
+                    try:
+                        secret_id = retrieve_secret_id(
+                            url=vault_url,
+                            token=token
+                        )
+                        token_success = True
+                    except hvac.exceptions.InvalidRequest:
+                        # Try next
+                        pass
+
+                    if token_success:
+                        db.set('secret-id', secret_id)
+                        db.flush()
+
+                        ctxt['vault_url'] = vault_url
+                        ctxt['role_id'] = json.loads(role_id)
+                        ctxt['secret_id'] = secret_id
+                        ctxt['secret_backend'] = self.secret_backend
+                        vault_ca = data.get('vault_ca')
+                        if vault_ca:
+                            ctxt['vault_ca'] = json.loads(vault_ca)
+
+                        self.complete = True
+                        break
+                    else:
+                        if secret_id:
+                            ctxt['vault_url'] = vault_url
+                            ctxt['role_id'] = json.loads(role_id)
+                            ctxt['secret_id'] = secret_id
+                            ctxt['secret_backend'] = self.secret_backend
+                            vault_ca = data.get('vault_ca')
+                            if vault_ca:
+                                ctxt['vault_ca'] = json.loads(vault_ca)
+
+            if self.complete:
+                break
+
+        if ctxt:
+            self.complete = True
+
+        return ctxt
+
+
+def write_vaultlocker_conf(context, priority=100):
+    """Write vaultlocker configuration to disk and install alternative
+
+    :param context: Dict of data from vault-kv relation
+    :ptype: context: dict
+    :param priority: Priority of alternative configuration
+    :ptype: priority: int"""
+    charm_vl_path = "/var/lib/charm/{}/vaultlocker.conf".format(
+        hookenv.service_name()
+    )
+    host.mkdir(os.path.dirname(charm_vl_path), perms=0o700)
+    templating.render(source='vaultlocker.conf.j2',
+                      target=charm_vl_path,
+                      context=context, perms=0o600)
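+    # install_alternative() registers the rendered file with
+    # update-alternatives so that /etc/vaultlocker/vaultlocker.conf can be
+    # shared safely by multiple charms on the same machine.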
+    alternatives.install_alternative('vaultlocker.conf',
+                                     '/etc/vaultlocker/vaultlocker.conf',
+                                     charm_vl_path, priority)
+
+
+def vault_relation_complete(backend=None):
+    """Determine whether vault relation is complete
+
+    :param backend: Name of secrets backend requested
+    :ptype backend: string
+    :returns: whether the relation to vault is complete
+    :rtype: bool"""
+    try:
+        import hvac
+    except ImportError:
+        return False
+    try:
+        vault_kv = VaultKVContext(secret_backend=backend or VAULTLOCKER_BACKEND)
+        vault_kv()
+        return vault_kv.complete
+    except hvac.exceptions.InvalidRequest:
+        return False
+
+
+# TODO: contribute a high level unwrap method to hvac that works
+def retrieve_secret_id(url, token):
+    """Retrieve a response-wrapped secret_id from Vault
+
+    :param url: URL to Vault Server
+    :ptype url: str
+    :param token: One shot Token to use
+    :ptype token: str
+    :returns: secret_id to use for Vault Access
+    :rtype: str"""
+    import hvac
+    try:
+        # hvac 0.10.1 changed default adapter to JSONAdapter
+        client = hvac.Client(url=url, token=token, adapter=hvac.adapters.Request)
+    except AttributeError:
+        # hvac < 0.6.2 doesn't have adapter but uses the same response interface
+        client = hvac.Client(url=url, token=token)
+    else:
+        # hvac < 0.9.2 assumes adapter is an instance, so doesn't instantiate
+        if not isinstance(client.adapter, hvac.adapters.Request):
+            client.adapter = hvac.adapters.Request(base_uri=url, token=token)
+    response = client._post('/v1/sys/wrapping/unwrap')
+    if response.status_code == 200:
+        data = response.json()
+        return data['data']['secret_id']
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/peerstorage/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/peerstorage/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..a8fa60c2a3080d088b8e0abae370aa355c80ec56
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/peerstorage/__init__.py
@@ -0,0 +1,267 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import json
+import six
+
+from charmhelpers.core.hookenv import relation_id as current_relation_id
+from charmhelpers.core.hookenv import (
+    is_relation_made,
+    relation_ids,
+    relation_get as _relation_get,
+    local_unit,
+    relation_set as _relation_set,
+    leader_get as _leader_get,
+    leader_set,
+    is_leader,
+)
+
+
+"""
+This helper provides functions to support use of a peer relation
+for basic key/value storage, with the added benefit that all storage
+can be replicated across peer units.
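+
+Where Juju leadership is available the storage is transparently backed by
+leader settings; on older versions of Juju the peer relation itself is used.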
+
+Requirement to use:
+
+To use this, the "peer_echo()" method has to be called from the peer
+relation's relation-changed hook:
+
+@hooks.hook("cluster-relation-changed")  # Adapt this to your peer relation name
+def cluster_relation_changed():
+    peer_echo()
+
+Once this is done, you can use peer storage from anywhere:
+
+@hooks.hook("some-hook")
+def some_hook():
+    # You can store and retrieve key/values this way:
+    if is_relation_made("cluster"):  # from charmhelpers.core.hookenv
+        # There are peers available so we can work with peer storage
+        peer_store("mykey", "myvalue")
+        value = peer_retrieve("mykey")
+        print(value)
+    else:
+        print("No peers joined the relation, cannot share key/values :(")
+"""
+
+
+def leader_get(attribute=None, rid=None):
+    """Wrapper to ensure that settings are migrated from the peer relation.
+
+    This is to support upgrading an environment that does not support
+    Juju leadership election to one that does.
+
+    If a setting is not extant in the leader-get but is on the relation-get
+    peer rel, it is migrated and marked as such so that it is not re-migrated.
+    """
+    migration_key = '__leader_get_migrated_settings__'
+    if not is_leader():
+        return _leader_get(attribute=attribute)
+
+    settings_migrated = False
+    leader_settings = _leader_get(attribute=attribute)
+    previously_migrated = _leader_get(attribute=migration_key)
+
+    if previously_migrated:
+        migrated = set(json.loads(previously_migrated))
+    else:
+        migrated = set([])
+
+    try:
+        if migration_key in leader_settings:
+            del leader_settings[migration_key]
+    except TypeError:
+        pass
+
+    if attribute:
+        if attribute in migrated:
+            return leader_settings
+
+        # If attribute not present in leader db, check if this unit has set
+        # the attribute in the peer relation
+        if not leader_settings:
+            peer_setting = _relation_get(attribute=attribute, unit=local_unit(),
+                                         rid=rid)
+            if peer_setting:
+                leader_set(settings={attribute: peer_setting})
+                leader_settings = peer_setting
+
+        if leader_settings:
+            settings_migrated = True
+            migrated.add(attribute)
+    else:
+        r_settings = _relation_get(unit=local_unit(), rid=rid)
+        if r_settings:
+            for key in set(r_settings.keys()).difference(migrated):
+                # Leader setting wins
+                if not leader_settings.get(key):
+                    leader_settings[key] = r_settings[key]
+
+                settings_migrated = True
+                migrated.add(key)
+
+            if settings_migrated:
+                leader_set(**leader_settings)
+
+    if migrated and settings_migrated:
+        migrated = json.dumps(list(migrated))
+        leader_set(settings={migration_key: migrated})
+
+    return leader_settings
+
+
+def relation_set(relation_id=None, relation_settings=None, **kwargs):
+    """Attempt to use leader-set if supported in the current version of Juju,
+    otherwise falls back on relation-set.
+
+    Note that we only attempt to use leader-set if the provided relation_id is
+    a peer relation id or no relation id is provided (in which case we assume
+    we are within the peer relation context).
+    """
+    try:
+        if relation_id in relation_ids('cluster'):
+            return leader_set(settings=relation_settings, **kwargs)
+        else:
+            raise NotImplementedError
+    except NotImplementedError:
+        return _relation_set(relation_id=relation_id,
+                             relation_settings=relation_settings, **kwargs)
+
+
+def relation_get(attribute=None, unit=None, rid=None):
+    """Attempt to use leader-get if supported in the current version of Juju,
+    otherwise falls back on relation-get.
+
+    Note that we only attempt to use leader-get if the provided rid is a peer
+    relation id or no relation id is provided (in which case we assume we are
+    within the peer relation context).
+    """
+    try:
+        if rid in relation_ids('cluster'):
+            return leader_get(attribute, rid)
+        else:
+            raise NotImplementedError
+    except NotImplementedError:
+        return _relation_get(attribute=attribute, rid=rid, unit=unit)
+
+
+def peer_retrieve(key, relation_name='cluster'):
+    """Retrieve a named key from peer relation `relation_name`."""
+    cluster_rels = relation_ids(relation_name)
+    if len(cluster_rels) > 0:
+        cluster_rid = cluster_rels[0]
+        return relation_get(attribute=key, rid=cluster_rid,
+                            unit=local_unit())
+    else:
+        raise ValueError('Unable to detect '
+                         'peer relation {}'.format(relation_name))
+
+
+def peer_retrieve_by_prefix(prefix, relation_name='cluster', delimiter='_',
+                            inc_list=None, exc_list=None):
+    """Retrieve k/v pairs given a prefix and filter using {inc,exc}_list"""
+    inc_list = inc_list if inc_list else []
+    exc_list = exc_list if exc_list else []
+    peerdb_settings = peer_retrieve('-', relation_name=relation_name)
+    matched = {}
+    if peerdb_settings is None:
+        return matched
+    for k, v in peerdb_settings.items():
+        full_prefix = prefix + delimiter
+        if k.startswith(full_prefix):
+            new_key = k.replace(full_prefix, '')
+            if new_key in exc_list:
+                continue
+            if new_key in inc_list or len(inc_list) == 0:
+                matched[new_key] = v
+    return matched
+
+
+def peer_store(key, value, relation_name='cluster'):
+    """Store the key/value pair on the named peer relation `relation_name`."""
+    cluster_rels = relation_ids(relation_name)
+    if len(cluster_rels) > 0:
+        cluster_rid = cluster_rels[0]
+        relation_set(relation_id=cluster_rid,
+                     relation_settings={key: value})
+    else:
+        raise ValueError('Unable to detect '
+                         'peer relation {}'.format(relation_name))
+
+
+def peer_echo(includes=None, force=False):
+    """Echo filtered attributes back onto the same relation for storage.
+
+    This is a requirement to use the peerstorage module - it needs to be called
+    from the peer relation's changed hook.
+
+    If Juju leader support exists this will be a noop unless force is True.
+    """
+    try:
+        is_leader()
+    except NotImplementedError:
+        pass
+    else:
+        if not force:
+            return  # NOOP if leader-election is supported
+
+    # Use original non-leader calls
+    relation_get = _relation_get
+    relation_set = _relation_set
+
+    rdata = relation_get()
+    echo_data = {}
+    if includes is None:
+        echo_data = rdata.copy()
+        for ex in ['private-address', 'public-address']:
+            if ex in echo_data:
+                echo_data.pop(ex)
+    else:
+        for attribute, value in six.iteritems(rdata):
+            for include in includes:
+                if include in attribute:
+                    echo_data[attribute] = value
+    if len(echo_data) > 0:
+        relation_set(relation_settings=echo_data)
+
+
+def peer_store_and_set(relation_id=None, peer_relation_name='cluster',
+                       peer_store_fatal=False, relation_settings=None,
+                       delimiter='_', **kwargs):
+    """Store passed-in arguments both in argument relation and in peer storage.
+
+    It functions like doing relation_set() and peer_store() at the same time,
+    with the same data.
+
+    @param relation_id: the id of the relation to store the data on. Defaults
+        to the current relation.
+    @param peer_store_fatal: if True, raise an exception should the peer
+        storage not be available."""
+
+    relation_settings = relation_settings if relation_settings else {}
+    relation_set(relation_id=relation_id,
+                 relation_settings=relation_settings,
+                 **kwargs)
+    if is_relation_made(peer_relation_name):
+        for key, value in six.iteritems(dict(list(kwargs.items()) +
+                                             list(relation_settings.items()))):
+            key_prefix = relation_id or current_relation_id()
+            peer_store(key_prefix + delimiter + key,
+                       value,
+                       relation_name=peer_relation_name)
+    else:
+        if peer_store_fatal:
+            raise ValueError('Unable to detect '
+                             'peer relation {}'.format(peer_relation_name))
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/python.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/python.py
new file mode 100644
index 0000000000000000000000000000000000000000..84cba8c4eba34fdd705f4ee39628ebd33b5175a2
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/python.py
@@ -0,0 +1,21 @@
+# Copyright 2014-2019 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import absolute_import
+
+# deprecated aliases for backwards compatibility
+from charmhelpers.fetch.python import debug  # noqa
+from charmhelpers.fetch.python import packages  # noqa
+from charmhelpers.fetch.python import rpdb  # noqa
+from charmhelpers.fetch.python import version  # noqa
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/saltstack/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/saltstack/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..d74f4039379b608b5c076fd96c906ff5237f1f83
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/saltstack/__init__.py
@@ -0,0 +1,116 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""Charm Helpers saltstack - declare the state of your machines.
+
+This helper enables you to declare your machine state, rather than
+program it procedurally (and have to test each change to your procedures).
+Your install hook can be as simple as::
+
+    {{{
+    from charmhelpers.contrib.saltstack import (
+        install_salt_support,
+        update_machine_state,
+    )
+
+
+    def install():
+        install_salt_support()
+        update_machine_state('machine_states/dependencies.yaml')
+        update_machine_state('machine_states/installed.yaml')
+    }}}
+
+and won't need to change (nor will its tests) when you change the machine
+state.
+
+It uses a python package called salt-minion which allows various formats for
+specifying resources, such as::
+
+    {{{
+    /srv/{{ basedir }}:
+        file.directory:
+            - group: ubunet
+            - user: ubunet
+            - require:
+                - user: ubunet
+            - recurse:
+                - user
+                - group
+
+    ubunet:
+        group.present:
+            - gid: 1500
+        user.present:
+            - uid: 1500
+            - gid: 1500
+            - createhome: False
+            - require:
+                - group: ubunet
+    }}}
+
+The docs for all the different state definitions are at:
+    http://docs.saltstack.com/ref/states/all/
+
+
+TODO:
+  * Add test helpers which will ensure that machine state definitions
+    are functionally (but not necessarily logically) correct (i.e. getting
+    salt to parse all state defs).
+  * Add a link to a public bootstrap charm example / blogpost.
+  * Find a way to obviate the need to use the grains['charm_dir'] syntax
+    in templates.
+"""
+# Copyright 2013 Canonical Ltd.
+#
+# Authors:
+#  Charm Helpers Developers
+import subprocess
+
+import charmhelpers.contrib.templating.contexts
+import charmhelpers.core.host
+import charmhelpers.core.hookenv
+import charmhelpers.fetch
+
+
+salt_grains_path = '/etc/salt/grains'
+
+
+def install_salt_support(from_ppa=True):
+    """Installs the salt-minion helper for machine state.
+
+    By default the salt-minion package is installed from
+    the saltstack PPA. If from_ppa is False you must ensure
+    that the salt-minion package is available in the apt cache.
+    """
+    if from_ppa:
+        subprocess.check_call([
+            '/usr/bin/add-apt-repository',
+            '--yes',
+            'ppa:saltstack/salt',
+        ])
+        subprocess.check_call(['/usr/bin/apt-get', 'update'])
+    # We install salt-common as salt-minion would run the salt-minion
+    # daemon.
+    charmhelpers.fetch.apt_install('salt-common')
+
+
+def update_machine_state(state_path):
+    """Update the machine state using the provided state declaration."""
+    charmhelpers.contrib.templating.contexts.juju_state_to_yaml(
+        salt_grains_path)
+    subprocess.check_call([
+        'salt-call',
+        '--local',
+        'state.template',
+        state_path,
+    ])
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/ssl/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/ssl/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..1d238b529e44ad2761d86e96b4845589f8e951c4
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/ssl/__init__.py
@@ -0,0 +1,92 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import subprocess
+from charmhelpers.core import hookenv
+
+
+def generate_selfsigned(keyfile, certfile, keysize="1024", config=None, subject=None, cn=None):
+    """Generate selfsigned SSL keypair
+
+    You must provide one of the 3 optional arguments:
+    config, subject or cn
+    If more than one is provided the leftmost will be used
+
+    Arguments:
+    keyfile -- (required) full path to the keyfile to be created
+    certfile -- (required) full path to the certfile to be created
+    keysize -- (optional) SSL key length
+    config -- (optional) openssl configuration file
+    subject -- (optional) dictionary with SSL subject variables
+    cn -- (optional) certificate common name
+
+    Required keys in subject dict:
+    cn -- Common name (e.g. FQDN)
+
+    Optional keys in subject dict
+    country -- Country Name (2 letter code)
+    state -- State or Province Name (full name)
+    locality -- Locality Name (eg, city)
+    organization -- Organization Name (eg, company)
+    organizational_unit -- Organizational Unit Name (eg, section)
+    email -- Email Address
+    """
+
+    cmd = []
+    if config:
+        cmd = ["/usr/bin/openssl", "req", "-new", "-newkey",
+               "rsa:{}".format(keysize), "-days", "365", "-nodes", "-x509",
+               "-keyout", keyfile,
+               "-out", certfile, "-config", config]
+    elif subject:
+        ssl_subject = ""
+        if "country" in subject:
+            ssl_subject = ssl_subject + "/C={}".format(subject["country"])
+        if "state" in subject:
+            ssl_subject = ssl_subject + "/ST={}".format(subject["state"])
+        if "locality" in subject:
+            ssl_subject = ssl_subject + "/L={}".format(subject["locality"])
+        if "organization" in subject:
+            ssl_subject = ssl_subject + "/O={}".format(subject["organization"])
+        if "organizational_unit" in subject:
+            ssl_subject = ssl_subject + "/OU={}".format(subject["organizational_unit"])
+        if "cn" in subject:
+            ssl_subject = ssl_subject + "/CN={}".format(subject["cn"])
+        else:
+            hookenv.log("When using the \"subject\" argument you must "
+                        "provide the \"cn\" field at the very least")
+            return False
+        if "email" in subject:
+            ssl_subject = ssl_subject + "/emailAddress={}".format(subject["email"])
+
+        cmd = ["/usr/bin/openssl", "req", "-new", "-newkey",
+               "rsa:{}".format(keysize), "-days", "365", "-nodes", "-x509",
+               "-keyout", keyfile,
+               "-out", certfile, "-subj", ssl_subject]
+    elif cn:
+        cmd = ["/usr/bin/openssl", "req", "-new", "-newkey",
+               "rsa:{}".format(keysize), "-days", "365", "-nodes", "-x509",
+               "-keyout", keyfile,
+               "-out", certfile, "-subj", "/CN={}".format(cn)]
+
+    if not cmd:
+        hookenv.log("No config, subject or cn provided, "
+                    "unable to generate self signed SSL certificates")
+        return False
+    try:
+        subprocess.check_call(cmd)
+        return True
+    except Exception as e:
+        print("Execution of openssl command failed:\n{}".format(e))
+        return False
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/ssl/service.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/ssl/service.py
new file mode 100644
index 0000000000000000000000000000000000000000..06b534ffa3b0de97a56bac4060c39a9a4485b0c2
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/ssl/service.py
@@ -0,0 +1,277 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import os +from os.path import join as path_join +from os.path import exists +import subprocess + +from charmhelpers.core.hookenv import log, DEBUG + +STD_CERT = "standard" + +# Mysql server is fairly picky about cert creation +# and types, spec its creation separately for now. +MYSQL_CERT = "mysql" + + +class ServiceCA(object): + + default_expiry = str(365 * 2) + default_ca_expiry = str(365 * 6) + + def __init__(self, name, ca_dir, cert_type=STD_CERT): + self.name = name + self.ca_dir = ca_dir + self.cert_type = cert_type + + ############### + # Hook Helper API + @staticmethod + def get_ca(type=STD_CERT): + service_name = os.environ['JUJU_UNIT_NAME'].split('/')[0] + ca_path = os.path.join(os.environ['CHARM_DIR'], 'ca') + ca = ServiceCA(service_name, ca_path, type) + ca.init() + return ca + + @classmethod + def get_service_cert(cls, type=STD_CERT): + service_name = os.environ['JUJU_UNIT_NAME'].split('/')[0] + ca = cls.get_ca() + crt, key = ca.get_or_create_cert(service_name) + return crt, key, ca.get_ca_bundle() + + ############### + + def init(self): + log("initializing service ca", level=DEBUG) + if not exists(self.ca_dir): + self._init_ca_dir(self.ca_dir) + self._init_ca() + + @property + def ca_key(self): + return path_join(self.ca_dir, 'private', 'cacert.key') + + @property + def ca_cert(self): + return path_join(self.ca_dir, 'cacert.pem') + + @property + def ca_conf(self): + return path_join(self.ca_dir, 'ca.cnf') + + @property + def signing_conf(self): + return path_join(self.ca_dir, 'signing.cnf') + + def _init_ca_dir(self, ca_dir): + os.mkdir(ca_dir) + for i in ['certs', 'crl', 'newcerts', 'private']: + sd = path_join(ca_dir, i) + if not exists(sd): + os.mkdir(sd) + + if not exists(path_join(ca_dir, 'serial')): + with open(path_join(ca_dir, 'serial'), 'w') as fh: + fh.write('02\n') + + if not exists(path_join(ca_dir, 'index.txt')): + with open(path_join(ca_dir, 'index.txt'), 'w') as fh: + fh.write('') + + def _init_ca(self): + """Generate the root ca's cert and key. 
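+
+        On first use this also writes ca.cnf and signing.cnf (from the
+        templates at the bottom of this module) into the CA directory.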
+        """
+        if not exists(path_join(self.ca_dir, 'ca.cnf')):
+            with open(path_join(self.ca_dir, 'ca.cnf'), 'w') as fh:
+                fh.write(
+                    CA_CONF_TEMPLATE % (self.get_conf_variables()))
+
+        if not exists(path_join(self.ca_dir, 'signing.cnf')):
+            with open(path_join(self.ca_dir, 'signing.cnf'), 'w') as fh:
+                fh.write(
+                    SIGNING_CONF_TEMPLATE % (self.get_conf_variables()))
+
+        if exists(self.ca_cert) or exists(self.ca_key):
+            raise RuntimeError("init() called when CA already exists")
+        cmd = ['openssl', 'req', '-config', self.ca_conf,
+               '-x509', '-nodes', '-newkey', 'rsa',
+               '-days', self.default_ca_expiry,
+               '-keyout', self.ca_key, '-out', self.ca_cert,
+               '-outform', 'PEM']
+        output = subprocess.check_output(cmd, stderr=subprocess.STDOUT)
+        log("CA Init:\n %s" % output, level=DEBUG)
+
+    def get_conf_variables(self):
+        return dict(
+            org_name="juju",
+            org_unit_name="%s service" % self.name,
+            common_name=self.name,
+            ca_dir=self.ca_dir)
+
+    def get_or_create_cert(self, common_name):
+        if common_name in self:
+            return self.get_certificate(common_name)
+        return self.create_certificate(common_name)
+
+    def create_certificate(self, common_name):
+        if common_name in self:
+            return self.get_certificate(common_name)
+        key_p = path_join(self.ca_dir, "certs", "%s.key" % common_name)
+        crt_p = path_join(self.ca_dir, "certs", "%s.crt" % common_name)
+        csr_p = path_join(self.ca_dir, "certs", "%s.csr" % common_name)
+        self._create_certificate(common_name, key_p, csr_p, crt_p)
+        return self.get_certificate(common_name)
+
+    def get_certificate(self, common_name):
+        if common_name not in self:
+            raise ValueError("No certificate for %s" % common_name)
+        key_p = path_join(self.ca_dir, "certs", "%s.key" % common_name)
+        crt_p = path_join(self.ca_dir, "certs", "%s.crt" % common_name)
+        with open(crt_p) as fh:
+            crt = fh.read()
+        with open(key_p) as fh:
+            key = fh.read()
+        return crt, key
+
+    def __contains__(self, common_name):
+        crt_p = path_join(self.ca_dir, "certs", "%s.crt" % common_name)
+        return exists(crt_p)
+
+    def _create_certificate(self, common_name, key_p, csr_p, crt_p):
+        template_vars = self.get_conf_variables()
+        template_vars['common_name'] = common_name
+        subj = '/O=%(org_name)s/OU=%(org_unit_name)s/CN=%(common_name)s' % (
+            template_vars)
+
+        log("CA Create Cert %s" % common_name, level=DEBUG)
+        cmd = ['openssl', 'req', '-sha1', '-newkey', 'rsa:2048',
+               '-nodes', '-days', self.default_expiry,
+               '-keyout', key_p, '-out', csr_p, '-subj', subj]
+        subprocess.check_call(cmd, stderr=subprocess.PIPE)
+        cmd = ['openssl', 'rsa', '-in', key_p, '-out', key_p]
+        subprocess.check_call(cmd, stderr=subprocess.PIPE)
+
+        log("CA Sign Cert %s" % common_name, level=DEBUG)
+        if self.cert_type == MYSQL_CERT:
+            cmd = ['openssl', 'x509', '-req',
+                   '-in', csr_p, '-days', self.default_expiry,
+                   '-CA', self.ca_cert, '-CAkey', self.ca_key,
+                   '-set_serial', '01', '-out', crt_p]
+        else:
+            cmd = ['openssl', 'ca', '-config', self.signing_conf,
+                   '-extensions', 'req_extensions',
+                   '-days', self.default_expiry, '-notext',
+                   '-in', csr_p, '-out', crt_p, '-subj', subj, '-batch']
+        log("running %s" % " ".join(cmd), level=DEBUG)
+        subprocess.check_call(cmd, stderr=subprocess.PIPE)
+
+    def get_ca_bundle(self):
+        with open(self.ca_cert) as fh:
+            return fh.read()
+
+
+CA_CONF_TEMPLATE = """
+[ ca ]
+default_ca = CA_default
+
+[ CA_default ]
+dir = %(ca_dir)s
+policy = policy_match
+database = $dir/index.txt
+serial = $dir/serial
+certs = $dir/certs
+crl_dir = $dir/crl
+new_certs_dir = $dir/newcerts
+certificate = $dir/cacert.pem
+private_key = $dir/private/cacert.key +RANDFILE = $dir/private/.rand +default_md = default + +[ req ] +default_bits = 1024 +default_md = sha1 + +prompt = no +distinguished_name = ca_distinguished_name + +x509_extensions = ca_extensions + +[ ca_distinguished_name ] +organizationName = %(org_name)s +organizationalUnitName = %(org_unit_name)s Certificate Authority + + +[ policy_match ] +countryName = optional +stateOrProvinceName = optional +organizationName = match +organizationalUnitName = optional +commonName = supplied + +[ ca_extensions ] +basicConstraints = critical,CA:true +subjectKeyIdentifier = hash +authorityKeyIdentifier = keyid:always, issuer +keyUsage = cRLSign, keyCertSign +""" + + +SIGNING_CONF_TEMPLATE = """ +[ ca ] +default_ca = CA_default + +[ CA_default ] +dir = %(ca_dir)s +policy = policy_match +database = $dir/index.txt +serial = $dir/serial +certs = $dir/certs +crl_dir = $dir/crl +new_certs_dir = $dir/newcerts +certificate = $dir/cacert.pem +private_key = $dir/private/cacert.key +RANDFILE = $dir/private/.rand +default_md = default + +[ req ] +default_bits = 1024 +default_md = sha1 + +prompt = no +distinguished_name = req_distinguished_name + +x509_extensions = req_extensions + +[ req_distinguished_name ] +organizationName = %(org_name)s +organizationalUnitName = %(org_unit_name)s machine resources +commonName = %(common_name)s + +[ policy_match ] +countryName = optional +stateOrProvinceName = optional +organizationName = match +organizationalUnitName = optional +commonName = supplied + +[ req_extensions ] +basicConstraints = CA:false +subjectKeyIdentifier = hash +authorityKeyIdentifier = keyid:always, issuer +keyUsage = digitalSignature, keyEncipherment, keyAgreement +extendedKeyUsage = serverAuth, clientAuth +""" diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/storage/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/storage/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..d7567b863e3a5ad2b7a7f44958b4166e0c3d346b --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/storage/__init__.py @@ -0,0 +1,13 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/storage/linux/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/storage/linux/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..d7567b863e3a5ad2b7a7f44958b4166e0c3d346b --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/storage/linux/__init__.py @@ -0,0 +1,13 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/storage/linux/bcache.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/storage/linux/bcache.py new file mode 100644 index 0000000000000000000000000000000000000000..605991e16a4238f4ed46fbf0791048187f375c5c --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/storage/linux/bcache.py @@ -0,0 +1,74 @@ +# Copyright 2017 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +import os +import json + +from charmhelpers.core.hookenv import log + +stats_intervals = ['stats_day', 'stats_five_minute', + 'stats_hour', 'stats_total'] + +SYSFS = '/sys' + + +class Bcache(object): + """Bcache behaviour + """ + + def __init__(self, cachepath): + self.cachepath = cachepath + + @classmethod + def fromdevice(cls, devname): + return cls('{}/block/{}/bcache'.format(SYSFS, devname)) + + def __str__(self): + return self.cachepath + + def get_stats(self, interval): + """Get cache stats + """ + intervaldir = 'stats_{}'.format(interval) + path = "{}/{}".format(self.cachepath, intervaldir) + out = dict() + for elem in os.listdir(path): + out[elem] = open('{}/{}'.format(path, elem)).read().strip() + return out + + +def get_bcache_fs(): + """Return all cache sets + """ + cachesetroot = "{}/fs/bcache".format(SYSFS) + try: + dirs = os.listdir(cachesetroot) + except OSError: + log("No bcache fs found") + return [] + cacheset = set([Bcache('{}/{}'.format(cachesetroot, d)) for d in dirs if not d.startswith('register')]) + return cacheset + + +def get_stats_action(cachespec, interval): + """Action for getting bcache statistics for a given cachespec. + Cachespec can either be a device name, eg. 'sdb', which will retrieve + cache stats for the given device, or 'global', which will retrieve stats + for all cachesets + """ + if cachespec == 'global': + caches = get_bcache_fs() + else: + caches = [Bcache.fromdevice(cachespec)] + res = dict((c.cachepath, c.get_stats(interval)) for c in caches) + return json.dumps(res, indent=4, separators=(',', ': ')) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/storage/linux/ceph.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/storage/linux/ceph.py new file mode 100644 index 0000000000000000000000000000000000000000..814d5c72bc246c9536d3bb5975ade1c7fbe989e4 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/storage/linux/ceph.py @@ -0,0 +1,1810 @@ +# Copyright 2014-2015 Canonical Limited. 
+# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# +# Copyright 2012 Canonical Ltd. +# +# This file is sourced from lp:openstack-charm-helpers +# +# Authors: +# James Page +# Adam Gandelman +# + +import collections +import errno +import hashlib +import math +import six + +import os +import shutil +import json +import time +import uuid + +from subprocess import ( + check_call, + check_output, + CalledProcessError, +) +from charmhelpers.core.hookenv import ( + config, + service_name, + local_unit, + relation_get, + relation_ids, + relation_set, + related_units, + log, + DEBUG, + INFO, + WARNING, + ERROR, +) +from charmhelpers.core.host import ( + mount, + mounts, + service_start, + service_stop, + service_running, + umount, + cmp_pkgrevno, +) +from charmhelpers.fetch import ( + apt_install, +) +from charmhelpers.core.unitdata import kv + +from charmhelpers.core.kernel import modprobe +from charmhelpers.contrib.openstack.utils import config_flags_parser + +KEYRING = '/etc/ceph/ceph.client.{}.keyring' +KEYFILE = '/etc/ceph/ceph.client.{}.key' + +CEPH_CONF = """[global] +auth supported = {auth} +keyring = {keyring} +mon host = {mon_hosts} +log to syslog = {use_syslog} +err to syslog = {use_syslog} +clog to syslog = {use_syslog} +""" + +# The number of placement groups per OSD to target for placement group +# calculations. This number is chosen as 100 due to the ceph PG Calc +# documentation recommending to choose 100 for clusters which are not +# expected to increase in the foreseeable future. Since the majority of the +# calculations are done on deployment, target the case of non-expanding +# clusters as the default. +DEFAULT_PGS_PER_OSD_TARGET = 100 +DEFAULT_POOL_WEIGHT = 10.0 +LEGACY_PG_COUNT = 200 +DEFAULT_MINIMUM_PGS = 2 +AUTOSCALER_DEFAULT_PGS = 32 + + +class OsdPostUpgradeError(Exception): + """Error class for OSD post-upgrade operations.""" + pass + + +class OSDSettingConflict(Exception): + """Error class for conflicting osd setting requests.""" + pass + + +class OSDSettingNotAllowed(Exception): + """Error class for a disallowed setting.""" + pass + + +OSD_SETTING_EXCEPTIONS = (OSDSettingConflict, OSDSettingNotAllowed) + +OSD_SETTING_WHITELIST = [ + 'osd heartbeat grace', + 'osd heartbeat interval', +] + + +def _order_dict_by_key(rdict): + """Convert a dictionary into an OrderedDict sorted by key. + + :param rdict: Dictionary to be ordered. + :type rdict: dict + :returns: Ordered Dictionary. + :rtype: collections.OrderedDict + """ + return collections.OrderedDict(sorted(rdict.items(), key=lambda k: k[0])) + + +def get_osd_settings(relation_name): + """Consolidate requested osd settings from all clients. + + Consolidate requested osd settings from all clients. Check that the + requested setting is on the whitelist and it does not conflict with + any other requested settings. 
+ + :returns: Dictionary of settings + :rtype: dict + + :raises: OSDSettingNotAllowed + :raises: OSDSettingConflict + """ + rel_ids = relation_ids(relation_name) + osd_settings = {} + for relid in rel_ids: + for unit in related_units(relid): + unit_settings = relation_get('osd-settings', unit, relid) or '{}' + unit_settings = json.loads(unit_settings) + for key, value in unit_settings.items(): + if key not in OSD_SETTING_WHITELIST: + msg = 'Illegal settings "{}"'.format(key) + raise OSDSettingNotAllowed(msg) + if key in osd_settings: + if osd_settings[key] != unit_settings[key]: + msg = 'Conflicting settings for "{}"'.format(key) + raise OSDSettingConflict(msg) + else: + osd_settings[key] = value + return _order_dict_by_key(osd_settings) + + +def send_osd_settings(): + """Pass on requested OSD settings to osd units.""" + try: + settings = get_osd_settings('client') + except OSD_SETTING_EXCEPTIONS as e: + # There is a problem with the settings, not passing them on. Update + # status will notify the user. + log(e, level=ERROR) + return + data = { + 'osd-settings': json.dumps(settings, sort_keys=True)} + for relid in relation_ids('osd'): + relation_set(relation_id=relid, + relation_settings=data) + + +def validator(value, valid_type, valid_range=None): + """ + Used to validate these: http://docs.ceph.com/docs/master/rados/operations/pools/#set-pool-values + Example input: + validator(value=1, + valid_type=int, + valid_range=[0, 2]) + This says I'm testing value=1. It must be an int inclusive in [0,2] + + :param value: The value to validate + :param valid_type: The type that value should be. + :param valid_range: A range of values that value can assume. + :return: + """ + assert isinstance(value, valid_type), "{} is not a {}".format( + value, + valid_type) + if valid_range is not None: + assert isinstance(valid_range, list), \ + "valid_range must be a list, was given {}".format(valid_range) + # If we're dealing with strings + if isinstance(value, six.string_types): + assert value in valid_range, \ + "{} is not in the list {}".format(value, valid_range) + # Integer, float should have a min and max + else: + if len(valid_range) != 2: + raise ValueError( + "Invalid valid_range list of {} for {}. " + "List must be [min,max]".format(valid_range, value)) + assert value >= valid_range[0], \ + "{} is less than minimum allowed value of {}".format( + value, valid_range[0]) + assert value <= valid_range[1], \ + "{} is greater than maximum allowed value of {}".format( + value, valid_range[1]) + + +class PoolCreationError(Exception): + """ + A custom error to inform the caller that a pool creation failed. Provides an error message + """ + + def __init__(self, message): + super(PoolCreationError, self).__init__(message) + + +class Pool(object): + """ + An object oriented approach to Ceph pool creation. This base class is inherited by ReplicatedPool and ErasurePool. + Do not call create() on this base class as it will not do anything. Instantiate a child class and call create(). + """ + + def __init__(self, service, name): + self.service = service + self.name = name + + # Create the pool if it doesn't exist already + # To be implemented by subclasses + def create(self): + pass + + def add_cache_tier(self, cache_pool, mode): + """ + Adds a new cache tier to an existing pool. + :param cache_pool: six.string_types. The cache tier pool name to add. + :param mode: six.string_types. The caching mode to use for this pool. 
valid range = ["readonly", "writeback"] + :return: None + """ + # Check the input types and values + validator(value=cache_pool, valid_type=six.string_types) + validator(value=mode, valid_type=six.string_types, valid_range=["readonly", "writeback"]) + + check_call(['ceph', '--id', self.service, 'osd', 'tier', 'add', self.name, cache_pool]) + check_call(['ceph', '--id', self.service, 'osd', 'tier', 'cache-mode', cache_pool, mode]) + check_call(['ceph', '--id', self.service, 'osd', 'tier', 'set-overlay', self.name, cache_pool]) + check_call(['ceph', '--id', self.service, 'osd', 'pool', 'set', cache_pool, 'hit_set_type', 'bloom']) + + def remove_cache_tier(self, cache_pool): + """ + Removes a cache tier from Ceph. Flushes all dirty objects from writeback pools and waits for that to complete. + :param cache_pool: six.string_types. The cache tier pool name to remove. + :return: None + """ + # read-only is easy, writeback is much harder + mode = get_cache_mode(self.service, cache_pool) + if mode == 'readonly': + check_call(['ceph', '--id', self.service, 'osd', 'tier', 'cache-mode', cache_pool, 'none']) + check_call(['ceph', '--id', self.service, 'osd', 'tier', 'remove', self.name, cache_pool]) + + elif mode == 'writeback': + pool_forward_cmd = ['ceph', '--id', self.service, 'osd', 'tier', + 'cache-mode', cache_pool, 'forward'] + if cmp_pkgrevno('ceph-common', '10.1') >= 0: + # Jewel added a mandatory flag + pool_forward_cmd.append('--yes-i-really-mean-it') + + check_call(pool_forward_cmd) + # Flush the cache and wait for it to return + check_call(['rados', '--id', self.service, '-p', cache_pool, 'cache-flush-evict-all']) + check_call(['ceph', '--id', self.service, 'osd', 'tier', 'remove-overlay', self.name]) + check_call(['ceph', '--id', self.service, 'osd', 'tier', 'remove', self.name, cache_pool]) + + def get_pgs(self, pool_size, percent_data=DEFAULT_POOL_WEIGHT, + device_class=None): + """Return the number of placement groups to use when creating the pool. + + Returns the number of placement groups which should be specified when + creating the pool. This is based upon the calculation guidelines + provided by the Ceph Placement Group Calculator (located online at + http://ceph.com/pgcalc/). + + The number of placement groups are calculated using the following: + + (Target PGs per OSD) * (OSD #) * (%Data) + ---------------------------------------- + (Pool size) + + Per the upstream guidelines, the OSD # should really be considered + based on the number of OSDs which are eligible to be selected by the + pool. Since the pool creation doesn't specify any of CRUSH set rules, + the default rule will be dependent upon the type of pool being + created (replicated or erasure). + + This code makes no attempt to determine the number of OSDs which can be + selected for the specific rule, rather it is left to the user to tune + in the form of 'expected-osd-count' config option. + + :param pool_size: int. pool_size is either the number of replicas for + replicated pools or the K+M sum for erasure coded pools + :param percent_data: float. the percentage of data that is expected to + be contained in the pool for the specific OSD set. Default value + is to assume 10% of the data is for this pool, which is a + relatively low % of the data but allows for the pg_num to be + increased. NOTE: the default is primarily to handle the scenario + where related charms requiring pools has not been upgraded to + include an update to indicate their relative usage of the pools. + :param device_class: str. 
class of storage to use for basis of pgs + calculation; ceph supports nvme, ssd and hdd by default based + on presence of devices of each type in the deployment. + :return: int. The number of pgs to use. + """ + + # Note: This calculation follows the approach that is provided + # by the Ceph PG Calculator located at http://ceph.com/pgcalc/. + validator(value=pool_size, valid_type=int) + + # Ensure that percent data is set to something - even with a default + # it can be set to None, which would wreak havoc below. + if percent_data is None: + percent_data = DEFAULT_POOL_WEIGHT + + # If the expected-osd-count is specified, then use the max between + # the expected-osd-count and the actual osd_count + osd_list = get_osds(self.service, device_class) + expected = config('expected-osd-count') or 0 + + if osd_list: + if device_class: + osd_count = len(osd_list) + else: + osd_count = max(expected, len(osd_list)) + + # Log a message to provide some insight if the calculations claim + # to be off because someone is setting the expected count and + # there are more OSDs in reality. Try to make a proper guess + # based upon the cluster itself. + if not device_class and expected and osd_count != expected: + log("Found more OSDs than provided expected count. " + "Using the actual count instead", INFO) + elif expected: + # Use the expected-osd-count in older ceph versions to allow for + # a more accurate pg calculations + osd_count = expected + else: + # NOTE(james-page): Default to 200 for older ceph versions + # which don't support OSD query from cli + return LEGACY_PG_COUNT + + percent_data /= 100.0 + target_pgs_per_osd = config('pgs-per-osd') or DEFAULT_PGS_PER_OSD_TARGET + num_pg = (target_pgs_per_osd * osd_count * percent_data) // pool_size + + # NOTE: ensure a sane minimum number of PGS otherwise we don't get any + # reasonable data distribution in minimal OSD configurations + if num_pg < DEFAULT_MINIMUM_PGS: + num_pg = DEFAULT_MINIMUM_PGS + + # The CRUSH algorithm has a slight optimization for placement groups + # with powers of 2 so find the nearest power of 2. If the nearest + # power of 2 is more than 25% below the original value, the next + # highest value is used. To do this, find the nearest power of 2 such + # that 2^n <= num_pg, check to see if its within the 25% tolerance. + exponent = math.floor(math.log(num_pg, 2)) + nearest = 2 ** exponent + if (num_pg - nearest) > (num_pg * 0.25): + # Choose the next highest power of 2 since the nearest is more + # than 25% below the original value. + return int(nearest * 2) + else: + return int(nearest) + + +class ReplicatedPool(Pool): + def __init__(self, service, name, pg_num=None, replicas=2, + percent_data=10.0, app_name=None): + super(ReplicatedPool, self).__init__(service=service, name=name) + self.replicas = replicas + self.percent_data = percent_data + if pg_num: + # Since the number of placement groups were specified, ensure + # that there aren't too many created. 
+ max_pgs = self.get_pgs(self.replicas, 100.0) + self.pg_num = min(pg_num, max_pgs) + else: + self.pg_num = self.get_pgs(self.replicas, percent_data) + if app_name: + self.app_name = app_name + else: + self.app_name = 'unknown' + + def create(self): + if not pool_exists(self.service, self.name): + nautilus_or_later = cmp_pkgrevno('ceph-common', '14.2.0') >= 0 + # Create it + if nautilus_or_later: + cmd = [ + 'ceph', '--id', self.service, 'osd', 'pool', 'create', + '--pg-num-min={}'.format( + min(AUTOSCALER_DEFAULT_PGS, self.pg_num) + ), + self.name, str(self.pg_num) + ] + else: + cmd = [ + 'ceph', '--id', self.service, 'osd', 'pool', 'create', + self.name, str(self.pg_num) + ] + + try: + check_call(cmd) + # Set the pool replica size + update_pool(client=self.service, + pool=self.name, + settings={'size': str(self.replicas)}) + if nautilus_or_later: + # Ensure we set the expected pool ratio + update_pool(client=self.service, + pool=self.name, + settings={'target_size_ratio': str(self.percent_data / 100.0)}) + try: + set_app_name_for_pool(client=self.service, + pool=self.name, + name=self.app_name) + except CalledProcessError: + log('Could not set app name for pool {}'.format(self.name), level=WARNING) + if 'pg_autoscaler' in enabled_manager_modules(): + try: + enable_pg_autoscale(self.service, self.name) + except CalledProcessError as e: + log('Could not configure auto scaling for pool {}: {}'.format( + self.name, e), level=WARNING) + except CalledProcessError: + raise + + +# Default jerasure erasure coded pool +class ErasurePool(Pool): + def __init__(self, service, name, erasure_code_profile="default", + percent_data=10.0, app_name=None): + super(ErasurePool, self).__init__(service=service, name=name) + self.erasure_code_profile = erasure_code_profile + self.percent_data = percent_data + if app_name: + self.app_name = app_name + else: + self.app_name = 'unknown' + + def create(self): + if not pool_exists(self.service, self.name): + # Try to find the erasure profile information in order to properly + # size the number of placement groups. The size of an erasure + # coded placement group is calculated as k+m. 
+            erasure_profile = get_erasure_profile(self.service,
+                                                  self.erasure_code_profile)
+
+            # Check for errors
+            if erasure_profile is None:
+                msg = ("Failed to discover erasure profile named "
+                       "{}".format(self.erasure_code_profile))
+                log(msg, level=ERROR)
+                raise PoolCreationError(msg)
+            if 'k' not in erasure_profile or 'm' not in erasure_profile:
+                # Error
+                msg = ("Unable to find k (data chunks) or m (coding chunks) "
+                       "in erasure profile {}".format(erasure_profile))
+                log(msg, level=ERROR)
+                raise PoolCreationError(msg)
+
+            k = int(erasure_profile['k'])
+            m = int(erasure_profile['m'])
+            pgs = self.get_pgs(k + m, self.percent_data)
+            nautilus_or_later = cmp_pkgrevno('ceph-common', '14.2.0') >= 0
+            # Create it
+            if nautilus_or_later:
+                cmd = [
+                    'ceph', '--id', self.service, 'osd', 'pool', 'create',
+                    '--pg-num-min={}'.format(
+                        min(AUTOSCALER_DEFAULT_PGS, pgs)
+                    ),
+                    self.name, str(pgs), str(pgs),
+                    'erasure', self.erasure_code_profile
+                ]
+            else:
+                cmd = [
+                    'ceph', '--id', self.service, 'osd', 'pool', 'create',
+                    self.name, str(pgs), str(pgs),
+                    'erasure', self.erasure_code_profile
+                ]
+
+            try:
+                check_call(cmd)
+                try:
+                    set_app_name_for_pool(client=self.service,
+                                          pool=self.name,
+                                          name=self.app_name)
+                except CalledProcessError:
+                    log('Could not set app name for pool {}'.format(self.name), level=WARNING)
+                if nautilus_or_later:
+                    # Ensure we set the expected pool ratio
+                    update_pool(client=self.service,
+                                pool=self.name,
+                                settings={'target_size_ratio': str(self.percent_data / 100.0)})
+                if 'pg_autoscaler' in enabled_manager_modules():
+                    try:
+                        enable_pg_autoscale(self.service, self.name)
+                    except CalledProcessError as e:
+                        log('Could not configure auto scaling for pool {}: {}'.format(
+                            self.name, e), level=WARNING)
+            except CalledProcessError:
+                raise
+
+
+def enabled_manager_modules():
+    """Return a list of enabled manager modules.
+
+    :rtype: List[str]
+    """
+    cmd = ['ceph', 'mgr', 'module', 'ls']
+    try:
+        modules = check_output(cmd)
+        if six.PY3:
+            modules = modules.decode('UTF-8')
+    except CalledProcessError as e:
+        log("Failed to list ceph modules: {}".format(e), WARNING)
+        return []
+    modules = json.loads(modules)
+    return modules['enabled_modules']
+
+
+def enable_pg_autoscale(service, pool_name):
+    """
+    Enable Ceph's PG autoscaler for the specified pool.
+
+    :param service: six.string_types. The Ceph user name to run the command under
+    :param pool_name: six.string_types. The name of the pool to enable autoscaling on
+    :raise: CalledProcessError if the command fails
+    """
+    check_call(['ceph', '--id', service, 'osd', 'pool', 'set', pool_name, 'pg_autoscale_mode', 'on'])
+
+
+def get_mon_map(service):
+    """
+    Returns the current monitor map.
+    :param service: six.string_types. The Ceph user name to run the command under
+    :return: dict. The parsed monitor map.
+    :raise: ValueError if the monmap fails to parse.
+        Also raises CalledProcessError if our ceph command fails
+    """
+    try:
+        mon_status = check_output(['ceph', '--id', service,
+                                   'mon_status', '--format=json'])
+        if six.PY3:
+            mon_status = mon_status.decode('UTF-8')
+        try:
+            return json.loads(mon_status)
+        except ValueError as v:
+            log("Unable to parse mon_status json: {}. Error: {}"
+                .format(mon_status, str(v)))
+            raise
+    except CalledProcessError as e:
+        log("mon_status command failed with message: {}"
+            .format(str(e)))
+        raise
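+
+
+# A minimal usage sketch (added for illustration; not part of the upstream
+# charmhelpers API): pulling the names of the monitors currently in quorum
+# out of the map returned by get_mon_map(). The 'admin' user name is an
+# assumption.
+def _example_quorum_names(service='admin'):
+    """Example only: return names of the monitors currently in quorum."""
+    mon_map = get_mon_map(service)
+    # 'quorum' is a list of monitor ranks; resolve them to monitor names.
+    in_quorum = set(mon_map.get('quorum', []))
+    return [mon['name'] for mon in mon_map['monmap']['mons']
+            if mon['rank'] in in_quorum]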
+
+
+def hash_monitor_names(service):
+    """
+    Uses the get_mon_map() function to get information about the monitor
+    cluster. Hash the name of each monitor and return a sorted list of the
+    monitor hashes in ascending order.
+    :param service: six.string_types. The Ceph user name to run the command under
+    :rtype: list. Sorted list of sha224-hashed monitor names, or None if the
+        monitor map contains no monitors. Each entry in the monitor map
+        looks like: {
+        'name': 'ip-172-31-13-165',
+        'rank': 0,
+        'addr': '172.31.13.165:6789/0'}
+    """
+    try:
+        hash_list = []
+        monitor_list = get_mon_map(service=service)
+        if monitor_list['monmap']['mons']:
+            for mon in monitor_list['monmap']['mons']:
+                hash_list.append(
+                    hashlib.sha224(mon['name'].encode('utf-8')).hexdigest())
+            return sorted(hash_list)
+        else:
+            return None
+    except (ValueError, CalledProcessError):
+        raise
+
+
+def monitor_key_delete(service, key):
+    """
+    Delete a key and value pair from the monitor cluster.
+    :param service: six.string_types. The Ceph user name to run the command under
+    :param key: six.string_types. The key to delete.
+    """
+    try:
+        check_output(
+            ['ceph', '--id', service,
+             'config-key', 'del', str(key)])
+    except CalledProcessError as e:
+        log("Monitor config-key del failed with message: {}".format(
+            e.output))
+        raise
+
+
+def monitor_key_set(service, key, value):
+    """
+    Sets a key value pair on the monitor cluster.
+    :param service: six.string_types. The Ceph user name to run the command under
+    :param key: six.string_types. The key to set.
+    :param value: The value to set. This will be converted to a string
+        before setting
+    """
+    try:
+        check_output(
+            ['ceph', '--id', service,
+             'config-key', 'put', str(key), str(value)])
+    except CalledProcessError as e:
+        log("Monitor config-key put failed with message: {}".format(
+            e.output))
+        raise
+
+
+def monitor_key_get(service, key):
+    """
+    Gets the value of an existing key in the monitor cluster.
+    :param service: six.string_types. The Ceph user name to run the command under
+    :param key: six.string_types. The key to search for.
+    :return: Returns the value of that key or None if not found.
+    """
+    try:
+        output = check_output(
+            ['ceph', '--id', service,
+             'config-key', 'get', str(key)]).decode('UTF-8')
+        return output
+    except CalledProcessError as e:
+        log("Monitor config-key get failed with message: {}".format(
+            e.output))
+        return None
+
+
+def monitor_key_exists(service, key):
+    """
+    Searches for the existence of a key in the monitor cluster.
+    :param service: six.string_types. The Ceph user name to run the command under
+    :param key: six.string_types. The key to search for
+    :return: Returns True if the key exists, False if not.
+    :raise: CalledProcessError if an unknown error occurs
+    """
+    try:
+        check_call(
+            ['ceph', '--id', service,
+             'config-key', 'exists', str(key)])
+        # I can return true here regardless because Ceph returns
+        # ENOENT if the key wasn't found
+        return True
+    except CalledProcessError as e:
+        if e.returncode == errno.ENOENT:
+            return False
+        else:
+            log("Unknown error from ceph config-get exists: {} {}".format(
+                e.returncode, e.output))
+            raise
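+
+
+# A minimal usage sketch (added for illustration; not part of the upstream
+# charmhelpers API): the config-key helpers above behave like a small
+# key/value store kept by the monitor cluster. The 'admin' user name and
+# the key/value pair below are assumptions.
+def _example_config_key_roundtrip(service='admin'):
+    """Example only: set, check, read back and delete a config-key entry."""
+    monitor_key_set(service, 'example-flag', 'on')
+    assert monitor_key_exists(service, 'example-flag')
+    value = monitor_key_get(service, 'example-flag')
+    monitor_key_delete(service, 'example-flag')
+    return value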
+
+
+def get_erasure_profile(service, name):
+    """
+    Get an existing erasure code profile if it exists.
+
+    :param service: six.string_types. The Ceph user name to run the command under
+    :param name: six.string_types. The name of the profile to look up
+    :return: dict of the decoded profile, or None if the profile does not
+        exist or cannot be parsed
+    """
+    try:
+        out = check_output(['ceph', '--id', service,
+                            'osd', 'erasure-code-profile', 'get',
+                            name, '--format=json'])
+        if six.PY3:
+            out = out.decode('UTF-8')
+        return json.loads(out)
+    except (CalledProcessError, OSError, ValueError):
+        return None
+
+
+def pool_set(service, pool_name, key, value):
+    """
+    Sets a value for a RADOS pool in ceph.
+    :param service: six.string_types. The Ceph user name to run the command under
+    :param pool_name: six.string_types
+    :param key: six.string_types
+    :param value:
+    :return: None. Can raise CalledProcessError
+    """
+    cmd = ['ceph', '--id', service, 'osd', 'pool', 'set', pool_name, key,
+           str(value).lower()]
+    try:
+        check_call(cmd)
+    except CalledProcessError:
+        raise
+
+
+def snapshot_pool(service, pool_name, snapshot_name):
+    """
+    Snapshots a RADOS pool in ceph.
+    :param service: six.string_types. The Ceph user name to run the command under
+    :param pool_name: six.string_types
+    :param snapshot_name: six.string_types
+    :return: None. Can raise CalledProcessError
+    """
+    cmd = ['ceph', '--id', service, 'osd', 'pool', 'mksnap', pool_name, snapshot_name]
+    try:
+        check_call(cmd)
+    except CalledProcessError:
+        raise
+
+
+def remove_pool_snapshot(service, pool_name, snapshot_name):
+    """
+    Remove a snapshot from a RADOS pool in ceph.
+    :param service: six.string_types. The Ceph user name to run the command under
+    :param pool_name: six.string_types
+    :param snapshot_name: six.string_types
+    :return: None. Can raise CalledProcessError
+    """
+    cmd = ['ceph', '--id', service, 'osd', 'pool', 'rmsnap', pool_name, snapshot_name]
+    try:
+        check_call(cmd)
+    except CalledProcessError:
+        raise
+
+
+def set_pool_quota(service, pool_name, max_bytes=None, max_objects=None):
+    """
+    Set byte and/or object quotas on a RADOS pool in ceph.
+    :param service: The Ceph user name to run the command under
+    :type service: str
+    :param pool_name: Name of pool
+    :type pool_name: str
+    :param max_bytes: Maximum bytes quota to apply
+    :type max_bytes: int
+    :param max_objects: Maximum objects quota to apply
+    :type max_objects: int
+    :raises: subprocess.CalledProcessError
+    """
+    cmd = ['ceph', '--id', service, 'osd', 'pool', 'set-quota', pool_name]
+    if max_bytes:
+        cmd = cmd + ['max_bytes', str(max_bytes)]
+    if max_objects:
+        cmd = cmd + ['max_objects', str(max_objects)]
+    check_call(cmd)
+
+
+def remove_pool_quota(service, pool_name):
+    """
+    Remove the byte quota on a RADOS pool in ceph.
+    :param service: six.string_types. The Ceph user name to run the command under
+    :param pool_name: six.string_types
+    :return: None. Can raise CalledProcessError
+    """
+    cmd = ['ceph', '--id', service, 'osd', 'pool', 'set-quota', pool_name, 'max_bytes', '0']
+    try:
+        check_call(cmd)
+    except CalledProcessError:
+        raise
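+
+
+# A minimal usage sketch (added for illustration; not part of the upstream
+# charmhelpers API): applying a 1 GiB byte quota with set_pool_quota() and
+# clearing it again. The user and pool names are assumptions.
+def _example_pool_quota_cycle(service='admin', pool_name='mypool'):
+    """Example only: apply a byte quota to a pool, then remove it."""
+    set_pool_quota(service, pool_name, max_bytes=1024 ** 3)
+    remove_pool_quota(service, pool_name)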
+
+
+def remove_erasure_profile(service, profile_name):
+    """
+    Remove an erasure code profile.
+    :param service: six.string_types. The Ceph user name to run the command under
+    :param profile_name: six.string_types
+    :return: None. Can raise CalledProcessError
+    """
+    cmd = ['ceph', '--id', service, 'osd', 'erasure-code-profile', 'rm',
+           profile_name]
+    try:
+        check_call(cmd)
+    except CalledProcessError:
+        raise
+
+
+def create_erasure_profile(service, profile_name, erasure_plugin_name='jerasure',
+                           failure_domain='host',
+                           data_chunks=2, coding_chunks=1,
+                           locality=None, durability_estimator=None,
+                           device_class=None):
+    """
+    Create a new erasure code profile if one does not already exist for it. Updates
+    the profile if it exists. Please see http://docs.ceph.com/docs/master/rados/operations/erasure-code-profile/
+    for more details
+    :param service: six.string_types. The Ceph user name to run the command under
+    :param profile_name: six.string_types
+    :param erasure_plugin_name: six.string_types
+    :param failure_domain: six.string_types. One of ['chassis', 'datacenter',
+        'host', 'osd', 'pdu', 'pod', 'rack', 'region', 'room', 'root', 'row']
+    :param data_chunks: int
+    :param coding_chunks: int
+    :param locality: int
+    :param durability_estimator: int
+    :param device_class: six.string_types
+    :return: None. Can raise CalledProcessError
+    """
+    # Ensure this failure_domain is allowed by Ceph
+    validator(failure_domain, six.string_types,
+              ['chassis', 'datacenter', 'host', 'osd', 'pdu', 'pod', 'rack', 'region', 'room', 'root', 'row'])
+
+    cmd = ['ceph', '--id', service, 'osd', 'erasure-code-profile', 'set', profile_name,
+           'plugin=' + erasure_plugin_name, 'k=' + str(data_chunks), 'm=' + str(coding_chunks)
+           ]
+    if locality is not None and durability_estimator is not None:
+        raise ValueError("create_erasure_profile should be called with k, m and one of l or c but not both.")
+
+    luminous_or_later = cmp_pkgrevno('ceph-common', '12.0.0') >= 0
+    # failure_domain changed in luminous
+    if luminous_or_later:
+        cmd.append('crush-failure-domain=' + failure_domain)
+    else:
+        cmd.append('ruleset-failure-domain=' + failure_domain)
+
+    # device class new in luminous
+    if luminous_or_later and device_class:
+        cmd.append('crush-device-class={}'.format(device_class))
+    else:
+        log('Skipping device class configuration (ceph < 12.0.0)',
+            level=DEBUG)
+
+    # Add plugin specific information
+    if locality is not None:
+        # For local erasure codes
+        cmd.append('l=' + str(locality))
+    if durability_estimator is not None:
+        # For Shec erasure codes
+        cmd.append('c=' + str(durability_estimator))
+
+    if erasure_profile_exists(service, profile_name):
+        cmd.append('--force')
+
+    try:
+        check_call(cmd)
+    except CalledProcessError:
+        raise
+
+
+def rename_pool(service, old_name, new_name):
+    """
+    Rename a Ceph pool from old_name to new_name.
+    :param service: six.string_types. The Ceph user name to run the command under
+    :param old_name: six.string_types
+    :param new_name: six.string_types
+    :return: None
+    """
+    validator(value=old_name, valid_type=six.string_types)
+    validator(value=new_name, valid_type=six.string_types)
+
+    cmd = ['ceph', '--id', service, 'osd', 'pool', 'rename', old_name, new_name]
+    check_call(cmd)
+
+
+def erasure_profile_exists(service, name):
+    """
+    Check to see if an Erasure code profile already exists.
+    :param service: six.string_types.
The Ceph user name to run the command under + :param name: six.string_types + :return: int or None + """ + validator(value=name, valid_type=six.string_types) + try: + check_call(['ceph', '--id', service, + 'osd', 'erasure-code-profile', 'get', + name]) + return True + except CalledProcessError: + return False + + +def get_cache_mode(service, pool_name): + """ + Find the current caching mode of the pool_name given. + :param service: six.string_types. The Ceph user name to run the command under + :param pool_name: six.string_types + :return: int or None + """ + validator(value=service, valid_type=six.string_types) + validator(value=pool_name, valid_type=six.string_types) + out = check_output(['ceph', '--id', service, + 'osd', 'dump', '--format=json']) + if six.PY3: + out = out.decode('UTF-8') + try: + osd_json = json.loads(out) + for pool in osd_json['pools']: + if pool['pool_name'] == pool_name: + return pool['cache_mode'] + return None + except ValueError: + raise + + +def pool_exists(service, name): + """Check to see if a RADOS pool already exists.""" + try: + out = check_output(['rados', '--id', service, 'lspools']) + if six.PY3: + out = out.decode('UTF-8') + except CalledProcessError: + return False + + return name in out.split() + + +def get_osds(service, device_class=None): + """Return a list of all Ceph Object Storage Daemons currently in the + cluster (optionally filtered by storage device class). + + :param device_class: Class of storage device for OSD's + :type device_class: str + """ + luminous_or_later = cmp_pkgrevno('ceph-common', '12.0.0') >= 0 + if luminous_or_later and device_class: + out = check_output(['ceph', '--id', service, + 'osd', 'crush', 'class', + 'ls-osd', device_class, + '--format=json']) + else: + out = check_output(['ceph', '--id', service, + 'osd', 'ls', + '--format=json']) + if six.PY3: + out = out.decode('UTF-8') + return json.loads(out) + + +def install(): + """Basic Ceph client installation.""" + ceph_dir = "/etc/ceph" + if not os.path.exists(ceph_dir): + os.mkdir(ceph_dir) + + apt_install('ceph-common', fatal=True) + + +def rbd_exists(service, pool, rbd_img): + """Check to see if a RADOS block device exists.""" + try: + out = check_output(['rbd', 'list', '--id', + service, '--pool', pool]) + if six.PY3: + out = out.decode('UTF-8') + except CalledProcessError: + return False + + return rbd_img in out + + +def create_rbd_image(service, pool, image, sizemb): + """Create a new RADOS block device.""" + cmd = ['rbd', 'create', image, '--size', str(sizemb), '--id', service, + '--pool', pool] + check_call(cmd) + + +def update_pool(client, pool, settings): + cmd = ['ceph', '--id', client, 'osd', 'pool', 'set', pool] + for k, v in six.iteritems(settings): + cmd.append(k) + cmd.append(v) + + check_call(cmd) + + +def set_app_name_for_pool(client, pool, name): + """ + Calls `osd pool application enable` for the specified pool name + + :param client: Name of the ceph client to use + :type client: str + :param pool: Pool to set app name for + :type pool: str + :param name: app name for the specified pool + :type name: str + + :raises: CalledProcessError if ceph call fails + """ + if cmp_pkgrevno('ceph-common', '12.0.0') >= 0: + cmd = ['ceph', '--id', client, 'osd', 'pool', + 'application', 'enable', pool, name] + check_call(cmd) + + +def create_pool(service, name, replicas=3, pg_num=None): + """Create a new RADOS pool.""" + if pool_exists(service, name): + log("Ceph pool {} already exists, skipping creation".format(name), + level=WARNING) + return + + if not pg_num: + # 
Calculate the number of placement groups based + # on upstream recommended best practices. + osds = get_osds(service) + if osds: + pg_num = (len(osds) * 100 // replicas) + else: + # NOTE(james-page): Default to 200 for older ceph versions + # which don't support OSD query from cli + pg_num = 200 + + cmd = ['ceph', '--id', service, 'osd', 'pool', 'create', name, str(pg_num)] + check_call(cmd) + + update_pool(service, name, settings={'size': str(replicas)}) + + +def delete_pool(service, name): + """Delete a RADOS pool from ceph.""" + cmd = ['ceph', '--id', service, 'osd', 'pool', 'delete', name, + '--yes-i-really-really-mean-it'] + check_call(cmd) + + +def _keyfile_path(service): + return KEYFILE.format(service) + + +def _keyring_path(service): + return KEYRING.format(service) + + +def add_key(service, key): + """ + Add a key to a keyring. + + Creates the keyring if it doesn't already exist. + + Logs and returns if the key is already in the keyring. + """ + keyring = _keyring_path(service) + if os.path.exists(keyring): + with open(keyring, 'r') as ring: + if key in ring.read(): + log('Ceph keyring exists at %s and has not changed.' % keyring, + level=DEBUG) + return + log('Updating existing keyring %s.' % keyring, level=DEBUG) + + cmd = ['ceph-authtool', keyring, '--create-keyring', + '--name=client.{}'.format(service), '--add-key={}'.format(key)] + check_call(cmd) + log('Created new ceph keyring at %s.' % keyring, level=DEBUG) + + +def create_keyring(service, key): + """Deprecated. Please use the more accurately named 'add_key'""" + return add_key(service, key) + + +def delete_keyring(service): + """Delete an existing Ceph keyring.""" + keyring = _keyring_path(service) + if not os.path.exists(keyring): + log('Keyring does not exist at %s' % keyring, level=WARNING) + return + + os.remove(keyring) + log('Deleted ring at %s.' % keyring, level=INFO) + + +def create_key_file(service, key): + """Create a file containing key.""" + keyfile = _keyfile_path(service) + if os.path.exists(keyfile): + log('Keyfile exists at %s.' % keyfile, level=WARNING) + return + + with open(keyfile, 'w') as fd: + fd.write(key) + + log('Created new keyfile at %s.' 
% keyfile, level=INFO) + + +def get_ceph_nodes(relation='ceph'): + """Query named relation to determine current nodes.""" + hosts = [] + for r_id in relation_ids(relation): + for unit in related_units(r_id): + hosts.append(relation_get('private-address', unit=unit, rid=r_id)) + + return hosts + + +def configure(service, key, auth, use_syslog): + """Perform basic configuration of Ceph.""" + add_key(service, key) + create_key_file(service, key) + hosts = get_ceph_nodes() + with open('/etc/ceph/ceph.conf', 'w') as ceph_conf: + ceph_conf.write(CEPH_CONF.format(auth=auth, + keyring=_keyring_path(service), + mon_hosts=",".join(map(str, hosts)), + use_syslog=use_syslog)) + modprobe('rbd') + + +def image_mapped(name): + """Determine whether a RADOS block device is mapped locally.""" + try: + out = check_output(['rbd', 'showmapped']) + if six.PY3: + out = out.decode('UTF-8') + except CalledProcessError: + return False + + return name in out + + +def map_block_storage(service, pool, image): + """Map a RADOS block device for local use.""" + cmd = [ + 'rbd', + 'map', + '{}/{}'.format(pool, image), + '--user', + service, + '--secret', + _keyfile_path(service), + ] + check_call(cmd) + + +def filesystem_mounted(fs): + """Determine whether a filesytems is already mounted.""" + return fs in [f for f, m in mounts()] + + +def make_filesystem(blk_device, fstype='ext4', timeout=10): + """Make a new filesystem on the specified block device.""" + count = 0 + e_noent = errno.ENOENT + while not os.path.exists(blk_device): + if count >= timeout: + log('Gave up waiting on block device %s' % blk_device, + level=ERROR) + raise IOError(e_noent, os.strerror(e_noent), blk_device) + + log('Waiting for block device %s to appear' % blk_device, + level=DEBUG) + count += 1 + time.sleep(1) + else: + log('Formatting block device %s as filesystem %s.' % + (blk_device, fstype), level=INFO) + check_call(['mkfs', '-t', fstype, blk_device]) + + +def place_data_on_block_device(blk_device, data_src_dst): + """Migrate data in data_src_dst to blk_device and then remount.""" + # mount block device into /mnt + mount(blk_device, '/mnt') + # copy data to /mnt + copy_files(data_src_dst, '/mnt') + # umount block device + umount('/mnt') + # Grab user/group ID's from original source + _dir = os.stat(data_src_dst) + uid = _dir.st_uid + gid = _dir.st_gid + # re-mount where the data should originally be + # TODO: persist is currently a NO-OP in core.host + mount(blk_device, data_src_dst, persist=True) + # ensure original ownership of new mount. + os.chown(data_src_dst, uid, gid) + + +def copy_files(src, dst, symlinks=False, ignore=None): + """Copy files from src to dst.""" + for item in os.listdir(src): + s = os.path.join(src, item) + d = os.path.join(dst, item) + if os.path.isdir(s): + shutil.copytree(s, d, symlinks, ignore) + else: + shutil.copy2(s, d) + + +def ensure_ceph_storage(service, pool, rbd_img, sizemb, mount_point, + blk_device, fstype, system_services=[], + replicas=3): + """NOTE: This function must only be called from a single service unit for + the same rbd_img otherwise data loss will occur. + + Ensures given pool and RBD image exists, is mapped to a block device, + and the device is formatted and mounted at the given mount_point. + + If formatting a device for the first time, data existing at mount_point + will be migrated to the RBD device before being re-mounted. + + All services listed in system_services will be stopped prior to data + migration and restarted when complete. 
+ """ + # Ensure pool, RBD image, RBD mappings are in place. + if not pool_exists(service, pool): + log('Creating new pool {}.'.format(pool), level=INFO) + create_pool(service, pool, replicas=replicas) + + if not rbd_exists(service, pool, rbd_img): + log('Creating RBD image ({}).'.format(rbd_img), level=INFO) + create_rbd_image(service, pool, rbd_img, sizemb) + + if not image_mapped(rbd_img): + log('Mapping RBD Image {} as a Block Device.'.format(rbd_img), + level=INFO) + map_block_storage(service, pool, rbd_img) + + # make file system + # TODO: What happens if for whatever reason this is run again and + # the data is already in the rbd device and/or is mounted?? + # When it is mounted already, it will fail to make the fs + # XXX: This is really sketchy! Need to at least add an fstab entry + # otherwise this hook will blow away existing data if its executed + # after a reboot. + if not filesystem_mounted(mount_point): + make_filesystem(blk_device, fstype) + + for svc in system_services: + if service_running(svc): + log('Stopping services {} prior to migrating data.' + .format(svc), level=DEBUG) + service_stop(svc) + + place_data_on_block_device(blk_device, mount_point) + + for svc in system_services: + log('Starting service {} after migrating data.' + .format(svc), level=DEBUG) + service_start(svc) + + +def ensure_ceph_keyring(service, user=None, group=None, + relation='ceph', key=None): + """Ensures a ceph keyring is created for a named service and optionally + ensures user and group ownership. + + @returns boolean: Flag to indicate whether a key was successfully written + to disk based on either relation data or a supplied key + """ + if not key: + for rid in relation_ids(relation): + for unit in related_units(rid): + key = relation_get('key', rid=rid, unit=unit) + if key: + break + + if not key: + return False + + add_key(service=service, key=key) + keyring = _keyring_path(service) + if user and group: + check_call(['chown', '%s.%s' % (user, group), keyring]) + + return True + + +class CephBrokerRq(object): + """Ceph broker request. + + Multiple operations can be added to a request and sent to the Ceph broker + to be executed. + + Request is json-encoded for sending over the wire. + + The API is versioned and defaults to version 1. + """ + + def __init__(self, api_version=1, request_id=None): + self.api_version = api_version + if request_id: + self.request_id = request_id + else: + self.request_id = str(uuid.uuid1()) + self.ops = [] + + def add_op(self, op): + """Add an op if it is not already in the list. + + :param op: Operation to add. + :type op: dict + """ + if op not in self.ops: + self.ops.append(op) + + def add_op_request_access_to_group(self, name, namespace=None, + permission=None, key_name=None, + object_prefix_permissions=None): + """ + Adds the requested permissions to the current service's Ceph key, + allowing the key to access only the specified pools or + object prefixes. object_prefix_permissions should be a dictionary + keyed on the permission with the corresponding value being a list + of prefixes to apply that permission to. 
+            {
+                'rwx': ['prefix1', 'prefix2'],
+                'class-read': ['prefix3']}
+        """
+        self.add_op({
+            'op': 'add-permissions-to-key', 'group': name,
+            'namespace': namespace,
+            'name': key_name or service_name(),
+            'group-permission': permission,
+            'object-prefix-permissions': object_prefix_permissions})
+
+    def add_op_create_pool(self, name, replica_count=3, pg_num=None,
+                           weight=None, group=None, namespace=None,
+                           app_name=None, max_bytes=None, max_objects=None):
+        """DEPRECATED: Use ``add_op_create_replicated_pool()`` or
+        ``add_op_create_erasure_pool()`` instead.
+        """
+        return self.add_op_create_replicated_pool(
+            name, replica_count=replica_count, pg_num=pg_num, weight=weight,
+            group=group, namespace=namespace, app_name=app_name,
+            max_bytes=max_bytes, max_objects=max_objects)
+
+    def add_op_create_replicated_pool(self, name, replica_count=3, pg_num=None,
+                                      weight=None, group=None, namespace=None,
+                                      app_name=None, max_bytes=None,
+                                      max_objects=None):
+        """Adds an operation to create a replicated pool.
+
+        :param name: Name of pool to create
+        :type name: str
+        :param replica_count: Number of copies Ceph should keep of your data.
+        :type replica_count: int
+        :param pg_num: Request specific number of Placement Groups to create
+                       for pool.
+        :type pg_num: int
+        :param weight: The percentage of data that is expected to be contained
+                       in the pool from the total available space on the OSDs.
+                       Used to calculate number of Placement Groups to create
+                       for pool.
+        :type weight: float
+        :param group: Group to add pool to
+        :type group: str
+        :param namespace: Group namespace
+        :type namespace: str
+        :param app_name: (Optional) Tag pool with application name. Note that
+                         there are certain protocols emerging upstream with
+                         regard to meaningful application names to use.
+                         Examples are ``rbd`` and ``rgw``.
+        :type app_name: str
+        :param max_bytes: Maximum bytes quota to apply
+        :type max_bytes: int
+        :param max_objects: Maximum objects quota to apply
+        :type max_objects: int
+        """
+        if pg_num and weight:
+            raise ValueError('pg_num and weight are mutually exclusive')
+
+        self.add_op({'op': 'create-pool', 'name': name,
+                     'replicas': replica_count, 'pg_num': pg_num,
+                     'weight': weight, 'group': group,
+                     'group-namespace': namespace, 'app-name': app_name,
+                     'max-bytes': max_bytes, 'max-objects': max_objects})
+
+    def add_op_create_erasure_pool(self, name, erasure_profile=None,
+                                   weight=None, group=None, app_name=None,
+                                   max_bytes=None, max_objects=None):
+        """Adds an operation to create an erasure coded pool.
+
+        :param name: Name of pool to create
+        :type name: str
+        :param erasure_profile: Name of erasure code profile to use. If not
+                                set the ceph-mon unit handling the broker
+                                request will set its default value.
+        :type erasure_profile: str
+        :param weight: The percentage of data that is expected to be contained
+                       in the pool from the total available space on the OSDs.
+        :type weight: float
+        :param group: Group to add pool to
+        :type group: str
+        :param app_name: (Optional) Tag pool with application name. Note that
+                         there are certain protocols emerging upstream with
+                         regard to meaningful application names to use.
+                         Examples are ``rbd`` and ``rgw``.
+        :type app_name: str
+        :param max_bytes: Maximum bytes quota to apply
+        :type max_bytes: int
+        :param max_objects: Maximum objects quota to apply
+        :type max_objects: int
+        """
+        self.add_op({'op': 'create-pool', 'name': name,
+                     'pool-type': 'erasure',
+                     'erasure-profile': erasure_profile,
+                     'weight': weight,
+                     'group': group, 'app-name': app_name,
+                     'max-bytes': max_bytes, 'max-objects': max_objects})
+
+    def set_ops(self, ops):
+        """Set request ops to provided value.
+
+        Useful for injecting ops that come from a previous request
+        to allow comparisons to ensure validity.
+        """
+        self.ops = ops
+
+    @property
+    def request(self):
+        return json.dumps({'api-version': self.api_version, 'ops': self.ops,
+                           'request-id': self.request_id})
+
+    def _ops_equal(self, other):
+        if len(self.ops) == len(other.ops):
+            for req_no in range(0, len(self.ops)):
+                for key in [
+                        'replicas', 'name', 'op', 'pg_num', 'weight',
+                        'group', 'group-namespace', 'group-permission',
+                        'object-prefix-permissions']:
+                    if self.ops[req_no].get(key) != other.ops[req_no].get(key):
+                        return False
+        else:
+            return False
+        return True
+
+    def __eq__(self, other):
+        if not isinstance(other, self.__class__):
+            return False
+        if self.api_version == other.api_version and \
+                self._ops_equal(other):
+            return True
+        else:
+            return False
+
+    def __ne__(self, other):
+        return not self.__eq__(other)
+
+
+class CephBrokerRsp(object):
+    """Ceph broker response.
+
+    Response is json-decoded and contents provided as methods/properties.
+
+    The API is versioned and defaults to version 1.
+    """
+
+    def __init__(self, encoded_rsp):
+        self.api_version = None
+        self.rsp = json.loads(encoded_rsp)
+
+    @property
+    def request_id(self):
+        return self.rsp.get('request-id')
+
+    @property
+    def exit_code(self):
+        return self.rsp.get('exit-code')
+
+    @property
+    def exit_msg(self):
+        return self.rsp.get('stderr')
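+
+
+# A minimal usage sketch (added for illustration; not part of the upstream
+# charmhelpers API): building a broker request with a single replicated-pool
+# op and serializing it for the wire. The pool name is an assumption.
+def _example_broker_request():
+    """Example only: build and serialize a request for one replicated pool."""
+    rq = CephBrokerRq()
+    rq.add_op_create_replicated_pool(name='mypool', replica_count=3,
+                                     weight=10.0, app_name='rbd')
+    # Returns a JSON string: {"api-version": 1, "ops": [...], "request-id": ...}
+    return rq.request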
+
+
+# Ceph Broker Conversation:
+# If a charm needs an action to be taken by ceph it can create a CephBrokerRq
+# and send that request to ceph via the ceph relation. The CephBrokerRq has a
+# unique id so that the client can identify which CephBrokerRsp is associated
+# with the request. Ceph will also respond to each client unit individually
+# creating a response key per client unit eg glance/0 will get a CephBrokerRsp
+# via key broker-rsp-glance-0
+#
+# To use this the charm can just do something like:
+#
+# from charmhelpers.contrib.storage.linux.ceph import (
+#     send_request_if_needed,
+#     is_request_complete,
+#     CephBrokerRq,
+# )
+#
+# @hooks.hook('ceph-relation-changed')
+# def ceph_changed():
+#     rq = CephBrokerRq()
+#     rq.add_op_create_pool(name='poolname', replica_count=3)
+#
+#     if is_request_complete(rq):
+#
+#     else:
+#         send_request_if_needed(get_ceph_request())
+#
+# CephBrokerRq and CephBrokerRsp are serialized into JSON. Below is an example
+# of glance having sent a request to ceph which ceph has successfully processed
+#  'ceph:8': {
+#      'ceph/0': {
+#          'auth': 'cephx',
+#          'broker-rsp-glance-0': '{"request-id": "0bc7dc54", "exit-code": 0}',
+#          'broker_rsp': '{"request-id": "0da543b8", "exit-code": 0}',
+#          'ceph-public-address': '10.5.44.103',
+#          'key': 'AQCLDttVuHXINhAAvI144CB09dYchhHyTUY9BQ==',
+#          'private-address': '10.5.44.103',
+#      },
+#      'glance/0': {
+#          'broker_req': ('{"api-version": 1, "request-id": "0bc7dc54", '
+#                         '"ops": [{"replicas": 3, "name": "glance", '
+#                         '"op": "create-pool"}]}'),
+#          'private-address': '10.5.44.109',
+#      },
+#  }
+
+def get_previous_request(rid):
+    """Return the last ceph broker request sent on a given relation
+
+    @param rid: Relation id to query for request
+    """
+    request = None
+    broker_req = relation_get(attribute='broker_req', rid=rid,
+                              unit=local_unit())
+    if broker_req:
+        request_data = json.loads(broker_req)
+        request = CephBrokerRq(api_version=request_data['api-version'],
+                               request_id=request_data['request-id'])
+        request.set_ops(request_data['ops'])
+
+    return request
+
+
+def get_request_states(request, relation='ceph'):
+    """Return a dict of requests per relation id with their corresponding
+    completion state.
+
+    This allows a charm, which has a request for ceph, to see whether there is
+    an equivalent request already being processed and if so what state that
+    request is in.
+
+    @param request: A CephBrokerRq object
+    """
+    requests = {}
+    for rid in relation_ids(relation):
+        previous_request = get_previous_request(rid)
+        if request == previous_request:
+            sent = True
+            complete = is_request_complete_for_rid(previous_request, rid)
+        else:
+            sent = False
+            complete = False
+
+        requests[rid] = {
+            'sent': sent,
+            'complete': complete,
+        }
+
+    return requests
+
+
+def is_request_sent(request, relation='ceph'):
+    """Check to see if a functionally equivalent request has already been sent
+
+    Returns True if a similar request has been sent
+
+    @param request: A CephBrokerRq object
+    """
+    states = get_request_states(request, relation=relation)
+    for rid in states.keys():
+        if not states[rid]['sent']:
+            return False
+
+    return True
+
+
+def is_request_complete(request, relation='ceph'):
+    """Check to see if a functionally equivalent request has already been
+    completed
+
+    Returns True if a similar request has been completed
+
+    @param request: A CephBrokerRq object
+    """
+    states = get_request_states(request, relation=relation)
+    for rid in states.keys():
+        if not states[rid]['complete']:
+            return False
+
+    return True
+
+
+def is_request_complete_for_rid(request, rid):
+    """Check if a given request has been completed on the given relation
+
+    @param request: A CephBrokerRq object
+    @param rid: Relation ID
+    """
+    broker_key = get_broker_rsp_key()
+    for unit in related_units(rid):
+        rdata = relation_get(rid=rid, unit=unit)
+        if rdata.get(broker_key):
+            rsp = CephBrokerRsp(rdata.get(broker_key))
+            if rsp.request_id == request.request_id:
+                if not rsp.exit_code:
+                    return True
+        else:
+            # The remote unit sent no reply targeted at this unit so either the
+            # remote ceph cluster does not support unit targeted replies or it
+            # has not processed our request yet.
+            if rdata.get('broker_rsp'):
+                request_data = json.loads(rdata['broker_rsp'])
+                if request_data.get('request-id'):
+                    log('Ignoring legacy broker_rsp without unit key as remote '
+                        'service supports unit specific replies', level=DEBUG)
+                else:
+                    log('Using legacy broker_rsp as remote service does not '
+                        'support unit specific replies', level=DEBUG)
+                    rsp = CephBrokerRsp(rdata['broker_rsp'])
+                    if not rsp.exit_code:
+                        return True
+
+    return False
+
+
+def get_broker_rsp_key():
+    """Return broker response key for this unit
+
+    This is the key that ceph is going to use to pass request status
+    information back to this unit
+    """
+    return 'broker-rsp-' + local_unit().replace('/', '-')
+
+
+def send_request_if_needed(request, relation='ceph'):
+    """Send broker request if an equivalent request has not already been sent
+
+    @param request: A CephBrokerRq object
+    """
+    if is_request_sent(request, relation=relation):
+        log('Request already sent but not complete, not sending new request',
+            level=DEBUG)
+    else:
+        for rid in relation_ids(relation):
+            log('Sending request {}'.format(request.request_id), level=DEBUG)
+            relation_set(relation_id=rid, broker_req=request.request)
+
+
+def has_broker_rsp(rid=None, unit=None):
+    """Return True if the broker_rsp key is 'truthy' (i.e. set to something) in the relation data.
+
+    :param rid: The relation to check (default of None means current relation)
+    :type rid: Union[str, None]
+    :param unit: The remote unit to check (default of None means current unit)
+    :type unit: Union[str, None]
+    :returns: True if broker key exists and is set to something 'truthy'
+    :rtype: bool
+    """
+    rdata = relation_get(rid=rid, unit=unit) or {}
+    broker_rsp = rdata.get(get_broker_rsp_key())
+    return True if broker_rsp else False
+
+
+def is_broker_action_done(action, rid=None, unit=None):
+    """Check whether broker action has completed yet.
+
+    @param action: name of action to be performed
+    @returns True if action complete otherwise False
+    """
+    rdata = relation_get(rid=rid, unit=unit) or {}
+    broker_rsp = rdata.get(get_broker_rsp_key())
+    if not broker_rsp:
+        return False
+
+    rsp = CephBrokerRsp(broker_rsp)
+    unit_name = local_unit().partition('/')[2]
+    key = "unit_{}_ceph_broker_action.{}".format(unit_name, action)
+    kvstore = kv()
+    val = kvstore.get(key=key)
+    if val and val == rsp.request_id:
+        return True
+
+    return False
+
+
+def mark_broker_action_done(action, rid=None, unit=None):
+    """Mark action as having been completed.
+
+    @param action: name of action to be performed
+    @returns None
+    """
+    rdata = relation_get(rid=rid, unit=unit) or {}
+    broker_rsp = rdata.get(get_broker_rsp_key())
+    if not broker_rsp:
+        return
+
+    rsp = CephBrokerRsp(broker_rsp)
+    unit_name = local_unit().partition('/')[2]
+    key = "unit_{}_ceph_broker_action.{}".format(unit_name, action)
+    kvstore = kv()
+    kvstore.set(key=key, value=rsp.request_id)
+    kvstore.flush()
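+
+
+# A minimal usage sketch (added for illustration; not part of the upstream
+# charmhelpers API): the done/mark pair above lets a hook run a follow-up
+# step exactly once per broker response. The 'restart' action name is an
+# assumption.
+def _example_run_action_once(rid=None, unit=None):
+    """Example only: guard a follow-up action on a broker response."""
+    if has_broker_rsp(rid, unit) and not is_broker_action_done('restart', rid, unit):
+        # ... perform the action here, e.g. restart dependent services ...
+        mark_broker_action_done('restart', rid, unit)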
+ """ + def __init__(self, permitted_sections=None): + self.permitted_sections = permitted_sections or [] + + def __call__(self): + conf = config('config-flags') + if not conf: + return {} + + conf = config_flags_parser(conf) + if not isinstance(conf, dict): + log("Provided config-flags is not a dictionary - ignoring", + level=WARNING) + return {} + + permitted = self.permitted_sections + if permitted: + diff = set(conf.keys()).difference(set(permitted)) + if diff: + log("Config-flags contains invalid keys '%s' - they will be " + "ignored" % (', '.join(diff)), level=WARNING) + + ceph_conf = {} + for key in conf: + if permitted and key not in permitted: + log("Ignoring key '%s'" % key, level=WARNING) + continue + + ceph_conf[key] = conf[key] + return ceph_conf + + +class CephOSDConfContext(CephConfContext): + """Ceph config (ceph.conf) context. + + Consolidates settings from config-flags via CephConfContext with + settings provided by the mons. The config-flag values are preserved in + conf['osd'], settings from the mons which do not clash with config-flag + settings are in conf['osd_from_client'] and finally settings which do + clash are in conf['osd_from_client_conflict']. Rather than silently drop + the conflicting settings they are provided in the context so they can be + rendered commented out to give some visability to the admin. + """ + + def __init__(self, permitted_sections=None): + super(CephOSDConfContext, self).__init__( + permitted_sections=permitted_sections) + try: + self.settings_from_mons = get_osd_settings('mon') + except OSDSettingConflict: + log( + "OSD settings from mons are inconsistent, ignoring them", + level=WARNING) + self.settings_from_mons = {} + + def filter_osd_from_mon_settings(self): + """Filter settings from client relation against config-flags. + + :returns: A tuple ( + ,config-flag values, + ,client settings which do not conflict with config-flag values, + ,client settings which confilct with config-flag values) + :rtype: (OrderedDict, OrderedDict, OrderedDict) + """ + ceph_conf = super(CephOSDConfContext, self).__call__() + conflicting_entries = {} + clear_entries = {} + for key, value in self.settings_from_mons.items(): + if key in ceph_conf.get('osd', {}): + if ceph_conf['osd'][key] != value: + conflicting_entries[key] = value + else: + clear_entries[key] = value + clear_entries = _order_dict_by_key(clear_entries) + conflicting_entries = _order_dict_by_key(conflicting_entries) + return ceph_conf, clear_entries, conflicting_entries + + def __call__(self): + """Construct OSD config context. + + Standard context with two additional special keys. + osd_from_client_conflict: client settings which confilct with + config-flag values + osd_from_client: settings which do not conflict with config-flag + values + + :returns: OSD config context dict. + :rtype: dict + """ + conf, osd_clear, osd_conflict = self.filter_osd_from_mon_settings() + conf['osd_from_client_conflict'] = osd_conflict + conf['osd_from_client'] = osd_clear + return conf diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/storage/linux/loopback.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/storage/linux/loopback.py new file mode 100644 index 0000000000000000000000000000000000000000..74bab40e43a978e3d9e1e2f9c8975368092145c0 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/storage/linux/loopback.py @@ -0,0 +1,92 @@ +# Copyright 2014-2015 Canonical Limited. 
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import os
+import re
+from subprocess import (
+    check_call,
+    check_output,
+)
+
+import six
+
+
+##################################################
+# loopback device helpers.
+##################################################
+def loopback_devices():
+    '''
+    Parse through 'losetup -a' output to determine currently mapped
+    loopback devices. Output is expected to look like:
+
+        /dev/loop0: [0807]:961814 (/tmp/my.img)
+
+    or:
+
+        /dev/loop0: [0807]:961814 (/tmp/my.img (deleted))
+
+    :returns: dict: a dict mapping {loopback_dev: backing_file}
+    '''
+    loopbacks = {}
+    cmd = ['losetup', '-a']
+    output = check_output(cmd)
+    if six.PY3:
+        output = output.decode('utf-8')
+    devs = [d.strip().split(' ', 2) for d in output.splitlines() if d != '']
+    for dev, _, f in devs:
+        loopbacks[dev.replace(':', '')] = re.search(r'\((.+)\)', f).groups()[0]
+    return loopbacks
+
+
+def create_loopback(file_path):
+    '''
+    Create a loopback device for a given backing file.
+
+    :returns: str: Full path to new loopback device (eg, /dev/loop0)
+    '''
+    file_path = os.path.abspath(file_path)
+    check_call(['losetup', '--find', file_path])
+    for d, f in six.iteritems(loopback_devices()):
+        if f == file_path:
+            return d
+
+
+def ensure_loopback_device(path, size):
+    '''
+    Ensure a loopback device exists for a given backing file path and size.
+    If a loopback device is not already mapped to the file, a new one will
+    be created.
+
+    TODO: Confirm size of found loopback device.
+
+    :returns: str: Full path to the ensured loopback device (eg, /dev/loop0)
+    '''
+    for d, f in six.iteritems(loopback_devices()):
+        if f == path:
+            return d
+
+    if not os.path.exists(path):
+        cmd = ['truncate', '--size', size, path]
+        check_call(cmd)
+
+    return create_loopback(path)
+
+
+def is_mapped_loopback_device(device):
+    """
+    Checks if a given device name is an existing/mapped loopback device.
+    :param device: str: Full path to the device (eg, /dev/loop1).
+    :returns: str: Path to the backing file if it is a loopback device,
+              empty string otherwise
+    """
+    return loopback_devices().get(device, "")
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/storage/linux/lvm.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/storage/linux/lvm.py
new file mode 100644
index 0000000000000000000000000000000000000000..c8bde69263f0e917d32d0e5d70abba1409b26012
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/storage/linux/lvm.py
@@ -0,0 +1,182 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import functools
+from subprocess import (
+    CalledProcessError,
+    check_call,
+    check_output,
+    Popen,
+    PIPE,
+)
+
+
+##################################################
+# LVM helpers.
+##################################################
+def deactivate_lvm_volume_group(block_device):
+    '''
+    Deactivate any volume group associated with an LVM physical volume.
+
+    :param block_device: str: Full path to LVM physical volume
+    '''
+    vg = list_lvm_volume_group(block_device)
+    if vg:
+        cmd = ['vgchange', '-an', vg]
+        check_call(cmd)
+
+
+def is_lvm_physical_volume(block_device):
+    '''
+    Determine whether a block device is initialized as an LVM PV.
+
+    :param block_device: str: Full path of block device to inspect.
+
+    :returns: boolean: True if block device is a PV, False if not.
+    '''
+    try:
+        check_output(['pvdisplay', block_device])
+        return True
+    except CalledProcessError:
+        return False
+
+
+def remove_lvm_physical_volume(block_device):
+    '''
+    Remove LVM PV signatures from a given block device.
+
+    :param block_device: str: Full path of block device to scrub.
+    '''
+    p = Popen(['pvremove', '-ff', block_device],
+              stdin=PIPE)
+    p.communicate(input='y\n')
+
+
+def list_lvm_volume_group(block_device):
+    '''
+    List LVM volume group associated with a given block device.
+
+    Assumes block device is a valid LVM PV.
+
+    :param block_device: str: Full path of block device to inspect.
+
+    :returns: str: Name of volume group associated with block device or None
+    '''
+    vg = None
+    pvd = check_output(['pvdisplay', block_device]).splitlines()
+    for lvm in pvd:
+        lvm = lvm.decode('UTF-8')
+        if lvm.strip().startswith('VG Name'):
+            vg = ' '.join(lvm.strip().split()[2:])
+    return vg
+
+
+def create_lvm_physical_volume(block_device):
+    '''
+    Initialize a block device as an LVM physical volume.
+
+    :param block_device: str: Full path of block device to initialize.
+
+    '''
+    check_call(['pvcreate', block_device])
+
+
+def create_lvm_volume_group(volume_group, block_device):
+    '''
+    Create an LVM volume group backed by a given block device.
+
+    Assumes block device has already been initialized as an LVM PV.
+
+    :param volume_group: str: Name of volume group to create.
+    :param block_device: str: Full path of PV-initialized block device.
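+
+    A minimal illustrative call (names hypothetical)::
+
+        create_lvm_volume_group('vg-data', '/dev/loop0')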
+    '''
+    check_call(['vgcreate', volume_group, block_device])
+
+
+def list_logical_volumes(select_criteria=None, path_mode=False):
+    '''
+    List logical volumes
+
+    :param select_criteria: str: Limit list to those volumes matching this
+                                 criteria (see 'lvs -S help' for more details)
+    :param path_mode: bool: return logical volume name in 'vg/lv' format, this
+                            format is required for some commands like lvextend
+    :returns: [str]: List of logical volumes
+    '''
+    lv_display_attr = 'lv_name'
+    if path_mode:
+        # Parsing output logic relies on the column order
+        lv_display_attr = 'vg_name,' + lv_display_attr
+    cmd = ['lvs', '--options', lv_display_attr, '--noheadings']
+    if select_criteria:
+        cmd.extend(['--select', select_criteria])
+    lvs = []
+    for lv in check_output(cmd).decode('UTF-8').splitlines():
+        if not lv:
+            continue
+        if path_mode:
+            lvs.append('/'.join(lv.strip().split()))
+        else:
+            lvs.append(lv.strip())
+    return lvs
+
+
+list_thin_logical_volume_pools = functools.partial(
+    list_logical_volumes,
+    select_criteria='lv_attr =~ ^t')
+
+list_thin_logical_volumes = functools.partial(
+    list_logical_volumes,
+    select_criteria='lv_attr =~ ^V')
+
+
+def extend_logical_volume_by_device(lv_name, block_device):
+    '''
+    Extends the size of logical volume lv_name by the amount of free space on
+    physical volume block_device.
+
+    :param lv_name: str: name of logical volume to be extended (vg/lv format)
+    :param block_device: str: name of block_device to be allocated to lv_name
+    '''
+    cmd = ['lvextend', lv_name, block_device]
+    check_call(cmd)
+
+
+def create_logical_volume(lv_name, volume_group, size=None):
+    '''
+    Create a new logical volume in an existing volume group
+
+    :param lv_name: str: name of logical volume to be created.
+    :param volume_group: str: Name of volume group to use for the new volume.
+    :param size: str: Size of logical volume to create (100% if not supplied)
+    :raises subprocess.CalledProcessError: in the event that the lvcreate fails.
+    '''
+    if size:
+        check_call([
+            'lvcreate',
+            '--yes',
+            '-L',
+            '{}'.format(size),
+            '-n', lv_name, volume_group
+        ])
+    else:
+        # No size given: create the lv with all the space available. This
+        # needs a different lvcreate invocation ('-l 100%FREE' rather than
+        # '-L <size>').
+        check_call([
+            'lvcreate',
+            '--yes',
+            '-l',
+            '100%FREE',
+            '-n', lv_name, volume_group
+        ])
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/storage/linux/utils.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/storage/linux/utils.py
new file mode 100644
index 0000000000000000000000000000000000000000..a35617606cf52d7cffc04ac245811b770fd95e8e
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/storage/linux/utils.py
@@ -0,0 +1,128 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
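+
+# The helpers below cover common block device tasks: LUKS detection,
+# block device checks, clearing partition tables and formatting. An
+# illustrative flow (device path hypothetical): check
+# is_block_device('/dev/vdb') and not is_device_mounted('/dev/vdb'),
+# then zap_disk('/dev/vdb') and mkfs_xfs('/dev/vdb').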
+
+import os
+import re
+from stat import S_ISBLK
+
+from subprocess import (
+    CalledProcessError,
+    check_call,
+    check_output,
+    call
+)
+
+
+def _luks_uuid(dev):
+    """
+    Check to see if dev is a LUKS encrypted volume, returning the UUID
+    of volume if it is.
+
+    :param: dev: path to block device to check.
+    :returns: str. UUID of LUKS device or None if not a LUKS device
+    """
+    try:
+        cmd = ['cryptsetup', 'luksUUID', dev]
+        return check_output(cmd).decode('UTF-8').strip()
+    except CalledProcessError:
+        return None
+
+
+def is_luks_device(dev):
+    """
+    Determine if dev is a LUKS-formatted block device.
+
+    :param: dev: A full path to a block device to check for LUKS header
+    presence
+    :returns: boolean: indicates whether a device is used based on LUKS header.
+    """
+    return True if _luks_uuid(dev) else False
+
+
+def is_mapped_luks_device(dev):
+    """
+    Determine if dev is a mapped LUKS device
+    :param: dev: A full path to a block device to be checked
+    :returns: boolean: indicates whether a device is mapped
+    """
+    _, dirs, _ = next(os.walk(
+        '/sys/class/block/{}/holders/'
+        .format(os.path.basename(os.path.realpath(dev))))
+    )
+    is_held = len(dirs) > 0
+    return is_held and is_luks_device(dev)
+
+
+def is_block_device(path):
+    '''
+    Confirm device at path is a valid block device node.
+
+    :returns: boolean: True if path is a block device, False if not.
+    '''
+    if not os.path.exists(path):
+        return False
+    return S_ISBLK(os.stat(path).st_mode)
+
+
+def zap_disk(block_device):
+    '''
+    Clear a block device of partition table. Relies on sgdisk, which is
+    installed as part of the 'gdisk' package in Ubuntu.
+
+    :param block_device: str: Full path of block device to clean.
+    '''
+    # https://github.com/ceph/ceph/commit/fdd7f8d83afa25c4e09aaedd90ab93f3b64a677b
+    # sometimes sgdisk exits non-zero; this is OK, dd will clean up
+    call(['sgdisk', '--zap-all', '--', block_device])
+    call(['sgdisk', '--clear', '--mbrtogpt', '--', block_device])
+    dev_end = check_output(['blockdev', '--getsz',
+                            block_device]).decode('UTF-8')
+    gpt_end = int(dev_end.split()[0]) - 100
+    check_call(['dd', 'if=/dev/zero', 'of=%s' % (block_device),
+                'bs=1M', 'count=1'])
+    check_call(['dd', 'if=/dev/zero', 'of=%s' % (block_device),
+                'bs=512', 'count=100', 'seek=%s' % (gpt_end)])
+
+
+def is_device_mounted(device):
+    '''Given a device path, return True if that device is mounted, and False
+    if it isn't.
+
+    :param device: str: Full path of the device to check.
+    :returns: boolean: True if the path represents a mounted device, False if
+        it doesn't.
+    '''
+    try:
+        out = check_output(['lsblk', '-P', device]).decode('UTF-8')
+    except Exception:
+        return False
+    return bool(re.search(r'MOUNTPOINT=".+"', out))
+
+
+def mkfs_xfs(device, force=False, inode_size=1024):
+    """Format device with XFS filesystem.
+
+    By default this should fail if the device already has a filesystem on it.
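+
+    For example, mkfs_xfs('/dev/loop0', force=True) would reformat an
+    already-formatted device (device path illustrative).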
+    :param device: Full path to device to format
+    :ptype device: str
+    :param force: Force operation
+    :ptype force: boolean
+    :param inode_size: XFS inode size in bytes
+    :ptype inode_size: int
+    """
+    cmd = ['mkfs.xfs']
+    if force:
+        cmd.append("-f")
+
+    cmd += ['-i', "size={}".format(inode_size), device]
+    check_call(cmd)
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/templating/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/templating/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..d7567b863e3a5ad2b7a7f44958b4166e0c3d346b
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/templating/__init__.py
@@ -0,0 +1,13 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/templating/contexts.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/templating/contexts.py
new file mode 100644
index 0000000000000000000000000000000000000000..c1adf94b133f41a168727a3cdd3a536633ff62b3
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/templating/contexts.py
@@ -0,0 +1,137 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# Copyright 2013 Canonical Ltd.
+#
+# Authors:
+#  Charm Helpers Developers
+"""A helper to create a yaml cache of config with namespaced relation data."""
+import os
+import yaml
+
+import six
+
+import charmhelpers.core.hookenv
+
+
+charm_dir = os.environ.get('CHARM_DIR', '')
+
+
+def dict_keys_without_hyphens(a_dict):
+    """Return a new dict with underscores instead of hyphens in keys."""
+    return dict(
+        (key.replace('-', '_'), val) for key, val in a_dict.items())
+
+
+def update_relations(context, namespace_separator=':'):
+    """Update the context with the relation data."""
+    # Add any relation data prefixed with the relation type.
+    relation_type = charmhelpers.core.hookenv.relation_type()
+    relations = []
+    context['current_relation'] = {}
+    if relation_type is not None:
+        relation_data = charmhelpers.core.hookenv.relation_get()
+        context['current_relation'] = relation_data
+        # Deprecated: the following use of relation data as keys
+        # directly in the context will be removed.
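+        # For illustration (relation name hypothetical): a 'private-address'
+        # key on a 'db' relation becomes 'db:private-address' below, and
+        # 'db:private_address' once hyphens are replaced.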
+        relation_data = dict(
+            ("{relation_type}{namespace_separator}{key}".format(
+                relation_type=relation_type,
+                key=key,
+                namespace_separator=namespace_separator), val)
+            for key, val in relation_data.items())
+        relation_data = dict_keys_without_hyphens(relation_data)
+        context.update(relation_data)
+        relations = charmhelpers.core.hookenv.relations_of_type(relation_type)
+        relations = [dict_keys_without_hyphens(rel) for rel in relations]
+
+    context['relations_full'] = charmhelpers.core.hookenv.relations()
+
+    # the hookenv.relations() data structure is effectively unusable in
+    # templates and other contexts when trying to access relation data other
+    # than the current relation. So provide a more useful structure that works
+    # with any hook.
+    local_unit = charmhelpers.core.hookenv.local_unit()
+    relations = {}
+    for rname, rids in context['relations_full'].items():
+        relations[rname] = []
+        for rid, rdata in rids.items():
+            data = rdata.copy()
+            if local_unit in rdata:
+                data.pop(local_unit)
+            for unit_name, rel_data in data.items():
+                new_data = {'__relid__': rid, '__unit__': unit_name}
+                new_data.update(rel_data)
+                relations[rname].append(new_data)
+    context['relations'] = relations
+
+
+def juju_state_to_yaml(yaml_path, namespace_separator=':',
+                       allow_hyphens_in_keys=True, mode=None):
+    """Update the juju config and state in a yaml file.
+
+    This includes any current relation-get data, and the charm
+    directory.
+
+    This function was created for the ansible and saltstack
+    support, as those libraries can use a yaml file to supply
+    context to templates, but it may be useful generally to
+    create and update an on-disk cache of all the config, including
+    previous relation data.
+
+    By default, hyphens are allowed in keys as this is supported
+    by yaml, but for tools like ansible, hyphens are not valid [1].
+
+    [1] http://www.ansibleworks.com/docs/playbooks_variables.html#what-makes-a-valid-variable-name
+    """
+    config = charmhelpers.core.hookenv.config()
+
+    # Add the charm_dir which we will need to refer to charm
+    # file resources etc.
+    config['charm_dir'] = charm_dir
+    config['local_unit'] = charmhelpers.core.hookenv.local_unit()
+    config['unit_private_address'] = charmhelpers.core.hookenv.unit_private_ip()
+    config['unit_public_address'] = charmhelpers.core.hookenv.unit_get(
+        'public-address'
+    )
+
+    # Don't use non-standard tags for unicode which will not
+    # work when salt uses yaml.safe_load.
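+    # (Without the representer below, PyYAML can emit Python-specific
+    # unicode tags that yaml.safe_load would then refuse to parse - note
+    # added for illustration.)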
+ yaml.add_representer(six.text_type, + lambda dumper, value: dumper.represent_scalar( + six.u('tag:yaml.org,2002:str'), value)) + + yaml_dir = os.path.dirname(yaml_path) + if not os.path.exists(yaml_dir): + os.makedirs(yaml_dir) + + if os.path.exists(yaml_path): + with open(yaml_path, "r") as existing_vars_file: + existing_vars = yaml.load(existing_vars_file.read()) + else: + with open(yaml_path, "w+"): + pass + existing_vars = {} + + if mode is not None: + os.chmod(yaml_path, mode) + + if not allow_hyphens_in_keys: + config = dict_keys_without_hyphens(config) + existing_vars.update(config) + + update_relations(existing_vars, namespace_separator) + + with open(yaml_path, "w+") as fp: + fp.write(yaml.dump(existing_vars, default_flow_style=False)) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/templating/jinja.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/templating/jinja.py new file mode 100644 index 0000000000000000000000000000000000000000..38d4fba0e651399064a14112398a0ef43717f5cd --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/templating/jinja.py @@ -0,0 +1,38 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +""" +Templating using the python-jinja2 package. +""" +import six +from charmhelpers.fetch import apt_install, apt_update +try: + import jinja2 +except ImportError: + apt_update(fatal=True) + if six.PY3: + apt_install(["python3-jinja2"], fatal=True) + else: + apt_install(["python-jinja2"], fatal=True) + import jinja2 + + +DEFAULT_TEMPLATES_DIR = 'templates' + + +def render(template_name, context, template_dir=DEFAULT_TEMPLATES_DIR): + templates = jinja2.Environment( + loader=jinja2.FileSystemLoader(template_dir)) + template = templates.get_template(template_name) + return template.render(context) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/templating/pyformat.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/templating/pyformat.py new file mode 100644 index 0000000000000000000000000000000000000000..51a24dc42379fbee88d8952c578fc2498c852b0b --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/templating/pyformat.py @@ -0,0 +1,27 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +''' +Templating using standard Python str.format() method. 
+'''
+
+from charmhelpers.core import hookenv
+
+
+def render(template, extra={}, **kwargs):
+    """Return the template rendered using Python's str.format()."""
+    context = hookenv.execution_environment()
+    context.update(extra)
+    context.update(kwargs)
+    return template.format(**context)
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/unison/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/unison/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..61409b14c843766aa10cb957bb1c3d83ba87c6a7
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/contrib/unison/__init__.py
@@ -0,0 +1,314 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# Easy file synchronization among peer units using ssh + unison.
+#
+# For the -joined, -changed, and -departed peer relations, add a call to
+# ssh_authorized_peers() describing the peer relation and the desired
+# user + group. After all peer relations have settled, all hosts should
+# be able to connect to one another via key auth'd ssh as the specified user.
+#
+# Other hooks are then free to synchronize files and directories using
+# sync_to_peers().
+#
+# For a peer relation named 'cluster', for example:
+#
+# cluster-relation-joined:
+# ...
+# ssh_authorized_peers(peer_interface='cluster',
+#                      user='juju_ssh', group='juju_ssh',
+#                      ensure_local_user=True)
+# ...
+#
+# cluster-relation-changed:
+# ...
+# ssh_authorized_peers(peer_interface='cluster',
+#                      user='juju_ssh', group='juju_ssh',
+#                      ensure_local_user=True)
+# ...
+#
+# cluster-relation-departed:
+# ...
+# ssh_authorized_peers(peer_interface='cluster',
+#                      user='juju_ssh', group='juju_ssh',
+#                      ensure_local_user=True)
+# ...
+#
+# Hooks are now free to sync files as easily as:
+#
+# files = ['/etc/fstab', '/etc/apt.conf.d/']
+# sync_to_peers(peer_interface='cluster',
+#               user='juju_ssh', paths=files)
+#
+# It is assumed the charm itself has set up permissions on each unit
+# such that 'juju_ssh' has read + write permissions. Also assumed
+# that the calling charm takes care of leader delegation.
+#
+# Additionally files can be synchronized only to a specific unit:
+# sync_to_peer(slave_address, user='juju_ssh',
+#              paths=files, verbose=False)
+
+import os
+import pwd
+
+from copy import copy
+from subprocess import check_call, check_output
+
+from charmhelpers.core.host import (
+    adduser,
+    add_user_to_group,
+    pwgen,
+    remove_password_expiry,
+)
+
+from charmhelpers.core.hookenv import (
+    log,
+    hook_name,
+    relation_ids,
+    related_units,
+    relation_set,
+    relation_get,
+    unit_private_ip,
+    INFO,
+    ERROR,
+)
+
+BASE_CMD = ['unison', '-auto', '-batch=true', '-confirmbigdel=false',
+            '-fastcheck=true', '-group=false', '-owner=false',
+            '-prefer=newer', '-times=true']
+
+
+def get_homedir(user):
+    try:
+        user = pwd.getpwnam(user)
+        return user.pw_dir
+    except KeyError:
+        log('Could not get homedir for user %s: user exists?'
% (user), ERROR) + raise Exception + + +def create_private_key(user, priv_key_path, key_type='rsa'): + types_bits = { + 'rsa': '2048', + 'ecdsa': '521', + } + if key_type not in types_bits: + log('Unknown ssh key type {}, using rsa'.format(key_type), ERROR) + key_type = 'rsa' + if not os.path.isfile(priv_key_path): + log('Generating new SSH key for user %s.' % user) + cmd = ['ssh-keygen', '-q', '-N', '', '-t', key_type, + '-b', types_bits[key_type], '-f', priv_key_path] + check_call(cmd) + else: + log('SSH key already exists at %s.' % priv_key_path) + check_call(['chown', user, priv_key_path]) + check_call(['chmod', '0600', priv_key_path]) + + +def create_public_key(user, priv_key_path, pub_key_path): + if not os.path.isfile(pub_key_path): + log('Generating missing ssh public key @ %s.' % pub_key_path) + cmd = ['ssh-keygen', '-y', '-f', priv_key_path] + p = check_output(cmd).strip() + with open(pub_key_path, 'wb') as out: + out.write(p) + check_call(['chown', user, pub_key_path]) + + +def get_keypair(user): + home_dir = get_homedir(user) + ssh_dir = os.path.join(home_dir, '.ssh') + priv_key = os.path.join(ssh_dir, 'id_rsa') + pub_key = '%s.pub' % priv_key + + if not os.path.isdir(ssh_dir): + os.mkdir(ssh_dir) + check_call(['chown', '-R', user, ssh_dir]) + + create_private_key(user, priv_key) + create_public_key(user, priv_key, pub_key) + + with open(priv_key, 'r') as p: + _priv = p.read().strip() + + with open(pub_key, 'r') as p: + _pub = p.read().strip() + + return (_priv, _pub) + + +def write_authorized_keys(user, keys): + home_dir = get_homedir(user) + ssh_dir = os.path.join(home_dir, '.ssh') + auth_keys = os.path.join(ssh_dir, 'authorized_keys') + log('Syncing authorized_keys @ %s.' % auth_keys) + with open(auth_keys, 'w') as out: + for k in keys: + out.write('%s\n' % k) + + +def write_known_hosts(user, hosts): + home_dir = get_homedir(user) + ssh_dir = os.path.join(home_dir, '.ssh') + known_hosts = os.path.join(ssh_dir, 'known_hosts') + khosts = [] + for host in hosts: + cmd = ['ssh-keyscan', host] + remote_key = check_output(cmd, universal_newlines=True).strip() + khosts.append(remote_key) + log('Syncing known_hosts @ %s.' % known_hosts) + with open(known_hosts, 'w') as out: + for host in khosts: + out.write('%s\n' % host) + + +def ensure_user(user, group=None): + adduser(user, pwgen()) + if group: + add_user_to_group(user, group) + # Remove password expiry (Bug #1686085) + remove_password_expiry(user) + + +def ssh_authorized_peers(peer_interface, user, group=None, + ensure_local_user=False): + """ + Main setup function, should be called from both peer -changed and -joined + hooks with the same parameters. + """ + if ensure_local_user: + ensure_user(user, group) + priv_key, pub_key = get_keypair(user) + hook = hook_name() + if hook == '%s-relation-joined' % peer_interface: + relation_set(ssh_pub_key=pub_key) + elif hook == '%s-relation-changed' % peer_interface or \ + hook == '%s-relation-departed' % peer_interface: + hosts = [] + keys = [] + + for r_id in relation_ids(peer_interface): + for unit in related_units(r_id): + ssh_pub_key = relation_get('ssh_pub_key', + rid=r_id, + unit=unit) + priv_addr = relation_get('private-address', + rid=r_id, + unit=unit) + if ssh_pub_key: + keys.append(ssh_pub_key) + hosts.append(priv_addr) + else: + log('ssh_authorized_peers(): ssh_pub_key ' + 'missing for unit %s, skipping.' 
                    % unit)
+        write_authorized_keys(user, keys)
+        write_known_hosts(user, hosts)
+        authed_hosts = ':'.join(hosts)
+        relation_set(ssh_authorized_hosts=authed_hosts)
+
+
+def _run_as_user(user, gid=None):
+    try:
+        user = pwd.getpwnam(user)
+    except KeyError:
+        log('Invalid user: %s' % user)
+        raise Exception
+    uid = user.pw_uid
+    gid = gid or user.pw_gid
+    os.environ['HOME'] = user.pw_dir
+
+    def _inner():
+        os.setgid(gid)
+        os.setuid(uid)
+    return _inner
+
+
+def run_as_user(user, cmd, gid=None):
+    return check_output(cmd, preexec_fn=_run_as_user(user, gid), cwd='/')
+
+
+def collect_authed_hosts(peer_interface):
+    '''Iterate through the units on peer interface to find all that
+    have the calling host in its authorized hosts list'''
+    hosts = []
+    for r_id in (relation_ids(peer_interface) or []):
+        for unit in related_units(r_id):
+            private_addr = relation_get('private-address',
+                                        rid=r_id, unit=unit)
+            authed_hosts = relation_get('ssh_authorized_hosts',
+                                        rid=r_id, unit=unit)
+
+            if not authed_hosts:
+                log('Peer %s has not authorized *any* hosts yet, skipping.' %
+                    (unit), level=INFO)
+                continue
+
+            if unit_private_ip() in authed_hosts.split(':'):
+                hosts.append(private_addr)
+            else:
+                log('Peer %s has not authorized *this* host yet, skipping.' %
+                    (unit), level=INFO)
+    return hosts
+
+
+def sync_path_to_host(path, host, user, verbose=False, cmd=None, gid=None,
+                      fatal=False):
+    """Sync path to a specific peer host
+
+    Propagates exception if operation fails and fatal=True.
+    """
+    cmd = cmd or copy(BASE_CMD)
+    if not verbose:
+        cmd.append('-silent')
+
+    # removing trailing slash from directory paths, unison
+    # doesn't like these.
+    if path.endswith('/'):
+        path = path[:(len(path) - 1)]
+
+    cmd = cmd + [path, 'ssh://%s@%s/%s' % (user, host, path)]
+
+    try:
+        log('Syncing local path %s to %s@%s:%s' % (path, user, host, path))
+        run_as_user(user, cmd, gid)
+    except Exception:
+        log('Error syncing remote files')
+        if fatal:
+            raise
+
+
+def sync_to_peer(host, user, paths=None, verbose=False, cmd=None, gid=None,
+                 fatal=False):
+    """Sync paths to a specific peer host
+
+    Propagates exception if any operation fails and fatal=True.
+    """
+    if paths:
+        for p in paths:
+            sync_path_to_host(p, host, user, verbose, cmd, gid, fatal)
+
+
+def sync_to_peers(peer_interface, user, paths=None, verbose=False, cmd=None,
+                  gid=None, fatal=False):
+    """Sync the given paths to all authorized peer hosts.
+
+    gid is an integer group id; it allows the synced files to be operated
+    on by users whose group id differs from the owning user's id.
+
+    Propagates exception if any operation fails and fatal=True.
+    """
+    if paths:
+        for host in collect_authed_hosts(peer_interface):
+            sync_to_peer(host, user, paths, verbose, cmd, gid, fatal)
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/coordinator.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/coordinator.py
new file mode 100644
index 0000000000000000000000000000000000000000..59bee3e5b320f6bffa0d89eef7a20888cddb0521
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/coordinator.py
@@ -0,0 +1,606 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+'''
+The coordinator module allows you to use Juju's leadership feature to
+coordinate operations between units of a service.
+
+Behavior is defined in subclasses of coordinator.BaseCoordinator.
+One implementation is provided (coordinator.Serial), which allows an
+operation to be run on a single unit at a time, on a first come, first
+served basis. You can trivially define more complex behavior by
+subclassing BaseCoordinator or Serial.
+
+:author: Stuart Bishop
+
+
+Services Framework Usage
+========================
+
+Ensure a peers relation is defined in metadata.yaml. Instantiate a
+BaseCoordinator subclass before invoking ServiceManager.manage().
+Ensure that ServiceManager.manage() is wired up to the leader-elected,
+leader-settings-changed, peers relation-changed and peers
+relation-departed hooks in addition to any other hooks you need, or your
+service will deadlock.
+
+Ensure calls to acquire() are guarded, so that locks are only requested
+when they are really needed (and thus hooks only triggered when necessary).
+Failing to do this and calling acquire() unconditionally will put your unit
+into a hook loop. Calls to granted() do not need to be guarded.
+
+For example::
+
+    from charmhelpers.core import hookenv, services
+    from charmhelpers import coordinator
+
+    def maybe_restart(servicename):
+        serial = coordinator.Serial()
+        if needs_restart():
+            serial.acquire('restart')
+        if serial.granted('restart'):
+            hookenv.service_restart(servicename)
+
+    services = [dict(service='servicename',
+                     data_ready=[maybe_restart])]
+
+    if __name__ == '__main__':
+        _ = coordinator.Serial()  # Must instantiate before manager.manage()
+        manager = services.ServiceManager(services)
+        manager.manage()
+
+
+You can implement a similar pattern using a decorator. If the lock has
+not been granted, an attempt to acquire() it will be made if the guard
+function returns True. If the lock has been granted, the decorated function
+is run as normal::
+
+    from charmhelpers.core import hookenv, services
+    from charmhelpers import coordinator
+
+    serial = coordinator.Serial()  # Global, instantiated on module import.
+
+    def needs_restart():
+        [ ... Introspect state. Return True if restart is needed ... ]
+
+    @serial.require('restart', needs_restart)
+    def maybe_restart(servicename):
+        hookenv.service_restart(servicename)
+
+    services = [dict(service='servicename',
+                     data_ready=[maybe_restart])]
+
+    if __name__ == '__main__':
+        manager = services.ServiceManager(services)
+        manager.manage()
+
+
+Traditional Usage
+=================
+
+Ensure a peers relation is defined in metadata.yaml.
+
+If you are using charmhelpers.core.hookenv.Hooks, ensure that a
+BaseCoordinator subclass is instantiated before calling Hooks.execute.
+
+If you are not using charmhelpers.core.hookenv.Hooks, ensure
+that a BaseCoordinator subclass is instantiated and its handle()
+method called at the start of all your hooks.
+
+For example::
+
+    import sys
+    from charmhelpers.core import hookenv
+    from charmhelpers import coordinator
+
+    hooks = hookenv.Hooks()
+
+    def maybe_restart():
+        serial = coordinator.Serial()
+        if serial.granted('restart'):
+            hookenv.service_restart('myservice')
+
+    @hooks.hook
+    def config_changed():
+        update_config()
+        serial = coordinator.Serial()
+        if needs_restart():
+            serial.acquire('restart')
+        maybe_restart()
+
+    # Cluster hooks must be wired up.
+    @hooks.hook('cluster-relation-changed', 'cluster-relation-departed')
+    def cluster_relation_changed():
+        maybe_restart()
+
+    # Leader hooks must be wired up.
+    @hooks.hook('leader-elected', 'leader-settings-changed')
+    def leader_settings_changed():
+        maybe_restart()
+
+    [ ... repeat for *all* other hooks you are using ... ]
+
+    if __name__ == '__main__':
+        _ = coordinator.Serial()  # Must instantiate before execute()
+        hooks.execute(sys.argv)
+
+
+You can also use the require decorator. If the lock has not been granted,
+an attempt to acquire() it will be made if the guard function returns True.
+If the lock has been granted, the decorated function is run as normal::
+
+    from charmhelpers.core import hookenv
+
+    hooks = hookenv.Hooks()
+    serial = coordinator.Serial()  # Must instantiate before execute()
+
+    @serial.require('restart', needs_restart)
+    def maybe_restart():
+        hookenv.service_restart('myservice')
+
+    @hooks.hook('install', 'config-changed', 'upgrade-charm',
+                # Peers and leader hooks must be wired up.
+                'cluster-relation-changed', 'cluster-relation-departed',
+                'leader-elected', 'leader-settings-changed')
+    def default_hook():
+        [...]
+        maybe_restart()
+
+    if __name__ == '__main__':
+        hooks.execute(sys.argv)
+
+
+Details
+=======
+
+A simple API is provided similar to traditional locking APIs. A lock
+may be requested using the acquire() method, and the granted() method
+may be used to check if a lock previously requested by acquire() has
+been granted. It doesn't matter how many times acquire() is called in a
+hook.
+
+Locks are released at the end of the hook they are acquired in. This may
+be the current hook if the unit is leader and the lock is free. It is
+more likely a future hook (probably leader-settings-changed, possibly
+the peers relation-changed or departed hook, potentially any hook).
+
+Whenever a charm needs to perform a coordinated action it will acquire()
+the lock and perform the action immediately if acquisition is
+successful. It will also need to perform the same action in every other
+hook if the lock has been granted.
+
+
+Grubby Details
+--------------
+
+Why do you need to be able to perform the same action in every hook?
+If the unit is the leader, then it may be able to grant its own lock
+and perform the action immediately in the source hook. If the unit is
+the leader and cannot immediately grant the lock, then its only
+guaranteed chance of acquiring the lock is in the peers relation-joined,
+relation-changed or peers relation-departed hooks when another unit has
+released it (the only channel to communicate to the leader is the peers
+relation). If the unit is not the leader, then it is unlikely the lock
+is granted in the source hook (a previous hook must have also made the
+request for this to happen). A non-leader is notified about the lock via
+leader settings. These changes may be visible in any hook, even before
+the leader-settings-changed hook has been invoked.
Or the requesting
+unit may be promoted to leader after making a request, in which case the
+lock may be granted in leader-elected or in a future peers
+relation-changed or relation-departed hook.
+
+This could be simpler if leader-settings-changed was invoked on the
+leader. We could then never grant locks except in
+leader-settings-changed hooks giving one place for the operation to be
+performed. Unfortunately this is not the case with Juju 1.23 leadership.
+
+But of course, this doesn't really matter to most people as most people
+seem to prefer the Services Framework or similar reset-the-world
+approaches, rather than the twisty maze of attempting to deduce what
+should be done based on what hook happens to be running (which always
+seems to evolve into reset-the-world anyway when the charm grows beyond
+the trivial).
+
+I chose not to implement a callback model, where a callback was passed
+to acquire to be executed when the lock is granted, because the callback
+may become invalid between making the request and the lock being granted
+due to an upgrade-charm being run in the interim. And it would create
+restrictions, such as allowing no lambdas and requiring callbacks to be
+defined at the top level of a module, etc. Still, we could implement it
+on top of what is here, eg. by adding a defer decorator that stores a
+pickle of itself to disk and have BaseCoordinator unpickle and execute
+them when the locks are granted.
+'''
+from datetime import datetime
+from functools import wraps
+import json
+import os.path
+
+from six import with_metaclass
+
+from charmhelpers.core import hookenv
+
+
+# We make BaseCoordinator and subclasses singletons, so that if we
+# need to spill to local storage then only a single instance does so,
+# rather than having multiple instances stomp over each other.
+class Singleton(type):
+    _instances = {}
+
+    def __call__(cls, *args, **kwargs):
+        if cls not in cls._instances:
+            cls._instances[cls] = super(Singleton, cls).__call__(*args,
+                                                                 **kwargs)
+        return cls._instances[cls]
+
+
+class BaseCoordinator(with_metaclass(Singleton, object)):
+    relid = None  # Peer relation-id, set by __init__
+    relname = None
+
+    grants = None  # self.grants[unit][lock] == timestamp
+    requests = None  # self.requests[unit][lock] == timestamp
+
+    def __init__(self, relation_key='coordinator', peer_relation_name=None):
+        '''Instantiate a Coordinator.
+
+        Data is stored on the peers relation and in leadership storage
+        under the provided relation_key.
+
+        The peers relation is identified by peer_relation_name, and defaults
+        to the first one found in metadata.yaml.
+        '''
+        # Most initialization is deferred, since invoking hook tools from
+        # the constructor makes testing hard.
+        self.key = relation_key
+        self.relname = peer_relation_name
+        hookenv.atstart(self.initialize)
+
+        # Ensure that handle() is called, without placing that burden on
+        # the charm author. They still need to do this manually if they
+        # are not using a hook framework.
+        hookenv.atstart(self.handle)
+
+    def initialize(self):
+        if self.requests is not None:
+            return  # Already initialized.
+
+        assert hookenv.has_juju_version('1.23'), 'Needs Juju 1.23+'
+
+        if self.relname is None:
+            self.relname = _implicit_peer_relation_name()
+
+        relids = hookenv.relation_ids(self.relname)
+        if relids:
+            self.relid = sorted(relids)[0]
+
+        # Load our state, from leadership, the peer relationship, and maybe
+        # local state as a fallback. Populates self.requests and self.grants.
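+        # Both self.requests and self.grants map unit name -> lock name ->
+        # request timestamp, e.g.
+        #     {'myservice/0': {'restart': '2015-01-01 00:00:00.000000Z'}}
+        # (values illustrative).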
+        self._load_state()
+        self._emit_state()
+
+        # Save our state if the hook completes successfully.
+        hookenv.atexit(self._save_state)
+
+        # Schedule release of granted locks for the end of the hook.
+        # This needs to be the last of our atexit callbacks to ensure
+        # it will be run first when the hook is complete, because there
+        # is no point mutating our state after it has been saved.
+        hookenv.atexit(self._release_granted)
+
+    def acquire(self, lock):
+        '''Acquire the named lock, non-blocking.
+
+        The lock may be granted immediately, or in a future hook.
+
+        Returns True if the lock has been granted. The lock will be
+        automatically released at the end of the hook in which it is
+        granted.
+
+        Do not mindlessly call this method, as it triggers a cascade of
+        hooks. For example, if you call acquire() every time in your
+        peers relation-changed hook you will end up with an infinite loop
+        of hooks. It should almost always be guarded by some condition.
+        '''
+        unit = hookenv.local_unit()
+        ts = self.requests[unit].get(lock)
+        if not ts:
+            # If there is no outstanding request on the peers relation,
+            # create one.
+            self.requests.setdefault(unit, {})
+            self.requests[unit][lock] = _timestamp()
+            self.msg('Requested {}'.format(lock))
+
+        # If the leader has granted the lock, yay.
+        if self.granted(lock):
+            self.msg('Acquired {}'.format(lock))
+            return True
+
+        # If the unit making the request also happens to be the
+        # leader, it must handle the request now. Even though the
+        # request has been stored on the peers relation, the peers
+        # relation-changed hook will not be triggered.
+        if hookenv.is_leader():
+            return self.grant(lock, unit)
+
+        return False  # Can't acquire lock, yet. Maybe next hook.
+
+    def granted(self, lock):
+        '''Return True if a previously requested lock has been granted'''
+        unit = hookenv.local_unit()
+        ts = self.requests[unit].get(lock)
+        if ts and self.grants.get(unit, {}).get(lock) == ts:
+            return True
+        return False
+
+    def requested(self, lock):
+        '''Return True if we are in the queue for the lock'''
+        return lock in self.requests[hookenv.local_unit()]
+
+    def request_timestamp(self, lock):
+        '''Return the timestamp of our outstanding request for lock, or None.
+
+        Returns a datetime.datetime() UTC timestamp, with no tzinfo attribute.
+        '''
+        ts = self.requests[hookenv.local_unit()].get(lock, None)
+        if ts is not None:
+            return datetime.strptime(ts, _timestamp_format)
+
+    def handle(self):
+        if not hookenv.is_leader():
+            return  # Only the leader can grant requests.
+
+        self.msg('Leader handling coordinator requests')
+
+        # Clear our grants that have been released.
+        for unit in self.grants.keys():
+            for lock, grant_ts in list(self.grants[unit].items()):
+                req_ts = self.requests.get(unit, {}).get(lock)
+                if req_ts != grant_ts:
+                    # The request timestamp does not match the granted
+                    # timestamp. Several hooks on 'unit' may have run
+                    # before the leader got a chance to make a decision,
+                    # and 'unit' may have released its lock and attempted
+                    # to reacquire it. This will change the timestamp,
+                    # and we correctly revoke the old grant putting it
+                    # to the end of the queue.
+                    ts = datetime.strptime(self.grants[unit][lock],
+                                           _timestamp_format)
+                    del self.grants[unit][lock]
+                    self.released(unit, lock, ts)
+
+        # Grant locks
+        for unit in self.requests.keys():
+            for lock in self.requests[unit]:
+                self.grant(lock, unit)
+
+    def grant(self, lock, unit):
+        '''Maybe grant the lock to a unit.
+
+        The decision to grant the lock or not is made for $lock
+        by a corresponding method grant_$lock, which you may define
+        in a subclass. If no such method is defined, the default_grant
+        method is used. See Serial.default_grant() for details.
+        '''
+        if not hookenv.is_leader():
+            return False  # Not the leader, so we cannot grant.
+
+        # Set of units already granted the lock.
+        granted = set()
+        for u in self.grants:
+            if lock in self.grants[u]:
+                granted.add(u)
+        if unit in granted:
+            return True  # Already granted.
+
+        # Ordered list of units waiting for the lock.
+        reqs = set()
+        for u in self.requests:
+            if u in granted:
+                continue  # In the granted set. Not wanted in the req list.
+            for _lock, ts in self.requests[u].items():
+                if _lock == lock:
+                    reqs.add((ts, u))
+        queue = [t[1] for t in sorted(reqs)]
+        if unit not in queue:
+            return False  # Unit has not requested the lock.
+
+        # Locate custom logic, or fall back to the default.
+        grant_func = getattr(self, 'grant_{}'.format(lock), self.default_grant)
+
+        if grant_func(lock, unit, granted, queue):
+            # Grant the lock.
+            self.msg('Leader grants {} to {}'.format(lock, unit))
+            self.grants.setdefault(unit, {})[lock] = self.requests[unit][lock]
+            return True
+
+        return False
+
+    def released(self, unit, lock, timestamp):
+        '''Called on the leader when it has released a lock.
+
+        By default, does nothing but log messages. Override if you
+        need to perform additional housekeeping when a lock is released,
+        for example recording timestamps.
+        '''
+        interval = _utcnow() - timestamp
+        self.msg('Leader released {} from {}, held {}'.format(lock, unit,
+                                                              interval))
+
+    def require(self, lock, guard_func, *guard_args, **guard_kw):
+        """Decorate a function to be run only when a lock is acquired.
+
+        The lock is requested if the guard function returns True.
+
+        The decorated function is called if the lock has been granted.
+        """
+        def decorator(f):
+            @wraps(f)
+            def wrapper(*args, **kw):
+                if self.granted(lock):
+                    self.msg('Granted {}'.format(lock))
+                    return f(*args, **kw)
+                if guard_func(*guard_args, **guard_kw) and self.acquire(lock):
+                    return f(*args, **kw)
+                return None
+            return wrapper
+        return decorator
+
+    def msg(self, msg):
+        '''Emit a message. Override to customize log spam.'''
+        hookenv.log('coordinator.{} {}'.format(self._name(), msg),
+                    level=hookenv.INFO)
+
+    def _name(self):
+        return self.__class__.__name__
+
+    def _load_state(self):
+        self.msg('Loading state')
+
+        # All responses must be stored in the leadership settings.
+        # The leader cannot use local state, as a different unit may
+        # be leader next time. Which is fine, as the leadership
+        # settings are always available.
+        self.grants = json.loads(hookenv.leader_get(self.key) or '{}')
+
+        local_unit = hookenv.local_unit()
+
+        # All requests must be stored on the peers relation. This is
+        # the only channel units have to communicate with the leader.
+        # Even the leader needs to store its requests here, as a
+        # different unit may be leader by the time the request can be
+        # granted.
+        if self.relid is None:
+            # The peers relation is not available. Maybe we are early in
+            # the unit's lifecycle. Maybe this unit is standalone.
+            # Fallback to using local state.
+            self.msg('No peer relation. Loading local state')
+            self.requests = {local_unit: self._load_local_state()}
+        else:
+            self.requests = self._load_peer_state()
+            if local_unit not in self.requests:
+                # The peers relation has just been joined. Update any state
+                # loaded from our peers with our local state.
+                self.msg('New peer relation. Merging local state')
+                self.requests[local_unit] = self._load_local_state()
+
+    def _emit_state(self):
+        # Emit this unit's lock status.
+        for lock in sorted(self.requests[hookenv.local_unit()].keys()):
+            if self.granted(lock):
+                self.msg('Granted {}'.format(lock))
+            else:
+                self.msg('Waiting on {}'.format(lock))
+
+    def _save_state(self):
+        self.msg('Publishing state')
+        if hookenv.is_leader():
+            # sort_keys to ensure stability.
+            raw = json.dumps(self.grants, sort_keys=True)
+            hookenv.leader_set({self.key: raw})
+
+        local_unit = hookenv.local_unit()
+
+        if self.relid is None:
+            # No peers relation yet. Fallback to local state.
+            self.msg('No peer relation. Saving local state')
+            self._save_local_state(self.requests[local_unit])
+        else:
+            # sort_keys to ensure stability.
+            raw = json.dumps(self.requests[local_unit], sort_keys=True)
+            hookenv.relation_set(self.relid, relation_settings={self.key: raw})
+
+    def _load_peer_state(self):
+        requests = {}
+        units = set(hookenv.related_units(self.relid))
+        units.add(hookenv.local_unit())
+        for unit in units:
+            raw = hookenv.relation_get(self.key, unit, self.relid)
+            if raw:
+                requests[unit] = json.loads(raw)
+        return requests
+
+    def _local_state_filename(self):
+        # Include the class name. We allow multiple BaseCoordinator
+        # subclasses to be instantiated, and they are singletons, so
+        # this avoids conflicts (unless someone creates and uses two
+        # BaseCoordinator subclasses with the same class name, so don't
+        # do that).
+        return '.charmhelpers.coordinator.{}'.format(self._name())
+
+    def _load_local_state(self):
+        fn = self._local_state_filename()
+        if os.path.exists(fn):
+            with open(fn, 'r') as f:
+                return json.load(f)
+        return {}
+
+    def _save_local_state(self, state):
+        fn = self._local_state_filename()
+        with open(fn, 'w') as f:
+            json.dump(state, f)
+
+    def _release_granted(self):
+        # At the end of every hook, release all locks granted to
+        # this unit. If a hook neglects to make use of what it
+        # requested, it will just have to make the request again.
+        # Implicit release is the only way this will work, as
+        # if the unit is standalone there may be no future triggers
+        # called to do a manual release.
+        unit = hookenv.local_unit()
+        for lock in list(self.requests[unit].keys()):
+            if self.granted(lock):
+                self.msg('Released local {} lock'.format(lock))
+                del self.requests[unit][lock]
+
+
+class Serial(BaseCoordinator):
+    def default_grant(self, lock, unit, granted, queue):
+        '''Default logic to grant a lock to a unit. Unless overridden,
+        only one unit may hold the lock and it will be granted to the
+        earliest queued request.
+
+        To define custom logic for $lock, create a subclass and
+        define a grant_$lock method.
+
+        `unit` is the unit name making the request.
+
+        `granted` is the set of units already granted the lock. It will
+        never include `unit`. It may be empty.
+
+        `queue` is the list of units waiting for the lock, ordered by time
+        of request. It will always include `unit`, but `unit` is not
+        necessarily first.
+
+        Returns True if the lock should be granted to `unit`.
+        '''
+        return unit == queue[0] and not granted
+
+
+def _implicit_peer_relation_name():
+    md = hookenv.metadata()
+    assert 'peers' in md, 'No peer relations in metadata.yaml'
+    return sorted(md['peers'].keys())[0]
+
+
+# A human readable, sortable UTC timestamp format.
+_timestamp_format = '%Y-%m-%d %H:%M:%S.%fZ'
+
+
+def _utcnow():  # pragma: no cover
+    # This wrapper exists as mocking datetime methods is problematic.
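+    # Tests can replace this module-level wrapper (e.g. with a function
+    # returning a fixed datetime) instead of patching datetime.utcnow
+    # directly - an illustrative note.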
+    return datetime.utcnow()
+
+
+def _timestamp():
+    return _utcnow().strftime(_timestamp_format)
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/core/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/core/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..d7567b863e3a5ad2b7a7f44958b4166e0c3d346b
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/core/__init__.py
@@ -0,0 +1,13 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/core/decorators.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/core/decorators.py
new file mode 100644
index 0000000000000000000000000000000000000000..6ad41ee4121f4c0816935f8b16cd84f972aff22b
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/core/decorators.py
@@ -0,0 +1,55 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+#
+# Copyright 2014 Canonical Ltd.
+#
+# Authors:
+#  Edward Hope-Morley
+#
+
+import time
+
+from charmhelpers.core.hookenv import (
+    log,
+    INFO,
+)
+
+
+def retry_on_exception(num_retries, base_delay=0, exc_type=Exception):
+    """If the decorated function raises exception exc_type, allow num_retries
+    retry attempts before raising the exception.
+    """
+    def _retry_on_exception_inner_1(f):
+        def _retry_on_exception_inner_2(*args, **kwargs):
+            retries = num_retries
+            multiplier = 1
+            while True:
+                try:
+                    return f(*args, **kwargs)
+                except exc_type:
+                    if not retries:
+                        raise
+
+                delay = base_delay * multiplier
+                multiplier += 1
+                log("Retrying '%s' %d more times (delay=%s)" %
+                    (f.__name__, retries, delay), level=INFO)
+                retries -= 1
+                if delay:
+                    time.sleep(delay)
+
+        return _retry_on_exception_inner_2
+
+    return _retry_on_exception_inner_1
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/core/files.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/core/files.py
new file mode 100644
index 0000000000000000000000000000000000000000..fdd82b75709c13da0d534bf4962822984a3c1867
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/core/files.py
@@ -0,0 +1,43 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+
+# Copyright 2014-2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+__author__ = 'Jorge Niedbalski '
+
+import os
+import subprocess
+
+
+def sed(filename, before, after, flags='g'):
+    """
+    Search and replace the given pattern in filename.
+
+    :param filename: relative or absolute file path.
+    :param before: expression to be replaced (see 'man sed')
+    :param after: expression to replace with (see 'man sed')
+    :param flags: sed-compatible regex flags; for example, to make the
+        search and replace case insensitive, specify ``flags="i"``.
+        The ``g`` flag is used by default; pass a different value to
+        replace it.
+    :returns: 0 if the sed command exited successfully, otherwise a
+        CalledProcessError is raised.
+    """
+    expression = r's/{0}/{1}/{2}'.format(before,
+                                         after, flags)
+
+    return subprocess.check_call(["sed", "-i", "-r", "-e",
+                                  expression,
+                                  os.path.expanduser(filename)])
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/core/fstab.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/core/fstab.py
new file mode 100644
index 0000000000000000000000000000000000000000..d9fa9152c765c538adad3fd9bc45a46018c89b72
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/core/fstab.py
@@ -0,0 +1,132 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+
+# Copyright 2014-2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import io
+import os
+
+__author__ = 'Jorge Niedbalski R.
' + + +class Fstab(io.FileIO): + """This class extends file in order to implement a file reader/writer + for file `/etc/fstab` + """ + + class Entry(object): + """Entry class represents a non-comment line on the `/etc/fstab` file + """ + def __init__(self, device, mountpoint, filesystem, + options, d=0, p=0): + self.device = device + self.mountpoint = mountpoint + self.filesystem = filesystem + + if not options: + options = "defaults" + + self.options = options + self.d = int(d) + self.p = int(p) + + def __eq__(self, o): + return str(self) == str(o) + + def __str__(self): + return "{} {} {} {} {} {}".format(self.device, + self.mountpoint, + self.filesystem, + self.options, + self.d, + self.p) + + DEFAULT_PATH = os.path.join(os.path.sep, 'etc', 'fstab') + + def __init__(self, path=None): + if path: + self._path = path + else: + self._path = self.DEFAULT_PATH + super(Fstab, self).__init__(self._path, 'rb+') + + def _hydrate_entry(self, line): + # NOTE: use split with no arguments to split on any + # whitespace including tabs + return Fstab.Entry(*filter( + lambda x: x not in ('', None), + line.strip("\n").split())) + + @property + def entries(self): + self.seek(0) + for line in self.readlines(): + line = line.decode('us-ascii') + try: + if line.strip() and not line.strip().startswith("#"): + yield self._hydrate_entry(line) + except ValueError: + pass + + def get_entry_by_attr(self, attr, value): + for entry in self.entries: + e_attr = getattr(entry, attr) + if e_attr == value: + return entry + return None + + def add_entry(self, entry): + if self.get_entry_by_attr('device', entry.device): + return False + + self.write((str(entry) + '\n').encode('us-ascii')) + self.truncate() + return entry + + def remove_entry(self, entry): + self.seek(0) + + lines = [l.decode('us-ascii') for l in self.readlines()] + + found = False + for index, line in enumerate(lines): + if line.strip() and not line.strip().startswith("#"): + if self._hydrate_entry(line) == entry: + found = True + break + + if not found: + return False + + lines.remove(line) + + self.seek(0) + self.write(''.join(lines).encode('us-ascii')) + self.truncate() + return True + + @classmethod + def remove_by_mountpoint(cls, mountpoint, path=None): + fstab = cls(path=path) + entry = fstab.get_entry_by_attr('mountpoint', mountpoint) + if entry: + return fstab.remove_entry(entry) + return False + + @classmethod + def add(cls, device, mountpoint, filesystem, options=None, path=None): + return cls(path=path).add_entry(Fstab.Entry(device, + mountpoint, filesystem, + options=options)) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/core/hookenv.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/core/hookenv.py new file mode 100644 index 0000000000000000000000000000000000000000..db7ce7282b4c96c8a33abf309a340377216922ec --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/core/hookenv.py @@ -0,0 +1,1613 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
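+# Illustrative usage of the Fstab helper from fstab.py above (a sketch;
+# the device and mountpoint values are assumptions):
+#
+#   Fstab.add('/dev/sdb1', '/mnt/data', 'ext4', options='noatime')
+#   Fstab.remove_by_mountpoint('/mnt/data')
+#
+# add() returns the new Entry, or False if the device already has one;
+# remove_by_mountpoint() returns True only when a matching entry existed.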
+# See the License for the specific language governing permissions and +# limitations under the License. + +"Interactions with the Juju environment" +# Copyright 2013 Canonical Ltd. +# +# Authors: +# Charm Helpers Developers + +from __future__ import print_function +import copy +from distutils.version import LooseVersion +from enum import Enum +from functools import wraps +from collections import namedtuple +import glob +import os +import json +import yaml +import re +import subprocess +import sys +import errno +import tempfile +from subprocess import CalledProcessError + +from charmhelpers import deprecate + +import six +if not six.PY3: + from UserDict import UserDict +else: + from collections import UserDict + + +CRITICAL = "CRITICAL" +ERROR = "ERROR" +WARNING = "WARNING" +INFO = "INFO" +DEBUG = "DEBUG" +TRACE = "TRACE" +MARKER = object() +SH_MAX_ARG = 131071 + + +RANGE_WARNING = ('Passing NO_PROXY string that includes a cidr. ' + 'This may not be compatible with software you are ' + 'running in your shell.') + + +class WORKLOAD_STATES(Enum): + ACTIVE = 'active' + BLOCKED = 'blocked' + MAINTENANCE = 'maintenance' + WAITING = 'waiting' + + +cache = {} + + +def cached(func): + """Cache return values for multiple executions of func + args + + For example:: + + @cached + def unit_get(attribute): + pass + + unit_get('test') + + will cache the result of unit_get + 'test' for future calls. + """ + @wraps(func) + def wrapper(*args, **kwargs): + global cache + key = json.dumps((func, args, kwargs), sort_keys=True, default=str) + try: + return cache[key] + except KeyError: + pass # Drop out of the exception handler scope. + res = func(*args, **kwargs) + cache[key] = res + return res + wrapper._wrapped = func + return wrapper + + +def flush(key): + """Flushes any entries from function cache where the + key is found in the function+args """ + flush_list = [] + for item in cache: + if key in item: + flush_list.append(item) + for item in flush_list: + del cache[item] + + +def log(message, level=None): + """Write a message to the juju log""" + command = ['juju-log'] + if level: + command += ['-l', level] + if not isinstance(message, six.string_types): + message = repr(message) + command += [message[:SH_MAX_ARG]] + # Missing juju-log should not cause failures in unit tests + # Send log output to stderr + try: + subprocess.call(command) + except OSError as e: + if e.errno == errno.ENOENT: + if level: + message = "{}: {}".format(level, message) + message = "juju-log: {}".format(message) + print(message, file=sys.stderr) + else: + raise + + +def function_log(message): + """Write a function progress message""" + command = ['function-log'] + if not isinstance(message, six.string_types): + message = repr(message) + command += [message[:SH_MAX_ARG]] + # Missing function-log should not cause failures in unit tests + # Send function_log output to stderr + try: + subprocess.call(command) + except OSError as e: + if e.errno == errno.ENOENT: + message = "function-log: {}".format(message) + print(message, file=sys.stderr) + else: + raise + + +class Serializable(UserDict): + """Wrapper, an object that can be serialized to yaml or json""" + + def __init__(self, obj): + # wrap the object + UserDict.__init__(self) + self.data = obj + + def __getattr__(self, attr): + # See if this object has attribute. + if attr in ("json", "yaml", "data"): + return self.__dict__[attr] + # Check for attribute in wrapped object. 
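+        # Illustrative (assumed value): Serializable({'a': 1}).a == 1,
+        # because a missing attribute falls through to the wrapped dict;
+        # .json() would likewise return '{"a": 1}'.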
+ got = getattr(self.data, attr, MARKER) + if got is not MARKER: + return got + # Proxy to the wrapped object via dict interface. + try: + return self.data[attr] + except KeyError: + raise AttributeError(attr) + + def __getstate__(self): + # Pickle as a standard dictionary. + return self.data + + def __setstate__(self, state): + # Unpickle into our wrapper. + self.data = state + + def json(self): + """Serialize the object to json""" + return json.dumps(self.data) + + def yaml(self): + """Serialize the object to yaml""" + return yaml.dump(self.data) + + +def execution_environment(): + """A convenient bundling of the current execution context""" + context = {} + context['conf'] = config() + if relation_id(): + context['reltype'] = relation_type() + context['relid'] = relation_id() + context['rel'] = relation_get() + context['unit'] = local_unit() + context['rels'] = relations() + context['env'] = os.environ + return context + + +def in_relation_hook(): + """Determine whether we're running in a relation hook""" + return 'JUJU_RELATION' in os.environ + + +def relation_type(): + """The scope for the current relation hook""" + return os.environ.get('JUJU_RELATION', None) + + +@cached +def relation_id(relation_name=None, service_or_unit=None): + """The relation ID for the current or a specified relation""" + if not relation_name and not service_or_unit: + return os.environ.get('JUJU_RELATION_ID', None) + elif relation_name and service_or_unit: + service_name = service_or_unit.split('/')[0] + for relid in relation_ids(relation_name): + remote_service = remote_service_name(relid) + if remote_service == service_name: + return relid + else: + raise ValueError('Must specify neither or both of relation_name and service_or_unit') + + +def local_unit(): + """Local unit ID""" + return os.environ['JUJU_UNIT_NAME'] + + +def remote_unit(): + """The remote unit for the current relation hook""" + return os.environ.get('JUJU_REMOTE_UNIT', None) + + +def application_name(): + """ + The name of the deployed application this unit belongs to. + """ + return local_unit().split('/')[0] + + +def service_name(): + """ + .. deprecated:: 0.19.1 + Alias for :func:`application_name`. + """ + return application_name() + + +def model_name(): + """ + Name of the model that this unit is deployed in. + """ + return os.environ['JUJU_MODEL_NAME'] + + +def model_uuid(): + """ + UUID of the model that this unit is deployed in. + """ + return os.environ['JUJU_MODEL_UUID'] + + +def principal_unit(): + """Returns the principal unit of this unit, otherwise None""" + # Juju 2.2 and above provides JUJU_PRINCIPAL_UNIT + principal_unit = os.environ.get('JUJU_PRINCIPAL_UNIT', None) + # If it's empty, then this unit is the principal + if principal_unit == '': + return os.environ['JUJU_UNIT_NAME'] + elif principal_unit is not None: + return principal_unit + # For Juju 2.1 and below, let's try work out the principle unit by + # the various charms' metadata.yaml. 
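+    # Walk each co-located unit's metadata.yaml; the first unit whose
+    # metadata is not marked 'subordinate' is treated as the principal.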
+ for reltype in relation_types(): + for rid in relation_ids(reltype): + for unit in related_units(rid): + md = _metadata_unit(unit) + if not md: + continue + subordinate = md.pop('subordinate', None) + if not subordinate: + return unit + return None + + +@cached +def remote_service_name(relid=None): + """The remote service name for a given relation-id (or the current relation)""" + if relid is None: + unit = remote_unit() + else: + units = related_units(relid) + unit = units[0] if units else None + return unit.split('/')[0] if unit else None + + +def hook_name(): + """The name of the currently executing hook""" + return os.environ.get('JUJU_HOOK_NAME', os.path.basename(sys.argv[0])) + + +class Config(dict): + """A dictionary representation of the charm's config.yaml, with some + extra features: + + - See which values in the dictionary have changed since the previous hook. + - For values that have changed, see what the previous value was. + - Store arbitrary data for use in a later hook. + + NOTE: Do not instantiate this object directly - instead call + ``hookenv.config()``, which will return an instance of :class:`Config`. + + Example usage:: + + >>> # inside a hook + >>> from charmhelpers.core import hookenv + >>> config = hookenv.config() + >>> config['foo'] + 'bar' + >>> # store a new key/value for later use + >>> config['mykey'] = 'myval' + + + >>> # user runs `juju set mycharm foo=baz` + >>> # now we're inside subsequent config-changed hook + >>> config = hookenv.config() + >>> config['foo'] + 'baz' + >>> # test to see if this val has changed since last hook + >>> config.changed('foo') + True + >>> # what was the previous value? + >>> config.previous('foo') + 'bar' + >>> # keys/values that we add are preserved across hooks + >>> config['mykey'] + 'myval' + + """ + CONFIG_FILE_NAME = '.juju-persistent-config' + + def __init__(self, *args, **kw): + super(Config, self).__init__(*args, **kw) + self.implicit_save = True + self._prev_dict = None + self.path = os.path.join(charm_dir(), Config.CONFIG_FILE_NAME) + if os.path.exists(self.path) and os.stat(self.path).st_size: + self.load_previous() + atexit(self._implicit_save) + + def load_previous(self, path=None): + """Load previous copy of config from disk. + + In normal usage you don't need to call this method directly - it + is called automatically at object initialization. + + :param path: + + File path from which to load the previous config. If `None`, + config is loaded from the default location. If `path` is + specified, subsequent `save()` calls will write to the same + path. + + """ + self.path = path or self.path + with open(self.path) as f: + try: + self._prev_dict = json.load(f) + except ValueError as e: + log('Found but was unable to parse previous config data, ' + 'ignoring which will report all values as changed - {}' + .format(str(e)), level=ERROR) + return + for k, v in copy.deepcopy(self._prev_dict).items(): + if k not in self: + self[k] = v + + def changed(self, key): + """Return True if the current value for this key is different from + the previous value. + + """ + if self._prev_dict is None: + return True + return self.previous(key) != self.get(key) + + def previous(self, key): + """Return previous value for this key, or None if there + is no previous value. + + """ + if self._prev_dict: + return self._prev_dict.get(key) + return None + + def save(self): + """Save this config to disk. 
+ + If the charm is using the :mod:`Services Framework ` + or :meth:'@hook ' decorator, this + is called automatically at the end of successful hook execution. + Otherwise, it should be called directly by user code. + + To disable automatic saves, set ``implicit_save=False`` on this + instance. + + """ + with open(self.path, 'w') as f: + os.fchmod(f.fileno(), 0o600) + json.dump(self, f) + + def _implicit_save(self): + if self.implicit_save: + self.save() + + +_cache_config = None + + +def config(scope=None): + """ + Get the juju charm configuration (scope==None) or individual key, + (scope=str). The returned value is a Python data structure loaded as + JSON from the Juju config command. + + :param scope: If set, return the value for the specified key. + :type scope: Optional[str] + :returns: Either the whole config as a Config, or a key from it. + :rtype: Any + """ + global _cache_config + config_cmd_line = ['config-get', '--all', '--format=json'] + try: + # JSON Decode Exception for Python3.5+ + exc_json = json.decoder.JSONDecodeError + except AttributeError: + # JSON Decode Exception for Python2.7 through Python3.4 + exc_json = ValueError + try: + if _cache_config is None: + config_data = json.loads( + subprocess.check_output(config_cmd_line).decode('UTF-8')) + _cache_config = Config(config_data) + if scope is not None: + return _cache_config.get(scope) + return _cache_config + except (exc_json, UnicodeDecodeError) as e: + log('Unable to parse output from config-get: config_cmd_line="{}" ' + 'message="{}"' + .format(config_cmd_line, str(e)), level=ERROR) + return None + + +@cached +def relation_get(attribute=None, unit=None, rid=None): + """Get relation information""" + _args = ['relation-get', '--format=json'] + if rid: + _args.append('-r') + _args.append(rid) + _args.append(attribute or '-') + if unit: + _args.append(unit) + try: + return json.loads(subprocess.check_output(_args).decode('UTF-8')) + except ValueError: + return None + except CalledProcessError as e: + if e.returncode == 2: + return None + raise + + +def relation_set(relation_id=None, relation_settings=None, **kwargs): + """Set relation information for the current unit""" + relation_settings = relation_settings if relation_settings else {} + relation_cmd_line = ['relation-set'] + accepts_file = "--file" in subprocess.check_output( + relation_cmd_line + ["--help"], universal_newlines=True) + if relation_id is not None: + relation_cmd_line.extend(('-r', relation_id)) + settings = relation_settings.copy() + settings.update(kwargs) + for key, value in settings.items(): + # Force value to be a string: it always should, but some call + # sites pass in things like dicts or numbers. + if value is not None: + settings[key] = "{}".format(value) + if accepts_file: + # --file was introduced in Juju 1.23.2. Use it by default if + # available, since otherwise we'll break if the relation data is + # too big. Ideally we should tell relation-set to read the data from + # stdin, but that feature is broken in 1.23.2: Bug #1454678. 
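+        # Illustrative call (names assumed):
+        #   relation_set(relation_id='db:1',
+        #                relation_settings={'host': '10.0.0.5'})
+        # The settings are written to a temporary YAML file and handed to
+        # relation-set via --file, avoiding command-line length limits.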
+        with tempfile.NamedTemporaryFile(delete=False) as settings_file:
+            settings_file.write(yaml.safe_dump(settings).encode("utf-8"))
+        subprocess.check_call(
+            relation_cmd_line + ["--file", settings_file.name])
+        os.remove(settings_file.name)
+    else:
+        for key, value in settings.items():
+            if value is None:
+                relation_cmd_line.append('{}='.format(key))
+            else:
+                relation_cmd_line.append('{}={}'.format(key, value))
+        subprocess.check_call(relation_cmd_line)
+    # Flush cache of any relation-gets for local unit
+    flush(local_unit())
+
+
+def relation_clear(r_id=None):
+    ''' Clears any relation data already set on relation r_id '''
+    settings = relation_get(rid=r_id,
+                            unit=local_unit())
+    for setting in settings:
+        if setting not in ['public-address', 'private-address']:
+            settings[setting] = None
+    relation_set(relation_id=r_id,
+                 **settings)
+
+
+@cached
+def relation_ids(reltype=None):
+    """A list of relation_ids"""
+    reltype = reltype or relation_type()
+    relid_cmd_line = ['relation-ids', '--format=json']
+    if reltype is not None:
+        relid_cmd_line.append(reltype)
+        return json.loads(
+            subprocess.check_output(relid_cmd_line).decode('UTF-8')) or []
+    return []
+
+
+@cached
+def related_units(relid=None):
+    """A list of related units"""
+    relid = relid or relation_id()
+    units_cmd_line = ['relation-list', '--format=json']
+    if relid is not None:
+        units_cmd_line.extend(('-r', relid))
+    return json.loads(
+        subprocess.check_output(units_cmd_line).decode('UTF-8')) or []
+
+
+def expected_peer_units():
+    """Get a generator for units we expect to join peer relation based on
+    goal-state.
+
+    The local unit is excluded from the result to make it easy to gauge
+    completion of all peers joining the relation with existing hook tools.
+
+    Example usage:
+    log('peer {} of {} joined peer relation'
+        .format(len(related_units()),
+                len(list(expected_peer_units()))))
+
+    This function will raise NotImplementedError if used with juju versions
+    without goal-state support.
+
+    :returns: iterator
+    :rtype: types.GeneratorType
+    :raises: NotImplementedError
+    """
+    if not has_juju_version("2.4.0"):
+        # goal-state first appeared in 2.4.0.
+        raise NotImplementedError("goal-state")
+    _goal_state = goal_state()
+    return (key for key in _goal_state['units']
+            if '/' in key and key != local_unit())
+
+
+def expected_related_units(reltype=None):
+    """Get a generator for units we expect to join relation based on
+    goal-state.
+
+    Note that you can not use this function for the peer relation; take a
+    look at expected_peer_units() for that.
+
+    This function will raise KeyError if you request information for a
+    relation type for which juju goal-state does not have information. It
+    will raise NotImplementedError if used with juju versions without
+    goal-state support.
+
+    Example usage:
+    log('participant {} of {} joined relation {}'
+        .format(len(related_units()),
+                len(list(expected_related_units())),
+                relation_type()))
+
+    :param reltype: Relation type to list data for, default is to list data
+                    for the relation type we are currently executing a hook
+                    for.
+    :type reltype: str
+    :returns: iterator
+    :rtype: types.GeneratorType
+    :raises: KeyError, NotImplementedError
+    """
+    if not has_juju_version("2.4.4"):
+        # goal-state existed in 2.4.0, but did not list individual units to
+        # join a relation in 2.4.1 through 2.4.3.
(LP: #1794739) + raise NotImplementedError("goal-state relation unit count") + reltype = reltype or relation_type() + _goal_state = goal_state() + return (key for key in _goal_state['relations'][reltype] if '/' in key) + + +@cached +def relation_for_unit(unit=None, rid=None): + """Get the json represenation of a unit's relation""" + unit = unit or remote_unit() + relation = relation_get(unit=unit, rid=rid) + for key in relation: + if key.endswith('-list'): + relation[key] = relation[key].split() + relation['__unit__'] = unit + return relation + + +@cached +def relations_for_id(relid=None): + """Get relations of a specific relation ID""" + relation_data = [] + relid = relid or relation_ids() + for unit in related_units(relid): + unit_data = relation_for_unit(unit, relid) + unit_data['__relid__'] = relid + relation_data.append(unit_data) + return relation_data + + +@cached +def relations_of_type(reltype=None): + """Get relations of a specific type""" + relation_data = [] + reltype = reltype or relation_type() + for relid in relation_ids(reltype): + for relation in relations_for_id(relid): + relation['__relid__'] = relid + relation_data.append(relation) + return relation_data + + +@cached +def metadata(): + """Get the current charm metadata.yaml contents as a python object""" + with open(os.path.join(charm_dir(), 'metadata.yaml')) as md: + return yaml.safe_load(md) + + +def _metadata_unit(unit): + """Given the name of a unit (e.g. apache2/0), get the unit charm's + metadata.yaml. Very similar to metadata() but allows us to inspect + other units. Unit needs to be co-located, such as a subordinate or + principal/primary. + + :returns: metadata.yaml as a python object. + + """ + basedir = os.sep.join(charm_dir().split(os.sep)[:-2]) + unitdir = 'unit-{}'.format(unit.replace(os.sep, '-')) + joineddir = os.path.join(basedir, unitdir, 'charm', 'metadata.yaml') + if not os.path.exists(joineddir): + return None + with open(joineddir) as md: + return yaml.safe_load(md) + + +@cached +def relation_types(): + """Get a list of relation types supported by this charm""" + rel_types = [] + md = metadata() + for key in ('provides', 'requires', 'peers'): + section = md.get(key) + if section: + rel_types.extend(section.keys()) + return rel_types + + +@cached +def peer_relation_id(): + '''Get the peers relation id if a peers relation has been joined, else None.''' + md = metadata() + section = md.get('peers') + if section: + for key in section: + relids = relation_ids(key) + if relids: + return relids[0] + return None + + +@cached +def relation_to_interface(relation_name): + """ + Given the name of a relation, return the interface that relation uses. + + :returns: The interface name, or ``None``. + """ + return relation_to_role_and_interface(relation_name)[1] + + +@cached +def relation_to_role_and_interface(relation_name): + """ + Given the name of a relation, return the role and the name of the interface + that relation uses (where role is one of ``provides``, ``requires``, or ``peers``). + + :returns: A tuple containing ``(role, interface)``, or ``(None, None)``. 
+ """ + _metadata = metadata() + for role in ('provides', 'requires', 'peers'): + interface = _metadata.get(role, {}).get(relation_name, {}).get('interface') + if interface: + return role, interface + return None, None + + +@cached +def role_and_interface_to_relations(role, interface_name): + """ + Given a role and interface name, return a list of relation names for the + current charm that use that interface under that role (where role is one + of ``provides``, ``requires``, or ``peers``). + + :returns: A list of relation names. + """ + _metadata = metadata() + results = [] + for relation_name, relation in _metadata.get(role, {}).items(): + if relation['interface'] == interface_name: + results.append(relation_name) + return results + + +@cached +def interface_to_relations(interface_name): + """ + Given an interface, return a list of relation names for the current + charm that use that interface. + + :returns: A list of relation names. + """ + results = [] + for role in ('provides', 'requires', 'peers'): + results.extend(role_and_interface_to_relations(role, interface_name)) + return results + + +@cached +def charm_name(): + """Get the name of the current charm as is specified on metadata.yaml""" + return metadata().get('name') + + +@cached +def relations(): + """Get a nested dictionary of relation data for all related units""" + rels = {} + for reltype in relation_types(): + relids = {} + for relid in relation_ids(reltype): + units = {local_unit(): relation_get(unit=local_unit(), rid=relid)} + for unit in related_units(relid): + reldata = relation_get(unit=unit, rid=relid) + units[unit] = reldata + relids[relid] = units + rels[reltype] = relids + return rels + + +@cached +def is_relation_made(relation, keys='private-address'): + ''' + Determine whether a relation is established by checking for + presence of key(s). If a list of keys is provided, they + must all be present for the relation to be identified as made + ''' + if isinstance(keys, str): + keys = [keys] + for r_id in relation_ids(relation): + for unit in related_units(r_id): + context = {} + for k in keys: + context[k] = relation_get(k, rid=r_id, + unit=unit) + if None not in context.values(): + return True + return False + + +def _port_op(op_name, port, protocol="TCP"): + """Open or close a service network port""" + _args = [op_name] + icmp = protocol.upper() == "ICMP" + if icmp: + _args.append(protocol) + else: + _args.append('{}/{}'.format(port, protocol)) + try: + subprocess.check_call(_args) + except subprocess.CalledProcessError: + # Older Juju pre 2.3 doesn't support ICMP + # so treat it as a no-op if it fails. 
+ if not icmp: + raise + + +def open_port(port, protocol="TCP"): + """Open a service network port""" + _port_op('open-port', port, protocol) + + +def close_port(port, protocol="TCP"): + """Close a service network port""" + _port_op('close-port', port, protocol) + + +def open_ports(start, end, protocol="TCP"): + """Opens a range of service network ports""" + _args = ['open-port'] + _args.append('{}-{}/{}'.format(start, end, protocol)) + subprocess.check_call(_args) + + +def close_ports(start, end, protocol="TCP"): + """Close a range of service network ports""" + _args = ['close-port'] + _args.append('{}-{}/{}'.format(start, end, protocol)) + subprocess.check_call(_args) + + +def opened_ports(): + """Get the opened ports + + *Note that this will only show ports opened in a previous hook* + + :returns: Opened ports as a list of strings: ``['8080/tcp', '8081-8083/tcp']`` + """ + _args = ['opened-ports', '--format=json'] + return json.loads(subprocess.check_output(_args).decode('UTF-8')) + + +@cached +def unit_get(attribute): + """Get the unit ID for the remote unit""" + _args = ['unit-get', '--format=json', attribute] + try: + return json.loads(subprocess.check_output(_args).decode('UTF-8')) + except ValueError: + return None + + +def unit_public_ip(): + """Get this unit's public IP address""" + return unit_get('public-address') + + +def unit_private_ip(): + """Get this unit's private IP address""" + return unit_get('private-address') + + +@cached +def storage_get(attribute=None, storage_id=None): + """Get storage attributes""" + _args = ['storage-get', '--format=json'] + if storage_id: + _args.extend(('-s', storage_id)) + if attribute: + _args.append(attribute) + try: + return json.loads(subprocess.check_output(_args).decode('UTF-8')) + except ValueError: + return None + + +@cached +def storage_list(storage_name=None): + """List the storage IDs for the unit""" + _args = ['storage-list', '--format=json'] + if storage_name: + _args.append(storage_name) + try: + return json.loads(subprocess.check_output(_args).decode('UTF-8')) + except ValueError: + return None + except OSError as e: + import errno + if e.errno == errno.ENOENT: + # storage-list does not exist + return [] + raise + + +class UnregisteredHookError(Exception): + """Raised when an undefined hook is called""" + pass + + +class Hooks(object): + """A convenient handler for hook functions. + + Example:: + + hooks = Hooks() + + # register a hook, taking its name from the function name + @hooks.hook() + def install(): + pass # your code here + + # register a hook, providing a custom hook name + @hooks.hook("config-changed") + def config_changed(): + pass # your code here + + if __name__ == "__main__": + # execute a hook based on the name the program is called by + hooks.execute(sys.argv) + """ + + def __init__(self, config_save=None): + super(Hooks, self).__init__() + self._hooks = {} + + # For unknown reasons, we allow the Hooks constructor to override + # config().implicit_save. 
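+        # Illustrative: Hooks(config_save=False) disables the automatic
+        # Config.save() that is otherwise scheduled via atexit at the end
+        # of a successful hook run.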
+ if config_save is not None: + config().implicit_save = config_save + + def register(self, name, function): + """Register a hook""" + self._hooks[name] = function + + def execute(self, args): + """Execute a registered hook based on args[0]""" + _run_atstart() + hook_name = os.path.basename(args[0]) + if hook_name in self._hooks: + try: + self._hooks[hook_name]() + except SystemExit as x: + if x.code is None or x.code == 0: + _run_atexit() + raise + _run_atexit() + else: + raise UnregisteredHookError(hook_name) + + def hook(self, *hook_names): + """Decorator, registering them as hooks""" + def wrapper(decorated): + for hook_name in hook_names: + self.register(hook_name, decorated) + else: + self.register(decorated.__name__, decorated) + if '_' in decorated.__name__: + self.register( + decorated.__name__.replace('_', '-'), decorated) + return decorated + return wrapper + + +class NoNetworkBinding(Exception): + pass + + +def charm_dir(): + """Return the root directory of the current charm""" + d = os.environ.get('JUJU_CHARM_DIR') + if d is not None: + return d + return os.environ.get('CHARM_DIR') + + +def cmd_exists(cmd): + """Return True if the specified cmd exists in the path""" + return any( + os.access(os.path.join(path, cmd), os.X_OK) + for path in os.environ["PATH"].split(os.pathsep) + ) + + +@cached +@deprecate("moved to function_get()", log=log) +def action_get(key=None): + """ + .. deprecated:: 0.20.7 + Alias for :func:`function_get`. + + Gets the value of an action parameter, or all key/value param pairs. + """ + cmd = ['action-get'] + if key is not None: + cmd.append(key) + cmd.append('--format=json') + action_data = json.loads(subprocess.check_output(cmd).decode('UTF-8')) + return action_data + + +@cached +def function_get(key=None): + """Gets the value of an action parameter, or all key/value param pairs""" + cmd = ['function-get'] + # Fallback for older charms. + if not cmd_exists('function-get'): + cmd = ['action-get'] + + if key is not None: + cmd.append(key) + cmd.append('--format=json') + function_data = json.loads(subprocess.check_output(cmd).decode('UTF-8')) + return function_data + + +@deprecate("moved to function_set()", log=log) +def action_set(values): + """ + .. deprecated:: 0.20.7 + Alias for :func:`function_set`. + + Sets the values to be returned after the action finishes. + """ + cmd = ['action-set'] + for k, v in list(values.items()): + cmd.append('{}={}'.format(k, v)) + subprocess.check_call(cmd) + + +def function_set(values): + """Sets the values to be returned after the function finishes""" + cmd = ['function-set'] + # Fallback for older charms. + if not cmd_exists('function-get'): + cmd = ['action-set'] + + for k, v in list(values.items()): + cmd.append('{}={}'.format(k, v)) + subprocess.check_call(cmd) + + +@deprecate("moved to function_fail()", log=log) +def action_fail(message): + """ + .. deprecated:: 0.20.7 + Alias for :func:`function_fail`. + + Sets the action status to failed and sets the error message. + + The results set by action_set are preserved. + """ + subprocess.check_call(['action-fail', message]) + + +def function_fail(message): + """Sets the function status to failed and sets the error message. + + The results set by function_set are preserved.""" + cmd = ['function-fail'] + # Fallback for older charms. 
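+    # Illustrative: function_fail('backup failed: disk full') marks the
+    # running function/action as failed while preserving any results
+    # already recorded with function_set().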
+ if not cmd_exists('function-fail'): + cmd = ['action-fail'] + cmd.append(message) + + subprocess.check_call(cmd) + + +def action_name(): + """Get the name of the currently executing action.""" + return os.environ.get('JUJU_ACTION_NAME') + + +def function_name(): + """Get the name of the currently executing function.""" + return os.environ.get('JUJU_FUNCTION_NAME') or action_name() + + +def action_uuid(): + """Get the UUID of the currently executing action.""" + return os.environ.get('JUJU_ACTION_UUID') + + +def function_id(): + """Get the ID of the currently executing function.""" + return os.environ.get('JUJU_FUNCTION_ID') or action_uuid() + + +def action_tag(): + """Get the tag for the currently executing action.""" + return os.environ.get('JUJU_ACTION_TAG') + + +def function_tag(): + """Get the tag for the currently executing function.""" + return os.environ.get('JUJU_FUNCTION_TAG') or action_tag() + + +def status_set(workload_state, message, application=False): + """Set the workload state with a message + + Use status-set to set the workload state with a message which is visible + to the user via juju status. If the status-set command is not found then + assume this is juju < 1.23 and juju-log the message instead. + + workload_state -- valid juju workload state. str or WORKLOAD_STATES + message -- status update message + application -- Whether this is an application state set + """ + bad_state_msg = '{!r} is not a valid workload state' + + if isinstance(workload_state, str): + try: + # Convert string to enum. + workload_state = WORKLOAD_STATES[workload_state.upper()] + except KeyError: + raise ValueError(bad_state_msg.format(workload_state)) + + if workload_state not in WORKLOAD_STATES: + raise ValueError(bad_state_msg.format(workload_state)) + + cmd = ['status-set'] + if application: + cmd.append('--application') + cmd.extend([workload_state.value, message]) + try: + ret = subprocess.call(cmd) + if ret == 0: + return + except OSError as e: + if e.errno != errno.ENOENT: + raise + log_message = 'status-set failed: {} {}'.format(workload_state.value, + message) + log(log_message, level='INFO') + + +def status_get(): + """Retrieve the previously set juju workload state and message + + If the status-get command is not found then assume this is juju < 1.23 and + return 'unknown', "" + + """ + cmd = ['status-get', "--format=json", "--include-data"] + try: + raw_status = subprocess.check_output(cmd) + except OSError as e: + if e.errno == errno.ENOENT: + return ('unknown', "") + else: + raise + else: + status = json.loads(raw_status.decode("UTF-8")) + return (status["status"], status["message"]) + + +def translate_exc(from_exc, to_exc): + def inner_translate_exc1(f): + @wraps(f) + def inner_translate_exc2(*args, **kwargs): + try: + return f(*args, **kwargs) + except from_exc: + raise to_exc + + return inner_translate_exc2 + + return inner_translate_exc1 + + +def application_version_set(version): + """Charm authors may trigger this command from any hook to output what + version of the application is running. This could be a package version, + for instance postgres version 9.5. It could also be a build number or + version control revision identifier, for instance git sha 6fb7ba68. 
""" + + cmd = ['application-version-set'] + cmd.append(version) + try: + subprocess.check_call(cmd) + except OSError: + log("Application Version: {}".format(version)) + + +@translate_exc(from_exc=OSError, to_exc=NotImplementedError) +@cached +def goal_state(): + """Juju goal state values""" + cmd = ['goal-state', '--format=json'] + return json.loads(subprocess.check_output(cmd).decode('UTF-8')) + + +@translate_exc(from_exc=OSError, to_exc=NotImplementedError) +def is_leader(): + """Does the current unit hold the juju leadership + + Uses juju to determine whether the current unit is the leader of its peers + """ + cmd = ['is-leader', '--format=json'] + return json.loads(subprocess.check_output(cmd).decode('UTF-8')) + + +@translate_exc(from_exc=OSError, to_exc=NotImplementedError) +def leader_get(attribute=None): + """Juju leader get value(s)""" + cmd = ['leader-get', '--format=json'] + [attribute or '-'] + return json.loads(subprocess.check_output(cmd).decode('UTF-8')) + + +@translate_exc(from_exc=OSError, to_exc=NotImplementedError) +def leader_set(settings=None, **kwargs): + """Juju leader set value(s)""" + # Don't log secrets. + # log("Juju leader-set '%s'" % (settings), level=DEBUG) + cmd = ['leader-set'] + settings = settings or {} + settings.update(kwargs) + for k, v in settings.items(): + if v is None: + cmd.append('{}='.format(k)) + else: + cmd.append('{}={}'.format(k, v)) + subprocess.check_call(cmd) + + +@translate_exc(from_exc=OSError, to_exc=NotImplementedError) +def payload_register(ptype, klass, pid): + """ is used while a hook is running to let Juju know that a + payload has been started.""" + cmd = ['payload-register'] + for x in [ptype, klass, pid]: + cmd.append(x) + subprocess.check_call(cmd) + + +@translate_exc(from_exc=OSError, to_exc=NotImplementedError) +def payload_unregister(klass, pid): + """ is used while a hook is running to let Juju know + that a payload has been manually stopped. The and provided + must match a payload that has been previously registered with juju using + payload-register.""" + cmd = ['payload-unregister'] + for x in [klass, pid]: + cmd.append(x) + subprocess.check_call(cmd) + + +@translate_exc(from_exc=OSError, to_exc=NotImplementedError) +def payload_status_set(klass, pid, status): + """is used to update the current status of a registered payload. + The and provided must match a payload that has been previously + registered with juju using payload-register. The must be one of the + follow: starting, started, stopping, stopped""" + cmd = ['payload-status-set'] + for x in [klass, pid, status]: + cmd.append(x) + subprocess.check_call(cmd) + + +@translate_exc(from_exc=OSError, to_exc=NotImplementedError) +def resource_get(name): + """used to fetch the resource path of the given name. + + must match a name of defined resource in metadata.yaml + + returns either a path or False if resource not available + """ + if not name: + return False + + cmd = ['resource-get', name] + try: + return subprocess.check_output(cmd).decode('UTF-8') + except subprocess.CalledProcessError: + return False + + +@cached +def juju_version(): + """Full version string (eg. 
'1.23.3.1-trusty-amd64')""" + # Per https://bugs.launchpad.net/juju-core/+bug/1455368/comments/1 + jujud = glob.glob('/var/lib/juju/tools/machine-*/jujud')[0] + return subprocess.check_output([jujud, 'version'], + universal_newlines=True).strip() + + +def has_juju_version(minimum_version): + """Return True if the Juju version is at least the provided version""" + return LooseVersion(juju_version()) >= LooseVersion(minimum_version) + + +_atexit = [] +_atstart = [] + + +def atstart(callback, *args, **kwargs): + '''Schedule a callback to run before the main hook. + + Callbacks are run in the order they were added. + + This is useful for modules and classes to perform initialization + and inject behavior. In particular: + + - Run common code before all of your hooks, such as logging + the hook name or interesting relation data. + - Defer object or module initialization that requires a hook + context until we know there actually is a hook context, + making testing easier. + - Rather than requiring charm authors to include boilerplate to + invoke your helper's behavior, have it run automatically if + your object is instantiated or module imported. + + This is not at all useful after your hook framework as been launched. + ''' + global _atstart + _atstart.append((callback, args, kwargs)) + + +def atexit(callback, *args, **kwargs): + '''Schedule a callback to run on successful hook completion. + + Callbacks are run in the reverse order that they were added.''' + _atexit.append((callback, args, kwargs)) + + +def _run_atstart(): + '''Hook frameworks must invoke this before running the main hook body.''' + global _atstart + for callback, args, kwargs in _atstart: + callback(*args, **kwargs) + del _atstart[:] + + +def _run_atexit(): + '''Hook frameworks must invoke this after the main hook body has + successfully completed. Do not invoke it if the hook fails.''' + global _atexit + for callback, args, kwargs in reversed(_atexit): + callback(*args, **kwargs) + del _atexit[:] + + +@translate_exc(from_exc=OSError, to_exc=NotImplementedError) +def network_get_primary_address(binding): + ''' + Deprecated since Juju 2.3; use network_get() + + Retrieve the primary network address for a named binding + + :param binding: string. The name of a relation of extra-binding + :return: string. The primary IP address for the named binding + :raise: NotImplementedError if run on Juju < 2.0 + ''' + cmd = ['network-get', '--primary-address', binding] + try: + response = subprocess.check_output( + cmd, + stderr=subprocess.STDOUT).decode('UTF-8').strip() + except CalledProcessError as e: + if 'no network config found for binding' in e.output.decode('UTF-8'): + raise NoNetworkBinding("No network binding for {}" + .format(binding)) + else: + raise + return response + + +def network_get(endpoint, relation_id=None): + """ + Retrieve the network details for a relation endpoint + + :param endpoint: string. The name of a relation endpoint + :param relation_id: int. The ID of the relation for the current context. + :return: dict. The loaded YAML output of the network-get query. + :raise: NotImplementedError if request not supported by the Juju version. 
+ """ + if not has_juju_version('2.2'): + raise NotImplementedError(juju_version()) # earlier versions require --primary-address + if relation_id and not has_juju_version('2.3'): + raise NotImplementedError # 2.3 added the -r option + + cmd = ['network-get', endpoint, '--format', 'yaml'] + if relation_id: + cmd.append('-r') + cmd.append(relation_id) + response = subprocess.check_output( + cmd, + stderr=subprocess.STDOUT).decode('UTF-8').strip() + return yaml.safe_load(response) + + +def add_metric(*args, **kwargs): + """Add metric values. Values may be expressed with keyword arguments. For + metric names containing dashes, these may be expressed as one or more + 'key=value' positional arguments. May only be called from the collect-metrics + hook.""" + _args = ['add-metric'] + _kvpairs = [] + _kvpairs.extend(args) + _kvpairs.extend(['{}={}'.format(k, v) for k, v in kwargs.items()]) + _args.extend(sorted(_kvpairs)) + try: + subprocess.check_call(_args) + return + except EnvironmentError as e: + if e.errno != errno.ENOENT: + raise + log_message = 'add-metric failed: {}'.format(' '.join(_kvpairs)) + log(log_message, level='INFO') + + +def meter_status(): + """Get the meter status, if running in the meter-status-changed hook.""" + return os.environ.get('JUJU_METER_STATUS') + + +def meter_info(): + """Get the meter status information, if running in the meter-status-changed + hook.""" + return os.environ.get('JUJU_METER_INFO') + + +def iter_units_for_relation_name(relation_name): + """Iterate through all units in a relation + + Generator that iterates through all the units in a relation and yields + a named tuple with rid and unit field names. + + Usage: + data = [(u.rid, u.unit) + for u in iter_units_for_relation_name(relation_name)] + + :param relation_name: string relation name + :yield: Named Tuple with rid and unit field names + """ + RelatedUnit = namedtuple('RelatedUnit', 'rid, unit') + for rid in relation_ids(relation_name): + for unit in related_units(rid): + yield RelatedUnit(rid, unit) + + +def ingress_address(rid=None, unit=None): + """ + Retrieve the ingress-address from a relation when available. + Otherwise, return the private-address. + + When used on the consuming side of the relation (unit is a remote + unit), the ingress-address is the IP address that this unit needs + to use to reach the provided service on the remote unit. + + When used on the providing side of the relation (unit == local_unit()), + the ingress-address is the IP address that is advertised to remote + units on this relation. Remote units need to use this address to + reach the local provided service on this unit. + + Note that charms may document some other method to use in + preference to the ingress_address(), such as an address provided + on a different relation attribute or a service discovery mechanism. + This allows charms to redirect inbound connections to their peers + or different applications such as load balancers. + + Usage: + addresses = [ingress_address(rid=u.rid, unit=u.unit) + for u in iter_units_for_relation_name(relation_name)] + + :param rid: string relation id + :param unit: string unit name + :side effect: calls relation_get + :return: string IP address + """ + settings = relation_get(rid=rid, unit=unit) + return (settings.get('ingress-address') or + settings.get('private-address')) + + +def egress_subnets(rid=None, unit=None): + """ + Retrieve the egress-subnets from a relation. 
+ + This function is to be used on the providing side of the + relation, and provides the ranges of addresses that client + connections may come from. The result is uninteresting on + the consuming side of a relation (unit == local_unit()). + + Returns a stable list of subnets in CIDR format. + eg. ['192.168.1.0/24', '2001::F00F/128'] + + If egress-subnets is not available, falls back to using the published + ingress-address, or finally private-address. + + :param rid: string relation id + :param unit: string unit name + :side effect: calls relation_get + :return: list of subnets in CIDR format. eg. ['192.168.1.0/24', '2001::F00F/128'] + """ + def _to_range(addr): + if re.search(r'^(?:\d{1,3}\.){3}\d{1,3}$', addr) is not None: + addr += '/32' + elif ':' in addr and '/' not in addr: # IPv6 + addr += '/128' + return addr + + settings = relation_get(rid=rid, unit=unit) + if 'egress-subnets' in settings: + return [n.strip() for n in settings['egress-subnets'].split(',') if n.strip()] + if 'ingress-address' in settings: + return [_to_range(settings['ingress-address'])] + if 'private-address' in settings: + return [_to_range(settings['private-address'])] + return [] # Should never happen + + +def unit_doomed(unit=None): + """Determines if the unit is being removed from the model + + Requires Juju 2.4.1. + + :param unit: string unit name, defaults to local_unit + :side effect: calls goal_state + :side effect: calls local_unit + :side effect: calls has_juju_version + :return: True if the unit is being removed, already gone, or never existed + """ + if not has_juju_version("2.4.1"): + # We cannot risk blindly returning False for 'we don't know', + # because that could cause data loss; if call sites don't + # need an accurate answer, they likely don't need this helper + # at all. + # goal-state existed in 2.4.0, but did not handle removals + # correctly until 2.4.1. + raise NotImplementedError("is_doomed") + if unit is None: + unit = local_unit() + gs = goal_state() + units = gs.get('units', {}) + if unit not in units: + return True + # I don't think 'dead' units ever show up in the goal-state, but + # check anyway in addition to 'dying'. + return units[unit]['status'] in ('dying', 'dead') + + +def env_proxy_settings(selected_settings=None): + """Get proxy settings from process environment variables. + + Get charm proxy settings from environment variables that correspond to + juju-http-proxy, juju-https-proxy juju-no-proxy (available as of 2.4.2, see + lp:1782236) and juju-ftp-proxy in a format suitable for passing to an + application that reacts to proxy settings passed as environment variables. + Some applications support lowercase or uppercase notation (e.g. curl), some + support only lowercase (e.g. wget), there are also subjectively rare cases + of only uppercase notation support. no_proxy CIDR and wildcard support also + varies between runtimes and applications as there is no enforced standard. + + Some applications may connect to multiple destinations and expose config + options that would affect only proxy settings for a specific destination + these should be handled in charms in an application-specific manner. 
+ + :param selected_settings: format only a subset of possible settings + :type selected_settings: list + :rtype: Option(None, dict[str, str]) + """ + SUPPORTED_SETTINGS = { + 'http': 'HTTP_PROXY', + 'https': 'HTTPS_PROXY', + 'no_proxy': 'NO_PROXY', + 'ftp': 'FTP_PROXY' + } + if selected_settings is None: + selected_settings = SUPPORTED_SETTINGS + + selected_vars = [v for k, v in SUPPORTED_SETTINGS.items() + if k in selected_settings] + proxy_settings = {} + for var in selected_vars: + var_val = os.getenv(var) + if var_val: + proxy_settings[var] = var_val + proxy_settings[var.lower()] = var_val + # Now handle juju-prefixed environment variables. The legacy vs new + # environment variable usage is mutually exclusive + charm_var_val = os.getenv('JUJU_CHARM_{}'.format(var)) + if charm_var_val: + proxy_settings[var] = charm_var_val + proxy_settings[var.lower()] = charm_var_val + if 'no_proxy' in proxy_settings: + if _contains_range(proxy_settings['no_proxy']): + log(RANGE_WARNING, level=WARNING) + return proxy_settings if proxy_settings else None + + +def _contains_range(addresses): + """Check for cidr or wildcard domain in a string. + + Given a string comprising a comma seperated list of ip addresses + and domain names, determine whether the string contains IP ranges + or wildcard domains. + + :param addresses: comma seperated list of domains and ip addresses. + :type addresses: str + """ + return ( + # Test for cidr (e.g. 10.20.20.0/24) + "/" in addresses or + # Test for wildcard domains (*.foo.com or .foo.com) + "*" in addresses or + addresses.startswith(".") or + ",." in addresses or + " ." in addresses) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/core/host.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/core/host.py new file mode 100644 index 0000000000000000000000000000000000000000..b33ac906d9eeb198210dceb069724ac9a35652ea --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/core/host.py @@ -0,0 +1,1104 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +"""Tools for working with the host system""" +# Copyright 2012 Canonical Ltd. 
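+# Illustrative usage of hookenv.env_proxy_settings above (a sketch; the
+# pip command is an assumption):
+#
+#   proxy_env = env_proxy_settings(['http', 'https'])
+#   if proxy_env:
+#       subprocess.check_call(['pip', 'install', 'some-package'],
+#                             env=dict(os.environ, **proxy_env))
+#
+# The helper returns both UPPER- and lower-case variables, or None when
+# no proxy configuration is present in the environment.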
+# +# Authors: +# Nick Moffitt +# Matthew Wedgwood + +import os +import re +import pwd +import glob +import grp +import random +import string +import subprocess +import hashlib +import functools +import itertools +import six + +from contextlib import contextmanager +from collections import OrderedDict +from .hookenv import log, INFO, DEBUG, local_unit, charm_name +from .fstab import Fstab +from charmhelpers.osplatform import get_platform + +__platform__ = get_platform() +if __platform__ == "ubuntu": + from charmhelpers.core.host_factory.ubuntu import ( # NOQA:F401 + service_available, + add_new_group, + lsb_release, + cmp_pkgrevno, + CompareHostReleases, + get_distrib_codename, + arch + ) # flake8: noqa -- ignore F401 for this import +elif __platform__ == "centos": + from charmhelpers.core.host_factory.centos import ( # NOQA:F401 + service_available, + add_new_group, + lsb_release, + cmp_pkgrevno, + CompareHostReleases, + ) # flake8: noqa -- ignore F401 for this import + +UPDATEDB_PATH = '/etc/updatedb.conf' + + +def service_start(service_name, **kwargs): + """Start a system service. + + The specified service name is managed via the system level init system. + Some init systems (e.g. upstart) require that additional arguments be + provided in order to directly control service instances whereas other init + systems allow for addressing instances of a service directly by name (e.g. + systemd). + + The kwargs allow for the additional parameters to be passed to underlying + init systems for those systems which require/allow for them. For example, + the ceph-osd upstart script requires the id parameter to be passed along + in order to identify which running daemon should be reloaded. The follow- + ing example stops the ceph-osd service for instance id=4: + + service_stop('ceph-osd', id=4) + + :param service_name: the name of the service to stop + :param **kwargs: additional parameters to pass to the init system when + managing services. These will be passed as key=value + parameters to the init system's commandline. kwargs + are ignored for systemd enabled systems. + """ + return service('start', service_name, **kwargs) + + +def service_stop(service_name, **kwargs): + """Stop a system service. + + The specified service name is managed via the system level init system. + Some init systems (e.g. upstart) require that additional arguments be + provided in order to directly control service instances whereas other init + systems allow for addressing instances of a service directly by name (e.g. + systemd). + + The kwargs allow for the additional parameters to be passed to underlying + init systems for those systems which require/allow for them. For example, + the ceph-osd upstart script requires the id parameter to be passed along + in order to identify which running daemon should be reloaded. The follow- + ing example stops the ceph-osd service for instance id=4: + + service_stop('ceph-osd', id=4) + + :param service_name: the name of the service to stop + :param **kwargs: additional parameters to pass to the init system when + managing services. These will be passed as key=value + parameters to the init system's commandline. kwargs + are ignored for systemd enabled systems. + """ + return service('stop', service_name, **kwargs) + + +def service_restart(service_name, **kwargs): + """Restart a system service. + + The specified service name is managed via the system level init system. + Some init systems (e.g. 
upstart) require that additional arguments be + provided in order to directly control service instances whereas other init + systems allow for addressing instances of a service directly by name (e.g. + systemd). + + The kwargs allow for the additional parameters to be passed to underlying + init systems for those systems which require/allow for them. For example, + the ceph-osd upstart script requires the id parameter to be passed along + in order to identify which running daemon should be restarted. The follow- + ing example restarts the ceph-osd service for instance id=4: + + service_restart('ceph-osd', id=4) + + :param service_name: the name of the service to restart + :param **kwargs: additional parameters to pass to the init system when + managing services. These will be passed as key=value + parameters to the init system's commandline. kwargs + are ignored for init systems not allowing additional + parameters via the commandline (systemd). + """ + return service('restart', service_name) + + +def service_reload(service_name, restart_on_failure=False, **kwargs): + """Reload a system service, optionally falling back to restart if + reload fails. + + The specified service name is managed via the system level init system. + Some init systems (e.g. upstart) require that additional arguments be + provided in order to directly control service instances whereas other init + systems allow for addressing instances of a service directly by name (e.g. + systemd). + + The kwargs allow for the additional parameters to be passed to underlying + init systems for those systems which require/allow for them. For example, + the ceph-osd upstart script requires the id parameter to be passed along + in order to identify which running daemon should be reloaded. The follow- + ing example restarts the ceph-osd service for instance id=4: + + service_reload('ceph-osd', id=4) + + :param service_name: the name of the service to reload + :param restart_on_failure: boolean indicating whether to fallback to a + restart if the reload fails. + :param **kwargs: additional parameters to pass to the init system when + managing services. These will be passed as key=value + parameters to the init system's commandline. kwargs + are ignored for init systems not allowing additional + parameters via the commandline (systemd). + """ + service_result = service('reload', service_name, **kwargs) + if not service_result and restart_on_failure: + service_result = service('restart', service_name, **kwargs) + return service_result + + +def service_pause(service_name, init_dir="/etc/init", initd_dir="/etc/init.d", + **kwargs): + """Pause a system service. + + Stop it, and prevent it from starting again at boot. + + :param service_name: the name of the service to pause + :param init_dir: path to the upstart init directory + :param initd_dir: path to the sysv init directory + :param **kwargs: additional parameters to pass to the init system when + managing services. These will be passed as key=value + parameters to the init system's commandline. kwargs + are ignored for init systems which do not support + key=value arguments via the commandline. 
+    """
+    stopped = True
+    if service_running(service_name, **kwargs):
+        stopped = service_stop(service_name, **kwargs)
+    upstart_file = os.path.join(init_dir, "{}.conf".format(service_name))
+    sysv_file = os.path.join(initd_dir, service_name)
+    if init_is_systemd():
+        service('disable', service_name)
+        service('mask', service_name)
+    elif os.path.exists(upstart_file):
+        override_path = os.path.join(
+            init_dir, '{}.override'.format(service_name))
+        with open(override_path, 'w') as fh:
+            fh.write("manual\n")
+    elif os.path.exists(sysv_file):
+        subprocess.check_call(["update-rc.d", service_name, "disable"])
+    else:
+        raise ValueError(
+            "Unable to detect {0} as SystemD, Upstart {1} or"
+            " SysV {2}".format(
+                service_name, upstart_file, sysv_file))
+    return stopped
+
+
+def service_resume(service_name, init_dir="/etc/init",
+                   initd_dir="/etc/init.d", **kwargs):
+    """Resume a system service.
+
+    Re-enable starting again at boot. Start the service.
+
+    :param service_name: the name of the service to resume
+    :param init_dir: the path to the init dir
+    :param initd_dir: the path to the initd dir
+    :param **kwargs: additional parameters to pass to the init system when
+                     managing services. These will be passed as key=value
+                     parameters to the init system's commandline. kwargs
+                     are ignored for systemd enabled systems.
+    """
+    upstart_file = os.path.join(init_dir, "{}.conf".format(service_name))
+    sysv_file = os.path.join(initd_dir, service_name)
+    if init_is_systemd():
+        service('unmask', service_name)
+        service('enable', service_name)
+    elif os.path.exists(upstart_file):
+        override_path = os.path.join(
+            init_dir, '{}.override'.format(service_name))
+        if os.path.exists(override_path):
+            os.unlink(override_path)
+    elif os.path.exists(sysv_file):
+        subprocess.check_call(["update-rc.d", service_name, "enable"])
+    else:
+        raise ValueError(
+            "Unable to detect {0} as SystemD, Upstart {1} or"
+            " SysV {2}".format(
+                service_name, upstart_file, sysv_file))
+    started = service_running(service_name, **kwargs)
+
+    if not started:
+        started = service_start(service_name, **kwargs)
+    return started
+
+
+def service(action, service_name, **kwargs):
+    """Control a system service.
+
+    :param action: the action to take on the service
+    :param service_name: the name of the service to perform the action on
+    :param **kwargs: additional params to be passed to the service command in
+                     the form of key=value.
+    """
+    if init_is_systemd():
+        cmd = ['systemctl', action, service_name]
+    else:
+        cmd = ['service', service_name, action]
+        for key, value in six.iteritems(kwargs):
+            parameter = '%s=%s' % (key, value)
+            cmd.append(parameter)
+    return subprocess.call(cmd) == 0
+
+
+_UPSTART_CONF = "/etc/init/{}.conf"
+_INIT_D_CONF = "/etc/init.d/{}"
+
+
+def service_running(service_name, **kwargs):
+    """Determine whether a system service is running.
+
+    :param service_name: the name of the service
+    :param **kwargs: additional args to pass to the service command. This is
+                     used to pass additional key=value arguments to the
+                     service command line for managing specific instance
+                     units (e.g. service ceph-osd status id=2). The kwargs
+                     are ignored in systemd services.
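+
+    Example (illustrative; assumes a 'ceph-osd' upstart job that takes an
+    id parameter, as described above)::
+
+        # Check one specific upstart instance unit; kwargs become
+        # key=value arguments on the 'status' command line.
+        service_running('ceph-osd', id=4)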
+ """ + if init_is_systemd(): + return service('is-active', service_name) + else: + if os.path.exists(_UPSTART_CONF.format(service_name)): + try: + cmd = ['status', service_name] + for key, value in six.iteritems(kwargs): + parameter = '%s=%s' % (key, value) + cmd.append(parameter) + output = subprocess.check_output( + cmd, stderr=subprocess.STDOUT).decode('UTF-8') + except subprocess.CalledProcessError: + return False + else: + # This works for upstart scripts where the 'service' command + # returns a consistent string to represent running + # 'start/running' + if ("start/running" in output or + "is running" in output or + "up and running" in output): + return True + elif os.path.exists(_INIT_D_CONF.format(service_name)): + # Check System V scripts init script return codes + return service('status', service_name) + return False + + +SYSTEMD_SYSTEM = '/run/systemd/system' + + +def init_is_systemd(): + """Return True if the host system uses systemd, False otherwise.""" + if lsb_release()['DISTRIB_CODENAME'] == 'trusty': + return False + return os.path.isdir(SYSTEMD_SYSTEM) + + +def adduser(username, password=None, shell='/bin/bash', + system_user=False, primary_group=None, + secondary_groups=None, uid=None, home_dir=None): + """Add a user to the system. + + Will log but otherwise succeed if the user already exists. + + :param str username: Username to create + :param str password: Password for user; if ``None``, create a system user + :param str shell: The default shell for the user + :param bool system_user: Whether to create a login or system user + :param str primary_group: Primary group for user; defaults to username + :param list secondary_groups: Optional list of additional groups + :param int uid: UID for user being created + :param str home_dir: Home directory for user + + :returns: The password database entry struct, as returned by `pwd.getpwnam` + """ + try: + user_info = pwd.getpwnam(username) + log('user {0} already exists!'.format(username)) + if uid: + user_info = pwd.getpwuid(int(uid)) + log('user with uid {0} already exists!'.format(uid)) + except KeyError: + log('creating user {0}'.format(username)) + cmd = ['useradd'] + if uid: + cmd.extend(['--uid', str(uid)]) + if home_dir: + cmd.extend(['--home', str(home_dir)]) + if system_user or password is None: + cmd.append('--system') + else: + cmd.extend([ + '--create-home', + '--shell', shell, + '--password', password, + ]) + if not primary_group: + try: + grp.getgrnam(username) + primary_group = username # avoid "group exists" error + except KeyError: + pass + if primary_group: + cmd.extend(['-g', primary_group]) + if secondary_groups: + cmd.extend(['-G', ','.join(secondary_groups)]) + cmd.append(username) + subprocess.check_call(cmd) + user_info = pwd.getpwnam(username) + return user_info + + +def user_exists(username): + """Check if a user exists""" + try: + pwd.getpwnam(username) + user_exists = True + except KeyError: + user_exists = False + return user_exists + + +def uid_exists(uid): + """Check if a uid exists""" + try: + pwd.getpwuid(uid) + uid_exists = True + except KeyError: + uid_exists = False + return uid_exists + + +def group_exists(groupname): + """Check if a group exists""" + try: + grp.getgrnam(groupname) + group_exists = True + except KeyError: + group_exists = False + return group_exists + + +def gid_exists(gid): + """Check if a gid exists""" + try: + grp.getgrgid(gid) + gid_exists = True + except KeyError: + gid_exists = False + return gid_exists + + +def add_group(group_name, system_group=False, gid=None): + 
"""Add a group to the system + + Will log but otherwise succeed if the group already exists. + + :param str group_name: group to create + :param bool system_group: Create system group + :param int gid: GID for user being created + + :returns: The password database entry struct, as returned by `grp.getgrnam` + """ + try: + group_info = grp.getgrnam(group_name) + log('group {0} already exists!'.format(group_name)) + if gid: + group_info = grp.getgrgid(gid) + log('group with gid {0} already exists!'.format(gid)) + except KeyError: + log('creating group {0}'.format(group_name)) + add_new_group(group_name, system_group, gid) + group_info = grp.getgrnam(group_name) + return group_info + + +def add_user_to_group(username, group): + """Add a user to a group""" + cmd = ['gpasswd', '-a', username, group] + log("Adding user {} to group {}".format(username, group)) + subprocess.check_call(cmd) + + +def chage(username, lastday=None, expiredate=None, inactive=None, + mindays=None, maxdays=None, root=None, warndays=None): + """Change user password expiry information + + :param str username: User to update + :param str lastday: Set when password was changed in YYYY-MM-DD format + :param str expiredate: Set when user's account will no longer be + accessible in YYYY-MM-DD format. + -1 will remove an account expiration date. + :param str inactive: Set the number of days of inactivity after a password + has expired before the account is locked. + -1 will remove an account's inactivity. + :param str mindays: Set the minimum number of days between password + changes to MIN_DAYS. + 0 indicates the password can be changed anytime. + :param str maxdays: Set the maximum number of days during which a + password is valid. + -1 as MAX_DAYS will remove checking maxdays + :param str root: Apply changes in the CHROOT_DIR directory + :param str warndays: Set the number of days of warning before a password + change is required + :raises subprocess.CalledProcessError: if call to chage fails + """ + cmd = ['chage'] + if root: + cmd.extend(['--root', root]) + if lastday: + cmd.extend(['--lastday', lastday]) + if expiredate: + cmd.extend(['--expiredate', expiredate]) + if inactive: + cmd.extend(['--inactive', inactive]) + if mindays: + cmd.extend(['--mindays', mindays]) + if maxdays: + cmd.extend(['--maxdays', maxdays]) + if warndays: + cmd.extend(['--warndays', warndays]) + cmd.append(username) + subprocess.check_call(cmd) + + +remove_password_expiry = functools.partial(chage, expiredate='-1', inactive='-1', mindays='0', maxdays='-1') + + +def rsync(from_path, to_path, flags='-r', options=None, timeout=None): + """Replicate the contents of a path""" + options = options or ['--delete', '--executability'] + cmd = ['/usr/bin/rsync', flags] + if timeout: + cmd = ['timeout', str(timeout)] + cmd + cmd.extend(options) + cmd.append(from_path) + cmd.append(to_path) + log(" ".join(cmd)) + return subprocess.check_output(cmd, stderr=subprocess.STDOUT).decode('UTF-8').strip() + + +def symlink(source, destination): + """Create a symbolic link""" + log("Symlinking {} as {}".format(source, destination)) + cmd = [ + 'ln', + '-sf', + source, + destination, + ] + subprocess.check_call(cmd) + + +def mkdir(path, owner='root', group='root', perms=0o555, force=False): + """Create a directory""" + log("Making dir {} {}:{} {:o}".format(path, owner, group, + perms)) + uid = pwd.getpwnam(owner).pw_uid + gid = grp.getgrnam(group).gr_gid + realpath = os.path.abspath(path) + path_exists = os.path.exists(realpath) + if path_exists and force: + if not 
+        if not os.path.isdir(realpath):
+            log("Removing non-directory file {} prior to mkdir()".format(path))
+            os.unlink(realpath)
+            os.makedirs(realpath, perms)
+    elif not path_exists:
+        os.makedirs(realpath, perms)
+    os.chown(realpath, uid, gid)
+    os.chmod(realpath, perms)
+
+
+def write_file(path, content, owner='root', group='root', perms=0o444):
+    """Create or overwrite a file with the contents of a byte string."""
+    uid = pwd.getpwnam(owner).pw_uid
+    gid = grp.getgrnam(group).gr_gid
+    # let's see if we can grab the file and compare the content, to avoid
+    # doing a write.
+    existing_content = None
+    existing_uid, existing_gid, existing_perms = None, None, None
+    try:
+        with open(path, 'rb') as target:
+            existing_content = target.read()
+        stat = os.stat(path)
+        existing_uid, existing_gid, existing_perms = (
+            stat.st_uid, stat.st_gid, stat.st_mode
+        )
+    except Exception:
+        pass
+    if content != existing_content:
+        log("Writing file {} {}:{} {:o}".format(path, owner, group, perms),
+            level=DEBUG)
+        with open(path, 'wb') as target:
+            os.fchown(target.fileno(), uid, gid)
+            os.fchmod(target.fileno(), perms)
+            if six.PY3 and isinstance(content, six.string_types):
+                content = content.encode('UTF-8')
+            target.write(content)
+        return
+    # the contents were the same, but we might still need to change the
+    # ownership or permissions.
+    if existing_uid != uid:
+        log("Changing uid on already existing content: {} -> {}"
+            .format(existing_uid, uid), level=DEBUG)
+        os.chown(path, uid, -1)
+    if existing_gid != gid:
+        log("Changing gid on already existing content: {} -> {}"
+            .format(existing_gid, gid), level=DEBUG)
+        os.chown(path, -1, gid)
+    if existing_perms != perms:
+        log("Changing permissions on existing content: {} -> {}"
+            .format(existing_perms, perms), level=DEBUG)
+        os.chmod(path, perms)
+
+
+def fstab_remove(mp):
+    """Remove the given mountpoint entry from /etc/fstab"""
+    return Fstab.remove_by_mountpoint(mp)
+
+
+def fstab_add(dev, mp, fs, options=None):
+    """Adds the given device entry to the /etc/fstab file"""
+    return Fstab.add(dev, mp, fs, options=options)
+
+
+def mount(device, mountpoint, options=None, persist=False, filesystem="ext3"):
+    """Mount a filesystem at a particular mountpoint"""
+    cmd_args = ['mount']
+    if options is not None:
+        cmd_args.extend(['-o', options])
+    cmd_args.extend([device, mountpoint])
+    try:
+        subprocess.check_output(cmd_args)
+    except subprocess.CalledProcessError as e:
+        log('Error mounting {} at {}\n{}'.format(device, mountpoint, e.output))
+        return False
+
+    if persist:
+        return fstab_add(device, mountpoint, filesystem, options=options)
+    return True
+
+
+def umount(mountpoint, persist=False):
+    """Unmount a filesystem"""
+    cmd_args = ['umount', mountpoint]
+    try:
+        subprocess.check_output(cmd_args)
+    except subprocess.CalledProcessError as e:
+        log('Error unmounting {}\n{}'.format(mountpoint, e.output))
+        return False
+
+    if persist:
+        return fstab_remove(mountpoint)
+    return True
+
+
+def mounts():
+    """Get a list of all mounted volumes as [[mountpoint,device],[...]]"""
+    with open('/proc/mounts') as f:
+        # [['/mount/point','/dev/path'],[...]]
+        system_mounts = [m[1::-1] for m in [l.strip().split()
+                                            for l in f.readlines()]]
+    return system_mounts
+
+
+def fstab_mount(mountpoint):
+    """Mount filesystem using fstab"""
+    cmd_args = ['mount', mountpoint]
+    try:
+        subprocess.check_output(cmd_args)
+    except subprocess.CalledProcessError as e:
+        log('Error mounting {}\n{}'.format(mountpoint, e.output))
+        return False
+    return True
+
+
+def file_hash(path, hash_type='md5'):
+    """Generate a hash checksum of the contents of 'path' or None if not found.
+
+    :param str hash_type: Any hash algorithm supported by :mod:`hashlib`,
+                          such as md5, sha1, sha256, sha512, etc.
+    """
+    if os.path.exists(path):
+        h = getattr(hashlib, hash_type)()
+        with open(path, 'rb') as source:
+            h.update(source.read())
+        return h.hexdigest()
+    else:
+        return None
+
+
+def path_hash(path):
+    """Generate a hash checksum of all files matching 'path'. Standard
+    wildcards like '*' and '?' are supported, see documentation for the 'glob'
+    module for more information.
+
+    :return: dict: A { filename: hash } dictionary for all matched files.
+                   Empty if none found.
+    """
+    return {
+        filename: file_hash(filename)
+        for filename in glob.iglob(path)
+    }
+
+
+def check_hash(path, checksum, hash_type='md5'):
+    """Validate a file using a cryptographic checksum.
+
+    :param str checksum: Value of the checksum used to validate the file.
+    :param str hash_type: Hash algorithm used to generate `checksum`.
+                          Can be any hash algorithm supported by
+                          :mod:`hashlib`, such as md5, sha1, sha256, sha512,
+                          etc.
+    :raises ChecksumError: If the file fails the checksum
+    """
+    actual_checksum = file_hash(path, hash_type)
+    if checksum != actual_checksum:
+        raise ChecksumError("'%s' != '%s'" % (checksum, actual_checksum))
+
+
+class ChecksumError(ValueError):
+    """A class derived from ValueError to indicate the checksum failed."""
+    pass
+
+
+def restart_on_change(restart_map, stopstart=False, restart_functions=None):
+    """Restart services based on configuration files changing
+
+    This function is used as a decorator, for example::
+
+        @restart_on_change({
+            '/etc/ceph/ceph.conf': [ 'cinder-api', 'cinder-volume' ],
+            '/etc/apache/sites-enabled/*': [ 'apache2' ]
+        })
+        def config_changed():
+            pass  # your code here
+
+    In this example, the cinder-api and cinder-volume services
+    would be restarted if /etc/ceph/ceph.conf is changed by the
+    config_changed function. The apache2 service would be
+    restarted if any file matching the pattern got changed, created
+    or removed. Standard wildcards are supported, see documentation
+    for the 'glob' module for more information.
+
+    @param restart_map: {path_file_name: [service_name, ...]}
+    @param stopstart: DEFAULT false; whether to stop, start OR restart
+    @param restart_functions: nonstandard functions to use to restart services
+                              {svc: func, ...}
+    @returns result from decorated function
+    """
+    def wrap(f):
+        @functools.wraps(f)
+        def wrapped_f(*args, **kwargs):
+            return restart_on_change_helper(
+                (lambda: f(*args, **kwargs)), restart_map, stopstart,
+                restart_functions)
+        return wrapped_f
+    return wrap
+
+
+def restart_on_change_helper(lambda_f, restart_map, stopstart=False,
+                             restart_functions=None):
+    """Helper function to perform the restart_on_change function.
+
+    This is provided for decorators to restart services if files described
+    in the restart_map have changed after an invocation of lambda_f().
+
+    @param lambda_f: function to call.
+    @param restart_map: {file: [service, ...]}
+    @param stopstart: whether to stop, start or restart a service
+    @param restart_functions: nonstandard functions to use to restart services
+                              {svc: func, ...}
+    @returns result of lambda_f()
+    """
+    if restart_functions is None:
+        restart_functions = {}
+    checksums = {path: path_hash(path) for path in restart_map}
+    r = lambda_f()
+    # create a list of lists of the services to restart
+    restarts = [restart_map[path]
+                for path in restart_map
+                if path_hash(path) != checksums[path]]
+    # create a flat list of ordered services without duplicates from lists
+    services_list = list(OrderedDict.fromkeys(itertools.chain(*restarts)))
+    if services_list:
+        actions = ('stop', 'start') if stopstart else ('restart',)
+        for service_name in services_list:
+            if service_name in restart_functions:
+                restart_functions[service_name](service_name)
+            else:
+                for action in actions:
+                    service(action, service_name)
+    return r
+
+
+def pwgen(length=None):
+    """Generate a random password."""
+    if length is None:
+        # A weak PRNG is acceptable for choosing the length; only the
+        # password characters need a cryptographically strong source.
+        length = random.choice(range(35, 45))
+    alphanumeric_chars = [
+        l for l in (string.ascii_letters + string.digits)
+        if l not in 'l0QD1vAEIOUaeiou']
+    # Use a crypto-friendly PRNG (e.g. /dev/urandom) for making the
+    # actual password
+    random_generator = random.SystemRandom()
+    random_chars = [
+        random_generator.choice(alphanumeric_chars) for _ in range(length)]
+    return ''.join(random_chars)
+
+
+def is_phy_iface(interface):
+    """Returns True if interface is not virtual, otherwise False."""
+    if interface:
+        sys_net = '/sys/class/net'
+        if os.path.isdir(sys_net):
+            for iface in glob.glob(os.path.join(sys_net, '*')):
+                if '/virtual/' in os.path.realpath(iface):
+                    continue
+
+                if interface == os.path.basename(iface):
+                    return True
+
+    return False
+
+
+def get_bond_master(interface):
+    """Returns bond master if interface is bond slave otherwise None.
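+
+    Example (illustrative; interface names are hypothetical)::
+
+        # Returns e.g. 'bond0' when eth0 is enslaved to bond0,
+        # otherwise None.
+        get_bond_master('eth0')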
+
+    NOTE: the provided interface is expected to be physical
+    """
+    if interface:
+        iface_path = '/sys/class/net/%s' % (interface)
+        if os.path.exists(iface_path):
+            if '/virtual/' in os.path.realpath(iface_path):
+                return None
+
+            master = os.path.join(iface_path, 'master')
+            if os.path.exists(master):
+                master = os.path.realpath(master)
+                # make sure it is a bond master
+                if os.path.exists(os.path.join(master, 'bonding')):
+                    return os.path.basename(master)
+
+    return None
+
+
+def list_nics(nic_type=None):
+    """Return a list of nics of given type(s)"""
+    if isinstance(nic_type, six.string_types):
+        int_types = [nic_type]
+    else:
+        int_types = nic_type
+
+    interfaces = []
+    if nic_type:
+        for int_type in int_types:
+            cmd = ['ip', 'addr', 'show', 'label', int_type + '*']
+            ip_output = subprocess.check_output(cmd).decode('UTF-8')
+            ip_output = ip_output.split('\n')
+            ip_output = (line for line in ip_output if line)
+            for line in ip_output:
+                if line.split()[1].startswith(int_type):
+                    matched = re.search('.*: (' + int_type +
+                                        r'[0-9]+\.[0-9]+)@.*', line)
+                    if matched:
+                        iface = matched.groups()[0]
+                    else:
+                        iface = line.split()[1].replace(":", "")
+
+                    if iface not in interfaces:
+                        interfaces.append(iface)
+    else:
+        cmd = ['ip', 'a']
+        ip_output = subprocess.check_output(cmd).decode('UTF-8').split('\n')
+        ip_output = (line.strip() for line in ip_output if line)
+
+        key = re.compile(r'^[0-9]+:\s+(.+):')
+        for line in ip_output:
+            matched = re.search(key, line)
+            if matched:
+                iface = matched.group(1)
+                iface = iface.partition("@")[0]
+                if iface not in interfaces:
+                    interfaces.append(iface)
+
+    return interfaces
+
+
+def set_nic_mtu(nic, mtu):
+    """Set the Maximum Transmission Unit (MTU) on a network interface."""
+    cmd = ['ip', 'link', 'set', nic, 'mtu', mtu]
+    subprocess.check_call(cmd)
+
+
+def get_nic_mtu(nic):
+    """Return the Maximum Transmission Unit (MTU) for a network interface."""
+    cmd = ['ip', 'addr', 'show', nic]
+    ip_output = subprocess.check_output(cmd).decode('UTF-8').split('\n')
+    mtu = ""
+    for line in ip_output:
+        words = line.split()
+        if 'mtu' in words:
+            mtu = words[words.index("mtu") + 1]
+    return mtu
+
+
+def get_nic_hwaddr(nic):
+    """Return the Media Access Control (MAC) for a network interface."""
+    cmd = ['ip', '-o', '-0', 'addr', 'show', nic]
+    ip_output = subprocess.check_output(cmd).decode('UTF-8')
+    hwaddr = ""
+    words = ip_output.split()
+    if 'link/ether' in words:
+        hwaddr = words[words.index('link/ether') + 1]
+    return hwaddr
+
+
+@contextmanager
+def chdir(directory):
+    """Change the current working directory to a different directory for a
+    code block and return the previous directory after the block exits.
+    Useful to run commands from a specified directory.
+
+    :param str directory: The directory path to change to for this context.
+    """
+    cur = os.getcwd()
+    try:
+        yield os.chdir(directory)
+    finally:
+        os.chdir(cur)
+
+
+def chownr(path, owner, group, follow_links=True, chowntopdir=False):
+    """Recursively change user and group ownership of files and directories
+    in given path. Doesn't chown path itself by default, only its children.
+
+    :param str path: The string path to start changing ownership.
+    :param str owner: The owner string to use when looking up the uid.
+    :param str group: The group string to use when looking up the gid.
+    :param bool follow_links: Also follow and chown links if True
+    :param bool chowntopdir: Also chown path itself if True
+    """
+    uid = pwd.getpwnam(owner).pw_uid
+    gid = grp.getgrnam(group).gr_gid
+    if follow_links:
+        chown = os.chown
+    else:
+        chown = os.lchown
+
+    if chowntopdir:
+        broken_symlink = os.path.lexists(path) and not os.path.exists(path)
+        if not broken_symlink:
+            chown(path, uid, gid)
+    for root, dirs, files in os.walk(path, followlinks=follow_links):
+        for name in dirs + files:
+            full = os.path.join(root, name)
+            broken_symlink = os.path.lexists(full) and not os.path.exists(full)
+            if not broken_symlink:
+                chown(full, uid, gid)
+
+
+def lchownr(path, owner, group):
+    """Recursively change user and group ownership of files and directories
+    in a given path, not following symbolic links. See the documentation for
+    'os.lchown' for more information.
+
+    :param str path: The string path to start changing ownership.
+    :param str owner: The owner string to use when looking up the uid.
+    :param str group: The group string to use when looking up the gid.
+    """
+    chownr(path, owner, group, follow_links=False)
+
+
+def owner(path):
+    """Returns a tuple containing the username & groupname owning the path.
+
+    :param str path: the string path to retrieve the ownership
+    :return tuple(str, str): A (username, groupname) tuple containing the
+                             name of the user and group owning the path.
+    :raises OSError: if the specified path does not exist
+    """
+    stat = os.stat(path)
+    username = pwd.getpwuid(stat.st_uid)[0]
+    groupname = grp.getgrgid(stat.st_gid)[0]
+    return username, groupname
+
+
+def get_total_ram():
+    """The total amount of system RAM in bytes.
+
+    This is what is reported by the OS, and may be overcommitted when
+    there are multiple containers hosted on the same machine.
+    """
+    with open('/proc/meminfo', 'r') as f:
+        for line in f.readlines():
+            if line:
+                key, value, unit = line.split()
+                if key == 'MemTotal:':
+                    assert unit == 'kB', 'Unknown unit'
+                    return int(value) * 1024  # Classic, not KiB.
+        raise NotImplementedError()
+
+
+UPSTART_CONTAINER_TYPE = '/run/container_type'
+
+
+def is_container():
+    """Determine whether unit is running in a container
+
+    @return: boolean indicating if unit is in a container
+    """
+    if init_is_systemd():
+        # Detect using systemd-detect-virt
+        return subprocess.call(['systemd-detect-virt',
+                                '--container']) == 0
+    else:
+        # Detect using upstart container file marker
+        return os.path.exists(UPSTART_CONTAINER_TYPE)
+
+
+def add_to_updatedb_prunepath(path, updatedb_path=UPDATEDB_PATH):
+    """Adds the specified path to mlocate's updatedb.conf PRUNEPATHS list.
+
+    This method has no effect if the path specified by updatedb_path does not
+    exist or is not a file.
+
+    @param path: string the path to add to the updatedb.conf PRUNEPATHS value
+    @param updatedb_path: the path to the updatedb.conf file
+    """
+    if not os.path.exists(updatedb_path) or os.path.isdir(updatedb_path):
+        # If the updatedb.conf file doesn't exist then don't attempt to update
+        # the file as the package providing mlocate may not be installed on
+        # the local system
+        return
+
+    with open(updatedb_path, 'r+') as f_id:
+        updatedb_text = f_id.read()
+        output = updatedb(updatedb_text, path)
+        f_id.seek(0)
+        f_id.write(output)
+        f_id.truncate()
+
+
+def updatedb(updatedb_text, new_path):
+    """Return updatedb.conf text with new_path appended to PRUNEPATHS."""
+    lines = [line for line in updatedb_text.split("\n")]
+    for i, line in enumerate(lines):
+        if line.startswith("PRUNEPATHS="):
+            paths_line = line.split("=")[1].replace('"', '')
+            paths = paths_line.split(" ")
+            if new_path not in paths:
+                paths.append(new_path)
+                lines[i] = 'PRUNEPATHS="{}"'.format(' '.join(paths))
+    output = "\n".join(lines)
+    return output
+
+
+def modulo_distribution(modulo=3, wait=30, non_zero_wait=False):
+    """Modulo distribution
+
+    This helper uses the unit number, a modulo value and a constant wait time
+    to produce a calculated wait time distribution. This is useful in large
+    scale deployments to distribute load during an expensive operation such as
+    service restarts.
+
+    If you have 1000 nodes that need to restart, 100 at a time, 1 minute
+    apart:
+
+        time.sleep(modulo_distribution(modulo=100, wait=60))
+        restart()
+
+    If you need restarts to happen serially, set modulo to the exact number of
+    nodes and set a high constant wait time:
+
+        time.sleep(modulo_distribution(modulo=10, wait=120))
+        restart()
+
+    @param modulo: int The modulo number creates the group distribution
+    @param wait: int The constant time wait value
+    @param non_zero_wait: boolean Override unit % modulo == 0,
+                          return modulo * wait. Used to avoid collisions with
+                          leader nodes which are often given priority.
+    @return: int Calculated time to wait for unit operation
+    """
+    unit_number = int(local_unit().split('/')[1])
+    calculated_wait_time = (unit_number % modulo) * wait
+    if non_zero_wait and calculated_wait_time == 0:
+        return modulo * wait
+    else:
+        return calculated_wait_time
+
+
+def install_ca_cert(ca_cert, name=None):
+    """
+    Install the given cert as a trusted CA.
+
+    The ``name`` is the stem of the filename where the cert is written, and if
+    not provided, it will default to ``juju-{charm_name}``.
+
+    If the cert is empty or None, or is unchanged, nothing is done.
+    """
+    if not ca_cert:
+        return
+    if not isinstance(ca_cert, bytes):
+        ca_cert = ca_cert.encode('utf8')
+    if not name:
+        name = 'juju-{}'.format(charm_name())
+    cert_file = '/usr/local/share/ca-certificates/{}.crt'.format(name)
+    new_hash = hashlib.md5(ca_cert).hexdigest()
+    if file_hash(cert_file) == new_hash:
+        return
+    log("Installing new CA cert at: {}".format(cert_file), level=INFO)
+    write_file(cert_file, ca_cert)
+    subprocess.check_call(['update-ca-certificates', '--fresh'])
+
+
+def get_system_env(key, default=None):
+    """Get data from system environment as represented in ``/etc/environment``.
+
+    :param key: Key to look up
+    :type key: str
+    :param default: Value to return if key is not found
+    :type default: any
+    :returns: Value for key if found or contents of default parameter
+    :rtype: any
+    :raises: subprocess.CalledProcessError
+    """
+    env_file = '/etc/environment'
+    # use the shell and env(1) to parse the global environments file.
This is + # done to get the correct result even if the user has shell variable + # substitutions or other shell logic in that file. + output = subprocess.check_output( + ['env', '-i', '/bin/bash', '-c', + 'set -a && source {} && env'.format(env_file)], + universal_newlines=True) + for k, v in (line.split('=', 1) + for line in output.splitlines() if '=' in line): + if k == key: + return v + else: + return default diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/core/host_factory/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/core/host_factory/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/core/host_factory/centos.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/core/host_factory/centos.py new file mode 100644 index 0000000000000000000000000000000000000000..7781a3961f23ce0b161ae08b11710466af8de814 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/core/host_factory/centos.py @@ -0,0 +1,72 @@ +import subprocess +import yum +import os + +from charmhelpers.core.strutils import BasicStringComparator + + +class CompareHostReleases(BasicStringComparator): + """Provide comparisons of Host releases. + + Use in the form of + + if CompareHostReleases(release) > 'trusty': + # do something with mitaka + """ + + def __init__(self, item): + raise NotImplementedError( + "CompareHostReleases() is not implemented for CentOS") + + +def service_available(service_name): + # """Determine whether a system service is available.""" + if os.path.isdir('/run/systemd/system'): + cmd = ['systemctl', 'is-enabled', service_name] + else: + cmd = ['service', service_name, 'is-enabled'] + return subprocess.call(cmd) == 0 + + +def add_new_group(group_name, system_group=False, gid=None): + cmd = ['groupadd'] + if gid: + cmd.extend(['--gid', str(gid)]) + if system_group: + cmd.append('-r') + cmd.append(group_name) + subprocess.check_call(cmd) + + +def lsb_release(): + """Return /etc/os-release in a dict.""" + d = {} + with open('/etc/os-release', 'r') as lsb: + for l in lsb: + s = l.split('=') + if len(s) != 2: + continue + d[s[0].strip()] = s[1].strip() + return d + + +def cmp_pkgrevno(package, revno, pkgcache=None): + """Compare supplied revno with the revno of the installed package. + + * 1 => Installed revno is greater than supplied arg + * 0 => Installed revno is the same as supplied arg + * -1 => Installed revno is less than supplied arg + + This function imports YumBase function if the pkgcache argument + is None. 
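+
+    Example (illustrative; the 'httpd' package name is hypothetical)::
+
+        if cmp_pkgrevno('httpd', '2.4.6') >= 0:
+            pass  # installed httpd is 2.4.6 or newer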
+ """ + if not pkgcache: + y = yum.YumBase() + packages = y.doPackageLists() + pkgcache = {i.Name: i.version for i in packages['installed']} + pkg = pkgcache[package] + if pkg > revno: + return 1 + if pkg < revno: + return -1 + return 0 diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/core/host_factory/ubuntu.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/core/host_factory/ubuntu.py new file mode 100644 index 0000000000000000000000000000000000000000..3edc0687275b29762b45ebf3fe1b045bc9f568b2 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/core/host_factory/ubuntu.py @@ -0,0 +1,116 @@ +import subprocess + +from charmhelpers.core.hookenv import cached +from charmhelpers.core.strutils import BasicStringComparator + + +UBUNTU_RELEASES = ( + 'lucid', + 'maverick', + 'natty', + 'oneiric', + 'precise', + 'quantal', + 'raring', + 'saucy', + 'trusty', + 'utopic', + 'vivid', + 'wily', + 'xenial', + 'yakkety', + 'zesty', + 'artful', + 'bionic', + 'cosmic', + 'disco', + 'eoan', + 'focal' +) + + +class CompareHostReleases(BasicStringComparator): + """Provide comparisons of Ubuntu releases. + + Use in the form of + + if CompareHostReleases(release) > 'trusty': + # do something with mitaka + """ + _list = UBUNTU_RELEASES + + +def service_available(service_name): + """Determine whether a system service is available""" + try: + subprocess.check_output( + ['service', service_name, 'status'], + stderr=subprocess.STDOUT).decode('UTF-8') + except subprocess.CalledProcessError as e: + return b'unrecognized service' not in e.output + else: + return True + + +def add_new_group(group_name, system_group=False, gid=None): + cmd = ['addgroup'] + if gid: + cmd.extend(['--gid', str(gid)]) + if system_group: + cmd.append('--system') + else: + cmd.extend([ + '--group', + ]) + cmd.append(group_name) + subprocess.check_call(cmd) + + +def lsb_release(): + """Return /etc/lsb-release in a dict""" + d = {} + with open('/etc/lsb-release', 'r') as lsb: + for l in lsb: + k, v = l.split('=') + d[k.strip()] = v.strip() + return d + + +def get_distrib_codename(): + """Return the codename of the distribution + :returns: The codename + :rtype: str + """ + return lsb_release()['DISTRIB_CODENAME'].lower() + + +def cmp_pkgrevno(package, revno, pkgcache=None): + """Compare supplied revno with the revno of the installed package. + + * 1 => Installed revno is greater than supplied arg + * 0 => Installed revno is the same as supplied arg + * -1 => Installed revno is less than supplied arg + + This function imports apt_cache function from charmhelpers.fetch if + the pkgcache argument is None. Be sure to add charmhelpers.fetch if + you call this function, or pass an apt_pkg.Cache() instance. + """ + from charmhelpers.fetch import apt_pkg + if not pkgcache: + from charmhelpers.fetch import apt_cache + pkgcache = apt_cache() + pkg = pkgcache[package] + return apt_pkg.version_compare(pkg.current_ver.ver_str, revno) + + +@cached +def arch(): + """Return the package architecture as a string. 
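+
+    For example (illustrative), a 64-bit Intel host reports 'amd64'::
+
+        arch()  # -> 'amd64'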
+ + :returns: the architecture + :rtype: str + :raises: subprocess.CalledProcessError if dpkg command fails + """ + return subprocess.check_output( + ['dpkg', '--print-architecture'] + ).rstrip().decode('UTF-8') diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/core/hugepage.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/core/hugepage.py new file mode 100644 index 0000000000000000000000000000000000000000..54b5b5e2fcf81eea5f2ebfbceb620ea68d725584 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/core/hugepage.py @@ -0,0 +1,69 @@ +# -*- coding: utf-8 -*- + +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import yaml +from charmhelpers.core import fstab +from charmhelpers.core import sysctl +from charmhelpers.core.host import ( + add_group, + add_user_to_group, + fstab_mount, + mkdir, +) +from charmhelpers.core.strutils import bytes_from_string +from subprocess import check_output + + +def hugepage_support(user, group='hugetlb', nr_hugepages=256, + max_map_count=65536, mnt_point='/run/hugepages/kvm', + pagesize='2MB', mount=True, set_shmmax=False): + """Enable hugepages on system. + + Args: + user (str) -- Username to allow access to hugepages to + group (str) -- Group name to own hugepages + nr_hugepages (int) -- Number of pages to reserve + max_map_count (int) -- Number of Virtual Memory Areas a process can own + mnt_point (str) -- Directory to mount hugepages on + pagesize (str) -- Size of hugepages + mount (bool) -- Whether to Mount hugepages + """ + group_info = add_group(group) + gid = group_info.gr_gid + add_user_to_group(user, group) + if max_map_count < 2 * nr_hugepages: + max_map_count = 2 * nr_hugepages + sysctl_settings = { + 'vm.nr_hugepages': nr_hugepages, + 'vm.max_map_count': max_map_count, + 'vm.hugetlb_shm_group': gid, + } + if set_shmmax: + shmmax_current = int(check_output(['sysctl', '-n', 'kernel.shmmax'])) + shmmax_minsize = bytes_from_string(pagesize) * nr_hugepages + if shmmax_minsize > shmmax_current: + sysctl_settings['kernel.shmmax'] = shmmax_minsize + sysctl.create(yaml.dump(sysctl_settings), '/etc/sysctl.d/10-hugepage.conf') + mkdir(mnt_point, owner='root', group='root', perms=0o755, force=False) + lfstab = fstab.Fstab() + fstab_entry = lfstab.get_entry_by_attr('mountpoint', mnt_point) + if fstab_entry: + lfstab.remove_entry(fstab_entry) + entry = lfstab.Entry('nodev', mnt_point, 'hugetlbfs', + 'mode=1770,gid={},pagesize={}'.format(gid, pagesize), 0, 0) + lfstab.add_entry(entry) + if mount: + fstab_mount(mnt_point) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/core/kernel.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/core/kernel.py new file mode 100644 index 0000000000000000000000000000000000000000..e01f4f8ba73ee0d5ab7553740c2590a50e42f96d --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/core/kernel.py @@ -0,0 +1,72 @@ +#!/usr/bin/env python +# -*- 
coding: utf-8 -*- + +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import re +import subprocess + +from charmhelpers.osplatform import get_platform +from charmhelpers.core.hookenv import ( + log, + INFO +) + +__platform__ = get_platform() +if __platform__ == "ubuntu": + from charmhelpers.core.kernel_factory.ubuntu import ( # NOQA:F401 + persistent_modprobe, + update_initramfs, + ) # flake8: noqa -- ignore F401 for this import +elif __platform__ == "centos": + from charmhelpers.core.kernel_factory.centos import ( # NOQA:F401 + persistent_modprobe, + update_initramfs, + ) # flake8: noqa -- ignore F401 for this import + +__author__ = "Jorge Niedbalski " + + +def modprobe(module, persist=True): + """Load a kernel module and configure for auto-load on reboot.""" + cmd = ['modprobe', module] + + log('Loading kernel module %s' % module, level=INFO) + + subprocess.check_call(cmd) + if persist: + persistent_modprobe(module) + + +def rmmod(module, force=False): + """Remove a module from the linux kernel""" + cmd = ['rmmod'] + if force: + cmd.append('-f') + cmd.append(module) + log('Removing kernel module %s' % module, level=INFO) + return subprocess.check_call(cmd) + + +def lsmod(): + """Shows what kernel modules are currently loaded""" + return subprocess.check_output(['lsmod'], + universal_newlines=True) + + +def is_module_loaded(module): + """Checks if a kernel module is already loaded""" + matches = re.findall('^%s[ ]+' % module, lsmod(), re.M) + return len(matches) > 0 diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/core/kernel_factory/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/core/kernel_factory/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/core/kernel_factory/centos.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/core/kernel_factory/centos.py new file mode 100644 index 0000000000000000000000000000000000000000..1c402c1157900ff1ad5c6c296a409c9e8fb96d2b --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/core/kernel_factory/centos.py @@ -0,0 +1,17 @@ +import subprocess +import os + + +def persistent_modprobe(module): + """Load a kernel module and configure for auto-load on reboot.""" + if not os.path.exists('/etc/rc.modules'): + open('/etc/rc.modules', 'a') + os.chmod('/etc/rc.modules', 111) + with open('/etc/rc.modules', 'r+') as modules: + if module not in modules.read(): + modules.write('modprobe %s\n' % module) + + +def update_initramfs(version='all'): + """Updates an initramfs image.""" + return subprocess.check_call(["dracut", "-f", version]) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/core/kernel_factory/ubuntu.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/core/kernel_factory/ubuntu.py new file mode 100644 index 
0000000000000000000000000000000000000000..3de372fd3df38fe151cf79243f129cb504516f22 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/core/kernel_factory/ubuntu.py @@ -0,0 +1,13 @@ +import subprocess + + +def persistent_modprobe(module): + """Load a kernel module and configure for auto-load on reboot.""" + with open('/etc/modules', 'r+') as modules: + if module not in modules.read(): + modules.write(module + "\n") + + +def update_initramfs(version='all'): + """Updates an initramfs image.""" + return subprocess.check_call(["update-initramfs", "-k", version, "-u"]) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/core/services/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/core/services/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..61fd074edc09de434859e48ae1b36baef0503708 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/core/services/__init__.py @@ -0,0 +1,16 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +from .base import * # NOQA +from .helpers import * # NOQA diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/core/services/base.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/core/services/base.py new file mode 100644 index 0000000000000000000000000000000000000000..179ad4f0c367dd6b13c10b201c3752d1c8daf05e --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/core/services/base.py @@ -0,0 +1,362 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import os +import json +from inspect import getargspec +from collections import Iterable, OrderedDict + +from charmhelpers.core import host +from charmhelpers.core import hookenv + + +__all__ = ['ServiceManager', 'ManagerCallback', + 'PortManagerCallback', 'open_ports', 'close_ports', 'manage_ports', + 'service_restart', 'service_stop'] + + +class ServiceManager(object): + def __init__(self, services=None): + """ + Register a list of services, given their definitions. 
+
+        Service definitions are dicts in the following formats (all keys except
+        'service' are optional)::
+
+            {
+                "service": <service name>,
+                "required_data": <list of required data contexts>,
+                "provided_data": <list of provided data contexts>,
+                "data_ready": <one or more callbacks>,
+                "data_lost": <one or more callbacks>,
+                "start": <one or more callbacks>,
+                "stop": <one or more callbacks>,
+                "ports": <list of ports to manage>,
+            }
+
+        The 'required_data' list should contain dicts of required data (or
+        dependency managers that act like dicts and know how to collect the data).
+        Only when all items in the 'required_data' list are populated are the list
+        of 'data_ready' and 'start' callbacks executed. See `is_ready()` for more
+        information.
+
+        The 'provided_data' list should contain relation data providers, most likely
+        a subclass of :class:`charmhelpers.core.services.helpers.RelationContext`,
+        that will indicate a set of data to set on a given relation.
+
+        The 'data_ready' value should be either a single callback, or a list of
+        callbacks, to be called when all items in 'required_data' pass `is_ready()`.
+        Each callback will be called with the service name as the only parameter.
+        After all of the 'data_ready' callbacks are called, the 'start' callbacks
+        are fired.
+
+        The 'data_lost' value should be either a single callback, or a list of
+        callbacks, to be called when a 'required_data' item no longer passes
+        `is_ready()`. Each callback will be called with the service name as the
+        only parameter. After all of the 'data_lost' callbacks are called,
+        the 'stop' callbacks are fired.
+
+        The 'start' value should be either a single callback, or a list of
+        callbacks, to be called when starting the service, after the 'data_ready'
+        callbacks are complete. Each callback will be called with the service
+        name as the only parameter. This defaults to
+        `[host.service_start, services.open_ports]`.
+
+        The 'stop' value should be either a single callback, or a list of
+        callbacks, to be called when stopping the service. If the service is
+        being stopped because it no longer has all of its 'required_data', this
+        will be called after all of the 'data_lost' callbacks are complete.
+        Each callback will be called with the service name as the only parameter.
+        This defaults to `[services.close_ports, host.service_stop]`.
+
+        The 'ports' value should be a list of ports to manage. The default
+        'start' handler will open the ports after the service is started,
+        and the default 'stop' handler will close the ports prior to stopping
+        the service.
+
+
+        Examples:
+
+        The following registers an Upstart service called bingod that depends on
+        a mongodb relation and which runs a custom `db_migrate` function prior to
+        restarting the service, and a Runit service called spadesd::
+
+            manager = services.ServiceManager([
+                {
+                    'service': 'bingod',
+                    'ports': [80, 443],
+                    'required_data': [MongoRelation(), config(), {'my': 'data'}],
+                    'data_ready': [
+                        services.template(source='bingod.conf'),
+                        services.template(source='bingod.ini',
+                                          target='/etc/bingod.ini',
+                                          owner='bingo', perms=0400),
+                    ],
+                },
+                {
+                    'service': 'spadesd',
+                    'data_ready': services.template(source='spadesd_run.j2',
+                                                    target='/etc/sv/spadesd/run',
+                                                    perms=0555),
+                    'start': runit_start,
+                    'stop': runit_stop,
+                },
+            ])
+            manager.manage()
+        """
+        self._ready_file = os.path.join(hookenv.charm_dir(), 'READY-SERVICES.json')
+        self._ready = None
+        self.services = OrderedDict()
+        for service in services or []:
+            service_name = service['service']
+            self.services[service_name] = service
+
+    def manage(self):
+        """
+        Handle the current hook by doing The Right Thing with the registered services.
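+
+        This is normally the only method a charm needs to call once its
+        services are registered; a minimal sketch (assuming all hook
+        symlinks point at one script)::
+
+            if __name__ == '__main__':
+                manager = services.ServiceManager([...])
+                manager.manage()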
+ """ + hookenv._run_atstart() + try: + hook_name = hookenv.hook_name() + if hook_name == 'stop': + self.stop_services() + else: + self.reconfigure_services() + self.provide_data() + except SystemExit as x: + if x.code is None or x.code == 0: + hookenv._run_atexit() + hookenv._run_atexit() + + def provide_data(self): + """ + Set the relation data for each provider in the ``provided_data`` list. + + A provider must have a `name` attribute, which indicates which relation + to set data on, and a `provide_data()` method, which returns a dict of + data to set. + + The `provide_data()` method can optionally accept two parameters: + + * ``remote_service`` The name of the remote service that the data will + be provided to. The `provide_data()` method will be called once + for each connected service (not unit). This allows the method to + tailor its data to the given service. + * ``service_ready`` Whether or not the service definition had all of + its requirements met, and thus the ``data_ready`` callbacks run. + + Note that the ``provided_data`` methods are now called **after** the + ``data_ready`` callbacks are run. This gives the ``data_ready`` callbacks + a chance to generate any data necessary for the providing to the remote + services. + """ + for service_name, service in self.services.items(): + service_ready = self.is_ready(service_name) + for provider in service.get('provided_data', []): + for relid in hookenv.relation_ids(provider.name): + units = hookenv.related_units(relid) + if not units: + continue + remote_service = units[0].split('/')[0] + argspec = getargspec(provider.provide_data) + if len(argspec.args) > 1: + data = provider.provide_data(remote_service, service_ready) + else: + data = provider.provide_data() + if data: + hookenv.relation_set(relid, data) + + def reconfigure_services(self, *service_names): + """ + Update all files for one or more registered services, and, + if ready, optionally restart them. + + If no service names are given, reconfigures all registered services. + """ + for service_name in service_names or self.services.keys(): + if self.is_ready(service_name): + self.fire_event('data_ready', service_name) + self.fire_event('start', service_name, default=[ + service_restart, + manage_ports]) + self.save_ready(service_name) + else: + if self.was_ready(service_name): + self.fire_event('data_lost', service_name) + self.fire_event('stop', service_name, default=[ + manage_ports, + service_stop]) + self.save_lost(service_name) + + def stop_services(self, *service_names): + """ + Stop one or more registered services, by name. + + If no service names are given, stops all registered services. + """ + for service_name in service_names or self.services.keys(): + self.fire_event('stop', service_name, default=[ + manage_ports, + service_stop]) + + def get_service(self, service_name): + """ + Given the name of a registered service, return its service definition. + """ + service = self.services.get(service_name) + if not service: + raise KeyError('Service not registered: %s' % service_name) + return service + + def fire_event(self, event_name, service_name, default=None): + """ + Fire a data_ready, data_lost, start, or stop event on a given service. 
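+
+        Example (illustrative; 'bingod' is a hypothetical registered
+        service)::
+
+            # Run the registered 'start' callbacks for bingod, falling
+            # back to the given defaults if none were registered.
+            manager.fire_event('start', 'bingod',
+                               default=[service_restart, manage_ports])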
+ """ + service = self.get_service(service_name) + callbacks = service.get(event_name, default) + if not callbacks: + return + if not isinstance(callbacks, Iterable): + callbacks = [callbacks] + for callback in callbacks: + if isinstance(callback, ManagerCallback): + callback(self, service_name, event_name) + else: + callback(service_name) + + def is_ready(self, service_name): + """ + Determine if a registered service is ready, by checking its 'required_data'. + + A 'required_data' item can be any mapping type, and is considered ready + if `bool(item)` evaluates as True. + """ + service = self.get_service(service_name) + reqs = service.get('required_data', []) + return all(bool(req) for req in reqs) + + def _load_ready_file(self): + if self._ready is not None: + return + if os.path.exists(self._ready_file): + with open(self._ready_file) as fp: + self._ready = set(json.load(fp)) + else: + self._ready = set() + + def _save_ready_file(self): + if self._ready is None: + return + with open(self._ready_file, 'w') as fp: + json.dump(list(self._ready), fp) + + def save_ready(self, service_name): + """ + Save an indicator that the given service is now data_ready. + """ + self._load_ready_file() + self._ready.add(service_name) + self._save_ready_file() + + def save_lost(self, service_name): + """ + Save an indicator that the given service is no longer data_ready. + """ + self._load_ready_file() + self._ready.discard(service_name) + self._save_ready_file() + + def was_ready(self, service_name): + """ + Determine if the given service was previously data_ready. + """ + self._load_ready_file() + return service_name in self._ready + + +class ManagerCallback(object): + """ + Special case of a callback that takes the `ServiceManager` instance + in addition to the service name. + + Subclasses should implement `__call__` which should accept three parameters: + + * `manager` The `ServiceManager` instance + * `service_name` The name of the service it's being triggered for + * `event_name` The name of the event that this callback is handling + """ + def __call__(self, manager, service_name, event_name): + raise NotImplementedError() + + +class PortManagerCallback(ManagerCallback): + """ + Callback class that will open or close ports, for use as either + a start or stop action. + """ + def __call__(self, manager, service_name, event_name): + service = manager.get_service(service_name) + # turn this generator into a list, + # as we'll be going over it multiple times + new_ports = list(service.get('ports', [])) + port_file = os.path.join(hookenv.charm_dir(), '.{}.ports'.format(service_name)) + if os.path.exists(port_file): + with open(port_file) as fp: + old_ports = fp.read().split(',') + for old_port in old_ports: + if bool(old_port) and not self.ports_contains(old_port, new_ports): + hookenv.close_port(old_port) + with open(port_file, 'w') as fp: + fp.write(','.join(str(port) for port in new_ports)) + for port in new_ports: + # A port is either a number or 'ICMP' + protocol = 'TCP' + if str(port).upper() == 'ICMP': + protocol = 'ICMP' + if event_name == 'start': + hookenv.open_port(port, protocol) + elif event_name == 'stop': + hookenv.close_port(port, protocol) + + def ports_contains(self, port, ports): + if not bool(port): + return False + if str(port).upper() != 'ICMP': + port = int(port) + return port in ports + + +def service_stop(service_name): + """ + Wrapper around host.service_stop to prevent spurious "unknown service" + messages in the logs. 
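+
+    Example (illustrative; the 'spadesd' service name is hypothetical)::
+
+        # Only calls host.service_stop if the service is actually running.
+        service_stop('spadesd')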
+ """ + if host.service_running(service_name): + host.service_stop(service_name) + + +def service_restart(service_name): + """ + Wrapper around host.service_restart to prevent spurious "unknown service" + messages in the logs. + """ + if host.service_available(service_name): + if host.service_running(service_name): + host.service_restart(service_name) + else: + host.service_start(service_name) + + +# Convenience aliases +open_ports = close_ports = manage_ports = PortManagerCallback() diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/core/services/helpers.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/core/services/helpers.py new file mode 100644 index 0000000000000000000000000000000000000000..3e6e30d2fe0d9c73ffdc42d70b77e864b6379c53 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/core/services/helpers.py @@ -0,0 +1,290 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import os +import yaml + +from charmhelpers.core import hookenv +from charmhelpers.core import host +from charmhelpers.core import templating + +from charmhelpers.core.services.base import ManagerCallback + + +__all__ = ['RelationContext', 'TemplateCallback', + 'render_template', 'template'] + + +class RelationContext(dict): + """ + Base class for a context generator that gets relation data from juju. + + Subclasses must provide the attributes `name`, which is the name of the + interface of interest, `interface`, which is the type of the interface of + interest, and `required_keys`, which is the set of keys required for the + relation to be considered complete. The data for all interfaces matching + the `name` attribute that are complete will used to populate the dictionary + values (see `get_data`, below). + + The generated context will be namespaced under the relation :attr:`name`, + to prevent potential naming conflicts. + + :param str name: Override the relation :attr:`name`, since it can vary from charm to charm + :param list additional_required_keys: Extend the list of :attr:`required_keys` + """ + name = None + interface = None + + def __init__(self, name=None, additional_required_keys=None): + if not hasattr(self, 'required_keys'): + self.required_keys = [] + + if name is not None: + self.name = name + if additional_required_keys: + self.required_keys.extend(additional_required_keys) + self.get_data() + + def __bool__(self): + """ + Returns True if all of the required_keys are available. + """ + return self.is_ready() + + __nonzero__ = __bool__ + + def __repr__(self): + return super(RelationContext, self).__repr__() + + def is_ready(self): + """ + Returns True if all of the `required_keys` are available from any units. 
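+
+        Example (illustrative; uses the MysqlRelation context defined
+        below)::
+
+            db = MysqlRelation()
+            if db.is_ready():
+                pass  # at least one unit sent host/user/password/database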
+ """ + ready = len(self.get(self.name, [])) > 0 + if not ready: + hookenv.log('Incomplete relation: {}'.format(self.__class__.__name__), hookenv.DEBUG) + return ready + + def _is_ready(self, unit_data): + """ + Helper method that tests a set of relation data and returns True if + all of the `required_keys` are present. + """ + return set(unit_data.keys()).issuperset(set(self.required_keys)) + + def get_data(self): + """ + Retrieve the relation data for each unit involved in a relation and, + if complete, store it in a list under `self[self.name]`. This + is automatically called when the RelationContext is instantiated. + + The units are sorted lexographically first by the service ID, then by + the unit ID. Thus, if an interface has two other services, 'db:1' + and 'db:2', with 'db:1' having two units, 'wordpress/0' and 'wordpress/1', + and 'db:2' having one unit, 'mediawiki/0', all of which have a complete + set of data, the relation data for the units will be stored in the + order: 'wordpress/0', 'wordpress/1', 'mediawiki/0'. + + If you only care about a single unit on the relation, you can just + access it as `{{ interface[0]['key'] }}`. However, if you can at all + support multiple units on a relation, you should iterate over the list, + like:: + + {% for unit in interface -%} + {{ unit['key'] }}{% if not loop.last %},{% endif %} + {%- endfor %} + + Note that since all sets of relation data from all related services and + units are in a single list, if you need to know which service or unit a + set of data came from, you'll need to extend this class to preserve + that information. + """ + if not hookenv.relation_ids(self.name): + return + + ns = self.setdefault(self.name, []) + for rid in sorted(hookenv.relation_ids(self.name)): + for unit in sorted(hookenv.related_units(rid)): + reldata = hookenv.relation_get(rid=rid, unit=unit) + if self._is_ready(reldata): + ns.append(reldata) + + def provide_data(self): + """ + Return data to be relation_set for this interface. + """ + return {} + + +class MysqlRelation(RelationContext): + """ + Relation context for the `mysql` interface. + + :param str name: Override the relation :attr:`name`, since it can vary from charm to charm + :param list additional_required_keys: Extend the list of :attr:`required_keys` + """ + name = 'db' + interface = 'mysql' + + def __init__(self, *args, **kwargs): + self.required_keys = ['host', 'user', 'password', 'database'] + RelationContext.__init__(self, *args, **kwargs) + + +class HttpRelation(RelationContext): + """ + Relation context for the `http` interface. + + :param str name: Override the relation :attr:`name`, since it can vary from charm to charm + :param list additional_required_keys: Extend the list of :attr:`required_keys` + """ + name = 'website' + interface = 'http' + + def __init__(self, *args, **kwargs): + self.required_keys = ['host', 'port'] + RelationContext.__init__(self, *args, **kwargs) + + def provide_data(self): + return { + 'host': hookenv.unit_get('private-address'), + 'port': 80, + } + + +class RequiredConfig(dict): + """ + Data context that loads config options with one or more mandatory options. + + Once the required options have been changed from their default values, all + config options will be available, namespaced under `config` to prevent + potential naming conflicts (for example, between a config option and a + relation property). + + :param list *args: List of options that must be changed from their default values. 
+ """ + + def __init__(self, *args): + self.required_options = args + self['config'] = hookenv.config() + with open(os.path.join(hookenv.charm_dir(), 'config.yaml')) as fp: + self.config = yaml.load(fp).get('options', {}) + + def __bool__(self): + for option in self.required_options: + if option not in self['config']: + return False + current_value = self['config'][option] + default_value = self.config[option].get('default') + if current_value == default_value: + return False + if current_value in (None, '') and default_value in (None, ''): + return False + return True + + def __nonzero__(self): + return self.__bool__() + + +class StoredContext(dict): + """ + A data context that always returns the data that it was first created with. + + This is useful to do a one-time generation of things like passwords, that + will thereafter use the same value that was originally generated, instead + of generating a new value each time it is run. + """ + def __init__(self, file_name, config_data): + """ + If the file exists, populate `self` with the data from the file. + Otherwise, populate with the given data and persist it to the file. + """ + if os.path.exists(file_name): + self.update(self.read_context(file_name)) + else: + self.store_context(file_name, config_data) + self.update(config_data) + + def store_context(self, file_name, config_data): + if not os.path.isabs(file_name): + file_name = os.path.join(hookenv.charm_dir(), file_name) + with open(file_name, 'w') as file_stream: + os.fchmod(file_stream.fileno(), 0o600) + yaml.dump(config_data, file_stream) + + def read_context(self, file_name): + if not os.path.isabs(file_name): + file_name = os.path.join(hookenv.charm_dir(), file_name) + with open(file_name, 'r') as file_stream: + data = yaml.load(file_stream) + if not data: + raise OSError("%s is empty" % file_name) + return data + + +class TemplateCallback(ManagerCallback): + """ + Callback class that will render a Jinja2 template, for use as a ready + action. 
+
+    :param str source: The template source file, relative to
+        `$CHARM_DIR/templates`
+
+    :param str target: The target to write the rendered template to (or None)
+    :param str owner: The owner of the rendered file
+    :param str group: The group of the rendered file
+    :param int perms: The permissions of the rendered file
+    :param partial on_change_action: functools partial to be executed when
+        rendered file changes
+    :param template_loader: A jinja2 template loader
+
+    :return str: The rendered template
+    """
+    def __init__(self, source, target,
+                 owner='root', group='root', perms=0o444,
+                 on_change_action=None, template_loader=None):
+        self.source = source
+        self.target = target
+        self.owner = owner
+        self.group = group
+        self.perms = perms
+        self.on_change_action = on_change_action
+        self.template_loader = template_loader
+
+    def __call__(self, manager, service_name, event_name):
+        pre_checksum = ''
+        if self.on_change_action and os.path.isfile(self.target):
+            pre_checksum = host.file_hash(self.target)
+        service = manager.get_service(service_name)
+        context = {'ctx': {}}
+        for ctx in service.get('required_data', []):
+            context.update(ctx)
+            context['ctx'].update(ctx)
+
+        result = templating.render(self.source, self.target, context,
+                                   self.owner, self.group, self.perms,
+                                   template_loader=self.template_loader)
+        if self.on_change_action:
+            if pre_checksum == host.file_hash(self.target):
+                hookenv.log(
+                    'No change detected: {}'.format(self.target),
+                    hookenv.DEBUG)
+            else:
+                self.on_change_action()
+
+        return result
+
+
+# Convenience aliases for templates
+render_template = template = TemplateCallback
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/core/strutils.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/core/strutils.py
new file mode 100644
index 0000000000000000000000000000000000000000..e8df0452f8203b53947eb137eed22d85ff62dff0
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/core/strutils.py
@@ -0,0 +1,129 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+
+# Copyright 2014-2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import six
+import re
+
+
+def bool_from_string(value):
+    """Interpret string value as boolean.
+
+    Returns True if value translates to True otherwise False.
+    """
+    if isinstance(value, six.string_types):
+        value = six.text_type(value)
+    else:
+        msg = "Unable to interpret non-string value '%s' as boolean" % (value)
+        raise ValueError(msg)
+
+    value = value.strip().lower()
+
+    if value in ['y', 'yes', 'true', 't', 'on']:
+        return True
+    elif value in ['n', 'no', 'false', 'f', 'off']:
+        return False
+
+    msg = "Unable to interpret string value '%s' as boolean" % (value)
+    raise ValueError(msg)
+
+
+def bytes_from_string(value):
+    """Interpret human readable string value as bytes.
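+
+    For example, "512MB" parses to 512 * 1024 ** 2 = 536870912.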
+
+    Returns int
+    """
+    BYTE_POWER = {
+        'K': 1,
+        'KB': 1,
+        'M': 2,
+        'MB': 2,
+        'G': 3,
+        'GB': 3,
+        'T': 4,
+        'TB': 4,
+        'P': 5,
+        'PB': 5,
+    }
+    if isinstance(value, six.string_types):
+        value = six.text_type(value)
+    else:
+        msg = "Unable to interpret non-string value '%s' as bytes" % (value)
+        raise ValueError(msg)
+    matches = re.match("([0-9]+)([a-zA-Z]+)", value)
+    if matches:
+        size = int(matches.group(1)) * (1024 ** BYTE_POWER[matches.group(2)])
+    else:
+        # Assume that value passed in is bytes
+        try:
+            size = int(value)
+        except ValueError:
+            msg = "Unable to interpret string value '%s' as bytes" % (value)
+            raise ValueError(msg)
+    return size
+
+
+class BasicStringComparator(object):
+    """Provides a class that will compare strings from an iterator type object.
+    Used to provide > and < comparisons on strings that may not necessarily be
+    alphanumerically ordered. e.g. OpenStack or Ubuntu releases AFTER the
+    z-wrap.
+    """
+
+    _list = None
+
+    def __init__(self, item):
+        if self._list is None:
+            raise Exception("Must define the _list in the class definition!")
+        try:
+            self.index = self._list.index(item)
+        except Exception:
+            raise KeyError("Item '{}' is not in list '{}'"
+                           .format(item, self._list))
+
+    def __eq__(self, other):
+        assert isinstance(other, str) or isinstance(other, self.__class__)
+        return self.index == self._list.index(other)
+
+    def __ne__(self, other):
+        return not self.__eq__(other)
+
+    def __lt__(self, other):
+        assert isinstance(other, str) or isinstance(other, self.__class__)
+        return self.index < self._list.index(other)
+
+    def __ge__(self, other):
+        return not self.__lt__(other)
+
+    def __gt__(self, other):
+        assert isinstance(other, str) or isinstance(other, self.__class__)
+        return self.index > self._list.index(other)
+
+    def __le__(self, other):
+        return not self.__gt__(other)
+
+    def __str__(self):
+        """Always give back the item at the index so it can be used in
+        comparisons like:
+
+            s_mitaka = CompareOpenStack('mitaka')
+            s_newton = CompareOpenStack('newton')
+
+            assert s_newton > s_mitaka
+
+        @returns: the string item at the current index
+        """
+        return self._list[self.index]
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/core/sysctl.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/core/sysctl.py
new file mode 100644
index 0000000000000000000000000000000000000000..386428d619bc38edf02dc088bf7ec32767c0ab94
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/core/sysctl.py
@@ -0,0 +1,75 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+
+# Copyright 2014-2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import yaml
+
+from subprocess import check_call, CalledProcessError
+
+from charmhelpers.core.hookenv import (
+    log,
+    DEBUG,
+    ERROR,
+    WARNING,
+)
+
+from charmhelpers.core.host import is_container
+
+__author__ = 'Jorge Niedbalski R.
' + + +def create(sysctl_dict, sysctl_file, ignore=False): + """Creates a sysctl.conf file from a YAML associative array + + :param sysctl_dict: a dict or YAML-formatted string of sysctl + options eg "{ 'kernel.max_pid': 1337 }" + :type sysctl_dict: str + :param sysctl_file: path to the sysctl file to be saved + :type sysctl_file: str or unicode + :param ignore: If True, ignore "unknown variable" errors. + :type ignore: bool + :returns: None + """ + if type(sysctl_dict) is not dict: + try: + sysctl_dict_parsed = yaml.safe_load(sysctl_dict) + except yaml.YAMLError: + log("Error parsing YAML sysctl_dict: {}".format(sysctl_dict), + level=ERROR) + return + else: + sysctl_dict_parsed = sysctl_dict + + with open(sysctl_file, "w") as fd: + for key, value in sysctl_dict_parsed.items(): + fd.write("{}={}\n".format(key, value)) + + log("Updating sysctl_file: {} values: {}".format(sysctl_file, + sysctl_dict_parsed), + level=DEBUG) + + call = ["sysctl", "-p", sysctl_file] + if ignore: + call.append("-e") + + try: + check_call(call) + except CalledProcessError as e: + if is_container(): + log("Error setting some sysctl keys in this container: {}".format(e.output), + level=WARNING) + else: + raise e diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/core/templating.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/core/templating.py new file mode 100644 index 0000000000000000000000000000000000000000..9014015c14ee0b48c775562cd4f0d30884944439 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/core/templating.py @@ -0,0 +1,93 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import os +import sys + +from charmhelpers.core import host +from charmhelpers.core import hookenv + + +def render(source, target, context, owner='root', group='root', + perms=0o444, templates_dir=None, encoding='UTF-8', + template_loader=None, config_template=None): + """ + Render a template. + + The `source` path, if not absolute, is relative to the `templates_dir`. + + The `target` path should be absolute. It can also be `None`, in which + case no file will be written. + + The context should be a dict containing the values to be replaced in the + template. + + config_template may be provided to render from a provided template instead + of loading from a file. + + The `owner`, `group`, and `perms` options will be passed to `write_file`. + + If omitted, `templates_dir` defaults to the `templates` folder in the charm. + + The rendered template will be written to the file as well as being returned + as a string. + + Note: Using this requires python-jinja2 or python3-jinja2; if it is not + installed, calling this will attempt to use charmhelpers.fetch.apt_install + to install it. 
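+
+    An illustrative call (template and target names are examples only)::
+
+        render('myapp.conf.j2', '/etc/myapp/myapp.conf',
+               {'port': 8080}, perms=0o644)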
+ """ + try: + from jinja2 import FileSystemLoader, Environment, exceptions + except ImportError: + try: + from charmhelpers.fetch import apt_install + except ImportError: + hookenv.log('Could not import jinja2, and could not import ' + 'charmhelpers.fetch to install it', + level=hookenv.ERROR) + raise + if sys.version_info.major == 2: + apt_install('python-jinja2', fatal=True) + else: + apt_install('python3-jinja2', fatal=True) + from jinja2 import FileSystemLoader, Environment, exceptions + + if template_loader: + template_env = Environment(loader=template_loader) + else: + if templates_dir is None: + templates_dir = os.path.join(hookenv.charm_dir(), 'templates') + template_env = Environment(loader=FileSystemLoader(templates_dir)) + + # load from a string if provided explicitly + if config_template is not None: + template = template_env.from_string(config_template) + else: + try: + source = source + template = template_env.get_template(source) + except exceptions.TemplateNotFound as e: + hookenv.log('Could not load template %s from %s.' % + (source, templates_dir), + level=hookenv.ERROR) + raise e + content = template.render(context) + if target is not None: + target_dir = os.path.dirname(target) + if not os.path.exists(target_dir): + # This is a terrible default directory permission, as the file + # or its siblings will often contain secrets. + host.mkdir(os.path.dirname(target), owner, group, perms=0o755) + host.write_file(target, content.encode(encoding), owner, group, perms) + return content diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/core/unitdata.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/core/unitdata.py new file mode 100644 index 0000000000000000000000000000000000000000..ab554327b343f896880523fc627c1abea84be29a --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/core/unitdata.py @@ -0,0 +1,525 @@ +#!/usr/bin/env python +# -*- coding: utf-8 -*- +# +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Authors: +# Kapil Thangavelu +# +""" +Intro +----- + +A simple way to store state in units. This provides a key value +storage with support for versioned, transactional operation, +and can calculate deltas from previous values to simplify unit logic +when processing changes. + + +Hook Integration +---------------- + +There are several extant frameworks for hook execution, including + + - charmhelpers.core.hookenv.Hooks + - charmhelpers.core.services.ServiceManager + +The storage classes are framework agnostic, one simple integration is +via the HookData contextmanager. It will record the current hook +execution environment (including relation data, config data, etc.), +setup a transaction and allow easy access to the changes from +previously seen values. One consequence of the integration is the +reservation of particular keys ('rels', 'unit', 'env', 'config', +'charm_revisions') for their respective values. 
+
+Here's a fully worked integration example using hookenv.Hooks::
+
+    from charmhelpers.core import hookenv, unitdata
+
+    hook_data = unitdata.HookData()
+    db = unitdata.kv()
+    hooks = hookenv.Hooks()
+
+    @hooks.hook
+    def config_changed():
+        # Print all changes to configuration from previously seen
+        # values.
+        for changed, (prev, cur) in hook_data.conf.items():
+            print('config changed', changed,
+                  'previous value', prev,
+                  'current value', cur)
+
+        # Get some unit specific bookkeeping
+        if not db.get('pkg_key'):
+            key = urllib.urlopen('https://example.com/pkg_key').read()
+            db.set('pkg_key', key)
+
+        # Directly access all charm config as a mapping.
+        conf = db.getrange('config', True)
+
+        # Directly access all relation data as a mapping
+        rels = db.getrange('rels', True)
+
+    if __name__ == '__main__':
+        with hook_data():
+            hooks.execute()
+
+
+A more basic integration is via the hook_scope context manager which simply
+manages transaction scope (and records hook name, and timestamp)::
+
+    >>> from unitdata import kv
+    >>> db = kv()
+    >>> with db.hook_scope('install'):
+    ...    # do work, in transactional scope.
+    ...    db.set('x', 1)
+    >>> db.get('x')
+    1
+
+
+Usage
+-----
+
+Values are automatically json de/serialized to preserve basic typing
+and complex data struct capabilities (dicts, lists, ints, booleans, etc).
+
+Individual values can be manipulated via get/set::
+
+    >>> kv.set('y', True)
+    >>> kv.get('y')
+    True
+
+    # We can set complex values (dicts, lists) as a single key.
+    >>> kv.set('config', {'a': 1, 'b': True})
+
+    # Also supports returning dictionaries as a record which
+    # provides attribute access.
+    >>> config = kv.get('config', record=True)
+    >>> config.b
+    True
+
+
+Groups of keys can be manipulated with update/getrange::
+
+    >>> kv.update({'z': 1, 'y': 2}, prefix="gui.")
+    >>> kv.getrange('gui.', strip=True)
+    {'z': 1, 'y': 2}
+
+When updating values, it's very helpful to understand which values
+have actually changed and how they have changed. The storage
+provides a delta method for this::
+
+    >>> data = {'debug': True, 'option': 2}
+    >>> delta = kv.delta(data, 'config.')
+    >>> delta.debug.previous
+    None
+    >>> delta.debug.current
+    True
+    >>> delta
+    {'debug': (None, True), 'option': (None, 2)}
+
+Note the delta method does not persist the actual change; it needs to
+be explicitly saved via the 'update' method::
+
+    >>> kv.update(data, 'config.')
+
+Values modified in the context of a hook scope retain historical values
+associated with the hook name::
+
+    >>> with db.hook_scope('config-changed'):
+    ...    db.set('x', 42)
+    >>> db.gethistory('x')
+    [(1, u'x', 1, u'install', u'2015-01-21T16:49:30.038372'),
+     (2, u'x', 42, u'config-changed', u'2015-01-21T16:49:30.038786')]
+
+"""
+
+import collections
+import contextlib
+import datetime
+import itertools
+import json
+import os
+import pprint
+import sqlite3
+import sys
+
+__author__ = 'Kapil Thangavelu '
+
+
+class Storage(object):
+    """Simple key value database for local unit state within charms.
+
+    Modifications are not persisted unless :meth:`flush` is called.
+
+    To support dicts, lists, integers, floats, and booleans, values
+    are automatically json encoded/decoded.
+
+    Note: to facilitate unit testing, ':memory:' can be passed as the
+    path parameter which causes sqlite3 to only build the db in memory.
+    This should only be used for testing purposes.
+ """ + def __init__(self, path=None): + self.db_path = path + if path is None: + if 'UNIT_STATE_DB' in os.environ: + self.db_path = os.environ['UNIT_STATE_DB'] + else: + self.db_path = os.path.join( + os.environ.get('CHARM_DIR', ''), '.unit-state.db') + if self.db_path != ':memory:': + with open(self.db_path, 'a') as f: + os.fchmod(f.fileno(), 0o600) + self.conn = sqlite3.connect('%s' % self.db_path) + self.cursor = self.conn.cursor() + self.revision = None + self._closed = False + self._init() + + def close(self): + if self._closed: + return + self.flush(False) + self.cursor.close() + self.conn.close() + self._closed = True + + def get(self, key, default=None, record=False): + self.cursor.execute('select data from kv where key=?', [key]) + result = self.cursor.fetchone() + if not result: + return default + if record: + return Record(json.loads(result[0])) + return json.loads(result[0]) + + def getrange(self, key_prefix, strip=False): + """ + Get a range of keys starting with a common prefix as a mapping of + keys to values. + + :param str key_prefix: Common prefix among all keys + :param bool strip: Optionally strip the common prefix from the key + names in the returned dict + :return dict: A (possibly empty) dict of key-value mappings + """ + self.cursor.execute("select key, data from kv where key like ?", + ['%s%%' % key_prefix]) + result = self.cursor.fetchall() + + if not result: + return {} + if not strip: + key_prefix = '' + return dict([ + (k[len(key_prefix):], json.loads(v)) for k, v in result]) + + def update(self, mapping, prefix=""): + """ + Set the values of multiple keys at once. + + :param dict mapping: Mapping of keys to values + :param str prefix: Optional prefix to apply to all keys in `mapping` + before setting + """ + for k, v in mapping.items(): + self.set("%s%s" % (prefix, k), v) + + def unset(self, key): + """ + Remove a key from the database entirely. + """ + self.cursor.execute('delete from kv where key=?', [key]) + if self.revision and self.cursor.rowcount: + self.cursor.execute( + 'insert into kv_revisions values (?, ?, ?)', + [key, self.revision, json.dumps('DELETED')]) + + def unsetrange(self, keys=None, prefix=""): + """ + Remove a range of keys starting with a common prefix, from the database + entirely. + + :param list keys: List of keys to remove. + :param str prefix: Optional prefix to apply to all keys in ``keys`` + before removing. + """ + if keys is not None: + keys = ['%s%s' % (prefix, key) for key in keys] + self.cursor.execute('delete from kv where key in (%s)' % ','.join(['?'] * len(keys)), keys) + if self.revision and self.cursor.rowcount: + self.cursor.execute( + 'insert into kv_revisions values %s' % ','.join(['(?, ?, ?)'] * len(keys)), + list(itertools.chain.from_iterable((key, self.revision, json.dumps('DELETED')) for key in keys))) + else: + self.cursor.execute('delete from kv where key like ?', + ['%s%%' % prefix]) + if self.revision and self.cursor.rowcount: + self.cursor.execute( + 'insert into kv_revisions values (?, ?, ?)', + ['%s%%' % prefix, self.revision, json.dumps('DELETED')]) + + def set(self, key, value): + """ + Set a value in the database. 
+ + :param str key: Key to set the value for + :param value: Any JSON-serializable value to be set + """ + serialized = json.dumps(value) + + self.cursor.execute('select data from kv where key=?', [key]) + exists = self.cursor.fetchone() + + # Skip mutations to the same value + if exists: + if exists[0] == serialized: + return value + + if not exists: + self.cursor.execute( + 'insert into kv (key, data) values (?, ?)', + (key, serialized)) + else: + self.cursor.execute(''' + update kv + set data = ? + where key = ?''', [serialized, key]) + + # Save + if not self.revision: + return value + + self.cursor.execute( + 'select 1 from kv_revisions where key=? and revision=?', + [key, self.revision]) + exists = self.cursor.fetchone() + + if not exists: + self.cursor.execute( + '''insert into kv_revisions ( + revision, key, data) values (?, ?, ?)''', + (self.revision, key, serialized)) + else: + self.cursor.execute( + ''' + update kv_revisions + set data = ? + where key = ? + and revision = ?''', + [serialized, key, self.revision]) + + return value + + def delta(self, mapping, prefix): + """ + return a delta containing values that have changed. + """ + previous = self.getrange(prefix, strip=True) + if not previous: + pk = set() + else: + pk = set(previous.keys()) + ck = set(mapping.keys()) + delta = DeltaSet() + + # added + for k in ck.difference(pk): + delta[k] = Delta(None, mapping[k]) + + # removed + for k in pk.difference(ck): + delta[k] = Delta(previous[k], None) + + # changed + for k in pk.intersection(ck): + c = mapping[k] + p = previous[k] + if c != p: + delta[k] = Delta(p, c) + + return delta + + @contextlib.contextmanager + def hook_scope(self, name=""): + """Scope all future interactions to the current hook execution + revision.""" + assert not self.revision + self.cursor.execute( + 'insert into hooks (hook, date) values (?, ?)', + (name or sys.argv[0], + datetime.datetime.utcnow().isoformat())) + self.revision = self.cursor.lastrowid + try: + yield self.revision + self.revision = None + except Exception: + self.flush(False) + self.revision = None + raise + else: + self.flush() + + def flush(self, save=True): + if save: + self.conn.commit() + elif self._closed: + return + else: + self.conn.rollback() + + def _init(self): + self.cursor.execute(''' + create table if not exists kv ( + key text, + data text, + primary key (key) + )''') + self.cursor.execute(''' + create table if not exists kv_revisions ( + key text, + revision integer, + data text, + primary key (key, revision) + )''') + self.cursor.execute(''' + create table if not exists hooks ( + version integer primary key autoincrement, + hook text, + date text + )''') + self.conn.commit() + + def gethistory(self, key, deserialize=False): + self.cursor.execute( + ''' + select kv.revision, kv.key, kv.data, h.hook, h.date + from kv_revisions kv, + hooks h + where kv.key=? + and kv.revision = h.version + ''', [key]) + if deserialize is False: + return self.cursor.fetchall() + return map(_parse_history, self.cursor.fetchall()) + + def debug(self, fh=sys.stderr): + self.cursor.execute('select * from kv') + pprint.pprint(self.cursor.fetchall(), stream=fh) + self.cursor.execute('select * from kv_revisions') + pprint.pprint(self.cursor.fetchall(), stream=fh) + + +def _parse_history(d): + return (d[0], d[1], json.loads(d[2]), d[3], + datetime.datetime.strptime(d[-1], "%Y-%m-%dT%H:%M:%S.%f")) + + +class HookData(object): + """Simple integration for existing hook exec frameworks. 
+
+    Records all unit information, and stores deltas for processing
+    by the hook.
+
+    Sample::
+
+       from charmhelpers.core import hookenv, unitdata
+
+       changes = unitdata.HookData()
+       db = unitdata.kv()
+       hooks = hookenv.Hooks()
+
+       @hooks.hook
+       def config_changed():
+           # View all changes to configuration
+           for changed, (prev, cur) in changes.conf.items():
+               print('config changed', changed,
+                     'previous value', prev,
+                     'current value', cur)
+
+           # Get some unit specific bookkeeping
+           if not db.get('pkg_key'):
+               key = urllib.urlopen('https://example.com/pkg_key').read()
+               db.set('pkg_key', key)
+
+       if __name__ == '__main__':
+           with changes():
+               hooks.execute()
+
+    """
+    def __init__(self):
+        self.kv = kv()
+        self.conf = None
+        self.rels = None
+
+    @contextlib.contextmanager
+    def __call__(self):
+        from charmhelpers.core import hookenv
+        hook_name = hookenv.hook_name()
+
+        with self.kv.hook_scope(hook_name):
+            self._record_charm_version(hookenv.charm_dir())
+            delta_config, delta_relation = self._record_hook(hookenv)
+            yield self.kv, delta_config, delta_relation
+
+    def _record_charm_version(self, charm_dir):
+        # Record revisions. Charm revisions are meaningless
+        # to charm authors as they don't control the revision,
+        # so logic dependent on revision is not particularly
+        # useful; however it is useful for debugging analysis.
+        charm_rev = open(
+            os.path.join(charm_dir, 'revision')).read().strip()
+        charm_rev = charm_rev or '0'
+        revs = self.kv.get('charm_revisions', [])
+        if charm_rev not in revs:
+            revs.append(charm_rev.strip() or '0')
+        self.kv.set('charm_revisions', revs)
+
+    def _record_hook(self, hookenv):
+        data = hookenv.execution_environment()
+        self.conf = conf_delta = self.kv.delta(data['conf'], 'config')
+        self.rels = rels_delta = self.kv.delta(data['rels'], 'rels')
+        self.kv.set('env', dict(data['env']))
+        self.kv.set('unit', data['unit'])
+        self.kv.set('relid', data.get('relid'))
+        return conf_delta, rels_delta
+
+
+class Record(dict):
+
+    __slots__ = ()
+
+    def __getattr__(self, k):
+        if k in self:
+            return self[k]
+        raise AttributeError(k)
+
+
+class DeltaSet(Record):
+
+    __slots__ = ()
+
+
+Delta = collections.namedtuple('Delta', ['previous', 'current'])
+
+
+_KV = None
+
+
+def kv():
+    global _KV
+    if _KV is None:
+        _KV = Storage()
+    return _KV
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/fetch/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/fetch/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..0cc7fc850a0632568ad78aae9716be718c9ff6b5
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/fetch/__init__.py
@@ -0,0 +1,209 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
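+
+# A minimal sketch (illustrative, not part of the upstream module): charms
+# normally rely on the scheme-dispatching helpers defined below instead of
+# instantiating fetch handlers directly, e.g.:
+#
+#     dest = install_remote('http://example.com/archive.tgz#sha1=deadbeef')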
+
+import importlib
+from charmhelpers.osplatform import get_platform
+from yaml import safe_load
+from charmhelpers.core.hookenv import (
+    config,
+    log,
+)
+
+import six
+if six.PY3:
+    from urllib.parse import urlparse, urlunparse
+else:
+    from urlparse import urlparse, urlunparse
+
+
+# The order of this list is very important. Handlers should be listed
+# from least- to most-specific URL matching.
+FETCH_HANDLERS = (
+    'charmhelpers.fetch.archiveurl.ArchiveUrlFetchHandler',
+    'charmhelpers.fetch.bzrurl.BzrUrlFetchHandler',
+    'charmhelpers.fetch.giturl.GitUrlFetchHandler',
+)
+
+
+class SourceConfigError(Exception):
+    pass
+
+
+class UnhandledSource(Exception):
+    pass
+
+
+class AptLockError(Exception):
+    pass
+
+
+class GPGKeyError(Exception):
+    """Exception occurs when a GPG key cannot be fetched or used. The message
+    indicates what the problem is.
+    """
+    pass
+
+
+class BaseFetchHandler(object):
+
+    """Base class for FetchHandler implementations in fetch plugins"""
+
+    def can_handle(self, source):
+        """Returns True if the source can be handled. Otherwise returns
+        a string explaining why it cannot"""
+        return "Wrong source type"
+
+    def install(self, source):
+        """Try to download and unpack the source. Return the path to the
+        unpacked files or raise UnhandledSource."""
+        raise UnhandledSource("Wrong source type {}".format(source))
+
+    def parse_url(self, url):
+        return urlparse(url)
+
+    def base_url(self, url):
+        """Return url without querystring or fragment"""
+        parts = list(self.parse_url(url))
+        parts[4:] = ['' for i in parts[4:]]
+        return urlunparse(parts)
+
+
+__platform__ = get_platform()
+module = "charmhelpers.fetch.%s" % __platform__
+fetch = importlib.import_module(module)
+
+filter_installed_packages = fetch.filter_installed_packages
+filter_missing_packages = fetch.filter_missing_packages
+install = fetch.apt_install
+upgrade = fetch.apt_upgrade
+update = _fetch_update = fetch.apt_update
+purge = fetch.apt_purge
+add_source = fetch.add_source
+
+if __platform__ == "ubuntu":
+    apt_cache = fetch.apt_cache
+    apt_install = fetch.apt_install
+    apt_update = fetch.apt_update
+    apt_upgrade = fetch.apt_upgrade
+    apt_purge = fetch.apt_purge
+    apt_autoremove = fetch.apt_autoremove
+    apt_mark = fetch.apt_mark
+    apt_hold = fetch.apt_hold
+    apt_unhold = fetch.apt_unhold
+    import_key = fetch.import_key
+    get_upstream_version = fetch.get_upstream_version
+    apt_pkg = fetch.ubuntu_apt_pkg
+    get_apt_dpkg_env = fetch.get_apt_dpkg_env
+elif __platform__ == "centos":
+    yum_search = fetch.yum_search
+
+
+def configure_sources(update=False,
+                      sources_var='install_sources',
+                      keys_var='install_keys'):
+    """Configure multiple sources from charm configuration.
+
+    The lists are encoded as yaml fragments in the configuration.
+    The fragment needs to be included as a string. Sources and their
+    corresponding keys are of the types supported by add_source().
+
+    Example config:
+        install_sources: |
+          - "ppa:foo"
+          - "http://example.com/repo precise main"
+        install_keys: |
+          - null
+          - "a1b2c3d4"
+
+    Note that 'null' (a.k.a. None) should not be quoted.
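+
+    A typical hook-side call, assuming the default option names above
+    (illustrative)::
+
+        configure_sources(update=True)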
+ """ + sources = safe_load((config(sources_var) or '').strip()) or [] + keys = safe_load((config(keys_var) or '').strip()) or None + + if isinstance(sources, six.string_types): + sources = [sources] + + if keys is None: + for source in sources: + add_source(source, None) + else: + if isinstance(keys, six.string_types): + keys = [keys] + + if len(sources) != len(keys): + raise SourceConfigError( + 'Install sources and keys lists are different lengths') + for source, key in zip(sources, keys): + add_source(source, key) + if update: + _fetch_update(fatal=True) + + +def install_remote(source, *args, **kwargs): + """Install a file tree from a remote source. + + The specified source should be a url of the form: + scheme://[host]/path[#[option=value][&...]] + + Schemes supported are based on this modules submodules. + Options supported are submodule-specific. + Additional arguments are passed through to the submodule. + + For example:: + + dest = install_remote('http://example.com/archive.tgz', + checksum='deadbeef', + hash_type='sha1') + + This will download `archive.tgz`, validate it using SHA1 and, if + the file is ok, extract it and return the directory in which it + was extracted. If the checksum fails, it will raise + :class:`charmhelpers.core.host.ChecksumError`. + """ + # We ONLY check for True here because can_handle may return a string + # explaining why it can't handle a given source. + handlers = [h for h in plugins() if h.can_handle(source) is True] + for handler in handlers: + try: + return handler.install(source, *args, **kwargs) + except UnhandledSource as e: + log('Install source attempt unsuccessful: {}'.format(e), + level='WARNING') + raise UnhandledSource("No handler found for source {}".format(source)) + + +def install_from_config(config_var_name): + """Install a file from config.""" + charm_config = config() + source = charm_config[config_var_name] + return install_remote(source) + + +def plugins(fetch_handlers=None): + if not fetch_handlers: + fetch_handlers = FETCH_HANDLERS + plugin_list = [] + for handler_name in fetch_handlers: + package, classname = handler_name.rsplit('.', 1) + try: + handler_class = getattr( + importlib.import_module(package), + classname) + plugin_list.append(handler_class()) + except NotImplementedError: + # Skip missing plugins so that they can be ommitted from + # installation if desired + log("FetchHandler {} not found, skipping plugin".format( + handler_name)) + return plugin_list diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/fetch/archiveurl.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/fetch/archiveurl.py new file mode 100644 index 0000000000000000000000000000000000000000..d25587adeff102c3fc9e402f98746fccbd8a3693 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/fetch/archiveurl.py @@ -0,0 +1,165 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +import os +import hashlib +import re + +from charmhelpers.fetch import ( + BaseFetchHandler, + UnhandledSource +) +from charmhelpers.payload.archive import ( + get_archive_handler, + extract, +) +from charmhelpers.core.host import mkdir, check_hash + +import six +if six.PY3: + from urllib.request import ( + build_opener, install_opener, urlopen, urlretrieve, + HTTPPasswordMgrWithDefaultRealm, HTTPBasicAuthHandler, + ) + from urllib.parse import urlparse, urlunparse, parse_qs + from urllib.error import URLError +else: + from urllib import urlretrieve + from urllib2 import ( + build_opener, install_opener, urlopen, + HTTPPasswordMgrWithDefaultRealm, HTTPBasicAuthHandler, + URLError + ) + from urlparse import urlparse, urlunparse, parse_qs + + +def splituser(host): + '''urllib.splituser(), but six's support of this seems broken''' + _userprog = re.compile('^(.*)@(.*)$') + match = _userprog.match(host) + if match: + return match.group(1, 2) + return None, host + + +def splitpasswd(user): + '''urllib.splitpasswd(), but six's support of this is missing''' + _passwdprog = re.compile('^([^:]*):(.*)$', re.S) + match = _passwdprog.match(user) + if match: + return match.group(1, 2) + return user, None + + +class ArchiveUrlFetchHandler(BaseFetchHandler): + """ + Handler to download archive files from arbitrary URLs. + + Can fetch from http, https, ftp, and file URLs. + + Can install either tarballs (.tar, .tgz, .tbz2, etc) or zip files. + + Installs the contents of the archive in $CHARM_DIR/fetched/. + """ + def can_handle(self, source): + url_parts = self.parse_url(source) + if url_parts.scheme not in ('http', 'https', 'ftp', 'file'): + # XXX: Why is this returning a boolean and a string? It's + # doomed to fail since "bool(can_handle('foo://'))" will be True. + return "Wrong source type" + if get_archive_handler(self.base_url(source)): + return True + return False + + def download(self, source, dest): + """ + Download an archive file. + + :param str source: URL pointing to an archive file. + :param str dest: Local path location to download archive file to. + """ + # propagate all exceptions + # URLError, OSError, etc + proto, netloc, path, params, query, fragment = urlparse(source) + if proto in ('http', 'https'): + auth, barehost = splituser(netloc) + if auth is not None: + source = urlunparse((proto, barehost, path, params, query, fragment)) + username, password = splitpasswd(auth) + passman = HTTPPasswordMgrWithDefaultRealm() + # Realm is set to None in add_password to force the username and password + # to be used whatever the realm + passman.add_password(None, source, username, password) + authhandler = HTTPBasicAuthHandler(passman) + opener = build_opener(authhandler) + install_opener(opener) + response = urlopen(source) + try: + with open(dest, 'wb') as dest_file: + dest_file.write(response.read()) + except Exception as e: + if os.path.isfile(dest): + os.unlink(dest) + raise e + + # Mandatory file validation via Sha1 or MD5 hashing. + def download_and_validate(self, url, hashsum, validate="sha1"): + tempfile, headers = urlretrieve(url) + check_hash(tempfile, hashsum, validate) + return tempfile + + def install(self, source, dest=None, checksum=None, hash_type='sha1'): + """ + Download and install an archive file, with optional checksum validation. + + The checksum can also be given on the `source` URL's fragment. + For example:: + + handler.install('http://example.com/file.tgz#sha1=deadbeef') + + :param str source: URL pointing to an archive file. 
+        :param str dest: Local destination path to install to. If not given,
+            installs to `$CHARM_DIR/fetched/archive_file_name`.
+        :param str checksum: If given, validate the archive file after download.
+        :param str hash_type: Algorithm used to generate `checksum`.
+            Can be any hash algorithm supported by :mod:`hashlib`,
+            such as md5, sha1, sha256, sha512, etc.
+
+        """
+        url_parts = self.parse_url(source)
+        dest_dir = os.path.join(os.environ.get('CHARM_DIR'), 'fetched')
+        if not os.path.exists(dest_dir):
+            mkdir(dest_dir, perms=0o755)
+        dld_file = os.path.join(dest_dir, os.path.basename(url_parts.path))
+        try:
+            self.download(source, dld_file)
+        except URLError as e:
+            raise UnhandledSource(e.reason)
+        except OSError as e:
+            raise UnhandledSource(e.strerror)
+        options = parse_qs(url_parts.fragment)
+        for key, value in options.items():
+            if not six.PY3:
+                algorithms = hashlib.algorithms
+            else:
+                algorithms = hashlib.algorithms_available
+            if key in algorithms:
+                if len(value) != 1:
+                    raise TypeError(
+                        "Expected 1 hash value, not %d" % len(value))
+                expected = value[0]
+                check_hash(dld_file, expected, key)
+        if checksum:
+            check_hash(dld_file, checksum, hash_type)
+        return extract(dld_file, dest)
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/fetch/bzrurl.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/fetch/bzrurl.py
new file mode 100644
index 0000000000000000000000000000000000000000..c4ab3ff1e6bc7dde24e8ed568a3dc0c6012ddea6
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/fetch/bzrurl.py
@@ -0,0 +1,76 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
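+
+# A minimal sketch (illustrative; the branch URL is a made-up example):
+#
+#     handler = BzrUrlFetchHandler()
+#     path = handler.install('lp:charms/trusty/foo')
+#     # -> $CHARM_DIR/fetched/foo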
+
+import os
+from subprocess import STDOUT, check_output
+from charmhelpers.fetch import (
+    BaseFetchHandler,
+    UnhandledSource,
+    filter_installed_packages,
+    install,
+)
+from charmhelpers.core.host import mkdir
+
+
+if filter_installed_packages(['bzr']) != []:
+    install(['bzr'])
+    if filter_installed_packages(['bzr']) != []:
+        raise NotImplementedError('Unable to install bzr')
+
+
+class BzrUrlFetchHandler(BaseFetchHandler):
+    """Handler for bazaar branches via generic and lp URLs."""
+
+    def can_handle(self, source):
+        url_parts = self.parse_url(source)
+        if url_parts.scheme not in ('bzr+ssh', 'lp', ''):
+            return False
+        elif not url_parts.scheme:
+            return os.path.exists(os.path.join(source, '.bzr'))
+        else:
+            return True
+
+    def branch(self, source, dest, revno=None):
+        if not self.can_handle(source):
+            raise UnhandledSource("Cannot handle {}".format(source))
+        cmd_opts = []
+        if revno:
+            cmd_opts += ['-r', str(revno)]
+        if os.path.exists(dest):
+            cmd = ['bzr', 'pull']
+            cmd += cmd_opts
+            cmd += ['--overwrite', '-d', dest, source]
+        else:
+            cmd = ['bzr', 'branch']
+            cmd += cmd_opts
+            cmd += [source, dest]
+        check_output(cmd, stderr=STDOUT)
+
+    def install(self, source, dest=None, revno=None):
+        url_parts = self.parse_url(source)
+        branch_name = url_parts.path.strip("/").split("/")[-1]
+        if dest:
+            dest_dir = os.path.join(dest, branch_name)
+        else:
+            dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched",
+                                    branch_name)
+
+        if dest and not os.path.exists(dest):
+            mkdir(dest, perms=0o755)
+
+        try:
+            self.branch(source, dest_dir, revno)
+        except OSError as e:
+            raise UnhandledSource(e.strerror)
+        return dest_dir
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/fetch/centos.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/fetch/centos.py
new file mode 100644
index 0000000000000000000000000000000000000000..a91dcff0645ed541a79cd72af3112bdff393719a
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/fetch/centos.py
@@ -0,0 +1,171 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import subprocess
+import os
+import time
+import six
+import yum
+
+from tempfile import NamedTemporaryFile
+from charmhelpers.core.hookenv import log
+
+YUM_NO_LOCK = 1  # The return code for "couldn't acquire lock" in YUM.
+YUM_NO_LOCK_RETRY_DELAY = 10  # Wait 10 seconds between yum lock checks.
+YUM_NO_LOCK_RETRY_COUNT = 30  # Retry to acquire the lock X times.
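+
+# With the defaults above, a fatal call that keeps hitting the lock retries
+# for up to YUM_NO_LOCK_RETRY_COUNT * YUM_NO_LOCK_RETRY_DELAY = 300 seconds
+# before the CalledProcessError is re-raised (see _run_yum_command below).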
+
+
+def filter_installed_packages(packages):
+    """Return a list of packages that require installation."""
+    yb = yum.YumBase()
+    package_list = yb.doPackageLists()
+    temp_cache = {p.base_package_name: 1 for p in package_list['installed']}
+
+    _pkgs = [p for p in packages if not temp_cache.get(p, False)]
+    return _pkgs
+
+
+def install(packages, options=None, fatal=False):
+    """Install one or more packages."""
+    cmd = ['yum', '--assumeyes']
+    if options is not None:
+        cmd.extend(options)
+    cmd.append('install')
+    if isinstance(packages, six.string_types):
+        cmd.append(packages)
+    else:
+        cmd.extend(packages)
+    log("Installing {} with options: {}".format(packages,
+                                                options))
+    _run_yum_command(cmd, fatal)
+
+
+def upgrade(options=None, fatal=False, dist=False):
+    """Upgrade all packages."""
+    cmd = ['yum', '--assumeyes']
+    if options is not None:
+        cmd.extend(options)
+    cmd.append('upgrade')
+    log("Upgrading with options: {}".format(options))
+    _run_yum_command(cmd, fatal)
+
+
+def update(fatal=False):
+    """Update local yum cache."""
+    cmd = ['yum', '--assumeyes', 'update']
+    log("Update with fatal: {}".format(fatal))
+    _run_yum_command(cmd, fatal)
+
+
+def purge(packages, fatal=False):
+    """Purge one or more packages."""
+    cmd = ['yum', '--assumeyes', 'remove']
+    if isinstance(packages, six.string_types):
+        cmd.append(packages)
+    else:
+        cmd.extend(packages)
+    log("Purging {}".format(packages))
+    _run_yum_command(cmd, fatal)
+
+
+def yum_search(packages):
+    """Search for a package."""
+    output = {}
+    cmd = ['yum', 'search']
+    if isinstance(packages, six.string_types):
+        cmd.append(packages)
+    else:
+        cmd.extend(packages)
+    log("Searching for {}".format(packages))
+    result = subprocess.check_output(cmd)
+    for package in list(packages):
+        output[package] = package in result
+    return output
+
+
+def add_source(source, key=None):
+    """Add a package source to this system.
+
+    @param source: a URL with an rpm package
+
+    @param key: A key to be added to the system's keyring and used
+    to verify the signatures on packages. Ideally, this should be an
+    ASCII format GPG public key including the block headers. A GPG key
+    id may also be used, but be aware that only insecure protocols are
+    available to retrieve the actual public key from a public keyserver
+    placing your Juju environment at risk.
+    """
+    if source is None:
+        log('Source is not present. Skipping')
+        return
+
+    if source.startswith('http'):
+        directory = '/etc/yum.repos.d/'
+        for filename in os.listdir(directory):
+            with open(directory + filename, 'r') as rpm_file:
+                if source in rpm_file.read():
+                    break
+        else:
+            log("Add source: {!r}".format(source))
+            # write in the charms.repo
+            with open(directory + 'Charms.repo', 'a') as rpm_file:
+                rpm_file.write('[%s]\n' % source[7:].replace('/', '_'))
+                rpm_file.write('name=%s\n' % source[7:])
+                rpm_file.write('baseurl=%s\n\n' % source)
+    else:
+        log("Unknown source: {!r}".format(source))
+
+    if key:
+        if '-----BEGIN PGP PUBLIC KEY BLOCK-----' in key:
+            with NamedTemporaryFile('w+') as key_file:
+                key_file.write(key)
+                key_file.flush()
+                key_file.seek(0)
+                subprocess.check_call(['rpm', '--import', key_file.name])
+        else:
+            subprocess.check_call(['rpm', '--import', key])
+
+
+def _run_yum_command(cmd, fatal=False):
+    """Run a YUM command.
+
+    Checks the output and retries if the fatal flag is set to True.
+
+    :param cmd: str: The yum command to run.
+    :param fatal: bool: Whether the command's output should be checked and
+                        retried.
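+
+    For example, the install() helper above effectively runs (package name
+    illustrative)::
+
+        _run_yum_command(['yum', '--assumeyes', 'install', 'nmap'], fatal=True)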
+ """ + env = os.environ.copy() + + if fatal: + retry_count = 0 + result = None + + # If the command is considered "fatal", we need to retry if the yum + # lock was not acquired. + + while result is None or result == YUM_NO_LOCK: + try: + result = subprocess.check_call(cmd, env=env) + except subprocess.CalledProcessError as e: + retry_count = retry_count + 1 + if retry_count > YUM_NO_LOCK_RETRY_COUNT: + raise + result = e.returncode + log("Couldn't acquire YUM lock. Will retry in {} seconds." + "".format(YUM_NO_LOCK_RETRY_DELAY)) + time.sleep(YUM_NO_LOCK_RETRY_DELAY) + + else: + subprocess.call(cmd, env=env) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/fetch/giturl.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/fetch/giturl.py new file mode 100644 index 0000000000000000000000000000000000000000..070ca9bb5c1a2fdef39f88606ffcaf39bb049410 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/fetch/giturl.py @@ -0,0 +1,69 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import os +from subprocess import check_output, CalledProcessError, STDOUT +from charmhelpers.fetch import ( + BaseFetchHandler, + UnhandledSource, + filter_installed_packages, + install, +) + +if filter_installed_packages(['git']) != []: + install(['git']) + if filter_installed_packages(['git']) != []: + raise NotImplementedError('Unable to install git') + + +class GitUrlFetchHandler(BaseFetchHandler): + """Handler for git branches via generic and github URLs.""" + + def can_handle(self, source): + url_parts = self.parse_url(source) + # TODO (mattyw) no support for ssh git@ yet + if url_parts.scheme not in ('http', 'https', 'git', ''): + return False + elif not url_parts.scheme: + return os.path.exists(os.path.join(source, '.git')) + else: + return True + + def clone(self, source, dest, branch="master", depth=None): + if not self.can_handle(source): + raise UnhandledSource("Cannot handle {}".format(source)) + + if os.path.exists(dest): + cmd = ['git', '-C', dest, 'pull', source, branch] + else: + cmd = ['git', 'clone', source, dest, '--branch', branch] + if depth: + cmd.extend(['--depth', depth]) + check_output(cmd, stderr=STDOUT) + + def install(self, source, branch="master", dest=None, depth=None): + url_parts = self.parse_url(source) + branch_name = url_parts.path.strip("/").split("/")[-1] + if dest: + dest_dir = os.path.join(dest, branch_name) + else: + dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched", + branch_name) + try: + self.clone(source, dest_dir, branch, depth) + except CalledProcessError as e: + raise UnhandledSource(e) + except OSError as e: + raise UnhandledSource(e.strerror) + return dest_dir diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/fetch/python/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/fetch/python/__init__.py new file mode 100644 index 
0000000000000000000000000000000000000000..bff99dc93c64f80716e2d5a2b6d0d4e8a2436955 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/fetch/python/__init__.py @@ -0,0 +1,13 @@ +# Copyright 2014-2019 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/fetch/python/debug.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/fetch/python/debug.py new file mode 100644 index 0000000000000000000000000000000000000000..757135ee4cf3b5ff4c02305126f5ca3940892afc --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/fetch/python/debug.py @@ -0,0 +1,54 @@ +#!/usr/bin/env python +# coding: utf-8 + +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +from __future__ import print_function + +import atexit +import sys + +from charmhelpers.fetch.python.rpdb import Rpdb +from charmhelpers.core.hookenv import ( + open_port, + close_port, + ERROR, + log +) + +__author__ = "Jorge Niedbalski " + +DEFAULT_ADDR = "0.0.0.0" +DEFAULT_PORT = 4444 + + +def _error(message): + log(message, level=ERROR) + + +def set_trace(addr=DEFAULT_ADDR, port=DEFAULT_PORT): + """ + Set a trace point using the remote debugger + """ + atexit.register(close_port, port) + try: + log("Starting a remote python debugger session on %s:%s" % (addr, + port)) + open_port(port) + debugger = Rpdb(addr=addr, port=port) + debugger.set_trace(sys._getframe().f_back) + except Exception: + _error("Cannot start a remote debug session on %s:%s" % (addr, + port)) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/fetch/python/packages.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/fetch/python/packages.py new file mode 100644 index 0000000000000000000000000000000000000000..6e95028bc540aace84a2ec6c1bcc4de2663e8a87 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/fetch/python/packages.py @@ -0,0 +1,154 @@ +#!/usr/bin/env python +# coding: utf-8 + +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import os +import six +import subprocess +import sys + +from charmhelpers.fetch import apt_install, apt_update +from charmhelpers.core.hookenv import charm_dir, log + +__author__ = "Jorge Niedbalski " + + +def pip_execute(*args, **kwargs): + """Overridden pip_execute() to stop sys.path being changed. + + The act of importing main from the pip module seems to add wheels + from /usr/share/python-wheels (installed by various tools) to sys.path. + This function ensures that sys.path remains the same after the call is + executed. + """ + try: + _path = sys.path + try: + from pip import main as _pip_execute + except ImportError: + apt_update() + if six.PY2: + apt_install('python-pip') + else: + apt_install('python3-pip') + from pip import main as _pip_execute + _pip_execute(*args, **kwargs) + finally: + sys.path = _path + + +def parse_options(given, available): + """Given a set of options, yield those that are available""" + for key, value in sorted(given.items()): + if not value: + continue + if key in available: + yield "--{0}={1}".format(key, value) + + +def pip_install_requirements(requirements, constraints=None, **options): + """Install a requirements file. + + :param constraints: Path to pip constraints file. + http://pip.readthedocs.org/en/stable/user_guide/#constraints-files + """ + command = ["install"] + + available_options = ('proxy', 'src', 'log', ) + for option in parse_options(options, available_options): + command.append(option) + + command.append("-r {0}".format(requirements)) + if constraints: + command.append("-c {0}".format(constraints)) + log("Installing from file: {} with constraints {} " + "and options: {}".format(requirements, constraints, command)) + else: + log("Installing from file: {} with options: {}".format(requirements, + command)) + pip_execute(command) + + +def pip_install(package, fatal=False, upgrade=False, venv=None, + constraints=None, **options): + """Install a python package""" + if venv: + venv_python = os.path.join(venv, 'bin/pip') + command = [venv_python, "install"] + else: + command = ["install"] + + available_options = ('proxy', 'src', 'log', 'index-url', ) + for option in parse_options(options, available_options): + command.append(option) + + if upgrade: + command.append('--upgrade') + + if constraints: + command.extend(['-c', constraints]) + + if isinstance(package, list): + command.extend(package) + else: + command.append(package) + + log("Installing {} package with options: {}".format(package, + command)) + if venv: + subprocess.check_call(command) + else: + pip_execute(command) + + +def pip_uninstall(package, **options): + """Uninstall a python package""" + command = ["uninstall", "-q", "-y"] + + available_options = ('proxy', 'log', ) + for option in parse_options(options, available_options): + command.append(option) + + if isinstance(package, list): + command.extend(package) + else: + command.append(package) + + log("Uninstalling {} package with options: {}".format(package, + command)) + pip_execute(command) + + +def pip_list(): + """Return the list of currently installed python packages + """ + return pip_execute(["list"]) + + +def 
pip_create_virtualenv(path=None): + """Create an isolated Python environment.""" + if six.PY2: + apt_install('python-virtualenv') + else: + apt_install('python3-virtualenv') + + if path: + venv_path = path + else: + venv_path = os.path.join(charm_dir(), 'venv') + + if not os.path.exists(venv_path): + subprocess.check_call(['virtualenv', venv_path]) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/fetch/python/rpdb.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/fetch/python/rpdb.py new file mode 100644 index 0000000000000000000000000000000000000000..9b31610c22fc2d24fe5097016cf45728f87de4ae --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/fetch/python/rpdb.py @@ -0,0 +1,56 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +"""Remote Python Debugger (pdb wrapper).""" + +import pdb +import socket +import sys + +__author__ = "Bertrand Janin " +__version__ = "0.1.3" + + +class Rpdb(pdb.Pdb): + + def __init__(self, addr="127.0.0.1", port=4444): + """Initialize the socket and initialize pdb.""" + + # Backup stdin and stdout before replacing them by the socket handle + self.old_stdout = sys.stdout + self.old_stdin = sys.stdin + + # Open a 'reusable' socket to let the webapp reload on the same port + self.skt = socket.socket(socket.AF_INET, socket.SOCK_STREAM) + self.skt.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, True) + self.skt.bind((addr, port)) + self.skt.listen(1) + (clientsocket, address) = self.skt.accept() + handle = clientsocket.makefile('rw') + pdb.Pdb.__init__(self, completekey='tab', stdin=handle, stdout=handle) + sys.stdout = sys.stdin = handle + + def shutdown(self): + """Revert stdin and stdout, close the socket.""" + sys.stdout = self.old_stdout + sys.stdin = self.old_stdin + self.skt.close() + self.set_continue() + + def do_continue(self, arg): + """Stop all operation on ``continue``.""" + self.shutdown() + return 1 + + do_EOF = do_quit = do_exit = do_c = do_cont = do_continue diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/fetch/python/version.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/fetch/python/version.py new file mode 100644 index 0000000000000000000000000000000000000000..3eb421036ff737f8ff1684e85ff87703e30fe543 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/fetch/python/version.py @@ -0,0 +1,32 @@ +#!/usr/bin/env python +# coding: utf-8 + +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+# See the License for the specific language governing permissions and +# limitations under the License. + +import sys + +__author__ = "Jorge Niedbalski " + + +def current_version(): + """Current system python version""" + return sys.version_info + + +def current_version_string(): + """Current system python version as string major.minor.micro""" + return "{0}.{1}.{2}".format(sys.version_info.major, + sys.version_info.minor, + sys.version_info.micro) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/fetch/snap.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/fetch/snap.py new file mode 100644 index 0000000000000000000000000000000000000000..fc70aa941bc4f0bb5ff126237db65705b9e4a10a --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/fetch/snap.py @@ -0,0 +1,150 @@ +# Copyright 2014-2017 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +""" +Charm helpers snap for classic charms. + +If writing reactive charms, use the snap layer: +https://lists.ubuntu.com/archives/snapcraft/2016-September/001114.html +""" +import subprocess +import os +from time import sleep +from charmhelpers.core.hookenv import log + +__author__ = 'Joseph Borg ' + +# The return code for "couldn't acquire lock" in Snap +# (hopefully this will be improved). +SNAP_NO_LOCK = 1 +SNAP_NO_LOCK_RETRY_DELAY = 10  # Wait X seconds between Snap lock checks. +SNAP_NO_LOCK_RETRY_COUNT = 30  # Retry to acquire the lock X times. +SNAP_CHANNELS = [ + 'edge', + 'beta', + 'candidate', + 'stable', +] + + +class CouldNotAcquireLockException(Exception): + pass + + +class InvalidSnapChannel(Exception): + pass + + +def _snap_exec(commands): + """ + Execute snap commands. + + :param commands: List of commands + :return: Integer exit code + """ + assert type(commands) == list + + retry_count = 0 + return_code = None + + while return_code is None or return_code == SNAP_NO_LOCK: + try: + return_code = subprocess.check_call(['snap'] + commands, + env=os.environ) + except subprocess.CalledProcessError as e: + retry_count += 1 + if retry_count > SNAP_NO_LOCK_RETRY_COUNT: + raise CouldNotAcquireLockException( + 'Could not acquire lock after {} attempts' + .format(SNAP_NO_LOCK_RETRY_COUNT)) + return_code = e.returncode + log('Snap failed to acquire lock, trying again in {} seconds.' + .format(SNAP_NO_LOCK_RETRY_DELAY), level='WARN') + sleep(SNAP_NO_LOCK_RETRY_DELAY) + + return return_code + + +def snap_install(packages, *flags): + """ + Install a snap package. 
+ + :param packages: String or List String package name + :param flags: List String flags to pass to install command + :return: Integer return code from snap + """ + if type(packages) is not list: + packages = [packages] + + flags = list(flags) + + message = 'Installing snap(s) "%s"' % ', '.join(packages) + if flags: + message += ' with option(s) "%s"' % ', '.join(flags) + + log(message, level='INFO') + return _snap_exec(['install'] + flags + packages) + + +def snap_remove(packages, *flags): + """ + Remove a snap package. + + :param packages: String or List String package name + :param flags: List String flags to pass to remove command + :return: Integer return code from snap + """ + if type(packages) is not list: + packages = [packages] + + flags = list(flags) + + message = 'Removing snap(s) "%s"' % ', '.join(packages) + if flags: + message += ' with options "%s"' % ', '.join(flags) + + log(message, level='INFO') + return _snap_exec(['remove'] + flags + packages) + + +def snap_refresh(packages, *flags): + """ + Refresh / Update snap package. + + :param packages: String or List String package name + :param flags: List String flags to pass to refresh command + :return: Integer return code from snap + """ + if type(packages) is not list: + packages = [packages] + + flags = list(flags) + + message = 'Refreshing snap(s) "%s"' % ', '.join(packages) + if flags: + message += ' with options "%s"' % ', '.join(flags) + + log(message, level='INFO') + return _snap_exec(['refresh'] + flags + packages) + + +def valid_snap_channel(channel): + """ Validate snap channel exists + + :raises InvalidSnapChannel: When channel does not exist + :return: Boolean + """ + if channel.lower() in SNAP_CHANNELS: + return True + else: + raise InvalidSnapChannel("Invalid Snap Channel: {}".format(channel)) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/fetch/ubuntu.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/fetch/ubuntu.py new file mode 100644 index 0000000000000000000000000000000000000000..3ddaf0dd47f23cef60d7b7e59a83c989999d9f3f --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/fetch/ubuntu.py @@ -0,0 +1,805 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +from collections import OrderedDict +import platform +import re +import six +import subprocess +import sys +import time + +from charmhelpers.core.host import get_distrib_codename, get_system_env + +from charmhelpers.core.hookenv import ( + log, + DEBUG, + WARNING, + env_proxy_settings, +) +from charmhelpers.fetch import SourceConfigError, GPGKeyError +from charmhelpers.fetch import ubuntu_apt_pkg + +PROPOSED_POCKET = ( + "# Proposed\n" + "deb http://archive.ubuntu.com/ubuntu {}-proposed main universe " + "multiverse restricted\n") +PROPOSED_PORTS_POCKET = ( + "# Proposed\n" + "deb http://ports.ubuntu.com/ubuntu-ports {}-proposed main universe " + "multiverse restricted\n") +# Only supports 64bit and ppc64 at the moment. 
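+# _add_proposed() below indexes this mapping by platform.machine() (e.g. +# 'x86_64' or 'aarch64'), choosing the main archive for x86_64 and the ports +# archive otherwise; architectures not listed here raise SourceConfigError.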
+ARCH_TO_PROPOSED_POCKET = { + 'x86_64': PROPOSED_POCKET, + 'ppc64le': PROPOSED_PORTS_POCKET, + 'aarch64': PROPOSED_PORTS_POCKET, + 's390x': PROPOSED_PORTS_POCKET, +} +CLOUD_ARCHIVE_URL = "http://ubuntu-cloud.archive.canonical.com/ubuntu" +CLOUD_ARCHIVE_KEY_ID = '5EDB1B62EC4926EA' +CLOUD_ARCHIVE = """# Ubuntu Cloud Archive +deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main +""" +CLOUD_ARCHIVE_POCKETS = { + # Folsom + 'folsom': 'precise-updates/folsom', + 'folsom/updates': 'precise-updates/folsom', + 'precise-folsom': 'precise-updates/folsom', + 'precise-folsom/updates': 'precise-updates/folsom', + 'precise-updates/folsom': 'precise-updates/folsom', + 'folsom/proposed': 'precise-proposed/folsom', + 'precise-folsom/proposed': 'precise-proposed/folsom', + 'precise-proposed/folsom': 'precise-proposed/folsom', + # Grizzly + 'grizzly': 'precise-updates/grizzly', + 'grizzly/updates': 'precise-updates/grizzly', + 'precise-grizzly': 'precise-updates/grizzly', + 'precise-grizzly/updates': 'precise-updates/grizzly', + 'precise-updates/grizzly': 'precise-updates/grizzly', + 'grizzly/proposed': 'precise-proposed/grizzly', + 'precise-grizzly/proposed': 'precise-proposed/grizzly', + 'precise-proposed/grizzly': 'precise-proposed/grizzly', + # Havana + 'havana': 'precise-updates/havana', + 'havana/updates': 'precise-updates/havana', + 'precise-havana': 'precise-updates/havana', + 'precise-havana/updates': 'precise-updates/havana', + 'precise-updates/havana': 'precise-updates/havana', + 'havana/proposed': 'precise-proposed/havana', + 'precise-havana/proposed': 'precise-proposed/havana', + 'precise-proposed/havana': 'precise-proposed/havana', + # Icehouse + 'icehouse': 'precise-updates/icehouse', + 'icehouse/updates': 'precise-updates/icehouse', + 'precise-icehouse': 'precise-updates/icehouse', + 'precise-icehouse/updates': 'precise-updates/icehouse', + 'precise-updates/icehouse': 'precise-updates/icehouse', + 'icehouse/proposed': 'precise-proposed/icehouse', + 'precise-icehouse/proposed': 'precise-proposed/icehouse', + 'precise-proposed/icehouse': 'precise-proposed/icehouse', + # Juno + 'juno': 'trusty-updates/juno', + 'juno/updates': 'trusty-updates/juno', + 'trusty-juno': 'trusty-updates/juno', + 'trusty-juno/updates': 'trusty-updates/juno', + 'trusty-updates/juno': 'trusty-updates/juno', + 'juno/proposed': 'trusty-proposed/juno', + 'trusty-juno/proposed': 'trusty-proposed/juno', + 'trusty-proposed/juno': 'trusty-proposed/juno', + # Kilo + 'kilo': 'trusty-updates/kilo', + 'kilo/updates': 'trusty-updates/kilo', + 'trusty-kilo': 'trusty-updates/kilo', + 'trusty-kilo/updates': 'trusty-updates/kilo', + 'trusty-updates/kilo': 'trusty-updates/kilo', + 'kilo/proposed': 'trusty-proposed/kilo', + 'trusty-kilo/proposed': 'trusty-proposed/kilo', + 'trusty-proposed/kilo': 'trusty-proposed/kilo', + # Liberty + 'liberty': 'trusty-updates/liberty', + 'liberty/updates': 'trusty-updates/liberty', + 'trusty-liberty': 'trusty-updates/liberty', + 'trusty-liberty/updates': 'trusty-updates/liberty', + 'trusty-updates/liberty': 'trusty-updates/liberty', + 'liberty/proposed': 'trusty-proposed/liberty', + 'trusty-liberty/proposed': 'trusty-proposed/liberty', + 'trusty-proposed/liberty': 'trusty-proposed/liberty', + # Mitaka + 'mitaka': 'trusty-updates/mitaka', + 'mitaka/updates': 'trusty-updates/mitaka', + 'trusty-mitaka': 'trusty-updates/mitaka', + 'trusty-mitaka/updates': 'trusty-updates/mitaka', + 'trusty-updates/mitaka': 'trusty-updates/mitaka', + 'mitaka/proposed': 'trusty-proposed/mitaka', + 
'trusty-mitaka/proposed': 'trusty-proposed/mitaka', + 'trusty-proposed/mitaka': 'trusty-proposed/mitaka', + # Newton + 'newton': 'xenial-updates/newton', + 'newton/updates': 'xenial-updates/newton', + 'xenial-newton': 'xenial-updates/newton', + 'xenial-newton/updates': 'xenial-updates/newton', + 'xenial-updates/newton': 'xenial-updates/newton', + 'newton/proposed': 'xenial-proposed/newton', + 'xenial-newton/proposed': 'xenial-proposed/newton', + 'xenial-proposed/newton': 'xenial-proposed/newton', + # Ocata + 'ocata': 'xenial-updates/ocata', + 'ocata/updates': 'xenial-updates/ocata', + 'xenial-ocata': 'xenial-updates/ocata', + 'xenial-ocata/updates': 'xenial-updates/ocata', + 'xenial-updates/ocata': 'xenial-updates/ocata', + 'ocata/proposed': 'xenial-proposed/ocata', + 'xenial-ocata/proposed': 'xenial-proposed/ocata', + 'xenial-proposed/ocata': 'xenial-proposed/ocata', + # Pike + 'pike': 'xenial-updates/pike', + 'xenial-pike': 'xenial-updates/pike', + 'xenial-pike/updates': 'xenial-updates/pike', + 'xenial-updates/pike': 'xenial-updates/pike', + 'pike/proposed': 'xenial-proposed/pike', + 'xenial-pike/proposed': 'xenial-proposed/pike', + 'xenial-proposed/pike': 'xenial-proposed/pike', + # Queens + 'queens': 'xenial-updates/queens', + 'xenial-queens': 'xenial-updates/queens', + 'xenial-queens/updates': 'xenial-updates/queens', + 'xenial-updates/queens': 'xenial-updates/queens', + 'queens/proposed': 'xenial-proposed/queens', + 'xenial-queens/proposed': 'xenial-proposed/queens', + 'xenial-proposed/queens': 'xenial-proposed/queens', + # Rocky + 'rocky': 'bionic-updates/rocky', + 'bionic-rocky': 'bionic-updates/rocky', + 'bionic-rocky/updates': 'bionic-updates/rocky', + 'bionic-updates/rocky': 'bionic-updates/rocky', + 'rocky/proposed': 'bionic-proposed/rocky', + 'bionic-rocky/proposed': 'bionic-proposed/rocky', + 'bionic-proposed/rocky': 'bionic-proposed/rocky', + # Stein + 'stein': 'bionic-updates/stein', + 'bionic-stein': 'bionic-updates/stein', + 'bionic-stein/updates': 'bionic-updates/stein', + 'bionic-updates/stein': 'bionic-updates/stein', + 'stein/proposed': 'bionic-proposed/stein', + 'bionic-stein/proposed': 'bionic-proposed/stein', + 'bionic-proposed/stein': 'bionic-proposed/stein', + # Train + 'train': 'bionic-updates/train', + 'bionic-train': 'bionic-updates/train', + 'bionic-train/updates': 'bionic-updates/train', + 'bionic-updates/train': 'bionic-updates/train', + 'train/proposed': 'bionic-proposed/train', + 'bionic-train/proposed': 'bionic-proposed/train', + 'bionic-proposed/train': 'bionic-proposed/train', + # Ussuri + 'ussuri': 'bionic-updates/ussuri', + 'bionic-ussuri': 'bionic-updates/ussuri', + 'bionic-ussuri/updates': 'bionic-updates/ussuri', + 'bionic-updates/ussuri': 'bionic-updates/ussuri', + 'ussuri/proposed': 'bionic-proposed/ussuri', + 'bionic-ussuri/proposed': 'bionic-proposed/ussuri', + 'bionic-proposed/ussuri': 'bionic-proposed/ussuri', +} + + +APT_NO_LOCK = 100 # The return code for "couldn't acquire lock" in APT. +CMD_RETRY_DELAY = 10 # Wait 10 seconds between command retries. +CMD_RETRY_COUNT = 3 # Retry a failing fatal command X times. 
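+ +# Illustrative usage sketch (editor's example, not upstream code): a "fatal" +# apt command is routed through _run_with_retries() below, which retries on +# exit code 1 or APT_NO_LOCK and re-raises subprocess.CalledProcessError once +# CMD_RETRY_COUNT retries have been exhausted, e.g.: +# +#     _run_with_retries(['apt-get', '--assume-yes', 'install', 'htop'], +#                       retry_exitcodes=(1, APT_NO_LOCK)) +# +# ('htop' is an arbitrary example package name.)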
+ + +def filter_installed_packages(packages): + """Return a list of packages that require installation.""" + cache = apt_cache() + _pkgs = [] + for package in packages: + try: + p = cache[package] + p.current_ver or _pkgs.append(package) + except KeyError: + log('Package {} has no installation candidate.'.format(package), + level='WARNING') + _pkgs.append(package) + return _pkgs + + +def filter_missing_packages(packages): + """Return a list of packages that are installed. + + :param packages: list of packages to evaluate. + :returns list: Packages that are installed. + """ + return list( + set(packages) - + set(filter_installed_packages(packages)) + ) + + +def apt_cache(*_, **__): + """Shim returning an object simulating the apt_pkg Cache. + + :param _: Accept arguments for compatibility, not used. + :type _: any + :param __: Accept keyword arguments for compatibility, not used. + :type __: any + :returns: Object used to interrogate the system apt and dpkg databases. + :rtype: ubuntu_apt_pkg.Cache + """ + if 'apt_pkg' in sys.modules: + # NOTE(fnordahl): When our consumers use the upstream ``apt_pkg`` module + # in conjunction with the apt_cache helper function, they may expect us + # to call ``apt_pkg.init()`` for them. + # + # Detect this situation, log a warning and make the call to + # ``apt_pkg.init()`` to prevent the consumer Python interpreter from + # crashing with a segmentation fault. + log('Support for use of upstream ``apt_pkg`` module in conjunction ' 'with charm-helpers is deprecated since 2019-06-25', level=WARNING) + sys.modules['apt_pkg'].init() + return ubuntu_apt_pkg.Cache() + + +def apt_install(packages, options=None, fatal=False): + """Install one or more packages. + + :param packages: Package(s) to install + :type packages: Option[str, List[str]] + :param options: Options to pass on to apt-get + :type options: Option[None, List[str]] + :param fatal: Whether the command's output should be checked and + retried. + :type fatal: bool + :raises: subprocess.CalledProcessError + """ + if options is None: + options = ['--option=Dpkg::Options::=--force-confold'] + + cmd = ['apt-get', '--assume-yes'] + cmd.extend(options) + cmd.append('install') + if isinstance(packages, six.string_types): + cmd.append(packages) + else: + cmd.extend(packages) + log("Installing {} with options: {}".format(packages, + options)) + _run_apt_command(cmd, fatal) + + +def apt_upgrade(options=None, fatal=False, dist=False): + """Upgrade all packages. + + :param options: Options to pass on to apt-get + :type options: Option[None, List[str]] + :param fatal: Whether the command's output should be checked and + retried. + :type fatal: bool + :param dist: Whether ``dist-upgrade`` should be used over ``upgrade`` + :type dist: bool + :raises: subprocess.CalledProcessError + """ + if options is None: + options = ['--option=Dpkg::Options::=--force-confold'] + + cmd = ['apt-get', '--assume-yes'] + cmd.extend(options) + if dist: + cmd.append('dist-upgrade') + else: + cmd.append('upgrade') + log("Upgrading with options: {}".format(options)) + _run_apt_command(cmd, fatal) + + +def apt_update(fatal=False): + """Update local apt cache.""" + cmd = ['apt-get', 'update'] + _run_apt_command(cmd, fatal) + + +def apt_purge(packages, fatal=False): + """Purge one or more packages. + + :param packages: Package(s) to purge + :type packages: Option[str, List[str]] + :param fatal: Whether the command's output should be checked and + retried. 
+ :type fatal: bool + :raises: subprocess.CalledProcessError + """ + cmd = ['apt-get', '--assume-yes', 'purge'] + if isinstance(packages, six.string_types): + cmd.append(packages) + else: + cmd.extend(packages) + log("Purging {}".format(packages)) + _run_apt_command(cmd, fatal) + + +def apt_autoremove(purge=True, fatal=False): + """Remove packages that are no longer required. + :param purge: Whether the ``--purge`` option should be passed on or not. + :type purge: bool + :param fatal: Whether the command's output should be checked and + retried. + :type fatal: bool + :raises: subprocess.CalledProcessError + """ + cmd = ['apt-get', '--assume-yes', 'autoremove'] + if purge: + cmd.append('--purge') + _run_apt_command(cmd, fatal) + + +def apt_mark(packages, mark, fatal=False): + """Flag one or more packages using apt-mark.""" + log("Marking {} as {}".format(packages, mark)) + cmd = ['apt-mark', mark] + if isinstance(packages, six.string_types): + cmd.append(packages) + else: + cmd.extend(packages) + + if fatal: + subprocess.check_call(cmd, universal_newlines=True) + else: + subprocess.call(cmd, universal_newlines=True) + + +def apt_hold(packages, fatal=False): + return apt_mark(packages, 'hold', fatal=fatal) + + +def apt_unhold(packages, fatal=False): + return apt_mark(packages, 'unhold', fatal=fatal) + + +def import_key(key): + """Import an ASCII Armor key. + + A Radix64 format keyid is also supported for backwards + compatibility. In this case the Ubuntu keyserver will be + queried for a key via HTTPS by its keyid. This method + is less preferable because https proxy servers may + require traffic decryption which is equivalent to a + man-in-the-middle attack (a proxy server impersonates + keyserver TLS certificates and has to be explicitly + trusted by the system). + + :param key: A GPG key in ASCII armor format, + including BEGIN and END markers or a keyid. + :type key: (bytes, str) + :raises: GPGKeyError if the key could not be imported + """ + key = key.strip() + if '-' in key or '\n' in key: + # Send everything not obviously a keyid to GPG to import, as + # we trust its validation better than our own. eg. handling + # comments before the key. + log("PGP key found (looks like ASCII Armor format)", level=DEBUG) + if ('-----BEGIN PGP PUBLIC KEY BLOCK-----' in key and + '-----END PGP PUBLIC KEY BLOCK-----' in key): + log("Writing provided PGP key in the binary format", level=DEBUG) + if six.PY3: + key_bytes = key.encode('utf-8') + else: + key_bytes = key + key_name = _get_keyid_by_gpg_key(key_bytes) + key_gpg = _dearmor_gpg_key(key_bytes) + _write_apt_gpg_keyfile(key_name=key_name, key_material=key_gpg) + else: + raise GPGKeyError("ASCII armor markers missing from GPG key") + else: + log("PGP key found (looks like Radix64 format)", level=WARNING) + log("SECURELY importing PGP key from keyserver; " + "full key not provided.", level=WARNING) + # as of bionic add-apt-repository uses curl with an HTTPS keyserver URL + # to retrieve GPG keys. `apt-key adv` command is deprecated as is + # apt-key in general as noted in its manpage. See lp:1433761 for more + # history. Instead, /etc/apt/trusted.gpg.d is used directly to drop + # gpg keys. + key_asc = _get_key_by_keyid(key) + # write the key in GPG format so that apt-key list shows it + key_gpg = _dearmor_gpg_key(key_asc) + _write_apt_gpg_keyfile(key_name=key, key_material=key_gpg) + + +def _get_keyid_by_gpg_key(key_material): + """Get a GPG key fingerprint by GPG key material. 
+ Gets a GPG key fingerprint (40-digit, 160-bit) by the ASCII armor-encoded + or binary GPG key material. Can be used, for example, to generate file + names for keys passed via charm options. + + :param key_material: ASCII armor-encoded or binary GPG key material + :type key_material: bytes + :raises: GPGKeyError if invalid key material has been provided + :returns: A GPG key fingerprint + :rtype: str + """ + # Use the same gpg command for both Xenial and Bionic + cmd = 'gpg --with-colons --with-fingerprint' + ps = subprocess.Popen(cmd.split(), + stdout=subprocess.PIPE, + stderr=subprocess.PIPE, + stdin=subprocess.PIPE) + out, err = ps.communicate(input=key_material) + if six.PY3: + out = out.decode('utf-8') + err = err.decode('utf-8') + if 'gpg: no valid OpenPGP data found.' in err: + raise GPGKeyError('Invalid GPG key material provided') + # from gnupg2 docs: fpr :: Fingerprint (fingerprint is in field 10) + return re.search(r"^fpr:{9}([0-9A-F]{40}):$", out, re.MULTILINE).group(1) + + +def _get_key_by_keyid(keyid): + """Get a key via HTTPS from the Ubuntu keyserver. + Different key ID formats are supported by SKS keyservers (the longer ones + are more secure, see "dead beef attack" and https://evil32.com/). Since + HTTPS is used, if SSLBump-like HTTPS proxies are in place, they will + impersonate keyserver.ubuntu.com and generate a certificate with + keyserver.ubuntu.com in the CN field or in SubjAltName fields of a + certificate. If such proxy behavior is expected it is necessary to add the + CA certificate chain containing the intermediate CA of the SSLBump proxy to + every machine that this code runs on via ca-certs cloud-init directive (via + cloudinit-userdata model-config) or via other means (such as through a + custom charm option). Also note that DNS resolution for the hostname in a + URL is done at a proxy server - not at the client side. + + 8-digit (32 bit) key ID + https://keyserver.ubuntu.com/pks/lookup?search=0x4652B4E6 + 16-digit (64 bit) key ID + https://keyserver.ubuntu.com/pks/lookup?search=0x6E85A86E4652B4E6 + 40-digit key ID: + https://keyserver.ubuntu.com/pks/lookup?search=0x35F77D63B5CEC106C577ED856E85A86E4652B4E6 + + :param keyid: An 8, 16 or 40 hex digit keyid to find a key for + :type keyid: (bytes, str) + :returns: A key material for the specified GPG key id + :rtype: (str, bytes) + :raises: subprocess.CalledProcessError + """ + # options=mr - machine-readable output (disables html wrappers) + keyserver_url = ('https://keyserver.ubuntu.com' + '/pks/lookup?op=get&options=mr&exact=on&search=0x{}') + curl_cmd = ['curl', keyserver_url.format(keyid)] + # use proxy server settings in order to retrieve the key + return subprocess.check_output(curl_cmd, + env=env_proxy_settings(['https'])) + + +def _dearmor_gpg_key(key_asc): + """Converts a GPG key in the ASCII armor format to the binary format. + + :param key_asc: A GPG key in ASCII armor format. + :type key_asc: (str, bytes) + :returns: A GPG key in binary format + :rtype: (str, bytes) + :raises: GPGKeyError + """ + ps = subprocess.Popen(['gpg', '--dearmor'], + stdout=subprocess.PIPE, + stderr=subprocess.PIPE, + stdin=subprocess.PIPE) + out, err = ps.communicate(input=key_asc) + # no need to decode output as it is binary (invalid utf-8), only error + if six.PY3: + err = err.decode('utf-8') + if 'gpg: no valid OpenPGP data found.' in err: + raise GPGKeyError('Invalid GPG key material. 
Check your network setup' + ' (MTU, routing, DNS) and/or proxy server settings' + ' as well as destination keyserver status.') + else: + return out + + +def _write_apt_gpg_keyfile(key_name, key_material): + """Writes GPG key material into a file at a provided path. + + :param key_name: A key name to use for a key file (could be a fingerprint) + :type key_name: str + :param key_material: A GPG key material (binary) + :type key_material: (str, bytes) + """ + with open('/etc/apt/trusted.gpg.d/{}.gpg'.format(key_name), + 'wb') as keyf: + keyf.write(key_material) + + +def add_source(source, key=None, fail_invalid=False): + """Add a package source to this system. + + @param source: a URL or sources.list entry, as supported by + add-apt-repository(1). Examples:: + + ppa:charmers/example + deb https://stub:key@private.example.com/ubuntu trusty main + + In addition: + 'proposed:' may be used to enable the standard 'proposed' + pocket for the release. + 'cloud:' may be used to activate official cloud archive pockets, + such as 'cloud:icehouse' + 'distro' may be used as a noop + + Full list of source specifications supported by the function are: + + 'distro': A NOP; i.e. it has no effect. + 'proposed': the proposed deb spec [2] is written to + /etc/apt/sources.list.d/proposed.list + 'distro-proposed': adds <version>-proposed to the debs [2] + 'ppa:<ppa-name>': add-apt-repository --yes <ppa-name> + 'deb <deb-spec>': add-apt-repository --yes deb <deb-spec> + 'http://....': add-apt-repository --yes http://... + 'cloud-archive:<spec>': add-apt-repository -yes cloud-archive:<spec> + 'cloud:<release>[-staging]': specify a Cloud Archive pocket with + optional staging version. If staging is used then the staging PPA [2] + will be used. If staging is NOT used then the cloud archive [3] will be + added, and the 'ubuntu-cloud-keyring' package will be added for the + current distro. + + Otherwise the source is not recognised and this is logged to the juju log. + However, no error is raised, unless fail_invalid is True. + + [1] deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main + where {} is replaced with the derived pocket name. + [2] deb http://archive.ubuntu.com/ubuntu {}-proposed \ + main universe multiverse restricted + where {} is replaced with the lsb_release codename (e.g. xenial) + [3] deb http://ubuntu-cloud.archive.canonical.com/ubuntu + to /etc/apt/sources.list.d/cloud-archive-list + + @param key: A key to be added to the system's APT keyring and used + to verify the signatures on packages. Ideally, this should be an + ASCII format GPG public key including the block headers. A GPG key + id may also be used, but be aware that only insecure protocols are + available to retrieve the actual public key from a public keyserver + placing your Juju environment at risk. ppa and cloud archive keys + are securely added automatically, so should not be provided. + + @param fail_invalid: (boolean) if True, then the function raises a + SourceConfigError if there is no matching installation source. 
+ + @raises SourceConfigError() if for cloud:<pocket>, the <pocket> is not a + valid pocket in CLOUD_ARCHIVE_POCKETS + """ + _mapping = OrderedDict([ + (r"^distro$", lambda: None), # This is a NOP + (r"^(?:proposed|distro-proposed)$", _add_proposed), + (r"^cloud-archive:(.*)$", _add_apt_repository), + (r"^((?:deb |http:|https:|ppa:).*)$", _add_apt_repository), + (r"^cloud:(.*)-(.*)\/staging$", _add_cloud_staging), + (r"^cloud:(.*)-(.*)$", _add_cloud_distro_check), + (r"^cloud:(.*)$", _add_cloud_pocket), + (r"^snap:.*-(.*)-(.*)$", _add_cloud_distro_check), + ]) + if source is None: + source = '' + for r, fn in six.iteritems(_mapping): + m = re.match(r, source) + if m: + if key: + # Import key before adding the source which depends on it, + # as refreshing packages could fail otherwise. + try: + import_key(key) + except GPGKeyError as e: + raise SourceConfigError(str(e)) + # call the associated function with the captured groups + # raises SourceConfigError on error. + fn(*m.groups()) + break + else: + # nothing matched. log an error and maybe sys.exit + err = "Unknown source: {!r}".format(source) + log(err) + if fail_invalid: + raise SourceConfigError(err) + + +def _add_proposed(): + """Add the PROPOSED_POCKET as /etc/apt/sources.list.d/proposed.list + + Uses get_distrib_codename to determine the correct stanza for + the deb line. + + For intel architectures PROPOSED_POCKET is used for the release, but for + other architectures PROPOSED_PORTS_POCKET is used for the release. + """ + release = get_distrib_codename() + arch = platform.machine() + if arch not in six.iterkeys(ARCH_TO_PROPOSED_POCKET): + raise SourceConfigError("Arch {} not supported for (distro-)proposed" + .format(arch)) + with open('/etc/apt/sources.list.d/proposed.list', 'w') as apt: + apt.write(ARCH_TO_PROPOSED_POCKET[arch].format(release)) + + +def _add_apt_repository(spec): + """Add the spec using add_apt_repository + + :param spec: the parameter to pass to add_apt_repository + :type spec: str + """ + if '{series}' in spec: + series = get_distrib_codename() + spec = spec.replace('{series}', series) + # software-properties package for bionic properly reacts to proxy settings + # passed as environment variables (See lp:1433761). This is not the case + # for LTS and non-LTS releases below bionic. + _run_with_retries(['add-apt-repository', '--yes', spec], + cmd_env=env_proxy_settings(['https'])) + + +def _add_cloud_pocket(pocket): + """Add a cloud pocket as /etc/apt/sources.list.d/cloud-archive.list + + Note that this overwrites the existing file if there is one. + + This function also converts the simple pocket into the actual pocket using + the CLOUD_ARCHIVE_POCKETS mapping. + + :param pocket: string representing the pocket to add a deb spec for. + :raises: SourceConfigError if the cloud pocket doesn't exist or the + requested release doesn't match the current distro version. + """ + apt_install(filter_installed_packages(['ubuntu-cloud-keyring']), + fatal=True) + if pocket not in CLOUD_ARCHIVE_POCKETS: + raise SourceConfigError( + 'Unsupported cloud: source option %s' % + pocket) + actual_pocket = CLOUD_ARCHIVE_POCKETS[pocket] + with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as apt: + apt.write(CLOUD_ARCHIVE.format(actual_pocket)) + + +def _add_cloud_staging(cloud_archive_release, openstack_release): + """Add the cloud staging repository which is in + ppa:ubuntu-cloud-archive/<openstack-release>-staging + + This function checks that the cloud_archive_release matches the current + codename for the distro that the charm is being installed on. 
+ + :param cloud_archive_release: string, codename for the release. + :param openstack_release: String, codename for the openstack release. + :raises: SourceConfigError if the cloud_archive_release doesn't match the + current version of the os. + """ + _verify_is_ubuntu_rel(cloud_archive_release, openstack_release) + ppa = 'ppa:ubuntu-cloud-archive/{}-staging'.format(openstack_release) + cmd = 'add-apt-repository -y {}'.format(ppa) + _run_with_retries(cmd.split(' ')) + + +def _add_cloud_distro_check(cloud_archive_release, openstack_release): + """Add the cloud pocket, but also check the cloud_archive_release against + the current distro, and use the openstack_release as the full lookup. + + This just calls _add_cloud_pocket() with the openstack_release as pocket + to get the correct cloud-archive.list for dpkg to work with. + + :param cloud_archive_release: String, codename for the distro release. + :param openstack_release: String, spec for the release to look up in the + CLOUD_ARCHIVE_POCKETS + :raises: SourceConfigError if this is the wrong distro, or the pocket spec + doesn't exist. + """ + _verify_is_ubuntu_rel(cloud_archive_release, openstack_release) + _add_cloud_pocket("{}-{}".format(cloud_archive_release, openstack_release)) + + +def _verify_is_ubuntu_rel(release, os_release): + """Verify that the release is the same as the current ubuntu release. + + :param release: String, lowercase for the release. + :param os_release: String, the os_release being asked for + :raises: SourceConfigError if the release is not the same as the ubuntu + release. + """ + ubuntu_rel = get_distrib_codename() + if release != ubuntu_rel: + raise SourceConfigError( + 'Invalid Cloud Archive release specified: {}-{} on this Ubuntu ' 'version ({})'.format(release, os_release, ubuntu_rel)) + + +def _run_with_retries(cmd, max_retries=CMD_RETRY_COUNT, retry_exitcodes=(1,), + retry_message="", cmd_env=None): + """Run a command and retry until success or max_retries is reached. + + :param cmd: The apt command to run. + :type cmd: List[str] + :param max_retries: The number of retries to attempt on a fatal + command. Defaults to CMD_RETRY_COUNT. + :type max_retries: int + :param retry_exitcodes: Optional additional exit codes to retry. + Defaults to retry on exit code 1. + :type retry_exitcodes: tuple + :param retry_message: Optional log prefix emitted during retries. + :type retry_message: str + :param cmd_env: Environment variables to add to the command run. + :type cmd_env: Option[None, Dict[str, str]] + """ + env = get_apt_dpkg_env() + if cmd_env: + env.update(cmd_env) + + if not retry_message: + retry_message = "Failed executing '{}'".format(" ".join(cmd)) + retry_message += ". Will retry in {} seconds".format(CMD_RETRY_DELAY) + + retry_count = 0 + result = None + + retry_results = (None,) + retry_exitcodes + while result in retry_results: + try: + result = subprocess.check_call(cmd, env=env) + except subprocess.CalledProcessError as e: + retry_count = retry_count + 1 + if retry_count > max_retries: + raise + result = e.returncode + log(retry_message) + time.sleep(CMD_RETRY_DELAY) + + +def _run_apt_command(cmd, fatal=False): + """Run an apt command with optional retries. + + :param cmd: The apt command to run. + :type cmd: List[str] + :param fatal: Whether the command's output should be checked and + retried. 
+ :type fatal: bool + """ + if fatal: + _run_with_retries( + cmd, retry_exitcodes=(1, APT_NO_LOCK,), + retry_message="Couldn't acquire DPKG lock") + else: + subprocess.call(cmd, env=get_apt_dpkg_env()) + + +def get_upstream_version(package): + """Determine upstream version based on installed package + + @returns None (if not installed) or the upstream version + """ + cache = apt_cache() + try: + pkg = cache[package] + except Exception: + # the package is unknown to the current apt cache. + return None + + if not pkg.current_ver: + # package is known, but no version is currently installed. + return None + + return ubuntu_apt_pkg.upstream_version(pkg.current_ver.ver_str) + + +def get_apt_dpkg_env(): + """Get environment suitable for execution of APT and DPKG tools. + + We keep this in a helper function instead of in a global constant to + avoid execution on import of the library. + :returns: Environment suitable for execution of APT and DPKG tools. + :rtype: Dict[str, str] + """ + # The fallback is used in the event of ``/etc/environment`` not containing + # a valid PATH variable. + return {'DEBIAN_FRONTEND': 'noninteractive', + 'PATH': get_system_env('PATH', '/usr/sbin:/usr/bin:/sbin:/bin')} diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/fetch/ubuntu_apt_pkg.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/fetch/ubuntu_apt_pkg.py new file mode 100644 index 0000000000000000000000000000000000000000..929a75d7a775921c32368e8cc7950eaa97390047 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/fetch/ubuntu_apt_pkg.py @@ -0,0 +1,267 @@ +# Copyright 2019 Canonical Ltd +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +"""Provide a subset of the ``python-apt`` module API. + +Data collection is done through subprocess calls to ``apt-cache`` and +``dpkg-query`` commands. + +The main purpose for this module is to avoid dependency on the +``python-apt`` python module. + +The indicated python module is a wrapper around the ``apt`` C++ library +which is tightly connected to the version of the distribution it was +shipped on. It is not developed in a backward/forward compatible manner. + +This in turn makes it incredibly hard to distribute as a wheel for a piece +of python software that supports a span of distro releases [0][1]. + +Upstream feedback like [2] does not give confidence that this will ever +change, so with this module we get rid of the dependency. 
+ +0: https://github.com/juju-solutions/layer-basic/pull/135 +1: https://bugs.launchpad.net/charm-octavia/+bug/1824112 +2: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=845330#10 +""" + +import locale +import os +import subprocess +import sys + + +class _container(dict): + """Simple container for attributes.""" + __getattr__ = dict.__getitem__ + __setattr__ = dict.__setitem__ + + +class Package(_container): + """Simple container for package attributes.""" + + +class Version(_container): + """Simple container for version attributes.""" + + +class Cache(object): + """Simulation of ``apt_pkg`` Cache object.""" + def __init__(self, progress=None): + pass + + def __contains__(self, package): + try: + pkg = self.__getitem__(package) + return pkg is not None + except KeyError: + return False + + def __getitem__(self, package): + """Get information about a package from apt and dpkg databases. + + :param package: Name of package + :type package: str + :returns: Package object + :rtype: object + :raises: KeyError, subprocess.CalledProcessError + """ + apt_result = self._apt_cache_show([package])[package] + apt_result['name'] = apt_result.pop('package') + pkg = Package(apt_result) + dpkg_result = self._dpkg_list([package]).get(package, {}) + current_ver = None + installed_version = dpkg_result.get('version') + if installed_version: + current_ver = Version({'ver_str': installed_version}) + pkg.current_ver = current_ver + pkg.architecture = dpkg_result.get('architecture') + return pkg + + def _dpkg_list(self, packages): + """Get data from system dpkg database for package. + + :param packages: Packages to get data from + :type packages: List[str] + :returns: Structured data about installed packages, keys like + ``dpkg-query --list`` + :rtype: dict + :raises: subprocess.CalledProcessError + """ + pkgs = {} + cmd = ['dpkg-query', '--list'] + cmd.extend(packages) + if locale.getlocale() == (None, None): + # subprocess calls out to locale.getpreferredencoding(False) to + # determine encoding. Workaround for Trusty where the + # environment appears to not be set up correctly. + locale.setlocale(locale.LC_ALL, 'en_US.UTF-8') + try: + output = subprocess.check_output(cmd, + stderr=subprocess.STDOUT, + universal_newlines=True) + except subprocess.CalledProcessError as cp: + # ``dpkg-query`` may return error and at the same time have + # produced useful output, for example when asked for multiple + # packages where some are not installed + if cp.returncode != 1: + raise + output = cp.output + headings = [] + for line in output.splitlines(): + if line.startswith('||/'): + headings = line.split() + headings.pop(0) + continue + elif (line.startswith('|') or line.startswith('+') or + line.startswith('dpkg-query:')): + continue + else: + data = line.split(None, 4) + status = data.pop(0) + if status != 'ii': + continue + pkg = {} + pkg.update({k.lower(): v for k, v in zip(headings, data)}) + if 'name' in pkg: + pkgs.update({pkg['name']: pkg}) + return pkgs + + def _apt_cache_show(self, packages): + """Get data from system apt cache for package. + + :param packages: Packages to get data from + :type packages: List[str] + :returns: Structured data about package, keys like + ``apt-cache show`` + :rtype: dict + :raises: subprocess.CalledProcessError + """ + pkgs = {} + cmd = ['apt-cache', 'show', '--no-all-versions'] + cmd.extend(packages) + if locale.getlocale() == (None, None): + # subprocess calls out to locale.getpreferredencoding(False) to + # determine encoding. 
Workaround for Trusty where the + # environment appears to not be set up correctly. + locale.setlocale(locale.LC_ALL, 'en_US.UTF-8') + try: + output = subprocess.check_output(cmd, + stderr=subprocess.STDOUT, + universal_newlines=True) + previous = None + pkg = {} + for line in output.splitlines(): + if not line: + if 'package' in pkg: + pkgs.update({pkg['package']: pkg}) + pkg = {} + continue + if line.startswith(' '): + if previous and previous in pkg: + pkg[previous] += os.linesep + line.lstrip() + continue + if ':' in line: + kv = line.split(':', 1) + key = kv[0].lower() + if key == 'n': + continue + previous = key + pkg.update({key: kv[1].lstrip()}) + except subprocess.CalledProcessError as cp: + # ``apt-cache`` returns 100 if none of the packages asked for + # exist in the apt cache. + if cp.returncode != 100: + raise + return pkgs + + +class Config(_container): + def __init__(self): + super(Config, self).__init__(self._populate()) + + def _populate(self): + cfgs = {} + cmd = ['apt-config', 'dump'] + output = subprocess.check_output(cmd, + stderr=subprocess.STDOUT, + universal_newlines=True) + for line in output.splitlines(): + if not line.startswith("CommandLine"): + k, v = line.split(" ", 1) + cfgs[k] = v.strip(";").strip("\"") + + return cfgs + + +# Backwards compatibility with old apt_pkg module +sys.modules[__name__].config = Config() + + +def init(): + """Compatibility shim that does nothing.""" + pass + + +def upstream_version(version): + """Extracts upstream version from a version string. + + Upstream reference: https://salsa.debian.org/apt-team/apt/blob/master/ + apt-pkg/deb/debversion.cc#L259 + + :param version: Version string + :type version: str + :returns: Upstream version + :rtype: str + """ + if version: + version = version.split(':')[-1] + version = version.split('-')[0] + return version + + +def version_compare(a, b): + """Compare the given versions. + + Call out to ``dpkg`` to make sure the code doing the comparison is + compatible with what the ``apt`` library would do. Mimic the return + values. + + Upstream reference: + https://apt-team.pages.debian.net/python-apt/library/apt_pkg.html + ?highlight=version_compare#apt_pkg.version_compare + + :param a: version string + :type a: str + :param b: version string + :type b: str + :returns: >0 if ``a`` is greater than ``b``, 0 if a equals b, + <0 if ``a`` is smaller than ``b`` + :rtype: int + :raises: subprocess.CalledProcessError, RuntimeError + """ + for op in ('gt', 1), ('eq', 0), ('lt', -1): + try: + subprocess.check_call(['dpkg', '--compare-versions', + a, op[0], b], + stderr=subprocess.STDOUT, + universal_newlines=True) + return op[1] + except subprocess.CalledProcessError as cp: + if cp.returncode == 1: + continue + raise + else: + raise RuntimeError('Unable to compare "{}" and "{}"; according to ' 'our logic they are neither greater, equal nor ' 'less than each other.'.format(a, b)) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/osplatform.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/osplatform.py new file mode 100644 index 0000000000000000000000000000000000000000..78c81af5955caee51271c830d58cacac2cab9bcc --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/osplatform.py @@ -0,0 +1,46 @@ +import platform +import os + + +def get_platform(): + """Return the current OS platform. + + For example: if current os platform is Ubuntu then a string "ubuntu" + will be returned (which is the name of the module). 
+ This string is used to decide which platform module should be imported. + """ + # linux_distribution is deprecated; it was removed in Python 3.8. + # Warnings *not* disabled, as we certainly need to fix this. + if hasattr(platform, 'linux_distribution'): + tuple_platform = platform.linux_distribution() + current_platform = tuple_platform[0] + else: + current_platform = _get_platform_from_fs() + + if "Ubuntu" in current_platform: + return "ubuntu" + elif "CentOS" in current_platform: + return "centos" + elif "debian" in current_platform: + # Stock Python does not detect Ubuntu and instead returns debian. + # Or at least it does in some build environments like Travis CI + return "ubuntu" + elif "elementary" in current_platform: + # ElementaryOS fails to run tests locally without this. + return "ubuntu" + else: + raise RuntimeError("This module is not supported on {}." + .format(current_platform)) + + +def _get_platform_from_fs(): + """Get Platform from /etc/os-release.""" + with open(os.path.join(os.sep, 'etc', 'os-release')) as fin: + content = dict( + line.split('=', 1) + for line in fin.read().splitlines() + if '=' in line + ) + for k, v in content.items(): + content[k] = v.strip('"') + return content["NAME"] diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/payload/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/payload/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..ee55cb3d2baddb556df910f1d41638c3c7f39c59 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/payload/__init__.py @@ -0,0 +1,15 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +"Tools for working with files injected into a charm just before deployment." diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/payload/archive.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/payload/archive.py new file mode 100644 index 0000000000000000000000000000000000000000..7fc453f523cf49f0c123839a347c4452c6d465ca --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/payload/archive.py @@ -0,0 +1,71 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
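+ +# Usage sketch (illustrative; 'payload.tgz' is a hypothetical file name): +# extract('payload.tgz') selects a handler by sniffing the file with +# tarfile/zipfile (falling back to the file extension), unpacks it under +# $CHARM_DIR/archives/<basename> by default, and raises ArchiveError when +# no handler matches.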
+ +import os +import tarfile +import zipfile +from charmhelpers.core import ( + host, + hookenv, +) + + +class ArchiveError(Exception): + pass + + +def get_archive_handler(archive_name): + if os.path.isfile(archive_name): + if tarfile.is_tarfile(archive_name): + return extract_tarfile + elif zipfile.is_zipfile(archive_name): + return extract_zipfile + else: + # look at the file name + for ext in ('.tar', '.tar.gz', '.tgz', 'tar.bz2', '.tbz2', '.tbz'): + if archive_name.endswith(ext): + return extract_tarfile + for ext in ('.zip', '.jar'): + if archive_name.endswith(ext): + return extract_zipfile + + +def archive_dest_default(archive_name): + archive_file = os.path.basename(archive_name) + return os.path.join(hookenv.charm_dir(), "archives", archive_file) + + +def extract(archive_name, destpath=None): + handler = get_archive_handler(archive_name) + if handler: + if not destpath: + destpath = archive_dest_default(archive_name) + if not os.path.isdir(destpath): + host.mkdir(destpath) + handler(archive_name, destpath) + return destpath + else: + raise ArchiveError("No handler for archive") + + +def extract_tarfile(archive_name, destpath): + "Unpack a tar archive, optionally compressed" + archive = tarfile.open(archive_name) + archive.extractall(destpath) + + +def extract_zipfile(archive_name, destpath): + "Unpack a zip file" + archive = zipfile.ZipFile(archive_name) + archive.extractall(destpath) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/payload/execd.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/payload/execd.py new file mode 100644 index 0000000000000000000000000000000000000000..1502aa0b596f0b1a2017ccb4543a35999774431d --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charmhelpers/payload/execd.py @@ -0,0 +1,65 @@ +#!/usr/bin/env python + +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import os +import sys +import subprocess +from charmhelpers.core import hookenv + + +def default_execd_dir(): + return os.path.join(os.environ['CHARM_DIR'], 'exec.d') + + +def execd_module_paths(execd_dir=None): + """Generate a list of full paths to modules within execd_dir.""" + if not execd_dir: + execd_dir = default_execd_dir() + + if not os.path.exists(execd_dir): + return + + for subpath in os.listdir(execd_dir): + module = os.path.join(execd_dir, subpath) + if os.path.isdir(module): + yield module + + +def execd_submodule_paths(command, execd_dir=None): + """Generate a list of full paths to the specified command within exec_dir. 
+    """
+    for module_path in execd_module_paths(execd_dir):
+        path = os.path.join(module_path, command)
+        if os.access(path, os.X_OK) and os.path.isfile(path):
+            yield path
+
+
+def execd_run(command, execd_dir=None, die_on_error=True, stderr=subprocess.STDOUT):
+    """Run command for each module within execd_dir which defines it."""
+    for submodule_path in execd_submodule_paths(command, execd_dir):
+        try:
+            subprocess.check_output(submodule_path, stderr=stderr,
+                                    universal_newlines=True)
+        except subprocess.CalledProcessError as e:
+            hookenv.log("Error ({}) running {}. Output: {}".format(
+                e.returncode, e.cmd, e.output))
+            if die_on_error:
+                sys.exit(e.returncode)
+
+
+def execd_preinstall(execd_dir=None):
+    """Run charm-pre-install for each module within execd_dir."""
+    execd_run('charm-pre-install', execd_dir=execd_dir)
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charms/osm/libansible.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charms/osm/libansible.py
new file mode 100644
index 0000000000000000000000000000000000000000..811ec4441fdde7f424a6e29800f265a528d27da5
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charms/osm/libansible.py
@@ -0,0 +1,108 @@
+##
+# Copyright ETSI OSM
+# All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+##
+
+import fnmatch
+import os
+import subprocess
+import sys
+import yaml
+
+import charmhelpers.fetch
+
+
+sys.path.append("lib")
+
+
+ansible_hosts_path = "/etc/ansible/hosts"
+
+
+def install_ansible_support(from_ppa=True, ppa_location="ppa:ansible/ansible"):
+    """Installs the ansible package.
+
+    By default it is installed from the `PPA`_ linked from
+    the ansible `website`_, or from a PPA specified by a charm config.
+
+    .. _PPA: https://launchpad.net/~rquillo/+archive/ansible
+    .. _website: http://docs.ansible.com/intro_installation.html#latest-releases-via-apt-ubuntu
+
+    If from_ppa is False, you must ensure that the package is available
+    from a configured repository.
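+
+    A minimal usage sketch (assumes the machine can reach Launchpad in the
+    PPA case; otherwise ansible must already be available from a configured
+    repository)::
+
+        install_ansible_support()                # from ppa:ansible/ansible
+        install_ansible_support(from_ppa=False)  # from configured repos only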
+ """ + if from_ppa: + charmhelpers.fetch.add_source(ppa_location) + charmhelpers.fetch.apt_update(fatal=True) + charmhelpers.fetch.apt_install("ansible") + with open(ansible_hosts_path, "w+") as hosts_file: + hosts_file.write("localhost ansible_connection=local") + + +def create_hosts(hostname, username, password, hosts): + inventory_path = "/etc/ansible/hosts" + with open(inventory_path, 'w') as f: + f.write('[{}]\n'.format(hosts)) + h1 = '{0} ansible_connection=network_cli ansible_network_os=vyos ' \ + 'ansible_ssh_user={1} ansible_ssh_pass={2} ' \ + 'ansible_python_interpreter=/usr/bin/python\n'.format(hostname, username, password) + f.write(h1) + + +def create_ansible_cfg(): + ansible_config_path = "/etc/ansible/ansible.cfg" + with open(ansible_config_path, 'w') as f: + f.write('[defaults]\n') + f.write('host_key_checking = False\n') + # logs playbook execution attempts to the specified path + f.write('log_path = /var/log/ansible.log\n') + f.write('[ssh_connection]\n') + f.write('control_path=%(directory)s/%%h-%%r\n') + f.write('control_path_dir=~/.ansible/cp\n') + + +# Function to find the playbook path +def find(pattern, path): + result = "" + for root, dirs, files in os.walk(path): + for name in files: + if fnmatch.fnmatch(name, pattern): + result = os.path.join(root, name) + return result + + +def execute_playbook(playbook_file, hostname, user, password, vars_dict=None): + playbook_path = find(playbook_file, '/var/lib/juju/agents/') + print("Playbook " + playbook_file + " is " + playbook_path) + with open(playbook_path, 'r') as f: + playbook_data = yaml.load(f, Loader=yaml.SafeLoader) + hosts = 'all' + if 'hosts' in playbook_data[0].keys() and playbook_data[0]['hosts']: + hosts = playbook_data[0]['hosts'] + create_ansible_cfg() + create_hosts(hostname, user, password, hosts) + call = 'ansible-playbook -vvvv %s ' % playbook_path + if vars_dict and isinstance(vars_dict, dict) and len(vars_dict) > 0: + call += '--extra-vars ' + string_var = '' + for v in vars_dict.items(): + string_var += '%s=%s ' % v + string_var = string_var.strip() + call += '"%s"' % string_var + call = call.strip() + print(call) + result = subprocess.check_output(call, shell=True) + lastline = result.decode('utf-8').splitlines()[-2] + print("Result is " + lastline) + return lastline diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charms/osm/ns.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charms/osm/ns.py new file mode 100644 index 0000000000000000000000000000000000000000..25be4056282e48ce946632025be8b466557e3171 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charms/osm/ns.py @@ -0,0 +1,301 @@ +# A prototype of a library to aid in the development and operation of +# OSM Network Service charms + +import asyncio +import logging +import os +import os.path +import re +import subprocess +import sys +import time +import yaml + +try: + import juju +except ImportError: + # Not all cloud images are created equal + if not os.path.exists("/usr/bin/python3") or not os.path.exists("/usr/bin/pip3"): + # Update the apt cache + subprocess.check_call(["apt-get", "update"]) + + # Install the Python3 package + subprocess.check_call(["apt-get", "install", "-y", "python3", "python3-pip"],) + + + # Install the libjuju build dependencies + subprocess.check_call(["apt-get", "install", "-y", "libffi-dev", "libssl-dev"],) + + subprocess.check_call( + [sys.executable, "-m", "pip", "install", "juju"], + ) + +from juju.controller import Controller + +# Quiet the debug logging 
+logging.getLogger('websockets.protocol').setLevel(logging.INFO)
+logging.getLogger('juju.client.connection').setLevel(logging.WARN)
+logging.getLogger('juju.model').setLevel(logging.WARN)
+logging.getLogger('juju.machine').setLevel(logging.WARN)
+
+
+class NetworkService:
+    """A lightweight interface to the Juju controller.
+
+    This NetworkService client is specifically designed to allow a higher-level
+    "NS" charm to interoperate with "VNF" charms, allowing for the execution of
+    Primitives across other charms within the same model.
+    """
+    endpoint = None
+    user = 'admin'
+    secret = None
+    port = 17070
+    loop = None
+    client = None
+    model = None
+    cacert = None
+
+    def __init__(self, user, secret, endpoint=None):
+
+        self.user = user
+        self.secret = secret
+        if endpoint is None:
+            addresses = os.environ['JUJU_API_ADDRESSES']
+            for address in addresses.split(' '):
+                self.endpoint = address
+        else:
+            self.endpoint = endpoint
+
+        # Stash the name of the model
+        self.model = os.environ['JUJU_MODEL_NAME']
+
+        # Load the ca-cert from agent.conf
+        AGENT_PATH = os.path.dirname(os.environ['JUJU_CHARM_DIR'])
+        with open("{}/agent.conf".format(AGENT_PATH), "r") as f:
+            try:
+                y = yaml.safe_load(f)
+                self.cacert = y['cacert']
+            except yaml.YAMLError as exc:
+                print("Unable to find Juju ca-cert.")
+                raise exc
+
+        # Create our event loop
+        self.loop = asyncio.new_event_loop()
+        asyncio.set_event_loop(self.loop)
+
+    async def connect(self):
+        """Connect to the Juju controller."""
+        controller = Controller()
+
+        print(
+            "Connecting to controller... ws://{}:{} as {}/{}".format(
+                self.endpoint,
+                self.port,
+                self.user,
+                self.secret[-4:].rjust(len(self.secret), "*"),
+            )
+        )
+        await controller.connect(
+            endpoint=self.endpoint,
+            username=self.user,
+            password=self.secret,
+            cacert=self.cacert,
+        )
+
+        return controller
+
+    def __del__(self):
+        self.logout()
+
+    async def disconnect(self):
+        """Disconnect from the Juju controller."""
+        if self.client:
+            print("Disconnecting Juju controller")
+            await self.client.disconnect()
+
+    def login(self):
+        """Login to the Juju controller."""
+        if not self.client:
+            # Connect to the Juju API server
+            self.client = self.loop.run_until_complete(self.connect())
+        return self.client
+
+    def logout(self):
+        """Logout of the Juju controller."""
+
+        if self.loop:
+            print("Disconnecting from API")
+            self.loop.run_until_complete(self.disconnect())
+
+    def FormatApplicationName(self, *args):
+        """
+        Generate a Juju-compatible Application name.
+
+        :param args tuple: Positional arguments to be used to construct the
+            application name.
+
+        Limitations::
+        - Only accepts characters a-z and non-consecutive dashes (-)
+        - Application name should not exceed 50 characters
+
+        Examples::
+
+            FormatApplicationName("ping_pong_ns", "ping_vnf", "a")
+        """
+        appname = ""
+        for c in "-".join(list(args)):
+            if c.isdigit():
+                c = chr(97 + int(c))
+            elif not c.isalpha():
+                c = "-"
+            appname += c
+
+        return re.sub('-+', '-', appname.lower())
+
+    def GetApplicationName(self, nsr_name, vnf_name, vnf_member_index):
+        """Get the runtime application name of a VNF/VDU.
+
+        This will generate an application name matching the name of the deployed charm,
+        given the right parameters.
+
+        :param nsr_name str: The name of the running Network Service, as specified at instantiation.
+        :param vnf_name str: The name of the VNF or VDU
+        :param vnf_member_index: The vnf-member-index as specified in the descriptor
+        """
+
+        application_name = self.FormatApplicationName(nsr_name, vnf_member_index, vnf_name)
+
+        # This matches the logic used by the LCM
+        application_name = application_name[0:48]
+        vca_index = int(vnf_member_index) - 1
+        application_name += '-' + chr(97 + vca_index // 26) + chr(97 + vca_index % 26)
+
+        return application_name
+
+    def ExecutePrimitiveGetOutput(self, application, primitive, params={}, timeout=600):
+        """Execute a single primitive and return its output.
+
+        This is a blocking method that will execute a single primitive and wait
+        for its completion before returning its output.
+
+        :param application str: The application name provided by `GetApplicationName`.
+        :param primitive str: The name of the primitive to execute.
+        :param params dict: A dict of parameters.
+        :param timeout int: A timeout, in seconds, to wait for the primitive to finish. Defaults to 600 seconds.
+        """
+        uuid = self.ExecutePrimitive(application, primitive, params)
+
+        status = None
+        output = None
+
+        starttime = time.time()
+        while time.time() < starttime + timeout:
+            status = self.GetPrimitiveStatus(uuid)
+            if status in ['completed', 'failed']:
+                break
+            time.sleep(10)
+
+        # When the primitive is done, get the output
+        if status in ['completed', 'failed']:
+            output = self.GetPrimitiveOutput(uuid)
+
+        return output
+
+    def ExecutePrimitive(self, application, primitive, params={}):
+        """Execute a primitive.
+
+        This is a non-blocking method to execute a primitive. It will return
+        the UUID of the queued primitive execution, which you can use
+        for subsequent calls to `GetPrimitiveStatus` and `GetPrimitiveOutput`.
+
+        :param application string: The name of the application
+        :param primitive string: The name of the Primitive.
+        :param params dict: A dict of parameters.
+
+        :returns uuid string: The UUID of the executed Primitive
+        """
+        uuid = None
+
+        if not self.client:
+            self.login()
+
+        model = self.loop.run_until_complete(
+            self.client.get_model(self.model)
+        )
+
+        # Get the application
+        if application in model.applications:
+            app = model.applications[application]
+
+            # Execute the primitive
+            unit = app.units[0]
+            if unit:
+                action = self.loop.run_until_complete(
+                    unit.run_action(primitive, **params)
+                )
+                uuid = action.id
+                print("Executing action: {}".format(uuid))
+            self.loop.run_until_complete(
+                model.disconnect()
+            )
+        else:
+            # Invalid mapping: application not found. Raise exception
+            raise Exception("Application not found: {}".format(application))
+
+        return uuid
+
+    def GetPrimitiveStatus(self, uuid):
+        """Get the status of a Primitive execution.
+
+        This will return one of the following strings:
+        - pending
+        - running
+        - completed
+        - failed
+
+        :param uuid string: The UUID of the executed Primitive.
+        :returns: The status of the executed Primitive
+        """
+        status = None
+
+        if not self.client:
+            self.login()
+
+        model = self.loop.run_until_complete(
+            self.client.get_model(self.model)
+        )
+
+        status = self.loop.run_until_complete(
+            model.get_action_status(uuid)
+        )
+
+        self.loop.run_until_complete(
+            model.disconnect()
+        )
+
+        return status[uuid]
+
+    def GetPrimitiveOutput(self, uuid):
+        """Get the output of a completed Primitive execution.
+
+        :param uuid string: The UUID of the executed Primitive.
+        :returns: The output of the execution, or None if it's still running.
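+
+        A sketch of the intended flow (``ns`` is a NetworkService instance;
+        the application and primitive names are illustrative)::
+
+            uuid = ns.ExecutePrimitive("app-name", "some-primitive", {})
+            status = ns.GetPrimitiveStatus(uuid)
+            if status in ['completed', 'failed']:
+                output = ns.GetPrimitiveOutput(uuid)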
+ """ + result = None + if not self.client: + self.login() + + model = self.loop.run_until_complete( + self.client.get_model(self.model) + ) + + result = self.loop.run_until_complete( + model.get_action_output(uuid) + ) + + self.loop.run_until_complete( + model.disconnect() + ) + + return result diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charms/osm/proxy_cluster.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charms/osm/proxy_cluster.py new file mode 100644 index 0000000000000000000000000000000000000000..f323a3af88f3011ef9fde794cdfe343be8665abb --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charms/osm/proxy_cluster.py @@ -0,0 +1,59 @@ +import socket + +from ops.framework import Object, StoredState + + +class ProxyCluster(Object): + + state = StoredState() + + def __init__(self, charm, relation_name): + super().__init__(charm, relation_name) + self._relation_name = relation_name + self._relation = self.framework.model.get_relation(self._relation_name) + + self.framework.observe(charm.on.ssh_keys_initialized, self.on_ssh_keys_initialized) + + self.state.set_default(ssh_public_key=None) + self.state.set_default(ssh_private_key=None) + + def on_ssh_keys_initialized(self, event): + if not self.framework.model.unit.is_leader(): + raise RuntimeError("The initial unit of a cluster must also be a leader.") + + self.state.ssh_public_key = event.ssh_public_key + self.state.ssh_private_key = event.ssh_private_key + if not self.is_joined: + event.defer() + return + + self._relation.data[self.model.app][ + "ssh_public_key" + ] = self.state.ssh_public_key + self._relation.data[self.model.app][ + "ssh_private_key" + ] = self.state.ssh_private_key + + @property + def is_joined(self): + return self._relation is not None + + @property + def ssh_public_key(self): + if self.is_joined: + return self._relation.data[self.model.app].get("ssh_public_key") + + @property + def ssh_private_key(self): + if self.is_joined: + return self._relation.data[self.model.app].get("ssh_private_key") + + @property + def is_cluster_initialized(self): + return ( + True + if self.is_joined + and self._relation.data[self.model.app].get("ssh_public_key") + and self._relation.data[self.model.app].get("ssh_private_key") + else False + ) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charms/osm/sshproxy.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charms/osm/sshproxy.py new file mode 100644 index 0000000000000000000000000000000000000000..e2c311e5be8515a7fe19c05dfed9f042ded0576b --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/charms/osm/sshproxy.py @@ -0,0 +1,375 @@ +"""Module to help with executing commands over SSH.""" +## +# Copyright 2016 Canonical Ltd. +# All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. 
+## + +# from charmhelpers.core import unitdata +# from charmhelpers.core.hookenv import log + +import io +import ipaddress +import subprocess +import os +import socket +import shlex +import traceback +import sys + +from subprocess import ( + check_call, + Popen, + CalledProcessError, + PIPE, +) + +from ops.charm import CharmBase, CharmEvents +from ops.framework import StoredState, EventBase, EventSource +from ops.main import main +from ops.model import ( + ActiveStatus, + BlockedStatus, + MaintenanceStatus, + WaitingStatus, + ModelError, +) +import os +import subprocess +from .proxy_cluster import ProxyCluster + +import logging + + +logger = logging.getLogger(__name__) + +class SSHKeysInitialized(EventBase): + def __init__(self, handle, ssh_public_key, ssh_private_key): + super().__init__(handle) + self.ssh_public_key = ssh_public_key + self.ssh_private_key = ssh_private_key + + def snapshot(self): + return { + "ssh_public_key": self.ssh_public_key, + "ssh_private_key": self.ssh_private_key, + } + + def restore(self, snapshot): + self.ssh_public_key = snapshot["ssh_public_key"] + self.ssh_private_key = snapshot["ssh_private_key"] + + +class ProxyClusterEvents(CharmEvents): + ssh_keys_initialized = EventSource(SSHKeysInitialized) + + +class SSHProxyCharm(CharmBase): + + state = StoredState() + on = ProxyClusterEvents() + + def __init__(self, framework, key): + super().__init__(framework, key) + + self.peers = ProxyCluster(self, "proxypeer") + + # SSH Proxy actions (primitives) + self.framework.observe(self.on.generate_ssh_key_action, self.on_generate_ssh_key_action) + self.framework.observe(self.on.get_ssh_public_key_action, self.on_get_ssh_public_key_action) + self.framework.observe(self.on.run_action, self.on_run_action) + self.framework.observe(self.on.verify_ssh_credentials_action, self.on_verify_ssh_credentials_action) + + self.framework.observe(self.on.proxypeer_relation_changed, self.on_proxypeer_relation_changed) + + def get_ssh_proxy(self): + """Get the SSHProxy instance""" + proxy = SSHProxy( + hostname=self.model.config["ssh-hostname"], + username=self.model.config["ssh-username"], + password=self.model.config["ssh-password"], + ) + return proxy + + def on_proxypeer_relation_changed(self, event): + if self.peers.is_cluster_initialized and not SSHProxy.has_ssh_key(): + pubkey = self.peers.ssh_public_key + privkey = self.peers.ssh_private_key + SSHProxy.write_ssh_keys(public=pubkey, private=privkey) + self.verify_credentials() + else: + event.defer() + + def on_config_changed(self, event): + """Handle changes in configuration""" + self.verify_credentials() + + def on_install(self, event): + SSHProxy.install() + + def on_start(self, event): + """Called when the charm is being installed""" + if not self.peers.is_joined: + event.defer() + return + + unit = self.model.unit + + if not SSHProxy.has_ssh_key(): + unit.status = MaintenanceStatus("Generating SSH keys...") + pubkey = None + privkey = None + if self.model.unit.is_leader(): + if self.peers.is_cluster_initialized: + SSHProxy.write_ssh_keys( + public=self.peers.ssh_public_key, + private=self.peers.ssh_private_key, + ) + else: + SSHProxy.generate_ssh_key() + self.on.ssh_keys_initialized.emit( + SSHProxy.get_ssh_public_key(), SSHProxy.get_ssh_private_key() + ) + self.verify_credentials() + + def verify_credentials(self): + unit = self.model.unit + + # Unit should go into a waiting state until verify_ssh_credentials is successful + unit.status = WaitingStatus("Waiting for SSH credentials") + proxy = self.get_ssh_proxy() + verified, 
_ = proxy.verify_credentials()
+        if verified:
+            unit.status = ActiveStatus()
+        else:
+            unit.status = BlockedStatus("Invalid SSH credentials.")
+        return verified
+
+    #####################
+    # SSH Proxy methods #
+    #####################
+    def on_generate_ssh_key_action(self, event):
+        """Generate a new SSH keypair for this unit."""
+        if self.model.unit.is_leader():
+            if not SSHProxy.generate_ssh_key():
+                event.fail("Unable to generate ssh key")
+        else:
+            event.fail("Unit is not leader")
+            return
+
+    def on_get_ssh_public_key_action(self, event):
+        """Get the SSH public key for this unit."""
+        if self.model.unit.is_leader():
+            pubkey = SSHProxy.get_ssh_public_key()
+            event.set_results({"pubkey": pubkey})
+        else:
+            event.fail("Unit is not leader")
+            return
+
+    def on_run_action(self, event):
+        """Run an arbitrary command on the remote host."""
+        if self.model.unit.is_leader():
+            cmd = event.params["command"]
+            proxy = self.get_ssh_proxy()
+            stdout, stderr = proxy.run(cmd)
+            event.set_results({"output": stdout})
+            if len(stderr):
+                event.fail(stderr)
+        else:
+            event.fail("Unit is not leader")
+            return
+
+    def on_verify_ssh_credentials_action(self, event):
+        """Verify the SSH credentials for this unit."""
+        unit = self.model.unit
+        if unit.is_leader():
+            proxy = self.get_ssh_proxy()
+            verified, stderr = proxy.verify_credentials()
+            if verified:
+                event.set_results({"verified": True})
+                unit.status = ActiveStatus()
+            else:
+                event.set_results({"verified": False, "stderr": stderr})
+                event.fail("Not verified")
+                unit.status = BlockedStatus("Invalid SSH credentials.")
+        else:
+            event.fail("Unit is not leader")
+            return
+
+
+class LeadershipError(ModelError):
+    def __init__(self):
+        super().__init__("not leader")
+
+
+class SSHProxy:
+    private_key_path = "/root/.ssh/id_sshproxy"
+    public_key_path = "/root/.ssh/id_sshproxy.pub"
+    key_type = "rsa"
+    key_bits = 4096
+
+    def __init__(self, hostname: str, username: str, password: str = ""):
+        self.hostname = hostname
+        self.username = username
+        self.password = password
+
+    @staticmethod
+    def install():
+        check_call("apt update && apt install -y openssh-client sshpass", shell=True)
+
+    @staticmethod
+    def generate_ssh_key():
+        """Generate a 4096-bit rsa keypair."""
+        if not os.path.exists(SSHProxy.private_key_path):
+            cmd = "ssh-keygen -t {} -b {} -N '' -f {}".format(
+                SSHProxy.key_type, SSHProxy.key_bits, SSHProxy.private_key_path,
+            )
+
+            try:
+                check_call(cmd, shell=True)
+            except CalledProcessError:
+                return False
+
+        return True
+
+    @staticmethod
+    def write_ssh_keys(public, private):
+        """Write a 4096-bit rsa keypair."""
+        with open(SSHProxy.public_key_path, "w") as f:
+            f.write(public)
+        with open(SSHProxy.private_key_path, "w") as f:
+            f.write(private)
+
+    @staticmethod
+    def get_ssh_public_key():
+        publickey = ""
+        if os.path.exists(SSHProxy.public_key_path):
+            with open(SSHProxy.public_key_path, "r") as f:
+                publickey = f.read()
+        return publickey
+
+    @staticmethod
+    def get_ssh_private_key():
+        privatekey = ""
+        if os.path.exists(SSHProxy.private_key_path):
+            with open(SSHProxy.private_key_path, "r") as f:
+                privatekey = f.read()
+        return privatekey
+
+    @staticmethod
+    def has_ssh_key():
+        return os.path.exists(SSHProxy.private_key_path)
+
+    def run(self, cmd: str) -> (str, str):
+        """Run a command remotely via SSH.
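+
+        A usage sketch (the hostname and credentials are illustrative)::
+
+            proxy = SSHProxy("10.0.0.10", "ubuntu", password="secret")
+            stdout, stderr = proxy.run("hostname")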
+ + Note: The previous behavior was to run the command locally if SSH wasn't + configured, but that can lead to cases where execution succeeds when you'd + expect it not to. + """ + if isinstance(cmd, str): + cmd = shlex.split(cmd) + + host = self._get_hostname() + user = self.username + passwd = self.password + key = self.private_key_path + + # Make sure we have everything we need to connect + if host and user: + return self.ssh(cmd) + + raise Exception("Invalid SSH credentials.") + + def scp(self, source_file, destination_file): + """Execute an scp command. Requires a fully qualified source and + destination. + + :param str source_file: Path to the source file + :param str destination_file: Path to the destination file + :raises: :class:`CalledProcessError` if the command fails + """ + cmd = [ + "sshpass", + "-p", + self.password, + "scp", + "-i", + os.path.expanduser(self.private_key_path), + "-o", + "StrictHostKeyChecking=no", + "-q", + "-B", + ] + destination = "{}@{}:{}".format(self.username, self.hostname, destination_file) + cmd.extend([source_file, destination]) + subprocess.run(cmd, check=True) + + def ssh(self, command): + """Run a command remotely via SSH. + + :param list(str) command: The command to execute + :return: tuple: The stdout and stderr of the command execution + :raises: :class:`CalledProcessError` if the command fails + """ + + destination = "{}@{}".format(self.username, self.hostname) + cmd = [ + "sshpass", + "-p", + self.password, + "ssh", + "-i", + os.path.expanduser(self.private_key_path), + "-o", + "StrictHostKeyChecking=no", + "-q", + destination, + ] + cmd.extend(command) + output = subprocess.run(cmd, check=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE) + return (output.stdout.decode("utf-8").strip(), output.stderr.decode("utf-8").strip()) + + def verify_credentials(self): + """Verify the SSH credentials. + + :return (bool, str): Verified, Stderr + """ + verified = False + try: + (stdout, stderr) = self.run("hostname") + verified = True + except CalledProcessError as e: + stderr = "Command failed: {} ({})".format(" ".join(e.cmd), str(e.output)) + except (TimeoutError, socket.timeout): + stderr = "Timeout attempting to reach {}".format(self._get_hostname()) + except Exception as error: + tb = traceback.format_exc() + stderr = "Unhandled exception: {}".format(tb) + return verified, stderr + + ################### + # Private methods # + ################### + def _get_hostname(self): + """Get the hostname for the ssh target. + + HACK: This function was added to work around an issue where the + ssh-hostname was passed in the format of a.b.c.d;a.b.c.d, where the first + is the floating ip, and the second the non-floating ip, for an Openstack + instance. + """ + return self.hostname.split(";")[0] diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/ops/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/ops/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..f17b2969db298b21bc47bbe1d3614ccff93e9c6e --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/ops/__init__.py @@ -0,0 +1,20 @@ +# Copyright 2020 Canonical Ltd. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""The Operator Framework."""
+
+from .version import version as __version__  # noqa: F401 (imported but unused)
+
+# Import only the bare minimum here, to break the circular import between modules
+from . import charm  # noqa: F401 (imported but unused)
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/ops/charm.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/ops/charm.py
new file mode 100755
index 0000000000000000000000000000000000000000..d898de859fc444814bc19a7f8f0caaaec6f7e5f4
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/ops/charm.py
@@ -0,0 +1,575 @@
+# Copyright 2019-2020 Canonical Ltd.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import enum
+import os
+import pathlib
+import typing
+
+import yaml
+
+from ops.framework import Object, EventSource, EventBase, Framework, ObjectEvents
+from ops import model
+
+
+def _loadYaml(source):
+    if yaml.__with_libyaml__:
+        return yaml.load(source, Loader=yaml.CSafeLoader)
+    return yaml.load(source, Loader=yaml.SafeLoader)
+
+
+class HookEvent(EventBase):
+    """A base class for events that trigger because of a Juju hook firing."""
+
+
+class ActionEvent(EventBase):
+    """A base class for events that trigger when a user asks for an Action to be run.
+
+    To read the parameters for the action, see the instance variable `params`.
+    To respond with the result of the action, call `set_results`. To add progress
+    messages that are visible as the action is progressing use `log`.
+
+    :ivar params: The parameters passed to the action (read by action-get)
+    """
+
+    def defer(self):
+        """Action events are not deferrable like other events.
+
+        This is because an action runs synchronously and the user is waiting for the result.
+        """
+        raise RuntimeError('cannot defer action events')
+
+    def restore(self, snapshot: dict) -> None:
+        """Used by the operator framework to restore the action.
+
+        Not meant to be called directly by Charm code.
+        """
+        env_action_name = os.environ.get('JUJU_ACTION_NAME')
+        event_action_name = self.handle.kind[:-len('_action')].replace('_', '-')
+        if event_action_name != env_action_name:
+            # This could only happen if the dev manually emits the action, or from a bug.
+            raise RuntimeError('action event kind does not match current action')
+        # Params are loaded at restore rather than __init__ because
+        # the model is not available in __init__.
+        self.params = self.framework.model._backend.action_get()
+
+    def set_results(self, results: typing.Mapping) -> None:
+        """Report the result of the action.
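+
+        For example (sketch; the key and value are illustrative)::
+
+            event.set_results({"output": "ok"})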
+ + Args: + results: The result of the action as a Dict + """ + self.framework.model._backend.action_set(results) + + def log(self, message: str) -> None: + """Send a message that a user will see while the action is running. + + Args: + message: The message for the user. + """ + self.framework.model._backend.action_log(message) + + def fail(self, message: str = '') -> None: + """Report that this action has failed. + + Args: + message: Optional message to record why it has failed. + """ + self.framework.model._backend.action_fail(message) + + +class InstallEvent(HookEvent): + """Represents the `install` hook from Juju.""" + + +class StartEvent(HookEvent): + """Represents the `start` hook from Juju.""" + + +class StopEvent(HookEvent): + """Represents the `stop` hook from Juju.""" + + +class RemoveEvent(HookEvent): + """Represents the `remove` hook from Juju. """ + + +class ConfigChangedEvent(HookEvent): + """Represents the `config-changed` hook from Juju.""" + + +class UpdateStatusEvent(HookEvent): + """Represents the `update-status` hook from Juju.""" + + +class UpgradeCharmEvent(HookEvent): + """Represents the `upgrade-charm` hook from Juju. + + This will be triggered when a user has run `juju upgrade-charm`. It is run after Juju + has unpacked the upgraded charm code, and so this event will be handled with new code. + """ + + +class PreSeriesUpgradeEvent(HookEvent): + """Represents the `pre-series-upgrade` hook from Juju. + + This happens when a user has run `juju upgrade-series MACHINE prepare` and + will fire for each unit that is running on the machine, telling them that + the user is preparing to upgrade the Machine's series (eg trusty->bionic). + The charm should take actions to prepare for the upgrade (a database charm + would want to write out a version-independent dump of the database, so that + when a new version of the database is available in a new series, it can be + used.) + Once all units on a machine have run `pre-series-upgrade`, the user will + initiate the steps to actually upgrade the machine (eg `do-release-upgrade`). + When the upgrade has been completed, the :class:`PostSeriesUpgradeEvent` will fire. + """ + + +class PostSeriesUpgradeEvent(HookEvent): + """Represents the `post-series-upgrade` hook from Juju. + + This is run after the user has done a distribution upgrade (or rolled back + and kept the same series). It is called in response to + `juju upgrade-series MACHINE complete`. Charms are expected to do whatever + steps are necessary to reconfigure their applications for the new series. + """ + + +class LeaderElectedEvent(HookEvent): + """Represents the `leader-elected` hook from Juju. + + Juju will trigger this when a new lead unit is chosen for a given application. + This represents the leader of the charm information (not necessarily the primary + of a running application). The main utility is that charm authors can know + that only one unit will be a leader at any given time, so they can do + configuration, etc, that would otherwise require coordination between units. + (eg, selecting a password for a new relation) + """ + + +class LeaderSettingsChangedEvent(HookEvent): + """Represents the `leader-settings-changed` hook from Juju. + + Deprecated. This represents when a lead unit would call `leader-set` to inform + the other units of an application that they have new information to handle. + This has been deprecated in favor of using a Peer relation, and having the + leader set a value in the Application data bag for that peer relation. 
+ (see :class:`RelationChangedEvent`). + """ + + +class CollectMetricsEvent(HookEvent): + """Represents the `collect-metrics` hook from Juju. + + Note that events firing during a CollectMetricsEvent are currently + sandboxed in how they can interact with Juju. To report metrics + use :meth:`.add_metrics`. + """ + + def add_metrics(self, metrics: typing.Mapping, labels: typing.Mapping = None) -> None: + """Record metrics that have been gathered by the charm for this unit. + + Args: + metrics: A collection of {key: float} pairs that contains the + metrics that have been gathered + labels: {key:value} strings that can be applied to the + metrics that are being gathered + """ + self.framework.model._backend.add_metrics(metrics, labels) + + +class RelationEvent(HookEvent): + """A base class representing the various relation lifecycle events. + + Charmers should not be creating RelationEvents directly. The events will be + generated by the framework from Juju related events. Users can observe them + from the various `CharmBase.on[relation_name].relation_*` events. + + Attributes: + relation: The Relation involved in this event + app: The remote application that has triggered this event + unit: The remote unit that has triggered this event. This may be None + if the relation event was triggered as an Application level event + """ + + def __init__(self, handle, relation, app=None, unit=None): + super().__init__(handle) + + if unit is not None and unit.app != app: + raise RuntimeError( + 'cannot create RelationEvent with application {} and unit {}'.format(app, unit)) + + self.relation = relation + self.app = app + self.unit = unit + + def snapshot(self) -> dict: + """Used by the framework to serialize the event to disk. + + Not meant to be called by Charm code. + """ + snapshot = { + 'relation_name': self.relation.name, + 'relation_id': self.relation.id, + } + if self.app: + snapshot['app_name'] = self.app.name + if self.unit: + snapshot['unit_name'] = self.unit.name + return snapshot + + def restore(self, snapshot: dict) -> None: + """Used by the framework to deserialize the event from disk. + + Not meant to be called by Charm code. + """ + self.relation = self.framework.model.get_relation( + snapshot['relation_name'], snapshot['relation_id']) + + app_name = snapshot.get('app_name') + if app_name: + self.app = self.framework.model.get_app(app_name) + else: + self.app = None + + unit_name = snapshot.get('unit_name') + if unit_name: + self.unit = self.framework.model.get_unit(unit_name) + else: + self.unit = None + + +class RelationCreatedEvent(RelationEvent): + """Represents the `relation-created` hook from Juju. + + This is triggered when a new relation to another app is added in Juju. This + can occur before units for those applications have started. All existing + relations should be established before start. + """ + + +class RelationJoinedEvent(RelationEvent): + """Represents the `relation-joined` hook from Juju. + + This is triggered whenever a new unit of a related application joins the relation. + (eg, a unit was added to an existing related app, or a new relation was established + with an application that already had units.) + """ + + +class RelationChangedEvent(RelationEvent): + """Represents the `relation-changed` hook from Juju. + + This is triggered whenever there is a change to the data bucket for a related + application or unit. Look at `event.relation.data[event.unit/app]` to see the + new information. 
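+
+    A sketch of a typical observer (the relation and key names are
+    illustrative)::
+
+        def _on_db_relation_changed(self, event):
+            host = event.relation.data[event.unit].get("host")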
+    """
+
+
+class RelationDepartedEvent(RelationEvent):
+    """Represents the `relation-departed` hook from Juju.
+
+    This is the inverse of the RelationJoinedEvent, representing when a unit
+    is leaving the relation (the unit is being removed, the app is being removed,
+    the relation is being removed). It is fired once for each unit that is
+    going away.
+    """
+
+
+class RelationBrokenEvent(RelationEvent):
+    """Represents the `relation-broken` hook from Juju.
+
+    If a relation is being removed (`juju remove-relation` or `juju remove-application`),
+    once all the units have been removed, RelationBrokenEvent will fire to signal
+    that the relationship has been fully terminated.
+    """
+
+
+class StorageEvent(HookEvent):
+    """Base class representing Storage related events."""
+
+
+class StorageAttachedEvent(StorageEvent):
+    """Represents the `storage-attached` hook from Juju.
+
+    Called when new storage is available for the charm to use.
+    """
+
+
+class StorageDetachingEvent(StorageEvent):
+    """Represents the `storage-detaching` hook from Juju.
+
+    Called when storage that a charm has been using is going away.
+    """
+
+
+class CharmEvents(ObjectEvents):
+    """The events that are generated by Juju in response to the lifecycle of an application."""
+
+    install = EventSource(InstallEvent)
+    start = EventSource(StartEvent)
+    stop = EventSource(StopEvent)
+    remove = EventSource(RemoveEvent)
+    update_status = EventSource(UpdateStatusEvent)
+    config_changed = EventSource(ConfigChangedEvent)
+    upgrade_charm = EventSource(UpgradeCharmEvent)
+    pre_series_upgrade = EventSource(PreSeriesUpgradeEvent)
+    post_series_upgrade = EventSource(PostSeriesUpgradeEvent)
+    leader_elected = EventSource(LeaderElectedEvent)
+    leader_settings_changed = EventSource(LeaderSettingsChangedEvent)
+    collect_metrics = EventSource(CollectMetricsEvent)
+
+
+class CharmBase(Object):
+    """Base class that represents the Charm overall.
+
+    Usually this initialization is done by ops.main.main() rather than Charm authors
+    directly instantiating a Charm.
+
+    Args:
+        framework: The framework responsible for managing the Model and events for this
+            Charm.
+        key: Ignored; will be removed after the deprecation period of the signature change.
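+
+    A minimal sketch of a concrete charm (the subclass and handler names are
+    illustrative, not part of this library)::
+
+        class MyCharm(CharmBase):
+            def __init__(self, framework, key=None):
+                super().__init__(framework, key)
+                self.framework.observe(self.on.start, self._on_start)
+
+            def _on_start(self, event):
+                pass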
+ """ + + on = CharmEvents() + + def __init__(self, framework: Framework, key: typing.Optional = None): + super().__init__(framework, None) + + for relation_name in self.framework.meta.relations: + relation_name = relation_name.replace('-', '_') + self.on.define_event(relation_name + '_relation_created', RelationCreatedEvent) + self.on.define_event(relation_name + '_relation_joined', RelationJoinedEvent) + self.on.define_event(relation_name + '_relation_changed', RelationChangedEvent) + self.on.define_event(relation_name + '_relation_departed', RelationDepartedEvent) + self.on.define_event(relation_name + '_relation_broken', RelationBrokenEvent) + + for storage_name in self.framework.meta.storages: + storage_name = storage_name.replace('-', '_') + self.on.define_event(storage_name + '_storage_attached', StorageAttachedEvent) + self.on.define_event(storage_name + '_storage_detaching', StorageDetachingEvent) + + for action_name in self.framework.meta.actions: + action_name = action_name.replace('-', '_') + self.on.define_event(action_name + '_action', ActionEvent) + + @property + def app(self) -> model.Application: + """Application that this unit is part of.""" + return self.framework.model.app + + @property + def unit(self) -> model.Unit: + """Unit that this execution is responsible for.""" + return self.framework.model.unit + + @property + def meta(self) -> 'CharmMeta': + """CharmMeta of this charm. + """ + return self.framework.meta + + @property + def charm_dir(self) -> pathlib.Path: + """Root directory of the Charm as it is running. + """ + return self.framework.charm_dir + + +class CharmMeta: + """Object containing the metadata for the charm. + + This is read from metadata.yaml and/or actions.yaml. Generally charms will + define this information, rather than reading it at runtime. This class is + mostly for the framework to understand what the charm has defined. + + The maintainers, tags, terms, series, and extra_bindings attributes are all + lists of strings. The requires, provides, peers, relations, storage, + resources, and payloads attributes are all mappings of names to instances + of the respective RelationMeta, StorageMeta, ResourceMeta, or PayloadMeta. + + The relations attribute is a convenience accessor which includes all of the + requires, provides, and peers RelationMeta items. If needed, the role of + the relation definition can be obtained from its role attribute. + + Attributes: + name: The name of this charm + summary: Short description of what this charm does + description: Long description for this charm + maintainers: A list of strings of the email addresses of the maintainers + of this charm. + tags: Charm store tag metadata for categories associated with this charm. + terms: Charm store terms that should be agreed to before this charm can + be deployed. (Used for things like licensing issues.) + series: The list of supported OS series that this charm can support. + The first entry in the list is the default series that will be + used by deploy if no other series is requested by the user. + subordinate: True/False whether this charm is intended to be used as a + subordinate charm. + min_juju_version: If supplied, indicates this charm needs features that + are not available in older versions of Juju. + requires: A dict of {name: :class:`RelationMeta` } for each 'requires' relation. + provides: A dict of {name: :class:`RelationMeta` } for each 'provides' relation. + peers: A dict of {name: :class:`RelationMeta` } for each 'peer' relation. 
+ relations: A dict containing all :class:`RelationMeta` attributes (merged from other + sections) + storages: A dict of {name: :class:`StorageMeta`} for each defined storage. + resources: A dict of {name: :class:`ResourceMeta`} for each defined resource. + payloads: A dict of {name: :class:`PayloadMeta`} for each defined payload. + extra_bindings: A dict of additional named bindings that a charm can use + for network configuration. + actions: A dict of {name: :class:`ActionMeta`} for actions that the charm has defined. + Args: + raw: a mapping containing the contents of metadata.yaml + actions_raw: a mapping containing the contents of actions.yaml + """ + + def __init__(self, raw: dict = {}, actions_raw: dict = {}): + self.name = raw.get('name', '') + self.summary = raw.get('summary', '') + self.description = raw.get('description', '') + self.maintainers = [] + if 'maintainer' in raw: + self.maintainers.append(raw['maintainer']) + if 'maintainers' in raw: + self.maintainers.extend(raw['maintainers']) + self.tags = raw.get('tags', []) + self.terms = raw.get('terms', []) + self.series = raw.get('series', []) + self.subordinate = raw.get('subordinate', False) + self.min_juju_version = raw.get('min-juju-version') + self.requires = {name: RelationMeta(RelationRole.requires, name, rel) + for name, rel in raw.get('requires', {}).items()} + self.provides = {name: RelationMeta(RelationRole.provides, name, rel) + for name, rel in raw.get('provides', {}).items()} + self.peers = {name: RelationMeta(RelationRole.peer, name, rel) + for name, rel in raw.get('peers', {}).items()} + self.relations = {} + self.relations.update(self.requires) + self.relations.update(self.provides) + self.relations.update(self.peers) + self.storages = {name: StorageMeta(name, storage) + for name, storage in raw.get('storage', {}).items()} + self.resources = {name: ResourceMeta(name, res) + for name, res in raw.get('resources', {}).items()} + self.payloads = {name: PayloadMeta(name, payload) + for name, payload in raw.get('payloads', {}).items()} + self.extra_bindings = raw.get('extra-bindings', {}) + self.actions = {name: ActionMeta(name, action) for name, action in actions_raw.items()} + + @classmethod + def from_yaml( + cls, metadata: typing.Union[str, typing.TextIO], + actions: typing.Optional[typing.Union[str, typing.TextIO]] = None): + """Instantiate a CharmMeta from a YAML description of metadata.yaml. + + Args: + metadata: A YAML description of charm metadata (name, relations, etc.) + This can be a simple string, or a file-like object. (passed to `yaml.safe_load`). + actions: YAML description of Actions for this charm (eg actions.yaml) + """ + meta = _loadYaml(metadata) + raw_actions = {} + if actions is not None: + raw_actions = _loadYaml(actions) + return cls(meta, raw_actions) + + +class RelationRole(enum.Enum): + peer = 'peer' + requires = 'requires' + provides = 'provides' + + def is_peer(self) -> bool: + """Return whether the current role is peer. + + A convenience to avoid having to import charm. + """ + return self is RelationRole.peer + + +class RelationMeta: + """Object containing metadata about a relation definition. + + Should not be constructed directly by Charm code. Is gotten from one of + :attr:`CharmMeta.peers`, :attr:`CharmMeta.requires`, :attr:`CharmMeta.provides`, + or :attr:`CharmMeta.relations`. + + Attributes: + role: This is one of peer/requires/provides + relation_name: Name of this relation from metadata.yaml + interface_name: Optional definition of the interface protocol. 
+        scope: "global" or "container" scope based on how the relation should be used.
+    """
+
+    def __init__(self, role: RelationRole, relation_name: str, raw: dict):
+        if not isinstance(role, RelationRole):
+            raise TypeError("role should be a Role, not {!r}".format(role))
+        self.role = role
+        self.relation_name = relation_name
+        self.interface_name = raw['interface']
+        self.scope = raw.get('scope')
+
+
+class StorageMeta:
+    """Object containing metadata about a storage definition."""
+
+    def __init__(self, name, raw):
+        self.storage_name = name
+        self.type = raw['type']
+        self.description = raw.get('description', '')
+        self.shared = raw.get('shared', False)
+        self.read_only = raw.get('read-only', False)
+        self.minimum_size = raw.get('minimum-size')
+        self.location = raw.get('location')
+        self.multiple_range = None
+        if 'multiple' in raw:
+            range = raw['multiple']['range']
+            if '-' not in range:
+                self.multiple_range = (int(range), int(range))
+            else:
+                range = range.split('-')
+                self.multiple_range = (int(range[0]), int(range[1]) if range[1] else None)
+
+
+class ResourceMeta:
+    """Object containing metadata about a resource definition."""
+
+    def __init__(self, name, raw):
+        self.resource_name = name
+        self.type = raw['type']
+        self.filename = raw.get('filename', None)
+        self.description = raw.get('description', '')
+
+
+class PayloadMeta:
+    """Object containing metadata about a payload definition."""
+
+    def __init__(self, name, raw):
+        self.payload_name = name
+        self.type = raw['type']
+
+
+class ActionMeta:
+    """Object containing metadata about an action's definition."""
+
+    def __init__(self, name, raw=None):
+        raw = raw or {}
+        self.name = name
+        self.title = raw.get('title', '')
+        self.description = raw.get('description', '')
+        self.parameters = raw.get('params', {})  # {<parameter name>: <parameter spec>}
+        self.required = raw.get('required', [])  # [<parameter name>, ...]
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/ops/framework.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/ops/framework.py
new file mode 100755
index 0000000000000000000000000000000000000000..b7c4749ff2b5bfb4f354bf1a8d4cd6ed64cf0da5
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/ops/framework.py
@@ -0,0 +1,1067 @@
+# Copyright 2020 Canonical Ltd.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import collections
+import collections.abc
+import inspect
+import keyword
+import logging
+import marshal
+import os
+import pathlib
+import pdb
+import re
+import sys
+import types
+import weakref
+
+from ops import charm
+from ops.storage import (
+    NoSnapshotError,
+    SQLiteStorage,
+)
+
+logger = logging.getLogger(__name__)
+
+
+class Handle:
+    """Handle defines a name for an object in the form of a hierarchical path.
+
+    The provided parent is the object (or that object's handle) that this handle
+    sits under, or None if the object identified by this handle stands by itself
+    as the root of its own hierarchy.
+ + The handle kind is a string that defines a namespace so objects with the + same parent and kind will have unique keys. + + The handle key is a string uniquely identifying the object. No other objects + under the same parent and kind may have the same key. + """ + + def __init__(self, parent, kind, key): + if parent and not isinstance(parent, Handle): + parent = parent.handle + self._parent = parent + self._kind = kind + self._key = key + if parent: + if key: + self._path = "{}/{}[{}]".format(parent, kind, key) + else: + self._path = "{}/{}".format(parent, kind) + else: + if key: + self._path = "{}[{}]".format(kind, key) + else: + self._path = "{}".format(kind) + + def nest(self, kind, key): + return Handle(self, kind, key) + + def __hash__(self): + return hash((self.parent, self.kind, self.key)) + + def __eq__(self, other): + return (self.parent, self.kind, self.key) == (other.parent, other.kind, other.key) + + def __str__(self): + return self.path + + @property + def parent(self): + return self._parent + + @property + def kind(self): + return self._kind + + @property + def key(self): + return self._key + + @property + def path(self): + return self._path + + @classmethod + def from_path(cls, path): + handle = None + for pair in path.split("/"): + pair = pair.split("[") + good = False + if len(pair) == 1: + kind, key = pair[0], None + good = True + elif len(pair) == 2: + kind, key = pair + if key and key[-1] == ']': + key = key[:-1] + good = True + if not good: + raise RuntimeError("attempted to restore invalid handle path {}".format(path)) + handle = Handle(handle, kind, key) + return handle + + +class EventBase: + + def __init__(self, handle): + self.handle = handle + self.deferred = False + + def defer(self): + self.deferred = True + + def snapshot(self): + """Return the snapshot data that should be persisted. + + Subclasses must override to save any custom state. + """ + return None + + def restore(self, snapshot): + """Restore the value state from the given snapshot. + + Subclasses must override to restore their custom state. + """ + self.deferred = False + + +class EventSource: + """EventSource wraps an event type with a descriptor to facilitate observing and emitting. + + It is generally used as: + + class SomethingHappened(EventBase): + pass + + class SomeObject(Object): + something_happened = EventSource(SomethingHappened) + + With that, instances of that type will offer the someobj.something_happened + attribute which is a BoundEvent and may be used to emit and observe the event. + """ + + def __init__(self, event_type): + if not isinstance(event_type, type) or not issubclass(event_type, EventBase): + raise RuntimeError( + 'Event requires a subclass of EventBase as an argument, got {}'.format(event_type)) + self.event_type = event_type + self.event_kind = None + self.emitter_type = None + + def _set_name(self, emitter_type, event_kind): + if self.event_kind is not None: + raise RuntimeError( + 'EventSource({}) reused as {}.{} and {}.{}'.format( + self.event_type.__name__, + self.emitter_type.__name__, + self.event_kind, + emitter_type.__name__, + event_kind, + )) + self.event_kind = event_kind + self.emitter_type = emitter_type + + def __get__(self, emitter, emitter_type=None): + if emitter is None: + return self + # Framework might not be available if accessed as CharmClass.on.event + # rather than charm_instance.on.event, but in that case it couldn't be + # emitted anyway, so there's no point to registering it. 
+        framework = getattr(emitter, 'framework', None)
+        if framework is not None:
+            framework.register_type(self.event_type, emitter, self.event_kind)
+        return BoundEvent(emitter, self.event_type, self.event_kind)
+
+
+class BoundEvent:
+
+    def __repr__(self):
+        return '<BoundEvent {} bound to {}.{} at {}>'.format(
+            self.event_type.__name__,
+            type(self.emitter).__name__,
+            self.event_kind,
+            hex(id(self)),
+        )
+
+    def __init__(self, emitter, event_type, event_kind):
+        self.emitter = emitter
+        self.event_type = event_type
+        self.event_kind = event_kind
+
+    def emit(self, *args, **kwargs):
+        """Emit event to all registered observers.
+
+        The current storage state is committed before and after each observer is notified.
+        """
+        framework = self.emitter.framework
+        key = framework._next_event_key()
+        event = self.event_type(Handle(self.emitter, self.event_kind, key), *args, **kwargs)
+        framework._emit(event)
+
+
+class HandleKind:
+    """Helper descriptor to define the Object.handle_kind field.
+
+    The handle_kind for an object defaults to its type name, but it may
+    be explicitly overridden if desired.
+    """
+
+    def __get__(self, obj, obj_type):
+        kind = obj_type.__dict__.get("handle_kind")
+        if kind:
+            return kind
+        return obj_type.__name__
+
+
+class _Metaclass(type):
+    """Helper class to ensure proper instantiation of Object-derived classes.
+
+    This class currently has a single purpose: events derived from EventSource
+    that are class attributes of Object-derived classes need to be told what
+    their name is in that class. For example, in
+
+        class SomeObject(Object):
+            something_happened = EventSource(SomethingHappened)
+
+    the instance of EventSource needs to know it's called 'something_happened'.
+
+    Starting from python 3.6 we could use __set_name__ on EventSource for this,
+    but until then this (meta)class does the equivalent work.
+
+    TODO: when we drop support for 3.5 drop this class, and rename _set_name in
+    EventSource to __set_name__; everything should continue to work.
+
+    """
+
+    def __new__(typ, *a, **kw):
+        k = super().__new__(typ, *a, **kw)
+        # k is now the Object-derived class; loop over its class attributes
+        for n, v in vars(k).items():
+            # we could do duck typing here if we want to support
+            # non-EventSource-derived shenanigans. We don't.
+            if isinstance(v, EventSource):
+                # this is what 3.6+ does automatically for us:
+                v._set_name(k, n)
+        return k
+
+
+class Object(metaclass=_Metaclass):
+
+    handle_kind = HandleKind()
+
+    def __init__(self, parent, key):
+        kind = self.handle_kind
+        if isinstance(parent, Framework):
+            self.framework = parent
+            # Avoid Framework instances having a circular reference to themselves.
+            if self.framework is self:
+                self.framework = weakref.proxy(self.framework)
+            self.handle = Handle(None, kind, key)
+        else:
+            self.framework = parent.framework
+            self.handle = Handle(parent, kind, key)
+        self.framework._track(self)
+
+        # TODO Detect conflicting handles here.
+
+    @property
+    def model(self):
+        return self.framework.model
+
+
+class ObjectEvents(Object):
+    """Convenience type to allow defining .on attributes at class level."""
+
+    handle_kind = "on"
+
+    def __init__(self, parent=None, key=None):
+        if parent is not None:
+            super().__init__(parent, key)
+        else:
+            self._cache = weakref.WeakKeyDictionary()
+
+    def __get__(self, emitter, emitter_type):
+        if emitter is None:
+            return self
+        instance = self._cache.get(emitter)
+        if instance is None:
+            # Same type, different instance, more data.
Doing this unusual construct + # means people can subclass just this one class to have their own 'on'. + instance = self._cache[emitter] = type(self)(emitter) + return instance + + @classmethod + def define_event(cls, event_kind, event_type): + """Define an event on this type at runtime. + + cls: a type to define an event on. + + event_kind: an attribute name that will be used to access the + event. Must be a valid python identifier, not be a keyword + or an existing attribute. + + event_type: a type of the event to define. + + """ + prefix = 'unable to define an event with event_kind that ' + if not event_kind.isidentifier(): + raise RuntimeError(prefix + 'is not a valid python identifier: ' + event_kind) + elif keyword.iskeyword(event_kind): + raise RuntimeError(prefix + 'is a python keyword: ' + event_kind) + try: + getattr(cls, event_kind) + raise RuntimeError( + prefix + 'overlaps with an existing type {} attribute: {}'.format(cls, event_kind)) + except AttributeError: + pass + + event_descriptor = EventSource(event_type) + event_descriptor._set_name(cls, event_kind) + setattr(cls, event_kind, event_descriptor) + + def events(self): + """Return a mapping of event_kinds to bound_events for all available events. + """ + events_map = {} + # We have to iterate over the class rather than instance to allow for properties which + # might call this method (e.g., event views), leading to infinite recursion. + for attr_name, attr_value in inspect.getmembers(type(self)): + if isinstance(attr_value, EventSource): + # We actually care about the bound_event, however, since it + # provides the most info for users of this method. + event_kind = attr_name + bound_event = getattr(self, event_kind) + events_map[event_kind] = bound_event + return events_map + + def __getitem__(self, key): + return PrefixedEvents(self, key) + + +class PrefixedEvents: + + def __init__(self, emitter, key): + self._emitter = emitter + self._prefix = key.replace("-", "_") + '_' + + def __getattr__(self, name): + return getattr(self._emitter, self._prefix + name) + + +class PreCommitEvent(EventBase): + pass + + +class CommitEvent(EventBase): + pass + + +class FrameworkEvents(ObjectEvents): + pre_commit = EventSource(PreCommitEvent) + commit = EventSource(CommitEvent) + + +class NoTypeError(Exception): + + def __init__(self, handle_path): + self.handle_path = handle_path + + def __str__(self): + return "cannot restore {} since no class was registered for it".format(self.handle_path) + + +# the message to show to the user when a pdb breakpoint goes active +_BREAKPOINT_WELCOME_MESSAGE = """ +Starting pdb to debug charm operator. +Run `h` for help, `c` to continue, or `exit`/CTRL-d to abort. +Future breakpoints may interrupt execution again. +More details at https://discourse.jujucharms.com/t/debugging-charm-hooks + +""" + + +_event_regex = r'^(|.*/)on/[a-zA-Z_]+\[\d+\]$' + + +class Framework(Object): + + on = FrameworkEvents() + + # Override properties from Object so that we can set them in __init__. 
+    model = None
+    meta = None
+    charm_dir = None
+
+    def __init__(self, storage, charm_dir, meta, model):
+
+        super().__init__(self, None)
+
+        self.charm_dir = charm_dir
+        self.meta = meta
+        self.model = model
+        self._observers = []      # [(observer_path, method_name, parent_path, event_key)]
+        self._observer = weakref.WeakValueDictionary()  # {observer_path: observer}
+        self._objects = weakref.WeakValueDictionary()
+        self._type_registry = {}  # {(parent_path, kind): cls}
+        self._type_known = set()  # {cls}
+
+        if isinstance(storage, (str, pathlib.Path)):
+            logger.warning(
+                "deprecated: Framework now takes a Storage not a path")
+            storage = SQLiteStorage(storage)
+        self._storage = storage
+
+        # We can't use the higher-level StoredState because it relies on events.
+        self.register_type(StoredStateData, None, StoredStateData.handle_kind)
+        stored_handle = Handle(None, StoredStateData.handle_kind, '_stored')
+        try:
+            self._stored = self.load_snapshot(stored_handle)
+        except NoSnapshotError:
+            self._stored = StoredStateData(self, '_stored')
+            self._stored['event_count'] = 0
+
+        # Hook into builtin breakpoint, so if Python >= 3.7, devs will be able to just do
+        # breakpoint(); if Python < 3.7, this doesn't affect anything
+        sys.breakpointhook = self.breakpoint
+
+        # Flag to indicate that we already presented the welcome message in a debugger breakpoint
+        self._breakpoint_welcomed = False
+
+        # Parse the env var once; it may be consulted multiple times later
+        debug_at = os.environ.get('JUJU_DEBUG_AT')
+        self._juju_debug_at = debug_at.split(',') if debug_at else ()
+
+    def close(self):
+        self._storage.close()
+
+    def _track(self, obj):
+        """Track object and ensure it is the only object created using its handle path."""
+        if obj is self:
+            # Framework objects don't track themselves
+            return
+        if obj.handle.path in self.framework._objects:
+            raise RuntimeError(
+                'two objects claiming to be {} have been created'.format(obj.handle.path))
+        self._objects[obj.handle.path] = obj
+
+    def _forget(self, obj):
+        """Stop tracking the given object. See also _track."""
+        self._objects.pop(obj.handle.path, None)
+
+    def commit(self):
+        # Give a chance for objects to persist data they want to before a commit is made.
+        self.on.pre_commit.emit()
+        # Make sure snapshots are saved by instances of StoredStateData. Any possible state
+        # modifications in on_commit handlers of instances of other classes will not be persisted.
+        self.on.commit.emit()
+        # Save our event count after all events have been emitted.
+        self.save_snapshot(self._stored)
+        self._storage.commit()
+
+    def register_type(self, cls, parent, kind=None):
+        if parent and not isinstance(parent, Handle):
+            parent = parent.handle
+        if parent:
+            parent_path = parent.path
+        else:
+            parent_path = None
+        if not kind:
+            kind = cls.handle_kind
+        self._type_registry[(parent_path, kind)] = cls
+        self._type_known.add(cls)
+
+    def save_snapshot(self, value):
+        """Save a persistent snapshot of the provided value.
+
+        The provided value must implement the following interface:
+
+        value.handle = Handle(...)
+        value.snapshot() => {...}  # Simple builtin types only.
+        value.restore(snapshot)    # Restore custom state from prior snapshot. 
+ """ + if type(value) not in self._type_known: + raise RuntimeError( + 'cannot save {} values before registering that type'.format(type(value).__name__)) + data = value.snapshot() + + # Use marshal as a validator, enforcing the use of simple types, as we later the + # information is really pickled, which is too error prone for future evolution of the + # stored data (e.g. if the developer stores a custom object and later changes its + # class name; when unpickling the original class will not be there and event + # data loading will fail). + try: + marshal.dumps(data) + except ValueError: + msg = "unable to save the data for {}, it must contain only simple types: {!r}" + raise ValueError(msg.format(value.__class__.__name__, data)) + + self._storage.save_snapshot(value.handle.path, data) + + def load_snapshot(self, handle): + parent_path = None + if handle.parent: + parent_path = handle.parent.path + cls = self._type_registry.get((parent_path, handle.kind)) + if not cls: + raise NoTypeError(handle.path) + data = self._storage.load_snapshot(handle.path) + obj = cls.__new__(cls) + obj.framework = self + obj.handle = handle + obj.restore(data) + self._track(obj) + return obj + + def drop_snapshot(self, handle): + self._storage.drop_snapshot(handle.path) + + def observe(self, bound_event: BoundEvent, observer: types.MethodType): + """Register observer to be called when bound_event is emitted. + + The bound_event is generally provided as an attribute of the object that emits + the event, and is created in this style: + + class SomeObject: + something_happened = Event(SomethingHappened) + + That event may be observed as: + + framework.observe(someobj.something_happened, self._on_something_happened) + + Raises: + RuntimeError: if bound_event or observer are the wrong type. + """ + if not isinstance(bound_event, BoundEvent): + raise RuntimeError( + 'Framework.observe requires a BoundEvent as second parameter, got {}'.format( + bound_event)) + if not isinstance(observer, types.MethodType): + # help users of older versions of the framework + if isinstance(observer, charm.CharmBase): + raise TypeError( + 'observer methods must now be explicitly provided;' + ' please replace observe(self.on.{0}, self)' + ' with e.g. observe(self.on.{0}, self._on_{0})'.format( + bound_event.event_kind)) + raise RuntimeError( + 'Framework.observe requires a method as third parameter, got {}'.format(observer)) + + event_type = bound_event.event_type + event_kind = bound_event.event_kind + emitter = bound_event.emitter + + self.register_type(event_type, emitter, event_kind) + + if hasattr(emitter, "handle"): + emitter_path = emitter.handle.path + else: + raise RuntimeError( + 'event emitter {} must have a "handle" attribute'.format(type(emitter).__name__)) + + # Validate that the method has an acceptable call signature. + sig = inspect.signature(observer) + # Self isn't included in the params list, so the first arg will be the event. + extra_params = list(sig.parameters.values())[1:] + + method_name = observer.__name__ + observer = observer.__self__ + if not sig.parameters: + raise TypeError( + '{}.{} must accept event parameter'.format(type(observer).__name__, method_name)) + elif any(param.default is inspect.Parameter.empty for param in extra_params): + # Allow for additional optional params, since there's no reason to exclude them, but + # required params will break. 
+            raise TypeError(
+                '{}.{} has extra required parameter'.format(type(observer).__name__, method_name))
+
+        # TODO Prevent the exact same parameters from being registered more than once.
+
+        self._observer[observer.handle.path] = observer
+        self._observers.append((observer.handle.path, method_name, emitter_path, event_kind))
+
+    def _next_event_key(self):
+        """Return the next event key that should be used, incrementing the internal counter."""
+        # Increment the count first; this means the keys will start at 1, and 0
+        # means no events have been emitted.
+        self._stored['event_count'] += 1
+        return str(self._stored['event_count'])
+
+    def _emit(self, event):
+        """See BoundEvent.emit for the public way to call this."""
+
+        saved = False
+        event_path = event.handle.path
+        event_kind = event.handle.kind
+        parent_path = event.handle.parent.path
+        # TODO Track observers by (parent_path, event_kind) rather than as a list of
+        # all observers, to avoid a linear search through all observers for every event.
+        for observer_path, method_name, _parent_path, _event_kind in self._observers:
+            if _parent_path != parent_path:
+                continue
+            if _event_kind and _event_kind != event_kind:
+                continue
+            if not saved:
+                # Save the event for all known observers before the first notification
+                # takes place, so that either everyone interested sees it, or nobody does.
+                self.save_snapshot(event)
+                saved = True
+            # Again, only commit this after all notices are saved.
+            self._storage.save_notice(event_path, observer_path, method_name)
+        if saved:
+            self._reemit(event_path)
+
+    def reemit(self):
+        """Reemit previously deferred events to the observers that deferred them.
+
+        Only the specific observers that have previously deferred the event will be
+        notified again. Observers that asked to be notified about an event after it
+        was first emitted won't be notified, as that would mean potentially observing
+        events out of order.
+        """
+        self._reemit()
+
+    def _reemit(self, single_event_path=None):
+        last_event_path = None
+        deferred = True
+        for event_path, observer_path, method_name in self._storage.notices(single_event_path):
+            event_handle = Handle.from_path(event_path)
+
+            if last_event_path != event_path:
+                if not deferred and last_event_path is not None:
+                    self._storage.drop_snapshot(last_event_path)
+                last_event_path = event_path
+                deferred = False
+
+            try:
+                event = self.load_snapshot(event_handle)
+            except NoTypeError:
+                self._storage.drop_notice(event_path, observer_path, method_name)
+                continue
+
+            event.deferred = False
+            observer = self._observer.get(observer_path)
+            if observer:
+                custom_handler = getattr(observer, method_name, None)
+                if custom_handler:
+                    event_is_from_juju = isinstance(event, charm.HookEvent)
+                    event_is_action = isinstance(event, charm.ActionEvent)
+                    if (event_is_from_juju or event_is_action) and 'hook' in self._juju_debug_at:
+                        # Present the welcome message and run under PDB.
+                        self._show_debug_code_message()
+                        pdb.runcall(custom_handler, event)
+                    else:
+                        # Regular call to the registered method.
+                        custom_handler(event)
+
+            if event.deferred:
+                deferred = True
+            else:
+                self._storage.drop_notice(event_path, observer_path, method_name)
+            # We intentionally consider this event to be dead and reload it from
+            # scratch in the next pass.
+            self.framework._forget(event)
+
+        if not deferred and last_event_path is not None:
+            self._storage.drop_snapshot(last_event_path)
+
+    def _show_debug_code_message(self):
+        """Present the welcome message (only once!) when using debugger functionality."""
+        if not self._breakpoint_welcomed:
+            self._breakpoint_welcomed = True
+            print(_BREAKPOINT_WELCOME_MESSAGE, file=sys.stderr, end='')
+
+    def breakpoint(self, name=None):
+        """Add breakpoint, optionally named, at the place where this method is called.
+
+        For the breakpoint to be activated the JUJU_DEBUG_AT environment variable
+        must be set to "all" or to the specific name parameter provided, if any. In every
+        other situation calling this method does nothing.
+
+        The framework also provides a standard breakpoint named "hook", that will
+        stop execution when a hook event is about to be handled.
+
+        For those reasons, the "all" and "hook" breakpoint names are reserved.
+        """
+        # If given, validate that the name complies with all the rules
+        if name is not None:
+            if not isinstance(name, str):
+                raise TypeError('breakpoint names must be strings')
+            if name in ('hook', 'all'):
+                raise ValueError('breakpoint names "all" and "hook" are reserved')
+            if not re.match(r'^[a-z0-9]([a-z0-9\-]*[a-z0-9])?$', name):
+                raise ValueError('breakpoint names must look like "foo" or "foo-bar"')
+
+        indicated_breakpoints = self._juju_debug_at
+        if not indicated_breakpoints:
+            return
+
+        if 'all' in indicated_breakpoints or name in indicated_breakpoints:
+            self._show_debug_code_message()
+
+            # If we call set_trace() directly it will open the debugger *here*, so
+            # instruct it to use our caller's frame.
+            code_frame = inspect.currentframe().f_back
+            pdb.Pdb().set_trace(code_frame)
+        else:
+            logger.warning(
+                "Breakpoint %r skipped (not found in the requested breakpoints: %s)",
+                name, indicated_breakpoints)
+
+    def remove_unreferenced_events(self):
+        """Remove events from storage that are not referenced.
+
+        In older versions of the framework, events that had no observers would get recorded but
+        never deleted. This makes a best effort to find these events and remove them from the
+        database.
+        """
+        event_regex = re.compile(_event_regex)
+        to_remove = []
+        for handle_path in self._storage.list_snapshots():
+            if event_regex.match(handle_path):
+                notices = self._storage.notices(handle_path)
+                if next(notices, None) is None:
+                    # There are no notices for this handle_path, it is valid to remove it
+                    to_remove.append(handle_path)
+        for handle_path in to_remove:
+            self._storage.drop_snapshot(handle_path)
+
+
+class StoredStateData(Object):
+
+    def __init__(self, parent, attr_name):
+        super().__init__(parent, attr_name)
+        self._cache = {}
+        self.dirty = False
+
+    def __getitem__(self, key):
+        return self._cache.get(key)
+
+    def __setitem__(self, key, value):
+        self._cache[key] = value
+        self.dirty = True
+
+    def __contains__(self, key):
+        return key in self._cache
+
+    def snapshot(self):
+        return self._cache
+
+    def restore(self, snapshot):
+        self._cache = snapshot
+        self.dirty = False
+
+    def on_commit(self, event):
+        if self.dirty:
+            self.framework.save_snapshot(self)
+            self.dirty = False
+
+
+class BoundStoredState:
+
+    def __init__(self, parent, attr_name):
+        parent.framework.register_type(StoredStateData, parent)
+
+        handle = Handle(parent, StoredStateData.handle_kind, attr_name)
+        try:
+            data = parent.framework.load_snapshot(handle)
+        except NoSnapshotError:
+            data = StoredStateData(parent, attr_name)
+
+        # __dict__ is used to avoid infinite recursion. 
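+        # (Assigning through self.__dict__ bypasses the __setattr__ defined below,
+        # which would otherwise try to route the assignment into _data before
+        # _data itself has been set.)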
+        self.__dict__["_data"] = data
+        self.__dict__["_attr_name"] = attr_name
+
+        parent.framework.observe(parent.framework.on.commit, self._data.on_commit)
+
+    def __getattr__(self, key):
+        # "on" is the only reserved key that can't be used in the data map.
+        if key == "on":
+            return self._data.on
+        if key not in self._data:
+            raise AttributeError("attribute '{}' is not stored".format(key))
+        return _wrap_stored(self._data, self._data[key])
+
+    def __setattr__(self, key, value):
+        if key == "on":
+            raise AttributeError("attribute 'on' is reserved and cannot be set")
+
+        value = _unwrap_stored(self._data, value)
+
+        if not isinstance(value, (type(None), int, float, str, bytes, list, dict, set)):
+            raise AttributeError(
+                'attribute {!r} cannot be a {}: must be int/float/dict/list/etc'.format(
+                    key, type(value).__name__))
+
+        # value was already unwrapped above, so it can be stored directly.
+        self._data[key] = value
+
+    def set_default(self, **kwargs):
+        """Set the value of any given key if it has not already been set."""
+        for k, v in kwargs.items():
+            if k not in self._data:
+                self._data[k] = v
+
+
+class StoredState:
+    """A class used to store data the charm needs persisted across invocations.
+
+    Example::
+
+        class MyClass(Object):
+            _stored = StoredState()
+
+    Instances of `MyClass` can transparently save state between invocations by
+    setting attributes on `_stored`. Initial state should be set with
+    `set_default` on the bound object, that is::
+
+        class MyClass(Object):
+            _stored = StoredState()
+
+            def __init__(self, parent, key):
+                super().__init__(parent, key)
+                self._stored.set_default(seen=set())
+                self.framework.observe(self.on.seen, self._on_seen)
+
+            def _on_seen(self, event):
+                self._stored.seen.add(event.uuid)
+
+    """
+
+    def __init__(self):
+        self.parent_type = None
+        self.attr_name = None
+
+    def __get__(self, parent, parent_type=None):
+        if self.parent_type is not None and self.parent_type not in parent_type.mro():
+            # the StoredState instance is being shared between two unrelated classes
+            # -> unclear what is expected of us -> bail out
+            raise RuntimeError(
+                'StoredState shared by {} and {}'.format(
+                    self.parent_type.__name__, parent_type.__name__))
+
+        if parent is None:
+            # accessing via the class directly (e.g. MyClass.stored)
+            return self
+
+        bound = None
+        if self.attr_name is not None:
+            bound = parent.__dict__.get(self.attr_name)
+            if bound is not None:
+                # we already have the thing from a previous pass, huzzah
+                return bound
+
+        # need to find ourselves amongst the parent's bases
+        for cls in parent_type.mro():
+            for attr_name, attr_value in cls.__dict__.items():
+                if attr_value is not self:
+                    continue
+                # we've found ourselves! is it the first time? 
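+                # ('bound' is already set here only if a previous iteration of
+                # this search matched the same descriptor under another name.)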
+ if bound is not None: + # the StoredState instance is being stored in two different + # attributes -> unclear what is expected of us -> bail out + raise RuntimeError("StoredState shared by {0}.{1} and {0}.{2}".format( + cls.__name__, self.attr_name, attr_name)) + # we've found ourselves for the first time; save where, and bind the object + self.attr_name = attr_name + self.parent_type = cls + bound = BoundStoredState(parent, attr_name) + + if bound is not None: + # cache the bound object to avoid the expensive lookup the next time + # (don't use setattr, to keep things symmetric with the fast-path lookup above) + parent.__dict__[self.attr_name] = bound + return bound + + raise AttributeError( + 'cannot find {} attribute in type {}'.format( + self.__class__.__name__, parent_type.__name__)) + + +def _wrap_stored(parent_data, value): + t = type(value) + if t is dict: + return StoredDict(parent_data, value) + if t is list: + return StoredList(parent_data, value) + if t is set: + return StoredSet(parent_data, value) + return value + + +def _unwrap_stored(parent_data, value): + t = type(value) + if t is StoredDict or t is StoredList or t is StoredSet: + return value._under + return value + + +class StoredDict(collections.abc.MutableMapping): + + def __init__(self, stored_data, under): + self._stored_data = stored_data + self._under = under + + def __getitem__(self, key): + return _wrap_stored(self._stored_data, self._under[key]) + + def __setitem__(self, key, value): + self._under[key] = _unwrap_stored(self._stored_data, value) + self._stored_data.dirty = True + + def __delitem__(self, key): + del self._under[key] + self._stored_data.dirty = True + + def __iter__(self): + return self._under.__iter__() + + def __len__(self): + return len(self._under) + + def __eq__(self, other): + if isinstance(other, StoredDict): + return self._under == other._under + elif isinstance(other, collections.abc.Mapping): + return self._under == other + else: + return NotImplemented + + +class StoredList(collections.abc.MutableSequence): + + def __init__(self, stored_data, under): + self._stored_data = stored_data + self._under = under + + def __getitem__(self, index): + return _wrap_stored(self._stored_data, self._under[index]) + + def __setitem__(self, index, value): + self._under[index] = _unwrap_stored(self._stored_data, value) + self._stored_data.dirty = True + + def __delitem__(self, index): + del self._under[index] + self._stored_data.dirty = True + + def __len__(self): + return len(self._under) + + def insert(self, index, value): + self._under.insert(index, value) + self._stored_data.dirty = True + + def append(self, value): + self._under.append(value) + self._stored_data.dirty = True + + def __eq__(self, other): + if isinstance(other, StoredList): + return self._under == other._under + elif isinstance(other, collections.abc.Sequence): + return self._under == other + else: + return NotImplemented + + def __lt__(self, other): + if isinstance(other, StoredList): + return self._under < other._under + elif isinstance(other, collections.abc.Sequence): + return self._under < other + else: + return NotImplemented + + def __le__(self, other): + if isinstance(other, StoredList): + return self._under <= other._under + elif isinstance(other, collections.abc.Sequence): + return self._under <= other + else: + return NotImplemented + + def __gt__(self, other): + if isinstance(other, StoredList): + return self._under > other._under + elif isinstance(other, collections.abc.Sequence): + return self._under > other + else: + 
return NotImplemented
+
+    def __ge__(self, other):
+        if isinstance(other, StoredList):
+            return self._under >= other._under
+        elif isinstance(other, collections.abc.Sequence):
+            return self._under >= other
+        else:
+            return NotImplemented
+
+
+class StoredSet(collections.abc.MutableSet):
+
+    def __init__(self, stored_data, under):
+        self._stored_data = stored_data
+        self._under = under
+
+    def add(self, key):
+        self._under.add(key)
+        self._stored_data.dirty = True
+
+    def discard(self, key):
+        self._under.discard(key)
+        self._stored_data.dirty = True
+
+    def __contains__(self, key):
+        return key in self._under
+
+    def __iter__(self):
+        return self._under.__iter__()
+
+    def __len__(self):
+        return len(self._under)
+
+    @classmethod
+    def _from_iterable(cls, it):
+        """Construct an instance of the class from any iterable input.
+
+        Per https://docs.python.org/3/library/collections.abc.html
+        if the Set mixin is being used in a class with a different constructor signature,
+        you will need to override _from_iterable() with a classmethod that can construct
+        new instances from an iterable argument.
+        """
+        return set(it)
+
+    def __le__(self, other):
+        if isinstance(other, StoredSet):
+            return self._under <= other._under
+        elif isinstance(other, collections.abc.Set):
+            return self._under <= other
+        else:
+            return NotImplemented
+
+    def __ge__(self, other):
+        if isinstance(other, StoredSet):
+            return self._under >= other._under
+        elif isinstance(other, collections.abc.Set):
+            return self._under >= other
+        else:
+            return NotImplemented
+
+    def __eq__(self, other):
+        if isinstance(other, StoredSet):
+            return self._under == other._under
+        elif isinstance(other, collections.abc.Set):
+            return self._under == other
+        else:
+            return NotImplemented
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/ops/jujuversion.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/ops/jujuversion.py
new file mode 100755
index 0000000000000000000000000000000000000000..b2b8177dbe396f0d8c46b86e26af6b4e54ea046d
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/ops/jujuversion.py
@@ -0,0 +1,98 @@
+# Copyright 2020 Canonical Ltd.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import os
+import re
+from functools import total_ordering
+
+
+@total_ordering
+class JujuVersion:
+
+    PATTERN = r'''^
+    (?P<major>\d{1,9})\.(?P<minor>\d{1,9})        # <major> and <minor> numbers are always there
+    ((?:\.|-(?P<tag>[a-z]+))(?P<patch>\d{1,9}))?  # sometimes with .<patch> or -<tag><patch>
+    (\.(?P<build>\d{1,9}))?$                      # and sometimes with a .<build> number. 
+ ''' + + def __init__(self, version): + m = re.match(self.PATTERN, version, re.VERBOSE) + if not m: + raise RuntimeError('"{}" is not a valid Juju version string'.format(version)) + + d = m.groupdict() + self.major = int(m.group('major')) + self.minor = int(m.group('minor')) + self.tag = d['tag'] or '' + self.patch = int(d['patch'] or 0) + self.build = int(d['build'] or 0) + + def __repr__(self): + if self.tag: + s = '{}.{}-{}{}'.format(self.major, self.minor, self.tag, self.patch) + else: + s = '{}.{}.{}'.format(self.major, self.minor, self.patch) + if self.build > 0: + s += '.{}'.format(self.build) + return s + + def __eq__(self, other): + if self is other: + return True + if isinstance(other, str): + other = type(self)(other) + elif not isinstance(other, JujuVersion): + raise RuntimeError('cannot compare Juju version "{}" with "{}"'.format(self, other)) + return ( + self.major == other.major + and self.minor == other.minor + and self.tag == other.tag + and self.build == other.build + and self.patch == other.patch) + + def __lt__(self, other): + if self is other: + return False + if isinstance(other, str): + other = type(self)(other) + elif not isinstance(other, JujuVersion): + raise RuntimeError('cannot compare Juju version "{}" with "{}"'.format(self, other)) + + if self.major != other.major: + return self.major < other.major + elif self.minor != other.minor: + return self.minor < other.minor + elif self.tag != other.tag: + if not self.tag: + return False + elif not other.tag: + return True + return self.tag < other.tag + elif self.patch != other.patch: + return self.patch < other.patch + elif self.build != other.build: + return self.build < other.build + return False + + @classmethod + def from_environ(cls) -> 'JujuVersion': + """Build a JujuVersion from JUJU_VERSION.""" + v = os.environ.get('JUJU_VERSION') + if not v: + raise RuntimeError('environ has no JUJU_VERSION') + return cls(v) + + def has_app_data(self) -> bool: + """Determine whether this juju version knows about app data.""" + return (self.major, self.minor, self.patch) >= (2, 7, 0) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/ops/lib/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/ops/lib/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..edb9fcacea6f0173aed9f07ca8a683cfead989cc --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/ops/lib/__init__.py @@ -0,0 +1,194 @@ +# Copyright 2020 Canonical Ltd. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import sys +import os +import re + +from ast import literal_eval +from importlib.util import module_from_spec +from importlib.machinery import ModuleSpec +from pkgutil import get_importer +from types import ModuleType + + +_libraries = None + +_libline_re = re.compile(r'''^LIB([A-Z]+)\s*=\s*([0-9]+|['"][a-zA-Z0-9_.\-@]+['"])''') +_libname_re = re.compile(r'''^[a-z][a-z0-9]+$''') + +# Not perfect, but should do for now. 
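+# For illustration, these patterns are meant to match metadata lines near the
+# top of an opslib file, e.g. (hypothetical values):
+#
+#     LIBNAME = "mylib"
+#     LIBAPI = 1
+#     LIBPATCH = 0
+#     LIBAUTHOR = "alice@example.com"
+#
+# ("Not perfect" above refers to the author-email pattern that follows.)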
+_libauthor_re = re.compile(r'''^[A-Za-z0-9_+.-]+@[a-z0-9_-]+(?:\.[a-z0-9_-]+)*\.[a-z]{2,3}$''')
+
+
+def use(name: str, api: int, author: str) -> ModuleType:
+    """Use a library from the ops libraries.
+
+    Args:
+        name: the name of the library requested.
+        api: the API version of the library.
+        author: the author of the library, as an email address.
+    Raises:
+        ImportError: if the library cannot be found.
+        TypeError: if the name, api, or author are the wrong type.
+        ValueError: if the name, api, or author are invalid.
+    """
+    if not isinstance(name, str):
+        raise TypeError("invalid library name: {!r} (must be a str)".format(name))
+    if not isinstance(author, str):
+        raise TypeError("invalid library author: {!r} (must be a str)".format(author))
+    if not isinstance(api, int):
+        raise TypeError("invalid library API: {!r} (must be an int)".format(api))
+    if api < 0:
+        raise ValueError('invalid library API: {} (must be ≥0)'.format(api))
+    if not _libname_re.match(name):
+        raise ValueError("invalid library name: {!r} (chars and digits only)".format(name))
+    if not _libauthor_re.match(author):
+        raise ValueError("invalid library author email: {!r}".format(author))
+
+    if _libraries is None:
+        autoimport()
+
+    versions = _libraries.get((name, author), ())
+    for lib in versions:
+        if lib.api == api:
+            return lib.import_module()
+
+    others = ', '.join(str(lib.api) for lib in versions)
+    if others:
+        msg = 'cannot find "{}" from "{}" with API version {} (have {})'.format(
+            name, author, api, others)
+    else:
+        msg = 'cannot find library "{}" from "{}"'.format(name, author)
+
+    raise ImportError(msg, name=name)
+
+
+def autoimport():
+    """Find all libs in the path and enable use of them.
+
+    You only need to call this if you've installed a package or
+    otherwise changed sys.path in the current run, and need to see the
+    changes. Otherwise libraries are found on first call of `use`.
+    """
+    global _libraries
+    _libraries = {}
+    for spec in _find_all_specs(sys.path):
+        lib = _parse_lib(spec)
+        if lib is None:
+            continue
+
+        versions = _libraries.setdefault((lib.name, lib.author), [])
+        versions.append(lib)
+        versions.sort(reverse=True)
+
+
+def _find_all_specs(path):
+    for sys_dir in path:
+        if sys_dir == "":
+            sys_dir = "." 
+ try: + top_dirs = os.listdir(sys_dir) + except OSError: + continue + for top_dir in top_dirs: + opslib = os.path.join(sys_dir, top_dir, 'opslib') + try: + lib_dirs = os.listdir(opslib) + except OSError: + continue + finder = get_importer(opslib) + if finder is None or not hasattr(finder, 'find_spec'): + continue + for lib_dir in lib_dirs: + spec = finder.find_spec(lib_dir) + if spec is None: + continue + if spec.loader is None: + # a namespace package; not supported + continue + yield spec + + +# only the first this many lines of a file are looked at for the LIB* constants +_MAX_LIB_LINES = 99 + + +def _parse_lib(spec): + if spec.origin is None: + return None + + _expected = {'NAME': str, 'AUTHOR': str, 'API': int, 'PATCH': int} + + try: + with open(spec.origin, 'rt', encoding='utf-8') as f: + libinfo = {} + for n, line in enumerate(f): + if len(libinfo) == len(_expected): + break + if n > _MAX_LIB_LINES: + return None + m = _libline_re.match(line) + if m is None: + continue + key, value = m.groups() + if key in _expected: + value = literal_eval(value) + if not isinstance(value, _expected[key]): + return None + libinfo[key] = value + else: + if len(libinfo) != len(_expected): + return None + except Exception: + return None + + return _Lib(spec, libinfo['NAME'], libinfo['AUTHOR'], libinfo['API'], libinfo['PATCH']) + + +class _Lib: + + def __init__(self, spec: ModuleSpec, name: str, author: str, api: int, patch: int): + self.spec = spec + self.name = name + self.author = author + self.api = api + self.patch = patch + + self._module = None + + def __repr__(self): + return "<_Lib {0.name} by {0.author}, API {0.api}, patch {0.patch}>".format(self) + + def import_module(self) -> ModuleType: + if self._module is None: + module = module_from_spec(self.spec) + self.spec.loader.exec_module(module) + self._module = module + return self._module + + def __eq__(self, other): + if not isinstance(other, _Lib): + return NotImplemented + a = (self.name, self.author, self.api, self.patch) + b = (other.name, other.author, other.api, other.patch) + return a == b + + def __lt__(self, other): + if not isinstance(other, _Lib): + return NotImplemented + a = (self.name, self.author, self.api, self.patch) + b = (other.name, other.author, other.api, other.patch) + return a < b diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/ops/log.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/ops/log.py new file mode 100644 index 0000000000000000000000000000000000000000..4aac5543aec4d84dc393e79b772a30284712d6d4 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/ops/log.py @@ -0,0 +1,51 @@ +# Copyright 2020 Canonical Ltd. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+
+import sys
+import logging
+
+
+class JujuLogHandler(logging.Handler):
+    """A handler for sending logs to Juju via juju-log."""
+
+    def __init__(self, model_backend, level=logging.DEBUG):
+        super().__init__(level)
+        self.model_backend = model_backend
+
+    def emit(self, record):
+        self.model_backend.juju_log(record.levelname, self.format(record))
+
+
+def setup_root_logging(model_backend, debug=False):
+    """Set up Python logging to forward messages to juju-log.
+
+    By default, logging is set to DEBUG level, and messages will be filtered by Juju.
+    Charmers can also set their own default log level with::
+
+        logging.getLogger().setLevel(logging.INFO)
+
+    model_backend -- a ModelBackend to use for juju-log
+    debug -- if True, write logs to stderr as well as to juju-log.
+    """
+    logger = logging.getLogger()
+    logger.setLevel(logging.DEBUG)
+    logger.addHandler(JujuLogHandler(model_backend))
+    if debug:
+        handler = logging.StreamHandler()
+        formatter = logging.Formatter('%(asctime)s %(levelname)-8s %(message)s')
+        handler.setFormatter(formatter)
+        logger.addHandler(handler)
+
+    sys.excepthook = lambda etype, value, tb: logger.error(
+        "Uncaught exception while in charm code:", exc_info=(etype, value, tb))
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/ops/main.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/ops/main.py
new file mode 100755
index 0000000000000000000000000000000000000000..6dc31c3575044796e8fe1f61b8415395689d6339
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/ops/main.py
@@ -0,0 +1,348 @@
+# Copyright 2019-2020 Canonical Ltd.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import inspect
+import logging
+import os
+import subprocess
+import sys
+import warnings
+from pathlib import Path
+
+import yaml
+
+import ops.charm
+import ops.framework
+import ops.model
+import ops.storage
+
+from ops.log import setup_root_logging
+
+CHARM_STATE_FILE = '.unit-state.db'
+
+
+logger = logging.getLogger()
+
+
+def _get_charm_dir():
+    charm_dir = os.environ.get("JUJU_CHARM_DIR")
+    if charm_dir is None:
+        # Assume $JUJU_CHARM_DIR/lib/ops/main.py structure.
+        charm_dir = Path('{}/../../..'.format(__file__)).resolve()
+    else:
+        charm_dir = Path(charm_dir).resolve()
+    return charm_dir
+
+
+def _create_event_link(charm, bound_event):
+    """Create a symlink for a particular event.
+
+    charm -- A charm object.
+    bound_event -- An event for which to create a symlink.
+    """
+    if issubclass(bound_event.event_type, ops.charm.HookEvent):
+        event_dir = charm.framework.charm_dir / 'hooks'
+        event_path = event_dir / bound_event.event_kind.replace('_', '-')
+    elif issubclass(bound_event.event_type, ops.charm.ActionEvent):
+        if not bound_event.event_kind.endswith("_action"):
+            raise RuntimeError(
+                'action event name {} needs _action suffix'.format(bound_event.event_kind))
+        event_dir = charm.framework.charm_dir / 'actions'
+        # The event_kind is suffixed with "_action" while the executable is not. 
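+        # For example, an event_kind of 'do_backup_action' (hypothetical) maps
+        # to the executable 'actions/do-backup' below.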
+ event_path = event_dir / bound_event.event_kind[:-len('_action')].replace('_', '-') + else: + raise RuntimeError( + 'cannot create a symlink: unsupported event type {}'.format(bound_event.event_type)) + + event_dir.mkdir(exist_ok=True) + if not event_path.exists(): + # CPython has different implementations for populating sys.argv[0] for Linux and Windows. + # For Windows it is always an absolute path (any symlinks are resolved) + # while for Linux it can be a relative path. + target_path = os.path.relpath(os.path.realpath(sys.argv[0]), str(event_dir)) + + # Ignore the non-symlink files or directories + # assuming the charm author knows what they are doing. + logger.debug( + 'Creating a new relative symlink at %s pointing to %s', + event_path, target_path) + event_path.symlink_to(target_path) + + +def _setup_event_links(charm_dir, charm): + """Set up links for supported events that originate from Juju. + + Whether a charm can handle an event or not can be determined by + introspecting which events are defined on it. + + Hooks or actions are created as symlinks to the charm code file + which is determined by inspecting symlinks provided by the charm + author at hooks/install or hooks/start. + + charm_dir -- A root directory of the charm. + charm -- An instance of the Charm class. + + """ + for bound_event in charm.on.events().values(): + # Only events that originate from Juju need symlinks. + if issubclass(bound_event.event_type, (ops.charm.HookEvent, ops.charm.ActionEvent)): + _create_event_link(charm, bound_event) + + +def _emit_charm_event(charm, event_name): + """Emits a charm event based on a Juju event name. + + charm -- A charm instance to emit an event from. + event_name -- A Juju event name to emit on a charm. + """ + event_to_emit = None + try: + event_to_emit = getattr(charm.on, event_name) + except AttributeError: + logger.debug("Event %s not defined for %s.", event_name, charm) + + # If the event is not supported by the charm implementation, do + # not error out or try to emit it. This is to support rollbacks. + if event_to_emit is not None: + args, kwargs = _get_event_args(charm, event_to_emit) + logger.debug('Emitting Juju event %s.', event_name) + event_to_emit.emit(*args, **kwargs) + + +def _get_event_args(charm, bound_event): + event_type = bound_event.event_type + model = charm.framework.model + + if issubclass(event_type, ops.charm.RelationEvent): + relation_name = os.environ['JUJU_RELATION'] + relation_id = int(os.environ['JUJU_RELATION_ID'].split(':')[-1]) + relation = model.get_relation(relation_name, relation_id) + else: + relation = None + + remote_app_name = os.environ.get('JUJU_REMOTE_APP', '') + remote_unit_name = os.environ.get('JUJU_REMOTE_UNIT', '') + if remote_app_name or remote_unit_name: + if not remote_app_name: + if '/' not in remote_unit_name: + raise RuntimeError('invalid remote unit name: {}'.format(remote_unit_name)) + remote_app_name = remote_unit_name.split('/')[0] + args = [relation, model.get_app(remote_app_name)] + if remote_unit_name: + args.append(model.get_unit(remote_unit_name)) + return args, {} + elif relation: + return [relation], {} + return [], {} + + +class _Dispatcher: + """Encapsulate how to figure out what event Juju wants us to run. + + Also knows how to run “legacy” hooks when Juju called us via a top-level + ``dispatch`` binary. 
+
+    Args:
+        charm_dir: the toplevel directory of the charm
+
+    Attributes:
+        event_name: the name of the event to run
+        is_dispatch_aware: are we running under a Juju that knows about the
+            dispatch binary?
+
+    """
+
+    def __init__(self, charm_dir: Path):
+        self._charm_dir = charm_dir
+        self._exec_path = Path(sys.argv[0])
+
+        if 'JUJU_DISPATCH_PATH' in os.environ and (charm_dir / 'dispatch').exists():
+            self._init_dispatch()
+        else:
+            self._init_legacy()
+
+    def ensure_event_links(self, charm):
+        """Make sure necessary symlinks are present on disk."""
+
+        if self.is_dispatch_aware:
+            # links aren't needed
+            return
+
+        # When a charm is force-upgraded and a unit is in an error state Juju
+        # does not run upgrade-charm and instead runs the failed hook followed
+        # by config-changed. Given the nature of force-upgrading the hook setup
+        # code is not triggered on config-changed.
+        #
+        # 'start' event is included as Juju does not fire the install event for
+        # K8s charms (see LP: #1854635).
+        if (self.event_name in ('install', 'start', 'upgrade_charm')
+                or self.event_name.endswith('_storage_attached')):
+            _setup_event_links(self._charm_dir, charm)
+
+    def run_any_legacy_hook(self):
+        """Run any extant legacy hook.
+
+        If there is both a dispatch file and a legacy hook for the
+        current event, run the wanted legacy hook.
+        """
+
+        if not self.is_dispatch_aware:
+            # we *are* the legacy hook
+            return
+
+        dispatch_path = self._charm_dir / self._dispatch_path
+        if not dispatch_path.exists():
+            logger.debug("Legacy %s does not exist.", self._dispatch_path)
+            return
+
+        # super strange that there isn't an is_executable
+        if not os.access(str(dispatch_path), os.X_OK):
+            logger.warning("Legacy %s exists but is not executable.", self._dispatch_path)
+            return
+
+        if dispatch_path.resolve() == self._exec_path.resolve():
+            logger.debug("Legacy %s is just a link to ourselves.", self._dispatch_path)
+            return
+
+        argv = sys.argv.copy()
+        argv[0] = str(dispatch_path)
+        logger.info("Running legacy %s.", self._dispatch_path)
+        try:
+            subprocess.run(argv, check=True)
+        except subprocess.CalledProcessError as e:
+            logger.warning(
+                "Legacy %s exited with status %d.",
+                self._dispatch_path, e.returncode)
+            sys.exit(e.returncode)
+        else:
+            logger.debug("Legacy %s exited with status 0.", self._dispatch_path)
+
+    def _set_name_from_path(self, path: Path):
+        """Sets the name attribute to that which can be inferred from the given path."""
+        name = path.name.replace('-', '_')
+        if path.parent.name == 'actions':
+            name = '{}_action'.format(name)
+        self.event_name = name
+
+    def _init_legacy(self):
+        """Set up the 'legacy' dispatcher.
+
+        The current Juju doesn't know about 'dispatch' and calls hooks
+        explicitly.
+        """
+        self.is_dispatch_aware = False
+        self._set_name_from_path(self._exec_path)
+
+    def _init_dispatch(self):
+        """Set up the new 'dispatch' dispatcher.
+
+        The current Juju will run 'dispatch' if it exists, and otherwise fall
+        back to the old behaviour.
+
+        JUJU_DISPATCH_PATH will be set to the wanted hook, e.g. hooks/install,
+        in both cases.
+        """
+        self._dispatch_path = Path(os.environ['JUJU_DISPATCH_PATH'])
+
+        if 'OPERATOR_DISPATCH' in os.environ:
+            logger.debug("Charm called itself via %s.", self._dispatch_path)
+            sys.exit(0)
+        os.environ['OPERATOR_DISPATCH'] = '1'
+
+        self.is_dispatch_aware = True
+        self._set_name_from_path(self._dispatch_path)
+
+    def is_restricted_context(self):
+        """Return True if we are running in a restricted Juju context. 
+
+        When in a restricted context, most commands (relation-get, config-get,
+        state-get) are not available. As such, we change how we interact with
+        Juju.
+        """
+        return self.event_name in ('collect_metrics',)
+
+
+def main(charm_class, use_juju_for_storage=False):
+    """Set up the charm and dispatch the observed event.
+
+    The event name is based on the way this executable was called (argv[0]).
+    """
+    charm_dir = _get_charm_dir()
+
+    model_backend = ops.model._ModelBackend()
+    debug = ('JUJU_DEBUG' in os.environ)
+    setup_root_logging(model_backend, debug=debug)
+    logger.debug("Operator Framework %s up and running.", ops.__version__)
+
+    dispatcher = _Dispatcher(charm_dir)
+    dispatcher.run_any_legacy_hook()
+
+    metadata = (charm_dir / 'metadata.yaml').read_text()
+    actions_meta = charm_dir / 'actions.yaml'
+    if actions_meta.exists():
+        actions_metadata = actions_meta.read_text()
+    else:
+        actions_metadata = None
+
+    if not yaml.__with_libyaml__:
+        logger.debug('yaml does not have libyaml extensions, using slower pure Python yaml loader')
+    meta = ops.charm.CharmMeta.from_yaml(metadata, actions_metadata)
+    model = ops.model.Model(meta, model_backend)
+
+    # TODO: If the Juju unit agent crashes after exit(0) from the charm code,
+    # the framework will commit the snapshot but Juju will not commit its
+    # operation.
+    charm_state_path = charm_dir / CHARM_STATE_FILE
+    if use_juju_for_storage:
+        if dispatcher.is_restricted_context():
+            # TODO: jam 2020-06-30 This unconditionally avoids running a collect metrics event
+            # Though we eventually expect that juju will run collect-metrics in a
+            # non-restricted context. Once we can determine that we are running collect-metrics
+            # in a non-restricted context, we should fire the event as normal.
+            logger.debug('"%s" is not supported when using Juju for storage\n'
+                         'see: https://github.com/canonical/operator/issues/348',
+                         dispatcher.event_name)
+            # Note that we don't exit nonzero, because that would cause Juju to rerun the hook
+            return
+        store = ops.storage.JujuStorage()
+    else:
+        store = ops.storage.SQLiteStorage(charm_state_path)
+    framework = ops.framework.Framework(store, charm_dir, meta, model)
+    try:
+        sig = inspect.signature(charm_class)
+        try:
+            sig.bind(framework)
+        except TypeError:
+            msg = (
+                "the second argument, 'key', has been deprecated and will be "
+                "removed after the 0.7 release")
+            warnings.warn(msg, DeprecationWarning)
+            charm = charm_class(framework, None)
+        else:
+            charm = charm_class(framework)
+        dispatcher.ensure_event_links(charm)
+
+        # TODO: Remove the collect_metrics check below as soon as the relevant
+        # Juju changes are made.
+        #
+        # Skip reemission of deferred events for collect-metrics events because
+        # they do not have the full access to all hook tools.
+        if not dispatcher.is_restricted_context():
+            framework.reemit()
+
+        _emit_charm_event(charm, dispatcher.event_name)
+
+        framework.commit()
+    finally:
+        framework.close()
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/ops/model.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/ops/model.py
new file mode 100644
index 0000000000000000000000000000000000000000..b96e89154ea9cec2b62a4fab4649412e115c304e
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/ops/model.py
@@ -0,0 +1,1237 @@
+# Copyright 2019 Canonical Ltd.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import datetime +import decimal +import ipaddress +import json +import os +import re +import shutil +import tempfile +import time +import typing +import weakref + +from abc import ABC, abstractmethod +from collections.abc import Mapping, MutableMapping +from pathlib import Path +from subprocess import run, PIPE, CalledProcessError + +import ops +from ops.jujuversion import JujuVersion + + +class Model: + """Represents the Juju Model as seen from this unit. + + This should not be instantiated directly by Charmers, but can be accessed as `self.model` + from any class that derives from Object. + + Attributes: + unit: A :class:`Unit` that represents the unit that is running this code (eg yourself) + app: A :class:`Application` that represents the application this unit is a part of. + relations: Mapping of endpoint to list of :class:`Relation` answering the question + "what am I currently related to". See also :meth:`.get_relation` + config: A dict of the config for the current application. + resources: Access to resources for this charm. Use ``model.resources.fetch(resource_name)`` + to get the path on disk where the resource can be found. + storages: Mapping of storage_name to :class:`Storage` for the storage points defined in + metadata.yaml + pod: Used to get access to ``model.pod.set_spec`` to set the container specification + for Kubernetes charms. + """ + + def __init__(self, meta: 'ops.charm.CharmMeta', backend: '_ModelBackend'): + self._cache = _ModelCache(backend) + self._backend = backend + self.unit = self.get_unit(self._backend.unit_name) + self.app = self.unit.app + self.relations = RelationMapping(meta.relations, self.unit, self._backend, self._cache) + self.config = ConfigData(self._backend) + self.resources = Resources(list(meta.resources), self._backend) + self.pod = Pod(self._backend) + self.storages = StorageMapping(list(meta.storages), self._backend) + self._bindings = BindingMapping(self._backend) + + @property + def name(self) -> str: + """Return the name of the Model that this unit is running in. + + This is read from the environment variable ``JUJU_MODEL_NAME``. + """ + return self._backend.model_name + + def get_unit(self, unit_name: str) -> 'Unit': + """Get an arbitrary unit by name. + + Internally this uses a cache, so asking for the same unit two times will + return the same object. + """ + return self._cache.get(Unit, unit_name) + + def get_app(self, app_name: str) -> 'Application': + """Get an application by name. + + Internally this uses a cache, so asking for the same application two times will + return the same object. + """ + return self._cache.get(Application, app_name) + + def get_relation( + self, relation_name: str, + relation_id: typing.Optional[int] = None) -> 'Relation': + """Get a specific Relation instance. + + If relation_id is not given, this will return the Relation instance if the + relation is established only once or None if it is not established. If this + same relation is established multiple times the error TooManyRelatedAppsError is raised. 
+ + Args: + relation_name: The name of the endpoint for this charm + relation_id: An identifier for a specific relation. Used to disambiguate when a + given application has more than one relation on a given endpoint. + Raises: + TooManyRelatedAppsError: is raised if there is more than one relation to the + supplied relation_name and no relation_id was supplied + """ + return self.relations._get_unique(relation_name, relation_id) + + def get_binding(self, binding_key: typing.Union[str, 'Relation']) -> 'Binding': + """Get a network space binding. + + Args: + binding_key: The relation name or instance to obtain bindings for. + Returns: + If ``binding_key`` is a relation name, the method returns the default binding + for that relation. If a relation instance is provided, the method first looks + up a more specific binding for that specific relation ID, and if none is found + falls back to the default binding for the relation name. + """ + return self._bindings.get(binding_key) + + +class _ModelCache: + + def __init__(self, backend): + self._backend = backend + self._weakrefs = weakref.WeakValueDictionary() + + def get(self, entity_type, *args): + key = (entity_type,) + args + entity = self._weakrefs.get(key) + if entity is None: + entity = entity_type(*args, backend=self._backend, cache=self) + self._weakrefs[key] = entity + return entity + + +class Application: + """Represents a named application in the model. + + This might be your application, or might be an application that you are related to. + Charmers should not instantiate Application objects directly, but should use + :meth:`Model.get_app` if they need a reference to a given application. + + Attributes: + name: The name of this application (eg, 'mysql'). This name may differ from the name of + the charm, if the user has deployed it to a different name. + """ + + def __init__(self, name, backend, cache): + self.name = name + self._backend = backend + self._cache = cache + self._is_our_app = self.name == self._backend.app_name + self._status = None + + def _invalidate(self): + self._status = None + + @property + def status(self) -> 'StatusBase': + """Used to report or read the status of the overall application. + + Can only be read and set by the lead unit of the application. + + The status of remote units is always Unknown. + + Raises: + RuntimeError: if you try to set the status of another application, or if you try to + set the status of this application as a unit that is not the leader. 
+        InvalidStatusError: if you try to set the status to something that is not a
+            :class:`StatusBase`
+
+        Example::
+
+            self.model.app.status = BlockedStatus('I need a human to come help me')
+        """
+        if not self._is_our_app:
+            return UnknownStatus()
+
+        if not self._backend.is_leader():
+            raise RuntimeError('cannot get application status as a non-leader unit')
+
+        if self._status:
+            return self._status
+
+        s = self._backend.status_get(is_app=True)
+        self._status = StatusBase.from_name(s['status'], s['message'])
+        return self._status
+
+    @status.setter
+    def status(self, value: 'StatusBase'):
+        if not isinstance(value, StatusBase):
+            raise InvalidStatusError(
+                'invalid value provided for application {} status: {}'.format(self, value)
+            )
+
+        if not self._is_our_app:
+            raise RuntimeError('cannot set status for a remote application {}'.format(self))
+
+        if not self._backend.is_leader():
+            raise RuntimeError('cannot set application status as a non-leader unit')
+
+        self._backend.status_set(value.name, value.message, is_app=True)
+        self._status = value
+
+    def __repr__(self):
+        return '<{}.{} {}>'.format(type(self).__module__, type(self).__name__, self.name)
+
+
+class Unit:
+    """Represents a named unit in the model.
+
+    This might be your unit, another unit of your application, or a unit of another application
+    that you are related to.
+
+    Attributes:
+        name: The name of the unit (eg, 'mysql/0')
+        app: The Application the unit is a part of.
+    """
+
+    def __init__(self, name, backend, cache):
+        self.name = name
+
+        app_name = name.split('/')[0]
+        self.app = cache.get(Application, app_name)
+
+        self._backend = backend
+        self._cache = cache
+        self._is_our_unit = self.name == self._backend.unit_name
+        self._status = None
+
+    def _invalidate(self):
+        self._status = None
+
+    @property
+    def status(self) -> 'StatusBase':
+        """Used to report or read the status of a specific unit.
+
+        The status of any unit other than yourself is always Unknown.
+
+        Raises:
+            RuntimeError: if you try to set the status of a unit other than yourself.
+            InvalidStatusError: if you try to set the status to something other than
+                a :class:`StatusBase`
+        Example::
+
+            self.model.unit.status = MaintenanceStatus('reconfiguring the frobnicators')
+        """
+        if not self._is_our_unit:
+            return UnknownStatus()
+
+        if self._status:
+            return self._status
+
+        s = self._backend.status_get(is_app=False)
+        self._status = StatusBase.from_name(s['status'], s['message'])
+        return self._status
+
+    @status.setter
+    def status(self, value: 'StatusBase'):
+        if not isinstance(value, StatusBase):
+            raise InvalidStatusError(
+                'invalid value provided for unit {} status: {}'.format(self, value)
+            )
+
+        if not self._is_our_unit:
+            raise RuntimeError('cannot set status for a remote unit {}'.format(self))
+
+        self._backend.status_set(value.name, value.message, is_app=False)
+        self._status = value
+
+    def __repr__(self):
+        return '<{}.{} {}>'.format(type(self).__module__, type(self).__name__, self.name)
+
+    def is_leader(self) -> bool:
+        """Return whether this unit is the leader of its application.
+
+        This can only be called for your own unit.
+        Returns:
+            True if you are the leader, False otherwise
+        Raises:
+            RuntimeError: if called for a unit that is not yourself
+        """
+        if self._is_our_unit:
+            # This value is not cached as it is not guaranteed to persist for the whole duration 
+ return self._backend.is_leader() + else: + raise RuntimeError( + 'leadership status of remote units ({}) is not visible to other' + ' applications'.format(self) + ) + + def set_workload_version(self, version: str) -> None: + """Record the version of the software running as the workload. + + This shouldn't be confused with the revision of the charm. This is informative only; + shown in the output of 'juju status'. + """ + if not isinstance(version, str): + raise TypeError("workload version must be a str, not {}: {!r}".format( + type(version).__name__, version)) + self._backend.application_version_set(version) + + +class LazyMapping(Mapping, ABC): + """Represents a dict that isn't populated until it is accessed. + + Charm authors should generally never need to use this directly, but it forms + the basis for many of the dicts that the framework tracks. + """ + + _lazy_data = None + + @abstractmethod + def _load(self): + raise NotImplementedError() + + @property + def _data(self): + data = self._lazy_data + if data is None: + data = self._lazy_data = self._load() + return data + + def _invalidate(self): + self._lazy_data = None + + def __contains__(self, key): + return key in self._data + + def __len__(self): + return len(self._data) + + def __iter__(self): + return iter(self._data) + + def __getitem__(self, key): + return self._data[key] + + +class RelationMapping(Mapping): + """Map of relation names to lists of :class:`Relation` instances.""" + + def __init__(self, relations_meta, our_unit, backend, cache): + self._peers = set() + for name, relation_meta in relations_meta.items(): + if relation_meta.role.is_peer(): + self._peers.add(name) + self._our_unit = our_unit + self._backend = backend + self._cache = cache + self._data = {relation_name: None for relation_name in relations_meta} + + def __contains__(self, key): + return key in self._data + + def __len__(self): + return len(self._data) + + def __iter__(self): + return iter(self._data) + + def __getitem__(self, relation_name): + is_peer = relation_name in self._peers + relation_list = self._data[relation_name] + if relation_list is None: + relation_list = self._data[relation_name] = [] + for rid in self._backend.relation_ids(relation_name): + relation = Relation(relation_name, rid, is_peer, + self._our_unit, self._backend, self._cache) + relation_list.append(relation) + return relation_list + + def _invalidate(self, relation_name): + """Used to wipe the cache of a given relation_name. + + Not meant to be used by Charm authors. The content of relation data is + static for the lifetime of a hook, so it is safe to cache in memory once + accessed. + """ + self._data[relation_name] = None + + def _get_unique(self, relation_name, relation_id=None): + if relation_id is not None: + if not isinstance(relation_id, int): + raise ModelError('relation id {} must be int or None not {}'.format( + relation_id, + type(relation_id).__name__)) + for relation in self[relation_name]: + if relation.id == relation_id: + return relation + else: + # The relation may be dead, but it is not forgotten. + is_peer = relation_name in self._peers + return Relation(relation_name, relation_id, is_peer, + self._our_unit, self._backend, self._cache) + num_related = len(self[relation_name]) + if num_related == 0: + return None + elif num_related == 1: + return self[relation_name][0] + else: + # TODO: We need something in the framework to catch and gracefully handle + # errors, ideally integrating the error catching with Juju's mechanisms. 
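+            # For example, two relations established on the same 'db' endpoint
+            # make get_relation('db') ambiguous unless a relation_id is given.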
+                raise TooManyRelatedAppsError(relation_name, num_related, 1)
+
+
+class BindingMapping:
+    """Mapping of endpoints to network bindings.
+
+    Charm authors should not instantiate this directly, but access it via
+    :meth:`Model.get_binding`
+    """
+
+    def __init__(self, backend):
+        self._backend = backend
+        self._data = {}
+
+    def get(self, binding_key: typing.Union[str, 'Relation']) -> 'Binding':
+        """Get a specific Binding for an endpoint/relation.
+
+        Not used directly by Charm authors. See :meth:`Model.get_binding`
+        """
+        if isinstance(binding_key, Relation):
+            binding_name = binding_key.name
+            relation_id = binding_key.id
+        elif isinstance(binding_key, str):
+            binding_name = binding_key
+            relation_id = None
+        else:
+            raise ModelError('binding key must be str or relation instance, not {}'
+                             ''.format(type(binding_key).__name__))
+        binding = self._data.get(binding_key)
+        if binding is None:
+            binding = Binding(binding_name, relation_id, self._backend)
+            self._data[binding_key] = binding
+        return binding
+
+
+class Binding:
+    """Binding to a network space.
+
+    Attributes:
+        name: The name of the endpoint this binding represents (eg, 'db')
+    """
+
+    def __init__(self, name, relation_id, backend):
+        self.name = name
+        self._relation_id = relation_id
+        self._backend = backend
+        self._network = None
+
+    @property
+    def network(self) -> 'Network':
+        """The network information for this binding."""
+        if self._network is None:
+            try:
+                self._network = Network(self._backend.network_get(self.name, self._relation_id))
+            except RelationNotFoundError:
+                if self._relation_id is None:
+                    raise
+                # If a relation is dead, we can still get network info associated with an
+                # endpoint itself
+                self._network = Network(self._backend.network_get(self.name))
+        return self._network
+
+
+class Network:
+    """Network space details.
+
+    Charm authors should not instantiate this directly, but should get access to the Network
+    definition from :meth:`Model.get_binding` and its ``network`` attribute.
+
+    Attributes:
+        interfaces: A list of :class:`NetworkInterface` details. This includes the
+            information about how your application should be configured (eg, what
+            IP addresses should you bind to.)
+            Note that multiple addresses for a single interface are represented as multiple
+            interfaces. (eg, ``[NetworkInterface('ens1', '10.1.1.1/32'),
+            NetworkInterface('ens1', '10.1.2.1/32')]``)
+        ingress_addresses: A list of :class:`ipaddress.ip_address` objects representing the IP
+            addresses that other units should use to get in touch with you.
+        egress_subnets: A list of :class:`ipaddress.ip_network` representing the subnets that
+            other units will see you connecting from. Due to things like NAT it isn't always
+            possible to narrow it down to a single address, but when it is clear, the CIDRs
+            will be constrained to a single address. (eg, 10.0.0.1/32)
+    Args:
+        network_info: A dict of network information as returned by ``network-get``.
+    """
+
+    def __init__(self, network_info: dict):
+        self.interfaces = []
+        # Treat multiple addresses on an interface as multiple logical
+        # interfaces with the same name.
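+        # For example, a 'bind-addresses' entry of
+        #   {'interface-name': 'ens1',
+        #    'addresses': [{'value': '10.1.1.1', 'cidr': '10.1.1.0/24'},
+        #                  {'value': '10.1.2.1', 'cidr': '10.1.2.0/24'}]}
+        # yields two NetworkInterface objects, both named 'ens1'.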
+ for interface_info in network_info['bind-addresses']: + interface_name = interface_info['interface-name'] + for address_info in interface_info['addresses']: + self.interfaces.append(NetworkInterface(interface_name, address_info)) + self.ingress_addresses = [] + for address in network_info['ingress-addresses']: + self.ingress_addresses.append(ipaddress.ip_address(address)) + self.egress_subnets = [] + for subnet in network_info['egress-subnets']: + self.egress_subnets.append(ipaddress.ip_network(subnet)) + + @property + def bind_address(self): + """A single address that your application should bind() to. + + For the common case where there is a single answer. This represents a single + address from :attr:`.interfaces` that can be used to configure where your + application should bind() and listen(). + """ + return self.interfaces[0].address + + @property + def ingress_address(self): + """The address other applications should use to connect to your unit. + + Due to things like public/private addresses, NAT and tunneling, the address you bind() + to is not always the address other people can use to connect() to you. + This is just the first address from :attr:`.ingress_addresses`. + """ + return self.ingress_addresses[0] + + +class NetworkInterface: + """Represents a single network interface that the charm needs to know about. + + Charmers should not instantiate this type directly. Instead use :meth:`Model.get_binding` + to get the network information for a given endpoint. + + Attributes: + name: The name of the interface (eg. 'eth0', or 'ens1') + subnet: An :class:`ipaddress.ip_network` representation of the IP for the network + interface. This may be a single address (eg '10.0.1.2/32') + """ + + def __init__(self, name: str, address_info: dict): + self.name = name + # TODO: expose a hardware address here, see LP: #1864070. + self.address = ipaddress.ip_address(address_info['value']) + cidr = address_info['cidr'] + if not cidr: + # The cidr field may be empty, see LP: #1864102. + # In this case, make it a /32 or /128 IP network. + self.subnet = ipaddress.ip_network(address_info['value']) + else: + self.subnet = ipaddress.ip_network(cidr) + # TODO: expose a hostname/canonical name for the address here, see LP: #1864086. + + +class Relation: + """Represents an established relation between this application and another application. + + This class should not be instantiated directly, instead use :meth:`Model.get_relation` + or :attr:`RelationEvent.relation`. + + Attributes: + name: The name of the local endpoint of the relation (eg 'db') + id: The identifier for a particular relation (integer) + app: An :class:`Application` representing the remote application of this relation. + For peer relations this will be the local application. + units: A set of :class:`Unit` for units that have started and joined this relation. + data: A :class:`RelationData` holding the data buckets for each entity + of a relation. Accessed via eg Relation.data[unit]['foo'] + """ + + def __init__( + self, relation_name: str, relation_id: int, is_peer: bool, our_unit: Unit, + backend: '_ModelBackend', cache: '_ModelCache'): + self.name = relation_name + self.id = relation_id + self.app = None + self.units = set() + + # For peer relations, both the remote and the local app are the same. 
+ if is_peer: + self.app = our_unit.app + try: + for unit_name in backend.relation_list(self.id): + unit = cache.get(Unit, unit_name) + self.units.add(unit) + if self.app is None: + self.app = unit.app + except RelationNotFoundError: + # If the relation is dead, just treat it as if it has no remote units. + pass + self.data = RelationData(self, our_unit, backend) + + def __repr__(self): + return '<{}.{} {}:{}>'.format(type(self).__module__, + type(self).__name__, + self.name, + self.id) + + +class RelationData(Mapping): + """Represents the various data buckets of a given relation. + + Each unit and application involved in a relation has their own data bucket. + Eg: ``{entity: RelationDataContent}`` + where entity can be either a :class:`Unit` or a :class:`Application`. + + Units can read and write their own data, and if they are the leader, + they can read and write their application data. They are allowed to read + remote unit and application data. + + This class should not be created directly. It should be accessed via + :attr:`Relation.data` + """ + + def __init__(self, relation: Relation, our_unit: Unit, backend: '_ModelBackend'): + self.relation = weakref.proxy(relation) + self._data = { + our_unit: RelationDataContent(self.relation, our_unit, backend), + our_unit.app: RelationDataContent(self.relation, our_unit.app, backend), + } + self._data.update({ + unit: RelationDataContent(self.relation, unit, backend) + for unit in self.relation.units}) + # The relation might be dead so avoid a None key here. + if self.relation.app is not None: + self._data.update({ + self.relation.app: RelationDataContent(self.relation, self.relation.app, backend), + }) + + def __contains__(self, key): + return key in self._data + + def __len__(self): + return len(self._data) + + def __iter__(self): + return iter(self._data) + + def __getitem__(self, key): + return self._data[key] + + +# We mix in MutableMapping here to get some convenience implementations, but whether it's actually +# mutable or not is controlled by the flag. +class RelationDataContent(LazyMapping, MutableMapping): + + def __init__(self, relation, entity, backend): + self.relation = relation + self._entity = entity + self._backend = backend + self._is_app = isinstance(entity, Application) + + def _load(self): + try: + return self._backend.relation_get(self.relation.id, self._entity.name, self._is_app) + except RelationNotFoundError: + # Dead relations tell no tales (and have no data). + return {} + + def _is_mutable(self): + if self._is_app: + is_our_app = self._backend.app_name == self._entity.name + if not is_our_app: + return False + # Whether the application data bag is mutable or not depends on + # whether this unit is a leader or not, but this is not guaranteed + # to be always true during the same hook execution. + return self._backend.is_leader() + else: + is_our_unit = self._backend.unit_name == self._entity.name + if is_our_unit: + return True + return False + + def __setitem__(self, key, value): + if not self._is_mutable(): + raise RelationDataError('cannot set relation data for {}'.format(self._entity.name)) + if not isinstance(value, str): + raise RelationDataError('relation data values must be strings') + + self._backend.relation_set(self.relation.id, key, value, self._is_app) + + # Don't load data unnecessarily if we're only updating. + if self._lazy_data is not None: + if value == '': + # Match the behavior of Juju, which is that setting the value to an + # empty string will remove the key entirely from the relation data. 
+ del self._data[key] + else: + self._data[key] = value + + def __delitem__(self, key): + # Match the behavior of Juju, which is that setting the value to an empty + # string will remove the key entirely from the relation data. + self.__setitem__(key, '') + + +class ConfigData(LazyMapping): + + def __init__(self, backend): + self._backend = backend + + def _load(self): + return self._backend.config_get() + + +class StatusBase: + """Status values specific to applications and units. + + To access a status by name, see :meth:`StatusBase.from_name`, most use cases will just + directly use the child class to indicate their status. + """ + + _statuses = {} + name = None + + def __init__(self, message: str): + self.message = message + + def __new__(cls, *args, **kwargs): + if cls is StatusBase: + raise TypeError("cannot instantiate a base class") + return super().__new__(cls) + + def __eq__(self, other): + if not isinstance(self, type(other)): + return False + return self.message == other.message + + def __repr__(self): + return "{.__class__.__name__}({!r})".format(self, self.message) + + @classmethod + def from_name(cls, name: str, message: str): + if name == 'unknown': + # unknown is special + return UnknownStatus() + else: + return cls._statuses[name](message) + + @classmethod + def register(cls, child): + if child.name is None: + raise AttributeError('cannot register a Status which has no name') + cls._statuses[child.name] = child + return child + + +@StatusBase.register +class UnknownStatus(StatusBase): + """The unit status is unknown. + + A unit-agent has finished calling install, config-changed and start, but the + charm has not called status-set yet. + + """ + name = 'unknown' + + def __init__(self): + # Unknown status cannot be set and does not have a message associated with it. + super().__init__('') + + def __repr__(self): + return "UnknownStatus()" + + +@StatusBase.register +class ActiveStatus(StatusBase): + """The unit is ready. + + The unit believes it is correctly offering all the services it has been asked to offer. + """ + name = 'active' + + def __init__(self, message: str = ''): + super().__init__(message) + + +@StatusBase.register +class BlockedStatus(StatusBase): + """The unit requires manual intervention. + + An operator has to manually intervene to unblock the unit and let it proceed. + """ + name = 'blocked' + + +@StatusBase.register +class MaintenanceStatus(StatusBase): + """The unit is performing maintenance tasks. + + The unit is not yet providing services, but is actively doing work in preparation + for providing those services. This is a "spinning" state, not an error state. It + reflects activity on the unit itself, not on peers or related units. + + """ + name = 'maintenance' + + +@StatusBase.register +class WaitingStatus(StatusBase): + """A unit is unable to progress. + + The unit is unable to progress to an active state because an application to which + it is related is not running. + + """ + name = 'waiting' + + +class Resources: + """Object representing resources for the charm. + """ + + def __init__(self, names: typing.Iterable[str], backend: '_ModelBackend'): + self._backend = backend + self._paths = {name: None for name in names} + + def fetch(self, name: str) -> Path: + """Fetch the resource from the controller or store. + + If successfully fetched, this returns a Path object to where the resource is stored + on disk, otherwise it raises a ModelError. 
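+
+        Example (an illustrative sketch; it assumes a resource named 'software'
+        is declared in metadata.yaml)::
+
+            path = self.model.resources.fetch('software')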
+ """ + if name not in self._paths: + raise RuntimeError('invalid resource name: {}'.format(name)) + if self._paths[name] is None: + self._paths[name] = Path(self._backend.resource_get(name)) + return self._paths[name] + + +class Pod: + """Represents the definition of a pod spec in Kubernetes models. + + Currently only supports simple access to setting the Juju pod spec via :attr:`.set_spec`. + """ + + def __init__(self, backend: '_ModelBackend'): + self._backend = backend + + def set_spec(self, spec: typing.Mapping, k8s_resources: typing.Mapping = None): + """Set the specification for pods that Juju should start in kubernetes. + + See `juju help-tool pod-spec-set` for details of what should be passed. + Args: + spec: The mapping defining the pod specification + k8s_resources: Additional kubernetes specific specification. + + Returns: + """ + if not self._backend.is_leader(): + raise ModelError('cannot set a pod spec as this unit is not a leader') + self._backend.pod_spec_set(spec, k8s_resources) + + +class StorageMapping(Mapping): + """Map of storage names to lists of Storage instances.""" + + def __init__(self, storage_names: typing.Iterable[str], backend: '_ModelBackend'): + self._backend = backend + self._storage_map = {storage_name: None for storage_name in storage_names} + + def __contains__(self, key: str): + return key in self._storage_map + + def __len__(self): + return len(self._storage_map) + + def __iter__(self): + return iter(self._storage_map) + + def __getitem__(self, storage_name: str) -> typing.List['Storage']: + storage_list = self._storage_map[storage_name] + if storage_list is None: + storage_list = self._storage_map[storage_name] = [] + for storage_id in self._backend.storage_list(storage_name): + storage_list.append(Storage(storage_name, storage_id, self._backend)) + return storage_list + + def request(self, storage_name: str, count: int = 1): + """Requests new storage instances of a given name. + + Uses storage-add tool to request additional storage. Juju will notify the unit + via -storage-attached events when it becomes available. + """ + if storage_name not in self._storage_map: + raise ModelError(('cannot add storage {!r}:' + ' it is not present in the charm metadata').format(storage_name)) + self._backend.storage_add(storage_name, count) + + +class Storage: + """"Represents a storage as defined in metadata.yaml + + Attributes: + name: Simple string name of the storage + id: The provider id for storage + """ + + def __init__(self, storage_name, storage_id, backend): + self.name = storage_name + self.id = storage_id + self._backend = backend + self._location = None + + @property + def location(self): + if self._location is None: + raw = self._backend.storage_get('{}/{}'.format(self.name, self.id), "location") + self._location = Path(raw) + return self._location + + +class ModelError(Exception): + """Base class for exceptions raised when interacting with the Model.""" + pass + + +class TooManyRelatedAppsError(ModelError): + """Raised by :meth:`Model.get_relation` if there is more than one related application.""" + + def __init__(self, relation_name, num_related, max_supported): + super().__init__('Too many remote applications on {} ({} > {})'.format( + relation_name, num_related, max_supported)) + self.relation_name = relation_name + self.num_related = num_related + self.max_supported = max_supported + + +class RelationDataError(ModelError): + """Raised by ``Relation.data[entity][key] = 'foo'`` if the data is invalid. 
+ + This is raised if you're either trying to set a value to something that isn't a string, + or if you are trying to set a value in a bucket that you don't have access to. (eg, + another application/unit or setting your application data but you aren't the leader.) + """ + + +class RelationNotFoundError(ModelError): + """Backend error when querying juju for a given relation and that relation doesn't exist.""" + + +class InvalidStatusError(ModelError): + """Raised if trying to set an Application or Unit status to something invalid.""" + + +class _ModelBackend: + """Represents the connection between the Model representation and talking to Juju. + + Charm authors should not directly interact with the ModelBackend, it is a private + implementation of Model. + """ + + LEASE_RENEWAL_PERIOD = datetime.timedelta(seconds=30) + + def __init__(self, unit_name=None, model_name=None): + if unit_name is None: + self.unit_name = os.environ['JUJU_UNIT_NAME'] + else: + self.unit_name = unit_name + if model_name is None: + model_name = os.environ.get('JUJU_MODEL_NAME') + self.model_name = model_name + self.app_name = self.unit_name.split('/')[0] + + self._is_leader = None + self._leader_check_time = None + + def _run(self, *args, return_output=False, use_json=False): + kwargs = dict(stdout=PIPE, stderr=PIPE) + if use_json: + args += ('--format=json',) + try: + result = run(args, check=True, **kwargs) + except CalledProcessError as e: + raise ModelError(e.stderr) + if return_output: + if result.stdout is None: + return '' + else: + text = result.stdout.decode('utf8') + if use_json: + return json.loads(text) + else: + return text + + def relation_ids(self, relation_name): + relation_ids = self._run('relation-ids', relation_name, return_output=True, use_json=True) + return [int(relation_id.split(':')[-1]) for relation_id in relation_ids] + + def relation_list(self, relation_id): + try: + return self._run('relation-list', '-r', str(relation_id), + return_output=True, use_json=True) + except ModelError as e: + if 'relation not found' in str(e): + raise RelationNotFoundError() from e + raise + + def relation_get(self, relation_id, member_name, is_app): + if not isinstance(is_app, bool): + raise TypeError('is_app parameter to relation_get must be a boolean') + + if is_app: + version = JujuVersion.from_environ() + if not version.has_app_data(): + raise RuntimeError( + 'getting application data is not supported on Juju version {}'.format(version)) + + args = ['relation-get', '-r', str(relation_id), '-', member_name] + if is_app: + args.append('--app') + + try: + return self._run(*args, return_output=True, use_json=True) + except ModelError as e: + if 'relation not found' in str(e): + raise RelationNotFoundError() from e + raise + + def relation_set(self, relation_id, key, value, is_app): + if not isinstance(is_app, bool): + raise TypeError('is_app parameter to relation_set must be a boolean') + + if is_app: + version = JujuVersion.from_environ() + if not version.has_app_data(): + raise RuntimeError( + 'setting application data is not supported on Juju version {}'.format(version)) + + args = ['relation-set', '-r', str(relation_id), '{}={}'.format(key, value)] + if is_app: + args.append('--app') + + try: + return self._run(*args) + except ModelError as e: + if 'relation not found' in str(e): + raise RelationNotFoundError() from e + raise + + def config_get(self): + return self._run('config-get', return_output=True, use_json=True) + + def is_leader(self): + """Obtain the current leadership status for the unit the charm 
code is executing on.
+
+        The value is cached for the duration of a lease which is 30s in Juju.
+        """
+        now = time.monotonic()
+        if self._leader_check_time is None:
+            check = True
+        else:
+            time_since_check = datetime.timedelta(seconds=now - self._leader_check_time)
+            check = (time_since_check > self.LEASE_RENEWAL_PERIOD or self._is_leader is None)
+        if check:
+            # Current time MUST be saved before running is-leader to ensure the cache
+            # is only used inside the window that is-leader itself asserts.
+            self._leader_check_time = now
+            self._is_leader = self._run('is-leader', return_output=True, use_json=True)
+
+        return self._is_leader
+
+    def resource_get(self, resource_name):
+        return self._run('resource-get', resource_name, return_output=True).strip()
+
+    def pod_spec_set(self, spec, k8s_resources):
+        tmpdir = Path(tempfile.mkdtemp('-pod-spec-set'))
+        try:
+            spec_path = tmpdir / 'spec.json'
+            spec_path.write_text(json.dumps(spec))
+            args = ['--file', str(spec_path)]
+            if k8s_resources:
+                k8s_res_path = tmpdir / 'k8s-resources.json'
+                k8s_res_path.write_text(json.dumps(k8s_resources))
+                args.extend(['--k8s-resources', str(k8s_res_path)])
+            self._run('pod-spec-set', *args)
+        finally:
+            shutil.rmtree(str(tmpdir))
+
+    def status_get(self, *, is_app=False):
+        """Get the status of a unit or an application.
+
+        Args:
+            is_app: A boolean indicating whether the status should be retrieved for a unit
+                or an application.
+        """
+        content = self._run(
+            'status-get', '--include-data', '--application={}'.format(is_app),
+            use_json=True,
+            return_output=True)
+        # Unit status looks like (in YAML):
+        # message: 'load: 0.28 0.26 0.26'
+        # status: active
+        # status-data: {}
+        # Application status looks like (in YAML):
+        # application-status:
+        #   message: 'load: 0.28 0.26 0.26'
+        #   status: active
+        #   status-data: {}
+        #   units:
+        #     uo/0:
+        #       message: 'load: 0.28 0.26 0.26'
+        #       status: active
+        #       status-data: {}
+
+        if is_app:
+            return {'status': content['application-status']['status'],
+                    'message': content['application-status']['message']}
+        else:
+            return content
+
+    def status_set(self, status, message='', *, is_app=False):
+        """Set the status of a unit or an application.
+
+        Args:
+            is_app: A boolean indicating whether the status should be set for a unit or an
+                application.
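+
+        Example (illustrative only; charm code should normally set status via the
+        model, e.g. ``self.model.unit.status = MaintenanceStatus('...')``)::
+
+            backend.status_set('maintenance', 'installing packages', is_app=False)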
+ """ + if not isinstance(is_app, bool): + raise TypeError('is_app parameter must be boolean') + return self._run('status-set', '--application={}'.format(is_app), status, message) + + def storage_list(self, name): + return [int(s.split('/')[1]) for s in self._run('storage-list', name, + return_output=True, use_json=True)] + + def storage_get(self, storage_name_id, attribute): + return self._run('storage-get', '-s', storage_name_id, attribute, + return_output=True, use_json=True) + + def storage_add(self, name, count=1): + if not isinstance(count, int) or isinstance(count, bool): + raise TypeError('storage count must be integer, got: {} ({})'.format(count, + type(count))) + self._run('storage-add', '{}={}'.format(name, count)) + + def action_get(self): + return self._run('action-get', return_output=True, use_json=True) + + def action_set(self, results): + self._run('action-set', *["{}={}".format(k, v) for k, v in results.items()]) + + def action_log(self, message): + self._run('action-log', message) + + def action_fail(self, message=''): + self._run('action-fail', message) + + def application_version_set(self, version): + self._run('application-version-set', '--', version) + + def juju_log(self, level, message): + self._run('juju-log', '--log-level', level, message) + + def network_get(self, binding_name, relation_id=None): + """Return network info provided by network-get for a given binding. + + Args: + binding_name: A name of a binding (relation name or extra-binding name). + relation_id: An optional relation id to get network info for. + """ + cmd = ['network-get', binding_name] + if relation_id is not None: + cmd.extend(['-r', str(relation_id)]) + try: + return self._run(*cmd, return_output=True, use_json=True) + except ModelError as e: + if 'relation not found' in str(e): + raise RelationNotFoundError() from e + raise + + def add_metrics(self, metrics, labels=None): + cmd = ['add-metric'] + + if labels: + label_args = [] + for k, v in labels.items(): + _ModelBackendValidator.validate_metric_label(k) + _ModelBackendValidator.validate_label_value(k, v) + label_args.append('{}={}'.format(k, v)) + cmd.extend(['--labels', ','.join(label_args)]) + + metric_args = [] + for k, v in metrics.items(): + _ModelBackendValidator.validate_metric_key(k) + metric_value = _ModelBackendValidator.format_metric_value(v) + metric_args.append('{}={}'.format(k, metric_value)) + cmd.extend(metric_args) + self._run(*cmd) + + +class _ModelBackendValidator: + """Provides facilities for validating inputs and formatting them for model backends.""" + + METRIC_KEY_REGEX = re.compile(r'^[a-zA-Z](?:[a-zA-Z0-9-_]*[a-zA-Z0-9])?$') + + @classmethod + def validate_metric_key(cls, key): + if cls.METRIC_KEY_REGEX.match(key) is None: + raise ModelError( + 'invalid metric key {!r}: must match {}'.format( + key, cls.METRIC_KEY_REGEX.pattern)) + + @classmethod + def validate_metric_label(cls, label_name): + if cls.METRIC_KEY_REGEX.match(label_name) is None: + raise ModelError( + 'invalid metric label name {!r}: must match {}'.format( + label_name, cls.METRIC_KEY_REGEX.pattern)) + + @classmethod + def format_metric_value(cls, value): + try: + decimal_value = decimal.Decimal.from_float(value) + except TypeError as e: + e2 = ModelError('invalid metric value {!r} provided:' + ' must be a positive finite float'.format(value)) + raise e2 from e + if decimal_value.is_nan() or decimal_value.is_infinite() or decimal_value < 0: + raise ModelError('invalid metric value {!r} provided:' + ' must be a positive finite float'.format(value)) + 
return str(decimal_value) + + @classmethod + def validate_label_value(cls, label, value): + # Label values cannot be empty, contain commas or equal signs as those are + # used by add-metric as separators. + if not value: + raise ModelError( + 'metric label {} has an empty value, which is not allowed'.format(label)) + v = str(value) + if re.search('[,=]', v) is not None: + raise ModelError( + 'metric label values must not contain "," or "=": {}={!r}'.format(label, value)) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/ops/storage.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/ops/storage.py new file mode 100755 index 0000000000000000000000000000000000000000..d4310ce1cfbb707c6278b70f84a9751da3ce07af --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/ops/storage.py @@ -0,0 +1,318 @@ +# Copyright 2019-2020 Canonical Ltd. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +from datetime import timedelta +import pickle +import shutil +import subprocess +import sqlite3 +import typing + +import yaml + + +class SQLiteStorage: + + DB_LOCK_TIMEOUT = timedelta(hours=1) + + def __init__(self, filename): + # The isolation_level argument is set to None such that the implicit + # transaction management behavior of the sqlite3 module is disabled. + self._db = sqlite3.connect(str(filename), + isolation_level=None, + timeout=self.DB_LOCK_TIMEOUT.total_seconds()) + self._setup() + + def _setup(self): + # Make sure that the database is locked until the connection is closed, + # not until the transaction ends. + self._db.execute("PRAGMA locking_mode=EXCLUSIVE") + c = self._db.execute("BEGIN") + c.execute("SELECT count(name) FROM sqlite_master WHERE type='table' AND name='snapshot'") + if c.fetchone()[0] == 0: + # Keep in mind what might happen if the process dies somewhere below. + # The system must not be rendered permanently broken by that. + self._db.execute("CREATE TABLE snapshot (handle TEXT PRIMARY KEY, data BLOB)") + self._db.execute(''' + CREATE TABLE notice ( + sequence INTEGER PRIMARY KEY AUTOINCREMENT, + event_path TEXT, + observer_path TEXT, + method_name TEXT) + ''') + self._db.commit() + + def close(self): + self._db.close() + + def commit(self): + self._db.commit() + + # There's commit but no rollback. For abort to be supported, we'll need logic that + # can rollback decisions made by third-party code in terms of the internal state + # of objects that have been snapshotted, and hooks to let them know about it and + # take the needed actions to undo their logic until the last snapshot. + # This is doable but will increase significantly the chances for mistakes. + + def save_snapshot(self, handle_path: str, snapshot_data: typing.Any) -> None: + """Part of the Storage API, persist a snapshot data under the given handle. + + Args: + handle_path: The string identifying the snapshot. + snapshot_data: The data to be persisted. (as returned by Object.snapshot()). This + might be a dict/tuple/int, but must only contain 'simple' python types. 
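+
+        Example (an illustrative sketch; the handle path shown is hypothetical)::
+
+            store = SQLiteStorage(':memory:')
+            store.save_snapshot('MyObject[1]', {'foo': 'bar'})
+            assert store.load_snapshot('MyObject[1]') == {'foo': 'bar'}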
+ """ + # Use pickle for serialization, so the value remains portable. + raw_data = pickle.dumps(snapshot_data) + self._db.execute("REPLACE INTO snapshot VALUES (?, ?)", (handle_path, raw_data)) + + def load_snapshot(self, handle_path: str) -> typing.Any: + """Part of the Storage API, retrieve a snapshot that was previously saved. + + Args: + handle_path: The string identifying the snapshot. + Raises: + NoSnapshotError: if there is no snapshot for the given handle_path. + """ + c = self._db.cursor() + c.execute("SELECT data FROM snapshot WHERE handle=?", (handle_path,)) + row = c.fetchone() + if row: + return pickle.loads(row[0]) + raise NoSnapshotError(handle_path) + + def drop_snapshot(self, handle_path: str): + """Part of the Storage API, remove a snapshot that was previously saved. + + Dropping a snapshot that doesn't exist is treated as a no-op. + """ + self._db.execute("DELETE FROM snapshot WHERE handle=?", (handle_path,)) + + def list_snapshots(self) -> typing.Generator[str, None, None]: + """Return the name of all snapshots that are currently saved.""" + c = self._db.cursor() + c.execute("SELECT handle FROM snapshot") + while True: + rows = c.fetchmany() + if not rows: + break + for row in rows: + yield row[0] + + def save_notice(self, event_path: str, observer_path: str, method_name: str) -> None: + """Part of the Storage API, record an notice (event and observer)""" + self._db.execute('INSERT INTO notice VALUES (NULL, ?, ?, ?)', + (event_path, observer_path, method_name)) + + def drop_notice(self, event_path: str, observer_path: str, method_name: str) -> None: + """Part of the Storage API, remove a notice that was previously recorded.""" + self._db.execute(''' + DELETE FROM notice + WHERE event_path=? + AND observer_path=? + AND method_name=? + ''', (event_path, observer_path, method_name)) + + def notices(self, event_path: typing.Optional[str]) ->\ + typing.Generator[typing.Tuple[str, str, str], None, None]: + """Part of the Storage API, return all notices that begin with event_path. + + Args: + event_path: If supplied, will only yield events that match event_path. If not + supplied (or None/'') will return all events. + Returns: + Iterable of (event_path, observer_path, method_name) tuples + """ + if event_path: + c = self._db.execute(''' + SELECT event_path, observer_path, method_name + FROM notice + WHERE event_path=? + ORDER BY sequence + ''', (event_path,)) + else: + c = self._db.execute(''' + SELECT event_path, observer_path, method_name + FROM notice + ORDER BY sequence + ''') + while True: + rows = c.fetchmany() + if not rows: + break + for row in rows: + yield tuple(row) + + +class JujuStorage: + """"Storing the content tracked by the Framework in Juju. + + This uses :class:`_JujuStorageBackend` to interact with state-get/state-set + as the way to store state for the framework and for components. 
+ """ + + NOTICE_KEY = "#notices#" + + def __init__(self, backend: '_JujuStorageBackend' = None): + self._backend = backend + if backend is None: + self._backend = _JujuStorageBackend() + + def close(self): + return + + def commit(self): + return + + def save_snapshot(self, handle_path: str, snapshot_data: typing.Any) -> None: + self._backend.set(handle_path, snapshot_data) + + def load_snapshot(self, handle_path): + try: + content = self._backend.get(handle_path) + except KeyError: + raise NoSnapshotError(handle_path) + return content + + def drop_snapshot(self, handle_path): + self._backend.delete(handle_path) + + def save_notice(self, event_path: str, observer_path: str, method_name: str): + notice_list = self._load_notice_list() + notice_list.append([event_path, observer_path, method_name]) + self._save_notice_list(notice_list) + + def drop_notice(self, event_path: str, observer_path: str, method_name: str): + notice_list = self._load_notice_list() + notice_list.remove([event_path, observer_path, method_name]) + self._save_notice_list(notice_list) + + def notices(self, event_path: str): + notice_list = self._load_notice_list() + for row in notice_list: + if row[0] != event_path: + continue + yield tuple(row) + + def _load_notice_list(self) -> typing.List[typing.Tuple[str]]: + try: + notice_list = self._backend.get(self.NOTICE_KEY) + except KeyError: + return [] + if notice_list is None: + return [] + return notice_list + + def _save_notice_list(self, notices: typing.List[typing.Tuple[str]]) -> None: + self._backend.set(self.NOTICE_KEY, notices) + + +class _SimpleLoader(getattr(yaml, 'CSafeLoader', yaml.SafeLoader)): + """Handle a couple basic python types. + + yaml.SafeLoader can handle all the basic int/float/dict/set/etc that we want. The only one + that it *doesn't* handle is tuples. We don't want to support arbitrary types, so we just + subclass SafeLoader and add tuples back in. + """ + # Taken from the example at: + # https://stackoverflow.com/questions/9169025/how-can-i-add-a-python-tuple-to-a-yaml-file-using-pyyaml + + construct_python_tuple = yaml.Loader.construct_python_tuple + + +_SimpleLoader.add_constructor( + u'tag:yaml.org,2002:python/tuple', + _SimpleLoader.construct_python_tuple) + + +class _SimpleDumper(getattr(yaml, 'CSafeDumper', yaml.SafeDumper)): + """Add types supported by 'marshal' + + YAML can support arbitrary types, but that is generally considered unsafe (like pickle). So + we want to only support dumping out types that are safe to load. + """ + + +_SimpleDumper.represent_tuple = yaml.Dumper.represent_tuple +_SimpleDumper.add_representer(tuple, _SimpleDumper.represent_tuple) + + +class _JujuStorageBackend: + """Implements the interface from the Operator framework to Juju's state-get/set/etc.""" + + @staticmethod + def is_available() -> bool: + """Check if Juju state storage is available. + + This checks if there is a 'state-get' executable in PATH. + """ + p = shutil.which('state-get') + return p is not None + + def set(self, key: str, value: typing.Any) -> None: + """Set a key to a given value. + + Args: + key: The string key that will be used to find the value later + value: Arbitrary content that will be returned by get(). + Raises: + CalledProcessError: if 'state-set' returns an error code. + """ + # default_flow_style=None means that it can use Block for + # complex types (types that have nested types) but use flow + # for simple types (like an array). Not all versions of PyYAML + # have the same default style. 
+        encoded_value = yaml.dump(value, Dumper=_SimpleDumper, default_flow_style=None)
+        content = yaml.dump(
+            {key: encoded_value}, encoding='utf-8', default_style='|',
+            default_flow_style=False,
+            Dumper=_SimpleDumper)
+        subprocess.run(["state-set", "--file", "-"], input=content, check=True)
+
+    def get(self, key: str) -> typing.Any:
+        """Get the bytes value associated with a given key.
+
+        Args:
+            key: The string key that will be used to find the value
+        Raises:
+            CalledProcessError: if 'state-get' returns an error code.
+        """
+        # We don't capture stderr here so it can end up in debug logs.
+        p = subprocess.run(
+            ["state-get", key],
+            stdout=subprocess.PIPE,
+            check=True,
+        )
+        if p.stdout == b'' or p.stdout == b'\n':
+            raise KeyError(key)
+        return yaml.load(p.stdout, Loader=_SimpleLoader)
+
+    def delete(self, key: str) -> None:
+        """Remove a key from being tracked.
+
+        Args:
+            key: The key to stop storing
+        Raises:
+            CalledProcessError: if 'state-delete' returns an error code.
+        """
+        subprocess.run(["state-delete", key], check=True)
+
+
+class NoSnapshotError(Exception):
+
+    def __init__(self, handle_path):
+        self.handle_path = handle_path
+
+    def __str__(self):
+        return 'no snapshot data found for {} object'.format(self.handle_path)
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/ops/testing.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/ops/testing.py
new file mode 100755
index 0000000000000000000000000000000000000000..b4b3fe071216238007c9f3847ca9556be626bf6b
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/ops/testing.py
@@ -0,0 +1,586 @@
+# Copyright 2020 Canonical Ltd.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import inspect
+import pathlib
+from textwrap import dedent
+import tempfile
+import typing
+import yaml
+import weakref
+
+from ops import (
+    charm,
+    framework,
+    model,
+    storage,
+)
+
+
+# OptionalYAML is something like metadata.yaml or actions.yaml. You can
+# pass in a file-like object or the string directly.
+OptionalYAML = typing.Optional[typing.Union[str, typing.TextIO]]
+
+
+# noinspection PyProtectedMember
+class Harness:
+    """This class represents a way to build up the model that will drive a test suite.
+
+    The model that is created is from the viewpoint of the charm that you are testing.
+
+    Example::
+
+        harness = Harness(MyCharm)
+        # Do initial setup here
+        relation_id = harness.add_relation('db', 'postgresql')
+        # Now instantiate the charm to see events as the model changes
+        harness.begin()
+        harness.add_relation_unit(relation_id, 'postgresql/0')
+        harness.update_relation_data(relation_id, 'postgresql/0', {'key': 'val'})
+        # Check that charm has properly handled the relation_joined event for postgresql/0
+        self.assertEqual(harness.charm. ...)
+
+    Args:
+        charm_cls: The Charm class that you'll be testing.
+        meta: A string or file-like object containing the contents of
+            metadata.yaml. If not supplied, we will look for a 'metadata.yaml' file in the
+            parent directory of the Charm, and if not found fall back to a trivial
+            'name: test-charm' metadata.
+        actions: A string or file-like object containing the contents of
+            actions.yaml. If not supplied, we will look for an 'actions.yaml' file in the
+            parent directory of the Charm.
+    """
+
+    def __init__(
+            self,
+            charm_cls: typing.Type[charm.CharmBase],
+            *,
+            meta: OptionalYAML = None,
+            actions: OptionalYAML = None):
+        # TODO: jam 2020-03-05 We probably want to take config as a parameter as well, since
+        #       it would define the default values of config that the charm would see.
+        self._charm_cls = charm_cls
+        self._charm = None
+        self._charm_dir = 'no-disk-path'  # this may be updated by _create_meta
+        self._lazy_resource_dir = None
+        self._meta = self._create_meta(meta, actions)
+        self._unit_name = self._meta.name + '/0'
+        self._framework = None
+        self._hooks_enabled = True
+        self._relation_id_counter = 0
+        self._backend = _TestingModelBackend(self._unit_name, self._meta)
+        self._model = model.Model(self._meta, self._backend)
+        self._storage = storage.SQLiteStorage(':memory:')
+        self._framework = framework.Framework(
+            self._storage, self._charm_dir, self._meta, self._model)
+
+    @property
+    def charm(self) -> charm.CharmBase:
+        """Return the instance of the charm class that was passed to __init__.
+
+        Note that the Charm is not instantiated until you have called
+        :meth:`.begin()`.
+        """
+        return self._charm
+
+    @property
+    def model(self) -> model.Model:
+        """Return the :class:`~ops.model.Model` that is being driven by this Harness."""
+        return self._model
+
+    @property
+    def framework(self) -> framework.Framework:
+        """Return the Framework that is being driven by this Harness."""
+        return self._framework
+
+    @property
+    def _resource_dir(self) -> pathlib.Path:
+        if self._lazy_resource_dir is not None:
+            return self._lazy_resource_dir
+
+        self.__resource_dir = tempfile.TemporaryDirectory()
+        self._lazy_resource_dir = pathlib.Path(self.__resource_dir.name)
+        self._finalizer = weakref.finalize(self, self.__resource_dir.cleanup)
+        return self._lazy_resource_dir
+
+    def begin(self) -> None:
+        """Instantiate the Charm and start handling events.
+
+        Before calling begin(), there is no Charm instance, so changes to the Model won't emit
+        events. You must call begin before :attr:`.charm` is valid.
+        """
+        if self._charm is not None:
+            raise RuntimeError('cannot call the begin method on the harness more than once')
+
+        # The Framework adds attributes to class objects for events, etc. As such, we can't re-use
+        # the original class against multiple Frameworks. So create a locally defined class
+        # and register it.
+        # TODO: jam 2020-03-16 We are looking to change this to Instance attributes instead of
+        #       Class attributes which should clean up this ugliness. The API can stay the same
+        class TestEvents(self._charm_cls.on.__class__):
+            pass
+
+        TestEvents.__name__ = self._charm_cls.on.__class__.__name__
+
+        class TestCharm(self._charm_cls):
+            on = TestEvents()
+
+        # Note: jam 2020-03-01 This is so that errors in testing say MyCharm has no attribute foo,
+        # rather than TestCharm has no attribute foo.
+        TestCharm.__name__ = self._charm_cls.__name__
+        self._charm = TestCharm(self._framework)
+
+    def _create_meta(self, charm_metadata, action_metadata):
+        """Create a CharmMeta object.
+
+        Handle the cases where a user doesn't supply explicit metadata snippets.
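+
+        Example (an illustrative sketch of supplying metadata through the Harness;
+        ``MyCharm`` is a stand-in for your charm class)::
+
+            harness = Harness(MyCharm, meta='''
+                name: my-charm
+                peers:
+                  cluster:
+                    interface: cluster
+                ''')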
+ """ + filename = inspect.getfile(self._charm_cls) + charm_dir = pathlib.Path(filename).parents[1] + + if charm_metadata is None: + metadata_path = charm_dir / 'metadata.yaml' + if metadata_path.is_file(): + charm_metadata = metadata_path.read_text() + self._charm_dir = charm_dir + else: + # The simplest of metadata that the framework can support + charm_metadata = 'name: test-charm' + elif isinstance(charm_metadata, str): + charm_metadata = dedent(charm_metadata) + + if action_metadata is None: + actions_path = charm_dir / 'actions.yaml' + if actions_path.is_file(): + action_metadata = actions_path.read_text() + self._charm_dir = charm_dir + elif isinstance(action_metadata, str): + action_metadata = dedent(action_metadata) + + return charm.CharmMeta.from_yaml(charm_metadata, action_metadata) + + def add_oci_resource(self, resource_name: str, + contents: typing.Mapping[str, str] = None) -> None: + """Add oci resources to the backend. + + This will register an oci resource and create a temporary file for processing metadata + about the resource. A default set of values will be used for all the file contents + unless a specific contents dict is provided. + + Args: + resource_name: Name of the resource to add custom contents to. + contents: Optional custom dict to write for the named resource. + """ + if not contents: + contents = {'registrypath': 'registrypath', + 'username': 'username', + 'password': 'password', + } + if resource_name not in self._meta.resources.keys(): + raise RuntimeError('Resource {} is not a defined resources'.format(resource_name)) + if self._meta.resources[resource_name].type != "oci-image": + raise RuntimeError('Resource {} is not an OCI Image'.format(resource_name)) + resource_dir = self._resource_dir / resource_name + resource_dir.mkdir(exist_ok=True) + resource_file = resource_dir / "contents.yaml" + with resource_file.open('wt', encoding='utf8') as resource_yaml: + yaml.dump(contents, resource_yaml) + self._backend._resources_map[resource_name] = resource_file + + def populate_oci_resources(self) -> None: + """Populate all OCI resources.""" + for name, data in self._meta.resources.items(): + if data.type == "oci-image": + self.add_oci_resource(name) + + def disable_hooks(self) -> None: + """Stop emitting hook events when the model changes. + + This can be used by developers to stop changes to the model from emitting events that + the charm will react to. Call :meth:`.enable_hooks` + to re-enable them. + """ + self._hooks_enabled = False + + def enable_hooks(self) -> None: + """Re-enable hook events from charm.on when the model is changed. + + By default hook events are enabled once you call :meth:`.begin`, + but if you have used :meth:`.disable_hooks`, this can be used to + enable them again. + """ + self._hooks_enabled = True + + def _next_relation_id(self): + rel_id = self._relation_id_counter + self._relation_id_counter += 1 + return rel_id + + def add_relation(self, relation_name: str, remote_app: str) -> int: + """Declare that there is a new relation between this app and `remote_app`. + + Args: + relation_name: The relation on Charm that is being related to + remote_app: The name of the application that is being related to + + Return: + The relation_id created by this add_relation. 
+ """ + rel_id = self._next_relation_id() + self._backend._relation_ids_map.setdefault(relation_name, []).append(rel_id) + self._backend._relation_names[rel_id] = relation_name + self._backend._relation_list_map[rel_id] = [] + self._backend._relation_data[rel_id] = { + remote_app: {}, + self._backend.unit_name: {}, + self._backend.app_name: {}, + } + # Reload the relation_ids list + if self._model is not None: + self._model.relations._invalidate(relation_name) + if self._charm is None or not self._hooks_enabled: + return rel_id + relation = self._model.get_relation(relation_name, rel_id) + app = self._model.get_app(remote_app) + self._charm.on[relation_name].relation_created.emit( + relation, app) + return rel_id + + def add_relation_unit(self, relation_id: int, remote_unit_name: str) -> None: + """Add a new unit to a relation. + + Example:: + + rel_id = harness.add_relation('db', 'postgresql') + harness.add_relation_unit(rel_id, 'postgresql/0') + + This will trigger a `relation_joined` event and a `relation_changed` event. + + Args: + relation_id: The integer relation identifier (as returned by add_relation). + remote_unit_name: A string representing the remote unit that is being added. + Return: + None + """ + self._backend._relation_list_map[relation_id].append(remote_unit_name) + self._backend._relation_data[relation_id][remote_unit_name] = {} + relation_name = self._backend._relation_names[relation_id] + # Make sure that the Model reloads the relation_list for this relation_id, as well as + # reloading the relation data for this unit. + if self._model is not None: + remote_unit = self._model.get_unit(remote_unit_name) + relation = self._model.get_relation(relation_name, relation_id) + unit_cache = relation.data.get(remote_unit, None) + if unit_cache is not None: + unit_cache._invalidate() + self._model.relations._invalidate(relation_name) + if self._charm is None or not self._hooks_enabled: + return + self._charm.on[relation_name].relation_joined.emit( + relation, remote_unit.app, remote_unit) + + def get_relation_data(self, relation_id: int, app_or_unit: str) -> typing.Mapping: + """Get the relation data bucket for a single app or unit in a given relation. + + This ignores all of the safety checks of who can and can't see data in relations (eg, + non-leaders can't read their own application's relation data because there are no events + that keep that data up-to-date for the unit). + + Args: + relation_id: The relation whose content we want to look at. + app_or_unit: The name of the application or unit whose data we want to read + Return: + a dict containing the relation data for `app_or_unit` or None. + Raises: + KeyError: if relation_id doesn't exist + """ + return self._backend._relation_data[relation_id].get(app_or_unit, None) + + def get_workload_version(self) -> str: + """Read the workload version that was set by the unit.""" + return self._backend._workload_version + + def set_model_name(self, name: str) -> None: + """Set the name of the Model that this is representing. + + This cannot be called once begin() has been called. But it lets you set the value that + will be returned by Model.name. + """ + if self._charm is not None: + raise RuntimeError('cannot set the Model name after begin()') + self._backend.model_name = name + + def update_relation_data( + self, + relation_id: int, + app_or_unit: str, + key_values: typing.Mapping, + ) -> None: + """Update the relation data for a given unit or application in a given relation. 
+ + This also triggers the `relation_changed` event for this relation_id. + + Args: + relation_id: The integer relation_id representing this relation. + app_or_unit: The unit or application name that is being updated. + This can be the local or remote application. + key_values: Each key/value will be updated in the relation data. + """ + relation_name = self._backend._relation_names[relation_id] + relation = self._model.get_relation(relation_name, relation_id) + if '/' in app_or_unit: + entity = self._model.get_unit(app_or_unit) + else: + entity = self._model.get_app(app_or_unit) + rel_data = relation.data.get(entity, None) + if rel_data is not None: + # rel_data may have cached now-stale data, so _invalidate() it. + # Note, this won't cause the data to be loaded if it wasn't already. + rel_data._invalidate() + + new_values = self._backend._relation_data[relation_id][app_or_unit].copy() + for k, v in key_values.items(): + if v == '': + new_values.pop(k, None) + else: + new_values[k] = v + self._backend._relation_data[relation_id][app_or_unit] = new_values + + if app_or_unit == self._model.unit.name: + # No events for our own unit + return + if app_or_unit == self._model.app.name: + # updating our own app only generates an event if it is a peer relation and we + # aren't the leader + is_peer = self._meta.relations[relation_name].role.is_peer() + if not is_peer: + return + if self._model.unit.is_leader(): + return + self._emit_relation_changed(relation_id, app_or_unit) + + def _emit_relation_changed(self, relation_id, app_or_unit): + if self._charm is None or not self._hooks_enabled: + return + rel_name = self._backend._relation_names[relation_id] + relation = self.model.get_relation(rel_name, relation_id) + if '/' in app_or_unit: + app_name = app_or_unit.split('/')[0] + unit_name = app_or_unit + app = self.model.get_app(app_name) + unit = self.model.get_unit(unit_name) + args = (relation, app, unit) + else: + app_name = app_or_unit + app = self.model.get_app(app_name) + args = (relation, app) + self._charm.on[rel_name].relation_changed.emit(*args) + + def update_config( + self, + key_values: typing.Mapping[str, str] = None, + unset: typing.Iterable[str] = (), + ) -> None: + """Update the config as seen by the charm. + + This will trigger a `config_changed` event. + + Args: + key_values: A Mapping of key:value pairs to update in config. + unset: An iterable of keys to remove from Config. (Note that this does + not currently reset the config values to the default defined in config.yaml.) + """ + config = self._backend._config + if key_values is not None: + for key, value in key_values.items(): + config[key] = value + for key in unset: + config.pop(key, None) + # NOTE: jam 2020-03-01 Note that this sort of works "by accident". Config + # is a LazyMapping, but its _load returns a dict and this method mutates + # the dict that Config is caching. Arguably we should be doing some sort + # of charm.framework.model.config._invalidate() + if self._charm is None or not self._hooks_enabled: + return + self._charm.on.config_changed.emit() + + def set_leader(self, is_leader: bool = True) -> None: + """Set whether this unit is the leader or not. + + If this charm becomes a leader then `leader_elected` will be triggered. + + Args: + is_leader: True/False as to whether this unit is the leader. 
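+
+        Example (illustrative; ``MyCharm`` is a stand-in for your charm class)::
+
+            harness = Harness(MyCharm)
+            harness.begin()
+            harness.set_leader(True)  # emits leader_elected on the charm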
+ """ + was_leader = self._backend._is_leader + self._backend._is_leader = is_leader + # Note: jam 2020-03-01 currently is_leader is cached at the ModelBackend level, not in + # the Model objects, so this automatically gets noticed. + if is_leader and not was_leader and self._charm is not None and self._hooks_enabled: + self._charm.on.leader_elected.emit() + + def _get_backend_calls(self, reset: bool = True) -> list: + """Return the calls that we have made to the TestingModelBackend. + + This is useful mostly for testing the framework itself, so that we can assert that we + do/don't trigger extra calls. + + Args: + reset: If True, reset the calls list back to empty, if false, the call list is + preserved. + Return: + ``[(call1, args...), (call2, args...)]`` + """ + calls = self._backend._calls.copy() + if reset: + self._backend._calls.clear() + return calls + + +def _record_calls(cls): + """Replace methods on cls with methods that record that they have been called. + + Iterate all attributes of cls, and for public methods, replace them with a wrapped method + that records the method called along with the arguments and keyword arguments. + """ + for meth_name, orig_method in cls.__dict__.items(): + if meth_name.startswith('_'): + continue + + def decorator(orig_method): + def wrapped(self, *args, **kwargs): + full_args = (orig_method.__name__,) + args + if kwargs: + full_args = full_args + (kwargs,) + self._calls.append(full_args) + return orig_method(self, *args, **kwargs) + return wrapped + + setattr(cls, meth_name, decorator(orig_method)) + return cls + + +@_record_calls +class _TestingModelBackend: + """This conforms to the interface for ModelBackend but provides canned data. + + DO NOT use this class directly, it is used by `Harness`_ to drive the model. + `Harness`_ is responsible for maintaining the internal consistency of the values here, + as the only public methods of this type are for implementing ModelBackend. + """ + + def __init__(self, unit_name, meta): + self.unit_name = unit_name + self.app_name = self.unit_name.split('/')[0] + self.model_name = None + self._calls = [] + self._meta = meta + self._is_leader = None + self._relation_ids_map = {} # relation name to [relation_ids,...] + self._relation_names = {} # reverse map from relation_id to relation_name + self._relation_list_map = {} # relation_id: [unit_name,...] 
+ self._relation_data = {} # {relation_id: {name: data}} + self._config = {} + self._is_leader = False + self._resources_map = {} + self._pod_spec = None + self._app_status = {'status': 'unknown', 'message': ''} + self._unit_status = {'status': 'maintenance', 'message': ''} + self._workload_version = None + + def relation_ids(self, relation_name): + try: + return self._relation_ids_map[relation_name] + except KeyError as e: + if relation_name not in self._meta.relations: + raise model.ModelError('{} is not a known relation'.format(relation_name)) from e + return [] + + def relation_list(self, relation_id): + try: + return self._relation_list_map[relation_id] + except KeyError as e: + raise model.RelationNotFoundError from e + + def relation_get(self, relation_id, member_name, is_app): + if is_app and '/' in member_name: + member_name = member_name.split('/')[0] + if relation_id not in self._relation_data: + raise model.RelationNotFoundError() + return self._relation_data[relation_id][member_name].copy() + + def relation_set(self, relation_id, key, value, is_app): + relation = self._relation_data[relation_id] + if is_app: + bucket_key = self.app_name + else: + bucket_key = self.unit_name + if bucket_key not in relation: + relation[bucket_key] = {} + bucket = relation[bucket_key] + if value == '': + bucket.pop(key, None) + else: + bucket[key] = value + + def config_get(self): + return self._config + + def is_leader(self): + return self._is_leader + + def application_version_set(self, version): + self._workload_version = version + + def resource_get(self, resource_name): + return self._resources_map[resource_name] + + def pod_spec_set(self, spec, k8s_resources): + self._pod_spec = (spec, k8s_resources) + + def status_get(self, *, is_app=False): + if is_app: + return self._app_status + else: + return self._unit_status + + def status_set(self, status, message='', *, is_app=False): + if is_app: + self._app_status = {'status': status, 'message': message} + else: + self._unit_status = {'status': status, 'message': message} + + def storage_list(self, name): + raise NotImplementedError(self.storage_list) + + def storage_get(self, storage_name_id, attribute): + raise NotImplementedError(self.storage_get) + + def storage_add(self, name, count=1): + raise NotImplementedError(self.storage_add) + + def action_get(self): + raise NotImplementedError(self.action_get) + + def action_set(self, results): + raise NotImplementedError(self.action_set) + + def action_log(self, message): + raise NotImplementedError(self.action_log) + + def action_fail(self, message=''): + raise NotImplementedError(self.action_fail) + + def network_get(self, endpoint_name, relation_id=None): + raise NotImplementedError(self.network_get) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/ops/version.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/ops/version.py new file mode 100644 index 0000000000000000000000000000000000000000..15e5478555ee0fa948bfb0ad57cc79ba7cef3721 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/lib/ops/version.py @@ -0,0 +1,50 @@ +# Copyright 2020 Canonical Ltd. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import subprocess
+from pathlib import Path
+
+__all__ = ('version',)
+
+_FALLBACK = '0.8'  # this gets bumped after release
+
+
+def _get_version():
+    version = _FALLBACK + ".dev0+unknown"
+
+    p = Path(__file__).parent
+    if (p.parent / '.git').exists():
+        try:
+            proc = subprocess.run(
+                ['git', 'describe', '--tags', '--dirty'],
+                stdout=subprocess.PIPE,
+                stderr=subprocess.DEVNULL,
+                cwd=p,
+                check=True)
+        except Exception:
+            pass
+        else:
+            version = proc.stdout.strip().decode('utf8')
+            if '-' in version:
+                # version will look like <tag>-<#commits>-g<hash>[-dirty]
+                # in terms of PEP 440, we'll make sure the tag is a 'public version identifier';
+                # everything after the first - needs to be a 'local version'
+                public, local = version.split('-', 1)
+                version = public + '+' + local.replace('-', '.')
+            # version now <tag>+<#commits>.g<hash>[.dirty]
+            # which is PEP 440-compliant (as long as <tag> is :-)
+    return version
+
+
+version = _get_version()
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/metadata.yaml b/SOL004/hackfest_firewall_pnf/charms/vyos-config/metadata.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..c331b60d88dcfc3f1c675e24282dacdf8ec4ffdb
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/metadata.yaml
@@ -0,0 +1,11 @@
+name: vyos-config
+summary: A proxy charm to configure VyOS Router
+maintainer: David García
+description: |
+  Charm to configure VyOS PNF
+series:
+  - xenial
+  - bionic
+peers:
+  proxypeer:
+    interface: proxypeer
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/.bzrignore b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/.bzrignore
new file mode 100644
index 0000000000000000000000000000000000000000..398f08fcbfae019fbf1fe40e9b1d49f8384393d9
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/.bzrignore
@@ -0,0 +1,17 @@
+*.pyc
+__pycache__/
+dist/
+build/
+MANIFEST
+charmhelpers.egg-info/
+charmhelpers/version.py
+.coverage
+.env/
+coverage.xml
+docs/_build
+.idea
+.project
+.pydevproject
+.settings
+.venv
+.venv3
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/.gitignore b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/.gitignore
new file mode 100644
index 0000000000000000000000000000000000000000..9b13d46d0a42cd97ff428ffa4f4ba0a9d9946806
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/.gitignore
@@ -0,0 +1,125 @@
+# Byte-compiled / optimized / DLL files
+__pycache__/
+*.py[cod]
+*$py.class
+
+# C extensions
+*.so
+
+# Distribution / packaging
+.Python
+build/
+develop-eggs/
+dist/
+downloads/
+eggs/
+.eggs/
+lib/
+lib64/
+parts/
+sdist/
+var/
+wheels/
+*.egg-info/
+.installed.cfg
+*.egg
+MANIFEST
+
+# PyInstaller
+# Usually these files are written by a python script from a template
+# before PyInstaller builds the exe, so as to inject date/other infos into it.
+*.manifest
+*.spec
+
+# Installer logs
+pip-log.txt
+pip-delete-this-directory.txt
+
+# Unit test / coverage reports
+htmlcov/
+.tox/
+.coverage
+.coverage.*
+.cache
+nosetests.xml
+coverage.xml
+*.cover
+.hypothesis/
+
+# Translations
+*.mo
+*.pot
+
+# Django stuff:
+*.log
+local_settings.py
+
+# Flask stuff:
+instance/
+.webassets-cache
+
+# Scrapy stuff:
+.scrapy
+
+# Sphinx documentation
+docs/_build/
+
+# PyBuilder
+target/
+
+# Jupyter Notebook
+.ipynb_checkpoints
+
+# pyenv
+.python-version
+
+# celery beat schedule file
+celerybeat-schedule
+
+# SageMath parsed files
+*.sage.py
+
+# Environments
+.env
+.venv*
+env/
+venv/
+ENV/
+env.bak/
+venv.bak/
+
+# Spyder project settings
+.spyderproject
+.spyproject
+
+# Rope project settings
+.ropeproject
+
+# mkdocs documentation
+/site
+
+# mypy
+.mypy_cache/
+
+*.pyc
+__pycache__/
+dist/
+build/
+MANIFEST
+charmhelpers.egg-info/
+charmhelpers/version.py
+.coverage
+.env/
+coverage.xml
+docs/_build
+.idea
+.project
+.pydevproject
+.settings
+.venv
+.venv3
+.bzr
+.unit-state.db
+
+AUTHORS
+ChangeLog
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/.travis.yml b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/.travis.yml
new file mode 100644
index 0000000000000000000000000000000000000000..7e67f092d935333e2e9324bc612b069761322bd6
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/.travis.yml
@@ -0,0 +1,32 @@
+dist: xenial
+sudo: required
+language: python
+addons:
+  snaps:
+    - name: juju
+      classic: true
+      channel: stable
+before_install:
+  - sudo apt -qq update
+  - sudo apt install -y libapt-pkg-dev # For python-apt wheel build
+# NOTE(beisner): Avoid test pollution by not enabling system site packages.
+virtualenv:
+  system_site_packages: false
+install:
+  - pip install tox
+matrix:
+  include:
+    - python: 2.7
+      env: ENV=pep8,py2
+    - python: 3.4
+      env: ENV=pep8,py3
+    - python: 3.5
+      env: ENV=pep8,py3
+    - python: 3.6
+      env: ENV=pep8,py3
+    - python: 3.7
+      env: ENV=pep8,py3
+    - python: 3.8
+      env: ENV=pep8,py3
+script:
+  - tox -c tox.ini -e $ENV
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/HACKING.md b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/HACKING.md
new file mode 100644
index 0000000000000000000000000000000000000000..d9d5996d0fc12622e63039db3b038ddfb5272362
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/HACKING.md
@@ -0,0 +1,102 @@
+# Hacking on charmhelpers
+
+## Run testsuite (tox method)
+
+CAUTION: the charm-helpers library has some unit tests which do unsavory things,
+such as making real, unmocked calls out to `sudo`, the juju binaries, and perhaps
+other tools. This is not ideal for a number of reasons, one of which is that it
+pollutes the test runner's (that is, your own) system.
+
+The current recommendation for testing locally is to do so in a fresh Xenial
+(16.04) lxc container (a minimal example of creating one is shown at the end of
+this section). 16.04 is selected for consistency with what is available in the
+Travis CI test gates. As of this writing, 18.04 is not available there.
+
+The fresh Xenial lxc system container will need to have the following packages
+installed in order to satisfy test runner dependencies:
+
+    sudo apt install git bzr tox libapt-pkg-dev python-dev python3-dev build-essential juju -y
+
+The tests can be executed as follows:
+
+    tox -e pep8
+    tox -e py3
+    tox -e py2
+
+See also: .travis.yml for what is happening in the test gate.
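+
+For example (an illustrative sketch; it assumes LXD is installed and
+initialised, and the container name `ch-test` is arbitrary), a disposable
+test container can be created and entered with:
+
+    lxc launch ubuntu:16.04 ch-test
+    lxc exec ch-test -- bash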
+
+## Run testsuite (legacy Makefile method)
+
+    make test
+
+Run `make` without arguments for more options.
+
+## Test it in a charm
+
+Use the following instructions to build a charm that uses your own development branch of
+charmhelpers.
+
+Step 1: Make sure your version of charmhelpers is recognised as the latest version by
+appending `dev0` to the version number in the `VERSION` file.
+
+Step 2: Create an override file `override-wheelhouse.txt` that points to your own
+charmhelpers branch. *The format of this file is the same as pip's
+[`requirements.txt`](https://pip.pypa.io/en/stable/reference/pip_install/#requirements-file-format)
+file.*
+
+    # Override charmhelpers by the version found in folder
+    -e /path/to/charmhelpers
+    # Or point it to a github repo with
+    -e git+https://github.com//charm-helpers#egg=charmhelpers
+
+Step 3: Build the charm specifying the override file. *You might need to install the
+candidate channel of the charm snap.*
+
+    charm build -w override-wheelhouse.txt
+
+Now when you deploy your charm, it will use your own branch of charmhelpers.
+
+*Note: If you want to verify this or change the charmhelpers code on a built
+charm, get the path of the installed charmhelpers by running the following command.*
+
+    python3 -c "import charmhelpers; print(charmhelpers.__file__)"
+
+
+# Hacking on Docs
+
+Install html doc dependencies:
+
+```bash
+sudo apt-get install python-flake8 python-shelltoolbox python-tempita \
+python-nose python-mock python-testtools python-jinja2 python-coverage \
+python-git python-netifaces python-netaddr python-pip zip
+```
+
+To build the html documentation:
+
+```bash
+make docs
+```
+
+To browse the html documentation locally:
+
+```bash
+make docs
+cd docs/_build/html
+python -m SimpleHTTPServer 8765
+# point web browser to http://localhost:8765
+```
+
+To build and upload package and doc updates to PyPI:
+
+```bash
+make release
+# note: if the package version already exists on PyPI
+# this command will upload doc updates only
+```
+
+# PyPI Package and Docs
+
+The published package and docs currently live at:
+
+    https://pypi.python.org/pypi/charmhelpers
+    http://pythonhosted.org/charmhelpers/
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/LICENSE b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/LICENSE
new file mode 100644
index 0000000000000000000000000000000000000000..d645695673349e3947e8e5ae42332d0ac3164cd7
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/LICENSE
@@ -0,0 +1,202 @@
+
+                                 Apache License
+                           Version 2.0, January 2004
+                        http://www.apache.org/licenses/
+
+   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+   1. Definitions.
+
+      "License" shall mean the terms and conditions for use, reproduction,
+      and distribution as defined by Sections 1 through 9 of this document.
+
+      "Licensor" shall mean the copyright owner or entity authorized by
+      the copyright owner that is granting the License.
+
+      "Legal Entity" shall mean the union of the acting entity and all
+      other entities that control, are controlled by, or are under common
+      control with that entity. For the purposes of this definition,
+      "control" means (i) the power, direct or indirect, to cause the
+      direction or management of such entity, whether by contract or
+      otherwise, or (ii) ownership of fifty percent (50%) or more of the
+      outstanding shares, or (iii) beneficial ownership of such entity.
+ + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. 
If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. 
Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright [yyyy] [name of copyright owner] + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. 
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/MANIFEST.in b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/MANIFEST.in new file mode 100644 index 0000000000000000000000000000000000000000..72bbf0ab78b68d6d0ebb8d354dd993e751d4be89 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/MANIFEST.in @@ -0,0 +1,7 @@ +include *.txt +include Makefile +include VERSION +include MANIFEST.in +include scripts/* +include README.rst +recursive-include debian * diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/Makefile b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/Makefile new file mode 100644 index 0000000000000000000000000000000000000000..8771f31a57b1af939e73a1dfe8c7e56be195cc87 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/Makefile @@ -0,0 +1,89 @@ +PROJECT=charmhelpers +PYTHON := /usr/bin/env python +SUITE=unstable +TESTS=tests/ + +all: + @echo "make source - Create source package" + @echo "make sdeb - Create debian source package" + @echo "make deb - Create debian package" + @echo "make clean" + @echo "make userinstall - Install locally" + @echo "make docs - Build html documentation" + @echo "make release - Build and upload package and docs to PyPI" + @echo "make test" + +sdeb: source + scripts/build source + +deb: source + scripts/build + +source: setup.py + scripts/update-revno + python setup.py sdist + +clean: + -python setup.py clean + rm -rf build/ MANIFEST + find . -name '*.pyc' -delete + find . -name '__pycache__' -delete + rm -rf dist/* + rm -rf .venv + rm -rf .venv3 + (which dh_clean && dh_clean) || true + +userinstall: + scripts/update-revno + python setup.py install --user + + +.venv: + dpkg-query -W -f='$${status}' gcc python-dev python-virtualenv 2>/dev/null | grep --invert-match "not-installed" || sudo apt-get install -y python-dev python-virtualenv + virtualenv .venv --system-site-packages + .venv/bin/pip install -U pip + .venv/bin/pip install -I -r test-requirements.txt + .venv/bin/pip install bzr + +.venv3: + dpkg-query -W -f='$${status}' gcc python3-dev python-virtualenv python3-apt 2>/dev/null | grep --invert-match "not-installed" || sudo apt-get install -y python3-dev python-virtualenv python3-apt + virtualenv .venv3 --python=python3 --system-site-packages + .venv3/bin/pip install -U pip + .venv3/bin/pip install -I -r test-requirements.txt + +# Note we don't even attempt to run tests if lint isn't passing. +test: lint test2 test3 + @echo OK + +test2: + @echo Starting Py2 tests... + .venv/bin/nosetests -s --nologcapture tests/ + +test3: + @echo Starting Py3 tests... + .venv3/bin/nosetests -s --nologcapture tests/ + +ftest: lint + @echo Starting fast tests... + .venv/bin/nosetests --attr '!slow' --nologcapture tests/ + .venv3/bin/nosetests --attr '!slow' --nologcapture tests/ + +lint: .venv .venv3 + @echo Checking for Python syntax... 
+ @.venv/bin/flake8 --ignore=E402,E501,W504 $(PROJECT) $(TESTS) tools/ \ + && echo Py2 OK + @.venv3/bin/flake8 --ignore=E402,E501,W504 $(PROJECT) $(TESTS) tools/ \ + && echo Py3 OK + +docs: + - [ -z "`dpkg -l | grep python-sphinx`" ] && sudo apt-get install python-sphinx -y + - [ -z "`dpkg -l | grep python-pip`" ] && sudo apt-get install python-pip -y + - [ -z "`pip list | grep -i sphinx-pypi-upload`" ] && sudo pip install sphinx-pypi-upload + - [ -z "`pip list | grep -i sphinx_rtd_theme`" ] && sudo pip install sphinx_rtd_theme + cd docs && make html && cd - +.PHONY: docs + +release: docs + $(PYTHON) setup.py sdist upload upload_sphinx + +build: test lint docs diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/README.rst b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/README.rst new file mode 100644 index 0000000000000000000000000000000000000000..b8fb15eaabf79cb949b703301d6b89bbb0cb6b5e --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/README.rst @@ -0,0 +1,52 @@ +CharmHelpers |badge| +-------------------- + +.. |badge| image:: https://travis-ci.org/juju/charm-helpers.svg?branch=master + :target: https://travis-ci.org/juju/charm-helpers + +Overview +======== + +CharmHelpers provides an opinionated set of tools for building Juju charms. + +The full documentation is available online at: https://charm-helpers.readthedocs.io/ + +Common Usage Examples +===================== + +* interaction with charm-specific Juju unit agents via hook tools; +* processing of events and execution of decorated functions based on event names; +* handling of persistent storage between independent charm invocations; +* rendering of configuration file templates; +* modification of system configuration files; +* installation of packages; +* retrieval of machine-specific details; +* implementation of application-specific code reused in similar charms. + +Why Python? +=========== + +* Python is an extremely popular, easy to learn, and powerful language which is also common in automation tools; +* An interpreted language helps with charm portability across different CPU architectures; +* Doesn't require debugging symbols (just use pdb in-place); +* An author or a user is able to make debugging changes without recompiling a charm. + +Dev/Test +======== + +See the HACKING.md file for information about testing and development. + +License +======= + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/bin/README b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/bin/README new file mode 100644 index 0000000000000000000000000000000000000000..df4a50484ea9dfca98c4dc63a94531e3449c98aa --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/bin/README @@ -0,0 +1,4 @@ +This directory contains executables for accessing charmhelpers functionality + + +Please see charmhelpers.cli for the recommended way to add scripts. 
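+
+For example, a function can be exposed as a chlp subcommand with the cmdline
+decorators (a minimal sketch mirroring charmhelpers/cli/host.py):
+
+    from charmhelpers.cli import cmdline
+    from charmhelpers.core import host
+
+    @cmdline.subcommand()
+    def mounts():
+        "List mounts"
+        return host.mounts()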
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/bin/chlp b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/bin/chlp new file mode 100755 index 0000000000000000000000000000000000000000..0c1c38dc15039191fd3ea04fc7ea740764ea974b --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/bin/chlp @@ -0,0 +1,8 @@ +#!/usr/bin/env python + +from charmhelpers.cli import cmdline +from charmhelpers.cli.commands import * + + +if __name__ == '__main__': + cmdline.run() diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/bin/contrib/charmsupport/charmsupport b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/bin/contrib/charmsupport/charmsupport new file mode 100755 index 0000000000000000000000000000000000000000..7a28beb3867db4b517ca98bd048303aaa727105a --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/bin/contrib/charmsupport/charmsupport @@ -0,0 +1,31 @@ +#!/usr/bin/env python + +import argparse +from charmhelpers.contrib.charmsupport import execd + + +def run_execd(args): + execd.execd_run(args.module, args.dir, die_on_error=True) + + +def parse_args(): + parser = argparse.ArgumentParser(description='Perform common charm tasks') + subparsers = parser.add_subparsers(help='Commands') + + execd_parser = subparsers.add_parser('execd', + help='Execute a directory of commands') + execd_parser.add_argument('--module', default='charm-pre-install', + help='module to run (default: charm-pre-install)') + execd_parser.add_argument('--dir', + help="Override the exec.d directory path") + execd_parser.set_defaults(func=run_execd) + + return parser.parse_args() + + +def main(): + arguments = parse_args() + arguments.func(arguments) + +if __name__ == '__main__': + exit(main()) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/bin/contrib/saltstack/salt-call b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/bin/contrib/saltstack/salt-call new file mode 100755 index 0000000000000000000000000000000000000000..5b8a8f39355447bd23c569c32c2987a9c494fe73 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/bin/contrib/saltstack/salt-call @@ -0,0 +1,11 @@ +#!/usr/bin/env python +''' +Directly call a salt command in the modules, does not require a running salt +minion to run. +''' + +from salt.scripts import salt_call + + +if __name__ == '__main__': + salt_call() diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..61ef90719b5d5d759de1a6b80a1ea748d8bb0911 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/__init__.py @@ -0,0 +1,97 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+
+# Bootstrap charm-helpers, installing its dependencies if necessary using
+# only standard libraries.
+from __future__ import print_function
+from __future__ import absolute_import
+
+import functools
+import inspect
+import subprocess
+import sys
+
+try:
+    import six  # NOQA:F401
+except ImportError:
+    if sys.version_info.major == 2:
+        subprocess.check_call(['apt-get', 'install', '-y', 'python-six'])
+    else:
+        subprocess.check_call(['apt-get', 'install', '-y', 'python3-six'])
+    import six  # NOQA:F401
+
+try:
+    import yaml  # NOQA:F401
+except ImportError:
+    if sys.version_info.major == 2:
+        subprocess.check_call(['apt-get', 'install', '-y', 'python-yaml'])
+    else:
+        subprocess.check_call(['apt-get', 'install', '-y', 'python3-yaml'])
+    import yaml  # NOQA:F401
+
+
+# Holds a mapping of mangled function names that have been deprecated
+# using the @deprecate decorator below. This is so that the warning is only
+# printed once for each usage of the function.
+__deprecated_functions = {}
+
+
+def deprecate(warning, date=None, log=None):
+    """Add a deprecation warning the first time the function is used.
+
+    The date, a string in semi-ISO 8601 (YYYY-MM) format, indicates the
+    year and month in which the function is officially going to be removed.
+
+    usage:
+
+    @deprecate('use core/fetch/add_source() instead', '2017-04')
+    def contributed_add_source_thing(...):
+        ...
+
+    And it then prints to the log ONCE that the function is deprecated.
+    The reason for passing the logging function (log) is so that hookenv.log
+    can be used for a charm if needed.
+
+    :param warning: String to indicate where it has moved to.
+    :param date: optional string, in YYYY-MM format, to indicate when the
+        function will definitely (probably) be removed.
+    :param log: The log function to call in order to log. If None, logs to
+        stdout.
+    """
+    def wrap(f):
+
+        @functools.wraps(f)
+        def wrapped_f(*args, **kwargs):
+            try:
+                module = inspect.getmodule(f)
+                file = inspect.getsourcefile(f)
+                lines = inspect.getsourcelines(f)
+                f_name = "{}-{}-{}..{}-{}".format(
+                    module.__name__, file, lines[0], lines[-1], f.__name__)
+            except (IOError, TypeError):
+                # assume it was local, so just use the name of the function
+                f_name = f.__name__
+            if f_name not in __deprecated_functions:
+                __deprecated_functions[f_name] = True
+                s = "DEPRECATION WARNING: Function {} is being removed".format(
+                    f.__name__)
+                if date:
+                    s = "{} on/around {}".format(s, date)
+                if warning:
+                    s = "{} : {}".format(s, warning)
+                if log:
+                    log(s)
+                else:
+                    print(s)
+            return f(*args, **kwargs)
+        return wrapped_f
+    return wrap
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/cli/README.rst b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/cli/README.rst
new file mode 100644
index 0000000000000000000000000000000000000000..f7901c09c79c268d353825776a74072a7ee4dee7
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/cli/README.rst
@@ -0,0 +1,57 @@
+==========
+Commandant
+==========
+
+-----------------------------------------------------
+Automatic command-line interfaces to Python functions
+-----------------------------------------------------
+
+One of the benefits of ``libvirt`` is the uniformity of the interface: the C API (as well as the bindings in other languages) is a set of functions that accept parameters that are nearly identical to the command-line arguments.
If you run ``virsh``, you get an interactive command prompt that supports all of the same commands that your shell scripts use as ``virsh`` subcommands.
+
+Command execution and stdio manipulation is the greatest common factor across all development systems in the POSIX environment. By exposing your functions as commands that manipulate streams of text, you can make life easier for all the Ruby and Erlang and Go programmers in your life.
+
+Goals
+=====
+
+* Single decorator to expose a function as a command.
+  * now two decorators - one "automatic" and one that allows authors to manipulate the arguments for fine-grained control. (MW)
+* Automatic analysis of function signature through ``inspect.getargspec()``
+* Command argument parser built automatically with ``argparse``
+* Interactive interpreter loop object made with ``Cmd``
+* Options to output structured return value data via ``pprint``, ``yaml`` or ``json`` dumps.
+
+Other Important Features that need writing
+------------------------------------------
+
+* Help and Usage documentation can be automatically generated, but it will be important to let users override this behaviour
+* The decorator should allow specifying further parameters to the parser's add_argument() calls, to specify types or to make arguments behave as boolean flags, etc.
+  - Filename arguments are important, as good practice is for functions to accept file objects as parameters.
+  - choices arguments help to limit bad input before the function is called
+* Some automatic behaviour could make for better defaults, once the user can override them.
+  - We could automatically detect arguments that default to False or True, and automatically support --no-foo for foo=True.
+  - We could automatically support hyphens as alternates for underscores
+  - Arguments defaulting to sequence types could support the ``append`` action.
+
+
+-----------------------------------------------------
+Implementing subcommands
+-----------------------------------------------------
+
+(WIP)
+
+So as to avoid dependencies on the cli module, subcommands should be defined separately from their implementations. The recommendation would be to place definitions into separate modules near the implementations which they expose.
+
+Some examples::
+
+    from charmhelpers.cli import CommandLine
+    from charmhelpers.payload import execd
+    from charmhelpers.foo import bar
+
+    cli = CommandLine()
+
+    cli.subcommand()(execd.execd_run)
+
+    @cli.subcommand_builder("bar", description="Bar baz qux")
+    def barcmd_builder(subparser):
+        subparser.add_argument('argument1', help="yackety")
+        return bar
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/cli/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/cli/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..389b490f4eec27f18118ab6a5f3f529dbf2e9ecc
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/cli/__init__.py
@@ -0,0 +1,189 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and +# limitations under the License. + +import inspect +import argparse +import sys + +from six.moves import zip + +import charmhelpers.core.unitdata + + +class OutputFormatter(object): + def __init__(self, outfile=sys.stdout): + self.formats = ( + "raw", + "json", + "py", + "yaml", + "csv", + "tab", + ) + self.outfile = outfile + + def add_arguments(self, argument_parser): + formatgroup = argument_parser.add_mutually_exclusive_group() + choices = self.supported_formats + formatgroup.add_argument("--format", metavar='FMT', + help="Select output format for returned data, " + "where FMT is one of: {}".format(choices), + choices=choices, default='raw') + for fmt in self.formats: + fmtfunc = getattr(self, fmt) + formatgroup.add_argument("-{}".format(fmt[0]), + "--{}".format(fmt), action='store_const', + const=fmt, dest='format', + help=fmtfunc.__doc__) + + @property + def supported_formats(self): + return self.formats + + def raw(self, output): + """Output data as raw string (default)""" + if isinstance(output, (list, tuple)): + output = '\n'.join(map(str, output)) + self.outfile.write(str(output)) + + def py(self, output): + """Output data as a nicely-formatted python data structure""" + import pprint + pprint.pprint(output, stream=self.outfile) + + def json(self, output): + """Output data in JSON format""" + import json + json.dump(output, self.outfile) + + def yaml(self, output): + """Output data in YAML format""" + import yaml + yaml.safe_dump(output, self.outfile) + + def csv(self, output): + """Output data as excel-compatible CSV""" + import csv + csvwriter = csv.writer(self.outfile) + csvwriter.writerows(output) + + def tab(self, output): + """Output data in excel-compatible tab-delimited format""" + import csv + csvwriter = csv.writer(self.outfile, dialect=csv.excel_tab) + csvwriter.writerows(output) + + def format_output(self, output, fmt='raw'): + fmtfunc = getattr(self, fmt) + fmtfunc(output) + + +class CommandLine(object): + argument_parser = None + subparsers = None + formatter = None + exit_code = 0 + + def __init__(self): + if not self.argument_parser: + self.argument_parser = argparse.ArgumentParser(description='Perform common charm tasks') + if not self.formatter: + self.formatter = OutputFormatter() + self.formatter.add_arguments(self.argument_parser) + if not self.subparsers: + self.subparsers = self.argument_parser.add_subparsers(help='Commands') + + def subcommand(self, command_name=None): + """ + Decorate a function as a subcommand. Use its arguments as the + command-line arguments""" + def wrapper(decorated): + cmd_name = command_name or decorated.__name__ + subparser = self.subparsers.add_parser(cmd_name, + description=decorated.__doc__) + for args, kwargs in describe_arguments(decorated): + subparser.add_argument(*args, **kwargs) + subparser.set_defaults(func=decorated) + return decorated + return wrapper + + def test_command(self, decorated): + """ + Subcommand is a boolean test function, so bool return values should be + converted to a 0/1 exit code. + """ + decorated._cli_test_command = True + return decorated + + def no_output(self, decorated): + """ + Subcommand is not expected to return a value, so don't print a spurious None. + """ + decorated._cli_no_output = True + return decorated + + def subcommand_builder(self, command_name, description=None): + """ + Decorate a function that builds a subcommand. 
Builders should accept a
+        single argument (the subparser instance) and return the function to be
+        run as the command."""
+        def wrapper(decorated):
+            subparser = self.subparsers.add_parser(command_name)
+            func = decorated(subparser)
+            subparser.set_defaults(func=func)
+            subparser.description = description or func.__doc__
+            # Return the builder so the decorated name is not rebound to None.
+            return decorated
+        return wrapper
+
+    def run(self):
+        "Run cli, processing arguments and executing subcommands."
+        arguments = self.argument_parser.parse_args()
+        argspec = inspect.getargspec(arguments.func)
+        vargs = []
+        for arg in argspec.args:
+            vargs.append(getattr(arguments, arg))
+        if argspec.varargs:
+            vargs.extend(getattr(arguments, argspec.varargs))
+        output = arguments.func(*vargs)
+        if getattr(arguments.func, '_cli_test_command', False):
+            self.exit_code = 0 if output else 1
+            output = ''
+        if getattr(arguments.func, '_cli_no_output', False):
+            output = ''
+        self.formatter.format_output(output, arguments.format)
+        if charmhelpers.core.unitdata._KV:
+            charmhelpers.core.unitdata._KV.flush()
+
+
+cmdline = CommandLine()
+
+
+def describe_arguments(func):
+    """
+    Analyze a function's signature and return a data structure suitable for
+    passing in as arguments to an argparse parser's add_argument() method."""
+
+    argspec = inspect.getargspec(func)
+    # we should probably raise an exception somewhere if func includes **kwargs
+    if argspec.defaults:
+        positional_args = argspec.args[:-len(argspec.defaults)]
+        keyword_names = argspec.args[-len(argspec.defaults):]
+        for arg, default in zip(keyword_names, argspec.defaults):
+            yield ('--{}'.format(arg),), {'default': default}
+    else:
+        positional_args = argspec.args
+
+    for arg in positional_args:
+        yield (arg,), {}
+    if argspec.varargs:
+        yield (argspec.varargs,), {'nargs': '*'}
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/cli/benchmark.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/cli/benchmark.py
new file mode 100644
index 0000000000000000000000000000000000000000..303af14b607d31e338aefff0df593609b7b45feb
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/cli/benchmark.py
@@ -0,0 +1,34 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from .
import cmdline +from charmhelpers.contrib.benchmark import Benchmark + + +@cmdline.subcommand(command_name='benchmark-start') +def start(): + Benchmark.start() + + +@cmdline.subcommand(command_name='benchmark-finish') +def finish(): + Benchmark.finish() + + +@cmdline.subcommand_builder('benchmark-composite', description="Set the benchmark composite score") +def service(subparser): + subparser.add_argument("value", help="The composite score.") + subparser.add_argument("units", help="The units the composite score represents, i.e., 'reads/sec'.") + subparser.add_argument("direction", help="'asc' if a lower score is better, 'desc' if a higher score is better.") + return Benchmark.set_composite_score diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/cli/commands.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/cli/commands.py new file mode 100644 index 0000000000000000000000000000000000000000..b93105650be8226aa390fda46aa08afb23ebc7bc --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/cli/commands.py @@ -0,0 +1,30 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +""" +This module loads sub-modules into the python runtime so they can be +discovered via the inspect module. In order to prevent flake8 from (rightfully) +telling us these are unused modules, throw a ' # noqa' at the end of each import +so that the warning is suppressed. +""" + +from . import CommandLine # noqa + +""" +Import the sub-modules which have decorated subcommands to register with chlp. +""" +from . import host # noqa +from . import benchmark # noqa +from . import unitdata # noqa +from . import hookenv # noqa diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/cli/hookenv.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/cli/hookenv.py new file mode 100644 index 0000000000000000000000000000000000000000..bd72f448bf0092251a454ff6dd3145f09048ae72 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/cli/hookenv.py @@ -0,0 +1,21 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +from . 
import cmdline +from charmhelpers.core import hookenv + + +cmdline.subcommand('relation-id')(hookenv.relation_id._wrapped) +cmdline.subcommand('service-name')(hookenv.service_name) +cmdline.subcommand('remote-service-name')(hookenv.remote_service_name._wrapped) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/cli/host.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/cli/host.py new file mode 100644 index 0000000000000000000000000000000000000000..40396849907976fd077cc9c53a0852c4380c6266 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/cli/host.py @@ -0,0 +1,29 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +from . import cmdline +from charmhelpers.core import host + + +@cmdline.subcommand() +def mounts(): + "List mounts" + return host.mounts() + + +@cmdline.subcommand_builder('service', description="Control system services") +def service(subparser): + subparser.add_argument("action", help="The action to perform (start, stop, etc...)") + subparser.add_argument("service_name", help="Name of the service to control") + return host.service diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/cli/unitdata.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/cli/unitdata.py new file mode 100644 index 0000000000000000000000000000000000000000..acce846f84ef32ed0b5829cf08e67ad33f0eb5d1 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/cli/unitdata.py @@ -0,0 +1,46 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +from . 
import cmdline
+from charmhelpers.core import unitdata
+
+
+@cmdline.subcommand_builder('unitdata', description="Store and retrieve data")
+def unitdata_cmd(subparser):
+    nested = subparser.add_subparsers()
+
+    get_cmd = nested.add_parser('get', help='Retrieve data')
+    get_cmd.add_argument('key', help='Key to retrieve the value of')
+    get_cmd.set_defaults(action='get', value=None)
+
+    getrange_cmd = nested.add_parser('getrange', help='Retrieve multiple data')
+    getrange_cmd.add_argument('key', metavar='prefix',
+                              help='Prefix of the keys to retrieve')
+    getrange_cmd.set_defaults(action='getrange', value=None)
+
+    set_cmd = nested.add_parser('set', help='Store data')
+    set_cmd.add_argument('key', help='Key to set')
+    set_cmd.add_argument('value', help='Value to store')
+    set_cmd.set_defaults(action='set')
+
+    def _unitdata_cmd(action, key, value):
+        if action == 'get':
+            return unitdata.kv().get(key)
+        elif action == 'getrange':
+            return unitdata.kv().getrange(key)
+        elif action == 'set':
+            unitdata.kv().set(key, value)
+            unitdata.kv().flush()
+        return ''
+    return _unitdata_cmd
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/context.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/context.py
new file mode 100644
index 0000000000000000000000000000000000000000..01864740e89633f6c65f43c32ed89ef9d5e857c9
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/context.py
@@ -0,0 +1,205 @@
+# Copyright 2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+'''
+A Pythonic API to interact with the charm hook environment.
+
+:author: Stuart Bishop
+'''
+
+import six
+
+from charmhelpers.core import hookenv
+
+from collections import OrderedDict
+if six.PY3:
+    from collections import UserDict  # pragma: nocover
+else:
+    from UserDict import IterableUserDict as UserDict  # pragma: nocover
+
+
+class Relations(OrderedDict):
+    '''Mapping relation name -> relation id -> Relation.
+
+    >>> rels = Relations()
+    >>> rels['sprog']['sprog:12']['client/6']['widget']
+    'remote widget'
+    >>> rels['sprog']['sprog:12'].local['widget'] = 'local widget'
+    >>> rels['sprog']['sprog:12'].local['widget']
+    'local widget'
+    >>> rels.peer.local['widget']
+    'local widget on the peer relation'
+    '''
+    def __init__(self):
+        super(Relations, self).__init__()
+        for relname in sorted(hookenv.relation_types()):
+            self[relname] = OrderedDict()
+            relids = hookenv.relation_ids(relname)
+            relids.sort(key=lambda x: int(x.split(':', 1)[-1]))
+            for relid in relids:
+                self[relname][relid] = Relation(relid)
+
+    @property
+    def peer(self):
+        peer_relid = hookenv.peer_relation_id()
+        for rels in self.values():
+            if peer_relid in rels:
+                return rels[peer_relid]
+
+
+class Relation(OrderedDict):
+    '''Mapping of unit -> remote RelationInfo for a relation.
+
+    This is an OrderedDict mapping, ordered numerically by
+    unit number.
+
+    Also provides access to the local RelationInfo, and peer RelationInfo
+    instances by the 'local' and 'peers' attributes.
+
+    >>> r = Relation('sprog:12')
+    >>> r.keys()
+    ['client/9', 'client/10']    # Ordered numerically
+    >>> r['client/10']['widget']  # A remote RelationInfo setting
+    'remote widget'
+    >>> r.local['widget']  # The local RelationInfo setting
+    'local widget'
+    '''
+    relid = None  # The relation id.
+    relname = None  # The relation name (also known as relation type).
+    service = None  # The remote service name, if known.
+    local = None  # The local end's RelationInfo.
+    peers = None  # Map of peer -> RelationInfo. None if no peer relation.
+
+    def __init__(self, relid):
+        remote_units = hookenv.related_units(relid)
+        remote_units.sort(key=lambda u: int(u.split('/', 1)[-1]))
+        super(Relation, self).__init__((unit, RelationInfo(relid, unit))
+                                       for unit in remote_units)
+
+        self.relname = relid.split(':', 1)[0]
+        self.relid = relid
+        self.local = RelationInfo(relid, hookenv.local_unit())
+
+        for relinfo in self.values():
+            self.service = relinfo.service
+            break
+
+        # If we have peers, and they have joined both the provided peer
+        # relation and this relation, we can peek at their data too.
+        # This is useful for creating consensus without leadership.
+        peer_relid = hookenv.peer_relation_id()
+        if peer_relid and peer_relid != relid:
+            peers = hookenv.related_units(peer_relid)
+            if peers:
+                peers.sort(key=lambda u: int(u.split('/', 1)[-1]))
+                self.peers = OrderedDict((peer, RelationInfo(relid, peer))
+                                         for peer in peers)
+            else:
+                self.peers = OrderedDict()
+        else:
+            self.peers = None
+
+    def __str__(self):
+        return '{} ({})'.format(self.relid, self.service)
+
+
+class RelationInfo(UserDict):
+    '''The bag of data at an end of a relation.
+
+    Every unit participating in a relation has a single bag of
+    data associated with that relation. This is that bag.
+
+    The bag of data for the local unit may be updated. Remote data
+    is immutable and will remain static for the duration of the hook.
+
+    Changes made to the local unit's relation data only become visible
+    to other units after the hook completes successfully. If the hook
+    does not complete successfully, the changes are rolled back.
+
+    Unlike standard Python mappings, setting an item to None is the
+    same as deleting it.
+
+    >>> relinfo = RelationInfo('db:12')  # Default is the local unit.
+    >>> relinfo['user'] = 'fred'
+    >>> relinfo['user']
+    'fred'
+    >>> relinfo['user'] = None
+    >>> 'fred' in relinfo
+    False
+
+    This class wraps hookenv.relation_get and hookenv.relation_set.
+    All caching is left up to these two methods to avoid synchronization
+    issues. Data is only loaded on demand.
+    '''
+    relid = None  # The relation id.
+    relname = None  # The relation name (also known as the relation type).
+    unit = None  # The unit id.
+    number = None  # The unit number (integer).
+    service = None  # The service name.
+
+    def __init__(self, relid, unit):
+        self.relname = relid.split(':', 1)[0]
+        self.relid = relid
+        self.unit = unit
+        self.service, num = self.unit.split('/', 1)
+        self.number = int(num)
+
+    def __str__(self):
+        return '{} ({})'.format(self.relid, self.unit)
+
+    @property
+    def data(self):
+        return hookenv.relation_get(rid=self.relid, unit=self.unit)
+
+    def __setitem__(self, key, value):
+        if self.unit != hookenv.local_unit():
+            raise TypeError('Attempting to set {} on remote unit {}'
+                            ''.format(key, self.unit))
+        if value is not None and not isinstance(value, six.string_types):
+            # We don't do implicit casting.
This would cause simple + # types like integers to be read back as strings in subsequent + # hooks, and mutable types would require a lot of wrapping + # to ensure relation-set gets called when they are mutated. + raise ValueError('Only string values allowed') + hookenv.relation_set(self.relid, {key: value}) + + def __delitem__(self, key): + # Deleting a key and setting it to null is the same thing in + # Juju relations. + self[key] = None + + +class Leader(UserDict): + def __init__(self): + pass # Don't call superclass initializer, as it will nuke self.data + + @property + def data(self): + return hookenv.leader_get() + + def __setitem__(self, key, value): + if not hookenv.is_leader(): + raise TypeError('Not the leader. Cannot change leader settings.') + if value is not None and not isinstance(value, six.string_types): + # We don't do implicit casting. This would cause simple + # types like integers to be read back as strings in subsequent + # hooks, and mutable types would require a lot of wrapping + # to ensure leader-set gets called when they are mutated. + raise ValueError('Only string values allowed') + hookenv.leader_set({key: value}) + + def __delitem__(self, key): + # Deleting a key and setting it to null is the same thing in + # Juju leadership settings. + self[key] = None diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..d7567b863e3a5ad2b7a7f44958b4166e0c3d346b --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/__init__.py @@ -0,0 +1,13 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/amulet/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/amulet/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..d7567b863e3a5ad2b7a7f44958b4166e0c3d346b --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/amulet/__init__.py @@ -0,0 +1,13 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/amulet/deployment.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/amulet/deployment.py
new file mode 100644
index 0000000000000000000000000000000000000000..d21d01d8ffe242d686283b0ed977b88be6bfc74e
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/amulet/deployment.py
@@ -0,0 +1,99 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import amulet
+import os
+import six
+
+
+class AmuletDeployment(object):
+    """Amulet deployment.
+
+    This class provides generic Amulet deployment and test runner
+    methods.
+    """
+
+    def __init__(self, series=None):
+        """Initialize the deployment environment."""
+        self.series = None
+
+        if series:
+            self.series = series
+            self.d = amulet.Deployment(series=self.series)
+        else:
+            self.d = amulet.Deployment()
+
+    def _add_services(self, this_service, other_services):
+        """Add services.
+
+        Add services to the deployment where this_service is the local charm
+        that we're testing and other_services are the other services that
+        are being used in the local amulet tests.
+        """
+        if this_service['name'] != os.path.basename(os.getcwd()):
+            s = this_service['name']
+            msg = "The charm's root directory name needs to be {}".format(s)
+            amulet.raise_status(amulet.FAIL, msg=msg)
+
+        if 'units' not in this_service:
+            this_service['units'] = 1
+
+        self.d.add(this_service['name'], units=this_service['units'],
+                   constraints=this_service.get('constraints'),
+                   storage=this_service.get('storage'))
+
+        for svc in other_services:
+            if 'location' in svc:
+                branch_location = svc['location']
+            elif self.series:
+                branch_location = 'cs:{}/{}'.format(self.series, svc['name'])
+            else:
+                branch_location = None
+
+            if 'units' not in svc:
+                svc['units'] = 1
+
+            self.d.add(svc['name'], charm=branch_location, units=svc['units'],
+                       constraints=svc.get('constraints'),
+                       storage=svc.get('storage'))
+
+    def _add_relations(self, relations):
+        """Add all of the relations for the services."""
+        for k, v in six.iteritems(relations):
+            self.d.relate(k, v)
+
+    def _configure_services(self, configs):
+        """Configure all of the services."""
+        for service, config in six.iteritems(configs):
+            self.d.configure(service, config)
+
+    def _deploy(self):
+        """Deploy environment and wait for all hooks to finish executing."""
+        timeout = int(os.environ.get('AMULET_SETUP_TIMEOUT', 900))
+        try:
+            self.d.setup(timeout=timeout)
+            self.d.sentry.wait(timeout=timeout)
+        except amulet.helpers.TimeoutError:
+            amulet.raise_status(
+                amulet.FAIL,
+                msg="Deployment timed out ({}s)".format(timeout)
+            )
+        except Exception:
+            raise
+
+    def run_tests(self):
+        """Run all of the methods that are prefixed with 'test_'."""
+        for test in dir(self):
+            if test.startswith('test_'):
+                getattr(self, test)()
diff --git
a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/amulet/utils.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/amulet/utils.py new file mode 100644 index 0000000000000000000000000000000000000000..54283088e1ddd53bc85008bc7446620e87b42278 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/amulet/utils.py @@ -0,0 +1,820 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import io +import json +import logging +import os +import re +import socket +import subprocess +import sys +import time +import uuid + +import amulet +import distro_info +import six +from six.moves import configparser +if six.PY3: + from urllib import parse as urlparse +else: + import urlparse + + +class AmuletUtils(object): + """Amulet utilities. + + This class provides common utility functions that are used by Amulet + tests. + """ + + def __init__(self, log_level=logging.ERROR): + self.log = self.get_logger(level=log_level) + self.ubuntu_releases = self.get_ubuntu_releases() + + def get_logger(self, name="amulet-logger", level=logging.DEBUG): + """Get a logger object that will log to stdout.""" + log = logging + logger = log.getLogger(name) + fmt = log.Formatter("%(asctime)s %(funcName)s " + "%(levelname)s: %(message)s") + + handler = log.StreamHandler(stream=sys.stdout) + handler.setLevel(level) + handler.setFormatter(fmt) + + logger.addHandler(handler) + logger.setLevel(level) + + return logger + + def valid_ip(self, ip): + if re.match(r"^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$", ip): + return True + else: + return False + + def valid_url(self, url): + p = re.compile( + r'^(?:http|ftp)s?://' + r'(?:(?:[A-Z0-9](?:[A-Z0-9-]{0,61}[A-Z0-9])?\.)+(?:[A-Z]{2,6}\.?|[A-Z0-9-]{2,}\.?)|' # noqa + r'localhost|' + r'\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})' + r'(?::\d+)?' + r'(?:/?|[/?]\S+)$', + re.IGNORECASE) + if p.match(url): + return True + else: + return False + + def get_ubuntu_release_from_sentry(self, sentry_unit): + """Get Ubuntu release codename from sentry unit. + + :param sentry_unit: amulet sentry/service unit pointer + :returns: list of strings - release codename, failure message + """ + msg = None + cmd = 'lsb_release -cs' + release, code = sentry_unit.ssh(cmd) + if code == 0: + self.log.debug('{} lsb_release: {}'.format( + sentry_unit.info['unit_name'], release)) + else: + msg = ('{} `{}` returned {} ' + '{}'.format(sentry_unit.info['unit_name'], + cmd, release, code)) + if release not in self.ubuntu_releases: + msg = ("Release ({}) not found in Ubuntu releases " + "({})".format(release, self.ubuntu_releases)) + return release, msg + + def validate_services(self, commands): + """Validate that lists of commands succeed on service units. Can be + used to verify system services are running on the corresponding + service units. 
+ + :param commands: dict with sentry keys and arbitrary command list vals + :returns: None if successful, Failure string message otherwise + """ + self.log.debug('Checking status of system services...') + + # /!\ DEPRECATION WARNING (beisner): + # New and existing tests should be rewritten to use + # validate_services_by_name() as it is aware of init systems. + self.log.warn('DEPRECATION WARNING: use ' + 'validate_services_by_name instead of validate_services ' + 'due to init system differences.') + + for k, v in six.iteritems(commands): + for cmd in v: + output, code = k.run(cmd) + self.log.debug('{} `{}` returned ' + '{}'.format(k.info['unit_name'], + cmd, code)) + if code != 0: + return "command `{}` returned {}".format(cmd, str(code)) + return None + + def validate_services_by_name(self, sentry_services): + """Validate system service status by service name, automatically + detecting init system based on Ubuntu release codename. + + :param sentry_services: dict with sentry keys and svc list values + :returns: None if successful, Failure string message otherwise + """ + self.log.debug('Checking status of system services...') + + # Point at which systemd became a thing + systemd_switch = self.ubuntu_releases.index('vivid') + + for sentry_unit, services_list in six.iteritems(sentry_services): + # Get lsb_release codename from unit + release, ret = self.get_ubuntu_release_from_sentry(sentry_unit) + if ret: + return ret + + for service_name in services_list: + if (self.ubuntu_releases.index(release) >= systemd_switch or + service_name in ['rabbitmq-server', 'apache2', + 'memcached']): + # init is systemd (or regular sysv) + cmd = 'sudo service {} status'.format(service_name) + output, code = sentry_unit.run(cmd) + service_running = code == 0 + elif self.ubuntu_releases.index(release) < systemd_switch: + # init is upstart + cmd = 'sudo status {}'.format(service_name) + output, code = sentry_unit.run(cmd) + service_running = code == 0 and "start/running" in output + + self.log.debug('{} `{}` returned ' + '{}'.format(sentry_unit.info['unit_name'], + cmd, code)) + if not service_running: + return u"command `{}` returned {} {}".format( + cmd, output, str(code)) + return None + + def _get_config(self, unit, filename): + """Get a ConfigParser object for parsing a unit's config file.""" + file_contents = unit.file_contents(filename) + + # NOTE(beisner): by default, ConfigParser does not handle options + # with no value, such as the flags used in the mysql my.cnf file. + # https://bugs.python.org/issue7005 + config = configparser.ConfigParser(allow_no_value=True) + config.readfp(io.StringIO(file_contents)) + return config + + def validate_config_data(self, sentry_unit, config_file, section, + expected): + """Validate config file data. + + Verify that the specified section of the config file contains + the expected option key:value pairs. + + Compare expected dictionary data vs actual dictionary data. + The values in the 'expected' dictionary can be strings, bools, ints, + longs, or can be a function that evaluates a variable and returns a + bool. 
+ """ + self.log.debug('Validating config file data ({} in {} on {})' + '...'.format(section, config_file, + sentry_unit.info['unit_name'])) + config = self._get_config(sentry_unit, config_file) + + if section != 'DEFAULT' and not config.has_section(section): + return "section [{}] does not exist".format(section) + + for k in expected.keys(): + if not config.has_option(section, k): + return "section [{}] is missing option {}".format(section, k) + + actual = config.get(section, k) + v = expected[k] + if (isinstance(v, six.string_types) or + isinstance(v, bool) or + isinstance(v, six.integer_types)): + # handle explicit values + if actual != v: + return "section [{}] {}:{} != expected {}:{}".format( + section, k, actual, k, expected[k]) + # handle function pointers, such as not_null or valid_ip + elif not v(actual): + return "section [{}] {}:{} != expected {}:{}".format( + section, k, actual, k, expected[k]) + return None + + def _validate_dict_data(self, expected, actual): + """Validate dictionary data. + + Compare expected dictionary data vs actual dictionary data. + The values in the 'expected' dictionary can be strings, bools, ints, + longs, or can be a function that evaluates a variable and returns a + bool. + """ + self.log.debug('actual: {}'.format(repr(actual))) + self.log.debug('expected: {}'.format(repr(expected))) + + for k, v in six.iteritems(expected): + if k in actual: + if (isinstance(v, six.string_types) or + isinstance(v, bool) or + isinstance(v, six.integer_types)): + # handle explicit values + if v != actual[k]: + return "{}:{}".format(k, actual[k]) + # handle function pointers, such as not_null or valid_ip + elif not v(actual[k]): + return "{}:{}".format(k, actual[k]) + else: + return "key '{}' does not exist".format(k) + return None + + def validate_relation_data(self, sentry_unit, relation, expected): + """Validate actual relation data based on expected relation data.""" + actual = sentry_unit.relation(relation[0], relation[1]) + return self._validate_dict_data(expected, actual) + + def _validate_list_data(self, expected, actual): + """Compare expected list vs actual list data.""" + for e in expected: + if e not in actual: + return "expected item {} not found in actual list".format(e) + return None + + def not_null(self, string): + if string is not None: + return True + else: + return False + + def _get_file_mtime(self, sentry_unit, filename): + """Get last modification time of file.""" + return sentry_unit.file_stat(filename)['mtime'] + + def _get_dir_mtime(self, sentry_unit, directory): + """Get last modification time of directory.""" + return sentry_unit.directory_stat(directory)['mtime'] + + def _get_proc_start_time(self, sentry_unit, service, pgrep_full=None): + """Get start time of a process based on the last modification time + of the /proc/pid directory. 
+
+        :sentry_unit: The sentry unit to check for the service on
+        :service: service name to look for in process table
+        :pgrep_full: [Deprecated] Use full command line search mode with pgrep
+        :returns: epoch time of service process start
+        """
+        pid_list = self.get_process_id_list(
+            sentry_unit, service, pgrep_full=pgrep_full)
+        pid = pid_list[0]
+        proc_dir = '/proc/{}'.format(pid)
+        self.log.debug('Pid for {} on {}: {}'.format(
+            service, sentry_unit.info['unit_name'], pid))
+
+        return self._get_dir_mtime(sentry_unit, proc_dir)
+
+    def service_restarted(self, sentry_unit, service, filename,
+                          pgrep_full=None, sleep_time=20):
+        """Check if service was restarted.
+
+        Compare a service's start time vs a file's last modification time
+        (such as a config file for that service) to determine if the service
+        has been restarted.
+        """
+        # /!\ DEPRECATION WARNING (beisner):
+        # This method is prone to races in that no before-time is known.
+        # Use validate_service_config_changed instead.
+
+        # NOTE(beisner) pgrep_full is no longer implemented, as pidof is now
+        # used instead of pgrep. pgrep_full is still passed through to ensure
+        # deprecation WARNS. lp1474030
+        self.log.warn('DEPRECATION WARNING: use '
+                      'validate_service_config_changed instead of '
+                      'service_restarted due to known races.')
+
+        time.sleep(sleep_time)
+        if (self._get_proc_start_time(sentry_unit, service, pgrep_full) >=
+                self._get_file_mtime(sentry_unit, filename)):
+            return True
+        else:
+            return False
+
+    def service_restarted_since(self, sentry_unit, mtime, service,
+                                pgrep_full=None, sleep_time=20,
+                                retry_count=30, retry_sleep_time=10):
+        """Check if service was restarted after a given time.
+
+        Args:
+          sentry_unit (sentry): The sentry unit to check for the service on
+          mtime (float): The epoch time to check against
+          service (string): service name to look for in process table
+          pgrep_full: [Deprecated] Use full command line search mode with pgrep
+          sleep_time (int): Initial sleep time (s) before looking for file
+          retry_sleep_time (int): Time (s) to sleep between retries
+          retry_count (int): If file is not found, how many times to retry
+
+        Returns:
+          bool: True if service was found and its start time is newer than
+                mtime, False if service is older than mtime or if service was
+                not found.
+        """
+        # NOTE(beisner) pgrep_full is no longer implemented, as pidof is now
+        # used instead of pgrep. pgrep_full is still passed through to ensure
+        # deprecation WARNS. lp1474030
+
+        unit_name = sentry_unit.info['unit_name']
+        self.log.debug('Checking that %s service restarted since %s on '
+                       '%s' % (service, mtime, unit_name))
+        time.sleep(sleep_time)
+        proc_start_time = None
+        tries = 0
+        while tries <= retry_count and not proc_start_time:
+            try:
+                proc_start_time = self._get_proc_start_time(sentry_unit,
+                                                            service,
+                                                            pgrep_full)
+                self.log.debug('Attempt {} to get {} proc start time on {} '
+                               'OK'.format(tries, service, unit_name))
+            except IOError as e:
+                # NOTE(beisner) - race avoidance, proc may not exist yet.
+                # https://bugs.launchpad.net/charm-helpers/+bug/1474030
+                self.log.debug('Attempt {} to get {} proc start time on {} '
+                               'failed\n{}'.format(tries, service,
+                                                   unit_name, e))
+                time.sleep(retry_sleep_time)
+                tries += 1
+
+        if not proc_start_time:
+            self.log.warn('No proc start time found, assuming service did '
+                          'not start')
+            return False
+        if proc_start_time >= mtime:
+            self.log.debug('Proc start time is newer than provided mtime '
+                           '(%s >= %s) on %s (OK)' % (proc_start_time,
+                                                      mtime, unit_name))
+            return True
+        else:
+            self.log.warn('Proc start time (%s) is older than provided mtime '
+                          '(%s) on %s, service did not '
+                          'restart' % (proc_start_time, mtime, unit_name))
+            return False
+
+    def config_updated_since(self, sentry_unit, filename, mtime,
+                             sleep_time=20, retry_count=30,
+                             retry_sleep_time=10):
+        """Check if file was modified after a given time.
+
+        Args:
+          sentry_unit (sentry): The sentry unit to check the file mtime on
+          filename (string): The file to check mtime of
+          mtime (float): The epoch time to check against
+          sleep_time (int): Initial sleep time (s) before looking for file
+          retry_sleep_time (int): Time (s) to sleep between retries
+          retry_count (int): If file is not found, how many times to retry
+
+        Returns:
+          bool: True if file was modified more recently than mtime, False if
+                file was modified before mtime, or if file not found.
+        """
+        unit_name = sentry_unit.info['unit_name']
+        self.log.debug('Checking that %s updated since %s on '
+                       '%s' % (filename, mtime, unit_name))
+        time.sleep(sleep_time)
+        file_mtime = None
+        tries = 0
+        while tries <= retry_count and not file_mtime:
+            try:
+                file_mtime = self._get_file_mtime(sentry_unit, filename)
+                self.log.debug('Attempt {} to get {} file mtime on {} '
+                               'OK'.format(tries, filename, unit_name))
+            except IOError as e:
+                # NOTE(beisner) - race avoidance, file may not exist yet.
+                # https://bugs.launchpad.net/charm-helpers/+bug/1474030
+                self.log.debug('Attempt {} to get {} file mtime on {} '
+                               'failed\n{}'.format(tries, filename,
+                                                   unit_name, e))
+                time.sleep(retry_sleep_time)
+                tries += 1
+
+        if not file_mtime:
+            self.log.warn('Could not determine file mtime, assuming '
+                          'file does not exist')
+            return False
+
+        if file_mtime >= mtime:
+            self.log.debug('File mtime is newer than provided mtime '
+                           '(%s >= %s) on %s (OK)' % (file_mtime,
+                                                      mtime, unit_name))
+            return True
+        else:
+            self.log.warn('File mtime is older than provided mtime '
+                          '(%s < %s) on %s' % (file_mtime,
+                                               mtime, unit_name))
+            return False
+
+    def validate_service_config_changed(self, sentry_unit, mtime, service,
+                                        filename, pgrep_full=None,
+                                        sleep_time=20, retry_count=30,
+                                        retry_sleep_time=10):
+        """Check that both service and file were updated after mtime.
+
+        Args:
+          sentry_unit (sentry): The sentry unit to check for the service on
+          mtime (float): The epoch time to check against
+          service (string): service name to look for in process table
+          filename (string): The file to check mtime of
+          pgrep_full: [Deprecated] Use full command line search mode with pgrep
+          sleep_time (int): Initial sleep in seconds to pass to test helpers
+          retry_count (int): If service is not found, how many times to retry
+          retry_sleep_time (int): Time in seconds to wait between retries
+
+        Typical Usage:
+            u = OpenStackAmuletUtils(ERROR)
+            ...
+            mtime = u.get_sentry_time(self.cinder_sentry)
+            self.d.configure('cinder', {'verbose': 'True', 'debug': 'True'})
+            if not u.validate_service_config_changed(self.cinder_sentry,
+                                                     mtime,
+                                                     'cinder-api',
+                                                     '/etc/cinder/cinder.conf'):
+                amulet.raise_status(amulet.FAIL, msg='update failed')
+
+        Returns:
+          bool: True if both service and file were updated/restarted after
+                mtime, False if service is older than mtime or if service was
+                not found or if filename was modified before mtime.
+        """
+
+        # NOTE(beisner) pgrep_full is no longer implemented, as pidof is now
+        # used instead of pgrep. pgrep_full is still passed through to ensure
+        # deprecation WARNS. lp1474030
+
+        service_restart = self.service_restarted_since(
+            sentry_unit, mtime,
+            service,
+            pgrep_full=pgrep_full,
+            sleep_time=sleep_time,
+            retry_count=retry_count,
+            retry_sleep_time=retry_sleep_time)
+
+        config_update = self.config_updated_since(
+            sentry_unit,
+            filename,
+            mtime,
+            sleep_time=sleep_time,
+            retry_count=retry_count,
+            retry_sleep_time=retry_sleep_time)
+
+        return service_restart and config_update
+
+    def get_sentry_time(self, sentry_unit):
+        """Return current epoch time on a sentry."""
+        cmd = "date +'%s'"
+        return float(sentry_unit.run(cmd)[0])
+
+    def relation_error(self, name, data):
+        return 'unexpected relation data in {} - {}'.format(name, data)
+
+    def endpoint_error(self, name, data):
+        return 'unexpected endpoint data in {} - {}'.format(name, data)
+
+    def get_ubuntu_releases(self):
+        """Return a list of all Ubuntu releases in order of release."""
+        _d = distro_info.UbuntuDistroInfo()
+        _release_list = _d.all
+        return _release_list
+
+    def file_to_url(self, file_rel_path):
+        """Convert a relative file path to a file URL."""
+        _abs_path = os.path.abspath(file_rel_path)
+        return urlparse.urlparse(_abs_path, scheme='file').geturl()
+
+    def check_commands_on_units(self, commands, sentry_units):
+        """Check that all commands in a list exit zero on all
+        sentry units in a list.
+
+        :param commands: list of bash commands
+        :param sentry_units: list of sentry unit pointers
+        :returns: None if successful; Failure message otherwise
+        """
+        self.log.debug('Checking exit codes for {} commands on {} '
+                       'sentry units...'.format(len(commands),
+                                                len(sentry_units)))
+        for sentry_unit in sentry_units:
+            for cmd in commands:
+                output, code = sentry_unit.run(cmd)
+                if code == 0:
+                    self.log.debug('{} `{}` returned {} '
+                                   '(OK)'.format(sentry_unit.info['unit_name'],
+                                                 cmd, code))
+                else:
+                    return ('{} `{}` returned {} '
+                            '{}'.format(sentry_unit.info['unit_name'],
+                                        cmd, code, output))
+        return None
+
+    def get_process_id_list(self, sentry_unit, process_name,
+                            expect_success=True, pgrep_full=False):
+        """Get a list of process ID(s) from a single sentry juju unit
+        for a single process name.
+
+        :param sentry_unit: Amulet sentry instance (juju unit)
+        :param process_name: Process name
+        :param expect_success: If False, expect the PID to be missing,
+            raise if it is present.
+ :returns: List of process IDs + """ + if pgrep_full: + cmd = 'pgrep -f "{}"'.format(process_name) + else: + cmd = 'pidof -x "{}"'.format(process_name) + if not expect_success: + cmd += " || exit 0 && exit 1" + output, code = sentry_unit.run(cmd) + if code != 0: + msg = ('{} `{}` returned {} ' + '{}'.format(sentry_unit.info['unit_name'], + cmd, code, output)) + amulet.raise_status(amulet.FAIL, msg=msg) + return str(output).split() + + def get_unit_process_ids( + self, unit_processes, expect_success=True, pgrep_full=False): + """Construct a dict containing unit sentries, process names, and + process IDs. + + :param unit_processes: A dictionary of Amulet sentry instance + to list of process names. + :param expect_success: if False expect the processes to not be + running, raise if they are. + :returns: Dictionary of Amulet sentry instance to dictionary + of process names to PIDs. + """ + pid_dict = {} + for sentry_unit, process_list in six.iteritems(unit_processes): + pid_dict[sentry_unit] = {} + for process in process_list: + pids = self.get_process_id_list( + sentry_unit, process, expect_success=expect_success, + pgrep_full=pgrep_full) + pid_dict[sentry_unit].update({process: pids}) + return pid_dict + + def validate_unit_process_ids(self, expected, actual): + """Validate process id quantities for services on units.""" + self.log.debug('Checking units for running processes...') + self.log.debug('Expected PIDs: {}'.format(expected)) + self.log.debug('Actual PIDs: {}'.format(actual)) + + if len(actual) != len(expected): + return ('Unit count mismatch. expected, actual: {}, ' + '{} '.format(len(expected), len(actual))) + + for (e_sentry, e_proc_names) in six.iteritems(expected): + e_sentry_name = e_sentry.info['unit_name'] + if e_sentry in actual.keys(): + a_proc_names = actual[e_sentry] + else: + return ('Expected sentry ({}) not found in actual dict data.' + '{}'.format(e_sentry_name, e_sentry)) + + if len(e_proc_names.keys()) != len(a_proc_names.keys()): + return ('Process name count mismatch. expected, actual: {}, ' + '{}'.format(len(expected), len(actual))) + + for (e_proc_name, e_pids), (a_proc_name, a_pids) in \ + zip(e_proc_names.items(), a_proc_names.items()): + if e_proc_name != a_proc_name: + return ('Process name mismatch. expected, actual: {}, ' + '{}'.format(e_proc_name, a_proc_name)) + + a_pids_length = len(a_pids) + fail_msg = ('PID count mismatch. 
{} ({}) expected, actual: ' + '{}, {} ({})'.format(e_sentry_name, e_proc_name, + e_pids, a_pids_length, + a_pids)) + + # If expected is a list, ensure at least one PID quantity match + if isinstance(e_pids, list) and \ + a_pids_length not in e_pids: + return fail_msg + # If expected is not bool and not list, + # ensure PID quantities match + elif not isinstance(e_pids, bool) and \ + not isinstance(e_pids, list) and \ + a_pids_length != e_pids: + return fail_msg + # If expected is bool True, ensure 1 or more PIDs exist + elif isinstance(e_pids, bool) and \ + e_pids is True and a_pids_length < 1: + return fail_msg + # If expected is bool False, ensure 0 PIDs exist + elif isinstance(e_pids, bool) and \ + e_pids is False and a_pids_length != 0: + return fail_msg + else: + self.log.debug('PID check OK: {} {} {}: ' + '{}'.format(e_sentry_name, e_proc_name, + e_pids, a_pids)) + return None + + def validate_list_of_identical_dicts(self, list_of_dicts): + """Check that all dicts within a list are identical.""" + hashes = [] + for _dict in list_of_dicts: + hashes.append(hash(frozenset(_dict.items()))) + + self.log.debug('Hashes: {}'.format(hashes)) + if len(set(hashes)) == 1: + self.log.debug('Dicts within list are identical') + else: + return 'Dicts within list are not identical' + + return None + + def validate_sectionless_conf(self, file_contents, expected): + """A crude conf parser. Useful to inspect configuration files which + do not have section headers (as would be necessary in order to use + the configparser). Such as openstack-dashboard or rabbitmq confs.""" + for line in file_contents.split('\n'): + if '=' in line: + args = line.split('=') + if len(args) <= 1: + continue + key = args[0].strip() + value = args[1].strip() + if key in expected.keys(): + if expected[key] != value: + msg = ('Config mismatch. Expected, actual: {}, ' + '{}'.format(expected[key], value)) + amulet.raise_status(amulet.FAIL, msg=msg) + + def get_unit_hostnames(self, units): + """Return a dict of juju unit names to hostnames.""" + host_names = {} + for unit in units: + host_names[unit.info['unit_name']] = \ + str(unit.file_contents('/etc/hostname').strip()) + self.log.debug('Unit host names: {}'.format(host_names)) + return host_names + + def run_cmd_unit(self, sentry_unit, cmd): + """Run a command on a unit, return the output and exit code.""" + output, code = sentry_unit.run(cmd) + if code == 0: + self.log.debug('{} `{}` command returned {} ' + '(OK)'.format(sentry_unit.info['unit_name'], + cmd, code)) + else: + msg = ('{} `{}` command returned {} ' + '{}'.format(sentry_unit.info['unit_name'], + cmd, code, output)) + amulet.raise_status(amulet.FAIL, msg=msg) + return str(output), code + + def file_exists_on_unit(self, sentry_unit, file_name): + """Check if a file exists on a unit.""" + try: + sentry_unit.file_stat(file_name) + return True + except IOError: + return False + except Exception as e: + msg = 'Error checking file {}: {}'.format(file_name, e) + amulet.raise_status(amulet.FAIL, msg=msg) + + def file_contents_safe(self, sentry_unit, file_name, + max_wait=60, fatal=False): + """Get file contents from a sentry unit. Wrap amulet file_contents + with retry logic to address races where a file checks as existing, + but no longer exists by the time file_contents is called. + Return None if file not found. 
Optionally raise if fatal is True."""
+        unit_name = sentry_unit.info['unit_name']
+        file_contents = False
+        tries = 0
+        while not file_contents and tries < (max_wait / 4):
+            try:
+                file_contents = sentry_unit.file_contents(file_name)
+            except IOError:
+                self.log.debug('Attempt {} to open file {} from {} '
+                               'failed'.format(tries, file_name,
+                                               unit_name))
+                time.sleep(4)
+                tries += 1
+
+        if file_contents:
+            return file_contents
+        elif not fatal:
+            return None
+        elif fatal:
+            msg = 'Failed to get file contents from unit.'
+            amulet.raise_status(amulet.FAIL, msg)
+
+    def port_knock_tcp(self, host="localhost", port=22, timeout=15):
+        """Open a TCP socket to check for a listening service on a host.
+
+        :param host: host name or IP address, default to localhost
+        :param port: TCP port number, default to 22
+        :param timeout: Connect timeout, default to 15 seconds
+        :returns: True if successful, False if connect failed
+        """
+
+        # Resolve host name if possible
+        try:
+            connect_host = socket.gethostbyname(host)
+            host_human = "{} ({})".format(connect_host, host)
+        except socket.error as e:
+            self.log.warn('Unable to resolve address: '
+                          '{} ({}) Trying anyway!'.format(host, e))
+            connect_host = host
+            host_human = connect_host
+
+        # Attempt socket connection
+        try:
+            knock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
+            knock.settimeout(timeout)
+            knock.connect((connect_host, port))
+            knock.close()
+            self.log.debug('Socket connect OK for host '
+                           '{} on port {}.'.format(host_human, port))
+            return True
+        except socket.error as e:
+            self.log.debug('Socket connect FAIL for'
+                           ' {} port {} ({})'.format(host_human, port, e))
+            return False
+
+    def port_knock_units(self, sentry_units, port=22,
+                         timeout=15, expect_success=True):
+        """Open a TCP socket to check for a listening service on each
+        listed juju unit.
+
+        :param sentry_units: list of sentry unit pointers
+        :param port: TCP port number, default to 22
+        :param timeout: Connect timeout, default to 15 seconds
+        :param expect_success: True by default, set False to invert logic
+        :returns: None if successful, Failure message otherwise
+        """
+        for unit in sentry_units:
+            host = unit.info['public-address']
+            connected = self.port_knock_tcp(host, port, timeout)
+            if not connected and expect_success:
+                return 'Socket connect failed.'
+            elif connected and not expect_success:
+                return 'Socket connected unexpectedly.'
+
+    def get_uuid_epoch_stamp(self):
+        """Returns a stamp string based on uuid4 and epoch time. Useful in
+        generating test messages which need to be unique-ish."""
+        return '[{}-{}]'.format(uuid.uuid4(), time.time())
+
+    # amulet juju action helpers:
+    def run_action(self, unit_sentry, action,
+                   _check_output=subprocess.check_output,
+                   params=None):
+        """Translate to amulet's built-in run_action(). Deprecated.
+
+        Run the named action on a given unit sentry.
+
+        params a dict of parameters to use
+        _check_output parameter is no longer used
+
+        @return action_id.
+        """
+        self.log.warn('charmhelpers.contrib.amulet.utils.run_action has been '
+                      'deprecated for amulet.run_action')
+        return unit_sentry.run_action(action, action_args=params)
+
+    def wait_on_action(self, action_id, _check_output=subprocess.check_output):
+        """Wait for a given action, returning whether it completed or not.
+ + action_id a string action uuid + _check_output parameter is no longer used + """ + data = amulet.actions.get_action_output(action_id, full_output=True) + return data.get(u"status") == "completed" + + def status_get(self, unit): + """Return the current service status of this unit.""" + raw_status, return_code = unit.run( + "status-get --format=json --include-data") + if return_code != 0: + return ("unknown", "") + status = json.loads(raw_status) + return (status["status"], status["message"]) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/ansible/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/ansible/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..824780aca93c406f7331ec3f0da8f22bee1272ec --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/ansible/__init__.py @@ -0,0 +1,303 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# Copyright 2013 Canonical Ltd. +# +# Authors: +# Charm Helpers Developers +""" +The ansible package enables you to easily use the configuration management +tool `Ansible`_ to setup and configure your charm. All of your charm +configuration options and relation-data are available as regular Ansible +variables which can be used in your playbooks and templates. + +.. _Ansible: https://www.ansible.com/ + +Usage +===== + +Here is an example directory structure for a charm to get you started:: + + charm-ansible-example/ + |-- ansible + | |-- playbook.yaml + | `-- templates + | `-- example.j2 + |-- config.yaml + |-- copyright + |-- icon.svg + |-- layer.yaml + |-- metadata.yaml + |-- reactive + | `-- example.py + |-- README.md + +Running a playbook called ``playbook.yaml`` when the ``install`` hook is run +can be as simple as:: + + from charmhelpers.contrib import ansible + from charms.reactive import hook + + @hook('install') + def install(): + ansible.install_ansible_support() + ansible.apply_playbook('ansible/playbook.yaml') + +Here is an example playbook that uses the ``template`` module to template the +file ``example.j2`` to the charm host and then uses the ``debug`` module to +print out all the host and Juju variables that you can use in your playbooks. +Note that you must target ``localhost`` as the playbook is run locally on the +charm host:: + + --- + - hosts: localhost + tasks: + - name: Template a file + template: + src: templates/example.j2 + dest: /tmp/example.j2 + + - name: Print all variables available to Ansible + debug: + var: vars + +Read more online about `playbooks`_ and standard Ansible `modules`_. + +.. _playbooks: https://docs.ansible.com/ansible/latest/user_guide/playbooks.html +.. _modules: https://docs.ansible.com/ansible/latest/user_guide/modules.html + +A further feature of the Ansible hooks is to provide a light weight "action" +scripting tool. 
This is a decorator that you apply to a function, and that +function can now receive cli args, and can pass extra args to the playbook:: + + @hooks.action() + def some_action(amount, force="False"): + "Usage: some-action AMOUNT [force=True]" # <-- shown on error + # process the arguments + # do some calls + # return extra-vars to be passed to ansible-playbook + return { + 'amount': int(amount), + 'type': force, + } + +You can now create a symlink to hooks.py that can be invoked like a hook, but +with cli params:: + + # link actions/some-action to hooks/hooks.py + + actions/some-action amount=10 force=true + +Install Ansible via pip +======================= + +If you want to install a specific version of Ansible via pip instead of +``install_ansible_support`` which uses APT, consider using the layer options +of `layer-basic`_ to install Ansible in a virtualenv:: + + options: + basic: + python_packages: ['ansible==2.9.0'] + include_system_packages: true + use_venv: true + +.. _layer-basic: https://charmsreactive.readthedocs.io/en/latest/layer-basic.html#layer-configuration + +""" +import os +import json +import stat +import subprocess +import functools + +import charmhelpers.contrib.templating.contexts +import charmhelpers.core.host +import charmhelpers.core.hookenv +import charmhelpers.fetch + + +charm_dir = os.environ.get('CHARM_DIR', '') +ansible_hosts_path = '/etc/ansible/hosts' +# Ansible will automatically include any vars in the following +# file in its inventory when run locally. +ansible_vars_path = '/etc/ansible/host_vars/localhost' + + +def install_ansible_support(from_ppa=True, ppa_location='ppa:ansible/ansible'): + """Installs Ansible via APT. + + By default this installs Ansible from the `PPA`_ linked from + the Ansible `website`_ or from a PPA set in ``ppa_location``. + + .. _PPA: https://launchpad.net/~ansible/+archive/ubuntu/ansible + .. _website: http://docs.ansible.com/intro_installation.html#latest-releases-via-apt-ubuntu + + If ``from_ppa`` is ``False``, then Ansible will be installed from + Ubuntu's Universe repositories. + """ + if from_ppa: + charmhelpers.fetch.add_source(ppa_location) + charmhelpers.fetch.apt_update(fatal=True) + charmhelpers.fetch.apt_install('ansible') + with open(ansible_hosts_path, 'w+') as hosts_file: + hosts_file.write('localhost ansible_connection=local ansible_remote_tmp=/root/.ansible/tmp') + + +def apply_playbook(playbook, tags=None, extra_vars=None): + """Run a playbook. + + This helper runs a playbook with juju state variables as context, + therefore variables set in application config can be used directly. + List of tags (--tags) and dictionary with extra_vars (--extra-vars) + can be passed as additional parameters. + + Read more about playbook `_variables`_ online. + + .. 
_variables: https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html + + Example:: + + # Run ansible/playbook.yaml with tag install and pass extra + # variables var_a and var_b + apply_playbook( + playbook='ansible/playbook.yaml', + tags=['install'], + extra_vars={'var_a': 'val_a', 'var_b': 'val_b'} + ) + + # Run ansible/playbook.yaml with tag config and extra variable nested, + # which is passed as json and can be used as dictionary in playbook + apply_playbook( + playbook='ansible/playbook.yaml', + tags=['config'], + extra_vars={'nested': {'a': 'value1', 'b': 'value2'}} + ) + + # Custom config file can be passed within extra_vars + apply_playbook( + playbook='ansible/playbook.yaml', + extra_vars="@some_file.json" + ) + + """ + tags = tags or [] + tags = ",".join(tags) + charmhelpers.contrib.templating.contexts.juju_state_to_yaml( + ansible_vars_path, namespace_separator='__', + allow_hyphens_in_keys=False, mode=(stat.S_IRUSR | stat.S_IWUSR)) + + # we want ansible's log output to be unbuffered + env = os.environ.copy() + env['PYTHONUNBUFFERED'] = "1" + call = [ + 'ansible-playbook', + '-c', + 'local', + playbook, + ] + if tags: + call.extend(['--tags', '{}'.format(tags)]) + if extra_vars: + call.extend(['--extra-vars', json.dumps(extra_vars)]) + subprocess.check_call(call, env=env) + + +class AnsibleHooks(charmhelpers.core.hookenv.Hooks): + """Run a playbook with the hook-name as the tag. + + This helper builds on the standard hookenv.Hooks helper, + but additionally runs the playbook with the hook-name specified + using --tags (ie. running all the tasks tagged with the hook-name). + + Example:: + + hooks = AnsibleHooks(playbook_path='ansible/my_machine_state.yaml') + + # All the tasks within my_machine_state.yaml tagged with 'install' + # will be run automatically after do_custom_work() + @hooks.hook() + def install(): + do_custom_work() + + # For most of your hooks, you won't need to do anything other + # than run the tagged tasks for the hook: + @hooks.hook('config-changed', 'start', 'stop') + def just_use_playbook(): + pass + + # As a convenience, you can avoid the above noop function by specifying + # the hooks which are handled by ansible-only and they'll be registered + # for you: + # hooks = AnsibleHooks( + # 'ansible/my_machine_state.yaml', + # default_hooks=['config-changed', 'start', 'stop']) + + if __name__ == "__main__": + # execute a hook based on the name the program is called by + hooks.execute(sys.argv) + """ + + def __init__(self, playbook_path, default_hooks=None): + """Register any hooks handled by ansible.""" + super(AnsibleHooks, self).__init__() + + self._actions = {} + self.playbook_path = playbook_path + + default_hooks = default_hooks or [] + + def noop(*args, **kwargs): + pass + + for hook in default_hooks: + self.register(hook, noop) + + def register_action(self, name, function): + """Register a hook""" + self._actions[name] = function + + def execute(self, args): + """Execute the hook followed by the playbook using the hook as tag.""" + hook_name = os.path.basename(args[0]) + extra_vars = None + if hook_name in self._actions: + extra_vars = self._actions[hook_name](args[1:]) + else: + super(AnsibleHooks, self).execute(args) + + charmhelpers.contrib.ansible.apply_playbook( + self.playbook_path, tags=[hook_name], extra_vars=extra_vars) + + def action(self, *action_names): + """Decorator, registering them as actions""" + def action_wrapper(decorated): + + @functools.wraps(decorated) + def wrapper(argv): + kwargs = dict(arg.split('=') for arg in 
argv) + try: + return decorated(**kwargs) + except TypeError as e: + if decorated.__doc__: + e.args += (decorated.__doc__,) + raise + + self.register_action(decorated.__name__, wrapper) + if '_' in decorated.__name__: + self.register_action( + decorated.__name__.replace('_', '-'), wrapper) + + return wrapper + + return action_wrapper diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/benchmark/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/benchmark/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..c35f7fe785a29519a257f1ed5aa33fdfa19129c7 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/benchmark/__init__.py @@ -0,0 +1,124 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import subprocess +import time +import os +from distutils.spawn import find_executable + +from charmhelpers.core.hookenv import ( + in_relation_hook, + relation_ids, + relation_set, + relation_get, +) + + +def action_set(key, val): + if find_executable('action-set'): + action_cmd = ['action-set'] + + if isinstance(val, dict): + for k, v in iter(val.items()): + action_set('%s.%s' % (key, k), v) + return True + + action_cmd.append('%s=%s' % (key, val)) + subprocess.check_call(action_cmd) + return True + return False + + +class Benchmark(): + """ + Helper class for the `benchmark` interface. + + :param list actions: Define the actions that are also benchmarks + + From inside the benchmark-relation-changed hook, you would + Benchmark(['memory', 'cpu', 'disk', 'smoke', 'custom']) + + Examples: + + siege = Benchmark(['siege']) + siege.start() + [... run siege ...] + # The higher the score, the better the benchmark + siege.set_composite_score(16.70, 'trans/sec', 'desc') + siege.finish() + + + """ + + BENCHMARK_CONF = '/etc/benchmark.conf' # Replaced in testing + + required_keys = [ + 'hostname', + 'port', + 'graphite_port', + 'graphite_endpoint', + 'api_port' + ] + + def __init__(self, benchmarks=None): + if in_relation_hook(): + if benchmarks is not None: + for rid in sorted(relation_ids('benchmark')): + relation_set(relation_id=rid, relation_settings={ + 'benchmarks': ",".join(benchmarks) + }) + + # Check the relation data + config = {} + for key in self.required_keys: + val = relation_get(key) + if val is not None: + config[key] = val + else: + # We don't have all of the required keys + config = {} + break + + if len(config): + with open(self.BENCHMARK_CONF, 'w') as f: + for key, val in iter(config.items()): + f.write("%s=%s\n" % (key, val)) + + @staticmethod + def start(): + action_set('meta.start', time.strftime('%Y-%m-%dT%H:%M:%SZ')) + + """ + If the collectd charm is also installed, tell it to send a snapshot + of the current profile data. 
+ """ + COLLECT_PROFILE_DATA = '/usr/local/bin/collect-profile-data' + if os.path.exists(COLLECT_PROFILE_DATA): + subprocess.check_output([COLLECT_PROFILE_DATA]) + + @staticmethod + def finish(): + action_set('meta.stop', time.strftime('%Y-%m-%dT%H:%M:%SZ')) + + @staticmethod + def set_composite_score(value, units, direction='asc'): + """ + Set the composite score for a benchmark run. This is a single number + representative of the benchmark results. This could be the most + important metric, or an amalgamation of metric scores. + """ + return action_set( + "meta.composite", + {'value': value, 'units': units, 'direction': direction} + ) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/charmhelpers/IMPORT b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/charmhelpers/IMPORT new file mode 100644 index 0000000000000000000000000000000000000000..d41cb041f45cb19b87aee7d3f5fd4d47a3d3fb2d --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/charmhelpers/IMPORT @@ -0,0 +1,4 @@ +Source lp:charm-tools/trunk + +charm-tools/helpers/python/charmhelpers/__init__.py -> charmhelpers/charmhelpers/contrib/charmhelpers/__init__.py +charm-tools/helpers/python/charmhelpers/tests/test_charmhelpers.py -> charmhelpers/tests/contrib/charmhelpers/test_charmhelpers.py diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/charmhelpers/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/charmhelpers/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..ed63e8121e23a8e01ccb8ddcbf9aea34543d3666 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/charmhelpers/__init__.py @@ -0,0 +1,203 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +import warnings +warnings.warn("contrib.charmhelpers is deprecated", DeprecationWarning) # noqa + +import operator +import tempfile +import time +import yaml +import subprocess + +import six +if six.PY3: + from urllib.request import urlopen + from urllib.error import (HTTPError, URLError) +else: + from urllib2 import (urlopen, HTTPError, URLError) + +"""Helper functions for writing Juju charms in Python.""" + +__metaclass__ = type +__all__ = [ + # 'get_config', # core.hookenv.config() + # 'log', # core.hookenv.log() + # 'log_entry', # core.hookenv.log() + # 'log_exit', # core.hookenv.log() + # 'relation_get', # core.hookenv.relation_get() + # 'relation_set', # core.hookenv.relation_set() + # 'relation_ids', # core.hookenv.relation_ids() + # 'relation_list', # core.hookenv.relation_units() + # 'config_get', # core.hookenv.config() + # 'unit_get', # core.hookenv.unit_get() + # 'open_port', # core.hookenv.open_port() + # 'close_port', # core.hookenv.close_port() + # 'service_control', # core.host.service() + 'unit_info', # client-side, NOT IMPLEMENTED + 'wait_for_machine', # client-side, NOT IMPLEMENTED + 'wait_for_page_contents', # client-side, NOT IMPLEMENTED + 'wait_for_relation', # client-side, NOT IMPLEMENTED + 'wait_for_unit', # client-side, NOT IMPLEMENTED +] + + +SLEEP_AMOUNT = 0.1 + + +# We create a juju_status Command here because it makes testing much, +# much easier. +def juju_status(): + subprocess.check_call(['juju', 'status']) + +# re-implemented as charmhelpers.fetch.configure_sources() +# def configure_source(update=False): +# source = config_get('source') +# if ((source.startswith('ppa:') or +# source.startswith('cloud:') or +# source.startswith('http:'))): +# run('add-apt-repository', source) +# if source.startswith("http:"): +# run('apt-key', 'import', config_get('key')) +# if update: +# run('apt-get', 'update') + + +# DEPRECATED: client-side only +def make_charm_config_file(charm_config): + charm_config_file = tempfile.NamedTemporaryFile(mode='w+') + charm_config_file.write(yaml.dump(charm_config)) + charm_config_file.flush() + # The NamedTemporaryFile instance is returned instead of just the name + # because we want to take advantage of garbage collection-triggered + # deletion of the temp file when it goes out of scope in the caller. + return charm_config_file + + +# DEPRECATED: client-side only +def unit_info(service_name, item_name, data=None, unit=None): + if data is None: + data = yaml.safe_load(juju_status()) + service = data['services'].get(service_name) + if service is None: + # XXX 2012-02-08 gmb: + # This allows us to cope with the race condition that we + # have between deploying a service and having it come up in + # `juju status`. We could probably do with cleaning it up so + # that it fails a bit more noisily after a while. + return '' + units = service['units'] + if unit is not None: + item = units[unit][item_name] + else: + # It might seem odd to sort the units here, but we do it to + # ensure that when no unit is specified, the first unit for the + # service (or at least the one with the lowest number) is the + # one whose data gets returned. + sorted_unit_names = sorted(units.keys()) + item = units[sorted_unit_names[0]][item_name] + return item + + +# DEPRECATED: client-side only +def get_machine_data(): + return yaml.safe_load(juju_status())['machines'] + + +# DEPRECATED: client-side only +def wait_for_machine(num_machines=1, timeout=300): + """Wait `timeout` seconds for `num_machines` machines to come up. + + This wait_for... 
function can be called by other wait_for functions + whose timeouts might be too short in situations where only a bare + Juju setup has been bootstrapped. + + :return: A tuple of (num_machines, time_taken). This is used for + testing. + """ + # You may think this is a hack, and you'd be right. The easiest way + # to tell what environment we're working in (LXC vs EC2) is to check + # the dns-name of the first machine. If it's localhost we're in LXC + # and we can just return here. + if get_machine_data()[0]['dns-name'] == 'localhost': + return 1, 0 + start_time = time.time() + while True: + # Drop the first machine, since it's the Zookeeper and that's + # not a machine that we need to wait for. This will only work + # for EC2 environments, which is why we return early above if + # we're in LXC. + machine_data = get_machine_data() + non_zookeeper_machines = [ + machine_data[key] for key in list(machine_data.keys())[1:]] + if len(non_zookeeper_machines) >= num_machines: + all_machines_running = True + for machine in non_zookeeper_machines: + if machine.get('instance-state') != 'running': + all_machines_running = False + break + if all_machines_running: + break + if time.time() - start_time >= timeout: + raise RuntimeError('timeout waiting for service to start') + time.sleep(SLEEP_AMOUNT) + return num_machines, time.time() - start_time + + +# DEPRECATED: client-side only +def wait_for_unit(service_name, timeout=480): + """Wait `timeout` seconds for a given service name to come up.""" + wait_for_machine(num_machines=1) + start_time = time.time() + while True: + state = unit_info(service_name, 'agent-state') + if 'error' in state or state == 'started': + break + if time.time() - start_time >= timeout: + raise RuntimeError('timeout waiting for service to start') + time.sleep(SLEEP_AMOUNT) + if state != 'started': + raise RuntimeError('unit did not start, agent-state: ' + state) + + +# DEPRECATED: client-side only +def wait_for_relation(service_name, relation_name, timeout=120): + """Wait `timeout` seconds for a given relation to come up.""" + start_time = time.time() + while True: + relation = unit_info(service_name, 'relations').get(relation_name) + if relation is not None and relation['state'] == 'up': + break + if time.time() - start_time >= timeout: + raise RuntimeError('timeout waiting for relation to be up') + time.sleep(SLEEP_AMOUNT) + + +# DEPRECATED: client-side only +def wait_for_page_contents(url, contents, timeout=120, validate=None): + if validate is None: + validate = operator.contains + start_time = time.time() + while True: + try: + stream = urlopen(url) + except (HTTPError, URLError): + pass + else: + page = stream.read() + if validate(page, contents): + return page + if time.time() - start_time >= timeout: + raise RuntimeError('timeout waiting for contents of ' + url) + time.sleep(SLEEP_AMOUNT) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/charmsupport/IMPORT b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/charmsupport/IMPORT new file mode 100644 index 0000000000000000000000000000000000000000..554fddda9f48d6d17abaff942834493b2cca2dbe --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/charmsupport/IMPORT @@ -0,0 +1,14 @@ +Source: lp:charmsupport/trunk + +charmsupport/charmsupport/execd.py -> charm-helpers/charmhelpers/contrib/charmsupport/execd.py +charmsupport/charmsupport/hookenv.py -> 
charm-helpers/charmhelpers/contrib/charmsupport/hookenv.py +charmsupport/charmsupport/host.py -> charm-helpers/charmhelpers/contrib/charmsupport/host.py +charmsupport/charmsupport/nrpe.py -> charm-helpers/charmhelpers/contrib/charmsupport/nrpe.py +charmsupport/charmsupport/volumes.py -> charm-helpers/charmhelpers/contrib/charmsupport/volumes.py + +charmsupport/tests/test_execd.py -> charm-helpers/tests/contrib/charmsupport/test_execd.py +charmsupport/tests/test_hookenv.py -> charm-helpers/tests/contrib/charmsupport/test_hookenv.py +charmsupport/tests/test_host.py -> charm-helpers/tests/contrib/charmsupport/test_host.py +charmsupport/tests/test_nrpe.py -> charm-helpers/tests/contrib/charmsupport/test_nrpe.py + +charmsupport/bin/charmsupport -> charm-helpers/bin/contrib/charmsupport/charmsupport diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/charmsupport/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/charmsupport/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..d7567b863e3a5ad2b7a7f44958b4166e0c3d346b --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/charmsupport/__init__.py @@ -0,0 +1,13 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/charmsupport/nrpe.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/charmsupport/nrpe.py new file mode 100644 index 0000000000000000000000000000000000000000..d775861b0868a174a3f0f4a8d4a42320272a4cb0 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/charmsupport/nrpe.py @@ -0,0 +1,500 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +"""Compatibility with the nrpe-external-master charm""" +# Copyright 2012 Canonical Ltd. 
+# +# Authors: +# Matthew Wedgwood + +import subprocess +import pwd +import grp +import os +import glob +import shutil +import re +import shlex +import yaml + +from charmhelpers.core.hookenv import ( + config, + hook_name, + local_unit, + log, + relation_get, + relation_ids, + relation_set, + relations_of_type, +) + +from charmhelpers.core.host import service +from charmhelpers.core import host + +# This module adds compatibility with the nrpe-external-master and plain nrpe +# subordinate charms. To use it in your charm: +# +# 1. Update metadata.yaml +# +# provides: +# (...) +# nrpe-external-master: +# interface: nrpe-external-master +# scope: container +# +# and/or +# +# provides: +# (...) +# local-monitors: +# interface: local-monitors +# scope: container + +# +# 2. Add the following to config.yaml +# +# nagios_context: +# default: "juju" +# type: string +# description: | +# Used by the nrpe subordinate charms. +# A string that will be prepended to instance name to set the host name +# in nagios. So for instance the hostname would be something like: +# juju-myservice-0 +# If you're running multiple environments with the same services in them +# this allows you to differentiate between them. +# nagios_servicegroups: +# default: "" +# type: string +# description: | +# A comma-separated list of nagios servicegroups. +# If left empty, the nagios_context will be used as the servicegroup +# +# 3. Add custom checks (Nagios plugins) to files/nrpe-external-master +# +# 4. Update your hooks.py with something like this: +# +# from charmsupport.nrpe import NRPE +# (...) +# def update_nrpe_config(): +# nrpe_compat = NRPE() +# nrpe_compat.add_check( +# shortname = "myservice", +# description = "Check MyService", +# check_cmd = "check_http -w 2 -c 10 http://localhost" +# ) +# nrpe_compat.add_check( +# "myservice_other", +# "Check for widget failures", +# check_cmd = "/srv/myapp/scripts/widget_check" +# ) +# nrpe_compat.write() +# +# def config_changed(): +# (...) +# update_nrpe_config() +# +# def nrpe_external_master_relation_changed(): +# update_nrpe_config() +# +# def local_monitors_relation_changed(): +# update_nrpe_config() +# +# 4.a If your charm is a subordinate charm set primary=False +# +# from charmsupport.nrpe import NRPE +# (...) +# def update_nrpe_config(): +# nrpe_compat = NRPE(primary=False) +# +# 5. 
ln -s hooks.py nrpe-external-master-relation-changed +# ln -s hooks.py local-monitors-relation-changed + + +class CheckException(Exception): + pass + + +class Check(object): + shortname_re = '[A-Za-z0-9-_.@]+$' + service_template = (""" +#--------------------------------------------------- +# This file is Juju managed +#--------------------------------------------------- +define service {{ + use active-service + host_name {nagios_hostname} + service_description {nagios_hostname}[{shortname}] """ + """{description} + check_command check_nrpe!{command} + servicegroups {nagios_servicegroup} +}} +""") + + def __init__(self, shortname, description, check_cmd): + super(Check, self).__init__() + # XXX: could be better to calculate this from the service name + if not re.match(self.shortname_re, shortname): + raise CheckException("shortname must match {}".format( + Check.shortname_re)) + self.shortname = shortname + self.command = "check_{}".format(shortname) + # Note: a set of invalid characters is defined by the + # Nagios server config + # The default is: illegal_object_name_chars=`~!$%^&*"|'<>?,()= + self.description = description + self.check_cmd = self._locate_cmd(check_cmd) + + def _get_check_filename(self): + return os.path.join(NRPE.nrpe_confdir, '{}.cfg'.format(self.command)) + + def _get_service_filename(self, hostname): + return os.path.join(NRPE.nagios_exportdir, + 'service__{}_{}.cfg'.format(hostname, self.command)) + + def _locate_cmd(self, check_cmd): + search_path = ( + '/usr/lib/nagios/plugins', + '/usr/local/lib/nagios/plugins', + ) + parts = shlex.split(check_cmd) + for path in search_path: + if os.path.exists(os.path.join(path, parts[0])): + command = os.path.join(path, parts[0]) + if len(parts) > 1: + command += " " + " ".join(parts[1:]) + return command + log('Check command not found: {}'.format(parts[0])) + return '' + + def _remove_service_files(self): + if not os.path.exists(NRPE.nagios_exportdir): + return + for f in os.listdir(NRPE.nagios_exportdir): + if f.endswith('_{}.cfg'.format(self.command)): + os.remove(os.path.join(NRPE.nagios_exportdir, f)) + + def remove(self, hostname): + nrpe_check_file = self._get_check_filename() + if os.path.exists(nrpe_check_file): + os.remove(nrpe_check_file) + self._remove_service_files() + + def write(self, nagios_context, hostname, nagios_servicegroups): + nrpe_check_file = self._get_check_filename() + with open(nrpe_check_file, 'w') as nrpe_check_config: + nrpe_check_config.write("# check {}\n".format(self.shortname)) + if nagios_servicegroups: + nrpe_check_config.write( + "# The following header was added automatically by juju\n") + nrpe_check_config.write( + "# Modifying it will affect nagios monitoring and alerting\n") + nrpe_check_config.write( + "# servicegroups: {}\n".format(nagios_servicegroups)) + nrpe_check_config.write("command[{}]={}\n".format( + self.command, self.check_cmd)) + + if not os.path.exists(NRPE.nagios_exportdir): + log('Not writing service config as {} is not accessible'.format( + NRPE.nagios_exportdir)) + else: + self.write_service_config(nagios_context, hostname, + nagios_servicegroups) + + def write_service_config(self, nagios_context, hostname, + nagios_servicegroups): + self._remove_service_files() + + templ_vars = { + 'nagios_hostname': hostname, + 'nagios_servicegroup': nagios_servicegroups, + 'description': self.description, + 'shortname': self.shortname, + 'command': self.command, + } + nrpe_service_text = Check.service_template.format(**templ_vars) + nrpe_service_file = 
self._get_service_filename(hostname)
+        with open(nrpe_service_file, 'w') as nrpe_service_config:
+            nrpe_service_config.write(str(nrpe_service_text))
+
+    def run(self):
+        subprocess.call(self.check_cmd)
+
+
+class NRPE(object):
+    nagios_logdir = '/var/log/nagios'
+    nagios_exportdir = '/var/lib/nagios/export'
+    nrpe_confdir = '/etc/nagios/nrpe.d'
+    homedir = '/var/lib/nagios'  # home dir provided by nagios-nrpe-server
+
+    def __init__(self, hostname=None, primary=True):
+        super(NRPE, self).__init__()
+        self.config = config()
+        self.primary = primary
+        self.nagios_context = self.config['nagios_context']
+        if 'nagios_servicegroups' in self.config and self.config['nagios_servicegroups']:
+            self.nagios_servicegroups = self.config['nagios_servicegroups']
+        else:
+            self.nagios_servicegroups = self.nagios_context
+        self.unit_name = local_unit().replace('/', '-')
+        if hostname:
+            self.hostname = hostname
+        else:
+            nagios_hostname = get_nagios_hostname()
+            if nagios_hostname:
+                self.hostname = nagios_hostname
+            else:
+                self.hostname = "{}-{}".format(self.nagios_context, self.unit_name)
+        self.checks = []
+        # Iff in an nrpe-external-master relation hook, set primary status
+        relation = relation_ids('nrpe-external-master')
+        if relation:
+            log("Setting charm primary status {}".format(primary))
+            for rid in relation:
+                relation_set(relation_id=rid, relation_settings={'primary': self.primary})
+        self.remove_check_queue = set()
+
+    def add_check(self, *args, **kwargs):
+        shortname = None
+        if kwargs.get('shortname') is None:
+            if len(args) > 0:
+                shortname = args[0]
+        else:
+            shortname = kwargs['shortname']
+
+        self.checks.append(Check(*args, **kwargs))
+        try:
+            self.remove_check_queue.remove(shortname)
+        except KeyError:
+            pass
+
+    def remove_check(self, *args, **kwargs):
+        if kwargs.get('shortname') is None:
+            raise ValueError('shortname of check must be specified')
+
+        # Use sensible defaults if they're not specified - these are not
+        # actually used during removal, but they're required for constructing
+        # the Check object; check_disk is chosen because it's part of the
+        # nagios-plugins-basic package.
+        if kwargs.get('check_cmd') is None:
+            kwargs['check_cmd'] = 'check_disk'
+        if kwargs.get('description') is None:
+            kwargs['description'] = ''
+
+        check = Check(*args, **kwargs)
+        check.remove(self.hostname)
+        self.remove_check_queue.add(kwargs['shortname'])
+
+    def write(self):
+        try:
+            nagios_uid = pwd.getpwnam('nagios').pw_uid
+            nagios_gid = grp.getgrnam('nagios').gr_gid
+        except Exception:
+            log("Nagios user not set up, nrpe checks not updated")
+            return
+
+        if not os.path.exists(NRPE.nagios_logdir):
+            os.mkdir(NRPE.nagios_logdir)
+            os.chown(NRPE.nagios_logdir, nagios_uid, nagios_gid)
+
+        nrpe_monitors = {}
+        monitors = {"monitors": {"remote": {"nrpe": nrpe_monitors}}}
+        for nrpecheck in self.checks:
+            nrpecheck.write(self.nagios_context, self.hostname,
+                            self.nagios_servicegroups)
+            nrpe_monitors[nrpecheck.shortname] = {
+                "command": nrpecheck.command,
+            }
+
+        # update-status hooks are configured to fire every 5 minutes by
+        # default. When nagios-nrpe-server is restarted, the nagios server
+        # reports checks failing causing unnecessary alerts. Let's not restart
+        # on update-status hooks.
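+        # Illustrative note (not part of the original helper): during e.g. a
+        # 'config-changed' hook, hook_name() returns 'config-changed', so the
+        # guard below restarts nagios-nrpe-server; during the periodic
+        # 'update-status' hook the restart is skipped.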
+        if not hook_name() == 'update-status':
+            service('restart', 'nagios-nrpe-server')
+
+        monitor_ids = relation_ids("local-monitors") + \
+            relation_ids("nrpe-external-master")
+        for rid in monitor_ids:
+            reldata = relation_get(unit=local_unit(), rid=rid)
+            if 'monitors' in reldata:
+                # update the existing set of monitors with the new data
+                old_monitors = yaml.safe_load(reldata['monitors'])
+                old_nrpe_monitors = old_monitors['monitors']['remote']['nrpe']
+                # remove keys that are in the remove_check_queue
+                old_nrpe_monitors = {k: v for k, v in old_nrpe_monitors.items()
+                                     if k not in self.remove_check_queue}
+                # update/add nrpe_monitors
+                old_nrpe_monitors.update(nrpe_monitors)
+                old_monitors['monitors']['remote']['nrpe'] = old_nrpe_monitors
+                # write back to the relation
+                relation_set(relation_id=rid, monitors=yaml.dump(old_monitors))
+            else:
+                # write a brand new set of monitors, as no existing ones.
+                relation_set(relation_id=rid, monitors=yaml.dump(monitors))
+
+        self.remove_check_queue.clear()
+
+
+def get_nagios_hostcontext(relation_name='nrpe-external-master'):
+    """
+    Query relation with nrpe subordinate, return the nagios_host_context
+
+    :param str relation_name: Name of relation nrpe sub joined to
+    """
+    for rel in relations_of_type(relation_name):
+        if 'nagios_host_context' in rel:
+            return rel['nagios_host_context']
+
+
+def get_nagios_hostname(relation_name='nrpe-external-master'):
+    """
+    Query relation with nrpe subordinate, return the nagios_hostname
+
+    :param str relation_name: Name of relation nrpe sub joined to
+    """
+    for rel in relations_of_type(relation_name):
+        if 'nagios_hostname' in rel:
+            return rel['nagios_hostname']
+
+
+def get_nagios_unit_name(relation_name='nrpe-external-master'):
+    """
+    Return the nagios unit name prepended with host_context if needed
+
+    :param str relation_name: Name of relation nrpe sub joined to
+    """
+    host_context = get_nagios_hostcontext(relation_name)
+    if host_context:
+        unit = "%s:%s" % (host_context, local_unit())
+    else:
+        unit = local_unit()
+    return unit
+
+
+def add_init_service_checks(nrpe, services, unit_name, immediate_check=True):
+    """
+    Add checks for each service in list
+
+    :param NRPE nrpe: NRPE object to add check to
+    :param list services: List of services to check
+    :param str unit_name: Unit name to use in check description
+    :param bool immediate_check: For sysv init, run the service check immediately
+    """
+    for svc in services:
+        # Don't add a check for these services from neutron-gateway
+        if svc in ['ext-port', 'os-charm-phy-nic-mtu']:
+            continue
+
+        upstart_init = '/etc/init/%s.conf' % svc
+        sysv_init = '/etc/init.d/%s' % svc
+
+        if host.init_is_systemd():
+            nrpe.add_check(
+                shortname=svc,
+                description='process check {%s}' % unit_name,
+                check_cmd='check_systemd.py %s' % svc
+            )
+        elif os.path.exists(upstart_init):
+            nrpe.add_check(
+                shortname=svc,
+                description='process check {%s}' % unit_name,
+                check_cmd='check_upstart_job %s' % svc
+            )
+        elif os.path.exists(sysv_init):
+            cronpath = '/etc/cron.d/nagios-service-check-%s' % svc
+            checkpath = '%s/service-check-%s.txt' % (nrpe.homedir, svc)
+            croncmd = (
+                '/usr/local/lib/nagios/plugins/check_exit_status.pl '
+                '-e -s /etc/init.d/%s status' % svc
+            )
+            cron_file = '*/5 * * * * root %s > %s\n' % (croncmd, checkpath)
+            f = open(cronpath, 'w')
+            f.write(cron_file)
+            f.close()
+            nrpe.add_check(
+                shortname=svc,
+                description='service check {%s}' % unit_name,
+                check_cmd='check_status_file.py -f %s' % checkpath,
+            )
+            # if /var/lib/nagios doesn't exist, open(checkpath, 'w') will
+            # fail (LP: #1670223).
+            if immediate_check and os.path.isdir(nrpe.homedir):
+                f = open(checkpath, 'w')
+                subprocess.call(
+                    croncmd.split(),
+                    stdout=f,
+                    stderr=subprocess.STDOUT
+                )
+                f.close()
+                os.chmod(checkpath, 0o644)
+
+
+def copy_nrpe_checks(nrpe_files_dir=None):
+    """
+    Copy the nrpe checks into place
+    """
+    NAGIOS_PLUGINS = '/usr/local/lib/nagios/plugins'
+    if nrpe_files_dir is None:
+        # determine if "charmhelpers" is in CHARMDIR or CHARMDIR/hooks
+        for segment in ['.', 'hooks']:
+            nrpe_files_dir = os.path.abspath(os.path.join(
+                os.getenv('CHARM_DIR'),
+                segment,
+                'charmhelpers',
+                'contrib',
+                'openstack',
+                'files'))
+            if os.path.isdir(nrpe_files_dir):
+                break
+        else:
+            raise RuntimeError("Couldn't find charmhelpers directory")
+    if not os.path.exists(NAGIOS_PLUGINS):
+        os.makedirs(NAGIOS_PLUGINS)
+    for fname in glob.glob(os.path.join(nrpe_files_dir, "check_*")):
+        if os.path.isfile(fname):
+            shutil.copy2(fname,
+                         os.path.join(NAGIOS_PLUGINS, os.path.basename(fname)))
+
+
+def add_haproxy_checks(nrpe, unit_name):
+    """
+    Add the standard HAProxy checks
+
+    :param NRPE nrpe: NRPE object to add check to
+    :param str unit_name: Unit name to use in check description
+    """
+    nrpe.add_check(
+        shortname='haproxy_servers',
+        description='Check HAProxy {%s}' % unit_name,
+        check_cmd='check_haproxy.sh')
+    nrpe.add_check(
+        shortname='haproxy_queue',
+        description='Check HAProxy queue depth {%s}' % unit_name,
+        check_cmd='check_haproxy_queue_depth.sh')
+
+
+def remove_deprecated_check(nrpe, deprecated_services):
+    """
+    Remove checks for deprecated services in list
+
+    :param nrpe: NRPE object to remove check from
+    :type nrpe: NRPE
+    :param deprecated_services: List of deprecated services that are removed
+    :type deprecated_services: list
+    """
+    for dep_svc in deprecated_services:
+        log('Deprecated service: {}'.format(dep_svc))
+        nrpe.remove_check(shortname=dep_svc)
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/charmsupport/volumes.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/charmsupport/volumes.py
new file mode 100644
index 0000000000000000000000000000000000000000..7ea43f0888cd92ac4344f908aa7c9d0afe7568ed
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/charmsupport/volumes.py
@@ -0,0 +1,173 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+'''
+Functions for managing volumes in juju units. One volume is supported per unit.
+Subordinates may have their own storage, provided it is on its own partition.
+
+Configuration stanzas::
+
+    volume-ephemeral:
+        type: boolean
+        default: true
+        description: >
+            If false, a volume is mounted as specified in "volume-map".
+            If true, ephemeral storage will be used, meaning that log data
+            will only exist as long as the machine. YOU HAVE BEEN WARNED.
+    volume-map:
+        type: string
+        default: {}
+        description: >
+            YAML map of units to device names, e.g.:
+            "{ rsyslog/0: /dev/vdb, rsyslog/1: /dev/vdb }"
+            Service units will raise a configure-error if volume-ephemeral
+            is 'true' and no volume-map value is set. Use 'juju set' to set a
+            value and 'juju resolved' to complete configuration.
+
+Usage::
+
+    from charmsupport.volumes import configure_volume, VolumeConfigurationError
+    from charmsupport.hookenv import log, ERROR
+    def pre_mount_hook():
+        stop_service('myservice')
+    def post_mount_hook():
+        start_service('myservice')
+
+    if __name__ == '__main__':
+        try:
+            configure_volume(before_change=pre_mount_hook,
+                             after_change=post_mount_hook)
+        except VolumeConfigurationError:
+            log('Storage could not be configured', ERROR)
+
+'''
+
+# XXX: Known limitations
+# - fstab is neither consulted nor updated
+
+import os
+from charmhelpers.core import hookenv
+from charmhelpers.core import host
+import yaml
+
+
+MOUNT_BASE = '/srv/juju/volumes'
+
+
+class VolumeConfigurationError(Exception):
+    '''Volume configuration data is missing or invalid'''
+    pass
+
+
+def get_config():
+    '''Gather and sanity-check volume configuration data'''
+    volume_config = {}
+    config = hookenv.config()
+
+    errors = False
+
+    if config.get('volume-ephemeral') in (True, 'True', 'true', 'Yes', 'yes'):
+        volume_config['ephemeral'] = True
+    else:
+        volume_config['ephemeral'] = False
+
+    try:
+        volume_map = yaml.safe_load(config.get('volume-map', '{}'))
+    except yaml.YAMLError as e:
+        hookenv.log("Error parsing YAML volume-map: {}".format(e),
+                    hookenv.ERROR)
+        volume_map = {}
+        errors = True
+    if volume_map is None:
+        # probably an empty string
+        volume_map = {}
+    elif not isinstance(volume_map, dict):
+        hookenv.log("Volume-map should be a dictionary, not {}".format(
+            type(volume_map)))
+        errors = True
+
+    volume_config['device'] = volume_map.get(os.environ['JUJU_UNIT_NAME'])
+    if volume_config['device'] and volume_config['ephemeral']:
+        # asked for ephemeral storage but also defined a volume ID
+        hookenv.log('A volume is defined for this unit, but ephemeral '
+                    'storage was requested', hookenv.ERROR)
+        errors = True
+    elif not volume_config['device'] and not volume_config['ephemeral']:
+        # asked for permanent storage but did not define volume ID
+        hookenv.log('Permanent storage was requested, but there is no volume '
+                    'defined for this unit.', hookenv.ERROR)
+        errors = True
+
+    unit_mount_name = hookenv.local_unit().replace('/', '-')
+    volume_config['mountpoint'] = os.path.join(MOUNT_BASE, unit_mount_name)
+
+    if errors:
+        return None
+    return volume_config
+
+
+def mount_volume(config):
+    if os.path.exists(config['mountpoint']):
+        if not os.path.isdir(config['mountpoint']):
+            hookenv.log('Not a directory: {}'.format(config['mountpoint']))
+            raise VolumeConfigurationError()
+    else:
+        host.mkdir(config['mountpoint'])
+    if os.path.ismount(config['mountpoint']):
+        unmount_volume(config)
+    if not host.mount(config['device'], config['mountpoint'], persist=True):
+        raise VolumeConfigurationError()
+
+
+def unmount_volume(config):
+    if os.path.ismount(config['mountpoint']):
+        if not host.umount(config['mountpoint'], persist=True):
+            raise VolumeConfigurationError()
+
+
+def managed_mounts():
+    '''List of all mounted managed volumes'''
+    return filter(lambda mount: mount[0].startswith(MOUNT_BASE), host.mounts())
+
+
+def configure_volume(before_change=lambda: None, after_change=lambda: None):
+    '''Set up storage (or don't) according to the charm's volume configuration.
+ Returns the mount point or "ephemeral". before_change and after_change + are optional functions to be called if the volume configuration changes. + ''' + + config = get_config() + if not config: + hookenv.log('Failed to read volume configuration', hookenv.CRITICAL) + raise VolumeConfigurationError() + + if config['ephemeral']: + if os.path.ismount(config['mountpoint']): + before_change() + unmount_volume(config) + after_change() + return 'ephemeral' + else: + # persistent storage + if os.path.ismount(config['mountpoint']): + mounts = dict(managed_mounts()) + if mounts.get(config['mountpoint']) != config['device']: + before_change() + unmount_volume(config) + mount_volume(config) + after_change() + else: + before_change() + mount_volume(config) + after_change() + return config['mountpoint'] diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/database/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/database/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..64fac9def6101c1de83cd5dafe7fdb5213d6a955 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/database/__init__.py @@ -0,0 +1,11 @@ +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/database/mysql.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/database/mysql.py new file mode 100644 index 0000000000000000000000000000000000000000..c9ecce5f793eedeb963557d109deaf5e1271e65f --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/database/mysql.py @@ -0,0 +1,821 @@ +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
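+
+# A minimal usage sketch (hypothetical paths and values, not part of the
+# original helper):
+#
+#     helper = MySQLHelper(rpasswdf_template='/var/lib/charm/mysql.passwd',
+#                          upasswdf_template='/var/lib/charm/mysql-{}.passwd')
+#     password = helper.configure_db('10.0.0.2', 'wordpress', 'wp-user')
+#
+# configure_db() (defined below) connects as root, creates the database if
+# needed, grants access for the remote host and returns the user's password.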
+ +"""Helper for working with a MySQL database""" +import collections +import copy +import json +import re +import sys +import platform +import os +import glob +import six + +# from string import upper + +from charmhelpers.core.host import ( + CompareHostReleases, + lsb_release, + mkdir, + pwgen, + write_file +) +from charmhelpers.core.hookenv import ( + config as config_get, + relation_get, + related_units, + unit_get, + log, + DEBUG, + INFO, + WARNING, + leader_get, + leader_set, + is_leader, +) +from charmhelpers.fetch import ( + apt_install, + apt_update, + filter_installed_packages, +) +from charmhelpers.contrib.network.ip import get_host_ip + +try: + import MySQLdb +except ImportError: + apt_update(fatal=True) + if six.PY2: + apt_install(filter_installed_packages(['python-mysqldb']), fatal=True) + else: + apt_install(filter_installed_packages(['python3-mysqldb']), fatal=True) + import MySQLdb + + +class MySQLSetPasswordError(Exception): + pass + + +class MySQLHelper(object): + + def __init__(self, rpasswdf_template, upasswdf_template, host='localhost', + migrate_passwd_to_leader_storage=True, + delete_ondisk_passwd_file=True, user="root", password=None, port=None): + self.user = user + self.host = host + self.password = password + self.port = port + + # Password file path templates + self.root_passwd_file_template = rpasswdf_template + self.user_passwd_file_template = upasswdf_template + + self.migrate_passwd_to_leader_storage = migrate_passwd_to_leader_storage + # If we migrate we have the option to delete local copy of root passwd + self.delete_ondisk_passwd_file = delete_ondisk_passwd_file + self.connection = None + + def connect(self, user='root', password=None, host=None, port=None): + _connection_info = { + "user": user or self.user, + "passwd": password or self.password, + "host": host or self.host + } + # port cannot be None but we also do not want to specify it unless it + # has been explicit set. 
+        port = port or self.port
+        if port is not None:
+            _connection_info["port"] = port
+
+        log("Opening db connection for %s@%s" % (user, host), level=DEBUG)
+        self.connection = MySQLdb.connect(**_connection_info)
+
+    def database_exists(self, db_name):
+        cursor = self.connection.cursor()
+        try:
+            cursor.execute("SHOW DATABASES")
+            databases = [i[0] for i in cursor.fetchall()]
+        finally:
+            cursor.close()
+
+        return db_name in databases
+
+    def create_database(self, db_name):
+        cursor = self.connection.cursor()
+        try:
+            cursor.execute("CREATE DATABASE `{}` CHARACTER SET UTF8"
+                           .format(db_name))
+        finally:
+            cursor.close()
+
+    def grant_exists(self, db_name, db_user, remote_ip):
+        cursor = self.connection.cursor()
+        priv_string = "GRANT ALL PRIVILEGES ON `{}`.* " \
+                      "TO '{}'@'{}'".format(db_name, db_user, remote_ip)
+        try:
+            cursor.execute("SHOW GRANTS for '{}'@'{}'".format(db_user,
+                                                              remote_ip))
+            grants = [i[0] for i in cursor.fetchall()]
+        except MySQLdb.OperationalError:
+            return False
+        finally:
+            cursor.close()
+
+        # TODO: review for different grants
+        return priv_string in grants
+
+    def create_grant(self, db_name, db_user, remote_ip, password):
+        cursor = self.connection.cursor()
+        try:
+            # TODO: review for different grants
+            cursor.execute("GRANT ALL PRIVILEGES ON `{}`.* TO '{}'@'{}' "
+                           "IDENTIFIED BY '{}'".format(db_name,
+                                                       db_user,
+                                                       remote_ip,
+                                                       password))
+        finally:
+            cursor.close()
+
+    def create_admin_grant(self, db_user, remote_ip, password):
+        cursor = self.connection.cursor()
+        try:
+            cursor.execute("GRANT ALL PRIVILEGES ON *.* TO '{}'@'{}' "
+                           "IDENTIFIED BY '{}'".format(db_user,
+                                                       remote_ip,
+                                                       password))
+        finally:
+            cursor.close()
+
+    def cleanup_grant(self, db_user, remote_ip):
+        cursor = self.connection.cursor()
+        try:
+            cursor.execute("DELETE FROM mysql.user WHERE user='{}' "
+                           "AND HOST='{}'".format(db_user,
+                                                  remote_ip))
+        finally:
+            cursor.close()
+
+    def flush_priviledges(self):
+        cursor = self.connection.cursor()
+        try:
+            cursor.execute("FLUSH PRIVILEGES")
+        finally:
+            cursor.close()
+
+    def execute(self, sql):
+        """Execute arbitrary SQL against the database."""
+        cursor = self.connection.cursor()
+        try:
+            cursor.execute(sql)
+        finally:
+            cursor.close()
+
+    def select(self, sql):
+        """
+        Execute an arbitrary SQL select query against the database
+        and return the results.
+ + :param sql: SQL select query to execute + :type sql: string + :returns: SQL select query result + :rtype: list of lists + :raises: MySQLdb.Error + """ + cursor = self.connection.cursor() + try: + cursor.execute(sql) + results = [list(i) for i in cursor.fetchall()] + finally: + cursor.close() + return results + + def migrate_passwords_to_leader_storage(self, excludes=None): + """Migrate any passwords storage on disk to leader storage.""" + if not is_leader(): + log("Skipping password migration as not the lead unit", + level=DEBUG) + return + dirname = os.path.dirname(self.root_passwd_file_template) + path = os.path.join(dirname, '*.passwd') + for f in glob.glob(path): + if excludes and f in excludes: + log("Excluding %s from leader storage migration" % (f), + level=DEBUG) + continue + + key = os.path.basename(f) + with open(f, 'r') as passwd: + _value = passwd.read().strip() + + try: + leader_set(settings={key: _value}) + + if self.delete_ondisk_passwd_file: + os.unlink(f) + except ValueError: + # NOTE cluster relation not yet ready - skip for now + pass + + def get_mysql_password_on_disk(self, username=None, password=None): + """Retrieve, generate or store a mysql password for the provided + username on disk.""" + if username: + template = self.user_passwd_file_template + passwd_file = template.format(username) + else: + passwd_file = self.root_passwd_file_template + + _password = None + if os.path.exists(passwd_file): + log("Using existing password file '%s'" % passwd_file, level=DEBUG) + with open(passwd_file, 'r') as passwd: + _password = passwd.read().strip() + else: + log("Generating new password file '%s'" % passwd_file, level=DEBUG) + if not os.path.isdir(os.path.dirname(passwd_file)): + # NOTE: need to ensure this is not mysql root dir (which needs + # to be mysql readable) + mkdir(os.path.dirname(passwd_file), owner='root', group='root', + perms=0o770) + # Force permissions - for some reason the chmod in makedirs + # fails + os.chmod(os.path.dirname(passwd_file), 0o770) + + _password = password or pwgen(length=32) + write_file(passwd_file, _password, owner='root', group='root', + perms=0o660) + + return _password + + def passwd_keys(self, username): + """Generator to return keys used to store passwords in peer store. + + NOTE: we support both legacy and new format to support mysql + charm prior to refactor. This is necessary to avoid LP 1451890. + """ + keys = [] + if username == 'mysql': + log("Bad username '%s'" % (username), level=WARNING) + + if username: + # IMPORTANT: *newer* format must be returned first + keys.append('mysql-%s.passwd' % (username)) + keys.append('%s.passwd' % (username)) + else: + keys.append('mysql.passwd') + + for key in keys: + yield key + + def get_mysql_password(self, username=None, password=None): + """Retrieve, generate or store a mysql password for the provided + username using peer relation cluster.""" + excludes = [] + + # First check peer relation. 
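+        # e.g. for a hypothetical username 'nova' the keys tried are, in
+        # order, 'mysql-nova.passwd' and then 'nova.passwd' (see
+        # passwd_keys() above); the first non-empty leader setting wins.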
+        try:
+            for key in self.passwd_keys(username):
+                _password = leader_get(key)
+                if _password:
+                    break
+
+            # If root password available don't update peer relation from local
+            if _password and not username:
+                excludes.append(self.root_passwd_file_template)
+
+        except ValueError:
+            # cluster relation is not yet started; use on-disk
+            _password = None
+
+        # If none available, generate new one
+        if not _password:
+            _password = self.get_mysql_password_on_disk(username, password)
+
+        # Put on wire if required
+        if self.migrate_passwd_to_leader_storage:
+            self.migrate_passwords_to_leader_storage(excludes=excludes)
+
+        return _password
+
+    def get_mysql_root_password(self, password=None):
+        """Retrieve or generate mysql root password for service units."""
+        return self.get_mysql_password(username=None, password=password)
+
+    def set_mysql_password(self, username, password, current_password=None):
+        """Update a mysql password for the provided username changing the
+        leader settings
+
+        To update root's password, pass `None` as the username
+
+        :param username: Username to change password of
+        :type username: str
+        :param password: New password for user.
+        :type password: str
+        :param current_password: Existing password for user.
+        :type current_password: str
+        """
+
+        if username is None:
+            username = 'root'
+
+        # get root password via leader-get; it may be that in the past (when
+        # changes to root-password were not supported) the user changed the
+        # password, so leader-get is a more reliable source than
+        # config.previous('root-password').
+        rel_username = None if username == 'root' else username
+        if not current_password:
+            current_password = self.get_mysql_password(rel_username)
+
+        # password that needs to be set
+        new_passwd = password
+
+        # update password for all users (e.g. root@localhost, root@::1, etc)
+        try:
+            self.connect(user=username, password=current_password)
+            cursor = self.connection.cursor()
+        except MySQLdb.OperationalError as ex:
+            raise MySQLSetPasswordError(('Cannot connect using password in '
+                                         'leader settings (%s)') % ex, ex)
+
+        try:
+            # NOTE(freyes): Due to skip-name-resolve root@$HOSTNAME account
+            # fails when using SET PASSWORD so using UPDATE against the
+            # mysql.user table is needed, but changes to this table are not
+            # replicated across the cluster, so this update needs to run in
+            # all the nodes.
More info at + # http://galeracluster.com/documentation-webpages/userchanges.html + release = CompareHostReleases(lsb_release()['DISTRIB_CODENAME']) + if release < 'bionic': + SQL_UPDATE_PASSWD = ("UPDATE mysql.user SET password = " + "PASSWORD( %s ) WHERE user = %s;") + else: + # PXC 5.7 (introduced in Bionic) uses authentication_string + SQL_UPDATE_PASSWD = ("UPDATE mysql.user SET " + "authentication_string = " + "PASSWORD( %s ) WHERE user = %s;") + cursor.execute(SQL_UPDATE_PASSWD, (new_passwd, username)) + cursor.execute('FLUSH PRIVILEGES;') + self.connection.commit() + except MySQLdb.OperationalError as ex: + raise MySQLSetPasswordError('Cannot update password: %s' % str(ex), + ex) + finally: + cursor.close() + + # check the password was changed + try: + self.connect(user=username, password=new_passwd) + self.execute('select 1;') + except MySQLdb.OperationalError as ex: + raise MySQLSetPasswordError(('Cannot connect using new password: ' + '%s') % str(ex), ex) + + if not is_leader(): + log('Only the leader can set a new password in the relation', + level=DEBUG) + return + + for key in self.passwd_keys(rel_username): + _password = leader_get(key) + if _password: + log('Updating password for %s (%s)' % (key, rel_username), + level=DEBUG) + leader_set(settings={key: new_passwd}) + + def set_mysql_root_password(self, password, current_password=None): + """Update mysql root password changing the leader settings + + :param password: New password for user. + :type password: str + :param current_password: Existing password for user. + :type current_password: str + """ + self.set_mysql_password( + 'root', + password, + current_password=current_password) + + def normalize_address(self, hostname): + """Ensure that address returned is an IP address (i.e. not fqdn)""" + if config_get('prefer-ipv6'): + # TODO: add support for ipv6 dns + return hostname + + if hostname != unit_get('private-address'): + return get_host_ip(hostname, fallback=hostname) + + # Otherwise assume localhost + return '127.0.0.1' + + def get_allowed_units(self, database, username, relation_id=None): + """Get list of units with access grants for database with username. + + This is typically used to provide shared-db relations with a list of + which units have been granted access to the given database. 
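+
+        For example (hypothetical relation data): a unit that published
+        hostname='["10.0.0.2"]' on the shared-db relation is included in the
+        returned set only if a grant exists for 10.0.0.2 on the database.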
+ """ + self.connect(password=self.get_mysql_root_password()) + allowed_units = set() + for unit in related_units(relation_id): + settings = relation_get(rid=relation_id, unit=unit) + # First check for setting with prefix, then without + for attr in ["%s_hostname" % (database), 'hostname']: + hosts = settings.get(attr, None) + if hosts: + break + + if hosts: + # hostname can be json-encoded list of hostnames + try: + hosts = json.loads(hosts) + except ValueError: + hosts = [hosts] + else: + hosts = [settings['private-address']] + + if hosts: + for host in hosts: + host = self.normalize_address(host) + if self.grant_exists(database, username, host): + log("Grant exists for host '%s' on db '%s'" % + (host, database), level=DEBUG) + if unit not in allowed_units: + allowed_units.add(unit) + else: + log("Grant does NOT exist for host '%s' on db '%s'" % + (host, database), level=DEBUG) + else: + log("No hosts found for grant check", level=INFO) + + return allowed_units + + def configure_db(self, hostname, database, username, admin=False): + """Configure access to database for username from hostname.""" + self.connect(password=self.get_mysql_root_password()) + if not self.database_exists(database): + self.create_database(database) + + remote_ip = self.normalize_address(hostname) + password = self.get_mysql_password(username) + if not self.grant_exists(database, username, remote_ip): + if not admin: + self.create_grant(database, username, remote_ip, password) + else: + self.create_admin_grant(username, remote_ip, password) + self.flush_priviledges() + + return password + + +# `_singleton_config_helper` stores the instance of the helper class that is +# being used during a hook invocation. +_singleton_config_helper = None + + +def get_mysql_config_helper(): + global _singleton_config_helper + if _singleton_config_helper is None: + _singleton_config_helper = MySQLConfigHelper() + return _singleton_config_helper + + +class MySQLConfigHelper(object): + """Base configuration helper for MySQL.""" + + # Going for the biggest page size to avoid wasted bytes. 
+    # InnoDB page size is 16MB
+
+    DEFAULT_PAGE_SIZE = 16 * 1024 * 1024
+    DEFAULT_INNODB_BUFFER_FACTOR = 0.50
+    DEFAULT_INNODB_BUFFER_SIZE_MAX = 512 * 1024 * 1024
+
+    # Validation and lookups for InnoDB configuration
+    INNODB_VALID_BUFFERING_VALUES = [
+        'none',
+        'inserts',
+        'deletes',
+        'changes',
+        'purges',
+        'all'
+    ]
+    INNODB_FLUSH_CONFIG_VALUES = {
+        'fast': 2,
+        'safest': 1,
+        'unsafe': 0,
+    }
+
+    def human_to_bytes(self, human):
+        """Convert human readable configuration options to bytes."""
+        num_re = re.compile('^[0-9]+$')
+        if num_re.match(human):
+            return human
+
+        factors = {
+            'K': 1024,
+            'M': 1048576,
+            'G': 1073741824,
+            'T': 1099511627776
+        }
+        modifier = human[-1]
+        if modifier in factors:
+            return int(human[:-1]) * factors[modifier]
+
+        if modifier == '%':
+            total_ram = self.human_to_bytes(self.get_mem_total())
+            if self.is_32bit_system() and total_ram > self.sys_mem_limit():
+                total_ram = self.sys_mem_limit()
+            factor = int(human[:-1]) * 0.01
+            pctram = total_ram * factor
+            return int(pctram - (pctram % self.DEFAULT_PAGE_SIZE))
+
+        raise ValueError("Can only convert K,M,G, or T")
+
+    def is_32bit_system(self):
+        """Determine whether system is 32 or 64 bit."""
+        try:
+            return sys.maxsize < 2 ** 32
+        except OverflowError:
+            return False
+
+    def sys_mem_limit(self):
+        """Determine the default memory limit for the current service unit."""
+        if platform.machine() in ['armv7l']:
+            _mem_limit = self.human_to_bytes('2700M')  # experimentally determined
+        else:
+            # Limit for x86 based 32bit systems
+            _mem_limit = self.human_to_bytes('4G')
+
+        return _mem_limit
+
+    def get_mem_total(self):
+        """Calculate the total memory in the current service unit."""
+        with open('/proc/meminfo') as meminfo_file:
+            for line in meminfo_file:
+                key, mem = line.split(':', 2)
+                if key == 'MemTotal':
+                    mtot, modifier = mem.strip().split(' ')
+                    return '%s%s' % (mtot, modifier[0].upper())
+
+    def get_innodb_flush_log_at_trx_commit(self):
+        """Get value for innodb_flush_log_at_trx_commit.
+
+        Use the innodb-flush-log-at-trx-commit or the tuning-level setting
+        translated by INNODB_FLUSH_CONFIG_VALUES to get the
+        innodb_flush_log_at_trx_commit value.
+
+        :returns: Numeric value for innodb_flush_log_at_trx_commit
+        :rtype: Union[None, int]
+        """
+        _iflatc = config_get('innodb-flush-log-at-trx-commit')
+        _tuning_level = config_get('tuning-level')
+        if _iflatc:
+            return _iflatc
+        elif _tuning_level:
+            return self.INNODB_FLUSH_CONFIG_VALUES.get(_tuning_level, 1)
+
+    def get_innodb_change_buffering(self):
+        """Get value for innodb_change_buffering.
+
+        Use the innodb-change-buffering validated against
+        INNODB_VALID_BUFFERING_VALUES to get the innodb_change_buffering value.
+
+        :returns: String value for innodb_change_buffering.
+        :rtype: Union[None, str]
+        """
+        _icb = config_get('innodb-change-buffering')
+        if _icb and _icb in self.INNODB_VALID_BUFFERING_VALUES:
+            return _icb
+
+    def get_innodb_buffer_pool_size(self):
+        """Get value for innodb_buffer_pool_size.
+
+        Return the number value of innodb-buffer-pool-size or dataset-size. If
+        neither is set, calculate a sane default based on total memory.
+
+        :returns: Numeric value for innodb_buffer_pool_size.
+        :rtype: int
+        """
+        total_memory = self.human_to_bytes(self.get_mem_total())
+
+        dataset_bytes = config_get('dataset-size')
+        innodb_buffer_pool_size = config_get('innodb-buffer-pool-size')
+
+        if innodb_buffer_pool_size:
+            innodb_buffer_pool_size = self.human_to_bytes(
+                innodb_buffer_pool_size)
+        elif dataset_bytes:
+            log("Option 'dataset-size' has been deprecated, please use "
+                "the 'innodb-buffer-pool-size' option instead", level="WARN")
+            innodb_buffer_pool_size = self.human_to_bytes(
+                dataset_bytes)
+        else:
+            # NOTE(jamespage): pick the smallest of 50% of RAM or 512MB
+            #                  to ensure that deployments in containers
+            #                  without constraints don't try to consume
+            #                  silly amounts of memory.
+            innodb_buffer_pool_size = min(
+                int(total_memory * self.DEFAULT_INNODB_BUFFER_FACTOR),
+                self.DEFAULT_INNODB_BUFFER_SIZE_MAX
+            )
+
+        if innodb_buffer_pool_size > total_memory:
+            log("innodb_buffer_pool_size {} is greater than system available "
+                "memory {}".format(
+                    innodb_buffer_pool_size,
+                    total_memory), level='WARN')
+
+        return innodb_buffer_pool_size
+
+
+class PerconaClusterHelper(MySQLConfigHelper):
+    """Percona-cluster specific configuration helper."""
+
+    def parse_config(self):
+        """Parse charm configuration and calculate values for config files."""
+        config = config_get()
+        mysql_config = {}
+        if 'max-connections' in config:
+            mysql_config['max_connections'] = config['max-connections']
+
+        if 'wait-timeout' in config:
+            mysql_config['wait_timeout'] = config['wait-timeout']
+
+        if self.get_innodb_flush_log_at_trx_commit() is not None:
+            mysql_config['innodb_flush_log_at_trx_commit'] = \
+                self.get_innodb_flush_log_at_trx_commit()
+
+        if self.get_innodb_change_buffering() is not None:
+            mysql_config['innodb_change_buffering'] = config['innodb-change-buffering']
+
+        if 'innodb-io-capacity' in config:
+            mysql_config['innodb_io_capacity'] = config['innodb-io-capacity']
+
+        # Set a sane default key_buffer size
+        mysql_config['key_buffer'] = self.human_to_bytes('32M')
+        mysql_config['innodb_buffer_pool_size'] = self.get_innodb_buffer_pool_size()
+        return mysql_config
+
+
+class MySQL8Helper(MySQLHelper):
+
+    def grant_exists(self, db_name, db_user, remote_ip):
+        cursor = self.connection.cursor()
+        priv_string = ("GRANT ALL PRIVILEGES ON {}.* "
+                       "TO {}@{}".format(db_name, db_user, remote_ip))
+        try:
+            cursor.execute("SHOW GRANTS FOR '{}'@'{}'".format(db_user,
+                                                              remote_ip))
+            grants = [i[0] for i in cursor.fetchall()]
+        except MySQLdb.OperationalError:
+            return False
+        finally:
+            cursor.close()
+
+        # Different versions of MySQL use ' or `. Ignore these in the check.
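+        # e.g. (hypothetical values) both of these normalise to the same
+        # unquoted string before the comparison with priv_string below:
+        #     GRANT ALL PRIVILEGES ON `mydb`.* TO `wp-user`@`10.0.0.2`
+        #     GRANT ALL PRIVILEGES ON 'mydb'.* TO 'wp-user'@'10.0.0.2'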
+ return priv_string in [ + i.replace("'", "").replace("`", "") for i in grants] + + def create_grant(self, db_name, db_user, remote_ip, password): + if self.grant_exists(db_name, db_user, remote_ip): + return + + # Make sure the user exists + # MySQL8 must create the user before the grant + self.create_user(db_user, remote_ip, password) + + cursor = self.connection.cursor() + try: + cursor.execute("GRANT ALL PRIVILEGES ON `{}`.* TO '{}'@'{}'" + .format(db_name, db_user, remote_ip)) + finally: + cursor.close() + + def create_user(self, db_user, remote_ip, password): + + SQL_USER_CREATE = ( + "CREATE USER '{db_user}'@'{remote_ip}' " + "IDENTIFIED BY '{password}'") + + cursor = self.connection.cursor() + try: + cursor.execute(SQL_USER_CREATE.format( + db_user=db_user, + remote_ip=remote_ip, + password=password) + ) + except MySQLdb._exceptions.OperationalError: + log("DB user {} already exists.".format(db_user), + "WARNING") + finally: + cursor.close() + + def create_router_grant(self, db_user, remote_ip, password): + + # Make sure the user exists + # MySQL8 must create the user before the grant + self.create_user(db_user, remote_ip, password) + + # Mysql-Router specific grants + cursor = self.connection.cursor() + try: + cursor.execute("GRANT CREATE USER ON *.* TO '{}'@'{}' WITH GRANT " + "OPTION".format(db_user, remote_ip)) + cursor.execute("GRANT SELECT, INSERT, UPDATE, DELETE, EXECUTE ON " + "mysql_innodb_cluster_metadata.* TO '{}'@'{}'" + .format(db_user, remote_ip)) + cursor.execute("GRANT SELECT ON mysql.user TO '{}'@'{}'" + .format(db_user, remote_ip)) + cursor.execute("GRANT SELECT ON " + "performance_schema.replication_group_members " + "TO '{}'@'{}'".format(db_user, remote_ip)) + cursor.execute("GRANT SELECT ON " + "performance_schema.replication_group_member_stats " + "TO '{}'@'{}'".format(db_user, remote_ip)) + cursor.execute("GRANT SELECT ON " + "performance_schema.global_variables " + "TO '{}'@'{}'".format(db_user, remote_ip)) + finally: + cursor.close() + + def configure_router(self, hostname, username): + + if self.connection is None: + self.connect(password=self.get_mysql_root_password()) + + remote_ip = self.normalize_address(hostname) + password = self.get_mysql_password(username) + self.create_user(username, remote_ip, password) + self.create_router_grant(username, remote_ip, password) + + return password + + +def get_prefix(requested, keys=None): + """Return existing prefix or None. + + :param requested: Request string. i.e. novacell0_username + :type requested: str + :param keys: Keys to determine prefix. Defaults set in function. + :type keys: List of str keys + :returns: String prefix i.e. novacell0 + :rtype: Union[None, str] + """ + if keys is None: + # Shared-DB default keys + keys = ["_database", "_username", "_hostname"] + for key in keys: + if requested.endswith(key): + return requested[:-len(key)] + + +def get_db_data(relation_data, unprefixed): + """Organize database requests into a collections.OrderedDict + + :param relation_data: shared-db relation data + :type relation_data: dict + :param unprefixed: Prefix to use for requests without a prefix. This should + be unique for each side of the relation to avoid + conflicts. 
+ :type unprefixed: str + :returns: Order dict of databases and users + :rtype: collections.OrderedDict + """ + # Deep copy to avoid unintentionally changing relation data + settings = copy.deepcopy(relation_data) + databases = collections.OrderedDict() + + # Clear non-db related elements + if "egress-subnets" in settings.keys(): + settings.pop("egress-subnets") + if "ingress-address" in settings.keys(): + settings.pop("ingress-address") + if "private-address" in settings.keys(): + settings.pop("private-address") + + singleset = {"database", "username", "hostname"} + if singleset.issubset(settings): + settings["{}_{}".format(unprefixed, "hostname")] = ( + settings["hostname"]) + settings.pop("hostname") + settings["{}_{}".format(unprefixed, "database")] = ( + settings["database"]) + settings.pop("database") + settings["{}_{}".format(unprefixed, "username")] = ( + settings["username"]) + settings.pop("username") + + for k, v in settings.items(): + db = k.split("_")[0] + x = "_".join(k.split("_")[1:]) + if db not in databases: + databases[db] = collections.OrderedDict() + databases[db][x] = v + + return databases diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hahelpers/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hahelpers/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..d7567b863e3a5ad2b7a7f44958b4166e0c3d346b --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hahelpers/__init__.py @@ -0,0 +1,13 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hahelpers/apache.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hahelpers/apache.py new file mode 100644 index 0000000000000000000000000000000000000000..2c1e371e179bb6926f227331f86cb14a38c229a2 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hahelpers/apache.py @@ -0,0 +1,86 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# +# Copyright 2012 Canonical Ltd. 
+# +# This file is sourced from lp:openstack-charm-helpers +# +# Authors: +# James Page +# Adam Gandelman +# + +import os + +from charmhelpers.core import host +from charmhelpers.core.hookenv import ( + config as config_get, + relation_get, + relation_ids, + related_units as relation_list, + log, + INFO, +) + + +def get_cert(cn=None): + # TODO: deal with multiple https endpoints via charm config + cert = config_get('ssl_cert') + key = config_get('ssl_key') + if not (cert and key): + log("Inspecting identity-service relations for SSL certificate.", + level=INFO) + cert = key = None + if cn: + ssl_cert_attr = 'ssl_cert_{}'.format(cn) + ssl_key_attr = 'ssl_key_{}'.format(cn) + else: + ssl_cert_attr = 'ssl_cert' + ssl_key_attr = 'ssl_key' + for r_id in relation_ids('identity-service'): + for unit in relation_list(r_id): + if not cert: + cert = relation_get(ssl_cert_attr, + rid=r_id, unit=unit) + if not key: + key = relation_get(ssl_key_attr, + rid=r_id, unit=unit) + return (cert, key) + + +def get_ca_cert(): + ca_cert = config_get('ssl_ca') + if ca_cert is None: + log("Inspecting identity-service relations for CA SSL certificate.", + level=INFO) + for r_id in (relation_ids('identity-service') + + relation_ids('identity-credentials')): + for unit in relation_list(r_id): + if ca_cert is None: + ca_cert = relation_get('ca_cert', + rid=r_id, unit=unit) + return ca_cert + + +def retrieve_ca_cert(cert_file): + cert = None + if os.path.isfile(cert_file): + with open(cert_file, 'rb') as crt: + cert = crt.read() + return cert + + +def install_ca_cert(ca_cert): + host.install_ca_cert(ca_cert, 'keystone_juju_ca_cert') diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hahelpers/cluster.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hahelpers/cluster.py new file mode 100644 index 0000000000000000000000000000000000000000..ba34fba0cafa21d15a6a27946544b2c99fbd3663 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hahelpers/cluster.py @@ -0,0 +1,451 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# +# Copyright 2012 Canonical Ltd. +# +# Authors: +# James Page +# Adam Gandelman +# + +""" +Helpers for clustering and determining "cluster leadership" and other +clustering-related helpers. 
+""" + +import functools +import subprocess +import os +import time + +from socket import gethostname as get_unit_hostname + +import six + +from charmhelpers.core.hookenv import ( + log, + relation_ids, + related_units as relation_list, + relation_get, + config as config_get, + INFO, + DEBUG, + WARNING, + unit_get, + is_leader as juju_is_leader, + status_set, +) +from charmhelpers.core.host import ( + modulo_distribution, +) +from charmhelpers.core.decorators import ( + retry_on_exception, +) +from charmhelpers.core.strutils import ( + bool_from_string, +) + +DC_RESOURCE_NAME = 'DC' + + +class HAIncompleteConfig(Exception): + pass + + +class HAIncorrectConfig(Exception): + pass + + +class CRMResourceNotFound(Exception): + pass + + +class CRMDCNotFound(Exception): + pass + + +def is_elected_leader(resource): + """ + Returns True if the charm executing this is the elected cluster leader. + + It relies on two mechanisms to determine leadership: + 1. If juju is sufficiently new and leadership election is supported, + the is_leader command will be used. + 2. If the charm is part of a corosync cluster, call corosync to + determine leadership. + 3. If the charm is not part of a corosync cluster, the leader is + determined as being "the alive unit with the lowest unit numer". In + other words, the oldest surviving unit. + """ + try: + return juju_is_leader() + except NotImplementedError: + log('Juju leadership election feature not enabled' + ', using fallback support', + level=WARNING) + + if is_clustered(): + if not is_crm_leader(resource): + log('Deferring action to CRM leader.', level=INFO) + return False + else: + peers = peer_units() + if peers and not oldest_peer(peers): + log('Deferring action to oldest service unit.', level=INFO) + return False + return True + + +def is_clustered(): + for r_id in (relation_ids('ha') or []): + for unit in (relation_list(r_id) or []): + clustered = relation_get('clustered', + rid=r_id, + unit=unit) + if clustered: + return True + return False + + +def is_crm_dc(): + """ + Determine leadership by querying the pacemaker Designated Controller + """ + cmd = ['crm', 'status'] + try: + status = subprocess.check_output(cmd, stderr=subprocess.STDOUT) + if not isinstance(status, six.text_type): + status = six.text_type(status, "utf-8") + except subprocess.CalledProcessError as ex: + raise CRMDCNotFound(str(ex)) + + current_dc = '' + for line in status.split('\n'): + if line.startswith('Current DC'): + # Current DC: juju-lytrusty-machine-2 (168108163) - partition with quorum + current_dc = line.split(':')[1].split()[0] + if current_dc == get_unit_hostname(): + return True + elif current_dc == 'NONE': + raise CRMDCNotFound('Current DC: NONE') + + return False + + +@retry_on_exception(5, base_delay=2, + exc_type=(CRMResourceNotFound, CRMDCNotFound)) +def is_crm_leader(resource, retry=False): + """ + Returns True if the charm calling this is the elected corosync leader, + as returned by calling the external "crm" command. + + We allow this operation to be retried to avoid the possibility of getting a + false negative. See LP #1396246 for more info. 
+ """ + if resource == DC_RESOURCE_NAME: + return is_crm_dc() + cmd = ['crm', 'resource', 'show', resource] + try: + status = subprocess.check_output(cmd, stderr=subprocess.STDOUT) + if not isinstance(status, six.text_type): + status = six.text_type(status, "utf-8") + except subprocess.CalledProcessError: + status = None + + if status and get_unit_hostname() in status: + return True + + if status and "resource %s is NOT running" % (resource) in status: + raise CRMResourceNotFound("CRM resource %s not found" % (resource)) + + return False + + +def is_leader(resource): + log("is_leader is deprecated. Please consider using is_crm_leader " + "instead.", level=WARNING) + return is_crm_leader(resource) + + +def peer_units(peer_relation="cluster"): + peers = [] + for r_id in (relation_ids(peer_relation) or []): + for unit in (relation_list(r_id) or []): + peers.append(unit) + return peers + + +def peer_ips(peer_relation='cluster', addr_key='private-address'): + '''Return a dict of peers and their private-address''' + peers = {} + for r_id in relation_ids(peer_relation): + for unit in relation_list(r_id): + peers[unit] = relation_get(addr_key, rid=r_id, unit=unit) + return peers + + +def oldest_peer(peers): + """Determines who the oldest peer is by comparing unit numbers.""" + local_unit_no = int(os.getenv('JUJU_UNIT_NAME').split('/')[1]) + for peer in peers: + remote_unit_no = int(peer.split('/')[1]) + if remote_unit_no < local_unit_no: + return False + return True + + +def eligible_leader(resource): + log("eligible_leader is deprecated. Please consider using " + "is_elected_leader instead.", level=WARNING) + return is_elected_leader(resource) + + +def https(): + ''' + Determines whether enough data has been provided in configuration + or relation data to configure HTTPS + . + returns: boolean + ''' + use_https = config_get('use-https') + if use_https and bool_from_string(use_https): + return True + if config_get('ssl_cert') and config_get('ssl_key'): + return True + for r_id in relation_ids('certificates'): + for unit in relation_list(r_id): + ca = relation_get('ca', rid=r_id, unit=unit) + if ca: + return True + for r_id in relation_ids('identity-service'): + for unit in relation_list(r_id): + # TODO - needs fixing for new helper as ssl_cert/key suffixes with CN + rel_state = [ + relation_get('https_keystone', rid=r_id, unit=unit), + relation_get('ca_cert', rid=r_id, unit=unit), + ] + # NOTE: works around (LP: #1203241) + if (None not in rel_state) and ('' not in rel_state): + return True + return False + + +def determine_api_port(public_port, singlenode_mode=False): + ''' + Determine correct API server listening port based on + existence of HTTPS reverse proxy and/or haproxy. + + public_port: int: standard public port for given service + + singlenode_mode: boolean: Shuffle ports when only a single unit is present + + returns: int: the correct listening port for the API service + ''' + i = 0 + if singlenode_mode: + i += 1 + elif len(peer_units()) > 0 or is_clustered(): + i += 1 + if https(): + i += 1 + return public_port - (i * 10) + + +def determine_apache_port(public_port, singlenode_mode=False): + ''' + Description: Determine correct apache listening port based on public IP + + state of the cluster. 
+
+    public_port: int: standard public port for given service
+
+    singlenode_mode: boolean: Shuffle ports when only a single unit is present
+
+    returns: int: the correct listening port for the HAProxy service
+    '''
+    i = 0
+    if singlenode_mode:
+        i += 1
+    elif len(peer_units()) > 0 or is_clustered():
+        i += 1
+    return public_port - (i * 10)
+
+
+determine_apache_port_single = functools.partial(
+    determine_apache_port, singlenode_mode=True)
+
+
+def get_hacluster_config(exclude_keys=None):
+    '''
+    Obtains all relevant configuration from charm configuration required
+    for initiating a relation to hacluster:
+
+        ha-bindiface, ha-mcastport, vip, os-internal-hostname,
+        os-admin-hostname, os-public-hostname, os-access-hostname
+
+    param: exclude_keys: list of setting key(s) to be excluded.
+    returns: dict: A dict containing settings keyed by setting name.
+    raises: HAIncorrectConfig or HAIncompleteConfig if settings are missing
+            or incorrect.
+    '''
+    settings = ['ha-bindiface', 'ha-mcastport', 'vip', 'os-internal-hostname',
+                'os-admin-hostname', 'os-public-hostname', 'os-access-hostname']
+    conf = {}
+    for setting in settings:
+        if exclude_keys and setting in exclude_keys:
+            continue
+
+        conf[setting] = config_get(setting)
+
+    if not valid_hacluster_config():
+        raise HAIncorrectConfig('Insufficient or incorrect config data to '
+                                'configure hacluster.')
+    return conf
+
+
+def valid_hacluster_config():
+    '''
+    Check that either vip or dns-ha is set. If dns-ha then one of os-*-hostname
+    must be set.
+
+    Note: ha-bindiface and ha-mcastport both have defaults and will always
+    be set. We only care that either vip or dns-ha is set.
+
+    :returns: boolean: valid config returns True.
+    raises: HAIncorrectConfig if settings conflict.
+    raises: HAIncompleteConfig if settings are missing.
+    '''
+    vip = config_get('vip')
+    dns = config_get('dns-ha')
+    if not(bool(vip) ^ bool(dns)):
+        msg = ('HA: Either vip or dns-ha must be set but not both in order to '
+               'use high availability')
+        status_set('blocked', msg)
+        raise HAIncorrectConfig(msg)
+
+    # If dns-ha then one of os-*-hostname must be set
+    if dns:
+        dns_settings = ['os-internal-hostname', 'os-admin-hostname',
+                        'os-public-hostname', 'os-access-hostname']
+        # At this point it is unknown if one or all of the possible
+        # network spaces are in HA. Validate at least one is set which is
+        # the minimum required.
+        for setting in dns_settings:
+            if config_get(setting):
+                log('DNS HA: At least one hostname is set {}: {}'
+                    ''.format(setting, config_get(setting)),
+                    level=DEBUG)
+                return True
+
+        msg = ('DNS HA: At least one os-*-hostname(s) must be set to use '
+               'DNS HA')
+        status_set('blocked', msg)
+        raise HAIncompleteConfig(msg)
+
+    log('VIP HA: VIP is set {}'.format(vip), level=DEBUG)
+    return True
+
+
+def canonical_url(configs, vip_setting='vip'):
+    '''
+    Returns the correct HTTP URL to this host given the state of HTTPS
+    configuration and hacluster.
+
+    :configs    : OSTemplateRenderer: A config templating object to inspect
+                  for a complete https context.
+
+    :vip_setting: str: Setting in charm config that specifies
+                  VIP address.
+    '''
+    scheme = 'http'
+    if 'https' in configs.complete_contexts():
+        scheme = 'https'
+    if is_clustered():
+        addr = config_get(vip_setting)
+    else:
+        addr = unit_get('private-address')
+    return '%s://%s' % (scheme, addr)
+
+
+def distributed_wait(modulo=None, wait=None, operation_name='operation'):
+    ''' Distribute operations by waiting based on modulo_distribution
+
+    If modulo and/or wait are not set, check config_get for those values.
+ If config values are not set, default to modulo=3 and wait=30. + + :param modulo: int The modulo number creates the group distribution + :param wait: int The constant time wait value + :param operation_name: string Operation name for status message + i.e. 'restart' + :side effect: Calls config_get() + :side effect: Calls log() + :side effect: Calls status_set() + :side effect: Calls time.sleep() + ''' + if modulo is None: + modulo = config_get('modulo-nodes') or 3 + if wait is None: + wait = config_get('known-wait') or 30 + if juju_is_leader(): + # The leader should never wait + calculated_wait = 0 + else: + # non_zero_wait=True guarantees the non-leader who gets modulo 0 + # will still wait + calculated_wait = modulo_distribution(modulo=modulo, wait=wait, + non_zero_wait=True) + msg = "Waiting {} seconds for {} ...".format(calculated_wait, + operation_name) + log(msg, DEBUG) + status_set('maintenance', msg) + time.sleep(calculated_wait) + + +def get_managed_services_and_ports(services, external_ports, + external_services=None, + port_conv_f=determine_apache_port_single): + """Get the services and ports managed by this charm. + + Return only the services and corresponding ports that are managed by this + charm. This excludes haproxy when there is a relation with hacluster. This + is because this charm passes responsability for stopping and starting + haproxy to hacluster. + + Similarly, if a relation with hacluster exists then the ports returned by + this method correspond to those managed by the apache server rather than + haproxy. + + :param services: List of services. + :type services: List[str] + :param external_ports: List of ports managed by external services. + :type external_ports: List[int] + :param external_services: List of services to be removed if ha relation is + present. + :type external_services: List[str] + :param port_conv_f: Function to apply to ports to calculate the ports + managed by services controlled by this charm. + :type port_convert_func: f() + :returns: A tuple containing a list of services first followed by a list of + ports. + :rtype: Tuple[List[str], List[int]] + """ + if external_services is None: + external_services = ['haproxy'] + if relation_ids('ha'): + for svc in external_services: + try: + services.remove(svc) + except ValueError: + pass + external_ports = [port_conv_f(p) for p in external_ports] + return services, external_ports diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/README.hardening.md b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/README.hardening.md new file mode 100644 index 0000000000000000000000000000000000000000..91280c03e6d7b5d75b356cd94614fc821abc2644 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/README.hardening.md @@ -0,0 +1,38 @@ +# Juju charm-helpers hardening library + +## Description + +This library provides multiple implementations of system and application +hardening that conform to the standards of http://hardening.io/. + +Current implementations include: + + * OS + * SSH + * MySQL + * Apache + +## Requirements + +* Juju Charms + +## Usage + +1. Synchronise this library into your charm and add the harden() decorator + (from contrib.hardening.harden) to any functions or methods you want to use + to trigger hardening of your application/system. + +2. 
Add a config option called 'harden' to your charm config.yaml and set it to + a space-delimited list of hardening modules you want to run e.g. "os ssh" + +3. Override any config defaults (contrib.hardening.defaults) by adding a file + called hardening.yaml to your charm root containing the name(s) of the + modules whose settings you want override at root level and then any settings + with overrides e.g. + + os: + general: + desktop_enable: True + +4. Now just run your charm as usual and hardening will be applied each time the + hook runs. diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..30a3e94359e94011cd247de7ade76667346e7379 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/__init__.py @@ -0,0 +1,13 @@ +# Copyright 2016 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/apache/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/apache/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..58bebd846bd6fa648cfab6ab1056ad10d8415453 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/apache/__init__.py @@ -0,0 +1,17 @@ +# Copyright 2016 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +from os import path + +TEMPLATES_DIR = path.join(path.dirname(__file__), 'templates') diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/apache/checks/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/apache/checks/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..3bc2ebd4760124e23c128868e098aceac610260f --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/apache/checks/__init__.py @@ -0,0 +1,29 @@ +# Copyright 2016 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +from charmhelpers.core.hookenv import ( + log, + DEBUG, +) +from charmhelpers.contrib.hardening.apache.checks import config + + +def run_apache_checks(): + log("Starting Apache hardening checks.", level=DEBUG) + checks = config.get_audits() + for check in checks: + log("Running '%s' check" % (check.__class__.__name__), level=DEBUG) + check.ensure_compliance() + + log("Apache hardening checks complete.", level=DEBUG) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/apache/checks/config.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/apache/checks/config.py new file mode 100644 index 0000000000000000000000000000000000000000..341da9eee10f73cbe3d7e7e5cf91b57b4d2a89b4 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/apache/checks/config.py @@ -0,0 +1,104 @@ +# Copyright 2016 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import os +import re +import six +import subprocess + + +from charmhelpers.core.hookenv import ( + log, + INFO, +) +from charmhelpers.contrib.hardening.audits.file import ( + FilePermissionAudit, + DirectoryPermissionAudit, + NoReadWriteForOther, + TemplatedFile, + DeletedFile +) +from charmhelpers.contrib.hardening.audits.apache import DisabledModuleAudit +from charmhelpers.contrib.hardening.apache import TEMPLATES_DIR +from charmhelpers.contrib.hardening import utils + + +def get_audits(): + """Get Apache hardening config audits. 
+ + :returns: dictionary of audits + """ + if subprocess.call(['which', 'apache2'], stdout=subprocess.PIPE) != 0: + log("Apache server does not appear to be installed on this node - " + "skipping apache hardening", level=INFO) + return [] + + context = ApacheConfContext() + settings = utils.get_settings('apache') + audits = [ + FilePermissionAudit(paths=os.path.join( + settings['common']['apache_dir'], 'apache2.conf'), + user='root', group='root', mode=0o0640), + + TemplatedFile(os.path.join(settings['common']['apache_dir'], + 'mods-available/alias.conf'), + context, + TEMPLATES_DIR, + mode=0o0640, + user='root', + service_actions=[{'service': 'apache2', + 'actions': ['restart']}]), + + TemplatedFile(os.path.join(settings['common']['apache_dir'], + 'conf-enabled/99-hardening.conf'), + context, + TEMPLATES_DIR, + mode=0o0640, + user='root', + service_actions=[{'service': 'apache2', + 'actions': ['restart']}]), + + DirectoryPermissionAudit(settings['common']['apache_dir'], + user='root', + group='root', + mode=0o0750), + + DisabledModuleAudit(settings['hardening']['modules_to_disable']), + + NoReadWriteForOther(settings['common']['apache_dir']), + + DeletedFile(['/var/www/html/index.html']) + ] + + return audits + + +class ApacheConfContext(object): + """Defines the set of key/value pairs to set in a apache config file. + + This context, when called, will return a dictionary containing the + key/value pairs of setting to specify in the + /etc/apache/conf-enabled/hardening.conf file. + """ + def __call__(self): + settings = utils.get_settings('apache') + ctxt = settings['hardening'] + + out = subprocess.check_output(['apache2', '-v']) + if six.PY3: + out = out.decode('utf-8') + ctxt['apache_version'] = re.search(r'.+version: Apache/(.+?)\s.+', + out).group(1) + ctxt['apache_icondir'] = '/usr/share/apache2/icons/' + return ctxt diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/apache/templates/99-hardening.conf b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/apache/templates/99-hardening.conf new file mode 100644 index 0000000000000000000000000000000000000000..22b68041d50ff753284bbb4b41a21e8f2bd8c18a --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/apache/templates/99-hardening.conf @@ -0,0 +1,32 @@ +############################################################################### +# WARNING: This configuration file is maintained by Juju. Local changes may +# be overwritten. 
+############################################################################### + + + + # http://httpd.apache.org/docs/2.4/upgrading.html + {% if apache_version > '2.2' -%} + Require all granted + {% else -%} + Order Allow,Deny + Deny from all + {% endif %} + + + + + Options -Indexes -FollowSymLinks + AllowOverride None + + + + Options -Indexes -FollowSymLinks + AllowOverride None + + +TraceEnable {{ traceenable }} +ServerTokens {{ servertokens }} + +SSLHonorCipherOrder {{ honor_cipher_order }} +SSLCipherSuite {{ cipher_suite }} diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/apache/templates/alias.conf b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/apache/templates/alias.conf new file mode 100644 index 0000000000000000000000000000000000000000..e46a58a30dadbb6ccffa02d82593c63a9cbf52df --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/apache/templates/alias.conf @@ -0,0 +1,31 @@ +############################################################################### +# WARNING: This configuration file is maintained by Juju. Local changes may +# be overwritten. +############################################################################### + + # + # Aliases: Add here as many aliases as you need (with no limit). The format is + # Alias fakename realname + # + # Note that if you include a trailing / on fakename then the server will + # require it to be present in the URL. So "/icons" isn't aliased in this + # example, only "/icons/". If the fakename is slash-terminated, then the + # realname must also be slash terminated, and if the fakename omits the + # trailing slash, the realname must also omit it. + # + # We include the /icons/ alias for FancyIndexed directory listings. If + # you do not use FancyIndexing, you may comment this out. + # + Alias /icons/ "{{ apache_icondir }}/" + + + Options -Indexes -MultiViews -FollowSymLinks + AllowOverride None +{% if apache_version == '2.4' -%} + Require all granted +{% else -%} + Order allow,deny + Allow from all +{% endif %} + + diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/audits/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/audits/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..6dd5b05fec4ffcfcdb4378a06dfda4e8ac7e8371 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/audits/__init__.py @@ -0,0 +1,54 @@ +# Copyright 2016 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + + +class BaseAudit(object): # NO-QA + """Base class for hardening checks. + + The lifecycle of a hardening check is to first check to see if the system + is in compliance for the specified check. 
If it is not in compliance, the
+    check's comply step is invoked to bring the system into compliance,
+    subject to the 'unless' override described in _take_action below.
+    """
+    def __init__(self, *args, **kwargs):
+        self.unless = kwargs.get('unless', None)
+        super(BaseAudit, self).__init__()
+
+    def ensure_compliance(self):
+        """Checks to see if the current hardening check is in compliance or
+        not.
+
+        If the check that is performed is not in compliance, then an exception
+        should be raised.
+        """
+        pass
+
+    def _take_action(self):
+        """Determines whether to perform the action or not.
+
+        Checks whether or not an action should be taken. This is determined by
+        the truthy value for the unless parameter. If unless is a callback
+        method, it will be invoked with no parameters in order to determine
+        whether or not the action should be taken. Otherwise, the truthy value
+        of the unless attribute will determine if the action should be
+        performed.
+        """
+        # Do the action if there isn't an unless override.
+        if self.unless is None:
+            return True
+
+        # Invoke the callback if there is one.
+        if hasattr(self.unless, '__call__'):
+            return not self.unless()
+
+        return not self.unless
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/audits/apache.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/audits/apache.py
new file mode 100644
index 0000000000000000000000000000000000000000..04825f5ada0c5b0bb9fc0955baa9a10fa199184d
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/audits/apache.py
@@ -0,0 +1,100 @@
+# Copyright 2016 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import re
+import subprocess
+
+import six
+
+from charmhelpers.core.hookenv import (
+    log,
+    INFO,
+    ERROR,
+)
+
+from charmhelpers.contrib.hardening.audits import BaseAudit
+
+
+class DisabledModuleAudit(BaseAudit):
+    """Audits Apache2 modules.
+
+    Determines if the apache2 modules are enabled. If the modules are enabled
+    then they are disabled in the ensure_compliance.
+    """
+    def __init__(self, modules):
+        if modules is None:
+            self.modules = []
+        elif isinstance(modules, six.string_types):
+            self.modules = [modules]
+        else:
+            self.modules = modules
+
+    def ensure_compliance(self):
+        """Ensures that the modules are not loaded."""
+        if not self.modules:
+            return
+
+        try:
+            loaded_modules = self._get_loaded_modules()
+            non_compliant_modules = []
+            for module in self.modules:
+                if module in loaded_modules:
+                    log("Module '%s' is enabled but should not be." %
+                        (module), level=INFO)
+                    non_compliant_modules.append(module)
+
+            if len(non_compliant_modules) == 0:
+                return
+
+            for module in non_compliant_modules:
+                self._disable_module(module)
+            self._restart_apache()
+        except subprocess.CalledProcessError as e:
+            log('Error occurred auditing apache module compliance. '
+                'This may have been already reported. 
' + 'Output is: %s' % e.output, level=ERROR) + + @staticmethod + def _get_loaded_modules(): + """Returns the modules which are enabled in Apache.""" + output = subprocess.check_output(['apache2ctl', '-M']) + if six.PY3: + output = output.decode('utf-8') + modules = [] + for line in output.splitlines(): + # Each line of the enabled module output looks like: + # module_name (static|shared) + # Plus a header line at the top of the output which is stripped + # out by the regex. + matcher = re.search(r'^ (\S*)_module (\S*)', line) + if matcher: + modules.append(matcher.group(1)) + return modules + + @staticmethod + def _disable_module(module): + """Disables the specified module in Apache.""" + try: + subprocess.check_call(['a2dismod', module]) + except subprocess.CalledProcessError as e: + # Note: catch error here to allow the attempt of disabling + # multiple modules in one go rather than failing after the + # first module fails. + log('Error occurred disabling module %s. ' + 'Output is: %s' % (module, e.output), level=ERROR) + + @staticmethod + def _restart_apache(): + """Restarts the apache process""" + subprocess.check_output(['service', 'apache2', 'restart']) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/audits/apt.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/audits/apt.py new file mode 100644 index 0000000000000000000000000000000000000000..cad7bf7376d6f22ce8feccd843b070c399887aa9 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/audits/apt.py @@ -0,0 +1,104 @@ +# Copyright 2016 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
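Assuming apache2ctl and a2dismod are present on the host (the code above shells out to both), DisabledModuleAudit can also be driven standalone; a hedged sketch, using the same module names the shipped apache defaults disable:

```python
# Sketch: ensure the CGI modules are not loaded, restarting apache2 if
# any had to be disabled. Requires apache2ctl/a2dismod on the host.
from charmhelpers.contrib.hardening.audits.apache import DisabledModuleAudit

audit = DisabledModuleAudit(['cgi', 'cgid'])  # accepts a string or a list
audit.ensure_compliance()
```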
+ +from __future__ import absolute_import # required for external apt import +from six import string_types + +from charmhelpers.fetch import ( + apt_cache, + apt_purge +) +from charmhelpers.core.hookenv import ( + log, + DEBUG, + WARNING, +) +from charmhelpers.contrib.hardening.audits import BaseAudit +from charmhelpers.fetch import ubuntu_apt_pkg as apt_pkg + + +class AptConfig(BaseAudit): + + def __init__(self, config, **kwargs): + self.config = config + + def verify_config(self): + apt_pkg.init() + for cfg in self.config: + value = apt_pkg.config.get(cfg['key'], cfg.get('default', '')) + if value and value != cfg['expected']: + log("APT config '%s' has unexpected value '%s' " + "(expected='%s')" % + (cfg['key'], value, cfg['expected']), level=WARNING) + + def ensure_compliance(self): + self.verify_config() + + +class RestrictedPackages(BaseAudit): + """Class used to audit restricted packages on the system.""" + + def __init__(self, pkgs, **kwargs): + super(RestrictedPackages, self).__init__(**kwargs) + if isinstance(pkgs, string_types) or not hasattr(pkgs, '__iter__'): + self.pkgs = pkgs.split() + else: + self.pkgs = pkgs + + def ensure_compliance(self): + cache = apt_cache() + + for p in self.pkgs: + if p not in cache: + continue + + pkg = cache[p] + if not self.is_virtual_package(pkg): + if not pkg.current_ver: + log("Package '%s' is not installed." % pkg.name, + level=DEBUG) + continue + else: + log("Restricted package '%s' is installed" % pkg.name, + level=WARNING) + self.delete_package(cache, pkg) + else: + log("Checking restricted virtual package '%s' provides" % + pkg.name, level=DEBUG) + self.delete_package(cache, pkg) + + def delete_package(self, cache, pkg): + """Deletes the package from the system. + + Deletes the package form the system, properly handling virtual + packages. + + :param cache: the apt cache + :param pkg: the package to remove + """ + if self.is_virtual_package(pkg): + log("Package '%s' appears to be virtual - purging provides" % + pkg.name, level=DEBUG) + for _p in pkg.provides_list: + self.delete_package(cache, _p[2].parent_pkg) + elif not pkg.current_ver: + log("Package '%s' not installed" % pkg.name, level=DEBUG) + return + else: + log("Purging package '%s'" % pkg.name, level=DEBUG) + apt_purge(pkg.name) + + def is_virtual_package(self, pkg): + return (pkg.get('has_provides', False) and + not pkg.get('has_versions', False)) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/audits/file.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/audits/file.py new file mode 100644 index 0000000000000000000000000000000000000000..257c6351a0b0d244273013faef913f52349f2486 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/audits/file.py @@ -0,0 +1,550 @@ +# Copyright 2016 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
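As a hedged sketch of how the two audit classes in this module are wired up (the package names are taken from the os defaults later in this patch; running this really purges packages, so treat it as illustrative only):

```python
# Sketch: warn on unexpected APT config, then purge blacklisted packages.
from charmhelpers.contrib.hardening.audits.apt import (
    AptConfig,
    RestrictedPackages,
)

AptConfig([{'key': 'APT::Get::AllowUnauthenticated',
            'expected': 'false'}]).ensure_compliance()
RestrictedPackages(['telnet-server', 'rsh-server']).ensure_compliance()
```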
+
+import grp
+import os
+import pwd
+import re
+
+from subprocess import (
+    CalledProcessError,
+    check_output,
+    check_call,
+)
+from traceback import format_exc
+from six import string_types
+from stat import (
+    S_ISGID,
+    S_ISUID
+)
+
+from charmhelpers.core.hookenv import (
+    log,
+    DEBUG,
+    INFO,
+    WARNING,
+    ERROR,
+)
+from charmhelpers.core import unitdata
+from charmhelpers.core.host import file_hash
+from charmhelpers.contrib.hardening.audits import BaseAudit
+from charmhelpers.contrib.hardening.templating import (
+    get_template_path,
+    render_and_write,
+)
+from charmhelpers.contrib.hardening import utils
+
+
+class BaseFileAudit(BaseAudit):
+    """Base class for file audits.
+
+    Provides API stubs for the compliance check flow that must be implemented
+    by any class deriving from this one.
+    """
+
+    def __init__(self, paths, always_comply=False, *args, **kwargs):
+        """
+        :param paths: string path or list of paths of files to which the
+                      compliance checks/criteria are to be applied.
+        :param always_comply: if True, compliance criteria are always applied;
+                              else compliance is skipped for non-existent
+                              paths.
+        """
+        super(BaseFileAudit, self).__init__(*args, **kwargs)
+        self.always_comply = always_comply
+        if isinstance(paths, string_types) or not hasattr(paths, '__iter__'):
+            self.paths = [paths]
+        else:
+            self.paths = paths
+
+    def ensure_compliance(self):
+        """Ensure that all registered files comply with the registered
+        criteria.
+        """
+        for p in self.paths:
+            if os.path.exists(p):
+                if self.is_compliant(p):
+                    continue
+
+                log('File %s is not in compliance.' % p, level=INFO)
+            else:
+                if not self.always_comply:
+                    log("Non-existent path '%s' - skipping compliance check"
+                        % (p), level=INFO)
+                    continue
+
+            if self._take_action():
+                log("Applying compliance criteria to '%s'" % (p), level=INFO)
+                self.comply(p)
+
+    def is_compliant(self, path):
+        """Audits the path to see if it is in compliance.
+
+        :param path: the path to the file that should be checked.
+        """
+        raise NotImplementedError
+
+    def comply(self, path):
+        """Enforces the compliance of a path.
+
+        :param path: the path to the file that should be enforced.
+        """
+        raise NotImplementedError
+
+    @classmethod
+    def _get_stat(cls, path):
+        """Returns the POSIX stat information for the specified file path.
+
+        :param path: the path to get the stat information for.
+        :returns: a stat result object for the path; raises OSError if the
+                  path does not exist.
+        """
+        return os.stat(path)
+
+
+class FilePermissionAudit(BaseFileAudit):
+    """Implements an audit for file permissions and ownership for a user.
+
+    This class implements functionality that ensures that a specific user/group
+    will own the file(s) specified and that the permissions specified are
+    applied properly to the file.
+ """ + def __init__(self, paths, user, group=None, mode=0o600, **kwargs): + self.user = user + self.group = group + self.mode = mode + super(FilePermissionAudit, self).__init__(paths, user, group, mode, + **kwargs) + + @property + def user(self): + return self._user + + @user.setter + def user(self, name): + try: + user = pwd.getpwnam(name) + except KeyError: + log('Unknown user %s' % name, level=ERROR) + user = None + self._user = user + + @property + def group(self): + return self._group + + @group.setter + def group(self, name): + try: + group = None + if name: + group = grp.getgrnam(name) + else: + group = grp.getgrgid(self.user.pw_gid) + except KeyError: + log('Unknown group %s' % name, level=ERROR) + self._group = group + + def is_compliant(self, path): + """Checks if the path is in compliance. + + Used to determine if the path specified meets the necessary + requirements to be in compliance with the check itself. + + :param path: the file path to check + :returns: True if the path is compliant, False otherwise. + """ + stat = self._get_stat(path) + user = self.user + group = self.group + + compliant = True + if stat.st_uid != user.pw_uid or stat.st_gid != group.gr_gid: + log('File %s is not owned by %s:%s.' % (path, user.pw_name, + group.gr_name), + level=INFO) + compliant = False + + # POSIX refers to the st_mode bits as corresponding to both the + # file type and file permission bits, where the least significant 12 + # bits (o7777) are the suid (11), sgid (10), sticky bits (9), and the + # file permission bits (8-0) + perms = stat.st_mode & 0o7777 + if perms != self.mode: + log('File %s has incorrect permissions, currently set to %s' % + (path, oct(stat.st_mode & 0o7777)), level=INFO) + compliant = False + + return compliant + + def comply(self, path): + """Issues a chown and chmod to the file paths specified.""" + utils.ensure_permissions(path, self.user.pw_name, self.group.gr_name, + self.mode) + + +class DirectoryPermissionAudit(FilePermissionAudit): + """Performs a permission check for the specified directory path.""" + + def __init__(self, paths, user, group=None, mode=0o600, + recursive=True, **kwargs): + super(DirectoryPermissionAudit, self).__init__(paths, user, group, + mode, **kwargs) + self.recursive = recursive + + def is_compliant(self, path): + """Checks if the directory is compliant. + + Used to determine if the path specified and all of its children + directories are in compliance with the check itself. + + :param path: the directory path to check + :returns: True if the directory tree is compliant, otherwise False. + """ + if not os.path.isdir(path): + log('Path specified %s is not a directory.' % path, level=ERROR) + raise ValueError("%s is not a directory." 
% path) + + if not self.recursive: + return super(DirectoryPermissionAudit, self).is_compliant(path) + + compliant = True + for root, dirs, _ in os.walk(path): + if len(dirs) > 0: + continue + + if not super(DirectoryPermissionAudit, self).is_compliant(root): + compliant = False + continue + + return compliant + + def comply(self, path): + for root, dirs, _ in os.walk(path): + if len(dirs) > 0: + super(DirectoryPermissionAudit, self).comply(root) + + +class ReadOnly(BaseFileAudit): + """Audits that files and folders are read only.""" + def __init__(self, paths, *args, **kwargs): + super(ReadOnly, self).__init__(paths=paths, *args, **kwargs) + + def is_compliant(self, path): + try: + output = check_output(['find', path, '-perm', '-go+w', + '-type', 'f']).strip() + + # The find above will find any files which have permission sets + # which allow too broad of write access. As such, the path is + # compliant if there is no output. + if output: + return False + + return True + except CalledProcessError as e: + log('Error occurred checking finding writable files for %s. ' + 'Error information is: command %s failed with returncode ' + '%d and output %s.\n%s' % (path, e.cmd, e.returncode, e.output, + format_exc(e)), level=ERROR) + return False + + def comply(self, path): + try: + check_output(['chmod', 'go-w', '-R', path]) + except CalledProcessError as e: + log('Error occurred removing writeable permissions for %s. ' + 'Error information is: command %s failed with returncode ' + '%d and output %s.\n%s' % (path, e.cmd, e.returncode, e.output, + format_exc(e)), level=ERROR) + + +class NoReadWriteForOther(BaseFileAudit): + """Ensures that the files found under the base path are readable or + writable by anyone other than the owner or the group. + """ + def __init__(self, paths): + super(NoReadWriteForOther, self).__init__(paths) + + def is_compliant(self, path): + try: + cmd = ['find', path, '-perm', '-o+r', '-type', 'f', '-o', + '-perm', '-o+w', '-type', 'f'] + output = check_output(cmd).strip() + + # The find above here will find any files which have read or + # write permissions for other, meaning there is too broad of access + # to read/write the file. As such, the path is compliant if there's + # no output. + if output: + return False + + return True + except CalledProcessError as e: + log('Error occurred while finding files which are readable or ' + 'writable to the world in %s. ' + 'Command output is: %s.' % (path, e.output), level=ERROR) + + def comply(self, path): + try: + check_output(['chmod', '-R', 'o-rw', path]) + except CalledProcessError as e: + log('Error occurred attempting to change modes of files under ' + 'path %s. Output of command is: %s' % (path, e.output)) + + +class NoSUIDSGIDAudit(BaseFileAudit): + """Audits that specified files do not have SUID/SGID bits set.""" + def __init__(self, paths, *args, **kwargs): + super(NoSUIDSGIDAudit, self).__init__(paths=paths, *args, **kwargs) + + def is_compliant(self, path): + stat = self._get_stat(path) + if (stat.st_mode & (S_ISGID | S_ISUID)) != 0: + return False + + return True + + def comply(self, path): + try: + log('Removing suid/sgid from %s.' % path, level=DEBUG) + check_output(['chmod', '-s', path]) + except CalledProcessError as e: + log('Error occurred removing suid/sgid from %s.' 
+ 'Error information is: command %s failed with returncode ' + '%d and output %s.\n%s' % (path, e.cmd, e.returncode, e.output, + format_exc(e)), level=ERROR) + + +class TemplatedFile(BaseFileAudit): + """The TemplatedFileAudit audits the contents of a templated file. + + This audit renders a file from a template, sets the appropriate file + permissions, then generates a hashsum with which to check the content + changed. + """ + def __init__(self, path, context, template_dir, mode, user='root', + group='root', service_actions=None, **kwargs): + self.context = context + self.user = user + self.group = group + self.mode = mode + self.template_dir = template_dir + self.service_actions = service_actions + super(TemplatedFile, self).__init__(paths=path, always_comply=True, + **kwargs) + + def is_compliant(self, path): + """Determines if the templated file is compliant. + + A templated file is only compliant if it has not changed (as + determined by its sha256 hashsum) AND its file permissions are set + appropriately. + + :param path: the path to check compliance. + """ + same_templates = self.templates_match(path) + same_content = self.contents_match(path) + same_permissions = self.permissions_match(path) + + if same_content and same_permissions and same_templates: + return True + + return False + + def run_service_actions(self): + """Run any actions on services requested.""" + if not self.service_actions: + return + + for svc_action in self.service_actions: + name = svc_action['service'] + actions = svc_action['actions'] + log("Running service '%s' actions '%s'" % (name, actions), + level=DEBUG) + for action in actions: + cmd = ['service', name, action] + try: + check_call(cmd) + except CalledProcessError as exc: + log("Service name='%s' action='%s' failed - %s" % + (name, action, exc), level=WARNING) + + def comply(self, path): + """Ensures the contents and the permissions of the file. + + :param path: the path to correct + """ + dirname = os.path.dirname(path) + if not os.path.exists(dirname): + os.makedirs(dirname) + + self.pre_write() + render_and_write(self.template_dir, path, self.context()) + utils.ensure_permissions(path, self.user, self.group, self.mode) + self.run_service_actions() + self.save_checksum(path) + self.post_write() + + def pre_write(self): + """Invoked prior to writing the template.""" + pass + + def post_write(self): + """Invoked after writing the template.""" + pass + + def templates_match(self, path): + """Determines if the template files are the same. + + The template file equality is determined by the hashsum of the + template files themselves. If there is no hashsum, then the content + cannot be sure to be the same so treat it as if they changed. + Otherwise, return whether or not the hashsums are the same. + + :param path: the path to check + :returns: boolean + """ + template_path = get_template_path(self.template_dir, path) + key = 'hardening:template:%s' % template_path + template_checksum = file_hash(template_path) + kv = unitdata.kv() + stored_tmplt_checksum = kv.get(key) + if not stored_tmplt_checksum: + kv.set(key, template_checksum) + kv.flush() + log('Saved template checksum for %s.' % template_path, + level=DEBUG) + # Since we don't have a template checksum, then assume it doesn't + # match and return that the template is different. + return False + elif stored_tmplt_checksum != template_checksum: + kv.set(key, template_checksum) + kv.flush() + log('Updated template checksum for %s.' 
% template_path, + level=DEBUG) + return False + + # Here the template hasn't changed based upon the calculated + # checksum of the template and what was previously stored. + return True + + def contents_match(self, path): + """Determines if the file content is the same. + + This is determined by comparing hashsum of the file contents and + the saved hashsum. If there is no hashsum, then the content cannot + be sure to be the same so treat them as if they are not the same. + Otherwise, return True if the hashsums are the same, False if they + are not the same. + + :param path: the file to check. + """ + checksum = file_hash(path) + + kv = unitdata.kv() + stored_checksum = kv.get('hardening:%s' % path) + if not stored_checksum: + # If the checksum hasn't been generated, return False to ensure + # the file is written and the checksum stored. + log('Checksum for %s has not been calculated.' % path, level=DEBUG) + return False + elif stored_checksum != checksum: + log('Checksum mismatch for %s.' % path, level=DEBUG) + return False + + return True + + def permissions_match(self, path): + """Determines if the file owner and permissions match. + + :param path: the path to check. + """ + audit = FilePermissionAudit(path, self.user, self.group, self.mode) + return audit.is_compliant(path) + + def save_checksum(self, path): + """Calculates and saves the checksum for the path specified. + + :param path: the path of the file to save the checksum. + """ + checksum = file_hash(path) + kv = unitdata.kv() + kv.set('hardening:%s' % path, checksum) + kv.flush() + + +class DeletedFile(BaseFileAudit): + """Audit to ensure that a file is deleted.""" + def __init__(self, paths): + super(DeletedFile, self).__init__(paths) + + def is_compliant(self, path): + return not os.path.exists(path) + + def comply(self, path): + os.remove(path) + + +class FileContentAudit(BaseFileAudit): + """Audit the contents of a file.""" + def __init__(self, paths, cases, **kwargs): + # Cases we expect to pass + self.pass_cases = cases.get('pass', []) + # Cases we expect to fail + self.fail_cases = cases.get('fail', []) + super(FileContentAudit, self).__init__(paths, **kwargs) + + def is_compliant(self, path): + """ + Given a set of content matching cases i.e. tuple(regex, bool) where + bool value denotes whether or not regex is expected to match, check that + all cases match as expected with the contents of the file. Cases can be + expected to pass of fail. + + :param path: Path of file to check. + :returns: Boolean value representing whether or not all cases are + found to be compliant. + """ + log("Auditing contents of file '%s'" % (path), level=DEBUG) + with open(path, 'r') as fd: + contents = fd.read() + + matches = 0 + for pattern in self.pass_cases: + key = re.compile(pattern, flags=re.MULTILINE) + results = re.search(key, contents) + if results: + matches += 1 + else: + log("Pattern '%s' was expected to pass but instead it failed" + % (pattern), level=WARNING) + + for pattern in self.fail_cases: + key = re.compile(pattern, flags=re.MULTILINE) + results = re.search(key, contents) + if not results: + matches += 1 + else: + log("Pattern '%s' was expected to fail but instead it passed" + % (pattern), level=WARNING) + + total = len(self.pass_cases) + len(self.fail_cases) + log("Checked %s cases and %s passed" % (total, matches), level=DEBUG) + return matches == total + + def comply(self, *args, **kwargs): + """NOOP since we just issue warnings. This is to avoid the + NotImplememtedError. 
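FileContentAudit is check-only: comply() here just logs, so failed cases surface as warnings rather than file edits. A hedged sketch of instantiation follows; the path and regexes are illustrative examples, not shipped defaults:

```python
# Sketch: verify sshd_config content without modifying the file.
from charmhelpers.contrib.hardening.audits.file import FileContentAudit

audit = FileContentAudit(
    paths='/etc/ssh/sshd_config',
    cases={'pass': [r'^PermitRootLogin\s+no'],            # must match
           'fail': [r'^PasswordAuthentication\s+yes']})   # must not match
audit.ensure_compliance()  # logs a WARNING for each non-compliant case
```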
+ """ + log("Not applying any compliance criteria, only checks.", level=INFO) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/defaults/apache.yaml b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/defaults/apache.yaml new file mode 100644 index 0000000000000000000000000000000000000000..0f940d4cfa85ca7051dd60a4805d84bb6aebed6d --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/defaults/apache.yaml @@ -0,0 +1,16 @@ +# NOTE: this file contains the default configuration for the 'apache' hardening +# code. If you want to override any settings you must add them to a file +# called hardening.yaml in the root directory of your charm using the +# name 'apache' as the root key followed by any of the following with new +# values. + +common: + apache_dir: '/etc/apache2' + +hardening: + traceenable: 'off' + allowed_http_methods: "GET POST" + modules_to_disable: [ cgi, cgid ] + servertokens: 'Prod' + honor_cipher_order: 'on' + cipher_suite: 'ALL:+MEDIUM:+HIGH:!LOW:!MD5:!RC4:!eNULL:!aNULL:!3DES' diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/defaults/apache.yaml.schema b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/defaults/apache.yaml.schema new file mode 100644 index 0000000000000000000000000000000000000000..c112137cb45c4b63cb05384145b3edf8c443e2b8 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/defaults/apache.yaml.schema @@ -0,0 +1,12 @@ +# NOTE: this schema must contain all valid keys from it's associated defaults +# file. It is used to validate user-provided overrides. +common: + apache_dir: + traceenable: + +hardening: + allowed_http_methods: + modules_to_disable: + servertokens: + honor_cipher_order: + cipher_suite: diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/defaults/mysql.yaml b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/defaults/mysql.yaml new file mode 100644 index 0000000000000000000000000000000000000000..682d22bf3ded32eb1c8d6188486ec4468d9ec457 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/defaults/mysql.yaml @@ -0,0 +1,38 @@ +# NOTE: this file contains the default configuration for the 'mysql' hardening +# code. If you want to override any settings you must add them to a file +# called hardening.yaml in the root directory of your charm using the +# name 'mysql' as the root key followed by any of the following with new +# values. 
+ +hardening: + mysql-conf: /etc/mysql/my.cnf + hardening-conf: /etc/mysql/conf.d/hardening.cnf + +security: + # @see http://www.symantec.com/connect/articles/securing-mysql-step-step + # @see http://dev.mysql.com/doc/refman/5.7/en/server-options.html#option_mysqld_chroot + chroot: None + + # @see http://dev.mysql.com/doc/refman/5.7/en/server-options.html#option_mysqld_safe-user-create + safe-user-create: 1 + + # @see http://dev.mysql.com/doc/refman/5.7/en/server-options.html#option_mysqld_secure-auth + secure-auth: 1 + + # @see http://dev.mysql.com/doc/refman/5.7/en/server-options.html#option_mysqld_symbolic-links + skip-symbolic-links: 1 + + # @see http://dev.mysql.com/doc/refman/5.7/en/server-options.html#option_mysqld_skip-show-database + skip-show-database: True + + # @see http://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_local_infile + local-infile: 0 + + # @see https://dev.mysql.com/doc/refman/5.7/en/server-options.html#option_mysqld_allow-suspicious-udfs + allow-suspicious-udfs: 0 + + # @see https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_automatic_sp_privileges + automatic-sp-privileges: 0 + + # @see https://dev.mysql.com/doc/refman/5.7/en/server-options.html#option_mysqld_secure-file-priv + secure-file-priv: /tmp diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/defaults/mysql.yaml.schema b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/defaults/mysql.yaml.schema new file mode 100644 index 0000000000000000000000000000000000000000..2edf325c311c6fbb062a072083b4d12cebc3d9c1 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/defaults/mysql.yaml.schema @@ -0,0 +1,15 @@ +# NOTE: this schema must contain all valid keys from it's associated defaults +# file. It is used to validate user-provided overrides. +hardening: + mysql-conf: + hardening-conf: +security: + chroot: + safe-user-create: + secure-auth: + skip-symbolic-links: + skip-show-database: + local-infile: + allow-suspicious-udfs: + automatic-sp-privileges: + secure-file-priv: diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/defaults/os.yaml b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/defaults/os.yaml new file mode 100644 index 0000000000000000000000000000000000000000..9a8627b5ed2803828e1e4d78260c6b5f90cae659 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/defaults/os.yaml @@ -0,0 +1,68 @@ +# NOTE: this file contains the default configuration for the 'os' hardening +# code. If you want to override any settings you must add them to a file +# called hardening.yaml in the root directory of your charm using the +# name 'os' as the root key followed by any of the following with new +# values. 
+ +general: + desktop_enable: False # (type:boolean) + +environment: + extra_user_paths: [] + umask: 027 + root_path: / + +auth: + pw_max_age: 60 + # discourage password cycling + pw_min_age: 7 + retries: 5 + lockout_time: 600 + timeout: 60 + allow_homeless: False # (type:boolean) + pam_passwdqc_enable: True # (type:boolean) + pam_passwdqc_options: 'min=disabled,disabled,16,12,8' + root_ttys: + console + tty1 + tty2 + tty3 + tty4 + tty5 + tty6 + uid_min: 1000 + gid_min: 1000 + sys_uid_min: 100 + sys_uid_max: 999 + sys_gid_min: 100 + sys_gid_max: 999 + chfn_restrict: + +security: + users_allow: [] + suid_sgid_enforce: True # (type:boolean) + # user-defined blacklist and whitelist + suid_sgid_blacklist: [] + suid_sgid_whitelist: [] + # if this is True, remove any suid/sgid bits from files that were not in the whitelist + suid_sgid_dry_run_on_unknown: False # (type:boolean) + suid_sgid_remove_from_unknown: False # (type:boolean) + # remove packages with known issues + packages_clean: True # (type:boolean) + packages_list: + xinetd + inetd + ypserv + telnet-server + rsh-server + rsync + kernel_enable_module_loading: True # (type:boolean) + kernel_enable_core_dump: False # (type:boolean) + ssh_tmout: 300 + +sysctl: + kernel_secure_sysrq: 244 # 4 + 16 + 32 + 64 + 128 + kernel_enable_sysrq: False # (type:boolean) + forwarding: False # (type:boolean) + ipv6_enable: False # (type:boolean) + arp_restricted: True # (type:boolean) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/defaults/os.yaml.schema b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/defaults/os.yaml.schema new file mode 100644 index 0000000000000000000000000000000000000000..cc3b9c206eae56cbe68826cb76748e2deb9483e1 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/defaults/os.yaml.schema @@ -0,0 +1,43 @@ +# NOTE: this schema must contain all valid keys from it's associated defaults +# file. It is used to validate user-provided overrides. +general: + desktop_enable: +environment: + extra_user_paths: + umask: + root_path: +auth: + pw_max_age: + pw_min_age: + retries: + lockout_time: + timeout: + allow_homeless: + pam_passwdqc_enable: + pam_passwdqc_options: + root_ttys: + uid_min: + gid_min: + sys_uid_min: + sys_uid_max: + sys_gid_min: + sys_gid_max: + chfn_restrict: +security: + users_allow: + suid_sgid_enforce: + suid_sgid_blacklist: + suid_sgid_whitelist: + suid_sgid_dry_run_on_unknown: + suid_sgid_remove_from_unknown: + packages_clean: + packages_list: + kernel_enable_module_loading: + kernel_enable_core_dump: + ssh_tmout: +sysctl: + kernel_secure_sysrq: + kernel_enable_sysrq: + forwarding: + ipv6_enable: + arp_restricted: diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/defaults/ssh.yaml b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/defaults/ssh.yaml new file mode 100644 index 0000000000000000000000000000000000000000..cd529bcae1ec00fef2e969f43dc3cf530b46ef9a --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/defaults/ssh.yaml @@ -0,0 +1,49 @@ +# NOTE: this file contains the default configuration for the 'ssh' hardening +# code. 
If you want to override any settings you must add them to a file +# called hardening.yaml in the root directory of your charm using the +# name 'ssh' as the root key followed by any of the following with new +# values. + +common: + service_name: 'ssh' + network_ipv6_enable: False # (type:boolean) + ports: [22] + remote_hosts: [] + +client: + package: 'openssh-client' + cbc_required: False # (type:boolean) + weak_hmac: False # (type:boolean) + weak_kex: False # (type:boolean) + roaming: False + password_authentication: 'no' + +server: + host_key_files: ['/etc/ssh/ssh_host_rsa_key', '/etc/ssh/ssh_host_dsa_key', + '/etc/ssh/ssh_host_ecdsa_key'] + cbc_required: False # (type:boolean) + weak_hmac: False # (type:boolean) + weak_kex: False # (type:boolean) + allow_root_with_key: False # (type:boolean) + allow_tcp_forwarding: 'no' + allow_agent_forwarding: 'no' + allow_x11_forwarding: 'no' + use_privilege_separation: 'sandbox' + listen_to: ['0.0.0.0'] + use_pam: 'no' + package: 'openssh-server' + password_authentication: 'no' + alive_interval: '600' + alive_count: '3' + sftp_enable: False # (type:boolean) + sftp_group: 'sftponly' + sftp_chroot: '/home/%u' + deny_users: [] + allow_users: [] + deny_groups: [] + allow_groups: [] + print_motd: 'no' + print_last_log: 'no' + use_dns: 'no' + max_auth_tries: 2 + max_sessions: 10 diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/defaults/ssh.yaml.schema b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/defaults/ssh.yaml.schema new file mode 100644 index 0000000000000000000000000000000000000000..d05e054bc234015206bb1195152fa9ffd6a33151 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/defaults/ssh.yaml.schema @@ -0,0 +1,42 @@ +# NOTE: this schema must contain all valid keys from it's associated defaults +# file. It is used to validate user-provided overrides. +common: + service_name: + network_ipv6_enable: + ports: + remote_hosts: +client: + package: + cbc_required: + weak_hmac: + weak_kex: + roaming: + password_authentication: +server: + host_key_files: + cbc_required: + weak_hmac: + weak_kex: + allow_root_with_key: + allow_tcp_forwarding: + allow_agent_forwarding: + allow_x11_forwarding: + use_privilege_separation: + listen_to: + use_pam: + package: + password_authentication: + alive_interval: + alive_count: + sftp_enable: + sftp_group: + sftp_chroot: + deny_users: + allow_users: + deny_groups: + allow_groups: + print_motd: + print_last_log: + use_dns: + max_auth_tries: + max_sessions: diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/harden.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/harden.py new file mode 100644 index 0000000000000000000000000000000000000000..63f21b9c9855065da3be875c01a2c94db7df47b4 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/harden.py @@ -0,0 +1,96 @@ +# Copyright 2016 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import six + +from collections import OrderedDict + +from charmhelpers.core.hookenv import ( + config, + log, + DEBUG, + WARNING, +) +from charmhelpers.contrib.hardening.host.checks import run_os_checks +from charmhelpers.contrib.hardening.ssh.checks import run_ssh_checks +from charmhelpers.contrib.hardening.mysql.checks import run_mysql_checks +from charmhelpers.contrib.hardening.apache.checks import run_apache_checks + +_DISABLE_HARDENING_FOR_UNIT_TEST = False + + +def harden(overrides=None): + """Hardening decorator. + + This is the main entry point for running the hardening stack. In order to + run modules of the stack you must add this decorator to charm hook(s) and + ensure that your charm config.yaml contains the 'harden' option set to + one or more of the supported modules. Setting these will cause the + corresponding hardening code to be run when the hook fires. + + This decorator can and should be applied to more than one hook or function + such that hardening modules are called multiple times. This is because + subsequent calls will perform auditing checks that will report any changes + to resources hardened by the first run (and possibly perform compliance + actions as a result of any detected infractions). + + :param overrides: Optional list of stack modules used to override those + provided with 'harden' config. + :returns: Returns value returned by decorated function once executed. + """ + if overrides is None: + overrides = [] + + def _harden_inner1(f): + # As this has to be py2.7 compat, we can't use nonlocal. Use a trick + # to capture the dictionary that can then be updated. + _logged = {'done': False} + + def _harden_inner2(*args, **kwargs): + # knock out hardening via a config var; normally it won't get + # disabled. 
+ if _DISABLE_HARDENING_FOR_UNIT_TEST: + return f(*args, **kwargs) + if not _logged['done']: + log("Hardening function '%s'" % (f.__name__), level=DEBUG) + _logged['done'] = True + RUN_CATALOG = OrderedDict([('os', run_os_checks), + ('ssh', run_ssh_checks), + ('mysql', run_mysql_checks), + ('apache', run_apache_checks)]) + + enabled = overrides[:] or (config("harden") or "").split() + if enabled: + modules_to_run = [] + # modules will always be performed in the following order + for module, func in six.iteritems(RUN_CATALOG): + if module in enabled: + enabled.remove(module) + modules_to_run.append(func) + + if enabled: + log("Unknown hardening modules '%s' - ignoring" % + (', '.join(enabled)), level=WARNING) + + for hardener in modules_to_run: + log("Executing hardening module '%s'" % + (hardener.__name__), level=DEBUG) + hardener() + else: + log("No hardening applied to '%s'" % (f.__name__), level=DEBUG) + + return f(*args, **kwargs) + return _harden_inner2 + + return _harden_inner1 diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/host/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/host/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..58bebd846bd6fa648cfab6ab1056ad10d8415453 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/host/__init__.py @@ -0,0 +1,17 @@ +# Copyright 2016 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +from os import path + +TEMPLATES_DIR = path.join(path.dirname(__file__), 'templates') diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/host/checks/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/host/checks/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..0e7e409f3c7e0406b40353f48acfc3479e4c1a24 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/host/checks/__init__.py @@ -0,0 +1,48 @@ +# Copyright 2016 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
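With harden.py complete above: the decorator is meant to wrap charm hooks, with module selection coming from the charm's 'harden' config option. A minimal usage sketch (the hook name and config value are illustrative, not part of this changeset):

from charmhelpers.contrib.hardening.harden import harden

@harden()
def config_changed():
    # With config 'harden: os ssh', the os and ssh hardening modules run
    # before this hook body, in the fixed order defined by RUN_CATALOG.
    pass

Because the decorator fires on every hook invocation, later runs double as compliance re-checks over resources the first run already hardened.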
+ +from charmhelpers.core.hookenv import ( + log, + DEBUG, +) +from charmhelpers.contrib.hardening.host.checks import ( + apt, + limits, + login, + minimize_access, + pam, + profile, + securetty, + suid_sgid, + sysctl +) + + +def run_os_checks(): + log("Starting OS hardening checks.", level=DEBUG) + checks = apt.get_audits() + checks.extend(limits.get_audits()) + checks.extend(login.get_audits()) + checks.extend(minimize_access.get_audits()) + checks.extend(pam.get_audits()) + checks.extend(profile.get_audits()) + checks.extend(securetty.get_audits()) + checks.extend(suid_sgid.get_audits()) + checks.extend(sysctl.get_audits()) + + for check in checks: + log("Running '%s' check" % (check.__class__.__name__), level=DEBUG) + check.ensure_compliance() + + log("OS hardening checks complete.", level=DEBUG) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/host/checks/apt.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/host/checks/apt.py new file mode 100644 index 0000000000000000000000000000000000000000..7ce41b0043134e256d9c20ee729f1c4345faa3f9 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/host/checks/apt.py @@ -0,0 +1,37 @@ +# Copyright 2016 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +from charmhelpers.contrib.hardening.utils import get_settings +from charmhelpers.contrib.hardening.audits.apt import ( + AptConfig, + RestrictedPackages, +) + + +def get_audits(): + """Get OS hardening apt audits. + + :returns: dictionary of audits + """ + audits = [AptConfig([{'key': 'APT::Get::AllowUnauthenticated', + 'expected': 'false'}])] + + settings = get_settings('os') + clean_packages = settings['security']['packages_clean'] + if clean_packages: + security_packages = settings['security']['packages_list'] + if security_packages: + audits.append(RestrictedPackages(security_packages)) + + return audits diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/host/checks/limits.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/host/checks/limits.py new file mode 100644 index 0000000000000000000000000000000000000000..e94f5ebef360c7c80c35eba8243d3e7f7dcbb14d --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/host/checks/limits.py @@ -0,0 +1,53 @@ +# Copyright 2016 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
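run_os_checks() above treats each audit object uniformly: it only ever calls ensure_compliance(). A toy audit showing that duck-typed contract (this class is an illustration, not part of charm-helpers):

import os
import stat

class NoWorldWriteAudit(object):
    """Toy audit: ensure a path is not world-writable."""

    def __init__(self, path):
        self.path = path

    def ensure_compliance(self):
        mode = os.stat(self.path).st_mode
        if mode & stat.S_IWOTH:
            # Non-compliant: remediate by stripping the world-write bit.
            os.chmod(self.path, mode & ~stat.S_IWOTH)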
+# See the License for the specific language governing permissions and +# limitations under the License. + +from charmhelpers.contrib.hardening.audits.file import ( + DirectoryPermissionAudit, + TemplatedFile, +) +from charmhelpers.contrib.hardening.host import TEMPLATES_DIR +from charmhelpers.contrib.hardening import utils + + +def get_audits(): + """Get OS hardening security limits audits. + + :returns: dictionary of audits + """ + audits = [] + settings = utils.get_settings('os') + + # Ensure that the /etc/security/limits.d directory is only writable + # by the root user, but others can execute and read. + audits.append(DirectoryPermissionAudit('/etc/security/limits.d', + user='root', group='root', + mode=0o755)) + + # If core dumps are not enabled, then don't allow core dumps to be + # created as they may contain sensitive information. + if not settings['security']['kernel_enable_core_dump']: + audits.append(TemplatedFile('/etc/security/limits.d/10.hardcore.conf', + SecurityLimitsContext(), + template_dir=TEMPLATES_DIR, + user='root', group='root', mode=0o0440)) + return audits + + +class SecurityLimitsContext(object): + + def __call__(self): + settings = utils.get_settings('os') + ctxt = {'disable_core_dump': + not settings['security']['kernel_enable_core_dump']} + return ctxt diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/host/checks/login.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/host/checks/login.py new file mode 100644 index 0000000000000000000000000000000000000000..fe2bc6ef34a0dae612c2617dc1d13390f651e419 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/host/checks/login.py @@ -0,0 +1,65 @@ +# Copyright 2016 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +from six import string_types + +from charmhelpers.contrib.hardening.audits.file import TemplatedFile +from charmhelpers.contrib.hardening.host import TEMPLATES_DIR +from charmhelpers.contrib.hardening import utils + + +def get_audits(): + """Get OS hardening login.defs audits. + + :returns: dictionary of audits + """ + audits = [TemplatedFile('/etc/login.defs', LoginContext(), + template_dir=TEMPLATES_DIR, + user='root', group='root', mode=0o0444)] + return audits + + +class LoginContext(object): + + def __call__(self): + settings = utils.get_settings('os') + + # Octal numbers in yaml end up being turned into decimal, + # so check if the umask is entered as a string (e.g. '027') + # or as an octal umask as we know it (e.g. 002). If its not + # a string assume it to be octal and turn it into an octal + # string. 
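The octal/string distinction described in the comment above is easy to trip over, because YAML 1.1 parsers read an unquoted leading-zero scalar as an octal integer. A worked example of both input forms (PyYAML assumed):

import yaml

yaml.safe_load('umask: 027')    # {'umask': 23} - parsed as octal, stored as int
oct(23)                         # '027' on Python 2, '0o27' on Python 3
yaml.safe_load("umask: '027'")  # {'umask': '027'} - quoted, stays a string

Quoting the value in overrides sidesteps the conversion below entirely, which is safer: on Python 3 the oct() spelling ('0o27') is not a valid login.defs UMASK.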
+ umask = settings['environment']['umask'] + if not isinstance(umask, string_types): + umask = '%s' % oct(umask) + + ctxt = { + 'additional_user_paths': + settings['environment']['extra_user_paths'], + 'umask': umask, + 'pwd_max_age': settings['auth']['pw_max_age'], + 'pwd_min_age': settings['auth']['pw_min_age'], + 'uid_min': settings['auth']['uid_min'], + 'sys_uid_min': settings['auth']['sys_uid_min'], + 'sys_uid_max': settings['auth']['sys_uid_max'], + 'gid_min': settings['auth']['gid_min'], + 'sys_gid_min': settings['auth']['sys_gid_min'], + 'sys_gid_max': settings['auth']['sys_gid_max'], + 'login_retries': settings['auth']['retries'], + 'login_timeout': settings['auth']['timeout'], + 'chfn_restrict': settings['auth']['chfn_restrict'], + 'allow_login_without_home': settings['auth']['allow_homeless'] + } + + return ctxt diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/host/checks/minimize_access.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/host/checks/minimize_access.py new file mode 100644 index 0000000000000000000000000000000000000000..6e64be003be0b89d1416b22c35c43b6024979361 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/host/checks/minimize_access.py @@ -0,0 +1,50 @@ +# Copyright 2016 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +from charmhelpers.contrib.hardening.audits.file import ( + FilePermissionAudit, + ReadOnly, +) +from charmhelpers.contrib.hardening import utils + + +def get_audits(): + """Get OS hardening access audits. + + :returns: dictionary of audits + """ + audits = [] + settings = utils.get_settings('os') + + # Remove write permissions from $PATH folders for all regular users. + # This prevents changing system-wide commands from normal users. + path_folders = {'/usr/local/sbin', + '/usr/local/bin', + '/usr/sbin', + '/usr/bin', + '/bin'} + extra_user_paths = settings['environment']['extra_user_paths'] + path_folders.update(extra_user_paths) + audits.append(ReadOnly(path_folders)) + + # Only allow the root user to have access to the shadow file. + audits.append(FilePermissionAudit('/etc/shadow', 'root', 'root', 0o0600)) + + if 'change_user' not in settings['security']['users_allow']: + # su should only be accessible to user and group root, unless it is + # expressly defined to allow users to change to root via the + # security_users_allow config option. 
+ audits.append(FilePermissionAudit('/bin/su', 'root', 'root', 0o750)) + + return audits diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/host/checks/pam.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/host/checks/pam.py new file mode 100644 index 0000000000000000000000000000000000000000..9b38d5f0cf0b16282968825b79b44d80a1a7f577 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/host/checks/pam.py @@ -0,0 +1,132 @@ +# Copyright 2016 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +from subprocess import ( + check_output, + CalledProcessError, +) + +from charmhelpers.core.hookenv import ( + log, + DEBUG, + ERROR, +) +from charmhelpers.fetch import ( + apt_install, + apt_purge, + apt_update, +) +from charmhelpers.contrib.hardening.audits.file import ( + TemplatedFile, + DeletedFile, +) +from charmhelpers.contrib.hardening import utils +from charmhelpers.contrib.hardening.host import TEMPLATES_DIR + + +def get_audits(): + """Get OS hardening PAM authentication audits. + + :returns: dictionary of audits + """ + audits = [] + + settings = utils.get_settings('os') + + if settings['auth']['pam_passwdqc_enable']: + audits.append(PasswdqcPAM('/etc/passwdqc.conf')) + + if settings['auth']['retries']: + audits.append(Tally2PAM('/usr/share/pam-configs/tally2')) + else: + audits.append(DeletedFile('/usr/share/pam-configs/tally2')) + + return audits + + +class PasswdqcPAMContext(object): + + def __call__(self): + ctxt = {} + settings = utils.get_settings('os') + + ctxt['auth_pam_passwdqc_options'] = \ + settings['auth']['pam_passwdqc_options'] + + return ctxt + + +class PasswdqcPAM(TemplatedFile): + """The PAM Audit verifies the linux PAM settings.""" + def __init__(self, path): + super(PasswdqcPAM, self).__init__(path=path, + template_dir=TEMPLATES_DIR, + context=PasswdqcPAMContext(), + user='root', + group='root', + mode=0o0640) + + def pre_write(self): + # Always remove? 
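PasswdqcPAM here (and Tally2PAM below) show the full shape of a TemplatedFile audit: package management in pre_write(), rendering handled by the base class, activation in post_write(). Paraphrased as a sequence (the driving loop lives in the audit base classes, not in this file, so this is an assumption about ordering):

audit = PasswdqcPAM('/etc/passwdqc.conf')
audit.pre_write()    # purge conflicting PAM packages, install libpam-passwdqc
# ...base class renders the passwdqc template to the target path, 0o0640...
audit.post_write()   # 'pam-auth-update --package' activates the new profile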
+        for pkg in ['libpam-ccreds', 'libpam-cracklib']:
+            log("Purging package '%s'" % pkg, level=DEBUG)
+            apt_purge(pkg)
+
+        apt_update(fatal=True)
+        for pkg in ['libpam-passwdqc']:
+            log("Installing package '%s'" % pkg, level=DEBUG)
+            apt_install(pkg)
+
+    def post_write(self):
+        """Updates the PAM configuration after the file has been written"""
+        try:
+            check_output(['pam-auth-update', '--package'])
+        except CalledProcessError as e:
+            log('Error calling pam-auth-update: %s' % e, level=ERROR)
+
+
+class Tally2PAMContext(object):
+
+    def __call__(self):
+        ctxt = {}
+        settings = utils.get_settings('os')
+
+        ctxt['auth_lockout_time'] = settings['auth']['lockout_time']
+        ctxt['auth_retries'] = settings['auth']['retries']
+
+        return ctxt
+
+
+class Tally2PAM(TemplatedFile):
+    """The PAM Audit verifies the Linux PAM settings."""
+    def __init__(self, path):
+        super(Tally2PAM, self).__init__(path=path,
+                                        template_dir=TEMPLATES_DIR,
+                                        context=Tally2PAMContext(),
+                                        user='root',
+                                        group='root',
+                                        mode=0o0640)
+
+    def pre_write(self):
+        # Always remove?
+        apt_purge('libpam-ccreds')
+        apt_update(fatal=True)
+        apt_install('libpam-modules')
+
+    def post_write(self):
+        """Updates the PAM configuration after the file has been written"""
+        try:
+            check_output(['pam-auth-update', '--package'])
+        except CalledProcessError as e:
+            log('Error calling pam-auth-update: %s' % e, level=ERROR)
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/host/checks/profile.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/host/checks/profile.py
new file mode 100644
index 0000000000000000000000000000000000000000..2727428da9241ccf88a60843d05dffb26cebac96
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/host/checks/profile.py
@@ -0,0 +1,49 @@
+# Copyright 2016 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from charmhelpers.contrib.hardening.audits.file import TemplatedFile
+from charmhelpers.contrib.hardening.host import TEMPLATES_DIR
+from charmhelpers.contrib.hardening import utils
+
+
+def get_audits():
+    """Get OS hardening profile audits.
+
+    :returns: list of audits
+    """
+    audits = []
+
+    settings = utils.get_settings('os')
+    # If core dumps are not enabled, then don't allow core dumps to be
+    # created as they may contain sensitive information.
+ if not settings['security']['kernel_enable_core_dump']: + audits.append(TemplatedFile('/etc/profile.d/pinerolo_profile.sh', + ProfileContext(), + template_dir=TEMPLATES_DIR, + mode=0o0755, user='root', group='root')) + if settings['security']['ssh_tmout']: + audits.append(TemplatedFile('/etc/profile.d/99-hardening.sh', + ProfileContext(), + template_dir=TEMPLATES_DIR, + mode=0o0644, user='root', group='root')) + return audits + + +class ProfileContext(object): + + def __call__(self): + settings = utils.get_settings('os') + ctxt = {'ssh_tmout': + settings['security']['ssh_tmout']} + return ctxt diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/host/checks/securetty.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/host/checks/securetty.py new file mode 100644 index 0000000000000000000000000000000000000000..34cd02178c1fbf1c7d467af0814ad9fd4199dc3d --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/host/checks/securetty.py @@ -0,0 +1,37 @@ +# Copyright 2016 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +from charmhelpers.contrib.hardening.audits.file import TemplatedFile +from charmhelpers.contrib.hardening.host import TEMPLATES_DIR +from charmhelpers.contrib.hardening import utils + + +def get_audits(): + """Get OS hardening Secure TTY audits. + + :returns: dictionary of audits + """ + audits = [] + audits.append(TemplatedFile('/etc/securetty', SecureTTYContext(), + template_dir=TEMPLATES_DIR, + mode=0o0400, user='root', group='root')) + return audits + + +class SecureTTYContext(object): + + def __call__(self): + settings = utils.get_settings('os') + ctxt = {'ttys': settings['auth']['root_ttys']} + return ctxt diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/host/checks/suid_sgid.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/host/checks/suid_sgid.py new file mode 100644 index 0000000000000000000000000000000000000000..bcbe3fde07ea0716e2de6d1d4e103fcb19166c14 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/host/checks/suid_sgid.py @@ -0,0 +1,129 @@ +# Copyright 2016 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
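Each *Context class in these checks is a callable returning the dict its Jinja2 template is rendered with. For the securetty audit above, the pairing works roughly as follows; the inline template mirrors the securetty template added later in this diff, and the ttys list stands in for settings['auth']['root_ttys']:

from jinja2 import Template

SECURETTY = ("{% if ttys -%}\n"
             "{% for tty in ttys -%}\n"
             "{{ tty }}\n"
             "{% endfor -%}\n"
             "{% endif -%}")

ctxt = {'ttys': ['console', 'tty1', 'tty2']}  # as SecureTTYContext() might return
print(Template(SECURETTY).render(**ctxt))     # one permitted root TTY per line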
+ +import subprocess + +from charmhelpers.core.hookenv import ( + log, + INFO, +) +from charmhelpers.contrib.hardening.audits.file import NoSUIDSGIDAudit +from charmhelpers.contrib.hardening import utils + + +BLACKLIST = ['/usr/bin/rcp', '/usr/bin/rlogin', '/usr/bin/rsh', + '/usr/libexec/openssh/ssh-keysign', + '/usr/lib/openssh/ssh-keysign', + '/sbin/netreport', + '/usr/sbin/usernetctl', + '/usr/sbin/userisdnctl', + '/usr/sbin/pppd', + '/usr/bin/lockfile', + '/usr/bin/mail-lock', + '/usr/bin/mail-unlock', + '/usr/bin/mail-touchlock', + '/usr/bin/dotlockfile', + '/usr/bin/arping', + '/usr/sbin/uuidd', + '/usr/bin/mtr', + '/usr/lib/evolution/camel-lock-helper-1.2', + '/usr/lib/pt_chown', + '/usr/lib/eject/dmcrypt-get-device', + '/usr/lib/mc/cons.saver'] + +WHITELIST = ['/bin/mount', '/bin/ping', '/bin/su', '/bin/umount', + '/sbin/pam_timestamp_check', '/sbin/unix_chkpwd', '/usr/bin/at', + '/usr/bin/gpasswd', '/usr/bin/locate', '/usr/bin/newgrp', + '/usr/bin/passwd', '/usr/bin/ssh-agent', + '/usr/libexec/utempter/utempter', '/usr/sbin/lockdev', + '/usr/sbin/sendmail.sendmail', '/usr/bin/expiry', + '/bin/ping6', '/usr/bin/traceroute6.iputils', + '/sbin/mount.nfs', '/sbin/umount.nfs', + '/sbin/mount.nfs4', '/sbin/umount.nfs4', + '/usr/bin/crontab', + '/usr/bin/wall', '/usr/bin/write', + '/usr/bin/screen', + '/usr/bin/mlocate', + '/usr/bin/chage', '/usr/bin/chfn', '/usr/bin/chsh', + '/bin/fusermount', + '/usr/bin/pkexec', + '/usr/bin/sudo', '/usr/bin/sudoedit', + '/usr/sbin/postdrop', '/usr/sbin/postqueue', + '/usr/sbin/suexec', + '/usr/lib/squid/ncsa_auth', '/usr/lib/squid/pam_auth', + '/usr/kerberos/bin/ksu', + '/usr/sbin/ccreds_validate', + '/usr/bin/Xorg', + '/usr/bin/X', + '/usr/lib/dbus-1.0/dbus-daemon-launch-helper', + '/usr/lib/vte/gnome-pty-helper', + '/usr/lib/libvte9/gnome-pty-helper', + '/usr/lib/libvte-2.90-9/gnome-pty-helper'] + + +def get_audits(): + """Get OS hardening suid/sgid audits. + + :returns: dictionary of audits + """ + checks = [] + settings = utils.get_settings('os') + if not settings['security']['suid_sgid_enforce']: + log("Skipping suid/sgid hardening", level=INFO) + return checks + + # Build the blacklist and whitelist of files for suid/sgid checks. + # There are a total of 4 lists: + # 1. the system blacklist + # 2. the system whitelist + # 3. the user blacklist + # 4. the user whitelist + # + # The blacklist is the set of paths which should NOT have the suid/sgid bit + # set and the whitelist is the set of paths which MAY have the suid/sgid + # bit setl. The user whitelist/blacklist effectively override the system + # whitelist/blacklist. + u_b = settings['security']['suid_sgid_blacklist'] + u_w = settings['security']['suid_sgid_whitelist'] + + blacklist = set(BLACKLIST) - set(u_w + u_b) + whitelist = set(WHITELIST) - set(u_b + u_w) + + checks.append(NoSUIDSGIDAudit(blacklist)) + + dry_run = settings['security']['suid_sgid_dry_run_on_unknown'] + + if settings['security']['suid_sgid_remove_from_unknown'] or dry_run: + # If the policy is a dry_run (e.g. complain only) or remove unknown + # suid/sgid bits then find all of the paths which have the suid/sgid + # bit set and then remove the whitelisted paths. + root_path = settings['environment']['root_path'] + unknown_paths = find_paths_with_suid_sgid(root_path) - set(whitelist) + checks.append(NoSUIDSGIDAudit(unknown_paths, unless=dry_run)) + + return checks + + +def find_paths_with_suid_sgid(root_path): + """Finds all paths/files which have an suid/sgid bit enabled. 
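The four-list arithmetic described in the comments above is easiest to see with toy data (paths invented for illustration):

system_blacklist = {'/usr/bin/rcp', '/usr/bin/rlogin'}
system_whitelist = {'/bin/ping', '/bin/su'}
u_b = ['/bin/ping']       # user blacklist: ping must NOT be suid/sgid
u_w = ['/usr/bin/rcp']    # user whitelist: rcp MAY be suid/sgid

blacklist = system_blacklist - set(u_w + u_b)   # {'/usr/bin/rlogin'}
whitelist = system_whitelist - set(u_b + u_w)   # {'/bin/su'}

Any path the user classifies drops out of both system lists, so the user's verdict always wins; the reduced blacklist is what NoSUIDSGIDAudit then enforces.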
+
+    Starting with the root_path, this will recursively find all paths which
+    have an suid or sgid bit set.
+    """
+    # Equivalent to:
+    #   find <root_path> -perm -4000 -o -perm -2000 -type f \
+    #        ! -path '/proc/*' -print
+    cmd = ['find', root_path, '-perm', '-4000', '-o', '-perm', '-2000',
+           '-type', 'f', '!', '-path', '/proc/*', '-print']
+
+    p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
+    out, _ = p.communicate()
+    if not isinstance(out, str):
+        # Python 3 returns bytes from communicate(); decode before splitting.
+        out = out.decode('utf-8')
+    return set(out.split('\n'))
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/host/checks/sysctl.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/host/checks/sysctl.py
new file mode 100644
index 0000000000000000000000000000000000000000..f1ea5813036b11893e8b9a986bf30a2f7a541b5d
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/host/checks/sysctl.py
@@ -0,0 +1,209 @@
+# Copyright 2016 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import os
+import platform
+import re
+import six
+import subprocess
+
+from charmhelpers.core.hookenv import (
+    log,
+    INFO,
+    WARNING,
+)
+from charmhelpers.contrib.hardening import utils
+from charmhelpers.contrib.hardening.audits.file import (
+    FilePermissionAudit,
+    TemplatedFile,
+)
+from charmhelpers.contrib.hardening.host import TEMPLATES_DIR
+
+
+SYSCTL_DEFAULTS = """net.ipv4.ip_forward=%(net_ipv4_ip_forward)s
+net.ipv6.conf.all.forwarding=%(net_ipv6_conf_all_forwarding)s
+net.ipv4.conf.all.rp_filter=1
+net.ipv4.conf.default.rp_filter=1
+net.ipv4.icmp_echo_ignore_broadcasts=1
+net.ipv4.icmp_ignore_bogus_error_responses=1
+net.ipv4.icmp_ratelimit=100
+net.ipv4.icmp_ratemask=88089
+net.ipv6.conf.all.disable_ipv6=%(net_ipv6_conf_all_disable_ipv6)s
+net.ipv4.tcp_timestamps=%(net_ipv4_tcp_timestamps)s
+net.ipv4.conf.all.arp_ignore=%(net_ipv4_conf_all_arp_ignore)s
+net.ipv4.conf.all.arp_announce=%(net_ipv4_conf_all_arp_announce)s
+net.ipv4.tcp_rfc1337=1
+net.ipv4.tcp_syncookies=1
+net.ipv4.conf.all.shared_media=1
+net.ipv4.conf.default.shared_media=1
+net.ipv4.conf.all.accept_source_route=0
+net.ipv4.conf.default.accept_source_route=0
+net.ipv4.conf.all.accept_redirects=0
+net.ipv4.conf.default.accept_redirects=0
+net.ipv6.conf.all.accept_redirects=0
+net.ipv6.conf.default.accept_redirects=0
+net.ipv4.conf.all.secure_redirects=0
+net.ipv4.conf.default.secure_redirects=0
+net.ipv4.conf.all.send_redirects=0
+net.ipv4.conf.default.send_redirects=0
+net.ipv4.conf.all.log_martians=0
+net.ipv6.conf.default.router_solicitations=0
+net.ipv6.conf.default.accept_ra_rtr_pref=0
+net.ipv6.conf.default.accept_ra_pinfo=0
+net.ipv6.conf.default.accept_ra_defrtr=0
+net.ipv6.conf.default.autoconf=0
+net.ipv6.conf.default.dad_transmits=0
+net.ipv6.conf.default.max_addresses=1
+net.ipv6.conf.all.accept_ra=0
+net.ipv6.conf.default.accept_ra=0
+kernel.modules_disabled=%(kernel_modules_disabled)s
+kernel.sysrq=%(kernel_sysrq)s
+fs.suid_dumpable=%(fs_suid_dumpable)s
+kernel.randomize_va_space=2
+"""
+
+
+def get_audits():
+    """Get OS hardening sysctl audits.
+
+    :returns: list of audits
+    """
+    audits = []
+    settings = utils.get_settings('os')
+
+    # Apply the sysctl settings which are configured to be applied.
+    audits.append(SysctlConf())
+    # Make sure that only root has access to the sysctl.conf file, and
+    # that it is read-only.
+    audits.append(FilePermissionAudit('/etc/sysctl.conf',
+                                      user='root',
+                                      group='root', mode=0o0440))
+    # If module loading is not enabled, then ensure that the modules
+    # file has the appropriate permissions and rebuild the initramfs.
+    if not settings['security']['kernel_enable_module_loading']:
+        audits.append(ModulesTemplate())
+
+    return audits
+
+
+class ModulesContext(object):
+
+    def __call__(self):
+        settings = utils.get_settings('os')
+        with open('/proc/cpuinfo', 'r') as fd:
+            cpuinfo = fd.readlines()
+
+        # Default when the vendor cannot be determined from /proc/cpuinfo.
+        vendor = None
+        for line in cpuinfo:
+            match = re.search(r"^vendor_id\s+:\s+(.+)", line)
+            if match:
+                vendor = match.group(1)
+
+        if vendor == "GenuineIntel":
+            vendor = "intel"
+        elif vendor == "AuthenticAMD":
+            vendor = "amd"
+
+        ctxt = {'arch': platform.processor(),
+                'cpuVendor': vendor,
+                'desktop_enable': settings['general']['desktop_enable']}
+
+        return ctxt
+
+
+class ModulesTemplate(TemplatedFile):
+    # Subclass TemplatedFile so the constructor arguments below take effect.
+
+    def __init__(self):
+        super(ModulesTemplate, self).__init__('/etc/initramfs-tools/modules',
+                                              ModulesContext(),
+                                              template_dir=TEMPLATES_DIR,
+                                              user='root', group='root',
+                                              mode=0o0440)
+
+    def post_write(self):
+        subprocess.check_call(['update-initramfs', '-u'])
+
+
+class SysCtlHardeningContext(object):
+    def __call__(self):
+        settings = utils.get_settings('os')
+        ctxt = {'sysctl': {}}
+
+        log("Applying sysctl settings", level=INFO)
+        extras = {'net_ipv4_ip_forward': 0,
+                  'net_ipv6_conf_all_forwarding': 0,
+                  'net_ipv6_conf_all_disable_ipv6': 1,
+                  'net_ipv4_tcp_timestamps': 0,
+                  'net_ipv4_conf_all_arp_ignore': 0,
+                  'net_ipv4_conf_all_arp_announce': 0,
+                  'kernel_sysrq': 0,
+                  'fs_suid_dumpable': 0,
+                  'kernel_modules_disabled': 1}
+
+        if settings['sysctl']['ipv6_enable']:
+            extras['net_ipv6_conf_all_disable_ipv6'] = 0
+
+        if settings['sysctl']['forwarding']:
+            extras['net_ipv4_ip_forward'] = 1
+            extras['net_ipv6_conf_all_forwarding'] = 1
+
+        if settings['sysctl']['arp_restricted']:
+            extras['net_ipv4_conf_all_arp_ignore'] = 1
+            extras['net_ipv4_conf_all_arp_announce'] = 2
+
+        if settings['security']['kernel_enable_module_loading']:
+            extras['kernel_modules_disabled'] = 0
+
+        if settings['sysctl']['kernel_enable_sysrq']:
+            sysrq_val = settings['sysctl']['kernel_secure_sysrq']
+            extras['kernel_sysrq'] = sysrq_val
+
+        if settings['security']['kernel_enable_core_dump']:
+            extras['fs_suid_dumpable'] = 1
+
+        settings.update(extras)
+        for d in (SYSCTL_DEFAULTS % settings).split():
+            d = d.strip().partition('=')
+            key = d[0].strip()
+            path = os.path.join('/proc/sys', key.replace('.', '/'))
+            if not os.path.exists(path):
+                log("Skipping '%s' since '%s' does not exist" % (key, path),
+                    level=WARNING)
+                continue
+
+            ctxt['sysctl'][key] = d[2] or None
+
+        # Translate for python3
+        return {'sysctl_settings':
+                [(k, v) for k, v in six.iteritems(ctxt['sysctl'])]}
+
+
+class SysctlConf(TemplatedFile):
+    """An audit check for sysctl settings."""
+    def __init__(self):
+        self.conffile = '/etc/sysctl.d/99-juju-hardening.conf'
+        super(SysctlConf, self).__init__(self.conffile,
+                                         SysCtlHardeningContext(),
+                                         template_dir=TEMPLATES_DIR,
+                                         user='root', group='root',
+                                         mode=0o0440)
+
+    def post_write(self):
+        try:
+            subprocess.check_call(['sysctl', '-p', self.conffile])
+        except subprocess.CalledProcessError as e:
+            # NOTE: on some systems if sysctl cannot apply all settings it
+            # will return non-zero as well.
+            log("sysctl command returned an error (maybe some "
+                "keys could not be set) - %s" % (e),
+                level=WARNING)
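SysCtlHardeningContext folds the computed extras into the settings dict and then interpolates SYSCTL_DEFAULTS; a stripped-down re-run of that loop shows the mechanics (two keys only, values invented):

SNIPPET = ("net.ipv4.ip_forward=%(net_ipv4_ip_forward)s\n"
           "kernel.sysrq=%(kernel_sysrq)s\n")
settings = {'net_ipv4_ip_forward': 0, 'kernel_sysrq': 0}

for entry in (SNIPPET % settings).split():
    key, _, value = entry.strip().partition('=')
    print(key, '->', value)
# net.ipv4.ip_forward -> 0
# kernel.sysrq -> 0

The /proc/sys existence check in the real loop then drops any key the running kernel does not expose, so the rendered 99-juju-hardening.conf only contains applicable settings.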
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/host/templates/10.hardcore.conf b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/host/templates/10.hardcore.conf
new file mode 100644
index 0000000000000000000000000000000000000000..0014191fc8152fd9147b3fb5446987e6e62f2d77
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/host/templates/10.hardcore.conf
@@ -0,0 +1,8 @@
+###############################################################################
+# WARNING: This configuration file is maintained by Juju. Local changes may
+# be overwritten.
+###############################################################################
+{% if disable_core_dump -%}
+# Prevent core dumps for all users. These are usually only needed by developers and may contain sensitive information.
+* hard core 0
+{% endif %}
\ No newline at end of file
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/host/templates/99-hardening.sh b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/host/templates/99-hardening.sh
new file mode 100644
index 0000000000000000000000000000000000000000..616cef46f492f682aca28c71a6e20176870a36f2
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/host/templates/99-hardening.sh
@@ -0,0 +1,5 @@
+TMOUT={{ ssh_tmout }}
+readonly TMOUT
+export TMOUT
+
+readonly HISTFILE
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/host/templates/99-juju-hardening.conf b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/host/templates/99-juju-hardening.conf
new file mode 100644
index 0000000000000000000000000000000000000000..101f1e1d709c268890553957f30c93259681ce59
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/host/templates/99-juju-hardening.conf
@@ -0,0 +1,7 @@
+###############################################################################
+# WARNING: This configuration file is maintained by Juju. Local changes may
+# be overwritten.
+###############################################################################
+{% for key, value in sysctl_settings -%}
+{{ key }}={{ value }}
+{% endfor -%}
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/host/templates/login.defs b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/host/templates/login.defs
new file mode 100644
index 0000000000000000000000000000000000000000..db137d6dbb7a3a850294407199225392a880cfc2
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/host/templates/login.defs
@@ -0,0 +1,349 @@
+###############################################################################
+# WARNING: This configuration file is maintained by Juju. Local changes may
+# be overwritten.
+############################################################################### +# +# /etc/login.defs - Configuration control definitions for the login package. +# +# Three items must be defined: MAIL_DIR, ENV_SUPATH, and ENV_PATH. +# If unspecified, some arbitrary (and possibly incorrect) value will +# be assumed. All other items are optional - if not specified then +# the described action or option will be inhibited. +# +# Comment lines (lines beginning with "#") and blank lines are ignored. +# +# Modified for Linux. --marekm + +# REQUIRED for useradd/userdel/usermod +# Directory where mailboxes reside, _or_ name of file, relative to the +# home directory. If you _do_ define MAIL_DIR and MAIL_FILE, +# MAIL_DIR takes precedence. +# +# Essentially: +# - MAIL_DIR defines the location of users mail spool files +# (for mbox use) by appending the username to MAIL_DIR as defined +# below. +# - MAIL_FILE defines the location of the users mail spool files as the +# fully-qualified filename obtained by prepending the user home +# directory before $MAIL_FILE +# +# NOTE: This is no more used for setting up users MAIL environment variable +# which is, starting from shadow 4.0.12-1 in Debian, entirely the +# job of the pam_mail PAM modules +# See default PAM configuration files provided for +# login, su, etc. +# +# This is a temporary situation: setting these variables will soon +# move to /etc/default/useradd and the variables will then be +# no more supported +MAIL_DIR /var/mail +#MAIL_FILE .mail + +# +# Enable logging and display of /var/log/faillog login failure info. +# This option conflicts with the pam_tally PAM module. +# +FAILLOG_ENAB yes + +# +# Enable display of unknown usernames when login failures are recorded. +# +# WARNING: Unknown usernames may become world readable. +# See #290803 and #298773 for details about how this could become a security +# concern +LOG_UNKFAIL_ENAB no + +# +# Enable logging of successful logins +# +LOG_OK_LOGINS yes + +# +# Enable "syslog" logging of su activity - in addition to sulog file logging. +# SYSLOG_SG_ENAB does the same for newgrp and sg. +# +SYSLOG_SU_ENAB yes +SYSLOG_SG_ENAB yes + +# +# If defined, all su activity is logged to this file. +# +#SULOG_FILE /var/log/sulog + +# +# If defined, file which maps tty line to TERM environment parameter. +# Each line of the file is in a format something like "vt100 tty01". +# +#TTYTYPE_FILE /etc/ttytype + +# +# If defined, login failures will be logged here in a utmp format +# last, when invoked as lastb, will read /var/log/btmp, so... +# +FTMP_FILE /var/log/btmp + +# +# If defined, the command name to display when running "su -". For +# example, if this is defined as "su" then a "ps" will display the +# command is "-su". If not defined, then "ps" would display the +# name of the shell actually being run, e.g. something like "-sh". +# +SU_NAME su + +# +# If defined, file which inhibits all the usual chatter during the login +# sequence. If a full pathname, then hushed mode will be enabled if the +# user's name or shell are found in the file. If not a full pathname, then +# hushed mode will be enabled if the file exists in the user's home directory. +# +HUSHLOGIN_FILE .hushlogin +#HUSHLOGIN_FILE /etc/hushlogins + +# +# *REQUIRED* The default PATH settings, for superuser and normal users. 
+#
+# (they are minimal, add the rest in the shell startup files)
+ENV_SUPATH	PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
+ENV_PATH	PATH=/usr/local/bin:/usr/bin:/bin{% if additional_user_paths %}{{ additional_user_paths }}{% endif %}
+
+#
+# Terminal permissions
+#
+#	TTYGROUP	Login tty will be assigned this group ownership.
+#	TTYPERM		Login tty will be set to this permission.
+#
+# If you have a "write" program which is "setgid" to a special group
+# which owns the terminals, define TTYGROUP to the group number and
+# TTYPERM to 0620.  Otherwise leave TTYGROUP commented out and assign
+# TTYPERM to either 622 or 600.
+#
+# In Debian /usr/bin/bsd-write or similar programs are setgid tty.
+# However, the default and recommended value for TTYPERM is still 0600
+# to not allow anyone to write to anyone else's console or terminal.
+
+# Users can still allow other people to write to them by issuing
+# the "mesg y" command.
+
+TTYGROUP	tty
+TTYPERM		0600
+
+#
+# Login configuration initializations:
+#
+#	ERASECHAR	Terminal ERASE character ('\010' = backspace).
+#	KILLCHAR	Terminal KILL character ('\025' = CTRL/U).
+#	UMASK		Default "umask" value.
+#
+# The ERASECHAR and KILLCHAR are used only on System V machines.
+#
+# UMASK is the default umask value for pam_umask and is used by
+# useradd and newusers to set the mode of the new home directories.
+# 022 is the "historical" value in Debian for UMASK;
+# 027, or even 077, could be considered better for privacy.
+# There is no One True Answer here: each sysadmin must make up his/her
+# mind.
+#
+# If USERGROUPS_ENAB is set to "yes", that will modify this UMASK default value
+# for private user groups, i.e. the uid is the same as gid, and username is
+# the same as the primary group name: for these, the user permissions will be
+# used as group permissions, e.g. 022 will become 002.
+#
+# Prefix these values with "0" to get octal, "0x" to get hexadecimal.
+#
+ERASECHAR	0177
+KILLCHAR	025
+UMASK		{{ umask }}
+
+#
+# Password aging controls:
+#
+#	PASS_MAX_DAYS	Maximum number of days a password may be used.
+#	PASS_MIN_DAYS	Minimum number of days allowed between password changes.
+#	PASS_WARN_AGE	Number of days warning given before a password expires.
+#
+PASS_MAX_DAYS	{{ pwd_max_age }}
+PASS_MIN_DAYS	{{ pwd_min_age }}
+PASS_WARN_AGE	7
+
+#
+# Min/max values for automatic uid selection in useradd
+#
+UID_MIN			{{ uid_min }}
+UID_MAX			60000
+# System accounts
+SYS_UID_MIN		{{ sys_uid_min }}
+SYS_UID_MAX		{{ sys_uid_max }}
+
+# Min/max values for automatic gid selection in groupadd
+GID_MIN			{{ gid_min }}
+GID_MAX			60000
+# System accounts
+SYS_GID_MIN		{{ sys_gid_min }}
+SYS_GID_MAX		{{ sys_gid_max }}
+
+#
+# Max number of login retries if password is bad. This will most likely be
+# overridden by PAM, since the default pam_unix module has its own built-in
+# limit of 3 retries. However, this is a safe fallback in case you are using
+# an authentication module that does not enforce PAM_MAXTRIES.
+#
+LOGIN_RETRIES	{{ login_retries }}
+
+#
+# Max time in seconds for login
+#
+LOGIN_TIMEOUT	{{ login_timeout }}
+
+#
+# Which fields may be changed by regular users using chfn - use
+# any combination of letters "frwh" (full name, room number, work
+# phone, home phone). If not defined, no changes are allowed.
+# For backward compatibility, "yes" = "rwh" and "no" = "frwh".
+#
+{% if chfn_restrict %}
+CHFN_RESTRICT		{{ chfn_restrict }}
+{% endif %}
+
+#
+# Should login be allowed if we can't cd to the home directory?
+# Default is no.
+#
+DEFAULT_HOME	{% if allow_login_without_home %} yes {% else %} no {% endif %}
+
+#
+# If defined, this command is run when removing a user.
+# It should remove any at/cron/print jobs etc. owned by
+# the user to be removed (passed as the first argument).
+#
+#USERDEL_CMD	/usr/sbin/userdel_local
+
+#
+# Enable setting of the umask group bits to be the same as owner bits
+# (examples: 022 -> 002, 077 -> 007) for non-root users, if the uid is
+# the same as gid, and username is the same as the primary group name.
+#
+# If set to yes, userdel will remove the user's group if it contains no
+# more members, and useradd will create by default a group with the name
+# of the user.
+#
+USERGROUPS_ENAB yes
+
+#
+# Instead of the real user shell, the program specified by this parameter
+# will be launched, although its visible name (argv[0]) will be the shell's.
+# The program may do whatever it wants (logging, additional authentication,
+# banner, ...) before running the actual shell.
+#
+# FAKE_SHELL /bin/fakeshell
+
+#
+# If defined, either full pathname of a file containing device names or
+# a ":" delimited list of device names. Root logins will be allowed only
+# upon these devices.
+#
+# This variable is used by login and su.
+#
+#CONSOLE	/etc/consoles
+#CONSOLE	console:tty01:tty02:tty03:tty04
+
+#
+# List of groups to add to the user's supplementary group set
+# when logging in on the console (as determined by the CONSOLE
+# setting). Default is none.
+#
+# Use with caution - it is possible for users to gain permanent
+# access to these groups, even when not logged in on the console.
+# How to do it is left as an exercise for the reader...
+#
+# This variable is used by login and su.
+#
+#CONSOLE_GROUPS		floppy:audio:cdrom
+
+#
+# If set to "yes", new passwords will be encrypted using the MD5-based
+# algorithm compatible with the one used by recent releases of FreeBSD.
+# It supports passwords of unlimited length and longer salt strings.
+# Set to "no" if you need to copy encrypted passwords to other systems
+# which don't understand the new algorithm. Default is "no".
+#
+# This variable is deprecated. You should use ENCRYPT_METHOD.
+#
+MD5_CRYPT_ENAB	no
+
+#
+# If set to MD5, an MD5-based algorithm will be used for encrypting passwords.
+# If set to SHA256, a SHA256-based algorithm will be used.
+# If set to SHA512, a SHA512-based algorithm will be used.
+# If set to DES, a DES-based algorithm will be used (default).
+# Overrides the MD5_CRYPT_ENAB option.
+#
+# Note: It is recommended to use a value consistent with
+# the PAM modules configuration.
+#
+ENCRYPT_METHOD	SHA512
+
+#
+# Only used if ENCRYPT_METHOD is set to SHA256 or SHA512.
+#
+# Define the number of SHA rounds.
+# With a lot of rounds, it is more difficult to brute-force the password.
+# But note also that more CPU resources will be needed to authenticate
+# users.
+# +# If not specified, the libc will choose the default number of rounds (5000). +# The values must be inside the 1000-999999999 range. +# If only one of the MIN or MAX values is set, then this value will be used. +# If MIN > MAX, the highest value will be used. +# +# SHA_CRYPT_MIN_ROUNDS 5000 +# SHA_CRYPT_MAX_ROUNDS 5000 + +################# OBSOLETED BY PAM ############## +# # +# These options are now handled by PAM. Please # +# edit the appropriate file in /etc/pam.d/ to # +# enable the equivelants of them. +# +############### + +#MOTD_FILE +#DIALUPS_CHECK_ENAB +#LASTLOG_ENAB +#MAIL_CHECK_ENAB +#OBSCURE_CHECKS_ENAB +#PORTTIME_CHECKS_ENAB +#SU_WHEEL_ONLY +#CRACKLIB_DICTPATH +#PASS_CHANGE_TRIES +#PASS_ALWAYS_WARN +#ENVIRON_FILE +#NOLOGINS_FILE +#ISSUE_FILE +#PASS_MIN_LEN +#PASS_MAX_LEN +#ULIMIT +#ENV_HZ +#CHFN_AUTH +#CHSH_AUTH +#FAIL_DELAY + +################# OBSOLETED ####################### +# # +# These options are no more handled by shadow. # +# # +# Shadow utilities will display a warning if they # +# still appear. # +# # +################################################### + +# CLOSE_SESSIONS +# LOGIN_STRING +# NO_PASSWORD_CONSOLE +# QMAIL_DIR + + + diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/host/templates/modules b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/host/templates/modules new file mode 100644 index 0000000000000000000000000000000000000000..ef0354ee35fa363b303bb22c6ed0d2d1196aed52 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/host/templates/modules @@ -0,0 +1,117 @@ +############################################################################### +# WARNING: This configuration file is maintained by Juju. Local changes may +# be overwritten. +############################################################################### +# /etc/modules: kernel modules to load at boot time. +# +# This file contains the names of kernel modules that should be loaded +# at boot time, one per line. Lines beginning with "#" are ignored. +# Parameters can be specified after the module name. + +# Arch +# ---- +# +# Modules for certains builds, contains support modules and some CPU-specific optimizations. + +{% if arch == "x86_64" -%} +# Optimize for x86_64 cryptographic features +twofish-x86_64-3way +twofish-x86_64 +aes-x86_64 +salsa20-x86_64 +blowfish-x86_64 +{% endif -%} + +{% if cpuVendor == "intel" -%} +# Intel-specific optimizations +ghash-clmulni-intel +aesni-intel +kvm-intel +{% endif -%} + +{% if cpuVendor == "amd" -%} +# AMD-specific optimizations +kvm-amd +{% endif -%} + +kvm + + +# Crypto +# ------ + +# Some core modules which comprise strong cryptography. +blowfish_common +blowfish_generic +ctr +cts +lrw +lzo +rmd160 +rmd256 +rmd320 +serpent +sha512_generic +twofish_common +twofish_generic +xts +zlib + + +# Drivers +# ------- + +# Basics +lp +rtc +loop + +# Filesystems +ext2 +btrfs + +{% if desktop_enable -%} +# Desktop +psmouse +snd +snd_ac97_codec +snd_intel8x0 +snd_page_alloc +snd_pcm +snd_timer +soundcore +usbhid +{% endif -%} + +# Lib +# --- +xz + + +# Net +# --- + +# All packets needed for netfilter rules (ie iptables, ebtables). 
+ip_tables +x_tables +iptable_filter +iptable_nat + +# Targets +ipt_LOG +ipt_REJECT + +# Modules +xt_connlimit +xt_tcpudp +xt_recent +xt_limit +xt_conntrack +nf_conntrack +nf_conntrack_ipv4 +nf_defrag_ipv4 +xt_state +nf_nat + +# Addons +xt_pknock \ No newline at end of file diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/host/templates/passwdqc.conf b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/host/templates/passwdqc.conf new file mode 100644 index 0000000000000000000000000000000000000000..f98d14e57428c106692e0f57e8b381f2b0a12c44 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/host/templates/passwdqc.conf @@ -0,0 +1,11 @@ +############################################################################### +# WARNING: This configuration file is maintained by Juju. Local changes may +# be overwritten. +############################################################################### +Name: passwdqc password strength enforcement +Default: yes +Priority: 1024 +Conflicts: cracklib +Password-Type: Primary +Password: + requisite pam_passwdqc.so {{ auth_pam_passwdqc_options }} diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/host/templates/pinerolo_profile.sh b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/host/templates/pinerolo_profile.sh new file mode 100644 index 0000000000000000000000000000000000000000..fd2de791b96fbb8889811daf7340d1f2ca2ab3a6 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/host/templates/pinerolo_profile.sh @@ -0,0 +1,8 @@ +############################################################################### +# WARNING: This configuration file is maintained by Juju. Local changes may +# be overwritten. +############################################################################### +# Disable core dumps via soft limits for all users. Compliance to this setting +# is voluntary and can be modified by users up to a hard limit. This setting is +# a sane default. +ulimit -S -c 0 > /dev/null 2>&1 diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/host/templates/securetty b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/host/templates/securetty new file mode 100644 index 0000000000000000000000000000000000000000..15b18d4e2f45747845d0b65c06997f154ef674a4 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/host/templates/securetty @@ -0,0 +1,11 @@ +############################################################################### +# WARNING: This configuration file is maintained by Juju. Local changes may +# be overwritten. 
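pinerolo_profile.sh above and 10.hardcore.conf earlier attack core dumps from two sides: a soft limit users can undo and a hard limit they cannot. The distinction, expressed with Python's resource module (per-process limits, shown only to illustrate the semantics):

import resource

soft, hard = resource.getrlimit(resource.RLIMIT_CORE)
# 'ulimit -S -c 0' lowers only the soft limit:
resource.setrlimit(resource.RLIMIT_CORE, (0, hard))
# A user may raise the soft limit again, but never above 'hard', which is
# what the '* hard core 0' line in 10.hardcore.conf pins for everyone.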
+############################################################################### +# A list of TTYs, from which root can log in +# see `man securetty` for reference +{% if ttys -%} +{% for tty in ttys -%} +{{ tty }} +{% endfor -%} +{% endif -%} diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/host/templates/tally2 b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/host/templates/tally2 new file mode 100644 index 0000000000000000000000000000000000000000..d9620299c55e51abbee1017a227c217cd4a9fd33 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/host/templates/tally2 @@ -0,0 +1,14 @@ +############################################################################### +# WARNING: This configuration file is maintained by Juju. Local changes may +# be overwritten. +############################################################################### +Name: tally2 lockout after failed attempts enforcement +Default: yes +Priority: 1024 +Conflicts: cracklib +Auth-Type: Primary +Auth-Initial: + required pam_tally2.so deny={{ auth_retries }} onerr=fail unlock_time={{ auth_lockout_time }} +Account-Type: Primary +Account-Initial: + required pam_tally2.so diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/mysql/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/mysql/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..58bebd846bd6fa648cfab6ab1056ad10d8415453 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/mysql/__init__.py @@ -0,0 +1,17 @@ +# Copyright 2016 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +from os import path + +TEMPLATES_DIR = path.join(path.dirname(__file__), 'templates') diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/mysql/checks/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/mysql/checks/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..1990d8513bbeef067a8d9a2168e1952efb2961dc --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/mysql/checks/__init__.py @@ -0,0 +1,29 @@ +# Copyright 2016 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
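The tally2 pam-configs profile above is pure templating; with representative auth settings (the numbers are assumptions for illustration) it resolves to a concrete pam_tally2 policy line:

from jinja2 import Template

line = Template('required pam_tally2.so deny={{ auth_retries }} '
                'onerr=fail unlock_time={{ auth_lockout_time }}')
print(line.render(auth_retries=5, auth_lockout_time=600))
# required pam_tally2.so deny=5 onerr=fail unlock_time=600

That is: lock the account after five failed attempts and unlock it automatically after ten minutes.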
+# See the License for the specific language governing permissions and +# limitations under the License. + +from charmhelpers.core.hookenv import ( + log, + DEBUG, +) +from charmhelpers.contrib.hardening.mysql.checks import config + + +def run_mysql_checks(): + log("Starting MySQL hardening checks.", level=DEBUG) + checks = config.get_audits() + for check in checks: + log("Running '%s' check" % (check.__class__.__name__), level=DEBUG) + check.ensure_compliance() + + log("MySQL hardening checks complete.", level=DEBUG) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/mysql/checks/config.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/mysql/checks/config.py new file mode 100644 index 0000000000000000000000000000000000000000..a79f33b74a5c2972a82f0b4d8de8d1073dc293ed --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/mysql/checks/config.py @@ -0,0 +1,87 @@ +# Copyright 2016 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import six +import subprocess + +from charmhelpers.core.hookenv import ( + log, + WARNING, +) +from charmhelpers.contrib.hardening.audits.file import ( + FilePermissionAudit, + DirectoryPermissionAudit, + TemplatedFile, +) +from charmhelpers.contrib.hardening.mysql import TEMPLATES_DIR +from charmhelpers.contrib.hardening import utils + + +def get_audits(): + """Get MySQL hardening config audits. + + :returns: dictionary of audits + """ + if subprocess.call(['which', 'mysql'], stdout=subprocess.PIPE) != 0: + log("MySQL does not appear to be installed on this node - " + "skipping mysql hardening", level=WARNING) + return [] + + settings = utils.get_settings('mysql') + hardening_settings = settings['hardening'] + my_cnf = hardening_settings['mysql-conf'] + + audits = [ + FilePermissionAudit(paths=[my_cnf], user='root', + group='root', mode=0o0600), + + TemplatedFile(hardening_settings['hardening-conf'], + MySQLConfContext(), + TEMPLATES_DIR, + mode=0o0750, + user='mysql', + group='root', + service_actions=[{'service': 'mysql', + 'actions': ['restart']}]), + + # MySQL and Percona charms do not allow configuration of the + # data directory, so use the default. + DirectoryPermissionAudit('/var/lib/mysql', + user='mysql', + group='mysql', + recursive=False, + mode=0o755), + + DirectoryPermissionAudit('/etc/mysql', + user='root', + group='root', + recursive=False, + mode=0o700), + ] + + return audits + + +class MySQLConfContext(object): + """Defines the set of key/value pairs to set in a mysql config file. + + This context, when called, will return a dictionary containing the + key/value pairs of setting to specify in the + /etc/mysql/conf.d/hardening.cnf file. 
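MySQLConfContext (completed just below) deliberately emits the security settings as (key, value) pairs so the hardening.cnf template can distinguish bare flags from assignments. A sketch of that rendering, using the template added later in this diff; the settings pairs are invented examples:

from jinja2 import Template

HARDENING_CNF = ("[mysqld]\n"
                 "{% for setting, value in mysql_settings -%}\n"
                 "{% if value == 'True' -%}\n"
                 "{{ setting }}\n"
                 "{% elif value != 'None' and value != None -%}\n"
                 "{{ setting }} = {{ value }}\n"
                 "{% endif -%}\n"
                 "{% endfor -%}")

ctxt = {'mysql_settings': [('local-infile', '0'), ('skip-symbolic-links', 'True')]}
print(Template(HARDENING_CNF).render(ctxt))
# [mysqld]
# local-infile = 0
# skip-symbolic-links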
+ """ + def __call__(self): + settings = utils.get_settings('mysql') + # Translate for python3 + return {'mysql_settings': + [(k, v) for k, v in six.iteritems(settings['security'])]} diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/mysql/templates/hardening.cnf b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/mysql/templates/hardening.cnf new file mode 100644 index 0000000000000000000000000000000000000000..8242586cd66360b7e6ae33f13018363b95cd4ea9 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/mysql/templates/hardening.cnf @@ -0,0 +1,12 @@ +############################################################################### +# WARNING: This configuration file is maintained by Juju. Local changes may +# be overwritten. +############################################################################### +[mysqld] +{% for setting, value in mysql_settings -%} +{% if value == 'True' -%} +{{ setting }} +{% elif value != 'None' and value != None -%} +{{ setting }} = {{ value }} +{% endif -%} +{% endfor -%} diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/ssh/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/ssh/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..58bebd846bd6fa648cfab6ab1056ad10d8415453 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/ssh/__init__.py @@ -0,0 +1,17 @@ +# Copyright 2016 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +from os import path + +TEMPLATES_DIR = path.join(path.dirname(__file__), 'templates') diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/ssh/checks/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/ssh/checks/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..edaf484b39f8c7353cebb2f4b68944c6493ba7b3 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/ssh/checks/__init__.py @@ -0,0 +1,29 @@ +# Copyright 2016 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +from charmhelpers.core.hookenv import ( + log, + DEBUG, +) +from charmhelpers.contrib.hardening.ssh.checks import config + + +def run_ssh_checks(): + log("Starting SSH hardening checks.", level=DEBUG) + checks = config.get_audits() + for check in checks: + log("Running '%s' check" % (check.__class__.__name__), level=DEBUG) + check.ensure_compliance() + + log("SSH hardening checks complete.", level=DEBUG) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/ssh/checks/config.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/ssh/checks/config.py new file mode 100644 index 0000000000000000000000000000000000000000..41bed2d1e7b031182edcf62710876e4073dfbc6e --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/ssh/checks/config.py @@ -0,0 +1,435 @@ +# Copyright 2016 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import os + +from charmhelpers.contrib.network.ip import ( + get_address_in_network, + get_iface_addr, + is_ip, +) +from charmhelpers.core.hookenv import ( + log, + DEBUG, +) +from charmhelpers.fetch import ( + apt_install, + apt_update, +) +from charmhelpers.core.host import ( + lsb_release, + CompareHostReleases, +) +from charmhelpers.contrib.hardening.audits.file import ( + TemplatedFile, + FileContentAudit, +) +from charmhelpers.contrib.hardening.ssh import TEMPLATES_DIR +from charmhelpers.contrib.hardening import utils + + +def get_audits(): + """Get SSH hardening config audits. 
+
+    :returns: list of audits
+    """
+    audits = [SSHConfig(), SSHDConfig(), SSHConfigFileContentAudit(),
+              SSHDConfigFileContentAudit()]
+    return audits
+
+
+class SSHConfigContext(object):
+
+    type = 'client'
+
+    def get_macs(self, allow_weak_mac):
+        if allow_weak_mac:
+            weak_macs = 'weak'
+        else:
+            weak_macs = 'default'
+
+        default = 'hmac-sha2-512,hmac-sha2-256,hmac-ripemd160'
+        macs = {'default': default,
+                'weak': default + ',hmac-sha1'}
+
+        default = ('hmac-sha2-512-etm@openssh.com,'
+                   'hmac-sha2-256-etm@openssh.com,'
+                   'hmac-ripemd160-etm@openssh.com,umac-128-etm@openssh.com,'
+                   'hmac-sha2-512,hmac-sha2-256,hmac-ripemd160')
+        macs_66 = {'default': default,
+                   'weak': default + ',hmac-sha1'}
+
+        # Use newer MACs on Ubuntu Trusty and above
+        _release = lsb_release()['DISTRIB_CODENAME'].lower()
+        if CompareHostReleases(_release) >= 'trusty':
+            log("Detected Ubuntu 14.04 or newer, using new macs", level=DEBUG)
+            macs = macs_66
+
+        return macs[weak_macs]
+
+    def get_kexs(self, allow_weak_kex):
+        if allow_weak_kex:
+            weak_kex = 'weak'
+        else:
+            weak_kex = 'default'
+
+        default = 'diffie-hellman-group-exchange-sha256'
+        weak = (default + ',diffie-hellman-group14-sha1,'
+                'diffie-hellman-group-exchange-sha1,'
+                'diffie-hellman-group1-sha1')
+        kex = {'default': default,
+               'weak': weak}
+
+        default = ('curve25519-sha256@libssh.org,'
+                   'diffie-hellman-group-exchange-sha256')
+        weak = (default + ',diffie-hellman-group14-sha1,'
+                'diffie-hellman-group-exchange-sha1,'
+                'diffie-hellman-group1-sha1')
+        kex_66 = {'default': default,
+                  'weak': weak}
+
+        # Use newer kex on Ubuntu Trusty and above
+        _release = lsb_release()['DISTRIB_CODENAME'].lower()
+        if CompareHostReleases(_release) >= 'trusty':
+            log('Detected Ubuntu 14.04 or newer, using new key exchange '
+                'algorithms', level=DEBUG)
+            kex = kex_66
+
+        return kex[weak_kex]
+
+    def get_ciphers(self, cbc_required):
+        if cbc_required:
+            weak_ciphers = 'weak'
+        else:
+            weak_ciphers = 'default'
+
+        default = 'aes256-ctr,aes192-ctr,aes128-ctr'
+        cipher = {'default': default,
+                  'weak': default + ',aes256-cbc,aes192-cbc,aes128-cbc'}
+
+        default = ('chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,'
+                   'aes128-gcm@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr')
+        ciphers_66 = {'default': default,
+                      'weak': default + ',aes256-cbc,aes192-cbc,aes128-cbc'}
+
+        # Use newer ciphers on Ubuntu Trusty and above
+        _release = lsb_release()['DISTRIB_CODENAME'].lower()
+        if CompareHostReleases(_release) >= 'trusty':
+            log('Detected Ubuntu 14.04 or newer, using new ciphers',
+                level=DEBUG)
+            cipher = ciphers_66
+
+        return cipher[weak_ciphers]
+
+    def get_listening(self, listen=['0.0.0.0']):
+        """Returns a list of addresses SSH can listen on
+
+        Turns input into a sensible list of IPs SSH can listen on. Input
+        must be a python list of interface names, IPs and/or CIDRs.
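Editor's note: the three selectors above share one shape: pick a base algorithm list by release, then append (never substitute) the legacy algorithms when the weak/CBC option is set. A condensed, hypothetical restatement of get_ciphers() (the boolean parameter stands in for the CompareHostReleases test):

def pick_ciphers(newer_than_trusty, cbc_required=False):
    # Base list: newer releases get the extended AEAD ciphers prepended.
    ctr = 'aes256-ctr,aes192-ctr,aes128-ctr'
    if newer_than_trusty:
        ctr = ('chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,'
               'aes128-gcm@openssh.com,' + ctr)
    # cbc_required only ever appends the legacy CBC modes.
    if cbc_required:
        return ctr + ',aes256-cbc,aes192-cbc,aes128-cbc'
    return ctr

print(pick_ciphers(True))           # modern default list
print(pick_ciphers(False, True))    # legacy list with CBC appended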
+
+        :param listen: list of IPs, CIDRs, interface names
+
+        :returns: list of IPs available on the host
+        """
+        if listen == ['0.0.0.0']:
+            return listen
+
+        value = []
+        for network in listen:
+            try:
+                ip = get_address_in_network(network=network, fatal=True)
+            except ValueError:
+                if is_ip(network):
+                    ip = network
+                else:
+                    try:
+                        ip = get_iface_addr(iface=network, fatal=False)[0]
+                    except IndexError:
+                        continue
+            value.append(ip)
+        if value == []:
+            return ['0.0.0.0']
+        return value
+
+    def __call__(self):
+        settings = utils.get_settings('ssh')
+        if settings['common']['network_ipv6_enable']:
+            addr_family = 'any'
+        else:
+            addr_family = 'inet'
+
+        ctxt = {
+            'addr_family': addr_family,
+            'remote_hosts': settings['common']['remote_hosts'],
+            'password_auth_allowed':
+            settings['client']['password_authentication'],
+            'ports': settings['common']['ports'],
+            'ciphers': self.get_ciphers(settings['client']['cbc_required']),
+            'macs': self.get_macs(settings['client']['weak_hmac']),
+            'kexs': self.get_kexs(settings['client']['weak_kex']),
+            'roaming': settings['client']['roaming'],
+        }
+        return ctxt
+
+
+class SSHConfig(TemplatedFile):
+    def __init__(self):
+        path = '/etc/ssh/ssh_config'
+        super(SSHConfig, self).__init__(path=path,
+                                        template_dir=TEMPLATES_DIR,
+                                        context=SSHConfigContext(),
+                                        user='root',
+                                        group='root',
+                                        mode=0o0644)
+
+    def pre_write(self):
+        settings = utils.get_settings('ssh')
+        apt_update(fatal=True)
+        apt_install(settings['client']['package'])
+        if not os.path.exists('/etc/ssh'):
+            os.makedirs('/etc/ssh')
+        # NOTE: don't recurse
+        utils.ensure_permissions('/etc/ssh', 'root', 'root', 0o0755,
+                                 maxdepth=0)
+
+    def post_write(self):
+        # NOTE: don't recurse
+        utils.ensure_permissions('/etc/ssh', 'root', 'root', 0o0755,
+                                 maxdepth=0)
+
+
+class SSHDConfigContext(SSHConfigContext):
+
+    type = 'server'
+
+    def __call__(self):
+        settings = utils.get_settings('ssh')
+        if settings['common']['network_ipv6_enable']:
+            addr_family = 'any'
+        else:
+            addr_family = 'inet'
+
+        ctxt = {
+            'ssh_ip': self.get_listening(settings['server']['listen_to']),
+            'password_auth_allowed':
+            settings['server']['password_authentication'],
+            'ports': settings['common']['ports'],
+            'addr_family': addr_family,
+            'ciphers': self.get_ciphers(settings['server']['cbc_required']),
+            'macs': self.get_macs(settings['server']['weak_hmac']),
+            'kexs': self.get_kexs(settings['server']['weak_kex']),
+            'host_key_files': settings['server']['host_key_files'],
+            'allow_root_with_key': settings['server']['allow_root_with_key'],
+            'password_authentication':
+            settings['server']['password_authentication'],
+            'use_priv_sep': settings['server']['use_privilege_separation'],
+            'use_pam': settings['server']['use_pam'],
+            'allow_x11_forwarding': settings['server']['allow_x11_forwarding'],
+            'print_motd': settings['server']['print_motd'],
+            'print_last_log': settings['server']['print_last_log'],
+            'client_alive_interval':
+            settings['server']['alive_interval'],
+            'client_alive_count': settings['server']['alive_count'],
+            'allow_tcp_forwarding': settings['server']['allow_tcp_forwarding'],
+            'allow_agent_forwarding':
+            settings['server']['allow_agent_forwarding'],
+            'deny_users': settings['server']['deny_users'],
+            'allow_users': settings['server']['allow_users'],
+            'deny_groups': settings['server']['deny_groups'],
+            'allow_groups': settings['server']['allow_groups'],
+            'use_dns': settings['server']['use_dns'],
+            'sftp_enable': settings['server']['sftp_enable'],
+            'sftp_group': settings['server']['sftp_group'],
+            'sftp_chroot': settings['server']['sftp_chroot'],
+            'max_auth_tries': settings['server']['max_auth_tries'],
+            'max_sessions': settings['server']['max_sessions'],
+        }
+        return ctxt
+
+
+class SSHDConfig(TemplatedFile):
+    def __init__(self):
+        path = '/etc/ssh/sshd_config'
+        super(SSHDConfig, self).__init__(path=path,
+                                         template_dir=TEMPLATES_DIR,
+                                         context=SSHDConfigContext(),
+                                         user='root',
+                                         group='root',
+                                         mode=0o0600,
+                                         service_actions=[{'service': 'ssh',
+                                                           'actions':
+                                                           ['restart']}])
+
+    def pre_write(self):
+        settings = utils.get_settings('ssh')
+        apt_update(fatal=True)
+        apt_install(settings['server']['package'])
+        if not os.path.exists('/etc/ssh'):
+            os.makedirs('/etc/ssh')
+        # NOTE: don't recurse
+        utils.ensure_permissions('/etc/ssh', 'root', 'root', 0o0755,
+                                 maxdepth=0)
+
+    def post_write(self):
+        # NOTE: don't recurse
+        utils.ensure_permissions('/etc/ssh', 'root', 'root', 0o0755,
+                                 maxdepth=0)
+
+
+class SSHConfigFileContentAudit(FileContentAudit):
+    def __init__(self):
+        self.path = '/etc/ssh/ssh_config'
+        super(SSHConfigFileContentAudit, self).__init__(self.path, {})
+
+    def is_compliant(self, *args, **kwargs):
+        self.pass_cases = []
+        self.fail_cases = []
+        settings = utils.get_settings('ssh')
+
+        _release = lsb_release()['DISTRIB_CODENAME'].lower()
+        if CompareHostReleases(_release) >= 'trusty':
+            if not settings['server']['weak_hmac']:
+                self.pass_cases.append(r'^MACs.+,hmac-ripemd160$')
+            else:
+                self.pass_cases.append(r'^MACs.+,hmac-sha1$')
+
+            if settings['server']['weak_kex']:
+                self.fail_cases.append(r'^KexAlgorithms\sdiffie-hellman-group-exchange-sha256[,\s]?')  # noqa
+                self.pass_cases.append(r'^KexAlgorithms\sdiffie-hellman-group14-sha1[,\s]?')  # noqa
+                self.pass_cases.append(r'^KexAlgorithms\sdiffie-hellman-group-exchange-sha1[,\s]?')  # noqa
+                self.pass_cases.append(r'^KexAlgorithms\sdiffie-hellman-group1-sha1[,\s]?')  # noqa
+            else:
+                self.pass_cases.append(r'^KexAlgorithms.+,diffie-hellman-group-exchange-sha256$')  # noqa
+                self.fail_cases.append(r'^KexAlgorithms.*diffie-hellman-group14-sha1[,\s]?')  # noqa
+
+            if settings['server']['cbc_required']:
+                self.pass_cases.append(r'^Ciphers\s.*-cbc[,\s]?')
+                self.fail_cases.append(r'^Ciphers\s.*aes128-ctr[,\s]?')
+                self.fail_cases.append(r'^Ciphers\s.*aes192-ctr[,\s]?')
+                self.fail_cases.append(r'^Ciphers\s.*aes256-ctr[,\s]?')
+            else:
+                self.fail_cases.append(r'^Ciphers\s.*-cbc[,\s]?')
+                self.pass_cases.append(r'^Ciphers\schacha20-poly1305@openssh.com,.+')  # noqa
+                self.pass_cases.append(r'^Ciphers\s.*aes128-ctr$')
+                self.pass_cases.append(r'^Ciphers\s.*aes192-ctr[,\s]?')
+                self.pass_cases.append(r'^Ciphers\s.*aes256-ctr[,\s]?')
+        else:
+            if not settings['client']['weak_hmac']:
+                self.fail_cases.append(r'^MACs.+,hmac-sha1$')
+            else:
+                self.pass_cases.append(r'^MACs.+,hmac-sha1$')
+
+            if settings['client']['weak_kex']:
+                self.fail_cases.append(r'^KexAlgorithms\sdiffie-hellman-group-exchange-sha256[,\s]?')  # noqa
+                self.pass_cases.append(r'^KexAlgorithms\sdiffie-hellman-group14-sha1[,\s]?')  # noqa
+                self.pass_cases.append(r'^KexAlgorithms\sdiffie-hellman-group-exchange-sha1[,\s]?')  # noqa
+                self.pass_cases.append(r'^KexAlgorithms\sdiffie-hellman-group1-sha1[,\s]?')  # noqa
+            else:
+                self.pass_cases.append(r'^KexAlgorithms\sdiffie-hellman-group-exchange-sha256$')  # noqa
+                self.fail_cases.append(r'^KexAlgorithms\sdiffie-hellman-group14-sha1[,\s]?')  # noqa
+                self.fail_cases.append(r'^KexAlgorithms\sdiffie-hellman-group-exchange-sha1[,\s]?')  # noqa
+                self.fail_cases.append(r'^KexAlgorithms\sdiffie-hellman-group1-sha1[,\s]?')  # noqa
+
+            if
settings['client']['cbc_required']: + self.pass_cases.append(r'^Ciphers\s.*-cbc[,\s]?') + self.fail_cases.append(r'^Ciphers\s.*aes128-ctr[,\s]?') + self.fail_cases.append(r'^Ciphers\s.*aes192-ctr[,\s]?') + self.fail_cases.append(r'^Ciphers\s.*aes256-ctr[,\s]?') + else: + self.fail_cases.append(r'^Ciphers\s.*-cbc[,\s]?') + self.pass_cases.append(r'^Ciphers\s.*aes128-ctr[,\s]?') + self.pass_cases.append(r'^Ciphers\s.*aes192-ctr[,\s]?') + self.pass_cases.append(r'^Ciphers\s.*aes256-ctr[,\s]?') + + if settings['client']['roaming']: + self.pass_cases.append(r'^UseRoaming yes$') + else: + self.fail_cases.append(r'^UseRoaming yes$') + + return super(SSHConfigFileContentAudit, self).is_compliant(*args, + **kwargs) + + +class SSHDConfigFileContentAudit(FileContentAudit): + def __init__(self): + self.path = '/etc/ssh/sshd_config' + super(SSHDConfigFileContentAudit, self).__init__(self.path, {}) + + def is_compliant(self, *args, **kwargs): + self.pass_cases = [] + self.fail_cases = [] + settings = utils.get_settings('ssh') + + _release = lsb_release()['DISTRIB_CODENAME'].lower() + if CompareHostReleases(_release) >= 'trusty': + if not settings['server']['weak_hmac']: + self.pass_cases.append(r'^MACs.+,hmac-ripemd160$') + else: + self.pass_cases.append(r'^MACs.+,hmac-sha1$') + + if settings['server']['weak_kex']: + self.fail_cases.append(r'^KexAlgorithms\sdiffie-hellman-group-exchange-sha256[,\s]?') # noqa + self.pass_cases.append(r'^KexAlgorithms\sdiffie-hellman-group14-sha1[,\s]?') # noqa + self.pass_cases.append(r'^KexAlgorithms\sdiffie-hellman-group-exchange-sha1[,\s]?') # noqa + self.pass_cases.append(r'^KexAlgorithms\sdiffie-hellman-group1-sha1[,\s]?') # noqa + else: + self.pass_cases.append(r'^KexAlgorithms.+,diffie-hellman-group-exchange-sha256$') # noqa + self.fail_cases.append(r'^KexAlgorithms.*diffie-hellman-group14-sha1[,\s]?') # noqa + + if settings['server']['cbc_required']: + self.pass_cases.append(r'^Ciphers\s.*-cbc[,\s]?') + self.fail_cases.append(r'^Ciphers\s.*aes128-ctr[,\s]?') + self.fail_cases.append(r'^Ciphers\s.*aes192-ctr[,\s]?') + self.fail_cases.append(r'^Ciphers\s.*aes256-ctr[,\s]?') + else: + self.fail_cases.append(r'^Ciphers\s.*-cbc[,\s]?') + self.pass_cases.append(r'^Ciphers\schacha20-poly1305@openssh.com,.+') # noqa + self.pass_cases.append(r'^Ciphers\s.*aes128-ctr$') + self.pass_cases.append(r'^Ciphers\s.*aes192-ctr[,\s]?') + self.pass_cases.append(r'^Ciphers\s.*aes256-ctr[,\s]?') + else: + if not settings['server']['weak_hmac']: + self.pass_cases.append(r'^MACs.+,hmac-ripemd160$') + else: + self.pass_cases.append(r'^MACs.+,hmac-sha1$') + + if settings['server']['weak_kex']: + self.fail_cases.append(r'^KexAlgorithms\sdiffie-hellman-group-exchange-sha256[,\s]?') # noqa + self.pass_cases.append(r'^KexAlgorithms\sdiffie-hellman-group14-sha1[,\s]?') # noqa + self.pass_cases.append(r'^KexAlgorithms\sdiffie-hellman-group-exchange-sha1[,\s]?') # noqa + self.pass_cases.append(r'^KexAlgorithms\sdiffie-hellman-group1-sha1[,\s]?') # noqa + else: + self.pass_cases.append(r'^KexAlgorithms\sdiffie-hellman-group-exchange-sha256$') # noqa + self.fail_cases.append(r'^KexAlgorithms\sdiffie-hellman-group14-sha1[,\s]?') # noqa + self.fail_cases.append(r'^KexAlgorithms\sdiffie-hellman-group-exchange-sha1[,\s]?') # noqa + self.fail_cases.append(r'^KexAlgorithms\sdiffie-hellman-group1-sha1[,\s]?') # noqa + + if settings['server']['cbc_required']: + self.pass_cases.append(r'^Ciphers\s.*-cbc[,\s]?') + self.fail_cases.append(r'^Ciphers\s.*aes128-ctr[,\s]?') + 
self.fail_cases.append(r'^Ciphers\s.*aes192-ctr[,\s]?') + self.fail_cases.append(r'^Ciphers\s.*aes256-ctr[,\s]?') + else: + self.fail_cases.append(r'^Ciphers\s.*-cbc[,\s]?') + self.pass_cases.append(r'^Ciphers\s.*aes128-ctr[,\s]?') + self.pass_cases.append(r'^Ciphers\s.*aes192-ctr[,\s]?') + self.pass_cases.append(r'^Ciphers\s.*aes256-ctr[,\s]?') + + if settings['server']['sftp_enable']: + self.pass_cases.append(r'^Subsystem\ssftp') + else: + self.fail_cases.append(r'^Subsystem\ssftp') + + return super(SSHDConfigFileContentAudit, self).is_compliant(*args, + **kwargs) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/ssh/templates/ssh_config b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/ssh/templates/ssh_config new file mode 100644 index 0000000000000000000000000000000000000000..9742d8e2a32cd5da01a9dcb691a5a1201ed93050 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/ssh/templates/ssh_config @@ -0,0 +1,70 @@ +############################################################################### +# WARNING: This configuration file is maintained by Juju. Local changes may +# be overwritten. +############################################################################### +# This is the ssh client system-wide configuration file. See +# ssh_config(5) for more information. This file provides defaults for +# users, and the values can be changed in per-user configuration files +# or on the command line. + +# Configuration data is parsed as follows: +# 1. command line options +# 2. user-specific file +# 3. system-wide file +# Any configuration value is only changed the first time it is set. +# Thus, host-specific definitions should be at the beginning of the +# configuration file, and defaults at the end. + +# Site-wide defaults for some commonly used options. For a comprehensive +# list of available options, their meanings and defaults, please see the +# ssh_config(5) man page. + +# Restrict the following configuration to be limited to this Host. 
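Editor's note: the pass_cases/fail_cases lists built above are plain regexes evaluated against the rendered config file. A minimal sketch of the implied contract (the real FileContentAudit.is_compliant() is not shown in this hunk, so this is a guess at its semantics): every pass pattern must match and no fail pattern may match.

import re

rendered = ("Ciphers chacha20-poly1305@openssh.com,aes256-ctr\n"
            "MACs hmac-sha2-512,hmac-sha2-256\n")
pass_cases = [r'^Ciphers\s.*aes256-ctr[,\s]?']
fail_cases = [r'^Ciphers\s.*-cbc[,\s]?']

# ^-anchored patterns only make sense per line, hence re.M (MULTILINE).
compliant = (all(re.search(p, rendered, re.M) for p in pass_cases) and
             not any(re.search(f, rendered, re.M) for f in fail_cases))
print(compliant)  # True: CTR suites present, no CBC suites configured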
+{% if remote_hosts -%} +Host {{ ' '.join(remote_hosts) }} +{% endif %} +ForwardAgent no +ForwardX11 no +ForwardX11Trusted yes +RhostsRSAAuthentication no +RSAAuthentication yes +PasswordAuthentication {{ password_auth_allowed }} +HostbasedAuthentication no +GSSAPIAuthentication no +GSSAPIDelegateCredentials no +GSSAPIKeyExchange no +GSSAPITrustDNS no +BatchMode no +CheckHostIP yes +AddressFamily {{ addr_family }} +ConnectTimeout 0 +StrictHostKeyChecking ask +IdentityFile ~/.ssh/identity +IdentityFile ~/.ssh/id_rsa +IdentityFile ~/.ssh/id_dsa +# The port at the destination should be defined +{% for port in ports -%} +Port {{ port }} +{% endfor %} +Protocol 2 +Cipher 3des +{% if ciphers -%} +Ciphers {{ ciphers }} +{%- endif %} +{% if macs -%} +MACs {{ macs }} +{%- endif %} +{% if kexs -%} +KexAlgorithms {{ kexs }} +{%- endif %} +EscapeChar ~ +Tunnel no +TunnelDevice any:any +PermitLocalCommand no +VisualHostKey no +RekeyLimit 1G 1h +SendEnv LANG LC_* +HashKnownHosts yes +{% if roaming -%} +UseRoaming {{ roaming }} +{% endif %} diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/ssh/templates/sshd_config b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/ssh/templates/sshd_config new file mode 100644 index 0000000000000000000000000000000000000000..5f87298a8119bcab1d2578bcaefd068e5af167c4 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/ssh/templates/sshd_config @@ -0,0 +1,159 @@ +############################################################################### +# WARNING: This configuration file is maintained by Juju. Local changes may +# be overwritten. +############################################################################### +# Package generated configuration file +# See the sshd_config(5) manpage for details + +# What ports, IPs and protocols we listen for +{% for port in ports -%} +Port {{ port }} +{% endfor -%} +AddressFamily {{ addr_family }} +# Use these options to restrict which interfaces/protocols sshd will bind to +{% if ssh_ip -%} +{% for ip in ssh_ip -%} +ListenAddress {{ ip }} +{% endfor %} +{%- else -%} +ListenAddress :: +ListenAddress 0.0.0.0 +{% endif -%} +Protocol 2 +{% if ciphers -%} +Ciphers {{ ciphers }} +{% endif -%} +{% if macs -%} +MACs {{ macs }} +{% endif -%} +{% if kexs -%} +KexAlgorithms {{ kexs }} +{% endif -%} +# HostKeys for protocol version 2 +{% for keyfile in host_key_files -%} +HostKey {{ keyfile }} +{% endfor -%} + +# Privilege Separation is turned on for security +{% if use_priv_sep -%} +UsePrivilegeSeparation {{ use_priv_sep }} +{% endif -%} + +# Lifetime and size of ephemeral version 1 server key +KeyRegenerationInterval 3600 +ServerKeyBits 1024 + +# Logging +SyslogFacility AUTH +LogLevel VERBOSE + +# Authentication: +LoginGraceTime 30s +{% if allow_root_with_key -%} +PermitRootLogin without-password +{% else -%} +PermitRootLogin no +{% endif %} +PermitTunnel no +PermitUserEnvironment no +StrictModes yes + +RSAAuthentication yes +PubkeyAuthentication yes +AuthorizedKeysFile %h/.ssh/authorized_keys + +# Don't read the user's ~/.rhosts and ~/.shosts files +IgnoreRhosts yes +# For this to work you will also need host keys in /etc/ssh_known_hosts +RhostsRSAAuthentication no +# similar for protocol version 2 +HostbasedAuthentication no +# Uncomment if you don't trust ~/.ssh/known_hosts for RhostsRSAAuthentication +IgnoreUserKnownHosts yes + +# To enable empty passwords, change to 
yes (NOT RECOMMENDED)
+PermitEmptyPasswords no
+
+# Change to yes to enable challenge-response passwords (beware issues with
+# some PAM modules and threads)
+ChallengeResponseAuthentication no
+
+# Change to no to disable tunnelled clear text passwords
+PasswordAuthentication {{ password_authentication }}
+
+# Kerberos options
+KerberosAuthentication no
+KerberosGetAFSToken no
+KerberosOrLocalPasswd no
+KerberosTicketCleanup yes
+
+# GSSAPI options
+GSSAPIAuthentication no
+GSSAPICleanupCredentials yes
+
+X11Forwarding {{ allow_x11_forwarding }}
+X11DisplayOffset 10
+X11UseLocalhost yes
+GatewayPorts no
+PrintMotd {{ print_motd }}
+PrintLastLog {{ print_last_log }}
+TCPKeepAlive no
+UseLogin no
+
+ClientAliveInterval {{ client_alive_interval }}
+ClientAliveCountMax {{ client_alive_count }}
+AllowTcpForwarding {{ allow_tcp_forwarding }}
+AllowAgentForwarding {{ allow_agent_forwarding }}
+
+MaxStartups 10:30:100
+#Banner /etc/issue.net
+
+# Allow client to pass locale environment variables
+AcceptEnv LANG LC_*
+
+# Set this to 'yes' to enable PAM authentication, account processing,
+# and session processing. If this is enabled, PAM authentication will
+# be allowed through the ChallengeResponseAuthentication and
+# PasswordAuthentication. Depending on your PAM configuration,
+# PAM authentication via ChallengeResponseAuthentication may bypass
+# the setting of "PermitRootLogin without-password".
+# If you just want the PAM account and session checks to run without
+# PAM authentication, then enable this but set PasswordAuthentication
+# and ChallengeResponseAuthentication to 'no'.
+UsePAM {{ use_pam }}
+
+{% if deny_users -%}
+DenyUsers {{ deny_users }}
+{% endif -%}
+{% if allow_users -%}
+AllowUsers {{ allow_users }}
+{% endif -%}
+{% if deny_groups -%}
+DenyGroups {{ deny_groups }}
+{% endif -%}
+{% if allow_groups -%}
+AllowGroups {{ allow_groups }}
+{% endif -%}
+UseDNS {{ use_dns }}
+MaxAuthTries {{ max_auth_tries }}
+MaxSessions {{ max_sessions }}
+
+{% if sftp_enable -%}
+# Configuration, in case SFTP is used
+## override default of no subsystems
+## Subsystem sftp /opt/app/openssh5/libexec/sftp-server
+Subsystem sftp internal-sftp -l VERBOSE
+
+## These lines must appear at the *end* of sshd_config
+Match Group {{ sftp_group }}
+ForceCommand internal-sftp -l VERBOSE
+ChrootDirectory {{ sftp_chroot }}
+{% else -%}
+# Configuration, in case SFTP is used
+## override default of no subsystems
+## Subsystem sftp /opt/app/openssh5/libexec/sftp-server
+## These lines must appear at the *end* of sshd_config
+Match Group sftponly
+ForceCommand internal-sftp -l VERBOSE
+ChrootDirectory /sftpchroot/home/%u
+{% endif %}
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/templating.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/templating.py
new file mode 100644
index 0000000000000000000000000000000000000000..5b6765f7edeee4bed739fd354c6f7bdf0a8c952e
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/templating.py
@@ -0,0 +1,73 @@
+# Copyright 2016 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import os +import six + +from charmhelpers.core.hookenv import ( + log, + DEBUG, + WARNING, +) + +try: + from jinja2 import FileSystemLoader, Environment +except ImportError: + from charmhelpers.fetch import apt_install + from charmhelpers.fetch import apt_update + apt_update(fatal=True) + if six.PY2: + apt_install('python-jinja2', fatal=True) + else: + apt_install('python3-jinja2', fatal=True) + from jinja2 import FileSystemLoader, Environment + + +# NOTE: function separated from main rendering code to facilitate easier +# mocking in unit tests. +def write(path, data): + with open(path, 'wb') as out: + out.write(data) + + +def get_template_path(template_dir, path): + """Returns the template file which would be used to render the path. + + The path to the template file is returned. + :param template_dir: the directory the templates are located in + :param path: the file path to be written to. + :returns: path to the template file + """ + return os.path.join(template_dir, os.path.basename(path)) + + +def render_and_write(template_dir, path, context): + """Renders the specified template into the file. + + :param template_dir: the directory to load the template from + :param path: the path to write the templated contents to + :param context: the parameters to pass to the rendering engine + """ + env = Environment(loader=FileSystemLoader(template_dir)) + template_file = os.path.basename(path) + template = env.get_template(template_file) + log('Rendering from template: %s' % template.name, level=DEBUG) + rendered_content = template.render(context) + if not rendered_content: + log("Render returned None - skipping '%s'" % path, + level=WARNING) + return + + write(path, rendered_content.encode('utf-8').strip()) + log('Wrote template %s' % path, level=DEBUG) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/utils.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/utils.py new file mode 100644 index 0000000000000000000000000000000000000000..ff7485c28c8748ba366dba54f1e3b8f7e6a7c619 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/hardening/utils.py @@ -0,0 +1,155 @@ +# Copyright 2016 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import glob +import grp +import os +import pwd +import six +import yaml + +from charmhelpers.core.hookenv import ( + log, + DEBUG, + INFO, + WARNING, + ERROR, +) + + +# Global settings cache. 
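Editor's note: a usage sketch for render_and_write() above (the directory and context here are hypothetical): it resolves the template by the basename of the target path, renders it with the supplied context, and writes the stripped UTF-8 result.

from charmhelpers.contrib.hardening.templating import render_and_write

# Looks up 'hardening.cnf' inside /tmp/templates, renders it with the
# given context, and writes the result to the target path.
render_and_write('/tmp/templates', '/etc/mysql/conf.d/hardening.cnf',
                 {'mysql_settings': [('local-infile', 0)]})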
Since each hook fire entails a fresh module import it
+# is safe to hold this in memory and not risk missing config changes (since
+# they will result in a new hook fire and thus re-import).
+__SETTINGS__ = {}
+
+
+def _get_defaults(modules):
+    """Load the default config for the provided modules.
+
+    :param modules: stack modules config defaults to lookup.
+    :returns: modules default config dictionary.
+    """
+    default = os.path.join(os.path.dirname(__file__),
+                           'defaults/%s.yaml' % (modules))
+    return yaml.safe_load(open(default))
+
+
+def _get_schema(modules):
+    """Load the config schema for the provided modules.
+
+    NOTE: this schema is intended to have a 1-1 relationship with the keys
+    in the default config and is used as a means to verify valid overrides
+    provided by the user.
+
+    :param modules: stack modules config schema to lookup.
+    :returns: modules default schema dictionary.
+    """
+    schema = os.path.join(os.path.dirname(__file__),
+                          'defaults/%s.yaml.schema' % (modules))
+    return yaml.safe_load(open(schema))
+
+
+def _get_user_provided_overrides(modules):
+    """Load user-provided config overrides.
+
+    :param modules: stack modules to lookup in user overrides yaml file.
+    :returns: overrides dictionary.
+    """
+    overrides = os.path.join(os.environ['JUJU_CHARM_DIR'],
+                             'hardening.yaml')
+    if os.path.exists(overrides):
+        log("Found user-provided config overrides file '%s'" %
+            (overrides), level=DEBUG)
+        settings = yaml.safe_load(open(overrides))
+        if settings and settings.get(modules):
+            log("Applying '%s' overrides" % (modules), level=DEBUG)
+            return settings.get(modules)
+
+        log("No overrides found for '%s'" % (modules), level=DEBUG)
+    else:
+        log("No hardening config overrides file '%s' found in charm "
+            "root dir" % (overrides), level=DEBUG)
+
+    return {}
+
+
+def _apply_overrides(settings, overrides, schema):
+    """Get overrides config overlayed onto modules defaults.
+
+    :param settings: modules default config.
+    :param overrides: user-provided config overrides.
+    :param schema: modules config schema used to validate the overrides.
+    :returns: dictionary of modules config with user overrides applied.
+    """
+    if overrides:
+        for k, v in six.iteritems(overrides):
+            if k in schema:
+                if schema[k] is None:
+                    settings[k] = v
+                elif type(schema[k]) is dict:
+                    settings[k] = _apply_overrides(settings[k], overrides[k],
+                                                   schema[k])
+                else:
+                    raise Exception("Unexpected type found in schema '%s'" %
+                                    type(schema[k]))
+            else:
+                log("Unknown override key '%s' - ignoring" % (k), level=INFO)
+
+    return settings
+
+
+def get_settings(modules):
+    global __SETTINGS__
+    if modules in __SETTINGS__:
+        return __SETTINGS__[modules]
+
+    schema = _get_schema(modules)
+    settings = _get_defaults(modules)
+    overrides = _get_user_provided_overrides(modules)
+    __SETTINGS__[modules] = _apply_overrides(settings, overrides, schema)
+    return __SETTINGS__[modules]
+
+
+def ensure_permissions(path, user, group, permissions, maxdepth=-1):
+    """Ensure permissions for path.
+
+    If path is a file, apply to file and return. If path is a directory,
+    apply recursively (if required) to directory contents and return.
+
+    :param user: user name
+    :param group: group name
+    :param permissions: octal permissions
+    :param maxdepth: maximum recursion depth. A negative maxdepth allows
+                     infinite recursion and maxdepth=0 means no recursion.
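Editor's note: the overlay rules in _apply_overrides() are easiest to see in isolation. A self-contained restatement (toy schema and values, not the shipped defaults): leaf keys, marked None in the schema, are replaced; nested dicts recurse; unknown keys are ignored.

def overlay(settings, overrides, schema):
    for k, v in (overrides or {}).items():
        if k not in schema:
            continue                             # unknown key: ignored
        if schema[k] is None:
            settings[k] = v                      # leaf: user value wins
        else:
            overlay(settings[k], v, schema[k])   # nested dict: recurse
    return settings

schema = {'security': {'local-infile': None}}
defaults = {'security': {'local-infile': 1}}
print(overlay(defaults, {'security': {'local-infile': 0}}, schema))
# {'security': {'local-infile': 0}}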
+ :returns: None + """ + if not os.path.exists(path): + log("File '%s' does not exist - cannot set permissions" % (path), + level=WARNING) + return + + _user = pwd.getpwnam(user) + os.chown(path, _user.pw_uid, grp.getgrnam(group).gr_gid) + os.chmod(path, permissions) + + if maxdepth == 0: + log("Max recursion depth reached - skipping further recursion", + level=DEBUG) + return + elif maxdepth > 0: + maxdepth -= 1 + + if os.path.isdir(path): + contents = glob.glob("%s/*" % (path)) + for c in contents: + ensure_permissions(c, user=user, group=group, + permissions=permissions, maxdepth=maxdepth) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/mellanox/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/mellanox/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..9b088de84e4b288b551603816fc10eebfa7b1503 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/mellanox/__init__.py @@ -0,0 +1,13 @@ +# Copyright 2016 Canonical Ltd +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/mellanox/infiniband.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/mellanox/infiniband.py new file mode 100644 index 0000000000000000000000000000000000000000..0edb2314114738c79859f51e505de05e5c7c8fcc --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/mellanox/infiniband.py @@ -0,0 +1,153 @@ +#!/usr/bin/env python +# -*- coding: utf-8 -*- + +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+
+__author__ = "Jorge Niedbalski "
+
+import six
+
+from charmhelpers.fetch import (
+    apt_install,
+    apt_update,
+)
+
+from charmhelpers.core.hookenv import (
+    log,
+    INFO,
+)
+
+try:
+    from netifaces import interfaces as network_interfaces
+except ImportError:
+    if six.PY2:
+        apt_install('python-netifaces')
+    else:
+        apt_install('python3-netifaces')
+    from netifaces import interfaces as network_interfaces
+
+import os
+import re
+import subprocess
+
+from charmhelpers.core.kernel import modprobe
+
+REQUIRED_MODULES = (
+    "mlx4_ib",
+    "mlx4_en",
+    "mlx4_core",
+    "ib_ipath",
+    "ib_mthca",
+    "ib_srpt",
+    "ib_srp",
+    "ib_ucm",
+    "ib_isert",
+    "ib_iser",
+    "ib_ipoib",
+    "ib_cm",
+    "ib_uverbs",
+    "ib_umad",
+    "ib_sa",
+    "ib_mad",
+    "ib_core",
+    "ib_addr",
+    "rdma_ucm",
+)
+
+REQUIRED_PACKAGES = (
+    "ibutils",
+    "infiniband-diags",
+    "ibverbs-utils",
+)
+
+IPOIB_DRIVERS = (
+    "ib_ipoib",
+)
+
+ABI_VERSION_FILE = "/sys/class/infiniband_mad/abi_version"
+
+
+class DeviceInfo(object):
+    pass
+
+
+def install_packages():
+    apt_update()
+    apt_install(REQUIRED_PACKAGES, fatal=True)
+
+
+def load_modules():
+    for module in REQUIRED_MODULES:
+        modprobe(module, persist=True)
+
+
+def is_enabled():
+    """Check if infiniband is loaded on the system"""
+    return os.path.exists(ABI_VERSION_FILE)
+
+
+def stat():
+    """Return full output of ibstat"""
+    return subprocess.check_output(["ibstat"])
+
+
+def devices():
+    """Returns a list of IB enabled devices"""
+    return subprocess.check_output(['ibstat', '-l']).splitlines()
+
+
+def device_info(device):
+    """Returns a DeviceInfo object with the current device settings"""
+
+    status = subprocess.check_output([
+        'ibstat', device, '-s']).splitlines()
+
+    regexes = {
+        "CA type: (.*)": "device_type",
+        "Number of ports: (.*)": "num_ports",
+        "Firmware version: (.*)": "fw_ver",
+        "Hardware version: (.*)": "hw_ver",
+        "Node GUID: (.*)": "node_guid",
+        "System image GUID: (.*)": "sys_guid",
+    }
+
+    device = DeviceInfo()
+
+    for line in status:
+        for expression, key in regexes.items():
+            matches = re.search(expression, line)
+            if matches:
+                setattr(device, key, matches.group(1))
+
+    return device
+
+
+def ipoib_interfaces():
+    """Return a list of IPOIB capable ethernet interfaces"""
+    interfaces = []
+
+    for interface in network_interfaces():
+        try:
+            driver = re.search('^driver: (.+)$', subprocess.check_output([
+                'ethtool', '-i',
+                interface]), re.M).group(1)
+
+            if driver in IPOIB_DRIVERS:
+                interfaces.append(interface)
+        except Exception:
+            log("Skipping interface %s" % interface, level=INFO)
+            continue
+
+    return interfaces
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/network/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/network/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..d7567b863e3a5ad2b7a7f44958b4166e0c3d346b
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/network/__init__.py
@@ -0,0 +1,13 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
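Editor's note: device_info() above is driven entirely by the regex-to-attribute table. The same idea in miniature, with illustrative ibstat-style output (the real field set varies by driver and firmware; note also that on Python 3, subprocess.check_output() returns bytes, so the lines would need decoding before matching):

import re

sample = ("CA type: MT4099\n"
          "Number of ports: 2\n"
          "Firmware version: 2.42.5000\n")

regexes = {"CA type: (.*)": "device_type",
           "Number of ports: (.*)": "num_ports",
           "Firmware version: (.*)": "fw_ver"}

info = {}
for line in sample.splitlines():
    for expression, key in regexes.items():
        m = re.search(expression, line)
        if m:
            info[key] = m.group(1)
print(info)  # {'device_type': 'MT4099', 'num_ports': '2', 'fw_ver': '2.42.5000'}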
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/network/ip.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/network/ip.py new file mode 100644 index 0000000000000000000000000000000000000000..b13277bb57c9227b1d9dfecf4f6750740e5a262a --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/network/ip.py @@ -0,0 +1,602 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import glob +import re +import subprocess +import six +import socket + +from functools import partial + +from charmhelpers.fetch import apt_install, apt_update +from charmhelpers.core.hookenv import ( + config, + log, + network_get_primary_address, + unit_get, + WARNING, + NoNetworkBinding, +) + +from charmhelpers.core.host import ( + lsb_release, + CompareHostReleases, +) + +try: + import netifaces +except ImportError: + apt_update(fatal=True) + if six.PY2: + apt_install('python-netifaces', fatal=True) + else: + apt_install('python3-netifaces', fatal=True) + import netifaces + +try: + import netaddr +except ImportError: + apt_update(fatal=True) + if six.PY2: + apt_install('python-netaddr', fatal=True) + else: + apt_install('python3-netaddr', fatal=True) + import netaddr + + +def _validate_cidr(network): + try: + netaddr.IPNetwork(network) + except (netaddr.core.AddrFormatError, ValueError): + raise ValueError("Network (%s) is not in CIDR presentation format" % + network) + + +def no_ip_found_error_out(network): + errmsg = ("No IP address found in network(s): %s" % network) + raise ValueError(errmsg) + + +def _get_ipv6_network_from_address(address): + """Get an netaddr.IPNetwork for the given IPv6 address + :param address: a dict as returned by netifaces.ifaddresses + :returns netaddr.IPNetwork: None if the address is a link local or loopback + address + """ + if address['addr'].startswith('fe80') or address['addr'] == "::1": + return None + + prefix = address['netmask'].split("/") + if len(prefix) > 1: + netmask = prefix[1] + else: + netmask = address['netmask'] + return netaddr.IPNetwork("%s/%s" % (address['addr'], + netmask)) + + +def get_address_in_network(network, fallback=None, fatal=False): + """Get an IPv4 or IPv6 address within the network from the host. + + :param network (str): CIDR presentation format. For example, + '192.168.1.0/24'. Supports multiple networks as a space-delimited list. + :param fallback (str): If no address is found, return fallback. 
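Editor's note: get_address_in_network() leans on netaddr containment: each interface address is paired with its netmask and tested against the target network. The core test in isolation (addresses invented for illustration):

import netaddr

network = netaddr.IPNetwork('192.168.1.0/24')
# An interface address plus its netmask, as netifaces reports them:
cidr = netaddr.IPNetwork('%s/%s' % ('192.168.1.10', '255.255.255.0'))
print(cidr in network)  # True - this interface sits inside the network
print(str(cidr.ip))     # '192.168.1.10' - the value the helper returns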
+ :param fatal (boolean): If no address is found, fallback is not + set and fatal is True then exit(1). + """ + if network is None: + if fallback is not None: + return fallback + + if fatal: + no_ip_found_error_out(network) + else: + return None + + networks = network.split() or [network] + for network in networks: + _validate_cidr(network) + network = netaddr.IPNetwork(network) + for iface in netifaces.interfaces(): + try: + addresses = netifaces.ifaddresses(iface) + except ValueError: + # If an instance was deleted between + # netifaces.interfaces() run and now, its interfaces are gone + continue + if network.version == 4 and netifaces.AF_INET in addresses: + for addr in addresses[netifaces.AF_INET]: + cidr = netaddr.IPNetwork("%s/%s" % (addr['addr'], + addr['netmask'])) + if cidr in network: + return str(cidr.ip) + + if network.version == 6 and netifaces.AF_INET6 in addresses: + for addr in addresses[netifaces.AF_INET6]: + cidr = _get_ipv6_network_from_address(addr) + if cidr and cidr in network: + return str(cidr.ip) + + if fallback is not None: + return fallback + + if fatal: + no_ip_found_error_out(network) + + return None + + +def is_ipv6(address): + """Determine whether provided address is IPv6 or not.""" + try: + address = netaddr.IPAddress(address) + except netaddr.AddrFormatError: + # probably a hostname - so not an address at all! + return False + + return address.version == 6 + + +def is_address_in_network(network, address): + """ + Determine whether the provided address is within a network range. + + :param network (str): CIDR presentation format. For example, + '192.168.1.0/24'. + :param address: An individual IPv4 or IPv6 address without a net + mask or subnet prefix. For example, '192.168.1.1'. + :returns boolean: Flag indicating whether address is in network. + """ + try: + network = netaddr.IPNetwork(network) + except (netaddr.core.AddrFormatError, ValueError): + raise ValueError("Network (%s) is not in CIDR presentation format" % + network) + + try: + address = netaddr.IPAddress(address) + except (netaddr.core.AddrFormatError, ValueError): + raise ValueError("Address (%s) is not in correct presentation format" % + address) + + if address in network: + return True + else: + return False + + +def _get_for_address(address, key): + """Retrieve an attribute of or the physical interface that + the IP address provided could be bound to. + + :param address (str): An individual IPv4 or IPv6 address without a net + mask or subnet prefix. For example, '192.168.1.1'. + :param key: 'iface' for the physical interface name or an attribute + of the configured interface, for example 'netmask'. + :returns str: Requested attribute or None if address is not bindable. 
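Editor's note: _get_for_address() is exposed through functools.partial specialisations (get_iface_for_address, get_netmask_for_address below) rather than thin wrapper functions; the pattern in miniature (the lookup table here is a hypothetical stand-in for the real interface walk):

from functools import partial

def lookup(address, key):
    # Stand-in for the interface scan performed by _get_for_address().
    return {'iface': 'eth0', 'netmask': '255.255.255.0'}[key]

get_iface_sketch = partial(lookup, key='iface')
get_netmask_sketch = partial(lookup, key='netmask')

print(get_iface_sketch('192.168.1.10'))    # 'eth0'
print(get_netmask_sketch('192.168.1.10'))  # '255.255.255.0'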
+ """ + address = netaddr.IPAddress(address) + for iface in netifaces.interfaces(): + addresses = netifaces.ifaddresses(iface) + if address.version == 4 and netifaces.AF_INET in addresses: + addr = addresses[netifaces.AF_INET][0]['addr'] + netmask = addresses[netifaces.AF_INET][0]['netmask'] + network = netaddr.IPNetwork("%s/%s" % (addr, netmask)) + cidr = network.cidr + if address in cidr: + if key == 'iface': + return iface + else: + return addresses[netifaces.AF_INET][0][key] + + if address.version == 6 and netifaces.AF_INET6 in addresses: + for addr in addresses[netifaces.AF_INET6]: + network = _get_ipv6_network_from_address(addr) + if not network: + continue + + cidr = network.cidr + if address in cidr: + if key == 'iface': + return iface + elif key == 'netmask' and cidr: + return str(cidr).split('/')[1] + else: + return addr[key] + return None + + +get_iface_for_address = partial(_get_for_address, key='iface') + + +get_netmask_for_address = partial(_get_for_address, key='netmask') + + +def resolve_network_cidr(ip_address): + ''' + Resolves the full address cidr of an ip_address based on + configured network interfaces + ''' + netmask = get_netmask_for_address(ip_address) + return str(netaddr.IPNetwork("%s/%s" % (ip_address, netmask)).cidr) + + +def format_ipv6_addr(address): + """If address is IPv6, wrap it in '[]' otherwise return None. + + This is required by most configuration files when specifying IPv6 + addresses. + """ + if is_ipv6(address): + return "[%s]" % address + + return None + + +def is_ipv6_disabled(): + try: + result = subprocess.check_output( + ['sysctl', 'net.ipv6.conf.all.disable_ipv6'], + stderr=subprocess.STDOUT, + universal_newlines=True) + except subprocess.CalledProcessError: + return True + + return "net.ipv6.conf.all.disable_ipv6 = 1" in result + + +def get_iface_addr(iface='eth0', inet_type='AF_INET', inc_aliases=False, + fatal=True, exc_list=None): + """Return the assigned IP address for a given interface, if any. + + :param iface: network interface on which address(es) are expected to + be found. + :param inet_type: inet address family + :param inc_aliases: include alias interfaces in search + :param fatal: if True, raise exception if address not found + :param exc_list: list of addresses to ignore + :return: list of ip addresses + """ + # Extract nic if passed /dev/ethX + if '/' in iface: + iface = iface.split('/')[-1] + + if not exc_list: + exc_list = [] + + try: + inet_num = getattr(netifaces, inet_type) + except AttributeError: + raise Exception("Unknown inet type '%s'" % str(inet_type)) + + interfaces = netifaces.interfaces() + if inc_aliases: + ifaces = [] + for _iface in interfaces: + if iface == _iface or _iface.split(':')[0] == iface: + ifaces.append(_iface) + + if fatal and not ifaces: + raise Exception("Invalid interface '%s'" % iface) + + ifaces.sort() + else: + if iface not in interfaces: + if fatal: + raise Exception("Interface '%s' not found " % (iface)) + else: + return [] + + else: + ifaces = [iface] + + addresses = [] + for netiface in ifaces: + net_info = netifaces.ifaddresses(netiface) + if inet_num in net_info: + for entry in net_info[inet_num]: + if 'addr' in entry and entry['addr'] not in exc_list: + addresses.append(entry['addr']) + + if fatal and not addresses: + raise Exception("Interface '%s' doesn't have any %s addresses." 
% + (iface, inet_type)) + + return sorted(addresses) + + +get_ipv4_addr = partial(get_iface_addr, inet_type='AF_INET') + + +def get_iface_from_addr(addr): + """Work out on which interface the provided address is configured.""" + for iface in netifaces.interfaces(): + addresses = netifaces.ifaddresses(iface) + for inet_type in addresses: + for _addr in addresses[inet_type]: + _addr = _addr['addr'] + # link local + ll_key = re.compile("(.+)%.*") + raw = re.match(ll_key, _addr) + if raw: + _addr = raw.group(1) + + if _addr == addr: + log("Address '%s' is configured on iface '%s'" % + (addr, iface)) + return iface + + msg = "Unable to infer net iface on which '%s' is configured" % (addr) + raise Exception(msg) + + +def sniff_iface(f): + """Ensure decorated function is called with a value for iface. + + If no iface provided, inject net iface inferred from unit private address. + """ + def iface_sniffer(*args, **kwargs): + if not kwargs.get('iface', None): + kwargs['iface'] = get_iface_from_addr(unit_get('private-address')) + + return f(*args, **kwargs) + + return iface_sniffer + + +@sniff_iface +def get_ipv6_addr(iface=None, inc_aliases=False, fatal=True, exc_list=None, + dynamic_only=True): + """Get assigned IPv6 address for a given interface. + + Returns list of addresses found. If no address found, returns empty list. + + If iface is None, we infer the current primary interface by doing a reverse + lookup on the unit private-address. + + We currently only support scope global IPv6 addresses i.e. non-temporary + addresses. If no global IPv6 address is found, return the first one found + in the ipv6 address list. + + :param iface: network interface on which ipv6 address(es) are expected to + be found. + :param inc_aliases: include alias interfaces in search + :param fatal: if True, raise exception if address not found + :param exc_list: list of addresses to ignore + :param dynamic_only: only recognise dynamic addresses + :return: list of ipv6 addresses + """ + addresses = get_iface_addr(iface=iface, inet_type='AF_INET6', + inc_aliases=inc_aliases, fatal=fatal, + exc_list=exc_list) + + if addresses: + global_addrs = [] + for addr in addresses: + key_scope_link_local = re.compile("^fe80::..(.+)%(.+)") + m = re.match(key_scope_link_local, addr) + if m: + eui_64_mac = m.group(1) + iface = m.group(2) + else: + global_addrs.append(addr) + + if global_addrs: + # Make sure any found global addresses are not temporary + cmd = ['ip', 'addr', 'show', iface] + out = subprocess.check_output(cmd).decode('UTF-8') + if dynamic_only: + key = re.compile("inet6 (.+)/[0-9]+ scope global.* dynamic.*") + else: + key = re.compile("inet6 (.+)/[0-9]+ scope global.*") + + addrs = [] + for line in out.split('\n'): + line = line.strip() + m = re.match(key, line) + if m and 'temporary' not in line: + # Return the first valid address we find + for addr in global_addrs: + if m.group(1) == addr: + if not dynamic_only or \ + m.group(1).endswith(eui_64_mac): + addrs.append(addr) + + if addrs: + return addrs + + if fatal: + raise Exception("Interface '%s' does not have a scope global " + "non-temporary ipv6 address." 
% iface)
+
+    return []
+
+
+def get_bridges(vnic_dir='/sys/devices/virtual/net'):
+    """Return a list of bridges on the system."""
+    b_regex = "%s/*/bridge" % vnic_dir
+    return [x.replace(vnic_dir, '').split('/')[1] for x in glob.glob(b_regex)]
+
+
+def get_bridge_nics(bridge, vnic_dir='/sys/devices/virtual/net'):
+    """Return a list of nics comprising a given bridge on the system."""
+    brif_regex = "%s/%s/brif/*" % (vnic_dir, bridge)
+    return [x.split('/')[-1] for x in glob.glob(brif_regex)]
+
+
+def is_bridge_member(nic):
+    """Check if a given nic is a member of a bridge."""
+    for bridge in get_bridges():
+        if nic in get_bridge_nics(bridge):
+            return True
+
+    return False
+
+
+def is_ip(address):
+    """
+    Returns True if address is a valid IP address.
+    """
+    try:
+        # Test to see if already an IPv4/IPv6 address
+        address = netaddr.IPAddress(address)
+        return True
+    except (netaddr.AddrFormatError, ValueError):
+        return False
+
+
+def ns_query(address):
+    try:
+        import dns.resolver
+    except ImportError:
+        if six.PY2:
+            apt_install('python-dnspython', fatal=True)
+        else:
+            apt_install('python3-dnspython', fatal=True)
+        import dns.resolver
+
+    if isinstance(address, dns.name.Name):
+        rtype = 'PTR'
+    elif isinstance(address, six.string_types):
+        rtype = 'A'
+    else:
+        return None
+
+    try:
+        answers = dns.resolver.query(address, rtype)
+    except dns.resolver.NXDOMAIN:
+        return None
+
+    if answers:
+        return str(answers[0])
+    return None
+
+
+def get_host_ip(hostname, fallback=None):
+    """
+    Resolves the IP for a given hostname, or returns
+    the input if it is already an IP.
+    """
+    if is_ip(hostname):
+        return hostname
+
+    ip_addr = ns_query(hostname)
+    if not ip_addr:
+        try:
+            ip_addr = socket.gethostbyname(hostname)
+        except Exception:
+            log("Failed to resolve hostname '%s'" % (hostname),
+                level=WARNING)
+            return fallback
+    return ip_addr
+
+
+def get_hostname(address, fqdn=True):
+    """
+    Resolves hostname for given IP, or returns the input
+    if it is already a hostname.
+    """
+    if is_ip(address):
+        try:
+            import dns.reversename
+        except ImportError:
+            if six.PY2:
+                apt_install("python-dnspython", fatal=True)
+            else:
+                apt_install("python3-dnspython", fatal=True)
+            import dns.reversename
+
+        rev = dns.reversename.from_address(address)
+        result = ns_query(rev)
+
+        if not result:
+            try:
+                result = socket.gethostbyaddr(address)[0]
+            except Exception:
+                return None
+    else:
+        result = address
+
+    if fqdn:
+        # strip trailing .
+        if result.endswith('.'):
+            return result[:-1]
+        else:
+            return result
+    else:
+        return result.split('.')[0]
+
+
+def port_has_listener(address, port):
+    """
+    Returns True if the address:port is open and being listened to,
+    else False.
+
+    @param address: an IP address or hostname
+    @param port: integer port
+
+    Note: calls 'nc' via a subprocess
+    """
+    cmd = ['nc', '-z', address, str(port)]
+    result = subprocess.call(cmd)
+    return not(bool(result))
+
+
+def assert_charm_supports_ipv6():
+    """Check whether we are able to support charms ipv6."""
+    release = lsb_release()['DISTRIB_CODENAME'].lower()
+    if CompareHostReleases(release) < "trusty":
+        raise Exception("IPv6 is not supported in the charms for Ubuntu "
+                        "versions less than Trusty 14.04")
+
+
+def get_relation_ip(interface, cidr_network=None):
+    """Return this unit's IP for the given interface.
+
+    Allow for an arbitrary interface to use with network-get to select an IP.
+    Handle all address selection options including passed cidr network and
+    IPv6.
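Editor's note: a usage sketch for the resolution helpers above (results depend on the host's resolver and on the charm environment providing the vendored import path):

from charmhelpers.contrib.network.ip import (
    get_host_ip,
    get_hostname,
    is_ip,
    port_has_listener,
)

print(is_ip('10.0.0.1'))                      # True
print(get_host_ip('localhost'))               # '127.0.0.1' via gethostbyname
print(get_hostname('127.0.0.1', fqdn=False))  # short reverse-DNS name, or None
print(port_has_listener('127.0.0.1', 22))     # True if sshd listens; uses 'nc -z'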
+
+    Usage: get_relation_ip('amqp', cidr_network='10.0.0.0/8')
+
+    @param interface: string name of the relation.
+    @param cidr_network: string CIDR Network to select an address from.
+    @raises Exception if prefer-ipv6 is configured but IPv6 unsupported.
+    @returns IPv6 or IPv4 address
+    """
+    # Select the interface address first
+    # For possible use as a fallback below with get_address_in_network
+    try:
+        # Get the interface specific IP
+        address = network_get_primary_address(interface)
+    except NotImplementedError:
+        # If network-get is not available
+        address = get_host_ip(unit_get('private-address'))
+    except NoNetworkBinding:
+        log("No network binding for {}".format(interface), WARNING)
+        address = get_host_ip(unit_get('private-address'))
+
+    if config('prefer-ipv6'):
+        # Currently IPv6 has priority, eventually we want IPv6 to just be
+        # another network space.
+        assert_charm_supports_ipv6()
+        return get_ipv6_addr()[0]
+    elif cidr_network:
+        # If a specific CIDR network is passed get the address from that
+        # network.
+        return get_address_in_network(cidr_network, address)
+
+    # Return the interface address
+    return address
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/network/ovs/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/network/ovs/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..fd001bc2eaa9b1b511bbd1816e1089521935a50a
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/network/ovs/__init__.py
@@ -0,0 +1,541 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+''' Helpers for interacting with OpenvSwitch '''
+import hashlib
+import subprocess
+import os
+import six
+
+from charmhelpers.fetch import apt_install
+
+
+from charmhelpers.core.hookenv import (
+    log, WARNING, INFO, DEBUG
+)
+from charmhelpers.core.host import (
+    service
+)
+
+
+BRIDGE_TEMPLATE = """\
+# This veth pair is required when neutron data-port is mapped to an existing linux bridge. lp:1635067
+
+auto {linuxbridge_port}
+iface {linuxbridge_port} inet manual
+    pre-up ip link add name {linuxbridge_port} type veth peer name {ovsbridge_port}
+    pre-up ip link set {ovsbridge_port} master {bridge}
+    pre-up ip link set {ovsbridge_port} up
+    up ip link set {linuxbridge_port} up
+    down ip link del {linuxbridge_port}
+"""
+
+MAX_KERNEL_INTERFACE_NAME_LEN = 15
+
+
+def get_bridges():
+    """Return list of the bridges on the default openvswitch
+
+    :returns: List of bridge names
+    :rtype: List[str]
+    :raises: subprocess.CalledProcessError if ovs-vsctl fails
+    """
+    cmd = ["ovs-vsctl", "list-br"]
+    lines = subprocess.check_output(cmd).decode('utf-8').split("\n")
+    maybe_bridges = [l.strip() for l in lines]
+    return [b for b in maybe_bridges if b]
+
+
+def get_bridge_ports(name):
+    """Return a list of the ports on a named bridge
+
+    :param name: the name of the bridge to list
+    :type name: str
+    :returns: List of ports on the named bridge
+    :rtype: List[str]
+    :raises: subprocess.CalledProcessError if the ovs-vsctl command fails; the
+             exception is also raised if the named bridge doesn't exist.
+    """
+    cmd = ["ovs-vsctl", "--", "list-ports", name]
+    lines = subprocess.check_output(cmd).decode('utf-8').split("\n")
+    maybe_ports = [l.strip() for l in lines]
+    return [p for p in maybe_ports if p]
+
+
+def get_bridges_and_ports_map():
+    """Return dictionary of bridge to ports for the default openvswitch
+
+    :returns: a mapping of bridge name to a list of ports.
+    :rtype: Dict[str, List[str]]
+    :raises: subprocess.CalledProcessError if any of the underlying ovs-vsctl
+             commands fail.
+    """
+    return {b: get_bridge_ports(b) for b in get_bridges()}
+
+
+def _dict_to_vsctl_set(data, table, entity):
+    """Helper that takes a dictionary and provides ``ovs-vsctl set`` commands
+
+    :param data: Additional data to attach to interface
+        The keys in the data dictionary map directly to column names in the
+        OpenvSwitch table specified as defined in DB-SCHEMA [0] referenced in
+        RFC 7047 [1]
+
+        There are some established conventions for keys in the external-ids
+        column of various tables, consult the OVS Integration Guide [2] for
+        more details.
+
+        NOTE(fnordahl): Technically the ``external-ids`` column is called
+        ``external_ids`` (with an underscore) and we rely on ``ovs-vsctl``'s
+        behaviour of transforming dashes to underscores for us [3] so we can
+        have a more pleasant data structure.
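+
+        Example (illustrative): with ``table='Interface'`` and
+        ``entity='eth0'``, ``data={'external-ids': {'charm': 'managed'}}``
+        yields the single command tuple
+        ``('--', 'set', 'Interface', 'eth0', 'external-ids:charm=managed')``.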
+ + 0: http://www.openvswitch.org/ovs-vswitchd.conf.db.5.pdf + 1: https://tools.ietf.org/html/rfc7047 + 2: http://docs.openvswitch.org/en/latest/topics/integration/ + 3: https://github.com/openvswitch/ovs/blob/ + 20dac08fdcce4b7fda1d07add3b346aa9751cfbc/ + lib/db-ctl-base.c#L189-L215 + :type data: Optional[Dict[str,Union[str,Dict[str,str]]]] + :param table: Name of table to operate on + :type table: str + :param entity: Name of entity to operate on + :type entity: str + :returns: '--' separated ``ovs-vsctl set`` commands + :rtype: Iterator[Tuple[str, str, str, str, str]] + """ + for (k, v) in data.items(): + if isinstance(v, dict): + entries = { + '{}:{}'.format(k, dk): dv for (dk, dv) in v.items()} + else: + entries = {k: v} + for (colk, colv) in entries.items(): + yield ('--', 'set', table, entity, '{}={}'.format(colk, colv)) + + +def add_bridge(name, datapath_type=None, brdata=None, exclusive=False): + """Add the named bridge to openvswitch and set/update bridge data for it + + :param name: Name of bridge to create + :type name: str + :param datapath_type: Add datapath_type to bridge (DEPRECATED, use brdata) + :type datapath_type: Optional[str] + :param brdata: Additional data to attach to bridge + The keys in the brdata dictionary map directly to column names in the + OpenvSwitch bridge table as defined in DB-SCHEMA [0] referenced in + RFC 7047 [1] + + There are some established conventions for keys in the external-ids + column of various tables, consult the OVS Integration Guide [2] for + more details. + + NOTE(fnordahl): Technically the ``external-ids`` column is called + ``external_ids`` (with an underscore) and we rely on ``ovs-vsctl``'s + behaviour of transforming dashes to underscores for us [3] so we can + have a more pleasant data structure. 
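+
+        Example (illustrative): ``brdata={'external-ids': {'charm': 'managed'}}``
+        expands to ``-- set bridge <name> external-ids:charm=managed``.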
+ + 0: http://www.openvswitch.org/ovs-vswitchd.conf.db.5.pdf + 1: https://tools.ietf.org/html/rfc7047 + 2: http://docs.openvswitch.org/en/latest/topics/integration/ + 3: https://github.com/openvswitch/ovs/blob/ + 20dac08fdcce4b7fda1d07add3b346aa9751cfbc/ + lib/db-ctl-base.c#L189-L215 + :type brdata: Optional[Dict[str,Union[str,Dict[str,str]]]] + :param exclusive: If True, raise exception if bridge exists + :type exclusive: bool + :raises: subprocess.CalledProcessError + """ + log('Creating bridge {}'.format(name)) + cmd = ['ovs-vsctl', '--'] + if not exclusive: + cmd.append('--may-exist') + cmd.extend(('add-br', name)) + if brdata: + for setcmd in _dict_to_vsctl_set(brdata, 'bridge', name): + cmd.extend(setcmd) + if datapath_type is not None: + log('DEPRECATION WARNING: add_bridge called with datapath_type, ' + 'please use the brdata keyword argument instead.') + cmd += ['--', 'set', 'bridge', name, + 'datapath_type={}'.format(datapath_type)] + subprocess.check_call(cmd) + + +def del_bridge(name): + """Delete the named bridge from openvswitch + + :param name: Name of bridge to remove + :type name: str + :raises: subprocess.CalledProcessError + """ + log('Deleting bridge {}'.format(name)) + subprocess.check_call(["ovs-vsctl", "--", "--if-exists", "del-br", name]) + + +def add_bridge_port(name, port, promisc=False, ifdata=None, exclusive=False, + linkup=True, portdata=None): + """Add port to bridge and optionally set/update interface data for it + + :param name: Name of bridge to attach port to + :type name: str + :param port: Name of port as represented in netdev + :type port: str + :param promisc: Whether to set promiscuous mode on interface + True=on, False=off, None leave untouched + :type promisc: Optional[bool] + :param ifdata: Additional data to attach to interface + The keys in the ifdata dictionary map directly to column names in the + OpenvSwitch Interface table as defined in DB-SCHEMA [0] referenced in + RFC 7047 [1] + + There are some established conventions for keys in the external-ids + column of various tables, consult the OVS Integration Guide [2] for + more details. + + NOTE(fnordahl): Technically the ``external-ids`` column is called + ``external_ids`` (with an underscore) and we rely on ``ovs-vsctl``'s + behaviour of transforming dashes to underscores for us [3] so we can + have a more pleasant data structure. + + 0: http://www.openvswitch.org/ovs-vswitchd.conf.db.5.pdf + 1: https://tools.ietf.org/html/rfc7047 + 2: http://docs.openvswitch.org/en/latest/topics/integration/ + 3: https://github.com/openvswitch/ovs/blob/ + 20dac08fdcce4b7fda1d07add3b346aa9751cfbc/ + lib/db-ctl-base.c#L189-L215 + :type ifdata: Optional[Dict[str,Union[str,Dict[str,str]]]] + :param exclusive: If True, raise exception if port exists + :type exclusive: bool + :param linkup: Bring link up + :type linkup: bool + :param portdata: Additional data to attach to port. Similar to ifdata. 
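+        For example (illustrative), ``portdata={'tag': '100'}`` would set a
+        VLAN tag on the created port.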
+ :type portdata: Optional[Dict[str,Union[str,Dict[str,str]]]] + :raises: subprocess.CalledProcessError + """ + cmd = ['ovs-vsctl', '--'] + if not exclusive: + cmd.append('--may-exist') + cmd.extend(('add-port', name, port)) + for ovs_table, data in (('Interface', ifdata), ('Port', portdata)): + if data: + for setcmd in _dict_to_vsctl_set(data, ovs_table, port): + cmd.extend(setcmd) + + log('Adding port {} to bridge {}'.format(port, name)) + subprocess.check_call(cmd) + if linkup: + # This is mostly a workaround for CI environments, in the real world + # the bare metal provider would most likely have configured and brought + # up the link for us. + subprocess.check_call(["ip", "link", "set", port, "up"]) + if promisc: + subprocess.check_call(["ip", "link", "set", port, "promisc", "on"]) + elif promisc is False: + subprocess.check_call(["ip", "link", "set", port, "promisc", "off"]) + + +def del_bridge_port(name, port): + """Delete a port from the named openvswitch bridge + + :param name: Name of bridge to remove port from + :type name: str + :param port: Name of port to remove + :type port: str + :raises: subprocess.CalledProcessError + """ + log('Deleting port {} from bridge {}'.format(port, name)) + subprocess.check_call(["ovs-vsctl", "--", "--if-exists", "del-port", + name, port]) + subprocess.check_call(["ip", "link", "set", port, "down"]) + subprocess.check_call(["ip", "link", "set", port, "promisc", "off"]) + + +def add_bridge_bond(bridge, port, interfaces, portdata=None, ifdatamap=None, + exclusive=False): + """Add bonded port in bridge from interfaces. + + :param bridge: Name of bridge to add bonded port to + :type bridge: str + :param port: Name of created port + :type port: str + :param interfaces: Underlying interfaces that make up the bonded port + :type interfaces: Iterator[str] + :param portdata: Additional data to attach to the created bond port + See _dict_to_vsctl_set() for detailed description. + Example: + { + 'bond-mode': 'balance-tcp', + 'lacp': 'active', + 'other-config': { + 'lacp-time': 'fast', + }, + } + :type portdata: Optional[Dict[str,Union[str,Dict[str,str]]]] + :param ifdatamap: Map of data to attach to created bond interfaces + See _dict_to_vsctl_set() for detailed description. 
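+        Keys are interface names and values map to columns of the OpenvSwitch
+        ``Interface`` table, as in the example below.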
+ Example: + { + 'eth0': { + 'type': 'dpdk', + 'mtu-request': '9000', + 'options': { + 'dpdk-devargs': '0000:01:00.0', + }, + }, + } + :type ifdatamap: Optional[Dict[str,Dict[str,Union[str,Dict[str,str]]]]] + :param exclusive: If True, raise exception if port exists + :type exclusive: bool + :raises: subprocess.CalledProcessError + """ + cmd = ['ovs-vsctl', '--'] + if not exclusive: + cmd.append('--may-exist') + cmd.extend(('add-bond', bridge, port)) + cmd.extend(interfaces) + if portdata: + for setcmd in _dict_to_vsctl_set(portdata, 'port', port): + cmd.extend(setcmd) + if ifdatamap: + for ifname, ifdata in ifdatamap.items(): + for setcmd in _dict_to_vsctl_set(ifdata, 'Interface', ifname): + cmd.extend(setcmd) + subprocess.check_call(cmd) + + +def add_ovsbridge_linuxbridge(name, bridge, ifdata=None): + """Add linux bridge to the named openvswitch bridge + + :param name: Name of ovs bridge to be added to Linux bridge + :type name: str + :param bridge: Name of Linux bridge to be added to ovs bridge + :type name: str + :param ifdata: Additional data to attach to interface + The keys in the ifdata dictionary map directly to column names in the + OpenvSwitch Interface table as defined in DB-SCHEMA [0] referenced in + RFC 7047 [1] + + There are some established conventions for keys in the external-ids + column of various tables, consult the OVS Integration Guide [2] for + more details. + + NOTE(fnordahl): Technically the ``external-ids`` column is called + ``external_ids`` (with an underscore) and we rely on ``ovs-vsctl``'s + behaviour of transforming dashes to underscores for us [3] so we can + have a more pleasant data structure. + + 0: http://www.openvswitch.org/ovs-vswitchd.conf.db.5.pdf + 1: https://tools.ietf.org/html/rfc7047 + 2: http://docs.openvswitch.org/en/latest/topics/integration/ + 3: https://github.com/openvswitch/ovs/blob/ + 20dac08fdcce4b7fda1d07add3b346aa9751cfbc/ + lib/db-ctl-base.c#L189-L215 + :type ifdata: Optional[Dict[str,Union[str,Dict[str,str]]]] + """ + try: + import netifaces + except ImportError: + if six.PY2: + apt_install('python-netifaces', fatal=True) + else: + apt_install('python3-netifaces', fatal=True) + import netifaces + + # NOTE(jamespage): + # Older code supported addition of a linuxbridge directly + # to an OVS bridge; ensure we don't break uses on upgrade + existing_ovs_bridge = port_to_br(bridge) + if existing_ovs_bridge is not None: + log('Linuxbridge {} is already directly in use' + ' by OVS bridge {}'.format(bridge, existing_ovs_bridge), + level=INFO) + return + + # NOTE(jamespage): + # preserve existing naming because interfaces may already exist. 
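+    # The pair is named after the bridges being joined ("veth-<name>");
+    # kernel interface names are limited to 15 characters
+    # (MAX_KERNEL_INTERFACE_NAME_LEN), hence the hashed fallback below.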
+ ovsbridge_port = "veth-" + name + linuxbridge_port = "veth-" + bridge + if (len(ovsbridge_port) > MAX_KERNEL_INTERFACE_NAME_LEN or + len(linuxbridge_port) > MAX_KERNEL_INTERFACE_NAME_LEN): + # NOTE(jamespage): + # use parts of hashed bridgename (openstack style) when + # a bridge name exceeds 15 chars + hashed_bridge = hashlib.sha256(bridge.encode('UTF-8')).hexdigest() + base = '{}-{}'.format(hashed_bridge[:8], hashed_bridge[-2:]) + ovsbridge_port = "cvo{}".format(base) + linuxbridge_port = "cvb{}".format(base) + + interfaces = netifaces.interfaces() + for interface in interfaces: + if interface == ovsbridge_port or interface == linuxbridge_port: + log('Interface {} already exists'.format(interface), level=INFO) + return + + log('Adding linuxbridge {} to ovsbridge {}'.format(bridge, name), + level=INFO) + + check_for_eni_source() + + with open('/etc/network/interfaces.d/{}.cfg'.format( + linuxbridge_port), 'w') as config: + config.write(BRIDGE_TEMPLATE.format(linuxbridge_port=linuxbridge_port, + ovsbridge_port=ovsbridge_port, + bridge=bridge)) + + subprocess.check_call(["ifup", linuxbridge_port]) + add_bridge_port(name, linuxbridge_port, ifdata=ifdata) + + +def is_linuxbridge_interface(port): + ''' Check if the interface is a linuxbridge bridge + :param port: Name of an interface to check whether it is a Linux bridge + :returns: True if port is a Linux bridge''' + + if os.path.exists('/sys/class/net/' + port + '/bridge'): + log('Interface {} is a Linux bridge'.format(port), level=DEBUG) + return True + else: + log('Interface {} is not a Linux bridge'.format(port), level=DEBUG) + return False + + +def set_manager(manager): + ''' Set the controller for the local openvswitch ''' + log('Setting manager for local ovs to {}'.format(manager)) + subprocess.check_call(['ovs-vsctl', 'set-manager', + 'ssl:{}'.format(manager)]) + + +def set_Open_vSwitch_column_value(column_value): + """ + Calls ovs-vsctl and sets the 'column_value' in the Open_vSwitch table. + + :param column_value: + See http://www.openvswitch.org//ovs-vswitchd.conf.db.5.pdf for + details of the relevant values. 
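+        Example (illustrative): 'other_config:max-idle=10000'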
+    :type column_value: str
+    :raises subprocess.CalledProcessError: possibly ovsdb-server is not running
+    """
+    log('Setting {} in the Open_vSwitch table'.format(column_value))
+    subprocess.check_call(['ovs-vsctl', 'set', 'Open_vSwitch', '.', column_value])
+
+
+CERT_PATH = '/etc/openvswitch/ovsclient-cert.pem'
+
+
+def get_certificate():
+    ''' Read openvswitch certificate from disk '''
+    if os.path.exists(CERT_PATH):
+        log('Reading ovs certificate from {}'.format(CERT_PATH))
+        with open(CERT_PATH, 'r') as cert:
+            full_cert = cert.read()
+            begin_marker = "-----BEGIN CERTIFICATE-----"
+            end_marker = "-----END CERTIFICATE-----"
+            begin_index = full_cert.find(begin_marker)
+            end_index = full_cert.rfind(end_marker)
+            if end_index == -1 or begin_index == -1:
+                raise RuntimeError("Certificate does not contain valid begin"
+                                   " and end markers.")
+            full_cert = full_cert[begin_index:(end_index + len(end_marker))]
+            return full_cert
+    else:
+        log('Certificate not found', level=WARNING)
+        return None
+
+
+def check_for_eni_source():
+    ''' Juju removes the source line when setting up interfaces,
+    replace if missing '''
+
+    with open('/etc/network/interfaces', 'r') as eni:
+        for line in eni:
+            if line.strip() == 'source /etc/network/interfaces.d/*':
+                return
+    with open('/etc/network/interfaces', 'a') as eni:
+        eni.write('\nsource /etc/network/interfaces.d/*')
+
+
+def full_restart():
+    ''' Full restart and reload of openvswitch '''
+    if os.path.exists('/etc/init/openvswitch-force-reload-kmod.conf'):
+        service('start', 'openvswitch-force-reload-kmod')
+    else:
+        service('force-reload-kmod', 'openvswitch-switch')
+
+
+def enable_ipfix(bridge, target,
+                 cache_active_timeout=60,
+                 cache_max_flows=128,
+                 sampling=64):
+    '''Enable IPFIX on bridge to target.
+    :param bridge: Bridge to monitor
+    :param target: IPFIX remote endpoint
+    :param cache_active_timeout: The maximum period in seconds for
+                                 which an IPFIX flow record is cached
+                                 and aggregated before being sent
+    :param cache_max_flows: The maximum number of IPFIX flow records
+                            that can be cached at a time
+    :param sampling: The rate at which packets should be sampled and
+                     sent to each target collector
+    '''
+    cmd = [
+        'ovs-vsctl', 'set', 'Bridge', bridge, 'ipfix=@i', '--',
+        '--id=@i', 'create', 'IPFIX',
+        'targets="{}"'.format(target),
+        'sampling={}'.format(sampling),
+        'cache_active_timeout={}'.format(cache_active_timeout),
+        'cache_max_flows={}'.format(cache_max_flows),
+    ]
+    log('Enabling IPFIX on {}.'.format(bridge))
+    subprocess.check_call(cmd)
+
+
+def disable_ipfix(bridge):
+    '''Disable IPFIX on target bridge.
+    :param bridge: Bridge to modify
+    '''
+    cmd = ['ovs-vsctl', 'clear', 'Bridge', bridge, 'ipfix']
+    subprocess.check_call(cmd)
+
+
+def port_to_br(port):
+    '''Determine the bridge that contains a port
+    :param port: Name of port to check for
+    :returns str: OVS bridge containing port or None if not found
+    '''
+    try:
+        return subprocess.check_output(
+            ['ovs-vsctl', 'port-to-br', port]
+        ).decode('UTF-8').strip()
+    except subprocess.CalledProcessError:
+        return None
+
+
+def ovs_appctl(target, args):
+    """Run `ovs-appctl` for target with args and return output.
+
+    :param target: Name of daemon to contact. Unless target begins with '/',
+                   `ovs-appctl` looks for a pidfile and will build the path to
+                   a /var/run/openvswitch/target.pid.ctl for you.
+    :type target: str
+    :param args: Command and arguments to pass to `ovs-appctl`
+    :type args: Tuple[str, ...]
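+
+    Example (illustrative): ovs_appctl('ovs-vswitchd', ('coverage/show',))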
+ :returns: Output from command + :rtype: str + :raises: subprocess.CalledProcessError + """ + cmd = ['ovs-appctl', '-t', target] + cmd.extend(args) + return subprocess.check_output(cmd, universal_newlines=True) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/network/ovs/ovn.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/network/ovs/ovn.py new file mode 100644 index 0000000000000000000000000000000000000000..2075f11acfbeb3d0d614cf3db5a1b535bf128824 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/network/ovs/ovn.py @@ -0,0 +1,233 @@ +# Copyright 2019 Canonical Ltd +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +import os +import subprocess +import uuid + +from . import utils + + +OVN_RUNDIR = '/var/run/ovn' +OVN_SYSCONFDIR = '/etc/ovn' + + +def ovn_appctl(target, args, rundir=None, use_ovs_appctl=False): + """Run ovn/ovs-appctl for target with args and return output. + + :param target: Name of daemon to contact. Unless target begins with '/', + `ovn-appctl` looks for a pidfile and will build the path to + a /var/run/ovn/target.pid.ctl for you. + :type target: str + :param args: Command and arguments to pass to `ovn-appctl` + :type args: Tuple[str, ...] + :param rundir: Override path to sockets + :type rundir: Optional[str] + :param use_ovs_appctl: The ``ovn-appctl`` command appeared in OVN 20.03, + set this to True to use ``ovs-appctl`` instead. + :type use_ovs_appctl: bool + :returns: Output from command + :rtype: str + :raises: subprocess.CalledProcessError + """ + # NOTE(fnordahl): The ovsdb-server processes for the OVN databases use a + # non-standard naming scheme for their daemon control socket and we need + # to pass the full path to the socket. + if target in ('ovnnb_db', 'ovnsb_db',): + target = os.path.join(rundir or OVN_RUNDIR, target + '.ctl') + + if use_ovs_appctl: + tool = 'ovs-appctl' + else: + tool = 'ovn-appctl' + + return utils._run(tool, '-t', target, *args) + + +class OVNClusterStatus(object): + + def __init__(self, name, cluster_id, server_id, address, status, role, + term, leader, vote, election_timer, log, + entries_not_yet_committed, entries_not_yet_applied, + connections, servers): + """Initialize and populate OVNClusterStatus object. + + Use class initializer so we can define types in a compatible manner. 
+ + :param name: Name of schema used for database + :type name: str + :param cluster_id: UUID of cluster + :type cluster_id: uuid.UUID + :param server_id: UUID of server + :type server_id: uuid.UUID + :param address: OVSDB connection method + :type address: str + :param status: Status text + :type status: str + :param role: Role of server + :type role: str + :param term: Election term + :type term: int + :param leader: Short form UUID of leader + :type leader: str + :param vote: Vote + :type vote: str + :param election_timer: Current value of election timer + :type election_timer: int + :param log: Log + :type log: str + :param entries_not_yet_committed: Entries not yet committed + :type entries_not_yet_committed: int + :param entries_not_yet_applied: Entries not yet applied + :type entries_not_yet_applied: int + :param connections: Connections + :type connections: str + :param servers: Servers in the cluster + [('0ea6', 'ssl:192.0.2.42:6643')] + :type servers: List[Tuple[str,str]] + """ + self.name = name + self.cluster_id = cluster_id + self.server_id = server_id + self.address = address + self.status = status + self.role = role + self.term = term + self.leader = leader + self.vote = vote + self.election_timer = election_timer + self.log = log + self.entries_not_yet_committed = entries_not_yet_committed + self.entries_not_yet_applied = entries_not_yet_applied + self.connections = connections + self.servers = servers + + def __eq__(self, other): + return ( + self.name == other.name and + self.cluster_id == other.cluster_id and + self.server_id == other.server_id and + self.address == other.address and + self.status == other.status and + self.role == other.role and + self.term == other.term and + self.leader == other.leader and + self.vote == other.vote and + self.election_timer == other.election_timer and + self.log == other.log and + self.entries_not_yet_committed == other.entries_not_yet_committed and + self.entries_not_yet_applied == other.entries_not_yet_applied and + self.connections == other.connections and + self.servers == other.servers) + + @property + def is_cluster_leader(self): + """Retrieve status information from clustered OVSDB. + + :returns: Whether target is cluster leader + :rtype: bool + """ + return self.leader == 'self' + + +def cluster_status(target, schema=None, use_ovs_appctl=False, rundir=None): + """Retrieve status information from clustered OVSDB. + + :param target: Usually one of 'ovsdb-server', 'ovnnb_db', 'ovnsb_db', can + also be full path to control socket. + :type target: str + :param schema: Database schema name, deduced from target if not provided + :type schema: Optional[str] + :param use_ovs_appctl: The ``ovn-appctl`` command appeared in OVN 20.03, + set this to True to use ``ovs-appctl`` instead. 
+ :type use_ovs_appctl: bool + :param rundir: Override path to sockets + :type rundir: Optional[str] + :returns: cluster status data object + :rtype: OVNClusterStatus + :raises: subprocess.CalledProcessError, KeyError, RuntimeError + """ + schema_map = { + 'ovnnb_db': 'OVN_Northbound', + 'ovnsb_db': 'OVN_Southbound', + } + if schema and schema not in schema_map.keys(): + raise RuntimeError('Unknown schema provided: "{}"'.format(schema)) + + status = {} + k = '' + for line in ovn_appctl(target, + ('cluster/status', schema or schema_map[target]), + rundir=rundir, + use_ovs_appctl=use_ovs_appctl).splitlines(): + if k and line.startswith(' '): + # there is no key which means this is a instance of a multi-line/ + # multi-value item, populate the List which is already stored under + # the key. + if k == 'servers': + status[k].append( + tuple(line.replace(')', '').lstrip().split()[0:4:3])) + else: + status[k].append(line.lstrip()) + elif ':' in line: + # this is a line with a key + k, v = line.split(':', 1) + k = k.lower() + k = k.replace(' ', '_') + if v: + # this is a line with both key and value + if k in ('cluster_id', 'server_id',): + v = v.replace('(', '') + v = v.replace(')', '') + status[k] = tuple(v.split()) + else: + status[k] = v.lstrip() + else: + # this is a line with only key which means a multi-line/ + # multi-value item. Store key as List which will be + # populated on subsequent iterations. + status[k] = [] + return OVNClusterStatus( + status['name'], + uuid.UUID(status['cluster_id'][1]), + uuid.UUID(status['server_id'][1]), + status['address'], + status['status'], + status['role'], + int(status['term']), + status['leader'], + status['vote'], + int(status['election_timer']), + status['log'], + int(status['entries_not_yet_committed']), + int(status['entries_not_yet_applied']), + status['connections'], + status['servers']) + + +def is_northd_active(): + """Query `ovn-northd` for active status. + + Note that the active status information for ovn-northd is available for + OVN 20.03 and onward. + + :returns: True if local `ovn-northd` instance is active, False otherwise + :rtype: bool + """ + try: + for line in ovn_appctl('ovn-northd', ('status',)).splitlines(): + if line.startswith('Status:') and 'active' in line: + return True + except subprocess.CalledProcessError: + pass + return False diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/network/ovs/ovsdb.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/network/ovs/ovsdb.py new file mode 100644 index 0000000000000000000000000000000000000000..5e50bc36333bde56674a82ad6c88e0d5de44ee07 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/network/ovs/ovsdb.py @@ -0,0 +1,206 @@ +# Copyright 2019 Canonical Ltd +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +import json +import uuid + +from . 
import utils + + +class SimpleOVSDB(object): + """Simple interface to OVSDB through the use of command line tools. + + OVS and OVN is managed through a set of databases. These databases have + similar command line tools to manage them. We make use of the similarity + to provide a generic class that can be used to manage them. + + The OpenvSwitch project does provide a Python API, but on the surface it + appears to be a bit too involved for our simple use case. + + Examples: + sbdb = SimpleOVSDB('ovn-sbctl') + for chs in sbdb.chassis: + print(chs) + + ovsdb = SimpleOVSDB('ovs-vsctl') + for br in ovsdb.bridge: + if br['name'] == 'br-test': + ovsdb.bridge.set(br['uuid'], 'external_ids:charm', 'managed') + """ + + # For validation we keep a complete map of currently known good tool and + # table combinations. This requires maintenance down the line whenever + # upstream adds things that downstream wants, and the cost of maintaining + # that will most likely be lower then the cost of finding the needle in + # the haystack whenever downstream code misspells something. + _tool_table_map = { + 'ovs-vsctl': ( + 'autoattach', + 'bridge', + 'ct_timeout_policy', + 'ct_zone', + 'controller', + 'datapath', + 'flow_sample_collector_set', + 'flow_table', + 'ipfix', + 'interface', + 'manager', + 'mirror', + 'netflow', + 'open_vswitch', + 'port', + 'qos', + 'queue', + 'ssl', + 'sflow', + ), + 'ovn-nbctl': ( + 'acl', + 'address_set', + 'connection', + 'dhcp_options', + 'dns', + 'forwarding_group', + 'gateway_chassis', + 'ha_chassis', + 'ha_chassis_group', + 'load_balancer', + 'load_balancer_health_check', + 'logical_router', + 'logical_router_policy', + 'logical_router_port', + 'logical_router_static_route', + 'logical_switch', + 'logical_switch_port', + 'meter', + 'meter_band', + 'nat', + 'nb_global', + 'port_group', + 'qos', + 'ssl', + ), + 'ovn-sbctl': ( + 'address_set', + 'chassis', + 'connection', + 'controller_event', + 'dhcp_options', + 'dhcpv6_options', + 'dns', + 'datapath_binding', + 'encap', + 'gateway_chassis', + 'ha_chassis', + 'ha_chassis_group', + 'igmp_group', + 'ip_multicast', + 'logical_flow', + 'mac_binding', + 'meter', + 'meter_band', + 'multicast_group', + 'port_binding', + 'port_group', + 'rbac_permission', + 'rbac_role', + 'sb_global', + 'ssl', + 'service_monitor', + ), + } + + def __init__(self, tool): + """SimpleOVSDB constructor. + + :param tool: Which tool with database commands to operate on. + Usually one of `ovs-vsctl`, `ovn-nbctl`, `ovn-sbctl` + :type tool: str + """ + if tool not in self._tool_table_map: + raise RuntimeError( + 'tool must be one of "{}"'.format(self._tool_table_map.keys())) + self._tool = tool + + def __getattr__(self, table): + if table not in self._tool_table_map[self._tool]: + raise AttributeError( + 'table "{}" not known for use with "{}"' + .format(table, self._tool)) + return self.Table(self._tool, table) + + class Table(object): + """Methods to interact with contents of OVSDB tables. + + NOTE: At the time of this writing ``find`` is the only command + line argument to OVSDB manipulating tools that actually supports + JSON output. + """ + + def __init__(self, tool, table): + """SimpleOVSDBTable constructor. + + :param table: Which table to operate on + :type table: str + """ + self._tool = tool + self._table = table + + def _find_tbl(self, condition=None): + """Run and parse output of OVSDB `find` command. 
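+
+            Example (illustrative): ``condition='name=br-test'`` limits the
+            result to rows whose ``name`` column equals ``br-test``.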
+ + :param condition: An optional RFC 7047 5.1 match condition + :type condition: Optional[str] + :returns: Dictionary with data + :rtype: Dict[str, any] + """ + # When using json formatted output to OVS commands Internal OVSDB + # notation may occur that require further deserializing. + # Reference: https://tools.ietf.org/html/rfc7047#section-5.1 + ovs_type_cb_map = { + 'uuid': uuid.UUID, + # FIXME sets also appear to sometimes contain type/value tuples + 'set': list, + 'map': dict, + } + cmd = [self._tool, '-f', 'json', 'find', self._table] + if condition: + cmd.append(condition) + output = utils._run(*cmd) + data = json.loads(output) + for row in data['data']: + values = [] + for col in row: + if isinstance(col, list): + f = ovs_type_cb_map.get(col[0], str) + values.append(f(col[1])) + else: + values.append(col) + yield dict(zip(data['headings'], values)) + + def __iter__(self): + return self._find_tbl() + + def clear(self, rec, col): + utils._run(self._tool, 'clear', self._table, rec, col) + + def find(self, condition): + return self._find_tbl(condition=condition) + + def remove(self, rec, col, value): + utils._run(self._tool, 'remove', self._table, rec, col, value) + + def set(self, rec, col, value): + utils._run(self._tool, 'set', self._table, rec, + '{}={}'.format(col, value)) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/network/ovs/utils.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/network/ovs/utils.py new file mode 100644 index 0000000000000000000000000000000000000000..53c9b4ddab6d68d2478d4161e658c70e2caa6a74 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/network/ovs/utils.py @@ -0,0 +1,26 @@ +# Copyright 2019 Canonical Ltd +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +import subprocess + + +def _run(*args): + """Run a process, check result, capture decoded output from STDOUT. + + :param args: Command and arguments to run + :type args: Tuple[str, ...] + :returns: Information about the completed process + :rtype: str + :raises subprocess.CalledProcessError + """ + return subprocess.check_output(args, universal_newlines=True) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/network/ufw.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/network/ufw.py new file mode 100644 index 0000000000000000000000000000000000000000..b9bf7c9df5576615e23225a7fdb6c11a28931ba4 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/network/ufw.py @@ -0,0 +1,386 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +""" +This module contains helpers to add and remove ufw rules. + +Examples: + +- open SSH port for subnet 10.0.3.0/24: + + >>> from charmhelpers.contrib.network import ufw + >>> ufw.enable() + >>> ufw.grant_access(src='10.0.3.0/24', dst='any', port='22', proto='tcp') + +- open service by name as defined in /etc/services: + + >>> from charmhelpers.contrib.network import ufw + >>> ufw.enable() + >>> ufw.service('ssh', 'open') + +- close service by port number: + + >>> from charmhelpers.contrib.network import ufw + >>> ufw.enable() + >>> ufw.service('4949', 'close') # munin +""" +import os +import re +import subprocess + +from charmhelpers.core import hookenv +from charmhelpers.core.kernel import modprobe, is_module_loaded + +__author__ = "Felipe Reyes " + + +class UFWError(Exception): + pass + + +class UFWIPv6Error(UFWError): + pass + + +def is_enabled(): + """ + Check if `ufw` is enabled + + :returns: True if ufw is enabled + """ + output = subprocess.check_output(['ufw', 'status'], + universal_newlines=True, + env={'LANG': 'en_US', + 'PATH': os.environ['PATH']}) + + m = re.findall(r'^Status: active\n', output, re.M) + + return len(m) >= 1 + + +def is_ipv6_ok(soft_fail=False): + """ + Check if IPv6 support is present and ip6tables functional + + :param soft_fail: If set to True and IPv6 support is broken, then reports + that the host doesn't have IPv6 support, otherwise a + UFWIPv6Error exception is raised. + :returns: True if IPv6 is working, False otherwise + """ + + # do we have IPv6 in the machine? + if os.path.isdir('/proc/sys/net/ipv6'): + # is ip6tables kernel module loaded? + if not is_module_loaded('ip6_tables'): + # ip6tables support isn't complete, let's try to load it + try: + modprobe('ip6_tables') + # great, we can load the module + return True + except subprocess.CalledProcessError as ex: + hookenv.log("Couldn't load ip6_tables module: %s" % ex.output, + level="WARN") + # we are in a world where ip6tables isn't working + if soft_fail: + # so we inform that the machine doesn't have IPv6 + return False + else: + raise UFWIPv6Error("IPv6 firewall support broken") + else: + # the module is present :) + return True + + else: + # the system doesn't have IPv6 + return False + + +def disable_ipv6(): + """ + Disable ufw IPv6 support in /etc/default/ufw + """ + exit_code = subprocess.call(['sed', '-i', 's/IPV6=.*/IPV6=no/g', + '/etc/default/ufw']) + if exit_code == 0: + hookenv.log('IPv6 support in ufw disabled', level='INFO') + else: + hookenv.log("Couldn't disable IPv6 support in ufw", level="ERROR") + raise UFWError("Couldn't disable IPv6 support in ufw") + + +def enable(soft_fail=False): + """ + Enable ufw + + :param soft_fail: If set to True silently disables IPv6 support in ufw, + otherwise a UFWIPv6Error exception is raised when IP6 + support is broken. 
+    :returns: True if ufw is successfully enabled
+    """
+    if is_enabled():
+        return True
+
+    if not is_ipv6_ok(soft_fail):
+        disable_ipv6()
+
+    output = subprocess.check_output(['ufw', 'enable'],
+                                     universal_newlines=True,
+                                     env={'LANG': 'en_US',
+                                          'PATH': os.environ['PATH']})
+
+    m = re.findall('^Firewall is active and enabled on system startup\n',
+                   output, re.M)
+    hookenv.log(output, level='DEBUG')
+
+    if len(m) == 0:
+        hookenv.log("ufw couldn't be enabled", level='WARN')
+        return False
+    else:
+        hookenv.log("ufw enabled", level='INFO')
+        return True
+
+
+def reload():
+    """
+    Reload ufw
+
+    :returns: True if ufw is successfully reloaded
+    """
+    output = subprocess.check_output(['ufw', 'reload'],
+                                     universal_newlines=True,
+                                     env={'LANG': 'en_US',
+                                          'PATH': os.environ['PATH']})
+
+    m = re.findall('^Firewall reloaded\n',
+                   output, re.M)
+    hookenv.log(output, level='DEBUG')
+
+    if len(m) == 0:
+        hookenv.log("ufw couldn't be reloaded", level='WARN')
+        return False
+    else:
+        hookenv.log("ufw reloaded", level='INFO')
+        return True
+
+
+def disable():
+    """
+    Disable ufw
+
+    :returns: True if ufw is successfully disabled
+    """
+    if not is_enabled():
+        return True
+
+    output = subprocess.check_output(['ufw', 'disable'],
+                                     universal_newlines=True,
+                                     env={'LANG': 'en_US',
+                                          'PATH': os.environ['PATH']})
+
+    m = re.findall(r'^Firewall stopped and disabled on system startup\n',
+                   output, re.M)
+    hookenv.log(output, level='DEBUG')
+
+    if len(m) == 0:
+        hookenv.log("ufw couldn't be disabled", level='WARN')
+        return False
+    else:
+        hookenv.log("ufw disabled", level='INFO')
+        return True
+
+
+def default_policy(policy='deny', direction='incoming'):
+    """
+    Changes the default policy for traffic `direction`
+
+    :param policy: allow, deny or reject
+    :param direction: traffic direction, possible values: incoming, outgoing,
+                      routed
+    """
+    if policy not in ['allow', 'deny', 'reject']:
+        raise UFWError(('Unknown policy %s, valid values: '
+                        'allow, deny, reject') % policy)
+
+    if direction not in ['incoming', 'outgoing', 'routed']:
+        raise UFWError(('Unknown direction %s, valid values: '
+                        'incoming, outgoing, routed') % direction)
+
+    output = subprocess.check_output(['ufw', 'default', policy, direction],
+                                     universal_newlines=True,
+                                     env={'LANG': 'en_US',
+                                          'PATH': os.environ['PATH']})
+    hookenv.log(output, level='DEBUG')
+
+    m = re.findall("^Default %s policy changed to '%s'\n" % (direction,
+                                                             policy),
+                   output, re.M)
+    if len(m) == 0:
+        hookenv.log("ufw couldn't change the default policy to %s for %s"
+                    % (policy, direction), level='WARN')
+        return False
+    else:
+        hookenv.log("ufw default policy for %s changed to %s"
+                    % (direction, policy), level='INFO')
+        return True
+
+
+def modify_access(src, dst='any', port=None, proto=None, action='allow',
+                  index=None, prepend=False, comment=None):
+    """
+    Grant or revoke access to an address or subnet
+
+    :param src: address (e.g. 192.168.1.234) or subnet
+                (e.g. 192.168.1.0/24).
+    :type src: Optional[str]
+    :param dst: destination of the connection; if the machine has multiple
+                IPs and only connections to one of them should be accepted,
+                set this field to that address.
+    :type dst: Optional[str]
+    :param port: destination port
+    :type port: Optional[int]
+    :param proto: protocol (tcp or udp)
+    :type proto: Optional[str]
+    :param action: `allow` or `delete`
+    :type action: str
+    :param index: if different from None the rule is inserted at the given
+                  `index`.
+    :type index: Optional[int]
+    :param prepend: Whether to insert the rule before all other rules matching
+                    the rule's IP type.
+    :type prepend: bool
+    :param comment: Create the rule with a comment
+    :type comment: Optional[str]
+    """
+    if not is_enabled():
+        hookenv.log('ufw is disabled, skipping modify_access()', level='WARN')
+        return
+
+    if action == 'delete':
+        if index is not None:
+            cmd = ['ufw', '--force', 'delete', str(index)]
+        else:
+            cmd = ['ufw', 'delete', 'allow']
+    elif index is not None:
+        cmd = ['ufw', 'insert', str(index), action]
+    elif prepend:
+        cmd = ['ufw', 'prepend', action]
+    else:
+        cmd = ['ufw', action]
+
+    if src is not None:
+        cmd += ['from', src]
+
+    if dst is not None:
+        cmd += ['to', dst]
+
+    if port is not None:
+        cmd += ['port', str(port)]
+
+    if proto is not None:
+        cmd += ['proto', proto]
+
+    if comment:
+        cmd.extend(['comment', comment])
+
+    hookenv.log('ufw {}: {}'.format(action, ' '.join(cmd)), level='DEBUG')
+    p = subprocess.Popen(cmd, stdout=subprocess.PIPE,
+                         stderr=subprocess.PIPE)
+    (stdout, stderr) = p.communicate()
+
+    hookenv.log(stdout, level='INFO')
+
+    if p.returncode != 0:
+        hookenv.log(stderr, level='ERROR')
+        hookenv.log('Error running: {}, exit code: {}'.format(' '.join(cmd),
+                                                              p.returncode),
+                    level='ERROR')
+
+
+def grant_access(src, dst='any', port=None, proto=None, index=None):
+    """
+    Grant access to an address or subnet
+
+    :param src: address (e.g. 192.168.1.234) or subnet
+                (e.g. 192.168.1.0/24).
+    :param dst: destination of the connection; if the machine has multiple
+                IPs and only connections to one of them should be accepted,
+                set this field to that address.
+    :param port: destination port
+    :param proto: protocol (tcp or udp)
+    :param index: if different from None the rule is inserted at the given
+                  `index`.
+    """
+    return modify_access(src, dst=dst, port=port, proto=proto, action='allow',
+                         index=index)
+
+
+def revoke_access(src, dst='any', port=None, proto=None):
+    """
+    Revoke access to an address or subnet
+
+    :param src: address (e.g. 192.168.1.234) or subnet
+                (e.g. 192.168.1.0/24).
+    :param dst: destination of the connection; if the machine has multiple
+                IPs and only connections to one of them should be accepted,
+                set this field to that address.
+    :param port: destination port
+    :param proto: protocol (tcp or udp)
+    """
+    return modify_access(src, dst=dst, port=port, proto=proto, action='delete')
+
+
+def service(name, action):
+    """
+    Open/close access to a service
+
+    :param name: could be a service name defined in `/etc/services` or a port
+                 number.
+    :param action: `open` or `close`
+    """
+    if action == 'open':
+        subprocess.check_output(['ufw', 'allow', str(name)],
+                                universal_newlines=True)
+    elif action == 'close':
+        subprocess.check_output(['ufw', 'delete', 'allow', str(name)],
+                                universal_newlines=True)
+    else:
+        raise UFWError(("'{}' not supported, use 'open' "
+                        "or 'close'").format(action))


+def status():
+    """Retrieve firewall rules as represented by UFW.
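+
+    Example (illustrative):
+        for num, rule in status():
+            print(num, rule['to'], rule['action'])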
+
+    :returns: Tuples with rule number and data, e.g.
+        (1, {'to': '22/tcp', 'action': 'allow in', 'from': 'any',
+             'ipv6': True, 'comment': ''})
+    :rtype: Iterator[Tuple[int, Dict[str, Union[bool, str]]]]
+    """
+    cp = subprocess.check_output(('ufw', 'status', 'numbered',),
+                                 stderr=subprocess.STDOUT,
+                                 universal_newlines=True)
+    for line in cp.splitlines():
+        if not line.startswith('['):
+            continue
+        ipv6 = True if '(v6)' in line else False
+        line = line.replace('(v6)', '')
+        line = line.replace('[', '')
+        line = line.replace(']', '')
+        line = line.replace('Anywhere', 'any')
+        row = line.split()
+        yield (int(row[0]), {
+            'to': row[1],
+            'action': ' '.join(row[2:4]).lower(),
+            'from': row[4],
+            'ipv6': ipv6,
+            'comment': row[6] if len(row) > 5 and row[5] == '#' else '',
+        })
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..d7567b863e3a5ad2b7a7f44958b4166e0c3d346b
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/__init__.py
@@ -0,0 +1,13 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/alternatives.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/alternatives.py
new file mode 100644
index 0000000000000000000000000000000000000000..547de09c6d818772191b519618fa32b08b0e6eff
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/alternatives.py
@@ -0,0 +1,44 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+ +''' Helper for managing alternatives for file conflict resolution ''' + +import subprocess +import shutil +import os + + +def install_alternative(name, target, source, priority=50): + ''' Install alternative configuration ''' + if (os.path.exists(target) and not os.path.islink(target)): + # Move existing file/directory away before installing + shutil.move(target, '{}.bak'.format(target)) + cmd = [ + 'update-alternatives', '--force', '--install', + target, name, source, str(priority) + ] + subprocess.check_call(cmd) + + +def remove_alternative(name, source): + """Remove an installed alternative configuration file + + :param name: string name of the alternative to remove + :param source: string full path to alternative to remove + """ + cmd = [ + 'update-alternatives', '--remove', + name, source + ] + subprocess.check_call(cmd) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/amulet/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/amulet/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..d7567b863e3a5ad2b7a7f44958b4166e0c3d346b --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/amulet/__init__.py @@ -0,0 +1,13 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/amulet/deployment.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/amulet/deployment.py new file mode 100644 index 0000000000000000000000000000000000000000..dd3aebe97dd04bf42a2c4ee7c7b7b83b1917a678 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/amulet/deployment.py @@ -0,0 +1,384 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import logging +import os +import re +import sys +import six +from collections import OrderedDict +from charmhelpers.contrib.amulet.deployment import ( + AmuletDeployment +) +from charmhelpers.contrib.openstack.amulet.utils import ( + OPENSTACK_RELEASES_PAIRS +) + +DEBUG = logging.DEBUG +ERROR = logging.ERROR + + +class OpenStackAmuletDeployment(AmuletDeployment): + """OpenStack amulet deployment. 
+ + This class inherits from AmuletDeployment and has additional support + that is specifically for use by OpenStack charms. + """ + + def __init__(self, series=None, openstack=None, source=None, + stable=True, log_level=DEBUG): + """Initialize the deployment environment.""" + super(OpenStackAmuletDeployment, self).__init__(series) + self.log = self.get_logger(level=log_level) + self.log.info('OpenStackAmuletDeployment: init') + self.openstack = openstack + self.source = source + self.stable = stable + + def get_logger(self, name="deployment-logger", level=logging.DEBUG): + """Get a logger object that will log to stdout.""" + log = logging + logger = log.getLogger(name) + fmt = log.Formatter("%(asctime)s %(funcName)s " + "%(levelname)s: %(message)s") + + handler = log.StreamHandler(stream=sys.stdout) + handler.setLevel(level) + handler.setFormatter(fmt) + + logger.addHandler(handler) + logger.setLevel(level) + + return logger + + def _determine_branch_locations(self, other_services): + """Determine the branch locations for the other services. + + Determine if the local branch being tested is derived from its + stable or next (dev) branch, and based on this, use the corresonding + stable or next branches for the other_services.""" + + self.log.info('OpenStackAmuletDeployment: determine branch locations') + + # Charms outside the ~openstack-charmers + base_charms = { + 'mysql': ['trusty'], + 'mongodb': ['trusty'], + 'nrpe': ['trusty', 'xenial'], + } + + for svc in other_services: + # If a location has been explicitly set, use it + if svc.get('location'): + continue + if svc['name'] in base_charms: + # NOTE: not all charms have support for all series we + # want/need to test against, so fix to most recent + # that each base charm supports + target_series = self.series + if self.series not in base_charms[svc['name']]: + target_series = base_charms[svc['name']][-1] + svc['location'] = 'cs:{}/{}'.format(target_series, + svc['name']) + elif self.stable: + svc['location'] = 'cs:{}/{}'.format(self.series, + svc['name']) + else: + svc['location'] = 'cs:~openstack-charmers-next/{}/{}'.format( + self.series, + svc['name'] + ) + + return other_services + + def _add_services(self, this_service, other_services, use_source=None, + no_origin=None): + """Add services to the deployment and optionally set + openstack-origin/source. + + :param this_service dict: Service dictionary describing the service + whose amulet tests are being run + :param other_services dict: List of service dictionaries describing + the services needed to support the target + service + :param use_source list: List of services which use the 'source' config + option rather than 'openstack-origin' + :param no_origin list: List of services which do not support setting + the Cloud Archive. 
+ Service Dict: + { + 'name': str charm-name, + 'units': int number of units, + 'constraints': dict of juju constraints, + 'location': str location of charm, + } + eg + this_service = { + 'name': 'openvswitch-odl', + 'constraints': {'mem': '8G'}, + } + other_services = [ + { + 'name': 'nova-compute', + 'units': 2, + 'constraints': {'mem': '4G'}, + 'location': cs:~bob/xenial/nova-compute + }, + { + 'name': 'mysql', + 'constraints': {'mem': '2G'}, + }, + {'neutron-api-odl'}] + use_source = ['mysql'] + no_origin = ['neutron-api-odl'] + """ + self.log.info('OpenStackAmuletDeployment: adding services') + + other_services = self._determine_branch_locations(other_services) + + super(OpenStackAmuletDeployment, self)._add_services(this_service, + other_services) + + services = other_services + services.append(this_service) + + use_source = use_source or [] + no_origin = no_origin or [] + + # Charms which should use the source config option + use_source = list(set( + use_source + ['mysql', 'mongodb', 'rabbitmq-server', 'ceph', + 'ceph-osd', 'ceph-radosgw', 'ceph-mon', + 'ceph-proxy', 'percona-cluster', 'lxd'])) + + # Charms which can not use openstack-origin, ie. many subordinates + no_origin = list(set( + no_origin + ['cinder-ceph', 'hacluster', 'neutron-openvswitch', + 'nrpe', 'openvswitch-odl', 'neutron-api-odl', + 'odl-controller', 'cinder-backup', 'nexentaedge-data', + 'nexentaedge-iscsi-gw', 'nexentaedge-swift-gw', + 'cinder-nexentaedge', 'nexentaedge-mgmt', + 'ceilometer-agent'])) + + if self.openstack: + for svc in services: + if svc['name'] not in use_source + no_origin: + config = {'openstack-origin': self.openstack} + self.d.configure(svc['name'], config) + + if self.source: + for svc in services: + if svc['name'] in use_source and svc['name'] not in no_origin: + config = {'source': self.source} + self.d.configure(svc['name'], config) + + def _configure_services(self, configs): + """Configure all of the services.""" + self.log.info('OpenStackAmuletDeployment: configure services') + for service, config in six.iteritems(configs): + self.d.configure(service, config) + + def _auto_wait_for_status(self, message=None, exclude_services=None, + include_only=None, timeout=None): + """Wait for all units to have a specific extended status, except + for any defined as excluded. Unless specified via message, any + status containing any case of 'ready' will be considered a match. + + Examples of message usage: + + Wait for all unit status to CONTAIN any case of 'ready' or 'ok': + message = re.compile('.*ready.*|.*ok.*', re.IGNORECASE) + + Wait for all units to reach this status (exact match): + message = re.compile('^Unit is ready and clustered$') + + Wait for all units to reach any one of these (exact match): + message = re.compile('Unit is ready|OK|Ready') + + Wait for at least one unit to reach this status (exact match): + message = {'ready'} + + See Amulet's sentry.wait_for_messages() for message usage detail. + https://github.com/juju/amulet/blob/master/amulet/sentry.py + + :param message: Expected status match + :param exclude_services: List of juju service names to ignore, + not to be used in conjuction with include_only. + :param include_only: List of juju service names to exclusively check, + not to be used in conjuction with exclude_services. + :param timeout: Maximum time in seconds to wait for status match + :returns: None. Raises if timeout is hit. 
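+
+        Example (illustrative):
+            self._auto_wait_for_status(exclude_services=['mongodb'],
+                                       timeout=1200)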
+ """ + if not timeout: + timeout = int(os.environ.get('AMULET_SETUP_TIMEOUT', 1800)) + self.log.info('Waiting for extended status on units for {}s...' + ''.format(timeout)) + + all_services = self.d.services.keys() + + if exclude_services and include_only: + raise ValueError('exclude_services can not be used ' + 'with include_only') + + if message: + if isinstance(message, re._pattern_type): + match = message.pattern + else: + match = message + + self.log.debug('Custom extended status wait match: ' + '{}'.format(match)) + else: + self.log.debug('Default extended status wait match: contains ' + 'READY (case-insensitive)') + message = re.compile('.*ready.*', re.IGNORECASE) + + if exclude_services: + self.log.debug('Excluding services from extended status match: ' + '{}'.format(exclude_services)) + else: + exclude_services = [] + + if include_only: + services = include_only + else: + services = list(set(all_services) - set(exclude_services)) + + self.log.debug('Waiting up to {}s for extended status on services: ' + '{}'.format(timeout, services)) + service_messages = {service: message for service in services} + + # Check for idleness + self.d.sentry.wait(timeout=timeout) + # Check for error states and bail early + self.d.sentry.wait_for_status(self.d.juju_env, services, timeout=timeout) + # Check for ready messages + self.d.sentry.wait_for_messages(service_messages, timeout=timeout) + + self.log.info('OK') + + def _get_openstack_release(self): + """Get openstack release. + + Return an integer representing the enum value of the openstack + release. + """ + # Must be ordered by OpenStack release (not by Ubuntu release): + for i, os_pair in enumerate(OPENSTACK_RELEASES_PAIRS): + setattr(self, os_pair, i) + + releases = { + ('trusty', None): self.trusty_icehouse, + ('trusty', 'cloud:trusty-kilo'): self.trusty_kilo, + ('trusty', 'cloud:trusty-liberty'): self.trusty_liberty, + ('trusty', 'cloud:trusty-mitaka'): self.trusty_mitaka, + ('xenial', None): self.xenial_mitaka, + ('xenial', 'cloud:xenial-newton'): self.xenial_newton, + ('xenial', 'cloud:xenial-ocata'): self.xenial_ocata, + ('xenial', 'cloud:xenial-pike'): self.xenial_pike, + ('xenial', 'cloud:xenial-queens'): self.xenial_queens, + ('yakkety', None): self.yakkety_newton, + ('zesty', None): self.zesty_ocata, + ('artful', None): self.artful_pike, + ('bionic', None): self.bionic_queens, + ('bionic', 'cloud:bionic-rocky'): self.bionic_rocky, + ('bionic', 'cloud:bionic-stein'): self.bionic_stein, + ('bionic', 'cloud:bionic-train'): self.bionic_train, + ('bionic', 'cloud:bionic-ussuri'): self.bionic_ussuri, + ('cosmic', None): self.cosmic_rocky, + ('disco', None): self.disco_stein, + ('eoan', None): self.eoan_train, + ('focal', None): self.focal_ussuri, + } + return releases[(self.series, self.openstack)] + + def _get_openstack_release_string(self): + """Get openstack release string. + + Return a string representing the openstack release. + """ + releases = OrderedDict([ + ('trusty', 'icehouse'), + ('xenial', 'mitaka'), + ('yakkety', 'newton'), + ('zesty', 'ocata'), + ('artful', 'pike'), + ('bionic', 'queens'), + ('cosmic', 'rocky'), + ('disco', 'stein'), + ('eoan', 'train'), + ('focal', 'ussuri'), + ]) + if self.openstack: + os_origin = self.openstack.split(':')[1] + return os_origin.split('%s-' % self.series)[1].split('/')[0] + else: + return releases[self.series] + + def get_percona_service_entry(self, memory_constraint=None): + """Return a amulet service entry for percona cluster. 
+ + :param memory_constraint: Override the default memory constraint + in the service entry. + :type memory_constraint: str + :returns: Amulet service entry. + :rtype: dict + """ + memory_constraint = memory_constraint or '3072M' + svc_entry = { + 'name': 'percona-cluster', + 'constraints': {'mem': memory_constraint}} + if self._get_openstack_release() <= self.trusty_mitaka: + svc_entry['location'] = 'cs:trusty/percona-cluster' + return svc_entry + + def get_ceph_expected_pools(self, radosgw=False): + """Return a list of expected ceph pools in a ceph + cinder + glance + test scenario, based on OpenStack release and whether ceph radosgw + is flagged as present or not.""" + + if self._get_openstack_release() == self.trusty_icehouse: + # Icehouse + pools = [ + 'data', + 'metadata', + 'rbd', + 'cinder-ceph', + 'glance' + ] + elif (self.trusty_kilo <= self._get_openstack_release() <= + self.zesty_ocata): + # Kilo through Ocata + pools = [ + 'rbd', + 'cinder-ceph', + 'glance' + ] + else: + # Pike and later + pools = [ + 'cinder-ceph', + 'glance' + ] + + if radosgw: + pools.extend([ + '.rgw.root', + '.rgw.control', + '.rgw', + '.rgw.gc', + '.users.uid' + ]) + + return pools diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/amulet/utils.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/amulet/utils.py new file mode 100644 index 0000000000000000000000000000000000000000..14864198926b82afe382d07263034b020ad43c09 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/amulet/utils.py @@ -0,0 +1,1593 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +import amulet +import json +import logging +import os +import re +import six +import time +import urllib +import urlparse + +import cinderclient.v1.client as cinder_client +import cinderclient.v2.client as cinder_clientv2 +import glanceclient.v1 as glance_client +import glanceclient.v2 as glance_clientv2 +import heatclient.v1.client as heat_client +from keystoneclient.v2_0 import client as keystone_client +from keystoneauth1.identity import ( + v3, + v2, +) +from keystoneauth1 import session as keystone_session +from keystoneclient.v3 import client as keystone_client_v3 +from novaclient import exceptions + +import novaclient.client as nova_client +import novaclient +import pika +import swiftclient + +from charmhelpers.core.decorators import retry_on_exception +from charmhelpers.contrib.amulet.utils import ( + AmuletUtils +) +from charmhelpers.core.host import CompareHostReleases + +DEBUG = logging.DEBUG +ERROR = logging.ERROR + +NOVA_CLIENT_VERSION = "2" + +OPENSTACK_RELEASES_PAIRS = [ + 'trusty_icehouse', 'trusty_kilo', 'trusty_liberty', + 'trusty_mitaka', 'xenial_mitaka', + 'xenial_newton', 'yakkety_newton', + 'xenial_ocata', 'zesty_ocata', + 'xenial_pike', 'artful_pike', + 'xenial_queens', 'bionic_queens', + 'bionic_rocky', 'cosmic_rocky', + 'bionic_stein', 'disco_stein', + 'bionic_train', 'eoan_train', + 'bionic_ussuri', 'focal_ussuri', +] + + +class OpenStackAmuletUtils(AmuletUtils): + """OpenStack amulet utilities. + + This class inherits from AmuletUtils and has additional support + that is specifically for use by OpenStack charm tests. + """ + + def __init__(self, log_level=ERROR): + """Initialize the deployment environment.""" + super(OpenStackAmuletUtils, self).__init__(log_level) + + def validate_endpoint_data(self, endpoints, admin_port, internal_port, + public_port, expected, openstack_release=None): + """Validate endpoint data. Pick the correct validator based on + OpenStack release. Expected data should be in the v2 format: + { + 'id': id, + 'region': region, + 'adminurl': adminurl, + 'internalurl': internalurl, + 'publicurl': publicurl, + 'service_id': service_id} + + """ + validation_function = self.validate_v2_endpoint_data + xenial_queens = OPENSTACK_RELEASES_PAIRS.index('xenial_queens') + if openstack_release and openstack_release >= xenial_queens: + validation_function = self.validate_v3_endpoint_data + expected = { + 'id': expected['id'], + 'region': expected['region'], + 'region_id': 'RegionOne', + 'url': self.valid_url, + 'interface': self.not_null, + 'service_id': expected['service_id']} + return validation_function(endpoints, admin_port, internal_port, + public_port, expected) + + def validate_v2_endpoint_data(self, endpoints, admin_port, internal_port, + public_port, expected): + """Validate endpoint data. + + Validate actual endpoint data vs expected endpoint data. The ports + are used to find the matching endpoint. 
+ """ + self.log.debug('Validating endpoint data...') + self.log.debug('actual: {}'.format(repr(endpoints))) + found = False + for ep in endpoints: + self.log.debug('endpoint: {}'.format(repr(ep))) + if (admin_port in ep.adminurl and + internal_port in ep.internalurl and + public_port in ep.publicurl): + found = True + actual = {'id': ep.id, + 'region': ep.region, + 'adminurl': ep.adminurl, + 'internalurl': ep.internalurl, + 'publicurl': ep.publicurl, + 'service_id': ep.service_id} + ret = self._validate_dict_data(expected, actual) + if ret: + return 'unexpected endpoint data - {}'.format(ret) + + if not found: + return 'endpoint not found' + + def validate_v3_endpoint_data(self, endpoints, admin_port, internal_port, + public_port, expected, expected_num_eps=3): + """Validate keystone v3 endpoint data. + + Validate the v3 endpoint data which has changed from v2. The + ports are used to find the matching endpoint. + + The new v3 endpoint data looks like: + + ['}, + region=RegionOne, + region_id=RegionOne, + service_id=17f842a0dc084b928e476fafe67e4095, + url=http://10.5.6.5:9312>, + '}, + region=RegionOne, + region_id=RegionOne, + service_id=72fc8736fb41435e8b3584205bb2cfa3, + url=http://10.5.6.6:35357/v3>, + ... ] + """ + self.log.debug('Validating v3 endpoint data...') + self.log.debug('actual: {}'.format(repr(endpoints))) + found = [] + for ep in endpoints: + self.log.debug('endpoint: {}'.format(repr(ep))) + if ((admin_port in ep.url and ep.interface == 'admin') or + (internal_port in ep.url and ep.interface == 'internal') or + (public_port in ep.url and ep.interface == 'public')): + found.append(ep.interface) + # note we ignore the links member. + actual = {'id': ep.id, + 'region': ep.region, + 'region_id': ep.region_id, + 'interface': self.not_null, + 'url': ep.url, + 'service_id': ep.service_id, } + ret = self._validate_dict_data(expected, actual) + if ret: + return 'unexpected endpoint data - {}'.format(ret) + + if len(found) != expected_num_eps: + return 'Unexpected number of endpoints found' + + def convert_svc_catalog_endpoint_data_to_v3(self, ep_data): + """Convert v2 endpoint data into v3. + + { + 'service_name1': [ + { + 'adminURL': adminURL, + 'id': id, + 'region': region. + 'publicURL': publicURL, + 'internalURL': internalURL + }], + 'service_name2': [ + { + 'adminURL': adminURL, + 'id': id, + 'region': region. + 'publicURL': publicURL, + 'internalURL': internalURL + }], + } + """ + self.log.warn("Endpoint ID and Region ID validation is limited to not " + "null checks after v2 to v3 conversion") + for svc in ep_data.keys(): + assert len(ep_data[svc]) == 1, "Unknown data format" + svc_ep_data = ep_data[svc][0] + ep_data[svc] = [ + { + 'url': svc_ep_data['adminURL'], + 'interface': 'admin', + 'region': svc_ep_data['region'], + 'region_id': self.not_null, + 'id': self.not_null}, + { + 'url': svc_ep_data['publicURL'], + 'interface': 'public', + 'region': svc_ep_data['region'], + 'region_id': self.not_null, + 'id': self.not_null}, + { + 'url': svc_ep_data['internalURL'], + 'interface': 'internal', + 'region': svc_ep_data['region'], + 'region_id': self.not_null, + 'id': self.not_null}] + return ep_data + + def validate_svc_catalog_endpoint_data(self, expected, actual, + openstack_release=None): + """Validate service catalog endpoint data. Pick the correct validator + for the OpenStack version. Expected data should be in the v2 format: + { + 'service_name1': [ + { + 'adminURL': adminURL, + 'id': id, + 'region': region. 
+                    'publicURL': publicURL,
+                    'internalURL': internalURL
+                }],
+            'service_name2': [
+                {
+                    'adminURL': adminURL,
+                    'id': id,
+                    'region': region,
+                    'publicURL': publicURL,
+                    'internalURL': internalURL
+                }],
+        }
+
+        """
+        validation_function = self.validate_v2_svc_catalog_endpoint_data
+        xenial_queens = OPENSTACK_RELEASES_PAIRS.index('xenial_queens')
+        if openstack_release and openstack_release >= xenial_queens:
+            validation_function = self.validate_v3_svc_catalog_endpoint_data
+            expected = self.convert_svc_catalog_endpoint_data_to_v3(expected)
+        return validation_function(expected, actual)
+
+    def validate_v2_svc_catalog_endpoint_data(self, expected, actual):
+        """Validate service catalog endpoint data.
+
+        Validate a list of actual service catalog endpoints vs a list of
+        expected service catalog endpoints.
+        """
+        self.log.debug('Validating service catalog endpoint data...')
+        self.log.debug('actual: {}'.format(repr(actual)))
+        for k, v in six.iteritems(expected):
+            if k in actual:
+                ret = self._validate_dict_data(expected[k][0], actual[k][0])
+                if ret:
+                    return self.endpoint_error(k, ret)
+            else:
+                return "endpoint {} does not exist".format(k)
+        return ret
+
+    def validate_v3_svc_catalog_endpoint_data(self, expected, actual):
+        """Validate the keystone v3 catalog endpoint data.
+
+        Validate a list of dictionaries that make up the keystone v3 service
+        catalogue.
+
+        It is in the form of:
+
+        {u'identity': [{u'id': u'48346b01c6804b298cdd7349aadb732e',
+                        u'interface': u'admin',
+                        u'region': u'RegionOne',
+                        u'region_id': u'RegionOne',
+                        u'url': u'http://10.5.5.224:35357/v3'},
+                       {u'id': u'8414f7352a4b47a69fddd9dbd2aef5cf',
+                        u'interface': u'public',
+                        u'region': u'RegionOne',
+                        u'region_id': u'RegionOne',
+                        u'url': u'http://10.5.5.224:5000/v3'},
+                       {u'id': u'd5ca31440cc24ee1bf625e2996fb6a5b',
+                        u'interface': u'internal',
+                        u'region': u'RegionOne',
+                        u'region_id': u'RegionOne',
+                        u'url': u'http://10.5.5.224:5000/v3'}],
+         u'key-manager': [{u'id': u'68ebc17df0b045fcb8a8a433ebea9e62',
+                           u'interface': u'public',
+                           u'region': u'RegionOne',
+                           u'region_id': u'RegionOne',
+                           u'url': u'http://10.5.5.223:9311'},
+                          {u'id': u'9cdfe2a893c34afd8f504eb218cd2f9d',
+                           u'interface': u'internal',
+                           u'region': u'RegionOne',
+                           u'region_id': u'RegionOne',
+                           u'url': u'http://10.5.5.223:9311'},
+                          {u'id': u'f629388955bc407f8b11d8b7ca168086',
+                           u'interface': u'admin',
+                           u'region': u'RegionOne',
+                           u'region_id': u'RegionOne',
+                           u'url': u'http://10.5.5.223:9312'}]}
+
+        Note that an added complication is that the order of admin, public,
+        internal endpoints against 'interface' in each region is not
+        guaranteed.
+
+        Thus, the function sorts the expected and actual lists using the
+        interface key as a sort key, prior to the comparison.
+ """ + self.log.debug('Validating v3 service catalog endpoint data...') + self.log.debug('actual: {}'.format(repr(actual))) + for k, v in six.iteritems(expected): + if k in actual: + l_expected = sorted(v, key=lambda x: x['interface']) + l_actual = sorted(actual[k], key=lambda x: x['interface']) + if len(l_actual) != len(l_expected): + return ("endpoint {} has differing number of interfaces " + " - expected({}), actual({})" + .format(k, len(l_expected), len(l_actual))) + for i_expected, i_actual in zip(l_expected, l_actual): + self.log.debug("checking interface {}" + .format(i_expected['interface'])) + ret = self._validate_dict_data(i_expected, i_actual) + if ret: + return self.endpoint_error(k, ret) + else: + return "endpoint {} does not exist".format(k) + return ret + + def validate_tenant_data(self, expected, actual): + """Validate tenant data. + + Validate a list of actual tenant data vs list of expected tenant + data. + """ + self.log.debug('Validating tenant data...') + self.log.debug('actual: {}'.format(repr(actual))) + for e in expected: + found = False + for act in actual: + a = {'enabled': act.enabled, 'description': act.description, + 'name': act.name, 'id': act.id} + if e['name'] == a['name']: + found = True + ret = self._validate_dict_data(e, a) + if ret: + return "unexpected tenant data - {}".format(ret) + if not found: + return "tenant {} does not exist".format(e['name']) + return ret + + def validate_role_data(self, expected, actual): + """Validate role data. + + Validate a list of actual role data vs a list of expected role + data. + """ + self.log.debug('Validating role data...') + self.log.debug('actual: {}'.format(repr(actual))) + for e in expected: + found = False + for act in actual: + a = {'name': act.name, 'id': act.id} + if e['name'] == a['name']: + found = True + ret = self._validate_dict_data(e, a) + if ret: + return "unexpected role data - {}".format(ret) + if not found: + return "role {} does not exist".format(e['name']) + return ret + + def validate_user_data(self, expected, actual, api_version=None): + """Validate user data. + + Validate a list of actual user data vs a list of expected user + data. + """ + self.log.debug('Validating user data...') + self.log.debug('actual: {}'.format(repr(actual))) + for e in expected: + found = False + for act in actual: + if e['name'] == act.name: + a = {'enabled': act.enabled, 'name': act.name, + 'email': act.email, 'id': act.id} + if api_version == 3: + a['default_project_id'] = getattr(act, + 'default_project_id', + 'none') + else: + a['tenantId'] = act.tenantId + found = True + ret = self._validate_dict_data(e, a) + if ret: + return "unexpected user data - {}".format(ret) + if not found: + return "user {} does not exist".format(e['name']) + return ret + + def validate_flavor_data(self, expected, actual): + """Validate flavor data. + + Validate a list of actual flavors vs a list of expected flavors. 
+ """ + self.log.debug('Validating flavor data...') + self.log.debug('actual: {}'.format(repr(actual))) + act = [a.name for a in actual] + return self._validate_list_data(expected, act) + + def tenant_exists(self, keystone, tenant): + """Return True if tenant exists.""" + self.log.debug('Checking if tenant exists ({})...'.format(tenant)) + return tenant in [t.name for t in keystone.tenants.list()] + + @retry_on_exception(num_retries=5, base_delay=1) + def keystone_wait_for_propagation(self, sentry_relation_pairs, + api_version): + """Iterate over list of sentry and relation tuples and verify that + api_version has the expected value. + + :param sentry_relation_pairs: list of sentry, relation name tuples used + for monitoring propagation of relation + data + :param api_version: api_version to expect in relation data + :returns: None if successful. Raise on error. + """ + for (sentry, relation_name) in sentry_relation_pairs: + rel = sentry.relation('identity-service', + relation_name) + self.log.debug('keystone relation data: {}'.format(rel)) + if rel.get('api_version') != str(api_version): + raise Exception("api_version not propagated through relation" + " data yet ('{}' != '{}')." + "".format(rel.get('api_version'), api_version)) + + def keystone_configure_api_version(self, sentry_relation_pairs, deployment, + api_version): + """Configure preferred-api-version of keystone in deployment and + monitor provided list of relation objects for propagation + before returning to caller. + + :param sentry_relation_pairs: list of sentry, relation tuples used for + monitoring propagation of relation data + :param deployment: deployment to configure + :param api_version: value preferred-api-version will be set to + :returns: None if successful. Raise on error. + """ + self.log.debug("Setting keystone preferred-api-version: '{}'" + "".format(api_version)) + + config = {'preferred-api-version': api_version} + deployment.d.configure('keystone', config) + deployment._auto_wait_for_status() + self.keystone_wait_for_propagation(sentry_relation_pairs, api_version) + + def authenticate_cinder_admin(self, keystone, api_version=2): + """Authenticates admin user with cinder.""" + self.log.debug('Authenticating cinder admin...') + _clients = { + 1: cinder_client.Client, + 2: cinder_clientv2.Client} + return _clients[api_version](session=keystone.session) + + def authenticate_keystone(self, keystone_ip, username, password, + api_version=False, admin_port=False, + user_domain_name=None, domain_name=None, + project_domain_name=None, project_name=None): + """Authenticate with Keystone""" + self.log.debug('Authenticating with keystone...') + if not api_version: + api_version = 2 + sess, auth = self.get_keystone_session( + keystone_ip=keystone_ip, + username=username, + password=password, + api_version=api_version, + admin_port=admin_port, + user_domain_name=user_domain_name, + domain_name=domain_name, + project_domain_name=project_domain_name, + project_name=project_name + ) + if api_version == 2: + client = keystone_client.Client(session=sess) + else: + client = keystone_client_v3.Client(session=sess) + # This populates the client.service_catalog + client.auth_ref = auth.get_access(sess) + return client + + def get_keystone_session(self, keystone_ip, username, password, + api_version=False, admin_port=False, + user_domain_name=None, domain_name=None, + project_domain_name=None, project_name=None): + """Return a keystone session object""" + ep = self.get_keystone_endpoint(keystone_ip, + api_version=api_version, + 
                                        admin_port=admin_port)
+        if api_version == 2:
+            auth = v2.Password(
+                username=username,
+                password=password,
+                tenant_name=project_name,
+                auth_url=ep
+            )
+            sess = keystone_session.Session(auth=auth)
+        else:
+            auth = v3.Password(
+                user_domain_name=user_domain_name,
+                username=username,
+                password=password,
+                domain_name=domain_name,
+                project_domain_name=project_domain_name,
+                project_name=project_name,
+                auth_url=ep
+            )
+            sess = keystone_session.Session(auth=auth)
+        return (sess, auth)
+
+    def get_keystone_endpoint(self, keystone_ip, api_version=None,
+                              admin_port=False):
+        """Return keystone endpoint"""
+        port = 5000
+        if admin_port:
+            port = 35357
+        base_ep = "http://{}:{}".format(keystone_ip.strip().decode('utf-8'),
+                                        port)
+        if api_version == 2:
+            ep = base_ep + "/v2.0"
+        else:
+            ep = base_ep + "/v3"
+        return ep
+
+    def get_default_keystone_session(self, keystone_sentry,
+                                     openstack_release=None, api_version=2):
+        """Return a keystone session object and client object assuming standard
+        default settings.
+
+        Example call in amulet tests:
+            self.keystone_session, self.keystone = u.get_default_keystone_session(
+                self.keystone_sentry,
+                openstack_release=self._get_openstack_release())
+
+        The session can then be used to auth other clients:
+            neutronclient.Client(session=session)
+            aodh_client.Client(session=session)
+            etc.
+        """
+        self.log.debug('Authenticating keystone admin...')
+        # 11 => xenial_queens
+        if api_version == 3 or (openstack_release and openstack_release >= 11):
+            client_class = keystone_client_v3.Client
+            api_version = 3
+        else:
+            client_class = keystone_client.Client
+        keystone_ip = keystone_sentry.info['public-address']
+        session, auth = self.get_keystone_session(
+            keystone_ip,
+            api_version=api_version,
+            username='admin',
+            password='openstack',
+            project_name='admin',
+            user_domain_name='admin_domain',
+            project_domain_name='admin_domain')
+        client = client_class(session=session)
+        # This populates the client.service_catalog
+        client.auth_ref = auth.get_access(session)
+        return session, client
+
+    def authenticate_keystone_admin(self, keystone_sentry, user, password,
+                                    tenant=None, api_version=None,
+                                    keystone_ip=None, user_domain_name=None,
+                                    project_domain_name=None,
+                                    project_name=None):
+        """Authenticates admin user with the keystone admin endpoint."""
+        self.log.debug('Authenticating keystone admin...')
+        if not keystone_ip:
+            keystone_ip = keystone_sentry.info['public-address']
+
+        # To support backward compatibility usage of this function
+        if not project_name:
+            project_name = tenant
+        if api_version == 3 and not user_domain_name:
+            user_domain_name = 'admin_domain'
+        if api_version == 3 and not project_domain_name:
+            project_domain_name = 'admin_domain'
+        if api_version == 3 and not project_name:
+            project_name = 'admin'
+
+        return self.authenticate_keystone(
+            keystone_ip, user, password,
+            api_version=api_version,
+            user_domain_name=user_domain_name,
+            project_domain_name=project_domain_name,
+            project_name=project_name,
+            admin_port=True)
+
+    def authenticate_keystone_user(self, keystone, user, password, tenant):
+        """Authenticates a regular user with the keystone public endpoint."""
+        self.log.debug('Authenticating keystone user ({})...'.format(user))
+        ep = keystone.service_catalog.url_for(service_type='identity',
+                                              interface='publicURL')
+        keystone_ip = urlparse.urlparse(ep).hostname
+
+        return self.authenticate_keystone(keystone_ip, user, password,
+                                          project_name=tenant)
+
+    def authenticate_glance_admin(self, keystone, force_v1_client=False):
+
"""Authenticates admin user with glance.""" + self.log.debug('Authenticating glance admin...') + ep = keystone.service_catalog.url_for(service_type='image', + interface='adminURL') + if not force_v1_client and keystone.session: + return glance_clientv2.Client("2", session=keystone.session) + else: + return glance_client.Client(ep, token=keystone.auth_token) + + def authenticate_heat_admin(self, keystone): + """Authenticates the admin user with heat.""" + self.log.debug('Authenticating heat admin...') + ep = keystone.service_catalog.url_for(service_type='orchestration', + interface='publicURL') + if keystone.session: + return heat_client.Client(endpoint=ep, session=keystone.session) + else: + return heat_client.Client(endpoint=ep, token=keystone.auth_token) + + def authenticate_nova_user(self, keystone, user, password, tenant): + """Authenticates a regular user with nova-api.""" + self.log.debug('Authenticating nova user ({})...'.format(user)) + ep = keystone.service_catalog.url_for(service_type='identity', + interface='publicURL') + if keystone.session: + return nova_client.Client(NOVA_CLIENT_VERSION, + session=keystone.session, + auth_url=ep) + elif novaclient.__version__[0] >= "7": + return nova_client.Client(NOVA_CLIENT_VERSION, + username=user, password=password, + project_name=tenant, auth_url=ep) + else: + return nova_client.Client(NOVA_CLIENT_VERSION, + username=user, api_key=password, + project_id=tenant, auth_url=ep) + + def authenticate_swift_user(self, keystone, user, password, tenant): + """Authenticates a regular user with swift api.""" + self.log.debug('Authenticating swift user ({})...'.format(user)) + ep = keystone.service_catalog.url_for(service_type='identity', + interface='publicURL') + if keystone.session: + return swiftclient.Connection(session=keystone.session) + else: + return swiftclient.Connection(authurl=ep, + user=user, + key=password, + tenant_name=tenant, + auth_version='2.0') + + def create_flavor(self, nova, name, ram, vcpus, disk, flavorid="auto", + ephemeral=0, swap=0, rxtx_factor=1.0, is_public=True): + """Create the specified flavor.""" + try: + nova.flavors.find(name=name) + except (exceptions.NotFound, exceptions.NoUniqueMatch): + self.log.debug('Creating flavor ({})'.format(name)) + nova.flavors.create(name, ram, vcpus, disk, flavorid, + ephemeral, swap, rxtx_factor, is_public) + + def glance_create_image(self, glance, image_name, image_url, + download_dir='tests', + hypervisor_type=None, + disk_format='qcow2', + architecture='x86_64', + container_format='bare'): + """Download an image and upload it to glance, validate its status + and return an image object pointer. KVM defaults, can override for + LXD. 
+ + :param glance: pointer to authenticated glance api connection + :param image_name: display name for new image + :param image_url: url to retrieve + :param download_dir: directory to store downloaded image file + :param hypervisor_type: glance image hypervisor property + :param disk_format: glance image disk format + :param architecture: glance image architecture property + :param container_format: glance image container format + :returns: glance image pointer + """ + self.log.debug('Creating glance image ({}) from ' + '{}...'.format(image_name, image_url)) + + # Download image + http_proxy = os.getenv('OS_TEST_HTTP_PROXY') + self.log.debug('OS_TEST_HTTP_PROXY: {}'.format(http_proxy)) + if http_proxy: + proxies = {'http': http_proxy} + opener = urllib.FancyURLopener(proxies) + else: + opener = urllib.FancyURLopener() + + abs_file_name = os.path.join(download_dir, image_name) + if not os.path.exists(abs_file_name): + opener.retrieve(image_url, abs_file_name) + + # Create glance image + glance_properties = { + 'architecture': architecture, + } + if hypervisor_type: + glance_properties['hypervisor_type'] = hypervisor_type + # Create glance image + if float(glance.version) < 2.0: + with open(abs_file_name) as f: + image = glance.images.create( + name=image_name, + is_public=True, + disk_format=disk_format, + container_format=container_format, + properties=glance_properties, + data=f) + else: + image = glance.images.create( + name=image_name, + visibility="public", + disk_format=disk_format, + container_format=container_format) + glance.images.upload(image.id, open(abs_file_name, 'rb')) + glance.images.update(image.id, **glance_properties) + + # Wait for image to reach active status + img_id = image.id + ret = self.resource_reaches_status(glance.images, img_id, + expected_stat='active', + msg='Image status wait') + if not ret: + msg = 'Glance image failed to reach expected state.' + amulet.raise_status(amulet.FAIL, msg=msg) + + # Re-validate new image + self.log.debug('Validating image attributes...') + val_img_name = glance.images.get(img_id).name + val_img_stat = glance.images.get(img_id).status + val_img_cfmt = glance.images.get(img_id).container_format + val_img_dfmt = glance.images.get(img_id).disk_format + + if float(glance.version) < 2.0: + val_img_pub = glance.images.get(img_id).is_public + else: + val_img_pub = glance.images.get(img_id).visibility == "public" + + msg_attr = ('Image attributes - name:{} public:{} id:{} stat:{} ' + 'container fmt:{} disk fmt:{}'.format( + val_img_name, val_img_pub, img_id, + val_img_stat, val_img_cfmt, val_img_dfmt)) + + if val_img_name == image_name and val_img_stat == 'active' \ + and val_img_pub is True and val_img_cfmt == container_format \ + and val_img_dfmt == disk_format: + self.log.debug(msg_attr) + else: + msg = ('Image validation failed, {}'.format(msg_attr)) + amulet.raise_status(amulet.FAIL, msg=msg) + + return image + + def create_cirros_image(self, glance, image_name, hypervisor_type=None): + """Download the latest cirros image and upload it to glance, + validate and return a resource pointer. 
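+
+        Example call (illustrative):
+
+            image = u.create_cirros_image(glance, 'cirros-test-image')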
+ + :param glance: pointer to authenticated glance connection + :param image_name: display name for new image + :param hypervisor_type: glance image hypervisor property + :returns: glance image pointer + """ + # /!\ DEPRECATION WARNING + self.log.warn('/!\\ DEPRECATION WARNING: use ' + 'glance_create_image instead of ' + 'create_cirros_image.') + + self.log.debug('Creating glance cirros image ' + '({})...'.format(image_name)) + + # Get cirros image URL + http_proxy = os.getenv('OS_TEST_HTTP_PROXY') + self.log.debug('OS_TEST_HTTP_PROXY: {}'.format(http_proxy)) + if http_proxy: + proxies = {'http': http_proxy} + opener = urllib.FancyURLopener(proxies) + else: + opener = urllib.FancyURLopener() + + f = opener.open('http://download.cirros-cloud.net/version/released') + version = f.read().strip() + cirros_img = 'cirros-{}-x86_64-disk.img'.format(version) + cirros_url = 'http://{}/{}/{}'.format('download.cirros-cloud.net', + version, cirros_img) + f.close() + + return self.glance_create_image( + glance, + image_name, + cirros_url, + hypervisor_type=hypervisor_type) + + def delete_image(self, glance, image): + """Delete the specified image.""" + + # /!\ DEPRECATION WARNING + self.log.warn('/!\\ DEPRECATION WARNING: use ' + 'delete_resource instead of delete_image.') + self.log.debug('Deleting glance image ({})...'.format(image)) + return self.delete_resource(glance.images, image, msg='glance image') + + def create_instance(self, nova, image_name, instance_name, flavor): + """Create the specified instance.""" + self.log.debug('Creating instance ' + '({}|{}|{})'.format(instance_name, image_name, flavor)) + image = nova.glance.find_image(image_name) + flavor = nova.flavors.find(name=flavor) + instance = nova.servers.create(name=instance_name, image=image, + flavor=flavor) + + count = 1 + status = instance.status + while status != 'ACTIVE' and count < 60: + time.sleep(3) + instance = nova.servers.get(instance.id) + status = instance.status + self.log.debug('instance status: {}'.format(status)) + count += 1 + + if status != 'ACTIVE': + self.log.error('instance creation timed out') + return None + + return instance + + def delete_instance(self, nova, instance): + """Delete the specified instance.""" + + # /!\ DEPRECATION WARNING + self.log.warn('/!\\ DEPRECATION WARNING: use ' + 'delete_resource instead of delete_instance.') + self.log.debug('Deleting instance ({})...'.format(instance)) + return self.delete_resource(nova.servers, instance, + msg='nova instance') + + def create_or_get_keypair(self, nova, keypair_name="testkey"): + """Create a new keypair, or return pointer if it already exists.""" + try: + _keypair = nova.keypairs.get(keypair_name) + self.log.debug('Keypair ({}) already exists, ' + 'using it.'.format(keypair_name)) + return _keypair + except Exception: + self.log.debug('Keypair ({}) does not exist, ' + 'creating it.'.format(keypair_name)) + + _keypair = nova.keypairs.create(name=keypair_name) + return _keypair + + def _get_cinder_obj_name(self, cinder_object): + """Retrieve name of cinder object. 
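+
+        Example (illustrative):
+
+            name = self._get_cinder_obj_name(cinder.volumes.get(vol_id))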
+ + :param cinder_object: cinder snapshot or volume object + :returns: str cinder object name + """ + # v1 objects store name in 'display_name' attr but v2+ use 'name' + try: + return cinder_object.display_name + except AttributeError: + return cinder_object.name + + def create_cinder_volume(self, cinder, vol_name="demo-vol", vol_size=1, + img_id=None, src_vol_id=None, snap_id=None): + """Create cinder volume, optionally from a glance image, OR + optionally as a clone of an existing volume, OR optionally + from a snapshot. Wait for the new volume status to reach + the expected status, validate and return a resource pointer. + + :param vol_name: cinder volume display name + :param vol_size: size in gigabytes + :param img_id: optional glance image id + :param src_vol_id: optional source volume id to clone + :param snap_id: optional snapshot id to use + :returns: cinder volume pointer + """ + # Handle parameter input and avoid impossible combinations + if img_id and not src_vol_id and not snap_id: + # Create volume from image + self.log.debug('Creating cinder volume from glance image...') + bootable = 'true' + elif src_vol_id and not img_id and not snap_id: + # Clone an existing volume + self.log.debug('Cloning cinder volume...') + bootable = cinder.volumes.get(src_vol_id).bootable + elif snap_id and not src_vol_id and not img_id: + # Create volume from snapshot + self.log.debug('Creating cinder volume from snapshot...') + snap = cinder.volume_snapshots.find(id=snap_id) + vol_size = snap.size + snap_vol_id = cinder.volume_snapshots.get(snap_id).volume_id + bootable = cinder.volumes.get(snap_vol_id).bootable + elif not img_id and not src_vol_id and not snap_id: + # Create volume + self.log.debug('Creating cinder volume...') + bootable = 'false' + else: + # Impossible combination of parameters + msg = ('Invalid method use - name:{} size:{} img_id:{} ' + 'src_vol_id:{} snap_id:{}'.format(vol_name, vol_size, + img_id, src_vol_id, + snap_id)) + amulet.raise_status(amulet.FAIL, msg=msg) + + # Create new volume + try: + vol_new = cinder.volumes.create(display_name=vol_name, + imageRef=img_id, + size=vol_size, + source_volid=src_vol_id, + snapshot_id=snap_id) + vol_id = vol_new.id + except TypeError: + vol_new = cinder.volumes.create(name=vol_name, + imageRef=img_id, + size=vol_size, + source_volid=src_vol_id, + snapshot_id=snap_id) + vol_id = vol_new.id + except Exception as e: + msg = 'Failed to create volume: {}'.format(e) + amulet.raise_status(amulet.FAIL, msg=msg) + + # Wait for volume to reach available status + ret = self.resource_reaches_status(cinder.volumes, vol_id, + expected_stat="available", + msg="Volume status wait") + if not ret: + msg = 'Cinder volume failed to reach expected state.' 
+            amulet.raise_status(amulet.FAIL, msg=msg)
+
+        # Re-validate new volume
+        self.log.debug('Validating volume attributes...')
+        val_vol_name = self._get_cinder_obj_name(cinder.volumes.get(vol_id))
+        val_vol_boot = cinder.volumes.get(vol_id).bootable
+        val_vol_stat = cinder.volumes.get(vol_id).status
+        val_vol_size = cinder.volumes.get(vol_id).size
+        msg_attr = ('Volume attributes - name:{} id:{} stat:{} boot:'
+                    '{} size:{}'.format(val_vol_name, vol_id,
+                                        val_vol_stat, val_vol_boot,
+                                        val_vol_size))
+
+        if val_vol_boot == bootable and val_vol_stat == 'available' \
+                and val_vol_name == vol_name and val_vol_size == vol_size:
+            self.log.debug(msg_attr)
+        else:
+            msg = ('Volume validation failed, {}'.format(msg_attr))
+            amulet.raise_status(amulet.FAIL, msg=msg)
+
+        return vol_new
+
+    def delete_resource(self, resource, resource_id,
+                        msg="resource", max_wait=120):
+        """Delete one openstack resource, such as one instance, keypair,
+        image, volume, stack, etc., and confirm deletion within max wait time.
+
+        :param resource: pointer to os resource type, ex:glance_client.images
+        :param resource_id: unique name or id for the openstack resource
+        :param msg: text to identify purpose in logging
+        :param max_wait: maximum wait time in seconds
+        :returns: True if successful, otherwise False
+        """
+        self.log.debug('Deleting OpenStack resource '
+                       '{} ({})'.format(resource_id, msg))
+        num_before = len(list(resource.list()))
+        resource.delete(resource_id)
+
+        tries = 0
+        num_after = len(list(resource.list()))
+        while num_after != (num_before - 1) and tries < (max_wait / 4):
+            self.log.debug('{} delete check: '
+                           '{} [{}:{}] {}'.format(msg, tries,
+                                                  num_before,
+                                                  num_after,
+                                                  resource_id))
+            time.sleep(4)
+            num_after = len(list(resource.list()))
+            tries += 1
+
+        self.log.debug('{}: expected, actual count = {}, '
+                       '{}'.format(msg, num_before - 1, num_after))
+
+        if num_after == (num_before - 1):
+            return True
+        else:
+            self.log.error('{} delete timed out'.format(msg))
+            return False
+
+    def resource_reaches_status(self, resource, resource_id,
+                                expected_stat='available',
+                                msg='resource', max_wait=120):
+        """Wait for an OpenStack resource's status to reach an
+        expected status within a specified time.  Useful to confirm that
+        nova instances, cinder vols, snapshots, glance images, heat stacks
+        and other resources eventually reach the expected status.
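+
+        Example call (illustrative; polls every 4 seconds up to max_wait):
+
+            ready = u.resource_reaches_status(cinder.volumes, vol_id,
+                                              expected_stat='available',
+                                              msg='Volume status wait')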
+ + :param resource: pointer to os resource type, ex: heat_client.stacks + :param resource_id: unique id for the openstack resource + :param expected_stat: status to expect resource to reach + :param msg: text to identify purpose in logging + :param max_wait: maximum wait time in seconds + :returns: True if successful, False if status is not reached + """ + + tries = 0 + resource_stat = resource.get(resource_id).status + while resource_stat != expected_stat and tries < (max_wait / 4): + self.log.debug('{} status check: ' + '{} [{}:{}] {}'.format(msg, tries, + resource_stat, + expected_stat, + resource_id)) + time.sleep(4) + resource_stat = resource.get(resource_id).status + tries += 1 + + self.log.debug('{}: expected, actual status = {}, ' + '{}'.format(msg, resource_stat, expected_stat)) + + if resource_stat == expected_stat: + return True + else: + self.log.debug('{} never reached expected status: ' + '{}'.format(resource_id, expected_stat)) + return False + + def get_ceph_osd_id_cmd(self, index): + """Produce a shell command that will return a ceph-osd id.""" + return ("`initctl list | grep 'ceph-osd ' | " + "awk 'NR=={} {{ print $2 }}' | " + "grep -o '[0-9]*'`".format(index + 1)) + + def get_ceph_pools(self, sentry_unit): + """Return a dict of ceph pools from a single ceph unit, with + pool name as keys, pool id as vals.""" + pools = {} + cmd = 'sudo ceph osd lspools' + output, code = sentry_unit.run(cmd) + if code != 0: + msg = ('{} `{}` returned {} ' + '{}'.format(sentry_unit.info['unit_name'], + cmd, code, output)) + amulet.raise_status(amulet.FAIL, msg=msg) + + # For mimic ceph osd lspools output + output = output.replace("\n", ",") + + # Example output: 0 data,1 metadata,2 rbd,3 cinder,4 glance, + for pool in str(output).split(','): + pool_id_name = pool.split(' ') + if len(pool_id_name) == 2: + pool_id = pool_id_name[0] + pool_name = pool_id_name[1] + pools[pool_name] = int(pool_id) + + self.log.debug('Pools on {}: {}'.format(sentry_unit.info['unit_name'], + pools)) + return pools + + def get_ceph_df(self, sentry_unit): + """Return dict of ceph df json output, including ceph pool state. + + :param sentry_unit: Pointer to amulet sentry instance (juju unit) + :returns: Dict of ceph df output + """ + cmd = 'sudo ceph df --format=json' + output, code = sentry_unit.run(cmd) + if code != 0: + msg = ('{} `{}` returned {} ' + '{}'.format(sentry_unit.info['unit_name'], + cmd, code, output)) + amulet.raise_status(amulet.FAIL, msg=msg) + return json.loads(output) + + def get_ceph_pool_sample(self, sentry_unit, pool_id=0): + """Take a sample of attributes of a ceph pool, returning ceph + pool name, object count and disk space used for the specified + pool ID number. + + :param sentry_unit: Pointer to amulet sentry instance (juju unit) + :param pool_id: Ceph pool ID + :returns: List of pool name, object count, kb disk space used + """ + df = self.get_ceph_df(sentry_unit) + for pool in df['pools']: + if pool['id'] == pool_id: + pool_name = pool['name'] + obj_count = pool['stats']['objects'] + kb_used = pool['stats']['kb_used'] + + self.log.debug('Ceph {} pool (ID {}): {} objects, ' + '{} kb used'.format(pool_name, pool_id, + obj_count, kb_used)) + return pool_name, obj_count, kb_used + + def validate_ceph_pool_samples(self, samples, sample_type="resource pool"): + """Validate ceph pool samples taken over time, such as pool + object counts or pool kb used, before adding, after adding, and + after deleting items which affect those pool attributes. 
The + 2nd element is expected to be greater than the 1st; 3rd is expected + to be less than the 2nd. + + :param samples: List containing 3 data samples + :param sample_type: String for logging and usage context + :returns: None if successful, Failure message otherwise + """ + original, created, deleted = range(3) + if samples[created] <= samples[original] or \ + samples[deleted] >= samples[created]: + return ('Ceph {} samples ({}) ' + 'unexpected.'.format(sample_type, samples)) + else: + self.log.debug('Ceph {} samples (OK): ' + '{}'.format(sample_type, samples)) + return None + + # rabbitmq/amqp specific helpers: + + def rmq_wait_for_cluster(self, deployment, init_sleep=15, timeout=1200): + """Wait for rmq units extended status to show cluster readiness, + after an optional initial sleep period. Initial sleep is likely + necessary to be effective following a config change, as status + message may not instantly update to non-ready.""" + + if init_sleep: + time.sleep(init_sleep) + + message = re.compile('^Unit is ready and clustered$') + deployment._auto_wait_for_status(message=message, + timeout=timeout, + include_only=['rabbitmq-server']) + + def add_rmq_test_user(self, sentry_units, + username="testuser1", password="changeme"): + """Add a test user via the first rmq juju unit, check connection as + the new user against all sentry units. + + :param sentry_units: list of sentry unit pointers + :param username: amqp user name, default to testuser1 + :param password: amqp user password + :returns: None if successful. Raise on error. + """ + self.log.debug('Adding rmq user ({})...'.format(username)) + + # Check that user does not already exist + cmd_user_list = 'rabbitmqctl list_users' + output, _ = self.run_cmd_unit(sentry_units[0], cmd_user_list) + if username in output: + self.log.warning('User ({}) already exists, returning ' + 'gracefully.'.format(username)) + return + + perms = '".*" ".*" ".*"' + cmds = ['rabbitmqctl add_user {} {}'.format(username, password), + 'rabbitmqctl set_permissions {} {}'.format(username, perms)] + + # Add user via first unit + for cmd in cmds: + output, _ = self.run_cmd_unit(sentry_units[0], cmd) + + # Check connection against the other sentry_units + self.log.debug('Checking user connect against units...') + for sentry_unit in sentry_units: + connection = self.connect_amqp_by_unit(sentry_unit, ssl=False, + username=username, + password=password) + connection.close() + + def delete_rmq_test_user(self, sentry_units, username="testuser1"): + """Delete a rabbitmq user via the first rmq juju unit. + + :param sentry_units: list of sentry unit pointers + :param username: amqp user name, default to testuser1 + :param password: amqp user password + :returns: None if successful or no such user. + """ + self.log.debug('Deleting rmq user ({})...'.format(username)) + + # Check that the user exists + cmd_user_list = 'rabbitmqctl list_users' + output, _ = self.run_cmd_unit(sentry_units[0], cmd_user_list) + + if username not in output: + self.log.warning('User ({}) does not exist, returning ' + 'gracefully.'.format(username)) + return + + # Delete the user + cmd_user_del = 'rabbitmqctl delete_user {}'.format(username) + output, _ = self.run_cmd_unit(sentry_units[0], cmd_user_del) + + def get_rmq_cluster_status(self, sentry_unit): + """Execute rabbitmq cluster status command on a unit and return + the full output. 
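+
+        Example (illustrative; 'sentry_units' as elsewhere in this module):
+
+            status = u.get_rmq_cluster_status(sentry_units[0])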
+ + :param unit: sentry unit + :returns: String containing console output of cluster status command + """ + cmd = 'rabbitmqctl cluster_status' + output, _ = self.run_cmd_unit(sentry_unit, cmd) + self.log.debug('{} cluster_status:\n{}'.format( + sentry_unit.info['unit_name'], output)) + return str(output) + + def get_rmq_cluster_running_nodes(self, sentry_unit): + """Parse rabbitmqctl cluster_status output string, return list of + running rabbitmq cluster nodes. + + :param unit: sentry unit + :returns: List containing node names of running nodes + """ + # NOTE(beisner): rabbitmqctl cluster_status output is not + # json-parsable, do string chop foo, then json.loads that. + str_stat = self.get_rmq_cluster_status(sentry_unit) + if 'running_nodes' in str_stat: + pos_start = str_stat.find("{running_nodes,") + 15 + pos_end = str_stat.find("]},", pos_start) + 1 + str_run_nodes = str_stat[pos_start:pos_end].replace("'", '"') + run_nodes = json.loads(str_run_nodes) + return run_nodes + else: + return [] + + def validate_rmq_cluster_running_nodes(self, sentry_units): + """Check that all rmq unit hostnames are represented in the + cluster_status output of all units. + + :param host_names: dict of juju unit names to host names + :param units: list of sentry unit pointers (all rmq units) + :returns: None if successful, otherwise return error message + """ + host_names = self.get_unit_hostnames(sentry_units) + errors = [] + + # Query every unit for cluster_status running nodes + for query_unit in sentry_units: + query_unit_name = query_unit.info['unit_name'] + running_nodes = self.get_rmq_cluster_running_nodes(query_unit) + + # Confirm that every unit is represented in the queried unit's + # cluster_status running nodes output. + for validate_unit in sentry_units: + val_host_name = host_names[validate_unit.info['unit_name']] + val_node_name = 'rabbit@{}'.format(val_host_name) + + if val_node_name not in running_nodes: + errors.append('Cluster member check failed on {}: {} not ' + 'in {}\n'.format(query_unit_name, + val_node_name, + running_nodes)) + if errors: + return ''.join(errors) + + def rmq_ssl_is_enabled_on_unit(self, sentry_unit, port=None): + """Check a single juju rmq unit for ssl and port in the config file.""" + host = sentry_unit.info['public-address'] + unit_name = sentry_unit.info['unit_name'] + + conf_file = '/etc/rabbitmq/rabbitmq.config' + conf_contents = str(self.file_contents_safe(sentry_unit, + conf_file, max_wait=16)) + # Checks + conf_ssl = 'ssl' in conf_contents + conf_port = str(port) in conf_contents + + # Port explicitly checked in config + if port and conf_port and conf_ssl: + self.log.debug('SSL is enabled @{}:{} ' + '({})'.format(host, port, unit_name)) + return True + elif port and not conf_port and conf_ssl: + self.log.debug('SSL is enabled @{} but not on port {} ' + '({})'.format(host, port, unit_name)) + return False + # Port not checked (useful when checking that ssl is disabled) + elif not port and conf_ssl: + self.log.debug('SSL is enabled @{}:{} ' + '({})'.format(host, port, unit_name)) + return True + elif not conf_ssl: + self.log.debug('SSL not enabled @{}:{} ' + '({})'.format(host, port, unit_name)) + return False + else: + msg = ('Unknown condition when checking SSL status @{}:{} ' + '({})'.format(host, port, unit_name)) + amulet.raise_status(amulet.FAIL, msg) + + def validate_rmq_ssl_enabled_units(self, sentry_units, port=None): + """Check that ssl is enabled on rmq juju sentry units. 
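+
+        Example (illustrative; returns None when every unit has ssl enabled):
+
+            ret = u.validate_rmq_ssl_enabled_units(sentry_units, port=5671)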
+
+        :param sentry_units: list of all rmq sentry units
+        :param port: optional ssl port override to validate
+        :returns: None if successful, otherwise return error message
+        """
+        for sentry_unit in sentry_units:
+            if not self.rmq_ssl_is_enabled_on_unit(sentry_unit, port=port):
+                return ('Unexpected condition: ssl is disabled on unit '
+                        '({})'.format(sentry_unit.info['unit_name']))
+        return None
+
+    def validate_rmq_ssl_disabled_units(self, sentry_units):
+        """Check that ssl is disabled on the listed rmq juju sentry units.
+
+        :param sentry_units: list of all rmq sentry units
+        :returns: None if successful, otherwise return error message
+        """
+        for sentry_unit in sentry_units:
+            if self.rmq_ssl_is_enabled_on_unit(sentry_unit):
+                return ('Unexpected condition: ssl is enabled on unit '
+                        '({})'.format(sentry_unit.info['unit_name']))
+        return None
+
+    def configure_rmq_ssl_on(self, sentry_units, deployment,
+                             port=None, max_wait=60):
+        """Turn ssl charm config option on, with optional non-default
+        ssl port specification.  Confirm that it is enabled on every
+        unit.
+
+        :param sentry_units: list of sentry units
+        :param deployment: amulet deployment object pointer
+        :param port: amqp port, use defaults if None
+        :param max_wait: maximum time to wait in seconds to confirm
+        :returns: None if successful.  Raise on error.
+        """
+        self.log.debug('Setting ssl charm config option: on')
+
+        # Enable RMQ SSL
+        config = {'ssl': 'on'}
+        if port:
+            config['ssl_port'] = port
+
+        deployment.d.configure('rabbitmq-server', config)
+
+        # Wait for unit status
+        self.rmq_wait_for_cluster(deployment)
+
+        # Confirm
+        tries = 0
+        ret = self.validate_rmq_ssl_enabled_units(sentry_units, port=port)
+        while ret and tries < (max_wait / 4):
+            time.sleep(4)
+            self.log.debug('Attempt {}: {}'.format(tries, ret))
+            ret = self.validate_rmq_ssl_enabled_units(sentry_units, port=port)
+            tries += 1
+
+        if ret:
+            amulet.raise_status(amulet.FAIL, ret)
+
+    def configure_rmq_ssl_off(self, sentry_units, deployment, max_wait=60):
+        """Turn ssl charm config option off, confirm that it is disabled
+        on every unit.
+
+        :param sentry_units: list of sentry units
+        :param deployment: amulet deployment object pointer
+        :param max_wait: maximum time to wait in seconds to confirm
+        :returns: None if successful.  Raise on error.
+        """
+        self.log.debug('Setting ssl charm config option: off')
+
+        # Disable RMQ SSL
+        config = {'ssl': 'off'}
+        deployment.d.configure('rabbitmq-server', config)
+
+        # Wait for unit status
+        self.rmq_wait_for_cluster(deployment)
+
+        # Confirm
+        tries = 0
+        ret = self.validate_rmq_ssl_disabled_units(sentry_units)
+        while ret and tries < (max_wait / 4):
+            time.sleep(4)
+            self.log.debug('Attempt {}: {}'.format(tries, ret))
+            ret = self.validate_rmq_ssl_disabled_units(sentry_units)
+            tries += 1
+
+        if ret:
+            amulet.raise_status(amulet.FAIL, ret)
+
+    def connect_amqp_by_unit(self, sentry_unit, ssl=False,
+                             port=None, fatal=True,
+                             username="testuser1", password="changeme"):
+        """Establish and return a pika amqp connection to the rabbitmq service
+        running on a rmq juju unit.
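+
+        Example (illustrative):
+
+            connection = u.connect_amqp_by_unit(sentry_units[0], ssl=False)
+            connection.close()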
+ + :param sentry_unit: sentry unit pointer + :param ssl: boolean, default to False + :param port: amqp port, use defaults if None + :param fatal: boolean, default to True (raises on connect error) + :param username: amqp user name, default to testuser1 + :param password: amqp user password + :returns: pika amqp connection pointer or None if failed and non-fatal + """ + host = sentry_unit.info['public-address'] + unit_name = sentry_unit.info['unit_name'] + + # Default port logic if port is not specified + if ssl and not port: + port = 5671 + elif not ssl and not port: + port = 5672 + + self.log.debug('Connecting to amqp on {}:{} ({}) as ' + '{}...'.format(host, port, unit_name, username)) + + try: + credentials = pika.PlainCredentials(username, password) + parameters = pika.ConnectionParameters(host=host, port=port, + credentials=credentials, + ssl=ssl, + connection_attempts=3, + retry_delay=5, + socket_timeout=1) + connection = pika.BlockingConnection(parameters) + assert connection.is_open is True + assert connection.is_closing is False + self.log.debug('Connect OK') + return connection + except Exception as e: + msg = ('amqp connection failed to {}:{} as ' + '{} ({})'.format(host, port, username, str(e))) + if fatal: + amulet.raise_status(amulet.FAIL, msg) + else: + self.log.warn(msg) + return None + + def publish_amqp_message_by_unit(self, sentry_unit, message, + queue="test", ssl=False, + username="testuser1", + password="changeme", + port=None): + """Publish an amqp message to a rmq juju unit. + + :param sentry_unit: sentry unit pointer + :param message: amqp message string + :param queue: message queue, default to test + :param username: amqp user name, default to testuser1 + :param password: amqp user password + :param ssl: boolean, default to False + :param port: amqp port, use defaults if None + :returns: None. Raises exception if publish failed. + """ + self.log.debug('Publishing message to {} queue:\n{}'.format(queue, + message)) + connection = self.connect_amqp_by_unit(sentry_unit, ssl=ssl, + port=port, + username=username, + password=password) + + # NOTE(beisner): extra debug here re: pika hang potential: + # https://github.com/pika/pika/issues/297 + # https://groups.google.com/forum/#!topic/rabbitmq-users/Ja0iyfF0Szw + self.log.debug('Defining channel...') + channel = connection.channel() + self.log.debug('Declaring queue...') + channel.queue_declare(queue=queue, auto_delete=False, durable=True) + self.log.debug('Publishing message...') + channel.basic_publish(exchange='', routing_key=queue, body=message) + self.log.debug('Closing channel...') + channel.close() + self.log.debug('Closing connection...') + connection.close() + + def get_amqp_message_by_unit(self, sentry_unit, queue="test", + username="testuser1", + password="changeme", + ssl=False, port=None): + """Get an amqp message from a rmq juju unit. + + :param sentry_unit: sentry unit pointer + :param queue: message queue, default to test + :param username: amqp user name, default to testuser1 + :param password: amqp user password + :param ssl: boolean, default to False + :param port: amqp port, use defaults if None + :returns: amqp message body as string. Raise if get fails. 
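+
+        Example (illustrative; pairs with publish_amqp_message_by_unit):
+
+            body = u.get_amqp_message_by_unit(sentry_units[0], queue='test')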
+ """ + connection = self.connect_amqp_by_unit(sentry_unit, ssl=ssl, + port=port, + username=username, + password=password) + channel = connection.channel() + method_frame, _, body = channel.basic_get(queue) + + if method_frame: + self.log.debug('Retreived message from {} queue:\n{}'.format(queue, + body)) + channel.basic_ack(method_frame.delivery_tag) + channel.close() + connection.close() + return body + else: + msg = 'No message retrieved.' + amulet.raise_status(amulet.FAIL, msg) + + def validate_memcache(self, sentry_unit, conf, os_release, + earliest_release=5, section='keystone_authtoken', + check_kvs=None): + """Check Memcache is running and is configured to be used + + Example call from Amulet test: + + def test_110_memcache(self): + u.validate_memcache(self.neutron_api_sentry, + '/etc/neutron/neutron.conf', + self._get_openstack_release()) + + :param sentry_unit: sentry unit + :param conf: OpenStack config file to check memcache settings + :param os_release: Current OpenStack release int code + :param earliest_release: Earliest Openstack release to check int code + :param section: OpenStack config file section to check + :param check_kvs: Dict of settings to check in config file + :returns: None + """ + if os_release < earliest_release: + self.log.debug('Skipping memcache checks for deployment. {} <' + 'mitaka'.format(os_release)) + return + _kvs = check_kvs or {'memcached_servers': 'inet6:[::1]:11211'} + self.log.debug('Checking memcached is running') + ret = self.validate_services_by_name({sentry_unit: ['memcached']}) + if ret: + amulet.raise_status(amulet.FAIL, msg='Memcache running check' + 'failed {}'.format(ret)) + else: + self.log.debug('OK') + self.log.debug('Checking memcache url is configured in {}'.format( + conf)) + if self.validate_config_data(sentry_unit, conf, section, _kvs): + message = "Memcache config error in: {}".format(conf) + amulet.raise_status(amulet.FAIL, msg=message) + else: + self.log.debug('OK') + self.log.debug('Checking memcache configuration in ' + '/etc/memcached.conf') + contents = self.file_contents_safe(sentry_unit, '/etc/memcached.conf', + fatal=True) + ubuntu_release, _ = self.run_cmd_unit(sentry_unit, 'lsb_release -cs') + if CompareHostReleases(ubuntu_release) <= 'trusty': + memcache_listen_addr = 'ip6-localhost' + else: + memcache_listen_addr = '::1' + expected = { + '-p': '11211', + '-l': memcache_listen_addr} + found = [] + for key, value in expected.items(): + for line in contents.split('\n'): + if line.startswith(key): + self.log.debug('Checking {} is set to {}'.format( + key, + value)) + assert value == line.split()[-1] + self.log.debug(line.split()[-1]) + found.append(key) + if sorted(found) == sorted(expected.keys()): + self.log.debug('OK') + else: + message = "Memcache config error in: /etc/memcached.conf" + amulet.raise_status(amulet.FAIL, msg=message) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/audits/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/audits/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..7f7e5f79a5d5fe3cb374814e32ea16f6060f4f27 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/audits/__init__.py @@ -0,0 +1,212 @@ +# Copyright 2019 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""OpenStack Security Audit code"""
+
+import collections
+from enum import Enum
+import traceback
+
+from charmhelpers.core.host import cmp_pkgrevno
+import charmhelpers.contrib.openstack.utils as openstack_utils
+import charmhelpers.core.hookenv as hookenv
+
+
+class AuditType(Enum):
+    OpenStackSecurityGuide = 1
+
+
+_audits = {}
+
+Audit = collections.namedtuple('Audit', 'func filters')
+
+
+def audit(*args):
+    """Decorator to register an audit.
+
+    These are used to generate audits that can be run on a
+    deployed system that matches the given configuration.
+
+    :param args: List of functions to filter tests against
+    :type args: List[Callable[Dict]]
+    """
+    def wrapper(f):
+        test_name = f.__name__
+        if _audits.get(test_name):
+            raise RuntimeError(
+                "Test name '{}' used more than once"
+                .format(test_name))
+        non_callables = [fn for fn in args if not callable(fn)]
+        if non_callables:
+            raise RuntimeError(
+                "Configuration includes non-callable filters: {}"
+                .format(non_callables))
+        _audits[test_name] = Audit(func=f, filters=args)
+        return f
+    return wrapper
+
+
+def is_audit_type(*args):
+    """Filter: include this audit in the specified kinds of audits.
+
+    :param args: List of AuditTypes to include this audit in
+    :type args: List[AuditType]
+    :rtype: Callable[Dict]
+    """
+    def _is_audit_type(audit_options):
+        return audit_options.get('audit_type') in args
+    return _is_audit_type
+
+
+def since_package(pkg, pkg_version):
+    """This audit should be run after the specified package version (incl).
+
+    :param pkg: Package name to compare
+    :type pkg: str
+    :param pkg_version: The package version
+    :type pkg_version: str
+    :rtype: Callable[Dict]
+    """
+    def _since_package(audit_options=None):
+        return cmp_pkgrevno(pkg, pkg_version) >= 0
+
+    return _since_package
+
+
+def before_package(pkg, pkg_version):
+    """This audit should be run before the specified package version (excl).
+
+    :param pkg: Package name to compare
+    :type pkg: str
+    :param pkg_version: The package version
+    :type pkg_version: str
+    :rtype: Callable[Dict]
+    """
+    def _before_package(audit_options=None):
+        return not since_package(pkg, pkg_version)()
+
+    return _before_package
+
+
+def since_openstack_release(pkg, release):
+    """This audit should run after the specified OpenStack version (incl).
+
+    :param pkg: Package name to compare
+    :type pkg: str
+    :param release: The OpenStack release codename
+    :type release: str
+    :rtype: Callable[Dict]
+    """
+    def _since_openstack_release(audit_options=None):
+        _release = openstack_utils.get_os_codename_package(pkg)
+        return openstack_utils.CompareOpenStackReleases(_release) >= release
+
+    return _since_openstack_release
+
+
+def before_openstack_release(pkg, release):
+    """This audit should run before the specified OpenStack version (excl).
+
+    :param pkg: Package name to compare
+    :type pkg: str
+    :param release: The OpenStack release codename
+    :type release: str
+    :rtype: Callable[Dict]
+    """
+    def _before_openstack_release(audit_options=None):
+        return not since_openstack_release(pkg, release)()
+
+    return _before_openstack_release
+
+
+def it_has_config(config_key):
+    """This audit should be run based on specified config keys.
+
+    :param config_key: Config key to look for
+    :type config_key: str
+    :rtype: Callable[Dict]
+    """
+    def _it_has_config(audit_options):
+        return audit_options.get(config_key) is not None
+
+    return _it_has_config
+
+
+def run(audit_options):
+    """Run the configured audits with the specified audit_options.
+
+    :param audit_options: Configuration for the audit
+    :type audit_options: Config
+
+    :rtype: Dict[str, str]
+    """
+    errors = {}
+    results = {}
+    for name, audit in sorted(_audits.items()):
+        result_name = name.replace('_', '-')
+        if result_name in audit_options.get('excludes', []):
+            print(
+                "Skipping {} because it is "
+                "excluded in audit config"
+                .format(result_name))
+            continue
+        if all(p(audit_options) for p in audit.filters):
+            try:
+                audit.func(audit_options)
+                print("{}: PASS".format(name))
+                results[result_name] = {
+                    'success': True,
+                }
+            except AssertionError as e:
+                print("{}: FAIL ({})".format(name, e))
+                results[result_name] = {
+                    'success': False,
+                    'message': e,
+                }
+            except Exception as e:
+                print("{}: ERROR ({})".format(name, e))
+                errors[name] = e
+                results[result_name] = {
+                    'success': False,
+                    'message': e,
+                }
+    for name, error in errors.items():
+        print("=" * 20)
+        print("Error in {}: ".format(name))
+        traceback.print_tb(error.__traceback__)
+        print()
+    return results
+
+
+def action_parse_results(result):
+    """Parse the result of `run` in the context of an action.
+
+    :param result: The result of running the security-checklist
+        action on a unit
+    :type result: Dict[str, Dict[str, str]]
+    :rtype: int
+    """
+    passed = True
+    for test, outcome in result.items():
+        if outcome['success']:
+            hookenv.action_set({test: 'PASS'})
+        else:
+            hookenv.action_set({test: 'FAIL - {}'.format(outcome['message'])})
+            passed = False
+    if not passed:
+        hookenv.action_fail("One or more tests failed")
+    return 0 if passed else 1
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/audits/openstack_security_guide.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/audits/openstack_security_guide.py
new file mode 100644
index 0000000000000000000000000000000000000000..79740ed0c103e841b6e280af920f8be65e3d1d0d
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/audits/openstack_security_guide.py
@@ -0,0 +1,270 @@
+# Copyright 2019 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
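The decorator and filter helpers in `audits/__init__.py` above compose as follows: a charm registers functions with `@audit(...)`, the filter closures gate whether each one applies, and `run()` executes whatever survives. A minimal sketch of that flow (editorial illustration, not part of this patch; the audit name and the `check-debug`/`debug` option keys are hypothetical):

```python
# Illustrative only: registers one audit and runs it with a dict of
# audit options, mirroring the API defined in audits/__init__.py above.
from charmhelpers.contrib.openstack.audits import (
    AuditType,
    audit,
    is_audit_type,
    it_has_config,
    run,
)


@audit(is_audit_type(AuditType.OpenStackSecurityGuide),
       it_has_config('check-debug'))
def debug_is_disabled(audit_options):
    # Audits signal failure via AssertionError, which run() records as
    # {'success': False, 'message': ...} for that audit.
    assert not audit_options.get('debug'), "debug must be disabled"


options = {
    'audit_type': AuditType.OpenStackSecurityGuide,
    'check-debug': True,   # hypothetical config key used by the filter
    'debug': False,
    'excludes': [],
}
print(run(options))  # -> {'debug-is-disabled': {'success': True}}
```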
+ +import collections +import configparser +import glob +import os.path +import subprocess + +from charmhelpers.contrib.openstack.audits import ( + audit, + AuditType, + # filters + is_audit_type, + it_has_config, +) + +from charmhelpers.core.hookenv import ( + cached, +) + +""" +The Security Guide suggests a specific list of files inside the +config directory for the service having 640 specifically, but +by ensuring the containing directory is 750, only the owner can +write, and only the group can read files within the directory. + +By restricting access to the containing directory, we can more +effectively ensure that there is no accidental leakage if a new +file is added to the service without being added to the security +guide, and to this check. +""" +FILE_ASSERTIONS = { + 'barbican': { + '/etc/barbican': {'group': 'barbican', 'mode': '750'}, + }, + 'ceph-mon': { + '/var/lib/charm/ceph-mon/ceph.conf': + {'owner': 'root', 'group': 'root', 'mode': '644'}, + '/etc/ceph/ceph.client.admin.keyring': + {'owner': 'ceph', 'group': 'ceph'}, + '/etc/ceph/rbdmap': {'mode': '644'}, + '/var/lib/ceph': {'owner': 'ceph', 'group': 'ceph', 'mode': '750'}, + '/var/lib/ceph/bootstrap-*/ceph.keyring': + {'owner': 'ceph', 'group': 'ceph', 'mode': '600'} + }, + 'ceph-osd': { + '/var/lib/charm/ceph-osd/ceph.conf': + {'owner': 'ceph', 'group': 'ceph', 'mode': '644'}, + '/var/lib/ceph': {'owner': 'ceph', 'group': 'ceph', 'mode': '750'}, + '/var/lib/ceph/*': {'owner': 'ceph', 'group': 'ceph', 'mode': '755'}, + '/var/lib/ceph/bootstrap-*/ceph.keyring': + {'owner': 'ceph', 'group': 'ceph', 'mode': '600'}, + '/var/lib/ceph/radosgw': + {'owner': 'ceph', 'group': 'ceph', 'mode': '755'}, + }, + 'cinder': { + '/etc/cinder': {'group': 'cinder', 'mode': '750'}, + }, + 'glance': { + '/etc/glance': {'group': 'glance', 'mode': '750'}, + }, + 'keystone': { + '/etc/keystone': + {'owner': 'keystone', 'group': 'keystone', 'mode': '750'}, + }, + 'manilla': { + '/etc/manila': {'group': 'manilla', 'mode': '750'}, + }, + 'neutron-gateway': { + '/etc/neutron': {'group': 'neutron', 'mode': '750'}, + }, + 'neutron-api': { + '/etc/neutron/': {'group': 'neutron', 'mode': '750'}, + }, + 'nova-cloud-controller': { + '/etc/nova': {'group': 'nova', 'mode': '750'}, + }, + 'nova-compute': { + '/etc/nova/': {'group': 'nova', 'mode': '750'}, + }, + 'openstack-dashboard': { + # From security guide + '/etc/openstack-dashboard/local_settings.py': + {'group': 'horizon', 'mode': '640'}, + }, +} + +Ownership = collections.namedtuple('Ownership', 'owner group mode') + + +@cached +def _stat(file): + """ + Get the Ownership information from a file. + + :param file: The path to a file to stat + :type file: str + :returns: owner, group, and mode of the specified file + :rtype: Ownership + :raises subprocess.CalledProcessError: If the underlying stat fails + """ + out = subprocess.check_output( + ['stat', '-c', '%U %G %a', file]).decode('utf-8') + return Ownership(*out.strip().split(' ')) + + +@cached +def _config_ini(path): + """ + Parse an ini file + + :param path: The path to a file to parse + :type file: str + :returns: Configuration contained in path + :rtype: Dict + """ + # When strict is enabled, duplicate options are not allowed in the + # parsed INI; however, Oslo allows duplicate values. 
This change
+    # causes us to ignore the duplicate values, which is acceptable as
+    # long as we don't validate any multi-value options.
+    conf = configparser.ConfigParser(strict=False)
+    conf.read(path)
+    return dict(conf)
+
+
+def _validate_file_ownership(owner, group, file_name, optional=False):
+    """
+    Validate that a specified file is owned by `owner:group`.
+
+    :param owner: Name of the owner
+    :type owner: str
+    :param group: Name of the group
+    :type group: str
+    :param file_name: Path to the file to verify
+    :type file_name: str
+    :param optional: Is this file optional,
+                     i.e. should this test fail when it's missing
+    :type optional: bool
+    """
+    try:
+        ownership = _stat(file_name)
+    except subprocess.CalledProcessError as e:
+        print("Error reading file: {}".format(e))
+        if not optional:
+            assert False, "Specified file does not exist: {}".format(file_name)
+        # The file is optional and missing, so there is nothing to check.
+        return
+    assert owner == ownership.owner, \
+        "{} has an incorrect owner: {} should be {}".format(
+            file_name, ownership.owner, owner)
+    assert group == ownership.group, \
+        "{} has an incorrect group: {} should be {}".format(
+            file_name, ownership.group, group)
+    print("Validate ownership of {}: PASS".format(file_name))
+
+
+def _validate_file_mode(mode, file_name, optional=False):
+    """
+    Validate that a specified file has the specified permissions.
+
+    :param mode: The desired file mode
+    :type mode: str
+    :param file_name: Path to the file to verify
+    :type file_name: str
+    :param optional: Is this file optional,
+                     i.e. should this test fail when it's missing
+    :type optional: bool
+    """
+    try:
+        ownership = _stat(file_name)
+    except subprocess.CalledProcessError as e:
+        print("Error reading file: {}".format(e))
+        if not optional:
+            assert False, "Specified file does not exist: {}".format(file_name)
+        # The file is optional and missing, so there is nothing to check.
+        return
+    assert mode == ownership.mode, \
+        "{} has an incorrect mode: {} should be {}".format(
+            file_name, ownership.mode, mode)
+    print("Validate mode of {}: PASS".format(file_name))
+
+
+@cached
+def _config_section(config, section):
+    """Read the configuration file and return a section."""
+    path = os.path.join(config.get('config_path'), config.get('config_file'))
+    conf = _config_ini(path)
+    return conf.get(section)
+
+
+@audit(is_audit_type(AuditType.OpenStackSecurityGuide),
+       it_has_config('files'))
+def validate_file_ownership(config):
+    """Verify that configuration files are owned by the correct user/group."""
+    files = config.get('files', {})
+    for file_name, options in files.items():
+        for key in options.keys():
+            if key not in ["owner", "group", "mode"]:
+                raise RuntimeError(
+                    "Invalid ownership configuration: {}".format(key))
+        owner = options.get('owner', config.get('owner', 'root'))
+        group = options.get('group', config.get('group', 'root'))
+        optional = options.get('optional', config.get('optional', False))
+        if '*' in file_name:
+            for file in glob.glob(file_name):
+                if file not in files.keys():
+                    if os.path.isfile(file):
+                        _validate_file_ownership(owner, group, file, optional)
+        else:
+            if os.path.isfile(file_name):
+                _validate_file_ownership(owner, group, file_name, optional)
+
+
+@audit(is_audit_type(AuditType.OpenStackSecurityGuide),
+       it_has_config('files'))
+def validate_file_permissions(config):
+    """Verify that permissions on configuration files are secure enough."""
+    files = config.get('files', {})
+    for file_name, options in files.items():
+        for key in options.keys():
+            if key not in ["owner", "group", "mode"]:
+                raise RuntimeError(
+                    "Invalid ownership configuration: {}".format(key))
+        mode = options.get('mode',
config.get('permissions', '600'))
+        optional = options.get('optional', config.get('optional', False))
+        if '*' in file_name:
+            for file in glob.glob(file_name):
+                if file not in files.keys():
+                    if os.path.isfile(file):
+                        _validate_file_mode(mode, file, optional)
+        else:
+            if os.path.isfile(file_name):
+                _validate_file_mode(mode, file_name, optional)
+
+
+@audit(is_audit_type(AuditType.OpenStackSecurityGuide))
+def validate_uses_keystone(audit_options):
+    """Validate that the service uses Keystone for authentication."""
+    section = (_config_section(audit_options, 'api') or
+               _config_section(audit_options, 'DEFAULT'))
+    assert section is not None, "Missing section 'api / DEFAULT'"
+    assert section.get('auth_strategy') == "keystone", \
+        "Application is not using Keystone"
+
+
+@audit(is_audit_type(AuditType.OpenStackSecurityGuide))
+def validate_uses_tls_for_keystone(audit_options):
+    """Verify that TLS is used to communicate with Keystone."""
+    section = _config_section(audit_options, 'keystone_authtoken')
+    assert section is not None, "Missing section 'keystone_authtoken'"
+    assert not section.get('insecure') and \
+        "https://" in section.get("auth_uri"), \
+        "TLS is not used for Keystone"
+
+
+@audit(is_audit_type(AuditType.OpenStackSecurityGuide))
+def validate_uses_tls_for_glance(audit_options):
+    """Verify that TLS is used to communicate with Glance."""
+    section = _config_section(audit_options, 'glance')
+    assert section is not None, "Missing section 'glance'"
+    assert not section.get('insecure') and \
+        "https://" in section.get("api_servers"), \
+        "TLS is not used for Glance"
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/cert_utils.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/cert_utils.py
new file mode 100644
index 0000000000000000000000000000000000000000..b494af64aeae55db44b669990725de81705104b2
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/cert_utils.py
@@ -0,0 +1,289 @@
+# Copyright 2014-2018 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# Common python helper functions used for OpenStack charm certificates.
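For orientation, this is roughly how the security-guide audits in `openstack_security_guide.py` above are driven: a charm builds an `audit_options` dict (a `FILE_ASSERTIONS`-style `files` mapping plus the config-file location consumed by `_config_section()`) and hands it to `run()`. A sketch under those assumptions (the keystone paths are illustrative, and actually executing it requires those files to exist on disk):

```python
# Illustrative only: wires FILE_ASSERTIONS-style data into the audits
# defined in openstack_security_guide.py above.
from charmhelpers.contrib.openstack.audits import AuditType, run
from charmhelpers.contrib.openstack.audits import (
    openstack_security_guide,  # importing registers its @audit functions
)

audit_options = {
    'audit_type': AuditType.OpenStackSecurityGuide,
    # Per-file 'owner'/'group'/'mode' entries override these defaults.
    'files': openstack_security_guide.FILE_ASSERTIONS['keystone'],
    # Consumed by _config_section() for the config-based checks.
    'config_path': '/etc/keystone',
    'config_file': 'keystone.conf',
    'excludes': [],
}

results = run(audit_options)
```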
+
+import os
+import json
+
+from charmhelpers.contrib.network.ip import (
+    get_hostname,
+    resolve_network_cidr,
+)
+from charmhelpers.core.hookenv import (
+    local_unit,
+    network_get_primary_address,
+    config,
+    related_units,
+    relation_get,
+    relation_ids,
+    unit_get,
+    NoNetworkBinding,
+    log,
+    WARNING,
+)
+from charmhelpers.contrib.openstack.ip import (
+    ADMIN,
+    resolve_address,
+    get_vip_in_network,
+    INTERNAL,
+    PUBLIC,
+    ADDRESS_MAP)
+
+from charmhelpers.core.host import (
+    mkdir,
+    write_file,
+)
+
+from charmhelpers.contrib.hahelpers.apache import (
+    install_ca_cert
+)
+
+
+class CertRequest(object):
+
+    """Create a request for certificates to be generated
+    """
+
+    def __init__(self, json_encode=True):
+        self.entries = []
+        self.hostname_entry = None
+        self.json_encode = json_encode
+
+    def add_entry(self, net_type, cn, addresses):
+        """Add a request to the batch
+
+        :param net_type: str network space name the request is for
+        :param cn: str Canonical Name for certificate
+        :param addresses: [] List of addresses to be used as SANs
+        """
+        self.entries.append({
+            'cn': cn,
+            'addresses': addresses})
+
+    def add_hostname_cn(self):
+        """Add a request for the hostname of the machine"""
+        ip = unit_get('private-address')
+        addresses = [ip]
+        # If a vip is being used without os-hostname config or
+        # network spaces then we need to ensure the local unit's
+        # cert has the appropriate vip in the SAN list
+        vip = get_vip_in_network(resolve_network_cidr(ip))
+        if vip:
+            addresses.append(vip)
+        self.hostname_entry = {
+            'cn': get_hostname(ip),
+            'addresses': addresses}
+
+    def add_hostname_cn_ip(self, addresses):
+        """Add an address to the SAN list for the hostname request
+
+        :param addresses: [] List of addresses to be added
+        """
+        for addr in addresses:
+            if addr not in self.hostname_entry['addresses']:
+                self.hostname_entry['addresses'].append(addr)
+
+    def get_request(self):
+        """Generate request from the batched up entries
+
+        """
+        if self.hostname_entry:
+            self.entries.append(self.hostname_entry)
+        request = {}
+        for entry in self.entries:
+            sans = sorted(list(set(entry['addresses'])))
+            request[entry['cn']] = {'sans': sans}
+        if self.json_encode:
+            req = {'cert_requests': json.dumps(request, sort_keys=True)}
+        else:
+            req = {'cert_requests': request}
+        req['unit_name'] = local_unit().replace('/', '_')
+        return req
+
+
+def get_certificate_request(json_encode=True):
+    """Generate a certificate request based on the network configuration
+
+    """
+    req = CertRequest(json_encode=json_encode)
+    req.add_hostname_cn()
+    # Add os-hostname entries
+    for net_type in [INTERNAL, ADMIN, PUBLIC]:
+        net_config = config(ADDRESS_MAP[net_type]['override'])
+        try:
+            net_addr = resolve_address(endpoint_type=net_type)
+            ip = network_get_primary_address(
+                ADDRESS_MAP[net_type]['binding'])
+            addresses = [net_addr, ip]
+            vip = get_vip_in_network(resolve_network_cidr(ip))
+            if vip:
+                addresses.append(vip)
+            if net_config:
+                req.add_entry(
+                    net_type,
+                    net_config,
+                    addresses)
+            else:
+                # There is a network address with no corresponding hostname.
+                # Add the ip to the hostname cert to allow for this.
+ req.add_hostname_cn_ip(addresses) + except NoNetworkBinding: + log("Skipping request for certificate for ip in {} space, no " + "local address found".format(net_type), WARNING) + return req.get_request() + + +def create_ip_cert_links(ssl_dir, custom_hostname_link=None): + """Create symlinks for SAN records + + :param ssl_dir: str Directory to create symlinks in + :param custom_hostname_link: str Additional link to be created + """ + hostname = get_hostname(unit_get('private-address')) + hostname_cert = os.path.join( + ssl_dir, + 'cert_{}'.format(hostname)) + hostname_key = os.path.join( + ssl_dir, + 'key_{}'.format(hostname)) + # Add links to hostname cert, used if os-hostname vars not set + for net_type in [INTERNAL, ADMIN, PUBLIC]: + try: + addr = resolve_address(endpoint_type=net_type) + cert = os.path.join(ssl_dir, 'cert_{}'.format(addr)) + key = os.path.join(ssl_dir, 'key_{}'.format(addr)) + if os.path.isfile(hostname_cert) and not os.path.isfile(cert): + os.symlink(hostname_cert, cert) + os.symlink(hostname_key, key) + except NoNetworkBinding: + log("Skipping creating cert symlink for ip in {} space, no " + "local address found".format(net_type), WARNING) + if custom_hostname_link: + custom_cert = os.path.join( + ssl_dir, + 'cert_{}'.format(custom_hostname_link)) + custom_key = os.path.join( + ssl_dir, + 'key_{}'.format(custom_hostname_link)) + if os.path.isfile(hostname_cert) and not os.path.isfile(custom_cert): + os.symlink(hostname_cert, custom_cert) + os.symlink(hostname_key, custom_key) + + +def install_certs(ssl_dir, certs, chain=None, user='root', group='root'): + """Install the certs passed into the ssl dir and append the chain if + provided. + + :param ssl_dir: str Directory to create symlinks in + :param certs: {} {'cn': {'cert': 'CERT', 'key': 'KEY'}} + :param chain: str Chain to be appended to certs + :param user: (Optional) Owner of certificate files. Defaults to 'root' + :type user: str + :param group: (Optional) Group of certificate files. Defaults to 'root' + :type group: str + """ + for cn, bundle in certs.items(): + cert_filename = 'cert_{}'.format(cn) + key_filename = 'key_{}'.format(cn) + cert_data = bundle['cert'] + if chain: + # Append chain file so that clients that trust the root CA will + # trust certs signed by an intermediate in the chain + cert_data = cert_data + os.linesep + chain + write_file( + path=os.path.join(ssl_dir, cert_filename), owner=user, group=group, + content=cert_data, perms=0o640) + write_file( + path=os.path.join(ssl_dir, key_filename), owner=user, group=group, + content=bundle['key'], perms=0o640) + + +def process_certificates(service_name, relation_id, unit, + custom_hostname_link=None, user='root', group='root'): + """Process the certificates supplied down the relation + + :param service_name: str Name of service the certifcates are for. + :param relation_id: str Relation id providing the certs + :param unit: str Unit providing the certs + :param custom_hostname_link: str Name of custom link to create + :param user: (Optional) Owner of certificate files. Defaults to 'root' + :type user: str + :param group: (Optional) Group of certificate files. 
Defaults to 'root' + :type group: str + :returns: True if certificates processed for local unit or False + :rtype: bool + """ + data = relation_get(rid=relation_id, unit=unit) + ssl_dir = os.path.join('/etc/apache2/ssl/', service_name) + mkdir(path=ssl_dir) + name = local_unit().replace('/', '_') + certs = data.get('{}.processed_requests'.format(name)) + chain = data.get('chain') + ca = data.get('ca') + if certs: + certs = json.loads(certs) + install_ca_cert(ca.encode()) + install_certs(ssl_dir, certs, chain, user=user, group=group) + create_ip_cert_links( + ssl_dir, + custom_hostname_link=custom_hostname_link) + return True + return False + + +def get_requests_for_local_unit(relation_name=None): + """Extract any certificates data targeted at this unit down relation_name. + + :param relation_name: str Name of relation to check for data. + :returns: List of bundles of certificates. + :rtype: List of dicts + """ + local_name = local_unit().replace('/', '_') + raw_certs_key = '{}.processed_requests'.format(local_name) + relation_name = relation_name or 'certificates' + bundles = [] + for rid in relation_ids(relation_name): + for unit in related_units(rid): + data = relation_get(rid=rid, unit=unit) + if data.get(raw_certs_key): + bundles.append({ + 'ca': data['ca'], + 'chain': data.get('chain'), + 'certs': json.loads(data[raw_certs_key])}) + return bundles + + +def get_bundle_for_cn(cn, relation_name=None): + """Extract certificates for the given cn. + + :param cn: str Canonical Name on certificate. + :param relation_name: str Relation to check for certificates down. + :returns: Dictionary of certificate data, + :rtype: dict. + """ + entries = get_requests_for_local_unit(relation_name) + cert_bundle = {} + for entry in entries: + for _cn, bundle in entry['certs'].items(): + if _cn == cn: + cert_bundle = { + 'cert': bundle['cert'], + 'key': bundle['key'], + 'chain': entry['chain'], + 'ca': entry['ca']} + break + if cert_bundle: + break + return cert_bundle diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/context.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/context.py new file mode 100644 index 0000000000000000000000000000000000000000..42abccf7cb9eabc063bc8d87f016e59f4dc8f898 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/context.py @@ -0,0 +1,3177 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
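Taken together, the helpers above implement both halves of the certificates exchange: the requesting unit publishes `get_certificate_request()` on the relation, and once the CA charm answers, `process_certificates()` installs whatever was issued. A sketch of how a consuming charm's hooks might call them (editorial illustration; the hook function names and the 'myservice' label are hypothetical, and this assumes execution inside a Juju hook context):

```python
# Illustrative only: the two sides of the 'certificates' relation as a
# consuming charm might wire them up using the helpers above.
from charmhelpers.core.hookenv import (
    related_units,
    relation_ids,
    relation_set,
)
from charmhelpers.contrib.openstack.cert_utils import (
    get_certificate_request,
    process_certificates,
)


def certificates_relation_joined():
    # Publish this unit's CN/SAN request for the CA charm to sign.
    for rid in relation_ids('certificates'):
        relation_set(relation_id=rid,
                     relation_settings=get_certificate_request())


def certificates_relation_changed():
    # Install any certs issued for this unit; process_certificates()
    # returns True only if the local unit's request was answered and
    # certificate files were written out.
    for rid in relation_ids('certificates'):
        for unit in related_units(rid):
            if process_certificates('myservice', rid, unit):
                print('TLS material updated for myservice')
```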
+ +import collections +import copy +import enum +import glob +import hashlib +import json +import math +import os +import re +import socket +import time + +from base64 import b64decode +from subprocess import check_call, CalledProcessError + +import six + +from charmhelpers.contrib.openstack.audits.openstack_security_guide import ( + _config_ini as config_ini +) + +from charmhelpers.fetch import ( + apt_install, + filter_installed_packages, +) +from charmhelpers.core.hookenv import ( + NoNetworkBinding, + config, + is_relation_made, + local_unit, + log, + relation_get, + relation_ids, + related_units, + relation_set, + unit_get, + unit_private_ip, + charm_name, + DEBUG, + INFO, + ERROR, + status_set, + network_get_primary_address, + WARNING, +) + +from charmhelpers.core.sysctl import create as sysctl_create +from charmhelpers.core.strutils import bool_from_string +from charmhelpers.contrib.openstack.exceptions import OSContextError + +from charmhelpers.core.host import ( + get_bond_master, + is_phy_iface, + list_nics, + get_nic_hwaddr, + mkdir, + write_file, + pwgen, + lsb_release, + CompareHostReleases, + is_container, +) +from charmhelpers.contrib.hahelpers.cluster import ( + determine_apache_port, + determine_api_port, + https, + is_clustered, +) +from charmhelpers.contrib.hahelpers.apache import ( + get_cert, + get_ca_cert, + install_ca_cert, +) +from charmhelpers.contrib.openstack.neutron import ( + neutron_plugin_attribute, + parse_data_port_mappings, +) +from charmhelpers.contrib.openstack.ip import ( + resolve_address, + INTERNAL, + ADMIN, + PUBLIC, + ADDRESS_MAP, +) +from charmhelpers.contrib.network.ip import ( + get_address_in_network, + get_ipv4_addr, + get_ipv6_addr, + get_netmask_for_address, + format_ipv6_addr, + is_bridge_member, + is_ipv6_disabled, + get_relation_ip, +) +from charmhelpers.contrib.openstack.utils import ( + config_flags_parser, + get_os_codename_install_source, + enable_memcache, + CompareOpenStackReleases, + os_release, +) +from charmhelpers.core.unitdata import kv + +try: + from sriov_netplan_shim import pci +except ImportError: + # The use of the function and contexts that require the pci module is + # optional. + pass + +try: + import psutil +except ImportError: + if six.PY2: + apt_install('python-psutil', fatal=True) + else: + apt_install('python3-psutil', fatal=True) + import psutil + +CA_CERT_PATH = '/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt' +ADDRESS_TYPES = ['admin', 'internal', 'public'] +HAPROXY_RUN_DIR = '/var/run/haproxy/' +DEFAULT_OSLO_MESSAGING_DRIVER = "messagingv2" + + +def ensure_packages(packages): + """Install but do not upgrade required plugin packages.""" + required = filter_installed_packages(packages) + if required: + apt_install(required, fatal=True) + + +def context_complete(ctxt): + _missing = [] + for k, v in six.iteritems(ctxt): + if v is None or v == '': + _missing.append(k) + + if _missing: + log('Missing required data: %s' % ' '.join(_missing), level=INFO) + return False + + return True + + +class OSContextGenerator(object): + """Base class for all context generators.""" + interfaces = [] + related = False + complete = False + missing_data = [] + + def __call__(self): + raise NotImplementedError + + def context_complete(self, ctxt): + """Check for missing data for the required context data. + Set self.missing_data if it exists and return False. + Set self.complete if no missing data and return True. 
+ """ + # Fresh start + self.complete = False + self.missing_data = [] + for k, v in six.iteritems(ctxt): + if v is None or v == '': + if k not in self.missing_data: + self.missing_data.append(k) + + if self.missing_data: + self.complete = False + log('Missing required data: %s' % ' '.join(self.missing_data), + level=INFO) + else: + self.complete = True + return self.complete + + def get_related(self): + """Check if any of the context interfaces have relation ids. + Set self.related and return True if one of the interfaces + has relation ids. + """ + # Fresh start + self.related = False + try: + for interface in self.interfaces: + if relation_ids(interface): + self.related = True + return self.related + except AttributeError as e: + log("{} {}" + "".format(self, e), 'INFO') + return self.related + + +class SharedDBContext(OSContextGenerator): + interfaces = ['shared-db'] + + def __init__(self, database=None, user=None, relation_prefix=None, + ssl_dir=None, relation_id=None): + """Allows inspecting relation for settings prefixed with + relation_prefix. This is useful for parsing access for multiple + databases returned via the shared-db interface (eg, nova_password, + quantum_password) + """ + self.relation_prefix = relation_prefix + self.database = database + self.user = user + self.ssl_dir = ssl_dir + self.rel_name = self.interfaces[0] + self.relation_id = relation_id + + def __call__(self): + self.database = self.database or config('database') + self.user = self.user or config('database-user') + if None in [self.database, self.user]: + log("Could not generate shared_db context. Missing required charm " + "config options. (database name and user)", level=ERROR) + raise OSContextError + + ctxt = {} + + # NOTE(jamespage) if mysql charm provides a network upon which + # access to the database should be made, reconfigure relation + # with the service units local address and defer execution + access_network = relation_get('access-network') + if access_network is not None: + if self.relation_prefix is not None: + hostname_key = "{}_hostname".format(self.relation_prefix) + else: + hostname_key = "hostname" + access_hostname = get_address_in_network( + access_network, + unit_get('private-address')) + set_hostname = relation_get(attribute=hostname_key, + unit=local_unit()) + if set_hostname != access_hostname: + relation_set(relation_settings={hostname_key: access_hostname}) + return None # Defer any further hook execution for now.... + + password_setting = 'password' + if self.relation_prefix: + password_setting = self.relation_prefix + '_password' + + if self.relation_id: + rids = [self.relation_id] + else: + rids = relation_ids(self.interfaces[0]) + + rel = (get_os_codename_install_source(config('openstack-origin')) or + 'icehouse') + for rid in rids: + self.related = True + for unit in related_units(rid): + rdata = relation_get(rid=rid, unit=unit) + host = rdata.get('db_host') + host = format_ipv6_addr(host) or host + ctxt = { + 'database_host': host, + 'database': self.database, + 'database_user': self.user, + 'database_password': rdata.get(password_setting), + 'database_type': 'mysql+pymysql' + } + # Port is being introduced with LP Bug #1876188 + # but it not currently required and may not be set in all + # cases, particularly in classic charms. 
+ port = rdata.get('db_port') + if port: + ctxt['database_port'] = port + if CompareOpenStackReleases(rel) < 'queens': + ctxt['database_type'] = 'mysql' + if self.context_complete(ctxt): + db_ssl(rdata, ctxt, self.ssl_dir) + return ctxt + return {} + + +class PostgresqlDBContext(OSContextGenerator): + interfaces = ['pgsql-db'] + + def __init__(self, database=None): + self.database = database + + def __call__(self): + self.database = self.database or config('database') + if self.database is None: + log('Could not generate postgresql_db context. Missing required ' + 'charm config options. (database name)', level=ERROR) + raise OSContextError + + ctxt = {} + for rid in relation_ids(self.interfaces[0]): + self.related = True + for unit in related_units(rid): + rel_host = relation_get('host', rid=rid, unit=unit) + rel_user = relation_get('user', rid=rid, unit=unit) + rel_passwd = relation_get('password', rid=rid, unit=unit) + ctxt = {'database_host': rel_host, + 'database': self.database, + 'database_user': rel_user, + 'database_password': rel_passwd, + 'database_type': 'postgresql'} + if self.context_complete(ctxt): + return ctxt + + return {} + + +def db_ssl(rdata, ctxt, ssl_dir): + if 'ssl_ca' in rdata and ssl_dir: + ca_path = os.path.join(ssl_dir, 'db-client.ca') + with open(ca_path, 'wb') as fh: + fh.write(b64decode(rdata['ssl_ca'])) + + ctxt['database_ssl_ca'] = ca_path + elif 'ssl_ca' in rdata: + log("Charm not setup for ssl support but ssl ca found", level=INFO) + return ctxt + + if 'ssl_cert' in rdata: + cert_path = os.path.join( + ssl_dir, 'db-client.cert') + if not os.path.exists(cert_path): + log("Waiting 1m for ssl client cert validity", level=INFO) + time.sleep(60) + + with open(cert_path, 'wb') as fh: + fh.write(b64decode(rdata['ssl_cert'])) + + ctxt['database_ssl_cert'] = cert_path + key_path = os.path.join(ssl_dir, 'db-client.key') + with open(key_path, 'wb') as fh: + fh.write(b64decode(rdata['ssl_key'])) + + ctxt['database_ssl_key'] = key_path + + return ctxt + + +class IdentityServiceContext(OSContextGenerator): + + def __init__(self, + service=None, + service_user=None, + rel_name='identity-service'): + self.service = service + self.service_user = service_user + self.rel_name = rel_name + self.interfaces = [self.rel_name] + + def _setup_pki_cache(self): + if self.service and self.service_user: + # This is required for pki token signing if we don't want /tmp to + # be used. + cachedir = '/var/cache/%s' % (self.service) + if not os.path.isdir(cachedir): + log("Creating service cache dir %s" % (cachedir), level=DEBUG) + mkdir(path=cachedir, owner=self.service_user, + group=self.service_user, perms=0o700) + + return cachedir + return None + + def _get_pkg_name(self, python_name='keystonemiddleware'): + """Get corresponding distro installed package for python + package name. + + :param python_name: nameof the python package + :type: string + """ + pkg_names = map(lambda x: x + python_name, ('python3-', 'python-')) + + for pkg in pkg_names: + if not filter_installed_packages((pkg,)): + return pkg + + return None + + def _get_keystone_authtoken_ctxt(self, ctxt, keystonemiddleware_os_rel): + """Build Jinja2 context for full rendering of [keystone_authtoken] + section with variable names included. Re-constructed from former + template 'section-keystone-auth-mitaka'. 
+ + :param ctxt: Jinja2 context returned from self.__call__() + :type: dict + :param keystonemiddleware_os_rel: OpenStack release name of + keystonemiddleware package installed + """ + c = collections.OrderedDict((('auth_type', 'password'),)) + + # 'www_authenticate_uri' replaced 'auth_uri' since Stein, + # see keystonemiddleware upstream sources for more info + if CompareOpenStackReleases(keystonemiddleware_os_rel) >= 'stein': + c.update(( + ('www_authenticate_uri', "{}://{}:{}/v3".format( + ctxt.get('service_protocol', ''), + ctxt.get('service_host', ''), + ctxt.get('service_port', ''))),)) + else: + c.update(( + ('auth_uri', "{}://{}:{}/v3".format( + ctxt.get('service_protocol', ''), + ctxt.get('service_host', ''), + ctxt.get('service_port', ''))),)) + + c.update(( + ('auth_url', "{}://{}:{}/v3".format( + ctxt.get('auth_protocol', ''), + ctxt.get('auth_host', ''), + ctxt.get('auth_port', ''))), + ('project_domain_name', ctxt.get('admin_domain_name', '')), + ('user_domain_name', ctxt.get('admin_domain_name', '')), + ('project_name', ctxt.get('admin_tenant_name', '')), + ('username', ctxt.get('admin_user', '')), + ('password', ctxt.get('admin_password', '')), + ('signing_dir', ctxt.get('signing_dir', '')),)) + + return c + + def __call__(self): + log('Generating template context for ' + self.rel_name, level=DEBUG) + ctxt = {} + + keystonemiddleware_os_release = None + if self._get_pkg_name(): + keystonemiddleware_os_release = os_release(self._get_pkg_name()) + + cachedir = self._setup_pki_cache() + if cachedir: + ctxt['signing_dir'] = cachedir + + for rid in relation_ids(self.rel_name): + self.related = True + for unit in related_units(rid): + rdata = relation_get(rid=rid, unit=unit) + serv_host = rdata.get('service_host') + serv_host = format_ipv6_addr(serv_host) or serv_host + auth_host = rdata.get('auth_host') + auth_host = format_ipv6_addr(auth_host) or auth_host + svc_protocol = rdata.get('service_protocol') or 'http' + auth_protocol = rdata.get('auth_protocol') or 'http' + api_version = rdata.get('api_version') or '2.0' + ctxt.update({'service_port': rdata.get('service_port'), + 'service_host': serv_host, + 'auth_host': auth_host, + 'auth_port': rdata.get('auth_port'), + 'admin_tenant_name': rdata.get('service_tenant'), + 'admin_user': rdata.get('service_username'), + 'admin_password': rdata.get('service_password'), + 'service_protocol': svc_protocol, + 'auth_protocol': auth_protocol, + 'api_version': api_version}) + + if float(api_version) > 2: + ctxt.update({ + 'admin_domain_name': rdata.get('service_domain'), + 'service_project_id': rdata.get('service_tenant_id'), + 'service_domain_id': rdata.get('service_domain_id')}) + + # we keep all veriables in ctxt for compatibility and + # add nested dictionary for keystone_authtoken generic + # templating + if keystonemiddleware_os_release: + ctxt['keystone_authtoken'] = \ + self._get_keystone_authtoken_ctxt( + ctxt, keystonemiddleware_os_release) + + if self.context_complete(ctxt): + # NOTE(jamespage) this is required for >= icehouse + # so a missing value just indicates keystone needs + # upgrading + ctxt['admin_tenant_id'] = rdata.get('service_tenant_id') + ctxt['admin_domain_id'] = rdata.get('service_domain_id') + return ctxt + + return {} + + +class IdentityCredentialsContext(IdentityServiceContext): + '''Context for identity-credentials interface type''' + + def __init__(self, + service=None, + service_user=None, + rel_name='identity-credentials'): + super(IdentityCredentialsContext, self).__init__(service, + service_user, + 
rel_name) + + def __call__(self): + log('Generating template context for ' + self.rel_name, level=DEBUG) + ctxt = {} + + cachedir = self._setup_pki_cache() + if cachedir: + ctxt['signing_dir'] = cachedir + + for rid in relation_ids(self.rel_name): + self.related = True + for unit in related_units(rid): + rdata = relation_get(rid=rid, unit=unit) + credentials_host = rdata.get('credentials_host') + credentials_host = ( + format_ipv6_addr(credentials_host) or credentials_host + ) + auth_host = rdata.get('auth_host') + auth_host = format_ipv6_addr(auth_host) or auth_host + svc_protocol = rdata.get('credentials_protocol') or 'http' + auth_protocol = rdata.get('auth_protocol') or 'http' + api_version = rdata.get('api_version') or '2.0' + ctxt.update({ + 'service_port': rdata.get('credentials_port'), + 'service_host': credentials_host, + 'auth_host': auth_host, + 'auth_port': rdata.get('auth_port'), + 'admin_tenant_name': rdata.get('credentials_project'), + 'admin_tenant_id': rdata.get('credentials_project_id'), + 'admin_user': rdata.get('credentials_username'), + 'admin_password': rdata.get('credentials_password'), + 'service_protocol': svc_protocol, + 'auth_protocol': auth_protocol, + 'api_version': api_version + }) + + if float(api_version) > 2: + ctxt.update({'admin_domain_name': + rdata.get('domain')}) + + if self.context_complete(ctxt): + return ctxt + + return {} + + +class NovaVendorMetadataContext(OSContextGenerator): + """Context used for configuring nova vendor metadata on nova.conf file.""" + + def __init__(self, os_release_pkg, interfaces=None): + """Initialize the NovaVendorMetadataContext object. + + :param os_release_pkg: the package name to extract the OpenStack + release codename from. + :type os_release_pkg: str + :param interfaces: list of string values to be used as the Context's + relation interfaces. + :type interfaces: List[str] + """ + self.os_release_pkg = os_release_pkg + if interfaces is not None: + self.interfaces = interfaces + + def __call__(self): + cmp_os_release = CompareOpenStackReleases( + os_release(self.os_release_pkg)) + ctxt = {'vendor_data': False} + + vdata_providers = [] + vdata = config('vendor-data') + vdata_url = config('vendor-data-url') + + if vdata: + try: + # validate the JSON. If invalid, we do not set anything here + json.loads(vdata) + except (TypeError, ValueError) as e: + log('Error decoding vendor-data. {}'.format(e), level=ERROR) + else: + ctxt['vendor_data'] = True + # Mitaka does not support DynamicJSON + # so vendordata_providers is not needed + if cmp_os_release > 'mitaka': + vdata_providers.append('StaticJSON') + + if vdata_url: + if cmp_os_release > 'mitaka': + ctxt['vendor_data_url'] = vdata_url + vdata_providers.append('DynamicJSON') + else: + log('Dynamic vendor data unsupported' + ' for {}.'.format(cmp_os_release), level=ERROR) + if vdata_providers: + ctxt['vendordata_providers'] = ','.join(vdata_providers) + + return ctxt + + +class NovaVendorMetadataJSONContext(OSContextGenerator): + """Context used for writing nova vendor metadata json file.""" + + def __init__(self, os_release_pkg): + """Initialize the NovaVendorMetadataJSONContext object. + + :param os_release_pkg: the package name to extract the OpenStack + release codename from. + :type os_release_pkg: str + """ + self.os_release_pkg = os_release_pkg + + def __call__(self): + ctxt = {'vendor_data_json': '{}'} + + vdata = config('vendor-data') + if vdata: + try: + # validate the JSON. If invalid, we return empty. 
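+                # Only the validity check uses the parsed result; the
+                # original string is what gets placed in the context
+                # below, so its formatting is preserved verbatim.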
+                json.loads(vdata)
+            except (TypeError, ValueError) as e:
+                log('Error decoding vendor-data. {}'.format(e), level=ERROR)
+            else:
+                ctxt['vendor_data_json'] = vdata
+
+        return ctxt
+
+
+class AMQPContext(OSContextGenerator):
+
+    def __init__(self, ssl_dir=None, rel_name='amqp', relation_prefix=None,
+                 relation_id=None):
+        self.ssl_dir = ssl_dir
+        self.rel_name = rel_name
+        self.relation_prefix = relation_prefix
+        self.interfaces = [rel_name]
+        self.relation_id = relation_id
+
+    def __call__(self):
+        log('Generating template context for amqp', level=DEBUG)
+        conf = config()
+        if self.relation_prefix:
+            user_setting = '%s-rabbit-user' % (self.relation_prefix)
+            vhost_setting = '%s-rabbit-vhost' % (self.relation_prefix)
+        else:
+            user_setting = 'rabbit-user'
+            vhost_setting = 'rabbit-vhost'
+
+        try:
+            username = conf[user_setting]
+            vhost = conf[vhost_setting]
+        except KeyError as e:
+            log('Could not generate amqp context. Missing required charm '
+                'config options: %s.' % e, level=ERROR)
+            raise OSContextError
+
+        ctxt = {}
+        if self.relation_id:
+            rids = [self.relation_id]
+        else:
+            rids = relation_ids(self.rel_name)
+        for rid in rids:
+            ha_vip_only = False
+            self.related = True
+            transport_hosts = None
+            rabbitmq_port = '5672'
+            for unit in related_units(rid):
+                if relation_get('clustered', rid=rid, unit=unit):
+                    ctxt['clustered'] = True
+                    vip = relation_get('vip', rid=rid, unit=unit)
+                    vip = format_ipv6_addr(vip) or vip
+                    ctxt['rabbitmq_host'] = vip
+                    transport_hosts = [vip]
+                else:
+                    host = relation_get('private-address', rid=rid, unit=unit)
+                    host = format_ipv6_addr(host) or host
+                    ctxt['rabbitmq_host'] = host
+                    transport_hosts = [host]
+
+                ctxt.update({
+                    'rabbitmq_user': username,
+                    'rabbitmq_password': relation_get('password', rid=rid,
+                                                      unit=unit),
+                    'rabbitmq_virtual_host': vhost,
+                })
+
+                ssl_port = relation_get('ssl_port', rid=rid, unit=unit)
+                if ssl_port:
+                    ctxt['rabbit_ssl_port'] = ssl_port
+                    rabbitmq_port = ssl_port
+
+                ssl_ca = relation_get('ssl_ca', rid=rid, unit=unit)
+                if ssl_ca:
+                    ctxt['rabbit_ssl_ca'] = ssl_ca
+
+                if relation_get('ha_queues', rid=rid, unit=unit) is not None:
+                    ctxt['rabbitmq_ha_queues'] = True
+
+                ha_vip_only = relation_get('ha-vip-only',
+                                           rid=rid, unit=unit) is not None
+
+                if self.context_complete(ctxt):
+                    if 'rabbit_ssl_ca' in ctxt:
+                        if not self.ssl_dir:
+                            log("Charm not setup for ssl support but ssl ca "
+                                "found", level=INFO)
+                            break
+
+                        ca_path = os.path.join(
+                            self.ssl_dir, 'rabbit-client-ca.pem')
+                        with open(ca_path, 'wb') as fh:
+                            fh.write(b64decode(ctxt['rabbit_ssl_ca']))
+                        ctxt['rabbit_ssl_ca'] = ca_path
+
+                    # Sufficient information found = break out!
+ break + + # Used for active/active rabbitmq >= grizzly + if (('clustered' not in ctxt or ha_vip_only) and + len(related_units(rid)) > 1): + rabbitmq_hosts = [] + for unit in related_units(rid): + host = relation_get('private-address', rid=rid, unit=unit) + if not relation_get('password', rid=rid, unit=unit): + log( + ("Skipping {} password not sent which indicates " + "unit is not ready.".format(host)), + level=DEBUG) + continue + host = format_ipv6_addr(host) or host + rabbitmq_hosts.append(host) + + rabbitmq_hosts = sorted(rabbitmq_hosts) + ctxt['rabbitmq_hosts'] = ','.join(rabbitmq_hosts) + transport_hosts = rabbitmq_hosts + + if transport_hosts: + transport_url_hosts = ','.join([ + "{}:{}@{}:{}".format(ctxt['rabbitmq_user'], + ctxt['rabbitmq_password'], + host_, + rabbitmq_port) + for host_ in transport_hosts]) + ctxt['transport_url'] = "rabbit://{}/{}".format( + transport_url_hosts, vhost) + + oslo_messaging_flags = conf.get('oslo-messaging-flags', None) + if oslo_messaging_flags: + ctxt['oslo_messaging_flags'] = config_flags_parser( + oslo_messaging_flags) + + oslo_messaging_driver = conf.get( + 'oslo-messaging-driver', DEFAULT_OSLO_MESSAGING_DRIVER) + if oslo_messaging_driver: + ctxt['oslo_messaging_driver'] = oslo_messaging_driver + + notification_format = conf.get('notification-format', None) + if notification_format: + ctxt['notification_format'] = notification_format + + notification_topics = conf.get('notification-topics', None) + if notification_topics: + ctxt['notification_topics'] = notification_topics + + send_notifications_to_logs = conf.get('send-notifications-to-logs', None) + if send_notifications_to_logs: + ctxt['send_notifications_to_logs'] = send_notifications_to_logs + + if not self.complete: + return {} + + return ctxt + + +class CephContext(OSContextGenerator): + """Generates context for /etc/ceph/ceph.conf templates.""" + interfaces = ['ceph'] + + def __call__(self): + if not relation_ids('ceph'): + return {} + + log('Generating template context for ceph', level=DEBUG) + mon_hosts = [] + ctxt = { + 'use_syslog': str(config('use-syslog')).lower() + } + for rid in relation_ids('ceph'): + for unit in related_units(rid): + if not ctxt.get('auth'): + ctxt['auth'] = relation_get('auth', rid=rid, unit=unit) + if not ctxt.get('key'): + ctxt['key'] = relation_get('key', rid=rid, unit=unit) + if not ctxt.get('rbd_features'): + default_features = relation_get('rbd-features', rid=rid, unit=unit) + if default_features is not None: + ctxt['rbd_features'] = default_features + + ceph_addrs = relation_get('ceph-public-address', rid=rid, + unit=unit) + if ceph_addrs: + for addr in ceph_addrs.split(' '): + mon_hosts.append(format_ipv6_addr(addr) or addr) + else: + priv_addr = relation_get('private-address', rid=rid, + unit=unit) + mon_hosts.append(format_ipv6_addr(priv_addr) or priv_addr) + + ctxt['mon_hosts'] = ' '.join(sorted(mon_hosts)) + + if not os.path.isdir('/etc/ceph'): + os.mkdir('/etc/ceph') + + if not self.context_complete(ctxt): + return {} + + ensure_packages(['ceph-common']) + return ctxt + + def context_complete(self, ctxt): + """Overridden here to ensure the context is actually complete. + + We set `key` and `auth` to None here, by default, to ensure + that the context will always evaluate to incomplete until the + Ceph relation has actually sent these details; otherwise, + there is a potential race condition between the relation + appearing and the first unit actually setting this data on the + relation. 
+ + :param ctxt: The current context members + :type ctxt: Dict[str, ANY] + :returns: True if the context is complete + :rtype: bool + """ + if 'auth' not in ctxt or 'key' not in ctxt: + return False + return super(CephContext, self).context_complete(ctxt) + + +class HAProxyContext(OSContextGenerator): + """Provides half a context for the haproxy template, which describes + all peers to be included in the cluster. Each charm needs to include + its own context generator that describes the port mapping. + + :side effect: mkdir is called on HAPROXY_RUN_DIR + """ + interfaces = ['cluster'] + + def __init__(self, singlenode_mode=False, + address_types=ADDRESS_TYPES): + self.address_types = address_types + self.singlenode_mode = singlenode_mode + + def __call__(self): + if not os.path.isdir(HAPROXY_RUN_DIR): + mkdir(path=HAPROXY_RUN_DIR) + if not relation_ids('cluster') and not self.singlenode_mode: + return {} + + l_unit = local_unit().replace('/', '-') + cluster_hosts = collections.OrderedDict() + + # NOTE(jamespage): build out map of configured network endpoints + # and associated backends + for addr_type in self.address_types: + cfg_opt = 'os-{}-network'.format(addr_type) + # NOTE(thedac) For some reason the ADDRESS_MAP uses 'int' rather + # than 'internal' + if addr_type == 'internal': + _addr_map_type = INTERNAL + else: + _addr_map_type = addr_type + # Network spaces aware + laddr = get_relation_ip(ADDRESS_MAP[_addr_map_type]['binding'], + config(cfg_opt)) + if laddr: + netmask = get_netmask_for_address(laddr) + cluster_hosts[laddr] = { + 'network': "{}/{}".format(laddr, + netmask), + 'backends': collections.OrderedDict([(l_unit, + laddr)]) + } + for rid in relation_ids('cluster'): + for unit in sorted(related_units(rid)): + # API Charms will need to set {addr_type}-address with + # get_relation_ip(addr_type) + _laddr = relation_get('{}-address'.format(addr_type), + rid=rid, unit=unit) + if _laddr: + _unit = unit.replace('/', '-') + cluster_hosts[laddr]['backends'][_unit] = _laddr + + # NOTE(jamespage) add backend based on get_relation_ip - this + # will either be the only backend or the fallback if no acls + # match in the frontend + # Network spaces aware + addr = get_relation_ip('cluster') + cluster_hosts[addr] = {} + netmask = get_netmask_for_address(addr) + cluster_hosts[addr] = { + 'network': "{}/{}".format(addr, netmask), + 'backends': collections.OrderedDict([(l_unit, + addr)]) + } + for rid in relation_ids('cluster'): + for unit in sorted(related_units(rid)): + # API Charms will need to set their private-address with + # get_relation_ip('cluster') + _laddr = relation_get('private-address', + rid=rid, unit=unit) + if _laddr: + _unit = unit.replace('/', '-') + cluster_hosts[addr]['backends'][_unit] = _laddr + + ctxt = { + 'frontends': cluster_hosts, + 'default_backend': addr + } + + if config('haproxy-server-timeout'): + ctxt['haproxy_server_timeout'] = config('haproxy-server-timeout') + + if config('haproxy-client-timeout'): + ctxt['haproxy_client_timeout'] = config('haproxy-client-timeout') + + if config('haproxy-queue-timeout'): + ctxt['haproxy_queue_timeout'] = config('haproxy-queue-timeout') + + if config('haproxy-connect-timeout'): + ctxt['haproxy_connect_timeout'] = config('haproxy-connect-timeout') + + if config('prefer-ipv6'): + ctxt['local_host'] = 'ip6-localhost' + ctxt['haproxy_host'] = '::' + else: + ctxt['local_host'] = '127.0.0.1' + ctxt['haproxy_host'] = '0.0.0.0' + + ctxt['ipv6_enabled'] = not is_ipv6_disabled() + + ctxt['stat_port'] = '8888' + + db = kv() + 
ctxt['stat_password'] = db.get('stat-password')
+        if not ctxt['stat_password']:
+            ctxt['stat_password'] = db.set('stat-password',
+                                           pwgen(32))
+            db.flush()
+
+        for frontend in cluster_hosts:
+            if (len(cluster_hosts[frontend]['backends']) > 1 or
+                    self.singlenode_mode):
+                # Enable haproxy when we have enough peers.
+                log('Ensuring haproxy enabled in /etc/default/haproxy.',
+                    level=DEBUG)
+                with open('/etc/default/haproxy', 'w') as out:
+                    out.write('ENABLED=1\n')
+
+                return ctxt
+
+        log('HAProxy context is incomplete, this unit has no peers.',
+            level=INFO)
+        return {}
+
+
+class ImageServiceContext(OSContextGenerator):
+    interfaces = ['image-service']
+
+    def __call__(self):
+        """Obtains the glance API server from the image-service relation.
+        Useful in nova and cinder (currently).
+        """
+        log('Generating template context for image-service.', level=DEBUG)
+        rids = relation_ids('image-service')
+        if not rids:
+            return {}
+
+        for rid in rids:
+            for unit in related_units(rid):
+                api_server = relation_get('glance-api-server',
+                                          rid=rid, unit=unit)
+                if api_server:
+                    return {'glance_api_servers': api_server}
+
+        log("ImageService context is incomplete. Missing required relation "
+            "data.", level=INFO)
+        return {}
+
+
+class ApacheSSLContext(OSContextGenerator):
+    """Generates a context for an apache vhost configuration that configures
+    HTTPS reverse proxying for one or many endpoints. Generated context
+    looks something like::
+
+        {
+            'namespace': 'cinder',
+            'private_address': 'iscsi.mycinderhost.com',
+            'endpoints': [(8776, 8766), (8777, 8767)]
+        }
+
+    The endpoints list consists of tuples mapping external ports
+    to internal ports.
+    """
+    interfaces = ['https']
+
+    # charms should inherit this context and set external ports
+    # and service namespace accordingly.
+    external_ports = []
+    service_namespace = None
+    user = group = 'root'
+
+    def enable_modules(self):
+        cmd = ['a2enmod', 'ssl', 'proxy', 'proxy_http', 'headers']
+        check_call(cmd)
+
+    def configure_cert(self, cn=None):
+        ssl_dir = os.path.join('/etc/apache2/ssl/', self.service_namespace)
+        mkdir(path=ssl_dir)
+        cert, key = get_cert(cn)
+        if cert and key:
+            if cn:
+                cert_filename = 'cert_{}'.format(cn)
+                key_filename = 'key_{}'.format(cn)
+            else:
+                cert_filename = 'cert'
+                key_filename = 'key'
+
+            write_file(path=os.path.join(ssl_dir, cert_filename),
+                       content=b64decode(cert), owner=self.user,
+                       group=self.group, perms=0o640)
+            write_file(path=os.path.join(ssl_dir, key_filename),
+                       content=b64decode(key), owner=self.user,
+                       group=self.group, perms=0o640)
+
+    def configure_ca(self):
+        ca_cert = get_ca_cert()
+        if ca_cert:
+            install_ca_cert(b64decode(ca_cert))
+
+    def canonical_names(self):
+        """Figure out which canonical names clients will use to access
+        this service.
+        """
+        cns = []
+        for r_id in relation_ids('identity-service'):
+            for unit in related_units(r_id):
+                rdata = relation_get(rid=r_id, unit=unit)
+                for k in rdata:
+                    if k.startswith('ssl_key_'):
+                        # Strip the literal 'ssl_key_' prefix (str.lstrip
+                        # would drop any leading characters from that set,
+                        # not the prefix itself).
+                        cns.append(k[len('ssl_key_'):])
+
+        return sorted(list(set(cns)))
+
+    def get_network_addresses(self):
+        """For each network configured, return corresponding address and
+        hostname or vip (if available).
+
+        Returns a list of tuples of the form:
+
+            [(address_in_net_a, hostname_in_net_a),
+             (address_in_net_b, hostname_in_net_b),
+             ...]
+
+        or, if no hostname(s) available:
+
+            [(address_in_net_a, vip_in_net_a),
+             (address_in_net_b, vip_in_net_b),
+             ...]
+
+        or, if no vip(s) available:
+
+            [(address_in_net_a, address_in_net_a),
+             (address_in_net_b, address_in_net_b),
+             ...]
+ """ + addresses = [] + for net_type in [INTERNAL, ADMIN, PUBLIC]: + net_config = config(ADDRESS_MAP[net_type]['config']) + # NOTE(jamespage): Fallback must always be private address + # as this is used to bind services on the + # local unit. + fallback = unit_get("private-address") + if net_config: + addr = get_address_in_network(net_config, + fallback) + else: + try: + addr = network_get_primary_address( + ADDRESS_MAP[net_type]['binding'] + ) + except (NotImplementedError, NoNetworkBinding): + addr = fallback + + endpoint = resolve_address(net_type) + addresses.append((addr, endpoint)) + + return sorted(set(addresses)) + + def __call__(self): + if isinstance(self.external_ports, six.string_types): + self.external_ports = [self.external_ports] + + if not self.external_ports or not https(): + return {} + + use_keystone_ca = True + for rid in relation_ids('certificates'): + if related_units(rid): + use_keystone_ca = False + + if use_keystone_ca: + self.configure_ca() + + self.enable_modules() + + ctxt = {'namespace': self.service_namespace, + 'endpoints': [], + 'ext_ports': []} + + if use_keystone_ca: + cns = self.canonical_names() + if cns: + for cn in cns: + self.configure_cert(cn) + else: + # Expect cert/key provided in config (currently assumed that ca + # uses ip for cn) + for net_type in (INTERNAL, ADMIN, PUBLIC): + cn = resolve_address(endpoint_type=net_type) + self.configure_cert(cn) + + addresses = self.get_network_addresses() + for address, endpoint in addresses: + for api_port in self.external_ports: + ext_port = determine_apache_port(api_port, + singlenode_mode=True) + int_port = determine_api_port(api_port, singlenode_mode=True) + portmap = (address, endpoint, int(ext_port), int(int_port)) + ctxt['endpoints'].append(portmap) + ctxt['ext_ports'].append(int(ext_port)) + + ctxt['ext_ports'] = sorted(list(set(ctxt['ext_ports']))) + return ctxt + + +class NeutronContext(OSContextGenerator): + interfaces = [] + + @property + def plugin(self): + return None + + @property + def network_manager(self): + return None + + @property + def packages(self): + return neutron_plugin_attribute(self.plugin, 'packages', + self.network_manager) + + @property + def neutron_security_groups(self): + return None + + def _ensure_packages(self): + for pkgs in self.packages: + ensure_packages(pkgs) + + def ovs_ctxt(self): + driver = neutron_plugin_attribute(self.plugin, 'driver', + self.network_manager) + config = neutron_plugin_attribute(self.plugin, 'config', + self.network_manager) + ovs_ctxt = {'core_plugin': driver, + 'neutron_plugin': 'ovs', + 'neutron_security_groups': self.neutron_security_groups, + 'local_ip': unit_private_ip(), + 'config': config} + + return ovs_ctxt + + def nuage_ctxt(self): + driver = neutron_plugin_attribute(self.plugin, 'driver', + self.network_manager) + config = neutron_plugin_attribute(self.plugin, 'config', + self.network_manager) + nuage_ctxt = {'core_plugin': driver, + 'neutron_plugin': 'vsp', + 'neutron_security_groups': self.neutron_security_groups, + 'local_ip': unit_private_ip(), + 'config': config} + + return nuage_ctxt + + def nvp_ctxt(self): + driver = neutron_plugin_attribute(self.plugin, 'driver', + self.network_manager) + config = neutron_plugin_attribute(self.plugin, 'config', + self.network_manager) + nvp_ctxt = {'core_plugin': driver, + 'neutron_plugin': 'nvp', + 'neutron_security_groups': self.neutron_security_groups, + 'local_ip': unit_private_ip(), + 'config': config} + + return nvp_ctxt + + def n1kv_ctxt(self): + driver = 
neutron_plugin_attribute(self.plugin, 'driver', + self.network_manager) + n1kv_config = neutron_plugin_attribute(self.plugin, 'config', + self.network_manager) + n1kv_user_config_flags = config('n1kv-config-flags') + restrict_policy_profiles = config('n1kv-restrict-policy-profiles') + n1kv_ctxt = {'core_plugin': driver, + 'neutron_plugin': 'n1kv', + 'neutron_security_groups': self.neutron_security_groups, + 'local_ip': unit_private_ip(), + 'config': n1kv_config, + 'vsm_ip': config('n1kv-vsm-ip'), + 'vsm_username': config('n1kv-vsm-username'), + 'vsm_password': config('n1kv-vsm-password'), + 'restrict_policy_profiles': restrict_policy_profiles} + + if n1kv_user_config_flags: + flags = config_flags_parser(n1kv_user_config_flags) + n1kv_ctxt['user_config_flags'] = flags + + return n1kv_ctxt + + def calico_ctxt(self): + driver = neutron_plugin_attribute(self.plugin, 'driver', + self.network_manager) + config = neutron_plugin_attribute(self.plugin, 'config', + self.network_manager) + calico_ctxt = {'core_plugin': driver, + 'neutron_plugin': 'Calico', + 'neutron_security_groups': self.neutron_security_groups, + 'local_ip': unit_private_ip(), + 'config': config} + + return calico_ctxt + + def neutron_ctxt(self): + if https(): + proto = 'https' + else: + proto = 'http' + + if is_clustered(): + host = config('vip') + else: + host = unit_get('private-address') + + ctxt = {'network_manager': self.network_manager, + 'neutron_url': '%s://%s:%s' % (proto, host, '9696')} + return ctxt + + def pg_ctxt(self): + driver = neutron_plugin_attribute(self.plugin, 'driver', + self.network_manager) + config = neutron_plugin_attribute(self.plugin, 'config', + self.network_manager) + ovs_ctxt = {'core_plugin': driver, + 'neutron_plugin': 'plumgrid', + 'neutron_security_groups': self.neutron_security_groups, + 'local_ip': unit_private_ip(), + 'config': config} + return ovs_ctxt + + def midonet_ctxt(self): + driver = neutron_plugin_attribute(self.plugin, 'driver', + self.network_manager) + midonet_config = neutron_plugin_attribute(self.plugin, 'config', + self.network_manager) + mido_ctxt = {'core_plugin': driver, + 'neutron_plugin': 'midonet', + 'neutron_security_groups': self.neutron_security_groups, + 'local_ip': unit_private_ip(), + 'config': midonet_config} + + return mido_ctxt + + def __call__(self): + if self.network_manager not in ['quantum', 'neutron']: + return {} + + if not self.plugin: + return {} + + ctxt = self.neutron_ctxt() + + if self.plugin == 'ovs': + ctxt.update(self.ovs_ctxt()) + elif self.plugin in ['nvp', 'nsx']: + ctxt.update(self.nvp_ctxt()) + elif self.plugin == 'n1kv': + ctxt.update(self.n1kv_ctxt()) + elif self.plugin == 'Calico': + ctxt.update(self.calico_ctxt()) + elif self.plugin == 'vsp': + ctxt.update(self.nuage_ctxt()) + elif self.plugin == 'plumgrid': + ctxt.update(self.pg_ctxt()) + elif self.plugin == 'midonet': + ctxt.update(self.midonet_ctxt()) + + alchemy_flags = config('neutron-alchemy-flags') + if alchemy_flags: + flags = config_flags_parser(alchemy_flags) + ctxt['neutron_alchemy_flags'] = flags + + return ctxt + + +class NeutronPortContext(OSContextGenerator): + + def resolve_ports(self, ports): + """Resolve NICs not yet bound to bridge(s) + + If hwaddress provided then returns resolved hwaddress otherwise NIC. 
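+
+        For example (illustrative values), ports=['eth2', 'aa:bb:cc:dd:ee:ff']
+        keeps 'eth2' as-is and resolves the MAC to its NIC name, provided the
+        NIC carries no IP address and is not already a bridge member.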
+        """
+        if not ports:
+            return None
+
+        hwaddr_to_nic = {}
+        hwaddr_to_ip = {}
+        extant_nics = list_nics()
+
+        for nic in extant_nics:
+            # Ignore virtual interfaces (bond masters will be identified from
+            # their slaves)
+            if not is_phy_iface(nic):
+                continue
+
+            _nic = get_bond_master(nic)
+            if _nic:
+                log("Replacing iface '%s' with bond master '%s'" % (nic, _nic),
+                    level=DEBUG)
+                nic = _nic
+
+            hwaddr = get_nic_hwaddr(nic)
+            hwaddr_to_nic[hwaddr] = nic
+            addresses = get_ipv4_addr(nic, fatal=False)
+            addresses += get_ipv6_addr(iface=nic, fatal=False)
+            hwaddr_to_ip[hwaddr] = addresses
+
+        resolved = []
+        mac_regex = re.compile(r'([0-9A-F]{2}[:-]){5}([0-9A-F]{2})', re.I)
+        for entry in ports:
+            if re.match(mac_regex, entry):
+                # NIC is in known NICs and does NOT have an IP address
+                if entry in hwaddr_to_nic and not hwaddr_to_ip[entry]:
+                    # If the nic is part of a bridge then don't use it
+                    if is_bridge_member(hwaddr_to_nic[entry]):
+                        continue
+
+                    # Entry is a MAC address for a valid interface that doesn't
+                    # have an IP address assigned yet.
+                    resolved.append(hwaddr_to_nic[entry])
+            elif entry in extant_nics:
+                # If the passed entry is not a MAC address and the interface
+                # exists, assume it's a valid interface, and that the user put
+                # it there on purpose (we can trust it to be the real external
+                # network).
+                resolved.append(entry)
+
+        # Ensure no duplicates
+        return list(set(resolved))
+
+
+class OSConfigFlagContext(OSContextGenerator):
+    """Provides support for user-defined config flags.
+
+    Users can define a comma-separated list of key=value pairs
+    in the charm configuration and apply them at any point in
+    any file by using a template flag.
+
+    Sometimes users might want config flags inserted within a
+    specific section so this class allows users to specify the
+    template flag name, allowing for multiple template flags
+    (sections) within the same context.
+
+    NOTE: the value of config-flags may be a comma-separated list of
+          key=value pairs and some OpenStack config files support
+          comma-separated lists as values.
+    """
+
+    def __init__(self, charm_flag='config-flags',
+                 template_flag='user_config_flags'):
+        """
+        :param charm_flag: config flags in charm configuration.
+        :param template_flag: insert point for user-defined flags in template
+                              file.
+        """
+        super(OSConfigFlagContext, self).__init__()
+        self._charm_flag = charm_flag
+        self._template_flag = template_flag
+
+    def __call__(self):
+        config_flags = config(self._charm_flag)
+        if not config_flags:
+            return {}
+
+        return {self._template_flag:
+                config_flags_parser(config_flags)}
+
+
+class LibvirtConfigFlagsContext(OSContextGenerator):
+    """
+    This context provides support for extending
+    the libvirt section through user-defined flags.
+    """
+    def __call__(self):
+        ctxt = {}
+        libvirt_flags = config('libvirt-flags')
+        if libvirt_flags:
+            ctxt['libvirt_flags'] = config_flags_parser(
+                libvirt_flags)
+        return ctxt
+
+
+class SubordinateConfigContext(OSContextGenerator):
+
+    """
+    Responsible for inspecting relations to subordinates that
+    may be exporting required config via a json blob.
+
+    The subordinate interface allows subordinates to export their
+    configuration requirements to the principal for multiple config
+    files and multiple services.
+    For example, a subordinate that has interfaces to both glance and nova
+    may export the following YAML blob as JSON::
+
+        glance:
+            /etc/glance/glance-api.conf:
+                sections:
+                    DEFAULT:
+                        - [key1, value1]
+            /etc/glance/glance-registry.conf:
+                MYSECTION:
+                    - [key2, value2]
+        nova:
+            /etc/nova/nova.conf:
+                sections:
+                    DEFAULT:
+                        - [key3, value3]
+
+
+    It is then up to the principal charms to subscribe this context to
+    the service+config file it is interested in. Configuration data will
+    be available in the template context, in glance's case, as::
+
+        ctxt = {
+            ... other context ...
+            'subordinate_configuration': {
+                'DEFAULT': {
+                    'key1': 'value1',
+                },
+                'MYSECTION': {
+                    'key2': 'value2',
+                },
+            }
+        }
+    """
+
+    def __init__(self, service, config_file, interface):
+        """
+        :param service     : Service name key to query in any subordinate
+                             data found
+        :param config_file : Service's config file to query sections
+        :param interface   : Subordinate interface to inspect
+        """
+        self.config_file = config_file
+        if isinstance(service, list):
+            self.services = service
+        else:
+            self.services = [service]
+        if isinstance(interface, list):
+            self.interfaces = interface
+        else:
+            self.interfaces = [interface]
+
+    def __call__(self):
+        ctxt = {'sections': {}}
+        rids = []
+        for interface in self.interfaces:
+            rids.extend(relation_ids(interface))
+        for rid in rids:
+            for unit in related_units(rid):
+                sub_config = relation_get('subordinate_configuration',
+                                          rid=rid, unit=unit)
+                if sub_config and sub_config != '':
+                    try:
+                        sub_config = json.loads(sub_config)
+                    except Exception:
+                        log('Could not parse JSON from '
+                            'subordinate_configuration setting from %s'
+                            % rid, level=ERROR)
+                        continue
+
+                    for service in self.services:
+                        if service not in sub_config:
+                            log('Found subordinate_configuration on %s but it '
+                                'contained nothing for %s service'
+                                % (rid, service), level=INFO)
+                            continue
+
+                        sub_config = sub_config[service]
+                        if self.config_file not in sub_config:
+                            log('Found subordinate_configuration on %s but it '
+                                'contained nothing for %s'
+                                % (rid, self.config_file), level=INFO)
+                            continue
+
+                        sub_config = sub_config[self.config_file]
+                        for k, v in six.iteritems(sub_config):
+                            if k == 'sections':
+                                for section, config_list in six.iteritems(v):
+                                    log("adding section '%s'" % (section),
+                                        level=DEBUG)
+                                    if ctxt[k].get(section):
+                                        ctxt[k][section].extend(config_list)
+                                    else:
+                                        ctxt[k][section] = config_list
+                            else:
+                                ctxt[k] = v
+        log("%d section(s) found" % (len(ctxt['sections'])), level=DEBUG)
+        return ctxt
+
+
+class LogLevelContext(OSContextGenerator):
+
+    def __call__(self):
+        ctxt = {}
+        ctxt['debug'] = \
+            False if config('debug') is None else config('debug')
+        ctxt['verbose'] = \
+            False if config('verbose') is None else config('verbose')
+
+        return ctxt
+
+
+class SyslogContext(OSContextGenerator):
+
+    def __call__(self):
+        ctxt = {'use_syslog': config('use-syslog')}
+        return ctxt
+
+
+class BindHostContext(OSContextGenerator):
+
+    def __call__(self):
+        if config('prefer-ipv6'):
+            return {'bind_host': '::'}
+        else:
+            return {'bind_host': '0.0.0.0'}
+
+
+MAX_DEFAULT_WORKERS = 4
+DEFAULT_MULTIPLIER = 2
+
+
+def _calculate_workers():
+    '''
+    Determine the number of worker processes based on the CPU
+    count of the unit containing the application.
+
+    Workers will be limited to MAX_DEFAULT_WORKERS in
+    container environments where no worker-multiplier configuration
+    option has been set.
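+
+    For example, on a 4-core unit with the default multiplier of 2 this
+    returns 8 workers; in a container with no worker-multiplier set the
+    result is capped at MAX_DEFAULT_WORKERS (4).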
+
+    @returns int: number of worker processes to use
+    '''
+    multiplier = config('worker-multiplier') or DEFAULT_MULTIPLIER
+    count = int(_num_cpus() * multiplier)
+    if multiplier > 0 and count == 0:
+        count = 1
+
+    if config('worker-multiplier') is None and is_container():
+        # NOTE(jamespage): Limit unconfigured worker-multiplier
+        #                  to MAX_DEFAULT_WORKERS to avoid insane
+        #                  worker configuration in LXD containers
+        #                  on large servers
+        # Reference: https://pad.lv/1665270
+        count = min(count, MAX_DEFAULT_WORKERS)
+
+    return count
+
+
+def _num_cpus():
+    '''
+    Compatibility wrapper for calculating the number of CPUs
+    a unit has.
+
+    @returns: int: number of CPU cores detected
+    '''
+    try:
+        return psutil.cpu_count()
+    except AttributeError:
+        return psutil.NUM_CPUS
+
+
+class WorkerConfigContext(OSContextGenerator):
+
+    def __call__(self):
+        ctxt = {"workers": _calculate_workers()}
+        return ctxt
+
+
+class WSGIWorkerConfigContext(WorkerConfigContext):
+
+    def __init__(self, name=None, script=None, admin_script=None,
+                 public_script=None, user=None, group=None,
+                 process_weight=1.00,
+                 admin_process_weight=0.25, public_process_weight=0.75):
+        self.service_name = name
+        self.user = user or name
+        self.group = group or name
+        self.script = script
+        self.admin_script = admin_script
+        self.public_script = public_script
+        self.process_weight = process_weight
+        self.admin_process_weight = admin_process_weight
+        self.public_process_weight = public_process_weight
+
+    def __call__(self):
+        total_processes = _calculate_workers()
+        ctxt = {
+            "service_name": self.service_name,
+            "user": self.user,
+            "group": self.group,
+            "script": self.script,
+            "admin_script": self.admin_script,
+            "public_script": self.public_script,
+            "processes": int(math.ceil(self.process_weight * total_processes)),
+            "admin_processes": int(math.ceil(self.admin_process_weight *
+                                             total_processes)),
+            "public_processes": int(math.ceil(self.public_process_weight *
+                                              total_processes)),
+            "threads": 1,
+        }
+        return ctxt
+
+
+class ZeroMQContext(OSContextGenerator):
+    interfaces = ['zeromq-configuration']
+
+    def __call__(self):
+        ctxt = {}
+        if is_relation_made('zeromq-configuration', 'host'):
+            for rid in relation_ids('zeromq-configuration'):
+                for unit in related_units(rid):
+                    ctxt['zmq_nonce'] = relation_get('nonce', unit, rid)
+                    ctxt['zmq_host'] = relation_get('host', unit, rid)
+                    ctxt['zmq_redis_address'] = relation_get(
+                        'zmq_redis_address', unit, rid)
+
+        return ctxt
+
+
+class NotificationDriverContext(OSContextGenerator):
+
+    def __init__(self, zmq_relation='zeromq-configuration',
+                 amqp_relation='amqp'):
+        """
+        :param zmq_relation: Name of Zeromq relation to check
+        """
+        self.zmq_relation = zmq_relation
+        self.amqp_relation = amqp_relation
+
+    def __call__(self):
+        ctxt = {'notifications': 'False'}
+        if is_relation_made(self.amqp_relation):
+            ctxt['notifications'] = "True"
+
+        return ctxt
+
+
+class SysctlContext(OSContextGenerator):
+    """This context checks whether the 'sysctl' option is set in
+    configuration and, if so, creates a file with the loaded contents."""
+    def __call__(self):
+        sysctl_dict = config('sysctl')
+        if sysctl_dict:
+            sysctl_create(sysctl_dict,
+                          '/etc/sysctl.d/50-{0}.conf'.format(charm_name()))
+        return {'sysctl': sysctl_dict}
+
+
+class NeutronAPIContext(OSContextGenerator):
+    '''
+    Inspects current neutron-plugin-api relation for neutron settings. Return
+    defaults if it is not present.
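+
+    For example, with no neutron-plugin-api relation joined the context is
+    built entirely from the defaults defined below (overlay_network_type
+    'gre', global_physnet_mtu 1500, and so on).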
+ ''' + interfaces = ['neutron-plugin-api'] + + def __call__(self): + self.neutron_defaults = { + 'l2_population': { + 'rel_key': 'l2-population', + 'default': False, + }, + 'overlay_network_type': { + 'rel_key': 'overlay-network-type', + 'default': 'gre', + }, + 'neutron_security_groups': { + 'rel_key': 'neutron-security-groups', + 'default': False, + }, + 'network_device_mtu': { + 'rel_key': 'network-device-mtu', + 'default': None, + }, + 'enable_dvr': { + 'rel_key': 'enable-dvr', + 'default': False, + }, + 'enable_l3ha': { + 'rel_key': 'enable-l3ha', + 'default': False, + }, + 'dns_domain': { + 'rel_key': 'dns-domain', + 'default': None, + }, + 'polling_interval': { + 'rel_key': 'polling-interval', + 'default': 2, + }, + 'rpc_response_timeout': { + 'rel_key': 'rpc-response-timeout', + 'default': 60, + }, + 'report_interval': { + 'rel_key': 'report-interval', + 'default': 30, + }, + 'enable_qos': { + 'rel_key': 'enable-qos', + 'default': False, + }, + 'enable_nsg_logging': { + 'rel_key': 'enable-nsg-logging', + 'default': False, + }, + 'enable_nfg_logging': { + 'rel_key': 'enable-nfg-logging', + 'default': False, + }, + 'enable_port_forwarding': { + 'rel_key': 'enable-port-forwarding', + 'default': False, + }, + 'global_physnet_mtu': { + 'rel_key': 'global-physnet-mtu', + 'default': 1500, + }, + 'physical_network_mtus': { + 'rel_key': 'physical-network-mtus', + 'default': None, + }, + } + ctxt = self.get_neutron_options({}) + for rid in relation_ids('neutron-plugin-api'): + for unit in related_units(rid): + rdata = relation_get(rid=rid, unit=unit) + # The l2-population key is used by the context as a way of + # checking if the api service on the other end is sending data + # in a recent format. + if 'l2-population' in rdata: + ctxt.update(self.get_neutron_options(rdata)) + + extension_drivers = [] + + if ctxt['enable_qos']: + extension_drivers.append('qos') + + if ctxt['enable_nsg_logging']: + extension_drivers.append('log') + + ctxt['extension_drivers'] = ','.join(extension_drivers) + + l3_extension_plugins = [] + + if ctxt['enable_port_forwarding']: + l3_extension_plugins.append('port_forwarding') + + ctxt['l3_extension_plugins'] = l3_extension_plugins + + return ctxt + + def get_neutron_options(self, rdata): + settings = {} + for nkey in self.neutron_defaults.keys(): + defv = self.neutron_defaults[nkey]['default'] + rkey = self.neutron_defaults[nkey]['rel_key'] + if rkey in rdata.keys(): + if type(defv) is bool: + settings[nkey] = bool_from_string(rdata[rkey]) + else: + settings[nkey] = rdata[rkey] + else: + settings[nkey] = defv + return settings + + +class ExternalPortContext(NeutronPortContext): + + def __call__(self): + ctxt = {} + ports = config('ext-port') + if ports: + ports = [p.strip() for p in ports.split()] + ports = self.resolve_ports(ports) + if ports: + ctxt = {"ext_port": ports[0]} + napi_settings = NeutronAPIContext()() + mtu = napi_settings.get('network_device_mtu') + if mtu: + ctxt['ext_port_mtu'] = mtu + + return ctxt + + +class DataPortContext(NeutronPortContext): + + def __call__(self): + ports = config('data-port') + if ports: + # Map of {bridge:port/mac} + portmap = parse_data_port_mappings(ports) + ports = portmap.keys() + # Resolve provided ports or mac addresses and filter out those + # already attached to a bridge. + resolved = self.resolve_ports(ports) + # Rebuild port index using resolved and filtered ports. 
+ normalized = {get_nic_hwaddr(port): port for port in resolved + if port not in ports} + normalized.update({port: port for port in resolved + if port in ports}) + if resolved: + return {normalized[port]: bridge for port, bridge in + six.iteritems(portmap) if port in normalized.keys()} + + return None + + +class PhyNICMTUContext(DataPortContext): + + def __call__(self): + ctxt = {} + mappings = super(PhyNICMTUContext, self).__call__() + if mappings and mappings.keys(): + ports = sorted(mappings.keys()) + napi_settings = NeutronAPIContext()() + mtu = napi_settings.get('network_device_mtu') + all_ports = set() + # If any of ports is a vlan device, its underlying device must have + # mtu applied first. + for port in ports: + for lport in glob.glob("/sys/class/net/%s/lower_*" % port): + lport = os.path.basename(lport) + all_ports.add(lport.split('_')[1]) + + all_ports = list(all_ports) + all_ports.extend(ports) + if mtu: + ctxt["devs"] = '\\n'.join(all_ports) + ctxt['mtu'] = mtu + + return ctxt + + +class NetworkServiceContext(OSContextGenerator): + + def __init__(self, rel_name='quantum-network-service'): + self.rel_name = rel_name + self.interfaces = [rel_name] + + def __call__(self): + for rid in relation_ids(self.rel_name): + for unit in related_units(rid): + rdata = relation_get(rid=rid, unit=unit) + ctxt = { + 'keystone_host': rdata.get('keystone_host'), + 'service_port': rdata.get('service_port'), + 'auth_port': rdata.get('auth_port'), + 'service_tenant': rdata.get('service_tenant'), + 'service_username': rdata.get('service_username'), + 'service_password': rdata.get('service_password'), + 'quantum_host': rdata.get('quantum_host'), + 'quantum_port': rdata.get('quantum_port'), + 'quantum_url': rdata.get('quantum_url'), + 'region': rdata.get('region'), + 'service_protocol': + rdata.get('service_protocol') or 'http', + 'auth_protocol': + rdata.get('auth_protocol') or 'http', + 'api_version': + rdata.get('api_version') or '2.0', + } + if self.context_complete(ctxt): + return ctxt + return {} + + +class InternalEndpointContext(OSContextGenerator): + """Internal endpoint context. + + This context provides the endpoint type used for communication between + services e.g. between Nova and Cinder internally. Openstack uses Public + endpoints by default so this allows admins to optionally use internal + endpoints. + """ + def __call__(self): + return {'use_internal_endpoints': config('use-internal-endpoints')} + + +class VolumeAPIContext(InternalEndpointContext): + """Volume API context. + + This context provides information regarding the volume endpoint to use + when communicating between services. It determines which version of the + API is appropriate for use. + + This value will be determined in the resulting context dictionary + returned from calling the VolumeAPIContext object. Information provided + by this context is as follows: + + volume_api_version: the volume api version to use, currently + 'v2' or 'v3' + volume_catalog_info: the information to use for a cinder client + configuration that consumes API endpoints from the keystone + catalog. This is defined as the type:name:endpoint_type string. + """ + # FIXME(wolsen) This implementation is based on the provider being able + # to specify the package version to check but does not guarantee that the + # volume service api version selected is available. In practice, it is + # quite likely the volume service *is* providing the v3 volume service. + # This should be resolved when the service-discovery spec is implemented. 
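+    # Illustrative usage (assumed values): on a pike or later cloud,
+    # VolumeAPIContext('cinder-common')() would yield
+    # {'volume_api_version': '3',
+    #  'volume_catalog_info': 'volumev3:cinderv3:publicURL'}.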
+    def __init__(self, pkg):
+        """
+        Creates a new VolumeAPIContext for use in determining which version
+        of the Volume API should be used for communication. A package codename
+        should be supplied for determining the currently installed OpenStack
+        version.
+
+        :param pkg: the package codename to use in order to determine the
+            component version (e.g. nova-common). See
+            charmhelpers.contrib.openstack.utils.PACKAGE_CODENAMES for more.
+        """
+        super(VolumeAPIContext, self).__init__()
+        self._ctxt = None
+        if not pkg:
+            raise ValueError('package name must be provided in order to '
+                             'determine current OpenStack version.')
+        self.pkg = pkg
+
+    @property
+    def ctxt(self):
+        if self._ctxt is not None:
+            return self._ctxt
+        self._ctxt = self._determine_ctxt()
+        return self._ctxt
+
+    def _determine_ctxt(self):
+        """Determines the Volume API endpoint information.
+
+        Determines the appropriate version of the API that should be used
+        as well as the catalog_info string that would be supplied. Returns
+        a dict containing the volume_api_version and the volume_catalog_info.
+        """
+        rel = os_release(self.pkg)
+        version = '2'
+        if CompareOpenStackReleases(rel) >= 'pike':
+            version = '3'
+
+        service_type = 'volumev{version}'.format(version=version)
+        service_name = 'cinderv{version}'.format(version=version)
+        endpoint_type = 'publicURL'
+        if config('use-internal-endpoints'):
+            endpoint_type = 'internalURL'
+        catalog_info = '{type}:{name}:{endpoint}'.format(
+            type=service_type, name=service_name, endpoint=endpoint_type)
+
+        return {
+            'volume_api_version': version,
+            'volume_catalog_info': catalog_info,
+        }
+
+    def __call__(self):
+        return self.ctxt
+
+
+class AppArmorContext(OSContextGenerator):
+    """Base class for apparmor contexts."""
+
+    def __init__(self, profile_name=None):
+        self._ctxt = None
+        self.aa_profile = profile_name
+        self.aa_utils_packages = ['apparmor-utils']
+
+    @property
+    def ctxt(self):
+        if self._ctxt is not None:
+            return self._ctxt
+        self._ctxt = self._determine_ctxt()
+        return self._ctxt
+
+    def _determine_ctxt(self):
+        """
+        Validate that the aa-profile-mode setting is disable, enforce, or
+        complain.
+
+        :return ctxt: Dictionary of the apparmor profile or None
+        """
+        if config('aa-profile-mode') in ['disable', 'enforce', 'complain']:
+            ctxt = {'aa_profile_mode': config('aa-profile-mode'),
+                    'ubuntu_release': lsb_release()['DISTRIB_RELEASE']}
+            if self.aa_profile:
+                ctxt['aa_profile'] = self.aa_profile
+        else:
+            ctxt = None
+        return ctxt
+
+    def __call__(self):
+        return self.ctxt
+
+    def install_aa_utils(self):
+        """
+        Install packages required for apparmor configuration.
+        """
+        log("Installing apparmor utils.")
+        ensure_packages(self.aa_utils_packages)
+
+    def manually_disable_aa_profile(self):
+        """
+        Manually disable an apparmor profile.
+
+        If aa-profile-mode is set to disabled (default) this is required as
+        the template has been written but apparmor is yet unaware of the
+        profile and aa-disable aa-profile fails. Without this the profile
+        would kick into enforce mode on the next service restart.
+
+        """
+        profile_path = '/etc/apparmor.d'
+        disable_path = '/etc/apparmor.d/disable'
+        if not os.path.lexists(os.path.join(disable_path, self.aa_profile)):
+            os.symlink(os.path.join(profile_path, self.aa_profile),
+                       os.path.join(disable_path, self.aa_profile))
+
+    def setup_aa_profile(self):
+        """
+        Set up an apparmor profile.
+        The ctxt dictionary will contain the apparmor profile mode and
+        the apparmor profile name.
+ Makes calls out to aa-disable, aa-complain, or aa-enforce to setup + the apparmor profile. + """ + self() + if not self.ctxt: + log("Not enabling apparmor Profile") + return + self.install_aa_utils() + cmd = ['aa-{}'.format(self.ctxt['aa_profile_mode'])] + cmd.append(self.ctxt['aa_profile']) + log("Setting up the apparmor profile for {} in {} mode." + "".format(self.ctxt['aa_profile'], self.ctxt['aa_profile_mode'])) + try: + check_call(cmd) + except CalledProcessError as e: + # If aa-profile-mode is set to disabled (default) manual + # disabling is required as the template has been written but + # apparmor is yet unaware of the profile and aa-disable aa-profile + # fails. If aa-disable learns to read profile files first this can + # be removed. + if self.ctxt['aa_profile_mode'] == 'disable': + log("Manually disabling the apparmor profile for {}." + "".format(self.ctxt['aa_profile'])) + self.manually_disable_aa_profile() + return + status_set('blocked', "Apparmor profile {} failed to be set to {}." + "".format(self.ctxt['aa_profile'], + self.ctxt['aa_profile_mode'])) + raise e + + +class MemcacheContext(OSContextGenerator): + """Memcache context + + This context provides options for configuring a local memcache client and + server for both IPv4 and IPv6 + """ + + def __init__(self, package=None): + """ + @param package: Package to examine to extrapolate OpenStack release. + Used when charms have no openstack-origin config + option (ie subordinates) + """ + self.package = package + + def __call__(self): + ctxt = {} + ctxt['use_memcache'] = enable_memcache(package=self.package) + if ctxt['use_memcache']: + # Trusty version of memcached does not support ::1 as a listen + # address so use host file entry instead + release = lsb_release()['DISTRIB_CODENAME'].lower() + if is_ipv6_disabled(): + if CompareHostReleases(release) > 'trusty': + ctxt['memcache_server'] = '127.0.0.1' + else: + ctxt['memcache_server'] = 'localhost' + ctxt['memcache_server_formatted'] = '127.0.0.1' + ctxt['memcache_port'] = '11211' + ctxt['memcache_url'] = '{}:{}'.format( + ctxt['memcache_server_formatted'], + ctxt['memcache_port']) + else: + if CompareHostReleases(release) > 'trusty': + ctxt['memcache_server'] = '::1' + else: + ctxt['memcache_server'] = 'ip6-localhost' + ctxt['memcache_server_formatted'] = '[::1]' + ctxt['memcache_port'] = '11211' + ctxt['memcache_url'] = 'inet6:{}:{}'.format( + ctxt['memcache_server_formatted'], + ctxt['memcache_port']) + return ctxt + + +class EnsureDirContext(OSContextGenerator): + ''' + Serves as a generic context to create a directory as a side-effect. + + Useful for software that supports drop-in files (.d) in conjunction + with config option-based templates. Examples include: + * OpenStack oslo.policy drop-in files; + * systemd drop-in config files; + * other software that supports overriding defaults with .d files + + Another use-case is when a subordinate generates a configuration for + primary to render in a separate directory. + + Some software requires a user to create a target directory to be + scanned for drop-in files with a specific format. This is why this + context is needed to do that before rendering a template. + ''' + + def __init__(self, dirname, **kwargs): + '''Used merely to ensure that a given directory exists.''' + self.dirname = dirname + self.kwargs = kwargs + + def __call__(self): + mkdir(self.dirname, **self.kwargs) + return {} + + +class VersionsContext(OSContextGenerator): + """Context to return the openstack and operating system versions. 
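+
+    For example (illustrative)::
+
+        {'openstack_release': 'queens',
+         'operating_system_release': 'bionic'}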
+
+    """
+    def __init__(self, pkg='python-keystone'):
+        """Initialise context.
+
+        :param pkg: Package to extrapolate openstack version from.
+        :type pkg: str
+        """
+        self.pkg = pkg
+
+    def __call__(self):
+        ostack = os_release(self.pkg)
+        osystem = lsb_release()['DISTRIB_CODENAME'].lower()
+        return {
+            'openstack_release': ostack,
+            'operating_system_release': osystem}
+
+
+class LogrotateContext(OSContextGenerator):
+    """Common context generator for logrotate."""
+
+    def __init__(self, location, interval, count):
+        """
+        :param location: Absolute path for the logrotate config file
+        :type location: str
+        :param interval: The interval for the rotations. Valid values are
+                         'daily', 'weekly', 'monthly', 'yearly'
+        :type interval: str
+        :param count: The logrotate count option configures the 'count' times
+                      the log files are being rotated before being removed.
+        :type count: int
+        """
+        self.location = location
+        self.interval = interval
+        self.count = 'rotate {}'.format(count)
+
+    def __call__(self):
+        ctxt = {
+            'logrotate_logs_location': self.location,
+            'logrotate_interval': self.interval,
+            'logrotate_count': self.count,
+        }
+        return ctxt
+
+
+class HostInfoContext(OSContextGenerator):
+    """Context to provide host information."""
+
+    def __init__(self, use_fqdn_hint_cb=None):
+        """Initialize HostInfoContext
+
+        :param use_fqdn_hint_cb: Callback whose return value is used to
+            populate `use_fqdn_hint`
+        :type use_fqdn_hint_cb: Callable[[], bool]
+        """
+        # Store callback used to get hint for whether FQDN should be used
+
+        # Depending on the workload a charm manages, the use of FQDN vs.
+        # shortname may be a deploy-time decision, i.e. behaviour can not
+        # change on charm upgrade or post-deployment configuration change.
+
+        # The hint is passed on as a flag in the context to allow the decision
+        # to be made in the Jinja2 configuration template.
+        self.use_fqdn_hint_cb = use_fqdn_hint_cb
+
+    def _get_canonical_name(self, name=None):
+        """Get the official FQDN of the host
+
+        The implementation of ``socket.getfqdn()`` in the standard Python
+        library does not exhaust all methods of getting the official name
+        of a host ref Python issue https://bugs.python.org/issue5004
+
+        This function mimics the behaviour of a call to ``hostname -f`` to
+        get the official FQDN but returns an empty string if it is
+        unsuccessful.
+
+        :param name: Shortname to get FQDN on
+        :type name: Optional[str]
+        :returns: The official FQDN for host or empty string ('')
+        :rtype: str
+        """
+        name = name or socket.gethostname()
+        fqdn = ''
+
+        if six.PY2:
+            exc = socket.error
+        else:
+            exc = OSError
+
+        try:
+            addrs = socket.getaddrinfo(
+                name, None, 0, socket.SOCK_DGRAM, 0, socket.AI_CANONNAME)
+        except exc:
+            pass
+        else:
+            for addr in addrs:
+                if addr[3]:
+                    if '.' in addr[3]:
+                        fqdn = addr[3]
+                    break
+        return fqdn
+
+    def __call__(self):
+        name = socket.gethostname()
+        ctxt = {
+            'host_fqdn': self._get_canonical_name(name) or name,
+            'host': name,
+            'use_fqdn_hint': (
+                self.use_fqdn_hint_cb() if self.use_fqdn_hint_cb else False)
+        }
+        return ctxt
+
+
+def validate_ovs_use_veth(*args, **kwargs):
+    """Validate OVS use veth setting for dhcp agents
+
+    The ovs_use_veth setting is considered immutable as it will break existing
+    deployments. Historically, we set ovs_use_veth=True in dhcp_agent.ini. It
+    turns out this is no longer necessary. Ideally, all new deployments would
+    have this set to False.
+
+    This function validates that the config value does not conflict with
+    previously deployed settings in dhcp_agent.ini.
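+
+    For example, if dhcp_agent.ini already contains ovs_use_veth = True
+    while the juju config requests False, the unit is marked blocked rather
+    than silently breaking DHCP for existing VMs.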
+
+    See LP Bug#1831935 for details.
+
+    :returns: Status state and message
+    :rtype: Union[(None, None), (string, string)]
+    """
+    existing_ovs_use_veth = (
+        DHCPAgentContext.get_existing_ovs_use_veth())
+    config_ovs_use_veth = DHCPAgentContext.parse_ovs_use_veth()
+
+    # Check settings are set and not None
+    if existing_ovs_use_veth is not None and config_ovs_use_veth is not None:
+        # Check for mismatch between existing config ini and juju config
+        if existing_ovs_use_veth != config_ovs_use_veth:
+            # Stop the line to avoid breakage
+            msg = (
+                "The existing setting for dhcp_agent.ini ovs_use_veth, {}, "
+                "does not match the juju config setting, {}. This may lead to "
+                "VMs being unable to receive a DHCP IP. Either change the "
+                "juju config setting or dhcp agents may need to be recreated."
+                .format(existing_ovs_use_veth, config_ovs_use_veth))
+            log(msg, ERROR)
+            return (
+                "blocked",
+                "Mismatched existing and configured ovs-use-veth. See log.")
+
+    # Everything is OK
+    return None, None
+
+
+class DHCPAgentContext(OSContextGenerator):
+
+    def __call__(self):
+        """Return the DHCPAgentContext.
+
+        Return all DHCP Agent INI related configuration, sourced from charm
+        config and from the 'dns_domain' setting on the neutron-plugin-api
+        relation (if one is set).
+
+        :returns: Dictionary context
+        :rtype: Dict
+        """
+
+        ctxt = {}
+        dnsmasq_flags = config('dnsmasq-flags')
+        if dnsmasq_flags:
+            ctxt['dnsmasq_flags'] = config_flags_parser(dnsmasq_flags)
+        ctxt['dns_servers'] = config('dns-servers')
+
+        neutron_api_settings = NeutronAPIContext()()
+
+        ctxt['debug'] = config('debug')
+        ctxt['instance_mtu'] = config('instance-mtu')
+        ctxt['ovs_use_veth'] = self.get_ovs_use_veth()
+
+        ctxt['enable_metadata_network'] = config('enable-metadata-network')
+        ctxt['enable_isolated_metadata'] = config('enable-isolated-metadata')
+
+        if neutron_api_settings.get('dns_domain'):
+            ctxt['dns_domain'] = neutron_api_settings.get('dns_domain')
+
+        # Override user supplied config for these plugins as these settings
+        # are mandatory
+        if config('plugin') in ['nvp', 'nsx', 'n1kv']:
+            ctxt['enable_metadata_network'] = True
+            ctxt['enable_isolated_metadata'] = True
+
+        return ctxt
+
+    @staticmethod
+    def get_existing_ovs_use_veth():
+        """Return existing ovs_use_veth setting from dhcp_agent.ini.
+
+        :returns: Boolean value of existing ovs_use_veth setting or None
+        :rtype: Optional[Bool]
+        """
+        DHCP_AGENT_INI = "/etc/neutron/dhcp_agent.ini"
+        existing_ovs_use_veth = None
+        # If there is a dhcp_agent.ini file read the current setting
+        if os.path.isfile(DHCP_AGENT_INI):
+            # config_ini does the right thing and returns None if the setting
+            # is commented.
+            existing_ovs_use_veth = (
+                config_ini(DHCP_AGENT_INI)["DEFAULT"].get("ovs_use_veth"))
+        # Convert to Bool if necessary
+        if isinstance(existing_ovs_use_veth, six.string_types):
+            return bool_from_string(existing_ovs_use_veth)
+        return existing_ovs_use_veth
+
+    @staticmethod
+    def parse_ovs_use_veth():
+        """Parse the ovs-use-veth config setting.
+
+        Parse the string config setting for ovs-use-veth and return a boolean
+        or None.
+
+        bool_from_string will raise a ValueError if the string is not falsy or
+        truthy.
+
+        :raises: ValueError for invalid input
+        :returns: Boolean value of ovs-use-veth or None
+        :rtype: Optional[Bool]
+        """
+        _config = config("ovs-use-veth")
+        # An unset parameter returns None. Just in case, we also check for
+        # an empty string: "". Note that plain truthiness would give the
+        # wrong answer here (the problem we are trying to avoid): the
+        # string "False" is truthy and "" is falsy.
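+        # e.g. (illustrative): "yes" -> True, "off" -> False,
+        # unset or "" -> None.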
+        if _config is None or not _config:
+            # Return None
+            return
+        # bool_from_string handles many variations of true and false strings
+        # as well as upper and lowercases including:
+        # ['y', 'yes', 'true', 't', 'on', 'n', 'no', 'false', 'f', 'off']
+        return bool_from_string(_config)
+
+    def get_ovs_use_veth(self):
+        """Return correct ovs_use_veth setting for use in dhcp_agent.ini.
+
+        Get the right value from config or existing dhcp_agent.ini file.
+        Existing has precedence. Attempt to default to "False" without
+        disrupting existing deployments. Handle existing deployments and
+        upgrades safely. See LP Bug#1831935
+
+        :returns: Value to use for ovs_use_veth setting
+        :rtype: Bool
+        """
+        _existing = self.get_existing_ovs_use_veth()
+        if _existing is not None:
+            return _existing
+
+        _config = self.parse_ovs_use_veth()
+        if _config is None:
+            # New better default
+            return False
+        else:
+            return _config
+
+
+EntityMac = collections.namedtuple('EntityMac', ['entity', 'mac'])
+
+
+def resolve_pci_from_mapping_config(config_key):
+    """Resolve local PCI devices from MAC addresses in mapping config.
+
+    Note that this function keeps a record of mac->PCI address lookups
+    in the local unit db as the devices will disappear from the system
+    once bound.
+
+    :param config_key: Configuration option key to parse data from
+    :type config_key: str
+    :returns: PCI device address to Tuple(entity, mac) map
+    :rtype: collections.OrderedDict[str,Tuple[str,str]]
+    """
+    devices = pci.PCINetDevices()
+    resolved_devices = collections.OrderedDict()
+    db = kv()
+    # Note that ``parse_data_port_mappings`` returns Dict regardless of input
+    for mac, entity in parse_data_port_mappings(config(config_key)).items():
+        pcidev = devices.get_device_from_mac(mac)
+        if pcidev:
+            # NOTE: store mac->pci allocation, as post-binding
+            #       it disappears from PCIDevices.
+            db.set(mac, pcidev.pci_address)
+            db.flush()
+
+        pci_address = db.get(mac)
+        if pci_address:
+            resolved_devices[pci_address] = EntityMac(entity, mac)
+
+    return resolved_devices
+
+
+class DPDKDeviceContext(OSContextGenerator):
+
+    def __init__(self, driver_key=None, bridges_key=None, bonds_key=None):
+        """Initialize DPDKDeviceContext.
+
+        :param driver_key: Key to use when retrieving driver config.
+        :type driver_key: str
+        :param bridges_key: Key to use when retrieving bridge config.
+        :type bridges_key: str
+        :param bonds_key: Key to use when retrieving bonds config.
+        :type bonds_key: str
+        """
+        self.driver_key = driver_key or 'dpdk-driver'
+        self.bridges_key = bridges_key or 'data-port'
+        self.bonds_key = bonds_key or 'dpdk-bond-mappings'
+
+    def __call__(self):
+        """Populate context.
+
+        :returns: context
+        :rtype: Dict[str,Union[str,collections.OrderedDict[str,str]]]
+        """
+        driver = config(self.driver_key)
+        if driver is None:
+            return {}
+        # Resolve PCI devices for both directly used devices (_bridges)
+        # and devices for use in dpdk bonds (_bonds)
+        pci_devices = resolve_pci_from_mapping_config(self.bridges_key)
+        pci_devices.update(resolve_pci_from_mapping_config(self.bonds_key))
+        return {'devices': pci_devices,
+                'driver': driver}
+
+
+class OVSDPDKDeviceContext(OSContextGenerator):
+
+    def __init__(self, bridges_key=None, bonds_key=None):
+        """Initialize OVSDPDKDeviceContext.
+
+        :param bridges_key: Key to use when retrieving bridge config.
+        :type bridges_key: str
+        :param bonds_key: Key to use when retrieving bonds config.
+        :type bonds_key: str
+        """
+        self.bridges_key = bridges_key or 'data-port'
+        self.bonds_key = bonds_key or 'dpdk-bond-mappings'
+
+    @staticmethod
+    def _parse_cpu_list(cpulist):
+        """Parses a linux cpulist for a numa node
+
+        :returns: list of cores
+        :rtype: List[int]
+        """
+        cores = []
+        ranges = cpulist.split(',')
+        for cpu_range in ranges:
+            if "-" in cpu_range:
+                cpu_min_max = cpu_range.split('-')
+                cores += range(int(cpu_min_max[0]),
+                               int(cpu_min_max[1]) + 1)
+            else:
+                cores.append(int(cpu_range))
+        return cores
+
+    def _numa_node_cores(self):
+        """Get map of numa node -> cpu core
+
+        :returns: map of numa node -> cpu core
+        :rtype: Dict[str,List[int]]
+        """
+        nodes = {}
+        node_regex = '/sys/devices/system/node/node*'
+        for node in glob.glob(node_regex):
+            index = node.lstrip('/sys/devices/system/node/node')
+            with open(os.path.join(node, 'cpulist')) as cpulist:
+                nodes[index] = self._parse_cpu_list(cpulist.read().strip())
+        return nodes
+
+    def cpu_mask(self):
+        """Get hex formatted CPU mask
+
+        The mask is based on using the first config:dpdk-socket-cores
+        cores of each NUMA node in the unit.
+        :returns: hex formatted CPU mask
+        :rtype: str
+        """
+        num_cores = config('dpdk-socket-cores')
+        mask = 0
+        for cores in self._numa_node_cores().values():
+            for core in cores[:num_cores]:
+                mask = mask | 1 << core
+        return format(mask, '#04x')
+
+    def socket_memory(self):
+        """Formatted list of socket memory configuration per NUMA node
+
+        :returns: socket memory configuration per NUMA node
+        :rtype: str
+        """
+        sm_size = config('dpdk-socket-memory')
+        node_regex = '/sys/devices/system/node/node*'
+        mem_list = [str(sm_size) for _ in glob.glob(node_regex)]
+        if mem_list:
+            return ','.join(mem_list)
+        else:
+            return str(sm_size)
+
+    def devices(self):
+        """List of PCI devices for use by DPDK
+
+        :returns: List of PCI devices for use by DPDK
+        :rtype: collections.OrderedDict[str,str]
+        """
+        pci_devices = resolve_pci_from_mapping_config(self.bridges_key)
+        pci_devices.update(resolve_pci_from_mapping_config(self.bonds_key))
+        return pci_devices
+
+    def _formatted_whitelist(self, flag):
+        """Flag formatted list of devices to whitelist
+
+        :param flag: flag format to use
+        :type flag: str
+        :rtype: str
+        """
+        whitelist = []
+        for device in self.devices():
+            whitelist.append(flag.format(device=device))
+        return ' '.join(whitelist)
+
+    def device_whitelist(self):
+        """Formatted list of devices to whitelist for dpdk
+
+        using the old style '-w' flag
+
+        :returns: devices to whitelist prefixed by '-w '
+        :rtype: str
+        """
+        return self._formatted_whitelist('-w {device}')
+
+    def pci_whitelist(self):
+        """Formatted list of devices to whitelist for dpdk
+
+        using the new style '--pci-whitelist' flag
+
+        :returns: devices to whitelist prefixed by '--pci-whitelist '
+        :rtype: str
+        """
+        return self._formatted_whitelist('--pci-whitelist {device}')
+
+    def __call__(self):
+        """Populate context.
+
+        :returns: context
+        :rtype: Dict[str,Union[bool,str]]
+        """
+        ctxt = {}
+        whitelist = self.device_whitelist()
+        if whitelist:
+            ctxt['dpdk_enabled'] = config('enable-dpdk')
+            ctxt['device_whitelist'] = self.device_whitelist()
+            ctxt['socket_memory'] = self.socket_memory()
+            ctxt['cpu_mask'] = self.cpu_mask()
+        return ctxt
+
+
+class BridgePortInterfaceMap(object):
+    """Build a map of bridge ports and interfaces from charm configuration.
+
+    NOTE: the handling of this detail in the charm is pre-deprecated.
+
+    The long term goal is for network connectivity detail to be modelled in
+    the server provisioning layer (such as MAAS) which in turn will provide
+    a Netplan YAML description that will be used to drive Open vSwitch.
+
+    Until we get to that reality the charm will need to configure this
+    detail based on application level configuration options.
+
+    There is an established way of mapping interfaces to ports and bridges
+    in the ``neutron-openvswitch`` and ``neutron-gateway`` charms and we
+    will carry that forward.
+
+    The relationship between bridge, port and interface(s):
+
+            +--------+
+            | bridge |
+            +--------+
+                |
+        +----------------+
+        | port aka. bond |
+        +----------------+
+           |         |
+          +-+       +-+
+          |i|       |i|
+          |n|       |n|
+          |t|       |t|
+          |0|       |N|
+          +-+       +-+
+    """
+    class interface_type(enum.Enum):
+        """Supported interface types.
+
+        Supported interface types can be found in the ``iface_types`` column
+        in the ``Open_vSwitch`` table on a running system.
+        """
+        dpdk = 'dpdk'
+        internal = 'internal'
+        system = 'system'
+
+        def __str__(self):
+            """Return string representation of value.
+
+            :returns: string representation of value.
+            :rtype: str
+            """
+            return self.value
+
+    def __init__(self, bridges_key=None, bonds_key=None, enable_dpdk_key=None,
+                 global_mtu=None):
+        """Initialize map.
+
+        :param bridges_key: Name of bridge:interface/port map config key
+                            (default: 'data-port')
+        :type bridges_key: Optional[str]
+        :param bonds_key: Name of port-name:interface map config key
+                          (default: 'dpdk-bond-mappings')
+        :type bonds_key: Optional[str]
+        :param enable_dpdk_key: Name of DPDK toggle config key
+                                (default: 'enable-dpdk')
+        :type enable_dpdk_key: Optional[str]
+        :param global_mtu: Set a MTU on all interfaces at map initialization.
+
+            The default is to have Open vSwitch get this from the underlying
+            interface as set up by bare metal provisioning.
+
+            Note that you can augment the MTU on an individual interface basis
+            like this:
+
+            ifdatamap = bpi.get_ifdatamap(bridge, port)
+            ifdatamap = {
+                port: {
+                    **ifdata,
+                    **{'mtu-request': my_individual_mtu_map[port]},
+                }
+                for port, ifdata in ifdatamap.items()
+            }
+        :type global_mtu: Optional[int]
+        """
+        bridges_key = bridges_key or 'data-port'
+        bonds_key = bonds_key or 'dpdk-bond-mappings'
+        enable_dpdk_key = enable_dpdk_key or 'enable-dpdk'
+        self._map = collections.defaultdict(
+            lambda: collections.defaultdict(dict))
+        self._ifname_mac_map = collections.defaultdict(list)
+        self._mac_ifname_map = {}
+        self._mac_pci_address_map = {}
+
+        # First we iterate over the list of physical interfaces visible to
+        # the system and update the interface-name-to-mac and
+        # mac-to-interface-name maps
+        for ifname in list_nics():
+            if not is_phy_iface(ifname):
+                continue
+            mac = get_nic_hwaddr(ifname)
+            self._ifname_mac_map[ifname] = [mac]
+            self._mac_ifname_map[mac] = ifname
+
+            # check if interface is part of a linux bond
+            _bond_name = get_bond_master(ifname)
+            if _bond_name and _bond_name != ifname:
+                log('Add linux bond "{}" to map for physical interface "{}" '
+                    'with mac "{}".'.format(_bond_name, ifname, mac),
+                    level=DEBUG)
+                # for bonds we want to be able to get a list of the mac
+                # addresses for the physical interfaces the bond is made
+                # up of.
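+                # e.g. (illustrative) bond0 -> ['52:54:00:aa:00:01',
+                #                               '52:54:00:aa:00:02']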
+                if self._ifname_mac_map.get(_bond_name):
+                    self._ifname_mac_map[_bond_name].append(mac)
+                else:
+                    self._ifname_mac_map[_bond_name] = [mac]
+
+        # In light of the pre-deprecation notice in the docstring of this
+        # class we will expose the ability to configure OVS bonds as a
+        # DPDK-only feature, but generally use the data structures internally.
+        if config(enable_dpdk_key):
+            # resolve PCI address of interfaces listed in the bridges and
+            # bonds charm configuration options. Note that for already bound
+            # interfaces the helper will retrieve MAC address from the unit
+            # KV store as the information is no longer available in sysfs.
+            _pci_bridge_mac = resolve_pci_from_mapping_config(
+                bridges_key)
+            _pci_bond_mac = resolve_pci_from_mapping_config(
+                bonds_key)
+
+            for pci_address, bridge_mac in _pci_bridge_mac.items():
+                if bridge_mac.mac in self._mac_ifname_map:
+                    # if we already have the interface name in our map it is
+                    # visible to the system and therefore not bound to DPDK
+                    continue
+                ifname = 'dpdk-{}'.format(
+                    hashlib.sha1(
+                        pci_address.encode('UTF-8')).hexdigest()[:7])
+                self._ifname_mac_map[ifname] = [bridge_mac.mac]
+                self._mac_ifname_map[bridge_mac.mac] = ifname
+                self._mac_pci_address_map[bridge_mac.mac] = pci_address
+
+            for pci_address, bond_mac in _pci_bond_mac.items():
+                # for bonds we want to be able to get a list of macs from
+                # the bond name and also get at the interface name made up
+                # of the hash of the PCI address
+                ifname = 'dpdk-{}'.format(
+                    hashlib.sha1(
+                        pci_address.encode('UTF-8')).hexdigest()[:7])
+                self._ifname_mac_map[bond_mac.entity].append(bond_mac.mac)
+                self._mac_ifname_map[bond_mac.mac] = ifname
+                self._mac_pci_address_map[bond_mac.mac] = pci_address
+
+        config_bridges = config(bridges_key) or ''
+        for bridge, ifname_or_mac in (
+                pair.split(':', 1)
+                for pair in config_bridges.split()):
+            if ':' in ifname_or_mac:
+                try:
+                    ifname = self.ifname_from_mac(ifname_or_mac)
+                except KeyError:
+                    # The interface is destined for a different unit in the
+                    # deployment.
+                    continue
+                macs = [ifname_or_mac]
+            else:
+                ifname = ifname_or_mac
+                macs = self.macs_from_ifname(ifname_or_mac)
+
+            portname = ifname
+            for mac in macs:
+                try:
+                    pci_address = self.pci_address_from_mac(mac)
+                    iftype = self.interface_type.dpdk
+                    ifname = self.ifname_from_mac(mac)
+                except KeyError:
+                    pci_address = None
+                    iftype = self.interface_type.system
+
+                self.add_interface(
+                    bridge, portname, ifname, iftype, pci_address, global_mtu)
+
+            if not macs:
+                # We have not mapped the interface and it is probably some
+                # sort of virtual interface. Our user has put it in the
+                # config on purpose, so let's carry out their wish.
+                # LP: #1884743
+                log('Add unmapped interface from config: name "{}" bridge "{}"'
+                    .format(ifname, bridge),
+                    level=DEBUG)
+                self.add_interface(
+                    bridge, ifname, ifname, self.interface_type.system, None,
+                    global_mtu)
+
+    def __getitem__(self, key):
+        """Provide a Dict-like interface, get value of item.
+
+        :param key: Key to look up value from.
+        :type key: any
+        :returns: Value
+        :rtype: any
+        """
+        return self._map.__getitem__(key)
+
+    def __iter__(self):
+        """Provide a Dict-like interface, iterate over keys.
+
+        :returns: Iterator
+        :rtype: Iterator[any]
+        """
+        return self._map.__iter__()
+
+    def __len__(self):
+        """Provide a Dict-like interface, measure the length of internal map.
+
+        :returns: Length
+        :rtype: int
+        """
+        return len(self._map)
+
+    def items(self):
+        """Provide a Dict-like interface, iterate over items.
+ + :returns: Key Value pairs + :rtype: Iterator[any, any] + """ + return self._map.items() + + def keys(self): + """Provide a Dict-like interface, iterate over keys. + + :returns: Iterator + :rtype: Iterator[any] + """ + return self._map.keys() + + def ifname_from_mac(self, mac): + """ + :returns: Name of interface + :rtype: str + :raises: KeyError + """ + return (get_bond_master(self._mac_ifname_map[mac]) or + self._mac_ifname_map[mac]) + + def macs_from_ifname(self, ifname): + """ + :returns: List of hardware address (MAC) of interface + :rtype: List[str] + :raises: KeyError + """ + return self._ifname_mac_map[ifname] + + def pci_address_from_mac(self, mac): + """ + :param mac: Hardware address (MAC) of interface + :type mac: str + :returns: PCI address of device associated with mac + :rtype: str + :raises: KeyError + """ + return self._mac_pci_address_map[mac] + + def add_interface(self, bridge, port, ifname, iftype, + pci_address, mtu_request): + """Add an interface to the map. + + :param bridge: Name of bridge on which the bond will be added + :type bridge: str + :param port: Name of port which will represent the bond on bridge + :type port: str + :param ifname: Name of interface that will make up the bonded port + :type ifname: str + :param iftype: Type of interface + :type iftype: BridgeBondMap.interface_type + :param pci_address: PCI address of interface + :type pci_address: Optional[str] + :param mtu_request: MTU to request for interface + :type mtu_request: Optional[int] + """ + self._map[bridge][port][ifname] = { + 'type': str(iftype), + } + if pci_address: + self._map[bridge][port][ifname].update({ + 'pci-address': pci_address, + }) + if mtu_request is not None: + self._map[bridge][port][ifname].update({ + 'mtu-request': str(mtu_request) + }) + + def get_ifdatamap(self, bridge, port): + """Get structure suitable for charmhelpers.contrib.network.ovs helpers. + + :param bridge: Name of bridge on which the port will be added + :type bridge: str + :param port: Name of port which will represent one or more interfaces + :type port: str + """ + for _bridge, _ports in self.items(): + for _port, _interfaces in _ports.items(): + if _bridge == bridge and _port == port: + ifdatamap = {} + for name, data in _interfaces.items(): + ifdatamap.update({ + name: { + 'type': data['type'], + }, + }) + if data.get('mtu-request') is not None: + ifdatamap[name].update({ + 'mtu_request': data['mtu-request'], + }) + if data.get('pci-address'): + ifdatamap[name].update({ + 'options': { + 'dpdk-devargs': data['pci-address'], + }, + }) + return ifdatamap + + +class BondConfig(object): + """Container and helpers for bond configuration options. + + Data is put into a dictionary and a convenient config get interface is + provided. + """ + + DEFAULT_LACP_CONFIG = { + 'mode': 'balance-tcp', + 'lacp': 'active', + 'lacp-time': 'fast' + } + ALL_BONDS = 'ALL_BONDS' + + BOND_MODES = ['active-backup', 'balance-slb', 'balance-tcp'] + BOND_LACP = ['active', 'passive', 'off'] + BOND_LACP_TIME = ['fast', 'slow'] + + def __init__(self, config_key=None): + """Parse specified configuration option. 
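+
+        The expected value format is a whitespace separated list of
+        [bond]:[mode]:[lacp]:[lacp-time] entries, e.g. (illustrative)
+        "dpdk-bond0:balance-slb:off:slow"; empty fields fall back to
+        the defaults in DEFAULT_LACP_CONFIG.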
+
+        :param config_key: Configuration key to retrieve data from
+                           (default: ``dpdk-bond-config``)
+        :type config_key: Optional[str]
+        """
+        self.config_key = config_key or 'dpdk-bond-config'
+
+        self.lacp_config = {
+            self.ALL_BONDS: copy.deepcopy(self.DEFAULT_LACP_CONFIG)
+        }
+
+        lacp_config = config(self.config_key)
+        if lacp_config:
+            lacp_config_map = lacp_config.split()
+            for entry in lacp_config_map:
+                bond, entry = entry.partition(':')[0:3:2]
+                if not bond:
+                    bond = self.ALL_BONDS
+
+                mode, entry = entry.partition(':')[0:3:2]
+                if not mode:
+                    mode = self.DEFAULT_LACP_CONFIG['mode']
+                assert mode in self.BOND_MODES, \
+                    "Bond mode {} is invalid".format(mode)
+
+                lacp, entry = entry.partition(':')[0:3:2]
+                if not lacp:
+                    lacp = self.DEFAULT_LACP_CONFIG['lacp']
+                assert lacp in self.BOND_LACP, \
+                    "Bond lacp {} is invalid".format(lacp)
+
+                lacp_time, entry = entry.partition(':')[0:3:2]
+                if not lacp_time:
+                    lacp_time = self.DEFAULT_LACP_CONFIG['lacp-time']
+                assert lacp_time in self.BOND_LACP_TIME, \
+                    "Bond lacp-time {} is invalid".format(lacp_time)
+
+                self.lacp_config[bond] = {
+                    'mode': mode,
+                    'lacp': lacp,
+                    'lacp-time': lacp_time
+                }
+
+    def get_bond_config(self, bond):
+        """Get the LACP configuration for a bond
+
+        :param bond: the bond name
+        :return: a dictionary with the configuration of the bond
+        :rtype: Dict[str,Dict[str,str]]
+        """
+        return self.lacp_config.get(bond, self.lacp_config[self.ALL_BONDS])
+
+    def get_ovs_portdata(self, bond):
+        """Get structure suitable for charmhelpers.contrib.network.ovs helpers.
+
+        :param bond: the bond name
+        :return: a dictionary with the configuration of the bond
+        :rtype: Dict[str,Union[str,Dict[str,str]]]
+        """
+        bond_config = self.get_bond_config(bond)
+        return {
+            'bond_mode': bond_config['mode'],
+            'lacp': bond_config['lacp'],
+            'other_config': {
+                'lacp-time': bond_config['lacp-time'],
+            },
+        }
+
+
+class SRIOVContext(OSContextGenerator):
+    """Provide context for configuring SR-IOV devices."""
+
+    class sriov_config_mode(enum.Enum):
+        """Mode in which SR-IOV is configured.
+
+        The configuration option identified by the ``numvfs_key`` parameter
+        is overloaded and defines in which mode the charm should interpret
+        the other SR-IOV-related configuration options.
+        """
+        auto = 'auto'
+        blanket = 'blanket'
+        explicit = 'explicit'
+
+    def _determine_numvfs(self, device, sriov_numvfs):
+        """Determine number of Virtual Functions (VFs) configured for device.
+
+        :param device: Object describing a PCI Network interface card (NIC).
+        :type device: sriov_netplan_shim.pci.PCINetDevice
+        :param sriov_numvfs: Number of VFs requested for blanket configuration.
+        :type sriov_numvfs: int
+        :returns: Number of VFs to configure for device
+        :rtype: Optional[int]
+        """
+
+        def _get_capped_numvfs(requested):
+            """Get a number of VFs that does not exceed individual card limits.
+
+            Depending on the make and model of NIC, the number of VFs
+            supported varies. Requesting more VFs than a card supports
+            would be a fatal error, so cap the requested number at the total
+            number of VFs each individual card supports.
+
+            :param requested: Number of VFs requested
+            :type requested: int
+            :returns: Number of VFs allowed
+            :rtype: int
+            """
+            actual = min(int(requested), int(device.sriov_totalvfs))
+            if actual < int(requested):
+                log('Requested VFs ({}) too high for device {}. Falling back '
+                    'to value supported by device: {}'
+                    .format(requested, device.interface_name,
+                            device.sriov_totalvfs),
+                    level=WARNING)
+            return actual
+
+        if self._sriov_config_mode == self.sriov_config_mode.auto:
+            # auto-mode
+            #
+            # If device mapping configuration is present, return information
+            # on cards with mapping.
+            #
+            # If no device mapping configuration is present, return
+            # information for all cards.
+            #
+            # The maximum number of VFs supported by card will be used.
+            if (self._sriov_mapped_devices and
+                    device.interface_name not in self._sriov_mapped_devices):
+                log('SR-IOV configured in auto mode: No device mapping for {}'
+                    .format(device.interface_name),
+                    level=DEBUG)
+                return
+            return _get_capped_numvfs(device.sriov_totalvfs)
+        elif self._sriov_config_mode == self.sriov_config_mode.blanket:
+            # blanket-mode
+            #
+            # User has specified a number of VFs that should apply to all
+            # cards with support for VFs.
+            return _get_capped_numvfs(sriov_numvfs)
+        elif self._sriov_config_mode == self.sriov_config_mode.explicit:
+            # explicit-mode
+            #
+            # User has given a list of interface names and associated number
+            # of VFs
+            if device.interface_name not in self._sriov_config_devices:
+                log('SR-IOV configured in explicit mode: No device:numvfs '
+                    'pair for device {}, skipping.'
+                    .format(device.interface_name),
+                    level=DEBUG)
+                return
+            return _get_capped_numvfs(
+                self._sriov_config_devices[device.interface_name])
+        else:
+            raise RuntimeError('This should not be reached')
+
+    def __init__(self, numvfs_key=None, device_mappings_key=None):
+        """Initialize map from PCI devices and configuration options.
+
+        :param numvfs_key: Config key for numvfs (default: 'sriov-numvfs')
+        :type numvfs_key: Optional[str]
+        :param device_mappings_key: Config key for device mappings
+                                    (default: 'sriov-device-mappings')
+        :type device_mappings_key: Optional[str]
+        :raises: RuntimeError
+        """
+        numvfs_key = numvfs_key or 'sriov-numvfs'
+        device_mappings_key = device_mappings_key or 'sriov-device-mappings'
+
+        devices = pci.PCINetDevices()
+        charm_config = config()
+        sriov_numvfs = charm_config.get(numvfs_key) or ''
+        sriov_device_mappings = charm_config.get(device_mappings_key) or ''
+
+        # create list of devices from sriov_device_mappings config option
+        self._sriov_mapped_devices = [
+            pair.split(':', 1)[1]
+            for pair in sriov_device_mappings.split()
+        ]
+
+        # create map of device:numvfs from sriov_numvfs config option
+        self._sriov_config_devices = {
+            ifname: numvfs for ifname, numvfs in (
+                pair.split(':', 1) for pair in sriov_numvfs.split()
+                if ':' in sriov_numvfs)
+        }
+
+        # determine configuration mode from contents of sriov_numvfs
+        if sriov_numvfs == 'auto':
+            self._sriov_config_mode = self.sriov_config_mode.auto
+        elif sriov_numvfs.isdigit():
+            self._sriov_config_mode = self.sriov_config_mode.blanket
+        elif ':' in sriov_numvfs:
+            self._sriov_config_mode = self.sriov_config_mode.explicit
+        else:
+            raise RuntimeError('Unable to determine mode of SR-IOV '
+                               'configuration.')
+
+        self._map = {
+            device.interface_name: self._determine_numvfs(device, sriov_numvfs)
+            for device in devices.pci_devices
+            if device.sriov and
+            self._determine_numvfs(device, sriov_numvfs) is not None
+        }
+
+    def __call__(self):
+        """Provide SR-IOV context.
+
+        :returns: Map interface name: min(configured, max) virtual functions.
+ Example: + { + 'eth0': 16, + 'eth1': 32, + 'eth2': 64, + } + :rtype: Dict[str,int] + """ + return self._map diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/exceptions.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/exceptions.py new file mode 100644 index 0000000000000000000000000000000000000000..f85ae4f4cdbb6567cbdd896338bf88fbf3c9c0ec --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/exceptions.py @@ -0,0 +1,21 @@ +# Copyright 2016 Canonical Ltd +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + + +class OSContextError(Exception): + """Raised when an error occurs during context generation. + + This exception is principally used in contrib.openstack.context + """ + pass diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/files/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/files/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..9df5f746fbdf5491c640a77df907b71817cbc5af --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/files/__init__.py @@ -0,0 +1,16 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# dummy __init__.py to fool syncer into thinking this is a syncable python +# module diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/files/check_haproxy.sh b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/files/check_haproxy.sh new file mode 100755 index 0000000000000000000000000000000000000000..1df55db4816ec51d6732d68ea0a1e25e6f7b116e --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/files/check_haproxy.sh @@ -0,0 +1,34 @@ +#!/bin/bash +#-------------------------------------------- +# This file is managed by Juju +#-------------------------------------------- +# +# Copyright 2009,2012 Canonical Ltd. 
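+#
+# Nagios check: reads the "stats auth" credentials from
+# /etc/haproxy/haproxy.cfg, probes each "server" entry via the CSV
+# status page on 127.0.0.1:8888, and reports CRITICAL (exit 2) for
+# any instance that is not UP, logging failures to
+# /var/log/nagios/check_haproxy.log.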
+# Author: Tom Haddon
+
+CRITICAL=0
+NOTACTIVE=''
+LOGFILE=/var/log/nagios/check_haproxy.log
+AUTH=$(grep -r "stats auth" /etc/haproxy/haproxy.cfg | awk 'NR==1{print $3}')
+
+typeset -i N_INSTANCES=0
+for appserver in $(awk '/^\s+server/{print $2}' /etc/haproxy/haproxy.cfg)
+do
+    N_INSTANCES=N_INSTANCES+1
+    output=$(/usr/lib/nagios/plugins/check_http -a ${AUTH} -I 127.0.0.1 -p 8888 -u '/;csv' --regex=",${appserver},.*,UP.*" -e ' 200 OK')
+    if [ $? != 0 ]; then
+        date >> $LOGFILE
+        echo "$output" >> $LOGFILE
+        /usr/lib/nagios/plugins/check_http -a ${AUTH} -I 127.0.0.1 -p 8888 -u '/;csv' -v | grep ",${appserver}," >> $LOGFILE 2>&1
+        CRITICAL=1
+        NOTACTIVE="${NOTACTIVE} $appserver"
+    fi
+done
+
+if [ $CRITICAL = 1 ]; then
+    echo "CRITICAL:${NOTACTIVE}"
+    exit 2
+fi
+
+echo "OK: All haproxy instances ($N_INSTANCES) looking good"
+exit 0
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/files/check_haproxy_queue_depth.sh b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/files/check_haproxy_queue_depth.sh
new file mode 100755
index 0000000000000000000000000000000000000000..91ce0246e66115994c3f518b36448f70100ecfc7
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/files/check_haproxy_queue_depth.sh
@@ -0,0 +1,30 @@
+#!/bin/bash
+#--------------------------------------------
+# This file is managed by Juju
+#--------------------------------------------
+#
+# Copyright 2009,2012 Canonical Ltd.
+# Author: Tom Haddon
+
+# These should be config options at some stage
+CURRQthrsh=0
+MAXQthrsh=100
+
+AUTH=$(grep -r "stats auth" /etc/haproxy/haproxy.cfg | awk 'NR==1{print $3}')
+
+HAPROXYSTATS=$(/usr/lib/nagios/plugins/check_http -a ${AUTH} -I 127.0.0.1 -p 8888 -u '/;csv' -v)
+
+for BACKEND in $(echo $HAPROXYSTATS | xargs -n1 | grep BACKEND | awk -F , '{print $1}')
+do
+    CURRQ=$(echo "$HAPROXYSTATS" | grep $BACKEND | grep BACKEND | cut -d , -f 3)
+    MAXQ=$(echo "$HAPROXYSTATS" | grep $BACKEND | grep BACKEND | cut -d , -f 4)
+
+    if [[ $CURRQ -gt $CURRQthrsh || $MAXQ -gt $MAXQthrsh ]] ; then
+        echo "CRITICAL: queue depth for $BACKEND - CURRENT:$CURRQ MAX:$MAXQ"
+        exit 2
+    fi
+done
+
+echo "OK: All haproxy queue depths looking good"
+exit 0
+
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/ha/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/ha/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..9b088de84e4b288b551603816fc10eebfa7b1503
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/ha/__init__.py
@@ -0,0 +1,13 @@
+# Copyright 2016 Canonical Ltd
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
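For reference, a minimal Python sketch of the queue-depth check above, assuming haproxy's standard CSV stats layout (qcur and qmax in the third and fourth columns); the function name and thresholds here are illustrative, not part of charm-helpers:

    # Sketch only; mirrors the logic of check_haproxy_queue_depth.sh.
    import csv
    import io

    CURRQ_THRESHOLD = 0    # CURRQthrsh in the script above
    MAXQ_THRESHOLD = 100   # MAXQthrsh in the script above

    def haproxy_queue_alerts(stats_csv):
        """Yield (backend, qcur, qmax) for BACKEND rows over threshold."""
        for row in csv.reader(io.StringIO(stats_csv)):
            # haproxy CSV fields: 1=pxname, 2=svname, 3=qcur, 4=qmax
            if len(row) >= 4 and row[1] == 'BACKEND':
                qcur, qmax = int(row[2] or 0), int(row[3] or 0)
                if qcur > CURRQ_THRESHOLD or qmax > MAXQ_THRESHOLD:
                    yield row[0], qcur, qmax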
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/ha/utils.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/ha/utils.py new file mode 100644 index 0000000000000000000000000000000000000000..a5cbdf535d495a09a0b91f41fdda09862e34140d --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/ha/utils.py @@ -0,0 +1,348 @@ +# Copyright 2014-2016 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# +# Copyright 2016 Canonical Ltd. +# +# Authors: +# Openstack Charmers < +# + +""" +Helpers for high availability. +""" + +import hashlib +import json + +import re + +from charmhelpers.core.hookenv import ( + expected_related_units, + log, + relation_set, + charm_name, + config, + status_set, + DEBUG, +) + +from charmhelpers.core.host import ( + lsb_release +) + +from charmhelpers.contrib.openstack.ip import ( + resolve_address, + is_ipv6, +) + +from charmhelpers.contrib.network.ip import ( + get_iface_for_address, + get_netmask_for_address, +) + +from charmhelpers.contrib.hahelpers.cluster import ( + get_hacluster_config +) + +JSON_ENCODE_OPTIONS = dict( + sort_keys=True, + allow_nan=False, + indent=None, + separators=(',', ':'), +) + +VIP_GROUP_NAME = 'grp_{service}_vips' +DNSHA_GROUP_NAME = 'grp_{service}_hostnames' + + +class DNSHAException(Exception): + """Raised when an error occurs setting up DNS HA + """ + + pass + + +def update_dns_ha_resource_params(resources, resource_params, + relation_id=None, + crm_ocf='ocf:maas:dns'): + """ Configure DNS-HA resources based on provided configuration and + update resource dictionaries for the HA relation. + + @param resources: Pointer to dictionary of resources. + Usually instantiated in ha_joined(). + @param resource_params: Pointer to dictionary of resource parameters. + Usually instantiated in ha_joined() + @param relation_id: Relation ID of the ha relation + @param crm_ocf: Corosync Open Cluster Framework resource agent to use for + DNS HA + """ + _relation_data = {'resources': {}, 'resource_params': {}} + update_hacluster_dns_ha(charm_name(), + _relation_data, + crm_ocf) + resources.update(_relation_data['resources']) + resource_params.update(_relation_data['resource_params']) + relation_set(relation_id=relation_id, groups=_relation_data['groups']) + + +def assert_charm_supports_dns_ha(): + """Validate prerequisites for DNS HA + The MAAS client is only available on Xenial or greater + + :raises DNSHAException: if release is < 16.04 + """ + if lsb_release().get('DISTRIB_RELEASE') < '16.04': + msg = ('DNS HA is only supported on 16.04 and greater ' + 'versions of Ubuntu.') + status_set('blocked', msg) + raise DNSHAException(msg) + return True + + +def expect_ha(): + """ Determine if the unit expects to be in HA + + Check juju goal-state if ha relation is expected, check for VIP or dns-ha + settings which indicate the unit should expect to be related to hacluster. 
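+
+    For example, a unit with the 'vip' config option set is expected to
+    be in HA even before any 'ha' relation has been joined.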
+ + @returns boolean + """ + ha_related_units = [] + try: + ha_related_units = list(expected_related_units(reltype='ha')) + except (NotImplementedError, KeyError): + pass + return len(ha_related_units) > 0 or config('vip') or config('dns-ha') + + +def generate_ha_relation_data(service, + extra_settings=None, + haproxy_enabled=True): + """ Generate relation data for ha relation + + Based on configuration options and unit interfaces, generate a json + encoded dict of relation data items for the hacluster relation, + providing configuration for DNS HA or VIP's + haproxy clone sets. + + Example of supplying additional settings:: + + COLO_CONSOLEAUTH = 'inf: res_nova_consoleauth grp_nova_vips' + AGENT_CONSOLEAUTH = 'ocf:openstack:nova-consoleauth' + AGENT_CA_PARAMS = 'op monitor interval="5s"' + + ha_console_settings = { + 'colocations': {'vip_consoleauth': COLO_CONSOLEAUTH}, + 'init_services': {'res_nova_consoleauth': 'nova-consoleauth'}, + 'resources': {'res_nova_consoleauth': AGENT_CONSOLEAUTH}, + 'resource_params': {'res_nova_consoleauth': AGENT_CA_PARAMS}) + generate_ha_relation_data('nova', extra_settings=ha_console_settings) + + + @param service: Name of the service being configured + @param extra_settings: Dict of additional resource data + @returns dict: json encoded data for use with relation_set + """ + _relation_data = {'resources': {}, 'resource_params': {}} + + if haproxy_enabled: + _meta = 'meta migration-threshold="INFINITY" failure-timeout="5s"' + _haproxy_res = 'res_{}_haproxy'.format(service) + _relation_data['resources'] = {_haproxy_res: 'lsb:haproxy'} + _relation_data['resource_params'] = { + _haproxy_res: '{} op monitor interval="5s"'.format(_meta) + } + _relation_data['init_services'] = {_haproxy_res: 'haproxy'} + _relation_data['clones'] = { + 'cl_{}_haproxy'.format(service): _haproxy_res + } + + if extra_settings: + for k, v in extra_settings.items(): + if _relation_data.get(k): + _relation_data[k].update(v) + else: + _relation_data[k] = v + + if config('dns-ha'): + update_hacluster_dns_ha(service, _relation_data) + else: + update_hacluster_vip(service, _relation_data) + + return { + 'json_{}'.format(k): json.dumps(v, **JSON_ENCODE_OPTIONS) + for k, v in _relation_data.items() if v + } + + +def update_hacluster_dns_ha(service, relation_data, + crm_ocf='ocf:maas:dns'): + """ Configure DNS-HA resources based on provided configuration + + @param service: Name of the service being configured + @param relation_data: Pointer to dictionary of relation data. + @param crm_ocf: Corosync Open Cluster Framework resource agent to use for + DNS HA + """ + # Validate the charm environment for DNS HA + assert_charm_supports_dns_ha() + + settings = ['os-admin-hostname', 'os-internal-hostname', + 'os-public-hostname', 'os-access-hostname'] + + # Check which DNS settings are set and update dictionaries + hostname_group = [] + for setting in settings: + hostname = config(setting) + if hostname is None: + log('DNS HA: Hostname setting {} is None. Ignoring.' + ''.format(setting), + DEBUG) + continue + m = re.search('os-(.+?)-hostname', setting) + if m: + endpoint_type = m.group(1) + # resolve_address's ADDRESS_MAP uses 'int' not 'internal' + if endpoint_type == 'internal': + endpoint_type = 'int' + else: + msg = ('Unexpected DNS hostname setting: {}. 
' + 'Cannot determine endpoint_type name' + ''.format(setting)) + status_set('blocked', msg) + raise DNSHAException(msg) + + hostname_key = 'res_{}_{}_hostname'.format(service, endpoint_type) + if hostname_key in hostname_group: + log('DNS HA: Resource {}: {} already exists in ' + 'hostname group - skipping'.format(hostname_key, hostname), + DEBUG) + continue + + hostname_group.append(hostname_key) + relation_data['resources'][hostname_key] = crm_ocf + relation_data['resource_params'][hostname_key] = ( + 'params fqdn="{}" ip_address="{}"' + .format(hostname, resolve_address(endpoint_type=endpoint_type, + override=False))) + + if len(hostname_group) >= 1: + log('DNS HA: Hostname group is set with {} as members. ' + 'Informing the ha relation'.format(' '.join(hostname_group)), + DEBUG) + relation_data['groups'] = { + DNSHA_GROUP_NAME.format(service=service): ' '.join(hostname_group) + } + else: + msg = 'DNS HA: Hostname group has no members.' + status_set('blocked', msg) + raise DNSHAException(msg) + + +def get_vip_settings(vip): + """Calculate which nic is on the correct network for the given vip. + + If nic or netmask discovery fail then fallback to using charm supplied + config. If fallback is used this is indicated via the fallback variable. + + @param vip: VIP to lookup nic and cidr for. + @returns (str, str, bool): eg (iface, netmask, fallback) + """ + iface = get_iface_for_address(vip) + netmask = get_netmask_for_address(vip) + fallback = False + if iface is None: + iface = config('vip_iface') + fallback = True + if netmask is None: + netmask = config('vip_cidr') + fallback = True + return iface, netmask, fallback + + +def update_hacluster_vip(service, relation_data): + """ Configure VIP resources based on provided configuration + + @param service: Name of the service being configured + @param relation_data: Pointer to dictionary of relation data. + """ + cluster_config = get_hacluster_config() + vip_group = [] + vips_to_delete = [] + for vip in cluster_config['vip'].split(): + if is_ipv6(vip): + res_vip = 'ocf:heartbeat:IPv6addr' + vip_params = 'ipv6addr' + else: + res_vip = 'ocf:heartbeat:IPaddr2' + vip_params = 'ip' + + iface, netmask, fallback = get_vip_settings(vip) + + vip_monitoring = 'op monitor timeout="20s" interval="10s" depth="0"' + if iface is not None: + # NOTE(jamespage): Delete old VIP resources + # Old style naming encoding iface in name + # does not work well in environments where + # interface/subnet wiring is not consistent + vip_key = 'res_{}_{}_vip'.format(service, iface) + if vip_key in vips_to_delete: + vip_key = '{}_{}'.format(vip_key, vip_params) + vips_to_delete.append(vip_key) + + vip_key = 'res_{}_{}_vip'.format( + service, + hashlib.sha1(vip.encode('UTF-8')).hexdigest()[:7]) + + relation_data['resources'][vip_key] = res_vip + # NOTE(jamespage): + # Use option provided vip params if these where used + # instead of auto-detected values + if fallback: + relation_data['resource_params'][vip_key] = ( + 'params {ip}="{vip}" cidr_netmask="{netmask}" ' + 'nic="{iface}" {vip_monitoring}'.format( + ip=vip_params, + vip=vip, + iface=iface, + netmask=netmask, + vip_monitoring=vip_monitoring)) + else: + # NOTE(jamespage): + # let heartbeat figure out which interface and + # netmask to configure, which works nicely + # when network interface naming is not + # consistent across units. 
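+                # For example (IP illustrative), this renders to:
+                #   params ip="10.5.0.100" op monitor timeout="20s"
+                #   interval="10s" depth="0"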
+ relation_data['resource_params'][vip_key] = ( + 'params {ip}="{vip}" {vip_monitoring}'.format( + ip=vip_params, + vip=vip, + vip_monitoring=vip_monitoring)) + + vip_group.append(vip_key) + + if vips_to_delete: + try: + relation_data['delete_resources'].extend(vips_to_delete) + except KeyError: + relation_data['delete_resources'] = vips_to_delete + + if len(vip_group) >= 1: + key = VIP_GROUP_NAME.format(service=service) + try: + relation_data['groups'][key] = ' '.join(vip_group) + except KeyError: + relation_data['groups'] = { + key: ' '.join(vip_group) + } diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/ip.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/ip.py new file mode 100644 index 0000000000000000000000000000000000000000..723aebc172e94a5b00c385d9861dd4d45c1bc753 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/ip.py @@ -0,0 +1,197 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +from charmhelpers.core.hookenv import ( + NoNetworkBinding, + config, + unit_get, + service_name, + network_get_primary_address, +) +from charmhelpers.contrib.network.ip import ( + get_address_in_network, + is_address_in_network, + is_ipv6, + get_ipv6_addr, + resolve_network_cidr, +) +from charmhelpers.contrib.hahelpers.cluster import is_clustered + +PUBLIC = 'public' +INTERNAL = 'int' +ADMIN = 'admin' +ACCESS = 'access' + +ADDRESS_MAP = { + PUBLIC: { + 'binding': 'public', + 'config': 'os-public-network', + 'fallback': 'public-address', + 'override': 'os-public-hostname', + }, + INTERNAL: { + 'binding': 'internal', + 'config': 'os-internal-network', + 'fallback': 'private-address', + 'override': 'os-internal-hostname', + }, + ADMIN: { + 'binding': 'admin', + 'config': 'os-admin-network', + 'fallback': 'private-address', + 'override': 'os-admin-hostname', + }, + ACCESS: { + 'binding': 'access', + 'config': 'access-network', + 'fallback': 'private-address', + 'override': 'os-access-hostname', + }, +} + + +def canonical_url(configs, endpoint_type=PUBLIC): + """Returns the correct HTTP URL to this host given the state of HTTPS + configuration, hacluster and charm configuration. + + :param configs: OSTemplateRenderer config templating object to inspect + for a complete https context. + :param endpoint_type: str endpoint type to resolve. + :param returns: str base URL for services on the current service unit. + """ + scheme = _get_scheme(configs) + + address = resolve_address(endpoint_type) + if is_ipv6(address): + address = "[{}]".format(address) + + return '%s://%s' % (scheme, address) + + +def _get_scheme(configs): + """Returns the scheme to use for the url (either http or https) + depending upon whether https is in the configs value. + + :param configs: OSTemplateRenderer config templating object to inspect + for a complete https context. 
+ :returns: either 'http' or 'https' depending on whether https is + configured within the configs context. + """ + scheme = 'http' + if configs and 'https' in configs.complete_contexts(): + scheme = 'https' + return scheme + + +def _get_address_override(endpoint_type=PUBLIC): + """Returns any address overrides that the user has defined based on the + endpoint type. + + Note: this function allows for the service name to be inserted into the + address if the user specifies {service_name}.somehost.org. + + :param endpoint_type: the type of endpoint to retrieve the override + value for. + :returns: any endpoint address or hostname that the user has overridden + or None if an override is not present. + """ + override_key = ADDRESS_MAP[endpoint_type]['override'] + addr_override = config(override_key) + if not addr_override: + return None + else: + return addr_override.format(service_name=service_name()) + + +def resolve_address(endpoint_type=PUBLIC, override=True): + """Return unit address depending on net config. + + If unit is clustered with vip(s) and has net splits defined, return vip on + correct network. If clustered with no nets defined, return primary vip. + + If not clustered, return unit address ensuring address is on configured net + split if one is configured, or a Juju 2.0 extra-binding has been used. + + :param endpoint_type: Network endpoing type + :param override: Accept hostname overrides or not + """ + resolved_address = None + if override: + resolved_address = _get_address_override(endpoint_type) + if resolved_address: + return resolved_address + + vips = config('vip') + if vips: + vips = vips.split() + + net_type = ADDRESS_MAP[endpoint_type]['config'] + net_addr = config(net_type) + net_fallback = ADDRESS_MAP[endpoint_type]['fallback'] + binding = ADDRESS_MAP[endpoint_type]['binding'] + clustered = is_clustered() + + if clustered and vips: + if net_addr: + for vip in vips: + if is_address_in_network(net_addr, vip): + resolved_address = vip + break + else: + # NOTE: endeavour to check vips against network space + # bindings + try: + bound_cidr = resolve_network_cidr( + network_get_primary_address(binding) + ) + for vip in vips: + if is_address_in_network(bound_cidr, vip): + resolved_address = vip + break + except (NotImplementedError, NoNetworkBinding): + # If no net-splits configured and no support for extra + # bindings/network spaces so we expect a single vip + resolved_address = vips[0] + else: + if config('prefer-ipv6'): + fallback_addr = get_ipv6_addr(exc_list=vips)[0] + else: + fallback_addr = unit_get(net_fallback) + + if net_addr: + resolved_address = get_address_in_network(net_addr, fallback_addr) + else: + # NOTE: only try to use extra bindings if legacy network + # configuration is not in use + try: + resolved_address = network_get_primary_address(binding) + except (NotImplementedError, NoNetworkBinding): + resolved_address = fallback_addr + + if resolved_address is None: + raise ValueError("Unable to resolve a suitable IP address based on " + "charm state and configuration. 
(net_type=%s, " + "clustered=%s)" % (net_type, clustered)) + + return resolved_address + + +def get_vip_in_network(network): + matching_vip = None + vips = config('vip') + if vips: + for vip in vips.split(): + if is_address_in_network(network, vip): + matching_vip = vip + return matching_vip diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/keystone.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/keystone.py new file mode 100644 index 0000000000000000000000000000000000000000..d7e02ccd90155710901e444482b589aa264158e6 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/keystone.py @@ -0,0 +1,178 @@ +#!/usr/bin/python +# +# Copyright 2017 Canonical Ltd +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import six +from charmhelpers.fetch import apt_install +from charmhelpers.contrib.openstack.context import IdentityServiceContext +from charmhelpers.core.hookenv import ( + log, + ERROR, +) + + +def get_api_suffix(api_version): + """Return the formatted api suffix for the given version + @param api_version: version of the keystone endpoint + @returns the api suffix formatted according to the given api + version + """ + return 'v2.0' if api_version in (2, "2", "2.0") else 'v3' + + +def format_endpoint(schema, addr, port, api_version): + """Return a formatted keystone endpoint + @param schema: http or https + @param addr: ipv4/ipv6 host of the keystone service + @param port: port of the keystone service + @param api_version: 2 or 3 + @returns a fully formatted keystone endpoint + """ + return '{}://{}:{}/{}/'.format(schema, addr, port, + get_api_suffix(api_version)) + + +def get_keystone_manager(endpoint, api_version, **kwargs): + """Return a keystonemanager for the correct API version + + @param endpoint: the keystone endpoint to point client at + @param api_version: version of the keystone api the client should use + @param kwargs: token or username/tenant/password information + @returns keystonemanager class used for interrogating keystone + """ + if api_version == 2: + return KeystoneManager2(endpoint, **kwargs) + if api_version == 3: + return KeystoneManager3(endpoint, **kwargs) + raise ValueError('No manager found for api version {}'.format(api_version)) + + +def get_keystone_manager_from_identity_service_context(): + """Return a keystonmanager generated from a + instance of charmhelpers.contrib.openstack.context.IdentityServiceContext + @returns keystonamenager instance + """ + context = IdentityServiceContext()() + if not context: + msg = "Identity service context cannot be generated" + log(msg, level=ERROR) + raise ValueError(msg) + + endpoint = format_endpoint(context['service_protocol'], + context['service_host'], + context['service_port'], + context['api_version']) + + if context['api_version'] in (2, "2.0"): + api_version = 2 + else: + api_version = 3 + + return get_keystone_manager(endpoint, api_version, + 
username=context['admin_user'], + password=context['admin_password'], + tenant_name=context['admin_tenant_name']) + + +class KeystoneManager(object): + + def resolve_service_id(self, service_name=None, service_type=None): + """Find the service_id of a given service""" + services = [s._info for s in self.api.services.list()] + + service_name = service_name.lower() + for s in services: + name = s['name'].lower() + if service_type and service_name: + if (service_name == name and service_type == s['type']): + return s['id'] + elif service_name and service_name == name: + return s['id'] + elif service_type and service_type == s['type']: + return s['id'] + return None + + def service_exists(self, service_name=None, service_type=None): + """Determine if the given service exists on the service list""" + return self.resolve_service_id(service_name, service_type) is not None + + +class KeystoneManager2(KeystoneManager): + + def __init__(self, endpoint, **kwargs): + try: + from keystoneclient.v2_0 import client + from keystoneclient.auth.identity import v2 + from keystoneclient import session + except ImportError: + if six.PY2: + apt_install(["python-keystoneclient"], fatal=True) + else: + apt_install(["python3-keystoneclient"], fatal=True) + + from keystoneclient.v2_0 import client + from keystoneclient.auth.identity import v2 + from keystoneclient import session + + self.api_version = 2 + + token = kwargs.get("token", None) + if token: + api = client.Client(endpoint=endpoint, token=token) + else: + auth = v2.Password(username=kwargs.get("username"), + password=kwargs.get("password"), + tenant_name=kwargs.get("tenant_name"), + auth_url=endpoint) + sess = session.Session(auth=auth) + api = client.Client(session=sess) + + self.api = api + + +class KeystoneManager3(KeystoneManager): + + def __init__(self, endpoint, **kwargs): + try: + from keystoneclient.v3 import client + from keystoneclient.auth import token_endpoint + from keystoneclient import session + from keystoneclient.auth.identity import v3 + except ImportError: + if six.PY2: + apt_install(["python-keystoneclient"], fatal=True) + else: + apt_install(["python3-keystoneclient"], fatal=True) + + from keystoneclient.v3 import client + from keystoneclient.auth import token_endpoint + from keystoneclient import session + from keystoneclient.auth.identity import v3 + + self.api_version = 3 + + token = kwargs.get("token", None) + if token: + auth = token_endpoint.Token(endpoint=endpoint, + token=token) + sess = session.Session(auth=auth) + else: + auth = v3.Password(auth_url=endpoint, + user_id=kwargs.get("username"), + password=kwargs.get("password"), + project_id=kwargs.get("tenant_name")) + sess = session.Session(auth=auth) + + self.api = client.Client(session=sess) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/neutron.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/neutron.py new file mode 100644 index 0000000000000000000000000000000000000000..fb5607f3e73159d90236b2d7a4051aa82119e889 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/neutron.py @@ -0,0 +1,359 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# Various utilies for dealing with Neutron and the renaming from Quantum. + +import six +from subprocess import check_output + +from charmhelpers.core.hookenv import ( + config, + log, + ERROR, +) + +from charmhelpers.contrib.openstack.utils import ( + os_release, + CompareOpenStackReleases, +) + + +def headers_package(): + """Ensures correct linux-headers for running kernel are installed, + for building DKMS package""" + kver = check_output(['uname', '-r']).decode('UTF-8').strip() + return 'linux-headers-%s' % kver + + +QUANTUM_CONF_DIR = '/etc/quantum' + + +def kernel_version(): + """ Retrieve the current major kernel version as a tuple e.g. (3, 13) """ + kver = check_output(['uname', '-r']).decode('UTF-8').strip() + kver = kver.split('.') + return (int(kver[0]), int(kver[1])) + + +def determine_dkms_package(): + """ Determine which DKMS package should be used based on kernel version """ + # NOTE: 3.13 kernels have support for GRE and VXLAN native + if kernel_version() >= (3, 13): + return [] + else: + return [headers_package(), 'openvswitch-datapath-dkms'] + + +# legacy + + +def quantum_plugins(): + return { + 'ovs': { + 'config': '/etc/quantum/plugins/openvswitch/' + 'ovs_quantum_plugin.ini', + 'driver': 'quantum.plugins.openvswitch.ovs_quantum_plugin.' + 'OVSQuantumPluginV2', + 'contexts': [], + 'services': ['quantum-plugin-openvswitch-agent'], + 'packages': [determine_dkms_package(), + ['quantum-plugin-openvswitch-agent']], + 'server_packages': ['quantum-server', + 'quantum-plugin-openvswitch'], + 'server_services': ['quantum-server'] + }, + 'nvp': { + 'config': '/etc/quantum/plugins/nicira/nvp.ini', + 'driver': 'quantum.plugins.nicira.nicira_nvp_plugin.' + 'QuantumPlugin.NvpPluginV2', + 'contexts': [], + 'services': [], + 'packages': [], + 'server_packages': ['quantum-server', + 'quantum-plugin-nicira'], + 'server_services': ['quantum-server'] + } + } + + +NEUTRON_CONF_DIR = '/etc/neutron' + + +def neutron_plugins(): + release = os_release('nova-common') + plugins = { + 'ovs': { + 'config': '/etc/neutron/plugins/openvswitch/' + 'ovs_neutron_plugin.ini', + 'driver': 'neutron.plugins.openvswitch.ovs_neutron_plugin.' + 'OVSNeutronPluginV2', + 'contexts': [], + 'services': ['neutron-plugin-openvswitch-agent'], + 'packages': [determine_dkms_package(), + ['neutron-plugin-openvswitch-agent']], + 'server_packages': ['neutron-server', + 'neutron-plugin-openvswitch'], + 'server_services': ['neutron-server'] + }, + 'nvp': { + 'config': '/etc/neutron/plugins/nicira/nvp.ini', + 'driver': 'neutron.plugins.nicira.nicira_nvp_plugin.' 
+ 'NeutronPlugin.NvpPluginV2', + 'contexts': [], + 'services': [], + 'packages': [], + 'server_packages': ['neutron-server', + 'neutron-plugin-nicira'], + 'server_services': ['neutron-server'] + }, + 'nsx': { + 'config': '/etc/neutron/plugins/vmware/nsx.ini', + 'driver': 'vmware', + 'contexts': [], + 'services': [], + 'packages': [], + 'server_packages': ['neutron-server', + 'neutron-plugin-vmware'], + 'server_services': ['neutron-server'] + }, + 'n1kv': { + 'config': '/etc/neutron/plugins/cisco/cisco_plugins.ini', + 'driver': 'neutron.plugins.cisco.network_plugin.PluginV2', + 'contexts': [], + 'services': [], + 'packages': [determine_dkms_package(), + ['neutron-plugin-cisco']], + 'server_packages': ['neutron-server', + 'neutron-plugin-cisco'], + 'server_services': ['neutron-server'] + }, + 'Calico': { + 'config': '/etc/neutron/plugins/ml2/ml2_conf.ini', + 'driver': 'neutron.plugins.ml2.plugin.Ml2Plugin', + 'contexts': [], + 'services': ['calico-felix', + 'bird', + 'neutron-dhcp-agent', + 'nova-api-metadata', + 'etcd'], + 'packages': [determine_dkms_package(), + ['calico-compute', + 'bird', + 'neutron-dhcp-agent', + 'nova-api-metadata', + 'etcd']], + 'server_packages': ['neutron-server', 'calico-control', 'etcd'], + 'server_services': ['neutron-server', 'etcd'] + }, + 'vsp': { + 'config': '/etc/neutron/plugins/nuage/nuage_plugin.ini', + 'driver': 'neutron.plugins.nuage.plugin.NuagePlugin', + 'contexts': [], + 'services': [], + 'packages': [], + 'server_packages': ['neutron-server', 'neutron-plugin-nuage'], + 'server_services': ['neutron-server'] + }, + 'plumgrid': { + 'config': '/etc/neutron/plugins/plumgrid/plumgrid.ini', + 'driver': ('neutron.plugins.plumgrid.plumgrid_plugin' + '.plumgrid_plugin.NeutronPluginPLUMgridV2'), + 'contexts': [], + 'services': [], + 'packages': ['plumgrid-lxc', + 'iovisor-dkms'], + 'server_packages': ['neutron-server', + 'neutron-plugin-plumgrid'], + 'server_services': ['neutron-server'] + }, + 'midonet': { + 'config': '/etc/neutron/plugins/midonet/midonet.ini', + 'driver': 'midonet.neutron.plugin.MidonetPluginV2', + 'contexts': [], + 'services': [], + 'packages': [determine_dkms_package()], + 'server_packages': ['neutron-server', + 'python-neutron-plugin-midonet'], + 'server_services': ['neutron-server'] + } + } + if CompareOpenStackReleases(release) >= 'icehouse': + # NOTE: patch in ml2 plugin for icehouse onwards + plugins['ovs']['config'] = '/etc/neutron/plugins/ml2/ml2_conf.ini' + plugins['ovs']['driver'] = 'neutron.plugins.ml2.plugin.Ml2Plugin' + plugins['ovs']['server_packages'] = ['neutron-server', + 'neutron-plugin-ml2'] + # NOTE: patch in vmware renames nvp->nsx for icehouse onwards + plugins['nvp'] = plugins['nsx'] + if CompareOpenStackReleases(release) >= 'kilo': + plugins['midonet']['driver'] = ( + 'neutron.plugins.midonet.plugin.MidonetPluginV2') + if CompareOpenStackReleases(release) >= 'liberty': + plugins['midonet']['driver'] = ( + 'midonet.neutron.plugin_v1.MidonetPluginV2') + plugins['midonet']['server_packages'].remove( + 'python-neutron-plugin-midonet') + plugins['midonet']['server_packages'].append( + 'python-networking-midonet') + plugins['plumgrid']['driver'] = ( + 'networking_plumgrid.neutron.plugins' + '.plugin.NeutronPluginPLUMgridV2') + plugins['plumgrid']['server_packages'].remove( + 'neutron-plugin-plumgrid') + if CompareOpenStackReleases(release) >= 'mitaka': + plugins['nsx']['server_packages'].remove('neutron-plugin-vmware') + plugins['nsx']['server_packages'].append('python-vmware-nsx') + plugins['nsx']['config'] = 
'/etc/neutron/nsx.ini' + plugins['vsp']['driver'] = ( + 'nuage_neutron.plugins.nuage.plugin.NuagePlugin') + if CompareOpenStackReleases(release) >= 'newton': + plugins['vsp']['config'] = '/etc/neutron/plugins/ml2/ml2_conf.ini' + plugins['vsp']['driver'] = 'neutron.plugins.ml2.plugin.Ml2Plugin' + plugins['vsp']['server_packages'] = ['neutron-server', + 'neutron-plugin-ml2'] + return plugins + + +def neutron_plugin_attribute(plugin, attr, net_manager=None): + manager = net_manager or network_manager() + if manager == 'quantum': + plugins = quantum_plugins() + elif manager == 'neutron': + plugins = neutron_plugins() + else: + log("Network manager '%s' does not support plugins." % (manager), + level=ERROR) + raise Exception + + try: + _plugin = plugins[plugin] + except KeyError: + log('Unrecognised plugin for %s: %s' % (manager, plugin), level=ERROR) + raise Exception + + try: + return _plugin[attr] + except KeyError: + return None + + +def network_manager(): + ''' + Deals with the renaming of Quantum to Neutron in H and any situations + that require compatability (eg, deploying H with network-manager=quantum, + upgrading from G). + ''' + release = os_release('nova-common') + manager = config('network-manager').lower() + + if manager not in ['quantum', 'neutron']: + return manager + + if release in ['essex']: + # E does not support neutron + log('Neutron networking not supported in Essex.', level=ERROR) + raise Exception + elif release in ['folsom', 'grizzly']: + # neutron is named quantum in F and G + return 'quantum' + else: + # ensure accurate naming for all releases post-H + return 'neutron' + + +def parse_mappings(mappings, key_rvalue=False): + """By default mappings are lvalue keyed. + + If key_rvalue is True, the mapping will be reversed to allow multiple + configs for the same lvalue. + """ + parsed = {} + if mappings: + mappings = mappings.split() + for m in mappings: + p = m.partition(':') + + if key_rvalue: + key_index = 2 + val_index = 0 + # if there is no rvalue skip to next + if not p[1]: + continue + else: + key_index = 0 + val_index = 2 + + key = p[key_index].strip() + parsed[key] = p[val_index].strip() + + return parsed + + +def parse_bridge_mappings(mappings): + """Parse bridge mappings. + + Mappings must be a space-delimited list of provider:bridge mappings. + + Returns dict of the form {provider:bridge}. + """ + return parse_mappings(mappings) + + +def parse_data_port_mappings(mappings, default_bridge='br-data'): + """Parse data port mappings. + + Mappings must be a space-delimited list of bridge:port. + + Returns dict of the form {port:bridge} where ports may be mac addresses or + interface names. + """ + + # NOTE(dosaboy): we use rvalue for key to allow multiple values to be + # proposed for since it may be a mac address which will differ + # across units this allowing first-known-good to be chosen. + _mappings = parse_mappings(mappings, key_rvalue=True) + if not _mappings or list(_mappings.values()) == ['']: + if not mappings: + return {} + + # For backwards-compatibility we need to support port-only provided in + # config. + _mappings = {mappings.split()[0]: default_bridge} + + ports = _mappings.keys() + if len(set(ports)) != len(ports): + raise Exception("It is not allowed to have the same port configured " + "on more than one bridge") + + return _mappings + + +def parse_vlan_range_mappings(mappings): + """Parse vlan range mappings. + + Mappings must be a space-delimited list of provider:start:end mappings. + + The start:end range is optional and may be omitted. 
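+
+    For example (illustrative), 'phys1:1000:2000 phys2' yields
+    {'phys1': ('1000', '2000'), 'phys2': ('',)}.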
+
+    Returns dict of the form {provider: (start, end)}.
+    """
+    _mappings = parse_mappings(mappings)
+    if not _mappings:
+        return {}
+
+    mappings = {}
+    for p, r in six.iteritems(_mappings):
+        mappings[p] = tuple(r.split(':'))
+
+    return mappings
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/policyd.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/policyd.py
new file mode 100644
index 0000000000000000000000000000000000000000..f2bb21e9db926bd2c4de8ab3e8d10d0837af563a
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/policyd.py
@@ -0,0 +1,801 @@
+# Copyright 2019 Canonical Ltd
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import collections
+import contextlib
+import os
+import six
+import shutil
+import yaml
+import zipfile
+
+import charmhelpers
+import charmhelpers.core.hookenv as hookenv
+import charmhelpers.core.host as ch_host
+
+# Features provided by this module:
+
+"""
+Policy.d helper functions
+=========================
+
+The functions in this module are designed, as a set, to provide an easy-to-use
+set of hooks for classic charms to add in /etc/<service>/policy.d/
+directory override YAML files.
+
+(For charms.openstack charms, a mixin class is provided for this
+functionality).
+
+In order to "hook" this functionality into a (classic) charm, two functions are
+provided:
+
+    maybe_do_policyd_overrides(openstack_release,
+                               service,
+                               blacklist_paths=None,
+                               blacklist_keys=None,
+                               template_function=None,
+                               restart_handler=None)
+
+    maybe_do_policyd_overrides_on_config_changed(openstack_release,
+                                                 service,
+                                                 blacklist_paths=None,
+                                                 blacklist_keys=None,
+                                                 template_function=None,
+                                                 restart_handler=None)
+
+(See the docstrings for details on the parameters)
+
+The functions should be called from the install and upgrade hooks in the charm.
+The `maybe_do_policyd_overrides_on_config_changed` function is designed to be
+called on the config-changed hook, in that it does an additional check to
+ensure that a policy.d override already applied in an install or upgrade hook
+isn't repeated.
+
+In order to *enable* this functionality, the charm's install, config_changed,
+and upgrade_charm hooks need to be modified, and a new config option (see
+below) needs to be added. The README for the charm should also be updated.
+
+Examples from the keystone charm are:
+
+@hooks.hook('install.real')
+@harden()
+def install():
+    ...
+    # call the policy overrides handler which will install any policy overrides
+    maybe_do_policyd_overrides(os_release('keystone'), 'keystone')
+
+
+@hooks.hook('config-changed')
+@restart_on_change(restart_map(), restart_functions=restart_function_map())
+@harden()
+def config_changed():
+    ...
+ # call the policy overrides handler which will install any policy overrides + maybe_do_policyd_overrides_on_config_changed(os_release('keystone'), + 'keystone') + +@hooks.hook('upgrade-charm') +@restart_on_change(restart_map(), stopstart=True) +@harden() +def upgrade_charm(): + ... + # call the policy overrides handler which will install any policy overrides + maybe_do_policyd_overrides(os_release('keystone'), 'keystone') + +Status Line +=========== + +The workload status code in charm-helpers has been modified to detect if +policy.d override code has been incorporated into the charm by checking for the +new config variable (in the config.yaml). If it has been, then the workload +status line will automatically show "PO:" at the beginning of the workload +status for that unit/service if the config option is set. If the policy +override is broken, the "PO (broken):" will be shown. No changes to the charm +(apart from those already mentioned) are needed to enable this functionality. +(charms.openstack charms also get this functionality, but please see that +library for further details). +""" + +# The config.yaml for the charm should contain the following for the config +# option: + +""" + use-policyd-override: + type: boolean + default: False + description: | + If True then use the resource file named 'policyd-override' to install + override YAML files in the service's policy.d directory. The resource + file should be a ZIP file containing at least one yaml file with a .yaml + or .yml extension. If False then remove the overrides. +""" + +# The metadata.yaml for the charm should contain the following: +""" +resources: + policyd-override: + type: file + filename: policyd-override.zip + description: The policy.d overrides file +""" + +# The README for the charm should contain the following: +""" +Policy Overrides +---------------- + +This feature allows for policy overrides using the `policy.d` directory. This +is an **advanced** feature and the policies that the OpenStack service supports +should be clearly and unambiguously understood before trying to override, or +add to, the default policies that the service uses. The charm also has some +policy defaults. They should also be understood before being overridden. + +> **Caution**: It is possible to break the system (for tenants and other + services) if policies are incorrectly applied to the service. + +Policy overrides are YAML files that contain rules that will add to, or +override, existing policy rules in the service. The `policy.d` directory is +a place to put the YAML override files. This charm owns the +`/etc/keystone/policy.d` directory, and as such, any manual changes to it will +be overwritten on charm upgrades. + +Overrides are provided to the charm using a Juju resource called +`policyd-override`. The resource is a ZIP file. This file, say +`overrides.zip`, is attached to the charm by: + + + juju attach-resource policyd-override=overrides.zip + +The policy override is enabled in the charm using: + + juju config use-policyd-override=true + +When `use-policyd-override` is `True` the status line of the charm will be +prefixed with `PO:` indicating that policies have been overridden. If the +installation of the policy override YAML files failed for any reason then the +status line will be prefixed with `PO (broken):`. The log file for the charm +will indicate the reason. No policy override files are installed if the `PO +(broken):` is shown. 
The status line indicates that the overrides are broken, +not that the policy for the service has failed. The policy will be the defaults +for the charm and service. + +Policy overrides on one service may affect the functionality of another +service. Therefore, it may be necessary to provide policy overrides for +multiple service charms to achieve a consistent set of policies across the +OpenStack system. The charms for the other services that may need overrides +should be checked to ensure that they support overrides before proceeding. +""" + +POLICYD_VALID_EXTS = ['.yaml', '.yml', '.j2', '.tmpl', '.tpl'] +POLICYD_TEMPLATE_EXTS = ['.j2', '.tmpl', '.tpl'] +POLICYD_RESOURCE_NAME = "policyd-override" +POLICYD_CONFIG_NAME = "use-policyd-override" +POLICYD_SUCCESS_FILENAME = "policyd-override-success" +POLICYD_LOG_LEVEL_DEFAULT = hookenv.INFO +POLICYD_ALWAYS_BLACKLISTED_KEYS = ("admin_required", "cloud_admin") + + +class BadPolicyZipFile(Exception): + + def __init__(self, log_message): + self.log_message = log_message + + def __str__(self): + return self.log_message + + +class BadPolicyYamlFile(Exception): + + def __init__(self, log_message): + self.log_message = log_message + + def __str__(self): + return self.log_message + + +if six.PY2: + BadZipFile = zipfile.BadZipfile +else: + BadZipFile = zipfile.BadZipFile + + +def is_policyd_override_valid_on_this_release(openstack_release): + """Check that the charm is running on at least Ubuntu Xenial, and at + least the queens release. + + :param openstack_release: the release codename that is installed. + :type openstack_release: str + :returns: True if okay + :rtype: bool + """ + # NOTE(ajkavanagh) circular import! This is because the status message + # generation code in utils has to call into this module, but this function + # needs the CompareOpenStackReleases() function. The only way to solve + # this is either to put ALL of this module into utils, or refactor one or + # other of the CompareOpenStackReleases or status message generation code + # into a 3rd module. + import charmhelpers.contrib.openstack.utils as ch_utils + return ch_utils.CompareOpenStackReleases(openstack_release) >= 'queens' + + +def maybe_do_policyd_overrides(openstack_release, + service, + blacklist_paths=None, + blacklist_keys=None, + template_function=None, + restart_handler=None, + user=None, + group=None, + config_changed=False): + """If the config option is set, get the resource file and process it to + enable the policy.d overrides for the service passed. + + The param `openstack_release` is required as the policyd overrides feature + is only supported on openstack_release "queens" or later, and on ubuntu + "xenial" or later. Prior to these versions, this feature is a NOP. + + The optional template_function is a function that accepts a string and has + an opportunity to modify the loaded file prior to it being read by + yaml.safe_load(). This allows the charm to perform "templating" using + charm derived data. + + The param blacklist_paths are paths (that are in the service's policy.d + directory that should not be touched). + + The param blacklist_keys are keys that must not appear in the yaml file. + If they do, then the whole policy.d file fails. + + The yaml file extracted from the resource_file (which is a zipped file) has + its file path reconstructed. This, also, must not match any path in the + black list. + + The param restart_handler is an optional Callable that is called to perform + the service restart if the policy.d file is changed. 
This should normally + be None as oslo.policy automatically picks up changes in the policy.d + directory. However, for any services where this is buggy then a + restart_handler can be used to force the policy.d files to be read. + + If the config_changed param is True, then the handling is slightly + different: It will only perform the policyd overrides if the config is True + and the success file doesn't exist. Otherwise, it does nothing as the + resource file has already been processed. + + :param openstack_release: The openstack release that is installed. + :type openstack_release: str + :param service: the service name to construct the policy.d directory for. + :type service: str + :param blacklist_paths: optional list of paths to leave alone + :type blacklist_paths: Union[None, List[str]] + :param blacklist_keys: optional list of keys that mustn't appear in the + yaml file's + :type blacklist_keys: Union[None, List[str]] + :param template_function: Optional function that can modify the string + prior to being processed as a Yaml document. + :type template_function: Union[None, Callable[[str], str]] + :param restart_handler: The function to call if the service should be + restarted. + :type restart_handler: Union[None, Callable[]] + :param user: The user to create/write files/directories as + :type user: Union[None, str] + :param group: the group to create/write files/directories as + :type group: Union[None, str] + :param config_changed: Set to True for config_changed hook. + :type config_changed: bool + """ + _user = service if user is None else user + _group = service if group is None else group + if not is_policyd_override_valid_on_this_release(openstack_release): + return + hookenv.log("Running maybe_do_policyd_overrides", + level=POLICYD_LOG_LEVEL_DEFAULT) + config = hookenv.config() + try: + if not config.get(POLICYD_CONFIG_NAME, False): + clean_policyd_dir_for(service, + blacklist_paths, + user=_user, + group=_group) + if (os.path.isfile(_policy_success_file()) and + restart_handler is not None and + callable(restart_handler)): + restart_handler() + remove_policy_success_file() + return + except Exception as e: + hookenv.log("... ERROR: Exception is: {}".format(str(e)), + level=POLICYD_CONFIG_NAME) + import traceback + hookenv.log(traceback.format_exc(), level=POLICYD_LOG_LEVEL_DEFAULT) + return + # if the policyd overrides have been performed when doing config_changed + # just return + if config_changed and is_policy_success_file_set(): + hookenv.log("... already setup, so skipping.", + level=POLICYD_LOG_LEVEL_DEFAULT) + return + # from now on it should succeed; if it doesn't then status line will show + # broken. + resource_filename = get_policy_resource_filename() + restart = process_policy_resource_file( + resource_filename, service, blacklist_paths, blacklist_keys, + template_function) + if restart and restart_handler is not None and callable(restart_handler): + restart_handler() + + +@charmhelpers.deprecate("Use maybe_do_poliyd_overrrides instead") +def maybe_do_policyd_overrides_on_config_changed(*args, **kwargs): + """This function is designed to be called from the config changed hook. + + DEPRECATED: please use maybe_do_policyd_overrides() with the param + `config_changed` as `True`. + + See maybe_do_policyd_overrides() for more details on the params. 
+ """ + if 'config_changed' not in kwargs.keys(): + kwargs['config_changed'] = True + return maybe_do_policyd_overrides(*args, **kwargs) + + +def get_policy_resource_filename(): + """Function to extract the policy resource filename + + :returns: The filename of the resource, if set, otherwise, if an error + occurs, then None is returned. + :rtype: Union[str, None] + """ + try: + return hookenv.resource_get(POLICYD_RESOURCE_NAME) + except Exception: + return None + + +@contextlib.contextmanager +def open_and_filter_yaml_files(filepath, has_subdirs=False): + """Validate that the filepath provided is a zip file and contains at least + one (.yaml|.yml) file, and that the files are not duplicated when the zip + file is flattened. Note that the yaml files are not checked. This is the + first stage in validating the policy zipfile; individual yaml files are not + checked for validity or black listed keys. + + If the has_subdirs param is True, then the files are flattened to the first + directory, and the files in the root are ignored. + + An example of use is: + + with open_and_filter_yaml_files(some_path) as zfp, g: + for zipinfo in g: + # do something with zipinfo ... + + :param filepath: a filepath object that can be opened by zipfile + :type filepath: Union[AnyStr, os.PathLike[AntStr]] + :param has_subdirs: Keep first level of subdirectories in yaml file. + :type has_subdirs: bool + :returns: (zfp handle, + a generator of the (name, filename, ZipInfo object) tuples) as a + tuple. + :rtype: ContextManager[(zipfile.ZipFile, + Generator[(name, str, str, zipfile.ZipInfo)])] + :raises: zipfile.BadZipFile + :raises: BadPolicyZipFile if duplicated yaml or missing + :raises: IOError if the filepath is not found + """ + with zipfile.ZipFile(filepath, 'r') as zfp: + # first pass through; check for duplicates and at least one yaml file. + names = collections.defaultdict(int) + yamlfiles = _yamlfiles(zfp, has_subdirs) + for name, _, _, _ in yamlfiles: + names[name] += 1 + # There must be at least 1 yaml file. + if len(names.keys()) == 0: + raise BadPolicyZipFile("contains no yaml files with {} extensions." + .format(", ".join(POLICYD_VALID_EXTS))) + # There must be no duplicates + duplicates = [n for n, c in names.items() if c > 1] + if duplicates: + raise BadPolicyZipFile("{} have duplicates in the zip file." + .format(", ".join(duplicates))) + # Finally, let's yield the generator + yield (zfp, yamlfiles) + + +def _yamlfiles(zipfile, has_subdirs=False): + """Helper to get a yaml file (according to POLICYD_VALID_EXTS extensions) + and the infolist item from a zipfile. + + If the `has_subdirs` param is True, the the only yaml files that have a + directory component are read, and then first part of the directory + component is kept, along with the filename in the name. e.g. an entry with + a filename of: + + compute/someotherdir/override.yaml + + is returned as: + + compute/override, yaml, override.yaml, + + This is to help with the special, additional, processing that the dashboard + charm requires. + + :param zipfile: the zipfile to read zipinfo items from + :type zipfile: zipfile.ZipFile + :param has_subdirs: Keep first level of subdirectories in yaml file. + :type has_subdirs: bool + :returns: generator of (name, ext, filename, info item) for each + self-identified yaml file. 
+
+
+def _yamlfiles(zipfile, has_subdirs=False):
+ """Helper to get the yaml files (according to POLICYD_VALID_EXTS
+ extensions) and the infolist items from a zipfile.
+
+ If the `has_subdirs` param is True, then the first part of the directory
+ component is kept, along with the filename in the name. e.g. an entry with
+ a filename of:
+
+ compute/someotherdir/override.yaml
+
+ is returned with (name, ext, filename) parts of:
+
+ compute/override, .yaml, override.yaml
+
+ This is to help with the special, additional, processing that the dashboard
+ charm requires.
+
+ :param zipfile: the zipfile to read zipinfo items from
+ :type zipfile: zipfile.ZipFile
+ :param has_subdirs: Keep first level of subdirectories in yaml file.
+ :type has_subdirs: bool
+ :returns: list of (name, ext, filename, info item) for each
+ self-identified yaml file.
+ :rtype: List[(str, str, str, zipfile.ZipInfo)]
+ """
+ files = []
+ for infolist_item in zipfile.infolist():
+ try:
+ if infolist_item.is_dir():
+ continue
+ except AttributeError:
+ # fallback to "old" way to determine dir entry for pre-py36
+ if infolist_item.filename.endswith('/'):
+ continue
+ _dir, name_ext = os.path.split(infolist_item.filename)
+ name, ext = os.path.splitext(name_ext)
+ if has_subdirs and _dir != "":
+ name = os.path.join(_dir.split(os.path.sep)[0], name)
+ ext = ext.lower()
+ if ext and ext in POLICYD_VALID_EXTS:
+ files.append((name, ext, name_ext, infolist_item))
+ return files
+
+
+def read_and_validate_yaml(stream_or_doc, blacklist_keys=None):
+ """Read, validate and return the (first) yaml document from the stream.
+
+ The doc is read, and checked for a yaml file. Then the top-level keys are
+ checked against the blacklist_keys provided. If there are problems then an
+ Exception is raised. Otherwise the yaml document is returned as a Python
+ object that can be dumped back as a yaml file on the system.
+
+ The yaml file must only consist of a str:str mapping, and if not then the
+ yaml file is rejected.
+
+ :param stream_or_doc: the file object to read the yaml from
+ :type stream_or_doc: Union[AnyStr, IO[AnyStr]]
+ :param blacklist_keys: Any keys, which if in the yaml file, should cause
+ an error.
+ :type blacklist_keys: Union[None, List[str]]
+ :returns: the yaml file as a python document
+ :rtype: Dict[str, str]
+ :raises: yaml.YAMLError if there is a problem with the document
+ :raises: BadPolicyYamlFile if file doesn't look right or there are
+ blacklisted keys in the file.
+ """
+ # take a copy so the caller's list isn't mutated, and flatten in the
+ # always-blacklisted keys.
+ blacklist_keys = list(blacklist_keys or [])
+ blacklist_keys.extend(POLICYD_ALWAYS_BLACKLISTED_KEYS)
+ doc = yaml.safe_load(stream_or_doc)
+ if not isinstance(doc, dict):
+ raise BadPolicyYamlFile("doesn't look like a policy file?")
+ keys = set(doc.keys())
+ blacklisted_keys_present = keys.intersection(blacklist_keys)
+ if blacklisted_keys_present:
+ raise BadPolicyYamlFile("blacklisted keys {} present."
+ .format(", ".join(blacklisted_keys_present)))
+ if not all(isinstance(k, six.string_types) for k in keys):
+ raise BadPolicyYamlFile("keys in yaml aren't all strings?")
+ # check that the dictionary looks like a mapping of str to str
+ if not all(isinstance(v, six.string_types) for v in doc.values()):
+ raise BadPolicyYamlFile("values in yaml aren't all strings?")
+ return doc
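A sketch of the validation behaviour, with illustrative policy keys:

    doc = read_and_validate_yaml("'compute:create': 'rule:admin'")
    assert doc == {'compute:create': 'rule:admin'}

    try:
        read_and_validate_yaml("'compute:create': 'rule:admin'",
                               blacklist_keys=['compute:create'])
    except BadPolicyYamlFile as e:
        print("rejected:", e)  # blacklisted keys compute:create present.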
+
+
+def policyd_dir_for(service):
+ """Return the policy directory for the named service.
+
+ :param service: the service name for which to construct the path.
+ :type service: str
+ :returns: the policy.d override directory.
+ :rtype: os.PathLike[str]
+ """
+ return os.path.join("/", "etc", service, "policy.d")
+
+
+def clean_policyd_dir_for(service, keep_paths=None, user=None, group=None):
+ """Clean out the policyd directory except for items that should be kept.
+
+ The keep_paths, if used, should be set to the full path of the files that
+ should be kept in the policyd directory for the service. Note that the
+ service name is passed in, and then the policyd_dir_for() function is used.
+ This is so that a coding error in the caller can't accidentally delete an
+ arbitrary directory (the charm directory, say).
+
+ :param service: the service name to use to construct the policy.d dir.
+ :type service: str
+ :param keep_paths: optional list of paths to not delete.
+ :type keep_paths: Union[None, List[str]]
+ :param user: The user to create/write files/directories as
+ :type user: Union[None, str]
+ :param group: the group to create/write files/directories as
+ :type group: Union[None, str]
+ """
+ _user = service if user is None else user
+ _group = service if group is None else group
+ keep_paths = keep_paths or []
+ path = policyd_dir_for(service)
+ hookenv.log("Cleaning path: {}".format(path), level=hookenv.DEBUG)
+ if not os.path.exists(path):
+ ch_host.mkdir(path, owner=_user, group=_group, perms=0o775)
+ _scanner = os.scandir if hasattr(os, 'scandir') else _fallback_scandir
+ for direntry in _scanner(path):
+ # see if the path should be kept.
+ if direntry.path in keep_paths:
+ continue
+ # we remove any directories; it's ours and there shouldn't be any
+ if direntry.is_dir():
+ shutil.rmtree(direntry.path)
+ else:
+ os.remove(direntry.path)
+
+
+def maybe_create_directory_for(path, user, group):
+ """For the filename 'path', ensure that the directory for that path exists.
+
+ Note that if the directory already exists then the permissions are NOT
+ changed.
+
+ :param path: the filename including the path to it.
+ :type path: str
+ :param user: the user to create the directory as
+ :type user: str
+ :param group: the group to create the directory as
+ :type group: str
+ """
+ _dir, _ = os.path.split(path)
+ if not os.path.exists(_dir):
+ ch_host.mkdir(_dir, owner=user, group=group, perms=0o775)
+
+
+def _fallback_scandir(path):
+ """Fallback os.scandir implementation.
+
+ Provide a fallback implementation of os.scandir if this module ever gets
+ used in a py2 or py34 charm. Uses os.listdir() to get the names in the path,
+ and then mocks the is_dir() function using os.path.isdir() to check for a
+ directory. Note that this is a plain generator (not a context manager) so
+ that it can be iterated directly, just like os.scandir().
+
+ :param path: the path to list the directories for
+ :type path: str
+ :returns: Generator that provides _FBDirectory objects
+ :rtype: Generator[_FBDirectory]
+ """
+ for f in os.listdir(path):
+ # join with the parent path so that .path is usable by callers.
+ yield _FBDirectory(os.path.join(path, f))
+
+
+class _FBDirectory(object):
+ """Mock a scandir Directory object with enough to use in
+ clean_policyd_dir_for
+ """
+
+ def __init__(self, path):
+ self.path = path
+
+ def is_dir(self):
+ return os.path.isdir(self.path)
+
+
+def path_for_policy_file(service, name):
+ """Return the full path for a policy.d file that will be written to the
+ service's policy.d directory.
+
+ It is constructed using policyd_dir_for(), the name and the ".yaml"
+ extension.
+
+ For horizon, for example, it's a bit more complicated. The name param is
+ actually "override_service_dir/a_name", where the directory component
+ needs to be one of the allowed horizon override services. This translation
+ and check is done in the _yamlfiles() function.
+
+ :param service: the service name
+ :type service: str
+ :param name: the name for the policy override
+ :type name: str
+ :returns: the full path name for the file
+ :rtype: os.PathLike[str]
+ """
+ return os.path.join(policyd_dir_for(service), name + ".yaml")
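For illustration, the paths these helpers produce for a hypothetical
'keystone' service:

    assert policyd_dir_for("keystone") == "/etc/keystone/policy.d"
    assert (path_for_policy_file("keystone", "compute/override")
            == "/etc/keystone/policy.d/compute/override.yaml")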
+
+
+def _policy_success_file():
+ """Return the file name for a successful drop of policy.d overrides
+
+ :returns: the path name for the file.
+ :rtype: str
+ """
+ return os.path.join(hookenv.charm_dir(), POLICYD_SUCCESS_FILENAME)
+
+
+def remove_policy_success_file():
+ """Remove the file that indicates successful policyd override."""
+ try:
+ os.remove(_policy_success_file())
+ except Exception:
+ pass
+
+
+def set_policy_success_file():
+ """Set the file that indicates successful policyd override."""
+ open(_policy_success_file(), "w").close()
+
+
+def is_policy_success_file_set():
+ """Returns True if the policy success file has been set.
+
+ This indicates that policies are overridden and working properly.
+
+ :returns: True if the policy file is set
+ :rtype: bool
+ """
+ return os.path.isfile(_policy_success_file())
+
+
+def policyd_status_message_prefix():
+ """Return the prefix str for the status line.
+
+ "PO:" indicates that the policy overrides are in place, and "PO (broken):"
+ that the policy is supposed to be working but there is no success file.
+
+ :returns: the prefix
+ :rtype: str
+ """
+ if is_policy_success_file_set():
+ return "PO:"
+ return "PO (broken):"
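A sketch of the success-file lifecycle; this only works inside a hook
environment where hookenv.charm_dir() resolves:

    remove_policy_success_file()
    assert policyd_status_message_prefix() == "PO (broken):"
    set_policy_success_file()
    assert policyd_status_message_prefix() == "PO:"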
+
+
+def process_policy_resource_file(resource_file,
+ service,
+ blacklist_paths=None,
+ blacklist_keys=None,
+ template_function=None,
+ preserve_topdir=False,
+ preprocess_filename=None,
+ user=None,
+ group=None):
+ """Process the resource file (which should contain at least one yaml file)
+ and write those files to the service's policy.d directory.
+
+ The optional template_function is a function that accepts a python
+ string and has an opportunity to modify the document
+ prior to it being read by the yaml.safe_load() function and written to
+ disk. Note that this function does *not* say how the templating is done -
+ this is up to the charm to implement its chosen method.
+
+ The param blacklist_paths lists paths (in the service's policy.d
+ directory) that should not be touched.
+
+ The param blacklist_keys are keys that must not appear in the yaml file.
+ If they do, then the whole policy.d file fails.
+
+ The yaml file extracted from the resource_file (which is a zipped file) has
+ its file path reconstructed. This, also, must not match any path in the
+ blacklist.
+
+ The yaml filename can be modified in two ways. If the `preserve_topdir`
+ param is True, then the first directory component of each filename is kept
+ and deeper levels are flattened into it. This allows sets of files to be
+ grouped into a single-level tree structure.
+
+ Secondly, if the `preprocess_filename` param is not None and callable()
+ then the name is passed to that function for preprocessing before being
+ converted to the end location. This is to allow munging of the filename
+ prior to being tested for a blacklist path.
+
+ If any error occurs, then the policy.d directory is cleared, the error is
+ written to the log, and the status line will eventually show as failed.
+
+ :param resource_file: The zipped file to open and extract yaml files from.
+ :type resource_file: Union[AnyStr, os.PathLike[AnyStr]]
+ :param service: the service name to construct the policy.d directory for.
+ :type service: str
+ :param blacklist_paths: optional list of paths to leave alone
+ :type blacklist_paths: Union[None, List[str]]
+ :param blacklist_keys: optional list of keys that mustn't appear in the
+ yaml files
+ :type blacklist_keys: Union[None, List[str]]
+ :param template_function: Optional function that can modify the yaml
+ document.
+ :type template_function: Union[None, Callable[[AnyStr], AnyStr]]
+ :param preserve_topdir: Keep the toplevel subdir
+ :type preserve_topdir: bool
+ :param preprocess_filename: Optional function to use to process filenames
+ extracted from the resource file.
+ :type preprocess_filename: Union[None, Callable[[AnyStr], AnyStr]]
+ :param user: The user to create/write files/directories as
+ :type user: Union[None, str]
+ :param group: the group to create/write files/directories as
+ :type group: Union[None, str]
+ :returns: True if the processing was successful, False if not.
+ :rtype: bool
+ """
+ hookenv.log("Running process_policy_resource_file", level=hookenv.DEBUG)
+ blacklist_paths = blacklist_paths or []
+ completed = False
+ _preprocess = None
+ if preprocess_filename is not None and callable(preprocess_filename):
+ _preprocess = preprocess_filename
+ _user = service if user is None else user
+ _group = service if group is None else group
+ try:
+ with open_and_filter_yaml_files(
+ resource_file, preserve_topdir) as (zfp, gen):
+ # first clear out the policy.d directory and clear success
+ remove_policy_success_file()
+ clean_policyd_dir_for(service,
+ blacklist_paths,
+ user=_user,
+ group=_group)
+ for name, ext, filename, zipinfo in gen:
+ # See if the name should be preprocessed.
+ if _preprocess is not None:
+ name = _preprocess(name)
+ # construct a name for the output file.
+ yaml_filename = path_for_policy_file(service, name)
+ if yaml_filename in blacklist_paths:
+ raise BadPolicyZipFile("policy.d name {} is blacklisted"
+ .format(yaml_filename))
+ with zfp.open(zipinfo) as fp:
+ doc = fp.read()
+ # if template_function is not None, then offer the document
+ # to the template function
+ if ext in POLICYD_TEMPLATE_EXTS:
+ if (template_function is None or not
+ callable(template_function)):
+ raise BadPolicyZipFile(
+ "Template {} but no template_function is "
+ "available".format(filename))
+ doc = template_function(doc)
+ yaml_doc = read_and_validate_yaml(doc, blacklist_keys)
+ # we may have to create the directory
+ maybe_create_directory_for(yaml_filename, _user, _group)
+ ch_host.write_file(yaml_filename,
+ yaml.dump(yaml_doc).encode('utf-8'),
+ _user,
+ _group)
+ # Everything worked, so mark it up as a success.
+ completed = True
+ except (BadZipFile, BadPolicyZipFile, BadPolicyYamlFile) as e:
+ hookenv.log("Processing {} failed: {}".format(resource_file, str(e)),
+ level=POLICYD_LOG_LEVEL_DEFAULT)
+ except IOError as e:
+ # technically this shouldn't happen; it would be a programming error as
+ # the filename comes from Juju and thus should exist.
+ hookenv.log(
+ "File {} failed with IOError. This really shouldn't happen"
+ " -- error: {}".format(resource_file, str(e)),
+ level=POLICYD_LOG_LEVEL_DEFAULT)
+ except Exception as e:
+ import traceback
+ hookenv.log("General Exception({}) during policyd processing"
+ .format(str(e)),
+ level=POLICYD_LOG_LEVEL_DEFAULT)
+ hookenv.log(traceback.format_exc(), level=POLICYD_LOG_LEVEL_DEFAULT)
+ finally:
+ if not completed:
+ hookenv.log("Processing {} failed: cleaning policy.d directory"
+ .format(resource_file),
+ level=POLICYD_LOG_LEVEL_DEFAULT)
+ clean_policyd_dir_for(service,
+ blacklist_paths,
+ user=_user,
+ group=_group)
+ else:
+ # touch the success filename
+ hookenv.log("policy.d overrides installed.",
+ level=POLICYD_LOG_LEVEL_DEFAULT)
+ set_policy_success_file()
+ return completed
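One possible template_function, sketched here -- charm-helpers does not
prescribe a templating method, and this $var-substitution helper is purely
hypothetical; note the document can arrive as bytes when read from the zip:

    import string

    def render_policy_template(doc):
        # Decode first: zipfile hands back bytes, yaml.safe_load wants text.
        if isinstance(doc, bytes):
            doc = doc.decode('utf-8')
        return string.Template(doc).safe_substitute(admin_role='Admin')

    # process_policy_resource_file(resource, 'keystone',
    #                              template_function=render_policy_template)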
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/ssh_migrations.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/ssh_migrations.py
new file mode 100644
index 0000000000000000000000000000000000000000..96b9f71d42d1c81539f78b8e1c4761f81d84c304
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/ssh_migrations.py
@@ -0,0 +1,412 @@
+# Copyright 2018 Canonical Ltd
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import os
+import subprocess
+
+from charmhelpers.core.hookenv import (
+ ERROR,
+ log,
+ relation_get,
+)
+from charmhelpers.contrib.network.ip import (
+ is_ipv6,
+ ns_query,
+)
+from charmhelpers.contrib.openstack.utils import (
+ get_hostname,
+ get_host_ip,
+ is_ip,
+)
+
+NOVA_SSH_DIR = '/etc/nova/compute_ssh/'
+
+
+def ssh_directory_for_unit(application_name, user=None):
+ """Return the directory used to store ssh assets for the application.
+
+ :param application_name: Name of application eg nova-compute-something
+ :type application_name: str
+ :param user: The user that the ssh assets are for.
+ :type user: str
+ :returns: Fully qualified directory path.
+ :rtype: str
+ """
+ if user:
+ application_name = "{}_{}".format(application_name, user)
+ _dir = os.path.join(NOVA_SSH_DIR, application_name)
+ for d in [NOVA_SSH_DIR, _dir]:
+ if not os.path.isdir(d):
+ os.mkdir(d)
+ for f in ['authorized_keys', 'known_hosts']:
+ f = os.path.join(_dir, f)
+ if not os.path.isfile(f):
+ open(f, 'w').close()
+ return _dir
+
+
+def known_hosts(application_name, user=None):
+ """Return the known hosts file for the application.
+
+ :param application_name: Name of application eg nova-compute-something
+ :type application_name: str
+ :param user: The user that the ssh assets are for.
+ :type user: str
+ :returns: Fully qualified path to file.
+ :rtype: str
+ """
+ return os.path.join(
+ ssh_directory_for_unit(application_name, user),
+ 'known_hosts')
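For illustration, the layout these helpers create for a hypothetical
application 'nova-compute' and user 'root':

    # ssh_directory_for_unit('nova-compute', user='root') creates and returns
    #   /etc/nova/compute_ssh/nova-compute_root/
    # containing empty 'authorized_keys' and 'known_hosts' files, so
    # known_hosts('nova-compute', 'root') is
    #   /etc/nova/compute_ssh/nova-compute_root/known_hosts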
+
+
+def authorized_keys(application_name, user=None):
+ """Return the authorized keys file for the application.
+
+ :param application_name: Name of application eg nova-compute-something
+ :type application_name: str
+ :param user: The user that the ssh assets are for.
+ :type user: str
+ :returns: Fully qualified path to file.
+ :rtype: str
+ """
+ return os.path.join(
+ ssh_directory_for_unit(application_name, user),
+ 'authorized_keys')
+
+
+def ssh_known_host_key(host, application_name, user=None):
+ """Return the first entry in known_hosts for host.
+
+ :param host: hostname to lookup in file.
+ :type host: str
+ :param application_name: Name of application eg nova-compute-something
+ :type application_name: str
+ :param user: The user that the ssh assets are for.
+ :type user: str
+ :returns: Host key
+ :rtype: str or None
+ """
+ cmd = [
+ 'ssh-keygen',
+ '-f', known_hosts(application_name, user),
+ '-H',
+ '-F',
+ host]
+ try:
+ # The first line of output is like '# Host xx found: line 1 type RSA',
+ # which should be excluded.
+ output = subprocess.check_output(cmd)
+ except subprocess.CalledProcessError as e:
+ # RC of 1 seems to be legitimate for most ssh-keygen -F calls.
+ if e.returncode == 1:
+ output = e.output
+ else:
+ raise
+ output = output.strip()
+
+ if output:
+ # Bug #1500589 cmd has 0 rc on precise if entry not present
+ lines = output.split('\n')
+ if len(lines) >= 1:
+ return lines[0]
+
+ return None
+
+
+def remove_known_host(host, application_name, user=None):
+ """Remove the entry in known_hosts for host.
+
+ :param host: hostname to lookup in file.
+ :type host: str
+ :param application_name: Name of application eg nova-compute-something
+ :type application_name: str
+ :param user: The user that the ssh assets are for.
+ :type user: str
+ """
+ log('Removing SSH known host entry for compute host at %s' % host)
+ cmd = ['ssh-keygen', '-f', known_hosts(application_name, user), '-R', host]
+ subprocess.check_call(cmd)
+
+
+def is_same_key(key_1, key_2):
+ """Extract the key from two host entries and compare them.
+
+ :param key_1: Host key
+ :type key_1: str
+ :param key_2: Host key
+ :type key_2: str
+ :returns: True if the keys are the same.
+ :rtype: bool
+ """
+ # The key format will be like '|1|2rUumCavEXWVaVyB5uMl6m85pZo=|Cp'
+ # 'EL6l7VTY37T/fg/ihhNb/GPgs= ssh-rsa AAAAB'; we only need to compare
+ # the part starting with 'ssh-rsa' that follows '= ', because the hash
+ # value at the beginning will change each time.
+ k_1 = key_1.split('= ')[1]
+ k_2 = key_2.split('= ')[1]
+ return k_1 == k_2
+
+
+def add_known_host(host, application_name, user=None):
+ """Add the given host key to the known hosts file.
+
+ :param host: host name
+ :type host: str
+ :param application_name: Name of application eg nova-compute-something
+ :type application_name: str
+ :param user: The user that the ssh assets are for.
+ :type user: str
+ """
+ cmd = ['ssh-keyscan', '-H', '-t', 'rsa', host]
+ try:
+ remote_key = subprocess.check_output(cmd).strip()
+ except Exception as e:
+ log('Could not obtain SSH host key from %s' % host, level=ERROR)
+ raise e
+
+ current_key = ssh_known_host_key(host, application_name, user)
+ if current_key and remote_key:
+ if is_same_key(remote_key, current_key):
+ log('Known host key for compute host %s up to date.' % host)
+ return
+ else:
+ remove_known_host(host, application_name, user)
+
+ log('Adding SSH host key to known hosts for compute node at %s.' % host)
+ with open(known_hosts(application_name, user), 'a') as out:
+ out.write("{}\n".format(remote_key))
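A worked sketch of the key comparison used above; the hashed hostname prefix
differs per entry, so only the material after '= ' is compared:

    k1 = "|1|AAAA=|BBBB= ssh-rsa KEYDATA"   # illustrative hashed entries
    k2 = "|1|CCCC=|DDDD= ssh-rsa KEYDATA"
    assert is_same_key(k1, k2)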
+
+
+def ssh_authorized_key_exists(public_key, application_name, user=None):
+ """Check if given key is in the authorized_key file.
+
+ :param public_key: Public key.
+ :type public_key: str
+ :param application_name: Name of application eg nova-compute-something
+ :type application_name: str
+ :param user: The user that the ssh assets are for.
+ :type user: str
+ :returns: Whether given key is in the authorized_key file.
+ :rtype: bool
+ """
+ with open(authorized_keys(application_name, user)) as keys:
+ return ('%s' % public_key) in keys.read()
+
+
+def add_authorized_key(public_key, application_name, user=None):
+ """Add given key to the authorized_key file.
+
+ :param public_key: Public key.
+ :type public_key: str
+ :param application_name: Name of application eg nova-compute-something
+ :type application_name: str
+ :param user: The user that the ssh assets are for.
+ :type user: str
+ """
+ with open(authorized_keys(application_name, user), 'a') as keys:
+ keys.write("{}\n".format(public_key))
+
+
+def ssh_compute_add_host_and_key(public_key, hostname, private_address,
+ application_name, user=None):
+ """Add a compute node's ssh details to the local cache.
+
+ Collect various hostname variations and add the corresponding host keys to
+ the local known hosts file. Finally, add the supplied public key to the
+ authorized_key file.
+
+ :param public_key: Public key.
+ :type public_key: str
+ :param hostname: Hostname to collect host keys from.
+ :type hostname: str
+ :param private_address: Corresponding private address for hostname
+ :type private_address: str
+ :param application_name: Name of application eg nova-compute-something
+ :type application_name: str
+ :param user: The user that the ssh assets are for.
+ :type user: str
+ """
+ # If remote compute node hands us a hostname, ensure we have a
+ # known hosts entry for its IP, hostname and FQDN.
+ hosts = [private_address]
+
+ if not is_ipv6(private_address):
+ if hostname:
+ hosts.append(hostname)
+
+ if is_ip(private_address):
+ hn = get_hostname(private_address)
+ if hn:
+ hosts.append(hn)
+ short = hn.split('.')[0]
+ if ns_query(short):
+ hosts.append(short)
+ else:
+ hosts.append(get_host_ip(private_address))
+ short = private_address.split('.')[0]
+ if ns_query(short):
+ hosts.append(short)
+
+ for host in list(set(hosts)):
+ add_known_host(host, application_name, user)
+
+ if not ssh_authorized_key_exists(public_key, application_name, user):
+ log('Saving SSH authorized key for compute host at %s.' %
+ private_address)
+ add_authorized_key(public_key, application_name, user)
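A sketch invocation with illustrative values; on a real unit this scans the
host's ssh key and may issue DNS queries for the short name and FQDN:

    # ssh_compute_add_host_and_key(
    #     public_key='ssh-rsa AAAAB3... nova@compute-1',
    #     hostname='compute-1',
    #     private_address='10.0.0.11',
    #     application_name='nova-compute')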
+
+
+def ssh_compute_add(public_key, application_name, rid=None, unit=None,
+ user=None):
+ """Add a compute node's ssh details to the local cache.
+
+ Collect various hostname variations and add the corresponding host keys to
+ the local known hosts file. Finally, add the supplied public key to the
+ authorized_key file.
+
+ :param public_key: Public key.
+ :type public_key: str
+ :param application_name: Name of application eg nova-compute-something
+ :type application_name: str
+ :param rid: Relation id of the relation between this charm and the app. If
+ none is supplied it is assumed to be the relation relating to
+ the current hook context.
+ :type rid: str
+ :param unit: Unit to add ssh assets for. If none is supplied it is assumed
+ to be the unit relating to the current hook context.
+ :type unit: str
+ :param user: The user that the ssh assets are for.
+ :type user: str
+ """
+ relation_data = relation_get(rid=rid, unit=unit)
+ ssh_compute_add_host_and_key(
+ public_key,
+ relation_data.get('hostname'),
+ relation_data.get('private-address'),
+ application_name,
+ user=user)
+
+
+def ssh_known_hosts_lines(application_name, user=None):
+ """Return contents of known_hosts file for given application.
+
+ :param application_name: Name of application eg nova-compute-something
+ :type application_name: str
+ :param user: The user that the ssh assets are for.
+ :type user: str
+ """
+ known_hosts_list = []
+ with open(known_hosts(application_name, user)) as hosts:
+ for hosts_line in hosts:
+ if hosts_line.rstrip():
+ known_hosts_list.append(hosts_line.rstrip())
+ return known_hosts_list
+
+
+def ssh_authorized_keys_lines(application_name, user=None):
+ """Return contents of authorized_keys file for given application.
+
+ :param application_name: Name of application eg nova-compute-something
+ :type application_name: str
+ :param user: The user that the ssh assets are for.
+ :type user: str
+ """
+ authorized_keys_list = []
+
+ with open(authorized_keys(application_name, user)) as keys:
+ for authkey_line in keys:
+ if authkey_line.rstrip():
+ authorized_keys_list.append(authkey_line.rstrip())
+ return authorized_keys_list
+
+
+def ssh_compute_remove(public_key, application_name, user=None):
+ """Remove given public key from authorized_keys file.
+
+ :param public_key: Public key.
+ :type public_key: str
+ :param application_name: Name of application eg nova-compute-something
+ :type application_name: str
+ :param user: The user that the ssh assets are for.
+ :type user: str
+ """
+ if not (os.path.isfile(authorized_keys(application_name, user)) or
+ os.path.isfile(known_hosts(application_name, user))):
+ return
+
+ # read the keys for the same user whose file is written back below.
+ keys = ssh_authorized_keys_lines(application_name, user=user)
+ keys = [k.strip() for k in keys]
+
+ if public_key not in keys:
+ return
+
+ # drop every entry that matches the public key
+ keys = [k for k in keys if k != public_key]
+
+ with open(authorized_keys(application_name, user), 'w') as _keys:
+ keys = '\n'.join(keys)
+ if not keys.endswith('\n'):
+ keys += '\n'
+ _keys.write(keys)
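A round-trip sketch, runnable as root on a unit; the key is illustrative:

    pub = "ssh-rsa AAAAB3NzaC1yc2E nova@compute-1"
    add_authorized_key(pub, "nova-compute")
    assert ssh_authorized_key_exists(pub, "nova-compute")
    ssh_compute_remove(pub, "nova-compute")
    assert not ssh_authorized_key_exists(pub, "nova-compute")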
+
+
+def get_ssh_settings(application_name, user=None):
+ """Retrieve the known host entries and public keys for application
+
+ Retrieve the known host entries and public keys for all units of the
+ given application related to this application, for the app + user
+ combination.
+
+ :param application_name: Name of application eg nova-compute-something
+ :type application_name: str
+ :param user: The user that the ssh assets are for.
+ :type user: str
+ :returns: Public keys + host keys for all units for app + user combination.
+ :rtype: dict
+ """
+ settings = {}
+ keys = {}
+ prefix = ''
+ if user:
+ prefix = '{}_'.format(user)
+
+ for i, line in enumerate(ssh_known_hosts_lines(
+ application_name=application_name, user=user)):
+ settings['{}known_hosts_{}'.format(prefix, i)] = line
+ if settings:
+ settings['{}known_hosts_max_index'.format(prefix)] = len(
+ settings.keys())
+
+ for i, line in enumerate(ssh_authorized_keys_lines(
+ application_name=application_name, user=user)):
+ keys['{}authorized_keys_{}'.format(prefix, i)] = line
+ if keys:
+ keys['{}authorized_keys_max_index'.format(prefix)] = len(keys.keys())
+ settings.update(keys)
+ return settings
+
+
+def get_all_user_ssh_settings(application_name):
+ """Retrieve the known host entries and public keys for application
+
+ Retrieve the known host entries and public keys for all units of the
+ given application related to this application, for both the root user
+ and the nova user.
+
+ :param application_name: Name of application eg nova-compute-something
+ :type application_name: str
+ :returns: Public keys + host keys for all units for app + user combination.
+ :rtype: dict
+ """
+ settings = get_ssh_settings(application_name)
+ settings.update(get_ssh_settings(application_name, user='nova'))
+ return settings
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/templates/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/templates/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..9df5f746fbdf5491c640a77df907b71817cbc5af
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/templates/__init__.py
@@ -0,0 +1,16 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# dummy __init__.py to fool syncer into thinking this is a syncable python
+# module
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/templates/ceph.conf b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/templates/ceph.conf
new file mode 100644
index 0000000000000000000000000000000000000000..a11ce8ab85654a4d838e448f36fe3698b59f9531
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/templates/ceph.conf
@@ -0,0 +1,24 @@
+###############################################################################
+# [ WARNING ]
+# ceph configuration file maintained by Juju
+# local changes may be overwritten.
+############################################################################### +[global] +{% if auth -%} +auth_supported = {{ auth }} +keyring = /etc/ceph/$cluster.$name.keyring +mon host = {{ mon_hosts }} +{% endif -%} +log to syslog = {{ use_syslog }} +err to syslog = {{ use_syslog }} +clog to syslog = {{ use_syslog }} +{% if rbd_features %} +rbd default features = {{ rbd_features }} +{% endif %} + +[client] +{% if rbd_client_cache_settings -%} +{% for key, value in rbd_client_cache_settings.items() -%} +{{ key }} = {{ value }} +{% endfor -%} +{%- endif %} diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/templates/git.upstart b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/templates/git.upstart new file mode 100644 index 0000000000000000000000000000000000000000..4bed404bc01087c4dec6a44a56d17ed122e1d1e3 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/templates/git.upstart @@ -0,0 +1,17 @@ +description "{{ service_description }}" +author "Juju {{ service_name }} Charm " + +start on runlevel [2345] +stop on runlevel [!2345] + +respawn + +exec start-stop-daemon --start --chuid {{ user_name }} \ + --chdir {{ start_dir }} --name {{ process_name }} \ + --exec {{ executable_name }} -- \ + {% for config_file in config_files -%} + --config-file={{ config_file }} \ + {% endfor -%} + {% if log_file -%} + --log-file={{ log_file }} + {% endif -%} diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/templates/haproxy.cfg b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/templates/haproxy.cfg new file mode 100644 index 0000000000000000000000000000000000000000..d36af2aa86f91b2ee1249594fffd106843dd0676 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/templates/haproxy.cfg @@ -0,0 +1,77 @@ +global + log /var/lib/haproxy/dev/log local0 + log /var/lib/haproxy/dev/log local1 notice + maxconn 20000 + user haproxy + group haproxy + spread-checks 0 + stats socket /var/run/haproxy/admin.sock mode 600 level admin + stats timeout 2m + +defaults + log global + mode tcp + option tcplog + option dontlognull + retries 3 +{%- if haproxy_queue_timeout %} + timeout queue {{ haproxy_queue_timeout }} +{%- else %} + timeout queue 9000 +{%- endif %} +{%- if haproxy_connect_timeout %} + timeout connect {{ haproxy_connect_timeout }} +{%- else %} + timeout connect 9000 +{%- endif %} +{%- if haproxy_client_timeout %} + timeout client {{ haproxy_client_timeout }} +{%- else %} + timeout client 90000 +{%- endif %} +{%- if haproxy_server_timeout %} + timeout server {{ haproxy_server_timeout }} +{%- else %} + timeout server 90000 +{%- endif %} + +listen stats + bind {{ local_host }}:{{ stat_port }} + mode http + stats enable + stats hide-version + stats realm Haproxy\ Statistics + stats uri / + stats auth admin:{{ stat_password }} + +{% if frontends -%} +{% for service, ports in service_ports.items() -%} +frontend tcp-in_{{ service }} + bind *:{{ ports[0] }} + {% if ipv6_enabled -%} + bind :::{{ ports[0] }} + {% endif -%} + {% for frontend in frontends -%} + acl net_{{ frontend }} dst {{ frontends[frontend]['network'] }} + use_backend {{ service }}_{{ frontend }} if net_{{ frontend }} + {% endfor -%} + default_backend {{ service }}_{{ default_backend }} + +{% for frontend in frontends 
-%} +backend {{ service }}_{{ frontend }} + balance leastconn + {% if backend_options -%} + {% if backend_options[service] -%} + {% for option in backend_options[service] -%} + {% for key, value in option.items() -%} + {{ key }} {{ value }} + {% endfor -%} + {% endfor -%} + {% endif -%} + {% endif -%} + {% for unit, address in frontends[frontend]['backends'].items() -%} + server {{ unit }} {{ address }}:{{ ports[1] }} check + {% endfor %} +{% endfor -%} +{% endfor -%} +{% endif -%} diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/templates/logrotate b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/templates/logrotate new file mode 100644 index 0000000000000000000000000000000000000000..b2900d09a4ec2d04152ed7ce25bdc7346c349675 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/templates/logrotate @@ -0,0 +1,9 @@ +/var/log/{{ logrotate_logs_location }}/*.log { + {{ logrotate_interval }} + {{ logrotate_count }} + compress + delaycompress + missingok + notifempty + copytruncate +} diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/templates/memcached.conf b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/templates/memcached.conf new file mode 100644 index 0000000000000000000000000000000000000000..26cb037c72beccdb1a4f9269844abdbac2406369 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/templates/memcached.conf @@ -0,0 +1,53 @@ +############################################################################### +# [ WARNING ] +# memcached configuration file maintained by Juju +# local changes may be overwritten. +############################################################################### + +# memcached default config file +# 2003 - Jay Bonci +# This configuration file is read by the start-memcached script provided as +# part of the Debian GNU/Linux distribution. + +# Run memcached as a daemon. This command is implied, and is not needed for the +# daemon to run. See the README.Debian that comes with this package for more +# information. +-d + +# Log memcached's output to /var/log/memcached +logfile /var/log/memcached.log + +# Be verbose +# -v + +# Be even more verbose (print client commands as well) +# -vv + +# Start with a cap of 64 megs of memory. It's reasonable, and the daemon default +# Note that the daemon will grow to this size, but does not start out holding this much +# memory +-m 64 + +# Default connection port is 11211 +-p {{ memcache_port }} + +# Run the daemon as root. The start-memcached will default to running as root if no +# -u command is present in this config file +-u memcache + +# Specify which IP address to listen on. The default is to listen on all IP addresses +# This parameter is one of the only security measures that memcached has, so make sure +# it's listening on a firewalled interface. +-l {{ memcache_server }} + +# Limit the number of simultaneous incoming connections. The daemon default is 1024 +# -c 1024 + +# Lock down all paged memory. 
Consult with the README and homepage before you do this
+# -k
+
+# Return error when memory is exhausted (rather than removing items)
+# -M
+
+# Maximize core file limit
+# -r
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/templates/openstack_https_frontend b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/templates/openstack_https_frontend
new file mode 100644
index 0000000000000000000000000000000000000000..f614b3fa71928b615a1f058c6928d3c98bed9575
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/templates/openstack_https_frontend
@@ -0,0 +1,29 @@
+{% if endpoints -%}
+{% for ext_port in ext_ports -%}
+Listen {{ ext_port }}
+{% endfor -%}
+{% for address, endpoint, ext, int in endpoints -%}
+<VirtualHost {{ address }}:{{ ext }}>
+ ServerName {{ endpoint }}
+ SSLEngine on
+ SSLProtocol +TLSv1 +TLSv1.1 +TLSv1.2
+ SSLCipherSuite HIGH:!RC4:!MD5:!aNULL:!eNULL:!EXP:!LOW:!MEDIUM
+ SSLCertificateFile /etc/apache2/ssl/{{ namespace }}/cert_{{ endpoint }}
+ # See LP 1484489 - this is to support <= 2.4.7 and >= 2.4.8
+ SSLCertificateChainFile /etc/apache2/ssl/{{ namespace }}/cert_{{ endpoint }}
+ SSLCertificateKeyFile /etc/apache2/ssl/{{ namespace }}/key_{{ endpoint }}
+ ProxyPass / http://localhost:{{ int }}/
+ ProxyPassReverse / http://localhost:{{ int }}/
+ ProxyPreserveHost on
+ RequestHeader set X-Forwarded-Proto "https"
+</VirtualHost>
+{% endfor -%}
+<Proxy *>
+ Order deny,allow
+ Allow from all
+</Proxy>
+<Location />
+ Order allow,deny
+ Allow from all
+</Location>
+{% endif -%}
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/templates/openstack_https_frontend.conf b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/templates/openstack_https_frontend.conf
new file mode 100644
index 0000000000000000000000000000000000000000..f614b3fa71928b615a1f058c6928d3c98bed9575
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/templates/openstack_https_frontend.conf
@@ -0,0 +1,29 @@
+{% if endpoints -%}
+{% for ext_port in ext_ports -%}
+Listen {{ ext_port }}
+{% endfor -%}
+{% for address, endpoint, ext, int in endpoints -%}
+<VirtualHost {{ address }}:{{ ext }}>
+ ServerName {{ endpoint }}
+ SSLEngine on
+ SSLProtocol +TLSv1 +TLSv1.1 +TLSv1.2
+ SSLCipherSuite HIGH:!RC4:!MD5:!aNULL:!eNULL:!EXP:!LOW:!MEDIUM
+ SSLCertificateFile /etc/apache2/ssl/{{ namespace }}/cert_{{ endpoint }}
+ # See LP 1484489 - this is to support <= 2.4.7 and >= 2.4.8
+ SSLCertificateChainFile /etc/apache2/ssl/{{ namespace }}/cert_{{ endpoint }}
+ SSLCertificateKeyFile /etc/apache2/ssl/{{ namespace }}/key_{{ endpoint }}
+ ProxyPass / http://localhost:{{ int }}/
+ ProxyPassReverse / http://localhost:{{ int }}/
+ ProxyPreserveHost on
+ RequestHeader set X-Forwarded-Proto "https"
+</VirtualHost>
+{% endfor -%}
+<Proxy *>
+ Order deny,allow
+ Allow from all
+</Proxy>
+<Location />
+ Order allow,deny
+ Allow from all
+</Location>
+{% endif -%}
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/templates/section-keystone-authtoken b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/templates/section-keystone-authtoken
new file mode 100644
index 0000000000000000000000000000000000000000..5dcebe7c863728b040337616b492296f536b3ef3
--- /dev/null
+++ 
b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/templates/section-keystone-authtoken @@ -0,0 +1,12 @@ +{% if auth_host -%} +[keystone_authtoken] +auth_uri = {{ service_protocol }}://{{ service_host }}:{{ service_port }} +auth_url = {{ auth_protocol }}://{{ auth_host }}:{{ auth_port }} +auth_plugin = password +project_domain_id = default +user_domain_id = default +project_name = {{ admin_tenant_name }} +username = {{ admin_user }} +password = {{ admin_password }} +signing_dir = {{ signing_dir }} +{% endif -%} diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/templates/section-keystone-authtoken-legacy b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/templates/section-keystone-authtoken-legacy new file mode 100644 index 0000000000000000000000000000000000000000..9356b2be4e7dabe089b4d7d39793146bb92f3f48 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/templates/section-keystone-authtoken-legacy @@ -0,0 +1,10 @@ +{% if auth_host -%} +[keystone_authtoken] +# Juno specific config (Bug #1557223) +auth_uri = {{ service_protocol }}://{{ service_host }}:{{ service_port }}/{{ service_admin_prefix }} +identity_uri = {{ auth_protocol }}://{{ auth_host }}:{{ auth_port }} +admin_tenant_name = {{ admin_tenant_name }} +admin_user = {{ admin_user }} +admin_password = {{ admin_password }} +signing_dir = {{ signing_dir }} +{% endif -%} diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/templates/section-keystone-authtoken-mitaka b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/templates/section-keystone-authtoken-mitaka new file mode 100644 index 0000000000000000000000000000000000000000..c281868b16a885cd01a234984974af0349a5d242 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/templates/section-keystone-authtoken-mitaka @@ -0,0 +1,22 @@ +{% if auth_host -%} +[keystone_authtoken] +auth_type = password +{% if api_version == "3" -%} +auth_uri = {{ service_protocol }}://{{ service_host }}:{{ service_port }}/v3 +auth_url = {{ auth_protocol }}://{{ auth_host }}:{{ auth_port }}/v3 +project_domain_name = {{ admin_domain_name }} +user_domain_name = {{ admin_domain_name }} +{% else -%} +auth_uri = {{ service_protocol }}://{{ service_host }}:{{ service_port }} +auth_url = {{ auth_protocol }}://{{ auth_host }}:{{ auth_port }} +project_domain_name = default +user_domain_name = default +{% endif -%} +project_name = {{ admin_tenant_name }} +username = {{ admin_user }} +password = {{ admin_password }} +signing_dir = {{ signing_dir }} +{% if use_memcache == true %} +memcached_servers = {{ memcache_url }} +{% endif -%} +{% endif -%} diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/templates/section-keystone-authtoken-v3only b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/templates/section-keystone-authtoken-v3only new file mode 100644 index 0000000000000000000000000000000000000000..d26a91fe1f00e2b12d094be727a53eddc87b829d --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/templates/section-keystone-authtoken-v3only @@ -0,0 +1,9 @@ +{% if auth_host -%} 
+[keystone_authtoken] +{% for option_name, option_value in keystone_authtoken.items() -%} +{{ option_name }} = {{ option_value }} +{% endfor -%} +{% if use_memcache == true %} +memcached_servers = {{ memcache_url }} +{% endif -%} +{% endif -%} diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/templates/section-oslo-cache b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/templates/section-oslo-cache new file mode 100644 index 0000000000000000000000000000000000000000..e056a32aafe32413cefb40b9414d064f2b0c8fd2 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/templates/section-oslo-cache @@ -0,0 +1,6 @@ +[cache] +{% if memcache_url %} +enabled = true +backend = oslo_cache.memcache_pool +memcache_servers = {{ memcache_url }} +{% endif %} diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/templates/section-oslo-messaging-rabbit b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/templates/section-oslo-messaging-rabbit new file mode 100644 index 0000000000000000000000000000000000000000..bed2216aba7217022ded17dec4cdb0871f513b40 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/templates/section-oslo-messaging-rabbit @@ -0,0 +1,10 @@ +[oslo_messaging_rabbit] +{% if rabbitmq_ha_queues -%} +rabbit_ha_queues = True +{% endif -%} +{% if rabbit_ssl_port -%} +ssl = True +{% endif -%} +{% if rabbit_ssl_ca -%} +ssl_ca_file = {{ rabbit_ssl_ca }} +{% endif -%} diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/templates/section-oslo-messaging-rabbit-ocata b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/templates/section-oslo-messaging-rabbit-ocata new file mode 100644 index 0000000000000000000000000000000000000000..365f43757719b2de3c601ea6a9752dac8a8b3545 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/templates/section-oslo-messaging-rabbit-ocata @@ -0,0 +1,10 @@ +[oslo_messaging_rabbit] +{% if rabbitmq_ha_queues -%} +rabbit_ha_queues = True +{% endif -%} +{% if rabbit_ssl_port -%} +rabbit_use_ssl = True +{% endif -%} +{% if rabbit_ssl_ca -%} +ssl_ca_file = {{ rabbit_ssl_ca }} +{% endif -%} diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/templates/section-oslo-middleware b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/templates/section-oslo-middleware new file mode 100644 index 0000000000000000000000000000000000000000..dd73230a42aa037582989979c1bc8132d30b9b38 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/templates/section-oslo-middleware @@ -0,0 +1,5 @@ +[oslo_middleware] + +# Bug #1758675 +enable_proxy_headers_parsing = true + diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/templates/section-oslo-notifications b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/templates/section-oslo-notifications new file mode 100644 index 
0000000000000000000000000000000000000000..71c7eb068eace94e8986d7a868bded236f75128c --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/templates/section-oslo-notifications @@ -0,0 +1,15 @@ +{% if transport_url -%} +[oslo_messaging_notifications] +driver = {{ oslo_messaging_driver }} +transport_url = {{ transport_url }} +{% if send_notifications_to_logs %} +driver = log +{% endif %} +{% if notification_topics -%} +topics = {{ notification_topics }} +{% endif -%} +{% if notification_format -%} +[notifications] +notification_format = {{ notification_format }} +{% endif -%} +{% endif -%} diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/templates/section-placement b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/templates/section-placement new file mode 100644 index 0000000000000000000000000000000000000000..97724bdb5af6e0352d0b920600d7cbbc8318fa7c --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/templates/section-placement @@ -0,0 +1,19 @@ +[placement] +{% if auth_host -%} +auth_url = {{ auth_protocol }}://{{ auth_host }}:{{ auth_port }} +auth_type = password +{% if api_version == "3" -%} +project_domain_name = {{ admin_domain_name }} +user_domain_name = {{ admin_domain_name }} +{% else -%} +project_domain_name = default +user_domain_name = default +{% endif -%} +project_name = {{ admin_tenant_name }} +username = {{ admin_user }} +password = {{ admin_password }} +{% endif -%} +{% if region -%} +os_region_name = {{ region }} +{% endif -%} +randomize_allocation_candidates = true diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/templates/section-rabbitmq-oslo b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/templates/section-rabbitmq-oslo new file mode 100644 index 0000000000000000000000000000000000000000..b444c9c99bd179eaa0be31af462576e4ca9c45b5 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/templates/section-rabbitmq-oslo @@ -0,0 +1,22 @@ +{% if rabbitmq_host or rabbitmq_hosts -%} +[oslo_messaging_rabbit] +rabbit_userid = {{ rabbitmq_user }} +rabbit_virtual_host = {{ rabbitmq_virtual_host }} +rabbit_password = {{ rabbitmq_password }} +{% if rabbitmq_hosts -%} +rabbit_hosts = {{ rabbitmq_hosts }} +{% if rabbitmq_ha_queues -%} +rabbit_ha_queues = True +rabbit_durable_queues = False +{% endif -%} +{% else -%} +rabbit_host = {{ rabbitmq_host }} +{% endif -%} +{% if rabbit_ssl_port -%} +rabbit_use_ssl = True +rabbit_port = {{ rabbit_ssl_port }} +{% if rabbit_ssl_ca -%} +kombu_ssl_ca_certs = {{ rabbit_ssl_ca }} +{% endif -%} +{% endif -%} +{% endif -%} diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/templates/section-zeromq b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/templates/section-zeromq new file mode 100644 index 0000000000000000000000000000000000000000..95f1a76ce87f9babdcba58da79c68d34c2776eea --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/templates/section-zeromq @@ -0,0 +1,14 @@ +{% if zmq_host -%} +# ZeroMQ configuration (restart-nonce: {{ zmq_nonce }}) +rpc_backend = zmq +rpc_zmq_host = {{ 
zmq_host }}
+{% if zmq_redis_address -%}
+rpc_zmq_matchmaker = redis
+matchmaker_heartbeat_freq = 15
+matchmaker_heartbeat_ttl = 30
+[matchmaker_redis]
+host = {{ zmq_redis_address }}
+{% else -%}
+rpc_zmq_matchmaker = ring
+{% endif -%}
+{% endif -%}
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/templates/vendor_data.json b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/templates/vendor_data.json
new file mode 100644
index 0000000000000000000000000000000000000000..904f612a7f74490d4508920400d18d493d056477
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/templates/vendor_data.json
@@ -0,0 +1 @@
+{{ vendor_data_json }}
\ No newline at end of file
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/templates/wsgi-openstack-api.conf b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/templates/wsgi-openstack-api.conf
new file mode 100644
index 0000000000000000000000000000000000000000..23b62a385283e6b8f1a6af0fcebccac747031b26
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/templates/wsgi-openstack-api.conf
@@ -0,0 +1,91 @@
+# Configuration file maintained by Juju. Local changes may be overwritten.
+
+{% if port -%}
+Listen {{ port }}
+{% endif -%}
+
+{% if admin_port -%}
+Listen {{ admin_port }}
+{% endif -%}
+
+{% if public_port -%}
+Listen {{ public_port }}
+{% endif -%}
+
+{% if port -%}
+<VirtualHost *:{{ port }}>
+ WSGIDaemonProcess {{ service_name }} processes={{ processes }} threads={{ threads }} user={{ user }} group={{ group }} \
+ display-name=%{GROUP}
+ WSGIProcessGroup {{ service_name }}
+ WSGIScriptAlias / {{ script }}
+ WSGIApplicationGroup %{GLOBAL}
+ WSGIPassAuthorization On
+ <IfVersion >= 2.4>
+ ErrorLogFormat "%{cu}t %M"
+ </IfVersion>
+ ErrorLog /var/log/apache2/{{ service_name }}_error.log
+ CustomLog /var/log/apache2/{{ service_name }}_access.log combined
+
+ <Directory /usr/bin>
+ <IfVersion >= 2.4>
+ Require all granted
+ </IfVersion>
+ <IfVersion < 2.4>
+ Order allow,deny
+ Allow from all
+ </IfVersion>
+ </Directory>
+</VirtualHost>
+{% endif -%}
+
+{% if admin_port -%}
+<VirtualHost *:{{ admin_port }}>
+ WSGIDaemonProcess {{ service_name }}-admin processes={{ admin_processes }} threads={{ threads }} user={{ user }} group={{ group }} \
+ display-name=%{GROUP}
+ WSGIProcessGroup {{ service_name }}-admin
+ WSGIScriptAlias / {{ admin_script }}
+ WSGIApplicationGroup %{GLOBAL}
+ WSGIPassAuthorization On
+ <IfVersion >= 2.4>
+ ErrorLogFormat "%{cu}t %M"
+ </IfVersion>
+ ErrorLog /var/log/apache2/{{ service_name }}_error.log
+ CustomLog /var/log/apache2/{{ service_name }}_access.log combined
+
+ <Directory /usr/bin>
+ <IfVersion >= 2.4>
+ Require all granted
+ </IfVersion>
+ <IfVersion < 2.4>
+ Order allow,deny
+ Allow from all
+ </IfVersion>
+ </Directory>
+</VirtualHost>
+{% endif -%}
+
+{% if public_port -%}
+<VirtualHost *:{{ public_port }}>
+ WSGIDaemonProcess {{ service_name }}-public processes={{ public_processes }} threads={{ threads }} user={{ user }} group={{ group }} \
+ display-name=%{GROUP}
+ WSGIProcessGroup {{ service_name }}-public
+ WSGIScriptAlias / {{ public_script }}
+ WSGIApplicationGroup %{GLOBAL}
+ WSGIPassAuthorization On
+ <IfVersion >= 2.4>
+ ErrorLogFormat "%{cu}t %M"
+ </IfVersion>
+ ErrorLog /var/log/apache2/{{ service_name }}_error.log
+ CustomLog /var/log/apache2/{{ service_name }}_access.log combined
+
+ <Directory /usr/bin>
+ <IfVersion >= 2.4>
+ Require all granted
+ </IfVersion>
+ <IfVersion < 2.4>
+ Order allow,deny
+ Allow from all
+ </IfVersion>
+ </Directory>
+</VirtualHost>
+{% endif -%}
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/templates/wsgi-openstack-metadata.conf 
b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/templates/wsgi-openstack-metadata.conf
new file mode 100644
index 0000000000000000000000000000000000000000..23b62a385283e6b8f1a6af0fcebccac747031b26
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/templates/wsgi-openstack-metadata.conf
@@ -0,0 +1,91 @@
+# Configuration file maintained by Juju. Local changes may be overwritten.
+
+{% if port -%}
+Listen {{ port }}
+{% endif -%}
+
+{% if admin_port -%}
+Listen {{ admin_port }}
+{% endif -%}
+
+{% if public_port -%}
+Listen {{ public_port }}
+{% endif -%}
+
+{% if port -%}
+<VirtualHost *:{{ port }}>
+ WSGIDaemonProcess {{ service_name }} processes={{ processes }} threads={{ threads }} user={{ user }} group={{ group }} \
+ display-name=%{GROUP}
+ WSGIProcessGroup {{ service_name }}
+ WSGIScriptAlias / {{ script }}
+ WSGIApplicationGroup %{GLOBAL}
+ WSGIPassAuthorization On
+ <IfVersion >= 2.4>
+ ErrorLogFormat "%{cu}t %M"
+ </IfVersion>
+ ErrorLog /var/log/apache2/{{ service_name }}_error.log
+ CustomLog /var/log/apache2/{{ service_name }}_access.log combined
+
+ <Directory /usr/bin>
+ <IfVersion >= 2.4>
+ Require all granted
+ </IfVersion>
+ <IfVersion < 2.4>
+ Order allow,deny
+ Allow from all
+ </IfVersion>
+ </Directory>
+</VirtualHost>
+{% endif -%}
+
+{% if admin_port -%}
+<VirtualHost *:{{ admin_port }}>
+ WSGIDaemonProcess {{ service_name }}-admin processes={{ admin_processes }} threads={{ threads }} user={{ user }} group={{ group }} \
+ display-name=%{GROUP}
+ WSGIProcessGroup {{ service_name }}-admin
+ WSGIScriptAlias / {{ admin_script }}
+ WSGIApplicationGroup %{GLOBAL}
+ WSGIPassAuthorization On
+ <IfVersion >= 2.4>
+ ErrorLogFormat "%{cu}t %M"
+ </IfVersion>
+ ErrorLog /var/log/apache2/{{ service_name }}_error.log
+ CustomLog /var/log/apache2/{{ service_name }}_access.log combined
+
+ <Directory /usr/bin>
+ <IfVersion >= 2.4>
+ Require all granted
+ </IfVersion>
+ <IfVersion < 2.4>
+ Order allow,deny
+ Allow from all
+ </IfVersion>
+ </Directory>
+</VirtualHost>
+{% endif -%}
+
+{% if public_port -%}
+<VirtualHost *:{{ public_port }}>
+ WSGIDaemonProcess {{ service_name }}-public processes={{ public_processes }} threads={{ threads }} user={{ user }} group={{ group }} \
+ display-name=%{GROUP}
+ WSGIProcessGroup {{ service_name }}-public
+ WSGIScriptAlias / {{ public_script }}
+ WSGIApplicationGroup %{GLOBAL}
+ WSGIPassAuthorization On
+ <IfVersion >= 2.4>
+ ErrorLogFormat "%{cu}t %M"
+ </IfVersion>
+ ErrorLog /var/log/apache2/{{ service_name }}_error.log
+ CustomLog /var/log/apache2/{{ service_name }}_access.log combined
+
+ <Directory /usr/bin>
+ <IfVersion >= 2.4>
+ Require all granted
+ </IfVersion>
+ <IfVersion < 2.4>
+ Order allow,deny
+ Allow from all
+ </IfVersion>
+ </Directory>
+</VirtualHost>
+{% endif -%}
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/templating.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/templating.py
new file mode 100644
index 0000000000000000000000000000000000000000..050f8af5c9135db4fafdbf5098edcd22c7157ff8
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/templating.py
@@ -0,0 +1,379 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import os
+
+import six
+
+from charmhelpers.fetch import apt_install, apt_update
+from charmhelpers.core.hookenv import (
+ log,
+ ERROR,
+ INFO,
+ TRACE
+)
+from charmhelpers.contrib.openstack.utils import OPENSTACK_CODENAMES
+
+try:
+ from jinja2 import FileSystemLoader, ChoiceLoader, Environment, exceptions
+except ImportError:
+ apt_update(fatal=True)
+ if six.PY2:
+ apt_install('python-jinja2', fatal=True)
+ else:
+ apt_install('python3-jinja2', fatal=True)
+ from jinja2 import FileSystemLoader, ChoiceLoader, Environment, exceptions
+
+
+class OSConfigException(Exception):
+ pass
+
+
+def get_loader(templates_dir, os_release):
+ """
+ Create a jinja2.ChoiceLoader containing template dirs up to
+ and including os_release. If a release-specific template directory
+ is missing under templates_dir, it is omitted from the loader.
+ templates_dir is added to the bottom of the search list as a base
+ loading dir.
+
+ A charm may also ship a templates dir with this module
+ and it will be appended to the bottom of the search list, eg::
+
+ hooks/charmhelpers/contrib/openstack/templates
+
+ :param templates_dir (str): Base template directory containing release
+ sub-directories.
+ :param os_release (str): OpenStack release codename to construct template
+ loader.
+ :returns: jinja2.ChoiceLoader constructed with a list of
+ jinja2.FilesystemLoaders, ordered in descending
+ order by OpenStack release.
+ """
+ tmpl_dirs = [(rel, os.path.join(templates_dir, rel))
+ for rel in six.itervalues(OPENSTACK_CODENAMES)]
+
+ if not os.path.isdir(templates_dir):
+ log('Templates directory not found @ %s.' % templates_dir,
+ level=ERROR)
+ raise OSConfigException
+
+ # the bottom contains templates_dir and possibly a common templates dir
+ # shipped with the helper.
+ loaders = [FileSystemLoader(templates_dir)]
+ helper_templates = os.path.join(os.path.dirname(__file__), 'templates')
+ if os.path.isdir(helper_templates):
+ loaders.append(FileSystemLoader(helper_templates))
+
+ for rel, tmpl_dir in tmpl_dirs:
+ if os.path.isdir(tmpl_dir):
+ loaders.insert(0, FileSystemLoader(tmpl_dir))
+ if rel == os_release:
+ break
+ # demote this log to the lowest level; we don't really need to see these
+ # logs in production even when debugging.
+ log('Creating choice loader with dirs: %s' %
+ [l.searchpath for l in loaders], level=TRACE)
+ return ChoiceLoader(loaders)
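A loader-order sketch with illustrative paths; only directories that exist are
added, so with a 'havana' release the search order is havana/, then earlier
release dirs, then the base dir:

    from jinja2 import Environment

    loader = get_loader('/tmp/templates', 'havana')
    env = Environment(loader=loader)
    template = env.get_template('api-paste.ini')  # havana/ copy wins if present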
+
+
+class OSConfigTemplate(object):
+    """
+    Associates a config file template with a list of context generators.
+    Responsible for constructing a template context based on those generators.
+    """
+
+    def __init__(self, config_file, contexts, config_template=None):
+        self.config_file = config_file
+
+        if hasattr(contexts, '__call__'):
+            self.contexts = [contexts]
+        else:
+            self.contexts = contexts
+
+        self._complete_contexts = []
+
+        self.config_template = config_template
+
+    def context(self):
+        ctxt = {}
+        for context in self.contexts:
+            _ctxt = context()
+            if _ctxt:
+                ctxt.update(_ctxt)
+                # track interfaces for every complete context.
+                [self._complete_contexts.append(interface)
+                 for interface in context.interfaces
+                 if interface not in self._complete_contexts]
+        return ctxt
+
+    def complete_contexts(self):
+        '''
+        Return a list of interfaces that have satisfied contexts.
+        '''
+        if self._complete_contexts:
+            return self._complete_contexts
+        self.context()
+        return self._complete_contexts
+
+    @property
+    def is_string_template(self):
+        """:returns: True if this instance was initialised with a template
+        string rather than a template file."""
+        return self.config_template is not None
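A context generator, as consumed by OSConfigTemplate, is simply a callable with an `interfaces` attribute that returns a dict of template variables (or something falsy while its relation data is incomplete). A minimal hand-rolled sketch, purely for illustration:

    class WorkerContext(object):
        interfaces = []  # relation interfaces this context depends on

        def __call__(self):
            # Return a falsy value while incomplete; template vars when ready.
            return {'processes': 3, 'threads': 1}

    tmpl = OSConfigTemplate('/etc/demo/demo.conf', [WorkerContext()])
    tmpl.context()  # -> {'processes': 3, 'threads': 1}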
+ """ + def __init__(self, templates_dir, openstack_release): + if not os.path.isdir(templates_dir): + log('Could not locate templates dir %s' % templates_dir, + level=ERROR) + raise OSConfigException + + self.templates_dir = templates_dir + self.openstack_release = openstack_release + self.templates = {} + self._tmpl_env = None + + if None in [Environment, ChoiceLoader, FileSystemLoader]: + # if this code is running, the object is created pre-install hook. + # jinja2 shouldn't get touched until the module is reloaded on next + # hook execution, with proper jinja2 bits successfully imported. + if six.PY2: + apt_install('python-jinja2') + else: + apt_install('python3-jinja2') + + def register(self, config_file, contexts, config_template=None): + """ + Register a config file with a list of context generators to be called + during rendering. + config_template can be used to load a template from a string instead of + using template loaders and template files. + :param config_file (str): a path where a config file will be rendered + :param contexts (list): a list of context dictionaries with kv pairs + :param config_template (str): an optional template string to use + """ + self.templates[config_file] = OSConfigTemplate( + config_file=config_file, + contexts=contexts, + config_template=config_template + ) + log('Registered config file: {}'.format(config_file), + level=INFO) + + def _get_tmpl_env(self): + if not self._tmpl_env: + loader = get_loader(self.templates_dir, self.openstack_release) + self._tmpl_env = Environment(loader=loader) + + def _get_template(self, template): + self._get_tmpl_env() + template = self._tmpl_env.get_template(template) + log('Loaded template from {}'.format(template.filename), + level=INFO) + return template + + def _get_template_from_string(self, ostmpl): + ''' + Get a jinja2 template object from a string. + :param ostmpl: OSConfigTemplate to use as a data source. + ''' + self._get_tmpl_env() + template = self._tmpl_env.from_string(ostmpl.config_template) + log('Loaded a template from a string for {}'.format( + ostmpl.config_file), + level=INFO) + return template + + def render(self, config_file): + if config_file not in self.templates: + log('Config not registered: {}'.format(config_file), level=ERROR) + raise OSConfigException + + ostmpl = self.templates[config_file] + ctxt = ostmpl.context() + + if ostmpl.is_string_template: + template = self._get_template_from_string(ostmpl) + log('Rendering from a string template: ' + '{}'.format(config_file), + level=INFO) + else: + _tmpl = os.path.basename(config_file) + try: + template = self._get_template(_tmpl) + except exceptions.TemplateNotFound: + # if no template is found with basename, try looking + # for it using a munged full path, eg: + # /etc/apache2/apache2.conf -> etc_apache2_apache2.conf + _tmpl = '_'.join(config_file.split('/')[1:]) + try: + template = self._get_template(_tmpl) + except exceptions.TemplateNotFound as e: + log('Could not load template from {} by {} or {}.' + ''.format( + self.templates_dir, + os.path.basename(config_file), + _tmpl + ), + level=ERROR) + raise e + + log('Rendering from template: {}'.format(config_file), + level=INFO) + return template.render(ctxt) + + def write(self, config_file): + """ + Write a single config file, raises if config file is not registered. 
+ """ + if config_file not in self.templates: + log('Config not registered: %s' % config_file, level=ERROR) + raise OSConfigException + + _out = self.render(config_file) + if six.PY3: + _out = _out.encode('UTF-8') + + with open(config_file, 'wb') as out: + out.write(_out) + + log('Wrote template %s.' % config_file, level=INFO) + + def write_all(self): + """ + Write out all registered config files. + """ + [self.write(k) for k in six.iterkeys(self.templates)] + + def set_release(self, openstack_release): + """ + Resets the template environment and generates a new template loader + based on a the new openstack release. + """ + self._tmpl_env = None + self.openstack_release = openstack_release + self._get_tmpl_env() + + def complete_contexts(self): + ''' + Returns a list of context interfaces that yield a complete context. + ''' + interfaces = [] + [interfaces.extend(i.complete_contexts()) + for i in six.itervalues(self.templates)] + return interfaces + + def get_incomplete_context_data(self, interfaces): + ''' + Return dictionary of relation status of interfaces and any missing + required context data. Example: + {'amqp': {'missing_data': ['rabbitmq_password'], 'related': True}, + 'zeromq-configuration': {'related': False}} + ''' + incomplete_context_data = {} + + for i in six.itervalues(self.templates): + for context in i.contexts: + for interface in interfaces: + related = False + if interface in context.interfaces: + related = context.get_related() + missing_data = context.missing_data + if missing_data: + incomplete_context_data[interface] = {'missing_data': missing_data} + if related: + if incomplete_context_data.get(interface): + incomplete_context_data[interface].update({'related': True}) + else: + incomplete_context_data[interface] = {'related': True} + else: + incomplete_context_data[interface] = {'related': False} + return incomplete_context_data diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/utils.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/utils.py new file mode 100644 index 0000000000000000000000000000000000000000..fbf0156108537f9caae35e193c346b0e4ee237b9 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/utils.py @@ -0,0 +1,2352 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# Common python helper functions used for OpenStack charms. 
+from collections import OrderedDict, namedtuple
+from functools import wraps
+
+import subprocess
+import json
+import os
+import sys
+import re
+import itertools
+import functools
+
+import six
+import traceback
+import uuid
+import yaml
+
+from charmhelpers import deprecate
+
+from charmhelpers.contrib.network import ip
+
+from charmhelpers.core import unitdata
+
+from charmhelpers.core.hookenv import (
+    WORKLOAD_STATES,
+    action_fail,
+    action_set,
+    config,
+    expected_peer_units,
+    expected_related_units,
+    log as juju_log,
+    charm_dir,
+    INFO,
+    ERROR,
+    metadata,
+    related_units,
+    relation_get,
+    relation_id,
+    relation_ids,
+    relation_set,
+    status_set,
+    hook_name,
+    application_version_set,
+    cached,
+    leader_set,
+    leader_get,
+    local_unit,
+)
+
+from charmhelpers.core.strutils import (
+    BasicStringComparator,
+    bool_from_string,
+)
+
+from charmhelpers.contrib.storage.linux.lvm import (
+    deactivate_lvm_volume_group,
+    is_lvm_physical_volume,
+    remove_lvm_physical_volume,
+)
+
+from charmhelpers.contrib.network.ip import (
+    get_ipv6_addr,
+    is_ipv6,
+    port_has_listener,
+)
+
+from charmhelpers.core.host import (
+    lsb_release,
+    mounts,
+    umount,
+    service_running,
+    service_pause,
+    service_resume,
+    service_stop,
+    service_start,
+    restart_on_change_helper,
+)
+from charmhelpers.fetch import (
+    apt_cache,
+    import_key as fetch_import_key,
+    add_source as fetch_add_source,
+    SourceConfigError,
+    GPGKeyError,
+    get_upstream_version,
+    filter_missing_packages,
+    ubuntu_apt_pkg as apt,
+)
+
+from charmhelpers.fetch.snap import (
+    snap_install,
+    snap_refresh,
+    valid_snap_channel,
+)
+
+from charmhelpers.contrib.storage.linux.utils import is_block_device, zap_disk
+from charmhelpers.contrib.storage.linux.loopback import ensure_loopback_device
+from charmhelpers.contrib.openstack.exceptions import OSContextError
+from charmhelpers.contrib.openstack.policyd import (
+    policyd_status_message_prefix,
+    POLICYD_CONFIG_NAME,
+)
+
+from charmhelpers.contrib.openstack.ha.utils import (
+    expect_ha,
+)
+
+CLOUD_ARCHIVE_URL = "http://ubuntu-cloud.archive.canonical.com/ubuntu"
+CLOUD_ARCHIVE_KEY_ID = '5EDB1B62EC4926EA'
+
+DISTRO_PROPOSED = ('deb http://archive.ubuntu.com/ubuntu/ %s-proposed '
+                   'restricted main multiverse universe')
+
+OPENSTACK_RELEASES = (
+    'diablo',
+    'essex',
+    'folsom',
+    'grizzly',
+    'havana',
+    'icehouse',
+    'juno',
+    'kilo',
+    'liberty',
+    'mitaka',
+    'newton',
+    'ocata',
+    'pike',
+    'queens',
+    'rocky',
+    'stein',
+    'train',
+    'ussuri',
+)
+
+UBUNTU_OPENSTACK_RELEASE = OrderedDict([
+    ('oneiric', 'diablo'),
+    ('precise', 'essex'),
+    ('quantal', 'folsom'),
+    ('raring', 'grizzly'),
+    ('saucy', 'havana'),
+    ('trusty', 'icehouse'),
+    ('utopic', 'juno'),
+    ('vivid', 'kilo'),
+    ('wily', 'liberty'),
+    ('xenial', 'mitaka'),
+    ('yakkety', 'newton'),
+    ('zesty', 'ocata'),
+    ('artful', 'pike'),
+    ('bionic', 'queens'),
+    ('cosmic', 'rocky'),
+    ('disco', 'stein'),
+    ('eoan', 'train'),
+    ('focal', 'ussuri'),
+])
+
+
+OPENSTACK_CODENAMES = OrderedDict([
+    ('2011.2', 'diablo'),
+    ('2012.1', 'essex'),
+    ('2012.2', 'folsom'),
+    ('2013.1', 'grizzly'),
+    ('2013.2', 'havana'),
+    ('2014.1', 'icehouse'),
+    ('2014.2', 'juno'),
+    ('2015.1', 'kilo'),
+    ('2015.2', 'liberty'),
+    ('2016.1', 'mitaka'),
+    ('2016.2', 'newton'),
+    ('2017.1', 'ocata'),
+    ('2017.2', 'pike'),
+    ('2018.1', 'queens'),
+    ('2018.2', 'rocky'),
+    ('2019.1', 'stein'),
+    ('2019.2', 'train'),
+    ('2020.1', 'ussuri'),
+])
+
+# The ugly duckling - must list releases oldest to newest
+SWIFT_CODENAMES = OrderedDict([
+    ('diablo',
+        ['1.4.3']),
+    ('essex',
+        ['1.4.8']),
+    ('folsom',
+        ['1.7.4']),
+    ('grizzly',
+        ['1.7.6', '1.7.7', '1.8.0']),
+    ('havana',
+        ['1.9.0', '1.9.1', '1.10.0']),
+    ('icehouse',
+        ['1.11.0', '1.12.0', '1.13.0', '1.13.1']),
+    ('juno',
+        ['2.0.0', '2.1.0', '2.2.0']),
+    ('kilo',
+        ['2.2.1', '2.2.2']),
+    ('liberty',
+        ['2.3.0', '2.4.0', '2.5.0']),
+    ('mitaka',
+        ['2.5.0', '2.6.0', '2.7.0']),
+    ('newton',
+        ['2.8.0', '2.9.0', '2.10.0']),
+    ('ocata',
+        ['2.11.0', '2.12.0', '2.13.0']),
+    ('pike',
+        ['2.13.0', '2.15.0']),
+    ('queens',
+        ['2.16.0', '2.17.0']),
+    ('rocky',
+        ['2.18.0', '2.19.0']),
+    ('stein',
+        ['2.20.0', '2.21.0']),
+    ('train',
+        ['2.22.0', '2.23.0']),
+    ('ussuri',
+        ['2.24.0', '2.25.0']),
+])
+
+# >= Liberty version->codename mapping
+PACKAGE_CODENAMES = {
+    'nova-common': OrderedDict([
+        ('12', 'liberty'),
+        ('13', 'mitaka'),
+        ('14', 'newton'),
+        ('15', 'ocata'),
+        ('16', 'pike'),
+        ('17', 'queens'),
+        ('18', 'rocky'),
+        ('19', 'stein'),
+        ('20', 'train'),
+        ('21', 'ussuri'),
+    ]),
+    'neutron-common': OrderedDict([
+        ('7', 'liberty'),
+        ('8', 'mitaka'),
+        ('9', 'newton'),
+        ('10', 'ocata'),
+        ('11', 'pike'),
+        ('12', 'queens'),
+        ('13', 'rocky'),
+        ('14', 'stein'),
+        ('15', 'train'),
+        ('16', 'ussuri'),
+    ]),
+    'cinder-common': OrderedDict([
+        ('7', 'liberty'),
+        ('8', 'mitaka'),
+        ('9', 'newton'),
+        ('10', 'ocata'),
+        ('11', 'pike'),
+        ('12', 'queens'),
+        ('13', 'rocky'),
+        ('14', 'stein'),
+        ('15', 'train'),
+        ('16', 'ussuri'),
+    ]),
+    'keystone': OrderedDict([
+        ('8', 'liberty'),
+        ('9', 'mitaka'),
+        ('10', 'newton'),
+        ('11', 'ocata'),
+        ('12', 'pike'),
+        ('13', 'queens'),
+        ('14', 'rocky'),
+        ('15', 'stein'),
+        ('16', 'train'),
+        ('17', 'ussuri'),
+    ]),
+    'horizon-common': OrderedDict([
+        ('8', 'liberty'),
+        ('9', 'mitaka'),
+        ('10', 'newton'),
+        ('11', 'ocata'),
+        ('12', 'pike'),
+        ('13', 'queens'),
+        ('14', 'rocky'),
+        ('15', 'stein'),
+        ('16', 'train'),
+        ('18', 'ussuri'),
+    ]),
+    'ceilometer-common': OrderedDict([
+        ('5', 'liberty'),
+        ('6', 'mitaka'),
+        ('7', 'newton'),
+        ('8', 'ocata'),
+        ('9', 'pike'),
+        ('10', 'queens'),
+        ('11', 'rocky'),
+        ('12', 'stein'),
+        ('13', 'train'),
+        ('14', 'ussuri'),
+    ]),
+    'heat-common': OrderedDict([
+        ('5', 'liberty'),
+        ('6', 'mitaka'),
+        ('7', 'newton'),
+        ('8', 'ocata'),
+        ('9', 'pike'),
+        ('10', 'queens'),
+        ('11', 'rocky'),
+        ('12', 'stein'),
+        ('13', 'train'),
+        ('14', 'ussuri'),
+    ]),
+    'glance-common': OrderedDict([
+        ('11', 'liberty'),
+        ('12', 'mitaka'),
+        ('13', 'newton'),
+        ('14', 'ocata'),
+        ('15', 'pike'),
+        ('16', 'queens'),
+        ('17', 'rocky'),
+        ('18', 'stein'),
+        ('19', 'train'),
+        ('20', 'ussuri'),
+    ]),
+    'openstack-dashboard': OrderedDict([
+        ('8', 'liberty'),
+        ('9', 'mitaka'),
+        ('10', 'newton'),
+        ('11', 'ocata'),
+        ('12', 'pike'),
+        ('13', 'queens'),
+        ('14', 'rocky'),
+        ('15', 'stein'),
+        ('16', 'train'),
+        ('18', 'ussuri'),
+    ]),
+}
+
+DEFAULT_LOOPBACK_SIZE = '5G'
+
+DB_SERIES_UPGRADING_KEY = 'cluster-series-upgrading'
+
+DB_MAINTENANCE_KEYS = [DB_SERIES_UPGRADING_KEY]
+
+
+class CompareOpenStackReleases(BasicStringComparator):
+    """Provide comparisons of OpenStack releases.
+
+    Use in the form of
+
+    if CompareOpenStackReleases(release) > 'mitaka':
+        # do something with mitaka
+    """
+    _list = OPENSTACK_RELEASES
+
+
+def error_out(msg):
+    juju_log("FATAL ERROR: %s" % msg, level='ERROR')
+    sys.exit(1)
+
+
+def get_installed_semantic_versioned_packages():
+    '''Get a list of installed packages which have OpenStack semantic versioning
+
+    :returns: List of installed packages
+    :rtype: [pkg1, pkg2, ...]
+    '''
+    return filter_missing_packages(PACKAGE_CODENAMES.keys())
+
+
+def get_os_codename_install_source(src):
+    '''Derive OpenStack release codename from a given installation source.'''
+    ubuntu_rel = lsb_release()['DISTRIB_CODENAME']
+    rel = ''
+    if src is None:
+        return rel
+    if src in ['distro', 'distro-proposed', 'proposed']:
+        try:
+            rel = UBUNTU_OPENSTACK_RELEASE[ubuntu_rel]
+        except KeyError:
+            e = 'Could not derive openstack release for '\
+                'this Ubuntu release: %s' % ubuntu_rel
+            error_out(e)
+        return rel
+
+    if src.startswith('cloud:'):
+        ca_rel = src.split(':')[1]
+        ca_rel = ca_rel.split('-')[1].split('/')[0]
+        return ca_rel
+
+    # Best guess match based on deb string provided
+    if (src.startswith('deb') or
+            src.startswith('ppa') or
+            src.startswith('snap')):
+        for v in OPENSTACK_CODENAMES.values():
+            if v in src:
+                return v
+
+
+def get_os_version_install_source(src):
+    codename = get_os_codename_install_source(src)
+    return get_os_version_codename(codename)
+
+
+def get_os_codename_version(vers):
+    '''Determine OpenStack codename from version number.'''
+    try:
+        return OPENSTACK_CODENAMES[vers]
+    except KeyError:
+        e = 'Could not determine OpenStack codename for version %s' % vers
+        error_out(e)
+
+
+def get_os_version_codename(codename, version_map=OPENSTACK_CODENAMES):
+    '''Determine OpenStack version number from codename.'''
+    for k, v in six.iteritems(version_map):
+        if v == codename:
+            return k
+    e = 'Could not derive OpenStack version for '\
+        'codename: %s' % codename
+    error_out(e)
+
+
+def get_os_version_codename_swift(codename):
+    '''Determine OpenStack version number of swift from codename.'''
+    for k, v in six.iteritems(SWIFT_CODENAMES):
+        if k == codename:
+            return v[-1]
+    e = 'Could not derive swift version for '\
+        'codename: %s' % codename
+    error_out(e)
+
+
+def get_swift_codename(version):
+    '''Determine OpenStack codename that corresponds to swift version.'''
+    codenames = [k for k, v in six.iteritems(SWIFT_CODENAMES) if version in v]
+
+    if len(codenames) > 1:
+        # If more than one release codename contains this version we determine
+        # the actual codename based on the highest available install source.
+        for codename in reversed(codenames):
+            releases = UBUNTU_OPENSTACK_RELEASE
+            release = [k for k, v in six.iteritems(releases) if codename in v]
+            ret = subprocess.check_output(['apt-cache', 'policy', 'swift'])
+            if six.PY3:
+                ret = ret.decode('UTF-8')
+            if codename in ret or release[0] in ret:
+                return codename
+    elif len(codenames) == 1:
+        return codenames[0]
+
+    # NOTE: fallback - attempt to match with just major.minor version
+    match = re.match(r'^(\d+)\.(\d+)', version)
+    if match:
+        major_minor_version = match.group(0)
+        for codename, versions in six.iteritems(SWIFT_CODENAMES):
+            for release_version in versions:
+                if release_version.startswith(major_minor_version):
+                    return codename
+
+    return None
+
+
+def get_os_codename_package(package, fatal=True):
+    '''Derive OpenStack release codename from an installed package.'''
+
+    if snap_install_requested():
+        cmd = ['snap', 'list', package]
+        try:
+            out = subprocess.check_output(cmd)
+            if six.PY3:
+                out = out.decode('UTF-8')
+        except subprocess.CalledProcessError:
+            return None
+        lines = out.split('\n')
+        for line in lines:
+            if package in line:
+                # Second item in list is Version
+                return line.split()[1]
+
+    cache = apt_cache()
+
+    try:
+        pkg = cache[package]
+    except Exception:
+        if not fatal:
+            return None
+        # the package is unknown to the current apt cache.
+        e = 'Could not determine version of package with no installation '\
+            'candidate: %s' % package
+        error_out(e)
+
+    if not pkg.current_ver:
+        if not fatal:
+            return None
+        # package is known, but no version is currently installed.
+        e = 'Could not determine version of uninstalled package: %s' % package
+        error_out(e)
+
+    vers = apt.upstream_version(pkg.current_ver.ver_str)
+    if 'swift' in pkg.name:
+        # Full x.y.z match for swift versions
+        match = re.match(r'^(\d+)\.(\d+)\.(\d+)', vers)
+    else:
+        # x.y match only for 20XX.X
+        # and ignore patch level for other packages
+        match = re.match(r'^(\d+)\.(\d+)', vers)
+
+    if match:
+        vers = match.group(0)
+
+    # Generate a major version number for newer semantic
+    # versions of openstack projects
+    major_vers = vers.split('.')[0]
+    # >= Liberty independent project versions
+    if (package in PACKAGE_CODENAMES and
+            major_vers in PACKAGE_CODENAMES[package]):
+        return PACKAGE_CODENAMES[package][major_vers]
+    else:
+        # < Liberty co-ordinated project versions
+        try:
+            if 'swift' in pkg.name:
+                return get_swift_codename(vers)
+            else:
+                return OPENSTACK_CODENAMES[vers]
+        except KeyError:
+            if not fatal:
+                return None
+            e = 'Could not determine OpenStack codename for version %s' % vers
+            error_out(e)
+
+
+def get_os_version_package(pkg, fatal=True):
+    '''Derive OpenStack version number from an installed package.'''
+    codename = get_os_codename_package(pkg, fatal=fatal)
+
+    if not codename:
+        return None
+
+    if 'swift' in pkg:
+        vers_map = SWIFT_CODENAMES
+        for cname, version in six.iteritems(vers_map):
+            if cname == codename:
+                return version[-1]
+    else:
+        vers_map = OPENSTACK_CODENAMES
+        for version, cname in six.iteritems(vers_map):
+            if cname == codename:
+                return version
+    # e = "Could not determine OpenStack version for package: %s" % pkg
+    # error_out(e)
+
+
+# Module local cache variable for the os_release.
+_os_rel = None
+
+
+def reset_os_release():
+    '''Unset the cached os_release version'''
+    global _os_rel
+    _os_rel = None
+
+
+def os_release(package, base=None, reset_cache=False, source_key=None):
+    """Returns OpenStack release codename from a cached global.
+
+    If reset_cache then unset the cached os_release version and return the
+    freshly determined version.
+
+    If the codename can not be determined from either an installed package or
+    the installation source, the earliest release supported by the charm should
+    be returned.
+
+    :param package: Name of package to determine release from
+    :type package: str
+    :param base: Fallback codename if endeavours to determine from package fail
+    :type base: Optional[str]
+    :param reset_cache: Reset any cached codename value
+    :type reset_cache: bool
+    :param source_key: Name of source configuration option
+                       (default: 'openstack-origin')
+    :type source_key: Optional[str]
+    :returns: OpenStack release codename
+    :rtype: str
+    """
+    source_key = source_key or 'openstack-origin'
+    if not base:
+        base = UBUNTU_OPENSTACK_RELEASE[lsb_release()['DISTRIB_CODENAME']]
+    global _os_rel
+    if reset_cache:
+        reset_os_release()
+    if _os_rel:
+        return _os_rel
+    _os_rel = (
+        get_os_codename_package(package, fatal=False) or
+        get_os_codename_install_source(config(source_key)) or
+        base)
+    return _os_rel
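A typical charm-side pattern combines os_release() with CompareOpenStackReleases (defined earlier in this file) to gate release-specific behaviour; the package name and the guarded function are only examples:

    release = os_release('keystone', base='queens')
    if CompareOpenStackReleases(release) >= 'rocky':
        configure_new_feature()  # hypothetical charm function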
+ """ + try: + return fetch_import_key(keyid) + except GPGKeyError as e: + error_out("Could not import key: {}".format(str(e))) + + +def get_source_and_pgp_key(source_and_key): + """Look for a pgp key ID or ascii-armor key in the given input. + + :param source_and_key: Sting, "source_spec|keyid" where '|keyid' is + optional. + :returns (source_spec, key_id OR None) as a tuple. Returns None for key_id + if there was no '|' in the source_and_key string. + """ + try: + source, key = source_and_key.split('|', 2) + return source, key or None + except ValueError: + return source_and_key, None + + +@deprecate("use charmhelpers.fetch.add_source() instead.", + "2017-07", log=juju_log) +def configure_installation_source(source_plus_key): + """Configure an installation source. + + The functionality is provided by charmhelpers.fetch.add_source() + The difference between the two functions is that add_source() signature + requires the key to be passed directly, whereas this function passes an + optional key by appending '|' to the end of the source specificiation + 'source'. + + Another difference from add_source() is that the function calls sys.exit(1) + if the configuration fails, whereas add_source() raises + SourceConfigurationError(). Another difference, is that add_source() + silently fails (with a juju_log command) if there is no matching source to + configure, whereas this function fails with a sys.exit(1) + + :param source: String_plus_key -- see above for details. + + Note that the behaviour on error is to log the error to the juju log and + then call sys.exit(1). + """ + if source_plus_key.startswith('snap'): + # Do nothing for snap installs + return + # extract the key if there is one, denoted by a '|' in the rel + source, key = get_source_and_pgp_key(source_plus_key) + + # handle the ordinary sources via add_source + try: + fetch_add_source(source, key, fail_invalid=True) + except SourceConfigError as se: + error_out(str(se)) + + +def config_value_changed(option): + """ + Determine if config value changed since last call to this function. + """ + hook_data = unitdata.HookData() + with hook_data(): + db = unitdata.kv() + current = config(option) + saved = db.get(option) + db.set(option, current) + if saved is None: + return False + return current != saved + + +def get_endpoint_key(service_name, relation_id, unit_name): + """Return the key used to refer to an ep changed notification from a unit. + + :param service_name: Service name eg nova, neutron, placement etc + :type service_name: str + :param relation_id: The id of the relation the unit is on. + :type relation_id: str + :param unit_name: The name of the unit publishing the notification. + :type unit_name: str + :returns: The key used to refer to an ep changed notification from a unit + :rtype: str + """ + return '{}-{}-{}'.format( + service_name, + relation_id.replace(':', '_'), + unit_name.replace('/', '_')) + + +def get_endpoint_notifications(service_names, rel_name='identity-service'): + """Return all notifications for the given services. + + :param service_names: List of service name. + :type service_name: List + :param rel_name: Name of the relation to query + :type rel_name: str + :returns: A dict containing the source of the notification and its nonce. 
+
+
+def config_value_changed(option):
+    """
+    Determine if config value changed since last call to this function.
+    """
+    hook_data = unitdata.HookData()
+    with hook_data():
+        db = unitdata.kv()
+        current = config(option)
+        saved = db.get(option)
+        db.set(option, current)
+        if saved is None:
+            return False
+        return current != saved
+
+
+def get_endpoint_key(service_name, relation_id, unit_name):
+    """Return the key used to refer to an ep changed notification from a unit.
+
+    :param service_name: Service name eg nova, neutron, placement etc
+    :type service_name: str
+    :param relation_id: The id of the relation the unit is on.
+    :type relation_id: str
+    :param unit_name: The name of the unit publishing the notification.
+    :type unit_name: str
+    :returns: The key used to refer to an ep changed notification from a unit
+    :rtype: str
+    """
+    return '{}-{}-{}'.format(
+        service_name,
+        relation_id.replace(':', '_'),
+        unit_name.replace('/', '_'))
+
+
+def get_endpoint_notifications(service_names, rel_name='identity-service'):
+    """Return all notifications for the given services.
+
+    :param service_names: List of service names.
+    :type service_names: List
+    :param rel_name: Name of the relation to query
+    :type rel_name: str
+    :returns: A dict containing the source of the notification and its nonce.
+    :rtype: Dict[str, str]
+    """
+    notifications = {}
+    for rid in relation_ids(rel_name):
+        for unit in related_units(relid=rid):
+            ep_changed_json = relation_get(
+                rid=rid,
+                unit=unit,
+                attribute='ep_changed')
+            if ep_changed_json:
+                ep_changed = json.loads(ep_changed_json)
+                for service in service_names:
+                    if ep_changed.get(service):
+                        key = get_endpoint_key(service, rid, unit)
+                        notifications[key] = ep_changed[service]
+    return notifications
+
+
+def endpoint_changed(service_name, rel_name='identity-service'):
+    """Whether a new notification has been received for an endpoint.
+
+    :param service_name: Service name eg nova, neutron, placement etc
+    :type service_name: str
+    :param rel_name: Name of the relation to query
+    :type rel_name: str
+    :returns: Whether endpoint has changed
+    :rtype: bool
+    """
+    changed = False
+    with unitdata.HookData()() as t:
+        db = t[0]
+        notifications = get_endpoint_notifications(
+            [service_name],
+            rel_name=rel_name)
+        for key, nonce in notifications.items():
+            if db.get(key) != nonce:
+                juju_log(('New endpoint change notification found: '
+                          '{}={}').format(key, nonce),
+                         'INFO')
+                changed = True
+                break
+    return changed
+
+
+def save_endpoint_changed_triggers(service_names, rel_name='identity-service'):
+    """Save the endpoint triggers in db so it can be tracked if they changed.
+
+    :param service_names: List of service names.
+    :type service_names: List
+    :param rel_name: Name of the relation to query
+    :type rel_name: str
+    """
+    with unitdata.HookData()() as t:
+        db = t[0]
+        notifications = get_endpoint_notifications(
+            service_names,
+            rel_name=rel_name)
+        for key, nonce in notifications.items():
+            db.set(key, nonce)
+
+
+def save_script_rc(script_path="scripts/scriptrc", **env_vars):
+    """
+    Write an rc file in the charm-delivered directory containing
+    exported environment variables provided by env_vars. Any charm scripts run
+    outside the juju hook environment can source this scriptrc to obtain
+    updated config information necessary to perform health checks or
+    service changes.
+    """
+    juju_rc_path = "%s/%s" % (charm_dir(), script_path)
+    if not os.path.exists(os.path.dirname(juju_rc_path)):
+        os.mkdir(os.path.dirname(juju_rc_path))
+    with open(juju_rc_path, 'wt') as rc_script:
+        rc_script.write(
+            "#!/bin/bash\n")
+        [rc_script.write('export %s=%s\n' % (u, p))
+         for u, p in six.iteritems(env_vars) if u != "script_path"]
+
+
+def openstack_upgrade_available(package):
+    """
+    Determines if an OpenStack upgrade is available from installation
+    source, based on version of installed package.
+
+    :param package: str: Name of installed package.
+
+    :returns: bool: Returns True if configured installation source offers
+        a newer version of package.
+    """
+
+    src = config('openstack-origin')
+    cur_vers = get_os_version_package(package)
+    if not cur_vers:
+        # The package has not been installed yet, so do not attempt upgrade
+        return False
+    if "swift" in package:
+        codename = get_os_codename_install_source(src)
+        avail_vers = get_os_version_codename_swift(codename)
+    else:
+        try:
+            avail_vers = get_os_version_install_source(src)
+        except Exception:
+            avail_vers = cur_vers
+    apt.init()
+    return apt.version_compare(avail_vers, cur_vers) >= 1
+
+
+def ensure_block_device(block_device):
+    '''
+    Confirm block_device, create as loopback if necessary.
+
+    :param block_device: str: Full path of block device to ensure.
+
+    :returns: str: Full path of ensured block device.
+    '''
+    _none = ['None', 'none', None]
+    if (block_device in _none):
+        error_out('prepare_storage(): Missing required input: block_device=%s.'
+                  % block_device)
+
+    if block_device.startswith('/dev/'):
+        bdev = block_device
+    elif block_device.startswith('/'):
+        _bd = block_device.split('|')
+        if len(_bd) == 2:
+            bdev, size = _bd
+        else:
+            bdev = block_device
+            size = DEFAULT_LOOPBACK_SIZE
+        bdev = ensure_loopback_device(bdev, size)
+    else:
+        bdev = '/dev/%s' % block_device
+
+    if not is_block_device(bdev):
+        error_out('Failed to locate valid block device at %s' % bdev)
+
+    return bdev
+
+
+def clean_storage(block_device):
+    '''
+    Ensures a block device is clean. That is:
+        - unmounted
+        - any lvm volume groups are deactivated
+        - any lvm physical device signatures removed
+        - partition table wiped
+
+    :param block_device: str: Full path to block device to clean.
+    '''
+    for mp, d in mounts():
+        if d == block_device:
+            juju_log('clean_storage(): %s is mounted @ %s, unmounting.' %
+                     (d, mp), level=INFO)
+            umount(mp, persist=True)
+
+    if is_lvm_physical_volume(block_device):
+        deactivate_lvm_volume_group(block_device)
+        remove_lvm_physical_volume(block_device)
+    else:
+        zap_disk(block_device)
+
+
+is_ip = ip.is_ip
+ns_query = ip.ns_query
+get_host_ip = ip.get_host_ip
+get_hostname = ip.get_hostname
+
+
+def get_matchmaker_map(mm_file='/etc/oslo/matchmaker_ring.json'):
+    mm_map = {}
+    if os.path.isfile(mm_file):
+        with open(mm_file, 'r') as f:
+            mm_map = json.load(f)
+    return mm_map
+
+
+def sync_db_with_multi_ipv6_addresses(database, database_user,
+                                      relation_prefix=None):
+    hosts = get_ipv6_addr(dynamic_only=False)
+
+    if config('vip'):
+        vips = config('vip').split()
+        for vip in vips:
+            if vip and is_ipv6(vip):
+                hosts.append(vip)
+
+    kwargs = {'database': database,
+              'username': database_user,
+              'hostname': json.dumps(hosts)}
+
+    if relation_prefix:
+        for key in list(kwargs.keys()):
+            kwargs["%s_%s" % (relation_prefix, key)] = kwargs[key]
+            del kwargs[key]
+
+    for rid in relation_ids('shared-db'):
+        relation_set(relation_id=rid, **kwargs)
+
+
+def os_requires_version(ostack_release, pkg):
+    """
+    Decorator for hook to specify minimum supported release
+    """
+    def wrap(f):
+        @wraps(f)
+        def wrapped_f(*args):
+            if os_release(pkg) < ostack_release:
+                raise Exception("This hook is not supported on releases"
+                                " before %s" % ostack_release)
+            f(*args)
+        return wrapped_f
+    return wrap
+
+
+def os_workload_status(configs, required_interfaces, charm_func=None):
+    """
+    Decorator to set workload status based on complete contexts
+    """
+    def wrap(f):
+        @wraps(f)
+        def wrapped_f(*args, **kwargs):
+            # Run the original function first
+            f(*args, **kwargs)
+            # Set workload status now that contexts have been
+            # acted on
+            set_os_workload_status(configs, required_interfaces, charm_func)
+        return wrapped_f
+    return wrap
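Both decorators are applied to hook functions; a sketch of typical usage, where CONFIGS, REQUIRED_INTERFACES and the hook body are placeholders:

    @os_workload_status(CONFIGS, REQUIRED_INTERFACES)
    @os_requires_version('icehouse', 'nova-common')
    def config_changed():
        ...  # hook logic runs first, then workload status is set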
+
+
+def set_os_workload_status(configs, required_interfaces, charm_func=None,
+                           services=None, ports=None):
+    """Set the state of the workload status for the charm.
+
+    This calls _determine_os_workload_status() to get the new state, message
+    and sets the status using status_set()
+
+    @param configs: a templating.OSConfigRenderer() object
+    @param required_interfaces: {generic: [specific, specific2, ...]}
+    @param charm_func: a callable function that returns state, message. The
+        signature is charm_func(configs) -> (state, message)
+    @param services: list of strings OR dictionary specifying services/ports
+    @param ports: OPTIONAL list of port numbers.
+    @returns state, message: the new workload status, user message
+    """
+    state, message = _determine_os_workload_status(
+        configs, required_interfaces, charm_func, services, ports)
+    status_set(state, message)
+
+
+def _determine_os_workload_status(
+        configs, required_interfaces, charm_func=None,
+        services=None, ports=None):
+    """Determine the state of the workload status for the charm.
+
+    This function returns the new workload status for the charm based
+    on the state of the interfaces, the paused state and whether the
+    services are actually running and any specified ports are open.
+
+    This checks:
+
+    1. if the unit should be paused, that it is actually paused.  If so the
+       state is 'maintenance' + message, else 'blocked'.
+    2. that the interfaces/relations are complete.  If they are not then
+       it sets the state to either 'blocked' or 'waiting' and an appropriate
+       message.
+    3. If all the relation data is set, then it checks that the actual
+       services really are running.  If not it sets the state to 'blocked'.
+
+    If everything is okay then the state returns 'active'.
+
+    @param configs: a templating.OSConfigRenderer() object
+    @param required_interfaces: {generic: [specific, specific2, ...]}
+    @param charm_func: a callable function that returns state, message. The
+        signature is charm_func(configs) -> (state, message)
+    @param services: list of strings OR dictionary specifying services/ports
+    @param ports: OPTIONAL list of port numbers.
+    @returns state, message: the new workload status, user message
+    """
+    state, message = _ows_check_if_paused(services, ports)
+
+    if state is None:
+        state, message = _ows_check_generic_interfaces(
+            configs, required_interfaces)
+
+    if state != 'maintenance' and charm_func:
+        # _ows_check_charm_func() may modify the state, message
+        state, message = _ows_check_charm_func(
+            state, message, lambda: charm_func(configs))
+
+    if state is None:
+        state, message = _ows_check_services_running(services, ports)
+
+    if state is None:
+        state = 'active'
+        message = "Unit is ready"
+        juju_log(message, 'INFO')
+
+    try:
+        if config(POLICYD_CONFIG_NAME):
+            message = "{} {}".format(policyd_status_message_prefix(), message)
+    except Exception:
+        pass
+
+    return state, message
+
+
+def _ows_check_if_paused(services=None, ports=None):
+    """Check if the unit is supposed to be paused, and if so check that the
+    services/ports (if passed) are actually stopped/not being listened to.
+
+    If the unit isn't supposed to be paused, just return None, None
+
+    If the unit is performing a series upgrade, return a message indicating
+    this.
+
+    @param services: OPTIONAL services spec or list of service names.
+    @param ports: OPTIONAL list of port numbers.
+    @returns state, message or None, None
+    """
+    if is_unit_upgrading_set():
+        state, message = check_actually_paused(services=services,
+                                               ports=ports)
+        if state is None:
+            # we're paused okay, so set blocked (ready for series upgrade)
+            # and return
+            state = "blocked"
+            message = ("Ready for do-release-upgrade and reboot. "
+                       "Set complete when finished.")
+        return state, message
+
+    if is_unit_paused_set():
+        state, message = check_actually_paused(services=services,
+                                               ports=ports)
+        if state is None:
+            # we're paused okay, so set maintenance and return
+            state = "maintenance"
+            message = "Paused. Use 'resume' action to resume normal service."
+        return state, message
+    return None, None
+
+
+def _ows_check_generic_interfaces(configs, required_interfaces):
+    """Check the complete contexts to determine the workload status.
+
+     - Checks for missing or incomplete contexts
+     - juju log details of missing required data.
+     - determines the correct workload status
+     - creates an appropriate message for status_set(...)
+
+    if there are no problems then the function returns None, None
+
+    @param configs: a templating.OSConfigRenderer() object
+    @params required_interfaces: {generic_interface: [specific_interface], }
+    @returns state, message or None, None
+    """
+    incomplete_rel_data = incomplete_relation_data(configs,
+                                                   required_interfaces)
+    state = None
+    message = None
+    missing_relations = set()
+    incomplete_relations = set()
+
+    for generic_interface, relations_states in incomplete_rel_data.items():
+        related_interface = None
+        missing_data = {}
+        # Related or not?
+        for interface, relation_state in relations_states.items():
+            if relation_state.get('related'):
+                related_interface = interface
+                missing_data = relation_state.get('missing_data')
+                break
+        # No relation ID for the generic_interface?
+        if not related_interface:
+            juju_log("{} relation is missing and must be related for "
+                     "functionality. ".format(generic_interface), 'WARN')
+            state = 'blocked'
+            missing_relations.add(generic_interface)
+        else:
+            # Relation ID exists but no related unit
+            if not missing_data:
+                # Edge case - relation ID exists but unit is departing
+                _hook_name = hook_name()
+                if (('departed' in _hook_name or 'broken' in _hook_name) and
+                        related_interface in _hook_name):
+                    state = 'blocked'
+                    missing_relations.add(generic_interface)
+                    juju_log("{} relation's interface, {}, "
+                             "relationship is departed or broken "
+                             "and is required for functionality."
+                             "".format(generic_interface, related_interface),
+                             "WARN")
+                # Normal case relation ID exists but no related unit
+                # (joining)
+                else:
+                    juju_log("{} relation's interface, {}, is related but has"
+                             " no units in the relation."
+                             "".format(generic_interface, related_interface),
+                             "INFO")
+            # Related unit exists and data missing on the relation
+            else:
+                juju_log("{} relation's interface, {}, is related, awaiting "
+                         "the following data from the relationship: {}. "
+                         "".format(generic_interface, related_interface,
+                                   ", ".join(missing_data)), "INFO")
+            if state != 'blocked':
+                state = 'waiting'
+            if generic_interface not in missing_relations:
+                incomplete_relations.add(generic_interface)
+
+    if missing_relations:
+        message = "Missing relations: {}".format(", ".join(missing_relations))
+        if incomplete_relations:
+            message += "; incomplete relations: {}" \
+                       "".format(", ".join(incomplete_relations))
+        state = 'blocked'
+    elif incomplete_relations:
+        message = "Incomplete relations: {}" \
+                  "".format(", ".join(incomplete_relations))
+        state = 'waiting'
+
+    return state, message
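For reference, the required_interfaces mapping consumed here (and by incomplete_relation_data() further below) has this shape; the names are examples only:

    REQUIRED_INTERFACES = {
        'database': ['shared-db', 'pgsql-db'],
        'messaging': ['amqp'],
        'identity': ['identity-service'],
    }
    # With nothing related on 'identity-service', the check would yield
    # ('blocked', 'Missing relations: identity').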
+ """ + if charm_func_with_configs: + charm_state, charm_message = charm_func_with_configs() + if (charm_state != 'active' and + charm_state != 'unknown' and + charm_state is not None): + state = workload_state_compare(state, charm_state) + if message: + charm_message = charm_message.replace("Incomplete relations: ", + "") + message = "{}, {}".format(message, charm_message) + else: + message = charm_message + return state, message + + +def _ows_check_services_running(services, ports): + """Check that the services that should be running are actually running + and that any ports specified are being listened to. + + @param services: list of strings OR dictionary specifying services/ports + @param ports: list of ports + @returns state, message: strings or None, None + """ + messages = [] + state = None + if services is not None: + services = _extract_services_list_helper(services) + services_running, running = _check_running_services(services) + if not all(running): + messages.append( + "Services not running that should be: {}" + .format(", ".join(_filter_tuples(services_running, False)))) + state = 'blocked' + # also verify that the ports that should be open are open + # NB, that ServiceManager objects only OPTIONALLY have ports + map_not_open, ports_open = ( + _check_listening_on_services_ports(services)) + if not all(ports_open): + # find which service has missing ports. They are in service + # order which makes it a bit easier. + message_parts = {service: ", ".join([str(v) for v in open_ports]) + for service, open_ports in map_not_open.items()} + message = ", ".join( + ["{}: [{}]".format(s, sp) for s, sp in message_parts.items()]) + messages.append( + "Services with ports not open that should be: {}" + .format(message)) + state = 'blocked' + + if ports is not None: + # and we can also check ports which we don't know the service for + ports_open, ports_open_bools = _check_listening_on_ports_list(ports) + if not all(ports_open_bools): + messages.append( + "Ports which should be open, but are not: {}" + .format(", ".join([str(p) for p, v in ports_open + if not v]))) + state = 'blocked' + + if state is not None: + message = "; ".join(messages) + return state, message + + return None, None + + +def _extract_services_list_helper(services): + """Extract a OrderedDict of {service: [ports]} of the supplied services + for use by the other functions. + + The services object can either be: + - None : no services were passed (an empty dict is returned) + - a list of strings + - A dictionary (optionally OrderedDict) {service_name: {'service': ..}} + - An array of [{'service': service_name, ...}, ...] + + @param services: see above + @returns OrderedDict(service: [ports], ...) + """ + if services is None: + return {} + if isinstance(services, dict): + services = services.values() + # either extract the list of services from the dictionary, or if + # it is a simple string, use that. i.e. works with mixed lists. + _s = OrderedDict() + for s in services: + if isinstance(s, dict) and 'service' in s: + _s[s['service']] = s.get('ports', []) + if isinstance(s, str): + _s[s] = [] + return _s + + +def _check_running_services(services): + """Check that the services dict provided is actually running and provide + a list of (service, boolean) tuples for each service. + + Returns both a zipped list of (service, boolean) and a list of booleans + in the same order as the services. + + @param services: OrderedDict of strings: [ports], one for each service to + check. 
+
+
+def _check_running_services(services):
+    """Check that the services dict provided is actually running and provide
+    a list of (service, boolean) tuples for each service.
+
+    Returns both a zipped list of (service, boolean) and a list of booleans
+    in the same order as the services.
+
+    @param services: OrderedDict of strings: [ports], one for each service to
+        check.
+    @returns [(service, boolean), ...], : results for checks
+             [boolean]                  : just the result of the service checks
+    """
+    services_running = [service_running(s) for s in services]
+    return list(zip(services, services_running)), services_running
+
+
+def _check_listening_on_services_ports(services, test=False):
+    """Check that the unit is actually listening (has the port open) on the
+    ports that the service specifies are open. If test is True then the
+    function returns the services with ports that are open rather than
+    closed.
+
+    Returns an OrderedDict of service: ports and a list of booleans
+
+    @param services: OrderedDict(service: [port, ...], ...)
+    @param test: default=False, if False, test for closed, otherwise open.
+    @returns OrderedDict(service: [port-not-open, ...]...), [boolean]
+    """
+    test = bool(test)  # ensure test is True or False
+    all_ports = list(itertools.chain(*services.values()))
+    ports_states = [port_has_listener('0.0.0.0', p) for p in all_ports]
+    map_ports = OrderedDict()
+    matched_ports = [p for p, opened in zip(all_ports, ports_states)
+                     if opened == test]  # essentially opened xor test
+    for service, ports in services.items():
+        set_ports = set(ports).intersection(matched_ports)
+        if set_ports:
+            map_ports[service] = set_ports
+    return map_ports, ports_states
+
+
+def _check_listening_on_ports_list(ports):
+    """Check that the ports list given are being listened to
+
+    Returns a list of ports being listened to and a list of the
+    booleans.
+
+    @param ports: LIST of port numbers.
+    @returns [(port_num, boolean), ...], [boolean]
+    """
+    ports_open = [port_has_listener('0.0.0.0', p) for p in ports]
+    return zip(ports, ports_open), ports_open
+
+
+def _filter_tuples(services_states, state):
+    """Return a simple list from a list of tuples according to the condition
+
+    @param services_states: LIST of (string, boolean): service and running
+        state.
+    @param state: Boolean to match the tuple against.
+    @returns [LIST of strings] that matched the tuple RHS.
+    """
+    return [s for s, b in services_states if b == state]
+
+
+def workload_state_compare(current_workload_state, workload_state):
+    """ Return highest priority of two states"""
+    hierarchy = {'unknown': -1,
+                 'active': 0,
+                 'maintenance': 1,
+                 'waiting': 2,
+                 'blocked': 3,
+                 }
+
+    if hierarchy.get(workload_state) is None:
+        workload_state = 'unknown'
+    if hierarchy.get(current_workload_state) is None:
+        current_workload_state = 'unknown'
+
+    # Set workload_state based on hierarchy of statuses
+    if hierarchy.get(current_workload_state) > hierarchy.get(workload_state):
+        return current_workload_state
+    else:
+        return workload_state
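workload_state_compare() resolves to whichever state sits higher in that hierarchy, mapping unrecognised inputs to 'unknown':

    workload_state_compare('waiting', 'blocked')     # -> 'blocked'
    workload_state_compare('active', 'maintenance')  # -> 'maintenance'
    workload_state_compare('bogus', 'active')        # -> 'active'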
+
+
+def incomplete_relation_data(configs, required_interfaces):
+    """Check complete contexts against required_interfaces
+    Return dictionary of incomplete relation data.
+
+    configs is an OSConfigRenderer object with configs registered
+
+    required_interfaces is a dictionary of required general interfaces
+    with dictionary values of possible specific interfaces.
+    Example:
+        required_interfaces = {'database': ['shared-db', 'pgsql-db']}
+
+    The interface is said to be satisfied if any one of the interfaces in the
+    list has a complete context.
+
+    Return dictionary of incomplete or missing required contexts with relation
+    status of interfaces and any missing data points. Example:
+        {'message':
+             {'amqp': {'missing_data': ['rabbitmq_password'], 'related': True},
+              'zeromq-configuration': {'related': False}},
+         'identity':
+             {'identity-service': {'related': False}},
+         'database':
+             {'pgsql-db': {'related': False},
+              'shared-db': {'related': True}}}
+    """
+    complete_ctxts = configs.complete_contexts()
+    incomplete_relations = [
+        svc_type
+        for svc_type, interfaces in required_interfaces.items()
+        if not set(interfaces).intersection(complete_ctxts)]
+    return {
+        i: configs.get_incomplete_context_data(required_interfaces[i])
+        for i in incomplete_relations}
+
+
+def do_action_openstack_upgrade(package, upgrade_callback, configs):
+    """Perform action-managed OpenStack upgrade.
+
+    Upgrades packages to the configured openstack-origin version and sets
+    the corresponding action status as a result.
+
+    If the charm was installed from source we cannot upgrade it.
+    For backwards compatibility a config flag (action-managed-upgrade) must
+    be set for this code to run, otherwise a full service level upgrade will
+    fire on config-changed.
+
+    @param package: package name for determining if upgrade available
+    @param upgrade_callback: function callback to charm's upgrade function
+    @param configs: templating object derived from OSConfigRenderer class
+
+    @return: True if upgrade successful; False if upgrade failed or skipped
+    """
+    ret = False
+
+    if openstack_upgrade_available(package):
+        if config('action-managed-upgrade'):
+            juju_log('Upgrading OpenStack release')
+
+            try:
+                upgrade_callback(configs=configs)
+                action_set({'outcome': 'success, upgrade completed.'})
+                ret = True
+            except Exception:
+                action_set({'outcome': 'upgrade failed, see traceback.'})
+                action_set({'traceback': traceback.format_exc()})
+                action_fail('do_openstack_upgrade resulted in an '
+                            'unexpected error')
+        else:
+            action_set({'outcome': 'action-managed-upgrade config is '
+                                   'False, skipped upgrade.'})
+    else:
+        action_set({'outcome': 'no upgrade available.'})
+
+    return ret
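A sketch of how a charm action usually wires this up; the upgrade callback and CONFIGS renderer are placeholders for charm-specific objects:

    def openstack_upgrade():
        # Handler for a hypothetical 'openstack-upgrade' action.
        do_action_openstack_upgrade('nova-common',
                                    do_openstack_upgrade,  # charm's callback
                                    CONFIGS)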
+
+
+def remote_restart(rel_name, remote_service=None):
+    trigger = {
+        'restart-trigger': str(uuid.uuid4()),
+    }
+    if remote_service:
+        trigger['remote-service'] = remote_service
+    for rid in relation_ids(rel_name):
+        # This subordinate can be related to two separate services using
+        # different subordinate relations so only issue the restart if
+        # the principal is connected down the relation we think it is
+        if related_units(relid=rid):
+            relation_set(relation_id=rid,
+                         relation_settings=trigger,
+                         )
+
+
+def check_actually_paused(services=None, ports=None):
+    """Check that services listed in the services object and ports
+    are actually closed (not listened to), to verify that the unit is
+    properly paused.
+
+    @param services: See _extract_services_list_helper
+    @returns status,  : string for status (None if okay)
+             message  : string for problem for status_set
+    """
+    state = None
+    message = None
+    messages = []
+    if services is not None:
+        services = _extract_services_list_helper(services)
+        services_running, services_states = _check_running_services(services)
+        if any(services_states):
+            # there shouldn't be any running so this is a problem
+            messages.append("these services running: {}"
+                            .format(", ".join(
+                                _filter_tuples(services_running, True))))
+            state = "blocked"
+        ports_open, ports_open_bools = (
+            _check_listening_on_services_ports(services, True))
+        if any(ports_open_bools):
+            message_parts = {service: ", ".join([str(v) for v in open_ports])
+                             for service, open_ports in ports_open.items()}
+            message = ", ".join(
+                ["{}: [{}]".format(s, sp) for s, sp in message_parts.items()])
+            messages.append(
+                "these service:ports are open: {}".format(message))
+            state = 'blocked'
+    if ports is not None:
+        ports_open, bools = _check_listening_on_ports_list(ports)
+        if any(bools):
+            messages.append(
+                "these ports which should be closed, but are open: {}"
+                .format(", ".join([str(p) for p, v in ports_open if v])))
+            state = 'blocked'
+    if messages:
+        message = ("Services should be paused but {}"
+                   .format(", ".join(messages)))
+    return state, message
+
+
+def set_unit_paused():
+    """Set the unit to a paused state in the local kv() store.
+    This does NOT actually pause the unit
+    """
+    with unitdata.HookData()() as t:
+        kv = t[0]
+        kv.set('unit-paused', True)
+
+
+def clear_unit_paused():
+    """Clear the unit from a paused state in the local kv() store
+    This does NOT actually restart any services - it only clears the
+    local state.
+    """
+    with unitdata.HookData()() as t:
+        kv = t[0]
+        kv.set('unit-paused', False)
+
+
+def is_unit_paused_set():
+    """Return the state of the kv().get('unit-paused').
+    This does NOT verify that the unit really is paused.
+
+    To help with units that don't have HookData() (testing)
+    if it raises an exception, return False
+    """
+    try:
+        with unitdata.HookData()() as t:
+            kv = t[0]
+            # transform something truth-y into a Boolean.
+            return not(not(kv.get('unit-paused')))
+    except Exception:
+        return False
+
+
+def manage_payload_services(action, services=None, charm_func=None):
+    """Run an action against all services.
+
+    An optional charm_func() can be called. It should raise an Exception to
+    indicate that the function failed. If it was successful it should return
+    None or an optional message.
+
+    The signature for charm_func is:
+    charm_func() -> message: str
+
+    charm_func() is executed after any services are stopped, if supplied.
+
+    The services object can either be:
+      - None : no services were passed (an empty dict is returned)
+      - a list of strings
+      - A dictionary (optionally OrderedDict) {service_name: {'service': ..}}
+      - An array of [{'service': service_name, ...}, ...]
+
+    :param action: Action to run: pause, resume, start or stop.
+    :type action: str
+    :param services: See above
+    :type services: See above
+    :param charm_func: function to run for custom charm pausing.
+    :type charm_func: f()
+    :returns: Status boolean and list of messages
+    :rtype: (bool, [])
+    :raises: RuntimeError
+    """
+    actions = {
+        'pause': service_pause,
+        'resume': service_resume,
+        'start': service_start,
+        'stop': service_stop}
+    action = action.lower()
+    if action not in actions.keys():
+        raise RuntimeError(
+            "action: {} must be one of: {}".format(action,
+                                                   ', '.join(actions.keys())))
+    services = _extract_services_list_helper(services)
+    messages = []
+    success = True
+    if services:
+        for service in services.keys():
+            rc = actions[action](service)
+            if not rc:
+                success = False
+                messages.append("{} didn't {} cleanly.".format(service,
+                                                               action))
+    if charm_func:
+        try:
+            message = charm_func()
+            if message:
+                messages.append(message)
+        except Exception as e:
+            success = False
+            messages.append(str(e))
+    return success, messages
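manage_payload_services() in isolation; service names are examples:

    success, messages = manage_payload_services(
        'stop', services=['apache2', 'haproxy'])
    if not success:
        raise Exception('; '.join(messages))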
+    """
+    _, messages = manage_payload_services(
+        'resume',
+        services=services,
+        charm_func=charm_func)
+    clear_unit_paused()
+    if assess_status_func:
+        message = assess_status_func()
+        if message:
+            messages.append(message)
+    if messages:
+        raise Exception("Couldn't resume: {}".format("; ".join(messages)))
+
+
+def make_assess_status_func(*args, **kwargs):
+    """Creates an assess_status_func() suitable for handing to pause_unit()
+    and resume_unit().
+
+    This uses the _determine_os_workload_status(...) function to determine
+    what the workload_status should be for the unit. If the unit is
+    not in maintenance or active states, then the message is returned to
+    the caller. This is so an action that doesn't result in either a
+    complete pause or complete resume can signal failure with an action_fail()
+    """
+    def _assess_status_func():
+        state, message = _determine_os_workload_status(*args, **kwargs)
+        status_set(state, message)
+        if state not in ['maintenance', 'active']:
+            return message
+        return None
+
+    return _assess_status_func
+
+
+def pausable_restart_on_change(restart_map, stopstart=False,
+                               restart_functions=None):
+    """A restart_on_change decorator that checks to see if the unit is
+    paused. If it is paused then the decorated function doesn't fire.
+
+    This is provided as a helper, as the @restart_on_change(...) decorator
+    is in core.host, yet the openstack specific helpers are in this file
+    (contrib.openstack.utils). Thus, this needs to be an optional feature
+    for openstack charms (or charms that wish to use the openstack
+    pause/resume type features).
+
+    It is used as follows:
+
+        from contrib.openstack.utils import (
+            pausable_restart_on_change as restart_on_change)
+
+        @restart_on_change(restart_map, stopstart=<boolean>)
+        def some_hook(...):
+            pass
+
+    see core.host.restart_on_change() for more details.
+
+    Note restart_map can be a callable, in which case, restart_map is only
+    evaluated at runtime. This means that it is lazy and the underlying
+    function won't be called if the decorated function is never called. Note,
+    retains backwards compatibility for passing a non-callable dictionary.
+
+    @param restart_map: (optionally callable, which then returns the
+        restart_map) the restart map {conf_file: [services]}
+    @param stopstart: DEFAULT false; whether to stop, start or just restart
+    @returns decorator to use a restart_on_change with pausability
+    """
+    def wrap(f):
+        # py27 compatible nonlocal variable. When py3 only, replace with
+        # nonlocal keyword
+        __restart_map_cache = {'cache': None}
+
+        @functools.wraps(f)
+        def wrapped_f(*args, **kwargs):
+            if is_unit_paused_set():
+                return f(*args, **kwargs)
+            if __restart_map_cache['cache'] is None:
+                __restart_map_cache['cache'] = restart_map() \
+                    if callable(restart_map) else restart_map
+            # otherwise, normal restart_on_change functionality
+            return restart_on_change_helper(
+                (lambda: f(*args, **kwargs)), __restart_map_cache['cache'],
+                stopstart, restart_functions)
+        return wrapped_f
+    return wrap
+
+
+def ordered(orderme):
+    """Converts the provided dictionary into a collections.OrderedDict.
+
+    The items in the returned OrderedDict will be inserted based on the
+    natural sort order of the keys. Nested dictionaries will also be sorted
+    in order to ensure fully predictable ordering.
+
+    :param orderme: the dict to order
+    :return: collections.OrderedDict
+    :raises: ValueError: if `orderme` isn't a dict instance.
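+
+    Example (illustrative):
+
+        >>> ordered({'b': 1, 'a': {'y': 2, 'x': 3}})
+        OrderedDict([('a', OrderedDict([('x', 3), ('y', 2)])), ('b', 1)])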
+    """
+    if not isinstance(orderme, dict):
+        raise ValueError('argument must be a dict type')
+
+    result = OrderedDict()
+    for k, v in sorted(six.iteritems(orderme), key=lambda x: x[0]):
+        if isinstance(v, dict):
+            result[k] = ordered(v)
+        else:
+            result[k] = v
+
+    return result
+
+
+def config_flags_parser(config_flags):
+    """Parses config flags string into dict.
+
+    This parsing method supports a few different formats for the config
+    flag values to be parsed:
+
+    1. A string in the simple format of key=value pairs, with the possibility
+       of specifying multiple key value pairs within the same string. For
+       example, a string in the format of 'key1=value1, key2=value2' will
+       return a dict of:
+
+           {'key1': 'value1', 'key2': 'value2'}.
+
+    2. A string in the above format, but supporting a comma-delimited list
+       of values for the same key. For example, a string in the format of
+       'key1=value1, key2=value3,value4,value5' will return a dict of:
+
+           {'key1': 'value1', 'key2': 'value3,value4,value5'}
+
+    3. A string containing a colon character (:) prior to an equal
+       character (=) will be treated as yaml and parsed as such. This can be
+       used to specify more complex key value pairs. For example,
+       a string in the format of 'key1: subkey1=value1, subkey2=value2' will
+       return a dict of:
+
+           {'key1': 'subkey1=value1, subkey2=value2'}
+
+    The provided config_flags string may be a list of comma-separated values
+    which themselves may be comma-separated lists of values.
+    """
+    # If we find a colon before an equals sign then treat it as yaml.
+    # Note: limit it to finding the colon first since this indicates assignment
+    # for inline yaml.
+    colon = config_flags.find(':')
+    equals = config_flags.find('=')
+    if colon > 0:
+        if colon < equals or equals < 0:
+            return ordered(yaml.safe_load(config_flags))
+
+    if config_flags.find('==') >= 0:
+        juju_log("config_flags is not in expected format (key=value)",
+                 level=ERROR)
+        raise OSContextError
+
+    # strip the following from each value.
+    post_strippers = ' ,'
+    # we strip any leading/trailing '=' or ' ' from the string then
+    # split on '='.
+    split = config_flags.strip(' =').split('=')
+    limit = len(split)
+    flags = OrderedDict()
+    for i in range(0, limit - 1):
+        current = split[i]
+        next = split[i + 1]
+        vindex = next.rfind(',')
+        if (i == limit - 2) or (vindex < 0):
+            value = next
+        else:
+            value = next[:vindex]
+
+        if i == 0:
+            key = current
+        else:
+            # if this is not the first entry, expect an embedded key.
+            index = current.rfind(',')
+            if index < 0:
+                juju_log("Invalid config value(s) at index %s" % (i),
+                         level=ERROR)
+                raise OSContextError
+            key = current[index + 1:]
+
+        # Add to collection.
+        flags[key.strip(post_strippers)] = value.rstrip(post_strippers)
+
+    return flags
+
+
+def os_application_version_set(package):
+    '''Set version of application for Juju 2.0 and later'''
+    application_version = get_upstream_version(package)
+    # NOTE(jamespage) if not able to figure out package version, fallback to
+    # openstack codename version detection.
+    if not application_version:
+        application_version_set(os_release(package))
+    else:
+        application_version_set(application_version)
+
+
+def os_application_status_set(check_function):
+    """Run the supplied function and set the application status accordingly.
+
+    :param check_function: Function to run to get app states and messages.
+ :type check_function: function + """ + state, message = check_function() + status_set(state, message, application=True) + + +def enable_memcache(source=None, release=None, package=None): + """Determine if memcache should be enabled on the local unit + + @param release: release of OpenStack currently deployed + @param package: package to derive OpenStack version deployed + @returns boolean Whether memcache should be enabled + """ + _release = None + if release: + _release = release + else: + _release = os_release(package) + if not _release: + _release = get_os_codename_install_source(source) + + return CompareOpenStackReleases(_release) >= 'mitaka' + + +def token_cache_pkgs(source=None, release=None): + """Determine additional packages needed for token caching + + @param source: source string for charm + @param release: release of OpenStack currently deployed + @returns List of package to enable token caching + """ + packages = [] + if enable_memcache(source=source, release=release): + packages.extend(['memcached', 'python-memcache']) + return packages + + +def update_json_file(filename, items): + """Updates the json `filename` with a given dict. + :param filename: path to json file (e.g. /etc/glance/policy.json) + :param items: dict of items to update + """ + if not items: + return + + with open(filename) as fd: + policy = json.load(fd) + + # Compare before and after and if nothing has changed don't write the file + # since that could cause unnecessary service restarts. + before = json.dumps(policy, indent=4, sort_keys=True) + policy.update(items) + after = json.dumps(policy, indent=4, sort_keys=True) + if before == after: + return + + with open(filename, "w") as fd: + fd.write(after) + + +@cached +def snap_install_requested(): + """ Determine if installing from snaps + + If openstack-origin is of the form snap:track/channel[/branch] + and channel is in SNAPS_CHANNELS return True. + """ + origin = config('openstack-origin') or "" + if not origin.startswith('snap:'): + return False + + _src = origin[5:] + if '/' in _src: + channel = _src.split('/')[1] + else: + # Handle snap:track with no channel + channel = 'stable' + return valid_snap_channel(channel) + + +def get_snaps_install_info_from_origin(snaps, src, mode='classic'): + """Generate a dictionary of snap install information from origin + + @param snaps: List of snaps + @param src: String of openstack-origin or source of the form + snap:track/channel + @param mode: String classic, devmode or jailmode + @returns: Dictionary of snaps with channels and modes + """ + + if not src.startswith('snap:'): + juju_log("Snap source is not a snap origin", 'WARN') + return {} + + _src = src[5:] + channel = '--channel={}'.format(_src) + + return {snap: {'channel': channel, 'mode': mode} + for snap in snaps} + + +def install_os_snaps(snaps, refresh=False): + """Install OpenStack snaps from channel and with mode + + @param snaps: Dictionary of snaps with channels and modes of the form: + {'snap_name': {'channel': 'snap_channel', + 'mode': 'snap_mode'}} + Where channel is a snapstore channel and mode is --classic, --devmode + or --jailmode. 
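+        For example (the snap name and channel here are illustrative only):
+            {'vault': {'channel': '--channel=1.8/stable',
+                       'mode': '--jailmode'}}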
+    @param refresh: If True, refresh (rather than install) snaps that are
+        already installed
+    """
+
+    def _ensure_flag(flag):
+        if flag.startswith('--'):
+            return flag
+        return '--{}'.format(flag)
+
+    if refresh:
+        for snap in snaps.keys():
+            snap_refresh(snap,
+                         _ensure_flag(snaps[snap]['channel']),
+                         _ensure_flag(snaps[snap]['mode']))
+    else:
+        for snap in snaps.keys():
+            snap_install(snap,
+                         _ensure_flag(snaps[snap]['channel']),
+                         _ensure_flag(snaps[snap]['mode']))
+
+
+def set_unit_upgrading():
+    """Set the unit to an upgrading state in the local kv() store.
+    """
+    with unitdata.HookData()() as t:
+        kv = t[0]
+        kv.set('unit-upgrading', True)
+
+
+def clear_unit_upgrading():
+    """Clear the unit from an upgrading state in the local kv() store.
+    """
+    with unitdata.HookData()() as t:
+        kv = t[0]
+        kv.set('unit-upgrading', False)
+
+
+def is_unit_upgrading_set():
+    """Return the state of the kv().get('unit-upgrading').
+
+    To help with units that don't have HookData() (testing),
+    if it raises an exception, return False.
+    """
+    try:
+        with unitdata.HookData()() as t:
+            kv = t[0]
+            # transform something truth-y into a Boolean.
+            return not(not(kv.get('unit-upgrading')))
+    except Exception:
+        return False
+
+
+def series_upgrade_prepare(pause_unit_helper=None, configs=None):
+    """ Run common series upgrade prepare tasks.
+
+    :param pause_unit_helper: function: Function to pause unit
+    :param configs: OSConfigRenderer object: Configurations
+    :returns None:
+    """
+    set_unit_upgrading()
+    if pause_unit_helper and configs:
+        if not is_unit_paused_set():
+            pause_unit_helper(configs)
+
+
+def series_upgrade_complete(resume_unit_helper=None, configs=None):
+    """ Run common series upgrade complete tasks.
+
+    :param resume_unit_helper: function: Function to resume unit
+    :param configs: OSConfigRenderer object: Configurations
+    :returns None:
+    """
+    clear_unit_paused()
+    clear_unit_upgrading()
+    if configs:
+        configs.write_all()
+        if resume_unit_helper:
+            resume_unit_helper(configs)
+
+
+def is_db_initialised():
+    """Check leader storage to see if database has been initialised.
+
+    :returns: Whether DB has been initialised
+    :rtype: bool
+    """
+    db_initialised = None
+    if leader_get('db-initialised') is None:
+        juju_log(
+            'db-initialised key missing, assuming db is not initialised',
+            'DEBUG')
+        db_initialised = False
+    else:
+        db_initialised = bool_from_string(leader_get('db-initialised'))
+    juju_log('Database initialised: {}'.format(db_initialised), 'DEBUG')
+    return db_initialised
+
+
+def set_db_initialised():
+    """Add flag to leader storage to indicate database has been initialised.
+    """
+    juju_log('Setting db-initialised to True', 'DEBUG')
+    leader_set({'db-initialised': True})
+
+
+def is_db_maintenance_mode(relid=None):
+    """Check relation data from notifications of db in maintenance mode.
+
+    :returns: Whether db has notified it is in maintenance mode.
+    :rtype: bool
+    """
+    juju_log('Checking for maintenance notifications', 'DEBUG')
+    if relid:
+        r_ids = [relid]
+    else:
+        r_ids = relation_ids('shared-db')
+    rids_units = [(r, u) for r in r_ids for u in related_units(r)]
+    notifications = []
+    for r_id, unit in rids_units:
+        settings = relation_get(unit=unit, rid=r_id)
+        for key, value in settings.items():
+            if value and key in DB_MAINTENANCE_KEYS:
+                juju_log(
+                    'Unit: {}, Key: {}, Value: {}'.format(unit, key, value),
+                    'DEBUG')
+                try:
+                    notifications.append(bool_from_string(value))
+                except ValueError:
+                    juju_log(
+                        'Could not discern bool from {}'.format(value),
+                        'WARN')
+    return True in notifications
+
+
+@cached
+def container_scoped_relations():
+    """Get all the container scoped relations
+
+    :returns: List of relation names
+    :rtype: List
+    """
+    md = metadata()
+    relations = []
+    for relation_type in ('provides', 'requires', 'peers'):
+        for relation in md.get(relation_type, []):
+            if md[relation_type][relation].get('scope') == 'container':
+                relations.append(relation)
+    return relations
+
+
+def is_db_ready(use_current_context=False, rel_name=None):
+    """Check remote database is ready to be used.
+
+    Database relations are expected to provide a list of 'allowed' units to
+    confirm that the database is ready for use by those units.
+
+    If db relation has provided this information and local unit is a member,
+    returns True otherwise False.
+
+    :param use_current_context: Whether to limit checks to current hook
+                                context.
+    :type use_current_context: bool
+    :param rel_name: Name of relation to check
+    :type rel_name: string
+    :returns: Whether remote db is ready.
+    :rtype: bool
+    :raises: Exception
+    """
+    key = 'allowed_units'
+
+    rel_name = rel_name or 'shared-db'
+    this_unit = local_unit()
+
+    if use_current_context:
+        if relation_id() in relation_ids(rel_name):
+            rids_units = [(None, None)]
+        else:
+            raise Exception("use_current_context=True but not in {} "
+                            "rel hook contexts (currently in {})."
+                            .format(rel_name, relation_id()))
+    else:
+        rids_units = [(r_id, u)
+                      for r_id in relation_ids(rel_name)
+                      for u in related_units(r_id)]
+
+    for rid, unit in rids_units:
+        allowed_units = relation_get(rid=rid, unit=unit, attribute=key)
+        if allowed_units and this_unit in allowed_units.split():
+            juju_log("This unit ({}) is in allowed unit list from {}".format(
+                this_unit,
+                unit), 'DEBUG')
+            return True
+
+    juju_log("This unit was not found in any allowed unit list")
+    return False
+
+
+def is_expected_scale(peer_relation_name='cluster'):
+    """Query juju goal-state to determine whether our peer- and dependency-
+    relations are at the expected scale.
+
+    Useful for deferring per unit per relation housekeeping work until we are
+    ready to complete it successfully and without unnecessary repetition.
+
+    Always returns True if the version of juju used does not support
+    goal-state.
+
+    :param peer_relation_name: Name of peer relation
+    :type peer_relation_name: string
+    :returns: True or False
+    :rtype: bool
+    """
+    def _get_relation_id(rel_type):
+        return next((rid for rid in relation_ids(reltype=rel_type)), None)
+
+    Relation = namedtuple('Relation', 'rel_type rel_id')
+    peer_rid = _get_relation_id(peer_relation_name)
+    # Units with no peers should still have a peer relation.
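+    # (Juju creates peer relations at deploy time, even for a single unit,
+    # so a missing peer relation id means we cannot be at expected scale.)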
+ if not peer_rid: + juju_log('Not at expected scale, no peer relation found', 'DEBUG') + return False + expected_relations = [ + Relation(rel_type='shared-db', rel_id=_get_relation_id('shared-db'))] + if expect_ha(): + expected_relations.append( + Relation( + rel_type='ha', + rel_id=_get_relation_id('ha'))) + juju_log( + 'Checking scale of {} relations'.format( + ','.join([r.rel_type for r in expected_relations])), + 'DEBUG') + try: + if (len(related_units(relid=peer_rid)) < + len(list(expected_peer_units()))): + return False + for rel in expected_relations: + if not rel.rel_id: + juju_log( + 'Expected to find {} relation, but it is missing'.format( + rel.rel_type), + 'DEBUG') + return False + # Goal state returns every unit even for container scoped + # relations but the charm only ever has a relation with + # the local unit. + if rel.rel_type in container_scoped_relations(): + expected_count = 1 + else: + expected_count = len( + list(expected_related_units(reltype=rel.rel_type))) + if len(related_units(relid=rel.rel_id)) < expected_count: + juju_log( + ('Not at expected scale, not enough units on {} ' + 'relation'.format(rel.rel_type)), + 'DEBUG') + return False + except NotImplementedError: + return True + juju_log('All checks have passed, unit is at expected scale', 'DEBUG') + return True + + +def get_peer_key(unit_name): + """Get the peer key for this unit. + + The peer key is the key a unit uses to publish its status down the peer + relation + + :param unit_name: Name of unit + :type unit_name: string + :returns: Peer key for given unit + :rtype: string + """ + return 'unit-state-{}'.format(unit_name.replace('/', '-')) + + +UNIT_READY = 'READY' +UNIT_NOTREADY = 'NOTREADY' +UNIT_UNKNOWN = 'UNKNOWN' +UNIT_STATES = [UNIT_READY, UNIT_NOTREADY, UNIT_UNKNOWN] + + +def inform_peers_unit_state(state, relation_name='cluster'): + """Inform peers of the state of this unit. + + :param state: State of unit to publish + :type state: string + :param relation_name: Name of relation to publish state on + :type relation_name: string + """ + if state not in UNIT_STATES: + raise ValueError( + "Setting invalid state {} for unit".format(state)) + for r_id in relation_ids(relation_name): + relation_set(relation_id=r_id, + relation_settings={ + get_peer_key(local_unit()): state}) + + +def get_peers_unit_state(relation_name='cluster'): + """Get the state of all peers. + + :param relation_name: Name of relation to check peers on. + :type relation_name: string + :returns: Unit states keyed on unit name. + :rtype: dict + :raises: ValueError + """ + r_ids = relation_ids(relation_name) + rids_units = [(r, u) for r in r_ids for u in related_units(r)] + unit_states = {} + for r_id, unit in rids_units: + settings = relation_get(unit=unit, rid=r_id) + unit_states[unit] = settings.get(get_peer_key(unit), UNIT_UNKNOWN) + if unit_states[unit] not in UNIT_STATES: + raise ValueError( + "Unit in unknown state {}".format(unit_states[unit])) + return unit_states + + +def are_peers_ready(relation_name='cluster'): + """Check if all peers are ready. + + :param relation_name: Name of relation to check peers on. + :type relation_name: string + :returns: Whether all units are ready. + :rtype: bool + """ + unit_states = get_peers_unit_state(relation_name) + return all(v == UNIT_READY for v in unit_states.values()) + + +def inform_peers_if_ready(check_unit_ready_func, relation_name='cluster'): + """Inform peers if this unit is ready. + + The check function should return a tuple (state, message). 
A state + of 'READY' indicates the unit is READY. + + :param check_unit_ready_func: Function to run to check readiness + :type check_unit_ready_func: function + :param relation_name: Name of relation to check peers on. + :type relation_name: string + """ + unit_ready, msg = check_unit_ready_func() + if unit_ready: + state = UNIT_READY + else: + state = UNIT_NOTREADY + juju_log('Telling peers this unit is: {}'.format(state), 'DEBUG') + inform_peers_unit_state(state, relation_name) + + +def check_api_unit_ready(check_db_ready=True): + """Check if this unit is ready. + + :param check_db_ready: Include checks of database readiness. + :type check_db_ready: bool + :returns: Whether unit state is ready and status message + :rtype: (bool, str) + """ + unit_state, msg = get_api_unit_status(check_db_ready=check_db_ready) + return unit_state == WORKLOAD_STATES.ACTIVE, msg + + +def get_api_unit_status(check_db_ready=True): + """Return a workload status and message for this unit. + + :param check_db_ready: Include checks of database readiness. + :type check_db_ready: bool + :returns: Workload state and message + :rtype: (bool, str) + """ + unit_state = WORKLOAD_STATES.ACTIVE + msg = 'Unit is ready' + if is_db_maintenance_mode(): + unit_state = WORKLOAD_STATES.MAINTENANCE + msg = 'Database in maintenance mode.' + elif is_unit_paused_set(): + unit_state = WORKLOAD_STATES.BLOCKED + msg = 'Unit paused.' + elif check_db_ready and not is_db_ready(): + unit_state = WORKLOAD_STATES.WAITING + msg = 'Allowed_units list provided but this unit not present' + elif not is_db_initialised(): + unit_state = WORKLOAD_STATES.WAITING + msg = 'Database not initialised' + elif not is_expected_scale(): + unit_state = WORKLOAD_STATES.WAITING + msg = 'Charm and its dependencies not yet at expected scale' + juju_log(msg, 'DEBUG') + return unit_state, msg + + +def check_api_application_ready(): + """Check if this application is ready. + + :returns: Whether application state is ready and status message + :rtype: (bool, str) + """ + app_state, msg = get_api_application_status() + return app_state == WORKLOAD_STATES.ACTIVE, msg + + +def get_api_application_status(): + """Return a workload status and message for this application. + + :returns: Workload state and message + :rtype: (bool, str) + """ + app_state, msg = get_api_unit_status() + if app_state == WORKLOAD_STATES.ACTIVE: + if are_peers_ready(): + return WORKLOAD_STATES.ACTIVE, 'Application Ready' + else: + return WORKLOAD_STATES.WAITING, 'Some units are not ready' + return app_state, msg diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/vaultlocker.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/vaultlocker.py new file mode 100644 index 0000000000000000000000000000000000000000..4ee6c1dba910b824467c2c34e0a3d3d0f3fd906d --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/openstack/vaultlocker.py @@ -0,0 +1,179 @@ +# Copyright 2018 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import json
+import os
+
+import charmhelpers.contrib.openstack.alternatives as alternatives
+import charmhelpers.contrib.openstack.context as context
+
+import charmhelpers.core.hookenv as hookenv
+import charmhelpers.core.host as host
+import charmhelpers.core.templating as templating
+import charmhelpers.core.unitdata as unitdata
+
+VAULTLOCKER_BACKEND = 'charm-vaultlocker'
+
+
+class VaultKVContext(context.OSContextGenerator):
+    """Vault KV context for interaction with vault-kv interfaces"""
+    interfaces = ['secrets-storage']
+
+    def __init__(self, secret_backend=None):
+        super(VaultKVContext, self).__init__()
+        self.secret_backend = (
+            secret_backend or 'charm-{}'.format(hookenv.service_name())
+        )
+
+    def __call__(self):
+        try:
+            import hvac
+        except ImportError:
+            # BUG: #1862085 - if the relation is made to vault, but the
+            # 'encrypt' option is not made, then the charm errors with an
+            # import warning. This catches that, logs a warning, and returns
+            # with an empty context.
+            hookenv.log("VaultKVContext: trying to use the hvac python "
+                        "module but it's not available. Is the "
+                        "secrets-storage relation made, but the encrypt "
+                        "option not set?",
+                        level=hookenv.WARNING)
+            # return an empty context on hvac import error
+            return {}
+        ctxt = {}
+        # NOTE(hopem): see https://bugs.launchpad.net/charm-helpers/+bug/1849323
+        db = unitdata.kv()
+        # currently known-good secret-id
+        secret_id = db.get('secret-id')
+
+        for relation_id in hookenv.relation_ids(self.interfaces[0]):
+            for unit in hookenv.related_units(relation_id):
+                data = hookenv.relation_get(unit=unit,
+                                            rid=relation_id)
+                vault_url = data.get('vault_url')
+                role_id = data.get('{}_role_id'.format(hookenv.local_unit()))
+                token = data.get('{}_token'.format(hookenv.local_unit()))
+
+                if all([vault_url, role_id, token]):
+                    token = json.loads(token)
+                    vault_url = json.loads(vault_url)
+
+                    # Tokens may change when secret_id's are being
+                    # reissued - if so use token to get new secret_id
+                    token_success = False
+                    try:
+                        secret_id = retrieve_secret_id(
+                            url=vault_url,
+                            token=token
+                        )
+                        token_success = True
+                    except hvac.exceptions.InvalidRequest:
+                        # Try next
+                        pass
+
+                    if token_success:
+                        db.set('secret-id', secret_id)
+                        db.flush()
+
+                        ctxt['vault_url'] = vault_url
+                        ctxt['role_id'] = json.loads(role_id)
+                        ctxt['secret_id'] = secret_id
+                        ctxt['secret_backend'] = self.secret_backend
+                        vault_ca = data.get('vault_ca')
+                        if vault_ca:
+                            ctxt['vault_ca'] = json.loads(vault_ca)
+
+                        self.complete = True
+                        break
+                    else:
+                        if secret_id:
+                            ctxt['vault_url'] = vault_url
+                            ctxt['role_id'] = json.loads(role_id)
+                            ctxt['secret_id'] = secret_id
+                            ctxt['secret_backend'] = self.secret_backend
+                            vault_ca = data.get('vault_ca')
+                            if vault_ca:
+                                ctxt['vault_ca'] = json.loads(vault_ca)
+
+            if self.complete:
+                break
+
+        if ctxt:
+            self.complete = True
+
+        return ctxt
+
+
+def write_vaultlocker_conf(context, priority=100):
+    """Write vaultlocker configuration to disk and install alternative
+
+    :param context: Dict of data from vault-kv relation
+    :ptype: context: dict
+    :param priority: Priority of alternative configuration
+    :ptype: priority: int"""
+    charm_vl_path = "/var/lib/charm/{}/vaultlocker.conf".format(
+        hookenv.service_name()
+    )
+    host.mkdir(os.path.dirname(charm_vl_path), perms=0o700)
+    templating.render(source='vaultlocker.conf.j2',
+                      target=charm_vl_path,
+                      context=context, perms=0o600)
+
alternatives.install_alternative('vaultlocker.conf', + '/etc/vaultlocker/vaultlocker.conf', + charm_vl_path, priority) + + +def vault_relation_complete(backend=None): + """Determine whether vault relation is complete + + :param backend: Name of secrets backend requested + :ptype backend: string + :returns: whether the relation to vault is complete + :rtype: bool""" + try: + import hvac + except ImportError: + return False + try: + vault_kv = VaultKVContext(secret_backend=backend or VAULTLOCKER_BACKEND) + vault_kv() + return vault_kv.complete + except hvac.exceptions.InvalidRequest: + return False + + +# TODO: contrib a high level unwrap method to hvac that works +def retrieve_secret_id(url, token): + """Retrieve a response-wrapped secret_id from Vault + + :param url: URL to Vault Server + :ptype url: str + :param token: One shot Token to use + :ptype token: str + :returns: secret_id to use for Vault Access + :rtype: str""" + import hvac + try: + # hvac 0.10.1 changed default adapter to JSONAdapter + client = hvac.Client(url=url, token=token, adapter=hvac.adapters.Request) + except AttributeError: + # hvac < 0.6.2 doesn't have adapter but uses the same response interface + client = hvac.Client(url=url, token=token) + else: + # hvac < 0.9.2 assumes adapter is an instance, so doesn't instantiate + if not isinstance(client.adapter, hvac.adapters.Request): + client.adapter = hvac.adapters.Request(base_uri=url, token=token) + response = client._post('/v1/sys/wrapping/unwrap') + if response.status_code == 200: + data = response.json() + return data['data']['secret_id'] diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/peerstorage/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/peerstorage/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..a8fa60c2a3080d088b8e0abae370aa355c80ec56 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/peerstorage/__init__.py @@ -0,0 +1,267 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import json +import six + +from charmhelpers.core.hookenv import relation_id as current_relation_id +from charmhelpers.core.hookenv import ( + is_relation_made, + relation_ids, + relation_get as _relation_get, + local_unit, + relation_set as _relation_set, + leader_get as _leader_get, + leader_set, + is_leader, +) + + +""" +This helper provides functions to support use of a peer relation +for basic key/value storage, with the added benefit that all storage +can be replicated across peer units. 
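+
+Where Juju leadership is available, the wrappers below (leader_get,
+relation_set and relation_get) transparently prefer leader storage, falling
+back to the peer relation and migrating previously stored values as needed.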
+
+Requirement to use:
+
+To use this, the "peer_echo()" method has to be called from the peer
+relation's relation-changed hook:
+
+@hooks.hook("cluster-relation-changed")  # Adapt this to your peer relation name
+def cluster_relation_changed():
+    peer_echo()
+
+Once this is done, you can use peer storage from anywhere:
+
+@hooks.hook("some-hook")
+def some_hook():
+    # You can store and retrieve key/values this way:
+    if is_relation_made("cluster"):  # from charmhelpers.core.hookenv
+        # There are peers available so we can work with peer storage
+        peer_store("mykey", "myvalue")
+        value = peer_retrieve("mykey")
+        print(value)
+    else:
+        print("No peers joined the relation, cannot share key/values :(")
+"""
+
+
+def leader_get(attribute=None, rid=None):
+    """Wrapper to ensure that settings are migrated from the peer relation.
+
+    This is to support upgrading an environment that does not support
+    Juju leadership election to one that does.
+
+    If a setting is not extant in the leader-get but is on the relation-get
+    peer rel, it is migrated and marked as such so that it is not re-migrated.
+    """
+    migration_key = '__leader_get_migrated_settings__'
+    if not is_leader():
+        return _leader_get(attribute=attribute)
+
+    settings_migrated = False
+    leader_settings = _leader_get(attribute=attribute)
+    previously_migrated = _leader_get(attribute=migration_key)
+
+    if previously_migrated:
+        migrated = set(json.loads(previously_migrated))
+    else:
+        migrated = set([])
+
+    try:
+        if migration_key in leader_settings:
+            del leader_settings[migration_key]
+    except TypeError:
+        pass
+
+    if attribute:
+        if attribute in migrated:
+            return leader_settings
+
+        # If attribute not present in leader db, check if this unit has set
+        # the attribute in the peer relation
+        if not leader_settings:
+            peer_setting = _relation_get(attribute=attribute, unit=local_unit(),
+                                         rid=rid)
+            if peer_setting:
+                leader_set(settings={attribute: peer_setting})
+                leader_settings = peer_setting
+
+        if leader_settings:
+            settings_migrated = True
+            migrated.add(attribute)
+    else:
+        r_settings = _relation_get(unit=local_unit(), rid=rid)
+        if r_settings:
+            for key in set(r_settings.keys()).difference(migrated):
+                # Leader setting wins
+                if not leader_settings.get(key):
+                    leader_settings[key] = r_settings[key]
+
+                settings_migrated = True
+                migrated.add(key)
+
+    if settings_migrated:
+        leader_set(**leader_settings)
+
+    if migrated and settings_migrated:
+        migrated = json.dumps(list(migrated))
+        leader_set(settings={migration_key: migrated})
+
+    return leader_settings
+
+
+def relation_set(relation_id=None, relation_settings=None, **kwargs):
+    """Attempt to use leader-set if supported in the current version of Juju,
+    otherwise falls back on relation-set.
+
+    Note that we only attempt to use leader-set if the provided relation_id is
+    a peer relation id or no relation id is provided (in which case we assume
+    we are within the peer relation context).
+    """
+    try:
+        if relation_id in relation_ids('cluster'):
+            return leader_set(settings=relation_settings, **kwargs)
+        else:
+            raise NotImplementedError
+    except NotImplementedError:
+        return _relation_set(relation_id=relation_id,
+                             relation_settings=relation_settings, **kwargs)
+
+
+def relation_get(attribute=None, unit=None, rid=None):
+    """Attempt to use leader-get if supported in the current version of Juju,
+    otherwise falls back on relation-get.
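+
+    (Illustrative note: for a peer relation id this reads through leader
+    storage via the leader_get wrapper above; for any other relation id it
+    behaves exactly like hookenv.relation_get.)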
+
+    Note that we only attempt to use leader-get if the provided rid is a peer
+    relation id or no relation id is provided (in which case we assume we are
+    within the peer relation context).
+    """
+    try:
+        if rid in relation_ids('cluster'):
+            return leader_get(attribute, rid)
+        else:
+            raise NotImplementedError
+    except NotImplementedError:
+        return _relation_get(attribute=attribute, rid=rid, unit=unit)
+
+
+def peer_retrieve(key, relation_name='cluster'):
+    """Retrieve a named key from peer relation `relation_name`."""
+    cluster_rels = relation_ids(relation_name)
+    if len(cluster_rels) > 0:
+        cluster_rid = cluster_rels[0]
+        return relation_get(attribute=key, rid=cluster_rid,
+                            unit=local_unit())
+    else:
+        raise ValueError('Unable to detect '
+                         'peer relation {}'.format(relation_name))
+
+
+def peer_retrieve_by_prefix(prefix, relation_name='cluster', delimiter='_',
+                            inc_list=None, exc_list=None):
+    """ Retrieve k/v pairs given a prefix and filter using {inc,exc}_list """
+    inc_list = inc_list if inc_list else []
+    exc_list = exc_list if exc_list else []
+    peerdb_settings = peer_retrieve('-', relation_name=relation_name)
+    matched = {}
+    if peerdb_settings is None:
+        return matched
+    for k, v in peerdb_settings.items():
+        full_prefix = prefix + delimiter
+        if k.startswith(full_prefix):
+            new_key = k.replace(full_prefix, '')
+            if new_key in exc_list:
+                continue
+            if new_key in inc_list or len(inc_list) == 0:
+                matched[new_key] = v
+    return matched
+
+
+def peer_store(key, value, relation_name='cluster'):
+    """Store the key/value pair on the named peer relation `relation_name`."""
+    cluster_rels = relation_ids(relation_name)
+    if len(cluster_rels) > 0:
+        cluster_rid = cluster_rels[0]
+        relation_set(relation_id=cluster_rid,
+                     relation_settings={key: value})
+    else:
+        raise ValueError('Unable to detect '
+                         'peer relation {}'.format(relation_name))
+
+
+def peer_echo(includes=None, force=False):
+    """Echo filtered attributes back onto the same relation for storage.
+
+    This is a requirement to use the peerstorage module - it needs to be called
+    from the peer relation's changed hook.
+
+    If Juju leader support exists this will be a noop unless force is True.
+    """
+    try:
+        is_leader()
+    except NotImplementedError:
+        pass
+    else:
+        if not force:
+            return  # NOOP if leader-election is supported
+
+    # Use original non-leader calls
+    relation_get = _relation_get
+    relation_set = _relation_set
+
+    rdata = relation_get()
+    echo_data = {}
+    if includes is None:
+        echo_data = rdata.copy()
+        for ex in ['private-address', 'public-address']:
+            if ex in echo_data:
+                echo_data.pop(ex)
+    else:
+        for attribute, value in six.iteritems(rdata):
+            for include in includes:
+                if include in attribute:
+                    echo_data[attribute] = value
+    if len(echo_data) > 0:
+        relation_set(relation_settings=echo_data)
+
+
+def peer_store_and_set(relation_id=None, peer_relation_name='cluster',
+                       peer_store_fatal=False, relation_settings=None,
+                       delimiter='_', **kwargs):
+    """Store passed-in arguments both in argument relation and in peer storage.
+
+    It functions like doing relation_set() and peer_store() at the same time,
+    with the same data.
+
+    @param relation_id: the id of the relation to store the data on. Defaults
+                        to the current relation.
+ @param peer_store_fatal: Set to True, the function will raise an exception + should the peer storage not be available.""" + + relation_settings = relation_settings if relation_settings else {} + relation_set(relation_id=relation_id, + relation_settings=relation_settings, + **kwargs) + if is_relation_made(peer_relation_name): + for key, value in six.iteritems(dict(list(kwargs.items()) + + list(relation_settings.items()))): + key_prefix = relation_id or current_relation_id() + peer_store(key_prefix + delimiter + key, + value, + relation_name=peer_relation_name) + else: + if peer_store_fatal: + raise ValueError('Unable to detect ' + 'peer relation {}'.format(peer_relation_name)) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/python.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/python.py new file mode 100644 index 0000000000000000000000000000000000000000..84cba8c4eba34fdd705f4ee39628ebd33b5175a2 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/python.py @@ -0,0 +1,21 @@ +# Copyright 2014-2019 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +from __future__ import absolute_import + +# deprecated aliases for backwards compatibility +from charmhelpers.fetch.python import debug # noqa +from charmhelpers.fetch.python import packages # noqa +from charmhelpers.fetch.python import rpdb # noqa +from charmhelpers.fetch.python import version # noqa diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/saltstack/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/saltstack/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..d74f4039379b608b5c076fd96c906ff5237f1f83 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/saltstack/__init__.py @@ -0,0 +1,116 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +"""Charm Helpers saltstack - declare the state of your machines. + +This helper enables you to declare your machine state, rather than +program it procedurally (and have to test each change to your procedures). 
+Your install hook can be as simple as::
+
+    {{{
+    from charmhelpers.contrib.saltstack import (
+        install_salt_support,
+        update_machine_state,
+    )
+
+
+    def install():
+        install_salt_support()
+        update_machine_state('machine_states/dependencies.yaml')
+        update_machine_state('machine_states/installed.yaml')
+    }}}
+
+and won't need to change (nor will its tests) when you change the machine
+state.
+
+It's using a python package called salt-minion which allows various formats for
+specifying resources, such as::
+
+    {{{
+    /srv/{{ basedir }}:
+        file.directory:
+            - group: ubunet
+            - user: ubunet
+            - require:
+                - user: ubunet
+            - recurse:
+                - user
+                - group
+
+    ubunet:
+        group.present:
+            - gid: 1500
+        user.present:
+            - uid: 1500
+            - gid: 1500
+            - createhome: False
+            - require:
+                - group: ubunet
+    }}}
+
+The docs for all the different state definitions are at:
+    http://docs.saltstack.com/ref/states/all/
+
+
+TODO:
+  * Add test helpers which will ensure that machine state definitions
+    are functionally (but not necessarily logically) correct (i.e. getting
+    salt to parse all state defs).
+  * Add a link to a public bootstrap charm example / blogpost.
+  * Find a way to obviate the need to use the grains['charm_dir'] syntax
+    in templates.
+"""
+# Copyright 2013 Canonical Ltd.
+#
+# Authors:
+#  Charm Helpers Developers
+import subprocess
+
+import charmhelpers.contrib.templating.contexts
+import charmhelpers.core.host
+import charmhelpers.core.hookenv
+# Needed for the apt_install() call in install_salt_support() below.
+import charmhelpers.fetch
+
+
+salt_grains_path = '/etc/salt/grains'
+
+
+def install_salt_support(from_ppa=True):
+    """Installs the salt-minion helper for machine state.
+
+    By default the salt-minion package is installed from
+    the saltstack PPA. If from_ppa is False you must ensure
+    that the salt-minion package is available in the apt cache.
+    """
+    if from_ppa:
+        subprocess.check_call([
+            '/usr/bin/add-apt-repository',
+            '--yes',
+            'ppa:saltstack/salt',
+        ])
+        subprocess.check_call(['/usr/bin/apt-get', 'update'])
+    # We install salt-common as salt-minion would run the salt-minion
+    # daemon.
+    charmhelpers.fetch.apt_install('salt-common')
+
+
+def update_machine_state(state_path):
+    """Update the machine state using the provided state declaration."""
+    charmhelpers.contrib.templating.contexts.juju_state_to_yaml(
+        salt_grains_path)
+    subprocess.check_call([
+        'salt-call',
+        '--local',
+        'state.template',
+        state_path,
+    ])
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/ssl/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/ssl/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..1d238b529e44ad2761d86e96b4845589f8e951c4
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/ssl/__init__.py
@@ -0,0 +1,92 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import subprocess
+from charmhelpers.core import hookenv
+
+
+def generate_selfsigned(keyfile, certfile, keysize="1024", config=None, subject=None, cn=None):
+    """Generate a self-signed SSL keypair
+
+    You must provide one of the 3 optional arguments:
+    config, subject or cn
+    If more than one is provided, the leftmost will be used
+
+    Arguments:
+    keyfile -- (required) full path to the keyfile to be created
+    certfile -- (required) full path to the certfile to be created
+    keysize -- (optional) SSL key length
+    config -- (optional) openssl configuration file
+    subject -- (optional) dictionary with SSL subject variables
+    cn -- (optional) certificate common name
+
+    Required keys in subject dict:
+    cn -- Common name (e.g. FQDN)
+
+    Optional keys in subject dict
+    country -- Country Name (2 letter code)
+    state -- State or Province Name (full name)
+    locality -- Locality Name (eg, city)
+    organization -- Organization Name (eg, company)
+    organizational_unit -- Organizational Unit Name (eg, section)
+    email -- Email Address
+    """
+
+    cmd = []
+    if config:
+        cmd = ["/usr/bin/openssl", "req", "-new", "-newkey",
+               "rsa:{}".format(keysize), "-days", "365", "-nodes", "-x509",
+               "-keyout", keyfile,
+               "-out", certfile, "-config", config]
+    elif subject:
+        ssl_subject = ""
+        if "country" in subject:
+            ssl_subject = ssl_subject + "/C={}".format(subject["country"])
+        if "state" in subject:
+            ssl_subject = ssl_subject + "/ST={}".format(subject["state"])
+        if "locality" in subject:
+            ssl_subject = ssl_subject + "/L={}".format(subject["locality"])
+        if "organization" in subject:
+            ssl_subject = ssl_subject + "/O={}".format(subject["organization"])
+        if "organizational_unit" in subject:
+            ssl_subject = ssl_subject + "/OU={}".format(subject["organizational_unit"])
+        if "cn" in subject:
+            ssl_subject = ssl_subject + "/CN={}".format(subject["cn"])
+        else:
+            hookenv.log("When using \"subject\" argument you must "
+                        "provide \"cn\" field at very least")
+            return False
+        if "email" in subject:
+            ssl_subject = ssl_subject + "/emailAddress={}".format(subject["email"])
+
+        cmd = ["/usr/bin/openssl", "req", "-new", "-newkey",
+               "rsa:{}".format(keysize), "-days", "365", "-nodes", "-x509",
+               "-keyout", keyfile,
+               "-out", certfile, "-subj", ssl_subject]
+    elif cn:
+        cmd = ["/usr/bin/openssl", "req", "-new", "-newkey",
+               "rsa:{}".format(keysize), "-days", "365", "-nodes", "-x509",
+               "-keyout", keyfile,
+               "-out", certfile, "-subj", "/CN={}".format(cn)]
+
+    if not cmd:
+        hookenv.log("No config, subject or cn provided, "
+                    "unable to generate self-signed SSL certificates")
+        return False
+    try:
+        subprocess.check_call(cmd)
+        return True
+    except Exception as e:
+        print("Execution of openssl command failed:\n{}".format(e))
+        return False
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/ssl/service.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/ssl/service.py
new file mode 100644
index 0000000000000000000000000000000000000000..06b534ffa3b0de97a56bac4060c39a9a4485b0c2
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/ssl/service.py
@@ -0,0 +1,277 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import os +from os.path import join as path_join +from os.path import exists +import subprocess + +from charmhelpers.core.hookenv import log, DEBUG + +STD_CERT = "standard" + +# Mysql server is fairly picky about cert creation +# and types, spec its creation separately for now. +MYSQL_CERT = "mysql" + + +class ServiceCA(object): + + default_expiry = str(365 * 2) + default_ca_expiry = str(365 * 6) + + def __init__(self, name, ca_dir, cert_type=STD_CERT): + self.name = name + self.ca_dir = ca_dir + self.cert_type = cert_type + + ############### + # Hook Helper API + @staticmethod + def get_ca(type=STD_CERT): + service_name = os.environ['JUJU_UNIT_NAME'].split('/')[0] + ca_path = os.path.join(os.environ['CHARM_DIR'], 'ca') + ca = ServiceCA(service_name, ca_path, type) + ca.init() + return ca + + @classmethod + def get_service_cert(cls, type=STD_CERT): + service_name = os.environ['JUJU_UNIT_NAME'].split('/')[0] + ca = cls.get_ca() + crt, key = ca.get_or_create_cert(service_name) + return crt, key, ca.get_ca_bundle() + + ############### + + def init(self): + log("initializing service ca", level=DEBUG) + if not exists(self.ca_dir): + self._init_ca_dir(self.ca_dir) + self._init_ca() + + @property + def ca_key(self): + return path_join(self.ca_dir, 'private', 'cacert.key') + + @property + def ca_cert(self): + return path_join(self.ca_dir, 'cacert.pem') + + @property + def ca_conf(self): + return path_join(self.ca_dir, 'ca.cnf') + + @property + def signing_conf(self): + return path_join(self.ca_dir, 'signing.cnf') + + def _init_ca_dir(self, ca_dir): + os.mkdir(ca_dir) + for i in ['certs', 'crl', 'newcerts', 'private']: + sd = path_join(ca_dir, i) + if not exists(sd): + os.mkdir(sd) + + if not exists(path_join(ca_dir, 'serial')): + with open(path_join(ca_dir, 'serial'), 'w') as fh: + fh.write('02\n') + + if not exists(path_join(ca_dir, 'index.txt')): + with open(path_join(ca_dir, 'index.txt'), 'w') as fh: + fh.write('') + + def _init_ca(self): + """Generate the root ca's cert and key. 
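+
+        Writes ca.cnf and signing.cnf into the CA directory if they are
+        missing, then shells out to openssl to create cacert.pem and
+        private/cacert.key.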
+ """ + if not exists(path_join(self.ca_dir, 'ca.cnf')): + with open(path_join(self.ca_dir, 'ca.cnf'), 'w') as fh: + fh.write( + CA_CONF_TEMPLATE % (self.get_conf_variables())) + + if not exists(path_join(self.ca_dir, 'signing.cnf')): + with open(path_join(self.ca_dir, 'signing.cnf'), 'w') as fh: + fh.write( + SIGNING_CONF_TEMPLATE % (self.get_conf_variables())) + + if exists(self.ca_cert) or exists(self.ca_key): + raise RuntimeError("Initialized called when CA already exists") + cmd = ['openssl', 'req', '-config', self.ca_conf, + '-x509', '-nodes', '-newkey', 'rsa', + '-days', self.default_ca_expiry, + '-keyout', self.ca_key, '-out', self.ca_cert, + '-outform', 'PEM'] + output = subprocess.check_output(cmd, stderr=subprocess.STDOUT) + log("CA Init:\n %s" % output, level=DEBUG) + + def get_conf_variables(self): + return dict( + org_name="juju", + org_unit_name="%s service" % self.name, + common_name=self.name, + ca_dir=self.ca_dir) + + def get_or_create_cert(self, common_name): + if common_name in self: + return self.get_certificate(common_name) + return self.create_certificate(common_name) + + def create_certificate(self, common_name): + if common_name in self: + return self.get_certificate(common_name) + key_p = path_join(self.ca_dir, "certs", "%s.key" % common_name) + crt_p = path_join(self.ca_dir, "certs", "%s.crt" % common_name) + csr_p = path_join(self.ca_dir, "certs", "%s.csr" % common_name) + self._create_certificate(common_name, key_p, csr_p, crt_p) + return self.get_certificate(common_name) + + def get_certificate(self, common_name): + if common_name not in self: + raise ValueError("No certificate for %s" % common_name) + key_p = path_join(self.ca_dir, "certs", "%s.key" % common_name) + crt_p = path_join(self.ca_dir, "certs", "%s.crt" % common_name) + with open(crt_p) as fh: + crt = fh.read() + with open(key_p) as fh: + key = fh.read() + return crt, key + + def __contains__(self, common_name): + crt_p = path_join(self.ca_dir, "certs", "%s.crt" % common_name) + return exists(crt_p) + + def _create_certificate(self, common_name, key_p, csr_p, crt_p): + template_vars = self.get_conf_variables() + template_vars['common_name'] = common_name + subj = '/O=%(org_name)s/OU=%(org_unit_name)s/CN=%(common_name)s' % ( + template_vars) + + log("CA Create Cert %s" % common_name, level=DEBUG) + cmd = ['openssl', 'req', '-sha1', '-newkey', 'rsa:2048', + '-nodes', '-days', self.default_expiry, + '-keyout', key_p, '-out', csr_p, '-subj', subj] + subprocess.check_call(cmd, stderr=subprocess.PIPE) + cmd = ['openssl', 'rsa', '-in', key_p, '-out', key_p] + subprocess.check_call(cmd, stderr=subprocess.PIPE) + + log("CA Sign Cert %s" % common_name, level=DEBUG) + if self.cert_type == MYSQL_CERT: + cmd = ['openssl', 'x509', '-req', + '-in', csr_p, '-days', self.default_expiry, + '-CA', self.ca_cert, '-CAkey', self.ca_key, + '-set_serial', '01', '-out', crt_p] + else: + cmd = ['openssl', 'ca', '-config', self.signing_conf, + '-extensions', 'req_extensions', + '-days', self.default_expiry, '-notext', + '-in', csr_p, '-out', crt_p, '-subj', subj, '-batch'] + log("running %s" % " ".join(cmd), level=DEBUG) + subprocess.check_call(cmd, stderr=subprocess.PIPE) + + def get_ca_bundle(self): + with open(self.ca_cert) as fh: + return fh.read() + + +CA_CONF_TEMPLATE = """ +[ ca ] +default_ca = CA_default + +[ CA_default ] +dir = %(ca_dir)s +policy = policy_match +database = $dir/index.txt +serial = $dir/serial +certs = $dir/certs +crl_dir = $dir/crl +new_certs_dir = $dir/newcerts +certificate = $dir/cacert.pem 
+private_key = $dir/private/cacert.key +RANDFILE = $dir/private/.rand +default_md = default + +[ req ] +default_bits = 1024 +default_md = sha1 + +prompt = no +distinguished_name = ca_distinguished_name + +x509_extensions = ca_extensions + +[ ca_distinguished_name ] +organizationName = %(org_name)s +organizationalUnitName = %(org_unit_name)s Certificate Authority + + +[ policy_match ] +countryName = optional +stateOrProvinceName = optional +organizationName = match +organizationalUnitName = optional +commonName = supplied + +[ ca_extensions ] +basicConstraints = critical,CA:true +subjectKeyIdentifier = hash +authorityKeyIdentifier = keyid:always, issuer +keyUsage = cRLSign, keyCertSign +""" + + +SIGNING_CONF_TEMPLATE = """ +[ ca ] +default_ca = CA_default + +[ CA_default ] +dir = %(ca_dir)s +policy = policy_match +database = $dir/index.txt +serial = $dir/serial +certs = $dir/certs +crl_dir = $dir/crl +new_certs_dir = $dir/newcerts +certificate = $dir/cacert.pem +private_key = $dir/private/cacert.key +RANDFILE = $dir/private/.rand +default_md = default + +[ req ] +default_bits = 1024 +default_md = sha1 + +prompt = no +distinguished_name = req_distinguished_name + +x509_extensions = req_extensions + +[ req_distinguished_name ] +organizationName = %(org_name)s +organizationalUnitName = %(org_unit_name)s machine resources +commonName = %(common_name)s + +[ policy_match ] +countryName = optional +stateOrProvinceName = optional +organizationName = match +organizationalUnitName = optional +commonName = supplied + +[ req_extensions ] +basicConstraints = CA:false +subjectKeyIdentifier = hash +authorityKeyIdentifier = keyid:always, issuer +keyUsage = digitalSignature, keyEncipherment, keyAgreement +extendedKeyUsage = serverAuth, clientAuth +""" diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/storage/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/storage/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..d7567b863e3a5ad2b7a7f44958b4166e0c3d346b --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/storage/__init__.py @@ -0,0 +1,13 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/storage/linux/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/storage/linux/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..d7567b863e3a5ad2b7a7f44958b4166e0c3d346b --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/storage/linux/__init__.py @@ -0,0 +1,13 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/storage/linux/bcache.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/storage/linux/bcache.py new file mode 100644 index 0000000000000000000000000000000000000000..605991e16a4238f4ed46fbf0791048187f375c5c --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/storage/linux/bcache.py @@ -0,0 +1,74 @@ +# Copyright 2017 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +import os +import json + +from charmhelpers.core.hookenv import log + +stats_intervals = ['stats_day', 'stats_five_minute', + 'stats_hour', 'stats_total'] + +SYSFS = '/sys' + + +class Bcache(object): + """Bcache behaviour + """ + + def __init__(self, cachepath): + self.cachepath = cachepath + + @classmethod + def fromdevice(cls, devname): + return cls('{}/block/{}/bcache'.format(SYSFS, devname)) + + def __str__(self): + return self.cachepath + + def get_stats(self, interval): + """Get cache stats + """ + intervaldir = 'stats_{}'.format(interval) + path = "{}/{}".format(self.cachepath, intervaldir) + out = dict() + for elem in os.listdir(path): + out[elem] = open('{}/{}'.format(path, elem)).read().strip() + return out + + +def get_bcache_fs(): + """Return all cache sets + """ + cachesetroot = "{}/fs/bcache".format(SYSFS) + try: + dirs = os.listdir(cachesetroot) + except OSError: + log("No bcache fs found") + return [] + cacheset = set([Bcache('{}/{}'.format(cachesetroot, d)) for d in dirs if not d.startswith('register')]) + return cacheset + + +def get_stats_action(cachespec, interval): + """Action for getting bcache statistics for a given cachespec. + Cachespec can either be a device name, eg. 
'sdb', which will retrieve + cache stats for the given device, or 'global', which will retrieve stats + for all cachesets + """ + if cachespec == 'global': + caches = get_bcache_fs() + else: + caches = [Bcache.fromdevice(cachespec)] + res = dict((c.cachepath, c.get_stats(interval)) for c in caches) + return json.dumps(res, indent=4, separators=(',', ': ')) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/storage/linux/ceph.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/storage/linux/ceph.py new file mode 100644 index 0000000000000000000000000000000000000000..814d5c72bc246c9536d3bb5975ade1c7fbe989e4 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/storage/linux/ceph.py @@ -0,0 +1,1810 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# +# Copyright 2012 Canonical Ltd. +# +# This file is sourced from lp:openstack-charm-helpers +# +# Authors: +# James Page +# Adam Gandelman +# + +import collections +import errno +import hashlib +import math +import six + +import os +import shutil +import json +import time +import uuid + +from subprocess import ( + check_call, + check_output, + CalledProcessError, +) +from charmhelpers.core.hookenv import ( + config, + service_name, + local_unit, + relation_get, + relation_ids, + relation_set, + related_units, + log, + DEBUG, + INFO, + WARNING, + ERROR, +) +from charmhelpers.core.host import ( + mount, + mounts, + service_start, + service_stop, + service_running, + umount, + cmp_pkgrevno, +) +from charmhelpers.fetch import ( + apt_install, +) +from charmhelpers.core.unitdata import kv + +from charmhelpers.core.kernel import modprobe +from charmhelpers.contrib.openstack.utils import config_flags_parser + +KEYRING = '/etc/ceph/ceph.client.{}.keyring' +KEYFILE = '/etc/ceph/ceph.client.{}.key' + +CEPH_CONF = """[global] +auth supported = {auth} +keyring = {keyring} +mon host = {mon_hosts} +log to syslog = {use_syslog} +err to syslog = {use_syslog} +clog to syslog = {use_syslog} +""" + +# The number of placement groups per OSD to target for placement group +# calculations. This number is chosen as 100 due to the ceph PG Calc +# documentation recommending to choose 100 for clusters which are not +# expected to increase in the foreseeable future. Since the majority of the +# calculations are done on deployment, target the case of non-expanding +# clusters as the default. 
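+#
+# As a worked example (illustrative only; this note is not from upstream):
+# with the default target of 100 PGs per OSD, three OSDs, a pool weighted
+# at 10% of the data and three replicas, Pool.get_pgs() below computes
+# 100 * 3 * 0.10 / 3 = 10 and then settles on 8, the nearest power of two
+# within the 25% tolerance.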
+DEFAULT_PGS_PER_OSD_TARGET = 100 +DEFAULT_POOL_WEIGHT = 10.0 +LEGACY_PG_COUNT = 200 +DEFAULT_MINIMUM_PGS = 2 +AUTOSCALER_DEFAULT_PGS = 32 + + +class OsdPostUpgradeError(Exception): + """Error class for OSD post-upgrade operations.""" + pass + + +class OSDSettingConflict(Exception): + """Error class for conflicting osd setting requests.""" + pass + + +class OSDSettingNotAllowed(Exception): + """Error class for a disallowed setting.""" + pass + + +OSD_SETTING_EXCEPTIONS = (OSDSettingConflict, OSDSettingNotAllowed) + +OSD_SETTING_WHITELIST = [ + 'osd heartbeat grace', + 'osd heartbeat interval', +] + + +def _order_dict_by_key(rdict): + """Convert a dictionary into an OrderedDict sorted by key. + + :param rdict: Dictionary to be ordered. + :type rdict: dict + :returns: Ordered Dictionary. + :rtype: collections.OrderedDict + """ + return collections.OrderedDict(sorted(rdict.items(), key=lambda k: k[0])) + + +def get_osd_settings(relation_name): + """Consolidate requested osd settings from all clients. + + Consolidate requested osd settings from all clients. Check that the + requested setting is on the whitelist and it does not conflict with + any other requested settings. + + :returns: Dictionary of settings + :rtype: dict + + :raises: OSDSettingNotAllowed + :raises: OSDSettingConflict + """ + rel_ids = relation_ids(relation_name) + osd_settings = {} + for relid in rel_ids: + for unit in related_units(relid): + unit_settings = relation_get('osd-settings', unit, relid) or '{}' + unit_settings = json.loads(unit_settings) + for key, value in unit_settings.items(): + if key not in OSD_SETTING_WHITELIST: + msg = 'Illegal settings "{}"'.format(key) + raise OSDSettingNotAllowed(msg) + if key in osd_settings: + if osd_settings[key] != unit_settings[key]: + msg = 'Conflicting settings for "{}"'.format(key) + raise OSDSettingConflict(msg) + else: + osd_settings[key] = value + return _order_dict_by_key(osd_settings) + + +def send_osd_settings(): + """Pass on requested OSD settings to osd units.""" + try: + settings = get_osd_settings('client') + except OSD_SETTING_EXCEPTIONS as e: + # There is a problem with the settings, not passing them on. Update + # status will notify the user. + log(e, level=ERROR) + return + data = { + 'osd-settings': json.dumps(settings, sort_keys=True)} + for relid in relation_ids('osd'): + relation_set(relation_id=relid, + relation_settings=data) + + +def validator(value, valid_type, valid_range=None): + """ + Used to validate these: http://docs.ceph.com/docs/master/rados/operations/pools/#set-pool-values + Example input: + validator(value=1, + valid_type=int, + valid_range=[0, 2]) + This says I'm testing value=1. It must be an int inclusive in [0,2] + + :param value: The value to validate + :param valid_type: The type that value should be. + :param valid_range: A range of values that value can assume. + :return: + """ + assert isinstance(value, valid_type), "{} is not a {}".format( + value, + valid_type) + if valid_range is not None: + assert isinstance(valid_range, list), \ + "valid_range must be a list, was given {}".format(valid_range) + # If we're dealing with strings + if isinstance(value, six.string_types): + assert value in valid_range, \ + "{} is not in the list {}".format(value, valid_range) + # Integer, float should have a min and max + else: + if len(valid_range) != 2: + raise ValueError( + "Invalid valid_range list of {} for {}. 
" + "List must be [min,max]".format(valid_range, value)) + assert value >= valid_range[0], \ + "{} is less than minimum allowed value of {}".format( + value, valid_range[0]) + assert value <= valid_range[1], \ + "{} is greater than maximum allowed value of {}".format( + value, valid_range[1]) + + +class PoolCreationError(Exception): + """ + A custom error to inform the caller that a pool creation failed. Provides an error message + """ + + def __init__(self, message): + super(PoolCreationError, self).__init__(message) + + +class Pool(object): + """ + An object oriented approach to Ceph pool creation. This base class is inherited by ReplicatedPool and ErasurePool. + Do not call create() on this base class as it will not do anything. Instantiate a child class and call create(). + """ + + def __init__(self, service, name): + self.service = service + self.name = name + + # Create the pool if it doesn't exist already + # To be implemented by subclasses + def create(self): + pass + + def add_cache_tier(self, cache_pool, mode): + """ + Adds a new cache tier to an existing pool. + :param cache_pool: six.string_types. The cache tier pool name to add. + :param mode: six.string_types. The caching mode to use for this pool. valid range = ["readonly", "writeback"] + :return: None + """ + # Check the input types and values + validator(value=cache_pool, valid_type=six.string_types) + validator(value=mode, valid_type=six.string_types, valid_range=["readonly", "writeback"]) + + check_call(['ceph', '--id', self.service, 'osd', 'tier', 'add', self.name, cache_pool]) + check_call(['ceph', '--id', self.service, 'osd', 'tier', 'cache-mode', cache_pool, mode]) + check_call(['ceph', '--id', self.service, 'osd', 'tier', 'set-overlay', self.name, cache_pool]) + check_call(['ceph', '--id', self.service, 'osd', 'pool', 'set', cache_pool, 'hit_set_type', 'bloom']) + + def remove_cache_tier(self, cache_pool): + """ + Removes a cache tier from Ceph. Flushes all dirty objects from writeback pools and waits for that to complete. + :param cache_pool: six.string_types. The cache tier pool name to remove. + :return: None + """ + # read-only is easy, writeback is much harder + mode = get_cache_mode(self.service, cache_pool) + if mode == 'readonly': + check_call(['ceph', '--id', self.service, 'osd', 'tier', 'cache-mode', cache_pool, 'none']) + check_call(['ceph', '--id', self.service, 'osd', 'tier', 'remove', self.name, cache_pool]) + + elif mode == 'writeback': + pool_forward_cmd = ['ceph', '--id', self.service, 'osd', 'tier', + 'cache-mode', cache_pool, 'forward'] + if cmp_pkgrevno('ceph-common', '10.1') >= 0: + # Jewel added a mandatory flag + pool_forward_cmd.append('--yes-i-really-mean-it') + + check_call(pool_forward_cmd) + # Flush the cache and wait for it to return + check_call(['rados', '--id', self.service, '-p', cache_pool, 'cache-flush-evict-all']) + check_call(['ceph', '--id', self.service, 'osd', 'tier', 'remove-overlay', self.name]) + check_call(['ceph', '--id', self.service, 'osd', 'tier', 'remove', self.name, cache_pool]) + + def get_pgs(self, pool_size, percent_data=DEFAULT_POOL_WEIGHT, + device_class=None): + """Return the number of placement groups to use when creating the pool. + + Returns the number of placement groups which should be specified when + creating the pool. This is based upon the calculation guidelines + provided by the Ceph Placement Group Calculator (located online at + http://ceph.com/pgcalc/). 
+ + The number of placement groups are calculated using the following: + + (Target PGs per OSD) * (OSD #) * (%Data) + ---------------------------------------- + (Pool size) + + Per the upstream guidelines, the OSD # should really be considered + based on the number of OSDs which are eligible to be selected by the + pool. Since the pool creation doesn't specify any of CRUSH set rules, + the default rule will be dependent upon the type of pool being + created (replicated or erasure). + + This code makes no attempt to determine the number of OSDs which can be + selected for the specific rule, rather it is left to the user to tune + in the form of 'expected-osd-count' config option. + + :param pool_size: int. pool_size is either the number of replicas for + replicated pools or the K+M sum for erasure coded pools + :param percent_data: float. the percentage of data that is expected to + be contained in the pool for the specific OSD set. Default value + is to assume 10% of the data is for this pool, which is a + relatively low % of the data but allows for the pg_num to be + increased. NOTE: the default is primarily to handle the scenario + where related charms requiring pools has not been upgraded to + include an update to indicate their relative usage of the pools. + :param device_class: str. class of storage to use for basis of pgs + calculation; ceph supports nvme, ssd and hdd by default based + on presence of devices of each type in the deployment. + :return: int. The number of pgs to use. + """ + + # Note: This calculation follows the approach that is provided + # by the Ceph PG Calculator located at http://ceph.com/pgcalc/. + validator(value=pool_size, valid_type=int) + + # Ensure that percent data is set to something - even with a default + # it can be set to None, which would wreak havoc below. + if percent_data is None: + percent_data = DEFAULT_POOL_WEIGHT + + # If the expected-osd-count is specified, then use the max between + # the expected-osd-count and the actual osd_count + osd_list = get_osds(self.service, device_class) + expected = config('expected-osd-count') or 0 + + if osd_list: + if device_class: + osd_count = len(osd_list) + else: + osd_count = max(expected, len(osd_list)) + + # Log a message to provide some insight if the calculations claim + # to be off because someone is setting the expected count and + # there are more OSDs in reality. Try to make a proper guess + # based upon the cluster itself. + if not device_class and expected and osd_count != expected: + log("Found more OSDs than provided expected count. " + "Using the actual count instead", INFO) + elif expected: + # Use the expected-osd-count in older ceph versions to allow for + # a more accurate pg calculations + osd_count = expected + else: + # NOTE(james-page): Default to 200 for older ceph versions + # which don't support OSD query from cli + return LEGACY_PG_COUNT + + percent_data /= 100.0 + target_pgs_per_osd = config('pgs-per-osd') or DEFAULT_PGS_PER_OSD_TARGET + num_pg = (target_pgs_per_osd * osd_count * percent_data) // pool_size + + # NOTE: ensure a sane minimum number of PGS otherwise we don't get any + # reasonable data distribution in minimal OSD configurations + if num_pg < DEFAULT_MINIMUM_PGS: + num_pg = DEFAULT_MINIMUM_PGS + + # The CRUSH algorithm has a slight optimization for placement groups + # with powers of 2 so find the nearest power of 2. If the nearest + # power of 2 is more than 25% below the original value, the next + # highest value is used. 
To do this, find the nearest power of 2 such + # that 2^n <= num_pg, check to see if its within the 25% tolerance. + exponent = math.floor(math.log(num_pg, 2)) + nearest = 2 ** exponent + if (num_pg - nearest) > (num_pg * 0.25): + # Choose the next highest power of 2 since the nearest is more + # than 25% below the original value. + return int(nearest * 2) + else: + return int(nearest) + + +class ReplicatedPool(Pool): + def __init__(self, service, name, pg_num=None, replicas=2, + percent_data=10.0, app_name=None): + super(ReplicatedPool, self).__init__(service=service, name=name) + self.replicas = replicas + self.percent_data = percent_data + if pg_num: + # Since the number of placement groups were specified, ensure + # that there aren't too many created. + max_pgs = self.get_pgs(self.replicas, 100.0) + self.pg_num = min(pg_num, max_pgs) + else: + self.pg_num = self.get_pgs(self.replicas, percent_data) + if app_name: + self.app_name = app_name + else: + self.app_name = 'unknown' + + def create(self): + if not pool_exists(self.service, self.name): + nautilus_or_later = cmp_pkgrevno('ceph-common', '14.2.0') >= 0 + # Create it + if nautilus_or_later: + cmd = [ + 'ceph', '--id', self.service, 'osd', 'pool', 'create', + '--pg-num-min={}'.format( + min(AUTOSCALER_DEFAULT_PGS, self.pg_num) + ), + self.name, str(self.pg_num) + ] + else: + cmd = [ + 'ceph', '--id', self.service, 'osd', 'pool', 'create', + self.name, str(self.pg_num) + ] + + try: + check_call(cmd) + # Set the pool replica size + update_pool(client=self.service, + pool=self.name, + settings={'size': str(self.replicas)}) + if nautilus_or_later: + # Ensure we set the expected pool ratio + update_pool(client=self.service, + pool=self.name, + settings={'target_size_ratio': str(self.percent_data / 100.0)}) + try: + set_app_name_for_pool(client=self.service, + pool=self.name, + name=self.app_name) + except CalledProcessError: + log('Could not set app name for pool {}'.format(self.name), level=WARNING) + if 'pg_autoscaler' in enabled_manager_modules(): + try: + enable_pg_autoscale(self.service, self.name) + except CalledProcessError as e: + log('Could not configure auto scaling for pool {}: {}'.format( + self.name, e), level=WARNING) + except CalledProcessError: + raise + + +# Default jerasure erasure coded pool +class ErasurePool(Pool): + def __init__(self, service, name, erasure_code_profile="default", + percent_data=10.0, app_name=None): + super(ErasurePool, self).__init__(service=service, name=name) + self.erasure_code_profile = erasure_code_profile + self.percent_data = percent_data + if app_name: + self.app_name = app_name + else: + self.app_name = 'unknown' + + def create(self): + if not pool_exists(self.service, self.name): + # Try to find the erasure profile information in order to properly + # size the number of placement groups. The size of an erasure + # coded placement group is calculated as k+m. 
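+            # (Illustrative note, not from upstream: the defaults used by
+            # create_erasure_profile() below are k=2 data chunks and m=1
+            # coding chunks, so get_pgs() would be called with a pool
+            # size of 3.)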
+            erasure_profile = get_erasure_profile(self.service,
+                                                  self.erasure_code_profile)
+
+            # Check for errors
+            if erasure_profile is None:
+                msg = ("Failed to discover erasure profile named "
+                       "{}".format(self.erasure_code_profile))
+                log(msg, level=ERROR)
+                raise PoolCreationError(msg)
+            if 'k' not in erasure_profile or 'm' not in erasure_profile:
+                # Error
+                msg = ("Unable to find k (data chunks) or m (coding chunks) "
+                       "in erasure profile {}".format(erasure_profile))
+                log(msg, level=ERROR)
+                raise PoolCreationError(msg)
+
+            k = int(erasure_profile['k'])
+            m = int(erasure_profile['m'])
+            pgs = self.get_pgs(k + m, self.percent_data)
+            nautilus_or_later = cmp_pkgrevno('ceph-common', '14.2.0') >= 0
+            # Create it
+            if nautilus_or_later:
+                cmd = [
+                    'ceph', '--id', self.service, 'osd', 'pool', 'create',
+                    '--pg-num-min={}'.format(
+                        min(AUTOSCALER_DEFAULT_PGS, pgs)
+                    ),
+                    self.name, str(pgs), str(pgs),
+                    'erasure', self.erasure_code_profile
+                ]
+            else:
+                cmd = [
+                    'ceph', '--id', self.service, 'osd', 'pool', 'create',
+                    self.name, str(pgs), str(pgs),
+                    'erasure', self.erasure_code_profile
+                ]
+
+            try:
+                check_call(cmd)
+                try:
+                    set_app_name_for_pool(client=self.service,
+                                          pool=self.name,
+                                          name=self.app_name)
+                except CalledProcessError:
+                    log('Could not set app name for pool {}'.format(self.name), level=WARNING)
+                if nautilus_or_later:
+                    # Ensure we set the expected pool ratio
+                    update_pool(client=self.service,
+                                pool=self.name,
+                                settings={'target_size_ratio': str(self.percent_data / 100.0)})
+                if 'pg_autoscaler' in enabled_manager_modules():
+                    try:
+                        enable_pg_autoscale(self.service, self.name)
+                    except CalledProcessError as e:
+                        log('Could not configure auto scaling for pool {}: {}'.format(
+                            self.name, e), level=WARNING)
+            except CalledProcessError:
+                raise
+
+
+def enabled_manager_modules():
+    """Return a list of enabled manager modules.
+
+    :rtype: List[str]
+    """
+    cmd = ['ceph', 'mgr', 'module', 'ls']
+    try:
+        modules = check_output(cmd)
+        if six.PY3:
+            modules = modules.decode('UTF-8')
+    except CalledProcessError as e:
+        log("Failed to list ceph modules: {}".format(e), WARNING)
+        return []
+    modules = json.loads(modules)
+    return modules['enabled_modules']
+
+
+def enable_pg_autoscale(service, pool_name):
+    """
+    Enable Ceph's PG autoscaler for the specified pool.
+
+    :param service: six.string_types. The Ceph user name to run the command under
+    :param pool_name: six.string_types. The name of the pool to enable autoscaling on
+    :raise: CalledProcessError if the command fails
+    """
+    check_call(['ceph', '--id', service, 'osd', 'pool', 'set', pool_name, 'pg_autoscale_mode', 'on'])
+
+
+def get_mon_map(service):
+    """
+    Returns the current monitor map.
+    :param service: six.string_types. The Ceph user name to run the command under
+    :return: dict. The parsed monitor map. :raise: ValueError if the monmap
+      fails to parse. Also raises CalledProcessError if our ceph command fails
+    """
+    try:
+        mon_status = check_output(['ceph', '--id', service,
+                                   'mon_status', '--format=json'])
+        if six.PY3:
+            mon_status = mon_status.decode('UTF-8')
+        try:
+            return json.loads(mon_status)
+        except ValueError as v:
+            log("Unable to parse mon_status json: {}. Error: {}"
+                .format(mon_status, str(v)))
+            raise
+    except CalledProcessError as e:
+        log("mon_status command failed with message: {}"
+            .format(str(e)))
+        raise
+
+
+def hash_monitor_names(service):
+    """
+    Uses the get_mon_map() function to get information about the monitor
+    cluster.
+    Hash the name of each monitor. Return a sorted list of monitor hashes
+    in ascending order.
+    :param service: six.string_types. The Ceph user name to run the command under
+    :rtype: list. Sorted list of SHA224 hex digests of the monitor names, or
+        None if the monitor map lists no monitors. Each entry in the monitor
+        map looks like: {
+        'name': 'ip-172-31-13-165',
+        'rank': 0,
+        'addr': '172.31.13.165:6789/0'}
+    """
+    try:
+        hash_list = []
+        monitor_list = get_mon_map(service=service)
+        if monitor_list['monmap']['mons']:
+            for mon in monitor_list['monmap']['mons']:
+                hash_list.append(
+                    hashlib.sha224(mon['name'].encode('utf-8')).hexdigest())
+            return sorted(hash_list)
+        else:
+            return None
+    except (ValueError, CalledProcessError):
+        raise
+
+
+def monitor_key_delete(service, key):
+    """
+    Delete a key and value pair from the monitor cluster.
+    :param service: six.string_types. The Ceph user name to run the command under
+    :param key: six.string_types. The key to delete.
+    """
+    try:
+        check_output(
+            ['ceph', '--id', service,
+             'config-key', 'del', str(key)])
+    except CalledProcessError as e:
+        log("Monitor config-key del failed with message: {}".format(
+            e.output))
+        raise
+
+
+def monitor_key_set(service, key, value):
+    """
+    Sets a key value pair on the monitor cluster.
+    :param service: six.string_types. The Ceph user name to run the command under
+    :param key: six.string_types. The key to set.
+    :param value: The value to set. This will be converted to a string
+        before setting
+    """
+    try:
+        check_output(
+            ['ceph', '--id', service,
+             'config-key', 'put', str(key), str(value)])
+    except CalledProcessError as e:
+        log("Monitor config-key put failed with message: {}".format(
+            e.output))
+        raise
+
+
+def monitor_key_get(service, key):
+    """
+    Gets the value of an existing key in the monitor cluster.
+    :param service: six.string_types. The Ceph user name to run the command under
+    :param key: six.string_types. The key to search for.
+    :return: Returns the value of that key or None if not found.
+    """
+    try:
+        output = check_output(
+            ['ceph', '--id', service,
+             'config-key', 'get', str(key)]).decode('UTF-8')
+        return output
+    except CalledProcessError as e:
+        log("Monitor config-key get failed with message: {}".format(
+            e.output))
+        return None
+
+
+def monitor_key_exists(service, key):
+    """
+    Searches for the existence of a key in the monitor cluster.
+    :param service: six.string_types. The Ceph user name to run the command under
+    :param key: six.string_types. The key to search for
+    :return: Returns True if the key exists, False if not and raises an
+        exception if an unknown error occurs. :raise: CalledProcessError if
+        an unknown error occurs
+    """
+    try:
+        check_call(
+            ['ceph', '--id', service,
+             'config-key', 'exists', str(key)])
+        # We can safely return True here because Ceph exits with ENOENT
+        # if the key wasn't found
+        return True
+    except CalledProcessError as e:
+        if e.returncode == errno.ENOENT:
+            return False
+        else:
+            log("Unknown error from ceph config-key exists: {} {}".format(
+                e.returncode, e.output))
+            raise
+
+
+def get_erasure_profile(service, name):
+    """
+    Get an existing erasure code profile if it already exists.
+    Returns json formatted output.
+    :param service: six.string_types. The Ceph user name to run the command under
+    :param name: six.string_types. Name of the profile to fetch.
+    :return: dict. The profile as parsed json, or None on error.
+    """
+    try:
+        out = check_output(['ceph', '--id', service,
+                            'osd', 'erasure-code-profile', 'get',
+                            name, '--format=json'])
+        if six.PY3:
+            out = out.decode('UTF-8')
+        return json.loads(out)
+    except (CalledProcessError, OSError, ValueError):
+        return None
+
+
+def pool_set(service, pool_name, key, value):
+    """
+    Sets a value for a RADOS pool in ceph.
+    :param service: six.string_types. The Ceph user name to run the command under
+    :param pool_name: six.string_types
+    :param key: six.string_types
+    :param value:
+    :return: None. Can raise CalledProcessError
+    """
+    cmd = ['ceph', '--id', service, 'osd', 'pool', 'set', pool_name, key,
+           str(value).lower()]
+    try:
+        check_call(cmd)
+    except CalledProcessError:
+        raise
+
+
+def snapshot_pool(service, pool_name, snapshot_name):
+    """
+    Snapshots a RADOS pool in ceph.
+    :param service: six.string_types. The Ceph user name to run the command under
+    :param pool_name: six.string_types
+    :param snapshot_name: six.string_types
+    :return: None. Can raise CalledProcessError
+    """
+    cmd = ['ceph', '--id', service, 'osd', 'pool', 'mksnap', pool_name, snapshot_name]
+    try:
+        check_call(cmd)
+    except CalledProcessError:
+        raise
+
+
+def remove_pool_snapshot(service, pool_name, snapshot_name):
+    """
+    Remove a snapshot from a RADOS pool in ceph.
+    :param service: six.string_types. The Ceph user name to run the command under
+    :param pool_name: six.string_types
+    :param snapshot_name: six.string_types
+    :return: None. Can raise CalledProcessError
+    """
+    cmd = ['ceph', '--id', service, 'osd', 'pool', 'rmsnap', pool_name, snapshot_name]
+    try:
+        check_call(cmd)
+    except CalledProcessError:
+        raise
+
+
+def set_pool_quota(service, pool_name, max_bytes=None, max_objects=None):
+    """
+    :param service: The Ceph user name to run the command under
+    :type service: str
+    :param pool_name: Name of pool
+    :type pool_name: str
+    :param max_bytes: Maximum bytes quota to apply
+    :type max_bytes: int
+    :param max_objects: Maximum objects quota to apply
+    :type max_objects: int
+    :raises: subprocess.CalledProcessError
+    """
+    cmd = ['ceph', '--id', service, 'osd', 'pool', 'set-quota', pool_name]
+    if max_bytes:
+        cmd = cmd + ['max_bytes', str(max_bytes)]
+    if max_objects:
+        cmd = cmd + ['max_objects', str(max_objects)]
+    check_call(cmd)
+
+
+def remove_pool_quota(service, pool_name):
+    """
+    Remove the byte quota from a RADOS pool in ceph.
+    :param service: six.string_types. The Ceph user name to run the command under
+    :param pool_name: six.string_types
+    :return: None. Can raise CalledProcessError
+    """
+    cmd = ['ceph', '--id', service, 'osd', 'pool', 'set-quota', pool_name, 'max_bytes', '0']
+    try:
+        check_call(cmd)
+    except CalledProcessError:
+        raise
+
+
+def remove_erasure_profile(service, profile_name):
+    """
+    Remove an existing erasure code profile.
+    :param service: six.string_types. The Ceph user name to run the command under
+    :param profile_name: six.string_types
+    :return: None.
Can raise CalledProcessError + """ + cmd = ['ceph', '--id', service, 'osd', 'erasure-code-profile', 'rm', + profile_name] + try: + check_call(cmd) + except CalledProcessError: + raise + + +def create_erasure_profile(service, profile_name, erasure_plugin_name='jerasure', + failure_domain='host', + data_chunks=2, coding_chunks=1, + locality=None, durability_estimator=None, + device_class=None): + """ + Create a new erasure code profile if one does not already exist for it. Updates + the profile if it exists. Please see http://docs.ceph.com/docs/master/rados/operations/erasure-code-profile/ + for more details + :param service: six.string_types. The Ceph user name to run the command under + :param profile_name: six.string_types + :param erasure_plugin_name: six.string_types + :param failure_domain: six.string_types. One of ['chassis', 'datacenter', 'host', 'osd', 'pdu', 'pod', 'rack', 'region', + 'room', 'root', 'row']) + :param data_chunks: int + :param coding_chunks: int + :param locality: int + :param durability_estimator: int + :param device_class: six.string_types + :return: None. Can raise CalledProcessError + """ + # Ensure this failure_domain is allowed by Ceph + validator(failure_domain, six.string_types, + ['chassis', 'datacenter', 'host', 'osd', 'pdu', 'pod', 'rack', 'region', 'room', 'root', 'row']) + + cmd = ['ceph', '--id', service, 'osd', 'erasure-code-profile', 'set', profile_name, + 'plugin=' + erasure_plugin_name, 'k=' + str(data_chunks), 'm=' + str(coding_chunks) + ] + if locality is not None and durability_estimator is not None: + raise ValueError("create_erasure_profile should be called with k, m and one of l or c but not both.") + + luminous_or_later = cmp_pkgrevno('ceph-common', '12.0.0') >= 0 + # failure_domain changed in luminous + if luminous_or_later: + cmd.append('crush-failure-domain=' + failure_domain) + else: + cmd.append('ruleset-failure-domain=' + failure_domain) + + # device class new in luminous + if luminous_or_later and device_class: + cmd.append('crush-device-class={}'.format(device_class)) + else: + log('Skipping device class configuration (ceph < 12.0.0)', + level=DEBUG) + + # Add plugin specific information + if locality is not None: + # For local erasure codes + cmd.append('l=' + str(locality)) + if durability_estimator is not None: + # For Shec erasure codes + cmd.append('c=' + str(durability_estimator)) + + if erasure_profile_exists(service, profile_name): + cmd.append('--force') + + try: + check_call(cmd) + except CalledProcessError: + raise + + +def rename_pool(service, old_name, new_name): + """ + Rename a Ceph pool from old_name to new_name + :param service: six.string_types. The Ceph user name to run the command under + :param old_name: six.string_types + :param new_name: six.string_types + :return: None + """ + validator(value=old_name, valid_type=six.string_types) + validator(value=new_name, valid_type=six.string_types) + + cmd = ['ceph', '--id', service, 'osd', 'pool', 'rename', old_name, new_name] + check_call(cmd) + + +def erasure_profile_exists(service, name): + """ + Check to see if an Erasure code profile already exists. + :param service: six.string_types. 
The Ceph user name to run the command under + :param name: six.string_types + :return: int or None + """ + validator(value=name, valid_type=six.string_types) + try: + check_call(['ceph', '--id', service, + 'osd', 'erasure-code-profile', 'get', + name]) + return True + except CalledProcessError: + return False + + +def get_cache_mode(service, pool_name): + """ + Find the current caching mode of the pool_name given. + :param service: six.string_types. The Ceph user name to run the command under + :param pool_name: six.string_types + :return: int or None + """ + validator(value=service, valid_type=six.string_types) + validator(value=pool_name, valid_type=six.string_types) + out = check_output(['ceph', '--id', service, + 'osd', 'dump', '--format=json']) + if six.PY3: + out = out.decode('UTF-8') + try: + osd_json = json.loads(out) + for pool in osd_json['pools']: + if pool['pool_name'] == pool_name: + return pool['cache_mode'] + return None + except ValueError: + raise + + +def pool_exists(service, name): + """Check to see if a RADOS pool already exists.""" + try: + out = check_output(['rados', '--id', service, 'lspools']) + if six.PY3: + out = out.decode('UTF-8') + except CalledProcessError: + return False + + return name in out.split() + + +def get_osds(service, device_class=None): + """Return a list of all Ceph Object Storage Daemons currently in the + cluster (optionally filtered by storage device class). + + :param device_class: Class of storage device for OSD's + :type device_class: str + """ + luminous_or_later = cmp_pkgrevno('ceph-common', '12.0.0') >= 0 + if luminous_or_later and device_class: + out = check_output(['ceph', '--id', service, + 'osd', 'crush', 'class', + 'ls-osd', device_class, + '--format=json']) + else: + out = check_output(['ceph', '--id', service, + 'osd', 'ls', + '--format=json']) + if six.PY3: + out = out.decode('UTF-8') + return json.loads(out) + + +def install(): + """Basic Ceph client installation.""" + ceph_dir = "/etc/ceph" + if not os.path.exists(ceph_dir): + os.mkdir(ceph_dir) + + apt_install('ceph-common', fatal=True) + + +def rbd_exists(service, pool, rbd_img): + """Check to see if a RADOS block device exists.""" + try: + out = check_output(['rbd', 'list', '--id', + service, '--pool', pool]) + if six.PY3: + out = out.decode('UTF-8') + except CalledProcessError: + return False + + return rbd_img in out + + +def create_rbd_image(service, pool, image, sizemb): + """Create a new RADOS block device.""" + cmd = ['rbd', 'create', image, '--size', str(sizemb), '--id', service, + '--pool', pool] + check_call(cmd) + + +def update_pool(client, pool, settings): + cmd = ['ceph', '--id', client, 'osd', 'pool', 'set', pool] + for k, v in six.iteritems(settings): + cmd.append(k) + cmd.append(v) + + check_call(cmd) + + +def set_app_name_for_pool(client, pool, name): + """ + Calls `osd pool application enable` for the specified pool name + + :param client: Name of the ceph client to use + :type client: str + :param pool: Pool to set app name for + :type pool: str + :param name: app name for the specified pool + :type name: str + + :raises: CalledProcessError if ceph call fails + """ + if cmp_pkgrevno('ceph-common', '12.0.0') >= 0: + cmd = ['ceph', '--id', client, 'osd', 'pool', + 'application', 'enable', pool, name] + check_call(cmd) + + +def create_pool(service, name, replicas=3, pg_num=None): + """Create a new RADOS pool.""" + if pool_exists(service, name): + log("Ceph pool {} already exists, skipping creation".format(name), + level=WARNING) + return + + if not pg_num: + # 
Calculate the number of placement groups based + # on upstream recommended best practices. + osds = get_osds(service) + if osds: + pg_num = (len(osds) * 100 // replicas) + else: + # NOTE(james-page): Default to 200 for older ceph versions + # which don't support OSD query from cli + pg_num = 200 + + cmd = ['ceph', '--id', service, 'osd', 'pool', 'create', name, str(pg_num)] + check_call(cmd) + + update_pool(service, name, settings={'size': str(replicas)}) + + +def delete_pool(service, name): + """Delete a RADOS pool from ceph.""" + cmd = ['ceph', '--id', service, 'osd', 'pool', 'delete', name, + '--yes-i-really-really-mean-it'] + check_call(cmd) + + +def _keyfile_path(service): + return KEYFILE.format(service) + + +def _keyring_path(service): + return KEYRING.format(service) + + +def add_key(service, key): + """ + Add a key to a keyring. + + Creates the keyring if it doesn't already exist. + + Logs and returns if the key is already in the keyring. + """ + keyring = _keyring_path(service) + if os.path.exists(keyring): + with open(keyring, 'r') as ring: + if key in ring.read(): + log('Ceph keyring exists at %s and has not changed.' % keyring, + level=DEBUG) + return + log('Updating existing keyring %s.' % keyring, level=DEBUG) + + cmd = ['ceph-authtool', keyring, '--create-keyring', + '--name=client.{}'.format(service), '--add-key={}'.format(key)] + check_call(cmd) + log('Created new ceph keyring at %s.' % keyring, level=DEBUG) + + +def create_keyring(service, key): + """Deprecated. Please use the more accurately named 'add_key'""" + return add_key(service, key) + + +def delete_keyring(service): + """Delete an existing Ceph keyring.""" + keyring = _keyring_path(service) + if not os.path.exists(keyring): + log('Keyring does not exist at %s' % keyring, level=WARNING) + return + + os.remove(keyring) + log('Deleted ring at %s.' % keyring, level=INFO) + + +def create_key_file(service, key): + """Create a file containing key.""" + keyfile = _keyfile_path(service) + if os.path.exists(keyfile): + log('Keyfile exists at %s.' % keyfile, level=WARNING) + return + + with open(keyfile, 'w') as fd: + fd.write(key) + + log('Created new keyfile at %s.' 
% keyfile, level=INFO)
+
+
+def get_ceph_nodes(relation='ceph'):
+    """Query named relation to determine current nodes."""
+    hosts = []
+    for r_id in relation_ids(relation):
+        for unit in related_units(r_id):
+            hosts.append(relation_get('private-address', unit=unit, rid=r_id))
+
+    return hosts
+
+
+def configure(service, key, auth, use_syslog):
+    """Perform basic configuration of Ceph."""
+    add_key(service, key)
+    create_key_file(service, key)
+    hosts = get_ceph_nodes()
+    with open('/etc/ceph/ceph.conf', 'w') as ceph_conf:
+        ceph_conf.write(CEPH_CONF.format(auth=auth,
+                                         keyring=_keyring_path(service),
+                                         mon_hosts=",".join(map(str, hosts)),
+                                         use_syslog=use_syslog))
+    modprobe('rbd')
+
+
+def image_mapped(name):
+    """Determine whether a RADOS block device is mapped locally."""
+    try:
+        out = check_output(['rbd', 'showmapped'])
+        if six.PY3:
+            out = out.decode('UTF-8')
+    except CalledProcessError:
+        return False
+
+    return name in out
+
+
+def map_block_storage(service, pool, image):
+    """Map a RADOS block device for local use."""
+    cmd = [
+        'rbd',
+        'map',
+        '{}/{}'.format(pool, image),
+        '--user',
+        service,
+        '--secret',
+        _keyfile_path(service),
+    ]
+    check_call(cmd)
+
+
+def filesystem_mounted(fs):
+    """Determine whether a filesystem is already mounted."""
+    return fs in [f for f, m in mounts()]
+
+
+def make_filesystem(blk_device, fstype='ext4', timeout=10):
+    """Make a new filesystem on the specified block device."""
+    count = 0
+    e_noent = errno.ENOENT
+    while not os.path.exists(blk_device):
+        if count >= timeout:
+            log('Gave up waiting on block device %s' % blk_device,
+                level=ERROR)
+            raise IOError(e_noent, os.strerror(e_noent), blk_device)
+
+        log('Waiting for block device %s to appear' % blk_device,
+            level=DEBUG)
+        count += 1
+        time.sleep(1)
+    else:
+        log('Formatting block device %s as filesystem %s.' %
+            (blk_device, fstype), level=INFO)
+        check_call(['mkfs', '-t', fstype, blk_device])
+
+
+def place_data_on_block_device(blk_device, data_src_dst):
+    """Migrate data in data_src_dst to blk_device and then remount."""
+    # mount block device into /mnt
+    mount(blk_device, '/mnt')
+    # copy data to /mnt
+    copy_files(data_src_dst, '/mnt')
+    # umount block device
+    umount('/mnt')
+    # Grab user/group IDs from original source
+    _dir = os.stat(data_src_dst)
+    uid = _dir.st_uid
+    gid = _dir.st_gid
+    # re-mount where the data should originally be
+    # TODO: persist is currently a NO-OP in core.host
+    mount(blk_device, data_src_dst, persist=True)
+    # ensure original ownership of new mount.
+    os.chown(data_src_dst, uid, gid)
+
+
+def copy_files(src, dst, symlinks=False, ignore=None):
+    """Copy files from src to dst."""
+    for item in os.listdir(src):
+        s = os.path.join(src, item)
+        d = os.path.join(dst, item)
+        if os.path.isdir(s):
+            shutil.copytree(s, d, symlinks, ignore)
+        else:
+            shutil.copy2(s, d)
+
+
+def ensure_ceph_storage(service, pool, rbd_img, sizemb, mount_point,
+                        blk_device, fstype, system_services=[],
+                        replicas=3):
+    """NOTE: This function must only be called from a single service unit for
+    the same rbd_img otherwise data loss will occur.
+
+    Ensures the given pool and RBD image exist, the image is mapped to a
+    block device, and the device is formatted and mounted at the given
+    mount_point.
+
+    If formatting a device for the first time, data existing at mount_point
+    will be migrated to the RBD device before being re-mounted.
+
+    All services listed in system_services will be stopped prior to data
+    migration and restarted when complete.
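+
+    A hypothetical invocation (all names and values illustrative only):
+
+        ensure_ceph_storage(service='mysql', pool='mysql-pool',
+                            rbd_img='mysql-data', sizemb=1024,
+                            mount_point='/var/lib/mysql',
+                            blk_device='/dev/rbd1', fstype='ext4',
+                            system_services=['mysql'])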
+ """ + # Ensure pool, RBD image, RBD mappings are in place. + if not pool_exists(service, pool): + log('Creating new pool {}.'.format(pool), level=INFO) + create_pool(service, pool, replicas=replicas) + + if not rbd_exists(service, pool, rbd_img): + log('Creating RBD image ({}).'.format(rbd_img), level=INFO) + create_rbd_image(service, pool, rbd_img, sizemb) + + if not image_mapped(rbd_img): + log('Mapping RBD Image {} as a Block Device.'.format(rbd_img), + level=INFO) + map_block_storage(service, pool, rbd_img) + + # make file system + # TODO: What happens if for whatever reason this is run again and + # the data is already in the rbd device and/or is mounted?? + # When it is mounted already, it will fail to make the fs + # XXX: This is really sketchy! Need to at least add an fstab entry + # otherwise this hook will blow away existing data if its executed + # after a reboot. + if not filesystem_mounted(mount_point): + make_filesystem(blk_device, fstype) + + for svc in system_services: + if service_running(svc): + log('Stopping services {} prior to migrating data.' + .format(svc), level=DEBUG) + service_stop(svc) + + place_data_on_block_device(blk_device, mount_point) + + for svc in system_services: + log('Starting service {} after migrating data.' + .format(svc), level=DEBUG) + service_start(svc) + + +def ensure_ceph_keyring(service, user=None, group=None, + relation='ceph', key=None): + """Ensures a ceph keyring is created for a named service and optionally + ensures user and group ownership. + + @returns boolean: Flag to indicate whether a key was successfully written + to disk based on either relation data or a supplied key + """ + if not key: + for rid in relation_ids(relation): + for unit in related_units(rid): + key = relation_get('key', rid=rid, unit=unit) + if key: + break + + if not key: + return False + + add_key(service=service, key=key) + keyring = _keyring_path(service) + if user and group: + check_call(['chown', '%s.%s' % (user, group), keyring]) + + return True + + +class CephBrokerRq(object): + """Ceph broker request. + + Multiple operations can be added to a request and sent to the Ceph broker + to be executed. + + Request is json-encoded for sending over the wire. + + The API is versioned and defaults to version 1. + """ + + def __init__(self, api_version=1, request_id=None): + self.api_version = api_version + if request_id: + self.request_id = request_id + else: + self.request_id = str(uuid.uuid1()) + self.ops = [] + + def add_op(self, op): + """Add an op if it is not already in the list. + + :param op: Operation to add. + :type op: dict + """ + if op not in self.ops: + self.ops.append(op) + + def add_op_request_access_to_group(self, name, namespace=None, + permission=None, key_name=None, + object_prefix_permissions=None): + """ + Adds the requested permissions to the current service's Ceph key, + allowing the key to access only the specified pools or + object prefixes. object_prefix_permissions should be a dictionary + keyed on the permission with the corresponding value being a list + of prefixes to apply that permission to. 
+        {
+            'rwx': ['prefix1', 'prefix2'],
+            'class-read': ['prefix3']}
+        """
+        self.add_op({
+            'op': 'add-permissions-to-key', 'group': name,
+            'namespace': namespace,
+            'name': key_name or service_name(),
+            'group-permission': permission,
+            'object-prefix-permissions': object_prefix_permissions})
+
+    def add_op_create_pool(self, name, replica_count=3, pg_num=None,
+                           weight=None, group=None, namespace=None,
+                           app_name=None, max_bytes=None, max_objects=None):
+        """DEPRECATED: Use ``add_op_create_replicated_pool()`` or
+        ``add_op_create_erasure_pool()`` instead.
+        """
+        return self.add_op_create_replicated_pool(
+            name, replica_count=replica_count, pg_num=pg_num, weight=weight,
+            group=group, namespace=namespace, app_name=app_name,
+            max_bytes=max_bytes, max_objects=max_objects)
+
+    def add_op_create_replicated_pool(self, name, replica_count=3, pg_num=None,
+                                      weight=None, group=None, namespace=None,
+                                      app_name=None, max_bytes=None,
+                                      max_objects=None):
+        """Adds an operation to create a replicated pool.
+
+        :param name: Name of pool to create
+        :type name: str
+        :param replica_count: Number of copies Ceph should keep of your data.
+        :type replica_count: int
+        :param pg_num: Request specific number of Placement Groups to create
+                       for pool.
+        :type pg_num: int
+        :param weight: The percentage of data that is expected to be contained
+                       in the pool from the total available space on the OSDs.
+                       Used to calculate number of Placement Groups to create
+                       for pool.
+        :type weight: float
+        :param group: Group to add pool to
+        :type group: str
+        :param namespace: Group namespace
+        :type namespace: str
+        :param app_name: (Optional) Tag pool with application name. Note that
+                         there are certain protocols emerging upstream with
+                         regard to meaningful application names to use.
+                         Examples are ``rbd`` and ``rgw``.
+        :type app_name: str
+        :param max_bytes: Maximum bytes quota to apply
+        :type max_bytes: int
+        :param max_objects: Maximum objects quota to apply
+        :type max_objects: int
+        """
+        if pg_num and weight:
+            raise ValueError('pg_num and weight are mutually exclusive')
+
+        self.add_op({'op': 'create-pool', 'name': name,
+                     'replicas': replica_count, 'pg_num': pg_num,
+                     'weight': weight, 'group': group,
+                     'group-namespace': namespace, 'app-name': app_name,
+                     'max-bytes': max_bytes, 'max-objects': max_objects})
+
+    def add_op_create_erasure_pool(self, name, erasure_profile=None,
+                                   weight=None, group=None, app_name=None,
+                                   max_bytes=None, max_objects=None):
+        """Adds an operation to create an erasure coded pool.
+
+        :param name: Name of pool to create
+        :type name: str
+        :param erasure_profile: Name of erasure code profile to use. If not
+                                set the ceph-mon unit handling the broker
+                                request will set its default value.
+        :type erasure_profile: str
+        :param weight: The percentage of data that is expected to be contained
+                       in the pool from the total available space on the OSDs.
+        :type weight: float
+        :param group: Group to add pool to
+        :type group: str
+        :param app_name: (Optional) Tag pool with application name. Note that
+                         there are certain protocols emerging upstream with
+                         regard to meaningful application names to use.
+                         Examples are ``rbd`` and ``rgw``.
+        :type app_name: str
+        :param max_bytes: Maximum bytes quota to apply
+        :type max_bytes: int
+        :param max_objects: Maximum objects quota to apply
+        :type max_objects: int
+        """
+        self.add_op({'op': 'create-pool', 'name': name,
+                     'pool-type': 'erasure',
+                     'erasure-profile': erasure_profile,
+                     'weight': weight,
+                     'group': group, 'app-name': app_name,
+                     'max-bytes': max_bytes, 'max-objects': max_objects})
+
+    def set_ops(self, ops):
+        """Set request ops to provided value.
+
+        Useful for injecting ops that come from a previous request
+        to allow comparisons to ensure validity.
+        """
+        self.ops = ops
+
+    @property
+    def request(self):
+        return json.dumps({'api-version': self.api_version, 'ops': self.ops,
+                           'request-id': self.request_id})
+
+    def _ops_equal(self, other):
+        if len(self.ops) == len(other.ops):
+            for req_no in range(0, len(self.ops)):
+                for key in [
+                        'replicas', 'name', 'op', 'pg_num', 'weight',
+                        'group', 'group-namespace', 'group-permission',
+                        'object-prefix-permissions']:
+                    if self.ops[req_no].get(key) != other.ops[req_no].get(key):
+                        return False
+        else:
+            return False
+        return True
+
+    def __eq__(self, other):
+        if not isinstance(other, self.__class__):
+            return False
+        if self.api_version == other.api_version and \
+                self._ops_equal(other):
+            return True
+        else:
+            return False
+
+    def __ne__(self, other):
+        return not self.__eq__(other)
+
+
+class CephBrokerRsp(object):
+    """Ceph broker response.
+
+    Response is json-decoded and contents provided as methods/properties.
+
+    The API is versioned and defaults to version 1.
+    """
+
+    def __init__(self, encoded_rsp):
+        self.api_version = None
+        self.rsp = json.loads(encoded_rsp)
+
+    @property
+    def request_id(self):
+        return self.rsp.get('request-id')
+
+    @property
+    def exit_code(self):
+        return self.rsp.get('exit-code')
+
+    @property
+    def exit_msg(self):
+        return self.rsp.get('stderr')
+
+
+# Ceph Broker Conversation:
+# If a charm needs an action to be taken by ceph it can create a CephBrokerRq
+# and send that request to ceph via the ceph relation. The CephBrokerRq has a
+# unique id so that the client can identify which CephBrokerRsp is associated
+# with the request. Ceph will also respond to each client unit individually
+# creating a response key per client unit eg glance/0 will get a CephBrokerRsp
+# via key broker-rsp-glance-0
+#
+# To use this the charm can just do something like:
+#
+# from charmhelpers.contrib.storage.linux.ceph import (
+#     send_request_if_needed,
+#     is_request_complete,
+#     CephBrokerRq,
+# )
+#
+# @hooks.hook('ceph-relation-changed')
+# def ceph_changed():
+#     rq = CephBrokerRq()
+#     rq.add_op_create_pool(name='poolname', replica_count=3)
+#
+#     if is_request_complete(rq):
+#
+#     else:
+#         send_request_if_needed(get_ceph_request())
+#
+# CephBrokerRq and CephBrokerRsp are serialized into JSON.
Below is an example
+# of glance having sent a request to ceph which ceph has successfully processed
+#  'ceph:8': {
+#      'ceph/0': {
+#          'auth': 'cephx',
+#          'broker-rsp-glance-0': '{"request-id": "0bc7dc54", "exit-code": 0}',
+#          'broker_rsp': '{"request-id": "0da543b8", "exit-code": 0}',
+#          'ceph-public-address': '10.5.44.103',
+#          'key': 'AQCLDttVuHXINhAAvI144CB09dYchhHyTUY9BQ==',
+#          'private-address': '10.5.44.103',
+#      },
+#      'glance/0': {
+#          'broker_req': ('{"api-version": 1, "request-id": "0bc7dc54", '
+#                         '"ops": [{"replicas": 3, "name": "glance", '
+#                         '"op": "create-pool"}]}'),
+#          'private-address': '10.5.44.109',
+#      },
+#  }
+
+def get_previous_request(rid):
+    """Return the last ceph broker request sent on a given relation
+
+    @param rid: Relation id to query for request
+    """
+    request = None
+    broker_req = relation_get(attribute='broker_req', rid=rid,
+                              unit=local_unit())
+    if broker_req:
+        request_data = json.loads(broker_req)
+        request = CephBrokerRq(api_version=request_data['api-version'],
+                               request_id=request_data['request-id'])
+        request.set_ops(request_data['ops'])
+
+    return request
+
+
+def get_request_states(request, relation='ceph'):
+    """Return a dict of requests per relation id with their corresponding
+    completion state.
+
+    This allows a charm, which has a request for ceph, to see whether there is
+    an equivalent request already being processed and if so what state that
+    request is in.
+
+    @param request: A CephBrokerRq object
+    """
+    complete = []
+    requests = {}
+    for rid in relation_ids(relation):
+        complete = False
+        previous_request = get_previous_request(rid)
+        if request == previous_request:
+            sent = True
+            complete = is_request_complete_for_rid(previous_request, rid)
+        else:
+            sent = False
+            complete = False
+
+        requests[rid] = {
+            'sent': sent,
+            'complete': complete,
+        }
+
+    return requests
+
+
+def is_request_sent(request, relation='ceph'):
+    """Check to see if a functionally equivalent request has already been sent
+
+    Returns True if a similar request has been sent
+
+    @param request: A CephBrokerRq object
+    """
+    states = get_request_states(request, relation=relation)
+    for rid in states.keys():
+        if not states[rid]['sent']:
+            return False
+
+    return True
+
+
+def is_request_complete(request, relation='ceph'):
+    """Check to see if a functionally equivalent request has already been
+    completed
+
+    Returns True if a similar request has been completed
+
+    @param request: A CephBrokerRq object
+    """
+    states = get_request_states(request, relation=relation)
+    for rid in states.keys():
+        if not states[rid]['complete']:
+            return False
+
+    return True
+
+
+def is_request_complete_for_rid(request, rid):
+    """Check if a given request has been completed on the given relation
+
+    @param request: A CephBrokerRq object
+    @param rid: Relation ID
+    """
+    broker_key = get_broker_rsp_key()
+    for unit in related_units(rid):
+        rdata = relation_get(rid=rid, unit=unit)
+        if rdata.get(broker_key):
+            rsp = CephBrokerRsp(rdata.get(broker_key))
+            if rsp.request_id == request.request_id:
+                if not rsp.exit_code:
+                    return True
+        else:
+            # The remote unit sent no reply targeted at this unit so either the
+            # remote ceph cluster does not support unit targeted replies or it
+            # has not processed our request yet.
+            if rdata.get('broker_rsp'):
+                request_data = json.loads(rdata['broker_rsp'])
+                if request_data.get('request-id'):
+                    log('Ignoring legacy broker_rsp without unit key as remote '
+                        'service supports unit specific replies', level=DEBUG)
+                else:
+                    log('Using legacy broker_rsp as remote service does not '
+                        'support unit specific replies', level=DEBUG)
+                    rsp = CephBrokerRsp(rdata['broker_rsp'])
+                    if not rsp.exit_code:
+                        return True
+
+    return False
+
+
+def get_broker_rsp_key():
+    """Return broker response key for this unit
+
+    This is the key that ceph is going to use to pass request status
+    information back to this unit
+    """
+    return 'broker-rsp-' + local_unit().replace('/', '-')
+
+
+def send_request_if_needed(request, relation='ceph'):
+    """Send broker request if an equivalent request has not already been sent
+
+    @param request: A CephBrokerRq object
+    """
+    if is_request_sent(request, relation=relation):
+        log('Request already sent but not complete, not sending new request',
+            level=DEBUG)
+    else:
+        for rid in relation_ids(relation):
+            log('Sending request {}'.format(request.request_id), level=DEBUG)
+            relation_set(relation_id=rid, broker_req=request.request)
+
+
+def has_broker_rsp(rid=None, unit=None):
+    """Return True if the broker_rsp key is 'truthy' (i.e. set to something) in the relation data.
+
+    :param rid: The relation to check (default of None means current relation)
+    :type rid: Union[str, None]
+    :param unit: The remote unit to check (default of None means current unit)
+    :type unit: Union[str, None]
+    :returns: True if broker key exists and is set to something 'truthy'
+    :rtype: bool
+    """
+    rdata = relation_get(rid=rid, unit=unit) or {}
+    broker_rsp = rdata.get(get_broker_rsp_key())
+    return True if broker_rsp else False
+
+
+def is_broker_action_done(action, rid=None, unit=None):
+    """Check whether broker action has completed yet.
+
+    @param action: name of action to be performed
+    @returns True if action complete otherwise False
+    """
+    rdata = relation_get(rid=rid, unit=unit) or {}
+    broker_rsp = rdata.get(get_broker_rsp_key())
+    if not broker_rsp:
+        return False
+
+    rsp = CephBrokerRsp(broker_rsp)
+    unit_name = local_unit().partition('/')[2]
+    key = "unit_{}_ceph_broker_action.{}".format(unit_name, action)
+    kvstore = kv()
+    val = kvstore.get(key=key)
+    if val and val == rsp.request_id:
+        return True
+
+    return False
+
+
+def mark_broker_action_done(action, rid=None, unit=None):
+    """Mark action as having been completed.
+
+    @param action: name of action to be performed
+    @returns None
+    """
+    rdata = relation_get(rid=rid, unit=unit) or {}
+    broker_rsp = rdata.get(get_broker_rsp_key())
+    if not broker_rsp:
+        return
+
+    rsp = CephBrokerRsp(broker_rsp)
+    unit_name = local_unit().partition('/')[2]
+    key = "unit_{}_ceph_broker_action.{}".format(unit_name, action)
+    kvstore = kv()
+    kvstore.set(key=key, value=rsp.request_id)
+    kvstore.flush()
+
+
+class CephConfContext(object):
+    """Ceph config (ceph.conf) context.
+
+    Supports user-provided Ceph configuration settings. Users can provide a
+    dictionary as the value for the config-flags charm option containing
+    Ceph configuration settings keyed by their section in ceph.conf.
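+
+    A hypothetical config-flags value (illustrative only) might look like:
+
+        {'global': {'mon osd down out interval': 300}}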
+    """
+    def __init__(self, permitted_sections=None):
+        self.permitted_sections = permitted_sections or []
+
+    def __call__(self):
+        conf = config('config-flags')
+        if not conf:
+            return {}
+
+        conf = config_flags_parser(conf)
+        if not isinstance(conf, dict):
+            log("Provided config-flags is not a dictionary - ignoring",
+                level=WARNING)
+            return {}
+
+        permitted = self.permitted_sections
+        if permitted:
+            diff = set(conf.keys()).difference(set(permitted))
+            if diff:
+                log("Config-flags contains invalid keys '%s' - they will be "
+                    "ignored" % (', '.join(diff)), level=WARNING)
+
+        ceph_conf = {}
+        for key in conf:
+            if permitted and key not in permitted:
+                log("Ignoring key '%s'" % key, level=WARNING)
+                continue
+
+            ceph_conf[key] = conf[key]
+        return ceph_conf
+
+
+class CephOSDConfContext(CephConfContext):
+    """Ceph config (ceph.conf) context.
+
+    Consolidates settings from config-flags via CephConfContext with
+    settings provided by the mons. The config-flag values are preserved in
+    conf['osd'], settings from the mons which do not clash with config-flag
+    settings are in conf['osd_from_client'] and finally settings which do
+    clash are in conf['osd_from_client_conflict']. Rather than silently drop
+    the conflicting settings they are provided in the context so they can be
+    rendered commented out to give some visibility to the admin.
+    """
+
+    def __init__(self, permitted_sections=None):
+        super(CephOSDConfContext, self).__init__(
+            permitted_sections=permitted_sections)
+        try:
+            self.settings_from_mons = get_osd_settings('mon')
+        except OSDSettingConflict:
+            log(
+                "OSD settings from mons are inconsistent, ignoring them",
+                level=WARNING)
+            self.settings_from_mons = {}
+
+    def filter_osd_from_mon_settings(self):
+        """Filter settings from client relation against config-flags.
+
+        :returns: A tuple of (
+            config-flag values,
+            client settings which do not conflict with config-flag values,
+            client settings which conflict with config-flag values)
+        :rtype: (OrderedDict, OrderedDict, OrderedDict)
+        """
+        ceph_conf = super(CephOSDConfContext, self).__call__()
+        conflicting_entries = {}
+        clear_entries = {}
+        for key, value in self.settings_from_mons.items():
+            if key in ceph_conf.get('osd', {}):
+                if ceph_conf['osd'][key] != value:
+                    conflicting_entries[key] = value
+            else:
+                clear_entries[key] = value
+        clear_entries = _order_dict_by_key(clear_entries)
+        conflicting_entries = _order_dict_by_key(conflicting_entries)
+        return ceph_conf, clear_entries, conflicting_entries
+
+    def __call__(self):
+        """Construct OSD config context.
+
+        Standard context with two additional special keys.
+            osd_from_client_conflict: client settings which conflict with
+                config-flag values
+            osd_from_client: settings which do not conflict with config-flag
+                values
+
+        :returns: OSD config context dict.
+ :rtype: dict + """ + conf, osd_clear, osd_conflict = self.filter_osd_from_mon_settings() + conf['osd_from_client_conflict'] = osd_conflict + conf['osd_from_client'] = osd_clear + return conf diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/storage/linux/loopback.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/storage/linux/loopback.py new file mode 100644 index 0000000000000000000000000000000000000000..74bab40e43a978e3d9e1e2f9c8975368092145c0 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/storage/linux/loopback.py @@ -0,0 +1,92 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import os +import re +from subprocess import ( + check_call, + check_output, +) + +import six + + +################################################## +# loopback device helpers. +################################################## +def loopback_devices(): + ''' + Parse through 'losetup -a' output to determine currently mapped + loopback devices. Output is expected to look like: + + /dev/loop0: [0807]:961814 (/tmp/my.img) + + or: + + /dev/loop0: [0807]:961814 (/tmp/my.img (deleted)) + + :returns: dict: a dict mapping {loopback_dev: backing_file} + ''' + loopbacks = {} + cmd = ['losetup', '-a'] + output = check_output(cmd) + if six.PY3: + output = output.decode('utf-8') + devs = [d.strip().split(' ', 2) for d in output.splitlines() if d != ''] + for dev, _, f in devs: + loopbacks[dev.replace(':', '')] = re.search(r'\((.+)\)', f).groups()[0] + return loopbacks + + +def create_loopback(file_path): + ''' + Create a loopback device for a given backing file. + + :returns: str: Full path to new loopback device (eg, /dev/loop0) + ''' + file_path = os.path.abspath(file_path) + check_call(['losetup', '--find', file_path]) + for d, f in six.iteritems(loopback_devices()): + if f == file_path: + return d + + +def ensure_loopback_device(path, size): + ''' + Ensure a loopback device exists for a given backing file path and size. + If it a loopback device is not mapped to file, a new one will be created. + + TODO: Confirm size of found loopback device. + + :returns: str: Full path to the ensured loopback device (eg, /dev/loop0) + ''' + for d, f in six.iteritems(loopback_devices()): + if f == path: + return d + + if not os.path.exists(path): + cmd = ['truncate', '--size', size, path] + check_call(cmd) + + return create_loopback(path) + + +def is_mapped_loopback_device(device): + """ + Checks if a given device name is an existing/mapped loopback device. + :param device: str: Full path to the device (eg, /dev/loop1). 
+ :returns: str: Path to the backing file if is a loopback device + empty string otherwise + """ + return loopback_devices().get(device, "") diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/storage/linux/lvm.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/storage/linux/lvm.py new file mode 100644 index 0000000000000000000000000000000000000000..c8bde69263f0e917d32d0e5d70abba1409b26012 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/storage/linux/lvm.py @@ -0,0 +1,182 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import functools +from subprocess import ( + CalledProcessError, + check_call, + check_output, + Popen, + PIPE, +) + + +################################################## +# LVM helpers. +################################################## +def deactivate_lvm_volume_group(block_device): + ''' + Deactivate any volume gruop associated with an LVM physical volume. + + :param block_device: str: Full path to LVM physical volume + ''' + vg = list_lvm_volume_group(block_device) + if vg: + cmd = ['vgchange', '-an', vg] + check_call(cmd) + + +def is_lvm_physical_volume(block_device): + ''' + Determine whether a block device is initialized as an LVM PV. + + :param block_device: str: Full path of block device to inspect. + + :returns: boolean: True if block device is a PV, False if not. + ''' + try: + check_output(['pvdisplay', block_device]) + return True + except CalledProcessError: + return False + + +def remove_lvm_physical_volume(block_device): + ''' + Remove LVM PV signatures from a given block device. + + :param block_device: str: Full path of block device to scrub. + ''' + p = Popen(['pvremove', '-ff', block_device], + stdin=PIPE) + p.communicate(input='y\n') + + +def list_lvm_volume_group(block_device): + ''' + List LVM volume group associated with a given block device. + + Assumes block device is a valid LVM PV. + + :param block_device: str: Full path of block device to inspect. + + :returns: str: Name of volume group associated with block device or None + ''' + vg = None + pvd = check_output(['pvdisplay', block_device]).splitlines() + for lvm in pvd: + lvm = lvm.decode('UTF-8') + if lvm.strip().startswith('VG Name'): + vg = ' '.join(lvm.strip().split()[2:]) + return vg + + +def create_lvm_physical_volume(block_device): + ''' + Initialize a block device as an LVM physical volume. + + :param block_device: str: Full path of block device to initialize. + + ''' + check_call(['pvcreate', block_device]) + + +def create_lvm_volume_group(volume_group, block_device): + ''' + Create an LVM volume group backed by a given block device. + + Assumes block device has already been initialized as an LVM PV. + + :param volume_group: str: Name of volume group to create. + :block_device: str: Full path of PV-initialized block device. 
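Stepping back to the loopback helpers above, a short sketch of their intended use; the image path and size are arbitrary examples, and the calls need root.

```python
from charmhelpers.contrib.storage.linux.loopback import (
    ensure_loopback_device,
    is_mapped_loopback_device,
)

# Create (or find) a loop device backed by a 1G sparse file.
dev = ensure_loopback_device('/srv/scratch.img', '1G')  # e.g. /dev/loop0

# is_mapped_loopback_device() returns the backing file, or '' if unmapped.
assert is_mapped_loopback_device(dev) == '/srv/scratch.img'
```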
+ ''' + check_call(['vgcreate', volume_group, block_device]) + + +def list_logical_volumes(select_criteria=None, path_mode=False): + ''' + List logical volumes + + :param select_criteria: str: Limit list to those volumes matching this + criteria (see 'lvs -S help' for more details) + :param path_mode: bool: return logical volume name in 'vg/lv' format, this + format is required for some commands like lvextend + :returns: [str]: List of logical volumes + ''' + lv_diplay_attr = 'lv_name' + if path_mode: + # Parsing output logic relies on the column order + lv_diplay_attr = 'vg_name,' + lv_diplay_attr + cmd = ['lvs', '--options', lv_diplay_attr, '--noheadings'] + if select_criteria: + cmd.extend(['--select', select_criteria]) + lvs = [] + for lv in check_output(cmd).decode('UTF-8').splitlines(): + if not lv: + continue + if path_mode: + lvs.append('/'.join(lv.strip().split())) + else: + lvs.append(lv.strip()) + return lvs + + +list_thin_logical_volume_pools = functools.partial( + list_logical_volumes, + select_criteria='lv_attr =~ ^t') + +list_thin_logical_volumes = functools.partial( + list_logical_volumes, + select_criteria='lv_attr =~ ^V') + + +def extend_logical_volume_by_device(lv_name, block_device): + ''' + Extends the size of logical volume lv_name by the amount of free space on + physical volume block_device. + + :param lv_name: str: name of logical volume to be extended (vg/lv format) + :param block_device: str: name of block_device to be allocated to lv_name + ''' + cmd = ['lvextend', lv_name, block_device] + check_call(cmd) + + +def create_logical_volume(lv_name, volume_group, size=None): + ''' + Create a new logical volume in an existing volume group + + :param lv_name: str: name of logical volume to be created. + :param volume_group: str: Name of volume group to use for the new volume. + :param size: str: Size of logical volume to create (100% if not supplied) + :raises subprocess.CalledProcessError: in the event that the lvcreate fails. + ''' + if size: + check_call([ + 'lvcreate', + '--yes', + '-L', + '{}'.format(size), + '-n', lv_name, volume_group + ]) + # create the lv with all the space available, this is needed because the + # system call is different for LVM + else: + check_call([ + 'lvcreate', + '--yes', + '-l', + '100%FREE', + '-n', lv_name, volume_group + ]) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/storage/linux/utils.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/storage/linux/utils.py new file mode 100644 index 0000000000000000000000000000000000000000..a35617606cf52d7cffc04ac245811b770fd95e8e --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/storage/linux/utils.py @@ -0,0 +1,128 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
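The lvm.py helpers just shown compose in a fixed order (physical volume, then volume group, then logical volume). A sketch under the assumption of a spare example disk at /dev/vdb and root privileges:

```python
from charmhelpers.contrib.storage.linux.lvm import (
    create_logical_volume,
    create_lvm_physical_volume,
    create_lvm_volume_group,
    is_lvm_physical_volume,
    list_logical_volumes,
)

block_device = '/dev/vdb'  # example spare disk

if not is_lvm_physical_volume(block_device):
    create_lvm_physical_volume(block_device)
    create_lvm_volume_group('data-vg', block_device)
    # No size argument, so the helper allocates 100%FREE of the VG.
    create_logical_volume('data-lv', 'data-vg')

# path_mode=True yields 'vg/lv' names, as required by lvextend.
print(list_logical_volumes(path_mode=True))
```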
+ +import os +import re +from stat import S_ISBLK + +from subprocess import ( + CalledProcessError, + check_call, + check_output, + call +) + + +def _luks_uuid(dev): + """ + Check to see if dev is a LUKS encrypted volume, returning the UUID + of volume if it is. + + :param: dev: path to block device to check. + :returns: str. UUID of LUKS device or None if not a LUKS device + """ + try: + cmd = ['cryptsetup', 'luksUUID', dev] + return check_output(cmd).decode('UTF-8').strip() + except CalledProcessError: + return None + + +def is_luks_device(dev): + """ + Determine if dev is a LUKS-formatted block device. + + :param: dev: A full path to a block device to check for LUKS header + presence + :returns: boolean: indicates whether a device is used based on LUKS header. + """ + return True if _luks_uuid(dev) else False + + +def is_mapped_luks_device(dev): + """ + Determine if dev is a mapped LUKS device + :param: dev: A full path to a block device to be checked + :returns: boolean: indicates whether a device is mapped + """ + _, dirs, _ = next(os.walk( + '/sys/class/block/{}/holders/' + .format(os.path.basename(os.path.realpath(dev)))) + ) + is_held = len(dirs) > 0 + return is_held and is_luks_device(dev) + + +def is_block_device(path): + ''' + Confirm device at path is a valid block device node. + + :returns: boolean: True if path is a block device, False if not. + ''' + if not os.path.exists(path): + return False + return S_ISBLK(os.stat(path).st_mode) + + +def zap_disk(block_device): + ''' + Clear a block device of partition table. Relies on sgdisk, which is + installed as pat of the 'gdisk' package in Ubuntu. + + :param block_device: str: Full path of block device to clean. + ''' + # https://github.com/ceph/ceph/commit/fdd7f8d83afa25c4e09aaedd90ab93f3b64a677b + # sometimes sgdisk exits non-zero; this is OK, dd will clean up + call(['sgdisk', '--zap-all', '--', block_device]) + call(['sgdisk', '--clear', '--mbrtogpt', '--', block_device]) + dev_end = check_output(['blockdev', '--getsz', + block_device]).decode('UTF-8') + gpt_end = int(dev_end.split()[0]) - 100 + check_call(['dd', 'if=/dev/zero', 'of=%s' % (block_device), + 'bs=1M', 'count=1']) + check_call(['dd', 'if=/dev/zero', 'of=%s' % (block_device), + 'bs=512', 'count=100', 'seek=%s' % (gpt_end)]) + + +def is_device_mounted(device): + '''Given a device path, return True if that device is mounted, and False + if it isn't. + + :param device: str: Full path of the device to check. + :returns: boolean: True if the path represents a mounted device, False if + it doesn't. + ''' + try: + out = check_output(['lsblk', '-P', device]).decode('UTF-8') + except Exception: + return False + return bool(re.search(r'MOUNTPOINT=".+"', out)) + + +def mkfs_xfs(device, force=False, inode_size=1024): + """Format device with XFS filesystem. + + By default this should fail if the device already has a filesystem on it. 
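A sketch tying together the checks and the destructive helpers above; the device name is an example and everything here requires root.

```python
from charmhelpers.contrib.storage.linux.utils import (
    is_block_device,
    is_device_mounted,
    mkfs_xfs,
    zap_disk,
)

dev = '/dev/vdc'  # example

# Only wipe a real, unmounted block device.
if is_block_device(dev) and not is_device_mounted(dev):
    zap_disk(dev)   # clear GPT/MBR structures and both ends of the disk
    mkfs_xfs(dev)   # fails if a filesystem survives, unless force=True
```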
+ :param device: Full path to device to format + :ptype device: tr + :param force: Force operation + :ptype: force: boolean + :param inode_size: XFS inode size in bytes + :ptype inode_size: int""" + cmd = ['mkfs.xfs'] + if force: + cmd.append("-f") + + cmd += ['-i', "size={}".format(inode_size), device] + check_call(cmd) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/templating/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/templating/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..d7567b863e3a5ad2b7a7f44958b4166e0c3d346b --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/templating/__init__.py @@ -0,0 +1,13 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/templating/contexts.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/templating/contexts.py new file mode 100644 index 0000000000000000000000000000000000000000..c1adf94b133f41a168727a3cdd3a536633ff62b3 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/templating/contexts.py @@ -0,0 +1,137 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# Copyright 2013 Canonical Ltd. +# +# Authors: +# Charm Helpers Developers +"""A helper to create a yaml cache of config with namespaced relation data.""" +import os +import yaml + +import six + +import charmhelpers.core.hookenv + + +charm_dir = os.environ.get('CHARM_DIR', '') + + +def dict_keys_without_hyphens(a_dict): + """Return the a new dict with underscores instead of hyphens in keys.""" + return dict( + (key.replace('-', '_'), val) for key, val in a_dict.items()) + + +def update_relations(context, namespace_separator=':'): + """Update the context with the relation data.""" + # Add any relation data prefixed with the relation type. + relation_type = charmhelpers.core.hookenv.relation_type() + relations = [] + context['current_relation'] = {} + if relation_type is not None: + relation_data = charmhelpers.core.hookenv.relation_get() + context['current_relation'] = relation_data + # Deprecated: the following use of relation data as keys + # directly in the context will be removed. 
+ relation_data = dict( + ("{relation_type}{namespace_separator}{key}".format( + relation_type=relation_type, + key=key, + namespace_separator=namespace_separator), val) + for key, val in relation_data.items()) + relation_data = dict_keys_without_hyphens(relation_data) + context.update(relation_data) + relations = charmhelpers.core.hookenv.relations_of_type(relation_type) + relations = [dict_keys_without_hyphens(rel) for rel in relations] + + context['relations_full'] = charmhelpers.core.hookenv.relations() + + # the hookenv.relations() data structure is effectively unusable in + # templates and other contexts when trying to access relation data other + # than the current relation. So provide a more useful structure that works + # with any hook. + local_unit = charmhelpers.core.hookenv.local_unit() + relations = {} + for rname, rids in context['relations_full'].items(): + relations[rname] = [] + for rid, rdata in rids.items(): + data = rdata.copy() + if local_unit in rdata: + data.pop(local_unit) + for unit_name, rel_data in data.items(): + new_data = {'__relid__': rid, '__unit__': unit_name} + new_data.update(rel_data) + relations[rname].append(new_data) + context['relations'] = relations + + +def juju_state_to_yaml(yaml_path, namespace_separator=':', + allow_hyphens_in_keys=True, mode=None): + """Update the juju config and state in a yaml file. + + This includes any current relation-get data, and the charm + directory. + + This function was created for the ansible and saltstack + support, as those libraries can use a yaml file to supply + context to templates, but it may be useful generally to + create and update an on-disk cache of all the config, including + previous relation data. + + By default, hyphens are allowed in keys as this is supported + by yaml, but for tools like ansible, hyphens are not valid [1]. + + [1] http://www.ansibleworks.com/docs/playbooks_variables.html#what-makes-a-valid-variable-name + """ + config = charmhelpers.core.hookenv.config() + + # Add the charm_dir which we will need to refer to charm + # file resources etc. + config['charm_dir'] = charm_dir + config['local_unit'] = charmhelpers.core.hookenv.local_unit() + config['unit_private_address'] = charmhelpers.core.hookenv.unit_private_ip() + config['unit_public_address'] = charmhelpers.core.hookenv.unit_get( + 'public-address' + ) + + # Don't use non-standard tags for unicode which will not + # work when salt uses yaml.load_safe. 
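For reference, the `relations` structure assembled above flattens the `hookenv.relations()` nesting into one list per relation name. An assumed example shape (relation name, unit names and values are all illustrative):

```python
# What context['relations'] is expected to look like for a single 'db'
# relation with one remote unit. Keys other than __relid__/__unit__ are
# the raw relation settings of that unit.
relations = {
    'db': [
        {'__relid__': 'db:3',
         '__unit__': 'postgres/0',
         'private-address': '10.0.0.5',
         'allowed-units': 'myservice/0'},
    ],
}
```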
+ yaml.add_representer(six.text_type, + lambda dumper, value: dumper.represent_scalar( + six.u('tag:yaml.org,2002:str'), value)) + + yaml_dir = os.path.dirname(yaml_path) + if not os.path.exists(yaml_dir): + os.makedirs(yaml_dir) + + if os.path.exists(yaml_path): + with open(yaml_path, "r") as existing_vars_file: + existing_vars = yaml.load(existing_vars_file.read()) + else: + with open(yaml_path, "w+"): + pass + existing_vars = {} + + if mode is not None: + os.chmod(yaml_path, mode) + + if not allow_hyphens_in_keys: + config = dict_keys_without_hyphens(config) + existing_vars.update(config) + + update_relations(existing_vars, namespace_separator) + + with open(yaml_path, "w+") as fp: + fp.write(yaml.dump(existing_vars, default_flow_style=False)) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/templating/jinja.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/templating/jinja.py new file mode 100644 index 0000000000000000000000000000000000000000..38d4fba0e651399064a14112398a0ef43717f5cd --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/templating/jinja.py @@ -0,0 +1,38 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +""" +Templating using the python-jinja2 package. +""" +import six +from charmhelpers.fetch import apt_install, apt_update +try: + import jinja2 +except ImportError: + apt_update(fatal=True) + if six.PY3: + apt_install(["python3-jinja2"], fatal=True) + else: + apt_install(["python-jinja2"], fatal=True) + import jinja2 + + +DEFAULT_TEMPLATES_DIR = 'templates' + + +def render(template_name, context, template_dir=DEFAULT_TEMPLATES_DIR): + templates = jinja2.Environment( + loader=jinja2.FileSystemLoader(template_dir)) + template = templates.get_template(template_name) + return template.render(context) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/templating/pyformat.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/templating/pyformat.py new file mode 100644 index 0000000000000000000000000000000000000000..51a24dc42379fbee88d8952c578fc2498c852b0b --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/templating/pyformat.py @@ -0,0 +1,27 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
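A sketch combining the two templating helpers above: dump charm state for ansible-style consumption with juju_state_to_yaml(), then render a config file with the jinja helper. The output path and template name are hypothetical.

```python
from charmhelpers.contrib.templating.contexts import juju_state_to_yaml
from charmhelpers.contrib.templating.jinja import render

# Strip hyphens from keys so the variables are valid for ansible.
juju_state_to_yaml('/etc/ansible/host_vars/localhost',
                   allow_hyphens_in_keys=False, mode=0o600)

# Render templates/myservice.conf.j2 against an ad-hoc context.
config_text = render('myservice.conf.j2', {'listen_port': 8080})
print(config_text)
```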
+ +''' +Templating using standard Python str.format() method. +''' + +from charmhelpers.core import hookenv + + +def render(template, extra={}, **kwargs): + """Return the template rendered using Python's str.format().""" + context = hookenv.execution_environment() + context.update(extra) + context.update(kwargs) + return template.format(**context) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/unison/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/unison/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..61409b14c843766aa10cb957bb1c3d83ba87c6a7 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/contrib/unison/__init__.py @@ -0,0 +1,314 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# Easy file synchronization among peer units using ssh + unison. +# +# For the -joined, -changed, and -departed peer relations, add a call to +# ssh_authorized_peers() describing the peer relation and the desired +# user + group. After all peer relations have settled, all hosts should +# be able to connect to on another via key auth'd ssh as the specified user. +# +# Other hooks are then free to synchronize files and directories using +# sync_to_peers(). +# +# For a peer relation named 'cluster', for example: +# +# cluster-relation-joined: +# ... +# ssh_authorized_peers(peer_interface='cluster', +# user='juju_ssh', group='juju_ssh', +# ensure_local_user=True) +# ... +# +# cluster-relation-changed: +# ... +# ssh_authorized_peers(peer_interface='cluster', +# user='juju_ssh', group='juju_ssh', +# ensure_local_user=True) +# ... +# +# cluster-relation-departed: +# ... +# ssh_authorized_peers(peer_interface='cluster', +# user='juju_ssh', group='juju_ssh', +# ensure_local_user=True) +# ... +# +# Hooks are now free to sync files as easily as: +# +# files = ['/etc/fstab', '/etc/apt.conf.d/'] +# sync_to_peers(peer_interface='cluster', +# user='juju_ssh, paths=[files]) +# +# It is assumed the charm itself has setup permissions on each unit +# such that 'juju_ssh' has read + write permissions. Also assumed +# that the calling charm takes care of leader delegation. 
+# +# Additionally files can be synchronized only to an specific unit: +# sync_to_peer(slave_address, user='juju_ssh', +# paths=[files], verbose=False) + +import os +import pwd + +from copy import copy +from subprocess import check_call, check_output + +from charmhelpers.core.host import ( + adduser, + add_user_to_group, + pwgen, + remove_password_expiry, +) + +from charmhelpers.core.hookenv import ( + log, + hook_name, + relation_ids, + related_units, + relation_set, + relation_get, + unit_private_ip, + INFO, + ERROR, +) + +BASE_CMD = ['unison', '-auto', '-batch=true', '-confirmbigdel=false', + '-fastcheck=true', '-group=false', '-owner=false', + '-prefer=newer', '-times=true'] + + +def get_homedir(user): + try: + user = pwd.getpwnam(user) + return user.pw_dir + except KeyError: + log('Could not get homedir for user %s: user exists?' % (user), ERROR) + raise Exception + + +def create_private_key(user, priv_key_path, key_type='rsa'): + types_bits = { + 'rsa': '2048', + 'ecdsa': '521', + } + if key_type not in types_bits: + log('Unknown ssh key type {}, using rsa'.format(key_type), ERROR) + key_type = 'rsa' + if not os.path.isfile(priv_key_path): + log('Generating new SSH key for user %s.' % user) + cmd = ['ssh-keygen', '-q', '-N', '', '-t', key_type, + '-b', types_bits[key_type], '-f', priv_key_path] + check_call(cmd) + else: + log('SSH key already exists at %s.' % priv_key_path) + check_call(['chown', user, priv_key_path]) + check_call(['chmod', '0600', priv_key_path]) + + +def create_public_key(user, priv_key_path, pub_key_path): + if not os.path.isfile(pub_key_path): + log('Generating missing ssh public key @ %s.' % pub_key_path) + cmd = ['ssh-keygen', '-y', '-f', priv_key_path] + p = check_output(cmd).strip() + with open(pub_key_path, 'wb') as out: + out.write(p) + check_call(['chown', user, pub_key_path]) + + +def get_keypair(user): + home_dir = get_homedir(user) + ssh_dir = os.path.join(home_dir, '.ssh') + priv_key = os.path.join(ssh_dir, 'id_rsa') + pub_key = '%s.pub' % priv_key + + if not os.path.isdir(ssh_dir): + os.mkdir(ssh_dir) + check_call(['chown', '-R', user, ssh_dir]) + + create_private_key(user, priv_key) + create_public_key(user, priv_key, pub_key) + + with open(priv_key, 'r') as p: + _priv = p.read().strip() + + with open(pub_key, 'r') as p: + _pub = p.read().strip() + + return (_priv, _pub) + + +def write_authorized_keys(user, keys): + home_dir = get_homedir(user) + ssh_dir = os.path.join(home_dir, '.ssh') + auth_keys = os.path.join(ssh_dir, 'authorized_keys') + log('Syncing authorized_keys @ %s.' % auth_keys) + with open(auth_keys, 'w') as out: + for k in keys: + out.write('%s\n' % k) + + +def write_known_hosts(user, hosts): + home_dir = get_homedir(user) + ssh_dir = os.path.join(home_dir, '.ssh') + known_hosts = os.path.join(ssh_dir, 'known_hosts') + khosts = [] + for host in hosts: + cmd = ['ssh-keyscan', host] + remote_key = check_output(cmd, universal_newlines=True).strip() + khosts.append(remote_key) + log('Syncing known_hosts @ %s.' % known_hosts) + with open(known_hosts, 'w') as out: + for host in khosts: + out.write('%s\n' % host) + + +def ensure_user(user, group=None): + adduser(user, pwgen()) + if group: + add_user_to_group(user, group) + # Remove password expiry (Bug #1686085) + remove_password_expiry(user) + + +def ssh_authorized_peers(peer_interface, user, group=None, + ensure_local_user=False): + """ + Main setup function, should be called from both peer -changed and -joined + hooks with the same parameters. 
+    """
+    if ensure_local_user:
+        ensure_user(user, group)
+    priv_key, pub_key = get_keypair(user)
+    hook = hook_name()
+    if hook == '%s-relation-joined' % peer_interface:
+        relation_set(ssh_pub_key=pub_key)
+    elif hook == '%s-relation-changed' % peer_interface or \
+            hook == '%s-relation-departed' % peer_interface:
+        hosts = []
+        keys = []
+
+        for r_id in relation_ids(peer_interface):
+            for unit in related_units(r_id):
+                ssh_pub_key = relation_get('ssh_pub_key',
+                                           rid=r_id,
+                                           unit=unit)
+                priv_addr = relation_get('private-address',
+                                         rid=r_id,
+                                         unit=unit)
+                if ssh_pub_key:
+                    keys.append(ssh_pub_key)
+                    hosts.append(priv_addr)
+                else:
+                    log('ssh_authorized_peers(): ssh_pub_key '
+                        'missing for unit %s, skipping.' % unit)
+        write_authorized_keys(user, keys)
+        write_known_hosts(user, hosts)
+        authed_hosts = ':'.join(hosts)
+        relation_set(ssh_authorized_hosts=authed_hosts)
+
+
+def _run_as_user(user, gid=None):
+    try:
+        user = pwd.getpwnam(user)
+    except KeyError:
+        log('Invalid user: %s' % user)
+        raise Exception
+    uid = user.pw_uid
+    gid = gid or user.pw_gid
+    os.environ['HOME'] = user.pw_dir
+
+    def _inner():
+        os.setgid(gid)
+        os.setuid(uid)
+    return _inner
+
+
+def run_as_user(user, cmd, gid=None):
+    return check_output(cmd, preexec_fn=_run_as_user(user, gid), cwd='/')
+
+
+def collect_authed_hosts(peer_interface):
+    '''Iterate through the units on the peer interface to find all that
+    have the calling host in their authorized hosts list'''
+    hosts = []
+    for r_id in (relation_ids(peer_interface) or []):
+        for unit in related_units(r_id):
+            private_addr = relation_get('private-address',
+                                        rid=r_id, unit=unit)
+            authed_hosts = relation_get('ssh_authorized_hosts',
+                                        rid=r_id, unit=unit)
+
+            if not authed_hosts:
+                log('Peer %s has not authorized *any* hosts yet, skipping.' %
+                    (unit), level=INFO)
+                continue
+
+            if unit_private_ip() in authed_hosts.split(':'):
+                hosts.append(private_addr)
+            else:
+                log('Peer %s has not authorized *this* host yet, skipping.' %
+                    (unit), level=INFO)
+    return hosts
+
+
+def sync_path_to_host(path, host, user, verbose=False, cmd=None, gid=None,
+                      fatal=False):
+    """Sync path to a specific peer host
+
+    Propagates exception if operation fails and fatal=True.
+    """
+    cmd = cmd or copy(BASE_CMD)
+    if not verbose:
+        cmd.append('-silent')
+
+    # Remove trailing slash from directory paths; unison
+    # doesn't like these.
+    if path.endswith('/'):
+        path = path[:(len(path) - 1)]
+
+    cmd = cmd + [path, 'ssh://%s@%s/%s' % (user, host, path)]
+
+    try:
+        log('Syncing local path %s to %s@%s:%s' % (path, user, host, path))
+        run_as_user(user, cmd, gid)
+    except Exception:
+        log('Error syncing remote files')
+        if fatal:
+            raise
+
+
+def sync_to_peer(host, user, paths=None, verbose=False, cmd=None, gid=None,
+                 fatal=False):
+    """Sync paths to a specific peer host
+
+    Propagates exception if any operation fails and fatal=True.
+    """
+    if paths:
+        for p in paths:
+            sync_path_to_host(p, host, user, verbose, cmd, gid, fatal)
+
+
+def sync_to_peers(peer_interface, user, paths=None, verbose=False, cmd=None,
+                  gid=None, fatal=False):
+    """Sync paths to all authorized peer hosts on the given peer interface.
+
+    gid is an integer group id; passing it allows the sync to operate on
+    directories whose group id differs from the calling user's own.
+
+    Propagates exception if any operation fails and fatal=True.
+ """ + if paths: + for host in collect_authed_hosts(peer_interface): + sync_to_peer(host, user, paths, verbose, cmd, gid, fatal) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/coordinator.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/coordinator.py new file mode 100644 index 0000000000000000000000000000000000000000..59bee3e5b320f6bffa0d89eef7a20888cddb0521 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/coordinator.py @@ -0,0 +1,606 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +''' +The coordinator module allows you to use Juju's leadership feature to +coordinate operations between units of a service. + +Behavior is defined in subclasses of coordinator.BaseCoordinator. +One implementation is provided (coordinator.Serial), which allows an +operation to be run on a single unit at a time, on a first come, first +served basis. You can trivially define more complex behavior by +subclassing BaseCoordinator or Serial. + +:author: Stuart Bishop + + +Services Framework Usage +======================== + +Ensure a peers relation is defined in metadata.yaml. Instantiate a +BaseCoordinator subclass before invoking ServiceManager.manage(). +Ensure that ServiceManager.manage() is wired up to the leader-elected, +leader-settings-changed, peers relation-changed and peers +relation-departed hooks in addition to any other hooks you need, or your +service will deadlock. + +Ensure calls to acquire() are guarded, so that locks are only requested +when they are really needed (and thus hooks only triggered when necessary). +Failing to do this and calling acquire() unconditionally will put your unit +into a hook loop. Calls to granted() do not need to be guarded. + +For example:: + + from charmhelpers.core import hookenv, services + from charmhelpers import coordinator + + def maybe_restart(servicename): + serial = coordinator.Serial() + if needs_restart(): + serial.acquire('restart') + if serial.granted('restart'): + hookenv.service_restart(servicename) + + services = [dict(service='servicename', + data_ready=[maybe_restart])] + + if __name__ == '__main__': + _ = coordinator.Serial() # Must instantiate before manager.manage() + manager = services.ServiceManager(services) + manager.manage() + + +You can implement a similar pattern using a decorator. If the lock has +not been granted, an attempt to acquire() it will be made if the guard +function returns True. If the lock has been granted, the decorated function +is run as normal:: + + from charmhelpers.core import hookenv, services + from charmhelpers import coordinator + + serial = coordinator.Serial() # Global, instatiated on module import. + + def needs_restart(): + [ ... Introspect state. Return True if restart is needed ... 
]
+
+    @serial.require('restart', needs_restart)
+    def maybe_restart(servicename):
+        hookenv.service_restart(servicename)
+
+    services = [dict(service='servicename',
+                     data_ready=[maybe_restart])]
+
+    if __name__ == '__main__':
+        manager = services.ServiceManager(services)
+        manager.manage()
+
+
+Traditional Usage
+=================
+
+Ensure a peers relation is defined in metadata.yaml.
+
+If you are using charmhelpers.core.hookenv.Hooks, ensure that a
+BaseCoordinator subclass is instantiated before calling Hooks.execute.
+
+If you are not using charmhelpers.core.hookenv.Hooks, ensure
+that a BaseCoordinator subclass is instantiated and its handle()
+method called at the start of all your hooks.
+
+For example::
+
+    import sys
+    from charmhelpers.core import hookenv
+    from charmhelpers import coordinator
+
+    hooks = hookenv.Hooks()
+
+    def maybe_restart():
+        serial = coordinator.Serial()
+        if serial.granted('restart'):
+            hookenv.service_restart('myservice')
+
+    @hooks.hook
+    def config_changed():
+        update_config()
+        serial = coordinator.Serial()
+        if needs_restart():
+            serial.acquire('restart')
+            maybe_restart()
+
+    # Cluster hooks must be wired up.
+    @hooks.hook('cluster-relation-changed', 'cluster-relation-departed')
+    def cluster_relation_changed():
+        maybe_restart()
+
+    # Leader hooks must be wired up.
+    @hooks.hook('leader-elected', 'leader-settings-changed')
+    def leader_settings_changed():
+        maybe_restart()
+
+    [ ... repeat for *all* other hooks you are using ... ]
+
+    if __name__ == '__main__':
+        _ = coordinator.Serial()  # Must instantiate before execute()
+        hooks.execute(sys.argv)
+
+
+You can also use the require decorator. If the lock has not been granted,
+an attempt to acquire() it will be made if the guard function returns True.
+If the lock has been granted, the decorated function is run as normal::
+
+    from charmhelpers.core import hookenv
+
+    hooks = hookenv.Hooks()
+    serial = coordinator.Serial()  # Must instantiate before execute()
+
+    @serial.require('restart', needs_restart)
+    def maybe_restart():
+        hookenv.service_restart('myservice')
+
+    @hooks.hook('install', 'config-changed', 'upgrade-charm',
+                # Peers and leader hooks must be wired up.
+                'cluster-relation-changed', 'cluster-relation-departed',
+                'leader-elected', 'leader-settings-changed')
+    def default_hook():
+        [...]
+        maybe_restart()
+
+    if __name__ == '__main__':
+        hooks.execute()
+
+
+Details
+=======
+
+A simple API is provided similar to traditional locking APIs. A lock
+may be requested using the acquire() method, and the granted() method
+may be used to check if a lock previously requested by acquire() has
+been granted. It doesn't matter how many times acquire() is called in a
+hook.
+
+Locks are released at the end of the hook they are acquired in. This may
+be the current hook if the unit is leader and the lock is free. It is
+more likely a future hook (probably leader-settings-changed, possibly
+the peers relation-changed or departed hook, potentially any hook).
+
+Whenever a charm needs to perform a coordinated action it will acquire()
+the lock and perform the action immediately if acquisition is
+successful. It will also need to perform the same action in every other
+hook if the lock has been granted.
+
+
+Grubby Details
+--------------
+
+Why do you need to be able to perform the same action in every hook?
+If the unit is the leader, then it may be able to grant its own lock
+and perform the action immediately in the source hook.
If the unit is +the leader and cannot immediately grant the lock, then its only +guaranteed chance of acquiring the lock is in the peers relation-joined, +relation-changed or peers relation-departed hooks when another unit has +released it (the only channel to communicate to the leader is the peers +relation). If the unit is not the leader, then it is unlikely the lock +is granted in the source hook (a previous hook must have also made the +request for this to happen). A non-leader is notified about the lock via +leader settings. These changes may be visible in any hook, even before +the leader-settings-changed hook has been invoked. Or the requesting +unit may be promoted to leader after making a request, in which case the +lock may be granted in leader-elected or in a future peers +relation-changed or relation-departed hook. + +This could be simpler if leader-settings-changed was invoked on the +leader. We could then never grant locks except in +leader-settings-changed hooks giving one place for the operation to be +performed. Unfortunately this is not the case with Juju 1.23 leadership. + +But of course, this doesn't really matter to most people as most people +seem to prefer the Services Framework or similar reset-the-world +approaches, rather than the twisty maze of attempting to deduce what +should be done based on what hook happens to be running (which always +seems to evolve into reset-the-world anyway when the charm grows beyond +the trivial). + +I chose not to implement a callback model, where a callback was passed +to acquire to be executed when the lock is granted, because the callback +may become invalid between making the request and the lock being granted +due to an upgrade-charm being run in the interim. And it would create +restrictions, such no lambdas, callback defined at the top level of a +module, etc. Still, we could implement it on top of what is here, eg. +by adding a defer decorator that stores a pickle of itself to disk and +have BaseCoordinator unpickle and execute them when the locks are granted. +''' +from datetime import datetime +from functools import wraps +import json +import os.path + +from six import with_metaclass + +from charmhelpers.core import hookenv + + +# We make BaseCoordinator and subclasses singletons, so that if we +# need to spill to local storage then only a single instance does so, +# rather than having multiple instances stomp over each other. +class Singleton(type): + _instances = {} + + def __call__(cls, *args, **kwargs): + if cls not in cls._instances: + cls._instances[cls] = super(Singleton, cls).__call__(*args, + **kwargs) + return cls._instances[cls] + + +class BaseCoordinator(with_metaclass(Singleton, object)): + relid = None # Peer relation-id, set by __init__ + relname = None + + grants = None # self.grants[unit][lock] == timestamp + requests = None # self.requests[unit][lock] == timestamp + + def __init__(self, relation_key='coordinator', peer_relation_name=None): + '''Instatiate a Coordinator. + + Data is stored on the peers relation and in leadership storage + under the provided relation_key. + + The peers relation is identified by peer_relation_name, and defaults + to the first one found in metadata.yaml. + ''' + # Most initialization is deferred, since invoking hook tools from + # the constructor makes testing hard. + self.key = relation_key + self.relname = peer_relation_name + hookenv.atstart(self.initialize) + + # Ensure that handle() is called, without placing that burden on + # the charm author. 
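Instantiation is the only wiring a charm normally has to do, since `__init__` registers `initialize()` and `handle()` with `hookenv.atstart` as shown above. A sketch, with the peer relation name 'cluster' as an assumption:

```python
from charmhelpers import coordinator

# Pin the coordinator to an explicit peer relation instead of letting it
# pick the first one from metadata.yaml.
serial = coordinator.Serial(peer_relation_name='cluster')

# The Singleton metaclass means later constructions return the same
# instance, so modules can safely re-instantiate instead of importing it.
assert coordinator.Serial() is serial
```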
They still need to do this manually if they + # are not using a hook framework. + hookenv.atstart(self.handle) + + def initialize(self): + if self.requests is not None: + return # Already initialized. + + assert hookenv.has_juju_version('1.23'), 'Needs Juju 1.23+' + + if self.relname is None: + self.relname = _implicit_peer_relation_name() + + relids = hookenv.relation_ids(self.relname) + if relids: + self.relid = sorted(relids)[0] + + # Load our state, from leadership, the peer relationship, and maybe + # local state as a fallback. Populates self.requests and self.grants. + self._load_state() + self._emit_state() + + # Save our state if the hook completes successfully. + hookenv.atexit(self._save_state) + + # Schedule release of granted locks for the end of the hook. + # This needs to be the last of our atexit callbacks to ensure + # it will be run first when the hook is complete, because there + # is no point mutating our state after it has been saved. + hookenv.atexit(self._release_granted) + + def acquire(self, lock): + '''Acquire the named lock, non-blocking. + + The lock may be granted immediately, or in a future hook. + + Returns True if the lock has been granted. The lock will be + automatically released at the end of the hook in which it is + granted. + + Do not mindlessly call this method, as it triggers a cascade of + hooks. For example, if you call acquire() every time in your + peers relation-changed hook you will end up with an infinite loop + of hooks. It should almost always be guarded by some condition. + ''' + unit = hookenv.local_unit() + ts = self.requests[unit].get(lock) + if not ts: + # If there is no outstanding request on the peers relation, + # create one. + self.requests.setdefault(lock, {}) + self.requests[unit][lock] = _timestamp() + self.msg('Requested {}'.format(lock)) + + # If the leader has granted the lock, yay. + if self.granted(lock): + self.msg('Acquired {}'.format(lock)) + return True + + # If the unit making the request also happens to be the + # leader, it must handle the request now. Even though the + # request has been stored on the peers relation, the peers + # relation-changed hook will not be triggered. + if hookenv.is_leader(): + return self.grant(lock, unit) + + return False # Can't acquire lock, yet. Maybe next hook. + + def granted(self, lock): + '''Return True if a previously requested lock has been granted''' + unit = hookenv.local_unit() + ts = self.requests[unit].get(lock) + if ts and self.grants.get(unit, {}).get(lock) == ts: + return True + return False + + def requested(self, lock): + '''Return True if we are in the queue for the lock''' + return lock in self.requests[hookenv.local_unit()] + + def request_timestamp(self, lock): + '''Return the timestamp of our outstanding request for lock, or None. + + Returns a datetime.datetime() UTC timestamp, with no tzinfo attribute. + ''' + ts = self.requests[hookenv.local_unit()].get(lock, None) + if ts is not None: + return datetime.strptime(ts, _timestamp_format) + + def handle(self): + if not hookenv.is_leader(): + return # Only the leader can grant requests. + + self.msg('Leader handling coordinator requests') + + # Clear our grants that have been released. + for unit in self.grants.keys(): + for lock, grant_ts in list(self.grants[unit].items()): + req_ts = self.requests.get(unit, {}).get(lock) + if req_ts != grant_ts: + # The request timestamp does not match the granted + # timestamp. 
Several hooks on 'unit' may have run + # before the leader got a chance to make a decision, + # and 'unit' may have released its lock and attempted + # to reacquire it. This will change the timestamp, + # and we correctly revoke the old grant putting it + # to the end of the queue. + ts = datetime.strptime(self.grants[unit][lock], + _timestamp_format) + del self.grants[unit][lock] + self.released(unit, lock, ts) + + # Grant locks + for unit in self.requests.keys(): + for lock in self.requests[unit]: + self.grant(lock, unit) + + def grant(self, lock, unit): + '''Maybe grant the lock to a unit. + + The decision to grant the lock or not is made for $lock + by a corresponding method grant_$lock, which you may define + in a subclass. If no such method is defined, the default_grant + method is used. See Serial.default_grant() for details. + ''' + if not hookenv.is_leader(): + return False # Not the leader, so we cannot grant. + + # Set of units already granted the lock. + granted = set() + for u in self.grants: + if lock in self.grants[u]: + granted.add(u) + if unit in granted: + return True # Already granted. + + # Ordered list of units waiting for the lock. + reqs = set() + for u in self.requests: + if u in granted: + continue # In the granted set. Not wanted in the req list. + for _lock, ts in self.requests[u].items(): + if _lock == lock: + reqs.add((ts, u)) + queue = [t[1] for t in sorted(reqs)] + if unit not in queue: + return False # Unit has not requested the lock. + + # Locate custom logic, or fallback to the default. + grant_func = getattr(self, 'grant_{}'.format(lock), self.default_grant) + + if grant_func(lock, unit, granted, queue): + # Grant the lock. + self.msg('Leader grants {} to {}'.format(lock, unit)) + self.grants.setdefault(unit, {})[lock] = self.requests[unit][lock] + return True + + return False + + def released(self, unit, lock, timestamp): + '''Called on the leader when it has released a lock. + + By default, does nothing but log messages. Override if you + need to perform additional housekeeping when a lock is released, + for example recording timestamps. + ''' + interval = _utcnow() - timestamp + self.msg('Leader released {} from {}, held {}'.format(lock, unit, + interval)) + + def require(self, lock, guard_func, *guard_args, **guard_kw): + """Decorate a function to be run only when a lock is acquired. + + The lock is requested if the guard function returns True. + + The decorated function is called if the lock has been granted. + """ + def decorator(f): + @wraps(f) + def wrapper(*args, **kw): + if self.granted(lock): + self.msg('Granted {}'.format(lock)) + return f(*args, **kw) + if guard_func(*guard_args, **guard_kw) and self.acquire(lock): + return f(*args, **kw) + return None + return wrapper + return decorator + + def msg(self, msg): + '''Emit a message. Override to customize log spam.''' + hookenv.log('coordinator.{} {}'.format(self._name(), msg), + level=hookenv.INFO) + + def _name(self): + return self.__class__.__name__ + + def _load_state(self): + self.msg('Loading state') + + # All responses must be stored in the leadership settings. + # The leader cannot use local state, as a different unit may + # be leader next time. Which is fine, as the leadership + # settings are always available. + self.grants = json.loads(hookenv.leader_get(self.key) or '{}') + + local_unit = hookenv.local_unit() + + # All requests must be stored on the peers relation. This is + # the only channel units have to communicate with the leader. 
+ # Even the leader needs to store its requests here, as a + # different unit may be leader by the time the request can be + # granted. + if self.relid is None: + # The peers relation is not available. Maybe we are early in + # the units's lifecycle. Maybe this unit is standalone. + # Fallback to using local state. + self.msg('No peer relation. Loading local state') + self.requests = {local_unit: self._load_local_state()} + else: + self.requests = self._load_peer_state() + if local_unit not in self.requests: + # The peers relation has just been joined. Update any state + # loaded from our peers with our local state. + self.msg('New peer relation. Merging local state') + self.requests[local_unit] = self._load_local_state() + + def _emit_state(self): + # Emit this units lock status. + for lock in sorted(self.requests[hookenv.local_unit()].keys()): + if self.granted(lock): + self.msg('Granted {}'.format(lock)) + else: + self.msg('Waiting on {}'.format(lock)) + + def _save_state(self): + self.msg('Publishing state') + if hookenv.is_leader(): + # sort_keys to ensure stability. + raw = json.dumps(self.grants, sort_keys=True) + hookenv.leader_set({self.key: raw}) + + local_unit = hookenv.local_unit() + + if self.relid is None: + # No peers relation yet. Fallback to local state. + self.msg('No peer relation. Saving local state') + self._save_local_state(self.requests[local_unit]) + else: + # sort_keys to ensure stability. + raw = json.dumps(self.requests[local_unit], sort_keys=True) + hookenv.relation_set(self.relid, relation_settings={self.key: raw}) + + def _load_peer_state(self): + requests = {} + units = set(hookenv.related_units(self.relid)) + units.add(hookenv.local_unit()) + for unit in units: + raw = hookenv.relation_get(self.key, unit, self.relid) + if raw: + requests[unit] = json.loads(raw) + return requests + + def _local_state_filename(self): + # Include the class name. We allow multiple BaseCoordinator + # subclasses to be instantiated, and they are singletons, so + # this avoids conflicts (unless someone creates and uses two + # BaseCoordinator subclasses with the same class name, so don't + # do that). + return '.charmhelpers.coordinator.{}'.format(self._name()) + + def _load_local_state(self): + fn = self._local_state_filename() + if os.path.exists(fn): + with open(fn, 'r') as f: + return json.load(f) + return {} + + def _save_local_state(self, state): + fn = self._local_state_filename() + with open(fn, 'w') as f: + json.dump(state, f) + + def _release_granted(self): + # At the end of every hook, release all locks granted to + # this unit. If a hook neglects to make use of what it + # requested, it will just have to make the request again. + # Implicit release is the only way this will work, as + # if the unit is standalone there may be no future triggers + # called to do a manual release. + unit = hookenv.local_unit() + for lock in list(self.requests[unit].keys()): + if self.granted(lock): + self.msg('Released local {} lock'.format(lock)) + del self.requests[unit][lock] + + +class Serial(BaseCoordinator): + def default_grant(self, lock, unit, granted, queue): + '''Default logic to grant a lock to a unit. Unless overridden, + only one unit may hold the lock and it will be granted to the + earliest queued request. + + To define custom logic for $lock, create a subclass and + define a grant_$lock method. + + `unit` is the unit name making the request. + + `granted` is the set of units already granted the lock. It will + never include `unit`. It may be empty. 
+ + `queue` is the list of units waiting for the lock, ordered by time + of request. It will always include `unit`, but `unit` is not + necessarily first. + + Returns True if the lock should be granted to `unit`. + ''' + return unit == queue[0] and not granted + + +def _implicit_peer_relation_name(): + md = hookenv.metadata() + assert 'peers' in md, 'No peer relations in metadata.yaml' + return sorted(md['peers'].keys())[0] + + +# A human readable, sortable UTC timestamp format. +_timestamp_format = '%Y-%m-%d %H:%M:%S.%fZ' + + +def _utcnow(): # pragma: no cover + # This wrapper exists as mocking datetime methods is problematic. + return datetime.utcnow() + + +def _timestamp(): + return _utcnow().strftime(_timestamp_format) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/core/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/core/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..d7567b863e3a5ad2b7a7f44958b4166e0c3d346b --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/core/__init__.py @@ -0,0 +1,13 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/core/decorators.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/core/decorators.py new file mode 100644 index 0000000000000000000000000000000000000000..6ad41ee4121f4c0816935f8b16cd84f972aff22b --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/core/decorators.py @@ -0,0 +1,55 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# +# Copyright 2014 Canonical Ltd. +# +# Authors: +# Edward Hope-Morley +# + +import time + +from charmhelpers.core.hookenv import ( + log, + INFO, +) + + +def retry_on_exception(num_retries, base_delay=0, exc_type=Exception): + """If the decorated function raises exception exc_type, allow num_retries + retry attempts before raise the exception. 
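To round off the coordinator module above: `grant()` dispatches to a method named `grant_<lockname>` when one exists, falling back to `Serial.default_grant`. A sketch of a custom per-lock policy; the lock name and the two-holder limit are hypothetical:

```python
from charmhelpers import coordinator


class PairwiseSerial(coordinator.Serial):
    def grant_rolling_restart(self, lock, unit, granted, queue):
        # 'granted' never includes 'unit'; allow the first two queued
        # units to hold the 'rolling-restart' lock at the same time.
        return len(granted) < 2 and unit in queue[:2]
```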
+ """ + def _retry_on_exception_inner_1(f): + def _retry_on_exception_inner_2(*args, **kwargs): + retries = num_retries + multiplier = 1 + while True: + try: + return f(*args, **kwargs) + except exc_type: + if not retries: + raise + + delay = base_delay * multiplier + multiplier += 1 + log("Retrying '%s' %d more times (delay=%s)" % + (f.__name__, retries, delay), level=INFO) + retries -= 1 + if delay: + time.sleep(delay) + + return _retry_on_exception_inner_2 + + return _retry_on_exception_inner_1 diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/core/files.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/core/files.py new file mode 100644 index 0000000000000000000000000000000000000000..fdd82b75709c13da0d534bf4962822984a3c1867 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/core/files.py @@ -0,0 +1,43 @@ +#!/usr/bin/env python +# -*- coding: utf-8 -*- + +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +__author__ = 'Jorge Niedbalski ' + +import os +import subprocess + + +def sed(filename, before, after, flags='g'): + """ + Search and replaces the given pattern on filename. + + :param filename: relative or absolute file path. + :param before: expression to be replaced (see 'man sed') + :param after: expression to replace with (see 'man sed') + :param flags: sed-compatible regex flags in example, to make + the search and replace case insensitive, specify ``flags="i"``. + The ``g`` flag is always specified regardless, so you do not + need to remember to include it when overriding this parameter. + :returns: If the sed command exit code was zero then return, + otherwise raise CalledProcessError. + """ + expression = r's/{0}/{1}/{2}'.format(before, + after, flags) + + return subprocess.check_call(["sed", "-i", "-r", "-e", + expression, + os.path.expanduser(filename)]) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/core/fstab.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/core/fstab.py new file mode 100644 index 0000000000000000000000000000000000000000..d9fa9152c765c538adad3fd9bc45a46018c89b72 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/core/fstab.py @@ -0,0 +1,132 @@ +#!/usr/bin/env python +# -*- coding: utf-8 -*- + +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
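A usage sketch for the retry decorator above; the service name is an example. With base_delay=2 the waits grow linearly (2s, then 4s, then 6s) because the delay is base_delay multiplied by the attempt number.

```python
from subprocess import CalledProcessError, check_call

from charmhelpers.core.decorators import retry_on_exception


@retry_on_exception(num_retries=3, base_delay=2, exc_type=CalledProcessError)
def restart_service():
    # Retried up to 3 times, sleeping 2s, 4s, then 6s between attempts.
    check_call(['systemctl', 'restart', 'myservice'])
```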
+# See the License for the specific language governing permissions and +# limitations under the License. + +import io +import os + +__author__ = 'Jorge Niedbalski R. ' + + +class Fstab(io.FileIO): + """This class extends file in order to implement a file reader/writer + for file `/etc/fstab` + """ + + class Entry(object): + """Entry class represents a non-comment line on the `/etc/fstab` file + """ + def __init__(self, device, mountpoint, filesystem, + options, d=0, p=0): + self.device = device + self.mountpoint = mountpoint + self.filesystem = filesystem + + if not options: + options = "defaults" + + self.options = options + self.d = int(d) + self.p = int(p) + + def __eq__(self, o): + return str(self) == str(o) + + def __str__(self): + return "{} {} {} {} {} {}".format(self.device, + self.mountpoint, + self.filesystem, + self.options, + self.d, + self.p) + + DEFAULT_PATH = os.path.join(os.path.sep, 'etc', 'fstab') + + def __init__(self, path=None): + if path: + self._path = path + else: + self._path = self.DEFAULT_PATH + super(Fstab, self).__init__(self._path, 'rb+') + + def _hydrate_entry(self, line): + # NOTE: use split with no arguments to split on any + # whitespace including tabs + return Fstab.Entry(*filter( + lambda x: x not in ('', None), + line.strip("\n").split())) + + @property + def entries(self): + self.seek(0) + for line in self.readlines(): + line = line.decode('us-ascii') + try: + if line.strip() and not line.strip().startswith("#"): + yield self._hydrate_entry(line) + except ValueError: + pass + + def get_entry_by_attr(self, attr, value): + for entry in self.entries: + e_attr = getattr(entry, attr) + if e_attr == value: + return entry + return None + + def add_entry(self, entry): + if self.get_entry_by_attr('device', entry.device): + return False + + self.write((str(entry) + '\n').encode('us-ascii')) + self.truncate() + return entry + + def remove_entry(self, entry): + self.seek(0) + + lines = [l.decode('us-ascii') for l in self.readlines()] + + found = False + for index, line in enumerate(lines): + if line.strip() and not line.strip().startswith("#"): + if self._hydrate_entry(line) == entry: + found = True + break + + if not found: + return False + + lines.remove(line) + + self.seek(0) + self.write(''.join(lines).encode('us-ascii')) + self.truncate() + return True + + @classmethod + def remove_by_mountpoint(cls, mountpoint, path=None): + fstab = cls(path=path) + entry = fstab.get_entry_by_attr('mountpoint', mountpoint) + if entry: + return fstab.remove_entry(entry) + return False + + @classmethod + def add(cls, device, mountpoint, filesystem, options=None, path=None): + return cls(path=path).add_entry(Fstab.Entry(device, + mountpoint, filesystem, + options=options)) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/core/hookenv.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/core/hookenv.py new file mode 100644 index 0000000000000000000000000000000000000000..db7ce7282b4c96c8a33abf309a340377216922ec --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/core/hookenv.py @@ -0,0 +1,1613 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +"Interactions with the Juju environment" +# Copyright 2013 Canonical Ltd. +# +# Authors: +# Charm Helpers Developers + +from __future__ import print_function +import copy +from distutils.version import LooseVersion +from enum import Enum +from functools import wraps +from collections import namedtuple +import glob +import os +import json +import yaml +import re +import subprocess +import sys +import errno +import tempfile +from subprocess import CalledProcessError + +from charmhelpers import deprecate + +import six +if not six.PY3: + from UserDict import UserDict +else: + from collections import UserDict + + +CRITICAL = "CRITICAL" +ERROR = "ERROR" +WARNING = "WARNING" +INFO = "INFO" +DEBUG = "DEBUG" +TRACE = "TRACE" +MARKER = object() +SH_MAX_ARG = 131071 + + +RANGE_WARNING = ('Passing NO_PROXY string that includes a cidr. ' + 'This may not be compatible with software you are ' + 'running in your shell.') + + +class WORKLOAD_STATES(Enum): + ACTIVE = 'active' + BLOCKED = 'blocked' + MAINTENANCE = 'maintenance' + WAITING = 'waiting' + + +cache = {} + + +def cached(func): + """Cache return values for multiple executions of func + args + + For example:: + + @cached + def unit_get(attribute): + pass + + unit_get('test') + + will cache the result of unit_get + 'test' for future calls. + """ + @wraps(func) + def wrapper(*args, **kwargs): + global cache + key = json.dumps((func, args, kwargs), sort_keys=True, default=str) + try: + return cache[key] + except KeyError: + pass # Drop out of the exception handler scope. 
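+        # Cache miss: compute the result below and memoise it under the
+        # JSON-serialised (func, args, kwargs) key for subsequent calls.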
+ res = func(*args, **kwargs) + cache[key] = res + return res + wrapper._wrapped = func + return wrapper + + +def flush(key): + """Flushes any entries from function cache where the + key is found in the function+args """ + flush_list = [] + for item in cache: + if key in item: + flush_list.append(item) + for item in flush_list: + del cache[item] + + +def log(message, level=None): + """Write a message to the juju log""" + command = ['juju-log'] + if level: + command += ['-l', level] + if not isinstance(message, six.string_types): + message = repr(message) + command += [message[:SH_MAX_ARG]] + # Missing juju-log should not cause failures in unit tests + # Send log output to stderr + try: + subprocess.call(command) + except OSError as e: + if e.errno == errno.ENOENT: + if level: + message = "{}: {}".format(level, message) + message = "juju-log: {}".format(message) + print(message, file=sys.stderr) + else: + raise + + +def function_log(message): + """Write a function progress message""" + command = ['function-log'] + if not isinstance(message, six.string_types): + message = repr(message) + command += [message[:SH_MAX_ARG]] + # Missing function-log should not cause failures in unit tests + # Send function_log output to stderr + try: + subprocess.call(command) + except OSError as e: + if e.errno == errno.ENOENT: + message = "function-log: {}".format(message) + print(message, file=sys.stderr) + else: + raise + + +class Serializable(UserDict): + """Wrapper, an object that can be serialized to yaml or json""" + + def __init__(self, obj): + # wrap the object + UserDict.__init__(self) + self.data = obj + + def __getattr__(self, attr): + # See if this object has attribute. + if attr in ("json", "yaml", "data"): + return self.__dict__[attr] + # Check for attribute in wrapped object. + got = getattr(self.data, attr, MARKER) + if got is not MARKER: + return got + # Proxy to the wrapped object via dict interface. + try: + return self.data[attr] + except KeyError: + raise AttributeError(attr) + + def __getstate__(self): + # Pickle as a standard dictionary. + return self.data + + def __setstate__(self, state): + # Unpickle into our wrapper. 
+ self.data = state + + def json(self): + """Serialize the object to json""" + return json.dumps(self.data) + + def yaml(self): + """Serialize the object to yaml""" + return yaml.dump(self.data) + + +def execution_environment(): + """A convenient bundling of the current execution context""" + context = {} + context['conf'] = config() + if relation_id(): + context['reltype'] = relation_type() + context['relid'] = relation_id() + context['rel'] = relation_get() + context['unit'] = local_unit() + context['rels'] = relations() + context['env'] = os.environ + return context + + +def in_relation_hook(): + """Determine whether we're running in a relation hook""" + return 'JUJU_RELATION' in os.environ + + +def relation_type(): + """The scope for the current relation hook""" + return os.environ.get('JUJU_RELATION', None) + + +@cached +def relation_id(relation_name=None, service_or_unit=None): + """The relation ID for the current or a specified relation""" + if not relation_name and not service_or_unit: + return os.environ.get('JUJU_RELATION_ID', None) + elif relation_name and service_or_unit: + service_name = service_or_unit.split('/')[0] + for relid in relation_ids(relation_name): + remote_service = remote_service_name(relid) + if remote_service == service_name: + return relid + else: + raise ValueError('Must specify neither or both of relation_name and service_or_unit') + + +def local_unit(): + """Local unit ID""" + return os.environ['JUJU_UNIT_NAME'] + + +def remote_unit(): + """The remote unit for the current relation hook""" + return os.environ.get('JUJU_REMOTE_UNIT', None) + + +def application_name(): + """ + The name of the deployed application this unit belongs to. + """ + return local_unit().split('/')[0] + + +def service_name(): + """ + .. deprecated:: 0.19.1 + Alias for :func:`application_name`. + """ + return application_name() + + +def model_name(): + """ + Name of the model that this unit is deployed in. + """ + return os.environ['JUJU_MODEL_NAME'] + + +def model_uuid(): + """ + UUID of the model that this unit is deployed in. + """ + return os.environ['JUJU_MODEL_UUID'] + + +def principal_unit(): + """Returns the principal unit of this unit, otherwise None""" + # Juju 2.2 and above provides JUJU_PRINCIPAL_UNIT + principal_unit = os.environ.get('JUJU_PRINCIPAL_UNIT', None) + # If it's empty, then this unit is the principal + if principal_unit == '': + return os.environ['JUJU_UNIT_NAME'] + elif principal_unit is not None: + return principal_unit + # For Juju 2.1 and below, let's try work out the principle unit by + # the various charms' metadata.yaml. + for reltype in relation_types(): + for rid in relation_ids(reltype): + for unit in related_units(rid): + md = _metadata_unit(unit) + if not md: + continue + subordinate = md.pop('subordinate', None) + if not subordinate: + return unit + return None + + +@cached +def remote_service_name(relid=None): + """The remote service name for a given relation-id (or the current relation)""" + if relid is None: + unit = remote_unit() + else: + units = related_units(relid) + unit = units[0] if units else None + return unit.split('/')[0] if unit else None + + +def hook_name(): + """The name of the currently executing hook""" + return os.environ.get('JUJU_HOOK_NAME', os.path.basename(sys.argv[0])) + + +class Config(dict): + """A dictionary representation of the charm's config.yaml, with some + extra features: + + - See which values in the dictionary have changed since the previous hook. + - For values that have changed, see what the previous value was. 
+ - Store arbitrary data for use in a later hook. + + NOTE: Do not instantiate this object directly - instead call + ``hookenv.config()``, which will return an instance of :class:`Config`. + + Example usage:: + + >>> # inside a hook + >>> from charmhelpers.core import hookenv + >>> config = hookenv.config() + >>> config['foo'] + 'bar' + >>> # store a new key/value for later use + >>> config['mykey'] = 'myval' + + + >>> # user runs `juju set mycharm foo=baz` + >>> # now we're inside subsequent config-changed hook + >>> config = hookenv.config() + >>> config['foo'] + 'baz' + >>> # test to see if this val has changed since last hook + >>> config.changed('foo') + True + >>> # what was the previous value? + >>> config.previous('foo') + 'bar' + >>> # keys/values that we add are preserved across hooks + >>> config['mykey'] + 'myval' + + """ + CONFIG_FILE_NAME = '.juju-persistent-config' + + def __init__(self, *args, **kw): + super(Config, self).__init__(*args, **kw) + self.implicit_save = True + self._prev_dict = None + self.path = os.path.join(charm_dir(), Config.CONFIG_FILE_NAME) + if os.path.exists(self.path) and os.stat(self.path).st_size: + self.load_previous() + atexit(self._implicit_save) + + def load_previous(self, path=None): + """Load previous copy of config from disk. + + In normal usage you don't need to call this method directly - it + is called automatically at object initialization. + + :param path: + + File path from which to load the previous config. If `None`, + config is loaded from the default location. If `path` is + specified, subsequent `save()` calls will write to the same + path. + + """ + self.path = path or self.path + with open(self.path) as f: + try: + self._prev_dict = json.load(f) + except ValueError as e: + log('Found but was unable to parse previous config data, ' + 'ignoring which will report all values as changed - {}' + .format(str(e)), level=ERROR) + return + for k, v in copy.deepcopy(self._prev_dict).items(): + if k not in self: + self[k] = v + + def changed(self, key): + """Return True if the current value for this key is different from + the previous value. + + """ + if self._prev_dict is None: + return True + return self.previous(key) != self.get(key) + + def previous(self, key): + """Return previous value for this key, or None if there + is no previous value. + + """ + if self._prev_dict: + return self._prev_dict.get(key) + return None + + def save(self): + """Save this config to disk. + + If the charm is using the :mod:`Services Framework ` + or :meth:'@hook ' decorator, this + is called automatically at the end of successful hook execution. + Otherwise, it should be called directly by user code. + + To disable automatic saves, set ``implicit_save=False`` on this + instance. + + """ + with open(self.path, 'w') as f: + os.fchmod(f.fileno(), 0o600) + json.dump(self, f) + + def _implicit_save(self): + if self.implicit_save: + self.save() + + +_cache_config = None + + +def config(scope=None): + """ + Get the juju charm configuration (scope==None) or individual key, + (scope=str). The returned value is a Python data structure loaded as + JSON from the Juju config command. + + :param scope: If set, return the value for the specified key. + :type scope: Optional[str] + :returns: Either the whole config as a Config, or a key from it. 
+ :rtype: Any + """ + global _cache_config + config_cmd_line = ['config-get', '--all', '--format=json'] + try: + # JSON Decode Exception for Python3.5+ + exc_json = json.decoder.JSONDecodeError + except AttributeError: + # JSON Decode Exception for Python2.7 through Python3.4 + exc_json = ValueError + try: + if _cache_config is None: + config_data = json.loads( + subprocess.check_output(config_cmd_line).decode('UTF-8')) + _cache_config = Config(config_data) + if scope is not None: + return _cache_config.get(scope) + return _cache_config + except (exc_json, UnicodeDecodeError) as e: + log('Unable to parse output from config-get: config_cmd_line="{}" ' + 'message="{}"' + .format(config_cmd_line, str(e)), level=ERROR) + return None + + +@cached +def relation_get(attribute=None, unit=None, rid=None): + """Get relation information""" + _args = ['relation-get', '--format=json'] + if rid: + _args.append('-r') + _args.append(rid) + _args.append(attribute or '-') + if unit: + _args.append(unit) + try: + return json.loads(subprocess.check_output(_args).decode('UTF-8')) + except ValueError: + return None + except CalledProcessError as e: + if e.returncode == 2: + return None + raise + + +def relation_set(relation_id=None, relation_settings=None, **kwargs): + """Set relation information for the current unit""" + relation_settings = relation_settings if relation_settings else {} + relation_cmd_line = ['relation-set'] + accepts_file = "--file" in subprocess.check_output( + relation_cmd_line + ["--help"], universal_newlines=True) + if relation_id is not None: + relation_cmd_line.extend(('-r', relation_id)) + settings = relation_settings.copy() + settings.update(kwargs) + for key, value in settings.items(): + # Force value to be a string: it always should, but some call + # sites pass in things like dicts or numbers. + if value is not None: + settings[key] = "{}".format(value) + if accepts_file: + # --file was introduced in Juju 1.23.2. Use it by default if + # available, since otherwise we'll break if the relation data is + # too big. Ideally we should tell relation-set to read the data from + # stdin, but that feature is broken in 1.23.2: Bug #1454678. 
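+        # The settings are serialised to YAML in a temporary file whose
+        # path is handed to relation-set via --file; the file is removed
+        # once the call returns.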
+        with tempfile.NamedTemporaryFile(delete=False) as settings_file:
+            settings_file.write(yaml.safe_dump(settings).encode("utf-8"))
+        subprocess.check_call(
+            relation_cmd_line + ["--file", settings_file.name])
+        os.remove(settings_file.name)
+    else:
+        for key, value in settings.items():
+            if value is None:
+                relation_cmd_line.append('{}='.format(key))
+            else:
+                relation_cmd_line.append('{}={}'.format(key, value))
+        subprocess.check_call(relation_cmd_line)
+    # Flush cache of any relation-gets for local unit
+    flush(local_unit())
+
+
+def relation_clear(r_id=None):
+    ''' Clears any relation data already set on relation r_id '''
+    settings = relation_get(rid=r_id,
+                            unit=local_unit())
+    for setting in settings:
+        if setting not in ['public-address', 'private-address']:
+            settings[setting] = None
+    relation_set(relation_id=r_id,
+                 **settings)
+
+
+@cached
+def relation_ids(reltype=None):
+    """A list of relation_ids"""
+    reltype = reltype or relation_type()
+    relid_cmd_line = ['relation-ids', '--format=json']
+    if reltype is not None:
+        relid_cmd_line.append(reltype)
+        return json.loads(
+            subprocess.check_output(relid_cmd_line).decode('UTF-8')) or []
+    return []
+
+
+@cached
+def related_units(relid=None):
+    """A list of related units"""
+    relid = relid or relation_id()
+    units_cmd_line = ['relation-list', '--format=json']
+    if relid is not None:
+        units_cmd_line.extend(('-r', relid))
+    return json.loads(
+        subprocess.check_output(units_cmd_line).decode('UTF-8')) or []
+
+
+def expected_peer_units():
+    """Get a generator for units we expect to join peer relation based on
+    goal-state.
+
+    The local unit is excluded from the result to make it easy to gauge
+    completion of all peers joining the relation with existing hook tools.
+
+    Example usage:
+    log('peer {} of {} joined peer relation'
+        .format(len(related_units()),
+                len(list(expected_peer_units()))))
+
+    This function will raise NotImplementedError if used with juju versions
+    without goal-state support.
+
+    :returns: iterator
+    :rtype: types.GeneratorType
+    :raises: NotImplementedError
+    """
+    if not has_juju_version("2.4.0"):
+        # goal-state first appeared in 2.4.0.
+        raise NotImplementedError("goal-state")
+    _goal_state = goal_state()
+    return (key for key in _goal_state['units']
+            if '/' in key and key != local_unit())
+
+
+def expected_related_units(reltype=None):
+    """Get a generator for units we expect to join relation based on
+    goal-state.
+
+    Note that you cannot use this function for the peer relation; take a look
+    at expected_peer_units() for that.
+
+    This function will raise KeyError if you request information for a
+    relation type for which juju goal-state does not have information. It will
+    raise NotImplementedError if used with juju versions without goal-state
+    support.
+
+    Example usage:
+    log('participant {} of {} joined relation {}'
+        .format(len(related_units()),
+                len(list(expected_related_units())),
+                relation_type()))
+
+    :param reltype: Relation type to list data for, default is to list data for
+                    the relation type we are currently executing a hook for.
+    :type reltype: str
+    :returns: iterator
+    :rtype: types.GeneratorType
+    :raises: KeyError, NotImplementedError
+    """
+    if not has_juju_version("2.4.4"):
+        # goal-state existed in 2.4.0, but did not list individual units to
+        # join a relation in 2.4.1 through 2.4.3. 
(LP: #1794739) + raise NotImplementedError("goal-state relation unit count") + reltype = reltype or relation_type() + _goal_state = goal_state() + return (key for key in _goal_state['relations'][reltype] if '/' in key) + + +@cached +def relation_for_unit(unit=None, rid=None): + """Get the json represenation of a unit's relation""" + unit = unit or remote_unit() + relation = relation_get(unit=unit, rid=rid) + for key in relation: + if key.endswith('-list'): + relation[key] = relation[key].split() + relation['__unit__'] = unit + return relation + + +@cached +def relations_for_id(relid=None): + """Get relations of a specific relation ID""" + relation_data = [] + relid = relid or relation_ids() + for unit in related_units(relid): + unit_data = relation_for_unit(unit, relid) + unit_data['__relid__'] = relid + relation_data.append(unit_data) + return relation_data + + +@cached +def relations_of_type(reltype=None): + """Get relations of a specific type""" + relation_data = [] + reltype = reltype or relation_type() + for relid in relation_ids(reltype): + for relation in relations_for_id(relid): + relation['__relid__'] = relid + relation_data.append(relation) + return relation_data + + +@cached +def metadata(): + """Get the current charm metadata.yaml contents as a python object""" + with open(os.path.join(charm_dir(), 'metadata.yaml')) as md: + return yaml.safe_load(md) + + +def _metadata_unit(unit): + """Given the name of a unit (e.g. apache2/0), get the unit charm's + metadata.yaml. Very similar to metadata() but allows us to inspect + other units. Unit needs to be co-located, such as a subordinate or + principal/primary. + + :returns: metadata.yaml as a python object. + + """ + basedir = os.sep.join(charm_dir().split(os.sep)[:-2]) + unitdir = 'unit-{}'.format(unit.replace(os.sep, '-')) + joineddir = os.path.join(basedir, unitdir, 'charm', 'metadata.yaml') + if not os.path.exists(joineddir): + return None + with open(joineddir) as md: + return yaml.safe_load(md) + + +@cached +def relation_types(): + """Get a list of relation types supported by this charm""" + rel_types = [] + md = metadata() + for key in ('provides', 'requires', 'peers'): + section = md.get(key) + if section: + rel_types.extend(section.keys()) + return rel_types + + +@cached +def peer_relation_id(): + '''Get the peers relation id if a peers relation has been joined, else None.''' + md = metadata() + section = md.get('peers') + if section: + for key in section: + relids = relation_ids(key) + if relids: + return relids[0] + return None + + +@cached +def relation_to_interface(relation_name): + """ + Given the name of a relation, return the interface that relation uses. + + :returns: The interface name, or ``None``. + """ + return relation_to_role_and_interface(relation_name)[1] + + +@cached +def relation_to_role_and_interface(relation_name): + """ + Given the name of a relation, return the role and the name of the interface + that relation uses (where role is one of ``provides``, ``requires``, or ``peers``). + + :returns: A tuple containing ``(role, interface)``, or ``(None, None)``. 
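+
+    An illustrative lookup (the ``website`` relation name and ``http``
+    interface are hypothetical examples, not defined by this module)::
+
+        >>> relation_to_role_and_interface('website')
+        ('provides', 'http')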
+ """ + _metadata = metadata() + for role in ('provides', 'requires', 'peers'): + interface = _metadata.get(role, {}).get(relation_name, {}).get('interface') + if interface: + return role, interface + return None, None + + +@cached +def role_and_interface_to_relations(role, interface_name): + """ + Given a role and interface name, return a list of relation names for the + current charm that use that interface under that role (where role is one + of ``provides``, ``requires``, or ``peers``). + + :returns: A list of relation names. + """ + _metadata = metadata() + results = [] + for relation_name, relation in _metadata.get(role, {}).items(): + if relation['interface'] == interface_name: + results.append(relation_name) + return results + + +@cached +def interface_to_relations(interface_name): + """ + Given an interface, return a list of relation names for the current + charm that use that interface. + + :returns: A list of relation names. + """ + results = [] + for role in ('provides', 'requires', 'peers'): + results.extend(role_and_interface_to_relations(role, interface_name)) + return results + + +@cached +def charm_name(): + """Get the name of the current charm as is specified on metadata.yaml""" + return metadata().get('name') + + +@cached +def relations(): + """Get a nested dictionary of relation data for all related units""" + rels = {} + for reltype in relation_types(): + relids = {} + for relid in relation_ids(reltype): + units = {local_unit(): relation_get(unit=local_unit(), rid=relid)} + for unit in related_units(relid): + reldata = relation_get(unit=unit, rid=relid) + units[unit] = reldata + relids[relid] = units + rels[reltype] = relids + return rels + + +@cached +def is_relation_made(relation, keys='private-address'): + ''' + Determine whether a relation is established by checking for + presence of key(s). If a list of keys is provided, they + must all be present for the relation to be identified as made + ''' + if isinstance(keys, str): + keys = [keys] + for r_id in relation_ids(relation): + for unit in related_units(r_id): + context = {} + for k in keys: + context[k] = relation_get(k, rid=r_id, + unit=unit) + if None not in context.values(): + return True + return False + + +def _port_op(op_name, port, protocol="TCP"): + """Open or close a service network port""" + _args = [op_name] + icmp = protocol.upper() == "ICMP" + if icmp: + _args.append(protocol) + else: + _args.append('{}/{}'.format(port, protocol)) + try: + subprocess.check_call(_args) + except subprocess.CalledProcessError: + # Older Juju pre 2.3 doesn't support ICMP + # so treat it as a no-op if it fails. 
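+        # Only ICMP failures are treated as a no-op; failures for
+        # port-based protocols are re-raised below.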
+ if not icmp: + raise + + +def open_port(port, protocol="TCP"): + """Open a service network port""" + _port_op('open-port', port, protocol) + + +def close_port(port, protocol="TCP"): + """Close a service network port""" + _port_op('close-port', port, protocol) + + +def open_ports(start, end, protocol="TCP"): + """Opens a range of service network ports""" + _args = ['open-port'] + _args.append('{}-{}/{}'.format(start, end, protocol)) + subprocess.check_call(_args) + + +def close_ports(start, end, protocol="TCP"): + """Close a range of service network ports""" + _args = ['close-port'] + _args.append('{}-{}/{}'.format(start, end, protocol)) + subprocess.check_call(_args) + + +def opened_ports(): + """Get the opened ports + + *Note that this will only show ports opened in a previous hook* + + :returns: Opened ports as a list of strings: ``['8080/tcp', '8081-8083/tcp']`` + """ + _args = ['opened-ports', '--format=json'] + return json.loads(subprocess.check_output(_args).decode('UTF-8')) + + +@cached +def unit_get(attribute): + """Get the unit ID for the remote unit""" + _args = ['unit-get', '--format=json', attribute] + try: + return json.loads(subprocess.check_output(_args).decode('UTF-8')) + except ValueError: + return None + + +def unit_public_ip(): + """Get this unit's public IP address""" + return unit_get('public-address') + + +def unit_private_ip(): + """Get this unit's private IP address""" + return unit_get('private-address') + + +@cached +def storage_get(attribute=None, storage_id=None): + """Get storage attributes""" + _args = ['storage-get', '--format=json'] + if storage_id: + _args.extend(('-s', storage_id)) + if attribute: + _args.append(attribute) + try: + return json.loads(subprocess.check_output(_args).decode('UTF-8')) + except ValueError: + return None + + +@cached +def storage_list(storage_name=None): + """List the storage IDs for the unit""" + _args = ['storage-list', '--format=json'] + if storage_name: + _args.append(storage_name) + try: + return json.loads(subprocess.check_output(_args).decode('UTF-8')) + except ValueError: + return None + except OSError as e: + import errno + if e.errno == errno.ENOENT: + # storage-list does not exist + return [] + raise + + +class UnregisteredHookError(Exception): + """Raised when an undefined hook is called""" + pass + + +class Hooks(object): + """A convenient handler for hook functions. + + Example:: + + hooks = Hooks() + + # register a hook, taking its name from the function name + @hooks.hook() + def install(): + pass # your code here + + # register a hook, providing a custom hook name + @hooks.hook("config-changed") + def config_changed(): + pass # your code here + + if __name__ == "__main__": + # execute a hook based on the name the program is called by + hooks.execute(sys.argv) + """ + + def __init__(self, config_save=None): + super(Hooks, self).__init__() + self._hooks = {} + + # For unknown reasons, we allow the Hooks constructor to override + # config().implicit_save. 
+ if config_save is not None: + config().implicit_save = config_save + + def register(self, name, function): + """Register a hook""" + self._hooks[name] = function + + def execute(self, args): + """Execute a registered hook based on args[0]""" + _run_atstart() + hook_name = os.path.basename(args[0]) + if hook_name in self._hooks: + try: + self._hooks[hook_name]() + except SystemExit as x: + if x.code is None or x.code == 0: + _run_atexit() + raise + _run_atexit() + else: + raise UnregisteredHookError(hook_name) + + def hook(self, *hook_names): + """Decorator, registering them as hooks""" + def wrapper(decorated): + for hook_name in hook_names: + self.register(hook_name, decorated) + else: + self.register(decorated.__name__, decorated) + if '_' in decorated.__name__: + self.register( + decorated.__name__.replace('_', '-'), decorated) + return decorated + return wrapper + + +class NoNetworkBinding(Exception): + pass + + +def charm_dir(): + """Return the root directory of the current charm""" + d = os.environ.get('JUJU_CHARM_DIR') + if d is not None: + return d + return os.environ.get('CHARM_DIR') + + +def cmd_exists(cmd): + """Return True if the specified cmd exists in the path""" + return any( + os.access(os.path.join(path, cmd), os.X_OK) + for path in os.environ["PATH"].split(os.pathsep) + ) + + +@cached +@deprecate("moved to function_get()", log=log) +def action_get(key=None): + """ + .. deprecated:: 0.20.7 + Alias for :func:`function_get`. + + Gets the value of an action parameter, or all key/value param pairs. + """ + cmd = ['action-get'] + if key is not None: + cmd.append(key) + cmd.append('--format=json') + action_data = json.loads(subprocess.check_output(cmd).decode('UTF-8')) + return action_data + + +@cached +def function_get(key=None): + """Gets the value of an action parameter, or all key/value param pairs""" + cmd = ['function-get'] + # Fallback for older charms. + if not cmd_exists('function-get'): + cmd = ['action-get'] + + if key is not None: + cmd.append(key) + cmd.append('--format=json') + function_data = json.loads(subprocess.check_output(cmd).decode('UTF-8')) + return function_data + + +@deprecate("moved to function_set()", log=log) +def action_set(values): + """ + .. deprecated:: 0.20.7 + Alias for :func:`function_set`. + + Sets the values to be returned after the action finishes. + """ + cmd = ['action-set'] + for k, v in list(values.items()): + cmd.append('{}={}'.format(k, v)) + subprocess.check_call(cmd) + + +def function_set(values): + """Sets the values to be returned after the function finishes""" + cmd = ['function-set'] + # Fallback for older charms. + if not cmd_exists('function-get'): + cmd = ['action-set'] + + for k, v in list(values.items()): + cmd.append('{}={}'.format(k, v)) + subprocess.check_call(cmd) + + +@deprecate("moved to function_fail()", log=log) +def action_fail(message): + """ + .. deprecated:: 0.20.7 + Alias for :func:`function_fail`. + + Sets the action status to failed and sets the error message. + + The results set by action_set are preserved. + """ + subprocess.check_call(['action-fail', message]) + + +def function_fail(message): + """Sets the function status to failed and sets the error message. + + The results set by function_set are preserved.""" + cmd = ['function-fail'] + # Fallback for older charms. 
+ if not cmd_exists('function-fail'): + cmd = ['action-fail'] + cmd.append(message) + + subprocess.check_call(cmd) + + +def action_name(): + """Get the name of the currently executing action.""" + return os.environ.get('JUJU_ACTION_NAME') + + +def function_name(): + """Get the name of the currently executing function.""" + return os.environ.get('JUJU_FUNCTION_NAME') or action_name() + + +def action_uuid(): + """Get the UUID of the currently executing action.""" + return os.environ.get('JUJU_ACTION_UUID') + + +def function_id(): + """Get the ID of the currently executing function.""" + return os.environ.get('JUJU_FUNCTION_ID') or action_uuid() + + +def action_tag(): + """Get the tag for the currently executing action.""" + return os.environ.get('JUJU_ACTION_TAG') + + +def function_tag(): + """Get the tag for the currently executing function.""" + return os.environ.get('JUJU_FUNCTION_TAG') or action_tag() + + +def status_set(workload_state, message, application=False): + """Set the workload state with a message + + Use status-set to set the workload state with a message which is visible + to the user via juju status. If the status-set command is not found then + assume this is juju < 1.23 and juju-log the message instead. + + workload_state -- valid juju workload state. str or WORKLOAD_STATES + message -- status update message + application -- Whether this is an application state set + """ + bad_state_msg = '{!r} is not a valid workload state' + + if isinstance(workload_state, str): + try: + # Convert string to enum. + workload_state = WORKLOAD_STATES[workload_state.upper()] + except KeyError: + raise ValueError(bad_state_msg.format(workload_state)) + + if workload_state not in WORKLOAD_STATES: + raise ValueError(bad_state_msg.format(workload_state)) + + cmd = ['status-set'] + if application: + cmd.append('--application') + cmd.extend([workload_state.value, message]) + try: + ret = subprocess.call(cmd) + if ret == 0: + return + except OSError as e: + if e.errno != errno.ENOENT: + raise + log_message = 'status-set failed: {} {}'.format(workload_state.value, + message) + log(log_message, level='INFO') + + +def status_get(): + """Retrieve the previously set juju workload state and message + + If the status-get command is not found then assume this is juju < 1.23 and + return 'unknown', "" + + """ + cmd = ['status-get', "--format=json", "--include-data"] + try: + raw_status = subprocess.check_output(cmd) + except OSError as e: + if e.errno == errno.ENOENT: + return ('unknown', "") + else: + raise + else: + status = json.loads(raw_status.decode("UTF-8")) + return (status["status"], status["message"]) + + +def translate_exc(from_exc, to_exc): + def inner_translate_exc1(f): + @wraps(f) + def inner_translate_exc2(*args, **kwargs): + try: + return f(*args, **kwargs) + except from_exc: + raise to_exc + + return inner_translate_exc2 + + return inner_translate_exc1 + + +def application_version_set(version): + """Charm authors may trigger this command from any hook to output what + version of the application is running. This could be a package version, + for instance postgres version 9.5. It could also be a build number or + version control revision identifier, for instance git sha 6fb7ba68. 
""" + + cmd = ['application-version-set'] + cmd.append(version) + try: + subprocess.check_call(cmd) + except OSError: + log("Application Version: {}".format(version)) + + +@translate_exc(from_exc=OSError, to_exc=NotImplementedError) +@cached +def goal_state(): + """Juju goal state values""" + cmd = ['goal-state', '--format=json'] + return json.loads(subprocess.check_output(cmd).decode('UTF-8')) + + +@translate_exc(from_exc=OSError, to_exc=NotImplementedError) +def is_leader(): + """Does the current unit hold the juju leadership + + Uses juju to determine whether the current unit is the leader of its peers + """ + cmd = ['is-leader', '--format=json'] + return json.loads(subprocess.check_output(cmd).decode('UTF-8')) + + +@translate_exc(from_exc=OSError, to_exc=NotImplementedError) +def leader_get(attribute=None): + """Juju leader get value(s)""" + cmd = ['leader-get', '--format=json'] + [attribute or '-'] + return json.loads(subprocess.check_output(cmd).decode('UTF-8')) + + +@translate_exc(from_exc=OSError, to_exc=NotImplementedError) +def leader_set(settings=None, **kwargs): + """Juju leader set value(s)""" + # Don't log secrets. + # log("Juju leader-set '%s'" % (settings), level=DEBUG) + cmd = ['leader-set'] + settings = settings or {} + settings.update(kwargs) + for k, v in settings.items(): + if v is None: + cmd.append('{}='.format(k)) + else: + cmd.append('{}={}'.format(k, v)) + subprocess.check_call(cmd) + + +@translate_exc(from_exc=OSError, to_exc=NotImplementedError) +def payload_register(ptype, klass, pid): + """ is used while a hook is running to let Juju know that a + payload has been started.""" + cmd = ['payload-register'] + for x in [ptype, klass, pid]: + cmd.append(x) + subprocess.check_call(cmd) + + +@translate_exc(from_exc=OSError, to_exc=NotImplementedError) +def payload_unregister(klass, pid): + """ is used while a hook is running to let Juju know + that a payload has been manually stopped. The and provided + must match a payload that has been previously registered with juju using + payload-register.""" + cmd = ['payload-unregister'] + for x in [klass, pid]: + cmd.append(x) + subprocess.check_call(cmd) + + +@translate_exc(from_exc=OSError, to_exc=NotImplementedError) +def payload_status_set(klass, pid, status): + """is used to update the current status of a registered payload. + The and provided must match a payload that has been previously + registered with juju using payload-register. The must be one of the + follow: starting, started, stopping, stopped""" + cmd = ['payload-status-set'] + for x in [klass, pid, status]: + cmd.append(x) + subprocess.check_call(cmd) + + +@translate_exc(from_exc=OSError, to_exc=NotImplementedError) +def resource_get(name): + """used to fetch the resource path of the given name. + + must match a name of defined resource in metadata.yaml + + returns either a path or False if resource not available + """ + if not name: + return False + + cmd = ['resource-get', name] + try: + return subprocess.check_output(cmd).decode('UTF-8') + except subprocess.CalledProcessError: + return False + + +@cached +def juju_version(): + """Full version string (eg. 
'1.23.3.1-trusty-amd64')""" + # Per https://bugs.launchpad.net/juju-core/+bug/1455368/comments/1 + jujud = glob.glob('/var/lib/juju/tools/machine-*/jujud')[0] + return subprocess.check_output([jujud, 'version'], + universal_newlines=True).strip() + + +def has_juju_version(minimum_version): + """Return True if the Juju version is at least the provided version""" + return LooseVersion(juju_version()) >= LooseVersion(minimum_version) + + +_atexit = [] +_atstart = [] + + +def atstart(callback, *args, **kwargs): + '''Schedule a callback to run before the main hook. + + Callbacks are run in the order they were added. + + This is useful for modules and classes to perform initialization + and inject behavior. In particular: + + - Run common code before all of your hooks, such as logging + the hook name or interesting relation data. + - Defer object or module initialization that requires a hook + context until we know there actually is a hook context, + making testing easier. + - Rather than requiring charm authors to include boilerplate to + invoke your helper's behavior, have it run automatically if + your object is instantiated or module imported. + + This is not at all useful after your hook framework as been launched. + ''' + global _atstart + _atstart.append((callback, args, kwargs)) + + +def atexit(callback, *args, **kwargs): + '''Schedule a callback to run on successful hook completion. + + Callbacks are run in the reverse order that they were added.''' + _atexit.append((callback, args, kwargs)) + + +def _run_atstart(): + '''Hook frameworks must invoke this before running the main hook body.''' + global _atstart + for callback, args, kwargs in _atstart: + callback(*args, **kwargs) + del _atstart[:] + + +def _run_atexit(): + '''Hook frameworks must invoke this after the main hook body has + successfully completed. Do not invoke it if the hook fails.''' + global _atexit + for callback, args, kwargs in reversed(_atexit): + callback(*args, **kwargs) + del _atexit[:] + + +@translate_exc(from_exc=OSError, to_exc=NotImplementedError) +def network_get_primary_address(binding): + ''' + Deprecated since Juju 2.3; use network_get() + + Retrieve the primary network address for a named binding + + :param binding: string. The name of a relation of extra-binding + :return: string. The primary IP address for the named binding + :raise: NotImplementedError if run on Juju < 2.0 + ''' + cmd = ['network-get', '--primary-address', binding] + try: + response = subprocess.check_output( + cmd, + stderr=subprocess.STDOUT).decode('UTF-8').strip() + except CalledProcessError as e: + if 'no network config found for binding' in e.output.decode('UTF-8'): + raise NoNetworkBinding("No network binding for {}" + .format(binding)) + else: + raise + return response + + +def network_get(endpoint, relation_id=None): + """ + Retrieve the network details for a relation endpoint + + :param endpoint: string. The name of a relation endpoint + :param relation_id: int. The ID of the relation for the current context. + :return: dict. The loaded YAML output of the network-get query. + :raise: NotImplementedError if request not supported by the Juju version. 
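+
+    A hedged usage sketch (the ``db`` endpoint name is hypothetical, and
+    the keys present in the result vary by Juju version)::
+
+        details = network_get('db')
+        # 'details' is the dict loaded from the YAML that network-get
+        # prints, e.g. containing a 'bind-addresses' entry.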
+ """ + if not has_juju_version('2.2'): + raise NotImplementedError(juju_version()) # earlier versions require --primary-address + if relation_id and not has_juju_version('2.3'): + raise NotImplementedError # 2.3 added the -r option + + cmd = ['network-get', endpoint, '--format', 'yaml'] + if relation_id: + cmd.append('-r') + cmd.append(relation_id) + response = subprocess.check_output( + cmd, + stderr=subprocess.STDOUT).decode('UTF-8').strip() + return yaml.safe_load(response) + + +def add_metric(*args, **kwargs): + """Add metric values. Values may be expressed with keyword arguments. For + metric names containing dashes, these may be expressed as one or more + 'key=value' positional arguments. May only be called from the collect-metrics + hook.""" + _args = ['add-metric'] + _kvpairs = [] + _kvpairs.extend(args) + _kvpairs.extend(['{}={}'.format(k, v) for k, v in kwargs.items()]) + _args.extend(sorted(_kvpairs)) + try: + subprocess.check_call(_args) + return + except EnvironmentError as e: + if e.errno != errno.ENOENT: + raise + log_message = 'add-metric failed: {}'.format(' '.join(_kvpairs)) + log(log_message, level='INFO') + + +def meter_status(): + """Get the meter status, if running in the meter-status-changed hook.""" + return os.environ.get('JUJU_METER_STATUS') + + +def meter_info(): + """Get the meter status information, if running in the meter-status-changed + hook.""" + return os.environ.get('JUJU_METER_INFO') + + +def iter_units_for_relation_name(relation_name): + """Iterate through all units in a relation + + Generator that iterates through all the units in a relation and yields + a named tuple with rid and unit field names. + + Usage: + data = [(u.rid, u.unit) + for u in iter_units_for_relation_name(relation_name)] + + :param relation_name: string relation name + :yield: Named Tuple with rid and unit field names + """ + RelatedUnit = namedtuple('RelatedUnit', 'rid, unit') + for rid in relation_ids(relation_name): + for unit in related_units(rid): + yield RelatedUnit(rid, unit) + + +def ingress_address(rid=None, unit=None): + """ + Retrieve the ingress-address from a relation when available. + Otherwise, return the private-address. + + When used on the consuming side of the relation (unit is a remote + unit), the ingress-address is the IP address that this unit needs + to use to reach the provided service on the remote unit. + + When used on the providing side of the relation (unit == local_unit()), + the ingress-address is the IP address that is advertised to remote + units on this relation. Remote units need to use this address to + reach the local provided service on this unit. + + Note that charms may document some other method to use in + preference to the ingress_address(), such as an address provided + on a different relation attribute or a service discovery mechanism. + This allows charms to redirect inbound connections to their peers + or different applications such as load balancers. + + Usage: + addresses = [ingress_address(rid=u.rid, unit=u.unit) + for u in iter_units_for_relation_name(relation_name)] + + :param rid: string relation id + :param unit: string unit name + :side effect: calls relation_get + :return: string IP address + """ + settings = relation_get(rid=rid, unit=unit) + return (settings.get('ingress-address') or + settings.get('private-address')) + + +def egress_subnets(rid=None, unit=None): + """ + Retrieve the egress-subnets from a relation. 
+ + This function is to be used on the providing side of the + relation, and provides the ranges of addresses that client + connections may come from. The result is uninteresting on + the consuming side of a relation (unit == local_unit()). + + Returns a stable list of subnets in CIDR format. + eg. ['192.168.1.0/24', '2001::F00F/128'] + + If egress-subnets is not available, falls back to using the published + ingress-address, or finally private-address. + + :param rid: string relation id + :param unit: string unit name + :side effect: calls relation_get + :return: list of subnets in CIDR format. eg. ['192.168.1.0/24', '2001::F00F/128'] + """ + def _to_range(addr): + if re.search(r'^(?:\d{1,3}\.){3}\d{1,3}$', addr) is not None: + addr += '/32' + elif ':' in addr and '/' not in addr: # IPv6 + addr += '/128' + return addr + + settings = relation_get(rid=rid, unit=unit) + if 'egress-subnets' in settings: + return [n.strip() for n in settings['egress-subnets'].split(',') if n.strip()] + if 'ingress-address' in settings: + return [_to_range(settings['ingress-address'])] + if 'private-address' in settings: + return [_to_range(settings['private-address'])] + return [] # Should never happen + + +def unit_doomed(unit=None): + """Determines if the unit is being removed from the model + + Requires Juju 2.4.1. + + :param unit: string unit name, defaults to local_unit + :side effect: calls goal_state + :side effect: calls local_unit + :side effect: calls has_juju_version + :return: True if the unit is being removed, already gone, or never existed + """ + if not has_juju_version("2.4.1"): + # We cannot risk blindly returning False for 'we don't know', + # because that could cause data loss; if call sites don't + # need an accurate answer, they likely don't need this helper + # at all. + # goal-state existed in 2.4.0, but did not handle removals + # correctly until 2.4.1. + raise NotImplementedError("is_doomed") + if unit is None: + unit = local_unit() + gs = goal_state() + units = gs.get('units', {}) + if unit not in units: + return True + # I don't think 'dead' units ever show up in the goal-state, but + # check anyway in addition to 'dying'. + return units[unit]['status'] in ('dying', 'dead') + + +def env_proxy_settings(selected_settings=None): + """Get proxy settings from process environment variables. + + Get charm proxy settings from environment variables that correspond to + juju-http-proxy, juju-https-proxy juju-no-proxy (available as of 2.4.2, see + lp:1782236) and juju-ftp-proxy in a format suitable for passing to an + application that reacts to proxy settings passed as environment variables. + Some applications support lowercase or uppercase notation (e.g. curl), some + support only lowercase (e.g. wget), there are also subjectively rare cases + of only uppercase notation support. no_proxy CIDR and wildcard support also + varies between runtimes and applications as there is no enforced standard. + + Some applications may connect to multiple destinations and expose config + options that would affect only proxy settings for a specific destination + these should be handled in charms in an application-specific manner. 
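+
+    An illustrative call (the proxy URL below is invented for this
+    sketch)::
+
+        # With JUJU_CHARM_HTTP_PROXY=http://squid.internal:3128 set in the
+        # environment, both notations are returned:
+        env_proxy_settings(['http'])
+        # -> {'HTTP_PROXY': 'http://squid.internal:3128',
+        #     'http_proxy': 'http://squid.internal:3128'}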
+ + :param selected_settings: format only a subset of possible settings + :type selected_settings: list + :rtype: Option(None, dict[str, str]) + """ + SUPPORTED_SETTINGS = { + 'http': 'HTTP_PROXY', + 'https': 'HTTPS_PROXY', + 'no_proxy': 'NO_PROXY', + 'ftp': 'FTP_PROXY' + } + if selected_settings is None: + selected_settings = SUPPORTED_SETTINGS + + selected_vars = [v for k, v in SUPPORTED_SETTINGS.items() + if k in selected_settings] + proxy_settings = {} + for var in selected_vars: + var_val = os.getenv(var) + if var_val: + proxy_settings[var] = var_val + proxy_settings[var.lower()] = var_val + # Now handle juju-prefixed environment variables. The legacy vs new + # environment variable usage is mutually exclusive + charm_var_val = os.getenv('JUJU_CHARM_{}'.format(var)) + if charm_var_val: + proxy_settings[var] = charm_var_val + proxy_settings[var.lower()] = charm_var_val + if 'no_proxy' in proxy_settings: + if _contains_range(proxy_settings['no_proxy']): + log(RANGE_WARNING, level=WARNING) + return proxy_settings if proxy_settings else None + + +def _contains_range(addresses): + """Check for cidr or wildcard domain in a string. + + Given a string comprising a comma seperated list of ip addresses + and domain names, determine whether the string contains IP ranges + or wildcard domains. + + :param addresses: comma seperated list of domains and ip addresses. + :type addresses: str + """ + return ( + # Test for cidr (e.g. 10.20.20.0/24) + "/" in addresses or + # Test for wildcard domains (*.foo.com or .foo.com) + "*" in addresses or + addresses.startswith(".") or + ",." in addresses or + " ." in addresses) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/core/host.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/core/host.py new file mode 100644 index 0000000000000000000000000000000000000000..b33ac906d9eeb198210dceb069724ac9a35652ea --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/core/host.py @@ -0,0 +1,1104 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +"""Tools for working with the host system""" +# Copyright 2012 Canonical Ltd. 
+# +# Authors: +# Nick Moffitt +# Matthew Wedgwood + +import os +import re +import pwd +import glob +import grp +import random +import string +import subprocess +import hashlib +import functools +import itertools +import six + +from contextlib import contextmanager +from collections import OrderedDict +from .hookenv import log, INFO, DEBUG, local_unit, charm_name +from .fstab import Fstab +from charmhelpers.osplatform import get_platform + +__platform__ = get_platform() +if __platform__ == "ubuntu": + from charmhelpers.core.host_factory.ubuntu import ( # NOQA:F401 + service_available, + add_new_group, + lsb_release, + cmp_pkgrevno, + CompareHostReleases, + get_distrib_codename, + arch + ) # flake8: noqa -- ignore F401 for this import +elif __platform__ == "centos": + from charmhelpers.core.host_factory.centos import ( # NOQA:F401 + service_available, + add_new_group, + lsb_release, + cmp_pkgrevno, + CompareHostReleases, + ) # flake8: noqa -- ignore F401 for this import + +UPDATEDB_PATH = '/etc/updatedb.conf' + + +def service_start(service_name, **kwargs): + """Start a system service. + + The specified service name is managed via the system level init system. + Some init systems (e.g. upstart) require that additional arguments be + provided in order to directly control service instances whereas other init + systems allow for addressing instances of a service directly by name (e.g. + systemd). + + The kwargs allow for the additional parameters to be passed to underlying + init systems for those systems which require/allow for them. For example, + the ceph-osd upstart script requires the id parameter to be passed along + in order to identify which running daemon should be reloaded. The follow- + ing example stops the ceph-osd service for instance id=4: + + service_stop('ceph-osd', id=4) + + :param service_name: the name of the service to stop + :param **kwargs: additional parameters to pass to the init system when + managing services. These will be passed as key=value + parameters to the init system's commandline. kwargs + are ignored for systemd enabled systems. + """ + return service('start', service_name, **kwargs) + + +def service_stop(service_name, **kwargs): + """Stop a system service. + + The specified service name is managed via the system level init system. + Some init systems (e.g. upstart) require that additional arguments be + provided in order to directly control service instances whereas other init + systems allow for addressing instances of a service directly by name (e.g. + systemd). + + The kwargs allow for the additional parameters to be passed to underlying + init systems for those systems which require/allow for them. For example, + the ceph-osd upstart script requires the id parameter to be passed along + in order to identify which running daemon should be reloaded. The follow- + ing example stops the ceph-osd service for instance id=4: + + service_stop('ceph-osd', id=4) + + :param service_name: the name of the service to stop + :param **kwargs: additional parameters to pass to the init system when + managing services. These will be passed as key=value + parameters to the init system's commandline. kwargs + are ignored for systemd enabled systems. + """ + return service('stop', service_name, **kwargs) + + +def service_restart(service_name, **kwargs): + """Restart a system service. + + The specified service name is managed via the system level init system. + Some init systems (e.g. 
upstart) require that additional arguments be + provided in order to directly control service instances whereas other init + systems allow for addressing instances of a service directly by name (e.g. + systemd). + + The kwargs allow for the additional parameters to be passed to underlying + init systems for those systems which require/allow for them. For example, + the ceph-osd upstart script requires the id parameter to be passed along + in order to identify which running daemon should be restarted. The follow- + ing example restarts the ceph-osd service for instance id=4: + + service_restart('ceph-osd', id=4) + + :param service_name: the name of the service to restart + :param **kwargs: additional parameters to pass to the init system when + managing services. These will be passed as key=value + parameters to the init system's commandline. kwargs + are ignored for init systems not allowing additional + parameters via the commandline (systemd). + """ + return service('restart', service_name) + + +def service_reload(service_name, restart_on_failure=False, **kwargs): + """Reload a system service, optionally falling back to restart if + reload fails. + + The specified service name is managed via the system level init system. + Some init systems (e.g. upstart) require that additional arguments be + provided in order to directly control service instances whereas other init + systems allow for addressing instances of a service directly by name (e.g. + systemd). + + The kwargs allow for the additional parameters to be passed to underlying + init systems for those systems which require/allow for them. For example, + the ceph-osd upstart script requires the id parameter to be passed along + in order to identify which running daemon should be reloaded. The follow- + ing example restarts the ceph-osd service for instance id=4: + + service_reload('ceph-osd', id=4) + + :param service_name: the name of the service to reload + :param restart_on_failure: boolean indicating whether to fallback to a + restart if the reload fails. + :param **kwargs: additional parameters to pass to the init system when + managing services. These will be passed as key=value + parameters to the init system's commandline. kwargs + are ignored for init systems not allowing additional + parameters via the commandline (systemd). + """ + service_result = service('reload', service_name, **kwargs) + if not service_result and restart_on_failure: + service_result = service('restart', service_name, **kwargs) + return service_result + + +def service_pause(service_name, init_dir="/etc/init", initd_dir="/etc/init.d", + **kwargs): + """Pause a system service. + + Stop it, and prevent it from starting again at boot. + + :param service_name: the name of the service to pause + :param init_dir: path to the upstart init directory + :param initd_dir: path to the sysv init directory + :param **kwargs: additional parameters to pass to the init system when + managing services. These will be passed as key=value + parameters to the init system's commandline. kwargs + are ignored for init systems which do not support + key=value arguments via the commandline. 
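+
+    A minimal, illustrative call (the ``apache2`` service name is an
+    assumption for this example)::
+
+        service_pause('apache2')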
+ """ + stopped = True + if service_running(service_name, **kwargs): + stopped = service_stop(service_name, **kwargs) + upstart_file = os.path.join(init_dir, "{}.conf".format(service_name)) + sysv_file = os.path.join(initd_dir, service_name) + if init_is_systemd(): + service('disable', service_name) + service('mask', service_name) + elif os.path.exists(upstart_file): + override_path = os.path.join( + init_dir, '{}.override'.format(service_name)) + with open(override_path, 'w') as fh: + fh.write("manual\n") + elif os.path.exists(sysv_file): + subprocess.check_call(["update-rc.d", service_name, "disable"]) + else: + raise ValueError( + "Unable to detect {0} as SystemD, Upstart {1} or" + " SysV {2}".format( + service_name, upstart_file, sysv_file)) + return stopped + + +def service_resume(service_name, init_dir="/etc/init", + initd_dir="/etc/init.d", **kwargs): + """Resume a system service. + + Reenable starting again at boot. Start the service. + + :param service_name: the name of the service to resume + :param init_dir: the path to the init dir + :param initd dir: the path to the initd dir + :param **kwargs: additional parameters to pass to the init system when + managing services. These will be passed as key=value + parameters to the init system's commandline. kwargs + are ignored for systemd enabled systems. + """ + upstart_file = os.path.join(init_dir, "{}.conf".format(service_name)) + sysv_file = os.path.join(initd_dir, service_name) + if init_is_systemd(): + service('unmask', service_name) + service('enable', service_name) + elif os.path.exists(upstart_file): + override_path = os.path.join( + init_dir, '{}.override'.format(service_name)) + if os.path.exists(override_path): + os.unlink(override_path) + elif os.path.exists(sysv_file): + subprocess.check_call(["update-rc.d", service_name, "enable"]) + else: + raise ValueError( + "Unable to detect {0} as SystemD, Upstart {1} or" + " SysV {2}".format( + service_name, upstart_file, sysv_file)) + started = service_running(service_name, **kwargs) + + if not started: + started = service_start(service_name, **kwargs) + return started + + +def service(action, service_name, **kwargs): + """Control a system service. + + :param action: the action to take on the service + :param service_name: the name of the service to perform th action on + :param **kwargs: additional params to be passed to the service command in + the form of key=value. + """ + if init_is_systemd(): + cmd = ['systemctl', action, service_name] + else: + cmd = ['service', service_name, action] + for key, value in six.iteritems(kwargs): + parameter = '%s=%s' % (key, value) + cmd.append(parameter) + return subprocess.call(cmd) == 0 + + +_UPSTART_CONF = "/etc/init/{}.conf" +_INIT_D_CONF = "/etc/init.d/{}" + + +def service_running(service_name, **kwargs): + """Determine whether a system service is running. + + :param service_name: the name of the service + :param **kwargs: additional args to pass to the service command. This is + used to pass additional key=value arguments to the + service command line for managing specific instance + units (e.g. service ceph-osd status id=2). The kwargs + are ignored in systemd services. 
+ """ + if init_is_systemd(): + return service('is-active', service_name) + else: + if os.path.exists(_UPSTART_CONF.format(service_name)): + try: + cmd = ['status', service_name] + for key, value in six.iteritems(kwargs): + parameter = '%s=%s' % (key, value) + cmd.append(parameter) + output = subprocess.check_output( + cmd, stderr=subprocess.STDOUT).decode('UTF-8') + except subprocess.CalledProcessError: + return False + else: + # This works for upstart scripts where the 'service' command + # returns a consistent string to represent running + # 'start/running' + if ("start/running" in output or + "is running" in output or + "up and running" in output): + return True + elif os.path.exists(_INIT_D_CONF.format(service_name)): + # Check System V scripts init script return codes + return service('status', service_name) + return False + + +SYSTEMD_SYSTEM = '/run/systemd/system' + + +def init_is_systemd(): + """Return True if the host system uses systemd, False otherwise.""" + if lsb_release()['DISTRIB_CODENAME'] == 'trusty': + return False + return os.path.isdir(SYSTEMD_SYSTEM) + + +def adduser(username, password=None, shell='/bin/bash', + system_user=False, primary_group=None, + secondary_groups=None, uid=None, home_dir=None): + """Add a user to the system. + + Will log but otherwise succeed if the user already exists. + + :param str username: Username to create + :param str password: Password for user; if ``None``, create a system user + :param str shell: The default shell for the user + :param bool system_user: Whether to create a login or system user + :param str primary_group: Primary group for user; defaults to username + :param list secondary_groups: Optional list of additional groups + :param int uid: UID for user being created + :param str home_dir: Home directory for user + + :returns: The password database entry struct, as returned by `pwd.getpwnam` + """ + try: + user_info = pwd.getpwnam(username) + log('user {0} already exists!'.format(username)) + if uid: + user_info = pwd.getpwuid(int(uid)) + log('user with uid {0} already exists!'.format(uid)) + except KeyError: + log('creating user {0}'.format(username)) + cmd = ['useradd'] + if uid: + cmd.extend(['--uid', str(uid)]) + if home_dir: + cmd.extend(['--home', str(home_dir)]) + if system_user or password is None: + cmd.append('--system') + else: + cmd.extend([ + '--create-home', + '--shell', shell, + '--password', password, + ]) + if not primary_group: + try: + grp.getgrnam(username) + primary_group = username # avoid "group exists" error + except KeyError: + pass + if primary_group: + cmd.extend(['-g', primary_group]) + if secondary_groups: + cmd.extend(['-G', ','.join(secondary_groups)]) + cmd.append(username) + subprocess.check_call(cmd) + user_info = pwd.getpwnam(username) + return user_info + + +def user_exists(username): + """Check if a user exists""" + try: + pwd.getpwnam(username) + user_exists = True + except KeyError: + user_exists = False + return user_exists + + +def uid_exists(uid): + """Check if a uid exists""" + try: + pwd.getpwuid(uid) + uid_exists = True + except KeyError: + uid_exists = False + return uid_exists + + +def group_exists(groupname): + """Check if a group exists""" + try: + grp.getgrnam(groupname) + group_exists = True + except KeyError: + group_exists = False + return group_exists + + +def gid_exists(gid): + """Check if a gid exists""" + try: + grp.getgrgid(gid) + gid_exists = True + except KeyError: + gid_exists = False + return gid_exists + + +def add_group(group_name, system_group=False, gid=None): + 
"""Add a group to the system + + Will log but otherwise succeed if the group already exists. + + :param str group_name: group to create + :param bool system_group: Create system group + :param int gid: GID for user being created + + :returns: The password database entry struct, as returned by `grp.getgrnam` + """ + try: + group_info = grp.getgrnam(group_name) + log('group {0} already exists!'.format(group_name)) + if gid: + group_info = grp.getgrgid(gid) + log('group with gid {0} already exists!'.format(gid)) + except KeyError: + log('creating group {0}'.format(group_name)) + add_new_group(group_name, system_group, gid) + group_info = grp.getgrnam(group_name) + return group_info + + +def add_user_to_group(username, group): + """Add a user to a group""" + cmd = ['gpasswd', '-a', username, group] + log("Adding user {} to group {}".format(username, group)) + subprocess.check_call(cmd) + + +def chage(username, lastday=None, expiredate=None, inactive=None, + mindays=None, maxdays=None, root=None, warndays=None): + """Change user password expiry information + + :param str username: User to update + :param str lastday: Set when password was changed in YYYY-MM-DD format + :param str expiredate: Set when user's account will no longer be + accessible in YYYY-MM-DD format. + -1 will remove an account expiration date. + :param str inactive: Set the number of days of inactivity after a password + has expired before the account is locked. + -1 will remove an account's inactivity. + :param str mindays: Set the minimum number of days between password + changes to MIN_DAYS. + 0 indicates the password can be changed anytime. + :param str maxdays: Set the maximum number of days during which a + password is valid. + -1 as MAX_DAYS will remove checking maxdays + :param str root: Apply changes in the CHROOT_DIR directory + :param str warndays: Set the number of days of warning before a password + change is required + :raises subprocess.CalledProcessError: if call to chage fails + """ + cmd = ['chage'] + if root: + cmd.extend(['--root', root]) + if lastday: + cmd.extend(['--lastday', lastday]) + if expiredate: + cmd.extend(['--expiredate', expiredate]) + if inactive: + cmd.extend(['--inactive', inactive]) + if mindays: + cmd.extend(['--mindays', mindays]) + if maxdays: + cmd.extend(['--maxdays', maxdays]) + if warndays: + cmd.extend(['--warndays', warndays]) + cmd.append(username) + subprocess.check_call(cmd) + + +remove_password_expiry = functools.partial(chage, expiredate='-1', inactive='-1', mindays='0', maxdays='-1') + + +def rsync(from_path, to_path, flags='-r', options=None, timeout=None): + """Replicate the contents of a path""" + options = options or ['--delete', '--executability'] + cmd = ['/usr/bin/rsync', flags] + if timeout: + cmd = ['timeout', str(timeout)] + cmd + cmd.extend(options) + cmd.append(from_path) + cmd.append(to_path) + log(" ".join(cmd)) + return subprocess.check_output(cmd, stderr=subprocess.STDOUT).decode('UTF-8').strip() + + +def symlink(source, destination): + """Create a symbolic link""" + log("Symlinking {} as {}".format(source, destination)) + cmd = [ + 'ln', + '-sf', + source, + destination, + ] + subprocess.check_call(cmd) + + +def mkdir(path, owner='root', group='root', perms=0o555, force=False): + """Create a directory""" + log("Making dir {} {}:{} {:o}".format(path, owner, group, + perms)) + uid = pwd.getpwnam(owner).pw_uid + gid = grp.getgrnam(group).gr_gid + realpath = os.path.abspath(path) + path_exists = os.path.exists(realpath) + if path_exists and force: + if not 
os.path.isdir(realpath): + log("Removing non-directory file {} prior to mkdir()".format(path)) + os.unlink(realpath) + os.makedirs(realpath, perms) + elif not path_exists: + os.makedirs(realpath, perms) + os.chown(realpath, uid, gid) + os.chmod(realpath, perms) + + +def write_file(path, content, owner='root', group='root', perms=0o444): + """Create or overwrite a file with the contents of a byte string.""" + uid = pwd.getpwnam(owner).pw_uid + gid = grp.getgrnam(group).gr_gid + # lets see if we can grab the file and compare the context, to avoid doing + # a write. + existing_content = None + existing_uid, existing_gid, existing_perms = None, None, None + try: + with open(path, 'rb') as target: + existing_content = target.read() + stat = os.stat(path) + existing_uid, existing_gid, existing_perms = ( + stat.st_uid, stat.st_gid, stat.st_mode + ) + except Exception: + pass + if content != existing_content: + log("Writing file {} {}:{} {:o}".format(path, owner, group, perms), + level=DEBUG) + with open(path, 'wb') as target: + os.fchown(target.fileno(), uid, gid) + os.fchmod(target.fileno(), perms) + if six.PY3 and isinstance(content, six.string_types): + content = content.encode('UTF-8') + target.write(content) + return + # the contents were the same, but we might still need to change the + # ownership or permissions. + if existing_uid != uid: + log("Changing uid on already existing content: {} -> {}" + .format(existing_uid, uid), level=DEBUG) + os.chown(path, uid, -1) + if existing_gid != gid: + log("Changing gid on already existing content: {} -> {}" + .format(existing_gid, gid), level=DEBUG) + os.chown(path, -1, gid) + if existing_perms != perms: + log("Changing permissions on existing content: {} -> {}" + .format(existing_perms, perms), level=DEBUG) + os.chmod(path, perms) + + +def fstab_remove(mp): + """Remove the given mountpoint entry from /etc/fstab""" + return Fstab.remove_by_mountpoint(mp) + + +def fstab_add(dev, mp, fs, options=None): + """Adds the given device entry to the /etc/fstab file""" + return Fstab.add(dev, mp, fs, options=options) + + +def mount(device, mountpoint, options=None, persist=False, filesystem="ext3"): + """Mount a filesystem at a particular mountpoint""" + cmd_args = ['mount'] + if options is not None: + cmd_args.extend(['-o', options]) + cmd_args.extend([device, mountpoint]) + try: + subprocess.check_output(cmd_args) + except subprocess.CalledProcessError as e: + log('Error mounting {} at {}\n{}'.format(device, mountpoint, e.output)) + return False + + if persist: + return fstab_add(device, mountpoint, filesystem, options=options) + return True + + +def umount(mountpoint, persist=False): + """Unmount a filesystem""" + cmd_args = ['umount', mountpoint] + try: + subprocess.check_output(cmd_args) + except subprocess.CalledProcessError as e: + log('Error unmounting {}\n{}'.format(mountpoint, e.output)) + return False + + if persist: + return fstab_remove(mountpoint) + return True + + +def mounts(): + """Get a list of all mounted volumes as [[mountpoint,device],[...]]""" + with open('/proc/mounts') as f: + # [['/mount/point','/dev/path'],[...]] + system_mounts = [m[1::-1] for m in [l.strip().split() + for l in f.readlines()]] + return system_mounts + + +def fstab_mount(mountpoint): + """Mount filesystem using fstab""" + cmd_args = ['mount', mountpoint] + try: + subprocess.check_output(cmd_args) + except subprocess.CalledProcessError as e: + log('Error unmounting {}\n{}'.format(mountpoint, e.output)) + return False + return True + + +def file_hash(path, 
+    """Generate a hash checksum of the contents of 'path' or None if not found.
+
+    :param str hash_type: Any hash algorithm supported by :mod:`hashlib`,
+                          such as md5, sha1, sha256, sha512, etc.
+    """
+    if os.path.exists(path):
+        h = getattr(hashlib, hash_type)()
+        with open(path, 'rb') as source:
+            h.update(source.read())
+        return h.hexdigest()
+    else:
+        return None
+
+
+def path_hash(path):
+    """Generate a hash checksum of all files matching 'path'. Standard
+    wildcards like '*' and '?' are supported, see documentation for the 'glob'
+    module for more information.
+
+    :return: dict: A { filename: hash } dictionary for all matched files.
+                   Empty if none found.
+    """
+    return {
+        filename: file_hash(filename)
+        for filename in glob.iglob(path)
+    }
+
+
+def check_hash(path, checksum, hash_type='md5'):
+    """Validate a file using a cryptographic checksum.
+
+    :param str checksum: Value of the checksum used to validate the file.
+    :param str hash_type: Hash algorithm used to generate `checksum`.
+                          Can be any hash algorithm supported by :mod:`hashlib`,
+                          such as md5, sha1, sha256, sha512, etc.
+    :raises ChecksumError: If the file fails the checksum
+
+    """
+    actual_checksum = file_hash(path, hash_type)
+    if checksum != actual_checksum:
+        raise ChecksumError("'%s' != '%s'" % (checksum, actual_checksum))
+
+
+class ChecksumError(ValueError):
+    """A class derived from ValueError to indicate the checksum failed."""
+    pass
+
+
+def restart_on_change(restart_map, stopstart=False, restart_functions=None):
+    """Restart services based on configuration files changing
+
+    This function is used as a decorator, for example::
+
+        @restart_on_change({
+            '/etc/ceph/ceph.conf': [ 'cinder-api', 'cinder-volume' ],
+            '/etc/apache/sites-enabled/*': [ 'apache2' ]
+        })
+        def config_changed():
+            pass  # your code here
+
+    In this example, the cinder-api and cinder-volume services
+    would be restarted if /etc/ceph/ceph.conf is changed by the
+    config_changed function. The apache2 service would be
+    restarted if any file matching the pattern got changed, created
+    or removed. Standard wildcards are supported, see documentation
+    for the 'glob' module for more information.
+
+    @param restart_map: {path_file_name: [service_name, ...]}
+    @param stopstart: DEFAULT false; whether to stop and start, or restart
+    @param restart_functions: nonstandard functions to use to restart services
+                              {svc: func, ...}
+    @returns result from decorated function
+    """
+    def wrap(f):
+        @functools.wraps(f)
+        def wrapped_f(*args, **kwargs):
+            return restart_on_change_helper(
+                (lambda: f(*args, **kwargs)), restart_map, stopstart,
+                restart_functions)
+        return wrapped_f
+    return wrap
+
+
+def restart_on_change_helper(lambda_f, restart_map, stopstart=False,
+                             restart_functions=None):
+    """Helper function to perform the restart_on_change function.
+
+    This is provided for decorators to restart services if files described
+    in the restart_map have changed after an invocation of lambda_f().
+
+    @param lambda_f: function to call.
+    @param restart_map: {file: [service, ...]}
+    @param stopstart: whether to stop, start or restart a service
+    @param restart_functions: nonstandard functions to use to restart services
+                              {svc: func, ...}
+    @returns result of lambda_f()
+    """
+    if restart_functions is None:
+        restart_functions = {}
+    checksums = {path: path_hash(path) for path in restart_map}
+    r = lambda_f()
+    # create a list of lists of the services to restart
+    restarts = [restart_map[path]
+                for path in restart_map
+                if path_hash(path) != checksums[path]]
+    # create a flat list of ordered services without duplicates from lists
+    services_list = list(OrderedDict.fromkeys(itertools.chain(*restarts)))
+    if services_list:
+        actions = ('stop', 'start') if stopstart else ('restart',)
+        for service_name in services_list:
+            if service_name in restart_functions:
+                restart_functions[service_name](service_name)
+            else:
+                for action in actions:
+                    service(action, service_name)
+    return r
+
+
+def pwgen(length=None):
+    """Generate a random password."""
+    if length is None:
+        # A weak PRNG is acceptable here: it only picks the length,
+        # not the password itself.
+        length = random.choice(range(35, 45))
+    alphanumeric_chars = [
+        l for l in (string.ascii_letters + string.digits)
+        if l not in 'l0QD1vAEIOUaeiou']
+    # Use a crypto-friendly PRNG (e.g. /dev/urandom) for making the
+    # actual password
+    random_generator = random.SystemRandom()
+    random_chars = [
+        random_generator.choice(alphanumeric_chars) for _ in range(length)]
+    return ''.join(random_chars)
+
+
+def is_phy_iface(interface):
+    """Returns True if interface is not virtual, otherwise False."""
+    if interface:
+        sys_net = '/sys/class/net'
+        if os.path.isdir(sys_net):
+            for iface in glob.glob(os.path.join(sys_net, '*')):
+                if '/virtual/' in os.path.realpath(iface):
+                    continue
+
+                if interface == os.path.basename(iface):
+                    return True
+
+    return False
+
+
+def get_bond_master(interface):
+    """Returns bond master if interface is bond slave otherwise None.
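+
+    A hypothetical example (interface names are illustrative only):
+
+    get_bond_master('eth0')  # returns e.g. 'bond0', or None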
+
+    NOTE: the provided interface is expected to be physical
+    """
+    if interface:
+        iface_path = '/sys/class/net/%s' % (interface)
+        if os.path.exists(iface_path):
+            if '/virtual/' in os.path.realpath(iface_path):
+                return None
+
+            master = os.path.join(iface_path, 'master')
+            if os.path.exists(master):
+                master = os.path.realpath(master)
+                # make sure it is a bond master
+                if os.path.exists(os.path.join(master, 'bonding')):
+                    return os.path.basename(master)
+
+    return None
+
+
+def list_nics(nic_type=None):
+    """Return a list of nics of given type(s)"""
+    if isinstance(nic_type, six.string_types):
+        int_types = [nic_type]
+    else:
+        int_types = nic_type
+
+    interfaces = []
+    if nic_type:
+        for int_type in int_types:
+            cmd = ['ip', 'addr', 'show', 'label', int_type + '*']
+            ip_output = subprocess.check_output(cmd).decode('UTF-8')
+            ip_output = ip_output.split('\n')
+            ip_output = (line for line in ip_output if line)
+            for line in ip_output:
+                if line.split()[1].startswith(int_type):
+                    matched = re.search('.*: (' + int_type +
+                                        r'[0-9]+\.[0-9]+)@.*', line)
+                    if matched:
+                        iface = matched.groups()[0]
+                    else:
+                        iface = line.split()[1].replace(":", "")
+
+                    if iface not in interfaces:
+                        interfaces.append(iface)
+    else:
+        cmd = ['ip', 'a']
+        ip_output = subprocess.check_output(cmd).decode('UTF-8').split('\n')
+        ip_output = (line.strip() for line in ip_output if line)
+
+        key = re.compile(r'^[0-9]+:\s+(.+):')
+        for line in ip_output:
+            matched = re.search(key, line)
+            if matched:
+                iface = matched.group(1)
+                iface = iface.partition("@")[0]
+                if iface not in interfaces:
+                    interfaces.append(iface)
+
+    return interfaces
+
+
+def set_nic_mtu(nic, mtu):
+    """Set the Maximum Transmission Unit (MTU) on a network interface."""
+    cmd = ['ip', 'link', 'set', nic, 'mtu', mtu]
+    subprocess.check_call(cmd)
+
+
+def get_nic_mtu(nic):
+    """Return the Maximum Transmission Unit (MTU) for a network interface."""
+    cmd = ['ip', 'addr', 'show', nic]
+    ip_output = subprocess.check_output(cmd).decode('UTF-8').split('\n')
+    mtu = ""
+    for line in ip_output:
+        words = line.split()
+        if 'mtu' in words:
+            mtu = words[words.index("mtu") + 1]
+    return mtu
+
+
+def get_nic_hwaddr(nic):
+    """Return the Media Access Control (MAC) for a network interface."""
+    cmd = ['ip', '-o', '-0', 'addr', 'show', nic]
+    ip_output = subprocess.check_output(cmd).decode('UTF-8')
+    hwaddr = ""
+    words = ip_output.split()
+    if 'link/ether' in words:
+        hwaddr = words[words.index('link/ether') + 1]
+    return hwaddr
+
+
+@contextmanager
+def chdir(directory):
+    """Change the current working directory to a different directory for a
+    code block and return the previous directory after the block exits.
+    Useful to run commands from a specified directory.
+
+    :param str directory: The directory path to change to for this context.
+    """
+    cur = os.getcwd()
+    try:
+        yield os.chdir(directory)
+    finally:
+        os.chdir(cur)
+
+
+def chownr(path, owner, group, follow_links=True, chowntopdir=False):
+    """Recursively change user and group ownership of files and directories
+    in given path. Doesn't chown path itself by default, only its children.
+
+    :param str path: The string path to start changing ownership.
+    :param str owner: The owner string to use when looking up the uid.
+    :param str group: The group string to use when looking up the gid.
+    :param bool follow_links: Also follow and chown links if True
+    :param bool chowntopdir: Also chown path itself if True
+    """
+    uid = pwd.getpwnam(owner).pw_uid
+    gid = grp.getgrnam(group).gr_gid
+    if follow_links:
+        chown = os.chown
+    else:
+        chown = os.lchown
+
+    if chowntopdir:
+        broken_symlink = os.path.lexists(path) and not os.path.exists(path)
+        if not broken_symlink:
+            chown(path, uid, gid)
+    for root, dirs, files in os.walk(path, followlinks=follow_links):
+        for name in dirs + files:
+            full = os.path.join(root, name)
+            broken_symlink = os.path.lexists(full) and not os.path.exists(full)
+            if not broken_symlink:
+                chown(full, uid, gid)
+
+
+def lchownr(path, owner, group):
+    """Recursively change user and group ownership of files and directories
+    in a given path, not following symbolic links. See the documentation for
+    'os.lchown' for more information.
+
+    :param str path: The string path to start changing ownership.
+    :param str owner: The owner string to use when looking up the uid.
+    :param str group: The group string to use when looking up the gid.
+    """
+    chownr(path, owner, group, follow_links=False)
+
+
+def owner(path):
+    """Returns a tuple containing the username & groupname owning the path.
+
+    :param str path: the string path to retrieve the ownership
+    :return tuple(str, str): A (username, groupname) tuple containing the
+                             name of the user and group owning the path.
+    :raises OSError: if the specified path does not exist
+    """
+    stat = os.stat(path)
+    username = pwd.getpwuid(stat.st_uid)[0]
+    groupname = grp.getgrgid(stat.st_gid)[0]
+    return username, groupname
+
+
+def get_total_ram():
+    """The total amount of system RAM in bytes.
+
+    This is what is reported by the OS, and may be overcommitted when
+    there are multiple containers hosted on the same machine.
+    """
+    with open('/proc/meminfo', 'r') as f:
+        for line in f.readlines():
+            if line:
+                key, value, unit = line.split()
+                if key == 'MemTotal:':
+                    assert unit == 'kB', 'Unknown unit'
+                    return int(value) * 1024  # Classic, not KiB.
+        raise NotImplementedError()
+
+
+UPSTART_CONTAINER_TYPE = '/run/container_type'
+
+
+def is_container():
+    """Determine whether unit is running in a container
+
+    @return: boolean indicating if unit is in a container
+    """
+    if init_is_systemd():
+        # Detect using systemd-detect-virt
+        return subprocess.call(['systemd-detect-virt',
+                                '--container']) == 0
+    else:
+        # Detect using upstart container file marker
+        return os.path.exists(UPSTART_CONTAINER_TYPE)
+
+
+def add_to_updatedb_prunepath(path, updatedb_path=UPDATEDB_PATH):
+    """Adds the specified path to mlocate's updatedb.conf PRUNEPATHS list.
+
+    This method has no effect if the path specified by updatedb_path does not
+    exist or is not a file.
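+
+    A hypothetical example (the path is illustrative only):
+
+    add_to_updatedb_prunepath('/srv/ceph')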
+ + @param path: string the path to add to the updatedb.conf PRUNEPATHS value + @param updatedb_path: the path the updatedb.conf file + """ + if not os.path.exists(updatedb_path) or os.path.isdir(updatedb_path): + # If the updatedb.conf file doesn't exist then don't attempt to update + # the file as the package providing mlocate may not be installed on + # the local system + return + + with open(updatedb_path, 'r+') as f_id: + updatedb_text = f_id.read() + output = updatedb(updatedb_text, path) + f_id.seek(0) + f_id.write(output) + f_id.truncate() + + +def updatedb(updatedb_text, new_path): + lines = [line for line in updatedb_text.split("\n")] + for i, line in enumerate(lines): + if line.startswith("PRUNEPATHS="): + paths_line = line.split("=")[1].replace('"', '') + paths = paths_line.split(" ") + if new_path not in paths: + paths.append(new_path) + lines[i] = 'PRUNEPATHS="{}"'.format(' '.join(paths)) + output = "\n".join(lines) + return output + + +def modulo_distribution(modulo=3, wait=30, non_zero_wait=False): + """ Modulo distribution + + This helper uses the unit number, a modulo value and a constant wait time + to produce a calculated wait time distribution. This is useful in large + scale deployments to distribute load during an expensive operation such as + service restarts. + + If you have 1000 nodes that need to restart 100 at a time 1 minute at a + time: + + time.wait(modulo_distribution(modulo=100, wait=60)) + restart() + + If you need restarts to happen serially set modulo to the exact number of + nodes and set a high constant wait time: + + time.wait(modulo_distribution(modulo=10, wait=120)) + restart() + + @param modulo: int The modulo number creates the group distribution + @param wait: int The constant time wait value + @param non_zero_wait: boolean Override unit % modulo == 0, + return modulo * wait. Used to avoid collisions with + leader nodes which are often given priority. + @return: int Calculated time to wait for unit operation + """ + unit_number = int(local_unit().split('/')[1]) + calculated_wait_time = (unit_number % modulo) * wait + if non_zero_wait and calculated_wait_time == 0: + return modulo * wait + else: + return calculated_wait_time + + +def install_ca_cert(ca_cert, name=None): + """ + Install the given cert as a trusted CA. + + The ``name`` is the stem of the filename where the cert is written, and if + not provided, it will default to ``juju-{charm_name}``. + + If the cert is empty or None, or is unchanged, nothing is done. + """ + if not ca_cert: + return + if not isinstance(ca_cert, bytes): + ca_cert = ca_cert.encode('utf8') + if not name: + name = 'juju-{}'.format(charm_name()) + cert_file = '/usr/local/share/ca-certificates/{}.crt'.format(name) + new_hash = hashlib.md5(ca_cert).hexdigest() + if file_hash(cert_file) == new_hash: + return + log("Installing new CA cert at: {}".format(cert_file), level=INFO) + write_file(cert_file, ca_cert) + subprocess.check_call(['update-ca-certificates', '--fresh']) + + +def get_system_env(key, default=None): + """Get data from system environment as represented in ``/etc/environment``. + + :param key: Key to look up + :type key: str + :param default: Value to return if key is not found + :type default: any + :returns: Value for key if found or contents of default parameter + :rtype: any + :raises: subprocess.CalledProcessError + """ + env_file = '/etc/environment' + # use the shell and env(1) to parse the global environments file. 
This is + # done to get the correct result even if the user has shell variable + # substitutions or other shell logic in that file. + output = subprocess.check_output( + ['env', '-i', '/bin/bash', '-c', + 'set -a && source {} && env'.format(env_file)], + universal_newlines=True) + for k, v in (line.split('=', 1) + for line in output.splitlines() if '=' in line): + if k == key: + return v + else: + return default diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/core/host_factory/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/core/host_factory/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/core/host_factory/centos.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/core/host_factory/centos.py new file mode 100644 index 0000000000000000000000000000000000000000..7781a3961f23ce0b161ae08b11710466af8de814 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/core/host_factory/centos.py @@ -0,0 +1,72 @@ +import subprocess +import yum +import os + +from charmhelpers.core.strutils import BasicStringComparator + + +class CompareHostReleases(BasicStringComparator): + """Provide comparisons of Host releases. + + Use in the form of + + if CompareHostReleases(release) > 'trusty': + # do something with mitaka + """ + + def __init__(self, item): + raise NotImplementedError( + "CompareHostReleases() is not implemented for CentOS") + + +def service_available(service_name): + # """Determine whether a system service is available.""" + if os.path.isdir('/run/systemd/system'): + cmd = ['systemctl', 'is-enabled', service_name] + else: + cmd = ['service', service_name, 'is-enabled'] + return subprocess.call(cmd) == 0 + + +def add_new_group(group_name, system_group=False, gid=None): + cmd = ['groupadd'] + if gid: + cmd.extend(['--gid', str(gid)]) + if system_group: + cmd.append('-r') + cmd.append(group_name) + subprocess.check_call(cmd) + + +def lsb_release(): + """Return /etc/os-release in a dict.""" + d = {} + with open('/etc/os-release', 'r') as lsb: + for l in lsb: + s = l.split('=') + if len(s) != 2: + continue + d[s[0].strip()] = s[1].strip() + return d + + +def cmp_pkgrevno(package, revno, pkgcache=None): + """Compare supplied revno with the revno of the installed package. + + * 1 => Installed revno is greater than supplied arg + * 0 => Installed revno is the same as supplied arg + * -1 => Installed revno is less than supplied arg + + This function imports YumBase function if the pkgcache argument + is None. 
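+
+    A hypothetical example (the package name is illustrative only):
+
+    cmp_pkgrevno('httpd', '2.4.6')  # 1 if the installed httpd is newer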
+ """ + if not pkgcache: + y = yum.YumBase() + packages = y.doPackageLists() + pkgcache = {i.Name: i.version for i in packages['installed']} + pkg = pkgcache[package] + if pkg > revno: + return 1 + if pkg < revno: + return -1 + return 0 diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/core/host_factory/ubuntu.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/core/host_factory/ubuntu.py new file mode 100644 index 0000000000000000000000000000000000000000..3edc0687275b29762b45ebf3fe1b045bc9f568b2 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/core/host_factory/ubuntu.py @@ -0,0 +1,116 @@ +import subprocess + +from charmhelpers.core.hookenv import cached +from charmhelpers.core.strutils import BasicStringComparator + + +UBUNTU_RELEASES = ( + 'lucid', + 'maverick', + 'natty', + 'oneiric', + 'precise', + 'quantal', + 'raring', + 'saucy', + 'trusty', + 'utopic', + 'vivid', + 'wily', + 'xenial', + 'yakkety', + 'zesty', + 'artful', + 'bionic', + 'cosmic', + 'disco', + 'eoan', + 'focal' +) + + +class CompareHostReleases(BasicStringComparator): + """Provide comparisons of Ubuntu releases. + + Use in the form of + + if CompareHostReleases(release) > 'trusty': + # do something with mitaka + """ + _list = UBUNTU_RELEASES + + +def service_available(service_name): + """Determine whether a system service is available""" + try: + subprocess.check_output( + ['service', service_name, 'status'], + stderr=subprocess.STDOUT).decode('UTF-8') + except subprocess.CalledProcessError as e: + return b'unrecognized service' not in e.output + else: + return True + + +def add_new_group(group_name, system_group=False, gid=None): + cmd = ['addgroup'] + if gid: + cmd.extend(['--gid', str(gid)]) + if system_group: + cmd.append('--system') + else: + cmd.extend([ + '--group', + ]) + cmd.append(group_name) + subprocess.check_call(cmd) + + +def lsb_release(): + """Return /etc/lsb-release in a dict""" + d = {} + with open('/etc/lsb-release', 'r') as lsb: + for l in lsb: + k, v = l.split('=') + d[k.strip()] = v.strip() + return d + + +def get_distrib_codename(): + """Return the codename of the distribution + :returns: The codename + :rtype: str + """ + return lsb_release()['DISTRIB_CODENAME'].lower() + + +def cmp_pkgrevno(package, revno, pkgcache=None): + """Compare supplied revno with the revno of the installed package. + + * 1 => Installed revno is greater than supplied arg + * 0 => Installed revno is the same as supplied arg + * -1 => Installed revno is less than supplied arg + + This function imports apt_cache function from charmhelpers.fetch if + the pkgcache argument is None. Be sure to add charmhelpers.fetch if + you call this function, or pass an apt_pkg.Cache() instance. + """ + from charmhelpers.fetch import apt_pkg + if not pkgcache: + from charmhelpers.fetch import apt_cache + pkgcache = apt_cache() + pkg = pkgcache[package] + return apt_pkg.version_compare(pkg.current_ver.ver_str, revno) + + +@cached +def arch(): + """Return the package architecture as a string. 
+ + :returns: the architecture + :rtype: str + :raises: subprocess.CalledProcessError if dpkg command fails + """ + return subprocess.check_output( + ['dpkg', '--print-architecture'] + ).rstrip().decode('UTF-8') diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/core/hugepage.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/core/hugepage.py new file mode 100644 index 0000000000000000000000000000000000000000..54b5b5e2fcf81eea5f2ebfbceb620ea68d725584 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/core/hugepage.py @@ -0,0 +1,69 @@ +# -*- coding: utf-8 -*- + +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import yaml +from charmhelpers.core import fstab +from charmhelpers.core import sysctl +from charmhelpers.core.host import ( + add_group, + add_user_to_group, + fstab_mount, + mkdir, +) +from charmhelpers.core.strutils import bytes_from_string +from subprocess import check_output + + +def hugepage_support(user, group='hugetlb', nr_hugepages=256, + max_map_count=65536, mnt_point='/run/hugepages/kvm', + pagesize='2MB', mount=True, set_shmmax=False): + """Enable hugepages on system. + + Args: + user (str) -- Username to allow access to hugepages to + group (str) -- Group name to own hugepages + nr_hugepages (int) -- Number of pages to reserve + max_map_count (int) -- Number of Virtual Memory Areas a process can own + mnt_point (str) -- Directory to mount hugepages on + pagesize (str) -- Size of hugepages + mount (bool) -- Whether to Mount hugepages + """ + group_info = add_group(group) + gid = group_info.gr_gid + add_user_to_group(user, group) + if max_map_count < 2 * nr_hugepages: + max_map_count = 2 * nr_hugepages + sysctl_settings = { + 'vm.nr_hugepages': nr_hugepages, + 'vm.max_map_count': max_map_count, + 'vm.hugetlb_shm_group': gid, + } + if set_shmmax: + shmmax_current = int(check_output(['sysctl', '-n', 'kernel.shmmax'])) + shmmax_minsize = bytes_from_string(pagesize) * nr_hugepages + if shmmax_minsize > shmmax_current: + sysctl_settings['kernel.shmmax'] = shmmax_minsize + sysctl.create(yaml.dump(sysctl_settings), '/etc/sysctl.d/10-hugepage.conf') + mkdir(mnt_point, owner='root', group='root', perms=0o755, force=False) + lfstab = fstab.Fstab() + fstab_entry = lfstab.get_entry_by_attr('mountpoint', mnt_point) + if fstab_entry: + lfstab.remove_entry(fstab_entry) + entry = lfstab.Entry('nodev', mnt_point, 'hugetlbfs', + 'mode=1770,gid={},pagesize={}'.format(gid, pagesize), 0, 0) + lfstab.add_entry(entry) + if mount: + fstab_mount(mnt_point) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/core/kernel.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/core/kernel.py new file mode 100644 index 0000000000000000000000000000000000000000..e01f4f8ba73ee0d5ab7553740c2590a50e42f96d --- /dev/null +++ 
b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/core/kernel.py @@ -0,0 +1,72 @@ +#!/usr/bin/env python +# -*- coding: utf-8 -*- + +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import re +import subprocess + +from charmhelpers.osplatform import get_platform +from charmhelpers.core.hookenv import ( + log, + INFO +) + +__platform__ = get_platform() +if __platform__ == "ubuntu": + from charmhelpers.core.kernel_factory.ubuntu import ( # NOQA:F401 + persistent_modprobe, + update_initramfs, + ) # flake8: noqa -- ignore F401 for this import +elif __platform__ == "centos": + from charmhelpers.core.kernel_factory.centos import ( # NOQA:F401 + persistent_modprobe, + update_initramfs, + ) # flake8: noqa -- ignore F401 for this import + +__author__ = "Jorge Niedbalski " + + +def modprobe(module, persist=True): + """Load a kernel module and configure for auto-load on reboot.""" + cmd = ['modprobe', module] + + log('Loading kernel module %s' % module, level=INFO) + + subprocess.check_call(cmd) + if persist: + persistent_modprobe(module) + + +def rmmod(module, force=False): + """Remove a module from the linux kernel""" + cmd = ['rmmod'] + if force: + cmd.append('-f') + cmd.append(module) + log('Removing kernel module %s' % module, level=INFO) + return subprocess.check_call(cmd) + + +def lsmod(): + """Shows what kernel modules are currently loaded""" + return subprocess.check_output(['lsmod'], + universal_newlines=True) + + +def is_module_loaded(module): + """Checks if a kernel module is already loaded""" + matches = re.findall('^%s[ ]+' % module, lsmod(), re.M) + return len(matches) > 0 diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/core/kernel_factory/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/core/kernel_factory/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/core/kernel_factory/centos.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/core/kernel_factory/centos.py new file mode 100644 index 0000000000000000000000000000000000000000..1c402c1157900ff1ad5c6c296a409c9e8fb96d2b --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/core/kernel_factory/centos.py @@ -0,0 +1,17 @@ +import subprocess +import os + + +def persistent_modprobe(module): + """Load a kernel module and configure for auto-load on reboot.""" + if not os.path.exists('/etc/rc.modules'): + open('/etc/rc.modules', 'a') + os.chmod('/etc/rc.modules', 111) + with open('/etc/rc.modules', 'r+') as modules: + if module not in modules.read(): + modules.write('modprobe %s\n' % module) + + +def update_initramfs(version='all'): + """Updates an initramfs image.""" + return subprocess.check_call(["dracut", "-f", version]) diff --git 
a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/core/kernel_factory/ubuntu.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/core/kernel_factory/ubuntu.py new file mode 100644 index 0000000000000000000000000000000000000000..3de372fd3df38fe151cf79243f129cb504516f22 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/core/kernel_factory/ubuntu.py @@ -0,0 +1,13 @@ +import subprocess + + +def persistent_modprobe(module): + """Load a kernel module and configure for auto-load on reboot.""" + with open('/etc/modules', 'r+') as modules: + if module not in modules.read(): + modules.write(module + "\n") + + +def update_initramfs(version='all'): + """Updates an initramfs image.""" + return subprocess.check_call(["update-initramfs", "-k", version, "-u"]) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/core/services/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/core/services/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..61fd074edc09de434859e48ae1b36baef0503708 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/core/services/__init__.py @@ -0,0 +1,16 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +from .base import * # NOQA +from .helpers import * # NOQA diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/core/services/base.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/core/services/base.py new file mode 100644 index 0000000000000000000000000000000000000000..179ad4f0c367dd6b13c10b201c3752d1c8daf05e --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/core/services/base.py @@ -0,0 +1,362 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import os +import json +from inspect import getargspec +from collections import Iterable, OrderedDict + +from charmhelpers.core import host +from charmhelpers.core import hookenv + + +__all__ = ['ServiceManager', 'ManagerCallback', + 'PortManagerCallback', 'open_ports', 'close_ports', 'manage_ports', + 'service_restart', 'service_stop'] + + +class ServiceManager(object): + def __init__(self, services=None): + """ + Register a list of services, given their definitions. 
+
+        Service definitions are dicts in the following formats (all keys except
+        'service' are optional)::
+
+            {
+                "service": <service name>,
+                "required_data": <list of required data contexts>,
+                "provided_data": <list of provided data contexts>,
+                "data_ready": <one or more callbacks>,
+                "data_lost": <one or more callbacks>,
+                "start": <one or more callbacks>,
+                "stop": <one or more callbacks>,
+                "ports": <list of ports to manage>,
+            }
+
+        The 'required_data' list should contain dicts of required data (or
+        dependency managers that act like dicts and know how to collect the data).
+        Only when all items in the 'required_data' list are populated are the
+        lists of 'data_ready' and 'start' callbacks executed. See `is_ready()`
+        for more information.
+
+        The 'provided_data' list should contain relation data providers, most likely
+        a subclass of :class:`charmhelpers.core.services.helpers.RelationContext`,
+        that will indicate a set of data to set on a given relation.
+
+        The 'data_ready' value should be either a single callback, or a list of
+        callbacks, to be called when all items in 'required_data' pass `is_ready()`.
+        Each callback will be called with the service name as the only parameter.
+        After all of the 'data_ready' callbacks are called, the 'start' callbacks
+        are fired.
+
+        The 'data_lost' value should be either a single callback, or a list of
+        callbacks, to be called when a 'required_data' item no longer passes
+        `is_ready()`. Each callback will be called with the service name as the
+        only parameter. After all of the 'data_lost' callbacks are called,
+        the 'stop' callbacks are fired.
+
+        The 'start' value should be either a single callback, or a list of
+        callbacks, to be called when starting the service, after the 'data_ready'
+        callbacks are complete. Each callback will be called with the service
+        name as the only parameter. This defaults to
+        `[host.service_start, services.open_ports]`.
+
+        The 'stop' value should be either a single callback, or a list of
+        callbacks, to be called when stopping the service. If the service is
+        being stopped because it no longer has all of its 'required_data', this
+        will be called after all of the 'data_lost' callbacks are complete.
+        Each callback will be called with the service name as the only parameter.
+        This defaults to `[services.close_ports, host.service_stop]`.
+
+        The 'ports' value should be a list of ports to manage. The default
+        'start' handler will open the ports after the service is started,
+        and the default 'stop' handler will close the ports prior to stopping
+        the service.
+
+        Examples:
+
+        The following registers an Upstart service called bingod that depends on
+        a mongodb relation and which runs a custom `db_migrate` function prior to
+        restarting the service, and a Runit service called spadesd::
+
+            manager = services.ServiceManager([
+                {
+                    'service': 'bingod',
+                    'ports': [80, 443],
+                    'required_data': [MongoRelation(), config(), {'my': 'data'}],
+                    'data_ready': [
+                        services.template(source='bingod.conf'),
+                        services.template(source='bingod.ini',
+                                          target='/etc/bingod.ini',
+                                          owner='bingo', perms=0400),
+                    ],
+                },
+                {
+                    'service': 'spadesd',
+                    'data_ready': services.template(source='spadesd_run.j2',
+                                                    target='/etc/sv/spadesd/run',
+                                                    perms=0555),
+                    'start': runit_start,
+                    'stop': runit_stop,
+                },
+            ])
+            manager.manage()
+        """
+        self._ready_file = os.path.join(hookenv.charm_dir(), 'READY-SERVICES.json')
+        self._ready = None
+        self.services = OrderedDict()
+        for service in services or []:
+            service_name = service['service']
+            self.services[service_name] = service
+
+    def manage(self):
+        """
+        Handle the current hook by doing The Right Thing with the registered services.
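+
+        Typically invoked at the end of a hook script, e.g.::
+
+            manager = services.ServiceManager([...])
+            manager.manage()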
+ """ + hookenv._run_atstart() + try: + hook_name = hookenv.hook_name() + if hook_name == 'stop': + self.stop_services() + else: + self.reconfigure_services() + self.provide_data() + except SystemExit as x: + if x.code is None or x.code == 0: + hookenv._run_atexit() + hookenv._run_atexit() + + def provide_data(self): + """ + Set the relation data for each provider in the ``provided_data`` list. + + A provider must have a `name` attribute, which indicates which relation + to set data on, and a `provide_data()` method, which returns a dict of + data to set. + + The `provide_data()` method can optionally accept two parameters: + + * ``remote_service`` The name of the remote service that the data will + be provided to. The `provide_data()` method will be called once + for each connected service (not unit). This allows the method to + tailor its data to the given service. + * ``service_ready`` Whether or not the service definition had all of + its requirements met, and thus the ``data_ready`` callbacks run. + + Note that the ``provided_data`` methods are now called **after** the + ``data_ready`` callbacks are run. This gives the ``data_ready`` callbacks + a chance to generate any data necessary for the providing to the remote + services. + """ + for service_name, service in self.services.items(): + service_ready = self.is_ready(service_name) + for provider in service.get('provided_data', []): + for relid in hookenv.relation_ids(provider.name): + units = hookenv.related_units(relid) + if not units: + continue + remote_service = units[0].split('/')[0] + argspec = getargspec(provider.provide_data) + if len(argspec.args) > 1: + data = provider.provide_data(remote_service, service_ready) + else: + data = provider.provide_data() + if data: + hookenv.relation_set(relid, data) + + def reconfigure_services(self, *service_names): + """ + Update all files for one or more registered services, and, + if ready, optionally restart them. + + If no service names are given, reconfigures all registered services. + """ + for service_name in service_names or self.services.keys(): + if self.is_ready(service_name): + self.fire_event('data_ready', service_name) + self.fire_event('start', service_name, default=[ + service_restart, + manage_ports]) + self.save_ready(service_name) + else: + if self.was_ready(service_name): + self.fire_event('data_lost', service_name) + self.fire_event('stop', service_name, default=[ + manage_ports, + service_stop]) + self.save_lost(service_name) + + def stop_services(self, *service_names): + """ + Stop one or more registered services, by name. + + If no service names are given, stops all registered services. + """ + for service_name in service_names or self.services.keys(): + self.fire_event('stop', service_name, default=[ + manage_ports, + service_stop]) + + def get_service(self, service_name): + """ + Given the name of a registered service, return its service definition. + """ + service = self.services.get(service_name) + if not service: + raise KeyError('Service not registered: %s' % service_name) + return service + + def fire_event(self, event_name, service_name, default=None): + """ + Fire a data_ready, data_lost, start, or stop event on a given service. 
+ """ + service = self.get_service(service_name) + callbacks = service.get(event_name, default) + if not callbacks: + return + if not isinstance(callbacks, Iterable): + callbacks = [callbacks] + for callback in callbacks: + if isinstance(callback, ManagerCallback): + callback(self, service_name, event_name) + else: + callback(service_name) + + def is_ready(self, service_name): + """ + Determine if a registered service is ready, by checking its 'required_data'. + + A 'required_data' item can be any mapping type, and is considered ready + if `bool(item)` evaluates as True. + """ + service = self.get_service(service_name) + reqs = service.get('required_data', []) + return all(bool(req) for req in reqs) + + def _load_ready_file(self): + if self._ready is not None: + return + if os.path.exists(self._ready_file): + with open(self._ready_file) as fp: + self._ready = set(json.load(fp)) + else: + self._ready = set() + + def _save_ready_file(self): + if self._ready is None: + return + with open(self._ready_file, 'w') as fp: + json.dump(list(self._ready), fp) + + def save_ready(self, service_name): + """ + Save an indicator that the given service is now data_ready. + """ + self._load_ready_file() + self._ready.add(service_name) + self._save_ready_file() + + def save_lost(self, service_name): + """ + Save an indicator that the given service is no longer data_ready. + """ + self._load_ready_file() + self._ready.discard(service_name) + self._save_ready_file() + + def was_ready(self, service_name): + """ + Determine if the given service was previously data_ready. + """ + self._load_ready_file() + return service_name in self._ready + + +class ManagerCallback(object): + """ + Special case of a callback that takes the `ServiceManager` instance + in addition to the service name. + + Subclasses should implement `__call__` which should accept three parameters: + + * `manager` The `ServiceManager` instance + * `service_name` The name of the service it's being triggered for + * `event_name` The name of the event that this callback is handling + """ + def __call__(self, manager, service_name, event_name): + raise NotImplementedError() + + +class PortManagerCallback(ManagerCallback): + """ + Callback class that will open or close ports, for use as either + a start or stop action. + """ + def __call__(self, manager, service_name, event_name): + service = manager.get_service(service_name) + # turn this generator into a list, + # as we'll be going over it multiple times + new_ports = list(service.get('ports', [])) + port_file = os.path.join(hookenv.charm_dir(), '.{}.ports'.format(service_name)) + if os.path.exists(port_file): + with open(port_file) as fp: + old_ports = fp.read().split(',') + for old_port in old_ports: + if bool(old_port) and not self.ports_contains(old_port, new_ports): + hookenv.close_port(old_port) + with open(port_file, 'w') as fp: + fp.write(','.join(str(port) for port in new_ports)) + for port in new_ports: + # A port is either a number or 'ICMP' + protocol = 'TCP' + if str(port).upper() == 'ICMP': + protocol = 'ICMP' + if event_name == 'start': + hookenv.open_port(port, protocol) + elif event_name == 'stop': + hookenv.close_port(port, protocol) + + def ports_contains(self, port, ports): + if not bool(port): + return False + if str(port).upper() != 'ICMP': + port = int(port) + return port in ports + + +def service_stop(service_name): + """ + Wrapper around host.service_stop to prevent spurious "unknown service" + messages in the logs. 
+    """
+    if host.service_running(service_name):
+        host.service_stop(service_name)
+
+
+def service_restart(service_name):
+    """
+    Wrapper around host.service_restart to prevent spurious "unknown service"
+    messages in the logs.
+    """
+    if host.service_available(service_name):
+        if host.service_running(service_name):
+            host.service_restart(service_name)
+        else:
+            host.service_start(service_name)
+
+
+# Convenience aliases
+open_ports = close_ports = manage_ports = PortManagerCallback()
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/core/services/helpers.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/core/services/helpers.py
new file mode 100644
index 0000000000000000000000000000000000000000..3e6e30d2fe0d9c73ffdc42d70b77e864b6379c53
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/core/services/helpers.py
@@ -0,0 +1,290 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import os
+import yaml
+
+from charmhelpers.core import hookenv
+from charmhelpers.core import host
+from charmhelpers.core import templating
+
+from charmhelpers.core.services.base import ManagerCallback
+
+
+__all__ = ['RelationContext', 'TemplateCallback',
+           'render_template', 'template']
+
+
+class RelationContext(dict):
+    """
+    Base class for a context generator that gets relation data from juju.
+
+    Subclasses must provide the attributes `name`, which is the name of the
+    interface of interest, `interface`, which is the type of the interface of
+    interest, and `required_keys`, which is the set of keys required for the
+    relation to be considered complete. The data for all interfaces matching
+    the `name` attribute that are complete will be used to populate the dictionary
+    values (see `get_data`, below).
+
+    The generated context will be namespaced under the relation :attr:`name`,
+    to prevent potential naming conflicts.
+
+    :param str name: Override the relation :attr:`name`, since it can vary from charm to charm
+    :param list additional_required_keys: Extend the list of :attr:`required_keys`
+    """
+    name = None
+    interface = None
+
+    def __init__(self, name=None, additional_required_keys=None):
+        if not hasattr(self, 'required_keys'):
+            self.required_keys = []
+
+        if name is not None:
+            self.name = name
+        if additional_required_keys:
+            self.required_keys.extend(additional_required_keys)
+        self.get_data()
+
+    def __bool__(self):
+        """
+        Returns True if all of the required_keys are available.
+        """
+        return self.is_ready()
+
+    __nonzero__ = __bool__
+
+    def __repr__(self):
+        return super(RelationContext, self).__repr__()
+
+    def is_ready(self):
+        """
+        Returns True if all of the `required_keys` are available from any units.
+        """
+        ready = len(self.get(self.name, [])) > 0
+        if not ready:
+            hookenv.log('Incomplete relation: {}'.format(self.__class__.__name__), hookenv.DEBUG)
+        return ready
+
+    def _is_ready(self, unit_data):
+        """
+        Helper method that tests a set of relation data and returns True if
+        all of the `required_keys` are present.
+        """
+        return set(unit_data.keys()).issuperset(set(self.required_keys))
+
+    def get_data(self):
+        """
+        Retrieve the relation data for each unit involved in a relation and,
+        if complete, store it in a list under `self[self.name]`. This
+        is automatically called when the RelationContext is instantiated.
+
+        The units are sorted lexicographically first by the service ID, then by
+        the unit ID. Thus, if an interface has two other services, 'db:1'
+        and 'db:2', with 'db:1' having two units, 'wordpress/0' and 'wordpress/1',
+        and 'db:2' having one unit, 'mediawiki/0', all of which have a complete
+        set of data, the relation data for the units will be stored in the
+        order: 'wordpress/0', 'wordpress/1', 'mediawiki/0'.
+
+        If you only care about a single unit on the relation, you can just
+        access it as `{{ interface[0]['key'] }}`. However, if you can at all
+        support multiple units on a relation, you should iterate over the list,
+        like::
+
+            {% for unit in interface -%}
+                {{ unit['key'] }}{% if not loop.last %},{% endif %}
+            {%- endfor %}
+
+        Note that since all sets of relation data from all related services and
+        units are in a single list, if you need to know which service or unit a
+        set of data came from, you'll need to extend this class to preserve
+        that information.
+        """
+        if not hookenv.relation_ids(self.name):
+            return
+
+        ns = self.setdefault(self.name, [])
+        for rid in sorted(hookenv.relation_ids(self.name)):
+            for unit in sorted(hookenv.related_units(rid)):
+                reldata = hookenv.relation_get(rid=rid, unit=unit)
+                if self._is_ready(reldata):
+                    ns.append(reldata)
+
+    def provide_data(self):
+        """
+        Return data to be relation_set for this interface.
+        """
+        return {}
+
+
+class MysqlRelation(RelationContext):
+    """
+    Relation context for the `mysql` interface.
+
+    :param str name: Override the relation :attr:`name`, since it can vary from charm to charm
+    :param list additional_required_keys: Extend the list of :attr:`required_keys`
+    """
+    name = 'db'
+    interface = 'mysql'
+
+    def __init__(self, *args, **kwargs):
+        self.required_keys = ['host', 'user', 'password', 'database']
+        RelationContext.__init__(self, *args, **kwargs)
+
+
+class HttpRelation(RelationContext):
+    """
+    Relation context for the `http` interface.
+
+    :param str name: Override the relation :attr:`name`, since it can vary from charm to charm
+    :param list additional_required_keys: Extend the list of :attr:`required_keys`
+    """
+    name = 'website'
+    interface = 'http'
+
+    def __init__(self, *args, **kwargs):
+        self.required_keys = ['host', 'port']
+        RelationContext.__init__(self, *args, **kwargs)
+
+    def provide_data(self):
+        return {
+            'host': hookenv.unit_get('private-address'),
+            'port': 80,
+        }
+
+
+class RequiredConfig(dict):
+    """
+    Data context that loads config options with one or more mandatory options.
+
+    Once the required options have been changed from their default values, all
+    config options will be available, namespaced under `config` to prevent
+    potential naming conflicts (for example, between a config option and a
+    relation property).
+
+    :param list *args: List of options that must be changed from their default values.
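+
+    An illustrative sketch (the option names here are hypothetical, not part
+    of this charm)::
+
+        required = RequiredConfig('domain', 'admin-password')
+        if required:
+            hookenv.log('All required options have been set')
+            admin_password = required['config']['admin-password']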
+    """
+
+    def __init__(self, *args):
+        self.required_options = args
+        self['config'] = hookenv.config()
+        with open(os.path.join(hookenv.charm_dir(), 'config.yaml')) as fp:
+            self.config = yaml.safe_load(fp).get('options', {})
+
+    def __bool__(self):
+        for option in self.required_options:
+            if option not in self['config']:
+                return False
+            current_value = self['config'][option]
+            default_value = self.config[option].get('default')
+            if current_value == default_value:
+                return False
+            if current_value in (None, '') and default_value in (None, ''):
+                return False
+        return True
+
+    def __nonzero__(self):
+        return self.__bool__()
+
+
+class StoredContext(dict):
+    """
+    A data context that always returns the data that it was first created with.
+
+    This is useful to do a one-time generation of things like passwords, that
+    will thereafter use the same value that was originally generated, instead
+    of generating a new value each time it is run.
+    """
+    def __init__(self, file_name, config_data):
+        """
+        If the file exists, populate `self` with the data from the file.
+        Otherwise, populate with the given data and persist it to the file.
+        """
+        if os.path.exists(file_name):
+            self.update(self.read_context(file_name))
+        else:
+            self.store_context(file_name, config_data)
+            self.update(config_data)
+
+    def store_context(self, file_name, config_data):
+        if not os.path.isabs(file_name):
+            file_name = os.path.join(hookenv.charm_dir(), file_name)
+        with open(file_name, 'w') as file_stream:
+            os.fchmod(file_stream.fileno(), 0o600)
+            yaml.dump(config_data, file_stream)
+
+    def read_context(self, file_name):
+        if not os.path.isabs(file_name):
+            file_name = os.path.join(hookenv.charm_dir(), file_name)
+        with open(file_name, 'r') as file_stream:
+            data = yaml.safe_load(file_stream)
+            if not data:
+                raise OSError("%s is empty" % file_name)
+            return data
+
+
+class TemplateCallback(ManagerCallback):
+    """
+    Callback class that will render a Jinja2 template, for use as a ready
+    action.
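+
+    An illustrative service-definition fragment (the file names are
+    hypothetical)::
+
+        'data_ready': [render_template(source='haproxy.cfg.j2',
+                                       target='/etc/haproxy/haproxy.cfg')]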
+ + :param str source: The template source file, relative to + `$CHARM_DIR/templates` + + :param str target: The target to write the rendered template to (or None) + :param str owner: The owner of the rendered file + :param str group: The group of the rendered file + :param int perms: The permissions of the rendered file + :param partial on_change_action: functools partial to be executed when + rendered file changes + :param jinja2 loader template_loader: A jinja2 template loader + + :return str: The rendered template + """ + def __init__(self, source, target, + owner='root', group='root', perms=0o444, + on_change_action=None, template_loader=None): + self.source = source + self.target = target + self.owner = owner + self.group = group + self.perms = perms + self.on_change_action = on_change_action + self.template_loader = template_loader + + def __call__(self, manager, service_name, event_name): + pre_checksum = '' + if self.on_change_action and os.path.isfile(self.target): + pre_checksum = host.file_hash(self.target) + service = manager.get_service(service_name) + context = {'ctx': {}} + for ctx in service.get('required_data', []): + context.update(ctx) + context['ctx'].update(ctx) + + result = templating.render(self.source, self.target, context, + self.owner, self.group, self.perms, + template_loader=self.template_loader) + if self.on_change_action: + if pre_checksum == host.file_hash(self.target): + hookenv.log( + 'No change detected: {}'.format(self.target), + hookenv.DEBUG) + else: + self.on_change_action() + + return result + + +# Convenience aliases for templates +render_template = template = TemplateCallback diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/core/strutils.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/core/strutils.py new file mode 100644 index 0000000000000000000000000000000000000000..e8df0452f8203b53947eb137eed22d85ff62dff0 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/core/strutils.py @@ -0,0 +1,129 @@ +#!/usr/bin/env python +# -*- coding: utf-8 -*- + +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import six +import re + + +def bool_from_string(value): + """Interpret string value as boolean. + + Returns True if value translates to True otherwise False. + """ + if isinstance(value, six.string_types): + value = six.text_type(value) + else: + msg = "Unable to interpret non-string value '%s' as boolean" % (value) + raise ValueError(msg) + + value = value.strip().lower() + + if value in ['y', 'yes', 'true', 't', 'on']: + return True + elif value in ['n', 'no', 'false', 'f', 'off']: + return False + + msg = "Unable to interpret string value '%s' as boolean" % (value) + raise ValueError(msg) + + +def bytes_from_string(value): + """Interpret human readable string value as bytes. 
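+
+    Illustrative conversions (units are powers of 1024)::
+
+        bytes_from_string('10KB')  # 10240
+        bytes_from_string('1G')    # 1073741824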
+ + Returns int + """ + BYTE_POWER = { + 'K': 1, + 'KB': 1, + 'M': 2, + 'MB': 2, + 'G': 3, + 'GB': 3, + 'T': 4, + 'TB': 4, + 'P': 5, + 'PB': 5, + } + if isinstance(value, six.string_types): + value = six.text_type(value) + else: + msg = "Unable to interpret non-string value '%s' as bytes" % (value) + raise ValueError(msg) + matches = re.match("([0-9]+)([a-zA-Z]+)", value) + if matches: + size = int(matches.group(1)) * (1024 ** BYTE_POWER[matches.group(2)]) + else: + # Assume that value passed in is bytes + try: + size = int(value) + except ValueError: + msg = "Unable to interpret string value '%s' as bytes" % (value) + raise ValueError(msg) + return size + + +class BasicStringComparator(object): + """Provides a class that will compare strings from an iterator type object. + Used to provide > and < comparisons on strings that may not necessarily be + alphanumerically ordered. e.g. OpenStack or Ubuntu releases AFTER the + z-wrap. + """ + + _list = None + + def __init__(self, item): + if self._list is None: + raise Exception("Must define the _list in the class definition!") + try: + self.index = self._list.index(item) + except Exception: + raise KeyError("Item '{}' is not in list '{}'" + .format(item, self._list)) + + def __eq__(self, other): + assert isinstance(other, str) or isinstance(other, self.__class__) + return self.index == self._list.index(other) + + def __ne__(self, other): + return not self.__eq__(other) + + def __lt__(self, other): + assert isinstance(other, str) or isinstance(other, self.__class__) + return self.index < self._list.index(other) + + def __ge__(self, other): + return not self.__lt__(other) + + def __gt__(self, other): + assert isinstance(other, str) or isinstance(other, self.__class__) + return self.index > self._list.index(other) + + def __le__(self, other): + return not self.__gt__(other) + + def __str__(self): + """Always give back the item at the index so it can be used in + comparisons like: + + s_mitaka = CompareOpenStack('mitaka') + s_newton = CompareOpenstack('newton') + + assert s_newton > s_mitaka + + @returns: + """ + return self._list[self.index] diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/core/sysctl.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/core/sysctl.py new file mode 100644 index 0000000000000000000000000000000000000000..386428d619bc38edf02dc088bf7ec32767c0ab94 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/core/sysctl.py @@ -0,0 +1,75 @@ +#!/usr/bin/env python +# -*- coding: utf-8 -*- + +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import yaml + +from subprocess import check_call, CalledProcessError + +from charmhelpers.core.hookenv import ( + log, + DEBUG, + ERROR, + WARNING, +) + +from charmhelpers.core.host import is_container + +__author__ = 'Jorge Niedbalski R. 
' + + +def create(sysctl_dict, sysctl_file, ignore=False): + """Creates a sysctl.conf file from a YAML associative array + + :param sysctl_dict: a dict or YAML-formatted string of sysctl + options eg "{ 'kernel.max_pid': 1337 }" + :type sysctl_dict: str + :param sysctl_file: path to the sysctl file to be saved + :type sysctl_file: str or unicode + :param ignore: If True, ignore "unknown variable" errors. + :type ignore: bool + :returns: None + """ + if type(sysctl_dict) is not dict: + try: + sysctl_dict_parsed = yaml.safe_load(sysctl_dict) + except yaml.YAMLError: + log("Error parsing YAML sysctl_dict: {}".format(sysctl_dict), + level=ERROR) + return + else: + sysctl_dict_parsed = sysctl_dict + + with open(sysctl_file, "w") as fd: + for key, value in sysctl_dict_parsed.items(): + fd.write("{}={}\n".format(key, value)) + + log("Updating sysctl_file: {} values: {}".format(sysctl_file, + sysctl_dict_parsed), + level=DEBUG) + + call = ["sysctl", "-p", sysctl_file] + if ignore: + call.append("-e") + + try: + check_call(call) + except CalledProcessError as e: + if is_container(): + log("Error setting some sysctl keys in this container: {}".format(e.output), + level=WARNING) + else: + raise e diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/core/templating.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/core/templating.py new file mode 100644 index 0000000000000000000000000000000000000000..9014015c14ee0b48c775562cd4f0d30884944439 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/core/templating.py @@ -0,0 +1,93 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import os +import sys + +from charmhelpers.core import host +from charmhelpers.core import hookenv + + +def render(source, target, context, owner='root', group='root', + perms=0o444, templates_dir=None, encoding='UTF-8', + template_loader=None, config_template=None): + """ + Render a template. + + The `source` path, if not absolute, is relative to the `templates_dir`. + + The `target` path should be absolute. It can also be `None`, in which + case no file will be written. + + The context should be a dict containing the values to be replaced in the + template. + + config_template may be provided to render from a provided template instead + of loading from a file. + + The `owner`, `group`, and `perms` options will be passed to `write_file`. + + If omitted, `templates_dir` defaults to the `templates` folder in the charm. + + The rendered template will be written to the file as well as being returned + as a string. + + Note: Using this requires python-jinja2 or python3-jinja2; if it is not + installed, calling this will attempt to use charmhelpers.fetch.apt_install + to install it. 
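+
+    A minimal illustrative call (template name and context keys are
+    hypothetical)::
+
+        render('vhost.conf.j2', '/etc/nginx/sites-enabled/default',
+               {'server_name': 'example.com'}, perms=0o644)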
+ """ + try: + from jinja2 import FileSystemLoader, Environment, exceptions + except ImportError: + try: + from charmhelpers.fetch import apt_install + except ImportError: + hookenv.log('Could not import jinja2, and could not import ' + 'charmhelpers.fetch to install it', + level=hookenv.ERROR) + raise + if sys.version_info.major == 2: + apt_install('python-jinja2', fatal=True) + else: + apt_install('python3-jinja2', fatal=True) + from jinja2 import FileSystemLoader, Environment, exceptions + + if template_loader: + template_env = Environment(loader=template_loader) + else: + if templates_dir is None: + templates_dir = os.path.join(hookenv.charm_dir(), 'templates') + template_env = Environment(loader=FileSystemLoader(templates_dir)) + + # load from a string if provided explicitly + if config_template is not None: + template = template_env.from_string(config_template) + else: + try: + source = source + template = template_env.get_template(source) + except exceptions.TemplateNotFound as e: + hookenv.log('Could not load template %s from %s.' % + (source, templates_dir), + level=hookenv.ERROR) + raise e + content = template.render(context) + if target is not None: + target_dir = os.path.dirname(target) + if not os.path.exists(target_dir): + # This is a terrible default directory permission, as the file + # or its siblings will often contain secrets. + host.mkdir(os.path.dirname(target), owner, group, perms=0o755) + host.write_file(target, content.encode(encoding), owner, group, perms) + return content diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/core/unitdata.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/core/unitdata.py new file mode 100644 index 0000000000000000000000000000000000000000..ab554327b343f896880523fc627c1abea84be29a --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/core/unitdata.py @@ -0,0 +1,525 @@ +#!/usr/bin/env python +# -*- coding: utf-8 -*- +# +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Authors: +# Kapil Thangavelu +# +""" +Intro +----- + +A simple way to store state in units. This provides a key value +storage with support for versioned, transactional operation, +and can calculate deltas from previous values to simplify unit logic +when processing changes. + + +Hook Integration +---------------- + +There are several extant frameworks for hook execution, including + + - charmhelpers.core.hookenv.Hooks + - charmhelpers.core.services.ServiceManager + +The storage classes are framework agnostic, one simple integration is +via the HookData contextmanager. It will record the current hook +execution environment (including relation data, config data, etc.), +setup a transaction and allow easy access to the changes from +previously seen values. One consequence of the integration is the +reservation of particular keys ('rels', 'unit', 'env', 'config', +'charm_revisions') for their respective values. 
+ +Here's a fully worked integration example using hookenv.Hooks:: + + from charmhelper.core import hookenv, unitdata + + hook_data = unitdata.HookData() + db = unitdata.kv() + hooks = hookenv.Hooks() + + @hooks.hook + def config_changed(): + # Print all changes to configuration from previously seen + # values. + for changed, (prev, cur) in hook_data.conf.items(): + print('config changed', changed, + 'previous value', prev, + 'current value', cur) + + # Get some unit specific bookeeping + if not db.get('pkg_key'): + key = urllib.urlopen('https://example.com/pkg_key').read() + db.set('pkg_key', key) + + # Directly access all charm config as a mapping. + conf = db.getrange('config', True) + + # Directly access all relation data as a mapping + rels = db.getrange('rels', True) + + if __name__ == '__main__': + with hook_data(): + hook.execute() + + +A more basic integration is via the hook_scope context manager which simply +manages transaction scope (and records hook name, and timestamp):: + + >>> from unitdata import kv + >>> db = kv() + >>> with db.hook_scope('install'): + ... # do work, in transactional scope. + ... db.set('x', 1) + >>> db.get('x') + 1 + + +Usage +----- + +Values are automatically json de/serialized to preserve basic typing +and complex data struct capabilities (dicts, lists, ints, booleans, etc). + +Individual values can be manipulated via get/set:: + + >>> kv.set('y', True) + >>> kv.get('y') + True + + # We can set complex values (dicts, lists) as a single key. + >>> kv.set('config', {'a': 1, 'b': True'}) + + # Also supports returning dictionaries as a record which + # provides attribute access. + >>> config = kv.get('config', record=True) + >>> config.b + True + + +Groups of keys can be manipulated with update/getrange:: + + >>> kv.update({'z': 1, 'y': 2}, prefix="gui.") + >>> kv.getrange('gui.', strip=True) + {'z': 1, 'y': 2} + +When updating values, its very helpful to understand which values +have actually changed and how have they changed. The storage +provides a delta method to provide for this:: + + >>> data = {'debug': True, 'option': 2} + >>> delta = kv.delta(data, 'config.') + >>> delta.debug.previous + None + >>> delta.debug.current + True + >>> delta + {'debug': (None, True), 'option': (None, 2)} + +Note the delta method does not persist the actual change, it needs to +be explicitly saved via 'update' method:: + + >>> kv.update(data, 'config.') + +Values modified in the context of a hook scope retain historical values +associated to the hookname. + + >>> with db.hook_scope('config-changed'): + ... db.set('x', 42) + >>> db.gethistory('x') + [(1, u'x', 1, u'install', u'2015-01-21T16:49:30.038372'), + (2, u'x', 42, u'config-changed', u'2015-01-21T16:49:30.038786')] + +""" + +import collections +import contextlib +import datetime +import itertools +import json +import os +import pprint +import sqlite3 +import sys + +__author__ = 'Kapil Thangavelu ' + + +class Storage(object): + """Simple key value database for local unit state within charms. + + Modifications are not persisted unless :meth:`flush` is called. + + To support dicts, lists, integer, floats, and booleans values + are automatically json encoded/decoded. + + Note: to facilitate unit testing, ':memory:' can be passed as the + path parameter which causes sqlite3 to only build the db in memory. + This should only be used for testing purposes. 
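+
+    Illustrative in-memory use::
+
+        db = Storage(':memory:')
+        with db.hook_scope('install'):
+            db.set('seen', True)
+        db.get('seen')  # True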
+ """ + def __init__(self, path=None): + self.db_path = path + if path is None: + if 'UNIT_STATE_DB' in os.environ: + self.db_path = os.environ['UNIT_STATE_DB'] + else: + self.db_path = os.path.join( + os.environ.get('CHARM_DIR', ''), '.unit-state.db') + if self.db_path != ':memory:': + with open(self.db_path, 'a') as f: + os.fchmod(f.fileno(), 0o600) + self.conn = sqlite3.connect('%s' % self.db_path) + self.cursor = self.conn.cursor() + self.revision = None + self._closed = False + self._init() + + def close(self): + if self._closed: + return + self.flush(False) + self.cursor.close() + self.conn.close() + self._closed = True + + def get(self, key, default=None, record=False): + self.cursor.execute('select data from kv where key=?', [key]) + result = self.cursor.fetchone() + if not result: + return default + if record: + return Record(json.loads(result[0])) + return json.loads(result[0]) + + def getrange(self, key_prefix, strip=False): + """ + Get a range of keys starting with a common prefix as a mapping of + keys to values. + + :param str key_prefix: Common prefix among all keys + :param bool strip: Optionally strip the common prefix from the key + names in the returned dict + :return dict: A (possibly empty) dict of key-value mappings + """ + self.cursor.execute("select key, data from kv where key like ?", + ['%s%%' % key_prefix]) + result = self.cursor.fetchall() + + if not result: + return {} + if not strip: + key_prefix = '' + return dict([ + (k[len(key_prefix):], json.loads(v)) for k, v in result]) + + def update(self, mapping, prefix=""): + """ + Set the values of multiple keys at once. + + :param dict mapping: Mapping of keys to values + :param str prefix: Optional prefix to apply to all keys in `mapping` + before setting + """ + for k, v in mapping.items(): + self.set("%s%s" % (prefix, k), v) + + def unset(self, key): + """ + Remove a key from the database entirely. + """ + self.cursor.execute('delete from kv where key=?', [key]) + if self.revision and self.cursor.rowcount: + self.cursor.execute( + 'insert into kv_revisions values (?, ?, ?)', + [key, self.revision, json.dumps('DELETED')]) + + def unsetrange(self, keys=None, prefix=""): + """ + Remove a range of keys starting with a common prefix, from the database + entirely. + + :param list keys: List of keys to remove. + :param str prefix: Optional prefix to apply to all keys in ``keys`` + before removing. + """ + if keys is not None: + keys = ['%s%s' % (prefix, key) for key in keys] + self.cursor.execute('delete from kv where key in (%s)' % ','.join(['?'] * len(keys)), keys) + if self.revision and self.cursor.rowcount: + self.cursor.execute( + 'insert into kv_revisions values %s' % ','.join(['(?, ?, ?)'] * len(keys)), + list(itertools.chain.from_iterable((key, self.revision, json.dumps('DELETED')) for key in keys))) + else: + self.cursor.execute('delete from kv where key like ?', + ['%s%%' % prefix]) + if self.revision and self.cursor.rowcount: + self.cursor.execute( + 'insert into kv_revisions values (?, ?, ?)', + ['%s%%' % prefix, self.revision, json.dumps('DELETED')]) + + def set(self, key, value): + """ + Set a value in the database. 
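+
+        For example (the key name is illustrative)::
+
+            kv().set('app.version', '1.2.3')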
+ + :param str key: Key to set the value for + :param value: Any JSON-serializable value to be set + """ + serialized = json.dumps(value) + + self.cursor.execute('select data from kv where key=?', [key]) + exists = self.cursor.fetchone() + + # Skip mutations to the same value + if exists: + if exists[0] == serialized: + return value + + if not exists: + self.cursor.execute( + 'insert into kv (key, data) values (?, ?)', + (key, serialized)) + else: + self.cursor.execute(''' + update kv + set data = ? + where key = ?''', [serialized, key]) + + # Save + if not self.revision: + return value + + self.cursor.execute( + 'select 1 from kv_revisions where key=? and revision=?', + [key, self.revision]) + exists = self.cursor.fetchone() + + if not exists: + self.cursor.execute( + '''insert into kv_revisions ( + revision, key, data) values (?, ?, ?)''', + (self.revision, key, serialized)) + else: + self.cursor.execute( + ''' + update kv_revisions + set data = ? + where key = ? + and revision = ?''', + [serialized, key, self.revision]) + + return value + + def delta(self, mapping, prefix): + """ + return a delta containing values that have changed. + """ + previous = self.getrange(prefix, strip=True) + if not previous: + pk = set() + else: + pk = set(previous.keys()) + ck = set(mapping.keys()) + delta = DeltaSet() + + # added + for k in ck.difference(pk): + delta[k] = Delta(None, mapping[k]) + + # removed + for k in pk.difference(ck): + delta[k] = Delta(previous[k], None) + + # changed + for k in pk.intersection(ck): + c = mapping[k] + p = previous[k] + if c != p: + delta[k] = Delta(p, c) + + return delta + + @contextlib.contextmanager + def hook_scope(self, name=""): + """Scope all future interactions to the current hook execution + revision.""" + assert not self.revision + self.cursor.execute( + 'insert into hooks (hook, date) values (?, ?)', + (name or sys.argv[0], + datetime.datetime.utcnow().isoformat())) + self.revision = self.cursor.lastrowid + try: + yield self.revision + self.revision = None + except Exception: + self.flush(False) + self.revision = None + raise + else: + self.flush() + + def flush(self, save=True): + if save: + self.conn.commit() + elif self._closed: + return + else: + self.conn.rollback() + + def _init(self): + self.cursor.execute(''' + create table if not exists kv ( + key text, + data text, + primary key (key) + )''') + self.cursor.execute(''' + create table if not exists kv_revisions ( + key text, + revision integer, + data text, + primary key (key, revision) + )''') + self.cursor.execute(''' + create table if not exists hooks ( + version integer primary key autoincrement, + hook text, + date text + )''') + self.conn.commit() + + def gethistory(self, key, deserialize=False): + self.cursor.execute( + ''' + select kv.revision, kv.key, kv.data, h.hook, h.date + from kv_revisions kv, + hooks h + where kv.key=? + and kv.revision = h.version + ''', [key]) + if deserialize is False: + return self.cursor.fetchall() + return map(_parse_history, self.cursor.fetchall()) + + def debug(self, fh=sys.stderr): + self.cursor.execute('select * from kv') + pprint.pprint(self.cursor.fetchall(), stream=fh) + self.cursor.execute('select * from kv_revisions') + pprint.pprint(self.cursor.fetchall(), stream=fh) + + +def _parse_history(d): + return (d[0], d[1], json.loads(d[2]), d[3], + datetime.datetime.strptime(d[-1], "%Y-%m-%dT%H:%M:%S.%f")) + + +class HookData(object): + """Simple integration for existing hook exec frameworks. 
+
+    Records all unit information, and stores deltas for processing
+    by the hook.
+
+    Sample::
+
+        from charmhelpers.core import hookenv, unitdata
+
+        changes = unitdata.HookData()
+        db = unitdata.kv()
+        hooks = hookenv.Hooks()
+
+        @hooks.hook
+        def config_changed():
+            # View all changes to configuration
+            for changed, (prev, cur) in changes.conf.items():
+                print('config changed', changed,
+                      'previous value', prev,
+                      'current value', cur)
+
+            # Get some unit specific bookkeeping
+            if not db.get('pkg_key'):
+                key = urllib.urlopen('https://example.com/pkg_key').read()
+                db.set('pkg_key', key)
+
+        if __name__ == '__main__':
+            with changes():
+                hooks.execute()
+
+    """
+    def __init__(self):
+        self.kv = kv()
+        self.conf = None
+        self.rels = None
+
+    @contextlib.contextmanager
+    def __call__(self):
+        from charmhelpers.core import hookenv
+        hook_name = hookenv.hook_name()
+
+        with self.kv.hook_scope(hook_name):
+            self._record_charm_version(hookenv.charm_dir())
+            delta_config, delta_relation = self._record_hook(hookenv)
+            yield self.kv, delta_config, delta_relation
+
+    def _record_charm_version(self, charm_dir):
+        # Record revisions.. charm revisions are meaningless
+        # to charm authors as they don't control the revision.
+        # so logic dependent on revision is not particularly
+        # useful, however it is useful for debugging analysis.
+        charm_rev = open(
+            os.path.join(charm_dir, 'revision')).read().strip()
+        charm_rev = charm_rev or '0'
+        revs = self.kv.get('charm_revisions', [])
+        if charm_rev not in revs:
+            revs.append(charm_rev.strip() or '0')
+            self.kv.set('charm_revisions', revs)
+
+    def _record_hook(self, hookenv):
+        data = hookenv.execution_environment()
+        self.conf = conf_delta = self.kv.delta(data['conf'], 'config')
+        self.rels = rels_delta = self.kv.delta(data['rels'], 'rels')
+        self.kv.set('env', dict(data['env']))
+        self.kv.set('unit', data['unit'])
+        self.kv.set('relid', data.get('relid'))
+        return conf_delta, rels_delta
+
+
+class Record(dict):
+
+    __slots__ = ()
+
+    def __getattr__(self, k):
+        if k in self:
+            return self[k]
+        raise AttributeError(k)
+
+
+class DeltaSet(Record):
+
+    __slots__ = ()
+
+
+Delta = collections.namedtuple('Delta', ['previous', 'current'])
+
+
+_KV = None
+
+
+def kv():
+    global _KV
+    if _KV is None:
+        _KV = Storage()
+    return _KV
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/fetch/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/fetch/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..0cc7fc850a0632568ad78aae9716be718c9ff6b5
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/fetch/__init__.py
@@ -0,0 +1,209 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+ +import importlib +from charmhelpers.osplatform import get_platform +from yaml import safe_load +from charmhelpers.core.hookenv import ( + config, + log, +) + +import six +if six.PY3: + from urllib.parse import urlparse, urlunparse +else: + from urlparse import urlparse, urlunparse + + +# The order of this list is very important. Handlers should be listed in from +# least- to most-specific URL matching. +FETCH_HANDLERS = ( + 'charmhelpers.fetch.archiveurl.ArchiveUrlFetchHandler', + 'charmhelpers.fetch.bzrurl.BzrUrlFetchHandler', + 'charmhelpers.fetch.giturl.GitUrlFetchHandler', +) + + +class SourceConfigError(Exception): + pass + + +class UnhandledSource(Exception): + pass + + +class AptLockError(Exception): + pass + + +class GPGKeyError(Exception): + """Exception occurs when a GPG key cannot be fetched or used. The message + indicates what the problem is. + """ + pass + + +class BaseFetchHandler(object): + + """Base class for FetchHandler implementations in fetch plugins""" + + def can_handle(self, source): + """Returns True if the source can be handled. Otherwise returns + a string explaining why it cannot""" + return "Wrong source type" + + def install(self, source): + """Try to download and unpack the source. Return the path to the + unpacked files or raise UnhandledSource.""" + raise UnhandledSource("Wrong source type {}".format(source)) + + def parse_url(self, url): + return urlparse(url) + + def base_url(self, url): + """Return url without querystring or fragment""" + parts = list(self.parse_url(url)) + parts[4:] = ['' for i in parts[4:]] + return urlunparse(parts) + + +__platform__ = get_platform() +module = "charmhelpers.fetch.%s" % __platform__ +fetch = importlib.import_module(module) + +filter_installed_packages = fetch.filter_installed_packages +filter_missing_packages = fetch.filter_missing_packages +install = fetch.apt_install +upgrade = fetch.apt_upgrade +update = _fetch_update = fetch.apt_update +purge = fetch.apt_purge +add_source = fetch.add_source + +if __platform__ == "ubuntu": + apt_cache = fetch.apt_cache + apt_install = fetch.apt_install + apt_update = fetch.apt_update + apt_upgrade = fetch.apt_upgrade + apt_purge = fetch.apt_purge + apt_autoremove = fetch.apt_autoremove + apt_mark = fetch.apt_mark + apt_hold = fetch.apt_hold + apt_unhold = fetch.apt_unhold + import_key = fetch.import_key + get_upstream_version = fetch.get_upstream_version + apt_pkg = fetch.ubuntu_apt_pkg + get_apt_dpkg_env = fetch.get_apt_dpkg_env +elif __platform__ == "centos": + yum_search = fetch.yum_search + + +def configure_sources(update=False, + sources_var='install_sources', + keys_var='install_keys'): + """Configure multiple sources from charm configuration. + + The lists are encoded as yaml fragments in the configuration. + The fragment needs to be included as a string. Sources and their + corresponding keys are of the types supported by add_source(). + + Example config: + install_sources: | + - "ppa:foo" + - "http://example.com/repo precise main" + install_keys: | + - null + - "a1b2c3d4" + + Note that 'null' (a.k.a. None) should not be quoted. 
+ """ + sources = safe_load((config(sources_var) or '').strip()) or [] + keys = safe_load((config(keys_var) or '').strip()) or None + + if isinstance(sources, six.string_types): + sources = [sources] + + if keys is None: + for source in sources: + add_source(source, None) + else: + if isinstance(keys, six.string_types): + keys = [keys] + + if len(sources) != len(keys): + raise SourceConfigError( + 'Install sources and keys lists are different lengths') + for source, key in zip(sources, keys): + add_source(source, key) + if update: + _fetch_update(fatal=True) + + +def install_remote(source, *args, **kwargs): + """Install a file tree from a remote source. + + The specified source should be a url of the form: + scheme://[host]/path[#[option=value][&...]] + + Schemes supported are based on this modules submodules. + Options supported are submodule-specific. + Additional arguments are passed through to the submodule. + + For example:: + + dest = install_remote('http://example.com/archive.tgz', + checksum='deadbeef', + hash_type='sha1') + + This will download `archive.tgz`, validate it using SHA1 and, if + the file is ok, extract it and return the directory in which it + was extracted. If the checksum fails, it will raise + :class:`charmhelpers.core.host.ChecksumError`. + """ + # We ONLY check for True here because can_handle may return a string + # explaining why it can't handle a given source. + handlers = [h for h in plugins() if h.can_handle(source) is True] + for handler in handlers: + try: + return handler.install(source, *args, **kwargs) + except UnhandledSource as e: + log('Install source attempt unsuccessful: {}'.format(e), + level='WARNING') + raise UnhandledSource("No handler found for source {}".format(source)) + + +def install_from_config(config_var_name): + """Install a file from config.""" + charm_config = config() + source = charm_config[config_var_name] + return install_remote(source) + + +def plugins(fetch_handlers=None): + if not fetch_handlers: + fetch_handlers = FETCH_HANDLERS + plugin_list = [] + for handler_name in fetch_handlers: + package, classname = handler_name.rsplit('.', 1) + try: + handler_class = getattr( + importlib.import_module(package), + classname) + plugin_list.append(handler_class()) + except NotImplementedError: + # Skip missing plugins so that they can be ommitted from + # installation if desired + log("FetchHandler {} not found, skipping plugin".format( + handler_name)) + return plugin_list diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/fetch/archiveurl.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/fetch/archiveurl.py new file mode 100644 index 0000000000000000000000000000000000000000..d25587adeff102c3fc9e402f98746fccbd8a3693 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/fetch/archiveurl.py @@ -0,0 +1,165 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +import os +import hashlib +import re + +from charmhelpers.fetch import ( + BaseFetchHandler, + UnhandledSource +) +from charmhelpers.payload.archive import ( + get_archive_handler, + extract, +) +from charmhelpers.core.host import mkdir, check_hash + +import six +if six.PY3: + from urllib.request import ( + build_opener, install_opener, urlopen, urlretrieve, + HTTPPasswordMgrWithDefaultRealm, HTTPBasicAuthHandler, + ) + from urllib.parse import urlparse, urlunparse, parse_qs + from urllib.error import URLError +else: + from urllib import urlretrieve + from urllib2 import ( + build_opener, install_opener, urlopen, + HTTPPasswordMgrWithDefaultRealm, HTTPBasicAuthHandler, + URLError + ) + from urlparse import urlparse, urlunparse, parse_qs + + +def splituser(host): + '''urllib.splituser(), but six's support of this seems broken''' + _userprog = re.compile('^(.*)@(.*)$') + match = _userprog.match(host) + if match: + return match.group(1, 2) + return None, host + + +def splitpasswd(user): + '''urllib.splitpasswd(), but six's support of this is missing''' + _passwdprog = re.compile('^([^:]*):(.*)$', re.S) + match = _passwdprog.match(user) + if match: + return match.group(1, 2) + return user, None + + +class ArchiveUrlFetchHandler(BaseFetchHandler): + """ + Handler to download archive files from arbitrary URLs. + + Can fetch from http, https, ftp, and file URLs. + + Can install either tarballs (.tar, .tgz, .tbz2, etc) or zip files. + + Installs the contents of the archive in $CHARM_DIR/fetched/. + """ + def can_handle(self, source): + url_parts = self.parse_url(source) + if url_parts.scheme not in ('http', 'https', 'ftp', 'file'): + # XXX: Why is this returning a boolean and a string? It's + # doomed to fail since "bool(can_handle('foo://'))" will be True. + return "Wrong source type" + if get_archive_handler(self.base_url(source)): + return True + return False + + def download(self, source, dest): + """ + Download an archive file. + + :param str source: URL pointing to an archive file. + :param str dest: Local path location to download archive file to. + """ + # propagate all exceptions + # URLError, OSError, etc + proto, netloc, path, params, query, fragment = urlparse(source) + if proto in ('http', 'https'): + auth, barehost = splituser(netloc) + if auth is not None: + source = urlunparse((proto, barehost, path, params, query, fragment)) + username, password = splitpasswd(auth) + passman = HTTPPasswordMgrWithDefaultRealm() + # Realm is set to None in add_password to force the username and password + # to be used whatever the realm + passman.add_password(None, source, username, password) + authhandler = HTTPBasicAuthHandler(passman) + opener = build_opener(authhandler) + install_opener(opener) + response = urlopen(source) + try: + with open(dest, 'wb') as dest_file: + dest_file.write(response.read()) + except Exception as e: + if os.path.isfile(dest): + os.unlink(dest) + raise e + + # Mandatory file validation via Sha1 or MD5 hashing. + def download_and_validate(self, url, hashsum, validate="sha1"): + tempfile, headers = urlretrieve(url) + check_hash(tempfile, hashsum, validate) + return tempfile + + def install(self, source, dest=None, checksum=None, hash_type='sha1'): + """ + Download and install an archive file, with optional checksum validation. + + The checksum can also be given on the `source` URL's fragment. + For example:: + + handler.install('http://example.com/file.tgz#sha1=deadbeef') + + :param str source: URL pointing to an archive file. 
+ :param str dest: Local destination path to install to. If not given, + installs to `$CHARM_DIR/archives/archive_file_name`. + :param str checksum: If given, validate the archive file after download. + :param str hash_type: Algorithm used to generate `checksum`. + Can be any hash alrgorithm supported by :mod:`hashlib`, + such as md5, sha1, sha256, sha512, etc. + + """ + url_parts = self.parse_url(source) + dest_dir = os.path.join(os.environ.get('CHARM_DIR'), 'fetched') + if not os.path.exists(dest_dir): + mkdir(dest_dir, perms=0o755) + dld_file = os.path.join(dest_dir, os.path.basename(url_parts.path)) + try: + self.download(source, dld_file) + except URLError as e: + raise UnhandledSource(e.reason) + except OSError as e: + raise UnhandledSource(e.strerror) + options = parse_qs(url_parts.fragment) + for key, value in options.items(): + if not six.PY3: + algorithms = hashlib.algorithms + else: + algorithms = hashlib.algorithms_available + if key in algorithms: + if len(value) != 1: + raise TypeError( + "Expected 1 hash value, not %d" % len(value)) + expected = value[0] + check_hash(dld_file, expected, key) + if checksum: + check_hash(dld_file, checksum, hash_type) + return extract(dld_file, dest) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/fetch/bzrurl.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/fetch/bzrurl.py new file mode 100644 index 0000000000000000000000000000000000000000..c4ab3ff1e6bc7dde24e8ed568a3dc0c6012ddea6 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/fetch/bzrurl.py @@ -0,0 +1,76 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +import os +from subprocess import STDOUT, check_output +from charmhelpers.fetch import ( + BaseFetchHandler, + UnhandledSource, + filter_installed_packages, + install, +) +from charmhelpers.core.host import mkdir + + +if filter_installed_packages(['bzr']) != []: + install(['bzr']) + if filter_installed_packages(['bzr']) != []: + raise NotImplementedError('Unable to install bzr') + + +class BzrUrlFetchHandler(BaseFetchHandler): + """Handler for bazaar branches via generic and lp URLs.""" + + def can_handle(self, source): + url_parts = self.parse_url(source) + if url_parts.scheme not in ('bzr+ssh', 'lp', ''): + return False + elif not url_parts.scheme: + return os.path.exists(os.path.join(source, '.bzr')) + else: + return True + + def branch(self, source, dest, revno=None): + if not self.can_handle(source): + raise UnhandledSource("Cannot handle {}".format(source)) + cmd_opts = [] + if revno: + cmd_opts += ['-r', str(revno)] + if os.path.exists(dest): + cmd = ['bzr', 'pull'] + cmd += cmd_opts + cmd += ['--overwrite', '-d', dest, source] + else: + cmd = ['bzr', 'branch'] + cmd += cmd_opts + cmd += [source, dest] + check_output(cmd, stderr=STDOUT) + + def install(self, source, dest=None, revno=None): + url_parts = self.parse_url(source) + branch_name = url_parts.path.strip("/").split("/")[-1] + if dest: + dest_dir = os.path.join(dest, branch_name) + else: + dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched", + branch_name) + + if dest and not os.path.exists(dest): + mkdir(dest, perms=0o755) + + try: + self.branch(source, dest_dir, revno) + except OSError as e: + raise UnhandledSource(e.strerror) + return dest_dir diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/fetch/centos.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/fetch/centos.py new file mode 100644 index 0000000000000000000000000000000000000000..a91dcff0645ed541a79cd72af3112bdff393719a --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/fetch/centos.py @@ -0,0 +1,171 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import subprocess +import os +import time +import six +import yum + +from tempfile import NamedTemporaryFile +from charmhelpers.core.hookenv import log + +YUM_NO_LOCK = 1 # The return code for "couldn't acquire lock" in YUM. +YUM_NO_LOCK_RETRY_DELAY = 10 # Wait 10 seconds between apt lock checks. +YUM_NO_LOCK_RETRY_COUNT = 30 # Retry to acquire the lock X times. 
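+
+# Note: with the defaults above, a "fatal" yum call may wait up to
+# YUM_NO_LOCK_RETRY_COUNT * YUM_NO_LOCK_RETRY_DELAY = 300 seconds for
+# another package transaction to release the yum lock before giving up
+# (see _run_yum_command below).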
+
+
+def filter_installed_packages(packages):
+    """Return a list of packages that require installation."""
+    yb = yum.YumBase()
+    package_list = yb.doPackageLists()
+    temp_cache = {p.base_package_name: 1 for p in package_list['installed']}
+
+    _pkgs = [p for p in packages if not temp_cache.get(p, False)]
+    return _pkgs
+
+
+def install(packages, options=None, fatal=False):
+    """Install one or more packages."""
+    cmd = ['yum', '--assumeyes']
+    if options is not None:
+        cmd.extend(options)
+    cmd.append('install')
+    if isinstance(packages, six.string_types):
+        cmd.append(packages)
+    else:
+        cmd.extend(packages)
+    log("Installing {} with options: {}".format(packages,
+                                                options))
+    _run_yum_command(cmd, fatal)
+
+
+def upgrade(options=None, fatal=False, dist=False):
+    """Upgrade all packages."""
+    cmd = ['yum', '--assumeyes']
+    if options is not None:
+        cmd.extend(options)
+    cmd.append('upgrade')
+    log("Upgrading with options: {}".format(options))
+    _run_yum_command(cmd, fatal)
+
+
+def update(fatal=False):
+    """Update local yum cache."""
+    cmd = ['yum', '--assumeyes', 'update']
+    log("Update with fatal: {}".format(fatal))
+    _run_yum_command(cmd, fatal)
+
+
+def purge(packages, fatal=False):
+    """Purge one or more packages."""
+    cmd = ['yum', '--assumeyes', 'remove']
+    if isinstance(packages, six.string_types):
+        cmd.append(packages)
+    else:
+        cmd.extend(packages)
+    log("Purging {}".format(packages))
+    _run_yum_command(cmd, fatal)
+
+
+def yum_search(packages):
+    """Search for a package."""
+    output = {}
+    cmd = ['yum', 'search']
+    if isinstance(packages, six.string_types):
+        cmd.append(packages)
+    else:
+        cmd.extend(packages)
+    log("Searching for {}".format(packages))
+    result = subprocess.check_output(cmd)
+    for package in list(packages):
+        output[package] = package in result
+    return output
+
+
+def add_source(source, key=None):
+    """Add a package source to this system.
+
+    @param source: a URL with an rpm package
+
+    @param key: A key to be added to the system's keyring and used
+    to verify the signatures on packages. Ideally, this should be an
+    ASCII format GPG public key including the block headers. A GPG key
+    id may also be used, but be aware that only insecure protocols are
+    available to retrieve the actual public key from a public keyserver
+    placing your Juju environment at risk.
+    """
+    if source is None:
+        log('Source is not present. Skipping')
+        return
+
+    if source.startswith('http'):
+        directory = '/etc/yum.repos.d/'
+        for filename in os.listdir(directory):
+            with open(directory + filename, 'r') as rpm_file:
+                if source in rpm_file.read():
+                    break
+        else:
+            log("Add source: {!r}".format(source))
+            # write in the charms.repo
+            with open(directory + 'Charms.repo', 'a') as rpm_file:
+                rpm_file.write('[%s]\n' % source[7:].replace('/', '_'))
+                rpm_file.write('name=%s\n' % source[7:])
+                rpm_file.write('baseurl=%s\n\n' % source)
+    else:
+        log("Unknown source: {!r}".format(source))
+
+    if key:
+        if '-----BEGIN PGP PUBLIC KEY BLOCK-----' in key:
+            with NamedTemporaryFile('w+') as key_file:
+                key_file.write(key)
+                key_file.flush()
+                key_file.seek(0)
+                subprocess.check_call(['rpm', '--import', key_file.name])
+        else:
+            subprocess.check_call(['rpm', '--import', key])
+
+
+def _run_yum_command(cmd, fatal=False):
+    """Run a YUM command.
+
+    Checks the output and retries if the fatal flag is set to True.
+
+    :param cmd: str: The yum command to run.
+    :param fatal: bool: Whether the command's output should be checked and
+        retried.
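+
+    Illustrative use (the package name is hypothetical)::
+
+        _run_yum_command(['yum', '--assumeyes', 'install', 'nginx'],
+                         fatal=True)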
+ """ + env = os.environ.copy() + + if fatal: + retry_count = 0 + result = None + + # If the command is considered "fatal", we need to retry if the yum + # lock was not acquired. + + while result is None or result == YUM_NO_LOCK: + try: + result = subprocess.check_call(cmd, env=env) + except subprocess.CalledProcessError as e: + retry_count = retry_count + 1 + if retry_count > YUM_NO_LOCK_RETRY_COUNT: + raise + result = e.returncode + log("Couldn't acquire YUM lock. Will retry in {} seconds." + "".format(YUM_NO_LOCK_RETRY_DELAY)) + time.sleep(YUM_NO_LOCK_RETRY_DELAY) + + else: + subprocess.call(cmd, env=env) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/fetch/giturl.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/fetch/giturl.py new file mode 100644 index 0000000000000000000000000000000000000000..070ca9bb5c1a2fdef39f88606ffcaf39bb049410 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/fetch/giturl.py @@ -0,0 +1,69 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import os +from subprocess import check_output, CalledProcessError, STDOUT +from charmhelpers.fetch import ( + BaseFetchHandler, + UnhandledSource, + filter_installed_packages, + install, +) + +if filter_installed_packages(['git']) != []: + install(['git']) + if filter_installed_packages(['git']) != []: + raise NotImplementedError('Unable to install git') + + +class GitUrlFetchHandler(BaseFetchHandler): + """Handler for git branches via generic and github URLs.""" + + def can_handle(self, source): + url_parts = self.parse_url(source) + # TODO (mattyw) no support for ssh git@ yet + if url_parts.scheme not in ('http', 'https', 'git', ''): + return False + elif not url_parts.scheme: + return os.path.exists(os.path.join(source, '.git')) + else: + return True + + def clone(self, source, dest, branch="master", depth=None): + if not self.can_handle(source): + raise UnhandledSource("Cannot handle {}".format(source)) + + if os.path.exists(dest): + cmd = ['git', '-C', dest, 'pull', source, branch] + else: + cmd = ['git', 'clone', source, dest, '--branch', branch] + if depth: + cmd.extend(['--depth', depth]) + check_output(cmd, stderr=STDOUT) + + def install(self, source, branch="master", dest=None, depth=None): + url_parts = self.parse_url(source) + branch_name = url_parts.path.strip("/").split("/")[-1] + if dest: + dest_dir = os.path.join(dest, branch_name) + else: + dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched", + branch_name) + try: + self.clone(source, dest_dir, branch, depth) + except CalledProcessError as e: + raise UnhandledSource(e) + except OSError as e: + raise UnhandledSource(e.strerror) + return dest_dir diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/fetch/python/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/fetch/python/__init__.py new file mode 
100644 index 0000000000000000000000000000000000000000..bff99dc93c64f80716e2d5a2b6d0d4e8a2436955 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/fetch/python/__init__.py @@ -0,0 +1,13 @@ +# Copyright 2014-2019 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/fetch/python/debug.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/fetch/python/debug.py new file mode 100644 index 0000000000000000000000000000000000000000..757135ee4cf3b5ff4c02305126f5ca3940892afc --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/fetch/python/debug.py @@ -0,0 +1,54 @@ +#!/usr/bin/env python +# coding: utf-8 + +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +from __future__ import print_function + +import atexit +import sys + +from charmhelpers.fetch.python.rpdb import Rpdb +from charmhelpers.core.hookenv import ( + open_port, + close_port, + ERROR, + log +) + +__author__ = "Jorge Niedbalski " + +DEFAULT_ADDR = "0.0.0.0" +DEFAULT_PORT = 4444 + + +def _error(message): + log(message, level=ERROR) + + +def set_trace(addr=DEFAULT_ADDR, port=DEFAULT_PORT): + """ + Set a trace point using the remote debugger + """ + atexit.register(close_port, port) + try: + log("Starting a remote python debugger session on %s:%s" % (addr, + port)) + open_port(port) + debugger = Rpdb(addr=addr, port=port) + debugger.set_trace(sys._getframe().f_back) + except Exception: + _error("Cannot start a remote debug session on %s:%s" % (addr, + port)) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/fetch/python/packages.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/fetch/python/packages.py new file mode 100644 index 0000000000000000000000000000000000000000..6e95028bc540aace84a2ec6c1bcc4de2663e8a87 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/fetch/python/packages.py @@ -0,0 +1,154 @@ +#!/usr/bin/env python +# coding: utf-8 + +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import os +import six +import subprocess +import sys + +from charmhelpers.fetch import apt_install, apt_update +from charmhelpers.core.hookenv import charm_dir, log + +__author__ = "Jorge Niedbalski " + + +def pip_execute(*args, **kwargs): + """Overriden pip_execute() to stop sys.path being changed. + + The act of importing main from the pip module seems to cause add wheels + from the /usr/share/python-wheels which are installed by various tools. + This function ensures that sys.path remains the same after the call is + executed. + """ + try: + _path = sys.path + try: + from pip import main as _pip_execute + except ImportError: + apt_update() + if six.PY2: + apt_install('python-pip') + else: + apt_install('python3-pip') + from pip import main as _pip_execute + _pip_execute(*args, **kwargs) + finally: + sys.path = _path + + +def parse_options(given, available): + """Given a set of options, check if available""" + for key, value in sorted(given.items()): + if not value: + continue + if key in available: + yield "--{0}={1}".format(key, value) + + +def pip_install_requirements(requirements, constraints=None, **options): + """Install a requirements file. + + :param constraints: Path to pip constraints file. + http://pip.readthedocs.org/en/stable/user_guide/#constraints-files + """ + command = ["install"] + + available_options = ('proxy', 'src', 'log', ) + for option in parse_options(options, available_options): + command.append(option) + + command.append("-r {0}".format(requirements)) + if constraints: + command.append("-c {0}".format(constraints)) + log("Installing from file: {} with constraints {} " + "and options: {}".format(requirements, constraints, command)) + else: + log("Installing from file: {} with options: {}".format(requirements, + command)) + pip_execute(command) + + +def pip_install(package, fatal=False, upgrade=False, venv=None, + constraints=None, **options): + """Install a python package""" + if venv: + venv_python = os.path.join(venv, 'bin/pip') + command = [venv_python, "install"] + else: + command = ["install"] + + available_options = ('proxy', 'src', 'log', 'index-url', ) + for option in parse_options(options, available_options): + command.append(option) + + if upgrade: + command.append('--upgrade') + + if constraints: + command.extend(['-c', constraints]) + + if isinstance(package, list): + command.extend(package) + else: + command.append(package) + + log("Installing {} package with options: {}".format(package, + command)) + if venv: + subprocess.check_call(command) + else: + pip_execute(command) + + +def pip_uninstall(package, **options): + """Uninstall a python package""" + command = ["uninstall", "-q", "-y"] + + available_options = ('proxy', 'log', ) + for option in parse_options(options, available_options): + command.append(option) + + if isinstance(package, list): + command.extend(package) + else: + command.append(package) + + log("Uninstalling {} package with options: {}".format(package, + command)) + pip_execute(command) + + +def pip_list(): + """Returns the list of current python installed packages + """ + return pip_execute(["list"]) + + +def 
pip_create_virtualenv(path=None): + """Create an isolated Python environment.""" + if six.PY2: + apt_install('python-virtualenv') + else: + apt_install('python3-virtualenv') + + if path: + venv_path = path + else: + venv_path = os.path.join(charm_dir(), 'venv') + + if not os.path.exists(venv_path): + subprocess.check_call(['virtualenv', venv_path]) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/fetch/python/rpdb.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/fetch/python/rpdb.py new file mode 100644 index 0000000000000000000000000000000000000000..9b31610c22fc2d24fe5097016cf45728f87de4ae --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/fetch/python/rpdb.py @@ -0,0 +1,56 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +"""Remote Python Debugger (pdb wrapper).""" + +import pdb +import socket +import sys + +__author__ = "Bertrand Janin " +__version__ = "0.1.3" + + +class Rpdb(pdb.Pdb): + + def __init__(self, addr="127.0.0.1", port=4444): + """Initialize the socket and initialize pdb.""" + + # Backup stdin and stdout before replacing them by the socket handle + self.old_stdout = sys.stdout + self.old_stdin = sys.stdin + + # Open a 'reusable' socket to let the webapp reload on the same port + self.skt = socket.socket(socket.AF_INET, socket.SOCK_STREAM) + self.skt.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, True) + self.skt.bind((addr, port)) + self.skt.listen(1) + (clientsocket, address) = self.skt.accept() + handle = clientsocket.makefile('rw') + pdb.Pdb.__init__(self, completekey='tab', stdin=handle, stdout=handle) + sys.stdout = sys.stdin = handle + + def shutdown(self): + """Revert stdin and stdout, close the socket.""" + sys.stdout = self.old_stdout + sys.stdin = self.old_stdin + self.skt.close() + self.set_continue() + + def do_continue(self, arg): + """Stop all operation on ``continue``.""" + self.shutdown() + return 1 + + do_EOF = do_quit = do_exit = do_c = do_cont = do_continue diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/fetch/python/version.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/fetch/python/version.py new file mode 100644 index 0000000000000000000000000000000000000000..3eb421036ff737f8ff1684e85ff87703e30fe543 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/fetch/python/version.py @@ -0,0 +1,32 @@ +#!/usr/bin/env python +# coding: utf-8 + +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
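# --- Editorial aside (not part of the vendored diff): illustrative use of the
# pip helpers above. The virtualenv path, package names and proxy are
# hypothetical; keyword options are turned into pip flags by parse_options(),
# e.g. proxy='http://proxy.example:3128' becomes --proxy=http://proxy.example:3128.
from charmhelpers.fetch.python.packages import (
    pip_create_virtualenv,
    pip_install,
)

pip_create_virtualenv('/tmp/venv')  # hypothetical path; apt-installs virtualenv first
pip_install(['six', 'PyYAML'], venv='/tmp/venv', upgrade=True,
            proxy='http://proxy.example:3128')  # hypothetical proxy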
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import sys + +__author__ = "Jorge Niedbalski " + + +def current_version(): + """Current system python version""" + return sys.version_info + + +def current_version_string(): + """Current system python version as string major.minor.micro""" + return "{0}.{1}.{2}".format(sys.version_info.major, + sys.version_info.minor, + sys.version_info.micro) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/fetch/snap.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/fetch/snap.py new file mode 100644 index 0000000000000000000000000000000000000000..fc70aa941bc4f0bb5ff126237db65705b9e4a10a --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/fetch/snap.py @@ -0,0 +1,150 @@ +# Copyright 2014-2017 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +""" +Charm helpers snap for classic charms. + +If writing reactive charms, use the snap layer: +https://lists.ubuntu.com/archives/snapcraft/2016-September/001114.html +""" +import subprocess +import os +from time import sleep +from charmhelpers.core.hookenv import log + +__author__ = 'Joseph Borg ' + +# The return code for "couldn't acquire lock" in Snap +# (hopefully this will be improved). +SNAP_NO_LOCK = 1 +SNAP_NO_LOCK_RETRY_DELAY = 10 # Wait X seconds between Snap lock checks. +SNAP_NO_LOCK_RETRY_COUNT = 30 # Retry to acquire the lock X times. +SNAP_CHANNELS = [ + 'edge', + 'beta', + 'candidate', + 'stable', +] + + +class CouldNotAcquireLockException(Exception): + pass + + +class InvalidSnapChannel(Exception): + pass + + +def _snap_exec(commands): + """ + Execute snap commands. + + :param commands: List commands + :return: Integer exit code + """ + assert type(commands) == list + + retry_count = 0 + return_code = None + + while return_code is None or return_code == SNAP_NO_LOCK: + try: + return_code = subprocess.check_call(['snap'] + commands, + env=os.environ) + except subprocess.CalledProcessError as e: + retry_count += 1 + if retry_count > SNAP_NO_LOCK_RETRY_COUNT: + raise CouldNotAcquireLockException( + 'Could not acquire lock after {} attempts' + .format(SNAP_NO_LOCK_RETRY_COUNT)) + return_code = e.returncode + log('Snap failed to acquire lock, trying again in {} seconds.' + .format(SNAP_NO_LOCK_RETRY_DELAY), level='WARN') + sleep(SNAP_NO_LOCK_RETRY_DELAY) + + return return_code + + +def snap_install(packages, *flags): + """ + Install a snap package.
+ + :param packages: String or List String package name + :param flags: List String flags to pass to install command + :return: Integer return code from snap + """ + if type(packages) is not list: + packages = [packages] + + flags = list(flags) + + message = 'Installing snap(s) "%s"' % ', '.join(packages) + if flags: + message += ' with option(s) "%s"' % ', '.join(flags) + + log(message, level='INFO') + return _snap_exec(['install'] + flags + packages) + + +def snap_remove(packages, *flags): + """ + Remove a snap package. + + :param packages: String or List String package name + :param flags: List String flags to pass to remove command + :return: Integer return code from snap + """ + if type(packages) is not list: + packages = [packages] + + flags = list(flags) + + message = 'Removing snap(s) "%s"' % ', '.join(packages) + if flags: + message += ' with options "%s"' % ', '.join(flags) + + log(message, level='INFO') + return _snap_exec(['remove'] + flags + packages) + + +def snap_refresh(packages, *flags): + """ + Refresh / Update snap package. + + :param packages: String or List String package name + :param flags: List String flags to pass to refresh command + :return: Integer return code from snap + """ + if type(packages) is not list: + packages = [packages] + + flags = list(flags) + + message = 'Refreshing snap(s) "%s"' % ', '.join(packages) + if flags: + message += ' with options "%s"' % ', '.join(flags) + + log(message, level='INFO') + return _snap_exec(['refresh'] + flags + packages) + + +def valid_snap_channel(channel): + """ Validate snap channel exists + + :raises InvalidSnapChannel: When channel does not exist + :return: Boolean + """ + if channel.lower() in SNAP_CHANNELS: + return True + else: + raise InvalidSnapChannel("Invalid Snap Channel: {}".format(channel)) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/fetch/ubuntu.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/fetch/ubuntu.py new file mode 100644 index 0000000000000000000000000000000000000000..3ddaf0dd47f23cef60d7b7e59a83c989999d9f3f --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/fetch/ubuntu.py @@ -0,0 +1,805 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
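# --- Editorial aside (not part of the vendored diff): illustrative use of the
# snap helpers above. The snap name and channel are examples; extra positional
# flags are passed through verbatim to the snap CLI, and _snap_exec() retries
# while snapd holds its lock (exit code SNAP_NO_LOCK).
from charmhelpers.fetch.snap import snap_install, valid_snap_channel

if valid_snap_channel('edge'):  # raises InvalidSnapChannel for unknown names
    snap_install('hello-world', '--channel=edge')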
+ +from collections import OrderedDict +import platform +import re +import six +import subprocess +import sys +import time + +from charmhelpers.core.host import get_distrib_codename, get_system_env + +from charmhelpers.core.hookenv import ( + log, + DEBUG, + WARNING, + env_proxy_settings, +) +from charmhelpers.fetch import SourceConfigError, GPGKeyError +from charmhelpers.fetch import ubuntu_apt_pkg + +PROPOSED_POCKET = ( + "# Proposed\n" + "deb http://archive.ubuntu.com/ubuntu {}-proposed main universe " + "multiverse restricted\n") +PROPOSED_PORTS_POCKET = ( + "# Proposed\n" + "deb http://ports.ubuntu.com/ubuntu-ports {}-proposed main universe " + "multiverse restricted\n") +# Only supports 64bit and ppc64 at the moment. +ARCH_TO_PROPOSED_POCKET = { + 'x86_64': PROPOSED_POCKET, + 'ppc64le': PROPOSED_PORTS_POCKET, + 'aarch64': PROPOSED_PORTS_POCKET, + 's390x': PROPOSED_PORTS_POCKET, +} +CLOUD_ARCHIVE_URL = "http://ubuntu-cloud.archive.canonical.com/ubuntu" +CLOUD_ARCHIVE_KEY_ID = '5EDB1B62EC4926EA' +CLOUD_ARCHIVE = """# Ubuntu Cloud Archive +deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main +""" +CLOUD_ARCHIVE_POCKETS = { + # Folsom + 'folsom': 'precise-updates/folsom', + 'folsom/updates': 'precise-updates/folsom', + 'precise-folsom': 'precise-updates/folsom', + 'precise-folsom/updates': 'precise-updates/folsom', + 'precise-updates/folsom': 'precise-updates/folsom', + 'folsom/proposed': 'precise-proposed/folsom', + 'precise-folsom/proposed': 'precise-proposed/folsom', + 'precise-proposed/folsom': 'precise-proposed/folsom', + # Grizzly + 'grizzly': 'precise-updates/grizzly', + 'grizzly/updates': 'precise-updates/grizzly', + 'precise-grizzly': 'precise-updates/grizzly', + 'precise-grizzly/updates': 'precise-updates/grizzly', + 'precise-updates/grizzly': 'precise-updates/grizzly', + 'grizzly/proposed': 'precise-proposed/grizzly', + 'precise-grizzly/proposed': 'precise-proposed/grizzly', + 'precise-proposed/grizzly': 'precise-proposed/grizzly', + # Havana + 'havana': 'precise-updates/havana', + 'havana/updates': 'precise-updates/havana', + 'precise-havana': 'precise-updates/havana', + 'precise-havana/updates': 'precise-updates/havana', + 'precise-updates/havana': 'precise-updates/havana', + 'havana/proposed': 'precise-proposed/havana', + 'precise-havana/proposed': 'precise-proposed/havana', + 'precise-proposed/havana': 'precise-proposed/havana', + # Icehouse + 'icehouse': 'precise-updates/icehouse', + 'icehouse/updates': 'precise-updates/icehouse', + 'precise-icehouse': 'precise-updates/icehouse', + 'precise-icehouse/updates': 'precise-updates/icehouse', + 'precise-updates/icehouse': 'precise-updates/icehouse', + 'icehouse/proposed': 'precise-proposed/icehouse', + 'precise-icehouse/proposed': 'precise-proposed/icehouse', + 'precise-proposed/icehouse': 'precise-proposed/icehouse', + # Juno + 'juno': 'trusty-updates/juno', + 'juno/updates': 'trusty-updates/juno', + 'trusty-juno': 'trusty-updates/juno', + 'trusty-juno/updates': 'trusty-updates/juno', + 'trusty-updates/juno': 'trusty-updates/juno', + 'juno/proposed': 'trusty-proposed/juno', + 'trusty-juno/proposed': 'trusty-proposed/juno', + 'trusty-proposed/juno': 'trusty-proposed/juno', + # Kilo + 'kilo': 'trusty-updates/kilo', + 'kilo/updates': 'trusty-updates/kilo', + 'trusty-kilo': 'trusty-updates/kilo', + 'trusty-kilo/updates': 'trusty-updates/kilo', + 'trusty-updates/kilo': 'trusty-updates/kilo', + 'kilo/proposed': 'trusty-proposed/kilo', + 'trusty-kilo/proposed': 'trusty-proposed/kilo', + 'trusty-proposed/kilo': 
'trusty-proposed/kilo', + # Liberty + 'liberty': 'trusty-updates/liberty', + 'liberty/updates': 'trusty-updates/liberty', + 'trusty-liberty': 'trusty-updates/liberty', + 'trusty-liberty/updates': 'trusty-updates/liberty', + 'trusty-updates/liberty': 'trusty-updates/liberty', + 'liberty/proposed': 'trusty-proposed/liberty', + 'trusty-liberty/proposed': 'trusty-proposed/liberty', + 'trusty-proposed/liberty': 'trusty-proposed/liberty', + # Mitaka + 'mitaka': 'trusty-updates/mitaka', + 'mitaka/updates': 'trusty-updates/mitaka', + 'trusty-mitaka': 'trusty-updates/mitaka', + 'trusty-mitaka/updates': 'trusty-updates/mitaka', + 'trusty-updates/mitaka': 'trusty-updates/mitaka', + 'mitaka/proposed': 'trusty-proposed/mitaka', + 'trusty-mitaka/proposed': 'trusty-proposed/mitaka', + 'trusty-proposed/mitaka': 'trusty-proposed/mitaka', + # Newton + 'newton': 'xenial-updates/newton', + 'newton/updates': 'xenial-updates/newton', + 'xenial-newton': 'xenial-updates/newton', + 'xenial-newton/updates': 'xenial-updates/newton', + 'xenial-updates/newton': 'xenial-updates/newton', + 'newton/proposed': 'xenial-proposed/newton', + 'xenial-newton/proposed': 'xenial-proposed/newton', + 'xenial-proposed/newton': 'xenial-proposed/newton', + # Ocata + 'ocata': 'xenial-updates/ocata', + 'ocata/updates': 'xenial-updates/ocata', + 'xenial-ocata': 'xenial-updates/ocata', + 'xenial-ocata/updates': 'xenial-updates/ocata', + 'xenial-updates/ocata': 'xenial-updates/ocata', + 'ocata/proposed': 'xenial-proposed/ocata', + 'xenial-ocata/proposed': 'xenial-proposed/ocata', + 'xenial-proposed/ocata': 'xenial-proposed/ocata', + # Pike + 'pike': 'xenial-updates/pike', + 'xenial-pike': 'xenial-updates/pike', + 'xenial-pike/updates': 'xenial-updates/pike', + 'xenial-updates/pike': 'xenial-updates/pike', + 'pike/proposed': 'xenial-proposed/pike', + 'xenial-pike/proposed': 'xenial-proposed/pike', + 'xenial-proposed/pike': 'xenial-proposed/pike', + # Queens + 'queens': 'xenial-updates/queens', + 'xenial-queens': 'xenial-updates/queens', + 'xenial-queens/updates': 'xenial-updates/queens', + 'xenial-updates/queens': 'xenial-updates/queens', + 'queens/proposed': 'xenial-proposed/queens', + 'xenial-queens/proposed': 'xenial-proposed/queens', + 'xenial-proposed/queens': 'xenial-proposed/queens', + # Rocky + 'rocky': 'bionic-updates/rocky', + 'bionic-rocky': 'bionic-updates/rocky', + 'bionic-rocky/updates': 'bionic-updates/rocky', + 'bionic-updates/rocky': 'bionic-updates/rocky', + 'rocky/proposed': 'bionic-proposed/rocky', + 'bionic-rocky/proposed': 'bionic-proposed/rocky', + 'bionic-proposed/rocky': 'bionic-proposed/rocky', + # Stein + 'stein': 'bionic-updates/stein', + 'bionic-stein': 'bionic-updates/stein', + 'bionic-stein/updates': 'bionic-updates/stein', + 'bionic-updates/stein': 'bionic-updates/stein', + 'stein/proposed': 'bionic-proposed/stein', + 'bionic-stein/proposed': 'bionic-proposed/stein', + 'bionic-proposed/stein': 'bionic-proposed/stein', + # Train + 'train': 'bionic-updates/train', + 'bionic-train': 'bionic-updates/train', + 'bionic-train/updates': 'bionic-updates/train', + 'bionic-updates/train': 'bionic-updates/train', + 'train/proposed': 'bionic-proposed/train', + 'bionic-train/proposed': 'bionic-proposed/train', + 'bionic-proposed/train': 'bionic-proposed/train', + # Ussuri + 'ussuri': 'bionic-updates/ussuri', + 'bionic-ussuri': 'bionic-updates/ussuri', + 'bionic-ussuri/updates': 'bionic-updates/ussuri', + 'bionic-updates/ussuri': 'bionic-updates/ussuri', + 'ussuri/proposed': 'bionic-proposed/ussuri', + 
'bionic-ussuri/proposed': 'bionic-proposed/ussuri', + 'bionic-proposed/ussuri': 'bionic-proposed/ussuri', +} + + +APT_NO_LOCK = 100 # The return code for "couldn't acquire lock" in APT. +CMD_RETRY_DELAY = 10 # Wait 10 seconds between command retries. +CMD_RETRY_COUNT = 3 # Retry a failing fatal command X times. + + +def filter_installed_packages(packages): + """Return a list of packages that require installation.""" + cache = apt_cache() + _pkgs = [] + for package in packages: + try: + p = cache[package] + p.current_ver or _pkgs.append(package) + except KeyError: + log('Package {} has no installation candidate.'.format(package), + level='WARNING') + _pkgs.append(package) + return _pkgs + + +def filter_missing_packages(packages): + """Return a list of packages that are installed. + + :param packages: list of packages to evaluate. + :returns list: Packages that are installed. + """ + return list( + set(packages) - + set(filter_installed_packages(packages)) + ) + + +def apt_cache(*_, **__): + """Shim returning an object simulating the apt_pkg Cache. + + :param _: Accept arguments for compability, not used. + :type _: any + :param __: Accept keyword arguments for compability, not used. + :type __: any + :returns:Object used to interrogate the system apt and dpkg databases. + :rtype:ubuntu_apt_pkg.Cache + """ + if 'apt_pkg' in sys.modules: + # NOTE(fnordahl): When our consumer use the upstream ``apt_pkg`` module + # in conjunction with the apt_cache helper function, they may expect us + # to call ``apt_pkg.init()`` for them. + # + # Detect this situation, log a warning and make the call to + # ``apt_pkg.init()`` to avoid the consumer Python interpreter from + # crashing with a segmentation fault. + log('Support for use of upstream ``apt_pkg`` module in conjunction' + 'with charm-helpers is deprecated since 2019-06-25', level=WARNING) + sys.modules['apt_pkg'].init() + return ubuntu_apt_pkg.Cache() + + +def apt_install(packages, options=None, fatal=False): + """Install one or more packages. + + :param packages: Package(s) to install + :type packages: Option[str, List[str]] + :param options: Options to pass on to apt-get + :type options: Option[None, List[str]] + :param fatal: Whether the command's output should be checked and + retried. + :type fatal: bool + :raises: subprocess.CalledProcessError + """ + if options is None: + options = ['--option=Dpkg::Options::=--force-confold'] + + cmd = ['apt-get', '--assume-yes'] + cmd.extend(options) + cmd.append('install') + if isinstance(packages, six.string_types): + cmd.append(packages) + else: + cmd.extend(packages) + log("Installing {} with options: {}".format(packages, + options)) + _run_apt_command(cmd, fatal) + + +def apt_upgrade(options=None, fatal=False, dist=False): + """Upgrade all packages. + + :param options: Options to pass on to apt-get + :type options: Option[None, List[str]] + :param fatal: Whether the command's output should be checked and + retried. 
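# --- Editorial aside (not part of the vendored diff): the usual install idiom
# built from the helpers above -- refresh the index once, then install only
# what filter_installed_packages() reports as missing. Package names are
# hypothetical.
from charmhelpers.fetch.ubuntu import (
    apt_install,
    apt_update,
    filter_installed_packages,
)

apt_update(fatal=True)
missing = filter_installed_packages(['curl', 'jq'])  # hypothetical packages
if missing:
    apt_install(missing, fatal=True)  # fatal=True retries on the dpkg lock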
+ :type fatal: bool + :param dist: Whether ``dist-upgrade`` should be used over ``upgrade`` + :type dist: bool + :raises: subprocess.CalledProcessError + """ + if options is None: + options = ['--option=Dpkg::Options::=--force-confold'] + + cmd = ['apt-get', '--assume-yes'] + cmd.extend(options) + if dist: + cmd.append('dist-upgrade') + else: + cmd.append('upgrade') + log("Upgrading with options: {}".format(options)) + _run_apt_command(cmd, fatal) + + +def apt_update(fatal=False): + """Update local apt cache.""" + cmd = ['apt-get', 'update'] + _run_apt_command(cmd, fatal) + + +def apt_purge(packages, fatal=False): + """Purge one or more packages. + + :param packages: Package(s) to install + :type packages: Option[str, List[str]] + :param fatal: Whether the command's output should be checked and + retried. + :type fatal: bool + :raises: subprocess.CalledProcessError + """ + cmd = ['apt-get', '--assume-yes', 'purge'] + if isinstance(packages, six.string_types): + cmd.append(packages) + else: + cmd.extend(packages) + log("Purging {}".format(packages)) + _run_apt_command(cmd, fatal) + + +def apt_autoremove(purge=True, fatal=False): + """Purge one or more packages. + :param purge: Whether the ``--purge`` option should be passed on or not. + :type purge: bool + :param fatal: Whether the command's output should be checked and + retried. + :type fatal: bool + :raises: subprocess.CalledProcessError + """ + cmd = ['apt-get', '--assume-yes', 'autoremove'] + if purge: + cmd.append('--purge') + _run_apt_command(cmd, fatal) + + +def apt_mark(packages, mark, fatal=False): + """Flag one or more packages using apt-mark.""" + log("Marking {} as {}".format(packages, mark)) + cmd = ['apt-mark', mark] + if isinstance(packages, six.string_types): + cmd.append(packages) + else: + cmd.extend(packages) + + if fatal: + subprocess.check_call(cmd, universal_newlines=True) + else: + subprocess.call(cmd, universal_newlines=True) + + +def apt_hold(packages, fatal=False): + return apt_mark(packages, 'hold', fatal=fatal) + + +def apt_unhold(packages, fatal=False): + return apt_mark(packages, 'unhold', fatal=fatal) + + +def import_key(key): + """Import an ASCII Armor key. + + A Radix64 format keyid is also supported for backwards + compatibility. In this case Ubuntu keyserver will be + queried for a key via HTTPS by its keyid. This method + is less preferrable because https proxy servers may + require traffic decryption which is equivalent to a + man-in-the-middle attack (a proxy server impersonates + keyserver TLS certificates and has to be explicitly + trusted by the system). + + :param key: A GPG key in ASCII armor format, + including BEGIN and END markers or a keyid. + :type key: (bytes, str) + :raises: GPGKeyError if the key could not be imported + """ + key = key.strip() + if '-' in key or '\n' in key: + # Send everything not obviously a keyid to GPG to import, as + # we trust its validation better than our own. eg. handling + # comments before the key. 
+ log("PGP key found (looks like ASCII Armor format)", level=DEBUG) + if ('-----BEGIN PGP PUBLIC KEY BLOCK-----' in key and + '-----END PGP PUBLIC KEY BLOCK-----' in key): + log("Writing provided PGP key in the binary format", level=DEBUG) + if six.PY3: + key_bytes = key.encode('utf-8') + else: + key_bytes = key + key_name = _get_keyid_by_gpg_key(key_bytes) + key_gpg = _dearmor_gpg_key(key_bytes) + _write_apt_gpg_keyfile(key_name=key_name, key_material=key_gpg) + else: + raise GPGKeyError("ASCII armor markers missing from GPG key") + else: + log("PGP key found (looks like Radix64 format)", level=WARNING) + log("SECURELY importing PGP key from keyserver; " + "full key not provided.", level=WARNING) + # as of bionic add-apt-repository uses curl with an HTTPS keyserver URL + # to retrieve GPG keys. `apt-key adv` command is deprecated as is + # apt-key in general as noted in its manpage. See lp:1433761 for more + # history. Instead, /etc/apt/trusted.gpg.d is used directly to drop + # gpg + key_asc = _get_key_by_keyid(key) + # write the key in GPG format so that apt-key list shows it + key_gpg = _dearmor_gpg_key(key_asc) + _write_apt_gpg_keyfile(key_name=key, key_material=key_gpg) + + +def _get_keyid_by_gpg_key(key_material): + """Get a GPG key fingerprint by GPG key material. + Gets a GPG key fingerprint (40-digit, 160-bit) by the ASCII armor-encoded + or binary GPG key material. Can be used, for example, to generate file + names for keys passed via charm options. + + :param key_material: ASCII armor-encoded or binary GPG key material + :type key_material: bytes + :raises: GPGKeyError if invalid key material has been provided + :returns: A GPG key fingerprint + :rtype: str + """ + # Use the same gpg command for both Xenial and Bionic + cmd = 'gpg --with-colons --with-fingerprint' + ps = subprocess.Popen(cmd.split(), + stdout=subprocess.PIPE, + stderr=subprocess.PIPE, + stdin=subprocess.PIPE) + out, err = ps.communicate(input=key_material) + if six.PY3: + out = out.decode('utf-8') + err = err.decode('utf-8') + if 'gpg: no valid OpenPGP data found.' in err: + raise GPGKeyError('Invalid GPG key material provided') + # from gnupg2 docs: fpr :: Fingerprint (fingerprint is in field 10) + return re.search(r"^fpr:{9}([0-9A-F]{40}):$", out, re.MULTILINE).group(1) + + +def _get_key_by_keyid(keyid): + """Get a key via HTTPS from the Ubuntu keyserver. + Different key ID formats are supported by SKS keyservers (the longer ones + are more secure, see "dead beef attack" and https://evil32.com/). Since + HTTPS is used, if SSLBump-like HTTPS proxies are in place, they will + impersonate keyserver.ubuntu.com and generate a certificate with + keyserver.ubuntu.com in the CN field or in SubjAltName fields of a + certificate. If such proxy behavior is expected it is necessary to add the + CA certificate chain containing the intermediate CA of the SSLBump proxy to + every machine that this code runs on via ca-certs cloud-init directive (via + cloudinit-userdata model-config) or via other means (such as through a + custom charm option). Also note that DNS resolution for the hostname in a + URL is done at a proxy server - not at the client side. 
+ + 8-digit (32 bit) key ID + https://keyserver.ubuntu.com/pks/lookup?search=0x4652B4E6 + 16-digit (64 bit) key ID + https://keyserver.ubuntu.com/pks/lookup?search=0x6E85A86E4652B4E6 + 40-digit key ID: + https://keyserver.ubuntu.com/pks/lookup?search=0x35F77D63B5CEC106C577ED856E85A86E4652B4E6 + + :param keyid: An 8, 16 or 40 hex digit keyid to find a key for + :type keyid: (bytes, str) + :returns: A key material for the specified GPG key id + :rtype: (str, bytes) + :raises: subprocess.CalledProcessError + """ + # options=mr - machine-readable output (disables html wrappers) + keyserver_url = ('https://keyserver.ubuntu.com' + '/pks/lookup?op=get&options=mr&exact=on&search=0x{}') + curl_cmd = ['curl', keyserver_url.format(keyid)] + # use proxy server settings in order to retrieve the key + return subprocess.check_output(curl_cmd, + env=env_proxy_settings(['https'])) + + +def _dearmor_gpg_key(key_asc): + """Converts a GPG key in the ASCII armor format to the binary format. + + :param key_asc: A GPG key in ASCII armor format. + :type key_asc: (str, bytes) + :returns: A GPG key in binary format + :rtype: (str, bytes) + :raises: GPGKeyError + """ + ps = subprocess.Popen(['gpg', '--dearmor'], + stdout=subprocess.PIPE, + stderr=subprocess.PIPE, + stdin=subprocess.PIPE) + out, err = ps.communicate(input=key_asc) + # no need to decode output as it is binary (invalid utf-8), only error + if six.PY3: + err = err.decode('utf-8') + if 'gpg: no valid OpenPGP data found.' in err: + raise GPGKeyError('Invalid GPG key material. Check your network setup' + ' (MTU, routing, DNS) and/or proxy server settings' + ' as well as destination keyserver status.') + else: + return out + + +def _write_apt_gpg_keyfile(key_name, key_material): + """Writes GPG key material into a file at a provided path. + + :param key_name: A key name to use for a key file (could be a fingerprint) + :type key_name: str + :param key_material: A GPG key material (binary) + :type key_material: (str, bytes) + """ + with open('/etc/apt/trusted.gpg.d/{}.gpg'.format(key_name), + 'wb') as keyf: + keyf.write(key_material) + + +def add_source(source, key=None, fail_invalid=False): + """Add a package source to this system. + + @param source: a URL or sources.list entry, as supported by + add-apt-repository(1). Examples:: + + ppa:charmers/example + deb https://stub:key@private.example.com/ubuntu trusty main + + In addition: + 'proposed:' may be used to enable the standard 'proposed' + pocket for the release. + 'cloud:' may be used to activate official cloud archive pockets, + such as 'cloud:icehouse' + 'distro' may be used as a noop + + Full list of source specifications supported by the function are: + + 'distro': A NOP; i.e. it has no effect. + 'proposed': the proposed deb spec [2] is wrtten to + /etc/apt/sources.list/proposed + 'distro-proposed': adds -proposed to the debs [2] + 'ppa:': add-apt-repository --yes + 'deb ': add-apt-repository --yes deb + 'http://....': add-apt-repository --yes http://... + 'cloud-archive:': add-apt-repository -yes cloud-archive: + 'cloud:[-staging]': specify a Cloud Archive pocket with + optional staging version. If staging is used then the staging PPA [2] + with be used. If staging is NOT used then the cloud archive [3] will be + added, and the 'ubuntu-cloud-keyring' package will be added for the + current distro. + + Otherwise the source is not recognised and this is logged to the juju log. + However, no error is raised, unless sys_error_on_exit is True. 
+ + [1] deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main + where {} is replaced with the derived pocket name. + [2] deb http://archive.ubuntu.com/ubuntu {}-proposed \ + main universe multiverse restricted + where {} is replaced with the lsb_release codename (e.g. xenial) + [3] deb http://ubuntu-cloud.archive.canonical.com/ubuntu + to /etc/apt/sources.list.d/cloud-archive-list + + @param key: A key to be added to the system's APT keyring and used + to verify the signatures on packages. Ideally, this should be an + ASCII format GPG public key including the block headers. A GPG key + id may also be used, but be aware that only insecure protocols are + available to retrieve the actual public key from a public keyserver + placing your Juju environment at risk. ppa and cloud archive keys + are securely added automatically, so should not be provided. + + @param fail_invalid: (boolean) if True, then the function raises a + SourceConfigError if there is no matching installation source. + + @raises SourceConfigError() if for cloud:<pocket>, the <pocket> is not a + valid pocket in CLOUD_ARCHIVE_POCKETS + """ + _mapping = OrderedDict([ + (r"^distro$", lambda: None), # This is a NOP + (r"^(?:proposed|distro-proposed)$", _add_proposed), + (r"^cloud-archive:(.*)$", _add_apt_repository), + (r"^((?:deb |http:|https:|ppa:).*)$", _add_apt_repository), + (r"^cloud:(.*)-(.*)\/staging$", _add_cloud_staging), + (r"^cloud:(.*)-(.*)$", _add_cloud_distro_check), + (r"^cloud:(.*)$", _add_cloud_pocket), + (r"^snap:.*-(.*)-(.*)$", _add_cloud_distro_check), + ]) + if source is None: + source = '' + for r, fn in six.iteritems(_mapping): + m = re.match(r, source) + if m: + if key: + # Import key before adding the source which depends on it, + # as refreshing packages could fail otherwise. + try: + import_key(key) + except GPGKeyError as e: + raise SourceConfigError(str(e)) + # call the associated function with the captured groups + # raises SourceConfigError on error. + fn(*m.groups()) + break + else: + # nothing matched. log an error and maybe sys.exit + err = "Unknown source: {!r}".format(source) + log(err) + if fail_invalid: + raise SourceConfigError(err) + + +def _add_proposed(): + """Add the PROPOSED_POCKET as /etc/apt/sources.list.d/proposed.list + + Uses get_distrib_codename to determine the correct stanza for + the deb line. + + For intel architectures PROPOSED_POCKET is used for the release, but for + other architectures PROPOSED_PORTS_POCKET is used for the release. + """ + release = get_distrib_codename() + arch = platform.machine() + if arch not in six.iterkeys(ARCH_TO_PROPOSED_POCKET): + raise SourceConfigError("Arch {} not supported for (distro-)proposed" + .format(arch)) + with open('/etc/apt/sources.list.d/proposed.list', 'w') as apt: + apt.write(ARCH_TO_PROPOSED_POCKET[arch].format(release)) + + +def _add_apt_repository(spec): + """Add the spec using add_apt_repository + + :param spec: the parameter to pass to add_apt_repository + :type spec: str + """ + if '{series}' in spec: + series = get_distrib_codename() + spec = spec.replace('{series}', series) + # software-properties package for bionic properly reacts to proxy settings + # passed as environment variables (See lp:1433761). This is not the case + # for LTS and non-LTS releases below bionic. + _run_with_retries(['add-apt-repository', '--yes', spec], + cmd_env=env_proxy_settings(['https'])) + + +def _add_cloud_pocket(pocket): + """Add a cloud pocket as /etc/apt/sources.list.d/cloud-archive.list + + Note that this overwrites the existing file if there is one.
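# --- Editorial aside (not part of the vendored diff): illustrative use of
# add_source() above. The PPA is hypothetical; 'cloud:bionic-train' resolves
# through CLOUD_ARCHIVE_POCKETS and assumes the unit runs bionic, and
# fail_invalid=True turns an unrecognised source into a SourceConfigError
# instead of just a log entry.
from charmhelpers.fetch.ubuntu import add_source, apt_update

add_source('ppa:charmers/example', fail_invalid=True)  # hypothetical PPA
add_source('cloud:bionic-train')  # cloud archive pocket; distro-checked
apt_update(fatal=True)  # pick up the new sources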
+ + This function also converts the simple pocket in to the actual pocket using + the CLOUD_ARCHIVE_POCKETS mapping. + + :param pocket: string representing the pocket to add a deb spec for. + :raises: SourceConfigError if the cloud pocket doesn't exist or the + requested release doesn't match the current distro version. + """ + apt_install(filter_installed_packages(['ubuntu-cloud-keyring']), + fatal=True) + if pocket not in CLOUD_ARCHIVE_POCKETS: + raise SourceConfigError( + 'Unsupported cloud: source option %s' % + pocket) + actual_pocket = CLOUD_ARCHIVE_POCKETS[pocket] + with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as apt: + apt.write(CLOUD_ARCHIVE.format(actual_pocket)) + + +def _add_cloud_staging(cloud_archive_release, openstack_release): + """Add the cloud staging repository which is in + ppa:ubuntu-cloud-archive/-staging + + This function checks that the cloud_archive_release matches the current + codename for the distro that charm is being installed on. + + :param cloud_archive_release: string, codename for the release. + :param openstack_release: String, codename for the openstack release. + :raises: SourceConfigError if the cloud_archive_release doesn't match the + current version of the os. + """ + _verify_is_ubuntu_rel(cloud_archive_release, openstack_release) + ppa = 'ppa:ubuntu-cloud-archive/{}-staging'.format(openstack_release) + cmd = 'add-apt-repository -y {}'.format(ppa) + _run_with_retries(cmd.split(' ')) + + +def _add_cloud_distro_check(cloud_archive_release, openstack_release): + """Add the cloud pocket, but also check the cloud_archive_release against + the current distro, and use the openstack_release as the full lookup. + + This just calls _add_cloud_pocket() with the openstack_release as pocket + to get the correct cloud-archive.list for dpkg to work with. + + :param cloud_archive_release:String, codename for the distro release. + :param openstack_release: String, spec for the release to look up in the + CLOUD_ARCHIVE_POCKETS + :raises: SourceConfigError if this is the wrong distro, or the pocket spec + doesn't exist. + """ + _verify_is_ubuntu_rel(cloud_archive_release, openstack_release) + _add_cloud_pocket("{}-{}".format(cloud_archive_release, openstack_release)) + + +def _verify_is_ubuntu_rel(release, os_release): + """Verify that the release is in the same as the current ubuntu release. + + :param release: String, lowercase for the release. + :param os_release: String, the os_release being asked for + :raises: SourceConfigError if the release is not the same as the ubuntu + release. + """ + ubuntu_rel = get_distrib_codename() + if release != ubuntu_rel: + raise SourceConfigError( + 'Invalid Cloud Archive release specified: {}-{} on this Ubuntu' + 'version ({})'.format(release, os_release, ubuntu_rel)) + + +def _run_with_retries(cmd, max_retries=CMD_RETRY_COUNT, retry_exitcodes=(1,), + retry_message="", cmd_env=None): + """Run a command and retry until success or max_retries is reached. + + :param cmd: The apt command to run. + :type cmd: str + :param max_retries: The number of retries to attempt on a fatal + command. Defaults to CMD_RETRY_COUNT. + :type max_retries: int + :param retry_exitcodes: Optional additional exit codes to retry. + Defaults to retry on exit code 1. + :type retry_exitcodes: tuple + :param retry_message: Optional log prefix emitted during retries. + :type retry_message: str + :param: cmd_env: Environment variables to add to the command run. 
+ :type cmd_env: Option[None, Dict[str, str]] + """ + env = get_apt_dpkg_env() + if cmd_env: + env.update(cmd_env) + + if not retry_message: + retry_message = "Failed executing '{}'".format(" ".join(cmd)) + retry_message += ". Will retry in {} seconds".format(CMD_RETRY_DELAY) + + retry_count = 0 + result = None + + retry_results = (None,) + retry_exitcodes + while result in retry_results: + try: + result = subprocess.check_call(cmd, env=env) + except subprocess.CalledProcessError as e: + retry_count = retry_count + 1 + if retry_count > max_retries: + raise + result = e.returncode + log(retry_message) + time.sleep(CMD_RETRY_DELAY) + + +def _run_apt_command(cmd, fatal=False): + """Run an apt command with optional retries. + + :param cmd: The apt command to run. + :type cmd: str + :param fatal: Whether the command's output should be checked and + retried. + :type fatal: bool + """ + if fatal: + _run_with_retries( + cmd, retry_exitcodes=(1, APT_NO_LOCK,), + retry_message="Couldn't acquire DPKG lock") + else: + subprocess.call(cmd, env=get_apt_dpkg_env()) + + +def get_upstream_version(package): + """Determine upstream version based on installed package + + @returns None (if not installed) or the upstream version + """ + cache = apt_cache() + try: + pkg = cache[package] + except Exception: + # the package is unknown to the current apt cache. + return None + + if not pkg.current_ver: + # package is known, but no version is currently installed. + return None + + return ubuntu_apt_pkg.upstream_version(pkg.current_ver.ver_str) + + +def get_apt_dpkg_env(): + """Get environment suitable for execution of APT and DPKG tools. + + We keep this in a helper function instead of in a global constant to + avoid execution on import of the library. + :returns: Environment suitable for execution of APT and DPKG tools. + :rtype: Dict[str, str] + """ + # The fallback is used in the event of ``/etc/environment`` not containing + # avalid PATH variable. + return {'DEBIAN_FRONTEND': 'noninteractive', + 'PATH': get_system_env('PATH', '/usr/sbin:/usr/bin:/sbin:/bin')} diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/fetch/ubuntu_apt_pkg.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/fetch/ubuntu_apt_pkg.py new file mode 100644 index 0000000000000000000000000000000000000000..929a75d7a775921c32368e8cc7950eaa97390047 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/fetch/ubuntu_apt_pkg.py @@ -0,0 +1,267 @@ +# Copyright 2019 Canonical Ltd +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +"""Provide a subset of the ``python-apt`` module API. + +Data collection is done through subprocess calls to ``apt-cache`` and +``dpkg-query`` commands. + +The main purpose for this module is to avoid dependency on the +``python-apt`` python module. + +The indicated python module is a wrapper around the ``apt`` C++ library +which is tightly connected to the version of the distribution it was +shipped on. 
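# --- Editorial aside (not part of the vendored diff): illustrative use of
# get_upstream_version() above. The package name is hypothetical; the helper
# strips the epoch and Debian revision from the installed version and returns
# None when the package is absent.
from charmhelpers.fetch.ubuntu import get_upstream_version

version = get_upstream_version('openssl')  # e.g. '1.1.1' from '1.1.1-1ubuntu2'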
It is not developed in a backward/forward compatible manner. + +This in turn makes it incredibly hard to distribute as a wheel for a piece +of python software that supports a span of distro releases [0][1]. + +Upstream feedback like [2] does not give confidence in this ever changing, +so with this we get rid of the dependency. + +0: https://github.com/juju-solutions/layer-basic/pull/135 +1: https://bugs.launchpad.net/charm-octavia/+bug/1824112 +2: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=845330#10 +""" + +import locale +import os +import subprocess +import sys + + +class _container(dict): + """Simple container for attributes.""" + __getattr__ = dict.__getitem__ + __setattr__ = dict.__setitem__ + + +class Package(_container): + """Simple container for package attributes.""" + + +class Version(_container): + """Simple container for version attributes.""" + + +class Cache(object): + """Simulation of ``apt_pkg`` Cache object.""" + def __init__(self, progress=None): + pass + + def __contains__(self, package): + try: + pkg = self.__getitem__(package) + return pkg is not None + except KeyError: + return False + + def __getitem__(self, package): + """Get information about a package from apt and dpkg databases. + + :param package: Name of package + :type package: str + :returns: Package object + :rtype: object + :raises: KeyError, subprocess.CalledProcessError + """ + apt_result = self._apt_cache_show([package])[package] + apt_result['name'] = apt_result.pop('package') + pkg = Package(apt_result) + dpkg_result = self._dpkg_list([package]).get(package, {}) + current_ver = None + installed_version = dpkg_result.get('version') + if installed_version: + current_ver = Version({'ver_str': installed_version}) + pkg.current_ver = current_ver + pkg.architecture = dpkg_result.get('architecture') + return pkg + + def _dpkg_list(self, packages): + """Get data from system dpkg database for package. + + :param packages: Packages to get data from + :type packages: List[str] + :returns: Structured data about installed packages, keys like + ``dpkg-query --list`` + :rtype: dict + :raises: subprocess.CalledProcessError + """ + pkgs = {} + cmd = ['dpkg-query', '--list'] + cmd.extend(packages) + if locale.getlocale() == (None, None): + # subprocess calls out to locale.getpreferredencoding(False) to + # determine encoding. Workaround for Trusty where the + # environment appears to not be set up correctly. + locale.setlocale(locale.LC_ALL, 'en_US.UTF-8') + try: + output = subprocess.check_output(cmd, + stderr=subprocess.STDOUT, + universal_newlines=True) + except subprocess.CalledProcessError as cp: + # ``dpkg-query`` may return error and at the same time have + # produced useful output, for example when asked for multiple + # packages where some are not installed + if cp.returncode != 1: + raise + output = cp.output + headings = [] + for line in output.splitlines(): + if line.startswith('||/'): + headings = line.split() + headings.pop(0) + continue + elif (line.startswith('|') or line.startswith('+') or + line.startswith('dpkg-query:')): + continue + else: + data = line.split(None, 4) + status = data.pop(0) + if status != 'ii': + continue + pkg = {} + pkg.update({k.lower(): v for k, v in zip(headings, data)}) + if 'name' in pkg: + pkgs.update({pkg['name']: pkg}) + return pkgs + + def _apt_cache_show(self, packages): + """Get data from system apt cache for package. 
+ + :param packages: Packages to get data from + :type packages: List[str] + :returns: Structured data about package, keys like + ``apt-cache show`` + :rtype: dict + :raises: subprocess.CalledProcessError + """ + pkgs = {} + cmd = ['apt-cache', 'show', '--no-all-versions'] + cmd.extend(packages) + if locale.getlocale() == (None, None): + # subprocess calls out to locale.getpreferredencoding(False) to + # determine encoding. Workaround for Trusty where the + # environment appears to not be set up correctly. + locale.setlocale(locale.LC_ALL, 'en_US.UTF-8') + try: + output = subprocess.check_output(cmd, + stderr=subprocess.STDOUT, + universal_newlines=True) + previous = None + pkg = {} + for line in output.splitlines(): + if not line: + if 'package' in pkg: + pkgs.update({pkg['package']: pkg}) + pkg = {} + continue + if line.startswith(' '): + if previous and previous in pkg: + pkg[previous] += os.linesep + line.lstrip() + continue + if ':' in line: + kv = line.split(':', 1) + key = kv[0].lower() + if key == 'n': + continue + previous = key + pkg.update({key: kv[1].lstrip()}) + except subprocess.CalledProcessError as cp: + # ``apt-cache`` returns 100 if none of the packages asked for + # exist in the apt cache. + if cp.returncode != 100: + raise + return pkgs + + +class Config(_container): + def __init__(self): + super(Config, self).__init__(self._populate()) + + def _populate(self): + cfgs = {} + cmd = ['apt-config', 'dump'] + output = subprocess.check_output(cmd, + stderr=subprocess.STDOUT, + universal_newlines=True) + for line in output.splitlines(): + if not line.startswith("CommandLine"): + k, v = line.split(" ", 1) + cfgs[k] = v.strip(";").strip("\"") + + return cfgs + + +# Backwards compatibility with old apt_pkg module +sys.modules[__name__].config = Config() + + +def init(): + """Compability shim that does nothing.""" + pass + + +def upstream_version(version): + """Extracts upstream version from a version string. + + Upstream reference: https://salsa.debian.org/apt-team/apt/blob/master/ + apt-pkg/deb/debversion.cc#L259 + + :param version: Version string + :type version: str + :returns: Upstream version + :rtype: str + """ + if version: + version = version.split(':')[-1] + version = version.split('-')[0] + return version + + +def version_compare(a, b): + """Compare the given versions. + + Call out to ``dpkg`` to make sure the code doing the comparison is + compatible with what the ``apt`` library would do. Mimic the return + values. 
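# --- Editorial aside (not part of the vendored diff): illustrative use of the
# apt_pkg shim above. Cache() shells out to apt-cache and dpkg-query, so no
# python-apt binding is needed; the package name is an example.
from charmhelpers.fetch import ubuntu_apt_pkg

cache = ubuntu_apt_pkg.Cache()
if 'bash' in cache:  # known to apt, though not necessarily installed
    pkg = cache['bash']
    installed = pkg.current_ver  # None when not installed
    print(pkg.name, installed.ver_str if installed else 'not installed')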
+ + Upstream reference: + https://apt-team.pages.debian.net/python-apt/library/apt_pkg.html + ?highlight=version_compare#apt_pkg.version_compare + + :param a: version string + :type a: str + :param b: version string + :type b: str + :returns: >0 if ``a`` is greater than ``b``, 0 if a equals b, + <0 if ``a`` is smaller than ``b`` + :rtype: int + :raises: subprocess.CalledProcessError, RuntimeError + """ + for op in ('gt', 1), ('eq', 0), ('lt', -1): + try: + subprocess.check_call(['dpkg', '--compare-versions', + a, op[0], b], + stderr=subprocess.STDOUT, + universal_newlines=True) + return op[1] + except subprocess.CalledProcessError as cp: + if cp.returncode == 1: + continue + raise + else: + raise RuntimeError('Unable to compare "{}" and "{}", according to ' + 'our logic they are neither greater, equal nor ' + 'less than each other.') diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/osplatform.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/osplatform.py new file mode 100644 index 0000000000000000000000000000000000000000..78c81af5955caee51271c830d58cacac2cab9bcc --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/osplatform.py @@ -0,0 +1,46 @@ +import platform +import os + + +def get_platform(): + """Return the current OS platform. + + For example: if current os platform is Ubuntu then a string "ubuntu" + will be returned (which is the name of the module). + This string is used to decide which platform module should be imported. + """ + # linux_distribution is deprecated and will be removed in Python 3.7 + # Warnings *not* disabled, as we certainly need to fix this. + if hasattr(platform, 'linux_distribution'): + tuple_platform = platform.linux_distribution() + current_platform = tuple_platform[0] + else: + current_platform = _get_platform_from_fs() + + if "Ubuntu" in current_platform: + return "ubuntu" + elif "CentOS" in current_platform: + return "centos" + elif "debian" in current_platform: + # Stock Python does not detect Ubuntu and instead returns debian. + # Or at least it does in some build environments like Travis CI + return "ubuntu" + elif "elementary" in current_platform: + # ElementaryOS fails to run tests locally without this. + return "ubuntu" + else: + raise RuntimeError("This module is not supported on {}." + .format(current_platform)) + + +def _get_platform_from_fs(): + """Get Platform from /etc/os-release.""" + with open(os.path.join(os.sep, 'etc', 'os-release')) as fin: + content = dict( + line.split('=', 1) + for line in fin.read().splitlines() + if '=' in line + ) + for k, v in content.items(): + content[k] = v.strip('"') + return content["NAME"] diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/payload/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/payload/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..ee55cb3d2baddb556df910f1d41638c3c7f39c59 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/payload/__init__.py @@ -0,0 +1,15 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
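# --- Editorial aside (not part of the vendored diff): get_platform() above is
# how charmhelpers decides which platform-specific fetch module to load; this
# dispatch sketch assumes the companion centos fetch module (whose yum helpers
# appear earlier in this diff) is importable alongside the ubuntu one.
from charmhelpers.osplatform import get_platform

if get_platform() == 'ubuntu':
    from charmhelpers.fetch import ubuntu as fetch_impl  # noqa: F401
else:  # 'centos' is the only other value get_platform() returns
    from charmhelpers.fetch import centos as fetch_impl  # noqa: F401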
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +"Tools for working with files injected into a charm just before deployment." diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/payload/archive.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/payload/archive.py new file mode 100644 index 0000000000000000000000000000000000000000..7fc453f523cf49f0c123839a347c4452c6d465ca --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/payload/archive.py @@ -0,0 +1,71 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import os +import tarfile +import zipfile +from charmhelpers.core import ( + host, + hookenv, +) + + +class ArchiveError(Exception): + pass + + +def get_archive_handler(archive_name): + if os.path.isfile(archive_name): + if tarfile.is_tarfile(archive_name): + return extract_tarfile + elif zipfile.is_zipfile(archive_name): + return extract_zipfile + else: + # look at the file name + for ext in ('.tar', '.tar.gz', '.tgz', 'tar.bz2', '.tbz2', '.tbz'): + if archive_name.endswith(ext): + return extract_tarfile + for ext in ('.zip', '.jar'): + if archive_name.endswith(ext): + return extract_zipfile + + +def archive_dest_default(archive_name): + archive_file = os.path.basename(archive_name) + return os.path.join(hookenv.charm_dir(), "archives", archive_file) + + +def extract(archive_name, destpath=None): + handler = get_archive_handler(archive_name) + if handler: + if not destpath: + destpath = archive_dest_default(archive_name) + if not os.path.isdir(destpath): + host.mkdir(destpath) + handler(archive_name, destpath) + return destpath + else: + raise ArchiveError("No handler for archive") + + +def extract_tarfile(archive_name, destpath): + "Unpack a tar archive, optionally compressed" + archive = tarfile.open(archive_name) + archive.extractall(destpath) + + +def extract_zipfile(archive_name, destpath): + "Unpack a zip file" + archive = zipfile.ZipFile(archive_name) + archive.extractall(destpath) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/payload/execd.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/payload/execd.py new file mode 100644 index 0000000000000000000000000000000000000000..1502aa0b596f0b1a2017ccb4543a35999774431d --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/charmhelpers/payload/execd.py @@ -0,0 +1,65 @@ +#!/usr/bin/env python + +# Copyright 2014-2015 Canonical Limited. 
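# --- Editorial aside (not part of the vendored diff): illustrative use of the
# archive helpers above. The archive path is hypothetical; extract() sniffs
# tar/zip content (falling back to the file extension) and defaults the
# destination to $CHARM_DIR/archives/<archive-name>.
from charmhelpers.payload.archive import extract

dest = extract('/tmp/payload.tar.gz')  # hypothetical archive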
+# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import os +import sys +import subprocess +from charmhelpers.core import hookenv + + +def default_execd_dir(): + return os.path.join(os.environ['CHARM_DIR'], 'exec.d') + + +def execd_module_paths(execd_dir=None): + """Generate a list of full paths to modules within execd_dir.""" + if not execd_dir: + execd_dir = default_execd_dir() + + if not os.path.exists(execd_dir): + return + + for subpath in os.listdir(execd_dir): + module = os.path.join(execd_dir, subpath) + if os.path.isdir(module): + yield module + + +def execd_submodule_paths(command, execd_dir=None): + """Generate a list of full paths to the specified command within exec_dir. + """ + for module_path in execd_module_paths(execd_dir): + path = os.path.join(module_path, command) + if os.access(path, os.X_OK) and os.path.isfile(path): + yield path + + +def execd_run(command, execd_dir=None, die_on_error=True, stderr=subprocess.STDOUT): + """Run command for each module within execd_dir which defines it.""" + for submodule_path in execd_submodule_paths(command, execd_dir): + try: + subprocess.check_output(submodule_path, stderr=stderr, + universal_newlines=True) + except subprocess.CalledProcessError as e: + hookenv.log("Error ({}) running {}. Output: {}".format( + e.returncode, e.cmd, e.output)) + if die_on_error: + sys.exit(e.returncode) + + +def execd_preinstall(execd_dir=None): + """Run charm-pre-install for each module within execd_dir.""" + execd_run('charm-pre-install', execd_dir=execd_dir) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/debian/compat b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/debian/compat new file mode 100644 index 0000000000000000000000000000000000000000..7f8f011eb73d6043d2e6db9d2c101195ae2801f2 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/debian/compat @@ -0,0 +1 @@ +7 diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/debian/control b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/debian/control new file mode 100644 index 0000000000000000000000000000000000000000..c4992afd94fd22c6449365362dd96bc53ae6d404 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/debian/control @@ -0,0 +1,20 @@ +Source: charmhelpers +Maintainer: Matthew Wedgwood +Section: python +Priority: optional +Build-Depends: python-all (>= 2.6.6-3), debhelper (>= 7) +Standards-Version: 3.9.1 + +Package: python-charmhelpers +Architecture: all +Depends: ${misc:Depends}, ${python:Depends} +Description: UNKNOWN + ============ + CharmHelpers + ============ + . + CharmHelpers provides an opinionated set of tools for building Juju + charms that work together. In addition to basic tasks like interact- + ing with the charm environment and the machine it runs on, it also + helps keep you build hooks and establish relations effortlessly. + . 
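# --- Editorial aside (not part of the vendored diff): execd_preinstall()
# above is conventionally the first call in a charm's install hook. Every
# executable exec.d/*/charm-pre-install under $CHARM_DIR is run in turn, and
# the hook exits with the failing script's return code while die_on_error
# stays True.
from charmhelpers.payload.execd import execd_preinstall

execd_preinstall()  # uses $CHARM_DIR/exec.d by default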
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/debian/rules b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/debian/rules new file mode 100755 index 0000000000000000000000000000000000000000..7f2101b365a870adf54c565999245eedcc416810 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/debian/rules @@ -0,0 +1,9 @@ +#!/usr/bin/make -f + +# This file was automatically generated by stdeb 0.6.0+git at +# Fri, 04 Jan 2013 15:14:11 -0600 + +%: + dh $@ --with python2 --buildsystem=python_distutils + + diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/debian/source/format b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/debian/source/format new file mode 100644 index 0000000000000000000000000000000000000000..163aaf8d82b6c54f23c45f32895dbdfdcc27b047 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/debian/source/format @@ -0,0 +1 @@ +3.0 (quilt) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/Makefile b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/Makefile new file mode 100644 index 0000000000000000000000000000000000000000..17396a1fddc3e317e197d7b9d73a6eace214b065 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/Makefile @@ -0,0 +1,177 @@ +# Makefile for Sphinx documentation +# + +# You can set these variables from the command line. +SPHINXOPTS = +SPHINXBUILD = sphinx-build +PAPER = +BUILDDIR = _build + +# User-friendly check for sphinx-build +ifeq ($(shell which $(SPHINXBUILD) >/dev/null 2>&1; echo $$?), 1) +$(error The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx installed, then set the SPHINXBUILD environment variable to point to the full path of the '$(SPHINXBUILD)' executable. Alternatively you can add the directory with the executable to your PATH. If you don't have Sphinx installed, grab it from http://sphinx-doc.org/) +endif + +# Internal variables. +PAPEROPT_a4 = -D latex_paper_size=a4 +PAPEROPT_letter = -D latex_paper_size=letter +ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . +# the i18n builder cannot share the environment and doctrees with the others +I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . 
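The Makefile header above notes that these variables can be set from the command line; a couple of illustrative (hypothetical) invocations, added as comments next to the variable definitions:

+# Example invocations (illustrative): the variables above can be overridden
+# on the command line, e.g.:
+#   make html BUILDDIR=/tmp/charmhelpers-docs
+#   make latex PAPER=a4 SPHINXOPTS="-W"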
+ +.PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest gettext + +help: + @echo "Please use \`make <target>' where <target> is one of" + @echo " html to make standalone HTML files" + @echo " dirhtml to make HTML files named index.html in directories" + @echo " singlehtml to make a single large HTML file" + @echo " pickle to make pickle files" + @echo " json to make JSON files" + @echo " htmlhelp to make HTML files and a HTML help project" + @echo " qthelp to make HTML files and a qthelp project" + @echo " devhelp to make HTML files and a Devhelp project" + @echo " epub to make an epub" + @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter" + @echo " latexpdf to make LaTeX files and run them through pdflatex" + @echo " latexpdfja to make LaTeX files and run them through platex/dvipdfmx" + @echo " text to make text files" + @echo " man to make manual pages" + @echo " texinfo to make Texinfo files" + @echo " info to make Texinfo files and run them through makeinfo" + @echo " gettext to make PO message catalogs" + @echo " changes to make an overview of all changed/added/deprecated items" + @echo " xml to make Docutils-native XML files" + @echo " pseudoxml to make pseudoxml-XML files for display purposes" + @echo " linkcheck to check all external links for integrity" + @echo " doctest to run all doctests embedded in the documentation (if enabled)" + +clean: + rm -rf $(BUILDDIR)/* + +html: + $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html + @echo + @echo "Build finished. The HTML pages are in $(BUILDDIR)/html." + +dirhtml: + $(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml + @echo + @echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml." + +singlehtml: + $(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml + @echo + @echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml." + +pickle: + $(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle + @echo + @echo "Build finished; now you can process the pickle files." + +json: + $(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json + @echo + @echo "Build finished; now you can process the JSON files." + +htmlhelp: + $(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp + @echo + @echo "Build finished; now you can run HTML Help Workshop with the" \ + ".hhp project file in $(BUILDDIR)/htmlhelp." + +qthelp: + $(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp + @echo + @echo "Build finished; now you can run "qcollectiongenerator" with the" \ + ".qhcp project file in $(BUILDDIR)/qthelp, like this:" + @echo "# qcollectiongenerator $(BUILDDIR)/qthelp/CharmHelpers.qhcp" + @echo "To view the help file:" + @echo "# assistant -collectionFile $(BUILDDIR)/qthelp/CharmHelpers.qhc" + +devhelp: + $(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp + @echo + @echo "Build finished." + @echo "To view the help file:" + @echo "# mkdir -p $$HOME/.local/share/devhelp/CharmHelpers" + @echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/CharmHelpers" + @echo "# devhelp" + +epub: + $(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub + @echo + @echo "Build finished. The epub file is in $(BUILDDIR)/epub." + +latex: + $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex + @echo + @echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex." + @echo "Run \`make' in that directory to run these through (pdf)latex" \ + "(use \`make latexpdf' here to do that automatically)."
+ +latexpdf: + $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex + @echo "Running LaTeX files through pdflatex..." + $(MAKE) -C $(BUILDDIR)/latex all-pdf + @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex." + +latexpdfja: + $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex + @echo "Running LaTeX files through platex and dvipdfmx..." + $(MAKE) -C $(BUILDDIR)/latex all-pdf-ja + @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex." + +text: + $(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text + @echo + @echo "Build finished. The text files are in $(BUILDDIR)/text." + +man: + $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man + @echo + @echo "Build finished. The manual pages are in $(BUILDDIR)/man." + +texinfo: + $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo + @echo + @echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo." + @echo "Run \`make' in that directory to run these through makeinfo" \ + "(use \`make info' here to do that automatically)." + +info: + $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo + @echo "Running Texinfo files through makeinfo..." + make -C $(BUILDDIR)/texinfo info + @echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo." + +gettext: + $(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale + @echo + @echo "Build finished. The message catalogs are in $(BUILDDIR)/locale." + +changes: + $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes + @echo + @echo "The overview file is in $(BUILDDIR)/changes." + +linkcheck: + $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck + @echo + @echo "Link check complete; look for any errors in the above output " \ + "or in $(BUILDDIR)/linkcheck/output.txt." + +doctest: + $(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest + @echo "Testing of doctests in the sources finished, look at the " \ + "results in $(BUILDDIR)/doctest/output.txt." + +xml: + $(SPHINXBUILD) -b xml $(ALLSPHINXOPTS) $(BUILDDIR)/xml + @echo + @echo "Build finished. The XML files are in $(BUILDDIR)/xml." + +pseudoxml: + $(SPHINXBUILD) -b pseudoxml $(ALLSPHINXOPTS) $(BUILDDIR)/pseudoxml + @echo + @echo "Build finished. The pseudo-XML files are in $(BUILDDIR)/pseudoxml." diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/_extensions/automembersummary.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/_extensions/automembersummary.py new file mode 100644 index 0000000000000000000000000000000000000000..6a1e707ab4422d738ae006b2c9d34647148da4c5 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/_extensions/automembersummary.py @@ -0,0 +1,84 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ + +import inspect + +from docutils.parsers.rst import directives +from sphinx.ext.autosummary import Autosummary +from sphinx.ext.autosummary import get_import_prefixes_from_env +from sphinx.ext.autosummary import import_by_name + + +class AutoMemberSummary(Autosummary): + required_arguments = 0 + optional_arguments = 0 + final_argument_whitespace = False + has_content = True + option_spec = { + 'toctree': directives.unchanged, + 'nosignatures': directives.flag, + 'template': directives.unchanged, + } + + def get_items(self, names): + env = self.state.document.settings.env + prefixes = get_import_prefixes_from_env(env) + + items = [] + prefix = '' + shorten = '' + + def _get_items(name): + _items = super(AutoMemberSummary, self).get_items([shorten + name]) + for dn, sig, summary, rn in _items: + items.append(('%s%s' % (prefix, dn), sig, summary, rn)) + + for name in names: + if '~' in name: + prefix, name = name.split('~') + shorten = '~' + else: + prefix = '' + shorten = '' + + try: + real_name, obj, parent, _ = import_by_name(name, prefixes=prefixes) + except ImportError: + self.warn('failed to import %s' % name) + continue + + if not inspect.ismodule(obj): + _get_items(name) + continue + + for member in dir(obj): + if member.startswith('_'): + continue + mobj = getattr(obj, member) + if hasattr(mobj, '__module__'): + if not mobj.__module__.startswith(real_name): + continue # skip imported classes & functions + elif hasattr(mobj, '__name__'): + if not mobj.__name__.startswith(real_name): + continue # skip imported modules + else: + continue # skip instances + _get_items('%s.%s' % (name, member)) + + return items + + +def setup(app): + app.add_directive('automembersummary', AutoMemberSummary) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.cli.rst b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.cli.rst new file mode 100644 index 0000000000000000000000000000000000000000..749ad249f07b68ba5886cdbdc0c9e8b78d5d16b2 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.cli.rst @@ -0,0 +1,24 @@ +charmhelpers.cli package +======================== + +charmhelpers.cli.commands module +-------------------------------- + +.. automodule:: charmhelpers.cli.commands + :members: + :undoc-members: + :show-inheritance: + +charmhelpers.cli.host module +---------------------------- + +.. automodule:: charmhelpers.cli.host + :members: + :undoc-members: + :show-inheritance: + + +.. automodule:: charmhelpers.cli + :members: + :undoc-members: + :show-inheritance: diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.contrib.ansible.rst b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.contrib.ansible.rst new file mode 100644 index 0000000000000000000000000000000000000000..a6644937d0f7e2d508e500dd46f33ad6d47fb346 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.contrib.ansible.rst @@ -0,0 +1,7 @@ +charmhelpers.contrib.ansible package +==================================== + +.. 
automodule:: charmhelpers.contrib.ansible + :members: + :undoc-members: + :show-inheritance: diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.contrib.charmhelpers.rst b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.contrib.charmhelpers.rst new file mode 100644 index 0000000000000000000000000000000000000000..5d5222db297171acf80bce09ec22b47a5611ac39 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.contrib.charmhelpers.rst @@ -0,0 +1,7 @@ +charmhelpers.contrib.charmhelpers package +========================================= + +.. automodule:: charmhelpers.contrib.charmhelpers + :members: + :undoc-members: + :show-inheritance: diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.contrib.charmsupport.rst b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.contrib.charmsupport.rst new file mode 100644 index 0000000000000000000000000000000000000000..5f33aeb027320a708a2980f03cf1d7ae6bc6c893 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.contrib.charmsupport.rst @@ -0,0 +1,24 @@ +charmhelpers.contrib.charmsupport package +========================================= + +charmhelpers.contrib.charmsupport.nrpe module +--------------------------------------------- + +.. automodule:: charmhelpers.contrib.charmsupport.nrpe + :members: + :undoc-members: + :show-inheritance: + +charmhelpers.contrib.charmsupport.volumes module +------------------------------------------------ + +.. automodule:: charmhelpers.contrib.charmsupport.volumes + :members: + :undoc-members: + :show-inheritance: + + +.. automodule:: charmhelpers.contrib.charmsupport + :members: + :undoc-members: + :show-inheritance: diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.contrib.hahelpers.rst b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.contrib.hahelpers.rst new file mode 100644 index 0000000000000000000000000000000000000000..129db24def3584ee42ae64de3e87123eed585ca1 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.contrib.hahelpers.rst @@ -0,0 +1,24 @@ +charmhelpers.contrib.hahelpers package +====================================== + +charmhelpers.contrib.hahelpers.apache module +-------------------------------------------- + +.. automodule:: charmhelpers.contrib.hahelpers.apache + :members: + :undoc-members: + :show-inheritance: + +charmhelpers.contrib.hahelpers.cluster module +--------------------------------------------- + +.. automodule:: charmhelpers.contrib.hahelpers.cluster + :members: + :undoc-members: + :show-inheritance: + + +.. 
automodule:: charmhelpers.contrib.hahelpers + :members: + :undoc-members: + :show-inheritance: diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.contrib.network.ovs.rst b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.contrib.network.ovs.rst new file mode 100644 index 0000000000000000000000000000000000000000..98ab8cb1e2f42ff434da1b26d1519bf2d440c76f --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.contrib.network.ovs.rst @@ -0,0 +1,7 @@ +charmhelpers.contrib.network.ovs package +======================================== + +.. automodule:: charmhelpers.contrib.network.ovs + :members: + :undoc-members: + :show-inheritance: diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.contrib.network.rst b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.contrib.network.rst new file mode 100644 index 0000000000000000000000000000000000000000..c7b89b252b100033a122a1717fd753c21893096f --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.contrib.network.rst @@ -0,0 +1,20 @@ +charmhelpers.contrib.network package +==================================== + +.. toctree:: + + charmhelpers.contrib.network.ovs + +charmhelpers.contrib.network.ip module +-------------------------------------- + +.. automodule:: charmhelpers.contrib.network.ip + :members: + :undoc-members: + :show-inheritance: + + +.. automodule:: charmhelpers.contrib.network + :members: + :undoc-members: + :show-inheritance: diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.contrib.openstack.rst b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.contrib.openstack.rst new file mode 100644 index 0000000000000000000000000000000000000000..d969ed62678e227a14640db45e9a851465341762 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.contrib.openstack.rst @@ -0,0 +1,52 @@ +charmhelpers.contrib.openstack package +====================================== + +.. toctree:: + + charmhelpers.contrib.openstack.templates + +charmhelpers.contrib.openstack.alternatives module +-------------------------------------------------- + +.. automodule:: charmhelpers.contrib.openstack.alternatives + :members: + :undoc-members: + :show-inheritance: + +charmhelpers.contrib.openstack.context module +--------------------------------------------- + +.. automodule:: charmhelpers.contrib.openstack.context + :members: + :undoc-members: + :show-inheritance: + +charmhelpers.contrib.openstack.neutron module +--------------------------------------------- + +.. automodule:: charmhelpers.contrib.openstack.neutron + :members: + :undoc-members: + :show-inheritance: + +charmhelpers.contrib.openstack.templating module +------------------------------------------------ + +.. automodule:: charmhelpers.contrib.openstack.templating + :members: + :undoc-members: + :show-inheritance: + +charmhelpers.contrib.openstack.utils module +------------------------------------------- + +.. automodule:: charmhelpers.contrib.openstack.utils + :members: + :undoc-members: + :show-inheritance: + + +.. 
automodule:: charmhelpers.contrib.openstack + :members: + :undoc-members: + :show-inheritance: diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.contrib.openstack.templates.rst b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.contrib.openstack.templates.rst new file mode 100644 index 0000000000000000000000000000000000000000..f9eaa2f8581744fdc42b632f743636f715f149a0 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.contrib.openstack.templates.rst @@ -0,0 +1,7 @@ +charmhelpers.contrib.openstack.templates package +================================================ + +.. automodule:: charmhelpers.contrib.openstack.templates + :members: + :undoc-members: + :show-inheritance: diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.contrib.peerstorage.rst b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.contrib.peerstorage.rst new file mode 100644 index 0000000000000000000000000000000000000000..e33e82fc09d84a1c8f29202c913e21ac43d001ac --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.contrib.peerstorage.rst @@ -0,0 +1,7 @@ +charmhelpers.contrib.peerstorage package +======================================== + +.. automodule:: charmhelpers.contrib.peerstorage + :members: + :undoc-members: + :show-inheritance: diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.contrib.python.rst b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.contrib.python.rst new file mode 100644 index 0000000000000000000000000000000000000000..98d6a62dc03df54fa70494f3301f4f793d5143f1 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.contrib.python.rst @@ -0,0 +1,40 @@ +charmhelpers.contrib.python package +=================================== + +charmhelpers.contrib.python.debug module +---------------------------------------- + +.. automodule:: charmhelpers.contrib.python.debug + :members: + :undoc-members: + :show-inheritance: + +charmhelpers.contrib.python.packages module +------------------------------------------- + +.. automodule:: charmhelpers.contrib.python.packages + :members: + :undoc-members: + :show-inheritance: + +charmhelpers.contrib.python.rpdb module +--------------------------------------- + +.. automodule:: charmhelpers.contrib.python.rpdb + :members: + :undoc-members: + :show-inheritance: + +charmhelpers.contrib.python.version module +------------------------------------------ + +.. automodule:: charmhelpers.contrib.python.version + :members: + :undoc-members: + :show-inheritance: + + +.. automodule:: charmhelpers.contrib.python + :members: + :undoc-members: + :show-inheritance: diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.contrib.rst b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.contrib.rst new file mode 100644 index 0000000000000000000000000000000000000000..d3655431936555d7e3f9b5a1c2ad6aacd299bfc5 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.contrib.rst @@ -0,0 +1,23 @@ +charmhelpers.contrib package +============================ + +.. 
toctree:: + + charmhelpers.contrib.ansible + charmhelpers.contrib.charmhelpers + charmhelpers.contrib.charmsupport + charmhelpers.contrib.hahelpers + charmhelpers.contrib.network + charmhelpers.contrib.openstack + charmhelpers.contrib.peerstorage + charmhelpers.contrib.python + charmhelpers.contrib.saltstack + charmhelpers.contrib.ssl + charmhelpers.contrib.storage + charmhelpers.contrib.templating + charmhelpers.contrib.unison + +.. automodule:: charmhelpers.contrib + :members: + :undoc-members: + :show-inheritance: diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.contrib.saltstack.rst b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.contrib.saltstack.rst new file mode 100644 index 0000000000000000000000000000000000000000..ca14b85d87263152b6f90d7009a8935ca5888002 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.contrib.saltstack.rst @@ -0,0 +1,7 @@ +charmhelpers.contrib.saltstack package +====================================== + +.. automodule:: charmhelpers.contrib.saltstack + :members: + :undoc-members: + :show-inheritance: diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.contrib.ssl.rst b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.contrib.ssl.rst new file mode 100644 index 0000000000000000000000000000000000000000..8b7cde87776fb49c116ea4911dbc0ff5f000a19b --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.contrib.ssl.rst @@ -0,0 +1,16 @@ +charmhelpers.contrib.ssl package +================================ + +charmhelpers.contrib.ssl.service module +--------------------------------------- + +.. automodule:: charmhelpers.contrib.ssl.service + :members: + :undoc-members: + :show-inheritance: + + +.. automodule:: charmhelpers.contrib.ssl + :members: + :undoc-members: + :show-inheritance: diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.contrib.storage.linux.rst b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.contrib.storage.linux.rst new file mode 100644 index 0000000000000000000000000000000000000000..5acb07394dab5b5cf7017d4d4f46f693d2023abf --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.contrib.storage.linux.rst @@ -0,0 +1,40 @@ +charmhelpers.contrib.storage.linux package +========================================== + +charmhelpers.contrib.storage.linux.ceph module +---------------------------------------------- + +.. automodule:: charmhelpers.contrib.storage.linux.ceph + :members: + :undoc-members: + :show-inheritance: + +charmhelpers.contrib.storage.linux.loopback module +-------------------------------------------------- + +.. automodule:: charmhelpers.contrib.storage.linux.loopback + :members: + :undoc-members: + :show-inheritance: + +charmhelpers.contrib.storage.linux.lvm module +--------------------------------------------- + +.. automodule:: charmhelpers.contrib.storage.linux.lvm + :members: + :undoc-members: + :show-inheritance: + +charmhelpers.contrib.storage.linux.utils module +----------------------------------------------- + +.. automodule:: charmhelpers.contrib.storage.linux.utils + :members: + :undoc-members: + :show-inheritance: + + +.. 
automodule:: charmhelpers.contrib.storage.linux + :members: + :undoc-members: + :show-inheritance: diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.contrib.storage.rst b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.contrib.storage.rst new file mode 100644 index 0000000000000000000000000000000000000000..a3adfdfc3ebda31ff24001a2a243ce98249ba5c5 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.contrib.storage.rst @@ -0,0 +1,11 @@ +charmhelpers.contrib.storage package +==================================== + +.. toctree:: + + charmhelpers.contrib.storage.linux + +.. automodule:: charmhelpers.contrib.storage + :members: + :undoc-members: + :show-inheritance: diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.contrib.templating.rst b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.contrib.templating.rst new file mode 100644 index 0000000000000000000000000000000000000000..c3eb2a24312eda3c725949ad646977776ee76f0c --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.contrib.templating.rst @@ -0,0 +1,24 @@ +charmhelpers.contrib.templating package +======================================= + +charmhelpers.contrib.templating.contexts module +----------------------------------------------- + +.. automodule:: charmhelpers.contrib.templating.contexts + :members: + :undoc-members: + :show-inheritance: + +charmhelpers.contrib.templating.pyformat module +----------------------------------------------- + +.. automodule:: charmhelpers.contrib.templating.pyformat + :members: + :undoc-members: + :show-inheritance: + + +.. automodule:: charmhelpers.contrib.templating + :members: + :undoc-members: + :show-inheritance: diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.contrib.unison.rst b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.contrib.unison.rst new file mode 100644 index 0000000000000000000000000000000000000000..af3a5254608508cfa253f71e741f4423ffb77c39 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.contrib.unison.rst @@ -0,0 +1,7 @@ +charmhelpers.contrib.unison package +=================================== + +.. automodule:: charmhelpers.contrib.unison + :members: + :undoc-members: + :show-inheritance: diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.coordinator.rst b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.coordinator.rst new file mode 100644 index 0000000000000000000000000000000000000000..13eeb677d5383a681d43b385aee14691bfcccac2 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.coordinator.rst @@ -0,0 +1,10 @@ +charmhelpers.coordinator package +================================ + +charmhelpers.coordinator module +------------------------------- + +.. 
automodule:: charmhelpers.coordinator + :members: + :undoc-members: + :show-inheritance: diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.core.decorators.rst b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.core.decorators.rst new file mode 100644 index 0000000000000000000000000000000000000000..5b4fb402a7d03916de39fe39a8ab418f027a1463 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.core.decorators.rst @@ -0,0 +1,7 @@ +charmhelpers.core.decorators +============================ + +.. automodule:: charmhelpers.core.decorators + :members: + :undoc-members: + :show-inheritance: diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.core.fstab.rst b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.core.fstab.rst new file mode 100644 index 0000000000000000000000000000000000000000..a4c9f9488c91c062dac8bf482032fdd03d8a5aaa --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.core.fstab.rst @@ -0,0 +1,7 @@ +charmhelpers.core.fstab +======================= + +.. automodule:: charmhelpers.core.fstab + :members: + :undoc-members: + :show-inheritance: diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.core.hookenv.rst b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.core.hookenv.rst new file mode 100644 index 0000000000000000000000000000000000000000..70a0df8891af1ceb085e4574abbea7409b6a35ca --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.core.hookenv.rst @@ -0,0 +1,12 @@ +charmhelpers.core.hookenv +========================= + +.. automembersummary:: + :nosignatures: + + ~charmhelpers.core.hookenv + +.. automodule:: charmhelpers.core.hookenv + :members: + :undoc-members: + :show-inheritance: diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.core.host.rst b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.core.host.rst new file mode 100644 index 0000000000000000000000000000000000000000..7b6b9f0cf835c61963ab5a190b47720bdef5030c --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.core.host.rst @@ -0,0 +1,12 @@ +charmhelpers.core.host +====================== + +.. automembersummary:: + :nosignatures: + + ~charmhelpers.core.host + +.. automodule:: charmhelpers.core.host + :members: + :undoc-members: + :show-inheritance: diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.core.rst b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.core.rst new file mode 100644 index 0000000000000000000000000000000000000000..0aec7b766b12702c826fe21c41416700c4239945 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.core.rst @@ -0,0 +1,19 @@ +charmhelpers.core package +========================= + +.. toctree:: + + charmhelpers.core.decorators + charmhelpers.core.fstab + charmhelpers.core.hookenv + charmhelpers.core.host + charmhelpers.core.strutils + charmhelpers.core.sysctl + charmhelpers.core.templating + charmhelpers.core.unitdata + charmhelpers.core.services + +.. 
automodule:: charmhelpers.core + :members: + :undoc-members: + :show-inheritance: diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.core.services.base.rst b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.core.services.base.rst new file mode 100644 index 0000000000000000000000000000000000000000..79ef6cb59c3722050b4ccfb744a5b5929589b1c1 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.core.services.base.rst @@ -0,0 +1,12 @@ +charmhelpers.core.services.base +=============================== + +.. automembersummary:: + :nosignatures: + + ~charmhelpers.core.services.base + +.. automodule:: charmhelpers.core.services.base + :members: + :undoc-members: + :show-inheritance: diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.core.services.helpers.rst b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.core.services.helpers.rst new file mode 100644 index 0000000000000000000000000000000000000000..ccba56ce1434d86e35d21a3b452fdbe5eeaa8be1 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.core.services.helpers.rst @@ -0,0 +1,12 @@ +charmhelpers.core.services.helpers +================================== + +.. automembersummary:: + :nosignatures: + + ~charmhelpers.core.services.helpers + +.. automodule:: charmhelpers.core.services.helpers + :members: + :undoc-members: + :show-inheritance: diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.core.services.rst b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.core.services.rst new file mode 100644 index 0000000000000000000000000000000000000000..515103c4313ac0f069aea6847911693807f2b8da --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.core.services.rst @@ -0,0 +1,12 @@ +charmhelpers.core.services +========================== + +.. toctree:: + + charmhelpers.core.services.base + charmhelpers.core.services.helpers + +.. automodule:: charmhelpers.core.services + :members: + :undoc-members: + :show-inheritance: diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.core.strutils.rst b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.core.strutils.rst new file mode 100644 index 0000000000000000000000000000000000000000..3b96809fb4540af6079a0400b835617c879cb991 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.core.strutils.rst @@ -0,0 +1,7 @@ +charmhelpers.core.strutils +============================ + +.. automodule:: charmhelpers.core.strutils + :members: + :undoc-members: + :show-inheritance: diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.core.sysctl.rst b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.core.sysctl.rst new file mode 100644 index 0000000000000000000000000000000000000000..45b960fd94dac9c58efe0b7d9b525985e0d97391 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.core.sysctl.rst @@ -0,0 +1,7 @@ +charmhelpers.core.sysctl +============================ + +.. 
automodule:: charmhelpers.core.sysctl + :members: + :undoc-members: + :show-inheritance: diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.core.templating.rst b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.core.templating.rst new file mode 100644 index 0000000000000000000000000000000000000000..c131d97beafe3d4c0e6bbc0fd679f286ea21f27d --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.core.templating.rst @@ -0,0 +1,7 @@ +charmhelpers.core.templating +============================ + +.. automodule:: charmhelpers.core.templating + :members: + :undoc-members: + :show-inheritance: diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.core.unitdata.rst b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.core.unitdata.rst new file mode 100644 index 0000000000000000000000000000000000000000..2b50978de002ccc15fe049abd694615c465fd546 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.core.unitdata.rst @@ -0,0 +1,7 @@ +charmhelpers.core.unitdata +========================== + +.. automodule:: charmhelpers.core.unitdata + :members: + :undoc-members: + :show-inheritance: diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.fetch.archiveurl.rst b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.fetch.archiveurl.rst new file mode 100644 index 0000000000000000000000000000000000000000..b9d09447755143364143e6bff8d52f53329a0f41 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.fetch.archiveurl.rst @@ -0,0 +1,7 @@ +charmhelpers.fetch.archiveurl module +==================================== + +.. automodule:: charmhelpers.fetch.archiveurl + :members: + :undoc-members: + :show-inheritance: \ No newline at end of file diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.fetch.bzrurl.rst b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.fetch.bzrurl.rst new file mode 100644 index 0000000000000000000000000000000000000000..1be591097884beb4480db1054fb2cd3b01498637 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.fetch.bzrurl.rst @@ -0,0 +1,7 @@ +charmhelpers.fetch.bzrurl module +================================ + +.. automodule:: charmhelpers.fetch.bzrurl + :members: + :undoc-members: + :show-inheritance: \ No newline at end of file diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.fetch.python.rst b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.fetch.python.rst new file mode 100644 index 0000000000000000000000000000000000000000..ed89d428c526fcee5d5f06d273194f4eb03c170c --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.fetch.python.rst @@ -0,0 +1,7 @@ +charmhelpers.fetch.python module +==================================== + +.. 
automodule:: charmhelpers.fetch.python + :members: + :undoc-members: + :show-inheritance: diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.fetch.rst b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.fetch.rst new file mode 100644 index 0000000000000000000000000000000000000000..6fb42ef2d7e3df3d4643a9bc480734c1c56e6bc3 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.fetch.rst @@ -0,0 +1,37 @@ +charmhelpers.fetch package +========================== + +.. automodule:: charmhelpers.fetch + :members: + :undoc-members: + :show-inheritance: + + +charmhelpers.fetch.archiveurl module +------------------------------------ + +.. toctree:: + + charmhelpers.fetch.archiveurl + +charmhelpers.fetch.bzrurl module +-------------------------------- + +.. toctree:: + + charmhelpers.fetch.bzrurl + +charmhelpers.fetch.snap module +------------------------------ + +.. toctree:: + + charmhelpers.fetch.snap + +charmhelpers.fetch.python module +-------------------------------- + +.. toctree:: + + charmhelpers.fetch.python + diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.fetch.snap.rst b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.fetch.snap.rst new file mode 100644 index 0000000000000000000000000000000000000000..882a88e2dd9a9e844846f3fecda7d21e08f0803f --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.fetch.snap.rst @@ -0,0 +1,26 @@ +charmhelpers.fetch.snap package +=============================== + +.. automodule:: charmhelpers.fetch.snap + :members: + :undoc-members: + :show-inheritance: + +Examples +-------- + +.. code-block:: python + + snap_install('hello-world', '--classic', '--stable') + snap_install(['hello-world', 'htop']) + +.. code-block:: python + + snap_refresh('hello-world', '--classic', '--stable') + snap_refresh(['hello-world', 'htop']) + +.. code-block:: python + + snap_remove('hello-world') + snap_remove(['hello-world', 'htop']) + diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.payload.rst b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.payload.rst new file mode 100644 index 0000000000000000000000000000000000000000..b1d9607ad115e37806b36ab6d8d54bae665cf917 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.payload.rst @@ -0,0 +1,24 @@ +charmhelpers.payload package +============================ + +charmhelpers.payload.archive module +----------------------------------- + +.. automodule:: charmhelpers.payload.archive + :members: + :undoc-members: + :show-inheritance: + +charmhelpers.payload.execd module +--------------------------------- + +.. automodule:: charmhelpers.payload.execd + :members: + :undoc-members: + :show-inheritance: + + +..
automodule:: charmhelpers.payload + :members: + :undoc-members: + :show-inheritance: diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.rst b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.rst new file mode 100644 index 0000000000000000000000000000000000000000..14266fbe55f19887d4b454a13094852bdbb61b55 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/api/charmhelpers.rst @@ -0,0 +1,17 @@ +API Documentation +================= + +.. toctree:: + :maxdepth: 3 + + charmhelpers.core + charmhelpers.contrib + charmhelpers.fetch + charmhelpers.payload + charmhelpers.cli + charmhelpers.coordinator + +.. automodule:: charmhelpers + :members: + :undoc-members: + :show-inheritance: diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/conf.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/conf.py new file mode 100644 index 0000000000000000000000000000000000000000..f495d5511209971c53b0098178438246a3fee94b --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/conf.py @@ -0,0 +1,266 @@ +# -*- coding: utf-8 -*- +# +# Charm Helpers documentation build configuration file, created by +# sphinx-quickstart on Fri Jun 6 10:34:44 2014. +# +# This file is execfile()d with the current directory set to its +# containing dir. +# +# Note that not all possible configuration values are present in this +# autogenerated file. +# +# All configuration values have a default; values that are commented out +# serve to show the default. + +import sys +import os + +# If extensions (or modules to document with autodoc) are in another directory, +# add these directories to sys.path here. If the directory is relative to the +# documentation root, use os.path.abspath to make it absolute, like shown here. +sys.path.insert(0, os.path.abspath('../')) +sys.path.append(os.path.abspath('_extensions/')) + +# -- General configuration ------------------------------------------------ + +# If your documentation needs a minimal Sphinx version, state it here. +#needs_sphinx = '1.0' + +# Add any Sphinx extension module names here, as strings. They can be +# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom +# ones. +extensions = [ + 'sphinx.ext.autodoc', + 'sphinx.ext.autosummary', + 'automembersummary', +] + +# Add any paths that contain templates here, relative to this directory. +templates_path = ['_templates'] + +# The suffix of source filenames. +source_suffix = '.rst' + +# The encoding of source files. +#source_encoding = 'utf-8-sig' + +# The master toctree document. +master_doc = 'index' + +# General information about the project. +project = u'Charm Helpers' +copyright = u'2014-2018, Canonical Ltd.' + +# The version info for the project you're documenting, acts as replacement for +# |version| and |release|, also used in various other places throughout the +# built documents. +# +version_file = os.path.abspath( + os.path.join(os.path.dirname(__file__), '../', 'VERSION')) +VERSION = open(version_file).read().strip() +# The short X.Y version. +version = VERSION +# The full version, including alpha/beta/rc tags. +release = VERSION + +# The language for content autogenerated by Sphinx. Refer to documentation +# for a list of supported languages. 
+#language = None + +# There are two options for replacing |today|: either, you set today to some +# non-false value, then it is used: +#today = '' +# Else, today_fmt is used as the format for a strftime call. +#today_fmt = '%B %d, %Y' + +# List of patterns, relative to source directory, that match files and +# directories to ignore when looking for source files. +exclude_patterns = ['_build', '_extensions'] + +# The reST default role (used for this markup: `text`) to use for all +# documents. +#default_role = None + +# If true, '()' will be appended to :func: etc. cross-reference text. +#add_function_parentheses = True + +# If true, the current module name will be prepended to all description +# unit titles (such as .. function::). +#add_module_names = True + +# If true, sectionauthor and moduleauthor directives will be shown in the +# output. They are ignored by default. +#show_authors = False + +# The name of the Pygments (syntax highlighting) style to use. +pygments_style = 'sphinx' + +# A list of ignored prefixes for module index sorting. +#modindex_common_prefix = [] + +# If true, keep warnings as "system message" paragraphs in the built documents. +#keep_warnings = False + + +# -- Options for HTML output ---------------------------------------------- + +# The theme to use for HTML and HTML Help pages. See the documentation for +# a list of builtin themes. +html_theme = 'sphinx_rtd_theme' + +# Theme options are theme-specific and customize the look and feel of a theme +# further. For a list of options available for each theme, see the +# documentation. +#html_theme_options = {} + +# Add any paths that contain custom themes here, relative to this directory. +html_theme_path = ['_themes'] + +# The name for this set of Sphinx documents. If None, it defaults to +# "<project> v<release> documentation". +#html_title = None + +# A shorter title for the navigation bar. Default is the same as html_title. +#html_short_title = None + +# The name of an image file (relative to this directory) to place at the top +# of the sidebar. +#html_logo = None + +# The name of an image file (within the static path) to use as favicon of the +# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 +# pixels large. +#html_favicon = None + +# Add any paths that contain custom static files (such as style sheets) here, +# relative to this directory. They are copied after the builtin static files, +# so a file named "default.css" will overwrite the builtin "default.css". +html_static_path = ['_static'] + +# Add any extra paths that contain custom files (such as robots.txt or +# .htaccess) here, relative to this directory. These files are copied +# directly to the root of the documentation. +#html_extra_path = [] + +# If not '', a 'Last updated on:' timestamp is inserted at every page bottom, +# using the given strftime format. +#html_last_updated_fmt = '%b %d, %Y' + +# If true, SmartyPants will be used to convert quotes and dashes to +# typographically correct entities. +#html_use_smartypants = True + +# Custom sidebar templates, maps document names to template names. +#html_sidebars = {} + +# Additional templates that should be rendered to pages, maps page names to +# template names. +#html_additional_pages = {} + +# If false, no module index is generated. +#html_domain_indices = True + +# If false, no index is generated. +#html_use_index = True + +# If true, the index is split into individual pages for each letter. +#html_split_index = False + +# If true, links to the reST sources are added to the pages.
+#html_show_sourcelink = True + +# If true, "Created using Sphinx" is shown in the HTML footer. Default is True. +#html_show_sphinx = True + +# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True. +#html_show_copyright = True + +# If true, an OpenSearch description file will be output, and all pages will +# contain a <link> tag referring to it. The value of this option must be the +# base URL from which the finished HTML is served. +#html_use_opensearch = '' + +# This is the file name suffix for HTML files (e.g. ".xhtml"). +#html_file_suffix = None + +# Output file base name for HTML help builder. +htmlhelp_basename = 'CharmHelpersdoc' + + +# -- Options for LaTeX output --------------------------------------------- + +latex_elements = { +# The paper size ('letterpaper' or 'a4paper'). +#'papersize': 'letterpaper', + +# The font size ('10pt', '11pt' or '12pt'). +#'pointsize': '10pt', + +# Additional stuff for the LaTeX preamble. +#'preamble': '', +} + +# Grouping the document tree into LaTeX files. List of tuples +# (source start file, target name, title, +# author, documentclass [howto, manual, or own class]). +latex_documents = [ + ('index', 'CharmHelpers.tex', u'Charm Helpers Documentation', + u'Charm Helpers Developers', 'manual'), +] + +# The name of an image file (relative to this directory) to place at the top of +# the title page. +#latex_logo = None + +# For "manual" documents, if this is true, then toplevel headings are parts, +# not chapters. +#latex_use_parts = False + +# If true, show page references after internal links. +#latex_show_pagerefs = False + +# If true, show URL addresses after external links. +#latex_show_urls = False + +# Documents to append as an appendix to all manuals. +#latex_appendices = [] + +# If false, no module index is generated. +#latex_domain_indices = True + + +# -- Options for manual page output --------------------------------------- + +# One entry per manual page. List of tuples +# (source start file, name, description, authors, manual section). +man_pages = [ + ('index', 'charmhelpers', u'Charm Helpers Documentation', + [u'Charm Helpers Developers'], 1) +] + +# If true, show URL addresses after external links. +#man_show_urls = False + + +# -- Options for Texinfo output ------------------------------------------- + +# Grouping the document tree into Texinfo files. List of tuples +# (source start file, target name, title, author, +# dir menu entry, description, category) +texinfo_documents = [ + ('index', 'CharmHelpers', u'Charm Helpers Documentation', + u'Charm Helpers Developers', 'CharmHelpers', 'One line description of project.', + 'Miscellaneous'), +] + +# Documents to append as an appendix to all manuals. +#texinfo_appendices = [] + +# If false, no module index is generated. +#texinfo_domain_indices = True + +# How to display URL addresses: 'footnote', 'no', or 'inline'. +#texinfo_show_urls = 'footnote' + +# If true, do not generate a @detailmenu in the "Top" node's menu. +#texinfo_no_detailmenu = False diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/contributing.rst b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/contributing.rst new file mode 100644 index 0000000000000000000000000000000000000000..ed4728b5950eb9128d6536ea34c2647968614389 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/contributing.rst @@ -0,0 +1,49 @@ +Contributing +============ + +All contributions, both code and documentation, are welcome!
+ +Source +------ + +The source code is located at https://github.com/juju/charm-helpers. To +submit contributions you'll need to create a GitHub account if you do not +already have one. + +To get the code:: + + $ git clone https://github.com/juju/charm-helpers + +To build and run tests:: + + $ cd charm-helpers + $ make + +Submitting a Merge Proposal +--------------------------- + +Run ``make test`` and ensure all tests pass. Then commit your changes to a +`fork `_ and create a +`pull request `_. + +Open Bugs +--------- + +If you're looking for something to work on, the open bug/feature list can be +found at https://bugs.launchpad.net/charm-helpers. + +Documentation +------------- + +If you'd like to contribute to the documentation, please refer to the ``HACKING.md`` +document in the root of the source tree for instructions on building the documentation. + +Contributions to the :doc:`example-index` section of the documentation are +especially welcome, and are easy to add. Simply add a new ``.rst`` file under +``charmhelpers/docs/examples``. + +Getting Help +------------ + +If you need help you can find it in ``#juju`` on the Freenode IRC network. Come +talk to us - we're a friendly bunch! diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/example-index.rst b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/example-index.rst new file mode 100644 index 0000000000000000000000000000000000000000..e0251c08a1c8e471081a5007c6faede6d92f2d52 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/example-index.rst @@ -0,0 +1,11 @@ +Examples +======== + +If you'd like to contribute an example (please do!), please refer to the +:doc:`contributing` page for instructions on how to do so. + +.. toctree:: + :maxdepth: 1 + :glob: + + examples/* diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/examples/config.rst b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/examples/config.rst new file mode 100644 index 0000000000000000000000000000000000000000..d6d60895729bd038d161bb458773c0d410238b1b --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/examples/config.rst @@ -0,0 +1,88 @@ +Interacting with Charm Configuration +==================================== + +The :func:`charmhelpers.core.hookenv.config`, when called with no arguments, +returns a :class:`charmhelpers.core.hookenv.Config` instance - a dictionary +representation of a charm's ``config.yaml`` file. This object can +be used to: + +* get a charm's current config values +* check if a config value has changed since the last hook invocation +* view the previous value of a changed config item +* save arbitrary key/value data for use in a later hook + +For the following examples we'll assume our charm has a config.yaml file that +looks like this:: + + options: + app-name: + type: string + default: "My App" + description: "Name of your app." 
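A short illustrative addition to the example file's intro, since the returned object subclasses ``dict``; the fallback value shown is hypothetical:

+Because the returned ``Config`` object is a dictionary subclass, standard
+``dict`` access patterns also work::
+
+    config = hookenv.config()
+    config['app-name']                    # raises KeyError if the key is absent
+    config.get('app-name', 'a fallback')  # returns a default value instead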
+ + +Getting charm config values +--------------------------- + +:: + + # hooks/hooks.py + + from charmhelpers.core import hookenv + + hooks = hookenv.Hooks() + + @hooks.hook('install') + def install(): + config = hookenv.config() + + assert config['app-name'] == 'My App' + +Checking if a config value has changed +-------------------------------------- + +Let's say the user changes the ``app-name`` config value at runtime by +executing the following juju command:: + + juju set mycharm app-name="My New App" + +which triggers a ``config-changed`` hook:: + + # hooks/hooks.py + + from charmhelpers.core import hookenv + + hooks = hookenv.Hooks() + + @hooks.hook('config-changed') + def config_changed(): + config = hookenv.config() + + assert config.changed('app-name') + assert config['app-name'] == 'My New App' + assert config.previous('app-name') == 'My App' + +Saving arbitrary key/value data +------------------------------- + +The :class:`Config <charmhelpers.core.hookenv.Config>` object may also be +used to store arbitrary data that you want to persist across hook +invocations:: + + # hooks/hooks.py + + from charmhelpers.core import hookenv + + hooks = hookenv.Hooks() + + @hooks.hook('install') + def install(): + config = hookenv.config() + + config['mykey'] = 'myval' + + @hooks.hook('config-changed') + def config_changed(): + config = hookenv.config() + + assert config['mykey'] == 'myval' diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/examples/services.rst b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/examples/services.rst new file mode 100644 index 0000000000000000000000000000000000000000..96b2971e35913b0ee12505b3af80e250d7f4de61 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/examples/services.rst @@ -0,0 +1,160 @@ +Managing Charms with the Services Framework +=========================================== + +Traditional charm authoring is focused on implementing hooks. That is, +the charm author is thinking in terms of "What hook am I handling; what +does this hook need to do?" However, in most cases, the real question +should be "Do I have the information I need to configure and start this +piece of software and, if so, what are the steps for doing so?" The +services framework tries to bring the focus to the data and the +setup tasks, in the most declarative way possible. + + +Hooks as Data Sources for Service Definitions +--------------------------------------------- + +While the ``install``, ``start``, and ``stop`` hooks clearly represent +state transitions, all of the other hooks are really notifications of +changes in data from external sources, such as config data values in +the case of ``config-changed`` or relation data for any of the +``*-relation-*`` hooks. Moreover, many charms that rely on external +data from config options or relations find themselves needing some +piece of external data before they can even configure and start anything, +and so the ``start`` hook loses its semantic usefulness. + +If data is required from multiple sources, it even becomes impossible to +know which hook will be executing when all required data is available. +(E.g., which relation will be the last to execute; will the required +config option be set before or after all of the relations are available?) +One common solution to this problem is to create "flag files" to track +whether a given bit of data has been observed, but this gets cluttered +quickly, and it is difficult to understand which conditions lead to which +actions; a short illustrative sketch of the pattern follows.
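Following the forward reference just added, a hedged sketch of the flag-file pattern the framework replaces; the flag path and the ``configure_and_start`` helper are hypothetical:

+For illustration only, such a traditional hook might look like this::
+
+    # hooks/db-relation-changed
+    import os
+    from charmhelpers.core import hookenv
+
+    FLAG = os.path.join(hookenv.charm_dir(), '.db-ready')
+
+    def db_relation_changed():
+        open(FLAG, 'w').close()  # record that the relation data was seen
+        if os.path.exists(FLAG) and hookenv.config().get('app-name'):
+            configure_and_start()  # hypothetical helper; every hook
+                                   # must repeat this same guard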
+
+When using the services framework, all hooks other than ``install``
+are handled by a single call to :meth:`manager.manage() `.
+This can be done with symlinks, or by having a ``definitions.py`` file
+containing the service definitions, so that every hook can be reduced to::
+
+    #!/usr/bin/env python
+    from charmhelpers.core.services import ServiceManager
+    from definitions import service_definitions
+    ServiceManager(service_definitions).manage()
+
+So, what magic goes into ``definitions.py``?
+
+
+Service Definitions Overview
+----------------------------
+
+The format of service definitions is fully documented in
+:class:`~charmhelpers.core.services.base.ServiceManager`, but most commonly
+they will consist of one or more dictionaries containing four items: the name
+of a service being managed, the list of data contexts required before the
+service can be configured and started, the list of actions to take when the
+data requirements are satisfied, and a list of ports to open. The service name
+generally maps to an Upstart job, the required data contexts are ``dict``
+or ``dict``-like structures that contain the data once available (usually
+subclasses of :class:`~charmhelpers.core.services.helpers.RelationContext`
+or wrappers around :func:`charmhelpers.core.hookenv.config`), and the actions
+are just callbacks that are passed the service name for which they are executing
+(or a subclass of :class:`~charmhelpers.core.services.base.ManagerCallback`
+for more complex cases).
+
+An example service definition might be::
+
+    service_definitions = [
+        {
+            'service': 'wordpress',
+            'ports': [80],
+            'required_data': [config(), MySQLRelation()],
+            'data_ready': [
+                actions.install_frontend,
+                services.render_template(source='wp-config.php.j2',
+                                         target=os.path.join(WP_INSTALL_DIR, 'wp-config.php')),
+                services.render_template(source='wordpress.upstart.j2',
+                                         target='/etc/init/wordpress'),
+            ],
+        },
+    ]
+
+Each time a hook is fired, the conditions will be checked (in this case, just
+that MySQL is available) and, if met, the appropriate actions taken (correct
+front-end installed, config files written / updated, and the Upstart job
+(re)started, implicitly).
+
+
+Required Data Contexts
+----------------------
+
+Required data contexts are, at the most basic level, just dictionaries, and
+if they evaluate as True (i.e., if they contain data), their condition is
+considered to be met. A simple sentinel could just be a function that returns
+data if available or an empty ``dict`` otherwise.
+
+For the common case of gathering data from relations, the
+:class:`~charmhelpers.core.services.helpers.RelationContext` base class gathers
+data from a named relation and checks for a set of required keys to be present
+and set on the relation before considering that relation complete. For example,
+a basic MySQL context might be::
+
+    class MySQLRelation(RelationContext):
+        name = 'db'
+        interface = 'mysql'
+        required_keys = ['host', 'user', 'password', 'database']
+
+Because there could potentially be multiple units on a given relation, and
+to prevent conflicts when the data contexts are merged to be sent to templates
+(see below), the data for a ``RelationContext`` is nested in the following way::
+
+    relation[relation.name][unit_number][relation_key]
+
+For example, to get the host of the first MySQL unit (``mysql/0``)::
+
+    mysql = MySQLRelation()
+    unit_0_host = mysql[mysql.name][0]['host']
+
+Note that only units that have set values for all of the required keys are
+included in the list, and if no units have set all of the required keys,
+instantiating the ``RelationContext`` will result in an empty list.
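+Config options can serve as required data in the same way. A minimal sketch
+of a wrapper around :func:`charmhelpers.core.hookenv.config` (the class name
+and the ``app-name`` option are invented for illustration)::
+
+    from charmhelpers.core import hookenv
+
+    class AppConfigContext(dict):
+        """Empty (and therefore not ready) until 'app-name' is set."""
+        def __init__(self):
+            super(AppConfigContext, self).__init__()
+            app_name = hookenv.config('app-name')
+            if app_name:
+                self['app_name'] = app_name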
+
+
+Data-Ready Actions
+------------------
+
+When a hook is triggered and all of the ``required_data`` contexts are complete,
+the list of "data ready" actions is executed. These callbacks are passed
+the service name from the ``service`` key of the service definition for which
+they are running, and are responsible for (re)configuring the service
+according to the required data.
+
+The most common action should be to render a config file from a template.
+The :class:`render_template ` helper will merge all of the
+``required_data`` contexts and render a `Jinja2 `_ template with the
+combined data. For example, to render a list of DSNs for units on the db
+relation, the template should include::
+
+    databases: [
+    {% for unit in db %}
+        "mysql://{{unit['user']}}:{{unit['password']}}@{{unit['host']}}/{{unit['database']}}",
+    {% endfor %}
+    ]
+
+Note that the actions need to be idempotent, since they will all be re-run
+if anything about the charm changes (that is, if a hook is triggered). That
+is why rendering a template is preferred over editing a file via regular
+expression substitutions.
+
+Also note that the actions are not responsible for starting the service; there
+are separate ``start`` and ``stop`` options that default to starting and stopping
+an Upstart service with the name given by the ``service`` value.
+
+
+Conclusion
+----------
+
+By using this framework, it is easy to see what the preconditions for the charm
+are, and there is never a concern about things being in a partially configured
+state. As a charm author, you can focus on what is important to you: what
+data is mandatory, what is optional, and what actions should be taken once
+the requirements are met.
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/getting-started.rst b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/getting-started.rst
new file mode 100644
index 0000000000000000000000000000000000000000..785fc8693eb5f74ab1b72ffe1affb8804a182a5c
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/getting-started.rst
@@ -0,0 +1,152 @@
+Getting Started
+===============
+
+For a video introduction to ``charmhelpers``, check out this
+`Charm School session `_. To start
+using ``charmhelpers``, proceed with the instructions on the remainder of this
+page.
+
+Installing Charm Tools
+----------------------
+
+First, follow `these instructions `_
+to install the ``charm-tools`` package for your platform.
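+For example, on Ubuntu the package has typically been available via apt
+(shown here as an illustration; prefer the platform-specific instructions
+linked above)::
+
+    $ sudo apt-get install charm-tools
+
+You can confirm that the tool is on your ``PATH`` by asking it for help::
+
+    $ charm create --help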
+
+Creating a New Charm
+--------------------
+
+::
+
+    $ cd ~
+    $ mkdir -p charms/precise
+    $ cd charms/precise
+    $ charm create -t python mycharm
+    INFO: Generating template for mycharm in ./mycharm
+    INFO: No mycharm in apt cache; creating an empty charm instead.
+    Symlink all hooks to one python source file? [yN] y
+    INFO:root:Loading charm helper config from charm-helpers.yaml.
+    INFO:root:Checking out lp:charm-helpers to /tmp/tmpPAqUyN/charm-helpers.
+    Branched 160 revisions.
+    INFO:root:Syncing directory: /tmp/tmpPAqUyN/charm-helpers/charmhelpers/core -> lib/charmhelpers/core.
+    INFO:root:Adding missing __init__.py: lib/charmhelpers/__init__.py
+
+Let's see what our new charm looks like::
+
+    $ tree mycharm/
+    mycharm/
+    ├── charm-helpers.yaml
+    ├── config.yaml
+    ├── hooks
+    │   ├── config-changed -> hooks.py
+    │   ├── hooks.py
+    │   ├── install -> hooks.py
+    │   ├── start -> hooks.py
+    │   ├── stop -> hooks.py
+    │   └── upgrade-charm -> hooks.py
+    ├── icon.svg
+    ├── lib
+    │   └── charmhelpers
+    │       ├── core
+    │       │   ├── fstab.py
+    │       │   ├── hookenv.py
+    │       │   ├── host.py
+    │       │   └── __init__.py
+    │       └── __init__.py
+    ├── metadata.yaml
+    ├── README.ex
+    ├── revision
+    ├── scripts
+    │   └── charm_helpers_sync.py
+    └── tests
+        ├── 00-setup
        └── 10-deploy
+
+    6 directories, 20 files
+
+The ``charmhelpers`` code is bundled in our charm in the ``lib/`` directory.
+All of our Python code will go in ``hooks/hooks.py``. A look at that file
+reveals that ``charmhelpers`` has been added to the Python path and imported
+for us::
+
+    $ head mycharm/hooks/hooks.py -n11
+    #!/usr/bin/python
+
+    import os
+    import sys
+
+    sys.path.insert(0, os.path.join(os.environ['CHARM_DIR'], 'lib'))
+
+    from charmhelpers.core import (
+        hookenv,
+        host,
+    )
+
+Updating Charmhelpers Packages
+------------------------------
+
+By default, a new charm installs only the ``charmhelpers.core`` package, but
+other packages are available (for a complete list, see :doc:`api/charmhelpers`).
+The installed packages are controlled by the ``charm-helpers.yaml`` file in our
+charm::
+
+    $ cd mycharm
+    $ cat charm-helpers.yaml
+    destination: lib/charmhelpers
+    branch: lp:charm-helpers
+    include:
+        - core
+
+Let's update this file to include some more packages::
+
+    $ vim charm-helpers.yaml
+    $ cat charm-helpers.yaml
+    destination: lib/charmhelpers
+    branch: lp:charm-helpers
+    include:
+        - core
+        - contrib.storage
+        - fetch
+
+Now we need to download the new packages into our charm::
+
+    $ ./scripts/charm_helpers_sync.py -c charm-helpers.yaml
+    INFO:root:Loading charm helper config from charm-helpers.yaml.
+    INFO:root:Checking out lp:charm-helpers to /tmp/tmpT38Y87/charm-helpers.
+    Branched 160 revisions.
+    INFO:root:Syncing directory: /tmp/tmpT38Y87/charm-helpers/charmhelpers/core -> lib/charmhelpers/core.
+    INFO:root:Syncing directory: /tmp/tmpT38Y87/charm-helpers/charmhelpers/contrib/storage -> lib/charmhelpers/contrib/storage.
+    INFO:root:Adding missing __init__.py: lib/charmhelpers/contrib/__init__.py
+    INFO:root:Syncing directory: /tmp/tmpT38Y87/charm-helpers/charmhelpers/fetch -> lib/charmhelpers/fetch.
+
+A look at our charmhelpers directory reveals that the new packages have indeed
+been added.
We are now free to import and use them in our charm:: + + $ tree lib/charmhelpers/ + lib/charmhelpers/ + ├── contrib + │   ├── __init__.py + │   └── storage + │   ├── __init__.py + │   └── linux + │   ├── ceph.py + │   ├── __init__.py + │   ├── loopback.py + │   ├── lvm.py + │   └── utils.py + ├── core + │   ├── fstab.py + │   ├── hookenv.py + │   ├── host.py + │   └── __init__.py + ├── fetch + │   ├── archiveurl.py + │   ├── bzrurl.py + │   └── __init__.py + └── __init__.py + + 5 directories, 15 files + +Next Steps +---------- + +Now that you have access to ``charmhelpers`` in your charm, check out the +:doc:`example-index` or :doc:`api/charmhelpers` to learn about all the great +functionality that ``charmhelpers`` provides. diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/index.rst b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/index.rst new file mode 100644 index 0000000000000000000000000000000000000000..7a75555cdc4afc43e53a4da3afd9b5751cb9f3ad --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/docs/index.rst @@ -0,0 +1,41 @@ +.. Charm Helpers documentation master file, created by + sphinx-quickstart on Fri Jun 6 10:34:44 2014. + You can adapt this file completely to your liking, but it should at least + contain the root `toctree` directive. + +Charm Helpers Documentation +=========================== + +The ``charmhelpers`` Python library is an extensive collection of functions and classes +for simplifying the development of `Juju Charms`_. It includes utilities for: + +* Interacting with the host environment +* Managing hook events +* Reading and writing charm configuration +* Installing dependencies +* Much, much more! + +.. toctree:: + :maxdepth: 2 + + getting-started + example-index + api/charmhelpers + +.. toctree:: + :caption: Project + :glob: + :maxdepth: 3 + + contributing + changelog + + +Indices and tables +================== + +* :ref:`genindex` +* :ref:`modindex` + + +.. 
_Juju Charms: https://juju.ubuntu.com/docs/
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/requirements.txt b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/requirements.txt
new file mode 100644
index 0000000000000000000000000000000000000000..97dbebaed2bd8c992d55d7446095a2a02772687b
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/requirements.txt
@@ -0,0 +1,16 @@
+# Runtime Dependencies
+
+# https://pyyaml.org/wiki/PyYAML#history
+# PyYAML==5.2 is the last release supported for py34
+PyYAML
+
+# https://jinja.palletsprojects.com/en/2.11.x/changelog/
+# Jinja2==2.10 is the last release supported for py34 (trusty)
+# Jinja2==2.11 is the last release supported for py27 & py35 (xenial)
+Jinja2
+
+six
+netaddr
+Tempita
+
+pbr!=2.1.0,>=2.0.0  # Apache-2.0
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/scripts/README b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/scripts/README
new file mode 100644
index 0000000000000000000000000000000000000000..baaf91962d2a0ddc2fb6b5fb1824a2081df8839a
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/scripts/README
@@ -0,0 +1 @@
+This directory contains scripts for managing the charmhelpers project.
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/scripts/update-revno b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/scripts/update-revno
new file mode 100755
index 0000000000000000000000000000000000000000..48e6bba99da16156bba0ef75cf0008b0c35d8663
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/scripts/update-revno
@@ -0,0 +1,11 @@
+#!/bin/bash
+VERSION=$(cat VERSION)
+REVNO=$(bzr revno)
+# bzr diff exits non-zero when the tree has uncommitted changes
+bzr di &>/dev/null
+if [ $? -ne 0
]; then + REVNO="${REVNO}+" +fi +cat << EOF > charmhelpers/version.py +CHARMHELPERS_VERSION = '${VERSION}' +CHARMHELPERS_BZRREVNO = '${REVNO}' +EOF diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/setup.cfg b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/setup.cfg new file mode 100644 index 0000000000000000000000000000000000000000..af820fb78782d5051c8b18e1ed7c2e77c04310eb --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/setup.cfg @@ -0,0 +1,36 @@ +[metadata] +name = charmhelpers +summary = Helpers for Juju Charm development +description-file = + README.rst +author = Charmers +author-email = juju@lists.ubuntu.com +home-page = https://github.com/juju/charm-helpers +classifier = + Intended Audience :: Information Technology + Intended Audience :: System Administrators + License :: OSI Approved :: Apache Software License + Operating System :: POSIX :: Linux + Programming Language :: Python + Programming Language :: Python :: 2 + Programming Language :: Python :: 2.7 + Programming Language :: Python :: 3 + Programming Language :: Python :: 3.6 + Programming Language :: Python :: 3.7 + Programming Language :: Python :: 3.8 + +[files] +packages = + charmhelpers +scripts = + bin/chlp + bin/contrib/charmsupport/charmsupport + bin/contrib/saltstack/salt-call + +[nosetests] +with-coverage=1 +cover-erase=1 +cover-package=charmhelpers,tools + +[upload_sphinx] +upload-dir = docs/_build/html diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/setup.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/setup.py new file mode 100644 index 0000000000000000000000000000000000000000..9d8594c18214533b15c511991656497131985c61 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/setup.py @@ -0,0 +1,27 @@ +# Copyright 2016 Canonical Ltd +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import setuptools + +# In python < 2.7.4, a lazy loading of package `pbr` will break +# setuptools if some other modules registered functions in `atexit`. +# solution from: http://bugs.python.org/issue15881#msg170215 +try: + import multiprocessing # noqa +except ImportError: + pass + +setuptools.setup( + setup_requires=['pbr>=2.0.0'], + pbr=True) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tarmac_tests.sh b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tarmac_tests.sh new file mode 100755 index 0000000000000000000000000000000000000000..28f7e5a4432b97d9a4b4b7f06f1bb0e3538fb560 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tarmac_tests.sh @@ -0,0 +1,13 @@ +#!/bin/sh +# How the tests are run in Jenkins by Tarmac + +set -e + +pkgs='python-flake8 python-shelltoolbox python-tempita python-nose python-mock python-testtools python-jinja2 python-coverage python-git python-netifaces python-netaddr python-pip zip' +if ! 
dpkg -s $pkgs 2>/dev/null >/dev/null ; then + echo "Required packages are missing. Please ensure that the missing packages are installed." + echo "Run: sudo apt-get install $pkgs" + exit 1 +fi + +make build diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/test-requirements.txt b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/test-requirements.txt new file mode 100644 index 0000000000000000000000000000000000000000..733801e1ba5ff896eecb7d80ee742b8e4cc50e7d --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/test-requirements.txt @@ -0,0 +1,37 @@ +# Test-only dependencies are unpinned. +# +git+https://git.launchpad.net/ubuntu/+source/python-distutils-extra +pip +coverage>=3.6 +mock>=1.0.1,<1.1.0 +nose>=1.3.1 +flake8 +testtools==0.9.14 # Before dependent on modern 'six' +amulet +distro-info +sphinx_rtd_theme +ipaddress;python_version<'3.0' # Py27 unit test requirement + + +########################################################## +# Specify versions of runtime dependencies where possible. +# The requirements.txt file cannot be so specific + +# https://pyyaml.org/wiki/PyYAML#history +# PyYAML==5.2 is last supported for py34 +PyYAML==5.2;python_version >= '3.0' and python_version <= '3.4' # py3 trusty +PyYAML; python_version == '2.7' or python_version >= '3.5' # all else + +# https://jinja.palletsprojects.com/en/2.11.x/changelog/ +# Jinja2==2.10 is last supported for py34 +# Jinja2==2.11 is last supported for py27 & py35 +Jinja2==2.10;python_version >= '3.0' and python_version <= '3.4' # py3 trusty +Jinja2==2.11;python_version == '2.7' or python_version == '3.5' # py27, py35 +Jinja2; python_version >= '3.6' # py36 and on + +############################################################## + +netifaces==0.10 # trusty is 0.8, but using py3 compatible version for tests. 
+psutil==1.2.1 # trusty +python-keystoneclient==2.3.2 # xenial +dnspython==1.11.1 # trusty diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..5707debfde11bad2916f95be62df9f24a0306efa --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/__init__.py @@ -0,0 +1,16 @@ +import sys +import mock + + +sys.modules['yum'] = mock.MagicMock() +sys.modules['sriov_netplan_shim'] = mock.MagicMock() +sys.modules['sriov_netplan_shim.pci'] = mock.MagicMock() +with mock.patch('charmhelpers.deprecate') as ch_deprecate: + def mock_deprecate(warning, date=None, log=None): + def mock_wrap(f): + def wrapped_f(*args, **kwargs): + return f(*args, **kwargs) + return wrapped_f + return mock_wrap + ch_deprecate.side_effect = mock_deprecate + import charmhelpers.contrib.openstack.utils as openstack # noqa: F401 diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/cli/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/cli/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/cli/test_cmdline.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/cli/test_cmdline.py new file mode 100644 index 0000000000000000000000000000000000000000..549e70c025836268d2f52c4a2653669cd2d81848 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/cli/test_cmdline.py @@ -0,0 +1,236 @@ +"""Tests for the commandant code that analyzes a function signature to +determine the parameters to argparse.""" + +from unittest import TestCase +from mock import ( + patch, + MagicMock, + ANY, +) +import json +from pprint import pformat +import yaml +import csv + +from six import StringIO + +from charmhelpers import cli + + +class SubCommandTest(TestCase): + """Test creation of subcommands""" + + def setUp(self): + super(SubCommandTest, self).setUp() + self.cl = cli.CommandLine() + + @patch('sys.exit') + def test_subcommand_wrapper(self, _sys_exit): + """Test function name detection""" + @self.cl.subcommand() + def payload(): + "A function that does work." + pass + args = self.cl.argument_parser.parse_args(['payload']) + self.assertEqual(args.func, payload) + self.assertEqual(_sys_exit.mock_calls, []) + + @patch('sys.exit') + def test_subcommand_wrapper_bogus_arguments(self, _sys_exit): + """Test function name detection""" + @self.cl.subcommand() + def payload(): + "A function that does work." + pass + with self.assertRaises(TypeError): + with patch("sys.argv", "tests deliberately bad input".split()): + with patch("sys.stderr"): + self.cl.argument_parser.parse_args() + _sys_exit.assert_called_once_with(2) + + @patch('sys.exit') + def test_subcommand_wrapper_cmdline_options(self, _sys_exit): + """Test detection of positional arguments and optional parameters.""" + @self.cl.subcommand() + def payload(x, y=None): + "A function that does work." 
+ return x + args = self.cl.argument_parser.parse_args(['payload', 'positional', '--y=optional']) + self.assertEqual(args.func, payload) + self.assertEqual(args.x, 'positional') + self.assertEqual(args.y, 'optional') + self.assertEqual(_sys_exit.mock_calls, []) + + @patch('sys.exit') + def test_subcommand_builder(self, _sys_exit): + def noop(z): + pass + + @self.cl.subcommand_builder('payload', description="A subcommand") + def payload_command(subparser): + subparser.add_argument('-z', action='store_true') + return noop + + args = self.cl.argument_parser.parse_args(['payload', '-z']) + self.assertEqual(args.func, noop) + self.assertTrue(args.z) + self.assertFalse(_sys_exit.called) + + def test_subcommand_builder_bogus_wrapped_args(self): + with self.assertRaises(TypeError): + @self.cl.subcommand_builder('payload', description="A subcommand") + def payload_command(subparser, otherarg): + pass + + def test_run(self): + self.bar_called = False + + @self.cl.subcommand() + def bar(x, y=None, *vargs): + "A function that does work." + self.assertEqual(x, 'baz') + self.assertEqual(y, 'why') + self.assertEqual(vargs, ('mux', 'zob')) + self.bar_called = True + return "qux" + + args = ['chlp', 'bar', '--y', 'why', 'baz', 'mux', 'zob'] + self.cl.formatter = MagicMock() + with patch("sys.argv", args): + with patch("charmhelpers.core.unitdata._KV") as _KV: + self.cl.run() + assert _KV.flush.called + self.assertTrue(self.bar_called) + self.cl.formatter.format_output.assert_called_once_with('qux', ANY) + + def test_no_output(self): + self.bar_called = False + + @self.cl.subcommand() + @self.cl.no_output + def bar(x, y=None, *vargs): + "A function that does work." + self.bar_called = True + return "qux" + + args = ['foo', 'bar', 'baz'] + self.cl.formatter = MagicMock() + with patch("sys.argv", args): + self.cl.run() + self.assertTrue(self.bar_called) + self.cl.formatter.format_output.assert_called_once_with('', ANY) + + def test_test_command(self): + self.bar_called = False + self.bar_result = True + + @self.cl.subcommand() + @self.cl.test_command + def bar(x, y=None, *vargs): + "A function that does work." 
+ self.bar_called = True + return self.bar_result + + args = ['foo', 'bar', 'baz'] + self.cl.formatter = MagicMock() + with patch("sys.argv", args): + self.cl.run() + self.assertTrue(self.bar_called) + self.assertEqual(self.cl.exit_code, 0) + self.cl.formatter.format_output.assert_called_once_with('', ANY) + + self.bar_result = False + with patch("sys.argv", args): + self.cl.run() + self.assertEqual(self.cl.exit_code, 1) + + +class OutputFormatterTest(TestCase): + def setUp(self): + super(OutputFormatterTest, self).setUp() + self.expected_formats = ( + "raw", + "json", + "py", + "yaml", + "csv", + "tab", + ) + self.outfile = StringIO() + self.of = cli.OutputFormatter(outfile=self.outfile) + self.output_data = {"this": "is", "some": 1, "data": dict()} + + def test_supports_formats(self): + self.assertEqual(sorted(self.expected_formats), + sorted(self.of.supported_formats)) + + def test_adds_arguments(self): + ap = MagicMock() + arg_group = MagicMock() + add_arg = MagicMock() + arg_group.add_argument = add_arg + ap.add_mutually_exclusive_group.return_value = arg_group + self.of.add_arguments(ap) + + self.assertTrue(add_arg.called) + + for call_args in add_arg.call_args_list: + if "--format" in call_args[0]: + self.assertEqual(sorted(call_args[1]['choices']), + sorted(self.expected_formats)) + self.assertEqual(call_args[1]['default'], 'raw') + break + else: + print(arg_group.call_args_list) + self.fail("No --format argument was created") + + all_args = [c[0][0] for c in add_arg.call_args_list] + all_args.extend([c[0][1] for c in add_arg.call_args_list if len(c[0]) > 1]) + for fmt in self.expected_formats: + self.assertIn("-{}".format(fmt[0]), all_args) + self.assertIn("--{}".format(fmt), all_args) + + def test_outputs_raw(self): + self.of.raw(self.output_data) + self.outfile.seek(0) + self.assertEqual(self.outfile.read(), str(self.output_data)) + + def test_outputs_json(self): + self.of.json(self.output_data) + self.outfile.seek(0) + self.assertEqual(self.outfile.read(), json.dumps(self.output_data)) + + def test_outputs_py(self): + self.of.py(self.output_data) + self.outfile.seek(0) + self.assertEqual(self.outfile.read(), pformat(self.output_data) + "\n") + + def test_outputs_yaml(self): + self.of.yaml(self.output_data) + self.outfile.seek(0) + self.assertEqual(self.outfile.read(), yaml.dump(self.output_data)) + + def test_outputs_csv(self): + sample = StringIO() + writer = csv.writer(sample) + writer.writerows(self.output_data) + sample.seek(0) + self.of.csv(self.output_data) + self.outfile.seek(0) + self.assertEqual(self.outfile.read(), sample.read()) + + def test_outputs_tab(self): + sample = StringIO() + writer = csv.writer(sample, dialect=csv.excel_tab) + writer.writerows(self.output_data) + sample.seek(0) + self.of.tab(self.output_data) + self.outfile.seek(0) + self.assertEqual(self.outfile.read(), sample.read()) + + def test_formats_output(self): + for format in self.expected_formats: + mock_f = MagicMock() + setattr(self.of, format, mock_f) + self.of.format_output(self.output_data, format) + mock_f.assert_called_with(self.output_data) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/cli/test_function_signature_analysis.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/cli/test_function_signature_analysis.py new file mode 100644 index 0000000000000000000000000000000000000000..515d9ceca36b9bbc1b97901231b1ba60b7a13774 --- /dev/null +++ 
b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/cli/test_function_signature_analysis.py @@ -0,0 +1,46 @@ +"""Tests for the commandant code that analyzes a function signature to +determine the parameters to argparse.""" + +from testtools import TestCase + +from charmhelpers import cli + + +class FunctionSignatureTest(TestCase): + """Test a variety of function signatures.""" + + def test_positional_arguments(self): + """Finite number of order-dependent required arguments.""" + argparams = tuple(cli.describe_arguments(lambda x, y, z: False)) + self.assertEqual(3, len(argparams)) + for argspec in ((('x',), {}), (('y',), {}), (('z',), {})): + self.assertIn(argspec, argparams) + + def test_keyword_arguments(self): + """Function has optional parameters with default values.""" + argparams = tuple(cli.describe_arguments(lambda x, y=3, z="bar": False)) + self.assertEqual(3, len(argparams)) + for argspec in ((('x',), {}), + (('--y',), {"default": 3}), + (('--z',), {"default": "bar"})): + self.assertIn(argspec, argparams) + + def test_varargs(self): + """Function has a splat-operator parameter to catch an arbitrary number + of positional parameters.""" + argparams = tuple(cli.describe_arguments( + lambda x, y=3, *z: False)) + self.assertEqual(3, len(argparams)) + for argspec in ((('x',), {}), + (('--y',), {"default": 3}), + (('z',), {"nargs": "*"})): + self.assertIn(argspec, argparams) + + def test_keyword_splat_missing(self): + """Double-splat arguments can't be represented in the current version + of commandant.""" + args = cli.describe_arguments(lambda x, y=3, *z, **missing: False) + for opts, _ in args: + # opts should be ('varname',) at this point + self.assertTrue(len(opts) == 1) + self.assertNotIn('missing', opts) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/context/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/context/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/context/test_context.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/context/test_context.py new file mode 100644 index 0000000000000000000000000000000000000000..35543015a2ed17024fb34c36ad128c2e11a4a5da --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/context/test_context.py @@ -0,0 +1,199 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
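+# These unit tests exercise charmhelpers.context: the Relations /
+# RelationInfo wrappers around hookenv relation data, and the Leader
+# key/value interface.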
+import unittest
+from mock import patch, sentinel
+import six
+
+from charmhelpers import context
+from charmhelpers.core import hookenv
+
+
+class TestRelations(unittest.TestCase):
+    def setUp(self):
+        def install(*args, **kw):
+            p = patch.object(*args, **kw)
+            p.start()
+            self.addCleanup(p.stop)
+
+        install(hookenv, 'relation_types', return_value=['rel', 'pear'])
+        install(hookenv, 'peer_relation_id', return_value='pear:9')
+        install(hookenv, 'relation_ids',
+                side_effect=lambda x: ['{}:{}'.format(x, i)
+                                       for i in range(9, 11)])
+        install(hookenv, 'related_units',
+                side_effect=lambda x: ['svc_' + x.replace(':', '/')])
+        install(hookenv, 'local_unit', return_value='foo/1')
+        install(hookenv, 'relation_get')
+        install(hookenv, 'relation_set')
+        # install(hookenv, 'is_leader', return_value=False)
+
+    def test_relations(self):
+        rels = context.Relations()
+        self.assertListEqual(list(rels.keys()),
+                             ['pear', 'rel'])  # Ordered alphabetically
+        self.assertListEqual(list(rels['rel'].keys()),
+                             ['rel:9', 'rel:10'])  # Ordered numerically
+
+        # Relation data is loaded on demand, not on instantiation.
+        self.assertFalse(hookenv.relation_get.called)
+
+        # But we did have to retrieve some lists of units etc.
+        self.assertGreaterEqual(hookenv.relation_ids.call_count, 2)
+        self.assertGreaterEqual(hookenv.related_units.call_count, 2)
+
+    def test_relations_peer(self):
+        # The Relations instance has a short cut to the peer relation.
+        # If the charm has managed to get multiple peer relations,
+        # it returns the 'primary' one used here and returned by
+        # hookenv.peer_relation_id()
+        rels = context.Relations()
+        self.assertIs(rels.peer, rels['pear']['pear:9'])
+
+    def test_relation(self):
+        rel = context.Relations()['rel']['rel:9']
+        self.assertEqual(rel.relid, 'rel:9')
+        self.assertEqual(rel.relname, 'rel')
+        self.assertEqual(rel.service, 'svc_rel')
+        self.assertTrue(isinstance(rel.local, context.RelationInfo))
+        self.assertEqual(rel.local.unit, hookenv.local_unit())
+        self.assertTrue(isinstance(rel.peers, context.OrderedDict))
+        self.assertTrue(len(rel.peers), 2)
+        self.assertTrue(isinstance(rel.peers['svc_pear/9'],
+                                   context.RelationInfo))
+
+        # I use this in my log messages. Relation id for identity
+        # plus service name for ease of reference.
+        self.assertEqual(str(rel), 'rel:9 (svc_rel)')
+
+    def test_relation_no_peer_relation(self):
+        hookenv.peer_relation_id.return_value = None
+        rel = context.Relation('rel:10')
+        self.assertTrue(rel.peers is None)
+
+    def test_relation_no_peers(self):
+        hookenv.related_units.side_effect = None
+        hookenv.related_units.return_value = []
+        rel = context.Relation('rel:10')
+        self.assertDictEqual(rel.peers, {})
+
+    def test_peer_relation(self):
+        peer_rel = context.Relations().peer
+        # The peer relation does not have a 'peers' property. We
+        # could give it one for symmetry, but it seems somewhat silly.
+ self.assertTrue(peer_rel.peers is None) + + def test_relationinfo(self): + hookenv.relation_get.return_value = {sentinel.key: 'value'} + r = context.RelationInfo('rel:10', 'svc_rel/9') + + self.assertEqual(r.relname, 'rel') + self.assertEqual(r.relid, 'rel:10') + self.assertEqual(r.unit, 'svc_rel/9') + self.assertEqual(r.service, 'svc_rel') + self.assertEqual(r.number, 9) + + self.assertFalse(hookenv.relation_get.called) + self.assertEqual(r[sentinel.key], 'value') + hookenv.relation_get.assert_called_with(unit='svc_rel/9', rid='rel:10') + + # Updates fail + with self.assertRaises(TypeError): + r['newkey'] = 'foo' + + # Deletes fail + with self.assertRaises(TypeError): + del r[sentinel.key] + + # I use this for logging. + self.assertEqual(str(r), 'rel:10 (svc_rel/9)') + + def test_relationinfo_local(self): + r = context.RelationInfo('rel:10', hookenv.local_unit()) + + # Updates work, with standard strings. + r[sentinel.key] = 'value' + hookenv.relation_set.assert_called_once_with( + 'rel:10', {sentinel.key: 'value'}) + + # Python 2 unicode strings work too. + hookenv.relation_set.reset_mock() + r[sentinel.key] = six.u('value') + hookenv.relation_set.assert_called_once_with( + 'rel:10', {sentinel.key: six.u('value')}) + + # Byte strings fail under Python 3. + if six.PY3: + with self.assertRaises(ValueError): + r[sentinel.key] = six.b('value') + + # Deletes work + del r[sentinel.key] + hookenv.relation_set.assert_called_with('rel:10', {sentinel.key: None}) + + # Attempting to write a non-string fails + with self.assertRaises(ValueError): + r[sentinel.key] = 42 + + +class TestLeader(unittest.TestCase): + @patch.object(hookenv, 'leader_get') + def test_get(self, leader_get): + leader_get.return_value = {'a_key': 'a_value'} + + leader = context.Leader() + self.assertEqual(leader['a_key'], 'a_value') + leader_get.assert_called_with() + + with self.assertRaises(KeyError): + leader['missing'] + + @patch.object(hookenv, 'leader_set') + @patch.object(hookenv, 'leader_get') + @patch.object(hookenv, 'is_leader') + def test_set(self, is_leader, leader_get, leader_set): + is_leader.return_value = True + leader = context.Leader() + + # Updates work + leader[sentinel.key] = 'foo' + leader_set.assert_called_with({sentinel.key: 'foo'}) + del leader[sentinel.key] + leader_set.assert_called_with({sentinel.key: None}) + + # Python 2 unicode string values work too + leader[sentinel.key] = six.u('bar') + leader_set.assert_called_with({sentinel.key: 'bar'}) + + # Byte strings fail under Python 3 + if six.PY3: + with self.assertRaises(ValueError): + leader[sentinel.key] = six.b('baz') + + # Non strings fail, as implicit casting causes more trouble + # than it solves. Simple types like integers would round trip + # back as strings. 
+ with self.assertRaises(ValueError): + leader[sentinel.key] = 42 + + @patch.object(hookenv, 'leader_set') + @patch.object(hookenv, 'leader_get') + @patch.object(hookenv, 'is_leader') + def test_set_not_leader(self, is_leader, leader_get, leader_set): + is_leader.return_value = False + leader_get.return_value = {'a_key': 'a_value'} + leader = context.Leader() + with self.assertRaises(TypeError): + leader['a_key'] = 'foo' + with self.assertRaises(TypeError): + del leader['a_key'] diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/amulet/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/amulet/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/amulet/test_utils.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/amulet/test_utils.py new file mode 100644 index 0000000000000000000000000000000000000000..78d3ecd110bea991a18312aed9c6ad66db9733aa --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/amulet/test_utils.py @@ -0,0 +1,370 @@ +# Copyright 2015 Canonical Ltd. +# +# Authors: +# Adam Collard + +from contextlib import contextmanager +from mock import patch +import sys +import unittest + +import six + +from charmhelpers.contrib.amulet.utils import ( + AmuletUtils, + amulet, +) + + +@contextmanager +def captured_output(): + """Simple context manager to capture stdout/stderr. + + Source: http://stackoverflow.com/a/17981937/56219. + """ + new_out, new_err = six.StringIO(), six.StringIO() + old_out, old_err = sys.stdout, sys.stderr + try: + sys.stdout, sys.stderr = new_out, new_err + yield sys.stdout, sys.stderr + finally: + sys.stdout, sys.stderr = old_out, old_err + + +class FakeSentry(object): + + def __init__(self, name="foo"): + self.commands = {} + self.info = {"unit_name": name} + + def run(self, command): + return self.commands[command] + + def ssh(self, command): + return self.commands[command] + + def run_action(self, action, action_args=None): + return 'action-id' + + +class ValidateServicesByNameTestCase(unittest.TestCase): + + def setUp(self): + self.utils = AmuletUtils() + self.sentry_unit = FakeSentry() + + def test_errors_for_unknown_upstart_service(self): + """ + Returns a message if the Upstart service is unknown. + """ + self.sentry_unit.commands["lsb_release -cs"] = "trusty", 0 + self.sentry_unit.commands["sudo status foo"] = ( + "status: Unknown job: foo", 1) + + result = self.utils.validate_services_by_name( + {self.sentry_unit: ["foo"]}) + self.assertIsNotNone(result) + + def test_none_for_started_upstart_service(self): + """ + Returns None if the Upstart service is running. 
+ """ + self.sentry_unit.commands["lsb_release -cs"] = "trusty", 0 + self.sentry_unit.commands["sudo status foo"] = ( + "foo start/running, process 42", 0) + + result = self.utils.validate_services_by_name( + {self.sentry_unit: ["foo"]}) + self.assertIsNone(result) + + def test_errors_for_stopped_upstart_service(self): + """ + Returns a message if the Upstart service is stopped. + """ + self.sentry_unit.commands["lsb_release -cs"] = "trusty", 0 + self.sentry_unit.commands["sudo status foo"] = "foo stop/waiting", 0 + + result = self.utils.validate_services_by_name( + {self.sentry_unit: ["foo"]}) + self.assertIsNotNone(result) + + def test_errors_for_unknown_systemd_service(self): + """ + Returns a message if a systemd service is unknown. + """ + self.sentry_unit.commands["lsb_release -cs"] = "vivid", 0 + self.sentry_unit.commands["sudo service foo status"] = (u"""\ +\u25cf foo.service + Loaded: not-found (Reason: No such file or directory) + Active: inactive (dead) +""", 3) + + result = self.utils.validate_services_by_name({ + self.sentry_unit: ["foo"]}) + self.assertIsNotNone(result) + + def test_none_for_started_systemd_service(self): + """ + Returns None if a systemd service is running. + """ + self.sentry_unit.commands["lsb_release -cs"] = "vivid", 0 + self.sentry_unit.commands["sudo service foo status"] = (u"""\ +\u25cf foo.service - Foo + Loaded: loaded (/lib/systemd/system/foo.service; enabled) + Active: active (exited) since Thu 1970-01-01 00:00:00 UTC; 42h 42min ago + Main PID: 3 (code=exited, status=0/SUCCESS) + CGroup: /system.slice/foo.service +""", 0) + result = self.utils.validate_services_by_name( + {self.sentry_unit: ["foo"]}) + self.assertIsNone(result) + + def test_errors_for_stopped_systemd_service(self): + """ + Returns a message if a systemd service is stopped. + """ + self.sentry_unit.commands["lsb_release -cs"] = "vivid", 0 + self.sentry_unit.commands["sudo service foo status"] = (u"""\ +\u25cf foo.service - Foo + Loaded: loaded (/lib/systemd/system/foo.service; disabled) + Active: inactive (dead) +""", 3) + result = self.utils.validate_services_by_name( + {self.sentry_unit: ["foo"]}) + self.assertIsNotNone(result) + + +class RunActionTestCase(unittest.TestCase): + + def setUp(self): + self.utils = AmuletUtils() + self.sentry_unit = FakeSentry() + + def test_returns_action_id(self): + """Returns action_id.""" + + self.assertEqual("action-id", self.utils.run_action( + self.sentry_unit, "foo")) + + +class WaitActionTestCase(unittest.TestCase): + + def setUp(self): + self.utils = AmuletUtils() + + @patch.object(amulet.actions, "get_action_output") + def test_returns_true_if_completed(self, get_action_output): + """JSON output is parsed and returns True if the action completed.""" + + get_action_output.return_value = {"status": "completed"} + + self.assertTrue(self.utils.wait_on_action("action-id")) + get_action_output.assert_called_with("action-id", full_output=True) + + @patch.object(amulet.actions, "get_action_output") + def test_returns_false_if_still_running(self, get_action_output): + """ + JSON output is parsed and returns False if the action is still running. + """ + get_action_output.return_value = {"status": "running"} + + self.assertFalse(self.utils.wait_on_action("action-id")) + get_action_output.assert_called_with("action-id", full_output=True) + + @patch.object(amulet.actions, "get_action_output") + def test_returns_false_if_no_status(self, get_action_output): + """ + JSON output is parsed and returns False if there is no action status. 
+ """ + get_action_output.return_value = {} + + self.assertFalse(self.utils.wait_on_action("action-id")) + get_action_output.assert_called_with("action-id", full_output=True) + + +class GetProcessIdListTestCase(unittest.TestCase): + + def setUp(self): + self.utils = AmuletUtils() + self.sentry_unit = FakeSentry() + + def test_returns_pids(self): + """ + Normal execution returns a list of pids + """ + self.sentry_unit.commands['pidof -x "foo"'] = ("123 124 125", 0) + result = self.utils.get_process_id_list(self.sentry_unit, "foo") + self.assertEqual(["123", "124", "125"], result) + + def test_fails_if_no_process_found(self): + """ + By default, the expectation is that a process is running. Failure + to find a given process results in an amulet.FAIL being + raised. + """ + self.sentry_unit.commands['pidof -x "foo"'] = ("", 1) + with self.assertRaises(SystemExit) as cm, captured_output() as ( + out, err): + self.utils.get_process_id_list(self.sentry_unit, "foo") + the_exception = cm.exception + self.assertEqual(1, the_exception.code) + self.assertEqual( + 'foo `pidof -x "foo"` returned 1', out.getvalue().rstrip()) + + def test_looks_for_scripts(self): + """ + pidof command uses -x to return a list of pids of scripts + """ + self.sentry_unit.commands["pidof foo"] = ("", 1) + self.sentry_unit.commands['pidof -x "foo"'] = ("123 124 125", 0) + result = self.utils.get_process_id_list(self.sentry_unit, "foo") + self.assertEqual(["123", "124", "125"], result) + + def test_expect_no_pid(self): + """ + By setting expectation that there are no pids running the logic + about when to fail is reversed. + """ + self.sentry_unit.commands[ + 'pidof -x "foo" || exit 0 && exit 1'] = ("", 0) + self.sentry_unit.commands[ + 'pidof -x "bar" || exit 0 && exit 1'] = ("", 1) + result = self.utils.get_process_id_list( + self.sentry_unit, "foo", expect_success=False) + self.assertEqual([], result) + with self.assertRaises(SystemExit) as cm, captured_output() as ( + out, err): + self.utils.get_process_id_list( + self.sentry_unit, "bar", expect_success=False) + the_exception = cm.exception + self.assertEqual(1, the_exception.code) + self.assertEqual( + 'foo `pidof -x "bar" || exit 0 && exit 1` returned 1', + out.getvalue().rstrip()) + + +class GetUnitProcessIdsTestCase(unittest.TestCase): + + def setUp(self): + self.utils = AmuletUtils() + self.sentry_unit = FakeSentry() + + def test_returns_map(self): + """ + Normal execution returns a dictionary mapping process names to + PIDs for each unit. + """ + second_sentry = FakeSentry(name="bar") + self.sentry_unit.commands['pidof -x "foo"'] = ("123 124", 0) + second_sentry.commands['pidof -x "bar"'] = ("456 457", 0) + + result = self.utils.get_unit_process_ids({ + self.sentry_unit: ["foo"], second_sentry: ["bar"]}) + self.assertEqual({ + self.sentry_unit: {"foo": ["123", "124"]}, + second_sentry: {"bar": ["456", "457"]}}, result) + + def test_expect_failure(self): + """ + Expected failures return empty lists. 
+ """ + second_sentry = FakeSentry(name="bar") + self.sentry_unit.commands[ + 'pidof -x "foo" || exit 0 && exit 1'] = ("", 0) + second_sentry.commands['pidof -x "bar" || exit 0 && exit 1'] = ("", 0) + + result = self.utils.get_unit_process_ids( + {self.sentry_unit: ["foo"], second_sentry: ["bar"]}, + expect_success=False) + self.assertEqual({ + self.sentry_unit: {"foo": []}, + second_sentry: {"bar": []}}, result) + + +class StatusGetTestCase(unittest.TestCase): + + def setUp(self): + self.utils = AmuletUtils() + self.sentry_unit = FakeSentry() + + def test_status_get(self): + """ + We can get the status of a unit. + """ + self.sentry_unit.commands[ + "status-get --format=json --include-data"] = ( + """{"status": "active", "message": "foo"}""", 0) + self.assertEqual(self.utils.status_get(self.sentry_unit), + (u"active", u"foo")) + + def test_status_get_missing_command(self): + """ + Older releases of Juju have no status-get command. In those + cases we should return the "unknown" status. + """ + self.sentry_unit.commands[ + "status-get --format=json --include-data"] = ( + "status-get: command not found", 127) + self.assertEqual(self.utils.status_get(self.sentry_unit), + (u"unknown", u"")) + + +class ValidateServicesByProcessIDTestCase(unittest.TestCase): + + def setUp(self): + self.utils = AmuletUtils() + self.sentry_unit = FakeSentry() + + def test_accepts_list_wrong(self): + """ + Validates that it can accept a list + """ + expected = {self.sentry_unit: {"foo": [3, 4]}} + actual = {self.sentry_unit: {"foo": [12345, 67890]}} + result = self.utils.validate_unit_process_ids(expected, actual) + self.assertIsNotNone(result) + + def test_accepts_list(self): + """ + Validates that it can accept a list + """ + expected = {self.sentry_unit: {"foo": [2, 3]}} + actual = {self.sentry_unit: {"foo": [12345, 67890]}} + result = self.utils.validate_unit_process_ids(expected, actual) + self.assertIsNone(result) + + def test_accepts_string(self): + """ + Validates that it can accept a string + """ + expected = {self.sentry_unit: {"foo": 2}} + actual = {self.sentry_unit: {"foo": [12345, 67890]}} + result = self.utils.validate_unit_process_ids(expected, actual) + self.assertIsNone(result) + + def test_accepts_string_wrong(self): + """ + Validates that it can accept a string + """ + expected = {self.sentry_unit: {"foo": 3}} + actual = {self.sentry_unit: {"foo": [12345, 67890]}} + result = self.utils.validate_unit_process_ids(expected, actual) + self.assertIsNotNone(result) + + def test_accepts_bool(self): + """ + Validates that it can accept a boolean + """ + expected = {self.sentry_unit: {"foo": True}} + actual = {self.sentry_unit: {"foo": [12345, 67890]}} + result = self.utils.validate_unit_process_ids(expected, actual) + self.assertIsNone(result) + + def test_accepts_bool_wrong(self): + """ + Validates that it can accept a boolean + """ + expected = {self.sentry_unit: {"foo": True}} + actual = {self.sentry_unit: {"foo": []}} + result = self.utils.validate_unit_process_ids(expected, actual) + self.assertIsNotNone(result) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/ansible/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/ansible/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/ansible/test_ansible.py 
b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/ansible/test_ansible.py new file mode 100644 index 0000000000000000000000000000000000000000..5261da25c767d07d557ecd587dd4402c63ed0b65 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/ansible/test_ansible.py @@ -0,0 +1,339 @@ +# Copyright 2013 Canonical Ltd. +# +# Authors: +# Charm Helpers Developers +import mock +import os +import shutil +import stat +import tempfile +import unittest +import yaml + + +import charmhelpers.contrib.ansible +from charmhelpers.core import hookenv + + +class InstallAnsibleSupportTestCase(unittest.TestCase): + + def setUp(self): + super(InstallAnsibleSupportTestCase, self).setUp() + + patcher = mock.patch('charmhelpers.fetch') + self.mock_fetch = patcher.start() + self.addCleanup(patcher.stop) + + patcher = mock.patch('charmhelpers.core') + self.mock_core = patcher.start() + self.addCleanup(patcher.stop) + + hosts_file = tempfile.NamedTemporaryFile() + self.ansible_hosts_path = hosts_file.name + self.addCleanup(hosts_file.close) + patcher = mock.patch.object(charmhelpers.contrib.ansible, + 'ansible_hosts_path', + self.ansible_hosts_path) + patcher.start() + self.addCleanup(patcher.stop) + + def test_adds_ppa_by_default(self): + charmhelpers.contrib.ansible.install_ansible_support() + + self.mock_fetch.add_source.assert_called_once_with( + 'ppa:ansible/ansible') + self.mock_fetch.apt_update.assert_called_once_with(fatal=True) + self.mock_fetch.apt_install.assert_called_once_with( + 'ansible') + + def test_no_ppa(self): + charmhelpers.contrib.ansible.install_ansible_support( + from_ppa=False) + + self.assertEqual(self.mock_fetch.add_source.call_count, 0) + self.mock_fetch.apt_install.assert_called_once_with( + 'ansible') + + def test_writes_ansible_hosts(self): + with open(self.ansible_hosts_path) as hosts_file: + self.assertEqual(hosts_file.read(), '') + + charmhelpers.contrib.ansible.install_ansible_support() + + with open(self.ansible_hosts_path) as hosts_file: + self.assertEqual(hosts_file.read(), + 'localhost ansible_connection=local ' + 'ansible_remote_tmp=/root/.ansible/tmp') + + +class ApplyPlaybookTestCases(unittest.TestCase): + + unit_data = { + 'private-address': '10.0.3.2', + 'public-address': '123.123.123.123', + } + + def setUp(self): + super(ApplyPlaybookTestCases, self).setUp() + + # Hookenv patches (a single patch to hookenv doesn't work): + patcher = mock.patch('charmhelpers.core.hookenv.config') + self.mock_config = patcher.start() + self.addCleanup(patcher.stop) + Serializable = charmhelpers.core.hookenv.Serializable + self.mock_config.return_value = Serializable({}) + patcher = mock.patch('charmhelpers.core.hookenv.relation_get') + self.mock_relation_get = patcher.start() + self.mock_relation_get.return_value = {} + self.addCleanup(patcher.stop) + patcher = mock.patch('charmhelpers.core.hookenv.relations') + self.mock_relations = patcher.start() + self.mock_relations.return_value = { + 'wsgi-file': {}, + 'website': {}, + 'nrpe-external-master': {}, + } + self.addCleanup(patcher.stop) + patcher = mock.patch('charmhelpers.core.hookenv.relations_of_type') + self.mock_relations_of_type = patcher.start() + self.mock_relations_of_type.return_value = [] + self.addCleanup(patcher.stop) + patcher = mock.patch('charmhelpers.core.hookenv.relation_type') + self.mock_relation_type = patcher.start() + self.mock_relation_type.return_value = None + self.addCleanup(patcher.stop) + patcher = 
mock.patch('charmhelpers.core.hookenv.local_unit') + self.mock_local_unit = patcher.start() + self.addCleanup(patcher.stop) + self.mock_local_unit.return_value = {} + + def unit_get_data(argument): + "dummy unit_get that accesses dummy unit data" + return self.unit_data[argument] + + patcher = mock.patch( + 'charmhelpers.core.hookenv.unit_get', unit_get_data) + self.mock_unit_get = patcher.start() + self.addCleanup(patcher.stop) + + patcher = mock.patch('charmhelpers.contrib.ansible.subprocess') + self.mock_subprocess = patcher.start() + self.addCleanup(patcher.stop) + + etc_dir = tempfile.mkdtemp() + self.addCleanup(shutil.rmtree, etc_dir) + self.vars_path = os.path.join(etc_dir, 'ansible', 'vars.yaml') + patcher = mock.patch.object(charmhelpers.contrib.ansible, + 'ansible_vars_path', self.vars_path) + patcher.start() + self.addCleanup(patcher.stop) + + patcher = mock.patch.object(charmhelpers.contrib.ansible.os, + 'environ', {}) + patcher.start() + self.addCleanup(patcher.stop) + + def test_calls_ansible_playbook(self): + charmhelpers.contrib.ansible.apply_playbook( + 'playbooks/dependencies.yaml') + + self.mock_subprocess.check_call.assert_called_once_with([ + 'ansible-playbook', '-c', 'local', 'playbooks/dependencies.yaml'], + env={'PYTHONUNBUFFERED': '1'}) + + def test_writes_vars_file(self): + self.assertFalse(os.path.exists(self.vars_path)) + self.mock_config.return_value = charmhelpers.core.hookenv.Serializable({ + 'group_code_owner': 'webops_deploy', + 'user_code_runner': 'ubunet', + 'private-address': '10.10.10.10', + }) + self.mock_relation_type.return_value = 'wsgi-file' + self.mock_relation_get.return_value = { + 'relation_key1': 'relation_value1', + 'relation-key2': 'relation_value2', + } + + charmhelpers.contrib.ansible.apply_playbook( + 'playbooks/dependencies.yaml') + + self.assertTrue(os.path.exists(self.vars_path)) + stats = os.stat(self.vars_path) + self.assertEqual( + stats.st_mode & stat.S_IRWXU, + stat.S_IRUSR | stat.S_IWUSR) + self.assertEqual(stats.st_mode & stat.S_IRWXG, 0) + self.assertEqual(stats.st_mode & stat.S_IRWXO, 0) + with open(self.vars_path, 'r') as vars_file: + result = yaml.safe_load(vars_file.read()) + self.assertEqual({ + "group_code_owner": "webops_deploy", + "user_code_runner": "ubunet", + "private_address": "10.10.10.10", + "charm_dir": "", + "local_unit": {}, + 'current_relation': { + 'relation_key1': 'relation_value1', + 'relation-key2': 'relation_value2', + }, + 'relations_full': { + 'nrpe-external-master': {}, + 'website': {}, + 'wsgi-file': {}, + }, + 'relations': { + 'nrpe-external-master': [], + 'website': [], + 'wsgi-file': [], + }, + "wsgi_file__relation_key1": "relation_value1", + "wsgi_file__relation_key2": "relation_value2", + "unit_private_address": "10.0.3.2", + "unit_public_address": "123.123.123.123", + }, result) + + def test_calls_with_tags(self): + charmhelpers.contrib.ansible.apply_playbook( + 'playbooks/complete-state.yaml', tags=['install', 'somethingelse']) + + self.mock_subprocess.check_call.assert_called_once_with([ + 'ansible-playbook', '-c', 'local', 'playbooks/complete-state.yaml', + '--tags', 'install,somethingelse'], env={'PYTHONUNBUFFERED': '1'}) + + @mock.patch.object(hookenv, 'config') + def test_calls_with_extra_vars(self, config): + charmhelpers.contrib.ansible.apply_playbook( + 'playbooks/complete-state.yaml', tags=['install', 'somethingelse'], + extra_vars={'a': 'b'}) + + self.mock_subprocess.check_call.assert_called_once_with([ + 'ansible-playbook', '-c', 'local', 'playbooks/complete-state.yaml', + '--tags', 
'install,somethingelse', '--extra-vars', '{"a": "b"}'], + env={'PYTHONUNBUFFERED': '1'}) + + @mock.patch.object(hookenv, 'config') + def test_calls_with_extra_vars_path(self, config): + charmhelpers.contrib.ansible.apply_playbook( + 'playbooks/complete-state.yaml', tags=['install', 'somethingelse'], + extra_vars='@myvars.json') + + self.mock_subprocess.check_call.assert_called_once_with([ + 'ansible-playbook', '-c', 'local', 'playbooks/complete-state.yaml', + '--tags', 'install,somethingelse', '--extra-vars', '"@myvars.json"'], + env={'PYTHONUNBUFFERED': '1'}) + + @mock.patch.object(hookenv, 'config') + def test_calls_with_extra_vars_dict(self, config): + charmhelpers.contrib.ansible.apply_playbook( + 'playbooks/complete-state.yaml', tags=['install', 'somethingelse'], + extra_vars={'pkg': {'a': 'present', 'b': 'absent'}}) + + self.mock_subprocess.check_call.assert_called_once_with([ + 'ansible-playbook', '-c', 'local', 'playbooks/complete-state.yaml', + '--tags', 'install,somethingelse', '--extra-vars', + '{"pkg": {"a": "present", "b": "absent"}}'], + env={'PYTHONUNBUFFERED': '1'}) + + @mock.patch.object(hookenv, 'config') + def test_hooks_executes_playbook_with_tag(self, config): + hooks = charmhelpers.contrib.ansible.AnsibleHooks('my/playbook.yaml') + foo = mock.MagicMock() + hooks.register('foo', foo) + + hooks.execute(['foo']) + + self.assertEqual(foo.call_count, 1) + self.mock_subprocess.check_call.assert_called_once_with([ + 'ansible-playbook', '-c', 'local', 'my/playbook.yaml', + '--tags', 'foo'], env={'PYTHONUNBUFFERED': '1'}) + + @mock.patch.object(hookenv, 'config') + def test_specifying_ansible_handled_hooks(self, config): + hooks = charmhelpers.contrib.ansible.AnsibleHooks( + 'my/playbook.yaml', default_hooks=['start', 'stop']) + + hooks.execute(['start']) + + self.mock_subprocess.check_call.assert_called_once_with([ + 'ansible-playbook', '-c', 'local', 'my/playbook.yaml', + '--tags', 'start'], env={'PYTHONUNBUFFERED': '1'}) + + +class TestActionDecorator(unittest.TestCase): + + def setUp(self): + p = mock.patch('charmhelpers.contrib.ansible.apply_playbook') + self.apply_playbook = p.start() + self.addCleanup(p.stop) + + def test_action_no_args(self): + hooks = charmhelpers.contrib.ansible.AnsibleHooks('playbook.yaml') + + @hooks.action() + def test(): + return {} + + hooks.execute(['test']) + self.apply_playbook.assert_called_once_with( + 'playbook.yaml', tags=['test'], extra_vars={}) + + def test_action_required_arg_keyword(self): + hooks = charmhelpers.contrib.ansible.AnsibleHooks('playbook.yaml') + + @hooks.action() + def test(x): + return locals() + + hooks.execute(['test', 'x=a']) + self.apply_playbook.assert_called_once_with( + 'playbook.yaml', tags=['test'], extra_vars={'x': 'a'}) + + def test_action_required_arg_missing(self): + hooks = charmhelpers.contrib.ansible.AnsibleHooks('playbook.yaml') + + @hooks.action() + def test(x): + """Requires x""" + return locals() + + try: + hooks.execute(['test']) + self.fail("should have thrown TypeError") + except TypeError as e: + self.assertEqual(e.args[1], "Requires x") + + def test_action_required_unknown_arg(self): + hooks = charmhelpers.contrib.ansible.AnsibleHooks('playbook.yaml') + + @hooks.action() + def test(x='a'): + """Requires x""" + return locals() + + try: + hooks.execute(['test', 'z=c']) + self.fail("should have thrown TypeError") + except TypeError as e: + self.assertEqual(e.args[1], "Requires x") + + def test_action_default_arg(self): + hooks = charmhelpers.contrib.ansible.AnsibleHooks('playbook.yaml') + + 
@hooks.action() + def test(x='b'): + return locals() + + hooks.execute(['test']) + self.apply_playbook.assert_called_once_with( + 'playbook.yaml', tags=['test'], extra_vars={'x': 'b'}) + + def test_action_multiple(self): + hooks = charmhelpers.contrib.ansible.AnsibleHooks('playbook.yaml') + + @hooks.action() + def test(x, y='b'): + return locals() + + hooks.execute(['test', 'x=a', 'y=b']) + self.apply_playbook.assert_called_once_with( + 'playbook.yaml', tags=['test'], extra_vars={'x': 'a', 'y': 'b'}) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/benchmark/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/benchmark/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/benchmark/test_benchmark.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/benchmark/test_benchmark.py new file mode 100644 index 0000000000000000000000000000000000000000..52ec086271f02e7538e69a9b272a35afe8e243c8 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/benchmark/test_benchmark.py @@ -0,0 +1,124 @@ +from functools import partial +from os.path import join +from tempfile import mkdtemp +from shutil import rmtree + +import mock +from testtools import TestCase +# import unittest +from charmhelpers.contrib.benchmark import Benchmark, action_set # noqa +from tests.helpers import patch_open, FakeRelation + +TO_PATCH = [ + 'in_relation_hook', + 'relation_ids', + 'relation_set', + 'relation_get', +] + +FAKE_RELATION = { + 'benchmark:0': { + 'benchmark/0': { + 'hostname': '127.0.0.1', + 'port': '1111', + 'graphite_port': '2222', + 'graphite_endpoint': 'http://localhost:3333', + 'api_port': '4444' + } + } +} + + +class TestBenchmark(TestCase): + + def setUp(self): + super(TestBenchmark, self).setUp() + for m in TO_PATCH: + setattr(self, m, self._patch(m)) + self.fake_relation = FakeRelation(FAKE_RELATION) + # self.hook_name.return_value = 'benchmark-relation-changed' + + self.relation_get.side_effect = partial( + self.fake_relation.get, rid="benchmark:0", unit="benchmark/0") + self.relation_ids.side_effect = self.fake_relation.relation_ids + + def _patch(self, method): + _m = mock.patch('charmhelpers.contrib.benchmark.'
+ method) + m = _m.start() + self.addCleanup(_m.stop) + return m + + @mock.patch('os.path.exists') + @mock.patch('subprocess.check_output') + def test_benchmark_start(self, check_output, exists): + + exists.return_value = True + check_output.return_value = "data" + + with patch_open() as (_open, _file): + self.assertIsNone(Benchmark.start()) + # _open.assert_called_with('/etc/benchmark.conf', 'w') + + COLLECT_PROFILE_DATA = '/usr/local/bin/collect-profile-data' + exists.assert_any_call(COLLECT_PROFILE_DATA) + check_output.assert_any_call([COLLECT_PROFILE_DATA]) + + def test_benchmark_finish(self): + with patch_open() as (_open, _file): + self.assertIsNone(Benchmark.finish()) + # _open.assert_called_with('/etc/benchmark.conf', 'w') + + @mock.patch('charmhelpers.contrib.benchmark.action_set') + def test_benchmark_set_composite_score(self, action_set): + self.assertTrue(Benchmark.set_composite_score(15.7, 'hits/sec', 'desc')) + action_set.assert_called_once_with('meta.composite', {'value': 15.7, 'units': 'hits/sec', 'direction': 'desc'}) + + @mock.patch('charmhelpers.contrib.benchmark.find_executable') + @mock.patch('charmhelpers.contrib.benchmark.subprocess.check_call') + def test_benchmark_action_set(self, check_call, find_executable): + find_executable.return_value = "/usr/bin/action-set" + self.assertTrue(action_set('foo', 'bar')) + + find_executable.assert_called_once_with('action-set') + check_call.assert_called_once_with(['action-set', 'foo=bar']) + + @mock.patch('charmhelpers.contrib.benchmark.find_executable') + @mock.patch('charmhelpers.contrib.benchmark.subprocess.check_call') + def test_benchmark_action_set_dict(self, check_call, find_executable): + find_executable.return_value = "/usr/bin/action-set" + self.assertTrue(action_set('baz', {'foo': 1, 'bar': 2})) + + find_executable.assert_called_with('action-set') + + check_call.assert_any_call(['action-set', 'baz.foo=1']) + check_call.assert_any_call(['action-set', 'baz.bar=2']) + + @mock.patch('charmhelpers.contrib.benchmark.relation_ids') + @mock.patch('charmhelpers.contrib.benchmark.in_relation_hook') + def test_benchmark_init(self, in_relation_hook, relation_ids): + + in_relation_hook.return_value = True + relation_ids.return_value = ['benchmark:0'] + actions = ['asdf', 'foobar'] + + tempdir = mkdtemp(prefix=self.__class__.__name__) + self.addCleanup(rmtree, tempdir) + conf_path = join(tempdir, "benchmark.conf") + with mock.patch.object(Benchmark, "BENCHMARK_CONF", conf_path): + b = Benchmark(actions) + + self.assertIsInstance(b, Benchmark) + + self.assertTrue(self.relation_get.called) + self.assertTrue(self.relation_set.called) + + relation_ids.assert_called_once_with('benchmark') + + self.relation_set.assert_called_once_with( + relation_id='benchmark:0', + relation_settings={'benchmarks': ",".join(actions)} + ) + + conf_contents = open(conf_path).readlines() + for key, val in iter(FAKE_RELATION['benchmark:0']['benchmark/0'].items()): + self.assertIn("%s=%s\n" % (key, val), conf_contents) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/charmhelpers/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/charmhelpers/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/charmhelpers/test_charmhelpers.py 
b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/charmhelpers/test_charmhelpers.py new file mode 100644 index 0000000000000000000000000000000000000000..2837d2b9a49730d509def4943874928b88bec74b --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/charmhelpers/test_charmhelpers.py @@ -0,0 +1,274 @@ +# Tests for Python charm helpers. + +import unittest +import yaml +from testtools import TestCase + +from six import StringIO + +import sys +# Path hack to ensure we test the local code, not a version installed in +# /usr/local/lib. This is necessary since /usr/local/lib is prepended before +# what is specified in PYTHONPATH. +sys.path.insert(0, 'helpers/python') +from charmhelpers.contrib import charmhelpers # noqa + + +class CharmHelpersTestCase(TestCase): + """A basic test case for Python charm helpers.""" + + def _patch_command(self, replacement_command): + """Monkeypatch charmhelpers.command for testing purposes. + + :param replacement_command: The replacement Callable for + command(). + """ + self.patch(charmhelpers, 'command', lambda *args: replacement_command) + + def _make_juju_status_dict(self, num_units=1, + service_name='test-service', + unit_state='pending', + machine_state='not-started'): + """Generate valid juju status dict and return it.""" + machine_data = {} + # The 0th machine is the Zookeeper. + machine_data[0] = {'dns-name': 'zookeeper.example.com', + 'instance-id': 'machine0', + 'state': 'not-started'} + service_data = {'charm': 'local:precise/{}-1'.format(service_name), + 'relations': {}, + 'units': {}} + for i in range(num_units): + # The machine is always going to be i+1 because there + # will always be num_units+1 machines. + machine_number = i + 1 + unit_machine_data = { + 'dns-name': 'machine{}.example.com'.format(machine_number), + 'instance-id': 'machine{}'.format(machine_number), + 'state': machine_state, + 'instance-state': machine_state} + machine_data[machine_number] = unit_machine_data + unit_data = { + 'machine': machine_number, + 'public-address': + '{}-{}.example.com'.format(service_name, i), + 'relations': {'db': {'state': 'up'}}, + 'agent-state': unit_state} + service_data['units']['{}/{}'.format(service_name, i)] = ( + unit_data) + juju_status_data = {'machines': machine_data, + 'services': {service_name: service_data}} + return juju_status_data + + def _make_juju_status_yaml(self, num_units=1, + service_name='test-service', + unit_state='pending', + machine_state='not-started'): + """Convert the dict returned by `_make_juju_status_dict` to YAML.""" + return yaml.dump( + self._make_juju_status_dict( + num_units, service_name, unit_state, machine_state)) + + def test_make_charm_config_file(self): + # make_charm_config_file() writes the passed configuration to a + # temporary file as YAML. + charm_config = {'foo': 'bar', + 'spam': 'eggs', + 'ham': 'jam'} + # make_charm_config_file() returns the file object so that it + # can be garbage collected properly. + charm_config_file = charmhelpers.make_charm_config_file(charm_config) + with open(charm_config_file.name) as config_in: + written_config = config_in.read() + self.assertEqual(yaml.dump(charm_config), written_config) + + def test_unit_info(self): + # unit_info returns requested data about a given service. 
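+        # For reference, _make_juju_status_yaml() above yields a document of
+        # roughly this shape (illustrative, trimmed to the keys these tests
+        # read):
+        #   machines:
+        #     0: {dns-name: zookeeper.example.com, state: not-started}
+        #     1: {dns-name: machine1.example.com, state: not-started}
+        #   services:
+        #     test-service:
+        #       units:
+        #         test-service/0:
+        #           machine: 1
+        #           public-address: test-service-0.example.com
+        #           relations: {db: {state: up}}
+        #           agent-state: pending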
+ juju_yaml = self._make_juju_status_yaml() + self.patch(charmhelpers, 'juju_status', lambda: juju_yaml) + self.assertEqual( + 'pending', + charmhelpers.unit_info('test-service', 'agent-state')) + + def test_unit_info_returns_empty_for_nonexistent_service(self): + # If the service passed to unit_info() has not yet started (or + # otherwise doesn't exist), unit_info() will return an empty + # string. + juju_yaml = "services: {}" + self.patch(charmhelpers, 'juju_status', lambda: juju_yaml) + self.assertEqual( + '', charmhelpers.unit_info('test-service', 'state')) + + def test_unit_info_accepts_data(self): + # It's possible to pass a `data` dict, containing the parsed + # result of juju status, to unit_info(). + juju_status_data = yaml.safe_load( + self._make_juju_status_yaml()) + self.patch(charmhelpers, 'juju_status', lambda: None) + service_data = juju_status_data['services']['test-service'] + unit_info_dict = service_data['units']['test-service/0'] + for key, value in unit_info_dict.items(): + item_info = charmhelpers.unit_info( + 'test-service', key, data=juju_status_data) + self.assertEqual(value, item_info) + + def test_unit_info_returns_first_unit_by_default(self): + # By default, unit_info() just returns the value of the + # requested item for the first unit in a service. + juju_yaml = self._make_juju_status_yaml(num_units=2) + self.patch(charmhelpers, 'juju_status', lambda: juju_yaml) + unit_address = charmhelpers.unit_info( + 'test-service', 'public-address') + self.assertEqual('test-service-0.example.com', unit_address) + + def test_unit_info_accepts_unit_name(self): + # By default, unit_info() just returns the value of the + # requested item for the first unit in a service. However, it's + # possible to pass a unit name to it, too. + juju_yaml = self._make_juju_status_yaml(num_units=2) + self.patch(charmhelpers, 'juju_status', lambda: juju_yaml) + unit_address = charmhelpers.unit_info( + 'test-service', 'public-address', unit='test-service/1') + self.assertEqual('test-service-1.example.com', unit_address) + + def test_get_machine_data(self): + # get_machine_data() returns a dict containing the machine data + # parsed from juju status. + juju_yaml = self._make_juju_status_yaml() + self.patch(charmhelpers, 'juju_status', lambda: juju_yaml) + machine_0_data = charmhelpers.get_machine_data()[0] + self.assertEqual('zookeeper.example.com', machine_0_data['dns-name']) + + def test_wait_for_machine_returns_if_machine_up(self): + # If wait_for_machine() is called and the machine(s) it is + # waiting for are already up, it will return. + juju_yaml = self._make_juju_status_yaml(machine_state='running') + self.patch(charmhelpers, 'juju_status', lambda: juju_yaml) + machines, time_taken = charmhelpers.wait_for_machine(timeout=1) + self.assertEqual(1, machines) + + def test_wait_for_machine_times_out(self): + # If the machine that wait_for_machine is waiting for isn't + # 'running' before the passed timeout is reached, + # wait_for_machine will raise an error. + juju_yaml = self._make_juju_status_yaml() + self.patch(charmhelpers, 'juju_status', lambda: juju_yaml) + self.assertRaises( + RuntimeError, charmhelpers.wait_for_machine, timeout=0) + + def test_wait_for_machine_always_returns_if_running_locally(self): + # If juju is actually running against a local LXC container, + # wait_for_machine will always return. + juju_status_dict = self._make_juju_status_dict() + # We'll update the 0th machine to make it look like it's an LXC + # container. 
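+        # wait_for_machine() evidently special-cases a dns-name of
+        # 'localhost' (the local LXC provider) and treats that machine as
+        # already started without polling, which is what the assertions
+        # below rely on.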
+ juju_status_dict['machines'][0]['dns-name'] = 'localhost' + juju_yaml = yaml.dump(juju_status_dict) + self.patch(charmhelpers, 'juju_status', lambda: juju_yaml) + machines, time_taken = charmhelpers.wait_for_machine(timeout=1) + # wait_for_machine will always return 1 machine started here, + # since there's only one machine to start. + self.assertEqual(1, machines) + # time_taken will be 0, since no actual waiting happened. + self.assertEqual(0, time_taken) + + def test_wait_for_machine_waits_for_multiple_machines(self): + # wait_for_machine can be told to wait for multiple machines. + juju_yaml = self._make_juju_status_yaml( + num_units=2, machine_state='running') + self.patch(charmhelpers, 'juju_status', lambda: juju_yaml) + machines, time_taken = charmhelpers.wait_for_machine(num_machines=2) + self.assertEqual(2, machines) + + def test_wait_for_unit_returns_if_unit_started(self): + # wait_for_unit() will return if the service it's waiting for is + # already up. + juju_yaml = self._make_juju_status_yaml( + unit_state='started', machine_state='running') + self.patch(charmhelpers, 'juju_status', lambda: juju_yaml) + charmhelpers.wait_for_unit('test-service', timeout=0) + + def test_wait_for_unit_raises_error_on_error_state(self): + # If the unit is in some kind of error state, wait_for_unit will + # raise a RuntimeError. + juju_yaml = self._make_juju_status_yaml( + unit_state='start-error', machine_state='running') + self.patch(charmhelpers, 'juju_status', lambda: juju_yaml) + self.assertRaises(RuntimeError, charmhelpers.wait_for_unit, + 'test-service', timeout=0) + + def test_wait_for_unit_raises_error_on_timeout(self): + # If the unit does not start before the timeout is reached, + # wait_for_unit will raise a RuntimeError. + juju_yaml = self._make_juju_status_yaml( + unit_state='pending', machine_state='running') + self.patch(charmhelpers, 'juju_status', lambda: juju_yaml) + self.assertRaises(RuntimeError, charmhelpers.wait_for_unit, + 'test-service', timeout=0) + + def test_wait_for_relation_returns_if_relation_up(self): + # wait_for_relation() waits for relations to come up. If a + # relation is already 'up', wait_for_relation() will return + # immediately. + juju_yaml = self._make_juju_status_yaml( + unit_state='started', machine_state='running') + self.patch(charmhelpers, 'juju_status', lambda: juju_yaml) + charmhelpers.wait_for_relation('test-service', 'db', timeout=0) + + def test_wait_for_relation_times_out_if_relation_not_present(self): + # If a relation does not exist at all before a timeout is + # reached, wait_for_relation() will raise a RuntimeError. + juju_dict = self._make_juju_status_dict( + unit_state='started', machine_state='running') + units = juju_dict['services']['test-service']['units'] + # We'll remove all the relations for test-service for this test. + units['test-service/0']['relations'] = {} + juju_dict['services']['test-service']['units'] = units + juju_yaml = yaml.dump(juju_dict) + self.patch(charmhelpers, 'juju_status', lambda: juju_yaml) + self.assertRaises( + RuntimeError, charmhelpers.wait_for_relation, 'test-service', + 'db', timeout=0) + + def test_wait_for_relation_times_out_if_relation_not_up(self): + # If a relation does not transition to an 'up' state, before a + # timeout is reached, wait_for_relation() will raise a + # RuntimeError. 
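+        # Unlike the previous test, the relation is present here; its state
+        # is forced to 'down' below, so the timeout path (rather than the
+        # missing-relation path) is what gets exercised.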
+ juju_dict = self._make_juju_status_dict( + unit_state='started', machine_state='running') + units = juju_dict['services']['test-service']['units'] + units['test-service/0']['relations']['db']['state'] = 'down' + juju_dict['services']['test-service']['units'] = units + juju_yaml = yaml.dump(juju_dict) + self.patch(charmhelpers, 'juju_status', lambda: juju_yaml) + self.assertRaises( + RuntimeError, charmhelpers.wait_for_relation, 'test-service', + 'db', timeout=0) + + def test_wait_for_page_contents_returns_if_contents_available(self): + # wait_for_page_contents() will wait until a given string is + # contained within the results of a given url and will return + # once it does. + # We need to patch the charmhelpers instance of urlopen so that + # it doesn't try to connect out. + test_content = "Hello, world." + self.patch(charmhelpers, 'urlopen', + lambda *args: StringIO(test_content)) + charmhelpers.wait_for_page_contents( + 'http://example.com', test_content, timeout=0) + + def test_wait_for_page_contents_times_out(self): + # If the desired contents do not appear within the page before + # the specified timeout, wait_for_page_contents() will raise a + # RuntimeError. + # We need to patch the charmhelpers instance of urlopen so that + # it doesn't try to connect out. + self.patch(charmhelpers, 'urlopen', + lambda *args: StringIO("This won't work.")) + self.assertRaises( + RuntimeError, charmhelpers.wait_for_page_contents, + 'http://example.com', "This will error", timeout=0) + + +if __name__ == '__main__': + unittest.main() diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/charmsupport/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/charmsupport/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/charmsupport/test_nrpe.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/charmsupport/test_nrpe.py new file mode 100644 index 0000000000000000000000000000000000000000..454c62176f5fe0a772c0d73cc44061592aad56e4 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/charmsupport/test_nrpe.py @@ -0,0 +1,419 @@ +import os +import yaml +import subprocess + +from testtools import TestCase +from mock import patch, call, MagicMock + +from charmhelpers.contrib.charmsupport import nrpe +from charmhelpers.core import host + + +class NRPEBaseTestCase(TestCase): + patches = { + 'config': {'object': nrpe}, + 'copy2': {'object': nrpe.shutil}, + 'log': {'object': nrpe}, + 'getpwnam': {'object': nrpe.pwd}, + 'getgrnam': {'object': nrpe.grp}, + 'glob': {'object': nrpe.glob}, + 'mkdir': {'object': os}, + 'chown': {'object': os}, + 'chmod': {'object': os}, + 'exists': {'object': os.path}, + 'listdir': {'object': os}, + 'remove': {'object': os}, + 'open': {'object': nrpe, 'create': True}, + 'isfile': {'object': os.path}, + 'isdir': {'object': os.path}, + 'call': {'object': subprocess}, + 'relation_get': {'object': nrpe}, + 'relation_ids': {'object': nrpe}, + 'relation_set': {'object': nrpe}, + 'relations_of_type': {'object': nrpe}, + 'service': {'object': nrpe}, + 'init_is_systemd': {'object': host}, + } + + def setUp(self): + super(NRPEBaseTestCase, self).setUp() + self.patched = {} + # Mock the universe. 
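+        # Each entry in `patches` names an attribute to stub out and the
+        # object that owns it; entries with `create: True` (such as `open`)
+        # are names that do not exist on the target module until patched.
+        # The loop below turns each one into a started patcher with an
+        # automatic cleanup.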
+ for attr, data in self.patches.items(): + create = data.get('create', False) + patcher = patch.object(data['object'], attr, create=create) + self.patched[attr] = patcher.start() + self.addCleanup(patcher.stop) + env_patcher = patch.dict('os.environ', + {'JUJU_UNIT_NAME': 'testunit', + 'CHARM_DIR': '/usr/lib/test_charm_dir'}) + env_patcher.start() + self.addCleanup(env_patcher.stop) + + def check_call_counts(self, **kwargs): + for attr, expected in kwargs.items(): + patcher = self.patched[attr] + self.assertEqual(expected, patcher.call_count, attr) + + +class NRPETestCase(NRPEBaseTestCase): + + def test_init_gets_config(self): + self.patched['config'].return_value = {'nagios_context': 'testctx', + 'nagios_servicegroups': 'testsgrps'} + + checker = nrpe.NRPE() + + self.assertEqual('testctx', checker.nagios_context) + self.assertEqual('testsgrps', checker.nagios_servicegroups) + self.assertEqual('testunit', checker.unit_name) + self.assertEqual('testctx-testunit', checker.hostname) + self.check_call_counts(config=1) + + def test_init_hostname(self): + """Test that the hostname parameter is correctly set""" + checker = nrpe.NRPE() + self.assertEqual(checker.hostname, + "{}-{}".format(checker.nagios_context, + checker.unit_name)) + hostname = "test.host" + checker = nrpe.NRPE(hostname=hostname) + self.assertEqual(checker.hostname, hostname) + + def test_default_servicegroup(self): + """Test that nagios_servicegroups gets set to the default if omitted""" + self.patched['config'].return_value = {'nagios_context': 'testctx'} + checker = nrpe.NRPE() + self.assertEqual(checker.nagios_servicegroups, 'testctx') + + def test_no_nagios_installed_bails(self): + self.patched['config'].return_value = {'nagios_context': 'test', + 'nagios_servicegroups': ''} + self.patched['getgrnam'].side_effect = KeyError + checker = nrpe.NRPE() + + self.assertEqual(None, checker.write()) + + expected = 'Nagios user not set up, nrpe checks not updated' + self.patched['log'].assert_called_with(expected) + self.check_call_counts(log=2, config=1, getpwnam=1, getgrnam=1) + + def test_write_no_checker(self): + self.patched['config'].return_value = {'nagios_context': 'test', + 'nagios_servicegroups': ''} + self.patched['exists'].return_value = True + checker = nrpe.NRPE() + + self.assertEqual(None, checker.write()) + + self.check_call_counts(config=1, getpwnam=1, getgrnam=1, exists=1) + + def test_write_restarts_service(self): + self.patched['config'].return_value = {'nagios_context': 'test', + 'nagios_servicegroups': ''} + self.patched['exists'].return_value = True + checker = nrpe.NRPE() + + self.assertEqual(None, checker.write()) + + self.patched['service'].assert_called_with('restart', 'nagios-nrpe-server') + self.check_call_counts(config=1, getpwnam=1, getgrnam=1, + exists=1, service=1) + + def test_update_nrpe(self): + self.patched['config'].return_value = {'nagios_context': 'a', + 'nagios_servicegroups': ''} + self.patched['exists'].return_value = True + self.patched['relation_get'].return_value = { + 'egress-subnets': '10.66.111.24/32', + 'ingress-address': '10.66.111.24', + 'private-address': '10.66.111.24' + } + + def _rels(rname): + relations = { + 'local-monitors': 'local-monitors:1', + 'nrpe-external-master': 'nrpe-external-master:2', + } + return [relations[rname]] + self.patched['relation_ids'].side_effect = _rels + + checker = nrpe.NRPE() + checker.add_check(shortname="myservice", + description="Check MyService", + check_cmd="check_http http://localhost") + + self.assertEqual(None, checker.write()) + + 
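+        # write() is expected to emit two files per check: the nrpe.d
+        # command definition and the exported nagios service definition,
+        # hence the two open() calls asserted below.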
self.assertEqual(2, self.patched['open'].call_count) + filename = 'check_myservice.cfg' + expected = [ + ('/etc/nagios/nrpe.d/%s' % filename, 'w'), + ('/var/lib/nagios/export/service__a-testunit_%s' % filename, 'w'), + ] + actual = [x[0] for x in self.patched['open'].call_args_list] + self.assertEqual(expected, actual) + outfile = self.patched['open'].return_value.__enter__.return_value + service_file_contents = """ +#--------------------------------------------------- +# This file is Juju managed +#--------------------------------------------------- +define service { + use active-service + host_name a-testunit + service_description a-testunit[myservice] Check MyService + check_command check_nrpe!check_myservice + servicegroups a +} +""" + expected = [ + '# check myservice\n', + '# The following header was added automatically by juju\n', + '# Modifying it will affect nagios monitoring and alerting\n', + '# servicegroups: a\n', + 'command[check_myservice]=/usr/lib/nagios/plugins/check_http http://localhost\n', + service_file_contents, + ] + actual = [x[0][0] for x in outfile.write.call_args_list] + self.assertEqual(expected, actual) + + nrpe_monitors = {'myservice': + {'command': 'check_myservice'}} + monitors = yaml.dump( + {"monitors": {"remote": {"nrpe": nrpe_monitors}}}) + relation_set_calls = [ + call(monitors=monitors, relation_id="local-monitors:1"), + call(monitors=monitors, relation_id="nrpe-external-master:2"), + ] + self.patched['relation_set'].assert_has_calls(relation_set_calls, any_order=True) + self.check_call_counts(config=1, getpwnam=1, getgrnam=1, + exists=4, open=2, listdir=1, relation_get=2, + relation_ids=3, relation_set=3) + + +class NRPECheckTestCase(NRPEBaseTestCase): + + def test_invalid_shortname(self): + cases = [ + 'invalid:name', + '', + ] + for shortname in cases: + self.assertRaises(nrpe.CheckException, nrpe.Check, shortname, + 'description', '/some/command') + + def test_valid_shortname(self): + cases = [ + '1_number_is_fine', + 'dots.are.good', + 'dashes-ok', + 'UPPER_case_allowed', + '5', + '@valid', + ] + for shortname in cases: + check = nrpe.Check(shortname, 'description', '/some/command') + self.assertEqual(shortname, check.shortname) + + def test_write_removes_existing_config(self): + self.patched['listdir'].return_value = [ + 'foo', 'bar.cfg', '_check_shortname.cfg'] + check = nrpe.Check('shortname', 'description', '/some/command') + + self.assertEqual(None, check.write('testctx', 'hostname', 'testsgrp')) + + expected = '/var/lib/nagios/export/_check_shortname.cfg' + self.patched['remove'].assert_called_once_with(expected) + self.check_call_counts(exists=3, remove=1, open=2, listdir=1) + + def test_check_write_nrpe_exportdir_not_accessible(self): + self.patched['exists'].return_value = False + check = nrpe.Check('shortname', 'description', '/some/command') + + self.assertEqual(None, check.write('testctx', 'hostname', 'testsgrps')) + expected = ('Not writing service config as ' + '/var/lib/nagios/export is not accessible') + self.patched['log'].assert_has_calls( + [call(expected)], any_order=True) + self.check_call_counts(log=2, open=1) + + def test_locate_cmd_no_args(self): + self.patched['exists'].return_value = True + + check = nrpe.Check('shortname', 'description', '/bin/ls') + + self.assertEqual('/bin/ls', check.check_cmd) + + def test_locate_cmd_not_found(self): + self.patched['exists'].return_value = False + check = nrpe.Check('shortname', 'description', 'check_http -x -y -z') + + self.assertEqual('', check.check_cmd) + self.assertEqual(2, 
self.patched['exists'].call_count) + expected = [ + '/usr/lib/nagios/plugins/check_http', + '/usr/local/lib/nagios/plugins/check_http', + ] + actual = [x[0][0] for x in self.patched['exists'].call_args_list] + self.assertEqual(expected, actual) + self.check_call_counts(exists=2, log=1) + expected = 'Check command not found: check_http' + self.assertEqual(expected, self.patched['log'].call_args[0][0]) + + def test_run(self): + self.patched['exists'].return_value = True + command = '/usr/bin/wget foo' + check = nrpe.Check('shortname', 'description', command) + + self.assertEqual(None, check.run()) + + self.check_call_counts(exists=1, call=1) + self.assertEqual(command, self.patched['call'].call_args[0][0]) + + +class NRPEMiscTestCase(NRPEBaseTestCase): + def test_get_nagios_hostcontext(self): + rel_info = { + 'nagios_hostname': 'bob-openstack-dashboard-0', + 'private-address': '10.5.3.103', + '__unit__': u'dashboard-nrpe/1', + '__relid__': u'nrpe-external-master:2', + 'nagios_host_context': u'bob', + } + self.patched['relations_of_type'].return_value = [rel_info] + self.assertEqual(nrpe.get_nagios_hostcontext(), 'bob') + + def test_get_nagios_hostname(self): + rel_info = { + 'nagios_hostname': 'bob-openstack-dashboard-0', + 'private-address': '10.5.3.103', + '__unit__': u'dashboard-nrpe/1', + '__relid__': u'nrpe-external-master:2', + 'nagios_host_context': u'bob', + } + self.patched['relations_of_type'].return_value = [rel_info] + self.assertEqual(nrpe.get_nagios_hostname(), 'bob-openstack-dashboard-0') + + def test_get_nagios_unit_name(self): + rel_info = { + 'nagios_hostname': 'bob-openstack-dashboard-0', + 'private-address': '10.5.3.103', + '__unit__': u'dashboard-nrpe/1', + '__relid__': u'nrpe-external-master:2', + 'nagios_host_context': u'bob', + } + self.patched['relations_of_type'].return_value = [rel_info] + self.assertEqual(nrpe.get_nagios_unit_name(), 'bob:testunit') + + def test_get_nagios_unit_name_no_hc(self): + self.patched['relations_of_type'].return_value = [] + self.assertEqual(nrpe.get_nagios_unit_name(), 'testunit') + + @patch.object(os.path, 'isdir') + def test_add_init_service_checks(self, mock_isdir): + def _exists(init_file): + files = ['/etc/init/apache2.conf', + '/usr/lib/nagios/plugins/check_upstart_job', + '/etc/init.d/haproxy', + '/usr/lib/nagios/plugins/check_status_file.py', + '/etc/cron.d/nagios-service-check-haproxy', + '/var/lib/nagios/service-check-haproxy.txt', + '/usr/lib/nagios/plugins/check_systemd.py' + ] + return init_file in files + + self.patched['exists'].side_effect = _exists + + # Test without systemd and /var/lib/nagios does not exist + self.patched['init_is_systemd'].return_value = False + mock_isdir.return_value = False + bill = nrpe.NRPE() + services = ['apache2', 'haproxy'] + nrpe.add_init_service_checks(bill, services, 'testunit') + mock_isdir.assert_called_with('/var/lib/nagios') + self.patched['call'].assert_not_called() + expect_cmds = { + 'apache2': '/usr/lib/nagios/plugins/check_upstart_job apache2', + 'haproxy': '/usr/lib/nagios/plugins/check_status_file.py -f ' + '/var/lib/nagios/service-check-haproxy.txt', + } + self.assertEqual(bill.checks[0].shortname, 'apache2') + self.assertEqual(bill.checks[0].check_cmd, expect_cmds['apache2']) + self.assertEqual(bill.checks[1].shortname, 'haproxy') + self.assertEqual(bill.checks[1].check_cmd, expect_cmds['haproxy']) + + # without systemd and /var/lib/nagios does exist + mock_isdir.return_value = True + f = MagicMock() + self.patched['open'].return_value = f + bill = nrpe.NRPE() + services = 
['apache2', 'haproxy'] + nrpe.add_init_service_checks(bill, services, 'testunit') + mock_isdir.assert_called_with('/var/lib/nagios') + self.patched['call'].assert_called_with( + ['/usr/local/lib/nagios/plugins/check_exit_status.pl', '-e', '-s', + '/etc/init.d/haproxy', 'status'], stdout=f, + stderr=subprocess.STDOUT) + + # Test regular services and snap services with systemd + services = ['apache2', 'haproxy', 'snap.test.test', + 'ceph-radosgw@hostname'] + self.patched['init_is_systemd'].return_value = True + nrpe.add_init_service_checks(bill, services, 'testunit') + expect_cmds = { + 'apache2': '/usr/lib/nagios/plugins/check_systemd.py apache2', + 'haproxy': '/usr/lib/nagios/plugins/check_systemd.py haproxy', + 'snap.test.test': '/usr/lib/nagios/plugins/check_systemd.py snap.test.test', + } + self.assertEqual(bill.checks[2].shortname, 'apache2') + self.assertEqual(bill.checks[2].check_cmd, expect_cmds['apache2']) + self.assertEqual(bill.checks[3].shortname, 'haproxy') + self.assertEqual(bill.checks[3].check_cmd, expect_cmds['haproxy']) + self.assertEqual(bill.checks[4].shortname, 'snap.test.test') + self.assertEqual(bill.checks[4].check_cmd, expect_cmds['snap.test.test']) + + def test_copy_nrpe_checks(self): + file_presence = { + 'filea': True, + 'fileb': False} + self.patched['exists'].return_value = True + self.patched['glob'].return_value = ['filea', 'fileb'] + self.patched['isdir'].side_effect = [False, True] + self.patched['isfile'].side_effect = lambda x: file_presence[x] + nrpe.copy_nrpe_checks() + self.patched['glob'].assert_called_once_with( + ('/usr/lib/test_charm_dir/hooks/charmhelpers/contrib/openstack/' + 'files/check_*')) + self.patched['copy2'].assert_called_once_with( + 'filea', + '/usr/local/lib/nagios/plugins/filea') + + def test_copy_nrpe_checks_other_root(self): + file_presence = { + 'filea': True, + 'fileb': False} + self.patched['exists'].return_value = True + self.patched['glob'].return_value = ['filea', 'fileb'] + self.patched['isdir'].side_effect = [True, False] + self.patched['isfile'].side_effect = lambda x: file_presence[x] + nrpe.copy_nrpe_checks() + self.patched['glob'].assert_called_once_with( + ('/usr/lib/test_charm_dir/charmhelpers/contrib/openstack/' + 'files/check_*')) + self.patched['copy2'].assert_called_once_with( + 'filea', + '/usr/local/lib/nagios/plugins/filea') + + def test_copy_nrpe_checks_nrpe_files_dir(self): + file_presence = { + 'filea': True, + 'fileb': False} + self.patched['exists'].return_value = True + self.patched['glob'].return_value = ['filea', 'fileb'] + self.patched['isfile'].side_effect = lambda x: file_presence[x] + nrpe.copy_nrpe_checks(nrpe_files_dir='/other/dir') + self.patched['glob'].assert_called_once_with( + '/other/dir/check_*') + self.patched['copy2'].assert_called_once_with( + 'filea', + '/usr/local/lib/nagios/plugins/filea') diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/database/README b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/database/README new file mode 100644 index 0000000000000000000000000000000000000000..c58240383d6d1cc6fc1c8388882e50d529804fc8 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/database/README @@ -0,0 +1,7 @@ +The Mysql test is not enabled. In order to enable it, touch __init__.py. + +As it currently stands, the test has poor coverage and fails under python3. +These things should be addressed before re-enabling the test. 
+ +Adam Israel +March 17, 2015 diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/database/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/database/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/database/test_mysql.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/database/test_mysql.py new file mode 100644 index 0000000000000000000000000000000000000000..c449fc481f1a76b5da9090288a64d1df8eb5b259 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/database/test_mysql.py @@ -0,0 +1,909 @@ +import os +import mock +import json +import unittest +import sys +import shutil +import tempfile + +from collections import OrderedDict + +sys.modules['MySQLdb'] = mock.Mock() +from charmhelpers.contrib.database import mysql # noqa + + +class MysqlTests(unittest.TestCase): + def setUp(self): + super(MysqlTests, self).setUp() + + def test_connect_host_defined(self): + helper = mysql.MySQLHelper('foo', 'bar', host='hostA') + with mock.patch.object(mysql, 'log'): + helper.connect(user='user', password='password', host='1.1.1.1') + mysql.MySQLdb.connect.assert_called_with( + passwd='password', host='1.1.1.1', user='user') + + def test_connect_host_not_defined(self): + helper = mysql.MySQLHelper('foo', 'bar') + with mock.patch.object(mysql, 'log'): + helper.connect(user='user', password='password') + mysql.MySQLdb.connect.assert_called_with( + passwd='password', host='localhost', user='user') + + def test_connect_port_defined(self): + helper = mysql.MySQLHelper('foo', 'bar') + with mock.patch.object(mysql, 'log'): + helper.connect(user='user', password='password', port=3316) + mysql.MySQLdb.connect.assert_called_with( + passwd='password', host='localhost', user='user', port=3316) + + @mock.patch.object(mysql.MySQLHelper, 'normalize_address') + @mock.patch.object(mysql.MySQLHelper, 'get_mysql_password') + @mock.patch.object(mysql.MySQLHelper, 'grant_exists') + @mock.patch.object(mysql, 'relation_get') + @mock.patch.object(mysql, 'related_units') + @mock.patch.object(mysql, 'log') + def test_get_allowed_units(self, mock_log, mock_related_units, + mock_relation_get, + mock_grant_exists, + mock_get_password, + mock_normalize_address): + + # echo + mock_normalize_address.side_effect = lambda addr: addr + + def mock_rel_get(unit, rid): + if unit == 'unit/0': + # Non-prefixed settings + d = {'private-address': '10.0.0.1', + 'hostname': 'hostA'} + elif unit == 'unit/1': + # Containing prefixed settings + d = {'private-address': '10.0.0.2', + 'dbA_hostname': json.dumps(['10.0.0.2', '2001:db8:1::2'])} + elif unit == 'unit/2': + # No hostname + d = {'private-address': '10.0.0.3'} + + return d + + mock_relation_get.side_effect = mock_rel_get + mock_related_units.return_value = ['unit/0', 'unit/1', 'unit/2'] + + helper = mysql.MySQLHelper('foo', 'bar', host='hostA') + units = helper.get_allowed_units('dbA', 'userA') + + calls = [mock.call('dbA', 'userA', 'hostA'), + mock.call('dbA', 'userA', '10.0.0.2'), + mock.call('dbA', 'userA', '2001:db8:1::2'), + mock.call('dbA', 'userA', '10.0.0.3')] + + helper.grant_exists.assert_has_calls(calls, any_order=True) + self.assertEqual(units, set(['unit/0', 'unit/1', 'unit/2'])) + + @mock.patch('charmhelpers.contrib.network.ip.log', + lambda *args, 
**kwargs: None) + @mock.patch('charmhelpers.contrib.network.ip.ns_query') + @mock.patch('charmhelpers.contrib.network.ip.socket') + @mock.patch.object(mysql, 'unit_get') + @mock.patch.object(mysql, 'config_get') + @mock.patch.object(mysql, 'log') + def test_normalize_address(self, mock_log, mock_config_get, mock_unit_get, + mock_socket, mock_ns_query): + helper = mysql.MySQLHelper('foo', 'bar', host='hostA') + # prefer-ipv6 + mock_config_get.return_value = False + # echo + mock_socket.gethostbyname.side_effect = lambda addr: addr + + mock_unit_get.return_value = '10.0.0.1' + out = helper.normalize_address('10.0.0.1') + self.assertEqual('127.0.0.1', out) + mock_config_get.assert_called_with('prefer-ipv6') + + mock_unit_get.return_value = '10.0.0.1' + out = helper.normalize_address('10.0.0.2') + self.assertEqual('10.0.0.2', out) + mock_config_get.assert_called_with('prefer-ipv6') + + out = helper.normalize_address('2001:db8:1::1') + self.assertEqual('2001:db8:1::1', out) + mock_config_get.assert_called_with('prefer-ipv6') + + mock_socket.gethostbyname.side_effect = Exception + mock_ns_query.return_value = None + out = helper.normalize_address('unresolvable') + self.assertEqual('unresolvable', out) + mock_config_get.assert_called_with('prefer-ipv6') + + # prefer-ipv6 + mock_config_get.return_value = True + mock_socket.gethostbyname.side_effect = 'other' + out = helper.normalize_address('unresolvable') + self.assertEqual('unresolvable', out) + mock_config_get.assert_called_with('prefer-ipv6') + + def test_passwd_keys(self): + helper = mysql.MySQLHelper('foo', 'bar', host='hostA') + self.assertEqual(list(helper.passwd_keys(None)), ['mysql.passwd']) + self.assertEqual(list(helper.passwd_keys('auser')), + ['mysql-auser.passwd', 'auser.passwd']) + + @mock.patch.object(mysql.MySQLHelper, 'migrate_passwords_to_leader_storage') + @mock.patch.object(mysql.MySQLHelper, 'get_mysql_password_on_disk') + @mock.patch.object(mysql, 'leader_get') + def test_get_mysql_password_no_peer_passwd(self, mock_leader_get, + mock_get_disk_pw, + mock_migrate_pw): + helper = mysql.MySQLHelper('foo', 'bar', host='hostA') + store = {} + mock_leader_get.side_effect = lambda key: store.get(key) + mock_get_disk_pw.return_value = "disk-passwd" + self.assertEqual(helper.get_mysql_password(), "disk-passwd") + self.assertTrue(mock_migrate_pw.called) + + @mock.patch.object(mysql.MySQLHelper, 'migrate_passwords_to_leader_storage') + @mock.patch.object(mysql.MySQLHelper, 'get_mysql_password_on_disk') + @mock.patch.object(mysql, 'leader_get') + def test_get_mysql_password_peer_passwd(self, mock_leader_get, + mock_get_disk_pw, mock_migrate_pw): + helper = mysql.MySQLHelper('foo', 'bar', host='hostA') + store = {'mysql-userA.passwd': 'passwdA'} + mock_leader_get.side_effect = lambda key: store.get(key) + mock_get_disk_pw.return_value = "disk-passwd" + self.assertEqual(helper.get_mysql_password(username='userA'), + "passwdA") + self.assertTrue(mock_migrate_pw.called) + + @mock.patch.object(mysql.MySQLHelper, 'migrate_passwords_to_leader_storage') + @mock.patch.object(mysql.MySQLHelper, 'get_mysql_password_on_disk') + @mock.patch.object(mysql, 'leader_get') + def test_get_mysql_password_peer_passwd_legacy(self, mock_leader_get, + mock_get_disk_pw, + mock_migrate_pw): + helper = mysql.MySQLHelper('foo', 'bar', host='hostA') + store = {'userA.passwd': 'passwdA'} + mock_leader_get.side_effect = lambda key: store.get(key) + mock_get_disk_pw.return_value = "disk-passwd" + self.assertEqual(helper.get_mysql_password(username='userA'), + 
"passwdA") + self.assertTrue(mock_migrate_pw.called) + + @mock.patch.object(mysql.MySQLHelper, 'migrate_passwords_to_leader_storage') + @mock.patch.object(mysql.MySQLHelper, 'get_mysql_password_on_disk') + @mock.patch.object(mysql, 'leader_get') + def test_get_mysql_password_peer_passwd_all(self, mock_leader_get, + mock_get_disk_pw, + mock_migrate_pw): + helper = mysql.MySQLHelper('foo', 'bar', host='hostA') + # Add * so we can identify that the new format key takes precedence + # if found. + store = {'mysql-userA.passwd': 'passwdA', + 'userA.passwd': 'passwdA*'} + mock_leader_get.side_effect = lambda key: store.get(key) + mock_get_disk_pw.return_value = "disk-passwd" + self.assertEqual(helper.get_mysql_password(username='userA'), + "passwdA") + self.assertTrue(mock_migrate_pw.called) + + @mock.patch.object(mysql.MySQLHelper, 'set_mysql_password') + def test_set_mysql_root_password(self, mock_set_passwd): + helper = mysql.MySQLHelper('foo', 'bar', host='hostA') + helper.set_mysql_root_password(password='1234') + mock_set_passwd.assert_called_with( + 'root', + '1234', + current_password=None) + + @mock.patch.object(mysql.MySQLHelper, 'set_mysql_password') + def test_set_mysql_root_password_cur_passwd(self, mock_set_passwd): + helper = mysql.MySQLHelper('foo', 'bar', host='hostA') + helper.set_mysql_root_password(password='1234', current_password='abc') + mock_set_passwd.assert_called_with( + 'root', + '1234', + current_password='abc') + + @mock.patch.object(mysql, 'log', lambda *args, **kwargs: None) + @mock.patch.object(mysql, 'is_leader') + @mock.patch.object(mysql, 'leader_get') + @mock.patch.object(mysql, 'leader_set') + @mock.patch.object(mysql, 'CompareHostReleases') + @mock.patch.object(mysql.MySQLHelper, 'get_mysql_password') + @mock.patch.object(mysql.MySQLHelper, 'connect') + def test_set_mysql_password(self, mock_connect, mock_get_passwd, + mock_compare_releases, mock_leader_set, + mock_leader_get, mock_is_leader): + mock_connection = mock.MagicMock() + mock_cursor = mock.MagicMock() + mock_connection.cursor.return_value = mock_cursor + mock_get_passwd.return_value = 'asdf' + mock_is_leader.return_value = True + mock_leader_get.return_value = '1234' + mock_compare_releases.return_value = 'artful' + + helper = mysql.MySQLHelper('foo', 'bar', host='hostA') + helper.connection = mock_connection + + helper.set_mysql_password(username='root', password='1234') + + mock_connect.assert_has_calls( + [mock.call(user='root', password='asdf'), # original password + mock.call(user='root', password='1234')]) # new password + mock_leader_get.assert_has_calls([mock.call('mysql.passwd')]) + mock_leader_set.assert_has_calls( + [mock.call(settings={'mysql.passwd': '1234'})] + ) + SQL_UPDATE_PASSWD = ("UPDATE mysql.user SET password = " + "PASSWORD( %s ) WHERE user = %s;") + mock_cursor.assert_has_calls( + [mock.call.execute(SQL_UPDATE_PASSWD, ('1234', 'root')), + mock.call.execute('FLUSH PRIVILEGES;'), + mock.call.close(), + mock.call.execute('select 1;'), + mock.call.close()] + ) + mock_get_passwd.assert_called_once_with(None) + + # make sure for the non-leader leader-set is not called + mock_is_leader.return_value = False + mock_leader_set.reset_mock() + mock_get_passwd.reset_mock() + helper.set_mysql_password(username='root', password='1234') + mock_leader_set.assert_not_called() + mock_get_passwd.assert_called_once_with(None) + + mock_get_passwd.reset_mock() + mock_compare_releases.return_value = 'bionic' + helper.set_mysql_password(username='root', password='1234') + SQL_UPDATE_PASSWD = 
("UPDATE mysql.user SET " + "authentication_string = " + "PASSWORD( %s ) WHERE user = %s;") + mock_cursor.assert_has_calls( + [mock.call.execute(SQL_UPDATE_PASSWD, ('1234', 'root')), + mock.call.execute('FLUSH PRIVILEGES;'), + mock.call.close(), + mock.call.execute('select 1;'), + mock.call.close()] + ) + mock_get_passwd.assert_called_once_with(None) + + # Test supplying the current password + mock_is_leader.return_value = False + mock_connect.reset_mock() + mock_get_passwd.reset_mock() + helper.set_mysql_password( + username='root', + password='1234', + current_password='currpass') + self.assertFalse(mock_get_passwd.called) + mock_connect.assert_has_calls( + [mock.call(user='root', password='currpass'), # original password + mock.call(user='root', password='1234')]) # new password + + @mock.patch.object(mysql, 'leader_get') + @mock.patch.object(mysql, 'leader_set') + @mock.patch.object(mysql.MySQLHelper, 'get_mysql_password') + @mock.patch.object(mysql.MySQLHelper, 'connect') + def test_set_mysql_password_fail_to_connect(self, mock_connect, + mock_get_passwd, + mock_leader_set, + mock_leader_get): + + class FakeOperationalError(Exception): + pass + + def fake_connect(*args, **kwargs): + raise FakeOperationalError('foobar') + + mysql.MySQLdb.OperationalError = FakeOperationalError + helper = mysql.MySQLHelper('foo', 'bar', host='hostA') + mock_connect.side_effect = fake_connect + self.assertRaises(mysql.MySQLSetPasswordError, + helper.set_mysql_password, + username='root', password='1234') + + @mock.patch.object(mysql, 'leader_get') + @mock.patch.object(mysql, 'leader_set') + @mock.patch.object(mysql.MySQLHelper, 'get_mysql_password') + @mock.patch.object(mysql.MySQLHelper, 'connect') + def test_set_mysql_password_fail_to_connect2(self, mock_connect, + mock_get_passwd, + mock_leader_set, + mock_leader_get): + + class FakeOperationalError(Exception): + def __str__(self): + return 'some-error' + + operational_error = FakeOperationalError('foobar') + + def fake_connect(user, password): + # fail for the new password + if user == 'root' and password == '1234': + raise operational_error + else: + return mock.MagicMock() + + mysql.MySQLdb.OperationalError = FakeOperationalError + helper = mysql.MySQLHelper('foo', 'bar', host='hostA') + helper.connection = mock.MagicMock() + mock_connect.side_effect = fake_connect + with self.assertRaises(mysql.MySQLSetPasswordError) as cm: + helper.set_mysql_password(username='root', password='1234') + + ex = cm.exception + self.assertEqual(ex.args[0], 'Cannot connect using new password: some-error') + self.assertEqual(ex.args[1], operational_error) + + @mock.patch.object(mysql, 'is_leader') + @mock.patch.object(mysql, 'leader_set') + def test_migrate_passwords_to_leader_storage(self, mock_leader_set, + mock_is_leader): + files = {'mysql.passwd': '1', + 'userA.passwd': '2', + 'mysql-userA.passwd': '3'} + store = {} + + def _store(settings): + store.update(settings) + + mock_is_leader.return_value = True + + tmpdir = tempfile.mkdtemp('charm-helpers-unit-tests') + try: + root_tmplt = "%s/mysql.passwd" % (tmpdir) + helper = mysql.MySQLHelper(root_tmplt, None, host='hostA') + for f in files: + with open(os.path.join(tmpdir, f), 'w') as fd: + fd.write(files[f]) + + mock_leader_set.side_effect = _store + helper.migrate_passwords_to_leader_storage() + + calls = [mock.call(settings={'mysql.passwd': '1'}), + mock.call(settings={'userA.passwd': '2'}), + mock.call(settings={'mysql-userA.passwd': '3'})] + + mock_leader_set.assert_has_calls(calls, + any_order=True) + finally: 
+ shutil.rmtree(tmpdir) + + # Note that legacy key/val is NOT overwritten + self.assertEqual(store, {'mysql.passwd': '1', + 'userA.passwd': '2', + 'mysql-userA.passwd': '3'}) + + @mock.patch.object(mysql, 'log', lambda *args, **kwargs: None) + @mock.patch.object(mysql, 'is_leader') + @mock.patch.object(mysql, 'leader_set') + def test_migrate_passwords_to_leader_storage_not_leader(self, mock_leader_set, + mock_is_leader): + mock_is_leader.return_value = False + tmpdir = tempfile.mkdtemp('charm-helpers-unit-tests') + try: + root_tmplt = "%s/mysql.passwd" % (tmpdir) + helper = mysql.MySQLHelper(root_tmplt, None, host='hostA') + helper.migrate_passwords_to_leader_storage() + finally: + shutil.rmtree(tmpdir) + mock_leader_set.assert_not_called() + + +class MySQLConfigHelperTests(unittest.TestCase): + + def setUp(self): + super(MySQLConfigHelperTests, self).setUp() + self.config_data = {} + self.config = mock.MagicMock() + mysql.config_get = self.config + self.config.side_effect = self._fake_config + + def _fake_config(self, key=None): + if key: + try: + return self.config_data[key] + except KeyError: + return None + else: + return self.config_data + + @mock.patch.object(mysql.MySQLConfigHelper, 'get_mem_total') + @mock.patch.object(mysql, 'log') + def test_get_innodb_pool_fixed(self, log, mem): + mem.return_value = "100G" + self.config_data = { + 'innodb-buffer-pool-size': "50%", + } + + helper = mysql.MySQLConfigHelper() + + self.assertEqual( + helper.get_innodb_buffer_pool_size(), + helper.human_to_bytes("50G")) + + @mock.patch.object(mysql.MySQLConfigHelper, 'get_mem_total') + @mock.patch.object(mysql, 'log') + def test_get_innodb_pool_not_set(self, mog, mem): + mem.return_value = "100G" + self.config_data = { + 'innodb-buffer-pool-size': '', + } + + helper = mysql.MySQLConfigHelper() + + self.assertEqual( + helper.get_innodb_buffer_pool_size(), + helper.DEFAULT_INNODB_BUFFER_SIZE_MAX) + + @mock.patch.object(mysql.MySQLConfigHelper, 'get_mem_total') + @mock.patch.object(mysql, 'log') + def test_get_innodb_buffer_unset(self, mog, mem): + mem.return_value = "100G" + self.config_data = { + 'innodb-buffer-pool-size': None, + 'dataset-size': None, + } + + helper = mysql.MySQLConfigHelper() + + self.assertEqual( + helper.get_innodb_buffer_pool_size(), + helper.DEFAULT_INNODB_BUFFER_SIZE_MAX) + + @mock.patch.object(mysql.MySQLConfigHelper, 'get_mem_total') + @mock.patch.object(mysql, 'log') + def test_get_innodb_buffer_unset_small(self, mog, mem): + mem.return_value = "512M" + self.config_data = { + 'innodb-buffer-pool-size': None, + 'dataset-size': None, + } + + helper = mysql.MySQLConfigHelper() + + self.assertEqual( + helper.get_innodb_buffer_pool_size(), + int(helper.human_to_bytes(mem.return_value) * + helper.DEFAULT_INNODB_BUFFER_FACTOR)) + + @mock.patch.object(mysql.MySQLConfigHelper, 'get_mem_total') + @mock.patch.object(mysql, 'log') + def test_get_innodb_dataset_size(self, mog, mem): + mem.return_value = "100G" + self.config_data = { + 'dataset-size': "10G", + } + + helper = mysql.MySQLConfigHelper() + + self.assertEqual( + helper.get_innodb_buffer_pool_size(), + int(helper.human_to_bytes("10G"))) + + @mock.patch.object(mysql.MySQLConfigHelper, 'get_mem_total') + @mock.patch.object(mysql, 'log') + def test_get_tuning_level(self, mog, mem): + mem.return_value = "512M" + self.config_data = { + 'tuning-level': 'safest', + } + + helper = mysql.MySQLConfigHelper() + + self.assertEqual( + helper.get_innodb_flush_log_at_trx_commit(), + 1 + ) + + @mock.patch.object(mysql.MySQLConfigHelper, 
'get_mem_total') + @mock.patch.object(mysql, 'log') + def test_get_tuning_level_fast(self, mog, mem): + mem.return_value = "512M" + self.config_data = { + 'tuning-level': 'fast', + } + + helper = mysql.MySQLConfigHelper() + + self.assertEqual( + helper.get_innodb_flush_log_at_trx_commit(), + 2 + ) + + @mock.patch.object(mysql.MySQLConfigHelper, 'get_mem_total') + @mock.patch.object(mysql, 'log') + def test_get_tuning_level_unsafe(self, mog, mem): + mem.return_value = "512M" + self.config_data = { + 'tuning-level': 'unsafe', + } + + helper = mysql.MySQLConfigHelper() + + self.assertEqual( + helper.get_innodb_flush_log_at_trx_commit(), + 0 + ) + + @mock.patch.object(mysql.MySQLConfigHelper, 'get_mem_total') + @mock.patch.object(mysql, 'log') + def test_get_innodb_valid_values(self, mog, mem): + mem.return_value = "512M" + self.config_data = { + 'innodb-change-buffering': 'all', + } + + helper = mysql.MySQLConfigHelper() + + self.assertEqual( + helper.get_innodb_change_buffering(), + 'all' + ) + + @mock.patch.object(mysql.MySQLConfigHelper, 'get_mem_total') + @mock.patch.object(mysql, 'log') + def test_get_innodb_invalid_values(self, mog, mem): + mem.return_value = "512M" + self.config_data = { + 'innodb-change-buffering': 'invalid', + } + + helper = mysql.MySQLConfigHelper() + + self.assertTrue(helper.get_innodb_change_buffering() is None) + + +class PerconaTests(unittest.TestCase): + + def setUp(self): + super(PerconaTests, self).setUp() + self.config_data = {} + self.config = mock.MagicMock() + mysql.config_get = self.config + self.config.side_effect = self._fake_config + + def _fake_config(self, key=None): + if key: + try: + return self.config_data[key] + except KeyError: + return None + else: + return self.config_data + + @mock.patch.object(mysql.MySQLConfigHelper, 'get_mem_total') + @mock.patch.object(mysql, 'log') + def test_parse_config_innodb_pool_fixed(self, log, mem): + mem.return_value = "100G" + self.config_data = { + 'innodb-buffer-pool-size': "50%", + } + + helper = mysql.PerconaClusterHelper() + mysql_config = helper.parse_config() + + self.assertEqual(mysql_config.get('innodb_buffer_pool_size'), + helper.human_to_bytes("50G")) + + @mock.patch.object(mysql.MySQLConfigHelper, 'get_mem_total') + @mock.patch.object(mysql, 'log') + def test_parse_config_innodb_pool_not_set(self, mog, mem): + mem.return_value = "100G" + self.config_data = { + 'innodb-buffer-pool-size': '', + } + + helper = mysql.PerconaClusterHelper() + mysql_config = helper.parse_config() + + self.assertEqual( + mysql_config.get('innodb_buffer_pool_size'), + helper.DEFAULT_INNODB_BUFFER_SIZE_MAX) + + @mock.patch.object(mysql.MySQLConfigHelper, 'get_mem_total') + @mock.patch.object(mysql, 'log') + def test_parse_config_innodb_buffer_unset(self, mog, mem): + mem.return_value = "100G" + self.config_data = { + 'innodb-buffer-pool-size': None, + 'dataset-size': None, + } + + helper = mysql.PerconaClusterHelper() + mysql_config = helper.parse_config() + + self.assertEqual( + mysql_config.get('innodb_buffer_pool_size'), + helper.DEFAULT_INNODB_BUFFER_SIZE_MAX) + + @mock.patch.object(mysql.MySQLConfigHelper, 'get_mem_total') + @mock.patch.object(mysql, 'log') + def test_parse_config_innodb_buffer_unset_small(self, mog, mem): + mem.return_value = "512M" + self.config_data = { + 'innodb-buffer-pool-size': None, + 'dataset-size': None, + } + + helper = mysql.PerconaClusterHelper() + mysql_config = helper.parse_config() + + self.assertEqual( + mysql_config.get('innodb_buffer_pool_size'), + 
int(helper.human_to_bytes(mem.return_value) * + helper.DEFAULT_INNODB_BUFFER_FACTOR)) + + @mock.patch.object(mysql.MySQLConfigHelper, 'get_mem_total') + @mock.patch.object(mysql, 'log') + def test_parse_config_innodb_dataset_size(self, mog, mem): + mem.return_value = "100G" + self.config_data = { + 'dataset-size': "10G", + } + + helper = mysql.PerconaClusterHelper() + mysql_config = helper.parse_config() + + self.assertEqual( + mysql_config.get('innodb_buffer_pool_size'), + int(helper.human_to_bytes("10G"))) + + @mock.patch.object(mysql.MySQLConfigHelper, 'get_mem_total') + @mock.patch.object(mysql, 'log') + def test_parse_config_wait_timeout(self, mog, mem): + mem.return_value = "100G" + + timeout = 314 + self.config_data = { + 'wait-timeout': timeout, + } + + helper = mysql.PerconaClusterHelper() + mysql_config = helper.parse_config() + + self.assertEqual( + mysql_config.get('wait_timeout'), + timeout) + + @mock.patch.object(mysql.MySQLConfigHelper, 'get_mem_total') + @mock.patch.object(mysql, 'log') + def test_parse_config_tuning_level(self, mog, mem): + mem.return_value = "512M" + self.config_data = { + 'tuning-level': 'safest', + } + + helper = mysql.PerconaClusterHelper() + mysql_config = helper.parse_config() + + self.assertEqual( + mysql_config.get('innodb_flush_log_at_trx_commit'), + 1 + ) + + @mock.patch.object(mysql.MySQLConfigHelper, 'get_mem_total') + @mock.patch.object(mysql, 'log') + def test_parse_config_tuning_level_fast(self, mog, mem): + mem.return_value = "512M" + self.config_data = { + 'tuning-level': 'fast', + } + + helper = mysql.PerconaClusterHelper() + mysql_config = helper.parse_config() + + self.assertEqual( + mysql_config.get('innodb_flush_log_at_trx_commit'), + 2 + ) + + @mock.patch.object(mysql.MySQLConfigHelper, 'get_mem_total') + @mock.patch.object(mysql, 'log') + def test_parse_config_tuning_level_unsafe(self, mog, mem): + mem.return_value = "512M" + self.config_data = { + 'tuning-level': 'unsafe', + } + + helper = mysql.PerconaClusterHelper() + mysql_config = helper.parse_config() + + self.assertEqual( + mysql_config.get('innodb_flush_log_at_trx_commit'), + 0 + ) + + @mock.patch.object(mysql.MySQLConfigHelper, 'get_mem_total') + @mock.patch.object(mysql, 'log') + def test_parse_config_innodb_valid_values(self, mog, mem): + mem.return_value = "512M" + self.config_data = { + 'innodb-change-buffering': 'all', + 'innodb-io-capacity': 100, + } + + helper = mysql.PerconaClusterHelper() + mysql_config = helper.parse_config() + + self.assertEqual( + mysql_config.get('innodb_change_buffering'), + 'all' + ) + + self.assertEqual( + mysql_config.get('innodb_io_capacity'), + 100 + ) + + @mock.patch.object(mysql.MySQLConfigHelper, 'get_mem_total') + @mock.patch.object(mysql, 'log') + def test_parse_config_innodb_invalid_values(self, mog, mem): + mem.return_value = "512M" + self.config_data = { + 'innodb-change-buffering': 'invalid', + } + + helper = mysql.PerconaClusterHelper() + mysql_config = helper.parse_config() + + self.assertTrue('innodb_change_buffering' not in mysql_config) + self.assertTrue('innodb_io_capacity' not in mysql_config) + + +class Mysql8Tests(unittest.TestCase): + + def setUp(self): + super(Mysql8Tests, self).setUp() + self.template = "/tmp/mysql-passwd.txt" + self.connection = mock.MagicMock() + self.cursor = mock.MagicMock() + self.connection.cursor.return_value = self.cursor + self.helper = mysql.MySQL8Helper( + rpasswdf_template=self.template, + upasswdf_template=self.template) + self.helper.connection = self.connection + self.user = "user" + 
self.host = "10.5.0.21" + self.password = "passwd" + self.db = "mydb" + + def test_grant_exists(self): + # With backticks + self.cursor.fetchall.return_value = ( + ("GRANT USAGE ON *.* TO `{}`@`{}`".format(self.user, self.host),), + ("GRANT ALL PRIVILEGES ON `{}`.* TO `{}`@`{}`" + .format(self.db, self.user, self.host),)) + self.assertTrue(self.helper.grant_exists(self.db, self.user, self.host)) + + self.cursor.execute.assert_called_with( + "SHOW GRANTS FOR '{}'@'{}'".format(self.user, self.host)) + + # With single quotes + self.cursor.fetchall.return_value = ( + ("GRANT USAGE ON *.* TO '{}'@'{}'".format(self.user, self.host),), + ("GRANT ALL PRIVILEGES ON '{}'.* TO '{}'@'{}'" + .format(self.db, self.user, self.host),)) + self.assertTrue(self.helper.grant_exists(self.db, self.user, self.host)) + + # Grant not there + self.cursor.fetchall.return_value = ( + ("GRANT USAGE ON *.* TO '{}'@'{}'".format("someuser", "notmyhost"),), + ("GRANT ALL PRIVILEGES ON '{}'.* TO '{}'@'{}'" + .format("somedb", "someuser", "notmyhost"),)) + self.assertFalse(self.helper.grant_exists(self.db, self.user, self.host)) + + def test_create_grant(self): + self.helper.grant_exists = mock.MagicMock(return_value=False) + self.helper.create_user = mock.MagicMock() + + self.helper.create_grant(self.db, self.user, self.host, self.password) + self.cursor.execute.assert_called_with( + "GRANT ALL PRIVILEGES ON `{}`.* TO '{}'@'{}'" + .format(self.db, self.user, self.host)) + self.helper.create_user.assert_called_with(self.user, self.host, self.password) + + def test_create_user(self): + self.helper.create_user(self.user, self.host, self.password) + self.cursor.execute.assert_called_with( + "CREATE USER '{}'@'{}' IDENTIFIED BY '{}'". + format(self.user, self.host, self.password)) + + def test_create_router_grant(self): + self.helper.create_user = mock.MagicMock() + + self.helper.create_router_grant(self.user, self.host, self.password) + _calls = [ + mock.call("GRANT CREATE USER ON *.* TO '{}'@'{}' WITH GRANT OPTION" + .format(self.user, self.host)), + mock.call("GRANT SELECT, INSERT, UPDATE, DELETE, EXECUTE ON " + "mysql_innodb_cluster_metadata.* TO '{}'@'{}'" + .format(self.user, self.host)), + mock.call("GRANT SELECT ON mysql.user TO '{}'@'{}'" + .format(self.user, self.host)), + mock.call("GRANT SELECT ON " + "performance_schema.replication_group_members TO " + "'{}'@'{}'".format(self.user, self.host)), + mock.call("GRANT SELECT ON " + "performance_schema.replication_group_member_stats TO " + "'{}'@'{}'".format(self.user, self.host)), + mock.call("GRANT SELECT ON " + "performance_schema.global_variables TO " + "'{}'@'{}'".format(self.user, self.host))] + + self.cursor.execute.assert_has_calls(_calls) + self.helper.create_user.assert_called_with(self.user, self.host, self.password) + + def test_configure_router(self): + self.helper.create_user = mock.MagicMock() + self.helper.create_router_grant = mock.MagicMock() + self.helper.normalize_address = mock.MagicMock(return_value=self.host) + self.helper.get_mysql_password = mock.MagicMock(return_value=self.password) + + self.assertEqual(self.password, self.helper.configure_router(self.host, self.user)) + self.helper.create_user.assert_called_with(self.user, self.host, self.password) + self.helper.create_router_grant.assert_called_with(self.user, self.host, self.password) + + +class MysqlHelperTests(unittest.TestCase): + + def setUp(self): + super(MysqlHelperTests, self).setUp() + + def test_get_prefix(self): + _tests = { + "prefix1": "prefix1_username", + "prefix2": 
"prefix2_database", + "prefix3": "prefix3_hostname"} + + for key in _tests.keys(): + self.assertEqual( + key, + mysql.get_prefix(_tests[key])) + + def test_get_db_data(self): + _unprefixed = "myprefix" + # Test relation data has every variation of shared-db/db-router data + _relation_data = { + "egress-subnets": "10.5.0.43/32", + "ingress-address": "10.5.0.43", + "nova_database": "nova", + "nova_hostname": "10.5.0.43", + "nova_username": "nova", + "novaapi_database": "nova_api", + "novaapi_hostname": "10.5.0.43", + "novaapi_username": "nova", + "novacell0_database": "nova_cell0", + "novacell0_hostname": "10.5.0.43", + "novacell0_username": "nova", + "private-address": "10.5.0.43", + "database": "keystone", + "username": "keystone", + "hostname": "10.5.0.43", + "mysqlrouter_username": + "mysqlrouteruser", + "mysqlrouter_hostname": "10.5.0.43"} + + _expected_data = OrderedDict([ + ('nova', OrderedDict([('database', 'nova'), + ('hostname', '10.5.0.43'), + ('username', 'nova')])), + ('novaapi', OrderedDict([('database', 'nova_api'), + ('hostname', '10.5.0.43'), + ('username', 'nova')])), + ('novacell0', OrderedDict([('database', 'nova_cell0'), + ('hostname', '10.5.0.43'), + ('username', 'nova')])), + ('mysqlrouter', OrderedDict([('username', 'mysqlrouteruser'), + ('hostname', '10.5.0.43')])), + ('myprefix', OrderedDict([('hostname', '10.5.0.43'), + ('database', 'keystone'), + ('username', 'keystone')]))]) + + _results = mysql.get_db_data(_relation_data, unprefixed=_unprefixed) + + for prefix in _expected_data.keys(): + for key in _expected_data[prefix].keys(): + self.assertEqual( + _results[prefix][key], _expected_data[prefix][key]) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hahelpers/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hahelpers/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hahelpers/test_apache_utils.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hahelpers/test_apache_utils.py new file mode 100644 index 0000000000000000000000000000000000000000..7b755693b92dcbebc3928f2957bca960a301a927 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hahelpers/test_apache_utils.py @@ -0,0 +1,133 @@ +from mock import patch, call + +from testtools import TestCase +from tests.helpers import patch_open, FakeRelation + +import charmhelpers.contrib.hahelpers.apache as apache_utils + +cert = ''' + -----BEGIN CERTIFICATE----- +MIIDXTCCAkWgAwIBAgIJAMO1fWOu8ntUMA0GCSqGSIb3DQEBCwUAMEUxCzAJBgNV +BAYTAkFVMRMwEQYDVQQIDApTb21lLVN0YXRlMSEwHwYDVQQKDBhJbnRlcm5ldCBX +aWRnaXRzIFB0eSBMdGQwHhcNMTQwNDIyMTUzNDA0WhcNMjQwNDE5MTUzNDA0WjBF +MQswCQYDVQQGEwJBVTETMBEGA1UECAwKU29tZS1TdGF0ZTEhMB8GA1UECgwYSW50 +ZXJuZXQgV2lkZ2l0cyBQdHkgTHRkMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIB +CgKCAQEAuk6dmZnMvVxykNidNjbIwXM3ShhMpwCvUmWwpybFAIqhtNTuGJF9Ikp5 +kzB+ThQV1onK8O8YarNGQx+MOISEnlJ5npj3Atp33pKGHRn69lHKGVfJvRbN4A90 +1hTueYsELzfPV2YWm4c6nQiXRT6Cy0yaw/DE8fBTHzAiE9+/XGPsjn5VPv8H6Wa1 +f/d5FblE+RtHP6YpRo9Jh3XAn3iC9fVr8rblS4rk7ev8LfH/yIG2wRVOEPC6lYfu +MEIwPpxKV0c3Z6lqtMOgC5dgzWjrbItnQfB0JaIzSFMMxDhNCJocQRJDQ+0jmj+K +rMGB1QRZlVLZxx0xnv38G0GyfFMv8QIDAQABo1AwTjAdBgNVHQ4EFgQUcxEj7X26 +poFDa0lw40aAKIqyNp0wHwYDVR0jBBgwFoAUcxEj7X26poFDa0lw40aAKIqyNp0w 
+DAYDVR0TBAUwAwEB/zANBgkqhkiG9w0BAQsFAAOCAQEAQe6RUCqTYf0Ns8fKfAEb +QSxZKqCst02oC0F3Gm0opWiUetxZqmAYTAjztmlRFIw7hgF/P95SY1ujGLZmiAlU +poOTjQ/i7MvjkXPVCo92izwXi65qRmJGbjduIirOAYtBmBmm3qS9BmoDlLQMVNYn +bwFImc9ar0h+o3/VH1hry+2vEVikXiKK5uKZI6B7ejNYfAWydq6ilzfKIh75W852 +OSbKt3NB/BTZZUdCvK6+B+MoeuzQHDO0/QKBEBfaKFeJki3mdyzFlNbYio1z00rM +E2zl3kh9gkZnMuV1uzHdfKJbtTcNn4hCls5x7T21jn4joADHaVez8FloykBUABu3 +qw== +-----END CERTIFICATE----- +''' + +IDENTITY_NEW_STYLE_CERTS = { + 'identity-service:0': { + 'keystone/0': { + 'ssl_cert_test-cn': 'keystone_provided_cert', + 'ssl_key_test-cn': 'keystone_provided_key', + } + } +} + +IDENTITY_OLD_STYLE_CERTS = { + 'identity-service:0': { + 'keystone/0': { + 'ssl_cert': 'keystone_provided_cert', + 'ssl_key': 'keystone_provided_key', + } + } +} + + +class ApacheUtilsTests(TestCase): + def setUp(self): + super(ApacheUtilsTests, self).setUp() + [self._patch(m) for m in [ + 'log', + 'config_get', + 'relation_get', + 'relation_ids', + 'relation_list', + 'host', + ]] + + def _patch(self, method): + _m = patch.object(apache_utils, method) + mock = _m.start() + self.addCleanup(_m.stop) + setattr(self, method, mock) + + def test_get_cert_from_config(self): + '''Ensure cert and key from charm config override relation''' + self.config_get.side_effect = [ + 'some_ca_cert', # config_get('ssl_cert') + 'some_ca_key', # config_Get('ssl_key') + ] + result = apache_utils.get_cert('test-cn') + self.assertEquals(('some_ca_cert', 'some_ca_key'), result) + + def test_get_ca_cert_from_config(self): + self.config_get.return_value = "some_ca_cert" + self.assertEquals('some_ca_cert', apache_utils.get_ca_cert()) + + def test_get_cert_from_relation(self): + self.config_get.return_value = None + rel = FakeRelation(IDENTITY_NEW_STYLE_CERTS) + self.relation_ids.side_effect = rel.relation_ids + self.relation_list.side_effect = rel.relation_units + self.relation_get.side_effect = rel.get + result = apache_utils.get_cert('test-cn') + self.assertEquals(('keystone_provided_cert', 'keystone_provided_key'), + result) + + def test_get_cert_from_relation_deprecated(self): + self.config_get.return_value = None + rel = FakeRelation(IDENTITY_OLD_STYLE_CERTS) + self.relation_ids.side_effect = rel.relation_ids + self.relation_list.side_effect = rel.relation_units + self.relation_get.side_effect = rel.get + result = apache_utils.get_cert() + self.assertEquals(('keystone_provided_cert', 'keystone_provided_key'), + result) + + def test_get_ca_cert_from_relation(self): + self.config_get.return_value = None + self.relation_ids.side_effect = [['identity-service:0'], + ['identity-credentials:1']] + self.relation_list.return_value = 'keystone/0' + self.relation_get.side_effect = [ + 'keystone_provided_ca', + ] + result = apache_utils.get_ca_cert() + self.relation_ids.assert_has_calls([call('identity-service'), + call('identity-credentials')]) + self.assertEquals('keystone_provided_ca', + result) + + @patch.object(apache_utils.os.path, 'isfile') + def test_retrieve_ca_cert(self, _isfile): + _isfile.return_value = True + with patch_open() as (_open, _file): + _file.read.return_value = cert + self.assertEqual( + apache_utils.retrieve_ca_cert('mycertfile'), + cert) + _open.assert_called_once_with('mycertfile', 'rb') + + @patch.object(apache_utils.os.path, 'isfile') + def test_retrieve_ca_cert_no_file(self, _isfile): + _isfile.return_value = False + with patch_open() as (_open, _file): + self.assertEqual( + apache_utils.retrieve_ca_cert('mycertfile'), + None) + self.assertFalse(_open.called) diff --git 
a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hahelpers/test_cluster_utils.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hahelpers/test_cluster_utils.py new file mode 100644 index 0000000000000000000000000000000000000000..990f1dcb0bfbde735b90b8b8418a82d47659cf25 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hahelpers/test_cluster_utils.py @@ -0,0 +1,590 @@ +from mock import patch, MagicMock, call + +from subprocess import CalledProcessError +from testtools import TestCase + +import charmhelpers.contrib.hahelpers.cluster as cluster_utils + +CRM_STATUS = b''' +Last updated: Thu May 14 14:46:35 2015 +Last change: Thu May 14 14:43:51 2015 via crmd on juju-trusty-machine-1 +Stack: corosync +Current DC: juju-trusty-machine-2 (168108171) - partition with quorum +Version: 1.1.10-42f2063 +3 Nodes configured +4 Resources configured + + +Online: [ juju-trusty-machine-1 juju-trusty-machine-2 juju-trusty-machine-3 ] + + Resource Group: grp_percona_cluster + res_mysql_vip (ocf::heartbeat:IPaddr2): Started juju-trusty-machine-1 + Clone Set: cl_mysql_monitor [res_mysql_monitor] + Started: [ juju-trusty-machine-1 juju-trusty-machine-2 juju-trusty-machine-3 ] +''' + +CRM_DC_NONE = b''' +Last updated: Thu May 14 14:46:35 2015 +Last change: Thu May 14 14:43:51 2015 via crmd on juju-trusty-machine-1 +Stack: corosync +Current DC: NONE +1 Nodes configured, 2 expected votes +0 Resources configured + + +Node node1: UNCLEAN (offline) +''' + + +class ClusterUtilsTests(TestCase): + def setUp(self): + super(ClusterUtilsTests, self).setUp() + [self._patch(m) for m in [ + 'log', + 'relation_ids', + 'relation_list', + 'relation_get', + 'get_unit_hostname', + 'config_get', + 'unit_get', + ]] + + def _patch(self, method): + _m = patch.object(cluster_utils, method) + mock = _m.start() + self.addCleanup(_m.stop) + setattr(self, method, mock) + + def test_is_clustered(self): + '''It determines whether or not a unit is clustered''' + self.relation_ids.return_value = ['ha:0'] + self.relation_list.return_value = ['ha/0'] + self.relation_get.return_value = 'yes' + self.assertTrue(cluster_utils.is_clustered()) + + def test_is_not_clustered(self): + '''It determines whether or not a unit is clustered''' + self.relation_ids.return_value = ['ha:0'] + self.relation_list.return_value = ['ha/0'] + self.relation_get.return_value = None + self.assertFalse(cluster_utils.is_clustered()) + + @patch('subprocess.check_output') + def test_is_crm_dc(self, check_output): + '''It determines its unit is leader''' + self.get_unit_hostname.return_value = 'juju-trusty-machine-2' + check_output.return_value = CRM_STATUS + self.assertTrue(cluster_utils.is_crm_dc()) + + @patch('subprocess.check_output') + def test_is_crm_dc_no_cluster(self, check_output): + '''It is not leader if there is no cluster up''' + def r(*args, **kwargs): + raise CalledProcessError(1, 'crm') + + check_output.side_effect = r + self.assertRaises(cluster_utils.CRMDCNotFound, cluster_utils.is_crm_dc) + + @patch('subprocess.check_output') + def test_is_crm_dc_false(self, check_output): + '''It determines its unit is leader''' + self.get_unit_hostname.return_value = 'juju-trusty-machine-1' + check_output.return_value = CRM_STATUS + self.assertFalse(cluster_utils.is_crm_dc()) + + @patch('subprocess.check_output') + def test_is_crm_dc_current_none(self, check_output): + '''It determines its unit is leader''' + self.get_unit_hostname.return_value = 
'juju-trusty-machine-1' + check_output.return_value = CRM_DC_NONE + self.assertRaises(cluster_utils.CRMDCNotFound, cluster_utils.is_crm_dc) + + @patch('subprocess.check_output') + def test_is_crm_leader(self, check_output): + '''It determines its unit is leader''' + self.get_unit_hostname.return_value = 'node1' + crm = b'resource vip is running on: node1' + check_output.return_value = crm + self.assertTrue(cluster_utils.is_crm_leader('vip')) + + @patch('charmhelpers.core.decorators.time') + @patch('subprocess.check_output') + def test_is_not_leader(self, check_output, mock_time): + '''It determines its unit is not leader''' + self.get_unit_hostname.return_value = 'node1' + crm = b'resource vip is running on: node2' + check_output.return_value = crm + self.assertFalse(cluster_utils.is_crm_leader('some_resource')) + self.assertFalse(mock_time.called) + + @patch('charmhelpers.core.decorators.log') + @patch('charmhelpers.core.decorators.time') + @patch('subprocess.check_output') + def test_is_not_leader_resource_not_exists(self, check_output, mock_time, + mock_log): + '''It determines its unit is not leader''' + self.get_unit_hostname.return_value = 'node1' + check_output.return_value = "resource vip is NOT running" + self.assertRaises(cluster_utils.CRMResourceNotFound, + cluster_utils.is_crm_leader, 'vip') + mock_time.assert_has_calls([call.sleep(2), call.sleep(4), + call.sleep(6)]) + + @patch('charmhelpers.core.decorators.time') + @patch('subprocess.check_output') + def test_is_crm_leader_no_cluster(self, check_output, mock_time): + '''It is not leader if there is no cluster up''' + check_output.side_effect = CalledProcessError(1, 'crm') + self.assertFalse(cluster_utils.is_crm_leader('vip')) + self.assertFalse(mock_time.called) + + @patch.object(cluster_utils, 'is_crm_dc') + def test_is_crm_leader_dc_resource(self, _is_crm_dc): + '''Call out to is_crm_dc''' + cluster_utils.is_crm_leader(cluster_utils.DC_RESOURCE_NAME) + _is_crm_dc.assert_called_with() + + def test_peer_units(self): + '''It lists all peer units for cluster relation''' + peers = ['peer_node/1', 'peer_node/2'] + self.relation_ids.return_value = ['cluster:0'] + self.relation_list.return_value = peers + self.assertEquals(peers, cluster_utils.peer_units()) + + def test_peer_ips(self): + '''Get a dict of peers and their ips''' + peers = { + 'peer_node/1': '10.0.0.1', + 'peer_node/2': '10.0.0.2', + } + + def _relation_get(attr, rid, unit): + return peers[unit] + self.relation_ids.return_value = ['cluster:0'] + self.relation_list.return_value = peers.keys() + self.relation_get.side_effect = _relation_get + self.assertEquals(peers, cluster_utils.peer_ips()) + + @patch('os.getenv') + def test_is_oldest_peer(self, getenv): + '''It detects local unit is the oldest of all peers''' + peers = ['peer_node/1', 'peer_node/2', 'peer_node/3'] + getenv.return_value = 'peer_node/1' + self.assertTrue(cluster_utils.oldest_peer(peers)) + + @patch('os.getenv') + def test_is_not_oldest_peer(self, getenv): + '''It detects local unit is not the oldest of all peers''' + peers = ['peer_node/1', 'peer_node/2', 'peer_node/3'] + getenv.return_value = 'peer_node/2' + self.assertFalse(cluster_utils.oldest_peer(peers)) + + @patch.object(cluster_utils, 'is_crm_leader') + @patch.object(cluster_utils, 'is_clustered') + def test_is_elected_leader_clustered(self, is_clustered, is_crm_leader): + '''It detects it is the eligible leader in a hacluster of units''' + is_clustered.return_value = True + is_crm_leader.return_value = True + 
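+        # Both the clustered check and the CRM leader check are mocked True,
+        # so the unit must be reported as the elected leader.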
self.assertTrue(cluster_utils.is_elected_leader('vip')) + + @patch.object(cluster_utils, 'is_crm_leader') + @patch.object(cluster_utils, 'is_clustered') + def test_not_is_elected_leader_clustered(self, is_clustered, is_crm_leader): + '''It detects it is not the eligible leader in a hacluster of units''' + is_clustered.return_value = True + is_crm_leader.return_value = False + self.assertFalse(cluster_utils.is_elected_leader('vip')) + + @patch.object(cluster_utils, 'oldest_peer') + @patch.object(cluster_utils, 'peer_units') + @patch.object(cluster_utils, 'is_clustered') + def test_is_is_elected_leader_unclustered(self, is_clustered, + peer_units, oldest_peer): + '''It detects it is the eligible leader in non-clustered peer group''' + is_clustered.return_value = False + oldest_peer.return_value = True + self.assertTrue(cluster_utils.is_elected_leader('vip')) + + @patch.object(cluster_utils, 'oldest_peer') + @patch.object(cluster_utils, 'peer_units') + @patch.object(cluster_utils, 'is_clustered') + def test_not_is_elected_leader_unclustered(self, is_clustered, + peer_units, oldest_peer): + '''It detects it is not the eligible leader in non-clustered group''' + is_clustered.return_value = False + oldest_peer.return_value = False + self.assertFalse(cluster_utils.is_elected_leader('vip')) + + def test_https_explict(self): + '''It determines https is available if configured explicitly''' + # config_get('use-https') + self.config_get.return_value = 'yes' + self.assertTrue(cluster_utils.https()) + + def test_https_cert_key_in_config(self): + '''It determines https is available if cert + key in charm config''' + # config_get('use-https') + self.config_get.side_effect = [ + 'no', # config_get('use-https') + 'cert', # config_get('ssl_cert') + 'key', # config_get('ssl_key') + ] + self.assertTrue(cluster_utils.https()) + + def test_https_cert_key_in_identity_relation(self): + '''It determines https is available if cert in identity-service''' + self.config_get.return_value = False + self.relation_ids.return_value = 'identity-service:0' + self.relation_list.return_value = 'keystone/0' + self.relation_get.side_effect = [ + 'yes', # relation_get('https_keystone') + 'cert', # relation_get('ssl_cert') + 'key', # relation_get('ssl_key') + 'ca_cert', # relation_get('ca_cert') + ] + self.assertTrue(cluster_utils.https()) + + def test_https_cert_key_incomplete_identity_relation(self): + '''It determines https unavailable if cert not in identity-service''' + self.config_get.return_value = False + self.relation_ids.return_value = 'identity-service:0' + self.relation_list.return_value = 'keystone/0' + self.relation_get.return_value = None + self.assertFalse(cluster_utils.https()) + + @patch.object(cluster_utils, 'https') + @patch.object(cluster_utils, 'peer_units') + def test_determine_api_port_with_peers(self, peer_units, https): + '''It determines API port in presence of peers''' + peer_units.return_value = ['peer1'] + https.return_value = False + self.assertEquals(9686, cluster_utils.determine_api_port(9696)) + + @patch.object(cluster_utils, 'https') + @patch.object(cluster_utils, 'peer_units') + def test_determine_api_port_nopeers_singlemode(self, peer_units, https): + '''It determines API port with a single unit in singlemode''' + peer_units.return_value = [] + https.return_value = False + port = cluster_utils.determine_api_port(9696, singlenode_mode=True) + self.assertEquals(9686, port) + + @patch.object(cluster_utils, 'is_clustered') + @patch.object(cluster_utils, 'https') + @patch.object(cluster_utils, 
'peer_units') + def test_determine_api_port_clustered(self, peer_units, https, + is_clustered): + '''It determines API port in presence of an hacluster''' + peer_units.return_value = [] + is_clustered.return_value = True + https.return_value = False + self.assertEquals(9686, cluster_utils.determine_api_port(9696)) + + @patch.object(cluster_utils, 'is_clustered') + @patch.object(cluster_utils, 'https') + @patch.object(cluster_utils, 'peer_units') + def test_determine_api_port_clustered_https(self, peer_units, https, + is_clustered): + '''It determines API port in presence of hacluster + https''' + peer_units.return_value = [] + is_clustered.return_value = True + https.return_value = True + self.assertEquals(9676, cluster_utils.determine_api_port(9696)) + + @patch.object(cluster_utils, 'https') + def test_determine_apache_port_https(self, https): + '''It determines haproxy port with https enabled''' + https.return_value = True + self.assertEquals(9696, cluster_utils.determine_apache_port(9696)) + + @patch.object(cluster_utils, 'https') + @patch.object(cluster_utils, 'is_clustered') + def test_determine_apache_port_clustered(self, https, is_clustered): + '''It determines haproxy port with https disabled''' + https.return_value = True + is_clustered.return_value = True + self.assertEquals(9686, cluster_utils.determine_apache_port(9696)) + + @patch.object(cluster_utils, 'peer_units') + @patch.object(cluster_utils, 'https') + @patch.object(cluster_utils, 'is_clustered') + def test_determine_apache_port_nopeers_singlemode(self, https, + is_clustered, + peer_units): + '''It determines haproxy port with a single unit in singlemode''' + peer_units.return_value = [] + https.return_value = False + is_clustered.return_value = False + port = cluster_utils.determine_apache_port(9696, singlenode_mode=True) + self.assertEquals(9686, port) + + @patch.object(cluster_utils, 'valid_hacluster_config') + def test_get_hacluster_config_complete(self, valid_hacluster_config): + '''It fetches all hacluster charm config''' + conf = { + 'ha-bindiface': 'eth1', + 'ha-mcastport': '3333', + 'vip': '10.0.0.1', + 'os-admin-hostname': None, + 'os-public-hostname': None, + 'os-internal-hostname': None, + 'os-access-hostname': None, + } + + valid_hacluster_config.return_value = True + + def _fake_config_get(setting): + return conf[setting] + + self.config_get.side_effect = _fake_config_get + self.assertEquals(conf, cluster_utils.get_hacluster_config()) + + @patch.object(cluster_utils, 'valid_hacluster_config') + def test_get_hacluster_config_incomplete(self, valid_hacluster_config): + '''It raises exception if some hacluster charm config missing''' + conf = { + 'ha-bindiface': 'eth1', + 'ha-mcastport': '3333', + 'vip': None, + 'os-admin-hostname': None, + 'os-public-hostname': None, + 'os-internal-hostname': None, + 'os-access-hostname': None, + } + + valid_hacluster_config.return_value = False + + def _fake_config_get(setting): + return conf[setting] + + self.config_get.side_effect = _fake_config_get + self.assertRaises(cluster_utils.HAIncorrectConfig, + cluster_utils.get_hacluster_config) + + @patch.object(cluster_utils, 'valid_hacluster_config') + def test_get_hacluster_config_with_excludes(self, valid_hacluster_config): + '''It fetches all hacluster charm config''' + conf = { + 'ha-bindiface': 'eth1', + 'ha-mcastport': '3333', + } + valid_hacluster_config.return_value = True + + def _fake_config_get(setting): + return conf[setting] + + self.config_get.side_effect = _fake_config_get + exclude_keys = ['vip', 
'os-admin-hostname', 'os-internal-hostname', + 'os-public-hostname', 'os-access-hostname'] + result = cluster_utils.get_hacluster_config(exclude_keys) + self.assertEquals(conf, result) + + @patch.object(cluster_utils, 'is_clustered') + def test_canonical_url_bare(self, is_clustered): + '''It constructs a URL to host with no https or cluster present''' + self.unit_get.return_value = 'foohost1' + is_clustered.return_value = False + configs = MagicMock() + configs.complete_contexts = MagicMock() + configs.complete_contexts.return_value = [] + url = cluster_utils.canonical_url(configs) + self.assertEquals('http://foohost1', url) + + @patch.object(cluster_utils, 'is_clustered') + def test_canonical_url_https_no_cluster(self, is_clustered): + '''It constructs a URL to host with https and no cluster present''' + self.unit_get.return_value = 'foohost1' + is_clustered.return_value = False + configs = MagicMock() + configs.complete_contexts = MagicMock() + configs.complete_contexts.return_value = ['https'] + url = cluster_utils.canonical_url(configs) + self.assertEquals('https://foohost1', url) + + @patch.object(cluster_utils, 'is_clustered') + def test_canonical_url_https_cluster(self, is_clustered): + '''It constructs a URL to host with https and cluster present''' + self.config_get.return_value = '10.0.0.1' + is_clustered.return_value = True + configs = MagicMock() + configs.complete_contexts = MagicMock() + configs.complete_contexts.return_value = ['https'] + url = cluster_utils.canonical_url(configs) + self.assertEquals('https://10.0.0.1', url) + + @patch.object(cluster_utils, 'is_clustered') + def test_canonical_url_cluster_no_https(self, is_clustered): + '''It constructs a URL to host with no https and cluster present''' + self.config_get.return_value = '10.0.0.1' + self.unit_get.return_value = 'foohost1' + is_clustered.return_value = True + configs = MagicMock() + configs.complete_contexts = MagicMock() + configs.complete_contexts.return_value = [] + url = cluster_utils.canonical_url(configs) + self.assertEquals('http://10.0.0.1', url) + + @patch.object(cluster_utils, 'status_set') + def test_valid_hacluster_config_incomplete(self, status_set): + '''Returns False with incomplete HA config''' + conf = { + 'vip': None, + 'os-admin-hostname': None, + 'os-public-hostname': None, + 'os-internal-hostname': None, + 'os-access-hostname': None, + 'dns-ha': False, + } + + def _fake_config_get(setting): + return conf[setting] + + self.config_get.side_effect = _fake_config_get + self.assertRaises(cluster_utils.HAIncorrectConfig, + cluster_utils.valid_hacluster_config) + + @patch.object(cluster_utils, 'status_set') + def test_valid_hacluster_config_both(self, status_set): + '''Returns False when both VIP and DNS HA are set''' + conf = { + 'vip': '10.0.0.1', + 'os-admin-hostname': None, + 'os-public-hostname': None, + 'os-internal-hostname': None, + 'os-access-hostname': None, + 'dns-ha': True, + } + + def _fake_config_get(setting): + return conf[setting] + + self.config_get.side_effect = _fake_config_get + self.assertRaises(cluster_utils.HAIncorrectConfig, + cluster_utils.valid_hacluster_config) + + @patch.object(cluster_utils, 'status_set') + def test_valid_hacluster_config_vip_ha(self, status_set): + '''Returns True with complete VIP HA config''' + conf = { + 'vip': '10.0.0.1', + 'os-admin-hostname': None, + 'os-public-hostname': None, + 'os-internal-hostname': None, + 'os-access-hostname': None, + 'dns-ha': False, + } + + def _fake_config_get(setting): + return conf[setting] + + 
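+        # Serve charm config from the dict above: a VIP is set and dns-ha is
+        # off, i.e. a complete VIP-based HA configuration.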
self.config_get.side_effect = _fake_config_get + self.assertTrue(cluster_utils.valid_hacluster_config()) + self.assertFalse(status_set.called) + + @patch.object(cluster_utils, 'status_set') + def test_valid_hacluster_config_dns_incomplete(self, status_set): + '''Returns False with incomplete DNS HA config''' + conf = { + 'vip': None, + 'os-admin-hostname': None, + 'os-public-hostname': None, + 'os-internal-hostname': None, + 'os-access-hostname': None, + 'dns-ha': True, + } + + def _fake_config_get(setting): + return conf[setting] + + self.config_get.side_effect = _fake_config_get + self.assertRaises(cluster_utils.HAIncompleteConfig, + cluster_utils.valid_hacluster_config) + + @patch.object(cluster_utils, 'status_set') + def test_valid_hacluster_config_dns_ha(self, status_set): + '''Returns True with complete DNS HA config''' + conf = { + 'vip': None, + 'os-admin-hostname': 'somehostname', + 'os-public-hostname': None, + 'os-internal-hostname': None, + 'os-access-hostname': None, + 'dns-ha': True, + } + + def _fake_config_get(setting): + return conf[setting] + + self.config_get.side_effect = _fake_config_get + self.assertTrue(cluster_utils.valid_hacluster_config()) + self.assertFalse(status_set.called) + + @patch.object(cluster_utils, 'juju_is_leader') + @patch.object(cluster_utils, 'status_set') + @patch.object(cluster_utils.time, 'sleep') + @patch.object(cluster_utils, 'modulo_distribution') + @patch.object(cluster_utils, 'log') + def test_distributed_wait(self, log, modulo_distribution, sleep, + status_set, is_leader): + + # Leader regardless of modulo should not wait + is_leader.return_value = True + cluster_utils.distributed_wait(modulo=9, wait=23) + modulo_distribution.assert_not_called() + sleep.assert_called_with(0) + + # The rest of the tests are non-leader units + is_leader.return_value = False + + def _fake_config_get(setting): + return conf[setting] + + # Uses fallback defaults + conf = { + 'modulo-nodes': None, + 'known-wait': None, + } + self.config_get.side_effect = _fake_config_get + cluster_utils.distributed_wait() + modulo_distribution.assert_called_with(modulo=3, wait=30, + non_zero_wait=True) + + # Uses config values + conf = { + 'modulo-nodes': 7, + 'known-wait': 10, + } + self.config_get.side_effect = _fake_config_get + cluster_utils.distributed_wait() + modulo_distribution.assert_called_with(modulo=7, wait=10, + non_zero_wait=True) + + # Uses passed values + cluster_utils.distributed_wait(modulo=5, wait=45) + modulo_distribution.assert_called_with(modulo=5, wait=45, + non_zero_wait=True) + + @patch.object(cluster_utils, 'relation_ids') + def test_get_managed_services_and_ports(self, relation_ids): + relation_ids.return_value = ['rel:2'] + self.assertEqual( + cluster_utils.get_managed_services_and_ports( + ['apache2', 'haproxy'], + [8067, 4545, 6732]), + (['apache2'], [8057, 4535, 6722])) + self.assertEqual( + cluster_utils.get_managed_services_and_ports( + ['apache2', 'haproxy'], + [8067, 4545, 6732], + external_services=['apache2']), + (['haproxy'], [8057, 4535, 6722])) + + def add_ten(x): + return x + 10 + + self.assertEqual( + cluster_utils.get_managed_services_and_ports( + ['apache2', 'haproxy'], + [8067, 4545, 6732], + port_conv_f=add_ten), + (['apache2'], [8077, 4555, 6742])) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/__init__.py new file mode 100644 index 
0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/apache/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/apache/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/apache/checks/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/apache/checks/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/apache/checks/test_config.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/apache/checks/test_config.py new file mode 100644 index 0000000000000000000000000000000000000000..da080da395488bd9e91945f443bd302e9a0d26a6 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/apache/checks/test_config.py @@ -0,0 +1,110 @@ +# Copyright 2016 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
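+#
+# Unit tests for the Apache hardening config checks; subprocess calls and
+# hardening settings lookups are mocked, so no real Apache installation is
+# required.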
+ +import os +import shutil +import tempfile + +from unittest import TestCase +from mock import patch + +from charmhelpers.contrib.hardening.apache.checks import config + +TEST_TMPDIR = None +APACHE_VERSION_STR = b"""Server version: Apache/2.4.7 (Ubuntu) +Server built: Jan 14 2016 17:45:23 +""" + + +class ApacheConfigTestCase(TestCase): + + def setUp(self): + global TEST_TMPDIR + TEST_TMPDIR = tempfile.mkdtemp() + + def tearDown(self): + shutil.rmtree(TEST_TMPDIR) + + @patch.object(config.subprocess, 'call', lambda *args, **kwargs: 1) + def test_get_audits_apache_not_installed(self): + audits = config.get_audits() + self.assertEqual([], audits) + + @patch.object(config.utils, 'get_settings', lambda x: { + 'common': {'apache_dir': TEST_TMPDIR, + 'traceenable': 'Off'}, + 'hardening': { + 'allowed_http_methods': {'GOGETEM'}, + 'modules_to_disable': {'modfoo'} + } + }) + @patch.object(config.subprocess, 'call', lambda *args, **kwargs: 0) + def test_get_audits_apache_is_installed(self): + audits = config.get_audits() + self.assertEqual(7, len(audits)) + + @patch.object(config.utils, 'get_settings', lambda x: { + 'common': {'apache_dir': TEST_TMPDIR}, + 'hardening': { + 'allowed_http_methods': {'GOGETEM'}, + 'modules_to_disable': {'modfoo'}, + 'traceenable': 'off', + 'servertokens': 'Prod', + 'honor_cipher_order': 'on', + 'cipher_suite': 'ALL:+MEDIUM:+HIGH:!LOW:!MD5:!RC4:!eNULL:!aNULL:!3DES' + } + }) + @patch.object(config, 'subprocess') + def test_ApacheConfContext(self, mock_subprocess): + mock_subprocess.call.return_value = 0 + + with tempfile.NamedTemporaryFile() as ftmp: # noqa + def fake_check_output(cmd, *args, **kwargs): + if cmd[0] == 'apache2': + return APACHE_VERSION_STR + + mock_subprocess.check_output.side_effect = fake_check_output + ctxt = config.ApacheConfContext() + self.assertEqual(ctxt(), { + 'allowed_http_methods': set(['GOGETEM']), + 'apache_icondir': + '/usr/share/apache2/icons/', + 'apache_version': '2.4.7', + 'modules_to_disable': set(['modfoo']), + 'servertokens': 'Prod', + 'traceenable': 'off', + 'honor_cipher_order': 'on', + 'cipher_suite': 'ALL:+MEDIUM:+HIGH:!LOW:!MD5:!RC4:!eNULL:!aNULL:!3DES' + }) + + @patch.object(config.utils, 'get_settings', lambda x: { + 'common': {'apache_dir': TEST_TMPDIR}, + 'hardening': { + 'allowed_http_methods': {'GOGETEM'}, + 'modules_to_disable': {'modfoo'}, + 'traceenable': 'off', + 'servertokens': 'Prod', + 'honor_cipher_order': 'on', + 'cipher_suite': 'ALL:+MEDIUM:+HIGH:!LOW:!MD5:!RC4:!eNULL:!aNULL:!3DES' + } + }) + @patch.object(config.subprocess, 'call', lambda *args, **kwargs: 0) + def test_file_permission_audit(self): + audits = config.get_audits() + settings = config.utils.get_settings('apache') + conf_file_name = 'apache2.conf' + conf_file_path = os.path.join( + settings['common']['apache_dir'], conf_file_name + ) + self.assertEqual(audits[0].paths[0], conf_file_path) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/audits/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/audits/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..30a3e94359e94011cd247de7ade76667346e7379 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/audits/__init__.py @@ -0,0 +1,13 @@ +# Copyright 2016 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/audits/test_apache_audits.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/audits/test_apache_audits.py
new file mode 100644
index 0000000000000000000000000000000000000000..69e7c081c6e7fcd0579f9417b025c66d5f39c976
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/audits/test_apache_audits.py
@@ -0,0 +1,86 @@
+# Copyright 2016 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from mock import call
+from mock import patch
+
+from unittest import TestCase
+from charmhelpers.contrib.hardening.audits import apache
+from subprocess import CalledProcessError
+
+
+class DisabledModuleAuditsTest(TestCase):
+
+    def setUp(self):
+        # NOTE: must be named setUp (not 'setup') or unittest never runs it.
+        super(DisabledModuleAuditsTest, self).setUp()
+        self._patch_obj(apache, 'log')
+
+    def _patch_obj(self, obj, method):
+        _m = patch.object(obj, method)
+        mock = _m.start()
+        # Register the patcher's stop, not the mock's, so the patch is
+        # actually undone after each test.
+        self.addCleanup(_m.stop)
+        setattr(self, method, mock)
+
+    def test_init_string(self):
+        audit = apache.DisabledModuleAudit('foo')
+        self.assertEqual(['foo'], audit.modules)
+
+    def test_init_list(self):
+        audit = apache.DisabledModuleAudit(['foo', 'bar'])
+        self.assertEqual(['foo', 'bar'], audit.modules)
+
+    @patch.object(apache.DisabledModuleAudit, '_get_loaded_modules')
+    def test_ensure_compliance_no_modules(self, mock_loaded_modules):
+        audit = apache.DisabledModuleAudit(None)
+        audit.ensure_compliance()
+        self.assertFalse(mock_loaded_modules.called)
+
+    @patch.object(apache.DisabledModuleAudit, '_get_loaded_modules')
+    @patch.object(apache, 'log', lambda *args, **kwargs: None)
+    def test_ensure_compliance_loaded_modules_raises_ex(self,
+                                                        mock_loaded_modules):
+        mock_loaded_modules.side_effect = CalledProcessError(1, 'test', 'err')
+        audit = apache.DisabledModuleAudit('foo')
+        audit.ensure_compliance()
+
+    @patch.object(apache.DisabledModuleAudit, '_get_loaded_modules')
+    @patch.object(apache.DisabledModuleAudit, '_disable_module')
+    @patch.object(apache, 'log', lambda *args, **kwargs: None)
+    def test_disabled_modules_not_loaded(self, mock_disable_module,
+                                         mock_loaded_modules):
+        mock_loaded_modules.return_value = ['foo']
+        audit = apache.DisabledModuleAudit('bar')
+        audit.ensure_compliance()
+        self.assertFalse(mock_disable_module.called)
+
+    @patch.object(apache.DisabledModuleAudit, '_get_loaded_modules')
+    @patch.object(apache.DisabledModuleAudit, '_disable_module')
+    @patch.object(apache.DisabledModuleAudit, '_restart_apache')
+    @patch.object(apache, 'log', lambda 
*args, **kwargs: None) + def test_disabled_modules_loaded(self, mock_restart_apache, + mock_disable_module, mock_loaded_modules): + mock_loaded_modules.return_value = ['foo', 'bar'] + audit = apache.DisabledModuleAudit('bar') + audit.ensure_compliance() + mock_disable_module.assert_has_calls([call('bar')]) + mock_restart_apache.assert_has_calls([call()]) + + @patch('subprocess.check_output') + def test_get_loaded_modules(self, mock_check_output): + mock_check_output.return_value = (b'Loaded Modules:\n' + b' foo_module (static)\n' + b' bar_module (shared)\n') + audit = apache.DisabledModuleAudit('bar') + result = audit._get_loaded_modules() + self.assertEqual(['foo', 'bar'], result) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/audits/test_apt_audits.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/audits/test_apt_audits.py new file mode 100644 index 0000000000000000000000000000000000000000..fc14b2a55a910fd743da21ad126826edc67d0076 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/audits/test_apt_audits.py @@ -0,0 +1,87 @@ +# Copyright 2016 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
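+#
+# Unit tests for the APT hardening audits (RestrictedPackages and AptConfig);
+# apt cache lookups and package purges are mocked.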
+ +from unittest import TestCase + +from mock import call +from mock import MagicMock +from mock import patch + +from charmhelpers.contrib.hardening.audits import apt +from charmhelpers.fetch import ubuntu_apt_pkg as apt_pkg +from charmhelpers.core import hookenv + + +class RestrictedPackagesTestCase(TestCase): + def setUp(self): + super(RestrictedPackagesTestCase, self).setUp() + + def create_package(self, name, virtual=False): + pkg = MagicMock() + pkg.name = name + pkg.current_ver = '2.0' + if virtual: + pkgver = MagicMock() + pkgver.parent_pkg = self.create_package('foo') + pkg.provides_list = [('virtualfoo', None, pkgver)] + resp = { + 'has_provides': True, + 'has_versions': False, + } + pkg.get.side_effect = resp.get + + return pkg + + @patch.object(apt, 'apt_cache') + @patch.object(apt, 'apt_purge') + @patch.object(apt, 'log', lambda *args, **kwargs: None) + def test_ensure_compliance(self, mock_purge, mock_apt_cache): + pkg = self.create_package('bar') + mock_apt_cache.return_value = {'bar': pkg} + + audit = apt.RestrictedPackages(pkgs=['bar']) + audit.ensure_compliance() + mock_purge.assert_has_calls([call(pkg.name)]) + + @patch.object(apt, 'apt_purge') + @patch.object(apt, 'apt_cache') + @patch.object(apt, 'log', lambda *args, **kwargs: None) + def test_apt_harden_virtual_package(self, mock_apt_cache, mock_apt_purge): + vpkg = self.create_package('virtualfoo', virtual=True) + mock_apt_cache.return_value = {'foo': vpkg} + audit = apt.RestrictedPackages(pkgs=['foo']) + audit.ensure_compliance() + self.assertTrue(mock_apt_cache.called) + mock_apt_purge.assert_has_calls([call('foo')]) + + +class AptConfigTestCase(TestCase): + + @patch.object(apt, 'apt_pkg') + def test_ensure_compliance(self, mock_apt_pkg): + mock_apt_pkg.init.return_value = None + mock_apt_pkg.config.side_effect = {} + mock_apt_pkg.config.get.return_value = None + audit = apt.AptConfig([{'key': 'APT::Get::AllowUnauthenticated', + 'expected': 'false'}]) + audit.ensure_compliance() + self.assertTrue(mock_apt_pkg.config.get.called) + + @patch.object(hookenv, 'log') + def test_verify_config(self, mock_log): + cfg = apt_pkg.config + key, value = list(cfg.items())[0] + audit = apt.AptConfig([{"key": key, "expected": value}]) + audit.ensure_compliance() + self.assertFalse(mock_log.called) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/audits/test_base_audits.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/audits/test_base_audits.py new file mode 100644 index 0000000000000000000000000000000000000000..cf9f2367304d30fca6f48f63b640e77cc51e3d97 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/audits/test_base_audits.py @@ -0,0 +1,52 @@ +# Copyright 2016 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
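+#
+# Unit tests for BaseAudit._take_action covering the default behaviour and
+# the 'unless' parameter given as a boolean or a callback.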
+ +from unittest import TestCase + +from charmhelpers.contrib.hardening.audits import BaseAudit + + +class BaseAuditTestCase(TestCase): + + def setUp(self): + super(BaseAuditTestCase, self).setUp() + + def test_take_action_default(self): + check = BaseAudit() + take_action = check._take_action() + self.assertTrue(take_action) + + def test_take_action_unless_true(self): + check = BaseAudit(unless=True) + take_action = check._take_action() + self.assertFalse(take_action) + + def test_take_action_unless_false(self): + check = BaseAudit(unless=False) + take_action = check._take_action() + self.assertTrue(take_action) + + def test_take_action_unless_callback_false(self): + def callback(): + return False + check = BaseAudit(unless=callback) + take_action = check._take_action() + self.assertTrue(take_action) + + def test_take_action_unless_callback_true(self): + def callback(): + return True + check = BaseAudit(unless=callback) + take_action = check._take_action() + self.assertFalse(take_action) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/audits/test_file_audits.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/audits/test_file_audits.py new file mode 100644 index 0000000000000000000000000000000000000000..8718f6115dbaddeab44d8fe9fb9c9f6ffec17091 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/audits/test_file_audits.py @@ -0,0 +1,334 @@ +# Copyright 2016 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
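+#
+# Unit tests for the file-based hardening audits: permission/ownership
+# checks, SUID/SGID detection, templated files and file content audits.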
+ +import os +import shutil +import tempfile + +from mock import call, patch + +from unittest import TestCase + +from charmhelpers.core import unitdata +from charmhelpers.contrib.hardening.audits import file + + +@patch('os.path.exists') +class BaseFileAuditTestCase(TestCase): + + def setUp(self): + super(BaseFileAuditTestCase, self).setUp() + self._patch_obj(file.BaseFileAudit, 'is_compliant') + self._patch_obj(file.BaseFileAudit, 'comply') + self._patch_obj(file, 'log') + + def _patch_obj(self, obj, method): + _m = patch.object(obj, method) + mock = _m.start() + self.addCleanup(_m.stop) + setattr(self, method, mock) + + def test_ensure_compliance(self, mock_exists): + mock_exists.return_value = False + check = file.BaseFileAudit(paths='/tmp/foo') + check.ensure_compliance() + self.assertFalse(self.comply.called) + + def test_ensure_compliance_in_compliance(self, mock_exists): + mock_exists.return_value = True + self.is_compliant.return_value = True + check = file.BaseFileAudit(paths=['/tmp/foo']) + check.ensure_compliance() + mock_exists.assert_has_calls([call('/tmp/foo')]) + self.is_compliant.assert_has_calls([call('/tmp/foo')]) + self.assertFalse(self.log.called) + self.assertFalse(self.comply.called) + + def test_ensure_compliance_out_of_compliance(self, mock_exists): + mock_exists.return_value = True + self.is_compliant.return_value = False + check = file.BaseFileAudit(paths=['/tmp/foo']) + check.ensure_compliance() + mock_exists.assert_has_calls([call('/tmp/foo')]) + self.is_compliant.assert_has_calls([call('/tmp/foo')]) + self.assertTrue(self.log.called) + self.comply.assert_has_calls([call('/tmp/foo')]) + + +class EasyMock(dict): + __getattr__ = dict.__getitem__ + __setattr__ = dict.__setitem__ + + +class FilePermissionAuditTestCase(TestCase): + def setUp(self): + super(FilePermissionAuditTestCase, self).setUp() + self._patch_obj(file.grp, 'getgrnam') + self._patch_obj(file.pwd, 'getpwnam') + self._patch_obj(file.FilePermissionAudit, '_get_stat') + self.getpwnam.return_value = EasyMock({'pw_name': 'testuser', + 'pw_uid': 1000}) + self.getgrnam.return_value = EasyMock({'gr_name': 'testgroup', + 'gr_gid': 1000}) + self._get_stat.return_value = EasyMock({'st_mode': 0o644, + 'st_uid': 1000, + 'st_gid': 1000}) + + def _patch_obj(self, obj, method): + _m = patch.object(obj, method) + mock = _m.start() + self.addCleanup(_m.stop) + setattr(self, method, mock) + + def test_is_compliant(self): + check = file.FilePermissionAudit(paths=['/foo/bar'], + user='testuser', + group='testgroup', mode=0o644) + compliant = check.is_compliant('/foo/bar') + self.assertTrue(compliant) + + @patch.object(file, 'log', lambda *args, **kwargs: None) + def test_not_compliant_wrong_group(self): + self.getgrnam.return_value = EasyMock({'gr_name': 'testgroup', + 'gr_gid': 222}) + check = file.FilePermissionAudit(paths=['/foo/bar'], user='testuser', + group='testgroup', mode=0o644) + compliant = check.is_compliant('/foo/bar') + self.assertFalse(compliant) + + @patch.object(file, 'log', lambda *args, **kwargs: None) + def test_not_compliant_wrong_user(self): + self.getpwnam.return_value = EasyMock({'pw_name': 'fred', + 'pw_uid': 123}) + check = file.FilePermissionAudit(paths=['/foo/bar'], user='testuser', + group='testgroup', mode=0o644) + compliant = check.is_compliant('/foo/bar') + self.assertFalse(compliant) + + @patch.object(file, 'log', lambda *args, **kwargs: None) + def test_not_compliant_wrong_permissions(self): + self._get_stat.return_value = EasyMock({'st_mode': 0o777, + 'st_uid': 1000, + 'st_gid': 1000}) 
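+        # The mocked stat reports mode 0o777 while the audit expects 0o644,
+        # so the path must be reported as non-compliant.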
+ check = file.FilePermissionAudit(paths=['/foo/bar'], user='testuser', + group='testgroup', mode=0o644) + compliant = check.is_compliant('/foo/bar') + self.assertFalse(compliant) + + @patch('charmhelpers.contrib.hardening.utils.ensure_permissions') + @patch.object(file, 'log', lambda *args, **kwargs: None) + def test_comply(self, _ensure_permissions): + check = file.FilePermissionAudit(paths=['/foo/bar'], user='testuser', + group='testgroup', mode=0o644) + check.comply('/foo/bar') + c = call('/foo/bar', 'testuser', 'testgroup', 0o644) + _ensure_permissions.assert_has_calls([c]) + + +class DirectoryPermissionAuditTestCase(TestCase): + def setUp(self): + super(DirectoryPermissionAuditTestCase, self).setUp() + + @patch('charmhelpers.contrib.hardening.audits.file.os.path.isdir') + @patch.object(file, 'log', lambda *args, **kwargs: None) + def test_is_compliant_not_directory(self, mock_isdir): + mock_isdir.return_value = False + check = file.DirectoryPermissionAudit(paths=['/foo/bar'], + user='testuser', + group='testgroup', mode=0o0700) + self.assertRaises(ValueError, check.is_compliant, '/foo/bar') + + @patch.object(file.FilePermissionAudit, 'is_compliant') + @patch.object(file, 'log', lambda *args, **kwargs: None) + def test_is_compliant_file_not_compliant(self, mock_is_compliant): + mock_is_compliant.return_value = False + tmpdir = tempfile.mkdtemp() + try: + check = file.DirectoryPermissionAudit(paths=[tmpdir], + user='testuser', + group='testgroup', + mode=0o0700) + compliant = check.is_compliant(tmpdir) + self.assertFalse(compliant) + finally: + shutil.rmtree(tmpdir) + + +class NoSUIDGUIDAuditTestCase(TestCase): + def setUp(self): + super(NoSUIDGUIDAuditTestCase, self).setUp() + + @patch.object(file.NoSUIDSGIDAudit, '_get_stat') + @patch.object(file, 'log', lambda *args, **kwargs: None) + def test_is_compliant(self, mock_get_stat): + mock_get_stat.return_value = EasyMock({'st_mode': 0o0644, + 'st_uid': 0, + 'st_gid': 0}) + audit = file.NoSUIDSGIDAudit('/foo/bar') + compliant = audit.is_compliant('/foo/bar') + self.assertTrue(compliant) + + @patch.object(file.NoSUIDSGIDAudit, '_get_stat') + @patch.object(file, 'log', lambda *args, **kwargs: None) + def test_is_noncompliant(self, mock_get_stat): + mock_get_stat.return_value = EasyMock({'st_mode': 0o6644, + 'st_uid': 0, + 'st_gid': 0}) + audit = file.NoSUIDSGIDAudit('/foo/bar') + compliant = audit.is_compliant('/foo/bar') + self.assertFalse(compliant) + + @patch.object(file, 'log') + @patch.object(file, 'check_output') + def test_comply(self, mock_check_output, mock_log): + audit = file.NoSUIDSGIDAudit('/foo/bar') + audit.comply('/foo/bar') + mock_check_output.assert_has_calls([call(['chmod', '-s', '/foo/bar'])]) + self.assertTrue(mock_log.called) + + +class TemplatedFileTestCase(TestCase): + def setUp(self): + super(TemplatedFileTestCase, self).setUp() + self.kv = patch.object(unitdata, 'kv') + self.kv.start() + self.addCleanup(self.kv.stop) + + @patch.object(file.TemplatedFile, 'templates_match') + @patch.object(file.TemplatedFile, 'contents_match') + @patch.object(file.TemplatedFile, 'permissions_match') + @patch.object(file, 'log', lambda *args, **kwargs: None) + def test_is_not_compliant(self, contents_match_, permissions_match_, + templates_match_): + contents_match_.return_value = False + permissions_match_.return_value = False + templates_match_.return_value = False + + f = file.TemplatedFile('/foo/bar', None, '/tmp', 0o0644) + compliant = f.is_compliant('/foo/bar') + self.assertFalse(compliant) + + 
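+    # Mirror of the case above: when contents, permissions and templates all
+    # match, is_compliant() must return True.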
@patch.object(file.TemplatedFile, 'templates_match') + @patch.object(file.TemplatedFile, 'contents_match') + @patch.object(file.TemplatedFile, 'permissions_match') + @patch.object(file, 'log', lambda *args, **kwargs: None) + def test_is_compliant(self, contents_match_, permissions_match_, + templates_match_): + contents_match_.return_value = True + permissions_match_.return_value = True + templates_match_.return_value = True + + f = file.TemplatedFile('/foo/bar', None, '/tmp', 0o0644) + compliant = f.is_compliant('/foo/bar') + self.assertTrue(compliant) + + @patch.object(file.TemplatedFile, 'templates_match') + @patch.object(file.TemplatedFile, 'contents_match') + @patch.object(file.TemplatedFile, 'permissions_match') + @patch.object(file, 'log', lambda *args, **kwargs: None) + def test_template_changes(self, contents_match_, permissions_match_, + templates_match_): + contents_match_.return_value = True + permissions_match_.return_value = True + templates_match_.return_value = False + + f = file.TemplatedFile('/foo/bar', None, '/tmp', 0o0644) + compliant = f.is_compliant('/foo/bar') + self.assertFalse(compliant) + + @patch.object(file, 'render_and_write') + @patch.object(file.utils, 'ensure_permissions') + @patch.object(file, 'log', lambda *args, **kwargs: None) + def test_comply(self, mock_ensure_permissions, mock_render_and_write): + class Context(object): + def __call__(self): + return {} + with tempfile.NamedTemporaryFile() as ftmp: + f = file.TemplatedFile(ftmp.name, Context(), + os.path.dirname(ftmp.name), 0o0644) + f.comply(ftmp.name) + calls = [call(os.path.dirname(ftmp.name), ftmp.name, {})] + mock_render_and_write.assert_has_calls(calls) + mock_ensure_permissions.assert_has_calls([call(ftmp.name, 'root', + 'root', 0o0644)]) + + +CONTENTS_PASS = """Ciphers aes256-ctr,aes192-ctr,aes128-ctr +MACs hmac-sha2-512,hmac-sha2-256,hmac-ripemd160 +KexAlgorithms diffie-hellman-group-exchange-sha256 +""" + + +CONTENTS_FAIL = """Ciphers aes256-ctr,aes192-ctr,aes128-ctr +MACs hmac-sha2-512,hmac-sha2-256,hmac-ripemd160 +KexAlgorithms diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1,diffie-hellman-group-exchange-sha1,diffie-hellman-group1-sha1 +""" + + +class FileContentAuditTestCase(TestCase): + + @patch.object(file, 'log') + def test_audit_contents_pass(self, mock_log): + conditions = {'pass': + [r'^KexAlgorithms\s+diffie-hellman-group-exchange-' + 'sha256$'], + 'fail': [r'^KexAlgorithms\s+diffie-hellman-group-' + 'exchange-sha256.+$']} + with tempfile.NamedTemporaryFile() as ftmp: + filename = ftmp.name + with open(filename, 'w') as fd: + fd.write(CONTENTS_FAIL) + + audit = file.FileContentAudit(filename, conditions) + self.assertFalse(audit.is_compliant(filename)) + + calls = [call("Auditing contents of file '%s'" % filename, + level='DEBUG'), + call("Pattern '^KexAlgorithms\\s+diffie-hellman-group-" + "exchange-sha256$' was expected to pass but instead it " + "failed", level='WARNING'), + call("Pattern '^KexAlgorithms\\s+diffie-hellman-group-" + "exchange-sha256.+$' was expected to fail but instead " + "it passed", level='WARNING'), + call('Checked 2 cases and 0 passed', level='DEBUG')] + mock_log.assert_has_calls(calls) + + @patch.object(file, 'log') + def test_audit_contents_fail(self, mock_log): + conditions = {'pass': + [r'^KexAlgorithms\s+diffie-hellman-group-exchange-' + 'sha256$'], + 'fail': + [r'^KexAlgorithms\s+diffie-hellman-group-exchange-' + 'sha256.+$']} + with tempfile.NamedTemporaryFile() as ftmp: + filename = ftmp.name + with open(filename, 'w') as fd: + 
+ fd.write(CONTENTS_FAIL)
+
+ audit = file.FileContentAudit(filename, conditions)
+ self.assertFalse(audit.is_compliant(filename))
+
+ calls = [call("Auditing contents of file '%s'" % filename,
+ level='DEBUG'),
+ call("Pattern '^KexAlgorithms\\s+diffie-hellman-group-"
+ "exchange-sha256$' was expected to pass but instead "
+ "it failed",
+ level='WARNING'),
+ call("Pattern '^KexAlgorithms\\s+diffie-hellman-group-"
+ "exchange-sha256.+$' was expected to fail but instead "
+ "it passed",
+ level='WARNING'),
+ call('Checked 2 cases and 0 passed', level='DEBUG')]
+ mock_log.assert_has_calls(calls)
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/defaults/apache.yaml b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/defaults/apache.yaml new file mode 100644 index 0000000000000000000000000000000000000000..0f940d4cfa85ca7051dd60a4805d84bb6aebed6d --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/defaults/apache.yaml @@ -0,0 +1,16 @@
+# NOTE: this file contains the default configuration for the 'apache' hardening
+# code. If you want to override any settings you must add them to a file
+# called hardening.yaml in the root directory of your charm using the
+# name 'apache' as the root key followed by any of the following with new
+# values.
+
+common:
+ apache_dir: '/etc/apache2'
+
+hardening:
+ traceenable: 'off'
+ allowed_http_methods: "GET POST"
+ modules_to_disable: [ cgi, cgid ]
+ servertokens: 'Prod'
+ honor_cipher_order: 'on'
+ cipher_suite: 'ALL:+MEDIUM:+HIGH:!LOW:!MD5:!RC4:!eNULL:!aNULL:!3DES'
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/defaults/apache.yaml.schema b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/defaults/apache.yaml.schema new file mode 100644 index 0000000000000000000000000000000000000000..c112137cb45c4b63cb05384145b3edf8c443e2b8 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/defaults/apache.yaml.schema @@ -0,0 +1,12 @@
+# NOTE: this schema must contain all valid keys from its associated defaults
+# file. It is used to validate user-provided overrides.
+common:
+ apache_dir:
+
+hardening:
+ traceenable:
+ allowed_http_methods:
+ modules_to_disable:
+ servertokens:
+ honor_cipher_order:
+ cipher_suite:
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/defaults/mysql.yaml b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/defaults/mysql.yaml new file mode 100644 index 0000000000000000000000000000000000000000..682d22bf3ded32eb1c8d6188486ec4468d9ec457 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/defaults/mysql.yaml @@ -0,0 +1,38 @@
+# NOTE: this file contains the default configuration for the 'mysql' hardening
+# code. If you want to override any settings you must add them to a file
+# called hardening.yaml in the root directory of your charm using the
+# name 'mysql' as the root key followed by any of the following with new
+# values.
+
+hardening:
+ mysql-conf: /etc/mysql/my.cnf
+ hardening-conf: /etc/mysql/conf.d/hardening.cnf
+
+security:
+ # @see http://www.symantec.com/connect/articles/securing-mysql-step-step
+ # @see http://dev.mysql.com/doc/refman/5.7/en/server-options.html#option_mysqld_chroot
+ chroot: None
+
+ # @see http://dev.mysql.com/doc/refman/5.7/en/server-options.html#option_mysqld_safe-user-create
+ safe-user-create: 1
+
+ # @see http://dev.mysql.com/doc/refman/5.7/en/server-options.html#option_mysqld_secure-auth
+ secure-auth: 1
+
+ # @see http://dev.mysql.com/doc/refman/5.7/en/server-options.html#option_mysqld_symbolic-links
+ skip-symbolic-links: 1
+
+ # @see http://dev.mysql.com/doc/refman/5.7/en/server-options.html#option_mysqld_skip-show-database
+ skip-show-database: True
+
+ # @see http://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_local_infile
+ local-infile: 0
+
+ # @see https://dev.mysql.com/doc/refman/5.7/en/server-options.html#option_mysqld_allow-suspicious-udfs
+ allow-suspicious-udfs: 0
+
+ # @see https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_automatic_sp_privileges
+ automatic-sp-privileges: 0
+
+ # @see https://dev.mysql.com/doc/refman/5.7/en/server-options.html#option_mysqld_secure-file-priv
+ secure-file-priv: /tmp
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/defaults/mysql.yaml.schema b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/defaults/mysql.yaml.schema new file mode 100644 index 0000000000000000000000000000000000000000..2edf325c311c6fbb062a072083b4d12cebc3d9c1 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/defaults/mysql.yaml.schema @@ -0,0 +1,15 @@
+# NOTE: this schema must contain all valid keys from its associated defaults
+# file. It is used to validate user-provided overrides.
+hardening:
+ mysql-conf:
+ hardening-conf:
+security:
+ chroot:
+ safe-user-create:
+ secure-auth:
+ skip-symbolic-links:
+ skip-show-database:
+ local-infile:
+ allow-suspicious-udfs:
+ automatic-sp-privileges:
+ secure-file-priv:
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/defaults/os.yaml b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/defaults/os.yaml new file mode 100644 index 0000000000000000000000000000000000000000..9a8627b5ed2803828e1e4d78260c6b5f90cae659 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/defaults/os.yaml @@ -0,0 +1,68 @@
+# NOTE: this file contains the default configuration for the 'os' hardening
+# code. If you want to override any settings you must add them to a file
+# called hardening.yaml in the root directory of your charm using the
+# name 'os' as the root key followed by any of the following with new
+# values.
+
+general:
+ desktop_enable: False # (type:boolean)
+
+environment:
+ extra_user_paths: []
+ umask: 027
+ root_path: /
+
+auth:
+ pw_max_age: 60
+ # discourage password cycling
+ pw_min_age: 7
+ retries: 5
+ lockout_time: 600
+ timeout: 60
+ allow_homeless: False # (type:boolean)
+ pam_passwdqc_enable: True # (type:boolean)
+ pam_passwdqc_options: 'min=disabled,disabled,16,12,8'
+ root_ttys:
+ console
+ tty1
+ tty2
+ tty3
+ tty4
+ tty5
+ tty6
+ uid_min: 1000
+ gid_min: 1000
+ sys_uid_min: 100
+ sys_uid_max: 999
+ sys_gid_min: 100
+ sys_gid_max: 999
+ chfn_restrict:
+
+security:
+ users_allow: []
+ suid_sgid_enforce: True # (type:boolean)
+ # user-defined blacklist and whitelist
+ suid_sgid_blacklist: []
+ suid_sgid_whitelist: []
+ # if this is True, remove any suid/sgid bits from files that were not in the whitelist
+ suid_sgid_dry_run_on_unknown: False # (type:boolean)
+ suid_sgid_remove_from_unknown: False # (type:boolean)
+ # remove packages with known issues
+ packages_clean: True # (type:boolean)
+ packages_list:
+ xinetd
+ inetd
+ ypserv
+ telnet-server
+ rsh-server
+ rsync
+ kernel_enable_module_loading: True # (type:boolean)
+ kernel_enable_core_dump: False # (type:boolean)
+ ssh_tmout: 300
+
+sysctl:
+ kernel_secure_sysrq: 244 # 4 + 16 + 32 + 64 + 128
+ kernel_enable_sysrq: False # (type:boolean)
+ forwarding: False # (type:boolean)
+ ipv6_enable: False # (type:boolean)
+ arp_restricted: True # (type:boolean)
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/defaults/os.yaml.schema b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/defaults/os.yaml.schema new file mode 100644 index 0000000000000000000000000000000000000000..cc3b9c206eae56cbe68826cb76748e2deb9483e1 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/defaults/os.yaml.schema @@ -0,0 +1,43 @@
+# NOTE: this schema must contain all valid keys from its associated defaults
+# file. It is used to validate user-provided overrides.
+general:
+ desktop_enable:
+environment:
+ extra_user_paths:
+ umask:
+ root_path:
+auth:
+ pw_max_age:
+ pw_min_age:
+ retries:
+ lockout_time:
+ timeout:
+ allow_homeless:
+ pam_passwdqc_enable:
+ pam_passwdqc_options:
+ root_ttys:
+ uid_min:
+ gid_min:
+ sys_uid_min:
+ sys_uid_max:
+ sys_gid_min:
+ sys_gid_max:
+ chfn_restrict:
+security:
+ users_allow:
+ suid_sgid_enforce:
+ suid_sgid_blacklist:
+ suid_sgid_whitelist:
+ suid_sgid_dry_run_on_unknown:
+ suid_sgid_remove_from_unknown:
+ packages_clean:
+ packages_list:
+ kernel_enable_module_loading:
+ kernel_enable_core_dump:
+ ssh_tmout:
+sysctl:
+ kernel_secure_sysrq:
+ kernel_enable_sysrq:
+ forwarding:
+ ipv6_enable:
+ arp_restricted:
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/defaults/ssh.yaml b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/defaults/ssh.yaml new file mode 100644 index 0000000000000000000000000000000000000000..cd529bcae1ec00fef2e969f43dc3cf530b46ef9a --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/defaults/ssh.yaml @@ -0,0 +1,49 @@
+# NOTE: this file contains the default configuration for the 'ssh' hardening
+# code. If you want to override any settings you must add them to a file
+# called hardening.yaml in the root directory of your charm using the
+# name 'ssh' as the root key followed by any of the following with new
+# values.
+
+common:
+ service_name: 'ssh'
+ network_ipv6_enable: False # (type:boolean)
+ ports: [22]
+ remote_hosts: []
+
+client:
+ package: 'openssh-client'
+ cbc_required: False # (type:boolean)
+ weak_hmac: False # (type:boolean)
+ weak_kex: False # (type:boolean)
+ roaming: False
+ password_authentication: 'no'
+
+server:
+ host_key_files: ['/etc/ssh/ssh_host_rsa_key', '/etc/ssh/ssh_host_dsa_key',
+ '/etc/ssh/ssh_host_ecdsa_key']
+ cbc_required: False # (type:boolean)
+ weak_hmac: False # (type:boolean)
+ weak_kex: False # (type:boolean)
+ allow_root_with_key: False # (type:boolean)
+ allow_tcp_forwarding: 'no'
+ allow_agent_forwarding: 'no'
+ allow_x11_forwarding: 'no'
+ use_privilege_separation: 'sandbox'
+ listen_to: ['0.0.0.0']
+ use_pam: 'no'
+ package: 'openssh-server'
+ password_authentication: 'no'
+ alive_interval: '600'
+ alive_count: '3'
+ sftp_enable: False # (type:boolean)
+ sftp_group: 'sftponly'
+ sftp_chroot: '/home/%u'
+ deny_users: []
+ allow_users: []
+ deny_groups: []
+ allow_groups: []
+ print_motd: 'no'
+ print_last_log: 'no'
+ use_dns: 'no'
+ max_auth_tries: 2
+ max_sessions: 10
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/defaults/ssh.yaml.schema b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/defaults/ssh.yaml.schema new file mode 100644 index 0000000000000000000000000000000000000000..d05e054bc234015206bb1195152fa9ffd6a33151 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/defaults/ssh.yaml.schema @@ -0,0 +1,42 @@
+# NOTE: this schema must contain all valid keys from its associated defaults
+# file. It is used to validate user-provided overrides.
+common: + service_name: + network_ipv6_enable: + ports: + remote_hosts: +client: + package: + cbc_required: + weak_hmac: + weak_kex: + roaming: + password_authentication: +server: + host_key_files: + cbc_required: + weak_hmac: + weak_kex: + allow_root_with_key: + allow_tcp_forwarding: + allow_agent_forwarding: + allow_x11_forwarding: + use_privilege_separation: + listen_to: + use_pam: + package: + password_authentication: + alive_interval: + alive_count: + sftp_enable: + sftp_group: + sftp_chroot: + deny_users: + allow_users: + deny_groups: + allow_groups: + print_motd: + print_last_log: + use_dns: + max_auth_tries: + max_sessions: diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/host/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/host/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/host/checks/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/host/checks/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/host/checks/test_apt.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/host/checks/test_apt.py new file mode 100644 index 0000000000000000000000000000000000000000..ea7ed8ea729f0e4d6f5f2ddf8561b56f97c299ae --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/host/checks/test_apt.py @@ -0,0 +1,46 @@ +# Copyright 2016 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
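The NOTE headers in the defaults files above all describe the same override mechanism: a charm ships a hardening.yaml in its root directory, keyed by module name ('apache', 'mysql', 'os' or 'ssh'), and the matching .yaml.schema file decides which keys are valid. A hypothetical override for two of the 'ssh' server defaults, parsed with the same PyYAML loader the tests use:

    import textwrap
    import yaml

    # Hypothetical charm-root hardening.yaml; both keys appear in
    # ssh.yaml.schema above, so validation would accept them.
    OVERRIDES = textwrap.dedent("""
        ssh:
          server:
            max_auth_tries: 3
            max_sessions: 5
        """)

    settings = yaml.safe_load(OVERRIDES)
    assert settings['ssh']['server']['max_auth_tries'] == 3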
+ +from unittest import TestCase + +from mock import patch + +from charmhelpers.contrib.hardening.host.checks import apt + + +class AptHardeningTestCase(TestCase): + + @patch.object(apt, 'get_settings', lambda x: { + 'security': {'packages_clean': False} + }) + def test_dont_clean_packages(self): + audits = apt.get_audits() + self.assertEqual(1, len(audits)) + + @patch.object(apt, 'get_settings', lambda x: { + 'security': {'packages_clean': True, + 'packages_list': []} + }) + def test_no_security_packages(self): + audits = apt.get_audits() + self.assertEqual(1, len(audits)) + + @patch.object(apt, 'get_settings', lambda x: { + 'security': {'packages_clean': True, + 'packages_list': ['foo', 'bar']} + }) + def test_restricted_packages(self): + audits = apt.get_audits() + self.assertEqual(2, len(audits)) + self.assertTrue(isinstance(audits[1], apt.RestrictedPackages)) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/host/checks/test_limits.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/host/checks/test_limits.py new file mode 100644 index 0000000000000000000000000000000000000000..0750fdef2b23ac343cd7ac05b08ab4dae784343e --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/host/checks/test_limits.py @@ -0,0 +1,43 @@ +# Copyright 2016 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +from unittest import TestCase + +from mock import patch + +from charmhelpers.contrib.hardening.host.checks import limits + + +class LimitsTestCase(TestCase): + + @patch.object(limits.utils, 'get_settings', + lambda x: {'security': {'kernel_enable_core_dump': False}}) + def test_core_dump_disabled(self): + audits = limits.get_audits() + self.assertEqual(2, len(audits)) + audit = audits[0] + self.assertTrue(isinstance(audit, limits.DirectoryPermissionAudit)) + self.assertEqual('/etc/security/limits.d', audit.paths[0]) + audit = audits[1] + self.assertTrue(isinstance(audit, limits.TemplatedFile)) + self.assertEqual('/etc/security/limits.d/10.hardcore.conf', + audit.paths[0]) + + @patch.object(limits.utils, 'get_settings', lambda x: { + 'security': {'kernel_enable_core_dump': True} + }) + def test_core_dump_enabled(self): + audits = limits.get_audits() + self.assertEqual(1, len(audits)) + self.assertTrue(isinstance(audits[0], limits.DirectoryPermissionAudit)) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/host/checks/test_login.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/host/checks/test_login.py new file mode 100644 index 0000000000000000000000000000000000000000..e3c6fc85224356faa2b49c8121786d8754662c49 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/host/checks/test_login.py @@ -0,0 +1,27 @@ +# Copyright 2016 Canonical Limited. 
+# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +from unittest import TestCase + +from charmhelpers.contrib.hardening.host.checks import login + + +class LoginTestCase(TestCase): + + def test_login(self): + audits = login.get_audits() + self.assertEqual(1, len(audits)) + audit = audits[0] + self.assertTrue(isinstance(audit, login.TemplatedFile)) + self.assertEqual('/etc/login.defs', audit.paths[0]) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/host/checks/test_minimize_access.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/host/checks/test_minimize_access.py new file mode 100644 index 0000000000000000000000000000000000000000..2e3838778907289cbe83b0d7b53e525965ff3ec1 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/host/checks/test_minimize_access.py @@ -0,0 +1,68 @@ +# Copyright 2016 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +from unittest import TestCase + +from mock import patch + +from charmhelpers.contrib.hardening.host.checks import minimize_access + + +class MinimizeAccessTestCase(TestCase): + + @patch.object(minimize_access.utils, 'get_settings', lambda x: + {'environment': {'extra_user_paths': []}, + 'security': {'users_allow': []}}) + def test_default_options(self): + audits = minimize_access.get_audits() + self.assertEqual(3, len(audits)) + + # First audit is to ensure that all folders in the $PATH variable + # are read-only. + self.assertTrue(isinstance(audits[0], minimize_access.ReadOnly)) + self.assertEqual({'/usr/local/sbin', '/usr/local/bin', + '/usr/sbin', '/usr/bin', '/bin'}, audits[0].paths) + + # Second audit is to ensure that the /etc/shadow is only readable + # by the root user. 
+ self.assertTrue(isinstance(audits[1], + minimize_access.FilePermissionAudit)) + self.assertEqual(audits[1].paths[0], '/etc/shadow') + self.assertEqual(audits[1].mode, 0o0600) + + # Last audit is to ensure that only root has access to the su + self.assertTrue(isinstance(audits[2], + minimize_access.FilePermissionAudit)) + self.assertEqual(audits[2].paths[0], '/bin/su') + self.assertEqual(audits[2].mode, 0o0750) + + @patch.object(minimize_access.utils, 'get_settings', lambda x: + {'environment': {'extra_user_paths': []}, + 'security': {'users_allow': ['change_user']}}) + def test_allow_change_user(self): + audits = minimize_access.get_audits() + self.assertEqual(2, len(audits)) + + # First audit is to ensure that all folders in the $PATH variable + # are read-only. + self.assertTrue(isinstance(audits[0], minimize_access.ReadOnly)) + self.assertEqual({'/usr/local/sbin', '/usr/local/bin', + '/usr/sbin', '/usr/bin', '/bin'}, audits[0].paths) + + # Second audit is to ensure that the /etc/shadow is only readable + # by the root user. + self.assertTrue(isinstance(audits[1], + minimize_access.FilePermissionAudit)) + self.assertEqual(audits[1].paths[0], '/etc/shadow') + self.assertEqual(audits[1].mode, 0o0600) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/host/checks/test_pam.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/host/checks/test_pam.py new file mode 100644 index 0000000000000000000000000000000000000000..2879bc87ec127e04222f4fc8e1a7568cf8985cc0 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/host/checks/test_pam.py @@ -0,0 +1,53 @@ +# Copyright 2016 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
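The two minimize_access cases above differ only in security.users_allow: listing 'change_user' drops the /bin/su permission audit, while the read-only PATH audit and the /etc/shadow audit are always present. Restated compactly, using only what the assertions establish:

    # Sketch of the audit count the minimize_access tests pin down.
    def expected_audit_count(settings):
        count = 2  # ReadOnly($PATH folders) + FilePermissionAudit(/etc/shadow)
        if 'change_user' not in settings['security']['users_allow']:
            count += 1  # FilePermissionAudit(/bin/su, mode 0o0750)
        return count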
+ +from unittest import TestCase + +from mock import patch + +from charmhelpers.contrib.hardening.host.checks import pam + + +class PAMTestCase(TestCase): + + @patch.object(pam.utils, 'get_settings', lambda x: { + 'auth': {'pam_passwdqc_enable': True, + 'retries': False} + }) + def test_enable_passwdqc(self): + audits = pam.get_audits() + self.assertEqual(2, len(audits)) + audit = audits[0] + self.assertTrue(isinstance(audit, pam.PasswdqcPAM)) + audit = audits[1] + self.assertTrue(isinstance(audit, pam.DeletedFile)) + self.assertEqual('/usr/share/pam-configs/tally2', audit.paths[0]) + + @patch.object(pam.utils, 'get_settings', lambda x: { + 'auth': {'pam_passwdqc_enable': False, + 'retries': True} + }) + def test_disable_passwdqc(self): + audits = pam.get_audits() + self.assertEqual(1, len(audits)) + self.assertFalse(isinstance(audits[0], pam.PasswdqcPAM)) + + @patch.object(pam.utils, 'get_settings', lambda x: { + 'auth': {'pam_passwdqc_enable': False, + 'retries': True} + }) + def test_auth_retries(self): + audits = pam.get_audits() + self.assertEqual(1, len(audits)) + self.assertTrue(isinstance(audits[0], pam.Tally2PAM)) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/host/checks/test_profile.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/host/checks/test_profile.py new file mode 100644 index 0000000000000000000000000000000000000000..1d7fc99bdb2ec86ecf02bb3b0a3d7b01c179730e --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/host/checks/test_profile.py @@ -0,0 +1,65 @@ +# Copyright 2016 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
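Together the PAM cases encode a mutual exclusion: enabling passwdqc yields the PasswdqcPAM audit plus a DeletedFile audit that clears the stale tally2 pam-config, while disabling it falls back to Tally2PAM when login retries are configured. A compact restatement of that branching, not the real get_audits() body:

    # Sketch: which PAM audits the 'auth' settings select.
    def pam_audits_sketch(auth):
        if auth['pam_passwdqc_enable']:
            return ['PasswdqcPAM',
                    "DeletedFile('/usr/share/pam-configs/tally2')"]
        elif auth['retries']:
            return ['Tally2PAM']
        return []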
+ +import os + +from unittest import TestCase + +from mock import patch + +from charmhelpers.contrib.hardening.host.checks import profile + + +class ProfileTestCase(TestCase): + + def setUp(self): + super(ProfileTestCase, self).setUp() + + os.environ['JUJU_CHARM_DIR'] = '/tmp' + self.addCleanup(lambda: os.environ.pop('JUJU_CHARM_DIR')) + + @patch.object(profile.utils, 'get_settings', lambda x: + {'security': {'kernel_enable_core_dump': False, 'ssh_tmout': False}}) + def test_core_dump_disabled(self): + audits = profile.get_audits() + self.assertEqual(1, len(audits)) + self.assertTrue(isinstance(audits[0], profile.TemplatedFile)) + + @patch.object(profile.utils, 'get_settings', lambda x: { + 'security': {'kernel_enable_core_dump': True, 'ssh_tmout': False} + }) + def test_core_dump_enabled(self): + audits = profile.get_audits() + self.assertEqual(0, len(audits)) + + @patch.object(profile.utils, 'get_settings', lambda x: + {'security': {'kernel_enable_core_dump': True, 'ssh_tmout': False}}) + def test_ssh_tmout_disabled(self): + audits = profile.get_audits() + self.assertEqual(0, len(audits)) + + @patch.object(profile.utils, 'get_settings', lambda x: { + 'security': {'kernel_enable_core_dump': True, 'ssh_tmout': 300} + }) + def test_ssh_tmout_enabled(self): + audits = profile.get_audits() + self.assertEqual(1, len(audits)) + self.assertTrue(isinstance(audits[0], profile.TemplatedFile)) + + @patch.object(profile.utils, 'log', lambda *args, **kwargs: None) + def test_ProfileContext(self): + ctxt = profile.ProfileContext() + self.assertEqual(ctxt(), { + 'ssh_tmout': 300 + }) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/host/checks/test_securetty.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/host/checks/test_securetty.py new file mode 100644 index 0000000000000000000000000000000000000000..4c5d7acd94550bf1e8929e415b7ef05ff96378b2 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/host/checks/test_securetty.py @@ -0,0 +1,27 @@ +# Copyright 2016 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
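The four profile cases reduce to a single gating rule: the profile TemplatedFile audit is only emitted when there is something for the rendered script to manage, that is, core dumps are disabled or an SSH timeout is set. Under that assumption (the real get_audits() may differ in detail):

    # Sketch: when the profile TemplatedFile audit is needed.
    def profile_audit_needed(security):
        return (not security['kernel_enable_core_dump'] or
                bool(security['ssh_tmout']))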
+ +from unittest import TestCase + +from charmhelpers.contrib.hardening.host.checks import securetty + + +class SecureTTYTestCase(TestCase): + + def test_securetty(self): + audits = securetty.get_audits() + self.assertEqual(1, len(audits)) + audit = audits[0] + self.assertTrue(isinstance(audit, securetty.TemplatedFile)) + self.assertEqual('/etc/securetty', audit.paths[0]) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/host/checks/test_suid_guid.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/host/checks/test_suid_guid.py new file mode 100644 index 0000000000000000000000000000000000000000..96f78b28b35ee9a58d9117bb0b3552d6841a69b6 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/host/checks/test_suid_guid.py @@ -0,0 +1,55 @@ +# Copyright 2016 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import tempfile + +from unittest import TestCase + +from mock import call +from mock import patch + +from charmhelpers.contrib.hardening.host.checks import suid_sgid + + +@patch.object(suid_sgid, 'log', lambda *args, **kwargs: None) +class SUIDSGIDTestCase(TestCase): + + @patch.object(suid_sgid.utils, 'get_settings', lambda x: { + 'security': {'suid_sgid_enforce': False} + }) + def test_no_enforcement(self): + audits = suid_sgid.get_audits() + self.assertEqual(0, len(audits)) + + @patch.object(suid_sgid, 'subprocess') + @patch.object(suid_sgid.utils, 'get_settings', lambda x: { + 'security': {'suid_sgid_enforce': True, + 'suid_sgid_remove_from_unknown': True, + 'suid_sgid_blacklist': [], + 'suid_sgid_whitelist': [], + 'suid_sgid_dry_run_on_unknown': True}, + 'environment': {'root_path': '/'} + }) + def test_suid_guid_harden(self, mock_subprocess): + p = mock_subprocess.Popen.return_value + with tempfile.NamedTemporaryFile() as tmp: + p.communicate.return_value = (tmp.name, "stderr") + + audits = suid_sgid.get_audits() + self.assertEqual(2, len(audits)) + cmd = ['find', '/', '-perm', '-4000', '-o', '-perm', '-2000', '-type', + 'f', '!', '-path', '/proc/*', '-print'] + calls = [call(cmd, stderr=mock_subprocess.PIPE, + stdout=mock_subprocess.PIPE)] + mock_subprocess.Popen.assert_has_calls(calls) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/mysql/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/mysql/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/mysql/checks/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/mysql/checks/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git 
a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/mysql/checks/test_config.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/mysql/checks/test_config.py new file mode 100644 index 0000000000000000000000000000000000000000..af015f6aaa1f902fcc97c1e95d31a623246a1fa4 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/mysql/checks/test_config.py @@ -0,0 +1,33 @@ +# Copyright 2016 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +from unittest import TestCase + +from mock import patch + +from charmhelpers.contrib.hardening.mysql.checks import config + + +class MySQLConfigTestCase(TestCase): + + @patch.object(config.subprocess, 'call', lambda *args, **kwargs: 0) + @patch.object(config.utils, 'get_settings', lambda x: { + 'hardening': { + 'mysql-conf': {}, + 'hardening-conf': {} + } + }) + def test_get_audits(self): + audits = config.get_audits() + self.assertEqual(4, len(audits)) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/ssh/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/ssh/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/ssh/checks/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/ssh/checks/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/ssh/checks/test_config.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/ssh/checks/test_config.py new file mode 100644 index 0000000000000000000000000000000000000000..49cea4e3be6f6156d4549c265016fe68cd9c31ca --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/ssh/checks/test_config.py @@ -0,0 +1,27 @@ +# Copyright 2016 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+
+from testtools import TestCase
+
+from mock import patch
+
+from charmhelpers.contrib.hardening.ssh.checks import config
+
+
+class SSHConfigTestCase(TestCase):
+
+ @patch.object(config.utils, 'get_settings', lambda x: {})
+ def test_get_audits(self):
+ audits = config.get_audits()
+ self.assertEqual(4, len(audits))
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/test_defaults.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/test_defaults.py new file mode 100644 index 0000000000000000000000000000000000000000..35baa2bedda1329d37b5a4c08446a6e93ecbfbd8 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/test_defaults.py @@ -0,0 +1,58 @@
+# Copyright 2016 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import os
+import glob
+import yaml
+
+from unittest import TestCase
+
+TEMPLATES_DIR = os.path.join(os.path.dirname(__file__), 'defaults')
+
+
+class DefaultsTestCase(TestCase):
+
+ def setUp(self):
+ super(DefaultsTestCase, self).setUp()
+
+ def get_keys(self, dicto, keys=None):
+ if keys is None:
+ keys = []
+
+ if dicto:
+ if type(dicto) is not dict:
+ raise Exception("Unexpected entry: %s" % dicto)
+
+ for key in dicto.keys():
+ keys.append(key)
+ if type(dicto[key]) is dict:
+ self.get_keys(dicto[key], keys)
+
+ return keys
+
+ def test_defaults(self):
+ defaults_paths = glob.glob('%s/*.yaml' % TEMPLATES_DIR)
+ for defaults in defaults_paths:
+ schema = "%s.schema" % defaults
+ self.assertTrue(os.path.exists(schema))
+ a = yaml.safe_load(open(schema))
+ b = yaml.safe_load(open(defaults))
+ if not a and not b:
+ continue
+
+ # Test that all keys in default are present in their associated
+ # schema.
+ skeys = self.get_keys(a)
+ dkeys = self.get_keys(b)
+ self.assertEqual(set(dkeys).symmetric_difference(skeys), set([]))
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/test_harden.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/test_harden.py new file mode 100644 index 0000000000000000000000000000000000000000..25364350ba9bca1546042c6246d21d4018de02e9 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/test_harden.py @@ -0,0 +1,81 @@
+# Copyright 2016 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
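The symmetric-difference assertion above is the whole contract between a defaults file and its schema: flattened across every nesting level, both must define exactly the same key set. A tiny worked example of the set arithmetic involved:

    # The check passes only when neither side has a key the other lacks.
    dkeys = {'common', 'service_name', 'ports'}
    skeys = {'common', 'service_name', 'ports'}
    assert dkeys.symmetric_difference(skeys) == set()

    skeys.add('orphaned_key')  # a schema-only key now breaks the contract
    assert dkeys.symmetric_difference(skeys) == {'orphaned_key'}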
+ +from mock import patch, call +from unittest import TestCase + +from charmhelpers.contrib.hardening import harden + + +class HardenTestCase(TestCase): + + def setUp(self): + super(HardenTestCase, self).setUp() + + @patch.object(harden, 'log', lambda *args, **kwargs: None) + @patch.object(harden, 'run_apache_checks') + @patch.object(harden, 'run_mysql_checks') + @patch.object(harden, 'run_ssh_checks') + @patch.object(harden, 'run_os_checks') + def test_harden(self, mock_host, mock_ssh, mock_mysql, mock_apache): + mock_host.__name__ = 'host' + mock_ssh.__name__ = 'ssh' + mock_mysql.__name__ = 'mysql' + mock_apache.__name__ = 'apache' + + @harden.harden(overrides=['ssh', 'mysql']) + def foo(arg1, kwarg1=None): + return "done." + + self.assertEqual(foo('anarg', kwarg1='akwarg'), "done.") + self.assertTrue(mock_ssh.called) + self.assertTrue(mock_mysql.called) + self.assertFalse(mock_apache.called) + self.assertFalse(mock_host.called) + + @patch.object(harden, 'log') + @patch.object(harden, 'run_apache_checks') + @patch.object(harden, 'run_mysql_checks') + @patch.object(harden, 'run_ssh_checks') + @patch.object(harden, 'run_os_checks') + def test_harden_logs_work(self, mock_host, mock_ssh, mock_mysql, + mock_apache, mock_log): + mock_host.__name__ = 'host' + mock_ssh.__name__ = 'ssh' + mock_mysql.__name__ = 'mysql' + mock_apache.__name__ = 'apache' + + @harden.harden(overrides=['ssh', 'mysql']) + def foo(arg1, kwarg1=None): + return arg1 + kwarg1 + + mock_log.assert_not_called() + self.assertEqual(foo('anarg', kwarg1='akwarg'), "anargakwarg") + mock_log.assert_any_call("Hardening function 'foo'", level="DEBUG") + + @harden.harden(overrides=['ssh', 'mysql']) + def bar(arg1, kwarg1=None): + return arg1 + kwarg1 + + mock_log.reset_mock() + self.assertEqual(bar("a", kwarg1="b"), "ab") + mock_log.assert_any_call("Hardening function 'bar'", level="DEBUG") + + # check it only logs the function name once + mock_log.reset_mock() + self.assertEqual(bar("a", kwarg1="b"), "ab") + self.assertEqual( + mock_log.call_args_list, + [call("Executing hardening module 'ssh'", level="DEBUG"), + call("Executing hardening module 'mysql'", level="DEBUG")]) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/test_templating.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/test_templating.py new file mode 100644 index 0000000000000000000000000000000000000000..6a2f26d783fe712469e2d11f3f9afbf53b8d7bc0 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/test_templating.py @@ -0,0 +1,310 @@ +# Copyright 2016 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
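These tests exercise the decorator the way charms consume it: wrapping a hook entry point so the selected hardening modules run before the hook body, and logging the wrapped function's name only on its first execution. A usage sketch built from the same calls the tests make:

    from charmhelpers.contrib.hardening import harden

    # 'overrides' narrows hardening to the named modules, as in the tests
    # above; without it the decorator falls back to the charm's configuration.
    @harden.harden(overrides=['ssh', 'os'])
    def config_changed():
        pass  # the normal hook body runs after the ssh/os checks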
+ +import tempfile +import os +import six + +from mock import call, patch +from unittest import TestCase + +from charmhelpers.contrib.hardening import templating +from charmhelpers.contrib.hardening import utils +from charmhelpers.contrib.hardening.audits.file import ( + TemplatedFile, + FileContentAudit, +) +from charmhelpers.contrib.hardening.ssh.checks import ( + config as ssh_config_check +) +from charmhelpers.contrib.hardening.host.checks import ( + sysctl, + securetty, +) +from charmhelpers.contrib.hardening.apache.checks import ( + config as apache_config_check +) +from charmhelpers.contrib.hardening.mysql.checks import ( + config as mysql_config_check +) + + +class TemplatingTestCase(TestCase): + + def setUp(self): + super(TemplatingTestCase, self).setUp() + + os.environ['JUJU_CHARM_DIR'] = '/tmp' + self.pathindex = {} + self.addCleanup(lambda: os.environ.pop('JUJU_CHARM_DIR')) + + def get_renderers(self, audits): + renderers = [] + for a in audits: + if issubclass(a.__class__, TemplatedFile): + renderers.append(a) + + return renderers + + def get_contentcheckers(self, audits): + contentcheckers = [] + for a in audits: + if issubclass(a.__class__, FileContentAudit): + contentcheckers.append(a) + + return contentcheckers + + def render(self, renderers): + for template in renderers: + with patch.object(template, 'pre_write', lambda: None): + with patch.object(template, 'post_write', lambda: None): + with patch.object(template, 'run_service_actions'): + with patch.object(template, 'save_checksum'): + for p in template.paths: + template.comply(p) + + def checkcontents(self, contentcheckers): + for check in contentcheckers: + if check.path not in self.pathindex: + continue + + self.assertTrue(check.is_compliant(self.pathindex[check.path])) + + @patch.object(ssh_config_check, 'lsb_release', + lambda: {'DISTRIB_CODENAME': 'precise'}) + @patch.object(utils, 'ensure_permissions') + @patch.object(templating, 'write') + @patch('charmhelpers.contrib.hardening.audits.file.log') + @patch.object(templating, 'log', lambda *args, **kwargs: None) + @patch.object(utils, 'log', lambda *args, **kwargs: None) + @patch.object(ssh_config_check, 'log', lambda *args, **kwargs: None) + def test_ssh_config_render_and_check_lt_trusty(self, mock_log, mock_write, + mock_ensure_permissions): + audits = ssh_config_check.get_audits() + contentcheckers = self.get_contentcheckers(audits) + renderers = self.get_renderers(audits) + configs = {} + + def write(path, data): + with tempfile.NamedTemporaryFile(delete=False) as ftmp: + if os.path.basename(path) == "ssh_config": + configs['ssh'] = ftmp.name + elif os.path.basename(path) == "sshd_config": + configs['sshd'] = ftmp.name + + if path in self.pathindex: + raise Exception("File already rendered '%s'" % path) + + self.pathindex[path] = ftmp.name + with open(ftmp.name, 'wb') as fd: + fd.write(data) + + mock_write.side_effect = write + self.render(renderers) + self.checkcontents(contentcheckers) + self.assertTrue(mock_write.called) + args_list = mock_write.call_args_list + self.assertEqual('/etc/ssh/ssh_config', args_list[0][0][0]) + self.assertEqual('/etc/ssh/sshd_config', args_list[1][0][0]) + self.assertEqual(mock_write.call_count, 2) + + calls = [call("Auditing contents of file '%s'" % configs['ssh'], + level='DEBUG'), + call('Checked 10 cases and 10 passed', level='DEBUG'), + call("Auditing contents of file '%s'" % configs['sshd'], + level='DEBUG'), + call('Checked 10 cases and 10 passed', level='DEBUG')] + mock_log.assert_has_calls(calls) + + 
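The write() side effect defined in the test above is the backbone of this whole class: instead of letting the renderer touch system paths, every rendered file is diverted to a throwaway temporary file and recorded in self.pathindex so checkcontents() can audit what would have been written. The pattern in isolation:

    import tempfile

    pathindex = {}  # rendered target path -> temporary stand-in

    def fake_write(path, data):
        # Divert the rendered payload away from the real target path.
        with tempfile.NamedTemporaryFile(delete=False) as ftmp:
            pathindex[path] = ftmp.name
        with open(pathindex[path], 'wb') as fd:
            fd.write(data)

    # wired up in each test via: mock_write.side_effect = fake_write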
@patch.object(ssh_config_check, 'lsb_release', + lambda: {'DISTRIB_CODENAME': 'trusty'}) + @patch.object(utils, 'ensure_permissions') + @patch.object(templating, 'write') + @patch('charmhelpers.contrib.hardening.audits.file.log') + @patch.object(templating, 'log', lambda *args, **kwargs: None) + @patch.object(utils, 'log', lambda *args, **kwargs: None) + @patch.object(ssh_config_check, 'log', lambda *args, **kwargs: None) + def test_ssh_config_render_and_check_gte_trusty(self, mock_log, mock_write, + mock_ensure_permissions): + audits = ssh_config_check.get_audits() + contentcheckers = self.get_contentcheckers(audits) + renderers = self.get_renderers(audits) + + def write(path, data): + with tempfile.NamedTemporaryFile(delete=False) as ftmp: + if path in self.pathindex: + raise Exception("File already rendered '%s'" % path) + + self.pathindex[path] = ftmp.name + with open(ftmp.name, 'wb') as fd: + fd.write(data) + + mock_write.side_effect = write + self.render(renderers) + self.checkcontents(contentcheckers) + self.assertTrue(mock_write.called) + args_list = mock_write.call_args_list + self.assertEqual('/etc/ssh/ssh_config', args_list[0][0][0]) + self.assertEqual('/etc/ssh/sshd_config', args_list[1][0][0]) + self.assertEqual(mock_write.call_count, 2) + + mock_log.assert_has_calls([call('Checked 9 cases and 9 passed', + level='DEBUG')]) + + @patch.object(utils, 'ensure_permissions') + @patch.object(templating, 'write') + @patch.object(sysctl, 'log', lambda *args, **kwargs: None) + @patch.object(templating, 'log', lambda *args, **kwargs: None) + @patch.object(utils, 'log', lambda *args, **kwargs: None) + def test_os_sysctl_and_check(self, mock_write, mock_ensure_permissions): + audits = sysctl.get_audits() + contentcheckers = self.get_contentcheckers(audits) + renderers = self.get_renderers(audits) + + def write(path, data): + if path in self.pathindex: + raise Exception("File already rendered '%s'" % path) + + with tempfile.NamedTemporaryFile(delete=False) as ftmp: + self.pathindex[path] = ftmp.name + with open(ftmp.name, 'wb') as fd: + fd.write(data) + + mock_write.side_effect = write + self.render(renderers) + self.checkcontents(contentcheckers) + self.assertTrue(mock_write.called) + args_list = mock_write.call_args_list + self.assertEqual('/etc/sysctl.d/99-juju-hardening.conf', + args_list[0][0][0]) + self.assertEqual(mock_write.call_count, 1) + + @patch.object(utils, 'ensure_permissions') + @patch.object(templating, 'write') + @patch.object(sysctl, 'log', lambda *args, **kwargs: None) + @patch.object(templating, 'log', lambda *args, **kwargs: None) + @patch.object(utils, 'log', lambda *args, **kwargs: None) + def test_os_securetty_and_check(self, mock_write, mock_ensure_permissions): + audits = securetty.get_audits() + contentcheckers = self.get_contentcheckers(audits) + renderers = self.get_renderers(audits) + + def write(path, data): + if path in self.pathindex: + raise Exception("File already rendered '%s'" % path) + + with tempfile.NamedTemporaryFile(delete=False) as ftmp: + self.pathindex[path] = ftmp.name + with open(ftmp.name, 'wb') as fd: + fd.write(data) + + mock_write.side_effect = write + self.render(renderers) + self.checkcontents(contentcheckers) + self.assertTrue(mock_write.called) + args_list = mock_write.call_args_list + self.assertEqual('/etc/securetty', args_list[0][0][0]) + self.assertEqual(mock_write.call_count, 1) + + @patch.object(apache_config_check.utils, 'get_settings', lambda x: { + 'common': {'apache_dir': '/tmp/foo'}, + 'hardening': { + 
'allowed_http_methods': {'GOGETEM'}, + 'modules_to_disable': {'modfoo'}, + 'traceenable': 'off', + 'servertokens': 'Prod', + 'honor_cipher_order': 'on', + 'cipher_suite': 'ALL:+MEDIUM:+HIGH:!LOW:!MD5:!RC4:!eNULL:!aNULL:!3DES' + } + }) + @patch('charmhelpers.contrib.hardening.audits.file.os.path.exists', + lambda *a, **kwa: True) + @patch.object(apache_config_check, 'subprocess') + @patch.object(utils, 'ensure_permissions') + @patch.object(templating, 'write') + @patch.object(templating, 'log', lambda *args, **kwargs: None) + @patch.object(utils, 'log', lambda *args, **kwargs: None) + def test_apache_conf_and_check(self, mock_write, mock_ensure_permissions, + mock_subprocess): + mock_subprocess.call.return_value = 0 + apache_version = b"""Server version: Apache/2.4.7 (Ubuntu) + Server built: Jan 14 2016 17:45:23 + """ + mock_subprocess.check_output.return_value = apache_version + audits = apache_config_check.get_audits() + contentcheckers = self.get_contentcheckers(audits) + renderers = self.get_renderers(audits) + + def write(path, data): + if path in self.pathindex: + raise Exception("File already rendered '%s'" % path) + + with tempfile.NamedTemporaryFile(delete=False) as ftmp: + self.pathindex[path] = ftmp.name + with open(ftmp.name, 'wb') as fd: + fd.write(data) + + mock_write.side_effect = write + self.render(renderers) + self.checkcontents(contentcheckers) + self.assertTrue(mock_write.called) + args_list = mock_write.call_args_list + self.assertEqual('/tmp/foo/mods-available/alias.conf', + args_list[0][0][0]) + self.assertEqual(mock_write.call_count, 2) + + @patch.object(apache_config_check.utils, 'get_settings', lambda x: { + 'security': {}, + 'hardening': { + 'mysql-conf': '/tmp/foo/mysql.cnf', + 'hardening-conf': '/tmp/foo/conf.d/hardening.cnf' + } + }) + @patch('charmhelpers.contrib.hardening.audits.file.os.path.exists', + lambda *a, **kwa: True) + @patch.object(utils, 'ensure_permissions') + @patch.object(templating, 'write') + @patch.object(mysql_config_check.subprocess, 'call', + lambda *args, **kwargs: 0) + @patch.object(templating, 'log', lambda *args, **kwargs: None) + @patch.object(utils, 'log', lambda *args, **kwargs: None) + def test_mysql_conf_and_check(self, mock_write, mock_ensure_permissions): + audits = mysql_config_check.get_audits() + contentcheckers = self.get_contentcheckers(audits) + renderers = self.get_renderers(audits) + + def write(path, data): + if path in self.pathindex: + raise Exception("File already rendered '%s'" % path) + + with tempfile.NamedTemporaryFile(delete=False) as ftmp: + self.pathindex[path] = ftmp.name + with open(ftmp.name, 'wb') as fd: + fd.write(data) + + mock_write.side_effect = write + self.render(renderers) + self.checkcontents(contentcheckers) + self.assertTrue(mock_write.called) + args_list = mock_write.call_args_list + self.assertEqual('/tmp/foo/conf.d/hardening.cnf', + args_list[0][0][0]) + self.assertEqual(mock_write.call_count, 1) + + def tearDown(self): + # Cleanup + for path in six.itervalues(self.pathindex): + os.remove(path) + + super(TemplatingTestCase, self).tearDown() diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/test_utils.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/test_utils.py new file mode 100644 index 0000000000000000000000000000000000000000..2b83895d56e12fc91fec5d40254ce2289c8712ff --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/hardening/test_utils.py @@ 
-0,0 +1,63 @@ +# Copyright 2016 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import six +import tempfile + +from mock import ( + MagicMock, + call, + patch +) +from unittest import TestCase + +from charmhelpers.contrib.hardening import utils + + +class UtilsTestCase(TestCase): + + def setUp(self): + super(UtilsTestCase, self).setUp() + utils.__SETTINGS__ = {} + + @patch.object(utils.grp, 'getgrnam') + @patch.object(utils.pwd, 'getpwnam') + @patch.object(utils, 'os') + @patch.object(utils, 'log', lambda *args, **kwargs: None) + def test_ensure_permissions(self, mock_os, mock_getpwnam, mock_getgrnam): + user = MagicMock() + user.pw_uid = '12' + mock_getpwnam.return_value = user + group = MagicMock() + group.gr_gid = '23' + mock_getgrnam.return_value = group + + with tempfile.NamedTemporaryFile() as tmp: + utils.ensure_permissions(tmp.name, 'testuser', 'testgroup', 0o0440) + + mock_getpwnam.assert_has_calls([call('testuser')]) + mock_getgrnam.assert_has_calls([call('testgroup')]) + mock_os.chown.assert_has_calls([call(tmp.name, '12', '23')]) + mock_os.chmod.assert_has_calls([call(tmp.name, 0o0440)]) + + @patch.object(utils, '_get_user_provided_overrides') + def test_settings_cache(self, mock_get_user_provided_overrides): + mock_get_user_provided_overrides.return_value = {} + self.assertEqual(utils.__SETTINGS__, {}) + self.assertTrue('sysctl' in utils.get_settings('os')) + self.assertEqual(sorted(list(six.iterkeys(utils.__SETTINGS__))), + ['os']) + self.assertTrue('server' in utils.get_settings('ssh')) + self.assertEqual(sorted(list(six.iterkeys(utils.__SETTINGS__))), + ['os', 'ssh']) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/mellanox/test_infiniband.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/mellanox/test_infiniband.py new file mode 100644 index 0000000000000000000000000000000000000000..bc60be7a6306bd4d90b701a1ef9828751ae889a5 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/mellanox/test_infiniband.py @@ -0,0 +1,115 @@ +#!/usr/bin/env python + +from charmhelpers.contrib.mellanox import infiniband + +from mock import patch, call +import unittest + +TO_PATCH = [ + "log", + "INFO", + "apt_install", + "apt_update", + "modprobe", + "network_interfaces" +] + +NETWORK_INTERFACES = [ + 'lo', + 'eth0', + 'eth1', + 'eth2', + 'eth3', + 'eth4', + 'juju-br0', + 'ib0', + 'virbr0', + 'ovs-system', + 'br-int', + 'br-ex', + 'br-data', + 'phy-br-data', + 'int-br-data', + 'br-tun' +] + + +IBSTAT_OUTPUT = """ +CA 'mlx4_0' + CA type: MT4103 + Number of ports: 2 + Firmware version: 2.33.5000 + Hardware version: 0 + Node GUID: 0xe41d2d03000a1120 + System image GUID: 0xe41d2d03000a1123 +""" + + +class InfinibandTest(unittest.TestCase): + + def setUp(self): + for m in TO_PATCH: + setattr(self, m, self._patch(m)) + + def _patch(self, method): + _m = patch('charmhelpers.contrib.mellanox.infiniband.' 
+ method)
+ mock = _m.start()
+ self.addCleanup(_m.stop)
+ return mock
+
+ def test_load_modules(self):
+ infiniband.load_modules()
+
+ self.modprobe.assert_has_calls(map(lambda x: call(x, persist=True),
+ infiniband.REQUIRED_MODULES))
+
+ def test_install_packages(self):
+ infiniband.install_packages()
+
+ self.assertTrue(self.apt_update.called)
+ self.assertTrue(self.apt_install.called)
+
+ @patch("os.path.exists")
+ def test_is_enabled(self, exists):
+ exists.return_value = True
+ self.assertTrue(infiniband.is_enabled())
+
+ @patch("subprocess.check_output")
+ def test_stat(self, check_output):
+ infiniband.stat()
+
+ check_output.assert_called_with(["ibstat"])
+
+ @patch("subprocess.check_output")
+ def test_devices(self, check_output):
+ infiniband.devices()
+
+ check_output.assert_called_with(["ibstat", "-l"])
+
+ @patch("subprocess.check_output")
+ def test_device_info(self, check_output):
+ check_output.return_value = IBSTAT_OUTPUT
+
+ info = infiniband.device_info("mlx4_0")
+
+ self.assertEquals(info.num_ports, "2")
+ self.assertEquals(info.device_type, "MT4103")
+ self.assertEquals(info.fw_ver, "2.33.5000")
+ self.assertEquals(info.hw_ver, "0")
+ self.assertEquals(info.node_guid, "0xe41d2d03000a1120")
+ self.assertEquals(info.sys_guid, "0xe41d2d03000a1123")
+
+ @patch("subprocess.check_output")
+ def test_ipoib_interfaces(self, check_output):
+ self.network_interfaces.return_value = NETWORK_INTERFACES
+
+ ipoib_nic = "ib0"
+
+ def c(*args, **kwargs):
+ if ipoib_nic in args[0]:
+ return "driver: ib_ipoib"
+ else:
+ return "driver: mock"
+
+ check_output.side_effect = c
+ self.assertEquals(infiniband.ipoib_interfaces(), [ipoib_nic])
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/network/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/network/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/network/ovs/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/network/ovs/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/network/ovs/test_ovn.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/network/ovs/test_ovn.py new file mode 100644 index 0000000000000000000000000000000000000000..c039693a4cb93806f4753f61c177cc4600ce2b91 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/network/ovs/test_ovn.py @@ -0,0 +1,122 @@
+import textwrap
+import uuid
+
+import charmhelpers.contrib.network.ovs.ovn as ovn
+
+import tests.utils as test_utils
+
+
+CLUSTER_STATUS = textwrap.dedent("""
+ 0ea6
+ Name: OVN_Northbound
+ Cluster ID: f6a3 (f6a36e77-97bf-4740-b46a-705cbe4fef45)
+ Server ID: 0ea6 (0ea6e785-c2bb-4640-b7a2-85104c11a2c1)
+ Address: ssl:10.219.3.174:6643
+ Status: cluster member
+ Role: follower
+ Term: 3
+ Leader: 22dd
+ Vote: unknown
+
+ Election timer: 1000
+ Log: [2, 10]
+ Entries not yet committed: 0
+ Entries not yet applied: 0
+ Connections: ->f6cf ->22dd <-22dd <-f6cf
+ Servers:
+ 0ea6 (0ea6 at ssl:10.219.3.174:6643) (self)
+ f6cf (f6cf at ssl:10.219.3.64:6643)
+ 22dd (22dd at ssl:10.219.3.137:6643)
+ """)
+
+NORTHD_STATUS_ACTIVE = textwrap.dedent("""
+ Status: active + """) + +NORTHD_STATUS_STANDBY = textwrap.dedent(""" + Status: standby + """) + + +class TestOVN(test_utils.BaseTestCase): + + def test_ovn_appctl(self): + self.patch_object(ovn.utils, '_run') + ovn.ovn_appctl('ovn-northd', ('is-paused',)) + self._run.assert_called_once_with('ovn-appctl', '-t', 'ovn-northd', + 'is-paused') + self._run.reset_mock() + ovn.ovn_appctl('ovnnb_db', ('cluster/status',)) + self._run.assert_called_once_with('ovn-appctl', '-t', + '/var/run/ovn/ovnnb_db.ctl', + 'cluster/status') + self._run.reset_mock() + ovn.ovn_appctl('ovnnb_db', ('cluster/status',), use_ovs_appctl=True) + self._run.assert_called_once_with('ovs-appctl', '-t', + '/var/run/ovn/ovnnb_db.ctl', + 'cluster/status') + self._run.reset_mock() + ovn.ovn_appctl('ovnsb_db', ('cluster/status',), + rundir='/var/run/openvswitch') + self._run.assert_called_once_with('ovn-appctl', '-t', + '/var/run/openvswitch/ovnsb_db.ctl', + 'cluster/status') + + def test_cluster_status(self): + self.patch_object(ovn, 'ovn_appctl') + self.ovn_appctl.return_value = CLUSTER_STATUS + expect = ovn.OVNClusterStatus( + 'OVN_Northbound', + uuid.UUID('f6a36e77-97bf-4740-b46a-705cbe4fef45'), + uuid.UUID('0ea6e785-c2bb-4640-b7a2-85104c11a2c1'), + 'ssl:10.219.3.174:6643', + 'cluster member', + 'follower', + 3, + '22dd', + 'unknown', + 1000, + '[2, 10]', + 0, + 0, + '->f6cf ->22dd <-22dd <-f6cf', + [ + ('0ea6', 'ssl:10.219.3.174:6643'), + ('f6cf', 'ssl:10.219.3.64:6643'), + ('22dd', 'ssl:10.219.3.137:6643'), + ]) + self.assertEquals(ovn.cluster_status('ovnnb_db'), expect) + self.ovn_appctl.assert_called_once_with('ovnnb_db', ('cluster/status', + 'OVN_Northbound'), + rundir=None, + use_ovs_appctl=False) + self.assertFalse(expect.is_cluster_leader) + expect = ovn.OVNClusterStatus( + 'OVN_Northbound', + uuid.UUID('f6a36e77-97bf-4740-b46a-705cbe4fef45'), + uuid.UUID('0ea6e785-c2bb-4640-b7a2-85104c11a2c1'), + 'ssl:10.219.3.174:6643', + 'cluster member', + 'leader', + 3, + 'self', + 'unknown', + 1000, + '[2, 10]', + 0, + 0, + '->f6cf ->22dd <-22dd <-f6cf', + [ + ('0ea6', 'ssl:10.219.3.174:6643'), + ('f6cf', 'ssl:10.219.3.64:6643'), + ('22dd', 'ssl:10.219.3.137:6643'), + ]) + self.assertTrue(expect.is_cluster_leader) + + def test_is_northd_active(self): + self.patch_object(ovn, 'ovn_appctl') + self.ovn_appctl.return_value = NORTHD_STATUS_ACTIVE + self.assertTrue(ovn.is_northd_active()) + self.ovn_appctl.assert_called_once_with('ovn-northd', ('status',)) + self.ovn_appctl.return_value = NORTHD_STATUS_STANDBY + self.assertFalse(ovn.is_northd_active()) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/network/ovs/test_ovs.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/network/ovs/test_ovs.py new file mode 100644 index 0000000000000000000000000000000000000000..9f66cbb0262fc93f348f6f51b8b28f361f564926 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/network/ovs/test_ovs.py @@ -0,0 +1,174 @@ +import mock + +import charmhelpers.contrib.network.ovs as ovs + +import tests.utils as test_utils + + +# NOTE(fnordahl): some functions drectly under the ``contrib.network.ovs`` +# module have their unit tests in the ``test_ovs.py`` module in the +# ``tests.contrib.network`` package. 
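+#
+# A short illustration of the flattening exercised by test__dict_to_vsctl_set
+# below (inferred from the expected tuples in that test, not from upstream
+# documentation): a flat key is expected to yield
+#     ('--', 'set', 'aTable', 'anItem', 'key=value')
+# while a nested dict is expected to yield the ``column:key=value`` form
+#     ('--', 'set', 'aTable', 'anItem', 'otherkey:nestedkey=nestedvalue')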
+ + +class TestOVS(test_utils.BaseTestCase): + + def test__dict_to_vsctl_set(self): + indata = { + 'key': 'value', + 'otherkey': { + 'nestedkey': 'nestedvalue', + }, + } + # due to varying Dict ordering depending on Python version we need + # to be a bit elaborate rather than comparing result directly + result1 = ('--', 'set', 'aTable', 'anItem', 'key=value') + result2 = ('--', 'set', 'aTable', 'anItem', + 'otherkey:nestedkey=nestedvalue') + for setcmd in ovs._dict_to_vsctl_set(indata, 'aTable', 'anItem'): + self.assertTrue(setcmd == result1 or setcmd == result2) + + def test_add_bridge(self): + self.patch_object(ovs.subprocess, 'check_call') + self.patch_object(ovs, 'log') + ovs.add_bridge('test') + self.check_call.assert_called_once_with([ + "ovs-vsctl", "--", "--may-exist", + "add-br", 'test']) + self.assertTrue(self.log.call_count == 1) + + self.check_call.reset_mock() + self.log.reset_mock() + ovs.add_bridge('test', datapath_type='netdev') + self.check_call.assert_called_with([ + "ovs-vsctl", "--", "--may-exist", + "add-br", 'test', "--", "set", + "bridge", "test", "datapath_type=netdev", + ]) + self.assertTrue(self.log.call_count == 2) + + self.check_call.reset_mock() + ovs.add_bridge('test', exclusive=True) + self.check_call.assert_called_once_with([ + "ovs-vsctl", "--", "add-br", 'test']) + + self.check_call.reset_mock() + self.patch_object(ovs, '_dict_to_vsctl_set') + self._dict_to_vsctl_set.return_value = [['--', 'fakeextradata']] + ovs.add_bridge('test', brdata={'fakeinput': None}) + self._dict_to_vsctl_set.assert_called_once_with( + {'fakeinput': None}, 'bridge', 'test') + self.check_call.assert_called_once_with([ + 'ovs-vsctl', '--', '--may-exist', 'add-br', 'test', + '--', 'fakeextradata']) + + def test_add_bridge_port(self): + self.patch_object(ovs.subprocess, 'check_call') + self.patch_object(ovs, 'log') + ovs.add_bridge_port('test', 'eth1') + self.check_call.assert_has_calls([ + mock.call(['ovs-vsctl', '--', '--may-exist', 'add-port', + 'test', 'eth1']), + mock.call(['ip', 'link', 'set', 'eth1', 'up']), + mock.call(['ip', 'link', 'set', 'eth1', 'promisc', 'off']) + ]) + self.assertTrue(self.log.call_count == 1) + + self.check_call.reset_mock() + self.log.reset_mock() + ovs.add_bridge_port('test', 'eth1', promisc=True) + self.check_call.assert_has_calls([ + mock.call(['ovs-vsctl', '--', '--may-exist', 'add-port', + 'test', 'eth1']), + mock.call(['ip', 'link', 'set', 'eth1', 'up']), + mock.call(['ip', 'link', 'set', 'eth1', 'promisc', 'on']) + ]) + self.assertTrue(self.log.call_count == 1) + + self.check_call.reset_mock() + self.log.reset_mock() + ovs.add_bridge_port('test', 'eth1', promisc=None) + self.check_call.assert_has_calls([ + mock.call(['ovs-vsctl', '--', '--may-exist', 'add-port', + 'test', 'eth1']), + mock.call(['ip', 'link', 'set', 'eth1', 'up']), + ]) + self.assertTrue(self.log.call_count == 1) + + self.check_call.reset_mock() + ovs.add_bridge_port('test', 'eth1', exclusive=True, linkup=False) + self.check_call.assert_has_calls([ + mock.call(['ovs-vsctl', '--', 'add-port', 'test', 'eth1']), + mock.call(['ip', 'link', 'set', 'eth1', 'promisc', 'off']) + ]) + + self.check_call.reset_mock() + self.patch_object(ovs, '_dict_to_vsctl_set') + self._dict_to_vsctl_set.return_value = [['--', 'fakeextradata']] + ovs.add_bridge_port('test', 'eth1', ifdata={'fakeinput': None}) + self._dict_to_vsctl_set.assert_called_once_with( + {'fakeinput': None}, 'Interface', 'eth1') + self.check_call.assert_has_calls([ + mock.call(['ovs-vsctl', '--', '--may-exist', 'add-port', + 'test', 
'eth1', '--', 'fakeextradata']), + mock.call(['ip', 'link', 'set', 'eth1', 'up']), + mock.call(['ip', 'link', 'set', 'eth1', 'promisc', 'off']) + ]) + self._dict_to_vsctl_set.reset_mock() + self.check_call.reset_mock() + ovs.add_bridge_port('test', 'eth1', portdata={'fakeportinput': None}) + self._dict_to_vsctl_set.assert_called_once_with( + {'fakeportinput': None}, 'Port', 'eth1') + self.check_call.assert_has_calls([ + mock.call(['ovs-vsctl', '--', '--may-exist', 'add-port', + 'test', 'eth1', '--', 'fakeextradata']), + mock.call(['ip', 'link', 'set', 'eth1', 'up']), + mock.call(['ip', 'link', 'set', 'eth1', 'promisc', 'off']) + ]) + + def test_ovs_appctl(self): + self.patch_object(ovs.subprocess, 'check_output') + ovs.ovs_appctl('ovs-vswitchd', ('ofproto/list',)) + self.check_output.assert_called_once_with( + ['ovs-appctl', '-t', 'ovs-vswitchd', 'ofproto/list'], + universal_newlines=True) + + def test_add_bridge_bond(self): + self.patch_object(ovs.subprocess, 'check_call') + self.patch_object(ovs, '_dict_to_vsctl_set') + self._dict_to_vsctl_set.return_value = [['--', 'fakekey=fakevalue']] + portdata = { + 'bond-mode': 'balance-tcp', + 'lacp': 'active', + 'other-config': { + 'lacp-time': 'fast', + }, + } + ifdatamap = { + 'eth0': { + 'type': 'dpdk', + 'mtu-request': '9000', + 'options': { + 'dpdk-devargs': '0000:01:00.0', + }, + }, + 'eth1': { + 'type': 'dpdk', + 'mtu-request': '9000', + 'options': { + 'dpdk-devargs': '0000:02:00.0', + }, + }, + } + ovs.add_bridge_bond('br-ex', 'bond42', ['eth0', 'eth1'], + portdata, ifdatamap) + self._dict_to_vsctl_set.assert_has_calls([ + mock.call(portdata, 'port', 'bond42'), + mock.call(ifdatamap['eth0'], 'Interface', 'eth0'), + mock.call(ifdatamap['eth1'], 'Interface', 'eth1'), + ], any_order=True) + self.check_call.assert_called_once_with([ + 'ovs-vsctl', + '--', '--may-exist', 'add-bond', 'br-ex', 'bond42', 'eth0', 'eth1', + '--', 'fakekey=fakevalue', + '--', 'fakekey=fakevalue', + '--', 'fakekey=fakevalue']) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/network/ovs/test_ovsdb.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/network/ovs/test_ovsdb.py new file mode 100644 index 0000000000000000000000000000000000000000..6ecc7085ac47a4e6f8ede01e77f9cf91c53fa100 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/network/ovs/test_ovsdb.py @@ -0,0 +1,123 @@ +import mock +import textwrap +import uuid + +import charmhelpers.contrib.network.ovs.ovsdb as ovsdb + +import tests.utils as test_utils + + +VSCTL_BRIDGE_TBL = textwrap.dedent(""" + {"data":[[["uuid","1e21ba48-61ff-4b32-b35e-cb80411da351"], + ["set",[]],["set",[]],"0000a0369fdd3890","","", + ["map",[["charm-ovn-chassis","managed"],["other","value"]]], + ["set",[]],["set",[]],["map",[]],["set",[]],false,["set",[]], + "br-test",["set",[]],["map",[]],["set", + [["uuid","617f9359-77e2-41be-8af6-4c44e7a6bcc3"], + ["uuid","da840476-8809-4107-8733-591f4696f056"]]], + ["set",[]],false,["map",[]],["set",[]],["map",[]],false], + [["uuid","bb685b0f-a383-40a1-b7a5-b5c2066bfa42"], + ["set",[]],["set",[]],"00000e5b68bba140","","", + ["map",[]],"secure",["set",[]],["map",[]],["set",[]],false, + ["set",[]],"br-int",["set",[]],["map",[["disable-in-band","true"]]], + ["set",[["uuid","07f4c231-9fd2-49b0-a558-5b69d657fdb0"], + ["uuid","8bbd2441-866f-4317-a284-09491702776c"], + ["uuid","d9e9c081-6482-4006-b7d6-239182b56c2e"]]], + ["set",[]],false,["map",[]],["set",[]],["map",[]],false]], + 
"headings":["_uuid","auto_attach","controller","datapath_id", + "datapath_type","datapath_version","external_ids","fail_mode", + "flood_vlans","flow_tables","ipfix","mcast_snooping_enable", + "mirrors","name","netflow","other_config","ports","protocols", + "rstp_enable","rstp_status","sflow","status","stp_enable"]} + """) + + +class TestSimpleOVSDB(test_utils.BaseTestCase): + + def patch_target(self, attr, return_value=None): + mocked = mock.patch.object(self.target, attr) + self._patches[attr] = mocked + started = mocked.start() + started.return_value = return_value + self._patches_start[attr] = started + setattr(self, attr, started) + + def test___init__(self): + with self.assertRaises(RuntimeError): + self.target = ovsdb.SimpleOVSDB('atool') + with self.assertRaises(AttributeError): + self.target = ovsdb.SimpleOVSDB('ovs-vsctl') + self.target.unknown_table.find() + + def test__find_tbl(self): + self.target = ovsdb.SimpleOVSDB('ovs-vsctl') + self.patch_object(ovsdb.utils, '_run') + self._run.return_value = VSCTL_BRIDGE_TBL + self.maxDiff = None + expect = { + '_uuid': uuid.UUID('1e21ba48-61ff-4b32-b35e-cb80411da351'), + 'auto_attach': [], + 'controller': [], + 'datapath_id': '0000a0369fdd3890', + 'datapath_type': '', + 'datapath_version': '', + 'external_ids': { + 'charm-ovn-chassis': 'managed', + 'other': 'value', + }, + 'fail_mode': [], + 'flood_vlans': [], + 'flow_tables': {}, + 'ipfix': [], + 'mcast_snooping_enable': False, + 'mirrors': [], + 'name': 'br-test', + 'netflow': [], + 'other_config': {}, + 'ports': [['uuid', '617f9359-77e2-41be-8af6-4c44e7a6bcc3'], + ['uuid', 'da840476-8809-4107-8733-591f4696f056']], + 'protocols': [], + 'rstp_enable': False, + 'rstp_status': {}, + 'sflow': [], + 'status': {}, + 'stp_enable': False} + # this in effect also tests the __iter__ front end method + for el in self.target.bridge: + self.assertDictEqual(el, expect) + break + self._run.assert_called_once_with( + 'ovs-vsctl', '-f', 'json', 'find', 'bridge') + self._run.reset_mock() + # this in effect also tests the find front end method + for el in self.target.bridge.find(condition='name=br-test'): + break + self._run.assert_called_once_with( + 'ovs-vsctl', '-f', 'json', 'find', 'bridge', 'name=br-test') + + def test_clear(self): + self.target = ovsdb.SimpleOVSDB('ovs-vsctl') + self.patch_object(ovsdb.utils, '_run') + self.target.interface.clear('1e21ba48-61ff-4b32-b35e-cb80411da351', + 'external_ids') + self._run.assert_called_once_with( + 'ovs-vsctl', 'clear', 'interface', + '1e21ba48-61ff-4b32-b35e-cb80411da351', 'external_ids') + + def test_remove(self): + self.target = ovsdb.SimpleOVSDB('ovs-vsctl') + self.patch_object(ovsdb.utils, '_run') + self.target.interface.remove('1e21ba48-61ff-4b32-b35e-cb80411da351', + 'external_ids', 'other') + self._run.assert_called_once_with( + 'ovs-vsctl', 'remove', 'interface', + '1e21ba48-61ff-4b32-b35e-cb80411da351', 'external_ids', 'other') + + def test_set(self): + self.target = ovsdb.SimpleOVSDB('ovs-vsctl') + self.patch_object(ovsdb.utils, '_run') + self.target.interface.set('1e21ba48-61ff-4b32-b35e-cb80411da351', + 'external_ids:other', 'value') + self._run.assert_called_once_with( + 'ovs-vsctl', 'set', 'interface', + '1e21ba48-61ff-4b32-b35e-cb80411da351', 'external_ids:other=value') diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/network/ovs/test_utils.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/network/ovs/test_utils.py new file mode 100644 index 
0000000000000000000000000000000000000000..8b7e4b1af51df750a0dfab73b88ec8902d113a8f --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/network/ovs/test_utils.py @@ -0,0 +1,13 @@ +import charmhelpers.contrib.network.ovs.utils as utils + +import tests.utils as test_utils + + +class TestUtils(test_utils.BaseTestCase): + + def test__run(self): + self.patch_object(utils.subprocess, 'check_output') + self.check_output.return_value = 'aReturn' + self.assertEquals(utils._run('aArg'), 'aReturn') + self.check_output.assert_called_once_with( + ('aArg',), universal_newlines=True) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/network/test_ip.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/network/test_ip.py new file mode 100644 index 0000000000000000000000000000000000000000..d638abc275440e48a8b5eaafdd323ff7d94d9f74 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/network/test_ip.py @@ -0,0 +1,848 @@ +import subprocess +import unittest + +import mock +import netifaces + +import charmhelpers.contrib.network.ip as net_ip +from mock import patch, MagicMock + +import nose.tools +import six + +if not six.PY3: + builtin_open = '__builtin__.open' + builtin_import = '__builtin__.__import__' +else: + builtin_open = 'builtins.open' + builtin_import = 'builtins.__import__' + +DUMMY_ADDRESSES = { + 'lo': { + 17: [{'peer': '00:00:00:00:00:00', + 'addr': '00:00:00:00:00:00'}], + 2: [{'peer': '127.0.0.1', 'netmask': + '255.0.0.0', 'addr': '127.0.0.1'}], + 10: [{'netmask': 'ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff', + 'addr': '::1'}] + }, + 'eth0': { + 2: [{'addr': '192.168.1.55', + 'broadcast': '192.168.1.255', + 'netmask': '255.255.255.0'}], + 10: [{'addr': '2a01:348:2f4:0:685e:5748:ae62:209f', + 'netmask': 'ffff:ffff:ffff:ffff::'}, + {'addr': 'fe80::3e97:eff:fe8b:1cf7%eth0', + 'netmask': 'ffff:ffff:ffff:ffff::'}, + {'netmask': 'ffff:ffff:ffff:ffff::/64', + 'addr': 'fd2d:dec4:cf59:3c16::1'}, + {'addr': '2001:db8:1::', + 'netmask': 'ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff'}], + 17: [{'addr': '3c:97:0e:8b:1c:f7', + 'broadcast': 'ff:ff:ff:ff:ff:ff'}] + }, + 'eth0:1': { + 2: [{'addr': '192.168.1.56', + 'broadcast': '192.168.1.255', + 'netmask': '255.255.255.0'}], + }, + 'eth1': { + 2: [{'addr': '10.5.0.1', + 'broadcast': '10.5.255.255', + 'netmask': '255.255.0.0'}, + {'addr': '10.6.0.2', + 'broadcast': '10.6.0.255', + 'netmask': '255.255.255.0'}], + 3: [{'addr': 'fe80::3e97:eff:fe8b:1cf7%eth1', + 'netmask': 'ffff:ffff:ffff:ffff::'}], + 17: [{'addr': '3c:97:0e:8b:1c:f7', + 'broadcast': 'ff:ff:ff:ff:ff:ff'}] + }, + 'eth2': { + 10: [{'addr': '3a01:348:2f4:0:685e:5748:ae62:209f', + 'netmask': 'ffff:ffff:ffff:ffff::'}, + {'addr': 'fe80::3e97:edd:fe8b:1cf7%eth0', + 'netmask': 'ffff:ffff:ffff:ffff::'}], + 17: [{'addr': '3c:97:0e:8b:1c:f7', + 'broadcast': 'ff:ff:ff:ff:ff:ff'}] + }, + 'eth2:1': { + 2: [{'addr': '192.168.10.58', + 'broadcast': '192.168.1.255', + 'netmask': '255.255.255.0'}], + }, +} + +IP_OUTPUT = b"""link/ether fa:16:3e:2a:cc:ce brd ff:ff:ff:ff:ff:ff + inet 10.5.16.93/16 brd 10.5.255.255 scope global eth0 + valid_lft forever preferred_lft forever + inet6 2001:db8:1:0:d0cf:528c:23eb:6000/64 scope global + valid_lft forever preferred_lft forever + inet6 2001:db8:1:0:2918:3444:852:5b8a/64 scope global temporary dynamic + valid_lft 86400sec preferred_lft 14400sec + inet6 2001:db8:1:0:f816:3eff:fe2a:ccce/64 scope global dynamic + valid_lft 
86400sec preferred_lft 14400sec
+    inet6 fe80::f816:3eff:fe2a:ccce/64 scope link
+       valid_lft forever preferred_lft forever
+"""
+
+IP2_OUTPUT = b"""link/ether fa:16:3e:2a:cc:ce brd ff:ff:ff:ff:ff:ff
+    inet 10.5.16.93/16 brd 10.5.255.255 scope global eth0
+       valid_lft forever preferred_lft forever
+    inet6 2001:db8:1:0:d0cf:528c:23eb:6000/64 scope global
+       valid_lft forever preferred_lft forever
+    inet6 2001:db8:1:0:2918:3444:852:5b8a/64 scope global temporary dynamic
+       valid_lft 86400sec preferred_lft 14400sec
+    inet6 2001:db8:1:0:f816:3eff:fe2a:ccce/64 scope global mngtmpaddr dynamic
+       valid_lft 86400sec preferred_lft 14400sec
+    inet6 fe80::f816:3eff:fe2a:ccce/64 scope link
+       valid_lft forever preferred_lft forever
+"""
+
+IP_OUTPUT_NO_VALID = b"""link/ether fa:16:3e:2a:cc:ce brd ff:ff:ff:ff:ff:ff
+    inet 10.5.16.93/16 brd 10.5.255.255 scope global eth0
+       valid_lft forever preferred_lft forever
+    inet6 2001:db8:1:0:2918:3444:852:5b8a/64 scope global temporary dynamic
+       valid_lft 86400sec preferred_lft 14400sec
+    inet6 fe80::f816:3eff:fe2a:ccce/64 scope link
+       valid_lft forever preferred_lft forever
+"""
+
+
+class FakeAnswer(object):
+    def __init__(self, ip):
+        self.ip = ip
+
+    def __str__(self):
+        return self.ip
+
+
+class FakeResolver(object):
+    def __init__(self, ip):
+        self.ip = ip
+
+    def query(self, hostname, query_type):
+        if self.ip == '':
+            return []
+        else:
+            return [FakeAnswer(self.ip)]
+
+
+class FakeReverse(object):
+    def from_address(self, address):
+        return '156.94.189.91.in-addr.arpa'
+
+
+class FakeDNSName(object):
+    def __init__(self, dnsname):
+        pass
+
+
+class FakeDNS(object):
+    def __init__(self, ip):
+        self.resolver = FakeResolver(ip)
+        self.reversename = FakeReverse()
+        self.name = MagicMock()
+        self.name.Name = FakeDNSName
+
+
+class IPTest(unittest.TestCase):
+
+    def mock_ifaddresses(self, iface):
+        return DUMMY_ADDRESSES[iface]
+
+    def test_get_address_in_network_with_invalid_net(self):
+        for net in ['192.168.300/22', '192.168.1.0/2a', '2.a']:
+            self.assertRaises(ValueError,
+                              net_ip.get_address_in_network,
+                              net)
+
+    def _test_get_address_in_network(self, expect_ip_addr,
+                                     network, fallback=None, fatal=False):
+
+        def side_effect(iface):
+            return DUMMY_ADDRESSES[iface]
+
+        with mock.patch.object(netifaces, 'interfaces') as interfaces:
+            interfaces.return_value = sorted(DUMMY_ADDRESSES.keys())
+            with mock.patch.object(netifaces, 'ifaddresses') as ifaddresses:
+                ifaddresses.side_effect = side_effect
+                if not fatal:
+                    self.assertEqual(expect_ip_addr,
+                                     net_ip.get_address_in_network(network,
+                                                                   fallback,
+                                                                   fatal))
+                else:
+                    net_ip.get_address_in_network(network, fallback, fatal)
+
+    @mock.patch.object(subprocess, 'call')
+    def test_get_address_in_network_with_none(self, popen):
+        fallback = '10.10.10.10'
+        self.assertEqual(fallback,
+                         net_ip.get_address_in_network(None, fallback))
+        self.assertEqual(None,
+                         net_ip.get_address_in_network(None))
+
+        self.assertRaises(ValueError, self._test_get_address_in_network,
+                          None, None, fatal=True)
+
+    def test_get_address_in_network_ipv4(self):
+        self._test_get_address_in_network('192.168.1.55', '192.168.1.0/24')
+
+    def test_get_address_in_network_ipv4_multi(self):
+        # Assumes that there is an address configured on both networks,
+        # but that the first one is picked.
+        self._test_get_address_in_network('192.168.1.55',
+                                          '192.168.1.0/24 192.168.10.0/24')
+
+    def test_get_address_in_network_ipv4_multi2(self):
+        # Assumes that there is nothing configured on 192.168.11.0/24
+        self._test_get_address_in_network('192.168.10.58',
+                                          '192.168.11.0/24 
192.168.10.0/24') + + def test_get_address_in_network_ipv4_secondary(self): + self._test_get_address_in_network('10.6.0.2', + '10.6.0.0/24') + + def test_get_address_in_network_ipv6(self): + self._test_get_address_in_network('2a01:348:2f4:0:685e:5748:ae62:209f', + '2a01:348:2f4::/64') + + def test_get_address_in_network_with_non_existent_net(self): + self._test_get_address_in_network(None, '172.16.0.0/16') + + def test_get_address_in_network_fallback_works(self): + fallback = '10.10.0.0' + self._test_get_address_in_network(fallback, '172.16.0.0/16', fallback) + + @mock.patch.object(subprocess, 'call') + def test_get_address_in_network_not_found_fatal(self, popen): + self.assertRaises(ValueError, self._test_get_address_in_network, + None, '172.16.0.0/16', fatal=True) + + def test_get_address_in_network_not_found_not_fatal(self): + self._test_get_address_in_network(None, '172.16.0.0/16', fatal=False) + + @patch.object(netifaces, 'ifaddresses') + @patch.object(netifaces, 'interfaces') + def test_get_address_in_network_netmask(self, _interfaces, _ifaddresses): + """ + Validates that get_address_in_network works with a netmask + that uses the format 'ffff:ffff:ffff::/prefixlen' + """ + _interfaces.return_value = DUMMY_ADDRESSES.keys() + _ifaddresses.side_effect = DUMMY_ADDRESSES.__getitem__ + self._test_get_address_in_network('fd2d:dec4:cf59:3c16::1', + 'fd2d:dec4:cf59:3c16::/64', + fatal=False) + + def test_is_address_in_network(self): + self.assertTrue( + net_ip.is_address_in_network( + '192.168.1.0/24', + '192.168.1.1')) + self.assertFalse( + net_ip.is_address_in_network( + '192.168.1.0/24', + '10.5.1.1')) + self.assertRaises(ValueError, net_ip.is_address_in_network, + 'broken', '192.168.1.1') + self.assertRaises(ValueError, net_ip.is_address_in_network, + '192.168.1.0/24', 'hostname') + self.assertTrue( + net_ip.is_address_in_network( + '2a01:348:2f4::/64', + '2a01:348:2f4:0:685e:5748:ae62:209f') + ) + self.assertFalse( + net_ip.is_address_in_network( + '2a01:348:2f4::/64', + 'fdfc:3bd5:210b:cc8d:8c80:9e10:3f07:371') + ) + + @patch.object(netifaces, 'ifaddresses') + @patch.object(netifaces, 'interfaces') + def test_get_iface_for_address(self, _interfaces, _ifaddresses): + def mock_ifaddresses(iface): + return DUMMY_ADDRESSES[iface] + _interfaces.return_value = ['eth0', 'eth1'] + _ifaddresses.side_effect = mock_ifaddresses + self.assertEquals( + net_ip.get_iface_for_address('192.168.1.220'), + 'eth0') + self.assertEquals(net_ip.get_iface_for_address('10.5.20.4'), 'eth1') + self.assertEquals( + net_ip.get_iface_for_address('2a01:348:2f4:0:685e:5748:ae62:210f'), + 'eth0' + ) + self.assertEquals(net_ip.get_iface_for_address('172.4.5.5'), None) + + @patch.object(netifaces, 'ifaddresses') + @patch.object(netifaces, 'interfaces') + def test_get_netmask_for_address(self, _interfaces, _ifaddresses): + def mock_ifaddresses(iface): + return DUMMY_ADDRESSES[iface] + _interfaces.return_value = ['eth0', 'eth1'] + _ifaddresses.side_effect = mock_ifaddresses + self.assertEquals( + net_ip.get_netmask_for_address('192.168.1.220'), + '255.255.255.0') + self.assertEquals( + net_ip.get_netmask_for_address('10.5.20.4'), + '255.255.0.0') + self.assertEquals(net_ip.get_netmask_for_address('172.4.5.5'), None) + self.assertEquals( + net_ip.get_netmask_for_address( + '2a01:348:2f4:0:685e:5748:ae62:210f'), + '64' + ) + self.assertEquals( + net_ip.get_netmask_for_address('2001:db8:1::'), + '128' + ) + + def test_is_ipv6(self): + self.assertFalse(net_ip.is_ipv6('myhost')) + self.assertFalse(net_ip.is_ipv6('172.4.5.5')) 
+ self.assertTrue(net_ip.is_ipv6('2a01:348:2f4:0:685e:5748:ae62:209f')) + + @patch.object(netifaces, 'ifaddresses') + @patch.object(netifaces, 'interfaces') + def test_get_ipv6_addr_no_ipv6(self, _interfaces, _ifaddresses): + _interfaces.return_value = DUMMY_ADDRESSES.keys() + _ifaddresses.side_effect = DUMMY_ADDRESSES.__getitem__ + with nose.tools.assert_raises(Exception): + net_ip.get_ipv6_addr('eth0:1') + + @patch.object(netifaces, 'ifaddresses') + @patch.object(netifaces, 'interfaces') + def test_get_ipv6_addr_no_global_ipv6(self, _interfaces, + _ifaddresses): + DUMMY_ADDRESSES = { + 'eth0': { + 10: [{'addr': 'fe80::3e97:eff:fe8b:1cf7%eth0', + 'netmask': 'ffff:ffff:ffff:ffff::'}], + } + } + _interfaces.return_value = DUMMY_ADDRESSES.keys() + _ifaddresses.side_effect = DUMMY_ADDRESSES.__getitem__ + self.assertRaises(Exception, net_ip.get_ipv6_addr) + + @patch('charmhelpers.contrib.network.ip.get_iface_from_addr') + @patch.object(netifaces, 'ifaddresses') + @patch.object(netifaces, 'interfaces') + def test_get_ipv6_addr_exc_list(self, _interfaces, _ifaddresses, + mock_get_iface_from_addr): + def mock_ifaddresses(iface): + return DUMMY_ADDRESSES[iface] + + _interfaces.return_value = ['eth0', 'eth1'] + _ifaddresses.side_effect = mock_ifaddresses + + result = net_ip.get_ipv6_addr( + exc_list='2a01:348:2f4:0:685e:5748:ae62:209f', + inc_aliases=True, + fatal=False + ) + self.assertEqual([], result) + + @patch('charmhelpers.contrib.network.ip.get_iface_from_addr') + @patch('charmhelpers.contrib.network.ip.subprocess.check_output') + @patch.object(netifaces, 'ifaddresses') + @patch.object(netifaces, 'interfaces') + def test_get_ipv6_addr(self, _interfaces, _ifaddresses, mock_check_out, + mock_get_iface_from_addr): + mock_get_iface_from_addr.return_value = 'eth0' + mock_check_out.return_value = \ + b"inet6 2a01:348:2f4:0:685e:5748:ae62:209f/64 scope global dynamic" + _interfaces.return_value = DUMMY_ADDRESSES.keys() + _ifaddresses.side_effect = DUMMY_ADDRESSES.__getitem__ + result = net_ip.get_ipv6_addr(dynamic_only=False) + self.assertEqual(['2a01:348:2f4:0:685e:5748:ae62:209f'], result) + + @patch('charmhelpers.contrib.network.ip.get_iface_from_addr') + @patch('charmhelpers.contrib.network.ip.subprocess.check_output') + @patch.object(netifaces, 'ifaddresses') + @patch.object(netifaces, 'interfaces') + def test_get_ipv6_addr_global_dynamic(self, _interfaces, _ifaddresses, + mock_check_out, + mock_get_iface_from_addr): + mock_get_iface_from_addr.return_value = 'eth0' + mock_check_out.return_value = \ + b"inet6 2a01:348:2f4:0:685e:5748:ae62:209f/64 scope global dynamic" + _interfaces.return_value = DUMMY_ADDRESSES.keys() + _ifaddresses.side_effect = DUMMY_ADDRESSES.__getitem__ + result = net_ip.get_ipv6_addr(dynamic_only=False) + self.assertEqual(['2a01:348:2f4:0:685e:5748:ae62:209f'], result) + + @patch.object(netifaces, 'interfaces') + def test_get_ipv6_addr_invalid_nic(self, _interfaces): + _interfaces.return_value = DUMMY_ADDRESSES.keys() + self.assertRaises(Exception, net_ip.get_ipv6_addr, 'eth1') + + @patch('charmhelpers.contrib.network.ip.subprocess.check_output') + def test_is_ipv6_disabled(self, mock_check_output): + # verify that the function does look for the right thing + mock_check_output.return_value = """ + Some lines before + net.ipv6.conf.all.disable_ipv6 = 1 + Some lines afterward + """ + self.assertTrue(net_ip.is_ipv6_disabled()) + mock_check_output.assert_called_once_with( + ['sysctl', 'net.ipv6.conf.all.disable_ipv6'], + stderr=subprocess.STDOUT, universal_newlines=True) + # 
if it isn't there, it must return false
+        mock_check_output.return_value = ""
+        self.assertFalse(net_ip.is_ipv6_disabled())
+        # If the syscall returns an error, then return True
+
+        def fake_check_call(*args, **kwargs):
+            # CalledProcessError takes (returncode, cmd), in that order
+            raise subprocess.CalledProcessError(1, ['called'])
+        mock_check_output.side_effect = fake_check_call
+        self.assertTrue(net_ip.is_ipv6_disabled())
+
+    @patch.object(netifaces, 'ifaddresses')
+    @patch.object(netifaces, 'interfaces')
+    def test_get_iface_addr(self, _interfaces, _ifaddresses):
+        _interfaces.return_value = DUMMY_ADDRESSES.keys()
+        _ifaddresses.side_effect = DUMMY_ADDRESSES.__getitem__
+        result = net_ip.get_iface_addr("eth0")
+        self.assertEqual(["192.168.1.55"], result)
+
+    @patch.object(netifaces, 'ifaddresses')
+    @patch.object(netifaces, 'interfaces')
+    def test_get_iface_addr_excaliases(self, _interfaces, _ifaddresses):
+        _interfaces.return_value = DUMMY_ADDRESSES.keys()
+        _ifaddresses.side_effect = DUMMY_ADDRESSES.__getitem__
+        result = net_ip.get_iface_addr("eth0")
+        self.assertEqual(['192.168.1.55'], result)
+
+    @patch.object(netifaces, 'ifaddresses')
+    @patch.object(netifaces, 'interfaces')
+    def test_get_iface_addr_incaliases(self, _interfaces, _ifaddresses):
+        _interfaces.return_value = DUMMY_ADDRESSES.keys()
+        _ifaddresses.side_effect = DUMMY_ADDRESSES.__getitem__
+        result = net_ip.get_iface_addr("eth0", inc_aliases=True)
+        self.assertEqual(['192.168.1.55', '192.168.1.56'], result)
+
+    @patch.object(netifaces, 'ifaddresses')
+    @patch.object(netifaces, 'interfaces')
+    def test_get_iface_addr_exclist(self, _interfaces, _ifaddresses):
+        _interfaces.return_value = DUMMY_ADDRESSES.keys()
+        _ifaddresses.side_effect = DUMMY_ADDRESSES.__getitem__
+        result = net_ip.get_iface_addr("eth0", inc_aliases=True,
+                                       exc_list=['192.168.1.55'])
+        self.assertEqual(['192.168.1.56'], result)
+
+    @patch.object(netifaces, 'ifaddresses')
+    @patch.object(netifaces, 'interfaces')
+    def test_get_iface_addr_mixedaddr(self, _interfaces, _ifaddresses):
+        _interfaces.return_value = DUMMY_ADDRESSES.keys()
+        _ifaddresses.side_effect = DUMMY_ADDRESSES.__getitem__
+        result = net_ip.get_iface_addr("eth2", inc_aliases=True)
+        self.assertEqual(["192.168.10.58"], result)
+
+    @patch.object(netifaces, 'ifaddresses')
+    @patch.object(netifaces, 'interfaces')
+    def test_get_iface_addr_full_interface_path(self, _interfaces,
+                                                _ifaddresses):
+        _interfaces.return_value = DUMMY_ADDRESSES.keys()
+        _ifaddresses.side_effect = DUMMY_ADDRESSES.__getitem__
+        result = net_ip.get_iface_addr("/dev/eth0")
+        self.assertEqual(["192.168.1.55"], result)
+
+    @patch.object(netifaces, 'interfaces')
+    def test_get_iface_addr_invalid_type(self, _interfaces):
+        _interfaces.return_value = DUMMY_ADDRESSES.keys()
+        with nose.tools.assert_raises(Exception):
+            net_ip.get_iface_addr(iface='eth0', inet_type='AF_BOB')
+
+    @patch.object(netifaces, 'ifaddresses')
+    @patch.object(netifaces, 'interfaces')
+    def test_get_iface_addr_invalid_interface(self, _interfaces, _ifaddresses):
+        _interfaces.return_value = DUMMY_ADDRESSES.keys()
+        result = net_ip.get_ipv4_addr("eth3", fatal=False)
+        self.assertEqual([], result)
+
+    @patch.object(netifaces, 'interfaces')
+    def test_get_iface_addr_invalid_interface_fatal(self, _interfaces):
+        _interfaces.return_value = DUMMY_ADDRESSES.keys()
+        with nose.tools.assert_raises(Exception):
+            net_ip.get_ipv4_addr("eth3", fatal=True)
+
+    @patch.object(netifaces, 'interfaces')
+    def test_get_iface_addr_invalid_interface_fatal_incaliases(self,
+                                                               _interfaces):
+        _interfaces.return_value = DUMMY_ADDRESSES.keys()
+        with nose.tools.assert_raises(Exception):
+            net_ip.get_ipv4_addr("eth3", fatal=True, inc_aliases=True)
+
+    @patch.object(netifaces, 'ifaddresses')
+    @patch.object(netifaces, 'interfaces')
+    def test_get_get_iface_addr_interface_has_no_ipv4(self, _interfaces,
+                                                      _ifaddresses):
+
+        # This will raise a KeyError since we are looking for "2"
+        # (actually, netifaces.AF_INET).
+        DUMMY_ADDRESSES = {
+            'eth0': {
+                10: [{'addr': 'fe80::3e97:eff:fe8b:1cf7%eth0',
+                      'netmask': 'ffff:ffff:ffff:ffff::'}],
+            }
+        }
+
+        _interfaces.return_value = DUMMY_ADDRESSES.keys()
+        _ifaddresses.side_effect = DUMMY_ADDRESSES.__getitem__
+
+        result = net_ip.get_ipv4_addr("eth0", fatal=False)
+        self.assertEqual([], result)
+
+    @patch('glob.glob')
+    def test_get_bridges(self, _glob):
+        _glob.return_value = ['/sys/devices/virtual/net/br0/bridge']
+        self.assertEqual(['br0'], net_ip.get_bridges())
+
+    @patch.object(net_ip, 'get_bridges')
+    @patch('glob.glob')
+    def test_get_bridge_nics(self, _glob, _get_bridges):
+        _glob.return_value = ['/sys/devices/virtual/net/br0/brif/eth4',
+                              '/sys/devices/virtual/net/br0/brif/eth5']
+        self.assertEqual(['eth4', 'eth5'], net_ip.get_bridge_nics('br0'))
+
+    @patch.object(net_ip, 'get_bridges')
+    @patch('glob.glob')
+    def test_get_bridge_nics_invalid_br(self, _glob, _get_bridges):
+        _glob.return_value = []
+        self.assertEqual([], net_ip.get_bridge_nics('br1'))
+
+    @patch.object(net_ip, 'get_bridges')
+    @patch.object(net_ip, 'get_bridge_nics')
+    def test_is_bridge_member(self, _get_bridge_nics, _get_bridges):
+        _get_bridges.return_value = ['br0']
+        _get_bridge_nics.return_value = ['eth4', 'eth5']
+        self.assertTrue(net_ip.is_bridge_member('eth4'))
+        self.assertFalse(net_ip.is_bridge_member('eth6'))
+
+    def test_format_ipv6_addr(self):
+        DUMMY_ADDRESS = '2001:db8:1:0:f131:fc84:ea37:7d4'
+        self.assertEquals(net_ip.format_ipv6_addr(DUMMY_ADDRESS),
+                          '[2001:db8:1:0:f131:fc84:ea37:7d4]')
+
+    def test_format_invalid_ipv6_addr(self):
+        INVALID_IPV6_ADDR = 'myhost'
+        self.assertEquals(net_ip.format_ipv6_addr(INVALID_IPV6_ADDR),
+                          None)
+
+    @patch('charmhelpers.contrib.network.ip.get_iface_from_addr')
+    @patch('charmhelpers.contrib.network.ip.subprocess.check_output')
+    @patch('charmhelpers.contrib.network.ip.get_iface_addr')
+    def test_get_ipv6_global_address(self, mock_get_iface_addr, mock_check_out,
+                                     mock_get_iface_from_addr):
+        mock_get_iface_from_addr.return_value = 'eth0'
+        mock_check_out.return_value = IP_OUTPUT
+        scope_global_addr = '2001:db8:1:0:d0cf:528c:23eb:6000'
+        scope_global_dyn_addr = '2001:db8:1:0:f816:3eff:fe2a:ccce'
+        mock_get_iface_addr.return_value = [scope_global_addr,
+                                            scope_global_dyn_addr,
+                                            '2001:db8:1:0:2918:3444:852:5b8a',
+                                            'fe80::f816:3eff:fe2a:ccce%eth0']
+        self.assertEqual([scope_global_addr, scope_global_dyn_addr],
+                         net_ip.get_ipv6_addr(dynamic_only=False))
+
+    @patch('charmhelpers.contrib.network.ip.get_iface_from_addr')
+    @patch('charmhelpers.contrib.network.ip.subprocess.check_output')
+    @patch('charmhelpers.contrib.network.ip.get_iface_addr')
+    def test_get_ipv6_global_dynamic_address(self, mock_get_iface_addr,
+                                             mock_check_out,
+                                             mock_get_iface_from_addr):
+        mock_get_iface_from_addr.return_value = 'eth0'
+        mock_check_out.return_value = IP_OUTPUT
+        scope_global_addr = '2001:db8:1:0:d0cf:528c:23eb:6000'
+        scope_global_dyn_addr = '2001:db8:1:0:f816:3eff:fe2a:ccce'
+        mock_get_iface_addr.return_value = [scope_global_addr,
+                                            scope_global_dyn_addr,
+                                            '2001:db8:1:0:2918:3444:852:5b8a',
+                                            'fe80::f816:3eff:fe2a:ccce%eth0']
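+        # dynamic_only defaults to True: of the addresses above, only the
+        # one IP_OUTPUT marks ``scope global dynamic`` (and not
+        # ``temporary``) is expected back.
+        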
self.assertEqual([scope_global_dyn_addr], net_ip.get_ipv6_addr()) + + @patch('charmhelpers.contrib.network.ip.get_iface_from_addr') + @patch('charmhelpers.contrib.network.ip.subprocess.check_output') + @patch('charmhelpers.contrib.network.ip.get_iface_addr') + def test_get_ipv6_global_dynamic_address_ip2(self, mock_get_iface_addr, + mock_check_out, + mock_get_iface_from_addr): + mock_get_iface_from_addr.return_value = 'eth0' + mock_check_out.return_value = IP2_OUTPUT + scope_global_addr = '2001:db8:1:0:d0cf:528c:23eb:6000' + scope_global_dyn_addr = '2001:db8:1:0:f816:3eff:fe2a:ccce' + mock_get_iface_addr.return_value = [scope_global_addr, + scope_global_dyn_addr, + '2001:db8:1:0:2918:3444:852:5b8a', + 'fe80::f816:3eff:fe2a:ccce%eth0'] + self.assertEqual([scope_global_dyn_addr], net_ip.get_ipv6_addr()) + + @patch('charmhelpers.contrib.network.ip.subprocess.check_output') + @patch('charmhelpers.contrib.network.ip.get_iface_addr') + def test_get_ipv6_global_dynamic_address_invalid_address( + self, mock_get_iface_addr, mock_check_out): + mock_get_iface_addr.return_value = [] + with nose.tools.assert_raises(Exception): + net_ip.get_ipv6_addr() + + mock_get_iface_addr.return_value = ['2001:db8:1:0:2918:3444:852:5b8a'] + mock_check_out.return_value = IP_OUTPUT_NO_VALID + with nose.tools.assert_raises(Exception): + net_ip.get_ipv6_addr() + + @patch('charmhelpers.contrib.network.ip.get_iface_addr') + def test_get_ipv6_addr_w_iface(self, mock_get_iface_addr): + mock_get_iface_addr.return_value = [] + net_ip.get_ipv6_addr(iface='testif', fatal=False) + mock_get_iface_addr.assert_called_once_with(iface='testif', + inet_type='AF_INET6', + inc_aliases=False, + fatal=False, exc_list=None) + + @patch('charmhelpers.contrib.network.ip.unit_get') + @patch('charmhelpers.contrib.network.ip.get_iface_from_addr') + @patch('charmhelpers.contrib.network.ip.get_iface_addr') + def test_get_ipv6_addr_no_iface(self, mock_get_iface_addr, + mock_get_iface_from_addr, mock_unit_get): + mock_unit_get.return_value = '1.2.3.4' + mock_get_iface_addr.return_value = [] + mock_get_iface_from_addr.return_value = "testif" + net_ip.get_ipv6_addr(fatal=False) + mock_get_iface_from_addr.assert_called_once_with('1.2.3.4') + mock_get_iface_addr.assert_called_once_with(iface='testif', + inet_type='AF_INET6', + inc_aliases=False, + fatal=False, exc_list=None) + + @patch('netifaces.interfaces') + @patch('netifaces.ifaddresses') + @patch('charmhelpers.contrib.network.ip.log') + def test_get_iface_from_addr(self, mock_log, mock_ifaddresses, + mock_interfaces): + mock_ifaddresses.side_effect = lambda iface: DUMMY_ADDRESSES[iface] + mock_interfaces.return_value = sorted(DUMMY_ADDRESSES.keys()) + addr = 'fe80::3e97:eff:fe8b:1cf7' + self.assertEqual(net_ip.get_iface_from_addr(addr), 'eth0') + + with nose.tools.assert_raises(Exception): + net_ip.get_iface_from_addr('1.2.3.4') + + def test_is_ip(self): + self.assertTrue(net_ip.is_ip('10.0.0.1')) + self.assertTrue(net_ip.is_ip('2001:db8:1:0:2918:3444:852:5b8a')) + self.assertFalse(net_ip.is_ip('www.ubuntu.com')) + + @patch('charmhelpers.contrib.network.ip.apt_install') + def test_get_host_ip_with_hostname(self, apt_install): + fake_dns = FakeDNS('10.0.0.1') + with patch(builtin_import, side_effect=[fake_dns]): + ip = net_ip.get_host_ip('www.ubuntu.com') + self.assertEquals(ip, '10.0.0.1') + + @patch('charmhelpers.contrib.network.ip.ns_query') + @patch('charmhelpers.contrib.network.ip.socket.gethostbyname') + @patch('charmhelpers.contrib.network.ip.apt_install') + def 
test_get_host_ip_with_hostname_no_dns(self, apt_install, socket, + ns_query): + ns_query.return_value = [] + fake_dns = FakeDNS(None) + socket.return_value = '10.0.0.1' + with patch(builtin_import, side_effect=[fake_dns]): + ip = net_ip.get_host_ip('www.ubuntu.com') + self.assertEquals(ip, '10.0.0.1') + + @patch('charmhelpers.contrib.network.ip.log') + @patch('charmhelpers.contrib.network.ip.ns_query') + @patch('charmhelpers.contrib.network.ip.socket.gethostbyname') + @patch('charmhelpers.contrib.network.ip.apt_install') + def test_get_host_ip_with_hostname_fallback(self, apt_install, socket, + ns_query, *args): + ns_query.return_value = [] + fake_dns = FakeDNS(None) + + def r(): + raise Exception() + + socket.side_effect = r + with patch(builtin_import, side_effect=[fake_dns]): + ip = net_ip.get_host_ip('www.ubuntu.com', fallback='127.0.0.1') + self.assertEquals(ip, '127.0.0.1') + + @patch('charmhelpers.contrib.network.ip.apt_install') + def test_get_host_ip_with_ip(self, apt_install): + fake_dns = FakeDNS('5.5.5.5') + with patch(builtin_import, side_effect=[fake_dns]): + ip = net_ip.get_host_ip('4.2.2.1') + self.assertEquals(ip, '4.2.2.1') + + @patch('charmhelpers.contrib.network.ip.apt_install') + def test_ns_query_trigger_apt_install(self, apt_install): + fake_dns = FakeDNS('5.5.5.5') + with patch(builtin_import, side_effect=[ImportError, fake_dns]): + nsq = net_ip.ns_query('5.5.5.5') + if six.PY2: + apt_install.assert_called_with('python-dnspython', fatal=True) + else: + apt_install.assert_called_with('python3-dnspython', fatal=True) + self.assertEquals(nsq, '5.5.5.5') + + @patch('charmhelpers.contrib.network.ip.apt_install') + def test_ns_query_ptr_record(self, apt_install): + fake_dns = FakeDNS('127.0.0.1') + with patch(builtin_import, side_effect=[fake_dns]): + nsq = net_ip.ns_query('127.0.0.1') + self.assertEquals(nsq, '127.0.0.1') + + @patch('charmhelpers.contrib.network.ip.apt_install') + def test_ns_query_a_record(self, apt_install): + fake_dns = FakeDNS('127.0.0.1') + fake_dns_name = FakeDNSName('www.somedomain.tld') + with patch(builtin_import, side_effect=[fake_dns]): + nsq = net_ip.ns_query(fake_dns_name) + self.assertEquals(nsq, '127.0.0.1') + + @patch('charmhelpers.contrib.network.ip.apt_install') + def test_ns_query_blank_record(self, apt_install): + fake_dns = FakeDNS(None) + with patch(builtin_import, side_effect=[fake_dns, fake_dns]): + nsq = net_ip.ns_query(None) + self.assertEquals(nsq, None) + + @patch('charmhelpers.contrib.network.ip.apt_install') + def test_ns_query_lookup_fail(self, apt_install): + fake_dns = FakeDNS('') + with patch(builtin_import, side_effect=[fake_dns, fake_dns]): + nsq = net_ip.ns_query('nonexistant') + self.assertEquals(nsq, None) + + @patch('charmhelpers.contrib.network.ip.apt_install') + def test_get_hostname_with_ip(self, apt_install): + fake_dns = FakeDNS('www.ubuntu.com') + with patch(builtin_import, side_effect=[fake_dns, fake_dns]): + hn = net_ip.get_hostname('4.2.2.1') + self.assertEquals(hn, 'www.ubuntu.com') + + @patch('charmhelpers.contrib.network.ip.apt_install') + def test_get_hostname_with_ip_not_fqdn(self, apt_install): + fake_dns = FakeDNS('packages.ubuntu.com') + with patch(builtin_import, side_effect=[fake_dns, fake_dns]): + hn = net_ip.get_hostname('4.2.2.1', fqdn=False) + self.assertEquals(hn, 'packages') + + @patch('charmhelpers.contrib.network.ip.apt_install') + def test_get_hostname_with_hostname(self, apt_install): + hn = net_ip.get_hostname('www.ubuntu.com') + self.assertEquals(hn, 'www.ubuntu.com') + + 
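# NOTE: the get_host_ip/get_hostname/ns_query tests in this class patch
+    # ``builtins.__import__`` with a side_effect list so that the
+    # ``import dns`` performed inside the helpers under test yields the
+    # FakeDNS stub; seeding the list with ImportError first (as in
+    # test_ns_query_trigger_apt_install above) exercises the dnspython
+    # install path.
+    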
@patch('charmhelpers.contrib.network.ip.apt_install') + def test_get_hostname_with_hostname_trailingdot(self, apt_install): + hn = net_ip.get_hostname('www.ubuntu.com.') + self.assertEquals(hn, 'www.ubuntu.com') + + @patch('charmhelpers.contrib.network.ip.apt_install') + def test_get_hostname_with_hostname_not_fqdn(self, apt_install): + hn = net_ip.get_hostname('packages.ubuntu.com', fqdn=False) + self.assertEquals(hn, 'packages') + + @patch('charmhelpers.contrib.network.ip.apt_install') + def test_get_hostname_trigger_apt_install(self, apt_install): + fake_dns = FakeDNS('www.ubuntu.com') + with patch(builtin_import, side_effect=[ImportError, fake_dns, + fake_dns]): + hn = net_ip.get_hostname('4.2.2.1') + if six.PY2: + apt_install.assert_called_with('python-dnspython', fatal=True) + else: + apt_install.assert_called_with('python3-dnspython', fatal=True) + + self.assertEquals(hn, 'www.ubuntu.com') + + @patch('charmhelpers.contrib.network.ip.socket.gethostbyaddr') + @patch('charmhelpers.contrib.network.ip.ns_query') + @patch('charmhelpers.contrib.network.ip.apt_install') + def test_get_hostname_lookup_fail(self, apt_install, ns_query, socket): + fake_dns = FakeDNS('www.ubuntu.com') + ns_query.return_value = [] + socket.return_value = () + with patch(builtin_import, side_effect=[fake_dns, fake_dns]): + hn = net_ip.get_hostname('4.2.2.1') + self.assertEquals(hn, None) + + @patch('charmhelpers.contrib.network.ip.socket.gethostbyaddr') + @patch('charmhelpers.contrib.network.ip.ns_query') + @patch('charmhelpers.contrib.network.ip.apt_install') + def test_get_hostname_lookup_fail_gethostbyaddr_fallback( + self, apt_install, ns_query, socket): + fake_dns = FakeDNS('www.ubuntu.com') + ns_query.return_value = [] + socket.return_value = ("www.ubuntu.com", "", "") + with patch(builtin_import, side_effect=[fake_dns]): + hn = net_ip.get_hostname('4.2.2.1') + self.assertEquals(hn, "www.ubuntu.com") + + @patch('charmhelpers.contrib.network.ip.subprocess.call') + def test_port_has_listener(self, subprocess_call): + subprocess_call.return_value = 1 + self.assertEqual(net_ip.port_has_listener('ip-address', 50), False) + subprocess_call.assert_called_with(['nc', '-z', 'ip-address', '50']) + subprocess_call.return_value = 0 + self.assertEqual(net_ip.port_has_listener('ip-address', 70), True) + subprocess_call.assert_called_with(['nc', '-z', 'ip-address', '70']) + + @patch.object(net_ip, 'log', lambda *args, **kwargs: None) + @patch.object(net_ip, 'config') + @patch.object(net_ip, 'network_get_primary_address') + @patch.object(net_ip, 'get_address_in_network') + @patch.object(net_ip, 'unit_get') + @patch.object(net_ip, 'get_ipv6_addr') + @patch.object(net_ip, 'assert_charm_supports_ipv6') + def test_get_relation_ip(self, assert_charm_supports_ipv6, get_ipv6_addr, + unit_get, get_address_in_network, + network_get_primary_address, config): + ACCESS_IP = '10.50.1.1' + ACCESS_NETWORK = '10.50.1.0/24' + AMQP_IP = '10.200.1.1' + IPV6_IP = '2001:DB8::1' + DEFAULT_IP = '172.16.1.1' + assert_charm_supports_ipv6.return_value = True + get_ipv6_addr.return_value = [IPV6_IP] + unit_get.return_value = DEFAULT_IP + get_address_in_network.return_value = DEFAULT_IP + network_get_primary_address.return_value = AMQP_IP + + # Network-get calls + _config = {'prefer-ipv6': False} + config.side_effect = lambda key: _config.get(key) + + network_get_primary_address.side_effect = NotImplementedError + self.assertEqual(DEFAULT_IP, net_ip.get_relation_ip('amqp')) + + network_get_primary_address.side_effect = net_ip.NoNetworkBinding + 
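# NoNetworkBinding from network-get should likewise fall back to the
+        # unit-get address
+        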
self.assertEqual(DEFAULT_IP, net_ip.get_relation_ip('doesnotexist'))
+
+        network_get_primary_address.side_effect = None
+        self.assertEqual(AMQP_IP, net_ip.get_relation_ip('amqp'))
+
+        self.assertFalse(get_address_in_network.called)
+
+        # Specific CIDR network
+        get_address_in_network.return_value = ACCESS_IP
+        network_get_primary_address.return_value = DEFAULT_IP
+        self.assertEqual(
+            ACCESS_IP,
+            net_ip.get_relation_ip('shared-db',
+                                   cidr_network=ACCESS_NETWORK))
+        get_address_in_network.assert_called_with(ACCESS_NETWORK, DEFAULT_IP)
+
+        self.assertFalse(assert_charm_supports_ipv6.called)
+
+        # IPv6
+        _config = {'prefer-ipv6': True}
+        config.side_effect = lambda key: _config.get(key)
+        self.assertEqual(IPV6_IP, net_ip.get_relation_ip('amqp'))
+        assert_charm_supports_ipv6.assert_called_with()
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/network/test_ovs.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/network/test_ovs.py
new file mode 100644
index 0000000000000000000000000000000000000000..9cf8e615123f78420c29e4a545d22805d2ae5f99
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/network/test_ovs.py
@@ -0,0 +1,333 @@
+import subprocess
+import unittest
+
+from mock import patch, call, MagicMock
+
+import charmhelpers.contrib.network.ovs as ovs
+
+from tests.helpers import patch_open
+
+
+# NOTE(fnordahl): some functions directly under the ``contrib.network.ovs``
+# module have their unit tests in the ``test_ovs.py`` module in the
+# ``tests.contrib.network.ovs`` package.
+
+
+GOOD_CERT = '''Certificate:
+    Data:
+        Version: 1 (0x0)
+        Serial Number: 13798680962510501282 (0xbf7ec33a136235a2)
+    Signature Algorithm: sha1WithRSAEncryption
+        Issuer: C=US, ST=CA, L=Palo Alto, O=Open vSwitch, OU=Open vSwitch
+        Validity
+            Not Before: Jun 28 17:02:19 2013 GMT
+            Not After : Jun 28 17:02:19 2019 GMT
+        Subject: C=US, ST=CA, L=Palo Alto, O=Open vSwitch, OU=Open vSwitch
+        Subject Public Key Info:
+            Public Key Algorithm: rsaEncryption
+                Public-Key: (2048 bit)
+                Modulus:
+                    00:e8:a7:db:0a:6d:c0:16:4a:14:96:1d:74:91:15:
+                    64:3f:ae:2a:54:be:2a:fe:10:14:9a:73:39:d8:58:
+                    74:7f:ab:d5:f2:39:aa:9a:27:7c:31:82:f8:74:42:
+                    46:8d:c5:3b:42:55:52:be:75:7f:a5:b1:ec:d5:29:
+                    9f:62:0e:de:31:27:2b:95:1f:24:0d:ca:8c:48:30:
+                    96:9f:ba:b7:9d:eb:c1:bd:93:05:e3:d8:ca:66:5a:
+                    e9:cb:a5:7a:3a:8d:27:e2:05:9d:88:fc:a9:ef:af:
+                    47:4c:66:ce:c6:43:73:1a:85:f4:5f:b9:53:5b:29:
+                    f3:c3:23:1f:0c:20:95:11:50:71:b2:f6:01:23:3f:
+                    66:0f:5c:43:c2:90:fb:e5:98:73:98:e9:38:bb:1f:
+                    1b:89:97:1e:dc:d7:98:07:68:32:ec:da:1d:69:0b:
+                    e2:df:40:fb:64:52:e5:e9:40:27:b0:ca:73:21:51:
+                    f6:8f:00:20:c0:2b:1a:d4:01:c2:32:38:9d:d1:8d:
+                    88:71:46:a9:42:0d:ee:3b:1c:88:db:27:69:49:f9:
+                    60:34:70:61:3d:60:df:7e:e4:e1:1d:c6:16:89:05:
+                    ba:31:06:eb:88:b5:78:94:5d:8c:9d:88:fe:f2:c2:
+                    80:a1:04:15:d3:84:85:d3:aa:5a:1d:53:5c:f8:57:
+                    ae:61
+                Exponent: 65537 (0x10001)
+    Signature Algorithm: sha1WithRSAEncryption
+         14:7e:ca:c3:fc:93:60:9f:80:e0:65:2e:ef:41:2d:f9:af:77:
+         da:6d:e2:e0:11:70:17:fb:e5:67:4c:f0:ad:39:ec:96:ef:fe:
+         d5:95:94:70:e5:52:31:68:63:8c:ea:b3:a1:8e:02:e2:91:4b:
+         a8:8c:07:86:fd:80:98:a2:b1:90:2b:9c:2e:ab:f4:73:9d:8f:
+         fd:31:b9:8f:fe:6c:af:d6:bf:72:44:89:08:93:19:ef:2b:c3:
+         7c:ab:ba:bc:57:ca:f1:17:e4:e8:81:40:ca:65:df:84:be:10:
+         2c:42:46:af:d2:e0:0d:df:5d:56:53:65:13:e0:20:55:b4:ee:
+         cd:5e:b5:c4:97:1d:3e:a6:c1:9c:7e:b8:87:ee:64:78:a5:59:
+         e5:b2:79:47:9a:8e:59:fa:c4:18:ea:27:fd:a2:d5:76:d0:ae:
+         
d9:05:f6:0e:23:ca:7d:66:a1:ba:18:67:f5:6d:bb:51:5a:f5: + 52:e9:17:bb:63:15:24:b4:61:25:9f:d9:9c:89:58:93:9a:c3: + 74:55:72:3e:f9:ff:ef:54:7d:e8:28:78:ba:3c:c7:15:ba:b9: + c6:e3:8c:61:cb:a9:ed:8d:07:16:0d:8d:f6:1c:36:11:69:08: + b8:45:7d:fc:fd:d1:ab:2d:9b:4e:9c:dd:11:78:50:c7:87:9f: + 4a:24:9c:a0 +-----BEGIN CERTIFICATE----- +MIIDwjCCAqoCCQC/fsM6E2I1ojANBgkqhkiG9w0BAQUFADCBojELMAkGA1UEBhMC +VVMxCzAJBgNVBAgTAkNBMRIwEAYDVQQHEwlQYWxvIEFsdG8xFTATBgNVBAoTDE9w +ZW4gdlN3aXRjaDEfMB0GA1UECxMWT3BlbiB2U3dpdGNoIGNlcnRpZmllcjE6MDgG +A1UEAxMxb3ZzY2xpZW50IGlkOjU4MTQ5N2E1LWJjMDAtNGVjYy1iNzkwLTU3NTZj +ZWUxNmE0ODAeFw0xMzA2MjgxNzAyMTlaFw0xOTA2MjgxNzAyMTlaMIGiMQswCQYD +VQQGEwJVUzELMAkGA1UECBMCQ0ExEjAQBgNVBAcTCVBhbG8gQWx0bzEVMBMGA1UE +ChMMT3BlbiB2U3dpdGNoMR8wHQYDVQQLExZPcGVuIHZTd2l0Y2ggY2VydGlmaWVy +MTowOAYDVQQDEzFvdnNjbGllbnQgaWQ6NTgxNDk3YTUtYmMwMC00ZWNjLWI3OTAt +NTc1NmNlZTE2YTQ4MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA6Kfb +Cm3AFkoUlh10kRVkP64qVL4q/hAUmnM52Fh0f6vV8jmqmid8MYL4dEJGjcU7QlVS +vnV/pbHs1SmfYg7eMScrlR8kDcqMSDCWn7q3nevBvZMF49jKZlrpy6V6Oo0n4gWd +iPyp769HTGbOxkNzGoX0X7lTWynzwyMfDCCVEVBxsvYBIz9mD1xDwpD75ZhzmOk4 +ux8biZce3NeYB2gy7NodaQvi30D7ZFLl6UAnsMpzIVH2jwAgwCsa1AHCMjid0Y2I +cUapQg3uOxyI2ydpSflgNHBhPWDffuThHcYWiQW6MQbriLV4lF2MnYj+8sKAoQQV +04SF06paHVNc+FeuYQIDAQABMA0GCSqGSIb3DQEBBQUAA4IBAQAUfsrD/JNgn4Dg +ZS7vQS35r3fabeLgEXAX++VnTPCtOeyW7/7VlZRw5VIxaGOM6rOhjgLikUuojAeG +/YCYorGQK5wuq/RznY/9MbmP/myv1r9yRIkIkxnvK8N8q7q8V8rxF+TogUDKZd+E +vhAsQkav0uAN311WU2UT4CBVtO7NXrXElx0+psGcfriH7mR4pVnlsnlHmo5Z+sQY +6if9otV20K7ZBfYOI8p9ZqG6GGf1bbtRWvVS6Re7YxUktGEln9mciViTmsN0VXI+ ++f/vVH3oKHi6PMcVurnG44xhy6ntjQcWDY32HDYRaQi4RX38/dGrLZtOnN0ReFDH +h59KJJyg +-----END CERTIFICATE----- +''' + +PEM_ENCODED = '''-----BEGIN CERTIFICATE----- +MIIDwjCCAqoCCQC/fsM6E2I1ojANBgkqhkiG9w0BAQUFADCBojELMAkGA1UEBhMC +VVMxCzAJBgNVBAgTAkNBMRIwEAYDVQQHEwlQYWxvIEFsdG8xFTATBgNVBAoTDE9w +ZW4gdlN3aXRjaDEfMB0GA1UECxMWT3BlbiB2U3dpdGNoIGNlcnRpZmllcjE6MDgG +A1UEAxMxb3ZzY2xpZW50IGlkOjU4MTQ5N2E1LWJjMDAtNGVjYy1iNzkwLTU3NTZj +ZWUxNmE0ODAeFw0xMzA2MjgxNzAyMTlaFw0xOTA2MjgxNzAyMTlaMIGiMQswCQYD +VQQGEwJVUzELMAkGA1UECBMCQ0ExEjAQBgNVBAcTCVBhbG8gQWx0bzEVMBMGA1UE +ChMMT3BlbiB2U3dpdGNoMR8wHQYDVQQLExZPcGVuIHZTd2l0Y2ggY2VydGlmaWVy +MTowOAYDVQQDEzFvdnNjbGllbnQgaWQ6NTgxNDk3YTUtYmMwMC00ZWNjLWI3OTAt +NTc1NmNlZTE2YTQ4MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA6Kfb +Cm3AFkoUlh10kRVkP64qVL4q/hAUmnM52Fh0f6vV8jmqmid8MYL4dEJGjcU7QlVS +vnV/pbHs1SmfYg7eMScrlR8kDcqMSDCWn7q3nevBvZMF49jKZlrpy6V6Oo0n4gWd +iPyp769HTGbOxkNzGoX0X7lTWynzwyMfDCCVEVBxsvYBIz9mD1xDwpD75ZhzmOk4 +ux8biZce3NeYB2gy7NodaQvi30D7ZFLl6UAnsMpzIVH2jwAgwCsa1AHCMjid0Y2I +cUapQg3uOxyI2ydpSflgNHBhPWDffuThHcYWiQW6MQbriLV4lF2MnYj+8sKAoQQV +04SF06paHVNc+FeuYQIDAQABMA0GCSqGSIb3DQEBBQUAA4IBAQAUfsrD/JNgn4Dg +ZS7vQS35r3fabeLgEXAX++VnTPCtOeyW7/7VlZRw5VIxaGOM6rOhjgLikUuojAeG +/YCYorGQK5wuq/RznY/9MbmP/myv1r9yRIkIkxnvK8N8q7q8V8rxF+TogUDKZd+E +vhAsQkav0uAN311WU2UT4CBVtO7NXrXElx0+psGcfriH7mR4pVnlsnlHmo5Z+sQY +6if9otV20K7ZBfYOI8p9ZqG6GGf1bbtRWvVS6Re7YxUktGEln9mciViTmsN0VXI+ ++f/vVH3oKHi6PMcVurnG44xhy6ntjQcWDY32HDYRaQi4RX38/dGrLZtOnN0ReFDH +h59KJJyg +-----END CERTIFICATE-----''' + +BAD_CERT = ''' NO MARKERS ''' +TO_PATCH = [ + "apt_install", + "log", + "hashlib", +] + + +class OVSHelpersTest(unittest.TestCase): + + def setUp(self): + for m in TO_PATCH: + setattr(self, m, self._patch(m)) + + def _patch(self, method): + _m = patch('charmhelpers.contrib.network.ovs.' 
+ method) + mock = _m.start() + self.addCleanup(_m.stop) + return mock + + @patch('subprocess.check_output') + def test_get_bridges(self, check_output): + check_output.return_value = b"br1\n br2 " + self.assertEqual(ovs.get_bridges(), ['br1', 'br2']) + check_output.assert_called_once_with(['ovs-vsctl', 'list-br']) + + @patch('subprocess.check_output') + def test_get_bridge_ports(self, check_output): + check_output.return_value = b"p1\n p2 \np3" + self.assertEqual(ovs.get_bridge_ports('br1'), ['p1', 'p2', 'p3']) + check_output.assert_called_once_with( + ['ovs-vsctl', '--', 'list-ports', 'br1']) + + @patch.object(ovs, 'get_bridges') + @patch.object(ovs, 'get_bridge_ports') + def test_get_bridges_and_ports_map(self, get_bridge_ports, get_bridges): + get_bridges.return_value = ['br1', 'br2'] + get_bridge_ports.side_effect = [ + ['p1', 'p2'], + ['p3']] + self.assertEqual(ovs.get_bridges_and_ports_map(), { + 'br1': ['p1', 'p2'], + 'br2': ['p3'], + }) + + @patch('subprocess.check_call') + def test_del_bridge(self, check_call): + ovs.del_bridge('test') + check_call.assert_called_with(["ovs-vsctl", "--", "--if-exists", + "del-br", 'test']) + self.assertTrue(self.log.call_count == 1) + + @patch('subprocess.check_call') + def test_del_bridge_port(self, check_call): + ovs.del_bridge_port('test', 'eth1') + check_call.assert_has_calls([ + call(["ovs-vsctl", "--", "--if-exists", "del-port", + 'test', 'eth1']), + call(['ip', 'link', 'set', 'eth1', 'down']), + call(['ip', 'link', 'set', 'eth1', 'promisc', 'off']) + ]) + self.assertTrue(self.log.call_count == 1) + + @patch.object(ovs, 'port_to_br') + @patch.object(ovs, 'add_bridge_port') + @patch('subprocess.check_call') + def test_add_ovsbridge_linuxbridge(self, check_call, + add_bridge_port, + port_to_br): + port_to_br.return_value = None + with patch_open() as (mock_open, mock_file): + ovs.add_ovsbridge_linuxbridge('br-ex', 'br-eno1', ifdata={ + 'external-ids': {'mycharm': 'br-ex'} + }) + + check_call.assert_called_with(['ifup', 'veth-br-eno1']) + add_bridge_port.assert_called_with( + 'br-ex', 'veth-br-eno1', ifdata={ + 'external-ids': {'mycharm': 'br-ex'} + } + ) + + @patch.object(ovs, 'port_to_br') + @patch.object(ovs, 'add_bridge_port') + @patch('subprocess.check_call') + def test_add_ovsbridge_linuxbridge_already_direct_wired(self, + check_call, + add_bridge_port, + port_to_br): + port_to_br.return_value = 'br-ex' + ovs.add_ovsbridge_linuxbridge('br-ex', 'br-eno1') + check_call.assert_not_called() + add_bridge_port.assert_not_called() + + @patch.object(ovs, 'port_to_br') + @patch.object(ovs, 'add_bridge_port') + @patch('subprocess.check_call') + def test_add_ovsbridge_linuxbridge_longname(self, check_call, + add_bridge_port, + port_to_br): + port_to_br.return_value = None + mock_hasher = MagicMock() + mock_hasher.hexdigest.return_value = '12345678901234578910' + self.hashlib.sha256.return_value = mock_hasher + with patch_open() as (mock_open, mock_file): + ovs.add_ovsbridge_linuxbridge('br-ex', 'br-reallylongname') + + check_call.assert_called_with(['ifup', 'cvb12345678-10']) + add_bridge_port.assert_called_with( + 'br-ex', 'cvb12345678-10', ifdata=None + ) + + @patch('os.path.exists') + def test_is_linuxbridge_interface_false(self, exists): + exists.return_value = False + result = ovs.is_linuxbridge_interface('eno1') + self.assertFalse(result) + + @patch('os.path.exists') + def test_is_linuxbridge_interface_true(self, exists): + exists.return_value = True + result = ovs.is_linuxbridge_interface('eno1') + self.assertTrue(result) + + 
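# NOTE: the add_ovsbridge_linuxbridge tests above pin the veth pair name:
+    # ``veth-<bridge>`` for short bridge names, and a hashed ``cvb...`` name
+    # derived from hashlib.sha256 for long ones -- presumably to stay within
+    # the kernel's 15-character interface-name limit (an assumption; the
+    # tests only assert the resulting names).
+
+    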
@patch('subprocess.check_call') + def test_set_manager(self, check_call): + ovs.set_manager('manager') + check_call.assert_called_with(['ovs-vsctl', 'set-manager', + 'ssl:manager']) + self.assertTrue(self.log.call_count == 1) + + @patch('subprocess.check_call') + def test_set_Open_vSwitch_column_value(self, check_call): + ovs.set_Open_vSwitch_column_value('other_config:foo=bar') + check_call.assert_called_with(['ovs-vsctl', 'set', + 'Open_vSwitch', '.', 'other_config:foo=bar']) + self.assertTrue(self.log.call_count == 1) + + @patch('os.path.exists') + def test_get_certificate_good_cert(self, exists): + exists.return_value = True + with patch_open() as (mock_open, mock_file): + mock_file.read.return_value = GOOD_CERT + self.assertEqual(ovs.get_certificate(), PEM_ENCODED) + self.assertTrue(self.log.call_count == 1) + + @patch('os.path.exists') + def test_get_certificate_bad_cert(self, exists): + exists.return_value = True + with patch_open() as (mock_open, mock_file): + mock_file.read.return_value = BAD_CERT + self.assertRaises(RuntimeError, ovs.get_certificate) + self.assertTrue(self.log.call_count == 1) + + @patch('os.path.exists') + def test_get_certificate_missing(self, exists): + exists.return_value = False + self.assertIsNone(ovs.get_certificate()) + self.assertTrue(self.log.call_count == 1) + + @patch('os.path.exists') + @patch.object(ovs, 'service') + def test_full_restart(self, service, exists): + exists.return_value = False + ovs.full_restart() + service.assert_called_with('force-reload-kmod', 'openvswitch-switch') + + @patch('os.path.exists') + @patch.object(ovs, 'service') + def test_full_restart_upstart(self, service, exists): + exists.return_value = True + ovs.full_restart() + service.assert_called_with('start', 'openvswitch-force-reload-kmod') + + @patch('subprocess.check_output') + def test_port_to_br(self, check_output): + check_output.return_value = b'br-ex' + self.assertEqual(ovs.port_to_br('br-lb'), + 'br-ex') + + @patch('subprocess.check_output') + def test_port_to_br_not_found(self, check_output): + check_output.side_effect = subprocess.CalledProcessError(1, 'not found') + self.assertEqual(ovs.port_to_br('br-lb'), None) + + @patch('subprocess.check_call') + def test_enable_ipfix_defaults(self, check_call): + ovs.enable_ipfix('br-int', + '10.5.0.10:4739') + check_call.assert_called_once_with([ + 'ovs-vsctl', 'set', 'Bridge', 'br-int', 'ipfix=@i', '--', + '--id=@i', 'create', 'IPFIX', + 'targets="10.5.0.10:4739"', + 'sampling=64', + 'cache_active_timeout=60', + 'cache_max_flows=128', + ]) + + @patch('subprocess.check_call') + def test_enable_ipfix_values(self, check_call): + ovs.enable_ipfix('br-int', + '10.5.0.10:4739', + sampling=120, + cache_max_flows=24, + cache_active_timeout=120) + check_call.assert_called_once_with([ + 'ovs-vsctl', 'set', 'Bridge', 'br-int', 'ipfix=@i', '--', + '--id=@i', 'create', 'IPFIX', + 'targets="10.5.0.10:4739"', + 'sampling=120', + 'cache_active_timeout=120', + 'cache_max_flows=24', + ]) + + @patch('subprocess.check_call') + def test_disable_ipfix(self, check_call): + ovs.disable_ipfix('br-int') + check_call.assert_called_once_with( + ['ovs-vsctl', 'clear', 'Bridge', 'br-int', 'ipfix'] + ) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/network/test_ufw.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/network/test_ufw.py new file mode 100644 index 0000000000000000000000000000000000000000..6eddc021cf0208dd4775c4c27d0ef048295890b0 --- /dev/null +++ 
b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/network/test_ufw.py @@ -0,0 +1,550 @@ +from __future__ import print_function + +import mock +import os +import subprocess +import unittest + +from charmhelpers.contrib.network import ufw + +__author__ = 'Felipe Reyes ' + + +LSMOD_NO_IP6 = """Module Size Used by +raid1 39533 1 +psmouse 106548 0 +raid0 17842 0 +ahci 34062 5 +multipath 13145 0 +r8169 71471 0 +libahci 32424 1 ahci +mii 13934 1 r8169 +linear 12894 0 +""" +LSMOD_IP6 = """Module Size Used by +xt_hl 12521 0 +ip6_tables 27026 0 +ip6t_rt 13537 0 +nf_conntrack_ipv6 18894 0 +nf_defrag_ipv6 34769 1 nf_conntrack_ipv6 +xt_recent 18457 0 +xt_LOG 17702 0 +xt_limit 12711 0 +""" +DEFAULT_POLICY_OUTPUT = """Default incoming policy changed to 'deny' +(be sure to update your rules accordingly) +""" +DEFAULT_POLICY_OUTPUT_OUTGOING = """Default outgoing policy changed to 'allow' +(be sure to update your rules accordingly) +""" + +UFW_STATUS_NUMBERED = """Status: active + + To Action From + -- ------ ---- +[ 1] 6641/tcp ALLOW IN 10.219.3.86 # charm-ovn-central +[12] 6641/tcp REJECT IN Anywhere +[19] 6644/tcp (v6) REJECT IN Anywhere (v6) # charm-ovn-central + +""" + + +class TestUFW(unittest.TestCase): + @mock.patch('charmhelpers.core.hookenv.log') + @mock.patch('subprocess.check_output') + @mock.patch('charmhelpers.contrib.network.ufw.modprobe') + def test_enable_ok(self, modprobe, check_output, log): + msg = 'Firewall is active and enabled on system startup\n' + check_output.return_value = msg + self.assertTrue(ufw.enable()) + + check_output.assert_any_call(['ufw', 'enable'], + universal_newlines=True, + env={'LANG': 'en_US', + 'PATH': os.environ['PATH']}) + log.assert_any_call(msg, level='DEBUG') + log.assert_any_call('ufw enabled', level='INFO') + + @mock.patch('charmhelpers.core.hookenv.log') + @mock.patch('subprocess.check_output') + @mock.patch('charmhelpers.contrib.network.ufw.modprobe') + def test_enable_fail(self, modprobe, check_output, log): + msg = 'neneene\n' + check_output.return_value = msg + self.assertFalse(ufw.enable()) + + check_output.assert_any_call(['ufw', 'enable'], + universal_newlines=True, + env={'LANG': 'en_US', + 'PATH': os.environ['PATH']}) + log.assert_any_call(msg, level='DEBUG') + log.assert_any_call("ufw couldn't be enabled", level='WARN') + + @mock.patch('charmhelpers.contrib.network.ufw.is_enabled') + @mock.patch('charmhelpers.core.hookenv.log') + @mock.patch('subprocess.check_output') + def test_disable_ok(self, check_output, log, is_enabled): + is_enabled.return_value = True + msg = 'Firewall stopped and disabled on system startup\n' + check_output.return_value = msg + self.assertTrue(ufw.disable()) + + check_output.assert_any_call(['ufw', 'disable'], + universal_newlines=True, + env={'LANG': 'en_US', + 'PATH': os.environ['PATH']}) + log.assert_any_call(msg, level='DEBUG') + log.assert_any_call('ufw disabled', level='INFO') + + @mock.patch('charmhelpers.contrib.network.ufw.is_enabled') + @mock.patch('charmhelpers.core.hookenv.log') + @mock.patch('subprocess.check_output') + def test_disable_fail(self, check_output, log, is_enabled): + is_enabled.return_value = True + msg = 'neneene\n' + check_output.return_value = msg + self.assertFalse(ufw.disable()) + + check_output.assert_any_call(['ufw', 'disable'], + universal_newlines=True, + env={'LANG': 'en_US', + 'PATH': os.environ['PATH']}) + log.assert_any_call(msg, level='DEBUG') + log.assert_any_call("ufw couldn't be disabled", level='WARN') + + 
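+    # modify_access() and the grant/revoke wrappers below shell out to the
+    # ufw CLI, which is why these tests patch subprocess.Popen and assert
+    # on the exact argument list.  A minimal usage sketch (values are
+    # illustrative and mirror the assertions that follow; proto is shown on
+    # modify_access and assumed to pass through the wrappers):
+    #
+    #     from charmhelpers.contrib.network import ufw
+    #     ufw.enable()
+    #     ufw.grant_access('10.0.0.1', dst='any', port='443', proto='tcp')
+    #     ufw.revoke_access('10.0.0.1', dst='any', port='443')
+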
@mock.patch('charmhelpers.contrib.network.ufw.is_enabled') + @mock.patch('charmhelpers.core.hookenv.log') + @mock.patch('subprocess.check_output') + def test_modify_access_ufw_is_disabled(self, check_output, log, + is_enabled): + is_enabled.return_value = False + ufw.modify_access('127.0.0.1') + log.assert_any_call('ufw is disabled, skipping modify_access()', + level='WARN') + + @mock.patch('charmhelpers.contrib.network.ufw.is_enabled') + @mock.patch('charmhelpers.core.hookenv.log') + @mock.patch('subprocess.Popen') + def test_modify_access_allow(self, popen, log, is_enabled): + is_enabled.return_value = True + p = mock.Mock() + p.configure_mock(**{'communicate.return_value': ('stdout', 'stderr'), + 'returncode': 0}) + popen.return_value = p + + ufw.modify_access('127.0.0.1') + popen.assert_any_call(['ufw', 'allow', 'from', '127.0.0.1', 'to', + 'any'], stdout=subprocess.PIPE) + log.assert_any_call('ufw allow: ufw allow from 127.0.0.1 to any', + level='DEBUG') + log.assert_any_call('stdout', level='INFO') + + @mock.patch('charmhelpers.contrib.network.ufw.is_enabled') + @mock.patch('charmhelpers.core.hookenv.log') + @mock.patch('subprocess.Popen') + def test_modify_access_allow_set_proto(self, popen, log, is_enabled): + is_enabled.return_value = True + p = mock.Mock() + p.configure_mock(**{'communicate.return_value': ('stdout', 'stderr'), + 'returncode': 0}) + popen.return_value = p + + ufw.modify_access('127.0.0.1', proto='udp') + popen.assert_any_call(['ufw', 'allow', 'from', '127.0.0.1', 'to', + 'any', 'proto', 'udp'], stdout=subprocess.PIPE) + log.assert_any_call(('ufw allow: ufw allow from 127.0.0.1 ' + 'to any proto udp'), level='DEBUG') + log.assert_any_call('stdout', level='INFO') + + @mock.patch('charmhelpers.contrib.network.ufw.is_enabled') + @mock.patch('charmhelpers.core.hookenv.log') + @mock.patch('subprocess.Popen') + def test_modify_access_allow_set_port(self, popen, log, is_enabled): + is_enabled.return_value = True + p = mock.Mock() + p.configure_mock(**{'communicate.return_value': ('stdout', 'stderr'), + 'returncode': 0}) + popen.return_value = p + + ufw.modify_access('127.0.0.1', port='80') + popen.assert_any_call(['ufw', 'allow', 'from', '127.0.0.1', 'to', + 'any', 'port', '80'], stdout=subprocess.PIPE) + log.assert_any_call(('ufw allow: ufw allow from 127.0.0.1 ' + 'to any port 80'), level='DEBUG') + log.assert_any_call('stdout', level='INFO') + + @mock.patch('charmhelpers.contrib.network.ufw.is_enabled') + @mock.patch('charmhelpers.core.hookenv.log') + @mock.patch('subprocess.Popen') + def test_modify_access_allow_set_dst(self, popen, log, is_enabled): + is_enabled.return_value = True + p = mock.Mock() + p.configure_mock(**{'communicate.return_value': ('stdout', 'stderr'), + 'returncode': 0}) + popen.return_value = p + + ufw.modify_access('127.0.0.1', dst='127.0.0.1', port='80') + popen.assert_any_call(['ufw', 'allow', 'from', '127.0.0.1', 'to', + '127.0.0.1', 'port', '80'], + stdout=subprocess.PIPE) + log.assert_any_call(('ufw allow: ufw allow from 127.0.0.1 ' + 'to 127.0.0.1 port 80'), level='DEBUG') + log.assert_any_call('stdout', level='INFO') + + @mock.patch('charmhelpers.contrib.network.ufw.is_enabled') + @mock.patch('charmhelpers.core.hookenv.log') + @mock.patch('subprocess.Popen') + def test_modify_access_allow_ipv6(self, popen, log, is_enabled): + is_enabled.return_value = True + p = mock.Mock() + p.configure_mock(**{'communicate.return_value': ('stdout', 'stderr'), + 'returncode': 0}) + popen.return_value = p + + ufw.modify_access('::1', dst='::1', port='80') 
+ popen.assert_any_call(['ufw', 'allow', 'from', '::1', 'to', + '::1', 'port', '80'], + stdout=subprocess.PIPE) + log.assert_any_call(('ufw allow: ufw allow from ::1 ' + 'to ::1 port 80'), level='DEBUG') + log.assert_any_call('stdout', level='INFO') + + @mock.patch('charmhelpers.contrib.network.ufw.is_enabled') + @mock.patch('charmhelpers.core.hookenv.log') + @mock.patch('subprocess.Popen') + def test_modify_access_with_index(self, popen, log, is_enabled): + is_enabled.return_value = True + p = mock.Mock() + p.configure_mock(**{'communicate.return_value': ('stdout', 'stderr'), + 'returncode': 0}) + popen.return_value = p + + ufw.modify_access('127.0.0.1', dst='127.0.0.1', port='80', index=1) + popen.assert_any_call(['ufw', 'insert', '1', 'allow', 'from', + '127.0.0.1', 'to', '127.0.0.1', 'port', '80'], + stdout=subprocess.PIPE) + log.assert_any_call(('ufw allow: ufw insert 1 allow from 127.0.0.1 ' + 'to 127.0.0.1 port 80'), level='DEBUG') + log.assert_any_call('stdout', level='INFO') + + @mock.patch('charmhelpers.contrib.network.ufw.is_enabled') + @mock.patch('charmhelpers.core.hookenv.log') + @mock.patch('subprocess.Popen') + def test_modify_access_prepend(self, popen, log, is_enabled): + is_enabled.return_value = True + p = mock.Mock() + p.configure_mock(**{'communicate.return_value': ('stdout', 'stderr'), + 'returncode': 0}) + popen.return_value = p + ufw.modify_access('127.0.0.1', dst='127.0.0.1', port='80', + prepend=True) + popen.assert_any_call(['ufw', 'prepend', 'allow', 'from', '127.0.0.1', + 'to', '127.0.0.1', 'port', '80'], + stdout=subprocess.PIPE) + log.assert_any_call(('ufw allow: ufw prepend allow from 127.0.0.1 ' + 'to 127.0.0.1 port 80'), level='DEBUG') + log.assert_any_call('stdout', level='INFO') + + @mock.patch('charmhelpers.contrib.network.ufw.is_enabled') + @mock.patch('charmhelpers.core.hookenv.log') + @mock.patch('subprocess.Popen') + def test_modify_access_comment(self, popen, log, is_enabled): + is_enabled.return_value = True + p = mock.Mock() + p.configure_mock(**{'communicate.return_value': ('stdout', 'stderr'), + 'returncode': 0}) + popen.return_value = p + ufw.modify_access('127.0.0.1', dst='127.0.0.1', port='80', + comment='No comment') + popen.assert_any_call(['ufw', 'allow', 'from', '127.0.0.1', + 'to', '127.0.0.1', 'port', '80', + 'comment', 'No comment'], + stdout=subprocess.PIPE) + + @mock.patch('charmhelpers.contrib.network.ufw.is_enabled') + @mock.patch('charmhelpers.core.hookenv.log') + @mock.patch('subprocess.Popen') + def test_modify_access_delete_index(self, popen, log, is_enabled): + is_enabled.return_value = True + p = mock.Mock() + p.configure_mock(**{'communicate.return_value': ('stdout', 'stderr'), + 'returncode': 0}) + popen.return_value = p + ufw.modify_access(None, dst=None, action='delete', index=42) + popen.assert_any_call(['ufw', '--force', 'delete', '42'], + stdout=subprocess.PIPE) + + @mock.patch('charmhelpers.contrib.network.ufw.is_enabled') + @mock.patch('charmhelpers.core.hookenv.log') + @mock.patch('subprocess.Popen') + def test_grant_access(self, popen, log, is_enabled): + is_enabled.return_value = True + p = mock.Mock() + p.configure_mock(**{'communicate.return_value': ('stdout', 'stderr'), + 'returncode': 0}) + popen.return_value = p + + ufw.grant_access('127.0.0.1', dst='127.0.0.1', port='80') + popen.assert_any_call(['ufw', 'allow', 'from', '127.0.0.1', 'to', + '127.0.0.1', 'port', '80'], + stdout=subprocess.PIPE) + log.assert_any_call(('ufw allow: ufw allow from 127.0.0.1 ' + 'to 127.0.0.1 port 80'), level='DEBUG') + 
log.assert_any_call('stdout', level='INFO') + + @mock.patch('charmhelpers.contrib.network.ufw.is_enabled') + @mock.patch('charmhelpers.core.hookenv.log') + @mock.patch('subprocess.Popen') + def test_grant_access_with_index(self, popen, log, is_enabled): + is_enabled.return_value = True + p = mock.Mock() + p.configure_mock(**{'communicate.return_value': ('stdout', 'stderr'), + 'returncode': 0}) + popen.return_value = p + + ufw.grant_access('127.0.0.1', dst='127.0.0.1', port='80', index=1) + popen.assert_any_call(['ufw', 'insert', '1', 'allow', 'from', + '127.0.0.1', 'to', '127.0.0.1', 'port', '80'], + stdout=subprocess.PIPE) + log.assert_any_call(('ufw allow: ufw insert 1 allow from 127.0.0.1 ' + 'to 127.0.0.1 port 80'), level='DEBUG') + log.assert_any_call('stdout', level='INFO') + + @mock.patch('charmhelpers.contrib.network.ufw.is_enabled') + @mock.patch('charmhelpers.core.hookenv.log') + @mock.patch('subprocess.Popen') + def test_revoke_access(self, popen, log, is_enabled): + is_enabled.return_value = True + p = mock.Mock() + p.configure_mock(**{'communicate.return_value': ('stdout', 'stderr'), + 'returncode': 0}) + popen.return_value = p + + ufw.revoke_access('127.0.0.1', dst='127.0.0.1', port='80') + popen.assert_any_call(['ufw', 'delete', 'allow', 'from', '127.0.0.1', + 'to', '127.0.0.1', 'port', '80'], + stdout=subprocess.PIPE) + log.assert_any_call(('ufw delete: ufw delete allow from 127.0.0.1 ' + 'to 127.0.0.1 port 80'), level='DEBUG') + log.assert_any_call('stdout', level='INFO') + + @mock.patch('subprocess.check_output') + def test_service_open(self, check_output): + ufw.service('ssh', 'open') + check_output.assert_any_call(['ufw', 'allow', 'ssh'], + universal_newlines=True) + + @mock.patch('subprocess.check_output') + def test_service_close(self, check_output): + ufw.service('ssh', 'close') + check_output.assert_any_call(['ufw', 'delete', 'allow', 'ssh'], + universal_newlines=True) + + @mock.patch('subprocess.check_output') + def test_service_unsupport_action(self, check_output): + self.assertRaises(ufw.UFWError, ufw.service, 'ssh', 'nenene') + + @mock.patch('charmhelpers.contrib.network.ufw.is_enabled') + @mock.patch('charmhelpers.core.hookenv.log') + @mock.patch('os.path.isdir') + @mock.patch('subprocess.call') + @mock.patch('subprocess.check_output') + def test_no_ipv6(self, check_output, call, isdir, log, is_enabled): + check_output.return_value = ('Firewall is active and enabled ' + 'on system startup\n') + isdir.return_value = False + call.return_value = 0 + is_enabled.return_value = False + ufw.enable() + + call.assert_called_with(['sed', '-i', 's/IPV6=.*/IPV6=no/g', + '/etc/default/ufw']) + log.assert_any_call('IPv6 support in ufw disabled', level='INFO') + + @mock.patch('charmhelpers.contrib.network.ufw.is_enabled') + @mock.patch('charmhelpers.core.hookenv.log') + @mock.patch('os.path.isdir') + @mock.patch('subprocess.call') + @mock.patch('subprocess.check_output') + @mock.patch('charmhelpers.contrib.network.ufw.modprobe') + def test_no_ip6_tables(self, modprobe, check_output, call, isdir, log, + is_enabled): + def c(*args, **kwargs): + if args[0] == ['lsmod']: + return LSMOD_NO_IP6 + elif args[0] == ['modprobe', 'ip6_tables']: + return "" + else: + return 'Firewall is active and enabled on system startup\n' + + check_output.side_effect = c + isdir.return_value = True + call.return_value = 0 + + is_enabled.return_value = False + self.assertTrue(ufw.enable()) + + @mock.patch('charmhelpers.contrib.network.ufw.is_enabled') + @mock.patch('charmhelpers.core.hookenv.log') + 
@mock.patch('os.path.isdir')
+    @mock.patch('charmhelpers.contrib.network.ufw.modprobe')
+    @mock.patch('charmhelpers.contrib.network.ufw.is_module_loaded')
+    def test_no_ip6_tables_fail_to_load(self, is_module_loaded,
+                                        modprobe, isdir, log, is_enabled):
+        is_module_loaded.return_value = False
+
+        def c(m):
+            raise subprocess.CalledProcessError(1, ['modprobe',
+                                                    'ip6_tables'],
+                                                "fail to load ip6_tables")
+
+        modprobe.side_effect = c
+        isdir.return_value = True
+        is_enabled.return_value = False
+
+        self.assertRaises(ufw.UFWIPv6Error, ufw.enable)
+
+    @mock.patch('charmhelpers.contrib.network.ufw.is_enabled')
+    @mock.patch('charmhelpers.core.hookenv.log')
+    @mock.patch('os.path.isdir')
+    @mock.patch('charmhelpers.contrib.network.ufw.modprobe')
+    @mock.patch('charmhelpers.contrib.network.ufw.is_module_loaded')
+    @mock.patch('subprocess.call')
+    @mock.patch('subprocess.check_output')
+    def test_no_ip6_tables_fail_to_load_soft_fail(self, check_output,
+                                                  call, is_module_loaded,
+                                                  modprobe,
+                                                  isdir, log, is_enabled):
+        is_module_loaded.return_value = False
+
+        def c(m):
+            raise subprocess.CalledProcessError(1, ['modprobe',
+                                                    'ip6_tables'],
+                                                "fail to load ip6_tables")
+
+        modprobe.side_effect = c
+        isdir.return_value = True
+        call.return_value = 0
+        check_output.return_value = ("Firewall is active and enabled on "
+                                     "system startup\n")
+        is_enabled.return_value = False
+        self.assertTrue(ufw.enable(soft_fail=True))
+        call.assert_called_with(['sed', '-i', 's/IPV6=.*/IPV6=no/g',
+                                 '/etc/default/ufw'])
+        log.assert_any_call('IPv6 support in ufw disabled', level='INFO')
+
+    @mock.patch('charmhelpers.contrib.network.ufw.is_enabled')
+    @mock.patch('charmhelpers.core.hookenv.log')
+    @mock.patch('os.path.isdir')
+    @mock.patch('subprocess.call')
+    @mock.patch('subprocess.check_output')
+    def test_no_ipv6_failed_disabling_ufw(self, check_output, call, isdir,
+                                          log, is_enabled):
+        check_output.return_value = ('Firewall is active and enabled '
+                                     'on system startup\n')
+        isdir.return_value = False
+        call.return_value = 1
+        is_enabled.return_value = False
+        self.assertRaises(ufw.UFWError, ufw.enable)
+
+        call.assert_called_with(['sed', '-i', 's/IPV6=.*/IPV6=no/g',
+                                 '/etc/default/ufw'])
+        log.assert_any_call("Couldn't disable IPv6 support in ufw",
+                            level="ERROR")
+
+    @mock.patch('charmhelpers.core.hookenv.log')
+    @mock.patch('charmhelpers.contrib.network.ufw.is_enabled')
+    @mock.patch('os.path.isdir')
+    @mock.patch('subprocess.check_output')
+    @mock.patch('charmhelpers.contrib.network.ufw.modprobe')
+    def test_with_ipv6(self, modprobe, check_output, isdir, is_enabled, log):
+        def c(*args, **kwargs):
+            if args[0] == ['lsmod']:
+                return LSMOD_IP6
+            else:
+                return 'Firewall is active and enabled on system startup\n'
+
+        check_output.side_effect = c
+        is_enabled.return_value = False
+        isdir.return_value = True
+        ufw.enable()
+
+    @mock.patch('charmhelpers.core.hookenv.log')
+    @mock.patch('subprocess.check_output')
+    def test_change_default_policy(self, check_output, log):
+        check_output.return_value = DEFAULT_POLICY_OUTPUT
+        self.assertTrue(ufw.default_policy())
+        check_output.assert_any_call(['ufw', 'default', 'deny', 'incoming'])
+
+    @mock.patch('charmhelpers.core.hookenv.log')
+    @mock.patch('subprocess.check_output')
+    def test_change_default_policy_allow_outgoing(self, check_output, log):
+        check_output.return_value = DEFAULT_POLICY_OUTPUT_OUTGOING
+        self.assertTrue(ufw.default_policy('allow', 'outgoing'))
+        check_output.assert_any_call(['ufw', 'default', 'allow', 'outgoing'])
+
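+    # default_policy() validates its policy/direction arguments before
+    # shelling out, and keys its boolean return value off ufw's
+    # "Default ... policy changed" message; the tests below cover the
+    # unexpected-output and bad-argument paths.
+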
+    @mock.patch('charmhelpers.core.hookenv.log')
+    @mock.patch('subprocess.check_output')
+    def test_change_default_policy_unexpected_output(self, check_output, log):
+        check_output.return_value = "asdf"
+        self.assertFalse(ufw.default_policy())
+
+    @mock.patch('charmhelpers.core.hookenv.log')
+    @mock.patch('subprocess.check_output')
+    def test_change_default_policy_wrong_policy(self, check_output, log):
+        self.assertRaises(ufw.UFWError, ufw.default_policy, 'asdf')
+
+    @mock.patch('charmhelpers.core.hookenv.log')
+    @mock.patch('subprocess.check_output')
+    def test_change_default_policy_wrong_direction(self, check_output, log):
+        self.assertRaises(ufw.UFWError, ufw.default_policy, 'allow', 'asdf')
+
+    @mock.patch('charmhelpers.core.hookenv.log')
+    @mock.patch('subprocess.check_output')
+    @mock.patch('charmhelpers.contrib.network.ufw.modprobe')
+    def test_reload_ok(self, modprobe, check_output, log):
+        msg = 'Firewall reloaded\n'
+        check_output.return_value = msg
+        self.assertTrue(ufw.reload())
+
+        check_output.assert_any_call(['ufw', 'reload'],
+                                     universal_newlines=True,
+                                     env={'LANG': 'en_US',
+                                          'PATH': os.environ['PATH']})
+        log.assert_any_call(msg, level='DEBUG')
+        log.assert_any_call('ufw reloaded', level='INFO')
+
+    @mock.patch('charmhelpers.core.hookenv.log')
+    @mock.patch('subprocess.check_output')
+    @mock.patch('charmhelpers.contrib.network.ufw.modprobe')
+    def test_reload_fail(self, modprobe, check_output, log):
+        msg = 'This did not work\n'
+        check_output.return_value = msg
+        self.assertFalse(ufw.reload())
+
+        check_output.assert_any_call(['ufw', 'reload'],
+                                     universal_newlines=True,
+                                     env={'LANG': 'en_US',
+                                          'PATH': os.environ['PATH']})
+        log.assert_any_call(msg, level='DEBUG')
+        log.assert_any_call("ufw couldn't be reloaded", level='WARN')
+
+    def test_status(self):
+        with mock.patch('subprocess.check_output') as check_output:
+            check_output.return_value = UFW_STATUS_NUMBERED
+            expect = {
+                1: {'to': '6641/tcp', 'action': 'allow in',
+                    'from': '10.219.3.86', 'ipv6': False,
+                    'comment': 'charm-ovn-central'},
+                12: {'to': '6641/tcp', 'action': 'reject in',
+                     'from': 'any', 'ipv6': False,
+                     'comment': ''},
+                19: {'to': '6644/tcp', 'action': 'reject in',
+                     'from': 'any', 'ipv6': True,
+                     'comment': 'charm-ovn-central'},
+            }
+            n_rules = 0
+            for n, r in ufw.status():
+                self.assertDictEqual(r, expect[n])
+                n_rules += 1
+            self.assertEqual(n_rules, 3)
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/openstack/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/openstack/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/openstack/ha/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/openstack/ha/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/openstack/ha/test_ha_utils.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/openstack/ha/test_ha_utils.py
new file mode 100644
index 0000000000000000000000000000000000000000..a83963211d1079e0237492a1fb162046fbf52b49
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/openstack/ha/test_ha_utils.py
@@ -0,0 +1,445 @@
+from
mock import patch +import unittest +import json + +from charmhelpers.contrib.openstack.ha import utils as ha + +IFACE_LOOKUPS = { + '10.5.100.1': 'eth1', + 'ffff::1': 'eth1', + 'ffaa::1': 'eth2', +} + +NETMASK_LOOKUPS = { + '10.5.100.1': '255.255.255.0', + 'ffff::1': '64', + 'ffaa::1': '32', +} + + +class HATests(unittest.TestCase): + def setUp(self): + super(HATests, self).setUp() + [self._patch(m) for m in [ + 'charm_name', + 'config', + 'relation_set', + 'resolve_address', + 'status_set', + 'get_hacluster_config', + 'get_iface_for_address', + 'get_netmask_for_address', + ]] + self.resources = {'res_test_haproxy': 'lsb:haproxy'} + self.resource_params = {'res_test_haproxy': 'op monitor interval="5s"'} + self.conf = {} + self.config.side_effect = lambda key: self.conf.get(key) + self.maxDiff = None + self.get_iface_for_address.side_effect = \ + lambda x: IFACE_LOOKUPS.get(x) + self.get_netmask_for_address.side_effect = \ + lambda x: NETMASK_LOOKUPS.get(x) + + def _patch(self, method): + _m = patch.object(ha, method) + mock = _m.start() + self.addCleanup(_m.stop) + setattr(self, method, mock) + + @patch.object(ha, 'log', lambda *args, **kwargs: None) + @patch.object(ha, 'assert_charm_supports_dns_ha') + def test_update_dns_ha_resource_params_none(self, + assert_charm_supports_dns_ha): + self.conf = { + 'os-admin-hostname': None, + 'os-internal-hostname': None, + 'os-public-hostname': None, + } + + with self.assertRaises(ha.DNSHAException): + ha.update_dns_ha_resource_params( + relation_id='ha:1', + resources=self.resources, + resource_params=self.resource_params) + + @patch.object(ha, 'log', lambda *args, **kwargs: None) + @patch.object(ha, 'assert_charm_supports_dns_ha') + def test_update_dns_ha_resource_params_one(self, + assert_charm_supports_dns_ha): + EXPECTED_RESOURCES = {'res_test_public_hostname': 'ocf:maas:dns', + 'res_test_haproxy': 'lsb:haproxy'} + EXPECTED_RESOURCE_PARAMS = { + 'res_test_public_hostname': ('params fqdn="test.maas" ' + 'ip_address="10.0.0.1"'), + 'res_test_haproxy': 'op monitor interval="5s"'} + + self.conf = { + 'os-admin-hostname': None, + 'os-internal-hostname': None, + 'os-public-hostname': 'test.maas', + } + + self.charm_name.return_value = 'test' + self.resolve_address.return_value = '10.0.0.1' + ha.update_dns_ha_resource_params(relation_id='ha:1', + resources=self.resources, + resource_params=self.resource_params) + self.assertEqual(self.resources, EXPECTED_RESOURCES) + self.assertEqual(self.resource_params, EXPECTED_RESOURCE_PARAMS) + self.relation_set.assert_called_with( + groups={'grp_test_hostnames': 'res_test_public_hostname'}, + relation_id='ha:1') + + @patch.object(ha, 'log', lambda *args, **kwargs: None) + @patch.object(ha, 'assert_charm_supports_dns_ha') + def test_update_dns_ha_resource_params_all(self, + assert_charm_supports_dns_ha): + EXPECTED_RESOURCES = {'res_test_admin_hostname': 'ocf:maas:dns', + 'res_test_int_hostname': 'ocf:maas:dns', + 'res_test_public_hostname': 'ocf:maas:dns', + 'res_test_haproxy': 'lsb:haproxy'} + EXPECTED_RESOURCE_PARAMS = { + 'res_test_admin_hostname': ('params fqdn="test.admin.maas" ' + 'ip_address="10.0.0.1"'), + 'res_test_int_hostname': ('params fqdn="test.internal.maas" ' + 'ip_address="10.0.0.1"'), + 'res_test_public_hostname': ('params fqdn="test.public.maas" ' + 'ip_address="10.0.0.1"'), + 'res_test_haproxy': 'op monitor interval="5s"'} + + self.conf = { + 'os-admin-hostname': 'test.admin.maas', + 'os-internal-hostname': 'test.internal.maas', + 'os-public-hostname': 'test.public.maas', + } + + 
self.charm_name.return_value = 'test' + self.resolve_address.return_value = '10.0.0.1' + ha.update_dns_ha_resource_params(relation_id='ha:1', + resources=self.resources, + resource_params=self.resource_params) + self.assertEqual(self.resources, EXPECTED_RESOURCES) + self.assertEqual(self.resource_params, EXPECTED_RESOURCE_PARAMS) + self.relation_set.assert_called_with( + groups={'grp_test_hostnames': + ('res_test_admin_hostname ' + 'res_test_int_hostname ' + 'res_test_public_hostname')}, + relation_id='ha:1') + + @patch.object(ha, 'lsb_release') + def test_assert_charm_supports_dns_ha(self, lsb_release): + lsb_release.return_value = {'DISTRIB_RELEASE': '16.04'} + self.assertTrue(ha.assert_charm_supports_dns_ha()) + + @patch.object(ha, 'lsb_release') + def test_assert_charm_supports_dns_ha_exception(self, lsb_release): + lsb_release.return_value = {'DISTRIB_RELEASE': '12.04'} + self.assertRaises(ha.DNSHAException, + lambda: ha.assert_charm_supports_dns_ha()) + + @patch.object(ha, 'expected_related_units') + def tests_expect_ha(self, expected_related_units): + expected_related_units.return_value = (x for x in []) + self.conf = {'vip': None, + 'dns-ha': None} + self.assertFalse(ha.expect_ha()) + + expected_related_units.return_value = (x for x in ['hacluster-unit/0', + 'hacluster-unit/1', + 'hacluster-unit/2']) + self.conf = {'vip': None, + 'dns-ha': None} + self.assertTrue(ha.expect_ha()) + + expected_related_units.side_effect = NotImplementedError + self.conf = {'vip': '10.0.0.1', + 'dns-ha': None} + self.assertTrue(ha.expect_ha()) + + self.conf = {'vip': None, + 'dns-ha': True} + self.assertTrue(ha.expect_ha()) + + def test_get_vip_settings(self): + self.assertEqual( + ha.get_vip_settings('10.5.100.1'), + ('eth1', '255.255.255.0', False)) + + def test_get_vip_settings_fallback(self): + self.conf = {'vip_iface': 'eth3', + 'vip_cidr': '255.255.0.0'} + self.assertEqual( + ha.get_vip_settings('192.168.100.1'), + ('eth3', '255.255.0.0', True)) + + def test_update_hacluster_vip_single_vip(self): + self.get_hacluster_config.return_value = { + 'vip': '10.5.100.1' + } + test_data = {'resources': {}, 'resource_params': {}} + expected = { + 'delete_resources': ['res_testservice_eth1_vip'], + 'groups': { + 'grp_testservice_vips': 'res_testservice_242d562_vip' + }, + 'resource_params': { + 'res_testservice_242d562_vip': + ('params ip="10.5.100.1" op monitor ' + 'timeout="20s" interval="10s" depth="0"') + }, + 'resources': { + 'res_testservice_242d562_vip': 'ocf:heartbeat:IPaddr2' + } + } + ha.update_hacluster_vip('testservice', test_data) + self.assertEqual(test_data, expected) + + def test_update_hacluster_vip_single_vip_fallback(self): + self.get_hacluster_config.return_value = { + 'vip': '10.5.100.1' + } + test_data = {'resources': {}, 'resource_params': {}} + expected = { + 'delete_resources': ['res_testservice_eth1_vip'], + 'groups': { + 'grp_testservice_vips': 'res_testservice_242d562_vip' + }, + 'resource_params': { + 'res_testservice_242d562_vip': + ('params ip="10.5.100.1" op monitor ' + 'timeout="20s" interval="10s" depth="0"') + }, + 'resources': { + 'res_testservice_242d562_vip': 'ocf:heartbeat:IPaddr2' + } + } + ha.update_hacluster_vip('testservice', test_data) + self.assertEqual(test_data, expected) + + def test_update_hacluster_config_vip(self): + self.get_iface_for_address.side_effect = lambda x: None + self.get_netmask_for_address.side_effect = lambda x: None + self.conf = {'vip_iface': 'eth1', + 'vip_cidr': '255.255.255.0'} + self.get_hacluster_config.return_value = { + 'vip': 
'10.5.100.1' + } + test_data = {'resources': {}, 'resource_params': {}} + expected = { + 'delete_resources': ['res_testservice_eth1_vip'], + 'groups': { + 'grp_testservice_vips': 'res_testservice_242d562_vip' + }, + 'resource_params': { + 'res_testservice_242d562_vip': ( + 'params ip="10.5.100.1" cidr_netmask="255.255.255.0" ' + 'nic="eth1" op monitor timeout="20s" ' + 'interval="10s" depth="0"') + + }, + 'resources': { + 'res_testservice_242d562_vip': 'ocf:heartbeat:IPaddr2' + } + } + ha.update_hacluster_vip('testservice', test_data) + self.assertEqual(test_data, expected) + + def test_update_hacluster_vip_multiple_vip(self): + self.get_hacluster_config.return_value = { + 'vip': '10.5.100.1 ffff::1 ffaa::1' + } + test_data = {'resources': {}, 'resource_params': {}} + expected = { + 'groups': { + 'grp_testservice_vips': ('res_testservice_242d562_vip ' + 'res_testservice_856d56f_vip ' + 'res_testservice_f563c5d_vip') + }, + 'delete_resources': ['res_testservice_eth1_vip', + 'res_testservice_eth1_vip_ipv6addr', + 'res_testservice_eth2_vip'], + 'resource_params': { + 'res_testservice_242d562_vip': + ('params ip="10.5.100.1" op monitor ' + 'timeout="20s" interval="10s" depth="0"'), + 'res_testservice_856d56f_vip': + ('params ipv6addr="ffff::1" op monitor ' + 'timeout="20s" interval="10s" depth="0"'), + 'res_testservice_f563c5d_vip': + ('params ipv6addr="ffaa::1" op monitor ' + 'timeout="20s" interval="10s" depth="0"'), + }, + 'resources': { + 'res_testservice_242d562_vip': 'ocf:heartbeat:IPaddr2', + 'res_testservice_856d56f_vip': 'ocf:heartbeat:IPv6addr', + 'res_testservice_f563c5d_vip': 'ocf:heartbeat:IPv6addr', + } + } + ha.update_hacluster_vip('testservice', test_data) + self.assertEqual(test_data, expected) + + def test_generate_ha_relation_data_haproxy_disabled(self): + self.get_hacluster_config.return_value = { + 'vip': '10.5.100.1 ffff::1 ffaa::1' + } + extra_settings = { + 'colocations': {'vip_cauth': 'inf: res_nova_cauth grp_nova_vips'}, + 'init_services': {'res_nova_cauth': 'nova-cauth'}, + 'delete_resources': ['res_ceilometer_polling'], + 'groups': {'grp_testservice_wombles': 'res_testservice_orinoco'}, + } + expected = { + 'colocations': {'vip_cauth': 'inf: res_nova_cauth grp_nova_vips'}, + 'groups': { + 'grp_testservice_vips': ('res_testservice_242d562_vip ' + 'res_testservice_856d56f_vip ' + 'res_testservice_f563c5d_vip'), + 'grp_testservice_wombles': 'res_testservice_orinoco' + }, + 'resource_params': { + 'res_testservice_242d562_vip': + ('params ip="10.5.100.1" op monitor ' + 'timeout="20s" interval="10s" depth="0"'), + 'res_testservice_856d56f_vip': + ('params ipv6addr="ffff::1" op monitor ' + 'timeout="20s" interval="10s" depth="0"'), + 'res_testservice_f563c5d_vip': + ('params ipv6addr="ffaa::1" op monitor ' + 'timeout="20s" interval="10s" depth="0"'), + }, + 'resources': { + 'res_testservice_242d562_vip': 'ocf:heartbeat:IPaddr2', + 'res_testservice_856d56f_vip': 'ocf:heartbeat:IPv6addr', + 'res_testservice_f563c5d_vip': 'ocf:heartbeat:IPv6addr', + }, + 'clones': {}, + 'init_services': { + 'res_nova_cauth': 'nova-cauth' + }, + 'delete_resources': ["res_ceilometer_polling", + "res_testservice_eth1_vip", + "res_testservice_eth1_vip_ipv6addr", + "res_testservice_eth2_vip"], + } + expected = { + 'json_{}'.format(k): json.dumps(v, **ha.JSON_ENCODE_OPTIONS) + for k, v in expected.items() if v + } + self.assertEqual( + ha.generate_ha_relation_data('testservice', + haproxy_enabled=False, + extra_settings=extra_settings), + expected) + + def test_generate_ha_relation_data(self): + 
self.get_hacluster_config.return_value = { + 'vip': '10.5.100.1 ffff::1 ffaa::1' + } + extra_settings = { + 'colocations': {'vip_cauth': 'inf: res_nova_cauth grp_nova_vips'}, + 'init_services': {'res_nova_cauth': 'nova-cauth'}, + 'delete_resources': ['res_ceilometer_polling'], + 'groups': {'grp_testservice_wombles': 'res_testservice_orinoco'}, + } + expected = { + 'colocations': {'vip_cauth': 'inf: res_nova_cauth grp_nova_vips'}, + 'groups': { + 'grp_testservice_vips': ('res_testservice_242d562_vip ' + 'res_testservice_856d56f_vip ' + 'res_testservice_f563c5d_vip'), + 'grp_testservice_wombles': 'res_testservice_orinoco' + }, + 'resource_params': { + 'res_testservice_242d562_vip': + ('params ip="10.5.100.1" op monitor ' + 'timeout="20s" interval="10s" depth="0"'), + 'res_testservice_856d56f_vip': + ('params ipv6addr="ffff::1" op monitor ' + 'timeout="20s" interval="10s" depth="0"'), + 'res_testservice_f563c5d_vip': + ('params ipv6addr="ffaa::1" op monitor ' + 'timeout="20s" interval="10s" depth="0"'), + 'res_testservice_haproxy': + ('meta migration-threshold="INFINITY" failure-timeout="5s" ' + 'op monitor interval="5s"'), + }, + 'resources': { + 'res_testservice_242d562_vip': 'ocf:heartbeat:IPaddr2', + 'res_testservice_856d56f_vip': 'ocf:heartbeat:IPv6addr', + 'res_testservice_f563c5d_vip': 'ocf:heartbeat:IPv6addr', + 'res_testservice_haproxy': 'lsb:haproxy', + }, + 'clones': { + 'cl_testservice_haproxy': 'res_testservice_haproxy', + }, + 'init_services': { + 'res_testservice_haproxy': 'haproxy', + 'res_nova_cauth': 'nova-cauth' + }, + 'delete_resources': ["res_ceilometer_polling", + "res_testservice_eth1_vip", + "res_testservice_eth1_vip_ipv6addr", + "res_testservice_eth2_vip"], + } + expected = { + 'json_{}'.format(k): json.dumps(v, **ha.JSON_ENCODE_OPTIONS) + for k, v in expected.items() if v + } + self.assertEqual( + ha.generate_ha_relation_data('testservice', + extra_settings=extra_settings), + expected) + + @patch.object(ha, 'log') + @patch.object(ha, 'assert_charm_supports_dns_ha') + def test_generate_ha_relation_data_dns_ha(self, + assert_charm_supports_dns_ha, + log): + self.get_hacluster_config.return_value = { + 'vip': '10.5.100.1 ffff::1 ffaa::1' + } + self.conf = { + 'os-admin-hostname': 'test.admin.maas', + 'os-internal-hostname': 'test.internal.maas', + 'os-public-hostname': 'test.public.maas', + 'dns-ha': True, + } + self.resolve_address.return_value = '10.0.0.1' + assert_charm_supports_dns_ha.return_value = True + expected = { + 'groups': { + 'grp_testservice_hostnames': ('res_testservice_admin_hostname' + ' res_testservice_int_hostname' + ' res_testservice_public_hostname') + }, + 'resource_params': { + 'res_testservice_admin_hostname': + 'params fqdn="test.admin.maas" ip_address="10.0.0.1"', + 'res_testservice_int_hostname': + 'params fqdn="test.internal.maas" ip_address="10.0.0.1"', + 'res_testservice_public_hostname': + 'params fqdn="test.public.maas" ip_address="10.0.0.1"', + 'res_testservice_haproxy': + ('meta migration-threshold="INFINITY" failure-timeout="5s" ' + 'op monitor interval="5s"'), + }, + 'resources': { + 'res_testservice_admin_hostname': 'ocf:maas:dns', + 'res_testservice_int_hostname': 'ocf:maas:dns', + 'res_testservice_public_hostname': 'ocf:maas:dns', + 'res_testservice_haproxy': 'lsb:haproxy', + }, + 'clones': { + 'cl_testservice_haproxy': 'res_testservice_haproxy', + }, + 'init_services': { + 'res_testservice_haproxy': 'haproxy' + }, + } + expected = { + 'json_{}'.format(k): json.dumps(v, **ha.JSON_ENCODE_OPTIONS) + for k, v in expected.items() if v 
+ } + self.assertEqual(ha.generate_ha_relation_data('testservice'), + expected) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/openstack/test_alternatives.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/openstack/test_alternatives.py new file mode 100644 index 0000000000000000000000000000000000000000..402f0d42088c7e66afd043028216dd5b76186cdc --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/openstack/test_alternatives.py @@ -0,0 +1,76 @@ +from testtools import TestCase +from mock import patch + +import charmhelpers.contrib.openstack.alternatives as alternatives + + +NAME = 'test' +SOURCE = '/var/lib/charm/test/test.conf' +TARGET = '/etc/test/test,conf' + + +class AlternativesTestCase(TestCase): + + @patch('subprocess.os.path') + @patch('subprocess.check_call') + def test_new_alternative(self, _check, _path): + _path.exists.return_value = False + alternatives.install_alternative(NAME, + TARGET, + SOURCE) + _check.assert_called_with( + ['update-alternatives', '--force', '--install', + TARGET, NAME, SOURCE, '50'] + ) + + @patch('subprocess.os.path') + @patch('subprocess.check_call') + def test_priority(self, _check, _path): + _path.exists.return_value = False + alternatives.install_alternative(NAME, + TARGET, + SOURCE, 100) + _check.assert_called_with( + ['update-alternatives', '--force', '--install', + TARGET, NAME, SOURCE, '100'] + ) + + @patch('shutil.move') + @patch('subprocess.os.path') + @patch('subprocess.check_call') + def test_new_alternative_existing_file(self, _check, + _path, _move): + _path.exists.return_value = True + _path.islink.return_value = False + alternatives.install_alternative(NAME, + TARGET, + SOURCE) + _check.assert_called_with( + ['update-alternatives', '--force', '--install', + TARGET, NAME, SOURCE, '50'] + ) + _move.assert_called_with(TARGET, '{}.bak'.format(TARGET)) + + @patch('shutil.move') + @patch('subprocess.os.path') + @patch('subprocess.check_call') + def test_new_alternative_existing_link(self, _check, + _path, _move): + _path.exists.return_value = True + _path.islink.return_value = True + alternatives.install_alternative(NAME, + TARGET, + SOURCE) + _check.assert_called_with( + ['update-alternatives', '--force', '--install', + TARGET, NAME, SOURCE, '50'] + ) + _move.assert_not_called() + + @patch('subprocess.check_call') + def test_remove_alternative(self, _check): + alternatives.remove_alternative(NAME, SOURCE) + _check.assert_called_with( + ['update-alternatives', '--remove', + NAME, SOURCE] + ) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/openstack/test_audits.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/openstack/test_audits.py new file mode 100644 index 0000000000000000000000000000000000000000..930292dc1bcc4a7973e49d6cedfce3aae3d0f5d7 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/openstack/test_audits.py @@ -0,0 +1,329 @@ +from testtools import TestCase, skipIf +from mock import patch, MagicMock, call +import six + +import charmhelpers.contrib.openstack.audits as audits +import charmhelpers.contrib.openstack.audits.openstack_security_guide as guide + + +@skipIf(six.PY2, 'Audits only support Python3') +class AuditTestCase(TestCase): + + @patch('charmhelpers.contrib.openstack.audits._audits', {}) + def test_wrapper(self): + variables = { + 'guard_called': False, + 'test_run': False, + 
} + + def should_run(audit_options): + variables['guard_called'] = True + return True + + @audits.audit(should_run) + def test(options): + variables['test_run'] = True + + audits.run({}) + self.assertTrue(variables['guard_called']) + self.assertTrue(variables['test_run']) + self.assertEqual(audits._audits['test'], audits.Audit(test, (should_run,))) + + @patch('charmhelpers.contrib.openstack.audits._audits', {}) + def test_wrapper_not_run(self): + variables = { + 'guard_called': False, + 'test_run': False, + } + + def should_run(audit_options): + variables['guard_called'] = True + return False + + @audits.audit(should_run) + def test(options): + variables['test_run'] = True + + audits.run({}) + self.assertTrue(variables['guard_called']) + self.assertFalse(variables['test_run']) + self.assertEqual(audits._audits['test'], audits.Audit(test, (should_run,))) + + @patch('charmhelpers.contrib.openstack.audits._audits', {}) + def test_duplicate_audit(self): + def should_run(audit_options): + return True + + @audits.audit(should_run) + def test(options): + pass + + try: + # Again! + # + # Both of the following '#noqa's are to prevent flake8 from + # noticing the duplicate function `test` The intent in this test + # is for the audits.audit to pick up on the duplicate function. + @audits.audit(should_run) # noqa + def test(options): # noqa + pass + except RuntimeError as e: + self.assertEqual("Test name 'test' used more than once", e.args[0]) + return + self.assertTrue(False, "Duplicate audit should raise an exception") + + @patch('charmhelpers.contrib.openstack.audits._audits', {}) + def test_non_callable_filter(self): + try: + # Again! + @audits.audit(3) + def test(options): + pass + except RuntimeError as e: + self.assertEqual("Configuration includes non-callable filters: [3]", e.args[0]) + return + self.assertTrue(False, "Duplicate audit should raise an exception") + + @patch('charmhelpers.contrib.openstack.audits._audits', {}) + def test_exclude_config(self): + variables = { + 'test_run': False, + } + + @audits.audit() + def test(options): + variables['test_run'] = True + + audits.run({'excludes': ['test']}) + self.assertFalse(variables['test_run']) + + +class AuditsTestCase(TestCase): + + @patch('charmhelpers.contrib.openstack.audits.cmp_pkgrevno') + def test_since_package_less(self, _cmp_pkgrevno): + _cmp_pkgrevno.return_value = 1 + + verifier = audits.since_package('test', '12.0.0') + self.assertEqual(verifier(), True) + + @patch('charmhelpers.contrib.openstack.audits.cmp_pkgrevno') + def test_since_package_greater(self, _cmp_pkgrevno): + _cmp_pkgrevno.return_value = -1 + + verifier = audits.since_package('test', '14.0.0') + self.assertEqual(verifier(), False) + + @patch('charmhelpers.contrib.openstack.audits.cmp_pkgrevno') + def test_since_package_equal(self, _cmp_pkgrevno): + _cmp_pkgrevno.return_value = 0 + + verifier = audits.since_package('test', '13.0.0') + self.assertEqual(verifier(), True) + + @patch('charmhelpers.contrib.openstack.utils.get_os_codename_package') + def test_since_openstack_less(self, _get_os_codename_package): + _get_os_codename_package.return_value = "icehouse" + + verifier = audits.since_openstack_release('test', 'mitaka') + self.assertEqual(verifier(), False) + + @patch('charmhelpers.contrib.openstack.utils.get_os_codename_package') + def test_since_openstack_greater(self, _get_os_codename_package): + _get_os_codename_package.return_value = "rocky" + + verifier = audits.since_openstack_release('test', 'queens') + self.assertEqual(verifier(), True) + + 
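+    # The since_*/before_* helpers return a closure so the version
+    # comparison is deferred until the audit actually runs.  Sketch of the
+    # pattern (names mirror the surrounding tests):
+    #
+    #     verifier = audits.since_openstack_release('test', 'mitaka')
+    #     verifier()  # True once the detected codename is mitaka or later
+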
@patch('charmhelpers.contrib.openstack.utils.get_os_codename_package') + def test_since_openstack_equal(self, _get_os_codename_package): + _get_os_codename_package.return_value = "mitaka" + + verifier = audits.since_openstack_release('test', 'mitaka') + self.assertEqual(verifier(), True) + + @patch('charmhelpers.contrib.openstack.utils.get_os_codename_package') + def test_before_openstack_less(self, _get_os_codename_package): + _get_os_codename_package.return_value = "icehouse" + + verifier = audits.before_openstack_release('test', 'mitaka') + self.assertEqual(verifier(), True) + + @patch('charmhelpers.contrib.openstack.utils.get_os_codename_package') + def test_before_openstack_greater(self, _get_os_codename_package): + _get_os_codename_package.return_value = "rocky" + + verifier = audits.before_openstack_release('test', 'queens') + self.assertEqual(verifier(), False) + + @patch('charmhelpers.contrib.openstack.utils.get_os_codename_package') + def test_before_openstack_equal(self, _get_os_codename_package): + _get_os_codename_package.return_value = "mitaka" + + verifier = audits.before_openstack_release('test', 'mitaka') + self.assertEqual(verifier(), False) + + @patch('charmhelpers.contrib.openstack.audits.cmp_pkgrevno') + def test_before_package_less(self, _cmp_pkgrevno): + _cmp_pkgrevno.return_value = 1 + + verifier = audits.before_package('test', '12.0.0') + self.assertEqual(verifier(), False) + + @patch('charmhelpers.contrib.openstack.audits.cmp_pkgrevno') + def test_before_package_greater(self, _cmp_pkgrevno): + _cmp_pkgrevno.return_value = -1 + + verifier = audits.before_package('test', '14.0.0') + self.assertEqual(verifier(), True) + + @patch('charmhelpers.contrib.openstack.audits.cmp_pkgrevno') + def test_before_package_equal(self, _cmp_pkgrevno): + _cmp_pkgrevno.return_value = 0 + + verifier = audits.before_package('test', '13.0.0') + self.assertEqual(verifier(), False) + + def test_is_audit_type_empty(self): + verifier = audits.is_audit_type(audits.AuditType.OpenStackSecurityGuide) + self.assertEqual(verifier({}), False) + + def test_is_audit_type(self): + verifier = audits.is_audit_type(audits.AuditType.OpenStackSecurityGuide) + self.assertEqual(verifier({'audit_type': audits.AuditType.OpenStackSecurityGuide}), True) + + +@skipIf(six.PY2, 'Audits only support Python3') +class OpenstackSecurityGuideTestCase(TestCase): + + @patch('configparser.ConfigParser') + def test_internal_config_parser_is_not_strict(self, _config_parser): + parser = MagicMock() + _config_parser.return_value = parser + guide._config_ini('test') + _config_parser.assert_called_with(strict=False) + parser.read.assert_called_with('test') + + @patch('charmhelpers.contrib.openstack.audits.openstack_security_guide._stat') + def test_internal_validate_file_ownership(self, _stat): + _stat.return_value = guide.Ownership('test_user', 'test_group', '600') + guide._validate_file_ownership('test_user', 'test_group', 'test-file-name') + _stat.assert_called_with('test-file-name') + pass + + @patch('charmhelpers.contrib.openstack.audits.openstack_security_guide._stat') + def test_internal_validate_file_mode(self, _stat): + _stat.return_value = guide.Ownership('test_user', 'test_group', '600') + guide._validate_file_mode('600', 'test-file-name') + _stat.assert_called_with('test-file-name') + pass + + @patch('os.path.isfile') + @patch('charmhelpers.contrib.openstack.audits.openstack_security_guide._validate_file_mode') + def test_validate_file_permissions_defaults(self, _validate_mode, _is_file): + _is_file.return_value = 
True + config = { + 'files': { + 'test': {} + } + } + guide.validate_file_permissions(config) + _validate_mode.assert_called_once_with('600', 'test', False) + + @patch('os.path.isfile') + @patch('charmhelpers.contrib.openstack.audits.openstack_security_guide._validate_file_mode') + def test_validate_file_permissions(self, _validate_mode, _is_file): + _is_file.return_value = True + config = { + 'files': { + 'test': { + 'mode': '777' + } + } + } + guide.validate_file_permissions(config) + _validate_mode.assert_called_once_with('777', 'test', False) + + @patch('glob.glob') + @patch('os.path.isfile') + @patch('charmhelpers.contrib.openstack.audits.openstack_security_guide._validate_file_mode') + def test_validate_file_permissions_glob(self, _validate_mode, _is_file, _glob): + _glob.return_value = ['test'] + _is_file.return_value = True + config = { + 'files': { + '*': { + 'mode': '777' + } + } + } + guide.validate_file_permissions(config) + _validate_mode.assert_called_once_with('777', 'test', False) + + @patch('os.path.isfile') + @patch('charmhelpers.contrib.openstack.audits.openstack_security_guide._validate_file_ownership') + def test_validate_file_ownership_defaults(self, _validate_owner, _is_file): + _is_file.return_value = True + config = { + 'files': { + 'test': {} + } + } + guide.validate_file_ownership(config) + _validate_owner.assert_called_once_with('root', 'root', 'test', False) + + @patch('os.path.isfile') + @patch('charmhelpers.contrib.openstack.audits.openstack_security_guide._validate_file_ownership') + def test_validate_file_ownership(self, _validate_owner, _is_file): + _is_file.return_value = True + config = { + 'files': { + 'test': { + 'owner': 'test-user', + 'group': 'test-group', + } + } + } + guide.validate_file_ownership(config) + _validate_owner.assert_called_once_with('test-user', 'test-group', 'test', False) + + @patch('glob.glob') + @patch('os.path.isfile') + @patch('charmhelpers.contrib.openstack.audits.openstack_security_guide._validate_file_ownership') + def test_validate_file_ownership_glob(self, _validate_owner, _is_file, _glob): + _glob.return_value = ['test'] + _is_file.return_value = True + config = { + 'files': { + '*': { + 'owner': 'test-user', + 'group': 'test-group', + } + } + } + guide.validate_file_ownership(config) + _validate_owner.assert_called_once_with('test-user', 'test-group', 'test', False) + + @patch('charmhelpers.contrib.openstack.audits.openstack_security_guide._config_section') + def test_validate_uses_keystone(self, _config_section): + _config_section.side_effect = [None, { + 'auth_strategy': 'keystone', + }] + guide.validate_uses_keystone({}) + _config_section.assert_has_calls([call({}, 'api'), call({}, 'DEFAULT')]) + + @patch('charmhelpers.contrib.openstack.audits.openstack_security_guide._config_section') + def test_validate_uses_tls_for_keystone(self, _config_section): + _config_section.return_value = { + 'auth_uri': 'https://10.10.10.10', + } + guide.validate_uses_tls_for_keystone({}) + _config_section.assert_called_with({}, 'keystone_authtoken') + + @patch('charmhelpers.contrib.openstack.audits.openstack_security_guide._config_section') + def test_validate_uses_tls_for_glance(self, _config_section): + _config_section.return_value = { + 'api_servers': 'https://10.10.10.10', + } + guide.validate_uses_tls_for_glance({}) + _config_section.assert_called_with({}, 'glance') diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/openstack/test_cert_utils.py 
b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/openstack/test_cert_utils.py new file mode 100644 index 0000000000000000000000000000000000000000..ce4117d3942253c9eb34eca69f8762c5aa768d33 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/openstack/test_cert_utils.py @@ -0,0 +1,308 @@ +import json +import mock +import unittest + +import charmhelpers.contrib.openstack.cert_utils as cert_utils + + +class CertUtilsTests(unittest.TestCase): + + def test_CertRequest(self): + cr = cert_utils.CertRequest() + self.assertEqual(cr.entries, []) + self.assertIsNone(cr.hostname_entry) + + @mock.patch.object(cert_utils, 'local_unit', return_value='unit/2') + def test_CertRequest_add_entry(self, local_unit): + cr = cert_utils.CertRequest() + cr.add_entry('admin', 'admin.openstack.local', ['10.10.10.10']) + self.assertEqual( + cr.get_request(), + {'cert_requests': + '{"admin.openstack.local": {"sans": ["10.10.10.10"]}}', + 'unit_name': 'unit_2'}) + + @mock.patch.object(cert_utils, 'local_unit', return_value='unit/2') + @mock.patch.object(cert_utils, 'resolve_network_cidr') + @mock.patch.object(cert_utils, 'get_vip_in_network') + @mock.patch.object(cert_utils, 'get_hostname') + @mock.patch.object(cert_utils, 'unit_get') + def test_CertRequest_add_hostname_cn(self, unit_get, get_hostname, + get_vip_in_network, + resolve_network_cidr, local_unit): + resolve_network_cidr.side_effect = lambda x: x + get_vip_in_network.return_value = '10.1.2.100' + unit_get.return_value = '10.1.2.3' + get_hostname.return_value = 'juju-unit-2' + cr = cert_utils.CertRequest() + cr.add_hostname_cn() + self.assertEqual( + cr.get_request(), + {'cert_requests': + '{"juju-unit-2": {"sans": ["10.1.2.100", "10.1.2.3"]}}', + 'unit_name': 'unit_2'}) + + @mock.patch.object(cert_utils, 'local_unit', return_value='unit/2') + @mock.patch.object(cert_utils, 'resolve_network_cidr') + @mock.patch.object(cert_utils, 'get_vip_in_network') + @mock.patch.object(cert_utils, 'get_hostname') + @mock.patch.object(cert_utils, 'unit_get') + def test_CertRequest_add_hostname_cn_ip(self, unit_get, get_hostname, + get_vip_in_network, + resolve_network_cidr, local_unit): + resolve_network_cidr.side_effect = lambda x: x + get_vip_in_network.return_value = '10.1.2.100' + unit_get.return_value = '10.1.2.3' + get_hostname.return_value = 'juju-unit-2' + cr = cert_utils.CertRequest() + cr.add_hostname_cn() + cr.add_hostname_cn_ip(['10.1.2.4']) + self.assertEqual( + cr.get_request(), + {'cert_requests': + ('{"juju-unit-2": {"sans": ["10.1.2.100", "10.1.2.3", ' + '"10.1.2.4"]}}'), + 'unit_name': 'unit_2'}) + + @mock.patch.object(cert_utils, 'local_unit', return_value='unit/2') + @mock.patch.object(cert_utils, 'resolve_network_cidr') + @mock.patch.object(cert_utils, 'get_vip_in_network') + @mock.patch.object(cert_utils, 'network_get_primary_address') + @mock.patch.object(cert_utils, 'resolve_address') + @mock.patch.object(cert_utils, 'config') + @mock.patch.object(cert_utils, 'get_hostname') + @mock.patch.object(cert_utils, 'unit_get') + def test_get_certificate_request(self, unit_get, get_hostname, + config, resolve_address, + network_get_primary_address, + get_vip_in_network, resolve_network_cidr, + local_unit): + unit_get.return_value = '10.1.2.3' + get_hostname.return_value = 'juju-unit-2' + _config = { + 'os-internal-hostname': 'internal.openstack.local', + 'os-admin-hostname': 'admin.openstack.local', + 'os-public-hostname': 'public.openstack.local', + } + _resolve_address = { + 
'int': '10.0.0.2', + 'admin': '10.10.0.2', + 'public': '10.20.0.2', + } + _npa = { + 'internal': '10.0.0.3', + 'admin': '10.10.0.3', + 'public': '10.20.0.3', + } + _vips = { + '10.0.0.0/16': '10.0.0.100', + '10.10.0.0/16': '10.10.0.100', + '10.20.0.0/16': '10.20.0.100', + } + _resolve_nets = { + '10.0.0.3': '10.0.0.0/16', + '10.10.0.3': '10.10.0.0/16', + '10.20.0.3': '10.20.0.0/16', + } + expect = { + 'admin.openstack.local': { + 'sans': ['10.10.0.100', '10.10.0.2', '10.10.0.3']}, + 'internal.openstack.local': { + 'sans': ['10.0.0.100', '10.0.0.2', '10.0.0.3']}, + 'juju-unit-2': {'sans': ['10.1.2.3']}, + 'public.openstack.local': { + 'sans': ['10.20.0.100', '10.20.0.2', '10.20.0.3']}} + self.maxDiff = None + config.side_effect = lambda x: _config.get(x) + get_vip_in_network.side_effect = lambda x: _vips.get(x) + resolve_network_cidr.side_effect = lambda x: _resolve_nets.get(x) + network_get_primary_address.side_effect = lambda x: _npa.get(x) + resolve_address.side_effect = \ + lambda endpoint_type: _resolve_address[endpoint_type] + output = json.loads( + cert_utils.get_certificate_request()['cert_requests']) + self.assertEqual( + output, + expect) + + @mock.patch.object(cert_utils, 'unit_get') + @mock.patch.object(cert_utils.os, 'symlink') + @mock.patch.object(cert_utils.os.path, 'isfile') + @mock.patch.object(cert_utils, 'resolve_address') + @mock.patch.object(cert_utils, 'get_hostname') + def test_create_ip_cert_links(self, get_hostname, resolve_address, isfile, + symlink, unit_get): + unit_get.return_value = '10.1.2.3' + get_hostname.return_value = 'juju-unit-2' + _resolve_address = { + 'int': '10.0.0.2', + 'admin': '10.10.0.2', + 'public': '10.20.0.2', + } + resolve_address.side_effect = \ + lambda endpoint_type: _resolve_address[endpoint_type] + _files = { + '/etc/ssl/cert_juju-unit-2': True, + '/etc/ssl/cert_10.0.0.2': False, + '/etc/ssl/cert_10.10.0.2': True, + '/etc/ssl/cert_10.20.0.2': False, + '/etc/ssl/cert_funky-name': False, + } + isfile.side_effect = lambda x: _files[x] + expected = [ + mock.call('/etc/ssl/cert_juju-unit-2', '/etc/ssl/cert_10.0.0.2'), + mock.call('/etc/ssl/key_juju-unit-2', '/etc/ssl/key_10.0.0.2'), + mock.call('/etc/ssl/cert_juju-unit-2', '/etc/ssl/cert_10.20.0.2'), + mock.call('/etc/ssl/key_juju-unit-2', '/etc/ssl/key_10.20.0.2'), + ] + cert_utils.create_ip_cert_links('/etc/ssl') + symlink.assert_has_calls(expected) + symlink.reset_mock() + cert_utils.create_ip_cert_links( + '/etc/ssl', + custom_hostname_link='funky-name') + expected.extend([ + mock.call('/etc/ssl/cert_juju-unit-2', '/etc/ssl/cert_funky-name'), + mock.call('/etc/ssl/key_juju-unit-2', '/etc/ssl/key_funky-name'), + ]) + symlink.assert_has_calls(expected) + + @mock.patch.object(cert_utils, 'write_file') + def test_install_certs(self, write_file): + certs = { + 'admin.openstack.local': { + 'cert': 'ADMINCERT', + 'key': 'ADMINKEY'}} + cert_utils.install_certs('/etc/ssl', certs, chain='CHAIN') + expected = [ + mock.call( + path='/etc/ssl/cert_admin.openstack.local', + content='ADMINCERT\nCHAIN', + owner='root', group='root', + perms=0o640), + mock.call( + path='/etc/ssl/key_admin.openstack.local', + content='ADMINKEY', + owner='root', group='root', + perms=0o640), + ] + write_file.assert_has_calls(expected) + + @mock.patch.object(cert_utils, 'write_file') + def test_install_certs_ca(self, write_file): + certs = { + 'admin.openstack.local': { + 'cert': 'ADMINCERT', + 'key': 'ADMINKEY'}} + ca = 'MYCA' + cert_utils.install_certs('/etc/ssl', certs, ca) + expected = [ + mock.call( + 
path='/etc/ssl/cert_admin.openstack.local', + content='ADMINCERT\nMYCA', + owner='root', group='root', + perms=0o640), + mock.call( + path='/etc/ssl/key_admin.openstack.local', + content='ADMINKEY', + owner='root', group='root', + perms=0o640), + ] + write_file.assert_has_calls(expected) + + @mock.patch.object(cert_utils, 'local_unit') + @mock.patch.object(cert_utils, 'create_ip_cert_links') + @mock.patch.object(cert_utils, 'install_certs') + @mock.patch.object(cert_utils, 'install_ca_cert') + @mock.patch.object(cert_utils, 'mkdir') + @mock.patch.object(cert_utils, 'relation_get') + def test_process_certificates(self, relation_get, mkdir, install_ca_cert, + install_certs, create_ip_cert_links, + local_unit): + local_unit.return_value = 'devnull/2' + certs = { + 'admin.openstack.local': { + 'cert': 'ADMINCERT', + 'key': 'ADMINKEY'}} + _relation_info = { + 'keystone_2.processed_requests': json.dumps(certs), + 'chain': 'MYCHAIN', + 'ca': 'ROOTCA', + } + relation_get.return_value = _relation_info + self.assertFalse(cert_utils.process_certificates( + 'myservice', + 'certificates:2', + 'vault/0', + custom_hostname_link='funky-name')) + local_unit.return_value = 'keystone/2' + self.assertTrue(cert_utils.process_certificates( + 'myservice', + 'certificates:2', + 'vault/0', + custom_hostname_link='funky-name')) + install_ca_cert.assert_called_once_with(b'ROOTCA') + install_certs.assert_called_once_with( + '/etc/apache2/ssl/myservice', + {'admin.openstack.local': { + 'key': 'ADMINKEY', 'cert': 'ADMINCERT'}}, + 'MYCHAIN', user='root', group='root') + create_ip_cert_links.assert_called_once_with( + '/etc/apache2/ssl/myservice', + custom_hostname_link='funky-name') + + @mock.patch.object(cert_utils, 'local_unit') + @mock.patch.object(cert_utils, 'related_units') + @mock.patch.object(cert_utils, 'relation_ids') + @mock.patch.object(cert_utils, 'relation_get') + def test_get_requests_for_local_unit(self, relation_get, relation_ids, + related_units, local_unit): + local_unit.return_value = 'rabbitmq-server/2' + relation_ids.return_value = ['certificates:12'] + related_units.return_value = ['vault/0'] + certs = { + 'juju-cd4bb3-5.lxd': { + 'cert': 'BASECERT', + 'key': 'BASEKEY'}, + 'juju-cd4bb3-5.internal': { + 'cert': 'INTERNALCERT', + 'key': 'INTERNALKEY'}} + _relation_info = { + 'rabbitmq-server_2.processed_requests': json.dumps(certs), + 'chain': 'MYCHAIN', + 'ca': 'ROOTCA', + } + relation_get.return_value = _relation_info + self.assertEqual( + cert_utils.get_requests_for_local_unit(), + [{ + 'ca': 'ROOTCA', + 'certs': { + 'juju-cd4bb3-5.lxd': { + 'cert': 'BASECERT', + 'key': 'BASEKEY'}, + 'juju-cd4bb3-5.internal': { + 'cert': 'INTERNALCERT', + 'key': 'INTERNALKEY'}}, + 'chain': 'MYCHAIN'}] + ) + + @mock.patch.object(cert_utils, 'get_requests_for_local_unit') + def test_get_bundle_for_cn(self, get_requests_for_local_unit): + get_requests_for_local_unit.return_value = [{ + 'ca': 'ROOTCA', + 'certs': { + 'juju-cd4bb3-5.lxd': { + 'cert': 'BASECERT', + 'key': 'BASEKEY'}, + 'juju-cd4bb3-5.internal': { + 'cert': 'INTERNALCERT', + 'key': 'INTERNALKEY'}}, + 'chain': 'MYCHAIN'}] + self.assertEqual( + cert_utils.get_bundle_for_cn('juju-cd4bb3-5.internal'), + { + 'ca': 'ROOTCA', + 'cert': 'INTERNALCERT', + 'chain': 'MYCHAIN', + 'key': 'INTERNALKEY'}) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/openstack/test_ip.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/openstack/test_ip.py new file mode 100644 index 
0000000000000000000000000000000000000000..e062e7b89e9f4f7e9a1566886dbd5f794a4b5e90 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/openstack/test_ip.py @@ -0,0 +1,174 @@ +from testtools import TestCase +from mock import patch, call, MagicMock + +import charmhelpers.core as ch_core +import charmhelpers.contrib.openstack.ip as ip + +TO_PATCH = [ + 'config', + 'unit_get', + 'get_address_in_network', + 'is_clustered', + 'service_name', + 'network_get_primary_address', + 'resolve_network_cidr', +] + + +class TestConfig(): + + def __init__(self): + self.config = {} + + def set(self, key, value): + self.config[key] = value + + def get(self, key): + return self.config.get(key) + + +class IPTestCase(TestCase): + + def setUp(self): + super(IPTestCase, self).setUp() + for m in TO_PATCH: + setattr(self, m, self._patch(m)) + self.test_config = TestConfig() + self.config.side_effect = self.test_config.get + self.network_get_primary_address.side_effect = [ + NotImplementedError, + ch_core.hookenv.NoNetworkBinding, + ] + + def _patch(self, method): + _m = patch('charmhelpers.contrib.openstack.ip.' + method) + mock = _m.start() + self.addCleanup(_m.stop) + return mock + + def test_resolve_address_default(self): + self.is_clustered.return_value = False + self.unit_get.return_value = 'unit1' + self.get_address_in_network.return_value = 'unit1' + self.assertEquals(ip.resolve_address(), 'unit1') + self.unit_get.assert_called_with('public-address') + calls = [call('os-public-network'), + call('prefer-ipv6')] + self.config.assert_has_calls(calls) + + def test_resolve_address_default_internal(self): + self.is_clustered.return_value = False + self.unit_get.return_value = 'unit1' + self.get_address_in_network.return_value = 'unit1' + self.assertEquals(ip.resolve_address(ip.INTERNAL), 'unit1') + self.unit_get.assert_called_with('private-address') + calls = [call('os-internal-network'), + call('prefer-ipv6')] + self.config.assert_has_calls(calls) + + def test_resolve_address_public_not_clustered(self): + self.is_clustered.return_value = False + self.test_config.set('os-public-network', '192.168.20.0/24') + self.unit_get.return_value = 'unit1' + self.get_address_in_network.return_value = '192.168.20.1' + self.assertEquals(ip.resolve_address(), '192.168.20.1') + self.unit_get.assert_called_with('public-address') + calls = [call('os-public-network'), + call('prefer-ipv6')] + self.config.assert_has_calls(calls) + self.get_address_in_network.assert_called_with( + '192.168.20.0/24', + 'unit1') + + def test_resolve_address_public_clustered(self): + self.is_clustered.return_value = True + self.test_config.set('os-public-network', '192.168.20.0/24') + self.test_config.set('vip', '192.168.20.100 10.5.3.1') + self.assertEquals(ip.resolve_address(), '192.168.20.100') + + def test_resolve_address_default_clustered(self): + self.is_clustered.return_value = True + self.test_config.set('vip', '10.5.3.1') + self.assertEquals(ip.resolve_address(), '10.5.3.1') + self.config.assert_has_calls( + [call('vip'), + call('os-public-network')]) + + def test_resolve_address_public_clustered_inresolvable(self): + self.is_clustered.return_value = True + self.test_config.set('os-public-network', '192.168.20.0/24') + self.test_config.set('vip', '10.5.3.1') + self.assertRaises(ValueError, ip.resolve_address) + + def test_resolve_address_override(self): + self.test_config.set('os-public-hostname', 'public.example.com') + addr = ip.resolve_address() + self.assertEqual('public.example.com', addr) + + 
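+    # resolve_address() honours a config-driven 'os-public-hostname'
+    # override before falling back to unit/network addresses; the tests
+    # below cover disabling that override and the '{service_name}'
+    # template expansion in the override value.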
+    @patch.object(ip, '_get_address_override')
+    def test_resolve_address_no_override(self, _get_address_override):
+        self.test_config.set('os-public-hostname', 'public.example.com')
+        self.unit_get.return_value = '10.0.0.1'
+        addr = ip.resolve_address(override=False)
+        self.assertFalse(_get_address_override.called)
+        self.assertEqual('10.0.0.1', addr)
+
+    def test_resolve_address_override_template(self):
+        self.test_config.set('os-public-hostname',
+                             '{service_name}.example.com')
+        self.service_name.return_value = 'foo'
+        addr = ip.resolve_address()
+        self.assertEqual('foo.example.com', addr)
+
+    @patch.object(ip, 'get_ipv6_addr', lambda *args, **kwargs: ['::1'])
+    def test_resolve_address_ipv6_fallback(self):
+        self.test_config.set('prefer-ipv6', True)
+        self.is_clustered.return_value = False
+        self.assertEqual(ip.resolve_address(), '::1')
+
+    @patch.object(ip, 'resolve_address')
+    def test_canonical_url_http(self, resolve_address):
+        resolve_address.return_value = 'unit1'
+        configs = MagicMock()
+        configs.complete_contexts.return_value = []
+        self.assertEqual(ip.canonical_url(configs),
+                         'http://unit1')
+
+    @patch.object(ip, 'resolve_address')
+    def test_canonical_url_https(self, resolve_address):
+        resolve_address.return_value = 'unit1'
+        configs = MagicMock()
+        configs.complete_contexts.return_value = ['https']
+        self.assertEqual(ip.canonical_url(configs),
+                         'https://unit1')
+
+    @patch.object(ip, 'is_ipv6', lambda *args: True)
+    @patch.object(ip, 'resolve_address')
+    def test_canonical_url_ipv6(self, resolve_address):
+        resolve_address.return_value = 'unit1'
+        self.assertEqual(ip.canonical_url(None), 'http://[unit1]')
+
+    def test_resolve_address_network_get(self):
+        self.is_clustered.return_value = False
+        self.unit_get.return_value = 'unit1'
+        self.network_get_primary_address.side_effect = None
+        self.network_get_primary_address.return_value = '10.5.60.1'
+        self.assertEqual(ip.resolve_address(), '10.5.60.1')
+        self.unit_get.assert_called_with('public-address')
+        calls = [call('os-public-network'),
+                 call('prefer-ipv6')]
+        self.config.assert_has_calls(calls)
+        self.network_get_primary_address.assert_called_with('public')
+
+    def test_resolve_address_network_get_clustered(self):
+        self.is_clustered.return_value = True
+        self.test_config.set('vip', '10.5.60.20 192.168.1.20')
+        self.network_get_primary_address.side_effect = None
+        self.network_get_primary_address.return_value = '10.5.60.1'
+        self.resolve_network_cidr.return_value = '10.5.60.1/24'
+        self.assertEqual(ip.resolve_address(), '10.5.60.20')
+        calls = [call('os-public-hostname'),
+                 call('vip'),
+                 call('os-public-network')]
+        self.config.assert_has_calls(calls)
+        self.network_get_primary_address.assert_called_with('public')
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/openstack/test_keystone_utils.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/openstack/test_keystone_utils.py
new file mode 100644
index 0000000000000000000000000000000000000000..c7bba25d691b736a781d53c6269986435f114745
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/openstack/test_keystone_utils.py
@@ -0,0 +1,122 @@
+#!/usr/bin/env python
+
+import unittest
+
+from mock import patch, PropertyMock
+
+import charmhelpers.contrib.openstack.keystone as keystone
+
+TO_PATCH = [
+    'apt_install',
+    "log",
+    "ERROR",
+    "IdentityServiceContext",
+]
+
+
+class KeystoneTests(unittest.TestCase):
+    def setUp(self):
+        for m in TO_PATCH:
+            setattr(self, m, self._patch(m))
+
+    def _patch(self, method):
+        _m = patch('charmhelpers.contrib.openstack.keystone.' + method)
+        mock = _m.start()
+        self.addCleanup(_m.stop)
+        return mock
+
+    def test_get_keystone_manager(self):
+        manager = keystone.get_keystone_manager(
+            'test-endpoint', 2, token="12345"
+        )
+        self.assertTrue(isinstance(manager, keystone.KeystoneManager2))
+
+        manager = keystone.get_keystone_manager(
+            'test-endpoint', 3, token="12345")
+
+        self.assertTrue(isinstance(manager, keystone.KeystoneManager3))
+        self.assertRaises(ValueError, keystone.get_keystone_manager,
+                          'test-endpoint', 4, token="12345")
+
+    def test_resolve_service_id_v2(self):
+        class ServiceList(list):
+            def __iter__(self):
+                class Service(object):
+                    _info = {
+                        'type': 'metering',
+                        'name': "ceilometer",
+                        'id': "uuid-uuid",
+                    }
+                yield Service()
+
+        manager = keystone.get_keystone_manager('test-endpoint', 2,
+                                                token="1234")
+        manager.api.services.list = PropertyMock(return_value=ServiceList())
+        self.assertTrue(manager.service_exists(service_name="ceilometer",
+                                               service_type="metering"))
+        self.assertFalse(manager.service_exists(service_name="barbican"))
+        self.assertFalse(manager.service_exists(service_name="barbican",
+                                                service_type="openstack"))
+
+    def test_resolve_service_id_v3(self):
+        class ServiceList(list):
+            def __iter__(self):
+                class Service(object):
+                    _info = {
+                        'type': 'metering',
+                        'name': "ceilometer",
+                        'id': "uuid-uuid",
+                    }
+                yield Service()
+
+        manager = keystone.get_keystone_manager('test-endpoint', 3,
+                                                token="12345")
+        manager.api.services.list = PropertyMock(return_value=ServiceList())
+        self.assertTrue(manager.service_exists(service_name="ceilometer",
+                                               service_type="metering"))
+        self.assertFalse(manager.service_exists(service_name="barbican"))
+        self.assertFalse(manager.service_exists(service_name="barbican",
+                                                service_type="openstack"))
+
+    def test_get_api_suffix(self):
+        self.assertEquals(keystone.get_api_suffix(2), "v2.0")
+        self.assertEquals(keystone.get_api_suffix(3), "v3")
+
+    def test_format_endpoint(self):
+        self.assertEquals(keystone.format_endpoint(
+            "http", "10.0.0.5", "5000", 2), "http://10.0.0.5:5000/v2.0/")
+
+    def test_get_keystone_manager_from_identity_service_context(self):
+        class FakeIdentityServiceV2(object):
+            def __call__(self, *args, **kwargs):
+                return {
+                    "service_protocol": "https",
+                    "service_host": "10.5.0.5",
+                    "service_port": "5000",
+                    "api_version": "2.0",
+                    "admin_user": "admin",
+                    "admin_password": "admin",
+                    "admin_tenant_name": "admin_tenant"
+                }
+
+        self.IdentityServiceContext.return_value = FakeIdentityServiceV2()
+
+        manager = keystone.get_keystone_manager_from_identity_service_context()
+        self.assertIsInstance(manager, keystone.KeystoneManager2)
+
+        class FakeIdentityServiceV3(object):
+            def __call__(self, *args, **kwargs):
+                return {
+                    "service_protocol": "https",
+                    "service_host": "10.5.0.5",
+                    "service_port": "5000",
+                    "api_version": "3",
+                    "admin_user": "admin",
+                    "admin_password": "admin",
+                    "admin_tenant_name": "admin_tenant"
+                }
+
+        self.IdentityServiceContext.return_value = FakeIdentityServiceV3()
+
+        manager = keystone.get_keystone_manager_from_identity_service_context()
+        self.assertIsInstance(manager, keystone.KeystoneManager3)
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/openstack/test_neutron_utils.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/openstack/test_neutron_utils.py
new file mode 100644
index 0000000000000000000000000000000000000000..a571f171c5afcb3bbd6589e9c04c3d2a2db5af6c
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/openstack/test_neutron_utils.py
@@ -0,0 +1,239 @@
+import unittest
+from mock import patch
+from nose.tools import raises
+import charmhelpers.contrib.openstack.neutron as neutron
+
+TO_PATCH = [
+    'log',
+    'config',
+    'os_release',
+    'check_output',
+]
+
+
+class NeutronTests(unittest.TestCase):
+    def setUp(self):
+        for m in TO_PATCH:
+            setattr(self, m, self._patch(m))
+
+    def _patch(self, method):
+        _m = patch('charmhelpers.contrib.openstack.neutron.' + method)
+        mock = _m.start()
+        self.addCleanup(_m.stop)
+        return mock
+
+    def test_headers_package(self):
+        self.check_output.return_value = b'3.13.0-19-generic'
+        kname = neutron.headers_package()
+        self.assertEquals(kname, 'linux-headers-3.13.0-19-generic')
+
+    def test_kernel_version(self):
+        self.check_output.return_value = b'3.13.0-19-generic'
+        kver_maj, kver_min = neutron.kernel_version()
+        self.assertEquals((kver_maj, kver_min), (3, 13))
+
+    @patch.object(neutron, 'kernel_version')
+    def test_determine_dkms_package_old_kernel(self, _kernel_version):
+        self.check_output.return_value = b'3.4.0-19-generic'
+        _kernel_version.return_value = (3, 10)
+        dkms_package = neutron.determine_dkms_package()
+        self.assertEquals(dkms_package, ['linux-headers-3.4.0-19-generic',
+                                         'openvswitch-datapath-dkms'])
+
+    @patch.object(neutron, 'kernel_version')
+    def test_determine_dkms_package_new_kernel(self, _kernel_version):
+        _kernel_version.return_value = (3, 13)
+        dkms_package = neutron.determine_dkms_package()
+        self.assertEquals(dkms_package, [])
+
+    def test_quantum_plugins(self):
+        self.config.return_value = 'foo'
+        plugins = neutron.quantum_plugins()
+        self.assertEquals(plugins['ovs']['services'],
+                          ['quantum-plugin-openvswitch-agent'])
+        self.assertEquals(plugins['nvp']['services'], [])
+
+    def test_neutron_plugins_pre_icehouse(self):
+        self.config.return_value = 'foo'
+        self.os_release.return_value = 'havana'
+        plugins = neutron.neutron_plugins()
+        self.assertEquals(plugins['ovs']['config'],
+                          '/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini')
+        self.assertEquals(plugins['nvp']['services'], [])
+
+    def test_neutron_plugins(self):
+        self.config.return_value = 'foo'
+        self.os_release.return_value = 'icehouse'
+        plugins = neutron.neutron_plugins()
+        self.assertEquals(plugins['ovs']['config'],
+                          '/etc/neutron/plugins/ml2/ml2_conf.ini')
+        self.assertEquals(plugins['nvp']['config'],
+                          '/etc/neutron/plugins/vmware/nsx.ini')
+        self.assertTrue('neutron-plugin-vmware' in
+                        plugins['nvp']['server_packages'])
+        self.assertEquals(plugins['n1kv']['config'],
+                          '/etc/neutron/plugins/cisco/cisco_plugins.ini')
+        self.assertEquals(plugins['Calico']['config'],
+                          '/etc/neutron/plugins/ml2/ml2_conf.ini')
+        self.assertEquals(plugins['plumgrid']['config'],
+                          '/etc/neutron/plugins/plumgrid/plumgrid.ini')
+        self.assertEquals(plugins['midonet']['config'],
+                          '/etc/neutron/plugins/midonet/midonet.ini')
+
+        self.assertEquals(plugins['nvp']['services'], [])
+        self.assertEquals(plugins['nsx'], plugins['nvp'])
+
+        self.os_release.return_value = 'kilo'
+        plugins = neutron.neutron_plugins()
+        self.assertEquals(plugins['midonet']['driver'],
+                          'neutron.plugins.midonet.plugin.MidonetPluginV2')
+        self.assertEquals(plugins['nsx']['config'],
+                          '/etc/neutron/plugins/vmware/nsx.ini')
+
+        self.os_release.return_value = 'liberty'
+        self.config.return_value = 'mem-1.9'
+        plugins = neutron.neutron_plugins()
+        self.assertEquals(plugins['midonet']['driver'],
+                          'midonet.neutron.plugin_v1.MidonetPluginV2')
+        self.assertTrue('python-networking-midonet' in
+                        plugins['midonet']['server_packages'])
+
+        self.os_release.return_value = 'mitaka'
+        self.config.return_value = 'mem-1.9'
+        plugins = neutron.neutron_plugins()
+        self.assertEquals(plugins['nsx']['config'],
+                          '/etc/neutron/nsx.ini')
+        self.assertTrue('python-vmware-nsx' in
+                        plugins['nsx']['server_packages'])
+
+    @patch.object(neutron, 'network_manager')
+    def test_neutron_plugin_attribute_quantum(self, _network_manager):
+        self.config.return_value = 'foo'
+        _network_manager.return_value = 'quantum'
+        plugins = neutron.neutron_plugin_attribute('ovs', 'services')
+        self.assertEquals(plugins, ['quantum-plugin-openvswitch-agent'])
+
+    @patch.object(neutron, 'network_manager')
+    def test_neutron_plugin_attribute_neutron(self, _network_manager):
+        self.config.return_value = 'foo'
+        self.os_release.return_value = 'icehouse'
+        _network_manager.return_value = 'neutron'
+        plugins = neutron.neutron_plugin_attribute('ovs', 'services')
+        self.assertEquals(plugins, ['neutron-plugin-openvswitch-agent'])
+
+    @raises(Exception)
+    @patch.object(neutron, 'network_manager')
+    def test_neutron_plugin_attribute_foo(self, _network_manager):
+        _network_manager.return_value = 'foo'
+        # The @raises decorator asserts the exception; a bare call is enough.
+        neutron.neutron_plugin_attribute('ovs', 'services')
+
+    @raises(Exception)
+    @patch.object(neutron, 'network_manager')
+    def test_neutron_plugin_attribute_plugin_keyerror(self, _network_manager):
+        self.config.return_value = 'foo'
+        _network_manager.return_value = 'quantum'
+        neutron.neutron_plugin_attribute('foo', 'foo')
+
+    @patch.object(neutron, 'network_manager')
+    def test_neutron_plugin_attribute_attr_keyerror(self, _network_manager):
+        self.config.return_value = 'foo'
+        _network_manager.return_value = 'quantum'
+        plugins = neutron.neutron_plugin_attribute('ovs', 'foo')
+        self.assertEquals(plugins, None)
+
+    @raises(Exception)
+    def test_network_manager_essex(self):
+        essex_cases = {
+            'quantum': 'quantum',
+            'neutron': 'quantum',
+            'newhotness': 'newhotness',
+        }
+        self.os_release.return_value = 'essex'
+        for nwmanager in essex_cases:
+            self.config.return_value = nwmanager
+            neutron.network_manager()
+
+    def test_network_manager_folsom(self):
+        folsom_cases = {
+            'quantum': 'quantum',
+            'neutron': 'quantum',
+            'newhotness': 'newhotness',
+        }
+        self.os_release.return_value = 'folsom'
+        for nwmanager in folsom_cases:
+            self.config.return_value = nwmanager
+            renamed_manager = neutron.network_manager()
+            self.assertEquals(renamed_manager, folsom_cases[nwmanager])
+
+    def test_network_manager_grizzly(self):
+        grizzly_cases = {
+            'quantum': 'quantum',
+            'neutron': 'quantum',
+            'newhotness': 'newhotness',
+        }
+        self.os_release.return_value = 'grizzly'
+        for nwmanager in grizzly_cases:
+            self.config.return_value = nwmanager
+            renamed_manager = neutron.network_manager()
+            self.assertEquals(renamed_manager, grizzly_cases[nwmanager])
+
+    def test_network_manager_havana(self):
+        havana_cases = {
+            'quantum': 'neutron',
+            'neutron': 'neutron',
+            'newhotness': 'newhotness',
+        }
+        self.os_release.return_value = 'havana'
+        for nwmanager in havana_cases:
+            self.config.return_value = nwmanager
+            renamed_manager = neutron.network_manager()
+            self.assertEquals(renamed_manager, havana_cases[nwmanager])
+
+    def test_network_manager_icehouse(self):
+        icehouse_cases = {
+            'quantum': 'neutron',
+            'neutron': 'neutron',
+            'newhotness': 'newhotness',
+        }
+        self.os_release.return_value = 'icehouse'
+        for nwmanager in icehouse_cases:
+            self.config.return_value = nwmanager
+            renamed_manager = neutron.network_manager()
+            self.assertEquals(renamed_manager, icehouse_cases[nwmanager])
+
+    def test_parse_bridge_mappings(self):
+        ret = neutron.parse_bridge_mappings(None)
+        self.assertEqual(ret, {})
+        ret = neutron.parse_bridge_mappings("physnet1:br0")
+        self.assertEqual(ret, {'physnet1': 'br0'})
+        ret = neutron.parse_bridge_mappings("physnet1:br0 physnet2:br1")
+        self.assertEqual(ret, {'physnet1': 'br0', 'physnet2': 'br1'})
+
+    def test_parse_data_port_mappings(self):
+        ret = neutron.parse_data_port_mappings(None)
+        self.assertEqual(ret, {})
+        ret = neutron.parse_data_port_mappings('br0:eth0')
+        self.assertEqual(ret, {'eth0': 'br0'})
+        # Back-compat test
+        ret = neutron.parse_data_port_mappings('eth0', default_bridge='br0')
+        self.assertEqual(ret, {'eth0': 'br0'})
+        # Multiple mappings
+        ret = neutron.parse_data_port_mappings('br0:eth0 br1:eth1')
+        self.assertEqual(ret, {'eth0': 'br0', 'eth1': 'br1'})
+        # Multi-MAC mappings
+        ret = neutron.parse_data_port_mappings('br0:cb:23:ae:72:f2:33 '
+                                               'br0:fa:16:3e:12:97:8e')
+        self.assertEqual(ret, {'cb:23:ae:72:f2:33': 'br0',
+                               'fa:16:3e:12:97:8e': 'br0'})
+
+    def test_parse_vlan_range_mappings(self):
+        ret = neutron.parse_vlan_range_mappings(None)
+        self.assertEqual(ret, {})
+        ret = neutron.parse_vlan_range_mappings('physnet1:1001:2000')
+        self.assertEqual(ret, {'physnet1': ('1001', '2000')})
+        ret = neutron.parse_vlan_range_mappings('physnet1:1001:2000 physnet2:2001:3000')
+        self.assertEqual(ret, {'physnet1': ('1001', '2000'),
+                               'physnet2': ('2001', '3000')})
+        ret = neutron.parse_vlan_range_mappings('physnet1 physnet2:2001:3000')
+        self.assertEqual(ret, {'physnet1': ('',),
+                               'physnet2': ('2001', '3000')})
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/openstack/test_openstack_utils.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/openstack/test_openstack_utils.py
new file mode 100644
index 0000000000000000000000000000000000000000..6358ac3ee653fe6a52c5ec890d0a6f7570df4b1e
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/openstack/test_openstack_utils.py
@@ -0,0 +1,2340 @@
+import io
+import os
+import contextlib
+import unittest
+from copy import copy
+from tests.helpers import patch_open, FakeRelation
+
+from testtools import TestCase
+from mock import MagicMock, patch, call
+
+from charmhelpers.fetch import ubuntu as fetch
+from charmhelpers.core.hookenv import WORKLOAD_STATES, flush
+
+import charmhelpers.contrib.openstack.utils as openstack
+
+import six
+
+if not six.PY3:
+    builtin_open = '__builtin__.open'
+    builtin_import = '__builtin__.__import__'
+else:
+    builtin_open = 'builtins.open'
+    builtin_import = 'builtins.__import__'
+
+FAKE_CODENAME = 'precise'
+# mocked return of openstack.lsb_release()
+FAKE_RELEASE = {
+    'DISTRIB_CODENAME': 'precise',
+    'DISTRIB_RELEASE': '12.04',
+    'DISTRIB_ID': 'Ubuntu',
+    'DISTRIB_DESCRIPTION': '"Ubuntu 12.04"'
+}
+
+FAKE_REPO = {
+    # liberty patch release
+    'neutron-common': {
+        'pkg_vers': '2:7.0.1-0ubuntu1',
+        'os_release': 'liberty',
+        'os_version': '2015.2'
+    },
+    # liberty release version
+    'nova-common': {
+        'pkg_vers': '2:12.0.0~b1-0ubuntu1',
+        'os_release': 'liberty',
+        'os_version': '2015.2'
+    },
+    'nova': {
+        'pkg_vers': '2012.2.3-0ubuntu2.1',
'os_release': 'folsom', + 'os_version': '2012.2' + }, + 'glance-common': { + 'pkg_vers': '2012.1.3+stable-20130423-74b067df-0ubuntu1', + 'os_release': 'essex', + 'os_version': '2012.1' + }, + 'keystone-common': { + 'pkg_vers': '1:2013.1-0ubuntu1.1~cloud0', + 'os_release': 'grizzly', + 'os_version': '2013.1' + }, + # Exercise swift version detection + 'swift-storage': { + 'pkg_vers': '1.8.0-0ubuntu1', + 'os_release': 'grizzly', + 'os_version': '1.8.0' + }, + 'swift-proxy': { + 'pkg_vers': '1.13.1-0ubuntu1', + 'os_release': 'icehouse', + 'os_version': '1.13.1' + }, + 'swift-common': { + 'pkg_vers': '1.10.0~rc1-0ubuntu1', + 'os_release': 'havana', + 'os_version': '1.10.0' + }, + 'swift-mitaka-dev': { + 'pkg_vers': '2.7.1.dev8.201605111703.trusty-0ubuntu1', + 'os_release': 'mitaka', + 'os_version': '2.7.0' + }, + # a package thats available in the cache but is not installed + 'cinder-common': { + 'os_release': 'havana', + 'os_version': '2013.2' + }, + # poorly formed openstack version + 'bad-version': { + 'pkg_vers': '1:2200.1-0ubuntu1.1~cloud0', + 'os_release': None, + 'os_version': None + } +} + +MOUNTS = [ + ['/mnt', '/dev/vdb'] +] + +url = 'deb ' + openstack.CLOUD_ARCHIVE_URL +UCA_SOURCES = [ + ('cloud:precise-folsom/proposed', url + ' precise-proposed/folsom main'), + ('cloud:precise-folsom', url + ' precise-updates/folsom main'), + ('cloud:precise-folsom/updates', url + ' precise-updates/folsom main'), + ('cloud:precise-grizzly/proposed', url + ' precise-proposed/grizzly main'), + ('cloud:precise-grizzly', url + ' precise-updates/grizzly main'), + ('cloud:precise-grizzly/updates', url + ' precise-updates/grizzly main'), + ('cloud:precise-havana/proposed', url + ' precise-proposed/havana main'), + ('cloud:precise-havana', url + ' precise-updates/havana main'), + ('cloud:precise-havana/updates', url + ' precise-updates/havana main'), + ('cloud:precise-icehouse/proposed', + url + ' precise-proposed/icehouse main'), + ('cloud:precise-icehouse', url + ' precise-updates/icehouse main'), + ('cloud:precise-icehouse/updates', url + ' precise-updates/icehouse main'), +] + +# Mock python-dnspython resolver used by get_host_ip() + + +class FakeAnswer(object): + + def __init__(self, ip): + self.ip = ip + + def __str__(self): + return self.ip + + +class FakeResolver(object): + + def __init__(self, ip): + self.ip = ip + + def query(self, hostname, query_type): + if self.ip == '': + return [] + else: + return [FakeAnswer(self.ip)] + + +class FakeReverse(object): + + def from_address(self, address): + return '156.94.189.91.in-addr.arpa' + + +class FakeDNSName(object): + + def __init__(self, dnsname): + pass + + +class FakeDNS(object): + + def __init__(self, ip): + self.resolver = FakeResolver(ip) + self.reversename = FakeReverse() + self.name = MagicMock() + self.name.Name = FakeDNSName + + +class OpenStackHelpersTestCase(TestCase): + + def setUp(self): + super(OpenStackHelpersTestCase, self).setUp() + self.patch(fetch, 'get_apt_dpkg_env', lambda: {}) + + def _apt_cache(self): + # mocks out the apt cache + def cache_get(package): + pkg = MagicMock() + if package in FAKE_REPO and 'pkg_vers' in FAKE_REPO[package]: + pkg.name = package + pkg.current_ver.ver_str = FAKE_REPO[package]['pkg_vers'] + elif (package in FAKE_REPO and + 'pkg_vers' not in FAKE_REPO[package]): + pkg.name = package + pkg.current_ver = None + else: + raise KeyError + return pkg + cache = MagicMock() + cache.__getitem__.side_effect = cache_get + return cache + + @patch.object(openstack, 'filter_missing_packages') + def 
+    def test_get_installed_semantic_versioned_packages(self, mock_filter):
+        def _filter_missing_packages(pkgs):
+            return [x for x in pkgs if x in ['cinder-common']]
+        mock_filter.side_effect = _filter_missing_packages
+        self.assertEquals(
+            openstack.get_installed_semantic_versioned_packages(),
+            ['cinder-common'])
+
+    @patch('charmhelpers.contrib.openstack.utils.lsb_release')
+    def test_os_codename_from_install_source(self, mocked_lsb):
+        """Test mapping install source to OpenStack release name"""
+        mocked_lsb.return_value = FAKE_RELEASE
+
+        # the openstack release shipped with respective ubuntu/lsb release.
+        self.assertEquals(openstack.get_os_codename_install_source('distro'),
+                          'essex')
+        # proposed pocket
+        self.assertEquals(openstack.get_os_codename_install_source(
+                          'distro-proposed'),
+                          'essex')
+        self.assertEquals(openstack.get_os_codename_install_source(
+                          'proposed'),
+                          'essex')
+
+        # various cloud archive pockets
+        src = 'cloud:precise-grizzly'
+        self.assertEquals(openstack.get_os_codename_install_source(src),
+                          'grizzly')
+        src = 'cloud:precise-grizzly/proposed'
+        self.assertEquals(openstack.get_os_codename_install_source(src),
+                          'grizzly')
+
+        # ppas and full repo urls.
+        src = 'ppa:openstack-ubuntu-testing/havana-trunk-testing'
+        self.assertEquals(openstack.get_os_codename_install_source(src),
+                          'havana')
+        src = ('deb http://ubuntu-cloud.archive.canonical.com/ubuntu '
+               'precise-havana main')
+        self.assertEquals(openstack.get_os_codename_install_source(src),
+                          'havana')
+        self.assertEquals(openstack.get_os_codename_install_source(None),
+                          '')
+
+    @patch.object(openstack, 'get_os_version_codename')
+    @patch.object(openstack, 'get_os_codename_install_source')
+    def test_os_version_from_install_source(self, codename, version):
+        codename.return_value = 'grizzly'
+        openstack.get_os_version_install_source('cloud:precise-grizzly')
+        version.assert_called_with('grizzly')
+
+    @patch('charmhelpers.contrib.openstack.utils.lsb_release')
+    def test_os_codename_from_bad_install_source(self, mocked_lsb):
+        """Test mapping install source to OpenStack release name"""
+        _fake_release = copy(FAKE_RELEASE)
+        _fake_release['DISTRIB_CODENAME'] = 'natty'
+
+        mocked_lsb.return_value = _fake_release
+        _e = 'charmhelpers.contrib.openstack.utils.error_out'
+        with patch(_e) as mocked_err:
+            openstack.get_os_codename_install_source('distro')
+            _er = ('Could not derive openstack release for this Ubuntu '
+                   'release: natty')
+            mocked_err.assert_called_with(_er)
+
+    def test_os_codename_from_version(self):
+        """Test mapping OpenStack numerical versions to code name"""
+        self.assertEquals(openstack.get_os_codename_version('2013.1'),
+                          'grizzly')
+
+    @patch('charmhelpers.contrib.openstack.utils.error_out')
+    def test_os_codename_from_bad_version(self, mocked_error):
+        """Test mapping a bad OpenStack numerical version to code name"""
+        openstack.get_os_codename_version('2014.5.5')
+        expected_err = ('Could not determine OpenStack codename for '
+                        'version 2014.5.5')
+        mocked_error.assert_called_with(expected_err)
+
+    def test_os_version_from_codename(self):
+        """Test mapping an OpenStack codename to numerical version"""
+        self.assertEquals(openstack.get_os_version_codename('folsom'),
+                          '2012.2')
+
+    @patch('charmhelpers.contrib.openstack.utils.error_out')
+    def test_os_version_from_bad_codename(self, mocked_error):
+        """Test mapping a bad OpenStack codename to numerical version"""
+        openstack.get_os_version_codename('foo')
+        expected_err = 'Could not derive OpenStack version for codename: foo'
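+        # error_out is mocked, so the helper returns instead of exiting
+        # and the emitted message can be asserted directly.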
mocked_error.assert_called_with(expected_err) + + def test_os_version_swift_from_codename(self): + """Test mapping a swift codename to numerical version""" + self.assertEquals(openstack.get_os_version_codename_swift('liberty'), + '2.5.0') + + def test_get_swift_codename_single_version_kilo(self): + self.assertEquals(openstack.get_swift_codename('2.2.2'), 'kilo') + + @patch('charmhelpers.contrib.openstack.utils.error_out') + def test_os_version_swift_from_bad_codename(self, mocked_error): + """Test mapping a bad swift codename to numerical version""" + openstack.get_os_version_codename_swift('foo') + expected_err = 'Could not derive swift version for codename: foo' + mocked_error.assert_called_with(expected_err) + + def test_get_swift_codename_multiple_versions_liberty(self): + with patch('subprocess.check_output') as _subp: + _subp.return_value = b"... trusty-updates/liberty/main ..." + self.assertEquals(openstack.get_swift_codename('2.5.0'), 'liberty') + + def test_get_swift_codename_multiple_versions_mitaka(self): + with patch('subprocess.check_output') as _subp: + _subp.return_value = b"... trusty-updates/mitaka/main ..." + self.assertEquals(openstack.get_swift_codename('2.5.0'), 'mitaka') + + def test_get_swift_codename_none(self): + self.assertEquals(openstack.get_swift_codename('1.2.3'), None) + + @patch.object(openstack, 'snap_install_requested') + def test_os_codename_from_package(self, mock_snap_install_requested): + """Test deriving OpenStack codename from an installed package""" + mock_snap_install_requested.return_value = False + with patch.object(openstack, 'apt_cache') as cache: + cache.return_value = self._apt_cache() + for pkg, vers in six.iteritems(FAKE_REPO): + # test fake repo for all "installed" packages + if pkg.startswith('bad-'): + continue + if 'pkg_vers' not in vers: + continue + self.assertEquals(openstack.get_os_codename_package(pkg), + vers['os_release']) + + @patch.object(openstack, 'snap_install_requested') + @patch('charmhelpers.contrib.openstack.utils.error_out') + def test_os_codename_from_bad_package_version(self, mocked_error, + mock_snap_install_requested): + """Test deriving OpenStack codename for a poorly versioned package""" + mock_snap_install_requested.return_value = False + with patch.object(openstack, 'apt_cache') as cache: + cache.return_value = self._apt_cache() + openstack.get_os_codename_package('bad-version') + _e = ('Could not determine OpenStack codename for version 2200.1') + mocked_error.assert_called_with(_e) + + @patch.object(openstack, 'snap_install_requested') + @patch('charmhelpers.contrib.openstack.utils.error_out') + def test_os_codename_from_bad_package(self, mocked_error, + mock_snap_install_requested): + """Test deriving OpenStack codename from an unavailable package""" + mock_snap_install_requested.return_value = False + with patch.object(openstack, 'apt_cache') as cache: + cache.return_value = self._apt_cache() + try: + openstack.get_os_codename_package('foo') + except Exception: + # ignore exceptions that raise when error_out is mocked + # and doesn't sys.exit(1) + pass + e = 'Could not determine version of package with no installation '\ + 'candidate: foo' + mocked_error.assert_called_with(e) + + @patch.object(openstack, 'snap_install_requested') + def test_os_codename_from_bad_package_nonfatal( + self, mock_snap_install_requested): + """Test OpenStack codename from an unavailable package is non-fatal""" + mock_snap_install_requested.return_value = False + with patch.object(openstack, 'apt_cache') as cache: + 
cache.return_value = self._apt_cache() + self.assertEquals( + None, + openstack.get_os_codename_package('foo', fatal=False) + ) + + @patch.object(openstack, 'snap_install_requested') + @patch('charmhelpers.contrib.openstack.utils.error_out') + def test_os_codename_from_uninstalled_package(self, mock_error, + mock_snap_install_requested): + """Test OpenStack codename from an available but uninstalled pkg""" + mock_snap_install_requested.return_value = False + with patch.object(openstack, 'apt_cache') as cache: + cache.return_value = self._apt_cache() + try: + openstack.get_os_codename_package('cinder-common', fatal=True) + except Exception: + pass + e = ('Could not determine version of uninstalled package: ' + 'cinder-common') + mock_error.assert_called_with(e) + + @patch.object(openstack, 'snap_install_requested') + def test_os_codename_from_uninstalled_package_nonfatal( + self, mock_snap_install_requested): + """Test OpenStack codename from avail uninstalled pkg is non fatal""" + mock_snap_install_requested.return_value = False + with patch.object(openstack, 'apt_cache') as cache: + cache.return_value = self._apt_cache() + self.assertEquals( + None, + openstack.get_os_codename_package('cinder-common', fatal=False) + ) + + @patch.object(openstack, 'snap_install_requested') + @patch('charmhelpers.contrib.openstack.utils.error_out') + def test_os_version_from_package(self, mocked_error, + mock_snap_install_requested): + """Test deriving OpenStack version from an installed package""" + mock_snap_install_requested.return_value = False + with patch.object(openstack, 'apt_cache') as cache: + cache.return_value = self._apt_cache() + for pkg, vers in six.iteritems(FAKE_REPO): + if pkg.startswith('bad-'): + continue + if 'pkg_vers' not in vers: + continue + self.assertEquals(openstack.get_os_version_package(pkg), + vers['os_version']) + + @patch.object(openstack, 'snap_install_requested') + @patch('charmhelpers.contrib.openstack.utils.error_out') + def test_os_version_from_bad_package(self, mocked_error, + mock_snap_install_requested): + """Test deriving OpenStack version from an uninstalled package""" + mock_snap_install_requested.return_value = False + with patch.object(openstack, 'apt_cache') as cache: + cache.return_value = self._apt_cache() + try: + openstack.get_os_version_package('foo') + except Exception: + # ignore exceptions that raise when error_out is mocked + # and doesn't sys.exit(1) + pass + e = 'Could not determine version of package with no installation '\ + 'candidate: foo' + mocked_error.assert_called_with(e) + + @patch.object(openstack, 'snap_install_requested') + def test_os_version_from_bad_package_nonfatal( + self, mock_snap_install_requested): + """Test OpenStack version from an uninstalled package is non-fatal""" + mock_snap_install_requested.return_value = False + with patch.object(openstack, 'apt_cache') as cache: + cache.return_value = self._apt_cache() + self.assertEquals( + None, + openstack.get_os_version_package('foo', fatal=False) + ) + + @patch.object(openstack, 'get_os_codename_package') + @patch('charmhelpers.contrib.openstack.utils.config') + def test_os_release_uncached(self, config, get_cn): + openstack._os_rel = None + get_cn.return_value = 'folsom' + self.assertEquals('folsom', openstack.os_release('nova-common')) + + def test_os_release_cached(self): + openstack._os_rel = 'foo' + self.assertEquals('foo', openstack.os_release('nova-common')) + + @patch.object(openstack, 'juju_log') + @patch('sys.exit') + def test_error_out(self, mocked_exit, juju_log): + 
"""Test erroring out""" + openstack.error_out('Everything broke.') + _log = 'FATAL ERROR: Everything broke.' + juju_log.assert_called_with(_log, level='ERROR') + mocked_exit.assert_called_with(1) + + def test_get_source_and_pgp_key(self): + tests = { + "source|key": ('source', 'key'), + "source|": ('source', None), + "|key": ('', 'key'), + "source": ('source', None), + } + for k, v in six.iteritems(tests): + self.assertEqual(openstack.get_source_and_pgp_key(k), v) + + # These should still work, even though the bulk of the functionality has + # moved to charmhelpers.fetch.add_source() + def test_configure_install_source_distro(self): + """Test configuring installation from distro""" + self.assertIsNone(openstack.configure_installation_source('distro')) + + def test_configure_install_source_ppa(self): + """Test configuring installation source from PPA""" + with patch('subprocess.check_call') as mock: + src = 'ppa:gandelman-a/openstack' + openstack.configure_installation_source(src) + ex_cmd = [ + 'add-apt-repository', '--yes', 'ppa:gandelman-a/openstack'] + mock.assert_called_with(ex_cmd, env={}) + + @patch('subprocess.check_call') + @patch.object(fetch, 'import_key') + def test_configure_install_source_deb_url(self, _import, _spcc): + """Test configuring installation source from deb repo url""" + src = ('deb http://ubuntu-cloud.archive.canonical.com/ubuntu ' + 'precise-havana main|KEYID') + openstack.configure_installation_source(src) + _import.assert_called_with('KEYID') + _spcc.assert_called_once_with( + ['add-apt-repository', '--yes', + 'deb http://ubuntu-cloud.archive.canonical.com/ubuntu ' + 'precise-havana main'], env={}) + + @patch.object(fetch, 'get_distrib_codename') + @patch(builtin_open) + @patch('subprocess.check_call') + def test_configure_install_source_distro_proposed( + self, _spcc, _open, _lsb): + """Test configuring installation source from deb repo url""" + _lsb.return_value = FAKE_CODENAME + _file = MagicMock(spec=io.FileIO) + _open.return_value = _file + openstack.configure_installation_source('distro-proposed') + _file.__enter__().write.assert_called_once_with( + '# Proposed\ndeb http://archive.ubuntu.com/ubuntu ' + 'precise-proposed main universe multiverse restricted\n') + src = ('deb http://archive.ubuntu.com/ubuntu/ precise-proposed ' + 'restricted main multiverse universe') + openstack.configure_installation_source(src) + _spcc.assert_called_once_with( + ['add-apt-repository', '--yes', + 'deb http://archive.ubuntu.com/ubuntu/ precise-proposed ' + 'restricted main multiverse universe'], env={}) + + @patch('charmhelpers.fetch.filter_installed_packages') + @patch('charmhelpers.fetch.apt_install') + @patch.object(openstack, 'error_out') + @patch.object(openstack, 'juju_log') + def test_add_source_cloud_invalid_pocket(self, _log, _out, + apt_install, filter_pkg): + openstack.configure_installation_source("cloud:havana-updates") + _e = ('Invalid Cloud Archive release specified: ' + 'havana-updates on this Ubuntuversion') + _s = _out.call_args[0][0] + self.assertTrue(_s.startswith(_e)) + + @patch.object(fetch, 'filter_installed_packages') + @patch.object(fetch, 'apt_install') + @patch.object(fetch, 'get_distrib_codename') + def test_add_source_cloud_pocket_style(self, get_distrib_codename, + apt_install, filter_pkg): + source = "cloud:precise-updates/havana" + get_distrib_codename.return_value = 'precise' + result = ( + "# Ubuntu Cloud Archive\n" + "deb http://ubuntu-cloud.archive.canonical.com/ubuntu " + "precise-updates/havana main\n") + with patch_open() as 
(mock_open, mock_file): + openstack.configure_installation_source(source) + mock_file.write.assert_called_with(result) + filter_pkg.assert_called_with(['ubuntu-cloud-keyring']) + + @patch.object(fetch, 'filter_installed_packages') + @patch.object(fetch, 'apt_install') + @patch.object(fetch, 'get_distrib_codename') + def test_add_source_cloud_os_style(self, get_distrib_codename, + apt_install, filter_pkg): + source = "cloud:precise-havana" + get_distrib_codename.return_value = 'precise' + result = ( + "# Ubuntu Cloud Archive\n" + "deb http://ubuntu-cloud.archive.canonical.com/ubuntu " + "precise-updates/havana main\n") + with patch_open() as (mock_open, mock_file): + openstack.configure_installation_source(source) + mock_file.write.assert_called_with(result) + filter_pkg.assert_called_with(['ubuntu-cloud-keyring']) + + @patch.object(fetch, 'filter_installed_packages') + @patch.object(fetch, 'apt_install') + def test_add_source_cloud_distroless_style(self, apt_install, filter_pkg): + source = "cloud:havana" + result = ( + "# Ubuntu Cloud Archive\n" + "deb http://ubuntu-cloud.archive.canonical.com/ubuntu " + "precise-updates/havana main\n") + with patch_open() as (mock_open, mock_file): + openstack.configure_installation_source(source) + mock_file.write.assert_called_with(result) + filter_pkg.assert_called_with(['ubuntu-cloud-keyring']) + + @patch('charmhelpers.fetch.ubuntu.log', lambda *args, **kwargs: None) + @patch('charmhelpers.contrib.openstack.utils.juju_log', + lambda *args, **kwargs: None) + @patch('charmhelpers.contrib.openstack.utils.error_out') + def test_configure_bad_install_source(self, _error): + openstack.configure_installation_source('foo') + _error.assert_called_with("Unknown source: 'foo'") + + @patch.object(fetch, 'get_distrib_codename') + def test_configure_install_source_uca_staging(self, _lsb): + """Test configuring installation source from UCA staging sources""" + _lsb.return_value = FAKE_CODENAME + # staging pockets are configured as PPAs + with patch('subprocess.check_call') as _subp: + src = 'cloud:precise-folsom/staging' + openstack.configure_installation_source(src) + cmd = ['add-apt-repository', '-y', + 'ppa:ubuntu-cloud-archive/folsom-staging'] + _subp.assert_called_with(cmd, env={}) + + @patch(builtin_open) + @patch.object(fetch, 'apt_install') + @patch.object(fetch, 'get_distrib_codename') + @patch.object(fetch, 'filter_installed_packages') + def test_configure_install_source_uca_repos( + self, _fip, _lsb, _install, _open): + """Test configuring installation source from UCA sources""" + _lsb.return_value = FAKE_CODENAME + _file = MagicMock(spec=io.FileIO) + _open.return_value = _file + _fip.side_effect = lambda x: x + for src, url in UCA_SOURCES: + actual_url = "# Ubuntu Cloud Archive\n{}\n".format(url) + openstack.configure_installation_source(src) + _install.assert_called_with(['ubuntu-cloud-keyring'], + fatal=True) + _open.assert_called_with( + '/etc/apt/sources.list.d/cloud-archive.list', + 'w' + ) + _file.__enter__().write.assert_called_with(actual_url) + + @patch('charmhelpers.contrib.openstack.utils.error_out') + def test_configure_install_source_bad_uca(self, mocked_error): + """Test configuring installation source from bad UCA source""" + try: + openstack.configure_installation_source('cloud:foo-bar') + except Exception: + # ignore exceptions that raise when error_out is mocked + # and doesn't sys.exit(1) + pass + _e = ('Invalid Cloud Archive release specified: foo-bar' + ' on this Ubuntuversion') + _s = mocked_error.call_args[0][0] + 
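+        # Only the message prefix is checked; the real error text
+        # presumably continues with the list of valid releases.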
self.assertTrue(_s.startswith(_e))
+
+    @patch.object(openstack, 'fetch_import_key')
+    def test_import_key_calls_fetch_import_key(self, fetch_import_key):
+        openstack.import_key('random-string')
+        fetch_import_key.assert_called_once_with('random-string')
+
+    @patch.object(openstack, 'juju_log', lambda *args, **kwargs: None)
+    @patch.object(openstack, 'fetch_import_key')
+    @patch.object(openstack, 'sys')
+    def test_import_key_calls_sys_exit_on_error(self, mock_sys,
+                                                fetch_import_key):
+
+        def raiser(_):
+            raise openstack.GPGKeyError("an error occurred")
+        fetch_import_key.side_effect = raiser
+        openstack.import_key('random failure')
+        mock_sys.exit.assert_called_once_with(1)
+
+    @patch('os.mkdir')
+    @patch('os.path.exists')
+    @patch('charmhelpers.contrib.openstack.utils.charm_dir')
+    @patch(builtin_open)
+    def test_save_scriptrc(self, _open, _charm_dir, _exists, _mkdir):
+        """Test generation of scriptrc from environment"""
+        scriptrc = ['#!/bin/bash\n',
+                    'export setting1=foo\n',
+                    'export setting2=bar\n']
+        _file = MagicMock(spec=io.FileIO)
+        _open.return_value = _file
+        _charm_dir.return_value = '/var/lib/juju/units/testing-foo-0/charm'
+        _exists.return_value = False
+        os.environ['JUJU_UNIT_NAME'] = 'testing-foo/0'
+        openstack.save_script_rc(setting1='foo', setting2='bar')
+        rcdir = '/var/lib/juju/units/testing-foo-0/charm/scripts'
+        _mkdir.assert_called_with(rcdir)
+        expected_f = '/var/lib/juju/units/testing-foo-0/charm/scripts/scriptrc'
+        _open.assert_called_with(expected_f, 'wt')
+        _mkdir.assert_called_with(os.path.dirname(expected_f))
+        _file.__enter__().write.assert_has_calls(
+            list(call(line) for line in scriptrc), any_order=True)
+
+    @patch.object(openstack, 'lsb_release')
+    @patch.object(openstack, 'get_os_version_package')
+    @patch.object(openstack, 'get_os_version_codename_swift')
+    @patch.object(openstack, 'config')
+    def test_openstack_upgrade_detection_true(self, config, vers_swift,
+                                              vers_pkg, lsb):
+        """Test it detects when an openstack package has available upgrade"""
+        lsb.return_value = FAKE_RELEASE
+        config.return_value = 'cloud:precise-havana'
+        vers_pkg.return_value = '2013.1.1'
+        self.assertTrue(openstack.openstack_upgrade_available('nova-common'))
+        # milestone to major release detection
+        vers_pkg.return_value = '2013.2~b1'
+        self.assertTrue(openstack.openstack_upgrade_available('nova-common'))
+        vers_pkg.return_value = '1.9.0'
+        vers_swift.return_value = '2.5.0'
+        self.assertTrue(openstack.openstack_upgrade_available('swift-proxy'))
+        vers_pkg.return_value = '2.5.0'
+        vers_swift.return_value = '2.10.0'
+        self.assertTrue(openstack.openstack_upgrade_available('swift-proxy'))
+
+    @patch.object(openstack, 'lsb_release')
+    @patch.object(openstack, 'get_os_version_package')
+    @patch.object(openstack, 'config')
+    def test_openstack_upgrade_detection_false(self, config, vers_pkg, lsb):
+        """Test it detects when an openstack upgrade is not necessary"""
+        lsb.return_value = FAKE_RELEASE
+        config.return_value = 'cloud:precise-folsom'
+        vers_pkg.return_value = '2013.1.1'
+        self.assertFalse(openstack.openstack_upgrade_available('nova-common'))
+        # milestone to major release detection
+        vers_pkg.return_value = '2013.1~b1'
+        self.assertFalse(openstack.openstack_upgrade_available('nova-common'))
+        # ugly duckling testing
+        config.return_value = 'cloud:precise-havana'
+        vers_pkg.return_value = '1.10.0'
+        self.assertFalse(openstack.openstack_upgrade_available('swift-proxy'))
+
+    @patch.object(openstack, 'is_block_device')
+    @patch.object(openstack, 'error_out')
+    def test_ensure_block_device_bad_config(self, err, is_bd):
+        """Test it doesn't prepare storage with bad config"""
+        openstack.ensure_block_device(block_device='none')
+        self.assertTrue(err.called)
+
+    @patch.object(openstack, 'is_block_device')
+    @patch.object(openstack, 'ensure_loopback_device')
+    def test_ensure_block_device_loopback(self, ensure_loopback, is_bd):
+        """Test it ensures loopback device when checking block device"""
+        defsize = openstack.DEFAULT_LOOPBACK_SIZE
+        is_bd.return_value = True
+
+        ensure_loopback.return_value = '/tmp/cinder.img'
+        result = openstack.ensure_block_device('/tmp/cinder.img')
+        ensure_loopback.assert_called_with('/tmp/cinder.img', defsize)
+        self.assertEquals(result, '/tmp/cinder.img')
+
+        ensure_loopback.return_value = '/tmp/cinder-2.img'
+        result = openstack.ensure_block_device('/tmp/cinder-2.img|15G')
+        ensure_loopback.assert_called_with('/tmp/cinder-2.img', '15G')
+        self.assertEquals(result, '/tmp/cinder-2.img')
+
+    @patch.object(openstack, 'is_block_device')
+    def test_ensure_standard_block_device(self, is_bd):
+        """Test it looks for storage at both relative and full device path"""
+        for dev in ['vdb', '/dev/vdb']:
+            openstack.ensure_block_device(dev)
+            is_bd.assert_called_with('/dev/vdb')
+
+    @patch.object(openstack, 'is_block_device')
+    @patch.object(openstack, 'error_out')
+    def test_ensure_nonexistent_block_device(self, error_out, is_bd):
+        """Test it will not ensure a non-existent block device"""
+        is_bd.return_value = False
+        openstack.ensure_block_device(block_device='foo')
+        self.assertTrue(error_out.called)
+
+    @patch.object(openstack, 'juju_log')
+    @patch.object(openstack, 'umount')
+    @patch.object(openstack, 'mounts')
+    @patch.object(openstack, 'zap_disk')
+    @patch.object(openstack, 'is_lvm_physical_volume')
+    def test_clean_storage_unmount(self, is_pv, zap_disk, mounts, umount, log):
+        """Test it unmounts block device when cleaning storage"""
+        is_pv.return_value = False
+        zap_disk.return_value = True
+        mounts.return_value = MOUNTS
+        openstack.clean_storage('/dev/vdb')
+        # mock's no-op 'called_with' replaced with a real assertion.
+        self.assertTrue(umount.called)
+
+    @patch.object(openstack, 'juju_log')
+    @patch.object(openstack, 'remove_lvm_physical_volume')
+    @patch.object(openstack, 'deactivate_lvm_volume_group')
+    @patch.object(openstack, 'mounts')
+    @patch.object(openstack, 'is_lvm_physical_volume')
+    def test_clean_storage_lvm_wipe(self, is_pv, mounts, rm_lv, rm_vg, log):
+        """Test it removes traces of LVM when cleaning storage"""
+        mounts.return_value = []
+        is_pv.return_value = True
+        openstack.clean_storage('/dev/vdb')
+        rm_lv.assert_called_with('/dev/vdb')
+        rm_vg.assert_called_with('/dev/vdb')
+
+    @patch.object(openstack, 'zap_disk')
+    @patch.object(openstack, 'is_lvm_physical_volume')
+    @patch.object(openstack, 'mounts')
+    def test_clean_storage_zap_disk(self, mounts, is_pv, zap_disk):
+        """It removes traces of LVM when cleaning storage"""
+        mounts.return_value = []
+        is_pv.return_value = False
+        openstack.clean_storage('/dev/vdb')
+        zap_disk.assert_called_with('/dev/vdb')
+
+    @patch('os.path.isfile')
+    @patch(builtin_open)
+    def test_get_matchmaker_map(self, _open, _isfile):
+        _isfile.return_value = True
+        mm_data = """
+        {
+           "cinder-scheduler": [
+             "juju-t-machine-4"
+            ]
+        }
+        """
+        fh = _open.return_value.__enter__.return_value
+        fh.read.return_value = mm_data
+        self.assertEqual(
+            openstack.get_matchmaker_map(),
+            {'cinder-scheduler': ['juju-t-machine-4']}
+        )
+
+    @patch('os.path.isfile')
+    @patch(builtin_open)
+    def test_get_matchmaker_map_nofile(self, _open, _isfile):
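+        # A missing matchmaker file should yield an empty map, not an error.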
_isfile.return_value = False + self.assertEqual( + openstack.get_matchmaker_map(), + {} + ) + + def test_incomplete_relation_data(self): + configs = MagicMock() + configs.complete_contexts.return_value = ['pgsql-db', 'amqp'] + required_interfaces = { + 'database': ['shared-db', 'pgsql-db'], + 'message': ['amqp', 'zeromq-configuration'], + 'identity': ['identity-service']} + expected_result = 'identity' + + result = openstack.incomplete_relation_data( + configs, required_interfaces) + self.assertTrue(expected_result in result.keys()) + + @patch.object(openstack, 'juju_log') + @patch('charmhelpers.contrib.openstack.utils.status_set') + @patch('charmhelpers.contrib.openstack.utils.is_unit_paused_set', + return_value=False) + def test_set_os_workload_status_complete( + self, is_unit_paused_set, status_set, log): + configs = MagicMock() + configs.complete_contexts.return_value = ['shared-db', + 'amqp', + 'identity-service'] + required_interfaces = { + 'database': ['shared-db', 'pgsql-db'], + 'message': ['amqp', 'zeromq-configuration'], + 'identity': ['identity-service']} + + openstack.set_os_workload_status(configs, required_interfaces) + status_set.assert_called_with('active', 'Unit is ready') + + @patch.object(openstack, 'juju_log') + @patch('charmhelpers.contrib.openstack.utils.incomplete_relation_data', + return_value={'identity': {'identity-service': {'related': True}}}) + @patch('charmhelpers.contrib.openstack.utils.status_set') + @patch('charmhelpers.contrib.openstack.utils.is_unit_paused_set', + return_value=False) + def test_set_os_workload_status_related_incomplete( + self, is_unit_paused_set, status_set, + incomplete_relation_data, log): + configs = MagicMock() + configs.complete_contexts.return_value = ['shared-db', 'amqp'] + required_interfaces = { + 'database': ['shared-db', 'pgsql-db'], + 'message': ['amqp', 'zeromq-configuration'], + 'identity': ['identity-service']} + + openstack.set_os_workload_status(configs, required_interfaces) + status_set.assert_called_with('waiting', + "Incomplete relations: identity") + + @patch.object(openstack, 'juju_log') + @patch('charmhelpers.contrib.openstack.utils.incomplete_relation_data', + return_value={'identity': {'identity-service': {'related': False}}}) + @patch('charmhelpers.contrib.openstack.utils.status_set') + @patch('charmhelpers.contrib.openstack.utils.is_unit_paused_set', + return_value=False) + def test_set_os_workload_status_absent( + self, is_unit_paused_set, status_set, + incomplete_relation_data, log): + configs = MagicMock() + configs.complete_contexts.return_value = ['shared-db', 'amqp'] + required_interfaces = { + 'database': ['shared-db', 'pgsql-db'], + 'message': ['amqp', 'zeromq-configuration'], + 'identity': ['identity-service']} + + openstack.set_os_workload_status(configs, required_interfaces) + status_set.assert_called_with('blocked', + 'Missing relations: identity') + + @patch.object(openstack, 'juju_log') + @patch('charmhelpers.contrib.openstack.utils.hook_name', + return_value='identity-service-relation-broken') + @patch('charmhelpers.contrib.openstack.utils.incomplete_relation_data', + return_value={'identity': {'identity-service': {'related': True}}}) + @patch('charmhelpers.contrib.openstack.utils.status_set') + @patch('charmhelpers.contrib.openstack.utils.is_unit_paused_set', + return_value=False) + def test_set_os_workload_status_related_broken( + self, is_unit_paused_set, status_set, + incomplete_relation_data, hook_name, log): + configs = MagicMock() + configs.complete_contexts.return_value = ['shared-db', 
'amqp'] + required_interfaces = { + 'database': ['shared-db', 'pgsql-db'], + 'message': ['amqp', 'zeromq-configuration'], + 'identity': ['identity-service']} + + openstack.set_os_workload_status(configs, required_interfaces) + status_set.assert_called_with('blocked', + "Missing relations: identity") + + @patch.object(openstack, 'juju_log') + @patch('charmhelpers.contrib.openstack.utils.incomplete_relation_data', + return_value={'identity': + {'identity-service': {'related': True}}, + + 'message': + {'amqp': {'missing_data': ['rabbitmq-password'], + 'related': True}}, + + 'database': + {'shared-db': {'related': False}} + }) + @patch('charmhelpers.contrib.openstack.utils.status_set') + @patch('charmhelpers.contrib.openstack.utils.is_unit_paused_set', + return_value=False) + def test_set_os_workload_status_mixed( + self, is_unit_paused_set, status_set, + incomplete_relation_data, log): + configs = MagicMock() + configs.complete_contexts.return_value = ['shared-db', 'amqp'] + required_interfaces = { + 'database': ['shared-db', 'pgsql-db'], + 'message': ['amqp', 'zeromq-configuration'], + 'identity': ['identity-service']} + + openstack.set_os_workload_status(configs, required_interfaces) + + args = status_set.call_args + actual_parm1 = args[0][0] + actual_parm2 = args[0][1] + expected1 = ("Missing relations: database; incomplete relations: " + "identity, message") + expected2 = ("Missing relations: database; incomplete relations: " + "message, identity") + self.assertTrue(actual_parm1 == 'blocked') + self.assertTrue(actual_parm2 == expected1 or actual_parm2 == expected2) + + @patch('charmhelpers.contrib.openstack.utils.service_running') + @patch('charmhelpers.contrib.openstack.utils.port_has_listener') + @patch.object(openstack, 'juju_log') + @patch('charmhelpers.contrib.openstack.utils.status_set') + @patch('charmhelpers.contrib.openstack.utils.is_unit_paused_set', + return_value=False) + def test_set_os_workload_status_complete_with_services_list( + self, is_unit_paused_set, status_set, log, + port_has_listener, service_running): + configs = MagicMock() + configs.complete_contexts.return_value = [] + required_interfaces = {} + + services = ['database', 'identity'] + # Assume that the service and ports are open. 
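+        # (Plain service-name form; the dict form with explicit ports is
+        # exercised in the tests below.)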
+ port_has_listener.return_value = True + service_running.return_value = True + + openstack.set_os_workload_status( + configs, required_interfaces, services=services) + status_set.assert_called_with('active', 'Unit is ready') + + @patch('charmhelpers.contrib.openstack.utils.service_running') + @patch('charmhelpers.contrib.openstack.utils.port_has_listener') + @patch.object(openstack, 'juju_log') + @patch('charmhelpers.contrib.openstack.utils.status_set') + @patch('charmhelpers.contrib.openstack.utils.is_unit_paused_set', + return_value=False) + def test_set_os_workload_status_complete_services_list_not_running( + self, is_unit_paused_set, status_set, log, + port_has_listener, service_running): + configs = MagicMock() + configs.complete_contexts.return_value = [] + required_interfaces = {} + + services = ['database', 'identity'] + port_has_listener.return_value = True + # Fail the identity service + service_running.side_effect = [True, False] + + openstack.set_os_workload_status( + configs, required_interfaces, services=services) + status_set.assert_called_with( + 'blocked', + 'Services not running that should be: identity') + + @patch('charmhelpers.contrib.openstack.utils.service_running') + @patch('charmhelpers.contrib.openstack.utils.port_has_listener') + @patch.object(openstack, 'juju_log') + @patch('charmhelpers.contrib.openstack.utils.status_set') + @patch('charmhelpers.contrib.openstack.utils.is_unit_paused_set', + return_value=False) + def test_set_os_workload_status_complete_with_services( + self, is_unit_paused_set, status_set, log, + port_has_listener, service_running): + configs = MagicMock() + configs.complete_contexts.return_value = [] + required_interfaces = {} + + services = [ + {'service': 'database', 'ports': [10, 20]}, + {'service': 'identity', 'ports': [30]}, + ] + # Assume that the service and ports are open. 
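+        # Each entry pairs a service with the ports it should listen on.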
+        port_has_listener.return_value = True
+        service_running.return_value = True
+
+        openstack.set_os_workload_status(
+            configs, required_interfaces, services=services)
+        status_set.assert_called_with('active', 'Unit is ready')
+
+    @patch('charmhelpers.contrib.openstack.utils.service_running')
+    @patch('charmhelpers.contrib.openstack.utils.port_has_listener')
+    @patch.object(openstack, 'juju_log')
+    @patch('charmhelpers.contrib.openstack.utils.status_set')
+    @patch('charmhelpers.contrib.openstack.utils.is_unit_paused_set',
+           return_value=False)
+    def test_set_os_workload_status_complete_service_not_running(
+            self, is_unit_paused_set, status_set, log,
+            port_has_listener, service_running):
+        configs = MagicMock()
+        configs.complete_contexts.return_value = []
+        required_interfaces = {}
+
+        services = [
+            {'service': 'database', 'ports': [10, 20]},
+            {'service': 'identity', 'ports': [30]},
+        ]
+        port_has_listener.return_value = True
+        # Fail the identity service
+        service_running.side_effect = [True, False]
+
+        openstack.set_os_workload_status(
+            configs, required_interfaces, services=services)
+        status_set.assert_called_with(
+            'blocked',
+            'Services not running that should be: identity')
+
+    @patch('charmhelpers.contrib.openstack.utils.service_running')
+    @patch('charmhelpers.contrib.openstack.utils.port_has_listener')
+    @patch.object(openstack, 'juju_log')
+    @patch('charmhelpers.contrib.openstack.utils.status_set')
+    @patch('charmhelpers.contrib.openstack.utils.is_unit_paused_set',
+           return_value=False)
+    def test_set_os_workload_status_complete_port_not_open(
+            self, is_unit_paused_set, status_set, log,
+            port_has_listener, service_running):
+        configs = MagicMock()
+        configs.complete_contexts.return_value = []
+        required_interfaces = {}
+
+        services = [
+            {'service': 'database', 'ports': [10, 20]},
+            {'service': 'identity', 'ports': [30]},
+        ]
+        # Close port 20 of the database service; all services keep running
+        port_has_listener.side_effect = [True, False, True]
+        service_running.return_value = True
+
+        openstack.set_os_workload_status(
+            configs, required_interfaces, services=services)
+        status_set.assert_called_with(
+            'blocked',
+            'Services with ports not open that should be:'
+            ' database: [20]')
+
+    @patch('charmhelpers.contrib.openstack.utils.port_has_listener')
+    @patch.object(openstack, 'juju_log')
+    @patch('charmhelpers.contrib.openstack.utils.status_set')
+    @patch('charmhelpers.contrib.openstack.utils.is_unit_paused_set',
+           return_value=False)
+    def test_set_os_workload_status_complete_ports_not_open(
+            self, is_unit_paused_set, status_set, log, port_has_listener):
+        configs = MagicMock()
+        configs.complete_contexts.return_value = []
+        required_interfaces = {}
+
+        ports = [50, 60, 70]
+        port_has_listener.side_effect = [True, False, True]
+
+        openstack.set_os_workload_status(
+            configs, required_interfaces, ports=ports)
+        status_set.assert_called_with(
+            'blocked',
+            'Ports which should be open, but are not: 60')
+
+    @patch.object(openstack, 'juju_log')
+    @patch('charmhelpers.contrib.openstack.utils.status_set')
+    @patch('charmhelpers.contrib.openstack.utils.is_unit_paused_set',
+           return_value=True)
+    def test_set_os_workload_status_paused_simple(
+            self, is_unit_paused_set, status_set, log):
+        configs = MagicMock()
+        configs.complete_contexts.return_value = []
+        required_interfaces = {}
+
+        openstack.set_os_workload_status(configs, required_interfaces)
+        status_set.assert_called_with(
+            'maintenance',
+            "Paused. Use 'resume' action to resume normal service.")
+
+    @patch('charmhelpers.contrib.openstack.utils.service_running')
+    @patch('charmhelpers.contrib.openstack.utils.port_has_listener')
+    @patch.object(openstack, 'juju_log')
+    @patch('charmhelpers.contrib.openstack.utils.status_set')
+    @patch('charmhelpers.contrib.openstack.utils.is_unit_paused_set',
+           return_value=True)
+    def test_set_os_workload_status_paused_services_check(
+            self, is_unit_paused_set, status_set, log,
+            port_has_listener, service_running):
+        configs = MagicMock()
+        configs.complete_contexts.return_value = []
+        required_interfaces = {}
+
+        services = [
+            {'service': 'database', 'ports': [10, 20]},
+            {'service': 'identity', 'ports': [30]},
+        ]
+        port_has_listener.return_value = False
+        service_running.side_effect = [False, False]
+
+        openstack.set_os_workload_status(
+            configs, required_interfaces, services=services)
+        status_set.assert_called_with(
+            'maintenance',
+            "Paused. Use 'resume' action to resume normal service.")
+
+    @patch('charmhelpers.contrib.openstack.utils.service_running')
+    @patch('charmhelpers.contrib.openstack.utils.port_has_listener')
+    @patch.object(openstack, 'juju_log')
+    @patch('charmhelpers.contrib.openstack.utils.status_set')
+    @patch('charmhelpers.contrib.openstack.utils.is_unit_paused_set',
+           return_value=True)
+    def test_set_os_workload_status_paused_services_fail(
+            self, is_unit_paused_set, status_set, log,
+            port_has_listener, service_running):
+        configs = MagicMock()
+        configs.complete_contexts.return_value = []
+        required_interfaces = {}
+
+        services = [
+            {'service': 'database', 'ports': [10, 20]},
+            {'service': 'identity', 'ports': [30]},
+        ]
+        port_has_listener.return_value = False
+        # The identity service is still running despite the pause
+        service_running.side_effect = [False, True]
+
+        openstack.set_os_workload_status(
+            configs, required_interfaces, services=services)
+        status_set.assert_called_with(
+            'blocked',
+            "Services should be paused but these services running: identity")
+
+    @patch('charmhelpers.contrib.openstack.utils.service_running')
+    @patch('charmhelpers.contrib.openstack.utils.port_has_listener')
+    @patch.object(openstack, 'juju_log')
+    @patch('charmhelpers.contrib.openstack.utils.status_set')
+    @patch('charmhelpers.contrib.openstack.utils.is_unit_paused_set',
+           return_value=True)
+    def test_set_os_workload_status_paused_services_ports_fail(
+            self, is_unit_paused_set, status_set, log,
+            port_has_listener, service_running):
+        configs = MagicMock()
+        configs.complete_contexts.return_value = []
+        required_interfaces = {}
+
+        services = [
+            {'service': 'database', 'ports': [10, 20]},
+            {'service': 'identity', 'ports': [30]},
+        ]
+        # Leave port 20 of the database service listening despite the pause
+        port_has_listener.side_effect = [False, True, False]
+        service_running.return_value = False
+
+        openstack.set_os_workload_status(
+            configs, required_interfaces, services=services)
+        status_set.assert_called_with(
+            'blocked',
+            "Services should be paused but these service:ports are open:"
+            " database: [20]")
+
+    @patch('charmhelpers.contrib.openstack.utils.port_has_listener')
+    @patch.object(openstack, 'juju_log')
+    @patch('charmhelpers.contrib.openstack.utils.status_set')
+    @patch('charmhelpers.contrib.openstack.utils.is_unit_paused_set',
+           return_value=True)
+    def test_set_os_workload_status_paused_ports_check(
+            self, is_unit_paused_set, status_set, log,
+            port_has_listener):
+        configs = MagicMock()
+        configs.complete_contexts.return_value = []
+        required_interfaces = {}
+
+        ports = [50, 60, 70]
+        port_has_listener.side_effect = [False, False, False]
+
+        openstack.set_os_workload_status(
+            configs, required_interfaces, ports=ports)
+        status_set.assert_called_with(
+            'maintenance',
+            "Paused. Use 'resume' action to resume normal service.")
+
+    @patch('charmhelpers.contrib.openstack.utils.port_has_listener')
+    @patch.object(openstack, 'juju_log')
+    @patch('charmhelpers.contrib.openstack.utils.status_set')
+    @patch('charmhelpers.contrib.openstack.utils.is_unit_paused_set',
+           return_value=True)
+    def test_set_os_workload_status_paused_ports_fail(
+            self, is_unit_paused_set, status_set, log,
+            port_has_listener):
+        configs = MagicMock()
+        configs.complete_contexts.return_value = []
+        required_interfaces = {}
+
+        # Leave port 70 open so the unit does not appear fully paused
+        ports = [50, 60, 70]
+        port_has_listener.side_effect = [False, False, True]
+
+        openstack.set_os_workload_status(
+            configs, required_interfaces, ports=ports)
+        status_set.assert_called_with(
+            'blocked',
+            "Services should be paused but "
+            "these ports which should be closed, but are open: 70")
+
+    @patch('charmhelpers.contrib.openstack.utils.service_running')
+    @patch('charmhelpers.contrib.openstack.utils.port_has_listener')
+    def test_check_actually_paused_simple_services(
+            self, port_has_listener, service_running):
+        services = ['database', 'identity']
+        port_has_listener.return_value = False
+        service_running.return_value = False
+
+        state, message = openstack.check_actually_paused(
+            services)
+        self.assertEquals(state, None)
+        self.assertEquals(message, None)
+
+    @patch('charmhelpers.contrib.openstack.utils.service_running')
+    @patch('charmhelpers.contrib.openstack.utils.port_has_listener')
+    def test_check_actually_paused_simple_services_fail(
+            self, port_has_listener, service_running):
+        services = ['database', 'identity']
+        port_has_listener.return_value = False
+        service_running.side_effect = [False, True]
+
+        state, message = openstack.check_actually_paused(
+            services)
+        self.assertEquals(state, 'blocked')
+        self.assertEquals(
+            message,
+            "Services should be paused but these services running: identity")
+
+    @patch('charmhelpers.contrib.openstack.utils.service_running')
+    @patch('charmhelpers.contrib.openstack.utils.port_has_listener')
+    def test_check_actually_paused_services_dict(
+            self, port_has_listener, service_running):
+        services = [
+            {'service': 'database', 'ports': [10, 20]},
+            {'service': 'identity', 'ports': [30]},
+        ]
+        # All services are stopped and all ports are closed.
+        port_has_listener.return_value = False
+        service_running.return_value = False
+
+        state, message = openstack.check_actually_paused(
+            services)
+        self.assertEquals(state, None)
+        self.assertEquals(message, None)
+
+    @patch('charmhelpers.contrib.openstack.utils.service_running')
+    @patch('charmhelpers.contrib.openstack.utils.port_has_listener')
+    def test_check_actually_paused_services_dict_fail(
+            self, port_has_listener, service_running):
+        services = [
+            {'service': 'database', 'ports': [10, 20]},
+            {'service': 'identity', 'ports': [30]},
+        ]
+        # Ports are closed, but the identity service is still running.
+        port_has_listener.return_value = False
+        service_running.side_effect = [False, True]
+
+        state, message = openstack.check_actually_paused(
+            services)
+        self.assertEquals(state, 'blocked')
+        self.assertEquals(
+            message,
+            "Services should be paused but these services running: identity")
+
+    @patch('charmhelpers.contrib.openstack.utils.service_running')
+    @patch('charmhelpers.contrib.openstack.utils.port_has_listener')
+    def test_check_actually_paused_services_dict_ports_fail(
+            self, port_has_listener, service_running):
+        services = [
+            {'service': 'database', 'ports': [10, 20]},
+            {'service': 'identity', 'ports': [30]},
+        ]
+        # Services are stopped, but port 20 of the database service is
+        # still open.
+        port_has_listener.side_effect = [False, True, False]
+        service_running.return_value = False
+
+        state, message = openstack.check_actually_paused(
+            services)
+        self.assertEquals(state, 'blocked')
+        self.assertEquals(message,
+                          'Services should be paused but these service:ports'
+                          ' are open: database: [20]')
+
+    @patch('charmhelpers.contrib.openstack.utils.service_running')
+    @patch('charmhelpers.contrib.openstack.utils.port_has_listener')
+    def test_check_actually_paused_ports_okay(
+            self, port_has_listener, service_running):
+        port_has_listener.side_effect = [False, False, False]
+        service_running.return_value = False
+        ports = [50, 60, 70]
+
+        state, message = openstack.check_actually_paused(
+            ports=ports)
+        self.assertEquals(state, None)
+        self.assertEquals(message, None)
+
+    @patch('charmhelpers.contrib.openstack.utils.service_running')
+    @patch('charmhelpers.contrib.openstack.utils.port_has_listener')
+    def test_check_actually_paused_ports_fail(
+            self, port_has_listener, service_running):
+        port_has_listener.side_effect = [False, True, False]
+        service_running.return_value = False
+        ports = [50, 60, 70]
+
+        state, message = openstack.check_actually_paused(
+            ports=ports)
+        self.assertEquals(state, 'blocked')
+        self.assertEquals(message,
+                          'Services should be paused but these ports '
+                          'which should be closed, but are open: 60')
+
+    @staticmethod
+    def _unit_paused_helper(hook_data_mock):
+        # HookData()() returns a tuple (kv, delta_config, delta_relation)
+        # but we only want kv in the test.
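+        # HookData is patched so that HookData() returns a callable; invoking
+        # that callable produces a context manager which yields
+        # (kv, True, False), mirroring the "with HookData()() as (kv, ...):"
+        # pattern used by the code under test.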
+ kv = MagicMock() + + @contextlib.contextmanager + def hook_data__call__(): + yield (kv, True, False) + + hook_data__call__.return_value = (kv, True, False) + hook_data_mock.return_value = hook_data__call__ + return kv + + @patch('charmhelpers.contrib.openstack.utils.unitdata.HookData') + def test_set_unit_paused(self, hook_data): + kv = self._unit_paused_helper(hook_data) + openstack.set_unit_paused() + kv.set.assert_called_once_with('unit-paused', True) + + @patch('charmhelpers.contrib.openstack.utils.unitdata.HookData') + def test_set_unit_upgrading(self, hook_data): + kv = self._unit_paused_helper(hook_data) + openstack.set_unit_upgrading() + kv.set.assert_called_once_with('unit-upgrading', True) + + @patch('charmhelpers.contrib.openstack.utils.unitdata.HookData') + def test_clear_unit_paused(self, hook_data): + kv = self._unit_paused_helper(hook_data) + openstack.clear_unit_paused() + kv.set.assert_called_once_with('unit-paused', False) + + @patch('charmhelpers.contrib.openstack.utils.unitdata.HookData') + def test_clear_unit_upgrading(self, hook_data): + kv = self._unit_paused_helper(hook_data) + openstack.clear_unit_upgrading() + kv.set.assert_called_once_with('unit-upgrading', False) + + @patch('charmhelpers.contrib.openstack.utils.unitdata.HookData') + def test_is_unit_paused_set(self, hook_data): + kv = self._unit_paused_helper(hook_data) + kv.get.return_value = True + r = openstack.is_unit_paused_set() + kv.get.assert_called_once_with('unit-paused') + self.assertEquals(r, True) + kv.get.return_value = False + r = openstack.is_unit_paused_set() + self.assertEquals(r, False) + + @patch('charmhelpers.contrib.openstack.utils.unitdata.HookData') + def test_is_unit_upgrading_set(self, hook_data): + kv = self._unit_paused_helper(hook_data) + kv.get.return_value = True + r = openstack.is_unit_upgrading_set() + kv.get.assert_called_once_with('unit-upgrading') + self.assertEquals(r, True) + kv.get.return_value = False + r = openstack.is_unit_upgrading_set() + self.assertEquals(r, False) + + @patch('charmhelpers.contrib.openstack.utils.service_stop') + def test_manage_payload_services_ok(self, service_stop): + services = ['service1', 'service2'] + service_stop.side_effect = [True, True] + self.assertEqual( + openstack.manage_payload_services('stop', services=services), + (True, [])) + + @patch('charmhelpers.contrib.openstack.utils.service_stop') + def test_manage_payload_services_fails(self, service_stop): + services = ['service1', 'service2'] + service_stop.side_effect = [True, False] + self.assertEqual( + openstack.manage_payload_services('stop', services=services), + (False, ["service2 didn't stop cleanly."])) + + @patch('charmhelpers.contrib.openstack.utils.service_stop') + def test_manage_payload_services_charm_func(self, service_stop): + bespoke_func = MagicMock() + bespoke_func.return_value = None + services = ['service1', 'service2'] + service_stop.side_effect = [True, True] + self.assertEqual( + openstack.manage_payload_services('stop', services=services, + charm_func=bespoke_func), + (True, [])) + bespoke_func.assert_called_once_with() + + @patch('charmhelpers.contrib.openstack.utils.service_stop') + def test_manage_payload_services_charm_func_msg(self, service_stop): + bespoke_func = MagicMock() + bespoke_func.return_value = 'it worked' + services = ['service1', 'service2'] + service_stop.side_effect = [True, True] + self.assertEqual( + openstack.manage_payload_services('stop', services=services, + charm_func=bespoke_func), + (True, ['it worked'])) + 
bespoke_func.assert_called_once_with() + + @patch('charmhelpers.contrib.openstack.utils.service_stop') + def test_manage_payload_services_charm_func_fails(self, service_stop): + bespoke_func = MagicMock() + bespoke_func.side_effect = Exception('it failed') + services = ['service1', 'service2'] + service_stop.side_effect = [True, True] + self.assertEqual( + openstack.manage_payload_services('stop', services=services, + charm_func=bespoke_func), + (False, ['it failed'])) + bespoke_func.assert_called_once_with() + + def test_manage_payload_services_wrong_action(self): + self.assertRaises( + RuntimeError, + openstack.manage_payload_services, + 'mangle') + + @patch('charmhelpers.contrib.openstack.utils.service_pause') + @patch('charmhelpers.contrib.openstack.utils.set_unit_paused') + def test_pause_unit_okay(self, set_unit_paused, service_pause): + services = ['service1', 'service2'] + service_pause.side_effect = [True, True] + openstack.pause_unit(None, services=services) + set_unit_paused.assert_called_once_with() + self.assertEquals(service_pause.call_count, 2) + + @patch('charmhelpers.contrib.openstack.utils.service_pause') + @patch('charmhelpers.contrib.openstack.utils.set_unit_paused') + def test_pause_unit_service_fails(self, set_unit_paused, service_pause): + services = ['service1', 'service2'] + service_pause.side_effect = [True, True] + openstack.pause_unit(None, services=services) + set_unit_paused.assert_called_once_with() + self.assertEquals(service_pause.call_count, 2) + # Fail the 2nd service + service_pause.side_effect = [True, False] + try: + openstack.pause_unit(None, services=services) + raise Exception("pause_unit should have raised Exception") + except Exception as e: + self.assertEquals(e.args[0], + "Couldn't pause: service2 didn't pause cleanly.") + + @patch('charmhelpers.contrib.openstack.utils.service_pause') + @patch('charmhelpers.contrib.openstack.utils.set_unit_paused') + def test_pause_unit_service_charm_func( + self, set_unit_paused, service_pause): + services = ['service1', 'service2'] + service_pause.return_value = True + charm_func = MagicMock() + charm_func.return_value = None + openstack.pause_unit(None, services=services, charm_func=charm_func) + charm_func.assert_called_once_with() + # fail the charm_func + charm_func.return_value = "Custom charm failed" + try: + openstack.pause_unit( + None, services=services, charm_func=charm_func) + raise Exception("pause_unit should have raised Exception") + except Exception as e: + self.assertEquals(e.args[0], + "Couldn't pause: Custom charm failed") + + @patch('charmhelpers.contrib.openstack.utils.service_pause') + @patch('charmhelpers.contrib.openstack.utils.set_unit_paused') + def test_pause_unit_assess_status_func( + self, set_unit_paused, service_pause): + services = ['service1', 'service2'] + service_pause.return_value = True + assess_status_func = MagicMock() + assess_status_func.return_value = None + openstack.pause_unit(assess_status_func, services=services) + assess_status_func.assert_called_once_with() + # fail the assess_status_func + assess_status_func.return_value = "assess_status_func failed" + try: + openstack.pause_unit(assess_status_func, services=services) + raise Exception("pause_unit should have raised Exception") + except Exception as e: + self.assertEquals(e.args[0], + "Couldn't pause: assess_status_func failed") + + @patch('charmhelpers.contrib.openstack.utils.service_resume') + @patch('charmhelpers.contrib.openstack.utils.clear_unit_paused') + def test_resume_unit_okay(self, clear_unit_paused, 
service_resume): + services = ['service1', 'service2'] + service_resume.side_effect = [True, True] + openstack.resume_unit(None, services=services) + clear_unit_paused.assert_called_once_with() + self.assertEquals(service_resume.call_count, 2) + + @patch('charmhelpers.contrib.openstack.utils.service_resume') + @patch('charmhelpers.contrib.openstack.utils.clear_unit_paused') + def test_resume_unit_service_fails( + self, clear_unit_paused, service_resume): + services = ['service1', 'service2'] + service_resume.side_effect = [True, True] + openstack.resume_unit(None, services=services) + clear_unit_paused.assert_called_once_with() + self.assertEquals(service_resume.call_count, 2) + # Fail the 2nd service + service_resume.side_effect = [True, False] + try: + openstack.resume_unit(None, services=services) + raise Exception("resume_unit should have raised Exception") + except Exception as e: + self.assertEquals( + e.args[0], "Couldn't resume: service2 didn't resume cleanly.") + + @patch('charmhelpers.contrib.openstack.utils.service_resume') + @patch('charmhelpers.contrib.openstack.utils.clear_unit_paused') + def test_resume_unit_service_charm_func( + self, clear_unit_paused, service_resume): + services = ['service1', 'service2'] + service_resume.return_value = True + charm_func = MagicMock() + charm_func.return_value = None + openstack.resume_unit(None, services=services, charm_func=charm_func) + charm_func.assert_called_once_with() + # fail the charm_func + charm_func.return_value = "Custom charm failed" + try: + openstack.resume_unit( + None, services=services, charm_func=charm_func) + raise Exception("resume_unit should have raised Exception") + except Exception as e: + self.assertEquals(e.args[0], + "Couldn't resume: Custom charm failed") + + @patch('charmhelpers.contrib.openstack.utils.service_resume') + @patch('charmhelpers.contrib.openstack.utils.clear_unit_paused') + def test_resume_unit_assess_status_func( + self, clear_unit_paused, service_resume): + services = ['service1', 'service2'] + service_resume.return_value = True + assess_status_func = MagicMock() + assess_status_func.return_value = None + openstack.resume_unit(assess_status_func, services=services) + assess_status_func.assert_called_once_with() + # fail the assess_status_func + assess_status_func.return_value = "assess_status_func failed" + try: + openstack.resume_unit(assess_status_func, services=services) + raise Exception("resume_unit should have raised Exception") + except Exception as e: + self.assertEquals(e.args[0], + "Couldn't resume: assess_status_func failed") + + @patch('charmhelpers.contrib.openstack.utils.status_set') + @patch('charmhelpers.contrib.openstack.utils.' + '_determine_os_workload_status') + def test_make_assess_status_func(self, _determine_os_workload_status, + status_set): + _determine_os_workload_status.return_value = ('active', 'fine') + f = openstack.make_assess_status_func('one', 'two', three='three') + r = f() + self.assertEquals(r, None) + _determine_os_workload_status.assert_called_once_with( + 'one', 'two', three='three') + status_set.assert_called_once_with('active', 'fine') + # return something other than 'active' or 'maintenance' + _determine_os_workload_status.return_value = ('broken', 'damaged') + r = f() + self.assertEquals(r, 'damaged') + + # TODO(ajkavanagh) -- there should be a test for + # _determine_os_workload_status() as the policyd override code has changed + # it, but there wasn't a test previously. 
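+
+    # pausable_restart_on_change defers to restart_on_change_helper only when
+    # the unit is not paused; while paused it is a no-op, as the two tests
+    # below verify.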
+ + @patch.object(openstack, 'restart_on_change_helper') + @patch.object(openstack, 'is_unit_paused_set') + def test_pausable_restart_on_change( + self, is_unit_paused_set, restart_on_change_helper): + @openstack.pausable_restart_on_change({}) + def test_func(): + pass + + # test with pause: restart_on_change_helper should not be called. + is_unit_paused_set.return_value = True + test_func() + self.assertEquals(restart_on_change_helper.call_count, 0) + + # test without pause: restart_on_change_helper should be called. + is_unit_paused_set.return_value = False + test_func() + self.assertEquals(restart_on_change_helper.call_count, 1) + + @patch.object(openstack, 'restart_on_change_helper') + @patch.object(openstack, 'is_unit_paused_set') + def test_pausable_restart_on_change_with_callable( + self, is_unit_paused_set, restart_on_change_helper): + mock_test = MagicMock() + mock_test.called_set = False + + def _restart_map(): + mock_test.called_set = True + return {"a": "b"} + + @openstack.pausable_restart_on_change(_restart_map) + def test_func(): + pass + + self.assertFalse(mock_test.called_set) + is_unit_paused_set.return_value = False + test_func() + self.assertEquals(restart_on_change_helper.call_count, 1) + self.assertTrue(mock_test.called_set) + + @patch.object(openstack, 'juju_log') + @patch.object(openstack, 'action_set') + @patch.object(openstack, 'action_fail') + @patch.object(openstack, 'openstack_upgrade_available') + @patch('charmhelpers.contrib.openstack.utils.config') + def test_openstack_upgrade(self, config, openstack_upgrade_available, + action_fail, action_set, log): + def do_openstack_upgrade(configs): + pass + + openstack_upgrade_available.return_value = True + + # action-managed-upgrade=True + config.side_effect = [True] + + openstack.do_action_openstack_upgrade('package-xyz', + do_openstack_upgrade, + None) + + self.assertTrue(openstack_upgrade_available.called) + msg = ('success, upgrade completed.') + action_set.assert_called_with({'outcome': msg}) + self.assertFalse(action_fail.called) + + @patch.object(openstack, 'juju_log') + @patch.object(openstack, 'action_set') + @patch.object(openstack, 'action_fail') + @patch.object(openstack, 'openstack_upgrade_available') + @patch('charmhelpers.contrib.openstack.utils.config') + def test_openstack_upgrade_not_avail(self, config, + openstack_upgrade_available, + action_fail, action_set, log): + def do_openstack_upgrade(configs): + pass + + openstack_upgrade_available.return_value = False + + openstack.do_action_openstack_upgrade('package-xyz', + do_openstack_upgrade, + None) + + self.assertTrue(openstack_upgrade_available.called) + msg = ('no upgrade available.') + action_set.assert_called_with({'outcome': msg}) + self.assertFalse(action_fail.called) + + @patch.object(openstack, 'juju_log') + @patch.object(openstack, 'action_set') + @patch.object(openstack, 'action_fail') + @patch.object(openstack, 'openstack_upgrade_available') + @patch('charmhelpers.contrib.openstack.utils.config') + def test_openstack_upgrade_config_false(self, config, + openstack_upgrade_available, + action_fail, action_set, log): + def do_openstack_upgrade(configs): + pass + + openstack_upgrade_available.return_value = True + + # action-managed-upgrade=False + config.side_effect = [False] + + openstack.do_action_openstack_upgrade('package-xyz', + do_openstack_upgrade, + None) + + self.assertTrue(openstack_upgrade_available.called) + msg = ('action-managed-upgrade config is False, skipped upgrade.') + action_set.assert_called_with({'outcome': msg}) + 
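# A skipped upgrade still ends the action successfully, so action_fail
+        # must not have been called.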
+        self.assertFalse(action_fail.called)
+
+    @patch.object(openstack, 'juju_log')
+    @patch.object(openstack, 'action_set')
+    @patch.object(openstack, 'action_fail')
+    @patch.object(openstack, 'openstack_upgrade_available')
+    @patch('traceback.format_exc')
+    @patch('charmhelpers.contrib.openstack.utils.config')
+    def test_openstack_upgrade_traceback(self, config, traceback,
+                                         openstack_upgrade_available,
+                                         action_fail, action_set, log):
+        def do_openstack_upgrade(configs):
+            oops()  # noqa
+
+        openstack_upgrade_available.return_value = True
+
+        # action-managed-upgrade=True
+        config.side_effect = [True]
+
+        openstack.do_action_openstack_upgrade('package-xyz',
+                                              do_openstack_upgrade,
+                                              None)
+
+        self.assertTrue(openstack_upgrade_available.called)
+        msg = 'do_openstack_upgrade resulted in an unexpected error'
+        action_fail.assert_called_with(msg)
+        self.assertTrue(action_set.called)
+        self.assertTrue(traceback.called)
+
+    @patch.object(openstack, 'os_release')
+    @patch.object(openstack, 'application_version_set')
+    def test_os_application_version_set(self,
+                                        mock_application_version_set,
+                                        mock_os_release):
+        with patch.object(fetch, 'apt_cache') as cache:
+            cache.return_value = self._apt_cache()
+            mock_os_release.return_value = 'mitaka'
+            openstack.os_application_version_set('neutron-common')
+            mock_application_version_set.assert_called_with('7.0.1')
+            openstack.os_application_version_set('cinder-common')
+            mock_application_version_set.assert_called_with('mitaka')
+
+    @patch.object(openstack, 'valid_snap_channel')
+    @patch('charmhelpers.contrib.openstack.utils.config')
+    def test_snap_install_requested(self, config, valid_snap_channel):
+        valid_snap_channel.return_value = True
+        # Expect True
+        flush('snap_install_requested')
+        config.return_value = 'snap:ocata/edge'
+        self.assertTrue(openstack.snap_install_requested())
+        valid_snap_channel.assert_called_with('edge')
+        flush('snap_install_requested')
+        config.return_value = 'snap:pike'
+        self.assertTrue(openstack.snap_install_requested())
+        valid_snap_channel.assert_called_with('stable')
+        flush('snap_install_requested')
+        config.return_value = 'snap:pike/stable/jamespage'
+        self.assertTrue(openstack.snap_install_requested())
+        valid_snap_channel.assert_called_with('stable')
+        # Expect False
+        flush('snap_install_requested')
+        config.return_value = 'cloud:xenial-ocata'
+        self.assertFalse(openstack.snap_install_requested())
+
+    def test_get_snaps_install_info_from_origin(self):
+        snaps = ['os_project']
+        mode = 'jailmode'
+
+        # snap:track/channel
+        src = 'snap:ocata/beta'
+        expected = {snaps[0]: {'mode': mode,
+                               'channel': '--channel=ocata/beta'}}
+        self.assertEqual(
+            expected,
+            openstack.get_snaps_install_info_from_origin(snaps, src,
+                                                         mode=mode))
+
+        # snap:track/channel/branch
+        src = 'snap:ocata/beta/jamespage'
+        expected = {snaps[0]: {'mode': mode,
+                               'channel': '--channel=ocata/beta/jamespage'}}
+        self.assertEqual(
+            expected,
+            openstack.get_snaps_install_info_from_origin(snaps, src,
+                                                         mode=mode))
+        # snap:track
+        src = 'snap:pike'
+        expected = {snaps[0]: {'mode': mode,
+                               'channel': '--channel=pike'}}
+        self.assertEqual(
+            expected,
+            openstack.get_snaps_install_info_from_origin(snaps, src,
+                                                         mode=mode))
+
+    @patch.object(openstack, 'snap_install')
+    def test_install_os_snaps(self, mock_snap_install):
+        snaps = ['os_project']
+        mode = 'jailmode'
+
+        # snap:track/channel
+        src = 'snap:ocata/beta'
+        openstack.install_os_snaps(
+            openstack.get_snaps_install_info_from_origin(
+                snaps, src, mode=mode))
+        mock_snap_install.assert_called_with( +
'os_project', '--channel=ocata/beta', '--jailmode') + + # snap:track + src = 'snap:pike' + openstack.install_os_snaps( + openstack.get_snaps_install_info_from_origin( + snaps, src, mode=mode)) + mock_snap_install.assert_called_with( + 'os_project', '--channel=pike', '--jailmode') + + @patch.object(openstack, 'set_unit_upgrading') + @patch.object(openstack, 'is_unit_paused_set') + def test_series_upgrade_prepare( + self, is_unit_paused_set, set_unit_upgrading): + is_unit_paused_set.return_value = False + fake_pause_helper = MagicMock() + fake_configs = MagicMock() + openstack.series_upgrade_prepare(fake_pause_helper, fake_configs) + set_unit_upgrading.assert_called_once() + fake_pause_helper.assert_called_once_with(fake_configs) + + @patch.object(openstack, 'set_unit_upgrading') + @patch.object(openstack, 'is_unit_paused_set') + def test_series_upgrade_prepare_no_pause( + self, is_unit_paused_set, set_unit_upgrading): + is_unit_paused_set.return_value = True + fake_pause_helper = MagicMock() + fake_configs = MagicMock() + openstack.series_upgrade_prepare(fake_pause_helper, fake_configs) + set_unit_upgrading.assert_called_once() + fake_pause_helper.assert_not_called() + + @patch.object(openstack, 'clear_unit_upgrading') + @patch.object(openstack, 'clear_unit_paused') + def test_series_upgrade_complete( + self, clear_unit_paused, clear_unit_upgrading): + fake_resume_helper = MagicMock() + fake_configs = MagicMock() + openstack.series_upgrade_complete(fake_resume_helper, fake_configs) + clear_unit_upgrading.assert_called_once() + clear_unit_paused.assert_called_once() + fake_configs.write_all.assert_called_once() + fake_resume_helper.assert_called_once_with(fake_configs) + + @patch.object(openstack, 'juju_log') + @patch.object(openstack, 'leader_get') + def test_is_db_initialised(self, leader_get, juju_log): + leader_get.return_value = 'True' + self.assertTrue(openstack.is_db_initialised()) + leader_get.return_value = 'False' + self.assertFalse(openstack.is_db_initialised()) + leader_get.return_value = None + self.assertFalse(openstack.is_db_initialised()) + + @patch.object(openstack, 'juju_log') + @patch.object(openstack, 'leader_set') + def test_set_db_initialised(self, leader_set, juju_log): + openstack.set_db_initialised() + leader_set.assert_called_once_with({'db-initialised': True}) + + @patch.object(openstack, 'juju_log') + @patch.object(openstack, 'relation_ids') + @patch.object(openstack, 'related_units') + @patch.object(openstack, 'relation_get') + def test_is_db_maintenance_mode(self, relation_get, related_units, + relation_ids, juju_log): + relation_ids.return_value = ['rid:1'] + related_units.return_value = ['unit/0', 'unit/2'] + rsettings = { + 'rid:1': { + 'unit/0': { + 'private-ip': '1.2.3.4', + 'cluster-series-upgrading': 'True'}, + 'unit/2': { + 'private-ip': '1.2.3.5'}}} + relation_get.side_effect = lambda unit, rid: rsettings[rid][unit] + self.assertTrue(openstack.is_db_maintenance_mode()) + rsettings = { + 'rid:1': { + 'unit/0': { + 'private-ip': '1.2.3.4'}, + 'unit/2': { + 'private-ip': '1.2.3.5'}}} + self.assertFalse(openstack.is_db_maintenance_mode()) + rsettings = { + 'rid:1': { + 'unit/0': { + 'private-ip': '1.2.3.4', + 'cluster-series-upgrading': 'False'}, + 'unit/2': { + 'private-ip': '1.2.3.5'}}} + self.assertFalse(openstack.is_db_maintenance_mode()) + rsettings = { + 'rid:1': { + 'unit/0': { + 'private-ip': '1.2.3.4', + 'cluster-series-upgrading': 'lskjfsd'}, + 'unit/2': { + 'private-ip': '1.2.3.5'}}} + self.assertFalse(openstack.is_db_maintenance_mode()) + + def 
test_get_endpoint_key(self):
+        self.assertEqual(
+            openstack.get_endpoint_key('placement', 'is:2', 'keystone/0'),
+            'placement-is_2-keystone_0')
+
+    @patch.object(openstack, 'relation_get')
+    @patch.object(openstack, 'related_units')
+    @patch.object(openstack, 'relation_ids')
+    def test_get_endpoint_notifications(self, relation_ids, related_units,
+                                        relation_get):
+        id_svc_rel_units = {
+            'identity-service:3': ['keystone/0', 'keystone/1', 'keystone/2']
+        }
+
+        def _related_units(relid):
+            return id_svc_rel_units[relid]
+
+        id_svc_rel_data = {
+            'keystone/0': {
+                'ep_changed': '{"placement": "d5c3"}'},
+            'keystone/1': {
+                'ep_changed': '{"nova": "4d06", "neutron": "2aa6"}'},
+            'keystone/2': {}}
+
+        def _relation_get(unit, rid, attribute):
+            return id_svc_rel_data[unit].get(attribute)
+
+        relation_ids.return_value = id_svc_rel_units.keys()
+        related_units.side_effect = _related_units
+        relation_get.side_effect = _relation_get
+        self.assertEqual(
+            openstack.get_endpoint_notifications(['neutron']),
+            {
+                'neutron-identity-service_3-keystone_1': '2aa6'})
+        self.assertEqual(
+            openstack.get_endpoint_notifications(['placement', 'neutron']),
+            {
+                'neutron-identity-service_3-keystone_1': '2aa6',
+                'placement-identity-service_3-keystone_0': 'd5c3'})
+
+    @patch.object(openstack, 'get_endpoint_notifications')
+    @patch.object(openstack.unitdata, 'HookData')
+    def test_endpoint_changed(self, HookData, get_endpoint_notifications):
+        self.kv_data = {}
+
+        def _kv_get(key):
+            return self.kv_data.get(key)
+        kv = self._unit_paused_helper(HookData)
+        kv.get.side_effect = _kv_get
+        # Check endpoint_changed returns True when there are new
+        # notifications.
+        get_endpoint_notifications.return_value = {
+            'neutron-identity-service_3-keystone_1': '2aa6',
+            'placement-identity-service_3-keystone_0': 'd5c3'}
+        self.assertTrue(openstack.endpoint_changed('placement'))
+        # endpoint_changed('nova') also reports True here: the mocked
+        # get_endpoint_notifications returns the same unseen notifications
+        # regardless of which endpoint is asked for.
+        self.assertTrue(openstack.endpoint_changed('nova'))
+        # Check endpoint_changed returns False if the notification
+        # has already been seen
+        get_endpoint_notifications.return_value = {
+            'placement-identity-service_3-keystone_0': 'd5c3'}
+        self.kv_data = {
+            'placement-identity-service_3-keystone_0': 'd5c3'}
+        self.assertFalse(openstack.endpoint_changed('placement'))
+
+    @patch.object(openstack, 'get_endpoint_notifications')
+    @patch.object(openstack.unitdata, 'HookData')
+    def test_save_endpoint_changed_triggers(self, HookData,
+                                            get_endpoint_notifications):
+        kv = self._unit_paused_helper(HookData)
+        get_endpoint_notifications.return_value = {
+            'neutron-identity-service_3-keystone_1': '2aa6',
+            'placement-identity-service_3-keystone_0': 'd5c3'}
+        openstack.save_endpoint_changed_triggers(['neutron', 'placement'])
+        kv_set_calls = [
+            call('neutron-identity-service_3-keystone_1', '2aa6'),
+            call('placement-identity-service_3-keystone_0', 'd5c3')]
+        kv.set.assert_has_calls(kv_set_calls, any_order=True)
+
+
+class OpenStackUtilsAdditionalTests(TestCase):
+    SHARED_DB_RELATIONS = {
+        'shared-db:8': {
+            'mysql-svc1/0': {
+                'allowed_units': 'client/0',
+            },
+            'mysql-svc1/1': {},
+            'mysql-svc1/2': {
+                'allowed_units': 'client/0 client/1',
+            },
+        },
+        'shared-db:12': {
+            'mysql-svc2/0': {
+                'allowed_units': 'client/1',
+            },
+            'mysql-svc2/1': {
+                'allowed_units': 'client/3',
+            },
+            'mysql-svc2/2': {
+                'allowed_units': {},
+            },
+        }
+    }
+    SCALE_RELATIONS = {
+        'cluster:2': {
+            'keystone/1': {},
+            'keystone/2': {}},
+        'shared-db:12': {
+            'mysql-svc2/0': {
+                'allowed_units': 'client/1',
+            },
+            'mysql-svc2/1': {
+                'allowed_units': 'client/3',
+            },
+            'mysql-svc2/2': {
+                'allowed_units': {},
+            }},
+    }
+    SCALE_RELATIONS_HA = {
+        'cluster:2': {
+            'keystone/1': {'unit-state-keystone-1': 'READY'},
+            'keystone/2': {}},
+        'shared-db:12': {
+            'mysql-svc2/0': {
+                'allowed_units': 'client/1',
+            },
+            'mysql-svc2/1': {
+                'allowed_units': 'client/3',
+            },
+            'mysql-svc2/2': {
+                'allowed_units': {},
+            }},
+        'ha:32': {
+            'hacluster-keystone/1': {}}
+    }
+    All_PEERS_READY = {
+        'cluster:2': {
+            'keystone/1': {'unit-state-keystone-1': 'READY'},
+            'keystone/2': {'unit-state-keystone-2': 'READY'}}}
+    PEERS_NOT_READY = {
+        'cluster:2': {
+            'keystone/1': {'unit-state-keystone-1': 'READY'},
+            'keystone/2': {}}}
+
+    def setUp(self):
+        super(OpenStackUtilsAdditionalTests, self).setUp()
+        [self._patch(m) for m in [
+            'expect_ha',
+            'expected_peer_units',
+            'expected_related_units',
+            'juju_log',
+            'metadata',
+            'related_units',
+            'relation_get',
+            'relation_id',
+            'relation_ids',
+            'relation_set',
+            'local_unit',
+        ]]
+
+    def _patch(self, method):
+        _m = patch.object(openstack, method)
+        mock = _m.start()
+        self.addCleanup(_m.stop)
+        setattr(self, method, mock)
+
+    def setup_relation(self, relation_map):
+        relation = FakeRelation(relation_map)
+        self.relation_id.side_effect = relation.relation_id
+        self.relation_get.side_effect = relation.get
+        self.relation_ids.side_effect = relation.relation_ids
+        self.related_units.side_effect = relation.related_units
+        return relation
+
+    def test_is_db_ready(self):
+        relation = self.setup_relation(self.SHARED_DB_RELATIONS)
+
+        # Check unit allowed in 1st relation
+        self.local_unit.return_value = 'client/0'
+        self.assertTrue(openstack.is_db_ready())
+
+        # Check unit allowed in 2nd relation
+        self.local_unit.return_value = 'client/3'
+        self.assertTrue(openstack.is_db_ready())
+
+        # Check unit not allowed in any list
+        self.local_unit.return_value = 'client/5' +
self.assertFalse(openstack.is_db_ready()) + + # Check call with an invalid relation + self.local_unit.return_value = 'client/3' + # None returned if not in a relation context (eg update-status) + relation.clear_relation_context() + self.assertRaises( + Exception, + openstack.is_db_ready, + use_current_context=True) + + # Check unit allowed using current relation context + relation.set_relation_context('mysql-svc2/0', 'shared-db:12') + self.local_unit.return_value = 'client/1' + self.assertTrue(openstack.is_db_ready(use_current_context=True)) + + # Check unit not allowed using current relation context + relation.set_relation_context('mysql-svc2/0', 'shared-db:12') + self.local_unit.return_value = 'client/0' + self.assertFalse(openstack.is_db_ready(use_current_context=True)) + + @patch.object(openstack, 'container_scoped_relations') + def test_is_expected_scale_noha(self, container_scoped_relations): + self.setup_relation(self.SCALE_RELATIONS) + self.expect_ha.return_value = False + eru = { + 'shared-db': ['mysql/0', 'mysql/1', 'mysql/2']} + + def _expected_related_units(reltype): + return eru[reltype] + self.expected_related_units.side_effect = _expected_related_units + container_scoped_relations.return_value = ['ha', 'domain-backend'] + + # All peer and db units are present + self.expected_peer_units.return_value = ['keystone/0', 'keystone/2'] + self.assertTrue(openstack.is_expected_scale()) + + # db units are present but a peer is missing + self.expected_peer_units.return_value = ['keystone/0', 'keystone/2', 'keystone/3'] + self.assertFalse(openstack.is_expected_scale()) + + # peer units are present but a db unit is missing + eru['shared-db'].append('mysql/3') + self.expected_peer_units.return_value = ['keystone/0', 'keystone/2'] + self.assertFalse(openstack.is_expected_scale()) + eru['shared-db'].remove('mysql/3') + + # Expect ha but ha unit is missing + self.expect_ha.return_value = True + self.expected_peer_units.return_value = ['keystone/0', 'keystone/2'] + self.assertFalse(openstack.is_expected_scale()) + + @patch.object(openstack, 'container_scoped_relations') + def test_is_expected_scale_ha(self, container_scoped_relations): + self.setup_relation(self.SCALE_RELATIONS_HA) + eru = { + 'shared-db': ['mysql/0', 'mysql/1', 'mysql/2']} + + def _expected_related_units(reltype): + return eru[reltype] + self.expected_related_units.side_effect = _expected_related_units + container_scoped_relations.return_value = ['ha', 'domain-backend'] + self.expect_ha.return_value = True + self.expected_peer_units.return_value = ['keystone/0', 'keystone/2'] + self.assertTrue(openstack.is_expected_scale()) + + def test_container_scoped_relations(self): + _metadata = { + 'provides': { + 'amqp': {'interface': 'rabbitmq'}, + 'identity-service': {'interface': 'keystone'}, + 'ha': { + 'interface': 'hacluster', + 'scope': 'container'}}, + 'peers': { + 'cluster': {'interface': 'openstack-ha'}}} + self.metadata.return_value = _metadata + self.assertEqual(openstack.container_scoped_relations(), ['ha']) + + def test_get_peer_key(self): + self.assertEqual( + openstack.get_peer_key('cinder/0'), + 'unit-state-cinder-0') + + def test_inform_peers_unit_state(self): + self.local_unit.return_value = 'client/0' + self.setup_relation(self.All_PEERS_READY) + openstack.inform_peers_unit_state('READY') + self.relation_set.assert_called_once_with( + relation_id='cluster:2', + relation_settings={'unit-state-client-0': 'READY'}) + + def test_get_peers_unit_state(self): + self.setup_relation(self.All_PEERS_READY) + self.assertEqual( 
+ openstack.get_peers_unit_state(), + {'keystone/1': 'READY', 'keystone/2': 'READY'}) + self.setup_relation(self.PEERS_NOT_READY) + self.assertEqual( + openstack.get_peers_unit_state(), + {'keystone/1': 'READY', 'keystone/2': 'UNKNOWN'}) + + def test_are_peers_ready(self): + self.setup_relation(self.All_PEERS_READY) + self.assertTrue(openstack.are_peers_ready()) + self.setup_relation(self.PEERS_NOT_READY) + self.assertFalse(openstack.are_peers_ready()) + + @patch.object(openstack, 'inform_peers_unit_state') + def test_inform_peers_if_ready(self, inform_peers_unit_state): + self.setup_relation(self.All_PEERS_READY) + + def _not_ready(): + return False, "Its all gone wrong" + + def _ready(): + return True, "Hurray!" + openstack.inform_peers_if_ready(_not_ready) + inform_peers_unit_state.assert_called_once_with('NOTREADY', 'cluster') + inform_peers_unit_state.reset_mock() + openstack.inform_peers_if_ready(_ready) + inform_peers_unit_state.assert_called_once_with('READY', 'cluster') + + @patch.object(openstack, 'is_expected_scale') + @patch.object(openstack, 'is_db_initialised') + @patch.object(openstack, 'is_db_ready') + @patch.object(openstack, 'is_unit_paused_set') + @patch.object(openstack, 'is_db_maintenance_mode') + def test_check_api_unit_ready(self, is_db_maintenance_mode, + is_unit_paused_set, is_db_ready, + is_db_initialised, is_expected_scale): + is_db_maintenance_mode.return_value = True + self.assertFalse(openstack.check_api_unit_ready()[0]) + + is_db_maintenance_mode.return_value = False + is_unit_paused_set.return_value = True + self.assertFalse(openstack.check_api_unit_ready()[0]) + + is_db_maintenance_mode.return_value = False + is_unit_paused_set.return_value = False + is_db_ready.return_value = False + self.assertFalse(openstack.check_api_unit_ready()[0]) + + is_db_maintenance_mode.return_value = False + is_unit_paused_set.return_value = False + is_db_ready.return_value = True + is_db_initialised.return_value = False + self.assertFalse(openstack.check_api_unit_ready()[0]) + + is_db_maintenance_mode.return_value = False + is_unit_paused_set.return_value = False + is_db_ready.return_value = True + is_db_initialised.return_value = True + is_expected_scale.return_value = False + self.assertFalse(openstack.check_api_unit_ready()[0]) + + is_db_maintenance_mode.return_value = False + is_unit_paused_set.return_value = False + is_db_ready.return_value = True + is_db_initialised.return_value = True + is_expected_scale.return_value = True + self.assertTrue(openstack.check_api_unit_ready()[0]) + + @patch.object(openstack, 'is_expected_scale') + @patch.object(openstack, 'is_db_initialised') + @patch.object(openstack, 'is_db_ready') + @patch.object(openstack, 'is_unit_paused_set') + @patch.object(openstack, 'is_db_maintenance_mode') + def test_get_api_unit_status(self, is_db_maintenance_mode, + is_unit_paused_set, is_db_ready, + is_db_initialised, is_expected_scale): + is_db_maintenance_mode.return_value = True + self.assertEqual( + openstack.get_api_unit_status()[0].value, + 'maintenance') + + is_db_maintenance_mode.return_value = False + is_unit_paused_set.return_value = True + self.assertEqual( + openstack.get_api_unit_status()[0].value, + 'blocked') + + is_db_maintenance_mode.return_value = False + is_unit_paused_set.return_value = False + is_db_ready.return_value = False + self.assertEqual( + openstack.get_api_unit_status()[0].value, + 'waiting') + + is_db_maintenance_mode.return_value = False + is_unit_paused_set.return_value = False + is_db_ready.return_value = True + 
is_db_initialised.return_value = False + self.assertEqual( + openstack.get_api_unit_status()[0].value, + 'waiting') + + is_db_maintenance_mode.return_value = False + is_unit_paused_set.return_value = False + is_db_ready.return_value = True + is_db_initialised.return_value = True + is_expected_scale.return_value = False + self.assertEqual( + openstack.get_api_unit_status()[0].value, + 'waiting') + + is_db_maintenance_mode.return_value = False + is_unit_paused_set.return_value = False + is_db_ready.return_value = True + is_db_initialised.return_value = True + is_expected_scale.return_value = True + self.assertEqual( + openstack.get_api_unit_status()[0].value, + 'active') + + @patch.object(openstack, 'get_api_unit_status') + def test_check_api_application_ready(self, get_api_unit_status): + get_api_unit_status.return_value = (WORKLOAD_STATES.ACTIVE, 'Hurray') + self.assertTrue(openstack.check_api_application_ready()[0]) + get_api_unit_status.return_value = (WORKLOAD_STATES.BLOCKED, ':-(') + self.assertFalse(openstack.check_api_application_ready()[0]) + + @patch.object(openstack, 'get_api_unit_status') + def test_get_api_application_status(self, get_api_unit_status): + get_api_unit_status.return_value = (WORKLOAD_STATES.ACTIVE, 'Hurray') + self.assertEqual( + openstack.get_api_application_status()[0].value, + 'active') + get_api_unit_status.return_value = (WORKLOAD_STATES.BLOCKED, ':-(') + self.assertEqual( + openstack.get_api_application_status()[0].value, + 'blocked') + + +if __name__ == '__main__': + unittest.main() diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/openstack/test_os_contexts.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/openstack/test_os_contexts.py new file mode 100644 index 0000000000000000000000000000000000000000..6edd66a4061f5d6bb8cba47d3eb486870783135a --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/openstack/test_os_contexts.py @@ -0,0 +1,4842 @@ +import collections +import copy +import json +import mock +import six +import unittest +import yaml + +from mock import ( + patch, + Mock, + MagicMock, + call +) + +from tests.helpers import patch_open + +import tests.utils + +import charmhelpers.contrib.openstack.context as context + + +if not six.PY3: + open_builtin = '__builtin__.open' +else: + open_builtin = 'builtins.open' + + +class FakeRelation(object): + + ''' + A fake relation class. 
Lets tests specify simple relation data
+    for a default relation + unit (foo:0, foo/0, set in setUp()), e.g.:
+
+        rel = {
+            'private-address': 'foo',
+            'password': 'passwd',
+        }
+        relation = FakeRelation(rel)
+        self.relation_get.side_effect = relation.get
+        passwd = self.relation_get('password')
+
+    or more complex relations meant to be addressed by explicit relation id
+    + unit id combos:
+
+        rel = {
+            'mysql:0': {
+                'mysql/0': {
+                    'private-address': 'foo',
+                    'password': 'passwd',
+                }
+            }
+        }
+        relation = FakeRelation(rel)
+        self.relation_get.side_effect = relation.get
+        passwd = self.relation_get('password', rid='mysql:0', unit='mysql/0')
+    '''
+
+    def __init__(self, relation_data):
+        self.relation_data = relation_data
+
+    def get(self, attribute=None, unit=None, rid=None):
+        if not rid or rid == 'foo:0':
+            if attribute is None:
+                return self.relation_data
+            elif attribute in self.relation_data:
+                return self.relation_data[attribute]
+            return None
+        else:
+            if rid not in self.relation_data:
+                return None
+            try:
+                relation = self.relation_data[rid][unit]
+            except KeyError:
+                return None
+            if attribute is None:
+                return relation
+            if attribute in relation:
+                return relation[attribute]
+            return None
+
+    def relation_ids(self, relation):
+        rids = []
+        for rid in sorted(self.relation_data.keys()):
+            if relation + ':' in rid:
+                rids.append(rid)
+        return rids
+
+    def relation_units(self, relation_id):
+        if relation_id not in self.relation_data:
+            return None
+        return sorted(self.relation_data[relation_id].keys())
+
+
+SHARED_DB_RELATION = {
+    'db_host': 'dbserver.local',
+    'password': 'foo'
+}
+
+SHARED_DB_RELATION_W_PORT = {
+    'db_host': 'dbserver.local',
+    'password': 'foo',
+    'db_port': 3306,
+}
+
+SHARED_DB_RELATION_ALT_RID = {
+    'mysql-alt:0': {
+        'mysql-alt/0': {
+            'db_host': 'dbserver-alt.local',
+            'password': 'flump'}}}
+
+SHARED_DB_RELATION_SSL = {
+    'db_host': 'dbserver.local',
+    'password': 'foo',
+    'ssl_ca': 'Zm9vCg==',
+    'ssl_cert': 'YmFyCg==',
+    'ssl_key': 'Zm9vYmFyCg==',
+}
+
+SHARED_DB_CONFIG = {
+    'database-user': 'adam',
+    'database': 'foodb',
+}
+
+SHARED_DB_RELATION_NAMESPACED = {
+    'db_host': 'bar',
+    'quantum_password': 'bar2'
+}
+
+SHARED_DB_RELATION_ACCESS_NETWORK = {
+    'db_host': 'dbserver.local',
+    'password': 'foo',
+    'access-network': '10.5.5.0/24',
+    'hostname': 'bar',
+}
+
+
+IDENTITY_SERVICE_RELATION_HTTP = {
+    'service_port': '5000',
+    'service_host': 'keystonehost.local',
+    'auth_host': 'keystone-host.local',
+    'auth_port': '35357',
+    'service_domain': 'admin_domain',
+    'service_tenant': 'admin',
+    'service_tenant_id': '123456',
+    'service_password': 'foo',
+    'service_username': 'adam',
+    'service_protocol': 'http',
+    'auth_protocol': 'http',
+}
+
+IDENTITY_SERVICE_RELATION_UNSET = {
+    'service_port': '5000',
+    'service_host': 'keystonehost.local',
+    'auth_host': 'keystone-host.local',
+    'auth_port': '35357',
+    'service_domain': 'admin_domain',
+    'service_tenant': 'admin',
+    'service_password': 'foo',
+    'service_username': 'adam',
+}
+
+IDENTITY_CREDENTIALS_RELATION_UNSET = {
+    'credentials_port': '5000',
+    'credentials_host': 'keystonehost.local',
+    'auth_host': 'keystone-host.local',
+    'auth_port': '35357',
+    'auth_protocol': 'https',
+    'domain': 'admin_domain',
+    'credentials_project': 'admin',
+    'credentials_project_id': '123456',
+    'credentials_password': 'foo',
+    'credentials_username': 'adam',
+    'credentials_protocol': 'https',
+}
+
+
+APIIDENTITY_SERVICE_RELATION_UNSET = {
+    'neutron-plugin-api:0': {
+        'neutron-api/0': {
+            'service_port': '5000', +
'service_host': 'keystonehost.local', + 'auth_host': 'keystone-host.local', + 'auth_port': '35357', + 'service_domain': 'admin_domain', + 'service_tenant': 'admin', + 'service_password': 'foo', + 'service_username': 'adam', + } + } +} + +IDENTITY_SERVICE_RELATION_HTTPS = { + 'service_port': '5000', + 'service_host': 'keystonehost.local', + 'auth_host': 'keystone-host.local', + 'auth_port': '35357', + 'service_domain': 'admin_domain', + 'service_tenant': 'admin', + 'service_password': 'foo', + 'service_username': 'adam', + 'service_protocol': 'https', + 'auth_protocol': 'https', +} + +IDENTITY_SERVICE_RELATION_VERSIONED = { + 'api_version': '3', + 'service_tenant_id': 'svc-proj-id', + 'service_domain_id': 'svc-dom-id', +} +IDENTITY_SERVICE_RELATION_VERSIONED.update(IDENTITY_SERVICE_RELATION_HTTPS) + +IDENTITY_CREDENTIALS_RELATION_VERSIONED = { + 'api_version': '3', + 'service_tenant_id': 'svc-proj-id', + 'service_domain_id': 'svc-dom-id', +} +IDENTITY_CREDENTIALS_RELATION_VERSIONED.update(IDENTITY_CREDENTIALS_RELATION_UNSET) + +POSTGRESQL_DB_RELATION = { + 'host': 'dbserver.local', + 'user': 'adam', + 'password': 'foo', +} + +POSTGRESQL_DB_CONFIG = { + 'database': 'foodb', +} + +IDENTITY_SERVICE_RELATION = { + 'service_port': '5000', + 'service_host': 'keystonehost.local', + 'auth_host': 'keystone-host.local', + 'auth_port': '35357', + 'service_domain': 'admin_domain', + 'service_tenant': 'admin', + 'service_password': 'foo', + 'service_username': 'adam', +} + +AMQP_RELATION = { + 'private-address': 'rabbithost', + 'password': 'foobar', + 'vip': '10.0.0.1', +} + +AMQP_RELATION_ALT_RID = { + 'amqp-alt:0': { + 'rabbitmq-alt/0': { + 'private-address': 'rabbitalthost1', + 'password': 'flump', + }, + } +} + +AMQP_RELATION_WITH_SSL = { + 'private-address': 'rabbithost', + 'password': 'foobar', + 'vip': '10.0.0.1', + 'ssl_port': 5671, + 'ssl_ca': 'cert', + 'ha_queues': 'queues', +} + +AMQP_AA_RELATION = { + 'amqp:0': { + 'rabbitmq/0': { + 'private-address': 'rabbithost1', + 'password': 'foobar', + }, + 'rabbitmq/1': { + 'private-address': 'rabbithost2', + 'password': 'foobar', + }, + 'rabbitmq/2': { # Should be ignored because password is missing. 
+ 'private-address': 'rabbithost3', + } + } +} + +AMQP_CONFIG = { + 'rabbit-user': 'adam', + 'rabbit-vhost': 'foo', +} + +AMQP_OSLO_CONFIG = { + 'oslo-messaging-flags': ("rabbit_max_retries=1" + ",rabbit_retry_backoff=1" + ",rabbit_retry_interval=1"), + 'oslo-messaging-driver': 'log' +} + +AMQP_NOTIFICATION_FORMAT = { + 'notification-format': 'both' +} + +AMQP_NOTIFICATION_TOPICS = { + 'notification-topics': 'foo,bar' +} + +AMQP_NOTIFICATIONS_LOGS = { + 'send-notifications-to-logs': True +} + +AMQP_NOVA_CONFIG = { + 'nova-rabbit-user': 'adam', + 'nova-rabbit-vhost': 'foo', +} + +HAPROXY_CONFIG = { + 'haproxy-server-timeout': 50000, + 'haproxy-client-timeout': 50000, +} + +CEPH_RELATION = { + 'ceph:0': { + 'ceph/0': { + 'private-address': 'ceph_node1', + 'auth': 'foo', + 'key': 'bar', + 'use_syslog': 'true' + }, + 'ceph/1': { + 'private-address': 'ceph_node2', + 'auth': 'foo', + 'key': 'bar', + 'use_syslog': 'false' + }, + } +} + +CEPH_RELATION_WITH_PUBLIC_ADDR = { + 'ceph:0': { + 'ceph/0': { + 'ceph-public-address': '192.168.1.10', + 'private-address': 'ceph_node1', + 'auth': 'foo', + 'key': 'bar' + }, + 'ceph/1': { + 'ceph-public-address': '192.168.1.11', + 'private-address': 'ceph_node2', + 'auth': 'foo', + 'key': 'bar' + }, + } +} + +CEPH_REL_WITH_PUBLIC_ADDR_PORT = { + 'ceph:0': { + 'ceph/0': { + 'ceph-public-address': '192.168.1.10:1234', + 'private-address': 'ceph_node1', + 'auth': 'foo', + 'key': 'bar' + }, + 'ceph/1': { + 'ceph-public-address': '192.168.1.11:4321', + 'private-address': 'ceph_node2', + 'auth': 'foo', + 'key': 'bar' + }, + } +} + +CEPH_REL_WITH_PUBLIC_IPv6_ADDR = { + 'ceph:0': { + 'ceph/0': { + 'ceph-public-address': '2001:5c0:9168::1', + 'private-address': 'ceph_node1', + 'auth': 'foo', + 'key': 'bar' + }, + 'ceph/1': { + 'ceph-public-address': '2001:5c0:9168::2', + 'private-address': 'ceph_node2', + 'auth': 'foo', + 'key': 'bar' + }, + } +} + +CEPH_REL_WITH_PUBLIC_IPv6_ADDR_PORT = { + 'ceph:0': { + 'ceph/0': { + 'ceph-public-address': '[2001:5c0:9168::1]:1234', + 'private-address': 'ceph_node1', + 'auth': 'foo', + 'key': 'bar' + }, + 'ceph/1': { + 'ceph-public-address': '[2001:5c0:9168::2]:4321', + 'private-address': 'ceph_node2', + 'auth': 'foo', + 'key': 'bar' + }, + } +} + +CEPH_REL_WITH_MULTI_PUBLIC_ADDR = { + 'ceph:0': { + 'ceph/0': { + 'ceph-public-address': '192.168.1.10 192.168.1.20', + 'private-address': 'ceph_node1', + 'auth': 'foo', + 'key': 'bar' + }, + 'ceph/1': { + 'ceph-public-address': '192.168.1.11 192.168.1.21', + 'private-address': 'ceph_node2', + 'auth': 'foo', + 'key': 'bar' + }, + } +} + +CEPH_REL_WITH_DEFAULT_FEATURES = { + 'ceph:0': { + 'ceph/0': { + 'private-address': 'ceph_node1', + 'auth': 'foo', + 'key': 'bar', + 'use_syslog': 'true', + 'rbd-features': '1' + }, + 'ceph/1': { + 'private-address': 'ceph_node2', + 'auth': 'foo', + 'key': 'bar', + 'use_syslog': 'false', + 'rbd-features': '1' + }, + } +} + + +IDENTITY_RELATION_NO_CERT = { + 'identity-service:0': { + 'keystone/0': { + 'private-address': 'keystone1', + }, + } +} + +IDENTITY_RELATION_SINGLE_CERT = { + 'identity-service:0': { + 'keystone/0': { + 'private-address': 'keystone1', + 'ssl_cert_cinderhost1': 'certa', + 'ssl_key_cinderhost1': 'keya', + }, + } +} + +IDENTITY_RELATION_MULTIPLE_CERT = { + 'identity-service:0': { + 'keystone/0': { + 'private-address': 'keystone1', + 'ssl_cert_cinderhost1-int-network': 'certa', + 'ssl_key_cinderhost1-int-network': 'keya', + 'ssl_cert_cinderhost1-pub-network': 'certa', + 'ssl_key_cinderhost1-pub-network': 'keya', + 
'ssl_cert_cinderhost1-adm-network': 'certa', + 'ssl_key_cinderhost1-adm-network': 'keya', + }, + } +} + +QUANTUM_NETWORK_SERVICE_RELATION = { + 'quantum-network-service:0': { + 'unit/0': { + 'keystone_host': '10.5.0.1', + 'service_port': '5000', + 'auth_port': '20000', + 'service_tenant': 'tenant', + 'service_username': 'username', + 'service_password': 'password', + 'quantum_host': '10.5.0.2', + 'quantum_port': '9696', + 'quantum_url': 'http://10.5.0.2:9696/v2', + 'region': 'aregion' + }, + } +} + +QUANTUM_NETWORK_SERVICE_RELATION_VERSIONED = { + 'quantum-network-service:0': { + 'unit/0': { + 'keystone_host': '10.5.0.1', + 'service_port': '5000', + 'auth_port': '20000', + 'service_tenant': 'tenant', + 'service_username': 'username', + 'service_password': 'password', + 'quantum_host': '10.5.0.2', + 'quantum_port': '9696', + 'quantum_url': 'http://10.5.0.2:9696/v2', + 'region': 'aregion', + 'api_version': '3', + }, + } +} + +SUB_CONFIG = """ +nova: + /etc/nova/nova.conf: + sections: + DEFAULT: + - [nova-key1, value1] + - [nova-key2, value2] +glance: + /etc/glance/glance.conf: + sections: + DEFAULT: + - [glance-key1, value1] + - [glance-key2, value2] +""" + +NOVA_SUB_CONFIG1 = """ +nova: + /etc/nova/nova.conf: + sections: + DEFAULT: + - [nova-key1, value1] + - [nova-key2, value2] +""" + + +NOVA_SUB_CONFIG2 = """ +nova-compute: + /etc/nova/nova.conf: + sections: + DEFAULT: + - [nova-key3, value3] + - [nova-key4, value4] +""" + +NOVA_SUB_CONFIG3 = """ +nova-compute: + /etc/nova/nova.conf: + sections: + DEFAULT: + - [nova-key5, value5] + - [nova-key6, value6] +""" + +CINDER_SUB_CONFIG1 = """ +cinder: + /etc/cinder/cinder.conf: + sections: + cinder-1-section: + - [key1, value1] +""" + +CINDER_SUB_CONFIG2 = """ +cinder: + /etc/cinder/cinder.conf: + sections: + cinder-2-section: + - [key2, value2] + not-a-section: + 1234 +""" + +SUB_CONFIG_RELATION = { + 'nova-subordinate:0': { + 'nova-subordinate/0': { + 'private-address': 'nova_node1', + 'subordinate_configuration': json.dumps(yaml.safe_load(SUB_CONFIG)), + }, + }, + 'glance-subordinate:0': { + 'glance-subordinate/0': { + 'private-address': 'glance_node1', + 'subordinate_configuration': json.dumps(yaml.safe_load(SUB_CONFIG)), + }, + }, + 'foo-subordinate:0': { + 'foo-subordinate/0': { + 'private-address': 'foo_node1', + 'subordinate_configuration': 'ea8e09324jkadsfh', + }, + }, + 'cinder-subordinate:0': { + 'cinder-subordinate/0': { + 'private-address': 'cinder_node1', + 'subordinate_configuration': json.dumps( + yaml.safe_load(CINDER_SUB_CONFIG1)), + }, + }, + 'cinder-subordinate:1': { + 'cinder-subordinate/1': { + 'private-address': 'cinder_node1', + 'subordinate_configuration': json.dumps( + yaml.safe_load(CINDER_SUB_CONFIG2)), + }, + }, +} + +SUB_CONFIG_RELATION2 = { + 'nova-ceilometer:6': { + 'ceilometer-agent/0': { + 'private-address': 'nova_node1', + 'subordinate_configuration': json.dumps( + yaml.safe_load(NOVA_SUB_CONFIG1)), + }, + }, + 'neutron-plugin:3': { + 'neutron-ovs-plugin/0': { + 'private-address': 'nova_node1', + 'subordinate_configuration': json.dumps( + yaml.safe_load(NOVA_SUB_CONFIG2)), + }, + }, + 'neutron-plugin:4': { + 'neutron-other-plugin/0': { + 'private-address': 'nova_node1', + 'subordinate_configuration': json.dumps( + yaml.safe_load(NOVA_SUB_CONFIG3)), + }, + } +} + +NONET_CONFIG = { + 'vip': 'cinderhost1vip', + 'os-internal-network': None, + 'os-admin-network': None, + 'os-public-network': None +} + +FULLNET_CONFIG = { + 'vip': '10.5.1.1 10.5.2.1 10.5.3.1', + 'os-internal-network': "10.5.1.0/24", + 
'os-admin-network': "10.5.2.0/24", + 'os-public-network': "10.5.3.0/24" +} + +MACHINE_MACS = { + 'eth0': 'fe:c5:ce:8e:2b:00', + 'eth1': 'fe:c5:ce:8e:2b:01', + 'eth2': 'fe:c5:ce:8e:2b:02', + 'eth3': 'fe:c5:ce:8e:2b:03', +} + +MACHINE_NICS = { + 'eth0': ['192.168.0.1'], + 'eth1': ['192.168.0.2'], + 'eth2': [], + 'eth3': [], +} + +ABSENT_MACS = "aa:a5:ae:ae:ab:a4 " + +# Imported in contexts.py and needs patching in setUp() +TO_PATCH = [ + 'b64decode', + 'check_call', + 'get_cert', + 'get_ca_cert', + 'install_ca_cert', + 'log', + 'config', + 'relation_get', + 'relation_ids', + 'related_units', + 'is_relation_made', + 'relation_set', + 'unit_get', + 'https', + 'determine_api_port', + 'determine_apache_port', + 'is_clustered', + 'time', + 'https', + 'get_address_in_network', + 'get_netmask_for_address', + 'local_unit', + 'get_ipv6_addr', + 'mkdir', + 'write_file', + 'get_relation_ip', + 'charm_name', + 'sysctl_create', + 'kv', + 'pwgen', + 'lsb_release', + 'is_container', + 'network_get_primary_address', + 'resolve_address', + 'is_ipv6_disabled', +] + + +class fake_config(object): + + def __init__(self, data): + self.data = data + + def __call__(self, attr): + if attr in self.data: + return self.data[attr] + return None + + +class fake_is_relation_made(): + def __init__(self, relations): + self.relations = relations + + def rel_made(self, relation): + return self.relations[relation] + + +class TestDB(object): + '''Test KV store for unitdata testing''' + def __init__(self): + self.data = {} + self.flushed = False + + def get(self, key, default=None): + return self.data.get(key, default) + + def set(self, key, value): + self.data[key] = value + return value + + def flush(self): + self.flushed = True + + +class ContextTests(unittest.TestCase): + + def setUp(self): + for m in TO_PATCH: + setattr(self, m, self._patch(m)) + # mock at least a single relation + unit + self.relation_ids.return_value = ['foo:0'] + self.related_units.return_value = ['foo/0'] + self.local_unit.return_value = 'localunit' + self.kv.side_effect = TestDB + self.pwgen.return_value = 'testpassword' + self.lsb_release.return_value = {'DISTRIB_RELEASE': '16.04'} + self.is_container.return_value = False + self.network_get_primary_address.side_effect = NotImplementedError() + self.resolve_address.return_value = '10.5.1.50' + self.maxDiff = None + + def _patch(self, method): + _m = patch('charmhelpers.contrib.openstack.context.' 
+ method) + mock = _m.start() + self.addCleanup(_m.stop) + return mock + + def test_base_class_not_implemented(self): + base = context.OSContextGenerator() + self.assertRaises(NotImplementedError, base) + + @patch.object(context, 'get_os_codename_install_source') + def test_shared_db_context_with_data(self, os_codename): + '''Test shared-db context with all required data''' + os_codename.return_value = 'queens' + relation = FakeRelation(relation_data=SHARED_DB_RELATION) + self.relation_get.side_effect = relation.get + self.get_address_in_network.return_value = '' + self.config.side_effect = fake_config(SHARED_DB_CONFIG) + shared_db = context.SharedDBContext() + result = shared_db() + expected = { + 'database_host': 'dbserver.local', + 'database': 'foodb', + 'database_user': 'adam', + 'database_password': 'foo', + 'database_type': 'mysql+pymysql', + } + self.assertEquals(result, expected) + + def test_shared_db_context_with_data_and_access_net_mismatch(self): + """Mismatch between the unit hostname and the access-net hostname + defers execution""" + relation = FakeRelation( + relation_data=SHARED_DB_RELATION_ACCESS_NETWORK) + self.relation_get.side_effect = relation.get + self.get_address_in_network.return_value = '10.5.5.1' + self.config.side_effect = fake_config(SHARED_DB_CONFIG) + shared_db = context.SharedDBContext() + result = shared_db() + self.assertEquals(result, None) + self.relation_set.assert_called_with( + relation_settings={ + 'hostname': '10.5.5.1'}) + + @patch.object(context, 'get_os_codename_install_source') + def test_shared_db_context_with_data_and_access_net_match(self, + os_codename): + """Correctly set hostname for access net returns complete context""" + os_codename.return_value = 'queens' + relation = FakeRelation( + relation_data=SHARED_DB_RELATION_ACCESS_NETWORK) + self.relation_get.side_effect = relation.get + self.get_address_in_network.return_value = 'bar' + self.config.side_effect = fake_config(SHARED_DB_CONFIG) + shared_db = context.SharedDBContext() + result = shared_db() + expected = { + 'database_host': 'dbserver.local', + 'database': 'foodb', + 'database_user': 'adam', + 'database_password': 'foo', + 'database_type': 'mysql+pymysql', + } + self.assertEquals(result, expected) + + @patch.object(context, 'get_os_codename_install_source') + def test_shared_db_context_explicit_relation_id(self, os_codename): + '''Test shared-db context setting the relation_id''' + os_codename.return_value = 'queens' + relation = FakeRelation(relation_data=SHARED_DB_RELATION_ALT_RID) + self.related_units.return_value = ['mysql-alt/0'] + self.relation_get.side_effect = relation.get + self.get_address_in_network.return_value = '' + self.config.side_effect = fake_config(SHARED_DB_CONFIG) + shared_db = context.SharedDBContext(relation_id='mysql-alt:0') + result = shared_db() + expected = { + 'database_host': 'dbserver-alt.local', + 'database': 'foodb', + 'database_user': 'adam', + 'database_password': 'flump', + 'database_type': 'mysql+pymysql', + } + self.assertEquals(result, expected) + + @patch.object(context, 'get_os_codename_install_source') + def test_shared_db_context_with_port(self, os_codename): + '''Test shared-db context with a database port in the relation data''' + os_codename.return_value = 'queens' + relation = FakeRelation(relation_data=SHARED_DB_RELATION_W_PORT) + self.relation_get.side_effect = relation.get + self.get_address_in_network.return_value = '' + self.config.side_effect = fake_config(SHARED_DB_CONFIG) + shared_db = context.SharedDBContext() + result = shared_db() + expected = { +
'database_host': 'dbserver.local', + 'database': 'foodb', + 'database_user': 'adam', + 'database_password': 'foo', + 'database_type': 'mysql+pymysql', + 'database_port': 3306, + } + self.assertEquals(result, expected) + + @patch('os.path.exists') + @patch(open_builtin) + def test_db_ssl(self, _open, osexists): + osexists.return_value = False + ssl_dir = '/etc/dbssl' + db_ssl_ctxt = context.db_ssl(SHARED_DB_RELATION_SSL, {}, ssl_dir) + expected = { + 'database_ssl_ca': ssl_dir + '/db-client.ca', + 'database_ssl_cert': ssl_dir + '/db-client.cert', + 'database_ssl_key': ssl_dir + '/db-client.key', + } + files = [ + call(expected['database_ssl_ca'], 'wb'), + call(expected['database_ssl_cert'], 'wb'), + call(expected['database_ssl_key'], 'wb') + ] + for f in files: + self.assertIn(f, _open.call_args_list) + self.assertEquals(db_ssl_ctxt, expected) + decode = [ + call(SHARED_DB_RELATION_SSL['ssl_ca']), + call(SHARED_DB_RELATION_SSL['ssl_cert']), + call(SHARED_DB_RELATION_SSL['ssl_key']) + ] + self.assertEquals(decode, self.b64decode.call_args_list) + + def test_db_ssl_nossldir(self): + db_ssl_ctxt = context.db_ssl(SHARED_DB_RELATION_SSL, {}, None) + self.assertEquals(db_ssl_ctxt, {}) + + @patch.object(context, 'get_os_codename_install_source') + def test_shared_db_context_with_missing_relation(self, os_codename): + '''Test shared-db context missing relation data''' + os_codename.return_value = 'stein' + incomplete_relation = copy.copy(SHARED_DB_RELATION) + incomplete_relation['password'] = None + relation = FakeRelation(relation_data=incomplete_relation) + self.relation_get.side_effect = relation.get + self.config.return_value = SHARED_DB_CONFIG + shared_db = context.SharedDBContext() + result = shared_db() + self.assertEquals(result, {}) + + def test_shared_db_context_with_missing_config(self): + '''Test shared-db context missing config data''' + incomplete_config = copy.copy(SHARED_DB_CONFIG) + del incomplete_config['database-user'] + self.config.side_effect = fake_config(incomplete_config) + relation = FakeRelation(relation_data=SHARED_DB_RELATION) + self.relation_get.side_effect = relation.get + self.config.return_value = incomplete_config + shared_db = context.SharedDBContext() + self.assertRaises(context.OSContextError, shared_db) + + @patch.object(context, 'get_os_codename_install_source') + def test_shared_db_context_with_params(self, os_codename): + '''Test shared-db context with object parameters''' + os_codename.return_value = 'stein' + shared_db = context.SharedDBContext( + database='quantum', user='quantum', relation_prefix='quantum') + relation = FakeRelation(relation_data=SHARED_DB_RELATION_NAMESPACED) + self.relation_get.side_effect = relation.get + result = shared_db() + self.assertIn( + call(rid='foo:0', unit='foo/0'), + self.relation_get.call_args_list) + self.assertEquals( + result, {'database': 'quantum', + 'database_user': 'quantum', + 'database_password': 'bar2', + 'database_host': 'bar', + 'database_type': 'mysql+pymysql'}) + + @patch.object(context, 'get_os_codename_install_source') + def test_shared_db_context_with_params_pike(self, os_codename): + '''Test shared-db context with object parameters on pike''' + os_codename.return_value = 'pike' + shared_db = context.SharedDBContext( + database='quantum', user='quantum', relation_prefix='quantum') + relation = FakeRelation(relation_data=SHARED_DB_RELATION_NAMESPACED) + self.relation_get.side_effect = relation.get + result = shared_db() + self.assertIn( + call(rid='foo:0', unit='foo/0'), + self.relation_get.call_args_list) +
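# pike predates the switch to the pymysql driver, so the + # context should emit plain 'mysql' here (the queens tests + # above expect 'mysql+pymysql') +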
self.assertEquals( + result, {'database': 'quantum', + 'database_user': 'quantum', + 'database_password': 'bar2', + 'database_host': 'bar', + 'database_type': 'mysql'}) + + @patch.object(context, 'get_os_codename_install_source') + @patch('charmhelpers.contrib.openstack.context.format_ipv6_addr') + def test_shared_db_context_with_ipv6(self, format_ipv6_addr, os_codename): + '''Test shared-db context with ipv6''' + shared_db = context.SharedDBContext( + database='quantum', user='quantum', relation_prefix='quantum') + os_codename.return_value = 'stein' + relation = FakeRelation(relation_data=SHARED_DB_RELATION_NAMESPACED) + self.relation_get.side_effect = relation.get + format_ipv6_addr.return_value = '[2001:db8:1::1]' + result = shared_db() + self.assertIn( + call(rid='foo:0', unit='foo/0'), + self.relation_get.call_args_list) + self.assertEquals( + result, {'database': 'quantum', + 'database_user': 'quantum', + 'database_password': 'bar2', + 'database_host': '[2001:db8:1::1]', + 'database_type': 'mysql+pymysql'}) + + def test_postgresql_db_context_with_data(self): + '''Test postgresql-db context with all required data''' + relation = FakeRelation(relation_data=POSTGRESQL_DB_RELATION) + self.relation_get.side_effect = relation.get + self.config.side_effect = fake_config(POSTGRESQL_DB_CONFIG) + postgresql_db = context.PostgresqlDBContext() + result = postgresql_db() + expected = { + 'database_host': 'dbserver.local', + 'database': 'foodb', + 'database_user': 'adam', + 'database_password': 'foo', + 'database_type': 'postgresql', + } + self.assertEquals(result, expected) + + def test_postgresql_db_context_with_missing_relation(self): + '''Test postgresql-db context missing relation data''' + incomplete_relation = copy.copy(POSTGRESQL_DB_RELATION) + incomplete_relation['password'] = None + relation = FakeRelation(relation_data=incomplete_relation) + self.relation_get.side_effect = relation.get + self.config.return_value = POSTGRESQL_DB_CONFIG + postgresql_db = context.PostgresqlDBContext() + result = postgresql_db() + self.assertEquals(result, {}) + + def test_postgresql_db_context_with_missing_config(self): + '''Test postgresql-db context missing config data''' + incomplete_config = copy.copy(POSTGRESQL_DB_CONFIG) + del incomplete_config['database'] + self.config.side_effect = fake_config(incomplete_config) + relation = FakeRelation(relation_data=POSTGRESQL_DB_RELATION) + self.relation_get.side_effect = relation.get + self.config.return_value = incomplete_config + postgresql_db = context.PostgresqlDBContext() + self.assertRaises(context.OSContextError, postgresql_db) + + def test_postgresql_db_context_with_params(self): + '''Test postgresql-db context with object parameters''' + postgresql_db = context.PostgresqlDBContext(database='quantum') + result = postgresql_db() + self.assertEquals(result['database'], 'quantum') + + @patch.object(context, 'filter_installed_packages', return_value=[]) + @patch.object(context, 'os_release', return_value='rocky') + def test_identity_service_context_with_data(self, *args): + '''Test identity-service context with all required data''' + relation = FakeRelation(relation_data=IDENTITY_SERVICE_RELATION_UNSET) + self.relation_get.side_effect = relation.get + identity_service = context.IdentityServiceContext() + result = identity_service() + expected = { + 'admin_password': 'foo', + 'admin_tenant_name': 'admin', + 'admin_tenant_id': None, + 'admin_domain_id': None, + 'admin_user': 'adam', + 'auth_host': 'keystone-host.local', + 'auth_port': '35357', + 'auth_protocol':
'http', + 'service_host': 'keystonehost.local', + 'service_port': '5000', + 'service_protocol': 'http', + 'api_version': '2.0', + } + result.pop('keystone_authtoken') + self.assertEquals(result, expected) + + def test_identity_credentials_context_with_data(self): + '''Test identity-credentials context with all required data''' + relation = FakeRelation(relation_data=IDENTITY_CREDENTIALS_RELATION_UNSET) + self.relation_get.side_effect = relation.get + identity_credentials = context.IdentityCredentialsContext() + result = identity_credentials() + expected = { + 'admin_password': 'foo', + 'admin_tenant_name': 'admin', + 'admin_tenant_id': '123456', + 'admin_user': 'adam', + 'auth_host': 'keystone-host.local', + 'auth_port': '35357', + 'auth_protocol': 'https', + 'service_host': 'keystonehost.local', + 'service_port': '5000', + 'service_protocol': 'https', + 'api_version': '2.0', + } + self.assertEquals(result, expected) + + @patch.object(context, 'filter_installed_packages', return_value=[]) + @patch.object(context, 'os_release', return_value='rocky') + def test_identity_service_context_with_altname(self, *args): + '''Test identity context when using an explicit relation name''' + relation = FakeRelation( + relation_data=APIIDENTITY_SERVICE_RELATION_UNSET + ) + self.relation_get.side_effect = relation.get + self.relation_ids.return_value = ['neutron-plugin-api:0'] + self.related_units.return_value = ['neutron-api/0'] + identity_service = context.IdentityServiceContext( + rel_name='neutron-plugin-api' + ) + result = identity_service() + expected = { + 'admin_password': 'foo', + 'admin_tenant_name': 'admin', + 'admin_tenant_id': None, + 'admin_domain_id': None, + 'admin_user': 'adam', + 'auth_host': 'keystone-host.local', + 'auth_port': '35357', + 'auth_protocol': 'http', + 'service_host': 'keystonehost.local', + 'service_port': '5000', + 'service_protocol': 'http', + 'api_version': '2.0', + } + result.pop('keystone_authtoken') + self.assertEquals(result, expected) + + @patch.object(context, 'filter_installed_packages', return_value=[]) + @patch.object(context, 'os_release', return_value='rocky') + def test_identity_service_context_with_cache(self, *args): + '''Test identity-service context with signing cache info''' + relation = FakeRelation(relation_data=IDENTITY_SERVICE_RELATION_UNSET) + self.relation_get.side_effect = relation.get + svc = 'cinder' + identity_service = context.IdentityServiceContext(service=svc, + service_user=svc) + result = identity_service() + expected = { + 'admin_password': 'foo', + 'admin_tenant_name': 'admin', + 'admin_tenant_id': None, + 'admin_domain_id': None, + 'admin_user': 'adam', + 'auth_host': 'keystone-host.local', + 'auth_port': '35357', + 'auth_protocol': 'http', + 'service_host': 'keystonehost.local', + 'service_port': '5000', + 'service_protocol': 'http', + 'signing_dir': '/var/cache/cinder', + 'api_version': '2.0', + } + self.assertTrue(self.mkdir.called) + result.pop('keystone_authtoken') + self.assertEquals(result, expected) + + @patch.object(context, 'filter_installed_packages', return_value=[]) + @patch.object(context, 'os_release', return_value='rocky') + def test_identity_service_context_with_data_http(self, *args): + '''Test identity-service context with http endpoint data''' + relation = FakeRelation(relation_data=IDENTITY_SERVICE_RELATION_HTTP) + self.relation_get.side_effect = relation.get + identity_service = context.IdentityServiceContext() + result = identity_service() + expected = { + 'admin_password': 'foo', + 'admin_tenant_name': 'admin', +
'admin_tenant_id': '123456', + 'admin_domain_id': None, + 'admin_user': 'adam', + 'auth_host': 'keystone-host.local', + 'auth_port': '35357', + 'auth_protocol': 'http', + 'service_host': 'keystonehost.local', + 'service_port': '5000', + 'service_protocol': 'http', + 'api_version': '2.0', + } + result.pop('keystone_authtoken') + self.assertEquals(result, expected) + + @patch.object(context, 'filter_installed_packages', return_value=[]) + @patch.object(context, 'os_release', return_value='rocky') + def test_identity_service_context_with_data_https(self, *args): + '''Test identity-service context with https endpoint data''' + relation = FakeRelation(relation_data=IDENTITY_SERVICE_RELATION_HTTPS) + self.relation_get.side_effect = relation.get + identity_service = context.IdentityServiceContext() + result = identity_service() + expected = { + 'admin_password': 'foo', + 'admin_tenant_name': 'admin', + 'admin_tenant_id': None, + 'admin_domain_id': None, + 'admin_user': 'adam', + 'auth_host': 'keystone-host.local', + 'auth_port': '35357', + 'auth_protocol': 'https', + 'service_host': 'keystonehost.local', + 'service_port': '5000', + 'service_protocol': 'https', + 'api_version': '2.0', + } + result.pop('keystone_authtoken') + self.assertEquals(result, expected) + + @patch.object(context, 'filter_installed_packages', return_value=[]) + @patch.object(context, 'os_release', return_value='rocky') + def test_identity_service_context_with_data_versioned(self, *args): + '''Test identity-service context with api version supplied from keystone''' + relation = FakeRelation( + relation_data=IDENTITY_SERVICE_RELATION_VERSIONED) + self.relation_get.side_effect = relation.get + identity_service = context.IdentityServiceContext() + result = identity_service() + expected = { + 'admin_password': 'foo', + 'admin_domain_name': 'admin_domain', + 'admin_tenant_name': 'admin', + 'admin_tenant_id': 'svc-proj-id', + 'admin_domain_id': 'svc-dom-id', + 'service_project_id': 'svc-proj-id', + 'service_domain_id': 'svc-dom-id', + 'admin_user': 'adam', + 'auth_host': 'keystone-host.local', + 'auth_port': '35357', + 'auth_protocol': 'https', + 'service_host': 'keystonehost.local', + 'service_port': '5000', + 'service_protocol': 'https', + 'api_version': '3', + } + result.pop('keystone_authtoken') + self.assertEquals(result, expected) + + def test_identity_credentials_context_with_data_versioned(self): + '''Test identity-credentials context with api version supplied from keystone''' + relation = FakeRelation( + relation_data=IDENTITY_CREDENTIALS_RELATION_VERSIONED) + self.relation_get.side_effect = relation.get + identity_credentials = context.IdentityCredentialsContext() + result = identity_credentials() + expected = { + 'admin_password': 'foo', + 'admin_domain_name': 'admin_domain', + 'admin_tenant_name': 'admin', + 'admin_tenant_id': '123456', + 'admin_user': 'adam', + 'auth_host': 'keystone-host.local', + 'auth_port': '35357', + 'auth_protocol': 'https', + 'service_host': 'keystonehost.local', + 'service_port': '5000', + 'service_protocol': 'https', + 'api_version': '3', + } + self.assertEquals(result, expected) + + @patch.object(context, 'filter_installed_packages', return_value=[]) + @patch.object(context, 'os_release', return_value='rocky') + @patch('charmhelpers.contrib.openstack.context.format_ipv6_addr') + def test_identity_service_context_with_ipv6(self, format_ipv6_addr, *args): + '''Test identity-service context with ipv6''' + relation = FakeRelation(relation_data=IDENTITY_SERVICE_RELATION_HTTP) + self.relation_get.side_effect =
relation.get + format_ipv6_addr.return_value = '[2001:db8:1::1]' + identity_service = context.IdentityServiceContext() + result = identity_service() + expected = { + 'admin_password': 'foo', + 'admin_tenant_name': 'admin', + 'admin_tenant_id': '123456', + 'admin_domain_id': None, + 'admin_user': 'adam', + 'auth_host': '[2001:db8:1::1]', + 'auth_port': '35357', + 'auth_protocol': 'http', + 'service_host': '[2001:db8:1::1]', + 'service_port': '5000', + 'service_protocol': 'http', + 'api_version': '2.0', + } + result.pop('keystone_authtoken') + self.assertEquals(result, expected) + + @patch.object(context, 'filter_installed_packages', return_value=[]) + @patch.object(context, 'os_release', return_value='rocky') + def test_identity_service_context_with_missing_relation(self, *args): + '''Test identity-service context missing relation data''' + incomplete_relation = copy.copy(IDENTITY_SERVICE_RELATION_UNSET) + incomplete_relation['service_password'] = None + relation = FakeRelation(relation_data=incomplete_relation) + self.relation_get.side_effect = relation.get + identity_service = context.IdentityServiceContext() + result = identity_service() + self.assertEquals(result, {}) + + @patch.object(context, 'filter_installed_packages') + @patch.object(context, 'os_release') + def test_keystone_authtoken_www_authenticate_uri_stein_apiv3(self, mock_os_release, mock_filter_installed_packages): + relation_data = copy.deepcopy(IDENTITY_SERVICE_RELATION_VERSIONED) + relation = FakeRelation(relation_data=relation_data) + self.relation_get.side_effect = relation.get + + mock_filter_installed_packages.return_value = [] + mock_os_release.return_value = 'stein' + + identity_service = context.IdentityServiceContext() + + cfg_ctx = identity_service() + + keystone_authtoken = cfg_ctx.get('keystone_authtoken', {}) + + expected = collections.OrderedDict(( + ('auth_type', 'password'), + ('www_authenticate_uri', 'https://keystonehost.local:5000/v3'), + ('auth_url', 'https://keystone-host.local:35357/v3'), + ('project_domain_name', 'admin_domain'), + ('user_domain_name', 'admin_domain'), + ('project_name', 'admin'), + ('username', 'adam'), + ('password', 'foo'), + ('signing_dir', ''), + )) + + self.assertEquals(keystone_authtoken, expected) + + def test_amqp_context_with_data(self): + '''Test amqp context with all required data''' + relation = FakeRelation(relation_data=AMQP_RELATION) + self.relation_get.side_effect = relation.get + self.config.return_value = AMQP_CONFIG + amqp = context.AMQPContext() + result = amqp() + expected = { + 'oslo_messaging_driver': 'messagingv2', + 'rabbitmq_host': 'rabbithost', + 'rabbitmq_password': 'foobar', + 'rabbitmq_user': 'adam', + 'rabbitmq_virtual_host': 'foo', + 'transport_url': 'rabbit://adam:foobar@rabbithost:5672/foo' + } + self.assertEquals(result, expected) + + def test_amqp_context_explicit_relation_id(self): + '''Test amqp context setting the relation_id''' + relation = FakeRelation(relation_data=AMQP_RELATION_ALT_RID) + self.relation_get.side_effect = relation.get + self.related_units.return_value = ['rabbitmq-alt/0'] + self.config.return_value = AMQP_CONFIG + amqp = context.AMQPContext(relation_id='amqp-alt:0') + result = amqp() + expected = { + 'oslo_messaging_driver': 'messagingv2', + 'rabbitmq_host': 'rabbitalthost1', + 'rabbitmq_password': 'flump', + 'rabbitmq_user': 'adam', + 'rabbitmq_virtual_host': 'foo', + 'transport_url': 'rabbit://adam:flump@rabbitalthost1:5672/foo' + } + self.assertEquals(result, expected) + + def test_amqp_context_with_data_altname(self): + '''Test
amqp context with alternative relation name''' + relation = FakeRelation(relation_data=AMQP_RELATION) + self.relation_get.side_effect = relation.get + self.config.return_value = AMQP_NOVA_CONFIG + amqp = context.AMQPContext( + rel_name='amqp-nova', + relation_prefix='nova') + result = amqp() + expected = { + 'oslo_messaging_driver': 'messagingv2', + 'rabbitmq_host': 'rabbithost', + 'rabbitmq_password': 'foobar', + 'rabbitmq_user': 'adam', + 'rabbitmq_virtual_host': 'foo', + 'transport_url': 'rabbit://adam:foobar@rabbithost:5672/foo' + } + self.assertEquals(result, expected) + + @patch(open_builtin) + def test_amqp_context_with_data_ssl(self, _open): + '''Test amqp context with all required data and ssl''' + relation = FakeRelation(relation_data=AMQP_RELATION_WITH_SSL) + self.relation_get.side_effect = relation.get + self.config.return_value = AMQP_CONFIG + ssl_dir = '/etc/sslamqp' + amqp = context.AMQPContext(ssl_dir=ssl_dir) + result = amqp() + expected = { + 'oslo_messaging_driver': 'messagingv2', + 'rabbitmq_host': 'rabbithost', + 'rabbitmq_password': 'foobar', + 'rabbitmq_user': 'adam', + 'rabbit_ssl_port': 5671, + 'rabbitmq_virtual_host': 'foo', + 'rabbit_ssl_ca': ssl_dir + '/rabbit-client-ca.pem', + 'rabbitmq_ha_queues': True, + 'transport_url': 'rabbit://adam:foobar@rabbithost:5671/foo' + } + _open.assert_called_once_with(ssl_dir + '/rabbit-client-ca.pem', 'wb') + self.assertEquals(result, expected) + self.assertEquals([call(AMQP_RELATION_WITH_SSL['ssl_ca'])], + self.b64decode.call_args_list) + + def test_amqp_context_with_data_ssl_noca(self): + '''Test amqp context with all required data with ssl but missing ca''' + relation = FakeRelation(relation_data=AMQP_RELATION_WITH_SSL) + self.relation_get.side_effect = relation.get + self.config.return_value = AMQP_CONFIG + amqp = context.AMQPContext() + result = amqp() + expected = { + 'oslo_messaging_driver': 'messagingv2', + 'rabbitmq_host': 'rabbithost', + 'rabbitmq_password': 'foobar', + 'rabbitmq_user': 'adam', + 'rabbit_ssl_port': 5671, + 'rabbitmq_virtual_host': 'foo', + 'rabbit_ssl_ca': 'cert', + 'rabbitmq_ha_queues': True, + 'transport_url': 'rabbit://adam:foobar@rabbithost:5671/foo' + } + self.assertEquals(result, expected) + + def test_amqp_context_with_data_clustered(self): + '''Test amqp context with all required data with clustered rabbit''' + relation_data = copy.copy(AMQP_RELATION) + relation_data['clustered'] = 'yes' + relation = FakeRelation(relation_data=relation_data) + self.relation_get.side_effect = relation.get + self.config.return_value = AMQP_CONFIG + amqp = context.AMQPContext() + result = amqp() + expected = { + 'oslo_messaging_driver': 'messagingv2', + 'clustered': True, + 'rabbitmq_host': relation_data['vip'], + 'rabbitmq_password': 'foobar', + 'rabbitmq_user': 'adam', + 'rabbitmq_virtual_host': 'foo', + 'transport_url': 'rabbit://adam:foobar@10.0.0.1:5672/foo' + } + self.assertEquals(result, expected) + + def test_amqp_context_with_data_active_active(self): + '''Test amqp context with required data with active/active rabbit''' + relation_data = copy.copy(AMQP_AA_RELATION) + relation = FakeRelation(relation_data=relation_data) + self.relation_get.side_effect = relation.get + self.relation_ids.side_effect = relation.relation_ids + self.related_units.side_effect = relation.relation_units + self.config.return_value = AMQP_CONFIG + amqp = context.AMQPContext() + result = amqp() + expected = { + 'oslo_messaging_driver': 'messagingv2', + 'rabbitmq_host': 'rabbithost1', + 'rabbitmq_password': 'foobar', + 
'rabbitmq_user': 'adam', + 'rabbitmq_virtual_host': 'foo', + 'rabbitmq_hosts': 'rabbithost1,rabbithost2', + 'transport_url': ('rabbit://adam:foobar@rabbithost1:5672' + ',adam:foobar@rabbithost2:5672/foo') + } + self.assertEquals(result, expected) + + def test_amqp_context_with_missing_relation(self): + '''Test amqp context missing relation data''' + incomplete_relation = copy.copy(AMQP_RELATION) + incomplete_relation['password'] = '' + relation = FakeRelation(relation_data=incomplete_relation) + self.relation_get.side_effect = relation.get + self.config.return_value = AMQP_CONFIG + amqp = context.AMQPContext() + result = amqp() + self.assertEquals({}, result) + + def test_amqp_context_with_missing_config(self): + '''Test amqp context missing config data''' + incomplete_config = copy.copy(AMQP_CONFIG) + del incomplete_config['rabbit-user'] + relation = FakeRelation(relation_data=AMQP_RELATION) + self.relation_get.side_effect = relation.get + self.config.return_value = incomplete_config + amqp = context.AMQPContext() + self.assertRaises(context.OSContextError, amqp) + + @patch('charmhelpers.contrib.openstack.context.format_ipv6_addr') + def test_amqp_context_with_ipv6(self, format_ipv6_addr): + '''Test amqp context with ipv6''' + relation_data = copy.copy(AMQP_AA_RELATION) + relation = FakeRelation(relation_data=relation_data) + self.relation_get.side_effect = relation.get + self.relation_ids.side_effect = relation.relation_ids + self.related_units.side_effect = relation.relation_units + format_ipv6_addr.return_value = '[2001:db8:1::1]' + self.config.return_value = AMQP_CONFIG + amqp = context.AMQPContext() + result = amqp() + expected = { + 'oslo_messaging_driver': 'messagingv2', + 'rabbitmq_host': '[2001:db8:1::1]', + 'rabbitmq_password': 'foobar', + 'rabbitmq_user': 'adam', + 'rabbitmq_virtual_host': 'foo', + 'rabbitmq_hosts': '[2001:db8:1::1],[2001:db8:1::1]', + 'transport_url': ('rabbit://adam:foobar@[2001:db8:1::1]:5672' + ',adam:foobar@[2001:db8:1::1]:5672/foo') + } + self.assertEquals(result, expected) + + def test_amqp_context_with_oslo_messaging(self): + """Test amqp context with oslo-messaging-flags option""" + relation = FakeRelation(relation_data=AMQP_RELATION) + self.relation_get.side_effect = relation.get + AMQP_OSLO_CONFIG.update(AMQP_CONFIG) + self.config.return_value = AMQP_OSLO_CONFIG + amqp = context.AMQPContext() + result = amqp() + expected = { + 'rabbitmq_host': 'rabbithost', + 'rabbitmq_password': 'foobar', + 'rabbitmq_user': 'adam', + 'rabbitmq_virtual_host': 'foo', + 'oslo_messaging_flags': { + 'rabbit_max_retries': '1', + 'rabbit_retry_backoff': '1', + 'rabbit_retry_interval': '1' + }, + 'oslo_messaging_driver': 'log', + 'transport_url': 'rabbit://adam:foobar@rabbithost:5672/foo' + } + + self.assertEquals(result, expected) + + def test_amqp_context_with_notification_format(self): + """Test amqp context with notification_format option""" + relation = FakeRelation(relation_data=AMQP_RELATION) + self.relation_get.side_effect = relation.get + AMQP_NOTIFICATION_FORMAT.update(AMQP_CONFIG) + self.config.return_value = AMQP_NOTIFICATION_FORMAT + amqp = context.AMQPContext() + result = amqp() + expected = { + 'oslo_messaging_driver': 'messagingv2', + 'rabbitmq_host': 'rabbithost', + 'rabbitmq_password': 'foobar', + 'rabbitmq_user': 'adam', + 'rabbitmq_virtual_host': 'foo', + 'notification_format': 'both', + 'transport_url': 'rabbit://adam:foobar@rabbithost:5672/foo' + } + + self.assertEquals(result, expected) + + def test_amqp_context_with_notification_topics(self): +
"""Test amqp context with notification_topics option""" + relation = FakeRelation(relation_data=AMQP_RELATION) + self.relation_get.side_effect = relation.get + AMQP_NOTIFICATION_TOPICS.update(AMQP_CONFIG) + self.config.return_value = AMQP_NOTIFICATION_TOPICS + amqp = context.AMQPContext() + result = amqp() + expected = { + 'oslo_messaging_driver': 'messagingv2', + 'rabbitmq_host': 'rabbithost', + 'rabbitmq_password': 'foobar', + 'rabbitmq_user': 'adam', + 'rabbitmq_virtual_host': 'foo', + 'notification_topics': 'foo,bar', + 'transport_url': 'rabbit://adam:foobar@rabbithost:5672/foo' + } + + self.assertEquals(result, expected) + + def test_amqp_context_with_notifications_to_logs(self): + """Test amqp context with send_notifications_to_logs""" + relation = FakeRelation(relation_data=AMQP_RELATION) + self.relation_get.side_effect = relation.get + AMQP_NOTIFICATIONS_LOGS.update(AMQP_CONFIG) + self.config.return_value = AMQP_NOTIFICATIONS_LOGS + amqp = context.AMQPContext() + result = amqp() + expected = { + 'oslo_messaging_driver': 'messagingv2', + 'rabbitmq_host': 'rabbithost', + 'rabbitmq_password': 'foobar', + 'rabbitmq_user': 'adam', + 'rabbitmq_virtual_host': 'foo', + 'transport_url': 'rabbit://adam:foobar@rabbithost:5672/foo', + 'send_notifications_to_logs': True, + } + + self.assertEquals(result, expected) + + def test_libvirt_config_flags(self): + self.config.side_effect = fake_config({ + 'libvirt-flags': 'iscsi_use_multipath=True,chap_auth=False', + }) + + results = context.LibvirtConfigFlagsContext()() + self.assertEquals(results, { + 'libvirt_flags': { + 'chap_auth': 'False', + 'iscsi_use_multipath': 'True' + } + }) + + def test_ceph_no_relids(self): + '''Test empty ceph realtion''' + relation = FakeRelation(relation_data={}) + self.relation_ids.side_effect = relation.get + ceph = context.CephContext() + result = ceph() + self.assertEquals(result, {}) + + def test_ceph_rel_with_no_units(self): + '''Test ceph context with missing related units''' + relation = FakeRelation(relation_data={}) + self.relation_ids.side_effect = relation.relation_ids + self.related_units.side_effect = [] + ceph = context.CephContext() + result = ceph() + self.assertEquals(result, {}) + + @patch.object(context, 'config') + @patch('os.path.isdir') + @patch('os.mkdir') + @patch.object(context, 'ensure_packages') + def test_ceph_context_with_data(self, ensure_packages, mkdir, isdir, + mock_config): + '''Test ceph context with all relation data''' + config_dict = {'use-syslog': True} + + def fake_config(key): + return config_dict.get(key) + + mock_config.side_effect = fake_config + isdir.return_value = False + relation = FakeRelation(relation_data=CEPH_RELATION) + self.relation_get.side_effect = relation.get + self.relation_ids.side_effect = relation.relation_ids + self.related_units.side_effect = relation.relation_units + ceph = context.CephContext() + result = ceph() + expected = { + 'mon_hosts': 'ceph_node1 ceph_node2', + 'auth': 'foo', + 'key': 'bar', + 'use_syslog': 'true', + } + self.assertEquals(result, expected) + ensure_packages.assert_called_with(['ceph-common']) + + @patch('os.mkdir') + @patch.object(context, 'ensure_packages') + def test_ceph_context_with_missing_data(self, ensure_packages, mkdir): + '''Test ceph context with missing relation data''' + relation = copy.deepcopy(CEPH_RELATION) + for k, v in six.iteritems(relation): + for u in six.iterkeys(v): + del relation[k][u]['auth'] + relation = FakeRelation(relation_data=relation) + self.relation_get.side_effect = relation.get + 
self.relation_ids.side_effect = relation.relation_ids + self.related_units.side_effect = relation.relation_units + ceph = context.CephContext() + result = ceph() + self.assertEquals(result, {}) + self.assertFalse(ensure_packages.called) + + @patch.object(context, 'config') + @patch('os.path.isdir') + @patch('os.mkdir') + @patch.object(context, 'ensure_packages') + def test_ceph_context_partial_missing_data(self, ensure_packages, mkdir, + isdir, config): + '''Test ceph context when the last unit is missing data + + Tests a fix for a previous bug where only the config from the + last unit was returned, so a valid value supplied by an earlier + unit would be ignored''' + config.side_effect = fake_config({'use-syslog': 'True'}) + relation = copy.deepcopy(CEPH_RELATION) + for k, v in six.iteritems(relation): + last_unit = sorted(six.iterkeys(v))[-1] + unit_data = relation[k][last_unit] + del unit_data['auth'] + relation[k][last_unit] = unit_data + relation = FakeRelation(relation_data=relation) + self.relation_get.side_effect = relation.get + self.relation_ids.side_effect = relation.relation_ids + self.related_units.side_effect = relation.relation_units + ceph = context.CephContext() + result = ceph() + expected = { + 'mon_hosts': 'ceph_node1 ceph_node2', + 'auth': 'foo', + 'key': 'bar', + 'use_syslog': 'true', + } + self.assertEquals(result, expected) + + @patch.object(context, 'config') + @patch('os.path.isdir') + @patch('os.mkdir') + @patch.object(context, 'ensure_packages') + def test_ceph_context_with_public_addr( + self, ensure_packages, mkdir, isdir, mock_config): + '''Test ceph context on a host with multiple networks with + public addresses in the relation data''' + isdir.return_value = False + config_dict = {'use-syslog': True} + + def fake_config(key): + return config_dict.get(key) + + mock_config.side_effect = fake_config + relation = FakeRelation(relation_data=CEPH_RELATION_WITH_PUBLIC_ADDR) + self.relation_get.side_effect = relation.get + self.relation_ids.side_effect = relation.relation_ids + self.related_units.side_effect = relation.relation_units + ceph = context.CephContext() + result = ceph() + expected = { + 'mon_hosts': '192.168.1.10 192.168.1.11', + 'auth': 'foo', + 'key': 'bar', + 'use_syslog': 'true', + } + self.assertEquals(result, expected) + ensure_packages.assert_called_with(['ceph-common']) + mkdir.assert_called_with('/etc/ceph') + + @patch.object(context, 'config') + @patch('os.path.isdir') + @patch('os.mkdir') + @patch.object(context, 'ensure_packages') + def test_ceph_context_with_public_addr_and_port( + self, ensure_packages, mkdir, isdir, mock_config): + '''Test ceph context on a host with multiple networks with + public addresses and ports in the relation data''' + isdir.return_value = False + config_dict = {'use-syslog': True} + + def fake_config(key): + return config_dict.get(key) + + mock_config.side_effect = fake_config + relation = FakeRelation(relation_data=CEPH_REL_WITH_PUBLIC_ADDR_PORT) + self.relation_get.side_effect = relation.get + self.relation_ids.side_effect = relation.relation_ids + self.related_units.side_effect = relation.relation_units + ceph = context.CephContext() + result = ceph() + expected = { + 'mon_hosts': '192.168.1.10:1234 192.168.1.11:4321', + 'auth': 'foo', + 'key': 'bar', + 'use_syslog': 'true', + } + self.assertEquals(result, expected) + ensure_packages.assert_called_with(['ceph-common']) + mkdir.assert_called_with('/etc/ceph') + + @patch.object(context, 'config') + @patch('os.path.isdir') + @patch('os.mkdir') + @patch.object(context, 'ensure_packages') + def
test_ceph_context_with_public_ipv6_addr(self, ensure_packages, mkdir, + isdir, mock_config): + '''Test ceph context on a host with multiple networks with + IPv6 public addresses in the relation data''' + isdir.return_value = False + config_dict = {'use-syslog': True} + + def fake_config(key): + return config_dict.get(key) + + mock_config.side_effect = fake_config + relation = FakeRelation(relation_data=CEPH_REL_WITH_PUBLIC_IPv6_ADDR) + self.relation_get.side_effect = relation.get + self.relation_ids.side_effect = relation.relation_ids + self.related_units.side_effect = relation.relation_units + ceph = context.CephContext() + result = ceph() + expected = { + 'mon_hosts': '[2001:5c0:9168::1] [2001:5c0:9168::2]', + 'auth': 'foo', + 'key': 'bar', + 'use_syslog': 'true', + } + self.assertEquals(result, expected) + ensure_packages.assert_called_with(['ceph-common']) + mkdir.assert_called_with('/etc/ceph') + + @patch.object(context, 'config') + @patch('os.path.isdir') + @patch('os.mkdir') + @patch.object(context, 'ensure_packages') + def test_ceph_context_with_public_ipv6_addr_port( + self, ensure_packages, mkdir, isdir, mock_config): + '''Test ceph context on a host with multiple networks with + IPv6 public addresses and ports in the relation data''' + isdir.return_value = False + config_dict = {'use-syslog': True} + + def fake_config(key): + return config_dict.get(key) + + mock_config.side_effect = fake_config + relation = FakeRelation( + relation_data=CEPH_REL_WITH_PUBLIC_IPv6_ADDR_PORT) + self.relation_get.side_effect = relation.get + self.relation_ids.side_effect = relation.relation_ids + self.related_units.side_effect = relation.relation_units + ceph = context.CephContext() + result = ceph() + expected = { + 'mon_hosts': '[2001:5c0:9168::1]:1234 [2001:5c0:9168::2]:4321', + 'auth': 'foo', + 'key': 'bar', + 'use_syslog': 'true', + } + self.assertEquals(result, expected) + ensure_packages.assert_called_with(['ceph-common']) + mkdir.assert_called_with('/etc/ceph') + + @patch.object(context, 'config') + @patch('os.path.isdir') + @patch('os.mkdir') + @patch.object(context, 'ensure_packages') + def test_ceph_context_with_multi_public_addr( + self, ensure_packages, mkdir, isdir, mock_config): + '''Test ceph context on a host with multiple networks with + multiple public addresses per unit in the relation data''' + isdir.return_value = False + config_dict = {'use-syslog': True} + + def fake_config(key): + return config_dict.get(key) + + mock_config.side_effect = fake_config + relation = FakeRelation(relation_data=CEPH_REL_WITH_MULTI_PUBLIC_ADDR) + self.relation_get.side_effect = relation.get + self.relation_ids.side_effect = relation.relation_ids + self.related_units.side_effect = relation.relation_units + ceph = context.CephContext() + result = ceph() + expected = { + 'mon_hosts': '192.168.1.10 192.168.1.11 192.168.1.20 192.168.1.21', + 'auth': 'foo', + 'key': 'bar', + 'use_syslog': 'true', + } + self.assertEquals(result, expected) + ensure_packages.assert_called_with(['ceph-common']) + mkdir.assert_called_with('/etc/ceph') + + @patch.object(context, 'config') + @patch('os.path.isdir') + @patch('os.mkdir') + @patch.object(context, 'ensure_packages') + def test_ceph_context_with_default_features( + self, ensure_packages, mkdir, isdir, mock_config): + '''Test ceph context with default rbd-features in the relation data''' + isdir.return_value = False + config_dict = {'use-syslog': True} + + def fake_config(key): + return config_dict.get(key) + + mock_config.side_effect = fake_config + relation = FakeRelation(relation_data=CEPH_REL_WITH_DEFAULT_FEATURES) + self.relation_get.side_effect =
relation.get + self.relation_ids.side_effect = relation.relation_ids + self.related_units.side_effect = relation.relation_units + ceph = context.CephContext() + result = ceph() + expected = { + 'mon_hosts': 'ceph_node1 ceph_node2', + 'auth': 'foo', + 'key': 'bar', + 'use_syslog': 'true', + 'rbd_features': '1', + } + self.assertEquals(result, expected) + ensure_packages.assert_called_with(['ceph-common']) + mkdir.assert_called_with('/etc/ceph') + + @patch.object(context, 'config') + @patch('os.path.isdir') + @patch('os.mkdir') + @patch.object(context, 'ensure_packages') + def test_ceph_context_with_rbd_cache(self, ensure_packages, mkdir, isdir, + mock_config): + isdir.return_value = False + config_dict = {'rbd-client-cache': 'enabled', + 'use-syslog': False} + + def fake_config(key): + return config_dict.get(key) + + mock_config.side_effect = fake_config + relation = FakeRelation(relation_data=CEPH_RELATION_WITH_PUBLIC_ADDR) + self.relation_get.side_effect = relation.get + self.relation_ids.side_effect = relation.relation_ids + self.related_units.side_effect = relation.relation_units + + class CephContextWithRBDCache(context.CephContext): + def __call__(self): + ctxt = super(CephContextWithRBDCache, self).__call__() + + rbd_cache = fake_config('rbd-client-cache') or "" + if rbd_cache.lower() == "enabled": + ctxt['rbd_client_cache_settings'] = \ + {'rbd cache': 'true', + 'rbd cache writethrough until flush': 'true'} + elif rbd_cache.lower() == "disabled": + ctxt['rbd_client_cache_settings'] = \ + {'rbd cache': 'false'} + + return ctxt + + ceph = CephContextWithRBDCache() + result = ceph() + expected = { + 'mon_hosts': '192.168.1.10 192.168.1.11', + 'auth': 'foo', + 'key': 'bar', + 'use_syslog': 'false', + } + expected['rbd_client_cache_settings'] = \ + {'rbd cache': 'true', + 'rbd cache writethrough until flush': 'true'} + + self.assertDictEqual(result, expected) + ensure_packages.assert_called_with(['ceph-common']) + mkdir.assert_called_with('/etc/ceph') + + @patch.object(context, 'config') + def test_sysctl_context_with_config(self, config): + self.charm_name.return_value = 'test-charm' + config.return_value = '{ kernel.max_pid: "1337"}' + self.sysctl_create.return_value = True + ctxt = context.SysctlContext() + result = ctxt() + self.sysctl_create.assert_called_with( + config.return_value, + "/etc/sysctl.d/50-test-charm.conf") + + self.assertEqual(result, {'sysctl': config.return_value}) + + @patch.object(context, 'config') + def test_sysctl_context_without_config(self, config): + self.charm_name.return_value = 'test-charm' + config.return_value = None + self.sysctl_create.return_value = True + ctxt = context.SysctlContext() + result = ctxt() + self.assertFalse(self.sysctl_create.called) + self.assertEqual(result, {'sysctl': config.return_value}) + + @patch.object(context, 'config') + @patch('os.path.isdir') + @patch('os.mkdir') + @patch.object(context, 'ensure_packages') + def test_ceph_context_missing_public_addr( + self, ensure_packages, mkdir, isdir, mock_config): + '''Test ceph context on a host with multiple networks with no + ceph-public-addr in relation data''' + isdir.return_value = False + config_dict = {'use-syslog': True} + + def fake_config(key): + return config_dict.get(key) + + mock_config.side_effect = fake_config + relation = copy.deepcopy(CEPH_RELATION_WITH_PUBLIC_ADDR) + del relation['ceph:0']['ceph/0']['ceph-public-address'] + relation = FakeRelation(relation_data=relation) + self.relation_get.side_effect = relation.get + self.relation_ids.side_effect =
relation.relation_ids + self.related_units.side_effect = relation.relation_units + ceph = context.CephContext() + + result = ceph() + expected = { + 'mon_hosts': '192.168.1.11 ceph_node1', + 'auth': 'foo', + 'key': 'bar', + 'use_syslog': 'true', + } + self.assertEquals(result, expected) + ensure_packages.assert_called_with(['ceph-common']) + mkdir.assert_called_with('/etc/ceph') + + @patch('charmhelpers.contrib.openstack.context.unit_get') + @patch('charmhelpers.contrib.openstack.context.local_unit') + def test_haproxy_context_with_data(self, local_unit, unit_get): + '''Test haproxy context with all relation data''' + cluster_relation = { + 'cluster:0': { + 'peer/1': { + 'private-address': 'cluster-peer1.localnet', + }, + 'peer/2': { + 'private-address': 'cluster-peer2.localnet', + }, + }, + } + local_unit.return_value = 'peer/0' + # We are only using get_relation_ip. + # Setup the values it returns on each subsequent call. + self.get_relation_ip.side_effect = [None, None, None, + 'cluster-peer0.localnet'] + relation = FakeRelation(cluster_relation) + self.relation_ids.side_effect = relation.relation_ids + self.relation_get.side_effect = relation.get + self.related_units.side_effect = relation.relation_units + self.get_netmask_for_address.return_value = '255.255.0.0' + self.config.return_value = False + self.maxDiff = None + self.is_ipv6_disabled.return_value = True + haproxy = context.HAProxyContext() + with patch_open() as (_open, _file): + result = haproxy() + ex = { + 'frontends': { + 'cluster-peer0.localnet': { + 'network': 'cluster-peer0.localnet/255.255.0.0', + 'backends': collections.OrderedDict([ + ('peer-0', 'cluster-peer0.localnet'), + ('peer-1', 'cluster-peer1.localnet'), + ('peer-2', 'cluster-peer2.localnet'), + ]), + }, + }, + 'default_backend': 'cluster-peer0.localnet', + 'local_host': '127.0.0.1', + 'haproxy_host': '0.0.0.0', + 'ipv6_enabled': False, + 'stat_password': 'testpassword', + 'stat_port': '8888', + } + # the context gets generated. + self.assertEquals(ex, result) + # and /etc/default/haproxy is updated. + self.assertEquals(_file.write.call_args_list, + [call('ENABLED=1\n')]) + self.get_relation_ip.assert_has_calls([call('admin', False), + call('internal', False), + call('public', False), + call('cluster')]) + + @patch('charmhelpers.contrib.openstack.context.unit_get') + @patch('charmhelpers.contrib.openstack.context.local_unit') + def test_haproxy_context_with_data_timeout(self, local_unit, unit_get): + '''Test haproxy context with all relation data and timeout''' + cluster_relation = { + 'cluster:0': { + 'peer/1': { + 'private-address': 'cluster-peer1.localnet', + }, + 'peer/2': { + 'private-address': 'cluster-peer2.localnet', + }, + }, + } + local_unit.return_value = 'peer/0' + # We are only using get_relation_ip. + # Setup the values it returns on each subsequent call. 
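+ # (admin, internal and public bindings resolve to None; only the + # cluster binding returns an address)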
+ self.get_relation_ip.side_effect = [None, None, None, + 'cluster-peer0.localnet'] + relation = FakeRelation(cluster_relation) + self.relation_ids.side_effect = relation.relation_ids + self.relation_get.side_effect = relation.get + self.related_units.side_effect = relation.relation_units + self.get_netmask_for_address.return_value = '255.255.0.0' + self.config.return_value = False + self.maxDiff = None + c = fake_config(HAPROXY_CONFIG) + c.data['prefer-ipv6'] = False + self.config.side_effect = c + self.is_ipv6_disabled.return_value = True + haproxy = context.HAProxyContext() + with patch_open() as (_open, _file): + result = haproxy() + ex = { + 'frontends': { + 'cluster-peer0.localnet': { + 'network': 'cluster-peer0.localnet/255.255.0.0', + 'backends': collections.OrderedDict([ + ('peer-0', 'cluster-peer0.localnet'), + ('peer-1', 'cluster-peer1.localnet'), + ('peer-2', 'cluster-peer2.localnet'), + ]), + } + }, + 'default_backend': 'cluster-peer0.localnet', + 'local_host': '127.0.0.1', + 'haproxy_host': '0.0.0.0', + 'ipv6_enabled': False, + 'stat_password': 'testpassword', + 'stat_port': '8888', + 'haproxy_client_timeout': 50000, + 'haproxy_server_timeout': 50000, + } + # the context gets generated. + self.assertEquals(ex, result) + # and /etc/default/haproxy is updated. + self.assertEquals(_file.write.call_args_list, + [call('ENABLED=1\n')]) + self.get_relation_ip.assert_has_calls([call('admin', None), + call('internal', None), + call('public', None), + call('cluster')]) + + @patch('charmhelpers.contrib.openstack.context.unit_get') + @patch('charmhelpers.contrib.openstack.context.local_unit') + def test_haproxy_context_with_data_multinet(self, local_unit, unit_get): + '''Test haproxy context with all relation data for network splits''' + cluster_relation = { + 'cluster:0': { + 'peer/1': { + 'private-address': 'cluster-peer1.localnet', + 'admin-address': 'cluster-peer1.admin', + 'internal-address': 'cluster-peer1.internal', + 'public-address': 'cluster-peer1.public', + }, + 'peer/2': { + 'private-address': 'cluster-peer2.localnet', + 'admin-address': 'cluster-peer2.admin', + 'internal-address': 'cluster-peer2.internal', + 'public-address': 'cluster-peer2.public', + }, + }, + } + + local_unit.return_value = 'peer/0' + relation = FakeRelation(cluster_relation) + self.relation_ids.side_effect = relation.relation_ids + self.relation_get.side_effect = relation.get + self.related_units.side_effect = relation.relation_units + # We are only using get_relation_ip. + # Setup the values it returns on each subsequent call. 
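+ # (each binding resolves to an address on its own network in this + # multi-network scenario)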
+ self.get_relation_ip.side_effect = ['cluster-peer0.admin', + 'cluster-peer0.internal', + 'cluster-peer0.public', + 'cluster-peer0.localnet'] + self.get_netmask_for_address.return_value = '255.255.0.0' + self.config.return_value = False + self.maxDiff = None + self.is_ipv6_disabled.return_value = True + haproxy = context.HAProxyContext() + with patch_open() as (_open, _file): + result = haproxy() + ex = { + 'frontends': { + 'cluster-peer0.admin': { + 'network': 'cluster-peer0.admin/255.255.0.0', + 'backends': collections.OrderedDict([ + ('peer-0', 'cluster-peer0.admin'), + ('peer-1', 'cluster-peer1.admin'), + ('peer-2', 'cluster-peer2.admin'), + ]), + }, + 'cluster-peer0.internal': { + 'network': 'cluster-peer0.internal/255.255.0.0', + 'backends': collections.OrderedDict([ + ('peer-0', 'cluster-peer0.internal'), + ('peer-1', 'cluster-peer1.internal'), + ('peer-2', 'cluster-peer2.internal'), + ]), + }, + 'cluster-peer0.public': { + 'network': 'cluster-peer0.public/255.255.0.0', + 'backends': collections.OrderedDict([ + ('peer-0', 'cluster-peer0.public'), + ('peer-1', 'cluster-peer1.public'), + ('peer-2', 'cluster-peer2.public'), + ]), + }, + 'cluster-peer0.localnet': { + 'network': 'cluster-peer0.localnet/255.255.0.0', + 'backends': collections.OrderedDict([ + ('peer-0', 'cluster-peer0.localnet'), + ('peer-1', 'cluster-peer1.localnet'), + ('peer-2', 'cluster-peer2.localnet'), + ]), + } + }, + 'default_backend': 'cluster-peer0.localnet', + 'local_host': '127.0.0.1', + 'haproxy_host': '0.0.0.0', + 'ipv6_enabled': False, + 'stat_password': 'testpassword', + 'stat_port': '8888', + } + # the context gets generated. + self.assertEquals(ex, result) + # and /etc/default/haproxy is updated. + self.assertEquals(_file.write.call_args_list, + [call('ENABLED=1\n')]) + self.get_relation_ip.assert_has_calls([call('admin', False), + call('internal', False), + call('public', False), + call('cluster')]) + + @patch('charmhelpers.contrib.openstack.context.unit_get') + @patch('charmhelpers.contrib.openstack.context.local_unit') + def test_haproxy_context_with_data_public_only(self, local_unit, unit_get): + '''Test haproxy context with openstack-dashboard public only binding''' + cluster_relation = { + 'cluster:0': { + 'peer/1': { + 'private-address': 'cluster-peer1.localnet', + 'public-address': 'cluster-peer1.public', + }, + 'peer/2': { + 'private-address': 'cluster-peer2.localnet', + 'public-address': 'cluster-peer2.public', + }, + }, + } + + local_unit.return_value = 'peer/0' + relation = FakeRelation(cluster_relation) + self.relation_ids.side_effect = relation.relation_ids + self.relation_get.side_effect = relation.get + self.related_units.side_effect = relation.relation_units + # We are only using get_relation_ip. + # Setup the values it returns on each subsequent call.
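+ # A lookup map is used here instead of a sequence: with + # address_types=['public'] only the 'public' and 'cluster' + # bindings are queried.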
+ _network_get_map = { + 'public': 'cluster-peer0.public', + 'cluster': 'cluster-peer0.localnet', + } + self.get_relation_ip.side_effect = ( + lambda binding, config_opt=None: + _network_get_map[binding] + ) + self.get_netmask_for_address.return_value = '255.255.0.0' + self.config.return_value = None + self.maxDiff = None + self.is_ipv6_disabled.return_value = True + haproxy = context.HAProxyContext(address_types=['public']) + with patch_open() as (_open, _file): + result = haproxy() + ex = { + 'frontends': { + 'cluster-peer0.public': { + 'network': 'cluster-peer0.public/255.255.0.0', + 'backends': collections.OrderedDict([ + ('peer-0', 'cluster-peer0.public'), + ('peer-1', 'cluster-peer1.public'), + ('peer-2', 'cluster-peer2.public'), + ]), + }, + 'cluster-peer0.localnet': { + 'network': 'cluster-peer0.localnet/255.255.0.0', + 'backends': collections.OrderedDict([ + ('peer-0', 'cluster-peer0.localnet'), + ('peer-1', 'cluster-peer1.localnet'), + ('peer-2', 'cluster-peer2.localnet'), + ]), + } + }, + 'default_backend': 'cluster-peer0.localnet', + 'local_host': '127.0.0.1', + 'haproxy_host': '0.0.0.0', + 'ipv6_enabled': False, + 'stat_password': 'testpassword', + 'stat_port': '8888', + } + # the context gets generated. + self.assertEquals(ex, result) + # and /etc/default/haproxy is updated. + self.assertEquals(_file.write.call_args_list, + [call('ENABLED=1\n')]) + self.get_relation_ip.assert_has_calls([call('public', None), + call('cluster')]) + + @patch('charmhelpers.contrib.openstack.context.unit_get') + @patch('charmhelpers.contrib.openstack.context.local_unit') + def test_haproxy_context_with_data_ipv6(self, local_unit, unit_get): + '''Test haproxy context with all relation data ipv6''' + cluster_relation = { + 'cluster:0': { + 'peer/1': { + 'private-address': 'cluster-peer1.localnet', + }, + 'peer/2': { + 'private-address': 'cluster-peer2.localnet', + }, + }, + } + + local_unit.return_value = 'peer/0' + # We are only using get_relation_ip. + # Setup the values it returns on each subsequent call. + self.get_relation_ip.side_effect = [None, None, None, + 'cluster-peer0.localnet'] + relation = FakeRelation(cluster_relation) + self.relation_ids.side_effect = relation.relation_ids + self.relation_get.side_effect = relation.get + self.related_units.side_effect = relation.relation_units + self.get_address_in_network.return_value = None + self.get_netmask_for_address.return_value = \ + 'FFFF:FFFF:FFFF:FFFF:0000:0000:0000:0000' + self.get_ipv6_addr.return_value = ['cluster-peer0.localnet'] + c = fake_config(HAPROXY_CONFIG) + c.data['prefer-ipv6'] = True + self.config.side_effect = c + self.maxDiff = None + self.is_ipv6_disabled.return_value = False + haproxy = context.HAProxyContext() + with patch_open() as (_open, _file): + result = haproxy() + ex = { + 'frontends': { + 'cluster-peer0.localnet': { + 'network': 'cluster-peer0.localnet/' + 'FFFF:FFFF:FFFF:FFFF:0000:0000:0000:0000', + 'backends': collections.OrderedDict([ + ('peer-0', 'cluster-peer0.localnet'), + ('peer-1', 'cluster-peer1.localnet'), + ('peer-2', 'cluster-peer2.localnet'), + ]), + } + }, + 'default_backend': 'cluster-peer0.localnet', + 'local_host': 'ip6-localhost', + 'haproxy_server_timeout': 50000, + 'haproxy_client_timeout': 50000, + 'haproxy_host': '::', + 'ipv6_enabled': True, + 'stat_password': 'testpassword', + 'stat_port': '8888', + } + # the context gets generated. + self.assertEquals(ex, result) + # and /etc/default/haproxy is updated. 
+        self.assertEquals(_file.write.call_args_list,
+                          [call('ENABLED=1\n')])
+        self.get_relation_ip.assert_has_calls([call('admin', None),
+                                               call('internal', None),
+                                               call('public', None),
+                                               call('cluster')])
+
+    def test_haproxy_context_with_missing_data(self):
+        '''Test haproxy context with missing relation data'''
+        self.relation_ids.return_value = []
+        haproxy = context.HAProxyContext()
+        self.assertEquals({}, haproxy())
+
+    @patch('charmhelpers.contrib.openstack.context.unit_get')
+    @patch('charmhelpers.contrib.openstack.context.local_unit')
+    def test_haproxy_context_with_no_peers(self, local_unit, unit_get):
+        '''Test haproxy context with single unit'''
+        # Peer relations always show at least one peer relation, even
+        # if the unit is alone; this should yield an incomplete context.
+        cluster_relation = {
+            'cluster:0': {
+                'peer/0': {
+                    'private-address': 'lonely.clusterpeer.howsad',
+                },
+            },
+        }
+        local_unit.return_value = 'peer/0'
+        # We are only using get_relation_ip.
+        # Setup the values it returns on each subsequent call.
+        self.get_relation_ip.side_effect = [None, None, None, None]
+        relation = FakeRelation(cluster_relation)
+        self.relation_ids.side_effect = relation.relation_ids
+        self.relation_get.side_effect = relation.get
+        self.related_units.side_effect = relation.relation_units
+        self.config.return_value = False
+        haproxy = context.HAProxyContext()
+        self.assertEquals({}, haproxy())
+        self.get_relation_ip.assert_has_calls([call('admin', False),
+                                               call('internal', False),
+                                               call('public', False),
+                                               call('cluster')])
+
+    @patch('charmhelpers.contrib.openstack.context.unit_get')
+    @patch('charmhelpers.contrib.openstack.context.local_unit')
+    def test_haproxy_context_with_net_override(self, local_unit, unit_get):
+        '''Test haproxy context with per-endpoint network overrides'''
+        # Peer relations always show at least one peer relation, even
+        # if the unit is alone; this should yield an incomplete context.
+        cluster_relation = {
+            'cluster:0': {
+                'peer/0': {
+                    'private-address': 'lonely.clusterpeer.howsad',
+                },
+            },
+        }
+        local_unit.return_value = 'peer/0'
+        # We are only using get_relation_ip.
+        # Setup the values it returns on each subsequent call.
+        self.get_relation_ip.side_effect = [None, None, None, None]
+        relation = FakeRelation(cluster_relation)
+        self.relation_ids.side_effect = relation.relation_ids
+        self.relation_get.side_effect = relation.get
+        self.related_units.side_effect = relation.relation_units
+        self.config.return_value = False
+        c = fake_config(HAPROXY_CONFIG)
+        c.data['os-admin-network'] = '192.168.10.0/24'
+        c.data['os-internal-network'] = '192.168.20.0/24'
+        c.data['os-public-network'] = '192.168.30.0/24'
+        self.config.side_effect = c
+        haproxy = context.HAProxyContext()
+        self.assertEquals({}, haproxy())
+        self.get_relation_ip.assert_has_calls([call('admin', '192.168.10.0/24'),
+                                               call('internal', '192.168.20.0/24'),
+                                               call('public', '192.168.30.0/24'),
+                                               call('cluster')])
+
+    @patch('charmhelpers.contrib.openstack.context.unit_get')
+    @patch('charmhelpers.contrib.openstack.context.local_unit')
+    def test_haproxy_context_with_no_peers_singlemode(self, local_unit, unit_get):
+        '''Test haproxy context with a single unit in singlenode mode'''
+        # Peer relations always show at least one peer relation, even
+        # if the unit is alone; singlenode mode still renders a context.
+        cluster_relation = {
+            'cluster:0': {
+                'peer/0': {
+                    'private-address': 'lonely.clusterpeer.howsad',
+                },
+            },
+        }
+        local_unit.return_value = 'peer/0'
+        # We are only using get_relation_ip.
+        # Setup the values it returns on each subsequent call.
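+        # admin/internal/public resolve to None below, so only the cluster
+        # address ends up backing the single frontend this mode renders.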
+ self.get_relation_ip.side_effect = [None, None, None, + 'lonely.clusterpeer.howsad'] + relation = FakeRelation(cluster_relation) + self.relation_ids.side_effect = relation.relation_ids + self.relation_get.side_effect = relation.get + self.related_units.side_effect = relation.relation_units + self.config.return_value = False + self.get_address_in_network.return_value = None + self.get_netmask_for_address.return_value = '255.255.0.0' + self.is_ipv6_disabled.return_value = True + with patch_open() as (_open, _file): + result = context.HAProxyContext(singlenode_mode=True)() + ex = { + 'frontends': { + 'lonely.clusterpeer.howsad': { + 'backends': collections.OrderedDict([ + ('peer-0', 'lonely.clusterpeer.howsad')]), + 'network': 'lonely.clusterpeer.howsad/255.255.0.0' + }, + }, + 'default_backend': 'lonely.clusterpeer.howsad', + 'haproxy_host': '0.0.0.0', + 'local_host': '127.0.0.1', + 'ipv6_enabled': False, + 'stat_port': '8888', + 'stat_password': 'testpassword', + } + self.assertEquals(ex, result) + # and /etc/default/haproxy is updated. + self.assertEquals(_file.write.call_args_list, + [call('ENABLED=1\n')]) + self.get_relation_ip.assert_has_calls([call('admin', False), + call('internal', False), + call('public', False), + call('cluster')]) + + def test_https_context_with_no_https(self): + '''Test apache2 https when no https data available''' + apache = context.ApacheSSLContext() + self.https.return_value = False + self.assertEquals({}, apache()) + + def _https_context_setup(self): + ''' + Helper for test_https_context* tests. + + ''' + self.https.return_value = True + self.determine_api_port.return_value = 8756 + self.determine_apache_port.return_value = 8766 + + apache = context.ApacheSSLContext() + apache.configure_cert = MagicMock() + apache.enable_modules = MagicMock() + apache.configure_ca = MagicMock() + apache.canonical_names = MagicMock() + apache.canonical_names.return_value = [ + '10.5.1.1', + '10.5.2.1', + '10.5.3.1', + ] + apache.get_network_addresses = MagicMock() + apache.get_network_addresses.return_value = [ + ('10.5.1.100', '10.5.1.1'), + ('10.5.2.100', '10.5.2.1'), + ('10.5.3.100', '10.5.3.1'), + ] + apache.external_ports = '8776' + apache.service_namespace = 'cinder' + + ex = { + 'namespace': 'cinder', + 'endpoints': [('10.5.1.100', '10.5.1.1', 8766, 8756), + ('10.5.2.100', '10.5.2.1', 8766, 8756), + ('10.5.3.100', '10.5.3.1', 8766, 8756)], + 'ext_ports': [8766] + } + + return apache, ex + + def test_https_context(self): + self.relation_ids.return_value = [] + + apache, ex = self._https_context_setup() + + self.assertEquals(ex, apache()) + + apache.configure_cert.assert_has_calls([ + call('10.5.1.1'), + call('10.5.2.1'), + call('10.5.3.1') + ]) + + self.assertTrue(apache.configure_ca.called) + self.assertTrue(apache.enable_modules.called) + self.assertTrue(apache.configure_cert.called) + + def test_https_context_vault_relation(self): + self.relation_ids.return_value = ['certificates:2'] + self.related_units.return_value = 'vault/0' + + apache, ex = self._https_context_setup() + + self.assertEquals(ex, apache()) + + self.assertFalse(apache.configure_cert.called) + self.assertFalse(apache.configure_ca.called) + + def test_https_context_no_canonical_names(self): + self.relation_ids.return_value = [] + + apache, ex = self._https_context_setup() + apache.canonical_names.return_value = [] + + self.resolve_address.side_effect = ( + '10.5.1.4', '10.5.2.5', '10.5.3.6') + + self.assertEquals(ex, apache()) + + apache.configure_cert.assert_has_calls([ + call('10.5.1.4'), + 
call('10.5.2.5'), + call('10.5.3.6') + ]) + + self.resolve_address.assert_has_calls([ + call(endpoint_type=context.INTERNAL), + call(endpoint_type=context.ADMIN), + call(endpoint_type=context.PUBLIC), + ]) + + self.assertTrue(apache.configure_ca.called) + self.assertTrue(apache.enable_modules.called) + self.assertTrue(apache.configure_cert.called) + + def test_https_context_loads_correct_apache_mods(self): + # Test apache2 context also loads required apache modules + apache = context.ApacheSSLContext() + apache.enable_modules() + ex_cmd = ['a2enmod', 'ssl', 'proxy', 'proxy_http', 'headers'] + self.check_call.assert_called_with(ex_cmd) + + def test_https_configure_cert(self): + # Test apache2 properly installs certs and keys to disk + self.get_cert.return_value = ('SSL_CERT', 'SSL_KEY') + self.b64decode.side_effect = [b'SSL_CERT', b'SSL_KEY'] + apache = context.ApacheSSLContext() + apache.service_namespace = 'cinder' + apache.configure_cert('test-cn') + # appropriate directories are created. + self.mkdir.assert_called_with(path='/etc/apache2/ssl/cinder') + # appropriate files are written. + files = [call(path='/etc/apache2/ssl/cinder/cert_test-cn', + content=b'SSL_CERT', owner='root', group='root', + perms=0o640), + call(path='/etc/apache2/ssl/cinder/key_test-cn', + content=b'SSL_KEY', owner='root', group='root', + perms=0o640)] + self.write_file.assert_has_calls(files) + # appropriate bits are b64decoded. + decode = [call('SSL_CERT'), call('SSL_KEY')] + self.assertEquals(decode, self.b64decode.call_args_list) + + def test_https_configure_cert_deprecated(self): + # Test apache2 properly installs certs and keys to disk + self.get_cert.return_value = ('SSL_CERT', 'SSL_KEY') + self.b64decode.side_effect = ['SSL_CERT', 'SSL_KEY'] + apache = context.ApacheSSLContext() + apache.service_namespace = 'cinder' + apache.configure_cert() + # appropriate directories are created. + self.mkdir.assert_called_with(path='/etc/apache2/ssl/cinder') + # appropriate files are written. + files = [call(path='/etc/apache2/ssl/cinder/cert', + content='SSL_CERT', owner='root', group='root', + perms=0o640), + call(path='/etc/apache2/ssl/cinder/key', + content='SSL_KEY', owner='root', group='root', + perms=0o640)] + self.write_file.assert_has_calls(files) + # appropriate bits are b64decoded. 
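+        # (b64decode is mocked, so its recorded calls double as a check
+        # that both blobs went through decoding.)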
+ decode = [call('SSL_CERT'), call('SSL_KEY')] + self.assertEquals(decode, self.b64decode.call_args_list) + + def test_https_canonical_names(self): + rel = FakeRelation(IDENTITY_RELATION_SINGLE_CERT) + self.relation_ids.side_effect = rel.relation_ids + self.related_units.side_effect = rel.relation_units + self.relation_get.side_effect = rel.get + apache = context.ApacheSSLContext() + self.assertEquals(apache.canonical_names(), ['cinderhost1']) + rel.relation_data = IDENTITY_RELATION_MULTIPLE_CERT + self.assertEquals(apache.canonical_names(), + sorted(['cinderhost1-adm-network', + 'cinderhost1-int-network', + 'cinderhost1-pub-network'])) + rel.relation_data = IDENTITY_RELATION_NO_CERT + self.assertEquals(apache.canonical_names(), []) + + def test_image_service_context_missing_data(self): + '''Test image-service with missing relation and missing data''' + image_service = context.ImageServiceContext() + self.relation_ids.return_value = [] + self.assertEquals({}, image_service()) + self.relation_ids.return_value = ['image-service:0'] + self.related_units.return_value = ['glance/0'] + self.relation_get.return_value = None + self.assertEquals({}, image_service()) + + def test_image_service_context_with_data(self): + '''Test image-service with required data''' + image_service = context.ImageServiceContext() + self.relation_ids.return_value = ['image-service:0'] + self.related_units.return_value = ['glance/0'] + self.relation_get.return_value = 'http://glancehost:9292' + self.assertEquals({'glance_api_servers': 'http://glancehost:9292'}, + image_service()) + + @patch.object(context, 'neutron_plugin_attribute') + def test_neutron_context_base_properties(self, attr): + '''Test neutron context base properties''' + neutron = context.NeutronContext() + attr.return_value = 'quantum-plugin-package' + self.assertEquals(None, neutron.plugin) + self.assertEquals(None, neutron.network_manager) + self.assertEquals(None, neutron.neutron_security_groups) + self.assertEquals('quantum-plugin-package', neutron.packages) + + @patch.object(context, 'neutron_plugin_attribute') + @patch.object(context, 'apt_install') + @patch.object(context, 'filter_installed_packages') + def test_neutron_ensure_package(self, _filter, _install, _packages): + '''Test neutron context installed required packages''' + _filter.return_value = ['quantum-plugin-package'] + _packages.return_value = [['quantum-plugin-package']] + neutron = context.NeutronContext() + neutron._ensure_packages() + _install.assert_called_with(['quantum-plugin-package'], fatal=True) + + @patch.object(context.NeutronContext, 'neutron_security_groups') + @patch.object(context, 'unit_private_ip') + @patch.object(context, 'neutron_plugin_attribute') + def test_neutron_ovs_plugin_context(self, attr, ip, sec_groups): + ip.return_value = '10.0.0.1' + sec_groups.__get__ = MagicMock(return_value=True) + attr.return_value = 'some.quantum.driver.class' + neutron = context.NeutronContext() + self.assertEquals({ + 'config': 'some.quantum.driver.class', + 'core_plugin': 'some.quantum.driver.class', + 'neutron_plugin': 'ovs', + 'neutron_security_groups': True, + 'local_ip': '10.0.0.1'}, neutron.ovs_ctxt()) + + @patch.object(context.NeutronContext, 'neutron_security_groups') + @patch.object(context, 'unit_private_ip') + @patch.object(context, 'neutron_plugin_attribute') + def test_neutron_nvp_plugin_context(self, attr, ip, sec_groups): + ip.return_value = '10.0.0.1' + sec_groups.__get__ = MagicMock(return_value=True) + attr.return_value = 'some.quantum.driver.class' + neutron = 
context.NeutronContext() + self.assertEquals({ + 'config': 'some.quantum.driver.class', + 'core_plugin': 'some.quantum.driver.class', + 'neutron_plugin': 'nvp', + 'neutron_security_groups': True, + 'local_ip': '10.0.0.1'}, neutron.nvp_ctxt()) + + @patch.object(context, 'config') + @patch.object(context.NeutronContext, 'neutron_security_groups') + @patch.object(context, 'unit_private_ip') + @patch.object(context, 'neutron_plugin_attribute') + def test_neutron_n1kv_plugin_context(self, attr, ip, sec_groups, config): + ip.return_value = '10.0.0.1' + sec_groups.__get__ = MagicMock(return_value=True) + attr.return_value = 'some.quantum.driver.class' + config.return_value = 'n1kv' + neutron = context.NeutronContext() + self.assertEquals({ + 'core_plugin': 'some.quantum.driver.class', + 'neutron_plugin': 'n1kv', + 'neutron_security_groups': True, + 'local_ip': '10.0.0.1', + 'config': 'some.quantum.driver.class', + 'vsm_ip': 'n1kv', + 'vsm_username': 'n1kv', + 'vsm_password': 'n1kv', + 'user_config_flags': {}, + 'restrict_policy_profiles': 'n1kv', + }, neutron.n1kv_ctxt()) + + @patch.object(context.NeutronContext, 'neutron_security_groups') + @patch.object(context, 'unit_private_ip') + @patch.object(context, 'neutron_plugin_attribute') + def test_neutron_calico_plugin_context(self, attr, ip, sec_groups): + ip.return_value = '10.0.0.1' + sec_groups.__get__ = MagicMock(return_value=True) + attr.return_value = 'some.quantum.driver.class' + neutron = context.NeutronContext() + self.assertEquals({ + 'config': 'some.quantum.driver.class', + 'core_plugin': 'some.quantum.driver.class', + 'neutron_plugin': 'Calico', + 'neutron_security_groups': True, + 'local_ip': '10.0.0.1'}, neutron.calico_ctxt()) + + @patch.object(context.NeutronContext, 'neutron_security_groups') + @patch.object(context, 'unit_private_ip') + @patch.object(context, 'neutron_plugin_attribute') + def test_neutron_plumgrid_plugin_context(self, attr, ip, sec_groups): + ip.return_value = '10.0.0.1' + sec_groups.__get__ = MagicMock(return_value=True) + attr.return_value = 'some.quantum.driver.class' + neutron = context.NeutronContext() + self.assertEquals({ + 'config': 'some.quantum.driver.class', + 'core_plugin': 'some.quantum.driver.class', + 'neutron_plugin': 'plumgrid', + 'neutron_security_groups': True, + 'local_ip': '10.0.0.1'}, neutron.pg_ctxt()) + + @patch.object(context.NeutronContext, 'neutron_security_groups') + @patch.object(context, 'unit_private_ip') + @patch.object(context, 'neutron_plugin_attribute') + def test_neutron_nuage_plugin_context(self, attr, ip, sec_groups): + ip.return_value = '10.0.0.1' + sec_groups.__get__ = MagicMock(return_value=True) + attr.return_value = 'some.quantum.driver.class' + neutron = context.NeutronContext() + self.assertEquals({ + 'config': 'some.quantum.driver.class', + 'core_plugin': 'some.quantum.driver.class', + 'neutron_plugin': 'vsp', + 'neutron_security_groups': True, + 'local_ip': '10.0.0.1'}, neutron.nuage_ctxt()) + + @patch.object(context.NeutronContext, 'neutron_security_groups') + @patch.object(context, 'unit_private_ip') + @patch.object(context, 'neutron_plugin_attribute') + def test_neutron_midonet_plugin_context(self, attr, ip, sec_groups): + ip.return_value = '10.0.0.1' + sec_groups.__get__ = MagicMock(return_value=True) + attr.return_value = 'some.quantum.driver.class' + neutron = context.NeutronContext() + self.assertEquals({ + 'config': 'some.quantum.driver.class', + 'core_plugin': 'some.quantum.driver.class', + 'neutron_plugin': 'midonet', + 'neutron_security_groups': True, + 
'local_ip': '10.0.0.1'}, neutron.midonet_ctxt()) + + @patch('charmhelpers.contrib.openstack.context.unit_get') + @patch.object(context.NeutronContext, 'network_manager') + def test_neutron_neutron_ctxt(self, mock_network_manager, + mock_unit_get): + vip = '88.11.22.33' + priv_addr = '10.0.0.1' + mock_unit_get.return_value = priv_addr + neutron = context.NeutronContext() + + config = {'vip': vip} + self.config.side_effect = lambda key: config[key] + mock_network_manager.__get__ = Mock(return_value='neutron') + + self.is_clustered.return_value = False + self.assertEquals( + {'network_manager': 'neutron', + 'neutron_url': 'https://%s:9696' % (priv_addr)}, + neutron.neutron_ctxt() + ) + + self.is_clustered.return_value = True + self.assertEquals( + {'network_manager': 'neutron', + 'neutron_url': 'https://%s:9696' % (vip)}, + neutron.neutron_ctxt() + ) + + @patch('charmhelpers.contrib.openstack.context.unit_get') + @patch.object(context.NeutronContext, 'network_manager') + def test_neutron_neutron_ctxt_http(self, mock_network_manager, + mock_unit_get): + vip = '88.11.22.33' + priv_addr = '10.0.0.1' + mock_unit_get.return_value = priv_addr + neutron = context.NeutronContext() + + config = {'vip': vip} + self.config.side_effect = lambda key: config[key] + self.https.return_value = False + mock_network_manager.__get__ = Mock(return_value='neutron') + + self.is_clustered.return_value = False + self.assertEquals( + {'network_manager': 'neutron', + 'neutron_url': 'http://%s:9696' % (priv_addr)}, + neutron.neutron_ctxt() + ) + + self.is_clustered.return_value = True + self.assertEquals( + {'network_manager': 'neutron', + 'neutron_url': 'http://%s:9696' % (vip)}, + neutron.neutron_ctxt() + ) + + @patch.object(context.NeutronContext, 'neutron_ctxt') + @patch.object(context.NeutronContext, 'ovs_ctxt') + @patch.object(context.NeutronContext, 'plugin') + @patch.object(context.NeutronContext, '_ensure_packages') + @patch.object(context.NeutronContext, 'network_manager') + def test_neutron_main_context_generation(self, mock_network_manager, + mock_ensure_packages, + mock_plugin, mock_ovs_ctxt, + mock_neutron_ctxt): + + mock_neutron_ctxt.return_value = {'network_manager': 'neutron', + 'neutron_url': 'https://foo:9696'} + config = {'neutron-alchemy-flags': None} + self.config.side_effect = lambda key: config[key] + neutron = context.NeutronContext() + + mock_network_manager.__get__ = Mock(return_value='flatdhcpmanager') + mock_plugin.__get__ = Mock() + + self.assertEquals({}, neutron()) + self.assertTrue(mock_network_manager.__get__.called) + self.assertFalse(mock_plugin.__get__.called) + + mock_network_manager.__get__.return_value = 'neutron' + mock_plugin.__get__ = Mock(return_value=None) + self.assertEquals({}, neutron()) + self.assertTrue(mock_plugin.__get__.called) + + mock_ovs_ctxt.return_value = {'ovs': 'ovs_context'} + mock_plugin.__get__.return_value = 'ovs' + self.assertEquals( + {'network_manager': 'neutron', + 'ovs': 'ovs_context', + 'neutron_url': 'https://foo:9696'}, + neutron() + ) + + @patch.object(context.NeutronContext, 'neutron_ctxt') + @patch.object(context.NeutronContext, 'nvp_ctxt') + @patch.object(context.NeutronContext, 'plugin') + @patch.object(context.NeutronContext, '_ensure_packages') + @patch.object(context.NeutronContext, 'network_manager') + def test_neutron_main_context_gen_nvp_and_alchemy(self, + mock_network_manager, + mock_ensure_packages, + mock_plugin, + mock_nvp_ctxt, + mock_neutron_ctxt): + + mock_neutron_ctxt.return_value = {'network_manager': 'neutron', + 'neutron_url': 
'https://foo:9696'} + config = {'neutron-alchemy-flags': 'pool_size=20'} + self.config.side_effect = lambda key: config[key] + neutron = context.NeutronContext() + + mock_network_manager.__get__ = Mock(return_value='flatdhcpmanager') + mock_plugin.__get__ = Mock() + + self.assertEquals({}, neutron()) + self.assertTrue(mock_network_manager.__get__.called) + self.assertFalse(mock_plugin.__get__.called) + + mock_network_manager.__get__.return_value = 'neutron' + mock_plugin.__get__ = Mock(return_value=None) + self.assertEquals({}, neutron()) + self.assertTrue(mock_plugin.__get__.called) + + mock_nvp_ctxt.return_value = {'nvp': 'nvp_context'} + mock_plugin.__get__.return_value = 'nvp' + self.assertEquals( + {'network_manager': 'neutron', + 'nvp': 'nvp_context', + 'neutron_alchemy_flags': {'pool_size': '20'}, + 'neutron_url': 'https://foo:9696'}, + neutron() + ) + + @patch.object(context.NeutronContext, 'neutron_ctxt') + @patch.object(context.NeutronContext, 'calico_ctxt') + @patch.object(context.NeutronContext, 'plugin') + @patch.object(context.NeutronContext, '_ensure_packages') + @patch.object(context.NeutronContext, 'network_manager') + def test_neutron_main_context_gen_calico(self, mock_network_manager, + mock_ensure_packages, + mock_plugin, mock_ovs_ctxt, + mock_neutron_ctxt): + + mock_neutron_ctxt.return_value = {'network_manager': 'neutron', + 'neutron_url': 'https://foo:9696'} + config = {'neutron-alchemy-flags': None} + self.config.side_effect = lambda key: config[key] + neutron = context.NeutronContext() + + mock_network_manager.__get__ = Mock(return_value='flatdhcpmanager') + mock_plugin.__get__ = Mock() + + self.assertEquals({}, neutron()) + self.assertTrue(mock_network_manager.__get__.called) + self.assertFalse(mock_plugin.__get__.called) + + mock_network_manager.__get__.return_value = 'neutron' + mock_plugin.__get__ = Mock(return_value=None) + self.assertEquals({}, neutron()) + self.assertTrue(mock_plugin.__get__.called) + + mock_ovs_ctxt.return_value = {'Calico': 'calico_context'} + mock_plugin.__get__.return_value = 'Calico' + self.assertEquals( + {'network_manager': 'neutron', + 'Calico': 'calico_context', + 'neutron_url': 'https://foo:9696'}, + neutron() + ) + + @patch('charmhelpers.contrib.openstack.utils.juju_log', + lambda *args, **kwargs: None) + @patch.object(context, 'config') + def test_os_configflag_context(self, config): + flags = context.OSConfigFlagContext() + + # single + config.return_value = 'deadbeef=True' + self.assertEquals({ + 'user_config_flags': { + 'deadbeef': 'True', + } + }, flags()) + + # multi + config.return_value = 'floating_ip=True,use_virtio=False,max=5' + self.assertEquals({ + 'user_config_flags': { + 'floating_ip': 'True', + 'use_virtio': 'False', + 'max': '5', + } + }, flags()) + + for empty in [None, '']: + config.return_value = empty + self.assertEquals({}, flags()) + + # multi with commas + config.return_value = 'good_flag=woot,badflag,great_flag=w00t' + self.assertEquals({ + 'user_config_flags': { + 'good_flag': 'woot,badflag', + 'great_flag': 'w00t', + } + }, flags()) + + # missing key + config.return_value = 'good_flag=woot=toow' + self.assertRaises(context.OSContextError, flags) + + # bad value + config.return_value = 'good_flag=woot==' + self.assertRaises(context.OSContextError, flags) + + @patch.object(context, 'config') + def test_os_configflag_context_custom(self, config): + flags = context.OSConfigFlagContext( + charm_flag='api-config-flags', + template_flag='api_config_flags') + + # single + config.return_value = 'deadbeef=True' + 
self.assertEquals({
+            'api_config_flags': {
+                'deadbeef': 'True',
+            }
+        }, flags())
+
+    def test_os_subordinate_config_context(self):
+        relation = FakeRelation(relation_data=SUB_CONFIG_RELATION)
+        self.relation_get.side_effect = relation.get
+        self.relation_ids.side_effect = relation.relation_ids
+        self.related_units.side_effect = relation.relation_units
+        nova_sub_ctxt = context.SubordinateConfigContext(
+            service='nova',
+            config_file='/etc/nova/nova.conf',
+            interface='nova-subordinate',
+        )
+        glance_sub_ctxt = context.SubordinateConfigContext(
+            service='glance',
+            config_file='/etc/glance/glance.conf',
+            interface='glance-subordinate',
+        )
+        cinder_sub_ctxt = context.SubordinateConfigContext(
+            service='cinder',
+            config_file='/etc/cinder/cinder.conf',
+            interface='cinder-subordinate',
+        )
+        foo_sub_ctxt = context.SubordinateConfigContext(
+            service='foo',
+            config_file='/etc/foo/foo.conf',
+            interface='foo-subordinate',
+        )
+        self.assertEquals(
+            nova_sub_ctxt(),
+            {'sections': {
+                'DEFAULT': [
+                    ['nova-key1', 'value1'],
+                    ['nova-key2', 'value2']]
+            }}
+        )
+        self.assertEquals(
+            glance_sub_ctxt(),
+            {'sections': {
+                'DEFAULT': [
+                    ['glance-key1', 'value1'],
+                    ['glance-key2', 'value2']]
+            }}
+        )
+        self.assertEquals(
+            cinder_sub_ctxt(),
+            {'sections': {
+                'cinder-1-section': [
+                    ['key1', 'value1']],
+                'cinder-2-section': [
+                    ['key2', 'value2']]
+
+            }, 'not-a-section': 1234}
+        )
+
+        # subordinate supplies nothing for the given config file
+        glance_sub_ctxt.config_file = '/etc/glance/glance-api-paste.ini'
+        self.assertEquals(glance_sub_ctxt(), {'sections': {}})
+
+        # subordinate supplies bad input
+        self.assertEquals(foo_sub_ctxt(), {'sections': {}})
+
+    def test_os_subordinate_config_context_multiple(self):
+        relation = FakeRelation(relation_data=SUB_CONFIG_RELATION2)
+        self.relation_get.side_effect = relation.get
+        self.relation_ids.side_effect = relation.relation_ids
+        self.related_units.side_effect = relation.relation_units
+        nova_sub_ctxt = context.SubordinateConfigContext(
+            service=['nova', 'nova-compute'],
+            config_file='/etc/nova/nova.conf',
+            interface=['nova-ceilometer', 'neutron-plugin'],
+        )
+        self.assertEquals(
+            nova_sub_ctxt(),
+            {'sections': {
+                'DEFAULT': [
+                    ['nova-key1', 'value1'],
+                    ['nova-key2', 'value2'],
+                    ['nova-key3', 'value3'],
+                    ['nova-key4', 'value4'],
+                    ['nova-key5', 'value5'],
+                    ['nova-key6', 'value6']]
+            }}
+        )
+
+    def test_syslog_context(self):
+        self.config.side_effect = fake_config({'use-syslog': 'foo'})
+        syslog = context.SyslogContext()
+        result = syslog()
+        expected = {
+            'use_syslog': 'foo',
+        }
+        self.assertEquals(result, expected)
+
+    def test_loglevel_context_set(self):
+        self.config.side_effect = fake_config({
+            'debug': True,
+            'verbose': True,
+        })
+        syslog = context.LogLevelContext()
+        result = syslog()
+        expected = {
+            'debug': True,
+            'verbose': True,
+        }
+        self.assertEquals(result, expected)
+
+    def test_loglevel_context_unset(self):
+        self.config.side_effect = fake_config({
+            'debug': None,
+            'verbose': None,
+        })
+        syslog = context.LogLevelContext()
+        result = syslog()
+        expected = {
+            'debug': False,
+            'verbose': False,
+        }
+        self.assertEquals(result, expected)
+
+    @patch.object(context, '_calculate_workers')
+    def test_wsgi_worker_config_context(self,
+                                        _calculate_workers):
+        self.config.return_value = 2  # worker-multiplier=2
+        _calculate_workers.return_value = 8
+        service_name = 'service-name'
+        script = '/usr/bin/script'
+        ctxt = context.WSGIWorkerConfigContext(name=service_name,
+                                               script=script)
+        expect = {
+            "service_name": service_name,
+ "user": service_name, + "group": service_name, + "script": script, + "admin_script": None, + "public_script": None, + "processes": 8, + "admin_processes": 2, + "public_processes": 6, + "threads": 1, + } + self.assertEqual(expect, ctxt()) + + @patch.object(context, '_calculate_workers') + def test_wsgi_worker_config_context_user_and_group(self, + _calculate_workers): + self.config.return_value = 1 + _calculate_workers.return_value = 1 + service_name = 'service-name' + script = '/usr/bin/script' + user = 'nova' + group = 'nobody' + ctxt = context.WSGIWorkerConfigContext(name=service_name, + user=user, + group=group, + script=script) + expect = { + "service_name": service_name, + "user": user, + "group": group, + "script": script, + "admin_script": None, + "public_script": None, + "processes": 1, + "admin_processes": 1, + "public_processes": 1, + "threads": 1, + } + self.assertEqual(expect, ctxt()) + + def test_zeromq_context_unrelated(self): + self.is_relation_made.return_value = False + self.assertEquals(context.ZeroMQContext()(), {}) + + def test_zeromq_context_related(self): + self.is_relation_made.return_value = True + self.relation_ids.return_value = ['zeromq-configuration:1'] + self.related_units.return_value = ['openstack-zeromq/0'] + self.relation_get.side_effect = ['nonce-data', 'hostname', 'redis'] + self.assertEquals(context.ZeroMQContext()(), + {'zmq_host': 'hostname', + 'zmq_nonce': 'nonce-data', + 'zmq_redis_address': 'redis'}) + + def test_notificationdriver_context_nomsg(self): + relations = { + 'zeromq-configuration': False, + 'amqp': False, + } + rels = fake_is_relation_made(relations=relations) + self.is_relation_made.side_effect = rels.rel_made + self.assertEquals(context.NotificationDriverContext()(), + {'notifications': 'False'}) + + def test_notificationdriver_context_zmq_nometer(self): + relations = { + 'zeromq-configuration': True, + 'amqp': False, + } + rels = fake_is_relation_made(relations=relations) + self.is_relation_made.side_effect = rels.rel_made + self.assertEquals(context.NotificationDriverContext()(), + {'notifications': 'False'}) + + def test_notificationdriver_context_zmq_meter(self): + relations = { + 'zeromq-configuration': True, + 'amqp': False, + } + rels = fake_is_relation_made(relations=relations) + self.is_relation_made.side_effect = rels.rel_made + self.assertEquals(context.NotificationDriverContext()(), + {'notifications': 'False'}) + + def test_notificationdriver_context_amq(self): + relations = { + 'zeromq-configuration': False, + 'amqp': True, + } + rels = fake_is_relation_made(relations=relations) + self.is_relation_made.side_effect = rels.rel_made + self.assertEquals(context.NotificationDriverContext()(), + {'notifications': 'True'}) + + @patch.object(context, 'psutil') + def test_num_cpus_xenial(self, _psutil): + _psutil.cpu_count.return_value = 4 + self.assertTrue(context._num_cpus(), 4) + + @patch.object(context, 'psutil') + def test_num_cpus_trusty(self, _psutil): + _psutil.NUM_CPUS = 4 + self.assertTrue(context._num_cpus(), 4) + + @patch.object(context, '_num_cpus') + def test_calculate_workers_float(self, _num_cpus): + self.config.side_effect = fake_config({ + 'worker-multiplier': 0.3 + }) + _num_cpus.return_value = 4 + self.assertTrue(context._calculate_workers(), 4) + + @patch.object(context, '_num_cpus') + def test_calculate_workers_not_quite_0(self, _num_cpus): + # Make sure that the multiplier evaluating to somewhere between + # 0 and 1 in the floating point range still has at least one + # worker. 
+ self.config.side_effect = fake_config({ + 'worker-multiplier': 0.001 + }) + _num_cpus.return_value = 100 + self.assertTrue(context._calculate_workers(), 1) + + @patch.object(context, 'psutil') + def test_calculate_workers_0(self, _psutil): + self.config.side_effect = fake_config({ + 'worker-multiplier': 0 + }) + _psutil.cpu_count.return_value = 2 + self.assertTrue(context._calculate_workers(), 0) + + @patch.object(context, '_num_cpus') + def test_calculate_workers_noconfig(self, _num_cpus): + self.config.return_value = None + _num_cpus.return_value = 1 + self.assertTrue(context._calculate_workers(), 2) + + @patch.object(context, '_num_cpus') + def test_calculate_workers_noconfig_container(self, _num_cpus): + self.config.return_value = None + self.is_container.return_value = True + _num_cpus.return_value = 1 + self.assertTrue(context._calculate_workers(), 2) + + @patch.object(context, '_num_cpus') + def test_calculate_workers_noconfig_lotsa_cpus_container(self, + _num_cpus): + self.config.return_value = None + self.is_container.return_value = True + _num_cpus.return_value = 32 + self.assertTrue(context._calculate_workers(), 4) + + @patch.object(context, '_num_cpus') + def test_calculate_workers_noconfig_lotsa_cpus_not_container(self, + _num_cpus): + self.config.return_value = None + _num_cpus.return_value = 32 + self.assertTrue(context._calculate_workers(), 64) + + @patch.object(context, '_calculate_workers', return_value=256) + def test_worker_context(self, calculate_workers): + self.assertEqual(context.WorkerConfigContext()(), + {'workers': 256}) + + def test_apache_get_addresses_no_network_config(self): + self.config.side_effect = fake_config({ + 'os-internal-network': None, + 'os-admin-network': None, + 'os-public-network': None + }) + self.resolve_address.return_value = '10.5.1.50' + self.unit_get.return_value = '10.5.1.50' + + apache = context.ApacheSSLContext() + apache.external_ports = '8776' + + addresses = apache.get_network_addresses() + expected = [('10.5.1.50', '10.5.1.50')] + + self.assertEqual(addresses, expected) + + self.get_address_in_network.assert_not_called() + self.resolve_address.assert_has_calls([ + call(context.INTERNAL), + call(context.ADMIN), + call(context.PUBLIC) + ]) + + def test_apache_get_addresses_with_network_config(self): + self.config.side_effect = fake_config({ + 'os-internal-network': '10.5.1.0/24', + 'os-admin-network': '10.5.2.0/24', + 'os-public-network': '10.5.3.0/24', + }) + _base_addresses = ['10.5.1.100', + '10.5.2.100', + '10.5.3.100'] + self.get_address_in_network.side_effect = _base_addresses + self.resolve_address.side_effect = _base_addresses + self.unit_get.return_value = '10.5.1.50' + + apache = context.ApacheSSLContext() + + addresses = apache.get_network_addresses() + expected = [('10.5.1.100', '10.5.1.100'), + ('10.5.2.100', '10.5.2.100'), + ('10.5.3.100', '10.5.3.100')] + self.assertEqual(addresses, expected) + + calls = [call('10.5.1.0/24', '10.5.1.50'), + call('10.5.2.0/24', '10.5.1.50'), + call('10.5.3.0/24', '10.5.1.50')] + self.get_address_in_network.assert_has_calls(calls) + self.resolve_address.assert_has_calls([ + call(context.INTERNAL), + call(context.ADMIN), + call(context.PUBLIC) + ]) + + def test_apache_get_addresses_network_spaces(self): + self.config.side_effect = fake_config({ + 'os-internal-network': None, + 'os-admin-network': None, + 'os-public-network': None + }) + self.network_get_primary_address.side_effect = None + self.network_get_primary_address.return_value = '10.5.2.50' + self.resolve_address.return_value = 
'10.5.2.100' + self.unit_get.return_value = '10.5.1.50' + + apache = context.ApacheSSLContext() + apache.external_ports = '8776' + + addresses = apache.get_network_addresses() + expected = [('10.5.2.50', '10.5.2.100')] + + self.assertEqual(addresses, expected) + + self.get_address_in_network.assert_not_called() + self.resolve_address.assert_has_calls([ + call(context.INTERNAL), + call(context.ADMIN), + call(context.PUBLIC) + ]) + + def test_config_flag_parsing_simple(self): + # Standard key=value checks... + flags = context.config_flags_parser('key1=value1, key2=value2') + self.assertEqual(flags, {'key1': 'value1', 'key2': 'value2'}) + + # Check for multiple values to a single key + flags = context.config_flags_parser('key1=value1, ' + 'key2=value2,value3,value4') + self.assertEqual(flags, {'key1': 'value1', + 'key2': 'value2,value3,value4'}) + + # Check for yaml formatted key value pairings for more complex + # assignment options. + flags = context.config_flags_parser('key1: subkey1=value1,' + 'subkey2=value2') + self.assertEqual(flags, {'key1': 'subkey1=value1,subkey2=value2'}) + + # Check for good measure the ldap formats + test_string = ('user_tree_dn: ou=ABC General,' + 'ou=User Accounts,dc=example,dc=com') + flags = context.config_flags_parser(test_string) + self.assertEqual(flags, {'user_tree_dn': ('ou=ABC General,' + 'ou=User Accounts,' + 'dc=example,dc=com')}) + + def _fake_get_hwaddr(self, arg): + return MACHINE_MACS[arg] + + def _fake_get_ipv4(self, arg, fatal=False): + return MACHINE_NICS[arg] + + @patch('charmhelpers.contrib.openstack.context.config') + def test_no_ext_port(self, mock_config): + self.config.side_effect = config = fake_config({}) + mock_config.side_effect = config + self.assertEquals(context.ExternalPortContext()(), {}) + + @patch('charmhelpers.contrib.openstack.context.list_nics') + @patch('charmhelpers.contrib.openstack.context.config') + def test_ext_port_eth(self, mock_config, mock_list_nics): + config = fake_config({'ext-port': 'eth1010'}) + self.config.side_effect = config + mock_config.side_effect = config + mock_list_nics.return_value = ['eth1010'] + self.assertEquals(context.ExternalPortContext()(), + {'ext_port': 'eth1010'}) + + @patch('charmhelpers.contrib.openstack.context.list_nics') + @patch('charmhelpers.contrib.openstack.context.config') + def test_ext_port_eth_non_existent(self, mock_config, mock_list_nics): + config = fake_config({'ext-port': 'eth1010'}) + self.config.side_effect = config + mock_config.side_effect = config + mock_list_nics.return_value = [] + self.assertEquals(context.ExternalPortContext()(), {}) + + @patch('charmhelpers.contrib.openstack.context.is_phy_iface', + lambda arg: True) + @patch('charmhelpers.contrib.openstack.context.get_nic_hwaddr') + @patch('charmhelpers.contrib.openstack.context.list_nics') + @patch('charmhelpers.contrib.openstack.context.get_ipv6_addr') + @patch('charmhelpers.contrib.openstack.context.get_ipv4_addr') + @patch('charmhelpers.contrib.openstack.context.config') + def test_ext_port_mac(self, mock_config, mock_get_ipv4_addr, + mock_get_ipv6_addr, mock_list_nics, + mock_get_nic_hwaddr): + config_macs = ABSENT_MACS + " " + MACHINE_MACS['eth2'] + config = fake_config({'ext-port': config_macs}) + self.config.side_effect = config + mock_config.side_effect = config + + mock_get_ipv4_addr.side_effect = self._fake_get_ipv4 + mock_get_ipv6_addr.return_value = [] + mock_list_nics.return_value = MACHINE_MACS.keys() + mock_get_nic_hwaddr.side_effect = self._fake_get_hwaddr + + 
self.assertEquals(context.ExternalPortContext()(), + {'ext_port': 'eth2'}) + + config = fake_config({'ext-port': ABSENT_MACS}) + self.config.side_effect = config + mock_config.side_effect = config + + self.assertEquals(context.ExternalPortContext()(), {}) + + @patch('charmhelpers.contrib.openstack.context.is_phy_iface', + lambda arg: True) + @patch('charmhelpers.contrib.openstack.context.get_nic_hwaddr') + @patch('charmhelpers.contrib.openstack.context.list_nics') + @patch('charmhelpers.contrib.openstack.context.get_ipv6_addr') + @patch('charmhelpers.contrib.openstack.context.get_ipv4_addr') + @patch('charmhelpers.contrib.openstack.context.config') + def test_ext_port_mac_one_used_nic(self, mock_config, + mock_get_ipv4_addr, + mock_get_ipv6_addr, mock_list_nics, + mock_get_nic_hwaddr): + + self.relation_ids.return_value = ['neutron-plugin-api:1'] + self.related_units.return_value = ['neutron-api/0'] + self.relation_get.return_value = {'network-device-mtu': 1234, + 'l2-population': 'False'} + config_macs = "%s %s" % (MACHINE_MACS['eth1'], + MACHINE_MACS['eth2']) + + mock_get_ipv4_addr.side_effect = self._fake_get_ipv4 + mock_get_ipv6_addr.return_value = [] + mock_list_nics.return_value = MACHINE_MACS.keys() + mock_get_nic_hwaddr.side_effect = self._fake_get_hwaddr + + config = fake_config({'ext-port': config_macs}) + self.config.side_effect = config + mock_config.side_effect = config + self.assertEquals(context.ExternalPortContext()(), + {'ext_port': 'eth2', 'ext_port_mtu': 1234}) + + @patch('charmhelpers.contrib.openstack.context.NeutronPortContext.' + 'resolve_ports') + def test_data_port_eth(self, mock_resolve): + self.config.side_effect = fake_config({'data-port': + 'phybr1:eth1010 ' + 'phybr1:eth1011'}) + mock_resolve.side_effect = lambda ports: ['eth1010'] + self.assertEquals(context.DataPortContext()(), + {'eth1010': 'phybr1'}) + + @patch.object(context, 'get_nic_hwaddr') + @patch.object(context.NeutronPortContext, 'resolve_ports') + def test_data_port_mac(self, mock_resolve, mock_get_nic_hwaddr): + extant_mac = 'cb:23:ae:72:f2:33' + non_extant_mac = 'fa:16:3e:12:97:8e' + self.config.side_effect = fake_config({'data-port': + 'phybr1:%s phybr1:%s' % + (non_extant_mac, extant_mac)}) + + def fake_resolve(ports): + resolved = [] + for port in ports: + if port == extant_mac: + resolved.append('eth1010') + + return resolved + + mock_get_nic_hwaddr.side_effect = lambda nic: extant_mac + mock_resolve.side_effect = fake_resolve + + self.assertEquals(context.DataPortContext()(), + {'eth1010': 'phybr1'}) + + @patch.object(context.NeutronAPIContext, '__call__', lambda *args: + {'network_device_mtu': 5000}) + @patch.object(context, 'get_nic_hwaddr', lambda inst, port: port) + @patch.object(context.NeutronPortContext, 'resolve_ports', + lambda inst, ports: ports) + def test_phy_nic_mtu_context(self): + self.config.side_effect = fake_config({'data-port': + 'phybr1:eth0'}) + ctxt = context.PhyNICMTUContext()() + self.assertEqual(ctxt, {'devs': 'eth0', 'mtu': 5000}) + + @patch.object(context.glob, 'glob') + @patch.object(context.NeutronAPIContext, '__call__', lambda *args: + {'network_device_mtu': 5000}) + @patch.object(context, 'get_nic_hwaddr', lambda inst, port: port) + @patch.object(context.NeutronPortContext, 'resolve_ports', + lambda inst, ports: ports) + def test_phy_nic_mtu_context_vlan(self, mock_glob): + self.config.side_effect = fake_config({'data-port': + 'phybr1:eth0.100'}) + mock_glob.return_value = ['/sys/class/net/eth0.100/lower_eth0'] + ctxt = context.PhyNICMTUContext()() + 
self.assertEqual(ctxt, {'devs': 'eth0\\neth0.100', 'mtu': 5000})
+
+    @patch.object(context.glob, 'glob')
+    @patch.object(context.NeutronAPIContext, '__call__', lambda *args:
+                  {'network_device_mtu': 5000})
+    @patch.object(context, 'get_nic_hwaddr', lambda inst, port: port)
+    @patch.object(context.NeutronPortContext, 'resolve_ports',
+                  lambda inst, ports: ports)
+    def test_phy_nic_mtu_context_vlan_w_duplicate_raw(self, mock_glob):
+        self.config.side_effect = fake_config({'data-port':
+                                               'phybr1:eth0.100 '
+                                               'phybr1:eth0.200'})
+
+        def fake_glob(wcard):
+            if 'eth0.100' in wcard:
+                return ['/sys/class/net/eth0.100/lower_eth0']
+            elif 'eth0.200' in wcard:
+                return ['/sys/class/net/eth0.200/lower_eth0']
+
+            raise Exception("Unexpected key '%s'" % (wcard))
+
+        mock_glob.side_effect = fake_glob
+        ctxt = context.PhyNICMTUContext()()
+        self.assertEqual(ctxt, {'devs': 'eth0\\neth0.100\\neth0.200',
+                                'mtu': 5000})
+
+    def test_neutronapicontext_defaults(self):
+        self.relation_ids.return_value = []
+        expected_keys = [
+            'l2_population', 'enable_dvr', 'enable_l3ha',
+            'overlay_network_type', 'network_device_mtu',
+            'enable_qos', 'enable_nsg_logging', 'global_physnet_mtu',
+            'physical_network_mtus'
+        ]
+        api_ctxt = context.NeutronAPIContext()()
+        for key in expected_keys:
+            self.assertTrue(key in api_ctxt)
+        self.assertEquals(api_ctxt['polling_interval'], 2)
+        self.assertEquals(api_ctxt['rpc_response_timeout'], 60)
+        self.assertEquals(api_ctxt['report_interval'], 30)
+        self.assertEquals(api_ctxt['enable_nsg_logging'], False)
+        self.assertEquals(api_ctxt['global_physnet_mtu'], 1500)
+        self.assertIsNone(api_ctxt['physical_network_mtus'])
+
+    def setup_neutron_api_context_relation(self, cfg):
+        self.relation_ids.return_value = ['neutron-plugin-api:1']
+        self.related_units.return_value = ['neutron-api/0']
+        # The l2-population key is used by the context as a way of checking if
+        # the api service on the other end is sending data in a recent format.
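+        # The context presumably coerces these relation strings to native
+        # types on the way through (e.g. something like
+        # bool_from_string('True') -> True), which is what the
+        # *_string_converted test below relies on.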
+ self.relation_get.return_value = cfg + + def test_neutronapicontext_extension_drivers_qos_on(self): + self.setup_neutron_api_context_relation({ + 'enable-qos': 'True', + 'l2-population': 'True'}) + api_ctxt = context.NeutronAPIContext()() + self.assertTrue(api_ctxt['enable_qos']) + self.assertEquals(api_ctxt['extension_drivers'], 'qos') + + def test_neutronapicontext_extension_drivers_qos_off(self): + self.setup_neutron_api_context_relation({ + 'enable-qos': 'False', + 'l2-population': 'True'}) + api_ctxt = context.NeutronAPIContext()() + self.assertFalse(api_ctxt['enable_qos']) + self.assertEquals(api_ctxt['extension_drivers'], '') + + def test_neutronapicontext_extension_drivers_qos_absent(self): + self.setup_neutron_api_context_relation({ + 'l2-population': 'True'}) + api_ctxt = context.NeutronAPIContext()() + self.assertFalse(api_ctxt['enable_qos']) + self.assertEquals(api_ctxt['extension_drivers'], '') + + def test_neutronapicontext_extension_drivers_log_off(self): + self.setup_neutron_api_context_relation({ + 'enable-nsg-logging': 'False', + 'l2-population': 'True'}) + api_ctxt = context.NeutronAPIContext()() + self.assertEquals(api_ctxt['extension_drivers'], '') + + def test_neutronapicontext_extension_drivers_log_on(self): + self.setup_neutron_api_context_relation({ + 'enable-nsg-logging': 'True', + 'l2-population': 'True'}) + api_ctxt = context.NeutronAPIContext()() + self.assertEquals(api_ctxt['extension_drivers'], 'log') + + def test_neutronapicontext_extension_drivers_log_qos_on(self): + self.setup_neutron_api_context_relation({ + 'enable-qos': 'True', + 'enable-nsg-logging': 'True', + 'l2-population': 'True'}) + api_ctxt = context.NeutronAPIContext()() + self.assertEquals(api_ctxt['extension_drivers'], 'qos,log') + + def test_neutronapicontext_firewall_group_logging_on(self): + self.setup_neutron_api_context_relation({ + 'enable-nfg-logging': 'True', + 'l2-population': 'True' + }) + api_ctxt = context.NeutronAPIContext()() + self.assertEquals(api_ctxt['enable_nfg_logging'], True) + + def test_neutronapicontext_firewall_group_logging_off(self): + self.setup_neutron_api_context_relation({ + 'enable-nfg-logging': 'False', + 'l2-population': 'True' + }) + api_ctxt = context.NeutronAPIContext()() + self.assertEquals(api_ctxt['enable_nfg_logging'], False) + + def test_neutronapicontext_port_forwarding_on(self): + self.setup_neutron_api_context_relation({ + 'enable-port-forwarding': 'True', + 'l2-population': 'True' + }) + api_ctxt = context.NeutronAPIContext()() + self.assertEquals(api_ctxt['enable_port_forwarding'], True) + + def test_neutronapicontext_port_forwarding_off(self): + self.setup_neutron_api_context_relation({ + 'enable-port-forwarding': 'False', + 'l2-population': 'True' + }) + api_ctxt = context.NeutronAPIContext()() + self.assertEquals(api_ctxt['enable_port_forwarding'], False) + + def test_neutronapicontext_string_converted(self): + self.setup_neutron_api_context_relation({ + 'l2-population': 'True'}) + api_ctxt = context.NeutronAPIContext()() + self.assertEquals(api_ctxt['l2_population'], True) + + def test_neutronapicontext_none(self): + self.relation_ids.return_value = ['neutron-plugin-api:1'] + self.related_units.return_value = ['neutron-api/0'] + self.relation_get.return_value = {'l2-population': 'True'} + api_ctxt = context.NeutronAPIContext()() + self.assertEquals(api_ctxt['network_device_mtu'], None) + + def test_network_service_ctxt_no_units(self): + self.relation_ids.return_value = [] + self.relation_ids.return_value = ['foo'] + 
self.related_units.return_value = [] + self.assertEquals(context.NetworkServiceContext()(), {}) + + @patch.object(context.OSContextGenerator, 'context_complete') + def test_network_service_ctxt_no_data(self, mock_context_complete): + rel = FakeRelation(QUANTUM_NETWORK_SERVICE_RELATION) + self.relation_ids.side_effect = rel.relation_ids + self.related_units.side_effect = rel.relation_units + relation = FakeRelation(relation_data=QUANTUM_NETWORK_SERVICE_RELATION) + self.relation_get.side_effect = relation.get + mock_context_complete.return_value = False + self.assertEquals(context.NetworkServiceContext()(), {}) + + def test_network_service_ctxt_data(self): + data_result = { + 'keystone_host': '10.5.0.1', + 'service_port': '5000', + 'auth_port': '20000', + 'service_tenant': 'tenant', + 'service_username': 'username', + 'service_password': 'password', + 'quantum_host': '10.5.0.2', + 'quantum_port': '9696', + 'quantum_url': 'http://10.5.0.2:9696/v2', + 'region': 'aregion', + 'service_protocol': 'http', + 'auth_protocol': 'http', + 'api_version': '2.0', + } + rel = FakeRelation(QUANTUM_NETWORK_SERVICE_RELATION) + self.relation_ids.side_effect = rel.relation_ids + self.related_units.side_effect = rel.relation_units + relation = FakeRelation(relation_data=QUANTUM_NETWORK_SERVICE_RELATION) + self.relation_get.side_effect = relation.get + self.assertEquals(context.NetworkServiceContext()(), data_result) + + def test_network_service_ctxt_data_api_version(self): + data_result = { + 'keystone_host': '10.5.0.1', + 'service_port': '5000', + 'auth_port': '20000', + 'service_tenant': 'tenant', + 'service_username': 'username', + 'service_password': 'password', + 'quantum_host': '10.5.0.2', + 'quantum_port': '9696', + 'quantum_url': 'http://10.5.0.2:9696/v2', + 'region': 'aregion', + 'service_protocol': 'http', + 'auth_protocol': 'http', + 'api_version': '3', + } + rel = FakeRelation(QUANTUM_NETWORK_SERVICE_RELATION_VERSIONED) + self.relation_ids.side_effect = rel.relation_ids + self.related_units.side_effect = rel.relation_units + relation = FakeRelation( + relation_data=QUANTUM_NETWORK_SERVICE_RELATION_VERSIONED) + self.relation_get.side_effect = relation.get + self.assertEquals(context.NetworkServiceContext()(), data_result) + + def test_internal_endpoint_context(self): + config = {'use-internal-endpoints': False} + self.config.side_effect = fake_config(config) + ctxt = context.InternalEndpointContext() + self.assertFalse(ctxt()['use_internal_endpoints']) + config = {'use-internal-endpoints': True} + self.config.side_effect = fake_config(config) + self.assertTrue(ctxt()['use_internal_endpoints']) + + @patch.object(context, 'os_release') + def test_volume_api_context(self, mock_os_release): + mock_os_release.return_value = 'ocata' + config = {'use-internal-endpoints': False} + self.config.side_effect = fake_config(config) + ctxt = context.VolumeAPIContext('cinder-common') + c = ctxt() + self.assertEqual(c['volume_api_version'], '2') + self.assertEqual(c['volume_catalog_info'], + 'volumev2:cinderv2:publicURL') + + mock_os_release.return_value = 'pike' + config['use-internal-endpoints'] = True + self.config.side_effect = fake_config(config) + ctxt = context.VolumeAPIContext('cinder-common') + c = ctxt() + self.assertEqual(c['volume_api_version'], '3') + self.assertEqual(c['volume_catalog_info'], + 'volumev3:cinderv3:internalURL') + + def test_volume_api_context_no_pkg(self): + self.assertRaises(ValueError, context.VolumeAPIContext, "") + self.assertRaises(ValueError, context.VolumeAPIContext, None) + + def 
test_apparmor_context_call_not_valid(self):
+        ''' Tests for the apparmor context'''
+        mock_aa_object = context.AppArmorContext()
+        # Test with invalid config
+        self.config.return_value = 'NOTVALID'
+        self.assertEquals(mock_aa_object.__call__(), None)
+
+    def test_apparmor_context_call_complain(self):
+        ''' Tests for the apparmor context'''
+        mock_aa_object = context.AppArmorContext()
+        # Test complain mode
+        self.config.return_value = 'complain'
+        self.assertEquals(mock_aa_object.__call__(),
+                          {'aa_profile_mode': 'complain',
+                           'ubuntu_release': '16.04'})
+
+    def test_apparmor_context_call_enforce(self):
+        ''' Tests for the apparmor context'''
+        mock_aa_object = context.AppArmorContext()
+        # Test enforce mode
+        self.config.return_value = 'enforce'
+        self.assertEquals(mock_aa_object.__call__(),
+                          {'aa_profile_mode': 'enforce',
+                           'ubuntu_release': '16.04'})
+
+    def test_apparmor_context_call_disable(self):
+        ''' Tests for the apparmor context'''
+        mock_aa_object = context.AppArmorContext()
+        # Test disable mode
+        self.config.return_value = 'disable'
+        self.assertEquals(mock_aa_object.__call__(),
+                          {'aa_profile_mode': 'disable',
+                           'ubuntu_release': '16.04'})
+
+    def test_apparmor_setup_complain(self):
+        ''' Tests for the apparmor setup'''
+        AA = context.AppArmorContext(profile_name='fake-aa-profile')
+        AA.install_aa_utils = MagicMock()
+        AA.manually_disable_aa_profile = MagicMock()
+        # Test complain mode
+        self.config.return_value = 'complain'
+        AA.setup_aa_profile()
+        AA.install_aa_utils.assert_called_with()
+        self.check_call.assert_called_with(['aa-complain', 'fake-aa-profile'])
+        self.assertFalse(AA.manually_disable_aa_profile.called)
+
+    def test_apparmor_setup_enforce(self):
+        ''' Tests for the apparmor setup'''
+        AA = context.AppArmorContext(profile_name='fake-aa-profile')
+        AA.install_aa_utils = MagicMock()
+        AA.manually_disable_aa_profile = MagicMock()
+        # Test enforce mode
+        self.config.return_value = 'enforce'
+        AA.setup_aa_profile()
+        self.check_call.assert_called_with(['aa-enforce', 'fake-aa-profile'])
+        self.assertFalse(AA.manually_disable_aa_profile.called)
+
+    def test_apparmor_setup_disable(self):
+        ''' Tests for the apparmor setup'''
+        AA = context.AppArmorContext(profile_name='fake-aa-profile')
+        AA.install_aa_utils = MagicMock()
+        AA.manually_disable_aa_profile = MagicMock()
+        # Test disable mode
+        self.config.return_value = 'disable'
+        AA.setup_aa_profile()
+        self.check_call.assert_called_with(['aa-disable', 'fake-aa-profile'])
+        self.assertFalse(AA.manually_disable_aa_profile.called)
+        # Test failed to disable
+        from subprocess import CalledProcessError
+        self.check_call.side_effect = CalledProcessError(0, 0, 0)
+        AA.setup_aa_profile()
+        self.check_call.assert_called_with(['aa-disable', 'fake-aa-profile'])
+        AA.manually_disable_aa_profile.assert_called_with()
+
+    @patch.object(context, 'enable_memcache')
+    @patch.object(context, 'is_ipv6_disabled')
+    def test_memcache_context_ipv6(self, _is_ipv6_disabled, _enable_memcache):
+        self.lsb_release.return_value = {'DISTRIB_CODENAME': 'xenial'}
+        _enable_memcache.return_value = True
+        _is_ipv6_disabled.return_value = False
+        config = {
+            'openstack-origin': 'distro',
+        }
+        self.config.side_effect = fake_config(config)
+        ctxt = context.MemcacheContext()
+        self.assertTrue(ctxt()['use_memcache'])
+        expect = {
+            'memcache_port': '11211',
+            'memcache_server': '::1',
+            'memcache_server_formatted': '[::1]',
+            'memcache_url': 'inet6:[::1]:11211',
+            'use_memcache': True}
+        self.assertEqual(ctxt(), expect)
+        self.lsb_release.return_value 
= {'DISTRIB_CODENAME': 'trusty'} + expect['memcache_server'] = 'ip6-localhost' + ctxt = context.MemcacheContext() + self.assertEqual(ctxt(), expect) + + @patch.object(context, 'enable_memcache') + @patch.object(context, 'is_ipv6_disabled') + def test_memcache_context_ipv4(self, _is_ipv6_disabled, _enable_memcache): + self.lsb_release.return_value = {'DISTRIB_CODENAME': 'xenial'} + _enable_memcache.return_value = True + _is_ipv6_disabled.return_value = True + config = { + 'openstack-origin': 'distro', + } + self.config.side_effect = fake_config(config) + ctxt = context.MemcacheContext() + self.assertTrue(ctxt()['use_memcache']) + expect = { + 'memcache_port': '11211', + 'memcache_server': '127.0.0.1', + 'memcache_server_formatted': '127.0.0.1', + 'memcache_url': '127.0.0.1:11211', + 'use_memcache': True} + self.assertEqual(ctxt(), expect) + self.lsb_release.return_value = {'DISTRIB_CODENAME': 'trusty'} + expect['memcache_server'] = 'localhost' + ctxt = context.MemcacheContext() + self.assertEqual(ctxt(), expect) + + @patch.object(context, 'enable_memcache') + def test_memcache_off_context(self, _enable_memcache): + _enable_memcache.return_value = False + config = {'openstack-origin': 'distro'} + self.config.side_effect = fake_config(config) + ctxt = context.MemcacheContext() + self.assertFalse(ctxt()['use_memcache']) + self.assertEqual(ctxt(), {'use_memcache': False}) + + @patch('charmhelpers.contrib.openstack.context.mkdir') + def test_ensure_dir_ctx(self, mkdir): + dirname = '/etc/keystone/policy.d' + owner = 'someuser' + group = 'somegroup' + perms = 0o555 + force = False + ctxt = context.EnsureDirContext(dirname, owner=owner, + group=group, perms=perms, + force=force) + ctxt() + mkdir.assert_called_with(dirname, owner=owner, group=group, + perms=perms, force=force) + + @patch.object(context, 'os_release') + def test_VersionsContext(self, os_release): + self.lsb_release.return_value = {'DISTRIB_CODENAME': 'xenial'} + os_release.return_value = 'essex' + self.assertEqual( + context.VersionsContext()(), + { + 'openstack_release': 'essex', + 'operating_system_release': 'xenial'}) + os_release.assert_called_once_with('python-keystone') + self.lsb_release.assert_called_once_with() + + def test_logrotate_context_unset(self): + logrotate = context.LogrotateContext(location='nova', + interval='weekly', + count=4) + ctxt = logrotate() + expected_ctxt = { + 'logrotate_logs_location': 'nova', + 'logrotate_interval': 'weekly', + 'logrotate_count': 'rotate 4', + } + self.assertEquals(ctxt, expected_ctxt) + + @patch.object(context, 'os_release') + def test_vendordata_static(self, os_release): + _vdata = '{"good": "json"}' + os_release.return_value = 'rocky' + self.config.side_effect = [_vdata, None] + ctxt = context.NovaVendorMetadataContext('nova-common')() + + self.assertTrue(ctxt['vendor_data']) + self.assertEqual('StaticJSON', ctxt['vendordata_providers']) + self.assertNotIn('vendor_data_url', ctxt) + + @patch.object(context, 'os_release') + def test_vendordata_dynamic(self, os_release): + _vdata_url = 'http://example.org/vdata' + os_release.return_value = 'rocky' + + self.config.side_effect = [None, _vdata_url] + ctxt = context.NovaVendorMetadataContext('nova-common')() + + self.assertEqual(_vdata_url, ctxt['vendor_data_url']) + self.assertEqual('DynamicJSON', ctxt['vendordata_providers']) + self.assertFalse(ctxt['vendor_data']) + + @patch.object(context, 'os_release') + def test_vendordata_static_and_dynamic(self, os_release): + os_release.return_value = 'rocky' + _vdata = '{"good": "json"}' + 
_vdata_url = 'http://example.org/vdata' + + self.config.side_effect = [_vdata, _vdata_url] + ctxt = context.NovaVendorMetadataContext('nova-common')() + + self.assertTrue(ctxt['vendor_data']) + self.assertEqual(_vdata_url, ctxt['vendor_data_url']) + self.assertEqual('StaticJSON,DynamicJSON', + ctxt['vendordata_providers']) + + @patch.object(context, 'log') + @patch.object(context, 'os_release') + def test_vendordata_static_invalid_and_dynamic(self, os_release, log): + os_release.return_value = 'rocky' + _vdata = '{bad: json}' + _vdata_url = 'http://example.org/vdata' + + self.config.side_effect = [_vdata, _vdata_url] + ctxt = context.NovaVendorMetadataContext('nova-common')() + + self.assertFalse(ctxt['vendor_data']) + self.assertEqual(_vdata_url, ctxt['vendor_data_url']) + self.assertEqual('DynamicJSON', ctxt['vendordata_providers']) + self.assertTrue(log.called) + + @patch('charmhelpers.contrib.openstack.context.log') + @patch.object(context, 'os_release') + def test_vendordata_static_and_dynamic_mitaka(self, os_release, log): + os_release.return_value = 'mitaka' + _vdata = '{"good": "json"}' + _vdata_url = 'http://example.org/vdata' + + self.config.side_effect = [_vdata, _vdata_url] + ctxt = context.NovaVendorMetadataContext('nova-common')() + + self.assertTrue(log.called) + self.assertTrue(ctxt['vendor_data']) + self.assertNotIn('vendor_data_url', ctxt) + self.assertNotIn('vendordata_providers', ctxt) + + @patch.object(context, 'log') + def test_vendordata_json_valid(self, log): + _vdata = '{"good": "json"}' + self.config.side_effect = [_vdata] + + ctxt = context.NovaVendorMetadataJSONContext('nova-common')() + + self.assertEqual({'vendor_data_json': _vdata}, ctxt) + self.assertFalse(log.called) + + @patch.object(context, 'log') + def test_vendordata_json_invalid(self, log): + _vdata = '{bad: json}' + self.config.side_effect = [_vdata] + + ctxt = context.NovaVendorMetadataJSONContext('nova-common')() + + self.assertEqual({'vendor_data_json': '{}'}, ctxt) + self.assertTrue(log.called) + + @patch.object(context, 'log') + def test_vendordata_json_empty(self, log): + self.config.side_effect = [None] + + ctxt = context.NovaVendorMetadataJSONContext('nova-common')() + + self.assertEqual({'vendor_data_json': '{}'}, ctxt) + self.assertFalse(log.called) + + @patch.object(context, 'socket') + def test_host_info_context(self, _socket): + _socket.getaddrinfo.return_value = [(None, None, None, 'myhost.mydomain', None)] + _socket.gethostname.return_value = 'myhost' + ctxt = context.HostInfoContext()() + self.assertEqual({ + 'host_fqdn': 'myhost.mydomain', + 'host': 'myhost', + 'use_fqdn_hint': False}, + ctxt) + ctxt = context.HostInfoContext(use_fqdn_hint_cb=lambda: True)() + self.assertEqual({ + 'host_fqdn': 'myhost.mydomain', + 'host': 'myhost', + 'use_fqdn_hint': True}, + ctxt) + # if getaddrinfo is unable to find the canonical name we should return + # the shortname to match the behaviour of the original implementation. 
+ _socket.getaddrinfo.return_value = [(None, None, None, 'localhost', None)] + ctxt = context.HostInfoContext()() + self.assertEqual({ + 'host_fqdn': 'myhost', + 'host': 'myhost', + 'use_fqdn_hint': False}, + ctxt) + if six.PY2: + _socket.error = Exception + _socket.getaddrinfo.side_effect = Exception + else: + _socket.getaddrinfo.side_effect = OSError + _socket.gethostname.return_value = 'myhost' + ctxt = context.HostInfoContext()() + self.assertEqual({ + 'host_fqdn': 'myhost', + 'host': 'myhost', + 'use_fqdn_hint': False}, + ctxt) + + @patch.object(context, "DHCPAgentContext") + def test_validate_ovs_use_veth(self, _context): + # No existing dhcp_agent.ini and no config + _context.get_existing_ovs_use_veth.return_value = None + _context.parse_ovs_use_veth.return_value = None + self.assertEqual((None, None), context.validate_ovs_use_veth()) + + # No existing dhcp_agent.ini and config set + _context.get_existing_ovs_use_veth.return_value = None + _context.parse_ovs_use_veth.return_value = True + self.assertEqual((None, None), context.validate_ovs_use_veth()) + + # Existing dhcp_agent.ini and no config + _context.get_existing_ovs_use_veth.return_value = True + _context.parse_ovs_use_veth.return_value = None + self.assertEqual((None, None), context.validate_ovs_use_veth()) + + # Check for agreement with existing dhcp_agent.ini + _context.get_existing_ovs_use_veth.return_value = False + _context.parse_ovs_use_veth.return_value = False + self.assertEqual((None, None), context.validate_ovs_use_veth()) + + # Check for disagreement with existing dhcp_agent.ini + _context.get_existing_ovs_use_veth.return_value = True + _context.parse_ovs_use_veth.return_value = False + self.assertEqual( + ("blocked", + "Mismatched existing and configured ovs-use-veth. See log."), + context.validate_ovs_use_veth()) + + def test_dhcp_agent_context(self): + # Defaults + _config = { + "debug": False, + "dns-servers": None, + "enable-isolated-metadata": None, + "enable-metadata-network": None, + "instance-mtu": None, + "ovs-use-veth": None} + _expect = { + "debug": False, + "dns_servers": None, + "enable_isolated_metadata": None, + "enable_metadata_network": None, + "instance_mtu": None, + "ovs_use_veth": False} + self.config.side_effect = fake_config(_config) + _get_ovs_use_veth = MagicMock() + _get_ovs_use_veth.return_value = False + ctx_object = context.DHCPAgentContext() + ctx_object.get_ovs_use_veth = _get_ovs_use_veth + ctxt = ctx_object() + self.assertEqual(_expect, ctxt) + + # Non-defaults + _dns = "10.5.0.2" + _mtu = 8950 + _config = { + "debug": True, + "dns-servers": _dns, + "enable-isolated-metadata": True, + "enable-metadata-network": True, + "instance-mtu": _mtu, + "ovs-use-veth": True} + _expect = { + "debug": True, + "dns_servers": _dns, + "enable_isolated_metadata": True, + "enable_metadata_network": True, + "instance_mtu": _mtu, + "ovs_use_veth": True} + self.config.side_effect = fake_config(_config) + _get_ovs_use_veth.return_value = True + ctxt = ctx_object() + self.assertEqual(_expect, ctxt) + + def test_dhcp_agent_context_no_dns_domain(self): + _config = {"dns-servers": '8.8.8.8'} + self.config.side_effect = fake_config(_config) + self.relation_ids.return_value = ['rid1'] + self.related_units.return_value = ['nova-compute/0'] + self.relation_get.return_value = 'nova' + self.assertEqual( + context.DHCPAgentContext()(), + {'instance_mtu': None, + 'dns_servers': '8.8.8.8', + 'ovs_use_veth': False, + "enable_isolated_metadata": None, + "enable_metadata_network": None, + "debug": None} + ) + + def 
test_dhcp_agent_context_dnsmasq_flags(self): + _config = {'dnsmasq-flags': 'dhcp-userclass=set:ipxe,iPXE,' + 'dhcp-match=set:ipxe,175,' + 'server=1.2.3.4'} + self.config.side_effect = fake_config(_config) + self.assertEqual( + context.DHCPAgentContext()(), + { + 'dnsmasq_flags': collections.OrderedDict( + [('dhcp-userclass', 'set:ipxe,iPXE'), + ('dhcp-match', 'set:ipxe,175'), + ('server', '1.2.3.4')]), + 'instance_mtu': None, + 'dns_servers': None, + 'ovs_use_veth': False, + "enable_isolated_metadata": None, + "enable_metadata_network": None, + "debug": None, + } + ) + + def test_get_ovs_use_veth(self): + _get_existing_ovs_use_veth = MagicMock() + _parse_ovs_use_veth = MagicMock() + ctx_object = context.DHCPAgentContext() + ctx_object.get_existing_ovs_use_veth = _get_existing_ovs_use_veth + ctx_object.parse_ovs_use_veth = _parse_ovs_use_veth + + # Default + _get_existing_ovs_use_veth.return_value = None + _parse_ovs_use_veth.return_value = None + self.assertEqual(False, ctx_object.get_ovs_use_veth()) + + # Existing dhcp_agent.ini and no config + _get_existing_ovs_use_veth.return_value = True + _parse_ovs_use_veth.return_value = None + self.assertEqual(True, ctx_object.get_ovs_use_veth()) + + # No existing dhcp_agent.ini and config set + _get_existing_ovs_use_veth.return_value = None + _parse_ovs_use_veth.return_value = False + self.assertEqual(False, ctx_object.get_ovs_use_veth()) + + # Both set matching + _get_existing_ovs_use_veth.return_value = True + _parse_ovs_use_veth.return_value = True + self.assertEqual(True, ctx_object.get_ovs_use_veth()) + + # Both set mismatch: existing overrides + _get_existing_ovs_use_veth.return_value = False + _parse_ovs_use_veth.return_value = True + self.assertEqual(False, ctx_object.get_ovs_use_veth()) + + # Both set mismatch: existing overrides + _get_existing_ovs_use_veth.return_value = True + _parse_ovs_use_veth.return_value = False + self.assertEqual(True, ctx_object.get_ovs_use_veth()) + + @patch.object(context, 'config_ini') + @patch.object(context.os.path, 'isfile') + def test_get_existing_ovs_use_veth(self, _is_file, _config_ini): + _config = {"ovs-use-veth": None} + self.config.side_effect = fake_config(_config) + + ctx_object = context.DHCPAgentContext() + + # Default + _is_file.return_value = False + self.assertEqual(None, ctx_object.get_existing_ovs_use_veth()) + + # Existing + _is_file.return_value = True + _config_ini.return_value = {"DEFAULT": {"ovs_use_veth": True}} + self.assertEqual(True, ctx_object.get_existing_ovs_use_veth()) + + # Existing config_ini returns string + _is_file.return_value = True + _config_ini.return_value = {"DEFAULT": {"ovs_use_veth": "False"}} + self.assertEqual(False, ctx_object.get_existing_ovs_use_veth()) + + @patch.object(context, 'bool_from_string') + def test_parse_ovs_use_veth(self, _bool_from_string): + _config = {"ovs-use-veth": None} + self.config.side_effect = fake_config(_config) + + ctx_object = context.DHCPAgentContext() + + # Unset + self.assertEqual(None, ctx_object.parse_ovs_use_veth()) + _bool_from_string.assert_not_called() + + # Consider empty string unset + _config = {"ovs-use-veth": ""} + self.config.side_effect = fake_config(_config) + self.assertEqual(None, ctx_object.parse_ovs_use_veth()) + _bool_from_string.assert_not_called() + + # Lower true + _bool_from_string.return_value = True + _config = {"ovs-use-veth": "true"} + self.config.side_effect = fake_config(_config) + self.assertEqual(True, ctx_object.parse_ovs_use_veth()) + _bool_from_string.assert_called_with("true") + + # Lower false + 
_bool_from_string.return_value = False + _bool_from_string.reset_mock() + _config = {"ovs-use-veth": "false"} + self.config.side_effect = fake_config(_config) + self.assertEqual(False, ctx_object.parse_ovs_use_veth()) + _bool_from_string.assert_called_with("false") + + # Upper True + _bool_from_string.return_value = True + _bool_from_string.reset_mock() + _config = {"ovs-use-veth": "True"} + self.config.side_effect = fake_config(_config) + self.assertEqual(True, ctx_object.parse_ovs_use_veth()) + _bool_from_string.assert_called_with("True") + + # Invalid + _bool_from_string.reset_mock() + _config = {"ovs-use-veth": "Invalid"} + self.config.side_effect = fake_config(_config) + _bool_from_string.side_effect = ValueError + with self.assertRaises(ValueError): + ctx_object.parse_ovs_use_veth() + _bool_from_string.assert_called_with("Invalid") + + +class MockPCIDevice(object): + """Simple wrapper to mock pci.PCINetDevice class""" + def __init__(self, address): + self.pci_address = address + + +TEST_CPULIST_1 = "0-3" +TEST_CPULIST_2 = "0-7,16-23" +TEST_CPULIST_3 = "0,4,8,12,16,20,24" +DPDK_DATA_PORTS = ( + "br-phynet3:fe:16:41:df:23:fe " + "br-phynet1:fe:16:41:df:23:fd " + "br-phynet2:fe:f2:d0:45:dc:66" +) +BOND_MAPPINGS = ( + "bond0:fe:16:41:df:23:fe " + "bond0:fe:16:41:df:23:fd " + "bond1:fe:f2:d0:45:dc:66" +) +PCI_DEVICE_MAP = { + 'fe:16:41:df:23:fd': MockPCIDevice('0000:00:1c.0'), + 'fe:16:41:df:23:fe': MockPCIDevice('0000:00:1d.0'), +} + + +class TestDPDKUtils(tests.utils.BaseTestCase): + + def test_resolve_pci_from_mapping_config(self): + # FIXME: need to mock out the unit key value store + self.patch_object(context, 'config') + self.config.side_effect = lambda x: { + 'data-port': DPDK_DATA_PORTS, + 'dpdk-bond-mappings': BOND_MAPPINGS, + }.get(x) + _pci_devices = Mock() + _pci_devices.get_device_from_mac.side_effect = PCI_DEVICE_MAP.get + self.patch_object(context, 'pci') + self.pci.PCINetDevices.return_value = _pci_devices + self.assertDictEqual( + context.resolve_pci_from_mapping_config('data-port'), + { + '0000:00:1c.0': context.EntityMac( + 'br-phynet1', 'fe:16:41:df:23:fd'), + '0000:00:1d.0': context.EntityMac( + 'br-phynet3', 'fe:16:41:df:23:fe'), + }) + self.config.assert_called_once_with('data-port') + self.config.reset_mock() + self.assertDictEqual( + context.resolve_pci_from_mapping_config('dpdk-bond-mappings'), + { + '0000:00:1c.0': context.EntityMac( + 'bond0', 'fe:16:41:df:23:fd'), + '0000:00:1d.0': context.EntityMac( + 'bond0', 'fe:16:41:df:23:fe'), + }) + self.config.assert_called_once_with('dpdk-bond-mappings') + + +DPDK_PATCH = [ + 'resolve_pci_from_mapping_config', + 'glob', +] + +NUMA_CORES_SINGLE = { + '0': [0, 1, 2, 3] +} + +NUMA_CORES_MULTI = { + '0': [0, 1, 2, 3], + '1': [4, 5, 6, 7] +} + + +class TestOVSDPDKDeviceContext(tests.utils.BaseTestCase): + + def setUp(self): + super(TestOVSDPDKDeviceContext, self).setUp() + self.patch_object(context, 'config') + self.config.side_effect = lambda x: { + 'enable-dpdk': True, + }.get(x) + self.target = context.OVSDPDKDeviceContext() + + def patch_target(self, attr, return_value=None): + mocked = mock.patch.object(self.target, attr) + self._patches[attr] = mocked + started = mocked.start() + started.return_value = return_value + self._patches_start[attr] = started + setattr(self, attr, started) + + def test__parse_cpu_list(self): + self.assertEqual(self.target._parse_cpu_list(TEST_CPULIST_1), + [0, 1, 2, 3]) + self.assertEqual(self.target._parse_cpu_list(TEST_CPULIST_2), + [0, 1, 2, 3, 4, 5, 6, 7, + 16, 17, 18, 19, 20, 21, 22, 23]) +
self.assertEqual(self.target._parse_cpu_list(TEST_CPULIST_3), + [0, 4, 8, 12, 16, 20, 24]) + + def test__numa_node_cores(self): + self.patch_target('_parse_cpu_list') + self._parse_cpu_list.return_value = [0, 1, 2, 3] + self.patch_object(context, 'glob') + self.glob.glob.return_value = [ + '/sys/devices/system/node/node0' + ] + with patch_open() as (_, mock_file): + mock_file.read.return_value = TEST_CPULIST_1 + self.target._numa_node_cores() + self.assertEqual(self.target._numa_node_cores(), + {'0': [0, 1, 2, 3]}) + self.glob.glob.assert_called_with('/sys/devices/system/node/node*') + self._parse_cpu_list.assert_called_with(TEST_CPULIST_1) + + def test_device_whitelist(self): + """Test device whitelist generation""" + self.patch_object( + context, 'resolve_pci_from_mapping_config', + return_value=collections.OrderedDict( + sorted({ + '0000:00:1c.0': 'br-data', + '0000:00:1d.0': 'br-data', + }.items()))) + self.assertEqual(self.target.device_whitelist(), + '-w 0000:00:1c.0 -w 0000:00:1d.0') + self.resolve_pci_from_mapping_config.assert_has_calls([ + call('data-port'), + call('dpdk-bond-mappings'), + ]) + + def test_socket_memory(self): + """Test socket memory configuration""" + self.patch_object(context, 'glob') + self.patch_object(context, 'config') + self.config.side_effect = lambda x: { + 'dpdk-socket-memory': 1024, + }.get(x) + self.glob.glob.return_value = ['a'] + self.assertEqual(self.target.socket_memory(), + '1024') + + self.glob.glob.return_value = ['a', 'b'] + self.assertEqual(self.target.socket_memory(), + '1024,1024') + + self.config.side_effect = lambda x: { + 'dpdk-socket-memory': 2048, + }.get(x) + self.assertEqual(self.target.socket_memory(), + '2048,2048') + + def test_cpu_mask(self): + """Test generation of hex CPU masks""" + self.patch_target('_numa_node_cores') + self._numa_node_cores.return_value = NUMA_CORES_SINGLE + self.config.side_effect = lambda x: { + 'dpdk-socket-cores': 1, + }.get(x) + self.assertEqual(self.target.cpu_mask(), '0x01') + + self._numa_node_cores.return_value = NUMA_CORES_MULTI + self.assertEqual(self.target.cpu_mask(), '0x11') + + self.config.side_effect = lambda x: { + 'dpdk-socket-cores': 2, + }.get(x) + self.assertEqual(self.target.cpu_mask(), '0x33') + + def test_context_no_devices(self): + """Ensure that DPDK is disabled when no devices are detected""" + self.patch_object(context, 'resolve_pci_from_mapping_config') + self.resolve_pci_from_mapping_config.return_value = {} + self.assertEqual(self.target(), {}) + self.resolve_pci_from_mapping_config.assert_has_calls([ + call('data-port'), + call('dpdk-bond-mappings'), + ]) + + def test_context_devices(self): + """Ensure DPDK is enabled when devices are detected""" + self.patch_target('_numa_node_cores') + self.patch_target('devices') + self.devices.return_value = collections.OrderedDict(sorted({ + '0000:00:1c.0': 'br-data', + '0000:00:1d.0': 'br-data', + }.items())) + self._numa_node_cores.return_value = NUMA_CORES_SINGLE + self.patch_object(context, 'glob') + self.glob.glob.return_value = ['a'] + self.config.side_effect = lambda x: { + 'dpdk-socket-cores': 1, + 'dpdk-socket-memory': 1024, + 'enable-dpdk': True, + }.get(x) + self.assertEqual(self.target(), { + 'cpu_mask': '0x01', + 'device_whitelist': '-w 0000:00:1c.0 -w 0000:00:1d.0', + 'dpdk_enabled': True, + 'socket_memory': '1024' + }) + + +class TestDPDKDeviceContext(tests.utils.BaseTestCase): + + _dpdk_bridges = { + '0000:00:1c.0': 'br-data', + '0000:00:1d.0': 'br-physnet1', + } + _dpdk_bonds = { + '0000:00:1c.1': 'dpdk-bond0', +
'0000:00:1d.1': 'dpdk-bond0', + } + + def setUp(self): + super(TestDPDKDeviceContext, self).setUp() + self.target = context.DPDKDeviceContext() + self.patch_object(context, 'resolve_pci_from_mapping_config') + self.resolve_pci_from_mapping_config.side_effect = [ + self._dpdk_bridges, + self._dpdk_bonds, + ] + + def test_context(self): + self.patch_object(context, 'config') + self.config.side_effect = lambda x: { + 'dpdk-driver': 'uio_pci_generic', + }.get(x) + devices = copy.deepcopy(self._dpdk_bridges) + devices.update(self._dpdk_bonds) + self.assertEqual(self.target(), { + 'devices': devices, + 'driver': 'uio_pci_generic' + }) + self.config.assert_called_with('dpdk-driver') + + def test_context_none_driver(self): + self.patch_object(context, 'config') + self.config.return_value = None + self.assertEqual(self.target(), {}) + self.config.assert_called_with('dpdk-driver') + + +class TestBridgePortInterfaceMap(tests.utils.BaseTestCase): + + def test__init__(self): + self.maxDiff = None + self.patch_object(context, 'config') + # system with three interfaces (eth0, eth1 and eth2) where + # eth0 and eth1 are part of linux bond bond0. + # Bridge mapping br-ex:eth2, br-provider1:bond0 + self.config.side_effect = lambda x: { + 'data-port': ( + 'br-ex:eth2 ' + 'br-provider1:00:00:5e:00:00:41 ' + 'br-provider1:00:00:5e:00:00:40'), + 'dpdk-bond-mappings': '', + }.get(x) + self.patch_object(context, 'resolve_pci_from_mapping_config') + self.resolve_pci_from_mapping_config.side_effect = [ + { + '0000:00:1c.0': context.EntityMac( + 'br-ex', '00:00:5e:00:00:42'), + }, + {}, + ] + self.patch_object(context, 'list_nics') + self.list_nics.return_value = ['bond0', 'eth0', 'eth1', 'eth2'] + self.patch_object(context, 'is_phy_iface') + self.is_phy_iface.side_effect = lambda x: True if not x.startswith( + 'bond') else False + self.patch_object(context, 'get_bond_master') + self.get_bond_master.side_effect = lambda x: 'bond0' if x in ( + 'eth0', 'eth1') else None + self.patch_object(context, 'get_nic_hwaddr') + self.get_nic_hwaddr.side_effect = lambda x: { + 'bond0': '00:00:5e:00:00:24', + 'eth0': '00:00:5e:00:00:40', + 'eth1': '00:00:5e:00:00:41', + 'eth2': '00:00:5e:00:00:42', + }.get(x) + bpi = context.BridgePortInterfaceMap() + self.maxDiff = None + expect = { + 'br-provider1': { + 'bond0': { + 'bond0': { + 'type': 'system', + }, + }, + }, + 'br-ex': { + 'eth2': { + 'eth2': { + 'type': 'system', + }, + }, + }, + } + self.assertDictEqual(bpi._map, expect) + # do it again but this time use the linux bond name instead of mac + # addresses. + self.config.side_effect = lambda x: { + 'data-port': ( + 'br-ex:eth2 ' + 'br-provider1:bond0'), + 'dpdk-bond-mappings': '', + }.get(x) + bpi = context.BridgePortInterfaceMap() + self.assertDictEqual(bpi._map, expect) + # and if a user asks for a purely virtual interface let's not stop them + expect = { + 'br-provider1': { + 'bond0.1234': { + 'bond0.1234': { + 'type': 'system', + }, + }, + }, + 'br-ex': { + 'eth2': { + 'eth2': { + 'type': 'system', + }, + }, + }, + } + self.config.side_effect = lambda x: { + 'data-port': ( + 'br-ex:eth2 ' + 'br-provider1:bond0.1234'), + 'dpdk-bond-mappings': '', + }.get(x) + bpi = context.BridgePortInterfaceMap() + self.assertDictEqual(bpi._map, expect) + # system with three interfaces (eth0, eth1 and eth2) where we should + # enable DPDK and create OVS bond of eth0 and eth1.
+ # Bridge mapping br-ex:eth2 br-provider1:dpdk-bond0 + self.config.side_effect = lambda x: { + 'enable-dpdk': True, + 'data-port': ( + 'br-ex:00:00:5e:00:00:42 ' + 'br-provider1:dpdk-bond0'), + 'dpdk-bond-mappings': ( + 'dpdk-bond0:00:00:5e:00:00:40 ' + 'dpdk-bond0:00:00:5e:00:00:41'), + }.get(x) + self.resolve_pci_from_mapping_config.side_effect = [ + { + '0000:00:1c.0': context.EntityMac( + 'br-ex', '00:00:5e:00:00:42'), + }, + { + '0000:00:1d.0': context.EntityMac( + 'dpdk-bond0', '00:00:5e:00:00:40'), + '0000:00:1e.0': context.EntityMac( + 'dpdk-bond0', '00:00:5e:00:00:41'), + }, + ] + # once devices are bound to DPDK they disappear from the system list + # of interfaces + self.list_nics.return_value = [] + bpi = context.BridgePortInterfaceMap(global_mtu=1500) + self.assertDictEqual(bpi._map, { + 'br-provider1': { + 'dpdk-bond0': { + 'dpdk-600a59e': { + 'pci-address': '0000:00:1d.0', + 'type': 'dpdk', + 'mtu-request': '1500', + }, + 'dpdk-5fc1d91': { + 'pci-address': '0000:00:1e.0', + 'type': 'dpdk', + 'mtu-request': '1500', + }, + }, + }, + 'br-ex': { + 'dpdk-6204d33': { + 'dpdk-6204d33': { + 'pci-address': '0000:00:1c.0', + 'type': 'dpdk', + 'mtu-request': '1500', + }, + }, + }, + }) + + def test_add_interface(self): + self.patch_object(context, 'config') + self.config.return_value = '' + ctx = context.BridgePortInterfaceMap() + ctx.add_interface("br1", "bond1", "port1", ctx.interface_type.dpdk, + "00:00:00:00:00:01", 1500) + ctx.add_interface("br1", "bond1", "port2", ctx.interface_type.dpdk, + "00:00:00:00:00:02", 1500) + ctx.add_interface("br1", "bond2", "port3", ctx.interface_type.dpdk, + "00:00:00:00:00:03", 1500) + ctx.add_interface("br1", "bond2", "port4", ctx.interface_type.dpdk, + "00:00:00:00:00:04", 1500) + + expected = ( + 'br1', { + 'bond1': { + 'port1': { + 'type': 'dpdk', + 'pci-address': '00:00:00:00:00:01', + 'mtu-request': '1500', + }, + 'port2': { + 'type': 'dpdk', + 'pci-address': '00:00:00:00:00:02', + 'mtu-request': '1500', + }, + }, + 'bond2': { + 'port3': { + 'type': 'dpdk', + 'pci-address': '00:00:00:00:00:03', + 'mtu-request': '1500', + }, + 'port4': { + 'type': 'dpdk', + 'pci-address': '00:00:00:00:00:04', + 'mtu-request': '1500', + }, + }, + }, + ) + for br, bonds in ctx.items(): + self.maxDiff = None + self.assertEqual(br, expected[0]) + self.assertDictEqual(bonds, expected[1]) + + +class TestBondConfig(tests.utils.BaseTestCase): + + def test_get_bond_config(self): + self.patch_object(context, 'config') + self.config.side_effect = lambda x: { + 'dpdk-bond-config': ':active-backup bond1:balance-slb:off', + }.get(x) + bonds_config = context.BondConfig() + + self.assertEqual(bonds_config.get_bond_config('bond0'), + {'mode': 'active-backup', + 'lacp': 'active', + 'lacp-time': 'fast' + }) + self.assertEqual(bonds_config.get_bond_config('bond1'), + {'mode': 'balance-slb', + 'lacp': 'off', + 'lacp-time': 'fast' + }) + + +class TestSRIOVContext(tests.utils.BaseTestCase): + + class ObjectView(object): + + def __init__(self, _dict): + self.__dict__ = _dict + + def test___init__(self): + self.patch_object(context.pci, 'PCINetDevices') + pci_devices = self.ObjectView({ + 'pci_devices': [ + self.ObjectView({ + 'sriov': True, + 'interface_name': 'eth0', + 'sriov_totalvfs': 16, + }), + self.ObjectView({ + 'sriov': True, + 'interface_name': 'eth1', + 'sriov_totalvfs': 32, + }), + self.ObjectView({ + 'sriov': False, + 'interface_name': 'eth2', + }), + ] + }) + self.PCINetDevices.return_value = pci_devices + self.patch_object(context, 'config') + # auto sets up numvfs = 
totalvfs + self.config.return_value = { + 'sriov-numvfs': 'auto', + } + self.assertDictEqual(context.SRIOVContext()(), { + 'eth0': 16, + 'eth1': 32, + }) + # when sriov-device-mappings is used only listed devices are set up + self.config.return_value = { + 'sriov-numvfs': 'auto', + 'sriov-device-mappings': 'physnet1:eth0', + } + self.assertDictEqual(context.SRIOVContext()(), { + 'eth0': 16, + }) + self.config.return_value = { + 'sriov-numvfs': 'eth0:8', + 'sriov-device-mappings': 'physnet1:eth0', + } + self.assertDictEqual(context.SRIOVContext()(), { + 'eth0': 8, + }) + self.config.return_value = { + 'sriov-numvfs': 'eth1:8', + } + self.assertDictEqual(context.SRIOVContext()(), { + 'eth1': 8, + }) + # setting a numvfs value higher than a nic supports will revert to + # the nics max value + self.config.return_value = { + 'sriov-numvfs': 'eth1:64', + } + self.assertDictEqual(context.SRIOVContext()(), { + 'eth1': 32, + }) + # devices listed in sriov-numvfs have precedence over + # sriov-device-mappings and the limiter still works when both are used + self.config.return_value = { + 'sriov-numvfs': 'eth1:64', + 'sriov-device-mappings': 'physnet:eth0', + } + self.assertDictEqual(context.SRIOVContext()(), { + 'eth1': 32, + }) + # alternate config keys have effect + self.config.return_value = { + 'my-own-sriov-numvfs': 'auto', + 'my-own-sriov-device-mappings': 'physnet1:eth0', + } + self.assertDictEqual( + context.SRIOVContext( + numvfs_key='my-own-sriov-numvfs', + device_mappings_key='my-own-sriov-device-mappings')(), + { + 'eth0': 16, + }) + # blanket configuration works and respects limits + self.config.return_value = { + 'sriov-numvfs': '24', + } + self.assertDictEqual(context.SRIOVContext()(), { + 'eth0': 16, + 'eth1': 24, + }) + + def test___call__(self): + self.patch_object(context.pci, 'PCINetDevices') + pci_devices = self.ObjectView({'pci_devices': []}) + self.PCINetDevices.return_value = pci_devices + self.patch_object(context, 'config') + self.config.return_value = {'sriov-numvfs': 'auto'} + ctxt_obj = context.SRIOVContext() + ctxt_obj._map = {} + self.assertDictEqual(ctxt_obj(), {}) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/openstack/test_os_templating.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/openstack/test_os_templating.py new file mode 100644 index 0000000000000000000000000000000000000000..b30dc31f69e12d8687974d8af6346fb745a04e9d --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/openstack/test_os_templating.py @@ -0,0 +1,374 @@ + +import os +import unittest + +from mock import patch, call, MagicMock + +import charmhelpers.contrib.openstack.templating as templating + +from jinja2.exceptions import TemplateNotFound + +import six +if not six.PY3: + builtin_open = '__builtin__.open' +else: + builtin_open = 'builtins.open' + + +class FakeContextGenerator(object): + interfaces = None + + def set(self, interfaces, context): + self.interfaces = interfaces + self.context = context + + def __call__(self): + return self.context + + +class FakeLoader(object): + def set(self, template): + self.template = template + + def get(self, name): + return self.template + + +class MockFSLoader(object): + def __init__(self, dirs): + self.searchpath = [dirs] + + +class MockChoiceLoader(object): + def __init__(self, loaders): + self.loaders = loaders + + +def MockTemplate(): + templ = MagicMock() + templ.render = MagicMock() + return templ + + +class 
TemplatingTests(unittest.TestCase): + def setUp(self): + path = os.path.dirname(__file__) + self.loader = FakeLoader() + self.context = FakeContextGenerator() + + self.addCleanup(patch.object(templating, 'apt_install').start().stop()) + self.addCleanup(patch.object(templating, 'log').start().stop()) + + templating.FileSystemLoader = MockFSLoader + templating.ChoiceLoader = MockChoiceLoader + templating.Environment = MagicMock + + self.renderer = templating.OSConfigRenderer(templates_dir=path, + openstack_release='folsom') + + @patch.object(templating, 'apt_install') + def test_initializing_a_render_ensures_jinja2_present(self, apt): + '''Creating a new renderer object installs jinja2 if needed''' + # temp. undo the patching from setUp + templating.FileSystemLoader = None + templating.ChoiceLoader = None + templating.Environment = None + templating.OSConfigRenderer(templates_dir='/tmp', + openstack_release='foo') + templating.FileSystemLoader = MockFSLoader + templating.ChoiceLoader = MockChoiceLoader + templating.Environment = MagicMock + if six.PY2: + apt.assert_called_with('python-jinja2') + else: + apt.assert_called_with('python3-jinja2') + + def test_create_renderer_invalid_templates_dir(self): + '''Ensure OSConfigRenderer checks templates_dir''' + self.assertRaises(templating.OSConfigException, + templating.OSConfigRenderer, + templates_dir='/tmp/foooo0', + openstack_release='grizzly') + + def test_render_unregistered_config(self): + '''Ensure cannot render an unregistered config file''' + self.assertRaises(templating.OSConfigException, + self.renderer.render, + config_file='/tmp/foo') + + def test_write_unregistered_config(self): + '''Ensure cannot write an unregistered config file''' + self.assertRaises(templating.OSConfigException, + self.renderer.write, + config_file='/tmp/foo') + + def test_render_complete_context(self): + '''It renders a template when provided a complete context''' + self.loader.set('{{ foo }}') + self.context.set(interfaces=['fooservice'], context={'foo': 'bar'}) + self.renderer.register('/tmp/foo', [self.context]) + with patch.object(self.renderer, '_get_template') as _get_t: + fake_tmpl = MockTemplate() + _get_t.return_value = fake_tmpl + self.renderer.render('/tmp/foo') + fake_tmpl.render.assert_called_with(self.context()) + self.assertIn('fooservice', self.renderer.complete_contexts()) + + def test_render_incomplete_context_with_template(self): + '''It renders a template when provided an incomplete context''' + self.context.set(interfaces=['fooservice'], context={}) + self.renderer.register('/tmp/foo', [self.context]) + with patch.object(self.renderer, '_get_template') as _get_t: + fake_tmpl = MockTemplate() + _get_t.return_value = fake_tmpl + self.renderer.render('/tmp/foo') + fake_tmpl.render.assert_called_with({}) + self.assertNotIn('fooservice', self.renderer.complete_contexts()) + + def test_render_template_registered_but_not_found(self): + '''It raises TemplateNotFound when a registered template is missing''' + path = os.path.dirname(__file__) + renderer = templating.OSConfigRenderer(templates_dir=path, + openstack_release='folsom') + e = TemplateNotFound('') + renderer._get_template = MagicMock() + renderer._get_template.side_effect = e + renderer.register('/etc/nova/nova.conf', contexts=[]) + self.assertRaises( + TemplateNotFound, renderer.render, '/etc/nova/nova.conf') + + def test_render_template_by_basename_first(self): + '''It loads a template by basename of config file first''' + path = os.path.dirname(__file__) + renderer =
templating.OSConfigRenderer(templates_dir=path, + openstack_release='folsom') + renderer._get_template = MagicMock() + renderer.register('/etc/nova/nova.conf', contexts=[]) + renderer.render('/etc/nova/nova.conf') + self.assertEquals(1, len(renderer._get_template.call_args_list)) + self.assertEquals( + [call('nova.conf')], renderer._get_template.call_args_list) + + def test_render_template_by_munged_full_path_last(self): + '''It loads a template by full path of config file second''' + path = os.path.dirname(__file__) + renderer = templating.OSConfigRenderer(templates_dir=path, + openstack_release='folsom') + tmp = MagicMock() + tmp.render = MagicMock() + e = TemplateNotFound('') + renderer._get_template = MagicMock() + renderer._get_template.side_effect = [e, tmp] + renderer.register('/etc/nova/nova.conf', contexts=[]) + renderer.render('/etc/nova/nova.conf') + self.assertEquals(2, len(renderer._get_template.call_args_list)) + self.assertEquals( + [call('nova.conf'), call('etc_nova_nova.conf')], + renderer._get_template.call_args_list) + + def test_render_template_by_basename(self): + '''It renders template if it finds it by config file basename''' + + @patch(builtin_open) + @patch.object(templating, 'get_loader') + def test_write_out_config(self, loader, _open): + '''It writes a templated config when provided a complete context''' + self.context.set(interfaces=['fooservice'], context={'foo': 'bar'}) + self.renderer.register('/tmp/foo', [self.context]) + with patch.object(self.renderer, '_get_template') as _get_t: + fake_tmpl = MockTemplate() + _get_t.return_value = fake_tmpl + self.renderer.write('/tmp/foo') + _open.assert_called_with('/tmp/foo', 'wb') + + def test_write_all(self): + '''It writes out all configuration files at once''' + self.context.set(interfaces=['fooservice'], context={'foo': 'bar'}) + self.renderer.register('/tmp/foo', [self.context]) + self.renderer.register('/tmp/bar', [self.context]) + ex_calls = [ + call('/tmp/bar'), + call('/tmp/foo'), + ] + with patch.object(self.renderer, 'write') as _write: + self.renderer.write_all() + self.assertEquals(sorted(ex_calls), sorted(_write.call_args_list)) + + @patch.object(templating, 'get_loader') + def test_reset_template_loader_for_new_os_release(self, loader): + self.loader.set('') + self.context.set(interfaces=['fooservice'], context={}) + loader.return_value = MockFSLoader('/tmp/foo') + self.renderer.register('/tmp/foo', [self.context]) + self.renderer.render('/tmp/foo') + loader.assert_called_with(os.path.dirname(__file__), 'folsom') + self.renderer.set_release(openstack_release='grizzly') + self.renderer.render('/tmp/foo') + loader.assert_called_with(os.path.dirname(__file__), 'grizzly') + + @patch.object(templating, 'get_loader') + def test_incomplete_context_not_reported_complete(self, loader): + '''It does not recognize an incomplete context as a complete context''' + self.context.set(interfaces=['fooservice'], context={}) + self.renderer.register('/tmp/foo', [self.context]) + self.assertNotIn('fooservice', self.renderer.complete_contexts()) + + @patch.object(templating, 'get_loader') + def test_complete_context_reported_complete(self, loader): + '''It recognizes a complete context as a complete context''' + self.context.set(interfaces=['fooservice'], context={'foo': 'bar'}) + self.renderer.register('/tmp/foo', [self.context]) + self.assertIn('fooservice', self.renderer.complete_contexts()) + + @patch('os.path.isdir') + def test_get_loader_no_templates_dir(self, isdir): + '''Ensure getting loader fails with no
template dir''' + isdir.return_value = False + self.assertRaises(templating.OSConfigException, + templating.get_loader, + templates_dir='/tmp/foo', os_release='foo') + + @patch('os.path.isdir') + def test_get_loader_all_search_paths(self, isdir): + '''Ensure the loader reverse-searches all release template dirs''' + isdir.return_value = True + choice_loader = templating.get_loader('/tmp/foo', + os_release='icehouse') + dirs = [l.searchpath for l in choice_loader.loaders] + + common_tmplts = os.path.join(os.path.dirname(templating.__file__), + 'templates') + expected = [['/tmp/foo/icehouse'], + ['/tmp/foo/havana'], + ['/tmp/foo/grizzly'], + ['/tmp/foo/folsom'], + ['/tmp/foo/essex'], + ['/tmp/foo/diablo'], + ['/tmp/foo'], + [common_tmplts]] + self.assertEquals(dirs, expected) + + @patch('os.path.isdir') + def test_get_loader_some_search_paths(self, isdir): + '''Ensure the loader reverse-searches some release template dirs''' + isdir.return_value = True + choice_loader = templating.get_loader('/tmp/foo', os_release='grizzly') + dirs = [l.searchpath for l in choice_loader.loaders] + + common_tmplts = os.path.join(os.path.dirname(templating.__file__), + 'templates') + + expected = [['/tmp/foo/grizzly'], + ['/tmp/foo/folsom'], + ['/tmp/foo/essex'], + ['/tmp/foo/diablo'], + ['/tmp/foo'], + [common_tmplts]] + self.assertEquals(dirs, expected) + + def test_register_template_with_list_of_contexts(self): + '''Ensure registering a template with a list of context generators''' + def _c1(): + pass + + def _c2(): + pass + tmpl = templating.OSConfigTemplate(config_file='/tmp/foo', + contexts=[_c1, _c2]) + self.assertEquals(tmpl.contexts, [_c1, _c2]) + + def test_register_template_with_single_context(self): + '''Ensure registering a template with a single non-list context''' + def _c1(): + pass + tmpl = templating.OSConfigTemplate(config_file='/tmp/foo', + contexts=_c1) + self.assertEquals(tmpl.contexts, [_c1]) + + +class TemplatingStringTests(unittest.TestCase): + def setUp(self): + path = os.path.dirname(__file__) + self.loader = FakeLoader() + self.context = FakeContextGenerator() + + self.addCleanup(patch.object(templating, + 'apt_install').start().stop()) + self.addCleanup(patch.object(templating, 'log').start().stop()) + + templating.FileSystemLoader = MockFSLoader + templating.ChoiceLoader = MockChoiceLoader + + self.config_file = '/etc/confd/extensible.d/drop-in.conf' + self.config_template = 'use: {{ fake_key }}' + self.renderer = templating.OSConfigRenderer(templates_dir=path, + openstack_release='folsom') + + def test_render_template_from_string_full_context(self): + ''' + Test rendering a specified config file with a string template + and a context. + ''' + + context = {'fake_key': 'fake_val'} + self.context.set( + interfaces=['fooservice'], + context=context + ) + + expected_output = 'use: {}'.format(context['fake_key']) + + self.renderer.register( + config_file=self.config_file, + contexts=[self.context], + config_template=self.config_template + ) + + # should return a string given we render from an in-memory + # template source + output = self.renderer.render(self.config_file) + + self.assertEquals(output, expected_output) + + def test_render_template_from_string_incomplete_context(self): + ''' + Test rendering a specified config file with a string template + and a context.
+ ''' + + self.context.set( + interfaces=['fooservice'], + context={} + ) + + expected_output = 'use: ' + + self.renderer.register( + config_file=self.config_file, + contexts=[self.context], + config_template=self.config_template + ) + + # should return a string given we render from an in-memory + # template source + output = self.renderer.render(self.config_file) + + self.assertEquals(output, expected_output) + + def test_register_string_template_with_single_context(self): + '''Template rendering from a provided string with a context''' + def _c1(): + pass + + config_file = '/etc/confdir/custom-drop-in.conf' + config_template = 'use: {{ key_available_in_c1 }}' + tmpl = templating.OSConfigTemplate( + config_file=config_file, + contexts=_c1, + config_template=config_template + ) + + self.assertEquals(tmpl.contexts, [_c1]) + self.assertEquals(tmpl.config_file, config_file) + self.assertEquals(tmpl.config_template, config_template) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/openstack/test_os_utils.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/openstack/test_os_utils.py new file mode 100644 index 0000000000000000000000000000000000000000..99ed17d9f91d8123740b5ca7f2efd729901deb82 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/openstack/test_os_utils.py @@ -0,0 +1,277 @@ +import collections +import json +import mock +import six +import unittest + +from charmhelpers.contrib.openstack import utils + +if not six.PY3: + builtin_open = '__builtin__.open' +else: + builtin_open = 'builtins.open' + + +class UtilsTests(unittest.TestCase): + def setUp(self): + super(UtilsTests, self).setUp() + + def test_compare_openstack_comparator(self): + self.assertTrue(utils.CompareOpenStackReleases('mitaka') < 'newton') + self.assertTrue(utils.CompareOpenStackReleases('pike') > 'essex') + + @mock.patch.object(utils, 'config') + @mock.patch('charmhelpers.contrib.openstack.utils.relation_set') + @mock.patch('charmhelpers.contrib.openstack.utils.relation_ids') + @mock.patch('charmhelpers.contrib.openstack.utils.get_ipv6_addr') + def test_sync_db_with_multi_ipv6_addresses(self, mock_get_ipv6_addr, + mock_relation_ids, + mock_relation_set, + mock_config): + mock_config.return_value = None + addr1 = '2001:db8:1:0:f816:3eff:fe45:7c/64' + addr2 = '2001:db8:1:0:d0cf:528c:23eb:5000/64' + mock_get_ipv6_addr.return_value = [addr1, addr2] + mock_relation_ids.return_value = ['shared-db'] + + utils.sync_db_with_multi_ipv6_addresses('testdb', 'testdbuser') + hosts = json.dumps([addr1, addr2]) + mock_relation_set.assert_called_with(relation_id='shared-db', + database='testdb', + username='testdbuser', + hostname=hosts) + + @mock.patch.object(utils, 'config') + @mock.patch('charmhelpers.contrib.openstack.utils.relation_set') + @mock.patch('charmhelpers.contrib.openstack.utils.relation_ids') + @mock.patch('charmhelpers.contrib.openstack.utils.get_ipv6_addr') + def test_sync_db_with_multi_ipv6_addresses_single(self, mock_get_ipv6_addr, + mock_relation_ids, + mock_relation_set, + mock_config): + mock_config.return_value = None + addr1 = '2001:db8:1:0:f816:3eff:fe45:7c/64' + mock_get_ipv6_addr.return_value = [addr1] + mock_relation_ids.return_value = ['shared-db'] + + utils.sync_db_with_multi_ipv6_addresses('testdb', 'testdbuser') + hosts = json.dumps([addr1]) + mock_relation_set.assert_called_with(relation_id='shared-db', + database='testdb', + username='testdbuser', + hostname=hosts) + + 
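# Illustrative aside, not part of the upstream suite: the two tests above + # pin down the wire format sync_db_with_multi_ipv6_addresses uses for the + # 'hostname' relation setting, namely a JSON-encoded list of addresses. + # For example, with the standard library json module: + # json.dumps(['2001:db8::1/64', '2001:db8::2/64']) + # == '["2001:db8::1/64", "2001:db8::2/64"]' + +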
@mock.patch.object(utils, 'config') + @mock.patch('charmhelpers.contrib.openstack.utils.relation_set') + @mock.patch('charmhelpers.contrib.openstack.utils.relation_ids') + @mock.patch('charmhelpers.contrib.openstack.utils.get_ipv6_addr') + def test_sync_db_with_multi_ipv6_addresses_w_prefix(self, + mock_get_ipv6_addr, + mock_relation_ids, + mock_relation_set, + mock_config): + mock_config.return_value = None + addr1 = '2001:db8:1:0:f816:3eff:fe45:7c/64' + mock_get_ipv6_addr.return_value = [addr1] + mock_relation_ids.return_value = ['shared-db'] + + utils.sync_db_with_multi_ipv6_addresses('testdb', 'testdbuser', + relation_prefix='bungabunga') + hosts = json.dumps([addr1]) + mock_relation_set.assert_called_with(relation_id='shared-db', + bungabunga_database='testdb', + bungabunga_username='testdbuser', + bungabunga_hostname=hosts) + + @mock.patch.object(utils, 'config') + @mock.patch('charmhelpers.contrib.openstack.utils.relation_set') + @mock.patch('charmhelpers.contrib.openstack.utils.relation_ids') + @mock.patch('charmhelpers.contrib.openstack.utils.get_ipv6_addr') + def test_sync_db_with_multi_ipv6_addresses_vips(self, mock_get_ipv6_addr, + mock_relation_ids, + mock_relation_set, + mock_config): + addr1 = '2001:db8:1:0:f816:3eff:fe45:7c/64' + addr2 = '2001:db8:1:0:d0cf:528c:23eb:5000/64' + vip1 = '2001:db8:1:0:f816:3eff:32b3:7c' + vip2 = '2001:db8:1:0:f816:3eff:32b3:7d' + mock_config.return_value = '%s 10.0.0.1 %s' % (vip1, vip2) + + mock_get_ipv6_addr.return_value = [addr1, addr2] + mock_relation_ids.return_value = ['shared-db'] + + utils.sync_db_with_multi_ipv6_addresses('testdb', 'testdbuser') + hosts = json.dumps([addr1, addr2, vip1, vip2]) + mock_relation_set.assert_called_with(relation_id='shared-db', + database='testdb', + username='testdbuser', + hostname=hosts) + + @mock.patch('uuid.uuid4') + @mock.patch('charmhelpers.contrib.openstack.utils.related_units') + @mock.patch('charmhelpers.contrib.openstack.utils.relation_set') + @mock.patch('charmhelpers.contrib.openstack.utils.relation_ids') + def test_remote_restart(self, mock_relation_ids, mock_relation_set, + mock_related_units, mock_uuid4): + mock_relation_ids.return_value = ['neutron-plugin-api-subordinate:8'] + mock_related_units.return_value = ['neutron-api/0'] + mock_uuid4.return_value = 'uuid4' + utils.remote_restart('neutron-plugin-api-subordinate') + mock_relation_set.assert_called_with( + relation_id='neutron-plugin-api-subordinate:8', + relation_settings={'restart-trigger': 'uuid4'} + ) + + @mock.patch.object(utils, 'lsb_release') + @mock.patch.object(utils, 'config') + @mock.patch('charmhelpers.contrib.openstack.utils.get_os_codename_package') + @mock.patch('charmhelpers.contrib.openstack.utils.' 
+ 'get_os_codename_install_source') + def test_os_release(self, mock_get_os_codename_install_source, + mock_get_os_codename_package, + mock_config, mock_lsb_release): + # Wipe the modules cached os_rel + utils._os_rel = None + mock_lsb_release.return_value = {"DISTRIB_CODENAME": "trusty"} + mock_get_os_codename_install_source.return_value = None + mock_get_os_codename_package.return_value = None + mock_config.return_value = 'cloud-pocket' + self.assertEqual(utils.os_release('my-pkg'), 'icehouse') + mock_get_os_codename_install_source.assert_called_once_with( + 'cloud-pocket') + mock_get_os_codename_package.assert_called_once_with( + 'my-pkg', fatal=False) + mock_config.assert_called_once_with('openstack-origin') + # Next call to os_release should pickup cached version + mock_get_os_codename_install_source.reset_mock() + mock_get_os_codename_package.reset_mock() + self.assertEqual(utils.os_release('my-pkg'), 'icehouse') + self.assertFalse(mock_get_os_codename_install_source.called) + self.assertFalse(mock_get_os_codename_package.called) + # Call os_release and bypass cache + mock_lsb_release.return_value = {"DISTRIB_CODENAME": "xenial"} + mock_get_os_codename_install_source.reset_mock() + mock_get_os_codename_package.reset_mock() + self.assertEqual(utils.os_release('my-pkg', reset_cache=True), + 'mitaka') + mock_get_os_codename_install_source.assert_called_once_with( + 'cloud-pocket') + mock_get_os_codename_package.assert_called_once_with( + 'my-pkg', fatal=False) + # Override base + mock_lsb_release.return_value = {"DISTRIB_CODENAME": "xenial"} + mock_get_os_codename_install_source.reset_mock() + mock_get_os_codename_package.reset_mock() + self.assertEqual(utils.os_release('my-pkg', reset_cache=True, base="ocata"), + 'ocata') + mock_get_os_codename_install_source.assert_called_once_with( + 'cloud-pocket') + mock_get_os_codename_package.assert_called_once_with( + 'my-pkg', fatal=False) + # Override source key + mock_config.reset_mock() + mock_get_os_codename_install_source.reset_mock() + mock_get_os_codename_package.reset_mock() + mock_get_os_codename_package.return_value = None + utils.os_release('my-pkg', reset_cache=True, source_key='source') + mock_config.assert_called_once_with('source') + mock_get_os_codename_install_source.assert_called_once_with( + 'cloud-pocket') + + @mock.patch.object(utils, 'os_release') + @mock.patch.object(utils, 'get_os_codename_install_source') + def test_enable_memcache(self, _get_os_codename_install_source, + _os_release): + # Check call with 'release' + self.assertFalse(utils.enable_memcache(release='icehouse')) + self.assertTrue(utils.enable_memcache(release='ocata')) + # Check call with 'source' + _os_release.return_value = None + _get_os_codename_install_source.return_value = 'icehouse' + self.assertFalse(utils.enable_memcache(source='distro')) + _os_release.return_value = None + _get_os_codename_install_source.return_value = 'ocata' + self.assertTrue(utils.enable_memcache(source='distro')) + # Check call with 'package' + _os_release.return_value = 'icehouse' + _get_os_codename_install_source.return_value = None + self.assertFalse(utils.enable_memcache(package='pkg1')) + _os_release.return_value = 'ocata' + _get_os_codename_install_source.return_value = None + self.assertTrue(utils.enable_memcache(package='pkg1')) + + @mock.patch.object(utils, 'enable_memcache') + def test_enable_token_cache_pkgs(self, _enable_memcache): + _enable_memcache.return_value = False + self.assertEqual(utils.token_cache_pkgs(source='distro'), []) + 
_enable_memcache.return_value = True + self.assertEqual(utils.token_cache_pkgs(source='distro'), + ['memcached', 'python-memcache']) + + def test_update_json_file(self): + TEST_POLICY = """{ + "delete_image_location": "", + "get_image_location": "", + "set_image_location": "", + "extra_property": "False" + }""" + + TEST_POLICY_FILE = "/etc/glance/policy.json" + + items_to_update = { + "get_image_location": "role:admin", + "extra_policy": "extra", + } + + mock_open = mock.mock_open(read_data=TEST_POLICY) + with mock.patch(builtin_open, mock_open) as mock_file: + utils.update_json_file(TEST_POLICY_FILE, {}) + self.assertFalse(mock_file.called) + + utils.update_json_file(TEST_POLICY_FILE, items_to_update) + mock_file.assert_has_calls([ + mock.call(TEST_POLICY_FILE), + mock.call(TEST_POLICY_FILE, 'w'), + ], any_order=True) + + modified_policy = json.loads(TEST_POLICY) + modified_policy.update(items_to_update) + mock_open().write.assert_called_with( + json.dumps(modified_policy, indent=4, sort_keys=True)) + + tmp = json.loads(TEST_POLICY) + tmp.update(items_to_update) + TEST_POLICY = json.dumps(tmp) + mock_open = mock.mock_open(read_data=TEST_POLICY) + with mock.patch(builtin_open, mock_open) as mock_file: + utils.update_json_file(TEST_POLICY_FILE, items_to_update) + mock_file.assert_has_calls([ + mock.call(TEST_POLICY_FILE), + ], any_order=True) + + def test_ordered(self): + data = {'one': 1, 'two': 2, 'three': 3} + expected = [('one', 1), ('three', 3), ('two', 2)] + self.assertSequenceEqual(expected, + [x for x in utils.ordered(data).items()]) + + data = { + 'one': 1, + 'two': 2, + 'three': { + 'uno': 1, + 'dos': 2, + 'tres': 3 + } + } + expected = collections.OrderedDict() + expected['one'] = 1 + nested = collections.OrderedDict() + nested['dos'] = 2 + nested['tres'] = 3 + nested['uno'] = 1 + expected['three'] = nested + expected['two'] = 2 + self.assertEqual(expected, utils.ordered(data)) + + self.assertRaises(ValueError, utils.ordered, "foo") diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/openstack/test_policyd.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/openstack/test_policyd.py new file mode 100644 index 0000000000000000000000000000000000000000..378e58d4ab7ee037efbd9e12e94dcab8b9dbd4d8 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/openstack/test_policyd.py @@ -0,0 +1,467 @@ +import contextlib +import copy +import io +import os +import mock +import six +import unittest + +from charmhelpers.contrib.openstack import policyd + + +if not six.PY3: + builtin_open = '__builtin__.open' +else: + builtin_open = 'builtins.open' + + +class PolicydTests(unittest.TestCase): + def setUp(self): + super(PolicydTests, self).setUp() + + def test_is_policyd_override_valid_on_this_release(self): + self.assertTrue( + policyd.is_policyd_override_valid_on_this_release("queens")) + self.assertTrue( + policyd.is_policyd_override_valid_on_this_release("rocky")) + self.assertFalse( + policyd.is_policyd_override_valid_on_this_release("pike")) + + @mock.patch.object(policyd, "clean_policyd_dir_for") + @mock.patch.object(policyd, "remove_policy_success_file") + @mock.patch.object(policyd, "process_policy_resource_file") + @mock.patch.object(policyd, "get_policy_resource_filename") + @mock.patch.object(policyd, "is_policyd_override_valid_on_this_release") + @mock.patch.object(policyd, "_policy_success_file") + @mock.patch("os.path.isfile") + @mock.patch.object(policyd.hookenv, 
"config") + @mock.patch("charmhelpers.core.hookenv.log") + def test_maybe_do_policyd_overrides( + self, + mock_log, + mock_config, + mock_isfile, + mock__policy_success_file, + mock_is_policyd_override_valid_on_this_release, + mock_get_policy_resource_filename, + mock_process_policy_resource_file, + mock_remove_policy_success_file, + mock_clean_policyd_dir_for, + ): + mock_isfile.return_value = False + mock__policy_success_file.return_value = "s-return" + # test success condition + mock_config.return_value = {policyd.POLICYD_CONFIG_NAME: True} + mock_is_policyd_override_valid_on_this_release.return_value = True + mock_get_policy_resource_filename.return_value = "resource.zip" + mock_process_policy_resource_file.return_value = True + mod_fn = mock.Mock() + restart_handler = mock.Mock() + policyd.maybe_do_policyd_overrides( + "arelease", "aservice", ["a"], ["b"], mod_fn, restart_handler) + mock_is_policyd_override_valid_on_this_release.assert_called_once_with( + "arelease") + mock_get_policy_resource_filename.assert_called_once_with() + mock_process_policy_resource_file.assert_called_once_with( + "resource.zip", "aservice", ["a"], ["b"], mod_fn) + restart_handler.assert_called_once_with() + # test process_policy_resource_file is not called if not valid on the + # release. + mock_process_policy_resource_file.reset_mock() + restart_handler.reset_mock() + mock_is_policyd_override_valid_on_this_release.return_value = False + policyd.maybe_do_policyd_overrides( + "arelease", "aservice", ["a"], ["b"], mod_fn, restart_handler) + mock_process_policy_resource_file.assert_not_called() + restart_handler.assert_not_called() + # test restart_handler is not called if not needed. + mock_is_policyd_override_valid_on_this_release.return_value = True + mock_process_policy_resource_file.return_value = False + policyd.maybe_do_policyd_overrides( + "arelease", "aservice", ["a"], ["b"], mod_fn, restart_handler) + mock_process_policy_resource_file.assert_called_once_with( + "resource.zip", "aservice", ["a"], ["b"], mod_fn) + restart_handler.assert_not_called() + # test that directory gets cleaned if the config is not set + mock_config.return_value = {policyd.POLICYD_CONFIG_NAME: False} + mock_process_policy_resource_file.reset_mock() + policyd.maybe_do_policyd_overrides( + "arelease", "aservice", ["a"], ["b"], mod_fn, restart_handler) + mock_process_policy_resource_file.assert_not_called() + mock_remove_policy_success_file.assert_called_once_with() + mock_clean_policyd_dir_for.assert_called_once_with( + "aservice", ["a"], user='aservice', group='aservice') + + @mock.patch.object(policyd, "maybe_do_policyd_overrides") + def test_maybe_do_policyd_overrides_with_config_changed( + self, + mock_maybe_do_policyd_overrides, + ): + mod_fn = mock.Mock() + restart_handler = mock.Mock() + policyd.maybe_do_policyd_overrides_on_config_changed( + "arelease", "aservice", ["a"], ["b"], mod_fn, restart_handler) + mock_maybe_do_policyd_overrides.assert_called_once_with( + "arelease", "aservice", ["a"], ["b"], mod_fn, restart_handler, + config_changed=True) + + @mock.patch("charmhelpers.core.hookenv.resource_get") + def test_get_policy_resource_filename(self, mock_resource_get): + mock_resource_get.return_value = "test-file" + self.assertEqual(policyd.get_policy_resource_filename(), + "test-file") + mock_resource_get.assert_called_once_with( + policyd.POLICYD_RESOURCE_NAME) + + # check that if an error is raised, that None is returned. 
+ def go_bang(*args): + raise Exception("bang") + + mock_resource_get.side_effect = go_bang + self.assertEqual(policyd.get_policy_resource_filename(), None) + + @mock.patch.object(policyd, "_yamlfiles") + @mock.patch.object(policyd.zipfile, "ZipFile") + def test_open_and_filter_yaml_files(self, mock_ZipFile, mock__yamlfiles): + mock__yamlfiles.return_value = [ + ("file1", ".yaml", "file1.yaml", None), + ("file2", ".yml", "file2.YML", None)] + mock_ZipFile.return_value.__enter__.return_value = "zfp" + # test a valid zip file + with policyd.open_and_filter_yaml_files("some-file") as (zfp, files): + self.assertEqual(zfp, "zfp") + mock_ZipFile.assert_called_once_with("some-file", "r") + self.assertEqual(files, [ + ("file1", ".yaml", "file1.yaml", None), + ("file2", ".yml", "file2.YML", None)]) + # ensure that there must be at least one file. + mock__yamlfiles.return_value = [] + with self.assertRaises(policyd.BadPolicyZipFile): + with policyd.open_and_filter_yaml_files("some-file"): + pass + # ensure that it picks up duplicates + mock__yamlfiles.return_value = [ + ("file1", ".yaml", "file1.yaml", None), + ("file2", ".yml", "file2.yml", None), + ("file1", ".yml", "file1.yml", None)] + with self.assertRaises(policyd.BadPolicyZipFile): + with policyd.open_and_filter_yaml_files("some-file"): + pass + + def test__yamlfiles(self): + class MockZipFile(object): + def __init__(self, infolist): + self._infolist = infolist + + def infolist(self): + return self._infolist + + class MockInfoListItem(object): + def __init__(self, is_dir, filename): + self.filename = filename + self._is_dir = is_dir + + def is_dir(self): + return self._is_dir + + def __repr__(self): + return "MockInfoListItem({}, {})".format(self._is_dir, + self.filename) + + zipfile = MockZipFile([ + MockInfoListItem(False, "file1.yaml"), + MockInfoListItem(False, "file2.md"), + MockInfoListItem(False, "file3.YML"), + MockInfoListItem(False, "file4.Yaml"), + MockInfoListItem(True, "file5"), + MockInfoListItem(True, "file6.yaml"), + MockInfoListItem(False, "file7"), + MockInfoListItem(False, "file8.j2")]) + + self.assertEqual(list(policyd._yamlfiles(zipfile)), + [("file1", ".yaml", "file1.yaml", mock.ANY), + ("file3", ".yml", "file3.YML", mock.ANY), + ("file4", ".yaml", "file4.Yaml", mock.ANY), + ("file8", ".j2", "file8.j2", mock.ANY)]) + + @mock.patch.object(policyd.yaml, "safe_load") + def test_read_and_validate_yaml(self, mock_safe_load): + # test a valid document + good_doc = { + "key1": "rule1", + "key2": "rule2", + } + mock_safe_load.return_value = copy.deepcopy(good_doc) + doc = policyd.read_and_validate_yaml("test-stream") + self.assertEqual(doc, good_doc) + mock_safe_load.assert_called_once_with("test-stream") + # test an invalid document - return a string + mock_safe_load.return_value = "wrong" + with self.assertRaises(policyd.BadPolicyYamlFile): + policyd.read_and_validate_yaml("test-stream") + # test for black-listed keys + with self.assertRaises(policyd.BadPolicyYamlFile): + mock_safe_load.return_value = copy.deepcopy(good_doc) + policyd.read_and_validate_yaml("test-stream", ["key1"]) + # test for non string keys + bad_key_doc = { + (1,): "rule1", + "key2": "rule2", + } + with self.assertRaises(policyd.BadPolicyYamlFile): + mock_safe_load.return_value = copy.deepcopy(bad_key_doc) + policyd.read_and_validate_yaml("test-stream", ["key1"]) + # test for non string values (i.e. 
no nested keys) + bad_key_doc2 = { + "key1": "rule1", + "key2": {"sub_key": "rule2"}, + } + with self.assertRaises(policyd.BadPolicyYamlFile): + mock_safe_load.return_value = copy.deepcopy(bad_key_doc2) + policyd.read_and_validate_yaml("test-stream", ["key1"]) + + def test_policyd_dir_for(self): + self.assertEqual(policyd.policyd_dir_for('thing'), + "/etc/thing/policy.d") + + @mock.patch.object(policyd.hookenv, 'log') + @mock.patch("os.remove") + @mock.patch("shutil.rmtree") + @mock.patch("charmhelpers.core.host.mkdir") + @mock.patch("os.path.exists") + @mock.patch.object(policyd, "policyd_dir_for") + def test_clean_policyd_dir_for(self, + mock_policyd_dir_for, + mock_os_path_exists, + mock_mkdir, + mock_shutil_rmtree, + mock_os_remove, + mock_log): + if hasattr(os, 'scandir'): + mock_scan_dir_parts = (mock.patch, ["os.scandir"]) + else: + mock_scan_dir_parts = (mock.patch.object, + [policyd, "_fallback_scandir"]) + + class MockDirEntry(object): + def __init__(self, path, is_dir): + self.path = path + self._is_dir = is_dir + + def is_dir(self): + return self._is_dir + + # list of scanned objects + directory_contents = [ + MockDirEntry("one", False), + MockDirEntry("two", False), + MockDirEntry("three", True), + MockDirEntry("four", False)] + + mock_policyd_dir_for.return_value = "the-path" + + # Initial conditions + mock_os_path_exists.return_value = False + + # call the function + with mock_scan_dir_parts[0](*mock_scan_dir_parts[1]) as \ + mock_os_scandir: + mock_os_scandir.return_value = directory_contents + policyd.clean_policyd_dir_for("aservice") + + # check it did the right thing + mock_policyd_dir_for.assert_called_once_with("aservice") + mock_os_path_exists.assert_called_once_with("the-path") + mock_mkdir.assert_called_once_with("the-path", + owner="aservice", + group="aservice", + perms=0o775) + mock_shutil_rmtree.assert_called_once_with("three") + mock_os_remove.assert_has_calls([ + mock.call("one"), mock.call("two"), mock.call("four")]) + + # check also that we can omit paths ... 
reset everything + mock_os_remove.reset_mock() + mock_shutil_rmtree.reset_mock() + mock_os_path_exists.reset_mock() + mock_os_path_exists.return_value = True + mock_mkdir.reset_mock() + + with mock_scan_dir_parts[0](*mock_scan_dir_parts[1]) as \ + mock_os_scandir: + mock_os_scandir.return_value = directory_contents + policyd.clean_policyd_dir_for("aservice", + keep_paths=["one", "three"]) + + # verify all worked as we expected + mock_mkdir.assert_not_called() + mock_shutil_rmtree.assert_not_called() + mock_os_remove.assert_has_calls([mock.call("two"), mock.call("four")]) + + def test_path_for_policy_file(self): + self.assertEqual(policyd.path_for_policy_file('this', 'that'), + "/etc/this/policy.d/that.yaml") + + @mock.patch("charmhelpers.core.hookenv.charm_dir") + def test__policy_success_file(self, mock_charm_dir): + mock_charm_dir.return_value = "/this" + self.assertEqual(policyd._policy_success_file(), + "/this/{}".format(policyd.POLICYD_SUCCESS_FILENAME)) + + @mock.patch("os.remove") + @mock.patch.object(policyd, "_policy_success_file") + def test_remove_policy_success_file(self, mock_file, mock_os_remove): + mock_file.return_value = "the-path" + policyd.remove_policy_success_file() + mock_os_remove.assert_called_once_with("the-path") + + # now test that failure doesn't fail the function + def go_bang(*args): + raise Exception("bang") + + mock_os_remove.side_effect = go_bang + policyd.remove_policy_success_file() + + @mock.patch("os.path.isfile") + @mock.patch.object(policyd, "_policy_success_file") + def test_policyd_status_message_prefix(self, mock_file, mock_is_file): + mock_file.return_value = "the-path" + mock_is_file.return_value = True + self.assertEqual(policyd.policyd_status_message_prefix(), "PO:") + mock_is_file.return_value = False + self.assertEqual( + policyd.policyd_status_message_prefix(), "PO (broken):") + + @mock.patch("yaml.dump") + @mock.patch.object(policyd, "_policy_success_file") + @mock.patch.object(policyd.hookenv, "log") + @mock.patch.object(policyd, "read_and_validate_yaml") + @mock.patch.object(policyd, "path_for_policy_file") + @mock.patch.object(policyd, "clean_policyd_dir_for") + @mock.patch.object(policyd, "remove_policy_success_file") + @mock.patch.object(policyd, "open_and_filter_yaml_files") + @mock.patch.object(policyd.ch_host, 'write_file') + @mock.patch.object(policyd, "maybe_create_directory_for") + def test_process_policy_resource_file( + self, + mock_maybe_create_directory_for, + mock_write_file, + mock_open_and_filter_yaml_files, + mock_remove_policy_success_file, + mock_clean_policyd_dir_for, + mock_path_for_policy_file, + mock_read_and_validate_yaml, + mock_log, + mock__policy_success_file, + mock_yaml_dump, + ): + mock_zfp = mock.MagicMock() + mod_fn = mock.Mock() + mock_path_for_policy_file.side_effect = lambda s, n: s + "/" + n + gen = [ + ("file1", ".yaml", "file1.yaml", "file1-zipinfo"), + ("file2", ".yml", "file2.yml", "file2-zipinfo")] + mock_open_and_filter_yaml_files.return_value.__enter__.return_value = \ + (mock_zfp, gen) + # first verify that we can blacklist a file + res = policyd.process_policy_resource_file( + "resource.zip", "aservice", ["aservice/file1"], [], mod_fn) + self.assertFalse(res) + mock_remove_policy_success_file.assert_called_once_with() + mock_clean_policyd_dir_for.assert_has_calls([ + mock.call("aservice", + ["aservice/file1"], + user='aservice', + group='aservice'), + mock.call("aservice", + ["aservice/file1"], + user='aservice', + group='aservice')]) + mock_zfp.open.assert_not_called() + 
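# likewise the (mocked) template function should never have been invoked: + 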
mod_fn.assert_not_called() + mock_log.assert_any_call("Processing resource.zip failed: policy.d" + " name aservice/file1 is blacklisted", + level=policyd.POLICYD_LOG_LEVEL_DEFAULT) + + # now test for success + @contextlib.contextmanager + def _patch_open(): + '''Patch open() to allow mocking both open() itself and the file that is + yielded. + + Yields the mock for "open" and "file", respectively.''' + mock_open = mock.MagicMock(spec=open) + mock_file = mock.MagicMock(spec=io.FileIO) + + with mock.patch(builtin_open, mock_open): + yield mock_open, mock_file + + mock_clean_policyd_dir_for.reset_mock() + mock_zfp.reset_mock() + mock_fp = mock.MagicMock() + mock_fp.read.return_value = '{"rule1": "value1"}' + mock_zfp.open.return_value.__enter__.return_value = mock_fp + gen = [("file1", ".j2", "file1.j2", "file1-zipinfo")] + mock_open_and_filter_yaml_files.return_value.__enter__.return_value = \ + (mock_zfp, gen) + mock_read_and_validate_yaml.return_value = {"rule1": "modded_value1"} + mod_fn.return_value = '{"rule1": "modded_value1"}' + mock__policy_success_file.return_value = "policy-success-file" + mock_yaml_dump.return_value = "dumped-file" + with _patch_open() as (mock_open, mock_file): + res = policyd.process_policy_resource_file( + "resource.zip", "aservice", [], ["key"], mod_fn) + self.assertTrue(res) + # mock_open.assert_any_call("aservice/file1", "wt") + mock_write_file.assert_called_once_with( + "aservice/file1", + b'dumped-file', + "aservice", + "aservice") + mock_open.assert_any_call("policy-success-file", "w") + mock_yaml_dump.assert_called_once_with({"rule1": "modded_value1"}) + mock_zfp.open.assert_called_once_with("file1-zipinfo") + mock_read_and_validate_yaml.assert_called_once_with( + '{"rule1": "modded_value1"}', ["key"]) + mod_fn.assert_called_once_with('{"rule1": "value1"}') + + # raise a BadPolicyZipFile if we have a template, but there is no + # template function + mock_log.reset_mock() + with _patch_open() as (mock_open, mock_file): + res = policyd.process_policy_resource_file( + "resource.zip", "aservice", [], ["key"], + template_function=None) + self.assertFalse(res) + mock_log.assert_any_call( + "Processing resource.zip failed: Template file1.j2 " + "but no template_function is available", + level=policyd.POLICYD_LOG_LEVEL_DEFAULT) + + # raise the IOError to validate that code path + def raise_ioerror(*args): + raise IOError("bang") + + mock_open_and_filter_yaml_files.side_effect = raise_ioerror + mock_log.reset_mock() + res = policyd.process_policy_resource_file( + "resource.zip", "aservice", [], ["key"], mod_fn) + self.assertFalse(res) + mock_log.assert_any_call( + "File resource.zip failed with IOError. " + "This really shouldn't happen -- error: bang", + level=policyd.POLICYD_LOG_LEVEL_DEFAULT) + # raise a general exception, so that is caught and logged too. 
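+ # (the broad "except Exception" contract is deliberate here: policyd + # processing is expected to fail safe and leave the unit running rather + # than crash the hook)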
 + + def raise_exception(*args): + raise Exception("bang2") + + mock_open_and_filter_yaml_files.reset_mock() + mock_open_and_filter_yaml_files.side_effect = raise_exception + mock_log.reset_mock() + res = policyd.process_policy_resource_file( + "resource.zip", "aservice", [], ["key"], mod_fn) + self.assertFalse(res) + mock_log.assert_any_call( + "General Exception(bang2) during policyd processing", + level=policyd.POLICYD_LOG_LEVEL_DEFAULT) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/openstack/test_ssh_migrations.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/openstack/test_ssh_migrations.py new file mode 100644 index 0000000000000000000000000000000000000000..5a3244228d96199f4d3661da18bec83777a32b52 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/openstack/test_ssh_migrations.py @@ -0,0 +1,582 @@ +import mock +import six +import subprocess +import unittest + +from tests.helpers import patch_open, mock_open + +import charmhelpers.contrib.openstack.ssh_migrations as ssh_migrations + +if not six.PY3: + builtin_open = '__builtin__.open' + builtin_import = '__builtin__.__import__' +else: + builtin_open = 'builtins.open' + builtin_import = 'builtins.__import__' + + +UNIT1_HOST_KEY_1 = """|1|EaIiWNsBsaSke5T5bdDlaV5xKPU=|WKMu3Va+oNwRjXmPGOZ+mrpWbM8= ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDZdZdR7I35ymFdspruN1CIez/0m62sJeld2nLuOGaNbdl/rk5bGrWUAZh6c9p9H53FAqGAXBD/1C8dZ5dgIAGdTs7PAZq7owXCpgUPQcGOYVAtBwv8qfnWyI1W+Vpi6vnb2sgYr6XGbB9b84i4vrd98IIpXIleC9qd0VUTSYgd7+NPaFNoK0HZmqcNEf5leaa8sgSf4t5F+BTWEXzU3ql/3isFT8lEpJ9N8wOvNzAoFEQcxqauvOJn72QQ6kUrQT3NdQFUMHquS/s+nBrQNPbUmzqrvSOed75Qk8359zqU1Rce7U39cqc0scYi1ak3oJdojwfLFKJw4TMPn/Pq7JnT""" +UNIT1_HOST_KEY_2 = """|1|mCyYWqJl8loqV6LCY84lu2rpqLA=|51m+M+0ES3jYVzr3Kco3CDg8hEY= ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDZdZdR7I35ymFdspruN1CIez/0m62sJeld2nLuOGaNbdl/rk5bGrWUAZh6c9p9H53FAqGAXBD/1C8dZ5dgIAGdTs7PAZq7owXCpgUPQcGOYVAtBwv8qfnWyI1W+Vpi6vnb2sgYr6XGbB9b84i4vrd98IIpXIleC9qd0VUTSYgd7+NPaFNoK0HZmqcNEf5leaa8sgSf4t5F+BTWEXzU3ql/3isFT8lEpJ9N8wOvNzAoFEQcxqauvOJn72QQ6kUrQT3NdQFUMHquS/s+nBrQNPbUmzqrvSOed75Qk8359zqU1Rce7U39cqc0scYi1ak3oJdojwfLFKJw4TMPn/Pq7JnT""" +UNIT2_HOST_KEY_1 = """|1|eWagMqrN7XmX7NdVpZbqMZ2cb4Q=|3jgGiFEU9SMhXwdX0w0kkG54CZc= ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC27Lv4wtAiPIOTsrUCFOU4qaNsov+LZSVHtxlu0aERuD+oU3ZILXOITXJlDohweXN6YuP4hFg49FF119gmMag+hDiA8/BztQmsplkwHrWEPuWKpReLRNLBU+Nt78jrJkjTK8Egwxbaxu8fAPZkCgGyeLkIH4ghrdWlOaWYXzwuxXkYWSpQOgF6E/T+19JKVKNpt2i6w7q9vVwZEjwVr30ubs1bNdPzE9ylNLQRrGa7c38SKsEos5RtZJjEuZGTC9KI0QdEgwnxxNMlT/CIgwWA1V38vLsosF2pHKxbmtCNvBuPNtrBDgXhukVqyEh825RhTAyQYGshMCbAbxl/M9c3""" +UNIT2_HOST_KEY_2 = """|1|zRH8troNwhVzrkMx86E5Ibevw5s=|gESlgkwUumP8q0A6l+CoRlFRpTw= ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC27Lv4wtAiPIOTsrUCFOU4qaNsov+LZSVHtxlu0aERuD+oU3ZILXOITXJlDohweXN6YuP4hFg49FF119gmMag+hDiA8/BztQmsplkwHrWEPuWKpReLRNLBU+Nt78jrJkjTK8Egwxbaxu8fAPZkCgGyeLkIH4ghrdWlOaWYXzwuxXkYWSpQOgF6E/T+19JKVKNpt2i6w7q9vVwZEjwVr30ubs1bNdPzE9ylNLQRrGa7c38SKsEos5RtZJjEuZGTC9KI0QdEgwnxxNMlT/CIgwWA1V38vLsosF2pHKxbmtCNvBuPNtrBDgXhukVqyEh825RhTAyQYGshMCbAbxl/M9c3""" +HOST_KEYS = [UNIT1_HOST_KEY_1, UNIT1_HOST_KEY_2, + UNIT2_HOST_KEY_1, UNIT2_HOST_KEY_2] +UNIT1_PUBKEY_1 = """ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABAQDqtnB3zh3sMufZd1khi44su0hTg/LqLb3ma2iyueTULZikDYa65UidVxzsa6r0Y9jkHwknGlh7fNnGdmc3S8EE+rVUNF4r3JF2Zd/pdfCBia/BmKJcO7+NyRWc8ihlrA3xYUSm+Yg8ZIpqoSb1LKjgAdYISh9HQQaXut2sXtHESdpilNpDf42AZfuQM+B0op0v7bq86ZXOM1rvdJriI6BduHaAOux+d9HDNvV5AxYTICrUkXqIvdHnoRyOFfhTcKun0EtuUxpDiAi0im9C+i+MPwMvA6AmRbot6Tqt2xZPRBYY8+WF7I5cBoovES/dWKP5TZwaGBr+WNv+z2JJhvlN root@juju-4665be-20180716142533-8""" +UNIT2_PUBKEY_1 = """ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCkWfkVrG7wTnfifvL0GkmDj6L33PKrWjzN2hOwZb9EoxwNzFGTMTBIpepTAnO6hdFBwtus1Ej/L12K6L/0YRDZAKjE7yTWOsh1kUxPZ1INRCqLILiefE5A/LPNx8NDb+d/2ryc5QmOQXUALs6mC5VDNchImUp9L7l0RIzPOgPXZCqMC1nZLqqX+eI9EUaf29/+NztYw59rFAa3hWNe8RJCSFeU+iWirWP8rfX9jsLzD9hO3nuZjP23M6tv1jX9LQD+8qkx0WSMa2WrIjkMiclP6tkyCJOZogyoPzZm/+dUhLeY9bIizbZCQKH/b4gOl5m/PkWoqEFshfqGzUIPkAJp root@juju-4665be-20180716142533-9""" +PUB_KEYS = [UNIT1_PUBKEY_1, UNIT2_PUBKEY_1] + + +class SSHMigrationsTests(unittest.TestCase): + + def setUp(self): + self._patches = {} + self._patches_start = {} + + def tearDown(self): + """Run teardown of patches.""" + for k, v in self._patches.items(): + v.stop() + setattr(self, k, None) + self._patches = None + self._patches_start = None + + def patch_object(self, obj, attr, return_value=None, name=None, new=None, + **kwargs): + """Patch the given object.""" + if name is None: + name = attr + if new is not None: + mocked = mock.patch.object(obj, attr, new=new, **kwargs) + else: + mocked = mock.patch.object(obj, attr, **kwargs) + self._patches[name] = mocked + started = mocked.start() + if new is None: + started.return_value = return_value + self._patches_start[name] = started + setattr(self, name, started) + + def setup_mocks_ssh_directory_for_unit(self, app_name, ssh_dir_exists, + app_dir_exists, auth_keys_exists, + known_hosts_exists, user=None): + def _isdir(x): + return { + ssh_dir + '/': ssh_dir_exists, + app_dir: app_dir_exists}[x] + + def _isfile(x): + return { + '{}/authorized_keys'.format(app_dir): auth_keys_exists, + '{}/known_hosts'.format(app_dir): known_hosts_exists}[x] + + if user: + app_name = "{}_{}".format(app_name, user) + ssh_dir = '/etc/nova/compute_ssh' + app_dir = '{}/{}'.format(ssh_dir, app_name) + + self.patch_object(ssh_migrations.os, 'mkdir') + self.patch_object(ssh_migrations.os.path, 'isdir', side_effect=_isdir) + self.patch_object(ssh_migrations.os.path, 'isfile', + side_effect=_isfile) + + def test_ssh_directory_for_unit(self): + self.setup_mocks_ssh_directory_for_unit( + 'nova-compute-lxd', + ssh_dir_exists=True, + app_dir_exists=True, + auth_keys_exists=True, + known_hosts_exists=True) + self.assertEqual( + ssh_migrations.ssh_directory_for_unit('nova-compute-lxd'), + '/etc/nova/compute_ssh/nova-compute-lxd') + self.assertFalse(self.mkdir.called) + + def test_ssh_directory_for_unit_user(self): + self.setup_mocks_ssh_directory_for_unit( + 'nova-compute-lxd', + ssh_dir_exists=True, + app_dir_exists=True, + auth_keys_exists=True, + known_hosts_exists=True, + user='nova') + self.assertEqual( + ssh_migrations.ssh_directory_for_unit( + 'nova-compute-lxd', + user='nova'), + '/etc/nova/compute_ssh/nova-compute-lxd_nova') + self.assertFalse(self.mkdir.called) + + def test_ssh_directory_missing_dir(self): + self.setup_mocks_ssh_directory_for_unit( + 'nova-compute-lxd', + ssh_dir_exists=False, + app_dir_exists=True, + auth_keys_exists=True, + known_hosts_exists=True) + self.assertEqual( + ssh_migrations.ssh_directory_for_unit('nova-compute-lxd'), + '/etc/nova/compute_ssh/nova-compute-lxd') + 
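# only the top-level ssh dir was reported missing, so exactly one mkdir + # call for it is expected: + 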
self.mkdir.assert_called_once_with('/etc/nova/compute_ssh/') + + def test_ssh_directory_missing_dirs(self): + self.setup_mocks_ssh_directory_for_unit( + 'nova-compute-lxd', + ssh_dir_exists=False, + app_dir_exists=False, + auth_keys_exists=True, + known_hosts_exists=True) + self.assertEqual( + ssh_migrations.ssh_directory_for_unit('nova-compute-lxd'), + '/etc/nova/compute_ssh/nova-compute-lxd') + mkdir_calls = [ + mock.call('/etc/nova/compute_ssh/'), + mock.call('/etc/nova/compute_ssh/nova-compute-lxd')] + self.mkdir.assert_has_calls(mkdir_calls) + + @mock.patch(builtin_open) + def test_ssh_directory_missing_file(self, _open): + self.setup_mocks_ssh_directory_for_unit( + 'nova-compute-lxd', + ssh_dir_exists=True, + app_dir_exists=True, + auth_keys_exists=False, + known_hosts_exists=True) + self.assertEqual( + ssh_migrations.ssh_directory_for_unit('nova-compute-lxd'), + '/etc/nova/compute_ssh/nova-compute-lxd') + _open.assert_called_once_with( + '/etc/nova/compute_ssh/nova-compute-lxd/authorized_keys', + 'w') + self.assertFalse(self.mkdir.called) + + @mock.patch(builtin_open) + def test_ssh_directory_missing_files(self, _open): + self.setup_mocks_ssh_directory_for_unit( + 'nova-compute-lxd', + ssh_dir_exists=True, + app_dir_exists=True, + auth_keys_exists=False, + known_hosts_exists=False) + self.assertEqual( + ssh_migrations.ssh_directory_for_unit('nova-compute-lxd'), + '/etc/nova/compute_ssh/nova-compute-lxd') + open_calls = [ + mock.call( + '/etc/nova/compute_ssh/nova-compute-lxd/authorized_keys', + 'w'), + mock.call().close(), + mock.call( + '/etc/nova/compute_ssh/nova-compute-lxd/known_hosts', + 'w'), + mock.call().close()] + _open.assert_has_calls(open_calls) + self.assertFalse(self.mkdir.called) + + def setup_ssh_directory_for_unit_mocks(self): + self.patch_object( + ssh_migrations, + 'ssh_directory_for_unit', + return_value='/somedir') + + def test_known_hosts(self): + self.setup_ssh_directory_for_unit_mocks() + self.assertEqual( + ssh_migrations.known_hosts('nova-compute-lxd'), + '/somedir/known_hosts') + + def test_authorized_keys(self): + self.setup_ssh_directory_for_unit_mocks() + self.assertEqual( + ssh_migrations.authorized_keys('nova-compute-lxd'), + '/somedir/authorized_keys') + + @mock.patch('subprocess.check_output') + def test_ssh_known_host_key(self, _check_output): + self.setup_ssh_directory_for_unit_mocks() + _check_output.return_value = UNIT1_HOST_KEY_1 + self.assertEqual( + ssh_migrations.ssh_known_host_key( + 'juju-4665be-20180716142533-8', + 'nova-compute-lxd'), + UNIT1_HOST_KEY_1) + + @mock.patch('subprocess.check_output') + def test_ssh_known_host_key_multi_match(self, _check_output): + self.setup_ssh_directory_for_unit_mocks() + _check_output.return_value = '{}\n{}\n'.format(UNIT1_HOST_KEY_1, + UNIT1_HOST_KEY_2) + self.assertEqual( + ssh_migrations.ssh_known_host_key( + 'juju-4665be-20180716142533-8', + 'nova-compute-lxd'), + UNIT1_HOST_KEY_1) + + @mock.patch('subprocess.check_output') + def test_ssh_known_host_key_rc1(self, _check_output): + self.setup_ssh_directory_for_unit_mocks() + _check_output.side_effect = subprocess.CalledProcessError( + cmd=['anything'], + returncode=1, + output=UNIT1_HOST_KEY_1) + self.assertEqual( + ssh_migrations.ssh_known_host_key( + 'juju-4665be-20180716142533-8', + 'nova-compute-lxd'), + UNIT1_HOST_KEY_1) + + @mock.patch('subprocess.check_output') + def test_ssh_known_host_key_rc2(self, _check_output): + self.setup_ssh_directory_for_unit_mocks() + _check_output.side_effect = subprocess.CalledProcessError( + cmd=['anything'], + 
returncode=2, + output='') + with self.assertRaises(subprocess.CalledProcessError): + ssh_migrations.ssh_known_host_key( + 'juju-4665be-20180716142533-8', + 'nova-compute-lxd') + + @mock.patch('subprocess.check_output') + def test_ssh_known_host_key_no_match(self, _check_output): + self.setup_ssh_directory_for_unit_mocks() + _check_output.return_value = '' + self.assertIsNone( + ssh_migrations.ssh_known_host_key( + 'juju-4665be-20180716142533-8', + 'nova-compute-lxd')) + + @mock.patch('subprocess.check_call') + def test_remove_known_host(self, _check_call): + self.patch_object(ssh_migrations, 'log') + self.setup_ssh_directory_for_unit_mocks() + ssh_migrations.remove_known_host( + 'juju-4665be-20180716142533-8', + 'nova-compute-lxd') + _check_call.assert_called_once_with([ + 'ssh-keygen', + '-f', + '/somedir/known_hosts', + '-R', + 'juju-4665be-20180716142533-8']) + + def test_is_same_key(self): + self.assertTrue( + ssh_migrations.is_same_key(UNIT1_HOST_KEY_1, UNIT1_HOST_KEY_2)) + + def test_is_same_key_false(self): + self.assertFalse( + ssh_migrations.is_same_key(UNIT1_HOST_KEY_1, UNIT2_HOST_KEY_1)) + + def setup_mocks_add_known_host(self): + self.setup_ssh_directory_for_unit_mocks() + self.patch_object(ssh_migrations.subprocess, 'check_output') + self.patch_object(ssh_migrations, 'log') + self.patch_object(ssh_migrations, 'ssh_known_host_key') + self.patch_object(ssh_migrations, 'remove_known_host') + + def test_add_known_host(self): + self.setup_mocks_add_known_host() + self.check_output.return_value = UNIT1_HOST_KEY_1 + self.ssh_known_host_key.return_value = '' + with patch_open() as (mock_open, mock_file): + ssh_migrations.add_known_host( + 'juju-4665be-20180716142533-8', + 'nova-compute-lxd') + mock_file.write.assert_called_with(UNIT1_HOST_KEY_1 + '\n') + mock_open.assert_called_with('/somedir/known_hosts', 'a') + self.assertFalse(self.remove_known_host.called) + + def test_add_known_host_existing_invalid_key(self): + self.setup_mocks_add_known_host() + self.check_output.return_value = UNIT1_HOST_KEY_1 + self.ssh_known_host_key.return_value = UNIT2_HOST_KEY_1 + with patch_open() as (mock_open, mock_file): + ssh_migrations.add_known_host( + 'juju-4665be-20180716142533-8', + 'nova-compute-lxd') + mock_file.write.assert_called_with(UNIT1_HOST_KEY_1 + '\n') + mock_open.assert_called_with('/somedir/known_hosts', 'a') + self.remove_known_host.assert_called_once_with( + 'juju-4665be-20180716142533-8', + 'nova-compute-lxd') + + def test_add_known_host_existing_valid_key(self): + self.setup_mocks_add_known_host() + self.check_output.return_value = UNIT2_HOST_KEY_1 + self.ssh_known_host_key.return_value = UNIT2_HOST_KEY_1 + with patch_open() as (mock_open, mock_file): + ssh_migrations.add_known_host( + 'juju-4665be-20180716142533-8', + 'nova-compute-lxd') + self.assertFalse(mock_open.called) + self.assertFalse(self.remove_known_host.called) + + def test_ssh_authorized_key_exists(self): + self.setup_mocks_add_known_host() + contents = '{}\n{}\n'.format(UNIT1_PUBKEY_1, UNIT2_PUBKEY_1) + with mock_open('/somedir/authorized_keys', contents=contents): + self.assertTrue( + ssh_migrations.ssh_authorized_key_exists( + UNIT1_PUBKEY_1, + 'nova-compute-lxd')) + + def test_ssh_authorized_key_exists_false(self): + self.setup_mocks_add_known_host() + contents = '{}\n'.format(UNIT1_PUBKEY_1) + with mock_open('/somedir/authorized_keys', contents=contents): + self.assertFalse( + ssh_migrations.ssh_authorized_key_exists( + UNIT2_PUBKEY_1, + 'nova-compute-lxd')) + + def test_add_authorized_key(self): + 
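"""A new public key should simply be appended to authorized_keys + (opened in mode 'a'), mirroring test_add_known_host above.""" + 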
self.setup_mocks_add_known_host() + with patch_open() as (mock_open, mock_file): + ssh_migrations.add_authorized_key( + UNIT1_PUBKEY_1, + 'nova-compute-lxd') + mock_file.write.assert_called_with(UNIT1_PUBKEY_1 + '\n') + mock_open.assert_called_with('/somedir/authorized_keys', 'a') + + def setup_mocks_ssh_compute_add_host_and_key(self): + self.setup_ssh_directory_for_unit_mocks() + self.patch_object(ssh_migrations, 'log') + self.patch_object(ssh_migrations, 'get_hostname') + self.patch_object(ssh_migrations, 'get_host_ip') + self.patch_object(ssh_migrations, 'ns_query') + self.patch_object(ssh_migrations, 'add_known_host') + self.patch_object(ssh_migrations, 'ssh_authorized_key_exists') + self.patch_object(ssh_migrations, 'add_authorized_key') + + def test_ssh_compute_add_host_and_key(self): + self.setup_mocks_ssh_compute_add_host_and_key() + self.get_hostname.return_value = 'alt-hostname.project.serverstack' + self.ns_query.return_value = '10.6.0.17' + ssh_migrations.ssh_compute_add_host_and_key( + UNIT1_PUBKEY_1, + 'juju-4665be-20180716142533-8.project.serverstack', + '10.5.0.17', + 'nova-compute-lxd') + expect_hosts = [ + 'juju-4665be-20180716142533-8.project.serverstack', + 'alt-hostname.project.serverstack', + 'alt-hostname'] + add_known_host_calls = [] + for host in expect_hosts: + add_known_host_calls.append( + mock.call(host, 'nova-compute-lxd', None)) + self.add_known_host.assert_has_calls( + add_known_host_calls, + any_order=True) + self.add_authorized_key.assert_called_once_with( + UNIT1_PUBKEY_1, + 'nova-compute-lxd', + None) + + def test_ssh_compute_add_host_and_key_priv_addr_not_ip(self): + self.setup_mocks_ssh_compute_add_host_and_key() + self.get_hostname.return_value = 'alt-hostname.project.serverstack' + self.ns_query.return_value = '10.6.0.17' + self.get_host_ip.return_value = '10.6.0.17' + ssh_migrations.ssh_compute_add_host_and_key( + UNIT1_PUBKEY_1, + 'juju-4665be-20180716142533-8.project.serverstack', + 'bob.maas', + 'nova-compute-lxd') + expect_hosts = [ + 'bob.maas', + 'juju-4665be-20180716142533-8.project.serverstack', + '10.6.0.17', + 'bob'] + add_known_host_calls = [] + for host in expect_hosts: + add_known_host_calls.append( + mock.call(host, 'nova-compute-lxd', None)) + self.add_known_host.assert_has_calls( + add_known_host_calls, + any_order=True) + self.add_authorized_key.assert_called_once_with( + UNIT1_PUBKEY_1, + 'nova-compute-lxd', + None) + + def test_ssh_compute_add_host_and_key_ipv6(self): + self.setup_mocks_ssh_compute_add_host_and_key() + ssh_migrations.ssh_compute_add_host_and_key( + UNIT1_PUBKEY_1, + 'juju-4665be-20180716142533-8.project.serverstack', + 'fe80::8842:a9ff:fe53:72e4', + 'nova-compute-lxd') + self.add_known_host.assert_called_once_with( + 'fe80::8842:a9ff:fe53:72e4', + 'nova-compute-lxd', + None) + self.add_authorized_key.assert_called_once_with( + UNIT1_PUBKEY_1, + 'nova-compute-lxd', + None) + + @mock.patch.object(ssh_migrations, 'ssh_compute_add_host_and_key') + @mock.patch.object(ssh_migrations, 'relation_get') + def test_ssh_compute_add(self, _relation_get, + _ssh_compute_add_host_and_key): + _relation_get.return_value = { + 'hostname': 'juju-4665be-20180716142533-8.project.serverstack', + 'private-address': '10.5.0.17', + } + ssh_migrations.ssh_compute_add( + UNIT1_PUBKEY_1, + 'nova-compute-lxd', + rid='cloud-compute:23', + unit='nova-compute-lxd/2') + _ssh_compute_add_host_and_key.assert_called_once_with( + UNIT1_PUBKEY_1, + 'juju-4665be-20180716142533-8.project.serverstack', + '10.5.0.17', + 'nova-compute-lxd', + user=None) + + 
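# The *_lines helpers below simply re-read the files that the add_* + # helpers populate; the module-level HOST_KEYS / PUB_KEYS fixtures stand + # in for real file contents. + 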
@mock.patch.object(ssh_migrations, 'known_hosts') + def test_ssh_known_hosts_lines(self, _known_hosts): + _known_hosts.return_value = '/somedir/known_hosts' + contents = '\n'.join(HOST_KEYS) + with mock_open('/somedir/known_hosts', contents=contents): + self.assertEqual( + ssh_migrations.ssh_known_hosts_lines('nova-compute-lxd'), + HOST_KEYS) + + @mock.patch.object(ssh_migrations, 'authorized_keys') + def test_ssh_authorized_keys_lines(self, _authorized_keys): + _authorized_keys.return_value = '/somedir/authorized_keys' + contents = '\n'.join(PUB_KEYS) + with mock_open('/somedir/authorized_keys', contents=contents): + self.assertEqual( + ssh_migrations.ssh_authorized_keys_lines('nova-compute-lxd'), + PUB_KEYS) + + def setup_mocks_ssh_compute_remove(self, isfile, authorized_keys_lines): + self.patch_object( + ssh_migrations, + 'ssh_authorized_keys_lines', + return_value=authorized_keys_lines) + self.patch_object(ssh_migrations, 'known_hosts') + self.patch_object( + ssh_migrations, + 'authorized_keys', + return_value='/somedir/authorized_keys') + self.patch_object( + ssh_migrations.os.path, + 'isfile', + return_value=isfile) + + def test_ssh_compute_remove(self): + self.setup_mocks_ssh_compute_remove( + isfile=True, + authorized_keys_lines=PUB_KEYS) + with patch_open() as (mock_open, mock_file): + ssh_migrations.ssh_compute_remove( + UNIT1_PUBKEY_1, + 'nova-compute-lxd') + mock_file.write.assert_called_with(UNIT2_PUBKEY_1 + '\n') + mock_open.assert_called_with('/somedir/authorized_keys', 'w') + + def test_ssh_compute_remove_missing_file(self): + self.setup_mocks_ssh_compute_remove( + isfile=False, + authorized_keys_lines=PUB_KEYS) + with patch_open() as (mock_open, mock_file): + ssh_migrations.ssh_compute_remove( + UNIT1_PUBKEY_1, + 'nova-compute-lxd') + self.assertFalse(mock_file.write.called) + + def test_ssh_compute_remove_missing_key(self): + self.setup_mocks_ssh_compute_remove( + isfile=False, + authorized_keys_lines=[UNIT2_PUBKEY_1]) + with patch_open() as (mock_open, mock_file): + ssh_migrations.ssh_compute_remove( + UNIT1_PUBKEY_1, + 'nova-compute-lxd') + self.assertFalse(mock_file.write.called) + + @mock.patch.object(ssh_migrations, 'ssh_known_hosts_lines') + @mock.patch.object(ssh_migrations, 'ssh_authorized_keys_lines') + def test_get_ssh_settings(self, _ssh_authorized_keys_lines, + _ssh_known_hosts_lines): + _ssh_authorized_keys_lines.return_value = PUB_KEYS + _ssh_known_hosts_lines.return_value = HOST_KEYS + expect = { + 'known_hosts_0': UNIT1_HOST_KEY_1, + 'known_hosts_1': UNIT1_HOST_KEY_2, + 'known_hosts_2': UNIT2_HOST_KEY_1, + 'known_hosts_3': UNIT2_HOST_KEY_2, + 'known_hosts_max_index': 4, + 'authorized_keys_0': UNIT1_PUBKEY_1, + 'authorized_keys_1': UNIT2_PUBKEY_1, + 'authorized_keys_max_index': 2, + } + self.assertEqual( + ssh_migrations.get_ssh_settings('nova-compute-lxd'), + expect) + + @mock.patch.object(ssh_migrations, 'ssh_known_hosts_lines') + @mock.patch.object(ssh_migrations, 'ssh_authorized_keys_lines') + def test_get_ssh_settings_user(self, _ssh_authorized_keys_lines, + _ssh_known_hosts_lines): + _ssh_authorized_keys_lines.return_value = PUB_KEYS + _ssh_known_hosts_lines.return_value = HOST_KEYS + expect = { + 'nova_known_hosts_0': UNIT1_HOST_KEY_1, + 'nova_known_hosts_1': UNIT1_HOST_KEY_2, + 'nova_known_hosts_2': UNIT2_HOST_KEY_1, + 'nova_known_hosts_3': UNIT2_HOST_KEY_2, + 'nova_known_hosts_max_index': 4, + 'nova_authorized_keys_0': UNIT1_PUBKEY_1, + 'nova_authorized_keys_1': UNIT2_PUBKEY_1, + 'nova_authorized_keys_max_index': 2, + } + self.assertEqual( + 
ssh_migrations.get_ssh_settings('nova-compute-lxd', user='nova'), + expect) + + @mock.patch.object(ssh_migrations, 'ssh_known_hosts_lines') + @mock.patch.object(ssh_migrations, 'ssh_authorized_keys_lines') + def test_get_ssh_settings_empty(self, _ssh_authorized_keys_lines, + _ssh_known_hosts_lines): + _ssh_authorized_keys_lines.return_value = [] + _ssh_known_hosts_lines.return_value = [] + self.assertEqual( + ssh_migrations.get_ssh_settings('nova-compute-lxd'), + {}) + + @mock.patch.object(ssh_migrations, 'get_ssh_settings') + def test_get_all_user_ssh_settings(self, _get_ssh_settings): + def ssh_settings(application_name, user=None): + base_settings = { + 'known_hosts_0': UNIT1_HOST_KEY_1, + 'known_hosts_max_index': 1, + 'authorized_keys_0': UNIT1_PUBKEY_1, + 'authorized_keys_max_index': 1} + user_settings = { + 'nova_known_hosts_0': UNIT1_HOST_KEY_1, + 'nova_known_hosts_max_index': 1, + 'nova_authorized_keys_0': UNIT1_PUBKEY_1, + 'nova_authorized_keys_max_index': 1} + if user: + return user_settings + else: + return base_settings + _get_ssh_settings.side_effect = ssh_settings + expect = { + 'known_hosts_0': UNIT1_HOST_KEY_1, + 'known_hosts_max_index': 1, + 'authorized_keys_0': UNIT1_PUBKEY_1, + 'authorized_keys_max_index': 1, + 'nova_known_hosts_0': UNIT1_HOST_KEY_1, + 'nova_known_hosts_max_index': 1, + 'nova_authorized_keys_0': UNIT1_PUBKEY_1, + 'nova_authorized_keys_max_index': 1} + self.assertEqual( + ssh_migrations.get_all_user_ssh_settings('nova-compute-lxd'), + expect) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/openstack/test_vaultlocker.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/openstack/test_vaultlocker.py new file mode 100644 index 0000000000000000000000000000000000000000..496fd56fdd1b6e1dab0d5abb5b08a82e40d94393 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/openstack/test_vaultlocker.py @@ -0,0 +1,261 @@ +# Copyright 2018 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
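+ +# These tests exercise two helpers: write_vaultlocker_conf(), which renders +# vaultlocker.conf and registers it as an alternative, and VaultKVContext, +# which assembles the secrets-storage relation context (caching the +# secret-id in unitdata between hook invocations).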
+ +import json +import mock +import os +import sys +import unittest + +import charmhelpers.contrib.openstack.vaultlocker as vaultlocker + +from .test_os_contexts import TestDB + + +INCOMPLETE_RELATION = { + 'secrets-storage:1': { + 'vault/0': {} + } +} + +COMPLETE_RELATION = { + 'secrets-storage:1': { + 'vault/0': { + 'vault_url': json.dumps('http://vault:8200'), + 'test-service/0_role_id': json.dumps('test-role-from-vault'), + 'test-service/0_token': + json.dumps('00c9a9ab-c523-459d-a250-2ce8f0877c03'), + } + } +} + +DIRTY_RELATION = { + 'secrets-storage:1': { + 'vault/0': { + 'vault_url': json.dumps('http://vault:8200'), + 'test-service/0_role_id': json.dumps('test-role-from-vault'), + 'test-service/0_token': + json.dumps('00c9a9ab-c523-459d-a250-2ce8f0877c03'), + }, + 'vault/1': { + 'vault_url': json.dumps('http://vault:8200'), + 'test-service/0_role_id': json.dumps('test-role-from-vault'), + 'test-service/0_token': + json.dumps('67b36149-dc86-4b80-96c4-35b91847d16e'), + } + } +} + +COMPLETE_WITH_CA_RELATION = { + 'secrets-storage:1': { + 'vault/0': { + 'vault_url': json.dumps('http://vault:8200'), + 'test-service/0_role_id': json.dumps('test-role-from-vault'), + 'test-service/0_token': + json.dumps('00c9a9ab-c523-459d-a250-2ce8f0877c03'), + 'vault_ca': json.dumps('test-ca-data'), + } + } +} + + +class VaultLockerTestCase(unittest.TestCase): + + to_patch = [ + 'hookenv', + 'templating', + 'alternatives', + 'host', + 'unitdata', + ] + + _target_path = '/var/lib/charm/test-service/vaultlocker.conf' + + def setUp(self): + for m in self.to_patch: + setattr(self, m, self._patch(m)) + self.hookenv.service_name.return_value = 'test-service' + self.hookenv.local_unit.return_value = 'test-service/0' + self.db = TestDB() + self.unitdata.kv.return_value = self.db + fake_exc = mock.MagicMock() + fake_exc.InvalidRequest = Exception + self.fake_hvac = mock.MagicMock() + self.fake_hvac.exceptions = fake_exc + sys.modules['hvac'] = self.fake_hvac + + def fake_retrieve_secret_id(self, url=None, token=None): + if token == self.good_token: + return '31be8e65-20a3-45e0-a4a8-4d5a0554fb60' + else: + raise self.fake_hvac.exceptions.InvalidRequest + + def _patch(self, target): + _m = mock.patch.object(vaultlocker, target) + _mock = _m.start() + self.addCleanup(_m.stop) + return _mock + + def test_write_vl_config(self): + ctxt = {'test': 'data'} + vaultlocker.write_vaultlocker_conf(context=ctxt) + self.hookenv.service_name.assert_called_once_with() + self.host.mkdir.assert_called_once_with( + os.path.dirname(self._target_path), + perms=0o700 + ) + self.templating.render.assert_called_once_with( + source='vaultlocker.conf.j2', + target=self._target_path, + context=ctxt, + perms=0o600, + ) + self.alternatives.install_alternative.assert_called_once_with( + 'vaultlocker.conf', + '/etc/vaultlocker/vaultlocker.conf', + self._target_path, + 100 + ) + + def test_write_vl_config_priority(self): + ctxt = {'test': 'data'} + vaultlocker.write_vaultlocker_conf(context=ctxt, priority=200) + self.hookenv.service_name.assert_called_once_with() + self.host.mkdir.assert_called_once_with( + os.path.dirname(self._target_path), + perms=0o700 + ) + self.templating.render.assert_called_once_with( + source='vaultlocker.conf.j2', + target=self._target_path, + context=ctxt, + perms=0o600, + ) + self.alternatives.install_alternative.assert_called_once_with( + 'vaultlocker.conf', + '/etc/vaultlocker/vaultlocker.conf', + self._target_path, + 200 + ) + + def _setup_relation(self, relation): + self.hookenv.relation_ids.side_effect = ( + 
lambda _: relation.keys() + ) + self.hookenv.related_units.side_effect = ( + lambda rid: relation[rid].keys() + ) + self.hookenv.relation_get.side_effect = ( + lambda unit, rid: + relation[rid][unit] + ) + + def test_context_incomplete(self): + self._setup_relation(INCOMPLETE_RELATION) + context = vaultlocker.VaultKVContext('charm-test') + self.assertEqual(context(), {}) + self.hookenv.relation_ids.assert_called_with('secrets-storage') + self.assertFalse(vaultlocker.vault_relation_complete()) + + @mock.patch.object(vaultlocker, 'retrieve_secret_id') + def test_context_complete(self, retrieve_secret_id): + self._setup_relation(COMPLETE_RELATION) + context = vaultlocker.VaultKVContext('charm-test') + retrieve_secret_id.return_value = 'a3551c8d-0147-4cb6-afc6-efb3db2fccb2' + self.assertEqual(context(), + {'role_id': 'test-role-from-vault', + 'secret_backend': 'charm-test', + 'secret_id': 'a3551c8d-0147-4cb6-afc6-efb3db2fccb2', + 'vault_url': 'http://vault:8200'}) + self.hookenv.relation_ids.assert_called_with('secrets-storage') + self.assertTrue(vaultlocker.vault_relation_complete()) + calls = [mock.call(url='http://vault:8200', + token='00c9a9ab-c523-459d-a250-2ce8f0877c03')] + retrieve_secret_id.assert_has_calls(calls) + + @mock.patch.object(vaultlocker, 'retrieve_secret_id') + def test_context_complete_cached_secret_id(self, retrieve_secret_id): + self._setup_relation(COMPLETE_RELATION) + context = vaultlocker.VaultKVContext('charm-test') + self.db.set('secret-id', '5502fd27-059b-4b0a-91b2-eaff40b6a112') + self.good_token = 'invalid-token' # i.e. cause failure + retrieve_secret_id.side_effect = self.fake_retrieve_secret_id + self.assertEqual(context(), + {'role_id': 'test-role-from-vault', + 'secret_backend': 'charm-test', + 'secret_id': '5502fd27-059b-4b0a-91b2-eaff40b6a112', + 'vault_url': 'http://vault:8200'}) + self.hookenv.relation_ids.assert_called_with('secrets-storage') + self.assertTrue(vaultlocker.vault_relation_complete()) + calls = [mock.call(url='http://vault:8200', + token='00c9a9ab-c523-459d-a250-2ce8f0877c03')] + retrieve_secret_id.assert_has_calls(calls) + + @mock.patch.object(vaultlocker, 'retrieve_secret_id') + def test_purge_old_tokens(self, retrieve_secret_id): + self._setup_relation(DIRTY_RELATION) + context = vaultlocker.VaultKVContext('charm-test') + self.db.set('secret-id', '5502fd27-059b-4b0a-91b2-eaff40b6a112') + self.good_token = '67b36149-dc86-4b80-96c4-35b91847d16e' + retrieve_secret_id.side_effect = self.fake_retrieve_secret_id + self.assertEqual(context(), + {'role_id': 'test-role-from-vault', + 'secret_backend': 'charm-test', + 'secret_id': '31be8e65-20a3-45e0-a4a8-4d5a0554fb60', + 'vault_url': 'http://vault:8200'}) + self.hookenv.relation_ids.assert_called_with('secrets-storage') + self.assertTrue(vaultlocker.vault_relation_complete()) + self.assertEquals(self.db.get('secret-id'), + '31be8e65-20a3-45e0-a4a8-4d5a0554fb60') + calls = [mock.call(url='http://vault:8200', + token='67b36149-dc86-4b80-96c4-35b91847d16e')] + retrieve_secret_id.assert_has_calls(calls) + + @mock.patch.object(vaultlocker, 'retrieve_secret_id') + def test_context_complete_cached_dirty_data(self, retrieve_secret_id): + self._setup_relation(DIRTY_RELATION) + context = vaultlocker.VaultKVContext('charm-test') + self.db.set('secret-id', '5502fd27-059b-4b0a-91b2-eaff40b6a112') + self.good_token = '67b36149-dc86-4b80-96c4-35b91847d16e' + retrieve_secret_id.side_effect = self.fake_retrieve_secret_id + self.assertEqual(context(), + {'role_id': 'test-role-from-vault', + 'secret_backend': 
'charm-test', + 'secret_id': '31be8e65-20a3-45e0-a4a8-4d5a0554fb60', + 'vault_url': 'http://vault:8200'}) + self.hookenv.relation_ids.assert_called_with('secrets-storage') + self.assertTrue(vaultlocker.vault_relation_complete()) + self.assertEquals(self.db.get('secret-id'), + '31be8e65-20a3-45e0-a4a8-4d5a0554fb60') + calls = [mock.call(url='http://vault:8200', + token='67b36149-dc86-4b80-96c4-35b91847d16e')] + retrieve_secret_id.assert_has_calls(calls) + + @mock.patch.object(vaultlocker, 'retrieve_secret_id') + def test_context_complete_with_ca(self, retrieve_secret_id): + self._setup_relation(COMPLETE_WITH_CA_RELATION) + retrieve_secret_id.return_value = 'token1234' + context = vaultlocker.VaultKVContext('charm-test') + retrieve_secret_id.return_value = 'a3551c8d-0147-4cb6-afc6-efb3db2fccb2' + self.assertEqual(context(), + {'role_id': 'test-role-from-vault', + 'secret_backend': 'charm-test', + 'secret_id': 'a3551c8d-0147-4cb6-afc6-efb3db2fccb2', + 'vault_url': 'http://vault:8200', + 'vault_ca': 'test-ca-data'}) + self.hookenv.relation_ids.assert_called_with('secrets-storage') + self.assertTrue(vaultlocker.vault_relation_complete()) + calls = [mock.call(url='http://vault:8200', + token='00c9a9ab-c523-459d-a250-2ce8f0877c03')] + retrieve_secret_id.assert_has_calls(calls) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/peerstorage/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/peerstorage/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/peerstorage/test_peerstorage.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/peerstorage/test_peerstorage.py new file mode 100644 index 0000000000000000000000000000000000000000..63b71366dd9e190e91ca2359210d63b45b4719a7 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/peerstorage/test_peerstorage.py @@ -0,0 +1,337 @@ +import copy +import json + +from tests.helpers import FakeRelation +from testtools import TestCase +from mock import patch, call +from charmhelpers.contrib import peerstorage + + +TO_PATCH = [ + 'current_relation_id', + 'is_relation_made', + 'local_unit', + 'relation_get', + '_relation_get', + 'relation_ids', + 'relation_set', + '_relation_set', + '_leader_get', + 'leader_set', + 'is_leader', +] +FAKE_RELATION_NAME = 'cluster' +FAKE_RELATION = { + 'cluster:0': { + 'cluster/0': { + }, + 'cluster/1': { + }, + 'cluster/2': { + }, + }, + +} +FAKE_RELATION_IDS = ['cluster:0'] +FAKE_LOCAL_UNIT = 'test_host' + + +class TestPeerStorage(TestCase): + def setUp(self): + super(TestPeerStorage, self).setUp() + for m in TO_PATCH: + setattr(self, m, self._patch(m)) + self.fake_relation_name = FAKE_RELATION_NAME + self.fake_relation = FakeRelation(FAKE_RELATION) + self.local_unit.return_value = FAKE_LOCAL_UNIT + self.relation_get.return_value = {'key1': 'value1', + 'key2': 'value2', + 'private-address': '127.0.0.1', + 'public-address': '91.189.90.159'} + + def _patch(self, method): + _m = patch('charmhelpers.contrib.peerstorage.' 
+ method) + mock = _m.start() + self.addCleanup(_m.stop) + return mock + + def test_peer_retrieve_no_relation(self): + self.relation_ids.return_value = [] + self.assertRaises(ValueError, peerstorage.peer_retrieve, 'key', relation_name=self.fake_relation_name) + + def test_peer_retrieve_with_relation(self): + self.relation_ids.return_value = FAKE_RELATION_IDS + peerstorage.peer_retrieve('key', self.fake_relation_name) + self.relation_get.assert_called_with(attribute='key', rid=FAKE_RELATION_IDS[0], unit=FAKE_LOCAL_UNIT) + + def test_peer_store_no_relation(self): + self.relation_ids.return_value = [] + self.assertRaises(ValueError, peerstorage.peer_store, 'key', 'value', relation_name=self.fake_relation_name) + + def test_peer_store_with_relation(self): + self.relation_ids.return_value = FAKE_RELATION_IDS + peerstorage.peer_store('key', 'value', self.fake_relation_name) + self.relation_set.assert_called_with(relation_id=FAKE_RELATION_IDS[0], + relation_settings={'key': 'value'}) + + def test_peer_echo_no_includes(self): + peerstorage.is_leader.side_effect = NotImplementedError + settings = {'key1': 'value1', 'key2': 'value2'} + self._relation_get.copy.return_value = settings + self._relation_get.return_value = settings + peerstorage.peer_echo() + self._relation_set.assert_called_with(relation_settings=settings) + + def test_peer_echo_includes(self): + peerstorage.is_leader.side_effect = NotImplementedError + settings = {'key1': 'value1'} + self._relation_get.copy.return_value = settings + self._relation_get.return_value = settings + peerstorage.peer_echo(['key1']) + self._relation_set.assert_called_with(relation_settings=settings) + + @patch.object(peerstorage, 'peer_store') + def test_peer_store_and_set_no_relation(self, peer_store): + self.is_relation_made.return_value = False + peerstorage.peer_store_and_set(relation_id='db', kwarg1='kwarg1_v') + self.relation_set.assert_called_with(relation_id='db', + relation_settings={}, + kwarg1='kwarg1_v') + peer_store.assert_not_called() + + @patch.object(peerstorage, 'peer_store') + def test_peer_store_and_set_no_relation_fatal(self, peer_store): + self.is_relation_made.return_value = False + self.assertRaises(ValueError, + peerstorage.peer_store_and_set, + relation_id='db', + kwarg1='kwarg1_v', + peer_store_fatal=True) + + @patch.object(peerstorage, 'peer_store') + def test_peer_store_and_set_kwargs(self, peer_store): + self.is_relation_made.return_value = True + peerstorage.peer_store_and_set(relation_id='db', kwarg1='kwarg1_v') + self.relation_set.assert_called_with(relation_id='db', + relation_settings={}, + kwarg1='kwarg1_v') + calls = [call('db_kwarg1', 'kwarg1_v', relation_name='cluster')] + peer_store.assert_has_calls(calls, any_order=True) + + @patch.object(peerstorage, 'peer_store') + def test_peer_store_and_rel_settings(self, peer_store): + self.is_relation_made.return_value = True + rel_setting = { + 'rel_set1': 'relset1_v' + } + peerstorage.peer_store_and_set(relation_id='db', + relation_settings=rel_setting) + self.relation_set.assert_called_with(relation_id='db', + relation_settings=rel_setting) + calls = [call('db_rel_set1', 'relset1_v', relation_name='cluster')] + peer_store.assert_has_calls(calls, any_order=True) + + @patch.object(peerstorage, 'peer_store') + def test_peer_store_and_set(self, peer_store): + self.is_relation_made.return_value = True + rel_setting = { + 'rel_set1': 'relset1_v' + } + peerstorage.peer_store_and_set(relation_id='db', + relation_settings=rel_setting, + kwarg1='kwarg1_v', + delimiter='+') + 
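# a custom delimiter only changes the peer-store key: 'rel_set1' on + # relation 'db' is stored as 'db+rel_set1' below (the default separator, + # as the kwargs test above shows, is '_') + 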
self.relation_set.assert_called_with(relation_id='db', + relation_settings=rel_setting, + kwarg1='kwarg1_v') + calls = [call('db+rel_set1', 'relset1_v', relation_name='cluster'), + call('db+kwarg1', 'kwarg1_v', relation_name='cluster')] + peer_store.assert_has_calls(calls, any_order=True) + + @patch.object(peerstorage, 'peer_retrieve') + def test_peer_retrieve_by_prefix(self, peer_retrieve): + rel_id = 'db:2' + settings = { + 'user': 'bob', + 'pass': 'reallyhardpassword', + 'host': 'myhost', + } + peer_settings = {rel_id + '_' + k: v for k, v in settings.items()} + peer_retrieve.return_value = peer_settings + self.assertEquals(peerstorage.peer_retrieve_by_prefix(rel_id), settings) + + @patch.object(peerstorage, 'peer_retrieve') + def test_peer_retrieve_by_prefix_empty_relation(self, peer_retrieve): + # If relation-get returns None, peer_retrieve_by_prefix returns + # an empty dictionary. + peer_retrieve.return_value = None + rel_id = 'db:2' + self.assertEquals(peerstorage.peer_retrieve_by_prefix(rel_id), {}) + + @patch.object(peerstorage, 'peer_retrieve') + def test_peer_retrieve_by_prefix_exc_list(self, peer_retrieve): + rel_id = 'db:2' + settings = { + 'user': 'bob', + 'pass': 'reallyhardpassword', + 'host': 'myhost', + } + peer_settings = {rel_id + '_' + k: v for k, v in settings.items()} + del settings['host'] + peer_retrieve.return_value = peer_settings + self.assertEquals(peerstorage.peer_retrieve_by_prefix(rel_id, + exc_list=['host']), + settings) + + @patch.object(peerstorage, 'peer_retrieve') + def test_peer_retrieve_by_prefix_inc_list(self, peer_retrieve): + rel_id = 'db:2' + settings = { + 'user': 'bob', + 'pass': 'reallyhardpassword', + 'host': 'myhost', + } + peer_settings = {rel_id + '_' + k: v for k, v in settings.items()} + peer_retrieve.return_value = peer_settings + self.assertEquals(peerstorage.peer_retrieve_by_prefix(rel_id, + inc_list=['host']), + {'host': 'myhost'}) + + def test_leader_get_migration_is_leader(self): + self.is_leader.return_value = True + l_settings = {'s3': 3} + r_settings = {'s1': 1, 's2': 2} + + def mock_relation_get(attribute=None, unit=None, rid=None): + if attribute: + if attribute in r_settings: + return r_settings.get(attribute) + else: + return None + + return copy.deepcopy(r_settings) + + def mock_leader_get(attribute=None): + if attribute: + if attribute in l_settings: + return l_settings.get(attribute) + else: + return None + + return copy.deepcopy(l_settings) + + def mock_leader_set(settings=None, **kwargs): + if settings: + l_settings.update(settings) + + l_settings.update(kwargs) + + def check_leader_db(dicta, dictb): + _dicta = copy.deepcopy(dicta) + _dictb = copy.deepcopy(dictb) + # list.sort() sorts in place and returns None, so use sorted() to + # actually compare the two migration lists + miga = sorted(json.loads(_dicta[migration_key])) + migb = sorted(json.loads(_dictb[migration_key])) + self.assertEqual(miga, migb) + del _dicta[migration_key] + del _dictb[migration_key] + self.assertEqual(_dicta, _dictb) + + migration_key = '__leader_get_migrated_settings__' + self._relation_get.side_effect = mock_relation_get + self._leader_get.side_effect = mock_leader_get + self.leader_set.side_effect = mock_leader_set + + self.assertEqual({'s1': 1, 's2': 2}, peerstorage._relation_get()) + self.assertEqual({'s3': 3}, peerstorage._leader_get()) + self.assertEqual({'s1': 1, 's2': 2, 's3': 3}, peerstorage.leader_get()) + check_leader_db({'s1': 1, 's2': 2, 's3': 3, + migration_key: '["s2", "s1"]'}, l_settings) + self.assertTrue(peerstorage.leader_set.called) + + peerstorage.leader_set.reset_mock() + self.assertEqual({'s1': 1, 's2': 2, 's3': 3}, 
peerstorage.leader_get()) + check_leader_db({'s1': 1, 's2': 2, 's3': 3, + migration_key: '["s2", "s1"]'}, l_settings) + self.assertFalse(peerstorage.leader_set.called) + + l_settings = {'s3': 3} + peerstorage.leader_set.reset_mock() + self.assertEqual(1, peerstorage.leader_get('s1')) + check_leader_db({'s1': 1, 's3': 3, + migration_key: '["s1"]'}, l_settings) + self.assertTrue(peerstorage.leader_set.called) + + # Test that leader vals take precedence over non-leader vals + r_settings['s3'] = 2 + r_settings['s4'] = 3 + l_settings['s4'] = 4 + + peerstorage.leader_set.reset_mock() + self.assertEqual(4, peerstorage.leader_get('s4')) + check_leader_db({'s1': 1, 's3': 3, 's4': 4, + migration_key: '["s1", "s4"]'}, l_settings) + self.assertTrue(peerstorage.leader_set.called) + + peerstorage.leader_set.reset_mock() + self.assertEqual({'s1': 1, 's2': 2, 's3': 2, 's4': 3}, + peerstorage._relation_get()) + check_leader_db({'s1': 1, 's3': 3, 's4': 4, + migration_key: '["s1", "s4"]'}, + peerstorage._leader_get()) + self.assertEqual({'s1': 1, 's2': 2, 's3': 3, 's4': 4}, + peerstorage.leader_get()) + check_leader_db({'s1': 1, 's2': 2, 's3': 3, 's4': 4, + migration_key: '["s3", "s2", "s1", "s4"]'}, + l_settings) + self.assertTrue(peerstorage.leader_set.called) + + def test_leader_get_migration_is_not_leader(self): + self.is_leader.return_value = False + l_settings = {'s3': 3} + r_settings = {'s1': 1, 's2': 2} + + def mock_relation_get(attribute=None, unit=None, rid=None): + if attribute: + if attribute in r_settings: + return r_settings.get(attribute) + else: + return None + + return copy.deepcopy(r_settings) + + def mock_leader_get(attribute=None): + if attribute: + if attribute in l_settings: + return l_settings.get(attribute) + else: + return None + + return copy.deepcopy(l_settings) + + def mock_leader_set(settings=None, **kwargs): + if settings: + l_settings.update(settings) + + l_settings.update(kwargs) + + self._relation_get.side_effect = mock_relation_get + self._leader_get.side_effect = mock_leader_get + self.leader_set.side_effect = mock_leader_set + self.assertEqual({'s1': 1, 's2': 2}, peerstorage._relation_get()) + self.assertEqual({'s3': 3}, peerstorage._leader_get()) + self.assertEqual({'s3': 3}, peerstorage.leader_get()) + self.assertEqual({'s3': 3}, l_settings) + self.assertFalse(peerstorage.leader_set.called) + + self.assertEqual({'s3': 3}, peerstorage.leader_get()) + self.assertEqual({'s3': 3}, l_settings) + self.assertFalse(peerstorage.leader_set.called) + + # Test that leader vals take precedence over non-leader vals + r_settings['s3'] = 2 + r_settings['s4'] = 3 + l_settings['s4'] = 4 + + self.assertEqual(4, peerstorage.leader_get('s4')) + self.assertEqual({'s3': 3, 's4': 4}, l_settings) + self.assertFalse(peerstorage.leader_set.called) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/python/test_python.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/python/test_python.py new file mode 100644 index 0000000000000000000000000000000000000000..59c8b6ede89eda52c0fd4eda8e6275bce7aac68e --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/python/test_python.py @@ -0,0 +1,23 @@ +#!/usr/bin/env python +# coding: utf-8 + +from __future__ import absolute_import + +from unittest import TestCase + +from charmhelpers.fetch.python import debug +from charmhelpers.fetch.python import packages +from charmhelpers.fetch.python import rpdb +from charmhelpers.fetch.python import 
version +from charmhelpers.contrib.python import debug as contrib_debug +from charmhelpers.contrib.python import packages as contrib_packages +from charmhelpers.contrib.python import rpdb as contrib_rpdb +from charmhelpers.contrib.python import version as contrib_version + + +class ContribDebugTestCase(TestCase): + def test_aliases(self): + assert contrib_debug is debug + assert contrib_packages is packages + assert contrib_rpdb is rpdb + assert contrib_version is version diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/saltstack/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/saltstack/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/saltstack/test_saltstates.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/saltstack/test_saltstates.py new file mode 100644 index 0000000000000000000000000000000000000000..a9ac35ea80c842cc3a2330066c7e7a3a05154d8b --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/saltstack/test_saltstates.py @@ -0,0 +1,75 @@ +# Copyright 2013 Canonical Ltd. +# +# Authors: +# Charm Helpers Developers +import mock +import unittest + +import charmhelpers.contrib.saltstack + + +class InstallSaltSupportTestCase(unittest.TestCase): + + def setUp(self): + super(InstallSaltSupportTestCase, self).setUp() + + patcher = mock.patch('charmhelpers.contrib.saltstack.subprocess') + self.mock_subprocess = patcher.start() + self.addCleanup(patcher.stop) + + patcher = mock.patch('charmhelpers.fetch') + self.mock_charmhelpers_fetch = patcher.start() + self.addCleanup(patcher.stop) + + def test_adds_ppa_by_default(self): + charmhelpers.contrib.saltstack.install_salt_support() + + expected_calls = [((cmd,), {}) for cmd in [ + ['/usr/bin/add-apt-repository', '--yes', 'ppa:saltstack/salt'], + ['/usr/bin/apt-get', 'update'], + ]] + self.assertEqual(self.mock_subprocess.check_call.call_count, 2) + self.assertEqual( + expected_calls, self.mock_subprocess.check_call.call_args_list) + self.mock_charmhelpers_fetch.apt_install.assert_called_once_with( + 'salt-common') + + def test_no_ppa(self): + charmhelpers.contrib.saltstack.install_salt_support( + from_ppa=False) + + self.assertEqual(self.mock_subprocess.check_call.call_count, 0) + self.mock_charmhelpers_fetch.apt_install.assert_called_once_with( + 'salt-common') + + +class UpdateMachineStateTestCase(unittest.TestCase): + + def setUp(self): + super(UpdateMachineStateTestCase, self).setUp() + + patcher = mock.patch('charmhelpers.contrib.saltstack.subprocess') + self.mock_subprocess = patcher.start() + self.addCleanup(patcher.stop) + + patcher = mock.patch('charmhelpers.contrib.templating.contexts.' 
+ 'juju_state_to_yaml') + self.mock_config_2_grains = patcher.start() + self.addCleanup(patcher.stop) + + def test_calls_local_salt_template(self): + charmhelpers.contrib.saltstack.update_machine_state( + 'states/install.yaml') + + self.mock_subprocess.check_call.assert_called_once_with([ + 'salt-call', + '--local', + 'state.template', + 'states/install.yaml', + ]) + + def test_updates_grains(self): + charmhelpers.contrib.saltstack.update_machine_state( + 'states/install.yaml') + + self.mock_config_2_grains.assert_called_once_with('/etc/salt/grains') diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/ssl/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/ssl/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/ssl/test_service.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/ssl/test_service.py new file mode 100644 index 0000000000000000000000000000000000000000..389f6db2341afa44d9f26399e490e55e5de71a82 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/ssl/test_service.py @@ -0,0 +1,78 @@ +from testtools import TestCase +import tempfile +import shutil +import subprocess +import six +import mock + +from os.path import exists, join, isdir + +from charmhelpers.contrib.ssl import service + + +class ServiceCATest(TestCase): + + def setUp(self): + super(ServiceCATest, self).setUp() + self.temp_dir = tempfile.mkdtemp() + + def tearDown(self): + super(ServiceCATest, self).tearDown() + shutil.rmtree(self.temp_dir, ignore_errors=True) + + @mock.patch("charmhelpers.contrib.ssl.service.log") + def test_init(self, *args): + """ + Tests that a ServiceCA is initialized with the correct directory + layout. + """ + ca_root_dir = join(self.temp_dir, 'ca') + ca = service.ServiceCA('fake-name', ca_root_dir) + ca.init() + + paths_to_verify = [ + 'certs/', + 'crl/', + 'newcerts/', + 'private/', + 'private/cacert.key', + 'cacert.pem', + 'serial', + 'index.txt', + 'ca.cnf', + 'signing.cnf', + ] + + for path in paths_to_verify: + full_path = join(ca_root_dir, path) + self.assertTrue(exists(full_path), + 'Path {} does not exist'.format(full_path)) + + if path.endswith('/'): + self.assertTrue(isdir(full_path), + 'Path {} is not a dir'.format(full_path)) + + @mock.patch("charmhelpers.contrib.ssl.service.log") + def test_create_cert(self, *args): + """ + Tests that a generated certificate is valid against the ca. 
+ """
+ ca_root_dir = join(self.temp_dir, 'ca')
+ ca = service.ServiceCA('fake-name', ca_root_dir)
+ ca.init()
+
+ ca.get_or_create_cert('fake-cert')
+
+ # Verify that the cert belongs to the ca
+ self.assertTrue('fake-cert' in ca)
+
+ full_cert_path = join(ca_root_dir, 'certs', 'fake-cert.crt')
+ cmd = ['openssl', 'verify', '-verbose',
+ '-CAfile', join(ca_root_dir, 'cacert.pem'), full_cert_path]
+
+ output = subprocess.check_output(cmd,
+ stderr=subprocess.STDOUT).strip()
+ expected = '{}: OK'.format(full_cert_path)
+ if six.PY3:
+ expected = bytes(expected, 'utf-8')
+ self.assertEqual(expected, output)
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/ssl/test_ssl.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/ssl/test_ssl.py
new file mode 100644
index 0000000000000000000000000000000000000000..b1fa461c5f2d24d2256427174657671f7a3f5162
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/ssl/test_ssl.py
@@ -0,0 +1,56 @@
+from mock import patch
+from testtools import TestCase
+
+from charmhelpers.contrib import ssl
+
+
+class HelpersTest(TestCase):
+ @patch('subprocess.check_call')
+ def test_generate_selfsigned_dict(self, mock_call):
+ subject = {"country": "UK",
+ "locality": "my_locality",
+ "state": "my_state",
+ "organization": "my_organization",
+ "organizational_unit": "my_unit",
+ "cn": "mysite.example.com",
+ "email": "me@example.com"
+ }
+
+ ssl.generate_selfsigned("mykey.key", "mycert.crt", subject=subject)
+ mock_call.assert_called_with(['/usr/bin/openssl', 'req', '-new',
+ '-newkey', 'rsa:1024', '-days', '365',
+ '-nodes', '-x509', '-keyout',
+ 'mykey.key', '-out', 'mycert.crt',
+ '-subj',
+ '/C=UK/ST=my_state/L=my_locality'
+ '/O=my_organization/OU=my_unit'
+ '/CN=mysite.example.com'
+ '/emailAddress=me@example.com']
+ )
+
+ @patch('charmhelpers.core.hookenv.log')
+ def test_generate_selfsigned_failure(self, mock_log):
+ # This is NOT enough: the function requires a "cn" key in subject
+ subject = {"country": "UK",
+ "locality": "my_locality"}
+
+ result = ssl.generate_selfsigned("mykey.key", "mycert.crt", subject=subject)
+ self.assertFalse(result)
+
+ @patch('subprocess.check_call')
+ def test_generate_selfsigned_file(self, mock_call):
+ ssl.generate_selfsigned("mykey.key", "mycert.crt", config="test.cnf")
+ mock_call.assert_called_with(['/usr/bin/openssl', 'req', '-new',
+ '-newkey', 'rsa:1024', '-days', '365',
+ '-nodes', '-x509', '-keyout',
+ 'mykey.key', '-out', 'mycert.crt',
+ '-config', 'test.cnf'])
+
+ @patch('subprocess.check_call')
+ def test_generate_selfsigned_cn_key(self, mock_call):
+ ssl.generate_selfsigned("mykey.key", "mycert.crt", keysize="2048", cn="mysite.example.com")
+ mock_call.assert_called_with(['/usr/bin/openssl', 'req', '-new',
+ '-newkey', 'rsa:2048', '-days', '365',
+ '-nodes', '-x509', '-keyout',
+ 'mykey.key', '-out', 'mycert.crt',
+ '-subj', '/CN=mysite.example.com'])
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/storage/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/storage/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/storage/test_bcache.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/storage/test_bcache.py
new file mode 100644 index
0000000000000000000000000000000000000000..847828d78bdc170c20fac5e23eb18d675aea492e --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/storage/test_bcache.py @@ -0,0 +1,90 @@ +# Copyright 2017 Canonical Ltd +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import os +import shutil +import json +from mock import patch +from testtools import TestCase +from tempfile import mkdtemp +from charmhelpers.contrib.storage.linux import bcache + +test_stats = { + 'bypassed': '128G\n', + 'cache_bypass_hits': '1132623\n', + 'cache_bypass_misses': '0\n', + 'cache_hit_ratio': '64\n', + 'cache_hits': '12177090\n', + 'cache_miss_collisions': '7091\n', + 'cache_misses': '6717011\n', + 'cache_readaheads': '0\n', +} + +tmpdir = 'bcache-stats-test.' +cacheset = 'abcde' +cachedev = 'sdfoo' + + +class BcacheTestCase(TestCase): + def setUp(self): + super(BcacheTestCase, self).setUp() + self.sysfs = sysfs = mkdtemp(prefix=tmpdir) + self.addCleanup(shutil.rmtree, sysfs) + p = patch('charmhelpers.contrib.storage.linux.bcache.SYSFS', new=sysfs) + p.start() + self.addCleanup(p.stop) + self.cacheset = '{}/fs/bcache/{}'.format(sysfs, cacheset) + os.makedirs(self.cacheset) + self.devcache = '{}/block/{}/bcache'.format(sysfs, cachedev) + for n in ['register', 'register_quiet']: + with open('{}/fs/bcache/{}'.format(sysfs, n), 'w') as f: + f.write('foo') + for kind in self.cacheset, self.devcache: + for sub in bcache.stats_intervals: + intvaldir = '{}/{}'.format(kind, sub) + os.makedirs(intvaldir) + for fn, val in test_stats.items(): + with open(os.path.join(intvaldir, fn), 'w') as f: + f.write(val) + + def test_get_bcache_fs(self): + bcachedirs = bcache.get_bcache_fs() + assert len(bcachedirs) == 1 + assert next(iter(bcachedirs)).cachepath.endswith('/fs/bcache/abcde') + + @patch('charmhelpers.contrib.storage.linux.bcache.log', lambda *args, **kwargs: None) + @patch('charmhelpers.contrib.storage.linux.bcache.os.listdir') + def test_get_bcache_fs_nobcache(self, mock_listdir): + mock_listdir.side_effect = OSError( + '[Errno 2] No such file or directory:...') + bcachedirs = bcache.get_bcache_fs() + assert bcachedirs == [] + + def test_get_stats_global(self): + out = bcache.get_stats_action( + 'global', 'hour') + out = json.loads(out) + assert len(out.keys()) == 1 + k = next(iter(out.keys())) + assert k.endswith(cacheset) + assert out[k]['bypassed'] == '128G' + + def test_get_stats_dev(self): + out = bcache.get_stats_action( + cachedev, 'hour') + out = json.loads(out) + assert len(out.keys()) == 1 + k = next(iter(out.keys())) + assert k.endswith('sdfoo/bcache') + assert out[k]['cache_hit_ratio'] == '64' diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/storage/test_linux_ceph.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/storage/test_linux_ceph.py new file mode 100644 index 0000000000000000000000000000000000000000..53c63a18f9b86316ba1d86b2634eeb6dd645ea2a --- /dev/null +++ 
b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/storage/test_linux_ceph.py @@ -0,0 +1,1893 @@ +from mock import patch, call, mock_open + +import collections +import six +import errno +from shutil import rmtree +from tempfile import mkdtemp +from threading import Timer +from testtools import TestCase +import json +import copy +import shutil + +import charmhelpers.contrib.storage.linux.ceph as ceph_utils + +from charmhelpers.core.unitdata import Storage +from subprocess import CalledProcessError +from tests.helpers import patch_open, FakeRelation +import nose.plugins.attrib +import os +import time + +LS_POOLS = b""" +.rgw.foo +images +volumes +rbd +""" + +LS_RBDS = b""" +rbd1 +rbd2 +rbd3 +""" + +IMG_MAP = b""" +bar +baz +""" +# Vastly abbreviated output from ceph osd dump --format=json +OSD_DUMP = b""" +{ + "pools": [ + { + "pool": 2, + "pool_name": "rbd", + "flags": 1, + "flags_names": "hashpspool", + "type": 1, + "size": 3, + "min_size": 2, + "crush_ruleset": 0, + "object_hash": 2, + "pg_num": 64, + "pg_placement_num": 64, + "crash_replay_interval": 0, + "last_change": "1", + "last_force_op_resend": "0", + "auid": 0, + "snap_mode": "selfmanaged", + "snap_seq": 0, + "snap_epoch": 0, + "pool_snaps": [], + "removed_snaps": "[]", + "quota_max_bytes": 0, + "quota_max_objects": 0, + "tiers": [], + "tier_of": -1, + "read_tier": -1, + "write_tier": -1, + "cache_mode": "writeback", + "target_max_bytes": 0, + "target_max_objects": 0, + "cache_target_dirty_ratio_micro": 0, + "cache_target_full_ratio_micro": 0, + "cache_min_flush_age": 0, + "cache_min_evict_age": 0, + "erasure_code_profile": "", + "hit_set_params": { + "type": "none" + }, + "hit_set_period": 0, + "hit_set_count": 0, + "stripe_width": 0 + } + ] +} +""" + +MONMAP_DUMP = b"""{ + "name": "ip-172-31-13-119", "rank": 0, "state": "leader", + "election_epoch": 18, "quorum": [0, 1, 2], + "outside_quorum": [], + "extra_probe_peers": [], + "sync_provider": [], + "monmap": { + "epoch": 1, + "fsid": "9fdc313c-db30-11e5-9805-0242fda74275", + "modified": "0.000000", + "created": "0.000000", + "mons": [ + { + "rank": 0, + "name": "ip-172-31-13-119", + "addr": "172.31.13.119:6789\\/0"}, + { + "rank": 1, + "name": "ip-172-31-24-50", + "addr": "172.31.24.50:6789\\/0"}, + { + "rank": 2, + "name": "ip-172-31-33-107", + "addr": "172.31.33.107:6789\\/0"} + ]}}""" + +CEPH_CLIENT_RELATION = { + 'ceph:8': { + 'ceph/0': { + 'auth': 'cephx', + 'broker-rsp-glance-0': '{"request-id": "0bc7dc54", "exit-code": 0}', + 'broker-rsp-glance-1': '{"request-id": "0880e22a", "exit-code": 0}', + 'broker-rsp-glance-2': '{"request-id": "0da543b8", "exit-code": 0}', + 'broker_rsp': '{"request-id": "0da543b8", "exit-code": 0}', + 'ceph-public-address': '10.5.44.103', + 'key': 'AQCLDttVuHXINhAAvI144CB09dYchhHyTUY9BQ==', + 'private-address': '10.5.44.103', + }, + 'ceph/1': { + 'auth': 'cephx', + 'ceph-public-address': '10.5.44.104', + 'key': 'AQCLDttVuHXINhAAvI144CB09dYchhHyTUY9BQ==', + 'private-address': '10.5.44.104', + }, + 'ceph/2': { + 'auth': 'cephx', + 'ceph-public-address': '10.5.44.105', + 'key': 'AQCLDttVuHXINhAAvI144CB09dYchhHyTUY9BQ==', + 'private-address': '10.5.44.105', + }, + 'glance/0': { + 'broker_req': '{"api-version": 1, "request-id": "0bc7dc54", "ops": [{"replicas": 3, "name": "glance", "op": "create-pool"}]}', + 'private-address': '10.5.44.109', + }, + } +} + +CEPH_CLIENT_RELATION_LEGACY = copy.deepcopy(CEPH_CLIENT_RELATION) +CEPH_CLIENT_RELATION_LEGACY['ceph:8']['ceph/0'] = { + 'auth': 'cephx', + 'broker_rsp': '{"exit-code": 
0}', + 'ceph-public-address': '10.5.44.103', + 'key': 'AQCLDttVuHXINhAAvI144CB09dYchhHyTUY9BQ==', + 'private-address': '10.5.44.103', +} + + +class TestConfig(): + + def __init__(self): + self.config = {} + + def set(self, key, value): + self.config[key] = value + + def get(self, key): + return self.config.get(key) + + +class CephBasicUtilsTests(TestCase): + def setUp(self): + super(CephBasicUtilsTests, self).setUp() + [self._patch(m) for m in [ + 'check_output', + ]] + + def _patch(self, method): + _m = patch.object(ceph_utils, method) + mock = _m.start() + self.addCleanup(_m.stop) + setattr(self, method, mock) + + def test_enabled_manager_modules(self): + self.check_output.return_value = b'{"enabled_modules": []}' + ceph_utils.enabled_manager_modules() + self.check_output.assert_called_once_with(['ceph', 'mgr', 'module', 'ls']) + + +class CephUtilsTests(TestCase): + def setUp(self): + super(CephUtilsTests, self).setUp() + [self._patch(m) for m in [ + 'check_call', + 'check_output', + 'config', + 'relation_get', + 'related_units', + 'relation_ids', + 'relation_set', + 'log', + 'cmp_pkgrevno', + 'enabled_manager_modules', + ]] + # Ensure the config is setup for mocking properly. + self.test_config = TestConfig() + self.config.side_effect = self.test_config.get + self.cmp_pkgrevno.return_value = 1 + self.enabled_manager_modules.return_value = [] + + def _patch(self, method): + _m = patch.object(ceph_utils, method) + mock = _m.start() + self.addCleanup(_m.stop) + setattr(self, method, mock) + + def _get_osd_settings_test_helper(self, settings, expected=None): + units = { + 'client:1': ['ceph-iscsi/1', 'ceph-iscsi/2'], + 'client:3': ['cinder-ceph/0', 'cinder-ceph/3']} + self.relation_ids.return_value = units.keys() + self.related_units.side_effect = lambda x: units[x] + self.relation_get.side_effect = lambda x, y, z: settings[y] + if expected: + self.assertEqual( + ceph_utils.get_osd_settings('client'), + expected) + else: + ceph_utils.get_osd_settings('client'), + + def test_get_osd_settings_all_unset(self): + settings = { + 'ceph-iscsi/1': None, + 'ceph-iscsi/2': None, + 'cinder-ceph/0': None, + 'cinder-ceph/3': None} + self._get_osd_settings_test_helper(settings, {}) + + def test_get_osd_settings_one_group_set(self): + settings = { + 'ceph-iscsi/1': '{"osd heartbeat grace": 5}', + 'ceph-iscsi/2': '{"osd heartbeat grace": 5}', + 'cinder-ceph/0': '{"osd heartbeat interval": 25}', + 'cinder-ceph/3': '{"osd heartbeat interval": 25}'} + self._get_osd_settings_test_helper( + settings, + {'osd heartbeat interval': 25, + 'osd heartbeat grace': 5}) + + def test_get_osd_settings_invalid_option(self): + settings = { + 'ceph-iscsi/1': '{"osd foobar": 5}', + 'ceph-iscsi/2': None, + 'cinder-ceph/0': None, + 'cinder-ceph/3': None} + self.assertRaises( + ceph_utils.OSDSettingNotAllowed, + self._get_osd_settings_test_helper, + settings) + + def test_get_osd_settings_conflicting_options(self): + settings = { + 'ceph-iscsi/1': '{"osd heartbeat grace": 5}', + 'ceph-iscsi/2': None, + 'cinder-ceph/0': '{"osd heartbeat grace": 6}', + 'cinder-ceph/3': None} + self.assertRaises( + ceph_utils.OSDSettingConflict, + self._get_osd_settings_test_helper, + settings) + + @patch.object(ceph_utils, 'get_osd_settings') + def test_send_osd_settings(self, _get_osd_settings): + self.relation_ids.return_value = ['client:1', 'client:3'] + _get_osd_settings.return_value = { + 'osd heartbeat grace': 5, + 'osd heartbeat interval': 25} + ceph_utils.send_osd_settings() + expected_calls = [ + call( + relation_id='client:1', + 
relation_settings={
+ 'osd-settings': ('{"osd heartbeat grace": 5, '
+ '"osd heartbeat interval": 25}')}),
+ call(
+ relation_id='client:3',
+ relation_settings={
+ 'osd-settings': ('{"osd heartbeat grace": 5, '
+ '"osd heartbeat interval": 25}')})]
+ self.relation_set.assert_has_calls(expected_calls, any_order=True)
+
+ @patch.object(ceph_utils, 'get_osd_settings')
+ def test_send_osd_settings_bad_settings(self, _get_osd_settings):
+ _get_osd_settings.side_effect = ceph_utils.OSDSettingConflict()
+ ceph_utils.send_osd_settings()
+ self.assertFalse(self.relation_set.called)
+
+ def test_validator_valid(self):
+ # 1 is an int
+ ceph_utils.validator(value=1,
+ valid_type=int)
+
+ def test_validator_valid_range(self):
+ # 1 is an int between 0 and 2
+ ceph_utils.validator(value=1,
+ valid_type=int,
+ valid_range=[0, 2])
+
+ def test_validator_invalid_range(self):
+ # 1 is an int that isn't in the valid list of only 0
+ self.assertRaises(ValueError, ceph_utils.validator,
+ value=1,
+ valid_type=int,
+ valid_range=[0])
+
+ def test_validator_invalid_string_list(self):
+ # "foo" is a string that isn't in the valid string list
+ self.assertRaises(AssertionError, ceph_utils.validator,
+ value="foo",
+ valid_type=six.string_types,
+ valid_range=["valid", "list", "of", "strings"])
+
+ def test_validator_valid_string(self):
+ ceph_utils.validator(value="foo",
+ valid_type=six.string_types,
+ valid_range=["foo"])
+
+ def test_validator_valid_string_type(self):
+ ceph_utils.validator(value="foo",
+ valid_type=str,
+ valid_range=["foo"])
+
+ def test_pool_add_cache_tier(self):
+ p = ceph_utils.Pool(name='test', service='admin')
+ p.add_cache_tier('cacher', 'readonly')
+ self.check_call.assert_has_calls([
+ call(['ceph', '--id', 'admin', 'osd', 'tier', 'add', 'test', 'cacher']),
+ call(['ceph', '--id', 'admin', 'osd', 'tier', 'cache-mode', 'cacher', 'readonly']),
+ call(['ceph', '--id', 'admin', 'osd', 'tier', 'set-overlay', 'test', 'cacher']),
+ call(['ceph', '--id', 'admin', 'osd', 'pool', 'set', 'cacher', 'hit_set_type', 'bloom']),
+ ])
+
+ @patch.object(ceph_utils, 'get_cache_mode')
+ def test_pool_remove_readonly_cache_tier(self, cache_mode):
+ cache_mode.return_value = 'readonly'
+
+ p = ceph_utils.Pool(name='test', service='admin')
+ p.remove_cache_tier(cache_pool='cacher')
+ self.check_call.assert_has_calls([
+ call(['ceph', '--id', 'admin', 'osd', 'tier', 'cache-mode', 'cacher', 'none']),
+ call(['ceph', '--id', 'admin', 'osd', 'tier', 'remove', 'test', 'cacher']),
+ ])
+
+ @patch.object(ceph_utils, 'get_cache_mode')
+ def test_pool_remove_writeback_cache_tier(self, cache_mode):
+ cache_mode.return_value = 'writeback'
+ self.cmp_pkgrevno.return_value = 1
+
+ p = ceph_utils.Pool(name='test', service='admin')
+ p.remove_cache_tier(cache_pool='cacher')
+ self.check_call.assert_has_calls([
+ call(['ceph', '--id', 'admin', 'osd', 'tier', 'cache-mode', 'cacher', 'forward',
+ '--yes-i-really-mean-it']),
+ call(['rados', '--id', 'admin', '-p', 'cacher', 'cache-flush-evict-all']),
+ call(['ceph', '--id', 'admin', 'osd', 'tier', 'remove-overlay', 'test']),
+ call(['ceph', '--id', 'admin', 'osd', 'tier', 'remove', 'test', 'cacher']),
+ ])
+
+ @patch.object(ceph_utils, 'get_osds')
+ def test_get_pg_num_pg_calc_values(self, get_osds):
+ """Tests that the calculated pg num works in the normal case"""
+ # Check the growth case ... e.g. 200 PGs per OSD if the cluster is
+ # expected to grow in the near future.
+ get_osds.return_value = range(1, 11)
+ self.test_config.set('pgs-per-osd', 200)
+ p = ceph_utils.Pool(name='test', service='admin')
+
+ # For Pool Size of 3, 200 PGs/OSD, and 40% of the overall data,
+ # the pg num should be 256
+ pg_num = p.get_pgs(pool_size=3, percent_data=40)
+ self.assertEqual(256, pg_num)
+
+ self.test_config.set('pgs-per-osd', 300)
+ pg_num = p.get_pgs(pool_size=3, percent_data=100)
+ self.assertEquals(1024, pg_num)
+
+ # Tests the case in which the expected OSD count is provided (and is
+ # greater than the found OSD count).
+ self.test_config.set('pgs-per-osd', 100)
+ self.test_config.set('expected-osd-count', 20)
+ pg_num = p.get_pgs(pool_size=3, percent_data=100)
+ self.assertEquals(512, pg_num)
+
+ # Test small % weight with a minimal OSD count (2 OSDs)
+ get_osds.return_value = range(1, 3)
+ self.test_config.set('expected-osd-count', None)
+ self.test_config.set('pgs-per-osd', None)
+ pg_num = p.get_pgs(pool_size=3, percent_data=0.1)
+ self.assertEquals(2, pg_num)
+
+ # Check device_class is passed to get_osds
+ p.get_pgs(pool_size=3, percent_data=90, device_class='nvme')
+ get_osds.assert_called_with('admin', 'nvme')
+
+ @patch.object(ceph_utils, 'get_osds')
+ def test_replicated_pool_create_old_ceph(self, get_osds):
+ self.cmp_pkgrevno.return_value = -1
+ get_osds.return_value = None
+ p = ceph_utils.ReplicatedPool(name='test', service='admin', replicas=3)
+ p.create()
+
+ self.check_call.assert_has_calls([
+ call(['ceph', '--id', 'admin', 'osd', 'pool', 'create', 'test', str(200)]),
+ call(['ceph', '--id', 'admin', 'osd', 'pool', 'set', 'test', 'size', str(3)]),
+ ])
+ self.assertEqual(self.check_call.call_count, 2)
+
+ @patch.object(ceph_utils, 'get_osds')
+ def test_replicated_pool_create_luminous_ceph(self, get_osds):
+ self.cmp_pkgrevno.side_effect = [-1, 1]
+ get_osds.return_value = None
+ p = ceph_utils.ReplicatedPool(name='test', service='admin', replicas=3)
+ p.create()
+
+ self.check_call.assert_has_calls([
+ call(['ceph', '--id', 'admin', 'osd',
+ 'pool', 'create', 'test', str(200)]),
+ call(['ceph', '--id', 'admin', 'osd',
+ 'pool', 'set', 'test', 'size', str(3)]),
+ call(['ceph', '--id', 'admin', 'osd', 'pool',
+ 'application', 'enable', 'test', 'unknown'])
+ ])
+ self.assertEqual(self.check_call.call_count, 3)
+
+ @patch.object(ceph_utils, 'get_osds')
+ def test_replicated_pool_create_small_osds(self, get_osds):
+ get_osds.return_value = range(1, 5)
+ self.cmp_pkgrevno.return_value = -1
+ p = ceph_utils.ReplicatedPool(name='test', service='admin', replicas=3,
+ percent_data=10)
+ p.create()
+
+ # Using the PG Calc, for 4 OSDs with a size of 3 and 10% of the data
+ # at 100 PGs/OSD, the number of expected placement groups will be 16
+ self.check_call.assert_has_calls([
+ call(['ceph', '--id', 'admin', 'osd', 'pool', 'create', 'test',
+ '16']),
+ ])
+
+ @patch.object(ceph_utils, 'get_osds')
+ def test_replicated_pool_create_medium_osds(self, get_osds):
+ self.cmp_pkgrevno.return_value = -1
+ get_osds.return_value = range(1, 9)
+ p = ceph_utils.ReplicatedPool(name='test', service='admin', replicas=3,
+ percent_data=50)
+ p.create()
+
+ # Using the PG Calc, for 8 OSDs with a size of 3 and 50% of the data
+ # at 100 PGs/OSD, the number of expected placement groups will be 128
+ self.check_call.assert_has_calls([
+ call(['ceph', '--id', 'admin', 'osd', 'pool', 'create', 'test',
+ '128']),
+ call(['ceph', '--id', 'admin', 'osd', 'pool', 'set', 'test', 'size', '3']),
+ ])
+
+ @patch.object(ceph_utils, 'get_osds')
+ def test_replicated_pool_create_autoscaler(self, get_osds):
+ self.enabled_manager_modules.return_value = ['pg_autoscaler']
+ self.cmp_pkgrevno.return_value = 1
+ get_osds.return_value = range(1, 9)
+ p = ceph_utils.ReplicatedPool(name='test', service='admin', replicas=3,
+ percent_data=50)
+ p.create()
+ # Using the PG Calc, for 8 OSDs with a size of 3 and 50% of the data
+ # at 100 PGs/OSD, the number of expected placement groups will be 128
+ self.check_call.assert_has_calls([
+ call(['ceph', '--id', 'admin', 'osd', 'pool',
+ 'create', '--pg-num-min=32', 'test', '128']),
+ call(['ceph', '--id', 'admin', 'osd', 'pool',
+ 'set', 'test', 'size', '3']),
+ call(['ceph', '--id', 'admin', 'osd', 'pool',
+ 'set', 'test', 'target_size_ratio', '0.5']),
+ call(['ceph', '--id', 'admin', 'osd', 'pool',
+ 'application', 'enable', 'test', 'unknown']),
+ call(['ceph', '--id', 'admin', 'osd', 'pool',
+ 'set', 'test', 'pg_autoscale_mode', 'on'])
+ ])
+
+ @patch.object(ceph_utils, 'get_osds')
+ def test_replicated_pool_create_autoscaler_small(self, get_osds):
+ self.enabled_manager_modules.return_value = ['pg_autoscaler']
+ self.cmp_pkgrevno.return_value = 1
+ get_osds.return_value = range(1, 3)
+ p = ceph_utils.ReplicatedPool(name='test', service='admin', replicas=3,
+ percent_data=1)
+ p.create()
+ # Using the PG Calc, for 2 OSDs with a size of 3 and 1% of the data
+ # at 100 PGs/OSD, the pg-num-min floor of 2 placement groups applies
+ self.check_call.assert_has_calls([
+ call(['ceph', '--id', 'admin', 'osd', 'pool',
+ 'create', '--pg-num-min=2', 'test', '2']),
+ call(['ceph', '--id', 'admin', 'osd', 'pool',
+ 'set', 'test', 'size', '3']),
+ call(['ceph', '--id', 'admin', 'osd', 'pool',
+ 'set', 'test', 'target_size_ratio', '0.01']),
+ call(['ceph', '--id', 'admin', 'osd', 'pool',
+ 'application', 'enable', 'test', 'unknown']),
+ call(['ceph', '--id', 'admin', 'osd', 'pool',
+ 'set', 'test', 'pg_autoscale_mode', 'on'])
+ ])
+
+ @patch.object(ceph_utils, 'get_osds')
+ def test_replicated_pool_create_large_osds(self, get_osds):
+ get_osds.return_value = range(1, 41)
+ self.cmp_pkgrevno.return_value = -1
+ p = ceph_utils.ReplicatedPool(name='test', service='admin', replicas=3,
+ percent_data=100)
+ p.create()
+
+ # Using the PG Calc, for 40 OSDs with a size of 3 and 100% of the
+ # data at 100 PGs/OSD, the number of expected placement groups
+ # will be 1024.
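[Editor's note -- illustrative sketch, not part of the patch. The "PG Calc" comments in the pool-creation tests above all follow one heuristic: scale the per-OSD PG budget by the pool's share of the data, divide by the replica count, and snap to a power of two with a small floor. The helper below is a reconstruction written for this review; the name approx_pg_num and the exact rounding rule are assumptions rather than charm-helpers source, but it reproduces every figure quoted in these tests.]

    import math

    def approx_pg_num(osd_count, pgs_per_osd, percent_data, pool_size):
        # (OSDs * PGs-per-OSD * share-of-data) / replicas, snapped to the
        # nearest power of two, never below the floor of 2 seen above.
        raw = (osd_count * pgs_per_osd * (percent_data / 100.0)) / pool_size
        if raw < 2:
            return 2
        return 2 ** int(round(math.log(raw, 2)))

    # approx_pg_num(10, 200, 40, 3)    -> 256   (test_get_pg_num_pg_calc_values)
    # approx_pg_num(20, 100, 100, 3)   -> 512   (expected-osd-count case)
    # approx_pg_num(8, 100, 50, 3)     -> 128   (medium/autoscaler tests)
    # approx_pg_num(40, 100, 100, 3)   -> 1024  (large_osds test)
    # approx_pg_num(1000, 100, 100, 3) -> 32768 (xlarge_osds test)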
+ self.check_call.assert_has_calls([ + call(['ceph', '--id', 'admin', 'osd', 'pool', 'create', 'test', + '1024']), + ]) + + @patch.object(ceph_utils, 'get_osds') + def test_replicated_pool_create_xlarge_osds(self, get_osds): + get_osds.return_value = range(1, 1001) + self.cmp_pkgrevno.return_value = -1 + p = ceph_utils.ReplicatedPool(name='test', service='admin', replicas=3, + percent_data=100) + p.create() + + # Using the PG Calc, for 1,000 OSDs with a size of 3 and 100% of the + # data at 100 PGs/OSD then the number of expected placement groups + # will be 32768 + self.check_call.assert_has_calls([ + call(['ceph', '--id', 'admin', 'osd', 'pool', 'create', 'test', + '32768']), + ]) + + @patch.object(ceph_utils, 'get_osds') + def test_replicated_pool_create_failed(self, get_osds): + get_osds.return_value = range(1, 1001) + self.check_call.side_effect = CalledProcessError(returncode=1, + cmd='mock', + output=None) + p = ceph_utils.ReplicatedPool(name='test', service='admin', replicas=3) + self.assertRaises(CalledProcessError, p.create) + + @patch.object(ceph_utils, 'get_osds') + @patch.object(ceph_utils, 'pool_exists') + def test_replicated_pool_skips_creation(self, pool_exists, get_osds): + get_osds.return_value = range(1, 1001) + pool_exists.return_value = True + p = ceph_utils.ReplicatedPool(name='test', service='admin', replicas=3) + p.create() + self.check_call.assert_has_calls([]) + + def test_erasure_pool_create_failed(self): + self.check_output.side_effect = CalledProcessError(returncode=1, + cmd='ceph', + output=None) + p = ceph_utils.ErasurePool('test', 'admin', 'foo') + self.assertRaises(ceph_utils.PoolCreationError, p.create) + + @patch.object(ceph_utils, 'get_erasure_profile') + @patch.object(ceph_utils, 'get_osds') + def test_erasure_pool_create(self, get_osds, erasure_profile): + self.cmp_pkgrevno.return_value = 1 + get_osds.return_value = range(1, 60) + erasure_profile.return_value = { + 'directory': '/usr/lib/x86_64-linux-gnu/ceph/erasure-code', + 'k': '2', + 'technique': 'reed_sol_van', + 'm': '1', + 'plugin': 'jerasure'} + p = ceph_utils.ErasurePool(name='test', service='admin', + percent_data=100) + p.create() + self.check_call.assert_has_calls([ + call(['ceph', '--id', 'admin', 'osd', 'pool', 'create', + '--pg-num-min=32', 'test', + '2048', '2048', 'erasure', 'default']), + call(['ceph', '--id', 'admin', 'osd', 'pool', + 'application', 'enable', 'test', 'unknown']) + ]) + + @patch.object(ceph_utils, 'get_erasure_profile') + @patch.object(ceph_utils, 'get_osds') + def test_erasure_pool_create_autoscaler(self, + get_osds, + erasure_profile): + self.enabled_manager_modules.return_value = ['pg_autoscaler'] + self.cmp_pkgrevno.return_value = 1 + get_osds.return_value = range(1, 60) + erasure_profile.return_value = { + 'directory': '/usr/lib/x86_64-linux-gnu/ceph/erasure-code', + 'k': '2', + 'technique': 'reed_sol_van', + 'm': '1', + 'plugin': 'jerasure'} + p = ceph_utils.ErasurePool(name='test', service='admin', + percent_data=100) + p.create() + self.check_call.assert_has_calls([ + call(['ceph', '--id', 'admin', 'osd', 'pool', + 'create', '--pg-num-min=32', 'test', + '2048', '2048', 'erasure', 'default']), + call(['ceph', '--id', 'admin', 'osd', 'pool', + 'application', 'enable', 'test', 'unknown']), + call(['ceph', '--id', 'admin', 'osd', 'pool', + 'set', 'test', 'target_size_ratio', '1.0']), + call(['ceph', '--id', 'admin', 'osd', 'pool', + 'set', 'test', 'pg_autoscale_mode', 'on']), + ]) + + def test_get_erasure_profile_none(self): + self.check_output.side_effect = 
CalledProcessError(1, 'ceph') + return_value = ceph_utils.get_erasure_profile('admin', 'unknown') + self.assertEqual(None, return_value) + + def test_pool_set_int(self): + self.check_call.return_value = 0 + ceph_utils.pool_set(service='admin', pool_name='data', key='test', value=2) + self.check_call.assert_has_calls([ + call(['ceph', '--id', 'admin', 'osd', 'pool', 'set', 'data', 'test', '2']) + ]) + + def test_pool_set_bool(self): + self.check_call.return_value = 0 + ceph_utils.pool_set(service='admin', pool_name='data', key='test', value=True) + self.check_call.assert_has_calls([ + call(['ceph', '--id', 'admin', 'osd', 'pool', 'set', 'data', 'test', 'true']) + ]) + + def test_pool_set_str(self): + self.check_call.return_value = 0 + ceph_utils.pool_set(service='admin', pool_name='data', key='test', value='two') + self.check_call.assert_has_calls([ + call(['ceph', '--id', 'admin', 'osd', 'pool', 'set', 'data', 'test', 'two']) + ]) + + def test_pool_set_fails(self): + self.check_call.side_effect = CalledProcessError(returncode=1, cmd='mock', + output=None) + self.assertRaises(CalledProcessError, ceph_utils.pool_set, + service='admin', pool_name='data', key='test', value=2) + + def test_snapshot_pool(self): + self.check_call.return_value = 0 + ceph_utils.snapshot_pool(service='admin', pool_name='data', snapshot_name='test-snap-1') + self.check_call.assert_has_calls([ + call(['ceph', '--id', 'admin', 'osd', 'pool', 'mksnap', 'data', 'test-snap-1']) + ]) + + def test_snapshot_pool_fails(self): + self.check_call.side_effect = CalledProcessError(returncode=1, cmd='mock', + output=None) + self.assertRaises(CalledProcessError, ceph_utils.snapshot_pool, + service='admin', pool_name='data', snapshot_name='test-snap-1') + + def test_remove_pool_snapshot(self): + self.check_call.return_value = 0 + ceph_utils.remove_pool_snapshot(service='admin', pool_name='data', snapshot_name='test-snap-1') + self.check_call.assert_has_calls([ + call(['ceph', '--id', 'admin', 'osd', 'pool', 'rmsnap', 'data', 'test-snap-1']) + ]) + + def test_set_pool_quota(self): + self.check_call.return_value = 0 + ceph_utils.set_pool_quota(service='admin', pool_name='data', + max_bytes=1024) + self.check_call.assert_has_calls([ + call(['ceph', '--id', 'admin', 'osd', 'pool', 'set-quota', 'data', + 'max_bytes', '1024']) + ]) + ceph_utils.set_pool_quota(service='admin', pool_name='data', + max_objects=1024) + self.check_call.assert_has_calls([ + call(['ceph', '--id', 'admin', 'osd', 'pool', 'set-quota', 'data', + 'max_objects', '1024']) + ]) + ceph_utils.set_pool_quota(service='admin', pool_name='data', + max_bytes=1024, max_objects=1024) + self.check_call.assert_has_calls([ + call(['ceph', '--id', 'admin', 'osd', 'pool', 'set-quota', 'data', + 'max_bytes', '1024', 'max_objects', '1024']) + ]) + + def test_remove_pool_quota(self): + self.check_call.return_value = 0 + ceph_utils.remove_pool_quota(service='admin', pool_name='data') + self.check_call.assert_has_calls([ + call(['ceph', '--id', 'admin', 'osd', 'pool', 'set-quota', 'data', 'max_bytes', '0']) + ]) + + @patch.object(ceph_utils, 'erasure_profile_exists') + def test_create_erasure_profile(self, existing_profile): + existing_profile.return_value = True + self.cmp_pkgrevno.return_value = -1 + ceph_utils.create_erasure_profile(service='admin', profile_name='super-profile', erasure_plugin_name='jerasure', + failure_domain='rack', data_chunks=10, coding_chunks=3) + + cmd = ['ceph', '--id', 'admin', 'osd', 'erasure-code-profile', 'set', 'super-profile', + 'plugin=' + 'jerasure', 
'k=' + str(10), 'm=' + str(3), + 'ruleset-failure-domain=' + 'rack', '--force'] + self.check_call.assert_has_calls([call(cmd)]) + + self.cmp_pkgrevno.return_value = 1 + ceph_utils.create_erasure_profile(service='admin', profile_name='super-profile', erasure_plugin_name='jerasure', + failure_domain='rack', data_chunks=10, coding_chunks=3) + + cmd = ['ceph', '--id', 'admin', 'osd', 'erasure-code-profile', 'set', 'super-profile', + 'plugin=' + 'jerasure', 'k=' + str(10), 'm=' + str(3), + 'crush-failure-domain=' + 'rack', '--force'] + self.check_call.assert_has_calls([call(cmd)]) + + @patch.object(ceph_utils, 'erasure_profile_exists') + def test_create_erasure_profile_local(self, existing_profile): + self.cmp_pkgrevno.return_value = -1 + existing_profile.return_value = False + ceph_utils.create_erasure_profile(service='admin', profile_name='super-profile', erasure_plugin_name='local', + failure_domain='rack', data_chunks=10, coding_chunks=3, locality=1) + + cmd = ['ceph', '--id', 'admin', 'osd', 'erasure-code-profile', 'set', 'super-profile', + 'plugin=' + 'local', 'k=' + str(10), 'm=' + str(3), + 'ruleset-failure-domain=' + 'rack', 'l=' + str(1)] + self.check_call.assert_has_calls([call(cmd)]) + + @patch.object(ceph_utils, 'erasure_profile_exists') + def test_create_erasure_profile_shec(self, existing_profile): + self.cmp_pkgrevno.return_value = -1 + existing_profile.return_value = False + ceph_utils.create_erasure_profile(service='admin', profile_name='super-profile', erasure_plugin_name='shec', + failure_domain='rack', data_chunks=10, coding_chunks=3, + durability_estimator=1) + + cmd = ['ceph', '--id', 'admin', 'osd', 'erasure-code-profile', 'set', 'super-profile', + 'plugin=' + 'shec', 'k=' + str(10), 'm=' + str(3), + 'ruleset-failure-domain=' + 'rack', 'c=' + str(1)] + self.check_call.assert_has_calls([call(cmd)]) + + def test_rename_pool(self): + ceph_utils.rename_pool(service='admin', old_name='old-pool', new_name='new-pool') + cmd = ['ceph', '--id', 'admin', 'osd', 'pool', 'rename', 'old-pool', 'new-pool'] + self.check_call.assert_called_with(cmd) + + def test_erasure_profile_exists(self): + self.check_call.return_value = 0 + profile_exists = ceph_utils.erasure_profile_exists(service='admin', name='super-profile') + cmd = ['ceph', '--id', 'admin', + 'osd', 'erasure-code-profile', 'get', + 'super-profile'] + self.check_call.assert_called_with(cmd) + self.assertEqual(True, profile_exists) + + def test_set_monitor_key(self): + cmd = ['ceph', '--id', 'admin', + 'config-key', 'put', 'foo', 'bar'] + ceph_utils.monitor_key_set(service='admin', key='foo', value='bar') + self.check_output.assert_called_with(cmd) + + def test_get_monitor_key(self): + cmd = ['ceph', '--id', 'admin', + 'config-key', 'get', 'foo'] + ceph_utils.monitor_key_get(service='admin', key='foo') + self.check_output.assert_called_with(cmd) + + def test_get_monitor_key_failed(self): + self.check_output.side_effect = CalledProcessError( + returncode=2, + cmd='ceph', + output='key foo does not exist') + output = ceph_utils.monitor_key_get(service='admin', key='foo') + self.assertEqual(None, output) + + def test_monitor_key_exists(self): + cmd = ['ceph', '--id', 'admin', + 'config-key', 'exists', 'foo'] + ceph_utils.monitor_key_exists(service='admin', key='foo') + self.check_call.assert_called_with(cmd) + + def test_monitor_key_doesnt_exist(self): + self.check_call.side_effect = CalledProcessError( + returncode=2, + cmd='ceph', + output='key foo does not exist') + output = ceph_utils.monitor_key_exists(service='admin', 
key='foo')
+ self.assertEqual(False, output)
+
+ def test_delete_monitor_key(self):
+ ceph_utils.monitor_key_delete(service='admin', key='foo')
+ cmd = ['ceph', '--id', 'admin',
+ 'config-key', 'del', 'foo']
+ self.check_output.assert_called_with(cmd)
+
+ def test_delete_monitor_key_failed(self):
+ self.check_output.side_effect = CalledProcessError(
+ returncode=2,
+ cmd='ceph',
+ output='deletion failed')
+ self.assertRaises(CalledProcessError, ceph_utils.monitor_key_delete,
+ service='admin', key='foo')
+
+ def test_get_monmap(self):
+ self.check_output.return_value = MONMAP_DUMP
+ cmd = ['ceph', '--id', 'admin',
+ 'mon_status', '--format=json']
+ ceph_utils.get_mon_map(service='admin')
+ self.check_output.assert_called_with(cmd)
+
+ @patch.object(ceph_utils, 'get_mon_map')
+ def test_hash_monitor_names(self, monmap):
+ expected_hash_list = [
+ '010d57d581604d411b315dd64112bff832ab92c7323fa06077134b50',
+ '8e0a9705c1aeafa1ce250cc9f1bb443fc6e5150e5edcbeb6eeb82e3c',
+ 'c3f8d36ba098c23ee920cb08cfb9beda6b639f8433637c190bdd56ec']
+ _monmap_dump = MONMAP_DUMP
+ if six.PY3:
+ _monmap_dump = _monmap_dump.decode('UTF-8')
+ monmap.return_value = json.loads(_monmap_dump)
+ hashed_mon_list = ceph_utils.hash_monitor_names(service='admin')
+ self.assertEqual(expected=expected_hash_list, observed=hashed_mon_list)
+
+ def test_get_cache_mode(self):
+ self.check_output.return_value = OSD_DUMP
+ cache_mode = ceph_utils.get_cache_mode(service='admin', pool_name='rbd')
+ self.assertEqual("writeback", cache_mode)
+
+ @patch('os.path.exists')
+ def test_add_key(self, _exists):
+ """It creates a new ceph keyring"""
+ _exists.return_value = False
+ ceph_utils.add_key('cinder', 'cephkey')
+ _cmd = ['ceph-authtool', '/etc/ceph/ceph.client.cinder.keyring',
+ '--create-keyring', '--name=client.cinder',
+ '--add-key=cephkey']
+ self.check_call.assert_called_with(_cmd)
+
+ @patch('os.path.exists')
+ def test_add_key_already_exists(self, _exists):
+ """It should insert the key into the existing keyring"""
+ _exists.return_value = True
+ try:
+ with patch("__builtin__.open", mock_open(read_data="foo")):
+ ceph_utils.add_key('cinder', 'cephkey')
+ except ImportError: # Python3
+ with patch("builtins.open", mock_open(read_data="foo")):
+ ceph_utils.add_key('cinder', 'cephkey')
+ self.assertTrue(self.log.called)
+ _cmd = ['ceph-authtool', '/etc/ceph/ceph.client.cinder.keyring',
+ '--create-keyring', '--name=client.cinder',
+ '--add-key=cephkey']
+ self.check_call.assert_called_with(_cmd)
+
+ @patch('os.path.exists')
+ def test_add_key_already_exists_and_key_exists(self, _exists):
+ """Nothing should happen, apart from a log message"""
+ _exists.return_value = True
+ try:
+ with patch("__builtin__.open", mock_open(read_data="cephkey")):
+ ceph_utils.add_key('cinder', 'cephkey')
+ except ImportError: # Python3
+ with patch("builtins.open", mock_open(read_data="cephkey")):
+ ceph_utils.add_key('cinder', 'cephkey')
+ self.assertTrue(self.log.called)
+ self.check_call.assert_not_called()
+
+ @patch('os.remove')
+ @patch('os.path.exists')
+ def test_delete_keyring(self, _exists, _remove):
+ """It deletes a ceph keyring."""
+ _exists.return_value = True
+ ceph_utils.delete_keyring('cinder')
+ _remove.assert_called_with('/etc/ceph/ceph.client.cinder.keyring')
+ self.assertTrue(self.log.called)
+
+ @patch('os.remove')
+ @patch('os.path.exists')
+ def test_delete_keyring_not_exists(self, _exists, _remove):
+ """It does not attempt to delete a keyring that does not exist."""
+ _exists.return_value = False
+ ceph_utils.delete_keyring('cinder')
+ self.assertTrue(self.log.called)
+ _remove.assert_not_called()
+
+ @patch('os.path.exists')
+ def test_create_keyfile(self, _exists):
+ """It creates a new ceph keyfile"""
+ _exists.return_value = False
+ with patch_open() as (_open, _file):
+ ceph_utils.create_key_file('cinder', 'cephkey')
+ _file.write.assert_called_with('cephkey')
+ self.assertTrue(self.log.called)
+
+ @patch('os.path.exists')
+ def test_create_key_file_already_exists(self, _exists):
+ """It does not overwrite an existing ceph keyfile"""
+ _exists.return_value = True
+ ceph_utils.create_key_file('cinder', 'cephkey')
+ self.assertTrue(self.log.called)
+
+ @patch('os.mkdir')
+ @patch.object(ceph_utils, 'apt_install')
+ @patch('os.path.exists')
+ def test_install(self, _exists, _install, _mkdir):
+ _exists.return_value = False
+ ceph_utils.install()
+ _mkdir.assert_called_with('/etc/ceph')
+ _install.assert_called_with('ceph-common', fatal=True)
+
+ def test_get_osds(self):
+ self.check_output.return_value = json.dumps([1, 2, 3]).encode('UTF-8')
+ self.assertEquals(ceph_utils.get_osds('test'), [1, 2, 3])
+
+ def test_get_osds_none(self):
+ self.check_output.return_value = json.dumps(None).encode('UTF-8')
+ self.assertEquals(ceph_utils.get_osds('test'), None)
+
+ def test_get_osds_device_class(self):
+ self.check_output.return_value = json.dumps([1, 2, 3]).encode('UTF-8')
+ self.assertEquals(ceph_utils.get_osds('test', 'nvme'), [1, 2, 3])
+ self.check_output.assert_called_once_with(
+ ['ceph', '--id', 'test',
+ 'osd', 'crush', 'class',
+ 'ls-osd', 'nvme', '--format=json']
+ )
+
+ def test_get_osds_device_class_older(self):
+ self.check_output.return_value = json.dumps([1, 2, 3]).encode('UTF-8')
+ self.cmp_pkgrevno.return_value = -1
+ self.assertEquals(ceph_utils.get_osds('test', 'nvme'), [1, 2, 3])
+ self.check_output.assert_called_once_with(
+ ['ceph', '--id', 'test', 'osd', 'ls', '--format=json']
+ )
+
+ @patch.object(ceph_utils, 'get_osds')
+ @patch.object(ceph_utils, 'pool_exists')
+ def test_create_pool(self, _exists, _get_osds):
+ """It creates rados pool correctly with default replicas"""
+ _exists.return_value = False
+ _get_osds.return_value = [1, 2, 3]
+ ceph_utils.create_pool(service='cinder', name='foo')
+ self.check_call.assert_has_calls([
+ call(['ceph', '--id', 'cinder', 'osd', 'pool',
+ 'create', 'foo', '100']),
+ call(['ceph', '--id', 'cinder', 'osd', 'pool', 'set',
+ 'foo', 'size', '3'])
+ ])
+
+ @patch.object(ceph_utils, 'get_osds')
+ @patch.object(ceph_utils, 'pool_exists')
+ def test_create_pool_2_replicas(self, _exists, _get_osds):
+ """It creates rados pool correctly with 2 replicas"""
+ _exists.return_value = False
+ _get_osds.return_value = [1, 2, 3]
+ ceph_utils.create_pool(service='cinder', name='foo', replicas=2)
+ self.check_call.assert_has_calls([
+ call(['ceph', '--id', 'cinder', 'osd', 'pool',
+ 'create', 'foo', '150']),
+ call(['ceph', '--id', 'cinder', 'osd', 'pool', 'set',
+ 'foo', 'size', '2'])
+ ])
+
+ @patch.object(ceph_utils, 'get_osds')
+ @patch.object(ceph_utils, 'pool_exists')
+ def test_create_pool_argonaut(self, _exists, _get_osds):
+ """It creates rados pool correctly with the default pg count
+ when the OSD count is unavailable"""
+ _exists.return_value = False
+ _get_osds.return_value = None
+ ceph_utils.create_pool(service='cinder', name='foo')
+ self.check_call.assert_has_calls([
+ call(['ceph', '--id', 'cinder', 'osd', 'pool',
+ 'create', 'foo', '200']),
+ call(['ceph', '--id', 'cinder', 'osd', 'pool', 'set',
+ 'foo', 'size', '3'])
+ ])
+
+ def test_create_pool_already_exists(self):
+ self._patch('pool_exists')
+ self.pool_exists.return_value = True
+ ceph_utils.create_pool(service='cinder', name='foo')
+ self.assertTrue(self.log.called)
+ self.check_call.assert_not_called()
+
+ def test_keyring_path(self):
+ """It correctly derives keyring path from service name"""
+ result = ceph_utils._keyring_path('cinder')
+ self.assertEquals('/etc/ceph/ceph.client.cinder.keyring', result)
+
+ def test_keyfile_path(self):
+ """It correctly derives keyfile path from service name"""
+ result = ceph_utils._keyfile_path('cinder')
+ self.assertEquals('/etc/ceph/ceph.client.cinder.key', result)
+
+ def test_pool_exists(self):
+ """It detects that an rbd pool exists"""
+ self.check_output.return_value = LS_POOLS
+ self.assertTrue(ceph_utils.pool_exists('cinder', 'volumes'))
+ self.assertTrue(ceph_utils.pool_exists('rgw', '.rgw.foo'))
+
+ def test_pool_does_not_exist(self):
+ """It detects that an rbd pool does not exist"""
+ self.check_output.return_value = LS_POOLS
+ self.assertFalse(ceph_utils.pool_exists('cinder', 'foo'))
+ self.assertFalse(ceph_utils.pool_exists('rgw', '.rgw'))
+
+ def test_pool_exists_error(self):
+ """ Ensure subprocess errors are sandboxed and reported as False """
+ self.check_output.side_effect = CalledProcessError(1, 'rados')
+ self.assertFalse(ceph_utils.pool_exists('cinder', 'foo'))
+
+ def test_rbd_exists(self):
+ self.check_output.return_value = LS_RBDS
+ self.assertTrue(ceph_utils.rbd_exists('service', 'pool', 'rbd1'))
+ self.check_output.assert_called_with(
+ ['rbd', 'list', '--id', 'service', '--pool', 'pool']
+ )
+
+ def test_rbd_does_not_exist(self):
+ self.check_output.return_value = LS_RBDS
+ self.assertFalse(ceph_utils.rbd_exists('service', 'pool', 'rbd4'))
+ self.check_output.assert_called_with(
+ ['rbd', 'list', '--id', 'service', '--pool', 'pool']
+ )
+
+ def test_rbd_exists_error(self):
+ """ Ensure subprocess errors are sandboxed and reported as False """
+ self.check_output.side_effect = CalledProcessError(1, 'rbd')
+ self.assertFalse(ceph_utils.rbd_exists('cinder', 'foo', 'rbd'))
+
+ def test_create_rbd_image(self):
+ ceph_utils.create_rbd_image('service', 'pool', 'image', 128)
+ _cmd = ['rbd', 'create', 'image',
+ '--size', '128',
+ '--id', 'service',
+ '--pool', 'pool']
+ self.check_call.assert_called_with(_cmd)
+
+ def test_delete_pool(self):
+ ceph_utils.delete_pool('cinder', 'pool')
+ _cmd = [
+ 'ceph', '--id', 'cinder',
+ 'osd', 'pool', 'delete',
+ 'pool', '--yes-i-really-really-mean-it'
+ ]
+ self.check_call.assert_called_with(_cmd)
+
+ def test_get_ceph_nodes(self):
+ self._patch('relation_ids')
+ self._patch('related_units')
+ self._patch('relation_get')
+ units = ['ceph/1', 'ceph2', 'ceph/3']
+ self.relation_ids.return_value = ['ceph:0']
+ self.related_units.return_value = units
+ self.relation_get.return_value = '192.168.1.1'
+ self.assertEquals(len(ceph_utils.get_ceph_nodes()), 3)
+
+ def test_get_ceph_nodes_not_related(self):
+ self._patch('relation_ids')
+ self.relation_ids.return_value = []
+ self.assertEquals(ceph_utils.get_ceph_nodes(), [])
+
+ def test_configure(self):
+ self._patch('add_key')
+ self._patch('create_key_file')
+ self._patch('get_ceph_nodes')
+ self._patch('modprobe')
+ _hosts = ['192.168.1.1', '192.168.1.2']
+ self.get_ceph_nodes.return_value = _hosts
+ _conf = ceph_utils.CEPH_CONF.format(
+ auth='cephx',
+ keyring=ceph_utils._keyring_path('cinder'),
+ mon_hosts=",".join(map(str, _hosts)),
+ use_syslog='true'
+ )
+ with patch_open() as (_open, _file):
+ ceph_utils.configure('cinder', 'key', 'cephx', 'true')
+ _file.write.assert_called_with(_conf)
+
_open.assert_called_with('/etc/ceph/ceph.conf', 'w') + self.modprobe.assert_called_with('rbd') + self.add_key.assert_called_with('cinder', 'key') + self.create_key_file.assert_called_with('cinder', 'key') + + def test_image_mapped(self): + self.check_output.return_value = IMG_MAP + self.assertTrue(ceph_utils.image_mapped('bar')) + + def test_image_not_mapped(self): + self.check_output.return_value = IMG_MAP + self.assertFalse(ceph_utils.image_mapped('foo')) + + def test_image_not_mapped_error(self): + self.check_output.side_effect = CalledProcessError(1, 'rbd') + self.assertFalse(ceph_utils.image_mapped('bar')) + + def test_map_block_storage(self): + _service = 'cinder' + _pool = 'bar' + _img = 'foo' + _cmd = [ + 'rbd', + 'map', + '{}/{}'.format(_pool, _img), + '--user', + _service, + '--secret', + ceph_utils._keyfile_path(_service), + ] + ceph_utils.map_block_storage(_service, _pool, _img) + self.check_call.assert_called_with(_cmd) + + def test_filesystem_mounted(self): + self._patch('mounts') + self.mounts.return_value = [['/afs', '/dev/sdb'], ['/bfs', '/dev/sdd']] + self.assertTrue(ceph_utils.filesystem_mounted('/afs')) + self.assertFalse(ceph_utils.filesystem_mounted('/zfs')) + + @patch('os.path.exists') + def test_make_filesystem(self, _exists): + _exists.return_value = True + ceph_utils.make_filesystem('/dev/sdd') + self.assertTrue(self.log.called) + self.check_call.assert_called_with(['mkfs', '-t', 'ext4', '/dev/sdd']) + + @patch('os.path.exists') + def test_make_filesystem_xfs(self, _exists): + _exists.return_value = True + ceph_utils.make_filesystem('/dev/sdd', 'xfs') + self.assertTrue(self.log.called) + self.check_call.assert_called_with(['mkfs', '-t', 'xfs', '/dev/sdd']) + + @patch('os.chown') + @patch('os.stat') + def test_place_data_on_block_device(self, _stat, _chown): + self._patch('mount') + self._patch('copy_files') + self._patch('umount') + _stat.return_value.st_uid = 100 + _stat.return_value.st_gid = 100 + ceph_utils.place_data_on_block_device('/dev/sdd', '/var/lib/mysql') + self.mount.assert_has_calls([ + call('/dev/sdd', '/mnt'), + call('/dev/sdd', '/var/lib/mysql', persist=True) + ]) + self.copy_files.assert_called_with('/var/lib/mysql', '/mnt') + self.umount.assert_called_with('/mnt') + _chown.assert_called_with('/var/lib/mysql', 100, 100) + + @patch('shutil.copytree') + @patch('os.listdir') + @patch('os.path.isdir') + def test_copy_files_is_dir(self, _isdir, _listdir, _copytree): + _isdir.return_value = True + subdirs = ['a', 'b', 'c'] + _listdir.return_value = subdirs + ceph_utils.copy_files('/source', '/dest') + for d in subdirs: + _copytree.assert_has_calls([ + call('/source/{}'.format(d), '/dest/{}'.format(d), + False, None) + ]) + + @patch('shutil.copytree') + @patch('os.listdir') + @patch('os.path.isdir') + def test_copy_files_include_symlinks(self, _isdir, _listdir, _copytree): + _isdir.return_value = True + subdirs = ['a', 'b', 'c'] + _listdir.return_value = subdirs + ceph_utils.copy_files('/source', '/dest', True) + for d in subdirs: + _copytree.assert_has_calls([ + call('/source/{}'.format(d), '/dest/{}'.format(d), + True, None) + ]) + + @patch('shutil.copytree') + @patch('os.listdir') + @patch('os.path.isdir') + def test_copy_files_ignore(self, _isdir, _listdir, _copytree): + _isdir.return_value = True + subdirs = ['a', 'b', 'c'] + _listdir.return_value = subdirs + ceph_utils.copy_files('/source', '/dest', True, False) + for d in subdirs: + _copytree.assert_has_calls([ + call('/source/{}'.format(d), '/dest/{}'.format(d), + True, False) + ]) + + 
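[Editor's note -- illustrative sketch, not part of the patch. The three copy_files tests above and test_copy_files_files below pin down the contract of charm-helpers' copy_files: walk the top level of the source directory, copytree() each subdirectory and copy2() each plain file, forwarding the symlinks/ignore arguments positionally. A minimal reconstruction of that contract for reference (the name copy_files_sketch is ours, not the library's):]

    import os
    import shutil

    def copy_files_sketch(src, dst, symlinks=False, ignore=None):
        for item in os.listdir(src):
            s = os.path.join(src, item)
            d = os.path.join(dst, item)
            if os.path.isdir(s):
                # matches the asserted call: copytree(src/d, dest/d, symlinks, ignore)
                shutil.copytree(s, d, symlinks, ignore)
            else:
                # matches the asserted call: copy2(src/f, dest/f)
                shutil.copy2(s, d)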
@patch('shutil.copy2')
+ @patch('os.listdir')
+ @patch('os.path.isdir')
+ def test_copy_files_files(self, _isdir, _listdir, _copy2):
+ _isdir.return_value = False
+ files = ['a', 'b', 'c']
+ _listdir.return_value = files
+ ceph_utils.copy_files('/source', '/dest')
+ for f in files:
+ _copy2.assert_has_calls([
+ call('/source/{}'.format(f), '/dest/{}'.format(f))
+ ])
+
+ def test_ensure_ceph_storage(self):
+ self._patch('pool_exists')
+ self.pool_exists.return_value = False
+ self._patch('create_pool')
+ self._patch('rbd_exists')
+ self.rbd_exists.return_value = False
+ self._patch('create_rbd_image')
+ self._patch('image_mapped')
+ self.image_mapped.return_value = False
+ self._patch('map_block_storage')
+ self._patch('filesystem_mounted')
+ self.filesystem_mounted.return_value = False
+ self._patch('make_filesystem')
+ self._patch('service_stop')
+ self._patch('service_start')
+ self._patch('service_running')
+ self.service_running.return_value = True
+ self._patch('place_data_on_block_device')
+ _service = 'mysql'
+ _pool = 'bar'
+ _rbd_img = 'foo'
+ _mount = '/var/lib/mysql'
+ _services = ['mysql']
+ _blk_dev = '/dev/rbd1'
+ ceph_utils.ensure_ceph_storage(_service, _pool,
+ _rbd_img, 1024, _mount,
+ _blk_dev, 'ext4', _services, 3)
+ self.create_pool.assert_called_with(_service, _pool, replicas=3)
+ self.create_rbd_image.assert_called_with(_service, _pool,
+ _rbd_img, 1024)
+ self.map_block_storage.assert_called_with(_service, _pool, _rbd_img)
+ self.make_filesystem.assert_called_with(_blk_dev, 'ext4')
+ self.service_stop.assert_called_with(_services[0])
+ self.place_data_on_block_device.assert_called_with(_blk_dev, _mount)
+ self.service_start.assert_called_with(_services[0])
+
+ def test_make_filesystem_default_filesystem(self):
+ """make_filesystem() uses ext4 as the default filesystem."""
+ device = '/dev/zero'
+ ceph_utils.make_filesystem(device)
+ self.check_call.assert_called_with(['mkfs', '-t', 'ext4', device])
+
+ def test_make_filesystem_no_device(self):
+ """make_filesystem() raises an IOError if the device does not exist."""
+ device = '/no/such/device'
+ e = self.assertRaises(IOError, ceph_utils.make_filesystem, device,
+ timeout=0)
+ self.assertEquals(device, e.filename)
+ self.assertEquals(errno.ENOENT, e.errno)
+ self.assertEquals(os.strerror(errno.ENOENT), e.strerror)
+ self.log.assert_called_with(
+ 'Gave up waiting on block device %s' % device, level='ERROR')
+
+ @nose.plugins.attrib.attr('slow')
+ def test_make_filesystem_timeout(self):
+ """
+ make_filesystem() allows the caller to specify how long it should
+ wait for the device to appear before it fails.
+ """
+ device = '/no/such/device'
+ timeout = 2
+ before = time.time()
+ self.assertRaises(IOError, ceph_utils.make_filesystem, device,
+ timeout=timeout)
+ after = time.time()
+ duration = after - before
+ self.assertTrue(timeout - duration < 0.1)
+ self.log.assert_called_with(
+ 'Gave up waiting on block device %s' % device, level='ERROR')
+
+ @nose.plugins.attrib.attr('slow')
+ def test_device_is_formatted_if_it_appears(self):
+ """
+ The specified device is formatted if it appears before the timeout
+ is reached.
+ """ + + def create_my_device(filename): + with open(filename, "w") as device: + device.write("hello\n") + + temp_dir = mkdtemp() + self.addCleanup(rmtree, temp_dir) + device = "%s/mydevice" % temp_dir + fstype = 'xfs' + timeout = 4 + t = Timer(2, create_my_device, [device]) + t.start() + ceph_utils.make_filesystem(device, fstype, timeout) + self.check_call.assert_called_with(['mkfs', '-t', fstype, device]) + + def test_existing_device_is_formatted(self): + """ + make_filesystem() formats the given device if it exists with the + specified filesystem. + """ + device = '/dev/zero' + fstype = 'xfs' + ceph_utils.make_filesystem(device, fstype) + self.check_call.assert_called_with(['mkfs', '-t', fstype, device]) + self.log.assert_called_with( + 'Formatting block device %s as ' + 'filesystem %s.' % (device, fstype), level='INFO' + ) + + @patch.object(ceph_utils, 'relation_ids') + @patch.object(ceph_utils, 'related_units') + @patch.object(ceph_utils, 'relation_get') + def test_ensure_ceph_keyring_no_relation_no_data(self, rget, runits, rids): + rids.return_value = [] + self.assertEquals(False, ceph_utils.ensure_ceph_keyring(service='foo')) + rids.return_value = ['ceph:0'] + runits.return_value = ['ceph/0'] + rget.return_value = '' + self.assertEquals(False, ceph_utils.ensure_ceph_keyring(service='foo')) + + @patch.object(ceph_utils, '_keyring_path') + @patch.object(ceph_utils, 'add_key') + @patch.object(ceph_utils, 'relation_ids') + def test_ensure_ceph_keyring_no_relation_but_key(self, rids, + create, _path): + rids.return_value = [] + self.assertTrue(ceph_utils.ensure_ceph_keyring(service='foo', + key='testkey')) + create.assert_called_with(service='foo', key='testkey') + _path.assert_called_with('foo') + + @patch.object(ceph_utils, '_keyring_path') + @patch.object(ceph_utils, 'add_key') + @patch.object(ceph_utils, 'relation_ids') + @patch.object(ceph_utils, 'related_units') + @patch.object(ceph_utils, 'relation_get') + def test_ensure_ceph_keyring_with_data(self, rget, runits, + rids, create, _path): + rids.return_value = ['ceph:0'] + runits.return_value = ['ceph/0'] + rget.return_value = 'fookey' + self.assertEquals(True, + ceph_utils.ensure_ceph_keyring(service='foo')) + create.assert_called_with(service='foo', key='fookey') + _path.assert_called_with('foo') + self.assertFalse(self.check_call.called) + + _path.return_value = '/etc/ceph/client.foo.keyring' + self.assertEquals( + True, + ceph_utils.ensure_ceph_keyring( + service='foo', user='adam', group='users')) + create.assert_called_with(service='foo', key='fookey') + _path.assert_called_with('foo') + self.check_call.assert_called_with([ + 'chown', + 'adam.users', + '/etc/ceph/client.foo.keyring' + ]) + + @patch.object(ceph_utils, 'service_name') + @patch.object(ceph_utils, 'uuid') + def test_ceph_broker_rq_class(self, uuid, service_name): + service_name.return_value = 'service_test' + uuid.uuid1.return_value = 'uuid' + rq = ceph_utils.CephBrokerRq() + rq.add_op_create_pool('pool1', replica_count=1) + rq.add_op_create_pool('pool1', replica_count=1) + rq.add_op_create_pool('pool2') + rq.add_op_create_pool('pool2') + rq.add_op_create_pool('pool3', group='test') + rq.add_op_request_access_to_group(name='test') + rq.add_op_request_access_to_group(name='test') + rq.add_op_request_access_to_group(name='objects', + key_name='test') + rq.add_op_request_access_to_group( + name='others', + object_prefix_permissions={'rwx': ['prefix1']}) + expected = { + 'api-version': 1, + 'request-id': 'uuid', + 'ops': [{'op': 'create-pool', 'name': 'pool1', 
'replicas': 1}, + {'op': 'create-pool', 'name': 'pool2', 'replicas': 3}, + {'op': 'create-pool', 'name': 'pool3', 'replicas': 3, 'group': 'test'}, + {'op': 'add-permissions-to-key', 'group': 'test', 'name': 'service_test'}, + {'op': 'add-permissions-to-key', 'group': 'objects', 'name': 'test'}, + { + 'op': 'add-permissions-to-key', + 'group': 'others', + 'name': 'service_test', + 'object-prefix-permissions': {u'rwx': [u'prefix1']}}] + } + request_dict = json.loads(rq.request) + for key in ['api-version', 'request-id']: + self.assertEqual(request_dict[key], expected[key]) + for (op_no, expected_op) in enumerate(expected['ops']): + for key in expected_op.keys(): + self.assertEqual( + request_dict['ops'][op_no][key], + expected_op[key]) + + @patch.object(ceph_utils, 'service_name') + @patch.object(ceph_utils, 'uuid') + def test_ceph_broker_rq_class_test_not_equal(self, uuid, service_name): + service_name.return_value = 'service_test' + uuid.uuid1.return_value = 'uuid' + rq1 = ceph_utils.CephBrokerRq() + rq1.add_op_create_pool('pool1') + rq1.add_op_request_access_to_group(name='test') + rq1.add_op_request_access_to_group(name='objects', + permission='rwx') + rq2 = ceph_utils.CephBrokerRq() + rq2.add_op_create_pool('pool1') + rq2.add_op_request_access_to_group(name='test') + rq2.add_op_request_access_to_group(name='objects', + permission='r') + self.assertFalse(rq1 == rq2) + + def test_ceph_broker_rsp_class(self): + rsp = ceph_utils.CephBrokerRsp(json.dumps({'exit-code': 0, + 'stderr': "Success"})) + self.assertEqual(rsp.exit_code, 0) + self.assertEqual(rsp.exit_msg, "Success") + self.assertEqual(rsp.request_id, None) + + def test_ceph_broker_rsp_class_rqid(self): + rsp = ceph_utils.CephBrokerRsp(json.dumps({'exit-code': 0, + 'stderr': "Success", + 'request-id': 'reqid1'})) + self.assertEqual(rsp.exit_code, 0) + self.assertEqual(rsp.exit_msg, 'Success') + self.assertEqual(rsp.request_id, 'reqid1') + + def setup_client_relation(self, relation): + relation = FakeRelation(relation) + self.relation_get.side_effect = relation.get + self.relation_ids.side_effect = relation.relation_ids + self.related_units.side_effect = relation.related_units + + # @patch.object(ceph_utils, 'uuid') + # @patch.object(ceph_utils, 'local_unit') + # def test_get_request_states(self, mlocal_unit, muuid): + # muuid.uuid1.return_value = '0bc7dc54' + @patch.object(ceph_utils, 'local_unit') + def test_get_request_states(self, mlocal_unit): + mlocal_unit.return_value = 'glance/0' + self.setup_client_relation(CEPH_CLIENT_RELATION) + rq = ceph_utils.CephBrokerRq() + rq.add_op_create_pool(name='glance', replica_count=3) + expect = {'ceph:8': {'complete': True, 'sent': True}} + self.assertEqual(ceph_utils.get_request_states(rq), expect) + + @patch.object(ceph_utils, 'local_unit') + def test_get_request_states_newrq(self, mlocal_unit): + mlocal_unit.return_value = 'glance/0' + self.setup_client_relation(CEPH_CLIENT_RELATION) + rq = ceph_utils.CephBrokerRq() + rq.add_op_create_pool(name='glance', replica_count=4) + expect = {'ceph:8': {'complete': False, 'sent': False}} + self.assertEqual(ceph_utils.get_request_states(rq), expect) + + @patch.object(ceph_utils, 'local_unit') + def test_get_request_states_pendingrq(self, mlocal_unit): + mlocal_unit.return_value = 'glance/0' + rel = copy.deepcopy(CEPH_CLIENT_RELATION) + del rel['ceph:8']['ceph/0']['broker-rsp-glance-0'] + self.setup_client_relation(rel) + rq = ceph_utils.CephBrokerRq() + rq.add_op_create_pool(name='glance', replica_count=3) + expect = {'ceph:8': {'complete': False, 
'sent': True}}
+        self.assertEqual(ceph_utils.get_request_states(rq), expect)
+
+    @patch.object(ceph_utils, 'local_unit')
+    def test_get_request_states_failedrq(self, mlocal_unit):
+        mlocal_unit.return_value = 'glance/0'
+        rel = copy.deepcopy(CEPH_CLIENT_RELATION)
+        rel['ceph:8']['ceph/0']['broker-rsp-glance-0'] = '{"request-id": "0bc7dc54", "exit-code": 1}'
+        self.setup_client_relation(rel)
+        rq = ceph_utils.CephBrokerRq()
+        rq.add_op_create_pool(name='glance', replica_count=3)
+        expect = {'ceph:8': {'complete': False, 'sent': True}}
+        self.assertEqual(ceph_utils.get_request_states(rq), expect)
+
+    @patch.object(ceph_utils, 'local_unit')
+    def test_is_request_sent(self, mlocal_unit):
+        mlocal_unit.return_value = 'glance/0'
+        self.setup_client_relation(CEPH_CLIENT_RELATION)
+        rq = ceph_utils.CephBrokerRq()
+        rq.add_op_create_pool(name='glance', replica_count=3)
+        self.assertTrue(ceph_utils.is_request_sent(rq))
+
+    @patch.object(ceph_utils, 'local_unit')
+    def test_is_request_sent_newrq(self, mlocal_unit):
+        mlocal_unit.return_value = 'glance/0'
+        self.setup_client_relation(CEPH_CLIENT_RELATION)
+        rq = ceph_utils.CephBrokerRq()
+        rq.add_op_create_pool(name='glance', replica_count=4)
+        self.assertFalse(ceph_utils.is_request_sent(rq))
+
+    @patch.object(ceph_utils, 'local_unit')
+    def test_is_request_sent_pending(self, mlocal_unit):
+        mlocal_unit.return_value = 'glance/0'
+        rel = copy.deepcopy(CEPH_CLIENT_RELATION)
+        del rel['ceph:8']['ceph/0']['broker-rsp-glance-0']
+        self.setup_client_relation(rel)
+        rq = ceph_utils.CephBrokerRq()
+        rq.add_op_create_pool(name='glance', replica_count=3)
+        self.assertTrue(ceph_utils.is_request_sent(rq))
+
+    @patch.object(ceph_utils, 'local_unit')
+    def test_is_request_sent_legacy(self, mlocal_unit):
+        mlocal_unit.return_value = 'glance/0'
+        self.setup_client_relation(CEPH_CLIENT_RELATION_LEGACY)
+        rq = ceph_utils.CephBrokerRq()
+        rq.add_op_create_pool(name='glance', replica_count=3)
+        self.assertTrue(ceph_utils.is_request_sent(rq))
+
+    @patch.object(ceph_utils, 'local_unit')
+    def test_is_request_sent_legacy_newrq(self, mlocal_unit):
+        mlocal_unit.return_value = 'glance/0'
+        self.setup_client_relation(CEPH_CLIENT_RELATION_LEGACY)
+        rq = ceph_utils.CephBrokerRq()
+        rq.add_op_create_pool(name='glance', replica_count=4)
+        self.assertFalse(ceph_utils.is_request_sent(rq))
+
+    @patch.object(ceph_utils, 'local_unit')
+    def test_is_request_sent_legacy_pending(self, mlocal_unit):
+        mlocal_unit.return_value = 'glance/0'
+        rel = copy.deepcopy(CEPH_CLIENT_RELATION_LEGACY)
+        del rel['ceph:8']['ceph/0']['broker_rsp']
+        self.setup_client_relation(rel)
+        rq = ceph_utils.CephBrokerRq()
+        rq.add_op_create_pool(name='glance', replica_count=3)
+        self.assertTrue(ceph_utils.is_request_sent(rq))
+
+    @patch.object(ceph_utils, 'uuid')
+    @patch.object(ceph_utils, 'local_unit')
+    def test_is_request_complete(self, mlocal_unit, muuid):
+        muuid.uuid1.return_value = '0bc7dc54'
+        mlocal_unit.return_value = 'glance/0'
+        self.setup_client_relation(CEPH_CLIENT_RELATION)
+        rq = ceph_utils.CephBrokerRq()
+        rq.add_op_create_pool(name='glance', replica_count=3)
+        self.assertTrue(ceph_utils.is_request_complete(rq))
+
+    @patch.object(ceph_utils, 'local_unit')
+    def test_is_request_complete_newrq(self, mlocal_unit):
+        mlocal_unit.return_value = 'glance/0'
+        self.setup_client_relation(CEPH_CLIENT_RELATION)
+        rq = ceph_utils.CephBrokerRq()
+        rq.add_op_create_pool(name='glance', replica_count=4)
+        self.assertFalse(ceph_utils.is_request_complete(rq))
+
+    @patch.object(ceph_utils, 'local_unit')
+    def test_is_request_complete_pending(self,
mlocal_unit): + mlocal_unit.return_value = 'glance/0' + rel = copy.deepcopy(CEPH_CLIENT_RELATION) + del rel['ceph:8']['ceph/0']['broker-rsp-glance-0'] + self.setup_client_relation(rel) + rq = ceph_utils.CephBrokerRq() + rq.add_op_create_pool(name='glance', replica_count=3) + self.assertFalse(ceph_utils.is_request_complete(rq)) + + @patch.object(ceph_utils, 'local_unit') + def test_is_request_complete_legacy(self, mlocal_unit): + mlocal_unit.return_value = 'glance/0' + self.setup_client_relation(CEPH_CLIENT_RELATION_LEGACY) + rq = ceph_utils.CephBrokerRq() + rq.add_op_create_pool(name='glance', replica_count=3) + self.assertTrue(ceph_utils.is_request_complete(rq)) + + @patch.object(ceph_utils, 'local_unit') + def test_is_request_complete_legacy_newrq(self, mlocal_unit): + mlocal_unit.return_value = 'glance/0' + self.setup_client_relation(CEPH_CLIENT_RELATION_LEGACY) + rq = ceph_utils.CephBrokerRq() + rq.add_op_create_pool(name='glance', replica_count=4) + self.assertFalse(ceph_utils.is_request_complete(rq)) + + @patch.object(ceph_utils, 'local_unit') + def test_is_request_complete_legacy_pending(self, mlocal_unit): + mlocal_unit.return_value = 'glance/0' + rel = copy.deepcopy(CEPH_CLIENT_RELATION_LEGACY) + del rel['ceph:8']['ceph/0']['broker_rsp'] + self.setup_client_relation(rel) + rq = ceph_utils.CephBrokerRq() + rq.add_op_create_pool(name='glance', replica_count=3) + self.assertFalse(ceph_utils.is_request_complete(rq)) + + def test_equivalent_broker_requests(self): + rq1 = ceph_utils.CephBrokerRq() + rq1.add_op_create_pool(name='glance', replica_count=4) + rq2 = ceph_utils.CephBrokerRq() + rq2.add_op_create_pool(name='glance', replica_count=4) + self.assertTrue(rq1 == rq2) + + def test_equivalent_broker_requests_diff1(self): + rq1 = ceph_utils.CephBrokerRq() + rq1.add_op_create_pool(name='glance', replica_count=3) + rq2 = ceph_utils.CephBrokerRq() + rq2.add_op_create_pool(name='glance', replica_count=4) + self.assertFalse(rq1 == rq2) + + def test_equivalent_broker_requests_diff2(self): + rq1 = ceph_utils.CephBrokerRq() + rq1.add_op_create_pool(name='glance', replica_count=3) + rq2 = ceph_utils.CephBrokerRq() + rq2.add_op_create_pool(name='cinder', replica_count=3) + self.assertFalse(rq1 == rq2) + + def test_equivalent_broker_requests_diff3(self): + rq1 = ceph_utils.CephBrokerRq() + rq1.add_op_create_pool(name='glance', replica_count=3) + rq2 = ceph_utils.CephBrokerRq(api_version=2) + rq2.add_op_create_pool(name='glance', replica_count=3) + self.assertFalse(rq1 == rq2) + + @patch.object(ceph_utils, 'uuid') + @patch.object(ceph_utils, 'local_unit') + def test_is_request_complete_for_rid(self, mlocal_unit, muuid): + muuid.uuid1.return_value = '0bc7dc54' + req = ceph_utils.CephBrokerRq() + req.add_op_create_pool(name='glance', replica_count=3) + mlocal_unit.return_value = 'glance/0' + self.setup_client_relation(CEPH_CLIENT_RELATION) + self.assertTrue(ceph_utils.is_request_complete_for_rid(req, 'ceph:8')) + + @patch.object(ceph_utils, 'uuid') + @patch.object(ceph_utils, 'local_unit') + def test_is_request_complete_for_rid_newrq(self, mlocal_unit, muuid): + muuid.uuid1.return_value = 'a44c0fa6' + req = ceph_utils.CephBrokerRq() + req.add_op_create_pool(name='glance', replica_count=4) + mlocal_unit.return_value = 'glance/0' + self.setup_client_relation(CEPH_CLIENT_RELATION) + self.assertFalse(ceph_utils.is_request_complete_for_rid(req, 'ceph:8')) + + @patch.object(ceph_utils, 'uuid') + @patch.object(ceph_utils, 'local_unit') + def test_is_request_complete_for_rid_failed(self, mlocal_unit, 
muuid):
+        muuid.uuid1.return_value = '0bc7dc54'
+        req = ceph_utils.CephBrokerRq()
+        req.add_op_create_pool(name='glance', replica_count=4)
+        mlocal_unit.return_value = 'glance/0'
+        rel = copy.deepcopy(CEPH_CLIENT_RELATION)
+        rel['ceph:8']['ceph/0']['broker-rsp-glance-0'] = '{"request-id": "0bc7dc54", "exit-code": 1}'
+        self.setup_client_relation(rel)
+        self.assertFalse(ceph_utils.is_request_complete_for_rid(req, 'ceph:8'))
+
+    @patch.object(ceph_utils, 'uuid')
+    @patch.object(ceph_utils, 'local_unit')
+    def test_is_request_complete_for_rid_pending(self, mlocal_unit, muuid):
+        muuid.uuid1.return_value = '0bc7dc54'
+        req = ceph_utils.CephBrokerRq()
+        req.add_op_create_pool(name='glance', replica_count=4)
+        mlocal_unit.return_value = 'glance/0'
+        rel = copy.deepcopy(CEPH_CLIENT_RELATION)
+        del rel['ceph:8']['ceph/0']['broker-rsp-glance-0']
+        self.setup_client_relation(rel)
+        self.assertFalse(ceph_utils.is_request_complete_for_rid(req, 'ceph:8'))
+
+    @patch.object(ceph_utils, 'uuid')
+    @patch.object(ceph_utils, 'local_unit')
+    def test_is_request_complete_for_rid_legacy(self, mlocal_unit, muuid):
+        muuid.uuid1.return_value = '0bc7dc54'
+        req = ceph_utils.CephBrokerRq()
+        req.add_op_create_pool(name='glance', replica_count=3)
+        mlocal_unit.return_value = 'glance/0'
+        self.setup_client_relation(CEPH_CLIENT_RELATION_LEGACY)
+        self.assertTrue(ceph_utils.is_request_complete_for_rid(req, 'ceph:8'))
+
+    @patch.object(ceph_utils, 'local_unit')
+    def test_get_broker_rsp_key(self, mlocal_unit):
+        mlocal_unit.return_value = 'glance/0'
+        self.assertEqual(ceph_utils.get_broker_rsp_key(), 'broker-rsp-glance-0')
+
+    @patch.object(ceph_utils, 'local_unit')
+    def test_send_request_if_needed(self, mlocal_unit):
+        mlocal_unit.return_value = 'glance/0'
+        self.setup_client_relation(CEPH_CLIENT_RELATION)
+        rq = ceph_utils.CephBrokerRq()
+        rq.add_op_create_pool(name='glance', replica_count=3)
+        ceph_utils.send_request_if_needed(rq)
+        self.assertFalse(self.relation_set.called)
+
+    @patch.object(ceph_utils, 'uuid')
+    @patch.object(ceph_utils, 'local_unit')
+    def test_send_request_if_needed_newrq(self, mlocal_unit, muuid):
+        muuid.uuid1.return_value = 'de67511e'
+        mlocal_unit.return_value = 'glance/0'
+        self.setup_client_relation(CEPH_CLIENT_RELATION)
+        rq = ceph_utils.CephBrokerRq()
+        rq.add_op_create_pool(name='glance', replica_count=4)
+        ceph_utils.send_request_if_needed(rq)
+        actual = json.loads(self.relation_set.call_args_list[0][1]['broker_req'])
+        self.assertEqual(actual['api-version'], 1)
+        self.assertEqual(actual['request-id'], 'de67511e')
+        self.assertEqual(actual['ops'][0]['replicas'], 4)
+        self.assertEqual(actual['ops'][0]['op'], 'create-pool')
+        self.assertEqual(actual['ops'][0]['name'], 'glance')
+
+    @patch.object(ceph_utils, 'config')
+    def test_ceph_conf_context(self, mock_config):
+        mock_config.return_value = "{'osd': {'foo': 1}}"
+        ctxt = ceph_utils.CephConfContext()()
+        self.assertEqual({'osd': {'foo': 1}}, ctxt)
+        mock_config.return_value = ("{'osd': {'foo': 1},"
+                                    "'unknown': {'blah': 1}}")
+        ctxt = ceph_utils.CephConfContext(['osd', 'mon'])()
+        self.assertEqual({'osd': {'foo': 1}}, ctxt)
+
+    @patch.object(ceph_utils, 'get_osd_settings')
+    @patch.object(ceph_utils, 'config')
+    def test_ceph_osd_conf_context_conflict(self, mock_config,
+                                            mock_get_osd_settings):
+        mock_config.return_value = "{'osd': {'osd heartbeat grace': 20}}"
+        mock_get_osd_settings.return_value = {
+            'osd heartbeat grace': 25,
+            'osd heartbeat interval': 5}
+        ctxt = ceph_utils.CephOSDConfContext()()
+        self.assertEqual(ctxt, {
+
'osd': collections.OrderedDict([('osd heartbeat grace', 20)]), + 'osd_from_client': collections.OrderedDict( + [('osd heartbeat interval', 5)]), + 'osd_from_client_conflict': collections.OrderedDict( + [('osd heartbeat grace', 25)])}) + + @patch.object(ceph_utils, 'local_unit', lambda: "nova-compute/0") + def test_is_broker_action_done(self): + tmpdir = mkdtemp() + try: + db_path = '{}/kv.db'.format(tmpdir) + with patch('charmhelpers.core.unitdata._KV', Storage(db_path)): + rq_id = "3d03e9f6-4c36-11e7-89ba-fa163e7c7ec6" + broker_key = ceph_utils.get_broker_rsp_key() + self.relation_get.return_value = {broker_key: + json.dumps({"request-id": + rq_id, + "exit-code": 0})} + action = 'restart_nova_compute' + ret = ceph_utils.is_broker_action_done(action, rid="ceph:1", + unit="ceph/0") + self.relation_get.assert_has_calls([call(rid='ceph:1', unit='ceph/0')]) + self.assertFalse(ret) + + ceph_utils.mark_broker_action_done(action) + self.assertTrue(os.path.exists(tmpdir)) + ret = ceph_utils.is_broker_action_done(action, rid="ceph:1", + unit="ceph/0") + self.assertTrue(ret) + finally: + if os.path.exists(tmpdir): + shutil.rmtree(tmpdir) + + def test_has_broker_rsp(self): + rq_id = "3d03e9f6-4c36-11e7-89ba-fa163e7c7ec6" + broker_key = ceph_utils.get_broker_rsp_key() + self.relation_get.return_value = {broker_key: + json.dumps({"request-id": + rq_id, + "exit-code": 0})} + ret = ceph_utils.has_broker_rsp(rid="ceph:1", unit="ceph/0") + self.assertTrue(ret) + self.relation_get.assert_has_calls([call(rid='ceph:1', unit='ceph/0')]) + + self.relation_get.return_value = {'something_else': + json.dumps({"request-id": + rq_id, + "exit-code": 0})} + ret = ceph_utils.has_broker_rsp(rid="ceph:1", unit="ceph/0") + self.assertFalse(ret) + + self.relation_get.return_value = None + ret = ceph_utils.has_broker_rsp(rid="ceph:1", unit="ceph/0") + self.assertFalse(ret) + + @patch.object(ceph_utils, 'local_unit', lambda: "nova-compute/0") + def test_mark_broker_action_done(self): + tmpdir = mkdtemp() + try: + db_path = '{}/kv.db'.format(tmpdir) + with patch('charmhelpers.core.unitdata._KV', Storage(db_path)): + rq_id = "3d03e9f6-4c36-11e7-89ba-fa163e7c7ec6" + broker_key = ceph_utils.get_broker_rsp_key() + self.relation_get.return_value = {broker_key: + json.dumps({"request-id": + rq_id})} + action = 'restart_nova_compute' + ceph_utils.mark_broker_action_done(action, rid="ceph:1", + unit="ceph/0") + key = 'unit_0_ceph_broker_action.{}'.format(action) + self.relation_get.assert_has_calls([call(rid='ceph:1', unit='ceph/0')]) + kvstore = Storage(db_path) + self.assertEqual(kvstore.get(key=key), rq_id) + finally: + if os.path.exists(tmpdir): + shutil.rmtree(tmpdir) + + def test_add_op_create_replicated_pool(self): + base_op = {'app-name': None, + 'group': None, + 'group-namespace': None, + 'max-bytes': None, + 'max-objects': None, + 'name': 'apool', + 'op': 'create-pool', + 'pg_num': None, + 'replicas': 3, + 'weight': None} + rq = ceph_utils.CephBrokerRq() + rq.add_op_create_replicated_pool('apool') + self.assertEqual(rq.ops, [base_op]) + rq = ceph_utils.CephBrokerRq() + rq.add_op_create_pool('apool', replica_count=42) + op = base_op.copy() + op['replicas'] = 42 + self.assertEqual(rq.ops, [op]) + rq = ceph_utils.CephBrokerRq() + rq.add_op_create_replicated_pool('apool', pg_num=42) + op = base_op.copy() + op['pg_num'] = 42 + self.assertEqual(rq.ops, [op]) + rq = ceph_utils.CephBrokerRq() + rq.add_op_create_replicated_pool('apool', weight=42) + op = base_op.copy() + op['weight'] = 42 + self.assertEqual(rq.ops, [op]) + rq = 
ceph_utils.CephBrokerRq() + rq.add_op_create_replicated_pool('apool', group=51) + op = base_op.copy() + op['group'] = 51 + self.assertEqual(rq.ops, [op]) + rq = ceph_utils.CephBrokerRq() + rq.add_op_create_replicated_pool('apool', namespace='sol-iii') + op = base_op.copy() + op['group-namespace'] = 'sol-iii' + self.assertEqual(rq.ops, [op]) + rq = ceph_utils.CephBrokerRq() + rq.add_op_create_replicated_pool('apool', app_name='earth') + op = base_op.copy() + op['app-name'] = 'earth' + self.assertEqual(rq.ops, [op]) + rq = ceph_utils.CephBrokerRq() + rq.add_op_create_replicated_pool('apool', max_bytes=42) + op = base_op.copy() + op['max-bytes'] = 42 + self.assertEqual(rq.ops, [op]) + rq = ceph_utils.CephBrokerRq() + rq.add_op_create_replicated_pool('apool', max_objects=42) + op = base_op.copy() + op['max-objects'] = 42 + self.assertEqual(rq.ops, [op]) + + def test_add_op_create_erasure_pool(self): + base_op = {'app-name': None, + 'erasure-profile': None, + 'group': None, + 'max-bytes': None, + 'max-objects': None, + 'name': 'apool', + 'op': 'create-pool', + 'pool-type': 'erasure', + 'weight': None} + rq = ceph_utils.CephBrokerRq() + rq.add_op_create_erasure_pool('apool') + self.assertEqual(rq.ops, [base_op]) + rq = ceph_utils.CephBrokerRq() + rq.add_op_create_erasure_pool('apool', weight=42) + op = base_op.copy() + op['weight'] = 42 + self.assertEqual(rq.ops, [op]) + rq = ceph_utils.CephBrokerRq() + rq.add_op_create_erasure_pool('apool', group=51) + op = base_op.copy() + op['group'] = 51 + self.assertEqual(rq.ops, [op]) + rq = ceph_utils.CephBrokerRq() + rq.add_op_create_erasure_pool('apool', app_name='earth') + op = base_op.copy() + op['app-name'] = 'earth' + self.assertEqual(rq.ops, [op]) + rq = ceph_utils.CephBrokerRq() + rq.add_op_create_erasure_pool('apool', max_bytes=42) + op = base_op.copy() + op['max-bytes'] = 42 + self.assertEqual(rq.ops, [op]) + rq = ceph_utils.CephBrokerRq() + rq.add_op_create_erasure_pool('apool', max_objects=42) + op = base_op.copy() + op['max-objects'] = 42 + self.assertEqual(rq.ops, [op]) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/storage/test_linux_storage_loopback.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/storage/test_linux_storage_loopback.py new file mode 100644 index 0000000000000000000000000000000000000000..85a130d4570518883b724f8d8856aed2b2d41bf9 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/storage/test_linux_storage_loopback.py @@ -0,0 +1,82 @@ +import unittest + +from mock import patch + +import charmhelpers.contrib.storage.linux.loopback as loopback + +LOOPBACK_DEVICES = b""" +/dev/loop0: [0805]:2244465 (/tmp/foo.img) +/dev/loop1: [0805]:2244466 (/tmp/bar.img) +/dev/loop2: [0805]:2244467 (/tmp/baz.img (deleted)) +""" + +# It's a mouthful. 
+STORAGE_LINUX_LOOPBACK = 'charmhelpers.contrib.storage.linux.loopback'
+
+
+class LoopbackStorageUtilsTests(unittest.TestCase):
+    @patch(STORAGE_LINUX_LOOPBACK + '.check_output')
+    def test_loopback_devices(self, output):
+        """It translates current loopback mapping to a dict"""
+        output.return_value = LOOPBACK_DEVICES
+        ex = {
+            '/dev/loop1': '/tmp/bar.img',
+            '/dev/loop0': '/tmp/foo.img',
+            '/dev/loop2': '/tmp/baz.img (deleted)'
+        }
+        self.assertEquals(loopback.loopback_devices(), ex)
+
+    @patch(STORAGE_LINUX_LOOPBACK + '.create_loopback')
+    @patch('subprocess.check_call')
+    @patch(STORAGE_LINUX_LOOPBACK + '.loopback_devices')
+    def test_loopback_create_already_exists(self, loopbacks, check_call,
+                                            create):
+        """It finds existing loopback device for requested file"""
+        loopbacks.return_value = {'/dev/loop1': '/tmp/bar.img'}
+        res = loopback.ensure_loopback_device('/tmp/bar.img', '5G')
+        self.assertEquals(res, '/dev/loop1')
+        self.assertFalse(create.called)
+        self.assertFalse(check_call.called)
+
+    @patch(STORAGE_LINUX_LOOPBACK + '.loopback_devices')
+    @patch(STORAGE_LINUX_LOOPBACK + '.create_loopback')
+    @patch('os.path.exists')
+    def test_loop_creation_no_truncate(self, path_exists, create_loopback,
+                                       loopbacks):
+        """It does not create a new sparse image for loopback if one exists"""
+        loopbacks.return_value = {}
+        path_exists.return_value = True
+        with patch('subprocess.check_call') as check_call:
+            loopback.ensure_loopback_device('/tmp/foo.img', '15G')
+            self.assertFalse(check_call.called)
+
+    @patch(STORAGE_LINUX_LOOPBACK + '.loopback_devices')
+    @patch(STORAGE_LINUX_LOOPBACK + '.create_loopback')
+    @patch('os.path.exists')
+    def test_ensure_loopback_creation(self, path_exists, create_loopback,
+                                      loopbacks):
+        """It creates a new sparse image for loopback if one does not exist"""
+        loopbacks.return_value = {}
+        path_exists.return_value = False
+        create_loopback.return_value = '/dev/loop0'
+        with patch(STORAGE_LINUX_LOOPBACK + '.check_call') as check_call:
+            loopback.ensure_loopback_device('/tmp/foo.img', '15G')
+            check_call.assert_called_with(['truncate', '--size', '15G',
+                                           '/tmp/foo.img'])
+
+    @patch.object(loopback, 'loopback_devices')
+    def test_create_loopback(self, _devs):
+        """It correctly calls losetup to create a loopback device"""
+        _devs.return_value = {'/dev/loop0': '/tmp/foo'}
+        with patch(STORAGE_LINUX_LOOPBACK + '.check_call') as check_call:
+            check_call.return_value = ''
+            result = loopback.create_loopback('/tmp/foo')
+            check_call.assert_called_with(['losetup', '--find', '/tmp/foo'])
+            self.assertEquals(result, '/dev/loop0')
+
+    @patch.object(loopback, 'loopback_devices')
+    def test_create_is_mapped_loopback_device(self, devs):
+        devs.return_value = {'/dev/loop0': "/tmp/manco"}
+        self.assertEquals(loopback.is_mapped_loopback_device("/dev/loop0"),
+                          "/tmp/manco")
+        self.assertFalse(loopback.is_mapped_loopback_device("/dev/loop1"))
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/storage/test_linux_storage_lvm.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/storage/test_linux_storage_lvm.py
new file mode 100644
index 0000000000000000000000000000000000000000..015d451dd2aad0f1d466f946ab8234934313e7d6
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/storage/test_linux_storage_lvm.py
@@ -0,0 +1,212 @@
+import unittest
+import subprocess
+
+from mock import patch
+
+import charmhelpers.contrib.storage.linux.lvm as lvm
+
+PVDISPLAY = b"""
+  --- Physical volume ---
+  PV Name               /dev/loop0
+  VG Name               foo
+  PV Size               10.00 MiB / not usable 2.00 MiB
+  Allocatable           yes
+  PE Size               4.00 MiB
+  Total PE              2
+  Free PE               2
+  Allocated PE          0
+  PV UUID               fyVqlr-pyrL-89On-f6MD-U91T-dEfc-SL0V2V
+
+"""
+
+EMPTY_VG_IN_PVDISPLAY = b"""
+  --- Physical volume ---
+  PV Name               /dev/loop0
+  VG Name
+  PV Size               10.00 MiB / not usable 2.00 MiB
+  Allocatable           yes
+  PE Size               4.00 MiB
+  Total PE              2
+  Free PE               2
+  Allocated PE          0
+  PV UUID               fyVqlr-pyrL-89On-f6MD-U91T-dEfc-SL0V2V
+
+"""
+LVS_DEFAULT = b"""
+  cinder-volumes-pool
+  testvol
+  volume-48be6ba0-84c3-4b8d-9be5-e68e47fc7682
+  volume-f8c1d2fd-1fa1-4d84-b4e0-431dba7d582e
+"""
+LVS_WITH_VG = b"""
+  cinder-volumes cinder-volumes-pool
+  cinder-volumes testvol
+  cinder-volumes volume-48be6ba0-84c3-4b8d-9be5-e68e47fc7682
+  cinder-volumes volume-f8c1d2fd-1fa1-4d84-b4e0-431dba7d582e
+"""
+LVS_THIN_POOLS = b"""
+  cinder-volumes-pool
+"""
+LVS_THIN_POOLS_WITH_VG = b"""
+  cinder-volumes cinder-volumes-pool
+"""
+
+# It's a mouthful.
+STORAGE_LINUX_LVM = 'charmhelpers.contrib.storage.linux.lvm'
+
+
+class LVMStorageUtilsTests(unittest.TestCase):
+    def test_find_volume_group_on_pv(self):
+        """It determines any volume group assigned to a LVM PV"""
+        with patch(STORAGE_LINUX_LVM + '.check_output') as check_output:
+            check_output.return_value = PVDISPLAY
+            vg = lvm.list_lvm_volume_group('/dev/loop0')
+            self.assertEquals(vg, 'foo')
+
+    def test_find_empty_volume_group_on_pv(self):
+        """Return empty string when no volume group is assigned to the PV"""
+        with patch(STORAGE_LINUX_LVM + '.check_output') as check_output:
+            check_output.return_value = EMPTY_VG_IN_PVDISPLAY
+            vg = lvm.list_lvm_volume_group('/dev/loop0')
+            self.assertEquals(vg, '')
+
+    @patch(STORAGE_LINUX_LVM + '.list_lvm_volume_group')
+    def test_deactivate_lvm_volume_groups(self, ls_vg):
+        """It deactivates active volume groups on LVM PV"""
+        ls_vg.return_value = 'foo-vg'
+        with patch(STORAGE_LINUX_LVM + '.check_call') as check_call:
+            lvm.deactivate_lvm_volume_group('/dev/loop0')
+            check_call.assert_called_with(['vgchange', '-an', 'foo-vg'])
+
+    def test_remove_lvm_physical_volume(self):
+        """It removes LVM physical volume signatures from block device"""
+        with patch(STORAGE_LINUX_LVM + '.Popen') as popen:
+            lvm.remove_lvm_physical_volume('/dev/foo')
+            popen.assert_called_with(['pvremove', '-ff', '/dev/foo'], stdin=-1)
+
+    def test_is_physical_volume(self):
+        """It properly reports block dev is an LVM PV"""
+        with patch(STORAGE_LINUX_LVM + '.check_output') as check_output:
+            check_output.return_value = PVDISPLAY
+            self.assertTrue(lvm.is_lvm_physical_volume('/dev/loop0'))
+
+    def test_is_not_physical_volume(self):
+        """It properly reports block dev is not an LVM PV"""
+        with patch(STORAGE_LINUX_LVM + '.check_output') as check_output:
+            check_output.side_effect = subprocess.CalledProcessError(2, 'cmd')
+            self.assertFalse(lvm.is_lvm_physical_volume('/dev/loop0'))
+
+    def test_pvcreate(self):
+        """It correctly calls pvcreate for a given block dev"""
+        with patch(STORAGE_LINUX_LVM + '.check_call') as check_call:
+            lvm.create_lvm_physical_volume('/dev/foo')
+            check_call.assert_called_with(['pvcreate', '/dev/foo'])
+
+    def test_vgcreate(self):
+        """It correctly calls vgcreate for given block dev and vol group"""
+        with patch(STORAGE_LINUX_LVM + '.check_call') as check_call:
+            lvm.create_lvm_volume_group('foo-vg', '/dev/foo')
+            check_call.assert_called_with(['vgcreate', 'foo-vg', '/dev/foo'])
+
+    def test_list_logical_volumes(self):
+        with patch(STORAGE_LINUX_LVM + '.check_output') as
check_output: + check_output.return_value = LVS_DEFAULT + self.assertEqual(lvm.list_logical_volumes(), [ + 'cinder-volumes-pool', + 'testvol', + 'volume-48be6ba0-84c3-4b8d-9be5-e68e47fc7682', + 'volume-f8c1d2fd-1fa1-4d84-b4e0-431dba7d582e']) + check_output.assert_called_with([ + 'lvs', + '--options', + 'lv_name', + '--noheadings']) + + def test_list_logical_volumes_empty(self): + with patch(STORAGE_LINUX_LVM + '.check_output') as check_output: + check_output.return_value = b'' + self.assertEqual(lvm.list_logical_volumes(), []) + + def test_list_logical_volumes_path_mode(self): + with patch(STORAGE_LINUX_LVM + '.check_output') as check_output: + check_output.return_value = LVS_WITH_VG + self.assertEqual(lvm.list_logical_volumes(path_mode=True), [ + 'cinder-volumes/cinder-volumes-pool', + 'cinder-volumes/testvol', + 'cinder-volumes/volume-48be6ba0-84c3-4b8d-9be5-e68e47fc7682', + 'cinder-volumes/volume-f8c1d2fd-1fa1-4d84-b4e0-431dba7d582e']) + check_output.assert_called_with([ + 'lvs', + '--options', + 'vg_name,lv_name', + '--noheadings']) + + def test_list_logical_volumes_select_criteria(self): + with patch(STORAGE_LINUX_LVM + '.check_output') as check_output: + check_output.return_value = LVS_THIN_POOLS + self.assertEqual( + lvm.list_logical_volumes(select_criteria='lv_attr =~ ^t'), + ['cinder-volumes-pool']) + check_output.assert_called_with([ + 'lvs', + '--options', + 'lv_name', + '--noheadings', + '--select', + 'lv_attr =~ ^t']) + + def test_list_thin_logical_volume_pools(self): + with patch(STORAGE_LINUX_LVM + '.check_output') as check_output: + check_output.return_value = LVS_THIN_POOLS + self.assertEqual( + lvm.list_thin_logical_volume_pools(), + ['cinder-volumes-pool']) + check_output.assert_called_with([ + 'lvs', + '--options', + 'lv_name', + '--noheadings', + '--select', + 'lv_attr =~ ^t']) + + def test_list_thin_logical_volume_pools_path_mode(self): + with patch(STORAGE_LINUX_LVM + '.check_output') as check_output: + check_output.return_value = LVS_THIN_POOLS_WITH_VG + self.assertEqual( + lvm.list_thin_logical_volume_pools(path_mode=True), + ['cinder-volumes/cinder-volumes-pool']) + check_output.assert_called_with([ + 'lvs', + '--options', + 'vg_name,lv_name', + '--noheadings', + '--select', + 'lv_attr =~ ^t']) + + def test_extend_logical_volume_by_device(self): + """It correctly calls pvcreate for a given block dev""" + with patch(STORAGE_LINUX_LVM + '.check_call') as check_call: + lvm.extend_logical_volume_by_device('mylv', '/dev/foo') + check_call.assert_called_with(['lvextend', 'mylv', '/dev/foo']) + + def test_create_logical_volume_nosize(self): + with patch(STORAGE_LINUX_LVM + '.check_call') as check_call: + lvm.create_logical_volume('testlv', 'testvg') + check_call.assert_called_with([ + 'lvcreate', + '--yes', + '-l', + '100%FREE', + '-n', 'testlv', 'testvg' + ]) + + def test_create_logical_volume_size(self): + with patch(STORAGE_LINUX_LVM + '.check_call') as check_call: + lvm.create_logical_volume('testlv', 'testvg', '10G') + check_call.assert_called_with([ + 'lvcreate', + '--yes', + '-L', + '10G', + '-n', 'testlv', 'testvg' + ]) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/storage/test_linux_storage_utils.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/storage/test_linux_storage_utils.py new file mode 100644 index 0000000000000000000000000000000000000000..bf4da919807dba72eb4eeee00c07320b6359b1d7 --- /dev/null +++ 
b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/storage/test_linux_storage_utils.py
@@ -0,0 +1,206 @@
+from mock import patch
+import unittest
+
+import charmhelpers.contrib.storage.linux.utils as storage_utils
+
+# It's a mouthful.
+STORAGE_LINUX_UTILS = 'charmhelpers.contrib.storage.linux.utils'
+
+
+class MiscStorageUtilsTests(unittest.TestCase):
+
+    @patch(STORAGE_LINUX_UTILS + '.check_output')
+    @patch(STORAGE_LINUX_UTILS + '.call')
+    @patch(STORAGE_LINUX_UTILS + '.check_call')
+    def test_zap_disk(self, check_call, call, check_output):
+        """It calls sgdisk correctly to zap disk"""
+        check_output.return_value = b'200\n'
+        storage_utils.zap_disk('/dev/foo')
+        call.assert_any_call(['sgdisk', '--zap-all', '--', '/dev/foo'])
+        call.assert_any_call(['sgdisk', '--clear', '--mbrtogpt',
+                              '--', '/dev/foo'])
+        check_output.assert_any_call(['blockdev', '--getsz', '/dev/foo'])
+        check_call.assert_any_call(['dd', 'if=/dev/zero', 'of=/dev/foo',
+                                    'bs=1M', 'count=1'])
+        check_call.assert_any_call(['dd', 'if=/dev/zero', 'of=/dev/foo',
+                                    'bs=512', 'count=100', 'seek=100'])
+
+    @patch(STORAGE_LINUX_UTILS + '.S_ISBLK')
+    @patch('os.path.exists')
+    @patch('os.stat')
+    def test_is_block_device(self, stat, exists, S_ISBLK):
+        """It detects device node is block device"""
+        # mock.patch decorators apply bottom-up, so the first argument is
+        # the os.stat mock and the last is the patched S_ISBLK.
+        class fake_stat:
+            st_mode = True
+        stat.return_value = fake_stat()
+        S_ISBLK.return_value = True
+        exists.return_value = True
+        self.assertTrue(storage_utils.is_block_device('/dev/foo'))
+
+    @patch(STORAGE_LINUX_UTILS + '.S_ISBLK')
+    @patch('os.path.exists')
+    @patch('os.stat')
+    def test_is_block_device_does_not_exist(self, stat, exists, S_ISBLK):
+        """It detects a missing device node is not a block device"""
+        class fake_stat:
+            st_mode = True
+        stat.return_value = fake_stat()
+        S_ISBLK.return_value = True
+        exists.return_value = False
+        self.assertFalse(storage_utils.is_block_device('/dev/foo'))
+
+    @patch(STORAGE_LINUX_UTILS + '.check_output')
+    def test_is_device_mounted(self, check_output):
+        """It detects mounted devices as mounted."""
+        check_output.return_value = (
+            b'NAME="sda" MAJ:MIN="8:16" RM="0" SIZE="238.5G" RO="0" TYPE="disk" MOUNTPOINT="/tmp"\n')
+        result = storage_utils.is_device_mounted('/dev/sda')
+        self.assertTrue(result)
+
+    @patch(STORAGE_LINUX_UTILS + '.check_output')
+    def test_is_device_mounted_partition(self, check_output):
+        """It detects mounted partitions as mounted."""
+        check_output.return_value = (
+            b'NAME="sda1" MAJ:MIN="8:16" RM="0" SIZE="238.5G" RO="0" TYPE="disk" MOUNTPOINT="/tmp"\n')
+        result = storage_utils.is_device_mounted('/dev/sda1')
+        self.assertTrue(result)
+
+    @patch(STORAGE_LINUX_UTILS + '.check_output')
+    def test_is_device_mounted_partition_with_device(self, check_output):
+        """It detects mounted devices as mounted if "mount" shows only a
+        partition as mounted."""
+        check_output.return_value = (
+            b'NAME="sda1" MAJ:MIN="8:16" RM="0" SIZE="238.5G" RO="0" TYPE="disk" MOUNTPOINT="/tmp"\n')
+        result = storage_utils.is_device_mounted('/dev/sda')
+        self.assertTrue(result)
+
+    @patch(STORAGE_LINUX_UTILS + '.check_output')
+    def test_is_device_mounted_not_mounted(self, check_output):
+        """It detects unmounted devices as not mounted."""
+        check_output.return_value = (
+            b'NAME="sda" MAJ:MIN="8:16" RM="0" SIZE="238.5G" RO="0" TYPE="disk" MOUNTPOINT=""\n')
+        result = storage_utils.is_device_mounted('/dev/sda')
+        self.assertFalse(result)
+
+    @patch(STORAGE_LINUX_UTILS + '.check_output')
+    def test_is_device_mounted_not_mounted_partition(self, check_output):
+        """It detects unmounted partitions as not mounted."""
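+        # The canned value below mimics a single `lsblk` key="value" record
+        # for the parent disk: the empty MOUNTPOINT="" is what the helper is
+        # expected to read as "not mounted", even when the query names one of
+        # the disk's partitions.
+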
check_output.return_value = ( + b'NAME="sda" MAJ:MIN="8:16" RM="0" SIZE="238.5G" RO="0" TYPE="disk" MOUNTPOINT=""\n') + result = storage_utils.is_device_mounted('/dev/sda1') + self.assertFalse(result) + + @patch(STORAGE_LINUX_UTILS + '.check_output') + def test_is_device_mounted_full_disks(self, check_output): + """It detects mounted full disks as mounted.""" + check_output.return_value = ( + b'NAME="sda" MAJ:MIN="8:16" RM="0" SIZE="238.5G" RO="0" TYPE="disk" MOUNTPOINT="/tmp"\n') + result = storage_utils.is_device_mounted('/dev/sda') + self.assertTrue(result) + + @patch(STORAGE_LINUX_UTILS + '.check_output') + def test_is_device_mounted_cciss(self, check_output): + """It detects mounted cciss partitions as mounted.""" + check_output.return_value = ( + b'NAME="cciss!c0d0" MAJ:MIN="104:0" RM="0" SIZE="273.3G" RO="0" TYPE="disk" MOUNTPOINT="/root"\n') + result = storage_utils.is_device_mounted('/dev/cciss/c0d0') + self.assertTrue(result) + + @patch(STORAGE_LINUX_UTILS + '.check_output') + def test_is_device_mounted_cciss_not_mounted(self, check_output): + """It detects unmounted cciss partitions as not mounted.""" + check_output.return_value = ( + b'NAME="cciss!c0d0" MAJ:MIN="104:0" RM="0" SIZE="273.3G" RO="0" TYPE="disk" MOUNTPOINT=""\n') + result = storage_utils.is_device_mounted('/dev/cciss/c0d0') + self.assertFalse(result) + + @patch(STORAGE_LINUX_UTILS + '.check_call') + def test_mkfs_xfs(self, check_call): + storage_utils.mkfs_xfs('/dev/sdb') + check_call.assert_called_with( + ['mkfs.xfs', '-i', 'size=1024', '/dev/sdb'] + ) + + @patch(STORAGE_LINUX_UTILS + '.check_call') + def test_mkfs_xfs_force(self, check_call): + storage_utils.mkfs_xfs('/dev/sdb', force=True) + check_call.assert_called_with( + ['mkfs.xfs', '-f', '-i', 'size=1024', '/dev/sdb'] + ) + + @patch(STORAGE_LINUX_UTILS + '.check_call') + def test_mkfs_xfs_inode_size(self, check_call): + storage_utils.mkfs_xfs('/dev/sdb', inode_size=512) + check_call.assert_called_with( + ['mkfs.xfs', '-i', 'size=512', '/dev/sdb'] + ) + + +class CephLUKSDeviceTestCase(unittest.TestCase): + + @patch.object(storage_utils, '_luks_uuid') + def test_no_luks_header(self, _luks_uuid): + _luks_uuid.return_value = None + self.assertEqual(storage_utils.is_luks_device('/dev/sdb'), False) + + @patch.object(storage_utils, '_luks_uuid') + def test_luks_header(self, _luks_uuid): + _luks_uuid.return_value = '5e1e4c89-4f68-4b9a-bd93-e25eec34e80f' + self.assertEqual(storage_utils.is_luks_device('/dev/sdb'), True) + + +class CephMappedLUKSDeviceTestCase(unittest.TestCase): + + @patch.object(storage_utils.os, 'walk') + @patch.object(storage_utils, '_luks_uuid') + def test_no_luks_header_not_mapped(self, _luks_uuid, _walk): + _luks_uuid.return_value = None + + def os_walk_side_effect(path): + return { + '/sys/class/block/sdb/holders/': iter([('', [], [])]), + }[path] + _walk.side_effect = os_walk_side_effect + + self.assertEqual(storage_utils.is_mapped_luks_device('/dev/sdb'), False) + + @patch.object(storage_utils.os, 'walk') + @patch.object(storage_utils, '_luks_uuid') + def test_luks_header_mapped(self, _luks_uuid, _walk): + _luks_uuid.return_value = 'db76d142-4782-42f2-84c6-914f9db889a0' + + def os_walk_side_effect(path): + return { + '/sys/class/block/sdb/holders/': iter([('', ['dm-0'], [])]), + }[path] + _walk.side_effect = os_walk_side_effect + + self.assertEqual(storage_utils.is_mapped_luks_device('/dev/sdb'), True) + + @patch.object(storage_utils.os, 'walk') + @patch.object(storage_utils, '_luks_uuid') + def test_luks_header_not_mapped(self, _luks_uuid, 
_walk): + _luks_uuid.return_value = 'db76d142-4782-42f2-84c6-914f9db889a0' + + def os_walk_side_effect(path): + return { + '/sys/class/block/sdb/holders/': iter([('', [], [])]), + }[path] + _walk.side_effect = os_walk_side_effect + + self.assertEqual(storage_utils.is_mapped_luks_device('/dev/sdb'), False) + + @patch.object(storage_utils.os, 'walk') + @patch.object(storage_utils, '_luks_uuid') + def test_no_luks_header_mapped(self, _luks_uuid, _walk): + """ + This is an edge case where a device is mapped (i.e. used for something + else) but has no LUKS header. Should be handled by other checks. + """ + _luks_uuid.return_value = None + + def os_walk_side_effect(path): + return { + '/sys/class/block/sdb/holders/': iter([('', ['dm-0'], [])]), + }[path] + _walk.side_effect = os_walk_side_effect + + self.assertEqual(storage_utils.is_mapped_luks_device('/dev/sdb'), False) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/templating/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/templating/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/templating/test_contexts.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/templating/test_contexts.py new file mode 100644 index 0000000000000000000000000000000000000000..1abc3d5786995dd7a7f791b5b466661663679d4e --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/templating/test_contexts.py @@ -0,0 +1,252 @@ +# Copyright 2013 Canonical Ltd. +# +# Authors: +# Charm Helpers Developers +import mock +import os +import shutil +import tempfile +import unittest +import yaml + +import six + +import charmhelpers.contrib.templating.contexts + + +class JujuState2YamlTestCase(unittest.TestCase): + maxDiff = None + + unit_data = { + 'private-address': '10.0.3.2', + 'public-address': '123.123.123.123', + } + + def setUp(self): + super(JujuState2YamlTestCase, self).setUp() + + # Hookenv patches (a single patch to hookenv doesn't work): + patcher = mock.patch('charmhelpers.core.hookenv.config') + self.mock_config = patcher.start() + self.addCleanup(patcher.stop) + patcher = mock.patch('charmhelpers.core.hookenv.relation_get') + self.mock_relation_get = patcher.start() + self.mock_relation_get.return_value = {} + self.addCleanup(patcher.stop) + patcher = mock.patch('charmhelpers.core.hookenv.relations') + self.mock_relations = patcher.start() + self.mock_relations.return_value = { + 'wsgi-file': {}, + 'website': {}, + 'nrpe-external-master': {}, + } + self.addCleanup(patcher.stop) + patcher = mock.patch('charmhelpers.core.hookenv.relation_type') + self.mock_relation_type = patcher.start() + self.mock_relation_type.return_value = None + self.addCleanup(patcher.stop) + patcher = mock.patch('charmhelpers.core.hookenv.local_unit') + self.mock_local_unit = patcher.start() + self.addCleanup(patcher.stop) + patcher = mock.patch('charmhelpers.core.hookenv.relations_of_type') + self.mock_relations_of_type = patcher.start() + self.addCleanup(patcher.stop) + self.mock_relations_of_type.return_value = [] + + def unit_get_data(argument): + "dummy unit_get that accesses dummy unit data" + return self.unit_data[argument] + + patcher = mock.patch( + 'charmhelpers.core.hookenv.unit_get', unit_get_data) + self.mock_unit_get = patcher.start() + 
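+        # Note: unit_get is patched with a real function rather than a bare
+        # MagicMock so that lookups of 'private-address'/'public-address'
+        # resolve from the unit_data fixture defined on the class.
+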
self.addCleanup(patcher.stop) + + # patches specific to this test class. + etc_dir = tempfile.mkdtemp() + self.addCleanup(shutil.rmtree, etc_dir) + self.context_path = os.path.join(etc_dir, 'some', 'context') + + patcher = mock.patch.object(charmhelpers.contrib.templating.contexts, + 'charm_dir', '/tmp/charm_dir') + patcher.start() + self.addCleanup(patcher.stop) + + def default_context(self): + return { + "charm_dir": "/tmp/charm_dir", + "group_code_owner": "webops_deploy", + "user_code_runner": "ubunet", + "current_relation": {}, + "relations_full": { + 'wsgi-file': {}, + 'website': {}, + 'nrpe-external-master': {}, + }, + "relations": { + 'wsgi-file': [], + 'website': [], + 'nrpe-external-master': [], + }, + "local_unit": "click-index/3", + "unit_private_address": "10.0.3.2", + "unit_public_address": "123.123.123.123", + } + + def test_output_with_empty_relation(self): + self.mock_config.return_value = { + 'group_code_owner': 'webops_deploy', + 'user_code_runner': 'ubunet', + } + self.mock_local_unit.return_value = "click-index/3" + + charmhelpers.contrib.templating.contexts.juju_state_to_yaml( + self.context_path) + + with open(self.context_path, 'r') as context_file: + result = yaml.safe_load(context_file.read()) + expected = self.default_context() + self.assertEqual(expected, result) + + def test_output_with_no_relation(self): + self.mock_config.return_value = { + 'group_code_owner': 'webops_deploy', + 'user_code_runner': 'ubunet', + } + self.mock_local_unit.return_value = "click-index/3" + self.mock_relation_get.return_value = None + + charmhelpers.contrib.templating.contexts.juju_state_to_yaml( + self.context_path) + + with open(self.context_path, 'r') as context_file: + result = yaml.safe_load(context_file.read()) + expected = self.default_context() + self.assertEqual(expected, result) + + def test_output_with_relation(self): + self.mock_config.return_value = { + 'group_code_owner': 'webops_deploy', + 'user_code_runner': 'ubunet', + } + self.mock_relation_type.return_value = 'wsgi-file' + self.mock_relation_get.return_value = { + 'relation_key1': 'relation_value1', + 'relation_key2': 'relation_value2', + } + self.mock_relations.return_value = { + 'wsgi-file': { + six.u('wsgi-file:0'): { + six.u('gunicorn/1'): { + six.u('private-address'): six.u('10.0.3.99'), + }, + 'click-index/3': { + six.u('wsgi_group'): six.u('ubunet'), + }, + }, + }, + 'website': {}, + 'nrpe-external-master': {}, + } + self.mock_local_unit.return_value = "click-index/3" + + charmhelpers.contrib.templating.contexts.juju_state_to_yaml( + self.context_path) + + with open(self.context_path, 'r') as context_file: + result = yaml.safe_load(context_file.read()) + expected = self.default_context() + expected['current_relation'] = { + "relation_key1": "relation_value1", + "relation_key2": "relation_value2", + } + expected["wsgi_file:relation_key1"] = "relation_value1" + expected["wsgi_file:relation_key2"] = "relation_value2" + expected["relations_full"]['wsgi-file'] = { + 'wsgi-file:0': { + 'gunicorn/1': { + six.u('private-address'): six.u('10.0.3.99')}, + 'click-index/3': {six.u('wsgi_group'): six.u('ubunet')}, + }, + } + expected["relations"]["wsgi-file"] = [ + { + '__relid__': 'wsgi-file:0', + '__unit__': 'gunicorn/1', + 'private-address': '10.0.3.99', + } + ] + self.assertEqual(expected, result) + + def test_relation_with_separator(self): + self.mock_config.return_value = { + 'group_code_owner': 'webops_deploy', + 'user_code_runner': 'ubunet', + } + self.mock_relation_type.return_value = 'wsgi-file' + 
self.mock_relation_get.return_value = { + 'relation_key1': 'relation_value1', + 'relation_key2': 'relation_value2', + } + self.mock_local_unit.return_value = "click-index/3" + + charmhelpers.contrib.templating.contexts.juju_state_to_yaml( + self.context_path, namespace_separator='__') + + with open(self.context_path, 'r') as context_file: + result = yaml.safe_load(context_file.read()) + expected = self.default_context() + expected['current_relation'] = { + "relation_key1": "relation_value1", + "relation_key2": "relation_value2", + } + expected["wsgi_file__relation_key1"] = "relation_value1" + expected["wsgi_file__relation_key2"] = "relation_value2" + self.assertEqual(expected, result) + + def test_keys_with_hyphens(self): + self.mock_config.return_value = { + 'group_code_owner': 'webops_deploy', + 'user_code_runner': 'ubunet', + 'private-address': '10.1.1.10', + } + self.mock_local_unit.return_value = "click-index/3" + self.mock_relation_get.return_value = None + + charmhelpers.contrib.templating.contexts.juju_state_to_yaml( + self.context_path) + + with open(self.context_path, 'r') as context_file: + result = yaml.safe_load(context_file.read()) + expected = self.default_context() + expected["private-address"] = "10.1.1.10" + self.assertEqual(expected, result) + + def test_keys_with_hypens_not_allowed_in_keys(self): + self.mock_config.return_value = { + 'group_code_owner': 'webops_deploy', + 'user_code_runner': 'ubunet', + 'private-address': '10.1.1.10', + } + self.mock_local_unit.return_value = "click-index/3" + self.mock_relation_type.return_value = 'wsgi-file' + self.mock_relation_get.return_value = { + 'relation-key1': 'relation_value1', + 'relation-key2': 'relation_value2', + } + + charmhelpers.contrib.templating.contexts.juju_state_to_yaml( + self.context_path, allow_hyphens_in_keys=False, + namespace_separator='__') + + with open(self.context_path, 'r') as context_file: + result = yaml.safe_load(context_file.read()) + expected = self.default_context() + expected["private_address"] = "10.1.1.10" + expected["wsgi_file__relation_key1"] = "relation_value1" + expected["wsgi_file__relation_key2"] = "relation_value2" + expected['current_relation'] = { + "relation-key1": "relation_value1", + "relation-key2": "relation_value2", + } + self.assertEqual(expected, result) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/templating/test_jinja.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/templating/test_jinja.py new file mode 100644 index 0000000000000000000000000000000000000000..00b0d47f257feda8954ff24672dbcc763aa45f41 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/templating/test_jinja.py @@ -0,0 +1,48 @@ +import tempfile +import os + +from shutil import rmtree +from testtools import TestCase + +from charmhelpers.contrib.templating.jinja import render + + +SIMPLE_TEMPLATE = "{{ somevar }}" + + +LOOP_TEMPLATE = "{% for i in somevar %}{{ i }}{% endfor %}" + + +class Jinja2Test(TestCase): + + def setUp(self): + super(Jinja2Test, self).setUp() + # Create a "templates directory" in temp + self.templates_dir = tempfile.mkdtemp() + + def tearDown(self): + super(Jinja2Test, self).tearDown() + # Remove the temporary directory so as not to pollute /tmp + rmtree(self.templates_dir) + + def _write_template_to_file(self, name, contents): + path = os.path.join(self.templates_dir, name) + with open(path, "w") as thefile: + thefile.write(contents) + + def 
test_render_simple_template(self):
+        name = "simple"
+        self._write_template_to_file(name, SIMPLE_TEMPLATE)
+        expected = "hello"
+        result = render(
+            name, {"somevar": expected}, template_dir=self.templates_dir)
+        self.assertEqual(expected, result)
+
+    def test_render_loop_template(self):
+        name = "loop"
+        self._write_template_to_file(name, LOOP_TEMPLATE)
+        expected = "12345"
+        result = render(
+            name, {"somevar": ["1", "2", "3", "4", "5"]},
+            template_dir=self.templates_dir)
+        self.assertEqual(expected, result)
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/templating/test_pyformat.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/templating/test_pyformat.py
new file mode 100644
index 0000000000000000000000000000000000000000..0873a4ae25672b19101872c2ba98ca9586e37ae2
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/templating/test_pyformat.py
@@ -0,0 +1,36 @@
+from mock import patch
+from testtools import TestCase
+
+from charmhelpers.contrib.templating.pyformat import render
+from charmhelpers.core import hookenv
+
+
+class PyFormatTest(TestCase):
+    @patch.object(hookenv, 'execution_environment')
+    def test_renders_using_environment(self, execution_environment):
+        execution_environment.return_value = {
+            'foo': 'FOO',
+        }
+
+        self.assertEqual(render('foo is {foo}'), 'foo is FOO')
+
+    @patch.object(hookenv, 'execution_environment')
+    def test_extra_overrides(self, execution_environment):
+        execution_environment.return_value = {
+            'foo': 'FOO',
+        }
+
+        extra = {'foo': 'BAR'}
+
+        self.assertEqual(render('foo is {foo}', extra=extra), 'foo is BAR')
+
+    @patch.object(hookenv, 'execution_environment')
+    def test_kwargs_overrides(self, execution_environment):
+        execution_environment.return_value = {
+            'foo': 'FOO',
+        }
+
+        extra = {'foo': 'BAR'}
+
+        self.assertEqual(
+            render('foo is {foo}', extra=extra, foo='BAZ'), 'foo is BAZ')
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/unison/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/unison/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/unison/test_unison.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/unison/test_unison.py
new file mode 100644
index 0000000000000000000000000000000000000000..a68114c5716be87cb1b1504e6b7850f12644f1f5
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/contrib/unison/test_unison.py
@@ -0,0 +1,441 @@
+
+from mock import call, patch, MagicMock, sentinel
+from testtools import TestCase
+
+from tests.helpers import patch_open, FakeRelation
+from charmhelpers.contrib import unison
+
+
+FAKE_RELATION_ENV = {
+    'cluster:0': ['cluster/0', 'cluster/1']
+}
+
+
+TO_PATCH = [
+    'log', 'check_call', 'check_output', 'relation_ids',
+    'related_units', 'relation_get', 'relation_set',
+    'hook_name', 'unit_private_ip',
+]
+
+FAKE_LOCAL_UNIT = 'test_host'
+FAKE_RELATION = {
+    'cluster:0': {
+        'cluster/0': {
+            'private-address': 'cluster0.local',
+            'ssh_authorized_hosts': 'someotherhost:test_host'
+        },
+        'cluster/1': {
+            'private-address': 'cluster1.local',
+            'ssh_authorized_hosts': 'someotherhost'
+        },
+        'cluster/3': {
+            'private-address': 'cluster2.local',
+
'ssh_authorized_hosts': 'someotherthirdhost' + }, + + }, + +} + + +class UnisonHelperTests(TestCase): + def setUp(self): + super(UnisonHelperTests, self).setUp() + for m in TO_PATCH: + setattr(self, m, self._patch(m)) + self.fake_relation = FakeRelation(FAKE_RELATION) + self.unit_private_ip.return_value = FAKE_LOCAL_UNIT + self.relation_get.side_effect = self.fake_relation.get + self.relation_ids.side_effect = self.fake_relation.relation_ids + self.related_units.side_effect = self.fake_relation.related_units + + def _patch(self, method): + _m = patch('charmhelpers.contrib.unison.' + method) + mock = _m.start() + self.addCleanup(_m.stop) + return mock + + @patch('pwd.getpwnam') + def test_get_homedir(self, pwnam): + fake_user = MagicMock() + fake_user.pw_dir = '/home/foo' + pwnam.return_value = fake_user + self.assertEquals(unison.get_homedir('foo'), + '/home/foo') + + @patch('pwd.getpwnam') + def test_get_homedir_no_user(self, pwnam): + e = KeyError + pwnam.side_effect = e + self.assertRaises(Exception, unison.get_homedir, user='foo') + + def _ensure_calls_in(self, calls): + for _call in calls: + self.assertIn(call(_call), self.check_call.call_args_list) + + @patch('os.path.isfile') + def test_create_private_key_rsa(self, isfile): + create_cmd = [ + 'ssh-keygen', '-q', '-N', '', '-t', 'rsa', '-b', '2048', + '-f', '/home/foo/.ssh/id_rsa'] + + def _ensure_perms(): + cmds = [ + ['chown', 'foo', '/home/foo/.ssh/id_rsa'], + ['chmod', '0600', '/home/foo/.ssh/id_rsa'], + ] + self._ensure_calls_in(cmds) + + isfile.return_value = False + unison.create_private_key( + user='foo', priv_key_path='/home/foo/.ssh/id_rsa') + self.assertIn(call(create_cmd), self.check_call.call_args_list) + _ensure_perms() + self.check_call.call_args_list = [] + + isfile.return_value = True + unison.create_private_key( + user='foo', priv_key_path='/home/foo/.ssh/id_rsa') + self.assertNotIn(call(create_cmd), self.check_call.call_args_list) + _ensure_perms() + + @patch('os.path.isfile') + def test_create_private_key_ecdsa(self, isfile): + create_cmd = [ + 'ssh-keygen', '-q', '-N', '', '-t', 'ecdsa', '-b', '521', + '-f', '/home/foo/.ssh/id_ecdsa'] + + def _ensure_perms(): + cmds = [ + ['chown', 'foo', '/home/foo/.ssh/id_ecdsa'], + ['chmod', '0600', '/home/foo/.ssh/id_ecdsa'], + ] + self._ensure_calls_in(cmds) + + isfile.return_value = False + unison.create_private_key( + user='foo', + priv_key_path='/home/foo/.ssh/id_ecdsa', + key_type='ecdsa') + self.assertIn(call(create_cmd), self.check_call.call_args_list) + _ensure_perms() + self.check_call.call_args_list = [] + + isfile.return_value = True + unison.create_private_key( + user='foo', + priv_key_path='/home/foo/.ssh/id_ecdsa', + key_type='ecdsa') + self.assertNotIn(call(create_cmd), self.check_call.call_args_list) + _ensure_perms() + + @patch('os.path.isfile') + def test_create_public_key(self, isfile): + create_cmd = ['ssh-keygen', '-y', '-f', '/home/foo/.ssh/id_rsa'] + isfile.return_value = True + unison.create_public_key( + user='foo', priv_key_path='/home/foo/.ssh/id_rsa', + pub_key_path='/home/foo/.ssh/id_rsa.pub') + self.assertNotIn(call(create_cmd), self.check_output.call_args_list) + + isfile.return_value = False + with patch_open() as (_open, _file): + self.check_output.return_value = b'fookey' + unison.create_public_key( + user='foo', priv_key_path='/home/foo/.ssh/id_rsa', + pub_key_path='/home/foo/.ssh/id_rsa.pub') + self.assertIn(call(create_cmd), self.check_output.call_args_list) + _open.assert_called_with('/home/foo/.ssh/id_rsa.pub', 'wb') + 
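+            # ssh-keygen output is captured as raw bytes, hence the binary
+            # 'wb' mode asserted above and the bytes payload checked below.
+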
_file.write.assert_called_with(b'fookey') + + @patch('os.mkdir') + @patch('os.path.isdir') + @patch.object(unison, 'get_homedir') + @patch.multiple(unison, create_private_key=MagicMock(), + create_public_key=MagicMock()) + def test_get_keypair(self, get_homedir, isdir, mkdir): + get_homedir.return_value = '/home/foo' + isdir.return_value = False + with patch_open() as (_open, _file): + _file.read.side_effect = [ + 'foopriv', 'foopub' + ] + priv, pub = unison.get_keypair('adam') + for f in ['/home/foo/.ssh/id_rsa', + '/home/foo/.ssh/id_rsa.pub']: + self.assertIn(call(f, 'r'), _open.call_args_list) + self.assertEquals(priv, 'foopriv') + self.assertEquals(pub, 'foopub') + + @patch.object(unison, 'get_homedir') + def test_write_auth_keys(self, get_homedir): + get_homedir.return_value = '/home/foo' + keys = [ + 'ssh-rsa AAAB3Nz adam', + 'ssh-rsa ALKJFz adam@whereschuck.org', + ] + with patch_open() as (_open, _file): + unison.write_authorized_keys('foo', keys) + _open.assert_called_with('/home/foo/.ssh/authorized_keys', 'w') + for k in keys: + self.assertIn(call('%s\n' % k), _file.write.call_args_list) + + @patch.object(unison, 'get_homedir') + def test_write_known_hosts(self, get_homedir): + get_homedir.return_value = '/home/foo' + keys = [ + '10.0.0.1 ssh-rsa KJDSJF=', + '10.0.0.2 ssh-rsa KJDSJF=', + ] + self.check_output.side_effect = keys + with patch_open() as (_open, _file): + unison.write_known_hosts('foo', ['10.0.0.1', '10.0.0.2']) + _open.assert_called_with('/home/foo/.ssh/known_hosts', 'w') + for k in keys: + self.assertIn(call('%s\n' % k), _file.write.call_args_list) + + @patch.object(unison, 'remove_password_expiry') + @patch.object(unison, 'pwgen') + @patch.object(unison, 'add_user_to_group') + @patch.object(unison, 'adduser') + def test_ensure_user(self, adduser, to_group, pwgen, + remove_password_expiry): + pwgen.return_value = sentinel.password + unison.ensure_user('foo', group='foobar') + adduser.assert_called_with('foo', sentinel.password) + to_group.assert_called_with('foo', 'foobar') + remove_password_expiry.assert_called_with('foo') + + @patch.object(unison, '_run_as_user') + def test_run_as_user(self, _run): + with patch.object(unison, '_run_as_user') as _run: + fake_preexec = MagicMock() + _run.return_value = fake_preexec + unison.run_as_user('foo', ['echo', 'foo']) + self.check_output.assert_called_with( + ['echo', 'foo'], preexec_fn=fake_preexec, cwd='/') + + @patch('pwd.getpwnam') + def test_run_user_not_found(self, getpwnam): + e = KeyError + getpwnam.side_effect = e + self.assertRaises(Exception, unison._run_as_user, 'nouser') + + @patch('os.setuid') + @patch('os.setgid') + @patch('os.environ', spec=dict) + @patch('pwd.getpwnam') + def test_run_as_user_preexec(self, pwnam, environ, setgid, setuid): + fake_env = {'HOME': '/root'} + environ.__getitem__ = MagicMock() + environ.__setitem__ = MagicMock() + environ.__setitem__.side_effect = fake_env.__setitem__ + environ.__getitem__.side_effect = fake_env.__getitem__ + + fake_user = MagicMock() + fake_user.pw_uid = 1010 + fake_user.pw_gid = 1011 + fake_user.pw_dir = '/home/foo' + pwnam.return_value = fake_user + inner = unison._run_as_user('foo') + self.assertEquals(fake_env['HOME'], '/home/foo') + inner() + setgid.assert_called_with(1011) + setuid.assert_called_with(1010) + + @patch('os.setuid') + @patch('os.setgid') + @patch('os.environ', spec=dict) + @patch('pwd.getpwnam') + def test_run_as_user_preexec_with_group(self, pwnam, environ, setgid, setuid): + fake_env = {'HOME': '/root'} + environ.__getitem__ = MagicMock() + 
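+        # os.environ is mocked with a dict spec and its item access is routed
+        # to the plain fake_env dict below, letting the test observe how the
+        # preexec closure rewrites HOME for the target user.
+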
environ.__setitem__ = MagicMock() + environ.__setitem__.side_effect = fake_env.__setitem__ + environ.__getitem__.side_effect = fake_env.__getitem__ + + fake_user = MagicMock() + fake_user.pw_uid = 1010 + fake_user.pw_gid = 1011 + fake_user.pw_dir = '/home/foo' + fake_group_id = 2000 + pwnam.return_value = fake_user + inner = unison._run_as_user('foo', gid=fake_group_id) + self.assertEquals(fake_env['HOME'], '/home/foo') + inner() + setgid.assert_called_with(2000) + setuid.assert_called_with(1010) + + @patch.object(unison, 'get_keypair') + @patch.object(unison, 'ensure_user') + def test_ssh_auth_peer_joined(self, ensure_user, get_keypair): + get_keypair.return_value = ('privkey', 'pubkey') + self.hook_name.return_value = 'cluster-relation-joined' + unison.ssh_authorized_peers(peer_interface='cluster', + user='foo', group='foo', + ensure_local_user=True) + self.relation_set.assert_called_with(ssh_pub_key='pubkey') + self.assertFalse(self.relation_get.called) + ensure_user.assert_called_with('foo', 'foo') + get_keypair.assert_called_with('foo') + + @patch.object(unison, 'write_known_hosts') + @patch.object(unison, 'write_authorized_keys') + @patch.object(unison, 'get_keypair') + @patch.object(unison, 'ensure_user') + def test_ssh_auth_peer_changed(self, ensure_user, get_keypair, + write_keys, write_hosts): + get_keypair.return_value = ('privkey', 'pubkey') + + self.hook_name.return_value = 'cluster-relation-changed' + + self.relation_get.side_effect = [ + 'key1', + 'host1', + 'key2', + 'host2', + '', '' + ] + unison.ssh_authorized_peers(peer_interface='cluster', + user='foo', group='foo', + ensure_local_user=True) + + ensure_user.assert_called_with('foo', 'foo') + get_keypair.assert_called_with('foo') + write_keys.assert_called_with('foo', ['key1', 'key2']) + write_hosts.assert_called_with('foo', ['host1', 'host2']) + self.relation_set.assert_called_with(ssh_authorized_hosts='host1:host2') + + @patch.object(unison, 'write_known_hosts') + @patch.object(unison, 'write_authorized_keys') + @patch.object(unison, 'get_keypair') + @patch.object(unison, 'ensure_user') + def test_ssh_auth_peer_departed(self, ensure_user, get_keypair, + write_keys, write_hosts): + get_keypair.return_value = ('privkey', 'pubkey') + + self.hook_name.return_value = 'cluster-relation-departed' + + self.relation_get.side_effect = [ + 'key1', + 'host1', + 'key2', + 'host2', + '', '' + ] + unison.ssh_authorized_peers(peer_interface='cluster', + user='foo', group='foo', + ensure_local_user=True) + + ensure_user.assert_called_with('foo', 'foo') + get_keypair.assert_called_with('foo') + write_keys.assert_called_with('foo', ['key1', 'key2']) + write_hosts.assert_called_with('foo', ['host1', 'host2']) + self.relation_set.assert_called_with(ssh_authorized_hosts='host1:host2') + + def test_collect_authed_hosts(self): + # only one of the hosts in fake environment has auth'd + # the local peer + hosts = unison.collect_authed_hosts('cluster') + self.assertEquals(hosts, ['cluster0.local']) + + def test_collect_authed_hosts_none_authed(self): + with patch.object(unison, 'relation_get') as relation_get: + relation_get.return_value = '' + hosts = unison.collect_authed_hosts('cluster') + self.assertEquals(hosts, []) + + @patch.object(unison, 'run_as_user') + def test_sync_path_to_host(self, run_as_user, verbose=True, gid=None): + for path in ['/tmp/foo', '/tmp/foo/']: + unison.sync_path_to_host(path=path, host='clusterhost1', + user='foo', verbose=verbose, gid=gid) + ex_cmd = ['unison', '-auto', '-batch=true', + '-confirmbigdel=false', 
'-fastcheck=true', + '-group=false', '-owner=false', + '-prefer=newer', '-times=true'] + if not verbose: + ex_cmd.append('-silent') + ex_cmd += ['/tmp/foo', 'ssh://foo@clusterhost1//tmp/foo'] + run_as_user.assert_called_with('foo', ex_cmd, gid) + + @patch.object(unison, 'run_as_user') + def test_sync_path_to_host_error(self, run_as_user): + for i, path in enumerate(['/tmp/foo', '/tmp/foo/']): + run_as_user.side_effect = Exception + if i == 0: + unison.sync_path_to_host(path=path, host='clusterhost1', + user='foo', verbose=True, gid=None) + else: + self.assertRaises(Exception, unison.sync_path_to_host, + path=path, host='clusterhost1', + user='foo', verbose=True, gid=None, + fatal=True) + + ex_cmd = ['unison', '-auto', '-batch=true', + '-confirmbigdel=false', '-fastcheck=true', + '-group=false', '-owner=false', + '-prefer=newer', '-times=true', + '/tmp/foo', 'ssh://foo@clusterhost1//tmp/foo'] + run_as_user.assert_called_with('foo', ex_cmd, None) + + def test_sync_path_to_host_non_verbose(self): + return self.test_sync_path_to_host(verbose=False) + + def test_sync_path_to_host_with_gid(self): + return self.test_sync_path_to_host(gid=111) + + @patch.object(unison, 'sync_path_to_host') + def test_sync_to_peer(self, sync_path_to_host): + paths = ['/tmp/foo1', '/tmp/foo2'] + host = 'host1' + unison.sync_to_peer(host, 'foouser', paths, True) + calls = [call('/tmp/foo1', host, 'foouser', True, None, None, False), + call('/tmp/foo2', host, 'foouser', True, None, None, False)] + sync_path_to_host.assert_has_calls(calls) + + @patch.object(unison, 'sync_path_to_host') + def test_sync_to_peer_with_gid(self, sync_path_to_host): + paths = ['/tmp/foo1', '/tmp/foo2'] + host = 'host1' + unison.sync_to_peer(host, 'foouser', paths, True, gid=111) + calls = [call('/tmp/foo1', host, 'foouser', True, None, 111, False), + call('/tmp/foo2', host, 'foouser', True, None, 111, False)] + sync_path_to_host.assert_has_calls(calls) + + @patch.object(unison, 'collect_authed_hosts') + @patch.object(unison, 'sync_to_peer') + def test_sync_to_peers(self, sync_to_peer, collect_hosts): + collect_hosts.return_value = ['host1', 'host2', 'host3'] + paths = ['/tmp/foo'] + unison.sync_to_peers(peer_interface='cluster', user='foouser', + paths=paths, verbose=True) + calls = [call('host1', 'foouser', ['/tmp/foo'], True, None, None, False), + call('host2', 'foouser', ['/tmp/foo'], True, None, None, False), + call('host3', 'foouser', ['/tmp/foo'], True, None, None, False)] + sync_to_peer.assert_has_calls(calls) + + @patch.object(unison, 'collect_authed_hosts') + @patch.object(unison, 'sync_to_peer') + def test_sync_to_peers_with_gid(self, sync_to_peer, collect_hosts): + collect_hosts.return_value = ['host1', 'host2', 'host3'] + paths = ['/tmp/foo'] + unison.sync_to_peers(peer_interface='cluster', user='foouser', + paths=paths, verbose=True, gid=111) + calls = [call('host1', 'foouser', ['/tmp/foo'], True, None, 111, False), + call('host2', 'foouser', ['/tmp/foo'], True, None, 111, False), + call('host3', 'foouser', ['/tmp/foo'], True, None, 111, False)] + sync_to_peer.assert_has_calls(calls) + + @patch.object(unison, 'collect_authed_hosts') + @patch.object(unison, 'sync_to_peer') + def test_sync_to_peers_with_cmd(self, sync_to_peer, collect_hosts): + collect_hosts.return_value = ['host1', 'host2', 'host3'] + paths = ['/tmp/foo'] + cmd = ['dummy_cmd'] + unison.sync_to_peers(peer_interface='cluster', user='foouser', + paths=paths, verbose=True, cmd=cmd, gid=111) + calls = [call('host1', 'foouser', ['/tmp/foo'], True, cmd, 111, False), + 
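+                 # Positional argument order here is inferred from the tests
+                 # in this class rather than from a documented signature:
+                 # (path, host, user, verbose, cmd, gid, fatal).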
call('host2', 'foouser', ['/tmp/foo'], True, cmd, 111, False), + call('host3', 'foouser', ['/tmp/foo'], True, cmd, 111, False)] + sync_to_peer.assert_has_calls(calls) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/coordinator/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/coordinator/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/coordinator/test_coordinator.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/coordinator/test_coordinator.py new file mode 100644 index 0000000000000000000000000000000000000000..a57850a406c434eb9fb2a76336cfc9c9bc2c3eaf --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/coordinator/test_coordinator.py @@ -0,0 +1,533 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +from datetime import datetime, timedelta +import json +import tempfile +import unittest +from mock import call, MagicMock, patch, sentinel + +from charmhelpers import coordinator +from charmhelpers.core import hookenv + + +class TestCoordinator(unittest.TestCase): + + def setUp(self): + del hookenv._atstart[:] + del hookenv._atexit[:] + hookenv.cache.clear() + coordinator.Singleton._instances.clear() + + def install(patch): + patch.start() + self.addCleanup(patch.stop) + + install(patch.object(hookenv, 'local_unit', return_value='foo/1')) + install(patch.object(hookenv, 'is_leader', return_value=False)) + install(patch.object(hookenv, 'metadata', + return_value={'peers': {'cluster': None}})) + install(patch.object(hookenv, 'log')) + + # Ensure _timestamp always increases. + install(patch.object(coordinator, '_utcnow', + side_effect=self._utcnow)) + + _last_utcnow = datetime(2015, 1, 1, 00, 00) + + def _utcnow(self, ts=coordinator._timestamp): + self._last_utcnow += timedelta(minutes=1) + return self._last_utcnow + + def test_is_singleton(self): + # BaseCoordinator and subclasses are singletons. Placing this + # burden on charm authors is impractical, particularly if + # libraries start wanting to use coordinator instances. + # With singletons, we don't need to worry about sharing state + # between instances or have them stomping on each other when they + # need to serialize their state. + self.assertTrue(coordinator.BaseCoordinator() + is coordinator.BaseCoordinator()) + self.assertTrue(coordinator.Serial() is coordinator.Serial()) + self.assertFalse(coordinator.BaseCoordinator() is coordinator.Serial()) + + @patch.object(hookenv, 'atstart') + def test_implicit_initialize_and_handle(self, atstart): + # When you construct a BaseCoordinator(), its initialize() and + # handle() method are invoked automatically every hook. 
This
+        # is done using hookenv.atstart
+        c = coordinator.BaseCoordinator()
+        atstart.assert_has_calls([call(c.initialize), call(c.handle)])
+
+    @patch.object(hookenv, 'has_juju_version', return_value=False)
+    def test_initialize_enforces_juju_version(self, has_juju_version):
+        c = coordinator.BaseCoordinator()
+        with self.assertRaises(AssertionError):
+            c.initialize()
+        has_juju_version.assert_called_once_with('1.23')
+
+    @patch.object(hookenv, 'atexit')
+    @patch.object(hookenv, 'has_juju_version', return_value=True)
+    @patch.object(hookenv, 'relation_ids')
+    def test_initialize(self, relation_ids, ver, atexit):
+        # The first initialization is done before there is a peer relation.
+        relation_ids.return_value = []
+        c = coordinator.BaseCoordinator()
+
+        with patch.object(c, '_load_state') as _load_state, \
+                patch.object(c, '_emit_state') as _emit_state:  # IGNORE: E127
+            c.initialize()
+            _load_state.assert_called_once_with()
+            _emit_state.assert_called_once_with()
+
+        self.assertEqual(c.relname, 'cluster')
+        self.assertIsNone(c.relid)
+        relation_ids.assert_called_once_with('cluster')
+
+        # Methods installed to save state and release locks if the
+        # hook is successful.
+        atexit.assert_has_calls([call(c._save_state),
+                                 call(c._release_granted)])
+
+        # If we have a peer relation, the id is stored.
+        relation_ids.return_value = ['cluster:1']
+        c = coordinator.BaseCoordinator()
+        with patch.object(c, '_load_state'), patch.object(c, '_emit_state'):
+            c.initialize()
+        self.assertEqual(c.relid, 'cluster:1')
+
+        # If we are already initialized, nothing happens.
+        c.grants = {}
+        c.requests = {}
+        c.initialize()
+
+    def test_acquire(self):
+        c = coordinator.BaseCoordinator()
+        lock = 'mylock'
+        c.grants = {}
+        c.requests = {hookenv.local_unit(): {}}
+
+        # We are not the leader, so the first acquire will return False.
+        self.assertFalse(c.acquire(lock))
+
+        # But the request is in the queue.
+        self.assertTrue(c.requested(lock))
+        ts = c.request_timestamp(lock)
+
+        # Further attempts at acquiring the lock do nothing,
+        # and the timestamp of the request remains unchanged.
+        self.assertFalse(c.acquire(lock))
+        self.assertEqual(ts, c.request_timestamp(lock))
+
+        # Once the leader has granted the lock, acquire returns True.
+        with patch.object(c, 'granted') as granted:
+            granted.return_value = True
+            self.assertTrue(c.acquire(lock))
+            granted.assert_called_once_with(lock)
+
+    def test_acquire_leader(self):
+        # When acquire() is called by the leader, it needs
+        # to make a grant decision immediately. It can't defer
+        # making the decision until a future hook, as no future
+        # hooks will be triggered.
+        hookenv.is_leader.return_value = True
+        c = coordinator.Serial()  # Not Base. Test hooks into default_grant.
+        lock = 'mylock'
+        unit = hookenv.local_unit()
+        c.grants = {}
+        c.requests = {unit: {}}
+        with patch.object(c, 'default_grant') as default_grant:
+            default_grant.side_effect = iter([False, True])
+
+            self.assertFalse(c.acquire(lock))
+            ts = c.request_timestamp(lock)
+
+            self.assertTrue(c.acquire(lock))
+            self.assertEqual(ts, c.request_timestamp(lock))
+
+            # If it is granted, the leader doesn't make a decision again.
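+            # A rough sketch of the leader fast path this exercises, an
+            # assumption inferred from the behaviour asserted here rather
+            # than a quotation of coordinator.acquire():
+            #
+            #     def acquire(self, lock):
+            #         if self.granted(lock):
+            #             return True          # short-circuit, no decision
+            #         unit = hookenv.local_unit()
+            #         self.requests[unit].setdefault(lock, _timestamp())
+            #         if hookenv.is_leader():
+            #             return self.grant(lock, unit)  # decide immediately
+            #         return False             # wait for the leader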
+ self.assertTrue(c.acquire(lock)) + self.assertEqual(ts, c.request_timestamp(lock)) + + self.assertEqual(default_grant.call_count, 2) + + def test_granted(self): + c = coordinator.BaseCoordinator() + unit = hookenv.local_unit() + lock = 'mylock' + ts = coordinator._timestamp() + c.grants = {} + + # Unit makes a request, but it isn't granted + c.requests = {unit: {lock: ts}} + self.assertFalse(c.granted(lock)) + + # Once the leader has granted the request, all good. + # It does this by mirroring the request timestamp. + c.grants = {unit: {lock: ts}} + self.assertTrue(c.granted(lock)) + + # The unit releases the lock by removing the request. + c.requests = {unit: {}} + self.assertFalse(c.granted(lock)) + + # If the unit makes a new request before the leader + # has had a chance to do its housekeeping, the timestamps + # do not match and the lock not considered granted. + ts = coordinator._timestamp() + c.requests = {unit: {lock: ts}} + self.assertFalse(c.granted(lock)) + + # Until the leader gets around to its duties. + c.grants = {unit: {lock: ts}} + self.assertTrue(c.granted(lock)) + + def test_requested(self): + c = coordinator.BaseCoordinator() + lock = 'mylock' + c.requests = {hookenv.local_unit(): {}} + c.grants = {} + + self.assertFalse(c.requested(lock)) + c.acquire(lock) + self.assertTrue(c.requested(lock)) + + def test_request_timestamp(self): + c = coordinator.BaseCoordinator() + lock = 'mylock' + unit = hookenv.local_unit() + + c.requests = {unit: {}} + c.grants = {} + self.assertIsNone(c.request_timestamp(lock)) + + now = datetime.utcnow() + fmt = coordinator._timestamp_format + c.requests = {hookenv.local_unit(): {lock: now.strftime(fmt)}} + + self.assertEqual(c.request_timestamp(lock), now) + + def test_handle_not_leader(self): + c = coordinator.BaseCoordinator() + # If we are not the leader, handle does nothing. We know this, + # because without mocks or initialization it would otherwise crash. + c.handle() + + def test_handle(self): + hookenv.is_leader.return_value = True + lock = 'mylock' + c = coordinator.BaseCoordinator() + c.relid = 'cluster:1' + + ts = coordinator._timestamp + ts1, ts2, ts3 = ts(), ts(), ts() + + # Grant one of these requests. + requests = {'foo/1': {lock: ts1}, + 'foo/2': {lock: ts2}, + 'foo/3': {lock: ts3}} + c.requests = requests.copy() + # Because the existing grant should be released. + c.grants = {'foo/2': {lock: ts()}} # No request, release. + + with patch.object(c, 'grant') as grant: + c.handle() + + # The requests are unchanged. This is normally state on the + # peer relation, and only the units themselves can change it. + self.assertDictEqual(requests, c.requests) + + # The grant without a corresponding requests was released. + self.assertDictEqual({'foo/2': {}}, c.grants) + + # A potential grant was made for each of the outstanding requests. + grant.assert_has_calls([call(lock, 'foo/1'), + call(lock, 'foo/2'), + call(lock, 'foo/3')], any_order=True) + + def test_grant_not_leader(self): + c = coordinator.BaseCoordinator() + c.grant(sentinel.whatever, sentinel.whatever) # Nothing happens. + + def test_grant(self): + hookenv.is_leader.return_value = True + c = coordinator.BaseCoordinator() + c.default_grant = MagicMock() + c.grant_other = MagicMock() + + ts = coordinator._timestamp + ts1, ts2 = ts(), ts() + + c.requests = {'foo/1': {'mylock': ts1, 'other': ts()}, + 'foo/2': {'mylock': ts2}, + 'foo/3': {'mylock': ts()}} + grants = {'foo/1': {'mylock': ts1}} + c.grants = grants.copy() + + # foo/1 already has a granted mylock, so returns True. 
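+        # For reference while reading the expectations below: Serial's
+        # default_grant (exercised directly in test_default_grant) amounts
+        # to this sketch, inferred from its tests rather than quoted:
+        #
+        #     def default_grant(self, lock, unit, granted, queue):
+        #         # Grant iff the lock is unheld and unit is first in line.
+        #         return not granted and queue[0] == unit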
+ self.assertTrue(c.grant('mylock', 'foo/1')) + + # foo/2 does not have a granted mylock. default_grant will + # be called to make a decision (no) + c.default_grant.return_value = False + self.assertFalse(c.grant('mylock', 'foo/2')) + self.assertDictEqual(grants, c.grants) + c.default_grant.assert_called_once_with('mylock', 'foo/2', + set(['foo/1']), + ['foo/2', 'foo/3']) + c.default_grant.reset_mock() + + # Lets say yes. + c.default_grant.return_value = True + self.assertTrue(c.grant('mylock', 'foo/2')) + grants = {'foo/1': {'mylock': ts1}, 'foo/2': {'mylock': ts2}} + self.assertDictEqual(grants, c.grants) + c.default_grant.assert_called_once_with('mylock', 'foo/2', + set(['foo/1']), + ['foo/2', 'foo/3']) + + # The other lock has custom logic, in the form of the overridden + # grant_other method. + c.grant_other.return_value = False + self.assertFalse(c.grant('other', 'foo/1')) + c.grant_other.assert_called_once_with('other', 'foo/1', + set(), ['foo/1']) + + # If there is no request, grant returns False + c.grant_other.return_value = True + self.assertFalse(c.grant('other', 'foo/2')) + + def test_released(self): + c = coordinator.BaseCoordinator() + with patch.object(c, 'msg') as msg: + c.released('foo/2', 'mylock', coordinator._utcnow()) + expected = 'Leader released mylock from foo/2, held 0:01:00' + msg.assert_called_once_with(expected) + + def test_require(self): + c = coordinator.BaseCoordinator() + c.acquire = MagicMock() + c.granted = MagicMock() + guard = MagicMock() + + wrapped = MagicMock() + + @c.require('mylock', guard) + def func(*args, **kw): + wrapped(*args, **kw) + + # If the lock is granted, the wrapped function is called. + c.granted.return_value = True + func(arg=True) + wrapped.assert_called_once_with(arg=True) + wrapped.reset_mock() + + # If the lock is not granted, and the guard returns False, + # the lock is not acquired. + c.acquire.return_value = False + c.granted.return_value = False + guard.return_value = False + func() + self.assertFalse(wrapped.called) + self.assertFalse(c.acquire.called) + + # If the lock is not granted, and the guard returns True, + # the lock is acquired. But the function still isn't called if + # it cannot be acquired immediately. + guard.return_value = True + func() + self.assertFalse(wrapped.called) + c.acquire.assert_called_once_with('mylock') + + # Finally, if the lock is not granted, and the guard returns True, + # and the lock acquired immediately, the function is called. + c.acquire.return_value = True + func(sentinel.arg) + wrapped.assert_called_once_with(sentinel.arg) + + def test_msg(self): + c = coordinator.BaseCoordinator() + # Just a wrapper around hookenv.log + c.msg('hi') + hookenv.log.assert_called_once_with('coordinator.BaseCoordinator hi', + level=hookenv.INFO) + + def test_name(self): + # We use the class name in a few places to avoid conflicts. + # We assume we won't be using multiple BaseCoordinator subclasses + # with the same name at the same time. + c = coordinator.BaseCoordinator() + self.assertEqual(c._name(), 'BaseCoordinator') + c = coordinator.Serial() + self.assertEqual(c._name(), 'Serial') + + @patch.object(hookenv, 'leader_get') + def test_load_state(self, leader_get): + c = coordinator.BaseCoordinator() + unit = hookenv.local_unit() + + # c.granted is just the leader_get decoded. + leader_get.return_value = '{"json": true}' + c._load_state() + self.assertDictEqual(c.grants, {'json': True}) + + # With no relid, there is no peer relation so request state + # is pulled from a local stash. 
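+        # Where each piece of state lives, as exercised below:
+        #   grants   -- leadership settings, JSON-encoded under c.key
+        #   requests -- per-unit peer relation data, JSON-encoded, with a
+        #               local stash file standing in before the peer
+        #               relation exists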
+ with patch.object(c, '_load_local_state') as loc_state: + loc_state.return_value = {'local': True} + c._load_state() + self.assertDictEqual(c.requests, {unit: {'local': True}}) + + # With a relid, request details are pulled from the peer relation. + # If there is no data in the peer relation from the local unit, + # we still pull it from the local stash as it means this is the + # first time we have joined. + c.relid = 'cluster:1' + with patch.object(c, '_load_local_state') as loc_state, \ + patch.object(c, '_load_peer_state') as peer_state: + loc_state.return_value = {'local': True} + peer_state.return_value = {'foo/2': {'mylock': 'whatever'}} + c._load_state() + self.assertDictEqual(c.requests, {unit: {'local': True}, + 'foo/2': {'mylock': 'whatever'}}) + + # If there are local details in the peer relation, the local + # stash is ignored. + with patch.object(c, '_load_local_state') as loc_state, \ + patch.object(c, '_load_peer_state') as peer_state: + loc_state.return_value = {'local': True} + peer_state.return_value = {unit: {}, + 'foo/2': {'mylock': 'whatever'}} + c._load_state() + self.assertDictEqual(c.requests, {unit: {}, + 'foo/2': {'mylock': 'whatever'}}) + + def test_emit_state(self): + c = coordinator.BaseCoordinator() + unit = hookenv.local_unit() + c.requests = {unit: {'lock_a': sentinel.ts, + 'lock_b': sentinel.ts, + 'lock_c': sentinel.ts}} + c.grants = {unit: {'lock_a': sentinel.ts, + 'lock_b': sentinel.ts2}} + with patch.object(c, 'msg') as msg: + c._emit_state() + msg.assert_has_calls([call('Granted lock_a'), + call('Waiting on lock_b'), + call('Waiting on lock_c')], + any_order=True) + + @patch.object(hookenv, 'relation_set') + @patch.object(hookenv, 'leader_set') + def test_save_state(self, leader_set, relation_set): + c = coordinator.BaseCoordinator() + unit = hookenv.local_unit() + c.grants = {'directdump': True} + c.requests = {unit: 'data1', 'foo/2': 'data2'} + + # grants is dumped to leadership settings, if the unit is leader. + with patch.object(c, '_save_local_state') as save_loc: + c._save_state() + self.assertFalse(leader_set.called) + hookenv.is_leader.return_value = True + c._save_state() + leader_set.assert_called_once_with({c.key: '{"directdump": true}'}) + + # If there is no relation id, the local units requests is dumped + # to a local stash. + with patch.object(c, '_save_local_state') as save_loc: + c._save_state() + save_loc.assert_called_once_with('data1') + + # If there is a relation id, the local units requests is dumped + # to the peer relation. + with patch.object(c, '_save_local_state') as save_loc: + c.relid = 'cluster:1' + c._save_state() + self.assertFalse(save_loc.called) + relation_set.assert_called_once_with( + c.relid, relation_settings={c.key: '"data1"'}) # JSON encoded + + @patch.object(hookenv, 'relation_get') + @patch.object(hookenv, 'related_units') + def test_load_peer_state(self, related_units, relation_get): + # Standard relation-get loops, decoding results from JSON. 
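+        # A minimal sketch of the loop being driven here (an assumption;
+        # note the expected result below includes the local unit as well as
+        # the related units):
+        #
+        #     def _load_peer_state(self):
+        #         state = {}
+        #         for unit in related_units(self.relid) + [local_unit()]:
+        #             raw = relation_get(self.key, unit, self.relid)
+        #             if raw:
+        #                 state[unit] = json.loads(raw)
+        #         return state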
+ c = coordinator.BaseCoordinator() + c.key = sentinel.key + c.relid = sentinel.relid + related_units.return_value = ['foo/2', 'foo/3'] + d = {'foo/1': {'foo/1': True}, + 'foo/2': {'foo/2': True}, + 'foo/3': {'foo/3': True}} + + def _get(key, unit, relid): + assert key == sentinel.key + assert relid == sentinel.relid + return json.dumps(d[unit]) + relation_get.side_effect = _get + + self.assertDictEqual(c._load_peer_state(), d) + + def test_local_state_filename(self): + c = coordinator.BaseCoordinator() + self.assertEqual(c._local_state_filename(), + '.charmhelpers.coordinator.BaseCoordinator') + + def test_load_local_state(self): + c = coordinator.BaseCoordinator() + with tempfile.NamedTemporaryFile(mode='w') as f: + with patch.object(c, '_local_state_filename') as fn: + fn.return_value = f.name + d = 'some data' + json.dump(d, f) + f.flush() + d2 = c._load_local_state() + self.assertEqual(d, d2) + + def test_save_local_state(self): + c = coordinator.BaseCoordinator() + with tempfile.NamedTemporaryFile(mode='r') as f: + with patch.object(c, '_local_state_filename') as fn: + fn.return_value = f.name + c._save_local_state('some data') + self.assertEqual(json.load(f), 'some data') + + def test_release_granted(self): + c = coordinator.BaseCoordinator() + unit = hookenv.local_unit() + c.requests = {unit: {'lock1': sentinel.ts, 'lock2': sentinel.ts}, + 'foo/2': {'lock1': sentinel.ts}} + c.grants = {unit: {'lock1': sentinel.ts}, + 'foo/2': {'lock1': sentinel.ts}} + # The granted lock for the local unit is released. + c._release_granted() + self.assertDictEqual(c.requests, {unit: {'lock2': sentinel.ts}, + 'foo/2': {'lock1': sentinel.ts}}) + + def test_implicit_peer_relation_name(self): + self.assertEqual(coordinator._implicit_peer_relation_name(), + 'cluster') + + def test_default_grant(self): + c = coordinator.Serial() + # Lock not granted. First in the queue. + self.assertTrue(c.default_grant(sentinel.lock, sentinel.u1, + set(), [sentinel.u1, sentinel.u2])) + + # Lock not granted. Later in the queue. 
+ self.assertFalse(c.default_grant(sentinel.lock, sentinel.u1, + set(), [sentinel.u2, sentinel.u1])) + + # Lock already granted + self.assertFalse(c.default_grant(sentinel.lock, sentinel.u1, + set([sentinel.u2]), [sentinel.u1])) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/core/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/core/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/core/templates/cloud_controller_ng.yml b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/core/templates/cloud_controller_ng.yml new file mode 100644 index 0000000000000000000000000000000000000000..7f72f899dc890e3604e7bac51659ad09915c6c6c --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/core/templates/cloud_controller_ng.yml @@ -0,0 +1,173 @@ +--- +# TODO cc_ip cc public ip +local_route: {{ domain }} +port: {{ cc_port }} +pid_filename: /var/vcap/sys/run/cloud_controller_ng/cloud_controller_ng.pid +development_mode: false + +message_bus_servers: + - nats://{{ nats['user'] }}:{{ nats['password'] }}@{{ nats['address'] }}:{{ nats['port'] }} + +external_domain: + - api.{{ domain }} + +system_domain_organization: {{ default_organization }} +system_domain: {{ domain }} +app_domains: [ {{ domain }} ] +srv_api_uri: http://api.{{ domain }} + +default_app_memory: 1024 + +cc_partition: default + +bootstrap_admin_email: admin@{{ default_organization }} + +bulk_api: + auth_user: bulk_api + auth_password: "Password" + +nginx: + use_nginx: false + instance_socket: "/var/vcap/sys/run/cloud_controller_ng/cloud_controller.sock" + +index: 1 +name: cloud_controller_ng + +info: + name: vcap + build: "2222" + version: 2 + support_address: http://support.cloudfoundry.com + description: Cloud Foundry sponsored by Pivotal + api_version: 2.0.0 + + +directories: + tmpdir: /var/vcap/data/cloud_controller_ng/tmp + + +logging: + file: /var/vcap/sys/log/cloud_controller_ng/cloud_controller_ng.log + + syslog: vcap.cloud_controller_ng + + level: debug2 + max_retries: 1 + + + + + +db: &db + database: sqlite:///var/lib/cloudfoundry/cfcloudcontroller/db/cc.db + max_connections: 25 + pool_timeout: 10 + log_level: debug2 + + +login: + url: http://uaa.{{ domain }} + +uaa: + url: http://uaa.{{ domain }} + resource_id: cloud_controller + #symmetric_secret: cc-secret + verification_key: | + -----BEGIN PUBLIC KEY----- + MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDHFr+KICms+tuT1OXJwhCUmR2d + KVy7psa8xzElSyzqx7oJyfJ1JZyOzToj9T5SfTIq396agbHJWVfYphNahvZ/7uMX + qHxf+ZH9BL1gk9Y6kCnbM5R60gfwjyW1/dQPjOzn9N394zd2FJoFHwdq9Qs0wBug + spULZVNRxq7veq/fzwIDAQAB + -----END PUBLIC KEY----- + +# App staging parameters +staging: + max_staging_runtime: 900 + auth: + user: + password: "Password" + +maximum_health_check_timeout: 180 + +runtimes_file: /var/lib/cloudfoundry/cfcloudcontroller/jobs/config/runtimes.yml +stacks_file: /var/lib/cloudfoundry/cfcloudcontroller/jobs/config/stacks.yml + +quota_definitions: + free: + non_basic_services_allowed: false + total_services: 2 + total_routes: 1000 + memory_limit: 1024 + paid: + non_basic_services_allowed: true + total_services: 32 + total_routes: 1000 + memory_limit: 204800 + runaway: + non_basic_services_allowed: true + total_services: 500 + total_routes: 1000 + memory_limit: 204800 + trial: + non_basic_services_allowed: false + 
total_services: 10 + memory_limit: 2048 + total_routes: 1000 + trial_db_allowed: true + +default_quota_definition: free + +resource_pool: + minimum_size: 65536 + maximum_size: 536870912 + resource_directory_key: cc-resources + + cdn: + uri: + key_pair_id: + private_key: "" + + fog_connection: {"provider":"Local","local_root":"/var/vcap/nfs/store"} + +packages: + app_package_directory_key: cc-packages + + cdn: + uri: + key_pair_id: + private_key: "" + + fog_connection: {"provider":"Local","local_root":"/var/vcap/nfs/store"} + +droplets: + droplet_directory_key: cc-droplets + + cdn: + uri: + key_pair_id: + private_key: "" + + fog_connection: {"provider":"Local","local_root":"/var/vcap/nfs/store"} + +buildpacks: + buildpack_directory_key: cc-buildpacks + + cdn: + uri: + key_pair_id: + private_key: "" + + fog_connection: {"provider":"Local","local_root":"/var/vcap/nfs/store"} + +db_encryption_key: Password + +trial_db: + guid: "78ad16cf-3c22-4427-a982-b9d35d746914" + +tasks_disabled: false +hm9000_noop: true +flapping_crash_count_threshold: 3 + +disable_custom_buildpacks: false + +broker_client_timeout_seconds: 60 diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/core/templates/fake_cc.yml b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/core/templates/fake_cc.yml new file mode 100644 index 0000000000000000000000000000000000000000..5e3f8b6cd17d1c0426c8d3ae11265da881796104 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/core/templates/fake_cc.yml @@ -0,0 +1,3 @@ +host: {{nats['host']}} +port: {{nats['port']}} +domain: {{router['domain']}} diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/core/templates/nginx.conf b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/core/templates/nginx.conf new file mode 100644 index 0000000000000000000000000000000000000000..95e02bf3c85564e768e810e86995deb1a3bc359a --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/core/templates/nginx.conf @@ -0,0 +1,154 @@ +# deployment cloudcontroller nginx.conf +#user vcap vcap; + +error_log /var/vcap/sys/log/nginx_ccng/nginx.error.log; +pid /var/vcap/sys/run/nginx_ccng/nginx.pid; + +events { + worker_connections 8192; + use epoll; +} + +http { + include mime.types; + default_type text/html; + server_tokens off; + variables_hash_max_size 1024; + + log_format main '$host - [$time_local] ' + '"$request" $status $bytes_sent ' + '"$http_referer" "$http_#user_agent" ' + '$proxy_add_x_forwarded_for response_time:$upstream_response_time'; + + access_log /var/vcap/sys/log/nginx_ccng/nginx.access.log main; + + sendfile on; #enable use of sendfile() + tcp_nopush on; + tcp_nodelay on; #disable nagel's algorithm + + keepalive_timeout 75 20; #inherited from router + + client_max_body_size 256M; #already enforced upstream/but doesn't hurt. 
+ + upstream cloud_controller { + server unix:/var/vcap/sys/run/cloud_controller_ng/cloud_controller.sock; + } + + server { + listen {{ nginx_port }}; + server_name _; + server_name_in_redirect off; + proxy_send_timeout 300; + proxy_read_timeout 300; + + # proxy and log all CC traffic + location / { + access_log /var/vcap/sys/log/nginx_ccng/nginx.access.log main; + proxy_buffering off; + proxy_set_header Host $host; + proxy_set_header X-Real_IP $remote_addr; + proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; + proxy_redirect off; + proxy_connect_timeout 10; + proxy_pass http://cloud_controller; + } + + + # used for x-accel-redirect uri://location/foo.txt + # nginx will serve the file root || location || foo.txt + location /droplets/ { + internal; + root /var/vcap/nfs/store; + } + + + + # used for x-accel-redirect uri://location/foo.txt + # nginx will serve the file root || location || foo.txt + location /cc-packages/ { + internal; + root /var/vcap/nfs/store; + } + + + # used for x-accel-redirect uri://location/foo.txt + # nginx will serve the file root || location || foo.txt + location /cc-droplets/ { + internal; + root /var/vcap/nfs/store; + } + + + location ~ (/apps/.*/application|/v2/apps/.*/bits|/services/v\d+/configurations/.*/serialized/data|/v2/buildpacks/.*/bits) { + # Pass altered request body to this location + upload_pass @cc_uploads; + upload_pass_args on; + + # Store files to this directory + upload_store /var/vcap/data/cloud_controller_ng/tmp/uploads; + + # No limit for output body forwarded to CC + upload_max_output_body_len 0; + + # Allow uploaded files to be read only by #user + #upload_store_access #user:r; + + # Set specified fields in request body + upload_set_form_field "${upload_field_name}_name" $upload_file_name; + upload_set_form_field "${upload_field_name}_path" $upload_tmp_path; + + #forward the following fields from existing body + upload_pass_form_field "^resources$"; + upload_pass_form_field "^_method$"; + + #on any error, delete uploaded files. + upload_cleanup 400-505; + } + + location ~ /staging/(buildpack_cache|droplets)/.*/upload { + + # Allow download the droplets and buildpacks + if ($request_method = GET){ + proxy_pass http://cloud_controller; + } + + # Pass along auth header + set $auth_header $upstream_http_x_auth; + proxy_set_header Authorization $auth_header; + + # Pass altered request body to this location + upload_pass @cc_uploads; + + # Store files to this directory + upload_store /var/vcap/data/cloud_controller_ng/tmp/staged_droplet_uploads; + + # Allow uploaded files to be read only by #user + upload_store_access user:r; + + # Set specified fields in request body + upload_set_form_field "droplet_path" $upload_tmp_path; + + #on any error, delete uploaded files. 
+ upload_cleanup 400-505; + } + + # Pass altered request body to a backend + location @cc_uploads { + proxy_pass http://unix:/var/vcap/sys/run/cloud_controller_ng/cloud_controller.sock; + } + + location ~ ^/internal_redirect/(.*){ + # only allow internal redirects + internal; + + set $download_url $1; + + #have to manualy pass along auth header + set $auth_header $upstream_http_x_auth; + proxy_set_header Authorization $auth_header; + + # Download the file and send it to client + proxy_pass $download_url; + } + } +} diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/core/templates/test.conf b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/core/templates/test.conf new file mode 100644 index 0000000000000000000000000000000000000000..bb02adc4a09a2dc7810351bf90a84f6a01d2ee4b --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/core/templates/test.conf @@ -0,0 +1,3 @@ +something +listen {{nginx_port}} +something else diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/core/test_files.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/core/test_files.py new file mode 100644 index 0000000000000000000000000000000000000000..8b443a23d0bad1515f874e69734880932193778a --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/core/test_files.py @@ -0,0 +1,32 @@ +#!/usr/bin/env python +# -*- coding: utf-8 -*- + +from charmhelpers.core import files + +import mock +import unittest +import tempfile +import os + + +class FileTests(unittest.TestCase): + + @mock.patch("subprocess.check_call") + def test_sed(self, check_call): + files.sed("/tmp/test-sed-file", "replace", "this") + check_call.assert_called_once_with( + ['sed', '-i', '-r', '-e', 's/replace/this/g', + '/tmp/test-sed-file'] + ) + + def test_sed_file(self): + tmp = tempfile.NamedTemporaryFile(mode='w', delete=False) + tmp.write("IPV6=yes") + tmp.close() + + files.sed(tmp.name, "IPV6=.*", "IPV6=no") + + with open(tmp.name) as tmp: + self.assertEquals(tmp.read(), "IPV6=no") + + os.unlink(tmp.name) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/core/test_fstab.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/core/test_fstab.py new file mode 100644 index 0000000000000000000000000000000000000000..c25844866926586e9caefcfb43b30e92ec9edf39 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/core/test_fstab.py @@ -0,0 +1,93 @@ +#!/usr/bin/env python +# -*- coding: utf-8 -*- + +from charmhelpers.core.fstab import Fstab +from nose.tools import (assert_is, + assert_is_not, + assert_equal) +import unittest +import tempfile +import os + +__author__ = 'Jorge Niedbalski R. ' + +DEFAULT_FSTAB_FILE = """/dev/sda /mnt/sda ext3 defaults 0 0 + # This is an indented comment, and the next line is entirely blank. 
+
+/dev/sdb /mnt/sdb ext3 defaults 0 0
+/dev/sdc /mnt/sdc ext3 defaults 0 0
+UUID=3af44368-c50b-4768-8e58-aff003cef8be / ext4 errors=remount-ro 0 1
+"""
+
+GENERATED_FSTAB_FILE = '\n'.join(
+    # Helper will write back with spaces instead of tabs
+    line.replace('\t', ' ') for line in DEFAULT_FSTAB_FILE.splitlines()
+    if line.strip() and not line.strip().startswith('#'))
+
+
+class FstabTest(unittest.TestCase):
+
+    def setUp(self):
+        self.tempfile = tempfile.NamedTemporaryFile('w+', delete=False)
+        self.tempfile.write(DEFAULT_FSTAB_FILE)
+        self.tempfile.close()
+        self.fstab = Fstab(path=self.tempfile.name)
+
+    def tearDown(self):
+        os.unlink(self.tempfile.name)
+
+    def test_entries(self):
+        """Test that entries are correctly read from the fstab file"""
+        assert_equal(sorted(GENERATED_FSTAB_FILE.splitlines()),
+                     sorted(str(entry) for entry in self.fstab.entries))
+
+    def test_get_entry_by_device_attr(self):
+        """Test that the get_entry_by_attr method works for the device attr"""
+        for device in ('sda', 'sdb', 'sdc', ):
+            assert_is_not(self.fstab.get_entry_by_attr('device',
+                                                       '/dev/%s' % device),
+                          None)
+
+    def test_get_entry_by_mountpoint_attr(self):
+        """Test that get_entry_by_attr works for the mountpoint attr"""
+        for mnt in ('sda', 'sdb', 'sdc', ):
+            assert_is_not(self.fstab.get_entry_by_attr('mountpoint',
+                                                       '/mnt/%s' % mnt), None)
+
+    def test_add_entry(self):
+        """Test that add_entry works for a new entry"""
+        for device in ('sdf', 'sdg', 'sdh'):
+            entry = Fstab.Entry('/dev/%s' % device, '/mnt/%s' % device, 'ext3',
+                                None)
+            assert_is_not(self.fstab.add_entry(entry), None)
+            assert_is_not(self.fstab.get_entry_by_attr(
+                'device', '/dev/%s' % device), None)
+
+        assert_is(self.fstab.add_entry(entry), False,
+                  "Check that adding an existing entry returns False")
+
+    def test_remove_entry(self):
+        """Test that remove_entry works for already existing entries"""
+        for entry in self.fstab.entries:
+            assert_is(self.fstab.remove_entry(entry), True)
+
+        assert_equal(len([entry for entry in self.fstab.entries]), 0)
+        assert_equal(self.fstab.add_entry(entry), entry)
+        assert_equal(len([entry for entry in self.fstab.entries]), 1)
+
+    def test_assert_remove_add_all(self):
+        """Test that removing and re-adding all the entries works"""
+        for entry in self.fstab.entries:
+            assert_is(self.fstab.remove_entry(entry), True)
+
+        for device in ('sda', 'sdb', 'sdc', ):
+            self.fstab.add_entry(
+                Fstab.Entry('/dev/%s' % device, '/mnt/%s' % device, 'ext3',
+                            None))
+
+        self.fstab.add_entry(Fstab.Entry(
+            'UUID=3af44368-c50b-4768-8e58-aff003cef8be',
+            '/', 'ext4', 'errors=remount-ro', 0, 1))
+
+        assert_equal(sorted(GENERATED_FSTAB_FILE.splitlines()),
+                     sorted(str(entry) for entry in self.fstab.entries))
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/core/test_hookenv.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/core/test_hookenv.py
new file mode 100644
index 0000000000000000000000000000000000000000..6d83a7a946011584b373e7a0816dbbbe5434f5a2
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/core/test_hookenv.py
@@ -0,0 +1,2378 @@
+import os
+import json
+from subprocess import CalledProcessError
+import shutil
+import tempfile
+import types
+from mock import call, MagicMock, mock_open, patch, sentinel
+from testtools import TestCase
+from enum import Enum
+import yaml
+
+import six
+import io
+
+from charmhelpers.core import hookenv
+
+if six.PY3:
+    import pickle
+else:
+    import cPickle as pickle
+
+
+CHARM_METADATA = 
b"""name: testmock +summary: test mock summary +description: test mock description +requires: + testreqs: + interface: mock +provides: + testprov: + interface: mock +peers: + testpeer: + interface: mock +""" + + +def _clean_globals(): + hookenv.cache.clear() + del hookenv._atstart[:] + del hookenv._atexit[:] + + +class ConfigTest(TestCase): + def setUp(self): + super(ConfigTest, self).setUp() + + _clean_globals() + self.addCleanup(_clean_globals) + + self.charm_dir = tempfile.mkdtemp() + self.addCleanup(lambda: shutil.rmtree(self.charm_dir)) + + patcher = patch.object(hookenv, 'charm_dir', lambda: self.charm_dir) + self.addCleanup(patcher.stop) + patcher.start() + + def test_init(self): + d = dict(foo='bar') + c = hookenv.Config(d) + + self.assertEqual(c['foo'], 'bar') + self.assertEqual(c._prev_dict, None) + + def test_init_empty_state_file(self): + d = dict(foo='bar') + c = hookenv.Config(d) + + state_path = os.path.join(self.charm_dir, hookenv.Config.CONFIG_FILE_NAME) + with open(state_path, 'w') as f: + f.close() + + self.assertEqual(c['foo'], 'bar') + self.assertEqual(c._prev_dict, None) + self.assertEqual(c.path, state_path) + + def test_init_invalid_state_file(self): + d = dict(foo='bar') + + state_path = os.path.join(self.charm_dir, hookenv.Config.CONFIG_FILE_NAME) + with open(state_path, 'w') as f: + f.write('blah') + + c = hookenv.Config(d) + + self.assertEqual(c['foo'], 'bar') + self.assertEqual(c._prev_dict, None) + self.assertEqual(c.path, state_path) + + def test_load_previous(self): + d = dict(foo='bar') + c = hookenv.Config() + + with open(c.path, 'w') as f: + json.dump(d, f) + + c.load_previous() + self.assertEqual(c._prev_dict, d) + + def test_load_previous_alternate_path(self): + d = dict(foo='bar') + c = hookenv.Config() + + alt_path = os.path.join(self.charm_dir, '.alt-config') + with open(alt_path, 'w') as f: + json.dump(d, f) + + c.load_previous(path=alt_path) + self.assertEqual(c._prev_dict, d) + self.assertEqual(c.path, alt_path) + + def test_changed_without_prev_dict(self): + d = dict(foo='bar') + c = hookenv.Config(d) + + self.assertTrue(c.changed('foo')) + + def test_changed_with_prev_dict(self): + c = hookenv.Config(dict(foo='bar', a='b')) + c.save() + c = hookenv.Config(dict(foo='baz', a='b')) + + self.assertTrue(c.changed('foo')) + self.assertFalse(c.changed('a')) + + def test_previous_without_prev_dict(self): + c = hookenv.Config() + + self.assertEqual(c.previous('foo'), None) + + def test_previous_with_prev_dict(self): + c = hookenv.Config(dict(foo='bar')) + c.save() + c = hookenv.Config(dict(foo='baz', a='b')) + + self.assertEqual(c.previous('foo'), 'bar') + self.assertEqual(c.previous('a'), None) + + def test_save_without_prev_dict(self): + c = hookenv.Config(dict(foo='bar')) + c.save() + + with open(c.path, 'r') as f: + self.assertEqual(c, json.load(f)) + self.assertEqual(c, dict(foo='bar')) + self.assertEqual(os.stat(c.path).st_mode & 0o777, 0o600) + + def test_save_with_prev_dict(self): + c = hookenv.Config(dict(foo='bar')) + c.save() + c = hookenv.Config(dict(a='b')) + c.save() + + with open(c.path, 'r') as f: + self.assertEqual(c, json.load(f)) + self.assertEqual(c, dict(foo='bar', a='b')) + self.assertEqual(os.stat(c.path).st_mode & 0o777, 0o600) + + def test_deep_change(self): + # After loading stored data into our previous dictionary, + # it gets copied into our current dictionary. If this is not + # a deep copy, then mappings and lists will be shared instances + # and changes will not be detected. 
+ c = hookenv.Config(dict(ll=[])) + c.save() + c = hookenv.Config() + c['ll'].append(42) + self.assertTrue(c.changed('ll'), 'load_previous() did not deepcopy') + + def test_getitem(self): + c = hookenv.Config(dict(foo='bar')) + c.save() + c = hookenv.Config(dict(baz='bam')) + + self.assertRaises(KeyError, lambda: c['missing']) + self.assertEqual(c['foo'], 'bar') + self.assertEqual(c['baz'], 'bam') + + def test_get(self): + c = hookenv.Config(dict(foo='bar')) + c.save() + c = hookenv.Config(dict(baz='bam')) + + self.assertIsNone(c.get('missing')) + self.assertIs(c.get('missing', sentinel.missing), sentinel.missing) + self.assertEqual(c.get('foo'), 'bar') + self.assertEqual(c.get('baz'), 'bam') + + def test_keys(self): + c = hookenv.Config(dict(foo='bar')) + c["baz"] = "bar" + self.assertEqual(sorted([six.u("foo"), "baz"]), sorted(c.keys())) + + def test_in(self): + # Test behavior of the in operator. + + prev_path = os.path.join(hookenv.charm_dir(), + hookenv.Config.CONFIG_FILE_NAME) + with open(prev_path, 'w') as f: + json.dump(dict(user='one'), f) + c = hookenv.Config(dict(charm='one')) + + # Items that exist in the dict exist. Items that don't don't. + self.assertTrue('user' in c) + self.assertTrue('charm' in c) + self.assertFalse('bar' in c) + + # Adding items works as expected. + c['user'] = 'two' + c['charm'] = 'two' + c['bar'] = 'two' + self.assertTrue('user' in c) + self.assertTrue('charm' in c) + self.assertTrue('bar' in c) + c.save() + self.assertTrue('user' in c) + self.assertTrue('charm' in c) + self.assertTrue('bar' in c) + + # Removing items works as expected. + del c['user'] + del c['charm'] + self.assertTrue('user' not in c) + self.assertTrue('charm' not in c) + c.save() + self.assertTrue('user' not in c) + self.assertTrue('charm' not in c) + + +class SerializableTest(TestCase): + def test_serializes_object_to_json(self): + foo = { + 'bar': 'baz', + } + wrapped = hookenv.Serializable(foo) + self.assertEqual(wrapped.json(), json.dumps(foo)) + + def test_serializes_object_to_yaml(self): + foo = { + 'bar': 'baz', + } + wrapped = hookenv.Serializable(foo) + self.assertEqual(wrapped.yaml(), yaml.dump(foo)) + + def test_gets_attribute_from_inner_object_as_dict(self): + foo = { + 'bar': 'baz', + } + wrapped = hookenv.Serializable(foo) + + self.assertEqual(wrapped.bar, 'baz') + + def test_raises_error_from_inner_object_as_dict(self): + foo = { + 'bar': 'baz', + } + wrapped = hookenv.Serializable(foo) + + self.assertRaises(AttributeError, getattr, wrapped, 'baz') + + def test_dict_methods_from_inner_object(self): + foo = { + 'bar': 'baz', + } + wrapped = hookenv.Serializable(foo) + for meth in ('keys', 'values', 'items'): + self.assertEqual(sorted(list(getattr(wrapped, meth)())), + sorted(list(getattr(foo, meth)()))) + + self.assertEqual(wrapped.get('bar'), foo.get('bar')) + self.assertEqual(wrapped.get('baz', 42), foo.get('baz', 42)) + self.assertIn('bar', wrapped) + + def test_get_gets_from_inner_object(self): + foo = { + 'bar': 'baz', + } + wrapped = hookenv.Serializable(foo) + + self.assertEqual(wrapped.get('foo'), None) + self.assertEqual(wrapped.get('bar'), 'baz') + self.assertEqual(wrapped.get('zoo', 'bla'), 'bla') + + def test_gets_inner_object(self): + foo = { + 'bar': 'baz', + } + wrapped = hookenv.Serializable(foo) + + self.assertIs(wrapped.data, foo) + + def test_pickle(self): + foo = {'bar': 'baz'} + wrapped = hookenv.Serializable(foo) + pickled = pickle.dumps(wrapped) + unpickled = pickle.loads(pickled) + + self.assert_(isinstance(unpickled, hookenv.Serializable)) + 
self.assertEqual(unpickled, foo) + + def test_boolean(self): + true_dict = {'foo': 'bar'} + false_dict = {} + + self.assertIs(bool(hookenv.Serializable(true_dict)), True) + self.assertIs(bool(hookenv.Serializable(false_dict)), False) + + def test_equality(self): + foo = {'bar': 'baz'} + bar = {'baz': 'bar'} + wrapped_foo = hookenv.Serializable(foo) + + self.assertEqual(wrapped_foo, foo) + self.assertEqual(wrapped_foo, wrapped_foo) + self.assertNotEqual(wrapped_foo, bar) + + +class HelpersTest(TestCase): + def setUp(self): + super(HelpersTest, self).setUp() + _clean_globals() + self.addCleanup(_clean_globals) + + @patch('subprocess.call') + def test_logs_messages_to_juju_with_default_level(self, mock_call): + hookenv.log('foo') + + mock_call.assert_called_with(['juju-log', 'foo']) + + @patch('subprocess.call') + def test_logs_messages_object(self, mock_call): + hookenv.log(object) + mock_call.assert_called_with(['juju-log', repr(object)]) + + @patch('subprocess.call') + def test_logs_messages_with_alternative_levels(self, mock_call): + alternative_levels = [ + hookenv.CRITICAL, + hookenv.ERROR, + hookenv.WARNING, + hookenv.INFO, + ] + + for level in alternative_levels: + hookenv.log('foo', level) + mock_call.assert_called_with(['juju-log', '-l', level, 'foo']) + + @patch('subprocess.call') + def test_function_log_message(self, mock_call): + hookenv.function_log('foo') + mock_call.assert_called_with(['function-log', 'foo']) + + @patch('subprocess.call') + def test_function_log_message_object(self, mock_call): + hookenv.function_log(object) + mock_call.assert_called_with(['function-log', repr(object)]) + + @patch('charmhelpers.core.hookenv._cache_config', None) + @patch('charmhelpers.core.hookenv.charm_dir') + @patch('subprocess.check_output') + def test_gets_charm_config_with_scope(self, check_output, charm_dir): + check_output.return_value = json.dumps(dict(baz='bar')).encode('UTF-8') + charm_dir.return_value = '/nonexistent' + + result = hookenv.config(scope='baz') + + self.assertEqual(result, 'bar') + check_output.assert_called_with(['config-get', '--all', + '--format=json']) + + # The result can be used like a string + self.assertEqual(result[1], 'a') + + # ... 
because the result is actually a string + self.assert_(isinstance(result, six.string_types)) + + @patch('charmhelpers.core.hookenv.log', lambda *args, **kwargs: None) + @patch('charmhelpers.core.hookenv._cache_config', None) + @patch('subprocess.check_output') + def test_gets_missing_charm_config_with_scope(self, check_output): + check_output.return_value = b'' + + result = hookenv.config(scope='baz') + + self.assertEqual(result, None) + check_output.assert_called_with(['config-get', '--all', + '--format=json']) + + @patch('charmhelpers.core.hookenv._cache_config', None) + @patch('charmhelpers.core.hookenv.charm_dir') + @patch('subprocess.check_output') + def test_gets_config_without_scope(self, check_output, charm_dir): + check_output.return_value = json.dumps(dict(baz='bar')).encode('UTF-8') + charm_dir.return_value = '/nonexistent' + + result = hookenv.config() + + self.assertIsInstance(result, hookenv.Config) + self.assertEqual(result['baz'], 'bar') + check_output.assert_called_with(['config-get', '--all', + '--format=json']) + + @patch('charmhelpers.core.hookenv.log') + @patch('charmhelpers.core.hookenv._cache_config', None) + @patch('charmhelpers.core.hookenv.charm_dir') + @patch('subprocess.check_output') + def test_gets_charm_config_invalid_json_with_scope(self, + check_output, + charm_dir, + log): + check_output.return_value = '{"invalid: "json"}'.encode('UTF-8') + charm_dir.return_value = '/nonexistent' + + result = hookenv.config(scope='invalid') + + self.assertEqual(result, None) + cmd_line = ['config-get', '--all', '--format=json'] + check_output.assert_called_with(cmd_line) + log.assert_called_with( + 'Unable to parse output from config-get: ' + 'config_cmd_line="{}" message="{}"' + .format(str(cmd_line), + "Expecting ':' delimiter: line 1 column 13 (char 12)"), + level=hookenv.ERROR, + ) + + @patch('charmhelpers.core.hookenv.log') + @patch('charmhelpers.core.hookenv._cache_config', None) + @patch('charmhelpers.core.hookenv.charm_dir') + @patch('subprocess.check_output') + def test_gets_charm_config_invalid_utf8_with_scope(self, + check_output, + charm_dir, + log): + check_output.return_value = b'{"invalid: "json"}\x9D' + charm_dir.return_value = '/nonexistent' + + result = hookenv.config(scope='invalid') + + self.assertEqual(result, None) + cmd_line = ['config-get', '--all', '--format=json'] + check_output.assert_called_with(cmd_line) + try: + # Python3 + log.assert_called_with( + 'Unable to parse output from config-get: ' + 'config_cmd_line="{}" message="{}"' + .format(str(cmd_line), + "'utf8' codec can't decode byte 0x9d in position " + "18: invalid start byte"), + level=hookenv.ERROR, + ) + except AssertionError: + # Python2.7 + log.assert_called_with( + 'Unable to parse output from config-get: ' + 'config_cmd_line="{}" message="{}"' + .format(str(cmd_line), + "'utf-8' codec can't decode byte 0x9d in position " + "18: invalid start byte"), + level=hookenv.ERROR, + ) + + @patch('charmhelpers.core.hookenv._cache_config', {'baz': 'bar'}) + @patch('charmhelpers.core.hookenv.charm_dir') + @patch('subprocess.check_output') + def test_gets_config_from_cache_without_scope(self, + check_output, + charm_dir): + charm_dir.return_value = '/nonexistent' + + result = hookenv.config() + + self.assertEqual(result['baz'], 'bar') + self.assertFalse(check_output.called) + + @patch('charmhelpers.core.hookenv._cache_config', {'baz': 'bar'}) + @patch('charmhelpers.core.hookenv.charm_dir') + @patch('subprocess.check_output') + def test_gets_config_from_cache_with_scope(self, + check_output, + 
charm_dir): + charm_dir.return_value = '/nonexistent' + + result = hookenv.config('baz') + + self.assertEqual(result, 'bar') + + # The result can be used like a string + self.assertEqual(result[1], 'a') + + # ... because the result is actually a string + self.assert_(isinstance(result, six.string_types)) + + self.assertFalse(check_output.called) + + @patch('charmhelpers.core.hookenv._cache_config', {'foo': 'bar'}) + @patch('subprocess.check_output') + def test_gets_missing_charm_config_from_cache_with_scope(self, + check_output): + + result = hookenv.config(scope='baz') + + self.assertEqual(result, None) + self.assertFalse(check_output.called) + + @patch('charmhelpers.core.hookenv.os') + def test_gets_the_local_unit(self, os_): + os_.environ = { + 'JUJU_UNIT_NAME': 'foo', + } + + self.assertEqual(hookenv.local_unit(), 'foo') + + @patch('charmhelpers.core.hookenv.unit_get') + def test_gets_unit_public_ip(self, _unitget): + _unitget.return_value = sentinel.public_ip + self.assertEqual(sentinel.public_ip, hookenv.unit_public_ip()) + _unitget.assert_called_once_with('public-address') + + @patch('charmhelpers.core.hookenv.unit_get') + def test_gets_unit_private_ip(self, _unitget): + _unitget.return_value = sentinel.private_ip + self.assertEqual(sentinel.private_ip, hookenv.unit_private_ip()) + _unitget.assert_called_once_with('private-address') + + @patch('charmhelpers.core.hookenv.os') + def test_checks_that_is_running_in_relation_hook(self, os_): + os_.environ = { + 'JUJU_RELATION': 'foo', + } + + self.assertTrue(hookenv.in_relation_hook()) + + @patch('charmhelpers.core.hookenv.os') + def test_checks_that_is_not_running_in_relation_hook(self, os_): + os_.environ = { + 'bar': 'foo', + } + + self.assertFalse(hookenv.in_relation_hook()) + + @patch('charmhelpers.core.hookenv.os') + def test_gets_the_relation_type(self, os_): + os_.environ = { + 'JUJU_RELATION': 'foo', + } + + self.assertEqual(hookenv.relation_type(), 'foo') + + @patch('charmhelpers.core.hookenv.os') + def test_relation_type_none_if_not_in_environment(self, os_): + os_.environ = {} + self.assertEqual(hookenv.relation_type(), None) + + @patch('subprocess.check_output') + @patch('charmhelpers.core.hookenv.relation_type') + def test_gets_relation_ids(self, relation_type, check_output): + ids = [1, 2, 3] + check_output.return_value = json.dumps(ids).encode('UTF-8') + reltype = 'foo' + relation_type.return_value = reltype + + result = hookenv.relation_ids() + + self.assertEqual(result, ids) + check_output.assert_called_with(['relation-ids', '--format=json', + reltype]) + + @patch('subprocess.check_output') + @patch('charmhelpers.core.hookenv.relation_type') + def test_gets_relation_ids_empty_array(self, relation_type, check_output): + ids = [] + check_output.return_value = json.dumps(None).encode('UTF-8') + reltype = 'foo' + relation_type.return_value = reltype + + result = hookenv.relation_ids() + + self.assertEqual(result, ids) + check_output.assert_called_with(['relation-ids', '--format=json', + reltype]) + + @patch('subprocess.check_output') + @patch('charmhelpers.core.hookenv.relation_type') + def test_relation_ids_no_relation_type(self, relation_type, check_output): + ids = [1, 2, 3] + check_output.return_value = json.dumps(ids).encode('UTF-8') + relation_type.return_value = None + + result = hookenv.relation_ids() + + self.assertEqual(result, []) + + @patch('subprocess.check_output') + @patch('charmhelpers.core.hookenv.relation_type') + def test_gets_relation_ids_for_type(self, relation_type, check_output): + ids = [1, 2, 3] + 
check_output.return_value = json.dumps(ids).encode('UTF-8') + reltype = 'foo' + + result = hookenv.relation_ids(reltype) + + self.assertEqual(result, ids) + check_output.assert_called_with(['relation-ids', '--format=json', + reltype]) + self.assertFalse(relation_type.called) + + @patch('subprocess.check_output') + @patch('charmhelpers.core.hookenv.relation_id') + def test_gets_related_units(self, relation_id, check_output): + relid = 123 + units = ['foo', 'bar'] + relation_id.return_value = relid + check_output.return_value = json.dumps(units).encode('UTF-8') + + result = hookenv.related_units() + + self.assertEqual(result, units) + check_output.assert_called_with(['relation-list', '--format=json', + '-r', relid]) + + @patch('subprocess.check_output') + @patch('charmhelpers.core.hookenv.relation_id') + def test_gets_related_units_empty_array(self, relation_id, check_output): + relid = str(123) + units = [] + relation_id.return_value = relid + check_output.return_value = json.dumps(None).encode('UTF-8') + + result = hookenv.related_units() + + self.assertEqual(result, units) + check_output.assert_called_with(['relation-list', '--format=json', + '-r', relid]) + + @patch('subprocess.check_output') + @patch('charmhelpers.core.hookenv.relation_id') + def test_related_units_no_relation(self, relation_id, check_output): + units = ['foo', 'bar'] + relation_id.return_value = None + check_output.return_value = json.dumps(units).encode('UTF-8') + + result = hookenv.related_units() + + self.assertEqual(result, units) + check_output.assert_called_with(['relation-list', '--format=json']) + + @patch('subprocess.check_output') + @patch('charmhelpers.core.hookenv.relation_id') + def test_gets_related_units_for_id(self, relation_id, check_output): + relid = 123 + units = ['foo', 'bar'] + check_output.return_value = json.dumps(units).encode('UTF-8') + + result = hookenv.related_units(relid) + + self.assertEqual(result, units) + check_output.assert_called_with(['relation-list', '--format=json', + '-r', relid]) + self.assertFalse(relation_id.called) + + @patch('charmhelpers.core.hookenv.local_unit') + @patch('charmhelpers.core.hookenv.goal_state') + @patch('charmhelpers.core.hookenv.has_juju_version') + def test_gets_expected_peer_units(self, has_juju_version, goal_state, + local_unit): + has_juju_version.return_value = True + goal_state.return_value = { + 'units': { + 'keystone/0': { + 'status': 'active', + 'since': '2018-09-27 11:38:28Z', + }, + 'keystone/1': { + 'status': 'active', + 'since': '2018-09-27 11:39:23Z', + }, + }, + } + local_unit.return_value = 'keystone/0' + + result = hookenv.expected_peer_units() + + self.assertIsInstance(result, types.GeneratorType) + self.assertEqual(sorted(result), ['keystone/1']) + has_juju_version.assertCalledOnceWith("2.4.0") + local_unit.assertCalledOnceWith() + + @patch('charmhelpers.core.hookenv.has_juju_version') + def test_gets_expected_peer_units_wrong_version(self, has_juju_version): + has_juju_version.return_value = False + + def x(): + # local helper function to make testtools.TestCase.assertRaises + # work with generator + list(hookenv.expected_peer_units()) + + self.assertRaises(NotImplementedError, x) + has_juju_version.assertCalledOnceWith("2.4.0") + + @patch('charmhelpers.core.hookenv.goal_state') + @patch('charmhelpers.core.hookenv.relation_type') + @patch('charmhelpers.core.hookenv.has_juju_version') + def test_gets_expected_related_units(self, has_juju_version, relation_type, + goal_state): + has_juju_version.return_value = True + 
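+        # The goal_state payload below deliberately mixes the application
+        # entry ('glance') with unit entries ('glance/0', 'glance/1'); only
+        # the names containing '/' are expected back, which is presumably
+        # what the generator filters on.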
relation_type.return_value = 'identity-service'
+ goal_state.return_value = {
+ 'relations': {
+ 'identity-service': {
+ 'glance': {
+ 'status': 'joined',
+ 'since': '2018-09-27 11:37:16Z'
+ },
+ 'glance/0': {
+ 'status': 'active',
+ 'since': '2018-09-27 11:27:19Z'
+ },
+ 'glance/1': {
+ 'status': 'active',
+ 'since': '2018-09-27 11:27:34Z'
+ },
+ },
+ },
+ }
+
+ result = hookenv.expected_related_units()
+
+ self.assertIsInstance(result, types.GeneratorType)
+ self.assertEqual(sorted(result), ['glance/0', 'glance/1'])
+
+ @patch('charmhelpers.core.hookenv.goal_state')
+ @patch('charmhelpers.core.hookenv.has_juju_version')
+ def test_gets_expected_related_units_for_type(self, has_juju_version,
+ goal_state):
+ has_juju_version.return_value = True
+ goal_state.return_value = {
+ 'relations': {
+ 'identity-service': {
+ 'glance': {
+ 'status': 'joined',
+ 'since': '2018-09-27 11:37:16Z'
+ },
+ 'glance/0': {
+ 'status': 'active',
+ 'since': '2018-09-27 11:27:19Z'
+ },
+ 'glance/1': {
+ 'status': 'active',
+ 'since': '2018-09-27 11:27:34Z'
+ },
+ },
+ },
+ }
+
+ result = hookenv.expected_related_units('identity-service')
+
+ self.assertIsInstance(result, types.GeneratorType)
+ self.assertEqual(sorted(result), ['glance/0', 'glance/1'])
+
+ @patch('charmhelpers.core.hookenv.has_juju_version')
+ def test_gets_expected_related_units_wrong_version(self, has_juju_version):
+ has_juju_version.return_value = False
+
+ def x():
+ # local helper function to make testtools.TestCase.assertRaises
+ # work with a generator
+ list(hookenv.expected_related_units())
+
+ self.assertRaises(NotImplementedError, x)
+ has_juju_version.assert_called_once_with("2.4.4")
+
+ @patch('charmhelpers.core.hookenv.os')
+ def test_gets_the_remote_unit(self, os_):
+ os_.environ = {
+ 'JUJU_REMOTE_UNIT': 'foo',
+ }
+
+ self.assertEqual(hookenv.remote_unit(), 'foo')
+
+ @patch('charmhelpers.core.hookenv.os')
+ def test_no_remote_unit(self, os_):
+ os_.environ = {}
+ self.assertEqual(hookenv.remote_unit(), None)
+
+ @patch('charmhelpers.core.hookenv.remote_unit')
+ @patch('charmhelpers.core.hookenv.relation_get')
+ def test_gets_relation_for_unit(self, relation_get, remote_unit):
+ unit = 'foo-unit'
+ raw_relation = {
+ 'foo': 'bar',
+ }
+ remote_unit.return_value = unit
+ relation_get.return_value = raw_relation
+
+ result = hookenv.relation_for_unit()
+
+ self.assertEqual(result['__unit__'], unit)
+ self.assertEqual(result['foo'], 'bar')
+ relation_get.assert_called_with(unit=unit, rid=None)
+
+ @patch('charmhelpers.core.hookenv.remote_unit')
+ @patch('charmhelpers.core.hookenv.relation_get')
+ def test_gets_relation_for_unit_with_list(self, relation_get, remote_unit):
+ unit = 'foo-unit'
+ raw_relation = {
+ 'foo-list': 'one two three',
+ }
+ remote_unit.return_value = unit
+ relation_get.return_value = raw_relation
+
+ result = hookenv.relation_for_unit()
+
+ self.assertEqual(result['__unit__'], unit)
+ self.assertEqual(result['foo-list'], ['one', 'two', 'three'])
+ relation_get.assert_called_with(unit=unit, rid=None)
+
+ @patch('charmhelpers.core.hookenv.remote_unit')
+ @patch('charmhelpers.core.hookenv.relation_get')
+ def test_gets_relation_for_specific_unit(self, relation_get, remote_unit):
+ unit = 'foo-unit'
+ raw_relation = {
+ 'foo': 'bar',
+ }
+ relation_get.return_value = raw_relation
+
+ result = hookenv.relation_for_unit(unit)
+
+ self.assertEqual(result['__unit__'], unit)
+ self.assertEqual(result['foo'], 'bar')
+ relation_get.assert_called_with(unit=unit, rid=None)
+ self.assertFalse(remote_unit.called)
+
+
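# NOTE: the tests below exercise relations_for_id(), which (judging by the
+ # assertions) collects the relation_for_unit() settings for every related
+ # unit and tags each settings dict with the relation id under '__relid__'.
+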
@patch('charmhelpers.core.hookenv.relation_ids') + @patch('charmhelpers.core.hookenv.related_units') + @patch('charmhelpers.core.hookenv.relation_for_unit') + def test_gets_relations_for_id(self, relation_for_unit, related_units, + relation_ids): + relid = 123 + units = ['foo', 'bar'] + unit_data = [ + {'foo-item': 'bar-item'}, + {'foo-item2': 'bar-item2'}, + ] + relation_ids.return_value = relid + related_units.return_value = units + relation_for_unit.side_effect = unit_data + + result = hookenv.relations_for_id() + + self.assertEqual(result[0]['__relid__'], relid) + self.assertEqual(result[0]['foo-item'], 'bar-item') + self.assertEqual(result[1]['__relid__'], relid) + self.assertEqual(result[1]['foo-item2'], 'bar-item2') + related_units.assert_called_with(relid) + self.assertEqual(relation_for_unit.mock_calls, [ + call('foo', relid), + call('bar', relid), + ]) + + @patch('charmhelpers.core.hookenv.relation_ids') + @patch('charmhelpers.core.hookenv.related_units') + @patch('charmhelpers.core.hookenv.relation_for_unit') + def test_gets_relations_for_specific_id(self, relation_for_unit, + related_units, relation_ids): + relid = 123 + units = ['foo', 'bar'] + unit_data = [ + {'foo-item': 'bar-item'}, + {'foo-item2': 'bar-item2'}, + ] + related_units.return_value = units + relation_for_unit.side_effect = unit_data + + result = hookenv.relations_for_id(relid) + + self.assertEqual(result[0]['__relid__'], relid) + self.assertEqual(result[0]['foo-item'], 'bar-item') + self.assertEqual(result[1]['__relid__'], relid) + self.assertEqual(result[1]['foo-item2'], 'bar-item2') + related_units.assert_called_with(relid) + self.assertEqual(relation_for_unit.mock_calls, [ + call('foo', relid), + call('bar', relid), + ]) + self.assertFalse(relation_ids.called) + + @patch('charmhelpers.core.hookenv.in_relation_hook') + @patch('charmhelpers.core.hookenv.relation_type') + @patch('charmhelpers.core.hookenv.relation_ids') + @patch('charmhelpers.core.hookenv.relations_for_id') + def test_gets_relations_for_type(self, relations_for_id, relation_ids, + relation_type, in_relation_hook): + reltype = 'foo-type' + relids = [123, 234] + relations = [ + [ + {'foo': 'bar'}, + {'foo2': 'bar2'}, + ], + [ + {'FOO': 'BAR'}, + {'FOO2': 'BAR2'}, + ], + ] + is_in_relation = True + + relation_type.return_value = reltype + relation_ids.return_value = relids + relations_for_id.side_effect = relations + in_relation_hook.return_value = is_in_relation + + result = hookenv.relations_of_type() + + self.assertEqual(result[0]['__relid__'], 123) + self.assertEqual(result[0]['foo'], 'bar') + self.assertEqual(result[1]['__relid__'], 123) + self.assertEqual(result[1]['foo2'], 'bar2') + self.assertEqual(result[2]['__relid__'], 234) + self.assertEqual(result[2]['FOO'], 'BAR') + self.assertEqual(result[3]['__relid__'], 234) + self.assertEqual(result[3]['FOO2'], 'BAR2') + relation_ids.assert_called_with(reltype) + self.assertEqual(relations_for_id.mock_calls, [ + call(123), + call(234), + ]) + + @patch('charmhelpers.core.hookenv.local_unit') + @patch('charmhelpers.core.hookenv.relation_types') + @patch('charmhelpers.core.hookenv.relation_ids') + @patch('charmhelpers.core.hookenv.related_units') + @patch('charmhelpers.core.hookenv.relation_get') + def test_gets_relations(self, relation_get, related_units, + relation_ids, relation_types, local_unit): + local_unit.return_value = 'u0' + relation_types.return_value = ['t1', 't2'] + relation_ids.return_value = ['i1'] + related_units.return_value = ['u1', 'u2'] + relation_get.return_value = {'key': 
'val'} + + result = hookenv.relations() + + self.assertEqual(result, { + 't1': { + 'i1': { + 'u0': {'key': 'val'}, + 'u1': {'key': 'val'}, + 'u2': {'key': 'val'}, + }, + }, + 't2': { + 'i1': { + 'u0': {'key': 'val'}, + 'u1': {'key': 'val'}, + 'u2': {'key': 'val'}, + }, + }, + }) + + @patch('charmhelpers.core.hookenv.relation_set') + @patch('charmhelpers.core.hookenv.relation_get') + @patch('charmhelpers.core.hookenv.local_unit') + def test_relation_clear(self, local_unit, + relation_get, + relation_set): + local_unit.return_value = 'local-unit' + relation_get.return_value = { + 'private-address': '10.5.0.1', + 'foo': 'bar', + 'public-address': '146.192.45.6' + } + hookenv.relation_clear('relation:1') + relation_get.assert_called_with(rid='relation:1', + unit='local-unit') + relation_set.assert_called_with( + relation_id='relation:1', + **{'private-address': '10.5.0.1', + 'foo': None, + 'public-address': '146.192.45.6'}) + + @patch('charmhelpers.core.hookenv.relation_ids') + @patch('charmhelpers.core.hookenv.related_units') + @patch('charmhelpers.core.hookenv.relation_get') + def test_is_relation_made(self, relation_get, related_units, + relation_ids): + relation_get.return_value = 'hostname' + related_units.return_value = ['test/1'] + relation_ids.return_value = ['test:0'] + self.assertTrue(hookenv.is_relation_made('test')) + relation_get.assert_called_with('private-address', + rid='test:0', unit='test/1') + + @patch('charmhelpers.core.hookenv.relation_ids') + @patch('charmhelpers.core.hookenv.related_units') + @patch('charmhelpers.core.hookenv.relation_get') + def test_is_relation_made_multi_unit(self, relation_get, related_units, + relation_ids): + relation_get.side_effect = [None, 'hostname'] + related_units.return_value = ['test/1', 'test/2'] + relation_ids.return_value = ['test:0'] + self.assertTrue(hookenv.is_relation_made('test')) + + @patch('charmhelpers.core.hookenv.relation_ids') + @patch('charmhelpers.core.hookenv.related_units') + @patch('charmhelpers.core.hookenv.relation_get') + def test_is_relation_made_different_key(self, + relation_get, related_units, + relation_ids): + relation_get.return_value = 'hostname' + related_units.return_value = ['test/1'] + relation_ids.return_value = ['test:0'] + self.assertTrue(hookenv.is_relation_made('test', keys='auth')) + relation_get.assert_called_with('auth', + rid='test:0', unit='test/1') + + @patch('charmhelpers.core.hookenv.relation_ids') + @patch('charmhelpers.core.hookenv.related_units') + @patch('charmhelpers.core.hookenv.relation_get') + def test_is_relation_made_multiple_keys(self, + relation_get, related_units, + relation_ids): + relation_get.side_effect = ['password', 'hostname'] + related_units.return_value = ['test/1'] + relation_ids.return_value = ['test:0'] + self.assertTrue(hookenv.is_relation_made('test', + keys=['auth', 'host'])) + relation_get.assert_has_calls( + [call('auth', rid='test:0', unit='test/1'), + call('host', rid='test:0', unit='test/1')] + ) + + @patch('charmhelpers.core.hookenv.relation_ids') + @patch('charmhelpers.core.hookenv.related_units') + @patch('charmhelpers.core.hookenv.relation_get') + def test_is_relation_made_not_made(self, + relation_get, related_units, + relation_ids): + relation_get.return_value = None + related_units.return_value = ['test/1'] + relation_ids.return_value = ['test:0'] + self.assertFalse(hookenv.is_relation_made('test')) + + @patch('charmhelpers.core.hookenv.relation_ids') + @patch('charmhelpers.core.hookenv.related_units') + @patch('charmhelpers.core.hookenv.relation_get') + 
def test_is_relation_made_not_made_multiple_keys(self, + relation_get, + related_units, + relation_ids): + relation_get.side_effect = ['password', None] + related_units.return_value = ['test/1'] + relation_ids.return_value = ['test:0'] + self.assertFalse(hookenv.is_relation_made('test', + keys=['auth', 'host'])) + relation_get.assert_has_calls( + [call('auth', rid='test:0', unit='test/1'), + call('host', rid='test:0', unit='test/1')] + ) + + @patch('charmhelpers.core.hookenv.config') + @patch('charmhelpers.core.hookenv.relation_type') + @patch('charmhelpers.core.hookenv.local_unit') + @patch('charmhelpers.core.hookenv.relation_id') + @patch('charmhelpers.core.hookenv.relations') + @patch('charmhelpers.core.hookenv.relation_get') + @patch('charmhelpers.core.hookenv.os') + def test_gets_execution_environment(self, os_, relations_get, + relations, relation_id, local_unit, + relation_type, config): + config.return_value = 'some-config' + relation_type.return_value = 'some-type' + local_unit.return_value = 'some-unit' + relation_id.return_value = 'some-id' + relations.return_value = 'all-relations' + relations_get.return_value = 'some-relations' + os_.environ = 'some-environment' + + result = hookenv.execution_environment() + + self.assertEqual(result, { + 'conf': 'some-config', + 'reltype': 'some-type', + 'unit': 'some-unit', + 'relid': 'some-id', + 'rel': 'some-relations', + 'rels': 'all-relations', + 'env': 'some-environment', + }) + + @patch('charmhelpers.core.hookenv.config') + @patch('charmhelpers.core.hookenv.relation_type') + @patch('charmhelpers.core.hookenv.local_unit') + @patch('charmhelpers.core.hookenv.relation_id') + @patch('charmhelpers.core.hookenv.relations') + @patch('charmhelpers.core.hookenv.relation_get') + @patch('charmhelpers.core.hookenv.os') + def test_gets_execution_environment_no_relation( + self, os_, relations_get, relations, relation_id, + local_unit, relation_type, config): + config.return_value = 'some-config' + relation_type.return_value = 'some-type' + local_unit.return_value = 'some-unit' + relation_id.return_value = None + relations.return_value = 'all-relations' + relations_get.return_value = 'some-relations' + os_.environ = 'some-environment' + + result = hookenv.execution_environment() + + self.assertEqual(result, { + 'conf': 'some-config', + 'unit': 'some-unit', + 'rels': 'all-relations', + 'env': 'some-environment', + }) + + @patch('charmhelpers.core.hookenv.remote_service_name') + @patch('charmhelpers.core.hookenv.relation_ids') + @patch('charmhelpers.core.hookenv.os') + def test_gets_the_relation_id(self, os_, relation_ids, remote_service_name): + os_.environ = { + 'JUJU_RELATION_ID': 'foo', + } + + self.assertEqual(hookenv.relation_id(), 'foo') + + relation_ids.return_value = ['r:1', 'r:2'] + remote_service_name.side_effect = ['other', 'service'] + self.assertEqual(hookenv.relation_id('rel', 'service/0'), 'r:2') + relation_ids.assert_called_once_with('rel') + self.assertEqual(remote_service_name.call_args_list, [ + call('r:1'), + call('r:2'), + ]) + remote_service_name.side_effect = ['other', 'service'] + self.assertEqual(hookenv.relation_id('rel', 'service'), 'r:2') + + @patch('charmhelpers.core.hookenv.os') + def test_relation_id_none_if_no_env(self, os_): + os_.environ = {} + self.assertEqual(hookenv.relation_id(), None) + + @patch('subprocess.check_output') + def test_gets_relation(self, check_output): + data = {"foo": "BAR"} + check_output.return_value = json.dumps(data).encode('UTF-8') + result = hookenv.relation_get() + + 
self.assertEqual(result['foo'], 'BAR')
+ check_output.assert_called_with(['relation-get', '--format=json', '-'])
+
+ @patch('charmhelpers.core.hookenv.subprocess')
+ def test_relation_get_none(self, mock_subprocess):
+ mock_subprocess.check_output.return_value = b'null'
+
+ result = hookenv.relation_get()
+
+ self.assertIsNone(result)
+
+ @patch('charmhelpers.core.hookenv.subprocess')
+ def test_relation_get_calledprocesserror(self, mock_subprocess):
+ """relation-get called outside a relation errors without an id."""
+ mock_subprocess.check_output.side_effect = CalledProcessError(
+ 2, '/foo/bin/relation-get '
+ 'no relation id specified')
+
+ result = hookenv.relation_get()
+
+ self.assertIsNone(result)
+
+ @patch('charmhelpers.core.hookenv.subprocess')
+ def test_relation_get_calledprocesserror_other(self, mock_subprocess):
+ """relation-get can fail for other, more serious errors."""
+ mock_subprocess.check_output.side_effect = CalledProcessError(
+ 1, '/foo/bin/relation-get '
+ 'connection refused')
+
+ self.assertRaises(CalledProcessError, hookenv.relation_get)
+
+ @patch('subprocess.check_output')
+ def test_gets_relation_with_scope(self, check_output):
+ check_output.return_value = json.dumps('bar').encode('UTF-8')
+
+ result = hookenv.relation_get(attribute='baz-scope')
+
+ self.assertEqual(result, 'bar')
+ check_output.assert_called_with(['relation-get', '--format=json',
+ 'baz-scope'])
+
+ @patch('subprocess.check_output')
+ def test_gets_missing_relation_with_scope(self, check_output):
+ check_output.return_value = b""
+
+ result = hookenv.relation_get(attribute='baz-scope')
+
+ self.assertEqual(result, None)
+ check_output.assert_called_with(['relation-get', '--format=json',
+ 'baz-scope'])
+
+ @patch('subprocess.check_output')
+ def test_gets_relation_with_unit_name(self, check_output):
+ check_output.return_value = json.dumps('BAR').encode('UTF-8')
+
+ result = hookenv.relation_get(attribute='baz-scope', unit='baz-unit')
+
+ self.assertEqual(result, 'BAR')
+ check_output.assert_called_with(['relation-get', '--format=json',
+ 'baz-scope', 'baz-unit'])
+
+ @patch('charmhelpers.core.hookenv.local_unit')
+ @patch('subprocess.check_call')
+ @patch('subprocess.check_output')
+ def test_relation_set_flushes_local_unit_cache(self, check_output,
+ check_call, local_unit):
+ check_output.return_value = json.dumps('BAR').encode('UTF-8')
+ local_unit.return_value = 'baz_unit'
+ hookenv.relation_get(attribute='baz_scope', unit='baz_unit')
+ hookenv.relation_get(attribute='bar_scope')
+ self.assertTrue(len(hookenv.cache) == 2)
+ check_output.return_value = ""
+ hookenv.relation_set(baz_scope='hello')
+ # relation_set should flush any entries for local_unit
+ self.assertTrue(len(hookenv.cache) == 1)
+
+ @patch('subprocess.check_output')
+ def test_gets_relation_with_relation_id(self, check_output):
+ check_output.return_value = json.dumps('BAR').encode('UTF-8')
+
+ result = hookenv.relation_get(attribute='baz-scope', unit='baz-unit',
+ rid=123)
+
+ self.assertEqual(result, 'BAR')
+ check_output.assert_called_with(['relation-get', '--format=json', '-r',
+ 123, 'baz-scope', 'baz-unit'])
+
+ @patch('charmhelpers.core.hookenv.local_unit')
+ @patch('subprocess.check_output')
+ @patch('subprocess.check_call')
+ def test_sets_relation_with_kwargs(self, check_call_, check_output,
+ local_unit):
+ hookenv.relation_set(foo="bar")
+ check_call_.assert_called_with(['relation-set', 'foo=bar'])
+
+ @patch('charmhelpers.core.hookenv.local_unit')
+ @patch('subprocess.check_output')
+
@patch('subprocess.check_call') + def test_sets_relation_with_dict(self, check_call_, check_output, + local_unit): + hookenv.relation_set(relation_settings={"foo": "bar"}) + check_call_.assert_called_with(['relation-set', 'foo=bar']) + + @patch('charmhelpers.core.hookenv.local_unit') + @patch('subprocess.check_output') + @patch('subprocess.check_call') + def test_sets_relation_with_relation_id(self, check_call_, check_output, + local_unit): + hookenv.relation_set(relation_id="foo", bar="baz") + check_call_.assert_called_with(['relation-set', '-r', 'foo', + 'bar=baz']) + + @patch('charmhelpers.core.hookenv.local_unit') + @patch('subprocess.check_output') + @patch('subprocess.check_call') + def test_sets_relation_with_missing_value(self, check_call_, check_output, + local_unit): + hookenv.relation_set(foo=None) + check_call_.assert_called_with(['relation-set', 'foo=']) + + @patch('charmhelpers.core.hookenv.local_unit', MagicMock()) + @patch('os.remove') + @patch('subprocess.check_output') + @patch('subprocess.check_call') + def test_relation_set_file(self, check_call, check_output, remove): + """If relation-set accepts a --file parameter, it's used. + + Juju 1.23.2 introduced a --file parameter, which means you can + pass the data through a file. Not using --file would make + relation_set break if the relation data is too big. + """ + # check_output(["relation-set", "--help"]) is used to determine + # whether we can pass --file to it. + check_output.return_value = "--file" + hookenv.relation_set(foo="bar") + check_output.assert_called_with( + ["relation-set", "--help"], universal_newlines=True) + # relation-set is called with relation-set --file + # with data as YAML and the temp_file is then removed. + self.assertEqual(1, len(check_call.call_args[0])) + command = check_call.call_args[0][0] + self.assertEqual(3, len(command)) + self.assertEqual("relation-set", command[0]) + self.assertEqual("--file", command[1]) + temp_file = command[2] + with open(temp_file, "r") as f: + self.assertEqual("foo: bar", f.read().strip()) + remove.assert_called_with(temp_file) + + @patch('charmhelpers.core.hookenv.local_unit', MagicMock()) + @patch('os.remove') + @patch('subprocess.check_output') + @patch('subprocess.check_call') + def test_relation_set_file_non_str(self, check_call, check_output, remove): + """If relation-set accepts a --file parameter, it's used. + + Any value that is not a string is converted to a string before encoding + the settings to YAML. + """ + # check_output(["relation-set", "--help"]) is used to determine + # whether we can pass --file to it. + check_output.return_value = "--file" + hookenv.relation_set(foo={"bar": 1}) + check_output.assert_called_with( + ["relation-set", "--help"], universal_newlines=True) + # relation-set is called with relation-set --file + # with data as YAML and the temp_file is then removed. 
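+ # (The non-string value is first passed through str(), so the YAML file
+ # ends up holding the dict's repr as a quoted scalar; see the assertion
+ # on the file contents below.)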
+ self.assertEqual(1, len(check_call.call_args[0])) + command = check_call.call_args[0][0] + self.assertEqual(3, len(command)) + self.assertEqual("relation-set", command[0]) + self.assertEqual("--file", command[1]) + temp_file = command[2] + with open(temp_file, "r") as f: + self.assertEqual("foo: '{''bar'': 1}'", f.read().strip()) + remove.assert_called_with(temp_file) + + def test_lists_relation_types(self): + open_ = mock_open() + open_.return_value = io.BytesIO(CHARM_METADATA) + + with patch('charmhelpers.core.hookenv.open', open_, create=True): + with patch.dict('os.environ', {'CHARM_DIR': '/var/empty'}): + reltypes = set(hookenv.relation_types()) + open_.assert_called_once_with('/var/empty/metadata.yaml') + self.assertEqual(set(('testreqs', 'testprov', 'testpeer')), reltypes) + + def test_metadata(self): + open_ = mock_open() + open_.return_value = io.BytesIO(CHARM_METADATA) + + with patch('charmhelpers.core.hookenv.open', open_, create=True): + with patch.dict('os.environ', {'CHARM_DIR': '/var/empty'}): + metadata = hookenv.metadata() + self.assertEqual(metadata, yaml.safe_load(CHARM_METADATA)) + + @patch('charmhelpers.core.hookenv.relation_ids') + @patch('charmhelpers.core.hookenv.metadata') + def test_peer_relation_id(self, metadata, relation_ids): + metadata.return_value = {'peers': {sentinel.peer_relname: {}}} + relation_ids.return_value = [sentinel.pid1, sentinel.pid2] + self.assertEqual(hookenv.peer_relation_id(), sentinel.pid1) + relation_ids.assert_called_once_with(sentinel.peer_relname) + + def test_charm_name(self): + open_ = mock_open() + open_.return_value = io.BytesIO(CHARM_METADATA) + + with patch('charmhelpers.core.hookenv.open', open_, create=True): + with patch.dict('os.environ', {'CHARM_DIR': '/var/empty'}): + charm_name = hookenv.charm_name() + self.assertEqual("testmock", charm_name) + + @patch('subprocess.check_call') + def test_opens_port(self, check_call_): + hookenv.open_port(443, "TCP") + hookenv.open_port(80) + hookenv.open_port(100, "UDP") + hookenv.open_port(0, "ICMP") + calls = [ + call(['open-port', '443/TCP']), + call(['open-port', '80/TCP']), + call(['open-port', '100/UDP']), + call(['open-port', 'ICMP']), + ] + check_call_.assert_has_calls(calls) + + @patch('subprocess.check_call') + def test_closes_port(self, check_call_): + hookenv.close_port(443, "TCP") + hookenv.close_port(80) + hookenv.close_port(100, "UDP") + hookenv.close_port(0, "ICMP") + calls = [ + call(['close-port', '443/TCP']), + call(['close-port', '80/TCP']), + call(['close-port', '100/UDP']), + call(['close-port', 'ICMP']), + ] + check_call_.assert_has_calls(calls) + + @patch('subprocess.check_call') + def test_opens_ports(self, check_call_): + hookenv.open_ports(443, 447, "TCP") + hookenv.open_ports(80, 91) + hookenv.open_ports(100, 200, "UDP") + calls = [ + call(['open-port', '443-447/TCP']), + call(['open-port', '80-91/TCP']), + call(['open-port', '100-200/UDP']), + ] + check_call_.assert_has_calls(calls) + + @patch('subprocess.check_call') + def test_closes_ports(self, check_call_): + hookenv.close_ports(443, 447, "TCP") + hookenv.close_ports(80, 91) + hookenv.close_ports(100, 200, "UDP") + calls = [ + call(['close-port', '443-447/TCP']), + call(['close-port', '80-91/TCP']), + call(['close-port', '100-200/UDP']), + ] + check_call_.assert_has_calls(calls) + + @patch('subprocess.check_output') + def test_gets_opened_ports(self, check_output_): + prts = ['8080/tcp', '8081-8083/tcp'] + check_output_.return_value = json.dumps(prts).encode('UTF-8') + 
self.assertEqual(hookenv.opened_ports(), prts) + check_output_.assert_called_with(['opened-ports', '--format=json']) + + @patch('subprocess.check_output') + def test_gets_unit_attribute(self, check_output_): + check_output_.return_value = json.dumps('bar').encode('UTF-8') + self.assertEqual(hookenv.unit_get('foo'), 'bar') + check_output_.assert_called_with(['unit-get', '--format=json', 'foo']) + + @patch('subprocess.check_output') + def test_gets_missing_unit_attribute(self, check_output_): + check_output_.return_value = b"" + self.assertEqual(hookenv.unit_get('foo'), None) + check_output_.assert_called_with(['unit-get', '--format=json', 'foo']) + + def test_cached_decorator(self): + class Unserializable(object): + def __str__(self): + return 'unserializable' + + unserializable = Unserializable() + calls = [] + values = { + 'hello': 'world', + 'foo': 'bar', + 'baz': None, + unserializable: 'qux', + } + + @hookenv.cached + def cache_function(attribute): + calls.append(attribute) + return values[attribute] + + self.assertEquals(cache_function('hello'), 'world') + self.assertEquals(cache_function('hello'), 'world') + self.assertEquals(cache_function('foo'), 'bar') + self.assertEquals(cache_function('baz'), None) + self.assertEquals(cache_function('baz'), None) + self.assertEquals(cache_function(unserializable), 'qux') + self.assertEquals(calls, ['hello', 'foo', 'baz', unserializable]) + + def test_gets_charm_dir(self): + with patch.dict('os.environ', {}): + self.assertEqual(hookenv.charm_dir(), None) + with patch.dict('os.environ', {'CHARM_DIR': '/var/empty'}): + self.assertEqual(hookenv.charm_dir(), '/var/empty') + with patch.dict('os.environ', {'JUJU_CHARM_DIR': '/var/another'}): + self.assertEqual(hookenv.charm_dir(), '/var/another') + + @patch('subprocess.check_output') + def test_resource_get_unsupported(self, check_output_): + check_output_.side_effect = OSError(2, 'resource-get') + self.assertRaises(NotImplementedError, hookenv.resource_get, 'foo') + + @patch('subprocess.check_output') + def test_resource_get(self, check_output_): + check_output_.return_value = b'/tmp/file' + self.assertEqual(hookenv.resource_get('file'), '/tmp/file') + check_output_.side_effect = CalledProcessError( + 1, '/foo/bin/resource-get', + 'error: could not download resource: resource#foo/file not found') + self.assertFalse(hookenv.resource_get('no-file')) + self.assertFalse(hookenv.resource_get(None)) + + @patch('subprocess.check_output') + def test_goal_state_unsupported(self, check_output_): + check_output_.side_effect = OSError(2, 'goal-state') + self.assertRaises(NotImplementedError, hookenv.goal_state) + + @patch('subprocess.check_output') + def test_goal_state(self, check_output_): + expect = { + 'units': {}, + 'relations': {}, + } + check_output_.return_value = json.dumps(expect).encode('UTF-8') + result = hookenv.goal_state() + + self.assertEqual(result, expect) + check_output_.assert_called_with(['goal-state', '--format=json']) + + @patch('subprocess.check_output') + def test_is_leader_unsupported(self, check_output_): + check_output_.side_effect = OSError(2, 'is-leader') + self.assertRaises(NotImplementedError, hookenv.is_leader) + + @patch('subprocess.check_output') + def test_is_leader(self, check_output_): + check_output_.return_value = b'false' + self.assertFalse(hookenv.is_leader()) + check_output_.return_value = b'true' + self.assertTrue(hookenv.is_leader()) + + @patch('subprocess.check_call') + def test_payload_register(self, check_call_): + hookenv.payload_register('monitoring', 'docker', 
'abc123') + check_call_.assert_called_with(['payload-register', 'monitoring', + 'docker', 'abc123']) + + @patch('subprocess.check_call') + def test_payload_unregister(self, check_call_): + hookenv.payload_unregister('monitoring', 'abc123') + check_call_.assert_called_with(['payload-unregister', 'monitoring', + 'abc123']) + + @patch('subprocess.check_call') + def test_payload_status_set(self, check_call_): + hookenv.payload_status_set('monitoring', 'abc123', 'Running') + check_call_.assert_called_with(['payload-status-set', 'monitoring', + 'abc123', 'Running']) + + @patch('subprocess.check_call') + def test_application_version_set(self, check_call_): + hookenv.application_version_set('v1.2.3') + check_call_.assert_called_with(['application-version-set', 'v1.2.3']) + + @patch.object(os, 'getenv') + @patch.object(hookenv, 'log') + def test_env_proxy_settings_juju_charm_all_selected(self, faux_log, + get_env): + expected_settings = { + 'HTTP_PROXY': 'http://squid.internal:3128', + 'http_proxy': 'http://squid.internal:3128', + 'HTTPS_PROXY': 'https://squid.internals:3128', + 'https_proxy': 'https://squid.internals:3128', + 'NO_PROXY': '192.0.2.0/24,198.51.100.0/24,.bar.com', + 'no_proxy': '192.0.2.0/24,198.51.100.0/24,.bar.com', + 'FTP_PROXY': 'ftp://ftp.internal:21', + 'ftp_proxy': 'ftp://ftp.internal:21', + } + + def get_env_side_effect(var): + return { + 'HTTP_PROXY': None, + 'HTTPS_PROXY': None, + 'NO_PROXY': None, + 'FTP_PROXY': None, + 'JUJU_CHARM_HTTP_PROXY': 'http://squid.internal:3128', + 'JUJU_CHARM_HTTPS_PROXY': 'https://squid.internals:3128', + 'JUJU_CHARM_FTP_PROXY': 'ftp://ftp.internal:21', + 'JUJU_CHARM_NO_PROXY': '192.0.2.0/24,198.51.100.0/24,.bar.com' + }[var] + get_env.side_effect = get_env_side_effect + + proxy_settings = hookenv.env_proxy_settings() + get_env.assert_has_calls([call("HTTP_PROXY"), + call("HTTPS_PROXY"), + call("NO_PROXY"), + call("FTP_PROXY"), + call("JUJU_CHARM_HTTP_PROXY"), + call("JUJU_CHARM_HTTPS_PROXY"), + call("JUJU_CHARM_FTP_PROXY"), + call("JUJU_CHARM_NO_PROXY")], + any_order=True) + self.assertEqual(expected_settings, proxy_settings) + # Verify that we logged a warning about the cidr in NO_PROXY. 
+ faux_log.assert_called_with(hookenv.RANGE_WARNING, + level=hookenv.WARNING) + + @patch.object(os, 'getenv') + def test_env_proxy_settings_legacy_https(self, get_env): + expected_settings = { + 'HTTPS_PROXY': 'http://squid.internal:3128', + 'https_proxy': 'http://squid.internal:3128', + } + + def get_env_side_effect(var): + return { + 'HTTPS_PROXY': 'http://squid.internal:3128', + 'JUJU_CHARM_HTTPS_PROXY': None, + }[var] + get_env.side_effect = get_env_side_effect + + proxy_settings = hookenv.env_proxy_settings(['https']) + get_env.assert_has_calls([call("HTTPS_PROXY"), + call("JUJU_CHARM_HTTPS_PROXY")], + any_order=True) + self.assertEqual(expected_settings, proxy_settings) + + @patch.object(os, 'getenv') + def test_env_proxy_settings_juju_charm_https(self, get_env): + expected_settings = { + 'HTTPS_PROXY': 'http://squid.internal:3128', + 'https_proxy': 'http://squid.internal:3128', + } + + def get_env_side_effect(var): + return { + 'HTTPS_PROXY': None, + 'JUJU_CHARM_HTTPS_PROXY': 'http://squid.internal:3128', + }[var] + get_env.side_effect = get_env_side_effect + + proxy_settings = hookenv.env_proxy_settings(['https']) + get_env.assert_has_calls([call("HTTPS_PROXY"), + call("JUJU_CHARM_HTTPS_PROXY")], + any_order=True) + self.assertEqual(expected_settings, proxy_settings) + + @patch.object(os, 'getenv') + def test_env_proxy_settings_legacy_http(self, get_env): + expected_settings = { + 'HTTP_PROXY': 'http://squid.internal:3128', + 'http_proxy': 'http://squid.internal:3128', + } + + def get_env_side_effect(var): + return { + 'HTTP_PROXY': 'http://squid.internal:3128', + 'JUJU_CHARM_HTTP_PROXY': None, + }[var] + get_env.side_effect = get_env_side_effect + + proxy_settings = hookenv.env_proxy_settings(['http']) + get_env.assert_has_calls([call("HTTP_PROXY"), + call("JUJU_CHARM_HTTP_PROXY")], + any_order=True) + self.assertEqual(expected_settings, proxy_settings) + + @patch.object(os, 'getenv') + def test_env_proxy_settings_juju_charm_http(self, get_env): + expected_settings = { + 'HTTP_PROXY': 'http://squid.internal:3128', + 'http_proxy': 'http://squid.internal:3128', + } + + def get_env_side_effect(var): + return { + 'HTTP_PROXY': None, + 'JUJU_CHARM_HTTP_PROXY': 'http://squid.internal:3128', + }[var] + get_env.side_effect = get_env_side_effect + + proxy_settings = hookenv.env_proxy_settings(['http']) + get_env.assert_has_calls([call("HTTP_PROXY"), + call("JUJU_CHARM_HTTP_PROXY")], + any_order=True) + self.assertEqual(expected_settings, proxy_settings) + + +class HooksTest(TestCase): + def setUp(self): + super(HooksTest, self).setUp() + + _clean_globals() + self.addCleanup(_clean_globals) + + charm_dir = tempfile.mkdtemp() + self.addCleanup(lambda: shutil.rmtree(charm_dir)) + patcher = patch.object(hookenv, 'charm_dir', lambda: charm_dir) + self.addCleanup(patcher.stop) + patcher.start() + + config = hookenv.Config({}) + + def _mock_config(scope=None): + return config if scope is None else config[scope] + patcher = patch.object(hookenv, 'config', _mock_config) + self.addCleanup(patcher.stop) + patcher.start() + + def test_config_saved_after_execute(self): + config = hookenv.config() + config.implicit_save = True + + foo = MagicMock() + hooks = hookenv.Hooks() + hooks.register('foo', foo) + hooks.execute(['foo', 'some', 'other', 'args']) + self.assertTrue(os.path.exists(config.path)) + + def test_config_not_saved_after_execute(self): + config = hookenv.config() + config.implicit_save = False + + foo = MagicMock() + hooks = hookenv.Hooks() + hooks.register('foo', foo) + 
hooks.execute(['foo', 'some', 'other', 'args'])
+ self.assertFalse(os.path.exists(config.path))
+
+ def test_config_save_disabled(self):
+ config = hookenv.config()
+ config.implicit_save = True
+
+ foo = MagicMock()
+ hooks = hookenv.Hooks(config_save=False)
+ hooks.register('foo', foo)
+ hooks.execute(['foo', 'some', 'other', 'args'])
+ self.assertFalse(os.path.exists(config.path))
+
+ def test_runs_a_registered_function(self):
+ foo = MagicMock()
+ hooks = hookenv.Hooks()
+ hooks.register('foo', foo)
+
+ hooks.execute(['foo', 'some', 'other', 'args'])
+
+ foo.assert_called_with()
+
+ def test_cannot_run_unregistered_function(self):
+ foo = MagicMock()
+ hooks = hookenv.Hooks()
+ hooks.register('foo', foo)
+
+ self.assertRaises(hookenv.UnregisteredHookError, hooks.execute,
+ ['bar'])
+
+ def test_can_run_a_decorated_function_as_one_or_more_hooks(self):
+ execs = []
+ hooks = hookenv.Hooks()
+
+ @hooks.hook('bar', 'baz')
+ def func():
+ execs.append(True)
+
+ hooks.execute(['bar'])
+ hooks.execute(['baz'])
+ self.assertRaises(hookenv.UnregisteredHookError, hooks.execute,
+ ['brew'])
+ self.assertEqual(execs, [True, True])
+
+ def test_can_run_a_decorated_function_as_itself(self):
+ execs = []
+ hooks = hookenv.Hooks()
+
+ @hooks.hook()
+ def func():
+ execs.append(True)
+
+ hooks.execute(['func'])
+ self.assertRaises(hookenv.UnregisteredHookError, hooks.execute,
+ ['brew'])
+ self.assertEqual(execs, [True])
+
+ def test_magic_underscores(self):
+ # Juju hook names use hyphens as separators. Python functions use
+ # underscores. If explicit names have not been provided, hooks
+ # are registered with both the function name and the function
+ # name with underscores replaced with hyphens for convenience.
+ execs = []
+ hooks = hookenv.Hooks()
+
+ @hooks.hook()
+ def call_me_maybe():
+ execs.append(True)
+
+ hooks.execute(['call-me-maybe'])
+ hooks.execute(['call_me_maybe'])
+ self.assertEqual(execs, [True, True])
+
+ @patch('charmhelpers.core.hookenv.local_unit')
+ def test_gets_service_name(self, _unit):
+ _unit.return_value = 'mysql/3'
+ self.assertEqual(hookenv.service_name(), 'mysql')
+
+ @patch('charmhelpers.core.hookenv.related_units')
+ @patch('charmhelpers.core.hookenv.remote_unit')
+ def test_gets_remote_service_name(self, remote_unit, related_units):
+ remote_unit.return_value = 'mysql/3'
+ related_units.return_value = ['pgsql/0', 'pgsql/1']
+ self.assertEqual(hookenv.remote_service_name(), 'mysql')
+ self.assertEqual(hookenv.remote_service_name('pgsql:1'), 'pgsql')
+
+ def test_gets_hook_name(self):
+ with patch.dict(os.environ, JUJU_HOOK_NAME='hook'):
+ self.assertEqual(hookenv.hook_name(), 'hook')
+ with patch('sys.argv', ['other-hook']):
+ self.assertEqual(hookenv.hook_name(), 'other-hook')
+
+ @patch('subprocess.check_output')
+ def test_action_get_with_key(self, check_output):
+ action_data = 'bar'
+ check_output.return_value = json.dumps(action_data).encode('UTF-8')
+
+ result = hookenv.action_get(key='foo')
+
+ self.assertEqual(result, 'bar')
+ check_output.assert_called_with(['action-get', 'foo', '--format=json'])
+
+ @patch('subprocess.check_output')
+ def test_action_get_without_key(self, check_output):
+ check_output.return_value = json.dumps(dict(foo='bar')).encode('UTF-8')
+
+ result = hookenv.action_get()
+
+ self.assertEqual(result['foo'], 'bar')
+ check_output.assert_called_with(['action-get', '--format=json'])
+
+ @patch('subprocess.check_call')
+ def test_action_set(self, check_call):
+ values = {'foo': 'bar', 'fooz': 'barz'}
+ hookenv.action_set(values)
+ # The
order of the key/value pairs can change, so sort them before testing
+ called_args = check_call.call_args_list[0][0][0]
+ called_args.pop(0)
+ called_args.sort()
+ self.assertEqual(called_args, ['foo=bar', 'fooz=barz'])
+
+ @patch('subprocess.check_call')
+ def test_action_fail(self, check_call):
+ message = "Oops, the action failed"
+ hookenv.action_fail(message)
+ check_call.assert_called_with(['action-fail', message])
+
+ @patch('charmhelpers.core.hookenv.cmd_exists')
+ @patch('subprocess.check_output')
+ def test_function_get_with_key(self, check_output, cmd_exists):
+ function_data = 'bar'
+ check_output.return_value = json.dumps(function_data).encode('UTF-8')
+ cmd_exists.return_value = True
+
+ result = hookenv.function_get(key='foo')
+
+ self.assertEqual(result, 'bar')
+ check_output.assert_called_with(['function-get', 'foo', '--format=json'])
+
+ @patch('charmhelpers.core.hookenv.cmd_exists')
+ @patch('subprocess.check_output')
+ def test_function_get_without_key(self, check_output, cmd_exists):
+ check_output.return_value = json.dumps(dict(foo='bar')).encode('UTF-8')
+ cmd_exists.return_value = True
+
+ result = hookenv.function_get()
+
+ self.assertEqual(result['foo'], 'bar')
+ check_output.assert_called_with(['function-get', '--format=json'])
+
+ @patch('subprocess.check_call')
+ def test_function_set(self, check_call):
+ values = {'foo': 'bar', 'fooz': 'barz'}
+ hookenv.function_set(values)
+ # The order of the key/value pairs can change, so sort them before testing
+ called_args = check_call.call_args_list[0][0][0]
+ called_args.pop(0)
+ called_args.sort()
+ self.assertEqual(called_args, ['foo=bar', 'fooz=barz'])
+
+ @patch('charmhelpers.core.hookenv.cmd_exists')
+ @patch('subprocess.check_call')
+ def test_function_fail(self, check_call, cmd_exists):
+ cmd_exists.return_value = True
+
+ message = "Oops, the function failed"
+ hookenv.function_fail(message)
+ check_call.assert_called_with(['function-fail', message])
+
+ def test_status_set_invalid_state(self):
+ self.assertRaises(ValueError, hookenv.status_set, 'random', 'message')
+
+ def test_status_set_invalid_state_enum(self):
+
+ class RandomEnum(Enum):
+ FOO = 1
+ self.assertRaises(
+ ValueError,
+ hookenv.status_set,
+ RandomEnum.FOO,
+ 'message')
+
+ @patch('subprocess.call')
+ def test_status(self, call):
+ call.return_value = 0
+ hookenv.status_set('active', 'Everything is Awesome!')
+ call.assert_called_with(['status-set', 'active', 'Everything is Awesome!'])
+
+ @patch('subprocess.call')
+ def test_status_enum(self, call):
+ call.return_value = 0
+ hookenv.status_set(
+ hookenv.WORKLOAD_STATES.ACTIVE,
+ 'Everything is Awesome!')
+ call.assert_called_with(['status-set', 'active', 'Everything is Awesome!'])
+
+ @patch('subprocess.call')
+ def test_status_app(self, call):
+ call.return_value = 0
+ hookenv.status_set(
+ 'active',
+ 'Everything is Awesome!',
+ application=True)
+ call.assert_called_with([
+ 'status-set',
+ '--application',
+ 'active',
+ 'Everything is Awesome!'])
+
+ @patch('subprocess.call')
+ @patch.object(hookenv, 'log')
+ def test_status_enoent(self, log, call):
+ call.side_effect = OSError(2, 'fail')
+ hookenv.status_set('active', 'Everything is Awesome!')
+ log.assert_called_with('status-set failed: active Everything is Awesome!', level='INFO')
+
+ @patch('subprocess.call')
+ @patch.object(hookenv, 'log')
+ def test_status_statuscmd_fail(self, log, call):
+ call.side_effect = OSError(3, 'fail')
+ self.assertRaises(OSError, hookenv.status_set, 'active', 'msg')
+ call.assert_called_with(['status-set',
'active', 'msg']) + + @patch('subprocess.check_output') + def test_status_get(self, check_output): + check_output.return_value = json.dumps( + {"message": "foo", + "status": "active", + "status-data": {}}).encode("UTF-8") + result = hookenv.status_get() + self.assertEqual(result, ('active', 'foo')) + check_output.assert_called_with( + ['status-get', "--format=json", "--include-data"]) + + @patch('subprocess.check_output') + def test_status_get_nostatus(self, check_output): + check_output.side_effect = OSError(2, 'fail') + result = hookenv.status_get() + self.assertEqual(result, ('unknown', '')) + + @patch('subprocess.check_output') + def test_status_get_status_error(self, check_output): + check_output.side_effect = OSError(3, 'fail') + self.assertRaises(OSError, hookenv.status_get) + + @patch('subprocess.check_output') + @patch('glob.glob') + def test_juju_version(self, glob, check_output): + glob.return_value = [sentinel.jujud] + check_output.return_value = '1.23.3.1-trusty-amd64\n' + self.assertEqual(hookenv.juju_version(), '1.23.3.1-trusty-amd64') + # Per https://bugs.launchpad.net/juju-core/+bug/1455368/comments/1 + glob.assert_called_once_with('/var/lib/juju/tools/machine-*/jujud') + check_output.assert_called_once_with([sentinel.jujud, 'version'], + universal_newlines=True) + + @patch('charmhelpers.core.hookenv.juju_version') + def test_has_juju_version(self, juju_version): + juju_version.return_value = '1.23.1.2.3.4.5-with-a-cherry-on-top.amd64' + self.assertTrue(hookenv.has_juju_version('1.23')) + self.assertTrue(hookenv.has_juju_version('1.23.1')) + self.assertTrue(hookenv.has_juju_version('1.23.1.1')) + self.assertFalse(hookenv.has_juju_version('1.23.2.1')) + self.assertFalse(hookenv.has_juju_version('1.24')) + + juju_version.return_value = '1.24-beta5.1-trusty-amd64' + self.assertTrue(hookenv.has_juju_version('1.23')) + self.assertTrue(hookenv.has_juju_version('1.24')) # Better if this was false! 
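+ # NOTE: has_juju_version() appears to compare versions LooseVersion-style,
+ # under which '1.24-beta5.1' already satisfies '1.24'; hence the caveat
+ # in the comment above.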
+ self.assertTrue(hookenv.has_juju_version('1.24-beta5')) + self.assertTrue(hookenv.has_juju_version('1.24-beta5.1')) + self.assertFalse(hookenv.has_juju_version('1.25')) + self.assertTrue(hookenv.has_juju_version('1.18-backport6')) + + @patch.object(hookenv, 'relation_to_role_and_interface') + def test_relation_to_interface(self, rtri): + rtri.return_value = (None, 'foo') + self.assertEqual(hookenv.relation_to_interface('rel'), 'foo') + + @patch.object(hookenv, 'metadata') + def test_relation_to_role_and_interface(self, metadata): + metadata.return_value = { + 'provides': { + 'pro-rel': { + 'interface': 'pro-int', + }, + 'pro-rel2': { + 'interface': 'pro-int', + }, + }, + 'requires': { + 'req-rel': { + 'interface': 'req-int', + }, + }, + 'peers': { + 'pee-rel': { + 'interface': 'pee-int', + }, + }, + } + rtri = hookenv.relation_to_role_and_interface + self.assertEqual(rtri('pro-rel'), ('provides', 'pro-int')) + self.assertEqual(rtri('req-rel'), ('requires', 'req-int')) + self.assertEqual(rtri('pee-rel'), ('peers', 'pee-int')) + + @patch.object(hookenv, 'metadata') + def test_role_and_interface_to_relations(self, metadata): + metadata.return_value = { + 'provides': { + 'pro-rel': { + 'interface': 'pro-int', + }, + 'pro-rel2': { + 'interface': 'pro-int', + }, + }, + 'requires': { + 'req-rel': { + 'interface': 'int', + }, + }, + 'peers': { + 'pee-rel': { + 'interface': 'int', + }, + }, + } + ritr = hookenv.role_and_interface_to_relations + assertItemsEqual = getattr(self, 'assertItemsEqual', getattr(self, 'assertCountEqual', None)) + assertItemsEqual(ritr('provides', 'pro-int'), ['pro-rel', 'pro-rel2']) + assertItemsEqual(ritr('requires', 'int'), ['req-rel']) + assertItemsEqual(ritr('peers', 'int'), ['pee-rel']) + + @patch.object(hookenv, 'metadata') + def test_interface_to_relations(self, metadata): + metadata.return_value = { + 'provides': { + 'pro-rel': { + 'interface': 'pro-int', + }, + 'pro-rel2': { + 'interface': 'pro-int', + }, + }, + 'requires': { + 'req-rel': { + 'interface': 'req-int', + }, + }, + 'peers': { + 'pee-rel': { + 'interface': 'pee-int', + }, + }, + } + itr = hookenv.interface_to_relations + assertItemsEqual = getattr(self, 'assertItemsEqual', getattr(self, 'assertCountEqual', None)) + assertItemsEqual(itr('pro-int'), ['pro-rel', 'pro-rel2']) + assertItemsEqual(itr('req-int'), ['req-rel']) + assertItemsEqual(itr('pee-int'), ['pee-rel']) + + def test_action_name(self): + with patch.dict('os.environ', JUJU_ACTION_NAME='action-jack'): + self.assertEqual(hookenv.action_name(), 'action-jack') + + def test_action_uuid(self): + with patch.dict('os.environ', JUJU_ACTION_UUID='action-jack'): + self.assertEqual(hookenv.action_uuid(), 'action-jack') + + def test_action_tag(self): + with patch.dict('os.environ', JUJU_ACTION_TAG='action-jack'): + self.assertEqual(hookenv.action_tag(), 'action-jack') + + @patch('subprocess.check_output') + def test_storage_list(self, check_output): + ids = ['data/0', 'data/1', 'data/2'] + check_output.return_value = json.dumps(ids).encode('UTF-8') + + storage_name = 'arbitrary' + result = hookenv.storage_list(storage_name) + + self.assertEqual(result, ids) + check_output.assert_called_with(['storage-list', '--format=json', + storage_name]) + + @patch('subprocess.check_output') + def test_storage_list_notexist(self, check_output): + import errno + e = OSError() + e.errno = errno.ENOENT + check_output.side_effect = e + + result = hookenv.storage_list() + + self.assertEqual(result, []) + check_output.assert_called_with(['storage-list', '--format=json']) 
+ + @patch('subprocess.check_output') + def test_storage_get_notexist(self, check_output): + # storage_get does not catch ENOENT, because there's no reason why you + # should be calling storage_get except from a storage hook, or with + # the result of storage_list (which will return [] as shown above). + + import errno + e = OSError() + e.errno = errno.ENOENT + check_output.side_effect = e + self.assertRaises(OSError, hookenv.storage_get) + + @patch('subprocess.check_output') + def test_storage_get(self, check_output): + expect = { + 'location': '/dev/sda', + 'kind': 'block', + } + check_output.return_value = json.dumps(expect).encode('UTF-8') + + result = hookenv.storage_get() + + self.assertEqual(result, expect) + check_output.assert_called_with(['storage-get', '--format=json']) + + @patch('subprocess.check_output') + def test_storage_get_attr(self, check_output): + expect = '/dev/sda' + check_output.return_value = json.dumps(expect).encode('UTF-8') + + attribute = 'location' + result = hookenv.storage_get(attribute) + + self.assertEqual(result, expect) + check_output.assert_called_with(['storage-get', '--format=json', + attribute]) + + @patch('subprocess.check_output') + def test_storage_get_with_id(self, check_output): + expect = { + 'location': '/dev/sda', + 'kind': 'block', + } + check_output.return_value = json.dumps(expect).encode('UTF-8') + + storage_id = 'data/0' + result = hookenv.storage_get(storage_id=storage_id) + + self.assertEqual(result, expect) + check_output.assert_called_with(['storage-get', '--format=json', + '-s', storage_id]) + + @patch('subprocess.check_output') + def test_network_get_primary(self, check_output): + """Ensure that network-get is called correctly and output is returned""" + check_output.return_value = b'192.168.22.1' + ip = hookenv.network_get_primary_address('mybinding') + check_output.assert_called_with( + ['network-get', '--primary-address', 'mybinding'], stderr=-2) + self.assertEqual(ip, '192.168.22.1') + + @patch('subprocess.check_output') + def test_network_get_primary_unsupported(self, check_output): + """Ensure that NotImplementedError is thrown when run on Juju < 2.0""" + check_output.side_effect = OSError(2, 'network-get') + self.assertRaises(NotImplementedError, hookenv.network_get_primary_address, + 'mybinding') + + @patch('subprocess.check_output') + def test_network_get_primary_no_binding_found(self, check_output): + """Ensure that NotImplementedError when no binding is found""" + check_output.side_effect = CalledProcessError( + 1, 'network-get', + output='no network config found for binding'.encode('UTF-8')) + self.assertRaises(hookenv.NoNetworkBinding, + hookenv.network_get_primary_address, + 'doesnotexist') + check_output.assert_called_with( + ['network-get', '--primary-address', 'doesnotexist'], stderr=-2) + + @patch('subprocess.check_output') + def test_network_get_primary_other_exception(self, check_output): + """Ensure that CalledProcessError still thrown when not + a missing binding""" + check_output.side_effect = CalledProcessError( + 1, 'network-get', + output='any other message'.encode('UTF-8')) + self.assertRaises(CalledProcessError, + hookenv.network_get_primary_address, + 'mybinding') + + @patch('charmhelpers.core.hookenv.juju_version') + @patch('subprocess.check_output') + def test_network_get(self, check_output, juju_version): + """Ensure that network-get is called correctly""" + juju_version.return_value = '2.2.0' + check_output.return_value = b'result' + hookenv.network_get('endpoint') + check_output.assert_called_with( 
+ ['network-get', 'endpoint', '--format', 'yaml'], stderr=-2) + + @patch('charmhelpers.core.hookenv.juju_version') + @patch('subprocess.check_output') + def test_network_get_primary_required(self, check_output, juju_version): + """Ensure that NotImplementedError is thrown with Juju < 2.2.0""" + check_output.return_value = b'result' + + juju_version.return_value = '2.1.4' + self.assertRaises(NotImplementedError, hookenv.network_get, 'binding') + juju_version.return_value = '2.2.0' + self.assertEquals(hookenv.network_get('endpoint'), 'result') + + @patch('charmhelpers.core.hookenv.juju_version') + @patch('subprocess.check_output') + def test_network_get_relation_bound(self, check_output, juju_version): + """Ensure that network-get supports relation context, requires Juju 2.3""" + juju_version.return_value = '2.3.0' + check_output.return_value = b'result' + hookenv.network_get('endpoint', 'db') + check_output.assert_called_with( + ['network-get', 'endpoint', '--format', 'yaml', '-r', 'db'], + stderr=-2) + juju_version.return_value = '2.2.8' + self.assertRaises(NotImplementedError, hookenv.network_get, 'endpoint', 'db') + + @patch('charmhelpers.core.hookenv.juju_version') + @patch('subprocess.check_output') + def test_network_get_parses_yaml(self, check_output, juju_version): + """network-get returns loaded YAML output.""" + juju_version.return_value = '2.3.0' + check_output.return_value = b""" +bind-addresses: +- macaddress: "" + interfacename: "" + addresses: + - address: 10.136.107.33 + cidr: "" +ingress-addresses: +- 10.136.107.33 + """ + ip = hookenv.network_get('mybinding') + self.assertEqual(len(ip['bind-addresses']), 1) + self.assertEqual(ip['ingress-addresses'], ['10.136.107.33']) + + @patch('subprocess.check_call') + def test_add_metric(self, check_call_): + hookenv.add_metric(flips='1.5', flops='2.1') + hookenv.add_metric('juju-units=6') + hookenv.add_metric('foo-bar=3.333', 'baz-quux=8', users='2') + calls = [ + call(['add-metric', 'flips=1.5', 'flops=2.1']), + call(['add-metric', 'juju-units=6']), + call(['add-metric', 'baz-quux=8', 'foo-bar=3.333', 'users=2']), + ] + check_call_.assert_has_calls(calls) + + @patch('subprocess.check_call') + @patch.object(hookenv, 'log') + def test_add_metric_enoent(self, log, _check_call): + _check_call.side_effect = OSError(2, 'fail') + hookenv.add_metric(flips='1') + log.assert_called_with('add-metric failed: flips=1', level='INFO') + + @patch('charmhelpers.core.hookenv.os') + def test_meter_status(self, os_): + os_.environ = { + 'JUJU_METER_STATUS': 'GREEN', + 'JUJU_METER_INFO': 'all good', + } + self.assertEqual(hookenv.meter_status(), 'GREEN') + self.assertEqual(hookenv.meter_info(), 'all good') + + @patch.object(hookenv, 'related_units') + @patch.object(hookenv, 'relation_ids') + def test_iter_units_for_relation_name(self, relation_ids, related_units): + relation_ids.return_value = ['rel:1'] + related_units.return_value = ['unit/0', 'unit/1', 'unit/2'] + expected = [('rel:1', 'unit/0'), + ('rel:1', 'unit/1'), + ('rel:1', 'unit/2')] + related_units_data = [ + (u.rid, u.unit) + for u in hookenv.iter_units_for_relation_name('rel')] + self.assertEqual(expected, related_units_data) + + @patch.object(hookenv, 'relation_get') + def test_ingress_address(self, relation_get): + """Ensure ingress_address returns the ingress-address when available + and returns the private-address when not. 
+ """
+ _with_ingress = {'egress-subnets': '10.5.0.23/32',
+ 'ingress-address': '10.5.0.23',
+ 'private-address': '172.16.5.10'}
+
+ _without_ingress = {'private-address': '172.16.5.10'}
+
+ # Return the ingress-address
+ relation_get.return_value = _with_ingress
+ self.assertEqual(hookenv.ingress_address(rid='test:1', unit='unit/1'),
+ '10.5.0.23')
+ relation_get.assert_called_with(rid='test:1', unit='unit/1')
+ # Return the private-address
+ relation_get.return_value = _without_ingress
+ self.assertEqual(hookenv.ingress_address(rid='test:1'),
+ '172.16.5.10')
+
+ @patch.object(hookenv, 'relation_get')
+ def test_egress_subnets(self, relation_get):
+ """Ensure egress_subnets returns the decoded egress-subnets when available
+ and falls back correctly when not.
+ """
+ d = {'egress-subnets': '10.5.0.23/32,2001::F00F/64',
+ 'ingress-address': '10.5.0.23',
+ 'private-address': '2001::D0:F00D'}
+
+ # Return the egress-subnets
+ relation_get.return_value = d
+ self.assertEqual(hookenv.egress_subnets(rid='test:1', unit='unit/1'),
+ ['10.5.0.23/32', '2001::F00F/64'])
+ relation_get.assert_called_with(rid='test:1', unit='unit/1')
+
+ # Return the ingress-address
+ del d['egress-subnets']
+ self.assertEqual(hookenv.egress_subnets(), ['10.5.0.23/32'])
+
+ # Return the private-address
+ del d['ingress-address']
+ self.assertEqual(hookenv.egress_subnets(), ['2001::D0:F00D/128'])
+
+ @patch('charmhelpers.core.hookenv.local_unit')
+ @patch('charmhelpers.core.hookenv.goal_state')
+ @patch('charmhelpers.core.hookenv.has_juju_version')
+ def test_unit_doomed(self, has_juju_version, goal_state, local_unit):
+ # We need to test for a minimum patch level, or we risk
+ # data loss by returning bogus results with Juju 2.4.0
+ has_juju_version.return_value = False
+ self.assertRaises(NotImplementedError, hookenv.unit_doomed)
+ has_juju_version.assert_called_once_with("2.4.1")
+ has_juju_version.return_value = True
+
+ goal_state.return_value = json.loads('''
+ {
+ "units": {
+ "postgresql/0": {
+ "status": "dying",
+ "since": "2018-07-30 10:01:06Z"
+ },
+ "postgresql/1": {
+ "status": "active",
+ "since": "2018-07-30 10:22:39Z"
+ }
+ },
+ "relations": {}
+ }
+ ''')
+ self.assertTrue(hookenv.unit_doomed('postgresql/0')) # unit removed, status "dying"
+ self.assertFalse(hookenv.unit_doomed('postgresql/1')) # unit exists, status "active", maybe other states
+ self.assertTrue(hookenv.unit_doomed('postgresql/2')) # unit does not exist
+
+ local_unit.return_value = 'postgresql/0'
+ self.assertTrue(hookenv.unit_doomed())
+
+ def test_contains_addr_range(self):
+ # Contains cidr
+ self.assertTrue(hookenv._contains_range("192.168.1/20"))
+ self.assertTrue(hookenv._contains_range("192.168.0/24"))
+ self.assertTrue(
+ hookenv._contains_range("10.40.50.1,192.168.1/20,10.56.78.9"))
+ self.assertTrue(hookenv._contains_range("192.168.22/24"))
+ self.assertTrue(hookenv._contains_range("2001:db8::/32"))
+ self.assertTrue(hookenv._contains_range("*.foo.com"))
+ self.assertTrue(hookenv._contains_range(".foo.com"))
+ self.assertTrue(
+ hookenv._contains_range("192.168.1.20,.foo.com"))
+ self.assertTrue(
+ hookenv._contains_range("192.168.1.20, .foo.com"))
+ self.assertTrue(
+ hookenv._contains_range("192.168.1.20,*.foo.com"))
+
+ # Doesn't contain cidr
+ self.assertFalse(hookenv._contains_range("192.168.1"))
+ self.assertFalse(hookenv._contains_range("192.168.145"))
+ self.assertFalse(hookenv._contains_range("192.16.14"))
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/core/test_host.py 
b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/core/test_host.py new file mode 100644 index 0000000000000000000000000000000000000000..19617e8fdebdbccf99a2be722c1dc41beb686030 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/core/test_host.py @@ -0,0 +1,2008 @@ +import os.path +from collections import OrderedDict +import subprocess +from tempfile import mkdtemp +from shutil import rmtree +from textwrap import dedent + +import imp + +from charmhelpers import osplatform +from mock import patch, call, mock_open +from testtools import TestCase +from tests.helpers import patch_open +from tests.helpers import mock_open as mocked_open +import six + +from charmhelpers.core import host +from charmhelpers.fetch import ubuntu_apt_pkg + + +MOUNT_LINES = (""" +rootfs / rootfs rw 0 0 +sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0 +proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0 +udev /dev devtmpfs rw,relatime,size=8196788k,nr_inodes=2049197,mode=755 0 0 +devpts /dev/pts devpts """ + """rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0 +""").strip().split('\n') + +LSB_RELEASE = '''DISTRIB_ID=Ubuntu +DISTRIB_RELEASE=13.10 +DISTRIB_CODENAME=saucy +DISTRIB_DESCRIPTION="Ubuntu Saucy Salamander (development branch)" +''' + +OS_RELEASE = '''NAME="CentOS Linux" +ANSI_COLOR="0;31" +ID_LIKE="rhel fedora" +VERSION_ID="7" +BUG_REPORT_URL="https://bugs.centos.org/" +CENTOS_MANTISBT_PROJECT="CentOS-7" +PRETTY_NAME="CentOS Linux 7 (Core)" +VERSION="7 (Core)" +REDHAT_SUPPORT_PRODUCT_VERSION="7" +CENTOS_MANTISBT_PROJECT_VERSION="7" +REDHAT_SUPPORT_PRODUCT="centos" +HOME_URL="https://www.centos.org/" +CPE_NAME="cpe:/o:centos:centos:7" +ID="centos" +''' + +IP_LINE_ETH0 = b""" +2: eth0: mtu 1500 qdisc mq master bond0 state UP qlen 1000 + link/ether e4:11:5b:ab:a7:3c brd ff:ff:ff:ff:ff:ff +""" + +IP_LINE_ETH100 = b""" +2: eth100: mtu 1500 qdisc mq master bond0 state UP qlen 1000 + link/ether e4:11:5b:ab:a7:3d brd ff:ff:ff:ff:ff:ff +""" + +IP_LINE_ETH0_VLAN = b""" +6: eth0.10@eth0: mtu 1500 qdisc noqueue state UP group default + link/ether 08:00:27:16:b9:5f brd ff:ff:ff:ff:ff:ff +""" + +IP_LINE_ETH1 = b""" +3: eth1: mtu 1546 qdisc noop state DOWN qlen 1000 + link/ether e4:11:5b:ab:a7:3c brd ff:ff:ff:ff:ff:ff +""" + +IP_LINE_HWADDR = b"""2: eth0: mtu 1500 qdisc pfifo_fast state UP qlen 1000\\ link/ether e4:11:5b:ab:a7:3c brd ff:ff:ff:ff:ff:ff""" + +IP_LINES = IP_LINE_ETH0 + IP_LINE_ETH1 + IP_LINE_ETH0_VLAN + IP_LINE_ETH100 + +IP_LINE_BONDS = b""" +6: bond0.10@bond0: mtu 1500 qdisc noqueue state UP group default +link/ether 08:00:27:16:b9:5f brd ff:ff:ff:ff:ff:ff +""" + + +class HelpersTest(TestCase): + + @patch('charmhelpers.core.host.lsb_release') + @patch('os.path') + def test_init_is_systemd_upstart(self, path, lsb_release): + """Upstart based init is correctly detected""" + lsb_release.return_value = {'DISTRIB_CODENAME': 'whatever'} + path.isdir.return_value = False + self.assertFalse(host.init_is_systemd()) + path.isdir.assert_called_with('/run/systemd/system') + + @patch('charmhelpers.core.host.lsb_release') + @patch('os.path') + def test_init_is_systemd_system(self, path, lsb_release): + """Systemd based init is correctly detected""" + lsb_release.return_value = {'DISTRIB_CODENAME': 'whatever'} + path.isdir.return_value = True + self.assertTrue(host.init_is_systemd()) + path.isdir.assert_called_with('/run/systemd/system') + + @patch('charmhelpers.core.host.lsb_release') + @patch('os.path') + def test_init_is_systemd_trusty(self, 
path, lsb_release): + # Never returns true under trusty, even if the systemd + # packages have been installed. lp:1670944 + lsb_release.return_value = {'DISTRIB_CODENAME': 'trusty'} + path.isdir.return_value = True + self.assertFalse(host.init_is_systemd()) + self.assertFalse(path.isdir.called) + + @patch.object(host, 'init_is_systemd') + @patch('subprocess.call') + def test_runs_service_action(self, mock_call, systemd): + systemd.return_value = False + mock_call.return_value = 0 + action = 'some-action' + service_name = 'foo-service' + + result = host.service(action, service_name) + + self.assertTrue(result) + mock_call.assert_called_with(['service', service_name, action]) + + @patch.object(host, 'init_is_systemd') + @patch('subprocess.call') + def test_runs_systemctl_action(self, mock_call, systemd): + """Ensure that service calls under systemd call 'systemctl'.""" + systemd.return_value = True + mock_call.return_value = 0 + action = 'some-action' + service_name = 'foo-service' + + result = host.service(action, service_name) + + self.assertTrue(result) + mock_call.assert_called_with(['systemctl', action, service_name]) + + @patch.object(host, 'init_is_systemd') + @patch('subprocess.call') + def test_returns_false_when_service_fails(self, mock_call, systemd): + systemd.return_value = False + mock_call.return_value = 1 + action = 'some-action' + service_name = 'foo-service' + + result = host.service(action, service_name) + + self.assertFalse(result) + mock_call.assert_called_with(['service', service_name, action]) + + @patch.object(host, 'service') + def test_starts_a_service(self, service): + service_name = 'foo-service' + service.side_effect = [True] + self.assertTrue(host.service_start(service_name)) + + service.assert_called_with('start', service_name) + + @patch.object(host, 'service') + def test_starts_a_service_with_parms(self, service): + service_name = 'foo-service' + service.side_effect = [True] + self.assertTrue(host.service_start(service_name, id=4)) + + service.assert_called_with('start', service_name, id=4) + + @patch.object(host, 'service') + def test_stops_a_service(self, service): + service_name = 'foo-service' + service.side_effect = [True] + self.assertTrue(host.service_stop(service_name)) + + service.assert_called_with('stop', service_name) + + @patch.object(host, 'service') + def test_restarts_a_service(self, service): + service_name = 'foo-service' + service.side_effect = [True] + self.assertTrue(host.service_restart(service_name)) + + service.assert_called_with('restart', service_name) + + @patch.object(host, 'service_running') + @patch.object(host, 'init_is_systemd') + @patch.object(host, 'service') + def test_pauses_a_running_systemd_unit(self, service, systemd, + service_running): + """Pause on a running systemd unit will be stopped and disabled.""" + service_name = 'foo-service' + service_running.return_value = True + systemd.return_value = True + self.assertTrue(host.service_pause(service_name)) + service.assert_has_calls([ + call('stop', service_name), + call('disable', service_name), + call('mask', service_name)]) + + @patch.object(host, 'service_running') + @patch.object(host, 'init_is_systemd') + @patch.object(host, 'service') + def test_resumes_a_stopped_systemd_unit(self, service, systemd, + service_running): + """Resume on a stopped systemd unit will be started and enabled.""" + service_name = 'foo-service' + service_running.return_value = False + systemd.return_value = True + self.assertTrue(host.service_resume(service_name)) + service.assert_has_calls([ 
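+ # Resume has to undo both of the mechanisms that service_pause
+ # applies ('mask' and 'disable'), and must do so before the final
+ # 'start' can take effect; the expected call order below encodes that.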
+ call('unmask', service_name), + # Ensures a package starts up if disabled but not masked, + # per lp:1692178 + call('enable', service_name), + call('start', service_name)]) + + @patch.object(host, 'service_running') + @patch.object(host, 'init_is_systemd') + @patch.object(host, 'service') + def test_pauses_a_running_upstart_service(self, service, systemd, + service_running): + """Pause on a running service will call service stop.""" + service_name = 'foo-service' + service.side_effect = [True] + systemd.return_value = False + service_running.return_value = True + tempdir = mkdtemp(prefix="test_pauses_an_upstart_service") + conf_path = os.path.join(tempdir, "{}.conf".format(service_name)) + # Just needs to exist + with open(conf_path, "w") as fh: + fh.write("") + self.addCleanup(rmtree, tempdir) + self.assertTrue(host.service_pause(service_name, init_dir=tempdir)) + + service.assert_called_with('stop', service_name) + override_path = os.path.join( + tempdir, "{}.override".format(service_name)) + with open(override_path, "r") as fh: + override_contents = fh.read() + self.assertEqual("manual\n", override_contents) + + @patch.object(host, 'service_running') + @patch.object(host, 'init_is_systemd') + @patch.object(host, 'service') + def test_pauses_a_stopped_upstart_service(self, service, systemd, + service_running): + """Pause on a stopped service will not call service stop.""" + service_name = 'foo-service' + service.side_effect = [True] + systemd.return_value = False + service_running.return_value = False + tempdir = mkdtemp(prefix="test_pauses_an_upstart_service") + conf_path = os.path.join(tempdir, "{}.conf".format(service_name)) + # Just needs to exist + with open(conf_path, "w") as fh: + fh.write("") + self.addCleanup(rmtree, tempdir) + self.assertTrue(host.service_pause(service_name, init_dir=tempdir)) + + # Stop isn't called because service is already stopped + self.assertRaises( + AssertionError, service.assert_called_with, 'stop', service_name) + override_path = os.path.join( + tempdir, "{}.override".format(service_name)) + with open(override_path, "r") as fh: + override_contents = fh.read() + self.assertEqual("manual\n", override_contents) + + @patch.object(host, 'service_running') + @patch.object(host, 'init_is_systemd') + @patch('subprocess.check_call') + @patch.object(host, 'service') + def test_pauses_a_running_sysv_service(self, service, check_call, + systemd, service_running): + """Pause calls service stop on a running sysv service.""" + service_name = 'foo-service' + service.side_effect = [True] + systemd.return_value = False + service_running.return_value = True + tempdir = mkdtemp(prefix="test_pauses_a_sysv_service") + sysv_path = os.path.join(tempdir, service_name) + # Just needs to exist + with open(sysv_path, "w") as fh: + fh.write("") + self.addCleanup(rmtree, tempdir) + self.assertTrue(host.service_pause( + service_name, init_dir=tempdir, initd_dir=tempdir)) + + service.assert_called_with('stop', service_name) + check_call.assert_called_with(["update-rc.d", service_name, "disable"]) + + @patch.object(host, 'service_running') + @patch.object(host, 'init_is_systemd') + @patch('subprocess.check_call') + @patch.object(host, 'service') + def test_pauses_a_stopped_sysv_service(self, service, check_call, + systemd, service_running): + """Pause does not call service stop on a stopped sysv service.""" + service_name = 'foo-service' + service.side_effect = [True] + systemd.return_value = False + service_running.return_value = False + tempdir = 
mkdtemp(prefix="test_pauses_a_sysv_service") + sysv_path = os.path.join(tempdir, service_name) + # Just needs to exist + with open(sysv_path, "w") as fh: + fh.write("") + self.addCleanup(rmtree, tempdir) + self.assertTrue(host.service_pause( + service_name, init_dir=tempdir, initd_dir=tempdir)) + + # Stop isn't called because service is already stopped + self.assertRaises( + AssertionError, service.assert_called_with, 'stop', service_name) + check_call.assert_called_with(["update-rc.d", service_name, "disable"]) + + @patch.object(host, 'init_is_systemd') + @patch.object(host, 'service') + def test_pause_with_unknown_service(self, service, systemd): + service_name = 'foo-service' + service.side_effect = [True] + systemd.return_value = False + tempdir = mkdtemp(prefix="test_pauses_with_unknown_service") + self.addCleanup(rmtree, tempdir) + exception = self.assertRaises( + ValueError, host.service_pause, + service_name, init_dir=tempdir, initd_dir=tempdir) + self.assertIn( + "Unable to detect {0}".format(service_name), str(exception)) + self.assertIn(tempdir, str(exception)) + + @patch.object(host, 'service_running') + @patch.object(host, 'init_is_systemd') + @patch('subprocess.check_output') + @patch.object(host, 'service') + def test_resumes_a_running_upstart_service(self, service, check_output, + systemd, service_running): + """When the service is already running, service start isn't called.""" + service_name = 'foo-service' + service.side_effect = [True] + systemd.return_value = False + service_running.return_value = True + tempdir = mkdtemp(prefix="test_resumes_an_upstart_service") + conf_path = os.path.join(tempdir, "{}.conf".format(service_name)) + with open(conf_path, "w") as fh: + fh.write("") + self.addCleanup(rmtree, tempdir) + self.assertTrue(host.service_resume(service_name, init_dir=tempdir)) + + # Start isn't called because service is already running + self.assertFalse(service.called) + override_path = os.path.join( + tempdir, "{}.override".format(service_name)) + self.assertFalse(os.path.exists(override_path)) + + @patch.object(host, 'service_running') + @patch.object(host, 'init_is_systemd') + @patch('subprocess.check_output') + @patch.object(host, 'service') + def test_resumes_a_stopped_upstart_service(self, service, check_output, + systemd, service_running): + """When the service is stopped, service start is called.""" + check_output.return_value = b'foo-service stop/waiting' + service_name = 'foo-service' + service.side_effect = [True] + systemd.return_value = False + service_running.return_value = False + tempdir = mkdtemp(prefix="test_resumes_an_upstart_service") + conf_path = os.path.join(tempdir, "{}.conf".format(service_name)) + with open(conf_path, "w") as fh: + fh.write("") + self.addCleanup(rmtree, tempdir) + self.assertTrue(host.service_resume(service_name, init_dir=tempdir)) + + service.assert_called_with('start', service_name) + override_path = os.path.join( + tempdir, "{}.override".format(service_name)) + self.assertFalse(os.path.exists(override_path)) + + @patch.object(host, 'service_running') + @patch.object(host, 'init_is_systemd') + @patch('subprocess.check_call') + @patch.object(host, 'service') + def test_resumes_a_sysv_service(self, service, check_call, systemd, + service_running): + """When process is in a stop/waiting state, service start is called.""" + service_name = 'foo-service' + service.side_effect = [True] + systemd.return_value = False + service_running.return_value = False + tempdir = mkdtemp(prefix="test_resumes_a_sysv_service") + sysv_path = 
os.path.join(tempdir, service_name) + # Just needs to exist + with open(sysv_path, "w") as fh: + fh.write("") + self.addCleanup(rmtree, tempdir) + self.assertTrue(host.service_resume( + service_name, init_dir=tempdir, initd_dir=tempdir)) + + service.assert_called_with('start', service_name) + check_call.assert_called_with(["update-rc.d", service_name, "enable"]) + + @patch.object(host, 'service_running') + @patch.object(host, 'init_is_systemd') + @patch('subprocess.check_call') + @patch.object(host, 'service') + def test_resume_a_running_sysv_service(self, service, check_call, + systemd, service_running): + """When process is already running, service start isn't called.""" + service_name = 'foo-service' + systemd.return_value = False + service_running.return_value = True + tempdir = mkdtemp(prefix="test_resumes_a_sysv_service") + sysv_path = os.path.join(tempdir, service_name) + # Just needs to exist + with open(sysv_path, "w") as fh: + fh.write("") + self.addCleanup(rmtree, tempdir) + self.assertTrue(host.service_resume( + service_name, init_dir=tempdir, initd_dir=tempdir)) + + # Start isn't called because service is already running + self.assertFalse(service.called) + check_call.assert_called_with(["update-rc.d", service_name, "enable"]) + + @patch.object(host, 'service_running') + @patch.object(host, 'init_is_systemd') + @patch.object(host, 'service') + def test_resume_with_unknown_service(self, service, systemd, + service_running): + service_name = 'foo-service' + service.side_effect = [True] + systemd.return_value = False + service_running.return_value = False + tempdir = mkdtemp(prefix="test_resumes_with_unknown_service") + self.addCleanup(rmtree, tempdir) + exception = self.assertRaises( + ValueError, host.service_resume, + service_name, init_dir=tempdir, initd_dir=tempdir) + self.assertIn( + "Unable to detect {0}".format(service_name), str(exception)) + self.assertIn(tempdir, str(exception)) + + @patch.object(host, 'service') + def test_reloads_a_service(self, service): + service_name = 'foo-service' + service.side_effect = [True] + self.assertTrue(host.service_reload(service_name)) + + service.assert_called_with('reload', service_name) + + @patch.object(host, 'service') + def test_failed_reload_restarts_a_service(self, service): + service_name = 'foo-service' + service.side_effect = [False, True] + self.assertTrue( + host.service_reload(service_name, restart_on_failure=True)) + + service.assert_has_calls([ + call('reload', service_name), + call('restart', service_name) + ]) + + @patch.object(host, 'service') + def test_failed_reload_without_restart(self, service): + service_name = 'foo-service' + service.side_effect = [False] + self.assertFalse(host.service_reload(service_name)) + + service.assert_called_with('reload', service_name) + + @patch.object(host, 'service') + def test_start_a_service_fails(self, service): + service_name = 'foo-service' + service.side_effect = [False] + self.assertFalse(host.service_start(service_name)) + + service.assert_called_with('start', service_name) + + @patch.object(host, 'service') + def test_stop_a_service_fails(self, service): + service_name = 'foo-service' + service.side_effect = [False] + self.assertFalse(host.service_stop(service_name)) + + service.assert_called_with('stop', service_name) + + @patch.object(host, 'service') + def test_restart_a_service_fails(self, service): + service_name = 'foo-service' + service.side_effect = [False] + self.assertFalse(host.service_restart(service_name)) + + service.assert_called_with('restart', 
service_name) + + @patch.object(host, 'service') + def test_reload_a_service_fails(self, service): + service_name = 'foo-service' + service.side_effect = [False] + self.assertFalse(host.service_reload(service_name)) + + service.assert_called_with('reload', service_name) + + @patch.object(host, 'service') + def test_failed_reload_restarts_a_service_fails(self, service): + service_name = 'foo-service' + service.side_effect = [False, False] + self.assertFalse( + host.service_reload(service_name, restart_on_failure=True)) + + service.assert_has_calls([ + call('reload', service_name), + call('restart', service_name) + ]) + + @patch.object(host, 'os') + @patch.object(host, 'init_is_systemd') + @patch('subprocess.check_output') + def test_service_running_on_stopped_service(self, check_output, systemd, + os): + systemd.return_value = False + os.path.exists.return_value = True + check_output.return_value = b'foo stop/waiting' + self.assertFalse(host.service_running('foo')) + + @patch.object(host, 'os') + @patch.object(host, 'init_is_systemd') + @patch('subprocess.check_output') + def test_service_running_on_running_service(self, check_output, systemd, + os): + systemd.return_value = False + os.path.exists.return_value = True + check_output.return_value = b'foo start/running, process 23871' + self.assertTrue(host.service_running('foo')) + + @patch.object(host, 'os') + @patch.object(host, 'init_is_systemd') + @patch('subprocess.check_output') + def test_service_running_on_unknown_service(self, check_output, systemd, + os): + systemd.return_value = False + os.path.exists.return_value = True + exc = subprocess.CalledProcessError(1, ['status']) + check_output.side_effect = exc + self.assertFalse(host.service_running('foo')) + + @patch.object(host, 'os') + @patch.object(host, 'service') + @patch.object(host, 'init_is_systemd') + def test_service_systemv_running(self, systemd, service, os): + systemd.return_value = False + service.return_value = True + os.path.exists.side_effect = [False, True] + self.assertTrue(host.service_running('rabbitmq-server')) + service.assert_called_with('status', 'rabbitmq-server') + + @patch.object(host, 'os') + @patch.object(host, 'service') + @patch.object(host, 'init_is_systemd') + def test_service_systemv_not_running(self, systemd, service, + os): + systemd.return_value = False + service.return_value = False + os.path.exists.side_effect = [False, True] + self.assertFalse(host.service_running('keystone')) + service.assert_called_with('status', 'keystone') + + @patch('subprocess.call') + @patch.object(host, 'init_is_systemd') + def test_service_start_with_params(self, systemd, call): + systemd.return_value = False + call.return_value = 0 + self.assertTrue(host.service_start('ceph-osd', id=4)) + call.assert_called_with(['service', 'ceph-osd', 'start', 'id=4']) + + @patch('subprocess.call') + @patch.object(host, 'init_is_systemd') + def test_service_stop_with_params(self, systemd, call): + systemd.return_value = False + call.return_value = 0 + self.assertTrue(host.service_stop('ceph-osd', id=4)) + call.assert_called_with(['service', 'ceph-osd', 'stop', 'id=4']) + + @patch('subprocess.call') + @patch.object(host, 'init_is_systemd') + def test_service_start_systemd_with_params(self, systemd, call): + systemd.return_value = True + call.return_value = 0 + self.assertTrue(host.service_start('ceph-osd', id=4)) + call.assert_called_with(['systemctl', 'start', 'ceph-osd']) + + @patch('grp.getgrnam') + @patch('pwd.getpwnam') + @patch('subprocess.check_call') + @patch.object(host, 
'log') + def test_adds_a_user_if_it_doesnt_exist(self, log, check_call, + getpwnam, getgrnam): + username = 'johndoe' + password = 'eodnhoj' + shell = '/bin/bash' + existing_user_pwnam = KeyError('user not found') + new_user_pwnam = 'some user pwnam' + + getpwnam.side_effect = [existing_user_pwnam, new_user_pwnam] + + result = host.adduser(username, password=password) + + self.assertEqual(result, new_user_pwnam) + check_call.assert_called_with([ + 'useradd', + '--create-home', + '--shell', shell, + '--password', password, + '-g', username, + username + ]) + getpwnam.assert_called_with(username) + + @patch('pwd.getpwnam') + @patch('subprocess.check_call') + @patch.object(host, 'log') + def test_doesnt_add_user_if_it_already_exists(self, log, check_call, + getpwnam): + username = 'johndoe' + password = 'eodnhoj' + existing_user_pwnam = 'some user pwnam' + + getpwnam.return_value = existing_user_pwnam + + result = host.adduser(username, password=password) + + self.assertEqual(result, existing_user_pwnam) + self.assertFalse(check_call.called) + getpwnam.assert_called_with(username) + + @patch('grp.getgrnam') + @patch('pwd.getpwnam') + @patch('subprocess.check_call') + @patch.object(host, 'log') + def test_adds_a_user_with_different_shell(self, log, check_call, getpwnam, + getgrnam): + username = 'johndoe' + password = 'eodnhoj' + shell = '/bin/zsh' + existing_user_pwnam = KeyError('user not found') + new_user_pwnam = 'some user pwnam' + + getpwnam.side_effect = [existing_user_pwnam, new_user_pwnam] + getgrnam.side_effect = KeyError('group not found') + + result = host.adduser(username, password=password, shell=shell) + + self.assertEqual(result, new_user_pwnam) + check_call.assert_called_with([ + 'useradd', + '--create-home', + '--shell', shell, + '--password', password, + username + ]) + getpwnam.assert_called_with(username) + + @patch('grp.getgrnam') + @patch('pwd.getpwnam') + @patch('subprocess.check_call') + @patch.object(host, 'log') + def test_adduser_with_groups(self, log, check_call, getpwnam, getgrnam): + username = 'johndoe' + password = 'eodnhoj' + shell = '/bin/bash' + existing_user_pwnam = KeyError('user not found') + new_user_pwnam = 'some user pwnam' + + getpwnam.side_effect = [existing_user_pwnam, new_user_pwnam] + + result = host.adduser(username, password=password, + primary_group='foo', secondary_groups=[ + 'bar', 'qux', + ]) + + self.assertEqual(result, new_user_pwnam) + check_call.assert_called_with([ + 'useradd', + '--create-home', + '--shell', shell, + '--password', password, + '-g', 'foo', + '-G', 'bar,qux', + username + ]) + getpwnam.assert_called_with(username) + assert not getgrnam.called + + @patch('pwd.getpwnam') + @patch('subprocess.check_call') + @patch.object(host, 'log') + def test_adds_a_systemuser(self, log, check_call, getpwnam): + username = 'johndoe' + existing_user_pwnam = KeyError('user not found') + new_user_pwnam = 'some user pwnam' + + getpwnam.side_effect = [existing_user_pwnam, new_user_pwnam] + + result = host.adduser(username, system_user=True) + + self.assertEqual(result, new_user_pwnam) + check_call.assert_called_with([ + 'useradd', + '--system', + username + ]) + getpwnam.assert_called_with(username) + + @patch('pwd.getpwnam') + @patch('subprocess.check_call') + @patch.object(host, 'log') + def test_adds_a_systemuser_with_home_dir(self, log, check_call, getpwnam): + username = 'johndoe' + existing_user_pwnam = KeyError('user not found') + new_user_pwnam = 'some user pwnam' + + getpwnam.side_effect = [existing_user_pwnam, new_user_pwnam] + + 
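+ # Note: mock raises any exception *instance* it finds in a
+ # side_effect list, so the first getpwnam() lookup above raises
+ # KeyError (user absent) and the second returns the entry created
+ # by useradd. A minimal sketch of the pattern, using the Mock class
+ # from the same mock package this file already depends on:
+ #   m = Mock(side_effect=[KeyError('absent'), 'pwnam'])
+ #   m()  # raises KeyError
+ #   m()  # returns 'pwnam'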
result = host.adduser(username, system_user=True, + home_dir='/var/lib/johndoe') + + self.assertEqual(result, new_user_pwnam) + check_call.assert_called_with([ + 'useradd', + '--home', + '/var/lib/johndoe', + '--system', + username + ]) + getpwnam.assert_called_with(username) + + @patch('pwd.getpwnam') + @patch('pwd.getpwuid') + @patch('grp.getgrnam') + @patch('subprocess.check_call') + @patch.object(host, 'log') + def test_add_user_uid(self, log, check_call, getgrnam, getpwuid, getpwnam): + user_name = 'james' + user_id = 1111 + uid_key_error = KeyError('user not found') + getpwuid.side_effect = uid_key_error + host.adduser(user_name, uid=user_id) + + check_call.assert_called_with([ + 'useradd', + '--uid', + str(user_id), + '--system', + '-g', + user_name, + user_name + ]) + getpwnam.assert_called_with(user_name) + getpwuid.assert_called_with(user_id) + + @patch('grp.getgrnam') + @patch('grp.getgrgid') + @patch('subprocess.check_call') + @patch.object(host, 'log') + def test_add_group_gid(self, log, check_call, getgrgid, getgrnam): + group_name = 'darkhorse' + group_id = 1005 + existing_group_gid = KeyError('group not found') + new_group_gid = 1006 + getgrgid.side_effect = [existing_group_gid, new_group_gid] + + host.add_group(group_name, gid=group_id) + check_call.assert_called_with([ + 'addgroup', + '--gid', + str(group_id), + '--group', + group_name + ]) + getgrgid.assert_called_with(group_id) + getgrnam.assert_called_with(group_name) + + @patch('pwd.getpwnam') + def test_user_exists_true(self, getpwnam): + getpwnam.side_effect = 'pw info' + self.assertTrue(host.user_exists('bob')) + + @patch('pwd.getpwnam') + def test_user_exists_false(self, getpwnam): + getpwnam.side_effect = KeyError('user not found') + self.assertFalse(host.user_exists('bob')) + + @patch('subprocess.check_call') + @patch.object(host, 'log') + def test_adds_a_user_to_a_group(self, log, check_call): + username = 'foo' + group = 'bar' + + host.add_user_to_group(username, group) + + check_call.assert_called_with([ + 'gpasswd', '-a', + username, + group + ]) + + @patch.object(osplatform, 'get_platform') + @patch('grp.getgrnam') + @patch('subprocess.check_call') + def test_add_a_group_if_it_doesnt_exist_ubuntu(self, check_call, + getgrnam, platform): + platform.return_value = 'ubuntu' + imp.reload(host) + + group_name = 'testgroup' + existing_group_grnam = KeyError('group not found') + new_group_grnam = 'some group grnam' + + getgrnam.side_effect = [existing_group_grnam, new_group_grnam] + with patch("charmhelpers.core.host.log"): + result = host.add_group(group_name) + + self.assertEqual(result, new_group_grnam) + check_call.assert_called_with(['addgroup', '--group', group_name]) + getgrnam.assert_called_with(group_name) + + @patch.object(osplatform, 'get_platform') + @patch('grp.getgrnam') + @patch('subprocess.check_call') + def test_add_a_group_if_it_doesnt_exist_centos(self, check_call, + getgrnam, platform): + platform.return_value = 'centos' + imp.reload(host) + + group_name = 'testgroup' + existing_group_grnam = KeyError('group not found') + new_group_grnam = 'some group grnam' + + getgrnam.side_effect = [existing_group_grnam, new_group_grnam] + + with patch("charmhelpers.core.host.log"): + result = host.add_group(group_name) + + self.assertEqual(result, new_group_grnam) + check_call.assert_called_with(['groupadd', group_name]) + getgrnam.assert_called_with(group_name) + + @patch.object(osplatform, 'get_platform') + @patch('grp.getgrnam') + @patch('subprocess.check_call') + def 
test_doesnt_add_group_if_it_already_exists_ubuntu(self, check_call, + getgrnam, platform): + platform.return_value = 'ubuntu' + imp.reload(host) + + group_name = 'testgroup' + existing_group_grnam = 'some group grnam' + + getgrnam.return_value = existing_group_grnam + + with patch("charmhelpers.core.host.log"): + result = host.add_group(group_name) + + self.assertEqual(result, existing_group_grnam) + self.assertFalse(check_call.called) + getgrnam.assert_called_with(group_name) + + @patch.object(osplatform, 'get_platform') + @patch('grp.getgrnam') + @patch('subprocess.check_call') + def test_doesnt_add_group_if_it_already_exists_centos(self, check_call, + getgrnam, platform): + platform.return_value = 'centos' + imp.reload(host) + + group_name = 'testgroup' + existing_group_grnam = 'some group grnam' + + getgrnam.return_value = existing_group_grnam + + with patch("charmhelpers.core.host.log"): + result = host.add_group(group_name) + + self.assertEqual(result, existing_group_grnam) + self.assertFalse(check_call.called) + getgrnam.assert_called_with(group_name) + + @patch.object(osplatform, 'get_platform') + @patch('grp.getgrnam') + @patch('subprocess.check_call') + def test_add_a_system_group_ubuntu(self, check_call, getgrnam, platform): + platform.return_value = 'ubuntu' + imp.reload(host) + + group_name = 'testgroup' + existing_group_grnam = KeyError('group not found') + new_group_grnam = 'some group grnam' + + getgrnam.side_effect = [existing_group_grnam, new_group_grnam] + + with patch("charmhelpers.core.host.log"): + result = host.add_group(group_name, system_group=True) + + self.assertEqual(result, new_group_grnam) + check_call.assert_called_with([ + 'addgroup', + '--system', + group_name + ]) + getgrnam.assert_called_with(group_name) + + @patch.object(osplatform, 'get_platform') + @patch('grp.getgrnam') + @patch('subprocess.check_call') + def test_add_a_system_group_centos(self, check_call, getgrnam, platform): + platform.return_value = 'centos' + imp.reload(host) + + group_name = 'testgroup' + existing_group_grnam = KeyError('group not found') + new_group_grnam = 'some group grnam' + + getgrnam.side_effect = [existing_group_grnam, new_group_grnam] + + with patch("charmhelpers.core.host.log"): + result = host.add_group(group_name, system_group=True) + + self.assertEqual(result, new_group_grnam) + check_call.assert_called_with([ + 'groupadd', + '-r', + group_name + ]) + getgrnam.assert_called_with(group_name) + + @patch('subprocess.check_call') + def test_chage_no_chroot(self, check_call): + host.chage('usera', expiredate='2019-09-28', maxdays='11') + check_call.assert_called_with([ + 'chage', + '--expiredate', '2019-09-28', + '--maxdays', '11', + 'usera' + ]) + + @patch('subprocess.check_call') + def test_chage_chroot(self, check_call): + host.chage('usera', expiredate='2019-09-28', maxdays='11', + root='mychroot') + check_call.assert_called_with([ + 'chage', + '--root', 'mychroot', + '--expiredate', '2019-09-28', + '--maxdays', '11', + 'usera' + ]) + + @patch('subprocess.check_call') + def test_remove_password_expiry(self, check_call): + host.remove_password_expiry('usera') + check_call.assert_called_with([ + 'chage', + '--expiredate', '-1', + '--inactive', '-1', + '--mindays', '0', + '--maxdays', '-1', + 'usera' + ]) + + @patch('subprocess.check_output') + @patch.object(host, 'log') + def test_rsyncs_a_path(self, log, check_output): + from_path = '/from/this/path/foo' + to_path = '/to/this/path/bar' + check_output.return_value = b' some output ' # Spaces will be stripped + + result 
= host.rsync(from_path, to_path) + + self.assertEqual(result, 'some output') + check_output.assert_called_with(['/usr/bin/rsync', '-r', '--delete', + '--executability', + '/from/this/path/foo', + '/to/this/path/bar'], stderr=subprocess.STDOUT) + + @patch('subprocess.check_call') + @patch.object(host, 'log') + def test_creates_a_symlink(self, log, check_call): + source = '/from/this/path/foo' + destination = '/to/this/path/bar' + + host.symlink(source, destination) + + check_call.assert_called_with(['ln', '-sf', + '/from/this/path/foo', + '/to/this/path/bar']) + + @patch('pwd.getpwnam') + @patch('grp.getgrnam') + @patch.object(host, 'log') + @patch.object(host, 'os') + def test_creates_a_directory_if_it_doesnt_exist(self, os_, log, + getgrnam, getpwnam): + uid = 123 + gid = 234 + owner = 'some-user' + group = 'some-group' + path = '/some/other/path/from/link' + realpath = '/some/path' + path_exists = False + perms = 0o644 + + getpwnam.return_value.pw_uid = uid + getgrnam.return_value.gr_gid = gid + os_.path.abspath.return_value = realpath + os_.path.exists.return_value = path_exists + + host.mkdir(path, owner=owner, group=group, perms=perms) + + getpwnam.assert_called_with('some-user') + getgrnam.assert_called_with('some-group') + os_.path.abspath.assert_called_with(path) + os_.path.exists.assert_called_with(realpath) + os_.makedirs.assert_called_with(realpath, perms) + os_.chown.assert_called_with(realpath, uid, gid) + + @patch.object(host, 'log') + @patch.object(host, 'os') + def test_creates_a_directory_with_defaults(self, os_, log): + uid = 0 + gid = 0 + path = '/some/other/path/from/link' + realpath = '/some/path' + path_exists = False + perms = 0o555 + + os_.path.abspath.return_value = realpath + os_.path.exists.return_value = path_exists + + host.mkdir(path) + + os_.path.abspath.assert_called_with(path) + os_.path.exists.assert_called_with(realpath) + os_.makedirs.assert_called_with(realpath, perms) + os_.chown.assert_called_with(realpath, uid, gid) + + @patch('pwd.getpwnam') + @patch('grp.getgrnam') + @patch.object(host, 'log') + @patch.object(host, 'os') + def test_removes_file_with_same_path_before_mkdir(self, os_, log, + getgrnam, getpwnam): + uid = 123 + gid = 234 + owner = 'some-user' + group = 'some-group' + path = '/some/other/path/from/link' + realpath = '/some/path' + path_exists = True + force = True + is_dir = False + perms = 0o644 + + getpwnam.return_value.pw_uid = uid + getgrnam.return_value.gr_gid = gid + os_.path.abspath.return_value = realpath + os_.path.exists.return_value = path_exists + os_.path.isdir.return_value = is_dir + + host.mkdir(path, owner=owner, group=group, perms=perms, force=force) + + getpwnam.assert_called_with('some-user') + getgrnam.assert_called_with('some-group') + os_.path.abspath.assert_called_with(path) + os_.path.exists.assert_called_with(realpath) + os_.unlink.assert_called_with(realpath) + os_.makedirs.assert_called_with(realpath, perms) + os_.chown.assert_called_with(realpath, uid, gid) + + @patch('pwd.getpwnam') + @patch('grp.getgrnam') + @patch.object(host, 'log') + @patch.object(host, 'os') + def test_writes_content_to_a_file(self, os_, log, getgrnam, getpwnam): + # Curly brackets here demonstrate that we are *not* rendering + # these strings with Python's string formatting. This is a + # change from the original behavior per Bug #1195634. 
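+ # A str.format()-based implementation would blow up on these names:
+ #   '/some/path/{baz}'.format()  # KeyError: 'baz'
+ # so this test only passes if write_file treats them as literal text.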
+ uid = 123 + gid = 234 + owner = 'some-user-{foo}' + group = 'some-group-{bar}' + path = '/some/path/{baz}' + contents = b'what is {juju}' + perms = 0o644 + fileno = 'some-fileno' + + getpwnam.return_value.pw_uid = uid + getgrnam.return_value.gr_gid = gid + + with patch_open() as (mock_open, mock_file): + mock_file.fileno.return_value = fileno + + host.write_file(path, contents, owner=owner, group=group, + perms=perms) + + getpwnam.assert_called_with('some-user-{foo}') + getgrnam.assert_called_with('some-group-{bar}') + mock_open.assert_called_with('/some/path/{baz}', 'wb') + os_.fchown.assert_called_with(fileno, uid, gid) + os_.fchmod.assert_called_with(fileno, perms) + mock_file.write.assert_called_with(b'what is {juju}') + + @patch.object(host, 'log') + @patch.object(host, 'os') + def test_writes_content_with_default(self, os_, log): + uid = 0 + gid = 0 + path = '/some/path/{baz}' + fmtstr = b'what is {juju}' + perms = 0o444 + fileno = 'some-fileno' + + with patch_open() as (mock_open, mock_file): + mock_file.fileno.return_value = fileno + + host.write_file(path, fmtstr) + + mock_open.assert_called_with('/some/path/{baz}', 'wb') + os_.fchown.assert_called_with(fileno, uid, gid) + os_.fchmod.assert_called_with(fileno, perms) + mock_file.write.assert_called_with(b'what is {juju}') + + @patch.object(host, 'log') + @patch.object(host, 'os') + def test_does_not_write_duplicate_content(self, os_, log): + uid = 0 + gid = 0 + path = '/some/path/{baz}' + fmtstr = b'what is {juju}' + perms = 0o444 + fileno = 'some-fileno' + + os_.stat.return_value.st_uid = 1 + os_.stat.return_value.st_gid = 1 + os_.stat.return_value.st_mode = 0o777 + + with patch_open() as (mock_open, mock_file): + mock_file.fileno.return_value = fileno + mock_file.read.return_value = fmtstr + + host.write_file(path, fmtstr) + + self.assertEqual(mock_open.call_count, 1) # Called to read + os_.chown.assert_has_calls([ + call(path, uid, -1), + call(path, -1, gid), + ]) + os_.chmod.assert_called_with(path, perms) + + @patch.object(host, 'log') + @patch.object(host, 'os') + def test_only_changes_incorrect_ownership(self, os_, log): + uid = 0 + gid = 0 + path = '/some/path/{baz}' + fmtstr = b'what is {juju}' + perms = 0o444 + fileno = 'some-fileno' + + os_.stat.return_value.st_uid = uid + os_.stat.return_value.st_gid = gid + os_.stat.return_value.st_mode = perms + + with patch_open() as (mock_open, mock_file): + mock_file.fileno.return_value = fileno + mock_file.read.return_value = fmtstr + + host.write_file(path, fmtstr) + + self.assertEqual(mock_open.call_count, 1) # Called to read + self.assertEqual(os_.chown.call_count, 0) + + @patch.object(host, 'log') + @patch.object(host, 'os') + def test_writes_binary_contents(self, os_, log): + path = '/some/path/{baz}' + fmtstr = six.u('what is {juju}\N{TRADE MARK SIGN}').encode('UTF-8') + fileno = 'some-fileno' + + with patch_open() as (mock_open, mock_file): + mock_file.fileno.return_value = fileno + + host.write_file(path, fmtstr) + + mock_open.assert_called_with('/some/path/{baz}', 'wb') + mock_file.write.assert_called_with(fmtstr) + + @patch('subprocess.check_output') + @patch.object(host, 'log') + def test_mounts_a_device(self, log, check_output): + device = '/dev/guido' + mountpoint = '/mnt/guido' + options = 'foo,bar' + + result = host.mount(device, mountpoint, options) + + self.assertTrue(result) + check_output.assert_called_with(['mount', '-o', 'foo,bar', + '/dev/guido', '/mnt/guido']) + + @patch('subprocess.check_output') + @patch.object(host, 'log') + def 
test_doesnt_mount_on_error(self, log, check_output): + device = '/dev/guido' + mountpoint = '/mnt/guido' + options = 'foo,bar' + + error = subprocess.CalledProcessError(123, 'mount it', 'Oops...') + check_output.side_effect = error + + result = host.mount(device, mountpoint, options) + + self.assertFalse(result) + check_output.assert_called_with(['mount', '-o', 'foo,bar', + '/dev/guido', '/mnt/guido']) + + @patch('subprocess.check_output') + @patch.object(host, 'log') + def test_mounts_a_device_without_options(self, log, check_output): + device = '/dev/guido' + mountpoint = '/mnt/guido' + + result = host.mount(device, mountpoint) + + self.assertTrue(result) + check_output.assert_called_with(['mount', '/dev/guido', '/mnt/guido']) + + @patch.object(host, 'Fstab') + @patch('subprocess.check_output') + @patch.object(host, 'log') + def test_mounts_and_persist_a_device(self, log, check_output, fstab): + """Check if a mount works with the persist flag set to True + """ + device = '/dev/guido' + mountpoint = '/mnt/guido' + options = 'foo,bar' + + result = host.mount(device, mountpoint, options, persist=True) + + self.assertTrue(result) + check_output.assert_called_with(['mount', '-o', 'foo,bar', + '/dev/guido', '/mnt/guido']) + + fstab.add.assert_called_with('/dev/guido', '/mnt/guido', 'ext3', + options='foo,bar') + + result = host.mount(device, mountpoint, options, persist=True, + filesystem="xfs") + + self.assertTrue(result) + fstab.add.assert_called_with('/dev/guido', '/mnt/guido', 'xfs', + options='foo,bar') + + @patch.object(host, 'Fstab') + @patch('subprocess.check_output') + @patch.object(host, 'log') + def test_umounts_and_persist_a_device(self, log, check_output, fstab): + mountpoint = '/mnt/guido' + + result = host.umount(mountpoint, persist=True) + + self.assertTrue(result) + check_output.assert_called_with(['umount', mountpoint]) + fstab.remove_by_mountpoint.assert_called_with(mountpoint) + + @patch('subprocess.check_output') + @patch.object(host, 'log') + def test_umounts_a_device(self, log, check_output): + mountpoint = '/mnt/guido' + + result = host.umount(mountpoint) + + self.assertTrue(result) + check_output.assert_called_with(['umount', '/mnt/guido']) + + @patch('subprocess.check_output') + @patch.object(host, 'log') + def test_doesnt_umount_on_error(self, log, check_output): + mountpoint = '/mnt/guido' + + error = subprocess.CalledProcessError(123, 'mount it', 'Oops...') + check_output.side_effect = error + + result = host.umount(mountpoint) + + self.assertFalse(result) + check_output.assert_called_with(['umount', '/mnt/guido']) + + def test_lists_the_mount_points(self): + with patch_open() as (mock_open, mock_file): + mock_file.readlines.return_value = MOUNT_LINES + result = host.mounts() + + self.assertEqual(result, [ + ['/', 'rootfs'], + ['/sys', 'sysfs'], + ['/proc', 'proc'], + ['/dev', 'udev'], + ['/dev/pts', 'devpts'] + ]) + mock_open.assert_called_with('/proc/mounts') + + _hash_files = { + '/etc/exists.conf': 'lots of nice ceph configuration', + '/etc/missing.conf': None + } + + @patch('subprocess.check_output') + @patch.object(host, 'log') + def test_fstab_mount(self, log, check_output): + self.assertTrue(host.fstab_mount('/mnt/mymntpnt')) + check_output.assert_called_with(['mount', '/mnt/mymntpnt']) + + @patch('subprocess.check_output') + @patch.object(host, 'log') + def test_fstab_mount_fail(self, log, check_output): + error = subprocess.CalledProcessError(123, 'mount it', 'Oops...') + check_output.side_effect = error + self.assertFalse(host.fstab_mount('/mnt/mymntpnt'))
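+ # fstab_mount is expected to swallow the CalledProcessError raised
+ # by the mocked check_output and to signal failure via its return
+ # value; the mount invocation below must still have been attempted.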
check_output.assert_called_with(['mount', '/mnt/mymntpnt']) + + @patch('hashlib.md5') + @patch('os.path.exists') + def test_file_hash_exists(self, exists, md5): + filename = '/etc/exists.conf' + exists.side_effect = [True] + m = md5() + m.hexdigest.return_value = self._hash_files[filename] + with patch_open() as (mock_open, mock_file): + mock_file.read.return_value = self._hash_files[filename] + result = host.file_hash(filename) + self.assertEqual(result, self._hash_files[filename]) + + @patch('os.path.exists') + def test_file_hash_missing(self, exists): + filename = '/etc/missing.conf' + exists.side_effect = [False] + with patch_open() as (mock_open, mock_file): + mock_file.read.return_value = self._hash_files[filename] + result = host.file_hash(filename) + self.assertEqual(result, None) + + @patch('hashlib.sha1') + @patch('os.path.exists') + def test_file_hash_sha1(self, exists, sha1): + filename = '/etc/exists.conf' + exists.side_effect = [True] + m = sha1() + m.hexdigest.return_value = self._hash_files[filename] + with patch_open() as (mock_open, mock_file): + mock_file.read.return_value = self._hash_files[filename] + result = host.file_hash(filename, hash_type='sha1') + self.assertEqual(result, self._hash_files[filename]) + + @patch.object(host, 'file_hash') + def test_check_hash(self, file_hash): + file_hash.return_value = 'good-hash' + self.assertRaises(host.ChecksumError, host.check_hash, + 'file', 'bad-hash') + host.check_hash('file', 'good-hash', 'sha256') + self.assertEqual(file_hash.call_args_list, [ + call('file', 'md5'), + call('file', 'sha256'), + ]) + + @patch.object(host, 'service') + @patch('os.path.exists') + @patch('glob.iglob') + def test_restart_no_changes(self, iglob, exists, service): + file_name = '/etc/missing.conf' + restart_map = { + file_name: ['test-service'] + } + iglob.return_value = [] + + @host.restart_on_change(restart_map) + def make_no_changes(): + pass + + make_no_changes() + + assert not service.called + assert not exists.called + + @patch.object(host, 'service') + @patch('os.path.exists') + @patch('glob.iglob') + def test_restart_on_change(self, iglob, exists, service): + file_name = '/etc/missing.conf' + restart_map = { + file_name: ['test-service'] + } + iglob.side_effect = [[], [file_name]] + exists.return_value = True + + @host.restart_on_change(restart_map) + def make_some_changes(mock_file): + mock_file.read.return_value = b"newstuff" + + with patch_open() as (mock_open, mock_file): + make_some_changes(mock_file) + + for service_name in restart_map[file_name]: + service.assert_called_with('restart', service_name) + + exists.assert_has_calls([ + call(file_name), + ]) + + @patch.object(host, 'service') + @patch('os.path.exists') + @patch('glob.iglob') + def test_multiservice_restart_on_change(self, iglob, exists, service): + file_name_one = '/etc/missing.conf' + file_name_two = '/etc/exists.conf' + restart_map = { + file_name_one: ['test-service'], + file_name_two: ['test-service', 'test-service2'] + } + iglob.side_effect = [[], [file_name_two], + [file_name_one], [file_name_two]] + exists.return_value = True + + @host.restart_on_change(restart_map) + def make_some_changes(): + pass + + with patch_open() as (mock_open, mock_file): + mock_file.read.side_effect = [b'exists', b'missing', b'exists2'] + make_some_changes() + + # Restart should only happen once per service + for svc in ['test-service2', 'test-service']: + c = call('restart', svc) + self.assertEquals(1, service.call_args_list.count(c)) + + exists.assert_has_calls([ + 
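+ # The decorator checksums every watched path before and after the
+ # wrapped call; although both files change here, each mapped service
+ # may only restart once, which the count assertions above enforce.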
call(file_name_one), + call(file_name_two) + ]) + + @patch.object(host, 'service') + @patch('os.path.exists') + @patch('glob.iglob') + def test_multiservice_restart_on_change_in_order(self, iglob, exists, + service): + file_name_one = '/etc/cinder/cinder.conf' + file_name_two = '/etc/haproxy/haproxy.conf' + restart_map = OrderedDict([ + (file_name_one, ['some-api']), + (file_name_two, ['haproxy']) + ]) + iglob.side_effect = [[], [file_name_two], + [file_name_one], [file_name_two]] + exists.return_value = True + + @host.restart_on_change(restart_map) + def make_some_changes(): + pass + + with patch_open() as (mock_open, mock_file): + mock_file.read.side_effect = [b'exists', b'missing', b'exists2'] + make_some_changes() + + # Restarts should happen in the order they are described in the + # restart map. + expected = [ + call('restart', 'some-api'), + call('restart', 'haproxy') + ] + self.assertEquals(expected, service.call_args_list) + + @patch.object(host, 'service') + @patch('os.path.exists') + @patch('glob.iglob') + def test_glob_no_restart(self, iglob, exists, service): + glob_path = '/etc/service/*.conf' + file_name_one = '/etc/service/exists.conf' + file_name_two = '/etc/service/exists2.conf' + restart_map = { + glob_path: ['service'] + } + iglob.side_effect = [[file_name_one, file_name_two], + [file_name_one, file_name_two]] + exists.return_value = True + + @host.restart_on_change(restart_map) + def make_some_changes(): + pass + + with patch_open() as (mock_open, mock_file): + mock_file.read.side_effect = [b'content', b'content2', + b'content', b'content2'] + make_some_changes() + + self.assertEquals([], service.call_args_list) + + @patch.object(host, 'service') + @patch('os.path.exists') + @patch('glob.iglob') + def test_glob_restart_on_change(self, iglob, exists, service): + glob_path = '/etc/service/*.conf' + file_name_one = '/etc/service/exists.conf' + file_name_two = '/etc/service/exists2.conf' + restart_map = { + glob_path: ['service'] + } + iglob.side_effect = [[file_name_one, file_name_two], + [file_name_one, file_name_two]] + exists.return_value = True + + @host.restart_on_change(restart_map) + def make_some_changes(): + pass + + with patch_open() as (mock_open, mock_file): + mock_file.read.side_effect = [b'content', b'content2', + b'changed', b'content2'] + make_some_changes() + + self.assertEquals([call('restart', 'service')], service.call_args_list) + + @patch.object(host, 'service') + @patch('os.path.exists') + @patch('glob.iglob') + def test_glob_restart_on_create(self, iglob, exists, service): + glob_path = '/etc/service/*.conf' + file_name_one = '/etc/service/exists.conf' + file_name_two = '/etc/service/missing.conf' + restart_map = { + glob_path: ['service'] + } + iglob.side_effect = [[file_name_one], + [file_name_one, file_name_two]] + exists.return_value = True + + @host.restart_on_change(restart_map) + def make_some_changes(): + pass + + with patch_open() as (mock_open, mock_file): + mock_file.read.side_effect = [b'exists', + b'exists', b'created'] + make_some_changes() + + self.assertEquals([call('restart', 'service')], service.call_args_list) + + @patch.object(host, 'service') + @patch('os.path.exists') + @patch('glob.iglob') + def test_glob_restart_on_delete(self, iglob, exists, service): + glob_path = '/etc/service/*.conf' + file_name_one = '/etc/service/exists.conf' + file_name_two = '/etc/service/exists2.conf' + restart_map = { + glob_path: ['service'] + } + iglob.side_effect = [[file_name_one, file_name_two], + [file_name_two]] + exists.return_value = True + 
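+ # The glob is expanded before and after the wrapped call; the second
+ # expansion no longer lists exists.conf, and a file that disappears
+ # counts as a change, so exactly one restart is expected.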
+ @host.restart_on_change(restart_map) + def make_some_changes(): + pass + + with patch_open() as (mock_open, mock_file): + mock_file.read.side_effect = [b'exists', b'exists2', + b'exists2'] + make_some_changes() + + self.assertEquals([call('restart', 'service')], service.call_args_list) + + @patch.object(host, 'service_reload') + @patch.object(host, 'service') + @patch('os.path.exists') + @patch('glob.iglob') + def test_restart_on_change_restart_functs(self, iglob, exists, service, + service_reload): + file_name_one = '/etc/cinder/cinder.conf' + file_name_two = '/etc/haproxy/haproxy.conf' + restart_map = OrderedDict([ + (file_name_one, ['some-api']), + (file_name_two, ['haproxy']) + ]) + iglob.side_effect = [[], [file_name_two], + [file_name_one], [file_name_two]] + exists.return_value = True + + restart_funcs = { + 'some-api': service_reload, + } + + @host.restart_on_change(restart_map, restart_functions=restart_funcs) + def make_some_changes(): + pass + + with patch_open() as (mock_open, mock_file): + mock_file.read.side_effect = [b'exists', b'missing', b'exists2'] + make_some_changes() + + self.assertEquals([call('restart', 'haproxy')], service.call_args_list) + self.assertEquals([call('some-api')], service_reload.call_args_list) + + @patch.object(osplatform, 'get_platform') + def test_lsb_release_ubuntu(self, platform): + platform.return_value = 'ubuntu' + imp.reload(host) + + result = { + "DISTRIB_ID": "Ubuntu", + "DISTRIB_RELEASE": "13.10", + "DISTRIB_CODENAME": "saucy", + "DISTRIB_DESCRIPTION": "\"Ubuntu Saucy Salamander " + "(development branch)\"" + } + with mocked_open('/etc/lsb-release', LSB_RELEASE): + lsb_release = host.lsb_release() + for key in result: + self.assertEqual(result[key], lsb_release[key]) + + @patch.object(osplatform, 'get_platform') + def test_lsb_release_centos(self, platform): + platform.return_value = 'centos' + imp.reload(host) + + result = { + 'NAME': '"CentOS Linux"', + 'ANSI_COLOR': '"0;31"', + 'ID_LIKE': '"rhel fedora"', + 'VERSION_ID': '"7"', + 'BUG_REPORT_URL': '"https://bugs.centos.org/"', + 'CENTOS_MANTISBT_PROJECT': '"CentOS-7"', + 'PRETTY_NAME': '"CentOS Linux 7 (Core)"', + 'VERSION': '"7 (Core)"', + 'REDHAT_SUPPORT_PRODUCT_VERSION': '"7"', + 'CENTOS_MANTISBT_PROJECT_VERSION': '"7"', + 'REDHAT_SUPPORT_PRODUCT': '"centos"', + 'HOME_URL': '"https://www.centos.org/"', + 'CPE_NAME': '"cpe:/o:centos:centos:7"', + 'ID': '"centos"' + } + with mocked_open('/etc/os-release', OS_RELEASE): + lsb_release = host.lsb_release() + for key in result: + self.assertEqual(result[key], lsb_release[key]) + + def test_pwgen(self): + pw = host.pwgen() + self.assert_(len(pw) >= 35, 'Password is too short') + + pw = host.pwgen(10) + self.assertEqual(len(pw), 10, 'Password incorrect length') + + pw2 = host.pwgen(10) + self.assertNotEqual(pw, pw2, 'Duplicated password') + + @patch.object(host, 'glob') + @patch('os.path.realpath') + @patch('os.path.isdir') + def test_is_phy_iface(self, mock_isdir, mock_realpath, mock_glob): + mock_isdir.return_value = True + mock_glob.glob.return_value = ['/sys/class/net/eth0', + '/sys/class/net/veth0'] + + def fake_realpath(soft): + if soft.endswith('/eth0'): + hard = ('/sys/devices/pci0000:00/0000:00:1c.4' + '/0000:02:00.1/net/eth0') + else: + hard = '/sys/devices/virtual/net/veth0' + + return hard + + mock_realpath.side_effect = fake_realpath + self.assertTrue(host.is_phy_iface('eth0')) + self.assertFalse(host.is_phy_iface('veth0')) + + @patch('os.path.exists') + @patch('os.path.realpath') + @patch('os.path.isdir') + def 
test_get_bond_master(self, mock_isdir, mock_realpath, mock_exists): + mock_isdir.return_value = True + + def fake_realpath(soft): + if soft.endswith('/eth0'): + return ('/sys/devices/pci0000:00/0000:00:1c.4' + '/0000:02:00.1/net/eth0') + elif soft.endswith('/br0'): + return '/sys/devices/virtual/net/br0' + elif soft.endswith('/master'): + return '/sys/devices/virtual/net/bond0' + + return None + + def fake_exists(path): + return True + + mock_exists.side_effect = fake_exists + mock_realpath.side_effect = fake_realpath + self.assertEqual(host.get_bond_master('eth0'), 'bond0') + self.assertIsNone(host.get_bond_master('br0')) + + @patch('subprocess.check_output') + def test_list_nics(self, check_output): + check_output.return_value = IP_LINES + nics = host.list_nics() + self.assertEqual(nics, ['eth0', 'eth1', 'eth0.10', 'eth100']) + nics = host.list_nics('eth') + self.assertEqual(nics, ['eth0', 'eth1', 'eth0.10', 'eth100']) + nics = host.list_nics(['eth']) + self.assertEqual(nics, ['eth0', 'eth1', 'eth0.10', 'eth100']) + + @patch('subprocess.check_output') + def test_list_nics_with_bonds(self, check_output): + check_output.return_value = IP_LINE_BONDS + nics = host.list_nics('bond') + self.assertEqual(nics, ['bond0.10', ]) + + @patch('subprocess.check_output') + def test_get_nic_mtu_with_bonds(self, check_output): + check_output.return_value = IP_LINE_BONDS + nic = "bond0.10" + mtu = host.get_nic_mtu(nic) + self.assertEqual(mtu, '1500') + + @patch('subprocess.check_call') + def test_set_nic_mtu(self, mock_call): + mock_call.return_value = 0 + nic = 'eth7' + mtu = '1546' + host.set_nic_mtu(nic, mtu) + mock_call.assert_called_with(['ip', 'link', 'set', nic, 'mtu', mtu]) + + @patch('subprocess.check_output') + def test_get_nic_mtu(self, check_output): + check_output.return_value = IP_LINE_ETH0 + nic = "eth0" + mtu = host.get_nic_mtu(nic) + self.assertEqual(mtu, '1500') + + @patch('subprocess.check_output') + def test_get_nic_mtu_vlan(self, check_output): + check_output.return_value = IP_LINE_ETH0_VLAN + nic = "eth0.10" + mtu = host.get_nic_mtu(nic) + self.assertEqual(mtu, '1500') + + @patch('subprocess.check_output') + def test_get_nic_hwaddr(self, check_output): + check_output.return_value = IP_LINE_HWADDR + nic = "eth0" + hwaddr = host.get_nic_hwaddr(nic) + self.assertEqual(hwaddr, 'e4:11:5b:ab:a7:3c') + + @patch('charmhelpers.core.host_factory.ubuntu.lsb_release') + def test_get_distrib_codename(self, lsb_release): + lsb_release.return_value = {'DISTRIB_CODENAME': 'bionic'} + self.assertEqual(host.get_distrib_codename(), 'bionic') + + @patch.object(osplatform, 'get_platform') + @patch.object(ubuntu_apt_pkg, 'Cache') + def test_cmp_pkgrevno_revnos_ubuntu(self, pkg_cache, platform): + platform.return_value = 'ubuntu' + imp.reload(host) + + class MockPackage: + class MockPackageRevno: + def __init__(self, ver_str): + self.ver_str = ver_str + + def __init__(self, current_ver): + self.current_ver = self.MockPackageRevno(current_ver) + + pkg_dict = { + 'python': MockPackage('2.4') + } + pkg_cache.return_value = pkg_dict + self.assertEqual(host.cmp_pkgrevno('python', '2.3'), 1) + self.assertEqual(host.cmp_pkgrevno('python', '2.4'), 0) + self.assertEqual(host.cmp_pkgrevno('python', '2.5'), -1) + + @patch.object(osplatform, 'get_platform') + def test_cmp_pkgrevno_revnos_centos(self, platform): + platform.return_value = 'centos' + imp.reload(host) + + class MockPackage: + def __init__(self, name, version): + self.Name = name + self.version = version + + yum_dict = { + 'installed': { + 
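+ # Shape mimics what yum's YumBase().doPackageLists() returns: the
+ # 'installed' bucket holds package objects exposing Name/version.
+ # The bare `import yum` below presumably succeeds only because the
+ # test harness stubs the module into sys.modules on non-CentOS hosts.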
MockPackage('python', '2.4') + } + } + + import yum + yum.YumBase.return_value.doPackageLists.return_value = ( + yum_dict) + + self.assertEqual(host.cmp_pkgrevno('python', '2.3'), 1) + self.assertEqual(host.cmp_pkgrevno('python', '2.4'), 0) + self.assertEqual(host.cmp_pkgrevno('python', '2.5'), -1) + + @patch.object(host.os, 'stat') + @patch.object(host.pwd, 'getpwuid') + @patch.object(host.grp, 'getgrgid') + @patch('posix.stat_result') + def test_owner(self, stat_result_, getgrgid_, getpwuid_, stat_): + getgrgid_.return_value = ['testgrp'] + getpwuid_.return_value = ['testuser'] + stat_.return_value = stat_result_() + + user, group = host.owner('/some/path') + stat_.assert_called_once_with('/some/path') + self.assertEqual('testuser', user) + self.assertEqual('testgrp', group) + + def test_get_total_ram(self): + raw = dedent('''\ + MemFree: 183868 kB + MemTotal: 7096108 kB + MemAvailable: 5645240 kB + ''').strip() + with patch_open() as (mock_open, mock_file): + mock_file.readlines.return_value = raw.splitlines() + self.assertEqual(host.get_total_ram(), 7266414592) # 7GB + mock_open.assert_called_once_with('/proc/meminfo', 'r') + + @patch.object(host, 'os') + @patch.object(host, 'init_is_systemd') + @patch('subprocess.call') + def test_is_container_with_systemd_container(self, + call, + init_is_systemd, + mock_os): + init_is_systemd.return_value = True + call.return_value = 0 + self.assertTrue(host.is_container()) + call.assert_called_with(['systemd-detect-virt', '--container']) + + @patch.object(host, 'os') + @patch.object(host, 'init_is_systemd') + @patch('subprocess.call') + def test_is_container_with_systemd_non_container(self, + call, + init_is_systemd, + mock_os): + init_is_systemd.return_value = True + call.return_value = 1 + self.assertFalse(host.is_container()) + call.assert_called_with(['systemd-detect-virt', '--container']) + + @patch.object(host, 'os') + @patch.object(host, 'init_is_systemd') + @patch('subprocess.call') + def test_is_container_with_upstart_container(self, + call, + init_is_systemd, + mock_os): + init_is_systemd.return_value = False + mock_os.path.exists.return_value = True + self.assertTrue(host.is_container()) + mock_os.path.exists.assert_called_with('/run/container_type') + + @patch.object(host, 'os') + @patch.object(host, 'init_is_systemd') + @patch('subprocess.call') + def test_is_container_with_upstart_not_container(self, + call, + init_is_systemd, + mock_os): + init_is_systemd.return_value = False + mock_os.path.exists.return_value = False + self.assertFalse(host.is_container()) + mock_os.path.exists.assert_called_with('/run/container_type') + + def test_updatedb(self): + updatedb_text = 'PRUNEPATHS="/tmp"' + self.assertEqual(host.updatedb(updatedb_text, '/srv/node'), + 'PRUNEPATHS="/tmp /srv/node"') + + def test_no_change_updatedb(self): + updatedb_text = 'PRUNEPATHS="/tmp /srv/node"' + self.assertEqual(host.updatedb(updatedb_text, '/srv/node'), + updatedb_text) + + def test_no_prunepaths(self): + updatedb_text = 'PRUNE_BIND_MOUNTS="yes"' + self.assertEqual(host.updatedb(updatedb_text, '/srv/node'), + updatedb_text) + + def test_write_updatedb(self): + _open = mock_open(read_data='PRUNEPATHS="/tmp /srv/node"') + with patch('charmhelpers.core.host.open', _open, create=True): + host.add_to_updatedb_prunepath("/tmp/test") + handle = _open() + + self.assertTrue(handle.read.call_count == 1) + self.assertTrue(handle.seek.call_count == 1) + handle.write.assert_called_once_with( + 'PRUNEPATHS="/tmp /srv/node /tmp/test"') + + @patch.object(host, 'os') + def 
test_prunepaths_no_updatedb_conf_file(self, mock_os): + mock_os.path.exists.return_value = False + _open = mock_open(read_data='PRUNEPATHS="/tmp /srv/node"') + with patch('charmhelpers.core.host.open', _open, create=True): + host.add_to_updatedb_prunepath("/tmp/test") + handle = _open() + + self.assertTrue(handle.call_count == 0) + + @patch.object(host, 'os') + def test_prunepaths_updatedb_conf_file_isdir(self, mock_os): + mock_os.path.exists.return_value = True + mock_os.path.isdir.return_value = True + _open = mock_open(read_data='PRUNEPATHS="/tmp /srv/node"') + with patch('charmhelpers.core.host.open', _open, create=True): + host.add_to_updatedb_prunepath("/tmp/test") + handle = _open() + + self.assertTrue(handle.call_count == 0) + + @patch.object(host, 'local_unit') + def test_modulo_distribution(self, local_unit): + local_unit.return_value = 'test/7' + + # unit % modulo * wait + self.assertEqual(host.modulo_distribution(modulo=6, wait=10), 10) + + # Zero wait when unit % modulo == 0 + self.assertEqual(host.modulo_distribution(modulo=7, wait=10), 0) + + # modulo * wait when unit % modulo == 0 and non_zero_wait=True + self.assertEqual(host.modulo_distribution(modulo=7, wait=10, + non_zero_wait=True), + 70) + + @patch.object(host, 'log') + @patch.object(host, 'charm_name') + @patch.object(host, 'write_file') + @patch.object(subprocess, 'check_call') + @patch.object(host, 'file_hash') + @patch('hashlib.md5') + def test_install_ca_cert_new_cert(self, md5, file_hash, check_call, + write_file, charm_name, log): + file_hash.return_value = 'old_hash' + charm_name.return_value = 'charm-name' + + md5().hexdigest.return_value = 'old_hash' + host.install_ca_cert('cert_data') + assert not check_call.called + + md5().hexdigest.return_value = 'new_hash' + host.install_ca_cert(None) + assert not check_call.called + host.install_ca_cert('') + assert not check_call.called + + host.install_ca_cert('cert_data', 'name') + write_file.assert_called_with( + '/usr/local/share/ca-certificates/name.crt', + b'cert_data') + check_call.assert_called_with(['update-ca-certificates', '--fresh']) + + host.install_ca_cert('cert_data') + write_file.assert_called_with( + '/usr/local/share/ca-certificates/juju-charm-name.crt', + b'cert_data') + check_call.assert_called_with(['update-ca-certificates', '--fresh']) + + @patch('subprocess.check_output') + def test_arch(self, check_output): + _ = host.arch() + check_output.assert_called_with( + ['dpkg', '--print-architecture'] + ) + + @patch('subprocess.check_output') + def test_get_system_env(self, check_output): + check_output.return_value = '' + self.assertEquals( + host.get_system_env('aKey', 'aDefault'), 'aDefault') + self.assertEquals(host.get_system_env('aKey'), None) + check_output.return_value = 'aKey=aValue\n' + self.assertEquals( + host.get_system_env('aKey', 'aDefault'), 'aValue') + check_output.return_value = 'otherKey=shell=wicked\n' + self.assertEquals( + host.get_system_env('otherKey', 'aDefault'), 'shell=wicked') + + +class TestHostCompator(TestCase): + + def test_compare_ubuntu_releases(self): + from charmhelpers.osplatform import get_platform + if get_platform() == 'ubuntu': + self.assertTrue(host.CompareHostReleases('yakkety') < 'zesty') diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/core/test_hugepage.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/core/test_hugepage.py new file mode 100644 index 0000000000000000000000000000000000000000..69ab9b285f97d4df24b6d4322672925ac79e46cb --- /dev/null 
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/core/test_hugepage.py @@ -0,0 +1,118 @@ +from testtools import TestCase +from mock import patch +from charmhelpers.core import hugepage +import yaml + +TO_PATCH = [ + 'fstab', + 'add_group', + 'add_user_to_group', + 'sysctl', + 'fstab_mount', + 'mkdir', + 'check_output', +] + + +class Group(object): + def __init__(self): + self.gr_gid = '1010' + + +class HugepageTests(TestCase): + + def setUp(self): + super(HugepageTests, self).setUp() + for m in TO_PATCH: + setattr(self, m, self._patch(m)) + + def _patch(self, method): + _m = patch('charmhelpers.core.hugepage.' + method) + mock = _m.start() + self.addCleanup(_m.stop) + return mock + + def test_hugepage_support(self): + self.add_group.return_value = Group() + self.fstab.Fstab().get_entry_by_attr.return_value = 'old fstab entry' + self.fstab.Fstab().Entry.return_value = 'new fstab entry' + hugepage.hugepage_support('nova') + sysctl_expect = (""" +vm.hugetlb_shm_group: '1010' +vm.max_map_count: 65536 +vm.nr_hugepages: 256 +""".lstrip()) + self.sysctl.create.assert_called_with(sysctl_expect, + '/etc/sysctl.d/10-hugepage.conf') + self.mkdir.assert_called_with('/run/hugepages/kvm', owner='root', + group='root', perms=0o755, force=False) + self.fstab.Fstab().remove_entry.assert_called_with('old fstab entry') + self.fstab.Fstab().Entry.assert_called_with( + 'nodev', '/run/hugepages/kvm', 'hugetlbfs', + 'mode=1770,gid=1010,pagesize=2MB', 0, 0) + self.fstab.Fstab().add_entry.assert_called_with('new fstab entry') + self.fstab_mount.assert_called_with('/run/hugepages/kvm') + + def test_hugepage_support_new_mnt(self): + self.add_group.return_value = Group() + self.fstab.Fstab().get_entry_by_attr.return_value = None + self.fstab.Fstab().Entry.return_value = 'new fstab entry' + hugepage.hugepage_support('nova') + self.assertEqual(self.fstab.Fstab().remove_entry.call_args_list, []) + + def test_hugepage_support_no_automount(self): + self.add_group.return_value = Group() + self.fstab.Fstab().get_entry_by_attr.return_value = None + self.fstab.Fstab().Entry.return_value = 'new fstab entry' + hugepage.hugepage_support('nova', mount=False) + self.assertEqual(self.fstab_mount.call_args_list, []) + + def test_hugepage_support_nodefaults(self): + self.add_group.return_value = Group() + self.fstab.Fstab().get_entry_by_attr.return_value = 'old fstab entry' + self.fstab.Fstab().Entry.return_value = 'new fstab entry' + hugepage.hugepage_support( + 'nova', group='neutron', nr_hugepages=512, max_map_count=70000, + mnt_point='/hugepages', pagesize='1G', mount=False) + sysctl_expect = { + 'vm.hugetlb_shm_group': '1010', + 'vm.max_map_count': 70000, + 'vm.nr_hugepages': 512, + } + sysctl_setting_arg = self.sysctl.create.call_args_list[0][0][0] + self.assertEqual(yaml.safe_load(sysctl_setting_arg), sysctl_expect) + self.mkdir.assert_called_with('/hugepages', owner='root', + group='root', perms=0o755, force=False) + self.fstab.Fstab().remove_entry.assert_called_with('old fstab entry') + self.fstab.Fstab().Entry.assert_called_with( + 'nodev', '/hugepages', 'hugetlbfs', + 'mode=1770,gid=1010,pagesize=1G', 0, 0) + self.fstab.Fstab().add_entry.assert_called_with('new fstab entry') + + def test_hugepage_support_set_shmmax(self): + self.add_group.return_value = Group() + self.fstab.Fstab().get_entry_by_attr.return_value = None + self.fstab.Fstab().Entry.return_value = 'new fstab entry' + self.check_output.return_value = 2000 + hugepage.hugepage_support('nova', mount=False, set_shmmax=True) + 
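+        # The expected kernel.shmmax below is presumably nr_hugepages *
+        # pagesize: 256 * 2 MiB (2097152 bytes) = 536870912 bytes. It is
+        # applied because the mocked check_output reports a smaller current
+        # shmmax (2000).
+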
sysctl_expect = { + 'kernel.shmmax': 536870912, + 'vm.hugetlb_shm_group': '1010', + 'vm.max_map_count': 65536, + 'vm.nr_hugepages': 256 + } + sysctl_setting_arg = self.sysctl.create.call_args_list[0][0][0] + self.assertEqual(yaml.safe_load(sysctl_setting_arg), sysctl_expect) + + def test_hugepage_support_auto_increase_max_map_count(self): + self.add_group.return_value = Group() + hugepage.hugepage_support( + 'nova', group='neutron', nr_hugepages=512, max_map_count=200, + mnt_point='/hugepages', pagesize='1G', mount=False) + sysctl_expect = { + 'vm.hugetlb_shm_group': '1010', + 'vm.max_map_count': 1024, + 'vm.nr_hugepages': 512, + } + sysctl_setting_arg = self.sysctl.create.call_args_list[0][0][0] + self.assertEqual(yaml.safe_load(sysctl_setting_arg), sysctl_expect) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/core/test_kernel.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/core/test_kernel.py new file mode 100644 index 0000000000000000000000000000000000000000..9da19988dc19fe8e86cd058070d4a38796a205f0 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/core/test_kernel.py @@ -0,0 +1,113 @@ +#!/usr/bin/env python +# -*- coding: utf-8 -*- + +import unittest +import imp + +from charmhelpers import osplatform +from mock import patch +from tests.helpers import patch_open +from charmhelpers.core import kernel + + +class TestKernel(unittest.TestCase): + + @patch('subprocess.check_call') + @patch.object(osplatform, 'get_platform') + def test_modprobe_persistent_ubuntu(self, platform, check_call): + platform.return_value = 'ubuntu' + imp.reload(kernel) + + with patch_open() as (_open, _file): + _file.read.return_value = 'anothermod\n' + with patch("charmhelpers.core.kernel.log"): + kernel.modprobe('mymod') + _open.assert_called_with('/etc/modules', 'r+') + _file.read.assert_called_with() + _file.write.assert_called_with('mymod\n') + check_call.assert_called_with(['modprobe', 'mymod']) + + @patch('os.chmod') + @patch('subprocess.check_call') + @patch.object(osplatform, 'get_platform') + def test_modprobe_persistent_centos(self, platform, check_call, os): + platform.return_value = 'centos' + imp.reload(kernel) + + with patch_open() as (_open, _file): + _file.read.return_value = 'anothermod\n' + with patch("charmhelpers.core.kernel.log"): + kernel.modprobe('mymod') + _open.assert_called_with('/etc/rc.modules', 'r+') + os.assert_called_with('/etc/rc.modules', 111) + _file.read.assert_called_with() + _file.write.assert_called_with('modprobe mymod\n') + check_call.assert_called_with(['modprobe', 'mymod']) + + @patch('subprocess.check_call') + @patch.object(osplatform, 'get_platform') + def test_modprobe_not_persistent_ubuntu(self, platform, check_call): + platform.return_value = 'ubuntu' + imp.reload(kernel) + + with patch_open() as (_open, _file): + _file.read.return_value = 'anothermod\n' + with patch("charmhelpers.core.kernel.log"): + kernel.modprobe('mymod', persist=False) + assert not _open.called + check_call.assert_called_with(['modprobe', 'mymod']) + + @patch('subprocess.check_call') + @patch.object(osplatform, 'get_platform') + def test_modprobe_not_persistent_centos(self, platform, check_call): + platform.return_value = 'centos' + imp.reload(kernel) + + with patch_open() as (_open, _file): + _file.read.return_value = 'anothermod\n' + with patch("charmhelpers.core.kernel.log"): + kernel.modprobe('mymod', persist=False) + assert not _open.called + check_call.assert_called_with(['modprobe', 
'mymod']) + + @patch.object(kernel, 'log') + @patch('subprocess.check_call') + def test_rmmod_not_forced(self, check_call, log): + kernel.rmmod('mymod') + check_call.assert_called_with(['rmmod', 'mymod']) + + @patch.object(kernel, 'log') + @patch('subprocess.check_call') + def test_rmmod_forced(self, check_call, log): + kernel.rmmod('mymod', force=True) + check_call.assert_called_with(['rmmod', '-f', 'mymod']) + + @patch.object(kernel, 'log') + @patch('subprocess.check_output') + def test_lsmod(self, check_output, log): + kernel.lsmod() + check_output.assert_called_with(['lsmod'], + universal_newlines=True) + + @patch('charmhelpers.core.kernel.lsmod') + def test_is_module_loaded(self, lsmod): + lsmod.return_value = "ip6_tables 28672 1 ip6table_filter" + self.assertTrue(kernel.is_module_loaded("ip6_tables")) + + @patch.object(osplatform, 'get_platform') + @patch('subprocess.check_call') + def test_update_initramfs_ubuntu(self, check_call, platform): + platform.return_value = 'ubuntu' + imp.reload(kernel) + + kernel.update_initramfs() + check_call.assert_called_with(["update-initramfs", "-k", "all", "-u"]) + + @patch.object(osplatform, 'get_platform') + @patch('subprocess.check_call') + def test_update_initramfs_centos(self, check_call, platform): + platform.return_value = 'centos' + imp.reload(kernel) + + kernel.update_initramfs() + check_call.assert_called_with(['dracut', '-f', 'all']) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/core/test_services.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/core/test_services.py new file mode 100644 index 0000000000000000000000000000000000000000..8f6935301182e9dbe1cddd17e270dd9cbc51bb8e --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/core/test_services.py @@ -0,0 +1,868 @@ +import os +import mock +import unittest +import uuid +from charmhelpers.core import hookenv +from charmhelpers.core import host +from charmhelpers.core import services +from functools import partial + + +class TestServiceManager(unittest.TestCase): + def setUp(self): + self.pcharm_dir = mock.patch.object(hookenv, 'charm_dir') + self.mcharm_dir = self.pcharm_dir.start() + self.mcharm_dir.return_value = 'charm_dir' + + def tearDown(self): + self.pcharm_dir.stop() + + def test_register(self): + manager = services.ServiceManager([ + {'service': 'service1', + 'foo': 'bar'}, + {'service': 'service2', + 'qux': 'baz'}, + ]) + self.assertEqual(manager.services, { + 'service1': {'service': 'service1', + 'foo': 'bar'}, + 'service2': {'service': 'service2', + 'qux': 'baz'}, + }) + + def test_register_preserves_order(self): + service_list = [dict(service='a'), dict(service='b')] + + # Test that the services list order is preserved by checking + # both forwards and backwards - only one of these will be + # dictionary order, and if both work we know order is being + # preserved. 
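+        # (This presumably relies on ServiceManager storing its services in
+        # an insertion-ordered mapping keyed by each dict's 'service' name,
+        # so registration order is what round-trips below.)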
+ manager = services.ServiceManager(service_list) + self.assertEqual(list(manager.services.keys()), ['a', 'b']) + manager = services.ServiceManager(reversed(service_list)) + self.assertEqual(list(manager.services.keys()), ['b', 'a']) + + @mock.patch.object(services.ServiceManager, 'reconfigure_services') + @mock.patch.object(services.ServiceManager, 'stop_services') + @mock.patch.object(hookenv, 'hook_name') + @mock.patch.object(hookenv, 'config') + def test_manage_stop(self, config, hook_name, stop_services, reconfigure_services): + manager = services.ServiceManager() + hook_name.return_value = 'stop' + manager.manage() + stop_services.assert_called_once_with() + assert not reconfigure_services.called + + @mock.patch.object(services.ServiceManager, 'provide_data') + @mock.patch.object(services.ServiceManager, 'reconfigure_services') + @mock.patch.object(services.ServiceManager, 'stop_services') + @mock.patch.object(hookenv, 'hook_name') + @mock.patch.object(hookenv, 'config') + def test_manage_other(self, config, hook_name, stop_services, reconfigure_services, provide_data): + manager = services.ServiceManager() + hook_name.return_value = 'config-changed' + manager.manage() + assert not stop_services.called + reconfigure_services.assert_called_once_with() + provide_data.assert_called_once_with() + + def test_manage_calls_atstart(self): + cb = mock.MagicMock() + hookenv.atstart(cb) + manager = services.ServiceManager() + manager.manage() + self.assertTrue(cb.called) + + def test_manage_calls_atexit(self): + cb = mock.MagicMock() + hookenv.atexit(cb) + manager = services.ServiceManager() + manager.manage() + self.assertTrue(cb.called) + + @mock.patch.object(hookenv, 'config') + def test_manage_config_not_saved(self, config): + config = config.return_value + config.implicit_save = False + manager = services.ServiceManager() + manager.manage() + self.assertFalse(config.save.called) + + @mock.patch.object(services.ServiceManager, 'save_ready') + @mock.patch.object(services.ServiceManager, 'fire_event') + @mock.patch.object(services.ServiceManager, 'is_ready') + def test_reconfigure_ready(self, is_ready, fire_event, save_ready): + manager = services.ServiceManager([ + {'service': 'service1'}, {'service': 'service2'}]) + is_ready.return_value = True + manager.reconfigure_services() + is_ready.assert_has_calls([ + mock.call('service1'), + mock.call('service2'), + ], any_order=True) + fire_event.assert_has_calls([ + mock.call('data_ready', 'service1'), + mock.call('start', 'service1', default=[ + services.service_restart, + services.manage_ports]), + ], any_order=False) + fire_event.assert_has_calls([ + mock.call('data_ready', 'service2'), + mock.call('start', 'service2', default=[ + services.service_restart, + services.manage_ports]), + ], any_order=False) + save_ready.assert_has_calls([ + mock.call('service1'), + mock.call('service2'), + ], any_order=True) + + @mock.patch.object(services.ServiceManager, 'save_ready') + @mock.patch.object(services.ServiceManager, 'fire_event') + @mock.patch.object(services.ServiceManager, 'is_ready') + def test_reconfigure_ready_list(self, is_ready, fire_event, save_ready): + manager = services.ServiceManager([ + {'service': 'service1'}, {'service': 'service2'}]) + is_ready.return_value = True + manager.reconfigure_services('service3', 'service4') + self.assertEqual(is_ready.call_args_list, [ + mock.call('service3'), + mock.call('service4'), + ]) + self.assertEqual(fire_event.call_args_list, [ + mock.call('data_ready', 'service3'), + mock.call('start', 
'service3', default=[ + services.service_restart, + services.open_ports]), + mock.call('data_ready', 'service4'), + mock.call('start', 'service4', default=[ + services.service_restart, + services.open_ports]), + ]) + self.assertEqual(save_ready.call_args_list, [ + mock.call('service3'), + mock.call('service4'), + ]) + + @mock.patch.object(services.ServiceManager, 'save_lost') + @mock.patch.object(services.ServiceManager, 'fire_event') + @mock.patch.object(services.ServiceManager, 'was_ready') + @mock.patch.object(services.ServiceManager, 'is_ready') + def test_reconfigure_not_ready(self, is_ready, was_ready, fire_event, save_lost): + manager = services.ServiceManager([ + {'service': 'service1'}, {'service': 'service2'}]) + is_ready.return_value = False + was_ready.return_value = False + manager.reconfigure_services() + is_ready.assert_has_calls([ + mock.call('service1'), + mock.call('service2'), + ], any_order=True) + fire_event.assert_has_calls([ + mock.call('stop', 'service1', default=[ + services.close_ports, + services.service_stop]), + mock.call('stop', 'service2', default=[ + services.close_ports, + services.service_stop]), + ], any_order=True) + save_lost.assert_has_calls([ + mock.call('service1'), + mock.call('service2'), + ], any_order=True) + + @mock.patch.object(services.ServiceManager, 'save_lost') + @mock.patch.object(services.ServiceManager, 'fire_event') + @mock.patch.object(services.ServiceManager, 'was_ready') + @mock.patch.object(services.ServiceManager, 'is_ready') + def test_reconfigure_no_longer_ready(self, is_ready, was_ready, fire_event, save_lost): + manager = services.ServiceManager([ + {'service': 'service1'}, {'service': 'service2'}]) + is_ready.return_value = False + was_ready.return_value = True + manager.reconfigure_services() + is_ready.assert_has_calls([ + mock.call('service1'), + mock.call('service2'), + ], any_order=True) + fire_event.assert_has_calls([ + mock.call('data_lost', 'service1'), + mock.call('stop', 'service1', default=[ + services.close_ports, + services.service_stop]), + ], any_order=False) + fire_event.assert_has_calls([ + mock.call('data_lost', 'service2'), + mock.call('stop', 'service2', default=[ + services.close_ports, + services.service_stop]), + ], any_order=False) + save_lost.assert_has_calls([ + mock.call('service1'), + mock.call('service2'), + ], any_order=True) + + @mock.patch.object(services.ServiceManager, 'fire_event') + def test_stop_services(self, fire_event): + manager = services.ServiceManager([ + {'service': 'service1'}, {'service': 'service2'}]) + manager.stop_services() + fire_event.assert_has_calls([ + mock.call('stop', 'service1', default=[ + services.close_ports, + services.service_stop]), + mock.call('stop', 'service2', default=[ + services.close_ports, + services.service_stop]), + ], any_order=True) + + @mock.patch.object(services.ServiceManager, 'fire_event') + def test_stop_services_list(self, fire_event): + manager = services.ServiceManager([ + {'service': 'service1'}, {'service': 'service2'}]) + manager.stop_services('service3', 'service4') + self.assertEqual(fire_event.call_args_list, [ + mock.call('stop', 'service3', default=[ + services.close_ports, + services.service_stop]), + mock.call('stop', 'service4', default=[ + services.close_ports, + services.service_stop]), + ]) + + def test_get_service(self): + service = {'service': 'test', 'test': 'test_service'} + manager = services.ServiceManager([service]) + self.assertEqual(manager.get_service('test'), service) + + def test_get_service_not_registered(self): + 
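+        # Looking up a service that was never registered should surface as
+        # a KeyError rather than returning a default.
+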
service = {'service': 'test', 'test': 'test_service'} + manager = services.ServiceManager([service]) + self.assertRaises(KeyError, manager.get_service, 'foo') + + @mock.patch.object(services.ServiceManager, 'get_service') + def test_fire_event_default(self, get_service): + get_service.return_value = {} + cb = mock.Mock() + manager = services.ServiceManager() + manager.fire_event('event', 'service', cb) + cb.assert_called_once_with('service') + + @mock.patch.object(services.ServiceManager, 'get_service') + def test_fire_event_default_list(self, get_service): + get_service.return_value = {} + cb = mock.Mock() + manager = services.ServiceManager() + manager.fire_event('event', 'service', [cb]) + cb.assert_called_once_with('service') + + @mock.patch.object(services.ServiceManager, 'get_service') + def test_fire_event_simple_callback(self, get_service): + cb = mock.Mock() + dcb = mock.Mock() + get_service.return_value = {'event': cb} + manager = services.ServiceManager() + manager.fire_event('event', 'service', dcb) + assert not dcb.called + cb.assert_called_once_with('service') + + @mock.patch.object(services.ServiceManager, 'get_service') + def test_fire_event_simple_callback_list(self, get_service): + cb = mock.Mock() + dcb = mock.Mock() + get_service.return_value = {'event': [cb]} + manager = services.ServiceManager() + manager.fire_event('event', 'service', dcb) + assert not dcb.called + cb.assert_called_once_with('service') + + @mock.patch.object(services.ManagerCallback, '__call__') + @mock.patch.object(services.ServiceManager, 'get_service') + def test_fire_event_manager_callback(self, get_service, mcall): + cb = services.ManagerCallback() + dcb = mock.Mock() + get_service.return_value = {'event': cb} + manager = services.ServiceManager() + manager.fire_event('event', 'service', dcb) + assert not dcb.called + mcall.assert_called_once_with(manager, 'service', 'event') + + @mock.patch.object(services.ManagerCallback, '__call__') + @mock.patch.object(services.ServiceManager, 'get_service') + def test_fire_event_manager_callback_list(self, get_service, mcall): + cb = services.ManagerCallback() + dcb = mock.Mock() + get_service.return_value = {'event': [cb]} + manager = services.ServiceManager() + manager.fire_event('event', 'service', dcb) + assert not dcb.called + mcall.assert_called_once_with(manager, 'service', 'event') + + @mock.patch.object(services.ServiceManager, 'get_service') + def test_is_ready(self, get_service): + get_service.side_effect = [ + {}, + {'required_data': [True]}, + {'required_data': [False]}, + {'required_data': [True, False]}, + ] + manager = services.ServiceManager() + assert manager.is_ready('foo') + assert manager.is_ready('bar') + assert not manager.is_ready('foo') + assert not manager.is_ready('foo') + get_service.assert_has_calls([mock.call('foo'), mock.call('bar')]) + + def test_load_ready_file_short_circuit(self): + manager = services.ServiceManager() + manager._ready = 'foo' + manager._load_ready_file() + self.assertEqual(manager._ready, 'foo') + + @mock.patch('os.path.exists') + @mock.patch.object(services.base, 'open', create=True) + def test_load_ready_file_new(self, mopen, exists): + manager = services.ServiceManager() + exists.return_value = False + manager._load_ready_file() + self.assertEqual(manager._ready, set()) + assert not mopen.called + + @mock.patch('json.load') + @mock.patch('os.path.exists') + @mock.patch.object(services.base, 'open', create=True) + def test_load_ready_file(self, mopen, exists, jload): + manager = services.ServiceManager() 
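+        # The ready set is persisted as JSON inside the charm directory;
+        # the 'charm_dir' prefix asserted below comes from the hookenv
+        # fixture mocked in setUp.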
+ exists.return_value = True + jload.return_value = ['bar'] + manager._load_ready_file() + self.assertEqual(manager._ready, set(['bar'])) + exists.assert_called_once_with('charm_dir/READY-SERVICES.json') + mopen.assert_called_once_with('charm_dir/READY-SERVICES.json') + + @mock.patch('json.dump') + @mock.patch.object(services.base, 'open', create=True) + def test_save_ready_file(self, mopen, jdump): + manager = services.ServiceManager() + manager._save_ready_file() + assert not mopen.called + manager._ready = set(['foo']) + manager._save_ready_file() + mopen.assert_called_once_with('charm_dir/READY-SERVICES.json', 'w') + jdump.assert_called_once_with(['foo'], mopen.return_value.__enter__()) + + @mock.patch.object(services.base.ServiceManager, '_save_ready_file') + @mock.patch.object(services.base.ServiceManager, '_load_ready_file') + def test_save_ready(self, _lrf, _srf): + manager = services.ServiceManager() + manager._ready = set(['foo']) + manager.save_ready('bar') + _lrf.assert_called_once_with() + self.assertEqual(manager._ready, set(['foo', 'bar'])) + _srf.assert_called_once_with() + + @mock.patch.object(services.base.ServiceManager, '_save_ready_file') + @mock.patch.object(services.base.ServiceManager, '_load_ready_file') + def test_save_lost(self, _lrf, _srf): + manager = services.ServiceManager() + manager._ready = set(['foo', 'bar']) + manager.save_lost('bar') + _lrf.assert_called_once_with() + self.assertEqual(manager._ready, set(['foo'])) + _srf.assert_called_once_with() + manager.save_lost('bar') + self.assertEqual(manager._ready, set(['foo'])) + + @mock.patch.object(services.base.ServiceManager, '_save_ready_file') + @mock.patch.object(services.base.ServiceManager, '_load_ready_file') + def test_was_ready(self, _lrf, _srf): + manager = services.ServiceManager() + manager._ready = set() + manager.save_ready('foo') + manager.save_ready('bar') + assert manager.was_ready('foo') + assert manager.was_ready('bar') + manager.save_lost('bar') + assert manager.was_ready('foo') + assert not manager.was_ready('bar') + + @mock.patch.object(services.base.hookenv, 'relation_set') + @mock.patch.object(services.base.hookenv, 'related_units') + @mock.patch.object(services.base.hookenv, 'relation_ids') + def test_provide_data_no_match(self, relation_ids, related_units, relation_set): + provider = mock.Mock() + provider.name = 'provided' + manager = services.ServiceManager([ + {'service': 'service', 'provided_data': [provider]} + ]) + relation_ids.return_value = [] + manager.provide_data() + assert not provider.provide_data.called + relation_ids.assert_called_once_with('provided') + + @mock.patch.object(services.base.hookenv, 'relation_set') + @mock.patch.object(services.base.hookenv, 'related_units') + @mock.patch.object(services.base.hookenv, 'relation_ids') + def test_provide_data_not_ready(self, relation_ids, related_units, relation_set): + provider = mock.Mock() + provider.name = 'provided' + pd = mock.Mock() + data = pd.return_value = {'data': True} + provider.provide_data = lambda remote_service, service_ready: pd(remote_service, service_ready) + manager = services.ServiceManager([ + {'service': 'service', 'provided_data': [provider]} + ]) + manager.is_ready = mock.Mock(return_value=False) + relation_ids.return_value = ['relid'] + related_units.return_value = ['service/0'] + manager.provide_data() + relation_set.assert_called_once_with('relid', data) + pd.assert_called_once_with('service', False) + + @mock.patch.object(services.base.hookenv, 'relation_set') + 
@mock.patch.object(services.base.hookenv, 'related_units') + @mock.patch.object(services.base.hookenv, 'relation_ids') + def test_provide_data_ready(self, relation_ids, related_units, relation_set): + provider = mock.Mock() + provider.name = 'provided' + pd = mock.Mock() + data = pd.return_value = {'data': True} + provider.provide_data = lambda remote_service, service_ready: pd(remote_service, service_ready) + manager = services.ServiceManager([ + {'service': 'service', 'provided_data': [provider]} + ]) + manager.is_ready = mock.Mock(return_value=True) + relation_ids.return_value = ['relid'] + related_units.return_value = ['service/0'] + manager.provide_data() + relation_set.assert_called_once_with('relid', data) + pd.assert_called_once_with('service', True) + + +class TestRelationContext(unittest.TestCase): + def setUp(self): + self.phookenv = mock.patch.object(services.helpers, 'hookenv') + self.mhookenv = self.phookenv.start() + self.mhookenv.relation_ids.return_value = [] + self.context = services.RelationContext() + self.context.name = 'http' + self.context.interface = 'http' + self.context.required_keys = ['foo', 'bar'] + self.mhookenv.reset_mock() + + def tearDown(self): + self.phookenv.stop() + + def test_no_relations(self): + self.context.get_data() + self.assertFalse(self.context.is_ready()) + self.assertEqual(self.context, {}) + self.mhookenv.relation_ids.assert_called_once_with('http') + + def test_no_units(self): + self.mhookenv.relation_ids.return_value = ['nginx'] + self.mhookenv.related_units.return_value = [] + self.context.get_data() + self.assertFalse(self.context.is_ready()) + self.assertEqual(self.context, {'http': []}) + + def test_incomplete(self): + self.mhookenv.relation_ids.return_value = ['nginx', 'apache'] + self.mhookenv.related_units.side_effect = lambda i: [i + '/0'] + self.mhookenv.relation_get.side_effect = [{}, {'foo': '1'}] + self.context.get_data() + self.assertFalse(bool(self.context)) + self.assertEqual(self.mhookenv.relation_get.call_args_list, [ + mock.call(rid='apache', unit='apache/0'), + mock.call(rid='nginx', unit='nginx/0'), + ]) + + def test_complete(self): + self.mhookenv.relation_ids.return_value = ['nginx', 'apache', 'tomcat'] + self.mhookenv.related_units.side_effect = lambda i: [i + '/0'] + self.mhookenv.relation_get.side_effect = [{'foo': '1'}, {'foo': '2', 'bar': '3'}, {}] + self.context.get_data() + self.assertTrue(self.context.is_ready()) + self.assertEqual(self.context, {'http': [ + { + 'foo': '2', + 'bar': '3', + }, + ]}) + self.mhookenv.relation_ids.assert_called_with('http') + self.assertEqual(self.mhookenv.relation_get.call_args_list, [ + mock.call(rid='apache', unit='apache/0'), + mock.call(rid='nginx', unit='nginx/0'), + mock.call(rid='tomcat', unit='tomcat/0'), + ]) + + def test_provide(self): + self.assertEqual(self.context.provide_data(), {}) + + +class TestHttpRelation(unittest.TestCase): + def setUp(self): + self.phookenv = mock.patch.object(services.helpers, 'hookenv') + self.mhookenv = self.phookenv.start() + + self.context = services.helpers.HttpRelation() + + def tearDown(self): + self.phookenv.stop() + + def test_provide_data(self): + self.mhookenv.unit_get.return_value = "127.0.0.1" + self.assertEqual(self.context.provide_data(), { + 'host': "127.0.0.1", + 'port': 80, + }) + + def test_complete(self): + self.mhookenv.relation_ids.return_value = ['website'] + self.mhookenv.related_units.side_effect = lambda i: [i + '/0'] + self.mhookenv.relation_get.side_effect = [{'host': '127.0.0.2', + 'port': 8080}] + 
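+        # HttpRelation's required keys are presumably 'host' and 'port', so
+        # one remote unit supplying both should be enough for is_ready().
+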
self.context.get_data() + self.assertTrue(self.context.is_ready()) + self.assertEqual(self.context, {'website': [ + { + 'host': '127.0.0.2', + 'port': 8080, + }, + ]}) + + self.mhookenv.relation_ids.assert_called_with('website') + self.assertEqual(self.mhookenv.relation_get.call_args_list, [ + mock.call(rid='website', unit='website/0'), + ]) + + +class TestMysqlRelation(unittest.TestCase): + + def setUp(self): + self.phookenv = mock.patch.object(services.helpers, 'hookenv') + self.mhookenv = self.phookenv.start() + + self.context = services.helpers.MysqlRelation() + + def tearDown(self): + self.phookenv.stop() + + def test_complete(self): + self.mhookenv.relation_ids.return_value = ['db'] + self.mhookenv.related_units.side_effect = lambda i: [i + '/0'] + self.mhookenv.relation_get.side_effect = [{'host': '127.0.0.2', + 'user': 'mysql', + 'password': 'mysql', + 'database': 'mysql', + }] + self.context.get_data() + self.assertTrue(self.context.is_ready()) + self.assertEqual(self.context, {'db': [ + { + 'host': '127.0.0.2', + 'user': 'mysql', + 'password': 'mysql', + 'database': 'mysql', + }, + ]}) + + self.mhookenv.relation_ids.assert_called_with('db') + self.assertEqual(self.mhookenv.relation_get.call_args_list, [ + mock.call(rid='db', unit='db/0'), + ]) + + +class TestRequiredConfig(unittest.TestCase): + def setUp(self): + self.options = { + 'options': { + 'option1': { + 'type': 'string', + 'description': 'First option', + }, + 'option2': { + 'type': 'int', + 'default': 0, + 'description': 'Second option', + }, + }, + } + self.config = { + 'option1': None, + 'option2': 0, + } + self._pyaml = mock.patch.object(services.helpers, 'yaml') + self.myaml = self._pyaml.start() + self.myaml.load.side_effect = lambda fp: self.options + self._pconfig = mock.patch.object(hookenv, 'config') + self.mconfig = self._pconfig.start() + self.mconfig.side_effect = lambda: self.config + self._pcharm_dir = mock.patch.object(hookenv, 'charm_dir') + self.mcharm_dir = self._pcharm_dir.start() + self.mcharm_dir.return_value = 'charm_dir' + + def tearDown(self): + self._pyaml.stop() + self._pconfig.stop() + self._pcharm_dir.stop() + + def test_none_changed(self): + with mock.patch.object(services.helpers, 'open', mock.mock_open(), create=True): + context = services.helpers.RequiredConfig('option1', 'option2') + self.assertFalse(bool(context)) + self.assertEqual(context['config']['option1'], None) + self.assertEqual(context['config']['option2'], 0) + + def test_partial(self): + self.config['option1'] = 'value' + with mock.patch.object(services.helpers, 'open', mock.mock_open(), create=True): + context = services.helpers.RequiredConfig('option1', 'option2') + self.assertFalse(bool(context)) + self.assertEqual(context['config']['option1'], 'value') + self.assertEqual(context['config']['option2'], 0) + + def test_ready(self): + self.config['option1'] = 'value' + self.config['option2'] = 1 + with mock.patch.object(services.helpers, 'open', mock.mock_open(), create=True): + context = services.helpers.RequiredConfig('option1', 'option2') + self.assertTrue(bool(context)) + self.assertEqual(context['config']['option1'], 'value') + self.assertEqual(context['config']['option2'], 1) + + def test_none_empty(self): + self.config['option1'] = '' + self.config['option2'] = 1 + with mock.patch.object(services.helpers, 'open', mock.mock_open(), create=True): + context = services.helpers.RequiredConfig('option1', 'option2') + self.assertFalse(bool(context)) + self.assertEqual(context['config']['option1'], '') + 
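+        # An empty string counts as unset, so the context should stay
+        # incomplete even though option2 now holds a non-default value.
+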
self.assertEqual(context['config']['option2'], 1) + + +class TestStoredContext(unittest.TestCase): + @mock.patch.object(services.helpers.StoredContext, 'read_context') + @mock.patch.object(services.helpers.StoredContext, 'store_context') + @mock.patch('os.path.exists') + def test_new(self, exists, store_context, read_context): + exists.return_value = False + context = services.helpers.StoredContext('foo.yaml', {'key': 'val'}) + assert not read_context.called + store_context.assert_called_once_with('foo.yaml', {'key': 'val'}) + self.assertEqual(context, {'key': 'val'}) + + @mock.patch.object(services.helpers.StoredContext, 'read_context') + @mock.patch.object(services.helpers.StoredContext, 'store_context') + @mock.patch('os.path.exists') + def test_existing(self, exists, store_context, read_context): + exists.return_value = True + read_context.return_value = {'key': 'other'} + context = services.helpers.StoredContext('foo.yaml', {'key': 'val'}) + read_context.assert_called_once_with('foo.yaml') + assert not store_context.called + self.assertEqual(context, {'key': 'other'}) + + @mock.patch.object(hookenv, 'charm_dir', lambda: 'charm_dir') + @mock.patch.object(services.helpers.StoredContext, 'read_context') + @mock.patch.object(services.helpers, 'yaml') + @mock.patch('os.fchmod') + @mock.patch('os.path.exists') + def test_store_context(self, exists, fchmod, yaml, read_context): + exists.return_value = False + mopen = mock.mock_open() + with mock.patch.object(services.helpers, 'open', mopen, create=True): + services.helpers.StoredContext('foo.yaml', {'key': 'val'}) + mopen.assert_called_once_with('charm_dir/foo.yaml', 'w') + fchmod.assert_called_once_with(mopen.return_value.fileno(), 0o600) + yaml.dump.assert_called_once_with({'key': 'val'}, mopen.return_value) + + @mock.patch.object(hookenv, 'charm_dir', lambda: 'charm_dir') + @mock.patch.object(services.helpers.StoredContext, 'read_context') + @mock.patch.object(services.helpers, 'yaml') + @mock.patch('os.fchmod') + @mock.patch('os.path.exists') + def test_store_context_abs(self, exists, fchmod, yaml, read_context): + exists.return_value = False + mopen = mock.mock_open() + with mock.patch.object(services.helpers, 'open', mopen, create=True): + services.helpers.StoredContext('/foo.yaml', {'key': 'val'}) + mopen.assert_called_once_with('/foo.yaml', 'w') + + @mock.patch.object(hookenv, 'charm_dir', lambda: 'charm_dir') + @mock.patch.object(services.helpers, 'yaml') + @mock.patch('os.path.exists') + def test_read_context(self, exists, yaml): + exists.return_value = True + yaml.load.return_value = {'key': 'other'} + mopen = mock.mock_open() + with mock.patch.object(services.helpers, 'open', mopen, create=True): + context = services.helpers.StoredContext('foo.yaml', {'key': 'val'}) + mopen.assert_called_once_with('charm_dir/foo.yaml', 'r') + yaml.load.assert_called_once_with(mopen.return_value) + self.assertEqual(context, {'key': 'other'}) + + @mock.patch.object(hookenv, 'charm_dir', lambda: 'charm_dir') + @mock.patch.object(services.helpers, 'yaml') + @mock.patch('os.path.exists') + def test_read_context_abs(self, exists, yaml): + exists.return_value = True + yaml.load.return_value = {'key': 'other'} + mopen = mock.mock_open() + with mock.patch.object(services.helpers, 'open', mopen, create=True): + context = services.helpers.StoredContext('/foo.yaml', {'key': 'val'}) + mopen.assert_called_once_with('/foo.yaml', 'r') + yaml.load.assert_called_once_with(mopen.return_value) + self.assertEqual(context, {'key': 'other'}) + + 
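+    # Rough usage sketch (illustrative only, mirroring the tests above):
+    #
+    #     ctx = services.helpers.StoredContext('foo.yaml', {'key': 'val'})
+    #     # first run: writes charm_dir/foo.yaml with mode 0o600 and
+    #     # returns {'key': 'val'}; later runs: returns the stored mapping
+    #     # and ignores the freshly supplied defaults.
+
+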
@mock.patch.object(hookenv, 'charm_dir', lambda: 'charm_dir') + @mock.patch.object(services.helpers, 'yaml') + @mock.patch('os.path.exists') + def test_read_context_empty(self, exists, yaml): + exists.return_value = True + yaml.load.return_value = None + mopen = mock.mock_open() + with mock.patch.object(services.helpers, 'open', mopen, create=True): + self.assertRaises(OSError, services.helpers.StoredContext, '/foo.yaml', {}) + + +class TestTemplateCallback(unittest.TestCase): + @mock.patch.object(services.helpers, 'templating') + def test_template_defaults(self, mtemplating): + manager = mock.Mock(**{'get_service.return_value': { + 'required_data': [{'foo': 'bar'}]}}) + self.assertRaises(TypeError, services.template, source='foo.yml') + callback = services.template(source='foo.yml', target='bar.yml') + assert isinstance(callback, services.ManagerCallback) + assert not mtemplating.render.called + callback(manager, 'test', 'event') + mtemplating.render.assert_called_once_with( + 'foo.yml', 'bar.yml', {'foo': 'bar', 'ctx': {'foo': 'bar'}}, + 'root', 'root', 0o444, template_loader=None) + + @mock.patch.object(services.helpers, 'templating') + def test_template_explicit(self, mtemplating): + manager = mock.Mock(**{'get_service.return_value': { + 'required_data': [{'foo': 'bar'}]}}) + callback = services.template( + source='foo.yml', target='bar.yml', + owner='user', group='group', perms=0o555 + ) + assert isinstance(callback, services.ManagerCallback) + assert not mtemplating.render.called + callback(manager, 'test', 'event') + mtemplating.render.assert_called_once_with( + 'foo.yml', 'bar.yml', {'foo': 'bar', 'ctx': {'foo': 'bar'}}, + 'user', 'group', 0o555, template_loader=None) + + @mock.patch.object(services.helpers, 'templating') + def test_template_loader(self, mtemplating): + manager = mock.Mock(**{'get_service.return_value': { + 'required_data': [{'foo': 'bar'}]}}) + callback = services.template( + source='foo.yml', target='bar.yml', + owner='user', group='group', perms=0o555, + template_loader='myloader' + ) + assert isinstance(callback, services.ManagerCallback) + assert not mtemplating.render.called + callback(manager, 'test', 'event') + mtemplating.render.assert_called_once_with( + 'foo.yml', 'bar.yml', {'foo': 'bar', 'ctx': {'foo': 'bar'}}, + 'user', 'group', 0o555, template_loader='myloader') + + @mock.patch.object(os.path, 'isfile') + @mock.patch.object(host, 'file_hash') + @mock.patch.object(host, 'service_restart') + @mock.patch.object(services.helpers, 'templating') + def test_template_onchange_restart(self, mtemplating, mrestart, mfile_hash, misfile): + def random_string(arg): + return uuid.uuid4() + mfile_hash.side_effect = random_string + misfile.return_value = True + manager = mock.Mock(**{'get_service.return_value': { + 'required_data': [{'foo': 'bar'}]}}) + callback = services.template( + source='foo.yml', target='bar.yml', + owner='user', group='group', perms=0o555, + on_change_action=(partial(mrestart, "mysuperservice")), + ) + assert isinstance(callback, services.ManagerCallback) + assert not mtemplating.render.called + callback(manager, 'test', 'event') + mtemplating.render.assert_called_once_with( + 'foo.yml', 'bar.yml', {'foo': 'bar', 'ctx': {'foo': 'bar'}}, + 'user', 'group', 0o555, template_loader=None) + mrestart.assert_called_with('mysuperservice') + + @mock.patch.object(hookenv, 'log') + @mock.patch.object(os.path, 'isfile') + @mock.patch.object(host, 'file_hash') + @mock.patch.object(host, 'service_restart') + @mock.patch.object(services.helpers, 
'templating')
+    def test_template_onchange_restart_nochange(self, mtemplating, mrestart,
+                                                mfile_hash, misfile, mlog):
+        mfile_hash.return_value = "myhash"
+        misfile.return_value = True
+        manager = mock.Mock(**{'get_service.return_value': {
+            'required_data': [{'foo': 'bar'}]}})
+        callback = services.template(
+            source='foo.yml', target='bar.yml',
+            owner='user', group='group', perms=0o555,
+            on_change_action=(partial(mrestart, "mysuperservice")),
+        )
+        assert isinstance(callback, services.ManagerCallback)
+        assert not mtemplating.render.called
+        callback(manager, 'test', 'event')
+        mtemplating.render.assert_called_once_with(
+            'foo.yml', 'bar.yml', {'foo': 'bar', 'ctx': {'foo': 'bar'}},
+            'user', 'group', 0o555, template_loader=None)
+        self.assertEqual(mrestart.call_args_list, [])
+
+
+class TestPortsCallback(unittest.TestCase):
+    def setUp(self):
+        self.phookenv = mock.patch.object(services.base, 'hookenv')
+        self.mhookenv = self.phookenv.start()
+        self.mhookenv.relation_ids.return_value = []
+        self.mhookenv.charm_dir.return_value = 'charm_dir'
+        self.popen = mock.patch.object(services.base, 'open', create=True)
+        self.mopen = self.popen.start()
+
+    def tearDown(self):
+        self.phookenv.stop()
+        self.popen.stop()
+
+    def test_no_ports(self):
+        manager = mock.Mock(**{'get_service.return_value': {}})
+        services.PortManagerCallback()(manager, 'service', 'event')
+        assert not self.mhookenv.open_port.called
+        assert not self.mhookenv.close_port.called
+
+    def test_open_ports(self):
+        manager = mock.Mock(**{'get_service.return_value': {'ports': [1, 2]}})
+        services.open_ports(manager, 'service', 'start')
+        self.mhookenv.open_port.assert_has_calls([mock.call(1), mock.call(2)])
+        assert not self.mhookenv.close_port.called
+
+    def test_close_ports(self):
+        manager = mock.Mock(**{'get_service.return_value': {'ports': [1, 2]}})
+        services.close_ports(manager, 'service', 'stop')
+        assert not self.mhookenv.open_port.called
+        self.mhookenv.close_port.assert_has_calls([mock.call(1), mock.call(2)])
+
+    def test_close_old_ports(self):
+        # NOTE: Mock.has_calls() is not an assertion (it merely records a
+        # call on the mock), so the expected-calls list below is advisory;
+        # a strict assert_has_calls would also need os.path.exists and the
+        # port-file read stubbed out.
+        self.mopen.return_value.read.return_value = '10,20'
+        manager = mock.Mock(**{'get_service.return_value': {'ports': [1, 2]}})
+        services.close_ports(manager, 'service', 'stop')
+        assert not self.mhookenv.open_port.called
+        self.mhookenv.close_port.has_calls([
+            mock.call(10),
+            mock.call(20),
+            mock.call(1),
+            mock.call(2)])
+
+
+if __name__ == '__main__':
+    unittest.main()
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/core/test_strutils.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/core/test_strutils.py
new file mode 100644
index 0000000000000000000000000000000000000000..9110e563470fd35979edd10d7303f70d9495ac88
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/core/test_strutils.py
@@ -0,0 +1,108 @@
+import unittest
+
+import charmhelpers.core.strutils as strutils
+
+
+class TestStrUtils(unittest.TestCase):
+    def setUp(self):
+        super(TestStrUtils, self).setUp()
+
+    def tearDown(self):
+        super(TestStrUtils, self).tearDown()
+
+    def test_bool_from_string(self):
+        self.assertTrue(strutils.bool_from_string('true'))
+        self.assertTrue(strutils.bool_from_string('True'))
+        self.assertTrue(strutils.bool_from_string('yes'))
+        self.assertTrue(strutils.bool_from_string('Yes'))
+        self.assertTrue(strutils.bool_from_string('y'))
+        self.assertTrue(strutils.bool_from_string('Y'))
+        self.assertTrue(strutils.bool_from_string('on'))
+
+        # unicode should also work
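+        # (Under Python 3, u'true' is the same str type as 'true'; the
+        # check below mainly matters for the Python 2 unicode type.)
+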
self.assertTrue(strutils.bool_from_string(u'true')) + + self.assertFalse(strutils.bool_from_string('False')) + self.assertFalse(strutils.bool_from_string('false')) + self.assertFalse(strutils.bool_from_string('no')) + self.assertFalse(strutils.bool_from_string('No')) + self.assertFalse(strutils.bool_from_string('n')) + self.assertFalse(strutils.bool_from_string('N')) + self.assertFalse(strutils.bool_from_string('off')) + + self.assertRaises(ValueError, strutils.bool_from_string, None) + self.assertRaises(ValueError, strutils.bool_from_string, 'foo') + + def test_bytes_from_string(self): + self.assertEqual(strutils.bytes_from_string('10'), 10) + self.assertEqual(strutils.bytes_from_string('3K'), 3072) + self.assertEqual(strutils.bytes_from_string('3KB'), 3072) + self.assertEqual(strutils.bytes_from_string('3M'), 3145728) + self.assertEqual(strutils.bytes_from_string('3MB'), 3145728) + self.assertEqual(strutils.bytes_from_string('3G'), 3221225472) + self.assertEqual(strutils.bytes_from_string('3GB'), 3221225472) + self.assertEqual(strutils.bytes_from_string('3T'), 3298534883328) + self.assertEqual(strutils.bytes_from_string('3TB'), 3298534883328) + self.assertEqual(strutils.bytes_from_string('3P'), 3377699720527872) + self.assertEqual(strutils.bytes_from_string('3PB'), 3377699720527872) + + self.assertRaises(ValueError, strutils.bytes_from_string, None) + self.assertRaises(ValueError, strutils.bytes_from_string, 'foo') + + def test_basic_string_comparator_class_fails_instantiation(self): + try: + strutils.BasicStringComparator('hello') + raise Exception("instantiating BasicStringComparator should fail") + except Exception as e: + assert (str(e) == "Must define the _list in the class definition!") + + def test_basic_string_comparator_class(self): + + class MyComparator(strutils.BasicStringComparator): + + _list = ('zomg', 'bartlet', 'over', 'and') + + x = MyComparator('zomg') + self.assertEquals(x.index, 0) + y = MyComparator('over') + self.assertEquals(y.index, 2) + self.assertTrue(x == 'zomg') + self.assertTrue(x != 'bartlet') + self.assertTrue(x == x) + self.assertTrue(x != y) + self.assertTrue(x < y) + self.assertTrue(y > x) + self.assertTrue(x < 'bartlet') + self.assertTrue(y > 'bartlet') + self.assertTrue(x >= 'zomg') + self.assertTrue(x <= 'zomg') + self.assertTrue(x >= x) + self.assertTrue(x <= x) + self.assertTrue(y >= 'zomg') + self.assertTrue(y <= 'over') + self.assertTrue(y >= x) + self.assertTrue(x <= y) + # ensure that something not in the list dies + try: + MyComparator('nope') + raise Exception("MyComparator('nope') should have failed") + except Exception as e: + self.assertTrue(isinstance(e, KeyError)) + + def test_basic_string_comparator_fails_different_comparators(self): + + class MyComparator1(strutils.BasicStringComparator): + + _list = ('the truth is out there'.split(' ')) + + class MyComparator2(strutils.BasicStringComparator): + + _list = ('no one in space can hear you scream'.split(' ')) + + x = MyComparator1('is') + y = MyComparator2('you') + try: + x > y + raise Exception("Comparing different comparators should fail") + except Exception as e: + self.assertTrue(isinstance(e, AssertionError)) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/core/test_sysctl.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/core/test_sysctl.py new file mode 100644 index 0000000000000000000000000000000000000000..58aec64dd16f380ad34b287c5bfb901d35f02f09 --- /dev/null +++ 
b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/core/test_sysctl.py @@ -0,0 +1,134 @@ +#!/usr/bin/env python +# -*- coding: utf-8 -*- + +from charmhelpers.core.sysctl import create +import io +from mock import patch, MagicMock +from subprocess import CalledProcessError +import unittest +import tempfile + +import six +if not six.PY3: + builtin_open = '__builtin__.open' +else: + builtin_open = 'builtins.open' + +__author__ = 'Jorge Niedbalski R. ' + + +TO_PATCH = [ + 'log', + 'check_call', + 'is_container', +] + + +class SysctlTests(unittest.TestCase): + def setUp(self): + self.tempfile = tempfile.NamedTemporaryFile(delete=False) + for m in TO_PATCH: + setattr(self, m, self._patch(m)) + + def _patch(self, method): + _m = patch('charmhelpers.core.sysctl.' + method) + mock = _m.start() + self.addCleanup(_m.stop) + return mock + + @patch(builtin_open) + def test_create(self, mock_open): + """Test create sysctl method""" + _file = MagicMock(spec=io.FileIO) + mock_open.return_value = _file + + create('{"kernel.max_pid": 1337}', "/etc/sysctl.d/test-sysctl.conf") + + _file.__enter__().write.assert_called_with("kernel.max_pid=1337\n") + + self.log.assert_called_with( + "Updating sysctl_file: /etc/sysctl.d/test-sysctl.conf" + " values: {'kernel.max_pid': 1337}", + level='DEBUG') + + self.check_call.assert_called_with([ + "sysctl", "-p", + "/etc/sysctl.d/test-sysctl.conf"]) + + @patch(builtin_open) + def test_create_with_ignore(self, mock_open): + """Test create sysctl method""" + _file = MagicMock(spec=io.FileIO) + mock_open.return_value = _file + + create('{"kernel.max_pid": 1337}', + "/etc/sysctl.d/test-sysctl.conf", + ignore=True) + + _file.__enter__().write.assert_called_with("kernel.max_pid=1337\n") + + self.log.assert_called_with( + "Updating sysctl_file: /etc/sysctl.d/test-sysctl.conf" + " values: {'kernel.max_pid': 1337}", + level='DEBUG') + + self.check_call.assert_called_with([ + "sysctl", "-p", + "/etc/sysctl.d/test-sysctl.conf", "-e"]) + + @patch(builtin_open) + def test_create_with_dict(self, mock_open): + """Test create sysctl method""" + _file = MagicMock(spec=io.FileIO) + mock_open.return_value = _file + + create({"kernel.max_pid": 1337}, "/etc/sysctl.d/test-sysctl.conf") + + _file.__enter__().write.assert_called_with("kernel.max_pid=1337\n") + + self.log.assert_called_with( + "Updating sysctl_file: /etc/sysctl.d/test-sysctl.conf" + " values: {'kernel.max_pid': 1337}", + level='DEBUG') + + self.check_call.assert_called_with([ + "sysctl", "-p", + "/etc/sysctl.d/test-sysctl.conf"]) + + @patch(builtin_open) + def test_create_invalid_argument(self, mock_open): + """Test create sysctl with an invalid argument""" + _file = MagicMock(spec=io.FileIO) + mock_open.return_value = _file + + create('{"kernel.max_pid": 1337 xxxx', "/etc/sysctl.d/test-sysctl.conf") + + self.log.assert_called_with( + 'Error parsing YAML sysctl_dict: {"kernel.max_pid": 1337 xxxx', + level='ERROR') + + @patch(builtin_open) + def test_create_raises(self, mock_open): + """CalledProcessErrors are propagated for non-container machines.""" + _file = MagicMock(spec=io.FileIO) + mock_open.return_value = _file + + self.is_container.return_value = False + self.check_call.side_effect = CalledProcessError(1, 'sysctl') + + with self.assertRaises(CalledProcessError): + create('{"kernel.max_pid": 1337}', "/etc/sysctl.d/test-sysctl.conf") + + @patch(builtin_open) + def test_create_raises_container(self, mock_open): + """CalledProcessErrors are logged for containers.""" + _file = MagicMock(spec=io.FileIO) + 
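+        # Containers commonly expose read-only sysctl keys, so create() is
+        # expected to log the failure at WARNING level here instead of
+        # re-raising, unlike the bare-metal case in test_create_raises.
+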
mock_open.return_value = _file + + self.is_container.return_value = True + self.check_call.side_effect = CalledProcessError(1, 'sysctl', 'foo') + + create('{"kernel.max_pid": 1337}', "/etc/sysctl.d/test-sysctl.conf") + self.log.assert_called_with( + 'Error setting some sysctl keys in this container: foo', + level='WARNING') diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/core/test_templating.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/core/test_templating.py new file mode 100644 index 0000000000000000000000000000000000000000..df4767734099b4d3a031b4b9c84ef1c49fef7c77 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/core/test_templating.py @@ -0,0 +1,166 @@ +import pkg_resources +import shutil +import tempfile +import unittest +import jinja2 +import os.path +import pwd +import grp + +import mock +from charmhelpers.core import templating + + +TEMPLATES_DIR = pkg_resources.resource_filename(__name__, 'templates') + + +class TestTemplating(unittest.TestCase): + def setUp(self): + self.charm_dir = pkg_resources.resource_filename(__name__, '') + self._charm_dir_patch = mock.patch.object(templating.hookenv, + 'charm_dir') + self._charm_dir_mock = self._charm_dir_patch.start() + self._charm_dir_mock.side_effect = lambda: self.charm_dir + + def tearDown(self): + self._charm_dir_patch.stop() + + @mock.patch.object(templating.host.os, 'fchown') + @mock.patch.object(templating.host, 'mkdir') + @mock.patch.object(templating.host, 'log') + def test_render(self, log, mkdir, fchown): + with tempfile.NamedTemporaryFile() as fn1, \ + tempfile.NamedTemporaryFile() as fn2: + context = { + 'nats': { + 'port': '1234', + 'host': 'example.com', + }, + 'router': { + 'domain': 'api.foo.com' + }, + 'nginx_port': 80, + } + templating.render('fake_cc.yml', fn1.name, + context, templates_dir=TEMPLATES_DIR) + contents = open(fn1.name).read() + self.assertRegexpMatches(contents, 'port: 1234') + self.assertRegexpMatches(contents, 'host: example.com') + self.assertRegexpMatches(contents, 'domain: api.foo.com') + + templating.render('test.conf', fn2.name, context, + templates_dir=TEMPLATES_DIR) + contents = open(fn2.name).read() + self.assertRegexpMatches(contents, 'listen 80') + self.assertEqual(fchown.call_count, 2) + # Not called, because the target directory exists. Calling + # it would make the target directory world readable and + # expose your secrets (!). + self.assertEqual(mkdir.call_count, 0) + + @mock.patch.object(templating.host.os, 'fchown') + @mock.patch.object(templating.host, 'mkdir') + @mock.patch.object(templating.host, 'log') + def test_render_from_string(self, log, mkdir, fchown): + with tempfile.NamedTemporaryFile() as fn: + context = { + 'foo': 'bar' + } + + config_template = '{{ foo }}' + templating.render('somefile.txt', fn.name, + context, templates_dir=TEMPLATES_DIR, + config_template=config_template) + contents = open(fn.name).read() + self.assertRegexpMatches(contents, 'bar') + + self.assertEqual(fchown.call_count, 1) + # Not called, because the target directory exists. Calling + # it would make the target directory world readable and + # expose your secrets (!). 
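+        # Contrast with test_render_no_dir below, where the missing target
+        # directory forces mkdir to be called for each rendered file.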
+ self.assertEqual(mkdir.call_count, 0) + + @mock.patch.object(templating.host.os, 'fchown') + @mock.patch.object(templating.host, 'mkdir') + @mock.patch.object(templating.host, 'log') + def test_render_loader(self, log, mkdir, fchown): + with tempfile.NamedTemporaryFile() as fn1: + context = { + 'nats': { + 'port': '1234', + 'host': 'example.com', + }, + 'router': { + 'domain': 'api.foo.com' + }, + 'nginx_port': 80, + } + template_loader = jinja2.ChoiceLoader([jinja2.FileSystemLoader(TEMPLATES_DIR)]) + templating.render('fake_cc.yml', fn1.name, + context, template_loader=template_loader) + contents = open(fn1.name).read() + self.assertRegexpMatches(contents, 'port: 1234') + self.assertRegexpMatches(contents, 'host: example.com') + self.assertRegexpMatches(contents, 'domain: api.foo.com') + + @mock.patch.object(templating.os.path, 'exists') + @mock.patch.object(templating.host.os, 'fchown') + @mock.patch.object(templating.host, 'mkdir') + @mock.patch.object(templating.host, 'log') + def test_render_no_dir(self, log, mkdir, fchown, exists): + exists.return_value = False + with tempfile.NamedTemporaryFile() as fn1, \ + tempfile.NamedTemporaryFile() as fn2: + context = { + 'nats': { + 'port': '1234', + 'host': 'example.com', + }, + 'router': { + 'domain': 'api.foo.com' + }, + 'nginx_port': 80, + } + templating.render('fake_cc.yml', fn1.name, + context, templates_dir=TEMPLATES_DIR) + contents = open(fn1.name).read() + self.assertRegexpMatches(contents, 'port: 1234') + self.assertRegexpMatches(contents, 'host: example.com') + self.assertRegexpMatches(contents, 'domain: api.foo.com') + + templating.render('test.conf', fn2.name, context, + templates_dir=TEMPLATES_DIR) + contents = open(fn2.name).read() + self.assertRegexpMatches(contents, 'listen 80') + self.assertEqual(fchown.call_count, 2) + # Target directory was created, world readable (!). + self.assertEqual(mkdir.call_count, 2) + + @mock.patch.object(templating.host.os, 'fchown') + @mock.patch.object(templating.host, 'log') + def test_render_2(self, log, fchown): + tmpdir = tempfile.mkdtemp() + fn1 = os.path.join(tmpdir, 'test.conf') + try: + context = {'nginx_port': 80} + templating.render('test.conf', fn1, context, + owner=pwd.getpwuid(os.getuid()).pw_name, + group=grp.getgrgid(os.getgid()).gr_name, + templates_dir=TEMPLATES_DIR) + with open(fn1) as f: + contents = f.read() + + self.assertRegexpMatches(contents, 'something') + finally: + shutil.rmtree(tmpdir, ignore_errors=True) + + @mock.patch.object(templating, 'hookenv') + @mock.patch('jinja2.Environment') + def test_load_error(self, Env, hookenv): + Env().get_template.side_effect = jinja2.exceptions.TemplateNotFound( + 'fake_cc.yml') + self.assertRaises( + jinja2.exceptions.TemplateNotFound, templating.render, + 'fake.src', 'fake.tgt', {}, templates_dir='tmpl') + hookenv.log.assert_called_once_with( + 'Could not load template fake.src from tmpl.', level=hookenv.ERROR) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/core/test_unitdata.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/core/test_unitdata.py new file mode 100644 index 0000000000000000000000000000000000000000..c6a85ecf5b9d736e1e493d9e38a46af607d60131 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/core/test_unitdata.py @@ -0,0 +1,228 @@ +# -*- coding: utf-8 -*- +# +# Copyright 2015 Canonical Ltd. 
+# +# Authors: +# Kapil Thangavelu +# +try: + from StringIO import StringIO +except Exception: + from io import StringIO + +import os +import shutil +import tempfile +import unittest + +from mock import patch + +from charmhelpers.core.unitdata import Storage, HookData, kv + + +class HookDataTest(unittest.TestCase): + + def setUp(self): + self.charm_dir = tempfile.mkdtemp() + self.addCleanup(lambda: shutil.rmtree(self.charm_dir)) + self.change_environment(CHARM_DIR=self.charm_dir) + + def change_environment(self, **kw): + original_env = dict(os.environ) + + @self.addCleanup + def cleanup_env(): + os.environ.clear() + os.environ.update(original_env) + + os.environ.update(kw) + + @patch('charmhelpers.core.hookenv.hook_name') + @patch('charmhelpers.core.hookenv.execution_environment') + @patch('charmhelpers.core.hookenv.charm_dir') + def test_hook_data_records(self, cdir, ctx, name): + name.return_value = 'config-changed' + ctx.return_value = { + 'rels': {}, 'conf': {'a': 1}, 'env': {}, 'unit': 'someunit'} + cdir.return_value = self.charm_dir + with open(os.path.join(self.charm_dir, 'revision'), 'w') as fh: + fh.write('1') + hook_data = HookData() + + with hook_data(): + self.assertEqual(kv(), hook_data.kv) + self.assertEqual(kv().get('charm_revisions'), ['1']) + self.assertEqual(kv().get('unit'), 'someunit') + self.assertEqual(list(hook_data.conf), ['a']) + self.assertEqual(tuple(hook_data.conf.a), (None, 1)) + + +class StorageTest(unittest.TestCase): + + def test_init_kv_multiple(self): + with tempfile.NamedTemporaryFile() as fh: + kv = Storage(fh.name) + with kv.hook_scope('xyz'): + kv.set('x', 1) + kv.close() + self.assertEqual(os.stat(fh.name).st_mode & 0o777, 0o600) + + kv = Storage(fh.name) + with kv.hook_scope('abc'): + self.assertEqual(kv.get('x'), 1) + kv.close() + + def test_hook_scope(self): + kv = Storage(':memory:') + try: + with kv.hook_scope('install') as rev: + self.assertEqual(rev, 1) + kv.set('a', 1) + raise RuntimeError('x') + except RuntimeError: + self.assertEqual(kv.get('a'), None) + + with kv.hook_scope('config-changed') as rev: + self.assertEqual(rev, 1) + kv.set('a', 1) + self.assertEqual(kv.get('a'), 1) + + kv.revision = None + + with kv.hook_scope('start') as rev: + self.assertEqual(rev, 2) + kv.set('a', False) + kv.set('a', True) + self.assertEqual(kv.get('a'), True) + + # History doesn't decode values by default + history = [h[:-1] for h in kv.gethistory('a')] + self.assertEqual( + history, + [(1, 'a', '1', 'config-changed'), + (2, 'a', 'true', 'start')]) + + history = [h[:-1] for h in kv.gethistory('a', deserialize=True)] + self.assertEqual( + history, + [(1, 'a', 1, 'config-changed'), + (2, 'a', True, 'start')]) + + def test_delta_no_previous_and_history(self): + kv = Storage(':memory:') + with kv.hook_scope('install'): + data = {'a': 0, 'c': False} + delta = kv.delta(data, 'settings.') + self.assertEqual(delta, { + 'a': (None, False), 'c': (None, False)}) + kv.update(data, 'settings.') + + with kv.hook_scope('config'): + data = {'a': 1, 'c': True} + delta = kv.delta(data, 'settings.') + self.assertEqual(delta, { + 'a': (0, 1), 'c': (False, True)}) + kv.update(data, 'settings.') + # strip the time + history = [h[:-1] for h in kv.gethistory('settings.a')] + self.assertEqual( + history, + [(1, 'settings.a', '0', 'install'), + (2, 'settings.a', '1', 'config')]) + + def test_unset(self): + kv = Storage(':memory:') + with kv.hook_scope('install'): + kv.set('a', True) + with kv.hook_scope('start'): + kv.set('a', False) + with kv.hook_scope('config-changed'): + 
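+            # unset() records a tombstone rather than erasing history; the
+            # gethistory() output below shows it as the JSON-encoded
+            # "DELETED" marker after the earlier true/false revisions.
+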
kv.unset('a') + history = [h[:-1] for h in kv.gethistory('a')] + + self.assertEqual(history, [ + (1, 'a', 'true', 'install'), + (2, 'a', 'false', 'start'), + (3, 'a', '"DELETED"', "config-changed")]) + + def test_flush_and_close_on_closed(self): + kv = Storage(':memory:') + kv.close() + kv.flush(False) + kv.close() + + def test_multi_value_set_skips(self): + # pure coverage test + kv = Storage(':memory:') + kv.set('x', 1) + self.assertEqual(kv.set('x', 1), 1) + + def test_debug(self): + # pure coverage test... + io = StringIO() + kv = Storage(':memory:') + kv.debug(io) + + def test_record(self): + kv = Storage(':memory:') + kv.set('config', {'x': 1, 'b': False}) + config = kv.get('config', record=True) + self.assertEqual(config.b, False) + self.assertEqual(config.x, 1) + self.assertEqual(kv.set('config.x', 1), 1) + try: + config.z + except AttributeError: + pass + else: + self.fail('attribute error should fire on nonexistant') + + def test_delta(self): + kv = Storage(':memory:') + kv.update({'a': 1, 'b': 2.2}, prefix="x") + delta = kv.delta({'a': 0, 'c': False}, prefix='x') + self.assertEqual( + delta, + {'a': (1, 0), 'b': (2.2, None), 'c': (None, False)}) + self.assertEqual(delta.a.previous, 1) + self.assertEqual(delta.a.current, 0) + self.assertEqual(delta.c.previous, None) + self.assertEqual(delta.a.current, False) + + def test_update(self): + kv = Storage(':memory:') + kv.update({'v_a': 1, 'v_b': 2.2}) + self.assertEqual(kv.getrange('v_'), {'v_a': 1, 'v_b': 2.2}) + + kv.update({'a': False, 'b': True}, prefix='x_') + self.assertEqual( + kv.getrange('x_', True), {'a': False, 'b': True}) + + def test_keyrange(self): + kv = Storage(':memory:') + kv.set('docker.net_mtu', 1) + kv.set('docker.net_nack', True) + kv.set('docker.net_type', 'vxlan') + self.assertEqual( + kv.getrange('docker'), + {'docker.net_mtu': 1, 'docker.net_type': 'vxlan', + 'docker.net_nack': True}) + self.assertEqual( + kv.getrange('docker.', True), + {'net_mtu': 1, 'net_type': 'vxlan', 'net_nack': True}) + + def test_get_set_unset(self): + kv = Storage(':memory:') + kv.hook_scope('test') + kv.set('hello', 'saucy') + kv.set('hello', 'world') + self.assertEqual(kv.get('hello'), 'world') + kv.flush() + kv.unset('hello') + self.assertEqual(kv.get('hello'), None) + kv.flush(False) + self.assertEqual(kv.get('hello'), 'world') + + +if __name__ == '__main__': + unittest.main() diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/fetch/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/fetch/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/fetch/python/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/fetch/python/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/fetch/python/test_debug.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/fetch/python/test_debug.py new file mode 100755 index 0000000000000000000000000000000000000000..76ee450628ef8b2deee7bcb4842289c8971c52ee --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/fetch/python/test_debug.py @@ -0,0 +1,54 @@ +#!/usr/bin/env python +# coding: utf-8 + +from unittest import TestCase + +from 
charmhelpers.fetch.python import debug + +import mock + +__author__ = "Jorge Niedbalski " + +TO_PATCH = [ + "log", + "open_port", + "close_port", + "Rpdb", + "_error", +] + + +class DebugTestCase(TestCase): + """Test cases for charmhelpers.contrib.python.debug""" + + def setUp(self): + TestCase.setUp(self) + self.patch_all() + self.log.return_value = True + self.close_port.return_value = True + + self.wrapped_function = mock.Mock(return_value=True) + self.set_trace = debug.set_trace + + def patch(self, method): + _m = mock.patch.object(debug, method) + _mock = _m.start() + self.addCleanup(_m.stop) + return _mock + + def patch_all(self): + for method in TO_PATCH: + setattr(self, method, self.patch(method)) + + def test_debug_set_trace(self): + """Check if set_trace works + """ + self.set_trace() + self.open_port.assert_called_with(4444) + + def test_debug_set_trace_ex(self): + """Check if set_trace raises exception + """ + self.set_trace() + self.Rpdb.set_trace.side_effect = Exception() + self.assertTrue(self._error.called) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/fetch/python/test_packages.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/fetch/python/test_packages.py new file mode 100644 index 0000000000000000000000000000000000000000..c4bd05c2cad7dda7d8cf1a7101358d6af4d2c825 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/fetch/python/test_packages.py @@ -0,0 +1,177 @@ +#!/usr/bin/env python +# coding: utf-8 + +import mock +import six + +from unittest import TestCase +from charmhelpers.fetch.python import packages + +__author__ = "Jorge Niedbalski " + +TO_PATCH = [ + "apt_install", + "charm_dir", + "log", + "pip_execute", +] + + +class PipTestCase(TestCase): + + def setUp(self): + TestCase.setUp(self) + self.patch_all() + + self.log.return_value = True + self.apt_install.return_value = True + + def patch(self, method): + _m = mock.patch.object(packages, method) + _mock = _m.start() + self.addCleanup(_m.stop) + return _mock + + def patch_all(self): + for method in TO_PATCH: + setattr(self, method, self.patch(method)) + + def test_pip_install_requirements(self): + """ + Check if pip_install_requirements works correctly + """ + packages.pip_install_requirements("test_requirements.txt") + self.pip_execute.assert_called_with(["install", + "-r test_requirements.txt"]) + + packages.pip_install_requirements("test_requirements.txt", + "test_constraints.txt") + self.pip_execute.assert_called_with(["install", + "-r test_requirements.txt", + "-c test_constraints.txt"]) + + packages.pip_install_requirements("test_requirements.txt", + proxy="proxy_addr:8080") + + self.pip_execute.assert_called_with(["install", + "--proxy=proxy_addr:8080", + "-r test_requirements.txt"]) + + packages.pip_install_requirements("test_requirements.txt", + log="output.log", + proxy="proxy_addr:8080") + + self.pip_execute.assert_called_with(["install", + "--log=output.log", + "--proxy=proxy_addr:8080", + "-r test_requirements.txt"]) + + def test_pip_install(self): + """ + Check if pip_install works correctly with a single package + """ + packages.pip_install("mock") + self.pip_execute.assert_called_with(["install", + "mock"]) + packages.pip_install("mock", + proxy="proxy_addr:8080") + + self.pip_execute.assert_called_with(["install", + "--proxy=proxy_addr:8080", + "mock"]) + packages.pip_install("mock", + log="output.log", + proxy="proxy_addr:8080") + + self.pip_execute.assert_called_with(["install", + 
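+            # expected argv shape used throughout this module: the pip command
+            # first, then option flags (--log before --proxy), packages last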
"--log=output.log", + "--proxy=proxy_addr:8080", + "mock"]) + + def test_pip_install_upgrade(self): + """ + Check if pip_install works correctly with a single package + """ + packages.pip_install("mock", upgrade=True) + self.pip_execute.assert_called_with(["install", + "--upgrade", + "mock"]) + + def test_pip_install_multiple(self): + """ + Check if pip_install works correctly with multiple packages + """ + packages.pip_install(["mock", "nose"]) + self.pip_execute.assert_called_with(["install", + "mock", "nose"]) + + @mock.patch('subprocess.check_call') + @mock.patch('os.path.join') + def test_pip_install_venv(self, join, check_call): + """ + Check if pip_install works correctly with multiple packages + """ + join.return_value = 'joined-path' + packages.pip_install(["mock", "nose"], venv=True) + check_call.assert_called_with(["joined-path", "install", + "mock", "nose"]) + + def test_pip_uninstall(self): + """ + Check if pip_uninstall works correctly with a single package + """ + packages.pip_uninstall("mock") + self.pip_execute.assert_called_with(["uninstall", + "-q", + "-y", + "mock"]) + packages.pip_uninstall("mock", + proxy="proxy_addr:8080") + + self.pip_execute.assert_called_with(["uninstall", + "-q", + "-y", + "--proxy=proxy_addr:8080", + "mock"]) + packages.pip_uninstall("mock", + log="output.log", + proxy="proxy_addr:8080") + + self.pip_execute.assert_called_with(["uninstall", + "-q", + "-y", + "--log=output.log", + "--proxy=proxy_addr:8080", + "mock"]) + + def test_pip_uninstall_multiple(self): + """ + Check if pip_uninstall works correctly with multiple packages + """ + packages.pip_uninstall(["mock", "nose"]) + self.pip_execute.assert_called_with(["uninstall", + "-q", + "-y", + "mock", "nose"]) + + def test_pip_list(self): + """ + Checks if pip_list works correctly + """ + packages.pip_list() + self.pip_execute.assert_called_with(["list"]) + + @mock.patch('os.path.join') + @mock.patch('subprocess.check_call') + @mock.patch.object(packages, 'pip_install') + def test_pip_create_virtualenv(self, pip_install, check_call, join): + """ + Checks if pip_create_virtualenv works correctly + """ + join.return_value = 'joined-path' + packages.pip_create_virtualenv() + if six.PY2: + self.apt_install.assert_called_with('python-virtualenv') + else: + self.apt_install.assert_called_with('python3-virtualenv') + check_call.assert_called_with(['virtualenv', 'joined-path']) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/fetch/python/test_version.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/fetch/python/test_version.py new file mode 100644 index 0000000000000000000000000000000000000000..9e32af3d5455f94573885e77330d7532f000e973 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/fetch/python/test_version.py @@ -0,0 +1,27 @@ +#!/usr/bin/env python +# coding: utf-8 + +from unittest import TestCase +from charmhelpers.fetch.python import version + +import sys + +__author__ = "Jorge Niedbalski " + + +class VersionTestCase(TestCase): + + def setUp(self): + TestCase.setUp(self) + + def test_current_version(self): + """ + Check if version.current_version and version.current_version_string + works correctly + """ + self.assertEquals(version.current_version(), + sys.version_info) + self.assertEquals(version.current_version_string(), + "{0}.{1}.{2}".format(sys.version_info.major, + sys.version_info.minor, + sys.version_info.micro)) diff --git 
a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/fetch/test_archiveurl.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/fetch/test_archiveurl.py new file mode 100644 index 0000000000000000000000000000000000000000..b3733b5ba58ad6f4542052ccb07da559cf93a207 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/fetch/test_archiveurl.py @@ -0,0 +1,131 @@ +import os + +from unittest import TestCase +from mock import ( + MagicMock, + patch, + mock_open, + Mock, + ANY +) +from charmhelpers.fetch import ( + archiveurl, + UnhandledSource, +) + +import six +if six.PY3: + from urllib.parse import urlparse + from urllib.error import URLError +else: + from urllib2 import URLError + from urlparse import urlparse + + +class ArchiveUrlFetchHandlerTest(TestCase): + + def setUp(self): + super(ArchiveUrlFetchHandlerTest, self).setUp() + self.valid_urls = ( + "http://example.com/foo.tar.gz", + "http://example.com/foo.tgz", + "http://example.com/foo.tar.bz2", + "http://example.com/foo.tbz2", + "http://example.com/foo.zip", + "http://example.com/foo.zip?bar=baz&x=y#whee", + "ftp://example.com/foo.tar.gz", + "https://example.com/foo.tgz", + "file://example.com/foo.tar.bz2", + ) + self.invalid_urls = ( + "git://example.com/foo.tar.gz", + "http://example.com/foo", + "http://example.com/foobar=baz&x=y#tar.gz", + "http://example.com/foobar?h=baz.zip", + "bzr+ssh://example.com/foo.tar.gz", + "lp:example/foo.tgz", + "file//example.com/foo.tar.bz2", + "garbage", + ) + self.fh = archiveurl.ArchiveUrlFetchHandler() + + def test_handles_archive_urls(self): + for url in self.valid_urls: + result = self.fh.can_handle(url) + self.assertEqual(result, True, url) + for url in self.invalid_urls: + result = self.fh.can_handle(url) + self.assertNotEqual(result, True, url) + + @patch('charmhelpers.fetch.archiveurl.urlopen') + def test_downloads(self, _urlopen): + for url in self.valid_urls: + response = MagicMock() + response.read.return_value = "bar" + _urlopen.return_value = response + + _open = mock_open() + with patch('charmhelpers.fetch.archiveurl.open', + _open, create=True): + self.fh.download(url, "foo") + + response.read.assert_called_with() + _open.assert_called_once_with("foo", 'wb') + _open().write.assert_called_with("bar") + + @patch('charmhelpers.fetch.archiveurl.check_hash') + @patch('charmhelpers.fetch.archiveurl.mkdir') + @patch('charmhelpers.fetch.archiveurl.extract') + def test_installs(self, _extract, _mkdir, _check_hash): + self.fh.download = MagicMock() + + for url in self.valid_urls: + filename = urlparse(url).path + dest = os.path.join('foo', 'fetched', os.path.basename(filename)) + _extract.return_value = dest + with patch.dict('os.environ', {'CHARM_DIR': 'foo'}): + where = self.fh.install(url, checksum='deadbeef') + self.fh.download.assert_called_with(url, dest) + _extract.assert_called_with(dest, None) + _check_hash.assert_called_with(dest, 'deadbeef', 'sha1') + self.assertEqual(where, dest) + _check_hash.reset_mock() + + url = "http://www.example.com/archive.tar.gz" + + self.fh.download.side_effect = URLError('fail') + with patch.dict('os.environ', {'CHARM_DIR': 'foo'}): + self.assertRaises(UnhandledSource, self.fh.install, url) + + self.fh.download.side_effect = OSError('fail') + with patch.dict('os.environ', {'CHARM_DIR': 'foo'}): + self.assertRaises(UnhandledSource, self.fh.install, url) + + @patch('charmhelpers.fetch.archiveurl.check_hash') + @patch('charmhelpers.fetch.archiveurl.mkdir') + 
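+    # The next test exercises the fragment-checksum convention: a URL ending
+    # in '#sha512=<digest>' supplies both the hash algorithm and the expected
+    # value, so install() validates without an explicit checksum argument.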
@patch('charmhelpers.fetch.archiveurl.extract') + def test_install_with_hash_in_url(self, _extract, _mkdir, _check_hash): + self.fh.download = MagicMock() + url = "file://example.com/foo.tar.bz2#sha512=beefdead" + with patch.dict('os.environ', {'CHARM_DIR': 'foo'}): + self.fh.install(url) + _check_hash.assert_called_with(ANY, 'beefdead', 'sha512') + + @patch('charmhelpers.fetch.archiveurl.mkdir') + @patch('charmhelpers.fetch.archiveurl.extract') + def test_install_with_duplicate_hash_in_url(self, _extract, _mkdir): + self.fh.download = MagicMock() + url = "file://example.com/foo.tar.bz2#sha512=a&sha512=b" + with patch.dict('os.environ', {'CHARM_DIR': 'foo'}): + with self.assertRaisesRegexp( + TypeError, "Expected 1 hash value, not 2"): + self.fh.install(url) + + @patch('charmhelpers.fetch.archiveurl.urlretrieve') + @patch('charmhelpers.fetch.archiveurl.check_hash') + def test_download_and_validate(self, vfmock, urlmock): + urlmock.return_value = ('/tmp/tmpebM9Hv', Mock()) + dlurl = 'http://example.com/foo.tgz' + dlhash = '988881adc9fc3655077dc2d4d757d480b5ea0e11' + self.fh.download_and_validate(dlurl, dlhash) + vfmock.assert_called_with('/tmp/tmpebM9Hv', dlhash, 'sha1') diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/fetch/test_bzrurl.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/fetch/test_bzrurl.py new file mode 100644 index 0000000000000000000000000000000000000000..4b71209b9998cb319daf3b9a2645c394e1883d8e --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/fetch/test_bzrurl.py @@ -0,0 +1,136 @@ +import os +import shutil +import subprocess +import tempfile +from testtools import TestCase +from mock import ( + MagicMock, + patch, +) + +import six +if six.PY3: + from urllib.parse import urlparse +else: + from urlparse import urlparse + +try: + from charmhelpers.fetch import ( + bzrurl, + UnhandledSource, + ) +except ImportError: + bzrurl = None + UnhandledSource = None + + +class BzrUrlFetchHandlerTest(TestCase): + + def setUp(self): + super(BzrUrlFetchHandlerTest, self).setUp() + self.valid_urls = ( + "bzr+ssh://example.com/branch-name", + "bzr+ssh://example.com/branch-name/", + "lp:lp-branch-name", + "lp:example/lp-branch-name", + ) + self.invalid_urls = ( + "http://example.com/foo.tar.gz", + "http://example.com/foo.tgz", + "http://example.com/foo.tar.bz2", + "http://example.com/foo.tbz2", + "http://example.com/foo.zip", + "http://example.com/foo.zip?bar=baz&x=y#whee", + "ftp://example.com/foo.tar.gz", + "https://example.com/foo.tgz", + "file://example.com/foo.tar.bz2", + "git://example.com/foo.tar.gz", + "http://example.com/foo", + "http://example.com/foobar=baz&x=y#tar.gz", + "http://example.com/foobar?h=baz.zip", + "abc:example", + "file//example.com/foo.tar.bz2", + "garbage", + ) + self.fh = bzrurl.BzrUrlFetchHandler() + + def test_handles_bzr_urls(self): + for url in self.valid_urls: + result = self.fh.can_handle(url) + self.assertEqual(result, True, url) + for url in self.invalid_urls: + result = self.fh.can_handle(url) + self.assertNotEqual(result, True, url) + + @patch('charmhelpers.fetch.bzrurl.check_output') + def test_branch(self, check_output): + dest_path = "/destination/path" + for url in self.valid_urls: + self.fh.remote_branch = MagicMock() + self.fh.load_plugins = MagicMock() + self.fh.branch(url, dest_path) + + check_output.assert_called_with(['bzr', 'branch', url, dest_path], stderr=-2) + + for url in self.invalid_urls: + with patch.dict('os.environ', 
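+                        # (handlers derive their default fetch destination
+                        # from CHARM_DIR; test_installs below relies on this)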
{'CHARM_DIR': 'foo'}): + self.assertRaises(UnhandledSource, self.fh.branch, + url, dest_path) + + @patch('charmhelpers.fetch.bzrurl.check_output') + def test_branch_revno(self, check_output): + dest_path = "/destination/path" + for url in self.valid_urls: + self.fh.remote_branch = MagicMock() + self.fh.load_plugins = MagicMock() + self.fh.branch(url, dest_path, revno=42) + + check_output.assert_called_with(['bzr', 'branch', '-r', '42', + url, dest_path], stderr=-2) + + for url in self.invalid_urls: + with patch.dict('os.environ', {'CHARM_DIR': 'foo'}): + self.assertRaises(UnhandledSource, self.fh.branch, url, + dest_path) + + def test_branch_functional(self): + src = None + dst = None + try: + src = tempfile.mkdtemp() + subprocess.check_output(['bzr', 'init', src], stderr=subprocess.STDOUT) + dst = tempfile.mkdtemp() + os.rmdir(dst) + self.fh.branch(src, dst) + assert os.path.exists(os.path.join(dst, '.bzr')) + self.fh.branch(src, dst) # idempotent + assert os.path.exists(os.path.join(dst, '.bzr')) + finally: + if src: + shutil.rmtree(src, ignore_errors=True) + if dst: + shutil.rmtree(dst, ignore_errors=True) + + def test_installs(self): + self.fh.branch = MagicMock() + + for url in self.valid_urls: + branch_name = urlparse(url).path.strip("/").split("/")[-1] + dest = os.path.join('foo', 'fetched') + dest_dir = os.path.join(dest, os.path.basename(branch_name)) + with patch.dict('os.environ', {'CHARM_DIR': 'foo'}): + where = self.fh.install(url) + self.assertEqual(where, dest_dir) + + @patch('charmhelpers.fetch.bzrurl.mkdir') + def test_installs_dir(self, _mkdir): + self.fh.branch = MagicMock() + + for url in self.valid_urls: + branch_name = urlparse(url).path.strip("/").split("/")[-1] + dest = os.path.join('opt', 'f') + dest_dir = os.path.join(dest, os.path.basename(branch_name)) + with patch.dict('os.environ', {'CHARM_DIR': 'foo'}): + where = self.fh.install(url, dest) + self.assertEqual(where, dest_dir) + _mkdir.assert_called_with(dest, perms=0o755) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/fetch/test_fetch.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/fetch/test_fetch.py new file mode 100644 index 0000000000000000000000000000000000000000..202467f99900a8c8598cc7e9526034ab4c364906 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/fetch/test_fetch.py @@ -0,0 +1,269 @@ +import six +import os +import yaml + +from testtools import TestCase +from mock import ( + patch, + MagicMock, + call, +) + +from charmhelpers import fetch + +if six.PY3: + from urllib.parse import urlparse + builtin_open = 'builtins.open' +else: + from urlparse import urlparse + builtin_open = '__builtin__.open' + + +FAKE_APT_CACHE = { + # an installed package + 'vim': { + 'current_ver': '2:7.3.547-6ubuntu5' + }, + # a uninstalled installation candidate + 'emacs': { + } +} + + +def fake_apt_cache(in_memory=True, progress=None): + def _get(package): + pkg = MagicMock() + if package not in FAKE_APT_CACHE: + raise KeyError + pkg.name = package + if 'current_ver' in FAKE_APT_CACHE[package]: + pkg.current_ver.ver_str = FAKE_APT_CACHE[package]['current_ver'] + else: + pkg.current_ver = None + return pkg + cache = MagicMock() + cache.__getitem__.side_effect = _get + return cache + + +def getenv(update=None): + # return a copy of os.environ with update applied. 
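+    # (tests that mock subprocess can then assert env=getenv() and stay
+    # correct even when the code under test mutates the environment)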
+ # this was necessary because some modules modify os.environment directly + copy = os.environ.copy() + if update is not None: + copy.update(update) + return copy + + +class FetchTest(TestCase): + + @patch('charmhelpers.fetch.log') + @patch.object(fetch, 'config') + @patch.object(fetch, 'add_source') + def test_configure_sources_single_source(self, add_source, config, log): + config.side_effect = ['source', 'key'] + fetch.configure_sources() + add_source.assert_called_with('source', 'key') + + @patch.object(fetch, 'config') + @patch.object(fetch, 'add_source') + def test_configure_sources_null_source(self, add_source, config): + config.side_effect = [None, None] + fetch.configure_sources() + self.assertEqual(add_source.call_count, 0) + + @patch.object(fetch, 'config') + @patch.object(fetch, 'add_source') + def test_configure_sources_empty_source(self, add_source, config): + config.side_effect = ['', ''] + fetch.configure_sources() + self.assertEqual(add_source.call_count, 0) + + @patch.object(fetch, 'config') + @patch.object(fetch, 'add_source') + def test_configure_sources_single_source_no_key(self, add_source, config): + config.side_effect = ['source', None] + fetch.configure_sources() + add_source.assert_called_with('source', None) + + @patch.object(fetch, 'config') + @patch.object(fetch, 'add_source') + def test_configure_sources_multiple_sources(self, add_source, config): + sources = ["sourcea", "sourceb"] + keys = ["keya", None] + config.side_effect = [ + yaml.dump(sources), + yaml.dump(keys) + ] + fetch.configure_sources() + add_source.assert_has_calls([ + call('sourcea', 'keya'), + call('sourceb', None) + ]) + + @patch.object(fetch, 'config') + @patch.object(fetch, 'add_source') + def test_configure_sources_missing_keys(self, add_source, config): + sources = ["sourcea", "sourceb"] + keys = ["keya"] # Second key is missing + config.side_effect = [ + yaml.dump(sources), + yaml.dump(keys) + ] + self.assertRaises(fetch.SourceConfigError, fetch.configure_sources) + + @patch.object(fetch, '_fetch_update') + @patch.object(fetch, 'config') + @patch.object(fetch, 'add_source') + def test_configure_sources_update_called_ubuntu(self, add_source, config, + update): + config.side_effect = ['source', 'key'] + fetch.configure_sources(update=True) + add_source.assert_called_with('source', 'key') + self.assertTrue(update.called) + + +class InstallTest(TestCase): + + def setUp(self): + super(InstallTest, self).setUp() + self.valid_urls = ( + "http://example.com/foo.tar.gz", + "http://example.com/foo.tgz", + "http://example.com/foo.tar.bz2", + "http://example.com/foo.tbz2", + "http://example.com/foo.zip", + "http://example.com/foo.zip?bar=baz&x=y#whee", + "ftp://example.com/foo.tar.gz", + "https://example.com/foo.tgz", + "file://example.com/foo.tar.bz2", + "bzr+ssh://example.com/branch-name", + "bzr+ssh://example.com/branch-name/", + "lp:branch-name", + "lp:example/branch-name", + ) + self.invalid_urls = ( + "git://example.com/foo.tar.gz", + "http://example.com/foo", + "http://example.com/foobar=baz&x=y#tar.gz", + "http://example.com/foobar?h=baz.zip", + "abc:example", + "file//example.com/foo.tar.bz2", + "garbage", + ) + + @patch('charmhelpers.fetch.log') + @patch('charmhelpers.fetch.plugins') + def test_installs_remote(self, _plugins, _log): + h1 = MagicMock(name="h1") + h1.can_handle.return_value = "Nope" + + h2 = MagicMock(name="h2") + h2.can_handle.return_value = True + h2.install.side_effect = fetch.UnhandledSource() + + h3 = MagicMock(name="h3") + h3.can_handle.return_value = True + 
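+        # Dispatch contract sketched by these mocks (an illustration, not the
+        # real implementation): install_remote() asks each plugin
+        # can_handle(url); anything other than True (h1's explanatory string)
+        # is skipped, an UnhandledSource from install() (h2) falls through to
+        # the next handler, and the first success (h3) supplies the result.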
h3.install.return_value = "foo" + + _plugins.return_value = [h1, h2, h3] + for url in self.valid_urls: + result = fetch.install_remote(url) + + h1.can_handle.assert_called_with(url) + h2.can_handle.assert_called_with(url) + h3.can_handle.assert_called_with(url) + + h1.install.assert_not_called() + h2.install.assert_called_with(url) + h3.install.assert_called_with(url) + + self.assertEqual(result, "foo") + + fetch.install_remote('url', extra_arg=True) + h2.install.assert_called_with('url', extra_arg=True) + + @patch('charmhelpers.fetch.install_remote') + @patch('charmhelpers.fetch.config') + def test_installs_from_config(self, _config, _instrem): + for url in self.valid_urls: + _config.return_value = {"foo": url} + fetch.install_from_config("foo") + _instrem.assert_called_with(url) + + +class PluginTest(TestCase): + + @patch('charmhelpers.fetch.importlib.import_module') + def test_imports_plugins(self, import_): + fetch_handlers = ['a.foo', 'b.foo', 'c.foo'] + module = MagicMock() + import_.return_value = module + plugins = fetch.plugins(fetch_handlers) + + self.assertEqual(len(fetch_handlers), len(plugins)) + module.foo.assert_has_calls(([call()] * len(fetch_handlers))) + + @patch('charmhelpers.fetch.importlib.import_module') + def test_imports_plugins_default(self, import_): + module = MagicMock() + import_.return_value = module + plugins = fetch.plugins() + + self.assertEqual(len(fetch.FETCH_HANDLERS), len(plugins)) + for handler in fetch.FETCH_HANDLERS: + classname = handler.rsplit('.', 1)[-1] + getattr(module, classname).assert_called_with() + + @patch('charmhelpers.fetch.log') + @patch('charmhelpers.fetch.importlib.import_module') + def test_skips_and_logs_missing_plugins(self, import_, log_): + fetch_handlers = ['a.foo', 'b.foo', 'c.foo'] + import_.side_effect = (NotImplementedError, NotImplementedError, + MagicMock()) + plugins = fetch.plugins(fetch_handlers) + + self.assertEqual(1, len(plugins)) + self.assertEqual(2, log_.call_count) + + @patch('charmhelpers.fetch.log') + @patch.object(fetch.importlib, 'import_module') + def test_plugins_are_valid(self, import_module, log_): + plugins = fetch.plugins() + self.assertEqual(len(fetch.FETCH_HANDLERS), len(plugins)) + + +class BaseFetchHandlerTest(TestCase): + + def setUp(self): + super(BaseFetchHandlerTest, self).setUp() + self.test_urls = ( + "http://example.com/foo?bar=baz&x=y#blarg", + "https://example.com/foo", + "ftp://example.com/foo", + "file://example.com/foo", + "git://github.com/foo/bar", + "bzr+ssh://bazaar.launchpad.net/foo/bar", + "bzr+http://bazaar.launchpad.net/foo/bar", + "garbage", + ) + self.fh = fetch.BaseFetchHandler() + + def test_handles_nothing(self): + for url in self.test_urls: + self.assertNotEqual(self.fh.can_handle(url), True) + + def test_install_throws_unhandled(self): + for url in self.test_urls: + self.assertRaises(fetch.UnhandledSource, self.fh.install, url) + + def test_parses_urls(self): + sample_url = "http://example.com/foo?bar=baz&x=y#blarg" + p = self.fh.parse_url(sample_url) + self.assertEqual(p, urlparse(sample_url)) + + def test_returns_baseurl(self): + sample_url = "http://example.com/foo?bar=baz&x=y#blarg" + expected_url = "http://example.com/foo" + u = self.fh.base_url(sample_url) + self.assertEqual(u, expected_url) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/fetch/test_fetch_centos.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/fetch/test_fetch_centos.py new file mode 100644 index 
0000000000000000000000000000000000000000..43ca852d3abc658d52c40cbb96ca28a2c0908a47 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/fetch/test_fetch_centos.py @@ -0,0 +1,315 @@ +import subprocess +import os + +from tests.helpers import patch_open +from testtools import TestCase +from mock import ( + patch, + MagicMock, + call, +) +from charmhelpers.fetch import centos as fetch + + +def getenv(update=None): + # return a copy of os.environ with update applied. + # this was necessary because some modules modify os.environment directly + copy = os.environ.copy() + if update is not None: + copy.update(update) + return copy + + +class FetchTest(TestCase): + + @patch("charmhelpers.fetch.log") + @patch('yum.YumBase.doPackageLists') + def test_filter_packages_missing_centos(self, yumBase, log): + + class MockPackage: + def __init__(self, name): + self.base_package_name = name + + yum_dict = { + 'installed': { + MockPackage('vim') + }, + 'available': { + MockPackage('vim') + } + } + import yum + yum.YumBase.return_value.doPackageLists.return_value = yum_dict + result = fetch.filter_installed_packages(['vim', 'emacs']) + self.assertEquals(result, ['emacs']) + + @patch("charmhelpers.fetch.log") + def test_filter_packages_none_missing_centos(self, log): + + class MockPackage: + def __init__(self, name): + self.base_package_name = name + + yum_dict = { + 'installed': { + MockPackage('vim') + }, + 'available': { + MockPackage('vim') + } + } + import yum + yum.yumBase.return_value.doPackageLists.return_value = yum_dict + result = fetch.filter_installed_packages(['vim']) + self.assertEquals(result, []) + + @patch('charmhelpers.fetch.centos.log') + @patch('yum.YumBase.doPackageLists') + def test_filter_packages_not_available_centos(self, yumBase, log): + + class MockPackage: + def __init__(self, name): + self.base_package_name = name + + yum_dict = { + 'installed': { + MockPackage('vim') + } + } + import yum + yum.YumBase.return_value.doPackageLists.return_value = yum_dict + + result = fetch.filter_installed_packages(['vim', 'joe']) + self.assertEquals(result, ['joe']) + + @patch('charmhelpers.fetch.centos.log') + def test_add_source_none_centos(self, log): + fetch.add_source(source=None) + self.assertTrue(log.called) + + @patch('charmhelpers.fetch.centos.log') + @patch('os.listdir') + def test_add_source_http_centos(self, listdir, log): + source = "http://archive.ubuntu.com/ubuntu raring-backports main" + with patch_open() as (mock_open, mock_file): + fetch.add_source(source=source) + listdir.assert_called_with('/etc/yum.repos.d/') + mock_file.write.assert_has_calls([ + call("[archive.ubuntu.com_ubuntu raring-backports main]\n"), + call("name=archive.ubuntu.com/ubuntu raring-backports main\n"), + call("baseurl=http://archive.ubuntu.com/ubuntu raring" + "-backports main\n\n")]) + + @patch('charmhelpers.fetch.centos.log') + @patch('os.listdir') + @patch('subprocess.check_call') + def test_add_source_http_and_key_id_centos(self, check_call, + listdir, log): + source = "http://archive.ubuntu.com/ubuntu raring-backports main" + key_id = "akey" + with patch_open() as (mock_open, mock_file): + fetch.add_source(source=source, key=key_id) + listdir.assert_called_with('/etc/yum.repos.d/') + mock_file.write.assert_has_calls([ + call("[archive.ubuntu.com_ubuntu raring-backports main]\n"), + call("name=archive.ubuntu.com/ubuntu raring-backports main\n"), + call("baseurl=http://archive.ubuntu.com/ubuntu raring" + "-backports main\n\n")]) + check_call.assert_called_with(['rpm', 
'--import', key_id]) + + @patch('charmhelpers.fetch.centos.log') + @patch('os.listdir') + @patch('subprocess.check_call') + def test_add_source_https_and_key_id_centos(self, check_call, + listdir, log): + source = "https://USER:PASS@private-ppa.launchpad.net/project/awesome" + key_id = "GPGPGP" + with patch_open() as (mock_open, mock_file): + fetch.add_source(source=source, key=key_id) + listdir.assert_called_with('/etc/yum.repos.d/') + mock_file.write.assert_has_calls([ + call("[_USER:PASS@private-ppa.launchpad" + ".net_project_awesome]\n"), + call("name=/USER:PASS@private-ppa.launchpad.net" + "/project/awesome\n"), + call("baseurl=https://USER:PASS@private-ppa.launchpad.net" + "/project/awesome\n\n")]) + check_call.assert_called_with(['rpm', '--import', key_id]) + + @patch('charmhelpers.fetch.centos.log') + @patch.object(fetch, 'NamedTemporaryFile') + @patch('os.listdir') + @patch('subprocess.check_call') + def test_add_source_http_and_key_centos(self, check_call, + listdir, temp_file, log): + source = "http://archive.ubuntu.com/ubuntu raring-backports main" + key = ''' + -----BEGIN PGP PUBLIC KEY BLOCK----- + [...] + -----END PGP PUBLIC KEY BLOCK----- + ''' + file_mock = MagicMock() + file_mock.name = 'temporary_file' + temp_file.return_value.__enter__.return_value = file_mock + listdir.return_value = [] + + with patch_open() as (mock_open, mock_file): + fetch.add_source(source=source, key=key) + listdir.assert_called_with('/etc/yum.repos.d/') + self.assertTrue(log.called) + check_call.assert_called_with(['rpm', '--import', file_mock.name]) + file_mock.write.assert_called_once_with(key) + file_mock.flush.assert_called_once_with() + file_mock.seek.assert_called_once_with(0) + + +class YumTests(TestCase): + + @patch('subprocess.call') + @patch('charmhelpers.fetch.centos.log') + def test_yum_upgrade_non_fatal(self, log, mock_call): + options = ['--foo', '--bar'] + fetch.upgrade(options) + + mock_call.assert_called_with(['yum', '--assumeyes', + '--foo', '--bar', 'upgrade'], + env=getenv()) + + @patch('subprocess.check_call') + @patch('charmhelpers.fetch.centos.log') + def test_yum_upgrade_fatal(self, log, mock_call): + options = ['--foo', '--bar'] + fetch.upgrade(options, fatal=True) + + mock_call.assert_called_with(['yum', '--assumeyes', + '--foo', '--bar', 'upgrade'], + env=getenv()) + + @patch('subprocess.call') + @patch('charmhelpers.fetch.centos.log') + def test_installs_yum_packages(self, log, mock_call): + packages = ['foo', 'bar'] + options = ['--foo', '--bar'] + + fetch.install(packages, options) + + mock_call.assert_called_with(['yum', '--assumeyes', + '--foo', '--bar', 'install', + 'foo', 'bar'], + env=getenv()) + + @patch('subprocess.call') + @patch('charmhelpers.fetch.centos.log') + def test_installs_yum_packages_without_options(self, log, mock_call): + packages = ['foo', 'bar'] + fetch.install(packages) + + mock_call.assert_called_with(['yum', '--assumeyes', + 'install', 'foo', 'bar'], + env=getenv()) + + @patch('subprocess.call') + @patch('charmhelpers.fetch.centos.log') + def test_installs_yum_packages_as_string(self, log, mock_call): + packages = 'foo bar' + fetch.install(packages) + + mock_call.assert_called_with(['yum', '--assumeyes', + 'install', 'foo bar'], + env=getenv()) + + @patch('subprocess.check_call') + @patch('charmhelpers.fetch.centos.log') + def test_installs_yum_packages_with_possible_errors(self, log, mock_call): + packages = ['foo', 'bar'] + options = ['--foo', '--bar'] + + fetch.install(packages, options, fatal=True) + + 
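+        # fatal=True routes the command through subprocess.check_call (which
+        # raises on non-zero exit) rather than subprocess.call, which is why
+        # this test patches check_call while the non-fatal ones patch call.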
mock_call.assert_called_with(['yum', '--assumeyes', + '--foo', '--bar', + 'install', 'foo', 'bar'], + env=getenv()) + + @patch('subprocess.check_call') + @patch('charmhelpers.fetch.centos.log') + def test_purges_yum_packages_as_string_fatal(self, log, mock_call): + packages = 'irrelevant names' + mock_call.side_effect = OSError('fail') + + self.assertRaises(OSError, fetch.purge, packages, fatal=True) + self.assertTrue(log.called) + + @patch('subprocess.check_call') + @patch('charmhelpers.fetch.centos.log') + def test_purges_yum_packages_fatal(self, log, mock_call): + packages = ['irrelevant', 'names'] + mock_call.side_effect = OSError('fail') + + self.assertRaises(OSError, fetch.purge, packages, fatal=True) + self.assertTrue(log.called) + + @patch('subprocess.call') + @patch('charmhelpers.fetch.centos.log') + def test_purges_yum_packages_as_string_nofatal(self, log, mock_call): + packages = 'foo bar' + fetch.purge(packages) + + self.assertTrue(log.called) + mock_call.assert_called_with(['yum', '--assumeyes', + 'remove', 'foo bar'], + env=getenv()) + + @patch('subprocess.call') + @patch('charmhelpers.fetch.centos.log') + def test_purges_yum_packages_nofatal(self, log, mock_call): + packages = ['foo', 'bar'] + fetch.purge(packages) + + self.assertTrue(log.called) + mock_call.assert_called_with(['yum', '--assumeyes', + 'remove', 'foo', 'bar'], + env=getenv()) + + @patch('subprocess.check_call') + @patch('charmhelpers.fetch.centos.log') + def test_yum_update_fatal(self, log, check_call): + fetch.update(fatal=True) + check_call.assert_called_with(['yum', '--assumeyes', 'update'], + env=getenv()) + self.assertTrue(log.called) + + @patch('subprocess.check_output') + @patch('charmhelpers.fetch.centos.log') + def test_yum_search(self, log, check_output): + package = ['irrelevant'] + + from charmhelpers.fetch.centos import yum_search + yum_search(package) + check_output.assert_called_with(['yum', 'search', 'irrelevant']) + self.assertTrue(log.called) + + @patch('subprocess.check_call') + @patch('time.sleep') + def test_run_yum_command_retries_if_fatal(self, check_call, sleep): + """The _run_yum_command function retries the command if it can't get + the YUM lock.""" + self.called = False + + def side_effect(*args, **kwargs): + """ + First, raise an exception (can't acquire lock), then return 0 + (the lock is grabbed). 
+            """
+ """ + if not self.called: + self.called = True + raise subprocess.CalledProcessError( + returncode=1, cmd="some command") + else: + return 0 + + check_call.side_effect = side_effect + check_call.return_value = 0 + from charmhelpers.fetch.centos import _run_yum_command + _run_yum_command(["some", "command"], fatal=True) + self.assertTrue(sleep.called) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/fetch/test_fetch_ubuntu.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/fetch/test_fetch_ubuntu.py new file mode 100644 index 0000000000000000000000000000000000000000..506746181990a861b254c774e03393e4a0dced16 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/fetch/test_fetch_ubuntu.py @@ -0,0 +1,1019 @@ +import six +import subprocess +import io +import os + +from tests.helpers import patch_open +from testtools import TestCase +from mock import ( + patch, + MagicMock, + call, + sentinel, +) +from charmhelpers.fetch import ubuntu as fetch + +if six.PY3: + builtin_open = 'builtins.open' +else: + builtin_open = '__builtin__.open' + +# mocked return of openstack.get_distrib_codename() +FAKE_CODENAME = 'precise' + +url = 'deb ' + fetch.CLOUD_ARCHIVE_URL +UCA_SOURCES = [ + ('cloud:precise-folsom/proposed', url + ' precise-proposed/folsom main'), + ('cloud:precise-folsom', url + ' precise-updates/folsom main'), + ('cloud:precise-folsom/updates', url + ' precise-updates/folsom main'), + ('cloud:precise-grizzly/proposed', url + ' precise-proposed/grizzly main'), + ('cloud:precise-grizzly', url + ' precise-updates/grizzly main'), + ('cloud:precise-grizzly/updates', url + ' precise-updates/grizzly main'), + ('cloud:precise-havana/proposed', url + ' precise-proposed/havana main'), + ('cloud:precise-havana', url + ' precise-updates/havana main'), + ('cloud:precise-havana/updates', url + ' precise-updates/havana main'), + ('cloud:precise-icehouse/proposed', + url + ' precise-proposed/icehouse main'), + ('cloud:precise-icehouse', url + ' precise-updates/icehouse main'), + ('cloud:precise-icehouse/updates', url + ' precise-updates/icehouse main'), +] + +PGP_KEY_ASCII_ARMOR = """-----BEGIN PGP PUBLIC KEY BLOCK----- +Version: SKS 1.1.5 +Comment: Hostname: keyserver.ubuntu.com + +mI0EUCEyTAEEAMuUxyfiegCCwn4J/c0nw5PUTSJdn5FqiUTq6iMfij65xf1vl0g/Mxqw0gfg +AJIsCDvO9N9dloLAwF6FUBMg5My7WyhRPTAKF505TKJboyX3Pp4J1fU1LV8QFVOp87vUh1Rz +B6GU7cSglhnbL85gmbJTllkzkb3h4Yw7W+edjcQ/ABEBAAG0K0xhdW5jaHBhZCBQUEEgZm9y +IFVidW50dSBDbG91ZCBBcmNoaXZlIFRlYW2IuAQTAQIAIgUCUCEyTAIbAwYLCQgHAwIGFQgC +CQoLBBYCAwECHgECF4AACgkQimhEop9oEE7kJAP/eTBgq3Mhbvo0d8elMOuqZx3nmU7gSyPh +ep0zYIRZ5TJWl/7PRtvp0CJA6N6ZywYTQ/4ANHhpibcHZkh8K0AzUvsGXnJRSFoJeqyDbD91 +EhoO+4ZfHs2HvRBQEDZILMa2OyuB497E5Mmyua3HDEOrG2cVLllsUZzpTFCx8NgeMHk= +=jLBm +-----END PGP PUBLIC KEY BLOCK-----""" + +PGP_KEY_BIN_PGP = b'\x98\x8d\x04P!2L\x01\x04\x00\xcb\x94\xc7\'\xe2z\x00\x82\xc2~\t\xfd\xcd\'\xc3\x93\xd4M"]\x9f\x91j\x89D\xea\xea#\x1f\x8a>\xb9\xc5\xfdo\x97H?3\x1a\xb0\xd2\x07\xe0\x00\x92,\x08;\xce\xf4\xdf]\x96\x82\xc0\xc0^\x85P\x13 \xe4\xcc\xbb[(Q=0\n\x17\x9d9L\xa2[\xa3%\xf7>\x9e\t\xd5\xf55-_\x10\x15S\xa9\xf3\xbb\xd4\x87Ts\x07\xa1\x94\xed\xc4\xa0\x96\x19\xdb/\xce`\x99\xb2S\x96Y3\x91\xbd\xe1\xe1\x8c;[\xe7\x9d\x8d\xc4?\x00\x11\x01\x00\x01\xb4+Launchpad PPA for Ubuntu Cloud Archive 
Team\x88\xb8\x04\x13\x01\x02\x00"\x05\x02P!2L\x02\x1b\x03\x06\x0b\t\x08\x07\x03\x02\x06\x15\x08\x02\t\n\x0b\x04\x16\x02\x03\x01\x02\x1e\x01\x02\x17\x80\x00\n\t\x10\x8ahD\xa2\x9fh\x10N\xe4$\x03\xffy0`\xabs!n\xfa4w\xc7\xa50\xeb\xaag\x1d\xe7\x99N\xe0K#\xe1z\x9d3`\x84Y\xe52V\x97\xfe\xcfF\xdb\xe9\xd0"@\xe8\xde\x99\xcb\x06\x13C\xfe\x004xi\x89\xb7\x07fH|+@3R\xfb\x06^rQHZ\tz\xac\x83l?u\x12\x1a\x0e\xfb\x86_\x1e\xcd\x87\xbd\x10P\x106H,\xc6\xb6;+\x81\xe3\xde\xc4\xe4\xc9\xb2\xb9\xad\xc7\x0cC\xab\x1bg\x15.YlQ\x9c\xe9LP\xb1\xf0\xd8\x1e0y' # noqa + +# a keyid can be retrieved by the ASCII armor-encoded key using this: +# cat testkey.asc | gpg --with-colons --import-options import-show --dry-run +# --import +PGP_KEY_ID = '8a6844a29f68104e' + +FAKE_APT_CACHE = { + # an installed package + 'vim': { + 'current_ver': '2:7.3.547-6ubuntu5' + }, + # a uninstalled installation candidate + 'emacs': { + } +} + + +def fake_apt_cache(in_memory=True, progress=None): + def _get(package): + pkg = MagicMock() + if package not in FAKE_APT_CACHE: + raise KeyError + pkg.name = package + if 'current_ver' in FAKE_APT_CACHE[package]: + pkg.current_ver.ver_str = FAKE_APT_CACHE[package]['current_ver'] + else: + pkg.current_ver = None + return pkg + cache = MagicMock() + cache.__getitem__.side_effect = _get + return cache + + +class FetchTest(TestCase): + + def setUp(self): + super(FetchTest, self).setUp() + self.patch(fetch, 'get_apt_dpkg_env', lambda: {}) + + @patch("charmhelpers.fetch.ubuntu.log") + @patch.object(fetch, 'apt_cache') + def test_filter_packages_missing_ubuntu(self, cache, log): + cache.side_effect = fake_apt_cache + result = fetch.filter_installed_packages(['vim', 'emacs']) + self.assertEquals(result, ['emacs']) + + @patch("charmhelpers.fetch.ubuntu.log") + @patch.object(fetch, 'apt_cache') + def test_filter_packages_none_missing_ubuntu(self, cache, log): + cache.side_effect = fake_apt_cache + result = fetch.filter_installed_packages(['vim']) + self.assertEquals(result, []) + + @patch('charmhelpers.fetch.ubuntu.log') + @patch.object(fetch, 'apt_cache') + def test_filter_packages_not_available_ubuntu(self, cache, log): + cache.side_effect = fake_apt_cache + result = fetch.filter_installed_packages(['vim', 'joe']) + self.assertEquals(result, ['joe']) + log.assert_called_with('Package joe has no installation candidate.', + level='WARNING') + + @patch('charmhelpers.fetch.ubuntu.filter_installed_packages') + def test_filter_missing_packages(self, filter_installed_packages): + filter_installed_packages.return_value = ['pkga'] + self.assertEqual(['pkgb'], + fetch.filter_missing_packages(['pkga', 'pkgb'])) + + @patch.object(fetch, 'log', lambda *args, **kwargs: None) + @patch.object(fetch, '_write_apt_gpg_keyfile') + @patch.object(fetch, '_dearmor_gpg_key') + def test_import_apt_key_radix(self, dearmor_gpg_key, + w_keyfile): + def dearmor_side_effect(key_asc): + return { + PGP_KEY_ASCII_ARMOR: PGP_KEY_BIN_PGP, + }[key_asc] + dearmor_gpg_key.side_effect = dearmor_side_effect + + with patch('subprocess.check_output') as _subp_check_output: + curl_cmd = ['curl', ('https://keyserver.ubuntu.com' + '/pks/lookup?op=get&options=mr' + '&exact=on&search=0x{}').format(PGP_KEY_ID)] + + def check_output_side_effect(command, env): + return { + ' '.join(curl_cmd): PGP_KEY_ASCII_ARMOR, + }[' '.join(command)] + _subp_check_output.side_effect = check_output_side_effect + + fetch.import_key(PGP_KEY_ID) + _subp_check_output.assert_called_with(curl_cmd, env=None) + w_keyfile.assert_called_once_with(key_name=PGP_KEY_ID, + 
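+            # (import_key fetches the ASCII-armored key with curl, dearmors
+            # it to binary, and only then writes the keyfile, as asserted here)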
key_material=PGP_KEY_BIN_PGP) + + @patch.object(fetch, 'log', lambda *args, **kwargs: None) + @patch.object(os, 'getenv') + @patch.object(fetch, '_write_apt_gpg_keyfile') + @patch.object(fetch, '_dearmor_gpg_key') + def test_import_apt_key_radix_https_proxy(self, dearmor_gpg_key, + w_keyfile, getenv): + def dearmor_side_effect(key_asc): + return { + PGP_KEY_ASCII_ARMOR: PGP_KEY_BIN_PGP, + }[key_asc] + dearmor_gpg_key.side_effect = dearmor_side_effect + + def get_env_side_effect(var): + return { + 'HTTPS_PROXY': 'http://squid.internal:3128', + 'JUJU_CHARM_HTTPS_PROXY': None, + }[var] + getenv.side_effect = get_env_side_effect + + with patch('subprocess.check_output') as _subp_check_output: + proxy_settings = { + 'HTTPS_PROXY': 'http://squid.internal:3128', + 'https_proxy': 'http://squid.internal:3128', + } + curl_cmd = ['curl', ('https://keyserver.ubuntu.com' + '/pks/lookup?op=get&options=mr' + '&exact=on&search=0x{}').format(PGP_KEY_ID)] + + def check_output_side_effect(command, env): + return { + ' '.join(curl_cmd): PGP_KEY_ASCII_ARMOR, + }[' '.join(command)] + _subp_check_output.side_effect = check_output_side_effect + + fetch.import_key(PGP_KEY_ID) + _subp_check_output.assert_called_with(curl_cmd, env=proxy_settings) + w_keyfile.assert_called_once_with(key_name=PGP_KEY_ID, + key_material=PGP_KEY_BIN_PGP) + + @patch.object(fetch, 'log', lambda *args, **kwargs: None) + @patch.object(os, 'getenv') + @patch.object(fetch, '_write_apt_gpg_keyfile') + @patch.object(fetch, '_dearmor_gpg_key') + def test_import_apt_key_radix_charm_https_proxy(self, dearmor_gpg_key, + w_keyfile, getenv): + def dearmor_side_effect(key_asc): + return { + PGP_KEY_ASCII_ARMOR: PGP_KEY_BIN_PGP, + }[key_asc] + dearmor_gpg_key.side_effect = dearmor_side_effect + + def get_env_side_effect(var): + return { + 'HTTPS_PROXY': None, + 'JUJU_CHARM_HTTPS_PROXY': 'http://squid.internal:3128', + }[var] + getenv.side_effect = get_env_side_effect + + with patch('subprocess.check_output') as _subp_check_output: + proxy_settings = { + 'HTTPS_PROXY': 'http://squid.internal:3128', + 'https_proxy': 'http://squid.internal:3128', + } + curl_cmd = ['curl', ('https://keyserver.ubuntu.com' + '/pks/lookup?op=get&options=mr' + '&exact=on&search=0x{}').format(PGP_KEY_ID)] + + def check_output_side_effect(command, env): + return { + ' '.join(curl_cmd): PGP_KEY_ASCII_ARMOR, + }[' '.join(command)] + _subp_check_output.side_effect = check_output_side_effect + + fetch.import_key(PGP_KEY_ID) + _subp_check_output.assert_called_with(curl_cmd, env=proxy_settings) + w_keyfile.assert_called_once_with(key_name=PGP_KEY_ID, + key_material=PGP_KEY_BIN_PGP) + + @patch.object(fetch, 'log', lambda *args, **kwargs: None) + @patch.object(fetch, '_dearmor_gpg_key') + @patch('subprocess.check_output') + def test_import_bad_apt_key(self, check_output, dearmor_gpg_key): + """Ensure error when importing apt key fails""" + errmsg = ('Invalid GPG key material. 
Check your network setup' + ' (MTU, routing, DNS) and/or proxy server settings' + ' as well as destination keyserver status.') + bad_keyid = 'foo' + + curl_cmd = ['curl', ('https://keyserver.ubuntu.com' + '/pks/lookup?op=get&options=mr' + '&exact=on&search=0x{}').format(bad_keyid)] + + def check_output_side_effect(command, env): + return { + ' '.join(curl_cmd): 'foobar', + }[' '.join(command)] + check_output.side_effect = check_output_side_effect + + def dearmor_side_effect(key_asc): + raise fetch.GPGKeyError(errmsg) + dearmor_gpg_key.side_effect = dearmor_side_effect + try: + fetch.import_key(bad_keyid) + assert False + except fetch.GPGKeyError as e: + self.assertEqual(str(e), errmsg) + + @patch('charmhelpers.fetch.ubuntu.log') + def test_add_source_none_ubuntu(self, log): + fetch.add_source(source=None) + self.assertTrue(log.called) + + @patch('subprocess.check_call') + def test_add_source_ppa(self, check_call): + source = "ppa:test-ppa" + fetch.add_source(source=source) + check_call.assert_called_with( + ['add-apt-repository', '--yes', source], env={}) + + @patch("charmhelpers.fetch.ubuntu.log") + @patch('subprocess.check_call') + @patch('time.sleep') + def test_add_source_ppa_retries_30_times(self, sleep, check_call, log): + self.call_count = 0 + + def side_effect(*args, **kwargs): + """Raise an 3 times, then return 0 """ + self.call_count += 1 + if self.call_count <= fetch.CMD_RETRY_COUNT: + raise subprocess.CalledProcessError( + returncode=1, cmd="some add-apt-repository command") + else: + return 0 + check_call.side_effect = side_effect + + source = "ppa:test-ppa" + fetch.add_source(source=source) + check_call.assert_called_with( + ['add-apt-repository', '--yes', source], env={}) + sleep.assert_called_with(10) + self.assertTrue(fetch.CMD_RETRY_COUNT, sleep.call_count) + + @patch('charmhelpers.fetch.ubuntu.log') + @patch('subprocess.check_call') + def test_add_source_http_ubuntu(self, check_call, log): + source = "http://archive.ubuntu.com/ubuntu raring-backports main" + fetch.add_source(source=source) + check_call.assert_called_with( + ['add-apt-repository', '--yes', source], env={}) + + @patch('charmhelpers.fetch.ubuntu.log') + @patch('subprocess.check_call') + def test_add_source_https(self, check_call, log): + source = "https://example.com" + fetch.add_source(source=source) + check_call.assert_called_with( + ['add-apt-repository', '--yes', source], env={}) + + @patch('charmhelpers.fetch.ubuntu.log') + @patch('subprocess.check_call') + def test_add_source_deb(self, check_call, log): + """add-apt-repository behaves differently when using the deb prefix. 
+ + $ add-apt-repository --yes \ + "http://special.example.com/ubuntu precise-special main" + $ grep special /etc/apt/sources.list + deb http://special.example.com/ubuntu precise precise-special main + deb-src http://special.example.com/ubuntu precise precise-special main + + $ add-apt-repository --yes \ + "deb http://special.example.com/ubuntu precise-special main" + $ grep special /etc/apt/sources.list + deb http://special.example.com/ubuntu precise precise-special main + deb-src http://special.example.com/ubuntu precise precise-special main + deb http://special.example.com/ubuntu precise-special main + deb-src http://special.example.com/ubuntu precise-special main + """ + source = "deb http://archive.ubuntu.com/ubuntu raring-backports main" + fetch.add_source(source=source) + check_call.assert_called_with( + ['add-apt-repository', '--yes', source], env={}) + + @patch.object(fetch, '_write_apt_gpg_keyfile') + @patch.object(fetch, '_dearmor_gpg_key') + @patch('charmhelpers.fetch.ubuntu.log') + @patch('subprocess.check_output') + @patch('subprocess.check_call') + def test_add_source_http_and_key_id(self, check_call, check_output, log, + dearmor_gpg_key, + w_keyfile): + def dearmor_side_effect(key_asc): + return { + PGP_KEY_ASCII_ARMOR: PGP_KEY_BIN_PGP, + }[key_asc] + dearmor_gpg_key.side_effect = dearmor_side_effect + + curl_cmd = ['curl', ('https://keyserver.ubuntu.com' + '/pks/lookup?op=get&options=mr' + '&exact=on&search=0x{}').format(PGP_KEY_ID)] + + def check_output_side_effect(command, env): + return { + ' '.join(curl_cmd): PGP_KEY_ASCII_ARMOR, + }[' '.join(command)] + check_output.side_effect = check_output_side_effect + source = "http://archive.ubuntu.com/ubuntu raring-backports main" + check_call.return_value = 0 # Successful exit code + fetch.add_source(source=source, key=PGP_KEY_ID) + check_call.assert_any_call( + ['add-apt-repository', '--yes', source], env={}), + check_output.assert_has_calls([ + call(['curl', ('https://keyserver.ubuntu.com' + '/pks/lookup?op=get&options=mr' + '&exact=on&search=0x{}').format(PGP_KEY_ID)], + env=None), + ]) + + @patch.object(fetch, '_write_apt_gpg_keyfile') + @patch.object(fetch, '_dearmor_gpg_key') + @patch('charmhelpers.fetch.ubuntu.log') + @patch('subprocess.check_output') + @patch('subprocess.check_call') + def test_add_source_https_and_key_id(self, check_call, check_output, log, + dearmor_gpg_key, + w_keyfile): + def dearmor_side_effect(key_asc): + return { + PGP_KEY_ASCII_ARMOR: PGP_KEY_BIN_PGP, + }[key_asc] + dearmor_gpg_key.side_effect = dearmor_side_effect + + curl_cmd = ['curl', ('https://keyserver.ubuntu.com' + '/pks/lookup?op=get&options=mr' + '&exact=on&search=0x{}').format(PGP_KEY_ID)] + + def check_output_side_effect(command, env): + return { + ' '.join(curl_cmd): PGP_KEY_ASCII_ARMOR, + }[' '.join(command)] + check_output.side_effect = check_output_side_effect + + check_call.return_value = 0 + + source = "https://USER:PASS@private-ppa.launchpad.net/project/awesome" + fetch.add_source(source=source, key=PGP_KEY_ID) + check_call.assert_any_call( + ['add-apt-repository', '--yes', source], env={}), + check_output.assert_has_calls([ + call(['curl', ('https://keyserver.ubuntu.com' + '/pks/lookup?op=get&options=mr' + '&exact=on&search=0x{}').format(PGP_KEY_ID)], + env=None), + ]) + + @patch.object(fetch, '_write_apt_gpg_keyfile') + @patch.object(fetch, '_dearmor_gpg_key') + @patch('charmhelpers.fetch.ubuntu.log') + @patch.object(fetch, 'get_distrib_codename') + @patch('subprocess.check_call') + @patch('subprocess.Popen') + def 
test_add_source_http_and_key_gpg1(self, popen, check_call, + get_distrib_codename, log, + dearmor_gpg_key, + w_keyfile): + + def check_call_side_effect(*args, **kwargs): + # Make sure the gpg key has already been added before the + # add-apt-repository call, as the update could fail otherwise. + popen.assert_called_with( + ['gpg', '--with-colons', '--with-fingerprint'], + stdout=subprocess.PIPE, + stderr=subprocess.PIPE, + stdin=subprocess.PIPE) + return 0 + + source = "http://archive.ubuntu.com/ubuntu raring-backports main" + key = PGP_KEY_ASCII_ARMOR + key_bytes = PGP_KEY_ASCII_ARMOR.encode('utf-8') + get_distrib_codename.return_value = 'trusty' + check_call.side_effect = check_call_side_effect + + expected_key = '35F77D63B5CEC106C577ED856E85A86E4652B4E6' + if six.PY3: + popen.return_value.communicate.return_value = [b""" +pub:-:1024:1:6E85A86E4652B4E6:2009-01-18:::-:Launchpad PPA for Landscape: +fpr:::::::::35F77D63B5CEC106C577ED856E85A86E4652B4E6: + """, b''] + else: + popen.return_value.communicate.return_value = [""" +pub:-:1024:1:6E85A86E4652B4E6:2009-01-18:::-:Launchpad PPA for Landscape: +fpr:::::::::35F77D63B5CEC106C577ED856E85A86E4652B4E6: + """, ''] + + dearmor_gpg_key.return_value = PGP_KEY_BIN_PGP + + fetch.add_source(source=source, key=key) + popen.assert_called_with( + ['gpg', '--with-colons', '--with-fingerprint'], + stdout=subprocess.PIPE, + stderr=subprocess.PIPE, + stdin=subprocess.PIPE) + dearmor_gpg_key.assert_called_with(key_bytes) + w_keyfile.assert_called_with(key_name=expected_key, + key_material=PGP_KEY_BIN_PGP) + check_call.assert_any_call( + ['add-apt-repository', '--yes', source], env={}), + + @patch.object(fetch, '_write_apt_gpg_keyfile') + @patch.object(fetch, '_dearmor_gpg_key') + @patch('charmhelpers.fetch.ubuntu.log') + @patch.object(fetch, 'get_distrib_codename') + @patch('subprocess.check_call') + @patch('subprocess.Popen') + def test_add_source_http_and_key_gpg2(self, popen, check_call, + get_distrib_codename, log, + dearmor_gpg_key, + w_keyfile): + + def check_call_side_effect(*args, **kwargs): + # Make sure the gpg key has already been added before the + # add-apt-repository call, as the update could fail otherwise. 
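+            # (gpg2, as on bionic, reports the fingerprint on an 'fpr:' line
+            # followed by a 'uid:' line, while the gpg1 test above parses a
+            # 'pub:' line first; both yield the same expected_key)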
+ popen.assert_called_with( + ['gpg', '--with-colons', '--with-fingerprint'], + stdout=subprocess.PIPE, + stderr=subprocess.PIPE, + stdin=subprocess.PIPE) + return 0 + + source = "http://archive.ubuntu.com/ubuntu raring-backports main" + key = PGP_KEY_ASCII_ARMOR + key_bytes = PGP_KEY_ASCII_ARMOR.encode('utf-8') + get_distrib_codename.return_value = 'bionic' + check_call.side_effect = check_call_side_effect + + expected_key = '35F77D63B5CEC106C577ED856E85A86E4652B4E6' + + if six.PY3: + popen.return_value.communicate.return_value = [b""" +fpr:::::::::35F77D63B5CEC106C577ED856E85A86E4652B4E6: +uid:-::::1232306042::52FE92E6867B4C099AA1A1877A804A965F41A98C::ppa::::::::::0: + """, b''] + else: + # python2 on a distro with gpg2 (unlikely, but possible) + popen.return_value.communicate.return_value = [""" +fpr:::::::::35F77D63B5CEC106C577ED856E85A86E4652B4E6: +uid:-::::1232306042::52FE92E6867B4C099AA1A1877A804A965F41A98C::ppa::::::::::0: + """, ''] + + dearmor_gpg_key.return_value = PGP_KEY_BIN_PGP + + fetch.add_source(source=source, key=key) + popen.assert_called_with( + ['gpg', '--with-colons', '--with-fingerprint'], + stdout=subprocess.PIPE, + stderr=subprocess.PIPE, + stdin=subprocess.PIPE) + dearmor_gpg_key.assert_called_with(key_bytes) + w_keyfile.assert_called_with(key_name=expected_key, + key_material=PGP_KEY_BIN_PGP) + check_call.assert_any_call( + ['add-apt-repository', '--yes', source], env={}), + + def test_add_source_cloud_invalid_pocket(self): + source = "cloud:havana-updates" + self.assertRaises(fetch.SourceConfigError, + fetch.add_source, source) + + @patch('charmhelpers.fetch.ubuntu.log') + @patch.object(fetch, 'filter_installed_packages') + @patch.object(fetch, 'apt_install') + @patch.object(fetch, 'get_distrib_codename') + def test_add_source_cloud_pocket_style(self, get_distrib_codename, + apt_install, + filter_pkg, log): + source = "cloud:precise-updates/havana" + get_distrib_codename.return_value = 'precise' + result = ('# Ubuntu Cloud Archive\n' + 'deb http://ubuntu-cloud.archive.canonical.com/ubuntu' + ' precise-updates/havana main\n') + + with patch_open() as (mock_open, mock_file): + fetch.add_source(source=source) + mock_file.write.assert_called_with(result) + filter_pkg.assert_called_with(['ubuntu-cloud-keyring']) + + @patch('charmhelpers.fetch.ubuntu.log') + @patch.object(fetch, 'filter_installed_packages') + @patch.object(fetch, 'apt_install') + @patch.object(fetch, 'get_distrib_codename') + def test_add_source_cloud_os_style(self, get_distrib_codename, apt_install, + filter_pkg, log): + source = "cloud:precise-havana" + get_distrib_codename.return_value = 'precise' + result = ('# Ubuntu Cloud Archive\n' + 'deb http://ubuntu-cloud.archive.canonical.com/ubuntu' + ' precise-updates/havana main\n') + with patch_open() as (mock_open, mock_file): + fetch.add_source(source=source) + mock_file.write.assert_called_with(result) + filter_pkg.assert_called_with(['ubuntu-cloud-keyring']) + + @patch('charmhelpers.fetch.ubuntu.log') + @patch.object(fetch, 'filter_installed_packages') + @patch.object(fetch, 'apt_install') + def test_add_source_cloud_distroless_style(self, apt_install, + filter_pkg, log): + source = "cloud:havana" + result = ('# Ubuntu Cloud Archive\n' + 'deb http://ubuntu-cloud.archive.canonical.com/ubuntu' + ' precise-updates/havana main\n') + with patch_open() as (mock_open, mock_file): + fetch.add_source(source=source) + mock_file.write.assert_called_with(result) + filter_pkg.assert_called_with(['ubuntu-cloud-keyring']) + + 
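+    # The three cloud: spellings above (pocket style 'precise-updates/havana',
+    # OS style 'precise-havana', and distro-less 'havana') all resolve to the
+    # single UCA entry each test asserts. A minimal sketch of that lookup,
+    # assuming the module's CLOUD_ARCHIVE_POCKETS mapping carries these keys:
+    #
+    #     pocket = fetch.CLOUD_ARCHIVE_POCKETS['precise-havana']
+    #     entry = ('# Ubuntu Cloud Archive\ndeb http://ubuntu-cloud.archive'
+    #              '.canonical.com/ubuntu {} main\n').format(pocket)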
@patch('charmhelpers.fetch.ubuntu.log') + @patch.object(fetch, 'get_distrib_codename') + @patch('platform.machine') + def test_add_source_proposed_x86_64(self, _machine, + get_distrib_codename, log): + source = "proposed" + result = ('# Proposed\n' + 'deb http://archive.ubuntu.com/ubuntu precise-proposed' + ' main universe multiverse restricted\n') + get_distrib_codename.return_value = 'precise' + _machine.return_value = 'x86_64' + with patch_open() as (mock_open, mock_file): + fetch.add_source(source=source) + mock_file.write.assert_called_with(result) + + @patch('charmhelpers.fetch.ubuntu.log') + @patch.object(fetch, 'get_distrib_codename') + @patch('platform.machine') + def test_add_source_proposed_ppc64le(self, _machine, + get_distrib_codename, log): + source = "proposed" + result = ( + "# Proposed\n" + "deb http://ports.ubuntu.com/ubuntu-ports precise-proposed main " + "universe multiverse restricted\n") + get_distrib_codename.return_value = 'precise' + _machine.return_value = 'ppc64le' + with patch_open() as (mock_open, mock_file): + fetch.add_source(source=source) + mock_file.write.assert_called_with(result) + + @patch.object(fetch, '_write_apt_gpg_keyfile') + @patch.object(fetch, '_dearmor_gpg_key') + @patch('charmhelpers.fetch.ubuntu.log') + @patch('subprocess.check_output') + @patch('subprocess.check_call') + def test_add_source_http_and_key_id_ubuntu(self, check_call, check_output, + log, dearmor_gpg_key, + w_keyfile): + def dearmor_side_effect(key_asc): + return { + PGP_KEY_ASCII_ARMOR: PGP_KEY_BIN_PGP, + }[key_asc] + dearmor_gpg_key.side_effect = dearmor_side_effect + + curl_cmd = ['curl', ('https://keyserver.ubuntu.com' + '/pks/lookup?op=get&options=mr' + '&exact=on&search=0x{}').format(PGP_KEY_ID)] + + def check_output_side_effect(command, env): + return { + ' '.join(curl_cmd): PGP_KEY_ASCII_ARMOR, + }[' '.join(command)] + check_output.side_effect = check_output_side_effect + check_call.return_value = 0 + source = "http://archive.ubuntu.com/ubuntu raring-backports main" + key_id = PGP_KEY_ID + fetch.add_source(source=source, key=key_id) + check_call.assert_any_call( + ['add-apt-repository', '--yes', source], env={}), + check_output.assert_has_calls([ + call(['curl', ('https://keyserver.ubuntu.com' + '/pks/lookup?op=get&options=mr' + '&exact=on&search=0x{}').format(PGP_KEY_ID)], + env=None), + ]) + + @patch.object(fetch, '_write_apt_gpg_keyfile') + @patch.object(fetch, '_dearmor_gpg_key') + @patch('charmhelpers.fetch.ubuntu.log') + @patch('subprocess.check_output') + @patch('subprocess.check_call') + def test_add_source_https_and_key_id_ubuntu(self, check_call, check_output, + log, dearmor_gpg_key, + w_keyfile): + def dearmor_side_effect(key_asc): + return { + PGP_KEY_ASCII_ARMOR: PGP_KEY_BIN_PGP, + }[key_asc] + dearmor_gpg_key.side_effect = dearmor_side_effect + + curl_cmd = ['curl', ('https://keyserver.ubuntu.com' + '/pks/lookup?op=get&options=mr' + '&exact=on&search=0x{}').format(PGP_KEY_ID)] + + def check_output_side_effect(command, env): + return { + ' '.join(curl_cmd): PGP_KEY_ASCII_ARMOR, + }[' '.join(command)] + check_output.side_effect = check_output_side_effect + check_call.return_value = 0 + + source = "https://USER:PASS@private-ppa.launchpad.net/project/awesome" + fetch.add_source(source=source, key=PGP_KEY_ID) + check_call.assert_any_call( + ['add-apt-repository', '--yes', source], env={}), + check_output.assert_has_calls([ + call(['curl', ('https://keyserver.ubuntu.com' + '/pks/lookup?op=get&options=mr' + '&exact=on&search=0x{}').format(PGP_KEY_ID)], + 
env=None), + ]) + + @patch('charmhelpers.fetch.ubuntu.log') + def test_configure_bad_install_source(self, log): + try: + fetch.add_source('foo', fail_invalid=True) + self.fail("Calling add_source('foo') should fail") + except fetch.SourceConfigError as e: + self.assertEqual(str(e), "Unknown source: 'foo'") + + @patch('charmhelpers.fetch.ubuntu.get_distrib_codename') + def test_configure_install_source_uca_staging(self, _lsb): + """Test configuring installation source from UCA staging sources""" + _lsb.return_value = FAKE_CODENAME + # staging pockets are configured as PPAs + with patch('subprocess.check_call') as _subp: + src = 'cloud:precise-folsom/staging' + fetch.add_source(src) + cmd = ['add-apt-repository', '-y', + 'ppa:ubuntu-cloud-archive/folsom-staging'] + _subp.assert_called_with(cmd, env={}) + + @patch(builtin_open) + @patch('charmhelpers.fetch.ubuntu.apt_install') + @patch('charmhelpers.fetch.ubuntu.get_distrib_codename') + @patch('charmhelpers.fetch.ubuntu.filter_installed_packages') + def test_configure_install_source_uca_repos( + self, _fip, _lsb, _install, _open): + """Test configuring installation source from UCA sources""" + _lsb.return_value = FAKE_CODENAME + _file = MagicMock(spec=io.FileIO) + _open.return_value = _file + _fip.side_effect = lambda x: x + for src, url in UCA_SOURCES: + actual_url = "# Ubuntu Cloud Archive\n{}\n".format(url) + fetch.add_source(src) + _install.assert_called_with(['ubuntu-cloud-keyring'], + fatal=True) + _open.assert_called_with( + '/etc/apt/sources.list.d/cloud-archive.list', + 'w' + ) + _file.__enter__().write.assert_called_with(actual_url) + + def test_configure_install_source_bad_uca(self): + """Test configuring installation source from bad UCA source""" + try: + fetch.add_source('cloud:foo-bar', fail_invalid=True) + self.fail("add_source('cloud:foo-bar') should fail") + except fetch.SourceConfigError as e: + _e = ('Invalid Cloud Archive release specified: foo-bar' + ' on this Ubuntuversion') + self.assertTrue(str(e).startswith(_e)) + + @patch('charmhelpers.fetch.ubuntu.log') + def test_add_unparsable_source(self, log_): + source = "propsed" # Minor typo + fetch.add_source(source=source) + self.assertEqual(1, log_.call_count) + + @patch('charmhelpers.fetch.ubuntu.log') + def test_add_distro_source(self, log): + source = "distro" + # distro is a noop but test validate no exception is thrown + fetch.add_source(source=source) + + +class AptTests(TestCase): + + def setUp(self): + super(AptTests, self).setUp() + self.patch(fetch, 'get_apt_dpkg_env', lambda: {}) + + @patch('subprocess.call') + @patch('charmhelpers.fetch.ubuntu.log') + def test_apt_upgrade_non_fatal(self, log, mock_call): + options = ['--foo', '--bar'] + fetch.apt_upgrade(options) + + mock_call.assert_called_with( + ['apt-get', '--assume-yes', + '--foo', '--bar', 'upgrade'], + env={}) + + @patch('subprocess.check_call') + @patch('charmhelpers.fetch.ubuntu.log') + def test_apt_upgrade_fatal(self, log, mock_call): + options = ['--foo', '--bar'] + fetch.apt_upgrade(options, fatal=True) + + mock_call.assert_called_with( + ['apt-get', '--assume-yes', + '--foo', '--bar', 'upgrade'], + env={}) + + @patch('subprocess.check_call') + @patch('charmhelpers.fetch.ubuntu.log') + def test_apt_dist_upgrade_fatal(self, log, mock_call): + options = ['--foo', '--bar'] + fetch.apt_upgrade(options, fatal=True, dist=True) + + mock_call.assert_called_with( + ['apt-get', '--assume-yes', + '--foo', '--bar', 'dist-upgrade'], + env={}) + + @patch('subprocess.call') + 
@patch('charmhelpers.fetch.ubuntu.log') + def test_installs_apt_packages(self, log, mock_call): + packages = ['foo', 'bar'] + options = ['--foo', '--bar'] + + fetch.apt_install(packages, options) + + mock_call.assert_called_with( + ['apt-get', '--assume-yes', + '--foo', '--bar', 'install', 'foo', 'bar'], + env={}) + + @patch('subprocess.call') + @patch('charmhelpers.fetch.ubuntu.log') + def test_installs_apt_packages_without_options(self, log, mock_call): + packages = ['foo', 'bar'] + + fetch.apt_install(packages) + + mock_call.assert_called_with( + ['apt-get', '--assume-yes', + '--option=Dpkg::Options::=--force-confold', + 'install', 'foo', 'bar'], + env={}) + + @patch('subprocess.call') + @patch('charmhelpers.fetch.ubuntu.log') + def test_installs_apt_packages_as_string(self, log, mock_call): + packages = 'foo bar' + options = ['--foo', '--bar'] + + fetch.apt_install(packages, options) + + mock_call.assert_called_with( + ['apt-get', '--assume-yes', + '--foo', '--bar', 'install', 'foo bar'], + env={}) + + @patch('subprocess.check_call') + @patch('charmhelpers.fetch.ubuntu.log') + def test_installs_apt_packages_with_possible_errors(self, log, + check_call): + packages = ['foo', 'bar'] + options = ['--foo', '--bar'] + + fetch.apt_install(packages, options, fatal=True) + + check_call.assert_called_with( + ['apt-get', '--assume-yes', + '--foo', '--bar', 'install', 'foo', 'bar'], + env={}) + + @patch('subprocess.check_call') + @patch('charmhelpers.fetch.ubuntu.log') + def test_purges_apt_packages_as_string_fatal(self, log, mock_call): + packages = 'irrelevant names' + mock_call.side_effect = OSError('fail') + + self.assertRaises(OSError, fetch.apt_purge, packages, fatal=True) + self.assertTrue(log.called) + + @patch('subprocess.check_call') + @patch('charmhelpers.fetch.ubuntu.log') + def test_purges_apt_packages_fatal(self, log, mock_call): + packages = ['irrelevant', 'names'] + mock_call.side_effect = OSError('fail') + + self.assertRaises(OSError, fetch.apt_purge, packages, fatal=True) + self.assertTrue(log.called) + + @patch('subprocess.call') + @patch('charmhelpers.fetch.ubuntu.log') + def test_purges_apt_packages_as_string_nofatal(self, log, mock_call): + packages = 'foo bar' + + fetch.apt_purge(packages) + + self.assertTrue(log.called) + mock_call.assert_called_with( + ['apt-get', '--assume-yes', 'purge', 'foo bar'], + env={}) + + @patch('subprocess.call') + @patch('charmhelpers.fetch.ubuntu.log') + def test_purges_apt_packages_nofatal(self, log, mock_call): + packages = ['foo', 'bar'] + + fetch.apt_purge(packages) + + self.assertTrue(log.called) + mock_call.assert_called_with( + ['apt-get', '--assume-yes', 'purge', 'foo', 'bar'], + env={}) + + @patch('subprocess.check_call') + @patch('charmhelpers.fetch.ubuntu.log') + def test_mark_apt_packages_as_string_fatal(self, log, mock_call): + packages = 'irrelevant names' + mock_call.side_effect = OSError('fail') + + self.assertRaises(OSError, fetch.apt_mark, packages, sentinel.mark, + fatal=True) + self.assertTrue(log.called) + + @patch('subprocess.check_call') + @patch('charmhelpers.fetch.ubuntu.log') + def test_mark_apt_packages_fatal(self, log, mock_call): + packages = ['irrelevant', 'names'] + mock_call.side_effect = OSError('fail') + + self.assertRaises(OSError, fetch.apt_mark, packages, sentinel.mark, + fatal=True) + self.assertTrue(log.called) + + @patch('subprocess.call') + @patch('charmhelpers.fetch.ubuntu.log') + def test_mark_apt_packages_as_string_nofatal(self, log, mock_call): + packages = 'foo bar' + + fetch.apt_mark(packages, 
sentinel.mark)
+
+        self.assertTrue(log.called)
+        mock_call.assert_called_with(
+            ['apt-mark', sentinel.mark, 'foo bar'],
+            universal_newlines=True)
+
+    @patch('subprocess.call')
+    @patch('charmhelpers.fetch.ubuntu.log')
+    def test_mark_apt_packages_nofatal(self, log, mock_call):
+        packages = ['foo', 'bar']
+
+        fetch.apt_mark(packages, sentinel.mark)
+
+        self.assertTrue(log.called)
+        mock_call.assert_called_with(
+            ['apt-mark', sentinel.mark, 'foo', 'bar'],
+            universal_newlines=True)
+
+    @patch('subprocess.check_call')
+    @patch('charmhelpers.fetch.ubuntu.log')
+    def test_mark_apt_packages_nofatal_abortonfatal(self, log, mock_call):
+        packages = ['foo', 'bar']
+
+        fetch.apt_mark(packages, sentinel.mark, fatal=True)
+
+        self.assertTrue(log.called)
+        mock_call.assert_called_with(
+            ['apt-mark', sentinel.mark, 'foo', 'bar'],
+            universal_newlines=True)
+
+    @patch('charmhelpers.fetch.ubuntu.apt_mark')
+    def test_apt_hold(self, apt_mark):
+        fetch.apt_hold(sentinel.packages)
+        apt_mark.assert_called_once_with(sentinel.packages, 'hold',
+                                         fatal=False)
+
+    @patch('charmhelpers.fetch.ubuntu.apt_mark')
+    def test_apt_hold_fatal(self, apt_mark):
+        fetch.apt_hold(sentinel.packages, fatal=sentinel.fatal)
+        apt_mark.assert_called_once_with(sentinel.packages, 'hold',
+                                         fatal=sentinel.fatal)
+
+    @patch('charmhelpers.fetch.ubuntu.apt_mark')
+    def test_apt_unhold(self, apt_mark):
+        fetch.apt_unhold(sentinel.packages)
+        apt_mark.assert_called_once_with(sentinel.packages, 'unhold',
+                                         fatal=False)
+
+    @patch('charmhelpers.fetch.ubuntu.apt_mark')
+    def test_apt_unhold_fatal(self, apt_mark):
+        fetch.apt_unhold(sentinel.packages, fatal=sentinel.fatal)
+        apt_mark.assert_called_once_with(sentinel.packages, 'unhold',
+                                         fatal=sentinel.fatal)
+
+    @patch('subprocess.check_call')
+    def test_apt_update_fatal(self, check_call):
+        fetch.apt_update(fatal=True)
+        check_call.assert_called_with(
+            ['apt-get', 'update'],
+            env={})
+
+    @patch('subprocess.call')
+    def test_apt_update_nonfatal(self, call):
+        fetch.apt_update()
+        call.assert_called_with(
+            ['apt-get', 'update'],
+            env={})
+
+    @patch('time.sleep')
+    @patch('subprocess.check_call')
+    def test_run_apt_command_retries_if_fatal(self, check_call, sleep):
+        """The _run_apt_command function retries the command if it can't get
+        the APT lock."""
+        self.called = False
+
+        def side_effect(*args, **kwargs):
+            """
+            First, raise an exception (can't acquire lock), then return 0
+            (the lock is grabbed).
+            """
+            if not self.called:
+                self.called = True
+                raise subprocess.CalledProcessError(
+                    returncode=100, cmd="some command")
+            else:
+                return 0
+
+        check_call.side_effect = side_effect
+        check_call.return_value = 0
+
+        from charmhelpers.fetch.ubuntu import _run_apt_command
+        _run_apt_command(["some", "command"], fatal=True)
+        self.assertTrue(sleep.called)
+
+    @patch.object(fetch, 'apt_cache')
+    def test_get_upstream_version(self, cache):
+        cache.side_effect = fake_apt_cache
+        self.assertEqual(fetch.get_upstream_version('vim'), '7.3.547')
+        self.assertEqual(fetch.get_upstream_version('emacs'), None)
+        self.assertEqual(fetch.get_upstream_version('unknown'), None)
+
+    @patch('charmhelpers.fetch.ubuntu._run_apt_command')
+    def test_apt_autoremove_fatal(self, run_apt_command):
+        fetch.apt_autoremove(purge=True, fatal=True)
+        run_apt_command.assert_called_with(
+            ['apt-get', '--assume-yes', 'autoremove', '--purge'],
+            True
+        )
+
+    @patch('charmhelpers.fetch.ubuntu._run_apt_command')
+    def test_apt_autoremove_nonfatal(self, run_apt_command):
+        fetch.apt_autoremove(purge=False, fatal=False)
+        run_apt_command.assert_called_with(
+            ['apt-get', '--assume-yes', 'autoremove'],
+            False
+        )
+
+
+class TestAptDpkgEnv(TestCase):
+
+    @patch.object(fetch, 'get_system_env')
+    def test_get_apt_dpkg_env(self, mock_get_system_env):
+        mock_get_system_env.return_value = '/a/path'
+        self.assertEquals(
+            fetch.get_apt_dpkg_env(),
+            {'DEBIAN_FRONTEND': 'noninteractive', 'PATH': '/a/path'})
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/fetch/test_fetch_ubuntu_apt_pkg.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/fetch/test_fetch_ubuntu_apt_pkg.py
new file mode 100644
index 0000000000000000000000000000000000000000..d0affc324568fc1290147b8b298af7a2b152a1df
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/fetch/test_fetch_ubuntu_apt_pkg.py
@@ -0,0 +1,216 @@
+import mock
+import subprocess
+import unittest
+
+from charmhelpers.fetch import ubuntu_apt_pkg as apt_pkg
+
+
+class Test_apt_pkg_Cache(unittest.TestCase):
+    """Borrow PatchHelper methods from ``charms.openstack``."""
+    def setUp(self):
+        self._patches = {}
+        self._patches_start = {}
+
+    def tearDown(self):
+        for k, v in self._patches.items():
+            v.stop()
+            setattr(self, k, None)
+        self._patches = None
+        self._patches_start = None
+
+    def patch(self, patchee, name=None, **kwargs):
+        """Patch a patchable thing.  Uses mock.patch() to do the work.
+        Automatically unpatches at the end of the test.
+
+        The mock gets added to the test object (self) using 'name' or the last
+        part of the patchee string, after the final dot.
+
+        :param patchee: <string> representing module.object that is to be
+            patched.
+        :param name: optional name to call the mock.
+        :param **kwargs: any other args to pass to mock.patch()
+        """
+        mocked = mock.patch(patchee, **kwargs)
+        if name is None:
+            name = patchee.split('.')[-1]
+        started = mocked.start()
+        self._patches[name] = mocked
+        self._patches_start[name] = started
+        setattr(self, name, started)
+
+    def patch_object(self, obj, attr, name=None, **kwargs):
+        """Patch a patchable thing.  Uses mock.patch.object() to do the work.
+        Automatically unpatches at the end of the test.
+
+        The mock gets added to the test object (self) using 'name' or the attr
+        passed in the arguments.
+
+        :param obj: an object that needs to have an attribute patched.
+        :param attr: <string> that represents the attribute being patched.
+ :param name: optional name to call the mock. + :param **kwargs: any other args to pass to mock.patch() + """ + mocked = mock.patch.object(obj, attr, **kwargs) + if name is None: + name = attr + started = mocked.start() + self._patches[name] = mocked + self._patches_start[name] = started + setattr(self, name, started) + + def test_apt_cache_show(self): + self.patch_object(apt_pkg.subprocess, 'check_output') + apt_cache = apt_pkg.Cache() + self.check_output.return_value = ( + 'Package: dpkg\n' + 'Version: 1.19.0.5ubuntu2.1\n' + 'Bugs: https://bugs.launchpad.net/ubuntu/+filebug\n' + 'Description-en: Debian package management system\n' + ' Multiline description\n' + '\n' + 'Package: lsof\n' + 'Architecture: amd64\n' + 'Version: 4.91+dfsg-1ubuntu1\n' + '\n' + 'N: There is 1 additional record.\n') + self.assertEquals( + apt_cache._apt_cache_show(['package']), + {'dpkg': { + 'package': 'dpkg', 'version': '1.19.0.5ubuntu2.1', + 'bugs': 'https://bugs.launchpad.net/ubuntu/+filebug', + 'description-en': 'Debian package management system\n' + 'Multiline description'}, + 'lsof': { + 'package': 'lsof', 'architecture': 'amd64', + 'version': '4.91+dfsg-1ubuntu1'}, + }) + self.check_output.assert_called_once_with( + ['apt-cache', 'show', '--no-all-versions', 'package'], + stderr=subprocess.STDOUT, + universal_newlines=True) + + def test_dpkg_list(self): + self.patch_object(apt_pkg.subprocess, 'check_output') + apt_cache = apt_pkg.Cache() + self.check_output.return_value = ( + 'Desired=Unknown/Install/Remove/Purge/Hold\n' + '| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/' + 'trig-aWait/Trig-pend\n' + '|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)\n' + '||/ Name Version Architecture Description\n' + '+++-=============================-==================-===========-' + '=================================\n' + 'ii dpkg 1.19.0.5ubuntu2.1 amd64 ' + 'Debian package management system\n' + 'rc linux-image-4.15.0-42-generic 4.15.0-42.45 amd64 ' + 'Signed kernel image generic\n' + 'ii lsof 4.91+dfsg-1ubuntu1 amd64 ' + 'utility to list open files\n') + expect = { + 'dpkg': { + 'name': 'dpkg', + 'version': '1.19.0.5ubuntu2.1', + 'architecture': 'amd64', + 'description': 'Debian package management system' + }, + 'lsof': { + 'name': 'lsof', + 'version': '4.91+dfsg-1ubuntu1', + 'architecture': 'amd64', + 'description': 'utility to list open files' + }, + } + self.assertEquals( + apt_cache._dpkg_list(['package']), expect) + self.check_output.side_effect = subprocess.CalledProcessError( + 1, '', output=self.check_output.return_value) + self.assertEquals(apt_cache._dpkg_list(['package']), expect) + self.check_output.side_effect = subprocess.CalledProcessError(2, '') + with self.assertRaises(subprocess.CalledProcessError): + _ = apt_cache._dpkg_list(['package']) + + def test_version_compare(self): + self.patch_object(apt_pkg.subprocess, 'check_call') + self.assertEquals(apt_pkg.version_compare('2', '1'), 1) + self.check_call.assert_called_once_with( + ['dpkg', '--compare-versions', '2', 'gt', '1'], + stderr=subprocess.STDOUT, + universal_newlines=True) + self.check_call.side_effect = [ + subprocess.CalledProcessError(1, '', ''), + None, + None, + ] + self.assertEquals(apt_pkg.version_compare('2', '2'), 0) + self.check_call.side_effect = [ + subprocess.CalledProcessError(1, '', ''), + subprocess.CalledProcessError(1, '', ''), + None, + ] + self.assertEquals(apt_pkg.version_compare('1', '2'), -1) + self.check_call.side_effect = subprocess.CalledProcessError(2, '', '') + 
self.assertRaises(subprocess.CalledProcessError, + apt_pkg.version_compare, '2', '2') + + def test_apt_cache(self): + self.patch_object(apt_pkg.subprocess, 'check_output') + apt_cache = apt_pkg.Cache() + self.check_output.side_effect = [ + ('Package: dpkg\n' + 'Version: 1.19.0.6ubuntu0\n' + 'Bugs: https://bugs.launchpad.net/ubuntu/+filebug\n' + 'Description-en: Debian package management system\n' + ' Multiline description\n' + '\n' + 'Package: lsof\n' + 'Architecture: amd64\n' + 'Version: 4.91+dfsg-1ubuntu1\n' + '\n' + 'N: There is 1 additional record.\n'), + ('Desired=Unknown/Install/Remove/Purge/Hold\n' + '| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/' + 'trig-aWait/Trig-pend\n' + '|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)\n' + '||/ Name Version Architecture Description\n' + '+++-=============================-==================-===========-' + '=================================\n' + 'ii dpkg 1.19.0.5ubuntu2.1 amd64 ' + 'Debian package management system\n' + 'rc linux-image-4.15.0-42-generic 4.15.0-42.45 amd64 ' + 'Signed kernel image generic\n' + 'ii lsof 4.91+dfsg-1ubuntu1 amd64 ' + 'utility to list open files\n'), + ] + pkg = apt_cache['dpkg'] + self.assertEquals(pkg.name, 'dpkg') + self.assertEquals(pkg.current_ver.ver_str, '1.19.0.5ubuntu2.1') + self.assertEquals(pkg.architecture, 'amd64') + self.check_output.side_effect = [ + subprocess.CalledProcessError(100, ''), + subprocess.CalledProcessError(1, ''), + ] + with self.assertRaises(KeyError): + pkg = apt_cache['nonexistent'] + self.check_output.side_effect = [ + ('Package: dpkg\n' + 'Version: 1.19.0.6ubuntu0\n' + 'Bugs: https://bugs.launchpad.net/ubuntu/+filebug\n' + 'Description-en: Debian package management system\n' + ' Multiline description\n' + '\n' + 'Package: lsof\n' + 'Architecture: amd64\n' + 'Version: 4.91+dfsg-1ubuntu1\n' + '\n' + 'N: There is 1 additional record.\n'), + subprocess.CalledProcessError(42, ''), + ] + with self.assertRaises(subprocess.CalledProcessError): + # System error occurs while making dpkg inquiry + pkg = apt_cache['dpkg'] + self.check_output.side_effect = [ + subprocess.CalledProcessError(42, ''), + subprocess.CalledProcessError(1, ''), + ] + with self.assertRaises(subprocess.CalledProcessError): + pkg = apt_cache['system-error-occurs-while-making-apt-inquiry'] diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/fetch/test_giturl.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/fetch/test_giturl.py new file mode 100644 index 0000000000000000000000000000000000000000..e657027fab59ed7054275c7910686db7c0d6e671 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/fetch/test_giturl.py @@ -0,0 +1,115 @@ +import os +import shutil +import subprocess +import tempfile +from testtools import TestCase +from mock import ( + MagicMock, + patch, +) +from charmhelpers.core.host import chdir + +import six +if six.PY3: + from urllib.parse import urlparse +else: + from urlparse import urlparse + + +try: + from charmhelpers.fetch import ( + giturl, + UnhandledSource, + ) +except ImportError: + giturl = None + UnhandledSource = None + + +class GitUrlFetchHandlerTest(TestCase): + def setUp(self): + super(GitUrlFetchHandlerTest, self).setUp() + self.valid_urls = ( + "http://example.com/git-branch", + "https://example.com/git-branch", + "git://example.com/git-branch", + ) + self.invalid_urls = ( + "file://example.com/foo.tar.bz2", + "abc:example", + "garbage", + ) + self.fh = 
giturl.GitUrlFetchHandler() + + def test_handles_git_urls(self): + for url in self.valid_urls: + result = self.fh.can_handle(url) + self.assertEqual(result, True, url) + for url in self.invalid_urls: + result = self.fh.can_handle(url) + self.assertNotEqual(result, True, url) + + @patch.object(giturl, 'check_output') + def test_clone(self, check_output): + dest_path = "/destination/path" + branch = "master" + for url in self.valid_urls: + self.fh.remote_branch = MagicMock() + self.fh.load_plugins = MagicMock() + self.fh.clone(url, dest_path, branch, None) + + check_output.assert_called_with( + ['git', 'clone', url, dest_path, '--branch', branch], stderr=-2) + + for url in self.invalid_urls: + with patch.dict('os.environ', {'CHARM_DIR': 'foo'}): + self.assertRaises(UnhandledSource, self.fh.clone, url, + dest_path, None, + branch) + + def test_clone_functional(self): + src = None + dst = None + try: + src = tempfile.mkdtemp() + with chdir(src): + subprocess.check_output(['git', 'init']) + subprocess.check_output(['git', 'config', 'user.name', 'Joe']) + subprocess.check_output( + ['git', 'config', 'user.email', 'joe@test.com']) + subprocess.check_output(['touch', 'foo']) + subprocess.check_output(['git', 'add', 'foo']) + subprocess.check_output(['git', 'commit', '-m', 'test']) + dst = tempfile.mkdtemp() + os.rmdir(dst) + self.fh.clone(src, dst) + assert os.path.exists(os.path.join(dst, '.git')) + self.fh.clone(src, dst) # idempotent + assert os.path.exists(os.path.join(dst, '.git')) + finally: + if src: + shutil.rmtree(src, ignore_errors=True) + if dst: + shutil.rmtree(dst, ignore_errors=True) + + def test_installs(self): + self.fh.clone = MagicMock() + + for url in self.valid_urls: + branch_name = urlparse(url).path.strip("/").split("/")[-1] + dest = os.path.join('foo', 'fetched', + os.path.basename(branch_name)) + with patch.dict('os.environ', {'CHARM_DIR': 'foo'}): + where = self.fh.install(url) + self.assertEqual(where, dest) + + def test_installs_specified_dest(self): + self.fh.clone = MagicMock() + + for url in self.valid_urls: + branch_name = urlparse(url).path.strip("/").split("/")[-1] + dest_repo = os.path.join('/tmp/git/', + os.path.basename(branch_name)) + with patch.dict('os.environ', {'CHARM_DIR': 'foo'}): + where = self.fh.install(url, dest="/tmp/git") + self.assertEqual(where, dest_repo) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/fetch/test_snap.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/fetch/test_snap.py new file mode 100644 index 0000000000000000000000000000000000000000..02e6a70c0b2a31ac48bbd81f43d708dd0ec5d466 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/fetch/test_snap.py @@ -0,0 +1,79 @@ +# Copyright 2014-2017 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+from mock import patch
+from unittest import TestCase
+import charmhelpers.fetch.snap as fetch_snap
+
+__author__ = 'Joseph Borg <joseph.borg@canonical.com>'
+
+TEST_ENV = {'foo': 'bar'}
+
+
+class SnapTest(TestCase):
+    """
+    Test installation and removal of a snap.
+    """
+    @patch.object(fetch_snap, 'log', lambda *args, **kwargs: None)
+    @patch('subprocess.check_call')
+    @patch('os.environ', TEST_ENV)
+    def testSnapInstall(self, check_call):
+        """
+        Test snap install.
+
+        :param check_call: Mock object
+        :return: None
+        """
+        check_call.return_value = 0
+        fetch_snap.snap_install(['hello-world', 'htop'], '--classic', '--stable')
+        check_call.assert_called_with(['snap', 'install', '--classic', '--stable', 'hello-world', 'htop'], env=TEST_ENV)
+
+    @patch.object(fetch_snap, 'log', lambda *args, **kwargs: None)
+    @patch('subprocess.check_call')
+    @patch('os.environ', TEST_ENV)
+    def testSnapRefresh(self, check_call):
+        """
+        Test snap refresh.
+
+        :param check_call: Mock object
+        :return: None
+        """
+        check_call.return_value = 0
+        fetch_snap.snap_refresh(['hello-world', 'htop'], '--classic', '--stable')
+        check_call.assert_called_with(['snap', 'refresh', '--classic', '--stable', 'hello-world', 'htop'], env=TEST_ENV)
+
+    @patch.object(fetch_snap, 'log', lambda *args, **kwargs: None)
+    @patch('subprocess.check_call')
+    @patch('os.environ', TEST_ENV)
+    def testSnapRemove(self, check_call):
+        """
+        Test snap remove.
+
+        :param check_call: Mock object
+        :return: None
+        """
+        check_call.return_value = 0
+        fetch_snap.snap_remove(['hello-world', 'htop'])
+        check_call.assert_called_with(['snap', 'remove', 'hello-world', 'htop'], env=TEST_ENV)
+
+    def test_valid_snap_channel(self):
+        """ Test valid snap channel
+
+        :return: None
+        """
+        # Valid
+        self.assertTrue(fetch_snap.valid_snap_channel('edge'))
+
+        # Invalid
+        with self.assertRaises(fetch_snap.InvalidSnapChannel):
+            fetch_snap.valid_snap_channel('DOESNOTEXIST')
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/helpers.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/helpers.py
new file mode 100644
index 0000000000000000000000000000000000000000..10581268bfd7d956644fe0df1669f974c4f59c11
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/helpers.py
@@ -0,0 +1,144 @@
+''' General helper functions for tests '''
+from contextlib import contextmanager
+from mock import patch, MagicMock
+import io
+
+import six
+if not six.PY3:
+    builtin_open = '__builtin__.open'
+else:
+    builtin_open = 'builtins.open'
+
+
+@contextmanager
+def patch_open():
+    '''Patch open() to allow mocking both open() itself and the file that is
+    yielded.
+
+    Yields the mock for "open" and "file", respectively.'''
+    mock_open = MagicMock(spec=open)
+    mock_file = MagicMock(spec=io.FileIO)
+
+    @contextmanager
+    def stub_open(*args, **kwargs):
+        mock_open(*args, **kwargs)
+        yield mock_file
+
+    with patch(builtin_open, stub_open):
+        yield mock_open, mock_file
+
+
+@contextmanager
+def mock_open(filename, contents=None):
+    ''' Slightly simpler mock of open to return contents for filename '''
+    def mock_file(name, mode='r', buffering=-1):  # Python 2 signature.
+        if name == filename:
+            if (not six.PY3) or 'b' in mode:
+                return io.BytesIO(contents)
+            return io.StringIO(contents)
+        else:
+            return open(name, mode, buffering)
+
+    with patch(builtin_open, mock_file):
+        yield
+
+
+class FakeRelation(object):
+    '''
+    A fake relation class. Lets tests specify simple relation data
+    for a default relation + unit (foo:0, foo/0, set in setUp()), eg:
+
+        rel = {
+            'private-address': 'foo',
+            'password': 'passwd',
+        }
+        relation = FakeRelation(rel)
+        self.relation_get.side_effect = relation.get
+        passwd = self.relation_get('password')
+
+    or more complex relations meant to be addressed by explicit relation id + unit id combos:
+
+        rel = {
+            'mysql:0': {
+                'mysql/0': {
+                    'private-address': 'foo',
+                    'password': 'passwd',
+                }
+            }
+        }
+        relation = FakeRelation(rel)
+        self.relation_get.side_effect = relation.get
+        passwd = self.relation_get('password', rid='mysql:0', unit='mysql/0')
+
+    set_relation_context can be used to simulate being in a relation hook
+    context. This allows omitting a relation id or unit when calling relation
+    helpers as the related unit is present.
+
+    To set the context:
+
+        relation = FakeRelation(rel)
+        relation.set_relation_context('mysql-svc2/0', 'shared-db:12')
+
+    To clear it:
+
+        relation.clear_relation_context()
+    '''
+    def __init__(self, relation_data):
+        self.relation_data = relation_data
+        self.remote_unit = None
+        self.current_relation_id = None
+
+    def set_relation_context(self, remote_unit, relation_id):
+        self.remote_unit = remote_unit
+        self.current_relation_id = relation_id
+
+    def clear_relation_context(self):
+        self.remote_unit = None
+        self.current_relation_id = None
+
+    def get(self, attribute=None, unit=None, rid=None):
+        if not rid or rid == 'foo:0':
+            if self.current_relation_id:
+                if not unit:
+                    unit = self.remote_unit
+                udata = self.relation_data[self.current_relation_id][unit]
+                if attribute:
+                    return udata[attribute]
+                else:
+                    return udata
+            if attribute is None:
+                return self.relation_data
+            elif attribute in self.relation_data:
+                return self.relation_data[attribute]
+            return None
+        else:
+            if rid not in self.relation_data:
+                return None
+            try:
+                relation = self.relation_data[rid][unit]
+            except KeyError:
+                return None
+            if attribute and attribute in relation:
+                return relation[attribute]
+            return relation
+
+    def relation_id(self):
+        return self.current_relation_id
+
+    def relation_ids(self, reltype=None):
+        rids = self.relation_data.keys()
+        if reltype:
+            return [r for r in rids if r.split(':')[0] == reltype]
+        return rids
+
+    def related_units(self, relid=None):
+        try:
+            return self.relation_data[relid].keys()
+        except KeyError:
+            return []
+
+    def relation_units(self, relation_id):
+        if relation_id not in self.relation_data:
+            return None
+        return self.relation_data[relation_id].keys()
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/payload/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/payload/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/payload/test_archive.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/payload/test_archive.py
new file mode 100644
index 0000000000000000000000000000000000000000..f636917c9d15e3529962af5f02863618bc0b6b9c
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/payload/test_archive.py
@@ -0,0 +1,137 @@
+import os
+from testtools import TestCase
+from mock import (
+    patch,
+    MagicMock,
+)
+from charmhelpers.payload import archive
+from tempfile import mkdtemp
+from shutil import rmtree
+import subprocess
+
+
+class ArchiveTestCase(TestCase):
+
+    def create_archive(self, format):
+        workdir = mkdtemp()
+        if format == "tar":
+            workfile = "{}/foo.tar.gz".format(workdir)
+            cmd = "tar czf {} hosts".format(workfile)
+        elif format == "zip":
+            workfile = "{}/foo.zip".format(workdir)
+            cmd = "zip {} hosts".format(workfile)
+        curdir = os.getcwd()
+        os.chdir("/etc")
+        subprocess.check_output(cmd, shell=True)
+        os.chdir(curdir)
+        self.addCleanup(rmtree, workdir)
+        return (workfile, ["hosts"])
+
+    @patch('os.path.isfile')
+    def test_gets_archive_handler_by_ext(self, _isfile):
+        tar_archive_handler = archive.extract_tarfile
+        zip_archive_handler = archive.extract_zipfile
+        _isfile.return_value = False
+
+        for ext in ('tar', 'tar.gz', 'tgz', 'tar.bz2', 'tbz2', 'tbz'):
+            handler = archive.get_archive_handler("somefile.{}".format(ext))
+            msg = "handler for extension: {}".format(ext)
+            self.assertEqual(handler, tar_archive_handler, msg)
+
+        for ext in ('zip', 'jar'):
+            handler = archive.get_archive_handler("somefile.{}".format(ext))
+            msg = "handler for extension {}".format(ext)
+            self.assertEqual(handler, zip_archive_handler, msg)
+
+    @patch('zipfile.is_zipfile')
+    @patch('tarfile.is_tarfile')
+    @patch('os.path.isfile')
+    def test_gets_archive_handler_by_filetype(self, _isfile, _istarfile,
+                                              _iszipfile):
+        tar_archive_handler = archive.extract_tarfile
+        zip_archive_handler = archive.extract_zipfile
+        _isfile.return_value = True
+
+        _istarfile.return_value = True
+        _iszipfile.return_value = False
+        handler = archive.get_archive_handler("foo")
+        self.assertEqual(handler, tar_archive_handler)
+
+        _istarfile.return_value = False
+        _iszipfile.return_value = True
+        handler = archive.get_archive_handler("foo")
+        self.assertEqual(handler, zip_archive_handler)
+
+    @patch('charmhelpers.core.hookenv.charm_dir')
+    def test_gets_archive_dest_default(self, _charmdir):
+        _charmdir.return_value = "foo"
+        thedir = archive.archive_dest_default("baz")
+        self.assertEqual(thedir, os.path.join("foo", "archives", "baz"))
+
+        thedir = archive.archive_dest_default("baz/qux")
+        self.assertEqual(thedir, os.path.join("foo", "archives", "qux"))
+
+    def test_extracts_tarfile(self):
+        destdir = mkdtemp()
+        self.addCleanup(rmtree, destdir)
+        tar_file, contents = self.create_archive("tar")
+        archive.extract_tarfile(tar_file, destdir)
+        for path in [os.path.join(destdir, item) for item in contents]:
+            self.assertTrue(os.path.exists(path))
+
+    def test_extracts_zipfile(self):
+        destdir = mkdtemp()
+        self.addCleanup(rmtree, destdir)
+        try:
+            zip_file, contents = self.create_archive("zip")
+        except subprocess.CalledProcessError as e:
+            if e.returncode == 127:
+                self.skip("Skipping - zip is not installed")
+            else:
+                raise
+        archive.extract_zipfile(zip_file, destdir)
+        for path in [os.path.join(destdir, item) for item in contents]:
+            self.assertTrue(os.path.exists(path))
+
+    @patch('charmhelpers.core.host.mkdir')
+    @patch('charmhelpers.payload.archive.get_archive_handler')
+    @patch('charmhelpers.payload.archive.archive_dest_default')
+    def test_extracts(self, _defdest, _gethandler, _mkdir):
+        archive_name = "foo"
+        archive_handler = MagicMock()
+        _gethandler.return_value = archive_handler
+
+        dest = archive.extract(archive_name, "bar")
+
+        _gethandler.assert_called_with(archive_name)
+        archive_handler.assert_called_with(archive_name, "bar")
+        _defdest.assert_not_called()
+        _mkdir.assert_called_with("bar")
+        self.assertEqual(dest, "bar")
+
+    @patch('charmhelpers.core.host.mkdir')
+    @patch('charmhelpers.payload.archive.get_archive_handler')
+    def test_unhandled_extract_raises_exc(self,
_gethandler, _mkdir): + archive_name = "foo" + _gethandler.return_value = None + + self.assertRaises(archive.ArchiveError, archive.extract, + archive_name) + + _gethandler.assert_called_with(archive_name) + _mkdir.assert_not_called() + + @patch('charmhelpers.core.host.mkdir') + @patch('charmhelpers.payload.archive.get_archive_handler') + @patch('charmhelpers.payload.archive.archive_dest_default') + def test_extracts_default_dest(self, _defdest, _gethandler, _mkdir): + expected_dest = "bar" + archive_name = "foo" + _defdest.return_value = expected_dest + handler = MagicMock() + handler.return_value = expected_dest + _gethandler.return_value = handler + + dest = archive.extract(archive_name) + self.assertEqual(expected_dest, dest) + handler.assert_called_with(archive_name, expected_dest) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/payload/test_execd.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/payload/test_execd.py new file mode 100644 index 0000000000000000000000000000000000000000..20d83c5891376ff2547ab828f2757f585be25aca --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/payload/test_execd.py @@ -0,0 +1,153 @@ +from testtools import TestCase +from mock import patch +import os +import shutil +import stat + +from tempfile import mkdtemp + +from charmhelpers.payload import execd + + +class ExecDTestCase(TestCase): + + def setUp(self): + super(ExecDTestCase, self).setUp() + charm_dir = mkdtemp() + self.addCleanup(shutil.rmtree, charm_dir) + self.test_charm_dir = charm_dir + + env_patcher = patch.dict('os.environ', + {'CHARM_DIR': self.test_charm_dir}) + env_patcher.start() + self.addCleanup(env_patcher.stop) + + def test_default_execd_dir(self): + expected = os.path.join(self.test_charm_dir, 'exec.d') + default_dir = execd.default_execd_dir() + + self.assertEqual(expected, default_dir) + + def make_preinstall_executable(self, module_dir, execd_dir='exec.d', + error_on_preinstall=False): + """Add a charm-pre-install to module dir. + + When executed, the charm-pre-install will create a second + file in the same directory, charm-pre-install-success. + """ + module_path = os.path.join(self.test_charm_dir, execd_dir, module_dir) + os.makedirs(module_path) + + charm_pre_install_path = os.path.join(module_path, + 'charm-pre-install') + pre_install_success_path = os.path.join(module_path, + 'charm-pre-install-success') + with open(charm_pre_install_path, 'w+') as f: + if not error_on_preinstall: + f.write("#!/bin/bash\n" + "/usr/bin/touch {}".format(pre_install_success_path)) + else: + f.write("#!/bin/bash\n" + "echo stdout_from_pre_install\n" + "echo stderr_from_pre_install >&2\n" + "exit 1") + + # ensure it is executable. + perms = stat.S_IRUSR + stat.S_IXUSR + os.chmod(charm_pre_install_path, perms) + + def assert_preinstall_called_for_mod(self, module_dir, + execd_dir='exec.d'): + """Asserts that the charm-pre-install-success file exists.""" + expected_file = os.path.join(self.test_charm_dir, execd_dir, + module_dir, 'charm-pre-install-success') + files = os.listdir(os.path.dirname(expected_file)) + self.assertTrue(os.path.exists(expected_file), "files were: %s. 
charmdir is: %s" % (files, self.test_charm_dir)) + + def test_execd_preinstall(self): + """All charm-pre-install hooks are executed.""" + self.make_preinstall_executable(module_dir='basenode') + self.make_preinstall_executable(module_dir='mod2') + + execd.execd_preinstall() + + self.assert_preinstall_called_for_mod('basenode') + self.assert_preinstall_called_for_mod('mod2') + + def test_execd_module_list_from_env(self): + modules = ['basenode', 'mod2', 'c'] + for module in modules: + self.make_preinstall_executable(module_dir=module) + + actual_mod_paths = list(execd.execd_module_paths()) + + expected_mod_paths = [ + os.path.join(self.test_charm_dir, 'exec.d', module) + for module in modules] + self.assertSetEqual(set(actual_mod_paths), set(expected_mod_paths)) + + def test_execd_module_list_with_dir(self): + modules = ['basenode', 'mod2', 'c'] + for module in modules: + self.make_preinstall_executable(module_dir=module, + execd_dir='foo') + + actual_mod_paths = list(execd.execd_module_paths( + execd_dir=os.path.join(self.test_charm_dir, 'foo'))) + + expected_mod_paths = [ + os.path.join(self.test_charm_dir, 'foo', module) + for module in modules] + self.assertSetEqual(set(actual_mod_paths), set(expected_mod_paths)) + + def test_execd_module_paths_no_execd_dir(self): + """Empty list is returned when the exec.d doesn't exist.""" + actual_mod_paths = list(execd.execd_module_paths()) + + self.assertEqual(actual_mod_paths, []) + + def test_execd_submodule_list(self): + modules = ['basenode', 'mod2', 'c'] + for module in modules: + self.make_preinstall_executable(module_dir=module) + + submodules = list(execd.execd_submodule_paths('charm-pre-install')) + + expected = [os.path.join(self.test_charm_dir, 'exec.d', mod, + 'charm-pre-install') for mod in modules] + self.assertEqual(sorted(submodules), sorted(expected)) + + def test_execd_run(self): + modules = ['basenode', 'mod2', 'c'] + for module in modules: + self.make_preinstall_executable(module_dir=module) + + execd.execd_run('charm-pre-install') + + self.assert_preinstall_called_for_mod('basenode') + self.assert_preinstall_called_for_mod('mod2') + self.assert_preinstall_called_for_mod('c') + + @patch('charmhelpers.core.hookenv.log') + def test_execd_run_logs_exception(self, log_): + self.make_preinstall_executable(module_dir='basenode', + error_on_preinstall=True) + + execd.execd_run('charm-pre-install', die_on_error=False) + + expected_log = ('Error (1) running {}/exec.d/basenode/' + 'charm-pre-install. 
Output: ' + 'stdout_from_pre_install\n' + 'stderr_from_pre_install\n'.format(self.test_charm_dir)) + log_.assert_called_with(expected_log) + + @patch('charmhelpers.core.hookenv.log') + @patch('sys.exit') + def test_execd_run_dies_with_return_code(self, exit_, log): + self.make_preinstall_executable(module_dir='basenode', + error_on_preinstall=True) + + with open(os.devnull, 'wb') as devnull: + execd.execd_run('charm-pre-install', stderr=devnull) + + exit_.assert_called_with(1) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/tools/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/tools/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/tools/test_charm_helper_sync.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/tools/test_charm_helper_sync.py new file mode 100644 index 0000000000000000000000000000000000000000..7c66b9ba80700c1b042948cdfd12b2982ccbe9dc --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/tools/test_charm_helper_sync.py @@ -0,0 +1,304 @@ +import unittest +from mock import call, patch +import yaml + +import tools.charm_helpers_sync.charm_helpers_sync as sync + +import six +if not six.PY3: + builtin_open = '__builtin__.open' +else: + builtin_open = 'builtins.open' + + +INCLUDE = """ +include: + - core + - contrib.openstack + - contrib.storage + - contrib.hahelpers: + - utils + - ceph_utils + - cluster_utils + - haproxy_utils +""" + + +class HelperSyncTests(unittest.TestCase): + def test_clone_helpers(self): + '''It properly branches the correct helpers branch''' + with patch('subprocess.check_call') as check_call: + sync.clone_helpers(work_dir='/tmp/foo', repo='git:charm-helpers') + check_call.assert_called_with(['git', + 'clone', '--depth=1', + 'git:charm-helpers', + '/tmp/foo/charm-helpers']) + + def test_module_path(self): + '''It converts a python module path to a filesystem path''' + self.assertEquals(sync._module_path('some.test.module'), + 'some/test/module') + + def test_src_path(self): + '''It renders the correct path to module within charm-helpers tree''' + path = sync._src_path(src='/tmp/charm-helpers', + module='contrib.openstack') + self.assertEquals('/tmp/charm-helpers/charmhelpers/contrib/openstack', + path) + + def test_dest_path(self): + '''It correctly finds the correct install path within a charm''' + path = sync._dest_path(dest='/tmp/mycharm/hooks/charmhelpers', + module='contrib.openstack') + self.assertEquals('/tmp/mycharm/hooks/charmhelpers/contrib/openstack', + path) + + @patch(builtin_open) + @patch('os.path.exists') + @patch('os.walk') + def test_ensure_init(self, walk, exists, _open): + '''It ensures all subdirectories of a parent are python importable''' + # os walk + # os.path.join + # os.path.exists + # open + def _walk(path): + yield ('/tmp/hooks/', ['helpers'], []) + yield ('/tmp/hooks/helpers', ['foo'], []) + yield ('/tmp/hooks/helpers/foo', [], []) + walk.side_effect = _walk + exists.return_value = False + sync.ensure_init('hooks/helpers/foo/') + ex = [call('/tmp/hooks/__init__.py', 'wb'), + call('/tmp/hooks/helpers/__init__.py', 'wb'), + call('/tmp/hooks/helpers/foo/__init__.py', 'wb')] + for c in ex: + self.assertIn(c, _open.call_args_list) + + @patch('tools.charm_helpers_sync.charm_helpers_sync.ensure_init') + @patch('os.path.isfile') + 
@patch('shutil.copy') + @patch('os.makedirs') + @patch('os.path.exists') + def test_sync_pyfile(self, exists, mkdirs, copy, isfile, ensure_init): + '''It correctly syncs a py src file from src to dest''' + exists.return_value = False + isfile.return_value = True + sync.sync_pyfile('/tmp/charm-helpers/core/host', + 'hooks/charmhelpers/core') + mkdirs.assert_called_with('hooks/charmhelpers/core') + copy_f = call('/tmp/charm-helpers/core/host.py', + 'hooks/charmhelpers/core') + copy_i = call('/tmp/charm-helpers/core/__init__.py', + 'hooks/charmhelpers/core') + self.assertIn(copy_f, copy.call_args_list) + self.assertIn(copy_i, copy.call_args_list) + ensure_init.assert_called_with('hooks/charmhelpers/core') + + def _test_filter_dir(self, opts, isfile, isdir): + '''It filters non-python files and non-module dirs from source''' + files = { + 'bad_file.bin': 'f', + 'some_dir': 'd', + 'good_helper.py': 'f', + 'good_helper2.py': 'f', + 'good_helper3.py': 'f', + 'bad_file.img': 'f', + } + + def _isfile(f): + try: + return files[f.split('/').pop()] == 'f' + except KeyError: + return False + + def _isdir(f): + try: + return files[f.split('/').pop()] == 'd' + except KeyError: + return False + + isfile.side_effect = _isfile + isdir.side_effect = _isdir + result = sync.get_filter(opts)(dir='/tmp/charm-helpers/core', + ls=six.iterkeys(files)) + return result + + @patch('os.path.isdir') + @patch('os.path.isfile') + def test_filter_dir_no_opts(self, isfile, isdir): + '''It filters out all non-py files by default''' + result = self._test_filter_dir(opts=None, isfile=isfile, isdir=isdir) + ex = ['bad_file.bin', 'bad_file.img', 'some_dir'] + self.assertEquals(sorted(ex), sorted(result)) + + @patch('os.path.isdir') + @patch('os.path.isfile') + def test_filter_dir_with_include(self, isfile, isdir): + '''It includes non-py files if specified as an include opt''' + result = sorted(self._test_filter_dir(opts=['inc=*.img'], + isfile=isfile, isdir=isdir)) + ex = sorted(['bad_file.bin', 'some_dir']) + self.assertEquals(ex, result) + + @patch('os.path.isdir') + @patch('os.path.isfile') + def test_filter_dir_include_all(self, isfile, isdir): + '''It does not filter anything if option specified to include all''' + self.assertEquals(sync.get_filter(opts=['inc=*']), None) + + @patch('tools.charm_helpers_sync.charm_helpers_sync.get_filter') + @patch('tools.charm_helpers_sync.charm_helpers_sync.ensure_init') + @patch('shutil.copytree') + @patch('shutil.rmtree') + @patch('os.path.exists') + def test_sync_directory(self, exists, rmtree, copytree, ensure_init, + _filter): + '''It correctly syncs src directory to dest directory''' + _filter.return_value = None + sync.sync_directory('/tmp/charm-helpers/charmhelpers/core', + 'hooks/charmhelpers/core') + exists.return_value = True + rmtree.assert_called_with('hooks/charmhelpers/core') + copytree.assert_called_with('/tmp/charm-helpers/charmhelpers/core', + 'hooks/charmhelpers/core', ignore=None) + ensure_init.assert_called_with('hooks/charmhelpers/core') + + @patch('os.path.isfile') + def test_is_pyfile(self, isfile): + '''It correctly identifies incomplete path to a py src file as such''' + sync._is_pyfile('/tmp/charm-helpers/charmhelpers/core/host') + isfile.assert_called_with( + '/tmp/charm-helpers/charmhelpers/core/host.py' + ) + + @patch('tools.charm_helpers_sync.charm_helpers_sync.sync_pyfile') + @patch('tools.charm_helpers_sync.charm_helpers_sync.sync_directory') + @patch('os.path.isdir') + def test_syncs_directory(self, is_dir, sync_dir, sync_pyfile): + '''It correctly syncs 
a module directory''' + is_dir.return_value = True + sync.sync(src='/tmp/charm-helpers', + dest='hooks/charmhelpers', + module='contrib.openstack') + + sync_dir.assert_called_with( + '/tmp/charm-helpers/charmhelpers/contrib/openstack', + 'hooks/charmhelpers/contrib/openstack', None) + + # __init__.py files leading to the directory were also synced. + sync_pyfile.assert_has_calls([ + call('/tmp/charm-helpers/charmhelpers/__init__', + 'hooks/charmhelpers'), + call('/tmp/charm-helpers/charmhelpers/contrib/__init__', + 'hooks/charmhelpers/contrib')]) + + @patch('tools.charm_helpers_sync.charm_helpers_sync.sync_pyfile') + @patch('tools.charm_helpers_sync.charm_helpers_sync._is_pyfile') + @patch('os.path.isdir') + def test_syncs_file(self, is_dir, is_pyfile, sync_pyfile): + '''It correctly syncs a module file''' + is_dir.return_value = False + is_pyfile.return_value = True + sync.sync(src='/tmp/charm-helpers', + dest='hooks/charmhelpers', + module='contrib.openstack.utils') + sync_pyfile.assert_has_calls([ + call('/tmp/charm-helpers/charmhelpers/__init__', + 'hooks/charmhelpers'), + call('/tmp/charm-helpers/charmhelpers/contrib/__init__', + 'hooks/charmhelpers/contrib'), + call('/tmp/charm-helpers/charmhelpers/contrib/openstack/__init__', + 'hooks/charmhelpers/contrib/openstack'), + call('/tmp/charm-helpers/charmhelpers/contrib/openstack/utils', + 'hooks/charmhelpers/contrib/openstack')]) + + @patch('tools.charm_helpers_sync.charm_helpers_sync.sync') + @patch('os.path.isdir') + @patch('os.path.exists') + def test_sync_helpers_from_config(self, exists, isdir, _sync): + '''It correctly syncs a list of included helpers''' + include = yaml.safe_load(INCLUDE)['include'] + isdir.return_value = True + exists.return_value = False + sync.sync_helpers(include=include, + src='/tmp/charm-helpers', + + dest='hooks/charmhelpers') + mods = [ + 'core', + 'contrib.openstack', + 'contrib.storage', + 'contrib.hahelpers.utils', + 'contrib.hahelpers.ceph_utils', + 'contrib.hahelpers.cluster_utils', + 'contrib.hahelpers.haproxy_utils' + ] + + ex_calls = [] + [ex_calls.append( + call('/tmp/charm-helpers', 'hooks/charmhelpers', c, []) + ) for c in mods] + self.assertEquals(ex_calls, _sync.call_args_list) + + @patch('tools.charm_helpers_sync.charm_helpers_sync.sync') + @patch('os.path.isdir') + @patch('os.path.exists') + @patch('shutil.rmtree') + def test_sync_helpers_from_config_cleanup(self, _rmtree, _exists, + isdir, _sync): + '''It correctly syncs a list of included helpers''' + include = yaml.safe_load(INCLUDE)['include'] + isdir.return_value = True + _exists.return_value = True + + sync.sync_helpers(include=include, + src='/tmp/charm-helpers', + + dest='hooks/charmhelpers') + _rmtree.assert_called_with('hooks/charmhelpers') + mods = [ + 'core', + 'contrib.openstack', + 'contrib.storage', + 'contrib.hahelpers.utils', + 'contrib.hahelpers.ceph_utils', + 'contrib.hahelpers.cluster_utils', + 'contrib.hahelpers.haproxy_utils' + ] + + ex_calls = [] + [ex_calls.append( + call('/tmp/charm-helpers', 'hooks/charmhelpers', c, []) + ) for c in mods] + self.assertEquals(ex_calls, _sync.call_args_list) + + def test_extract_option_no_globals(self): + '''It extracts option from an included item with no global options''' + inc = 'contrib.openstack.templates|inc=*.template' + result = sync.extract_options(inc) + ex = ('contrib.openstack.templates', ['inc=*.template']) + self.assertEquals(ex, result) + + def test_extract_option_with_global_as_string(self): + '''It extracts option for include with global options as str''' + inc = 
'contrib.openstack.templates|inc=*.template' + result = sync.extract_options(inc, global_options='inc=foo.*') + ex = ('contrib.openstack.templates', + ['inc=*.template', 'inc=foo.*']) + self.assertEquals(ex, result) + + def test_extract_option_with_globals(self): + '''It extracts option from an included item with global options''' + inc = 'contrib.openstack.templates|inc=*.template' + result = sync.extract_options(inc, global_options=['inc=*.cfg']) + ex = ('contrib.openstack.templates', ['inc=*.template', 'inc=*.cfg']) + self.assertEquals(ex, result) + + def test_extract_multiple_options_with_globals(self): + '''It extracts multiple options from an included item''' + inc = 'contrib.openstack.templates|inc=*.template,inc=foo.*' + result = sync.extract_options(inc, global_options=['inc=*.cfg']) + ex = ('contrib.openstack.templates', + ['inc=*.template', 'inc=foo.*', 'inc=*.cfg']) + self.assertEquals(ex, result) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/utils.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/utils.py new file mode 100644 index 0000000000000000000000000000000000000000..7ac46994dde1311502b93df965aea4b5038f868a --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tests/utils.py @@ -0,0 +1,79 @@ +# Copyright 2016 Canonical Ltd +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +"""Unit test helpers from https://github.com/openstack/charms.openstack/""" + +import contextlib +import io +import mock +import unittest + + +@contextlib.contextmanager +def patch_open(): + '''Patch open() to allow mocking both open() itself and the file that is + yielded. 
+ + Yields the mock for "open" and "file", respectively.''' + mock_open = mock.MagicMock(spec=open) + mock_file = mock.MagicMock(spec=io.FileIO) + + @contextlib.contextmanager + def stub_open(*args, **kwargs): + mock_open(*args, **kwargs) + yield mock_file + + with mock.patch('builtins.open', stub_open): + yield mock_open, mock_file + + +class BaseTestCase(unittest.TestCase): + + def setUp(self): + self._patches = {} + self._patches_start = {} + + def tearDown(self): + for k, v in self._patches.items(): + v.stop() + setattr(self, k, None) + self._patches = None + self._patches_start = None + + def patch_object(self, obj, attr, return_value=None, name=None, new=None, + **kwargs): + if name is None: + name = attr + if new is not None: + mocked = mock.patch.object(obj, attr, new=new, **kwargs) + else: + mocked = mock.patch.object(obj, attr, **kwargs) + self._patches[name] = mocked + started = mocked.start() + if new is None: + started.return_value = return_value + self._patches_start[name] = started + setattr(self, name, started) + + def patch(self, item, return_value=None, name=None, new=None, **kwargs): + if name is None: + raise RuntimeError("Must pass 'name' to .patch()") + if new is not None: + mocked = mock.patch(item, new=new, **kwargs) + else: + mocked = mock.patch(item, **kwargs) + self._patches[name] = mocked + started = mocked.start() + if new is None: + started.return_value = return_value + self._patches_start[name] = started + setattr(self, name, started) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tools/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tools/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..d7567b863e3a5ad2b7a7f44958b4166e0c3d346b --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tools/__init__.py @@ -0,0 +1,13 @@ +# Copyright 2014-2015 Canonical Limited. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tools/charm_helpers_sync/README.md b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tools/charm_helpers_sync/README.md new file mode 100644 index 0000000000000000000000000000000000000000..e22a77231bf068e0b5c208dc956c7dd47fdd7102 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tools/charm_helpers_sync/README.md @@ -0,0 +1,160 @@ +Script for synchronizing charm-helpers into a charm branch. + +This script is intended to be used by charm authors during the development +of their charm. It allows authors to pull in bits of a charm-helpers source +tree and embed directly into their charm, to be deployed with the rest of +their hooks and charm payload. This script is not intended to be called +by the hooks themselves, but instead by the charm author while they are +hacking on a charm offline. Consider it a method of compiling specific +revision of a charm-helpers branch into a given charm source tree. 
+
+Some goals and benefits to using a sync tool to manage this process:
+
+  - Reduces the burden of manually copying in upstream charm helpers code
+    into a charm and helps ensure we can easily keep a specific charm's
+    helper code up to date.
+
+  - Allows authors to hack on their own working branch of charm-helpers,
+    easily sync into their WIP charm. Any changes they've made to charm
+    helpers can be upstreamed via a merge of their charm-helpers branch
+    into lp:charm-helpers, ideally at the same time they are upstreaming
+    the charm itself into the charm store. Separating charm helper
+    development from charm development can help reduce cases where charms
+    are shipping locally modified helpers.
+
+  - Avoids the need to ship the *entire* charm-helpers source tree with
+    a charm. Authors can selectively pick and choose what subset of helpers
+    to include to satisfy the goals of their charm.
+
+Allows specifying a list of dependencies to sync in from a charm-helpers
+branch. Ideally, each charm should describe its requirements in a yaml
+config included in the charm, eg `charm-helpers.yaml` (NOTE: Example module
+layout as of 12/18/2019):
+
+    $ cd my-charm
+    $ cat >charm-helpers.yaml <<END
+    repo: https://github.com/juju/charm-helpers
+    destination: hooks/helpers
+    include:
+        - core
+        - contrib.openstack
+        - contrib.hahelpers
+    END
+
+Then sync the configured helpers into the charm tree:
+
+    $ charm-helpers-sync.py -c charm-helpers.yaml
+
+or specify a repository and modules directly on the command line:
+
+    $ charm-helpers-sync.py -r https://github.com/<user>/charm-helpers \
+    -d hooks/helpers core contrib.openstack contrib.hahelpers
+
+or use a specific branch using the `@` notation:
+
+    $ charm-helpers-sync.py -r https://github.com/<user>/charm-helpers@branch_name \
+    -d hooks/helpers core contrib.openstack contrib.hahelpers
+
+Script will create missing `__init__.py`'s to ensure each subdirectory is
+importable, assuming the script is run from the charm's top-level directory.
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tools/charm_helpers_sync/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tools/charm_helpers_sync/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..d7567b863e3a5ad2b7a7f44958b4166e0c3d346b
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tools/charm_helpers_sync/__init__.py
@@ -0,0 +1,13 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tools/charm_helpers_sync/charm_helpers_sync.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tools/charm_helpers_sync/charm_helpers_sync.py
new file mode 100755
index 0000000000000000000000000000000000000000..7c0c1945dd32d3c29683a4fc3567b527750b5cd4
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tools/charm_helpers_sync/charm_helpers_sync.py
@@ -0,0 +1,261 @@
+#!/usr/bin/python
+
+# Copyright 2014-2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# Authors: +# Adam Gandelman + +import logging +import optparse +import os +import subprocess +import shutil +import sys +import tempfile +import yaml +from fnmatch import fnmatch + +import six + +CHARM_HELPERS_REPO = 'https://github.com/juju/charm-helpers' + + +def parse_config(conf_file): + if not os.path.isfile(conf_file): + logging.error('Invalid config file: %s.' % conf_file) + return False + return yaml.load(open(conf_file).read()) + + +def clone_helpers(work_dir, repo): + dest = os.path.join(work_dir, 'charm-helpers') + logging.info('Cloning out %s to %s.' % (repo, dest)) + branch = None + if '@' in repo: + repo, branch = repo.split('@', 1) + cmd = ['git', 'clone', '--depth=1'] + if branch is not None: + cmd += ['--branch', branch] + cmd += [repo, dest] + subprocess.check_call(cmd) + return dest + + +def _module_path(module): + return os.path.join(*module.split('.')) + + +def _src_path(src, module): + return os.path.join(src, 'charmhelpers', _module_path(module)) + + +def _dest_path(dest, module): + return os.path.join(dest, _module_path(module)) + + +def _is_pyfile(path): + return os.path.isfile(path + '.py') + + +def ensure_init(path): + ''' + ensure directories leading up to path are importable, omitting + parent directory, eg path='/hooks/helpers/foo'/: + hooks/ + hooks/helpers/__init__.py + hooks/helpers/foo/__init__.py + ''' + for d, dirs, files in os.walk(os.path.join(*path.split('/')[:2])): + _i = os.path.join(d, '__init__.py') + if not os.path.exists(_i): + logging.info('Adding missing __init__.py: %s' % _i) + open(_i, 'wb').close() + + +def sync_pyfile(src, dest): + src = src + '.py' + src_dir = os.path.dirname(src) + logging.info('Syncing pyfile: %s -> %s.' % (src, dest)) + if not os.path.exists(dest): + os.makedirs(dest) + shutil.copy(src, dest) + if os.path.isfile(os.path.join(src_dir, '__init__.py')): + shutil.copy(os.path.join(src_dir, '__init__.py'), + dest) + ensure_init(dest) + + +def get_filter(opts=None): + opts = opts or [] + if 'inc=*' in opts: + # do not filter any files, include everything + return None + + def _filter(dir, ls): + incs = [opt.split('=').pop() for opt in opts if 'inc=' in opt] + _filter = [] + for f in ls: + _f = os.path.join(dir, f) + + if not os.path.isdir(_f) and not _f.endswith('.py') and incs: + if True not in [fnmatch(_f, inc) for inc in incs]: + logging.debug('Not syncing %s, does not match include ' + 'filters (%s)' % (_f, incs)) + _filter.append(f) + else: + logging.debug('Including file, which matches include ' + 'filters (%s): %s' % (incs, _f)) + elif (os.path.isfile(_f) and not _f.endswith('.py')): + logging.debug('Not syncing file: %s' % f) + _filter.append(f) + elif (os.path.isdir(_f) and not + os.path.isfile(os.path.join(_f, '__init__.py'))): + logging.debug('Not syncing directory: %s' % f) + _filter.append(f) + return _filter + return _filter + + +def sync_directory(src, dest, opts=None): + if os.path.exists(dest): + logging.debug('Removing existing directory: %s' % dest) + shutil.rmtree(dest) + logging.info('Syncing directory: %s -> %s.' 
% (src, dest)) + + shutil.copytree(src, dest, ignore=get_filter(opts)) + ensure_init(dest) + + +def sync(src, dest, module, opts=None): + + # Sync charmhelpers/__init__.py for bootstrap code. + sync_pyfile(_src_path(src, '__init__'), dest) + + # Sync other __init__.py files in the path leading to module. + m = [] + steps = module.split('.')[:-1] + while steps: + m.append(steps.pop(0)) + init = '.'.join(m + ['__init__']) + sync_pyfile(_src_path(src, init), + os.path.dirname(_dest_path(dest, init))) + + # Sync the module, or maybe a .py file. + if os.path.isdir(_src_path(src, module)): + sync_directory(_src_path(src, module), _dest_path(dest, module), opts) + elif _is_pyfile(_src_path(src, module)): + sync_pyfile(_src_path(src, module), + os.path.dirname(_dest_path(dest, module))) + else: + logging.warn('Could not sync: %s. Neither a pyfile or directory, ' + 'does it even exist?' % module) + + +def parse_sync_options(options): + if not options: + return [] + return options.split(',') + + +def extract_options(inc, global_options=None): + global_options = global_options or [] + if global_options and isinstance(global_options, six.string_types): + global_options = [global_options] + if '|' not in inc: + return (inc, global_options) + inc, opts = inc.split('|') + return (inc, parse_sync_options(opts) + global_options) + + +def sync_helpers(include, src, dest, options=None): + if os.path.exists(dest): + logging.debug('Removing existing directory: %s' % dest) + shutil.rmtree(dest) + if not os.path.isdir(dest): + os.makedirs(dest) + + global_options = parse_sync_options(options) + + for inc in include: + if isinstance(inc, str): + inc, opts = extract_options(inc, global_options) + sync(src, dest, inc, opts) + elif isinstance(inc, dict): + # could also do nested dicts here. + for k, v in six.iteritems(inc): + if isinstance(v, list): + for m in v: + inc, opts = extract_options(m, global_options) + sync(src, dest, '%s.%s' % (k, inc), opts) + + +if __name__ == '__main__': + parser = optparse.OptionParser() + parser.add_option('-c', '--config', action='store', dest='config', + default=None, help='helper config file') + parser.add_option('-D', '--debug', action='store_true', dest='debug', + default=False, help='debug') + parser.add_option('-r', '--repository', action='store', dest='repo', + help='charm-helpers git repository (overrides config)') + parser.add_option('-d', '--destination', action='store', dest='dest_dir', + help='sync destination dir (overrides config)') + (opts, args) = parser.parse_args() + + if opts.debug: + logging.basicConfig(level=logging.DEBUG) + else: + logging.basicConfig(level=logging.INFO) + + if opts.config: + logging.info('Loading charm helper config from %s.' % opts.config) + config = parse_config(opts.config) + if not config: + logging.error('Could not parse config from %s.' % opts.config) + sys.exit(1) + else: + config = {} + + if 'repo' not in config: + config['repo'] = CHARM_HELPERS_REPO + if opts.repo: + config['repo'] = opts.repo + if opts.dest_dir: + config['destination'] = opts.dest_dir + + if 'destination' not in config: + logging.error('No destination dir. 
specified as option or config.') + sys.exit(1) + + if 'include' not in config: + if not args: + logging.error('No modules to sync specified as option or config.') + sys.exit(1) + config['include'] = [] + [config['include'].append(a) for a in args] + + sync_options = None + if 'options' in config: + sync_options = config['options'] + tmpd = tempfile.mkdtemp() + try: + checkout = clone_helpers(tmpd, config['repo']) + sync_helpers(config['include'], checkout, config['destination'], + options=sync_options) + except Exception as e: + logging.error("Could not sync: %s" % e) + raise e + finally: + logging.debug('Cleaning up %s' % tmpd) + shutil.rmtree(tmpd) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tools/charm_helpers_sync/example-config.yaml b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tools/charm_helpers_sync/example-config.yaml new file mode 100644 index 0000000000000000000000000000000000000000..8563ec01b656a6f6990378dcc137cd89f25dd540 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tools/charm_helpers_sync/example-config.yaml @@ -0,0 +1,14 @@ +# Import from remote git repository. +repo: https://github.com/juju/charm-helpers +# install helpers to ./hooks/charmhelpers +destination: hooks/charmhelpers +include: + # include all of charmhelpers.core + - core + # all of charmhelpers.payload + - payload + # and a subset of charmhelpers.contrib.hahelpers + - contrib.hahelpers: + - openstack_common + - ceph_utils + - utils diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tox.ini b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tox.ini new file mode 100644 index 0000000000000000000000000000000000000000..c4c5ed54d732a0806b8fb1de66a69306ec837c94 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charm-helpers/tox.ini @@ -0,0 +1,53 @@ +[tox] +envlist = pep8,py2,py3 +skipsdist = true +sitepackages = false +# NOTE(beisner): Avoid false positives by not skipping missing interpreters. +# NOTE(beisner): Avoid pollution by not enabling sitepackages. +# NOTE(beisner): the 'py3' env is useful to "just give me whatever py3 is here." 
+# NOTE(beisner): the 'py3x' envs are useful to use a distinct interpreter version (will fail if not found) +ignore_basepython_conflict = true + +[testenv] +setenv = VIRTUAL_ENV={envdir} + PYTHONHASHSEED=0 +install_command = pip install {opts} {packages} +passenv = HOME TERM +commands = nosetests -s --nologcapture {posargs} --with-coverage --cover-package=charmhelpers tests/ +deps = -r{toxinidir}/test-requirements.txt + +[testenv:py2] +basepython = python +deps = -r{toxinidir}/test-requirements.txt + +[testenv:py3] +basepython = python3 +deps = -r{toxinidir}/test-requirements.txt + +[testenv:py34] +basepython = python3.4 +deps = -r{toxinidir}/test-requirements.txt + +[testenv:py35] +basepython = python3.5 +deps = -r{toxinidir}/test-requirements.txt + +[testenv:py36] +basepython = python3.6 +deps = -r{toxinidir}/test-requirements.txt + +[testenv:py37] +basepython = python3.7 +deps = -r{toxinidir}/test-requirements.txt + +[testenv:py38] +basepython = python3.8 +deps = -r{toxinidir}/test-requirements.txt + +[testenv:pep8] +basepython = python3 +deps = -r{toxinidir}/test-requirements.txt +commands = flake8 -v {posargs} charmhelpers tests tools + +[flake8] +ignore = E402,E501,E741,E722,W504 diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charms.osm/LICENSE b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charms.osm/LICENSE new file mode 100644 index 0000000000000000000000000000000000000000..261eeb9e9f8b2b4b0d119366dda99c6fd7d35c64 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charms.osm/LICENSE @@ -0,0 +1,201 @@ + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. 
For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. 
You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. 
In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright [yyyy] [name of copyright owner] + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charms.osm/README.md b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charms.osm/README.md new file mode 100644 index 0000000000000000000000000000000000000000..a98a55253f32b50a17989c1fcd7df6e5f82454af --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charms.osm/README.md @@ -0,0 +1,201 @@ +# charms.osm + +A Python library to aid the development of charms for Open Source Mano (OSM) + +## How to install + +```bash +git submodule add https://github.com/canonical/operator mod/operator +git submodule add https://github.com/charmed-osm/charms.osm mod/charms.osm +git submodule add https://github.com/juju/charm-helpers.git mod/charm-helpers # Only for libansible +``` + +## SSHProxyCharm + +In this section, we show the class you should inherit from in order to develop your SSH Proxy charms. 
+
+Example:
+
+```python
+from charms.osm.sshproxy import SSHProxyCharm
+
+class MySSHProxyCharm(SSHProxyCharm):
+
+    def __init__(self, framework, key):
+        super().__init__(framework, key)
+
+        # Listen to charm events
+        self.framework.observe(self.on.config_changed, self.on_config_changed)
+        self.framework.observe(self.on.install, self.on_install)
+        self.framework.observe(self.on.start, self.on_start)
+
+        # Listen to the touch action event
+        self.framework.observe(self.on.touch_action, self.on_touch_action)
+
+    def on_config_changed(self, event):
+        """Handle changes in configuration"""
+        super().on_config_changed(event)
+
+    def on_install(self, event):
+        """Called when the charm is being installed"""
+        super().on_install(event)
+
+    def on_start(self, event):
+        """Called when the charm is being started"""
+        super().on_start(event)
+
+    def on_touch_action(self, event):
+        """Touch a file."""
+
+        if self.model.unit.is_leader():
+            filename = event.params["filename"]
+            proxy = self.get_ssh_proxy()
+            stdout, stderr = proxy.run("touch {}".format(filename))
+            event.set_results({"output": stdout})
+        else:
+            event.fail("Unit is not leader")
+            return
+```
+
+### Attributes and methods available
+
+- Attributes:
+  - state: StoredState object. It can be used to store state data to be shared within a charm across different hooks.
+- SSH-related methods:
+  - get_ssh_proxy(): Return an SSHProxy object with which you can then execute scp and ssh commands on the remote machine.
+  - verify_credentials(): Return True if it has the right credentials to SSH into the remote machine. It also updates the status of the unit.
+- Charm-related methods: Methods that should be run in specific hooks/events.
+  - on_install(): Install dependencies for enabling SSH functionality.
+  - on_start(): Generate needed SSH keys.
+  - on_config_changed(): Verify the SSH credentials whenever the charm configuration changes.
+
+### config.yaml
+
+You need to add this to the config.yaml of your charm.
+
+```yaml
+options:
+  ssh-hostname:
+    type: string
+    default: ""
+    description: "The hostname or IP address of the machine to connect to."
+  ssh-username:
+    type: string
+    default: ""
+    description: "The username to login as."
+  ssh-password:
+    type: string
+    default: ""
+    description: "The password used to authenticate."
+  ssh-public-key:
+    type: string
+    default: ""
+    description: "The public key of this unit."
+  ssh-key-type:
+    type: string
+    default: "rsa"
+    description: "The type of encryption to use for the SSH key."
+  ssh-key-bits:
+    type: int
+    default: 4096
+    description: "The number of bits to use for the SSH key."
+```
+
+### metadata.yaml
+
+You need to add this to the metadata.yaml of your charm.
+
+```yaml
+peers:
+  proxypeer:
+    interface: proxypeer
+```
+
+### actions.yaml
+
+You need to add this to the actions.yaml of your charm.
+
+```yaml
+# Required by charms.osm.sshproxy
+run:
+  description: "Run an arbitrary command"
+  params:
+    command:
+      description: "The command to execute."
+      type: string
+      default: ""
+  required:
+    - command
+generate-ssh-key:
+  description: "Generate a new SSH keypair for this unit. This will replace any existing previously generated keypair."
+verify-ssh-credentials:
+  description: "Verify that this unit can authenticate with the server specified by ssh-hostname and ssh-username."
+get-ssh-public-key:
+  description: "Get the public SSH key for this unit."
+``` + +## SSHProxy + +Example: + +```python +from charms.osm.sshproxy import SSHProxy + +# Check if SSH Proxy has key +if not SSHProxy.has_ssh_key(): + # Generate SSH Key + SSHProxy.generate_ssh_key() + +# Get generated public and private keys +SSHProxy.get_ssh_public_key() +SSHProxy.get_ssh_private_key() + +# Get Proxy +proxy = SSHProxy( + hostname=config["ssh-hostname"], + username=config["ssh-username"], + password=config["ssh-password"], +) + +# Verify credentials +verified = proxy.verify_credentials() + +if verified: + # Run commands in remote machine + proxy.run("touch /home/ubuntu/touch") +``` + +## Libansible + +```python +from charms.osm import libansible + +# Install ansible packages in the charm +libansible.install_ansible_support() + +result = libansible.execute_playbook( + "configure-remote.yaml", # Name of the playbook <-- Put the playbook in playbooks/ folder + config["ssh-hostname"], + config["ssh-username"], + config["ssh-password"], + dict_vars, # Dictionary with variables to populate in the playbook +) +``` + +## Usage + +Import submodules: + +```bash +git submodule add https://github.com/charmed-osm/charms.osm mod/charms.osm +git submodule add https://github.com/juju/charm-helpers.git mod/charm-helpers # Only for libansible +``` + +Add symlinks: + +```bash +mkdir -p lib/charms +ln -s ../mod/charms.osm/charms/osm lib/charms/osm +ln -s ../mod/charm-helpers/charmhelpers lib/charmhelpers # Only for libansible +``` diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charms.osm/charms/osm/libansible.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charms.osm/charms/osm/libansible.py new file mode 100644 index 0000000000000000000000000000000000000000..32fd26ae7d63d42edef33982d5438669b191a361 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charms.osm/charms/osm/libansible.py @@ -0,0 +1,108 @@ +## +# Copyright 2020 Canonical Ltd. +# All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. +## + +import fnmatch +import os +import yaml +import subprocess +import sys + +sys.path.append("lib") +import charmhelpers.fetch + + +ansible_hosts_path = "/etc/ansible/hosts" + + +def install_ansible_support(from_ppa=True, ppa_location="ppa:ansible/ansible"): + """Installs the ansible package. + + By default it is installed from the `PPA`_ linked from + the ansible `website`_ or from a ppa specified by a charm config.. + + .. _PPA: https://launchpad.net/~rquillo/+archive/ansible + .. _website: http://docs.ansible.com/intro_installation.html#latest-releases-via-apt-ubuntu + + If from_ppa is empty, you must ensure that the package is available + from a configured repository. 
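+
+    Illustrative call (argument values are the defaults documented above)::
+
+        install_ansible_support(from_ppa=True, ppa_location="ppa:ansible/ansible")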
+ """ + if from_ppa: + charmhelpers.fetch.add_source(ppa_location) + charmhelpers.fetch.apt_update(fatal=True) + charmhelpers.fetch.apt_install("ansible") + with open(ansible_hosts_path, "w+") as hosts_file: + hosts_file.write("localhost ansible_connection=local") + + +def create_hosts(hostname, username, password, hosts): + inventory_path = "/etc/ansible/hosts" + + with open(inventory_path, "w") as f: + f.write("[{}]\n".format(hosts)) + h1 = "host ansible_host={0} ansible_user={1} ansible_password={2}\n".format( + hostname, username, password + ) + f.write(h1) + + +def create_ansible_cfg(): + ansible_config_path = "/etc/ansible/ansible.cfg" + + with open(ansible_config_path, "w") as f: + f.write("[defaults]\n") + f.write("host_key_checking = False\n") + + +# Function to find the playbook path +def find(pattern, path): + result = "" + for root, dirs, files in os.walk(path): + for name in files: + if fnmatch.fnmatch(name, pattern): + result = os.path.join(root, name) + return result + + +def execute_playbook(playbook_file, hostname, user, password, vars_dict=None): + playbook_path = find(playbook_file, "/var/lib/juju/agents/") + + with open(playbook_path, "r") as f: + playbook_data = yaml.load(f) + + hosts = "all" + if "hosts" in playbook_data[0].keys() and playbook_data[0]["hosts"]: + hosts = playbook_data[0]["hosts"] + + create_ansible_cfg() + create_hosts(hostname, user, password, hosts) + + call = "ansible-playbook {} ".format(playbook_path) + + if vars_dict and isinstance(vars_dict, dict) and len(vars_dict) > 0: + call += "--extra-vars " + + string_var = "" + for k,v in vars_dict.items(): + string_var += "{}={} ".format(k, v) + + string_var = string_var.strip() + call += '"{}"'.format(string_var) + + call = call.strip() + result = subprocess.check_output(call, shell=True) + + return result diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charms.osm/charms/osm/ns.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charms.osm/charms/osm/ns.py new file mode 100644 index 0000000000000000000000000000000000000000..25be4056282e48ce946632025be8b466557e3171 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charms.osm/charms/osm/ns.py @@ -0,0 +1,301 @@ +# A prototype of a library to aid in the development and operation of +# OSM Network Service charms + +import asyncio +import logging +import os +import os.path +import re +import subprocess +import sys +import time +import yaml + +try: + import juju +except ImportError: + # Not all cloud images are created equal + if not os.path.exists("/usr/bin/python3") or not os.path.exists("/usr/bin/pip3"): + # Update the apt cache + subprocess.check_call(["apt-get", "update"]) + + # Install the Python3 package + subprocess.check_call(["apt-get", "install", "-y", "python3", "python3-pip"],) + + + # Install the libjuju build dependencies + subprocess.check_call(["apt-get", "install", "-y", "libffi-dev", "libssl-dev"],) + + subprocess.check_call( + [sys.executable, "-m", "pip", "install", "juju"], + ) + +from juju.controller import Controller + +# Quiet the debug logging +logging.getLogger('websockets.protocol').setLevel(logging.INFO) +logging.getLogger('juju.client.connection').setLevel(logging.WARN) +logging.getLogger('juju.model').setLevel(logging.WARN) +logging.getLogger('juju.machine').setLevel(logging.WARN) + + +class NetworkService: + """A lightweight interface to the Juju controller. 
+
+    This NetworkService client is specifically designed to allow a higher-level
+    "NS" charm to interoperate with "VNF" charms, allowing for the execution of
+    Primitives across other charms within the same model.
+    """
+    endpoint = None
+    user = 'admin'
+    secret = None
+    port = 17070
+    loop = None
+    client = None
+    model = None
+    cacert = None
+
+    def __init__(self, user, secret, endpoint=None):
+
+        self.user = user
+        self.secret = secret
+        if endpoint is None:
+            addresses = os.environ['JUJU_API_ADDRESSES']
+            for address in addresses.split(' '):
+                self.endpoint = address
+        else:
+            self.endpoint = endpoint
+
+        # Stash the name of the model
+        self.model = os.environ['JUJU_MODEL_NAME']
+
+        # Load the ca-cert from agent.conf
+        AGENT_PATH = os.path.dirname(os.environ['JUJU_CHARM_DIR'])
+        with open("{}/agent.conf".format(AGENT_PATH), "r") as f:
+            try:
+                y = yaml.safe_load(f)
+                self.cacert = y['cacert']
+            except yaml.YAMLError as exc:
+                print("Unable to find Juju ca-cert.")
+                raise exc
+
+        # Create our event loop
+        self.loop = asyncio.new_event_loop()
+        asyncio.set_event_loop(self.loop)
+
+    async def connect(self):
+        """Connect to the Juju controller."""
+        controller = Controller()
+
+        print(
+            "Connecting to controller... ws://{}:{} as {}/{}".format(
+                self.endpoint,
+                self.port,
+                self.user,
+                self.secret[-4:].rjust(len(self.secret), "*"),
+            )
+        )
+        await controller.connect(
+            endpoint=self.endpoint,
+            username=self.user,
+            password=self.secret,
+            cacert=self.cacert,
+        )
+
+        return controller
+
+    def __del__(self):
+        self.logout()
+
+    async def disconnect(self):
+        """Disconnect from the Juju controller."""
+        if self.client:
+            print("Disconnecting Juju controller")
+            await self.client.disconnect()
+
+    def login(self):
+        """Login to the Juju controller."""
+        if not self.client:
+            # Connect to the Juju API server
+            self.client = self.loop.run_until_complete(self.connect())
+        return self.client
+
+    def logout(self):
+        """Logout of the Juju controller."""
+
+        if self.loop:
+            print("Disconnecting from API")
+            self.loop.run_until_complete(self.disconnect())
+
+    def FormatApplicationName(self, *args):
+        """
+        Generate a Juju-compatible Application name.
+
+        :param args tuple: Positional arguments to be used to construct the
+        application name.
+
+        Limitations::
+        - Only accepts characters a-z and non-consecutive dashes (-)
+        - Application name should not exceed 50 characters
+
+        Examples::
+
+            FormatApplicationName("ping_pong_ns", "ping_vnf", "a")
+        """
+        appname = ""
+        for c in "-".join(list(args)):
+            if c.isdigit():
+                c = chr(97 + int(c))
+            elif not c.isalpha():
+                c = "-"
+            appname += c
+
+        return re.sub('-+', '-', appname.lower())
+
+    def GetApplicationName(self, nsr_name, vnf_name, vnf_member_index):
+        """Get the runtime application name of a VNF/VDU.
+
+        This will generate an application name matching the name of the deployed charm,
+        given the right parameters.
+
+        :param nsr_name str: The name of the running Network Service, as specified at instantiation.
+        :param vnf_name str: The name of the VNF or VDU
+        :param vnf_member_index: The vnf-member-index as specified in the descriptor
+        """
+
+        application_name = self.FormatApplicationName(nsr_name, vnf_member_index, vnf_name)
+
+        # This matches the logic used by the LCM
+        application_name = application_name[0:48]
+        vca_index = int(vnf_member_index) - 1
+        application_name += '-' + chr(97 + vca_index // 26) + chr(97 + vca_index % 26)
+
+        return application_name
+
+    def ExecutePrimitiveGetOutput(self, application, primitive, params={}, timeout=600):
+        """Execute a single primitive and return its output.
+
+        This is a blocking method that will execute a single primitive and wait
+        for its completion before returning its output.
+
+        :param application str: The application name provided by `GetApplicationName`.
+        :param primitive str: The name of the primitive to execute.
+        :param params dict: A dict of parameters to pass to the primitive.
+        :param timeout int: A timeout, in seconds, to wait for the primitive to finish. Defaults to 600 seconds.
+        """
+        uuid = self.ExecutePrimitive(application, primitive, params)
+
+        status = None
+        output = None
+
+        starttime = time.time()
+        while time.time() < starttime + timeout:
+            status = self.GetPrimitiveStatus(uuid)
+            if status in ['completed', 'failed']:
+                break
+            time.sleep(10)
+
+        # When the primitive is done, get the output
+        if status in ['completed', 'failed']:
+            output = self.GetPrimitiveOutput(uuid)
+
+        return output
+
+    def ExecutePrimitive(self, application, primitive, params={}):
+        """Execute a primitive.
+
+        This is a non-blocking method to execute a primitive. It will return
+        the UUID of the queued primitive execution, which you can use
+        for subsequent calls to `GetPrimitiveStatus` and `GetPrimitiveOutput`.
+
+        :param application string: The name of the application
+        :param primitive string: The name of the Primitive.
+        :param params dict: A dict of parameters to pass to the primitive.
+
+        :returns uuid string: The UUID of the executed Primitive
+        """
+        uuid = None
+
+        if not self.client:
+            self.login()
+
+        model = self.loop.run_until_complete(
+            self.client.get_model(self.model)
+        )
+
+        # Get the application
+        if application in model.applications:
+            app = model.applications[application]
+
+            # Execute the primitive
+            unit = app.units[0]
+            if unit:
+                action = self.loop.run_until_complete(
+                    unit.run_action(primitive, **params)
+                )
+                uuid = action.id
+                print("Executing action: {}".format(uuid))
+            self.loop.run_until_complete(
+                model.disconnect()
+            )
+        else:
+            # Invalid mapping: application not found. Raise exception
+            raise Exception("Application not found: {}".format(application))
+
+        return uuid
+
+    def GetPrimitiveStatus(self, uuid):
+        """Get the status of a Primitive execution.
+
+        This will return one of the following strings:
+        - pending
+        - running
+        - completed
+        - failed
+
+        :param uuid string: The UUID of the executed Primitive.
+        :returns: The status of the executed Primitive
+        """
+        status = None
+
+        if not self.client:
+            self.login()
+
+        model = self.loop.run_until_complete(
+            self.client.get_model(self.model)
+        )
+
+        status = self.loop.run_until_complete(
+            model.get_action_status(uuid)
+        )
+
+        self.loop.run_until_complete(
+            model.disconnect()
+        )
+
+        return status[uuid]
+
+    def GetPrimitiveOutput(self, uuid):
+        """Get the output of a completed Primitive execution.
+
+        :param uuid string: The UUID of the executed Primitive.
+        :returns: The output of the execution, or None if it's still running.
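+
+        Illustrative flow combining the primitive methods of this class
+        (names are examples only)::
+
+            uuid = ns.ExecutePrimitive(app, "run", {"command": "hostname"})
+            while ns.GetPrimitiveStatus(uuid) not in ['completed', 'failed']:
+                time.sleep(10)
+            output = ns.GetPrimitiveOutput(uuid)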
+ """ + result = None + if not self.client: + self.login() + + model = self.loop.run_until_complete( + self.client.get_model(self.model) + ) + + result = self.loop.run_until_complete( + model.get_action_output(uuid) + ) + + self.loop.run_until_complete( + model.disconnect() + ) + + return result diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charms.osm/charms/osm/proxy_cluster.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charms.osm/charms/osm/proxy_cluster.py new file mode 100644 index 0000000000000000000000000000000000000000..f323a3af88f3011ef9fde794cdfe343be8665abb --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charms.osm/charms/osm/proxy_cluster.py @@ -0,0 +1,59 @@ +import socket + +from ops.framework import Object, StoredState + + +class ProxyCluster(Object): + + state = StoredState() + + def __init__(self, charm, relation_name): + super().__init__(charm, relation_name) + self._relation_name = relation_name + self._relation = self.framework.model.get_relation(self._relation_name) + + self.framework.observe(charm.on.ssh_keys_initialized, self.on_ssh_keys_initialized) + + self.state.set_default(ssh_public_key=None) + self.state.set_default(ssh_private_key=None) + + def on_ssh_keys_initialized(self, event): + if not self.framework.model.unit.is_leader(): + raise RuntimeError("The initial unit of a cluster must also be a leader.") + + self.state.ssh_public_key = event.ssh_public_key + self.state.ssh_private_key = event.ssh_private_key + if not self.is_joined: + event.defer() + return + + self._relation.data[self.model.app][ + "ssh_public_key" + ] = self.state.ssh_public_key + self._relation.data[self.model.app][ + "ssh_private_key" + ] = self.state.ssh_private_key + + @property + def is_joined(self): + return self._relation is not None + + @property + def ssh_public_key(self): + if self.is_joined: + return self._relation.data[self.model.app].get("ssh_public_key") + + @property + def ssh_private_key(self): + if self.is_joined: + return self._relation.data[self.model.app].get("ssh_private_key") + + @property + def is_cluster_initialized(self): + return ( + True + if self.is_joined + and self._relation.data[self.model.app].get("ssh_public_key") + and self._relation.data[self.model.app].get("ssh_private_key") + else False + ) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charms.osm/charms/osm/sshproxy.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charms.osm/charms/osm/sshproxy.py new file mode 100644 index 0000000000000000000000000000000000000000..e2c311e5be8515a7fe19c05dfed9f042ded0576b --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/charms.osm/charms/osm/sshproxy.py @@ -0,0 +1,375 @@ +"""Module to help with executing commands over SSH.""" +## +# Copyright 2016 Canonical Ltd. +# All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. 
+##
+
+# from charmhelpers.core import unitdata
+# from charmhelpers.core.hookenv import log
+
+import io
+import ipaddress
+import subprocess
+import os
+import socket
+import shlex
+import traceback
+import sys
+
+from subprocess import (
+    check_call,
+    Popen,
+    CalledProcessError,
+    PIPE,
+)
+
+from ops.charm import CharmBase, CharmEvents
+from ops.framework import StoredState, EventBase, EventSource
+from ops.main import main
+from ops.model import (
+    ActiveStatus,
+    BlockedStatus,
+    MaintenanceStatus,
+    WaitingStatus,
+    ModelError,
+)
+from .proxy_cluster import ProxyCluster
+
+import logging
+
+
+logger = logging.getLogger(__name__)
+
+class SSHKeysInitialized(EventBase):
+    def __init__(self, handle, ssh_public_key, ssh_private_key):
+        super().__init__(handle)
+        self.ssh_public_key = ssh_public_key
+        self.ssh_private_key = ssh_private_key
+
+    def snapshot(self):
+        return {
+            "ssh_public_key": self.ssh_public_key,
+            "ssh_private_key": self.ssh_private_key,
+        }
+
+    def restore(self, snapshot):
+        self.ssh_public_key = snapshot["ssh_public_key"]
+        self.ssh_private_key = snapshot["ssh_private_key"]
+
+
+class ProxyClusterEvents(CharmEvents):
+    ssh_keys_initialized = EventSource(SSHKeysInitialized)
+
+
+class SSHProxyCharm(CharmBase):
+
+    state = StoredState()
+    on = ProxyClusterEvents()
+
+    def __init__(self, framework, key):
+        super().__init__(framework, key)
+
+        self.peers = ProxyCluster(self, "proxypeer")
+
+        # SSH Proxy actions (primitives)
+        self.framework.observe(self.on.generate_ssh_key_action, self.on_generate_ssh_key_action)
+        self.framework.observe(self.on.get_ssh_public_key_action, self.on_get_ssh_public_key_action)
+        self.framework.observe(self.on.run_action, self.on_run_action)
+        self.framework.observe(self.on.verify_ssh_credentials_action, self.on_verify_ssh_credentials_action)
+
+        self.framework.observe(self.on.proxypeer_relation_changed, self.on_proxypeer_relation_changed)
+
+    def get_ssh_proxy(self):
+        """Get the SSHProxy instance"""
+        proxy = SSHProxy(
+            hostname=self.model.config["ssh-hostname"],
+            username=self.model.config["ssh-username"],
+            password=self.model.config["ssh-password"],
+        )
+        return proxy
+
+    def on_proxypeer_relation_changed(self, event):
+        if self.peers.is_cluster_initialized and not SSHProxy.has_ssh_key():
+            pubkey = self.peers.ssh_public_key
+            privkey = self.peers.ssh_private_key
+            SSHProxy.write_ssh_keys(public=pubkey, private=privkey)
+            self.verify_credentials()
+        else:
+            event.defer()
+
+    def on_config_changed(self, event):
+        """Handle changes in configuration"""
+        self.verify_credentials()
+
+    def on_install(self, event):
+        SSHProxy.install()
+
+    def on_start(self, event):
+        """Called when the charm is being started"""
+        if not self.peers.is_joined:
+            event.defer()
+            return
+
+        unit = self.model.unit
+
+        if not SSHProxy.has_ssh_key():
+            unit.status = MaintenanceStatus("Generating SSH keys...")
+            pubkey = None
+            privkey = None
+            if self.model.unit.is_leader():
+                if self.peers.is_cluster_initialized:
+                    SSHProxy.write_ssh_keys(
+                        public=self.peers.ssh_public_key,
+                        private=self.peers.ssh_private_key,
+                    )
+                else:
+                    SSHProxy.generate_ssh_key()
+                    self.on.ssh_keys_initialized.emit(
+                        SSHProxy.get_ssh_public_key(), SSHProxy.get_ssh_private_key()
+                    )
+        self.verify_credentials()
+
+    def verify_credentials(self):
+        unit = self.model.unit
+
+        # Unit should go into a waiting state until verify_ssh_credentials is successful
+        unit.status = WaitingStatus("Waiting for SSH credentials")
+        proxy = self.get_ssh_proxy()
+        verified, 
_ = proxy.verify_credentials() + if verified: + unit.status = ActiveStatus() + else: + unit.status = BlockedStatus("Invalid SSH credentials.") + return verified + + ##################### + # SSH Proxy methods # + ##################### + def on_generate_ssh_key_action(self, event): + """Generate a new SSH keypair for this unit.""" + if self.model.unit.is_leader(): + if not SSHProxy.generate_ssh_key(): + event.fail("Unable to generate ssh key") + else: + event.fail("Unit is not leader") + return + + def on_get_ssh_public_key_action(self, event): + """Get the SSH public key for this unit.""" + if self.model.unit.is_leader(): + pubkey = SSHProxy.get_ssh_public_key() + event.set_results({"pubkey": SSHProxy.get_ssh_public_key()}) + else: + event.fail("Unit is not leader") + return + + def on_run_action(self, event): + """Run an arbitrary command on the remote host.""" + if self.model.unit.is_leader(): + cmd = event.params["command"] + proxy = self.get_ssh_proxy() + stdout, stderr = proxy.run(cmd) + event.set_results({"output": stdout}) + if len(stderr): + event.fail(stderr) + else: + event.fail("Unit is not leader") + return + + def on_verify_ssh_credentials_action(self, event): + """Verify the SSH credentials for this unit.""" + unit = self.model.unit + if unit.is_leader(): + proxy = self.get_ssh_proxy() + verified, stderr = proxy.verify_credentials() + if verified: + event.set_results({"verified": True}) + unit.status = ActiveStatus() + else: + event.set_results({"verified": False, "stderr": stderr}) + event.fail("Not verified") + unit.status = BlockedStatus("Invalid SSH credentials.") + + else: + event.fail("Unit is not leader") + return + + +class LeadershipError(ModelError): + def __init__(self): + super().__init__("not leader") + +class SSHProxy: + private_key_path = "/root/.ssh/id_sshproxy" + public_key_path = "/root/.ssh/id_sshproxy.pub" + key_type = "rsa" + key_bits = 4096 + + def __init__(self, hostname: str, username: str, password: str = ""): + self.hostname = hostname + self.username = username + self.password = password + + @staticmethod + def install(): + check_call("apt update && apt install -y openssh-client sshpass", shell=True) + + @staticmethod + def generate_ssh_key(): + """Generate a 4096-bit rsa keypair.""" + if not os.path.exists(SSHProxy.private_key_path): + cmd = "ssh-keygen -t {} -b {} -N '' -f {}".format( + SSHProxy.key_type, SSHProxy.key_bits, SSHProxy.private_key_path, + ) + + try: + check_call(cmd, shell=True) + except CalledProcessError: + return False + + return True + + @staticmethod + def write_ssh_keys(public, private): + """Write a 4096-bit rsa keypair.""" + with open(SSHProxy.public_key_path, "w") as f: + f.write(public) + f.close() + with open(SSHProxy.private_key_path, "w") as f: + f.write(private) + f.close() + + @staticmethod + def get_ssh_public_key(): + publickey = "" + if os.path.exists(SSHProxy.private_key_path): + with open(SSHProxy.public_key_path, "r") as f: + publickey = f.read() + return publickey + + @staticmethod + def get_ssh_private_key(): + privatekey = "" + if os.path.exists(SSHProxy.private_key_path): + with open(SSHProxy.private_key_path, "r") as f: + privatekey = f.read() + return privatekey + + @staticmethod + def has_ssh_key(): + return True if os.path.exists(SSHProxy.private_key_path) else False + + def run(self, cmd: str) -> (str, str): + """Run a command remotely via SSH. 
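+
+        Illustrative use (hostname and credentials are examples only)::
+
+            proxy = SSHProxy("10.0.0.1", "ubuntu", "password")
+            stdout, stderr = proxy.run("hostname")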
+ + Note: The previous behavior was to run the command locally if SSH wasn't + configured, but that can lead to cases where execution succeeds when you'd + expect it not to. + """ + if isinstance(cmd, str): + cmd = shlex.split(cmd) + + host = self._get_hostname() + user = self.username + passwd = self.password + key = self.private_key_path + + # Make sure we have everything we need to connect + if host and user: + return self.ssh(cmd) + + raise Exception("Invalid SSH credentials.") + + def scp(self, source_file, destination_file): + """Execute an scp command. Requires a fully qualified source and + destination. + + :param str source_file: Path to the source file + :param str destination_file: Path to the destination file + :raises: :class:`CalledProcessError` if the command fails + """ + cmd = [ + "sshpass", + "-p", + self.password, + "scp", + "-i", + os.path.expanduser(self.private_key_path), + "-o", + "StrictHostKeyChecking=no", + "-q", + "-B", + ] + destination = "{}@{}:{}".format(self.username, self.hostname, destination_file) + cmd.extend([source_file, destination]) + subprocess.run(cmd, check=True) + + def ssh(self, command): + """Run a command remotely via SSH. + + :param list(str) command: The command to execute + :return: tuple: The stdout and stderr of the command execution + :raises: :class:`CalledProcessError` if the command fails + """ + + destination = "{}@{}".format(self.username, self.hostname) + cmd = [ + "sshpass", + "-p", + self.password, + "ssh", + "-i", + os.path.expanduser(self.private_key_path), + "-o", + "StrictHostKeyChecking=no", + "-q", + destination, + ] + cmd.extend(command) + output = subprocess.run(cmd, check=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE) + return (output.stdout.decode("utf-8").strip(), output.stderr.decode("utf-8").strip()) + + def verify_credentials(self): + """Verify the SSH credentials. + + :return (bool, str): Verified, Stderr + """ + verified = False + try: + (stdout, stderr) = self.run("hostname") + verified = True + except CalledProcessError as e: + stderr = "Command failed: {} ({})".format(" ".join(e.cmd), str(e.output)) + except (TimeoutError, socket.timeout): + stderr = "Timeout attempting to reach {}".format(self._get_hostname()) + except Exception as error: + tb = traceback.format_exc() + stderr = "Unhandled exception: {}".format(tb) + return verified, stderr + + ################### + # Private methods # + ################### + def _get_hostname(self): + """Get the hostname for the ssh target. + + HACK: This function was added to work around an issue where the + ssh-hostname was passed in the format of a.b.c.d;a.b.c.d, where the first + is the floating ip, and the second the non-floating ip, for an Openstack + instance. 
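+
+        Example (illustrative): "10.0.0.1;192.168.0.2" -> "10.0.0.1"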
+ """ + return self.hostname.split(";")[0] diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/.flake8 b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/.flake8 new file mode 100644 index 0000000000000000000000000000000000000000..61d908155588a7968dd25c90cff349377305a789 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/.flake8 @@ -0,0 +1,2 @@ +[flake8] +max-line-length = 99 diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/.gitignore b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/.gitignore new file mode 100644 index 0000000000000000000000000000000000000000..5596c502618d6428de0199c4cec069c151c0edd3 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/.gitignore @@ -0,0 +1,4 @@ +__pycache__ +/sandbox +.idea +docs/_build diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/.readthedocs.yaml b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/.readthedocs.yaml new file mode 100644 index 0000000000000000000000000000000000000000..b6989fdb6a173e6f75343fd5d2953acb9c24b0a2 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/.readthedocs.yaml @@ -0,0 +1,6 @@ +version: 2 # required +formats: [] # i.e. no extra formats (for now) +python: + version: "3.5" + install: + - requirements: docs/requirements.txt diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/.travis.yml b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/.travis.yml new file mode 100644 index 0000000000000000000000000000000000000000..5aa83238027ae5dccb4a797e6c7ed456f1cfe5de --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/.travis.yml @@ -0,0 +1,24 @@ +dist: bionic + +language: python + +arch: + - amd64 + - arm64 + +python: + - "3.5" + - "3.6" + - "3.7" + - "3.8" + +matrix: + include: + - os: osx + language: generic + +install: + - pip3 install -r requirements-dev.txt + +script: + - ./run_tests diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/CODE_OF_CONDUCT.md b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/CODE_OF_CONDUCT.md new file mode 100644 index 0000000000000000000000000000000000000000..b5bc1b1f8f266e3c7e15e8b8db490f688a1e5230 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/CODE_OF_CONDUCT.md @@ -0,0 +1,78 @@ +# Contributor Covenant Code of Conduct + +## Our Pledge + +In the interest of fostering an open and welcoming environment, we as +contributors and maintainers pledge to making participation in our project and +our community a harassment-free experience for everyone, regardless of age, body +size, disability, ethnicity, sex characteristics, gender identity and expression, +level of experience, education, socio-economic status, nationality, personal +appearance, race, religion, or sexual identity and orientation. 
+ +## Our Standards + +Examples of behavior that contributes to creating a positive environment +include: + +* Using welcoming and inclusive language +* Being respectful of differing viewpoints and experiences +* Gracefully accepting constructive criticism +* Focusing on what is best for the community +* Showing empathy towards other community members + +Examples of unacceptable behavior by participants include: + +* The use of sexualized language or imagery and unwelcome sexual attention or + advances +* Trolling, insulting/derogatory comments, and personal or political attacks +* Public or private harassment +* Publishing others' private information, such as a physical or electronic + address, without explicit permission +* Other conduct which could reasonably be considered inappropriate in a + professional setting + +## Our Responsibilities + +Project maintainers are responsible for clarifying the standards of acceptable +behavior and are expected to take appropriate and fair corrective action in +response to any instances of unacceptable behavior. + +Project maintainers have the right and responsibility to remove, edit, or +reject comments, commits, code, wiki edits, issues, and other contributions +that are not aligned to this Code of Conduct, or to ban temporarily or +permanently any contributor for other behaviors that they deem inappropriate, +threatening, offensive, or harmful. + +## Scope + +This Code of Conduct applies both within project spaces and in public spaces +when an individual is representing the project or its community. Examples of +representing a project or community include using an official project e-mail +address, posting via an official social media account, or acting as an appointed +representative at an online or offline event. Representation of a project may be +further defined and clarified by project maintainers. + +## Enforcement + +Instances of abusive, harassing, or otherwise unacceptable behavior +may be reported by contacting the project team at +charmcrafters@lists.launchpad.net. All complaints will be reviewed and +investigated and will result in a response that is deemed necessary +and appropriate to the circumstances. The project team is obligated to +maintain confidentiality with regard to the reporter of an incident. +Further details of specific enforcement policies may be posted +separately. + +Project maintainers who do not follow or enforce the Code of Conduct in good +faith may face temporary or permanent repercussions as determined by other +members of the project's leadership. + +## Attribution + +This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4, +available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html + +[homepage]: https://www.contributor-covenant.org + +For answers to common questions about this code of conduct, see +https://www.contributor-covenant.org/faq diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/LICENSE.txt b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/LICENSE.txt new file mode 100644 index 0000000000000000000000000000000000000000..d645695673349e3947e8e5ae42332d0ac3164cd7 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/LICENSE.txt @@ -0,0 +1,202 @@ + + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. 
+ + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. 
Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. 
This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright [yyyy] [name of copyright owner] + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. 
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/README.md b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/README.md
new file mode 100644
index 0000000000000000000000000000000000000000..3b4e63f67906e9b9cbaf4fe2055e35005b1380a7
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/README.md
@@ -0,0 +1,138 @@
+# The Operator Framework
+
+The Operator Framework provides a simple, lightweight, and powerful way of
+writing Juju charms, the best way to encapsulate operational experience in code.
+
+The framework will help you to:
+
+* model the integration of your services
+* manage the lifecycle of your application
+* create reusable and scalable components
+* keep your code simple and readable
+
+## Getting Started
+
+Charms written using the operator framework are just Python code. The intention
+is for it to feel very natural for somebody used to coding in Python, and
+reasonably easy to pick up for somebody who might be a domain expert but not
+necessarily a pythonista themselves.
+
+The dependencies of the operator framework are kept as minimal as possible;
+currently that's Python 3.5 or greater, and `PyYAML` (both are included by
+default in Ubuntu's cloud images from 16.04 on).
+
+
+## A Quick Introduction
+
+Operator framework charms are just Python code. The entry point to your charm is
+a particular Python file. It could be anything that makes sense for your project,
+but let's assume this is `src/charm.py`. This file must be executable (and it
+must have the appropriate shebang line).
+
+You need the usual `metadata.yaml` and (probably) `config.yaml` files, and a
+`requirements.txt` for any Python dependencies. In other words, your project
+might look like this:
+
+```
+my-charm
+├── config.yaml
+├── metadata.yaml
+├── requirements.txt
+└── src/
+    └── charm.py
+```
+
+`src/charm.py` here is the entry point to your charm code. At a minimum, it
+needs to define a subclass of `CharmBase` and pass that into the framework's
+`main` function:
+
+```python
+from ops.charm import CharmBase
+from ops.main import main
+
+class MyCharm(CharmBase):
+    def __init__(self, *args):
+        super().__init__(*args)
+        self.framework.observe(self.on.start, self.on_start)
+
+    def on_start(self, event):
+        pass  # Handle the start event here.
+
+if __name__ == "__main__":
+    main(MyCharm)
+```
+
+That should be enough for you to be able to run
+
+```
+$ charmcraft build
+Done, charm left in 'my-charm.charm'
+$ juju deploy my-charm.charm
+```
+
+> 🛈 More information on [`charmcraft`](https://pypi.org/project/charmcraft/) can
+> also be found on its [github page](https://github.com/canonical/charmcraft).
+
+Happy charming!
+
+## Testing your charms
+
+The operator framework provides a testing harness, so that you can test that
+your charm does the right thing when presented with different scenarios, without
+having to have a full deployment to do so. `pydoc3 ops.testing` has the details
+for that, including this example:
+
+```python
+harness = Harness(MyCharm)
+# Do initial setup here
+relation_id = harness.add_relation('db', 'postgresql')
+# Now instantiate the charm to see events as the model changes
+harness.begin()
+harness.add_relation_unit(relation_id, 'postgresql/0')
+harness.update_relation_data(relation_id, 'postgresql/0', {'key': 'val'})
+# Check that charm has properly handled the relation_joined event for postgresql/0
+self.assertEqual(harness.charm. ...)
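+# (The assertion target is elided in the original docs. As a hypothetical
+# follow-up, harness.get_relation_data(relation_id, 'postgresql/0') returns
+# the data bag set above, so you can assert on it directly.)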
+
+```
+
+## Talk to us
+
+If you need help, have ideas, or would just like to chat with us, reach out on
+IRC: we're in [#smooth-operator] on freenode (or try the [webchat]).
+
+We also pay attention to Juju's [discourse], but currently we don't actively
+post there outside of our little corner of the [docs]; most discussion at this
+stage is on IRC.
+
+[webchat]: https://webchat.freenode.net/#smooth-operator
+[#smooth-operator]: irc://chat.freenode.net/%23smooth-operator
+[discourse]: https://discourse.juju.is/c/charming
+[docs]: https://discourse.juju.is/c/docs/operator-framework
+
+## Operator Framework development
+
+If you want to work on the framework *itself*, you will need Python >= 3.5 and
+the dependencies declared in `requirements-dev.txt` installed in your system.
+Or you can use a virtualenv:
+
+    virtualenv --python=python3 env
+    source env/bin/activate
+    pip install -r requirements-dev.txt
+
+Then you can try `./run_tests`; it should all go green.
+
+If you want to build the documentation, you'll need the requirements from
+`docs/requirements.txt`, or in your virtualenv
+
+    pip install -r docs/requirements.txt
+
+and then you can run `./build_docs`.
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/build_docs b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/build_docs
new file mode 100755
index 0000000000000000000000000000000000000000..af8b892f7568bdda5ac892beaeafad5696e607d3
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/build_docs
@@ -0,0 +1,18 @@
+#!/bin/sh
+
+set -e
+
+flavour=html
+
+if [ "$1" ]; then
+    if [ "$1" = "--help" ] || [ "$1" = "-h" ]; then
+        flavour=help
+    else
+        flavour="$1"
+    fi
+    shift
+fi
+
+cd docs
+
+sphinx-build -M "$flavour" . _build "$@"
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/docs/conf.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/docs/conf.py
new file mode 100644
index 0000000000000000000000000000000000000000..c8d3e3d57b11cbd9ae6bb144336728e433ae1d28
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/docs/conf.py
@@ -0,0 +1,97 @@
+# Configuration file for the Sphinx documentation builder.
+#
+# For a full list of options see the documentation:
+# https://www.sphinx-doc.org/en/master/usage/configuration.html
+
+
+# -- Path setup --------------------------------------------------------------
+
+from pathlib import Path
+import sys
+sys.path.insert(0, str(Path(__file__).parent.parent))
+
+
+# -- Project information -----------------------------------------------------
+
+project = 'The Operator Framework'
+copyright = '2019-2020, Canonical Ltd.'
+author = 'Canonical Ltd'
+
+
+# -- General configuration ---------------------------------------------------
+
+# Add any Sphinx extension module names here, as strings. They can be
+# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
+# ones.
+extensions = [
+    'sphinx.ext.autodoc',
+    'sphinx.ext.napoleon',
+    'sphinx.ext.todo',
+    'sphinx.ext.viewcode',
+    'sphinx.ext.intersphinx',
+]
+
+# The document name of the “master” document, that is, the document
+# that contains the root toctree directive.
+master_doc = 'index'
+
+# Add any paths that contain templates here, relative to this directory.
+templates_path = ['_templates']
+
+# List of patterns, relative to source directory, that match files and
+# directories to ignore when looking for source files.
+# This pattern also affects html_static_path and html_extra_path.
+exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']
+
+
+# -- Options for HTML output -------------------------------------------------
+
+# The theme to use for HTML and HTML Help pages.  See the documentation for
+# a list of builtin themes.
+#
+html_theme = 'sphinx_rtd_theme'  # 'alabaster'
+
+# Add any paths that contain custom static files (such as style sheets) here,
+# relative to this directory. They are copied after the builtin static files,
+# so a file named "default.css" will overwrite the builtin "default.css".
+html_static_path = ['_static']
+
+
+# -- Options for sphinx.ext.todo ---------------------------------------------
+
+# If this is True, todo and todolist produce output, else they
+# produce nothing. The default is False.
+todo_include_todos = False
+
+
+# -- Options for sphinx.ext.autodoc ------------------------------------------
+
+# This value controls how to represent typehints. The setting takes the
+# following values:
+#     'signature' – Show typehints as its signature (default)
+#     'description' – Show typehints as content of function or method
+#     'none' – Do not show typehints
+autodoc_typehints = 'description'
+
+# This value selects what content will be inserted into the main body of an
+# autoclass directive. The possible values are:
+#     'class' - Only the class’ docstring is inserted. This is the
+#               default. You can still document __init__ as a separate method
+#               using automethod or the members option to autoclass.
+#     'both'  - Both the class’ and the __init__ method’s docstring are
+#               concatenated and inserted.
+#     'init'  - Only the __init__ method’s docstring is inserted.
+autoclass_content = 'both'
+
+autodoc_default_options = {
+    'members': None,  # None here means "yes"
+    'undoc-members': None,
+    'show-inheritance': None,
+}
+
+
+# -- Options for sphinx.ext.intersphinx --------------------------------------
+
+# This config value contains the locations and names of other projects
+# that should be linked to in this documentation.
+intersphinx_mapping = {'python': ('https://docs.python.org/3', None)}
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/docs/index.rst b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/docs/index.rst
new file mode 100644
index 0000000000000000000000000000000000000000..424d78d423e1e7dd5d4049c9f88ffaec994c1714
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/docs/index.rst
@@ -0,0 +1,58 @@
+
+Welcome to The Operator Framework's documentation!
+==================================================
+
+.. toctree::
+   :maxdepth: 2
+   :caption: Contents:
+
+ops package
+===========
+
+.. automodule:: ops
+
+Submodules
+----------
+
+ops.charm module
+----------------
+
+.. automodule:: ops.charm
+
+ops.framework module
+--------------------
+
+.. automodule:: ops.framework
+
+ops.jujuversion module
+----------------------
+
+.. automodule:: ops.jujuversion
+
+ops.log module
+--------------
+
+.. automodule:: ops.log
+
+ops.main module
+---------------
+
+.. automodule:: ops.main
+
+ops.model module
+----------------
+
+.. automodule:: ops.model
+
+ops.testing module
+------------------
+
+.. automodule:: ops.testing
+
+
+Indices and tables
+==================
+
+* :ref:`genindex`
+* :ref:`modindex`
+* :ref:`search`
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/docs/requirements.txt b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/docs/requirements.txt
new file mode 100644
index 0000000000000000000000000000000000000000..217e49dd6e684b07eaac18aec4c2f717e5f9f5f0
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/docs/requirements.txt
@@ -0,0 +1,5 @@
+-r ../requirements.txt
+
+sphinx==3.*
+sphinx_rtd_theme
+
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/ops/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/ops/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..f17b2969db298b21bc47bbe1d3614ccff93e9c6e
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/ops/__init__.py
@@ -0,0 +1,20 @@
+# Copyright 2020 Canonical Ltd.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""The Operator Framework."""
+
+from .version import version as __version__  # noqa: F401 (imported but unused)
+
+# Import here the bare minimum to break the circular import between modules
+from . import charm  # noqa: F401 (imported but unused)
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/ops/charm.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/ops/charm.py
new file mode 100755
index 0000000000000000000000000000000000000000..d898de859fc444814bc19a7f8f0caaaec6f7e5f4
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/ops/charm.py
@@ -0,0 +1,575 @@
+# Copyright 2019-2020 Canonical Ltd.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import enum
+import os
+import pathlib
+import typing
+
+import yaml
+
+from ops.framework import Object, EventSource, EventBase, Framework, ObjectEvents
+from ops import model
+
+
+def _loadYaml(source):
+    if yaml.__with_libyaml__:
+        return yaml.load(source, Loader=yaml.CSafeLoader)
+    return yaml.load(source, Loader=yaml.SafeLoader)
+
+
+class HookEvent(EventBase):
+    """A base class for events that trigger because of a Juju hook firing."""
+
+
+class ActionEvent(EventBase):
+    """A base class for events that trigger when a user asks for an Action to be run.
+
+    To read the parameters for the action, see the instance variable `params`.
+    To respond with the result of the action, call `set_results`.
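+    (For example, a hypothetical handler might finish with
+    `event.set_results({'output': 'done'})`.)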
+    To add progress messages that are visible as the action is progressing,
+    use `log`.
+
+    :ivar params: The parameters passed to the action (read by action-get)
+    """
+
+    def defer(self):
+        """Action events are not deferrable like other events.
+
+        This is because an action runs synchronously and the user is waiting for the result.
+        """
+        raise RuntimeError('cannot defer action events')
+
+    def restore(self, snapshot: dict) -> None:
+        """Used by the operator framework to record the action.
+
+        Not meant to be called directly by Charm code.
+        """
+        env_action_name = os.environ.get('JUJU_ACTION_NAME')
+        event_action_name = self.handle.kind[:-len('_action')].replace('_', '-')
+        if event_action_name != env_action_name:
+            # This could only happen if the dev manually emits the action, or from a bug.
+            raise RuntimeError('action event kind does not match current action')
+        # Params are loaded at restore rather than __init__ because
+        # the model is not available in __init__.
+        self.params = self.framework.model._backend.action_get()
+
+    def set_results(self, results: typing.Mapping) -> None:
+        """Report the result of the action.
+
+        Args:
+            results: The result of the action as a Dict
+        """
+        self.framework.model._backend.action_set(results)
+
+    def log(self, message: str) -> None:
+        """Send a message that a user will see while the action is running.
+
+        Args:
+            message: The message for the user.
+        """
+        self.framework.model._backend.action_log(message)
+
+    def fail(self, message: str = '') -> None:
+        """Report that this action has failed.
+
+        Args:
+            message: Optional message to record why it has failed.
+        """
+        self.framework.model._backend.action_fail(message)
+
+
+class InstallEvent(HookEvent):
+    """Represents the `install` hook from Juju."""
+
+
+class StartEvent(HookEvent):
+    """Represents the `start` hook from Juju."""
+
+
+class StopEvent(HookEvent):
+    """Represents the `stop` hook from Juju."""
+
+
+class RemoveEvent(HookEvent):
+    """Represents the `remove` hook from Juju."""
+
+
+class ConfigChangedEvent(HookEvent):
+    """Represents the `config-changed` hook from Juju."""
+
+
+class UpdateStatusEvent(HookEvent):
+    """Represents the `update-status` hook from Juju."""
+
+
+class UpgradeCharmEvent(HookEvent):
+    """Represents the `upgrade-charm` hook from Juju.
+
+    This will be triggered when a user has run `juju upgrade-charm`. It is run after Juju
+    has unpacked the upgraded charm code, and so this event will be handled with new code.
+    """
+
+
+class PreSeriesUpgradeEvent(HookEvent):
+    """Represents the `pre-series-upgrade` hook from Juju.
+
+    This happens when a user has run `juju upgrade-series MACHINE prepare` and
+    will fire for each unit that is running on the machine, telling them that
+    the user is preparing to upgrade the machine's series (eg trusty->bionic).
+    The charm should take actions to prepare for the upgrade (a database charm
+    would want to write out a version-independent dump of the database, so that
+    when a new version of the database is available in a new series, it can be
+    used.)
+    Once all units on a machine have run `pre-series-upgrade`, the user will
+    initiate the steps to actually upgrade the machine (eg `do-release-upgrade`).
+    When the upgrade has been completed, the :class:`PostSeriesUpgradeEvent` will fire.
+    """
+
+
+class PostSeriesUpgradeEvent(HookEvent):
+    """Represents the `post-series-upgrade` hook from Juju.
+
+    This is run after the user has done a distribution upgrade (or rolled back
+    and kept the same series).
It is called in response to + `juju upgrade-series MACHINE complete`. Charms are expected to do whatever + steps are necessary to reconfigure their applications for the new series. + """ + + +class LeaderElectedEvent(HookEvent): + """Represents the `leader-elected` hook from Juju. + + Juju will trigger this when a new lead unit is chosen for a given application. + This represents the leader of the charm information (not necessarily the primary + of a running application). The main utility is that charm authors can know + that only one unit will be a leader at any given time, so they can do + configuration, etc, that would otherwise require coordination between units. + (eg, selecting a password for a new relation) + """ + + +class LeaderSettingsChangedEvent(HookEvent): + """Represents the `leader-settings-changed` hook from Juju. + + Deprecated. This represents when a lead unit would call `leader-set` to inform + the other units of an application that they have new information to handle. + This has been deprecated in favor of using a Peer relation, and having the + leader set a value in the Application data bag for that peer relation. + (see :class:`RelationChangedEvent`). + """ + + +class CollectMetricsEvent(HookEvent): + """Represents the `collect-metrics` hook from Juju. + + Note that events firing during a CollectMetricsEvent are currently + sandboxed in how they can interact with Juju. To report metrics + use :meth:`.add_metrics`. + """ + + def add_metrics(self, metrics: typing.Mapping, labels: typing.Mapping = None) -> None: + """Record metrics that have been gathered by the charm for this unit. + + Args: + metrics: A collection of {key: float} pairs that contains the + metrics that have been gathered + labels: {key:value} strings that can be applied to the + metrics that are being gathered + """ + self.framework.model._backend.add_metrics(metrics, labels) + + +class RelationEvent(HookEvent): + """A base class representing the various relation lifecycle events. + + Charmers should not be creating RelationEvents directly. The events will be + generated by the framework from Juju related events. Users can observe them + from the various `CharmBase.on[relation_name].relation_*` events. + + Attributes: + relation: The Relation involved in this event + app: The remote application that has triggered this event + unit: The remote unit that has triggered this event. This may be None + if the relation event was triggered as an Application level event + """ + + def __init__(self, handle, relation, app=None, unit=None): + super().__init__(handle) + + if unit is not None and unit.app != app: + raise RuntimeError( + 'cannot create RelationEvent with application {} and unit {}'.format(app, unit)) + + self.relation = relation + self.app = app + self.unit = unit + + def snapshot(self) -> dict: + """Used by the framework to serialize the event to disk. + + Not meant to be called by Charm code. + """ + snapshot = { + 'relation_name': self.relation.name, + 'relation_id': self.relation.id, + } + if self.app: + snapshot['app_name'] = self.app.name + if self.unit: + snapshot['unit_name'] = self.unit.name + return snapshot + + def restore(self, snapshot: dict) -> None: + """Used by the framework to deserialize the event from disk. + + Not meant to be called by Charm code. 
+ """ + self.relation = self.framework.model.get_relation( + snapshot['relation_name'], snapshot['relation_id']) + + app_name = snapshot.get('app_name') + if app_name: + self.app = self.framework.model.get_app(app_name) + else: + self.app = None + + unit_name = snapshot.get('unit_name') + if unit_name: + self.unit = self.framework.model.get_unit(unit_name) + else: + self.unit = None + + +class RelationCreatedEvent(RelationEvent): + """Represents the `relation-created` hook from Juju. + + This is triggered when a new relation to another app is added in Juju. This + can occur before units for those applications have started. All existing + relations should be established before start. + """ + + +class RelationJoinedEvent(RelationEvent): + """Represents the `relation-joined` hook from Juju. + + This is triggered whenever a new unit of a related application joins the relation. + (eg, a unit was added to an existing related app, or a new relation was established + with an application that already had units.) + """ + + +class RelationChangedEvent(RelationEvent): + """Represents the `relation-changed` hook from Juju. + + This is triggered whenever there is a change to the data bucket for a related + application or unit. Look at `event.relation.data[event.unit/app]` to see the + new information. + """ + + +class RelationDepartedEvent(RelationEvent): + """Represents the `relation-departed` hook from Juju. + + This is the inverse of the RelationJoinedEvent, representing when a unit + is leaving the relation (the unit is being removed, the app is being removed, + the relation is being removed). It is fired once for each unit that is + going away. + """ + + +class RelationBrokenEvent(RelationEvent): + """Represents the `relation-broken` hook from Juju. + + If a relation is being removed (`juju remove-relation` or `juju remove-application`), + once all the units have been removed, RelationBrokenEvent will fire to signal + that the relationship has been fully terminated. + """ + + +class StorageEvent(HookEvent): + """Base class representing Storage related events.""" + + +class StorageAttachedEvent(StorageEvent): + """Represents the `storage-attached` hook from Juju. + + Called when new storage is available for the charm to use. + """ + + +class StorageDetachingEvent(StorageEvent): + """Represents the `storage-detaching` hook from Juju. + + Called when storage a charm has been using is going away. + """ + + +class CharmEvents(ObjectEvents): + """The events that are generated by Juju in response to the lifecycle of an application.""" + + install = EventSource(InstallEvent) + start = EventSource(StartEvent) + stop = EventSource(StopEvent) + remove = EventSource(RemoveEvent) + update_status = EventSource(UpdateStatusEvent) + config_changed = EventSource(ConfigChangedEvent) + upgrade_charm = EventSource(UpgradeCharmEvent) + pre_series_upgrade = EventSource(PreSeriesUpgradeEvent) + post_series_upgrade = EventSource(PostSeriesUpgradeEvent) + leader_elected = EventSource(LeaderElectedEvent) + leader_settings_changed = EventSource(LeaderSettingsChangedEvent) + collect_metrics = EventSource(CollectMetricsEvent) + + +class CharmBase(Object): + """Base class that represents the Charm overall. + + Usually this initialization is done by ops.main.main() rather than Charm authors + directly instantiating a Charm. + + Args: + framework: The framework responsible for managing the Model and events for this + Charm. + key: Ignored; will remove after deprecation period of the signature change. 
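+
+    A minimal subclass might look like this (an illustrative sketch; see the
+    project README for a fuller example):
+
+        class MyCharm(CharmBase):
+            def __init__(self, *args):
+                super().__init__(*args)
+                self.framework.observe(self.on.config_changed, self._on_config_changed)
+
+            def _on_config_changed(self, event):
+                pass  # react to the new configuration here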
+ """ + + on = CharmEvents() + + def __init__(self, framework: Framework, key: typing.Optional = None): + super().__init__(framework, None) + + for relation_name in self.framework.meta.relations: + relation_name = relation_name.replace('-', '_') + self.on.define_event(relation_name + '_relation_created', RelationCreatedEvent) + self.on.define_event(relation_name + '_relation_joined', RelationJoinedEvent) + self.on.define_event(relation_name + '_relation_changed', RelationChangedEvent) + self.on.define_event(relation_name + '_relation_departed', RelationDepartedEvent) + self.on.define_event(relation_name + '_relation_broken', RelationBrokenEvent) + + for storage_name in self.framework.meta.storages: + storage_name = storage_name.replace('-', '_') + self.on.define_event(storage_name + '_storage_attached', StorageAttachedEvent) + self.on.define_event(storage_name + '_storage_detaching', StorageDetachingEvent) + + for action_name in self.framework.meta.actions: + action_name = action_name.replace('-', '_') + self.on.define_event(action_name + '_action', ActionEvent) + + @property + def app(self) -> model.Application: + """Application that this unit is part of.""" + return self.framework.model.app + + @property + def unit(self) -> model.Unit: + """Unit that this execution is responsible for.""" + return self.framework.model.unit + + @property + def meta(self) -> 'CharmMeta': + """CharmMeta of this charm. + """ + return self.framework.meta + + @property + def charm_dir(self) -> pathlib.Path: + """Root directory of the Charm as it is running. + """ + return self.framework.charm_dir + + +class CharmMeta: + """Object containing the metadata for the charm. + + This is read from metadata.yaml and/or actions.yaml. Generally charms will + define this information, rather than reading it at runtime. This class is + mostly for the framework to understand what the charm has defined. + + The maintainers, tags, terms, series, and extra_bindings attributes are all + lists of strings. The requires, provides, peers, relations, storage, + resources, and payloads attributes are all mappings of names to instances + of the respective RelationMeta, StorageMeta, ResourceMeta, or PayloadMeta. + + The relations attribute is a convenience accessor which includes all of the + requires, provides, and peers RelationMeta items. If needed, the role of + the relation definition can be obtained from its role attribute. + + Attributes: + name: The name of this charm + summary: Short description of what this charm does + description: Long description for this charm + maintainers: A list of strings of the email addresses of the maintainers + of this charm. + tags: Charm store tag metadata for categories associated with this charm. + terms: Charm store terms that should be agreed to before this charm can + be deployed. (Used for things like licensing issues.) + series: The list of supported OS series that this charm can support. + The first entry in the list is the default series that will be + used by deploy if no other series is requested by the user. + subordinate: True/False whether this charm is intended to be used as a + subordinate charm. + min_juju_version: If supplied, indicates this charm needs features that + are not available in older versions of Juju. + requires: A dict of {name: :class:`RelationMeta` } for each 'requires' relation. + provides: A dict of {name: :class:`RelationMeta` } for each 'provides' relation. + peers: A dict of {name: :class:`RelationMeta` } for each 'peer' relation. 
+ relations: A dict containing all :class:`RelationMeta` attributes (merged from other + sections) + storages: A dict of {name: :class:`StorageMeta`} for each defined storage. + resources: A dict of {name: :class:`ResourceMeta`} for each defined resource. + payloads: A dict of {name: :class:`PayloadMeta`} for each defined payload. + extra_bindings: A dict of additional named bindings that a charm can use + for network configuration. + actions: A dict of {name: :class:`ActionMeta`} for actions that the charm has defined. + Args: + raw: a mapping containing the contents of metadata.yaml + actions_raw: a mapping containing the contents of actions.yaml + """ + + def __init__(self, raw: dict = {}, actions_raw: dict = {}): + self.name = raw.get('name', '') + self.summary = raw.get('summary', '') + self.description = raw.get('description', '') + self.maintainers = [] + if 'maintainer' in raw: + self.maintainers.append(raw['maintainer']) + if 'maintainers' in raw: + self.maintainers.extend(raw['maintainers']) + self.tags = raw.get('tags', []) + self.terms = raw.get('terms', []) + self.series = raw.get('series', []) + self.subordinate = raw.get('subordinate', False) + self.min_juju_version = raw.get('min-juju-version') + self.requires = {name: RelationMeta(RelationRole.requires, name, rel) + for name, rel in raw.get('requires', {}).items()} + self.provides = {name: RelationMeta(RelationRole.provides, name, rel) + for name, rel in raw.get('provides', {}).items()} + self.peers = {name: RelationMeta(RelationRole.peer, name, rel) + for name, rel in raw.get('peers', {}).items()} + self.relations = {} + self.relations.update(self.requires) + self.relations.update(self.provides) + self.relations.update(self.peers) + self.storages = {name: StorageMeta(name, storage) + for name, storage in raw.get('storage', {}).items()} + self.resources = {name: ResourceMeta(name, res) + for name, res in raw.get('resources', {}).items()} + self.payloads = {name: PayloadMeta(name, payload) + for name, payload in raw.get('payloads', {}).items()} + self.extra_bindings = raw.get('extra-bindings', {}) + self.actions = {name: ActionMeta(name, action) for name, action in actions_raw.items()} + + @classmethod + def from_yaml( + cls, metadata: typing.Union[str, typing.TextIO], + actions: typing.Optional[typing.Union[str, typing.TextIO]] = None): + """Instantiate a CharmMeta from a YAML description of metadata.yaml. + + Args: + metadata: A YAML description of charm metadata (name, relations, etc.) + This can be a simple string, or a file-like object. (passed to `yaml.safe_load`). + actions: YAML description of Actions for this charm (eg actions.yaml) + """ + meta = _loadYaml(metadata) + raw_actions = {} + if actions is not None: + raw_actions = _loadYaml(actions) + return cls(meta, raw_actions) + + +class RelationRole(enum.Enum): + peer = 'peer' + requires = 'requires' + provides = 'provides' + + def is_peer(self) -> bool: + """Return whether the current role is peer. + + A convenience to avoid having to import charm. + """ + return self is RelationRole.peer + + +class RelationMeta: + """Object containing metadata about a relation definition. + + Should not be constructed directly by Charm code. Is gotten from one of + :attr:`CharmMeta.peers`, :attr:`CharmMeta.requires`, :attr:`CharmMeta.provides`, + or :attr:`CharmMeta.relations`. + + Attributes: + role: This is one of peer/requires/provides + relation_name: Name of this relation from metadata.yaml + interface_name: Optional definition of the interface protocol. 
+ scope: "global" or "container" scope based on how the relation should be used. + """ + + def __init__(self, role: RelationRole, relation_name: str, raw: dict): + if not isinstance(role, RelationRole): + raise TypeError("role should be a Role, not {!r}".format(role)) + self.role = role + self.relation_name = relation_name + self.interface_name = raw['interface'] + self.scope = raw.get('scope') + + +class StorageMeta: + """Object containing metadata about a storage definition.""" + + def __init__(self, name, raw): + self.storage_name = name + self.type = raw['type'] + self.description = raw.get('description', '') + self.shared = raw.get('shared', False) + self.read_only = raw.get('read-only', False) + self.minimum_size = raw.get('minimum-size') + self.location = raw.get('location') + self.multiple_range = None + if 'multiple' in raw: + range = raw['multiple']['range'] + if '-' not in range: + self.multiple_range = (int(range), int(range)) + else: + range = range.split('-') + self.multiple_range = (int(range[0]), int(range[1]) if range[1] else None) + + +class ResourceMeta: + """Object containing metadata about a resource definition.""" + + def __init__(self, name, raw): + self.resource_name = name + self.type = raw['type'] + self.filename = raw.get('filename', None) + self.description = raw.get('description', '') + + +class PayloadMeta: + """Object containing metadata about a payload definition.""" + + def __init__(self, name, raw): + self.payload_name = name + self.type = raw['type'] + + +class ActionMeta: + """Object containing metadata about an action's definition.""" + + def __init__(self, name, raw=None): + raw = raw or {} + self.name = name + self.title = raw.get('title', '') + self.description = raw.get('description', '') + self.parameters = raw.get('params', {}) # {: } + self.required = raw.get('required', []) # [, ...] diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/ops/framework.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/ops/framework.py new file mode 100755 index 0000000000000000000000000000000000000000..b7c4749ff2b5bfb4f354bf1a8d4cd6ed64cf0da5 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/ops/framework.py @@ -0,0 +1,1067 @@ +# Copyright 2020 Canonical Ltd. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import collections +import collections.abc +import inspect +import keyword +import logging +import marshal +import os +import pathlib +import pdb +import re +import sys +import types +import weakref + +from ops import charm +from ops.storage import ( + NoSnapshotError, + SQLiteStorage, +) + +logger = logging.getLogger(__name__) + + +class Handle: + """Handle defines a name for an object in the form of a hierarchical path. + + The provided parent is the object (or that object's handle) that this handle + sits under, or None if the object identified by this handle stands by itself + as the root of its own hierarchy. 
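+
+    For example, an event emitted via a charm's `on` attribute ends up with a
+    handle path like "MyCharm/on/install[5]" (an illustrative charm class name):
+    parent path "MyCharm/on", kind "install", key "5". `_event_regex` below
+    matches exactly this shape.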
+ + The handle kind is a string that defines a namespace so objects with the + same parent and kind will have unique keys. + + The handle key is a string uniquely identifying the object. No other objects + under the same parent and kind may have the same key. + """ + + def __init__(self, parent, kind, key): + if parent and not isinstance(parent, Handle): + parent = parent.handle + self._parent = parent + self._kind = kind + self._key = key + if parent: + if key: + self._path = "{}/{}[{}]".format(parent, kind, key) + else: + self._path = "{}/{}".format(parent, kind) + else: + if key: + self._path = "{}[{}]".format(kind, key) + else: + self._path = "{}".format(kind) + + def nest(self, kind, key): + return Handle(self, kind, key) + + def __hash__(self): + return hash((self.parent, self.kind, self.key)) + + def __eq__(self, other): + return (self.parent, self.kind, self.key) == (other.parent, other.kind, other.key) + + def __str__(self): + return self.path + + @property + def parent(self): + return self._parent + + @property + def kind(self): + return self._kind + + @property + def key(self): + return self._key + + @property + def path(self): + return self._path + + @classmethod + def from_path(cls, path): + handle = None + for pair in path.split("/"): + pair = pair.split("[") + good = False + if len(pair) == 1: + kind, key = pair[0], None + good = True + elif len(pair) == 2: + kind, key = pair + if key and key[-1] == ']': + key = key[:-1] + good = True + if not good: + raise RuntimeError("attempted to restore invalid handle path {}".format(path)) + handle = Handle(handle, kind, key) + return handle + + +class EventBase: + + def __init__(self, handle): + self.handle = handle + self.deferred = False + + def defer(self): + self.deferred = True + + def snapshot(self): + """Return the snapshot data that should be persisted. + + Subclasses must override to save any custom state. + """ + return None + + def restore(self, snapshot): + """Restore the value state from the given snapshot. + + Subclasses must override to restore their custom state. + """ + self.deferred = False + + +class EventSource: + """EventSource wraps an event type with a descriptor to facilitate observing and emitting. + + It is generally used as: + + class SomethingHappened(EventBase): + pass + + class SomeObject(Object): + something_happened = EventSource(SomethingHappened) + + With that, instances of that type will offer the someobj.something_happened + attribute which is a BoundEvent and may be used to emit and observe the event. + """ + + def __init__(self, event_type): + if not isinstance(event_type, type) or not issubclass(event_type, EventBase): + raise RuntimeError( + 'Event requires a subclass of EventBase as an argument, got {}'.format(event_type)) + self.event_type = event_type + self.event_kind = None + self.emitter_type = None + + def _set_name(self, emitter_type, event_kind): + if self.event_kind is not None: + raise RuntimeError( + 'EventSource({}) reused as {}.{} and {}.{}'.format( + self.event_type.__name__, + self.emitter_type.__name__, + self.event_kind, + emitter_type.__name__, + event_kind, + )) + self.event_kind = event_kind + self.emitter_type = emitter_type + + def __get__(self, emitter, emitter_type=None): + if emitter is None: + return self + # Framework might not be available if accessed as CharmClass.on.event + # rather than charm_instance.on.event, but in that case it couldn't be + # emitted anyway, so there's no point to registering it. 
+        framework = getattr(emitter, 'framework', None)
+        if framework is not None:
+            framework.register_type(self.event_type, emitter, self.event_kind)
+        return BoundEvent(emitter, self.event_type, self.event_kind)
+
+
+class BoundEvent:
+
+    def __repr__(self):
+        return '<BoundEvent {} bound to {}.{} at {}>'.format(
+            self.event_type.__name__,
+            type(self.emitter).__name__,
+            self.event_kind,
+            hex(id(self)),
+        )
+
+    def __init__(self, emitter, event_type, event_kind):
+        self.emitter = emitter
+        self.event_type = event_type
+        self.event_kind = event_kind
+
+    def emit(self, *args, **kwargs):
+        """Emit event to all registered observers.
+
+        The current storage state is committed before and after each observer is notified.
+        """
+        framework = self.emitter.framework
+        key = framework._next_event_key()
+        event = self.event_type(Handle(self.emitter, self.event_kind, key), *args, **kwargs)
+        framework._emit(event)
+
+
+class HandleKind:
+    """Helper descriptor to define the Object.handle_kind field.
+
+    The handle_kind for an object defaults to its type name, but it may
+    be explicitly overridden if desired.
+    """
+
+    def __get__(self, obj, obj_type):
+        kind = obj_type.__dict__.get("handle_kind")
+        if kind:
+            return kind
+        return obj_type.__name__
+
+
+class _Metaclass(type):
+    """Helper class to ensure proper instantiation of Object-derived classes.
+
+    This class currently has a single purpose: events derived from EventSource
+    that are class attributes of Object-derived classes need to be told what
+    their name is in that class. For example, in
+
+        class SomeObject(Object):
+            something_happened = EventSource(SomethingHappened)
+
+    the instance of EventSource needs to know it's called 'something_happened'.
+
+    Starting from python 3.6 we could use __set_name__ on EventSource for this,
+    but until then this (meta)class does the equivalent work.
+
+    TODO: when we drop support for 3.5 drop this class, and rename _set_name in
+    EventSource to __set_name__; everything should continue to work.
+
+    """
+
+    def __new__(typ, *a, **kw):
+        k = super().__new__(typ, *a, **kw)
+        # k is now the Object-derived class; loop over its class attributes
+        for n, v in vars(k).items():
+            # we could do duck typing here if we want to support
+            # non-EventSource-derived shenanigans. We don't.
+            if isinstance(v, EventSource):
+                # this is what 3.6+ does automatically for us:
+                v._set_name(k, n)
+        return k
+
+
+class Object(metaclass=_Metaclass):
+
+    handle_kind = HandleKind()
+
+    def __init__(self, parent, key):
+        kind = self.handle_kind
+        if isinstance(parent, Framework):
+            self.framework = parent
+            # Avoid Framework instances having a circular reference to themselves.
+            if self.framework is self:
+                self.framework = weakref.proxy(self.framework)
+            self.handle = Handle(None, kind, key)
+        else:
+            self.framework = parent.framework
+            self.handle = Handle(parent, kind, key)
+        self.framework._track(self)
+
+        # TODO Detect conflicting handles here.
+
+    @property
+    def model(self):
+        return self.framework.model
+
+
+class ObjectEvents(Object):
+    """Convenience type to allow defining .on attributes at class level."""
+
+    handle_kind = "on"
+
+    def __init__(self, parent=None, key=None):
+        if parent is not None:
+            super().__init__(parent, key)
+        else:
+            self._cache = weakref.WeakKeyDictionary()
+
+    def __get__(self, emitter, emitter_type):
+        if emitter is None:
+            return self
+        instance = self._cache.get(emitter)
+        if instance is None:
+            # Same type, different instance, more data.
Doing this unusual construct + # means people can subclass just this one class to have their own 'on'. + instance = self._cache[emitter] = type(self)(emitter) + return instance + + @classmethod + def define_event(cls, event_kind, event_type): + """Define an event on this type at runtime. + + cls: a type to define an event on. + + event_kind: an attribute name that will be used to access the + event. Must be a valid python identifier, not be a keyword + or an existing attribute. + + event_type: a type of the event to define. + + """ + prefix = 'unable to define an event with event_kind that ' + if not event_kind.isidentifier(): + raise RuntimeError(prefix + 'is not a valid python identifier: ' + event_kind) + elif keyword.iskeyword(event_kind): + raise RuntimeError(prefix + 'is a python keyword: ' + event_kind) + try: + getattr(cls, event_kind) + raise RuntimeError( + prefix + 'overlaps with an existing type {} attribute: {}'.format(cls, event_kind)) + except AttributeError: + pass + + event_descriptor = EventSource(event_type) + event_descriptor._set_name(cls, event_kind) + setattr(cls, event_kind, event_descriptor) + + def events(self): + """Return a mapping of event_kinds to bound_events for all available events. + """ + events_map = {} + # We have to iterate over the class rather than instance to allow for properties which + # might call this method (e.g., event views), leading to infinite recursion. + for attr_name, attr_value in inspect.getmembers(type(self)): + if isinstance(attr_value, EventSource): + # We actually care about the bound_event, however, since it + # provides the most info for users of this method. + event_kind = attr_name + bound_event = getattr(self, event_kind) + events_map[event_kind] = bound_event + return events_map + + def __getitem__(self, key): + return PrefixedEvents(self, key) + + +class PrefixedEvents: + + def __init__(self, emitter, key): + self._emitter = emitter + self._prefix = key.replace("-", "_") + '_' + + def __getattr__(self, name): + return getattr(self._emitter, self._prefix + name) + + +class PreCommitEvent(EventBase): + pass + + +class CommitEvent(EventBase): + pass + + +class FrameworkEvents(ObjectEvents): + pre_commit = EventSource(PreCommitEvent) + commit = EventSource(CommitEvent) + + +class NoTypeError(Exception): + + def __init__(self, handle_path): + self.handle_path = handle_path + + def __str__(self): + return "cannot restore {} since no class was registered for it".format(self.handle_path) + + +# the message to show to the user when a pdb breakpoint goes active +_BREAKPOINT_WELCOME_MESSAGE = """ +Starting pdb to debug charm operator. +Run `h` for help, `c` to continue, or `exit`/CTRL-d to abort. +Future breakpoints may interrupt execution again. +More details at https://discourse.jujucharms.com/t/debugging-charm-hooks + +""" + + +_event_regex = r'^(|.*/)on/[a-zA-Z_]+\[\d+\]$' + + +class Framework(Object): + + on = FrameworkEvents() + + # Override properties from Object so that we can set them in __init__. 
+ model = None + meta = None + charm_dir = None + + def __init__(self, storage, charm_dir, meta, model): + + super().__init__(self, None) + + self.charm_dir = charm_dir + self.meta = meta + self.model = model + self._observers = [] # [(observer_path, method_name, parent_path, event_key)] + self._observer = weakref.WeakValueDictionary() # {observer_path: observer} + self._objects = weakref.WeakValueDictionary() + self._type_registry = {} # {(parent_path, kind): cls} + self._type_known = set() # {cls} + + if isinstance(storage, (str, pathlib.Path)): + logger.warning( + "deprecated: Framework now takes a Storage not a path") + storage = SQLiteStorage(storage) + self._storage = storage + + # We can't use the higher-level StoredState because it relies on events. + self.register_type(StoredStateData, None, StoredStateData.handle_kind) + stored_handle = Handle(None, StoredStateData.handle_kind, '_stored') + try: + self._stored = self.load_snapshot(stored_handle) + except NoSnapshotError: + self._stored = StoredStateData(self, '_stored') + self._stored['event_count'] = 0 + + # Hook into builtin breakpoint, so if Python >= 3.7, devs will be able to just do + # breakpoint(); if Python < 3.7, this doesn't affect anything + sys.breakpointhook = self.breakpoint + + # Flag to indicate that we already presented the welcome message in a debugger breakpoint + self._breakpoint_welcomed = False + + # Parse once the env var, which may be used multiple times later + debug_at = os.environ.get('JUJU_DEBUG_AT') + self._juju_debug_at = debug_at.split(',') if debug_at else () + + def close(self): + self._storage.close() + + def _track(self, obj): + """Track object and ensure it is the only object created using its handle path.""" + if obj is self: + # Framework objects don't track themselves + return + if obj.handle.path in self.framework._objects: + raise RuntimeError( + 'two objects claiming to be {} have been created'.format(obj.handle.path)) + self._objects[obj.handle.path] = obj + + def _forget(self, obj): + """Stop tracking the given object. See also _track.""" + self._objects.pop(obj.handle.path, None) + + def commit(self): + # Give a chance for objects to persist data they want to before a commit is made. + self.on.pre_commit.emit() + # Make sure snapshots are saved by instances of StoredStateData. Any possible state + # modifications in on_commit handlers of instances of other classes will not be persisted. + self.on.commit.emit() + # Save our event count after all events have been emitted. + self.save_snapshot(self._stored) + self._storage.commit() + + def register_type(self, cls, parent, kind=None): + if parent and not isinstance(parent, Handle): + parent = parent.handle + if parent: + parent_path = parent.path + else: + parent_path = None + if not kind: + kind = cls.handle_kind + self._type_registry[(parent_path, kind)] = cls + self._type_known.add(cls) + + def save_snapshot(self, value): + """Save a persistent snapshot of the provided value. + + The provided value must implement the following interface: + + value.handle = Handle(...) + value.snapshot() => {...} # Simple builtin types only. + value.restore(snapshot) # Restore custom state from prior snapshot. 
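+
+        A minimal sketch of a type satisfying this interface (the names here
+        are illustrative only):
+
+            class Counter(Object):
+                def snapshot(self):
+                    return {'count': self._count}
+
+                def restore(self, snapshot):
+                    self._count = snapshot.get('count', 0)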
+ """ + if type(value) not in self._type_known: + raise RuntimeError( + 'cannot save {} values before registering that type'.format(type(value).__name__)) + data = value.snapshot() + + # Use marshal as a validator, enforcing the use of simple types, as we later the + # information is really pickled, which is too error prone for future evolution of the + # stored data (e.g. if the developer stores a custom object and later changes its + # class name; when unpickling the original class will not be there and event + # data loading will fail). + try: + marshal.dumps(data) + except ValueError: + msg = "unable to save the data for {}, it must contain only simple types: {!r}" + raise ValueError(msg.format(value.__class__.__name__, data)) + + self._storage.save_snapshot(value.handle.path, data) + + def load_snapshot(self, handle): + parent_path = None + if handle.parent: + parent_path = handle.parent.path + cls = self._type_registry.get((parent_path, handle.kind)) + if not cls: + raise NoTypeError(handle.path) + data = self._storage.load_snapshot(handle.path) + obj = cls.__new__(cls) + obj.framework = self + obj.handle = handle + obj.restore(data) + self._track(obj) + return obj + + def drop_snapshot(self, handle): + self._storage.drop_snapshot(handle.path) + + def observe(self, bound_event: BoundEvent, observer: types.MethodType): + """Register observer to be called when bound_event is emitted. + + The bound_event is generally provided as an attribute of the object that emits + the event, and is created in this style: + + class SomeObject: + something_happened = Event(SomethingHappened) + + That event may be observed as: + + framework.observe(someobj.something_happened, self._on_something_happened) + + Raises: + RuntimeError: if bound_event or observer are the wrong type. + """ + if not isinstance(bound_event, BoundEvent): + raise RuntimeError( + 'Framework.observe requires a BoundEvent as second parameter, got {}'.format( + bound_event)) + if not isinstance(observer, types.MethodType): + # help users of older versions of the framework + if isinstance(observer, charm.CharmBase): + raise TypeError( + 'observer methods must now be explicitly provided;' + ' please replace observe(self.on.{0}, self)' + ' with e.g. observe(self.on.{0}, self._on_{0})'.format( + bound_event.event_kind)) + raise RuntimeError( + 'Framework.observe requires a method as third parameter, got {}'.format(observer)) + + event_type = bound_event.event_type + event_kind = bound_event.event_kind + emitter = bound_event.emitter + + self.register_type(event_type, emitter, event_kind) + + if hasattr(emitter, "handle"): + emitter_path = emitter.handle.path + else: + raise RuntimeError( + 'event emitter {} must have a "handle" attribute'.format(type(emitter).__name__)) + + # Validate that the method has an acceptable call signature. + sig = inspect.signature(observer) + # Self isn't included in the params list, so the first arg will be the event. + extra_params = list(sig.parameters.values())[1:] + + method_name = observer.__name__ + observer = observer.__self__ + if not sig.parameters: + raise TypeError( + '{}.{} must accept event parameter'.format(type(observer).__name__, method_name)) + elif any(param.default is inspect.Parameter.empty for param in extra_params): + # Allow for additional optional params, since there's no reason to exclude them, but + # required params will break. 
+ raise TypeError( + '{}.{} has extra required parameter'.format(type(observer).__name__, method_name)) + + # TODO Prevent the exact same parameters from being registered more than once. + + self._observer[observer.handle.path] = observer + self._observers.append((observer.handle.path, method_name, emitter_path, event_kind)) + + def _next_event_key(self): + """Return the next event key that should be used, incrementing the internal counter.""" + # Increment the count first; this means the keys will start at 1, and 0 + # means no events have been emitted. + self._stored['event_count'] += 1 + return str(self._stored['event_count']) + + def _emit(self, event): + """See BoundEvent.emit for the public way to call this.""" + + saved = False + event_path = event.handle.path + event_kind = event.handle.kind + parent_path = event.handle.parent.path + # TODO Track observers by (parent_path, event_kind) rather than as a list of + # all observers. Avoiding linear search through all observers for every event + for observer_path, method_name, _parent_path, _event_kind in self._observers: + if _parent_path != parent_path: + continue + if _event_kind and _event_kind != event_kind: + continue + if not saved: + # Save the event for all known observers before the first notification + # takes place, so that either everyone interested sees it, or nobody does. + self.save_snapshot(event) + saved = True + # Again, only commit this after all notices are saved. + self._storage.save_notice(event_path, observer_path, method_name) + if saved: + self._reemit(event_path) + + def reemit(self): + """Reemit previously deferred events to the observers that deferred them. + + Only the specific observers that have previously deferred the event will be + notified again. Observers that asked to be notified about events after it's + been first emitted won't be notified, as that would mean potentially observing + events out of order. + """ + self._reemit() + + def _reemit(self, single_event_path=None): + last_event_path = None + deferred = True + for event_path, observer_path, method_name in self._storage.notices(single_event_path): + event_handle = Handle.from_path(event_path) + + if last_event_path != event_path: + if not deferred and last_event_path is not None: + self._storage.drop_snapshot(last_event_path) + last_event_path = event_path + deferred = False + + try: + event = self.load_snapshot(event_handle) + except NoTypeError: + self._storage.drop_notice(event_path, observer_path, method_name) + continue + + event.deferred = False + observer = self._observer.get(observer_path) + if observer: + custom_handler = getattr(observer, method_name, None) + if custom_handler: + event_is_from_juju = isinstance(event, charm.HookEvent) + event_is_action = isinstance(event, charm.ActionEvent) + if (event_is_from_juju or event_is_action) and 'hook' in self._juju_debug_at: + # Present the welcome message and run under PDB. + self._show_debug_code_message() + pdb.runcall(custom_handler, event) + else: + # Regular call to the registered method. + custom_handler(event) + + if event.deferred: + deferred = True + else: + self._storage.drop_notice(event_path, observer_path, method_name) + # We intentionally consider this event to be dead and reload it from + # scratch in the next path. + self.framework._forget(event) + + if not deferred and last_event_path is not None: + self._storage.drop_snapshot(last_event_path) + + def _show_debug_code_message(self): + """Present the welcome message (only once!) 
when using debugger functionality.""" + if not self._breakpoint_welcomed: + self._breakpoint_welcomed = True + print(_BREAKPOINT_WELCOME_MESSAGE, file=sys.stderr, end='') + + def breakpoint(self, name=None): + """Add breakpoint, optionally named, at the place where this method is called. + + For the breakpoint to be activated the JUJU_DEBUG_AT environment variable + must be set to "all" or to the specific name parameter provided, if any. In every + other situation calling this method does nothing. + + The framework also provides a standard breakpoint named "hook", that will + stop execution when a hook event is about to be handled. + + For those reasons, the "all" and "hook" breakpoint names are reserved. + """ + # If given, validate the name comply with all the rules + if name is not None: + if not isinstance(name, str): + raise TypeError('breakpoint names must be strings') + if name in ('hook', 'all'): + raise ValueError('breakpoint names "all" and "hook" are reserved') + if not re.match(r'^[a-z0-9]([a-z0-9\-]*[a-z0-9])?$', name): + raise ValueError('breakpoint names must look like "foo" or "foo-bar"') + + indicated_breakpoints = self._juju_debug_at + if not indicated_breakpoints: + return + + if 'all' in indicated_breakpoints or name in indicated_breakpoints: + self._show_debug_code_message() + + # If we call set_trace() directly it will open the debugger *here*, so indicating + # it to use our caller's frame + code_frame = inspect.currentframe().f_back + pdb.Pdb().set_trace(code_frame) + else: + logger.warning( + "Breakpoint %r skipped (not found in the requested breakpoints: %s)", + name, indicated_breakpoints) + + def remove_unreferenced_events(self): + """Remove events from storage that are not referenced. + + In older versions of the framework, events that had no observers would get recorded but + never deleted. This makes a best effort to find these events and remove them from the + database. + """ + event_regex = re.compile(_event_regex) + to_remove = [] + for handle_path in self._storage.list_snapshots(): + if event_regex.match(handle_path): + notices = self._storage.notices(handle_path) + if next(notices, None) is None: + # There are no notices for this handle_path, it is valid to remove it + to_remove.append(handle_path) + for handle_path in to_remove: + self._storage.drop_snapshot(handle_path) + + +class StoredStateData(Object): + + def __init__(self, parent, attr_name): + super().__init__(parent, attr_name) + self._cache = {} + self.dirty = False + + def __getitem__(self, key): + return self._cache.get(key) + + def __setitem__(self, key, value): + self._cache[key] = value + self.dirty = True + + def __contains__(self, key): + return key in self._cache + + def snapshot(self): + return self._cache + + def restore(self, snapshot): + self._cache = snapshot + self.dirty = False + + def on_commit(self, event): + if self.dirty: + self.framework.save_snapshot(self) + self.dirty = False + + +class BoundStoredState: + + def __init__(self, parent, attr_name): + parent.framework.register_type(StoredStateData, parent) + + handle = Handle(parent, StoredStateData.handle_kind, attr_name) + try: + data = parent.framework.load_snapshot(handle) + except NoSnapshotError: + data = StoredStateData(parent, attr_name) + + # __dict__ is used to avoid infinite recursion. 
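+        # (a plain 'self._data = data' would go through __setattr__ below, which
+        # reads self._data and would trigger __getattr__ before _data exists)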
+ self.__dict__["_data"] = data + self.__dict__["_attr_name"] = attr_name + + parent.framework.observe(parent.framework.on.commit, self._data.on_commit) + + def __getattr__(self, key): + # "on" is the only reserved key that can't be used in the data map. + if key == "on": + return self._data.on + if key not in self._data: + raise AttributeError("attribute '{}' is not stored".format(key)) + return _wrap_stored(self._data, self._data[key]) + + def __setattr__(self, key, value): + if key == "on": + raise AttributeError("attribute 'on' is reserved and cannot be set") + + value = _unwrap_stored(self._data, value) + + if not isinstance(value, (type(None), int, float, str, bytes, list, dict, set)): + raise AttributeError( + 'attribute {!r} cannot be a {}: must be int/float/dict/list/etc'.format( + key, type(value).__name__)) + + self._data[key] = _unwrap_stored(self._data, value) + + def set_default(self, **kwargs): + """"Set the value of any given key if it has not already been set""" + for k, v in kwargs.items(): + if k not in self._data: + self._data[k] = v + + +class StoredState: + """A class used to store data the charm needs persisted across invocations. + + Example:: + + class MyClass(Object): + _stored = StoredState() + + Instances of `MyClass` can transparently save state between invocations by + setting attributes on `_stored`. Initial state should be set with + `set_default` on the bound object, that is:: + + class MyClass(Object): + _stored = StoredState() + + def __init__(self, parent, key): + super().__init__(parent, key) + self._stored.set_default(seen=set()) + self.framework.observe(self.on.seen, self._on_seen) + + def _on_seen(self, event): + self._stored.seen.add(event.uuid) + + """ + + def __init__(self): + self.parent_type = None + self.attr_name = None + + def __get__(self, parent, parent_type=None): + if self.parent_type is not None and self.parent_type not in parent_type.mro(): + # the StoredState instance is being shared between two unrelated classes + # -> unclear what is exepcted of us -> bail out + raise RuntimeError( + 'StoredState shared by {} and {}'.format( + self.parent_type.__name__, parent_type.__name__)) + + if parent is None: + # accessing via the class directly (e.g. MyClass.stored) + return self + + bound = None + if self.attr_name is not None: + bound = parent.__dict__.get(self.attr_name) + if bound is not None: + # we already have the thing from a previous pass, huzzah + return bound + + # need to find ourselves amongst the parent's bases + for cls in parent_type.mro(): + for attr_name, attr_value in cls.__dict__.items(): + if attr_value is not self: + continue + # we've found ourselves! is it the first time? 
+ if bound is not None: + # the StoredState instance is being stored in two different + # attributes -> unclear what is expected of us -> bail out + raise RuntimeError("StoredState shared by {0}.{1} and {0}.{2}".format( + cls.__name__, self.attr_name, attr_name)) + # we've found ourselves for the first time; save where, and bind the object + self.attr_name = attr_name + self.parent_type = cls + bound = BoundStoredState(parent, attr_name) + + if bound is not None: + # cache the bound object to avoid the expensive lookup the next time + # (don't use setattr, to keep things symmetric with the fast-path lookup above) + parent.__dict__[self.attr_name] = bound + return bound + + raise AttributeError( + 'cannot find {} attribute in type {}'.format( + self.__class__.__name__, parent_type.__name__)) + + +def _wrap_stored(parent_data, value): + t = type(value) + if t is dict: + return StoredDict(parent_data, value) + if t is list: + return StoredList(parent_data, value) + if t is set: + return StoredSet(parent_data, value) + return value + + +def _unwrap_stored(parent_data, value): + t = type(value) + if t is StoredDict or t is StoredList or t is StoredSet: + return value._under + return value + + +class StoredDict(collections.abc.MutableMapping): + + def __init__(self, stored_data, under): + self._stored_data = stored_data + self._under = under + + def __getitem__(self, key): + return _wrap_stored(self._stored_data, self._under[key]) + + def __setitem__(self, key, value): + self._under[key] = _unwrap_stored(self._stored_data, value) + self._stored_data.dirty = True + + def __delitem__(self, key): + del self._under[key] + self._stored_data.dirty = True + + def __iter__(self): + return self._under.__iter__() + + def __len__(self): + return len(self._under) + + def __eq__(self, other): + if isinstance(other, StoredDict): + return self._under == other._under + elif isinstance(other, collections.abc.Mapping): + return self._under == other + else: + return NotImplemented + + +class StoredList(collections.abc.MutableSequence): + + def __init__(self, stored_data, under): + self._stored_data = stored_data + self._under = under + + def __getitem__(self, index): + return _wrap_stored(self._stored_data, self._under[index]) + + def __setitem__(self, index, value): + self._under[index] = _unwrap_stored(self._stored_data, value) + self._stored_data.dirty = True + + def __delitem__(self, index): + del self._under[index] + self._stored_data.dirty = True + + def __len__(self): + return len(self._under) + + def insert(self, index, value): + self._under.insert(index, value) + self._stored_data.dirty = True + + def append(self, value): + self._under.append(value) + self._stored_data.dirty = True + + def __eq__(self, other): + if isinstance(other, StoredList): + return self._under == other._under + elif isinstance(other, collections.abc.Sequence): + return self._under == other + else: + return NotImplemented + + def __lt__(self, other): + if isinstance(other, StoredList): + return self._under < other._under + elif isinstance(other, collections.abc.Sequence): + return self._under < other + else: + return NotImplemented + + def __le__(self, other): + if isinstance(other, StoredList): + return self._under <= other._under + elif isinstance(other, collections.abc.Sequence): + return self._under <= other + else: + return NotImplemented + + def __gt__(self, other): + if isinstance(other, StoredList): + return self._under > other._under + elif isinstance(other, collections.abc.Sequence): + return self._under > other + else: + 
return NotImplemented
+
+    def __ge__(self, other):
+        if isinstance(other, StoredList):
+            return self._under >= other._under
+        elif isinstance(other, collections.abc.Sequence):
+            return self._under >= other
+        else:
+            return NotImplemented
+
+
+class StoredSet(collections.abc.MutableSet):
+
+    def __init__(self, stored_data, under):
+        self._stored_data = stored_data
+        self._under = under
+
+    def add(self, key):
+        self._under.add(key)
+        self._stored_data.dirty = True
+
+    def discard(self, key):
+        self._under.discard(key)
+        self._stored_data.dirty = True
+
+    def __contains__(self, key):
+        return key in self._under
+
+    def __iter__(self):
+        return self._under.__iter__()
+
+    def __len__(self):
+        return len(self._under)
+
+    @classmethod
+    def _from_iterable(cls, it):
+        """Construct an instance of the class from any iterable input.
+
+        Per https://docs.python.org/3/library/collections.abc.html
+        if the Set mixin is being used in a class with a different constructor signature,
+        you will need to override _from_iterable() with a classmethod that can construct
+        new instances from an iterable argument.
+        """
+        return set(it)
+
+    def __le__(self, other):
+        if isinstance(other, StoredSet):
+            return self._under <= other._under
+        elif isinstance(other, collections.abc.Set):
+            return self._under <= other
+        else:
+            return NotImplemented
+
+    def __ge__(self, other):
+        if isinstance(other, StoredSet):
+            return self._under >= other._under
+        elif isinstance(other, collections.abc.Set):
+            return self._under >= other
+        else:
+            return NotImplemented
+
+    def __eq__(self, other):
+        if isinstance(other, StoredSet):
+            return self._under == other._under
+        elif isinstance(other, collections.abc.Set):
+            return self._under == other
+        else:
+            return NotImplemented
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/ops/jujuversion.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/ops/jujuversion.py
new file mode 100755
index 0000000000000000000000000000000000000000..b2b8177dbe396f0d8c46b86e26af6b4e54ea046d
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/ops/jujuversion.py
@@ -0,0 +1,98 @@
+# Copyright 2020 Canonical Ltd.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import os
+import re
+from functools import total_ordering
+
+
+@total_ordering
+class JujuVersion:
+
+    PATTERN = r'''^
+    (?P<major>\d{1,9})\.(?P<minor>\d{1,9})       # <major> and <minor> numbers are always there
+    ((?:\.|-(?P<tag>[a-z]+))(?P<patch>\d{1,9}))? # sometimes with .<patch> or -<tag><patch>
+    (\.(?P<build>\d{1,9}))?$                     # and sometimes with a <build> number.
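+    # e.g. "2.7.8", "2.8-rc1", and "2.7.8.1" all match (illustrative examples)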
+ ''' + + def __init__(self, version): + m = re.match(self.PATTERN, version, re.VERBOSE) + if not m: + raise RuntimeError('"{}" is not a valid Juju version string'.format(version)) + + d = m.groupdict() + self.major = int(m.group('major')) + self.minor = int(m.group('minor')) + self.tag = d['tag'] or '' + self.patch = int(d['patch'] or 0) + self.build = int(d['build'] or 0) + + def __repr__(self): + if self.tag: + s = '{}.{}-{}{}'.format(self.major, self.minor, self.tag, self.patch) + else: + s = '{}.{}.{}'.format(self.major, self.minor, self.patch) + if self.build > 0: + s += '.{}'.format(self.build) + return s + + def __eq__(self, other): + if self is other: + return True + if isinstance(other, str): + other = type(self)(other) + elif not isinstance(other, JujuVersion): + raise RuntimeError('cannot compare Juju version "{}" with "{}"'.format(self, other)) + return ( + self.major == other.major + and self.minor == other.minor + and self.tag == other.tag + and self.build == other.build + and self.patch == other.patch) + + def __lt__(self, other): + if self is other: + return False + if isinstance(other, str): + other = type(self)(other) + elif not isinstance(other, JujuVersion): + raise RuntimeError('cannot compare Juju version "{}" with "{}"'.format(self, other)) + + if self.major != other.major: + return self.major < other.major + elif self.minor != other.minor: + return self.minor < other.minor + elif self.tag != other.tag: + if not self.tag: + return False + elif not other.tag: + return True + return self.tag < other.tag + elif self.patch != other.patch: + return self.patch < other.patch + elif self.build != other.build: + return self.build < other.build + return False + + @classmethod + def from_environ(cls) -> 'JujuVersion': + """Build a JujuVersion from JUJU_VERSION.""" + v = os.environ.get('JUJU_VERSION') + if not v: + raise RuntimeError('environ has no JUJU_VERSION') + return cls(v) + + def has_app_data(self) -> bool: + """Determine whether this juju version knows about app data.""" + return (self.major, self.minor, self.patch) >= (2, 7, 0) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/ops/lib/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/ops/lib/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..edb9fcacea6f0173aed9f07ca8a683cfead989cc --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/ops/lib/__init__.py @@ -0,0 +1,194 @@ +# Copyright 2020 Canonical Ltd. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import sys +import os +import re + +from ast import literal_eval +from importlib.util import module_from_spec +from importlib.machinery import ModuleSpec +from pkgutil import get_importer +from types import ModuleType + + +_libraries = None + +_libline_re = re.compile(r'''^LIB([A-Z]+)\s*=\s*([0-9]+|['"][a-zA-Z0-9_.\-@]+['"])''') +_libname_re = re.compile(r'''^[a-z][a-z0-9]+$''') + +# Not perfect, but should do for now. 
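+# For example (illustrative): it accepts "alice@example.com" and
+# "dev-1@sub.domain.org", but rejects a bare name with no "@domain" part.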
+_libauthor_re = re.compile(r'''^[A-Za-z0-9_+.-]+@[a-z0-9_-]+(?:\.[a-z0-9_-]+)*\.[a-z]{2,3}$''')
+
+
+def use(name: str, api: int, author: str) -> ModuleType:
+    """Use a library from the ops libraries.
+
+    Args:
+        name: the name of the library requested.
+        api: the API version of the library.
+        author: the author of the library.
+    Raises:
+        ImportError: if the library cannot be found.
+        TypeError: if the name, api, or author are the wrong type.
+        ValueError: if the name, api, or author are invalid.
+    """
+    if not isinstance(name, str):
+        raise TypeError("invalid library name: {!r} (must be a str)".format(name))
+    if not isinstance(author, str):
+        raise TypeError("invalid library author: {!r} (must be a str)".format(author))
+    if not isinstance(api, int):
+        raise TypeError("invalid library API: {!r} (must be an int)".format(api))
+    if api < 0:
+        raise ValueError('invalid library API: {} (must be ≥0)'.format(api))
+    if not _libname_re.match(name):
+        raise ValueError("invalid library name: {!r} (lowercase letters and digits only)".format(name))
+    if not _libauthor_re.match(author):
+        raise ValueError("invalid library author email: {!r}".format(author))
+
+    if _libraries is None:
+        autoimport()
+
+    versions = _libraries.get((name, author), ())
+    for lib in versions:
+        if lib.api == api:
+            return lib.import_module()
+
+    others = ', '.join(str(lib.api) for lib in versions)
+    if others:
+        msg = 'cannot find "{}" from "{}" with API version {} (have {})'.format(
+            name, author, api, others)
+    else:
+        msg = 'cannot find library "{}" from "{}"'.format(name, author)
+
+    raise ImportError(msg, name=name)
+
+
+def autoimport():
+    """Find all libs in the path and enable use of them.
+
+    You only need to call this if you've installed a package or
+    otherwise changed sys.path in the current run, and need to see the
+    changes. Otherwise libraries are found on first call of `use`.
+    """
+    global _libraries
+    _libraries = {}
+    for spec in _find_all_specs(sys.path):
+        lib = _parse_lib(spec)
+        if lib is None:
+            continue
+
+        versions = _libraries.setdefault((lib.name, lib.author), [])
+        versions.append(lib)
+        versions.sort(reverse=True)
+
+
+def _find_all_specs(path):
+    for sys_dir in path:
+        if sys_dir == "":
+            sys_dir = "."
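+            # (an empty entry on sys.path stands for the current working directory)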
+ try: + top_dirs = os.listdir(sys_dir) + except OSError: + continue + for top_dir in top_dirs: + opslib = os.path.join(sys_dir, top_dir, 'opslib') + try: + lib_dirs = os.listdir(opslib) + except OSError: + continue + finder = get_importer(opslib) + if finder is None or not hasattr(finder, 'find_spec'): + continue + for lib_dir in lib_dirs: + spec = finder.find_spec(lib_dir) + if spec is None: + continue + if spec.loader is None: + # a namespace package; not supported + continue + yield spec + + +# only the first this many lines of a file are looked at for the LIB* constants +_MAX_LIB_LINES = 99 + + +def _parse_lib(spec): + if spec.origin is None: + return None + + _expected = {'NAME': str, 'AUTHOR': str, 'API': int, 'PATCH': int} + + try: + with open(spec.origin, 'rt', encoding='utf-8') as f: + libinfo = {} + for n, line in enumerate(f): + if len(libinfo) == len(_expected): + break + if n > _MAX_LIB_LINES: + return None + m = _libline_re.match(line) + if m is None: + continue + key, value = m.groups() + if key in _expected: + value = literal_eval(value) + if not isinstance(value, _expected[key]): + return None + libinfo[key] = value + else: + if len(libinfo) != len(_expected): + return None + except Exception: + return None + + return _Lib(spec, libinfo['NAME'], libinfo['AUTHOR'], libinfo['API'], libinfo['PATCH']) + + +class _Lib: + + def __init__(self, spec: ModuleSpec, name: str, author: str, api: int, patch: int): + self.spec = spec + self.name = name + self.author = author + self.api = api + self.patch = patch + + self._module = None + + def __repr__(self): + return "<_Lib {0.name} by {0.author}, API {0.api}, patch {0.patch}>".format(self) + + def import_module(self) -> ModuleType: + if self._module is None: + module = module_from_spec(self.spec) + self.spec.loader.exec_module(module) + self._module = module + return self._module + + def __eq__(self, other): + if not isinstance(other, _Lib): + return NotImplemented + a = (self.name, self.author, self.api, self.patch) + b = (other.name, other.author, other.api, other.patch) + return a == b + + def __lt__(self, other): + if not isinstance(other, _Lib): + return NotImplemented + a = (self.name, self.author, self.api, self.patch) + b = (other.name, other.author, other.api, other.patch) + return a < b diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/ops/log.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/ops/log.py new file mode 100644 index 0000000000000000000000000000000000000000..4aac5543aec4d84dc393e79b772a30284712d6d4 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/ops/log.py @@ -0,0 +1,51 @@ +# Copyright 2020 Canonical Ltd. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
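+
+# Typical wiring, as done in ops.main (a sketch; 'backend' stands in for an
+# ops.model._ModelBackend instance):
+#
+#     backend = ops.model._ModelBackend()
+#     setup_root_logging(backend, debug='JUJU_DEBUG' in os.environ)
+#     logging.getLogger(__name__).info('charm starting')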
+
+import sys
+import logging
+
+
+class JujuLogHandler(logging.Handler):
+    """A handler for sending logs to Juju via juju-log."""
+
+    def __init__(self, model_backend, level=logging.DEBUG):
+        super().__init__(level)
+        self.model_backend = model_backend
+
+    def emit(self, record):
+        self.model_backend.juju_log(record.levelname, self.format(record))
+
+
+def setup_root_logging(model_backend, debug=False):
+    """Set up Python logging to forward messages to juju-log.
+
+    By default, logging is set to DEBUG level, and messages will be filtered by Juju.
+    Charmers can also set their own default log level with::
+
+        logging.getLogger().setLevel(logging.INFO)
+
+    model_backend -- a ModelBackend to use for juju-log
+    debug -- if True, write logs to stderr as well as to juju-log.
+    """
+    logger = logging.getLogger()
+    logger.setLevel(logging.DEBUG)
+    logger.addHandler(JujuLogHandler(model_backend))
+    if debug:
+        handler = logging.StreamHandler()
+        formatter = logging.Formatter('%(asctime)s %(levelname)-8s %(message)s')
+        handler.setFormatter(formatter)
+        logger.addHandler(handler)
+
+    sys.excepthook = lambda etype, value, tb: logger.error(
+        "Uncaught exception while in charm code:", exc_info=(etype, value, tb))
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/ops/main.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/ops/main.py
new file mode 100755
index 0000000000000000000000000000000000000000..6dc31c3575044796e8fe1f61b8415395689d6339
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/ops/main.py
@@ -0,0 +1,348 @@
+# Copyright 2019-2020 Canonical Ltd.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import inspect
+import logging
+import os
+import subprocess
+import sys
+import warnings
+from pathlib import Path
+
+import yaml
+
+import ops.charm
+import ops.framework
+import ops.model
+import ops.storage
+
+from ops.log import setup_root_logging
+
+CHARM_STATE_FILE = '.unit-state.db'
+
+
+logger = logging.getLogger()
+
+
+def _get_charm_dir():
+    charm_dir = os.environ.get("JUJU_CHARM_DIR")
+    if charm_dir is None:
+        # Assume $JUJU_CHARM_DIR/lib/op/main.py structure.
+        charm_dir = Path('{}/../../..'.format(__file__)).resolve()
+    else:
+        charm_dir = Path(charm_dir).resolve()
+    return charm_dir
+
+
+def _create_event_link(charm, bound_event):
+    """Create a symlink for a particular event.
+
+    charm -- A charm object.
+    bound_event -- An event for which to create a symlink.
+    """
+    if issubclass(bound_event.event_type, ops.charm.HookEvent):
+        event_dir = charm.framework.charm_dir / 'hooks'
+        event_path = event_dir / bound_event.event_kind.replace('_', '-')
+    elif issubclass(bound_event.event_type, ops.charm.ActionEvent):
+        if not bound_event.event_kind.endswith("_action"):
+            raise RuntimeError(
+                'action event name {} needs _action suffix'.format(bound_event.event_kind))
+        event_dir = charm.framework.charm_dir / 'actions'
+        # The event_kind is suffixed with "_action" while the executable is not.
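+        # (illustrative: event_kind 'fortune_action' becomes 'actions/fortune')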
+ event_path = event_dir / bound_event.event_kind[:-len('_action')].replace('_', '-') + else: + raise RuntimeError( + 'cannot create a symlink: unsupported event type {}'.format(bound_event.event_type)) + + event_dir.mkdir(exist_ok=True) + if not event_path.exists(): + # CPython has different implementations for populating sys.argv[0] for Linux and Windows. + # For Windows it is always an absolute path (any symlinks are resolved) + # while for Linux it can be a relative path. + target_path = os.path.relpath(os.path.realpath(sys.argv[0]), str(event_dir)) + + # Ignore the non-symlink files or directories + # assuming the charm author knows what they are doing. + logger.debug( + 'Creating a new relative symlink at %s pointing to %s', + event_path, target_path) + event_path.symlink_to(target_path) + + +def _setup_event_links(charm_dir, charm): + """Set up links for supported events that originate from Juju. + + Whether a charm can handle an event or not can be determined by + introspecting which events are defined on it. + + Hooks or actions are created as symlinks to the charm code file + which is determined by inspecting symlinks provided by the charm + author at hooks/install or hooks/start. + + charm_dir -- A root directory of the charm. + charm -- An instance of the Charm class. + + """ + for bound_event in charm.on.events().values(): + # Only events that originate from Juju need symlinks. + if issubclass(bound_event.event_type, (ops.charm.HookEvent, ops.charm.ActionEvent)): + _create_event_link(charm, bound_event) + + +def _emit_charm_event(charm, event_name): + """Emits a charm event based on a Juju event name. + + charm -- A charm instance to emit an event from. + event_name -- A Juju event name to emit on a charm. + """ + event_to_emit = None + try: + event_to_emit = getattr(charm.on, event_name) + except AttributeError: + logger.debug("Event %s not defined for %s.", event_name, charm) + + # If the event is not supported by the charm implementation, do + # not error out or try to emit it. This is to support rollbacks. + if event_to_emit is not None: + args, kwargs = _get_event_args(charm, event_to_emit) + logger.debug('Emitting Juju event %s.', event_name) + event_to_emit.emit(*args, **kwargs) + + +def _get_event_args(charm, bound_event): + event_type = bound_event.event_type + model = charm.framework.model + + if issubclass(event_type, ops.charm.RelationEvent): + relation_name = os.environ['JUJU_RELATION'] + relation_id = int(os.environ['JUJU_RELATION_ID'].split(':')[-1]) + relation = model.get_relation(relation_name, relation_id) + else: + relation = None + + remote_app_name = os.environ.get('JUJU_REMOTE_APP', '') + remote_unit_name = os.environ.get('JUJU_REMOTE_UNIT', '') + if remote_app_name or remote_unit_name: + if not remote_app_name: + if '/' not in remote_unit_name: + raise RuntimeError('invalid remote unit name: {}'.format(remote_unit_name)) + remote_app_name = remote_unit_name.split('/')[0] + args = [relation, model.get_app(remote_app_name)] + if remote_unit_name: + args.append(model.get_unit(remote_unit_name)) + return args, {} + elif relation: + return [relation], {} + return [], {} + + +class _Dispatcher: + """Encapsulate how to figure out what event Juju wants us to run. + + Also knows how to run “legacy” hooks when Juju called us via a top-level + ``dispatch`` binary. 
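+
+    For example (illustrative), ``JUJU_DISPATCH_PATH=hooks/config-changed``
+    resolves to the event name ``config_changed``, and ``actions/do-thing``
+    resolves to ``do_thing_action``.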
+
+    Args:
+        charm_dir: the toplevel directory of the charm
+
+    Attributes:
+        event_name: the name of the event to run
+        is_dispatch_aware: are we running under a Juju that knows about the
+            dispatch binary?
+
+    """
+
+    def __init__(self, charm_dir: Path):
+        self._charm_dir = charm_dir
+        self._exec_path = Path(sys.argv[0])
+
+        if 'JUJU_DISPATCH_PATH' in os.environ and (charm_dir / 'dispatch').exists():
+            self._init_dispatch()
+        else:
+            self._init_legacy()
+
+    def ensure_event_links(self, charm):
+        """Make sure necessary symlinks are present on disk."""
+
+        if self.is_dispatch_aware:
+            # links aren't needed
+            return
+
+        # When a charm is force-upgraded and a unit is in an error state Juju
+        # does not run upgrade-charm and instead runs the failed hook followed
+        # by config-changed. Given the nature of force-upgrading the hook setup
+        # code is not triggered on config-changed.
+        #
+        # 'start' event is included as Juju does not fire the install event for
+        # K8s charms (see LP: #1854635).
+        if (self.event_name in ('install', 'start', 'upgrade_charm')
+                or self.event_name.endswith('_storage_attached')):
+            _setup_event_links(self._charm_dir, charm)
+
+    def run_any_legacy_hook(self):
+        """Run any extant legacy hook.
+
+        If there is both a dispatch file and a legacy hook for the
+        current event, run the wanted legacy hook.
+        """
+
+        if not self.is_dispatch_aware:
+            # we *are* the legacy hook
+            return
+
+        dispatch_path = self._charm_dir / self._dispatch_path
+        if not dispatch_path.exists():
+            logger.debug("Legacy %s does not exist.", self._dispatch_path)
+            return
+
+        # super strange that there isn't an is_executable
+        if not os.access(str(dispatch_path), os.X_OK):
+            logger.warning("Legacy %s exists but is not executable.", self._dispatch_path)
+            return
+
+        if dispatch_path.resolve() == self._exec_path.resolve():
+            logger.debug("Legacy %s is just a link to ourselves.", self._dispatch_path)
+            return
+
+        argv = sys.argv.copy()
+        argv[0] = str(dispatch_path)
+        logger.info("Running legacy %s.", self._dispatch_path)
+        try:
+            subprocess.run(argv, check=True)
+        except subprocess.CalledProcessError as e:
+            logger.warning(
+                "Legacy %s exited with status %d.",
+                self._dispatch_path, e.returncode)
+            sys.exit(e.returncode)
+        else:
+            logger.debug("Legacy %s exited with status 0.", self._dispatch_path)
+
+    def _set_name_from_path(self, path: Path):
+        """Set self.event_name to the event name inferred from the given path."""
+        name = path.name.replace('-', '_')
+        if path.parent.name == 'actions':
+            name = '{}_action'.format(name)
+        self.event_name = name
+
+    def _init_legacy(self):
+        """Set up the 'legacy' dispatcher.
+
+        The current Juju doesn't know about 'dispatch' and calls hooks
+        explicitly.
+        """
+        self.is_dispatch_aware = False
+        self._set_name_from_path(self._exec_path)
+
+    def _init_dispatch(self):
+        """Set up the new 'dispatch' dispatcher.
+
+        The current Juju will run 'dispatch' if it exists, and otherwise fall
+        back to the old behaviour.
+
+        JUJU_DISPATCH_PATH will be set to the wanted hook, e.g. hooks/install,
+        in both cases.
+        """
+        self._dispatch_path = Path(os.environ['JUJU_DISPATCH_PATH'])
+
+        if 'OPERATOR_DISPATCH' in os.environ:
+            logger.debug("Charm called itself via %s.", self._dispatch_path)
+            sys.exit(0)
+        os.environ['OPERATOR_DISPATCH'] = '1'
+
+        self.is_dispatch_aware = True
+        self._set_name_from_path(self._dispatch_path)
+
+    def is_restricted_context(self):
+        """Return True if we are running in a restricted Juju context.
+ + When in a restricted context, most commands (relation-get, config-get, + state-get) are not available. As such, we change how we interact with + Juju. + """ + return self.event_name in ('collect_metrics',) + + +def main(charm_class, use_juju_for_storage=False): + """Setup the charm and dispatch the observed event. + + The event name is based on the way this executable was called (argv[0]). + """ + charm_dir = _get_charm_dir() + + model_backend = ops.model._ModelBackend() + debug = ('JUJU_DEBUG' in os.environ) + setup_root_logging(model_backend, debug=debug) + logger.debug("Operator Framework %s up and running.", ops.__version__) + + dispatcher = _Dispatcher(charm_dir) + dispatcher.run_any_legacy_hook() + + metadata = (charm_dir / 'metadata.yaml').read_text() + actions_meta = charm_dir / 'actions.yaml' + if actions_meta.exists(): + actions_metadata = actions_meta.read_text() + else: + actions_metadata = None + + if not yaml.__with_libyaml__: + logger.debug('yaml does not have libyaml extensions, using slower pure Python yaml loader') + meta = ops.charm.CharmMeta.from_yaml(metadata, actions_metadata) + model = ops.model.Model(meta, model_backend) + + # TODO: If Juju unit agent crashes after exit(0) from the charm code + # the framework will commit the snapshot but Juju will not commit its + # operation. + charm_state_path = charm_dir / CHARM_STATE_FILE + if use_juju_for_storage: + if dispatcher.is_restricted_context(): + # TODO: jam 2020-06-30 This unconditionally avoids running a collect metrics event + # Though we eventually expect that juju will run collect-metrics in a + # non-restricted context. Once we can determine that we are running collect-metrics + # in a non-restricted context, we should fire the event as normal. + logger.debug('"%s" is not supported when using Juju for storage\n' + 'see: https://github.com/canonical/operator/issues/348', + dispatcher.event_name) + # Note that we don't exit nonzero, because that would cause Juju to rerun the hook + return + store = ops.storage.JujuStorage() + else: + store = ops.storage.SQLiteStorage(charm_state_path) + framework = ops.framework.Framework(store, charm_dir, meta, model) + try: + sig = inspect.signature(charm_class) + try: + sig.bind(framework) + except TypeError: + msg = ( + "the second argument, 'key', has been deprecated and will be " + "removed after the 0.7 release") + warnings.warn(msg, DeprecationWarning) + charm = charm_class(framework, None) + else: + charm = charm_class(framework) + dispatcher.ensure_event_links(charm) + + # TODO: Remove the collect_metrics check below as soon as the relevant + # Juju changes are made. + # + # Skip reemission of deferred events for collect-metrics events because + # they do not have the full access to all hook tools. + if not dispatcher.is_restricted_context(): + framework.reemit() + + _emit_charm_event(charm, dispatcher.event_name) + + framework.commit() + finally: + framework.close() diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/ops/model.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/ops/model.py new file mode 100644 index 0000000000000000000000000000000000000000..b96e89154ea9cec2b62a4fab4649412e115c304e --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/ops/model.py @@ -0,0 +1,1237 @@ +# Copyright 2019 Canonical Ltd. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import datetime +import decimal +import ipaddress +import json +import os +import re +import shutil +import tempfile +import time +import typing +import weakref + +from abc import ABC, abstractmethod +from collections.abc import Mapping, MutableMapping +from pathlib import Path +from subprocess import run, PIPE, CalledProcessError + +import ops +from ops.jujuversion import JujuVersion + + +class Model: + """Represents the Juju Model as seen from this unit. + + This should not be instantiated directly by Charmers, but can be accessed as `self.model` + from any class that derives from Object. + + Attributes: + unit: A :class:`Unit` that represents the unit that is running this code (eg yourself) + app: A :class:`Application` that represents the application this unit is a part of. + relations: Mapping of endpoint to list of :class:`Relation` answering the question + "what am I currently related to". See also :meth:`.get_relation` + config: A dict of the config for the current application. + resources: Access to resources for this charm. Use ``model.resources.fetch(resource_name)`` + to get the path on disk where the resource can be found. + storages: Mapping of storage_name to :class:`Storage` for the storage points defined in + metadata.yaml + pod: Used to get access to ``model.pod.set_spec`` to set the container specification + for Kubernetes charms. + """ + + def __init__(self, meta: 'ops.charm.CharmMeta', backend: '_ModelBackend'): + self._cache = _ModelCache(backend) + self._backend = backend + self.unit = self.get_unit(self._backend.unit_name) + self.app = self.unit.app + self.relations = RelationMapping(meta.relations, self.unit, self._backend, self._cache) + self.config = ConfigData(self._backend) + self.resources = Resources(list(meta.resources), self._backend) + self.pod = Pod(self._backend) + self.storages = StorageMapping(list(meta.storages), self._backend) + self._bindings = BindingMapping(self._backend) + + @property + def name(self) -> str: + """Return the name of the Model that this unit is running in. + + This is read from the environment variable ``JUJU_MODEL_NAME``. + """ + return self._backend.model_name + + def get_unit(self, unit_name: str) -> 'Unit': + """Get an arbitrary unit by name. + + Internally this uses a cache, so asking for the same unit two times will + return the same object. + """ + return self._cache.get(Unit, unit_name) + + def get_app(self, app_name: str) -> 'Application': + """Get an application by name. + + Internally this uses a cache, so asking for the same application two times will + return the same object. + """ + return self._cache.get(Application, app_name) + + def get_relation( + self, relation_name: str, + relation_id: typing.Optional[int] = None) -> 'Relation': + """Get a specific Relation instance. + + If relation_id is not given, this will return the Relation instance if the + relation is established only once or None if it is not established. If this + same relation is established multiple times the error TooManyRelatedAppsError is raised. 
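+
+        For example (a sketch; 'db' is an illustrative endpoint name)::
+
+            db = self.model.get_relation('db')      # None if not yet established
+            db0 = self.model.get_relation('db', 0)  # select one relation by id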
+ + Args: + relation_name: The name of the endpoint for this charm + relation_id: An identifier for a specific relation. Used to disambiguate when a + given application has more than one relation on a given endpoint. + Raises: + TooManyRelatedAppsError: is raised if there is more than one relation to the + supplied relation_name and no relation_id was supplied + """ + return self.relations._get_unique(relation_name, relation_id) + + def get_binding(self, binding_key: typing.Union[str, 'Relation']) -> 'Binding': + """Get a network space binding. + + Args: + binding_key: The relation name or instance to obtain bindings for. + Returns: + If ``binding_key`` is a relation name, the method returns the default binding + for that relation. If a relation instance is provided, the method first looks + up a more specific binding for that specific relation ID, and if none is found + falls back to the default binding for the relation name. + """ + return self._bindings.get(binding_key) + + +class _ModelCache: + + def __init__(self, backend): + self._backend = backend + self._weakrefs = weakref.WeakValueDictionary() + + def get(self, entity_type, *args): + key = (entity_type,) + args + entity = self._weakrefs.get(key) + if entity is None: + entity = entity_type(*args, backend=self._backend, cache=self) + self._weakrefs[key] = entity + return entity + + +class Application: + """Represents a named application in the model. + + This might be your application, or might be an application that you are related to. + Charmers should not instantiate Application objects directly, but should use + :meth:`Model.get_app` if they need a reference to a given application. + + Attributes: + name: The name of this application (eg, 'mysql'). This name may differ from the name of + the charm, if the user has deployed it to a different name. + """ + + def __init__(self, name, backend, cache): + self.name = name + self._backend = backend + self._cache = cache + self._is_our_app = self.name == self._backend.app_name + self._status = None + + def _invalidate(self): + self._status = None + + @property + def status(self) -> 'StatusBase': + """Used to report or read the status of the overall application. + + Can only be read and set by the lead unit of the application. + + The status of remote units is always Unknown. + + Raises: + RuntimeError: if you try to set the status of another application, or if you try to + set the status of this application as a unit that is not the leader. 
+            InvalidStatusError: if you try to set the status to something that is not a
+                :class:`StatusBase`
+
+        Example::
+
+            self.model.app.status = BlockedStatus('I need a human to come help me')
+        """
+        if not self._is_our_app:
+            return UnknownStatus()
+
+        if not self._backend.is_leader():
+            raise RuntimeError('cannot get application status as a non-leader unit')
+
+        if self._status:
+            return self._status
+
+        s = self._backend.status_get(is_app=True)
+        self._status = StatusBase.from_name(s['status'], s['message'])
+        return self._status
+
+    @status.setter
+    def status(self, value: 'StatusBase'):
+        if not isinstance(value, StatusBase):
+            raise InvalidStatusError(
+                'invalid value provided for application {} status: {}'.format(self, value)
+            )
+
+        if not self._is_our_app:
+            raise RuntimeError('cannot set status for a remote application {}'.format(self))
+
+        if not self._backend.is_leader():
+            raise RuntimeError('cannot set application status as a non-leader unit')
+
+        self._backend.status_set(value.name, value.message, is_app=True)
+        self._status = value
+
+    def __repr__(self):
+        return '<{}.{} {}>'.format(type(self).__module__, type(self).__name__, self.name)
+
+
+class Unit:
+    """Represents a named unit in the model.
+
+    This might be your unit, another unit of your application, or a unit of another application
+    that you are related to.
+
+    Attributes:
+        name: The name of the unit (eg, 'mysql/0')
+        app: The Application the unit is a part of.
+    """
+
+    def __init__(self, name, backend, cache):
+        self.name = name
+
+        app_name = name.split('/')[0]
+        self.app = cache.get(Application, app_name)
+
+        self._backend = backend
+        self._cache = cache
+        self._is_our_unit = self.name == self._backend.unit_name
+        self._status = None
+
+    def _invalidate(self):
+        self._status = None
+
+    @property
+    def status(self) -> 'StatusBase':
+        """Used to report or read the status of a specific unit.
+
+        The status of any unit other than yourself is always Unknown.
+
+        Raises:
+            RuntimeError: if you try to set the status of a unit other than yourself.
+            InvalidStatusError: if you try to set the status to something other than
+                a :class:`StatusBase`
+        Example::
+
+            self.model.unit.status = MaintenanceStatus('reconfiguring the frobnicators')
+        """
+        if not self._is_our_unit:
+            return UnknownStatus()
+
+        if self._status:
+            return self._status
+
+        s = self._backend.status_get(is_app=False)
+        self._status = StatusBase.from_name(s['status'], s['message'])
+        return self._status
+
+    @status.setter
+    def status(self, value: 'StatusBase'):
+        if not isinstance(value, StatusBase):
+            raise InvalidStatusError(
+                'invalid value provided for unit {} status: {}'.format(self, value)
+            )
+
+        if not self._is_our_unit:
+            raise RuntimeError('cannot set status for a remote unit {}'.format(self))
+
+        self._backend.status_set(value.name, value.message, is_app=False)
+        self._status = value
+
+    def __repr__(self):
+        return '<{}.{} {}>'.format(type(self).__module__, type(self).__name__, self.name)
+
+    def is_leader(self) -> bool:
+        """Return whether this unit is the leader of its application.
+
+        This can only be called for your own unit.
+        Returns:
+            True if you are the leader, False otherwise
+        Raises:
+            RuntimeError: if called for a unit that is not yourself
+        """
+        if self._is_our_unit:
+            # This value is not cached as it is not guaranteed to persist for the whole duration
+            # of a hook execution.
+ return self._backend.is_leader() + else: + raise RuntimeError( + 'leadership status of remote units ({}) is not visible to other' + ' applications'.format(self) + ) + + def set_workload_version(self, version: str) -> None: + """Record the version of the software running as the workload. + + This shouldn't be confused with the revision of the charm. This is informative only; + shown in the output of 'juju status'. + """ + if not isinstance(version, str): + raise TypeError("workload version must be a str, not {}: {!r}".format( + type(version).__name__, version)) + self._backend.application_version_set(version) + + +class LazyMapping(Mapping, ABC): + """Represents a dict that isn't populated until it is accessed. + + Charm authors should generally never need to use this directly, but it forms + the basis for many of the dicts that the framework tracks. + """ + + _lazy_data = None + + @abstractmethod + def _load(self): + raise NotImplementedError() + + @property + def _data(self): + data = self._lazy_data + if data is None: + data = self._lazy_data = self._load() + return data + + def _invalidate(self): + self._lazy_data = None + + def __contains__(self, key): + return key in self._data + + def __len__(self): + return len(self._data) + + def __iter__(self): + return iter(self._data) + + def __getitem__(self, key): + return self._data[key] + + +class RelationMapping(Mapping): + """Map of relation names to lists of :class:`Relation` instances.""" + + def __init__(self, relations_meta, our_unit, backend, cache): + self._peers = set() + for name, relation_meta in relations_meta.items(): + if relation_meta.role.is_peer(): + self._peers.add(name) + self._our_unit = our_unit + self._backend = backend + self._cache = cache + self._data = {relation_name: None for relation_name in relations_meta} + + def __contains__(self, key): + return key in self._data + + def __len__(self): + return len(self._data) + + def __iter__(self): + return iter(self._data) + + def __getitem__(self, relation_name): + is_peer = relation_name in self._peers + relation_list = self._data[relation_name] + if relation_list is None: + relation_list = self._data[relation_name] = [] + for rid in self._backend.relation_ids(relation_name): + relation = Relation(relation_name, rid, is_peer, + self._our_unit, self._backend, self._cache) + relation_list.append(relation) + return relation_list + + def _invalidate(self, relation_name): + """Used to wipe the cache of a given relation_name. + + Not meant to be used by Charm authors. The content of relation data is + static for the lifetime of a hook, so it is safe to cache in memory once + accessed. + """ + self._data[relation_name] = None + + def _get_unique(self, relation_name, relation_id=None): + if relation_id is not None: + if not isinstance(relation_id, int): + raise ModelError('relation id {} must be int or None not {}'.format( + relation_id, + type(relation_id).__name__)) + for relation in self[relation_name]: + if relation.id == relation_id: + return relation + else: + # The relation may be dead, but it is not forgotten. + is_peer = relation_name in self._peers + return Relation(relation_name, relation_id, is_peer, + self._our_unit, self._backend, self._cache) + num_related = len(self[relation_name]) + if num_related == 0: + return None + elif num_related == 1: + return self[relation_name][0] + else: + # TODO: We need something in the framework to catch and gracefully handle + # errors, ideally integrating the error catching with Juju's mechanisms. 
+ raise TooManyRelatedAppsError(relation_name, num_related, 1)
+
+
+class BindingMapping:
+ """Mapping of endpoints to network bindings.
+
+ Charm authors should not instantiate this directly, but access it via
+ :meth:`Model.get_binding`
+ """
+
+ def __init__(self, backend):
+ self._backend = backend
+ self._data = {}
+
+ def get(self, binding_key: typing.Union[str, 'Relation']) -> 'Binding':
+ """Get a specific Binding for an endpoint/relation.
+
+ Not used directly by Charm authors. See :meth:`Model.get_binding`
+ """
+ if isinstance(binding_key, Relation):
+ binding_name = binding_key.name
+ relation_id = binding_key.id
+ elif isinstance(binding_key, str):
+ binding_name = binding_key
+ relation_id = None
+ else:
+ raise ModelError('binding key must be str or relation instance, not {}'
+ ''.format(type(binding_key).__name__))
+ binding = self._data.get(binding_key)
+ if binding is None:
+ binding = Binding(binding_name, relation_id, self._backend)
+ self._data[binding_key] = binding
+ return binding
+
+
+class Binding:
+ """Binding to a network space.
+
+ Attributes:
+ name: The name of the endpoint this binding represents (eg, 'db')
+ """
+
+ def __init__(self, name, relation_id, backend):
+ self.name = name
+ self._relation_id = relation_id
+ self._backend = backend
+ self._network = None
+
+ @property
+ def network(self) -> 'Network':
+ """The network information for this binding."""
+ if self._network is None:
+ try:
+ self._network = Network(self._backend.network_get(self.name, self._relation_id))
+ except RelationNotFoundError:
+ if self._relation_id is None:
+ raise
+ # If a relation is dead, we can still get network info associated with an
+ # endpoint itself.
+ self._network = Network(self._backend.network_get(self.name))
+ return self._network
+
+
+class Network:
+ """Network space details.
+
+ Charm authors should not instantiate this directly, but should get access to the Network
+ definition from :meth:`Model.get_binding` and its ``network`` attribute.
+
+ Attributes:
+ interfaces: A list of :class:`NetworkInterface` details. This includes the
+ information about how your application should be configured (eg, what
+ IP addresses you should bind to.)
+ Note that multiple addresses for a single interface are represented as multiple
+ interfaces. (eg, ``[NetworkInterface('ens1', '10.1.1.1/32'),
+ NetworkInterface('ens1', '10.1.2.1/32')]``)
+ ingress_addresses: A list of :class:`ipaddress.ip_address` objects representing the IP
+ addresses that other units should use to get in touch with you.
+ egress_subnets: A list of :class:`ipaddress.ip_network` representing the subnets that
+ other units will see you connecting from. Due to things like NAT it isn't always
+ possible to narrow it down to a single address, but when it is clear, the CIDRs
+ will be constrained to a single address. (eg, 10.0.0.1/32)
+ Args:
+ network_info: A dict of network information as returned by ``network-get``.
+ """
+
+ def __init__(self, network_info: dict):
+ self.interfaces = []
+ # Treat multiple addresses on an interface as multiple logical
+ # interfaces with the same name.
+ for interface_info in network_info['bind-addresses']: + interface_name = interface_info['interface-name'] + for address_info in interface_info['addresses']: + self.interfaces.append(NetworkInterface(interface_name, address_info)) + self.ingress_addresses = [] + for address in network_info['ingress-addresses']: + self.ingress_addresses.append(ipaddress.ip_address(address)) + self.egress_subnets = [] + for subnet in network_info['egress-subnets']: + self.egress_subnets.append(ipaddress.ip_network(subnet)) + + @property + def bind_address(self): + """A single address that your application should bind() to. + + For the common case where there is a single answer. This represents a single + address from :attr:`.interfaces` that can be used to configure where your + application should bind() and listen(). + """ + return self.interfaces[0].address + + @property + def ingress_address(self): + """The address other applications should use to connect to your unit. + + Due to things like public/private addresses, NAT and tunneling, the address you bind() + to is not always the address other people can use to connect() to you. + This is just the first address from :attr:`.ingress_addresses`. + """ + return self.ingress_addresses[0] + + +class NetworkInterface: + """Represents a single network interface that the charm needs to know about. + + Charmers should not instantiate this type directly. Instead use :meth:`Model.get_binding` + to get the network information for a given endpoint. + + Attributes: + name: The name of the interface (eg. 'eth0', or 'ens1') + subnet: An :class:`ipaddress.ip_network` representation of the IP for the network + interface. This may be a single address (eg '10.0.1.2/32') + """ + + def __init__(self, name: str, address_info: dict): + self.name = name + # TODO: expose a hardware address here, see LP: #1864070. + self.address = ipaddress.ip_address(address_info['value']) + cidr = address_info['cidr'] + if not cidr: + # The cidr field may be empty, see LP: #1864102. + # In this case, make it a /32 or /128 IP network. + self.subnet = ipaddress.ip_network(address_info['value']) + else: + self.subnet = ipaddress.ip_network(cidr) + # TODO: expose a hostname/canonical name for the address here, see LP: #1864086. + + +class Relation: + """Represents an established relation between this application and another application. + + This class should not be instantiated directly, instead use :meth:`Model.get_relation` + or :attr:`RelationEvent.relation`. + + Attributes: + name: The name of the local endpoint of the relation (eg 'db') + id: The identifier for a particular relation (integer) + app: An :class:`Application` representing the remote application of this relation. + For peer relations this will be the local application. + units: A set of :class:`Unit` for units that have started and joined this relation. + data: A :class:`RelationData` holding the data buckets for each entity + of a relation. Accessed via eg Relation.data[unit]['foo'] + """ + + def __init__( + self, relation_name: str, relation_id: int, is_peer: bool, our_unit: Unit, + backend: '_ModelBackend', cache: '_ModelCache'): + self.name = relation_name + self.id = relation_id + self.app = None + self.units = set() + + # For peer relations, both the remote and the local app are the same. 
+ if is_peer:
+ self.app = our_unit.app
+ try:
+ for unit_name in backend.relation_list(self.id):
+ unit = cache.get(Unit, unit_name)
+ self.units.add(unit)
+ if self.app is None:
+ self.app = unit.app
+ except RelationNotFoundError:
+ # If the relation is dead, just treat it as if it has no remote units.
+ pass
+ self.data = RelationData(self, our_unit, backend)
+
+ def __repr__(self):
+ return '<{}.{} {}:{}>'.format(type(self).__module__,
+ type(self).__name__,
+ self.name,
+ self.id)
+
+
+class RelationData(Mapping):
+ """Represents the various data buckets of a given relation.
+
+ Each unit and application involved in a relation has its own data bucket.
+ Eg: ``{entity: RelationDataContent}``
+ where entity can be either a :class:`Unit` or an :class:`Application`.
+
+ Units can read and write their own data, and if they are the leader,
+ they can read and write their application data. They are allowed to read
+ remote unit and application data.
+
+ This class should not be created directly. It should be accessed via
+ :attr:`Relation.data`
+ """
+
+ def __init__(self, relation: Relation, our_unit: Unit, backend: '_ModelBackend'):
+ self.relation = weakref.proxy(relation)
+ self._data = {
+ our_unit: RelationDataContent(self.relation, our_unit, backend),
+ our_unit.app: RelationDataContent(self.relation, our_unit.app, backend),
+ }
+ self._data.update({
+ unit: RelationDataContent(self.relation, unit, backend)
+ for unit in self.relation.units})
+ # The relation might be dead so avoid a None key here.
+ if self.relation.app is not None:
+ self._data.update({
+ self.relation.app: RelationDataContent(self.relation, self.relation.app, backend),
+ })
+
+ def __contains__(self, key):
+ return key in self._data
+
+ def __len__(self):
+ return len(self._data)
+
+ def __iter__(self):
+ return iter(self._data)
+
+ def __getitem__(self, key):
+ return self._data[key]
+
+
+# We mix in MutableMapping here to get some convenience implementations, but whether it's actually
+# mutable or not is decided by the _is_mutable check below.
+class RelationDataContent(LazyMapping, MutableMapping):
+
+ def __init__(self, relation, entity, backend):
+ self.relation = relation
+ self._entity = entity
+ self._backend = backend
+ self._is_app = isinstance(entity, Application)
+
+ def _load(self):
+ try:
+ return self._backend.relation_get(self.relation.id, self._entity.name, self._is_app)
+ except RelationNotFoundError:
+ # Dead relations tell no tales (and have no data).
+ return {}
+
+ def _is_mutable(self):
+ if self._is_app:
+ is_our_app = self._backend.app_name == self._entity.name
+ if not is_our_app:
+ return False
+ # Whether the application data bag is mutable or not depends on
+ # whether this unit is a leader or not, but this is not guaranteed
+ # to be always true during the same hook execution.
+ return self._backend.is_leader()
+ else:
+ is_our_unit = self._backend.unit_name == self._entity.name
+ if is_our_unit:
+ return True
+ return False
+
+ def __setitem__(self, key, value):
+ if not self._is_mutable():
+ raise RelationDataError('cannot set relation data for {}'.format(self._entity.name))
+ if not isinstance(value, str):
+ raise RelationDataError('relation data values must be strings')
+
+ self._backend.relation_set(self.relation.id, key, value, self._is_app)
+
+ # Don't load data unnecessarily if we're only updating.
+ if self._lazy_data is not None:
+ if value == '':
+ # Match the behavior of Juju, which is that setting the value to an
+ # empty string will remove the key entirely from the relation data.
+ del self._data[key]
+ else:
+ self._data[key] = value
+
+ def __delitem__(self, key):
+ # Match the behavior of Juju, which is that setting the value to an empty
+ # string will remove the key entirely from the relation data.
+ self.__setitem__(key, '')
+
+
+class ConfigData(LazyMapping):
+
+ def __init__(self, backend):
+ self._backend = backend
+
+ def _load(self):
+ return self._backend.config_get()
+
+
+class StatusBase:
+ """Status values specific to applications and units.
+
+ To access a status by name, see :meth:`StatusBase.from_name`; most use cases will just
+ directly use the child class to indicate their status.
+ """
+
+ _statuses = {}
+ name = None
+
+ def __init__(self, message: str):
+ self.message = message
+
+ def __new__(cls, *args, **kwargs):
+ if cls is StatusBase:
+ raise TypeError("cannot instantiate a base class")
+ return super().__new__(cls)
+
+ def __eq__(self, other):
+ if not isinstance(self, type(other)):
+ return False
+ return self.message == other.message
+
+ def __repr__(self):
+ return "{.__class__.__name__}({!r})".format(self, self.message)
+
+ @classmethod
+ def from_name(cls, name: str, message: str):
+ if name == 'unknown':
+ # unknown is special
+ return UnknownStatus()
+ else:
+ return cls._statuses[name](message)
+
+ @classmethod
+ def register(cls, child):
+ if child.name is None:
+ raise AttributeError('cannot register a Status which has no name')
+ cls._statuses[child.name] = child
+ return child
+
+
+@StatusBase.register
+class UnknownStatus(StatusBase):
+ """The unit status is unknown.
+
+ A unit-agent has finished calling install, config-changed and start, but the
+ charm has not called status-set yet.
+
+ """
+ name = 'unknown'
+
+ def __init__(self):
+ # Unknown status cannot be set and does not have a message associated with it.
+ super().__init__('')
+
+ def __repr__(self):
+ return "UnknownStatus()"
+
+
+@StatusBase.register
+class ActiveStatus(StatusBase):
+ """The unit is ready.
+
+ The unit believes it is correctly offering all the services it has been asked to offer.
+ """
+ name = 'active'
+
+ def __init__(self, message: str = ''):
+ super().__init__(message)
+
+
+@StatusBase.register
+class BlockedStatus(StatusBase):
+ """The unit requires manual intervention.
+
+ An operator has to manually intervene to unblock the unit and let it proceed.
+ """
+ name = 'blocked'
+
+
+@StatusBase.register
+class MaintenanceStatus(StatusBase):
+ """The unit is performing maintenance tasks.
+
+ The unit is not yet providing services, but is actively doing work in preparation
+ for providing those services. This is a "spinning" state, not an error state. It
+ reflects activity on the unit itself, not on peers or related units.
+
+ """
+ name = 'maintenance'
+
+
+@StatusBase.register
+class WaitingStatus(StatusBase):
+ """A unit is unable to progress.
+
+ The unit is unable to progress to an active state because an application to which
+ it is related is not running.
+
+ """
+ name = 'waiting'
+
+
+class Resources:
+ """Object representing resources for the charm."""
+
+ def __init__(self, names: typing.Iterable[str], backend: '_ModelBackend'):
+ self._backend = backend
+ self._paths = {name: None for name in names}
+
+ def fetch(self, name: str) -> Path:
+ """Fetch the resource from the controller or store.
+
+ If successfully fetched, this returns a Path object to where the resource is stored
+ on disk, otherwise it raises a ModelError.
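+
+ Example (a minimal sketch; assumes a resource named 'software' is
+ declared in this charm's metadata.yaml)::
+
+ path = self.model.resources.fetch('software')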
+ """ + if name not in self._paths: + raise RuntimeError('invalid resource name: {}'.format(name)) + if self._paths[name] is None: + self._paths[name] = Path(self._backend.resource_get(name)) + return self._paths[name] + + +class Pod: + """Represents the definition of a pod spec in Kubernetes models. + + Currently only supports simple access to setting the Juju pod spec via :attr:`.set_spec`. + """ + + def __init__(self, backend: '_ModelBackend'): + self._backend = backend + + def set_spec(self, spec: typing.Mapping, k8s_resources: typing.Mapping = None): + """Set the specification for pods that Juju should start in kubernetes. + + See `juju help-tool pod-spec-set` for details of what should be passed. + Args: + spec: The mapping defining the pod specification + k8s_resources: Additional kubernetes specific specification. + + Returns: + """ + if not self._backend.is_leader(): + raise ModelError('cannot set a pod spec as this unit is not a leader') + self._backend.pod_spec_set(spec, k8s_resources) + + +class StorageMapping(Mapping): + """Map of storage names to lists of Storage instances.""" + + def __init__(self, storage_names: typing.Iterable[str], backend: '_ModelBackend'): + self._backend = backend + self._storage_map = {storage_name: None for storage_name in storage_names} + + def __contains__(self, key: str): + return key in self._storage_map + + def __len__(self): + return len(self._storage_map) + + def __iter__(self): + return iter(self._storage_map) + + def __getitem__(self, storage_name: str) -> typing.List['Storage']: + storage_list = self._storage_map[storage_name] + if storage_list is None: + storage_list = self._storage_map[storage_name] = [] + for storage_id in self._backend.storage_list(storage_name): + storage_list.append(Storage(storage_name, storage_id, self._backend)) + return storage_list + + def request(self, storage_name: str, count: int = 1): + """Requests new storage instances of a given name. + + Uses storage-add tool to request additional storage. Juju will notify the unit + via -storage-attached events when it becomes available. + """ + if storage_name not in self._storage_map: + raise ModelError(('cannot add storage {!r}:' + ' it is not present in the charm metadata').format(storage_name)) + self._backend.storage_add(storage_name, count) + + +class Storage: + """"Represents a storage as defined in metadata.yaml + + Attributes: + name: Simple string name of the storage + id: The provider id for storage + """ + + def __init__(self, storage_name, storage_id, backend): + self.name = storage_name + self.id = storage_id + self._backend = backend + self._location = None + + @property + def location(self): + if self._location is None: + raw = self._backend.storage_get('{}/{}'.format(self.name, self.id), "location") + self._location = Path(raw) + return self._location + + +class ModelError(Exception): + """Base class for exceptions raised when interacting with the Model.""" + pass + + +class TooManyRelatedAppsError(ModelError): + """Raised by :meth:`Model.get_relation` if there is more than one related application.""" + + def __init__(self, relation_name, num_related, max_supported): + super().__init__('Too many remote applications on {} ({} > {})'.format( + relation_name, num_related, max_supported)) + self.relation_name = relation_name + self.num_related = num_related + self.max_supported = max_supported + + +class RelationDataError(ModelError): + """Raised by ``Relation.data[entity][key] = 'foo'`` if the data is invalid. 
+
+ This is raised if you're either trying to set a value to something that isn't a string,
+ or if you are trying to set a value in a bucket that you don't have access to. (eg,
+ another application/unit or setting your application data but you aren't the leader.)
+ """
+
+
+class RelationNotFoundError(ModelError):
+ """Backend error when querying Juju for a given relation and that relation doesn't exist."""
+
+
+class InvalidStatusError(ModelError):
+ """Raised if trying to set an Application or Unit status to something invalid."""
+
+
+class _ModelBackend:
+ """Represents the connection between the Model representation and the Juju hook tools.
+
+ Charm authors should not directly interact with the ModelBackend; it is a private
+ implementation of Model.
+ """
+
+ LEASE_RENEWAL_PERIOD = datetime.timedelta(seconds=30)
+
+ def __init__(self, unit_name=None, model_name=None):
+ if unit_name is None:
+ self.unit_name = os.environ['JUJU_UNIT_NAME']
+ else:
+ self.unit_name = unit_name
+ if model_name is None:
+ model_name = os.environ.get('JUJU_MODEL_NAME')
+ self.model_name = model_name
+ self.app_name = self.unit_name.split('/')[0]
+
+ self._is_leader = None
+ self._leader_check_time = None
+
+ def _run(self, *args, return_output=False, use_json=False):
+ kwargs = dict(stdout=PIPE, stderr=PIPE)
+ if use_json:
+ args += ('--format=json',)
+ try:
+ result = run(args, check=True, **kwargs)
+ except CalledProcessError as e:
+ raise ModelError(e.stderr)
+ if return_output:
+ if result.stdout is None:
+ return ''
+ else:
+ text = result.stdout.decode('utf8')
+ if use_json:
+ return json.loads(text)
+ else:
+ return text
+
+ def relation_ids(self, relation_name):
+ relation_ids = self._run('relation-ids', relation_name, return_output=True, use_json=True)
+ return [int(relation_id.split(':')[-1]) for relation_id in relation_ids]
+
+ def relation_list(self, relation_id):
+ try:
+ return self._run('relation-list', '-r', str(relation_id),
+ return_output=True, use_json=True)
+ except ModelError as e:
+ if 'relation not found' in str(e):
+ raise RelationNotFoundError() from e
+ raise
+
+ def relation_get(self, relation_id, member_name, is_app):
+ if not isinstance(is_app, bool):
+ raise TypeError('is_app parameter to relation_get must be a boolean')
+
+ if is_app:
+ version = JujuVersion.from_environ()
+ if not version.has_app_data():
+ raise RuntimeError(
+ 'getting application data is not supported on Juju version {}'.format(version))
+
+ args = ['relation-get', '-r', str(relation_id), '-', member_name]
+ if is_app:
+ args.append('--app')
+
+ try:
+ return self._run(*args, return_output=True, use_json=True)
+ except ModelError as e:
+ if 'relation not found' in str(e):
+ raise RelationNotFoundError() from e
+ raise
+
+ def relation_set(self, relation_id, key, value, is_app):
+ if not isinstance(is_app, bool):
+ raise TypeError('is_app parameter to relation_set must be a boolean')
+
+ if is_app:
+ version = JujuVersion.from_environ()
+ if not version.has_app_data():
+ raise RuntimeError(
+ 'setting application data is not supported on Juju version {}'.format(version))
+
+ args = ['relation-set', '-r', str(relation_id), '{}={}'.format(key, value)]
+ if is_app:
+ args.append('--app')
+
+ try:
+ return self._run(*args)
+ except ModelError as e:
+ if 'relation not found' in str(e):
+ raise RelationNotFoundError() from e
+ raise
+
+ def config_get(self):
+ return self._run('config-get', return_output=True, use_json=True)
+
+ def is_leader(self):
+ """Obtain the current leadership status for the unit the charm
code is executing on.
+
+ The value is cached for the duration of a lease, which is 30s in Juju.
+ """
+ now = time.monotonic()
+ if self._leader_check_time is None:
+ check = True
+ else:
+ time_since_check = datetime.timedelta(seconds=now - self._leader_check_time)
+ check = (time_since_check > self.LEASE_RENEWAL_PERIOD or self._is_leader is None)
+ if check:
+ # Current time MUST be saved before running is-leader to ensure the cache
+ # is only used inside the window that is-leader itself asserts.
+ self._leader_check_time = now
+ self._is_leader = self._run('is-leader', return_output=True, use_json=True)
+
+ return self._is_leader
+
+ def resource_get(self, resource_name):
+ return self._run('resource-get', resource_name, return_output=True).strip()
+
+ def pod_spec_set(self, spec, k8s_resources):
+ tmpdir = Path(tempfile.mkdtemp('-pod-spec-set'))
+ try:
+ spec_path = tmpdir / 'spec.json'
+ spec_path.write_text(json.dumps(spec))
+ args = ['--file', str(spec_path)]
+ if k8s_resources:
+ k8s_res_path = tmpdir / 'k8s-resources.json'
+ k8s_res_path.write_text(json.dumps(k8s_resources))
+ args.extend(['--k8s-resources', str(k8s_res_path)])
+ self._run('pod-spec-set', *args)
+ finally:
+ shutil.rmtree(str(tmpdir))
+
+ def status_get(self, *, is_app=False):
+ """Get the status of a unit or an application.
+
+ Args:
+ is_app: A boolean indicating whether the status should be retrieved for a unit
+ or an application.
+ """
+ content = self._run(
+ 'status-get', '--include-data', '--application={}'.format(is_app),
+ use_json=True,
+ return_output=True)
+ # Unit status looks like (in YAML):
+ # message: 'load: 0.28 0.26 0.26'
+ # status: active
+ # status-data: {}
+ # Application status looks like (in YAML):
+ # application-status:
+ # message: 'load: 0.28 0.26 0.26'
+ # status: active
+ # status-data: {}
+ # units:
+ # uo/0:
+ # message: 'load: 0.28 0.26 0.26'
+ # status: active
+ # status-data: {}
+
+ if is_app:
+ return {'status': content['application-status']['status'],
+ 'message': content['application-status']['message']}
+ else:
+ return content
+
+ def status_set(self, status, message='', *, is_app=False):
+ """Set the status of a unit or an application.
+
+ Args:
+ is_app: A boolean indicating whether the status should be set for a unit or an
+ application.
+ """ + if not isinstance(is_app, bool): + raise TypeError('is_app parameter must be boolean') + return self._run('status-set', '--application={}'.format(is_app), status, message) + + def storage_list(self, name): + return [int(s.split('/')[1]) for s in self._run('storage-list', name, + return_output=True, use_json=True)] + + def storage_get(self, storage_name_id, attribute): + return self._run('storage-get', '-s', storage_name_id, attribute, + return_output=True, use_json=True) + + def storage_add(self, name, count=1): + if not isinstance(count, int) or isinstance(count, bool): + raise TypeError('storage count must be integer, got: {} ({})'.format(count, + type(count))) + self._run('storage-add', '{}={}'.format(name, count)) + + def action_get(self): + return self._run('action-get', return_output=True, use_json=True) + + def action_set(self, results): + self._run('action-set', *["{}={}".format(k, v) for k, v in results.items()]) + + def action_log(self, message): + self._run('action-log', message) + + def action_fail(self, message=''): + self._run('action-fail', message) + + def application_version_set(self, version): + self._run('application-version-set', '--', version) + + def juju_log(self, level, message): + self._run('juju-log', '--log-level', level, message) + + def network_get(self, binding_name, relation_id=None): + """Return network info provided by network-get for a given binding. + + Args: + binding_name: A name of a binding (relation name or extra-binding name). + relation_id: An optional relation id to get network info for. + """ + cmd = ['network-get', binding_name] + if relation_id is not None: + cmd.extend(['-r', str(relation_id)]) + try: + return self._run(*cmd, return_output=True, use_json=True) + except ModelError as e: + if 'relation not found' in str(e): + raise RelationNotFoundError() from e + raise + + def add_metrics(self, metrics, labels=None): + cmd = ['add-metric'] + + if labels: + label_args = [] + for k, v in labels.items(): + _ModelBackendValidator.validate_metric_label(k) + _ModelBackendValidator.validate_label_value(k, v) + label_args.append('{}={}'.format(k, v)) + cmd.extend(['--labels', ','.join(label_args)]) + + metric_args = [] + for k, v in metrics.items(): + _ModelBackendValidator.validate_metric_key(k) + metric_value = _ModelBackendValidator.format_metric_value(v) + metric_args.append('{}={}'.format(k, metric_value)) + cmd.extend(metric_args) + self._run(*cmd) + + +class _ModelBackendValidator: + """Provides facilities for validating inputs and formatting them for model backends.""" + + METRIC_KEY_REGEX = re.compile(r'^[a-zA-Z](?:[a-zA-Z0-9-_]*[a-zA-Z0-9])?$') + + @classmethod + def validate_metric_key(cls, key): + if cls.METRIC_KEY_REGEX.match(key) is None: + raise ModelError( + 'invalid metric key {!r}: must match {}'.format( + key, cls.METRIC_KEY_REGEX.pattern)) + + @classmethod + def validate_metric_label(cls, label_name): + if cls.METRIC_KEY_REGEX.match(label_name) is None: + raise ModelError( + 'invalid metric label name {!r}: must match {}'.format( + label_name, cls.METRIC_KEY_REGEX.pattern)) + + @classmethod + def format_metric_value(cls, value): + try: + decimal_value = decimal.Decimal.from_float(value) + except TypeError as e: + e2 = ModelError('invalid metric value {!r} provided:' + ' must be a positive finite float'.format(value)) + raise e2 from e + if decimal_value.is_nan() or decimal_value.is_infinite() or decimal_value < 0: + raise ModelError('invalid metric value {!r} provided:' + ' must be a positive finite float'.format(value)) + 
return str(decimal_value)
+
+ @classmethod
+ def validate_label_value(cls, label, value):
+ # Label values cannot be empty, contain commas or equal signs as those are
+ # used by add-metric as separators.
+ if not value:
+ raise ModelError(
+ 'metric label {} has an empty value, which is not allowed'.format(label))
+ v = str(value)
+ if re.search('[,=]', v) is not None:
+ raise ModelError(
+ 'metric label values must not contain "," or "=": {}={!r}'.format(label, value))
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/ops/storage.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/ops/storage.py
new file mode 100755
index 0000000000000000000000000000000000000000..d4310ce1cfbb707c6278b70f84a9751da3ce07af
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/ops/storage.py
@@ -0,0 +1,318 @@
+# Copyright 2019-2020 Canonical Ltd.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from datetime import timedelta
+import pickle
+import shutil
+import subprocess
+import sqlite3
+import typing
+
+import yaml
+
+
+class SQLiteStorage:
+
+ DB_LOCK_TIMEOUT = timedelta(hours=1)
+
+ def __init__(self, filename):
+ # The isolation_level argument is set to None such that the implicit
+ # transaction management behavior of the sqlite3 module is disabled.
+ self._db = sqlite3.connect(str(filename),
+ isolation_level=None,
+ timeout=self.DB_LOCK_TIMEOUT.total_seconds())
+ self._setup()
+
+ def _setup(self):
+ # Make sure that the database is locked until the connection is closed,
+ # not until the transaction ends.
+ self._db.execute("PRAGMA locking_mode=EXCLUSIVE")
+ c = self._db.execute("BEGIN")
+ c.execute("SELECT count(name) FROM sqlite_master WHERE type='table' AND name='snapshot'")
+ if c.fetchone()[0] == 0:
+ # Keep in mind what might happen if the process dies somewhere below.
+ # The system must not be rendered permanently broken by that.
+ self._db.execute("CREATE TABLE snapshot (handle TEXT PRIMARY KEY, data BLOB)")
+ self._db.execute('''
+ CREATE TABLE notice (
+ sequence INTEGER PRIMARY KEY AUTOINCREMENT,
+ event_path TEXT,
+ observer_path TEXT,
+ method_name TEXT)
+ ''')
+ self._db.commit()
+
+ def close(self):
+ self._db.close()
+
+ def commit(self):
+ self._db.commit()
+
+ # There's commit but no rollback. For abort to be supported, we'll need logic that
+ # can roll back decisions made by third-party code in terms of the internal state
+ # of objects that have been snapshotted, and hooks to let them know about it and
+ # take the needed actions to undo their logic until the last snapshot.
+ # This is doable but will significantly increase the chances for mistakes.
+
+ def save_snapshot(self, handle_path: str, snapshot_data: typing.Any) -> None:
+ """Part of the Storage API, persist snapshot data under the given handle.
+
+ Args:
+ handle_path: The string identifying the snapshot.
+ snapshot_data: The data to be persisted (as returned by Object.snapshot()).
This + might be a dict/tuple/int, but must only contain 'simple' python types. + """ + # Use pickle for serialization, so the value remains portable. + raw_data = pickle.dumps(snapshot_data) + self._db.execute("REPLACE INTO snapshot VALUES (?, ?)", (handle_path, raw_data)) + + def load_snapshot(self, handle_path: str) -> typing.Any: + """Part of the Storage API, retrieve a snapshot that was previously saved. + + Args: + handle_path: The string identifying the snapshot. + Raises: + NoSnapshotError: if there is no snapshot for the given handle_path. + """ + c = self._db.cursor() + c.execute("SELECT data FROM snapshot WHERE handle=?", (handle_path,)) + row = c.fetchone() + if row: + return pickle.loads(row[0]) + raise NoSnapshotError(handle_path) + + def drop_snapshot(self, handle_path: str): + """Part of the Storage API, remove a snapshot that was previously saved. + + Dropping a snapshot that doesn't exist is treated as a no-op. + """ + self._db.execute("DELETE FROM snapshot WHERE handle=?", (handle_path,)) + + def list_snapshots(self) -> typing.Generator[str, None, None]: + """Return the name of all snapshots that are currently saved.""" + c = self._db.cursor() + c.execute("SELECT handle FROM snapshot") + while True: + rows = c.fetchmany() + if not rows: + break + for row in rows: + yield row[0] + + def save_notice(self, event_path: str, observer_path: str, method_name: str) -> None: + """Part of the Storage API, record an notice (event and observer)""" + self._db.execute('INSERT INTO notice VALUES (NULL, ?, ?, ?)', + (event_path, observer_path, method_name)) + + def drop_notice(self, event_path: str, observer_path: str, method_name: str) -> None: + """Part of the Storage API, remove a notice that was previously recorded.""" + self._db.execute(''' + DELETE FROM notice + WHERE event_path=? + AND observer_path=? + AND method_name=? + ''', (event_path, observer_path, method_name)) + + def notices(self, event_path: typing.Optional[str]) ->\ + typing.Generator[typing.Tuple[str, str, str], None, None]: + """Part of the Storage API, return all notices that begin with event_path. + + Args: + event_path: If supplied, will only yield events that match event_path. If not + supplied (or None/'') will return all events. + Returns: + Iterable of (event_path, observer_path, method_name) tuples + """ + if event_path: + c = self._db.execute(''' + SELECT event_path, observer_path, method_name + FROM notice + WHERE event_path=? + ORDER BY sequence + ''', (event_path,)) + else: + c = self._db.execute(''' + SELECT event_path, observer_path, method_name + FROM notice + ORDER BY sequence + ''') + while True: + rows = c.fetchmany() + if not rows: + break + for row in rows: + yield tuple(row) + + +class JujuStorage: + """"Storing the content tracked by the Framework in Juju. + + This uses :class:`_JujuStorageBackend` to interact with state-get/state-set + as the way to store state for the framework and for components. 
+ """ + + NOTICE_KEY = "#notices#" + + def __init__(self, backend: '_JujuStorageBackend' = None): + self._backend = backend + if backend is None: + self._backend = _JujuStorageBackend() + + def close(self): + return + + def commit(self): + return + + def save_snapshot(self, handle_path: str, snapshot_data: typing.Any) -> None: + self._backend.set(handle_path, snapshot_data) + + def load_snapshot(self, handle_path): + try: + content = self._backend.get(handle_path) + except KeyError: + raise NoSnapshotError(handle_path) + return content + + def drop_snapshot(self, handle_path): + self._backend.delete(handle_path) + + def save_notice(self, event_path: str, observer_path: str, method_name: str): + notice_list = self._load_notice_list() + notice_list.append([event_path, observer_path, method_name]) + self._save_notice_list(notice_list) + + def drop_notice(self, event_path: str, observer_path: str, method_name: str): + notice_list = self._load_notice_list() + notice_list.remove([event_path, observer_path, method_name]) + self._save_notice_list(notice_list) + + def notices(self, event_path: str): + notice_list = self._load_notice_list() + for row in notice_list: + if row[0] != event_path: + continue + yield tuple(row) + + def _load_notice_list(self) -> typing.List[typing.Tuple[str]]: + try: + notice_list = self._backend.get(self.NOTICE_KEY) + except KeyError: + return [] + if notice_list is None: + return [] + return notice_list + + def _save_notice_list(self, notices: typing.List[typing.Tuple[str]]) -> None: + self._backend.set(self.NOTICE_KEY, notices) + + +class _SimpleLoader(getattr(yaml, 'CSafeLoader', yaml.SafeLoader)): + """Handle a couple basic python types. + + yaml.SafeLoader can handle all the basic int/float/dict/set/etc that we want. The only one + that it *doesn't* handle is tuples. We don't want to support arbitrary types, so we just + subclass SafeLoader and add tuples back in. + """ + # Taken from the example at: + # https://stackoverflow.com/questions/9169025/how-can-i-add-a-python-tuple-to-a-yaml-file-using-pyyaml + + construct_python_tuple = yaml.Loader.construct_python_tuple + + +_SimpleLoader.add_constructor( + u'tag:yaml.org,2002:python/tuple', + _SimpleLoader.construct_python_tuple) + + +class _SimpleDumper(getattr(yaml, 'CSafeDumper', yaml.SafeDumper)): + """Add types supported by 'marshal' + + YAML can support arbitrary types, but that is generally considered unsafe (like pickle). So + we want to only support dumping out types that are safe to load. + """ + + +_SimpleDumper.represent_tuple = yaml.Dumper.represent_tuple +_SimpleDumper.add_representer(tuple, _SimpleDumper.represent_tuple) + + +class _JujuStorageBackend: + """Implements the interface from the Operator framework to Juju's state-get/set/etc.""" + + @staticmethod + def is_available() -> bool: + """Check if Juju state storage is available. + + This checks if there is a 'state-get' executable in PATH. + """ + p = shutil.which('state-get') + return p is not None + + def set(self, key: str, value: typing.Any) -> None: + """Set a key to a given value. + + Args: + key: The string key that will be used to find the value later + value: Arbitrary content that will be returned by get(). + Raises: + CalledProcessError: if 'state-set' returns an error code. + """ + # default_flow_style=None means that it can use Block for + # complex types (types that have nested types) but use flow + # for simple types (like an array). Not all versions of PyYAML + # have the same default style. 
+ encoded_value = yaml.dump(value, Dumper=_SimpleDumper, default_flow_style=None)
+ content = yaml.dump(
+ {key: encoded_value}, encoding='utf-8', default_style='|',
+ default_flow_style=False,
+ Dumper=_SimpleDumper)
+ subprocess.run(["state-set", "--file", "-"], input=content, check=True)
+
+ def get(self, key: str) -> typing.Any:
+ """Get the value associated with a given key.
+
+ Args:
+ key: The string key that will be used to find the value
+ Raises:
+ CalledProcessError: if 'state-get' returns an error code.
+ """
+ # We don't capture stderr here so it can end up in debug logs.
+ p = subprocess.run(
+ ["state-get", key],
+ stdout=subprocess.PIPE,
+ check=True,
+ )
+ if p.stdout == b'' or p.stdout == b'\n':
+ raise KeyError(key)
+ return yaml.load(p.stdout, Loader=_SimpleLoader)
+
+ def delete(self, key: str) -> None:
+ """Remove a key from being tracked.
+
+ Args:
+ key: The key to stop storing
+ Raises:
+ CalledProcessError: if 'state-delete' returns an error code.
+ """
+ subprocess.run(["state-delete", key], check=True)
+
+
+class NoSnapshotError(Exception):
+
+ def __init__(self, handle_path):
+ self.handle_path = handle_path
+
+ def __str__(self):
+ return 'no snapshot data found for {} object'.format(self.handle_path)
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/ops/testing.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/ops/testing.py
new file mode 100755
index 0000000000000000000000000000000000000000..b4b3fe071216238007c9f3847ca9556be626bf6b
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/ops/testing.py
@@ -0,0 +1,586 @@
+# Copyright 2020 Canonical Ltd.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import inspect
+import pathlib
+from textwrap import dedent
+import tempfile
+import typing
+import yaml
+import weakref
+
+from ops import (
+ charm,
+ framework,
+ model,
+ storage,
+)
+
+
+# OptionalYAML is something like metadata.yaml or actions.yaml. You can
+# pass in a file-like object or the string directly.
+OptionalYAML = typing.Optional[typing.Union[str, typing.TextIO]]
+
+
+# noinspection PyProtectedMember
+class Harness:
+ """This class represents a way to build up the model that will drive a test suite.
+
+ The model that is created is from the viewpoint of the charm that you are testing.
+
+ Example::
+
+ harness = Harness(MyCharm)
+ # Do initial setup here
+ relation_id = harness.add_relation('db', 'postgresql')
+ # Now instantiate the charm to see events as the model changes
+ harness.begin()
+ harness.add_relation_unit(relation_id, 'postgresql/0')
+ harness.update_relation_data(relation_id, 'postgresql/0', {'key': 'val'})
+ # Check that charm has properly handled the relation_joined event for postgresql/0
+ self.assertEqual(harness.charm. ...)
+
+ Args:
+ charm_cls: The Charm class that you'll be testing.
+ meta: A string or file-like object containing the contents of
+ metadata.yaml.
If not supplied, we will look for a 'metadata.yaml' file in the
+ parent directory of the Charm, and if not found fall back to a trivial
+ 'name: test-charm' metadata.
+ actions: A string or file-like object containing the contents of
+ actions.yaml. If not supplied, we will look for an 'actions.yaml' file in the
+ parent directory of the Charm.
+ """
+
+ def __init__(
+ self,
+ charm_cls: typing.Type[charm.CharmBase],
+ *,
+ meta: OptionalYAML = None,
+ actions: OptionalYAML = None):
+ # TODO: jam 2020-03-05 We probably want to take config as a parameter as well, since
+ # it would define the default values of config that the charm would see.
+ self._charm_cls = charm_cls
+ self._charm = None
+ self._charm_dir = 'no-disk-path' # this may be updated by _create_meta
+ self._lazy_resource_dir = None
+ self._meta = self._create_meta(meta, actions)
+ self._unit_name = self._meta.name + '/0'
+ self._framework = None
+ self._hooks_enabled = True
+ self._relation_id_counter = 0
+ self._backend = _TestingModelBackend(self._unit_name, self._meta)
+ self._model = model.Model(self._meta, self._backend)
+ self._storage = storage.SQLiteStorage(':memory:')
+ self._framework = framework.Framework(
+ self._storage, self._charm_dir, self._meta, self._model)
+
+ @property
+ def charm(self) -> charm.CharmBase:
+ """Return the instance of the charm class that was passed to __init__.
+
+ Note that the Charm is not instantiated until you have called
+ :meth:`.begin()`.
+ """
+ return self._charm
+
+ @property
+ def model(self) -> model.Model:
+ """Return the :class:`~ops.model.Model` that is being driven by this Harness."""
+ return self._model
+
+ @property
+ def framework(self) -> framework.Framework:
+ """Return the Framework that is being driven by this Harness."""
+ return self._framework
+
+ @property
+ def _resource_dir(self) -> pathlib.Path:
+ if self._lazy_resource_dir is not None:
+ return self._lazy_resource_dir
+
+ self.__resource_dir = tempfile.TemporaryDirectory()
+ self._lazy_resource_dir = pathlib.Path(self.__resource_dir.name)
+ self._finalizer = weakref.finalize(self, self.__resource_dir.cleanup)
+ return self._lazy_resource_dir
+
+ def begin(self) -> None:
+ """Instantiate the Charm and start handling events.
+
+ Before calling begin(), there is no Charm instance, so changes to the Model won't emit
+ events. You must call begin before :attr:`.charm` is valid.
+ """
+ if self._charm is not None:
+ raise RuntimeError('cannot call the begin method on the harness more than once')
+
+ # The Framework adds attributes to class objects for events, etc. As such, we can't re-use
+ # the original class against multiple Frameworks. So create a locally defined class
+ # and register it.
+ # TODO: jam 2020-03-16 We are looking to change this to instance attributes instead of
+ # class attributes, which should clean up this ugliness. The API can stay the same
+ class TestEvents(self._charm_cls.on.__class__):
+ pass
+
+ TestEvents.__name__ = self._charm_cls.on.__class__.__name__
+
+ class TestCharm(self._charm_cls):
+ on = TestEvents()
+
+ # Note: jam 2020-03-01 This is so that errors in testing say MyCharm has no attribute foo,
+ # rather than TestCharm has no attribute foo.
+ TestCharm.__name__ = self._charm_cls.__name__
+ self._charm = TestCharm(self._framework)
+
+ def _create_meta(self, charm_metadata, action_metadata):
+ """Create a CharmMeta object.
+
+ Handle the cases where a user doesn't supply explicit metadata snippets.
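+
+ For example, a test can pass metadata inline instead of relying on a
+ metadata.yaml on disk (a minimal sketch; names are illustrative)::
+
+ harness = Harness(MyCharm, meta='''
+ name: test-app
+ ''')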
+ """ + filename = inspect.getfile(self._charm_cls) + charm_dir = pathlib.Path(filename).parents[1] + + if charm_metadata is None: + metadata_path = charm_dir / 'metadata.yaml' + if metadata_path.is_file(): + charm_metadata = metadata_path.read_text() + self._charm_dir = charm_dir + else: + # The simplest of metadata that the framework can support + charm_metadata = 'name: test-charm' + elif isinstance(charm_metadata, str): + charm_metadata = dedent(charm_metadata) + + if action_metadata is None: + actions_path = charm_dir / 'actions.yaml' + if actions_path.is_file(): + action_metadata = actions_path.read_text() + self._charm_dir = charm_dir + elif isinstance(action_metadata, str): + action_metadata = dedent(action_metadata) + + return charm.CharmMeta.from_yaml(charm_metadata, action_metadata) + + def add_oci_resource(self, resource_name: str, + contents: typing.Mapping[str, str] = None) -> None: + """Add oci resources to the backend. + + This will register an oci resource and create a temporary file for processing metadata + about the resource. A default set of values will be used for all the file contents + unless a specific contents dict is provided. + + Args: + resource_name: Name of the resource to add custom contents to. + contents: Optional custom dict to write for the named resource. + """ + if not contents: + contents = {'registrypath': 'registrypath', + 'username': 'username', + 'password': 'password', + } + if resource_name not in self._meta.resources.keys(): + raise RuntimeError('Resource {} is not a defined resources'.format(resource_name)) + if self._meta.resources[resource_name].type != "oci-image": + raise RuntimeError('Resource {} is not an OCI Image'.format(resource_name)) + resource_dir = self._resource_dir / resource_name + resource_dir.mkdir(exist_ok=True) + resource_file = resource_dir / "contents.yaml" + with resource_file.open('wt', encoding='utf8') as resource_yaml: + yaml.dump(contents, resource_yaml) + self._backend._resources_map[resource_name] = resource_file + + def populate_oci_resources(self) -> None: + """Populate all OCI resources.""" + for name, data in self._meta.resources.items(): + if data.type == "oci-image": + self.add_oci_resource(name) + + def disable_hooks(self) -> None: + """Stop emitting hook events when the model changes. + + This can be used by developers to stop changes to the model from emitting events that + the charm will react to. Call :meth:`.enable_hooks` + to re-enable them. + """ + self._hooks_enabled = False + + def enable_hooks(self) -> None: + """Re-enable hook events from charm.on when the model is changed. + + By default hook events are enabled once you call :meth:`.begin`, + but if you have used :meth:`.disable_hooks`, this can be used to + enable them again. + """ + self._hooks_enabled = True + + def _next_relation_id(self): + rel_id = self._relation_id_counter + self._relation_id_counter += 1 + return rel_id + + def add_relation(self, relation_name: str, remote_app: str) -> int: + """Declare that there is a new relation between this app and `remote_app`. + + Args: + relation_name: The relation on Charm that is being related to + remote_app: The name of the application that is being related to + + Return: + The relation_id created by this add_relation. 
+ """ + rel_id = self._next_relation_id() + self._backend._relation_ids_map.setdefault(relation_name, []).append(rel_id) + self._backend._relation_names[rel_id] = relation_name + self._backend._relation_list_map[rel_id] = [] + self._backend._relation_data[rel_id] = { + remote_app: {}, + self._backend.unit_name: {}, + self._backend.app_name: {}, + } + # Reload the relation_ids list + if self._model is not None: + self._model.relations._invalidate(relation_name) + if self._charm is None or not self._hooks_enabled: + return rel_id + relation = self._model.get_relation(relation_name, rel_id) + app = self._model.get_app(remote_app) + self._charm.on[relation_name].relation_created.emit( + relation, app) + return rel_id + + def add_relation_unit(self, relation_id: int, remote_unit_name: str) -> None: + """Add a new unit to a relation. + + Example:: + + rel_id = harness.add_relation('db', 'postgresql') + harness.add_relation_unit(rel_id, 'postgresql/0') + + This will trigger a `relation_joined` event and a `relation_changed` event. + + Args: + relation_id: The integer relation identifier (as returned by add_relation). + remote_unit_name: A string representing the remote unit that is being added. + Return: + None + """ + self._backend._relation_list_map[relation_id].append(remote_unit_name) + self._backend._relation_data[relation_id][remote_unit_name] = {} + relation_name = self._backend._relation_names[relation_id] + # Make sure that the Model reloads the relation_list for this relation_id, as well as + # reloading the relation data for this unit. + if self._model is not None: + remote_unit = self._model.get_unit(remote_unit_name) + relation = self._model.get_relation(relation_name, relation_id) + unit_cache = relation.data.get(remote_unit, None) + if unit_cache is not None: + unit_cache._invalidate() + self._model.relations._invalidate(relation_name) + if self._charm is None or not self._hooks_enabled: + return + self._charm.on[relation_name].relation_joined.emit( + relation, remote_unit.app, remote_unit) + + def get_relation_data(self, relation_id: int, app_or_unit: str) -> typing.Mapping: + """Get the relation data bucket for a single app or unit in a given relation. + + This ignores all of the safety checks of who can and can't see data in relations (eg, + non-leaders can't read their own application's relation data because there are no events + that keep that data up-to-date for the unit). + + Args: + relation_id: The relation whose content we want to look at. + app_or_unit: The name of the application or unit whose data we want to read + Return: + a dict containing the relation data for `app_or_unit` or None. + Raises: + KeyError: if relation_id doesn't exist + """ + return self._backend._relation_data[relation_id].get(app_or_unit, None) + + def get_workload_version(self) -> str: + """Read the workload version that was set by the unit.""" + return self._backend._workload_version + + def set_model_name(self, name: str) -> None: + """Set the name of the Model that this is representing. + + This cannot be called once begin() has been called. But it lets you set the value that + will be returned by Model.name. + """ + if self._charm is not None: + raise RuntimeError('cannot set the Model name after begin()') + self._backend.model_name = name + + def update_relation_data( + self, + relation_id: int, + app_or_unit: str, + key_values: typing.Mapping, + ) -> None: + """Update the relation data for a given unit or application in a given relation. 
+ + This also triggers the `relation_changed` event for this relation_id. + + Args: + relation_id: The integer relation_id representing this relation. + app_or_unit: The unit or application name that is being updated. + This can be the local or remote application. + key_values: Each key/value will be updated in the relation data. + """ + relation_name = self._backend._relation_names[relation_id] + relation = self._model.get_relation(relation_name, relation_id) + if '/' in app_or_unit: + entity = self._model.get_unit(app_or_unit) + else: + entity = self._model.get_app(app_or_unit) + rel_data = relation.data.get(entity, None) + if rel_data is not None: + # rel_data may have cached now-stale data, so _invalidate() it. + # Note, this won't cause the data to be loaded if it wasn't already. + rel_data._invalidate() + + new_values = self._backend._relation_data[relation_id][app_or_unit].copy() + for k, v in key_values.items(): + if v == '': + new_values.pop(k, None) + else: + new_values[k] = v + self._backend._relation_data[relation_id][app_or_unit] = new_values + + if app_or_unit == self._model.unit.name: + # No events for our own unit + return + if app_or_unit == self._model.app.name: + # updating our own app only generates an event if it is a peer relation and we + # aren't the leader + is_peer = self._meta.relations[relation_name].role.is_peer() + if not is_peer: + return + if self._model.unit.is_leader(): + return + self._emit_relation_changed(relation_id, app_or_unit) + + def _emit_relation_changed(self, relation_id, app_or_unit): + if self._charm is None or not self._hooks_enabled: + return + rel_name = self._backend._relation_names[relation_id] + relation = self.model.get_relation(rel_name, relation_id) + if '/' in app_or_unit: + app_name = app_or_unit.split('/')[0] + unit_name = app_or_unit + app = self.model.get_app(app_name) + unit = self.model.get_unit(unit_name) + args = (relation, app, unit) + else: + app_name = app_or_unit + app = self.model.get_app(app_name) + args = (relation, app) + self._charm.on[rel_name].relation_changed.emit(*args) + + def update_config( + self, + key_values: typing.Mapping[str, str] = None, + unset: typing.Iterable[str] = (), + ) -> None: + """Update the config as seen by the charm. + + This will trigger a `config_changed` event. + + Args: + key_values: A Mapping of key:value pairs to update in config. + unset: An iterable of keys to remove from Config. (Note that this does + not currently reset the config values to the default defined in config.yaml.) + """ + config = self._backend._config + if key_values is not None: + for key, value in key_values.items(): + config[key] = value + for key in unset: + config.pop(key, None) + # NOTE: jam 2020-03-01 Note that this sort of works "by accident". Config + # is a LazyMapping, but its _load returns a dict and this method mutates + # the dict that Config is caching. Arguably we should be doing some sort + # of charm.framework.model.config._invalidate() + if self._charm is None or not self._hooks_enabled: + return + self._charm.on.config_changed.emit() + + def set_leader(self, is_leader: bool = True) -> None: + """Set whether this unit is the leader or not. + + If this charm becomes a leader then `leader_elected` will be triggered. + + Args: + is_leader: True/False as to whether this unit is the leader. 
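+
+ Example (a minimal sketch)::
+
+ harness.set_leader(True)
+ assert harness.model.unit.is_leader()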
+ """ + was_leader = self._backend._is_leader + self._backend._is_leader = is_leader + # Note: jam 2020-03-01 currently is_leader is cached at the ModelBackend level, not in + # the Model objects, so this automatically gets noticed. + if is_leader and not was_leader and self._charm is not None and self._hooks_enabled: + self._charm.on.leader_elected.emit() + + def _get_backend_calls(self, reset: bool = True) -> list: + """Return the calls that we have made to the TestingModelBackend. + + This is useful mostly for testing the framework itself, so that we can assert that we + do/don't trigger extra calls. + + Args: + reset: If True, reset the calls list back to empty, if false, the call list is + preserved. + Return: + ``[(call1, args...), (call2, args...)]`` + """ + calls = self._backend._calls.copy() + if reset: + self._backend._calls.clear() + return calls + + +def _record_calls(cls): + """Replace methods on cls with methods that record that they have been called. + + Iterate all attributes of cls, and for public methods, replace them with a wrapped method + that records the method called along with the arguments and keyword arguments. + """ + for meth_name, orig_method in cls.__dict__.items(): + if meth_name.startswith('_'): + continue + + def decorator(orig_method): + def wrapped(self, *args, **kwargs): + full_args = (orig_method.__name__,) + args + if kwargs: + full_args = full_args + (kwargs,) + self._calls.append(full_args) + return orig_method(self, *args, **kwargs) + return wrapped + + setattr(cls, meth_name, decorator(orig_method)) + return cls + + +@_record_calls +class _TestingModelBackend: + """This conforms to the interface for ModelBackend but provides canned data. + + DO NOT use this class directly, it is used by `Harness`_ to drive the model. + `Harness`_ is responsible for maintaining the internal consistency of the values here, + as the only public methods of this type are for implementing ModelBackend. + """ + + def __init__(self, unit_name, meta): + self.unit_name = unit_name + self.app_name = self.unit_name.split('/')[0] + self.model_name = None + self._calls = [] + self._meta = meta + self._is_leader = None + self._relation_ids_map = {} # relation name to [relation_ids,...] + self._relation_names = {} # reverse map from relation_id to relation_name + self._relation_list_map = {} # relation_id: [unit_name,...] 
+ self._relation_data = {} # {relation_id: {name: data}} + self._config = {} + self._is_leader = False + self._resources_map = {} + self._pod_spec = None + self._app_status = {'status': 'unknown', 'message': ''} + self._unit_status = {'status': 'maintenance', 'message': ''} + self._workload_version = None + + def relation_ids(self, relation_name): + try: + return self._relation_ids_map[relation_name] + except KeyError as e: + if relation_name not in self._meta.relations: + raise model.ModelError('{} is not a known relation'.format(relation_name)) from e + return [] + + def relation_list(self, relation_id): + try: + return self._relation_list_map[relation_id] + except KeyError as e: + raise model.RelationNotFoundError from e + + def relation_get(self, relation_id, member_name, is_app): + if is_app and '/' in member_name: + member_name = member_name.split('/')[0] + if relation_id not in self._relation_data: + raise model.RelationNotFoundError() + return self._relation_data[relation_id][member_name].copy() + + def relation_set(self, relation_id, key, value, is_app): + relation = self._relation_data[relation_id] + if is_app: + bucket_key = self.app_name + else: + bucket_key = self.unit_name + if bucket_key not in relation: + relation[bucket_key] = {} + bucket = relation[bucket_key] + if value == '': + bucket.pop(key, None) + else: + bucket[key] = value + + def config_get(self): + return self._config + + def is_leader(self): + return self._is_leader + + def application_version_set(self, version): + self._workload_version = version + + def resource_get(self, resource_name): + return self._resources_map[resource_name] + + def pod_spec_set(self, spec, k8s_resources): + self._pod_spec = (spec, k8s_resources) + + def status_get(self, *, is_app=False): + if is_app: + return self._app_status + else: + return self._unit_status + + def status_set(self, status, message='', *, is_app=False): + if is_app: + self._app_status = {'status': status, 'message': message} + else: + self._unit_status = {'status': status, 'message': message} + + def storage_list(self, name): + raise NotImplementedError(self.storage_list) + + def storage_get(self, storage_name_id, attribute): + raise NotImplementedError(self.storage_get) + + def storage_add(self, name, count=1): + raise NotImplementedError(self.storage_add) + + def action_get(self): + raise NotImplementedError(self.action_get) + + def action_set(self, results): + raise NotImplementedError(self.action_set) + + def action_log(self, message): + raise NotImplementedError(self.action_log) + + def action_fail(self, message=''): + raise NotImplementedError(self.action_fail) + + def network_get(self, endpoint_name, relation_id=None): + raise NotImplementedError(self.network_get) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/ops/version.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/ops/version.py new file mode 100644 index 0000000000000000000000000000000000000000..15e5478555ee0fa948bfb0ad57cc79ba7cef3721 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/ops/version.py @@ -0,0 +1,50 @@ +# Copyright 2020 Canonical Ltd. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import subprocess
+from pathlib import Path
+
+__all__ = ('version',)
+
+_FALLBACK = '0.8'  # this gets bumped after release
+
+
+def _get_version():
+    version = _FALLBACK + ".dev0+unknown"
+
+    p = Path(__file__).parent
+    if (p.parent / '.git').exists():
+        try:
+            proc = subprocess.run(
+                ['git', 'describe', '--tags', '--dirty'],
+                stdout=subprocess.PIPE,
+                stderr=subprocess.DEVNULL,
+                cwd=p,
+                check=True)
+        except Exception:
+            pass
+        else:
+            version = proc.stdout.strip().decode('utf8')
+            if '-' in version:
+                # version will look like <tag>-<#commits>-g<sha>[-dirty]
+                # in terms of PEP 440, the tag must be a 'public version identifier';
+                # everything after the first - needs to be a 'local version'
+                public, local = version.split('-', 1)
+                version = public + '+' + local.replace('-', '.')
+                # version now <tag>+<#commits>.g<sha>[.dirty]
+                # which is PEP440-compliant (as long as <tag> is :-)
+    return version
+
+
+version = _get_version()
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/requirements-dev.txt b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/requirements-dev.txt
new file mode 100644
index 0000000000000000000000000000000000000000..61bd425a6748e8b316bf81948e809b1eba148f16
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/requirements-dev.txt
@@ -0,0 +1,5 @@
+-r requirements.txt
+
+autopep8
+flake8
+logassert
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/requirements.txt b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/requirements.txt
new file mode 100644
index 0000000000000000000000000000000000000000..5500f007d0bf6c6098afc0f2c6d00915e345a569
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/requirements.txt
@@ -0,0 +1 @@
+PyYAML
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/run_tests b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/run_tests
new file mode 100755
index 0000000000000000000000000000000000000000..56411030fdcd8629ffbfbbebb8a8a0650203a934
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/run_tests
@@ -0,0 +1,3 @@
+#!/bin/bash
+
+python3 -m unittest "$@"
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/setup.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/setup.py
new file mode 100644
index 0000000000000000000000000000000000000000..ceb866815ae1227338a1e2dc67a217a338ae7bbc
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/setup.py
@@ -0,0 +1,75 @@
+# Copyright 2019 Canonical Ltd.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and +# limitations under the License. + +from importlib.util import spec_from_file_location, module_from_spec +from pathlib import Path +from setuptools import setup, find_packages + + +def _read_me() -> str: + with open("README.md", "rt", encoding="utf8") as fh: + readme = fh.read() + return readme + + +def _get_version() -> str: + """Get the version via ops/version.py, without loading ops/__init__.py""" + spec = spec_from_file_location('ops.version', 'ops/version.py') + module = module_from_spec(spec) + spec.loader.exec_module(module) + + return module.version + + +version = _get_version() +version_path = Path("ops/version.py") +version_backup = Path("ops/version.py~") +version_path.rename(version_backup) +try: + with version_path.open("wt", encoding="utf8") as fh: + fh.write('''\ +# this is a generated file + +version = {!r} +'''.format(version)) + + setup( + name="ops", + version=version, + description="The Python library behind great charms", + long_description=_read_me(), + long_description_content_type="text/markdown", + license="Apache-2.0", + url="https://github.com/canonical/operator", + author="The Charmcraft team at Canonical Ltd.", + author_email="charmcraft@lists.launchpad.net", + packages=find_packages(include=('ops', 'ops.*')), + classifiers=[ + "Programming Language :: Python :: 3", + "License :: OSI Approved :: Apache Software License", + "Development Status :: 4 - Beta", + + "Intended Audience :: Developers", + "Intended Audience :: System Administrators", + "Operating System :: MacOS :: MacOS X", + "Operating System :: POSIX :: Linux", + # include Windows once we're running tests there also + # "Operating System :: Microsoft :: Windows", + ], + python_requires='>=3.5', + install_requires=["PyYAML"], + ) + +finally: + version_backup.rename(version_path) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/test/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/test/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/test/bin/relation-ids b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/test/bin/relation-ids new file mode 100755 index 0000000000000000000000000000000000000000..a7e0ead2d3182713bd826696fc403b5a8c54faa6 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/test/bin/relation-ids @@ -0,0 +1,11 @@ +#!/bin/bash + +case $1 in + db) echo '["db:1"]' ;; + mon) echo '["mon:2"]' ;; + ha) echo '[]' ;; + db0) echo '[]' ;; + db1) echo '["db1:4"]' ;; + db2) echo '["db2:5", "db2:6"]' ;; + *) echo '[]' ;; +esac diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/test/bin/relation-list b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/test/bin/relation-list new file mode 100755 index 0000000000000000000000000000000000000000..88490159775624108766a17a35a77599ddea8f03 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/test/bin/relation-list @@ -0,0 +1,16 @@ +#!/bin/bash + +fail_not_found() { + 1>&2 echo "ERROR invalid value \"$1\" for option -r: relation not found" + exit 2 +} + +case $2 in + 1) echo '["remote/0"]' ;; + 2) echo '["remote/0"]' ;; + 3) fail_not_found $2 ;; + 4) echo '["remoteapp1/0"]' ;; + 5) echo '["remoteapp1/0"]' ;; + 6) echo '["remoteapp2/0"]' ;; + *) fail_not_found $2 ;; +esac diff --git 
a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/test/charms/test_main/actions.yaml b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/test/charms/test_main/actions.yaml new file mode 100644 index 0000000000000000000000000000000000000000..d16c94204f2cc0064965553e9b911d3d6075ba17 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/test/charms/test_main/actions.yaml @@ -0,0 +1,29 @@ +foo-bar: + description: Foos the bar. + title: foo-bar + params: + foo-name: + type: string + description: A foo name to bar. + silent: + type: boolean + description: "" + default: false + required: + - foo-name +start: + description: Start the unit. +get-model-name: + description: Return the name of the model +get-status: + description: Return the Status of the unit +log-critical: + description: log a message at Critical level +log-error: + description: log a message at Error level +log-warning: + description: log a message at Warning level +log-info: + description: log a message at Info level +log-debug: + description: log a message at Debug level diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/test/charms/test_main/config.yaml b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/test/charms/test_main/config.yaml new file mode 100644 index 0000000000000000000000000000000000000000..ffc0186002391ca52273d39bebcc9c4261c47535 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/test/charms/test_main/config.yaml @@ -0,0 +1 @@ +"options": {} diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/test/charms/test_main/lib/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/test/charms/test_main/lib/__init__.py new file mode 100755 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/test/charms/test_main/lib/ops/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/test/charms/test_main/lib/ops/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..f17b2969db298b21bc47bbe1d3614ccff93e9c6e --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/test/charms/test_main/lib/ops/__init__.py @@ -0,0 +1,20 @@ +# Copyright 2020 Canonical Ltd. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +"""The Operator Framework.""" + +from .version import version as __version__ # noqa: F401 (imported but unused) + +# Import here the bare minimum to break the circular import between modules +from . 
import charm  # noqa: F401 (imported but unused)
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/test/charms/test_main/lib/ops/charm.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/test/charms/test_main/lib/ops/charm.py
new file mode 100755
index 0000000000000000000000000000000000000000..d898de859fc444814bc19a7f8f0caaaec6f7e5f4
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/test/charms/test_main/lib/ops/charm.py
@@ -0,0 +1,575 @@
+# Copyright 2019-2020 Canonical Ltd.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import enum
+import os
+import pathlib
+import typing
+
+import yaml
+
+from ops.framework import Object, EventSource, EventBase, Framework, ObjectEvents
+from ops import model
+
+
+def _loadYaml(source):
+    if yaml.__with_libyaml__:
+        return yaml.load(source, Loader=yaml.CSafeLoader)
+    return yaml.load(source, Loader=yaml.SafeLoader)
+
+
+class HookEvent(EventBase):
+    """A base class for events that trigger because of a Juju hook firing."""
+
+
+class ActionEvent(EventBase):
+    """A base class for events that trigger when a user asks for an Action to be run.
+
+    To read the parameters for the action, see the instance variable `params`.
+    To respond with the result of the action, call `set_results`. To add progress
+    messages that are visible as the action is progressing, use `log`.
+
+    :ivar params: The parameters passed to the action (read by action-get)
+    """
+
+    def defer(self):
+        """Action events are not deferrable like other events.
+
+        This is because an action runs synchronously and the user is waiting for the result.
+        """
+        raise RuntimeError('cannot defer action events')
+
+    def restore(self, snapshot: dict) -> None:
+        """Used by the operator framework to record the action.
+
+        Not meant to be called directly by Charm code.
+        """
+        env_action_name = os.environ.get('JUJU_ACTION_NAME')
+        event_action_name = self.handle.kind[:-len('_action')].replace('_', '-')
+        if event_action_name != env_action_name:
+            # This could only happen if the dev manually emits the action, or from a bug.
+            raise RuntimeError('action event kind does not match current action')
+        # Params are loaded at restore rather than __init__ because
+        # the model is not available in __init__.
+        self.params = self.framework.model._backend.action_get()
+
+    def set_results(self, results: typing.Mapping) -> None:
+        """Report the result of the action.
+
+        Args:
+            results: The result of the action as a Dict
+        """
+        self.framework.model._backend.action_set(results)
+
+    def log(self, message: str) -> None:
+        """Send a message that a user will see while the action is running.
+
+        Args:
+            message: The message for the user.
+        """
+        self.framework.model._backend.action_log(message)
+
+    def fail(self, message: str = '') -> None:
+        """Report that this action has failed.
+
+        Args:
+            message: Optional message to record why it has failed.
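+
+        Example (a hypothetical action handler, for illustration only)::
+
+            def _on_backup_action(self, event):
+                try:
+                    self._do_backup(event.params['target'])
+                except IOError as e:
+                    event.fail('backup failed: {}'.format(e))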
+ """ + self.framework.model._backend.action_fail(message) + + +class InstallEvent(HookEvent): + """Represents the `install` hook from Juju.""" + + +class StartEvent(HookEvent): + """Represents the `start` hook from Juju.""" + + +class StopEvent(HookEvent): + """Represents the `stop` hook from Juju.""" + + +class RemoveEvent(HookEvent): + """Represents the `remove` hook from Juju. """ + + +class ConfigChangedEvent(HookEvent): + """Represents the `config-changed` hook from Juju.""" + + +class UpdateStatusEvent(HookEvent): + """Represents the `update-status` hook from Juju.""" + + +class UpgradeCharmEvent(HookEvent): + """Represents the `upgrade-charm` hook from Juju. + + This will be triggered when a user has run `juju upgrade-charm`. It is run after Juju + has unpacked the upgraded charm code, and so this event will be handled with new code. + """ + + +class PreSeriesUpgradeEvent(HookEvent): + """Represents the `pre-series-upgrade` hook from Juju. + + This happens when a user has run `juju upgrade-series MACHINE prepare` and + will fire for each unit that is running on the machine, telling them that + the user is preparing to upgrade the Machine's series (eg trusty->bionic). + The charm should take actions to prepare for the upgrade (a database charm + would want to write out a version-independent dump of the database, so that + when a new version of the database is available in a new series, it can be + used.) + Once all units on a machine have run `pre-series-upgrade`, the user will + initiate the steps to actually upgrade the machine (eg `do-release-upgrade`). + When the upgrade has been completed, the :class:`PostSeriesUpgradeEvent` will fire. + """ + + +class PostSeriesUpgradeEvent(HookEvent): + """Represents the `post-series-upgrade` hook from Juju. + + This is run after the user has done a distribution upgrade (or rolled back + and kept the same series). It is called in response to + `juju upgrade-series MACHINE complete`. Charms are expected to do whatever + steps are necessary to reconfigure their applications for the new series. + """ + + +class LeaderElectedEvent(HookEvent): + """Represents the `leader-elected` hook from Juju. + + Juju will trigger this when a new lead unit is chosen for a given application. + This represents the leader of the charm information (not necessarily the primary + of a running application). The main utility is that charm authors can know + that only one unit will be a leader at any given time, so they can do + configuration, etc, that would otherwise require coordination between units. + (eg, selecting a password for a new relation) + """ + + +class LeaderSettingsChangedEvent(HookEvent): + """Represents the `leader-settings-changed` hook from Juju. + + Deprecated. This represents when a lead unit would call `leader-set` to inform + the other units of an application that they have new information to handle. + This has been deprecated in favor of using a Peer relation, and having the + leader set a value in the Application data bag for that peer relation. + (see :class:`RelationChangedEvent`). + """ + + +class CollectMetricsEvent(HookEvent): + """Represents the `collect-metrics` hook from Juju. + + Note that events firing during a CollectMetricsEvent are currently + sandboxed in how they can interact with Juju. To report metrics + use :meth:`.add_metrics`. + """ + + def add_metrics(self, metrics: typing.Mapping, labels: typing.Mapping = None) -> None: + """Record metrics that have been gathered by the charm for this unit. 
+ + Args: + metrics: A collection of {key: float} pairs that contains the + metrics that have been gathered + labels: {key:value} strings that can be applied to the + metrics that are being gathered + """ + self.framework.model._backend.add_metrics(metrics, labels) + + +class RelationEvent(HookEvent): + """A base class representing the various relation lifecycle events. + + Charmers should not be creating RelationEvents directly. The events will be + generated by the framework from Juju related events. Users can observe them + from the various `CharmBase.on[relation_name].relation_*` events. + + Attributes: + relation: The Relation involved in this event + app: The remote application that has triggered this event + unit: The remote unit that has triggered this event. This may be None + if the relation event was triggered as an Application level event + """ + + def __init__(self, handle, relation, app=None, unit=None): + super().__init__(handle) + + if unit is not None and unit.app != app: + raise RuntimeError( + 'cannot create RelationEvent with application {} and unit {}'.format(app, unit)) + + self.relation = relation + self.app = app + self.unit = unit + + def snapshot(self) -> dict: + """Used by the framework to serialize the event to disk. + + Not meant to be called by Charm code. + """ + snapshot = { + 'relation_name': self.relation.name, + 'relation_id': self.relation.id, + } + if self.app: + snapshot['app_name'] = self.app.name + if self.unit: + snapshot['unit_name'] = self.unit.name + return snapshot + + def restore(self, snapshot: dict) -> None: + """Used by the framework to deserialize the event from disk. + + Not meant to be called by Charm code. + """ + self.relation = self.framework.model.get_relation( + snapshot['relation_name'], snapshot['relation_id']) + + app_name = snapshot.get('app_name') + if app_name: + self.app = self.framework.model.get_app(app_name) + else: + self.app = None + + unit_name = snapshot.get('unit_name') + if unit_name: + self.unit = self.framework.model.get_unit(unit_name) + else: + self.unit = None + + +class RelationCreatedEvent(RelationEvent): + """Represents the `relation-created` hook from Juju. + + This is triggered when a new relation to another app is added in Juju. This + can occur before units for those applications have started. All existing + relations should be established before start. + """ + + +class RelationJoinedEvent(RelationEvent): + """Represents the `relation-joined` hook from Juju. + + This is triggered whenever a new unit of a related application joins the relation. + (eg, a unit was added to an existing related app, or a new relation was established + with an application that already had units.) + """ + + +class RelationChangedEvent(RelationEvent): + """Represents the `relation-changed` hook from Juju. + + This is triggered whenever there is a change to the data bucket for a related + application or unit. Look at `event.relation.data[event.unit/app]` to see the + new information. + """ + + +class RelationDepartedEvent(RelationEvent): + """Represents the `relation-departed` hook from Juju. + + This is the inverse of the RelationJoinedEvent, representing when a unit + is leaving the relation (the unit is being removed, the app is being removed, + the relation is being removed). It is fired once for each unit that is + going away. + """ + + +class RelationBrokenEvent(RelationEvent): + """Represents the `relation-broken` hook from Juju. 
+ + If a relation is being removed (`juju remove-relation` or `juju remove-application`), + once all the units have been removed, RelationBrokenEvent will fire to signal + that the relationship has been fully terminated. + """ + + +class StorageEvent(HookEvent): + """Base class representing Storage related events.""" + + +class StorageAttachedEvent(StorageEvent): + """Represents the `storage-attached` hook from Juju. + + Called when new storage is available for the charm to use. + """ + + +class StorageDetachingEvent(StorageEvent): + """Represents the `storage-detaching` hook from Juju. + + Called when storage a charm has been using is going away. + """ + + +class CharmEvents(ObjectEvents): + """The events that are generated by Juju in response to the lifecycle of an application.""" + + install = EventSource(InstallEvent) + start = EventSource(StartEvent) + stop = EventSource(StopEvent) + remove = EventSource(RemoveEvent) + update_status = EventSource(UpdateStatusEvent) + config_changed = EventSource(ConfigChangedEvent) + upgrade_charm = EventSource(UpgradeCharmEvent) + pre_series_upgrade = EventSource(PreSeriesUpgradeEvent) + post_series_upgrade = EventSource(PostSeriesUpgradeEvent) + leader_elected = EventSource(LeaderElectedEvent) + leader_settings_changed = EventSource(LeaderSettingsChangedEvent) + collect_metrics = EventSource(CollectMetricsEvent) + + +class CharmBase(Object): + """Base class that represents the Charm overall. + + Usually this initialization is done by ops.main.main() rather than Charm authors + directly instantiating a Charm. + + Args: + framework: The framework responsible for managing the Model and events for this + Charm. + key: Ignored; will remove after deprecation period of the signature change. + """ + + on = CharmEvents() + + def __init__(self, framework: Framework, key: typing.Optional = None): + super().__init__(framework, None) + + for relation_name in self.framework.meta.relations: + relation_name = relation_name.replace('-', '_') + self.on.define_event(relation_name + '_relation_created', RelationCreatedEvent) + self.on.define_event(relation_name + '_relation_joined', RelationJoinedEvent) + self.on.define_event(relation_name + '_relation_changed', RelationChangedEvent) + self.on.define_event(relation_name + '_relation_departed', RelationDepartedEvent) + self.on.define_event(relation_name + '_relation_broken', RelationBrokenEvent) + + for storage_name in self.framework.meta.storages: + storage_name = storage_name.replace('-', '_') + self.on.define_event(storage_name + '_storage_attached', StorageAttachedEvent) + self.on.define_event(storage_name + '_storage_detaching', StorageDetachingEvent) + + for action_name in self.framework.meta.actions: + action_name = action_name.replace('-', '_') + self.on.define_event(action_name + '_action', ActionEvent) + + @property + def app(self) -> model.Application: + """Application that this unit is part of.""" + return self.framework.model.app + + @property + def unit(self) -> model.Unit: + """Unit that this execution is responsible for.""" + return self.framework.model.unit + + @property + def meta(self) -> 'CharmMeta': + """CharmMeta of this charm. + """ + return self.framework.meta + + @property + def charm_dir(self) -> pathlib.Path: + """Root directory of the Charm as it is running. + """ + return self.framework.charm_dir + + +class CharmMeta: + """Object containing the metadata for the charm. + + This is read from metadata.yaml and/or actions.yaml. 
Generally charms will
+    define this information, rather than reading it at runtime. This class is
+    mostly for the framework to understand what the charm has defined.
+
+    The maintainers, tags, terms, and series attributes are all lists of
+    strings, and extra_bindings is a dict of named bindings. The requires,
+    provides, peers, relations, storages, resources, and payloads attributes
+    are all mappings of names to instances of the respective RelationMeta,
+    StorageMeta, ResourceMeta, or PayloadMeta.
+
+    The relations attribute is a convenience accessor which includes all of the
+    requires, provides, and peers RelationMeta items. If needed, the role of
+    the relation definition can be obtained from its role attribute.
+
+    Attributes:
+        name: The name of this charm
+        summary: Short description of what this charm does
+        description: Long description for this charm
+        maintainers: A list of strings of the email addresses of the maintainers
+            of this charm.
+        tags: Charm store tag metadata for categories associated with this charm.
+        terms: Charm store terms that should be agreed to before this charm can
+            be deployed. (Used for things like licensing issues.)
+        series: The list of supported OS series that this charm can support.
+            The first entry in the list is the default series that will be
+            used by deploy if no other series is requested by the user.
+        subordinate: True/False whether this charm is intended to be used as a
+            subordinate charm.
+        min_juju_version: If supplied, indicates this charm needs features that
+            are not available in older versions of Juju.
+        requires: A dict of {name: :class:`RelationMeta` } for each 'requires' relation.
+        provides: A dict of {name: :class:`RelationMeta` } for each 'provides' relation.
+        peers: A dict of {name: :class:`RelationMeta` } for each 'peer' relation.
+        relations: A dict containing all :class:`RelationMeta` attributes (merged from other
+            sections)
+        storages: A dict of {name: :class:`StorageMeta`} for each defined storage.
+        resources: A dict of {name: :class:`ResourceMeta`} for each defined resource.
+        payloads: A dict of {name: :class:`PayloadMeta`} for each defined payload.
+        extra_bindings: A dict of additional named bindings that a charm can use
+            for network configuration.
+        actions: A dict of {name: :class:`ActionMeta`} for actions that the charm has defined.
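+
+    A minimal sketch of reading metadata (the YAML here is hypothetical, not
+    taken from any real charm)::
+
+        meta = CharmMeta.from_yaml('''
+            name: demo
+            requires:
+                db:
+                    interface: pgsql
+        ''')
+        assert meta.relations['db'].role is RelationRole.requires
+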
+    Args:
+        raw: a mapping containing the contents of metadata.yaml
+        actions_raw: a mapping containing the contents of actions.yaml
+    """
+
+    def __init__(self, raw: dict = {}, actions_raw: dict = {}):
+        self.name = raw.get('name', '')
+        self.summary = raw.get('summary', '')
+        self.description = raw.get('description', '')
+        self.maintainers = []
+        if 'maintainer' in raw:
+            self.maintainers.append(raw['maintainer'])
+        if 'maintainers' in raw:
+            self.maintainers.extend(raw['maintainers'])
+        self.tags = raw.get('tags', [])
+        self.terms = raw.get('terms', [])
+        self.series = raw.get('series', [])
+        self.subordinate = raw.get('subordinate', False)
+        self.min_juju_version = raw.get('min-juju-version')
+        self.requires = {name: RelationMeta(RelationRole.requires, name, rel)
+                         for name, rel in raw.get('requires', {}).items()}
+        self.provides = {name: RelationMeta(RelationRole.provides, name, rel)
+                         for name, rel in raw.get('provides', {}).items()}
+        self.peers = {name: RelationMeta(RelationRole.peer, name, rel)
+                      for name, rel in raw.get('peers', {}).items()}
+        self.relations = {}
+        self.relations.update(self.requires)
+        self.relations.update(self.provides)
+        self.relations.update(self.peers)
+        self.storages = {name: StorageMeta(name, storage)
+                         for name, storage in raw.get('storage', {}).items()}
+        self.resources = {name: ResourceMeta(name, res)
+                          for name, res in raw.get('resources', {}).items()}
+        self.payloads = {name: PayloadMeta(name, payload)
+                         for name, payload in raw.get('payloads', {}).items()}
+        self.extra_bindings = raw.get('extra-bindings', {})
+        self.actions = {name: ActionMeta(name, action) for name, action in actions_raw.items()}
+
+    @classmethod
+    def from_yaml(
+            cls, metadata: typing.Union[str, typing.TextIO],
+            actions: typing.Optional[typing.Union[str, typing.TextIO]] = None):
+        """Instantiate a CharmMeta from a YAML description of metadata.yaml.
+
+        Args:
+            metadata: A YAML description of charm metadata (name, relations, etc.)
+                This can be a simple string, or a file-like object. (passed to `yaml.safe_load`).
+            actions: YAML description of Actions for this charm (eg actions.yaml)
+        """
+        meta = _loadYaml(metadata)
+        raw_actions = {}
+        if actions is not None:
+            raw_actions = _loadYaml(actions)
+        return cls(meta, raw_actions)
+
+
+class RelationRole(enum.Enum):
+    peer = 'peer'
+    requires = 'requires'
+    provides = 'provides'
+
+    def is_peer(self) -> bool:
+        """Return whether the current role is peer.
+
+        A convenience to avoid having to import charm.
+        """
+        return self is RelationRole.peer
+
+
+class RelationMeta:
+    """Object containing metadata about a relation definition.
+
+    Should not be constructed directly by Charm code. It is obtained from one of
+    :attr:`CharmMeta.peers`, :attr:`CharmMeta.requires`, :attr:`CharmMeta.provides`,
+    or :attr:`CharmMeta.relations`.
+
+    Attributes:
+        role: This is one of peer/requires/provides
+        relation_name: Name of this relation from metadata.yaml
+        interface_name: Optional definition of the interface protocol.
+        scope: "global" or "container" scope based on how the relation should be used.
+ """ + + def __init__(self, role: RelationRole, relation_name: str, raw: dict): + if not isinstance(role, RelationRole): + raise TypeError("role should be a Role, not {!r}".format(role)) + self.role = role + self.relation_name = relation_name + self.interface_name = raw['interface'] + self.scope = raw.get('scope') + + +class StorageMeta: + """Object containing metadata about a storage definition.""" + + def __init__(self, name, raw): + self.storage_name = name + self.type = raw['type'] + self.description = raw.get('description', '') + self.shared = raw.get('shared', False) + self.read_only = raw.get('read-only', False) + self.minimum_size = raw.get('minimum-size') + self.location = raw.get('location') + self.multiple_range = None + if 'multiple' in raw: + range = raw['multiple']['range'] + if '-' not in range: + self.multiple_range = (int(range), int(range)) + else: + range = range.split('-') + self.multiple_range = (int(range[0]), int(range[1]) if range[1] else None) + + +class ResourceMeta: + """Object containing metadata about a resource definition.""" + + def __init__(self, name, raw): + self.resource_name = name + self.type = raw['type'] + self.filename = raw.get('filename', None) + self.description = raw.get('description', '') + + +class PayloadMeta: + """Object containing metadata about a payload definition.""" + + def __init__(self, name, raw): + self.payload_name = name + self.type = raw['type'] + + +class ActionMeta: + """Object containing metadata about an action's definition.""" + + def __init__(self, name, raw=None): + raw = raw or {} + self.name = name + self.title = raw.get('title', '') + self.description = raw.get('description', '') + self.parameters = raw.get('params', {}) # {: } + self.required = raw.get('required', []) # [, ...] diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/test/charms/test_main/lib/ops/framework.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/test/charms/test_main/lib/ops/framework.py new file mode 100755 index 0000000000000000000000000000000000000000..b7c4749ff2b5bfb4f354bf1a8d4cd6ed64cf0da5 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/test/charms/test_main/lib/ops/framework.py @@ -0,0 +1,1067 @@ +# Copyright 2020 Canonical Ltd. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import collections +import collections.abc +import inspect +import keyword +import logging +import marshal +import os +import pathlib +import pdb +import re +import sys +import types +import weakref + +from ops import charm +from ops.storage import ( + NoSnapshotError, + SQLiteStorage, +) + +logger = logging.getLogger(__name__) + + +class Handle: + """Handle defines a name for an object in the form of a hierarchical path. + + The provided parent is the object (or that object's handle) that this handle + sits under, or None if the object identified by this handle stands by itself + as the root of its own hierarchy. 
+ + The handle kind is a string that defines a namespace so objects with the + same parent and kind will have unique keys. + + The handle key is a string uniquely identifying the object. No other objects + under the same parent and kind may have the same key. + """ + + def __init__(self, parent, kind, key): + if parent and not isinstance(parent, Handle): + parent = parent.handle + self._parent = parent + self._kind = kind + self._key = key + if parent: + if key: + self._path = "{}/{}[{}]".format(parent, kind, key) + else: + self._path = "{}/{}".format(parent, kind) + else: + if key: + self._path = "{}[{}]".format(kind, key) + else: + self._path = "{}".format(kind) + + def nest(self, kind, key): + return Handle(self, kind, key) + + def __hash__(self): + return hash((self.parent, self.kind, self.key)) + + def __eq__(self, other): + return (self.parent, self.kind, self.key) == (other.parent, other.kind, other.key) + + def __str__(self): + return self.path + + @property + def parent(self): + return self._parent + + @property + def kind(self): + return self._kind + + @property + def key(self): + return self._key + + @property + def path(self): + return self._path + + @classmethod + def from_path(cls, path): + handle = None + for pair in path.split("/"): + pair = pair.split("[") + good = False + if len(pair) == 1: + kind, key = pair[0], None + good = True + elif len(pair) == 2: + kind, key = pair + if key and key[-1] == ']': + key = key[:-1] + good = True + if not good: + raise RuntimeError("attempted to restore invalid handle path {}".format(path)) + handle = Handle(handle, kind, key) + return handle + + +class EventBase: + + def __init__(self, handle): + self.handle = handle + self.deferred = False + + def defer(self): + self.deferred = True + + def snapshot(self): + """Return the snapshot data that should be persisted. + + Subclasses must override to save any custom state. + """ + return None + + def restore(self, snapshot): + """Restore the value state from the given snapshot. + + Subclasses must override to restore their custom state. + """ + self.deferred = False + + +class EventSource: + """EventSource wraps an event type with a descriptor to facilitate observing and emitting. + + It is generally used as: + + class SomethingHappened(EventBase): + pass + + class SomeObject(Object): + something_happened = EventSource(SomethingHappened) + + With that, instances of that type will offer the someobj.something_happened + attribute which is a BoundEvent and may be used to emit and observe the event. + """ + + def __init__(self, event_type): + if not isinstance(event_type, type) or not issubclass(event_type, EventBase): + raise RuntimeError( + 'Event requires a subclass of EventBase as an argument, got {}'.format(event_type)) + self.event_type = event_type + self.event_kind = None + self.emitter_type = None + + def _set_name(self, emitter_type, event_kind): + if self.event_kind is not None: + raise RuntimeError( + 'EventSource({}) reused as {}.{} and {}.{}'.format( + self.event_type.__name__, + self.emitter_type.__name__, + self.event_kind, + emitter_type.__name__, + event_kind, + )) + self.event_kind = event_kind + self.emitter_type = emitter_type + + def __get__(self, emitter, emitter_type=None): + if emitter is None: + return self + # Framework might not be available if accessed as CharmClass.on.event + # rather than charm_instance.on.event, but in that case it couldn't be + # emitted anyway, so there's no point to registering it. 
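+        # (So, for illustration: `SomeObject.something_happened` returns this
+        # EventSource descriptor itself, while `instance.something_happened`
+        # falls through to the binding below.)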
+        framework = getattr(emitter, 'framework', None)
+        if framework is not None:
+            framework.register_type(self.event_type, emitter, self.event_kind)
+        return BoundEvent(emitter, self.event_type, self.event_kind)
+
+
+class BoundEvent:
+
+    def __repr__(self):
+        return '<BoundEvent {} bound to {}.{} at {}>'.format(
+            self.event_type.__name__,
+            type(self.emitter).__name__,
+            self.event_kind,
+            hex(id(self)),
+        )
+
+    def __init__(self, emitter, event_type, event_kind):
+        self.emitter = emitter
+        self.event_type = event_type
+        self.event_kind = event_kind
+
+    def emit(self, *args, **kwargs):
+        """Emit event to all registered observers.
+
+        The current storage state is committed before and after each observer is notified.
+        """
+        framework = self.emitter.framework
+        key = framework._next_event_key()
+        event = self.event_type(Handle(self.emitter, self.event_kind, key), *args, **kwargs)
+        framework._emit(event)
+
+
+class HandleKind:
+    """Helper descriptor to define the Object.handle_kind field.
+
+    The handle_kind for an object defaults to its type name, but it may
+    be explicitly overridden if desired.
+    """
+
+    def __get__(self, obj, obj_type):
+        kind = obj_type.__dict__.get("handle_kind")
+        if kind:
+            return kind
+        return obj_type.__name__
+
+
+class _Metaclass(type):
+    """Helper class to ensure proper instantiation of Object-derived classes.
+
+    This class currently has a single purpose: events derived from EventSource
+    that are class attributes of Object-derived classes need to be told what
+    their name is in that class. For example, in
+
+        class SomeObject(Object):
+            something_happened = EventSource(SomethingHappened)
+
+    the instance of EventSource needs to know it's called 'something_happened'.
+
+    Starting from python 3.6 we could use __set_name__ on EventSource for this,
+    but until then this (meta)class does the equivalent work.
+
+    TODO: when we drop support for 3.5 drop this class, and rename _set_name in
+    EventSource to __set_name__; everything should continue to work.
+
+    """
+
+    def __new__(typ, *a, **kw):
+        k = super().__new__(typ, *a, **kw)
+        # k is now the Object-derived class; loop over its class attributes
+        for n, v in vars(k).items():
+            # we could do duck typing here if we want to support
+            # non-EventSource-derived shenanigans. We don't.
+            if isinstance(v, EventSource):
+                # this is what 3.6+ does automatically for us:
+                v._set_name(k, n)
+        return k
+
+
+class Object(metaclass=_Metaclass):
+
+    handle_kind = HandleKind()
+
+    def __init__(self, parent, key):
+        kind = self.handle_kind
+        if isinstance(parent, Framework):
+            self.framework = parent
+            # Avoid Framework instances having a circular reference to themselves.
+            if self.framework is self:
+                self.framework = weakref.proxy(self.framework)
+            self.handle = Handle(None, kind, key)
+        else:
+            self.framework = parent.framework
+            self.handle = Handle(parent, kind, key)
+        self.framework._track(self)
+
+        # TODO Detect conflicting handles here.
+
+    @property
+    def model(self):
+        return self.framework.model
+
+
+class ObjectEvents(Object):
+    """Convenience type to allow defining .on attributes at class level."""
+
+    handle_kind = "on"
+
+    def __init__(self, parent=None, key=None):
+        if parent is not None:
+            super().__init__(parent, key)
+        else:
+            self._cache = weakref.WeakKeyDictionary()
+
+    def __get__(self, emitter, emitter_type):
+        if emitter is None:
+            return self
+        instance = self._cache.get(emitter)
+        if instance is None:
+            # Same type, different instance, more data.
Doing this unusual construct + # means people can subclass just this one class to have their own 'on'. + instance = self._cache[emitter] = type(self)(emitter) + return instance + + @classmethod + def define_event(cls, event_kind, event_type): + """Define an event on this type at runtime. + + cls: a type to define an event on. + + event_kind: an attribute name that will be used to access the + event. Must be a valid python identifier, not be a keyword + or an existing attribute. + + event_type: a type of the event to define. + + """ + prefix = 'unable to define an event with event_kind that ' + if not event_kind.isidentifier(): + raise RuntimeError(prefix + 'is not a valid python identifier: ' + event_kind) + elif keyword.iskeyword(event_kind): + raise RuntimeError(prefix + 'is a python keyword: ' + event_kind) + try: + getattr(cls, event_kind) + raise RuntimeError( + prefix + 'overlaps with an existing type {} attribute: {}'.format(cls, event_kind)) + except AttributeError: + pass + + event_descriptor = EventSource(event_type) + event_descriptor._set_name(cls, event_kind) + setattr(cls, event_kind, event_descriptor) + + def events(self): + """Return a mapping of event_kinds to bound_events for all available events. + """ + events_map = {} + # We have to iterate over the class rather than instance to allow for properties which + # might call this method (e.g., event views), leading to infinite recursion. + for attr_name, attr_value in inspect.getmembers(type(self)): + if isinstance(attr_value, EventSource): + # We actually care about the bound_event, however, since it + # provides the most info for users of this method. + event_kind = attr_name + bound_event = getattr(self, event_kind) + events_map[event_kind] = bound_event + return events_map + + def __getitem__(self, key): + return PrefixedEvents(self, key) + + +class PrefixedEvents: + + def __init__(self, emitter, key): + self._emitter = emitter + self._prefix = key.replace("-", "_") + '_' + + def __getattr__(self, name): + return getattr(self._emitter, self._prefix + name) + + +class PreCommitEvent(EventBase): + pass + + +class CommitEvent(EventBase): + pass + + +class FrameworkEvents(ObjectEvents): + pre_commit = EventSource(PreCommitEvent) + commit = EventSource(CommitEvent) + + +class NoTypeError(Exception): + + def __init__(self, handle_path): + self.handle_path = handle_path + + def __str__(self): + return "cannot restore {} since no class was registered for it".format(self.handle_path) + + +# the message to show to the user when a pdb breakpoint goes active +_BREAKPOINT_WELCOME_MESSAGE = """ +Starting pdb to debug charm operator. +Run `h` for help, `c` to continue, or `exit`/CTRL-d to abort. +Future breakpoints may interrupt execution again. +More details at https://discourse.jujucharms.com/t/debugging-charm-hooks + +""" + + +_event_regex = r'^(|.*/)on/[a-zA-Z_]+\[\d+\]$' + + +class Framework(Object): + + on = FrameworkEvents() + + # Override properties from Object so that we can set them in __init__. 
+ model = None + meta = None + charm_dir = None + + def __init__(self, storage, charm_dir, meta, model): + + super().__init__(self, None) + + self.charm_dir = charm_dir + self.meta = meta + self.model = model + self._observers = [] # [(observer_path, method_name, parent_path, event_key)] + self._observer = weakref.WeakValueDictionary() # {observer_path: observer} + self._objects = weakref.WeakValueDictionary() + self._type_registry = {} # {(parent_path, kind): cls} + self._type_known = set() # {cls} + + if isinstance(storage, (str, pathlib.Path)): + logger.warning( + "deprecated: Framework now takes a Storage not a path") + storage = SQLiteStorage(storage) + self._storage = storage + + # We can't use the higher-level StoredState because it relies on events. + self.register_type(StoredStateData, None, StoredStateData.handle_kind) + stored_handle = Handle(None, StoredStateData.handle_kind, '_stored') + try: + self._stored = self.load_snapshot(stored_handle) + except NoSnapshotError: + self._stored = StoredStateData(self, '_stored') + self._stored['event_count'] = 0 + + # Hook into builtin breakpoint, so if Python >= 3.7, devs will be able to just do + # breakpoint(); if Python < 3.7, this doesn't affect anything + sys.breakpointhook = self.breakpoint + + # Flag to indicate that we already presented the welcome message in a debugger breakpoint + self._breakpoint_welcomed = False + + # Parse once the env var, which may be used multiple times later + debug_at = os.environ.get('JUJU_DEBUG_AT') + self._juju_debug_at = debug_at.split(',') if debug_at else () + + def close(self): + self._storage.close() + + def _track(self, obj): + """Track object and ensure it is the only object created using its handle path.""" + if obj is self: + # Framework objects don't track themselves + return + if obj.handle.path in self.framework._objects: + raise RuntimeError( + 'two objects claiming to be {} have been created'.format(obj.handle.path)) + self._objects[obj.handle.path] = obj + + def _forget(self, obj): + """Stop tracking the given object. See also _track.""" + self._objects.pop(obj.handle.path, None) + + def commit(self): + # Give a chance for objects to persist data they want to before a commit is made. + self.on.pre_commit.emit() + # Make sure snapshots are saved by instances of StoredStateData. Any possible state + # modifications in on_commit handlers of instances of other classes will not be persisted. + self.on.commit.emit() + # Save our event count after all events have been emitted. + self.save_snapshot(self._stored) + self._storage.commit() + + def register_type(self, cls, parent, kind=None): + if parent and not isinstance(parent, Handle): + parent = parent.handle + if parent: + parent_path = parent.path + else: + parent_path = None + if not kind: + kind = cls.handle_kind + self._type_registry[(parent_path, kind)] = cls + self._type_known.add(cls) + + def save_snapshot(self, value): + """Save a persistent snapshot of the provided value. + + The provided value must implement the following interface: + + value.handle = Handle(...) + value.snapshot() => {...} # Simple builtin types only. + value.restore(snapshot) # Restore custom state from prior snapshot. 
+ """ + if type(value) not in self._type_known: + raise RuntimeError( + 'cannot save {} values before registering that type'.format(type(value).__name__)) + data = value.snapshot() + + # Use marshal as a validator, enforcing the use of simple types, as we later the + # information is really pickled, which is too error prone for future evolution of the + # stored data (e.g. if the developer stores a custom object and later changes its + # class name; when unpickling the original class will not be there and event + # data loading will fail). + try: + marshal.dumps(data) + except ValueError: + msg = "unable to save the data for {}, it must contain only simple types: {!r}" + raise ValueError(msg.format(value.__class__.__name__, data)) + + self._storage.save_snapshot(value.handle.path, data) + + def load_snapshot(self, handle): + parent_path = None + if handle.parent: + parent_path = handle.parent.path + cls = self._type_registry.get((parent_path, handle.kind)) + if not cls: + raise NoTypeError(handle.path) + data = self._storage.load_snapshot(handle.path) + obj = cls.__new__(cls) + obj.framework = self + obj.handle = handle + obj.restore(data) + self._track(obj) + return obj + + def drop_snapshot(self, handle): + self._storage.drop_snapshot(handle.path) + + def observe(self, bound_event: BoundEvent, observer: types.MethodType): + """Register observer to be called when bound_event is emitted. + + The bound_event is generally provided as an attribute of the object that emits + the event, and is created in this style: + + class SomeObject: + something_happened = Event(SomethingHappened) + + That event may be observed as: + + framework.observe(someobj.something_happened, self._on_something_happened) + + Raises: + RuntimeError: if bound_event or observer are the wrong type. + """ + if not isinstance(bound_event, BoundEvent): + raise RuntimeError( + 'Framework.observe requires a BoundEvent as second parameter, got {}'.format( + bound_event)) + if not isinstance(observer, types.MethodType): + # help users of older versions of the framework + if isinstance(observer, charm.CharmBase): + raise TypeError( + 'observer methods must now be explicitly provided;' + ' please replace observe(self.on.{0}, self)' + ' with e.g. observe(self.on.{0}, self._on_{0})'.format( + bound_event.event_kind)) + raise RuntimeError( + 'Framework.observe requires a method as third parameter, got {}'.format(observer)) + + event_type = bound_event.event_type + event_kind = bound_event.event_kind + emitter = bound_event.emitter + + self.register_type(event_type, emitter, event_kind) + + if hasattr(emitter, "handle"): + emitter_path = emitter.handle.path + else: + raise RuntimeError( + 'event emitter {} must have a "handle" attribute'.format(type(emitter).__name__)) + + # Validate that the method has an acceptable call signature. + sig = inspect.signature(observer) + # Self isn't included in the params list, so the first arg will be the event. + extra_params = list(sig.parameters.values())[1:] + + method_name = observer.__name__ + observer = observer.__self__ + if not sig.parameters: + raise TypeError( + '{}.{} must accept event parameter'.format(type(observer).__name__, method_name)) + elif any(param.default is inspect.Parameter.empty for param in extra_params): + # Allow for additional optional params, since there's no reason to exclude them, but + # required params will break. 
+ raise TypeError( + '{}.{} has extra required parameter'.format(type(observer).__name__, method_name)) + + # TODO Prevent the exact same parameters from being registered more than once. + + self._observer[observer.handle.path] = observer + self._observers.append((observer.handle.path, method_name, emitter_path, event_kind)) + + def _next_event_key(self): + """Return the next event key that should be used, incrementing the internal counter.""" + # Increment the count first; this means the keys will start at 1, and 0 + # means no events have been emitted. + self._stored['event_count'] += 1 + return str(self._stored['event_count']) + + def _emit(self, event): + """See BoundEvent.emit for the public way to call this.""" + + saved = False + event_path = event.handle.path + event_kind = event.handle.kind + parent_path = event.handle.parent.path + # TODO Track observers by (parent_path, event_kind) rather than as a list of + # all observers. Avoiding linear search through all observers for every event + for observer_path, method_name, _parent_path, _event_kind in self._observers: + if _parent_path != parent_path: + continue + if _event_kind and _event_kind != event_kind: + continue + if not saved: + # Save the event for all known observers before the first notification + # takes place, so that either everyone interested sees it, or nobody does. + self.save_snapshot(event) + saved = True + # Again, only commit this after all notices are saved. + self._storage.save_notice(event_path, observer_path, method_name) + if saved: + self._reemit(event_path) + + def reemit(self): + """Reemit previously deferred events to the observers that deferred them. + + Only the specific observers that have previously deferred the event will be + notified again. Observers that asked to be notified about events after it's + been first emitted won't be notified, as that would mean potentially observing + events out of order. + """ + self._reemit() + + def _reemit(self, single_event_path=None): + last_event_path = None + deferred = True + for event_path, observer_path, method_name in self._storage.notices(single_event_path): + event_handle = Handle.from_path(event_path) + + if last_event_path != event_path: + if not deferred and last_event_path is not None: + self._storage.drop_snapshot(last_event_path) + last_event_path = event_path + deferred = False + + try: + event = self.load_snapshot(event_handle) + except NoTypeError: + self._storage.drop_notice(event_path, observer_path, method_name) + continue + + event.deferred = False + observer = self._observer.get(observer_path) + if observer: + custom_handler = getattr(observer, method_name, None) + if custom_handler: + event_is_from_juju = isinstance(event, charm.HookEvent) + event_is_action = isinstance(event, charm.ActionEvent) + if (event_is_from_juju or event_is_action) and 'hook' in self._juju_debug_at: + # Present the welcome message and run under PDB. + self._show_debug_code_message() + pdb.runcall(custom_handler, event) + else: + # Regular call to the registered method. + custom_handler(event) + + if event.deferred: + deferred = True + else: + self._storage.drop_notice(event_path, observer_path, method_name) + # We intentionally consider this event to be dead and reload it from + # scratch in the next path. + self.framework._forget(event) + + if not deferred and last_event_path is not None: + self._storage.drop_snapshot(last_event_path) + + def _show_debug_code_message(self): + """Present the welcome message (only once!) 
when using debugger functionality.""" + if not self._breakpoint_welcomed: + self._breakpoint_welcomed = True + print(_BREAKPOINT_WELCOME_MESSAGE, file=sys.stderr, end='') + + def breakpoint(self, name=None): + """Add breakpoint, optionally named, at the place where this method is called. + + For the breakpoint to be activated the JUJU_DEBUG_AT environment variable + must be set to "all" or to the specific name parameter provided, if any. In every + other situation calling this method does nothing. + + The framework also provides a standard breakpoint named "hook", that will + stop execution when a hook event is about to be handled. + + For those reasons, the "all" and "hook" breakpoint names are reserved. + """ + # If given, validate the name comply with all the rules + if name is not None: + if not isinstance(name, str): + raise TypeError('breakpoint names must be strings') + if name in ('hook', 'all'): + raise ValueError('breakpoint names "all" and "hook" are reserved') + if not re.match(r'^[a-z0-9]([a-z0-9\-]*[a-z0-9])?$', name): + raise ValueError('breakpoint names must look like "foo" or "foo-bar"') + + indicated_breakpoints = self._juju_debug_at + if not indicated_breakpoints: + return + + if 'all' in indicated_breakpoints or name in indicated_breakpoints: + self._show_debug_code_message() + + # If we call set_trace() directly it will open the debugger *here*, so indicating + # it to use our caller's frame + code_frame = inspect.currentframe().f_back + pdb.Pdb().set_trace(code_frame) + else: + logger.warning( + "Breakpoint %r skipped (not found in the requested breakpoints: %s)", + name, indicated_breakpoints) + + def remove_unreferenced_events(self): + """Remove events from storage that are not referenced. + + In older versions of the framework, events that had no observers would get recorded but + never deleted. This makes a best effort to find these events and remove them from the + database. + """ + event_regex = re.compile(_event_regex) + to_remove = [] + for handle_path in self._storage.list_snapshots(): + if event_regex.match(handle_path): + notices = self._storage.notices(handle_path) + if next(notices, None) is None: + # There are no notices for this handle_path, it is valid to remove it + to_remove.append(handle_path) + for handle_path in to_remove: + self._storage.drop_snapshot(handle_path) + + +class StoredStateData(Object): + + def __init__(self, parent, attr_name): + super().__init__(parent, attr_name) + self._cache = {} + self.dirty = False + + def __getitem__(self, key): + return self._cache.get(key) + + def __setitem__(self, key, value): + self._cache[key] = value + self.dirty = True + + def __contains__(self, key): + return key in self._cache + + def snapshot(self): + return self._cache + + def restore(self, snapshot): + self._cache = snapshot + self.dirty = False + + def on_commit(self, event): + if self.dirty: + self.framework.save_snapshot(self) + self.dirty = False + + +class BoundStoredState: + + def __init__(self, parent, attr_name): + parent.framework.register_type(StoredStateData, parent) + + handle = Handle(parent, StoredStateData.handle_kind, attr_name) + try: + data = parent.framework.load_snapshot(handle) + except NoSnapshotError: + data = StoredStateData(parent, attr_name) + + # __dict__ is used to avoid infinite recursion. 
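+        # (A plain `self._data = data` would run __setattr__ below, which reads
+        # self._data; before that attribute exists the lookup falls into
+        # __getattr__, which reads self._data again, recursing forever.)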
+ self.__dict__["_data"] = data + self.__dict__["_attr_name"] = attr_name + + parent.framework.observe(parent.framework.on.commit, self._data.on_commit) + + def __getattr__(self, key): + # "on" is the only reserved key that can't be used in the data map. + if key == "on": + return self._data.on + if key not in self._data: + raise AttributeError("attribute '{}' is not stored".format(key)) + return _wrap_stored(self._data, self._data[key]) + + def __setattr__(self, key, value): + if key == "on": + raise AttributeError("attribute 'on' is reserved and cannot be set") + + value = _unwrap_stored(self._data, value) + + if not isinstance(value, (type(None), int, float, str, bytes, list, dict, set)): + raise AttributeError( + 'attribute {!r} cannot be a {}: must be int/float/dict/list/etc'.format( + key, type(value).__name__)) + + self._data[key] = _unwrap_stored(self._data, value) + + def set_default(self, **kwargs): + """"Set the value of any given key if it has not already been set""" + for k, v in kwargs.items(): + if k not in self._data: + self._data[k] = v + + +class StoredState: + """A class used to store data the charm needs persisted across invocations. + + Example:: + + class MyClass(Object): + _stored = StoredState() + + Instances of `MyClass` can transparently save state between invocations by + setting attributes on `_stored`. Initial state should be set with + `set_default` on the bound object, that is:: + + class MyClass(Object): + _stored = StoredState() + + def __init__(self, parent, key): + super().__init__(parent, key) + self._stored.set_default(seen=set()) + self.framework.observe(self.on.seen, self._on_seen) + + def _on_seen(self, event): + self._stored.seen.add(event.uuid) + + """ + + def __init__(self): + self.parent_type = None + self.attr_name = None + + def __get__(self, parent, parent_type=None): + if self.parent_type is not None and self.parent_type not in parent_type.mro(): + # the StoredState instance is being shared between two unrelated classes + # -> unclear what is exepcted of us -> bail out + raise RuntimeError( + 'StoredState shared by {} and {}'.format( + self.parent_type.__name__, parent_type.__name__)) + + if parent is None: + # accessing via the class directly (e.g. MyClass.stored) + return self + + bound = None + if self.attr_name is not None: + bound = parent.__dict__.get(self.attr_name) + if bound is not None: + # we already have the thing from a previous pass, huzzah + return bound + + # need to find ourselves amongst the parent's bases + for cls in parent_type.mro(): + for attr_name, attr_value in cls.__dict__.items(): + if attr_value is not self: + continue + # we've found ourselves! is it the first time? 
+ if bound is not None: + # the StoredState instance is being stored in two different + # attributes -> unclear what is expected of us -> bail out + raise RuntimeError("StoredState shared by {0}.{1} and {0}.{2}".format( + cls.__name__, self.attr_name, attr_name)) + # we've found ourselves for the first time; save where, and bind the object + self.attr_name = attr_name + self.parent_type = cls + bound = BoundStoredState(parent, attr_name) + + if bound is not None: + # cache the bound object to avoid the expensive lookup the next time + # (don't use setattr, to keep things symmetric with the fast-path lookup above) + parent.__dict__[self.attr_name] = bound + return bound + + raise AttributeError( + 'cannot find {} attribute in type {}'.format( + self.__class__.__name__, parent_type.__name__)) + + +def _wrap_stored(parent_data, value): + t = type(value) + if t is dict: + return StoredDict(parent_data, value) + if t is list: + return StoredList(parent_data, value) + if t is set: + return StoredSet(parent_data, value) + return value + + +def _unwrap_stored(parent_data, value): + t = type(value) + if t is StoredDict or t is StoredList or t is StoredSet: + return value._under + return value + + +class StoredDict(collections.abc.MutableMapping): + + def __init__(self, stored_data, under): + self._stored_data = stored_data + self._under = under + + def __getitem__(self, key): + return _wrap_stored(self._stored_data, self._under[key]) + + def __setitem__(self, key, value): + self._under[key] = _unwrap_stored(self._stored_data, value) + self._stored_data.dirty = True + + def __delitem__(self, key): + del self._under[key] + self._stored_data.dirty = True + + def __iter__(self): + return self._under.__iter__() + + def __len__(self): + return len(self._under) + + def __eq__(self, other): + if isinstance(other, StoredDict): + return self._under == other._under + elif isinstance(other, collections.abc.Mapping): + return self._under == other + else: + return NotImplemented + + +class StoredList(collections.abc.MutableSequence): + + def __init__(self, stored_data, under): + self._stored_data = stored_data + self._under = under + + def __getitem__(self, index): + return _wrap_stored(self._stored_data, self._under[index]) + + def __setitem__(self, index, value): + self._under[index] = _unwrap_stored(self._stored_data, value) + self._stored_data.dirty = True + + def __delitem__(self, index): + del self._under[index] + self._stored_data.dirty = True + + def __len__(self): + return len(self._under) + + def insert(self, index, value): + self._under.insert(index, value) + self._stored_data.dirty = True + + def append(self, value): + self._under.append(value) + self._stored_data.dirty = True + + def __eq__(self, other): + if isinstance(other, StoredList): + return self._under == other._under + elif isinstance(other, collections.abc.Sequence): + return self._under == other + else: + return NotImplemented + + def __lt__(self, other): + if isinstance(other, StoredList): + return self._under < other._under + elif isinstance(other, collections.abc.Sequence): + return self._under < other + else: + return NotImplemented + + def __le__(self, other): + if isinstance(other, StoredList): + return self._under <= other._under + elif isinstance(other, collections.abc.Sequence): + return self._under <= other + else: + return NotImplemented + + def __gt__(self, other): + if isinstance(other, StoredList): + return self._under > other._under + elif isinstance(other, collections.abc.Sequence): + return self._under > other + else: + 
return NotImplemented
+
+    def __ge__(self, other):
+        if isinstance(other, StoredList):
+            return self._under >= other._under
+        elif isinstance(other, collections.abc.Sequence):
+            return self._under >= other
+        else:
+            return NotImplemented
+
+
+class StoredSet(collections.abc.MutableSet):
+
+    def __init__(self, stored_data, under):
+        self._stored_data = stored_data
+        self._under = under
+
+    def add(self, key):
+        self._under.add(key)
+        self._stored_data.dirty = True
+
+    def discard(self, key):
+        self._under.discard(key)
+        self._stored_data.dirty = True
+
+    def __contains__(self, key):
+        return key in self._under
+
+    def __iter__(self):
+        return self._under.__iter__()
+
+    def __len__(self):
+        return len(self._under)
+
+    @classmethod
+    def _from_iterable(cls, it):
+        """Construct an instance of the class from any iterable input.
+
+        Per https://docs.python.org/3/library/collections.abc.html
+        if the Set mixin is being used in a class with a different constructor signature,
+        you will need to override _from_iterable() with a classmethod that can construct
+        new instances from an iterable argument.
+        """
+        return set(it)
+
+    def __le__(self, other):
+        if isinstance(other, StoredSet):
+            return self._under <= other._under
+        elif isinstance(other, collections.abc.Set):
+            return self._under <= other
+        else:
+            return NotImplemented
+
+    def __ge__(self, other):
+        if isinstance(other, StoredSet):
+            return self._under >= other._under
+        elif isinstance(other, collections.abc.Set):
+            return self._under >= other
+        else:
+            return NotImplemented
+
+    def __eq__(self, other):
+        if isinstance(other, StoredSet):
+            return self._under == other._under
+        elif isinstance(other, collections.abc.Set):
+            return self._under == other
+        else:
+            return NotImplemented
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/test/charms/test_main/lib/ops/jujuversion.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/test/charms/test_main/lib/ops/jujuversion.py
new file mode 100755
index 0000000000000000000000000000000000000000..b2b8177dbe396f0d8c46b86e26af6b4e54ea046d
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/test/charms/test_main/lib/ops/jujuversion.py
@@ -0,0 +1,98 @@
+# Copyright 2020 Canonical Ltd.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import os
+import re
+from functools import total_ordering
+
+
+@total_ordering
+class JujuVersion:
+
+    PATTERN = r'''^
+    (?P<major>\d{1,9})\.(?P<minor>\d{1,9})       # <major> and <minor> numbers are always there
+    ((?:\.|-(?P<tag>[a-z]+))(?P<patch>\d{1,9}))? # sometimes with .<patch> or -<tag><patch>
+    (\.(?P<build>\d{1,9}))?$                     # and sometimes with a <build> number.
+ ''' + + def __init__(self, version): + m = re.match(self.PATTERN, version, re.VERBOSE) + if not m: + raise RuntimeError('"{}" is not a valid Juju version string'.format(version)) + + d = m.groupdict() + self.major = int(m.group('major')) + self.minor = int(m.group('minor')) + self.tag = d['tag'] or '' + self.patch = int(d['patch'] or 0) + self.build = int(d['build'] or 0) + + def __repr__(self): + if self.tag: + s = '{}.{}-{}{}'.format(self.major, self.minor, self.tag, self.patch) + else: + s = '{}.{}.{}'.format(self.major, self.minor, self.patch) + if self.build > 0: + s += '.{}'.format(self.build) + return s + + def __eq__(self, other): + if self is other: + return True + if isinstance(other, str): + other = type(self)(other) + elif not isinstance(other, JujuVersion): + raise RuntimeError('cannot compare Juju version "{}" with "{}"'.format(self, other)) + return ( + self.major == other.major + and self.minor == other.minor + and self.tag == other.tag + and self.build == other.build + and self.patch == other.patch) + + def __lt__(self, other): + if self is other: + return False + if isinstance(other, str): + other = type(self)(other) + elif not isinstance(other, JujuVersion): + raise RuntimeError('cannot compare Juju version "{}" with "{}"'.format(self, other)) + + if self.major != other.major: + return self.major < other.major + elif self.minor != other.minor: + return self.minor < other.minor + elif self.tag != other.tag: + if not self.tag: + return False + elif not other.tag: + return True + return self.tag < other.tag + elif self.patch != other.patch: + return self.patch < other.patch + elif self.build != other.build: + return self.build < other.build + return False + + @classmethod + def from_environ(cls) -> 'JujuVersion': + """Build a JujuVersion from JUJU_VERSION.""" + v = os.environ.get('JUJU_VERSION') + if not v: + raise RuntimeError('environ has no JUJU_VERSION') + return cls(v) + + def has_app_data(self) -> bool: + """Determine whether this juju version knows about app data.""" + return (self.major, self.minor, self.patch) >= (2, 7, 0) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/test/charms/test_main/lib/ops/lib/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/test/charms/test_main/lib/ops/lib/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..edb9fcacea6f0173aed9f07ca8a683cfead989cc --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/test/charms/test_main/lib/ops/lib/__init__.py @@ -0,0 +1,194 @@ +# Copyright 2020 Canonical Ltd. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
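+
+# Editor's note: an illustrative sketch, not part of the patch. The loader in
+# this module expects a library to live in an "opslib" directory reachable
+# from sys.path (i.e. <sys.path entry>/<package>/opslib/<libname>/__init__.py)
+# and to declare, near the top of that file, constants of the form:
+#
+#     LIBNAME = "mycharmlib"                # lowercase letters and digits
+#     LIBAPI = 1                            # API (major) version, an int
+#     LIBPATCH = 3                          # patch version, an int
+#     LIBAUTHOR = "someone@example.com"     # author email
+#
+# (all names above are invented for the example). A charm then loads the
+# matching module with the highest patch version via:
+#
+#     import ops.lib
+#     mycharmlib = ops.lib.use("mycharmlib", 1, "someone@example.com")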
+
+import sys
+import os
+import re
+
+from ast import literal_eval
+from importlib.util import module_from_spec
+from importlib.machinery import ModuleSpec
+from pkgutil import get_importer
+from types import ModuleType
+
+
+_libraries = None
+
+_libline_re = re.compile(r'''^LIB([A-Z]+)\s*=\s*([0-9]+|['"][a-zA-Z0-9_.\-@]+['"])''')
+_libname_re = re.compile(r'''^[a-z][a-z0-9]+$''')
+
+# Not perfect, but should do for now.
+_libauthor_re = re.compile(r'''^[A-Za-z0-9_+.-]+@[a-z0-9_-]+(?:\.[a-z0-9_-]+)*\.[a-z]{2,3}$''')
+
+
+def use(name: str, api: int, author: str) -> ModuleType:
+    """Use a library from the ops libraries.
+
+    Args:
+        name: the name of the library requested.
+        api: the API version of the library.
+        author: the author of the library. Together with the name, this
+            identifies which library to load.
+    Raises:
+        ImportError: if the library cannot be found.
+        TypeError: if the name, api, or author are the wrong type.
+        ValueError: if the name, api, or author are invalid.
+    """
+    if not isinstance(name, str):
+        raise TypeError("invalid library name: {!r} (must be a str)".format(name))
+    if not isinstance(author, str):
+        raise TypeError("invalid library author: {!r} (must be a str)".format(author))
+    if not isinstance(api, int):
+        raise TypeError("invalid library API: {!r} (must be an int)".format(api))
+    if api < 0:
+        raise ValueError('invalid library api: {} (must be ≥0)'.format(api))
+    if not _libname_re.match(name):
+        raise ValueError("invalid library name: {!r} (chars and digits only)".format(name))
+    if not _libauthor_re.match(author):
+        raise ValueError("invalid library author email: {!r}".format(author))
+
+    if _libraries is None:
+        autoimport()
+
+    versions = _libraries.get((name, author), ())
+    for lib in versions:
+        if lib.api == api:
+            return lib.import_module()
+
+    others = ', '.join(str(lib.api) for lib in versions)
+    if others:
+        msg = 'cannot find "{}" from "{}" with API version {} (have {})'.format(
+            name, author, api, others)
+    else:
+        msg = 'cannot find library "{}" from "{}"'.format(name, author)
+
+    raise ImportError(msg, name=name)
+
+
+def autoimport():
+    """Find all libs in the path and enable use of them.
+
+    You only need to call this if you've installed a package or
+    otherwise changed sys.path in the current run, and need to see the
+    changes. Otherwise libraries are found on first call of `use`.
+    """
+    global _libraries
+    _libraries = {}
+    for spec in _find_all_specs(sys.path):
+        lib = _parse_lib(spec)
+        if lib is None:
+            continue
+
+        versions = _libraries.setdefault((lib.name, lib.author), [])
+        versions.append(lib)
+        versions.sort(reverse=True)
+
+
+def _find_all_specs(path):
+    for sys_dir in path:
+        if sys_dir == "":
+            sys_dir = "."
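+            # (an empty entry on sys.path denotes the current working directory)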
+ try: + top_dirs = os.listdir(sys_dir) + except OSError: + continue + for top_dir in top_dirs: + opslib = os.path.join(sys_dir, top_dir, 'opslib') + try: + lib_dirs = os.listdir(opslib) + except OSError: + continue + finder = get_importer(opslib) + if finder is None or not hasattr(finder, 'find_spec'): + continue + for lib_dir in lib_dirs: + spec = finder.find_spec(lib_dir) + if spec is None: + continue + if spec.loader is None: + # a namespace package; not supported + continue + yield spec + + +# only the first this many lines of a file are looked at for the LIB* constants +_MAX_LIB_LINES = 99 + + +def _parse_lib(spec): + if spec.origin is None: + return None + + _expected = {'NAME': str, 'AUTHOR': str, 'API': int, 'PATCH': int} + + try: + with open(spec.origin, 'rt', encoding='utf-8') as f: + libinfo = {} + for n, line in enumerate(f): + if len(libinfo) == len(_expected): + break + if n > _MAX_LIB_LINES: + return None + m = _libline_re.match(line) + if m is None: + continue + key, value = m.groups() + if key in _expected: + value = literal_eval(value) + if not isinstance(value, _expected[key]): + return None + libinfo[key] = value + else: + if len(libinfo) != len(_expected): + return None + except Exception: + return None + + return _Lib(spec, libinfo['NAME'], libinfo['AUTHOR'], libinfo['API'], libinfo['PATCH']) + + +class _Lib: + + def __init__(self, spec: ModuleSpec, name: str, author: str, api: int, patch: int): + self.spec = spec + self.name = name + self.author = author + self.api = api + self.patch = patch + + self._module = None + + def __repr__(self): + return "<_Lib {0.name} by {0.author}, API {0.api}, patch {0.patch}>".format(self) + + def import_module(self) -> ModuleType: + if self._module is None: + module = module_from_spec(self.spec) + self.spec.loader.exec_module(module) + self._module = module + return self._module + + def __eq__(self, other): + if not isinstance(other, _Lib): + return NotImplemented + a = (self.name, self.author, self.api, self.patch) + b = (other.name, other.author, other.api, other.patch) + return a == b + + def __lt__(self, other): + if not isinstance(other, _Lib): + return NotImplemented + a = (self.name, self.author, self.api, self.patch) + b = (other.name, other.author, other.api, other.patch) + return a < b diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/test/charms/test_main/lib/ops/log.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/test/charms/test_main/lib/ops/log.py new file mode 100644 index 0000000000000000000000000000000000000000..4aac5543aec4d84dc393e79b772a30284712d6d4 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/test/charms/test_main/lib/ops/log.py @@ -0,0 +1,51 @@ +# Copyright 2020 Canonical Ltd. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
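+
+# Editor's note (illustrative, not part of the patch): the handler below
+# forwards every log record to Juju's juju-log hook tool via the model
+# backend, so once setup_root_logging() has run (ops.main.main calls it),
+# plain `logging` calls in charm code, e.g.
+#
+#     logger = logging.getLogger(__name__)
+#     logger.info('configuring the workload')
+#
+# end up in `juju debug-log` without any extra wiring.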
+
+import sys
+import logging
+
+
+class JujuLogHandler(logging.Handler):
+    """A handler for sending logs to Juju via juju-log."""
+
+    def __init__(self, model_backend, level=logging.DEBUG):
+        super().__init__(level)
+        self.model_backend = model_backend
+
+    def emit(self, record):
+        self.model_backend.juju_log(record.levelname, self.format(record))
+
+
+def setup_root_logging(model_backend, debug=False):
+    """Set up Python logging to forward messages to juju-log.
+
+    By default, logging is set to DEBUG level, and messages will be filtered by Juju.
+    Charmers can also set their own default log level with::
+
+      logging.getLogger().setLevel(logging.INFO)
+
+    model_backend -- a ModelBackend to use for juju-log
+    debug -- if True, write logs to stderr as well as to juju-log.
+    """
+    logger = logging.getLogger()
+    logger.setLevel(logging.DEBUG)
+    logger.addHandler(JujuLogHandler(model_backend))
+    if debug:
+        handler = logging.StreamHandler()
+        formatter = logging.Formatter('%(asctime)s %(levelname)-8s %(message)s')
+        handler.setFormatter(formatter)
+        logger.addHandler(handler)
+
+    sys.excepthook = lambda etype, value, tb: logger.error(
+        "Uncaught exception while in charm code:", exc_info=(etype, value, tb))
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/test/charms/test_main/lib/ops/main.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/test/charms/test_main/lib/ops/main.py
new file mode 100755
index 0000000000000000000000000000000000000000..6dc31c3575044796e8fe1f61b8415395689d6339
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/test/charms/test_main/lib/ops/main.py
@@ -0,0 +1,348 @@
+# Copyright 2019-2020 Canonical Ltd.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import inspect
+import logging
+import os
+import subprocess
+import sys
+import warnings
+from pathlib import Path
+
+import yaml
+
+import ops.charm
+import ops.framework
+import ops.model
+import ops.storage
+
+from ops.log import setup_root_logging
+
+CHARM_STATE_FILE = '.unit-state.db'
+
+
+logger = logging.getLogger()
+
+
+def _get_charm_dir():
+    charm_dir = os.environ.get("JUJU_CHARM_DIR")
+    if charm_dir is None:
+        # Assume $JUJU_CHARM_DIR/lib/ops/main.py structure.
+        charm_dir = Path('{}/../../..'.format(__file__)).resolve()
+    else:
+        charm_dir = Path(charm_dir).resolve()
+    return charm_dir
+
+
+def _create_event_link(charm, bound_event):
+    """Create a symlink for a particular event.
+
+    charm -- A charm object.
+    bound_event -- An event for which to create a symlink.
+ """ + if issubclass(bound_event.event_type, ops.charm.HookEvent): + event_dir = charm.framework.charm_dir / 'hooks' + event_path = event_dir / bound_event.event_kind.replace('_', '-') + elif issubclass(bound_event.event_type, ops.charm.ActionEvent): + if not bound_event.event_kind.endswith("_action"): + raise RuntimeError( + 'action event name {} needs _action suffix'.format(bound_event.event_kind)) + event_dir = charm.framework.charm_dir / 'actions' + # The event_kind is suffixed with "_action" while the executable is not. + event_path = event_dir / bound_event.event_kind[:-len('_action')].replace('_', '-') + else: + raise RuntimeError( + 'cannot create a symlink: unsupported event type {}'.format(bound_event.event_type)) + + event_dir.mkdir(exist_ok=True) + if not event_path.exists(): + # CPython has different implementations for populating sys.argv[0] for Linux and Windows. + # For Windows it is always an absolute path (any symlinks are resolved) + # while for Linux it can be a relative path. + target_path = os.path.relpath(os.path.realpath(sys.argv[0]), str(event_dir)) + + # Ignore the non-symlink files or directories + # assuming the charm author knows what they are doing. + logger.debug( + 'Creating a new relative symlink at %s pointing to %s', + event_path, target_path) + event_path.symlink_to(target_path) + + +def _setup_event_links(charm_dir, charm): + """Set up links for supported events that originate from Juju. + + Whether a charm can handle an event or not can be determined by + introspecting which events are defined on it. + + Hooks or actions are created as symlinks to the charm code file + which is determined by inspecting symlinks provided by the charm + author at hooks/install or hooks/start. + + charm_dir -- A root directory of the charm. + charm -- An instance of the Charm class. + + """ + for bound_event in charm.on.events().values(): + # Only events that originate from Juju need symlinks. + if issubclass(bound_event.event_type, (ops.charm.HookEvent, ops.charm.ActionEvent)): + _create_event_link(charm, bound_event) + + +def _emit_charm_event(charm, event_name): + """Emits a charm event based on a Juju event name. + + charm -- A charm instance to emit an event from. + event_name -- A Juju event name to emit on a charm. + """ + event_to_emit = None + try: + event_to_emit = getattr(charm.on, event_name) + except AttributeError: + logger.debug("Event %s not defined for %s.", event_name, charm) + + # If the event is not supported by the charm implementation, do + # not error out or try to emit it. This is to support rollbacks. 
+ if event_to_emit is not None: + args, kwargs = _get_event_args(charm, event_to_emit) + logger.debug('Emitting Juju event %s.', event_name) + event_to_emit.emit(*args, **kwargs) + + +def _get_event_args(charm, bound_event): + event_type = bound_event.event_type + model = charm.framework.model + + if issubclass(event_type, ops.charm.RelationEvent): + relation_name = os.environ['JUJU_RELATION'] + relation_id = int(os.environ['JUJU_RELATION_ID'].split(':')[-1]) + relation = model.get_relation(relation_name, relation_id) + else: + relation = None + + remote_app_name = os.environ.get('JUJU_REMOTE_APP', '') + remote_unit_name = os.environ.get('JUJU_REMOTE_UNIT', '') + if remote_app_name or remote_unit_name: + if not remote_app_name: + if '/' not in remote_unit_name: + raise RuntimeError('invalid remote unit name: {}'.format(remote_unit_name)) + remote_app_name = remote_unit_name.split('/')[0] + args = [relation, model.get_app(remote_app_name)] + if remote_unit_name: + args.append(model.get_unit(remote_unit_name)) + return args, {} + elif relation: + return [relation], {} + return [], {} + + +class _Dispatcher: + """Encapsulate how to figure out what event Juju wants us to run. + + Also knows how to run “legacy” hooks when Juju called us via a top-level + ``dispatch`` binary. + + Args: + charm_dir: the toplevel directory of the charm + + Attributes: + event_name: the name of the event to run + is_dispatch_aware: are we running under a Juju that knows about the + dispatch binary? + + """ + + def __init__(self, charm_dir: Path): + self._charm_dir = charm_dir + self._exec_path = Path(sys.argv[0]) + + if 'JUJU_DISPATCH_PATH' in os.environ and (charm_dir / 'dispatch').exists(): + self._init_dispatch() + else: + self._init_legacy() + + def ensure_event_links(self, charm): + """Make sure necessary symlinks are present on disk""" + + if self.is_dispatch_aware: + # links aren't needed + return + + # When a charm is force-upgraded and a unit is in an error state Juju + # does not run upgrade-charm and instead runs the failed hook followed + # by config-changed. Given the nature of force-upgrading the hook setup + # code is not triggered on config-changed. + # + # 'start' event is included as Juju does not fire the install event for + # K8s charms (see LP: #1854635). + if (self.event_name in ('install', 'start', 'upgrade_charm') + or self.event_name.endswith('_storage_attached')): + _setup_event_links(self._charm_dir, charm) + + def run_any_legacy_hook(self): + """Run any extant legacy hook. + + If there is both a dispatch file and a legacy hook for the + current event, run the wanted legacy hook. 
+ """ + + if not self.is_dispatch_aware: + # we *are* the legacy hook + return + + dispatch_path = self._charm_dir / self._dispatch_path + if not dispatch_path.exists(): + logger.debug("Legacy %s does not exist.", self._dispatch_path) + return + + # super strange that there isn't an is_executable + if not os.access(str(dispatch_path), os.X_OK): + logger.warning("Legacy %s exists but is not executable.", self._dispatch_path) + return + + if dispatch_path.resolve() == self._exec_path.resolve(): + logger.debug("Legacy %s is just a link to ourselves.", self._dispatch_path) + return + + argv = sys.argv.copy() + argv[0] = str(dispatch_path) + logger.info("Running legacy %s.", self._dispatch_path) + try: + subprocess.run(argv, check=True) + except subprocess.CalledProcessError as e: + logger.warning( + "Legacy %s exited with status %d.", + self._dispatch_path, e.returncode) + sys.exit(e.returncode) + else: + logger.debug("Legacy %s exited with status 0.", self._dispatch_path) + + def _set_name_from_path(self, path: Path): + """Sets the name attribute to that which can be inferred from the given path.""" + name = path.name.replace('-', '_') + if path.parent.name == 'actions': + name = '{}_action'.format(name) + self.event_name = name + + def _init_legacy(self): + """Set up the 'legacy' dispatcher. + + The current Juju doesn't know about 'dispatch' and calls hooks + explicitly. + """ + self.is_dispatch_aware = False + self._set_name_from_path(self._exec_path) + + def _init_dispatch(self): + """Set up the new 'dispatch' dispatcher. + + The current Juju will run 'dispatch' if it exists, and otherwise fall + back to the old behaviour. + + JUJU_DISPATCH_PATH will be set to the wanted hook, e.g. hooks/install, + in both cases. + """ + self._dispatch_path = Path(os.environ['JUJU_DISPATCH_PATH']) + + if 'OPERATOR_DISPATCH' in os.environ: + logger.debug("Charm called itself via %s.", self._dispatch_path) + sys.exit(0) + os.environ['OPERATOR_DISPATCH'] = '1' + + self.is_dispatch_aware = True + self._set_name_from_path(self._dispatch_path) + + def is_restricted_context(self): + """"Return True if we are running in a restricted Juju context. + + When in a restricted context, most commands (relation-get, config-get, + state-get) are not available. As such, we change how we interact with + Juju. + """ + return self.event_name in ('collect_metrics',) + + +def main(charm_class, use_juju_for_storage=False): + """Setup the charm and dispatch the observed event. + + The event name is based on the way this executable was called (argv[0]). + """ + charm_dir = _get_charm_dir() + + model_backend = ops.model._ModelBackend() + debug = ('JUJU_DEBUG' in os.environ) + setup_root_logging(model_backend, debug=debug) + logger.debug("Operator Framework %s up and running.", ops.__version__) + + dispatcher = _Dispatcher(charm_dir) + dispatcher.run_any_legacy_hook() + + metadata = (charm_dir / 'metadata.yaml').read_text() + actions_meta = charm_dir / 'actions.yaml' + if actions_meta.exists(): + actions_metadata = actions_meta.read_text() + else: + actions_metadata = None + + if not yaml.__with_libyaml__: + logger.debug('yaml does not have libyaml extensions, using slower pure Python yaml loader') + meta = ops.charm.CharmMeta.from_yaml(metadata, actions_metadata) + model = ops.model.Model(meta, model_backend) + + # TODO: If Juju unit agent crashes after exit(0) from the charm code + # the framework will commit the snapshot but Juju will not commit its + # operation. 
+ charm_state_path = charm_dir / CHARM_STATE_FILE + if use_juju_for_storage: + if dispatcher.is_restricted_context(): + # TODO: jam 2020-06-30 This unconditionally avoids running a collect metrics event + # Though we eventually expect that juju will run collect-metrics in a + # non-restricted context. Once we can determine that we are running collect-metrics + # in a non-restricted context, we should fire the event as normal. + logger.debug('"%s" is not supported when using Juju for storage\n' + 'see: https://github.com/canonical/operator/issues/348', + dispatcher.event_name) + # Note that we don't exit nonzero, because that would cause Juju to rerun the hook + return + store = ops.storage.JujuStorage() + else: + store = ops.storage.SQLiteStorage(charm_state_path) + framework = ops.framework.Framework(store, charm_dir, meta, model) + try: + sig = inspect.signature(charm_class) + try: + sig.bind(framework) + except TypeError: + msg = ( + "the second argument, 'key', has been deprecated and will be " + "removed after the 0.7 release") + warnings.warn(msg, DeprecationWarning) + charm = charm_class(framework, None) + else: + charm = charm_class(framework) + dispatcher.ensure_event_links(charm) + + # TODO: Remove the collect_metrics check below as soon as the relevant + # Juju changes are made. + # + # Skip reemission of deferred events for collect-metrics events because + # they do not have the full access to all hook tools. + if not dispatcher.is_restricted_context(): + framework.reemit() + + _emit_charm_event(charm, dispatcher.event_name) + + framework.commit() + finally: + framework.close() diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/test/charms/test_main/lib/ops/model.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/test/charms/test_main/lib/ops/model.py new file mode 100644 index 0000000000000000000000000000000000000000..b96e89154ea9cec2b62a4fab4649412e115c304e --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/test/charms/test_main/lib/ops/model.py @@ -0,0 +1,1237 @@ +# Copyright 2019 Canonical Ltd. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import datetime +import decimal +import ipaddress +import json +import os +import re +import shutil +import tempfile +import time +import typing +import weakref + +from abc import ABC, abstractmethod +from collections.abc import Mapping, MutableMapping +from pathlib import Path +from subprocess import run, PIPE, CalledProcessError + +import ops +from ops.jujuversion import JujuVersion + + +class Model: + """Represents the Juju Model as seen from this unit. + + This should not be instantiated directly by Charmers, but can be accessed as `self.model` + from any class that derives from Object. + + Attributes: + unit: A :class:`Unit` that represents the unit that is running this code (eg yourself) + app: A :class:`Application` that represents the application this unit is a part of. 
+ relations: Mapping of endpoint to list of :class:`Relation` answering the question + "what am I currently related to". See also :meth:`.get_relation` + config: A dict of the config for the current application. + resources: Access to resources for this charm. Use ``model.resources.fetch(resource_name)`` + to get the path on disk where the resource can be found. + storages: Mapping of storage_name to :class:`Storage` for the storage points defined in + metadata.yaml + pod: Used to get access to ``model.pod.set_spec`` to set the container specification + for Kubernetes charms. + """ + + def __init__(self, meta: 'ops.charm.CharmMeta', backend: '_ModelBackend'): + self._cache = _ModelCache(backend) + self._backend = backend + self.unit = self.get_unit(self._backend.unit_name) + self.app = self.unit.app + self.relations = RelationMapping(meta.relations, self.unit, self._backend, self._cache) + self.config = ConfigData(self._backend) + self.resources = Resources(list(meta.resources), self._backend) + self.pod = Pod(self._backend) + self.storages = StorageMapping(list(meta.storages), self._backend) + self._bindings = BindingMapping(self._backend) + + @property + def name(self) -> str: + """Return the name of the Model that this unit is running in. + + This is read from the environment variable ``JUJU_MODEL_NAME``. + """ + return self._backend.model_name + + def get_unit(self, unit_name: str) -> 'Unit': + """Get an arbitrary unit by name. + + Internally this uses a cache, so asking for the same unit two times will + return the same object. + """ + return self._cache.get(Unit, unit_name) + + def get_app(self, app_name: str) -> 'Application': + """Get an application by name. + + Internally this uses a cache, so asking for the same application two times will + return the same object. + """ + return self._cache.get(Application, app_name) + + def get_relation( + self, relation_name: str, + relation_id: typing.Optional[int] = None) -> 'Relation': + """Get a specific Relation instance. + + If relation_id is not given, this will return the Relation instance if the + relation is established only once or None if it is not established. If this + same relation is established multiple times the error TooManyRelatedAppsError is raised. + + Args: + relation_name: The name of the endpoint for this charm + relation_id: An identifier for a specific relation. Used to disambiguate when a + given application has more than one relation on a given endpoint. + Raises: + TooManyRelatedAppsError: is raised if there is more than one relation to the + supplied relation_name and no relation_id was supplied + """ + return self.relations._get_unique(relation_name, relation_id) + + def get_binding(self, binding_key: typing.Union[str, 'Relation']) -> 'Binding': + """Get a network space binding. + + Args: + binding_key: The relation name or instance to obtain bindings for. + Returns: + If ``binding_key`` is a relation name, the method returns the default binding + for that relation. If a relation instance is provided, the method first looks + up a more specific binding for that specific relation ID, and if none is found + falls back to the default binding for the relation name. 
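+
+        Example (the 'db' endpoint name is illustrative)::
+
+            binding = self.model.get_binding('db')
+            address = binding.network.bind_address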
+ """ + return self._bindings.get(binding_key) + + +class _ModelCache: + + def __init__(self, backend): + self._backend = backend + self._weakrefs = weakref.WeakValueDictionary() + + def get(self, entity_type, *args): + key = (entity_type,) + args + entity = self._weakrefs.get(key) + if entity is None: + entity = entity_type(*args, backend=self._backend, cache=self) + self._weakrefs[key] = entity + return entity + + +class Application: + """Represents a named application in the model. + + This might be your application, or might be an application that you are related to. + Charmers should not instantiate Application objects directly, but should use + :meth:`Model.get_app` if they need a reference to a given application. + + Attributes: + name: The name of this application (eg, 'mysql'). This name may differ from the name of + the charm, if the user has deployed it to a different name. + """ + + def __init__(self, name, backend, cache): + self.name = name + self._backend = backend + self._cache = cache + self._is_our_app = self.name == self._backend.app_name + self._status = None + + def _invalidate(self): + self._status = None + + @property + def status(self) -> 'StatusBase': + """Used to report or read the status of the overall application. + + Can only be read and set by the lead unit of the application. + + The status of remote units is always Unknown. + + Raises: + RuntimeError: if you try to set the status of another application, or if you try to + set the status of this application as a unit that is not the leader. + InvalidStatusError: if you try to set the status to something that is not a + :class:`StatusBase` + + Example:: + + self.model.app.status = BlockedStatus('I need a human to come help me') + """ + if not self._is_our_app: + return UnknownStatus() + + if not self._backend.is_leader(): + raise RuntimeError('cannot get application status as a non-leader unit') + + if self._status: + return self._status + + s = self._backend.status_get(is_app=True) + self._status = StatusBase.from_name(s['status'], s['message']) + return self._status + + @status.setter + def status(self, value: 'StatusBase'): + if not isinstance(value, StatusBase): + raise InvalidStatusError( + 'invalid value provided for application {} status: {}'.format(self, value) + ) + + if not self._is_our_app: + raise RuntimeError('cannot to set status for a remote application {}'.format(self)) + + if not self._backend.is_leader(): + raise RuntimeError('cannot set application status as a non-leader unit') + + self._backend.status_set(value.name, value.message, is_app=True) + self._status = value + + def __repr__(self): + return '<{}.{} {}>'.format(type(self).__module__, type(self).__name__, self.name) + + +class Unit: + """Represents a named unit in the model. + + This might be your unit, another unit of your application, or a unit of another application + that you are related to. + + Attributes: + name: The name of the unit (eg, 'mysql/0') + app: The Application the unit is a part of. + """ + + def __init__(self, name, backend, cache): + self.name = name + + app_name = name.split('/')[0] + self.app = cache.get(Application, app_name) + + self._backend = backend + self._cache = cache + self._is_our_unit = self.name == self._backend.unit_name + self._status = None + + def _invalidate(self): + self._status = None + + @property + def status(self) -> 'StatusBase': + """Used to report or read the status of a specific unit. + + The status of any unit other than yourself is always Unknown. 
+ + Raises: + RuntimeError: if you try to set the status of a unit other than yourself. + InvalidStatusError: if you try to set the status to something other than + a :class:`StatusBase` + Example:: + + self.model.unit.status = MaintenanceStatus('reconfiguring the frobnicators') + """ + if not self._is_our_unit: + return UnknownStatus() + + if self._status: + return self._status + + s = self._backend.status_get(is_app=False) + self._status = StatusBase.from_name(s['status'], s['message']) + return self._status + + @status.setter + def status(self, value: 'StatusBase'): + if not isinstance(value, StatusBase): + raise InvalidStatusError( + 'invalid value provided for unit {} status: {}'.format(self, value) + ) + + if not self._is_our_unit: + raise RuntimeError('cannot set status for a remote unit {}'.format(self)) + + self._backend.status_set(value.name, value.message, is_app=False) + self._status = value + + def __repr__(self): + return '<{}.{} {}>'.format(type(self).__module__, type(self).__name__, self.name) + + def is_leader(self) -> bool: + """Return whether this unit is the leader of its application. + + This can only be called for your own unit. + Returns: + True if you are the leader, False otherwise + Raises: + RuntimeError: if called for a unit that is not yourself + """ + if self._is_our_unit: + # This value is not cached as it is not guaranteed to persist for the whole duration + # of a hook execution. + return self._backend.is_leader() + else: + raise RuntimeError( + 'leadership status of remote units ({}) is not visible to other' + ' applications'.format(self) + ) + + def set_workload_version(self, version: str) -> None: + """Record the version of the software running as the workload. + + This shouldn't be confused with the revision of the charm. This is informative only; + shown in the output of 'juju status'. + """ + if not isinstance(version, str): + raise TypeError("workload version must be a str, not {}: {!r}".format( + type(version).__name__, version)) + self._backend.application_version_set(version) + + +class LazyMapping(Mapping, ABC): + """Represents a dict that isn't populated until it is accessed. + + Charm authors should generally never need to use this directly, but it forms + the basis for many of the dicts that the framework tracks. 
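+
+    Subclasses implement :meth:`_load` to fetch the underlying data; the result
+    is cached on first access and reused until :meth:`_invalidate` is called.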
+ """ + + _lazy_data = None + + @abstractmethod + def _load(self): + raise NotImplementedError() + + @property + def _data(self): + data = self._lazy_data + if data is None: + data = self._lazy_data = self._load() + return data + + def _invalidate(self): + self._lazy_data = None + + def __contains__(self, key): + return key in self._data + + def __len__(self): + return len(self._data) + + def __iter__(self): + return iter(self._data) + + def __getitem__(self, key): + return self._data[key] + + +class RelationMapping(Mapping): + """Map of relation names to lists of :class:`Relation` instances.""" + + def __init__(self, relations_meta, our_unit, backend, cache): + self._peers = set() + for name, relation_meta in relations_meta.items(): + if relation_meta.role.is_peer(): + self._peers.add(name) + self._our_unit = our_unit + self._backend = backend + self._cache = cache + self._data = {relation_name: None for relation_name in relations_meta} + + def __contains__(self, key): + return key in self._data + + def __len__(self): + return len(self._data) + + def __iter__(self): + return iter(self._data) + + def __getitem__(self, relation_name): + is_peer = relation_name in self._peers + relation_list = self._data[relation_name] + if relation_list is None: + relation_list = self._data[relation_name] = [] + for rid in self._backend.relation_ids(relation_name): + relation = Relation(relation_name, rid, is_peer, + self._our_unit, self._backend, self._cache) + relation_list.append(relation) + return relation_list + + def _invalidate(self, relation_name): + """Used to wipe the cache of a given relation_name. + + Not meant to be used by Charm authors. The content of relation data is + static for the lifetime of a hook, so it is safe to cache in memory once + accessed. + """ + self._data[relation_name] = None + + def _get_unique(self, relation_name, relation_id=None): + if relation_id is not None: + if not isinstance(relation_id, int): + raise ModelError('relation id {} must be int or None not {}'.format( + relation_id, + type(relation_id).__name__)) + for relation in self[relation_name]: + if relation.id == relation_id: + return relation + else: + # The relation may be dead, but it is not forgotten. + is_peer = relation_name in self._peers + return Relation(relation_name, relation_id, is_peer, + self._our_unit, self._backend, self._cache) + num_related = len(self[relation_name]) + if num_related == 0: + return None + elif num_related == 1: + return self[relation_name][0] + else: + # TODO: We need something in the framework to catch and gracefully handle + # errors, ideally integrating the error catching with Juju's mechanisms. + raise TooManyRelatedAppsError(relation_name, num_related, 1) + + +class BindingMapping: + """Mapping of endpoints to network bindings. + + Charm authors should not instantiate this directly, but access it via + :meth:`Model.get_binding` + """ + + def __init__(self, backend): + self._backend = backend + self._data = {} + + def get(self, binding_key: typing.Union[str, 'Relation']) -> 'Binding': + """Get a specific Binding for an endpoint/relation. + + Not used directly by Charm authors. 
See :meth:`Model.get_binding`
+        """
+        if isinstance(binding_key, Relation):
+            binding_name = binding_key.name
+            relation_id = binding_key.id
+        elif isinstance(binding_key, str):
+            binding_name = binding_key
+            relation_id = None
+        else:
+            raise ModelError('binding key must be str or relation instance, not {}'
+                             ''.format(type(binding_key).__name__))
+        binding = self._data.get(binding_key)
+        if binding is None:
+            binding = Binding(binding_name, relation_id, self._backend)
+            self._data[binding_key] = binding
+        return binding
+
+
+class Binding:
+    """Binding to a network space.
+
+    Attributes:
+        name: The name of the endpoint this binding represents (eg, 'db')
+    """
+
+    def __init__(self, name, relation_id, backend):
+        self.name = name
+        self._relation_id = relation_id
+        self._backend = backend
+        self._network = None
+
+    @property
+    def network(self) -> 'Network':
+        """The network information for this binding."""
+        if self._network is None:
+            try:
+                self._network = Network(self._backend.network_get(self.name, self._relation_id))
+            except RelationNotFoundError:
+                if self._relation_id is None:
+                    raise
+                # If a relation is dead, we can still get network info associated with an
+                # endpoint itself
+                self._network = Network(self._backend.network_get(self.name))
+        return self._network
+
+
+class Network:
+    """Network space details.
+
+    Charm authors should not instantiate this directly, but should get access to the Network
+    definition from :meth:`Model.get_binding` and its ``network`` attribute.
+
+    Attributes:
+        interfaces: A list of :class:`NetworkInterface` details. This includes the
+            information about how your application should be configured (eg, what
+            IP addresses you should bind to.)
+            Note that multiple addresses for a single interface are represented as multiple
+            interfaces. (eg, ``[NetworkInterface('ens1', '10.1.1.1/32'),
+            NetworkInterface('ens1', '10.1.2.1/32')]``)
+        ingress_addresses: A list of :class:`ipaddress.ip_address` objects representing the IP
+            addresses that other units should use to get in touch with you.
+        egress_subnets: A list of :class:`ipaddress.ip_network` representing the subnets that
+            other units will see you connecting from. Due to things like NAT it isn't always
+            possible to narrow it down to a single address, but when it is clear, the CIDRs
+            will be constrained to a single address. (eg, 10.0.0.1/32)
+    Args:
+        network_info: A dict of network information as returned by ``network-get``.
+    """
+
+    def __init__(self, network_info: dict):
+        self.interfaces = []
+        # Treat multiple addresses on an interface as multiple logical
+        # interfaces with the same name.
+        for interface_info in network_info['bind-addresses']:
+            interface_name = interface_info['interface-name']
+            for address_info in interface_info['addresses']:
+                self.interfaces.append(NetworkInterface(interface_name, address_info))
+        self.ingress_addresses = []
+        for address in network_info['ingress-addresses']:
+            self.ingress_addresses.append(ipaddress.ip_address(address))
+        self.egress_subnets = []
+        for subnet in network_info['egress-subnets']:
+            self.egress_subnets.append(ipaddress.ip_network(subnet))
+
+    @property
+    def bind_address(self):
+        """A single address that your application should bind() to.
+
+        For the common case where there is a single answer. This represents a single
+        address from :attr:`.interfaces` that can be used to configure where your
+        application should bind() and listen().
+ """ + return self.interfaces[0].address + + @property + def ingress_address(self): + """The address other applications should use to connect to your unit. + + Due to things like public/private addresses, NAT and tunneling, the address you bind() + to is not always the address other people can use to connect() to you. + This is just the first address from :attr:`.ingress_addresses`. + """ + return self.ingress_addresses[0] + + +class NetworkInterface: + """Represents a single network interface that the charm needs to know about. + + Charmers should not instantiate this type directly. Instead use :meth:`Model.get_binding` + to get the network information for a given endpoint. + + Attributes: + name: The name of the interface (eg. 'eth0', or 'ens1') + subnet: An :class:`ipaddress.ip_network` representation of the IP for the network + interface. This may be a single address (eg '10.0.1.2/32') + """ + + def __init__(self, name: str, address_info: dict): + self.name = name + # TODO: expose a hardware address here, see LP: #1864070. + self.address = ipaddress.ip_address(address_info['value']) + cidr = address_info['cidr'] + if not cidr: + # The cidr field may be empty, see LP: #1864102. + # In this case, make it a /32 or /128 IP network. + self.subnet = ipaddress.ip_network(address_info['value']) + else: + self.subnet = ipaddress.ip_network(cidr) + # TODO: expose a hostname/canonical name for the address here, see LP: #1864086. + + +class Relation: + """Represents an established relation between this application and another application. + + This class should not be instantiated directly, instead use :meth:`Model.get_relation` + or :attr:`RelationEvent.relation`. + + Attributes: + name: The name of the local endpoint of the relation (eg 'db') + id: The identifier for a particular relation (integer) + app: An :class:`Application` representing the remote application of this relation. + For peer relations this will be the local application. + units: A set of :class:`Unit` for units that have started and joined this relation. + data: A :class:`RelationData` holding the data buckets for each entity + of a relation. Accessed via eg Relation.data[unit]['foo'] + """ + + def __init__( + self, relation_name: str, relation_id: int, is_peer: bool, our_unit: Unit, + backend: '_ModelBackend', cache: '_ModelCache'): + self.name = relation_name + self.id = relation_id + self.app = None + self.units = set() + + # For peer relations, both the remote and the local app are the same. + if is_peer: + self.app = our_unit.app + try: + for unit_name in backend.relation_list(self.id): + unit = cache.get(Unit, unit_name) + self.units.add(unit) + if self.app is None: + self.app = unit.app + except RelationNotFoundError: + # If the relation is dead, just treat it as if it has no remote units. + pass + self.data = RelationData(self, our_unit, backend) + + def __repr__(self): + return '<{}.{} {}:{}>'.format(type(self).__module__, + type(self).__name__, + self.name, + self.id) + + +class RelationData(Mapping): + """Represents the various data buckets of a given relation. + + Each unit and application involved in a relation has their own data bucket. + Eg: ``{entity: RelationDataContent}`` + where entity can be either a :class:`Unit` or a :class:`Application`. + + Units can read and write their own data, and if they are the leader, + they can read and write their application data. They are allowed to read + remote unit and application data. + + This class should not be created directly. 
It should be accessed via + :attr:`Relation.data` + """ + + def __init__(self, relation: Relation, our_unit: Unit, backend: '_ModelBackend'): + self.relation = weakref.proxy(relation) + self._data = { + our_unit: RelationDataContent(self.relation, our_unit, backend), + our_unit.app: RelationDataContent(self.relation, our_unit.app, backend), + } + self._data.update({ + unit: RelationDataContent(self.relation, unit, backend) + for unit in self.relation.units}) + # The relation might be dead so avoid a None key here. + if self.relation.app is not None: + self._data.update({ + self.relation.app: RelationDataContent(self.relation, self.relation.app, backend), + }) + + def __contains__(self, key): + return key in self._data + + def __len__(self): + return len(self._data) + + def __iter__(self): + return iter(self._data) + + def __getitem__(self, key): + return self._data[key] + + +# We mix in MutableMapping here to get some convenience implementations, but whether it's actually +# mutable or not is controlled by the flag. +class RelationDataContent(LazyMapping, MutableMapping): + + def __init__(self, relation, entity, backend): + self.relation = relation + self._entity = entity + self._backend = backend + self._is_app = isinstance(entity, Application) + + def _load(self): + try: + return self._backend.relation_get(self.relation.id, self._entity.name, self._is_app) + except RelationNotFoundError: + # Dead relations tell no tales (and have no data). + return {} + + def _is_mutable(self): + if self._is_app: + is_our_app = self._backend.app_name == self._entity.name + if not is_our_app: + return False + # Whether the application data bag is mutable or not depends on + # whether this unit is a leader or not, but this is not guaranteed + # to be always true during the same hook execution. + return self._backend.is_leader() + else: + is_our_unit = self._backend.unit_name == self._entity.name + if is_our_unit: + return True + return False + + def __setitem__(self, key, value): + if not self._is_mutable(): + raise RelationDataError('cannot set relation data for {}'.format(self._entity.name)) + if not isinstance(value, str): + raise RelationDataError('relation data values must be strings') + + self._backend.relation_set(self.relation.id, key, value, self._is_app) + + # Don't load data unnecessarily if we're only updating. + if self._lazy_data is not None: + if value == '': + # Match the behavior of Juju, which is that setting the value to an + # empty string will remove the key entirely from the relation data. + del self._data[key] + else: + self._data[key] = value + + def __delitem__(self, key): + # Match the behavior of Juju, which is that setting the value to an empty + # string will remove the key entirely from the relation data. + self.__setitem__(key, '') + + +class ConfigData(LazyMapping): + + def __init__(self, backend): + self._backend = backend + + def _load(self): + return self._backend.config_get() + + +class StatusBase: + """Status values specific to applications and units. + + To access a status by name, see :meth:`StatusBase.from_name`, most use cases will just + directly use the child class to indicate their status. 
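+
+    Example (illustrative; sets this unit's workload status)::
+
+        self.model.unit.status = MaintenanceStatus('installing packages')
+        self.model.unit.status = ActiveStatus()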
+ """ + + _statuses = {} + name = None + + def __init__(self, message: str): + self.message = message + + def __new__(cls, *args, **kwargs): + if cls is StatusBase: + raise TypeError("cannot instantiate a base class") + return super().__new__(cls) + + def __eq__(self, other): + if not isinstance(self, type(other)): + return False + return self.message == other.message + + def __repr__(self): + return "{.__class__.__name__}({!r})".format(self, self.message) + + @classmethod + def from_name(cls, name: str, message: str): + if name == 'unknown': + # unknown is special + return UnknownStatus() + else: + return cls._statuses[name](message) + + @classmethod + def register(cls, child): + if child.name is None: + raise AttributeError('cannot register a Status which has no name') + cls._statuses[child.name] = child + return child + + +@StatusBase.register +class UnknownStatus(StatusBase): + """The unit status is unknown. + + A unit-agent has finished calling install, config-changed and start, but the + charm has not called status-set yet. + + """ + name = 'unknown' + + def __init__(self): + # Unknown status cannot be set and does not have a message associated with it. + super().__init__('') + + def __repr__(self): + return "UnknownStatus()" + + +@StatusBase.register +class ActiveStatus(StatusBase): + """The unit is ready. + + The unit believes it is correctly offering all the services it has been asked to offer. + """ + name = 'active' + + def __init__(self, message: str = ''): + super().__init__(message) + + +@StatusBase.register +class BlockedStatus(StatusBase): + """The unit requires manual intervention. + + An operator has to manually intervene to unblock the unit and let it proceed. + """ + name = 'blocked' + + +@StatusBase.register +class MaintenanceStatus(StatusBase): + """The unit is performing maintenance tasks. + + The unit is not yet providing services, but is actively doing work in preparation + for providing those services. This is a "spinning" state, not an error state. It + reflects activity on the unit itself, not on peers or related units. + + """ + name = 'maintenance' + + +@StatusBase.register +class WaitingStatus(StatusBase): + """A unit is unable to progress. + + The unit is unable to progress to an active state because an application to which + it is related is not running. + + """ + name = 'waiting' + + +class Resources: + """Object representing resources for the charm. + """ + + def __init__(self, names: typing.Iterable[str], backend: '_ModelBackend'): + self._backend = backend + self._paths = {name: None for name in names} + + def fetch(self, name: str) -> Path: + """Fetch the resource from the controller or store. + + If successfully fetched, this returns a Path object to where the resource is stored + on disk, otherwise it raises a ModelError. + """ + if name not in self._paths: + raise RuntimeError('invalid resource name: {}'.format(name)) + if self._paths[name] is None: + self._paths[name] = Path(self._backend.resource_get(name)) + return self._paths[name] + + +class Pod: + """Represents the definition of a pod spec in Kubernetes models. + + Currently only supports simple access to setting the Juju pod spec via :attr:`.set_spec`. + """ + + def __init__(self, backend: '_ModelBackend'): + self._backend = backend + + def set_spec(self, spec: typing.Mapping, k8s_resources: typing.Mapping = None): + """Set the specification for pods that Juju should start in kubernetes. + + See `juju help-tool pod-spec-set` for details of what should be passed. 
+        Args:
+            spec: The mapping defining the pod specification.
+            k8s_resources: Additional Kubernetes-specific specification.
+        """
+        if not self._backend.is_leader():
+            raise ModelError('cannot set a pod spec as this unit is not a leader')
+        self._backend.pod_spec_set(spec, k8s_resources)
+
+
+class StorageMapping(Mapping):
+    """Map of storage names to lists of Storage instances."""
+
+    def __init__(self, storage_names: typing.Iterable[str], backend: '_ModelBackend'):
+        self._backend = backend
+        self._storage_map = {storage_name: None for storage_name in storage_names}
+
+    def __contains__(self, key: str):
+        return key in self._storage_map
+
+    def __len__(self):
+        return len(self._storage_map)
+
+    def __iter__(self):
+        return iter(self._storage_map)
+
+    def __getitem__(self, storage_name: str) -> typing.List['Storage']:
+        storage_list = self._storage_map[storage_name]
+        if storage_list is None:
+            storage_list = self._storage_map[storage_name] = []
+            for storage_id in self._backend.storage_list(storage_name):
+                storage_list.append(Storage(storage_name, storage_id, self._backend))
+        return storage_list
+
+    def request(self, storage_name: str, count: int = 1):
+        """Request new storage instances of a given name.
+
+        Uses the storage-add tool to request additional storage. Juju will notify the unit
+        via <name>-storage-attached events when it becomes available.
+        """
+        if storage_name not in self._storage_map:
+            raise ModelError(('cannot add storage {!r}:'
+                              ' it is not present in the charm metadata').format(storage_name))
+        self._backend.storage_add(storage_name, count)
+
+
+class Storage:
+    """Represents a storage as defined in metadata.yaml.
+
+    Attributes:
+        name: Simple string name of the storage
+        id: The provider id for storage
+    """
+
+    def __init__(self, storage_name, storage_id, backend):
+        self.name = storage_name
+        self.id = storage_id
+        self._backend = backend
+        self._location = None
+
+    @property
+    def location(self):
+        if self._location is None:
+            raw = self._backend.storage_get('{}/{}'.format(self.name, self.id), "location")
+            self._location = Path(raw)
+        return self._location
+
+
+class ModelError(Exception):
+    """Base class for exceptions raised when interacting with the Model."""
+    pass
+
+
+class TooManyRelatedAppsError(ModelError):
+    """Raised by :meth:`Model.get_relation` if there is more than one related application."""
+
+    def __init__(self, relation_name, num_related, max_supported):
+        super().__init__('Too many remote applications on {} ({} > {})'.format(
+            relation_name, num_related, max_supported))
+        self.relation_name = relation_name
+        self.num_related = num_related
+        self.max_supported = max_supported
+
+
+class RelationDataError(ModelError):
+    """Raised by ``Relation.data[entity][key] = 'foo'`` if the data is invalid.
+
+    This is raised if you're either trying to set a value to something that isn't a string,
+    or if you are trying to set a value in a bucket that you don't have access to (e.g.,
+    another application/unit, or setting your application data when you aren't the leader).
+    """
+
+
+class RelationNotFoundError(ModelError):
+    """Backend error when querying juju for a given relation and that relation doesn't exist."""
+
+
+class InvalidStatusError(ModelError):
+    """Raised if trying to set an Application or Unit status to something invalid."""
+
+
+class _ModelBackend:
+    """Represents the connection between the Model representation and talking to Juju.
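+
+    It works by shelling out to the Juju hook tools (relation-get, status-set,
+    and so on) and decoding their JSON output into Python values.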
+ + Charm authors should not directly interact with the ModelBackend, it is a private + implementation of Model. + """ + + LEASE_RENEWAL_PERIOD = datetime.timedelta(seconds=30) + + def __init__(self, unit_name=None, model_name=None): + if unit_name is None: + self.unit_name = os.environ['JUJU_UNIT_NAME'] + else: + self.unit_name = unit_name + if model_name is None: + model_name = os.environ.get('JUJU_MODEL_NAME') + self.model_name = model_name + self.app_name = self.unit_name.split('/')[0] + + self._is_leader = None + self._leader_check_time = None + + def _run(self, *args, return_output=False, use_json=False): + kwargs = dict(stdout=PIPE, stderr=PIPE) + if use_json: + args += ('--format=json',) + try: + result = run(args, check=True, **kwargs) + except CalledProcessError as e: + raise ModelError(e.stderr) + if return_output: + if result.stdout is None: + return '' + else: + text = result.stdout.decode('utf8') + if use_json: + return json.loads(text) + else: + return text + + def relation_ids(self, relation_name): + relation_ids = self._run('relation-ids', relation_name, return_output=True, use_json=True) + return [int(relation_id.split(':')[-1]) for relation_id in relation_ids] + + def relation_list(self, relation_id): + try: + return self._run('relation-list', '-r', str(relation_id), + return_output=True, use_json=True) + except ModelError as e: + if 'relation not found' in str(e): + raise RelationNotFoundError() from e + raise + + def relation_get(self, relation_id, member_name, is_app): + if not isinstance(is_app, bool): + raise TypeError('is_app parameter to relation_get must be a boolean') + + if is_app: + version = JujuVersion.from_environ() + if not version.has_app_data(): + raise RuntimeError( + 'getting application data is not supported on Juju version {}'.format(version)) + + args = ['relation-get', '-r', str(relation_id), '-', member_name] + if is_app: + args.append('--app') + + try: + return self._run(*args, return_output=True, use_json=True) + except ModelError as e: + if 'relation not found' in str(e): + raise RelationNotFoundError() from e + raise + + def relation_set(self, relation_id, key, value, is_app): + if not isinstance(is_app, bool): + raise TypeError('is_app parameter to relation_set must be a boolean') + + if is_app: + version = JujuVersion.from_environ() + if not version.has_app_data(): + raise RuntimeError( + 'setting application data is not supported on Juju version {}'.format(version)) + + args = ['relation-set', '-r', str(relation_id), '{}={}'.format(key, value)] + if is_app: + args.append('--app') + + try: + return self._run(*args) + except ModelError as e: + if 'relation not found' in str(e): + raise RelationNotFoundError() from e + raise + + def config_get(self): + return self._run('config-get', return_output=True, use_json=True) + + def is_leader(self): + """Obtain the current leadership status for the unit the charm code is executing on. + + The value is cached for the duration of a lease which is 30s in Juju. + """ + now = time.monotonic() + if self._leader_check_time is None: + check = True + else: + time_since_check = datetime.timedelta(seconds=now - self._leader_check_time) + check = (time_since_check > self.LEASE_RENEWAL_PERIOD or self._is_leader is None) + if check: + # Current time MUST be saved before running is-leader to ensure the cache + # is only used inside the window that is-leader itself asserts. 
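+            # (Juju guarantees that a unit which just saw is-leader return true
+            # will remain leader for at least the lease period, which is what
+            # makes this client-side cache safe.)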
+            self._leader_check_time = now
+            self._is_leader = self._run('is-leader', return_output=True, use_json=True)
+
+        return self._is_leader
+
+    def resource_get(self, resource_name):
+        return self._run('resource-get', resource_name, return_output=True).strip()
+
+    def pod_spec_set(self, spec, k8s_resources):
+        tmpdir = Path(tempfile.mkdtemp('-pod-spec-set'))
+        try:
+            spec_path = tmpdir / 'spec.json'
+            spec_path.write_text(json.dumps(spec))
+            args = ['--file', str(spec_path)]
+            if k8s_resources:
+                k8s_res_path = tmpdir / 'k8s-resources.json'
+                k8s_res_path.write_text(json.dumps(k8s_resources))
+                args.extend(['--k8s-resources', str(k8s_res_path)])
+            self._run('pod-spec-set', *args)
+        finally:
+            shutil.rmtree(str(tmpdir))
+
+    def status_get(self, *, is_app=False):
+        """Get a status of a unit or an application.
+
+        Args:
+            is_app: A boolean indicating whether the status should be retrieved for a unit
+                or an application.
+        """
+        content = self._run(
+            'status-get', '--include-data', '--application={}'.format(is_app),
+            use_json=True,
+            return_output=True)
+        # Unit status looks like (in YAML):
+        # message: 'load: 0.28 0.26 0.26'
+        # status: active
+        # status-data: {}
+        # Application status looks like (in YAML):
+        # application-status:
+        #   message: 'load: 0.28 0.26 0.26'
+        #   status: active
+        #   status-data: {}
+        #   units:
+        #     uo/0:
+        #       message: 'load: 0.28 0.26 0.26'
+        #       status: active
+        #       status-data: {}
+
+        if is_app:
+            return {'status': content['application-status']['status'],
+                    'message': content['application-status']['message']}
+        else:
+            return content
+
+    def status_set(self, status, message='', *, is_app=False):
+        """Set a status of a unit or an application.
+
+        Args:
+            is_app: A boolean indicating whether the status should be set for a unit or an
+                application.
+        """
+        if not isinstance(is_app, bool):
+            raise TypeError('is_app parameter must be boolean')
+        return self._run('status-set', '--application={}'.format(is_app), status, message)
+
+    def storage_list(self, name):
+        return [int(s.split('/')[1]) for s in self._run('storage-list', name,
+                                                        return_output=True, use_json=True)]
+
+    def storage_get(self, storage_name_id, attribute):
+        return self._run('storage-get', '-s', storage_name_id, attribute,
+                         return_output=True, use_json=True)
+
+    def storage_add(self, name, count=1):
+        if not isinstance(count, int) or isinstance(count, bool):
+            raise TypeError('storage count must be integer, got: {} ({})'.format(count,
+                                                                                 type(count)))
+        self._run('storage-add', '{}={}'.format(name, count))
+
+    def action_get(self):
+        return self._run('action-get', return_output=True, use_json=True)
+
+    def action_set(self, results):
+        self._run('action-set', *["{}={}".format(k, v) for k, v in results.items()])
+
+    def action_log(self, message):
+        self._run('action-log', message)
+
+    def action_fail(self, message=''):
+        self._run('action-fail', message)
+
+    def application_version_set(self, version):
+        self._run('application-version-set', '--', version)
+
+    def juju_log(self, level, message):
+        self._run('juju-log', '--log-level', level, message)
+
+    def network_get(self, binding_name, relation_id=None):
+        """Return network info provided by network-get for a given binding.
+
+        Args:
+            binding_name: A name of a binding (relation name or extra-binding name).
+            relation_id: An optional relation id to get network info for.
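+
+        The returned mapping typically has the following shape (the addresses
+        shown here are illustrative only)::
+
+            {'bind-addresses': [{'interface-name': 'eth0',
+                                 'addresses': [{'address': '10.0.0.2',
+                                                'cidr': '10.0.0.0/24'}]}],
+             'egress-subnets': ['10.0.0.0/24'],
+             'ingress-addresses': ['10.0.0.2']}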
+ """ + cmd = ['network-get', binding_name] + if relation_id is not None: + cmd.extend(['-r', str(relation_id)]) + try: + return self._run(*cmd, return_output=True, use_json=True) + except ModelError as e: + if 'relation not found' in str(e): + raise RelationNotFoundError() from e + raise + + def add_metrics(self, metrics, labels=None): + cmd = ['add-metric'] + + if labels: + label_args = [] + for k, v in labels.items(): + _ModelBackendValidator.validate_metric_label(k) + _ModelBackendValidator.validate_label_value(k, v) + label_args.append('{}={}'.format(k, v)) + cmd.extend(['--labels', ','.join(label_args)]) + + metric_args = [] + for k, v in metrics.items(): + _ModelBackendValidator.validate_metric_key(k) + metric_value = _ModelBackendValidator.format_metric_value(v) + metric_args.append('{}={}'.format(k, metric_value)) + cmd.extend(metric_args) + self._run(*cmd) + + +class _ModelBackendValidator: + """Provides facilities for validating inputs and formatting them for model backends.""" + + METRIC_KEY_REGEX = re.compile(r'^[a-zA-Z](?:[a-zA-Z0-9-_]*[a-zA-Z0-9])?$') + + @classmethod + def validate_metric_key(cls, key): + if cls.METRIC_KEY_REGEX.match(key) is None: + raise ModelError( + 'invalid metric key {!r}: must match {}'.format( + key, cls.METRIC_KEY_REGEX.pattern)) + + @classmethod + def validate_metric_label(cls, label_name): + if cls.METRIC_KEY_REGEX.match(label_name) is None: + raise ModelError( + 'invalid metric label name {!r}: must match {}'.format( + label_name, cls.METRIC_KEY_REGEX.pattern)) + + @classmethod + def format_metric_value(cls, value): + try: + decimal_value = decimal.Decimal.from_float(value) + except TypeError as e: + e2 = ModelError('invalid metric value {!r} provided:' + ' must be a positive finite float'.format(value)) + raise e2 from e + if decimal_value.is_nan() or decimal_value.is_infinite() or decimal_value < 0: + raise ModelError('invalid metric value {!r} provided:' + ' must be a positive finite float'.format(value)) + return str(decimal_value) + + @classmethod + def validate_label_value(cls, label, value): + # Label values cannot be empty, contain commas or equal signs as those are + # used by add-metric as separators. + if not value: + raise ModelError( + 'metric label {} has an empty value, which is not allowed'.format(label)) + v = str(value) + if re.search('[,=]', v) is not None: + raise ModelError( + 'metric label values must not contain "," or "=": {}={!r}'.format(label, value)) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/test/charms/test_main/lib/ops/storage.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/test/charms/test_main/lib/ops/storage.py new file mode 100755 index 0000000000000000000000000000000000000000..d4310ce1cfbb707c6278b70f84a9751da3ce07af --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/test/charms/test_main/lib/ops/storage.py @@ -0,0 +1,318 @@ +# Copyright 2019-2020 Canonical Ltd. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
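+
+# This module provides two interchangeable storage backends for the framework:
+# SQLiteStorage, which persists snapshots and notices in a local SQLite file
+# (or in memory), and JujuStorage, which delegates to Juju's state-get/state-set
+# hook tools.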
+
+from datetime import timedelta
+import pickle
+import shutil
+import subprocess
+import sqlite3
+import typing
+
+import yaml
+
+
+class SQLiteStorage:
+
+    DB_LOCK_TIMEOUT = timedelta(hours=1)
+
+    def __init__(self, filename):
+        # The isolation_level argument is set to None such that the implicit
+        # transaction management behavior of the sqlite3 module is disabled.
+        self._db = sqlite3.connect(str(filename),
+                                   isolation_level=None,
+                                   timeout=self.DB_LOCK_TIMEOUT.total_seconds())
+        self._setup()
+
+    def _setup(self):
+        # Make sure that the database is locked until the connection is closed,
+        # not until the transaction ends.
+        self._db.execute("PRAGMA locking_mode=EXCLUSIVE")
+        c = self._db.execute("BEGIN")
+        c.execute("SELECT count(name) FROM sqlite_master WHERE type='table' AND name='snapshot'")
+        if c.fetchone()[0] == 0:
+            # Keep in mind what might happen if the process dies somewhere below.
+            # The system must not be rendered permanently broken by that.
+            self._db.execute("CREATE TABLE snapshot (handle TEXT PRIMARY KEY, data BLOB)")
+            self._db.execute('''
+                CREATE TABLE notice (
+                  sequence INTEGER PRIMARY KEY AUTOINCREMENT,
+                  event_path TEXT,
+                  observer_path TEXT,
+                  method_name TEXT)
+                ''')
+            self._db.commit()
+
+    def close(self):
+        self._db.close()
+
+    def commit(self):
+        self._db.commit()
+
+    # There's commit but no rollback. For abort to be supported, we'll need logic that
+    # can rollback decisions made by third-party code in terms of the internal state
+    # of objects that have been snapshotted, and hooks to let them know about it and
+    # take the needed actions to undo their logic until the last snapshot.
+    # This is doable but will significantly increase the chance of mistakes.
+
+    def save_snapshot(self, handle_path: str, snapshot_data: typing.Any) -> None:
+        """Part of the Storage API, persist snapshot data under the given handle.
+
+        Args:
+            handle_path: The string identifying the snapshot.
+            snapshot_data: The data to be persisted (as returned by Object.snapshot()). This
+                might be a dict/tuple/int, but must only contain 'simple' python types.
+        """
+        # Use pickle for serialization, so the value remains portable.
+        raw_data = pickle.dumps(snapshot_data)
+        self._db.execute("REPLACE INTO snapshot VALUES (?, ?)", (handle_path, raw_data))
+
+    def load_snapshot(self, handle_path: str) -> typing.Any:
+        """Part of the Storage API, retrieve a snapshot that was previously saved.
+
+        Args:
+            handle_path: The string identifying the snapshot.
+        Raises:
+            NoSnapshotError: if there is no snapshot for the given handle_path.
+        """
+        c = self._db.cursor()
+        c.execute("SELECT data FROM snapshot WHERE handle=?", (handle_path,))
+        row = c.fetchone()
+        if row:
+            return pickle.loads(row[0])
+        raise NoSnapshotError(handle_path)
+
+    def drop_snapshot(self, handle_path: str):
+        """Part of the Storage API, remove a snapshot that was previously saved.
+
+        Dropping a snapshot that doesn't exist is treated as a no-op.
+        """
+        self._db.execute("DELETE FROM snapshot WHERE handle=?", (handle_path,))
+
+    def list_snapshots(self) -> typing.Generator[str, None, None]:
+        """Return the name of all snapshots that are currently saved."""
+        c = self._db.cursor()
+        c.execute("SELECT handle FROM snapshot")
+        while True:
+            rows = c.fetchmany()
+            if not rows:
+                break
+            for row in rows:
+                yield row[0]
+
+    def save_notice(self, event_path: str, observer_path: str, method_name: str) -> None:
+        """Part of the Storage API, record a notice (event and observer)."""
+        self._db.execute('INSERT INTO notice VALUES (NULL, ?, ?, ?)',
+                         (event_path, observer_path, method_name))
+
+    def drop_notice(self, event_path: str, observer_path: str, method_name: str) -> None:
+        """Part of the Storage API, remove a notice that was previously recorded."""
+        self._db.execute('''
+            DELETE FROM notice
+             WHERE event_path=?
+               AND observer_path=?
+               AND method_name=?
+            ''', (event_path, observer_path, method_name))
+
+    def notices(self, event_path: typing.Optional[str]) ->\
+            typing.Generator[typing.Tuple[str, str, str], None, None]:
+        """Part of the Storage API, return all notices that begin with event_path.
+
+        Args:
+            event_path: If supplied, will only yield events that match event_path. If not
+                supplied (or None/'') will return all events.
+        Returns:
+            Iterable of (event_path, observer_path, method_name) tuples
+        """
+        if event_path:
+            c = self._db.execute('''
+                SELECT event_path, observer_path, method_name
+                FROM notice
+                WHERE event_path=?
+                ORDER BY sequence
+                ''', (event_path,))
+        else:
+            c = self._db.execute('''
+                SELECT event_path, observer_path, method_name
+                FROM notice
+                ORDER BY sequence
+                ''')
+        while True:
+            rows = c.fetchmany()
+            if not rows:
+                break
+            for row in rows:
+                yield tuple(row)
+
+
+class JujuStorage:
+    """Stores the content tracked by the Framework in Juju.
+
+    This uses :class:`_JujuStorageBackend` to interact with state-get/state-set
+    as the way to store state for the framework and for components.
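+
+    A sketch of the expected round-trip (the handle path shown is illustrative
+    only)::
+
+        storage = JujuStorage()
+        storage.save_snapshot('MyCharm/StoredStateData[_stored]', {'seen': 1})
+        storage.load_snapshot('MyCharm/StoredStateData[_stored]')  # => {'seen': 1}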
+ """ + + NOTICE_KEY = "#notices#" + + def __init__(self, backend: '_JujuStorageBackend' = None): + self._backend = backend + if backend is None: + self._backend = _JujuStorageBackend() + + def close(self): + return + + def commit(self): + return + + def save_snapshot(self, handle_path: str, snapshot_data: typing.Any) -> None: + self._backend.set(handle_path, snapshot_data) + + def load_snapshot(self, handle_path): + try: + content = self._backend.get(handle_path) + except KeyError: + raise NoSnapshotError(handle_path) + return content + + def drop_snapshot(self, handle_path): + self._backend.delete(handle_path) + + def save_notice(self, event_path: str, observer_path: str, method_name: str): + notice_list = self._load_notice_list() + notice_list.append([event_path, observer_path, method_name]) + self._save_notice_list(notice_list) + + def drop_notice(self, event_path: str, observer_path: str, method_name: str): + notice_list = self._load_notice_list() + notice_list.remove([event_path, observer_path, method_name]) + self._save_notice_list(notice_list) + + def notices(self, event_path: str): + notice_list = self._load_notice_list() + for row in notice_list: + if row[0] != event_path: + continue + yield tuple(row) + + def _load_notice_list(self) -> typing.List[typing.Tuple[str]]: + try: + notice_list = self._backend.get(self.NOTICE_KEY) + except KeyError: + return [] + if notice_list is None: + return [] + return notice_list + + def _save_notice_list(self, notices: typing.List[typing.Tuple[str]]) -> None: + self._backend.set(self.NOTICE_KEY, notices) + + +class _SimpleLoader(getattr(yaml, 'CSafeLoader', yaml.SafeLoader)): + """Handle a couple basic python types. + + yaml.SafeLoader can handle all the basic int/float/dict/set/etc that we want. The only one + that it *doesn't* handle is tuples. We don't want to support arbitrary types, so we just + subclass SafeLoader and add tuples back in. + """ + # Taken from the example at: + # https://stackoverflow.com/questions/9169025/how-can-i-add-a-python-tuple-to-a-yaml-file-using-pyyaml + + construct_python_tuple = yaml.Loader.construct_python_tuple + + +_SimpleLoader.add_constructor( + u'tag:yaml.org,2002:python/tuple', + _SimpleLoader.construct_python_tuple) + + +class _SimpleDumper(getattr(yaml, 'CSafeDumper', yaml.SafeDumper)): + """Add types supported by 'marshal' + + YAML can support arbitrary types, but that is generally considered unsafe (like pickle). So + we want to only support dumping out types that are safe to load. + """ + + +_SimpleDumper.represent_tuple = yaml.Dumper.represent_tuple +_SimpleDumper.add_representer(tuple, _SimpleDumper.represent_tuple) + + +class _JujuStorageBackend: + """Implements the interface from the Operator framework to Juju's state-get/set/etc.""" + + @staticmethod + def is_available() -> bool: + """Check if Juju state storage is available. + + This checks if there is a 'state-get' executable in PATH. + """ + p = shutil.which('state-get') + return p is not None + + def set(self, key: str, value: typing.Any) -> None: + """Set a key to a given value. + + Args: + key: The string key that will be used to find the value later + value: Arbitrary content that will be returned by get(). + Raises: + CalledProcessError: if 'state-set' returns an error code. + """ + # default_flow_style=None means that it can use Block for + # complex types (types that have nested types) but use flow + # for simple types (like an array). Not all versions of PyYAML + # have the same default style. 
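+        # Note that the value is effectively YAML-encoded twice: first the
+        # value itself, then the {key: encoded} mapping with literal block
+        # style ('|'), so that arbitrary nesting survives the round-trip
+        # through state-set and state-get.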
+        encoded_value = yaml.dump(value, Dumper=_SimpleDumper, default_flow_style=None)
+        content = yaml.dump(
+            {key: encoded_value}, encoding='utf-8', default_style='|',
+            default_flow_style=False,
+            Dumper=_SimpleDumper)
+        subprocess.run(["state-set", "--file", "-"], input=content, check=True)
+
+    def get(self, key: str) -> typing.Any:
+        """Get the value associated with a given key.
+
+        Args:
+            key: The string key that will be used to find the value
+        Raises:
+            CalledProcessError: if 'state-get' returns an error code.
+        """
+        # We don't capture stderr here so it can end up in debug logs.
+        p = subprocess.run(
+            ["state-get", key],
+            stdout=subprocess.PIPE,
+            check=True,
+        )
+        if p.stdout == b'' or p.stdout == b'\n':
+            raise KeyError(key)
+        return yaml.load(p.stdout, Loader=_SimpleLoader)
+
+    def delete(self, key: str) -> None:
+        """Remove a key from being tracked.
+
+        Args:
+            key: The key to stop storing
+        Raises:
+            CalledProcessError: if 'state-delete' returns an error code.
+        """
+        subprocess.run(["state-delete", key], check=True)
+
+
+class NoSnapshotError(Exception):
+
+    def __init__(self, handle_path):
+        self.handle_path = handle_path
+
+    def __str__(self):
+        return 'no snapshot data found for {} object'.format(self.handle_path)
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/test/charms/test_main/lib/ops/testing.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/test/charms/test_main/lib/ops/testing.py
new file mode 100755
index 0000000000000000000000000000000000000000..b4b3fe071216238007c9f3847ca9556be626bf6b
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/test/charms/test_main/lib/ops/testing.py
@@ -0,0 +1,586 @@
+# Copyright 2020 Canonical Ltd.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import inspect
+import pathlib
+from textwrap import dedent
+import tempfile
+import typing
+import yaml
+import weakref
+
+from ops import (
+    charm,
+    framework,
+    model,
+    storage,
+)
+
+
+# OptionalYAML is something like metadata.yaml or actions.yaml. You can
+# pass in a file-like object or the string directly.
+OptionalYAML = typing.Optional[typing.Union[str, typing.TextIO]]
+
+
+# noinspection PyProtectedMember
+class Harness:
+    """This class represents a way to build up the model that will drive a test suite.
+
+    The model that is created is from the viewpoint of the charm that you are testing.
+
+    Example::
+
+        harness = Harness(MyCharm)
+        # Do initial setup here
+        relation_id = harness.add_relation('db', 'postgresql')
+        # Now instantiate the charm to see events as the model changes
+        harness.begin()
+        harness.add_relation_unit(relation_id, 'postgresql/0')
+        harness.update_relation_data(relation_id, 'postgresql/0', {'key': 'val'})
+        # Check that charm has properly handled the relation_joined event for postgresql/0
+        self.assertEqual(harness.charm. ...)
+
+    Args:
+        charm_cls: The Charm class that you'll be testing.
+        meta: A string or file-like object containing the contents of
+            metadata.yaml. If not supplied, we will look for a 'metadata.yaml' file in the
+            parent directory of the Charm, and if not found fall back to a trivial
+            'name: test-charm' metadata.
+        actions: A string or file-like object containing the contents of
+            actions.yaml. If not supplied, we will look for an 'actions.yaml' file in the
+            parent directory of the Charm.
+    """
+
+    def __init__(
+            self,
+            charm_cls: typing.Type[charm.CharmBase],
+            *,
+            meta: OptionalYAML = None,
+            actions: OptionalYAML = None):
+        # TODO: jam 2020-03-05 We probably want to take config as a parameter as well, since
+        #       it would define the default values of config that the charm would see.
+        self._charm_cls = charm_cls
+        self._charm = None
+        self._charm_dir = 'no-disk-path'  # this may be updated by _create_meta
+        self._lazy_resource_dir = None
+        self._meta = self._create_meta(meta, actions)
+        self._unit_name = self._meta.name + '/0'
+        self._framework = None
+        self._hooks_enabled = True
+        self._relation_id_counter = 0
+        self._backend = _TestingModelBackend(self._unit_name, self._meta)
+        self._model = model.Model(self._meta, self._backend)
+        self._storage = storage.SQLiteStorage(':memory:')
+        self._framework = framework.Framework(
+            self._storage, self._charm_dir, self._meta, self._model)
+
+    @property
+    def charm(self) -> charm.CharmBase:
+        """Return the instance of the charm class that was passed to __init__.
+
+        Note that the Charm is not instantiated until you have called
+        :meth:`.begin()`.
+        """
+        return self._charm
+
+    @property
+    def model(self) -> model.Model:
+        """Return the :class:`~ops.model.Model` that is being driven by this Harness."""
+        return self._model
+
+    @property
+    def framework(self) -> framework.Framework:
+        """Return the Framework that is being driven by this Harness."""
+        return self._framework
+
+    @property
+    def _resource_dir(self) -> pathlib.Path:
+        if self._lazy_resource_dir is not None:
+            return self._lazy_resource_dir
+
+        self.__resource_dir = tempfile.TemporaryDirectory()
+        self._lazy_resource_dir = pathlib.Path(self.__resource_dir.name)
+        self._finalizer = weakref.finalize(self, self.__resource_dir.cleanup)
+        return self._lazy_resource_dir
+
+    def begin(self) -> None:
+        """Instantiate the Charm and start handling events.
+
+        Before calling begin(), there is no Charm instance, so changes to the Model won't emit
+        events. You must call begin before :attr:`.charm` is valid.
+        """
+        if self._charm is not None:
+            raise RuntimeError('cannot call the begin method on the harness more than once')
+
+        # The Framework adds attributes to class objects for events, etc. As such, we can't re-use
+        # the original class against multiple Frameworks. So create a locally defined class
+        # and register it.
+        # TODO: jam 2020-03-16 We are looking to change this to instance attributes instead of
+        #       class attributes, which should clean up this ugliness. The API can stay the same.
+        class TestEvents(self._charm_cls.on.__class__):
+            pass
+
+        TestEvents.__name__ = self._charm_cls.on.__class__.__name__
+
+        class TestCharm(self._charm_cls):
+            on = TestEvents()
+
+        # Note: jam 2020-03-01 This is so that errors in testing say MyCharm has no attribute foo,
+        # rather than TestCharm has no attribute foo.
+        TestCharm.__name__ = self._charm_cls.__name__
+        self._charm = TestCharm(self._framework)
+
+    def _create_meta(self, charm_metadata, action_metadata):
+        """Create a CharmMeta object.
+
+        Handle the cases where a user doesn't supply explicit metadata snippets.
+        """
+        filename = inspect.getfile(self._charm_cls)
+        charm_dir = pathlib.Path(filename).parents[1]
+
+        if charm_metadata is None:
+            metadata_path = charm_dir / 'metadata.yaml'
+            if metadata_path.is_file():
+                charm_metadata = metadata_path.read_text()
+                self._charm_dir = charm_dir
+            else:
+                # The simplest of metadata that the framework can support
+                charm_metadata = 'name: test-charm'
+        elif isinstance(charm_metadata, str):
+            charm_metadata = dedent(charm_metadata)
+
+        if action_metadata is None:
+            actions_path = charm_dir / 'actions.yaml'
+            if actions_path.is_file():
+                action_metadata = actions_path.read_text()
+                self._charm_dir = charm_dir
+        elif isinstance(action_metadata, str):
+            action_metadata = dedent(action_metadata)
+
+        return charm.CharmMeta.from_yaml(charm_metadata, action_metadata)
+
+    def add_oci_resource(self, resource_name: str,
+                         contents: typing.Mapping[str, str] = None) -> None:
+        """Add OCI resources to the backend.
+
+        This will register an OCI resource and create a temporary file for processing metadata
+        about the resource. A default set of values will be used for all the file contents
+        unless a specific contents dict is provided.
+
+        Args:
+            resource_name: Name of the resource to add custom contents to.
+            contents: Optional custom dict to write for the named resource.
+        """
+        if not contents:
+            contents = {'registrypath': 'registrypath',
+                        'username': 'username',
+                        'password': 'password',
+                        }
+        if resource_name not in self._meta.resources.keys():
+            raise RuntimeError('Resource {} is not a defined resource'.format(resource_name))
+        if self._meta.resources[resource_name].type != "oci-image":
+            raise RuntimeError('Resource {} is not an OCI Image'.format(resource_name))
+        resource_dir = self._resource_dir / resource_name
+        resource_dir.mkdir(exist_ok=True)
+        resource_file = resource_dir / "contents.yaml"
+        with resource_file.open('wt', encoding='utf8') as resource_yaml:
+            yaml.dump(contents, resource_yaml)
+        self._backend._resources_map[resource_name] = resource_file
+
+    def populate_oci_resources(self) -> None:
+        """Populate all OCI resources."""
+        for name, data in self._meta.resources.items():
+            if data.type == "oci-image":
+                self.add_oci_resource(name)
+
+    def disable_hooks(self) -> None:
+        """Stop emitting hook events when the model changes.
+
+        This can be used by developers to stop changes to the model from emitting events that
+        the charm will react to. Call :meth:`.enable_hooks`
+        to re-enable them.
+        """
+        self._hooks_enabled = False
+
+    def enable_hooks(self) -> None:
+        """Re-enable hook events from charm.on when the model is changed.
+
+        By default hook events are enabled once you call :meth:`.begin`,
+        but if you have used :meth:`.disable_hooks`, this can be used to
+        enable them again.
+        """
+        self._hooks_enabled = True
+
+    def _next_relation_id(self):
+        rel_id = self._relation_id_counter
+        self._relation_id_counter += 1
+        return rel_id
+
+    def add_relation(self, relation_name: str, remote_app: str) -> int:
+        """Declare that there is a new relation between this app and `remote_app`.
+
+        Args:
+            relation_name: The relation on Charm that is being related to
+            remote_app: The name of the application that is being related to
+
+        Return:
+            The relation_id created by this add_relation.
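+
+        Example::
+
+            rel_id = harness.add_relation('db', 'postgresql')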
+        """
+        rel_id = self._next_relation_id()
+        self._backend._relation_ids_map.setdefault(relation_name, []).append(rel_id)
+        self._backend._relation_names[rel_id] = relation_name
+        self._backend._relation_list_map[rel_id] = []
+        self._backend._relation_data[rel_id] = {
+            remote_app: {},
+            self._backend.unit_name: {},
+            self._backend.app_name: {},
+        }
+        # Reload the relation_ids list
+        if self._model is not None:
+            self._model.relations._invalidate(relation_name)
+        if self._charm is None or not self._hooks_enabled:
+            return rel_id
+        relation = self._model.get_relation(relation_name, rel_id)
+        app = self._model.get_app(remote_app)
+        self._charm.on[relation_name].relation_created.emit(
+            relation, app)
+        return rel_id
+
+    def add_relation_unit(self, relation_id: int, remote_unit_name: str) -> None:
+        """Add a new unit to a relation.
+
+        Example::
+
+            rel_id = harness.add_relation('db', 'postgresql')
+            harness.add_relation_unit(rel_id, 'postgresql/0')
+
+        This will trigger a `relation_joined` event; use :meth:`.update_relation_data`
+        to follow it up with a `relation_changed` event.
+
+        Args:
+            relation_id: The integer relation identifier (as returned by add_relation).
+            remote_unit_name: A string representing the remote unit that is being added.
+        Return:
+            None
+        """
+        self._backend._relation_list_map[relation_id].append(remote_unit_name)
+        self._backend._relation_data[relation_id][remote_unit_name] = {}
+        relation_name = self._backend._relation_names[relation_id]
+        # Make sure that the Model reloads the relation_list for this relation_id, as well as
+        # reloading the relation data for this unit.
+        if self._model is not None:
+            remote_unit = self._model.get_unit(remote_unit_name)
+            relation = self._model.get_relation(relation_name, relation_id)
+            unit_cache = relation.data.get(remote_unit, None)
+            if unit_cache is not None:
+                unit_cache._invalidate()
+            self._model.relations._invalidate(relation_name)
+        if self._charm is None or not self._hooks_enabled:
+            return
+        self._charm.on[relation_name].relation_joined.emit(
+            relation, remote_unit.app, remote_unit)
+
+    def get_relation_data(self, relation_id: int, app_or_unit: str) -> typing.Mapping:
+        """Get the relation data bucket for a single app or unit in a given relation.
+
+        This ignores all of the safety checks of who can and can't see data in relations (eg,
+        non-leaders can't read their own application's relation data because there are no events
+        that keep that data up-to-date for the unit).
+
+        Args:
+            relation_id: The relation whose content we want to look at.
+            app_or_unit: The name of the application or unit whose data we want to read
+        Return:
+            A dict containing the relation data for `app_or_unit`, or None.
+        Raises:
+            KeyError: if relation_id doesn't exist
+        """
+        return self._backend._relation_data[relation_id].get(app_or_unit, None)
+
+    def get_workload_version(self) -> str:
+        """Read the workload version that was set by the unit."""
+        return self._backend._workload_version
+
+    def set_model_name(self, name: str) -> None:
+        """Set the name of the Model that this is representing.
+
+        This cannot be called once begin() has been called; it lets you set the value that
+        will be returned by Model.name.
+        """
+        if self._charm is not None:
+            raise RuntimeError('cannot set the Model name after begin()')
+        self._backend.model_name = name
+
+    def update_relation_data(
+            self,
+            relation_id: int,
+            app_or_unit: str,
+            key_values: typing.Mapping,
+    ) -> None:
+        """Update the relation data for a given unit or application in a given relation.
+ + This also triggers the `relation_changed` event for this relation_id. + + Args: + relation_id: The integer relation_id representing this relation. + app_or_unit: The unit or application name that is being updated. + This can be the local or remote application. + key_values: Each key/value will be updated in the relation data. + """ + relation_name = self._backend._relation_names[relation_id] + relation = self._model.get_relation(relation_name, relation_id) + if '/' in app_or_unit: + entity = self._model.get_unit(app_or_unit) + else: + entity = self._model.get_app(app_or_unit) + rel_data = relation.data.get(entity, None) + if rel_data is not None: + # rel_data may have cached now-stale data, so _invalidate() it. + # Note, this won't cause the data to be loaded if it wasn't already. + rel_data._invalidate() + + new_values = self._backend._relation_data[relation_id][app_or_unit].copy() + for k, v in key_values.items(): + if v == '': + new_values.pop(k, None) + else: + new_values[k] = v + self._backend._relation_data[relation_id][app_or_unit] = new_values + + if app_or_unit == self._model.unit.name: + # No events for our own unit + return + if app_or_unit == self._model.app.name: + # updating our own app only generates an event if it is a peer relation and we + # aren't the leader + is_peer = self._meta.relations[relation_name].role.is_peer() + if not is_peer: + return + if self._model.unit.is_leader(): + return + self._emit_relation_changed(relation_id, app_or_unit) + + def _emit_relation_changed(self, relation_id, app_or_unit): + if self._charm is None or not self._hooks_enabled: + return + rel_name = self._backend._relation_names[relation_id] + relation = self.model.get_relation(rel_name, relation_id) + if '/' in app_or_unit: + app_name = app_or_unit.split('/')[0] + unit_name = app_or_unit + app = self.model.get_app(app_name) + unit = self.model.get_unit(unit_name) + args = (relation, app, unit) + else: + app_name = app_or_unit + app = self.model.get_app(app_name) + args = (relation, app) + self._charm.on[rel_name].relation_changed.emit(*args) + + def update_config( + self, + key_values: typing.Mapping[str, str] = None, + unset: typing.Iterable[str] = (), + ) -> None: + """Update the config as seen by the charm. + + This will trigger a `config_changed` event. + + Args: + key_values: A Mapping of key:value pairs to update in config. + unset: An iterable of keys to remove from Config. (Note that this does + not currently reset the config values to the default defined in config.yaml.) + """ + config = self._backend._config + if key_values is not None: + for key, value in key_values.items(): + config[key] = value + for key in unset: + config.pop(key, None) + # NOTE: jam 2020-03-01 Note that this sort of works "by accident". Config + # is a LazyMapping, but its _load returns a dict and this method mutates + # the dict that Config is caching. Arguably we should be doing some sort + # of charm.framework.model.config._invalidate() + if self._charm is None or not self._hooks_enabled: + return + self._charm.on.config_changed.emit() + + def set_leader(self, is_leader: bool = True) -> None: + """Set whether this unit is the leader or not. + + If this charm becomes a leader then `leader_elected` will be triggered. + + Args: + is_leader: True/False as to whether this unit is the leader. 
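+
+        Example::
+
+            harness.set_leader(True)  # emits leader_elected if we were not leader before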
+ """ + was_leader = self._backend._is_leader + self._backend._is_leader = is_leader + # Note: jam 2020-03-01 currently is_leader is cached at the ModelBackend level, not in + # the Model objects, so this automatically gets noticed. + if is_leader and not was_leader and self._charm is not None and self._hooks_enabled: + self._charm.on.leader_elected.emit() + + def _get_backend_calls(self, reset: bool = True) -> list: + """Return the calls that we have made to the TestingModelBackend. + + This is useful mostly for testing the framework itself, so that we can assert that we + do/don't trigger extra calls. + + Args: + reset: If True, reset the calls list back to empty, if false, the call list is + preserved. + Return: + ``[(call1, args...), (call2, args...)]`` + """ + calls = self._backend._calls.copy() + if reset: + self._backend._calls.clear() + return calls + + +def _record_calls(cls): + """Replace methods on cls with methods that record that they have been called. + + Iterate all attributes of cls, and for public methods, replace them with a wrapped method + that records the method called along with the arguments and keyword arguments. + """ + for meth_name, orig_method in cls.__dict__.items(): + if meth_name.startswith('_'): + continue + + def decorator(orig_method): + def wrapped(self, *args, **kwargs): + full_args = (orig_method.__name__,) + args + if kwargs: + full_args = full_args + (kwargs,) + self._calls.append(full_args) + return orig_method(self, *args, **kwargs) + return wrapped + + setattr(cls, meth_name, decorator(orig_method)) + return cls + + +@_record_calls +class _TestingModelBackend: + """This conforms to the interface for ModelBackend but provides canned data. + + DO NOT use this class directly, it is used by `Harness`_ to drive the model. + `Harness`_ is responsible for maintaining the internal consistency of the values here, + as the only public methods of this type are for implementing ModelBackend. + """ + + def __init__(self, unit_name, meta): + self.unit_name = unit_name + self.app_name = self.unit_name.split('/')[0] + self.model_name = None + self._calls = [] + self._meta = meta + self._is_leader = None + self._relation_ids_map = {} # relation name to [relation_ids,...] + self._relation_names = {} # reverse map from relation_id to relation_name + self._relation_list_map = {} # relation_id: [unit_name,...] 
+ self._relation_data = {} # {relation_id: {name: data}} + self._config = {} + self._is_leader = False + self._resources_map = {} + self._pod_spec = None + self._app_status = {'status': 'unknown', 'message': ''} + self._unit_status = {'status': 'maintenance', 'message': ''} + self._workload_version = None + + def relation_ids(self, relation_name): + try: + return self._relation_ids_map[relation_name] + except KeyError as e: + if relation_name not in self._meta.relations: + raise model.ModelError('{} is not a known relation'.format(relation_name)) from e + return [] + + def relation_list(self, relation_id): + try: + return self._relation_list_map[relation_id] + except KeyError as e: + raise model.RelationNotFoundError from e + + def relation_get(self, relation_id, member_name, is_app): + if is_app and '/' in member_name: + member_name = member_name.split('/')[0] + if relation_id not in self._relation_data: + raise model.RelationNotFoundError() + return self._relation_data[relation_id][member_name].copy() + + def relation_set(self, relation_id, key, value, is_app): + relation = self._relation_data[relation_id] + if is_app: + bucket_key = self.app_name + else: + bucket_key = self.unit_name + if bucket_key not in relation: + relation[bucket_key] = {} + bucket = relation[bucket_key] + if value == '': + bucket.pop(key, None) + else: + bucket[key] = value + + def config_get(self): + return self._config + + def is_leader(self): + return self._is_leader + + def application_version_set(self, version): + self._workload_version = version + + def resource_get(self, resource_name): + return self._resources_map[resource_name] + + def pod_spec_set(self, spec, k8s_resources): + self._pod_spec = (spec, k8s_resources) + + def status_get(self, *, is_app=False): + if is_app: + return self._app_status + else: + return self._unit_status + + def status_set(self, status, message='', *, is_app=False): + if is_app: + self._app_status = {'status': status, 'message': message} + else: + self._unit_status = {'status': status, 'message': message} + + def storage_list(self, name): + raise NotImplementedError(self.storage_list) + + def storage_get(self, storage_name_id, attribute): + raise NotImplementedError(self.storage_get) + + def storage_add(self, name, count=1): + raise NotImplementedError(self.storage_add) + + def action_get(self): + raise NotImplementedError(self.action_get) + + def action_set(self, results): + raise NotImplementedError(self.action_set) + + def action_log(self, message): + raise NotImplementedError(self.action_log) + + def action_fail(self, message=''): + raise NotImplementedError(self.action_fail) + + def network_get(self, endpoint_name, relation_id=None): + raise NotImplementedError(self.network_get) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/test/charms/test_main/lib/ops/version.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/test/charms/test_main/lib/ops/version.py new file mode 100644 index 0000000000000000000000000000000000000000..15e5478555ee0fa948bfb0ad57cc79ba7cef3721 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/test/charms/test_main/lib/ops/version.py @@ -0,0 +1,50 @@ +# Copyright 2020 Canonical Ltd. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import subprocess
+from pathlib import Path
+
+__all__ = ('version',)
+
+_FALLBACK = '0.8'  # this gets bumped after release
+
+
+def _get_version():
+    version = _FALLBACK + ".dev0+unknown"
+
+    p = Path(__file__).parent
+    if (p.parent / '.git').exists():
+        try:
+            proc = subprocess.run(
+                ['git', 'describe', '--tags', '--dirty'],
+                stdout=subprocess.PIPE,
+                stderr=subprocess.DEVNULL,
+                cwd=p,
+                check=True)
+        except Exception:
+            pass
+        else:
+            version = proc.stdout.strip().decode('utf8')
+            if '-' in version:
+                # version will look like <tag>-<#commits>-g<sha>[-dirty]
+                # in terms of PEP 440, we'll make sure the tag is a 'public version identifier';
+                # everything after the first - needs to be a 'local version'
+                public, local = version.split('-', 1)
+                version = public + '+' + local.replace('-', '.')
+                # version now looks like <tag>+<#commits>.g<sha>[.dirty]
+                # which is PEP440-compliant (as long as <tag> is :-)
+    return version
+
+
+version = _get_version()
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/test/charms/test_main/metadata.yaml b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/test/charms/test_main/metadata.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..3b3aed87e96121224c63916b04009daf40fcab35
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/test/charms/test_main/metadata.yaml
@@ -0,0 +1,26 @@
+name: main
+summary: A charm used for testing the basic operation of the entrypoint code.
+maintainer: Dmitrii Shcherbakov
+description: A charm used for testing the basic operation of the entrypoint code.
+tags:
+  - misc
+series:
+  - bionic
+  - cosmic
+  - disco
+min-juju-version: 2.7.1
+provides:
+  db:
+    interface: db
+requires:
+  mon:
+    interface: monitoring
+peers:
+  ha:
+    interface: cluster
+subordinate: false
+storage:
+  disks:
+    type: block
+    multiple:
+      range: 0-
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/test/charms/test_main/src/charm.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/test/charms/test_main/src/charm.py
new file mode 100755
index 0000000000000000000000000000000000000000..154b2376b6ce466a94c0d6745b8b48e48dea00dc
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/test/charms/test_main/src/charm.py
@@ -0,0 +1,223 @@
+#!/usr/bin/env python3
+# Copyright 2019 Canonical Ltd.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
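+
+# A minimal test charm: it observes a configurable set of events and records
+# each one in a pickled state file so that the test suite can assert on
+# exactly what was emitted.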
+ +import os +import base64 +import pickle +import sys +import logging + +sys.path.append('lib') + +from ops.charm import CharmBase # noqa: E402 (module-level import after non-import code) +from ops.main import main # noqa: E402 (ditto) + +logger = logging.getLogger() + + +class Charm(CharmBase): + + def __init__(self, *args): + super().__init__(*args) + + # This environment variable controls the test charm behavior. + charm_config = os.environ.get('CHARM_CONFIG') + if charm_config is not None: + self._charm_config = pickle.loads(base64.b64decode(charm_config)) + else: + self._charm_config = {} + + # TODO: refactor to use StoredState + # (this implies refactoring most of test_main.py) + self._state_file = self._charm_config.get('STATE_FILE') + try: + with open(str(self._state_file), 'rb') as f: + self._state = pickle.load(f) + except (FileNotFoundError, EOFError): + self._state = { + 'on_install': [], + 'on_start': [], + 'on_config_changed': [], + 'on_update_status': [], + 'on_leader_settings_changed': [], + 'on_db_relation_joined': [], + 'on_mon_relation_changed': [], + 'on_mon_relation_departed': [], + 'on_ha_relation_broken': [], + 'on_foo_bar_action': [], + 'on_start_action': [], + '_on_get_model_name_action': [], + 'on_collect_metrics': [], + + 'on_log_critical_action': [], + 'on_log_error_action': [], + 'on_log_warning_action': [], + 'on_log_info_action': [], + 'on_log_debug_action': [], + + # Observed event types per invocation. A list is used to preserve the + # order in which charm handlers have observed the events. + 'observed_event_types': [], + } + + self.framework.observe(self.on.install, self._on_install) + self.framework.observe(self.on.start, self._on_start) + self.framework.observe(self.on.config_changed, self._on_config_changed) + self.framework.observe(self.on.update_status, self._on_update_status) + self.framework.observe(self.on.leader_settings_changed, self._on_leader_settings_changed) + # Test relation events with endpoints from different + # sections (provides, requires, peers) as well. + self.framework.observe(self.on.db_relation_joined, self._on_db_relation_joined) + self.framework.observe(self.on.mon_relation_changed, self._on_mon_relation_changed) + self.framework.observe(self.on.mon_relation_departed, self._on_mon_relation_departed) + self.framework.observe(self.on.ha_relation_broken, self._on_ha_relation_broken) + + if self._charm_config.get('USE_ACTIONS'): + self.framework.observe(self.on.start_action, self._on_start_action) + self.framework.observe(self.on.foo_bar_action, self._on_foo_bar_action) + self.framework.observe(self.on.get_model_name_action, self._on_get_model_name_action) + self.framework.observe(self.on.get_status_action, self._on_get_status_action) + + self.framework.observe(self.on.collect_metrics, self._on_collect_metrics) + + if self._charm_config.get('USE_LOG_ACTIONS'): + self.framework.observe(self.on.log_critical_action, self._on_log_critical_action) + self.framework.observe(self.on.log_error_action, self._on_log_error_action) + self.framework.observe(self.on.log_warning_action, self._on_log_warning_action) + self.framework.observe(self.on.log_info_action, self._on_log_info_action) + self.framework.observe(self.on.log_debug_action, self._on_log_debug_action) + + if self._charm_config.get('TRY_EXCEPTHOOK'): + raise RuntimeError("failing as requested") + + def _write_state(self): + """Write state variables so that the parent process can read them. + + Each invocation will override the previous state which is intentional. 
+ """ + if self._state_file is not None: + with self._state_file.open('wb') as f: + pickle.dump(self._state, f) + + def _on_install(self, event): + self._state['on_install'].append(type(event)) + self._state['observed_event_types'].append(type(event)) + self._write_state() + + def _on_start(self, event): + self._state['on_start'].append(type(event)) + self._state['observed_event_types'].append(type(event)) + self._write_state() + + def _on_config_changed(self, event): + self._state['on_config_changed'].append(type(event)) + self._state['observed_event_types'].append(type(event)) + event.defer() + self._write_state() + + def _on_update_status(self, event): + self._state['on_update_status'].append(type(event)) + self._state['observed_event_types'].append(type(event)) + self._write_state() + + def _on_leader_settings_changed(self, event): + self._state['on_leader_settings_changed'].append(type(event)) + self._state['observed_event_types'].append(type(event)) + self._write_state() + + def _on_db_relation_joined(self, event): + assert event.app is not None, 'application name cannot be None for a relation-joined event' + self._state['on_db_relation_joined'].append(type(event)) + self._state['observed_event_types'].append(type(event)) + self._state['db_relation_joined_data'] = event.snapshot() + self._write_state() + + def _on_mon_relation_changed(self, event): + assert event.app is not None, ( + 'application name cannot be None for a relation-changed event') + if os.environ.get('JUJU_REMOTE_UNIT'): + assert event.unit is not None, ( + 'a unit name cannot be None for a relation-changed event' + ' associated with a remote unit') + self._state['on_mon_relation_changed'].append(type(event)) + self._state['observed_event_types'].append(type(event)) + self._state['mon_relation_changed_data'] = event.snapshot() + self._write_state() + + def _on_mon_relation_departed(self, event): + assert event.app is not None, ( + 'application name cannot be None for a relation-departed event') + self._state['on_mon_relation_departed'].append(type(event)) + self._state['observed_event_types'].append(type(event)) + self._state['mon_relation_departed_data'] = event.snapshot() + self._write_state() + + def _on_ha_relation_broken(self, event): + assert event.app is None, ( + 'relation-broken events cannot have a reference to a remote application') + assert event.unit is None, ( + 'relation broken events cannot have a reference to a remote unit') + self._state['on_ha_relation_broken'].append(type(event)) + self._state['observed_event_types'].append(type(event)) + self._state['ha_relation_broken_data'] = event.snapshot() + self._write_state() + + def _on_start_action(self, event): + assert event.handle.kind == 'start_action', ( + 'event action name cannot be different from the one being handled') + self._state['on_start_action'].append(type(event)) + self._state['observed_event_types'].append(type(event)) + self._write_state() + + def _on_foo_bar_action(self, event): + assert event.handle.kind == 'foo_bar_action', ( + 'event action name cannot be different from the one being handled') + self._state['on_foo_bar_action'].append(type(event)) + self._state['observed_event_types'].append(type(event)) + self._write_state() + + def _on_get_status_action(self, event): + self._state['status_name'] = self.unit.status.name + self._state['status_message'] = self.unit.status.message + self._write_state() + + def _on_collect_metrics(self, event): + self._state['on_collect_metrics'].append(type(event)) + 
self._state['observed_event_types'].append(type(event)) + event.add_metrics({'foo': 42}, {'bar': 4.2}) + self._write_state() + + def _on_log_critical_action(self, event): + logger.critical('super critical') + + def _on_log_error_action(self, event): + logger.error('grave error') + + def _on_log_warning_action(self, event): + logger.warning('wise warning') + + def _on_log_info_action(self, event): + logger.info('useful info') + + def _on_log_debug_action(self, event): + logger.debug('insightful debug') + + def _on_get_model_name_action(self, event): + self._state['_on_get_model_name_action'].append(self.model.name) + self._write_state() + + +if __name__ == '__main__': + main(Charm) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/test/test_charm.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/test/test_charm.py new file mode 100755 index 0000000000000000000000000000000000000000..f698aec3b71a837134ecd6d2d83164763e9a1957 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/test/test_charm.py @@ -0,0 +1,322 @@ +# Copyright 2019-2020 Canonical Ltd. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import os +import unittest +import tempfile +import shutil + +from pathlib import Path + +from ops.charm import ( + CharmBase, + CharmMeta, + CharmEvents, +) +from ops.framework import Framework, EventSource, EventBase +from ops.model import Model, _ModelBackend +from ops.storage import SQLiteStorage + +from .test_helpers import fake_script, fake_script_calls + + +class TestCharm(unittest.TestCase): + + def setUp(self): + def restore_env(env): + os.environ.clear() + os.environ.update(env) + self.addCleanup(restore_env, os.environ.copy()) + + os.environ['PATH'] = "{}:{}".format(Path(__file__).parent / 'bin', os.environ['PATH']) + os.environ['JUJU_UNIT_NAME'] = 'local/0' + + self.tmpdir = Path(tempfile.mkdtemp()) + self.addCleanup(shutil.rmtree, str(self.tmpdir)) + self.meta = CharmMeta() + + class CustomEvent(EventBase): + pass + + class TestCharmEvents(CharmEvents): + custom = EventSource(CustomEvent) + + # Relations events are defined dynamically and modify the class attributes. + # We use a subclass temporarily to prevent these side effects from leaking. 
+ CharmBase.on = TestCharmEvents() + + def cleanup(): + CharmBase.on = CharmEvents() + self.addCleanup(cleanup) + + def create_framework(self): + model = Model(self.meta, _ModelBackend('local/0')) + framework = Framework(SQLiteStorage(':memory:'), self.tmpdir, self.meta, model) + self.addCleanup(framework.close) + return framework + + def test_basic(self): + + class MyCharm(CharmBase): + + def __init__(self, *args): + super().__init__(*args) + + self.started = False + framework.observe(self.on.start, self._on_start) + + def _on_start(self, event): + self.started = True + + events = list(MyCharm.on.events()) + self.assertIn('install', events) + self.assertIn('custom', events) + + framework = self.create_framework() + charm = MyCharm(framework) + charm.on.start.emit() + + self.assertEqual(charm.started, True) + + with self.assertRaisesRegex(TypeError, "observer methods must now be explicitly provided"): + framework.observe(charm.on.start, charm) + + def test_helper_properties(self): + framework = self.create_framework() + + class MyCharm(CharmBase): + pass + + charm = MyCharm(framework) + self.assertEqual(charm.app, framework.model.app) + self.assertEqual(charm.unit, framework.model.unit) + self.assertEqual(charm.meta, framework.meta) + self.assertEqual(charm.charm_dir, framework.charm_dir) + + def test_relation_events(self): + + class MyCharm(CharmBase): + def __init__(self, *args): + super().__init__(*args) + self.seen = [] + for rel in ('req1', 'req-2', 'pro1', 'pro-2', 'peer1', 'peer-2'): + # Hook up relation events to generic handler. + self.framework.observe(self.on[rel].relation_joined, self.on_any_relation) + self.framework.observe(self.on[rel].relation_changed, self.on_any_relation) + self.framework.observe(self.on[rel].relation_departed, self.on_any_relation) + self.framework.observe(self.on[rel].relation_broken, self.on_any_relation) + + def on_any_relation(self, event): + assert event.relation.name == 'req1' + assert event.relation.app.name == 'remote' + self.seen.append(type(event).__name__) + + # language=YAML + self.meta = CharmMeta.from_yaml(metadata=''' +name: my-charm +requires: + req1: + interface: req1 + req-2: + interface: req2 +provides: + pro1: + interface: pro1 + pro-2: + interface: pro2 +peers: + peer1: + interface: peer1 + peer-2: + interface: peer2 +''') + + charm = MyCharm(self.create_framework()) + + rel = charm.framework.model.get_relation('req1', 1) + unit = charm.framework.model.get_unit('remote/0') + charm.on['req1'].relation_joined.emit(rel, unit) + charm.on['req1'].relation_changed.emit(rel, unit) + charm.on['req-2'].relation_changed.emit(rel, unit) + charm.on['pro1'].relation_departed.emit(rel, unit) + charm.on['pro-2'].relation_departed.emit(rel, unit) + charm.on['peer1'].relation_broken.emit(rel) + charm.on['peer-2'].relation_broken.emit(rel) + + self.assertEqual(charm.seen, [ + 'RelationJoinedEvent', + 'RelationChangedEvent', + 'RelationChangedEvent', + 'RelationDepartedEvent', + 'RelationDepartedEvent', + 'RelationBrokenEvent', + 'RelationBrokenEvent', + ]) + + def test_storage_events(self): + + class MyCharm(CharmBase): + def __init__(self, *args): + super().__init__(*args) + self.seen = [] + self.framework.observe(self.on['stor1'].storage_attached, self._on_stor1_attach) + self.framework.observe(self.on['stor2'].storage_detaching, self._on_stor2_detach) + self.framework.observe(self.on['stor3'].storage_attached, self._on_stor3_attach) + self.framework.observe(self.on['stor-4'].storage_attached, self._on_stor4_attach) + + def _on_stor1_attach(self, 
event): + self.seen.append(type(event).__name__) + + def _on_stor2_detach(self, event): + self.seen.append(type(event).__name__) + + def _on_stor3_attach(self, event): + self.seen.append(type(event).__name__) + + def _on_stor4_attach(self, event): + self.seen.append(type(event).__name__) + + # language=YAML + self.meta = CharmMeta.from_yaml(''' +name: my-charm +storage: + stor-4: + multiple: + range: 2-4 + type: filesystem + stor1: + type: filesystem + stor2: + multiple: + range: "2" + type: filesystem + stor3: + multiple: + range: 2- + type: filesystem +''') + + self.assertIsNone(self.meta.storages['stor1'].multiple_range) + self.assertEqual(self.meta.storages['stor2'].multiple_range, (2, 2)) + self.assertEqual(self.meta.storages['stor3'].multiple_range, (2, None)) + self.assertEqual(self.meta.storages['stor-4'].multiple_range, (2, 4)) + + charm = MyCharm(self.create_framework()) + + charm.on['stor1'].storage_attached.emit() + charm.on['stor2'].storage_detaching.emit() + charm.on['stor3'].storage_attached.emit() + charm.on['stor-4'].storage_attached.emit() + + self.assertEqual(charm.seen, [ + 'StorageAttachedEvent', + 'StorageDetachingEvent', + 'StorageAttachedEvent', + 'StorageAttachedEvent', + ]) + + @classmethod + def _get_action_test_meta(cls): + # language=YAML + return CharmMeta.from_yaml(metadata=''' +name: my-charm +''', actions=''' +foo-bar: + description: "Foos the bar." + params: + foo-name: + description: "A foo name to bar" + type: string + silent: + default: false + description: "" + type: boolean + required: foo-bar + title: foo-bar +start: + description: "Start the unit." +''') + + def _test_action_events(self, cmd_type): + + class MyCharm(CharmBase): + + def __init__(self, *args): + super().__init__(*args) + framework.observe(self.on.foo_bar_action, self._on_foo_bar_action) + framework.observe(self.on.start_action, self._on_start_action) + + def _on_foo_bar_action(self, event): + self.seen_action_params = event.params + event.log('test-log') + event.set_results({'res': 'val with spaces'}) + event.fail('test-fail') + + def _on_start_action(self, event): + pass + + fake_script(self, cmd_type + '-get', """echo '{"foo-name": "name", "silent": true}'""") + fake_script(self, cmd_type + '-set', "") + fake_script(self, cmd_type + '-log', "") + fake_script(self, cmd_type + '-fail', "") + self.meta = self._get_action_test_meta() + + os.environ['JUJU_{}_NAME'.format(cmd_type.upper())] = 'foo-bar' + framework = self.create_framework() + charm = MyCharm(framework) + + events = list(MyCharm.on.events()) + self.assertIn('foo_bar_action', events) + self.assertIn('start_action', events) + + charm.on.foo_bar_action.emit() + self.assertEqual(charm.seen_action_params, {"foo-name": "name", "silent": True}) + self.assertEqual(fake_script_calls(self), [ + [cmd_type + '-get', '--format=json'], + [cmd_type + '-log', "test-log"], + [cmd_type + '-set', "res=val with spaces"], + [cmd_type + '-fail', "test-fail"], + ]) + + # Make sure that action events that do not match the current context are + # not possible to emit by hand. 
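+        # (JUJU_ACTION_NAME is still 'foo-bar' at this point, so emitting
+        # start_action must raise instead of invoking its handler.)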
+ with self.assertRaises(RuntimeError): + charm.on.start_action.emit() + + def test_action_events(self): + self._test_action_events('action') + + def _test_action_event_defer_fails(self, cmd_type): + + class MyCharm(CharmBase): + + def __init__(self, *args): + super().__init__(*args) + framework.observe(self.on.start_action, self._on_start_action) + + def _on_start_action(self, event): + event.defer() + + fake_script(self, cmd_type + '-get', """echo '{"foo-name": "name", "silent": true}'""") + self.meta = self._get_action_test_meta() + + os.environ['JUJU_{}_NAME'.format(cmd_type.upper())] = 'start' + framework = self.create_framework() + charm = MyCharm(framework) + + with self.assertRaises(RuntimeError): + charm.on.start_action.emit() + + def test_action_event_defer_fails(self): + self._test_action_event_defer_fails('action') diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/test/test_framework.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/test/test_framework.py new file mode 100755 index 0000000000000000000000000000000000000000..2c4a7bf0de34396572d73918518f9617b4f7dc55 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/test/test_framework.py @@ -0,0 +1,1798 @@ +# Copyright 2019-2020 Canonical Ltd. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import datetime +import gc +import inspect +import io +import os +import re +import shutil +import sys +import tempfile +from unittest.mock import patch +from pathlib import Path + +import logassert + +from ops import charm +from ops.framework import ( + _BREAKPOINT_WELCOME_MESSAGE, + BoundStoredState, + CommitEvent, + EventBase, + _event_regex, + ObjectEvents, + EventSource, + Framework, + Handle, + Object, + PreCommitEvent, + StoredList, + StoredState, + StoredStateData, +) +from ops.storage import NoSnapshotError, SQLiteStorage +from test.test_helpers import fake_script, BaseTestCase + + +class TestFramework(BaseTestCase): + + def setUp(self): + self.tmpdir = Path(tempfile.mkdtemp()) + self.addCleanup(shutil.rmtree, str(self.tmpdir)) + + patcher = patch('ops.storage.SQLiteStorage.DB_LOCK_TIMEOUT', datetime.timedelta(0)) + patcher.start() + self.addCleanup(patcher.stop) + logassert.setup(self, 'ops') + + def test_deprecated_init(self): + # For 0.7, this still works, but it is deprecated. 
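+        # The supported form wraps the path in a storage object first, e.g.:
+        #     framework = Framework(SQLiteStorage(':memory:'), None, None, None)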
+ framework = Framework(':memory:', None, None, None) + self.assertLoggedWarning( + "deprecated: Framework now takes a Storage not a path") + self.assertIsInstance(framework._storage, SQLiteStorage) + + def test_handle_path(self): + cases = [ + (Handle(None, "root", None), "root"), + (Handle(None, "root", "1"), "root[1]"), + (Handle(Handle(None, "root", None), "child", None), "root/child"), + (Handle(Handle(None, "root", "1"), "child", "2"), "root[1]/child[2]"), + ] + for handle, path in cases: + self.assertEqual(str(handle), path) + self.assertEqual(Handle.from_path(path), handle) + + def test_handle_attrs_readonly(self): + handle = Handle(None, 'kind', 'key') + with self.assertRaises(AttributeError): + handle.parent = 'foo' + with self.assertRaises(AttributeError): + handle.kind = 'foo' + with self.assertRaises(AttributeError): + handle.key = 'foo' + with self.assertRaises(AttributeError): + handle.path = 'foo' + + def test_restore_unknown(self): + framework = self.create_framework() + + class Foo(Object): + pass + + handle = Handle(None, "a_foo", "some_key") + + framework.register_type(Foo, None, handle.kind) + + try: + framework.load_snapshot(handle) + except NoSnapshotError as e: + self.assertEqual(e.handle_path, str(handle)) + self.assertEqual(str(e), "no snapshot data found for a_foo[some_key] object") + else: + self.fail("exception NoSnapshotError not raised") + + def test_snapshot_roundtrip(self): + class Foo: + def __init__(self, handle, n): + self.handle = handle + self.my_n = n + + def snapshot(self): + return {"My N!": self.my_n} + + def restore(self, snapshot): + self.my_n = snapshot["My N!"] + 1 + + handle = Handle(None, "a_foo", "some_key") + event = Foo(handle, 1) + + framework1 = self.create_framework(tmpdir=self.tmpdir) + framework1.register_type(Foo, None, handle.kind) + framework1.save_snapshot(event) + framework1.commit() + framework1.close() + + framework2 = self.create_framework(tmpdir=self.tmpdir) + framework2.register_type(Foo, None, handle.kind) + event2 = framework2.load_snapshot(handle) + self.assertEqual(event2.my_n, 2) + + framework2.save_snapshot(event2) + del event2 + gc.collect() + event3 = framework2.load_snapshot(handle) + self.assertEqual(event3.my_n, 3) + + framework2.drop_snapshot(event.handle) + framework2.commit() + framework2.close() + + framework3 = self.create_framework(tmpdir=self.tmpdir) + framework3.register_type(Foo, None, handle.kind) + + self.assertRaises(NoSnapshotError, framework3.load_snapshot, handle) + + def test_simple_event_observer(self): + framework = self.create_framework() + + class MyEvent(EventBase): + pass + + class MyNotifier(Object): + foo = EventSource(MyEvent) + bar = EventSource(MyEvent) + baz = EventSource(MyEvent) + + class MyObserver(Object): + def __init__(self, parent, key): + super().__init__(parent, key) + self.seen = [] + + def on_any(self, event): + self.seen.append("on_any:" + event.handle.kind) + + def on_foo(self, event): + self.seen.append("on_foo:" + event.handle.kind) + + pub = MyNotifier(framework, "1") + obs = MyObserver(framework, "1") + + framework.observe(pub.foo, obs.on_any) + framework.observe(pub.bar, obs.on_any) + + with self.assertRaisesRegex(RuntimeError, "^Framework.observe requires a method"): + framework.observe(pub.baz, obs) + + pub.foo.emit() + pub.bar.emit() + + self.assertEqual(obs.seen, ["on_any:foo", "on_any:bar"]) + + def test_bad_sig_observer(self): + + class MyEvent(EventBase): + pass + + class MyNotifier(Object): + foo = EventSource(MyEvent) + bar = EventSource(MyEvent) + baz = 
EventSource(MyEvent)
+            qux = EventSource(MyEvent)
+
+        class MyObserver(Object):
+            def _on_foo(self):
+                assert False, 'should not be reached'
+
+            def _on_bar(self, event, extra):
+                assert False, 'should not be reached'
+
+            def _on_baz(self, event, extra=None, *, k):
+                assert False, 'should not be reached'
+
+            def _on_qux(self, event, extra=None):
+                assert False, 'should not be reached'
+
+        framework = self.create_framework()
+        pub = MyNotifier(framework, "pub")
+        obs = MyObserver(framework, "obs")
+
+        with self.assertRaisesRegex(TypeError, "must accept event parameter"):
+            framework.observe(pub.foo, obs._on_foo)
+        with self.assertRaisesRegex(TypeError, "has extra required parameter"):
+            framework.observe(pub.bar, obs._on_bar)
+        with self.assertRaisesRegex(TypeError, "has extra required parameter"):
+            framework.observe(pub.baz, obs._on_baz)
+        framework.observe(pub.qux, obs._on_qux)
+
+    def test_on_pre_commit_emitted(self):
+        framework = self.create_framework(tmpdir=self.tmpdir)
+
+        class PreCommitObserver(Object):
+
+            _stored = StoredState()
+
+            def __init__(self, parent, key):
+                super().__init__(parent, key)
+                self.seen = []
+                self._stored.myinitdata = 40
+
+            def on_pre_commit(self, event):
+                self._stored.myinitdata = 41
+                self._stored.mydata = 42
+                self.seen.append(type(event))
+
+            def on_commit(self, event):
+                # Modifications made here will not be persisted.
+                self._stored.myinitdata = 42
+                self._stored.mydata = 43
+                self._stored.myotherdata = 43
+                self.seen.append(type(event))
+
+        obs = PreCommitObserver(framework, None)
+
+        framework.observe(framework.on.pre_commit, obs.on_pre_commit)
+
+        framework.commit()
+
+        self.assertEqual(obs._stored.myinitdata, 41)
+        self.assertEqual(obs._stored.mydata, 42)
+        self.assertEqual(obs.seen, [PreCommitEvent, CommitEvent])
+        framework.close()
+
+        other_framework = self.create_framework(tmpdir=self.tmpdir)
+
+        new_obs = PreCommitObserver(other_framework, None)
+
+        self.assertEqual(new_obs._stored.myinitdata, 41)
+        self.assertEqual(new_obs._stored.mydata, 42)
+
+        with self.assertRaises(AttributeError):
+            new_obs._stored.myotherdata
+
+    def test_defer_and_reemit(self):
+        framework = self.create_framework()
+
+        class MyEvent(EventBase):
+            pass
+
+        class MyNotifier1(Object):
+            a = EventSource(MyEvent)
+            b = EventSource(MyEvent)
+
+        class MyNotifier2(Object):
+            c = EventSource(MyEvent)
+
+        class MyObserver(Object):
+            def __init__(self, parent, key):
+                super().__init__(parent, key)
+                self.seen = []
+                self.done = {}
+
+            def on_any(self, event):
+                self.seen.append(event.handle.kind)
+                if not self.done.get(event.handle.kind):
+                    event.defer()
+
+        pub1 = MyNotifier1(framework, "1")
+        pub2 = MyNotifier2(framework, "1")
+        obs1 = MyObserver(framework, "1")
+        obs2 = MyObserver(framework, "2")
+
+        framework.observe(pub1.a, obs1.on_any)
+        framework.observe(pub1.b, obs1.on_any)
+        framework.observe(pub1.a, obs2.on_any)
+        framework.observe(pub1.b, obs2.on_any)
+        framework.observe(pub2.c, obs2.on_any)
+
+        pub1.a.emit()
+        pub1.b.emit()
+        pub2.c.emit()
+
+        # Events remain stored because they were deferred.
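+        # (The handle keys "1", "2" and "3" are the sequence numbers assigned to
+        # the three emissions above, letting us look up each deferred event's snapshot.)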
+ ev_a_handle = Handle(pub1, "a", "1") + framework.load_snapshot(ev_a_handle) + ev_b_handle = Handle(pub1, "b", "2") + framework.load_snapshot(ev_b_handle) + ev_c_handle = Handle(pub2, "c", "3") + framework.load_snapshot(ev_c_handle) + # make sure the objects are gone before we reemit them + gc.collect() + + framework.reemit() + obs1.done["a"] = True + obs2.done["b"] = True + framework.reemit() + framework.reemit() + obs1.done["b"] = True + obs2.done["a"] = True + framework.reemit() + obs2.done["c"] = True + framework.reemit() + framework.reemit() + framework.reemit() + + self.assertEqual(" ".join(obs1.seen), "a b a b a b b b") + self.assertEqual(" ".join(obs2.seen), "a b c a b c a b c a c a c c") + + # Now the event objects must all be gone from storage. + self.assertRaises(NoSnapshotError, framework.load_snapshot, ev_a_handle) + self.assertRaises(NoSnapshotError, framework.load_snapshot, ev_b_handle) + self.assertRaises(NoSnapshotError, framework.load_snapshot, ev_c_handle) + + def test_custom_event_data(self): + framework = self.create_framework() + + class MyEvent(EventBase): + def __init__(self, handle, n): + super().__init__(handle) + self.my_n = n + + def snapshot(self): + return {"My N!": self.my_n} + + def restore(self, snapshot): + super().restore(snapshot) + self.my_n = snapshot["My N!"] + 1 + + class MyNotifier(Object): + foo = EventSource(MyEvent) + + class MyObserver(Object): + def __init__(self, parent, key): + super().__init__(parent, key) + self.seen = [] + + def _on_foo(self, event): + self.seen.append("on_foo:{}={}".format(event.handle.kind, event.my_n)) + event.defer() + + pub = MyNotifier(framework, "1") + obs = MyObserver(framework, "1") + + framework.observe(pub.foo, obs._on_foo) + + pub.foo.emit(1) + + framework.reemit() + + # Two things being checked here: + # + # 1. There's a restore roundtrip before the event is first observed. + # That means the data is safe before it's ever seen, and the + # roundtrip logic is tested under normal circumstances. + # + # 2. The renotification restores from the pristine event, not + # from the one modified during the first restore (otherwise + # we'd get a foo=3). 
+ # + self.assertEqual(obs.seen, ["on_foo:foo=2", "on_foo:foo=2"]) + + def test_weak_observer(self): + framework = self.create_framework() + + observed_events = [] + + class MyEvent(EventBase): + pass + + class MyEvents(ObjectEvents): + foo = EventSource(MyEvent) + + class MyNotifier(Object): + on = MyEvents() + + class MyObserver(Object): + def _on_foo(self, event): + observed_events.append("foo") + + pub = MyNotifier(framework, "1") + obs = MyObserver(framework, "2") + + framework.observe(pub.on.foo, obs._on_foo) + pub.on.foo.emit() + self.assertEqual(observed_events, ["foo"]) + # Now delete the observer, and note that when we emit the event, it + # doesn't update the local slice again + del obs + gc.collect() + pub.on.foo.emit() + self.assertEqual(observed_events, ["foo"]) + + def test_forget_and_multiple_objects(self): + framework = self.create_framework() + + class MyObject(Object): + pass + + o1 = MyObject(framework, "path") + # Creating a second object at the same path should fail with RuntimeError + with self.assertRaises(RuntimeError): + o2 = MyObject(framework, "path") + # Unless we _forget the object first + framework._forget(o1) + o2 = MyObject(framework, "path") + self.assertEqual(o1.handle.path, o2.handle.path) + # Deleting the tracked object should also work + del o2 + gc.collect() + o3 = MyObject(framework, "path") + self.assertEqual(o1.handle.path, o3.handle.path) + framework.close() + # Or using a second framework + framework_copy = self.create_framework() + o_copy = MyObject(framework_copy, "path") + self.assertEqual(o1.handle.path, o_copy.handle.path) + + def test_forget_and_multiple_objects_with_load_snapshot(self): + framework = self.create_framework(tmpdir=self.tmpdir) + + class MyObject(Object): + def __init__(self, parent, name): + super().__init__(parent, name) + self.value = name + + def snapshot(self): + return self.value + + def restore(self, value): + self.value = value + + framework.register_type(MyObject, None, MyObject.handle_kind) + o1 = MyObject(framework, "path") + framework.save_snapshot(o1) + framework.commit() + o_handle = o1.handle + del o1 + gc.collect() + o2 = framework.load_snapshot(o_handle) + # Trying to load_snapshot a second object at the same path should fail with RuntimeError + with self.assertRaises(RuntimeError): + framework.load_snapshot(o_handle) + # Unless we _forget the object first + framework._forget(o2) + o3 = framework.load_snapshot(o_handle) + self.assertEqual(o2.value, o3.value) + # A loaded object also prevents direct creation of an object + with self.assertRaises(RuntimeError): + MyObject(framework, "path") + framework.close() + # But we can create an object, or load a snapshot in a copy of the framework + framework_copy1 = self.create_framework(tmpdir=self.tmpdir) + o_copy1 = MyObject(framework_copy1, "path") + self.assertEqual(o_copy1.value, "path") + framework_copy1.close() + framework_copy2 = self.create_framework(tmpdir=self.tmpdir) + framework_copy2.register_type(MyObject, None, MyObject.handle_kind) + o_copy2 = framework_copy2.load_snapshot(o_handle) + self.assertEqual(o_copy2.value, "path") + + def test_events_base(self): + framework = self.create_framework() + + class MyEvent(EventBase): + pass + + class MyEvents(ObjectEvents): + foo = EventSource(MyEvent) + bar = EventSource(MyEvent) + + class MyNotifier(Object): + on = MyEvents() + + class MyObserver(Object): + def __init__(self, parent, key): + super().__init__(parent, key) + self.seen = [] + + def _on_foo(self, event): + 
self.seen.append("on_foo:{}".format(event.handle.kind)) + event.defer() + + def _on_bar(self, event): + self.seen.append("on_bar:{}".format(event.handle.kind)) + + pub = MyNotifier(framework, "1") + obs = MyObserver(framework, "1") + + # Confirm that temporary persistence of BoundEvents doesn't cause errors, + # and that events can be observed. + for bound_event, handler in [(pub.on.foo, obs._on_foo), (pub.on.bar, obs._on_bar)]: + framework.observe(bound_event, handler) + + # Confirm that events can be emitted and seen. + pub.on.foo.emit() + + self.assertEqual(obs.seen, ["on_foo:foo"]) + + def test_conflicting_event_attributes(self): + class MyEvent(EventBase): + pass + + event = EventSource(MyEvent) + + class MyEvents(ObjectEvents): + foo = event + + with self.assertRaises(RuntimeError) as cm: + class OtherEvents(ObjectEvents): + foo = event + self.assertEqual( + str(cm.exception), + "EventSource(MyEvent) reused as MyEvents.foo and OtherEvents.foo") + + with self.assertRaises(RuntimeError) as cm: + class MyNotifier(Object): + on = MyEvents() + bar = event + self.assertEqual( + str(cm.exception), + "EventSource(MyEvent) reused as MyEvents.foo and MyNotifier.bar") + + def test_reemit_ignores_unknown_event_type(self): + # The event type may have been gone for good, and nobody cares, + # so this shouldn't be an error scenario. + + framework = self.create_framework() + + class MyEvent(EventBase): + pass + + class MyNotifier(Object): + foo = EventSource(MyEvent) + + class MyObserver(Object): + def __init__(self, parent, key): + super().__init__(parent, key) + self.seen = [] + + def _on_foo(self, event): + self.seen.append(event.handle) + event.defer() + + pub = MyNotifier(framework, "1") + obs = MyObserver(framework, "1") + + framework.observe(pub.foo, obs._on_foo) + pub.foo.emit() + + event_handle = obs.seen[0] + self.assertEqual(event_handle.kind, "foo") + + framework.commit() + framework.close() + + framework_copy = self.create_framework() + + # No errors on missing event types here. + framework_copy.reemit() + + # Register the type and check that the event is gone from storage. 
+ framework_copy.register_type(MyEvent, event_handle.parent, event_handle.kind) + self.assertRaises(NoSnapshotError, framework_copy.load_snapshot, event_handle) + + def test_auto_register_event_types(self): + framework = self.create_framework() + + class MyFoo(EventBase): + pass + + class MyBar(EventBase): + pass + + class MyEvents(ObjectEvents): + foo = EventSource(MyFoo) + + class MyNotifier(Object): + on = MyEvents() + bar = EventSource(MyBar) + + class MyObserver(Object): + def __init__(self, parent, key): + super().__init__(parent, key) + self.seen = [] + + def _on_foo(self, event): + self.seen.append("on_foo:{}:{}".format(type(event).__name__, event.handle.kind)) + event.defer() + + def _on_bar(self, event): + self.seen.append("on_bar:{}:{}".format(type(event).__name__, event.handle.kind)) + event.defer() + + pub = MyNotifier(framework, "1") + obs = MyObserver(framework, "1") + + pub.on.foo.emit() + pub.bar.emit() + + framework.observe(pub.on.foo, obs._on_foo) + framework.observe(pub.bar, obs._on_bar) + + pub.on.foo.emit() + pub.bar.emit() + + self.assertEqual(obs.seen, ["on_foo:MyFoo:foo", "on_bar:MyBar:bar"]) + + def test_dynamic_event_types(self): + framework = self.create_framework() + + class MyEventsA(ObjectEvents): + handle_kind = 'on_a' + + class MyEventsB(ObjectEvents): + handle_kind = 'on_b' + + class MyNotifier(Object): + on_a = MyEventsA() + on_b = MyEventsB() + + class MyObserver(Object): + def __init__(self, parent, key): + super().__init__(parent, key) + self.seen = [] + + def _on_foo(self, event): + self.seen.append("on_foo:{}:{}".format(type(event).__name__, event.handle.kind)) + event.defer() + + def _on_bar(self, event): + self.seen.append("on_bar:{}:{}".format(type(event).__name__, event.handle.kind)) + event.defer() + + pub = MyNotifier(framework, "1") + obs = MyObserver(framework, "1") + + class MyFoo(EventBase): + pass + + class MyBar(EventBase): + pass + + class DeadBeefEvent(EventBase): + pass + + class NoneEvent(EventBase): + pass + + pub.on_a.define_event("foo", MyFoo) + pub.on_b.define_event("bar", MyBar) + + framework.observe(pub.on_a.foo, obs._on_foo) + framework.observe(pub.on_b.bar, obs._on_bar) + + pub.on_a.foo.emit() + pub.on_b.bar.emit() + + self.assertEqual(obs.seen, ["on_foo:MyFoo:foo", "on_bar:MyBar:bar"]) + + # Definitions remained local to the specific type. + self.assertRaises(AttributeError, lambda: pub.on_a.bar) + self.assertRaises(AttributeError, lambda: pub.on_b.foo) + + # Try to use an event name which is not a valid python identifier. + with self.assertRaises(RuntimeError): + pub.on_a.define_event("dead-beef", DeadBeefEvent) + + # Try to use a python keyword for an event name. + with self.assertRaises(RuntimeError): + pub.on_a.define_event("None", NoneEvent) + + # Try to override an existing attribute. + with self.assertRaises(RuntimeError): + pub.on_a.define_event("foo", MyFoo) + + def test_event_key_roundtrip(self): + class MyEvent(EventBase): + def __init__(self, handle, value): + super().__init__(handle) + self.value = value + + def snapshot(self): + return self.value + + def restore(self, value): + self.value = value + + class MyNotifier(Object): + foo = EventSource(MyEvent) + + class MyObserver(Object): + has_deferred = False + + def __init__(self, parent, key): + super().__init__(parent, key) + self.seen = [] + + def _on_foo(self, event): + self.seen.append((event.handle.key, event.value)) + # Only defer the first event and once. 
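+                # (has_deferred is a class attribute, so the observer created in
+                # the second framework below will not defer again.)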
+                if not MyObserver.has_deferred:
+                    event.defer()
+                    MyObserver.has_deferred = True
+
+        framework1 = self.create_framework(tmpdir=self.tmpdir)
+        pub1 = MyNotifier(framework1, "pub")
+        obs1 = MyObserver(framework1, "obs")
+        framework1.observe(pub1.foo, obs1._on_foo)
+        pub1.foo.emit('first')
+        self.assertEqual(obs1.seen, [('1', 'first')])
+
+        framework1.commit()
+        framework1.close()
+        del framework1
+
+        framework2 = self.create_framework(tmpdir=self.tmpdir)
+        pub2 = MyNotifier(framework2, "pub")
+        obs2 = MyObserver(framework2, "obs")
+        framework2.observe(pub2.foo, obs2._on_foo)
+        pub2.foo.emit('second')
+        framework2.reemit()
+
+        # The first observer didn't get updated, since the framework it was bound to is gone.
+        self.assertEqual(obs1.seen, [('1', 'first')])
+        # The second observer saw the new event plus the reemit of the first event.
+        # (The event key goes up by 2 due to the pre-commit and commit events.)
+        self.assertEqual(obs2.seen, [('4', 'second'), ('1', 'first')])
+
+    def test_helper_properties(self):
+        framework = self.create_framework()
+        framework.model = 'test-model'
+        framework.meta = 'test-meta'
+
+        my_obj = Object(framework, 'my_obj')
+        self.assertEqual(my_obj.model, framework.model)
+
+    def test_ban_concurrent_frameworks(self):
+        f = self.create_framework(tmpdir=self.tmpdir)
+        with self.assertRaises(Exception) as cm:
+            self.create_framework(tmpdir=self.tmpdir)
+        self.assertIn('database is locked', str(cm.exception))
+        f.close()
+
+    def test_snapshot_saving_restricted_to_simple_types(self):
+        # This cannot be saved, as it contains non-simple types!
+        to_be_saved = {"bar": TestFramework}
+
+        class FooEvent(EventBase):
+            def snapshot(self):
+                return to_be_saved
+
+        handle = Handle(None, "a_foo", "some_key")
+        event = FooEvent(handle)
+
+        framework = self.create_framework()
+        framework.register_type(FooEvent, None, handle.kind)
+        with self.assertRaises(ValueError) as cm:
+            framework.save_snapshot(event)
+        expected = (
+            "unable to save the data for FooEvent, it must contain only simple types: "
+            "{'bar': <class 'test.test_framework.TestFramework'>}")
+        self.assertEqual(str(cm.exception), expected)
+
+    def test_unobserved_events_dont_leave_cruft(self):
+        class FooEvent(EventBase):
+            def snapshot(self):
+                return {'content': 1}
+
+        class Events(ObjectEvents):
+            foo = EventSource(FooEvent)
+
+        class Emitter(Object):
+            on = Events()
+
+        framework = self.create_framework()
+        e = Emitter(framework, 'key')
+        e.on.foo.emit()
+        ev_1_handle = Handle(e.on, "foo", "1")
+        with self.assertRaises(NoSnapshotError):
+            framework.load_snapshot(ev_1_handle)
+        # Committing will save the framework's state, but no other snapshots should be saved
+        framework.commit()
+        events = framework._storage.list_snapshots()
+        self.assertEqual(list(events), [framework._stored.handle.path])
+
+    def test_event_regex(self):
+        examples = [
+            'Ubuntu/on/config_changed[7]',
+            'on/commit[9]',
+            'on/pre_commit[8]',
+        ]
+        non_examples = [
+            'StoredStateData[_stored]',
+            'ObjectWithSTorage[obj]StoredStateData[_stored]',
+        ]
+        regex = re.compile(_event_regex)
+        for e in examples:
+            self.assertIsNotNone(regex.match(e))
+        for e in non_examples:
+            self.assertIsNone(regex.match(e))
+
+    def test_remove_unreferenced_events(self):
+        framework = self.create_framework()
+
+        class Evt(EventBase):
+            pass
+
+        class Events(ObjectEvents):
+            event = EventSource(Evt)
+
+        class ObjectWithStorage(Object):
+            _stored = StoredState()
+            on = Events()
+
+            def __init__(self, framework, key):
+                super().__init__(framework, key)
+                self._stored.set_default(foo=2)
+                self.framework.observe(self.on.event,
self._on_event) + + def _on_event(self, event): + event.defer() + + # This is an event that 'happened in the past' that doesn't have an associated notice. + o = ObjectWithStorage(framework, 'obj') + handle = Handle(o.on, 'event', '100') + event = Evt(handle) + framework.save_snapshot(event) + self.assertEqual(list(framework._storage.list_snapshots()), [handle.path]) + o.on.event.emit() + self.assertEqual( + list(framework._storage.notices('')), + [('ObjectWithStorage[obj]/on/event[1]', 'ObjectWithStorage[obj]', '_on_event')]) + framework.commit() + self.assertEqual( + sorted(framework._storage.list_snapshots()), + sorted(['ObjectWithStorage[obj]/on/event[100]', + 'StoredStateData[_stored]', + 'ObjectWithStorage[obj]/StoredStateData[_stored]', + 'ObjectWithStorage[obj]/on/event[1]'])) + framework.remove_unreferenced_events() + self.assertEqual( + sorted(framework._storage.list_snapshots()), + sorted([ + 'StoredStateData[_stored]', + 'ObjectWithStorage[obj]/StoredStateData[_stored]', + 'ObjectWithStorage[obj]/on/event[1]'])) + + +class TestStoredState(BaseTestCase): + + def setUp(self): + self.tmpdir = Path(tempfile.mkdtemp()) + self.addCleanup(shutil.rmtree, str(self.tmpdir)) + + def test_basic_state_storage(self): + class SomeObject(Object): + _stored = StoredState() + + self._stored_state_tests(SomeObject) + + def test_straight_subclass(self): + class SomeObject(Object): + _stored = StoredState() + + class Sub(SomeObject): + pass + + self._stored_state_tests(Sub) + + def test_straight_sub_subclass(self): + class SomeObject(Object): + _stored = StoredState() + + class Sub(SomeObject): + pass + + class SubSub(SomeObject): + pass + + self._stored_state_tests(SubSub) + + def test_two_subclasses(self): + class SomeObject(Object): + _stored = StoredState() + + class SubA(SomeObject): + pass + + class SubB(SomeObject): + pass + + self._stored_state_tests(SubA) + self._stored_state_tests(SubB) + + def test_the_crazy_thing(self): + class NoState(Object): + pass + + class StatedObject(NoState): + _stored = StoredState() + + class Sibling(NoState): + pass + + class FinalChild(StatedObject, Sibling): + pass + + self._stored_state_tests(FinalChild) + + def _stored_state_tests(self, cls): + framework = self.create_framework(tmpdir=self.tmpdir) + obj = cls(framework, "1") + + try: + obj._stored.foo + except AttributeError as e: + self.assertEqual(str(e), "attribute 'foo' is not stored") + else: + self.fail("AttributeError not raised") + + try: + obj._stored.on = "nonono" + except AttributeError as e: + self.assertEqual(str(e), "attribute 'on' is reserved and cannot be set") + else: + self.fail("AttributeError not raised") + + obj._stored.foo = 41 + obj._stored.foo = 42 + obj._stored.bar = "s" + obj._stored.baz = 4.2 + obj._stored.bing = True + + self.assertEqual(obj._stored.foo, 42) + + framework.commit() + + # This won't be committed, and should not be seen. + obj._stored.foo = 43 + + framework.close() + + # Since this has the same absolute object handle, it will get its state back. 
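+        # (Same storage file via tmpdir plus the same handle path means the
+        # committed snapshot is found and restored when the object is recreated.)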
+ framework_copy = self.create_framework(tmpdir=self.tmpdir) + obj_copy = cls(framework_copy, "1") + self.assertEqual(obj_copy._stored.foo, 42) + self.assertEqual(obj_copy._stored.bar, "s") + self.assertEqual(obj_copy._stored.baz, 4.2) + self.assertEqual(obj_copy._stored.bing, True) + + framework_copy.close() + + def test_two_subclasses_no_conflicts(self): + class Base(Object): + _stored = StoredState() + + class SubA(Base): + pass + + class SubB(Base): + pass + + framework = self.create_framework(tmpdir=self.tmpdir) + a = SubA(framework, None) + b = SubB(framework, None) + z = Base(framework, None) + + a._stored.foo = 42 + b._stored.foo = "hello" + z._stored.foo = {1} + + framework.commit() + framework.close() + + framework2 = self.create_framework(tmpdir=self.tmpdir) + a2 = SubA(framework2, None) + b2 = SubB(framework2, None) + z2 = Base(framework2, None) + + self.assertEqual(a2._stored.foo, 42) + self.assertEqual(b2._stored.foo, "hello") + self.assertEqual(z2._stored.foo, {1}) + + def test_two_names_one_state(self): + class Mine(Object): + _stored = StoredState() + _stored2 = _stored + + framework = self.create_framework() + obj = Mine(framework, None) + + with self.assertRaises(RuntimeError): + obj._stored.foo = 42 + + with self.assertRaises(RuntimeError): + obj._stored2.foo = 42 + + framework.close() + + # make sure we're not changing the object on failure + self.assertNotIn("_stored", obj.__dict__) + self.assertNotIn("_stored2", obj.__dict__) + + def test_same_name_two_classes(self): + class Base(Object): + pass + + class A(Base): + _stored = StoredState() + + class B(Base): + _stored = A._stored + + framework = self.create_framework() + a = A(framework, None) + b = B(framework, None) + + # NOTE it's the second one that actually triggers the + # exception, but that's an implementation detail + a._stored.foo = 42 + + with self.assertRaises(RuntimeError): + b._stored.foo = "xyzzy" + + framework.close() + + # make sure we're not changing the object on failure + self.assertNotIn("_stored", b.__dict__) + + def test_mutable_types_invalid(self): + framework = self.create_framework() + + class SomeObject(Object): + _stored = StoredState() + + obj = SomeObject(framework, '1') + try: + class CustomObject: + pass + obj._stored.foo = CustomObject() + except AttributeError as e: + self.assertEqual( + str(e), + "attribute 'foo' cannot be a CustomObject: must be int/float/dict/list/etc") + else: + self.fail('AttributeError not raised') + + framework.commit() + + def test_mutable_types(self): + # Test and validation functions in a list of 2-tuples. + # Assignment and keywords like del are not supported in lambdas + # so functions are used instead. + test_operations = [( + lambda: {}, # Operand A. + None, # Operand B. + {}, # Expected result. + lambda a, b: None, # Operation to perform. + lambda res, expected_res: self.assertEqual(res, expected_res) # Validation to perform. 
+ ), ( + lambda: {}, + {'a': {}}, + {'a': {}}, + lambda a, b: a.update(b), + lambda res, expected_res: self.assertEqual(res, expected_res) + ), ( + lambda: {'a': {}}, + {'b': 'c'}, + {'a': {'b': 'c'}}, + lambda a, b: a['a'].update(b), + lambda res, expected_res: self.assertEqual(res, expected_res) + ), ( + lambda: {'a': {'b': 'c'}}, + {'d': 'e'}, + {'a': {'b': 'c', 'd': 'e'}}, + lambda a, b: a['a'].update(b), + lambda res, expected_res: self.assertEqual(res, expected_res) + ), ( + lambda: {'a': {'b': 'c', 'd': 'e'}}, + 'd', + {'a': {'b': 'c'}}, + lambda a, b: a['a'].pop(b), + lambda res, expected_res: self.assertEqual(res, expected_res) + ), ( + lambda: {'s': set()}, + 'a', + {'s': {'a'}}, + lambda a, b: a['s'].add(b), + lambda res, expected_res: self.assertEqual(res, expected_res) + ), ( + lambda: {'s': {'a'}}, + 'a', + {'s': set()}, + lambda a, b: a['s'].discard(b), + lambda res, expected_res: self.assertEqual(res, expected_res) + ), ( + lambda: [], + None, + [], + lambda a, b: None, + lambda res, expected_res: self.assertEqual(res, expected_res) + ), ( + lambda: [], + 'a', + ['a'], + lambda a, b: a.append(b), + lambda res, expected_res: self.assertEqual(res, expected_res) + ), ( + lambda: ['a'], + ['c'], + ['a', ['c']], + lambda a, b: a.append(b), + lambda res, expected_res: ( + self.assertEqual(res, expected_res), + self.assertIsInstance(res[1], StoredList), + ) + ), ( + lambda: ['a', ['c']], + 'b', + ['b', 'a', ['c']], + lambda a, b: a.insert(0, b), + lambda res, expected_res: self.assertEqual(res, expected_res) + ), ( + lambda: ['b', 'a', ['c']], + ['d'], + ['b', ['d'], 'a', ['c']], + lambda a, b: a.insert(1, b), + lambda res, expected_res: ( + self.assertEqual(res, expected_res), + self.assertIsInstance(res[1], StoredList) + ), + ), ( + lambda: ['b', 'a', ['c']], + ['d'], + ['b', ['d'], ['c']], + # a[1] = b + lambda a, b: a.__setitem__(1, b), + lambda res, expected_res: ( + self.assertEqual(res, expected_res), + self.assertIsInstance(res[1], StoredList) + ), + ), ( + lambda: ['b', ['d'], 'a', ['c']], + 0, + [['d'], 'a', ['c']], + lambda a, b: a.pop(b), + lambda res, expected_res: self.assertEqual(res, expected_res) + ), ( + lambda: [['d'], 'a', ['c']], + ['d'], + ['a', ['c']], + lambda a, b: a.remove(b), + lambda res, expected_res: self.assertEqual(res, expected_res) + ), ( + lambda: ['a', ['c']], + 'd', + ['a', ['c', 'd']], + lambda a, b: a[1].append(b), + lambda res, expected_res: self.assertEqual(res, expected_res) + ), ( + lambda: ['a', ['c', 'd']], + 1, + ['a', ['c']], + lambda a, b: a[1].pop(b), + lambda res, expected_res: self.assertEqual(res, expected_res) + ), ( + lambda: ['a', ['c']], + 'd', + ['a', ['c', 'd']], + lambda a, b: a[1].insert(1, b), + lambda res, expected_res: self.assertEqual(res, expected_res) + ), ( + lambda: ['a', ['c', 'd']], + 'd', + ['a', ['c']], + lambda a, b: a[1].remove(b), + lambda res, expected_res: self.assertEqual(res, expected_res) + ), ( + lambda: set(), + None, + set(), + lambda a, b: None, + lambda res, expected_res: self.assertEqual(res, expected_res) + ), ( + lambda: set(), + 'a', + set(['a']), + lambda a, b: a.add(b), + lambda res, expected_res: self.assertEqual(res, expected_res) + ), ( + lambda: set(['a']), + 'a', + set(), + lambda a, b: a.discard(b), + lambda res, expected_res: self.assertEqual(res, expected_res) + ), ( + lambda: set(), + {'a'}, + set(), + # Nested sets are not allowed as sets themselves are not hashable. 
+ lambda a, b: self.assertRaises(TypeError, a.add, b), + lambda res, expected_res: self.assertEqual(res, expected_res) + )] + + class SomeObject(Object): + _stored = StoredState() + + class WrappedFramework(Framework): + def __init__(self, store, charm_dir, meta, model): + super().__init__(store, charm_dir, meta, model) + self.snapshots = [] + + def save_snapshot(self, value): + if value.handle.path == 'SomeObject[1]/StoredStateData[_stored]': + self.snapshots.append((type(value), value.snapshot())) + return super().save_snapshot(value) + + # Validate correctness of modification operations. + for get_a, b, expected_res, op, validate_op in test_operations: + storage = SQLiteStorage(self.tmpdir / "framework.data") + framework = WrappedFramework(storage, self.tmpdir, None, None) + obj = SomeObject(framework, '1') + + obj._stored.a = get_a() + self.assertTrue(isinstance(obj._stored, BoundStoredState)) + + op(obj._stored.a, b) + validate_op(obj._stored.a, expected_res) + + obj._stored.a = get_a() + framework.commit() + # We should see an update for initializing a + self.assertEqual(framework.snapshots, [ + (StoredStateData, {'a': get_a()}), + ]) + del obj + gc.collect() + obj_copy1 = SomeObject(framework, '1') + self.assertEqual(obj_copy1._stored.a, get_a()) + + op(obj_copy1._stored.a, b) + validate_op(obj_copy1._stored.a, expected_res) + framework.commit() + framework.close() + + storage_copy = SQLiteStorage(self.tmpdir / "framework.data") + framework_copy = WrappedFramework(storage_copy, self.tmpdir, None, None) + + obj_copy2 = SomeObject(framework_copy, '1') + + validate_op(obj_copy2._stored.a, expected_res) + + # Commit saves the pre-commit and commit events, and the framework + # event counter, but shouldn't update the stored state of my object + framework.snapshots.clear() + framework_copy.commit() + self.assertEqual(framework_copy.snapshots, []) + framework_copy.close() + + def test_comparison_operations(self): + test_operations = [( + {"1"}, # Operand A. + {"1", "2"}, # Operand B. + lambda a, b: a < b, # Operation to test. + True, # Result of op(A, B). + False, # Result of op(B, A). + ), ( + {"1"}, + {"1", "2"}, + lambda a, b: a > b, + False, + True + ), ( + # Empty set comparison. + set(), + set(), + lambda a, b: a == b, + True, + True + ), ( + {"a", "c"}, + {"c", "a"}, + lambda a, b: a == b, + True, + True + ), ( + dict(), + dict(), + lambda a, b: a == b, + True, + True + ), ( + {"1": "2"}, + {"1": "2"}, + lambda a, b: a == b, + True, + True + ), ( + {"1": "2"}, + {"1": "3"}, + lambda a, b: a == b, + False, + False + ), ( + [], + [], + lambda a, b: a == b, + True, + True + ), ( + [1, 2], + [1, 2], + lambda a, b: a == b, + True, + True + ), ( + [1, 2, 5, 6], + [1, 2, 5, 8, 10], + lambda a, b: a <= b, + True, + False + ), ( + [1, 2, 5, 6], + [1, 2, 5, 8, 10], + lambda a, b: a < b, + True, + False + ), ( + [1, 2, 5, 8], + [1, 2, 5, 6, 10], + lambda a, b: a > b, + True, + False + ), ( + [1, 2, 5, 8], + [1, 2, 5, 6, 10], + lambda a, b: a >= b, + True, + False + )] + + class SomeObject(Object): + _stored = StoredState() + + framework = self.create_framework() + + for i, (a, b, op, op_ab, op_ba) in enumerate(test_operations): + obj = SomeObject(framework, str(i)) + obj._stored.a = a + self.assertEqual(op(obj._stored.a, b), op_ab) + self.assertEqual(op(b, obj._stored.a), op_ba) + + def test_set_operations(self): + test_operations = [( + {"1"}, # A set to test an operation against (other_set). + lambda a, b: a | b, # An operation to test. 
+ {"1", "a", "b"}, # The expected result of operation(obj._stored.set, other_set). + {"1", "a", "b"} # The expected result of operation(other_set, obj._stored.set). + ), ( + {"a", "c"}, + lambda a, b: a - b, + {"b"}, + {"c"} + ), ( + {"a", "c"}, + lambda a, b: a & b, + {"a"}, + {"a"} + ), ( + {"a", "c", "d"}, + lambda a, b: a ^ b, + {"b", "c", "d"}, + {"b", "c", "d"} + ), ( + set(), + lambda a, b: set(a), + {"a", "b"}, + set() + )] + + class SomeObject(Object): + _stored = StoredState() + + framework = self.create_framework() + + # Validate that operations between StoredSet and built-in sets + # only result in built-in sets being returned. + # Make sure that commutativity is preserved and that the + # original sets are not changed or used as a result. + for i, (variable_operand, operation, ab_res, ba_res) in enumerate(test_operations): + obj = SomeObject(framework, str(i)) + obj._stored.set = {"a", "b"} + + for a, b, expected in [ + (obj._stored.set, variable_operand, ab_res), + (variable_operand, obj._stored.set, ba_res)]: + old_a = set(a) + old_b = set(b) + + result = operation(a, b) + self.assertEqual(result, expected) + + # Common sanity checks + self.assertIsNot(obj._stored.set._under, result) + self.assertIsNot(result, a) + self.assertIsNot(result, b) + self.assertEqual(a, old_a) + self.assertEqual(b, old_b) + + def test_set_default(self): + framework = self.create_framework() + + class StatefulObject(Object): + _stored = StoredState() + parent = StatefulObject(framework, 'key') + parent._stored.set_default(foo=1) + self.assertEqual(parent._stored.foo, 1) + parent._stored.set_default(foo=2) + # foo was already set, so it doesn't get replaced + self.assertEqual(parent._stored.foo, 1) + parent._stored.set_default(foo=3, bar=4) + self.assertEqual(parent._stored.foo, 1) + self.assertEqual(parent._stored.bar, 4) + # reloading the state still leaves things at the default values + framework.commit() + del parent + parent = StatefulObject(framework, 'key') + parent._stored.set_default(foo=5, bar=6) + self.assertEqual(parent._stored.foo, 1) + self.assertEqual(parent._stored.bar, 4) + # TODO: jam 2020-01-30 is there a clean way to tell that + # parent._stored._data.dirty is False? + + +class GenericObserver(Object): + """Generic observer for the tests.""" + + def __init__(self, parent, key): + super().__init__(parent, key) + self.called = False + + def callback_method(self, event): + """Set the instance .called to True.""" + self.called = True + + +@patch('sys.stderr', new_callable=io.StringIO) +class BreakpointTests(BaseTestCase): + + def setUp(self): + super().setUp() + logassert.setup(self, 'ops') + + def test_ignored(self, fake_stderr): + # It doesn't do anything really unless proper environment is there. + with patch.dict(os.environ): + os.environ.pop('JUJU_DEBUG_AT', None) + framework = self.create_framework() + + with patch('pdb.Pdb.set_trace') as mock: + framework.breakpoint() + self.assertEqual(mock.call_count, 0) + self.assertEqual(fake_stderr.getvalue(), "") + self.assertNotLoggedWarning("Breakpoint", "skipped") + + def test_pdb_properly_called(self, fake_stderr): + # The debugger needs to leave the user in the frame where the breakpoint is executed, + # which for the test is the frame we're calling it here in the test :). 
+        with patch.dict(os.environ, {'JUJU_DEBUG_AT': 'all'}):
+            framework = self.create_framework()
+
+        with patch('pdb.Pdb.set_trace') as mock:
+            this_frame = inspect.currentframe()
+            framework.breakpoint()
+
+        self.assertEqual(mock.call_count, 1)
+        self.assertEqual(mock.call_args, ((this_frame,), {}))
+
+    def test_welcome_message(self, fake_stderr):
+        # Check that an initial message is shown to the user when code is interrupted.
+        with patch.dict(os.environ, {'JUJU_DEBUG_AT': 'all'}):
+            framework = self.create_framework()
+            with patch('pdb.Pdb.set_trace'):
+                framework.breakpoint()
+        self.assertEqual(fake_stderr.getvalue(), _BREAKPOINT_WELCOME_MESSAGE)
+
+    def test_welcome_message_not_multiple(self, fake_stderr):
+        # Check that an initial message is NOT shown twice if the breakpoint is exercised
+        # twice in the same run.
+        with patch.dict(os.environ, {'JUJU_DEBUG_AT': 'all'}):
+            framework = self.create_framework()
+            with patch('pdb.Pdb.set_trace'):
+                framework.breakpoint()
+                self.assertEqual(fake_stderr.getvalue(), _BREAKPOINT_WELCOME_MESSAGE)
+                framework.breakpoint()
+                self.assertEqual(fake_stderr.getvalue(), _BREAKPOINT_WELCOME_MESSAGE)
+
+    def test_builtin_breakpoint_hooked(self, fake_stderr):
+        # Verify that the proper hook is set.
+        with patch.dict(os.environ, {'JUJU_DEBUG_AT': 'all'}):
+            self.create_framework()  # creating the framework sets up the hook
+            with patch('pdb.Pdb.set_trace') as mock:
+                # Calling through sys, not breakpoint() directly, so we can run the
+                # tests with Py < 3.7.
+                sys.breakpointhook()
+        self.assertEqual(mock.call_count, 1)
+
+    def test_breakpoint_names(self, fake_stderr):
+        framework = self.create_framework()
+
+        # Name rules:
+        # - must start and end with lowercase alphanumeric characters
+        # - only contain lowercase alphanumeric characters, or the hyphen "-"
+        good_names = [
+            'foobar',
+            'foo-bar-baz',
+            'foo-------bar',
+            'foo123',
+            '778',
+            '77-xx',
+            'a-b',
+            'ab',
+            'x',
+        ]
+        for name in good_names:
+            with self.subTest(name=name):
+                framework.breakpoint(name)
+
+        bad_names = [
+            '',
+            '.',
+            '-',
+            '...foo',
+            'foo.bar',
+            'bar--',
+            'FOO',
+            'FooBar',
+            'foo bar',
+            'foo_bar',
+            '/foobar',
+            'break-here-☚',
+        ]
+        msg = 'breakpoint names must look like "foo" or "foo-bar"'
+        for name in bad_names:
+            with self.subTest(name=name):
+                with self.assertRaises(ValueError) as cm:
+                    framework.breakpoint(name)
+                self.assertEqual(str(cm.exception), msg)
+
+        reserved_names = [
+            'all',
+            'hook',
+        ]
+        msg = 'breakpoint names "all" and "hook" are reserved'
+        for name in reserved_names:
+            with self.subTest(name=name):
+                with self.assertRaises(ValueError) as cm:
+                    framework.breakpoint(name)
+                self.assertEqual(str(cm.exception), msg)
+
+        not_really_names = [
+            123,
+            1.1,
+            False,
+        ]
+        for name in not_really_names:
+            with self.subTest(name=name):
+                with self.assertRaises(TypeError) as cm:
+                    framework.breakpoint(name)
+                self.assertEqual(str(cm.exception), 'breakpoint names must be strings')
+
+    def check_trace_set(self, envvar_value, breakpoint_name, call_count):
+        """Helper to check the diverse combinations of situations."""
+        with patch.dict(os.environ, {'JUJU_DEBUG_AT': envvar_value}):
+            framework = self.create_framework()
+            with patch('pdb.Pdb.set_trace') as mock:
+                framework.breakpoint(breakpoint_name)
+        self.assertEqual(mock.call_count, call_count)
+
+    def test_unnamed_indicated_all(self, fake_stderr):
+        # If 'all' is indicated, unnamed breakpoints will always activate.
+ self.check_trace_set('all', None, 1) + + def test_unnamed_indicated_hook(self, fake_stderr): + # Special value 'hook' was indicated, nothing to do with any call. + self.check_trace_set('hook', None, 0) + + def test_named_indicated_specifically(self, fake_stderr): + # Some breakpoint was indicated, and the framework call used exactly that name. + self.check_trace_set('mybreak', 'mybreak', 1) + + def test_named_indicated_unnamed(self, fake_stderr): + # Some breakpoint was indicated, but the framework call was unnamed + self.check_trace_set('some-breakpoint', None, 0) + self.assertLoggedWarning( + "Breakpoint None skipped", + "not found in the requested breakpoints: ['some-breakpoint']") + + def test_named_indicated_somethingelse(self, fake_stderr): + # Some breakpoint was indicated, but the framework call was with a different name + self.check_trace_set('some-breakpoint', 'other-name', 0) + self.assertLoggedWarning( + "Breakpoint 'other-name' skipped", + "not found in the requested breakpoints: ['some-breakpoint']") + + def test_named_indicated_ingroup(self, fake_stderr): + # A multiple breakpoint was indicated, and the framework call used a name among those. + self.check_trace_set('some,mybreak,foobar', 'mybreak', 1) + + def test_named_indicated_all(self, fake_stderr): + # The framework indicated 'all', which includes any named breakpoint set. + self.check_trace_set('all', 'mybreak', 1) + + def test_named_indicated_hook(self, fake_stderr): + # The framework indicated the special value 'hook', nothing to do with any named call. + self.check_trace_set('hook', 'mybreak', 0) + + +class DebugHookTests(BaseTestCase): + + def test_envvar_parsing_missing(self): + with patch.dict(os.environ): + os.environ.pop('JUJU_DEBUG_AT', None) + framework = self.create_framework() + self.assertEqual(framework._juju_debug_at, ()) + + def test_envvar_parsing_empty(self): + with patch.dict(os.environ, {'JUJU_DEBUG_AT': ''}): + framework = self.create_framework() + self.assertEqual(framework._juju_debug_at, ()) + + def test_envvar_parsing_simple(self): + with patch.dict(os.environ, {'JUJU_DEBUG_AT': 'hook'}): + framework = self.create_framework() + self.assertEqual(framework._juju_debug_at, ['hook']) + + def test_envvar_parsing_multiple(self): + with patch.dict(os.environ, {'JUJU_DEBUG_AT': 'foo,bar,all'}): + framework = self.create_framework() + self.assertEqual(framework._juju_debug_at, ['foo', 'bar', 'all']) + + def test_basic_interruption_enabled(self): + framework = self.create_framework() + framework._juju_debug_at = ['hook'] + + publisher = charm.CharmEvents(framework, "1") + observer = GenericObserver(framework, "1") + framework.observe(publisher.install, observer.callback_method) + + with patch('sys.stderr', new_callable=io.StringIO) as fake_stderr: + with patch('pdb.runcall') as mock: + publisher.install.emit() + + # Check that the pdb module was used correctly and that the callback method was NOT + # called (as we intercepted the normal pdb behaviour! this is to check that the + # framework didn't call the callback directly) + self.assertEqual(mock.call_count, 1) + expected_callback, expected_event = mock.call_args[0] + self.assertEqual(expected_callback, observer.callback_method) + self.assertIsInstance(expected_event, EventBase) + self.assertFalse(observer.called) + + # Verify proper message was given to the user. 
+ self.assertEqual(fake_stderr.getvalue(), _BREAKPOINT_WELCOME_MESSAGE) + + def test_actions_are_interrupted(self): + test_model = self.create_model() + framework = self.create_framework(model=test_model) + framework._juju_debug_at = ['hook'] + + class CustomEvents(ObjectEvents): + foobar_action = EventSource(charm.ActionEvent) + + publisher = CustomEvents(framework, "1") + observer = GenericObserver(framework, "1") + framework.observe(publisher.foobar_action, observer.callback_method) + fake_script(self, 'action-get', "echo {}") + + with patch('sys.stderr', new_callable=io.StringIO): + with patch('pdb.runcall') as mock: + with patch.dict(os.environ, {'JUJU_ACTION_NAME': 'foobar'}): + publisher.foobar_action.emit() + + self.assertEqual(mock.call_count, 1) + self.assertFalse(observer.called) + + def test_internal_events_not_interrupted(self): + class MyNotifier(Object): + """Generic notifier for the tests.""" + bar = EventSource(EventBase) + + framework = self.create_framework() + framework._juju_debug_at = ['hook'] + + publisher = MyNotifier(framework, "1") + observer = GenericObserver(framework, "1") + framework.observe(publisher.bar, observer.callback_method) + + with patch('pdb.runcall') as mock: + publisher.bar.emit() + + self.assertEqual(mock.call_count, 0) + self.assertTrue(observer.called) + + def test_envvar_mixed(self): + framework = self.create_framework() + framework._juju_debug_at = ['foo', 'hook', 'all', 'whatever'] + + publisher = charm.CharmEvents(framework, "1") + observer = GenericObserver(framework, "1") + framework.observe(publisher.install, observer.callback_method) + + with patch('sys.stderr', new_callable=io.StringIO): + with patch('pdb.runcall') as mock: + publisher.install.emit() + + self.assertEqual(mock.call_count, 1) + self.assertFalse(observer.called) + + def test_no_registered_method(self): + framework = self.create_framework() + framework._juju_debug_at = ['hook'] + + publisher = charm.CharmEvents(framework, "1") + observer = GenericObserver(framework, "1") + + with patch('pdb.runcall') as mock: + publisher.install.emit() + + self.assertEqual(mock.call_count, 0) + self.assertFalse(observer.called) + + def test_envvar_nohook(self): + framework = self.create_framework() + framework._juju_debug_at = ['something-else'] + + publisher = charm.CharmEvents(framework, "1") + observer = GenericObserver(framework, "1") + framework.observe(publisher.install, observer.callback_method) + + with patch.dict(os.environ, {'JUJU_DEBUG_AT': 'something-else'}): + with patch('pdb.runcall') as mock: + publisher.install.emit() + + self.assertEqual(mock.call_count, 0) + self.assertTrue(observer.called) + + def test_envvar_missing(self): + framework = self.create_framework() + framework._juju_debug_at = () + + publisher = charm.CharmEvents(framework, "1") + observer = GenericObserver(framework, "1") + framework.observe(publisher.install, observer.callback_method) + + with patch('pdb.runcall') as mock: + publisher.install.emit() + + self.assertEqual(mock.call_count, 0) + self.assertTrue(observer.called) + + def test_welcome_message_not_multiple(self): + framework = self.create_framework() + framework._juju_debug_at = ['hook'] + + publisher = charm.CharmEvents(framework, "1") + observer = GenericObserver(framework, "1") + framework.observe(publisher.install, observer.callback_method) + + with patch('sys.stderr', new_callable=io.StringIO) as fake_stderr: + with patch('pdb.runcall') as mock: + publisher.install.emit() + self.assertEqual(fake_stderr.getvalue(), _BREAKPOINT_WELCOME_MESSAGE) 
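+                # (Second emission: pdb is invoked again, but the welcome
+                # message must not be printed twice.)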
+ publisher.install.emit() + self.assertEqual(fake_stderr.getvalue(), _BREAKPOINT_WELCOME_MESSAGE) + self.assertEqual(mock.call_count, 2) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/test/test_helpers.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/test/test_helpers.py new file mode 100755 index 0000000000000000000000000000000000000000..f7c6fc29a60362940d4e0259351d1d53eb9dbe93 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/test/test_helpers.py @@ -0,0 +1,121 @@ +# Copyright 2019-2020 Canonical Ltd. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import os +import pathlib +import subprocess +import shutil +import tempfile +import unittest + +from ops.framework import Framework +from ops.model import Model, _ModelBackend +from ops.charm import CharmMeta +from ops.storage import SQLiteStorage + + +def fake_script(test_case, name, content): + if not hasattr(test_case, 'fake_script_path'): + fake_script_path = tempfile.mkdtemp('-fake_script') + os.environ['PATH'] = '{}:{}'.format(fake_script_path, os.environ["PATH"]) + + def cleanup(): + shutil.rmtree(fake_script_path) + os.environ['PATH'] = os.environ['PATH'].replace(fake_script_path + ':', '') + + test_case.addCleanup(cleanup) + test_case.fake_script_path = pathlib.Path(fake_script_path) + + template_args = { + 'name': name, + 'path': test_case.fake_script_path, + 'content': content, + } + + with (test_case.fake_script_path / name).open('wt') as f: + # Before executing the provided script, dump the provided arguments in calls.txt. + # ASCII 1E is RS 'record separator', and 1C is FS 'file separator', which seem appropriate. 
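+        # Example stream for the invocation sequence foo a "b c "; bar "d e" f
+        # (as exercised in test_fake_script_works below):
+        #     foo\x1ea\x1eb c \x1cbar\x1ed e\x1ef\x1c
+        # fake_script_calls() decodes this with two nested splits, dropping the
+        # empty string left after the final FS terminator.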
+        f.write('''#!/bin/sh
+{{ printf {name}; printf "\\036%s" "$@"; printf "\\034"; }} >> {path}/calls.txt
+{content}'''.format_map(template_args))
+    os.chmod(str(test_case.fake_script_path / name), 0o755)
+
+
+def fake_script_calls(test_case, clear=False):
+    try:
+        with (test_case.fake_script_path / 'calls.txt').open('r+t') as f:
+            calls = [line.split('\x1e') for line in f.read().split('\x1c')[:-1]]
+            if clear:
+                f.truncate(0)
+            return calls
+    except FileNotFoundError:
+        return []
+
+
+class FakeScriptTest(unittest.TestCase):
+
+    def test_fake_script_works(self):
+        fake_script(self, 'foo', 'echo foo runs')
+        fake_script(self, 'bar', 'echo bar runs')
+        output = subprocess.getoutput('foo a "b c "; bar "d e" f')
+        self.assertEqual(output, 'foo runs\nbar runs')
+        self.assertEqual(fake_script_calls(self), [
+            ['foo', 'a', 'b c '],
+            ['bar', 'd e', 'f'],
+        ])
+
+    def test_fake_script_clear(self):
+        fake_script(self, 'foo', 'echo foo runs')
+
+        output = subprocess.getoutput('foo a "b c"')
+        self.assertEqual(output, 'foo runs')
+
+        self.assertEqual(fake_script_calls(self, clear=True), [['foo', 'a', 'b c']])
+
+        fake_script(self, 'bar', 'echo bar runs')
+
+        output = subprocess.getoutput('bar "d e" f')
+        self.assertEqual(output, 'bar runs')
+
+        self.assertEqual(fake_script_calls(self, clear=True), [['bar', 'd e', 'f']])
+
+        self.assertEqual(fake_script_calls(self, clear=True), [])
+
+
+class BaseTestCase(unittest.TestCase):
+
+    def create_framework(self, *, model=None, tmpdir=None):
+        """Create a Framework object.
+
+        By default the framework operates in-memory; pass a temporary directory
+        via the 'tmpdir' parameter if you wish to instantiate several frameworks
+        sharing the same dir (e.g. for storing state).
+        """
+        if tmpdir is None:
+            data_fpath = ":memory:"
+            charm_dir = 'non-existent'
+        else:
+            data_fpath = tmpdir / "framework.data"
+            charm_dir = tmpdir
+
+        framework = Framework(SQLiteStorage(data_fpath), charm_dir, meta=None, model=model)
+        self.addCleanup(framework.close)
+        return framework
+
+    def create_model(self):
+        """Create a Model object."""
+        backend = _ModelBackend(unit_name='myapp/0')
+        meta = CharmMeta()
+        model = Model(meta, backend)
+        return model
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/test/test_infra.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/test/test_infra.py
new file mode 100644
index 0000000000000000000000000000000000000000..5133b35d11be89a702558d6f961facb8884553ec
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/test/test_infra.py
@@ -0,0 +1,155 @@
+# Copyright 2020 Canonical Ltd.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import io
+import itertools
+import os
+import re
+import subprocess
+import sys
+import tempfile
+import unittest
+from unittest.mock import patch
+
+import autopep8
+from flake8.api.legacy import get_style_guide
+
+import ops
+
+
+def get_python_filepaths():
+    """Helper to retrieve paths of Python files."""
+    python_paths = ['setup.py']
+    for root in ['ops', 'test']:
+        for dirpath, dirnames, filenames in os.walk(root):
+            for filename in filenames:
+                if filename.endswith(".py"):
+                    python_paths.append(os.path.join(dirpath, filename))
+    return python_paths
+
+
+class InfrastructureTests(unittest.TestCase):
+
+    def test_pep8(self):
+        # verify all files are nicely styled
+        python_filepaths = get_python_filepaths()
+        style_guide = get_style_guide()
+        fake_stdout = io.StringIO()
+        with patch('sys.stdout', fake_stdout):
+            report = style_guide.check_files(python_filepaths)
+
+        # if flake8 didn't report anything, we're done
+        if report.total_errors == 0:
+            return
+
+        # find out which files have issues
+        flake8_issues = fake_stdout.getvalue().split('\n')
+        broken_filepaths = {item.split(':')[0] for item in flake8_issues if item}
+
+        # give hints to the developer on how files' style could be improved
+        options = autopep8.parse_args([''])
+        options.aggressive = 1
+        options.diff = True
+        options.max_line_length = 99
+
+        issues = []
+        for filepath in broken_filepaths:
+            diff = autopep8.fix_file(filepath, options=options)
+            if diff:
+                issues.append(diff)
+
+        report = ["Please fix files as suggested by autopep8:"] + issues
+        report += ["\n-- Original flake8 reports:"] + flake8_issues
+        self.fail("\n".join(report))
+
+    def test_quote_backslashes(self):
+        # ensure we're not using unneeded backslashes to escape strings
+        issues = []
+        for filepath in get_python_filepaths():
+            with open(filepath, "rt", encoding="utf8") as fh:
+                for idx, line in enumerate(fh, 1):
+                    if (r'\"' in line or r"\'" in line) and "NOQA" not in line:
+                        issues.append((filepath, idx, line.rstrip()))
+        if issues:
+            msgs = ["{}:{:d}:{}".format(*issue) for issue in issues]
+            self.fail("Spurious backslashes found, please fix these quotings:\n" + "\n".join(msgs))
+
+    def test_ensure_copyright(self):
+        # all non-empty Python files must have a proper copyright somewhere in the first 5 lines
+        issues = []
+        regex = re.compile(r"# Copyright \d\d\d\d(-\d\d\d\d)? 
Canonical Ltd.\n") + for filepath in get_python_filepaths(): + if os.stat(filepath).st_size == 0: + continue + + with open(filepath, "rt", encoding="utf8") as fh: + for line in itertools.islice(fh, 5): + if regex.match(line): + break + else: + issues.append(filepath) + if issues: + self.fail("Please add copyright headers to the following files:\n" + "\n".join(issues)) + + def _run_setup(self, *args): + proc = subprocess.run( + (sys.executable, 'setup.py') + args, + stdout=subprocess.PIPE, + check=True) + return proc.stdout.strip().decode("utf8") + + def test_setup_version(self): + setup_version = self._run_setup('--version') + + self.assertEqual(setup_version, ops.__version__) + + def test_setup_description(self): + with open("README.md", "rt", encoding="utf8") as fh: + disk_readme = fh.read().strip() + + setup_readme = self._run_setup('--long-description') + + self.assertEqual(setup_readme, disk_readme) + + def test_check(self): + self._run_setup('check', '--strict') + + +class ImportersTestCase(unittest.TestCase): + + template = "from ops import {module_name}" + + def test_imports(self): + mod_names = [ + 'charm', + 'framework', + 'main', + 'model', + 'testing', + ] + + for name in mod_names: + with self.subTest(name=name): + self.check(name) + + def check(self, name): + """Helper function to run the test.""" + _, testfile = tempfile.mkstemp() + self.addCleanup(os.unlink, testfile) + + with open(testfile, 'wt', encoding='utf8') as fh: + fh.write(self.template.format(module_name=name)) + + proc = subprocess.run([sys.executable, testfile], env={'PYTHONPATH': os.getcwd()}) + self.assertEqual(proc.returncode, 0) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/test/test_jujuversion.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/test/test_jujuversion.py new file mode 100755 index 0000000000000000000000000000000000000000..1e3821017c22b1fb07c74a089b8f678d3b055fe6 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/test/test_jujuversion.py @@ -0,0 +1,145 @@ +# Copyright 2019 Canonical Ltd. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
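+# Illustrative sketch (added commentary, not part of the upstream module): the
+# parsing cases below exercise a version grammar of roughly
+#     major.minor[.patch[.build]]  or  major.minor-<tag><patch>[.build]
+# One regex that accepts the valid cases and rejects the invalid ones is shown
+# here purely as an illustration; it is not necessarily the exact pattern that
+# ops.jujuversion uses internally.
+import re
+
+_DEMO_PATTERN = re.compile(
+    r'^(?P<major>\d{1,9})\.(?P<minor>\d{1,9})'       # e.g. 2.7
+    r'((?:\.|-(?P<tag>[a-z]+))(?P<patch>\d{1,9}))?'  # .0 or -alpha12
+    r'(\.(?P<build>\d{1,9}))?$')                     # optional trailing build
+_m = _DEMO_PATTERN.match('1.21-alpha1.34')
+assert _m and _m.group('tag', 'patch', 'build') == ('alpha', '1', '34')
+assert _DEMO_PATTERN.match('1.21-alpha') is None  # a tag requires a patch number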
+ +import os +import unittest +import unittest.mock # in this file, importing just 'patch' would be confusing + +from ops.jujuversion import JujuVersion + + +class TestJujuVersion(unittest.TestCase): + + def test_parsing(self): + test_cases = [ + ("0.0.0", 0, 0, '', 0, 0), + ("0.0.2", 0, 0, '', 2, 0), + ("0.1.0", 0, 1, '', 0, 0), + ("0.2.3", 0, 2, '', 3, 0), + ("10.234.3456", 10, 234, '', 3456, 0), + ("10.234.3456.1", 10, 234, '', 3456, 1), + ("1.21-alpha12", 1, 21, 'alpha', 12, 0), + ("1.21-alpha1.34", 1, 21, 'alpha', 1, 34), + ("2.7", 2, 7, '', 0, 0) + ] + + for vs, major, minor, tag, patch, build in test_cases: + v = JujuVersion(vs) + self.assertEqual(v.major, major) + self.assertEqual(v.minor, minor) + self.assertEqual(v.tag, tag) + self.assertEqual(v.patch, patch) + self.assertEqual(v.build, build) + + @unittest.mock.patch('os.environ', new={}) + def test_from_environ(self): + with self.assertRaisesRegex(RuntimeError, 'environ has no JUJU_VERSION'): + JujuVersion.from_environ() + + os.environ['JUJU_VERSION'] = 'no' + with self.assertRaisesRegex(RuntimeError, 'not a valid Juju version'): + JujuVersion.from_environ() + + os.environ['JUJU_VERSION'] = '2.8.0' + v = JujuVersion.from_environ() + self.assertEqual(v, JujuVersion('2.8.0')) + + def test_has_app_data(self): + self.assertTrue(JujuVersion('2.8.0').has_app_data()) + self.assertTrue(JujuVersion('2.7.0').has_app_data()) + self.assertFalse(JujuVersion('2.6.9').has_app_data()) + + def test_parsing_errors(self): + invalid_versions = [ + "xyz", + "foo.bar", + "foo.bar.baz", + "dead.beef.ca.fe", + "1234567890.2.1", # The major version is too long. + "0.2..1", # Two periods next to each other. + "1.21.alpha1", # Tag comes after period. + "1.21-alpha", # No patch number but a tag is present. + "1.21-alpha1beta", # Non-numeric string after the patch number. + "1.21-alpha-dev", # Tag duplication. + "1.21-alpha_dev3", # Underscore in a tag. + "1.21-alpha123dev3", # Non-numeric string after the patch number. 
+ ] + for v in invalid_versions: + with self.assertRaises(RuntimeError): + JujuVersion(v) + + def test_equality(self): + test_cases = [ + ("1.0.0", "1.0.0", True), + ("01.0.0", "1.0.0", True), + ("10.0.0", "9.0.0", False), + ("1.0.0", "1.0.1", False), + ("1.0.1", "1.0.0", False), + ("1.0.0", "1.1.0", False), + ("1.1.0", "1.0.0", False), + ("1.0.0", "2.0.0", False), + ("1.2-alpha1", "1.2.0", False), + ("1.2-alpha2", "1.2-alpha1", False), + ("1.2-alpha2.1", "1.2-alpha2", False), + ("1.2-alpha2.2", "1.2-alpha2.1", False), + ("1.2-beta1", "1.2-alpha1", False), + ("1.2-beta1", "1.2-alpha2.1", False), + ("1.2-beta1", "1.2.0", False), + ("1.2.1", "1.2.0", False), + ("2.0.0", "1.0.0", False), + ("2.0.0.0", "2.0.0", True), + ("2.0.0.0", "2.0.0.0", True), + ("2.0.0.1", "2.0.0.0", False), + ("2.0.1.10", "2.0.0.0", False), + ] + + for a, b, expected in test_cases: + self.assertEqual(JujuVersion(a) == JujuVersion(b), expected) + self.assertEqual(JujuVersion(a) == b, expected) + + def test_comparison(self): + test_cases = [ + ("1.0.0", "1.0.0", False, True), + ("01.0.0", "1.0.0", False, True), + ("10.0.0", "9.0.0", False, False), + ("1.0.0", "1.0.1", True, True), + ("1.0.1", "1.0.0", False, False), + ("1.0.0", "1.1.0", True, True), + ("1.1.0", "1.0.0", False, False), + ("1.0.0", "2.0.0", True, True), + ("1.2-alpha1", "1.2.0", True, True), + ("1.2-alpha2", "1.2-alpha1", False, False), + ("1.2-alpha2.1", "1.2-alpha2", False, False), + ("1.2-alpha2.2", "1.2-alpha2.1", False, False), + ("1.2-beta1", "1.2-alpha1", False, False), + ("1.2-beta1", "1.2-alpha2.1", False, False), + ("1.2-beta1", "1.2.0", True, True), + ("1.2.1", "1.2.0", False, False), + ("2.0.0", "1.0.0", False, False), + ("2.0.0.0", "2.0.0", False, True), + ("2.0.0.0", "2.0.0.0", False, True), + ("2.0.0.1", "2.0.0.0", False, False), + ("2.0.1.10", "2.0.0.0", False, False), + ] + + for a, b, expected_strict, expected_weak in test_cases: + self.assertEqual(JujuVersion(a) < JujuVersion(b), expected_strict) + self.assertEqual(JujuVersion(a) <= JujuVersion(b), expected_weak) + self.assertEqual(JujuVersion(b) > JujuVersion(a), expected_strict) + self.assertEqual(JujuVersion(b) >= JujuVersion(a), expected_weak) + # Implicit conversion. + self.assertEqual(JujuVersion(a) < b, expected_strict) + self.assertEqual(JujuVersion(a) <= b, expected_weak) + self.assertEqual(b > JujuVersion(a), expected_strict) + self.assertEqual(b >= JujuVersion(a), expected_weak) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/test/test_lib.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/test/test_lib.py new file mode 100644 index 0000000000000000000000000000000000000000..7e9af8565449b2ff1ef13c905d8e2e201a7cefad --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/test/test_lib.py @@ -0,0 +1,545 @@ +# Copyright 2020 Canonical Ltd. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
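+# Illustrative sketch (added commentary, not part of the upstream module): the
+# tests below build trees of the shape that ops.lib scans for, namely
+#     <dir on sys.path>/<package>/opslib/<libname>/__init__.py
+# where <package> has a real __init__.py (pure namespace packages are skipped,
+# see test_namespace) and the library's __init__.py declares LIBNAME, LIBAPI,
+# LIBPATCH and LIBAUTHOR. A hand-rolled version of what _mklib automates:
+import shutil
+import tempfile
+from pathlib import Path
+
+_demo_top = Path(tempfile.mkdtemp())
+(_demo_top / 'mycharm').mkdir()
+(_demo_top / 'mycharm' / '__init__.py').write_text('')  # real package, not a namespace
+_demo_lib = _demo_top / 'mycharm' / 'opslib' / 'baz'
+_demo_lib.mkdir(parents=True)
+(_demo_lib / '__init__.py').write_text(
+    'LIBNAME = "baz"\nLIBAPI = 2\nLIBPATCH = 42\nLIBAUTHOR = "alice@example.com"\n')
+shutil.rmtree(str(_demo_top))  # the illustration cleans up after itself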
+ +import os +import sys + +from importlib.machinery import ModuleSpec +from pathlib import Path +from tempfile import mkdtemp, mkstemp +from unittest import TestCase +from unittest.mock import patch +from random import shuffle +from shutil import rmtree +from textwrap import dedent + +import ops.lib + + +def _mklib(topdir: str, pkgname: str, libname: str) -> Path: + """Make a for-testing library. + + Args: + topdir: the toplevel directory in which the package will be created. + This directory must already exist. + pkgname: the name of the package to create in the toplevel directory. + this package will have an empty __init__.py. + libname: the name of the library directory to create under the package. + + Returns: + a :class:`Path` to the ``__init__.py`` of the created library. + This file will not have been created yet. + """ + pkg = Path(topdir) / pkgname + try: + pkg.mkdir() + except FileExistsError: + pass + else: + (pkg / '__init__.py').write_text('') + + lib = pkg / 'opslib' / libname + lib.mkdir(parents=True) + + return lib / '__init__.py' + + +def _flatten(specgen): + return sorted([os.path.dirname(spec.origin) for spec in specgen]) + + +class TestLibFinder(TestCase): + def _mkdtemp(self) -> str: + tmpdir = mkdtemp() + self.addCleanup(rmtree, tmpdir) + return tmpdir + + def test_single(self): + tmpdir = self._mkdtemp() + + self.assertEqual(list(ops.lib._find_all_specs([tmpdir])), []) + + _mklib(tmpdir, "foo", "bar").write_text("") + + self.assertEqual( + _flatten(ops.lib._find_all_specs([tmpdir])), + [tmpdir + '/foo/opslib/bar']) + + def test_multi(self): + tmpdirA = self._mkdtemp() + tmpdirB = self._mkdtemp() + + if tmpdirA > tmpdirB: + # keep sorting happy + tmpdirA, tmpdirB = tmpdirB, tmpdirA + + dirs = [tmpdirA, tmpdirB] + + for top in [tmpdirA, tmpdirB]: + for pkg in ["bar", "baz"]: + for lib in ["meep", "quux"]: + _mklib(top, pkg, lib).write_text("") + + expected = [ + os.path.join(tmpdirA, "bar", "opslib", "meep"), + os.path.join(tmpdirA, "bar", "opslib", "quux"), + os.path.join(tmpdirA, "baz", "opslib", "meep"), + os.path.join(tmpdirA, "baz", "opslib", "quux"), + os.path.join(tmpdirB, "bar", "opslib", "meep"), + os.path.join(tmpdirB, "bar", "opslib", "quux"), + os.path.join(tmpdirB, "baz", "opslib", "meep"), + os.path.join(tmpdirB, "baz", "opslib", "quux"), + ] + + self.assertEqual(_flatten(ops.lib._find_all_specs(dirs)), expected) + + def test_cwd(self): + tmpcwd = self._mkdtemp() + cwd = os.getcwd() + os.chdir(tmpcwd) + self.addCleanup(os.chdir, cwd) + + dirs = [""] + + self.assertEqual(list(ops.lib._find_all_specs(dirs)), []) + + _mklib(tmpcwd, "foo", "bar").write_text("") + + self.assertEqual( + _flatten(ops.lib._find_all_specs(dirs)), + ['./foo/opslib/bar']) + + def test_bogus_topdir(self): + """Check that having one bogus dir in sys.path doesn't cause the finder to abort.""" + tmpdir = self._mkdtemp() + + dirs = [tmpdir, "/bogus"] + + self.assertEqual(list(ops.lib._find_all_specs(dirs)), []) + + _mklib(tmpdir, "foo", "bar").write_text("") + + self.assertEqual( + _flatten(ops.lib._find_all_specs(dirs)), + [tmpdir + '/foo/opslib/bar']) + + def test_bogus_opsdir(self): + """Check that having one bogus opslib doesn't cause the finder to abort.""" + + tmpdir = self._mkdtemp() + + self.assertEqual(list(ops.lib._find_all_specs([tmpdir])), []) + + _mklib(tmpdir, "foo", "bar").write_text('') + + path = Path(tmpdir) / 'baz' + path.mkdir() + (path / 'opslib').write_text('') + + self.assertEqual( + _flatten(ops.lib._find_all_specs([tmpdir])), + [tmpdir + '/foo/opslib/bar']) + + 
def test_namespace(self): + """Check that namespace packages are ignored.""" + tmpdir = self._mkdtemp() + + self.assertEqual(list(ops.lib._find_all_specs([tmpdir])), []) + + _mklib(tmpdir, "foo", "bar") # no __init__.py => a namespace package + + self.assertEqual(list(ops.lib._find_all_specs([tmpdir])), []) + + +class TestLibParser(TestCase): + def _mkmod(self, name: str, content: str = None) -> ModuleSpec: + fd, fname = mkstemp(text=True) + self.addCleanup(os.unlink, fname) + if content is not None: + with os.fdopen(fd, mode='wt', closefd=False) as f: + f.write(dedent(content)) + os.close(fd) + return ModuleSpec(name=name, loader=None, origin=fname) + + def test_simple(self): + """Check that we can load a reasonably straightforward lib""" + m = self._mkmod('foo', ''' + LIBNAME = "foo" + LIBEACH = float('-inf') + LIBAPI = 2 + LIBPATCH = 42 + LIBAUTHOR = "alice@example.com" + LIBANANA = True + ''') + lib = ops.lib._parse_lib(m) + self.assertEqual(lib, ops.lib._Lib(None, "foo", "alice@example.com", 2, 42)) + # also check the repr while we're at it + self.assertEqual(repr(lib), '<_Lib foo by alice@example.com, API 2, patch 42>') + + def test_libauthor_has_dashes(self): + m = self._mkmod('foo', ''' + LIBNAME = "foo" + LIBAPI = 2 + LIBPATCH = 42 + LIBAUTHOR = "alice-someone@example.com" + LIBANANA = True + ''') + lib = ops.lib._parse_lib(m) + self.assertEqual(lib, ops.lib._Lib(None, "foo", "alice-someone@example.com", 2, 42)) + # also check the repr while we're at it + self.assertEqual(repr(lib), '<_Lib foo by alice-someone@example.com, API 2, patch 42>') + + def test_lib_definitions_without_spaces(self): + m = self._mkmod('foo', ''' + LIBNAME="foo" + LIBAPI=2 + LIBPATCH=42 + LIBAUTHOR="alice@example.com" + LIBANANA=True + ''') + lib = ops.lib._parse_lib(m) + self.assertEqual(lib, ops.lib._Lib(None, "foo", "alice@example.com", 2, 42)) + # also check the repr while we're at it + self.assertEqual(repr(lib), '<_Lib foo by alice@example.com, API 2, patch 42>') + + def test_lib_definitions_trailing_comments(self): + m = self._mkmod('foo', ''' + LIBNAME = "foo" # comment style 1 + LIBAPI = 2 = comment style 2 + LIBPATCH = 42 + LIBAUTHOR = "alice@example.com"anything after the quote is a comment + LIBANANA = True + ''') + lib = ops.lib._parse_lib(m) + self.assertEqual(lib, ops.lib._Lib(None, "foo", "alice@example.com", 2, 42)) + # also check the repr while we're at it + self.assertEqual(repr(lib), '<_Lib foo by alice@example.com, API 2, patch 42>') + + def test_incomplete(self): + """Check that if anything is missing, nothing is returned""" + m = self._mkmod('foo', ''' + LIBNAME = "foo" + LIBAPI = 2 + LIBPATCH = 42 + ''') + self.assertIsNone(ops.lib._parse_lib(m)) + + def test_too_long(self): + """Check that if the file is too long, nothing is returned""" + m = self._mkmod('foo', '\n' * ops.lib._MAX_LIB_LINES + ''' + LIBNAME = "foo" + LIBAPI = 2 + LIBPATCH = 42 + LIBAUTHOR = "alice@example.com" + ''') + self.assertIsNone(ops.lib._parse_lib(m)) + + def test_no_origin(self): + """Check that _parse_lib doesn't choke when given a spec with no origin""" + # 'just don't crash' + lib = ops.lib._parse_lib(ModuleSpec(name='hi', loader=None, origin=None)) + self.assertIsNone(lib) + + def test_bogus_origin(self): + """Check that if the origin is messed up, we don't crash""" + # 'just don't crash' + lib = ops.lib._parse_lib(ModuleSpec(name='hi', loader=None, origin='/')) + self.assertIsNone(lib) + + def test_bogus_lib(self): + """Check our behaviour when the lib is messed up""" + # note the syntax error (that is 
carefully chosen to pass the initial regexp) + m = self._mkmod('foo', ''' + LIBNAME = "1' + LIBAPI = 2 + LIBPATCH = 42 + LIBAUTHOR = "alice@example.com" + ''') + self.assertIsNone(ops.lib._parse_lib(m)) + + def test_name_is_number(self): + """Check our behaviour when the name in the lib is a number""" + m = self._mkmod('foo', ''' + LIBNAME = 1 + LIBAPI = 2 + LIBPATCH = 42 + LIBAUTHOR = "alice@example.com" + ''') + self.assertIsNone(ops.lib._parse_lib(m)) + + def test_api_is_string(self): + """Check our behaviour when the api in the lib is a string""" + m = self._mkmod('foo', ''' + LIBNAME = 'foo' + LIBAPI = '2' + LIBPATCH = 42 + LIBAUTHOR = "alice@example.com" + ''') + self.assertIsNone(ops.lib._parse_lib(m)) + + def test_patch_is_string(self): + """Check our behaviour when the patch in the lib is a string""" + m = self._mkmod('foo', ''' + LIBNAME = 'foo' + LIBAPI = 2 + LIBPATCH = '42' + LIBAUTHOR = "alice@example.com" + ''') + self.assertIsNone(ops.lib._parse_lib(m)) + + def test_author_is_number(self): + """Check our behaviour when the author in the lib is a number""" + m = self._mkmod('foo', ''' + LIBNAME = 'foo' + LIBAPI = 2 + LIBPATCH = 42 + LIBAUTHOR = 43 + ''') + self.assertIsNone(ops.lib._parse_lib(m)) + + def test_other_encoding(self): + """Check that we don't crash when a library is not UTF-8""" + m = self._mkmod('foo') + with open(m.origin, 'wt', encoding='latin-1') as f: + f.write(dedent(''' + LIBNAME = "foo" + LIBAPI = 2 + LIBPATCH = 42 + LIBAUTHOR = "alice@example.com" + LIBANANA = "Ñoño" + ''')) + self.assertIsNone(ops.lib._parse_lib(m)) + + +class TestLib(TestCase): + + def test_lib_comparison(self): + self.assertNotEqual( + ops.lib._Lib(None, "foo", "alice@example.com", 1, 0), + ops.lib._Lib(None, "bar", "bob@example.com", 0, 1)) + self.assertEqual( + ops.lib._Lib(None, "foo", "alice@example.com", 1, 1), + ops.lib._Lib(None, "foo", "alice@example.com", 1, 1)) + + self.assertLess( + ops.lib._Lib(None, "foo", "alice@example.com", 1, 0), + ops.lib._Lib(None, "foo", "alice@example.com", 1, 1)) + self.assertLess( + ops.lib._Lib(None, "foo", "alice@example.com", 0, 1), + ops.lib._Lib(None, "foo", "alice@example.com", 1, 1)) + self.assertLess( + ops.lib._Lib(None, "foo", "alice@example.com", 1, 1), + ops.lib._Lib(None, "foo", "bob@example.com", 1, 1)) + self.assertLess( + ops.lib._Lib(None, "bar", "alice@example.com", 1, 1), + ops.lib._Lib(None, "foo", "alice@example.com", 1, 1)) + + with self.assertRaises(TypeError): + 42 < ops.lib._Lib(None, "bar", "alice@example.com", 1, 1) + with self.assertRaises(TypeError): + ops.lib._Lib(None, "bar", "alice@example.com", 1, 1) < 42 + + # these two might be surprising in that they don't raise an exception, + # but they are correct: our __eq__ bailing means Python falls back to + # its default of checking object identity. 
+ self.assertNotEqual(ops.lib._Lib(None, "bar", "alice@example.com", 1, 1), 42) + self.assertNotEqual(42, ops.lib._Lib(None, "bar", "alice@example.com", 1, 1)) + + def test_lib_order(self): + a = ops.lib._Lib(None, "bar", "alice@example.com", 1, 0) + b = ops.lib._Lib(None, "bar", "alice@example.com", 1, 1) + c = ops.lib._Lib(None, "foo", "alice@example.com", 1, 0) + d = ops.lib._Lib(None, "foo", "alice@example.com", 1, 1) + e = ops.lib._Lib(None, "foo", "bob@example.com", 1, 1) + + for i in range(20): + with self.subTest(i): + libs = [a, b, c, d, e] + shuffle(libs) + self.assertEqual(sorted(libs), [a, b, c, d, e]) + + def test_use_bad_args_types(self): + with self.assertRaises(TypeError): + ops.lib.use(1, 2, 'bob@example.com') + with self.assertRaises(TypeError): + ops.lib.use('foo', '2', 'bob@example.com') + with self.assertRaises(TypeError): + ops.lib.use('foo', 2, ops.lib.use) + + def test_use_bad_args_values(self): + with self.assertRaises(ValueError): + ops.lib.use('--help', 2, 'alice@example.com') + with self.assertRaises(ValueError): + ops.lib.use('foo', -2, 'alice@example.com') + with self.assertRaises(ValueError): + ops.lib.use('foo', 1, 'example.com') + + +@patch('sys.path', new=()) +class TestLibFunctional(TestCase): + + def _mkdtemp(self) -> str: + tmpdir = mkdtemp() + self.addCleanup(rmtree, tmpdir) + return tmpdir + + def test_use_finds_subs(self): + """Test that ops.lib.use("baz") works when baz is inside a package in the python path.""" + tmpdir = self._mkdtemp() + sys.path = [tmpdir] + + _mklib(tmpdir, "foo", "bar").write_text(dedent(""" + LIBNAME = "baz" + LIBAPI = 2 + LIBPATCH = 42 + LIBAUTHOR = "alice@example.com" + """)) + + # autoimport to reset things + ops.lib.autoimport() + + # ops.lib.use done by charm author + baz = ops.lib.use('baz', 2, 'alice@example.com') + self.assertEqual(baz.LIBNAME, 'baz') + self.assertEqual(baz.LIBAPI, 2) + self.assertEqual(baz.LIBPATCH, 42) + self.assertEqual(baz.LIBAUTHOR, 'alice@example.com') + + def test_use_finds_best_same_toplevel(self): + """Test that ops.lib.use("baz") works when there are two baz in the same toplevel.""" + + pkg_b = "foo" + lib_b = "bar" + patch_b = 40 + for pkg_a in ["foo", "fooA"]: + for lib_a in ["bar", "barA"]: + if (pkg_a, lib_a) == (pkg_b, lib_b): + # everything-is-the-same :-) + continue + for patch_a in [38, 42]: + desc = "A: {}/{}/{}; B: {}/{}/{}".format( + pkg_a, lib_a, patch_a, pkg_b, lib_b, patch_b) + with self.subTest(desc): + tmpdir = self._mkdtemp() + sys.path = [tmpdir] + + _mklib(tmpdir, pkg_a, lib_a).write_text(dedent(""" + LIBNAME = "baz" + LIBAPI = 2 + LIBPATCH = {} + LIBAUTHOR = "alice@example.com" + """).format(patch_a)) + + _mklib(tmpdir, pkg_b, lib_b).write_text(dedent(""" + LIBNAME = "baz" + LIBAPI = 2 + LIBPATCH = {} + LIBAUTHOR = "alice@example.com" + """).format(patch_b)) + + # autoimport to reset things + ops.lib.autoimport() + + # ops.lib.use done by charm author + baz = ops.lib.use('baz', 2, 'alice@example.com') + self.assertEqual(baz.LIBNAME, 'baz') + self.assertEqual(baz.LIBAPI, 2) + self.assertEqual(baz.LIBPATCH, max(patch_a, patch_b)) + self.assertEqual(baz.LIBAUTHOR, 'alice@example.com') + + def test_use_finds_best_diff_toplevel(self): + """Test that ops.lib.use("baz") works when there are two baz in the different toplevels.""" + + pkg_b = "foo" + lib_b = "bar" + patch_b = 40 + for pkg_a in ["foo", "fooA"]: + for lib_a in ["bar", "barA"]: + for patch_a in [38, 42]: + desc = "A: {}/{}/{}; B: {}/{}/{}".format( + pkg_a, lib_a, patch_a, pkg_b, lib_b, patch_b) + with 
self.subTest(desc): + tmpdirA = self._mkdtemp() + tmpdirB = self._mkdtemp() + sys.path = [tmpdirA, tmpdirB] + + _mklib(tmpdirA, pkg_a, lib_a).write_text(dedent(""" + LIBNAME = "baz" + LIBAPI = 2 + LIBPATCH = {} + LIBAUTHOR = "alice@example.com" + """).format(patch_a)) + + _mklib(tmpdirB, pkg_b, lib_b).write_text(dedent(""" + LIBNAME = "baz" + LIBAPI = 2 + LIBPATCH = {} + LIBAUTHOR = "alice@example.com" + """).format(patch_b)) + + # autoimport to reset things + ops.lib.autoimport() + + # ops.lib.use done by charm author + baz = ops.lib.use('baz', 2, 'alice@example.com') + self.assertEqual(baz.LIBNAME, 'baz') + self.assertEqual(baz.LIBAPI, 2) + self.assertEqual(baz.LIBPATCH, max(patch_a, patch_b)) + self.assertEqual(baz.LIBAUTHOR, 'alice@example.com') + + def test_none_found(self): + with self.assertRaises(ImportError): + ops.lib.use('foo', 1, 'alice@example.com') + + def test_from_scratch(self): + tmpdir = self._mkdtemp() + sys.path = [tmpdir] + + _mklib(tmpdir, "foo", "bar").write_text(dedent(""" + LIBNAME = "baz" + LIBAPI = 2 + LIBPATCH = 42 + LIBAUTHOR = "alice@example.com" + """)) + + # hard reset + ops.lib._libraries = None + + # sanity check that ops.lib.use works + baz = ops.lib.use('baz', 2, 'alice@example.com') + self.assertEqual(baz.LIBAPI, 2) + + def test_others_found(self): + tmpdir = self._mkdtemp() + sys.path = [tmpdir] + + _mklib(tmpdir, "foo", "bar").write_text(dedent(""" + LIBNAME = "baz" + LIBAPI = 2 + LIBPATCH = 42 + LIBAUTHOR = "alice@example.com" + """)) + + # reload + ops.lib.autoimport() + + # sanity check that ops.lib.use works + baz = ops.lib.use('baz', 2, 'alice@example.com') + self.assertEqual(baz.LIBAPI, 2) + + with self.assertRaises(ImportError): + ops.lib.use('baz', 1, 'alice@example.com') + + with self.assertRaises(ImportError): + ops.lib.use('baz', 2, 'bob@example.com') diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/test/test_log.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/test/test_log.py new file mode 100644 index 0000000000000000000000000000000000000000..b7f74d5c901ffb7833c1e5523f9f69a1959999e1 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/test/test_log.py @@ -0,0 +1,140 @@ +#!/usr/bin/python3 + +# Copyright 2020 Canonical Ltd. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
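+# Illustrative sketch (added commentary, not the ops implementation): the tests
+# below drive a handler in the spirit of ops.log.JujuLogHandler, which forwards
+# each record to the model backend as a (level name, message) pair; note that
+# FakeModelBackend below simply records whatever pair it receives, in order.
+import logging
+
+
+class _SketchJujuLogHandler(logging.Handler):
+    """Minimal stand-in showing the forwarding shape exercised below."""
+
+    def __init__(self, backend, level=logging.DEBUG):
+        super().__init__(level)
+        self.backend = backend
+
+    def emit(self, record):
+        # Matches the ('WARNING', 'bar')-style tuples the assertions expect.
+        self.backend.juju_log(record.levelname, self.format(record))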
+ +import io +import unittest +from unittest.mock import patch +import importlib + +import logging +import ops.log + + +class FakeModelBackend: + + def __init__(self): + self._calls = [] + + def calls(self, clear=False): + calls = self._calls + if clear: + self._calls = [] + return calls + + def juju_log(self, message, level): + self._calls.append((message, level)) + + +def reset_logging(): + logging.shutdown() + importlib.reload(logging) + + +class TestLogging(unittest.TestCase): + + def setUp(self): + self.backend = FakeModelBackend() + + reset_logging() + self.addCleanup(reset_logging) + + def test_default_logging(self): + ops.log.setup_root_logging(self.backend) + + logger = logging.getLogger() + self.assertEqual(logger.level, logging.DEBUG) + self.assertIsInstance(logger.handlers[0], ops.log.JujuLogHandler) + + test_cases = [( + lambda: logger.critical('critical'), [('CRITICAL', 'critical')] + ), ( + lambda: logger.error('error'), [('ERROR', 'error')] + ), ( + lambda: logger.warning('warning'), [('WARNING', 'warning')] + ), ( + lambda: logger.info('info'), [('INFO', 'info')] + ), ( + lambda: logger.debug('debug'), [('DEBUG', 'debug')] + )] + + for do, res in test_cases: + do() + calls = self.backend.calls(clear=True) + self.assertEqual(calls, res) + + def test_handler_filtering(self): + logger = logging.getLogger() + logger.setLevel(logging.INFO) + logger.addHandler(ops.log.JujuLogHandler(self.backend, logging.WARNING)) + logger.info('foo') + self.assertEqual(self.backend.calls(), []) + logger.warning('bar') + self.assertEqual(self.backend.calls(), [('WARNING', 'bar')]) + + def test_no_stderr_without_debug(self): + buffer = io.StringIO() + with patch('sys.stderr', buffer): + ops.log.setup_root_logging(self.backend, debug=False) + logger = logging.getLogger() + logger.debug('debug message') + logger.info('info message') + logger.warning('warning message') + logger.critical('critical message') + self.assertEqual( + self.backend.calls(), + [('DEBUG', 'debug message'), + ('INFO', 'info message'), + ('WARNING', 'warning message'), + ('CRITICAL', 'critical message'), + ]) + self.assertEqual(buffer.getvalue(), "") + + def test_debug_logging(self): + buffer = io.StringIO() + with patch('sys.stderr', buffer): + ops.log.setup_root_logging(self.backend, debug=True) + logger = logging.getLogger() + logger.debug('debug message') + logger.info('info message') + logger.warning('warning message') + logger.critical('critical message') + self.assertEqual( + self.backend.calls(), + [('DEBUG', 'debug message'), + ('INFO', 'info message'), + ('WARNING', 'warning message'), + ('CRITICAL', 'critical message'), + ]) + self.assertRegex( + buffer.getvalue(), + r"\d\d\d\d-\d\d-\d\d \d\d:\d\d:\d\d,\d\d\d DEBUG debug message\n" + r"\d\d\d\d-\d\d-\d\d \d\d:\d\d:\d\d,\d\d\d INFO info message\n" + r"\d\d\d\d-\d\d-\d\d \d\d:\d\d:\d\d,\d\d\d WARNING warning message\n" + r"\d\d\d\d-\d\d-\d\d \d\d:\d\d:\d\d,\d\d\d CRITICAL critical message\n" + ) + + def test_reduced_logging(self): + ops.log.setup_root_logging(self.backend) + logger = logging.getLogger() + logger.setLevel(logging.WARNING) + logger.debug('debug') + logger.info('info') + logger.warning('warning') + self.assertEqual(self.backend.calls(), [('WARNING', 'warning')]) + + +if __name__ == '__main__': + unittest.main() diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/test/test_main.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/test/test_main.py new file mode 100755 index 
0000000000000000000000000000000000000000..f62c859332d672fb4140488f2c9f1ad3841b30eb --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/test/test_main.py @@ -0,0 +1,878 @@ +# Copyright 2019-2020 Canonical Ltd. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import abc +import base64 +import logging +import os +import pickle +import shutil +import subprocess +import sys +import tempfile +import unittest +import importlib.util +import warnings +from pathlib import Path +from unittest.mock import patch + +from ops.charm import ( + CharmBase, + CharmEvents, + HookEvent, + InstallEvent, + StartEvent, + ConfigChangedEvent, + UpgradeCharmEvent, + UpdateStatusEvent, + LeaderSettingsChangedEvent, + RelationJoinedEvent, + RelationChangedEvent, + RelationDepartedEvent, + RelationBrokenEvent, + RelationEvent, + StorageAttachedEvent, + ActionEvent, + CollectMetricsEvent, +) +from ops.main import main +from ops.version import version + +from .test_helpers import fake_script, fake_script_calls + +# This relies on the expected repository structure to find a path to +# source of the charm under test. +TEST_CHARM_DIR = Path(__file__ + '/../charms/test_main').resolve() + +VERSION_LOGLINE = [ + 'juju-log', '--log-level', 'DEBUG', + 'Operator Framework {} up and running.'.format(version), +] +logger = logging.getLogger(__name__) + + +class SymlinkTargetError(Exception): + pass + + +class EventSpec: + def __init__(self, event_type, event_name, env_var=None, + relation_id=None, remote_app=None, remote_unit=None, + charm_config=None, model_name=None): + self.event_type = event_type + self.event_name = event_name + self.env_var = env_var + self.relation_id = relation_id + self.remote_app = remote_app + self.remote_unit = remote_unit + self.charm_config = charm_config + self.model_name = model_name + + +class CharmInitTestCase(unittest.TestCase): + + def _check(self, charm_class): + """Helper for below tests.""" + fake_environ = { + 'JUJU_UNIT_NAME': 'test_main/0', + 'JUJU_MODEL_NAME': 'mymodel', + } + with patch.dict(os.environ, fake_environ): + with patch('ops.main._emit_charm_event'): + with patch('ops.main._get_charm_dir') as mock_charmdir: + with tempfile.TemporaryDirectory() as tmpdirname: + tmpdirname = Path(tmpdirname) + fake_metadata = tmpdirname / 'metadata.yaml' + with fake_metadata.open('wb') as fh: + fh.write(b'name: test') + mock_charmdir.return_value = tmpdirname + + with warnings.catch_warnings(record=True) as warnings_cm: + main(charm_class) + + return warnings_cm + + def test_init_signature_passthrough(self): + class MyCharm(CharmBase): + + def __init__(self, *args): + super().__init__(*args) + + warn_cm = self._check(MyCharm) + self.assertFalse(warn_cm) + + def test_init_signature_both_arguments(self): + class MyCharm(CharmBase): + + def __init__(self, framework, somekey): + super().__init__(framework, somekey) + + warn_cm = self._check(MyCharm) + self.assertEqual(len(warn_cm), 1) + (warn,) = warn_cm + self.assertTrue(issubclass(warn.category, DeprecationWarning)) + 
self.assertEqual(str(warn.message), ( + "the second argument, 'key', has been deprecated and will be removed " + "after the 0.7 release")) + + def test_init_signature_only_framework(self): + class MyCharm(CharmBase): + + def __init__(self, framework): + super().__init__(framework) + + warn_cm = self._check(MyCharm) + self.assertFalse(warn_cm) + + +class _TestMain(abc.ABC): + + @abc.abstractmethod + def _setup_entry_point(self, directory, entry_point): + """Set up the given entry point in the given directory. + + If not using dispatch, that would be a symlink / + pointing at src/charm.py; if using dispatch that would be the dispatch + symlink. It could also not be a symlink... + """ + return NotImplemented + + @abc.abstractmethod + def _call_event(self, rel_path, env): + """Set up the environment and call (i.e. run) the given event.""" + return NotImplemented + + @abc.abstractmethod + def test_setup_event_links(self): + """Test auto-creation of symlinks caused by initial events. + + Depending on the combination of dispatch and non-dispatch, this should + be checking for the creation or the _lack_ of creation, as appropriate. + """ + return NotImplemented + + def setUp(self): + self._setup_charm_dir() + + _, tmp_file = tempfile.mkstemp(dir=str(self._tmpdir)) + self._state_file = Path(tmp_file) + + # Relations events are defined dynamically and modify the class attributes. + # We use a subclass temporarily to prevent these side effects from leaking. + class TestCharmEvents(CharmEvents): + pass + CharmBase.on = TestCharmEvents() + + def cleanup(): + CharmBase.on = CharmEvents() + self.addCleanup(cleanup) + + fake_script(self, 'juju-log', "exit 0") + + # set to something other than None for tests that care + self.stdout = None + self.stderr = None + + def _setup_charm_dir(self): + self._tmpdir = Path(tempfile.mkdtemp(prefix='tmp-ops-test-')) + self.addCleanup(shutil.rmtree, str(self._tmpdir)) + self.JUJU_CHARM_DIR = self._tmpdir / 'test_main' + self.hooks_dir = self.JUJU_CHARM_DIR / 'hooks' + charm_path = str(self.JUJU_CHARM_DIR / 'src/charm.py') + self.charm_exec_path = os.path.relpath(charm_path, + str(self.hooks_dir)) + shutil.copytree(str(TEST_CHARM_DIR), str(self.JUJU_CHARM_DIR)) + + charm_spec = importlib.util.spec_from_file_location("charm", charm_path) + self.charm_module = importlib.util.module_from_spec(charm_spec) + charm_spec.loader.exec_module(self.charm_module) + + self._prepare_initial_hooks() + + def _prepare_initial_hooks(self): + initial_hooks = ('install', 'start', 'upgrade-charm', 'disks-storage-attached') + self.hooks_dir.mkdir() + for hook in initial_hooks: + self._setup_entry_point(self.hooks_dir, hook) + + def _prepare_actions(self): + # TODO: jam 2020-06-16 this same work could be done just triggering the 'install' event + # of the charm, it might be cleaner to not set up entry points directly here. 
+        actions_dir_name = 'actions'
+        actions_dir = self.JUJU_CHARM_DIR / actions_dir_name
+        actions_dir.mkdir()
+        for action_name in ('start', 'foo-bar', 'get-model-name', 'get-status'):
+            self._setup_entry_point(actions_dir, action_name)
+
+    def _read_and_clear_state(self):
+        state = None
+        if self._state_file.stat().st_size:
+            with self._state_file.open('r+b') as state_file:
+                state = pickle.load(state_file)
+                state_file.truncate(0)
+        return state
+
+    def _simulate_event(self, event_spec):
+        ppath = Path(__file__).parent
+        pypath = str(ppath.parent)
+        if 'PYTHONPATH' in os.environ:
+            pypath += ':' + os.environ['PYTHONPATH']
+        env = {
+            'PATH': "{}:{}".format(ppath / 'bin', os.environ['PATH']),
+            'PYTHONPATH': pypath,
+            'JUJU_CHARM_DIR': str(self.JUJU_CHARM_DIR),
+            'JUJU_UNIT_NAME': 'test_main/0',
+            'CHARM_CONFIG': event_spec.charm_config,
+        }
+        if issubclass(event_spec.event_type, RelationEvent):
+            rel_name = event_spec.event_name.split('_')[0]
+            env.update({
+                'JUJU_RELATION': rel_name,
+                'JUJU_RELATION_ID': str(event_spec.relation_id),
+            })
+            remote_app = event_spec.remote_app
+            # For juju < 2.7 app name is extracted from JUJU_REMOTE_UNIT.
+            if remote_app is not None:
+                env['JUJU_REMOTE_APP'] = remote_app
+
+            remote_unit = event_spec.remote_unit
+            if remote_unit is None:
+                remote_unit = ''
+
+            env['JUJU_REMOTE_UNIT'] = remote_unit
+        else:
+            env.update({
+                'JUJU_REMOTE_UNIT': '',
+                'JUJU_REMOTE_APP': '',
+            })
+        if issubclass(event_spec.event_type, ActionEvent):
+            event_filename = event_spec.event_name[:-len('_action')].replace('_', '-')
+            env.update({
+                event_spec.env_var: event_filename,
+            })
+            if event_spec.env_var == 'JUJU_ACTION_NAME':
+                event_dir = 'actions'
+            else:
+                raise RuntimeError('invalid env var name specified for an action event')
+        else:
+            event_filename = event_spec.event_name.replace('_', '-')
+            event_dir = 'hooks'
+        if event_spec.model_name is not None:
+            env['JUJU_MODEL_NAME'] = event_spec.model_name
+
+        self._call_event(Path(event_dir, event_filename), env)
+        return self._read_and_clear_state()
+
+    def test_event_reemitted(self):
+        # base64 encoding is used to avoid null bytes.
+        charm_config = base64.b64encode(pickle.dumps({
+            'STATE_FILE': self._state_file,
+        }))
+
+        # First run "install" to make sure all hooks are set up.
+        state = self._simulate_event(EventSpec(InstallEvent, 'install', charm_config=charm_config))
+        self.assertEqual(state['observed_event_types'], [InstallEvent])
+
+        state = self._simulate_event(EventSpec(ConfigChangedEvent, 'config-changed',
+                                               charm_config=charm_config))
+        self.assertEqual(state['observed_event_types'], [ConfigChangedEvent])
+
+        # Re-emit should pick the deferred config-changed.
+        state = self._simulate_event(EventSpec(UpdateStatusEvent, 'update-status',
+                                               charm_config=charm_config))
+        self.assertEqual(state['observed_event_types'], [ConfigChangedEvent, UpdateStatusEvent])
+
+    def test_no_reemission_on_collect_metrics(self):
+        # base64 encoding is used to avoid null bytes.
+        charm_config = base64.b64encode(pickle.dumps({
+            'STATE_FILE': self._state_file,
+        }))
+        fake_script(self, 'add-metric', 'exit 0')
+
+        # First run "install" to make sure all hooks are set up.
+ state = self._simulate_event(EventSpec(InstallEvent, 'install', charm_config=charm_config)) + self.assertEqual(state['observed_event_types'], [InstallEvent]) + + state = self._simulate_event(EventSpec(ConfigChangedEvent, 'config-changed', + charm_config=charm_config)) + self.assertEqual(state['observed_event_types'], [ConfigChangedEvent]) + + # Re-emit should not pick the deferred config-changed because + # collect-metrics runs in a restricted context. + state = self._simulate_event(EventSpec(CollectMetricsEvent, 'collect-metrics', + charm_config=charm_config)) + self.assertEqual(state['observed_event_types'], [CollectMetricsEvent]) + + def test_multiple_events_handled(self): + self._prepare_actions() + + charm_config = base64.b64encode(pickle.dumps({ + 'STATE_FILE': self._state_file, + })) + actions_charm_config = base64.b64encode(pickle.dumps({ + 'STATE_FILE': self._state_file, + 'USE_ACTIONS': True, + })) + + fake_script(self, 'action-get', "echo '{}'") + + # Sample events with a different amount of dashes used + # and with endpoints from different sections of metadata.yaml + events_under_test = [( + EventSpec(InstallEvent, 'install', + charm_config=charm_config), + {}, + ), ( + EventSpec(StartEvent, 'start', + charm_config=charm_config), + {}, + ), ( + EventSpec(UpdateStatusEvent, 'update_status', + charm_config=charm_config), + {}, + ), ( + EventSpec(LeaderSettingsChangedEvent, 'leader_settings_changed', + charm_config=charm_config), + {}, + ), ( + EventSpec(RelationJoinedEvent, 'db_relation_joined', + relation_id=1, + remote_app='remote', remote_unit='remote/0', + charm_config=charm_config), + {'relation_name': 'db', + 'relation_id': 1, + 'app_name': 'remote', + 'unit_name': 'remote/0'}, + ), ( + EventSpec(RelationChangedEvent, 'mon_relation_changed', + relation_id=2, + remote_app='remote', remote_unit='remote/0', + charm_config=charm_config), + {'relation_name': 'mon', + 'relation_id': 2, + 'app_name': 'remote', + 'unit_name': 'remote/0'}, + ), ( + EventSpec(RelationChangedEvent, 'mon_relation_changed', + relation_id=2, + remote_app='remote', remote_unit=None, + charm_config=charm_config), + {'relation_name': 'mon', + 'relation_id': 2, + 'app_name': 'remote', + 'unit_name': None}, + ), ( + EventSpec(RelationDepartedEvent, 'mon_relation_departed', + relation_id=2, + remote_app='remote', remote_unit='remote/0', + charm_config=charm_config), + {'relation_name': 'mon', + 'relation_id': 2, + 'app_name': 'remote', + 'unit_name': 'remote/0'}, + ), ( + EventSpec(RelationBrokenEvent, 'ha_relation_broken', + relation_id=3, + charm_config=charm_config), + {'relation_name': 'ha', + 'relation_id': 3}, + ), ( + # Events without a remote app specified (for Juju < 2.7). 
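+            # (_simulate_event leaves JUJU_REMOTE_APP unset in this case, so the
+            # app name in the expected data has to be derived from
+            # JUJU_REMOTE_UNIT, i.e. 'remote' from 'remote/0'.)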
+ EventSpec(RelationJoinedEvent, 'db_relation_joined', + relation_id=1, + remote_unit='remote/0', + charm_config=charm_config), + {'relation_name': 'db', + 'relation_id': 1, + 'app_name': 'remote', + 'unit_name': 'remote/0'}, + ), ( + EventSpec(RelationChangedEvent, 'mon_relation_changed', + relation_id=2, + remote_unit='remote/0', + charm_config=charm_config), + {'relation_name': 'mon', + 'relation_id': 2, + 'app_name': 'remote', + 'unit_name': 'remote/0'}, + ), ( + EventSpec(RelationDepartedEvent, 'mon_relation_departed', + relation_id=2, + remote_unit='remote/0', + charm_config=charm_config), + {'relation_name': 'mon', + 'relation_id': 2, + 'app_name': 'remote', + 'unit_name': 'remote/0'}, + ), ( + EventSpec(ActionEvent, 'start_action', + env_var='JUJU_ACTION_NAME', + charm_config=actions_charm_config), + {}, + ), ( + EventSpec(ActionEvent, 'foo_bar_action', + env_var='JUJU_ACTION_NAME', + charm_config=actions_charm_config), + {}, + )] + + logger.debug('Expected events %s', events_under_test) + + # First run "install" to make sure all hooks are set up. + self._simulate_event(EventSpec(InstallEvent, 'install', charm_config=charm_config)) + + # Simulate hook executions for every event. + for event_spec, expected_event_data in events_under_test: + state = self._simulate_event(event_spec) + + state_key = 'on_' + event_spec.event_name + handled_events = state.get(state_key, []) + + # Make sure that a handler for that event was called once. + self.assertEqual(len(handled_events), 1) + # Make sure the event handled by the Charm has the right type. + handled_event_type = handled_events[0] + self.assertEqual(handled_event_type, event_spec.event_type) + + self.assertEqual(state['observed_event_types'], [event_spec.event_type]) + + if event_spec.event_name in expected_event_data: + self.assertEqual(state[event_spec.event_name + '_data'], + expected_event_data[event_spec.event_name]) + + def test_event_not_implemented(self): + """Make sure events without implementation do not cause non-zero exit. + """ + charm_config = base64.b64encode(pickle.dumps({ + 'STATE_FILE': self._state_file, + })) + + # Simulate a scenario where there is a symlink for an event that + # a charm does not know how to handle. + hook_path = self.JUJU_CHARM_DIR / 'hooks/not-implemented-event' + # This will be cleared up in tearDown. 
+ hook_path.symlink_to('install') + + try: + self._simulate_event(EventSpec(HookEvent, 'not-implemented-event', + charm_config=charm_config)) + except subprocess.CalledProcessError: + self.fail('Event simulation for an unsupported event' + ' results in a non-zero exit code returned') + + def test_collect_metrics(self): + indicator_file = self.JUJU_CHARM_DIR / 'indicator' + charm_config = base64.b64encode(pickle.dumps({ + 'STATE_FILE': self._state_file, + 'INDICATOR_FILE': indicator_file + })) + fake_script(self, 'add-metric', 'exit 0') + fake_script(self, 'juju-log', 'exit 0') + self._simulate_event(EventSpec(InstallEvent, 'install', charm_config=charm_config)) + # Clear the calls during 'install' + fake_script_calls(self, clear=True) + self._simulate_event(EventSpec(CollectMetricsEvent, 'collect_metrics', + charm_config=charm_config)) + + expected = [ + VERSION_LOGLINE, + ['juju-log', '--log-level', 'DEBUG', 'Emitting Juju event collect_metrics.'], + ['add-metric', '--labels', 'bar=4.2', 'foo=42'], + ] + calls = fake_script_calls(self) + + if self.has_dispatch: + expected.insert(1, [ + 'juju-log', '--log-level', 'DEBUG', + 'Legacy hooks/collect-metrics does not exist.']) + + self.assertEqual(calls, expected) + + def test_logger(self): + charm_config = base64.b64encode(pickle.dumps({ + 'STATE_FILE': self._state_file, + 'USE_LOG_ACTIONS': True, + })) + fake_script(self, 'action-get', "echo '{}'") + + test_cases = [( + EventSpec(ActionEvent, 'log_critical_action', env_var='JUJU_ACTION_NAME', + charm_config=charm_config), + ['juju-log', '--log-level', 'CRITICAL', 'super critical'], + ), ( + EventSpec(ActionEvent, 'log_error_action', + env_var='JUJU_ACTION_NAME', + charm_config=charm_config), + ['juju-log', '--log-level', 'ERROR', 'grave error'], + ), ( + EventSpec(ActionEvent, 'log_warning_action', + env_var='JUJU_ACTION_NAME', + charm_config=charm_config), + ['juju-log', '--log-level', 'WARNING', 'wise warning'], + ), ( + EventSpec(ActionEvent, 'log_info_action', + env_var='JUJU_ACTION_NAME', + charm_config=charm_config), + ['juju-log', '--log-level', 'INFO', 'useful info'], + )] + + # Set up action symlinks. 
+ self._simulate_event(EventSpec(InstallEvent, 'install', + charm_config=charm_config)) + + for event_spec, calls in test_cases: + self._simulate_event(event_spec) + self.assertIn(calls, fake_script_calls(self, clear=True)) + + def test_excepthook(self): + charm_config = base64.b64encode(pickle.dumps({ + 'STATE_FILE': self._state_file, + 'TRY_EXCEPTHOOK': True, + })) + with self.assertRaises(subprocess.CalledProcessError): + self._simulate_event(EventSpec(InstallEvent, 'install', + charm_config=charm_config)) + + calls = [' '.join(i) for i in fake_script_calls(self)] + + self.assertEqual(calls.pop(0), ' '.join(VERSION_LOGLINE)) + + if self.has_dispatch: + self.assertEqual( + calls.pop(0), + 'juju-log --log-level DEBUG Legacy hooks/install does not exist.') + + self.maxDiff = None + self.assertRegex( + calls[0], + '(?ms)juju-log --log-level ERROR Uncaught exception while in charm code:\n' + 'Traceback .most recent call last.:\n' + ' .*' + ' raise RuntimeError."failing as requested".\n' + 'RuntimeError: failing as requested' + ) + self.assertEqual(len(calls), 1, "expected 1 call, but got extra: {}".format(calls[1:])) + + def test_sets_model_name(self): + self._prepare_actions() + + actions_charm_config = base64.b64encode(pickle.dumps({ + 'STATE_FILE': self._state_file, + 'USE_ACTIONS': True, + })) + + fake_script(self, 'action-get', "echo '{}'") + state = self._simulate_event(EventSpec( + ActionEvent, 'get_model_name_action', + env_var='JUJU_ACTION_NAME', + model_name='test-model-name', + charm_config=actions_charm_config)) + self.assertIsNotNone(state) + self.assertEqual(state['_on_get_model_name_action'], ['test-model-name']) + + def test_has_valid_status(self): + self._prepare_actions() + + actions_charm_config = base64.b64encode(pickle.dumps({ + 'STATE_FILE': self._state_file, + 'USE_ACTIONS': True, + })) + + fake_script(self, 'action-get', "echo '{}'") + fake_script(self, 'status-get', """echo '{"status": "unknown", "message": ""}'""") + state = self._simulate_event(EventSpec( + ActionEvent, 'get_status_action', + env_var='JUJU_ACTION_NAME', + charm_config=actions_charm_config)) + self.assertIsNotNone(state) + self.assertEqual(state['status_name'], 'unknown') + self.assertEqual(state['status_message'], '') + fake_script( + self, 'status-get', """echo '{"status": "blocked", "message": "help meeee"}'""") + state = self._simulate_event(EventSpec( + ActionEvent, 'get_status_action', + env_var='JUJU_ACTION_NAME', + charm_config=actions_charm_config)) + self.assertIsNotNone(state) + self.assertEqual(state['status_name'], 'blocked') + self.assertEqual(state['status_message'], 'help meeee') + + def test_foo(self): + # base64 encoding is used to avoid null bytes. + charm_config = base64.b64encode(pickle.dumps({ + 'STATE_FILE': self._state_file, + })) + + # First run "install" to make sure all hooks are set up. + state = self._simulate_event(EventSpec(InstallEvent, 'install', charm_config=charm_config)) + self.assertEqual(state['observed_event_types'], [InstallEvent]) + + +class TestMainWithNoDispatch(_TestMain, unittest.TestCase): + has_dispatch = False + hooks_are_symlinks = True + + def _setup_entry_point(self, directory, entry_point): + path = directory / entry_point + path.symlink_to(self.charm_exec_path) + + def _call_event(self, rel_path, env): + event_file = self.JUJU_CHARM_DIR / rel_path + # Note that sys.executable is used to make sure we are using the same + # interpreter for the child process to support virtual environments. 
+ subprocess.run( + [sys.executable, str(event_file)], + check=True, env=env, cwd=str(self.JUJU_CHARM_DIR)) + + def test_setup_event_links(self): + """Test auto-creation of symlinks caused by initial events. + """ + all_event_hooks = ['hooks/' + e.replace("_", "-") + for e in self.charm_module.Charm.on.events().keys()] + charm_config = base64.b64encode(pickle.dumps({ + 'STATE_FILE': self._state_file, + })) + initial_events = { + EventSpec(InstallEvent, 'install', charm_config=charm_config), + EventSpec(StorageAttachedEvent, 'disks-storage-attached', charm_config=charm_config), + EventSpec(StartEvent, 'start', charm_config=charm_config), + EventSpec(UpgradeCharmEvent, 'upgrade-charm', charm_config=charm_config), + } + + def _assess_event_links(event_spec): + self.assertTrue(self.hooks_dir / event_spec.event_name in self.hooks_dir.iterdir()) + for event_hook in all_event_hooks: + hook_path = self.JUJU_CHARM_DIR / event_hook + self.assertTrue(hook_path.exists(), 'Missing hook: ' + event_hook) + if self.hooks_are_symlinks: + self.assertEqual(os.readlink(str(hook_path)), self.charm_exec_path) + + for initial_event in initial_events: + self._setup_charm_dir() + + self._simulate_event(initial_event) + _assess_event_links(initial_event) + # Make sure it is idempotent. + self._simulate_event(initial_event) + _assess_event_links(initial_event) + + def test_setup_action_links(self): + charm_config = base64.b64encode(pickle.dumps({ + 'STATE_FILE': self._state_file, + })) + self._simulate_event(EventSpec(InstallEvent, 'install', charm_config=charm_config)) + # foo-bar is one of the actions defined in actions.yaml + action_hook = self.JUJU_CHARM_DIR / 'actions' / 'foo-bar' + self.assertTrue(action_hook.exists()) + + +class TestMainWithNoDispatchButJujuIsDispatchAware(TestMainWithNoDispatch): + def _call_event(self, rel_path, env): + env["JUJU_DISPATCH_PATH"] = str(rel_path) + super()._call_event(rel_path, env) + + +class TestMainWithNoDispatchButScriptsAreCopies(TestMainWithNoDispatch): + hooks_are_symlinks = False + + def _setup_entry_point(self, directory, entry_point): + charm_path = str(self.JUJU_CHARM_DIR / 'src/charm.py') + path = directory / entry_point + shutil.copy(charm_path, str(path)) + + +class TestMainWithDispatch(_TestMain, unittest.TestCase): + has_dispatch = True + + def _setup_entry_point(self, directory, entry_point): + path = self.JUJU_CHARM_DIR / 'dispatch' + if not path.exists(): + path.symlink_to('src/charm.py') + + def _call_event(self, rel_path, env): + env["JUJU_DISPATCH_PATH"] = str(rel_path) + dispatch = self.JUJU_CHARM_DIR / 'dispatch' + subprocess.run( + [sys.executable, str(dispatch)], + stdout=self.stdout, + stderr=self.stderr, + check=True, env=env, cwd=str(self.JUJU_CHARM_DIR)) + + def test_setup_event_links(self): + """Test auto-creation of symlinks caused by initial events does _not_ happen when using dispatch. 
+ """ + all_event_hooks = ['hooks/' + e.replace("_", "-") + for e in self.charm_module.Charm.on.events().keys()] + charm_config = base64.b64encode(pickle.dumps({ + 'STATE_FILE': self._state_file, + })) + initial_events = { + EventSpec(InstallEvent, 'install', charm_config=charm_config), + EventSpec(StorageAttachedEvent, 'disks-storage-attached', charm_config=charm_config), + EventSpec(StartEvent, 'start', charm_config=charm_config), + EventSpec(UpgradeCharmEvent, 'upgrade-charm', charm_config=charm_config), + } + + def _assess_event_links(event_spec): + self.assertNotIn(self.hooks_dir / event_spec.event_name, self.hooks_dir.iterdir()) + for event_hook in all_event_hooks: + self.assertFalse((self.JUJU_CHARM_DIR / event_hook).exists(), + 'Spurious hook: ' + event_hook) + + for initial_event in initial_events: + self._setup_charm_dir() + + self._simulate_event(initial_event) + _assess_event_links(initial_event) + + def test_hook_and_dispatch(self): + charm_config = base64.b64encode(pickle.dumps({ + 'STATE_FILE': self._state_file, + })) + + old_path = self.fake_script_path + self.fake_script_path = self.hooks_dir + fake_script(self, 'install', 'exit 0') + state = self._simulate_event(EventSpec(InstallEvent, 'install', charm_config=charm_config)) + + # the script was called, *and*, the .on. was called + self.assertEqual(fake_script_calls(self), [['install', '']]) + self.assertEqual(state['observed_event_types'], [InstallEvent]) + + self.fake_script_path = old_path + self.assertEqual(fake_script_calls(self), [ + VERSION_LOGLINE, + ['juju-log', '--log-level', 'INFO', 'Running legacy hooks/install.'], + ['juju-log', '--log-level', 'DEBUG', 'Legacy hooks/install exited with status 0.'], + ['juju-log', '--log-level', 'DEBUG', 'Emitting Juju event install.'], + ]) + + def test_non_executable_hook_and_dispatch(self): + charm_config = base64.b64encode(pickle.dumps({ + 'STATE_FILE': self._state_file, + })) + + (self.hooks_dir / "install").write_text("") + state = self._simulate_event(EventSpec(InstallEvent, 'install', charm_config=charm_config)) + + self.assertEqual(state['observed_event_types'], [InstallEvent]) + + self.assertEqual(fake_script_calls(self), [ + VERSION_LOGLINE, + ['juju-log', '--log-level', 'WARNING', + 'Legacy hooks/install exists but is not executable.'], + ['juju-log', '--log-level', 'DEBUG', 'Emitting Juju event install.'], + ]) + + def test_hook_and_dispatch_with_failing_hook(self): + self.stdout = self.stderr = tempfile.TemporaryFile() + self.addCleanup(self.stdout.close) + + charm_config = base64.b64encode(pickle.dumps({ + 'STATE_FILE': self._state_file, + })) + + old_path = self.fake_script_path + self.fake_script_path = self.hooks_dir + fake_script(self, 'install', 'exit 42') + event = EventSpec(InstallEvent, 'install', charm_config=charm_config) + with self.assertRaises(subprocess.CalledProcessError): + self._simulate_event(event) + self.fake_script_path = old_path + + self.stdout.seek(0) + self.assertEqual(self.stdout.read(), b'') + calls = fake_script_calls(self) + expected = [ + VERSION_LOGLINE, + ['juju-log', '--log-level', 'INFO', 'Running legacy hooks/install.'], + ['juju-log', '--log-level', 'WARNING', 'Legacy hooks/install exited with status 42.'], + ] + self.assertEqual(calls, expected) + + def test_hook_and_dispatch_but_hook_is_dispatch(self): + charm_config = base64.b64encode(pickle.dumps({ + 'STATE_FILE': self._state_file, + })) + event = EventSpec(InstallEvent, 'install', charm_config=charm_config) + hook_path = self.hooks_dir / 'install' + for ((rel, ind), path) 
in { + # relative and indirect + (True, True): Path('../dispatch'), + # relative and direct + (True, False): Path(self.charm_exec_path), + # absolute and direct + (False, False): (self.hooks_dir / self.charm_exec_path).resolve(), + # absolute and indirect + (False, True): self.JUJU_CHARM_DIR / 'dispatch', + }.items(): + with self.subTest(path=path, rel=rel, ind=ind): + # sanity check + self.assertEqual(path.is_absolute(), not rel) + self.assertEqual(path.name == 'dispatch', ind) + try: + hook_path.symlink_to(path) + + state = self._simulate_event(event) + + # the .on. was only called once + self.assertEqual(state['observed_event_types'], [InstallEvent]) + self.assertEqual(state['on_install'], [InstallEvent]) + finally: + hook_path.unlink() + + def test_hook_and_dispatch_but_hook_is_dispatch_copy(self): + charm_config = base64.b64encode(pickle.dumps({ + 'STATE_FILE': self._state_file, + })) + hook_path = self.hooks_dir / 'install' + path = (self.hooks_dir / self.charm_exec_path).resolve() + shutil.copy(str(path), str(hook_path)) + fake_script(self, 'juju-log', 'exit 0') + + event = EventSpec(InstallEvent, 'install', charm_config=charm_config) + state = self._simulate_event(event) + + # the .on. was only called once + self.assertEqual(state['observed_event_types'], [InstallEvent]) + self.assertEqual(state['on_install'], [InstallEvent]) + self.assertEqual(fake_script_calls(self), [ + VERSION_LOGLINE, + ['juju-log', '--log-level', 'INFO', 'Running legacy hooks/install.'], + VERSION_LOGLINE, # because it called itself + ['juju-log', '--log-level', 'DEBUG', 'Charm called itself via hooks/install.'], + ['juju-log', '--log-level', 'DEBUG', 'Legacy hooks/install exited with status 0.'], + ['juju-log', '--log-level', 'DEBUG', 'Emitting Juju event install.'], + ]) + + +class TestMainWithDispatchAsScript(TestMainWithDispatch): + """Here dispatch is a script that execs the charm.py instead of a symlink. + """ + + has_dispatch = True + + def _setup_entry_point(self, directory, entry_point): + path = self.JUJU_CHARM_DIR / 'dispatch' + if not path.exists(): + path.write_text('#!/bin/sh\nexec "{}" "{}"\n'.format( + sys.executable, + self.JUJU_CHARM_DIR / 'src/charm.py')) + path.chmod(0o755) + + def _call_event(self, rel_path, env): + env["JUJU_DISPATCH_PATH"] = str(rel_path) + dispatch = self.JUJU_CHARM_DIR / 'dispatch' + subprocess.check_call([str(dispatch)], + env=env, cwd=str(self.JUJU_CHARM_DIR)) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/test/test_model.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/test/test_model.py new file mode 100755 index 0000000000000000000000000000000000000000..8415efa76a3b85a1bfe3ef20497ca00987c1d6a2 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/test/test_model.py @@ -0,0 +1,1329 @@ +#!/usr/bin/python3 +# Copyright 2019 Canonical Ltd. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
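+
+# These tests exercise ops.model both through ops.testing.Harness and through
+# a real ops.model._ModelBackend whose hook tools (relation-get, status-set,
+# network-get, ...) are replaced by fake_script stubs.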
+ +from collections import OrderedDict +import json +import ipaddress +import os +import pathlib +from textwrap import dedent +import unittest + +import ops.model +import ops.charm +import ops.testing +from ops.charm import RelationMeta, RelationRole + +from test.test_helpers import fake_script, fake_script_calls + + +class TestModel(unittest.TestCase): + + def setUp(self): + self.harness = ops.testing.Harness(ops.charm.CharmBase, meta=''' + name: myapp + provides: + db0: + interface: db0 + requires: + db1: + interface: db1 + peers: + db2: + interface: db2 + ''') + self.relation_id_db0 = self.harness.add_relation('db0', 'db') + self.model = self.harness.model + + def test_model_attributes(self): + self.assertIs(self.model.app, self.model.unit.app) + self.assertIsNone(self.model.name) + + def test_model_name_from_backend(self): + self.harness.set_model_name('default') + m = ops.model.Model(ops.charm.CharmMeta(), self.harness._backend) + self.assertEqual(m.name, 'default') + with self.assertRaises(AttributeError): + m.name = "changes-disallowed" + + def test_relations_keys(self): + rel_app1 = self.harness.add_relation('db1', 'remoteapp1') + self.harness.add_relation_unit(rel_app1, 'remoteapp1/0') + self.harness.add_relation_unit(rel_app1, 'remoteapp1/1') + rel_app2 = self.harness.add_relation('db1', 'remoteapp2') + self.harness.add_relation_unit(rel_app2, 'remoteapp2/0') + + # We invalidate db1 so that it causes us to reload it + self.model.relations._invalidate('db1') + self.resetBackendCalls() + for relation in self.model.relations['db1']: + self.assertIn(self.model.unit, relation.data) + unit_from_rel = next(filter(lambda u: u.name == 'myapp/0', relation.data.keys())) + self.assertIs(self.model.unit, unit_from_rel) + + self.assertBackendCalls([ + ('relation_ids', 'db1'), + ('relation_list', rel_app1), + ('relation_list', rel_app2), + ]) + + def test_get_relation(self): + # one relation on db1 + # two relations on db0 + # no relations on db2 + relation_id_db1 = self.harness.add_relation('db1', 'remoteapp1') + self.harness.add_relation_unit(relation_id_db1, 'remoteapp1/0') + relation_id_db0_b = self.harness.add_relation('db0', 'another') + self.resetBackendCalls() + + with self.assertRaises(ops.model.ModelError): + # You have to specify it by just the integer ID + self.model.get_relation('db1', 'db1:{}'.format(relation_id_db1)) + rel_db1 = self.model.get_relation('db1', relation_id_db1) + self.assertIsInstance(rel_db1, ops.model.Relation) + self.assertBackendCalls([ + ('relation_ids', 'db1'), + ('relation_list', relation_id_db1), + ]) + dead_rel = self.model.get_relation('db1', 7) + self.assertIsInstance(dead_rel, ops.model.Relation) + self.assertEqual(set(dead_rel.data.keys()), {self.model.unit, self.model.unit.app}) + self.assertEqual(dead_rel.data[self.model.unit], {}) + self.assertBackendCalls([ + ('relation_list', 7), + ('relation_get', 7, 'myapp/0', False), + ]) + + self.assertIsNone(self.model.get_relation('db2')) + self.assertBackendCalls([ + ('relation_ids', 'db2'), + ]) + self.assertIs(self.model.get_relation('db1'), rel_db1) + with self.assertRaises(ops.model.TooManyRelatedAppsError): + self.model.get_relation('db0') + + self.assertBackendCalls([ + ('relation_ids', 'db0'), + ('relation_list', self.relation_id_db0), + ('relation_list', relation_id_db0_b), + ]) + + def test_peer_relation_app(self): + self.harness.add_relation('db2', 'myapp') + rel_dbpeer = self.model.get_relation('db2') + self.assertIs(rel_dbpeer.app, self.model.app) + + def test_remote_units_is_our(self): + 
relation_id = self.harness.add_relation('db1', 'remoteapp1') + self.harness.add_relation_unit(relation_id, 'remoteapp1/0') + self.harness.add_relation_unit(relation_id, 'remoteapp1/1') + self.resetBackendCalls() + + for u in self.model.get_relation('db1').units: + self.assertFalse(u._is_our_unit) + self.assertFalse(u.app._is_our_app) + + self.assertBackendCalls([ + ('relation_ids', 'db1'), + ('relation_list', relation_id) + ]) + + def test_our_unit_is_our(self): + self.assertTrue(self.model.unit._is_our_unit) + self.assertTrue(self.model.unit.app._is_our_app) + + def test_unit_relation_data(self): + relation_id = self.harness.add_relation('db1', 'remoteapp1') + self.harness.add_relation_unit(relation_id, 'remoteapp1/0') + self.harness.update_relation_data( + relation_id, + 'remoteapp1/0', + {'host': 'remoteapp1-0'}) + self.model.relations._invalidate('db1') + self.resetBackendCalls() + + random_unit = self.model.get_unit('randomunit/0') + with self.assertRaises(KeyError): + self.model.get_relation('db1').data[random_unit] + remoteapp1_0 = next(filter(lambda u: u.name == 'remoteapp1/0', + self.model.get_relation('db1').units)) + self.assertEqual(self.model.get_relation('db1').data[remoteapp1_0], + {'host': 'remoteapp1-0'}) + + self.assertBackendCalls([ + ('relation_ids', 'db1'), + ('relation_list', relation_id), + ('relation_get', relation_id, 'remoteapp1/0', False), + ]) + + def test_remote_app_relation_data(self): + relation_id = self.harness.add_relation('db1', 'remoteapp1') + self.harness.update_relation_data(relation_id, 'remoteapp1', {'secret': 'cafedeadbeef'}) + self.harness.add_relation_unit(relation_id, 'remoteapp1/0') + self.harness.add_relation_unit(relation_id, 'remoteapp1/1') + self.resetBackendCalls() + + rel_db1 = self.model.get_relation('db1') + # Try to get relation data for an invalid remote application. + random_app = self.model._cache.get(ops.model.Application, 'randomapp') + with self.assertRaises(KeyError): + rel_db1.data[random_app] + + remoteapp1 = rel_db1.app + self.assertEqual(remoteapp1.name, 'remoteapp1') + self.assertEqual(rel_db1.data[remoteapp1], + {'secret': 'cafedeadbeef'}) + + self.assertBackendCalls([ + ('relation_ids', 'db1'), + ('relation_list', relation_id), + ('relation_get', relation_id, 'remoteapp1', True), + ]) + + def test_relation_data_modify_remote(self): + relation_id = self.harness.add_relation('db1', 'remoteapp1') + self.harness.update_relation_data(relation_id, 'remoteapp1', {'secret': 'cafedeadbeef'}) + self.harness.add_relation_unit(relation_id, 'remoteapp1/0') + self.harness.update_relation_data(relation_id, 'remoteapp1/0', {'host': 'remoteapp1/0'}) + self.model.relations._invalidate('db1') + self.resetBackendCalls() + + rel_db1 = self.model.get_relation('db1') + remoteapp1_0 = next(filter(lambda u: u.name == 'remoteapp1/0', + self.model.get_relation('db1').units)) + # Force memory cache to be loaded. 
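+        # (Reading any key fills the lazy RelationData view; the assertions
+        # below then run against cached data, with no extra relation-get call.)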
+ self.assertIn('host', rel_db1.data[remoteapp1_0]) + with self.assertRaises(ops.model.RelationDataError): + rel_db1.data[remoteapp1_0]['foo'] = 'bar' + self.assertNotIn('foo', rel_db1.data[remoteapp1_0]) + + self.assertBackendCalls([ + ('relation_ids', 'db1'), + ('relation_list', relation_id), + ('relation_get', relation_id, 'remoteapp1/0', False), + ]) + + def test_relation_data_modify_our(self): + relation_id = self.harness.add_relation('db1', 'remoteapp1') + self.harness.update_relation_data(relation_id, 'myapp/0', {'host': 'nothing'}) + self.resetBackendCalls() + + rel_db1 = self.model.get_relation('db1') + # Force memory cache to be loaded. + self.assertIn('host', rel_db1.data[self.model.unit]) + rel_db1.data[self.model.unit]['host'] = 'bar' + self.assertEqual(rel_db1.data[self.model.unit]['host'], 'bar') + + self.assertBackendCalls([ + ('relation_get', relation_id, 'myapp/0', False), + ('relation_set', relation_id, 'host', 'bar', False), + ]) + + def test_app_relation_data_modify_local_as_leader(self): + relation_id = self.harness.add_relation('db1', 'remoteapp1') + self.harness.update_relation_data(relation_id, 'myapp', {'password': 'deadbeefcafe'}) + self.harness.add_relation_unit(relation_id, 'remoteapp1/0') + self.harness.set_leader(True) + self.resetBackendCalls() + + local_app = self.model.unit.app + + rel_db1 = self.model.get_relation('db1') + self.assertEqual(rel_db1.data[local_app], {'password': 'deadbeefcafe'}) + + rel_db1.data[local_app]['password'] = 'foo' + + self.assertEqual(rel_db1.data[local_app]['password'], 'foo') + + self.assertBackendCalls([ + ('relation_ids', 'db1'), + ('relation_list', relation_id), + ('relation_get', relation_id, 'myapp', True), + ('is_leader',), + ('relation_set', relation_id, 'password', 'foo', True), + ]) + + def test_app_relation_data_modify_local_as_minion(self): + relation_id = self.harness.add_relation('db1', 'remoteapp1') + self.harness.update_relation_data(relation_id, 'myapp', {'password': 'deadbeefcafe'}) + self.harness.add_relation_unit(relation_id, 'remoteapp1/0') + self.harness.set_leader(False) + self.resetBackendCalls() + + local_app = self.model.unit.app + + rel_db1 = self.model.get_relation('db1') + self.assertEqual(rel_db1.data[local_app], {'password': 'deadbeefcafe'}) + + with self.assertRaises(ops.model.RelationDataError): + rel_db1.data[local_app]['password'] = 'foobar' + + self.assertBackendCalls([ + ('relation_ids', 'db1'), + ('relation_list', relation_id), + ('relation_get', relation_id, 'myapp', True), + ('is_leader',), + ]) + + def test_relation_data_del_key(self): + relation_id = self.harness.add_relation('db1', 'remoteapp1') + self.harness.update_relation_data(relation_id, 'myapp/0', {'host': 'bar'}) + self.harness.add_relation_unit(relation_id, 'remoteapp1/0') + self.resetBackendCalls() + + rel_db1 = self.model.get_relation('db1') + # Force memory cache to be loaded. 
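+        # (Deleting a key is expressed to Juju as relation-set with an empty
+        # value, as the expected backend calls below show.)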
+ self.assertIn('host', rel_db1.data[self.model.unit]) + del rel_db1.data[self.model.unit]['host'] + self.assertNotIn('host', rel_db1.data[self.model.unit]) + self.assertEqual({}, self.harness.get_relation_data(relation_id, 'myapp/0')) + + self.assertBackendCalls([ + ('relation_ids', 'db1'), + ('relation_list', relation_id), + ('relation_get', relation_id, 'myapp/0', False), + ('relation_set', relation_id, 'host', '', False), + ]) + + def test_relation_set_fail(self): + relation_id = self.harness.add_relation('db1', 'remoteapp1') + self.harness.update_relation_data(relation_id, 'myapp/0', {'host': 'myapp-0'}) + self.harness.add_relation_unit(relation_id, 'remoteapp1/0') + self.resetBackendCalls() + + backend = self.harness._backend + # TODO: jam 2020-03-06 This is way too much information about relation_set + # The original test forced 'relation-set' to return exit code 2, + # but there was nothing illegal about the data that was being set, + # for us to properly test the side effects of relation-set failing. + + def broken_relation_set(relation_id, key, value, is_app): + backend._calls.append(('relation_set', relation_id, key, value, is_app)) + raise ops.model.ModelError() + backend.relation_set = broken_relation_set + + rel_db1 = self.model.get_relation('db1') + # Force memory cache to be loaded. + self.assertIn('host', rel_db1.data[self.model.unit]) + with self.assertRaises(ops.model.ModelError): + rel_db1.data[self.model.unit]['host'] = 'bar' + self.assertEqual(rel_db1.data[self.model.unit]['host'], 'myapp-0') + with self.assertRaises(ops.model.ModelError): + del rel_db1.data[self.model.unit]['host'] + self.assertIn('host', rel_db1.data[self.model.unit]) + + self.assertBackendCalls([ + ('relation_ids', 'db1'), + ('relation_list', relation_id), + ('relation_get', relation_id, 'myapp/0', False), + ('relation_set', relation_id, 'host', 'bar', False), + ('relation_set', relation_id, 'host', '', False), + ]) + + def test_relation_data_type_check(self): + relation_id = self.harness.add_relation('db1', 'remoteapp1') + self.harness.update_relation_data(relation_id, 'myapp/0', {'host': 'myapp-0'}) + self.harness.add_relation_unit(relation_id, 'remoteapp1/0') + self.resetBackendCalls() + + rel_db1 = self.model.get_relation('db1') + with self.assertRaises(ops.model.RelationDataError): + rel_db1.data[self.model.unit]['foo'] = 1 + with self.assertRaises(ops.model.RelationDataError): + rel_db1.data[self.model.unit]['foo'] = {'foo': 'bar'} + with self.assertRaises(ops.model.RelationDataError): + rel_db1.data[self.model.unit]['foo'] = None + # No data has actually been changed + self.assertEqual(dict(rel_db1.data[self.model.unit]), {'host': 'myapp-0'}) + + self.assertBackendCalls([ + ('relation_ids', 'db1'), + ('relation_list', relation_id), + ('relation_get', relation_id, 'myapp/0', False), + ]) + + def test_config(self): + self.harness.update_config({'foo': 'foo', 'bar': 1, 'qux': True}) + self.assertEqual(self.model.config, { + 'foo': 'foo', + 'bar': 1, + 'qux': True, + }) + with self.assertRaises(TypeError): + # Confirm that we cannot modify config values. + self.model.config['foo'] = 'bar' + + self.assertBackendCalls([('config_get',)]) + + def test_is_leader(self): + relation_id = self.harness.add_relation('db1', 'remoteapp1') + self.harness.add_relation_unit(relation_id, 'remoteapp1/0') + self.harness.set_leader(True) + self.resetBackendCalls() + + def check_remote_units(): + # Cannot determine leadership for remote units. 
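+            # is-leader only answers for the local unit, so the model raises
+            # RuntimeError for any remote unit.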
+            for u in self.model.get_relation('db1').units:
+                with self.assertRaises(RuntimeError):
+                    u.is_leader()
+
+        self.assertTrue(self.model.unit.is_leader())
+
+        check_remote_units()
+
+        # Flip leadership via the harness; the next is-leader query reflects the change.
+        self.harness.set_leader(False)
+        self.assertFalse(self.model.unit.is_leader())
+
+        check_remote_units()
+
+        self.assertBackendCalls([
+            ('is_leader',),
+            ('relation_ids', 'db1'),
+            ('relation_list', relation_id),
+            ('is_leader',),
+        ])
+
+    def test_workload_version(self):
+        self.model.unit.set_workload_version('1.2.3')
+        self.assertBackendCalls([
+            ('application_version_set', '1.2.3'),
+        ])
+
+    def test_workload_version_invalid(self):
+        with self.assertRaises(TypeError) as cm:
+            self.model.unit.set_workload_version(5)
+        self.assertEqual(str(cm.exception), "workload version must be a str, not int: 5")
+        self.assertBackendCalls([])
+
+    def test_resources(self):
+        # TODO: (jam) 2020-05-07 Harness doesn't yet support resource-get issue #262
+        meta = ops.charm.CharmMeta()
+        meta.resources = {'foo': None, 'bar': None}
+        model = ops.model.Model(meta, ops.model._ModelBackend('myapp/0'))
+
+        with self.assertRaises(RuntimeError):
+            model.resources.fetch('qux')
+
+        fake_script(self, 'resource-get', 'exit 1')
+        with self.assertRaises(ops.model.ModelError):
+            model.resources.fetch('foo')
+
+        fake_script(self, 'resource-get',
+                    'echo /var/lib/juju/agents/unit-test-0/resources/$1/$1.tgz')
+        self.assertEqual(model.resources.fetch('foo').name, 'foo.tgz')
+        self.assertEqual(model.resources.fetch('bar').name, 'bar.tgz')
+
+    def test_pod_spec(self):
+        # TODO: (jam) 2020-05-07 Harness doesn't yet expose pod-spec-set issue #261
+        meta = ops.charm.CharmMeta.from_yaml('''
+            name: myapp
+        ''')
+        model = ops.model.Model(meta, ops.model._ModelBackend('myapp/0'))
+        fake_script(self, 'pod-spec-set', """
+                    cat $2 > $(dirname $0)/spec.json
+                    [ -n "$4" ] && cat "$4" > $(dirname $0)/k8s_res.json || true
+                    """)
+        fake_script(self, 'is-leader', 'echo true')
+        spec_path = self.fake_script_path / 'spec.json'
+        k8s_res_path = self.fake_script_path / 'k8s_res.json'
+
+        def check_calls(calls):
+            # There may be 1 or 2 calls because of is-leader.
+            self.assertLessEqual(len(calls), 2)
+            pod_spec_call = next(filter(lambda c: c[0] == 'pod-spec-set', calls))
+            self.assertEqual(pod_spec_call[:2], ['pod-spec-set', '--file'])
+
+            # 8 bytes are used as of python 3.4.0, see Python bug #12015.
+            # Other characters are from POSIX 3.282 (Portable Filename
+            # Character Set), a subset of which Python's mkdtemp uses.
+            self.assertRegex(pod_spec_call[2], '.*/tmp[A-Za-z0-9._-]{8}-pod-spec-set')
+
+        model.pod.set_spec({'foo': 'bar'})
+        self.assertEqual(spec_path.read_text(), '{"foo": "bar"}')
+        self.assertFalse(k8s_res_path.exists())
+
+        fake_calls = fake_script_calls(self, clear=True)
+        check_calls(fake_calls)
+
+        model.pod.set_spec({'bar': 'foo'}, {'qux': 'baz'})
+        self.assertEqual(spec_path.read_text(), '{"bar": "foo"}')
+        self.assertEqual(k8s_res_path.read_text(), '{"qux": "baz"}')
+
+        fake_calls = fake_script_calls(self, clear=True)
+        check_calls(fake_calls)
+
+        # Create a new model to drop the cached is-leader result.
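+        # (_ModelBackend caches the is-leader result for the lease interval,
+        # so a fresh backend is the simplest way to force a re-query.)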
+ self.backend = ops.model._ModelBackend('myapp/0') + meta = ops.charm.CharmMeta() + model = ops.model.Model(meta, self.backend) + fake_script(self, 'is-leader', 'echo false') + with self.assertRaises(ops.model.ModelError): + model.pod.set_spec({'foo': 'bar'}) + + def test_base_status_instance_raises(self): + with self.assertRaises(TypeError): + ops.model.StatusBase('test') + + class NoNameStatus(ops.model.StatusBase): + pass + + with self.assertRaises(AttributeError): + ops.model.StatusBase.register_status(NoNameStatus) + + def test_status_repr(self): + test_cases = { + "ActiveStatus('Seashell')": ops.model.ActiveStatus('Seashell'), + "MaintenanceStatus('Red')": ops.model.MaintenanceStatus('Red'), + "BlockedStatus('Magenta')": ops.model.BlockedStatus('Magenta'), + "WaitingStatus('Thistle')": ops.model.WaitingStatus('Thistle'), + 'UnknownStatus()': ops.model.UnknownStatus(), + } + for expected, status in test_cases.items(): + self.assertEqual(repr(status), expected) + + def test_status_eq(self): + status_types = [ + ops.model.ActiveStatus, + ops.model.MaintenanceStatus, + ops.model.BlockedStatus, + ops.model.WaitingStatus, + ] + + self.assertEqual(ops.model.UnknownStatus(), ops.model.UnknownStatus()) + for (i, t1) in enumerate(status_types): + self.assertNotEqual(t1(''), ops.model.UnknownStatus()) + for (j, t2) in enumerate(status_types): + self.assertNotEqual(t1('one'), t2('two')) + if i == j: + self.assertEqual(t1('one'), t2('one')) + else: + self.assertNotEqual(t1('one'), t2('one')) + + def test_active_message_default(self): + self.assertEqual(ops.model.ActiveStatus().message, '') + + def test_local_set_valid_unit_status(self): + test_cases = [( + 'active', + ops.model.ActiveStatus('Green'), + ('status_set', 'active', 'Green', {'is_app': False}), + ), ( + 'maintenance', + ops.model.MaintenanceStatus('Yellow'), + ('status_set', 'maintenance', 'Yellow', {'is_app': False}), + ), ( + 'blocked', + ops.model.BlockedStatus('Red'), + ('status_set', 'blocked', 'Red', {'is_app': False}), + ), ( + 'waiting', + ops.model.WaitingStatus('White'), + ('status_set', 'waiting', 'White', {'is_app': False}), + )] + + for test_case, target_status, backend_call in test_cases: + with self.subTest(test_case): + self.model.unit.status = target_status + self.assertEqual(self.model.unit.status, target_status) + self.model.unit._invalidate() + self.assertEqual(self.model.unit.status, target_status) + self.assertBackendCalls([backend_call, ('status_get', {'is_app': False})]) + + def test_local_set_valid_app_status(self): + self.harness.set_leader(True) + test_cases = [( + 'active', + ops.model.ActiveStatus('Green'), + ('status_set', 'active', 'Green', {'is_app': True}), + ), ( + 'maintenance', + ops.model.MaintenanceStatus('Yellow'), + ('status_set', 'maintenance', 'Yellow', {'is_app': True}), + ), ( + 'blocked', + ops.model.BlockedStatus('Red'), + ('status_set', 'blocked', 'Red', {'is_app': True}), + ), ( + 'waiting', + ops.model.WaitingStatus('White'), + ('status_set', 'waiting', 'White', {'is_app': True}), + )] + + for test_case, target_status, backend_call in test_cases: + with self.subTest(test_case): + self.model.app.status = target_status + self.assertEqual(self.model.app.status, target_status) + self.model.app._invalidate() + self.assertEqual(self.model.app.status, target_status) + # There is a backend call to check if we can set the value, + # and then another check each time we assert the status above + expected_calls = [ + ('is_leader',), backend_call, + ('is_leader',), + ('is_leader',), ('status_get', 
{'is_app': True}), + ] + self.assertBackendCalls(expected_calls) + + def test_set_app_status_non_leader_raises(self): + self.harness.set_leader(False) + with self.assertRaises(RuntimeError): + self.model.app.status + + with self.assertRaises(RuntimeError): + self.model.app.status = ops.model.ActiveStatus() + + def test_set_unit_status_invalid(self): + with self.assertRaises(ops.model.InvalidStatusError): + self.model.unit.status = 'blocked' + + def test_set_app_status_invalid(self): + with self.assertRaises(ops.model.InvalidStatusError): + self.model.app.status = 'blocked' + + def test_remote_unit_status(self): + relation_id = self.harness.add_relation('db1', 'remoteapp1') + self.harness.add_relation_unit(relation_id, 'remoteapp1/0') + self.harness.add_relation_unit(relation_id, 'remoteapp1/1') + remote_unit = next(filter(lambda u: u.name == 'remoteapp1/0', + self.model.get_relation('db1').units)) + self.resetBackendCalls() + + # Remote unit status is always unknown. + self.assertEqual(remote_unit.status, ops.model.UnknownStatus()) + + test_statuses = ( + ops.model.UnknownStatus(), + ops.model.ActiveStatus('Green'), + ops.model.MaintenanceStatus('Yellow'), + ops.model.BlockedStatus('Red'), + ops.model.WaitingStatus('White'), + ) + + for target_status in test_statuses: + with self.subTest(target_status.name): + with self.assertRaises(RuntimeError): + remote_unit.status = target_status + self.assertBackendCalls([]) + + def test_remote_app_status(self): + relation_id = self.harness.add_relation('db1', 'remoteapp1') + self.harness.add_relation_unit(relation_id, 'remoteapp1/0') + self.harness.add_relation_unit(relation_id, 'remoteapp1/1') + remoteapp1 = self.model.get_relation('db1').app + self.resetBackendCalls() + + # Remote application status is always unknown. 
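+        # (status-get can only report on the local application, so no backend
+        # call is recorded; see the assertBackendCalls([]) below.)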
+ self.assertIsInstance(remoteapp1.status, ops.model.UnknownStatus) + + test_statuses = ( + ops.model.UnknownStatus(), + ops.model.ActiveStatus(), + ops.model.MaintenanceStatus('Upgrading software'), + ops.model.BlockedStatus('Awaiting manual resolution'), + ops.model.WaitingStatus('Awaiting related app updates'), + ) + for target_status in test_statuses: + with self.subTest(target_status.name): + with self.assertRaises(RuntimeError): + remoteapp1.status = target_status + self.assertBackendCalls([]) + + def test_storage(self): + # TODO: (jam) 2020-05-07 Harness doesn't yet expose storage-get issue #263 + meta = ops.charm.CharmMeta() + meta.storages = {'disks': None, 'data': None} + model = ops.model.Model(meta, ops.model._ModelBackend('myapp/0')) + + fake_script(self, 'storage-list', ''' + if [ "$1" = disks ]; then + echo '["disks/0", "disks/1"]' + else + echo '[]' + fi + ''') + fake_script(self, 'storage-get', ''' + if [ "$2" = disks/0 ]; then + echo '"/var/srv/disks/0"' + elif [ "$2" = disks/1 ]; then + echo '"/var/srv/disks/1"' + else + exit 2 + fi + ''') + fake_script(self, 'storage-add', '') + + self.assertEqual(len(model.storages), 2) + self.assertEqual(model.storages.keys(), meta.storages.keys()) + self.assertIn('disks', model.storages) + test_cases = { + 0: {'name': 'disks', 'location': pathlib.Path('/var/srv/disks/0')}, + 1: {'name': 'disks', 'location': pathlib.Path('/var/srv/disks/1')}, + } + for storage in model.storages['disks']: + self.assertEqual(storage.name, 'disks') + self.assertIn(storage.id, test_cases) + self.assertEqual(storage.name, test_cases[storage.id]['name']) + self.assertEqual(storage.location, test_cases[storage.id]['location']) + + self.assertEqual(fake_script_calls(self, clear=True), [ + ['storage-list', 'disks', '--format=json'], + ['storage-get', '-s', 'disks/0', 'location', '--format=json'], + ['storage-get', '-s', 'disks/1', 'location', '--format=json'], + ]) + + self.assertSequenceEqual(model.storages['data'], []) + model.storages.request('data', count=3) + self.assertEqual(fake_script_calls(self), [ + ['storage-list', 'data', '--format=json'], + ['storage-add', 'data=3'], + ]) + + # Try to add storage not present in charm metadata. + with self.assertRaises(ops.model.ModelError): + model.storages.request('deadbeef') + + # Invalid count parameter types. 
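+        # (bool is rejected as well, even though it is a subclass of int.)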
+ for count_v in [None, False, 2.0, 'a', b'beef', object]: + with self.assertRaises(TypeError): + model.storages.request('data', count_v) + + def resetBackendCalls(self): + self.harness._get_backend_calls(reset=True) + + def assertBackendCalls(self, expected, *, reset=True): + self.assertEqual(expected, self.harness._get_backend_calls(reset=reset)) + + +class TestModelBindings(unittest.TestCase): + + def setUp(self): + meta = ops.charm.CharmMeta() + meta.relations = { + 'db0': RelationMeta( + RelationRole.provides, 'db0', {'interface': 'db0', 'scope': 'global'}), + 'db1': RelationMeta( + RelationRole.requires, 'db1', {'interface': 'db1', 'scope': 'global'}), + 'db2': RelationMeta( + RelationRole.peer, 'db2', {'interface': 'db2', 'scope': 'global'}), + } + self.backend = ops.model._ModelBackend('myapp/0') + self.model = ops.model.Model(meta, self.backend) + + fake_script(self, 'relation-ids', + """([ "$1" = db0 ] && echo '["db0:4"]') || echo '[]'""") + fake_script(self, 'relation-list', """[ "$2" = 4 ] && echo '["remoteapp1/0"]' || exit 2""") + self.network_get_out = '''{ + "bind-addresses": [ + { + "mac-address": "de:ad:be:ef:ca:fe", + "interface-name": "lo", + "addresses": [ + { + "hostname": "", + "value": "192.0.2.2", + "cidr": "192.0.2.0/24" + }, + { + "hostname": "deadbeef.example", + "value": "dead:beef::1", + "cidr": "dead:beef::/64" + } + ] + }, + { + "mac-address": "", + "interface-name": "tun", + "addresses": [ + { + "hostname": "", + "value": "192.0.3.3", + "cidr": "" + }, + { + "hostname": "", + "value": "2001:db8::3", + "cidr": "" + }, + { + "hostname": "deadbeef.local", + "value": "fe80::1:1", + "cidr": "fe80::/64" + } + ] + } + ], + "egress-subnets": [ + "192.0.2.2/32", + "192.0.3.0/24", + "dead:beef::/64", + "2001:db8::3/128" + ], + "ingress-addresses": [ + "192.0.2.2", + "192.0.3.3", + "dead:beef::1", + "2001:db8::3" + ] +}''' + + def _check_binding_data(self, binding_name, binding): + self.assertEqual(binding.name, binding_name) + self.assertEqual(binding.network.bind_address, ipaddress.ip_address('192.0.2.2')) + self.assertEqual(binding.network.ingress_address, ipaddress.ip_address('192.0.2.2')) + # /32 and /128 CIDRs are valid one-address networks for IPv{4,6}Network types respectively. + self.assertEqual(binding.network.egress_subnets, [ipaddress.ip_network('192.0.2.2/32'), + ipaddress.ip_network('192.0.3.0/24'), + ipaddress.ip_network('dead:beef::/64'), + ipaddress.ip_network('2001:db8::3/128')]) + + for (i, (name, address, subnet)) in enumerate([ + ('lo', '192.0.2.2', '192.0.2.0/24'), + ('lo', 'dead:beef::1', 'dead:beef::/64'), + ('tun', '192.0.3.3', '192.0.3.3/32'), + ('tun', '2001:db8::3', '2001:db8::3/128'), + ('tun', 'fe80::1:1', 'fe80::/64')]): + self.assertEqual(binding.network.interfaces[i].name, name) + self.assertEqual(binding.network.interfaces[i].address, ipaddress.ip_address(address)) + self.assertEqual(binding.network.interfaces[i].subnet, ipaddress.ip_network(subnet)) + + def test_invalid_keys(self): + # Basic validation for passing invalid keys. + for name in (object, 0): + with self.assertRaises(ops.model.ModelError): + self.model.get_binding(name) + + def test_dead_relations(self): + fake_script( + self, + 'network-get', + ''' + if [ "$1" = db0 ] && [ "$2" = --format=json ]; then + echo '{}' + else + echo ERROR invalid value "$2" for option -r: relation not found >&2 + exit 2 + fi + '''.format(self.network_get_out)) + # Validate the behavior for dead relations. 
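+        # The stale relation id makes network-get -r fail, after which Binding
+        # retries by endpoint name alone -- both calls are asserted below.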
+ binding = ops.model.Binding('db0', 42, self.model._backend) + self.assertEqual(binding.network.bind_address, ipaddress.ip_address('192.0.2.2')) + self.assertEqual(fake_script_calls(self, clear=True), [ + ['network-get', 'db0', '-r', '42', '--format=json'], + ['network-get', 'db0', '--format=json'], + ]) + + def test_binding_by_relation_name(self): + fake_script(self, 'network-get', + '''[ "$1" = db0 ] && echo '{}' || exit 1'''.format(self.network_get_out)) + binding_name = 'db0' + expected_calls = [['network-get', 'db0', '--format=json']] + + binding = self.model.get_binding(binding_name) + self._check_binding_data(binding_name, binding) + self.assertEqual(fake_script_calls(self, clear=True), expected_calls) + + def test_binding_by_relation(self): + fake_script(self, 'network-get', + '''[ "$1" = db0 ] && echo '{}' || exit 1'''.format(self.network_get_out)) + binding_name = 'db0' + expected_calls = [ + ['relation-ids', 'db0', '--format=json'], + # The two invocations below are due to the get_relation call. + ['relation-list', '-r', '4', '--format=json'], + ['network-get', 'db0', '-r', '4', '--format=json'], + ] + binding = self.model.get_binding(self.model.get_relation(binding_name)) + self._check_binding_data(binding_name, binding) + self.assertEqual(fake_script_calls(self, clear=True), expected_calls) + + +class TestModelBackend(unittest.TestCase): + + def setUp(self): + self._backend = None + + @property + def backend(self): + if self._backend is None: + self._backend = ops.model._ModelBackend('myapp/0') + return self._backend + + def test_relation_get_set_is_app_arg(self): + # No is_app provided. + with self.assertRaises(TypeError): + self.backend.relation_set(1, 'fookey', 'barval') + + with self.assertRaises(TypeError): + self.backend.relation_get(1, 'fooentity') + + # Invalid types for is_app. + for is_app_v in [None, 1, 2.0, 'a', b'beef']: + with self.assertRaises(TypeError): + self.backend.relation_set(1, 'fookey', 'barval', is_app=is_app_v) + + with self.assertRaises(TypeError): + self.backend.relation_get(1, 'fooentity', is_app=is_app_v) + + def test_is_leader_refresh(self): + meta = ops.charm.CharmMeta.from_yaml(''' + name: myapp + ''') + model = ops.model.Model(meta, self.backend) + fake_script(self, 'is-leader', 'echo false') + self.assertFalse(model.unit.is_leader()) + + # Change the leadership status + fake_script(self, 'is-leader', 'echo true') + # If you don't force it, we don't check, so we won't see the change + self.assertFalse(model.unit.is_leader()) + # If we force a recheck, then we notice + self.backend._leader_check_time = None + self.assertTrue(model.unit.is_leader()) + + # Force a recheck without changing the leadership status. 
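+        # (This guards against the refresh path flipping a value that did not
+        # actually change.)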
+ fake_script(self, 'is-leader', 'echo true') + self.backend._leader_check_time = None + self.assertTrue(model.unit.is_leader()) + + def test_relation_tool_errors(self): + self.addCleanup(os.environ.pop, 'JUJU_VERSION', None) + os.environ['JUJU_VERSION'] = '2.8.0' + err_msg = 'ERROR invalid value "$2" for option -r: relation not found' + + test_cases = [( + lambda: fake_script(self, 'relation-list', 'echo fooerror >&2 ; exit 1'), + lambda: self.backend.relation_list(3), + ops.model.ModelError, + [['relation-list', '-r', '3', '--format=json']], + ), ( + lambda: fake_script(self, 'relation-list', 'echo {} >&2 ; exit 2'.format(err_msg)), + lambda: self.backend.relation_list(3), + ops.model.RelationNotFoundError, + [['relation-list', '-r', '3', '--format=json']], + ), ( + lambda: fake_script(self, 'relation-set', 'echo fooerror >&2 ; exit 1'), + lambda: self.backend.relation_set(3, 'foo', 'bar', is_app=False), + ops.model.ModelError, + [['relation-set', '-r', '3', 'foo=bar']], + ), ( + lambda: fake_script(self, 'relation-set', 'echo {} >&2 ; exit 2'.format(err_msg)), + lambda: self.backend.relation_set(3, 'foo', 'bar', is_app=False), + ops.model.RelationNotFoundError, + [['relation-set', '-r', '3', 'foo=bar']], + ), ( + lambda: None, + lambda: self.backend.relation_set(3, 'foo', 'bar', is_app=True), + ops.model.RelationNotFoundError, + [['relation-set', '-r', '3', 'foo=bar', '--app']], + ), ( + lambda: fake_script(self, 'relation-get', 'echo fooerror >&2 ; exit 1'), + lambda: self.backend.relation_get(3, 'remote/0', is_app=False), + ops.model.ModelError, + [['relation-get', '-r', '3', '-', 'remote/0', '--format=json']], + ), ( + lambda: fake_script(self, 'relation-get', 'echo {} >&2 ; exit 2'.format(err_msg)), + lambda: self.backend.relation_get(3, 'remote/0', is_app=False), + ops.model.RelationNotFoundError, + [['relation-get', '-r', '3', '-', 'remote/0', '--format=json']], + ), ( + lambda: None, + lambda: self.backend.relation_get(3, 'remote/0', is_app=True), + ops.model.RelationNotFoundError, + [['relation-get', '-r', '3', '-', 'remote/0', '--app', '--format=json']], + )] + + for i, (do_fake, run, exception, calls) in enumerate(test_cases): + with self.subTest(i): + do_fake() + with self.assertRaises(exception): + run() + self.assertEqual(fake_script_calls(self, clear=True), calls) + + def test_relation_get_juju_version_quirks(self): + self.addCleanup(os.environ.pop, 'JUJU_VERSION', None) + + fake_script(self, 'relation-get', '''echo '{"foo": "bar"}' ''') + + # on 2.7.0+, things proceed as expected + for v in ['2.8.0', '2.7.0']: + with self.subTest(v): + os.environ['JUJU_VERSION'] = v + rel_data = self.backend.relation_get(1, 'foo/0', is_app=True) + self.assertEqual(rel_data, {"foo": "bar"}) + calls = [' '.join(i) for i in fake_script_calls(self, clear=True)] + self.assertEqual(calls, ['relation-get -r 1 - foo/0 --app --format=json']) + + # before 2.7.0, it just fails (no --app support) + os.environ['JUJU_VERSION'] = '2.6.9' + with self.assertRaisesRegex(RuntimeError, 'not supported on Juju version 2.6.9'): + self.backend.relation_get(1, 'foo/0', is_app=True) + self.assertEqual(fake_script_calls(self), []) + + def test_relation_set_juju_version_quirks(self): + self.addCleanup(os.environ.pop, 'JUJU_VERSION', None) + + fake_script(self, 'relation-set', 'exit 0') + + # on 2.7.0+, things proceed as expected + for v in ['2.8.0', '2.7.0']: + with self.subTest(v): + os.environ['JUJU_VERSION'] = v + self.backend.relation_set(1, 'foo', 'bar', is_app=True) + calls = [' '.join(i) for i in 
fake_script_calls(self, clear=True)] + self.assertEqual(calls, ['relation-set -r 1 foo=bar --app']) + + # before 2.7.0, it just fails always (no --app support) + os.environ['JUJU_VERSION'] = '2.6.9' + with self.assertRaisesRegex(RuntimeError, 'not supported on Juju version 2.6.9'): + self.backend.relation_set(1, 'foo', 'bar', is_app=True) + self.assertEqual(fake_script_calls(self), []) + + def test_status_get(self): + # taken from actual Juju output + content = '{"message": "", "status": "unknown", "status-data": {}}' + fake_script(self, 'status-get', "echo '{}'".format(content)) + s = self.backend.status_get(is_app=False) + self.assertEqual(s['status'], "unknown") + self.assertEqual(s['message'], "") + # taken from actual Juju output + content = dedent(""" + { + "application-status": { + "message": "installing", + "status": "maintenance", + "status-data": {}, + "units": { + "uo/0": { + "message": "", + "status": "active", + "status-data": {} + } + } + } + } + """) + fake_script(self, 'status-get', "echo '{}'".format(content)) + s = self.backend.status_get(is_app=True) + self.assertEqual(s['status'], "maintenance") + self.assertEqual(s['message'], "installing") + self.assertEqual(fake_script_calls(self, clear=True), [ + ['status-get', '--include-data', '--application=False', '--format=json'], + ['status-get', '--include-data', '--application=True', '--format=json'], + ]) + + def test_status_is_app_forced_kwargs(self): + fake_script(self, 'status-get', 'exit 1') + fake_script(self, 'status-set', 'exit 1') + + test_cases = ( + lambda: self.backend.status_get(False), + lambda: self.backend.status_get(True), + lambda: self.backend.status_set('active', '', False), + lambda: self.backend.status_set('active', '', True), + ) + + for case in test_cases: + with self.assertRaises(TypeError): + case() + + def test_local_set_invalid_status(self): + # juju return exit code 1 if you ask to set status to 'unknown' + meta = ops.charm.CharmMeta.from_yaml(''' + name: myapp + ''') + model = ops.model.Model(meta, self.backend) + fake_script(self, 'status-set', 'exit 1') + fake_script(self, 'is-leader', 'echo true') + + with self.assertRaises(ops.model.ModelError): + model.unit.status = ops.model.UnknownStatus() + + self.assertEqual(fake_script_calls(self, True), [ + ['status-set', '--application=False', 'unknown', ''], + ]) + + with self.assertRaises(ops.model.ModelError): + model.app.status = ops.model.UnknownStatus() + + # A leadership check is needed for application status. 
+ self.assertEqual(fake_script_calls(self, True), [ + ['is-leader', '--format=json'], + ['status-set', '--application=True', 'unknown', ''], + ]) + + def test_status_set_is_app_not_bool_raises(self): + for is_app_v in [None, 1, 2.0, 'a', b'beef', object]: + with self.assertRaises(TypeError): + self.backend.status_set(ops.model.ActiveStatus, is_app=is_app_v) + + def test_storage_tool_errors(self): + test_cases = [( + lambda: fake_script(self, 'storage-list', 'echo fooerror >&2 ; exit 1'), + lambda: self.backend.storage_list('foobar'), + ops.model.ModelError, + [['storage-list', 'foobar', '--format=json']], + ), ( + lambda: fake_script(self, 'storage-get', 'echo fooerror >&2 ; exit 1'), + lambda: self.backend.storage_get('foobar', 'someattr'), + ops.model.ModelError, + [['storage-get', '-s', 'foobar', 'someattr', '--format=json']], + ), ( + lambda: fake_script(self, 'storage-add', 'echo fooerror >&2 ; exit 1'), + lambda: self.backend.storage_add('foobar', count=2), + ops.model.ModelError, + [['storage-add', 'foobar=2']], + ), ( + lambda: fake_script(self, 'storage-add', 'echo fooerror >&2 ; exit 1'), + lambda: self.backend.storage_add('foobar', count=object), + TypeError, + [], + ), ( + lambda: fake_script(self, 'storage-add', 'echo fooerror >&2 ; exit 1'), + lambda: self.backend.storage_add('foobar', count=True), + TypeError, + [], + )] + for do_fake, run, exception, calls in test_cases: + do_fake() + with self.assertRaises(exception): + run() + self.assertEqual(fake_script_calls(self, clear=True), calls) + + def test_network_get(self): + network_get_out = '''{ + "bind-addresses": [ + { + "mac-address": "", + "interface-name": "", + "addresses": [ + { + "hostname": "", + "value": "192.0.2.2", + "cidr": "" + } + ] + } + ], + "egress-subnets": [ + "192.0.2.2/32" + ], + "ingress-addresses": [ + "192.0.2.2" + ] +}''' + fake_script(self, 'network-get', + '''[ "$1" = deadbeef ] && echo '{}' || exit 1'''.format(network_get_out)) + network_info = self.backend.network_get('deadbeef') + self.assertEqual(network_info, json.loads(network_get_out)) + self.assertEqual(fake_script_calls(self, clear=True), + [['network-get', 'deadbeef', '--format=json']]) + + network_info = self.backend.network_get('deadbeef', 1) + self.assertEqual(network_info, json.loads(network_get_out)) + self.assertEqual(fake_script_calls(self, clear=True), + [['network-get', 'deadbeef', '-r', '1', '--format=json']]) + + def test_network_get_errors(self): + err_no_endpoint = 'ERROR no network config found for binding "$2"' + err_no_rel = 'ERROR invalid value "$3" for option -r: relation not found' + + test_cases = [( + lambda: fake_script(self, 'network-get', + 'echo {} >&2 ; exit 1'.format(err_no_endpoint)), + lambda: self.backend.network_get("deadbeef"), + ops.model.ModelError, + [['network-get', 'deadbeef', '--format=json']], + ), ( + lambda: fake_script(self, 'network-get', 'echo {} >&2 ; exit 2'.format(err_no_rel)), + lambda: self.backend.network_get("deadbeef", 3), + ops.model.RelationNotFoundError, + [['network-get', 'deadbeef', '-r', '3', '--format=json']], + )] + for do_fake, run, exception, calls in test_cases: + do_fake() + with self.assertRaises(exception): + run() + self.assertEqual(fake_script_calls(self, clear=True), calls) + + def test_action_get_error(self): + fake_script(self, 'action-get', '') + fake_script(self, 'action-get', 'echo fooerror >&2 ; exit 1') + with self.assertRaises(ops.model.ModelError): + self.backend.action_get() + calls = [['action-get', '--format=json']] + self.assertEqual(fake_script_calls(self, 
clear=True), calls) + + def test_action_set_error(self): + fake_script(self, 'action-get', '') + fake_script(self, 'action-set', 'echo fooerror >&2 ; exit 1') + with self.assertRaises(ops.model.ModelError): + self.backend.action_set(OrderedDict([('foo', 'bar'), ('dead', 'beef cafe')])) + calls = [["action-set", "foo=bar", "dead=beef cafe"]] + self.assertEqual(fake_script_calls(self, clear=True), calls) + + def test_action_log_error(self): + fake_script(self, 'action-get', '') + fake_script(self, 'action-log', 'echo fooerror >&2 ; exit 1') + with self.assertRaises(ops.model.ModelError): + self.backend.action_log('log-message') + calls = [["action-log", "log-message"]] + self.assertEqual(fake_script_calls(self, clear=True), calls) + + def test_action_get(self): + fake_script(self, 'action-get', """echo '{"foo-name": "bar", "silent": false}'""") + params = self.backend.action_get() + self.assertEqual(params['foo-name'], 'bar') + self.assertEqual(params['silent'], False) + self.assertEqual(fake_script_calls(self), [['action-get', '--format=json']]) + + def test_action_set(self): + fake_script(self, 'action-get', 'exit 1') + fake_script(self, 'action-set', 'exit 0') + self.backend.action_set(OrderedDict([('x', 'dead beef'), ('y', 1)])) + self.assertEqual(fake_script_calls(self), [['action-set', 'x=dead beef', 'y=1']]) + + def test_action_fail(self): + fake_script(self, 'action-get', 'exit 1') + fake_script(self, 'action-fail', 'exit 0') + self.backend.action_fail('error 42') + self.assertEqual(fake_script_calls(self), [['action-fail', 'error 42']]) + + def test_action_log(self): + fake_script(self, 'action-get', 'exit 1') + fake_script(self, 'action-log', 'exit 0') + self.backend.action_log('progress: 42%') + self.assertEqual(fake_script_calls(self), [['action-log', 'progress: 42%']]) + + def test_application_version_set(self): + fake_script(self, 'application-version-set', 'exit 0') + self.backend.application_version_set('1.2b3') + self.assertEqual(fake_script_calls(self), [['application-version-set', '--', '1.2b3']]) + + def test_application_version_set_invalid(self): + fake_script(self, 'application-version-set', 'exit 0') + with self.assertRaises(TypeError): + self.backend.application_version_set(2) + with self.assertRaises(TypeError): + self.backend.application_version_set() + self.assertEqual(fake_script_calls(self), []) + + def test_juju_log(self): + fake_script(self, 'juju-log', 'exit 0') + self.backend.juju_log('WARNING', 'foo') + self.assertEqual(fake_script_calls(self, clear=True), + [['juju-log', '--log-level', 'WARNING', 'foo']]) + + with self.assertRaises(TypeError): + self.backend.juju_log('DEBUG') + self.assertEqual(fake_script_calls(self, clear=True), []) + + fake_script(self, 'juju-log', 'exit 1') + with self.assertRaises(ops.model.ModelError): + self.backend.juju_log('BAR', 'foo') + self.assertEqual(fake_script_calls(self, clear=True), + [['juju-log', '--log-level', 'BAR', 'foo']]) + + def test_valid_metrics(self): + fake_script(self, 'add-metric', 'exit 0') + test_cases = [( + OrderedDict([('foo', 42), ('b-ar', 4.5), ('ba_-z', 4.5), ('a', 1)]), + OrderedDict([('de', 'ad'), ('be', 'ef_ -')]), + [['add-metric', '--labels', 'de=ad,be=ef_ -', + 'foo=42', 'b-ar=4.5', 'ba_-z=4.5', 'a=1']] + ), ( + OrderedDict([('foo1', 0), ('b2r', 4.5)]), + OrderedDict([('d3', 'aд'), ('b33f', '3_ -')]), + [['add-metric', '--labels', 'd3=aд,b33f=3_ -', 'foo1=0', 'b2r=4.5']], + )] + for metrics, labels, expected_calls in test_cases: + self.backend.add_metrics(metrics, labels) + 
self.assertEqual(fake_script_calls(self, clear=True), expected_calls) + + def test_invalid_metric_names(self): + invalid_inputs = [ + ({'': 4.2}, {}), + ({'1': 4.2}, {}), + ({'1': -4.2}, {}), + ({'123': 4.2}, {}), + ({'1foo': 4.2}, {}), + ({'-foo': 4.2}, {}), + ({'_foo': 4.2}, {}), + ({'foo-': 4.2}, {}), + ({'foo_': 4.2}, {}), + ({'a-': 4.2}, {}), + ({'a_': 4.2}, {}), + ({'BAЯ': 4.2}, {}), + ] + for metrics, labels in invalid_inputs: + with self.assertRaises(ops.model.ModelError): + self.backend.add_metrics(metrics, labels) + + def test_invalid_metric_values(self): + invalid_inputs = [ + ({'a': float('+inf')}, {}), + ({'a': float('-inf')}, {}), + ({'a': float('nan')}, {}), + ({'foo': 'bar'}, {}), + ({'foo': '1O'}, {}), + ] + for metrics, labels in invalid_inputs: + with self.assertRaises(ops.model.ModelError): + self.backend.add_metrics(metrics, labels) + + def test_invalid_metric_labels(self): + invalid_inputs = [ + ({'foo': 4.2}, {'': 'baz'}), + ({'foo': 4.2}, {',bar': 'baz'}), + ({'foo': 4.2}, {'b=a=r': 'baz'}), + ({'foo': 4.2}, {'BAЯ': 'baz'}), + ] + for metrics, labels in invalid_inputs: + with self.assertRaises(ops.model.ModelError): + self.backend.add_metrics(metrics, labels) + + def test_invalid_metric_label_values(self): + invalid_inputs = [ + ({'foo': 4.2}, {'bar': ''}), + ({'foo': 4.2}, {'bar': 'b,az'}), + ({'foo': 4.2}, {'bar': 'b=az'}), + ] + for metrics, labels in invalid_inputs: + with self.assertRaises(ops.model.ModelError): + self.backend.add_metrics(metrics, labels) + + +class TestLazyMapping(unittest.TestCase): + + def test_invalidate(self): + loaded = [] + + class MyLazyMap(ops.model.LazyMapping): + def _load(self): + loaded.append(1) + return {'foo': 'bar'} + + map = MyLazyMap() + self.assertEqual(map['foo'], 'bar') + self.assertEqual(loaded, [1]) + self.assertEqual(map['foo'], 'bar') + self.assertEqual(loaded, [1]) + map._invalidate() + self.assertEqual(map['foo'], 'bar') + self.assertEqual(loaded, [1, 1]) + + +if __name__ == "__main__": + unittest.main() diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/test/test_storage.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/test/test_storage.py new file mode 100755 index 0000000000000000000000000000000000000000..3410999bd5f045cd3d65396cc5be546a3c4a94e0 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/test/test_storage.py @@ -0,0 +1,412 @@ +# Copyright 2019-2020 Canonical Ltd. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import abc +import gc +import io +import os +import pathlib +import sys +import tempfile +from textwrap import dedent + +import yaml + +from ops import ( + framework, + storage, +) +from test.test_helpers import ( + BaseTestCase, + fake_script, + fake_script_calls, +) + + +class StoragePermutations(abc.ABC): + + def create_framework(self) -> framework.Framework: + """Create a Framework that we can use to test the backend storage. 
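+
+        Framework takes (storage, charm_dir, meta, model); only the storage
+        matters for these permutation tests, so the rest are passed as None.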
+ """ + return framework.Framework(self.create_storage(), None, None, None) + + @abc.abstractmethod + def create_storage(self) -> storage.SQLiteStorage: + """Create a Storage backend that we can interact with""" + return NotImplemented + + def test_save_and_load_snapshot(self): + f = self.create_framework() + + class Sample(framework.Object): + + def __init__(self, parent, key, content): + super().__init__(parent, key) + self.content = content + + def snapshot(self): + return {'content': self.content} + + def restore(self, snapshot): + self.__dict__.update(snapshot) + + f.register_type(Sample, None, Sample.handle_kind) + data = { + 'str': 'string', + 'bytes': b'bytes', + 'int': 1, + 'float': 3.0, + 'dict': {'a': 'b'}, + 'set': {'a', 'b'}, + 'list': [1, 2], + } + s = Sample(f, 'test', data) + handle = s.handle + f.save_snapshot(s) + del s + gc.collect() + res = f.load_snapshot(handle) + self.assertEqual(data, res.content) + + def test_emit_event(self): + f = self.create_framework() + + class Evt(framework.EventBase): + def __init__(self, handle, content): + super().__init__(handle) + self.content = content + + def snapshot(self): + return self.content + + def restore(self, content): + self.content = content + + class Events(framework.ObjectEvents): + event = framework.EventSource(Evt) + + class Sample(framework.Object): + + on = Events() + + def __init__(self, parent, key): + super().__init__(parent, key) + self.observed_content = None + self.framework.observe(self.on.event, self._on_event) + + def _on_event(self, event: Evt): + self.observed_content = event.content + + s = Sample(f, 'key') + f.register_type(Sample, None, Sample.handle_kind) + s.on.event.emit('foo') + self.assertEqual('foo', s.observed_content) + s.on.event.emit(1) + self.assertEqual(1, s.observed_content) + s.on.event.emit(None) + self.assertEqual(None, s.observed_content) + + def test_save_and_overwrite_snapshot(self): + store = self.create_storage() + store.save_snapshot('foo', {1: 2}) + self.assertEqual({1: 2}, store.load_snapshot('foo')) + store.save_snapshot('foo', {'three': 4}) + self.assertEqual({'three': 4}, store.load_snapshot('foo')) + + def test_drop_snapshot(self): + store = self.create_storage() + store.save_snapshot('foo', {1: 2}) + self.assertEqual({1: 2}, store.load_snapshot('foo')) + store.drop_snapshot('foo') + with self.assertRaises(storage.NoSnapshotError): + store.load_snapshot('foo') + + def test_save_snapshot_empty_string(self): + store = self.create_storage() + with self.assertRaises(storage.NoSnapshotError): + store.load_snapshot('foo') + store.save_snapshot('foo', '') + self.assertEqual('', store.load_snapshot('foo')) + store.drop_snapshot('foo') + with self.assertRaises(storage.NoSnapshotError): + store.load_snapshot('foo') + + def test_save_snapshot_none(self): + store = self.create_storage() + with self.assertRaises(storage.NoSnapshotError): + store.load_snapshot('bar') + store.save_snapshot('bar', None) + self.assertEqual(None, store.load_snapshot('bar')) + store.drop_snapshot('bar') + with self.assertRaises(storage.NoSnapshotError): + store.load_snapshot('bar') + + def test_save_snapshot_zero(self): + store = self.create_storage() + with self.assertRaises(storage.NoSnapshotError): + store.load_snapshot('zero') + store.save_snapshot('zero', 0) + self.assertEqual(0, store.load_snapshot('zero')) + store.drop_snapshot('zero') + with self.assertRaises(storage.NoSnapshotError): + store.load_snapshot('zero') + + def test_save_notice(self): + store = self.create_storage() + store.save_notice('event', 
'observer', 'method') + self.assertEqual( + list(store.notices('event')), + [('event', 'observer', 'method')]) + + def test_load_notices(self): + store = self.create_storage() + self.assertEqual(list(store.notices('path')), []) + + def test_save_one_load_another_notice(self): + store = self.create_storage() + store.save_notice('event', 'observer', 'method') + self.assertEqual(list(store.notices('other')), []) + + def test_save_load_drop_load_notices(self): + store = self.create_storage() + store.save_notice('event', 'observer', 'method') + store.save_notice('event', 'observer', 'method2') + self.assertEqual( + list(store.notices('event')), + [('event', 'observer', 'method'), + ('event', 'observer', 'method2'), + ]) + + +class TestSQLiteStorage(StoragePermutations, BaseTestCase): + + def create_storage(self): + return storage.SQLiteStorage(':memory:') + + +def setup_juju_backend(test_case, state_file): + """Create fake scripts for pretending to be state-set and state-get""" + template_args = { + 'executable': sys.executable, + 'pthpth': os.path.dirname(pathlib.__file__), + 'state_file': str(state_file), + } + + fake_script(test_case, 'state-set', dedent('''\ + {executable} -c ' + import sys + if "{pthpth}" not in sys.path: + sys.path.append("{pthpth}") + import sys, yaml, pathlib, pickle + assert sys.argv[1:] == ["--file", "-"] + request = yaml.load(sys.stdin, Loader=getattr(yaml, "CSafeLoader", yaml.SafeLoader)) + state_file = pathlib.Path("{state_file}") + if state_file.exists() and state_file.stat().st_size > 0: + with state_file.open("rb") as f: + state = pickle.load(f) + else: + state = {{}} + for k, v in request.items(): + state[k] = v + with state_file.open("wb") as f: + pickle.dump(state, f) + ' "$@" + ''').format(**template_args)) + + fake_script(test_case, 'state-get', dedent('''\ + {executable} -Sc ' + import sys + if "{pthpth}" not in sys.path: + sys.path.append("{pthpth}") + import sys, pathlib, pickle + assert len(sys.argv) == 2 + state_file = pathlib.Path("{state_file}") + if state_file.exists() and state_file.stat().st_size > 0: + with state_file.open("rb") as f: + state = pickle.load(f) + else: + state = {{}} + result = state.get(sys.argv[1], "\\n") + sys.stdout.write(result) + ' "$@" + ''').format(**template_args)) + + fake_script(test_case, 'state-delete', dedent('''\ + {executable} -Sc ' + import sys + if "{pthpth}" not in sys.path: + sys.path.append("{pthpth}") + import sys, pathlib, pickle + assert len(sys.argv) == 2 + state_file = pathlib.Path("{state_file}") + if state_file.exists() and state_file.stat().st_size > 0: + with state_file.open("rb") as f: + state = pickle.load(f) + else: + state = {{}} + state.pop(sys.argv[1], None) + with state_file.open("wb") as f: + pickle.dump(state, f) + ' "$@" + ''').format(**template_args)) + + +class TestJujuStorage(StoragePermutations, BaseTestCase): + + def create_storage(self): + state_file = pathlib.Path(tempfile.mkstemp(prefix='tmp-ops-test-state-')[1]) + self.addCleanup(state_file.unlink) + setup_juju_backend(self, state_file) + return storage.JujuStorage() + + +class TestSimpleLoader(BaseTestCase): + + def test_is_c_loader(self): + loader = storage._SimpleLoader(io.StringIO('')) + if getattr(yaml, 'CSafeLoader', None) is not None: + self.assertIsInstance(loader, yaml.CSafeLoader) + else: + self.assertIsInstance(loader, yaml.SafeLoader) + + def test_is_c_dumper(self): + dumper = storage._SimpleDumper(io.StringIO('')) + if getattr(yaml, 'CSafeDumper', None) is not None: + self.assertIsInstance(dumper, yaml.CSafeDumper) + 
else: + self.assertIsInstance(dumper, yaml.SafeDumper) + + def test_handles_tuples(self): + raw = yaml.dump((1, 'tuple'), Dumper=storage._SimpleDumper) + parsed = yaml.load(raw, Loader=storage._SimpleLoader) + self.assertEqual(parsed, (1, 'tuple')) + + def assertRefused(self, obj): + # We shouldn't allow them to be written + with self.assertRaises(yaml.representer.RepresenterError): + yaml.dump(obj, Dumper=storage._SimpleDumper) + # If they did somehow end up written, we shouldn't be able to load them + raw = yaml.dump(obj, Dumper=yaml.Dumper) + with self.assertRaises(yaml.constructor.ConstructorError): + yaml.load(raw, Loader=storage._SimpleLoader) + + def test_forbids_some_types(self): + self.assertRefused(1 + 2j) + self.assertRefused({'foo': 1 + 2j}) + self.assertRefused(frozenset(['foo', 'bar'])) + self.assertRefused(bytearray(b'foo')) + self.assertRefused(object()) + + class Foo: + pass + f = Foo() + self.assertRefused(f) + + +class TestJujuStateBackend(BaseTestCase): + + def test_is_not_available(self): + self.assertFalse(storage._JujuStorageBackend.is_available()) + + def test_is_available(self): + fake_script(self, 'state-get', 'echo ""') + self.assertTrue(storage._JujuStorageBackend.is_available()) + self.assertEqual(fake_script_calls(self, clear=True), []) + + def test_set_encodes_args(self): + t = tempfile.NamedTemporaryFile() + fake_script(self, 'state-set', dedent(""" + cat >> {} + """).format(t.name)) + backend = storage._JujuStorageBackend() + backend.set('key', {'foo': 2}) + self.assertEqual(fake_script_calls(self, clear=True), [ + ['state-set', '--file', '-'], + ]) + t.seek(0) + content = t.read() + self.assertEqual(content.decode('utf-8'), dedent("""\ + "key": | + {foo: 2} + """)) + + def test_get(self): + fake_script(self, 'state-get', dedent(""" + echo 'foo: "bar"' + """)) + backend = storage._JujuStorageBackend() + value = backend.get('key') + self.assertEqual(value, {'foo': 'bar'}) + self.assertEqual(fake_script_calls(self, clear=True), [ + ['state-get', 'key'], + ]) + + def test_set_and_get_complex_value(self): + t = tempfile.NamedTemporaryFile() + fake_script(self, 'state-set', dedent(""" + cat >> {} + """).format(t.name)) + backend = storage._JujuStorageBackend() + complex_val = { + 'foo': 2, + 3: [1, 2, '3'], + 'four': {2, 3}, + 'five': {'a': 2, 'b': 3.0}, + 'six': ('a', 'b'), + 'seven': b'1234', + } + backend.set('Class[foo]/_stored', complex_val) + self.assertEqual(fake_script_calls(self, clear=True), [ + ['state-set', '--file', '-'], + ]) + t.seek(0) + content = t.read() + outer = yaml.safe_load(content) + key = 'Class[foo]/_stored' + self.assertEqual(list(outer.keys()), [key]) + inner = yaml.load(outer[key], Loader=storage._SimpleLoader) + self.assertEqual(complex_val, inner) + if sys.version_info >= (3, 6): + # In Python 3.5 dicts are not ordered by default, and PyYAML only + # iterates the dict. So we read and assert the content is valid, + # but we don't assert the serialized form. + self.assertEqual(content.decode('utf-8'), dedent("""\ + "Class[foo]/_stored": | + foo: 2 + 3: [1, 2, '3'] + four: !!set {2: null, 3: null} + five: {a: 2, b: 3.0} + six: !!python/tuple [a, b] + seven: !!binary | + MTIzNA== + """)) + # Note that the content is yaml in a string, embedded inside YAML to declare the Key: + # Value of where to store the entry. 
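+        # A rough sketch of that round trip (illustrative only, nothing new
+        # is asserted here):
+        #     state-get "Class[foo]/_stored"  -> inner YAML document (a str)
+        #     yaml.load(inner, Loader=storage._SimpleLoader) -> complex_val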
+ fake_script(self, 'state-get', dedent(""" + echo "foo: 2 + 3: [1, 2, '3'] + four: !!set {2: null, 3: null} + five: {a: 2, b: 3.0} + six: !!python/tuple [a, b] + seven: !!binary | + MTIzNA== + " + """)) + out = backend.get('Class[foo]/_stored') + self.assertEqual(out, complex_val) + + # TODO: Add tests for things we don't want to support. eg, YAML that has custom types should + # be properly forbidden. + # TODO: Tests for state-set/get/delete and how they handle if you ask to delete something + # that doesn't exist, or get something that doesn't exist, etc. diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/test/test_testing.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/test/test_testing.py new file mode 100644 index 0000000000000000000000000000000000000000..8f9e4eb17f7a1f746834357987976343e216ee2d --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/mod/operator/test/test_testing.py @@ -0,0 +1,931 @@ +# Copyright 2019-2020 Canonical Ltd. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import importlib +import pathlib +import shutil +import sys +import tempfile +import textwrap +import unittest +import yaml + +from ops.charm import ( + CharmBase, + RelationEvent, +) +from ops.framework import ( + Object, +) +from ops.model import ( + ActiveStatus, + MaintenanceStatus, + UnknownStatus, + ModelError, + RelationNotFoundError, +) +from ops.testing import Harness + + +class TestHarness(unittest.TestCase): + + def test_add_relation(self): + harness = Harness(CharmBase, meta=''' + name: test-app + requires: + db: + interface: pgsql + ''') + rel_id = harness.add_relation('db', 'postgresql') + self.assertIsInstance(rel_id, int) + backend = harness._backend + self.assertEqual(backend.relation_ids('db'), [rel_id]) + self.assertEqual(backend.relation_list(rel_id), []) + # Make sure the initial data bags for our app and unit are empty. 
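+        # (For reference: relation_get(..., is_app=True) reads the application
+        # data bag, while is_app=False reads the named unit's data bag.)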
+        self.assertEqual(backend.relation_get(rel_id, 'test-app', is_app=True), {})
+        self.assertEqual(backend.relation_get(rel_id, 'test-app/0', is_app=False), {})
+
+    def test_add_relation_and_unit(self):
+        harness = Harness(CharmBase, meta='''
+            name: test-app
+            requires:
+                db:
+                    interface: pgsql
+            ''')
+        rel_id = harness.add_relation('db', 'postgresql')
+        self.assertIsInstance(rel_id, int)
+        harness.add_relation_unit(rel_id, 'postgresql/0')
+        harness.update_relation_data(rel_id, 'postgresql/0', {'foo': 'bar'})
+        backend = harness._backend
+        self.assertEqual(backend.relation_ids('db'), [rel_id])
+        self.assertEqual(backend.relation_list(rel_id), ['postgresql/0'])
+        self.assertEqual(
+            backend.relation_get(rel_id, 'postgresql/0', is_app=False),
+            {'foo': 'bar'})
+
+    def test_add_relation_with_remote_app_data(self):
+        # language=YAML
+        harness = Harness(CharmBase, meta='''
+            name: test-app
+            requires:
+                db:
+                    interface: pgsql
+            ''')
+        remote_app = 'postgresql'
+        rel_id = harness.add_relation('db', remote_app)
+        harness.update_relation_data(rel_id, 'postgresql', {'app': 'data'})
+        self.assertIsInstance(rel_id, int)
+        backend = harness._backend
+        self.assertEqual([rel_id], backend.relation_ids('db'))
+        self.assertEqual({'app': 'data'}, backend.relation_get(rel_id, remote_app, is_app=True))
+
+    def test_add_relation_with_our_initial_data(self):
+
+        class InitialDataTester(CharmBase):
+            """Record the relation-changed events."""
+
+            def __init__(self, framework):
+                super().__init__(framework)
+                self.observed_events = []
+                self.framework.observe(self.on.db_relation_changed, self._on_db_relation_changed)
+
+            def _on_db_relation_changed(self, event):
+                self.observed_events.append(event)
+
+        # language=YAML
+        harness = Harness(InitialDataTester, meta='''
+            name: test-app
+            requires:
+                db:
+                    interface: pgsql
+            ''')
+        rel_id = harness.add_relation('db', 'postgresql')
+        harness.update_relation_data(rel_id, 'test-app', {'k': 'v1'})
+        harness.update_relation_data(rel_id, 'test-app/0', {'ingress-address': '192.0.2.1'})
+        backend = harness._backend
+        self.assertEqual({'k': 'v1'}, backend.relation_get(rel_id, 'test-app', is_app=True))
+        self.assertEqual({'ingress-address': '192.0.2.1'},
+                         backend.relation_get(rel_id, 'test-app/0', is_app=False))
+
+        harness.begin()
+        self.assertEqual({'k': 'v1'}, backend.relation_get(rel_id, 'test-app', is_app=True))
+        self.assertEqual({'ingress-address': '192.0.2.1'},
+                         backend.relation_get(rel_id, 'test-app/0', is_app=False))
+        # Make sure no relation-changed events are emitted for our own data bags.
+        self.assertEqual([], harness.charm.observed_events)
+
+        # A remote unit can still update our app relation data bag since our unit
+        # is not a leader.
+        harness.update_relation_data(rel_id, 'test-app', {'k': 'v2'})
+        # But no event is emitted, because the bag is still one of our own.
+        self.assertEqual([], harness.charm.observed_events)
+        # We can also update our own relation data, even if it is a bit 'cheaty'.
+        harness.update_relation_data(rel_id, 'test-app/0', {'ingress-address': '192.0.2.2'})
+        # Again, no event happens.
+
+        # Even as a leader, updating our app relation data bag and our unit data
+        # bag does not generate events.
+ harness.set_leader(True) + harness.update_relation_data(rel_id, 'test-app', {'k': 'v3'}) + harness.update_relation_data(rel_id, 'test-app/0', {'ingress-address': '192.0.2.2'}) + self.assertEqual([], harness.charm.observed_events) + + def test_add_peer_relation_with_initial_data_leader(self): + + class InitialDataTester(CharmBase): + """Record the relation-changed events.""" + + def __init__(self, framework): + super().__init__(framework) + self.observed_events = [] + self.framework.observe(self.on.cluster_relation_changed, + self._on_cluster_relation_changed) + + def _on_cluster_relation_changed(self, event): + self.observed_events.append(event) + + # language=YAML + harness = Harness(InitialDataTester, meta=''' + name: test-app + peers: + cluster: + interface: cluster + ''') + # TODO: dmitriis 2020-04-07 test a minion unit and initial peer relation app data + # events when the harness begins to emit events for initial data. + harness.set_leader(is_leader=True) + rel_id = harness.add_relation('cluster', 'test-app') + harness.update_relation_data(rel_id, 'test-app', {'k': 'v'}) + harness.update_relation_data(rel_id, 'test-app/0', {'ingress-address': '192.0.2.1'}) + backend = harness._backend + self.assertEqual({'k': 'v'}, backend.relation_get(rel_id, 'test-app', is_app=True)) + self.assertEqual({'ingress-address': '192.0.2.1'}, + backend.relation_get(rel_id, 'test-app/0', is_app=False)) + + harness.begin() + self.assertEqual({'k': 'v'}, backend.relation_get(rel_id, 'test-app', is_app=True)) + self.assertEqual({'ingress-address': '192.0.2.1'}, + backend.relation_get(rel_id, 'test-app/0', is_app=False)) + # Make sure no relation-changed events are emitted for our own data bags. + self.assertEqual([], harness.charm.observed_events) + + # Updating our app relation data bag and our unit data bag does not trigger events + harness.update_relation_data(rel_id, 'test-app', {'k': 'v2'}) + harness.update_relation_data(rel_id, 'test-app/0', {'ingress-address': '192.0.2.2'}) + self.assertEqual([], harness.charm.observed_events) + + # If our unit becomes a minion, updating app relation data indirectly becomes possible + # and our charm gets notifications. 
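+        # (Sketch of the expectation: once this unit is no longer the leader,
+        # an update of the peer app data bag counts as a remote change, so a
+        # relation-changed event should fire.)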
+        harness.set_leader(False)
+        harness.update_relation_data(rel_id, 'test-app', {'k': 'v3'})
+        self.assertEqual({'k': 'v3'}, backend.relation_get(rel_id, 'test-app', is_app=True))
+        self.assertEqual(len(harness.charm.observed_events), 1)
+        self.assertIsInstance(harness.charm.observed_events[0], RelationEvent)
+
+    def test_relation_events(self):
+        harness = Harness(RelationEventCharm, meta='''
+            name: test-app
+            requires:
+                db:
+                    interface: pgsql
+            ''')
+        harness.begin()
+        harness.charm.observe_relation_events('db')
+        self.assertEqual(harness.charm.get_changes(), [])
+        rel_id = harness.add_relation('db', 'postgresql')
+        self.assertEqual(
+            harness.charm.get_changes(),
+            [{'name': 'relation-created',
+              'data': {
+                  'app': 'postgresql',
+                  'unit': None,
+                  'relation_id': rel_id,
+              }}])
+        harness.add_relation_unit(rel_id, 'postgresql/0')
+        self.assertEqual(
+            harness.charm.get_changes(),
+            [{'name': 'relation-joined',
+              'data': {
+                  'app': 'postgresql',
+                  'unit': 'postgresql/0',
+                  'relation_id': rel_id,
+              }}])
+        harness.update_relation_data(rel_id, 'postgresql', {'foo': 'bar'})
+        self.assertEqual(
+            harness.charm.get_changes(),
+            [{'name': 'relation-changed',
+              'data': {
+                  'app': 'postgresql',
+                  'unit': None,
+                  'relation_id': rel_id,
+              }}])
+        harness.update_relation_data(rel_id, 'postgresql/0', {'baz': 'bing'})
+        self.assertEqual(
+            harness.charm.get_changes(),
+            [{'name': 'relation-changed',
+              'data': {
+                  'app': 'postgresql',
+                  'unit': 'postgresql/0',
+                  'relation_id': rel_id,
+              }}])
+
+    def test_get_relation_data(self):
+        harness = Harness(CharmBase, meta='''
+            name: test-app
+            requires:
+                db:
+                    interface: pgsql
+            ''')
+        rel_id = harness.add_relation('db', 'postgresql')
+        harness.update_relation_data(rel_id, 'postgresql', {'remote': 'data'})
+        self.assertEqual(harness.get_relation_data(rel_id, 'test-app'), {})
+        self.assertEqual(harness.get_relation_data(rel_id, 'test-app/0'), {})
+        self.assertEqual(harness.get_relation_data(rel_id, 'test-app/1'), None)
+        self.assertEqual(harness.get_relation_data(rel_id, 'postgresql'), {'remote': 'data'})
+        with self.assertRaises(KeyError):
+            # unknown relation id
+            harness.get_relation_data(99, 'postgresql')
+
+    def test_create_harness_twice(self):
+        metadata = '''
+            name: my-charm
+            requires:
+                db:
+                    interface: pgsql
+            '''
+        harness1 = Harness(CharmBase, meta=metadata)
+        harness2 = Harness(CharmBase, meta=metadata)
+        harness1.begin()
+        harness2.begin()
+        helper1 = DBRelationChangedHelper(harness1.charm, "helper1")
+        helper2 = DBRelationChangedHelper(harness2.charm, "helper2")
+        rel_id = harness2.add_relation('db', 'postgresql')
+        harness2.update_relation_data(rel_id, 'postgresql', {'key': 'value'})
+        # Helper2 should see the event triggered by harness2, but helper1 should see no events.
+ self.assertEqual(helper1.changes, []) + self.assertEqual(helper2.changes, [(rel_id, 'postgresql')]) + + def test_begin_twice(self): + # language=YAML + harness = Harness(CharmBase, meta=''' + name: test-app + requires: + db: + interface: pgsql + ''') + harness.begin() + with self.assertRaises(RuntimeError): + harness.begin() + + def test_update_relation_exposes_new_data(self): + harness = Harness(CharmBase, meta=''' + name: my-charm + requires: + db: + interface: pgsql + ''') + harness.begin() + viewer = RelationChangedViewer(harness.charm, 'db') + rel_id = harness.add_relation('db', 'postgresql') + harness.add_relation_unit(rel_id, 'postgresql/0') + harness.update_relation_data(rel_id, 'postgresql/0', {'initial': 'data'}) + self.assertEqual(viewer.changes, [{'initial': 'data'}]) + harness.update_relation_data(rel_id, 'postgresql/0', {'new': 'value'}) + self.assertEqual(viewer.changes, [{'initial': 'data'}, + {'initial': 'data', 'new': 'value'}]) + + def test_update_relation_no_local_unit_change_event(self): + # language=YAML + harness = Harness(CharmBase, meta=''' + name: my-charm + requires: + db: + interface: pgsql + ''') + harness.begin() + helper = DBRelationChangedHelper(harness.charm, "helper") + rel_id = harness.add_relation('db', 'postgresql') + rel = harness.charm.model.get_relation('db') + rel.data[harness.charm.model.unit]['key'] = 'value' + # there should be no event for updating our own data + harness.update_relation_data(rel_id, 'my-charm/0', {'new': 'other'}) + # but the data will be updated. + self.assertEqual({'key': 'value', 'new': 'other'}, rel.data[harness.charm.model.unit]) + + rel.data[harness.charm.model.unit]['new'] = 'value' + # Our unit data bag got updated. + self.assertEqual(rel.data[harness.charm.model.unit]['new'], 'value') + # But there were no changed events registered by our unit. + self.assertEqual([], helper.changes) + + def test_update_peer_relation_no_local_unit_change_event(self): + # language=YAML + harness = Harness(CharmBase, meta=''' + name: postgresql + peers: + db: + interface: pgsql + ''') + harness.begin() + helper = DBRelationChangedHelper(harness.charm, "helper") + rel_id = harness.add_relation('db', 'postgresql') + + rel = harness.charm.model.get_relation('db') + rel.data[harness.charm.model.unit]['key'] = 'value' + rel = harness.charm.model.get_relation('db') + harness.update_relation_data(rel_id, 'postgresql/0', {'key': 'v1'}) + self.assertEqual({'key': 'v1'}, rel.data[harness.charm.model.unit]) + # Make sure there was no event + self.assertEqual([], helper.changes) + + rel.data[harness.charm.model.unit]['key'] = 'v2' + # Our unit data bag got updated. + self.assertEqual({'key': 'v2'}, dict(rel.data[harness.charm.model.unit])) + # But there were no changed events registered by our unit. + self.assertEqual([], helper.changes) + + # Same for when our unit is a leader. 
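+        # (Leadership only gates writes to the shared app data bag; a unit may
+        # always write its own unit data bag, and doing so locally still emits
+        # no event.)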
+ harness.set_leader(is_leader=True) + harness.update_relation_data(rel_id, 'postgresql/0', {'key': 'v3'}) + self.assertEqual({'key': 'v3'}, dict(rel.data[harness.charm.model.unit])) + self.assertEqual([], helper.changes) + + rel.data[harness.charm.model.unit]['key'] = 'v4' + self.assertEqual(rel.data[harness.charm.model.unit]['key'], 'v4') + self.assertEqual([], helper.changes) + + def test_update_peer_relation_app_data(self): + # language=YAML + harness = Harness(CharmBase, meta=''' + name: postgresql + peers: + db: + interface: pgsql + ''') + harness.begin() + harness.set_leader(is_leader=True) + helper = DBRelationChangedHelper(harness.charm, "helper") + rel_id = harness.add_relation('db', 'postgresql') + rel = harness.charm.model.get_relation('db') + rel.data[harness.charm.app]['key'] = 'value' + harness.update_relation_data(rel_id, 'postgresql', {'key': 'v1'}) + self.assertEqual({'key': 'v1'}, rel.data[harness.charm.app]) + self.assertEqual([], helper.changes) + + rel.data[harness.charm.app]['key'] = 'v2' + # Our unit data bag got updated. + self.assertEqual(rel.data[harness.charm.model.app]['key'], 'v2') + # But there were no changed events registered by our unit. + self.assertEqual([], helper.changes) + + # If our unit is not a leader unit we get an update about peer app relation data changes. + harness.set_leader(is_leader=False) + harness.update_relation_data(rel_id, 'postgresql', {'k2': 'v2'}) + self.assertEqual(rel.data[harness.charm.model.app]['k2'], 'v2') + self.assertEqual(helper.changes, [(0, 'postgresql')]) + + def test_update_relation_no_local_app_change_event(self): + # language=YAML + harness = Harness(CharmBase, meta=''' + name: my-charm + requires: + db: + interface: pgsql + ''') + harness.begin() + harness.set_leader(False) + helper = DBRelationChangedHelper(harness.charm, "helper") + rel_id = harness.add_relation('db', 'postgresql') + # TODO: remove this as soon as https://github.com/canonical/operator/issues/175 is fixed. + harness.add_relation_unit(rel_id, 'postgresql/0') + self.assertEqual(helper.changes, []) + + harness.update_relation_data(rel_id, 'my-charm', {'new': 'value'}) + rel = harness.charm.model.get_relation('db') + self.assertEqual(rel.data[harness.charm.app]['new'], 'value') + + # Our app data bag got updated. + self.assertEqual(rel.data[harness.charm.model.app]['new'], 'value') + # But there were no changed events registered by our unit. 
+ self.assertEqual(helper.changes, []) + + def test_update_relation_remove_data(self): + harness = Harness(CharmBase, meta=''' + name: my-charm + requires: + db: + interface: pgsql + ''') + harness.begin() + viewer = RelationChangedViewer(harness.charm, 'db') + rel_id = harness.add_relation('db', 'postgresql') + harness.add_relation_unit(rel_id, 'postgresql/0') + harness.update_relation_data(rel_id, 'postgresql/0', {'initial': 'data'}) + harness.update_relation_data(rel_id, 'postgresql/0', {'initial': ''}) + self.assertEqual(viewer.changes, [{'initial': 'data'}, {}]) + + def test_update_config(self): + harness = Harness(RecordingCharm) + harness.begin() + harness.update_config(key_values={'a': 'foo', 'b': 2}) + self.assertEqual( + harness.charm.changes, + [{'name': 'config', 'data': {'a': 'foo', 'b': 2}}]) + harness.update_config(key_values={'b': 3}) + self.assertEqual( + harness.charm.changes, + [{'name': 'config', 'data': {'a': 'foo', 'b': 2}}, + {'name': 'config', 'data': {'a': 'foo', 'b': 3}}]) + # you can set config values to the empty string, you can use unset to actually remove items + harness.update_config(key_values={'a': ''}, unset=set('b')) + self.assertEqual( + harness.charm.changes, + [{'name': 'config', 'data': {'a': 'foo', 'b': 2}}, + {'name': 'config', 'data': {'a': 'foo', 'b': 3}}, + {'name': 'config', 'data': {'a': ''}}, + ]) + + def test_set_leader(self): + harness = Harness(RecordingCharm) + # No event happens here + harness.set_leader(False) + harness.begin() + self.assertFalse(harness.charm.model.unit.is_leader()) + harness.set_leader(True) + self.assertEqual(harness.charm.get_changes(reset=True), [{'name': 'leader-elected'}]) + self.assertTrue(harness.charm.model.unit.is_leader()) + harness.set_leader(False) + self.assertFalse(harness.charm.model.unit.is_leader()) + # No hook event when you lose leadership. + # TODO: verify if Juju always triggers `leader-settings-changed` if you + # lose leadership. + self.assertEqual(harness.charm.get_changes(reset=True), []) + harness.disable_hooks() + harness.set_leader(True) + # No hook event if you have disabled them + self.assertEqual(harness.charm.get_changes(reset=True), []) + + def test_relation_set_app_not_leader(self): + harness = Harness(RecordingCharm, meta=''' + name: test-charm + requires: + db: + interface: pgsql + ''') + harness.begin() + harness.set_leader(False) + rel_id = harness.add_relation('db', 'postgresql') + harness.add_relation_unit(rel_id, 'postgresql/0') + rel = harness.charm.model.get_relation('db') + with self.assertRaises(ModelError): + rel.data[harness.charm.app]['foo'] = 'bar' + # The data has not actually been changed + self.assertEqual(harness.get_relation_data(rel_id, 'test-charm'), {}) + harness.set_leader(True) + rel.data[harness.charm.app]['foo'] = 'bar' + self.assertEqual(harness.get_relation_data(rel_id, 'test-charm'), {'foo': 'bar'}) + + def test_hooks_enabled_and_disabled(self): + harness = Harness(RecordingCharm, meta=''' + name: test-charm + ''') + # Before begin() there are no events. + harness.update_config({'value': 'first'}) + # By default, after begin the charm is set up to receive events. 
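+        # (The update_config call above changed the stored config but recorded
+        # nothing, since no charm instance existed yet to observe it.)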
+ harness.begin() + harness.update_config({'value': 'second'}) + self.assertEqual( + harness.charm.get_changes(reset=True), + [{'name': 'config', 'data': {'value': 'second'}}]) + # Once disabled, we won't see config-changed when we make an update + harness.disable_hooks() + harness.update_config({'third': '3'}) + self.assertEqual(harness.charm.get_changes(reset=True), []) + harness.enable_hooks() + harness.update_config({'value': 'fourth'}) + self.assertEqual( + harness.charm.get_changes(reset=True), + [{'name': 'config', 'data': {'value': 'fourth', 'third': '3'}}]) + + def test_metadata_from_directory(self): + tmp = pathlib.Path(tempfile.mkdtemp()) + self.addCleanup(shutil.rmtree, str(tmp)) + metadata_filename = tmp / 'metadata.yaml' + with metadata_filename.open('wt') as metadata: + metadata.write(textwrap.dedent(''' + name: my-charm + requires: + db: + interface: pgsql + ''')) + harness = self._get_dummy_charm_harness(tmp) + harness.begin() + self.assertEqual(list(harness.model.relations), ['db']) + # The charm_dir also gets set + self.assertEqual(harness.framework.charm_dir, tmp) + + def test_set_model_name(self): + harness = Harness(CharmBase, meta=''' + name: test-charm + ''') + harness.set_model_name('foo') + self.assertEqual('foo', harness.model.name) + + def test_set_model_name_after_begin(self): + harness = Harness(CharmBase, meta=''' + name: test-charm + ''') + harness.set_model_name('bar') + harness.begin() + with self.assertRaises(RuntimeError): + harness.set_model_name('foo') + self.assertEqual(harness.model.name, 'bar') + + def test_actions_from_directory(self): + tmp = pathlib.Path(tempfile.mkdtemp()) + self.addCleanup(shutil.rmtree, str(tmp)) + actions_filename = tmp / 'actions.yaml' + with actions_filename.open('wt') as actions: + actions.write(textwrap.dedent(''' + test: + description: a dummy action + ''')) + harness = self._get_dummy_charm_harness(tmp) + harness.begin() + self.assertEqual(list(harness.framework.meta.actions), ['test']) + # The charm_dir also gets set + self.assertEqual(harness.framework.charm_dir, tmp) + + def _get_dummy_charm_harness(self, tmp): + self._write_dummy_charm(tmp) + charm_mod = importlib.import_module('charm') + harness = Harness(charm_mod.MyTestingCharm) + return harness + + def _write_dummy_charm(self, tmp): + srcdir = tmp / 'src' + srcdir.mkdir(0o755) + charm_filename = srcdir / 'charm.py' + with charm_filename.open('wt') as charmpy: + # language=Python + charmpy.write(textwrap.dedent(''' + from ops.charm import CharmBase + class MyTestingCharm(CharmBase): + pass + ''')) + orig = sys.path[:] + sys.path.append(str(srcdir)) + + def cleanup(): + sys.path = orig + sys.modules.pop('charm') + + self.addCleanup(cleanup) + + def test_actions_passed_in(self): + harness = Harness( + CharmBase, + meta=''' + name: test-app + ''', + actions=''' + test-action: + description: a dummy test action + ''') + self.assertEqual(list(harness.framework.meta.actions), ['test-action']) + + def test_relation_set_deletes(self): + harness = Harness(CharmBase, meta=''' + name: test-charm + requires: + db: + interface: pgsql + ''') + harness.begin() + harness.set_leader(False) + rel_id = harness.add_relation('db', 'postgresql') + harness.update_relation_data(rel_id, 'test-charm/0', {'foo': 'bar'}) + harness.add_relation_unit(rel_id, 'postgresql/0') + rel = harness.charm.model.get_relation('db', rel_id) + del rel.data[harness.charm.model.unit]['foo'] + self.assertEqual({}, harness.get_relation_data(rel_id, 'test-charm/0')) + + def test_set_workload_version(self): + 
harness = Harness(CharmBase, meta=''' + name: app + ''') + harness.begin() + self.assertIsNone(harness.get_workload_version()) + harness.charm.model.unit.set_workload_version('1.2.3') + self.assertEqual(harness.get_workload_version(), '1.2.3') + + def test_get_backend_calls(self): + harness = Harness(CharmBase, meta=''' + name: test-charm + requires: + db: + interface: pgsql + ''') + harness.begin() + # No calls to the backend yet + self.assertEqual(harness._get_backend_calls(), []) + rel_id = harness.add_relation('db', 'postgresql') + # update_relation_data ensures the cached data for the relation is wiped + harness.update_relation_data(rel_id, 'test-charm/0', {'foo': 'bar'}) + self.assertEqual( + harness._get_backend_calls(reset=True), [ + ('relation_ids', 'db'), + ('relation_list', rel_id), + ]) + # add_relation_unit resets the relation_list, but doesn't trigger backend calls + harness.add_relation_unit(rel_id, 'postgresql/0') + self.assertEqual([], harness._get_backend_calls(reset=False)) + # however, update_relation_data does, because we are preparing relation-changed + harness.update_relation_data(rel_id, 'postgresql/0', {'foo': 'bar'}) + self.assertEqual( + harness._get_backend_calls(reset=False), [ + ('relation_ids', 'db'), + ('relation_list', rel_id), + ]) + # If we check again, they are still there, but now we reset it + self.assertEqual( + harness._get_backend_calls(reset=True), [ + ('relation_ids', 'db'), + ('relation_list', rel_id), + ]) + # And the calls are gone + self.assertEqual(harness._get_backend_calls(), []) + + def test_get_backend_calls_with_kwargs(self): + harness = Harness(CharmBase, meta=''' + name: test-charm + requires: + db: + interface: pgsql + ''') + harness.begin() + unit = harness.charm.model.unit + # Reset the list, because we don't care what it took to get here + harness._get_backend_calls(reset=True) + unit.status = ActiveStatus() + self.assertEqual( + [('status_set', 'active', '', {'is_app': False})], harness._get_backend_calls()) + harness.set_leader(True) + app = harness.charm.model.app + harness._get_backend_calls(reset=True) + app.status = ActiveStatus('message') + self.assertEqual( + [('is_leader',), + ('status_set', 'active', 'message', {'is_app': True})], + harness._get_backend_calls()) + + def test_unit_status(self): + harness = Harness(CharmBase, meta='name: test-app') + harness.set_leader(True) + harness.begin() + # default status + self.assertEqual(harness.model.unit.status, MaintenanceStatus('')) + status = ActiveStatus('message') + harness.model.unit.status = status + self.assertEqual(harness.model.unit.status, status) + + def test_app_status(self): + harness = Harness(CharmBase, meta='name: test-app') + harness.set_leader(True) + harness.begin() + # default status + self.assertEqual(harness.model.app.status, UnknownStatus()) + status = ActiveStatus('message') + harness.model.app.status = status + self.assertEqual(harness.model.app.status, status) + + +class DBRelationChangedHelper(Object): + def __init__(self, parent, key): + super().__init__(parent, key) + self.changes = [] + parent.framework.observe(parent.on.db_relation_changed, self.on_relation_changed) + + def on_relation_changed(self, event): + if event.unit is not None: + self.changes.append((event.relation.id, event.unit.name)) + else: + self.changes.append((event.relation.id, event.app.name)) + + +class RelationChangedViewer(Object): + """Track relation_changed events and saves the data seen in the relation bucket.""" + + def __init__(self, charm, relation_name): + 
super().__init__(charm, relation_name) + self.changes = [] + charm.framework.observe(charm.on[relation_name].relation_changed, self.on_relation_changed) + + def on_relation_changed(self, event): + if event.unit is not None: + data = event.relation.data[event.unit] + else: + data = event.relation.data[event.app] + self.changes.append(dict(data)) + + +class RecordingCharm(CharmBase): + """Record the events that we see, and any associated data.""" + + def __init__(self, framework): + super().__init__(framework) + self.changes = [] + self.framework.observe(self.on.config_changed, self.on_config_changed) + self.framework.observe(self.on.leader_elected, self.on_leader_elected) + + def get_changes(self, reset=True): + changes = self.changes + if reset: + self.changes = [] + return changes + + def on_config_changed(self, _): + self.changes.append(dict(name='config', data=dict(self.framework.model.config))) + + def on_leader_elected(self, _): + self.changes.append(dict(name='leader-elected')) + + +class RelationEventCharm(RecordingCharm): + """Record events related to relation lifecycles.""" + + def __init__(self, framework): + super().__init__(framework) + + def observe_relation_events(self, relation_name): + self.framework.observe(self.on[relation_name].relation_created, self._on_relation_created) + self.framework.observe(self.on[relation_name].relation_joined, self._on_relation_joined) + self.framework.observe(self.on[relation_name].relation_changed, self._on_relation_changed) + self.framework.observe(self.on[relation_name].relation_departed, + self._on_relation_departed) + self.framework.observe(self.on[relation_name].relation_broken, self._on_relation_broken) + + def _on_relation_created(self, event): + self._observe_relation_event('relation-created', event) + + def _on_relation_joined(self, event): + self._observe_relation_event('relation-joined', event) + + def _on_relation_changed(self, event): + self._observe_relation_event('relation-changed', event) + + def _on_relation_departed(self, event): + self._observe_relation_event('relation-departed', event) + + def _on_relation_broken(self, event): + self._observe_relation_event('relation-broken', event) + + def _observe_relation_event(self, event_name, event): + unit_name = None + if event.unit is not None: + unit_name = event.unit.name + app_name = None + if event.app is not None: + app_name = event.app.name + self.changes.append( + dict(name=event_name, + data=dict(app=app_name, unit=unit_name, relation_id=event.relation.id))) + + +class TestTestingModelBackend(unittest.TestCase): + + def test_status_set_get_unit(self): + harness = Harness(CharmBase, meta=''' + name: app + ''') + backend = harness._backend + backend.status_set('blocked', 'message', is_app=False) + self.assertEqual( + backend.status_get(is_app=False), + {'status': 'blocked', 'message': 'message'}) + self.assertEqual( + backend.status_get(is_app=True), + {'status': 'unknown', 'message': ''}) + + def test_status_set_get_app(self): + harness = Harness(CharmBase, meta=''' + name: app + ''') + backend = harness._backend + backend.status_set('blocked', 'message', is_app=True) + self.assertEqual( + backend.status_get(is_app=True), + {'status': 'blocked', 'message': 'message'}) + self.assertEqual( + backend.status_get(is_app=False), + {'status': 'maintenance', 'message': ''}) + + def test_relation_ids_unknown_relation(self): + harness = Harness(CharmBase, meta=''' + name: test-charm + provides: + db: + interface: mydb + ''') + backend = harness._backend + # With no relations added, we 
just get an empty list for the interface + self.assertEqual(backend.relation_ids('db'), []) + # But an unknown interface raises a ModelError + with self.assertRaises(ModelError): + backend.relation_ids('unknown') + + def test_relation_get_unknown_relation_id(self): + harness = Harness(CharmBase, meta=''' + name: test-charm + ''') + backend = harness._backend + with self.assertRaises(RelationNotFoundError): + backend.relation_get(1234, 'unit/0', False) + + def test_relation_list_unknown_relation_id(self): + harness = Harness(CharmBase, meta=''' + name: test-charm + ''') + backend = harness._backend + with self.assertRaises(RelationNotFoundError): + backend.relation_list(1234) + + def test_populate_oci_resources(self): + harness = Harness(CharmBase, meta=''' + name: test-app + resources: + image: + type: oci-image + description: "Image to deploy." + image2: + type: oci-image + description: "Another image." + ''') + harness.populate_oci_resources() + resource = harness._resource_dir / "image" / "contents.yaml" + with resource.open('r') as resource_file: + contents = yaml.safe_load(resource_file.read()) + self.assertEqual(contents['registrypath'], 'registrypath') + self.assertEqual(contents['username'], 'username') + self.assertEqual(contents['password'], 'password') + self.assertEqual(len(harness._backend._resources_map), 2) + + def test_resource_folder_cleanup(self): + harness = Harness(CharmBase, meta=''' + name: test-app + resources: + image: + type: oci-image + description: "Image to deploy." + ''') + harness.populate_oci_resources() + resource = harness._resource_dir / "image" / "contents.yaml" + del harness + with self.assertRaises(FileNotFoundError): + with resource.open('r') as resource_file: + print("This shouldn't be here: {}".format(resource_file)) + + def test_add_oci_resource_custom(self): + harness = Harness(CharmBase, meta=''' + name: test-app + resources: + image: + type: oci-image + description: "Image to deploy." + ''') + custom = { + "registrypath": "custompath", + "username": "custom_username", + "password": "custom_password", + } + harness.add_oci_resource('image', custom) + resource = harness._resource_dir / "image" / "contents.yaml" + with resource.open('r') as resource_file: + contents = yaml.safe_load(resource_file.read()) + self.assertEqual(contents['registrypath'], 'custompath') + self.assertEqual(contents['username'], 'custom_username') + self.assertEqual(contents['password'], 'custom_password') + self.assertEqual(len(harness._backend._resources_map), 1) + + def test_add_oci_resource_no_image(self): + harness = Harness(CharmBase, meta=''' + name: test-app + resources: + image: + type: file + description: "Image to deploy." 
+            ''')
+        with self.assertRaises(RuntimeError):
+            harness.add_oci_resource("image")
+        with self.assertRaises(RuntimeError):
+            harness.add_oci_resource("missing-resource")
+        self.assertEqual(len(harness._backend._resources_map), 0)
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/playbooks/add-port-forward.yaml b/SOL004/hackfest_firewall_pnf/charms/vyos-config/playbooks/add-port-forward.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..5d48d75870345d86b3aaa2ae06204bffe2dfe00e
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/playbooks/add-port-forward.yaml
@@ -0,0 +1,13 @@
+- hosts: vyos-routers
+  gather_facts: false
+  connection: local
+  tasks:
+  - name: add port-forward rule (vyos)
+    vyos_config:
+      lines:
+        - nat destination rule {{ ruleNumber }} destination port "{{ sourcePort }}"
+        - nat destination rule {{ ruleNumber }} inbound-interface "eth0"
+        - nat destination rule {{ ruleNumber }} protocol "tcp"
+        - nat destination rule {{ ruleNumber }} translation port "{{ destinationPort }}"
+        - nat destination rule {{ ruleNumber }} translation address "{{ destinationAddress }}"
+      save: yes
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/playbooks/backup.yaml b/SOL004/hackfest_firewall_pnf/charms/vyos-config/playbooks/backup.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..c83de0831c8271bdd5477e6fd3c012d204df7ac1
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/playbooks/backup.yaml
@@ -0,0 +1,9 @@
+- hosts: vyos-routers
+  gather_facts: false
+  connection: local
+  tasks:
+  - name: backup switch (vyos)
+    vyos_config:
+      backup: yes
+      backup_options:
+        filename: "{{ backupFile }}"
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/playbooks/configure-remote.yaml b/SOL004/hackfest_firewall_pnf/charms/vyos-config/playbooks/configure-remote.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..cc0232fae05dca7c81301a771a07b133f006a75c
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/playbooks/configure-remote.yaml
@@ -0,0 +1,8 @@
+- hosts: vyos-routers
+  connection: local
+  tasks:
+  - name: add a host to the allowed list in the firewall group (vyos)
+    vyos_config:
+      lines:
+        - set firewall group network-group MAGMA_AGW network "{{ MAGMA_AGW_IP }}/32"
+
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/playbooks/remove-port-forward.yaml b/SOL004/hackfest_firewall_pnf/charms/vyos-config/playbooks/remove-port-forward.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..61c12f955f95d172f2da52a7049329470071bf3c
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/playbooks/remove-port-forward.yaml
@@ -0,0 +1,11 @@
+- hosts: vyos-routers
+  gather_facts: false
+  connection: local
+  tasks:
+  - name: remove port-forward rule (vyos)
+    vyos_command:
+      commands:
+        - config
+        - delete nat destination rule {{ ruleNumber }}
+        - commit
+        - save
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/playbooks/restore.yaml b/SOL004/hackfest_firewall_pnf/charms/vyos-config/playbooks/restore.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..20051c42fa13639fc25ca99e5751022e0fbe0249
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/playbooks/restore.yaml
@@ -0,0 +1,7 @@
+- hosts: vyos-routers
+  connection: local
+  tasks:
+  - name: restore a previously created backup (vyos)
+    vyos_config:
+      lines:
+        - load "/home/osm/{{ backupFile }}"
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/requirements.txt b/SOL004/hackfest_firewall_pnf/charms/vyos-config/requirements.txt
new file mode 100644
index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/src/charm.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/src/charm.py
new file mode 100755
index 0000000000000000000000000000000000000000..6f97799ce1827c2da22ea51cd6453564bf83321c
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/src/charm.py
@@ -0,0 +1,112 @@
+#!/usr/bin/env python3
+# Copyright 2020 Canonical Ltd.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import os
+import subprocess
+import sys
+import traceback
+
+# Make the charm's bundled libraries (lib/charms/osm) importable before we
+# import from them.
+sys.path.append("lib")
+
+from charms.osm import libansible
+from charms.osm.sshproxy import SSHProxyCharm
+from ops.charm import CharmBase, CharmEvents
+from ops.framework import StoredState, EventBase, EventSource
+from ops.main import main
+from ops.model import (
+    ActiveStatus,
+    BlockedStatus,
+    MaintenanceStatus,
+    WaitingStatus,
+    ModelError,
+)
+
+
+class VyosCharm(SSHProxyCharm):
+    def __init__(self, framework, key):
+        super().__init__(framework, key)
+
+        # Register all of the events we want to observe
+        self.framework.observe(self.on.config_changed, self.on_config_changed)
+        self.framework.observe(self.on.install, self.on_install)
+        self.framework.observe(self.on.start, self.on_start)
+        self.framework.observe(self.on.upgrade_charm, self.on_upgrade_charm)
+        # Charm actions (primitives)
+        self.framework.observe(
+            self.on.add_port_forward_action, self.on_add_port_forward_action
+        )
+        self.framework.observe(
+            self.on.remove_port_forward_action, self.on_remove_port_forward_action
+        )
+
+    def on_config_changed(self, event):
+        """Handle changes in configuration."""
+        super().on_config_changed(event)
+
+    def on_install(self, event):
+        """Called when the charm is being installed."""
+        super().on_install(event)
+        self.unit.status = MaintenanceStatus("Installing Ansible")
+        libansible.install_ansible_support()
+        self.unit.status = ActiveStatus()
+
+    def on_start(self, event):
+        """Called when the charm is being started."""
+        super().on_start(event)
+
+    def on_add_port_forward_action(self, event):
+        """Add a port forward rule."""
+        self._run_playbook(
+            event,
+            "add-port-forward.yaml",
+            {
+                "sourcePort": event.params["sourcePort"],
+                "destinationPort": event.params["destinationPort"],
+                "destinationAddress": event.params["destinationAddress"],
+                "ruleNumber": event.params["ruleNumber"],
+            })
+
+    def on_remove_port_forward_action(self, event):
+        """Remove a port forward rule."""
+        self._run_playbook(
+            event,
+            "remove-port-forward.yaml",
+            {
+                "ruleNumber": event.params["ruleNumber"],
+            })
+
+    def on_upgrade_charm(self, event):
+        """Upgrade the charm."""
+
+    def _run_playbook(self, event, playbook, variables):
+        try:
+            config = self.model.config
+            result = libansible.execute_playbook(
+                playbook,
+                config["ssh-hostname"],
config["ssh-username"], + config["ssh-password"], + variables, + ) + event.set_results({"output": result}) + except: + exc_type, exc_value, exc_traceback = sys.exc_info() + err = traceback.format_exception(exc_type, exc_value, exc_traceback) + event.fail(message="Playbook " + playbook + " failed: " + str(err)) + + +if __name__ == "__main__": + main(VyosCharm) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/bin/activate b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/bin/activate new file mode 100644 index 0000000000000000000000000000000000000000..3b8d0b2918a4bb421dc5d6b7cfc6e55c7f4ea480 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/bin/activate @@ -0,0 +1,84 @@ +# This file must be used with "source bin/activate" *from bash* +# you cannot run it directly + + +if [ "${BASH_SOURCE-}" = "$0" ]; then + echo "You must source this script: \$ source $0" >&2 + exit 33 +fi + +deactivate () { + unset -f pydoc >/dev/null 2>&1 + + # reset old environment variables + # ! [ -z ${VAR+_} ] returns true if VAR is declared at all + if ! [ -z "${_OLD_VIRTUAL_PATH:+_}" ] ; then + PATH="$_OLD_VIRTUAL_PATH" + export PATH + unset _OLD_VIRTUAL_PATH + fi + if ! [ -z "${_OLD_VIRTUAL_PYTHONHOME+_}" ] ; then + PYTHONHOME="$_OLD_VIRTUAL_PYTHONHOME" + export PYTHONHOME + unset _OLD_VIRTUAL_PYTHONHOME + fi + + # This should detect bash and zsh, which have a hash command that must + # be called to get it to forget past commands. Without forgetting + # past commands the $PATH changes we made may not be respected + if [ -n "${BASH-}" ] || [ -n "${ZSH_VERSION-}" ] ; then + hash -r 2>/dev/null + fi + + if ! [ -z "${_OLD_VIRTUAL_PS1+_}" ] ; then + PS1="$_OLD_VIRTUAL_PS1" + export PS1 + unset _OLD_VIRTUAL_PS1 + fi + + unset VIRTUAL_ENV + if [ ! "${1-}" = "nondestructive" ] ; then + # Self destruct! + unset -f deactivate + fi +} + +# unset irrelevant variables +deactivate nondestructive + +VIRTUAL_ENV='/home/mark/git/GitLab/osm-packages/hackfest_firewall_pnf/charms/vyos-config-src/venv' +export VIRTUAL_ENV + +_OLD_VIRTUAL_PATH="$PATH" +PATH="$VIRTUAL_ENV/bin:$PATH" +export PATH + +# unset PYTHONHOME if set +if ! [ -z "${PYTHONHOME+_}" ] ; then + _OLD_VIRTUAL_PYTHONHOME="$PYTHONHOME" + unset PYTHONHOME +fi + +if [ -z "${VIRTUAL_ENV_DISABLE_PROMPT-}" ] ; then + _OLD_VIRTUAL_PS1="${PS1-}" + if [ "x" != x ] ; then + PS1="${PS1-}" + else + PS1="(`basename \"$VIRTUAL_ENV\"`) ${PS1-}" + fi + export PS1 +fi + +# Make sure to unalias pydoc if it's already there +alias pydoc 2>/dev/null >/dev/null && unalias pydoc || true + +pydoc () { + python -m pydoc "$@" +} + +# This should detect bash and zsh, which have a hash command that must +# be called to get it to forget past commands. Without forgetting +# past commands the $PATH changes we made may not be respected +if [ -n "${BASH-}" ] || [ -n "${ZSH_VERSION-}" ] ; then + hash -r 2>/dev/null +fi diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/bin/activate.csh b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/bin/activate.csh new file mode 100644 index 0000000000000000000000000000000000000000..823028c01c915886e29683e8d0caf32255f363ed --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/bin/activate.csh @@ -0,0 +1,55 @@ +# This file must be used with "source bin/activate.csh" *from csh*. +# You cannot run it directly. +# Created by Davide Di Blasi . 
+ +set newline='\ +' + +alias deactivate 'test $?_OLD_VIRTUAL_PATH != 0 && setenv PATH "$_OLD_VIRTUAL_PATH:q" && unset _OLD_VIRTUAL_PATH; rehash; test $?_OLD_VIRTUAL_PROMPT != 0 && set prompt="$_OLD_VIRTUAL_PROMPT:q" && unset _OLD_VIRTUAL_PROMPT; unsetenv VIRTUAL_ENV; test "\!:*" != "nondestructive" && unalias deactivate && unalias pydoc' + +# Unset irrelevant variables. +deactivate nondestructive + +setenv VIRTUAL_ENV '/home/mark/git/GitLab/osm-packages/hackfest_firewall_pnf/charms/vyos-config-src/venv' + +set _OLD_VIRTUAL_PATH="$PATH:q" +setenv PATH "$VIRTUAL_ENV:q/bin:$PATH:q" + + + +if ('' != "") then + set env_name = '' +else + set env_name = '('"$VIRTUAL_ENV:t:q"') ' +endif + +if ( $?VIRTUAL_ENV_DISABLE_PROMPT ) then + if ( $VIRTUAL_ENV_DISABLE_PROMPT == "" ) then + set do_prompt = "1" + else + set do_prompt = "0" + endif +else + set do_prompt = "1" +endif + +if ( $do_prompt == "1" ) then + # Could be in a non-interactive environment, + # in which case, $prompt is undefined and we wouldn't + # care about the prompt anyway. + if ( $?prompt ) then + set _OLD_VIRTUAL_PROMPT="$prompt:q" + if ( "$prompt:q" =~ *"$newline:q"* ) then + : + else + set prompt = "$env_name:q$prompt:q" + endif + endif +endif + +unset env_name +unset do_prompt + +alias pydoc python -m pydoc + +rehash diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/bin/activate.fish b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/bin/activate.fish new file mode 100644 index 0000000000000000000000000000000000000000..5c50244a11153a5c9791366f4362b157594338b6 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/bin/activate.fish @@ -0,0 +1,100 @@ +# This file must be used using `source bin/activate.fish` *within a running fish ( http://fishshell.com ) session*. +# Do not run it directly. + +function _bashify_path -d "Converts a fish path to something bash can recognize" + set fishy_path $argv + set bashy_path $fishy_path[1] + for path_part in $fishy_path[2..-1] + set bashy_path "$bashy_path:$path_part" + end + echo $bashy_path +end + +function _fishify_path -d "Converts a bash path to something fish can recognize" + echo $argv | tr ':' '\n' +end + +function deactivate -d 'Exit virtualenv mode and return to the normal environment.' + # reset old environment variables + if test -n "$_OLD_VIRTUAL_PATH" + # https://github.com/fish-shell/fish-shell/issues/436 altered PATH handling + if test (echo $FISH_VERSION | head -c 1) -lt 3 + set -gx PATH (_fishify_path "$_OLD_VIRTUAL_PATH") + else + set -gx PATH "$_OLD_VIRTUAL_PATH" + end + set -e _OLD_VIRTUAL_PATH + end + + if test -n "$_OLD_VIRTUAL_PYTHONHOME" + set -gx PYTHONHOME "$_OLD_VIRTUAL_PYTHONHOME" + set -e _OLD_VIRTUAL_PYTHONHOME + end + + if test -n "$_OLD_FISH_PROMPT_OVERRIDE" + and functions -q _old_fish_prompt + # Set an empty local `$fish_function_path` to allow the removal of `fish_prompt` using `functions -e`. + set -l fish_function_path + + # Erase virtualenv's `fish_prompt` and restore the original. + functions -e fish_prompt + functions -c _old_fish_prompt fish_prompt + functions -e _old_fish_prompt + set -e _OLD_FISH_PROMPT_OVERRIDE + end + + set -e VIRTUAL_ENV + + if test "$argv[1]" != 'nondestructive' + # Self-destruct! + functions -e pydoc + functions -e deactivate + functions -e _bashify_path + functions -e _fishify_path + end +end + +# Unset irrelevant variables. 
+deactivate nondestructive + +set -gx VIRTUAL_ENV '/home/mark/git/GitLab/osm-packages/hackfest_firewall_pnf/charms/vyos-config-src/venv' + +# https://github.com/fish-shell/fish-shell/issues/436 altered PATH handling +if test (echo $FISH_VERSION | head -c 1) -lt 3 + set -gx _OLD_VIRTUAL_PATH (_bashify_path $PATH) +else + set -gx _OLD_VIRTUAL_PATH "$PATH" +end +set -gx PATH "$VIRTUAL_ENV"'/bin' $PATH + +# Unset `$PYTHONHOME` if set. +if set -q PYTHONHOME + set -gx _OLD_VIRTUAL_PYTHONHOME $PYTHONHOME + set -e PYTHONHOME +end + +function pydoc + python -m pydoc $argv +end + +if test -z "$VIRTUAL_ENV_DISABLE_PROMPT" + # Copy the current `fish_prompt` function as `_old_fish_prompt`. + functions -c fish_prompt _old_fish_prompt + + function fish_prompt + # Run the user's prompt first; it might depend on (pipe)status. + set -l prompt (_old_fish_prompt) + + # Prompt override provided? + # If not, just prepend the environment name. + if test -n '' + printf '%s%s' '' (set_color normal) + else + printf '%s(%s) ' (set_color normal) (basename "$VIRTUAL_ENV") + end + + string join -- \n $prompt # handle multi-line prompts + end + + set -gx _OLD_FISH_PROMPT_OVERRIDE "$VIRTUAL_ENV" +end diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/bin/activate.ps1 b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/bin/activate.ps1 new file mode 100644 index 0000000000000000000000000000000000000000..95504d3956b326b1ae2ece60f5db082db8d8bf99 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/bin/activate.ps1 @@ -0,0 +1,60 @@ +$script:THIS_PATH = $myinvocation.mycommand.path +$script:BASE_DIR = Split-Path (Resolve-Path "$THIS_PATH/..") -Parent + +function global:deactivate([switch] $NonDestructive) { + if (Test-Path variable:_OLD_VIRTUAL_PATH) { + $env:PATH = $variable:_OLD_VIRTUAL_PATH + Remove-Variable "_OLD_VIRTUAL_PATH" -Scope global + } + + if (Test-Path function:_old_virtual_prompt) { + $function:prompt = $function:_old_virtual_prompt + Remove-Item function:\_old_virtual_prompt + } + + if ($env:VIRTUAL_ENV) { + Remove-Item env:VIRTUAL_ENV -ErrorAction SilentlyContinue + } + + if (!$NonDestructive) { + # Self destruct! + Remove-Item function:deactivate + Remove-Item function:pydoc + } +} + +function global:pydoc { + python -m pydoc $args +} + +# unset irrelevant variables +deactivate -nondestructive + +$VIRTUAL_ENV = $BASE_DIR +$env:VIRTUAL_ENV = $VIRTUAL_ENV + +New-Variable -Scope global -Name _OLD_VIRTUAL_PATH -Value $env:PATH + +$env:PATH = "$env:VIRTUAL_ENV/bin:" + $env:PATH +if (!$env:VIRTUAL_ENV_DISABLE_PROMPT) { + function global:_old_virtual_prompt { + "" + } + $function:_old_virtual_prompt = $function:prompt + + if ("" -ne "") { + function global:prompt { + # Add the custom prefix to the existing prompt + $previous_prompt_value = & $function:_old_virtual_prompt + ("" + $previous_prompt_value) + } + } + else { + function global:prompt { + # Add a prefix to the current prompt, but don't discard it. 
+ $previous_prompt_value = & $function:_old_virtual_prompt + $new_prompt_value = "($( Split-Path $env:VIRTUAL_ENV -Leaf )) " + ($new_prompt_value + $previous_prompt_value) + } + } +} diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/bin/activate.xsh b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/bin/activate.xsh new file mode 100644 index 0000000000000000000000000000000000000000..9f6d410511a2994332207d34697c7e8612aa1ae6 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/bin/activate.xsh @@ -0,0 +1,46 @@ +"""Xonsh activate script for virtualenv""" +from xonsh.tools import get_sep as _get_sep + +def _deactivate(args): + if "pydoc" in aliases: + del aliases["pydoc"] + + if ${...}.get("_OLD_VIRTUAL_PATH", ""): + $PATH = $_OLD_VIRTUAL_PATH + del $_OLD_VIRTUAL_PATH + + if ${...}.get("_OLD_VIRTUAL_PYTHONHOME", ""): + $PYTHONHOME = $_OLD_VIRTUAL_PYTHONHOME + del $_OLD_VIRTUAL_PYTHONHOME + + if "VIRTUAL_ENV" in ${...}: + del $VIRTUAL_ENV + + if "VIRTUAL_ENV_PROMPT" in ${...}: + del $VIRTUAL_ENV_PROMPT + + if "nondestructive" not in args: + # Self destruct! + del aliases["deactivate"] + + +# unset irrelevant variables +_deactivate(["nondestructive"]) +aliases["deactivate"] = _deactivate + +$VIRTUAL_ENV = r"/home/mark/git/GitLab/osm-packages/hackfest_firewall_pnf/charms/vyos-config-src/venv" + +$_OLD_VIRTUAL_PATH = $PATH +$PATH = $PATH[:] +$PATH.add($VIRTUAL_ENV + _get_sep() + "bin", front=True, replace=True) + +if ${...}.get("PYTHONHOME", ""): + # unset PYTHONHOME if set + $_OLD_VIRTUAL_PYTHONHOME = $PYTHONHOME + del $PYTHONHOME + +$VIRTUAL_ENV_PROMPT = "" +if not $VIRTUAL_ENV_PROMPT: + del $VIRTUAL_ENV_PROMPT + +aliases["pydoc"] = ["python", "-m", "pydoc"] diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/bin/activate_this.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/bin/activate_this.py new file mode 100644 index 0000000000000000000000000000000000000000..44799869857f0fc68aef1c85a38fdbf236f3b017 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/bin/activate_this.py @@ -0,0 +1,32 @@ +# -*- coding: utf-8 -*- +"""Activate virtualenv for current interpreter: + +Use exec(open(this_file).read(), {'__file__': this_file}). + +This can be used when you must use an existing Python interpreter, not the virtualenv bin/python. 
+""" +import os +import site +import sys + +try: + abs_file = os.path.abspath(__file__) +except NameError: + raise AssertionError("You must use exec(open(this_file).read(), {'__file__': this_file}))") + +bin_dir = os.path.dirname(abs_file) +base = bin_dir[: -len("bin") - 1] # strip away the bin part from the __file__, plus the path separator + +# prepend bin to PATH (this file is inside the bin directory) +os.environ["PATH"] = os.pathsep.join([bin_dir] + os.environ.get("PATH", "").split(os.pathsep)) +os.environ["VIRTUAL_ENV"] = base # virtual env is right above bin directory + +# add the virtual environments libraries to the host python import mechanism +prev_length = len(sys.path) +for lib in "../lib/python3.8/site-packages".split(os.pathsep): + path = os.path.realpath(os.path.join(bin_dir, lib)) + site.addsitedir(path.decode("utf-8") if "" else path) +sys.path[:] = sys.path[prev_length:] + sys.path[0:prev_length] + +sys.real_prefix = sys.prefix +sys.prefix = base diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/bin/chardetect b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/bin/chardetect new file mode 100755 index 0000000000000000000000000000000000000000..ddb8412a6c612081c4f90f784d43906b019ae280 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/bin/chardetect @@ -0,0 +1,8 @@ +#!/home/mark/git/GitLab/osm-packages/hackfest_firewall_pnf/charms/vyos-config-src/venv/bin/python +# -*- coding: utf-8 -*- +import re +import sys +from chardet.cli.chardetect import main +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(main()) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/bin/chardetect-3.8 b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/bin/chardetect-3.8 new file mode 100755 index 0000000000000000000000000000000000000000..ddb8412a6c612081c4f90f784d43906b019ae280 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/bin/chardetect-3.8 @@ -0,0 +1,8 @@ +#!/home/mark/git/GitLab/osm-packages/hackfest_firewall_pnf/charms/vyos-config-src/venv/bin/python +# -*- coding: utf-8 -*- +import re +import sys +from chardet.cli.chardetect import main +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(main()) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/bin/chardetect3 b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/bin/chardetect3 new file mode 100755 index 0000000000000000000000000000000000000000..ddb8412a6c612081c4f90f784d43906b019ae280 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/bin/chardetect3 @@ -0,0 +1,8 @@ +#!/home/mark/git/GitLab/osm-packages/hackfest_firewall_pnf/charms/vyos-config-src/venv/bin/python +# -*- coding: utf-8 -*- +import re +import sys +from chardet.cli.chardetect import main +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(main()) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/bin/charmcraft b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/bin/charmcraft new file mode 100755 index 0000000000000000000000000000000000000000..ee6460bcbc19a4c5a5db69e999e8bbd2297d7a4f --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/bin/charmcraft @@ -0,0 +1,8 @@ +#!/home/mark/git/GitLab/osm-packages/hackfest_firewall_pnf/charms/vyos-config-src/venv/bin/python +# -*- coding: utf-8 -*- +import re +import sys +from charmcraft.main import main +if __name__ 
== '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(main()) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/bin/distro b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/bin/distro new file mode 100755 index 0000000000000000000000000000000000000000..06e85815f80a423d4033004a5e5cc23d3ef47171 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/bin/distro @@ -0,0 +1,8 @@ +#!/home/mark/git/GitLab/osm-packages/hackfest_firewall_pnf/charms/vyos-config-src/venv/bin/python +# -*- coding: utf-8 -*- +import re +import sys +from distro import main +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(main()) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/bin/distro-3.8 b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/bin/distro-3.8 new file mode 100755 index 0000000000000000000000000000000000000000..06e85815f80a423d4033004a5e5cc23d3ef47171 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/bin/distro-3.8 @@ -0,0 +1,8 @@ +#!/home/mark/git/GitLab/osm-packages/hackfest_firewall_pnf/charms/vyos-config-src/venv/bin/python +# -*- coding: utf-8 -*- +import re +import sys +from distro import main +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(main()) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/bin/distro3 b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/bin/distro3 new file mode 100755 index 0000000000000000000000000000000000000000..06e85815f80a423d4033004a5e5cc23d3ef47171 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/bin/distro3 @@ -0,0 +1,8 @@ +#!/home/mark/git/GitLab/osm-packages/hackfest_firewall_pnf/charms/vyos-config-src/venv/bin/python +# -*- coding: utf-8 -*- +import re +import sys +from distro import main +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(main()) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/bin/easy_install b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/bin/easy_install new file mode 100755 index 0000000000000000000000000000000000000000..5dd7f111061d3c9bf682e3e6114c70cf601c91be --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/bin/easy_install @@ -0,0 +1,8 @@ +#!/home/mark/git/GitLab/osm-packages/hackfest_firewall_pnf/charms/vyos-config-src/venv/bin/python +# -*- coding: utf-8 -*- +import re +import sys +from setuptools.command.easy_install import main +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(main()) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/bin/easy_install-3.8 b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/bin/easy_install-3.8 new file mode 100755 index 0000000000000000000000000000000000000000..5dd7f111061d3c9bf682e3e6114c70cf601c91be --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/bin/easy_install-3.8 @@ -0,0 +1,8 @@ +#!/home/mark/git/GitLab/osm-packages/hackfest_firewall_pnf/charms/vyos-config-src/venv/bin/python +# -*- coding: utf-8 -*- +import re +import sys +from setuptools.command.easy_install import main +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(main()) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/bin/easy_install3 
b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/bin/easy_install3 new file mode 100755 index 0000000000000000000000000000000000000000..5dd7f111061d3c9bf682e3e6114c70cf601c91be --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/bin/easy_install3 @@ -0,0 +1,8 @@ +#!/home/mark/git/GitLab/osm-packages/hackfest_firewall_pnf/charms/vyos-config-src/venv/bin/python +# -*- coding: utf-8 -*- +import re +import sys +from setuptools.command.easy_install import main +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(main()) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/bin/jsonschema b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/bin/jsonschema new file mode 100755 index 0000000000000000000000000000000000000000..5de42465e06538adb47fd019048dde5071b19fd3 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/bin/jsonschema @@ -0,0 +1,8 @@ +#!/home/mark/git/GitLab/osm-packages/hackfest_firewall_pnf/charms/vyos-config-src/venv/bin/python +# -*- coding: utf-8 -*- +import re +import sys +from jsonschema.cli import main +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(main()) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/bin/pip b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/bin/pip new file mode 100755 index 0000000000000000000000000000000000000000..24b601867cc4d186e8e2dcd588e35a8103111312 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/bin/pip @@ -0,0 +1,8 @@ +#!/home/mark/git/GitLab/osm-packages/hackfest_firewall_pnf/charms/vyos-config-src/venv/bin/python +# -*- coding: utf-8 -*- +import re +import sys +from pip._internal.cli.main import main +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(main()) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/bin/pip-3.8 b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/bin/pip-3.8 new file mode 100755 index 0000000000000000000000000000000000000000..24b601867cc4d186e8e2dcd588e35a8103111312 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/bin/pip-3.8 @@ -0,0 +1,8 @@ +#!/home/mark/git/GitLab/osm-packages/hackfest_firewall_pnf/charms/vyos-config-src/venv/bin/python +# -*- coding: utf-8 -*- +import re +import sys +from pip._internal.cli.main import main +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(main()) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/bin/pip3 b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/bin/pip3 new file mode 100755 index 0000000000000000000000000000000000000000..24b601867cc4d186e8e2dcd588e35a8103111312 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/bin/pip3 @@ -0,0 +1,8 @@ +#!/home/mark/git/GitLab/osm-packages/hackfest_firewall_pnf/charms/vyos-config-src/venv/bin/python +# -*- coding: utf-8 -*- +import re +import sys +from pip._internal.cli.main import main +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(main()) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/bin/pip3.8 b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/bin/pip3.8 new file mode 100755 index 0000000000000000000000000000000000000000..24b601867cc4d186e8e2dcd588e35a8103111312 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/bin/pip3.8 @@ 
-0,0 +1,8 @@ +#!/home/mark/git/GitLab/osm-packages/hackfest_firewall_pnf/charms/vyos-config-src/venv/bin/python +# -*- coding: utf-8 -*- +import re +import sys +from pip._internal.cli.main import main +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(main()) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/bin/tabulate b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/bin/tabulate new file mode 100755 index 0000000000000000000000000000000000000000..5316dda6a280ac09b06abd2512093a04439f9b49 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/bin/tabulate @@ -0,0 +1,8 @@ +#!/home/mark/git/GitLab/osm-packages/hackfest_firewall_pnf/charms/vyos-config-src/venv/bin/python +# -*- coding: utf-8 -*- +import re +import sys +from tabulate import _main +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(_main()) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/bin/wheel b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/bin/wheel new file mode 100755 index 0000000000000000000000000000000000000000..06e0d0325d52ae311688581b20c60e3b65fc7f41 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/bin/wheel @@ -0,0 +1,8 @@ +#!/home/mark/git/GitLab/osm-packages/hackfest_firewall_pnf/charms/vyos-config-src/venv/bin/python +# -*- coding: utf-8 -*- +import re +import sys +from wheel.cli import main +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(main()) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/bin/wheel-3.8 b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/bin/wheel-3.8 new file mode 100755 index 0000000000000000000000000000000000000000..06e0d0325d52ae311688581b20c60e3b65fc7f41 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/bin/wheel-3.8 @@ -0,0 +1,8 @@ +#!/home/mark/git/GitLab/osm-packages/hackfest_firewall_pnf/charms/vyos-config-src/venv/bin/python +# -*- coding: utf-8 -*- +import re +import sys +from wheel.cli import main +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(main()) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/bin/wheel3 b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/bin/wheel3 new file mode 100755 index 0000000000000000000000000000000000000000..06e0d0325d52ae311688581b20c60e3b65fc7f41 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/bin/wheel3 @@ -0,0 +1,8 @@ +#!/home/mark/git/GitLab/osm-packages/hackfest_firewall_pnf/charms/vyos-config-src/venv/bin/python +# -*- coding: utf-8 -*- +import re +import sys +from wheel.cli import main +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(main()) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/CacheControl-0.12.6.dist-info/AUTHORS.txt b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/CacheControl-0.12.6.dist-info/AUTHORS.txt new file mode 100644 index 0000000000000000000000000000000000000000..72c87d7d38ae7bf859717c333a5ee8230f6ce624 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/CacheControl-0.12.6.dist-info/AUTHORS.txt @@ -0,0 +1,562 @@ +A_Rog +Aakanksha Agrawal <11389424+rasponic@users.noreply.github.com> +Abhinav Sagar 
<40603139+abhinavsagar@users.noreply.github.com> +ABHYUDAY PRATAP SINGH +abs51295 +AceGentile +Adam Chainz +Adam Tse +Adam Tse +Adam Wentz +admin +Adrien Morison +ahayrapetyan +Ahilya +AinsworthK +Akash Srivastava +Alan Yee +Albert Tugushev +Albert-Guan +albertg +Aleks Bunin +Alethea Flowers +Alex Gaynor +Alex Grönholm +Alex Loosley +Alex Morega +Alex Stachowiak +Alexander Shtyrov +Alexandre Conrad +Alexey Popravka +Alexey Popravka +Alli +Ami Fischman +Ananya Maiti +Anatoly Techtonik +Anders Kaseorg +Andreas Lutro +Andrei Geacar +Andrew Gaul +Andrey Bulgakov +Andrés Delfino <34587441+andresdelfino@users.noreply.github.com> +Andrés Delfino +Andy Freeland +Andy Freeland +Andy Kluger +Ani Hayrapetyan +Aniruddha Basak +Anish Tambe +Anrs Hu +Anthony Sottile +Antoine Musso +Anton Ovchinnikov +Anton Patrushev +Antonio Alvarado Hernandez +Antony Lee +Antti Kaihola +Anubhav Patel +Anuj Godase +AQNOUCH Mohammed +AraHaan +Arindam Choudhury +Armin Ronacher +Artem +Ashley Manton +Ashwin Ramaswami +atse +Atsushi Odagiri +Avner Cohen +Baptiste Mispelon +Barney Gale +barneygale +Bartek Ogryczak +Bastian Venthur +Ben Darnell +Ben Hoyt +Ben Rosser +Bence Nagy +Benjamin Peterson +Benjamin VanEvery +Benoit Pierre +Berker Peksag +Bernardo B. Marques +Bernhard M. Wiedemann +Bertil Hatt +Bogdan Opanchuk +BorisZZZ +Brad Erickson +Bradley Ayers +Brandon L. Reiss +Brandt Bucher +Brett Randall +Brian Cristante <33549821+brcrista@users.noreply.github.com> +Brian Cristante +Brian Rosner +BrownTruck +Bruno Oliveira +Bruno Renié +Bstrdsmkr +Buck Golemon +burrows +Bussonnier Matthias +c22 +Caleb Martinez +Calvin Smith +Carl Meyer +Carlos Liam +Carol Willing +Carter Thayer +Cass +Chandrasekhar Atina +Chih-Hsuan Yen +Chih-Hsuan Yen +Chris Brinker +Chris Hunt +Chris Jerdonek +Chris McDonough +Chris Wolfe +Christian Heimes +Christian Oudard +Christopher Hunt +Christopher Snyder +Clark Boylan +Clay McClure +Cody +Cody Soyland +Colin Watson +Connor Osborn +Cooper Lees +Cooper Ry Lees +Cory Benfield +Cory Wright +Craig Kerstiens +Cristian Sorinel +Curtis Doty +cytolentino +Damian Quiroga +Dan Black +Dan Savilonis +Dan Sully +daniel +Daniel Collins +Daniel Hahler +Daniel Holth +Daniel Jost +Daniel Shaulov +Daniele Esposti +Daniele Procida +Danny Hermes +Dav Clark +Dave Abrahams +Dave Jones +David Aguilar +David Black +David Bordeynik +David Bordeynik +David Caro +David Evans +David Linke +David Pursehouse +David Tucker +David Wales +Davidovich +derwolfe +Desetude +Diego Caraballo +DiegoCaraballo +Dmitry Gladkov +Domen Kožar +Donald Stufft +Dongweiming +Douglas Thor +DrFeathers +Dustin Ingram +Dwayne Bailey +Ed Morley <501702+edmorley@users.noreply.github.com> +Ed Morley +Eitan Adler +ekristina +elainechan +Eli Schwartz +Eli Schwartz +Emil Burzo +Emil Styrke +Endoh Takanao +enoch +Erdinc Mutlu +Eric Gillingham +Eric Hanchrow +Eric Hopper +Erik M. Bray +Erik Rose +Ernest W Durbin III +Ernest W. 
Durbin III +Erwin Janssen +Eugene Vereshchagin +everdimension +Felix Yan +fiber-space +Filip Kokosiński +Florian Briand +Florian Rathgeber +Francesco +Francesco Montesano +Frost Ming +Gabriel Curio +Gabriel de Perthuis +Garry Polley +gdanielson +Geoffrey Lehée +Geoffrey Sneddon +George Song +Georgi Valkov +Giftlin Rajaiah +gizmoguy1 +gkdoc <40815324+gkdoc@users.noreply.github.com> +Gopinath M <31352222+mgopi1990@users.noreply.github.com> +GOTO Hayato <3532528+gh640@users.noreply.github.com> +gpiks +Guilherme Espada +Guy Rozendorn +gzpan123 +Hanjun Kim +Hari Charan +Harsh Vardhan +Herbert Pfennig +Hsiaoming Yang +Hugo +Hugo Lopes Tavares +Hugo van Kemenade +hugovk +Hynek Schlawack +Ian Bicking +Ian Cordasco +Ian Lee +Ian Stapleton Cordasco +Ian Wienand +Ian Wienand +Igor Kuzmitshov +Igor Sobreira +Ilya Baryshev +INADA Naoki +Ionel Cristian Mărieș +Ionel Maries Cristian +Ivan Pozdeev +Jacob Kim +jakirkham +Jakub Stasiak +Jakub Vysoky +Jakub Wilk +James Cleveland +James Cleveland +James Firth +James Polley +Jan Pokorný +Jannis Leidel +jarondl +Jason R. Coombs +Jay Graves +Jean-Christophe Fillion-Robin +Jeff Barber +Jeff Dairiki +Jelmer Vernooij +jenix21 +Jeremy Stanley +Jeremy Zafran +Jiashuo Li +Jim Garrison +Jivan Amara +John Paton +John-Scott Atlakson +johnthagen +johnthagen +Jon Banafato +Jon Dufresne +Jon Parise +Jonas Nockert +Jonathan Herbert +Joost Molenaar +Jorge Niedbalski +Joseph Long +Josh Bronson +Josh Hansen +Josh Schneier +Juanjo Bazán +Julian Berman +Julian Gethmann +Julien Demoor +jwg4 +Jyrki Pulliainen +Kai Chen +Kamal Bin Mustafa +kaustav haldar +keanemind +Keith Maxwell +Kelsey Hightower +Kenneth Belitzky +Kenneth Reitz +Kenneth Reitz +Kevin Burke +Kevin Carter +Kevin Frommelt +Kevin R Patterson +Kexuan Sun +Kit Randel +kpinc +Krishna Oza +Kumar McMillan +Kyle Persohn +lakshmanaram +Laszlo Kiss-Kollar +Laurent Bristiel +Laurie Opperman +Leon Sasson +Lev Givon +Lincoln de Sousa +Lipis +Loren Carvalho +Lucas Cimon +Ludovic Gasc +Luke Macken +Luo Jiebin +luojiebin +luz.paz +László Kiss Kollár +László Kiss Kollár +Marc Abramowitz +Marc Tamlyn +Marcus Smith +Mariatta +Mark Kohler +Mark Williams +Mark Williams +Markus Hametner +Masaki +Masklinn +Matej Stuchlik +Mathew Jennings +Mathieu Bridon +Matt Good +Matt Maker +Matt Robenolt +matthew +Matthew Einhorn +Matthew Gilliard +Matthew Iversen +Matthew Trumbell +Matthew Willson +Matthias Bussonnier +mattip +Maxim Kurnikov +Maxime Rouyrre +mayeut +mbaluna <44498973+mbaluna@users.noreply.github.com> +mdebi <17590103+mdebi@users.noreply.github.com> +memoselyk +Michael +Michael Aquilina +Michael E. Karpeles +Michael Klich +Michael Williamson +michaelpacer +Mickaël Schoentgen +Miguel Araujo Perez +Mihir Singh +Mike +Mike Hendricks +Min RK +MinRK +Miro Hrončok +Monica Baluna +montefra +Monty Taylor +Nate Coraor +Nathaniel J. 
Smith +Nehal J Wani +Neil Botelho +Nick Coghlan +Nick Stenning +Nick Timkovich +Nicolas Bock +Nikhil Benesch +Nitesh Sharma +Nowell Strite +NtaleGrey +nvdv +Ofekmeister +ofrinevo +Oliver Jeeves +Oliver Tonnhofer +Olivier Girardot +Olivier Grisel +Ollie Rutherfurd +OMOTO Kenji +Omry Yadan +Oren Held +Oscar Benjamin +Oz N Tiram +Pachwenko <32424503+Pachwenko@users.noreply.github.com> +Patrick Dubroy +Patrick Jenkins +Patrick Lawson +patricktokeeffe +Patrik Kopkan +Paul Kehrer +Paul Moore +Paul Nasrat +Paul Oswald +Paul van der Linden +Paulus Schoutsen +Pavithra Eswaramoorthy <33131404+QueenCoffee@users.noreply.github.com> +Pawel Jasinski +Pekka Klärck +Peter Lisák +Peter Waller +petr-tik +Phaneendra Chiruvella +Phil Freo +Phil Pennock +Phil Whelan +Philip Jägenstedt +Philip Molloy +Philippe Ombredanne +Pi Delport +Pierre-Yves Rofes +pip +Prabakaran Kumaresshan +Prabhjyotsing Surjit Singh Sodhi +Prabhu Marappan +Pradyun Gedam +Pratik Mallya +Preet Thakkar +Preston Holmes +Przemek Wrzos +Pulkit Goyal <7895pulkit@gmail.com> +Qiangning Hong +Quentin Pradet +R. David Murray +Rafael Caricio +Ralf Schmitt +Razzi Abuissa +rdb +Remi Rampin +Remi Rampin +Rene Dudfield +Riccardo Magliocchetti +Richard Jones +RobberPhex +Robert Collins +Robert McGibbon +Robert T. McGibbon +robin elisha robinson +Roey Berman +Rohan Jain +Rohan Jain +Rohan Jain +Roman Bogorodskiy +Romuald Brunet +Ronny Pfannschmidt +Rory McCann +Ross Brattain +Roy Wellington Ⅳ +Roy Wellington Ⅳ +Ryan Wooden +ryneeverett +Sachi King +Salvatore Rinchiera +Savio Jomton +schlamar +Scott Kitterman +Sean +seanj +Sebastian Jordan +Sebastian Schaetz +Segev Finer +SeongSoo Cho +Sergey Vasilyev +Seth Woodworth +Shlomi Fish +Shovan Maity +Simeon Visser +Simon Cross +Simon Pichugin +sinoroc +Sorin Sbarnea +Stavros Korokithakis +Stefan Scherfke +Stephan Erb +stepshal +Steve (Gadget) Barnes +Steve Barnes +Steve Dower +Steve Kowalik +Steven Myint +stonebig +Stéphane Bidoul (ACSONE) +Stéphane Bidoul +Stéphane Klein +Sumana Harihareswara +Sviatoslav Sydorenko +Sviatoslav Sydorenko +Swat009 +Takayuki SHIMIZUKAWA +tbeswick +Thijs Triemstra +Thomas Fenzl +Thomas Grainger +Thomas Guettler +Thomas Johansson +Thomas Kluyver +Thomas Smith +Tim D. Smith +Tim Gates +Tim Harder +Tim Heap +tim smith +tinruufu +Tom Forbes +Tom Freudenheim +Tom V +Tomas Orsava +Tomer Chachamu +Tony Beswick +Tony Zhaocheng Tan +TonyBeswick +toonarmycaptain +Toshio Kuratomi +Travis Swicegood +Tzu-ping Chung +Valentin Haenel +Victor Stinner +victorvpaulo +Viktor Szépe +Ville Skyttä +Vinay Sajip +Vincent Philippon +Vinicyus Macedo <7549205+vinicyusmacedo@users.noreply.github.com> +Vitaly Babiy +Vladimir Rutsky +W. 
Trevor King +Wil Tan +Wilfred Hughes +William ML Leslie +William T Olson +Wilson Mo +wim glenn +Wolfgang Maier +Xavier Fernandez +Xavier Fernandez +xoviat +xtreak +YAMAMOTO Takashi +Yen Chi Hsuan +Yeray Diaz Diaz +Yoval P +Yu Jian +Yuan Jing Vincent Yan +Zearin +Zearin +Zhiping Deng +Zvezdan Petkovic +Łukasz Langa +Семён Марьясин diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/CacheControl-0.12.6.dist-info/INSTALLER b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/CacheControl-0.12.6.dist-info/INSTALLER new file mode 100644 index 0000000000000000000000000000000000000000..a1b589e38a32041e49332e5e81c2d363dc418d68 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/CacheControl-0.12.6.dist-info/INSTALLER @@ -0,0 +1 @@ +pip diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/CacheControl-0.12.6.dist-info/LICENSE.txt b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/CacheControl-0.12.6.dist-info/LICENSE.txt new file mode 100644 index 0000000000000000000000000000000000000000..737fec5c5352af3d9a6a47a0670da4bdb52c5725 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/CacheControl-0.12.6.dist-info/LICENSE.txt @@ -0,0 +1,20 @@ +Copyright (c) 2008-2019 The pip developers (see AUTHORS.txt file) + +Permission is hereby granted, free of charge, to any person obtaining +a copy of this software and associated documentation files (the +"Software"), to deal in the Software without restriction, including +without limitation the rights to use, copy, modify, merge, publish, +distribute, sublicense, and/or sell copies of the Software, and to +permit persons to whom the Software is furnished to do so, subject to +the following conditions: + +The above copyright notice and this permission notice shall be +included in all copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE +LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION +OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION +WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/CacheControl-0.12.6.dist-info/METADATA b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/CacheControl-0.12.6.dist-info/METADATA new file mode 100644 index 0000000000000000000000000000000000000000..3119fa91afb9c067f0d78f959ca038b7c0f86a2c --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/CacheControl-0.12.6.dist-info/METADATA @@ -0,0 +1,71 @@ +Metadata-Version: 2.1 +Name: CacheControl +Version: 0.12.6 +Summary: httplib2 caching for requests +Home-page: https://github.com/ionrock/cachecontrol +Author: Eric Larson +Author-email: eric@ionrock.org +License: UNKNOWN +Keywords: requests http caching web +Platform: UNKNOWN +Classifier: Development Status :: 4 - Beta +Classifier: Environment :: Web Environment +Classifier: License :: OSI Approved :: Apache Software License +Classifier: Operating System :: OS Independent +Classifier: Programming Language :: Python :: 2 +Classifier: Programming Language :: Python :: 2.7 +Classifier: Programming Language :: Python :: 3 +Classifier: Programming Language :: Python :: 3.4 +Classifier: Programming Language :: Python :: 3.5 +Classifier: Programming Language :: Python :: 3.6 +Classifier: Topic :: Internet :: WWW/HTTP +Requires-Python: >=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.* +Provides-Extra: filecache +Requires-Dist: lockfile (>=0.9) ; extra == 'filecache' +Provides-Extra: redis +Requires-Dist: redis (>=2.10.5) ; extra == 'redis' + +============== + CacheControl +============== + +.. image:: https://img.shields.io/pypi/v/cachecontrol.svg + :target: https://pypi.python.org/pypi/cachecontrol + :alt: Latest Version + +.. image:: https://travis-ci.org/ionrock/cachecontrol.png?branch=master + :target: https://travis-ci.org/ionrock/cachecontrol + +CacheControl is a port of the caching algorithms in httplib2_ for use with +requests_ session object. + +It was written because httplib2's better support for caching is often +mitigated by its lack of thread safety. The same is true of requests in +terms of caching. + + +Quickstart +========== + +.. code-block:: python + + import requests + + from cachecontrol import CacheControl + + + sess = requests.session() + cached_sess = CacheControl(sess) + + response = cached_sess.get('http://google.com') + +If the URL contains any caching based headers, it will cache the +result in a simple dictionary. + +For more info, check out the docs_ + +.. _docs: http://cachecontrol.readthedocs.org/en/latest/ +.. _httplib2: https://github.com/jcgregorio/httplib2 +.. 
_requests: http://docs.python-requests.org/ + + diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/CacheControl-0.12.6.dist-info/RECORD b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/CacheControl-0.12.6.dist-info/RECORD new file mode 100644 index 0000000000000000000000000000000000000000..80e6a585e4689710ba51c8c404aa2610601cca53 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/CacheControl-0.12.6.dist-info/RECORD @@ -0,0 +1,37 @@ +cachecontrol/__init__.py,sha256=pJtAaUxOsMPnytI1A3juAJkXYDr8krdSnsg4Yg3OBEg,302 +cachecontrol/_cmd.py,sha256=88j4P3JlJGqg6xAXR4btN9fYruXUH4CE-M93Sie5IB8,1242 +cachecontrol/adapter.py,sha256=ctnbSXDOj0V0NaxJP2jFauOYRDHaNYMP9QCE8kB4kfk,4870 +cachecontrol/cache.py,sha256=1fc4wJP8HYt1ycnJXeEw5pCpeBL2Cqxx6g9Fb0AYDWQ,805 +cachecontrol/compat.py,sha256=Fn_aYzqNbN0bK9gUn8SQUzMLxQ_ruGnsEMvryYDFh3o,647 +cachecontrol/controller.py,sha256=fpLmIvxce2mKVFmtDFiiyydqU_pPbCucYLC9qP-LqvY,14137 +cachecontrol/filewrapper.py,sha256=vACKO8Llzu_ZWyjV1Fxn1MA4TGU60N5N3GSrAFdAY2Q,2533 +cachecontrol/heuristics.py,sha256=BFGHJ3yQcxvZizfo90LLZ04T_Z5XSCXvFotrp7Us0sc,4070 +cachecontrol/serialize.py,sha256=Jms7OS4GB2JFUzuMPlmQtuCDzcjjE-2ijrHpUXC2BV0,7062 +cachecontrol/wrapper.py,sha256=5LX0uJwkNQUtYSEw3aGmGu9WY8wGipd81mJ8lG0d0M4,690 +cachecontrol/caches/__init__.py,sha256=-gHNKYvaeD0kOk5M74eOrsSgIKUtC6i6GfbmugGweEo,86 +cachecontrol/caches/file_cache.py,sha256=nYVKsJtXh6gJXvdn1iWyrhxvkwpQrK-eKoMRzuiwkKk,4153 +cachecontrol/caches/redis_cache.py,sha256=yZP1PoUgAvxEZZrCVwImZ-5pFKU41v5HYJf1rfbXYmM,844 +CacheControl-0.12.6.dist-info/AUTHORS.txt,sha256=RtqU9KfonVGhI48DAA4-yTOBUhBtQTjFhaDzHoyh7uU,21518 +CacheControl-0.12.6.dist-info/LICENSE.txt,sha256=W6Ifuwlk-TatfRU2LR7W1JMcyMj5_y1NkRkOEJvnRDE,1090 +CacheControl-0.12.6.dist-info/METADATA,sha256=DzDqga-Bw7B7ylVrJanU_-imHV1od8Ok64qr5ugT-5A,2107 +CacheControl-0.12.6.dist-info/WHEEL,sha256=kGT74LWyRUZrL4VgLh6_g12IeVl_9u9ZVhadrgXZUEY,110 +CacheControl-0.12.6.dist-info/top_level.txt,sha256=vGYWzpbe3h6gkakV4f7iCK2x3KyK3oMkV5pe5v25-d4,13 +CacheControl-0.12.6.dist-info/RECORD,, +cachecontrol/compat.cpython-38.pyc,, +CacheControl-0.12.6.dist-info/__pycache__,, +CacheControl-0.12.6.virtualenv,, +cachecontrol/caches/file_cache.cpython-38.pyc,, +cachecontrol/caches/__init__.cpython-38.pyc,, +cachecontrol/__init__.cpython-38.pyc,, +cachecontrol/controller.cpython-38.pyc,, +cachecontrol/serialize.cpython-38.pyc,, +cachecontrol/caches/__pycache__,, +cachecontrol/caches/redis_cache.cpython-38.pyc,, +CacheControl-0.12.6.dist-info/INSTALLER,, +cachecontrol/wrapper.cpython-38.pyc,, +cachecontrol/adapter.cpython-38.pyc,, +cachecontrol/heuristics.cpython-38.pyc,, +cachecontrol/filewrapper.cpython-38.pyc,, +cachecontrol/__pycache__,, +cachecontrol/cache.cpython-38.pyc,, +cachecontrol/_cmd.cpython-38.pyc,, \ No newline at end of file diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/CacheControl-0.12.6.dist-info/WHEEL b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/CacheControl-0.12.6.dist-info/WHEEL new file mode 100644 index 0000000000000000000000000000000000000000..ef99c6cf3283b50a273ac4c6d009a0aa85597070 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/CacheControl-0.12.6.dist-info/WHEEL @@ -0,0 +1,6 @@ +Wheel-Version: 1.0 +Generator: bdist_wheel (0.34.2) +Root-Is-Purelib: true +Tag: py2-none-any +Tag: py3-none-any 
+ diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/CacheControl-0.12.6.dist-info/top_level.txt b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/CacheControl-0.12.6.dist-info/top_level.txt new file mode 100644 index 0000000000000000000000000000000000000000..af37ac627514445bfc761e8867d8c178ea766e4b --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/CacheControl-0.12.6.dist-info/top_level.txt @@ -0,0 +1 @@ +cachecontrol diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/CacheControl-0.12.6.virtualenv b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/CacheControl-0.12.6.virtualenv new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/Jinja2-2.11.2.dist-info/INSTALLER b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/Jinja2-2.11.2.dist-info/INSTALLER new file mode 100644 index 0000000000000000000000000000000000000000..a1b589e38a32041e49332e5e81c2d363dc418d68 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/Jinja2-2.11.2.dist-info/INSTALLER @@ -0,0 +1 @@ +pip diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/Jinja2-2.11.2.dist-info/LICENSE.rst b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/Jinja2-2.11.2.dist-info/LICENSE.rst new file mode 100644 index 0000000000000000000000000000000000000000..c37cae49ec77ad6ebb25568c1605f1fee5313cfb --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/Jinja2-2.11.2.dist-info/LICENSE.rst @@ -0,0 +1,28 @@ +Copyright 2007 Pallets + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are +met: + +1. Redistributions of source code must retain the above copyright + notice, this list of conditions and the following disclaimer. + +2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + +3. Neither the name of the copyright holder nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A +PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED +TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR +PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF +LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING +NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS +SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/Jinja2-2.11.2.dist-info/METADATA b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/Jinja2-2.11.2.dist-info/METADATA new file mode 100644 index 0000000000000000000000000000000000000000..55c0f82692c5a56768a741095b7d4fb3872b4df5 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/Jinja2-2.11.2.dist-info/METADATA @@ -0,0 +1,106 @@ +Metadata-Version: 2.1 +Name: Jinja2 +Version: 2.11.2 +Summary: A very fast and expressive template engine. +Home-page: https://palletsprojects.com/p/jinja/ +Author: Armin Ronacher +Author-email: armin.ronacher@active-4.com +Maintainer: Pallets +Maintainer-email: contact@palletsprojects.com +License: BSD-3-Clause +Project-URL: Documentation, https://jinja.palletsprojects.com/ +Project-URL: Code, https://github.com/pallets/jinja +Project-URL: Issue tracker, https://github.com/pallets/jinja/issues +Platform: UNKNOWN +Classifier: Development Status :: 5 - Production/Stable +Classifier: Environment :: Web Environment +Classifier: Intended Audience :: Developers +Classifier: License :: OSI Approved :: BSD License +Classifier: Operating System :: OS Independent +Classifier: Programming Language :: Python +Classifier: Programming Language :: Python :: 2 +Classifier: Programming Language :: Python :: 2.7 +Classifier: Programming Language :: Python :: 3 +Classifier: Programming Language :: Python :: 3.5 +Classifier: Programming Language :: Python :: 3.6 +Classifier: Programming Language :: Python :: 3.7 +Classifier: Programming Language :: Python :: 3.8 +Classifier: Programming Language :: Python :: Implementation :: CPython +Classifier: Programming Language :: Python :: Implementation :: PyPy +Classifier: Topic :: Internet :: WWW/HTTP :: Dynamic Content +Classifier: Topic :: Software Development :: Libraries :: Python Modules +Classifier: Topic :: Text Processing :: Markup :: HTML +Requires-Python: >=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.* +Description-Content-Type: text/x-rst +Requires-Dist: MarkupSafe (>=0.23) +Provides-Extra: i18n +Requires-Dist: Babel (>=0.8) ; extra == 'i18n' + +Jinja +===== + +Jinja is a fast, expressive, extensible templating engine. Special +placeholders in the template allow writing code similar to Python +syntax. Then the template is passed data to render the final document. + +It includes: + +- Template inheritance and inclusion. +- Define and import macros within templates. +- HTML templates can use autoescaping to prevent XSS from untrusted + user input. +- A sandboxed environment can safely render untrusted templates. +- AsyncIO support for generating templates and calling async + functions. +- I18N support with Babel. +- Templates are compiled to optimized Python code just-in-time and + cached, or can be compiled ahead-of-time. +- Exceptions point to the correct line in templates to make debugging + easier. +- Extensible filters, tests, functions, and even syntax. + +Jinja's philosophy is that while application logic belongs in Python if +possible, it shouldn't make the template designer's job difficult by +restricting functionality too much. + + +Installing +---------- + +Install and update using `pip`_: + +.. code-block:: text + + $ pip install -U Jinja2 + +.. _pip: https://pip.pypa.io/en/stable/quickstart/ + + +In A Nutshell +------------- + +.. 
code-block:: jinja + + {% extends "base.html" %} + {% block title %}Members{% endblock %} + {% block content %} + <ul> + {% for user in users %} + <li><a href="{{ user.url }}">{{ user.username }}</a></li> + {% endfor %} + </ul> + {% endblock %} + + +Links +----- + +- Website: https://palletsprojects.com/p/jinja/ +- Documentation: https://jinja.palletsprojects.com/ +- Releases: https://pypi.org/project/Jinja2/ +- Code: https://github.com/pallets/jinja +- Issue tracker: https://github.com/pallets/jinja/issues +- Test status: https://dev.azure.com/pallets/jinja/_build +- Official chat: https://discord.gg/t6rrQZH + + diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/Jinja2-2.11.2.dist-info/RECORD b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/Jinja2-2.11.2.dist-info/RECORD new file mode 100644 index 0000000000000000000000000000000000000000..413fef4bfdf0330f017450293400e102cab65113 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/Jinja2-2.11.2.dist-info/RECORD @@ -0,0 +1,61 @@ +Jinja2-2.11.2.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4 +Jinja2-2.11.2.dist-info/LICENSE.rst,sha256=O0nc7kEF6ze6wQ-vG-JgQI_oXSUrjp3y4JefweCUQ3s,1475 +Jinja2-2.11.2.dist-info/METADATA,sha256=5ZHRZoIRAMHsJPnqhlJ622_dRPsYePYJ-9EH4-Ry7yI,3535 +Jinja2-2.11.2.dist-info/RECORD,, +Jinja2-2.11.2.dist-info/WHEEL,sha256=kGT74LWyRUZrL4VgLh6_g12IeVl_9u9ZVhadrgXZUEY,110 +Jinja2-2.11.2.dist-info/entry_points.txt,sha256=Qy_DkVo6Xj_zzOtmErrATe8lHZhOqdjpt3e4JJAGyi8,61 +Jinja2-2.11.2.dist-info/top_level.txt,sha256=PkeVWtLb3-CqjWi1fO29OCbj55EhX_chhKrCdrVe_zs,7 +jinja2/__init__.py,sha256=0QCM_jKKDM10yzSdHRVV4mQbCbDqf0GN0GirAqibn9Y,1549 +jinja2/__pycache__/__init__.cpython-38.pyc,, +jinja2/__pycache__/_compat.cpython-38.pyc,, +jinja2/__pycache__/_identifier.cpython-38.pyc,, +jinja2/__pycache__/asyncfilters.cpython-38.pyc,, +jinja2/__pycache__/asyncsupport.cpython-38.pyc,, +jinja2/__pycache__/bccache.cpython-38.pyc,, +jinja2/__pycache__/compiler.cpython-38.pyc,, +jinja2/__pycache__/constants.cpython-38.pyc,, +jinja2/__pycache__/debug.cpython-38.pyc,, +jinja2/__pycache__/defaults.cpython-38.pyc,, +jinja2/__pycache__/environment.cpython-38.pyc,, +jinja2/__pycache__/exceptions.cpython-38.pyc,, +jinja2/__pycache__/ext.cpython-38.pyc,, +jinja2/__pycache__/filters.cpython-38.pyc,, +jinja2/__pycache__/idtracking.cpython-38.pyc,, +jinja2/__pycache__/lexer.cpython-38.pyc,, +jinja2/__pycache__/loaders.cpython-38.pyc,, +jinja2/__pycache__/meta.cpython-38.pyc,, +jinja2/__pycache__/nativetypes.cpython-38.pyc,, +jinja2/__pycache__/nodes.cpython-38.pyc,, +jinja2/__pycache__/optimizer.cpython-38.pyc,, +jinja2/__pycache__/parser.cpython-38.pyc,, +jinja2/__pycache__/runtime.cpython-38.pyc,, +jinja2/__pycache__/sandbox.cpython-38.pyc,, +jinja2/__pycache__/tests.cpython-38.pyc,, +jinja2/__pycache__/utils.cpython-38.pyc,, +jinja2/__pycache__/visitor.cpython-38.pyc,, +jinja2/_compat.py,sha256=B6Se8HjnXVpzz9-vfHejn-DV2NjaVK-Iewupc5kKlu8,3191 +jinja2/_identifier.py,sha256=EdgGJKi7O1yvr4yFlvqPNEqV6M1qHyQr8Gt8GmVTKVM,1775 +jinja2/asyncfilters.py,sha256=XJtYXTxFvcJ5xwk6SaDL4S0oNnT0wPYvXBCSzc482fI,4250 +jinja2/asyncsupport.py,sha256=ZBFsDLuq3Gtji3Ia87lcyuDbqaHZJRdtShZcqwpFnSQ,7209 +jinja2/bccache.py,sha256=3Pmp4jo65M9FQuIxdxoDBbEDFwe4acDMQf77nEJfrHA,12139 +jinja2/compiler.py,sha256=Ta9W1Lit542wItAHXlDcg0sEOsFDMirCdlFPHAurg4o,66284 +jinja2/constants.py,sha256=RR1sTzNzUmKco6aZicw4JpQpJGCuPuqm1h1YmCNUEFY,1458 +jinja2/debug.py,sha256=neR7GIGGjZH3_ILJGVUYy3eLQCCaWJMXOb7o0kGInWc,8529
+jinja2/defaults.py,sha256=85B6YUUCyWPSdrSeVhcqFVuu_bHUAQXeey--FIwSeVQ,1126 +jinja2/environment.py,sha256=XDSLKc4SqNLMOwTSq3TbWEyA5WyXfuLuVD0wAVjEFwM,50629 +jinja2/exceptions.py,sha256=VjNLawcmf2ODffqVMCQK1cRmvFaUfQWF4u8ouP3QPcE,5425 +jinja2/ext.py,sha256=AtwL5O5enT_L3HR9-oBvhGyUTdGoyaqG_ICtnR_EVd4,26441 +jinja2/filters.py,sha256=_RpPgAlgIj7ExvyDzcHAC3B36cocfWK-1TEketbNeM0,41415 +jinja2/idtracking.py,sha256=J3O4VHsrbf3wzwiBc7Cro26kHb6_5kbULeIOzocchIU,9211 +jinja2/lexer.py,sha256=nUFLRKhhKmmEWkLI65nQePgcQs7qsRdjVYZETMt_v0g,30331 +jinja2/loaders.py,sha256=C-fST_dmFjgWkp0ZuCkrgICAoOsoSIF28wfAFink0oU,17666 +jinja2/meta.py,sha256=QjyYhfNRD3QCXjBJpiPl9KgkEkGXJbAkCUq4-Ur10EQ,4131 +jinja2/nativetypes.py,sha256=Ul__gtVw4xH-0qvUvnCNHedQeNDwmEuyLJztzzSPeRg,2753 +jinja2/nodes.py,sha256=Mk1oJPVgIjnQw9WOqILvcu3rLepcFZ0ahxQm2mbwDwc,31095 +jinja2/optimizer.py,sha256=gQLlMYzvQhluhzmAIFA1tXS0cwgWYOjprN-gTRcHVsc,1457 +jinja2/parser.py,sha256=fcfdqePNTNyvosIvczbytVA332qpsURvYnCGcjDHSkA,35660 +jinja2/runtime.py,sha256=0y-BRyIEZ9ltByL2Id6GpHe1oDRQAwNeQvI0SKobNMw,30618 +jinja2/sandbox.py,sha256=knayyUvXsZ-F0mk15mO2-ehK9gsw04UhB8td-iUOtLc,17127 +jinja2/tests.py,sha256=iO_Y-9Vo60zrVe1lMpSl5sKHqAxe2leZHC08OoZ8K24,4799 +jinja2/utils.py,sha256=OoVMlQe9S2-lWT6jJbTu9tDuDvGNyWUhHDcE51i5_Do,22522 +jinja2/visitor.py,sha256=DUHupl0a4PGp7nxRtZFttUzAi1ccxzqc2hzetPYUz8U,3240 diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/Jinja2-2.11.2.dist-info/WHEEL b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/Jinja2-2.11.2.dist-info/WHEEL new file mode 100644 index 0000000000000000000000000000000000000000..ef99c6cf3283b50a273ac4c6d009a0aa85597070 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/Jinja2-2.11.2.dist-info/WHEEL @@ -0,0 +1,6 @@ +Wheel-Version: 1.0 +Generator: bdist_wheel (0.34.2) +Root-Is-Purelib: true +Tag: py2-none-any +Tag: py3-none-any + diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/Jinja2-2.11.2.dist-info/entry_points.txt b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/Jinja2-2.11.2.dist-info/entry_points.txt new file mode 100644 index 0000000000000000000000000000000000000000..3619483fd4fca7407f3bd462aefcd70d1de69737 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/Jinja2-2.11.2.dist-info/entry_points.txt @@ -0,0 +1,3 @@ +[babel.extractors] +jinja2 = jinja2.ext:babel_extract [i18n] + diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/Jinja2-2.11.2.dist-info/top_level.txt b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/Jinja2-2.11.2.dist-info/top_level.txt new file mode 100644 index 0000000000000000000000000000000000000000..7f7afbf3bf54b346092be6a72070fcbd305ead1e --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/Jinja2-2.11.2.dist-info/top_level.txt @@ -0,0 +1 @@ +jinja2 diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/MarkupSafe-1.1.1.dist-info/INSTALLER b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/MarkupSafe-1.1.1.dist-info/INSTALLER new file mode 100644 index 0000000000000000000000000000000000000000..a1b589e38a32041e49332e5e81c2d363dc418d68 --- /dev/null +++ 
b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/MarkupSafe-1.1.1.dist-info/INSTALLER @@ -0,0 +1 @@ +pip diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/MarkupSafe-1.1.1.dist-info/LICENSE.rst b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/MarkupSafe-1.1.1.dist-info/LICENSE.rst new file mode 100644 index 0000000000000000000000000000000000000000..9d227a0cc43c3268d15722b763bd94ad298645a1 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/MarkupSafe-1.1.1.dist-info/LICENSE.rst @@ -0,0 +1,28 @@ +Copyright 2010 Pallets + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are +met: + +1. Redistributions of source code must retain the above copyright + notice, this list of conditions and the following disclaimer. + +2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + +3. Neither the name of the copyright holder nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A +PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED +TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR +PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF +LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING +NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS +SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/MarkupSafe-1.1.1.dist-info/METADATA b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/MarkupSafe-1.1.1.dist-info/METADATA new file mode 100644 index 0000000000000000000000000000000000000000..e4a7b90f51d7d6457663b1935f2665dd44f8e352 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/MarkupSafe-1.1.1.dist-info/METADATA @@ -0,0 +1,94 @@ +Metadata-Version: 2.1 +Name: MarkupSafe +Version: 1.1.1 +Summary: Safely add untrusted strings to HTML/XML markup. 
+Home-page: https://palletsprojects.com/p/markupsafe/ +Author: Armin Ronacher +Author-email: armin.ronacher@active-4.com +Maintainer: The Pallets Team +Maintainer-email: contact@palletsprojects.com +License: BSD-3-Clause +Project-URL: Documentation, https://markupsafe.palletsprojects.com/ +Project-URL: Code, https://github.com/pallets/markupsafe +Project-URL: Issue tracker, https://github.com/pallets/markupsafe/issues +Platform: UNKNOWN +Classifier: Development Status :: 5 - Production/Stable +Classifier: Environment :: Web Environment +Classifier: Intended Audience :: Developers +Classifier: License :: OSI Approved :: BSD License +Classifier: Operating System :: OS Independent +Classifier: Programming Language :: Python +Classifier: Programming Language :: Python :: 2 +Classifier: Programming Language :: Python :: 3 +Classifier: Topic :: Internet :: WWW/HTTP :: Dynamic Content +Classifier: Topic :: Software Development :: Libraries :: Python Modules +Classifier: Topic :: Text Processing :: Markup :: HTML +Requires-Python: >=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.* +Description-Content-Type: text/x-rst + +MarkupSafe +========== + +MarkupSafe implements a text object that escapes characters so it is +safe to use in HTML and XML. Characters that have special meanings are +replaced so that they display as the actual characters. This mitigates +injection attacks, meaning untrusted user input can safely be displayed +on a page. + + +Installing +---------- + +Install and update using `pip`_: + +.. code-block:: text + + pip install -U MarkupSafe + +.. _pip: https://pip.pypa.io/en/stable/quickstart/ + + +Examples +-------- + +.. code-block:: pycon + + >>> from markupsafe import Markup, escape + >>> # escape replaces special characters and wraps in Markup + >>> escape('<script>alert(document.cookie);</script>') + Markup(u'&lt;script&gt;alert(document.cookie);&lt;/script&gt;') + >>> # wrap in Markup to mark text "safe" and prevent escaping + >>> Markup('<strong>Hello</strong>') + Markup('<strong>hello</strong>') + >>> escape(Markup('<strong>Hello</strong>')) + Markup('<strong>hello</strong>') + >>> # Markup is a text subclass (str on Python 3, unicode on Python 2) + >>> # methods and operators escape their arguments + >>> template = Markup("Hello <em>%s</em>") + >>> template % '"World"' + Markup('Hello <em>&#34;World&#34;</em>') + + +Donate +------ + +The Pallets organization develops and supports MarkupSafe and other +libraries that use it. In order to grow the community of contributors +and users, and allow the maintainers to devote more time to the +projects, `please donate today`_. + +..
_please donate today: https://palletsprojects.com/donate + + +Links +----- + +* Website: https://palletsprojects.com/p/markupsafe/ +* Documentation: https://markupsafe.palletsprojects.com/ +* Releases: https://pypi.org/project/MarkupSafe/ +* Code: https://github.com/pallets/markupsafe +* Issue tracker: https://github.com/pallets/markupsafe/issues +* Test status: https://dev.azure.com/pallets/markupsafe/_build +* Official chat: https://discord.gg/t6rrQZH + + diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/MarkupSafe-1.1.1.dist-info/RECORD b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/MarkupSafe-1.1.1.dist-info/RECORD new file mode 100644 index 0000000000000000000000000000000000000000..6d95824014c4aeaac8dab2207394c7d655aa7cbd --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/MarkupSafe-1.1.1.dist-info/RECORD @@ -0,0 +1,16 @@ +MarkupSafe-1.1.1.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4 +MarkupSafe-1.1.1.dist-info/LICENSE.rst,sha256=SJqOEQhQntmKN7uYPhHg9-HTHwvY-Zp5yESOf_N9B-o,1475 +MarkupSafe-1.1.1.dist-info/METADATA,sha256=-XXnVvCxQP2QbHutIQq_7Pk9OATy-x0NC7gN_3_SCRE,3167 +MarkupSafe-1.1.1.dist-info/RECORD,, +MarkupSafe-1.1.1.dist-info/WHEEL,sha256=RIeRBYNNiNK3sXfnenIjXDrR2Tzyz05xCMpKF2hJ1iA,111 +MarkupSafe-1.1.1.dist-info/top_level.txt,sha256=qy0Plje5IJuvsCBjejJyhDCjEAdcDLK_2agVcex8Z6U,11 +markupsafe/__init__.py,sha256=oTblO5f9KFM-pvnq9bB0HgElnqkJyqHnFN1Nx2NIvnY,10126 +markupsafe/__pycache__/__init__.cpython-38.pyc,, +markupsafe/__pycache__/_compat.cpython-38.pyc,, +markupsafe/__pycache__/_constants.cpython-38.pyc,, +markupsafe/__pycache__/_native.cpython-38.pyc,, +markupsafe/_compat.py,sha256=uEW1ybxEjfxIiuTbRRaJpHsPFf4yQUMMKaPgYEC5XbU,558 +markupsafe/_constants.py,sha256=zo2ajfScG-l1Sb_52EP3MlDCqO7Y1BVHUXXKRsVDRNk,4690 +markupsafe/_native.py,sha256=d-8S_zzYt2y512xYcuSxq0NeG2DUUvG80wVdTn-4KI8,1873 +markupsafe/_speedups.c,sha256=k0fzEIK3CP6MmMqeY0ob43TP90mVN0DTyn7BAl3RqSg,9884 +markupsafe/_speedups.cpython-38-x86_64-linux-gnu.so,sha256=t037yzhfsUaStpvo6eqDVYeK-dHfWmgB4cVL9nkY2-k,48016 diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/MarkupSafe-1.1.1.dist-info/WHEEL b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/MarkupSafe-1.1.1.dist-info/WHEEL new file mode 100644 index 0000000000000000000000000000000000000000..b1fcc33cbc2a978759a99244f6338ec10ee908a2 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/MarkupSafe-1.1.1.dist-info/WHEEL @@ -0,0 +1,5 @@ +Wheel-Version: 1.0 +Generator: bdist_wheel (0.36.2) +Root-Is-Purelib: false +Tag: cp38-cp38-manylinux2010_x86_64 + diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/MarkupSafe-1.1.1.dist-info/top_level.txt b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/MarkupSafe-1.1.1.dist-info/top_level.txt new file mode 100644 index 0000000000000000000000000000000000000000..75bf729258f9daef77370b6df1a57940f90fc23f --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/MarkupSafe-1.1.1.dist-info/top_level.txt @@ -0,0 +1 @@ +markupsafe diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/PyNaCl-1.4.0.dist-info/INSTALLER 
b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/PyNaCl-1.4.0.dist-info/INSTALLER new file mode 100644 index 0000000000000000000000000000000000000000..a1b589e38a32041e49332e5e81c2d363dc418d68 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/PyNaCl-1.4.0.dist-info/INSTALLER @@ -0,0 +1 @@ +pip diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/PyNaCl-1.4.0.dist-info/LICENSE b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/PyNaCl-1.4.0.dist-info/LICENSE new file mode 100644 index 0000000000000000000000000000000000000000..91e18a62b67551a4d427e2eaee0d33dcab94e141 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/PyNaCl-1.4.0.dist-info/LICENSE @@ -0,0 +1,174 @@ + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + +TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + +1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. 
For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + +2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + +3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + +4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. 
The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + +5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + +6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + +7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + +8. Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + +9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. 
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/PyNaCl-1.4.0.dist-info/METADATA b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/PyNaCl-1.4.0.dist-info/METADATA new file mode 100644 index 0000000000000000000000000000000000000000..0ef3ef0da38c2be04887e5293845fb3b4f0e0509 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/PyNaCl-1.4.0.dist-info/METADATA @@ -0,0 +1,227 @@ +Metadata-Version: 2.1 +Name: PyNaCl +Version: 1.4.0 +Summary: Python binding to the Networking and Cryptography (NaCl) library +Home-page: https://github.com/pyca/pynacl/ +Author: The PyNaCl developers +Author-email: cryptography-dev@python.org +License: Apache License 2.0 +Platform: UNKNOWN +Classifier: Programming Language :: Python :: Implementation :: CPython +Classifier: Programming Language :: Python :: Implementation :: PyPy +Classifier: Programming Language :: Python :: 2 +Classifier: Programming Language :: Python :: 2.7 +Classifier: Programming Language :: Python :: 3 +Classifier: Programming Language :: Python :: 3.5 +Classifier: Programming Language :: Python :: 3.6 +Classifier: Programming Language :: Python :: 3.7 +Classifier: Programming Language :: Python :: 3.8 +Requires-Python: >=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.* +Requires-Dist: six +Requires-Dist: cffi (>=1.4.1) +Provides-Extra: docs +Requires-Dist: sphinx (>=1.6.5) ; extra == 'docs' +Requires-Dist: sphinx-rtd-theme ; extra == 'docs' +Provides-Extra: tests +Requires-Dist: pytest (!=3.3.0,>=3.2.1) ; extra == 'tests' +Requires-Dist: hypothesis (>=3.27.0) ; extra == 'tests' + +=============================================== +PyNaCl: Python binding to the libsodium library +=============================================== + +.. image:: https://img.shields.io/pypi/v/pynacl.svg + :target: https://pypi.org/project/PyNaCl/ + :alt: Latest Version + +.. image:: https://travis-ci.org/pyca/pynacl.svg?branch=master + :target: https://travis-ci.org/pyca/pynacl + +.. image:: https://codecov.io/github/pyca/pynacl/coverage.svg?branch=master + :target: https://codecov.io/github/pyca/pynacl?branch=master + +.. image:: https://img.shields.io/pypi/pyversions/pynacl.svg + :target: https://pypi.org/project/PyNaCl/ + :alt: Compatible Python Versions + +PyNaCl is a Python binding to `libsodium`_, which is a fork of the +`Networking and Cryptography library`_. These libraries have a stated goal of +improving usability, security, and speed. PyNaCl supports Python 2.7 and 3.5+ as +well as PyPy 2.6+. + +.. _libsodium: https://github.com/jedisct1/libsodium +.. _Networking and Cryptography library: https://nacl.cr.yp.to/ + +Features +-------- + +* Digital signatures +* Secret-key encryption +* Public-key encryption +* Hashing and message authentication +* Password-based key derivation and password hashing + +`Changelog`_ +------------ + +.. _Changelog: https://pynacl.readthedocs.io/en/stable/changelog/ + +Installation +============ + +Binary wheel install +-------------------- + +PyNaCl ships as a binary wheel on macOS, Windows and Linux ``manylinux1`` [#many]_, +so all dependencies are included. Make sure you have an up-to-date pip +and run: + +.. code-block:: console + + $ pip install pynacl + +Linux source build +------------------ + +PyNaCl relies on `libsodium`_, a portable C library. A copy is bundled +with PyNaCl, so to install you can run: + +.. 
code-block:: console + + $ pip install pynacl + +If you'd prefer to use the version of ``libsodium`` provided by your +distribution, you can disable the bundled copy during install by running: + +.. code-block:: console + + $ SODIUM_INSTALL=system pip install pynacl + +.. warning:: Usage of the legacy ``easy_install`` command provided by setuptools + is generally discouraged, and is completely unsupported in PyNaCl's case. + +.. _libsodium: https://github.com/jedisct1/libsodium + +.. [#many] `manylinux1 wheels `_ + are built on a baseline Linux environment based on CentOS 5.11 + and should work on most x86 and x86_64 glibc-based Linux environments. + +Changelog +========= + +1.4.0 (2020-05-25) +------------------ + +* Update ``libsodium`` to 1.0.18. +* **BACKWARDS INCOMPATIBLE:** We no longer distribute 32-bit ``manylinux1`` + wheels. Continuing to produce them was a maintenance burden. +* Added support for Python 3.8, and removed support for Python 3.4. +* Add low-level bindings for extracting the seed and the public key + from a crypto_sign_ed25519 secret key. +* Add low-level bindings for deterministic random generation. +* Add ``wheel`` and ``setuptools`` setup_requirements in ``setup.py`` (#485) +* Fix checks on very slow builders (#481, #495) +* Add low-level bindings to ed25519 arithmetic functions +* Update low-level blake2b state implementation +* Fix wrong short-input behavior of SealedBox.decrypt() (#517) +* Raise CryptPrefixError exception instead of InvalidkeyError when trying + to check a password against a verifier stored in an unknown format (#519) +* Add support for minimal builds of libsodium. Trying to call functions + not available in a minimal build will raise an UnavailableError + exception. To compile a minimal build of the bundled libsodium, set + the SODIUM_INSTALL_MINIMAL environment variable to any non-empty + string (e.g. ``SODIUM_INSTALL_MINIMAL=1``) for setup. + +1.3.0 - 2018-09-26 +------------------ + +* Added support for Python 3.7. +* Update ``libsodium`` to 1.0.16. +* Run and test all code examples in PyNaCl docs through sphinx's + doctest builder. +* Add low-level bindings for chacha20-poly1305 AEAD constructions. +* Add low-level bindings for the chacha20-poly1305 secretstream constructions. +* Add low-level bindings for the ed25519ph pre-hashed signing construction. +* Add low-level bindings for constant-time increment and addition + on fixed-precision big integers represented as little-endian + byte sequences. +* Add low-level bindings for the ISO/IEC 7816-4 compatible padding API. +* Add low-level bindings for libsodium's crypto_kx... key exchange + construction. +* Set hypothesis deadline to None in tests/test_pwhash.py to avoid + incorrect test failures on slower processor architectures (GitHub + issue #370). + +1.2.1 - 2017-12-04 +------------------ + +* Update hypothesis minimum allowed version. +* Infrastructure: add proper configuration for the readthedocs builder + runtime environment. + +1.2.0 - 2017-11-01 +------------------ + +* Update ``libsodium`` to 1.0.15. +* Infrastructure: add Jenkins support for automatic builds of + ``manylinux1`` binary wheels. +* Added support for the ``SealedBox`` construction. +* Added support for the ``argon2i`` and ``argon2id`` password hashing constructs + and restructured the high-level password hashing implementation to expose + the same interface for all hashers. +* Added support for the 128-bit ``siphashx24`` variant of ``siphash24``. +* Added support for ``from_seed`` APIs for X25519 keypair generation. +* Dropped support for Python 3.3. 
+ +1.1.2 - 2017-03-31 +------------------ + +* reorder link time library search path when using bundled + libsodium + +1.1.1 - 2017-03-15 +------------------ + +* Fixed a circular import bug in ``nacl.utils``. + +1.1.0 - 2017-03-14 +------------------ + +* Dropped support for Python 2.6. +* Added ``shared_key()`` method on ``Box``. +* You can now pass ``None`` to ``nonce`` when encrypting with ``Box`` or + ``SecretBox`` and it will automatically generate a random nonce. +* Added support for ``siphash24``. +* Added support for ``blake2b``. +* Added support for ``scrypt``. +* Update ``libsodium`` to 1.0.11. +* Default to the bundled ``libsodium`` when compiling. +* All raised exceptions are defined mixing-in + ``nacl.exceptions.CryptoError`` + +1.0.1 - 2016-01-24 +------------------ + +* Fix an issue with absolute paths that prevented the creation of wheels. + +1.0 - 2016-01-23 +---------------- + +* PyNaCl has been ported to use the new APIs available in cffi 1.0+. + Due to this change we no longer support PyPy releases older than 2.6. +* Python 3.2 support has been dropped. +* Functions to convert between Ed25519 and Curve25519 keys have been added. + +0.3.0 - 2015-03-04 +------------------ + +* The low-level API (`nacl.c.*`) has been changed to match the + upstream NaCl C/C++ conventions (as well as those of other NaCl bindings). + The order of arguments and return values has changed significantly. To + avoid silent failures, `nacl.c` has been removed, and replaced with + `nacl.bindings` (with the new argument ordering). If you have code which + calls these functions (e.g. `nacl.c.crypto_box_keypair()`), you must review + the new docstrings and update your code/imports to match the new + conventions. + + diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/PyNaCl-1.4.0.dist-info/RECORD b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/PyNaCl-1.4.0.dist-info/RECORD new file mode 100644 index 0000000000000000000000000000000000000000..cb46b478318074ba4d1dbad340f18091c7fffa7f --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/PyNaCl-1.4.0.dist-info/RECORD @@ -0,0 +1,67 @@ +PyNaCl-1.4.0.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4 +PyNaCl-1.4.0.dist-info/LICENSE,sha256=0xdK1j5yHUydzLitQyCEiZLTFDabxGMZcgtYAskVP-k,9694 +PyNaCl-1.4.0.dist-info/METADATA,sha256=VXYKfhj8GTj_EfH7PT_oGHgxCAF7YonQtvn0ApWLbmY,8114 +PyNaCl-1.4.0.dist-info/RECORD,, +PyNaCl-1.4.0.dist-info/WHEEL,sha256=HAbUOEaMuGu1vSI8bKL7nQShDuU2XC9f79bOtOydX1A,108 +PyNaCl-1.4.0.dist-info/top_level.txt,sha256=wfdEOI_G2RIzmzsMyhpqP17HUh6Jcqi99to9aHLEslo,13 +nacl/__init__.py,sha256=F4fZFkZq5_reCDDlQb3NVH_Lc1-K3JmsAQaOnJZPzXc,1170 +nacl/__pycache__/__init__.cpython-38.pyc,, +nacl/__pycache__/encoding.cpython-38.pyc,, +nacl/__pycache__/exceptions.cpython-38.pyc,, +nacl/__pycache__/hash.cpython-38.pyc,, +nacl/__pycache__/hashlib.cpython-38.pyc,, +nacl/__pycache__/public.cpython-38.pyc,, +nacl/__pycache__/secret.cpython-38.pyc,, +nacl/__pycache__/signing.cpython-38.pyc,, +nacl/__pycache__/utils.cpython-38.pyc,, +nacl/_sodium.abi3.so,sha256=yoqlIcLJfPPPBpIaFDUQ_kricoapUJ1dzMU-00JRB1A,3270761 +nacl/bindings/__init__.py,sha256=c8Wn3gTCAZSdLMGxVnOuRChvh-8L7DrgBieAUFRqQ8Q,16883 +nacl/bindings/__pycache__/__init__.cpython-38.pyc,, +nacl/bindings/__pycache__/crypto_aead.cpython-38.pyc,, +nacl/bindings/__pycache__/crypto_box.cpython-38.pyc,, +nacl/bindings/__pycache__/crypto_core.cpython-38.pyc,, 
+nacl/bindings/__pycache__/crypto_generichash.cpython-38.pyc,, +nacl/bindings/__pycache__/crypto_hash.cpython-38.pyc,, +nacl/bindings/__pycache__/crypto_kx.cpython-38.pyc,, +nacl/bindings/__pycache__/crypto_pwhash.cpython-38.pyc,, +nacl/bindings/__pycache__/crypto_scalarmult.cpython-38.pyc,, +nacl/bindings/__pycache__/crypto_secretbox.cpython-38.pyc,, +nacl/bindings/__pycache__/crypto_secretstream.cpython-38.pyc,, +nacl/bindings/__pycache__/crypto_shorthash.cpython-38.pyc,, +nacl/bindings/__pycache__/crypto_sign.cpython-38.pyc,, +nacl/bindings/__pycache__/randombytes.cpython-38.pyc,, +nacl/bindings/__pycache__/sodium_core.cpython-38.pyc,, +nacl/bindings/__pycache__/utils.cpython-38.pyc,, +nacl/bindings/crypto_aead.py,sha256=DE5zdi09GeHZxvmrhHtxVuTqF61y1cs8trTGh_6uP8Q,17335 +nacl/bindings/crypto_box.py,sha256=VVBRvAACrEARLEZDHOFEp4g0meQWwWXTiCW0npPt0HU,9958 +nacl/bindings/crypto_core.py,sha256=7zOeRHS2oBWwI_KB1E4-sRq_ITZGWSUqxI0aT-evPGw,13433 +nacl/bindings/crypto_generichash.py,sha256=a0h-yxZR8fD7AkzaTcXj48pgFrCKVa7Mmer1wj8Az2A,8949 +nacl/bindings/crypto_hash.py,sha256=7Xp4mpXr4cpn-hAOU66KlYVUCVHP6deT0v_eW4UZZXo,2243 +nacl/bindings/crypto_kx.py,sha256=2Gjxu5c7IKAwW2MOJa9zEn1EgpIVQ0tbZQs33REZb38,6937 +nacl/bindings/crypto_pwhash.py,sha256=TH-oXgrzwnNnBCNR_iPw7TtakVBzmW1kIkJciF8MVdM,18696 +nacl/bindings/crypto_scalarmult.py,sha256=8w9CIMSar2eGR2nwmqQlQdN3z_o7L-P4unGnm9gwkVU,8208 +nacl/bindings/crypto_secretbox.py,sha256=luvzB3lwBwXxKm63e9nA2neGtOXeeG8R9SyWEckIqdI,2864 +nacl/bindings/crypto_secretstream.py,sha256=FLICuAI6kRM5qNIZbDetZVzLV-7y2dv5Vd1LTpUEhmo,10475 +nacl/bindings/crypto_shorthash.py,sha256=PK_h7X2WH_QRKJoSHbsQdhc19TIpFFqXy_wkzRPvpnY,2587 +nacl/bindings/crypto_sign.py,sha256=bQc_2VQ4CGttdi9s1hvD5PcqPXFo4ca44dRG-W2kRkc,10617 +nacl/bindings/randombytes.py,sha256=r93-dAfODRnXAUacx9MXsop-WVZ5xMJJB3xyPkHjQr4,1597 +nacl/bindings/sodium_core.py,sha256=52z0K7y6Ge6IlXcysWDVN7UdYcTOij6v0Cb0OLo8_Qc,1079 +nacl/bindings/utils.py,sha256=jOKsDbsjxN9v_HI8DOib72chyU3byqbynXxbiV909-g,4420 +nacl/encoding.py,sha256=tOiyIQVVpGU6A4Lzr0tMuqomhc_Aj0V_c1t56a-ZtPw,1928 +nacl/exceptions.py,sha256=6-hSnpjbUREnrmI1tZT2XqOBterZ0qoBsMvhCm1W5KQ,2128 +nacl/hash.py,sha256=83tVxKjG_DAZUsHKaKl-_kli7K0wNrV20cZxm4vY620,6164 +nacl/hashlib.py,sha256=R2uAL8lfdm_wGwEUcNI92YLyUcFKz9568xZFus7Msps,4197 +nacl/public.py,sha256=-nwQof5ov-wSSdvvoXh-FavTtjfpRnYykZkatNKyLd0,13442 +nacl/pwhash/__init__.py,sha256=_GxfRAjUCQlztdDHOUBbDINC3aJPIha8I-j3HIZE7eU,2720 +nacl/pwhash/__pycache__/__init__.cpython-38.pyc,, +nacl/pwhash/__pycache__/_argon2.cpython-38.pyc,, +nacl/pwhash/__pycache__/argon2i.cpython-38.pyc,, +nacl/pwhash/__pycache__/argon2id.cpython-38.pyc,, +nacl/pwhash/__pycache__/scrypt.cpython-38.pyc,, +nacl/pwhash/_argon2.py,sha256=Eu3-juLws3_v1gNy5aeSVPEwuRVFdGOrfeF0wPH9VHA,1878 +nacl/pwhash/argon2i.py,sha256=EpheK0UHJvZYca_EMhhOcX5GXaOr0xCjFDTIgmSCSDo,4598 +nacl/pwhash/argon2id.py,sha256=IqNm5RQNEd1Z9F-bEWT-_Y9noU26QoTR5YdWONg1uuI,4610 +nacl/pwhash/scrypt.py,sha256=mpx0A2ocNkUiHVMy1qiW4UfSyI3RKxLraqkMXjbg6NI,6731 +nacl/secret.py,sha256=jf4WuUjnnXTekZ2elGgQozZl6zGzxGY_0Nw0fwehUlg,5430 +nacl/signing.py,sha256=jtPBhqfiY6JOZ-HlllRjIQ-8w3a0PFe1AyKVRBLlesg,7339 +nacl/utils.py,sha256=2rL0bIijNgEe7DOMk1-ClaZfoJCQ1NqkksrXP7bDWYU,2166 diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/PyNaCl-1.4.0.dist-info/WHEEL b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/PyNaCl-1.4.0.dist-info/WHEEL new file mode 100644 index 
0000000000000000000000000000000000000000..b3b8cbbbf48d403d4151148e39bdbf3d25d2f0c2 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/PyNaCl-1.4.0.dist-info/WHEEL @@ -0,0 +1,5 @@ +Wheel-Version: 1.0 +Generator: bdist_wheel (0.34.2) +Root-Is-Purelib: false +Tag: cp35-abi3-manylinux1_x86_64 + diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/PyNaCl-1.4.0.dist-info/top_level.txt b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/PyNaCl-1.4.0.dist-info/top_level.txt new file mode 100644 index 0000000000000000000000000000000000000000..f52507f09b95c1d37c005528f28b345a8af187a4 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/PyNaCl-1.4.0.dist-info/top_level.txt @@ -0,0 +1,2 @@ +_sodium +nacl diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/PyYAML-5.3.1.dist-info/INSTALLER b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/PyYAML-5.3.1.dist-info/INSTALLER new file mode 100644 index 0000000000000000000000000000000000000000..a1b589e38a32041e49332e5e81c2d363dc418d68 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/PyYAML-5.3.1.dist-info/INSTALLER @@ -0,0 +1 @@ +pip diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/PyYAML-5.3.1.dist-info/LICENSE b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/PyYAML-5.3.1.dist-info/LICENSE new file mode 100644 index 0000000000000000000000000000000000000000..3d82c281ee8ceb45368153139f5935bd8d39c816 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/PyYAML-5.3.1.dist-info/LICENSE @@ -0,0 +1,20 @@ +Copyright (c) 2017-2020 Ingy döt Net +Copyright (c) 2006-2016 Kirill Simonov + +Permission is hereby granted, free of charge, to any person obtaining a copy of +this software and associated documentation files (the "Software"), to deal in +the Software without restriction, including without limitation the rights to +use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies +of the Software, and to permit persons to whom the Software is furnished to do +so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. 
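Stepping back to the PyNaCl 1.4.0 wheel recorded above: for reviewers unfamiliar with the package, here is a minimal sketch of the secret-key API its metadata describes. This is an illustration only, with an invented message; nothing in it is part of the vendored files.

.. code-block:: python

    import nacl.secret
    import nacl.utils

    # A SecretBox key must be exactly SecretBox.KEY_SIZE (32) bytes.
    key = nacl.utils.random(nacl.secret.SecretBox.KEY_SIZE)
    box = nacl.secret.SecretBox(key)

    # encrypt() draws a random nonce when none is supplied (PyNaCl >= 1.1.0,
    # per the changelog above) and prepends it to the ciphertext.
    encrypted = box.encrypt(b"an invented plaintext")

    # The ciphertext is authenticated: decrypt() returns the plaintext, or
    # raises nacl.exceptions.CryptoError if the message was tampered with.
    assert box.decrypt(encrypted) == b"an invented plaintext"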
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/PyYAML-5.3.1.dist-info/METADATA b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/PyYAML-5.3.1.dist-info/METADATA new file mode 100644 index 0000000000000000000000000000000000000000..a70dd202d485af74ab22294dfbc7710364d0b127 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/PyYAML-5.3.1.dist-info/METADATA @@ -0,0 +1,41 @@ +Metadata-Version: 2.1 +Name: PyYAML +Version: 5.3.1 +Summary: YAML parser and emitter for Python +Home-page: https://github.com/yaml/pyyaml +Author: Kirill Simonov +Author-email: xi@resolvent.net +License: MIT +Download-URL: https://pypi.org/project/PyYAML/ +Platform: Any +Classifier: Development Status :: 5 - Production/Stable +Classifier: Intended Audience :: Developers +Classifier: License :: OSI Approved :: MIT License +Classifier: Operating System :: OS Independent +Classifier: Programming Language :: Cython +Classifier: Programming Language :: Python +Classifier: Programming Language :: Python :: 2 +Classifier: Programming Language :: Python :: 2.7 +Classifier: Programming Language :: Python :: 3 +Classifier: Programming Language :: Python :: 3.5 +Classifier: Programming Language :: Python :: 3.6 +Classifier: Programming Language :: Python :: 3.7 +Classifier: Programming Language :: Python :: 3.8 +Classifier: Programming Language :: Python :: Implementation :: CPython +Classifier: Programming Language :: Python :: Implementation :: PyPy +Classifier: Topic :: Software Development :: Libraries :: Python Modules +Classifier: Topic :: Text Processing :: Markup +Requires-Python: >=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.* + +YAML is a data serialization format designed for human readability +and interaction with scripting languages. PyYAML is a YAML parser +and emitter for Python. + +PyYAML features a complete YAML 1.1 parser, Unicode support, pickle +support, a capable extension API, and sensible error messages. PyYAML +supports standard YAML tags and provides Python-specific tags that +allow representing arbitrary Python objects. + +PyYAML is applicable to a broad range of tasks, from complex +configuration files to object serialization and persistence. 
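For readers skimming this vendored metadata, a minimal sketch of the PyYAML round trip described above (an illustration only; the document structure is invented and is not part of the vendored files):

.. code-block:: python

    import yaml

    doc = """
    vnf:
      name: hackfest_firewall_pnf
      replicas: 2
    """

    # safe_load() parses YAML 1.1 into plain Python objects (dicts, lists,
    # scalars) without constructing arbitrary Python objects from tags.
    data = yaml.safe_load(doc)
    assert data["vnf"]["replicas"] == 2

    # safe_dump() emits the structure back out as YAML text.
    print(yaml.safe_dump(data, default_flow_style=False))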
+ diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/PyYAML-5.3.1.dist-info/RECORD b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/PyYAML-5.3.1.dist-info/RECORD new file mode 100644 index 0000000000000000000000000000000000000000..cae61d9785c50e2d66319fde9ccc891f7f960354 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/PyYAML-5.3.1.dist-info/RECORD @@ -0,0 +1,40 @@ +PyYAML-5.3.1.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4 +PyYAML-5.3.1.dist-info/LICENSE,sha256=xAESRJ8lS5dTBFklJIMT6ScO-jbSJrItgtTMbEPFfyk,1101 +PyYAML-5.3.1.dist-info/METADATA,sha256=xTsZFjd8T4M-5rC2M3BHgx_KTTpEPy5vFDIXrbzRXPQ,1758 +PyYAML-5.3.1.dist-info/RECORD,, +PyYAML-5.3.1.dist-info/WHEEL,sha256=TpFVeXF_cAlV118WSIPWtjqW7nPvzoOw-49FmS3fDKQ,103 +PyYAML-5.3.1.dist-info/top_level.txt,sha256=rpj0IVMTisAjh_1vG3Ccf9v5jpCQwAz6cD1IVU5ZdhQ,11 +yaml/__init__.py,sha256=XFUNbKTg4afAd0BETjGQ1mKQ97_g5jbE1C0WoKc74dc,13170 +yaml/__pycache__/__init__.cpython-38.pyc,, +yaml/__pycache__/composer.cpython-38.pyc,, +yaml/__pycache__/constructor.cpython-38.pyc,, +yaml/__pycache__/cyaml.cpython-38.pyc,, +yaml/__pycache__/dumper.cpython-38.pyc,, +yaml/__pycache__/emitter.cpython-38.pyc,, +yaml/__pycache__/error.cpython-38.pyc,, +yaml/__pycache__/events.cpython-38.pyc,, +yaml/__pycache__/loader.cpython-38.pyc,, +yaml/__pycache__/nodes.cpython-38.pyc,, +yaml/__pycache__/parser.cpython-38.pyc,, +yaml/__pycache__/reader.cpython-38.pyc,, +yaml/__pycache__/representer.cpython-38.pyc,, +yaml/__pycache__/resolver.cpython-38.pyc,, +yaml/__pycache__/scanner.cpython-38.pyc,, +yaml/__pycache__/serializer.cpython-38.pyc,, +yaml/__pycache__/tokens.cpython-38.pyc,, +yaml/composer.py,sha256=_Ko30Wr6eDWUeUpauUGT3Lcg9QPBnOPVlTnIMRGJ9FM,4883 +yaml/constructor.py,sha256=O3Uaf0_J_5GQBoeI9ZNhpJAhtdagr_X2HzDgGbZOMnw,28627 +yaml/cyaml.py,sha256=LiMkvchNonfoy1F6ec9L2BiUz3r0bwF4hympASJX1Ic,3846 +yaml/dumper.py,sha256=PLctZlYwZLp7XmeUdwRuv4nYOZ2UBnDIUy8-lKfLF-o,2837 +yaml/emitter.py,sha256=jghtaU7eFwg31bG0B7RZea_29Adi9CKmXq_QjgQpCkQ,43006 +yaml/error.py,sha256=Ah9z-toHJUbE9j-M8YpxgSRM5CgLCcwVzJgLLRF2Fxo,2533 +yaml/events.py,sha256=50_TksgQiE4up-lKo_V-nBy-tAIxkIPQxY5qDhKCeHw,2445 +yaml/loader.py,sha256=UVa-zIqmkFSCIYq_PgSGm4NSJttHY2Rf_zQ4_b1fHN0,2061 +yaml/nodes.py,sha256=gPKNj8pKCdh2d4gr3gIYINnPOaOxGhJAUiYhGRnPE84,1440 +yaml/parser.py,sha256=ilWp5vvgoHFGzvOZDItFoGjD6D42nhlZrZyjAwa0oJo,25495 +yaml/reader.py,sha256=0dmzirOiDG4Xo41RnuQS7K9rkY3xjHiVasfDMNTqCNw,6794 +yaml/representer.py,sha256=82UM3ZxUQKqsKAF4ltWOxCS6jGPIFtXpGs7mvqyv4Xs,14184 +yaml/resolver.py,sha256=DJCjpQr8YQCEYYjKEYqTl0GrsZil2H4aFOI9b0Oe-U4,8970 +yaml/scanner.py,sha256=KeQIKGNlSyPE8QDwionHxy9CgbqE5teJEz05FR9-nAg,51277 +yaml/serializer.py,sha256=ChuFgmhU01hj4xgI8GaKv6vfM2Bujwa9i7d2FAHj7cA,4165 +yaml/tokens.py,sha256=lTQIzSVw8Mg9wv459-TjiOQe6wVziqaRlqX2_89rp54,2573 diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/PyYAML-5.3.1.dist-info/WHEEL b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/PyYAML-5.3.1.dist-info/WHEEL new file mode 100644 index 0000000000000000000000000000000000000000..d193dea988d97c2f7f7bf3c4fc196496d361cd4d --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/PyYAML-5.3.1.dist-info/WHEEL @@ -0,0 +1,5 @@ +Wheel-Version: 1.0 +Generator: bdist_wheel (0.34.2) +Root-Is-Purelib: false +Tag: cp38-cp38-linux_x86_64 + 
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/PyYAML-5.3.1.dist-info/top_level.txt b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/PyYAML-5.3.1.dist-info/top_level.txt new file mode 100644 index 0000000000000000000000000000000000000000..e6475e911f628412049bc4090d86f23ac403adde --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/PyYAML-5.3.1.dist-info/top_level.txt @@ -0,0 +1,2 @@ +_yaml +yaml diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/_cffi_backend.cpython-38-x86_64-linux-gnu.so b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/_cffi_backend.cpython-38-x86_64-linux-gnu.so new file mode 100755 index 0000000000000000000000000000000000000000..297c22e69ccbaa0d47386c02711d4ce45df5959f Binary files /dev/null and b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/_cffi_backend.cpython-38-x86_64-linux-gnu.so differ diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/_pyrsistent_version.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/_pyrsistent_version.py new file mode 100644 index 0000000000000000000000000000000000000000..e0cd4e2283d025bf91267601a1bf8d5503198464 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/_pyrsistent_version.py @@ -0,0 +1 @@ +__version__ = '0.17.3' diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/_virtualenv.pth b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/_virtualenv.pth new file mode 100644 index 0000000000000000000000000000000000000000..1c3ff99867d8110052195fed7650ec50c1f3adfe --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/_virtualenv.pth @@ -0,0 +1 @@ +import _virtualenv \ No newline at end of file diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/_virtualenv.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/_virtualenv.py new file mode 100644 index 0000000000000000000000000000000000000000..b399da4cc2c6254fecb330943e0e6834a19c27c8 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/_virtualenv.py @@ -0,0 +1,115 @@ +"""Patches that are applied at runtime to the virtual environment""" +# -*- coding: utf-8 -*- + +import os +import sys + +VIRTUALENV_PATCH_FILE = os.path.join(__file__) + + +def patch_dist(dist): + """ + Distutils allows the user to configure some arguments via a configuration file: + https://docs.python.org/3/install/index.html#distutils-configuration-files + + Some of these arguments, though, don't make sense in the context of the virtual environment; let's fix them up. 
+ """ + # we cannot allow some install config as that would get packages installed outside of the virtual environment + old_parse_config_files = dist.Distribution.parse_config_files + + def parse_config_files(self, *args, **kwargs): + result = old_parse_config_files(self, *args, **kwargs) + install = self.get_option_dict("install") + + if "prefix" in install: # the prefix governs where to install the libraries + install["prefix"] = VIRTUALENV_PATCH_FILE, os.path.abspath(sys.prefix) + for base in ("purelib", "platlib", "headers", "scripts", "data"): + key = "install_{}".format(base) + if key in install: # do not allow global configs to hijack venv paths + install.pop(key, None) + return result + + dist.Distribution.parse_config_files = parse_config_files + + +# Import hook that patches some modules to ignore configuration values that break package installation in case +# of virtual environments. +_DISTUTILS_PATCH = "distutils.dist", "setuptools.dist" +if sys.version_info > (3, 4): + # https://docs.python.org/3/library/importlib.html#setting-up-an-importer + from importlib.abc import MetaPathFinder + from importlib.util import find_spec + from threading import Lock + from functools import partial + + class _Finder(MetaPathFinder): + """A meta path finder that allows patching the imported distutils modules""" + + fullname = None + lock = Lock() + + def find_spec(self, fullname, path, target=None): + if fullname in _DISTUTILS_PATCH and self.fullname is None: + with self.lock: + self.fullname = fullname + try: + spec = find_spec(fullname, path) + if spec is not None: + # https://www.python.org/dev/peps/pep-0451/#how-loading-will-work + is_new_api = hasattr(spec.loader, "exec_module") + func_name = "exec_module" if is_new_api else "load_module" + old = getattr(spec.loader, func_name) + func = self.exec_module if is_new_api else self.load_module + if old is not func: + try: + setattr(spec.loader, func_name, partial(func, old)) + except AttributeError: + pass # C-Extension loaders are r/o such as zipimporter with +Aakanksha Agrawal <11389424+rasponic@users.noreply.github.com> +Abhinav Sagar <40603139+abhinavsagar@users.noreply.github.com> +ABHYUDAY PRATAP SINGH +abs51295 +AceGentile +Adam Chainz +Adam Tse +Adam Tse +Adam Wentz +admin +Adrien Morison +ahayrapetyan +Ahilya +AinsworthK +Akash Srivastava +Alan Yee +Albert Tugushev +Albert-Guan +albertg +Aleks Bunin +Alethea Flowers +Alex Gaynor +Alex Grönholm +Alex Loosley +Alex Morega +Alex Stachowiak +Alexander Shtyrov +Alexandre Conrad +Alexey Popravka +Alexey Popravka +Alli +Ami Fischman +Ananya Maiti +Anatoly Techtonik +Anders Kaseorg +Andreas Lutro +Andrei Geacar +Andrew Gaul +Andrey Bulgakov +Andrés Delfino <34587441+andresdelfino@users.noreply.github.com> +Andrés Delfino +Andy Freeland +Andy Freeland +Andy Kluger +Ani Hayrapetyan +Aniruddha Basak +Anish Tambe +Anrs Hu +Anthony Sottile +Antoine Musso +Anton Ovchinnikov +Anton Patrushev +Antonio Alvarado Hernandez +Antony Lee +Antti Kaihola +Anubhav Patel +Anuj Godase +AQNOUCH Mohammed +AraHaan +Arindam Choudhury +Armin Ronacher +Artem +Ashley Manton +Ashwin Ramaswami +atse +Atsushi Odagiri +Avner Cohen +Baptiste Mispelon +Barney Gale +barneygale +Bartek Ogryczak +Bastian Venthur +Ben Darnell +Ben Hoyt +Ben Rosser +Bence Nagy +Benjamin Peterson +Benjamin VanEvery +Benoit Pierre +Berker Peksag +Bernardo B. Marques +Bernhard M. Wiedemann +Bertil Hatt +Bogdan Opanchuk +BorisZZZ +Brad Erickson +Bradley Ayers +Brandon L. 
Reiss +Brandt Bucher +Brett Randall +Brian Cristante <33549821+brcrista@users.noreply.github.com> +Brian Cristante +Brian Rosner +BrownTruck +Bruno Oliveira +Bruno Renié +Bstrdsmkr +Buck Golemon +burrows +Bussonnier Matthias +c22 +Caleb Martinez +Calvin Smith +Carl Meyer +Carlos Liam +Carol Willing +Carter Thayer +Cass +Chandrasekhar Atina +Chih-Hsuan Yen +Chih-Hsuan Yen +Chris Brinker +Chris Hunt +Chris Jerdonek +Chris McDonough +Chris Wolfe +Christian Heimes +Christian Oudard +Christopher Hunt +Christopher Snyder +Clark Boylan +Clay McClure +Cody +Cody Soyland +Colin Watson +Connor Osborn +Cooper Lees +Cooper Ry Lees +Cory Benfield +Cory Wright +Craig Kerstiens +Cristian Sorinel +Curtis Doty +cytolentino +Damian Quiroga +Dan Black +Dan Savilonis +Dan Sully +daniel +Daniel Collins +Daniel Hahler +Daniel Holth +Daniel Jost +Daniel Shaulov +Daniele Esposti +Daniele Procida +Danny Hermes +Dav Clark +Dave Abrahams +Dave Jones +David Aguilar +David Black +David Bordeynik +David Bordeynik +David Caro +David Evans +David Linke +David Pursehouse +David Tucker +David Wales +Davidovich +derwolfe +Desetude +Diego Caraballo +DiegoCaraballo +Dmitry Gladkov +Domen Kožar +Donald Stufft +Dongweiming +Douglas Thor +DrFeathers +Dustin Ingram +Dwayne Bailey +Ed Morley <501702+edmorley@users.noreply.github.com> +Ed Morley +Eitan Adler +ekristina +elainechan +Eli Schwartz +Eli Schwartz +Emil Burzo +Emil Styrke +Endoh Takanao +enoch +Erdinc Mutlu +Eric Gillingham +Eric Hanchrow +Eric Hopper +Erik M. Bray +Erik Rose +Ernest W Durbin III +Ernest W. Durbin III +Erwin Janssen +Eugene Vereshchagin +everdimension +Felix Yan +fiber-space +Filip Kokosiński +Florian Briand +Florian Rathgeber +Francesco +Francesco Montesano +Frost Ming +Gabriel Curio +Gabriel de Perthuis +Garry Polley +gdanielson +Geoffrey Lehée +Geoffrey Sneddon +George Song +Georgi Valkov +Giftlin Rajaiah +gizmoguy1 +gkdoc <40815324+gkdoc@users.noreply.github.com> +Gopinath M <31352222+mgopi1990@users.noreply.github.com> +GOTO Hayato <3532528+gh640@users.noreply.github.com> +gpiks +Guilherme Espada +Guy Rozendorn +gzpan123 +Hanjun Kim +Hari Charan +Harsh Vardhan +Herbert Pfennig +Hsiaoming Yang +Hugo +Hugo Lopes Tavares +Hugo van Kemenade +hugovk +Hynek Schlawack +Ian Bicking +Ian Cordasco +Ian Lee +Ian Stapleton Cordasco +Ian Wienand +Ian Wienand +Igor Kuzmitshov +Igor Sobreira +Ilya Baryshev +INADA Naoki +Ionel Cristian Mărieș +Ionel Maries Cristian +Ivan Pozdeev +Jacob Kim +jakirkham +Jakub Stasiak +Jakub Vysoky +Jakub Wilk +James Cleveland +James Cleveland +James Firth +James Polley +Jan Pokorný +Jannis Leidel +jarondl +Jason R. 
Coombs +Jay Graves +Jean-Christophe Fillion-Robin +Jeff Barber +Jeff Dairiki +Jelmer Vernooij +jenix21 +Jeremy Stanley +Jeremy Zafran +Jiashuo Li +Jim Garrison +Jivan Amara +John Paton +John-Scott Atlakson +johnthagen +johnthagen +Jon Banafato +Jon Dufresne +Jon Parise +Jonas Nockert +Jonathan Herbert +Joost Molenaar +Jorge Niedbalski +Joseph Long +Josh Bronson +Josh Hansen +Josh Schneier +Juanjo Bazán +Julian Berman +Julian Gethmann +Julien Demoor +jwg4 +Jyrki Pulliainen +Kai Chen +Kamal Bin Mustafa +kaustav haldar +keanemind +Keith Maxwell +Kelsey Hightower +Kenneth Belitzky +Kenneth Reitz +Kenneth Reitz +Kevin Burke +Kevin Carter +Kevin Frommelt +Kevin R Patterson +Kexuan Sun +Kit Randel +kpinc +Krishna Oza +Kumar McMillan +Kyle Persohn +lakshmanaram +Laszlo Kiss-Kollar +Laurent Bristiel +Laurie Opperman +Leon Sasson +Lev Givon +Lincoln de Sousa +Lipis +Loren Carvalho +Lucas Cimon +Ludovic Gasc +Luke Macken +Luo Jiebin +luojiebin +luz.paz +László Kiss Kollár +László Kiss Kollár +Marc Abramowitz +Marc Tamlyn +Marcus Smith +Mariatta +Mark Kohler +Mark Williams +Mark Williams +Markus Hametner +Masaki +Masklinn +Matej Stuchlik +Mathew Jennings +Mathieu Bridon +Matt Good +Matt Maker +Matt Robenolt +matthew +Matthew Einhorn +Matthew Gilliard +Matthew Iversen +Matthew Trumbell +Matthew Willson +Matthias Bussonnier +mattip +Maxim Kurnikov +Maxime Rouyrre +mayeut +mbaluna <44498973+mbaluna@users.noreply.github.com> +mdebi <17590103+mdebi@users.noreply.github.com> +memoselyk +Michael +Michael Aquilina +Michael E. Karpeles +Michael Klich +Michael Williamson +michaelpacer +Mickaël Schoentgen +Miguel Araujo Perez +Mihir Singh +Mike +Mike Hendricks +Min RK +MinRK +Miro Hrončok +Monica Baluna +montefra +Monty Taylor +Nate Coraor +Nathaniel J. Smith +Nehal J Wani +Neil Botelho +Nick Coghlan +Nick Stenning +Nick Timkovich +Nicolas Bock +Nikhil Benesch +Nitesh Sharma +Nowell Strite +NtaleGrey +nvdv +Ofekmeister +ofrinevo +Oliver Jeeves +Oliver Tonnhofer +Olivier Girardot +Olivier Grisel +Ollie Rutherfurd +OMOTO Kenji +Omry Yadan +Oren Held +Oscar Benjamin +Oz N Tiram +Pachwenko <32424503+Pachwenko@users.noreply.github.com> +Patrick Dubroy +Patrick Jenkins +Patrick Lawson +patricktokeeffe +Patrik Kopkan +Paul Kehrer +Paul Moore +Paul Nasrat +Paul Oswald +Paul van der Linden +Paulus Schoutsen +Pavithra Eswaramoorthy <33131404+QueenCoffee@users.noreply.github.com> +Pawel Jasinski +Pekka Klärck +Peter Lisák +Peter Waller +petr-tik +Phaneendra Chiruvella +Phil Freo +Phil Pennock +Phil Whelan +Philip Jägenstedt +Philip Molloy +Philippe Ombredanne +Pi Delport +Pierre-Yves Rofes +pip +Prabakaran Kumaresshan +Prabhjyotsing Surjit Singh Sodhi +Prabhu Marappan +Pradyun Gedam +Pratik Mallya +Preet Thakkar +Preston Holmes +Przemek Wrzos +Pulkit Goyal <7895pulkit@gmail.com> +Qiangning Hong +Quentin Pradet +R. David Murray +Rafael Caricio +Ralf Schmitt +Razzi Abuissa +rdb +Remi Rampin +Remi Rampin +Rene Dudfield +Riccardo Magliocchetti +Richard Jones +RobberPhex +Robert Collins +Robert McGibbon +Robert T. 
McGibbon +robin elisha robinson +Roey Berman +Rohan Jain +Rohan Jain +Rohan Jain +Roman Bogorodskiy +Romuald Brunet +Ronny Pfannschmidt +Rory McCann +Ross Brattain +Roy Wellington Ⅳ +Roy Wellington Ⅳ +Ryan Wooden +ryneeverett +Sachi King +Salvatore Rinchiera +Savio Jomton +schlamar +Scott Kitterman +Sean +seanj +Sebastian Jordan +Sebastian Schaetz +Segev Finer +SeongSoo Cho +Sergey Vasilyev +Seth Woodworth +Shlomi Fish +Shovan Maity +Simeon Visser +Simon Cross +Simon Pichugin +sinoroc +Sorin Sbarnea +Stavros Korokithakis +Stefan Scherfke +Stephan Erb +stepshal +Steve (Gadget) Barnes +Steve Barnes +Steve Dower +Steve Kowalik +Steven Myint +stonebig +Stéphane Bidoul (ACSONE) +Stéphane Bidoul +Stéphane Klein +Sumana Harihareswara +Sviatoslav Sydorenko +Sviatoslav Sydorenko +Swat009 +Takayuki SHIMIZUKAWA +tbeswick +Thijs Triemstra +Thomas Fenzl +Thomas Grainger +Thomas Guettler +Thomas Johansson +Thomas Kluyver +Thomas Smith +Tim D. Smith +Tim Gates +Tim Harder +Tim Heap +tim smith +tinruufu +Tom Forbes +Tom Freudenheim +Tom V +Tomas Orsava +Tomer Chachamu +Tony Beswick +Tony Zhaocheng Tan +TonyBeswick +toonarmycaptain +Toshio Kuratomi +Travis Swicegood +Tzu-ping Chung +Valentin Haenel +Victor Stinner +victorvpaulo +Viktor Szépe +Ville Skyttä +Vinay Sajip +Vincent Philippon +Vinicyus Macedo <7549205+vinicyusmacedo@users.noreply.github.com> +Vitaly Babiy +Vladimir Rutsky +W. Trevor King +Wil Tan +Wilfred Hughes +William ML Leslie +William T Olson +Wilson Mo +wim glenn +Wolfgang Maier +Xavier Fernandez +Xavier Fernandez +xoviat +xtreak +YAMAMOTO Takashi +Yen Chi Hsuan +Yeray Diaz Diaz +Yoval P +Yu Jian +Yuan Jing Vincent Yan +Zearin +Zearin +Zhiping Deng +Zvezdan Petkovic +Łukasz Langa +Семён Марьясин diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/appdirs-1.4.3.dist-info/INSTALLER b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/appdirs-1.4.3.dist-info/INSTALLER new file mode 100644 index 0000000000000000000000000000000000000000..a1b589e38a32041e49332e5e81c2d363dc418d68 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/appdirs-1.4.3.dist-info/INSTALLER @@ -0,0 +1 @@ +pip diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/appdirs-1.4.3.dist-info/LICENSE.txt b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/appdirs-1.4.3.dist-info/LICENSE.txt new file mode 100644 index 0000000000000000000000000000000000000000..737fec5c5352af3d9a6a47a0670da4bdb52c5725 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/appdirs-1.4.3.dist-info/LICENSE.txt @@ -0,0 +1,20 @@ +Copyright (c) 2008-2019 The pip developers (see AUTHORS.txt file) + +Permission is hereby granted, free of charge, to any person obtaining +a copy of this software and associated documentation files (the +"Software"), to deal in the Software without restriction, including +without limitation the rights to use, copy, modify, merge, publish, +distribute, sublicense, and/or sell copies of the Software, and to +permit persons to whom the Software is furnished to do so, subject to +the following conditions: + +The above copyright notice and this permission notice shall be +included in all copies or substantial portions of the Software. 
+ +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE +LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION +OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION +WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/appdirs-1.4.3.dist-info/METADATA b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/appdirs-1.4.3.dist-info/METADATA new file mode 100644 index 0000000000000000000000000000000000000000..56a3d6ea2099dfccf5396bd3f4ebb2531f1da2a0 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/appdirs-1.4.3.dist-info/METADATA @@ -0,0 +1,256 @@ +Metadata-Version: 2.1 +Name: appdirs +Version: 1.4.3 +Summary: A small Python module for determining appropriate platform-specific dirs, e.g. a "user data dir". +Home-page: http://github.com/ActiveState/appdirs +Author: Trent Mick +Author-email: trentm@gmail.com +Maintainer: Trent Mick; Sridhar Ratnakumar; Jeff Rouse +Maintainer-email: trentm@gmail.com; github@srid.name; jr@its.to +License: MIT +Keywords: application directory log cache user +Platform: UNKNOWN +Classifier: Development Status :: 4 - Beta +Classifier: Intended Audience :: Developers +Classifier: License :: OSI Approved :: MIT License +Classifier: Operating System :: OS Independent +Classifier: Programming Language :: Python :: 2 +Classifier: Programming Language :: Python :: 2.6 +Classifier: Programming Language :: Python :: 2.7 +Classifier: Programming Language :: Python :: 3 +Classifier: Programming Language :: Python :: 3.2 +Classifier: Programming Language :: Python :: 3.3 +Classifier: Programming Language :: Python :: 3.4 +Classifier: Programming Language :: Python :: 3.5 +Classifier: Programming Language :: Python :: 3.6 +Classifier: Programming Language :: Python :: Implementation :: PyPy +Classifier: Programming Language :: Python :: Implementation :: CPython +Classifier: Topic :: Software Development :: Libraries :: Python Modules + + +.. image:: https://secure.travis-ci.org/ActiveState/appdirs.png + :target: http://travis-ci.org/ActiveState/appdirs + +the problem +=========== + +What directory should your app use for storing user data? If running on Mac OS X, you +should use:: + + ~/Library/Application Support/ + +If on Windows (at least English Win XP) that should be:: + + C:\Documents and Settings\\Application Data\Local Settings\\ + +or possibly:: + + C:\Documents and Settings\\Application Data\\ + +for `roaming profiles `_ but that is another story. + +On Linux (and other Unices) the dir, according to the `XDG +spec `_, is:: + + ~/.local/share/ + + +``appdirs`` to the rescue +========================= + +This kind of thing is what the ``appdirs`` module is for. ``appdirs`` will +help you choose an appropriate: + +- user data dir (``user_data_dir``) +- user config dir (``user_config_dir``) +- user cache dir (``user_cache_dir``) +- site data dir (``site_data_dir``) +- site config dir (``site_config_dir``) +- user log dir (``user_log_dir``) + +and also: + +- is a single module so other Python packages can include their own private copy +- is slightly opinionated on the directory names used. 
Look for "OPINION" in + documentation and code for when an opinion is being applied. + + +some example output +=================== + +On Mac OS X:: + + >>> from appdirs import * + >>> appname = "SuperApp" + >>> appauthor = "Acme" + >>> user_data_dir(appname, appauthor) + '/Users/trentm/Library/Application Support/SuperApp' + >>> site_data_dir(appname, appauthor) + '/Library/Application Support/SuperApp' + >>> user_cache_dir(appname, appauthor) + '/Users/trentm/Library/Caches/SuperApp' + >>> user_log_dir(appname, appauthor) + '/Users/trentm/Library/Logs/SuperApp' + +On Windows 7:: + + >>> from appdirs import * + >>> appname = "SuperApp" + >>> appauthor = "Acme" + >>> user_data_dir(appname, appauthor) + 'C:\\Users\\trentm\\AppData\\Local\\Acme\\SuperApp' + >>> user_data_dir(appname, appauthor, roaming=True) + 'C:\\Users\\trentm\\AppData\\Roaming\\Acme\\SuperApp' + >>> user_cache_dir(appname, appauthor) + 'C:\\Users\\trentm\\AppData\\Local\\Acme\\SuperApp\\Cache' + >>> user_log_dir(appname, appauthor) + 'C:\\Users\\trentm\\AppData\\Local\\Acme\\SuperApp\\Logs' + +On Linux:: + + >>> from appdirs import * + >>> appname = "SuperApp" + >>> appauthor = "Acme" + >>> user_data_dir(appname, appauthor) + '/home/trentm/.local/share/SuperApp + >>> site_data_dir(appname, appauthor) + '/usr/local/share/SuperApp' + >>> site_data_dir(appname, appauthor, multipath=True) + '/usr/local/share/SuperApp:/usr/share/SuperApp' + >>> user_cache_dir(appname, appauthor) + '/home/trentm/.cache/SuperApp' + >>> user_log_dir(appname, appauthor) + '/home/trentm/.cache/SuperApp/log' + >>> user_config_dir(appname) + '/home/trentm/.config/SuperApp' + >>> site_config_dir(appname) + '/etc/xdg/SuperApp' + >>> os.environ['XDG_CONFIG_DIRS'] = '/etc:/usr/local/etc' + >>> site_config_dir(appname, multipath=True) + '/etc/SuperApp:/usr/local/etc/SuperApp' + + +``AppDirs`` for convenience +=========================== + +:: + + >>> from appdirs import AppDirs + >>> dirs = AppDirs("SuperApp", "Acme") + >>> dirs.user_data_dir + '/Users/trentm/Library/Application Support/SuperApp' + >>> dirs.site_data_dir + '/Library/Application Support/SuperApp' + >>> dirs.user_cache_dir + '/Users/trentm/Library/Caches/SuperApp' + >>> dirs.user_log_dir + '/Users/trentm/Library/Logs/SuperApp' + + + +Per-version isolation +===================== + +If you have multiple versions of your app in use that you want to be +able to run side-by-side, then you may want version-isolation for these +dirs:: + + >>> from appdirs import AppDirs + >>> dirs = AppDirs("SuperApp", "Acme", version="1.0") + >>> dirs.user_data_dir + '/Users/trentm/Library/Application Support/SuperApp/1.0' + >>> dirs.site_data_dir + '/Library/Application Support/SuperApp/1.0' + >>> dirs.user_cache_dir + '/Users/trentm/Library/Caches/SuperApp/1.0' + >>> dirs.user_log_dir + '/Users/trentm/Library/Logs/SuperApp/1.0' + + + +appdirs Changelog +================= + +appdirs 1.4.3 +------------- +- [PR #76] Python 3.6 invalid escape sequence deprecation fixes +- Fix for Python 3.6 support + +appdirs 1.4.2 +------------- +- [PR #84] Allow installing without setuptools +- [PR #86] Fix string delimiters in setup.py description +- Add Python 3.6 support + +appdirs 1.4.1 +------------- +- [issue #38] Fix _winreg import on Windows Py3 +- [issue #55] Make appname optional + +appdirs 1.4.0 +------------- +- [PR #42] AppAuthor is now optional on Windows +- [issue 41] Support Jython on Windows, Mac, and Unix-like platforms. Windows + support requires `JNA `_. 
+- [PR #44] Fix incorrect behaviour of the site_config_dir method + +appdirs 1.3.0 +------------- +- [Unix, issue 16] Conform to XDG standard, instead of breaking it for + everybody +- [Unix] Removes gratuitous case mangling, since \*nix-es are + usually case-sensitive, so mangling is not wise +- [Unix] Fixes the utterly wrong behaviour in ``site_data_dir``: return a result + based on XDG_DATA_DIRS and make room for respecting the standard, which + specifies XDG_DATA_DIRS is a multiple-value variable +- [Issue 6] Add ``*_config_dir``, which are distinct on \*nix-es, according to + XDG specs; on Windows and Mac return the corresponding ``*_data_dir`` + +appdirs 1.2.0 +------------- + +- [Unix] Put ``user_log_dir`` under the *cache* dir on Unix. Seems to be more + typical. +- [issue 9] Make ``unicode`` work on py3k. + +appdirs 1.1.0 +------------- + +- [issue 4] Add ``AppDirs.user_log_dir``. +- [Unix, issue 2, issue 7] appdirs now conforms to `XDG base directory spec + `_. +- [Mac, issue 5] Fix ``site_data_dir()`` on Mac. +- [Mac] Drop use of 'Carbon' module in favour of hardcoded paths; supports + Python3 now. +- [Windows] Append "Cache" to ``user_cache_dir`` on Windows by default. Use + ``opinion=False`` option to disable this. +- Add ``appdirs.AppDirs`` convenience class. Usage: + + >>> dirs = AppDirs("SuperApp", "Acme", version="1.0") + >>> dirs.user_data_dir + '/Users/trentm/Library/Application Support/SuperApp/1.0' + +- [Windows] Cherry-pick Komodo's change to downgrade paths to the Windows short + paths if there are high bit chars. +- [Linux] Change default ``user_cache_dir()`` on Linux to be singular, e.g. + "~/.superapp/cache". +- [Windows] Add ``roaming`` option to ``user_data_dir()`` (for use on Windows only) + and change the default ``user_data_dir`` behaviour to use a *non*-roaming + profile dir (``CSIDL_LOCAL_APPDATA`` instead of ``CSIDL_APPDATA``). Why? Because + a large roaming profile can cause login speed issues. The "only syncs on + logout" behaviour can cause surprises in appdata info. + + +appdirs 1.0.1 (never released) +------------------------------ + +Started this changelog 27 July 2010. Before that this module originated in the +`Komodo `_ product as ``applib.py`` and then +as `applib/location.py +`_ (used by +`PyPM `_ in `ActivePython +`_). This is basically a fork of +applib.py 1.0.1 and applib/location.py 1.0.1. 
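The interactive examples in the metadata above only print computed paths; appdirs itself never creates directories. A short sketch of typical programmatic use of the wheel vendored here (the app and author names reuse the README's hypothetical "SuperApp"/"Acme"; creating the directory is left to the caller):

.. code-block:: python

    import os

    from appdirs import AppDirs

    # Hypothetical names; on Linux the author component is ignored anyway.
    dirs = AppDirs("SuperApp", "Acme", version="1.0")

    # user_cache_dir is a plain string (e.g. ~/.cache/SuperApp/1.0 on Linux);
    # appdirs computes the path but does not create it, so do that ourselves.
    cache_dir = dirs.user_cache_dir
    os.makedirs(cache_dir, exist_ok=True)

    with open(os.path.join(cache_dir, "last_run.txt"), "w") as fh:
        fh.write("ok\n")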
+ + + diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/appdirs-1.4.3.dist-info/RECORD b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/appdirs-1.4.3.dist-info/RECORD new file mode 100644 index 0000000000000000000000000000000000000000..2a398ff7c671d56f484f9f24393fc393b16dca95 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/appdirs-1.4.3.dist-info/RECORD @@ -0,0 +1,11 @@ +appdirs.py,sha256=MievUEuv3l_mQISH5SF0shDk_BNhHHzYiAPrT3ITN4I,24701 +appdirs-1.4.3.dist-info/AUTHORS.txt,sha256=RtqU9KfonVGhI48DAA4-yTOBUhBtQTjFhaDzHoyh7uU,21518 +appdirs-1.4.3.dist-info/LICENSE.txt,sha256=W6Ifuwlk-TatfRU2LR7W1JMcyMj5_y1NkRkOEJvnRDE,1090 +appdirs-1.4.3.dist-info/METADATA,sha256=CYv_MUT9ndNhXtkeyXnWjTM6jW_z6fsu8wqD6Yxfn0k,8831 +appdirs-1.4.3.dist-info/WHEEL,sha256=kGT74LWyRUZrL4VgLh6_g12IeVl_9u9ZVhadrgXZUEY,110 +appdirs-1.4.3.dist-info/top_level.txt,sha256=nKncE8CUqZERJ6VuQWL4_bkunSPDNfn7KZqb4Tr5YEM,8 +appdirs-1.4.3.dist-info/RECORD,, +appdirs.cpython-38.pyc,, +appdirs-1.4.3.dist-info/__pycache__,, +appdirs-1.4.3.virtualenv,, +appdirs-1.4.3.dist-info/INSTALLER,, \ No newline at end of file diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/appdirs-1.4.3.dist-info/WHEEL b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/appdirs-1.4.3.dist-info/WHEEL new file mode 100644 index 0000000000000000000000000000000000000000..ef99c6cf3283b50a273ac4c6d009a0aa85597070 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/appdirs-1.4.3.dist-info/WHEEL @@ -0,0 +1,6 @@ +Wheel-Version: 1.0 +Generator: bdist_wheel (0.34.2) +Root-Is-Purelib: true +Tag: py2-none-any +Tag: py3-none-any + diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/appdirs-1.4.3.dist-info/top_level.txt b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/appdirs-1.4.3.dist-info/top_level.txt new file mode 100644 index 0000000000000000000000000000000000000000..d64bc321a11c17064f6e048ffcee9044688aa819 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/appdirs-1.4.3.dist-info/top_level.txt @@ -0,0 +1 @@ +appdirs diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/appdirs-1.4.3.virtualenv b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/appdirs-1.4.3.virtualenv new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/appdirs-1.4.4.dist-info/INSTALLER b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/appdirs-1.4.4.dist-info/INSTALLER new file mode 100644 index 0000000000000000000000000000000000000000..a1b589e38a32041e49332e5e81c2d363dc418d68 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/appdirs-1.4.4.dist-info/INSTALLER @@ -0,0 +1 @@ +pip diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/appdirs-1.4.4.dist-info/LICENSE.txt b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/appdirs-1.4.4.dist-info/LICENSE.txt new file mode 100644 index 
0000000000000000000000000000000000000000..107c61405e3b4a3266a83064073910b6c8d4a42a --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/appdirs-1.4.4.dist-info/LICENSE.txt @@ -0,0 +1,23 @@ +# This is the MIT license + +Copyright (c) 2010 ActiveState Software Inc. + +Permission is hereby granted, free of charge, to any person obtaining a +copy of this software and associated documentation files (the +"Software"), to deal in the Software without restriction, including +without limitation the rights to use, copy, modify, merge, publish, +distribute, sublicense, and/or sell copies of the Software, and to +permit persons to whom the Software is furnished to do so, subject to +the following conditions: + +The above copyright notice and this permission notice shall be included +in all copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS +OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. +IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY +CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, +TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE +SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. + diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/appdirs-1.4.4.dist-info/METADATA b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/appdirs-1.4.4.dist-info/METADATA new file mode 100644 index 0000000000000000000000000000000000000000..f95073104453fbfb5c3d7ef1dd065544bc63ff2a --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/appdirs-1.4.4.dist-info/METADATA @@ -0,0 +1,264 @@ +Metadata-Version: 2.1 +Name: appdirs +Version: 1.4.4 +Summary: A small Python module for determining appropriate platform-specific dirs, e.g. a "user data dir". +Home-page: http://github.com/ActiveState/appdirs +Author: Trent Mick +Author-email: trentm@gmail.com +Maintainer: Jeff Rouse +Maintainer-email: jr@its.to +License: MIT +Keywords: application directory log cache user +Platform: UNKNOWN +Classifier: Development Status :: 5 - Production/Stable +Classifier: Intended Audience :: Developers +Classifier: License :: OSI Approved :: MIT License +Classifier: Operating System :: OS Independent +Classifier: Programming Language :: Python :: 2 +Classifier: Programming Language :: Python :: 2.7 +Classifier: Programming Language :: Python :: 3 +Classifier: Programming Language :: Python :: 3.4 +Classifier: Programming Language :: Python :: 3.5 +Classifier: Programming Language :: Python :: 3.6 +Classifier: Programming Language :: Python :: 3.7 +Classifier: Programming Language :: Python :: 3.8 +Classifier: Programming Language :: Python :: Implementation :: PyPy +Classifier: Programming Language :: Python :: Implementation :: CPython +Classifier: Topic :: Software Development :: Libraries :: Python Modules + + +.. image:: https://secure.travis-ci.org/ActiveState/appdirs.png + :target: http://travis-ci.org/ActiveState/appdirs + +the problem +=========== + +What directory should your app use for storing user data? 
If running on Mac OS X, you
+should use::
+
+    ~/Library/Application Support/<AppName>
+
+If on Windows (at least English Win XP) that should be::
+
+    C:\Documents and Settings\<User>\Application Data\Local Settings\<AppAuthor>\<AppName>
+
+or possibly::
+
+    C:\Documents and Settings\<User>\Application Data\<AppAuthor>\<AppName>
+
+for `roaming profiles <http://technet.microsoft.com/en-us/library/cc766489(WS.10).aspx>`_
+but that is another story.
+
+On Linux (and other Unices) the dir, according to the `XDG
+spec <http://standards.freedesktop.org/basedir-spec/basedir-spec-latest.html>`_, is::
+
+    ~/.local/share/<AppName>
+
+
+``appdirs`` to the rescue
+=========================
+
+This kind of thing is what the ``appdirs`` module is for. ``appdirs`` will
+help you choose an appropriate:
+
+- user data dir (``user_data_dir``)
+- user config dir (``user_config_dir``)
+- user cache dir (``user_cache_dir``)
+- site data dir (``site_data_dir``)
+- site config dir (``site_config_dir``)
+- user log dir (``user_log_dir``)
+
+and also:
+
+- is a single module, so other Python packages can include their own private
+  copy
+- is slightly opinionated on the directory names used. Look for "OPINION" in
+  the documentation and code for when an opinion is being applied.
+
+
+some example output
+===================
+
+On Mac OS X::
+
+    >>> from appdirs import *
+    >>> appname = "SuperApp"
+    >>> appauthor = "Acme"
+    >>> user_data_dir(appname, appauthor)
+    '/Users/trentm/Library/Application Support/SuperApp'
+    >>> site_data_dir(appname, appauthor)
+    '/Library/Application Support/SuperApp'
+    >>> user_cache_dir(appname, appauthor)
+    '/Users/trentm/Library/Caches/SuperApp'
+    >>> user_log_dir(appname, appauthor)
+    '/Users/trentm/Library/Logs/SuperApp'
+
+On Windows 7::
+
+    >>> from appdirs import *
+    >>> appname = "SuperApp"
+    >>> appauthor = "Acme"
+    >>> user_data_dir(appname, appauthor)
+    'C:\\Users\\trentm\\AppData\\Local\\Acme\\SuperApp'
+    >>> user_data_dir(appname, appauthor, roaming=True)
+    'C:\\Users\\trentm\\AppData\\Roaming\\Acme\\SuperApp'
+    >>> user_cache_dir(appname, appauthor)
+    'C:\\Users\\trentm\\AppData\\Local\\Acme\\SuperApp\\Cache'
+    >>> user_log_dir(appname, appauthor)
+    'C:\\Users\\trentm\\AppData\\Local\\Acme\\SuperApp\\Logs'
+
+On Linux::
+
+    >>> from appdirs import *
+    >>> appname = "SuperApp"
+    >>> appauthor = "Acme"
+    >>> user_data_dir(appname, appauthor)
+    '/home/trentm/.local/share/SuperApp'
+    >>> site_data_dir(appname, appauthor)
+    '/usr/local/share/SuperApp'
+    >>> site_data_dir(appname, appauthor, multipath=True)
+    '/usr/local/share/SuperApp:/usr/share/SuperApp'
+    >>> user_cache_dir(appname, appauthor)
+    '/home/trentm/.cache/SuperApp'
+    >>> user_log_dir(appname, appauthor)
+    '/home/trentm/.cache/SuperApp/log'
+    >>> user_config_dir(appname)
+    '/home/trentm/.config/SuperApp'
+    >>> site_config_dir(appname)
+    '/etc/xdg/SuperApp'
+    >>> os.environ['XDG_CONFIG_DIRS'] = '/etc:/usr/local/etc'
+    >>> site_config_dir(appname, multipath=True)
+    '/etc/SuperApp:/usr/local/etc/SuperApp'
+
+
+``AppDirs`` for convenience
+===========================
+
+::
+
+    >>> from appdirs import AppDirs
+    >>> dirs = AppDirs("SuperApp", "Acme")
+    >>> dirs.user_data_dir
+    '/Users/trentm/Library/Application Support/SuperApp'
+    >>> dirs.site_data_dir
+    '/Library/Application Support/SuperApp'
+    >>> dirs.user_cache_dir
+    '/Users/trentm/Library/Caches/SuperApp'
+    >>> dirs.user_log_dir
+    '/Users/trentm/Library/Logs/SuperApp'
+
+
+
+Per-version isolation
+=====================
+
+If you have multiple versions of your app in use that you want to be
+able to run side-by-side, then you may want version-isolation for these
+dirs::
+
+    >>> from appdirs import AppDirs
+    >>> dirs = AppDirs("SuperApp", "Acme", version="1.0")
+    >>> dirs.user_data_dir
+    '/Users/trentm/Library/Application Support/SuperApp/1.0'
+    >>> dirs.site_data_dir
+    '/Library/Application Support/SuperApp/1.0'
+    >>> dirs.user_cache_dir
+    '/Users/trentm/Library/Caches/SuperApp/1.0'
+    >>> dirs.user_log_dir
+    '/Users/trentm/Library/Logs/SuperApp/1.0'
+
+
+
+appdirs Changelog
+=================
+
+appdirs 1.4.4
+-------------
+- [PR #92] Don't import appdirs from setup.py
+
+Project officially classified as Stable, which is important
+for inclusion in other distros such as ActivePython.
+
+First of several incremental releases to catch up on maintenance.
+
+appdirs 1.4.3
+-------------
+- [PR #76] Python 3.6 invalid escape sequence deprecation fixes
+- Fix for Python 3.6 support
+
+appdirs 1.4.2
+-------------
+- [PR #84] Allow installing without setuptools
+- [PR #86] Fix string delimiters in setup.py description
+- Add Python 3.6 support
+
+appdirs 1.4.1
+-------------
+- [issue #38] Fix _winreg import on Windows Py3
+- [issue #55] Make appname optional
+
+appdirs 1.4.0
+-------------
+- [PR #42] AppAuthor is now optional on Windows
+- [issue 41] Support Jython on Windows, Mac, and Unix-like platforms. Windows
+  support requires `JNA <https://github.com/twall/jna>`_.
+- [PR #44] Fix incorrect behaviour of the site_config_dir method
+
+appdirs 1.3.0
+-------------
+- [Unix, issue 16] Conform to the XDG standard, instead of breaking it for
+  everybody
+- [Unix] Remove gratuitous case mangling of the path, since \*nix-es are
+  usually case-sensitive, so mangling is not wise
+- [Unix] Fix the utterly wrong behaviour of ``site_data_dir``: return results
+  based on XDG_DATA_DIRS, and respect the standard, which specifies that
+  XDG_DATA_DIRS is a multiple-value variable
+- [Issue 6] Add ``*_config_dir``, which are distinct on \*nix-es, according to
+  the XDG specs; on Windows and Mac return the corresponding ``*_data_dir``
+
+appdirs 1.2.0
+-------------
+
+- [Unix] Put ``user_log_dir`` under the *cache* dir on Unix, which seems to be
+  more typical.
+- [issue 9] Make ``unicode`` work on py3k.
+
+appdirs 1.1.0
+-------------
+
+- [issue 4] Add ``AppDirs.user_log_dir``.
+- [Unix, issue 2, issue 7] appdirs now conforms to the `XDG base directory spec
+  <http://standards.freedesktop.org/basedir-spec/basedir-spec-latest.html>`_.
+- [Mac, issue 5] Fix ``site_data_dir()`` on Mac.
+- [Mac] Drop use of the 'Carbon' module in favour of hardcoded paths; supports
+  Python3 now.
+- [Windows] Append "Cache" to ``user_cache_dir`` on Windows by default. Use
+  the ``opinion=False`` option to disable this.
+- Add the ``appdirs.AppDirs`` convenience class. Usage:
+
+  >>> dirs = AppDirs("SuperApp", "Acme", version="1.0")
+  >>> dirs.user_data_dir
+  '/Users/trentm/Library/Application Support/SuperApp/1.0'
+
+- [Windows] Cherry-pick Komodo's change to downgrade paths to the Windows short
+  paths if there are high-bit chars.
+- [Linux] Change the default ``user_cache_dir()`` on Linux to be singular, e.g.
+  "~/.superapp/cache".
+- [Windows] Add a ``roaming`` option to ``user_data_dir()`` (for use on Windows
+  only) and change the default ``user_data_dir`` behaviour to use a *non*-roaming
+  profile dir (``CSIDL_LOCAL_APPDATA`` instead of ``CSIDL_APPDATA``). Why?
+  Because a large roaming profile can cause login speed issues. The "only syncs
+  on logout" behaviour can cause surprises in appdata info.
+
+
+appdirs 1.0.1 (never released)
+------------------------------
+
+Started this changelog 27 July 2010. Before that, this module originated in the
+`Komodo <http://www.activestate.com/komodo>`_ product as ``applib.py`` and then
+as `applib/location.py
+<http://github.com/ActiveState/applib/blob/master/applib/location.py>`_ (used by
+`PyPM <http://code.activestate.com/pypm/>`_ in `ActivePython
+<http://www.activestate.com/activepython>`_).
This is basically a fork of
+applib.py 1.0.1 and applib/location.py 1.0.1.
+
+
+
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/appdirs-1.4.4.dist-info/RECORD b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/appdirs-1.4.4.dist-info/RECORD
new file mode 100644
index 0000000000000000000000000000000000000000..56589537071d76decc66a51c8e3592b8c3895a35
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/appdirs-1.4.4.dist-info/RECORD
@@ -0,0 +1,8 @@
+__pycache__/appdirs.cpython-38.pyc,,
+appdirs-1.4.4.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4
+appdirs-1.4.4.dist-info/LICENSE.txt,sha256=Nt200KdFqTqyAyA9cZCBSxuJcn0lTK_0jHp6-71HAAs,1097
+appdirs-1.4.4.dist-info/METADATA,sha256=k5TVfXMNKGHTfp2wm6EJKTuGwGNuoQR5TqQgH8iwG8M,8981
+appdirs-1.4.4.dist-info/RECORD,,
+appdirs-1.4.4.dist-info/WHEEL,sha256=kGT74LWyRUZrL4VgLh6_g12IeVl_9u9ZVhadrgXZUEY,110
+appdirs-1.4.4.dist-info/top_level.txt,sha256=nKncE8CUqZERJ6VuQWL4_bkunSPDNfn7KZqb4Tr5YEM,8
+appdirs.py,sha256=g99s2sXhnvTEm79oj4bWI0Toapc-_SmKKNXvOXHkVic,24720
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/appdirs-1.4.4.dist-info/WHEEL b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/appdirs-1.4.4.dist-info/WHEEL
new file mode 100644
index 0000000000000000000000000000000000000000..ef99c6cf3283b50a273ac4c6d009a0aa85597070
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/appdirs-1.4.4.dist-info/WHEEL
@@ -0,0 +1,6 @@
+Wheel-Version: 1.0
+Generator: bdist_wheel (0.34.2)
+Root-Is-Purelib: true
+Tag: py2-none-any
+Tag: py3-none-any
+
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/appdirs-1.4.4.dist-info/top_level.txt b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/appdirs-1.4.4.dist-info/top_level.txt
new file mode 100644
index 0000000000000000000000000000000000000000..d64bc321a11c17064f6e048ffcee9044688aa819
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/appdirs-1.4.4.dist-info/top_level.txt
@@ -0,0 +1 @@
+appdirs
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/appdirs.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/appdirs.py
new file mode 100644
index 0000000000000000000000000000000000000000..ae67001af8b661373edeee2eb327b9f63e630d62
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/appdirs.py
@@ -0,0 +1,608 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+# Copyright (c) 2005-2010 ActiveState Software Inc.
+# Copyright (c) 2013 Eddy Petrișor
+
+"""Utilities for determining application-specific dirs.
+
+See <http://github.com/ActiveState/appdirs> for details and usage.
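+
+A minimal illustrative example (``user_data_dir`` is this module's real public
+API; the app/author names and the resulting path are hypothetical and assume
+a stock Linux environment)::
+
+    >>> from appdirs import user_data_dir
+    >>> user_data_dir("MyApp", "MyCompany")
+    '/home/user/.local/share/MyApp'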
+""" +# Dev Notes: +# - MSDN on where to store app data files: +# http://support.microsoft.com/default.aspx?scid=kb;en-us;310294#XSLTH3194121123120121120120 +# - Mac OS X: http://developer.apple.com/documentation/MacOSX/Conceptual/BPFileSystem/index.html +# - XDG spec for Un*x: http://standards.freedesktop.org/basedir-spec/basedir-spec-latest.html + +__version_info__ = (1, 4, 3) +__version__ = '.'.join(map(str, __version_info__)) + + +import sys +import os + +PY3 = sys.version_info[0] == 3 + +if PY3: + unicode = str + +if sys.platform.startswith('java'): + import platform + os_name = platform.java_ver()[3][0] + if os_name.startswith('Windows'): # "Windows XP", "Windows 7", etc. + system = 'win32' + elif os_name.startswith('Mac'): # "Mac OS X", etc. + system = 'darwin' + else: # "Linux", "SunOS", "FreeBSD", etc. + # Setting this to "linux2" is not ideal, but only Windows or Mac + # are actually checked for and the rest of the module expects + # *sys.platform* style strings. + system = 'linux2' +else: + system = sys.platform + + + +def user_data_dir(appname=None, appauthor=None, version=None, roaming=False): + r"""Return full path to the user-specific data dir for this application. + + "appname" is the name of application. + If None, just the system directory is returned. + "appauthor" (only used on Windows) is the name of the + appauthor or distributing body for this application. Typically + it is the owning company name. This falls back to appname. You may + pass False to disable it. + "version" is an optional version path element to append to the + path. You might want to use this if you want multiple versions + of your app to be able to run independently. If used, this + would typically be ".". + Only applied when appname is present. + "roaming" (boolean, default False) can be set True to use the Windows + roaming appdata directory. That means that for users on a Windows + network setup for roaming profiles, this user data will be + sync'd on login. See + + for a discussion of issues. + + Typical user data directories are: + Mac OS X: ~/Library/Application Support/ + Unix: ~/.local/share/ # or in $XDG_DATA_HOME, if defined + Win XP (not roaming): C:\Documents and Settings\\Application Data\\ + Win XP (roaming): C:\Documents and Settings\\Local Settings\Application Data\\ + Win 7 (not roaming): C:\Users\\AppData\Local\\ + Win 7 (roaming): C:\Users\\AppData\Roaming\\ + + For Unix, we follow the XDG spec and support $XDG_DATA_HOME. + That means, by default "~/.local/share/". + """ + if system == "win32": + if appauthor is None: + appauthor = appname + const = roaming and "CSIDL_APPDATA" or "CSIDL_LOCAL_APPDATA" + path = os.path.normpath(_get_win_folder(const)) + if appname: + if appauthor is not False: + path = os.path.join(path, appauthor, appname) + else: + path = os.path.join(path, appname) + elif system == 'darwin': + path = os.path.expanduser('~/Library/Application Support/') + if appname: + path = os.path.join(path, appname) + else: + path = os.getenv('XDG_DATA_HOME', os.path.expanduser("~/.local/share")) + if appname: + path = os.path.join(path, appname) + if appname and version: + path = os.path.join(path, version) + return path + + +def site_data_dir(appname=None, appauthor=None, version=None, multipath=False): + r"""Return full path to the user-shared data dir for this application. + + "appname" is the name of application. + If None, just the system directory is returned. + "appauthor" (only used on Windows) is the name of the + appauthor or distributing body for this application. 
+            Typically it is the owning company name. This falls back to
+            appname. You may pass False to disable it.
+        "version" is an optional version path element to append to the
+            path. You might want to use this if you want multiple versions
+            of your app to be able to run independently. If used, this
+            would typically be "<major>.<minor>".
+            Only applied when appname is present.
+        "multipath" is an optional parameter only applicable to *nix
+            which indicates that the entire list of data dirs should be
+            returned. By default, the first item from XDG_DATA_DIRS is
+            returned, or '/usr/local/share/<AppName>',
+            if XDG_DATA_DIRS is not set
+
+    Typical site data directories are:
+        Mac OS X:   /Library/Application Support/<AppName>
+        Unix:       /usr/local/share/<AppName> or /usr/share/<AppName>
+        Win XP:     C:\Documents and Settings\All Users\Application Data\<AppAuthor>\<AppName>
+        Vista:      (Fail! "C:\ProgramData" is a hidden *system* directory on Vista.)
+        Win 7:      C:\ProgramData\<AppAuthor>\<AppName>   # Hidden, but writeable on Win 7.
+
+    For Unix, this is using the $XDG_DATA_DIRS[0] default.
+
+    WARNING: Do not use this on Windows. See the Vista-Fail note above for why.
+    """
+    if system == "win32":
+        if appauthor is None:
+            appauthor = appname
+        path = os.path.normpath(_get_win_folder("CSIDL_COMMON_APPDATA"))
+        if appname:
+            if appauthor is not False:
+                path = os.path.join(path, appauthor, appname)
+            else:
+                path = os.path.join(path, appname)
+    elif system == 'darwin':
+        path = os.path.expanduser('/Library/Application Support')
+        if appname:
+            path = os.path.join(path, appname)
+    else:
+        # XDG default for $XDG_DATA_DIRS
+        # only first, if multipath is False
+        path = os.getenv('XDG_DATA_DIRS',
+                         os.pathsep.join(['/usr/local/share', '/usr/share']))
+        pathlist = [os.path.expanduser(x.rstrip(os.sep)) for x in path.split(os.pathsep)]
+        if appname:
+            if version:
+                appname = os.path.join(appname, version)
+            pathlist = [os.sep.join([x, appname]) for x in pathlist]
+
+        if multipath:
+            path = os.pathsep.join(pathlist)
+        else:
+            path = pathlist[0]
+        return path
+
+    if appname and version:
+        path = os.path.join(path, version)
+    return path
+
+
+def user_config_dir(appname=None, appauthor=None, version=None, roaming=False):
+    r"""Return full path to the user-specific config dir for this application.
+
+        "appname" is the name of application.
+            If None, just the system directory is returned.
+        "appauthor" (only used on Windows) is the name of the
+            appauthor or distributing body for this application. Typically
+            it is the owning company name. This falls back to appname. You may
+            pass False to disable it.
+        "version" is an optional version path element to append to the
+            path. You might want to use this if you want multiple versions
+            of your app to be able to run independently. If used, this
+            would typically be "<major>.<minor>".
+            Only applied when appname is present.
+        "roaming" (boolean, default False) can be set True to use the Windows
+            roaming appdata directory. That means that for users on a Windows
+            network setup for roaming profiles, this user data will be
+            sync'd on login. See
+            <http://technet.microsoft.com/en-us/library/cc766489(WS.10).aspx>
+            for a discussion of issues.
+
+    Typical user config directories are:
+        Mac OS X:               same as user_data_dir
+        Unix:                   ~/.config/<AppName>     # or in $XDG_CONFIG_HOME, if defined
+        Win *:                  same as user_data_dir
+
+    For Unix, we follow the XDG spec and support $XDG_CONFIG_HOME.
+    That means, by default "~/.config/<AppName>".
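+
+    A short illustrative sketch ("MyApp" is a hypothetical name; the output
+    assumes a Linux account where $XDG_CONFIG_HOME is unset)::
+
+        >>> from appdirs import user_config_dir
+        >>> user_config_dir("MyApp")
+        '/home/user/.config/MyApp'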
+ """ + if system in ["win32", "darwin"]: + path = user_data_dir(appname, appauthor, None, roaming) + else: + path = os.getenv('XDG_CONFIG_HOME', os.path.expanduser("~/.config")) + if appname: + path = os.path.join(path, appname) + if appname and version: + path = os.path.join(path, version) + return path + + +def site_config_dir(appname=None, appauthor=None, version=None, multipath=False): + r"""Return full path to the user-shared data dir for this application. + + "appname" is the name of application. + If None, just the system directory is returned. + "appauthor" (only used on Windows) is the name of the + appauthor or distributing body for this application. Typically + it is the owning company name. This falls back to appname. You may + pass False to disable it. + "version" is an optional version path element to append to the + path. You might want to use this if you want multiple versions + of your app to be able to run independently. If used, this + would typically be ".". + Only applied when appname is present. + "multipath" is an optional parameter only applicable to *nix + which indicates that the entire list of config dirs should be + returned. By default, the first item from XDG_CONFIG_DIRS is + returned, or '/etc/xdg/', if XDG_CONFIG_DIRS is not set + + Typical site config directories are: + Mac OS X: same as site_data_dir + Unix: /etc/xdg/ or $XDG_CONFIG_DIRS[i]/ for each value in + $XDG_CONFIG_DIRS + Win *: same as site_data_dir + Vista: (Fail! "C:\ProgramData" is a hidden *system* directory on Vista.) + + For Unix, this is using the $XDG_CONFIG_DIRS[0] default, if multipath=False + + WARNING: Do not use this on Windows. See the Vista-Fail note above for why. + """ + if system in ["win32", "darwin"]: + path = site_data_dir(appname, appauthor) + if appname and version: + path = os.path.join(path, version) + else: + # XDG default for $XDG_CONFIG_DIRS + # only first, if multipath is False + path = os.getenv('XDG_CONFIG_DIRS', '/etc/xdg') + pathlist = [os.path.expanduser(x.rstrip(os.sep)) for x in path.split(os.pathsep)] + if appname: + if version: + appname = os.path.join(appname, version) + pathlist = [os.sep.join([x, appname]) for x in pathlist] + + if multipath: + path = os.pathsep.join(pathlist) + else: + path = pathlist[0] + return path + + +def user_cache_dir(appname=None, appauthor=None, version=None, opinion=True): + r"""Return full path to the user-specific cache dir for this application. + + "appname" is the name of application. + If None, just the system directory is returned. + "appauthor" (only used on Windows) is the name of the + appauthor or distributing body for this application. Typically + it is the owning company name. This falls back to appname. You may + pass False to disable it. + "version" is an optional version path element to append to the + path. You might want to use this if you want multiple versions + of your app to be able to run independently. If used, this + would typically be ".". + Only applied when appname is present. + "opinion" (boolean) can be False to disable the appending of + "Cache" to the base app data dir for Windows. See + discussion below. + + Typical user cache directories are: + Mac OS X: ~/Library/Caches/ + Unix: ~/.cache/ (XDG default) + Win XP: C:\Documents and Settings\\Local Settings\Application Data\\\Cache + Vista: C:\Users\\AppData\Local\\\Cache + + On Windows the only suggestion in the MSDN docs is that local settings go in + the `CSIDL_LOCAL_APPDATA` directory. 
+    This is identical to the non-roaming
+    app data dir (the default returned by `user_data_dir` above). Apps typically
+    put cache data somewhere *under* the given dir here. Some examples:
+        ...\Mozilla\Firefox\Profiles\<ProfileName>\Cache
+        ...\Acme\SuperApp\Cache\1.0
+    OPINION: This function appends "Cache" to the `CSIDL_LOCAL_APPDATA` value.
+    This can be disabled with the `opinion=False` option.
+    """
+    if system == "win32":
+        if appauthor is None:
+            appauthor = appname
+        path = os.path.normpath(_get_win_folder("CSIDL_LOCAL_APPDATA"))
+        if appname:
+            if appauthor is not False:
+                path = os.path.join(path, appauthor, appname)
+            else:
+                path = os.path.join(path, appname)
+            if opinion:
+                path = os.path.join(path, "Cache")
+    elif system == 'darwin':
+        path = os.path.expanduser('~/Library/Caches')
+        if appname:
+            path = os.path.join(path, appname)
+    else:
+        path = os.getenv('XDG_CACHE_HOME', os.path.expanduser('~/.cache'))
+        if appname:
+            path = os.path.join(path, appname)
+    if appname and version:
+        path = os.path.join(path, version)
+    return path
+
+
+def user_state_dir(appname=None, appauthor=None, version=None, roaming=False):
+    r"""Return full path to the user-specific state dir for this application.
+
+        "appname" is the name of application.
+            If None, just the system directory is returned.
+        "appauthor" (only used on Windows) is the name of the
+            appauthor or distributing body for this application. Typically
+            it is the owning company name. This falls back to appname. You may
+            pass False to disable it.
+        "version" is an optional version path element to append to the
+            path. You might want to use this if you want multiple versions
+            of your app to be able to run independently. If used, this
+            would typically be "<major>.<minor>".
+            Only applied when appname is present.
+        "roaming" (boolean, default False) can be set True to use the Windows
+            roaming appdata directory. That means that for users on a Windows
+            network setup for roaming profiles, this user data will be
+            sync'd on login. See
+            <http://technet.microsoft.com/en-us/library/cc766489(WS.10).aspx>
+            for a discussion of issues.
+
+    Typical user state directories are:
+        Mac OS X:  same as user_data_dir
+        Unix:      ~/.local/state/<AppName>   # or in $XDG_STATE_HOME, if defined
+        Win *:     same as user_data_dir
+
+    For Unix, we follow this Debian proposal
+    <https://wiki.debian.org/XDGBaseDirectorySpecification#state>
+    to extend the XDG spec and support $XDG_STATE_HOME.
+
+    That means, by default "~/.local/state/<AppName>".
+    """
+    if system in ["win32", "darwin"]:
+        path = user_data_dir(appname, appauthor, None, roaming)
+    else:
+        path = os.getenv('XDG_STATE_HOME', os.path.expanduser("~/.local/state"))
+        if appname:
+            path = os.path.join(path, appname)
+    if appname and version:
+        path = os.path.join(path, version)
+    return path
+
+
+def user_log_dir(appname=None, appauthor=None, version=None, opinion=True):
+    r"""Return full path to the user-specific log dir for this application.
+
+        "appname" is the name of application.
+            If None, just the system directory is returned.
+        "appauthor" (only used on Windows) is the name of the
+            appauthor or distributing body for this application. Typically
+            it is the owning company name. This falls back to appname. You may
+            pass False to disable it.
+        "version" is an optional version path element to append to the
+            path. You might want to use this if you want multiple versions
+            of your app to be able to run independently. If used, this
+            would typically be "<major>.<minor>".
+            Only applied when appname is present.
+        "opinion" (boolean) can be False to disable the appending of
+            "Logs" to the base app data dir for Windows, and "log" to the
+            base cache dir for Unix. See discussion below.
+
+    Typical user log directories are:
+        Mac OS X:   ~/Library/Logs/<AppName>
+        Unix:       ~/.cache/<AppName>/log  # or under $XDG_CACHE_HOME if defined
+        Win XP:     C:\Documents and Settings\<username>\Local Settings\Application Data\<AppAuthor>\<AppName>\Logs
+        Vista:      C:\Users\<username>\AppData\Local\<AppAuthor>\<AppName>\Logs
+
+    On Windows the only suggestion in the MSDN docs is that local settings
+    go in the `CSIDL_LOCAL_APPDATA` directory. (Note: I'm interested in
+    examples of what some windows apps use for a logs dir.)
+
+    OPINION: This function appends "Logs" to the `CSIDL_LOCAL_APPDATA`
+    value for Windows and appends "log" to the user cache dir for Unix.
+    This can be disabled with the `opinion=False` option.
+    """
+    if system == "darwin":
+        path = os.path.join(
+            os.path.expanduser('~/Library/Logs'),
+            appname)
+    elif system == "win32":
+        path = user_data_dir(appname, appauthor, version)
+        version = False
+        if opinion:
+            path = os.path.join(path, "Logs")
+    else:
+        path = user_cache_dir(appname, appauthor, version)
+        version = False
+        if opinion:
+            path = os.path.join(path, "log")
+    if appname and version:
+        path = os.path.join(path, version)
+    return path
+
+
+class AppDirs(object):
+    """Convenience wrapper for getting application dirs."""
+    def __init__(self, appname=None, appauthor=None, version=None,
+                 roaming=False, multipath=False):
+        self.appname = appname
+        self.appauthor = appauthor
+        self.version = version
+        self.roaming = roaming
+        self.multipath = multipath
+
+    @property
+    def user_data_dir(self):
+        return user_data_dir(self.appname, self.appauthor,
+                             version=self.version, roaming=self.roaming)
+
+    @property
+    def site_data_dir(self):
+        return site_data_dir(self.appname, self.appauthor,
+                             version=self.version, multipath=self.multipath)
+
+    @property
+    def user_config_dir(self):
+        return user_config_dir(self.appname, self.appauthor,
+                               version=self.version, roaming=self.roaming)
+
+    @property
+    def site_config_dir(self):
+        return site_config_dir(self.appname, self.appauthor,
+                               version=self.version, multipath=self.multipath)
+
+    @property
+    def user_cache_dir(self):
+        return user_cache_dir(self.appname, self.appauthor,
+                              version=self.version)
+
+    @property
+    def user_state_dir(self):
+        return user_state_dir(self.appname, self.appauthor,
+                              version=self.version)
+
+    @property
+    def user_log_dir(self):
+        return user_log_dir(self.appname, self.appauthor,
+                            version=self.version)
+
+
+#---- internal support stuff
+
+def _get_win_folder_from_registry(csidl_name):
+    """This is a fallback technique at best. I'm not sure if using the
+    registry for this guarantees us the correct answer for all CSIDL_*
+    names.
+    """
+    if PY3:
+        import winreg as _winreg
+    else:
+        import _winreg
+
+    shell_folder_name = {
+        "CSIDL_APPDATA": "AppData",
+        "CSIDL_COMMON_APPDATA": "Common AppData",
+        "CSIDL_LOCAL_APPDATA": "Local AppData",
+    }[csidl_name]
+
+    key = _winreg.OpenKey(
+        _winreg.HKEY_CURRENT_USER,
+        r"Software\Microsoft\Windows\CurrentVersion\Explorer\Shell Folders"
+    )
+    dir, type = _winreg.QueryValueEx(key, shell_folder_name)
+    return dir
+
+
+def _get_win_folder_with_pywin32(csidl_name):
+    from win32com.shell import shellcon, shell
+    dir = shell.SHGetFolderPath(0, getattr(shellcon, csidl_name), 0, 0)
+    # Try to make this a unicode path because SHGetFolderPath does
+    # not return unicode strings when there is unicode data in the
+    # path.
+    try:
+        dir = unicode(dir)
+
+        # Downgrade to short path name if have highbit chars. See
+        # <http://bugs.activestate.com/show_bug.cgi?id=85099>.
+ has_high_char = False + for c in dir: + if ord(c) > 255: + has_high_char = True + break + if has_high_char: + try: + import win32api + dir = win32api.GetShortPathName(dir) + except ImportError: + pass + except UnicodeError: + pass + return dir + + +def _get_win_folder_with_ctypes(csidl_name): + import ctypes + + csidl_const = { + "CSIDL_APPDATA": 26, + "CSIDL_COMMON_APPDATA": 35, + "CSIDL_LOCAL_APPDATA": 28, + }[csidl_name] + + buf = ctypes.create_unicode_buffer(1024) + ctypes.windll.shell32.SHGetFolderPathW(None, csidl_const, None, 0, buf) + + # Downgrade to short path name if have highbit chars. See + # . + has_high_char = False + for c in buf: + if ord(c) > 255: + has_high_char = True + break + if has_high_char: + buf2 = ctypes.create_unicode_buffer(1024) + if ctypes.windll.kernel32.GetShortPathNameW(buf.value, buf2, 1024): + buf = buf2 + + return buf.value + +def _get_win_folder_with_jna(csidl_name): + import array + from com.sun import jna + from com.sun.jna.platform import win32 + + buf_size = win32.WinDef.MAX_PATH * 2 + buf = array.zeros('c', buf_size) + shell = win32.Shell32.INSTANCE + shell.SHGetFolderPath(None, getattr(win32.ShlObj, csidl_name), None, win32.ShlObj.SHGFP_TYPE_CURRENT, buf) + dir = jna.Native.toString(buf.tostring()).rstrip("\0") + + # Downgrade to short path name if have highbit chars. See + # . + has_high_char = False + for c in dir: + if ord(c) > 255: + has_high_char = True + break + if has_high_char: + buf = array.zeros('c', buf_size) + kernel = win32.Kernel32.INSTANCE + if kernel.GetShortPathName(dir, buf, buf_size): + dir = jna.Native.toString(buf.tostring()).rstrip("\0") + + return dir + +if system == "win32": + try: + import win32com.shell + _get_win_folder = _get_win_folder_with_pywin32 + except ImportError: + try: + from ctypes import windll + _get_win_folder = _get_win_folder_with_ctypes + except ImportError: + try: + import com.sun.jna + _get_win_folder = _get_win_folder_with_jna + except ImportError: + _get_win_folder = _get_win_folder_from_registry + + +#---- self test code + +if __name__ == "__main__": + appname = "MyApp" + appauthor = "MyCompany" + + props = ("user_data_dir", + "user_config_dir", + "user_cache_dir", + "user_state_dir", + "user_log_dir", + "site_data_dir", + "site_config_dir") + + print("-- app dirs %s --" % __version__) + + print("-- app dirs (with optional 'version')") + dirs = AppDirs(appname, appauthor, version="1.0") + for prop in props: + print("%s: %s" % (prop, getattr(dirs, prop))) + + print("\n-- app dirs (without optional 'version')") + dirs = AppDirs(appname, appauthor) + for prop in props: + print("%s: %s" % (prop, getattr(dirs, prop))) + + print("\n-- app dirs (without optional 'appauthor')") + dirs = AppDirs(appname) + for prop in props: + print("%s: %s" % (prop, getattr(dirs, prop))) + + print("\n-- app dirs (with disabled 'appauthor')") + dirs = AppDirs(appname, appauthor=False) + for prop in props: + print("%s: %s" % (prop, getattr(dirs, prop))) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/attr/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/attr/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..bf329cad5c88580c3e1e89ed9196dd58bdf7fe3b --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/attr/__init__.py @@ -0,0 +1,76 @@ +from __future__ import absolute_import, division, print_function + +import sys + +from functools import partial + +from . 
import converters, exceptions, filters, setters, validators +from ._config import get_run_validators, set_run_validators +from ._funcs import asdict, assoc, astuple, evolve, has, resolve_types +from ._make import ( + NOTHING, + Attribute, + Factory, + attrib, + attrs, + fields, + fields_dict, + make_class, + validate, +) +from ._version_info import VersionInfo + + +__version__ = "20.3.0" +__version_info__ = VersionInfo._from_version_string(__version__) + +__title__ = "attrs" +__description__ = "Classes Without Boilerplate" +__url__ = "https://www.attrs.org/" +__uri__ = __url__ +__doc__ = __description__ + " <" + __uri__ + ">" + +__author__ = "Hynek Schlawack" +__email__ = "hs@ox.cx" + +__license__ = "MIT" +__copyright__ = "Copyright (c) 2015 Hynek Schlawack" + + +s = attributes = attrs +ib = attr = attrib +dataclass = partial(attrs, auto_attribs=True) # happy Easter ;) + +__all__ = [ + "Attribute", + "Factory", + "NOTHING", + "asdict", + "assoc", + "astuple", + "attr", + "attrib", + "attributes", + "attrs", + "converters", + "evolve", + "exceptions", + "fields", + "fields_dict", + "filters", + "get_run_validators", + "has", + "ib", + "make_class", + "resolve_types", + "s", + "set_run_validators", + "setters", + "validate", + "validators", +] + +if sys.version_info[:2] >= (3, 6): + from ._next_gen import define, field, frozen, mutable + + __all__.extend((define, field, frozen, mutable)) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/attr/__init__.pyi b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/attr/__init__.pyi new file mode 100644 index 0000000000000000000000000000000000000000..442d6e77fb840c8bbe5a95339d43ada698cd4eea --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/attr/__init__.pyi @@ -0,0 +1,433 @@ +from typing import ( + Any, + Callable, + Dict, + Generic, + List, + Optional, + Sequence, + Mapping, + Tuple, + Type, + TypeVar, + Union, + overload, +) + +# `import X as X` is required to make these public +from . import exceptions as exceptions +from . import filters as filters +from . import converters as converters +from . import setters as setters +from . import validators as validators + +from ._version_info import VersionInfo + +__version__: str +__version_info__: VersionInfo +__title__: str +__description__: str +__url__: str +__uri__: str +__author__: str +__email__: str +__license__: str +__copyright__: str + +_T = TypeVar("_T") +_C = TypeVar("_C", bound=type) + +_ValidatorType = Callable[[Any, Attribute[_T], _T], Any] +_ConverterType = Callable[[Any], Any] +_FilterType = Callable[[Attribute[_T], _T], bool] +_ReprType = Callable[[Any], str] +_ReprArgType = Union[bool, _ReprType] +_OnSetAttrType = Callable[[Any, Attribute[Any], Any], Any] +_OnSetAttrArgType = Union[ + _OnSetAttrType, List[_OnSetAttrType], setters._NoOpType +] +_FieldTransformer = Callable[[type, List[Attribute]], List[Attribute]] +# FIXME: in reality, if multiple validators are passed they must be in a list +# or tuple, but those are invariant and so would prevent subtypes of +# _ValidatorType from working when passed in a list or tuple. +_ValidatorArgType = Union[_ValidatorType[_T], Sequence[_ValidatorType[_T]]] + +# _make -- + +NOTHING: object + +# NOTE: Factory lies about its return type to make this possible: +# `x: List[int] # = Factory(list)` +# Work around mypy issue #4554 in the common case by using an overload. 
+@overload +def Factory(factory: Callable[[], _T]) -> _T: ... +@overload +def Factory( + factory: Union[Callable[[Any], _T], Callable[[], _T]], + takes_self: bool = ..., +) -> _T: ... + +class Attribute(Generic[_T]): + name: str + default: Optional[_T] + validator: Optional[_ValidatorType[_T]] + repr: _ReprArgType + cmp: bool + eq: bool + order: bool + hash: Optional[bool] + init: bool + converter: Optional[_ConverterType] + metadata: Dict[Any, Any] + type: Optional[Type[_T]] + kw_only: bool + on_setattr: _OnSetAttrType + +# NOTE: We had several choices for the annotation to use for type arg: +# 1) Type[_T] +# - Pros: Handles simple cases correctly +# - Cons: Might produce less informative errors in the case of conflicting +# TypeVars e.g. `attr.ib(default='bad', type=int)` +# 2) Callable[..., _T] +# - Pros: Better error messages than #1 for conflicting TypeVars +# - Cons: Terrible error messages for validator checks. +# e.g. attr.ib(type=int, validator=validate_str) +# -> error: Cannot infer function type argument +# 3) type (and do all of the work in the mypy plugin) +# - Pros: Simple here, and we could customize the plugin with our own errors. +# - Cons: Would need to write mypy plugin code to handle all the cases. +# We chose option #1. + +# `attr` lies about its return type to make the following possible: +# attr() -> Any +# attr(8) -> int +# attr(validator=) -> Whatever the callable expects. +# This makes this type of assignments possible: +# x: int = attr(8) +# +# This form catches explicit None or no default but with no other arguments +# returns Any. +@overload +def attrib( + default: None = ..., + validator: None = ..., + repr: _ReprArgType = ..., + cmp: Optional[bool] = ..., + hash: Optional[bool] = ..., + init: bool = ..., + metadata: Optional[Mapping[Any, Any]] = ..., + type: None = ..., + converter: None = ..., + factory: None = ..., + kw_only: bool = ..., + eq: Optional[bool] = ..., + order: Optional[bool] = ..., + on_setattr: Optional[_OnSetAttrArgType] = ..., +) -> Any: ... + +# This form catches an explicit None or no default and infers the type from the +# other arguments. +@overload +def attrib( + default: None = ..., + validator: Optional[_ValidatorArgType[_T]] = ..., + repr: _ReprArgType = ..., + cmp: Optional[bool] = ..., + hash: Optional[bool] = ..., + init: bool = ..., + metadata: Optional[Mapping[Any, Any]] = ..., + type: Optional[Type[_T]] = ..., + converter: Optional[_ConverterType] = ..., + factory: Optional[Callable[[], _T]] = ..., + kw_only: bool = ..., + eq: Optional[bool] = ..., + order: Optional[bool] = ..., + on_setattr: Optional[_OnSetAttrArgType] = ..., +) -> _T: ... + +# This form catches an explicit default argument. +@overload +def attrib( + default: _T, + validator: Optional[_ValidatorArgType[_T]] = ..., + repr: _ReprArgType = ..., + cmp: Optional[bool] = ..., + hash: Optional[bool] = ..., + init: bool = ..., + metadata: Optional[Mapping[Any, Any]] = ..., + type: Optional[Type[_T]] = ..., + converter: Optional[_ConverterType] = ..., + factory: Optional[Callable[[], _T]] = ..., + kw_only: bool = ..., + eq: Optional[bool] = ..., + order: Optional[bool] = ..., + on_setattr: Optional[_OnSetAttrArgType] = ..., +) -> _T: ... + +# This form covers type=non-Type: e.g. 
forward references (str), Any +@overload +def attrib( + default: Optional[_T] = ..., + validator: Optional[_ValidatorArgType[_T]] = ..., + repr: _ReprArgType = ..., + cmp: Optional[bool] = ..., + hash: Optional[bool] = ..., + init: bool = ..., + metadata: Optional[Mapping[Any, Any]] = ..., + type: object = ..., + converter: Optional[_ConverterType] = ..., + factory: Optional[Callable[[], _T]] = ..., + kw_only: bool = ..., + eq: Optional[bool] = ..., + order: Optional[bool] = ..., + on_setattr: Optional[_OnSetAttrArgType] = ..., +) -> Any: ... +@overload +def field( + *, + default: None = ..., + validator: None = ..., + repr: _ReprArgType = ..., + hash: Optional[bool] = ..., + init: bool = ..., + metadata: Optional[Mapping[Any, Any]] = ..., + converter: None = ..., + factory: None = ..., + kw_only: bool = ..., + eq: Optional[bool] = ..., + order: Optional[bool] = ..., + on_setattr: Optional[_OnSetAttrArgType] = ..., +) -> Any: ... + +# This form catches an explicit None or no default and infers the type from the +# other arguments. +@overload +def field( + *, + default: None = ..., + validator: Optional[_ValidatorArgType[_T]] = ..., + repr: _ReprArgType = ..., + hash: Optional[bool] = ..., + init: bool = ..., + metadata: Optional[Mapping[Any, Any]] = ..., + converter: Optional[_ConverterType] = ..., + factory: Optional[Callable[[], _T]] = ..., + kw_only: bool = ..., + eq: Optional[bool] = ..., + order: Optional[bool] = ..., + on_setattr: Optional[_OnSetAttrArgType] = ..., +) -> _T: ... + +# This form catches an explicit default argument. +@overload +def field( + *, + default: _T, + validator: Optional[_ValidatorArgType[_T]] = ..., + repr: _ReprArgType = ..., + hash: Optional[bool] = ..., + init: bool = ..., + metadata: Optional[Mapping[Any, Any]] = ..., + converter: Optional[_ConverterType] = ..., + factory: Optional[Callable[[], _T]] = ..., + kw_only: bool = ..., + eq: Optional[bool] = ..., + order: Optional[bool] = ..., + on_setattr: Optional[_OnSetAttrArgType] = ..., +) -> _T: ... + +# This form covers type=non-Type: e.g. forward references (str), Any +@overload +def field( + *, + default: Optional[_T] = ..., + validator: Optional[_ValidatorArgType[_T]] = ..., + repr: _ReprArgType = ..., + hash: Optional[bool] = ..., + init: bool = ..., + metadata: Optional[Mapping[Any, Any]] = ..., + converter: Optional[_ConverterType] = ..., + factory: Optional[Callable[[], _T]] = ..., + kw_only: bool = ..., + eq: Optional[bool] = ..., + order: Optional[bool] = ..., + on_setattr: Optional[_OnSetAttrArgType] = ..., +) -> Any: ... +@overload +def attrs( + maybe_cls: _C, + these: Optional[Dict[str, Any]] = ..., + repr_ns: Optional[str] = ..., + repr: bool = ..., + cmp: Optional[bool] = ..., + hash: Optional[bool] = ..., + init: bool = ..., + slots: bool = ..., + frozen: bool = ..., + weakref_slot: bool = ..., + str: bool = ..., + auto_attribs: bool = ..., + kw_only: bool = ..., + cache_hash: bool = ..., + auto_exc: bool = ..., + eq: Optional[bool] = ..., + order: Optional[bool] = ..., + auto_detect: bool = ..., + collect_by_mro: bool = ..., + getstate_setstate: Optional[bool] = ..., + on_setattr: Optional[_OnSetAttrArgType] = ..., + field_transformer: Optional[_FieldTransformer] = ..., +) -> _C: ... 
+@overload +def attrs( + maybe_cls: None = ..., + these: Optional[Dict[str, Any]] = ..., + repr_ns: Optional[str] = ..., + repr: bool = ..., + cmp: Optional[bool] = ..., + hash: Optional[bool] = ..., + init: bool = ..., + slots: bool = ..., + frozen: bool = ..., + weakref_slot: bool = ..., + str: bool = ..., + auto_attribs: bool = ..., + kw_only: bool = ..., + cache_hash: bool = ..., + auto_exc: bool = ..., + eq: Optional[bool] = ..., + order: Optional[bool] = ..., + auto_detect: bool = ..., + collect_by_mro: bool = ..., + getstate_setstate: Optional[bool] = ..., + on_setattr: Optional[_OnSetAttrArgType] = ..., + field_transformer: Optional[_FieldTransformer] = ..., +) -> Callable[[_C], _C]: ... +@overload +def define( + maybe_cls: _C, + *, + these: Optional[Dict[str, Any]] = ..., + repr: bool = ..., + hash: Optional[bool] = ..., + init: bool = ..., + slots: bool = ..., + frozen: bool = ..., + weakref_slot: bool = ..., + str: bool = ..., + auto_attribs: bool = ..., + kw_only: bool = ..., + cache_hash: bool = ..., + auto_exc: bool = ..., + eq: Optional[bool] = ..., + order: Optional[bool] = ..., + auto_detect: bool = ..., + getstate_setstate: Optional[bool] = ..., + on_setattr: Optional[_OnSetAttrArgType] = ..., + field_transformer: Optional[_FieldTransformer] = ..., +) -> _C: ... +@overload +def define( + maybe_cls: None = ..., + *, + these: Optional[Dict[str, Any]] = ..., + repr: bool = ..., + hash: Optional[bool] = ..., + init: bool = ..., + slots: bool = ..., + frozen: bool = ..., + weakref_slot: bool = ..., + str: bool = ..., + auto_attribs: bool = ..., + kw_only: bool = ..., + cache_hash: bool = ..., + auto_exc: bool = ..., + eq: Optional[bool] = ..., + order: Optional[bool] = ..., + auto_detect: bool = ..., + getstate_setstate: Optional[bool] = ..., + on_setattr: Optional[_OnSetAttrArgType] = ..., + field_transformer: Optional[_FieldTransformer] = ..., +) -> Callable[[_C], _C]: ... + +mutable = define +frozen = define # they differ only in their defaults + +# TODO: add support for returning NamedTuple from the mypy plugin +class _Fields(Tuple[Attribute[Any], ...]): + def __getattr__(self, name: str) -> Attribute[Any]: ... + +def fields(cls: type) -> _Fields: ... +def fields_dict(cls: type) -> Dict[str, Attribute[Any]]: ... +def validate(inst: Any) -> None: ... +def resolve_types( + cls: _C, + globalns: Optional[Dict[str, Any]] = ..., + localns: Optional[Dict[str, Any]] = ..., +) -> _C: ... + +# TODO: add support for returning a proper attrs class from the mypy plugin +# we use Any instead of _CountingAttr so that e.g. `make_class('Foo', +# [attr.ib()])` is valid +def make_class( + name: str, + attrs: Union[List[str], Tuple[str, ...], Dict[str, Any]], + bases: Tuple[type, ...] = ..., + repr_ns: Optional[str] = ..., + repr: bool = ..., + cmp: Optional[bool] = ..., + hash: Optional[bool] = ..., + init: bool = ..., + slots: bool = ..., + frozen: bool = ..., + weakref_slot: bool = ..., + str: bool = ..., + auto_attribs: bool = ..., + kw_only: bool = ..., + cache_hash: bool = ..., + auto_exc: bool = ..., + eq: Optional[bool] = ..., + order: Optional[bool] = ..., + collect_by_mro: bool = ..., + on_setattr: Optional[_OnSetAttrArgType] = ..., + field_transformer: Optional[_FieldTransformer] = ..., +) -> type: ... + +# _funcs -- + +# TODO: add support for returning TypedDict from the mypy plugin +# FIXME: asdict/astuple do not honor their factory args. 
Waiting on one of +# these: +# https://github.com/python/mypy/issues/4236 +# https://github.com/python/typing/issues/253 +def asdict( + inst: Any, + recurse: bool = ..., + filter: Optional[_FilterType[Any]] = ..., + dict_factory: Type[Mapping[Any, Any]] = ..., + retain_collection_types: bool = ..., + value_serializer: Optional[Callable[[type, Attribute, Any], Any]] = ..., +) -> Dict[str, Any]: ... + +# TODO: add support for returning NamedTuple from the mypy plugin +def astuple( + inst: Any, + recurse: bool = ..., + filter: Optional[_FilterType[Any]] = ..., + tuple_factory: Type[Sequence[Any]] = ..., + retain_collection_types: bool = ..., +) -> Tuple[Any, ...]: ... +def has(cls: type) -> bool: ... +def assoc(inst: _T, **changes: Any) -> _T: ... +def evolve(inst: _T, **changes: Any) -> _T: ... + +# _config -- + +def set_run_validators(run: bool) -> None: ... +def get_run_validators() -> bool: ... + +# aliases -- + +s = attributes = attrs +ib = attr = attrib +dataclass = attrs # Technically, partial(attrs, auto_attribs=True) ;) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/attr/_compat.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/attr/_compat.py new file mode 100644 index 0000000000000000000000000000000000000000..b0ead6e1c8d269f3af46910a1f9671d0f7f5b98a --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/attr/_compat.py @@ -0,0 +1,231 @@ +from __future__ import absolute_import, division, print_function + +import platform +import sys +import types +import warnings + + +PY2 = sys.version_info[0] == 2 +PYPY = platform.python_implementation() == "PyPy" + + +if PYPY or sys.version_info[:2] >= (3, 6): + ordered_dict = dict +else: + from collections import OrderedDict + + ordered_dict = OrderedDict + + +if PY2: + from collections import Mapping, Sequence + + from UserDict import IterableUserDict + + # We 'bundle' isclass instead of using inspect as importing inspect is + # fairly expensive (order of 10-15 ms for a modern machine in 2016) + def isclass(klass): + return isinstance(klass, (type, types.ClassType)) + + # TYPE is used in exceptions, repr(int) is different on Python 2 and 3. + TYPE = "type" + + def iteritems(d): + return d.iteritems() + + # Python 2 is bereft of a read-only dict proxy, so we make one! + class ReadOnlyDict(IterableUserDict): + """ + Best-effort read-only dict wrapper. + """ + + def __setitem__(self, key, val): + # We gently pretend we're a Python 3 mappingproxy. + raise TypeError( + "'mappingproxy' object does not support item assignment" + ) + + def update(self, _): + # We gently pretend we're a Python 3 mappingproxy. + raise AttributeError( + "'mappingproxy' object has no attribute 'update'" + ) + + def __delitem__(self, _): + # We gently pretend we're a Python 3 mappingproxy. + raise TypeError( + "'mappingproxy' object does not support item deletion" + ) + + def clear(self): + # We gently pretend we're a Python 3 mappingproxy. + raise AttributeError( + "'mappingproxy' object has no attribute 'clear'" + ) + + def pop(self, key, default=None): + # We gently pretend we're a Python 3 mappingproxy. + raise AttributeError( + "'mappingproxy' object has no attribute 'pop'" + ) + + def popitem(self): + # We gently pretend we're a Python 3 mappingproxy. + raise AttributeError( + "'mappingproxy' object has no attribute 'popitem'" + ) + + def setdefault(self, key, default=None): + # We gently pretend we're a Python 3 mappingproxy. 
+ raise AttributeError( + "'mappingproxy' object has no attribute 'setdefault'" + ) + + def __repr__(self): + # Override to be identical to the Python 3 version. + return "mappingproxy(" + repr(self.data) + ")" + + def metadata_proxy(d): + res = ReadOnlyDict() + res.data.update(d) # We blocked update, so we have to do it like this. + return res + + def just_warn(*args, **kw): # pragma: no cover + """ + We only warn on Python 3 because we are not aware of any concrete + consequences of not setting the cell on Python 2. + """ + + +else: # Python 3 and later. + from collections.abc import Mapping, Sequence # noqa + + def just_warn(*args, **kw): + """ + We only warn on Python 3 because we are not aware of any concrete + consequences of not setting the cell on Python 2. + """ + warnings.warn( + "Running interpreter doesn't sufficiently support code object " + "introspection. Some features like bare super() or accessing " + "__class__ will not work with slotted classes.", + RuntimeWarning, + stacklevel=2, + ) + + def isclass(klass): + return isinstance(klass, type) + + TYPE = "class" + + def iteritems(d): + return d.items() + + def metadata_proxy(d): + return types.MappingProxyType(dict(d)) + + +def make_set_closure_cell(): + """Return a function of two arguments (cell, value) which sets + the value stored in the closure cell `cell` to `value`. + """ + # pypy makes this easy. (It also supports the logic below, but + # why not do the easy/fast thing?) + if PYPY: + + def set_closure_cell(cell, value): + cell.__setstate__((value,)) + + return set_closure_cell + + # Otherwise gotta do it the hard way. + + # Create a function that will set its first cellvar to `value`. + def set_first_cellvar_to(value): + x = value + return + + # This function will be eliminated as dead code, but + # not before its reference to `x` forces `x` to be + # represented as a closure cell rather than a local. + def force_x_to_be_a_cell(): # pragma: no cover + return x + + try: + # Extract the code object and make sure our assumptions about + # the closure behavior are correct. + if PY2: + co = set_first_cellvar_to.func_code + else: + co = set_first_cellvar_to.__code__ + if co.co_cellvars != ("x",) or co.co_freevars != (): + raise AssertionError # pragma: no cover + + # Convert this code object to a code object that sets the + # function's first _freevar_ (not cellvar) to the argument. + if sys.version_info >= (3, 8): + # CPython 3.8+ has an incompatible CodeType signature + # (added a posonlyargcount argument) but also added + # CodeType.replace() to do this without counting parameters. + set_first_freevar_code = co.replace( + co_cellvars=co.co_freevars, co_freevars=co.co_cellvars + ) + else: + args = [co.co_argcount] + if not PY2: + args.append(co.co_kwonlyargcount) + args.extend( + [ + co.co_nlocals, + co.co_stacksize, + co.co_flags, + co.co_code, + co.co_consts, + co.co_names, + co.co_varnames, + co.co_filename, + co.co_name, + co.co_firstlineno, + co.co_lnotab, + # These two arguments are reversed: + co.co_cellvars, + co.co_freevars, + ] + ) + set_first_freevar_code = types.CodeType(*args) + + def set_closure_cell(cell, value): + # Create a function using the set_first_freevar_code, + # whose first closure cell is `cell`. Calling it will + # change the value of that cell. + setter = types.FunctionType( + set_first_freevar_code, {}, "setter", (), (cell,) + ) + # And call it to set the cell. 
+ setter(value) + + # Make sure it works on this interpreter: + def make_func_with_cell(): + x = None + + def func(): + return x # pragma: no cover + + return func + + if PY2: + cell = make_func_with_cell().func_closure[0] + else: + cell = make_func_with_cell().__closure__[0] + set_closure_cell(cell, 100) + if cell.cell_contents != 100: + raise AssertionError # pragma: no cover + + except Exception: + return just_warn + else: + return set_closure_cell + + +set_closure_cell = make_set_closure_cell() diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/attr/_config.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/attr/_config.py new file mode 100644 index 0000000000000000000000000000000000000000..8ec920962d1b401335927db837cc13cca1e99ae8 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/attr/_config.py @@ -0,0 +1,23 @@ +from __future__ import absolute_import, division, print_function + + +__all__ = ["set_run_validators", "get_run_validators"] + +_run_validators = True + + +def set_run_validators(run): + """ + Set whether or not validators are run. By default, they are run. + """ + if not isinstance(run, bool): + raise TypeError("'run' must be bool.") + global _run_validators + _run_validators = run + + +def get_run_validators(): + """ + Return whether or not validators are run. + """ + return _run_validators diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/attr/_funcs.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/attr/_funcs.py new file mode 100644 index 0000000000000000000000000000000000000000..e6c930cbd1711caa63014eae133439ad25f32272 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/attr/_funcs.py @@ -0,0 +1,390 @@ +from __future__ import absolute_import, division, print_function + +import copy + +from ._compat import iteritems +from ._make import NOTHING, _obj_setattr, fields +from .exceptions import AttrsAttributeNotFoundError + + +def asdict( + inst, + recurse=True, + filter=None, + dict_factory=dict, + retain_collection_types=False, + value_serializer=None, +): + """ + Return the ``attrs`` attribute values of *inst* as a dict. + + Optionally recurse into other ``attrs``-decorated classes. + + :param inst: Instance of an ``attrs``-decorated class. + :param bool recurse: Recurse into classes that are also + ``attrs``-decorated. + :param callable filter: A callable whose return code determines whether an + attribute or element is included (``True``) or dropped (``False``). Is + called with the `attr.Attribute` as the first argument and the + value as the second argument. + :param callable dict_factory: A callable to produce dictionaries from. For + example, to produce ordered dictionaries instead of normal Python + dictionaries, pass in ``collections.OrderedDict``. + :param bool retain_collection_types: Do not convert to ``list`` when + encountering an attribute whose type is ``tuple`` or ``set``. Only + meaningful if ``recurse`` is ``True``. + :param Optional[callable] value_serializer: A hook that is called for every + attribute or dict key/value. It receives the current instance, field + and value and must return the (updated) value. The hook is run *after* + the optional *filter* has been applied. + + :rtype: return type of *dict_factory* + + :raise attr.exceptions.NotAnAttrsClassError: If *cls* is not an ``attrs`` + class. 
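+
+    A small illustrative example (``Point`` and its fields are hypothetical,
+    not part of ``attrs`` itself)::
+
+        >>> import attr
+        >>> @attr.s
+        ... class Point(object):
+        ...     x = attr.ib()
+        ...     y = attr.ib()
+        >>> attr.asdict(Point(1, 2))
+        {'x': 1, 'y': 2}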
+ + .. versionadded:: 16.0.0 *dict_factory* + .. versionadded:: 16.1.0 *retain_collection_types* + .. versionadded:: 20.3.0 *value_serializer* + """ + attrs = fields(inst.__class__) + rv = dict_factory() + for a in attrs: + v = getattr(inst, a.name) + if filter is not None and not filter(a, v): + continue + + if value_serializer is not None: + v = value_serializer(inst, a, v) + + if recurse is True: + if has(v.__class__): + rv[a.name] = asdict( + v, + True, + filter, + dict_factory, + retain_collection_types, + value_serializer, + ) + elif isinstance(v, (tuple, list, set, frozenset)): + cf = v.__class__ if retain_collection_types is True else list + rv[a.name] = cf( + [ + _asdict_anything( + i, + filter, + dict_factory, + retain_collection_types, + value_serializer, + ) + for i in v + ] + ) + elif isinstance(v, dict): + df = dict_factory + rv[a.name] = df( + ( + _asdict_anything( + kk, + filter, + df, + retain_collection_types, + value_serializer, + ), + _asdict_anything( + vv, + filter, + df, + retain_collection_types, + value_serializer, + ), + ) + for kk, vv in iteritems(v) + ) + else: + rv[a.name] = v + else: + rv[a.name] = v + return rv + + +def _asdict_anything( + val, + filter, + dict_factory, + retain_collection_types, + value_serializer, +): + """ + ``asdict`` only works on attrs instances, this works on anything. + """ + if getattr(val.__class__, "__attrs_attrs__", None) is not None: + # Attrs class. + rv = asdict( + val, + True, + filter, + dict_factory, + retain_collection_types, + value_serializer, + ) + elif isinstance(val, (tuple, list, set, frozenset)): + cf = val.__class__ if retain_collection_types is True else list + rv = cf( + [ + _asdict_anything( + i, + filter, + dict_factory, + retain_collection_types, + value_serializer, + ) + for i in val + ] + ) + elif isinstance(val, dict): + df = dict_factory + rv = df( + ( + _asdict_anything( + kk, filter, df, retain_collection_types, value_serializer + ), + _asdict_anything( + vv, filter, df, retain_collection_types, value_serializer + ), + ) + for kk, vv in iteritems(val) + ) + else: + rv = val + if value_serializer is not None: + rv = value_serializer(None, None, rv) + + return rv + + +def astuple( + inst, + recurse=True, + filter=None, + tuple_factory=tuple, + retain_collection_types=False, +): + """ + Return the ``attrs`` attribute values of *inst* as a tuple. + + Optionally recurse into other ``attrs``-decorated classes. + + :param inst: Instance of an ``attrs``-decorated class. + :param bool recurse: Recurse into classes that are also + ``attrs``-decorated. + :param callable filter: A callable whose return code determines whether an + attribute or element is included (``True``) or dropped (``False``). Is + called with the `attr.Attribute` as the first argument and the + value as the second argument. + :param callable tuple_factory: A callable to produce tuples from. For + example, to produce lists instead of tuples. + :param bool retain_collection_types: Do not convert to ``list`` + or ``dict`` when encountering an attribute which type is + ``tuple``, ``dict`` or ``set``. Only meaningful if ``recurse`` is + ``True``. + + :rtype: return type of *tuple_factory* + + :raise attr.exceptions.NotAnAttrsClassError: If *cls* is not an ``attrs`` + class. + + .. versionadded:: 16.2.0 + """ + attrs = fields(inst.__class__) + rv = [] + retain = retain_collection_types # Very long. 
:/ + for a in attrs: + v = getattr(inst, a.name) + if filter is not None and not filter(a, v): + continue + if recurse is True: + if has(v.__class__): + rv.append( + astuple( + v, + recurse=True, + filter=filter, + tuple_factory=tuple_factory, + retain_collection_types=retain, + ) + ) + elif isinstance(v, (tuple, list, set, frozenset)): + cf = v.__class__ if retain is True else list + rv.append( + cf( + [ + astuple( + j, + recurse=True, + filter=filter, + tuple_factory=tuple_factory, + retain_collection_types=retain, + ) + if has(j.__class__) + else j + for j in v + ] + ) + ) + elif isinstance(v, dict): + df = v.__class__ if retain is True else dict + rv.append( + df( + ( + astuple( + kk, + tuple_factory=tuple_factory, + retain_collection_types=retain, + ) + if has(kk.__class__) + else kk, + astuple( + vv, + tuple_factory=tuple_factory, + retain_collection_types=retain, + ) + if has(vv.__class__) + else vv, + ) + for kk, vv in iteritems(v) + ) + ) + else: + rv.append(v) + else: + rv.append(v) + + return rv if tuple_factory is list else tuple_factory(rv) + + +def has(cls): + """ + Check whether *cls* is a class with ``attrs`` attributes. + + :param type cls: Class to introspect. + :raise TypeError: If *cls* is not a class. + + :rtype: bool + """ + return getattr(cls, "__attrs_attrs__", None) is not None + + +def assoc(inst, **changes): + """ + Copy *inst* and apply *changes*. + + :param inst: Instance of a class with ``attrs`` attributes. + :param changes: Keyword changes in the new copy. + + :return: A copy of inst with *changes* incorporated. + + :raise attr.exceptions.AttrsAttributeNotFoundError: If *attr_name* couldn't + be found on *cls*. + :raise attr.exceptions.NotAnAttrsClassError: If *cls* is not an ``attrs`` + class. + + .. deprecated:: 17.1.0 + Use `evolve` instead. + """ + import warnings + + warnings.warn( + "assoc is deprecated and will be removed after 2018/01.", + DeprecationWarning, + stacklevel=2, + ) + new = copy.copy(inst) + attrs = fields(inst.__class__) + for k, v in iteritems(changes): + a = getattr(attrs, k, NOTHING) + if a is NOTHING: + raise AttrsAttributeNotFoundError( + "{k} is not an attrs attribute on {cl}.".format( + k=k, cl=new.__class__ + ) + ) + _obj_setattr(new, k, v) + return new + + +def evolve(inst, **changes): + """ + Create a new instance, based on *inst* with *changes* applied. + + :param inst: Instance of a class with ``attrs`` attributes. + :param changes: Keyword changes in the new copy. + + :return: A copy of inst with *changes* incorporated. + + :raise TypeError: If *attr_name* couldn't be found in the class + ``__init__``. + :raise attr.exceptions.NotAnAttrsClassError: If *cls* is not an ``attrs`` + class. + + .. versionadded:: 17.1.0 + """ + cls = inst.__class__ + attrs = fields(cls) + for a in attrs: + if not a.init: + continue + attr_name = a.name # To deal with private attributes. + init_name = attr_name if attr_name[0] != "_" else attr_name[1:] + if init_name not in changes: + changes[init_name] = getattr(inst, attr_name) + + return cls(**changes) + + +def resolve_types(cls, globalns=None, localns=None): + """ + Resolve any strings and forward annotations in type annotations. + + This is only required if you need concrete types in `Attribute`'s *type* + field. In other words, you don't need to resolve your types if you only + use them for static type checking. + + With no arguments, names will be looked up in the module in which the class + was created. If this is not what you want, e.g. 
if the name only exists + inside a method, you may pass *globalns* or *localns* to specify other + dictionaries in which to look up these names. See the docs of + `typing.get_type_hints` for more details. + + :param type cls: Class to resolve. + :param Optional[dict] globalns: Dictionary containing global variables. + :param Optional[dict] localns: Dictionary containing local variables. + + :raise TypeError: If *cls* is not a class. + :raise attr.exceptions.NotAnAttrsClassError: If *cls* is not an ``attrs`` + class. + :raise NameError: If types cannot be resolved because of missing variables. + + :returns: *cls* so you can use this function also as a class decorator. + Please note that you have to apply it **after** `attr.s`. That means + the decorator has to come in the line **before** `attr.s`. + + .. versionadded:: 20.1.0 + """ + try: + # Since calling get_type_hints is expensive we cache whether we've + # done it already. + cls.__attrs_types_resolved__ + except AttributeError: + import typing + + hints = typing.get_type_hints(cls, globalns=globalns, localns=localns) + for field in fields(cls): + if field.name in hints: + # Since fields have been frozen we must work around it. + _obj_setattr(field, "type", hints[field.name]) + cls.__attrs_types_resolved__ = True + + # Return the class so you can use it as a decorator too. + return cls diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/attr/_make.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/attr/_make.py new file mode 100644 index 0000000000000000000000000000000000000000..49484f935f6929e2d070cc1149ecdd70c4f6a83e --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/attr/_make.py @@ -0,0 +1,2765 @@ +from __future__ import absolute_import, division, print_function + +import copy +import linecache +import sys +import threading +import uuid +import warnings + +from operator import itemgetter + +from . import _config, setters +from ._compat import ( + PY2, + PYPY, + isclass, + iteritems, + metadata_proxy, + ordered_dict, + set_closure_cell, +) +from .exceptions import ( + DefaultAlreadySetError, + FrozenInstanceError, + NotAnAttrsClassError, + PythonTooOldError, + UnannotatedAttributeError, +) + + +# This is used at least twice, so cache it here. +_obj_setattr = object.__setattr__ +_init_converter_pat = "__attr_converter_%s" +_init_factory_pat = "__attr_factory_{}" +_tuple_property_pat = ( + " {attr_name} = _attrs_property(_attrs_itemgetter({index}))" +) +_classvar_prefixes = ("typing.ClassVar", "t.ClassVar", "ClassVar") +# we don't use a double-underscore prefix because that triggers +# name mangling when trying to create a slot for the field +# (when slots=True) +_hash_cache_field = "_attrs_cached_hash" + +_empty_metadata_singleton = metadata_proxy({}) + +# Unique object for unequivocal getattr() defaults. +_sentinel = object() + + +class _Nothing(object): + """ + Sentinel class to indicate the lack of a value when ``None`` is ambiguous. + + ``_Nothing`` is a singleton. There is only ever one of it. + """ + + _singleton = None + + def __new__(cls): + if _Nothing._singleton is None: + _Nothing._singleton = super(_Nothing, cls).__new__(cls) + return _Nothing._singleton + + def __repr__(self): + return "NOTHING" + + +NOTHING = _Nothing() +""" +Sentinel to indicate the lack of a value when ``None`` is ambiguous. 
+""" + + +class _CacheHashWrapper(int): + """ + An integer subclass that pickles / copies as None + + This is used for non-slots classes with ``cache_hash=True``, to avoid + serializing a potentially (even likely) invalid hash value. Since ``None`` + is the default value for uncalculated hashes, whenever this is copied, + the copy's value for the hash should automatically reset. + + See GH #613 for more details. + """ + + if PY2: + # For some reason `type(None)` isn't callable in Python 2, but we don't + # actually need a constructor for None objects, we just need any + # available function that returns None. + def __reduce__(self, _none_constructor=getattr, _args=(0, "", None)): + return _none_constructor, _args + + else: + + def __reduce__(self, _none_constructor=type(None), _args=()): + return _none_constructor, _args + + +def attrib( + default=NOTHING, + validator=None, + repr=True, + cmp=None, + hash=None, + init=True, + metadata=None, + type=None, + converter=None, + factory=None, + kw_only=False, + eq=None, + order=None, + on_setattr=None, +): + """ + Create a new attribute on a class. + + .. warning:: + + Does *not* do anything unless the class is also decorated with + `attr.s`! + + :param default: A value that is used if an ``attrs``-generated ``__init__`` + is used and no value is passed while instantiating or the attribute is + excluded using ``init=False``. + + If the value is an instance of `Factory`, its callable will be + used to construct a new value (useful for mutable data types like lists + or dicts). + + If a default is not set (or set manually to `attr.NOTHING`), a value + *must* be supplied when instantiating; otherwise a `TypeError` + will be raised. + + The default can also be set using decorator notation as shown below. + + :type default: Any value + + :param callable factory: Syntactic sugar for + ``default=attr.Factory(factory)``. + + :param validator: `callable` that is called by ``attrs``-generated + ``__init__`` methods after the instance has been initialized. They + receive the initialized instance, the `Attribute`, and the + passed value. + + The return value is *not* inspected so the validator has to throw an + exception itself. + + If a `list` is passed, its items are treated as validators and must + all pass. + + Validators can be globally disabled and re-enabled using + `get_run_validators`. + + The validator can also be set using decorator notation as shown below. + + :type validator: `callable` or a `list` of `callable`\\ s. + + :param repr: Include this attribute in the generated ``__repr__`` + method. If ``True``, include the attribute; if ``False``, omit it. By + default, the built-in ``repr()`` function is used. To override how the + attribute value is formatted, pass a ``callable`` that takes a single + value and returns a string. Note that the resulting string is used + as-is, i.e. it will be used directly *instead* of calling ``repr()`` + (the default). + :type repr: a `bool` or a `callable` to use a custom function. + :param bool eq: If ``True`` (default), include this attribute in the + generated ``__eq__`` and ``__ne__`` methods that check two instances + for equality. + :param bool order: If ``True`` (default), include this attributes in the + generated ``__lt__``, ``__le__``, ``__gt__`` and ``__ge__`` methods. + :param bool cmp: Setting to ``True`` is equivalent to setting ``eq=True, + order=True``. Deprecated in favor of *eq* and *order*. + :param Optional[bool] hash: Include this attribute in the generated + ``__hash__`` method. 
If ``None`` (default), mirror *eq*'s value. This
+ is the correct behavior according to the Python spec. Setting this value
+ to anything other than ``None`` is *discouraged*.
+ :param bool init: Include this attribute in the generated ``__init__``
+ method. It is possible to set this to ``False`` and set a default
+ value. In that case this attribute is unconditionally initialized
+ with the specified default value or factory.
+ :param callable converter: `callable` that is called by
+ ``attrs``-generated ``__init__`` methods to convert attribute's value
+ to the desired format. It is given the passed-in value, and the
+ returned value will be used as the new value of the attribute. The
+ value is converted before being passed to the validator, if any.
+ :param metadata: An arbitrary mapping, to be used by third-party
+ components. See `extending_metadata`.
+ :param type: The type of the attribute. In Python 3.6 or greater, the
+ preferred method to specify the type is using a variable annotation
+ (see `PEP 526 <https://www.python.org/dev/peps/pep-0526/>`_).
+ This argument is provided for backward compatibility.
+ Regardless of the approach used, the type will be stored on
+ ``Attribute.type``.
+
+ Please note that ``attrs`` doesn't do anything with this metadata by
+ itself. You can use it as part of your own code or for
+ static type checking.
+ :param kw_only: Make this attribute keyword-only (Python 3+)
+ in the generated ``__init__`` (if ``init`` is ``False``, this
+ parameter is ignored).
+ :param on_setattr: Allows overwriting the *on_setattr* setting from
+ `attr.s`. If left `None`, the *on_setattr* value from `attr.s` is used.
+ Set to `attr.setters.NO_OP` to run **no** `setattr` hooks for this
+ attribute -- regardless of the setting in `attr.s`.
+ :type on_setattr: `callable`, or a list of callables, or `None`, or
+ `attr.setters.NO_OP`
+
+ .. versionadded:: 15.2.0 *convert*
+ .. versionadded:: 16.3.0 *metadata*
+ .. versionchanged:: 17.1.0 *validator* can be a ``list`` now.
+ .. versionchanged:: 17.1.0
+ *hash* is ``None`` and therefore mirrors *eq* by default.
+ .. versionadded:: 17.3.0 *type*
+ .. deprecated:: 17.4.0 *convert*
+ .. versionadded:: 17.4.0 *converter* as a replacement for the deprecated
+ *convert* to achieve consistency with other noun-based arguments.
+ .. versionadded:: 18.1.0
+ ``factory=f`` is syntactic sugar for ``default=attr.Factory(f)``.
+ .. versionadded:: 18.2.0 *kw_only*
+ .. versionchanged:: 19.2.0 *convert* keyword argument removed
+ .. versionchanged:: 19.2.0 *repr* also accepts a custom callable.
+ .. deprecated:: 19.2.0 *cmp* Removal on or after 2021-06-01.
+ .. versionadded:: 19.2.0 *eq* and *order*
+ .. versionadded:: 20.1.0 *on_setattr*
+ .. versionchanged:: 20.3.0 *kw_only* backported to Python 2
+ """
+ eq, order = _determine_eq_order(cmp, eq, order, True)
+
+ if hash is not None and hash is not True and hash is not False:
+ raise TypeError(
+ "Invalid value for hash. Must be True, False, or None."
+ )
+
+ if factory is not None:
+ if default is not NOTHING:
+ raise ValueError(
+ "The `default` and `factory` arguments are mutually "
+ "exclusive."
+ )
+ if not callable(factory):
+ raise ValueError("The `factory` argument must be a callable.")
+ default = Factory(factory)
+
+ if metadata is None:
+ metadata = {}
+
+ # Apply syntactic sugar by auto-wrapping.
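+ # Editorial sketch (illustrative, not upstream code): the auto-wrapping
+ # below makes these attrib() calls equivalent, assuming hypothetical
+ # validators v1/v2 and converters c1/c2:
+ #
+ # x = attrib(validator=[v1, v2]) # == attrib(validator=and_(v1, v2))
+ # y = attrib(converter=[c1, c2]) # == attrib(converter=pipe(c1, c2)),
+ # # i.e. the value runs through c1 first, then c2.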
+ if isinstance(on_setattr, (list, tuple)): + on_setattr = setters.pipe(*on_setattr) + + if validator and isinstance(validator, (list, tuple)): + validator = and_(*validator) + + if converter and isinstance(converter, (list, tuple)): + converter = pipe(*converter) + + return _CountingAttr( + default=default, + validator=validator, + repr=repr, + cmp=None, + hash=hash, + init=init, + converter=converter, + metadata=metadata, + type=type, + kw_only=kw_only, + eq=eq, + order=order, + on_setattr=on_setattr, + ) + + +def _make_attr_tuple_class(cls_name, attr_names): + """ + Create a tuple subclass to hold `Attribute`s for an `attrs` class. + + The subclass is a bare tuple with properties for names. + + class MyClassAttributes(tuple): + __slots__ = () + x = property(itemgetter(0)) + """ + attr_class_name = "{}Attributes".format(cls_name) + attr_class_template = [ + "class {}(tuple):".format(attr_class_name), + " __slots__ = ()", + ] + if attr_names: + for i, attr_name in enumerate(attr_names): + attr_class_template.append( + _tuple_property_pat.format(index=i, attr_name=attr_name) + ) + else: + attr_class_template.append(" pass") + globs = {"_attrs_itemgetter": itemgetter, "_attrs_property": property} + eval(compile("\n".join(attr_class_template), "", "exec"), globs) + + return globs[attr_class_name] + + +# Tuple class for extracted attributes from a class definition. +# `base_attrs` is a subset of `attrs`. +_Attributes = _make_attr_tuple_class( + "_Attributes", + [ + # all attributes to build dunder methods for + "attrs", + # attributes that have been inherited + "base_attrs", + # map inherited attributes to their originating classes + "base_attrs_map", + ], +) + + +def _is_class_var(annot): + """ + Check whether *annot* is a typing.ClassVar. + + The string comparison hack is used to avoid evaluating all string + annotations which would put attrs-based classes at a performance + disadvantage compared to plain old classes. + """ + return str(annot).startswith(_classvar_prefixes) + + +def _has_own_attribute(cls, attrib_name): + """ + Check whether *cls* defines *attrib_name* (and doesn't just inherit it). + + Requires Python 3. + """ + attr = getattr(cls, attrib_name, _sentinel) + if attr is _sentinel: + return False + + for base_cls in cls.__mro__[1:]: + a = getattr(base_cls, attrib_name, None) + if attr is a: + return False + + return True + + +def _get_annotations(cls): + """ + Get annotations for *cls*. + """ + if _has_own_attribute(cls, "__annotations__"): + return cls.__annotations__ + + return {} + + +def _counter_getter(e): + """ + Key function for sorting to avoid re-creating a lambda for every class. + """ + return e[1].counter + + +def _collect_base_attrs(cls, taken_attr_names): + """ + Collect attr.ibs from base classes of *cls*, except *taken_attr_names*. + """ + base_attrs = [] + base_attr_map = {} # A dictionary of base attrs to their classes. + + # Traverse the MRO and collect attributes. + for base_cls in reversed(cls.__mro__[1:-1]): + for a in getattr(base_cls, "__attrs_attrs__", []): + if a.inherited or a.name in taken_attr_names: + continue + + a = a.evolve(inherited=True) + base_attrs.append(a) + base_attr_map[a.name] = base_cls + + # For each name, only keep the freshest definition i.e. the furthest at the + # back. base_attr_map is fine because it gets overwritten with every new + # instance. 
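+ # Editorial example (illustrative only, not upstream code): given
+ #
+ # @attr.s
+ # class A(object):
+ # x = attr.ib()
+ # @attr.s
+ # class B(A):
+ # x = attr.ib()
+ # @attr.s
+ # class C(B):
+ # pass
+ #
+ # reversed(C.__mro__[1:-1]) visits A then B, so B's `x` lands last in
+ # base_attrs, and the reversed de-duplication below keeps exactly that
+ # definition -- matching normal Python attribute shadowing.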
+ filtered = [] + seen = set() + for a in reversed(base_attrs): + if a.name in seen: + continue + filtered.insert(0, a) + seen.add(a.name) + + return filtered, base_attr_map + + +def _collect_base_attrs_broken(cls, taken_attr_names): + """ + Collect attr.ibs from base classes of *cls*, except *taken_attr_names*. + + N.B. *taken_attr_names* will be mutated. + + Adhere to the old incorrect behavior. + + Notably it collects from the front and considers inherited attributes which + leads to the buggy behavior reported in #428. + """ + base_attrs = [] + base_attr_map = {} # A dictionary of base attrs to their classes. + + # Traverse the MRO and collect attributes. + for base_cls in cls.__mro__[1:-1]: + for a in getattr(base_cls, "__attrs_attrs__", []): + if a.name in taken_attr_names: + continue + + a = a.evolve(inherited=True) + taken_attr_names.add(a.name) + base_attrs.append(a) + base_attr_map[a.name] = base_cls + + return base_attrs, base_attr_map + + +def _transform_attrs( + cls, these, auto_attribs, kw_only, collect_by_mro, field_transformer +): + """ + Transform all `_CountingAttr`s on a class into `Attribute`s. + + If *these* is passed, use that and don't look for them on the class. + + *collect_by_mro* is True, collect them in the correct MRO order, otherwise + use the old -- incorrect -- order. See #428. + + Return an `_Attributes`. + """ + cd = cls.__dict__ + anns = _get_annotations(cls) + + if these is not None: + ca_list = [(name, ca) for name, ca in iteritems(these)] + + if not isinstance(these, ordered_dict): + ca_list.sort(key=_counter_getter) + elif auto_attribs is True: + ca_names = { + name + for name, attr in cd.items() + if isinstance(attr, _CountingAttr) + } + ca_list = [] + annot_names = set() + for attr_name, type in anns.items(): + if _is_class_var(type): + continue + annot_names.add(attr_name) + a = cd.get(attr_name, NOTHING) + + if not isinstance(a, _CountingAttr): + if a is NOTHING: + a = attrib() + else: + a = attrib(default=a) + ca_list.append((attr_name, a)) + + unannotated = ca_names - annot_names + if len(unannotated) > 0: + raise UnannotatedAttributeError( + "The following `attr.ib`s lack a type annotation: " + + ", ".join( + sorted(unannotated, key=lambda n: cd.get(n).counter) + ) + + "." + ) + else: + ca_list = sorted( + ( + (name, attr) + for name, attr in cd.items() + if isinstance(attr, _CountingAttr) + ), + key=lambda e: e[1].counter, + ) + + own_attrs = [ + Attribute.from_counting_attr( + name=attr_name, ca=ca, type=anns.get(attr_name) + ) + for attr_name, ca in ca_list + ] + + if collect_by_mro: + base_attrs, base_attr_map = _collect_base_attrs( + cls, {a.name for a in own_attrs} + ) + else: + base_attrs, base_attr_map = _collect_base_attrs_broken( + cls, {a.name for a in own_attrs} + ) + + attr_names = [a.name for a in base_attrs + own_attrs] + + AttrsClass = _make_attr_tuple_class(cls.__name__, attr_names) + + if kw_only: + own_attrs = [a.evolve(kw_only=True) for a in own_attrs] + base_attrs = [a.evolve(kw_only=True) for a in base_attrs] + + attrs = AttrsClass(base_attrs + own_attrs) + + # Mandatory vs non-mandatory attr order only matters when they are part of + # the __init__ signature and when they aren't kw_only (which are moved to + # the end and can be mandatory or non-mandatory in any order, as they will + # be specified as keyword args anyway). 
Check the order of those attrs: + had_default = False + for a in (a for a in attrs if a.init is not False and a.kw_only is False): + if had_default is True and a.default is NOTHING: + raise ValueError( + "No mandatory attributes allowed after an attribute with a " + "default value or factory. Attribute in question: %r" % (a,) + ) + + if had_default is False and a.default is not NOTHING: + had_default = True + + if field_transformer is not None: + attrs = field_transformer(cls, attrs) + return _Attributes((attrs, base_attrs, base_attr_map)) + + +if PYPY: + + def _frozen_setattrs(self, name, value): + """ + Attached to frozen classes as __setattr__. + """ + if isinstance(self, BaseException) and name in ( + "__cause__", + "__context__", + ): + BaseException.__setattr__(self, name, value) + return + + raise FrozenInstanceError() + + +else: + + def _frozen_setattrs(self, name, value): + """ + Attached to frozen classes as __setattr__. + """ + raise FrozenInstanceError() + + +def _frozen_delattrs(self, name): + """ + Attached to frozen classes as __delattr__. + """ + raise FrozenInstanceError() + + +class _ClassBuilder(object): + """ + Iteratively build *one* class. + """ + + __slots__ = ( + "_attr_names", + "_attrs", + "_base_attr_map", + "_base_names", + "_cache_hash", + "_cls", + "_cls_dict", + "_delete_attribs", + "_frozen", + "_has_post_init", + "_is_exc", + "_on_setattr", + "_slots", + "_weakref_slot", + "_has_own_setattr", + "_has_custom_setattr", + ) + + def __init__( + self, + cls, + these, + slots, + frozen, + weakref_slot, + getstate_setstate, + auto_attribs, + kw_only, + cache_hash, + is_exc, + collect_by_mro, + on_setattr, + has_custom_setattr, + field_transformer, + ): + attrs, base_attrs, base_map = _transform_attrs( + cls, + these, + auto_attribs, + kw_only, + collect_by_mro, + field_transformer, + ) + + self._cls = cls + self._cls_dict = dict(cls.__dict__) if slots else {} + self._attrs = attrs + self._base_names = set(a.name for a in base_attrs) + self._base_attr_map = base_map + self._attr_names = tuple(a.name for a in attrs) + self._slots = slots + self._frozen = frozen + self._weakref_slot = weakref_slot + self._cache_hash = cache_hash + self._has_post_init = bool(getattr(cls, "__attrs_post_init__", False)) + self._delete_attribs = not bool(these) + self._is_exc = is_exc + self._on_setattr = on_setattr + + self._has_custom_setattr = has_custom_setattr + self._has_own_setattr = False + + self._cls_dict["__attrs_attrs__"] = self._attrs + + if frozen: + self._cls_dict["__setattr__"] = _frozen_setattrs + self._cls_dict["__delattr__"] = _frozen_delattrs + + self._has_own_setattr = True + + if getstate_setstate: + ( + self._cls_dict["__getstate__"], + self._cls_dict["__setstate__"], + ) = self._make_getstate_setstate() + + def __repr__(self): + return "<_ClassBuilder(cls={cls})>".format(cls=self._cls.__name__) + + def build_class(self): + """ + Finalize class based on the accumulated configuration. + + Builder cannot be used after calling this method. + """ + if self._slots is True: + return self._create_slots_class() + else: + return self._patch_original_class() + + def _patch_original_class(self): + """ + Apply accumulated methods and return the class. + """ + cls = self._cls + base_names = self._base_names + + # Clean class of attribute definitions (`attr.ib()`s). 
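+ # Editorial note (illustrative, not upstream): for a plain dict class
+ #
+ # @attr.s
+ # class C(object):
+ # x = attr.ib()
+ #
+ # the `_CountingAttr` placeholder stored at C.x is deleted below so the
+ # class attribute does not shadow the instance attribute set by the
+ # generated __init__; slotted classes take the _create_slots_class()
+ # path instead.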
+ if self._delete_attribs: + for name in self._attr_names: + if ( + name not in base_names + and getattr(cls, name, _sentinel) is not _sentinel + ): + try: + delattr(cls, name) + except AttributeError: + # This can happen if a base class defines a class + # variable and we want to set an attribute with the + # same name by using only a type annotation. + pass + + # Attach our dunder methods. + for name, value in self._cls_dict.items(): + setattr(cls, name, value) + + # If we've inherited an attrs __setattr__ and don't write our own, + # reset it to object's. + if not self._has_own_setattr and getattr( + cls, "__attrs_own_setattr__", False + ): + cls.__attrs_own_setattr__ = False + + if not self._has_custom_setattr: + cls.__setattr__ = object.__setattr__ + + return cls + + def _create_slots_class(self): + """ + Build and return a new class with a `__slots__` attribute. + """ + base_names = self._base_names + cd = { + k: v + for k, v in iteritems(self._cls_dict) + if k not in tuple(self._attr_names) + ("__dict__", "__weakref__") + } + + # If our class doesn't have its own implementation of __setattr__ + # (either from the user or by us), check the bases, if one of them has + # an attrs-made __setattr__, that needs to be reset. We don't walk the + # MRO because we only care about our immediate base classes. + # XXX: This can be confused by subclassing a slotted attrs class with + # XXX: a non-attrs class and subclass the resulting class with an attrs + # XXX: class. See `test_slotted_confused` for details. For now that's + # XXX: OK with us. + if not self._has_own_setattr: + cd["__attrs_own_setattr__"] = False + + if not self._has_custom_setattr: + for base_cls in self._cls.__bases__: + if base_cls.__dict__.get("__attrs_own_setattr__", False): + cd["__setattr__"] = object.__setattr__ + break + + # Traverse the MRO to check for an existing __weakref__. + weakref_inherited = False + for base_cls in self._cls.__mro__[1:-1]: + if base_cls.__dict__.get("__weakref__", None) is not None: + weakref_inherited = True + break + + names = self._attr_names + if ( + self._weakref_slot + and "__weakref__" not in getattr(self._cls, "__slots__", ()) + and "__weakref__" not in names + and not weakref_inherited + ): + names += ("__weakref__",) + + # We only add the names of attributes that aren't inherited. + # Setting __slots__ to inherited attributes wastes memory. + slot_names = [name for name in names if name not in base_names] + if self._cache_hash: + slot_names.append(_hash_cache_field) + cd["__slots__"] = tuple(slot_names) + + qualname = getattr(self._cls, "__qualname__", None) + if qualname is not None: + cd["__qualname__"] = qualname + + # Create new class based on old class and our methods. + cls = type(self._cls)(self._cls.__name__, self._cls.__bases__, cd) + + # The following is a fix for + # https://github.com/python-attrs/attrs/issues/102. On Python 3, + # if a method mentions `__class__` or uses the no-arg super(), the + # compiler will bake a reference to the class in the method itself + # as `method.__closure__`. Since we replace the class with a + # clone, we rewrite these references so it keeps working. + for item in cls.__dict__.values(): + if isinstance(item, (classmethod, staticmethod)): + # Class- and staticmethods hide their functions inside. + # These might need to be rewritten as well. + closure_cells = getattr(item.__func__, "__closure__", None) + else: + closure_cells = getattr(item, "__closure__", None) + + if not closure_cells: # Catch None or the empty list. 
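+ # Editorial note (illustrative, not upstream): this cell-rewriting
+ # pass is what keeps zero-argument super() and __class__ working on
+ # the freshly created slotted clone, e.g. (hypothetical):
+ #
+ # @attr.s(slots=True)
+ # class C(Base):
+ # def f(self):
+ # return super().f() # super() reads the __class__ cell
+ #
+ # Without set_closure_cell below, that cell would still point at the
+ # original, pre-slots class object.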
+ continue + for cell in closure_cells: + try: + match = cell.cell_contents is self._cls + except ValueError: # ValueError: Cell is empty + pass + else: + if match: + set_closure_cell(cell, cls) + + return cls + + def add_repr(self, ns): + self._cls_dict["__repr__"] = self._add_method_dunders( + _make_repr(self._attrs, ns=ns) + ) + return self + + def add_str(self): + repr = self._cls_dict.get("__repr__") + if repr is None: + raise ValueError( + "__str__ can only be generated if a __repr__ exists." + ) + + def __str__(self): + return self.__repr__() + + self._cls_dict["__str__"] = self._add_method_dunders(__str__) + return self + + def _make_getstate_setstate(self): + """ + Create custom __setstate__ and __getstate__ methods. + """ + # __weakref__ is not writable. + state_attr_names = tuple( + an for an in self._attr_names if an != "__weakref__" + ) + + def slots_getstate(self): + """ + Automatically created by attrs. + """ + return tuple(getattr(self, name) for name in state_attr_names) + + hash_caching_enabled = self._cache_hash + + def slots_setstate(self, state): + """ + Automatically created by attrs. + """ + __bound_setattr = _obj_setattr.__get__(self, Attribute) + for name, value in zip(state_attr_names, state): + __bound_setattr(name, value) + + # The hash code cache is not included when the object is + # serialized, but it still needs to be initialized to None to + # indicate that the first call to __hash__ should be a cache + # miss. + if hash_caching_enabled: + __bound_setattr(_hash_cache_field, None) + + return slots_getstate, slots_setstate + + def make_unhashable(self): + self._cls_dict["__hash__"] = None + return self + + def add_hash(self): + self._cls_dict["__hash__"] = self._add_method_dunders( + _make_hash( + self._cls, + self._attrs, + frozen=self._frozen, + cache_hash=self._cache_hash, + ) + ) + + return self + + def add_init(self): + self._cls_dict["__init__"] = self._add_method_dunders( + _make_init( + self._cls, + self._attrs, + self._has_post_init, + self._frozen, + self._slots, + self._cache_hash, + self._base_attr_map, + self._is_exc, + self._on_setattr is not None + and self._on_setattr is not setters.NO_OP, + ) + ) + + return self + + def add_eq(self): + cd = self._cls_dict + + cd["__eq__"] = self._add_method_dunders( + _make_eq(self._cls, self._attrs) + ) + cd["__ne__"] = self._add_method_dunders(_make_ne()) + + return self + + def add_order(self): + cd = self._cls_dict + + cd["__lt__"], cd["__le__"], cd["__gt__"], cd["__ge__"] = ( + self._add_method_dunders(meth) + for meth in _make_order(self._cls, self._attrs) + ) + + return self + + def add_setattr(self): + if self._frozen: + return self + + sa_attrs = {} + for a in self._attrs: + on_setattr = a.on_setattr or self._on_setattr + if on_setattr and on_setattr is not setters.NO_OP: + sa_attrs[a.name] = a, on_setattr + + if not sa_attrs: + return self + + if self._has_custom_setattr: + # We need to write a __setattr__ but there already is one! + raise ValueError( + "Can't combine custom __setattr__ with on_setattr hooks." 
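+ # Editorial note (illustrative, not upstream): the generated
+ # __setattr__ below dispatches per attribute; e.g. with
+ #
+ # @attr.s(on_setattr=attr.setters.validate)
+ # class C(object):
+ # x = attr.ib(validator=attr.validators.instance_of(int))
+ #
+ # the assignment c.x = "hi" re-runs the validator and raises a
+ # TypeError, while attributes without hooks fall through to plain
+ # object.__setattr__.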
+ )
+
+ # docstring comes from _add_method_dunders
+ def __setattr__(self, name, val):
+ try:
+ a, hook = sa_attrs[name]
+ except KeyError:
+ nval = val
+ else:
+ nval = hook(self, a, val)
+
+ _obj_setattr(self, name, nval)
+
+ self._cls_dict["__attrs_own_setattr__"] = True
+ self._cls_dict["__setattr__"] = self._add_method_dunders(__setattr__)
+ self._has_own_setattr = True
+
+ return self
+
+ def _add_method_dunders(self, method):
+ """
+ Add __module__ and __qualname__ to a *method* if possible.
+ """
+ try:
+ method.__module__ = self._cls.__module__
+ except AttributeError:
+ pass
+
+ try:
+ method.__qualname__ = ".".join(
+ (self._cls.__qualname__, method.__name__)
+ )
+ except AttributeError:
+ pass
+
+ try:
+ method.__doc__ = "Method generated by attrs for class %s." % (
+ self._cls.__qualname__,
+ )
+ except AttributeError:
+ pass
+
+ return method
+
+
+_CMP_DEPRECATION = (
+ "The usage of `cmp` is deprecated and will be removed on or after "
+ "2021-06-01. Please use `eq` and `order` instead."
+)
+
+
+def _determine_eq_order(cmp, eq, order, default_eq):
+ """
+ Validate the combination of *cmp*, *eq*, and *order*. Derive the effective
+ values of eq and order. If *eq* is None, set it to *default_eq*.
+ """
+ if cmp is not None and any((eq is not None, order is not None)):
+ raise ValueError("Don't mix `cmp` with `eq` and `order`.")
+
+ # cmp takes precedence due to bw-compatibility.
+ if cmp is not None:
+ warnings.warn(_CMP_DEPRECATION, DeprecationWarning, stacklevel=3)
+
+ return cmp, cmp
+
+ # If left None, equality is set to the specified default and ordering
+ # mirrors equality.
+ if eq is None:
+ eq = default_eq
+
+ if order is None:
+ order = eq
+
+ if eq is False and order is True:
+ raise ValueError("`order` can only be True if `eq` is True too.")
+
+ return eq, order
+
+
+def _determine_whether_to_implement(
+ cls, flag, auto_detect, dunders, default=True
+):
+ """
+ Check whether we should implement a set of methods for *cls*.
+
+ *flag* is the argument passed into @attr.s like 'init', *auto_detect* the
+ same as passed into @attr.s and *dunders* is a tuple of attribute names
+ whose presence signals that the user has implemented it themselves.
+
+ Return *default* if no reason either for or against is found.
+
+ auto_detect must be False on Python 2.
+ """
+ if flag is True or flag is False:
+ return flag
+
+ if flag is None and auto_detect is False:
+ return default
+
+ # Logically, flag is None and auto_detect is True here.
+ for dunder in dunders:
+ if _has_own_attribute(cls, dunder):
+ return False
+
+ return default
+
+
+def attrs(
+ maybe_cls=None,
+ these=None,
+ repr_ns=None,
+ repr=None,
+ cmp=None,
+ hash=None,
+ init=None,
+ slots=False,
+ frozen=False,
+ weakref_slot=True,
+ str=False,
+ auto_attribs=False,
+ kw_only=False,
+ cache_hash=False,
+ auto_exc=False,
+ eq=None,
+ order=None,
+ auto_detect=False,
+ collect_by_mro=False,
+ getstate_setstate=None,
+ on_setattr=None,
+ field_transformer=None,
+):
+ r"""
+ A class decorator that adds `dunder
+ <https://wiki.python.org/moin/DunderAlias>`_\ -methods according to the
+ specified attributes using `attr.ib` or the *these* argument.
+
+ :param these: A dictionary of name to `attr.ib` mappings. This is
+ useful to avoid the definition of your attributes within the class body
+ because you can't (e.g. if you want to add ``__repr__`` methods to
+ Django models) or don't want to.
+
+ If *these* is not ``None``, ``attrs`` will *not* search the class body
+ for attributes and will *not* remove any attributes from it.
+
+ If *these* is an ordered dict (`dict` on Python 3.6+,
+ `collections.OrderedDict` otherwise), the order is deduced from
+ the order of the attributes inside *these*. Otherwise the order
+ of the definition of the attributes is used.
+
+ :type these: `dict` of `str` to `attr.ib`
+
+ :param str repr_ns: When using nested classes, there's no way in Python 2
+ to automatically detect that. Therefore it's possible to set the
+ namespace explicitly for a more meaningful ``repr`` output.
+ :param bool auto_detect: Instead of setting the *init*, *repr*, *eq*,
+ *order*, and *hash* arguments explicitly, assume they are set to
+ ``True`` **unless any** of the involved methods for one of the
+ arguments is implemented in the *current* class (i.e. it is *not*
+ inherited from some base class).
+
+ So for example by implementing ``__eq__`` on a class yourself,
+ ``attrs`` will deduce ``eq=False`` and will create *neither*
+ ``__eq__`` *nor* ``__ne__`` (but Python classes come with a sensible
+ ``__ne__`` by default, so it *should* be enough to only implement
+ ``__eq__`` in most cases).
+
+ .. warning::
+
+ If you prevent ``attrs`` from creating the ordering methods for you
+ (``order=False``, e.g. by implementing ``__le__``), it becomes
+ *your* responsibility to make sure its ordering is sound. The best
+ way is to use the `functools.total_ordering` decorator.
+
+
+ Passing ``True`` or ``False`` to *init*, *repr*, *eq*, *order*,
+ *cmp*, or *hash* overrides whatever *auto_detect* would determine.
+
+ *auto_detect* requires Python 3. Setting it ``True`` on Python 2 raises
+ a `PythonTooOldError`.
+
+ :param bool repr: Create a ``__repr__`` method with a human readable
+ representation of ``attrs`` attributes.
+ :param bool str: Create a ``__str__`` method that is identical to
+ ``__repr__``. This is usually not necessary except for
+ `Exception`\ s.
+ :param Optional[bool] eq: If ``True`` or ``None`` (default), add ``__eq__``
+ and ``__ne__`` methods that check two instances for equality.
+
+ They compare the instances as if they were tuples of their ``attrs``
+ attributes if and only if the types of both classes are *identical*!
+ :param Optional[bool] order: If ``True``, add ``__lt__``, ``__le__``,
+ ``__gt__``, and ``__ge__`` methods that behave like *eq* above and
+ allow instances to be ordered. If ``None`` (default) mirror value of
+ *eq*.
+ :param Optional[bool] cmp: Setting to ``True`` is equivalent to setting
+ ``eq=True, order=True``. Deprecated in favor of *eq* and *order*, has
+ precedence over them for backward-compatibility though. Must not be
+ mixed with *eq* or *order*.
+ :param Optional[bool] hash: If ``None`` (default), the ``__hash__`` method
+ is generated according to how *eq* and *frozen* are set.
+
+ 1. If *both* are True, ``attrs`` will generate a ``__hash__`` for you.
+ 2. If *eq* is True and *frozen* is False, ``__hash__`` will be set to
+ None, marking it unhashable (which it is).
+ 3. If *eq* is False, ``__hash__`` will be left untouched meaning the
+ ``__hash__`` method of the base class will be used (if base class is
+ ``object``, this means it will fall back to id-based hashing.).
+
+ Although not recommended, you can decide for yourself and force
+ ``attrs`` to create one (e.g. if the class is immutable even though you
+ didn't freeze it programmatically) by passing ``True`` or not. Both of
+ these cases are rather special and should be used carefully.
+
+ See our documentation on `hashing`, Python's documentation on
+ `object.__hash__`, and the `GitHub issue that led to the default \
+ behavior <https://github.com/python-attrs/attrs/issues/136>`_ for more
+ details.
+ :param bool init: Create a ``__init__`` method that initializes the
+ ``attrs`` attributes. Leading underscores are stripped for the
+ argument name. If a ``__attrs_post_init__`` method exists on the
+ class, it will be called after the class is fully initialized.
+ :param bool slots: Create a slotted class that's more
+ memory-efficient. Slotted classes are generally superior to the default
+ dict classes, but have some gotchas you should know about, so we
+ encourage you to read the glossary entry on slotted classes.
+ :param bool frozen: Make instances immutable after initialization. If
+ someone attempts to modify a frozen instance,
+ `attr.exceptions.FrozenInstanceError` is raised.
+
+ .. note::
+
+ 1. This is achieved by installing a custom ``__setattr__`` method
+ on your class, so you can't implement your own.
+
+ 2. True immutability is impossible in Python.
+
+ 3. This *does* have a minor runtime performance impact
+ when initializing new instances. In other words:
+ ``__init__`` is slightly slower with ``frozen=True``.
+
+ 4. If a class is frozen, you cannot modify ``self`` in
+ ``__attrs_post_init__`` or a self-written ``__init__``. You can
+ circumvent that limitation by using
+ ``object.__setattr__(self, "attribute_name", value)``.
+
+ 5. Subclasses of a frozen class are frozen too.
+
+ :param bool weakref_slot: Make instances weak-referenceable. This has no
+ effect unless ``slots`` is also enabled.
+ :param bool auto_attribs: If ``True``, collect `PEP 526`_-annotated
+ attributes (Python 3.6 and later only) from the class body.
+
+ In this case, you **must** annotate every field. If ``attrs``
+ encounters a field that is set to an `attr.ib` but lacks a type
+ annotation, an `attr.exceptions.UnannotatedAttributeError` is
+ raised. Use ``field_name: typing.Any = attr.ib(...)`` if you don't
+ want to set a type.
+
+ If you assign a value to those attributes (e.g. ``x: int = 42``), that
+ value becomes the default value as if it were passed using
+ ``attr.ib(default=42)``. Passing an instance of `Factory` also
+ works as expected.
+
+ Attributes annotated as `typing.ClassVar`, and attributes that are
+ neither annotated nor set to an `attr.ib` are **ignored**.
+
+ .. _`PEP 526`: https://www.python.org/dev/peps/pep-0526/
+ :param bool kw_only: Make all attributes keyword-only (Python 3+)
+ in the generated ``__init__`` (if ``init`` is ``False``, this
+ parameter is ignored).
+ :param bool cache_hash: Ensure that the object's hash code is computed
+ only once and stored on the object. If this is set to ``True``,
+ hashing must be either explicitly or implicitly enabled for this
+ class. If the hash code is cached, avoid any reassignments of
+ fields involved in hash code computation or mutations of the objects
+ those fields point to after object creation. If such changes occur,
+ the behavior of the object's hash code is undefined.
+ :param bool auto_exc: If the class subclasses `BaseException`
+ (which implicitly includes any subclass of any exception), the
+ following happens to behave like a well-behaved Python exception
+ class:
+
+ - the values for *eq*, *order*, and *hash* are ignored and the
+ instances compare and hash by the instance's ids (N.B. ``attrs`` will
+ *not* remove existing implementations of ``__hash__`` or the equality
+ methods.
It just won't add own ones.),
+ - all attributes that are either passed into ``__init__`` or have a
+ default value are additionally available as a tuple in the ``args``
+ attribute,
+ - the value of *str* is ignored leaving ``__str__`` to base classes.
+ :param bool collect_by_mro: Setting this to `True` fixes the way ``attrs``
+ collects attributes from base classes. The default behavior is
+ incorrect in certain cases of multiple inheritance. It should be on by
+ default but is kept off for backward-compatibility.
+
+ See issue `#428 <https://github.com/python-attrs/attrs/issues/428>`_ for
+ more details.
+
+ :param Optional[bool] getstate_setstate:
+ .. note::
+ This is usually only interesting for slotted classes and you should
+ probably just set *auto_detect* to `True`.
+
+ If `True`, ``__getstate__`` and
+ ``__setstate__`` are generated and attached to the class. This is
+ necessary for slotted classes to be pickleable. If left `None`, it's
+ `True` by default for slotted classes and ``False`` for dict classes.
+
+ If *auto_detect* is `True`, and *getstate_setstate* is left `None`,
+ and **either** ``__getstate__`` or ``__setstate__`` is detected directly
+ on the class (i.e. not inherited), it is set to `False` (this is usually
+ what you want).
+
+ :param on_setattr: A callable that is run whenever the user attempts to set
+ an attribute (either by assignment like ``i.x = 42`` or by using
+ `setattr` like ``setattr(i, "x", 42)``). It receives the same arguments
+ as validators: the instance, the attribute that is being modified, and
+ the new value.
+
+ If no exception is raised, the attribute is set to the return value of
+ the callable.
+
+ If a list of callables is passed, they're automatically wrapped in an
+ `attr.setters.pipe`.
+
+ :param Optional[callable] field_transformer:
+ A function that is called with the original class object and all
+ fields right before ``attrs`` finalizes the class. You can use
+ this, e.g., to automatically add converters or validators to
+ fields based on their types. See `transform-fields` for more details.
+
+ .. versionadded:: 16.0.0 *slots*
+ .. versionadded:: 16.1.0 *frozen*
+ .. versionadded:: 16.3.0 *str*
+ .. versionadded:: 16.3.0 Support for ``__attrs_post_init__``.
+ .. versionchanged:: 17.1.0
+ *hash* supports ``None`` as value which is also the default now.
+ .. versionadded:: 17.3.0 *auto_attribs*
+ .. versionchanged:: 18.1.0
+ If *these* is passed, no attributes are deleted from the class body.
+ .. versionchanged:: 18.1.0 If *these* is ordered, the order is retained.
+ .. versionadded:: 18.2.0 *weakref_slot*
+ .. deprecated:: 18.2.0
+ ``__lt__``, ``__le__``, ``__gt__``, and ``__ge__`` now raise a
+ `DeprecationWarning` if the classes compared are subclasses of
+ each other. ``__eq__`` and ``__ne__`` never tried to compare subclasses
+ to each other.
+ .. versionchanged:: 19.2.0
+ ``__lt__``, ``__le__``, ``__gt__``, and ``__ge__`` now do not consider
+ subclasses comparable anymore.
+ .. versionadded:: 18.2.0 *kw_only*
+ .. versionadded:: 18.2.0 *cache_hash*
+ .. versionadded:: 19.1.0 *auto_exc*
+ .. deprecated:: 19.2.0 *cmp* Removal on or after 2021-06-01.
+ .. versionadded:: 19.2.0 *eq* and *order*
+ .. versionadded:: 20.1.0 *auto_detect*
+ .. versionadded:: 20.1.0 *collect_by_mro*
+ .. versionadded:: 20.1.0 *getstate_setstate*
+ .. versionadded:: 20.1.0 *on_setattr*
+ .. versionadded:: 20.3.0 *field_transformer*
+ """
+ if auto_detect and PY2:
+ raise PythonTooOldError(
+ "auto_detect only works on Python 3 and later."
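+ # Editorial example (illustrative, not part of upstream attrs): typical
+ # use of this decorator together with attr.ib():
+ #
+ # >>> import attr
+ # >>> @attr.s(frozen=True, slots=True)
+ # ... class Point(object):
+ # ... x = attr.ib(default=0)
+ # ... y = attr.ib(default=0)
+ # >>> Point(1, 2)
+ # Point(x=1, y=2)
+ # >>> Point(1, 2) == Point(1, 2)
+ # True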
+ ) + + eq_, order_ = _determine_eq_order(cmp, eq, order, None) + hash_ = hash # work around the lack of nonlocal + + if isinstance(on_setattr, (list, tuple)): + on_setattr = setters.pipe(*on_setattr) + + def wrap(cls): + + if getattr(cls, "__class__", None) is None: + raise TypeError("attrs only works with new-style classes.") + + is_frozen = frozen or _has_frozen_base_class(cls) + is_exc = auto_exc is True and issubclass(cls, BaseException) + has_own_setattr = auto_detect and _has_own_attribute( + cls, "__setattr__" + ) + + if has_own_setattr and is_frozen: + raise ValueError("Can't freeze a class with a custom __setattr__.") + + builder = _ClassBuilder( + cls, + these, + slots, + is_frozen, + weakref_slot, + _determine_whether_to_implement( + cls, + getstate_setstate, + auto_detect, + ("__getstate__", "__setstate__"), + default=slots, + ), + auto_attribs, + kw_only, + cache_hash, + is_exc, + collect_by_mro, + on_setattr, + has_own_setattr, + field_transformer, + ) + if _determine_whether_to_implement( + cls, repr, auto_detect, ("__repr__",) + ): + builder.add_repr(repr_ns) + if str is True: + builder.add_str() + + eq = _determine_whether_to_implement( + cls, eq_, auto_detect, ("__eq__", "__ne__") + ) + if not is_exc and eq is True: + builder.add_eq() + if not is_exc and _determine_whether_to_implement( + cls, order_, auto_detect, ("__lt__", "__le__", "__gt__", "__ge__") + ): + builder.add_order() + + builder.add_setattr() + + if ( + hash_ is None + and auto_detect is True + and _has_own_attribute(cls, "__hash__") + ): + hash = False + else: + hash = hash_ + if hash is not True and hash is not False and hash is not None: + # Can't use `hash in` because 1 == True for example. + raise TypeError( + "Invalid value for hash. Must be True, False, or None." + ) + elif hash is False or (hash is None and eq is False) or is_exc: + # Don't do anything. Should fall back to __object__'s __hash__ + # which is by id. + if cache_hash: + raise TypeError( + "Invalid value for cache_hash. To use hash caching," + " hashing must be either explicitly or implicitly " + "enabled." + ) + elif hash is True or ( + hash is None and eq is True and is_frozen is True + ): + # Build a __hash__ if told so, or if it's safe. + builder.add_hash() + else: + # Raise TypeError on attempts to hash. + if cache_hash: + raise TypeError( + "Invalid value for cache_hash. To use hash caching," + " hashing must be either explicitly or implicitly " + "enabled." + ) + builder.make_unhashable() + + if _determine_whether_to_implement( + cls, init, auto_detect, ("__init__",) + ): + builder.add_init() + else: + if cache_hash: + raise TypeError( + "Invalid value for cache_hash. To use hash caching," + " init must be True." + ) + + return builder.build_class() + + # maybe_cls's type depends on the usage of the decorator. It's a class + # if it's used as `@attrs` but ``None`` if used as `@attrs()`. + if maybe_cls is None: + return wrap + else: + return wrap(maybe_cls) + + +_attrs = attrs +""" +Internal alias so we can use it in functions that take an argument called +*attrs*. +""" + + +if PY2: + + def _has_frozen_base_class(cls): + """ + Check whether *cls* has a frozen ancestor by looking at its + __setattr__. + """ + return ( + getattr(cls.__setattr__, "__module__", None) + == _frozen_setattrs.__module__ + and cls.__setattr__.__name__ == _frozen_setattrs.__name__ + ) + + +else: + + def _has_frozen_base_class(cls): + """ + Check whether *cls* has a frozen ancestor by looking at its + __setattr__. 
+ """ + return cls.__setattr__ == _frozen_setattrs + + +def _attrs_to_tuple(obj, attrs): + """ + Create a tuple of all values of *obj*'s *attrs*. + """ + return tuple(getattr(obj, a.name) for a in attrs) + + +def _generate_unique_filename(cls, func_name): + """ + Create a "filename" suitable for a function being generated. + """ + unique_id = uuid.uuid4() + extra = "" + count = 1 + + while True: + unique_filename = "".format( + func_name, + cls.__module__, + getattr(cls, "__qualname__", cls.__name__), + extra, + ) + # To handle concurrency we essentially "reserve" our spot in + # the linecache with a dummy line. The caller can then + # set this value correctly. + cache_line = (1, None, (str(unique_id),), unique_filename) + if ( + linecache.cache.setdefault(unique_filename, cache_line) + == cache_line + ): + return unique_filename + + # Looks like this spot is taken. Try again. + count += 1 + extra = "-{0}".format(count) + + +def _make_hash(cls, attrs, frozen, cache_hash): + attrs = tuple( + a for a in attrs if a.hash is True or (a.hash is None and a.eq is True) + ) + + tab = " " + + unique_filename = _generate_unique_filename(cls, "hash") + type_hash = hash(unique_filename) + + hash_def = "def __hash__(self" + hash_func = "hash((" + closing_braces = "))" + if not cache_hash: + hash_def += "):" + else: + if not PY2: + hash_def += ", *" + + hash_def += ( + ", _cache_wrapper=" + + "__import__('attr._make')._make._CacheHashWrapper):" + ) + hash_func = "_cache_wrapper(" + hash_func + closing_braces += ")" + + method_lines = [hash_def] + + def append_hash_computation_lines(prefix, indent): + """ + Generate the code for actually computing the hash code. + Below this will either be returned directly or used to compute + a value which is then cached, depending on the value of cache_hash + """ + + method_lines.extend( + [ + indent + prefix + hash_func, + indent + " %d," % (type_hash,), + ] + ) + + for a in attrs: + method_lines.append(indent + " self.%s," % a.name) + + method_lines.append(indent + " " + closing_braces) + + if cache_hash: + method_lines.append(tab + "if self.%s is None:" % _hash_cache_field) + if frozen: + append_hash_computation_lines( + "object.__setattr__(self, '%s', " % _hash_cache_field, tab * 2 + ) + method_lines.append(tab * 2 + ")") # close __setattr__ + else: + append_hash_computation_lines( + "self.%s = " % _hash_cache_field, tab * 2 + ) + method_lines.append(tab + "return self.%s" % _hash_cache_field) + else: + append_hash_computation_lines("return ", tab) + + script = "\n".join(method_lines) + globs = {} + locs = {} + bytecode = compile(script, unique_filename, "exec") + eval(bytecode, globs, locs) + + # In order of debuggers like PDB being able to step through the code, + # we add a fake linecache entry. + linecache.cache[unique_filename] = ( + len(script), + None, + script.splitlines(True), + unique_filename, + ) + + return locs["__hash__"] + + +def _add_hash(cls, attrs): + """ + Add a hash method to *cls*. + """ + cls.__hash__ = _make_hash(cls, attrs, frozen=False, cache_hash=False) + return cls + + +def _make_ne(): + """ + Create __ne__ method. + """ + + def __ne__(self, other): + """ + Check equality and either forward a NotImplemented or + return the result negated. + """ + result = self.__eq__(other) + if result is NotImplemented: + return NotImplemented + + return not result + + return __ne__ + + +def _make_eq(cls, attrs): + """ + Create __eq__ method for *cls* with *attrs*. 
+ """ + attrs = [a for a in attrs if a.eq] + + unique_filename = _generate_unique_filename(cls, "eq") + lines = [ + "def __eq__(self, other):", + " if other.__class__ is not self.__class__:", + " return NotImplemented", + ] + # We can't just do a big self.x = other.x and... clause due to + # irregularities like nan == nan is false but (nan,) == (nan,) is true. + if attrs: + lines.append(" return (") + others = [" ) == ("] + for a in attrs: + lines.append(" self.%s," % (a.name,)) + others.append(" other.%s," % (a.name,)) + + lines += others + [" )"] + else: + lines.append(" return True") + + script = "\n".join(lines) + globs = {} + locs = {} + bytecode = compile(script, unique_filename, "exec") + eval(bytecode, globs, locs) + + # In order of debuggers like PDB being able to step through the code, + # we add a fake linecache entry. + linecache.cache[unique_filename] = ( + len(script), + None, + script.splitlines(True), + unique_filename, + ) + return locs["__eq__"] + + +def _make_order(cls, attrs): + """ + Create ordering methods for *cls* with *attrs*. + """ + attrs = [a for a in attrs if a.order] + + def attrs_to_tuple(obj): + """ + Save us some typing. + """ + return _attrs_to_tuple(obj, attrs) + + def __lt__(self, other): + """ + Automatically created by attrs. + """ + if other.__class__ is self.__class__: + return attrs_to_tuple(self) < attrs_to_tuple(other) + + return NotImplemented + + def __le__(self, other): + """ + Automatically created by attrs. + """ + if other.__class__ is self.__class__: + return attrs_to_tuple(self) <= attrs_to_tuple(other) + + return NotImplemented + + def __gt__(self, other): + """ + Automatically created by attrs. + """ + if other.__class__ is self.__class__: + return attrs_to_tuple(self) > attrs_to_tuple(other) + + return NotImplemented + + def __ge__(self, other): + """ + Automatically created by attrs. + """ + if other.__class__ is self.__class__: + return attrs_to_tuple(self) >= attrs_to_tuple(other) + + return NotImplemented + + return __lt__, __le__, __gt__, __ge__ + + +def _add_eq(cls, attrs=None): + """ + Add equality methods to *cls* with *attrs*. + """ + if attrs is None: + attrs = cls.__attrs_attrs__ + + cls.__eq__ = _make_eq(cls, attrs) + cls.__ne__ = _make_ne() + + return cls + + +_already_repring = threading.local() + + +def _make_repr(attrs, ns): + """ + Make a repr method that includes relevant *attrs*, adding *ns* to the full + name. + """ + + # Figure out which attributes to include, and which function to use to + # format them. The a.repr value can be either bool or a custom callable. + attr_names_with_reprs = tuple( + (a.name, repr if a.repr is True else a.repr) + for a in attrs + if a.repr is not False + ) + + def __repr__(self): + """ + Automatically created by attrs. + """ + try: + working_set = _already_repring.working_set + except AttributeError: + working_set = set() + _already_repring.working_set = working_set + + if id(self) in working_set: + return "..." + real_cls = self.__class__ + if ns is None: + qualname = getattr(real_cls, "__qualname__", None) + if qualname is not None: + class_name = qualname.rsplit(">.", 1)[-1] + else: + class_name = real_cls.__name__ + else: + class_name = ns + "." + real_cls.__name__ + + # Since 'self' remains on the stack (i.e.: strongly referenced) for the + # duration of this call, it's safe to depend on id(...) stability, and + # not need to track the instance and therefore worry about properties + # like weakref- or hash-ability. 
+ working_set.add(id(self)) + try: + result = [class_name, "("] + first = True + for name, attr_repr in attr_names_with_reprs: + if first: + first = False + else: + result.append(", ") + result.extend( + (name, "=", attr_repr(getattr(self, name, NOTHING))) + ) + return "".join(result) + ")" + finally: + working_set.remove(id(self)) + + return __repr__ + + +def _add_repr(cls, ns=None, attrs=None): + """ + Add a repr method to *cls*. + """ + if attrs is None: + attrs = cls.__attrs_attrs__ + + cls.__repr__ = _make_repr(attrs, ns) + return cls + + +def fields(cls): + """ + Return the tuple of ``attrs`` attributes for a class. + + The tuple also allows accessing the fields by their names (see below for + examples). + + :param type cls: Class to introspect. + + :raise TypeError: If *cls* is not a class. + :raise attr.exceptions.NotAnAttrsClassError: If *cls* is not an ``attrs`` + class. + + :rtype: tuple (with name accessors) of `attr.Attribute` + + .. versionchanged:: 16.2.0 Returned tuple allows accessing the fields + by name. + """ + if not isclass(cls): + raise TypeError("Passed object must be a class.") + attrs = getattr(cls, "__attrs_attrs__", None) + if attrs is None: + raise NotAnAttrsClassError( + "{cls!r} is not an attrs-decorated class.".format(cls=cls) + ) + return attrs + + +def fields_dict(cls): + """ + Return an ordered dictionary of ``attrs`` attributes for a class, whose + keys are the attribute names. + + :param type cls: Class to introspect. + + :raise TypeError: If *cls* is not a class. + :raise attr.exceptions.NotAnAttrsClassError: If *cls* is not an ``attrs`` + class. + + :rtype: an ordered dict where keys are attribute names and values are + `attr.Attribute`\\ s. This will be a `dict` if it's + naturally ordered like on Python 3.6+ or an + :class:`~collections.OrderedDict` otherwise. + + .. versionadded:: 18.1.0 + """ + if not isclass(cls): + raise TypeError("Passed object must be a class.") + attrs = getattr(cls, "__attrs_attrs__", None) + if attrs is None: + raise NotAnAttrsClassError( + "{cls!r} is not an attrs-decorated class.".format(cls=cls) + ) + return ordered_dict(((a.name, a) for a in attrs)) + + +def validate(inst): + """ + Validate all attributes on *inst* that have a validator. + + Leaves all exceptions through. + + :param inst: Instance of a class with ``attrs`` attributes. + """ + if _config._run_validators is False: + return + + for a in fields(inst.__class__): + v = a.validator + if v is not None: + v(inst, a, getattr(inst, a.name)) + + +def _is_slot_cls(cls): + return "__slots__" in cls.__dict__ + + +def _is_slot_attr(a_name, base_attr_map): + """ + Check if the attribute name comes from a slot class. 
+ """ + return a_name in base_attr_map and _is_slot_cls(base_attr_map[a_name]) + + +def _make_init( + cls, + attrs, + post_init, + frozen, + slots, + cache_hash, + base_attr_map, + is_exc, + has_global_on_setattr, +): + if frozen and has_global_on_setattr: + raise ValueError("Frozen classes can't use on_setattr.") + + needs_cached_setattr = cache_hash or frozen + filtered_attrs = [] + attr_dict = {} + for a in attrs: + if not a.init and a.default is NOTHING: + continue + + filtered_attrs.append(a) + attr_dict[a.name] = a + + if a.on_setattr is not None: + if frozen is True: + raise ValueError("Frozen classes can't use on_setattr.") + + needs_cached_setattr = True + elif ( + has_global_on_setattr and a.on_setattr is not setters.NO_OP + ) or _is_slot_attr(a.name, base_attr_map): + needs_cached_setattr = True + + unique_filename = _generate_unique_filename(cls, "init") + + script, globs, annotations = _attrs_to_init_script( + filtered_attrs, + frozen, + slots, + post_init, + cache_hash, + base_attr_map, + is_exc, + needs_cached_setattr, + has_global_on_setattr, + ) + locs = {} + bytecode = compile(script, unique_filename, "exec") + globs.update({"NOTHING": NOTHING, "attr_dict": attr_dict}) + + if needs_cached_setattr: + # Save the lookup overhead in __init__ if we need to circumvent + # setattr hooks. + globs["_cached_setattr"] = _obj_setattr + + eval(bytecode, globs, locs) + + # In order of debuggers like PDB being able to step through the code, + # we add a fake linecache entry. + linecache.cache[unique_filename] = ( + len(script), + None, + script.splitlines(True), + unique_filename, + ) + + __init__ = locs["__init__"] + __init__.__annotations__ = annotations + + return __init__ + + +def _setattr(attr_name, value_var, has_on_setattr): + """ + Use the cached object.setattr to set *attr_name* to *value_var*. + """ + return "_setattr('%s', %s)" % (attr_name, value_var) + + +def _setattr_with_converter(attr_name, value_var, has_on_setattr): + """ + Use the cached object.setattr to set *attr_name* to *value_var*, but run + its converter first. + """ + return "_setattr('%s', %s(%s))" % ( + attr_name, + _init_converter_pat % (attr_name,), + value_var, + ) + + +def _assign(attr_name, value, has_on_setattr): + """ + Unless *attr_name* has an on_setattr hook, use normal assignment. Otherwise + relegate to _setattr. + """ + if has_on_setattr: + return _setattr(attr_name, value, True) + + return "self.%s = %s" % (attr_name, value) + + +def _assign_with_converter(attr_name, value_var, has_on_setattr): + """ + Unless *attr_name* has an on_setattr hook, use normal assignment after + conversion. Otherwise relegate to _setattr_with_converter. + """ + if has_on_setattr: + return _setattr_with_converter(attr_name, value_var, True) + + return "self.%s = %s(%s)" % ( + attr_name, + _init_converter_pat % (attr_name,), + value_var, + ) + + +if PY2: + + def _unpack_kw_only_py2(attr_name, default=None): + """ + Unpack *attr_name* from _kw_only dict. + """ + if default is not None: + arg_default = ", %s" % default + else: + arg_default = "" + return "%s = _kw_only.pop('%s'%s)" % ( + attr_name, + attr_name, + arg_default, + ) + + def _unpack_kw_only_lines_py2(kw_only_args): + """ + Unpack all *kw_only_args* from _kw_only dict and handle errors. + + Given a list of strings "{attr_name}" and "{attr_name}={default}" + generates list of lines of code that pop attrs from _kw_only dict and + raise TypeError similar to builtin if required attr is missing or + extra key is passed. 
+ + >>> print("\n".join(_unpack_kw_only_lines_py2(["a", "b=42"]))) + try: + a = _kw_only.pop('a') + b = _kw_only.pop('b', 42) + except KeyError as _key_error: + raise TypeError( + ... + if _kw_only: + raise TypeError( + ... + """ + lines = ["try:"] + lines.extend( + " " + _unpack_kw_only_py2(*arg.split("=")) + for arg in kw_only_args + ) + lines += """\ +except KeyError as _key_error: + raise TypeError( + '__init__() missing required keyword-only argument: %s' % _key_error + ) +if _kw_only: + raise TypeError( + '__init__() got an unexpected keyword argument %r' + % next(iter(_kw_only)) + ) +""".split( + "\n" + ) + return lines + + +def _attrs_to_init_script( + attrs, + frozen, + slots, + post_init, + cache_hash, + base_attr_map, + is_exc, + needs_cached_setattr, + has_global_on_setattr, +): + """ + Return a script of an initializer for *attrs* and a dict of globals. + + The globals are expected by the generated script. + + If *frozen* is True, we cannot set the attributes directly so we use + a cached ``object.__setattr__``. + """ + lines = [] + if needs_cached_setattr: + lines.append( + # Circumvent the __setattr__ descriptor to save one lookup per + # assignment. + # Note _setattr will be used again below if cache_hash is True + "_setattr = _cached_setattr.__get__(self, self.__class__)" + ) + + if frozen is True: + if slots is True: + fmt_setter = _setattr + fmt_setter_with_converter = _setattr_with_converter + else: + # Dict frozen classes assign directly to __dict__. + # But only if the attribute doesn't come from an ancestor slot + # class. + # Note _inst_dict will be used again below if cache_hash is True + lines.append("_inst_dict = self.__dict__") + + def fmt_setter(attr_name, value_var, has_on_setattr): + if _is_slot_attr(attr_name, base_attr_map): + return _setattr(attr_name, value_var, has_on_setattr) + + return "_inst_dict['%s'] = %s" % (attr_name, value_var) + + def fmt_setter_with_converter( + attr_name, value_var, has_on_setattr + ): + if has_on_setattr or _is_slot_attr(attr_name, base_attr_map): + return _setattr_with_converter( + attr_name, value_var, has_on_setattr + ) + + return "_inst_dict['%s'] = %s(%s)" % ( + attr_name, + _init_converter_pat % (attr_name,), + value_var, + ) + + else: + # Not frozen. + fmt_setter = _assign + fmt_setter_with_converter = _assign_with_converter + + args = [] + kw_only_args = [] + attrs_to_validate = [] + + # This is a dictionary of names to validator and converter callables. + # Injecting this into __init__ globals lets us avoid lookups. 
+ names_for_globals = {} + annotations = {"return": None} + + for a in attrs: + if a.validator: + attrs_to_validate.append(a) + + attr_name = a.name + has_on_setattr = a.on_setattr is not None or ( + a.on_setattr is not setters.NO_OP and has_global_on_setattr + ) + arg_name = a.name.lstrip("_") + + has_factory = isinstance(a.default, Factory) + if has_factory and a.default.takes_self: + maybe_self = "self" + else: + maybe_self = "" + + if a.init is False: + if has_factory: + init_factory_name = _init_factory_pat.format(a.name) + if a.converter is not None: + lines.append( + fmt_setter_with_converter( + attr_name, + init_factory_name + "(%s)" % (maybe_self,), + has_on_setattr, + ) + ) + conv_name = _init_converter_pat % (a.name,) + names_for_globals[conv_name] = a.converter + else: + lines.append( + fmt_setter( + attr_name, + init_factory_name + "(%s)" % (maybe_self,), + has_on_setattr, + ) + ) + names_for_globals[init_factory_name] = a.default.factory + else: + if a.converter is not None: + lines.append( + fmt_setter_with_converter( + attr_name, + "attr_dict['%s'].default" % (attr_name,), + has_on_setattr, + ) + ) + conv_name = _init_converter_pat % (a.name,) + names_for_globals[conv_name] = a.converter + else: + lines.append( + fmt_setter( + attr_name, + "attr_dict['%s'].default" % (attr_name,), + has_on_setattr, + ) + ) + elif a.default is not NOTHING and not has_factory: + arg = "%s=attr_dict['%s'].default" % (arg_name, attr_name) + if a.kw_only: + kw_only_args.append(arg) + else: + args.append(arg) + + if a.converter is not None: + lines.append( + fmt_setter_with_converter( + attr_name, arg_name, has_on_setattr + ) + ) + names_for_globals[ + _init_converter_pat % (a.name,) + ] = a.converter + else: + lines.append(fmt_setter(attr_name, arg_name, has_on_setattr)) + + elif has_factory: + arg = "%s=NOTHING" % (arg_name,) + if a.kw_only: + kw_only_args.append(arg) + else: + args.append(arg) + lines.append("if %s is not NOTHING:" % (arg_name,)) + + init_factory_name = _init_factory_pat.format(a.name) + if a.converter is not None: + lines.append( + " " + + fmt_setter_with_converter( + attr_name, arg_name, has_on_setattr + ) + ) + lines.append("else:") + lines.append( + " " + + fmt_setter_with_converter( + attr_name, + init_factory_name + "(" + maybe_self + ")", + has_on_setattr, + ) + ) + names_for_globals[ + _init_converter_pat % (a.name,) + ] = a.converter + else: + lines.append( + " " + fmt_setter(attr_name, arg_name, has_on_setattr) + ) + lines.append("else:") + lines.append( + " " + + fmt_setter( + attr_name, + init_factory_name + "(" + maybe_self + ")", + has_on_setattr, + ) + ) + names_for_globals[init_factory_name] = a.default.factory + else: + if a.kw_only: + kw_only_args.append(arg_name) + else: + args.append(arg_name) + + if a.converter is not None: + lines.append( + fmt_setter_with_converter( + attr_name, arg_name, has_on_setattr + ) + ) + names_for_globals[ + _init_converter_pat % (a.name,) + ] = a.converter + else: + lines.append(fmt_setter(attr_name, arg_name, has_on_setattr)) + + if a.init is True and a.converter is None and a.type is not None: + annotations[arg_name] = a.type + + if attrs_to_validate: # we can skip this if there are no validators. 
+ names_for_globals["_config"] = _config + lines.append("if _config._run_validators is True:") + for a in attrs_to_validate: + val_name = "__attr_validator_" + a.name + attr_name = "__attr_" + a.name + lines.append( + " %s(self, %s, self.%s)" % (val_name, attr_name, a.name) + ) + names_for_globals[val_name] = a.validator + names_for_globals[attr_name] = a + + if post_init: + lines.append("self.__attrs_post_init__()") + + # because this is set only after __attrs_post_init is called, a crash + # will result if post-init tries to access the hash code. This seemed + # preferable to setting this beforehand, in which case alteration to + # field values during post-init combined with post-init accessing the + # hash code would result in silent bugs. + if cache_hash: + if frozen: + if slots: + # if frozen and slots, then _setattr defined above + init_hash_cache = "_setattr('%s', %s)" + else: + # if frozen and not slots, then _inst_dict defined above + init_hash_cache = "_inst_dict['%s'] = %s" + else: + init_hash_cache = "self.%s = %s" + lines.append(init_hash_cache % (_hash_cache_field, "None")) + + # For exceptions we rely on BaseException.__init__ for proper + # initialization. + if is_exc: + vals = ",".join("self." + a.name for a in attrs if a.init) + + lines.append("BaseException.__init__(self, %s)" % (vals,)) + + args = ", ".join(args) + if kw_only_args: + if PY2: + lines = _unpack_kw_only_lines_py2(kw_only_args) + lines + + args += "%s**_kw_only" % (", " if args else "",) # leading comma + else: + args += "%s*, %s" % ( + ", " if args else "", # leading comma + ", ".join(kw_only_args), # kw_only args + ) + return ( + """\ +def __init__(self, {args}): + {lines} +""".format( + args=args, lines="\n ".join(lines) if lines else "pass" + ), + names_for_globals, + annotations, + ) + + +class Attribute(object): + """ + *Read-only* representation of an attribute. + + Instances of this class are frequently used for introspection purposes + like: + + - `fields` returns a tuple of them. + - Validators get them passed as the first argument. + - The *field transformer* hook receives a list of them. + + :attribute name: The name of the attribute. + :attribute inherited: Whether or not that attribute has been inherited from + a base class. + + Plus *all* arguments of `attr.ib` (except for ``factory`` + which is only syntactic sugar for ``default=Factory(...)``. + + .. versionadded:: 20.1.0 *inherited* + .. versionadded:: 20.1.0 *on_setattr* + .. versionchanged:: 20.2.0 *inherited* is not taken into account for + equality checks and hashing anymore. + + For the full version history of the fields, see `attr.ib`. + """ + + __slots__ = ( + "name", + "default", + "validator", + "repr", + "eq", + "order", + "hash", + "init", + "metadata", + "type", + "converter", + "kw_only", + "inherited", + "on_setattr", + ) + + def __init__( + self, + name, + default, + validator, + repr, + cmp, # XXX: unused, remove along with other cmp code. + hash, + init, + inherited, + metadata=None, + type=None, + converter=None, + kw_only=False, + eq=None, + order=None, + on_setattr=None, + ): + eq, order = _determine_eq_order(cmp, eq, order, True) + + # Cache this descriptor here to speed things up later. + bound_setattr = _obj_setattr.__get__(self, Attribute) + + # Despite the big red warning, people *do* instantiate `Attribute` + # themselves. 
+ bound_setattr("name", name) + bound_setattr("default", default) + bound_setattr("validator", validator) + bound_setattr("repr", repr) + bound_setattr("eq", eq) + bound_setattr("order", order) + bound_setattr("hash", hash) + bound_setattr("init", init) + bound_setattr("converter", converter) + bound_setattr( + "metadata", + ( + metadata_proxy(metadata) + if metadata + else _empty_metadata_singleton + ), + ) + bound_setattr("type", type) + bound_setattr("kw_only", kw_only) + bound_setattr("inherited", inherited) + bound_setattr("on_setattr", on_setattr) + + def __setattr__(self, name, value): + raise FrozenInstanceError() + + @classmethod + def from_counting_attr(cls, name, ca, type=None): + # type holds the annotated value. deal with conflicts: + if type is None: + type = ca.type + elif ca.type is not None: + raise ValueError( + "Type annotation and type argument cannot both be present" + ) + inst_dict = { + k: getattr(ca, k) + for k in Attribute.__slots__ + if k + not in ( + "name", + "validator", + "default", + "type", + "inherited", + ) # exclude methods and deprecated alias + } + return cls( + name=name, + validator=ca._validator, + default=ca._default, + type=type, + cmp=None, + inherited=False, + **inst_dict + ) + + @property + def cmp(self): + """ + Simulate the presence of a cmp attribute and warn. + """ + warnings.warn(_CMP_DEPRECATION, DeprecationWarning, stacklevel=2) + + return self.eq and self.order + + # Don't use attr.evolve since fields(Attribute) doesn't work + def evolve(self, **changes): + """ + Copy *self* and apply *changes*. + + This works similarly to `attr.evolve` but that function does not work + with ``Attribute``. + + It is mainly meant to be used for `transform-fields`. + + .. versionadded:: 20.3.0 + """ + new = copy.copy(self) + + new._setattrs(changes.items()) + + return new + + # Don't use _add_pickle since fields(Attribute) doesn't work + def __getstate__(self): + """ + Play nice with pickle. + """ + return tuple( + getattr(self, name) if name != "metadata" else dict(self.metadata) + for name in self.__slots__ + ) + + def __setstate__(self, state): + """ + Play nice with pickle. + """ + self._setattrs(zip(self.__slots__, state)) + + def _setattrs(self, name_values_pairs): + bound_setattr = _obj_setattr.__get__(self, Attribute) + for name, value in name_values_pairs: + if name != "metadata": + bound_setattr(name, value) + else: + bound_setattr( + name, + metadata_proxy(value) + if value + else _empty_metadata_singleton, + ) + + +_a = [ + Attribute( + name=name, + default=NOTHING, + validator=None, + repr=True, + cmp=None, + eq=True, + order=False, + hash=(name != "metadata"), + init=True, + inherited=False, + ) + for name in Attribute.__slots__ +] + +Attribute = _add_hash( + _add_eq( + _add_repr(Attribute, attrs=_a), + attrs=[a for a in _a if a.name != "inherited"], + ), + attrs=[a for a in _a if a.hash and a.name != "inherited"], +) + + +class _CountingAttr(object): + """ + Intermediate representation of attributes that uses a counter to preserve + the order in which the attributes have been defined. + + *Internal* data structure of the attrs library. Running into is most + likely the result of a bug like a forgotten `@attr.s` decorator. 
+ """ + + __slots__ = ( + "counter", + "_default", + "repr", + "eq", + "order", + "hash", + "init", + "metadata", + "_validator", + "converter", + "type", + "kw_only", + "on_setattr", + ) + __attrs_attrs__ = tuple( + Attribute( + name=name, + default=NOTHING, + validator=None, + repr=True, + cmp=None, + hash=True, + init=True, + kw_only=False, + eq=True, + order=False, + inherited=False, + on_setattr=None, + ) + for name in ( + "counter", + "_default", + "repr", + "eq", + "order", + "hash", + "init", + "on_setattr", + ) + ) + ( + Attribute( + name="metadata", + default=None, + validator=None, + repr=True, + cmp=None, + hash=False, + init=True, + kw_only=False, + eq=True, + order=False, + inherited=False, + on_setattr=None, + ), + ) + cls_counter = 0 + + def __init__( + self, + default, + validator, + repr, + cmp, # XXX: unused, remove along with cmp + hash, + init, + converter, + metadata, + type, + kw_only, + eq, + order, + on_setattr, + ): + _CountingAttr.cls_counter += 1 + self.counter = _CountingAttr.cls_counter + self._default = default + self._validator = validator + self.converter = converter + self.repr = repr + self.eq = eq + self.order = order + self.hash = hash + self.init = init + self.metadata = metadata + self.type = type + self.kw_only = kw_only + self.on_setattr = on_setattr + + def validator(self, meth): + """ + Decorator that adds *meth* to the list of validators. + + Returns *meth* unchanged. + + .. versionadded:: 17.1.0 + """ + if self._validator is None: + self._validator = meth + else: + self._validator = and_(self._validator, meth) + return meth + + def default(self, meth): + """ + Decorator that allows to set the default for an attribute. + + Returns *meth* unchanged. + + :raises DefaultAlreadySetError: If default has been set before. + + .. versionadded:: 17.1.0 + """ + if self._default is not NOTHING: + raise DefaultAlreadySetError() + + self._default = Factory(meth, takes_self=True) + + return meth + + +_CountingAttr = _add_eq(_add_repr(_CountingAttr)) + + +@attrs(slots=True, init=False, hash=True) +class Factory(object): + """ + Stores a factory callable. + + If passed as the default value to `attr.ib`, the factory is used to + generate a new value. + + :param callable factory: A callable that takes either none or exactly one + mandatory positional argument depending on *takes_self*. + :param bool takes_self: Pass the partially initialized instance that is + being initialized as a positional argument. + + .. versionadded:: 17.1.0 *takes_self* + """ + + factory = attrib() + takes_self = attrib() + + def __init__(self, factory, takes_self=False): + """ + `Factory` is part of the default machinery so if we want a default + value here, we have to implement it ourselves. + """ + self.factory = factory + self.takes_self = takes_self + + +def make_class(name, attrs, bases=(object,), **attributes_arguments): + """ + A quick way to create a new class called *name* with *attrs*. + + :param str name: The name for the new class. + + :param attrs: A list of names or a dictionary of mappings of names to + attributes. + + If *attrs* is a list or an ordered dict (`dict` on Python 3.6+, + `collections.OrderedDict` otherwise), the order is deduced from + the order of the names or attributes inside *attrs*. Otherwise the + order of the definition of the attributes is used. + :type attrs: `list` or `dict` + + :param tuple bases: Classes that the new class will subclass. + + :param attributes_arguments: Passed unmodified to `attr.s`. + + :return: A new class with *attrs*. 
+ :rtype: type + + .. versionadded:: 17.1.0 *bases* + .. versionchanged:: 18.1.0 If *attrs* is ordered, the order is retained. + """ + if isinstance(attrs, dict): + cls_dict = attrs + elif isinstance(attrs, (list, tuple)): + cls_dict = dict((a, attrib()) for a in attrs) + else: + raise TypeError("attrs argument must be a dict or a list.") + + post_init = cls_dict.pop("__attrs_post_init__", None) + type_ = type( + name, + bases, + {} if post_init is None else {"__attrs_post_init__": post_init}, + ) + # For pickling to work, the __module__ variable needs to be set to the + # frame where the class is created. Bypass this step in environments where + # sys._getframe is not defined (Jython for example) or sys._getframe is not + # defined for arguments greater than 0 (IronPython). + try: + type_.__module__ = sys._getframe(1).f_globals.get( + "__name__", "__main__" + ) + except (AttributeError, ValueError): + pass + + # We do it here for proper warnings with meaningful stacklevel. + cmp = attributes_arguments.pop("cmp", None) + ( + attributes_arguments["eq"], + attributes_arguments["order"], + ) = _determine_eq_order( + cmp, + attributes_arguments.get("eq"), + attributes_arguments.get("order"), + True, + ) + + return _attrs(these=cls_dict, **attributes_arguments)(type_) + + +# These are required by within this module so we define them here and merely +# import into .validators / .converters. + + +@attrs(slots=True, hash=True) +class _AndValidator(object): + """ + Compose many validators to a single one. + """ + + _validators = attrib() + + def __call__(self, inst, attr, value): + for v in self._validators: + v(inst, attr, value) + + +def and_(*validators): + """ + A validator that composes multiple validators into one. + + When called on a value, it runs all wrapped validators. + + :param callables validators: Arbitrary number of validators. + + .. versionadded:: 17.1.0 + """ + vals = [] + for validator in validators: + vals.extend( + validator._validators + if isinstance(validator, _AndValidator) + else [validator] + ) + + return _AndValidator(tuple(vals)) + + +def pipe(*converters): + """ + A converter that composes multiple converters into one. + + When called on a value, it runs all wrapped converters, returning the + *last* value. + + :param callables converters: Arbitrary number of converters. + + .. versionadded:: 20.1.0 + """ + + def pipe_converter(val): + for converter in converters: + val = converter(val) + + return val + + return pipe_converter diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/attr/_next_gen.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/attr/_next_gen.py new file mode 100644 index 0000000000000000000000000000000000000000..2b5565c569e5e1985187db8720d98c28bd3bfaa3 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/attr/_next_gen.py @@ -0,0 +1,160 @@ +""" +This is a Python 3.6 and later-only, keyword-only, and **provisional** API that +calls `attr.s` with different default values. + +Provisional APIs that shall become "import attrs" one glorious day. +""" + +from functools import partial + +from attr.exceptions import UnannotatedAttributeError + +from . 
import setters +from ._make import NOTHING, _frozen_setattrs, attrib, attrs + + +def define( + maybe_cls=None, + *, + these=None, + repr=None, + hash=None, + init=None, + slots=True, + frozen=False, + weakref_slot=True, + str=False, + auto_attribs=None, + kw_only=False, + cache_hash=False, + auto_exc=True, + eq=None, + order=False, + auto_detect=True, + getstate_setstate=None, + on_setattr=None, + field_transformer=None, +): + r""" + The only behavioral differences are the handling of the *auto_attribs* + option: + + :param Optional[bool] auto_attribs: If set to `True` or `False`, it behaves + exactly like `attr.s`. If left `None`, `attr.s` will try to guess: + + 1. If all attributes are annotated and no `attr.ib` is found, it assumes + *auto_attribs=True*. + 2. Otherwise it assumes *auto_attribs=False* and tries to collect + `attr.ib`\ s. + + and that mutable classes (``frozen=False``) validate on ``__setattr__``. + + .. versionadded:: 20.1.0 + """ + + def do_it(cls, auto_attribs): + return attrs( + maybe_cls=cls, + these=these, + repr=repr, + hash=hash, + init=init, + slots=slots, + frozen=frozen, + weakref_slot=weakref_slot, + str=str, + auto_attribs=auto_attribs, + kw_only=kw_only, + cache_hash=cache_hash, + auto_exc=auto_exc, + eq=eq, + order=order, + auto_detect=auto_detect, + collect_by_mro=True, + getstate_setstate=getstate_setstate, + on_setattr=on_setattr, + field_transformer=field_transformer, + ) + + def wrap(cls): + """ + Making this a wrapper ensures this code runs during class creation. + + We also ensure that frozen-ness of classes is inherited. + """ + nonlocal frozen, on_setattr + + had_on_setattr = on_setattr not in (None, setters.NO_OP) + + # By default, mutable classes validate on setattr. + if frozen is False and on_setattr is None: + on_setattr = setters.validate + + # However, if we subclass a frozen class, we inherit the immutability + # and disable on_setattr. + for base_cls in cls.__bases__: + if base_cls.__setattr__ is _frozen_setattrs: + if had_on_setattr: + raise ValueError( + "Frozen classes can't use on_setattr " + "(frozen-ness was inherited)." + ) + + on_setattr = setters.NO_OP + break + + if auto_attribs is not None: + return do_it(cls, auto_attribs) + + try: + return do_it(cls, True) + except UnannotatedAttributeError: + return do_it(cls, False) + + # maybe_cls's type depends on the usage of the decorator. It's a class + # if it's used as `@attrs` but ``None`` if used as `@attrs()`. + if maybe_cls is None: + return wrap + else: + return wrap(maybe_cls) + + +mutable = define +frozen = partial(define, frozen=True, on_setattr=None) + + +def field( + *, + default=NOTHING, + validator=None, + repr=True, + hash=None, + init=True, + metadata=None, + converter=None, + factory=None, + kw_only=False, + eq=None, + order=None, + on_setattr=None, +): + """ + Identical to `attr.ib`, except keyword-only and with some arguments + removed. + + .. 
versionadded:: 20.1.0 + """ + return attrib( + default=default, + validator=validator, + repr=repr, + hash=hash, + init=init, + metadata=metadata, + converter=converter, + factory=factory, + kw_only=kw_only, + eq=eq, + order=order, + on_setattr=on_setattr, + ) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/attr/_version_info.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/attr/_version_info.py new file mode 100644 index 0000000000000000000000000000000000000000..014e78a1b43851a2795d5b0e45cfde708e577460 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/attr/_version_info.py @@ -0,0 +1,85 @@ +from __future__ import absolute_import, division, print_function + +from functools import total_ordering + +from ._funcs import astuple +from ._make import attrib, attrs + + +@total_ordering +@attrs(eq=False, order=False, slots=True, frozen=True) +class VersionInfo(object): + """ + A version object that can be compared to tuple of length 1--4: + + >>> attr.VersionInfo(19, 1, 0, "final") <= (19, 2) + True + >>> attr.VersionInfo(19, 1, 0, "final") < (19, 1, 1) + True + >>> vi = attr.VersionInfo(19, 2, 0, "final") + >>> vi < (19, 1, 1) + False + >>> vi < (19,) + False + >>> vi == (19, 2,) + True + >>> vi == (19, 2, 1) + False + + .. versionadded:: 19.2 + """ + + year = attrib(type=int) + minor = attrib(type=int) + micro = attrib(type=int) + releaselevel = attrib(type=str) + + @classmethod + def _from_version_string(cls, s): + """ + Parse *s* and return a _VersionInfo. + """ + v = s.split(".") + if len(v) == 3: + v.append("final") + + return cls( + year=int(v[0]), minor=int(v[1]), micro=int(v[2]), releaselevel=v[3] + ) + + def _ensure_tuple(self, other): + """ + Ensure *other* is a tuple of a valid length. + + Returns a possibly transformed *other* and ourselves as a tuple of + the same length as *other*. + """ + + if self.__class__ is other.__class__: + other = astuple(other) + + if not isinstance(other, tuple): + raise NotImplementedError + + if not (1 <= len(other) <= 4): + raise NotImplementedError + + return astuple(self)[: len(other)], other + + def __eq__(self, other): + try: + us, them = self._ensure_tuple(other) + except NotImplementedError: + return NotImplemented + + return us == them + + def __lt__(self, other): + try: + us, them = self._ensure_tuple(other) + except NotImplementedError: + return NotImplemented + + # Since alphabetically "dev0" < "final" < "post1" < "post2", we don't + # have to do anything special with releaselevel for now. + return us < them diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/attr/_version_info.pyi b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/attr/_version_info.pyi new file mode 100644 index 0000000000000000000000000000000000000000..45ced086337783c4b73b26cd17d2c1c260e24029 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/attr/_version_info.pyi @@ -0,0 +1,9 @@ +class VersionInfo: + @property + def year(self) -> int: ... + @property + def minor(self) -> int: ... + @property + def micro(self) -> int: ... + @property + def releaselevel(self) -> str: ... 
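The provisional ``define``/``field`` pair added in ``_next_gen.py`` above is a keyword-only front end to ``attr.s``/``attr.ib`` with safer defaults (``slots=True``, ``auto_exc=True``, and validation on ``__setattr__`` for mutable classes). A minimal sketch of the intended usage, assuming this attrs 20.3.0 build re-exports the pair as ``attr.define`` and ``attr.field`` on Python 3.6+ (the ``Point`` class is invented for illustration; the traceback is abridged):

.. code-block:: pycon

    >>> import attr
    >>> @attr.define
    ... class Point:
    ...     x: int = attr.field(validator=attr.validators.instance_of(int))
    ...     y: int = 0
    >>> Point(1)
    Point(x=1, y=0)
    >>> p = Point(1, 2)
    >>> p.x = "3"  # mutable define() classes run validators on assignment
    Traceback (most recent call last):
      ...
    TypeError: ("'x' must be <class 'int'> (got '3' that is a <class 'str'>). ...

Note that ``frozen`` is simply ``partial(define, frozen=True, on_setattr=None)``, so the immutable variant needs no extra arguments.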
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/attr/converters.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/attr/converters.py
new file mode 100644
index 0000000000000000000000000000000000000000..715ce17859f456e7197d8d3b09ec4ecea8b004d1
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/attr/converters.py
@@ -0,0 +1,85 @@
+"""
+Commonly useful converters.
+"""
+
+from __future__ import absolute_import, division, print_function
+
+from ._make import NOTHING, Factory, pipe
+
+
+__all__ = [
+    "pipe",
+    "optional",
+    "default_if_none",
+]
+
+
+def optional(converter):
+    """
+    A converter that allows an attribute to be optional. An optional attribute
+    is one which can be set to ``None``.
+
+    :param callable converter: The converter that is used for non-``None``
+        values.
+
+    .. versionadded:: 17.1.0
+    """
+
+    def optional_converter(val):
+        if val is None:
+            return None
+        return converter(val)
+
+    return optional_converter
+
+
+def default_if_none(default=NOTHING, factory=None):
+    """
+    A converter that replaces ``None`` values with *default* or the result
+    of *factory*.
+
+    :param default: Value to be used if ``None`` is passed. Passing an
+        instance of `attr.Factory` is supported; however, the ``takes_self``
+        option is *not*.
+    :param callable factory: A callable that takes no parameters and whose
+        result is used if ``None`` is passed.
+
+    :raises TypeError: If **neither** *default* **nor** *factory* is passed.
+    :raises TypeError: If **both** *default* and *factory* are passed.
+    :raises ValueError: If an instance of `attr.Factory` is passed with
+        ``takes_self=True``.
+
+    .. versionadded:: 18.2.0
+    """
+    if default is NOTHING and factory is None:
+        raise TypeError("Must pass either `default` or `factory`.")
+
+    if default is not NOTHING and factory is not None:
+        raise TypeError(
+            "Must pass either `default` or `factory` but not both."
+        )
+
+    if factory is not None:
+        default = Factory(factory)
+
+    if isinstance(default, Factory):
+        if default.takes_self:
+            raise ValueError(
+                "`takes_self` is not supported by default_if_none."
+            )
+
+        def default_if_none_converter(val):
+            if val is not None:
+                return val
+
+            return default.factory()
+
+    else:
+
+        def default_if_none_converter(val):
+            if val is not None:
+                return val
+
+            return default
+
+    return default_if_none_converter
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/attr/converters.pyi b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/attr/converters.pyi
new file mode 100644
index 0000000000000000000000000000000000000000..7b0caa14f0c1d2224744a6a5c3650004082fadcf
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/attr/converters.pyi
@@ -0,0 +1,11 @@
+from typing import TypeVar, Optional, Callable, overload
+from . import _ConverterType
+
+_T = TypeVar("_T")
+
+def pipe(*validators: _ConverterType) -> _ConverterType: ...
+def optional(converter: _ConverterType) -> _ConverterType: ...
+@overload
+def default_if_none(default: _T) -> _ConverterType: ...
+@overload
+def default_if_none(*, factory: Callable[[], _T]) -> _ConverterType: ...
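To make the two converters above concrete, here is a short usage sketch against the classic ``attr.s`` API (the ``Config`` class and its attributes are invented for illustration):

.. code-block:: pycon

    >>> import attr
    >>> from attr.converters import optional, default_if_none
    >>> @attr.s
    ... class Config(object):
    ...     retries = attr.ib(converter=optional(int), default=None)
    ...     name = attr.ib(converter=default_if_none("unknown"), default=None)
    >>> Config("3")
    Config(retries=3, name='unknown')
    >>> Config(None, "fw01")
    Config(retries=None, name='fw01')

``optional(int)`` passes ``None`` through untouched and coerces everything else, while ``default_if_none`` substitutes its replacement value exactly when ``None`` comes in.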
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/attr/exceptions.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/attr/exceptions.py
new file mode 100644
index 0000000000000000000000000000000000000000..fcd89106f16fed3fbb84fc2f45b5327ecd5848cb
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/attr/exceptions.py
@@ -0,0 +1,92 @@
+from __future__ import absolute_import, division, print_function
+
+
+class FrozenError(AttributeError):
+    """
+    A frozen/immutable instance or attribute has been attempted to be
+    modified.
+
+    It mirrors the behavior of ``namedtuples`` by using the same error message
+    and subclassing `AttributeError`.
+
+    .. versionadded:: 20.1.0
+    """
+
+    msg = "can't set attribute"
+    args = [msg]
+
+
+class FrozenInstanceError(FrozenError):
+    """
+    A frozen instance has been attempted to be modified.
+
+    .. versionadded:: 16.1.0
+    """
+
+
+class FrozenAttributeError(FrozenError):
+    """
+    A frozen attribute has been attempted to be modified.
+
+    .. versionadded:: 20.1.0
+    """
+
+
+class AttrsAttributeNotFoundError(ValueError):
+    """
+    An ``attrs`` function couldn't find an attribute that the user asked for.
+
+    .. versionadded:: 16.2.0
+    """
+
+
+class NotAnAttrsClassError(ValueError):
+    """
+    A non-``attrs`` class has been passed into an ``attrs`` function.
+
+    .. versionadded:: 16.2.0
+    """
+
+
+class DefaultAlreadySetError(RuntimeError):
+    """
+    A default has been set using ``attr.ib()`` and an attempt was made to
+    reset it using the decorator.
+
+    .. versionadded:: 17.1.0
+    """
+
+
+class UnannotatedAttributeError(RuntimeError):
+    """
+    A class with ``auto_attribs=True`` has an ``attr.ib()`` without a type
+    annotation.
+
+    .. versionadded:: 17.3.0
+    """
+
+
+class PythonTooOldError(RuntimeError):
+    """
+    It was attempted to use an ``attrs`` feature that requires a newer Python
+    version.
+
+    .. versionadded:: 18.2.0
+    """
+
+
+class NotCallableError(TypeError):
+    """
+    An ``attr.ib()`` requiring a callable has been set with a value
+    that is not callable.
+
+    .. versionadded:: 19.2.0
+    """
+
+    def __init__(self, msg, value):
+        super(TypeError, self).__init__(msg, value)
+        self.msg = msg
+        self.value = value
+
+    def __str__(self):
+        return str(self.msg)
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/attr/exceptions.pyi b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/attr/exceptions.pyi
new file mode 100644
index 0000000000000000000000000000000000000000..f2680118b404db8f5227d04d27e8439331341c4d
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/attr/exceptions.pyi
@@ -0,0 +1,17 @@
+from typing import Any
+
+class FrozenError(AttributeError):
+    msg: str = ...
+
+class FrozenInstanceError(FrozenError): ...
+class FrozenAttributeError(FrozenError): ...
+class AttrsAttributeNotFoundError(ValueError): ...
+class NotAnAttrsClassError(ValueError): ...
+class DefaultAlreadySetError(RuntimeError): ...
+class UnannotatedAttributeError(RuntimeError): ...
+class PythonTooOldError(RuntimeError): ...
+
+class NotCallableError(TypeError):
+    msg: str = ...
+    value: Any = ...
+    def __init__(self, msg: str, value: Any) -> None: ...
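The ``FrozenError`` hierarchy just added lets callers catch instance-level and attribute-level mutation errors either precisely or through the shared base class. A brief sketch (the ``Locked`` class is invented for illustration):

.. code-block:: pycon

    >>> import attr
    >>> @attr.s(frozen=True)
    ... class Locked(object):
    ...     x = attr.ib()
    >>> obj = Locked(1)
    >>> obj.x = 2
    Traceback (most recent call last):
      ...
    attr.exceptions.FrozenInstanceError: can't set attribute
    >>> try:
    ...     obj.x = 2
    ... except attr.exceptions.FrozenError:  # also catches FrozenAttributeError
    ...     print("rejected")
    rejected

Because ``FrozenError`` subclasses `AttributeError`, existing ``except AttributeError`` handlers keep working, mirroring ``namedtuple`` behavior as the docstring notes.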
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/attr/filters.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/attr/filters.py new file mode 100644 index 0000000000000000000000000000000000000000..dc47e8fa38ce6af08ae29125dc97ee2b19e948d2 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/attr/filters.py @@ -0,0 +1,52 @@ +""" +Commonly useful filters for `attr.asdict`. +""" + +from __future__ import absolute_import, division, print_function + +from ._compat import isclass +from ._make import Attribute + + +def _split_what(what): + """ + Returns a tuple of `frozenset`s of classes and attributes. + """ + return ( + frozenset(cls for cls in what if isclass(cls)), + frozenset(cls for cls in what if isinstance(cls, Attribute)), + ) + + +def include(*what): + """ + Whitelist *what*. + + :param what: What to whitelist. + :type what: `list` of `type` or `attr.Attribute`\\ s + + :rtype: `callable` + """ + cls, attrs = _split_what(what) + + def include_(attribute, value): + return value.__class__ in cls or attribute in attrs + + return include_ + + +def exclude(*what): + """ + Blacklist *what*. + + :param what: What to blacklist. + :type what: `list` of classes or `attr.Attribute`\\ s. + + :rtype: `callable` + """ + cls, attrs = _split_what(what) + + def exclude_(attribute, value): + return value.__class__ not in cls and attribute not in attrs + + return exclude_ diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/attr/filters.pyi b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/attr/filters.pyi new file mode 100644 index 0000000000000000000000000000000000000000..68368fe2b92d8f61ef4ca176d007975af08c3cca --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/attr/filters.pyi @@ -0,0 +1,5 @@ +from typing import Union, Any +from . import Attribute, _FilterType + +def include(*what: Union[type, Attribute[Any]]) -> _FilterType[Any]: ... +def exclude(*what: Union[type, Attribute[Any]]) -> _FilterType[Any]: ... diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/attr/py.typed b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/attr/py.typed new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/attr/setters.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/attr/setters.py new file mode 100644 index 0000000000000000000000000000000000000000..240014b3c1e4eb2f264ca6cabe0a426da2eb7aac --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/attr/setters.py @@ -0,0 +1,77 @@ +""" +Commonly used hooks for on_setattr. +""" + +from __future__ import absolute_import, division, print_function + +from . import _config +from .exceptions import FrozenAttributeError + + +def pipe(*setters): + """ + Run all *setters* and return the return value of the last one. + + .. versionadded:: 20.1.0 + """ + + def wrapped_pipe(instance, attrib, new_value): + rv = new_value + + for setter in setters: + rv = setter(instance, attrib, rv) + + return rv + + return wrapped_pipe + + +def frozen(_, __, ___): + """ + Prevent an attribute to be modified. + + .. 
versionadded:: 20.1.0
+    """
+    raise FrozenAttributeError()
+
+
+def validate(instance, attrib, new_value):
+    """
+    Run *attrib*'s validator on *new_value* if it has one.
+
+    .. versionadded:: 20.1.0
+    """
+    if _config._run_validators is False:
+        return new_value
+
+    v = attrib.validator
+    if not v:
+        return new_value
+
+    v(instance, attrib, new_value)
+
+    return new_value
+
+
+def convert(instance, attrib, new_value):
+    """
+    Run *attrib*'s converter -- if it has one -- on *new_value* and return the
+    result.
+
+    .. versionadded:: 20.1.0
+    """
+    c = attrib.converter
+    if c:
+        return c(new_value)
+
+    return new_value
+
+
+NO_OP = object()
+"""
+Sentinel for disabling class-wide *on_setattr* hooks for certain attributes.
+
+Does not work in `pipe` or within lists.
+
+.. versionadded:: 20.1.0
+"""
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/attr/setters.pyi b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/attr/setters.pyi
new file mode 100644
index 0000000000000000000000000000000000000000..19bc33fd1e27057f80f89125607719a296158f24
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/attr/setters.pyi
@@ -0,0 +1,18 @@
+from . import _OnSetAttrType, Attribute
+from typing import TypeVar, Any, NewType, NoReturn, cast
+
+_T = TypeVar("_T")
+
+def frozen(
+    instance: Any, attribute: Attribute, new_value: Any
+) -> NoReturn: ...
+def pipe(*setters: _OnSetAttrType) -> _OnSetAttrType: ...
+def validate(instance: Any, attribute: Attribute[_T], new_value: _T) -> _T: ...
+
+# convert is allowed to return Any, because they can be chained using pipe.
+def convert(
+    instance: Any, attribute: Attribute[Any], new_value: Any
+) -> Any: ...
+
+_NoOpType = NewType("_NoOpType", object)
+NO_OP: _NoOpType
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/attr/validators.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/attr/validators.py
new file mode 100644
index 0000000000000000000000000000000000000000..b9a73054e9c302fc28d455b95368abf539a3196a
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/attr/validators.py
@@ -0,0 +1,379 @@
+"""
+Commonly useful validators.
+"""
+
+from __future__ import absolute_import, division, print_function
+
+import re
+
+from ._make import _AndValidator, and_, attrib, attrs
+from .exceptions import NotCallableError
+
+
+__all__ = [
+    "and_",
+    "deep_iterable",
+    "deep_mapping",
+    "in_",
+    "instance_of",
+    "is_callable",
+    "matches_re",
+    "optional",
+    "provides",
+]
+
+
+@attrs(repr=False, slots=True, hash=True)
+class _InstanceOfValidator(object):
+    type = attrib()
+
+    def __call__(self, inst, attr, value):
+        """
+        We use a callable class to be able to change the ``__repr__``.
+        """
+        if not isinstance(value, self.type):
+            raise TypeError(
+                "'{name}' must be {type!r} (got {value!r} that is a "
+                "{actual!r}).".format(
+                    name=attr.name,
+                    type=self.type,
+                    actual=value.__class__,
+                    value=value,
+                ),
+                attr,
+                self.type,
+                value,
+            )
+
+    def __repr__(self):
+        return "<instance_of validator for type {type!r}>".format(
+            type=self.type
+        )
+
+
+def instance_of(type):
+    """
+    A validator that raises a `TypeError` if the initializer is called
+    with a wrong type for this particular attribute (checks are performed
+    using `isinstance`, therefore it's also valid to pass a tuple of types).
+
+    :param type: The type to check for.
+    :type type: type or tuple of types
+
+    :raises TypeError: With a human readable error message, the attribute
+        (of type `attr.Attribute`), the expected type, and the value it
+        got.
+    """
+    return _InstanceOfValidator(type)
+
+
+@attrs(repr=False, frozen=True, slots=True)
+class _MatchesReValidator(object):
+    regex = attrib()
+    flags = attrib()
+    match_func = attrib()
+
+    def __call__(self, inst, attr, value):
+        """
+        We use a callable class to be able to change the ``__repr__``.
+        """
+        if not self.match_func(value):
+            raise ValueError(
+                "'{name}' must match regex {regex!r}"
+                " ({value!r} doesn't)".format(
+                    name=attr.name, regex=self.regex.pattern, value=value
+                ),
+                attr,
+                self.regex,
+                value,
+            )
+
+    def __repr__(self):
+        return "<matches_re validator for pattern {regex!r}>".format(
+            regex=self.regex
+        )
+
+
+def matches_re(regex, flags=0, func=None):
+    r"""
+    A validator that raises `ValueError` if the initializer is called
+    with a string that doesn't match *regex*.
+
+    :param str regex: a regex string to match against
+    :param int flags: flags that will be passed to the underlying re function
+        (default 0)
+    :param callable func: which underlying `re` function to call (options
+        are `re.fullmatch`, `re.search`, `re.match`, default
+        is ``None`` which means either `re.fullmatch` or an emulation of
+        it on Python 2). For performance reasons, they won't be used directly
+        but on a pre-`re.compile`\ ed pattern.
+
+    .. versionadded:: 19.2.0
+    """
+    fullmatch = getattr(re, "fullmatch", None)
+    valid_funcs = (fullmatch, None, re.search, re.match)
+    if func not in valid_funcs:
+        raise ValueError(
+            "'func' must be one of %s."
+            % (
+                ", ".join(
+                    sorted(
+                        e and e.__name__ or "None" for e in set(valid_funcs)
+                    )
+                ),
+            )
+        )
+
+    pattern = re.compile(regex, flags)
+    if func is re.match:
+        match_func = pattern.match
+    elif func is re.search:
+        match_func = pattern.search
+    else:
+        if fullmatch:
+            match_func = pattern.fullmatch
+        else:
+            pattern = re.compile(r"(?:{})\Z".format(regex), flags)
+            match_func = pattern.match
+
+    return _MatchesReValidator(pattern, flags, match_func)
+
+
+@attrs(repr=False, slots=True, hash=True)
+class _ProvidesValidator(object):
+    interface = attrib()
+
+    def __call__(self, inst, attr, value):
+        """
+        We use a callable class to be able to change the ``__repr__``.
+        """
+        if not self.interface.providedBy(value):
+            raise TypeError(
+                "'{name}' must provide {interface!r} which {value!r} "
+                "doesn't.".format(
+                    name=attr.name, interface=self.interface, value=value
+                ),
+                attr,
+                self.interface,
+                value,
+            )
+
+    def __repr__(self):
+        return "<provides validator for interface {interface!r}>".format(
+            interface=self.interface
+        )
+
+
+def provides(interface):
+    """
+    A validator that raises a `TypeError` if the initializer is called
+    with an object that does not provide the requested *interface* (checks are
+    performed using ``interface.providedBy(value)`` (see `zope.interface
+    <https://zopeinterface.readthedocs.io/en/latest/>`_).
+
+    :param interface: The interface to check for.
+    :type interface: ``zope.interface.Interface``
+
+    :raises TypeError: With a human readable error message, the attribute
+        (of type `attr.Attribute`), the expected interface, and the
+        value it got.
+    """
+    return _ProvidesValidator(interface)
+
+
+@attrs(repr=False, slots=True, hash=True)
+class _OptionalValidator(object):
+    validator = attrib()
+
+    def __call__(self, inst, attr, value):
+        if value is None:
+            return
+
+        self.validator(inst, attr, value)
+
+    def __repr__(self):
+        return "<optional validator for {what} or None>".format(
+            what=repr(self.validator)
+        )
+
+
+def optional(validator):
+    """
+    A validator that makes an attribute optional. An optional attribute is one
+    which can be set to ``None`` in addition to satisfying the requirements of
+    the sub-validator.
+
+    :param validator: A validator (or a list of validators) that is used for
+        non-``None`` values.
+    :type validator: callable or `list` of callables.
+
+    .. versionadded:: 15.1.0
+    .. versionchanged:: 17.1.0 *validator* can be a list of validators.
+    """
+    if isinstance(validator, list):
+        return _OptionalValidator(_AndValidator(validator))
+    return _OptionalValidator(validator)
+
+
+@attrs(repr=False, slots=True, hash=True)
+class _InValidator(object):
+    options = attrib()
+
+    def __call__(self, inst, attr, value):
+        try:
+            in_options = value in self.options
+        except TypeError:  # e.g. `1 in "abc"`
+            in_options = False
+
+        if not in_options:
+            raise ValueError(
+                "'{name}' must be in {options!r} (got {value!r})".format(
+                    name=attr.name, options=self.options, value=value
+                )
+            )
+
+    def __repr__(self):
+        return "<in_ validator with options {options!r}>".format(
+            options=self.options
+        )
+
+
+def in_(options):
+    """
+    A validator that raises a `ValueError` if the initializer is called
+    with a value that does not belong in the options provided. The check is
+    performed using ``value in options``.
+
+    :param options: Allowed options.
+    :type options: list, tuple, `enum.Enum`, ...
+
+    :raises ValueError: With a human readable error message, the attribute (of
+        type `attr.Attribute`), the expected options, and the value it
+        got.
+
+    .. versionadded:: 17.1.0
+    """
+    return _InValidator(options)
+
+
+@attrs(repr=False, slots=False, hash=True)
+class _IsCallableValidator(object):
+    def __call__(self, inst, attr, value):
+        """
+        We use a callable class to be able to change the ``__repr__``.
+        """
+        if not callable(value):
+            message = (
+                "'{name}' must be callable "
+                "(got {value!r} that is a {actual!r})."
+            )
+            raise NotCallableError(
+                msg=message.format(
+                    name=attr.name, value=value, actual=value.__class__
+                ),
+                value=value,
+            )
+
+    def __repr__(self):
+        return "<is_callable validator>"
+
+
+def is_callable():
+    """
+    A validator that raises a `attr.exceptions.NotCallableError` if the
+    initializer is called with a value for this particular attribute
+    that is not callable.
+
+    .. versionadded:: 19.1.0
+
+    :raises `attr.exceptions.NotCallableError`: With a human readable error
+        message containing the attribute (`attr.Attribute`) name,
+        and the value it got.
+    """
+    return _IsCallableValidator()
+
+
+@attrs(repr=False, slots=True, hash=True)
+class _DeepIterable(object):
+    member_validator = attrib(validator=is_callable())
+    iterable_validator = attrib(
+        default=None, validator=optional(is_callable())
+    )
+
+    def __call__(self, inst, attr, value):
+        """
+        We use a callable class to be able to change the ``__repr__``.
+        """
+        if self.iterable_validator is not None:
+            self.iterable_validator(inst, attr, value)
+
+        for member in value:
+            self.member_validator(inst, attr, member)
+
+    def __repr__(self):
+        iterable_identifier = (
+            ""
+            if self.iterable_validator is None
+            else " {iterable!r}".format(iterable=self.iterable_validator)
+        )
+        return (
+            "<deep_iterable validator for{iterable_identifier}"
+            " iterables of {member!r}>"
+        ).format(
+            iterable_identifier=iterable_identifier,
+            member=self.member_validator,
+        )
+
+
+def deep_iterable(member_validator, iterable_validator=None):
+    """
+    A validator that performs deep validation of an iterable.
+
+    :param member_validator: Validator to apply to iterable members
+    :param iterable_validator: Validator to apply to iterable itself
+        (optional)
+
+    .. versionadded:: 19.1.0
+
+    :raises TypeError: if any sub-validators fail
+    """
+    return _DeepIterable(member_validator, iterable_validator)
+
+
+@attrs(repr=False, slots=True, hash=True)
+class _DeepMapping(object):
+    key_validator = attrib(validator=is_callable())
+    value_validator = attrib(validator=is_callable())
+    mapping_validator = attrib(default=None, validator=optional(is_callable()))
+
+    def __call__(self, inst, attr, value):
+        """
+        We use a callable class to be able to change the ``__repr__``.
+        """
+        if self.mapping_validator is not None:
+            self.mapping_validator(inst, attr, value)
+
+        for key in value:
+            self.key_validator(inst, attr, key)
+            self.value_validator(inst, attr, value[key])
+
+    def __repr__(self):
+        return (
+            "<deep_mapping validator for objects mapping {key!r} to {value!r}>"
+        ).format(key=self.key_validator, value=self.value_validator)
+
+
+def deep_mapping(key_validator, value_validator, mapping_validator=None):
+    """
+    A validator that performs deep validation of a dictionary.
+
+    :param key_validator: Validator to apply to dictionary keys
+    :param value_validator: Validator to apply to dictionary values
+    :param mapping_validator: Validator to apply to top-level mapping
+        attribute (optional)
+
+    .. versionadded:: 19.1.0
+
+    :raises TypeError: if any sub-validators fail
+    """
+    return _DeepMapping(key_validator, value_validator, mapping_validator)
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/attr/validators.pyi b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/attr/validators.pyi
new file mode 100644
index 0000000000000000000000000000000000000000..9a22abb197d7b2da7402d8f05cebee4186f9e15f
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/attr/validators.pyi
@@ -0,0 +1,66 @@
+from typing import (
+    Container,
+    List,
+    Union,
+    TypeVar,
+    Type,
+    Any,
+    Optional,
+    Tuple,
+    Iterable,
+    Mapping,
+    Callable,
+    Match,
+    AnyStr,
+    overload,
+)
+from . import _ValidatorType
+
+_T = TypeVar("_T")
+_T1 = TypeVar("_T1")
+_T2 = TypeVar("_T2")
+_T3 = TypeVar("_T3")
+_I = TypeVar("_I", bound=Iterable)
+_K = TypeVar("_K")
+_V = TypeVar("_V")
+_M = TypeVar("_M", bound=Mapping)
+
+# To be more precise on instance_of use some overloads.
+# If there are more than 3 items in the tuple then we fall back to Any
+@overload
+def instance_of(type: Type[_T]) -> _ValidatorType[_T]: ...
+@overload
+def instance_of(type: Tuple[Type[_T]]) -> _ValidatorType[_T]: ...
+@overload
+def instance_of(
+    type: Tuple[Type[_T1], Type[_T2]]
+) -> _ValidatorType[Union[_T1, _T2]]: ...
+@overload
+def instance_of(
+    type: Tuple[Type[_T1], Type[_T2], Type[_T3]]
+) -> _ValidatorType[Union[_T1, _T2, _T3]]: ...
+@overload
+def instance_of(type: Tuple[type, ...]) -> _ValidatorType[Any]: ...
+def provides(interface: Any) -> _ValidatorType[Any]: ...
+def optional(
+    validator: Union[_ValidatorType[_T], List[_ValidatorType[_T]]]
+) -> _ValidatorType[Optional[_T]]: ...
+def in_(options: Container[_T]) -> _ValidatorType[_T]: ...
+def and_(*validators: _ValidatorType[_T]) -> _ValidatorType[_T]: ...
+def matches_re(
+    regex: AnyStr,
+    flags: int = ...,
+    func: Optional[
+        Callable[[AnyStr, AnyStr, int], Optional[Match[AnyStr]]]
+    ] = ...,
+) -> _ValidatorType[AnyStr]: ...
+def deep_iterable(
+    member_validator: _ValidatorType[_T],
+    iterable_validator: Optional[_ValidatorType[_I]] = ...,
+) -> _ValidatorType[_I]: ...
+def deep_mapping( + key_validator: _ValidatorType[_K], + value_validator: _ValidatorType[_V], + mapping_validator: Optional[_ValidatorType[_M]] = ..., +) -> _ValidatorType[_M]: ... +def is_callable() -> _ValidatorType[_T]: ... diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/attrs-20.3.0.dist-info/AUTHORS.rst b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/attrs-20.3.0.dist-info/AUTHORS.rst new file mode 100644 index 0000000000000000000000000000000000000000..f14ef6c60749458fcb60bfdee27efe4d957b4aeb --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/attrs-20.3.0.dist-info/AUTHORS.rst @@ -0,0 +1,11 @@ +Credits +======= + +``attrs`` is written and maintained by `Hynek Schlawack `_. + +The development is kindly supported by `Variomedia AG `_. + +A full list of contributors can be found in `GitHub's overview `_. + +It’s the spiritual successor of `characteristic `_ and aspires to fix some of it clunkiness and unfortunate decisions. +Both were inspired by Twisted’s `FancyEqMixin `_ but both are implemented using class decorators because `subclassing is bad for you `_, m’kay? diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/attrs-20.3.0.dist-info/INSTALLER b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/attrs-20.3.0.dist-info/INSTALLER new file mode 100644 index 0000000000000000000000000000000000000000..a1b589e38a32041e49332e5e81c2d363dc418d68 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/attrs-20.3.0.dist-info/INSTALLER @@ -0,0 +1 @@ +pip diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/attrs-20.3.0.dist-info/LICENSE b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/attrs-20.3.0.dist-info/LICENSE new file mode 100644 index 0000000000000000000000000000000000000000..7ae3df930976bd01d34041b1c7ceeb4b32aace8c --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/attrs-20.3.0.dist-info/LICENSE @@ -0,0 +1,21 @@ +The MIT License (MIT) + +Copyright (c) 2015 Hynek Schlawack + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. 
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/attrs-20.3.0.dist-info/METADATA b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/attrs-20.3.0.dist-info/METADATA new file mode 100644 index 0000000000000000000000000000000000000000..a92cacb0072abd9fb4d805967ef1fb7e1b14ba01 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/attrs-20.3.0.dist-info/METADATA @@ -0,0 +1,241 @@ +Metadata-Version: 2.1 +Name: attrs +Version: 20.3.0 +Summary: Classes Without Boilerplate +Home-page: https://www.attrs.org/ +Author: Hynek Schlawack +Author-email: hs@ox.cx +Maintainer: Hynek Schlawack +Maintainer-email: hs@ox.cx +License: MIT +Project-URL: Documentation, https://www.attrs.org/ +Project-URL: Bug Tracker, https://github.com/python-attrs/attrs/issues +Project-URL: Source Code, https://github.com/python-attrs/attrs +Project-URL: Funding, https://github.com/sponsors/hynek +Project-URL: Tidelift, https://tidelift.com/subscription/pkg/pypi-attrs?utm_source=pypi-attrs&utm_medium=pypi +Keywords: class,attribute,boilerplate +Platform: UNKNOWN +Classifier: Development Status :: 5 - Production/Stable +Classifier: Intended Audience :: Developers +Classifier: Natural Language :: English +Classifier: License :: OSI Approved :: MIT License +Classifier: Operating System :: OS Independent +Classifier: Programming Language :: Python +Classifier: Programming Language :: Python :: 2 +Classifier: Programming Language :: Python :: 2.7 +Classifier: Programming Language :: Python :: 3 +Classifier: Programming Language :: Python :: 3.5 +Classifier: Programming Language :: Python :: 3.6 +Classifier: Programming Language :: Python :: 3.7 +Classifier: Programming Language :: Python :: 3.8 +Classifier: Programming Language :: Python :: 3.9 +Classifier: Programming Language :: Python :: Implementation :: CPython +Classifier: Programming Language :: Python :: Implementation :: PyPy +Classifier: Topic :: Software Development :: Libraries :: Python Modules +Requires-Python: >=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.* +Description-Content-Type: text/x-rst +Provides-Extra: dev +Requires-Dist: coverage[toml] (>=5.0.2) ; extra == 'dev' +Requires-Dist: hypothesis ; extra == 'dev' +Requires-Dist: pympler ; extra == 'dev' +Requires-Dist: pytest (>=4.3.0) ; extra == 'dev' +Requires-Dist: six ; extra == 'dev' +Requires-Dist: zope.interface ; extra == 'dev' +Requires-Dist: furo ; extra == 'dev' +Requires-Dist: sphinx ; extra == 'dev' +Requires-Dist: pre-commit ; extra == 'dev' +Provides-Extra: docs +Requires-Dist: furo ; extra == 'docs' +Requires-Dist: sphinx ; extra == 'docs' +Requires-Dist: zope.interface ; extra == 'docs' +Provides-Extra: tests +Requires-Dist: coverage[toml] (>=5.0.2) ; extra == 'tests' +Requires-Dist: hypothesis ; extra == 'tests' +Requires-Dist: pympler ; extra == 'tests' +Requires-Dist: pytest (>=4.3.0) ; extra == 'tests' +Requires-Dist: six ; extra == 'tests' +Requires-Dist: zope.interface ; extra == 'tests' +Provides-Extra: tests_no_zope +Requires-Dist: coverage[toml] (>=5.0.2) ; extra == 'tests_no_zope' +Requires-Dist: hypothesis ; extra == 'tests_no_zope' +Requires-Dist: pympler ; extra == 'tests_no_zope' +Requires-Dist: pytest (>=4.3.0) ; extra == 'tests_no_zope' +Requires-Dist: six ; extra == 'tests_no_zope' + +.. 
image:: https://www.attrs.org/en/latest/_static/attrs_logo.png + :alt: attrs Logo + +====================================== +``attrs``: Classes Without Boilerplate +====================================== + +.. image:: https://readthedocs.org/projects/attrs/badge/?version=stable + :target: https://www.attrs.org/en/stable/?badge=stable + :alt: Documentation Status + +.. image:: https://github.com/python-attrs/attrs/workflows/CI/badge.svg?branch=master + :target: https://github.com/python-attrs/attrs/actions?workflow=CI + :alt: CI Status + +.. image:: https://codecov.io/github/python-attrs/attrs/branch/master/graph/badge.svg + :target: https://codecov.io/github/python-attrs/attrs + :alt: Test Coverage + +.. image:: https://img.shields.io/badge/code%20style-black-000000.svg + :target: https://github.com/psf/black + :alt: Code style: black + +.. teaser-begin + +``attrs`` is the Python package that will bring back the **joy** of **writing classes** by relieving you from the drudgery of implementing object protocols (aka `dunder `_ methods). + +Its main goal is to help you to write **concise** and **correct** software without slowing down your code. + +.. teaser-end + +For that, it gives you a class decorator and a way to declaratively define the attributes on that class: + +.. -code-begin- + +.. code-block:: pycon + + >>> import attr + + >>> @attr.s + ... class SomeClass(object): + ... a_number = attr.ib(default=42) + ... list_of_numbers = attr.ib(factory=list) + ... + ... def hard_math(self, another_number): + ... return self.a_number + sum(self.list_of_numbers) * another_number + + + >>> sc = SomeClass(1, [1, 2, 3]) + >>> sc + SomeClass(a_number=1, list_of_numbers=[1, 2, 3]) + + >>> sc.hard_math(3) + 19 + >>> sc == SomeClass(1, [1, 2, 3]) + True + >>> sc != SomeClass(2, [3, 2, 1]) + True + + >>> attr.asdict(sc) + {'a_number': 1, 'list_of_numbers': [1, 2, 3]} + + >>> SomeClass() + SomeClass(a_number=42, list_of_numbers=[]) + + >>> C = attr.make_class("C", ["a", "b"]) + >>> C("foo", "bar") + C(a='foo', b='bar') + + +After *declaring* your attributes ``attrs`` gives you: + +- a concise and explicit overview of the class's attributes, +- a nice human-readable ``__repr__``, +- a complete set of comparison methods (equality and ordering), +- an initializer, +- and much more, + +*without* writing dull boilerplate code again and again and *without* runtime performance penalties. + +On Python 3.6 and later, you can often even drop the calls to ``attr.ib()`` by using `type annotations `_. + +This gives you the power to use actual classes with actual types in your code instead of confusing ``tuple``\ s or `confusingly behaving `_ ``namedtuple``\ s. +Which in turn encourages you to write *small classes* that do `one thing well `_. +Never again violate the `single responsibility principle `_ just because implementing ``__init__`` et al is a painful drag. + + +.. -getting-help- + +Getting Help +============ + +Please use the ``python-attrs`` tag on `StackOverflow `_ to get help. + +Answering questions of your fellow developers is also a great way to help the project! + + +.. -project-information- + +Project Information +=================== + +``attrs`` is released under the `MIT `_ license, +its documentation lives at `Read the Docs `_, +the code on `GitHub `_, +and the latest release on `PyPI `_. +It’s rigorously tested on Python 2.7, 3.5+, and PyPy. + +We collect information on **third-party extensions** in our `wiki `_. +Feel free to browse and add your own! 
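To make the annotation-based style mentioned above concrete, here is a minimal sketch (the ``Point`` class is an illustrative assumption); with ``auto_attribs=True``, attrs collects the annotated names so explicit ``attr.ib()`` calls are unnecessary:

    import attr

    @attr.s(auto_attribs=True)
    class Point(object):
        x: int       # required attribute, no attr.ib() needed
        y: int = 0   # an annotated default works too

    Point(1)  # Point(x=1, y=0)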
+ +If you'd like to contribute to ``attrs`` you're most welcome and we've written `a little guide `_ to get you started! + + +``attrs`` for Enterprise +------------------------ + +Available as part of the Tidelift Subscription. + +The maintainers of ``attrs`` and thousands of other packages are working with Tidelift to deliver commercial support and maintenance for the open source packages you use to build your applications. +Save time, reduce risk, and improve code health, while paying the maintainers of the exact packages you use. +`Learn more. `_ + + +Release Information +=================== + +20.3.0 (2020-11-05) +------------------- + +Backward-incompatible Changes +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +- ``attr.define()``, ``attr.frozen()``, ``attr.mutable()``, and ``attr.field()`` remain **provisional**. + + This release does **not** change anything about them, though they are already widely used in production. + + If you wish to use them together with mypy, you can simply drop `this plugin `_ into your project. + + Feel free to provide feedback on them in the linked issue #668. + + We will release the ``attrs`` namespace once we have the feeling that the APIs have properly settled. + `#668 `_ + + +Changes +^^^^^^^ + +- ``attr.s()`` now has a *field_transformer* hook that is called for all ``Attribute``\ s and returns a (modified or updated) list of ``Attribute`` instances. + ``attr.asdict()`` has a *value_serializer* hook that can change the way values are converted. + Both hooks are meant to help with data (de-)serialization workflows. + `#653 `_ +- ``kw_only=True`` now works on Python 2. + `#700 `_ +- ``raise from`` now works on frozen classes on PyPy. + `#703 `_, + `#712 `_ +- ``attr.asdict()`` and ``attr.astuple()`` now treat ``frozenset``\ s like ``set``\ s with regards to the *retain_collection_types* argument. + `#704 `_ +- The type stubs for ``attr.s()`` and ``attr.make_class()`` are no longer missing the *collect_by_mro* argument. + `#711 `_ + +`Full changelog `_. + +Credits +======= + +``attrs`` is written and maintained by `Hynek Schlawack `_. + +The development is kindly supported by `Variomedia AG `_. + +A full list of contributors can be found in `GitHub's overview `_. + +It’s the spiritual successor of `characteristic `_ and aspires to fix some of its clunkiness and unfortunate decisions. +Both were inspired by Twisted’s `FancyEqMixin `_ but both are implemented using class decorators because `subclassing is bad for you `_, m’kay?
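As a rough sketch of the two hooks named in the changelog above (the ``Event`` class, ``keep_all`` transformer, and ``to_iso`` serializer are illustrative assumptions): *field_transformer* runs once at class-definition time over the list of ``Attribute`` instances, while *value_serializer* runs per value during ``attr.asdict()``:

    import datetime
    import attr

    def keep_all(cls, fields):
        # field_transformer hook: receives the class and its Attribute list
        # and returns the (possibly modified) list; this one is a no-op
        return fields

    def to_iso(inst, field, value):
        # value_serializer hook: called for every value during asdict()
        if isinstance(value, datetime.datetime):
            return value.isoformat()
        return value

    @attr.s(field_transformer=keep_all)
    class Event(object):
        name = attr.ib()
        when = attr.ib()

    ev = Event(name="deploy", when=datetime.datetime(2020, 11, 5))
    attr.asdict(ev, value_serializer=to_iso)
    # -> {'name': 'deploy', 'when': '2020-11-05T00:00:00'}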
+ + diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/attrs-20.3.0.dist-info/RECORD b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/attrs-20.3.0.dist-info/RECORD new file mode 100644 index 0000000000000000000000000000000000000000..2a564f9774da22e0015a8329ceb875fa359311eb --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/attrs-20.3.0.dist-info/RECORD @@ -0,0 +1,39 @@ +attr/__init__.py,sha256=70KmZOgz2sUvtRTC_IuXEeN2ttOyBWHn4XA59aqGXPs,1568 +attr/__init__.pyi,sha256=ca_4sg7z0e_EL7ehy-flXVGAju5PBX2hVo51dUmPMi0,12986 +attr/__pycache__/__init__.cpython-38.pyc,, +attr/__pycache__/_compat.cpython-38.pyc,, +attr/__pycache__/_config.cpython-38.pyc,, +attr/__pycache__/_funcs.cpython-38.pyc,, +attr/__pycache__/_make.cpython-38.pyc,, +attr/__pycache__/_next_gen.cpython-38.pyc,, +attr/__pycache__/_version_info.cpython-38.pyc,, +attr/__pycache__/converters.cpython-38.pyc,, +attr/__pycache__/exceptions.cpython-38.pyc,, +attr/__pycache__/filters.cpython-38.pyc,, +attr/__pycache__/setters.cpython-38.pyc,, +attr/__pycache__/validators.cpython-38.pyc,, +attr/_compat.py,sha256=rZhpP09xbyWSzMv796XQbryIr21oReJFvA70G3lrHxg,7308 +attr/_config.py,sha256=_KvW0mQdH2PYjHc0YfIUaV_o2pVfM7ziMEYTxwmEhOA,514 +attr/_funcs.py,sha256=PvFQlflEswO_qIR2sUr4a4x8ggQpEoDKe3YKM2rLJu4,13081 +attr/_make.py,sha256=61XB4-SHQpFbWbStGWotTTbzVT2m49DUovRgnxpMqmU,88313 +attr/_next_gen.py,sha256=x6TU2rVOXmFmrNNvkfshJsxyRbAAK0wDI4SJV2OI97c,4138 +attr/_version_info.py,sha256=azMi1lNelb3cJvvYUMXsXVbUANkRzbD5IEiaXVpeVr4,2162 +attr/_version_info.pyi,sha256=x_M3L3WuB7r_ULXAWjx959udKQ4HLB8l-hsc1FDGNvk,209 +attr/converters.py,sha256=CaK6iLtEMmemrqU8LQ1D2nWtbo9dGPAv4UaZ0rFzhOA,2214 +attr/converters.pyi,sha256=fVGSfawF3NMy2EBApkC7dAwMuujWCHnGEnnAgsbkVpg,380 +attr/exceptions.py,sha256=gmlET97ikqdQVvy7Ff9p7zVvqc2SsNtTd-r30pva1GE,1950 +attr/exceptions.pyi,sha256=zZq8bCUnKAy9mDtBEw42ZhPhAUIHoTKedDQInJD883M,539 +attr/filters.py,sha256=weDxwATsa69T_0bPVjiM1fGsciAMQmwhY5G8Jm5BxuI,1098 +attr/filters.pyi,sha256=xDpmKQlFdssgxGa5tsl1ADh_3zwAwAT4vUhd8h-8-Tk,214 +attr/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +attr/setters.py,sha256=0ElzHwdVK3dsYcQi2CXkFvhx8fNxUI5OVhw8SWeaKmA,1434 +attr/setters.pyi,sha256=SYr6adhx4f0dSkmmBICg6eK8WMev5jT-KJQJTdul078,567 +attr/validators.py,sha256=6DBx1jt4oZxx1ppvx6JWqm9-UAsYpXC4HTwxJilCeRg,11497 +attr/validators.pyi,sha256=vZgsJqUwrJevh4v_Hd7_RSXqDrBctE6-3AEZ7uYKodo,1868 +attrs-20.3.0.dist-info/AUTHORS.rst,sha256=wsqCNbGz_mklcJrt54APIZHZpoTIJLkXqEhhn4Nd8hc,752 +attrs-20.3.0.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4 +attrs-20.3.0.dist-info/LICENSE,sha256=v2WaKLSSQGAvVrvfSQy-LsUJsVuY-Z17GaUsdA4yeGM,1082 +attrs-20.3.0.dist-info/METADATA,sha256=2XTmALrRRbIZj9J8pJgpKYnyATu_NAL8vfUnqRFpE5w,10220 +attrs-20.3.0.dist-info/RECORD,, +attrs-20.3.0.dist-info/WHEEL,sha256=ADKeyaGyKF5DwBNE0sRE5pvW-bSkFMJfBuhzZ3rceP4,110 +attrs-20.3.0.dist-info/top_level.txt,sha256=tlRYMddkRlKPqJ96wP2_j9uEsmcNHgD2SbuWd4CzGVU,5 diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/attrs-20.3.0.dist-info/WHEEL b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/attrs-20.3.0.dist-info/WHEEL new file mode 100644 index 0000000000000000000000000000000000000000..6d38aa0601b31c7f4c47ff3016173426df4e1d53 --- /dev/null +++ 
b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/attrs-20.3.0.dist-info/WHEEL @@ -0,0 +1,6 @@ +Wheel-Version: 1.0 +Generator: bdist_wheel (0.35.1) +Root-Is-Purelib: true +Tag: py2-none-any +Tag: py3-none-any + diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/attrs-20.3.0.dist-info/top_level.txt b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/attrs-20.3.0.dist-info/top_level.txt new file mode 100644 index 0000000000000000000000000000000000000000..66a062d889dea72f909b74ddc9e576cc68393fd6 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/attrs-20.3.0.dist-info/top_level.txt @@ -0,0 +1 @@ +attr diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cachecontrol/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cachecontrol/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..a1bbbbe3bff592b2bc761772dc6974af4afb7035 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cachecontrol/__init__.py @@ -0,0 +1,11 @@ +"""CacheControl import Interface. + +Make it easy to import from cachecontrol without long namespaces. +""" +__author__ = "Eric Larson" +__email__ = "eric@ionrock.org" +__version__ = "0.12.6" + +from .wrapper import CacheControl +from .adapter import CacheControlAdapter +from .controller import CacheController diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cachecontrol/_cmd.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cachecontrol/_cmd.py new file mode 100644 index 0000000000000000000000000000000000000000..ee8d60d101c6357c44c97421389d8d7b2ba63c09 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cachecontrol/_cmd.py @@ -0,0 +1,57 @@ +import logging + +import requests + +from cachecontrol.adapter import CacheControlAdapter +from cachecontrol.cache import DictCache +from cachecontrol.controller import logger + +from argparse import ArgumentParser + + +def setup_logging(): + logger.setLevel(logging.DEBUG) + handler = logging.StreamHandler() + logger.addHandler(handler) + + +def get_session(): + adapter = CacheControlAdapter( + DictCache(), cache_etags=True, serializer=None, heuristic=None + ) + sess = requests.Session() + sess.mount("http://", adapter) + sess.mount("https://", adapter) + + sess.cache_controller = adapter.controller + return sess + + +def get_args(): + parser = ArgumentParser() + parser.add_argument("url", help="The URL to try and cache") + return parser.parse_args() + + +def main(args=None): + args = get_args() + sess = get_session() + + # Make a request to get a response + resp = sess.get(args.url) + + # Turn on logging + setup_logging() + + # try setting the cache + sess.cache_controller.cache_response(resp.request, resp.raw) + + # Now try to get it + if sess.cache_controller.cached_request(resp.request): + print("Cached!") + else: + print("Not cached :(") + + +if __name__ == "__main__": + main() diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cachecontrol/adapter.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cachecontrol/adapter.py new file mode 100644 index 
0000000000000000000000000000000000000000..de50006af4b9533d16dcae5a18d39a336019286f --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cachecontrol/adapter.py @@ -0,0 +1,133 @@ +import types +import functools +import zlib + +from requests.adapters import HTTPAdapter + +from .controller import CacheController +from .cache import DictCache +from .filewrapper import CallbackFileWrapper + + +class CacheControlAdapter(HTTPAdapter): + invalidating_methods = {"PUT", "DELETE"} + + def __init__( + self, + cache=None, + cache_etags=True, + controller_class=None, + serializer=None, + heuristic=None, + cacheable_methods=None, + *args, + **kw + ): + super(CacheControlAdapter, self).__init__(*args, **kw) + self.cache = DictCache() if cache is None else cache + self.heuristic = heuristic + self.cacheable_methods = cacheable_methods or ("GET",) + + controller_factory = controller_class or CacheController + self.controller = controller_factory( + self.cache, cache_etags=cache_etags, serializer=serializer + ) + + def send(self, request, cacheable_methods=None, **kw): + """ + Send a request. Use the request information to see if it + exists in the cache and cache the response if we need to and can. + """ + cacheable = cacheable_methods or self.cacheable_methods + if request.method in cacheable: + try: + cached_response = self.controller.cached_request(request) + except zlib.error: + cached_response = None + if cached_response: + return self.build_response(request, cached_response, from_cache=True) + + # check for etags and add headers if appropriate + request.headers.update(self.controller.conditional_headers(request)) + + resp = super(CacheControlAdapter, self).send(request, **kw) + + return resp + + def build_response( + self, request, response, from_cache=False, cacheable_methods=None + ): + """ + Build a response by making a request or using the cache. + + This will end up calling send and returning a potentially + cached response + """ + cacheable = cacheable_methods or self.cacheable_methods + if not from_cache and request.method in cacheable: + # Check for any heuristics that might update headers + # before trying to cache. + if self.heuristic: + response = self.heuristic.apply(response) + + # apply any expiration heuristics + if response.status == 304: + # We must have sent an ETag request. This could mean + # that we've been expired already or that we simply + # have an etag. In either case, we want to try and + # update the cache if that is the case. + cached_response = self.controller.update_cached_response( + request, response + ) + + if cached_response is not response: + from_cache = True + + # We are done with the server response, read a + # possible response body (compliant servers will + # not return one, but we cannot be 100% sure) and + # release the connection back to the pool. + response.read(decode_content=False) + response.release_conn() + + response = cached_response + + # We always cache the 301 responses + elif response.status == 301: + self.controller.cache_response(request, response) + else: + # Wrap the response file with a wrapper that will cache the + # response when the stream has been consumed. 
+ response._fp = CallbackFileWrapper( + response._fp, + functools.partial( + self.controller.cache_response, request, response + ), + ) + if response.chunked: + super_update_chunk_length = response._update_chunk_length + + def _update_chunk_length(self): + super_update_chunk_length() + if self.chunk_left == 0: + self._fp._close() + + response._update_chunk_length = types.MethodType( + _update_chunk_length, response + ) + + resp = super(CacheControlAdapter, self).build_response(request, response) + + # See if we should invalidate the cache. + if request.method in self.invalidating_methods and resp.ok: + cache_url = self.controller.cache_url(request.url) + self.cache.delete(cache_url) + + # Give the request a from_cache attr to let people use it + resp.from_cache = from_cache + + return resp + + def close(self): + self.cache.close() + super(CacheControlAdapter, self).close() diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cachecontrol/cache.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cachecontrol/cache.py new file mode 100644 index 0000000000000000000000000000000000000000..94e07732d91fdf38f4559da04db4221d26553d40 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cachecontrol/cache.py @@ -0,0 +1,39 @@ +""" +The cache object API for implementing caches. The default is a thread +safe in-memory dictionary. +""" +from threading import Lock + + +class BaseCache(object): + + def get(self, key): + raise NotImplementedError() + + def set(self, key, value): + raise NotImplementedError() + + def delete(self, key): + raise NotImplementedError() + + def close(self): + pass + + +class DictCache(BaseCache): + + def __init__(self, init_dict=None): + self.lock = Lock() + self.data = init_dict or {} + + def get(self, key): + return self.data.get(key, None) + + def set(self, key, value): + with self.lock: + self.data.update({key: value}) + + def delete(self, key): + with self.lock: + if key in self.data: + self.data.pop(key) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cachecontrol/caches/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cachecontrol/caches/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..0e1658fa5e5e498bcfa84702ed1a53dfcec071ff --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cachecontrol/caches/__init__.py @@ -0,0 +1,2 @@ +from .file_cache import FileCache # noqa +from .redis_cache import RedisCache # noqa diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cachecontrol/caches/file_cache.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cachecontrol/caches/file_cache.py new file mode 100644 index 0000000000000000000000000000000000000000..607b9452428f4343a1a017dfd230b12456060543 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cachecontrol/caches/file_cache.py @@ -0,0 +1,146 @@ +import hashlib +import os +from textwrap import dedent + +from ..cache import BaseCache +from ..controller import CacheController + +try: + FileNotFoundError +except NameError: + # py2.X + FileNotFoundError = (IOError, OSError) + + +def _secure_open_write(filename, fmode): + # We only want to write to this file, so open it in write only mode + flags = os.O_WRONLY + + # os.O_CREAT 
| os.O_EXCL will fail if the file already exists, so we only + # will open *new* files. + # We specify this because we want to ensure that the mode we pass is the + # mode of the file. + flags |= os.O_CREAT | os.O_EXCL + + # Do not follow symlinks to prevent someone from making a symlink that + # we follow and insecurely open a cache file. + if hasattr(os, "O_NOFOLLOW"): + flags |= os.O_NOFOLLOW + + # On Windows we'll mark this file as binary + if hasattr(os, "O_BINARY"): + flags |= os.O_BINARY + + # Before we open our file, we want to delete any existing file that is + # there + try: + os.remove(filename) + except (IOError, OSError): + # The file must not exist already, so we can just skip ahead to opening + pass + + # Open our file, the use of os.O_CREAT | os.O_EXCL will ensure that if a + # race condition happens between the os.remove and this line, that an + # error will be raised. Because we utilize a lockfile this should only + # happen if someone is attempting to attack us. + fd = os.open(filename, flags, fmode) + try: + return os.fdopen(fd, "wb") + + except: + # An error occurred wrapping our FD in a file object + os.close(fd) + raise + + +class FileCache(BaseCache): + + def __init__( + self, + directory, + forever=False, + filemode=0o0600, + dirmode=0o0700, + use_dir_lock=None, + lock_class=None, + ): + + if use_dir_lock is not None and lock_class is not None: + raise ValueError("Cannot use use_dir_lock and lock_class together") + + try: + from lockfile import LockFile + from lockfile.mkdirlockfile import MkdirLockFile + except ImportError: + notice = dedent( + """ + NOTE: In order to use the FileCache you must have + lockfile installed. You can install it via pip: + pip install lockfile + """ + ) + raise ImportError(notice) + + else: + if use_dir_lock: + lock_class = MkdirLockFile + + elif lock_class is None: + lock_class = LockFile + + self.directory = directory + self.forever = forever + self.filemode = filemode + self.dirmode = dirmode + self.lock_class = lock_class + + @staticmethod + def encode(x): + return hashlib.sha224(x.encode()).hexdigest() + + def _fn(self, name): + # NOTE: This method should not change as some may depend on it. + # See: https://github.com/ionrock/cachecontrol/issues/63 + hashed = self.encode(name) + parts = list(hashed[:5]) + [hashed] + return os.path.join(self.directory, *parts) + + def get(self, key): + name = self._fn(key) + try: + with open(name, "rb") as fh: + return fh.read() + + except FileNotFoundError: + return None + + def set(self, key, value): + name = self._fn(key) + + # Make sure the directory exists + try: + os.makedirs(os.path.dirname(name), self.dirmode) + except (IOError, OSError): + pass + + with self.lock_class(name) as lock: + # Write our actual file + with _secure_open_write(lock.path, self.filemode) as fh: + fh.write(value) + + def delete(self, key): + name = self._fn(key) + if not self.forever: + try: + os.remove(name) + except FileNotFoundError: + pass + + +def url_to_file_path(url, filecache): + """Return the file cache path based on the URL. + + This does not ensure the file exists! 
+ """ + key = CacheController.cache_url(url) + return filecache._fn(key) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cachecontrol/caches/redis_cache.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cachecontrol/caches/redis_cache.py new file mode 100644 index 0000000000000000000000000000000000000000..16da0aed96893309a3b5b05725e2ac155db7a67e --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cachecontrol/caches/redis_cache.py @@ -0,0 +1,33 @@ +from __future__ import division + +from datetime import datetime +from cachecontrol.cache import BaseCache + + +class RedisCache(BaseCache): + + def __init__(self, conn): + self.conn = conn + + def get(self, key): + return self.conn.get(key) + + def set(self, key, value, expires=None): + if not expires: + self.conn.set(key, value) + else: + expires = expires - datetime.utcnow() + self.conn.setex(key, int(expires.total_seconds()), value) + + def delete(self, key): + self.conn.delete(key) + + def clear(self): + """Helper for clearing all the keys in a database. Use with + caution!""" + for key in self.conn.keys(): + self.conn.delete(key) + + def close(self): + """Redis uses connection pooling, no need to close the connection.""" + pass diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cachecontrol/compat.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cachecontrol/compat.py new file mode 100644 index 0000000000000000000000000000000000000000..143c8ab08409f02d38b85a7fc3e8b4d0c7c5b448 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cachecontrol/compat.py @@ -0,0 +1,29 @@ +try: + from urllib.parse import urljoin +except ImportError: + from urlparse import urljoin + + +try: + import cPickle as pickle +except ImportError: + import pickle + + +# Handle the case where the requests module has been patched to not have +# urllib3 bundled as part of its source. +try: + from requests.packages.urllib3.response import HTTPResponse +except ImportError: + from urllib3.response import HTTPResponse + +try: + from requests.packages.urllib3.util import is_fp_closed +except ImportError: + from urllib3.util import is_fp_closed + +# Replicate some six behaviour +try: + text_type = unicode +except NameError: + text_type = str diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cachecontrol/controller.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cachecontrol/controller.py new file mode 100644 index 0000000000000000000000000000000000000000..c5c4a50808baeded95191faa42c67722731aa06c --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cachecontrol/controller.py @@ -0,0 +1,376 @@ +""" +The httplib2 algorithms ported for use with requests. +""" +import logging +import re +import calendar +import time +from email.utils import parsedate_tz + +from requests.structures import CaseInsensitiveDict + +from .cache import DictCache +from .serialize import Serializer + + +logger = logging.getLogger(__name__) + +URI = re.compile(r"^(([^:/?#]+):)?(//([^/?#]*))?([^?#]*)(\?([^#]*))?(#(.*))?") + + +def parse_uri(uri): + """Parses a URI using the regex given in Appendix B of RFC 3986. 
+ + (scheme, authority, path, query, fragment) = parse_uri(uri) + """ + groups = URI.match(uri).groups() + return (groups[1], groups[3], groups[4], groups[6], groups[8]) + + +class CacheController(object): + """An interface to see if a request should be cached or not. + """ + + def __init__( + self, cache=None, cache_etags=True, serializer=None, status_codes=None + ): + self.cache = DictCache() if cache is None else cache + self.cache_etags = cache_etags + self.serializer = serializer or Serializer() + self.cacheable_status_codes = status_codes or (200, 203, 300, 301) + + @classmethod + def _urlnorm(cls, uri): + """Normalize the URL to create a safe key for the cache""" + (scheme, authority, path, query, fragment) = parse_uri(uri) + if not scheme or not authority: + raise Exception("Only absolute URIs are allowed. uri = %s" % uri) + + scheme = scheme.lower() + authority = authority.lower() + + if not path: + path = "/" + + # Could do syntax based normalization of the URI before + # computing the digest. See Section 6.2.2 of Std 66. + request_uri = query and "?".join([path, query]) or path + defrag_uri = scheme + "://" + authority + request_uri + + return defrag_uri + + @classmethod + def cache_url(cls, uri): + return cls._urlnorm(uri) + + def parse_cache_control(self, headers): + known_directives = { + # https://tools.ietf.org/html/rfc7234#section-5.2 + "max-age": (int, True), + "max-stale": (int, False), + "min-fresh": (int, True), + "no-cache": (None, False), + "no-store": (None, False), + "no-transform": (None, False), + "only-if-cached": (None, False), + "must-revalidate": (None, False), + "public": (None, False), + "private": (None, False), + "proxy-revalidate": (None, False), + "s-maxage": (int, True), + } + + cc_headers = headers.get("cache-control", headers.get("Cache-Control", "")) + + retval = {} + + for cc_directive in cc_headers.split(","): + if not cc_directive.strip(): + continue + + parts = cc_directive.split("=", 1) + directive = parts[0].strip() + + try: + typ, required = known_directives[directive] + except KeyError: + logger.debug("Ignoring unknown cache-control directive: %s", directive) + continue + + if not typ or not required: + retval[directive] = None + if typ: + try: + retval[directive] = typ(parts[1].strip()) + except IndexError: + if required: + logger.debug( + "Missing value for cache-control " "directive: %s", + directive, + ) + except ValueError: + logger.debug( + "Invalid value for cache-control directive " "%s, must be %s", + directive, + typ.__name__, + ) + + return retval + + def cached_request(self, request): + """ + Return a cached response if it exists in the cache, otherwise + return False.
+ """ + cache_url = self.cache_url(request.url) + logger.debug('Looking up "%s" in the cache', cache_url) + cc = self.parse_cache_control(request.headers) + + # Bail out if the request insists on fresh data + if "no-cache" in cc: + logger.debug('Request header has "no-cache", cache bypassed') + return False + + if "max-age" in cc and cc["max-age"] == 0: + logger.debug('Request header has "max_age" as 0, cache bypassed') + return False + + # Request allows serving from the cache, let's see if we find something + cache_data = self.cache.get(cache_url) + if cache_data is None: + logger.debug("No cache entry available") + return False + + # Check whether it can be deserialized + resp = self.serializer.loads(request, cache_data) + if not resp: + logger.warning("Cache entry deserialization failed, entry ignored") + return False + + # If we have a cached 301, return it immediately. We don't + # need to test our response for other headers b/c it is + # intrinsically "cacheable" as it is Permanent. + # See: + # https://tools.ietf.org/html/rfc7231#section-6.4.2 + # + # Client can try to refresh the value by repeating the request + # with cache busting headers as usual (ie no-cache). + if resp.status == 301: + msg = ( + 'Returning cached "301 Moved Permanently" response ' + "(ignoring date and etag information)" + ) + logger.debug(msg) + return resp + + headers = CaseInsensitiveDict(resp.headers) + if not headers or "date" not in headers: + if "etag" not in headers: + # Without date or etag, the cached response can never be used + # and should be deleted. + logger.debug("Purging cached response: no date or etag") + self.cache.delete(cache_url) + logger.debug("Ignoring cached response: no date") + return False + + now = time.time() + date = calendar.timegm(parsedate_tz(headers["date"])) + current_age = max(0, now - date) + logger.debug("Current age based on date: %i", current_age) + + # TODO: There is an assumption that the result will be a + # urllib3 response object. This may not be best since we + # could probably avoid instantiating or constructing the + # response until we know we need it. + resp_cc = self.parse_cache_control(headers) + + # determine freshness + freshness_lifetime = 0 + + # Check the max-age pragma in the cache control header + if "max-age" in resp_cc: + freshness_lifetime = resp_cc["max-age"] + logger.debug("Freshness lifetime from max-age: %i", freshness_lifetime) + + # If there isn't a max-age, check for an expires header + elif "expires" in headers: + expires = parsedate_tz(headers["expires"]) + if expires is not None: + expire_time = calendar.timegm(expires) - date + freshness_lifetime = max(0, expire_time) + logger.debug("Freshness lifetime from expires: %i", freshness_lifetime) + + # Determine if we are setting freshness limit in the + # request. Note, this overrides what was in the response. + if "max-age" in cc: + freshness_lifetime = cc["max-age"] + logger.debug( + "Freshness lifetime from request max-age: %i", freshness_lifetime + ) + + if "min-fresh" in cc: + min_fresh = cc["min-fresh"] + # adjust our current age by our min fresh + current_age += min_fresh + logger.debug("Adjusted current age from min-fresh: %i", current_age) + + # Return entry if it is fresh enough + if freshness_lifetime > current_age: + logger.debug('The response is "fresh", returning cached response') + logger.debug("%i > %i", freshness_lifetime, current_age) + return resp + + # we're not fresh. 
If we don't have an Etag, clear it out + if "etag" not in headers: + logger.debug('The cached response is "stale" with no etag, purging') + self.cache.delete(cache_url) + + # return the original handler + return False + + def conditional_headers(self, request): + cache_url = self.cache_url(request.url) + resp = self.serializer.loads(request, self.cache.get(cache_url)) + new_headers = {} + + if resp: + headers = CaseInsensitiveDict(resp.headers) + + if "etag" in headers: + new_headers["If-None-Match"] = headers["ETag"] + + if "last-modified" in headers: + new_headers["If-Modified-Since"] = headers["Last-Modified"] + + return new_headers + + def cache_response(self, request, response, body=None, status_codes=None): + """ + Algorithm for caching requests. + + This assumes a requests Response object. + """ + # From httplib2: Don't cache 206's since we aren't going to + # handle byte range requests + cacheable_status_codes = status_codes or self.cacheable_status_codes + if response.status not in cacheable_status_codes: + logger.debug( + "Status code %s not in %s", response.status, cacheable_status_codes + ) + return + + response_headers = CaseInsensitiveDict(response.headers) + + # If we've been given a body and the response has a valid + # Content-Length, then we can check whether the body we've been + # given matches the expected size, and if it doesn't we'll just + # skip trying to cache it. + if ( + body is not None + and "content-length" in response_headers + and response_headers["content-length"].isdigit() + and int(response_headers["content-length"]) != len(body) + ): + return + + cc_req = self.parse_cache_control(request.headers) + cc = self.parse_cache_control(response_headers) + + cache_url = self.cache_url(request.url) + logger.debug('Updating cache with response from "%s"', cache_url) + + # Delete it from the cache if we happen to have it stored there + no_store = False + if "no-store" in cc: + no_store = True + logger.debug('Response header has "no-store"') + if "no-store" in cc_req: + no_store = True + logger.debug('Request header has "no-store"') + if no_store and self.cache.get(cache_url): + logger.debug('Purging existing cache entry to honor "no-store"') + self.cache.delete(cache_url) + if no_store: + return + + # https://tools.ietf.org/html/rfc7234#section-4.1: + # A Vary header field-value of "*" always fails to match. + # Storing such a response leads to a deserialization warning + # during cache lookup and is not allowed to ever be served, + # so storing it can be avoided. + if "*" in response_headers.get("vary", ""): + logger.debug('Response header has "Vary: *"') + return + + # If we've been given an etag, then keep the response + if self.cache_etags and "etag" in response_headers: + logger.debug("Caching due to etag") + self.cache.set( + cache_url, self.serializer.dumps(request, response, body=body) + ) + + # Add to the cache any 301s. We do this before looking at + # the Date headers. + elif response.status == 301: + logger.debug("Caching permanent redirect") + self.cache.set(cache_url, self.serializer.dumps(request, response)) + + # Add to the cache if the response headers demand it. If there + # is no date header then we can't do anything about expiring + # the cache.
+ elif "date" in response_headers: + # cache when there is a max-age > 0 + if "max-age" in cc and cc["max-age"] > 0: + logger.debug("Caching b/c date exists and max-age > 0") + self.cache.set( + cache_url, self.serializer.dumps(request, response, body=body) + ) + + # If the request can expire, it means we should cache it + # in the meantime. + elif "expires" in response_headers: + if response_headers["expires"]: + logger.debug("Caching b/c of expires header") + self.cache.set( + cache_url, self.serializer.dumps(request, response, body=body) + ) + + def update_cached_response(self, request, response): + """On a 304 we will get a new set of headers that we want to + update our cached value with, assuming we have one. + + This should only ever be called when we've sent an ETag and + gotten a 304 as the response. + """ + cache_url = self.cache_url(request.url) + + cached_response = self.serializer.loads(request, self.cache.get(cache_url)) + + if not cached_response: + # we didn't have a cached response + return response + + # Let's update our headers with the headers from the new response: + # http://tools.ietf.org/html/draft-ietf-httpbis-p4-conditional-26#section-4.1 + # + # The server isn't supposed to send headers that would make + # the cached body invalid. But... just in case, we'll be sure + # to strip out ones we know might be problematic due to + # typical assumptions. + excluded_headers = ["content-length"] + + cached_response.headers.update( + dict( + (k, v) + for k, v in response.headers.items() + if k.lower() not in excluded_headers + ) + ) + + # we want a 200 b/c we have content via the cache + cached_response.status = 200 + + # update our cache + self.cache.set(cache_url, self.serializer.dumps(request, cached_response)) + + return cached_response diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cachecontrol/filewrapper.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cachecontrol/filewrapper.py new file mode 100644 index 0000000000000000000000000000000000000000..30ed4c5a62a99906ab662a8acfba2ab75e82af2e --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cachecontrol/filewrapper.py @@ -0,0 +1,80 @@ +from io import BytesIO + + +class CallbackFileWrapper(object): + """ + Small wrapper around a fp object which will tee everything read into a + buffer, and when that file is closed it will execute a callback with the + contents of that buffer. + + All attributes are proxied to the underlying file object. + + This class uses members with a double underscore (__) leading prefix so as + not to accidentally shadow an attribute. + """ + + def __init__(self, fp, callback): + self.__buf = BytesIO() + self.__fp = fp + self.__callback = callback + + def __getattr__(self, name): + # The vagaries of garbage collection mean that self.__fp is + # not always set. By using __getattribute__ and the private + # name [0] we can look up the attribute value and raise an + # AttributeError when it doesn't exist. This stops things from + # infinitely recursing calls to getattr in the case where + # self.__fp hasn't been set.
+ # + # [0] https://docs.python.org/2/reference/expressions.html#atom-identifiers + fp = self.__getattribute__("_CallbackFileWrapper__fp") + return getattr(fp, name) + + def __is_fp_closed(self): + try: + return self.__fp.fp is None + + except AttributeError: + pass + + try: + return self.__fp.closed + + except AttributeError: + pass + + # We just don't cache it then. + # TODO: Add some logging here... + return False + + def _close(self): + if self.__callback: + self.__callback(self.__buf.getvalue()) + + # We assign this to None here, because otherwise we can get into + # really tricky problems where the CPython interpreter deadlocks + # because the callback is holding a reference to something which + # has a __del__ method. Setting this to None breaks the cycle + # and allows the garbage collector to do its thing normally. + self.__callback = None + + def read(self, amt=None): + data = self.__fp.read(amt) + self.__buf.write(data) + if self.__is_fp_closed(): + self._close() + + return data + + def _safe_read(self, amt): + data = self.__fp._safe_read(amt) + if amt == 2 and data == b"\r\n": + # urllib executes this read to toss the CRLF at the end + # of the chunk. + return data + + self.__buf.write(data) + if self.__is_fp_closed(): + self._close() + + return data diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cachecontrol/heuristics.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cachecontrol/heuristics.py new file mode 100644 index 0000000000000000000000000000000000000000..6c0e9790d5d0764f2ca005ae955d644bc2098d75 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cachecontrol/heuristics.py @@ -0,0 +1,135 @@ +import calendar +import time + +from email.utils import formatdate, parsedate, parsedate_tz + +from datetime import datetime, timedelta + +TIME_FMT = "%a, %d %b %Y %H:%M:%S GMT" + + +def expire_after(delta, date=None): + date = date or datetime.utcnow() + return date + delta + + +def datetime_to_header(dt): + return formatdate(calendar.timegm(dt.timetuple())) + + +class BaseHeuristic(object): + + def warning(self, response): + """ + Return a valid 1xx warning header value describing the cache + adjustments. + + The response is provided to allow warnings like 113 + http://tools.ietf.org/html/rfc7234#section-5.5.4 where we need + to explicitly say the response is over 24 hours old. + """ + return '110 - "Response is Stale"' + + def update_headers(self, response): + """Update the response headers with any new headers. + + NOTE: This SHOULD always include some Warning header to + signify that the response was cached by the client, not + by way of the provided headers. + """ + return {} + + def apply(self, response): + updated_headers = self.update_headers(response) + + if updated_headers: + response.headers.update(updated_headers) + warning_header_value = self.warning(response) + if warning_header_value is not None: + response.headers.update({"Warning": warning_header_value}) + + return response + + +class OneDayCache(BaseHeuristic): + """ + Cache the response by providing an Expires header 1 day in the + future.
+ """ + + def update_headers(self, response): + headers = {} + + if "expires" not in response.headers: + date = parsedate(response.headers["date"]) + expires = expire_after(timedelta(days=1), date=datetime(*date[:6])) + headers["expires"] = datetime_to_header(expires) + headers["cache-control"] = "public" + return headers + + +class ExpiresAfter(BaseHeuristic): + """ + Cache **all** requests for a defined time period. + """ + + def __init__(self, **kw): + self.delta = timedelta(**kw) + + def update_headers(self, response): + expires = expire_after(self.delta) + return {"expires": datetime_to_header(expires), "cache-control": "public"} + + def warning(self, response): + tmpl = "110 - Automatically cached for %s. Response might be stale" + return tmpl % self.delta + + +class LastModified(BaseHeuristic): + """ + If there is no Expires header already, fall back on Last-Modified + using the heuristic from + http://tools.ietf.org/html/rfc7234#section-4.2.2 + to calculate a reasonable value. + + Firefox also does something like this per + https://developer.mozilla.org/en-US/docs/Web/HTTP/Caching_FAQ + http://lxr.mozilla.org/mozilla-release/source/netwerk/protocol/http/nsHttpResponseHead.cpp#397 + Unlike mozilla we limit this to 24-hr. + """ + cacheable_by_default_statuses = { + 200, 203, 204, 206, 300, 301, 404, 405, 410, 414, 501 + } + + def update_headers(self, resp): + headers = resp.headers + + if "expires" in headers: + return {} + + if "cache-control" in headers and headers["cache-control"] != "public": + return {} + + if resp.status not in self.cacheable_by_default_statuses: + return {} + + if "date" not in headers or "last-modified" not in headers: + return {} + + date = calendar.timegm(parsedate_tz(headers["date"])) + last_modified = parsedate(headers["last-modified"]) + if date is None or last_modified is None: + return {} + + now = time.time() + current_age = max(0, now - date) + delta = date - calendar.timegm(last_modified) + freshness_lifetime = max(0, min(delta / 10, 24 * 3600)) + if freshness_lifetime <= current_age: + return {} + + expires = date + freshness_lifetime + return {"expires": time.strftime(TIME_FMT, time.gmtime(expires))} + + def warning(self, resp): + return None diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cachecontrol/serialize.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cachecontrol/serialize.py new file mode 100644 index 0000000000000000000000000000000000000000..572cf0e6c871c83bbc03124505c53e4af9ee60c5 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cachecontrol/serialize.py @@ -0,0 +1,188 @@ +import base64 +import io +import json +import zlib + +import msgpack +from requests.structures import CaseInsensitiveDict + +from .compat import HTTPResponse, pickle, text_type + + +def _b64_decode_bytes(b): + return base64.b64decode(b.encode("ascii")) + + +def _b64_decode_str(s): + return _b64_decode_bytes(s).decode("utf8") + + +class Serializer(object): + + def dumps(self, request, response, body=None): + response_headers = CaseInsensitiveDict(response.headers) + + if body is None: + body = response.read(decode_content=False) + + # NOTE: 99% sure this is dead code. I'm only leaving it + # here b/c I don't have a test yet to prove + # it. Basically, before using + # `cachecontrol.filewrapper.CallbackFileWrapper`, + # this made an effort to reset the file handle. 
The + # `CallbackFileWrapper` short circuits this code by + # setting the body as the content is consumed, the + # result being a `body` argument is *always* passed + # into cache_response, and in turn, + # `Serializer.dump`. + response._fp = io.BytesIO(body) + + # NOTE: This is all a bit weird, but it's really important that on + # Python 2.x these objects are unicode and not str, even when + # they contain only ascii. The problem here is that msgpack + # understands the difference between unicode and bytes and we + # have it set to differentiate between them, however Python 2 + # doesn't know the difference. Forcing these to unicode will be + # enough to have msgpack know the difference. + data = { + u"response": { + u"body": body, + u"headers": dict( + (text_type(k), text_type(v)) for k, v in response.headers.items() + ), + u"status": response.status, + u"version": response.version, + u"reason": text_type(response.reason), + u"strict": response.strict, + u"decode_content": response.decode_content, + } + } + + # Construct our vary headers + data[u"vary"] = {} + if u"vary" in response_headers: + varied_headers = response_headers[u"vary"].split(",") + for header in varied_headers: + header = text_type(header).strip() + header_value = request.headers.get(header, None) + if header_value is not None: + header_value = text_type(header_value) + data[u"vary"][header] = header_value + + return b",".join([b"cc=4", msgpack.dumps(data, use_bin_type=True)]) + + def loads(self, request, data): + # Short circuit if we've been given an empty set of data + if not data: + return + + # Determine what version of the serializer the data was serialized + # with + try: + ver, data = data.split(b",", 1) + except ValueError: + ver = b"cc=0" + + # Make sure that our "ver" is actually a version and isn't a false + # positive from a , being in the data stream. + if ver[:3] != b"cc=": + data = ver + data + ver = b"cc=0" + + # Get the version number out of the cc=N + ver = ver.split(b"=", 1)[-1].decode("ascii") + + # Dispatch to the actual load method for the given version + try: + return getattr(self, "_loads_v{}".format(ver))(request, data) + + except AttributeError: + # This is a version we don't have a loads function for, so we'll + # just treat it as a miss and return None + return + + def prepare_response(self, request, cached): + """Verify our vary headers match and construct a real urllib3 + HTTPResponse object. + """ + # Special case the '*' Vary value as it means we cannot actually + # determine if the cached response is suitable for this request. + # This case is also handled in the controller code when creating + # a cache entry, but is left here for backwards compatibility. + if "*" in cached.get("vary", {}): + return + + # Ensure that the Vary headers for the cached response match our + # request + for header, value in cached.get("vary", {}).items(): + if request.headers.get(header, None) != value: + return + + body_raw = cached["response"].pop("body") + + headers = CaseInsensitiveDict(data=cached["response"]["headers"]) + if headers.get("transfer-encoding", "") == "chunked": + headers.pop("transfer-encoding") + + cached["response"]["headers"] = headers + + try: + body = io.BytesIO(body_raw) + except TypeError: + # This can happen if cachecontrol serialized to v1 format (pickle) + # using Python 2. 
A Python 2 str(byte string) will be unpickled as + # a Python 3 str (unicode string), which will cause the above to + # fail with: + # + # TypeError: 'str' does not support the buffer interface + body = io.BytesIO(body_raw.encode("utf8")) + + return HTTPResponse(body=body, preload_content=False, **cached["response"]) + + def _loads_v0(self, request, data): + # The original legacy cache data. This doesn't contain enough + # information to construct everything we need, so we'll treat this as + # a miss. + return + + def _loads_v1(self, request, data): + try: + cached = pickle.loads(data) + except ValueError: + return + + return self.prepare_response(request, cached) + + def _loads_v2(self, request, data): + try: + cached = json.loads(zlib.decompress(data).decode("utf8")) + except (ValueError, zlib.error): + return + + # We need to decode the items that we've base64 encoded + cached["response"]["body"] = _b64_decode_bytes(cached["response"]["body"]) + cached["response"]["headers"] = dict( + (_b64_decode_str(k), _b64_decode_str(v)) + for k, v in cached["response"]["headers"].items() + ) + cached["response"]["reason"] = _b64_decode_str(cached["response"]["reason"]) + cached["vary"] = dict( + (_b64_decode_str(k), _b64_decode_str(v) if v is not None else v) + for k, v in cached["vary"].items() + ) + + return self.prepare_response(request, cached) + + def _loads_v3(self, request, data): + # Due to Python 2 encoding issues, it's impossible to know for sure + # exactly how to load v3 entries, thus we'll treat these as a miss so + # that they get rewritten out as v4 entries. + return + + def _loads_v4(self, request, data): + try: + cached = msgpack.loads(data, raw=False) + except ValueError: + return + + return self.prepare_response(request, cached) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cachecontrol/wrapper.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cachecontrol/wrapper.py new file mode 100644 index 0000000000000000000000000000000000000000..d8e6fc6a9e3bbe8b9b3c032298deb08e764d32f8 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cachecontrol/wrapper.py @@ -0,0 +1,29 @@ +from .adapter import CacheControlAdapter +from .cache import DictCache + + +def CacheControl( + sess, + cache=None, + cache_etags=True, + serializer=None, + heuristic=None, + controller_class=None, + adapter_class=None, + cacheable_methods=None, +): + + cache = DictCache() if cache is None else cache + adapter_class = adapter_class or CacheControlAdapter + adapter = adapter_class( + cache, + cache_etags=cache_etags, + serializer=serializer, + heuristic=heuristic, + controller_class=controller_class, + cacheable_methods=cacheable_methods, + ) + sess.mount("http://", adapter) + sess.mount("https://", adapter) + + return sess diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/certifi-2019.11.28.dist-info/AUTHORS.txt b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/certifi-2019.11.28.dist-info/AUTHORS.txt new file mode 100644 index 0000000000000000000000000000000000000000..72c87d7d38ae7bf859717c333a5ee8230f6ce624 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/certifi-2019.11.28.dist-info/AUTHORS.txt @@ -0,0 +1,562 @@ +A_Rog +Aakanksha Agrawal <11389424+rasponic@users.noreply.github.com> +Abhinav Sagar <40603139+abhinavsagar@users.noreply.github.com> +ABHYUDAY 
PRATAP SINGH +abs51295 +AceGentile +Adam Chainz +Adam Tse +Adam Tse +Adam Wentz +admin +Adrien Morison +ahayrapetyan +Ahilya +AinsworthK +Akash Srivastava +Alan Yee +Albert Tugushev +Albert-Guan +albertg +Aleks Bunin +Alethea Flowers +Alex Gaynor +Alex Grönholm +Alex Loosley +Alex Morega +Alex Stachowiak +Alexander Shtyrov +Alexandre Conrad +Alexey Popravka +Alexey Popravka +Alli +Ami Fischman +Ananya Maiti +Anatoly Techtonik +Anders Kaseorg +Andreas Lutro +Andrei Geacar +Andrew Gaul +Andrey Bulgakov +Andrés Delfino <34587441+andresdelfino@users.noreply.github.com> +Andrés Delfino +Andy Freeland +Andy Freeland +Andy Kluger +Ani Hayrapetyan +Aniruddha Basak +Anish Tambe +Anrs Hu +Anthony Sottile +Antoine Musso +Anton Ovchinnikov +Anton Patrushev +Antonio Alvarado Hernandez +Antony Lee +Antti Kaihola +Anubhav Patel +Anuj Godase +AQNOUCH Mohammed +AraHaan +Arindam Choudhury +Armin Ronacher +Artem +Ashley Manton +Ashwin Ramaswami +atse +Atsushi Odagiri +Avner Cohen +Baptiste Mispelon +Barney Gale +barneygale +Bartek Ogryczak +Bastian Venthur +Ben Darnell +Ben Hoyt +Ben Rosser +Bence Nagy +Benjamin Peterson +Benjamin VanEvery +Benoit Pierre +Berker Peksag +Bernardo B. Marques +Bernhard M. Wiedemann +Bertil Hatt +Bogdan Opanchuk +BorisZZZ +Brad Erickson +Bradley Ayers +Brandon L. Reiss +Brandt Bucher +Brett Randall +Brian Cristante <33549821+brcrista@users.noreply.github.com> +Brian Cristante +Brian Rosner +BrownTruck +Bruno Oliveira +Bruno Renié +Bstrdsmkr +Buck Golemon +burrows +Bussonnier Matthias +c22 +Caleb Martinez +Calvin Smith +Carl Meyer +Carlos Liam +Carol Willing +Carter Thayer +Cass +Chandrasekhar Atina +Chih-Hsuan Yen +Chih-Hsuan Yen +Chris Brinker +Chris Hunt +Chris Jerdonek +Chris McDonough +Chris Wolfe +Christian Heimes +Christian Oudard +Christopher Hunt +Christopher Snyder +Clark Boylan +Clay McClure +Cody +Cody Soyland +Colin Watson +Connor Osborn +Cooper Lees +Cooper Ry Lees +Cory Benfield +Cory Wright +Craig Kerstiens +Cristian Sorinel +Curtis Doty +cytolentino +Damian Quiroga +Dan Black +Dan Savilonis +Dan Sully +daniel +Daniel Collins +Daniel Hahler +Daniel Holth +Daniel Jost +Daniel Shaulov +Daniele Esposti +Daniele Procida +Danny Hermes +Dav Clark +Dave Abrahams +Dave Jones +David Aguilar +David Black +David Bordeynik +David Bordeynik +David Caro +David Evans +David Linke +David Pursehouse +David Tucker +David Wales +Davidovich +derwolfe +Desetude +Diego Caraballo +DiegoCaraballo +Dmitry Gladkov +Domen Kožar +Donald Stufft +Dongweiming +Douglas Thor +DrFeathers +Dustin Ingram +Dwayne Bailey +Ed Morley <501702+edmorley@users.noreply.github.com> +Ed Morley +Eitan Adler +ekristina +elainechan +Eli Schwartz +Eli Schwartz +Emil Burzo +Emil Styrke +Endoh Takanao +enoch +Erdinc Mutlu +Eric Gillingham +Eric Hanchrow +Eric Hopper +Erik M. Bray +Erik Rose +Ernest W Durbin III +Ernest W. 
Durbin III +Erwin Janssen +Eugene Vereshchagin +everdimension +Felix Yan +fiber-space +Filip Kokosiński +Florian Briand +Florian Rathgeber +Francesco +Francesco Montesano +Frost Ming +Gabriel Curio +Gabriel de Perthuis +Garry Polley +gdanielson +Geoffrey Lehée +Geoffrey Sneddon +George Song +Georgi Valkov +Giftlin Rajaiah +gizmoguy1 +gkdoc <40815324+gkdoc@users.noreply.github.com> +Gopinath M <31352222+mgopi1990@users.noreply.github.com> +GOTO Hayato <3532528+gh640@users.noreply.github.com> +gpiks +Guilherme Espada +Guy Rozendorn +gzpan123 +Hanjun Kim +Hari Charan +Harsh Vardhan +Herbert Pfennig +Hsiaoming Yang +Hugo +Hugo Lopes Tavares +Hugo van Kemenade +hugovk +Hynek Schlawack +Ian Bicking +Ian Cordasco +Ian Lee +Ian Stapleton Cordasco +Ian Wienand +Ian Wienand +Igor Kuzmitshov +Igor Sobreira +Ilya Baryshev +INADA Naoki +Ionel Cristian Mărieș +Ionel Maries Cristian +Ivan Pozdeev +Jacob Kim +jakirkham +Jakub Stasiak +Jakub Vysoky +Jakub Wilk +James Cleveland +James Cleveland +James Firth +James Polley +Jan Pokorný +Jannis Leidel +jarondl +Jason R. Coombs +Jay Graves +Jean-Christophe Fillion-Robin +Jeff Barber +Jeff Dairiki +Jelmer Vernooij +jenix21 +Jeremy Stanley +Jeremy Zafran +Jiashuo Li +Jim Garrison +Jivan Amara +John Paton +John-Scott Atlakson +johnthagen +johnthagen +Jon Banafato +Jon Dufresne +Jon Parise +Jonas Nockert +Jonathan Herbert +Joost Molenaar +Jorge Niedbalski +Joseph Long +Josh Bronson +Josh Hansen +Josh Schneier +Juanjo Bazán +Julian Berman +Julian Gethmann +Julien Demoor +jwg4 +Jyrki Pulliainen +Kai Chen +Kamal Bin Mustafa +kaustav haldar +keanemind +Keith Maxwell +Kelsey Hightower +Kenneth Belitzky +Kenneth Reitz +Kenneth Reitz +Kevin Burke +Kevin Carter +Kevin Frommelt +Kevin R Patterson +Kexuan Sun +Kit Randel +kpinc +Krishna Oza +Kumar McMillan +Kyle Persohn +lakshmanaram +Laszlo Kiss-Kollar +Laurent Bristiel +Laurie Opperman +Leon Sasson +Lev Givon +Lincoln de Sousa +Lipis +Loren Carvalho +Lucas Cimon +Ludovic Gasc +Luke Macken +Luo Jiebin +luojiebin +luz.paz +László Kiss Kollár +László Kiss Kollár +Marc Abramowitz +Marc Tamlyn +Marcus Smith +Mariatta +Mark Kohler +Mark Williams +Mark Williams +Markus Hametner +Masaki +Masklinn +Matej Stuchlik +Mathew Jennings +Mathieu Bridon +Matt Good +Matt Maker +Matt Robenolt +matthew +Matthew Einhorn +Matthew Gilliard +Matthew Iversen +Matthew Trumbell +Matthew Willson +Matthias Bussonnier +mattip +Maxim Kurnikov +Maxime Rouyrre +mayeut +mbaluna <44498973+mbaluna@users.noreply.github.com> +mdebi <17590103+mdebi@users.noreply.github.com> +memoselyk +Michael +Michael Aquilina +Michael E. Karpeles +Michael Klich +Michael Williamson +michaelpacer +Mickaël Schoentgen +Miguel Araujo Perez +Mihir Singh +Mike +Mike Hendricks +Min RK +MinRK +Miro Hrončok +Monica Baluna +montefra +Monty Taylor +Nate Coraor +Nathaniel J. 
Smith +Nehal J Wani +Neil Botelho +Nick Coghlan +Nick Stenning +Nick Timkovich +Nicolas Bock +Nikhil Benesch +Nitesh Sharma +Nowell Strite +NtaleGrey +nvdv +Ofekmeister +ofrinevo +Oliver Jeeves +Oliver Tonnhofer +Olivier Girardot +Olivier Grisel +Ollie Rutherfurd +OMOTO Kenji +Omry Yadan +Oren Held +Oscar Benjamin +Oz N Tiram +Pachwenko <32424503+Pachwenko@users.noreply.github.com> +Patrick Dubroy +Patrick Jenkins +Patrick Lawson +patricktokeeffe +Patrik Kopkan +Paul Kehrer +Paul Moore +Paul Nasrat +Paul Oswald +Paul van der Linden +Paulus Schoutsen +Pavithra Eswaramoorthy <33131404+QueenCoffee@users.noreply.github.com> +Pawel Jasinski +Pekka Klärck +Peter Lisák +Peter Waller +petr-tik +Phaneendra Chiruvella +Phil Freo +Phil Pennock +Phil Whelan +Philip Jägenstedt +Philip Molloy +Philippe Ombredanne +Pi Delport +Pierre-Yves Rofes +pip +Prabakaran Kumaresshan +Prabhjyotsing Surjit Singh Sodhi +Prabhu Marappan +Pradyun Gedam +Pratik Mallya +Preet Thakkar +Preston Holmes +Przemek Wrzos +Pulkit Goyal <7895pulkit@gmail.com> +Qiangning Hong +Quentin Pradet +R. David Murray +Rafael Caricio +Ralf Schmitt +Razzi Abuissa +rdb +Remi Rampin +Remi Rampin +Rene Dudfield +Riccardo Magliocchetti +Richard Jones +RobberPhex +Robert Collins +Robert McGibbon +Robert T. McGibbon +robin elisha robinson +Roey Berman +Rohan Jain +Rohan Jain +Rohan Jain +Roman Bogorodskiy +Romuald Brunet +Ronny Pfannschmidt +Rory McCann +Ross Brattain +Roy Wellington Ⅳ +Roy Wellington Ⅳ +Ryan Wooden +ryneeverett +Sachi King +Salvatore Rinchiera +Savio Jomton +schlamar +Scott Kitterman +Sean +seanj +Sebastian Jordan +Sebastian Schaetz +Segev Finer +SeongSoo Cho +Sergey Vasilyev +Seth Woodworth +Shlomi Fish +Shovan Maity +Simeon Visser +Simon Cross +Simon Pichugin +sinoroc +Sorin Sbarnea +Stavros Korokithakis +Stefan Scherfke +Stephan Erb +stepshal +Steve (Gadget) Barnes +Steve Barnes +Steve Dower +Steve Kowalik +Steven Myint +stonebig +Stéphane Bidoul (ACSONE) +Stéphane Bidoul +Stéphane Klein +Sumana Harihareswara +Sviatoslav Sydorenko +Sviatoslav Sydorenko +Swat009 +Takayuki SHIMIZUKAWA +tbeswick +Thijs Triemstra +Thomas Fenzl +Thomas Grainger +Thomas Guettler +Thomas Johansson +Thomas Kluyver +Thomas Smith +Tim D. Smith +Tim Gates +Tim Harder +Tim Heap +tim smith +tinruufu +Tom Forbes +Tom Freudenheim +Tom V +Tomas Orsava +Tomer Chachamu +Tony Beswick +Tony Zhaocheng Tan +TonyBeswick +toonarmycaptain +Toshio Kuratomi +Travis Swicegood +Tzu-ping Chung +Valentin Haenel +Victor Stinner +victorvpaulo +Viktor Szépe +Ville Skyttä +Vinay Sajip +Vincent Philippon +Vinicyus Macedo <7549205+vinicyusmacedo@users.noreply.github.com> +Vitaly Babiy +Vladimir Rutsky +W. 
Trevor King +Wil Tan +Wilfred Hughes +William ML Leslie +William T Olson +Wilson Mo +wim glenn +Wolfgang Maier +Xavier Fernandez +Xavier Fernandez +xoviat +xtreak +YAMAMOTO Takashi +Yen Chi Hsuan +Yeray Diaz Diaz +Yoval P +Yu Jian +Yuan Jing Vincent Yan +Zearin +Zearin +Zhiping Deng +Zvezdan Petkovic +Łukasz Langa +Семён Марьясин diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/certifi-2019.11.28.dist-info/INSTALLER b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/certifi-2019.11.28.dist-info/INSTALLER new file mode 100644 index 0000000000000000000000000000000000000000..a1b589e38a32041e49332e5e81c2d363dc418d68 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/certifi-2019.11.28.dist-info/INSTALLER @@ -0,0 +1 @@ +pip diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/certifi-2019.11.28.dist-info/LICENSE.txt b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/certifi-2019.11.28.dist-info/LICENSE.txt new file mode 100644 index 0000000000000000000000000000000000000000..737fec5c5352af3d9a6a47a0670da4bdb52c5725 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/certifi-2019.11.28.dist-info/LICENSE.txt @@ -0,0 +1,20 @@ +Copyright (c) 2008-2019 The pip developers (see AUTHORS.txt file) + +Permission is hereby granted, free of charge, to any person obtaining +a copy of this software and associated documentation files (the +"Software"), to deal in the Software without restriction, including +without limitation the rights to use, copy, modify, merge, publish, +distribute, sublicense, and/or sell copies of the Software, and to +permit persons to whom the Software is furnished to do so, subject to +the following conditions: + +The above copyright notice and this permission notice shall be +included in all copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE +LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION +OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION +WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/certifi-2019.11.28.dist-info/METADATA b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/certifi-2019.11.28.dist-info/METADATA new file mode 100644 index 0000000000000000000000000000000000000000..e897b3d8335b63a805b5bc59da1446fca1a95730 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/certifi-2019.11.28.dist-info/METADATA @@ -0,0 +1,74 @@ +Metadata-Version: 2.1 +Name: certifi +Version: 2019.11.28 +Summary: Python package for providing Mozilla's CA Bundle. 
+Home-page: https://certifi.io/ +Author: Kenneth Reitz +Author-email: me@kennethreitz.com +License: MPL-2.0 +Platform: UNKNOWN +Classifier: Development Status :: 5 - Production/Stable +Classifier: Intended Audience :: Developers +Classifier: License :: OSI Approved :: Mozilla Public License 2.0 (MPL 2.0) +Classifier: Natural Language :: English +Classifier: Programming Language :: Python +Classifier: Programming Language :: Python :: 2 +Classifier: Programming Language :: Python :: 2.6 +Classifier: Programming Language :: Python :: 2.7 +Classifier: Programming Language :: Python :: 3 +Classifier: Programming Language :: Python :: 3.3 +Classifier: Programming Language :: Python :: 3.4 +Classifier: Programming Language :: Python :: 3.5 +Classifier: Programming Language :: Python :: 3.6 +Classifier: Programming Language :: Python :: 3.7 + +Certifi: Python SSL Certificates +================================ + +`Certifi`_ is a carefully curated collection of Root Certificates for +validating the trustworthiness of SSL certificates while verifying the identity +of TLS hosts. It has been extracted from the `Requests`_ project. + +Installation +------------ + +``certifi`` is available on PyPI. Simply install it with ``pip``:: + + $ pip install certifi + +Usage +----- + +To reference the installed certificate authority (CA) bundle, you can use the +built-in function:: + + >>> import certifi + + >>> certifi.where() + '/usr/local/lib/python2.7/site-packages/certifi/cacert.pem' + +Or from the command line:: + + $ python -m certifi + /usr/local/lib/python2.7/site-packages/certifi/cacert.pem + +Enjoy! + +1024-bit Root Certificates +~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Browsers and certificate authorities have concluded that 1024-bit keys are +unacceptably weak for certificates, particularly root certificates. For this +reason, Mozilla has removed any weak (i.e. 1024-bit key) certificate from its +bundle, replacing it with an equivalent strong (i.e. 2048-bit or greater key) +certificate from the same CA. Because Mozilla removed these certificates from +its bundle, ``certifi`` removed them as well. + +In previous versions, ``certifi`` provided the ``certifi.old_where()`` function +to intentionally re-add the 1024-bit roots back into your bundle. This was not +recommended in production and therefore was removed at the end of 2018. + +.. _`Certifi`: https://certifi.io/en/latest/ +.. 
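A minimal sketch of consuming the bundle path programmatically (illustrative only, not part of the packaged files; it assumes nothing beyond the standard-library ssl module):

    import ssl
    import certifi

    # Build a client-side TLS context that trusts exactly the Mozilla
    # root certificates shipped in certifi's cacert.pem (added below).
    context = ssl.create_default_context(cafile=certifi.where())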
_`Requests`: http://docs.python-requests.org/en/latest/ + + diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/certifi-2019.11.28.dist-info/RECORD b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/certifi-2019.11.28.dist-info/RECORD new file mode 100644 index 0000000000000000000000000000000000000000..1d4830a1cc31e9092b9f8abd4f5b3f73de935bfa --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/certifi-2019.11.28.dist-info/RECORD @@ -0,0 +1,17 @@ +certifi/__init__.py,sha256=JVwzDhkMttyVVtfNDrU_i0v2a-WmtEBXq0Z8oz4Ghzk,52 +certifi/__main__.py,sha256=FiOYt1Fltst7wk9DRa6GCoBr8qBUxlNQu_MKJf04E6s,41 +certifi/cacert.pem,sha256=cyvv5Jx1gHACNEj2GaOrsIj0Tk8FmSvHR42uhzvlatg,281457 +certifi/core.py,sha256=u_450edAVoiZrgqi6k3Sekcvs-B1zD_hTeE6UJ2OcQ4,225 +certifi-2019.11.28.dist-info/AUTHORS.txt,sha256=RtqU9KfonVGhI48DAA4-yTOBUhBtQTjFhaDzHoyh7uU,21518 +certifi-2019.11.28.dist-info/LICENSE.txt,sha256=W6Ifuwlk-TatfRU2LR7W1JMcyMj5_y1NkRkOEJvnRDE,1090 +certifi-2019.11.28.dist-info/METADATA,sha256=K6ioZZT9N0bRETHmncjK-UJKAFIVF26h1F31dfDaV5k,2523 +certifi-2019.11.28.dist-info/WHEEL,sha256=kGT74LWyRUZrL4VgLh6_g12IeVl_9u9ZVhadrgXZUEY,110 +certifi-2019.11.28.dist-info/top_level.txt,sha256=KMu4vUCfsjLrkPbSNdgdekS-pVJzBAJFO__nI8NF6-U,8 +certifi-2019.11.28.dist-info/RECORD,, +certifi-2019.11.28.dist-info/__pycache__,, +certifi-2019.11.28.virtualenv,, +certifi/__init__.cpython-38.pyc,, +certifi/core.cpython-38.pyc,, +certifi-2019.11.28.dist-info/INSTALLER,, +certifi/__main__.cpython-38.pyc,, +certifi/__pycache__,, \ No newline at end of file diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/certifi-2019.11.28.dist-info/WHEEL b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/certifi-2019.11.28.dist-info/WHEEL new file mode 100644 index 0000000000000000000000000000000000000000..ef99c6cf3283b50a273ac4c6d009a0aa85597070 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/certifi-2019.11.28.dist-info/WHEEL @@ -0,0 +1,6 @@ +Wheel-Version: 1.0 +Generator: bdist_wheel (0.34.2) +Root-Is-Purelib: true +Tag: py2-none-any +Tag: py3-none-any + diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/certifi-2019.11.28.dist-info/top_level.txt b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/certifi-2019.11.28.dist-info/top_level.txt new file mode 100644 index 0000000000000000000000000000000000000000..963eac530b9bc28d704d1bc410299c68e3216d4d --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/certifi-2019.11.28.dist-info/top_level.txt @@ -0,0 +1 @@ +certifi diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/certifi-2019.11.28.virtualenv b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/certifi-2019.11.28.virtualenv new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/certifi/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/certifi/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..0d59a05630e7ff963947b2596194d12248a41863 --- /dev/null +++ 
b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/certifi/__init__.py @@ -0,0 +1,3 @@ +from .core import where + +__version__ = "2019.11.28" diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/certifi/__main__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/certifi/__main__.py new file mode 100644 index 0000000000000000000000000000000000000000..5f1da0dd0c201e8aedc1552ef82a1780eb37276d --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/certifi/__main__.py @@ -0,0 +1,2 @@ +from certifi import where +print(where()) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/certifi/cacert.pem b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/certifi/cacert.pem new file mode 100644 index 0000000000000000000000000000000000000000..a4758ef3afb0b0299485e759d9a8c5e4747fc96d --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/certifi/cacert.pem @@ -0,0 +1,4602 @@ + +# Issuer: CN=GlobalSign Root CA O=GlobalSign nv-sa OU=Root CA +# Subject: CN=GlobalSign Root CA O=GlobalSign nv-sa OU=Root CA +# Label: "GlobalSign Root CA" +# Serial: 4835703278459707669005204 +# MD5 Fingerprint: 3e:45:52:15:09:51:92:e1:b7:5d:37:9f:b1:87:29:8a +# SHA1 Fingerprint: b1:bc:96:8b:d4:f4:9d:62:2a:a8:9a:81:f2:15:01:52:a4:1d:82:9c +# SHA256 Fingerprint: eb:d4:10:40:e4:bb:3e:c7:42:c9:e3:81:d3:1e:f2:a4:1a:48:b6:68:5c:96:e7:ce:f3:c1:df:6c:d4:33:1c:99 +-----BEGIN CERTIFICATE----- +MIIDdTCCAl2gAwIBAgILBAAAAAABFUtaw5QwDQYJKoZIhvcNAQEFBQAwVzELMAkG +A1UEBhMCQkUxGTAXBgNVBAoTEEdsb2JhbFNpZ24gbnYtc2ExEDAOBgNVBAsTB1Jv +b3QgQ0ExGzAZBgNVBAMTEkdsb2JhbFNpZ24gUm9vdCBDQTAeFw05ODA5MDExMjAw +MDBaFw0yODAxMjgxMjAwMDBaMFcxCzAJBgNVBAYTAkJFMRkwFwYDVQQKExBHbG9i +YWxTaWduIG52LXNhMRAwDgYDVQQLEwdSb290IENBMRswGQYDVQQDExJHbG9iYWxT +aWduIFJvb3QgQ0EwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDaDuaZ +jc6j40+Kfvvxi4Mla+pIH/EqsLmVEQS98GPR4mdmzxzdzxtIK+6NiY6arymAZavp +xy0Sy6scTHAHoT0KMM0VjU/43dSMUBUc71DuxC73/OlS8pF94G3VNTCOXkNz8kHp +1Wrjsok6Vjk4bwY8iGlbKk3Fp1S4bInMm/k8yuX9ifUSPJJ4ltbcdG6TRGHRjcdG +snUOhugZitVtbNV4FpWi6cgKOOvyJBNPc1STE4U6G7weNLWLBYy5d4ux2x8gkasJ +U26Qzns3dLlwR5EiUWMWea6xrkEmCMgZK9FGqkjWZCrXgzT/LCrBbBlDSgeF59N8 +9iFo7+ryUp9/k5DPAgMBAAGjQjBAMA4GA1UdDwEB/wQEAwIBBjAPBgNVHRMBAf8E +BTADAQH/MB0GA1UdDgQWBBRge2YaRQ2XyolQL30EzTSo//z9SzANBgkqhkiG9w0B +AQUFAAOCAQEA1nPnfE920I2/7LqivjTFKDK1fPxsnCwrvQmeU79rXqoRSLblCKOz +yj1hTdNGCbM+w6DjY1Ub8rrvrTnhQ7k4o+YviiY776BQVvnGCv04zcQLcFGUl5gE +38NflNUVyRRBnMRddWQVDf9VMOyGj/8N7yy5Y0b2qvzfvGn9LhJIZJrglfCm7ymP +AbEVtQwdpf5pLGkkeB6zpxxxYu7KyJesF12KwvhHhm4qxFYxldBniYUr+WymXUad +DKqC5JlR3XC321Y9YeRq4VzW9v493kHMB65jUr9TU/Qr6cf9tveCX4XSQRjbgbME +HMUfpIBvFSDJ3gyICh3WZlXi/EjJKSZp4A== +-----END CERTIFICATE----- + +# Issuer: CN=GlobalSign O=GlobalSign OU=GlobalSign Root CA - R2 +# Subject: CN=GlobalSign O=GlobalSign OU=GlobalSign Root CA - R2 +# Label: "GlobalSign Root CA - R2" +# Serial: 4835703278459682885658125 +# MD5 Fingerprint: 94:14:77:7e:3e:5e:fd:8f:30:bd:41:b0:cf:e7:d0:30 +# SHA1 Fingerprint: 75:e0:ab:b6:13:85:12:27:1c:04:f8:5f:dd:de:38:e4:b7:24:2e:fe +# SHA256 Fingerprint: ca:42:dd:41:74:5f:d0:b8:1e:b9:02:36:2c:f9:d8:bf:71:9d:a1:bd:1b:1e:fc:94:6f:5b:4c:99:f4:2c:1b:9e +-----BEGIN CERTIFICATE----- +MIIDujCCAqKgAwIBAgILBAAAAAABD4Ym5g0wDQYJKoZIhvcNAQEFBQAwTDEgMB4G +A1UECxMXR2xvYmFsU2lnbiBSb290IENBIC0gUjIxEzARBgNVBAoTCkdsb2JhbFNp 
+Z24xEzARBgNVBAMTCkdsb2JhbFNpZ24wHhcNMDYxMjE1MDgwMDAwWhcNMjExMjE1 +MDgwMDAwWjBMMSAwHgYDVQQLExdHbG9iYWxTaWduIFJvb3QgQ0EgLSBSMjETMBEG +A1UEChMKR2xvYmFsU2lnbjETMBEGA1UEAxMKR2xvYmFsU2lnbjCCASIwDQYJKoZI +hvcNAQEBBQADggEPADCCAQoCggEBAKbPJA6+Lm8omUVCxKs+IVSbC9N/hHD6ErPL +v4dfxn+G07IwXNb9rfF73OX4YJYJkhD10FPe+3t+c4isUoh7SqbKSaZeqKeMWhG8 +eoLrvozps6yWJQeXSpkqBy+0Hne/ig+1AnwblrjFuTosvNYSuetZfeLQBoZfXklq +tTleiDTsvHgMCJiEbKjNS7SgfQx5TfC4LcshytVsW33hoCmEofnTlEnLJGKRILzd +C9XZzPnqJworc5HGnRusyMvo4KD0L5CLTfuwNhv2GXqF4G3yYROIXJ/gkwpRl4pa +zq+r1feqCapgvdzZX99yqWATXgAByUr6P6TqBwMhAo6CygPCm48CAwEAAaOBnDCB +mTAOBgNVHQ8BAf8EBAMCAQYwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQUm+IH +V2ccHsBqBt5ZtJot39wZhi4wNgYDVR0fBC8wLTAroCmgJ4YlaHR0cDovL2NybC5n +bG9iYWxzaWduLm5ldC9yb290LXIyLmNybDAfBgNVHSMEGDAWgBSb4gdXZxwewGoG +3lm0mi3f3BmGLjANBgkqhkiG9w0BAQUFAAOCAQEAmYFThxxol4aR7OBKuEQLq4Gs +J0/WwbgcQ3izDJr86iw8bmEbTUsp9Z8FHSbBuOmDAGJFtqkIk7mpM0sYmsL4h4hO +291xNBrBVNpGP+DTKqttVCL1OmLNIG+6KYnX3ZHu01yiPqFbQfXf5WRDLenVOavS +ot+3i9DAgBkcRcAtjOj4LaR0VknFBbVPFd5uRHg5h6h+u/N5GJG79G+dwfCMNYxd +AfvDbbnvRG15RjF+Cv6pgsH/76tuIMRQyV+dTZsXjAzlAcmgQWpzU/qlULRuJQ/7 +TBj0/VLZjmmx6BEP3ojY+x1J96relc8geMJgEtslQIxq/H5COEBkEveegeGTLg== +-----END CERTIFICATE----- + +# Issuer: CN=VeriSign Class 3 Public Primary Certification Authority - G3 O=VeriSign, Inc. OU=VeriSign Trust Network/(c) 1999 VeriSign, Inc. - For authorized use only +# Subject: CN=VeriSign Class 3 Public Primary Certification Authority - G3 O=VeriSign, Inc. OU=VeriSign Trust Network/(c) 1999 VeriSign, Inc. - For authorized use only +# Label: "Verisign Class 3 Public Primary Certification Authority - G3" +# Serial: 206684696279472310254277870180966723415 +# MD5 Fingerprint: cd:68:b6:a7:c7:c4:ce:75:e0:1d:4f:57:44:61:92:09 +# SHA1 Fingerprint: 13:2d:0d:45:53:4b:69:97:cd:b2:d5:c3:39:e2:55:76:60:9b:5c:c6 +# SHA256 Fingerprint: eb:04:cf:5e:b1:f3:9a:fa:76:2f:2b:b1:20:f2:96:cb:a5:20:c1:b9:7d:b1:58:95:65:b8:1c:b9:a1:7b:72:44 +-----BEGIN CERTIFICATE----- +MIIEGjCCAwICEQCbfgZJoz5iudXukEhxKe9XMA0GCSqGSIb3DQEBBQUAMIHKMQsw +CQYDVQQGEwJVUzEXMBUGA1UEChMOVmVyaVNpZ24sIEluYy4xHzAdBgNVBAsTFlZl +cmlTaWduIFRydXN0IE5ldHdvcmsxOjA4BgNVBAsTMShjKSAxOTk5IFZlcmlTaWdu +LCBJbmMuIC0gRm9yIGF1dGhvcml6ZWQgdXNlIG9ubHkxRTBDBgNVBAMTPFZlcmlT +aWduIENsYXNzIDMgUHVibGljIFByaW1hcnkgQ2VydGlmaWNhdGlvbiBBdXRob3Jp +dHkgLSBHMzAeFw05OTEwMDEwMDAwMDBaFw0zNjA3MTYyMzU5NTlaMIHKMQswCQYD +VQQGEwJVUzEXMBUGA1UEChMOVmVyaVNpZ24sIEluYy4xHzAdBgNVBAsTFlZlcmlT +aWduIFRydXN0IE5ldHdvcmsxOjA4BgNVBAsTMShjKSAxOTk5IFZlcmlTaWduLCBJ +bmMuIC0gRm9yIGF1dGhvcml6ZWQgdXNlIG9ubHkxRTBDBgNVBAMTPFZlcmlTaWdu +IENsYXNzIDMgUHVibGljIFByaW1hcnkgQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkg +LSBHMzCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMu6nFL8eB8aHm8b +N3O9+MlrlBIwT/A2R/XQkQr1F8ilYcEWQE37imGQ5XYgwREGfassbqb1EUGO+i2t +KmFZpGcmTNDovFJbcCAEWNF6yaRpvIMXZK0Fi7zQWM6NjPXr8EJJC52XJ2cybuGu +kxUccLwgTS8Y3pKI6GyFVxEa6X7jJhFUokWWVYPKMIno3Nij7SqAP395ZVc+FSBm +CC+Vk7+qRy+oRpfwEuL+wgorUeZ25rdGt+INpsyow0xZVYnm6FNcHOqd8GIWC6fJ +Xwzw3sJ2zq/3avL6QaaiMxTJ5Xpj055iN9WFZZ4O5lMkdBteHRJTW8cs54NJOxWu +imi5V5cCAwEAATANBgkqhkiG9w0BAQUFAAOCAQEAERSWwauSCPc/L8my/uRan2Te +2yFPhpk0djZX3dAVL8WtfxUfN2JzPtTnX84XA9s1+ivbrmAJXx5fj267Cz3qWhMe +DGBvtcC1IyIuBwvLqXTLR7sdwdela8wv0kL9Sd2nic9TutoAWii/gt/4uhMdUIaC +/Y4wjylGsB49Ndo4YhYYSq3mtlFs3q9i6wHQHiT+eo8SGhJouPtmmRQURVyu565p +F4ErWjfJXir0xuKhXFSbplQAz/DxwceYMBo7Nhbbo27q/a2ywtrvAkcTisDxszGt +TxzhT5yvDwyd93gN2PQ1VoDat20Xj50egWTh/sVFuq1ruQp6Tk9LhO5L8X3dEQ== +-----END CERTIFICATE----- + +# Issuer: CN=Entrust.net Certification Authority (2048) O=Entrust.net OU=www.entrust.net/CPS_2048 
incorp. by ref. (limits liab.)/(c) 1999 Entrust.net Limited +# Subject: CN=Entrust.net Certification Authority (2048) O=Entrust.net OU=www.entrust.net/CPS_2048 incorp. by ref. (limits liab.)/(c) 1999 Entrust.net Limited +# Label: "Entrust.net Premium 2048 Secure Server CA" +# Serial: 946069240 +# MD5 Fingerprint: ee:29:31:bc:32:7e:9a:e6:e8:b5:f7:51:b4:34:71:90 +# SHA1 Fingerprint: 50:30:06:09:1d:97:d4:f5:ae:39:f7:cb:e7:92:7d:7d:65:2d:34:31 +# SHA256 Fingerprint: 6d:c4:71:72:e0:1c:bc:b0:bf:62:58:0d:89:5f:e2:b8:ac:9a:d4:f8:73:80:1e:0c:10:b9:c8:37:d2:1e:b1:77 +-----BEGIN CERTIFICATE----- +MIIEKjCCAxKgAwIBAgIEOGPe+DANBgkqhkiG9w0BAQUFADCBtDEUMBIGA1UEChML +RW50cnVzdC5uZXQxQDA+BgNVBAsUN3d3dy5lbnRydXN0Lm5ldC9DUFNfMjA0OCBp +bmNvcnAuIGJ5IHJlZi4gKGxpbWl0cyBsaWFiLikxJTAjBgNVBAsTHChjKSAxOTk5 +IEVudHJ1c3QubmV0IExpbWl0ZWQxMzAxBgNVBAMTKkVudHJ1c3QubmV0IENlcnRp +ZmljYXRpb24gQXV0aG9yaXR5ICgyMDQ4KTAeFw05OTEyMjQxNzUwNTFaFw0yOTA3 +MjQxNDE1MTJaMIG0MRQwEgYDVQQKEwtFbnRydXN0Lm5ldDFAMD4GA1UECxQ3d3d3 +LmVudHJ1c3QubmV0L0NQU18yMDQ4IGluY29ycC4gYnkgcmVmLiAobGltaXRzIGxp +YWIuKTElMCMGA1UECxMcKGMpIDE5OTkgRW50cnVzdC5uZXQgTGltaXRlZDEzMDEG +A1UEAxMqRW50cnVzdC5uZXQgQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkgKDIwNDgp +MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEArU1LqRKGsuqjIAcVFmQq +K0vRvwtKTY7tgHalZ7d4QMBzQshowNtTK91euHaYNZOLGp18EzoOH1u3Hs/lJBQe +sYGpjX24zGtLA/ECDNyrpUAkAH90lKGdCCmziAv1h3edVc3kw37XamSrhRSGlVuX +MlBvPci6Zgzj/L24ScF2iUkZ/cCovYmjZy/Gn7xxGWC4LeksyZB2ZnuU4q941mVT +XTzWnLLPKQP5L6RQstRIzgUyVYr9smRMDuSYB3Xbf9+5CFVghTAp+XtIpGmG4zU/ +HoZdenoVve8AjhUiVBcAkCaTvA5JaJG/+EfTnZVCwQ5N328mz8MYIWJmQ3DW1cAH +4QIDAQABo0IwQDAOBgNVHQ8BAf8EBAMCAQYwDwYDVR0TAQH/BAUwAwEB/zAdBgNV +HQ4EFgQUVeSB0RGAvtiJuQijMfmhJAkWuXAwDQYJKoZIhvcNAQEFBQADggEBADub +j1abMOdTmXx6eadNl9cZlZD7Bh/KM3xGY4+WZiT6QBshJ8rmcnPyT/4xmf3IDExo +U8aAghOY+rat2l098c5u9hURlIIM7j+VrxGrD9cv3h8Dj1csHsm7mhpElesYT6Yf +zX1XEC+bBAlahLVu2B064dae0Wx5XnkcFMXj0EyTO2U87d89vqbllRrDtRnDvV5b +u/8j72gZyxKTJ1wDLW8w0B62GqzeWvfRqqgnpv55gcR5mTNXuhKwqeBCbJPKVt7+ +bYQLCIt+jerXmCHG8+c8eS9enNFMFY3h7CI3zJpDC5fcgJCNs2ebb0gIFVbPv/Er +fF6adulZkMV8gzURZVE= +-----END CERTIFICATE----- + +# Issuer: CN=Baltimore CyberTrust Root O=Baltimore OU=CyberTrust +# Subject: CN=Baltimore CyberTrust Root O=Baltimore OU=CyberTrust +# Label: "Baltimore CyberTrust Root" +# Serial: 33554617 +# MD5 Fingerprint: ac:b6:94:a5:9c:17:e0:d7:91:52:9b:b1:97:06:a6:e4 +# SHA1 Fingerprint: d4:de:20:d0:5e:66:fc:53:fe:1a:50:88:2c:78:db:28:52:ca:e4:74 +# SHA256 Fingerprint: 16:af:57:a9:f6:76:b0:ab:12:60:95:aa:5e:ba:de:f2:2a:b3:11:19:d6:44:ac:95:cd:4b:93:db:f3:f2:6a:eb +-----BEGIN CERTIFICATE----- +MIIDdzCCAl+gAwIBAgIEAgAAuTANBgkqhkiG9w0BAQUFADBaMQswCQYDVQQGEwJJ +RTESMBAGA1UEChMJQmFsdGltb3JlMRMwEQYDVQQLEwpDeWJlclRydXN0MSIwIAYD +VQQDExlCYWx0aW1vcmUgQ3liZXJUcnVzdCBSb290MB4XDTAwMDUxMjE4NDYwMFoX +DTI1MDUxMjIzNTkwMFowWjELMAkGA1UEBhMCSUUxEjAQBgNVBAoTCUJhbHRpbW9y +ZTETMBEGA1UECxMKQ3liZXJUcnVzdDEiMCAGA1UEAxMZQmFsdGltb3JlIEN5YmVy +VHJ1c3QgUm9vdDCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAKMEuyKr +mD1X6CZymrV51Cni4eiVgLGw41uOKymaZN+hXe2wCQVt2yguzmKiYv60iNoS6zjr +IZ3AQSsBUnuId9Mcj8e6uYi1agnnc+gRQKfRzMpijS3ljwumUNKoUMMo6vWrJYeK +mpYcqWe4PwzV9/lSEy/CG9VwcPCPwBLKBsua4dnKM3p31vjsufFoREJIE9LAwqSu +XmD+tqYF/LTdB1kC1FkYmGP1pWPgkAx9XbIGevOF6uvUA65ehD5f/xXtabz5OTZy +dc93Uk3zyZAsuT3lySNTPx8kmCFcB5kpvcY67Oduhjprl3RjM71oGDHweI12v/ye +jl0qhqdNkNwnGjkCAwEAAaNFMEMwHQYDVR0OBBYEFOWdWTCCR1jMrPoIVDaGezq1 +BE3wMBIGA1UdEwEB/wQIMAYBAf8CAQMwDgYDVR0PAQH/BAQDAgEGMA0GCSqGSIb3 +DQEBBQUAA4IBAQCFDF2O5G9RaEIFoN27TyclhAO992T9Ldcw46QQF+vaKSm2eT92 
+9hkTI7gQCvlYpNRhcL0EYWoSihfVCr3FvDB81ukMJY2GQE/szKN+OMY3EU/t3Wgx +jkzSswF07r51XgdIGn9w/xZchMB5hbgF/X++ZRGjD8ACtPhSNzkE1akxehi/oCr0 +Epn3o0WC4zxe9Z2etciefC7IpJ5OCBRLbf1wbWsaY71k5h+3zvDyny67G7fyUIhz +ksLi4xaNmjICq44Y3ekQEe5+NauQrz4wlHrQMz2nZQ/1/I6eYs9HRCwBXbsdtTLS +R9I4LtD+gdwyah617jzV/OeBHRnDJELqYzmp +-----END CERTIFICATE----- + +# Issuer: CN=AddTrust External CA Root O=AddTrust AB OU=AddTrust External TTP Network +# Subject: CN=AddTrust External CA Root O=AddTrust AB OU=AddTrust External TTP Network +# Label: "AddTrust External Root" +# Serial: 1 +# MD5 Fingerprint: 1d:35:54:04:85:78:b0:3f:42:42:4d:bf:20:73:0a:3f +# SHA1 Fingerprint: 02:fa:f3:e2:91:43:54:68:60:78:57:69:4d:f5:e4:5b:68:85:18:68 +# SHA256 Fingerprint: 68:7f:a4:51:38:22:78:ff:f0:c8:b1:1f:8d:43:d5:76:67:1c:6e:b2:bc:ea:b4:13:fb:83:d9:65:d0:6d:2f:f2 +-----BEGIN CERTIFICATE----- +MIIENjCCAx6gAwIBAgIBATANBgkqhkiG9w0BAQUFADBvMQswCQYDVQQGEwJTRTEU +MBIGA1UEChMLQWRkVHJ1c3QgQUIxJjAkBgNVBAsTHUFkZFRydXN0IEV4dGVybmFs +IFRUUCBOZXR3b3JrMSIwIAYDVQQDExlBZGRUcnVzdCBFeHRlcm5hbCBDQSBSb290 +MB4XDTAwMDUzMDEwNDgzOFoXDTIwMDUzMDEwNDgzOFowbzELMAkGA1UEBhMCU0Ux +FDASBgNVBAoTC0FkZFRydXN0IEFCMSYwJAYDVQQLEx1BZGRUcnVzdCBFeHRlcm5h +bCBUVFAgTmV0d29yazEiMCAGA1UEAxMZQWRkVHJ1c3QgRXh0ZXJuYWwgQ0EgUm9v +dDCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBALf3GjPm8gAELTngTlvt +H7xsD821+iO2zt6bETOXpClMfZOfvUq8k+0DGuOPz+VtUFrWlymUWoCwSXrbLpX9 +uMq/NzgtHj6RQa1wVsfwTz/oMp50ysiQVOnGXw94nZpAPA6sYapeFI+eh6FqUNzX +mk6vBbOmcZSccbNQYArHE504B4YCqOmoaSYYkKtMsE8jqzpPhNjfzp/haW+710LX +a0Tkx63ubUFfclpxCDezeWWkWaCUN/cALw3CknLa0Dhy2xSoRcRdKn23tNbE7qzN +E0S3ySvdQwAl+mG5aWpYIxG3pzOPVnVZ9c0p10a3CitlttNCbxWyuHv77+ldU9U0 +WicCAwEAAaOB3DCB2TAdBgNVHQ4EFgQUrb2YejS0Jvf6xCZU7wO94CTLVBowCwYD +VR0PBAQDAgEGMA8GA1UdEwEB/wQFMAMBAf8wgZkGA1UdIwSBkTCBjoAUrb2YejS0 +Jvf6xCZU7wO94CTLVBqhc6RxMG8xCzAJBgNVBAYTAlNFMRQwEgYDVQQKEwtBZGRU +cnVzdCBBQjEmMCQGA1UECxMdQWRkVHJ1c3QgRXh0ZXJuYWwgVFRQIE5ldHdvcmsx +IjAgBgNVBAMTGUFkZFRydXN0IEV4dGVybmFsIENBIFJvb3SCAQEwDQYJKoZIhvcN +AQEFBQADggEBALCb4IUlwtYj4g+WBpKdQZic2YR5gdkeWxQHIzZlj7DYd7usQWxH +YINRsPkyPef89iYTx4AWpb9a/IfPeHmJIZriTAcKhjW88t5RxNKWt9x+Tu5w/Rw5 +6wwCURQtjr0W4MHfRnXnJK3s9EK0hZNwEGe6nQY1ShjTK3rMUUKhemPR5ruhxSvC +Nr4TDea9Y355e6cJDUCrat2PisP29owaQgVR1EX1n6diIWgVIEM8med8vSTYqZEX +c4g/VhsxOBi0cQ+azcgOno4uG+GMmIPLHzHxREzGBHNJdmAPx/i9F4BrLunMTA5a +mnkPIAou1Z5jJh5VkpTYghdae9C8x49OhgQ= +-----END CERTIFICATE----- + +# Issuer: CN=Entrust Root Certification Authority O=Entrust, Inc. OU=www.entrust.net/CPS is incorporated by reference/(c) 2006 Entrust, Inc. +# Subject: CN=Entrust Root Certification Authority O=Entrust, Inc. OU=www.entrust.net/CPS is incorporated by reference/(c) 2006 Entrust, Inc. 
+# Label: "Entrust Root Certification Authority" +# Serial: 1164660820 +# MD5 Fingerprint: d6:a5:c3:ed:5d:dd:3e:00:c1:3d:87:92:1f:1d:3f:e4 +# SHA1 Fingerprint: b3:1e:b1:b7:40:e3:6c:84:02:da:dc:37:d4:4d:f5:d4:67:49:52:f9 +# SHA256 Fingerprint: 73:c1:76:43:4f:1b:c6:d5:ad:f4:5b:0e:76:e7:27:28:7c:8d:e5:76:16:c1:e6:e6:14:1a:2b:2c:bc:7d:8e:4c +-----BEGIN CERTIFICATE----- +MIIEkTCCA3mgAwIBAgIERWtQVDANBgkqhkiG9w0BAQUFADCBsDELMAkGA1UEBhMC +VVMxFjAUBgNVBAoTDUVudHJ1c3QsIEluYy4xOTA3BgNVBAsTMHd3dy5lbnRydXN0 +Lm5ldC9DUFMgaXMgaW5jb3Jwb3JhdGVkIGJ5IHJlZmVyZW5jZTEfMB0GA1UECxMW +KGMpIDIwMDYgRW50cnVzdCwgSW5jLjEtMCsGA1UEAxMkRW50cnVzdCBSb290IENl +cnRpZmljYXRpb24gQXV0aG9yaXR5MB4XDTA2MTEyNzIwMjM0MloXDTI2MTEyNzIw +NTM0MlowgbAxCzAJBgNVBAYTAlVTMRYwFAYDVQQKEw1FbnRydXN0LCBJbmMuMTkw +NwYDVQQLEzB3d3cuZW50cnVzdC5uZXQvQ1BTIGlzIGluY29ycG9yYXRlZCBieSBy +ZWZlcmVuY2UxHzAdBgNVBAsTFihjKSAyMDA2IEVudHJ1c3QsIEluYy4xLTArBgNV +BAMTJEVudHJ1c3QgUm9vdCBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eTCCASIwDQYJ +KoZIhvcNAQEBBQADggEPADCCAQoCggEBALaVtkNC+sZtKm9I35RMOVcF7sN5EUFo +Nu3s/poBj6E4KPz3EEZmLk0eGrEaTsbRwJWIsMn/MYszA9u3g3s+IIRe7bJWKKf4 +4LlAcTfFy0cOlypowCKVYhXbR9n10Cv/gkvJrT7eTNuQgFA/CYqEAOwwCj0Yzfv9 +KlmaI5UXLEWeH25DeW0MXJj+SKfFI0dcXv1u5x609mhF0YaDW6KKjbHjKYD+JXGI +rb68j6xSlkuqUY3kEzEZ6E5Nn9uss2rVvDlUccp6en+Q3X0dgNmBu1kmwhH+5pPi +94DkZfs0Nw4pgHBNrziGLp5/V6+eF67rHMsoIV+2HNjnogQi+dPa2MsCAwEAAaOB +sDCBrTAOBgNVHQ8BAf8EBAMCAQYwDwYDVR0TAQH/BAUwAwEB/zArBgNVHRAEJDAi +gA8yMDA2MTEyNzIwMjM0MlqBDzIwMjYxMTI3MjA1MzQyWjAfBgNVHSMEGDAWgBRo +kORnpKZTgMeGZqTx90tD+4S9bTAdBgNVHQ4EFgQUaJDkZ6SmU4DHhmak8fdLQ/uE +vW0wHQYJKoZIhvZ9B0EABBAwDhsIVjcuMTo0LjADAgSQMA0GCSqGSIb3DQEBBQUA +A4IBAQCT1DCw1wMgKtD5Y+iRDAUgqV8ZyntyTtSx29CW+1RaGSwMCPeyvIWonX9t +O1KzKtvn1ISMY/YPyyYBkVBs9F8U4pN0wBOeMDpQ47RgxRzwIkSNcUesyBrJ6Zua +AGAT/3B+XxFNSRuzFVJ7yVTav52Vr2ua2J7p8eRDjeIRRDq/r72DQnNSi6q7pynP +9WQcCk3RvKqsnyrQ/39/2n3qse0wJcGE2jTSW3iDVuycNsMm4hH2Z0kdkquM++v/ +eu6FSqdQgPCnXEqULl8FmTxSQeDNtGPPAUO6nIPcj2A781q0tHuu2guQOHXvgR1m +0vdXcDazv/wor3ElhVsT/h5/WrQ8 +-----END CERTIFICATE----- + +# Issuer: CN=GeoTrust Global CA O=GeoTrust Inc. +# Subject: CN=GeoTrust Global CA O=GeoTrust Inc. 
+# Label: "GeoTrust Global CA" +# Serial: 144470 +# MD5 Fingerprint: f7:75:ab:29:fb:51:4e:b7:77:5e:ff:05:3c:99:8e:f5 +# SHA1 Fingerprint: de:28:f4:a4:ff:e5:b9:2f:a3:c5:03:d1:a3:49:a7:f9:96:2a:82:12 +# SHA256 Fingerprint: ff:85:6a:2d:25:1d:cd:88:d3:66:56:f4:50:12:67:98:cf:ab:aa:de:40:79:9c:72:2d:e4:d2:b5:db:36:a7:3a +-----BEGIN CERTIFICATE----- +MIIDVDCCAjygAwIBAgIDAjRWMA0GCSqGSIb3DQEBBQUAMEIxCzAJBgNVBAYTAlVT +MRYwFAYDVQQKEw1HZW9UcnVzdCBJbmMuMRswGQYDVQQDExJHZW9UcnVzdCBHbG9i +YWwgQ0EwHhcNMDIwNTIxMDQwMDAwWhcNMjIwNTIxMDQwMDAwWjBCMQswCQYDVQQG +EwJVUzEWMBQGA1UEChMNR2VvVHJ1c3QgSW5jLjEbMBkGA1UEAxMSR2VvVHJ1c3Qg +R2xvYmFsIENBMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA2swYYzD9 +9BcjGlZ+W988bDjkcbd4kdS8odhM+KhDtgPpTSEHCIjaWC9mOSm9BXiLnTjoBbdq +fnGk5sRgprDvgOSJKA+eJdbtg/OtppHHmMlCGDUUna2YRpIuT8rxh0PBFpVXLVDv +iS2Aelet8u5fa9IAjbkU+BQVNdnARqN7csiRv8lVK83Qlz6cJmTM386DGXHKTubU +1XupGc1V3sjs0l44U+VcT4wt/lAjNvxm5suOpDkZALeVAjmRCw7+OC7RHQWa9k0+ +bw8HHa8sHo9gOeL6NlMTOdReJivbPagUvTLrGAMoUgRx5aszPeE4uwc2hGKceeoW +MPRfwCvocWvk+QIDAQABo1MwUTAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBTA +ephojYn7qwVkDBF9qn1luMrMTjAfBgNVHSMEGDAWgBTAephojYn7qwVkDBF9qn1l +uMrMTjANBgkqhkiG9w0BAQUFAAOCAQEANeMpauUvXVSOKVCUn5kaFOSPeCpilKIn +Z57QzxpeR+nBsqTP3UEaBU6bS+5Kb1VSsyShNwrrZHYqLizz/Tt1kL/6cdjHPTfS +tQWVYrmm3ok9Nns4d0iXrKYgjy6myQzCsplFAMfOEVEiIuCl6rYVSAlk6l5PdPcF +PseKUgzbFbS9bZvlxrFUaKnjaZC2mqUPuLk/IH2uSrW4nOQdtqvmlKXBx4Ot2/Un +hw4EbNX/3aBd7YdStysVAq45pmp06drE57xNNB6pXE0zX5IJL4hmXXeXxx12E6nV +5fEWCRE11azbJHFwLJhWC9kXtNHjUStedejV0NxPNO3CBWaAocvmMw== +-----END CERTIFICATE----- + +# Issuer: CN=GeoTrust Universal CA O=GeoTrust Inc. +# Subject: CN=GeoTrust Universal CA O=GeoTrust Inc. +# Label: "GeoTrust Universal CA" +# Serial: 1 +# MD5 Fingerprint: 92:65:58:8b:a2:1a:31:72:73:68:5c:b4:a5:7a:07:48 +# SHA1 Fingerprint: e6:21:f3:35:43:79:05:9a:4b:68:30:9d:8a:2f:74:22:15:87:ec:79 +# SHA256 Fingerprint: a0:45:9b:9f:63:b2:25:59:f5:fa:5d:4c:6d:b3:f9:f7:2f:f1:93:42:03:35:78:f0:73:bf:1d:1b:46:cb:b9:12 +-----BEGIN CERTIFICATE----- +MIIFaDCCA1CgAwIBAgIBATANBgkqhkiG9w0BAQUFADBFMQswCQYDVQQGEwJVUzEW +MBQGA1UEChMNR2VvVHJ1c3QgSW5jLjEeMBwGA1UEAxMVR2VvVHJ1c3QgVW5pdmVy +c2FsIENBMB4XDTA0MDMwNDA1MDAwMFoXDTI5MDMwNDA1MDAwMFowRTELMAkGA1UE +BhMCVVMxFjAUBgNVBAoTDUdlb1RydXN0IEluYy4xHjAcBgNVBAMTFUdlb1RydXN0 +IFVuaXZlcnNhbCBDQTCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBAKYV +VaCjxuAfjJ0hUNfBvitbtaSeodlyWL0AG0y/YckUHUWCq8YdgNY96xCcOq9tJPi8 +cQGeBvV8Xx7BDlXKg5pZMK4ZyzBIle0iN430SppyZj6tlcDgFgDgEB8rMQ7XlFTT +QjOgNB0eRXbdT8oYN+yFFXoZCPzVx5zw8qkuEKmS5j1YPakWaDwvdSEYfyh3peFh +F7em6fgemdtzbvQKoiFs7tqqhZJmr/Z6a4LauiIINQ/PQvE1+mrufislzDoR5G2v +c7J2Ha3QsnhnGqQ5HFELZ1aD/ThdDc7d8Lsrlh/eezJS/R27tQahsiFepdaVaH/w +mZ7cRQg+59IJDTWU3YBOU5fXtQlEIGQWFwMCTFMNaN7VqnJNk22CDtucvc+081xd +VHppCZbW2xHBjXWotM85yM48vCR85mLK4b19p71XZQvk/iXttmkQ3CgaRr0BHdCX +teGYO8A3ZNY9lO4L4fUorgtWv3GLIylBjobFS1J72HGrH4oVpjuDWtdYAVHGTEHZ +f9hBZ3KiKN9gg6meyHv8U3NyWfWTehd2Ds735VzZC1U0oqpbtWpU5xPKV+yXbfRe +Bi9Fi1jUIxaS5BZuKGNZMN9QAZxjiRqf2xeUgnA3wySemkfWWspOqGmJch+RbNt+ +nhutxx9z3SxPGWX9f5NAEC7S8O08ni4oPmkmM8V7AgMBAAGjYzBhMA8GA1UdEwEB +/wQFMAMBAf8wHQYDVR0OBBYEFNq7LqqwDLiIJlF0XG0D08DYj3rWMB8GA1UdIwQY +MBaAFNq7LqqwDLiIJlF0XG0D08DYj3rWMA4GA1UdDwEB/wQEAwIBhjANBgkqhkiG +9w0BAQUFAAOCAgEAMXjmx7XfuJRAyXHEqDXsRh3ChfMoWIawC/yOsjmPRFWrZIRc +aanQmjg8+uUfNeVE44B5lGiku8SfPeE0zTBGi1QrlaXv9z+ZhP015s8xxtxqv6fX +IwjhmF7DWgh2qaavdy+3YL1ERmrvl/9zlcGO6JP7/TG37FcREUWbMPEaiDnBTzyn +ANXH/KttgCJwpQzgXQQpAvvLoJHRfNbDflDVnVi+QTjruXU8FdmbyUqDWcDaU/0z +uzYYm4UPFd3uLax2k7nZAY1IEKj79TiG8dsKxr2EoyNB3tZ3b4XUhRxQ4K5RirqN 
+Pnbiucon8l+f725ZDQbYKxek0nxru18UGkiPGkzns0ccjkxFKyDuSN/n3QmOGKja +QI2SJhFTYXNd673nxE0pN2HrrDktZy4W1vUAg4WhzH92xH3kt0tm7wNFYGm2DFKW +koRepqO1pD4r2czYG0eq8kTaT/kD6PAUyz/zg97QwVTjt+gKN02LIFkDMBmhLMi9 +ER/frslKxfMnZmaGrGiR/9nmUxwPi1xpZQomyB40w11Re9epnAahNt3ViZS82eQt +DF4JbAiXfKM9fJP/P6EUp8+1Xevb2xzEdt+Iub1FBZUbrvxGakyvSOPOrg/Sfuvm +bJxPgWp6ZKy7PtXny3YuxadIwVyQD8vIP/rmMuGNG2+k5o7Y+SlIis5z/iw= +-----END CERTIFICATE----- + +# Issuer: CN=GeoTrust Universal CA 2 O=GeoTrust Inc. +# Subject: CN=GeoTrust Universal CA 2 O=GeoTrust Inc. +# Label: "GeoTrust Universal CA 2" +# Serial: 1 +# MD5 Fingerprint: 34:fc:b8:d0:36:db:9e:14:b3:c2:f2:db:8f:e4:94:c7 +# SHA1 Fingerprint: 37:9a:19:7b:41:85:45:35:0c:a6:03:69:f3:3c:2e:af:47:4f:20:79 +# SHA256 Fingerprint: a0:23:4f:3b:c8:52:7c:a5:62:8e:ec:81:ad:5d:69:89:5d:a5:68:0d:c9:1d:1c:b8:47:7f:33:f8:78:b9:5b:0b +-----BEGIN CERTIFICATE----- +MIIFbDCCA1SgAwIBAgIBATANBgkqhkiG9w0BAQUFADBHMQswCQYDVQQGEwJVUzEW +MBQGA1UEChMNR2VvVHJ1c3QgSW5jLjEgMB4GA1UEAxMXR2VvVHJ1c3QgVW5pdmVy +c2FsIENBIDIwHhcNMDQwMzA0MDUwMDAwWhcNMjkwMzA0MDUwMDAwWjBHMQswCQYD +VQQGEwJVUzEWMBQGA1UEChMNR2VvVHJ1c3QgSW5jLjEgMB4GA1UEAxMXR2VvVHJ1 +c3QgVW5pdmVyc2FsIENBIDIwggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoIC +AQCzVFLByT7y2dyxUxpZKeexw0Uo5dfR7cXFS6GqdHtXr0om/Nj1XqduGdt0DE81 +WzILAePb63p3NeqqWuDW6KFXlPCQo3RWlEQwAx5cTiuFJnSCegx2oG9NzkEtoBUG +FF+3Qs17j1hhNNwqCPkuwwGmIkQcTAeC5lvO0Ep8BNMZcyfwqph/Lq9O64ceJHdq +XbboW0W63MOhBW9Wjo8QJqVJwy7XQYci4E+GymC16qFjwAGXEHm9ADwSbSsVsaxL +se4YuU6W3Nx2/zu+z18DwPw76L5GG//aQMJS9/7jOvdqdzXQ2o3rXhhqMcceujwb +KNZrVMaqW9eiLBsZzKIC9ptZvTdrhrVtgrrY6slWvKk2WP0+GfPtDCapkzj4T8Fd +IgbQl+rhrcZV4IErKIM6+vR7IVEAvlI4zs1meaj0gVbi0IMJR1FbUGrP20gaXT73 +y/Zl92zxlfgCOzJWgjl6W70viRu/obTo/3+NjN8D8WBOWBFM66M/ECuDmgFz2ZRt +hAAnZqzwcEAJQpKtT5MNYQlRJNiS1QuUYbKHsu3/mjX/hVTK7URDrBs8FmtISgoc +QIgfksILAAX/8sgCSqSqqcyZlpwvWOB94b67B9xfBHJcMTTD7F8t4D1kkCLm0ey4 +Lt1ZrtmhN79UNdxzMk+MBB4zsslG8dhcyFVQyWi9qLo2CQIDAQABo2MwYTAPBgNV +HRMBAf8EBTADAQH/MB0GA1UdDgQWBBR281Xh+qQ2+/CfXGJx7Tz0RzgQKzAfBgNV +HSMEGDAWgBR281Xh+qQ2+/CfXGJx7Tz0RzgQKzAOBgNVHQ8BAf8EBAMCAYYwDQYJ +KoZIhvcNAQEFBQADggIBAGbBxiPz2eAubl/oz66wsCVNK/g7WJtAJDday6sWSf+z +dXkzoS9tcBc0kf5nfo/sm+VegqlVHy/c1FEHEv6sFj4sNcZj/NwQ6w2jqtB8zNHQ +L1EuxBRa3ugZ4T7GzKQp5y6EqgYweHZUcyiYWTjgAA1i00J9IZ+uPTqM1fp3DRgr +Fg5fNuH8KrUwJM/gYwx7WBr+mbpCErGR9Hxo4sjoryzqyX6uuyo9DRXcNJW2GHSo +ag/HtPQTxORb7QrSpJdMKu0vbBKJPfEncKpqA1Ihn0CoZ1Dy81of398j9tx4TuaY +T1U6U+Pv8vSfx3zYWK8pIpe44L2RLrB27FcRz+8pRPPphXpgY+RdM4kX2TGq2tbz +GDVyz4crL2MjhF2EjD9XoIj8mZEoJmmZ1I+XRL6O1UixpCgp8RW04eWe3fiPpm8m +1wk8OhwRDqZsN/etRIcsKMfYdIKz0G9KV7s1KSegi+ghp4dkNl3M2Basx7InQJJV +OCiNUW7dFGdTbHFcJoRNdVq2fmBWqU2t+5sel/MN2dKXVHfaPRK34B7vCAas+YWH +6aLcr34YEoP9VhdBLtUpgn2Z9DH2canPLAEnpQW5qrJITirvn5NSUZU8UnOOVkwX +QMAJKOSLakhT2+zNVVXxxvjpoixMptEmX36vWkzaH6byHCx+rgIW0lbQL1dTR+iS +-----END CERTIFICATE----- + +# Issuer: CN=AAA Certificate Services O=Comodo CA Limited +# Subject: CN=AAA Certificate Services O=Comodo CA Limited +# Label: "Comodo AAA Services root" +# Serial: 1 +# MD5 Fingerprint: 49:79:04:b0:eb:87:19:ac:47:b0:bc:11:51:9b:74:d0 +# SHA1 Fingerprint: d1:eb:23:a4:6d:17:d6:8f:d9:25:64:c2:f1:f1:60:17:64:d8:e3:49 +# SHA256 Fingerprint: d7:a7:a0:fb:5d:7e:27:31:d7:71:e9:48:4e:bc:de:f7:1d:5f:0c:3e:0a:29:48:78:2b:c8:3e:e0:ea:69:9e:f4 +-----BEGIN CERTIFICATE----- +MIIEMjCCAxqgAwIBAgIBATANBgkqhkiG9w0BAQUFADB7MQswCQYDVQQGEwJHQjEb +MBkGA1UECAwSR3JlYXRlciBNYW5jaGVzdGVyMRAwDgYDVQQHDAdTYWxmb3JkMRow +GAYDVQQKDBFDb21vZG8gQ0EgTGltaXRlZDEhMB8GA1UEAwwYQUFBIENlcnRpZmlj +YXRlIFNlcnZpY2VzMB4XDTA0MDEwMTAwMDAwMFoXDTI4MTIzMTIzNTk1OVowezEL 
+MAkGA1UEBhMCR0IxGzAZBgNVBAgMEkdyZWF0ZXIgTWFuY2hlc3RlcjEQMA4GA1UE +BwwHU2FsZm9yZDEaMBgGA1UECgwRQ29tb2RvIENBIExpbWl0ZWQxITAfBgNVBAMM +GEFBQSBDZXJ0aWZpY2F0ZSBTZXJ2aWNlczCCASIwDQYJKoZIhvcNAQEBBQADggEP +ADCCAQoCggEBAL5AnfRu4ep2hxxNRUSOvkbIgwadwSr+GB+O5AL686tdUIoWMQua +BtDFcCLNSS1UY8y2bmhGC1Pqy0wkwLxyTurxFa70VJoSCsN6sjNg4tqJVfMiWPPe +3M/vg4aijJRPn2jymJBGhCfHdr/jzDUsi14HZGWCwEiwqJH5YZ92IFCokcdmtet4 +YgNW8IoaE+oxox6gmf049vYnMlhvB/VruPsUK6+3qszWY19zjNoFmag4qMsXeDZR +rOme9Hg6jc8P2ULimAyrL58OAd7vn5lJ8S3frHRNG5i1R8XlKdH5kBjHYpy+g8cm +ez6KJcfA3Z3mNWgQIJ2P2N7Sw4ScDV7oL8kCAwEAAaOBwDCBvTAdBgNVHQ4EFgQU +oBEKIz6W8Qfs4q8p74Klf9AwpLQwDgYDVR0PAQH/BAQDAgEGMA8GA1UdEwEB/wQF +MAMBAf8wewYDVR0fBHQwcjA4oDagNIYyaHR0cDovL2NybC5jb21vZG9jYS5jb20v +QUFBQ2VydGlmaWNhdGVTZXJ2aWNlcy5jcmwwNqA0oDKGMGh0dHA6Ly9jcmwuY29t +b2RvLm5ldC9BQUFDZXJ0aWZpY2F0ZVNlcnZpY2VzLmNybDANBgkqhkiG9w0BAQUF +AAOCAQEACFb8AvCb6P+k+tZ7xkSAzk/ExfYAWMymtrwUSWgEdujm7l3sAg9g1o1Q +GE8mTgHj5rCl7r+8dFRBv/38ErjHT1r0iWAFf2C3BUrz9vHCv8S5dIa2LX1rzNLz +Rt0vxuBqw8M0Ayx9lt1awg6nCpnBBYurDC/zXDrPbDdVCYfeU0BsWO/8tqtlbgT2 +G9w84FoVxp7Z8VlIMCFlA2zs6SFz7JsDoeA3raAVGI/6ugLOpyypEBMs1OUIJqsi +l2D4kF501KKaU73yqWjgom7C12yxow+ev+to51byrvLjKzg6CYG1a4XXvi3tPxq3 +smPi9WIsgtRqAEFQ8TmDn5XpNpaYbg== +-----END CERTIFICATE----- + +# Issuer: CN=QuoVadis Root Certification Authority O=QuoVadis Limited OU=Root Certification Authority +# Subject: CN=QuoVadis Root Certification Authority O=QuoVadis Limited OU=Root Certification Authority +# Label: "QuoVadis Root CA" +# Serial: 985026699 +# MD5 Fingerprint: 27:de:36:fe:72:b7:00:03:00:9d:f4:f0:1e:6c:04:24 +# SHA1 Fingerprint: de:3f:40:bd:50:93:d3:9b:6c:60:f6:da:bc:07:62:01:00:89:76:c9 +# SHA256 Fingerprint: a4:5e:de:3b:bb:f0:9c:8a:e1:5c:72:ef:c0:72:68:d6:93:a2:1c:99:6f:d5:1e:67:ca:07:94:60:fd:6d:88:73 +-----BEGIN CERTIFICATE----- +MIIF0DCCBLigAwIBAgIEOrZQizANBgkqhkiG9w0BAQUFADB/MQswCQYDVQQGEwJC +TTEZMBcGA1UEChMQUXVvVmFkaXMgTGltaXRlZDElMCMGA1UECxMcUm9vdCBDZXJ0 +aWZpY2F0aW9uIEF1dGhvcml0eTEuMCwGA1UEAxMlUXVvVmFkaXMgUm9vdCBDZXJ0 +aWZpY2F0aW9uIEF1dGhvcml0eTAeFw0wMTAzMTkxODMzMzNaFw0yMTAzMTcxODMz +MzNaMH8xCzAJBgNVBAYTAkJNMRkwFwYDVQQKExBRdW9WYWRpcyBMaW1pdGVkMSUw +IwYDVQQLExxSb290IENlcnRpZmljYXRpb24gQXV0aG9yaXR5MS4wLAYDVQQDEyVR +dW9WYWRpcyBSb290IENlcnRpZmljYXRpb24gQXV0aG9yaXR5MIIBIjANBgkqhkiG +9w0BAQEFAAOCAQ8AMIIBCgKCAQEAv2G1lVO6V/z68mcLOhrfEYBklbTRvM16z/Yp +li4kVEAkOPcahdxYTMukJ0KX0J+DisPkBgNbAKVRHnAEdOLB1Dqr1607BxgFjv2D +rOpm2RgbaIr1VxqYuvXtdj182d6UajtLF8HVj71lODqV0D1VNk7feVcxKh7YWWVJ +WCCYfqtffp/p1k3sg3Spx2zY7ilKhSoGFPlU5tPaZQeLYzcS19Dsw3sgQUSj7cug +F+FxZc4dZjH3dgEZyH0DWLaVSR2mEiboxgx24ONmy+pdpibu5cxfvWenAScOospU +xbF6lR1xHkopigPcakXBpBlebzbNw6Kwt/5cOOJSvPhEQ+aQuwIDAQABo4ICUjCC +Ak4wPQYIKwYBBQUHAQEEMTAvMC0GCCsGAQUFBzABhiFodHRwczovL29jc3AucXVv +dmFkaXNvZmZzaG9yZS5jb20wDwYDVR0TAQH/BAUwAwEB/zCCARoGA1UdIASCAREw +ggENMIIBCQYJKwYBBAG+WAABMIH7MIHUBggrBgEFBQcCAjCBxxqBxFJlbGlhbmNl +IG9uIHRoZSBRdW9WYWRpcyBSb290IENlcnRpZmljYXRlIGJ5IGFueSBwYXJ0eSBh +c3N1bWVzIGFjY2VwdGFuY2Ugb2YgdGhlIHRoZW4gYXBwbGljYWJsZSBzdGFuZGFy +ZCB0ZXJtcyBhbmQgY29uZGl0aW9ucyBvZiB1c2UsIGNlcnRpZmljYXRpb24gcHJh +Y3RpY2VzLCBhbmQgdGhlIFF1b1ZhZGlzIENlcnRpZmljYXRlIFBvbGljeS4wIgYI +KwYBBQUHAgEWFmh0dHA6Ly93d3cucXVvdmFkaXMuYm0wHQYDVR0OBBYEFItLbe3T +KbkGGew5Oanwl4Rqy+/fMIGuBgNVHSMEgaYwgaOAFItLbe3TKbkGGew5Oanwl4Rq +y+/foYGEpIGBMH8xCzAJBgNVBAYTAkJNMRkwFwYDVQQKExBRdW9WYWRpcyBMaW1p +dGVkMSUwIwYDVQQLExxSb290IENlcnRpZmljYXRpb24gQXV0aG9yaXR5MS4wLAYD +VQQDEyVRdW9WYWRpcyBSb290IENlcnRpZmljYXRpb24gQXV0aG9yaXR5ggQ6tlCL +MA4GA1UdDwEB/wQEAwIBBjANBgkqhkiG9w0BAQUFAAOCAQEAitQUtf70mpKnGdSk 
+fnIYj9lofFIk3WdvOXrEql494liwTXCYhGHoG+NpGA7O+0dQoE7/8CQfvbLO9Sf8 +7C9TqnN7Az10buYWnuulLsS/VidQK2K6vkscPFVcQR0kvoIgR13VRH56FmjffU1R +cHhXHTMe/QKZnAzNCgVPx7uOpHX6Sm2xgI4JVrmcGmD+XcHXetwReNDWXcG31a0y +mQM6isxUJTkxgXsTIlG6Rmyhu576BGxJJnSP0nPrzDCi5upZIof4l/UO/erMkqQW +xFIY6iHOsfHmhIHluqmGKPJDWl0Snawe2ajlCmqnf6CHKc/yiU3U7MXi5nrQNiOK +SnQ2+Q== +-----END CERTIFICATE----- + +# Issuer: CN=QuoVadis Root CA 2 O=QuoVadis Limited +# Subject: CN=QuoVadis Root CA 2 O=QuoVadis Limited +# Label: "QuoVadis Root CA 2" +# Serial: 1289 +# MD5 Fingerprint: 5e:39:7b:dd:f8:ba:ec:82:e9:ac:62:ba:0c:54:00:2b +# SHA1 Fingerprint: ca:3a:fb:cf:12:40:36:4b:44:b2:16:20:88:80:48:39:19:93:7c:f7 +# SHA256 Fingerprint: 85:a0:dd:7d:d7:20:ad:b7:ff:05:f8:3d:54:2b:20:9d:c7:ff:45:28:f7:d6:77:b1:83:89:fe:a5:e5:c4:9e:86 +-----BEGIN CERTIFICATE----- +MIIFtzCCA5+gAwIBAgICBQkwDQYJKoZIhvcNAQEFBQAwRTELMAkGA1UEBhMCQk0x +GTAXBgNVBAoTEFF1b1ZhZGlzIExpbWl0ZWQxGzAZBgNVBAMTElF1b1ZhZGlzIFJv +b3QgQ0EgMjAeFw0wNjExMjQxODI3MDBaFw0zMTExMjQxODIzMzNaMEUxCzAJBgNV +BAYTAkJNMRkwFwYDVQQKExBRdW9WYWRpcyBMaW1pdGVkMRswGQYDVQQDExJRdW9W +YWRpcyBSb290IENBIDIwggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQCa +GMpLlA0ALa8DKYrwD4HIrkwZhR0In6spRIXzL4GtMh6QRr+jhiYaHv5+HBg6XJxg +Fyo6dIMzMH1hVBHL7avg5tKifvVrbxi3Cgst/ek+7wrGsxDp3MJGF/hd/aTa/55J +WpzmM+Yklvc/ulsrHHo1wtZn/qtmUIttKGAr79dgw8eTvI02kfN/+NsRE8Scd3bB +rrcCaoF6qUWD4gXmuVbBlDePSHFjIuwXZQeVikvfj8ZaCuWw419eaxGrDPmF60Tp ++ARz8un+XJiM9XOva7R+zdRcAitMOeGylZUtQofX1bOQQ7dsE/He3fbE+Ik/0XX1 +ksOR1YqI0JDs3G3eicJlcZaLDQP9nL9bFqyS2+r+eXyt66/3FsvbzSUr5R/7mp/i +Ucw6UwxI5g69ybR2BlLmEROFcmMDBOAENisgGQLodKcftslWZvB1JdxnwQ5hYIiz +PtGo/KPaHbDRsSNU30R2be1B2MGyIrZTHN81Hdyhdyox5C315eXbyOD/5YDXC2Og +/zOhD7osFRXql7PSorW+8oyWHhqPHWykYTe5hnMz15eWniN9gqRMgeKh0bpnX5UH +oycR7hYQe7xFSkyyBNKr79X9DFHOUGoIMfmR2gyPZFwDwzqLID9ujWc9Otb+fVuI +yV77zGHcizN300QyNQliBJIWENieJ0f7OyHj+OsdWwIDAQABo4GwMIGtMA8GA1Ud +EwEB/wQFMAMBAf8wCwYDVR0PBAQDAgEGMB0GA1UdDgQWBBQahGK8SEwzJQTU7tD2 +A8QZRtGUazBuBgNVHSMEZzBlgBQahGK8SEwzJQTU7tD2A8QZRtGUa6FJpEcwRTEL +MAkGA1UEBhMCQk0xGTAXBgNVBAoTEFF1b1ZhZGlzIExpbWl0ZWQxGzAZBgNVBAMT +ElF1b1ZhZGlzIFJvb3QgQ0EgMoICBQkwDQYJKoZIhvcNAQEFBQADggIBAD4KFk2f +BluornFdLwUvZ+YTRYPENvbzwCYMDbVHZF34tHLJRqUDGCdViXh9duqWNIAXINzn +g/iN/Ae42l9NLmeyhP3ZRPx3UIHmfLTJDQtyU/h2BwdBR5YM++CCJpNVjP4iH2Bl +fF/nJrP3MpCYUNQ3cVX2kiF495V5+vgtJodmVjB3pjd4M1IQWK4/YY7yarHvGH5K +WWPKjaJW1acvvFYfzznB4vsKqBUsfU16Y8Zsl0Q80m/DShcK+JDSV6IZUaUtl0Ha +B0+pUNqQjZRG4T7wlP0QADj1O+hA4bRuVhogzG9Yje0uRY/W6ZM/57Es3zrWIozc +hLsib9D45MY56QSIPMO661V6bYCZJPVsAfv4l7CUW+v90m/xd2gNNWQjrLhVoQPR +TUIZ3Ph1WVaj+ahJefivDrkRoHy3au000LYmYjgahwz46P0u05B/B5EqHdZ+XIWD +mbA4CD/pXvk1B+TJYm5Xf6dQlfe6yJvmjqIBxdZmv3lh8zwc4bmCXF2gw+nYSL0Z +ohEUGW6yhhtoPkg3Goi3XZZenMfvJ2II4pEZXNLxId26F0KCl3GBUzGpn/Z9Yr9y +4aOTHcyKJloJONDO1w2AFrR4pTqHTI2KpdVGl/IsELm8VCLAAVBpQ570su9t+Oza +8eOx79+Rj1QqCyXBJhnEUhAFZdWCEOrCMc0u +-----END CERTIFICATE----- + +# Issuer: CN=QuoVadis Root CA 3 O=QuoVadis Limited +# Subject: CN=QuoVadis Root CA 3 O=QuoVadis Limited +# Label: "QuoVadis Root CA 3" +# Serial: 1478 +# MD5 Fingerprint: 31:85:3c:62:94:97:63:b9:aa:fd:89:4e:af:6f:e0:cf +# SHA1 Fingerprint: 1f:49:14:f7:d8:74:95:1d:dd:ae:02:c0:be:fd:3a:2d:82:75:51:85 +# SHA256 Fingerprint: 18:f1:fc:7f:20:5d:f8:ad:dd:eb:7f:e0:07:dd:57:e3:af:37:5a:9c:4d:8d:73:54:6b:f4:f1:fe:d1:e1:8d:35 +-----BEGIN CERTIFICATE----- +MIIGnTCCBIWgAwIBAgICBcYwDQYJKoZIhvcNAQEFBQAwRTELMAkGA1UEBhMCQk0x +GTAXBgNVBAoTEFF1b1ZhZGlzIExpbWl0ZWQxGzAZBgNVBAMTElF1b1ZhZGlzIFJv +b3QgQ0EgMzAeFw0wNjExMjQxOTExMjNaFw0zMTExMjQxOTA2NDRaMEUxCzAJBgNV 
+BAYTAkJNMRkwFwYDVQQKExBRdW9WYWRpcyBMaW1pdGVkMRswGQYDVQQDExJRdW9W +YWRpcyBSb290IENBIDMwggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQDM +V0IWVJzmmNPTTe7+7cefQzlKZbPoFog02w1ZkXTPkrgEQK0CSzGrvI2RaNggDhoB +4hp7Thdd4oq3P5kazethq8Jlph+3t723j/z9cI8LoGe+AaJZz3HmDyl2/7FWeUUr +H556VOijKTVopAFPD6QuN+8bv+OPEKhyq1hX51SGyMnzW9os2l2ObjyjPtr7guXd +8lyyBTNvijbO0BNO/79KDDRMpsMhvVAEVeuxu537RR5kFd5VAYwCdrXLoT9Cabwv +vWhDFlaJKjdhkf2mrk7AyxRllDdLkgbvBNDInIjbC3uBr7E9KsRlOni27tyAsdLT +mZw67mtaa7ONt9XOnMK+pUsvFrGeaDsGb659n/je7Mwpp5ijJUMv7/FfJuGITfhe +btfZFG4ZM2mnO4SJk8RTVROhUXhA+LjJou57ulJCg54U7QVSWllWp5f8nT8KKdjc +T5EOE7zelaTfi5m+rJsziO+1ga8bxiJTyPbH7pcUsMV8eFLI8M5ud2CEpukqdiDt +WAEXMJPpGovgc2PZapKUSU60rUqFxKMiMPwJ7Wgic6aIDFUhWMXhOp8q3crhkODZ +c6tsgLjoC2SToJyMGf+z0gzskSaHirOi4XCPLArlzW1oUevaPwV/izLmE1xr/l9A +4iLItLRkT9a6fUg+qGkM17uGcclzuD87nSVL2v9A6wIDAQABo4IBlTCCAZEwDwYD +VR0TAQH/BAUwAwEB/zCB4QYDVR0gBIHZMIHWMIHTBgkrBgEEAb5YAAMwgcUwgZMG +CCsGAQUFBwICMIGGGoGDQW55IHVzZSBvZiB0aGlzIENlcnRpZmljYXRlIGNvbnN0 +aXR1dGVzIGFjY2VwdGFuY2Ugb2YgdGhlIFF1b1ZhZGlzIFJvb3QgQ0EgMyBDZXJ0 +aWZpY2F0ZSBQb2xpY3kgLyBDZXJ0aWZpY2F0aW9uIFByYWN0aWNlIFN0YXRlbWVu +dC4wLQYIKwYBBQUHAgEWIWh0dHA6Ly93d3cucXVvdmFkaXNnbG9iYWwuY29tL2Nw +czALBgNVHQ8EBAMCAQYwHQYDVR0OBBYEFPLAE+CCQz777i9nMpY1XNu4ywLQMG4G +A1UdIwRnMGWAFPLAE+CCQz777i9nMpY1XNu4ywLQoUmkRzBFMQswCQYDVQQGEwJC +TTEZMBcGA1UEChMQUXVvVmFkaXMgTGltaXRlZDEbMBkGA1UEAxMSUXVvVmFkaXMg +Um9vdCBDQSAzggIFxjANBgkqhkiG9w0BAQUFAAOCAgEAT62gLEz6wPJv92ZVqyM0 +7ucp2sNbtrCD2dDQ4iH782CnO11gUyeim/YIIirnv6By5ZwkajGxkHon24QRiSem +d1o417+shvzuXYO8BsbRd2sPbSQvS3pspweWyuOEn62Iix2rFo1bZhfZFvSLgNLd ++LJ2w/w4E6oM3kJpK27zPOuAJ9v1pkQNn1pVWQvVDVJIxa6f8i+AxeoyUDUSly7B +4f/xI4hROJ/yZlZ25w9Rl6VSDE1JUZU2Pb+iSwwQHYaZTKrzchGT5Or2m9qoXadN +t54CrnMAyNojA+j56hl0YgCUyyIgvpSnWbWCar6ZeXqp8kokUvd0/bpO5qgdAm6x +DYBEwa7TIzdfu4V8K5Iu6H6li92Z4b8nby1dqnuH/grdS/yO9SbkbnBCbjPsMZ57 +k8HkyWkaPcBrTiJt7qtYTcbQQcEr6k8Sh17rRdhs9ZgC06DYVYoGmRmioHfRMJ6s +zHXug/WwYjnPbFfiTNKRCw51KBuav/0aQ/HKd/s7j2G4aSgWQgRecCocIdiP4b0j +Wy10QJLZYxkNc91pvGJHvOB0K7Lrfb5BG7XARsWhIstfTsEokt4YutUqKLsRixeT +mJlglFwjz1onl14LBQaTNx47aTbrqZ5hHY8y2o4M1nQ+ewkk2gF3R8Q7zTSMmfXK +4SVhM7JZG+Ju1zdXtg2pEto= +-----END CERTIFICATE----- + +# Issuer: O=SECOM Trust.net OU=Security Communication RootCA1 +# Subject: O=SECOM Trust.net OU=Security Communication RootCA1 +# Label: "Security Communication Root CA" +# Serial: 0 +# MD5 Fingerprint: f1:bc:63:6a:54:e0:b5:27:f5:cd:e7:1a:e3:4d:6e:4a +# SHA1 Fingerprint: 36:b1:2b:49:f9:81:9e:d7:4c:9e:bc:38:0f:c6:56:8f:5d:ac:b2:f7 +# SHA256 Fingerprint: e7:5e:72:ed:9f:56:0e:ec:6e:b4:80:00:73:a4:3f:c3:ad:19:19:5a:39:22:82:01:78:95:97:4a:99:02:6b:6c +-----BEGIN CERTIFICATE----- +MIIDWjCCAkKgAwIBAgIBADANBgkqhkiG9w0BAQUFADBQMQswCQYDVQQGEwJKUDEY +MBYGA1UEChMPU0VDT00gVHJ1c3QubmV0MScwJQYDVQQLEx5TZWN1cml0eSBDb21t +dW5pY2F0aW9uIFJvb3RDQTEwHhcNMDMwOTMwMDQyMDQ5WhcNMjMwOTMwMDQyMDQ5 +WjBQMQswCQYDVQQGEwJKUDEYMBYGA1UEChMPU0VDT00gVHJ1c3QubmV0MScwJQYD +VQQLEx5TZWN1cml0eSBDb21tdW5pY2F0aW9uIFJvb3RDQTEwggEiMA0GCSqGSIb3 +DQEBAQUAA4IBDwAwggEKAoIBAQCzs/5/022x7xZ8V6UMbXaKL0u/ZPtM7orw8yl8 +9f/uKuDp6bpbZCKamm8sOiZpUQWZJtzVHGpxxpp9Hp3dfGzGjGdnSj74cbAZJ6kJ +DKaVv0uMDPpVmDvY6CKhS3E4eayXkmmziX7qIWgGmBSWh9JhNrxtJ1aeV+7AwFb9 +Ms+k2Y7CI9eNqPPYJayX5HA49LY6tJ07lyZDo6G8SVlyTCMwhwFY9k6+HGhWZq/N +QV3Is00qVUarH9oe4kA92819uZKAnDfdDJZkndwi92SL32HeFZRSFaB9UslLqCHJ +xrHty8OVYNEP8Ktw+N/LTX7s1vqr2b1/VPKl6Xn62dZ2JChzAgMBAAGjPzA9MB0G +A1UdDgQWBBSgc0mZaNyFW2XjmygvV5+9M7wHSDALBgNVHQ8EBAMCAQYwDwYDVR0T +AQH/BAUwAwEB/zANBgkqhkiG9w0BAQUFAAOCAQEAaECpqLvkT115swW1F7NgE+vG 
+kl3g0dNq/vu+m22/xwVtWSDEHPC32oRYAmP6SBbvT6UL90qY8j+eG61Ha2POCEfr +Uj94nK9NrvjVT8+amCoQQTlSxN3Zmw7vkwGusi7KaEIkQmywszo+zenaSMQVy+n5 +Bw+SUEmK3TGXX8npN6o7WWWXlDLJs58+OmJYxUmtYg5xpTKqL8aJdkNAExNnPaJU +JRDL8Try2frbSVa7pv6nQTXD4IhhyYjH3zYQIphZ6rBK+1YWc26sTfcioU+tHXot +RSflMMFe8toTyyVCUZVHA4xsIcx0Qu1T/zOLjw9XARYvz6buyXAiFL39vmwLAw== +-----END CERTIFICATE----- + +# Issuer: CN=Sonera Class2 CA O=Sonera +# Subject: CN=Sonera Class2 CA O=Sonera +# Label: "Sonera Class 2 Root CA" +# Serial: 29 +# MD5 Fingerprint: a3:ec:75:0f:2e:88:df:fa:48:01:4e:0b:5c:48:6f:fb +# SHA1 Fingerprint: 37:f7:6d:e6:07:7c:90:c5:b1:3e:93:1a:b7:41:10:b4:f2:e4:9a:27 +# SHA256 Fingerprint: 79:08:b4:03:14:c1:38:10:0b:51:8d:07:35:80:7f:fb:fc:f8:51:8a:00:95:33:71:05:ba:38:6b:15:3d:d9:27 +-----BEGIN CERTIFICATE----- +MIIDIDCCAgigAwIBAgIBHTANBgkqhkiG9w0BAQUFADA5MQswCQYDVQQGEwJGSTEP +MA0GA1UEChMGU29uZXJhMRkwFwYDVQQDExBTb25lcmEgQ2xhc3MyIENBMB4XDTAx +MDQwNjA3Mjk0MFoXDTIxMDQwNjA3Mjk0MFowOTELMAkGA1UEBhMCRkkxDzANBgNV +BAoTBlNvbmVyYTEZMBcGA1UEAxMQU29uZXJhIENsYXNzMiBDQTCCASIwDQYJKoZI +hvcNAQEBBQADggEPADCCAQoCggEBAJAXSjWdyvANlsdE+hY3/Ei9vX+ALTU74W+o +Z6m/AxxNjG8yR9VBaKQTBME1DJqEQ/xcHf+Js+gXGM2RX/uJ4+q/Tl18GybTdXnt +5oTjV+WtKcT0OijnpXuENmmz/V52vaMtmdOQTiMofRhj8VQ7Jp12W5dCsv+u8E7s +3TmVToMGf+dJQMjFAbJUWmYdPfz56TwKnoG4cPABi+QjVHzIrviQHgCWctRUz2Ej +vOr7nQKV0ba5cTppCD8PtOFCx4j1P5iop7oc4HFx71hXgVB6XGt0Rg6DA5jDjqhu +8nYybieDwnPz3BjotJPqdURrBGAgcVeHnfO+oJAjPYok4doh28MCAwEAAaMzMDEw +DwYDVR0TAQH/BAUwAwEB/zARBgNVHQ4ECgQISqCqWITTXjwwCwYDVR0PBAQDAgEG +MA0GCSqGSIb3DQEBBQUAA4IBAQBazof5FnIVV0sd2ZvnoiYw7JNn39Yt0jSv9zil +zqsWuasvfDXLrNAPtEwr/IDva4yRXzZ299uzGxnq9LIR/WFxRL8oszodv7ND6J+/ +3DEIcbCdjdY0RzKQxmUk96BKfARzjzlvF4xytb1LyHr4e4PDKE6cCepnP7JnBBvD +FNr450kkkdAdavphOe9r5yF1BgfYErQhIHBCcYHaPJo2vqZbDWpsmh+Re/n570K6 +Tk6ezAyNlNzZRZxe7EJQY670XcSxEtzKO6gunRRaBXW37Ndj4ro1tgQIkejanZz2 +ZrUYrAqmVCY0M9IbwdR/GjqOC6oybtv8TyWf2TLHllpwrN9M +-----END CERTIFICATE----- + +# Issuer: CN=XRamp Global Certification Authority O=XRamp Security Services Inc OU=www.xrampsecurity.com +# Subject: CN=XRamp Global Certification Authority O=XRamp Security Services Inc OU=www.xrampsecurity.com +# Label: "XRamp Global CA Root" +# Serial: 107108908803651509692980124233745014957 +# MD5 Fingerprint: a1:0b:44:b3:ca:10:d8:00:6e:9d:0f:d8:0f:92:0a:d1 +# SHA1 Fingerprint: b8:01:86:d1:eb:9c:86:a5:41:04:cf:30:54:f3:4c:52:b7:e5:58:c6 +# SHA256 Fingerprint: ce:cd:dc:90:50:99:d8:da:df:c5:b1:d2:09:b7:37:cb:e2:c1:8c:fb:2c:10:c0:ff:0b:cf:0d:32:86:fc:1a:a2 +-----BEGIN CERTIFICATE----- +MIIEMDCCAxigAwIBAgIQUJRs7Bjq1ZxN1ZfvdY+grTANBgkqhkiG9w0BAQUFADCB +gjELMAkGA1UEBhMCVVMxHjAcBgNVBAsTFXd3dy54cmFtcHNlY3VyaXR5LmNvbTEk +MCIGA1UEChMbWFJhbXAgU2VjdXJpdHkgU2VydmljZXMgSW5jMS0wKwYDVQQDEyRY +UmFtcCBHbG9iYWwgQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkwHhcNMDQxMTAxMTcx +NDA0WhcNMzUwMTAxMDUzNzE5WjCBgjELMAkGA1UEBhMCVVMxHjAcBgNVBAsTFXd3 +dy54cmFtcHNlY3VyaXR5LmNvbTEkMCIGA1UEChMbWFJhbXAgU2VjdXJpdHkgU2Vy +dmljZXMgSW5jMS0wKwYDVQQDEyRYUmFtcCBHbG9iYWwgQ2VydGlmaWNhdGlvbiBB +dXRob3JpdHkwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCYJB69FbS6 +38eMpSe2OAtp87ZOqCwuIR1cRN8hXX4jdP5efrRKt6atH67gBhbim1vZZ3RrXYCP +KZ2GG9mcDZhtdhAoWORlsH9KmHmf4MMxfoArtYzAQDsRhtDLooY2YKTVMIJt2W7Q +DxIEM5dfT2Fa8OT5kavnHTu86M/0ay00fOJIYRyO82FEzG+gSqmUsE3a56k0enI4 +qEHMPJQRfevIpoy3hsvKMzvZPTeL+3o+hiznc9cKV6xkmxnr9A8ECIqsAxcZZPRa +JSKNNCyy9mgdEm3Tih4U2sSPpuIjhdV6Db1q4Ons7Be7QhtnqiXtRYMh/MHJfNVi +PvryxS3T/dRlAgMBAAGjgZ8wgZwwEwYJKwYBBAGCNxQCBAYeBABDAEEwCwYDVR0P +BAQDAgGGMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFMZPoj0GY4QJnM5i5ASs 
+jVy16bYbMDYGA1UdHwQvMC0wK6ApoCeGJWh0dHA6Ly9jcmwueHJhbXBzZWN1cml0 +eS5jb20vWEdDQS5jcmwwEAYJKwYBBAGCNxUBBAMCAQEwDQYJKoZIhvcNAQEFBQAD +ggEBAJEVOQMBG2f7Shz5CmBbodpNl2L5JFMn14JkTpAuw0kbK5rc/Kh4ZzXxHfAR +vbdI4xD2Dd8/0sm2qlWkSLoC295ZLhVbO50WfUfXN+pfTXYSNrsf16GBBEYgoyxt +qZ4Bfj8pzgCT3/3JknOJiWSe5yvkHJEs0rnOfc5vMZnT5r7SHpDwCRR5XCOrTdLa +IR9NmXmd4c8nnxCbHIgNsIpkQTG4DmyQJKSbXHGPurt+HBvbaoAPIbzp26a3QPSy +i6mx5O+aGtA9aZnuqCij4Tyz8LIRnM98QObd50N9otg6tamN8jSZxNQQ4Qb9CYQQ +O+7ETPTsJ3xCwnR8gooJybQDJbw= +-----END CERTIFICATE----- + +# Issuer: O=The Go Daddy Group, Inc. OU=Go Daddy Class 2 Certification Authority +# Subject: O=The Go Daddy Group, Inc. OU=Go Daddy Class 2 Certification Authority +# Label: "Go Daddy Class 2 CA" +# Serial: 0 +# MD5 Fingerprint: 91:de:06:25:ab:da:fd:32:17:0c:bb:25:17:2a:84:67 +# SHA1 Fingerprint: 27:96:ba:e6:3f:18:01:e2:77:26:1b:a0:d7:77:70:02:8f:20:ee:e4 +# SHA256 Fingerprint: c3:84:6b:f2:4b:9e:93:ca:64:27:4c:0e:c6:7c:1e:cc:5e:02:4f:fc:ac:d2:d7:40:19:35:0e:81:fe:54:6a:e4 +-----BEGIN CERTIFICATE----- +MIIEADCCAuigAwIBAgIBADANBgkqhkiG9w0BAQUFADBjMQswCQYDVQQGEwJVUzEh +MB8GA1UEChMYVGhlIEdvIERhZGR5IEdyb3VwLCBJbmMuMTEwLwYDVQQLEyhHbyBE +YWRkeSBDbGFzcyAyIENlcnRpZmljYXRpb24gQXV0aG9yaXR5MB4XDTA0MDYyOTE3 +MDYyMFoXDTM0MDYyOTE3MDYyMFowYzELMAkGA1UEBhMCVVMxITAfBgNVBAoTGFRo +ZSBHbyBEYWRkeSBHcm91cCwgSW5jLjExMC8GA1UECxMoR28gRGFkZHkgQ2xhc3Mg +MiBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eTCCASAwDQYJKoZIhvcNAQEBBQADggEN +ADCCAQgCggEBAN6d1+pXGEmhW+vXX0iG6r7d/+TvZxz0ZWizV3GgXne77ZtJ6XCA +PVYYYwhv2vLM0D9/AlQiVBDYsoHUwHU9S3/Hd8M+eKsaA7Ugay9qK7HFiH7Eux6w +wdhFJ2+qN1j3hybX2C32qRe3H3I2TqYXP2WYktsqbl2i/ojgC95/5Y0V4evLOtXi +EqITLdiOr18SPaAIBQi2XKVlOARFmR6jYGB0xUGlcmIbYsUfb18aQr4CUWWoriMY +avx4A6lNf4DD+qta/KFApMoZFv6yyO9ecw3ud72a9nmYvLEHZ6IVDd2gWMZEewo+ +YihfukEHU1jPEX44dMX4/7VpkI+EdOqXG68CAQOjgcAwgb0wHQYDVR0OBBYEFNLE +sNKR1EwRcbNhyz2h/t2oatTjMIGNBgNVHSMEgYUwgYKAFNLEsNKR1EwRcbNhyz2h +/t2oatTjoWekZTBjMQswCQYDVQQGEwJVUzEhMB8GA1UEChMYVGhlIEdvIERhZGR5 +IEdyb3VwLCBJbmMuMTEwLwYDVQQLEyhHbyBEYWRkeSBDbGFzcyAyIENlcnRpZmlj +YXRpb24gQXV0aG9yaXR5ggEAMAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQEFBQAD +ggEBADJL87LKPpH8EsahB4yOd6AzBhRckB4Y9wimPQoZ+YeAEW5p5JYXMP80kWNy +OO7MHAGjHZQopDH2esRU1/blMVgDoszOYtuURXO1v0XJJLXVggKtI3lpjbi2Tc7P +TMozI+gciKqdi0FuFskg5YmezTvacPd+mSYgFFQlq25zheabIZ0KbIIOqPjCDPoQ +HmyW74cNxA9hi63ugyuV+I6ShHI56yDqg+2DzZduCLzrTia2cyvk0/ZM/iZx4mER +dEr/VxqHD3VILs9RaRegAhJhldXRQLIQTO7ErBBDpqWeCtWVYpoNz4iCxTIM5Cuf +ReYNnyicsbkqWletNw+vHX/bvZ8= +-----END CERTIFICATE----- + +# Issuer: O=Starfield Technologies, Inc. OU=Starfield Class 2 Certification Authority +# Subject: O=Starfield Technologies, Inc. 
OU=Starfield Class 2 Certification Authority +# Label: "Starfield Class 2 CA" +# Serial: 0 +# MD5 Fingerprint: 32:4a:4b:bb:c8:63:69:9b:be:74:9a:c6:dd:1d:46:24 +# SHA1 Fingerprint: ad:7e:1c:28:b0:64:ef:8f:60:03:40:20:14:c3:d0:e3:37:0e:b5:8a +# SHA256 Fingerprint: 14:65:fa:20:53:97:b8:76:fa:a6:f0:a9:95:8e:55:90:e4:0f:cc:7f:aa:4f:b7:c2:c8:67:75:21:fb:5f:b6:58 +-----BEGIN CERTIFICATE----- +MIIEDzCCAvegAwIBAgIBADANBgkqhkiG9w0BAQUFADBoMQswCQYDVQQGEwJVUzEl +MCMGA1UEChMcU3RhcmZpZWxkIFRlY2hub2xvZ2llcywgSW5jLjEyMDAGA1UECxMp +U3RhcmZpZWxkIENsYXNzIDIgQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkwHhcNMDQw +NjI5MTczOTE2WhcNMzQwNjI5MTczOTE2WjBoMQswCQYDVQQGEwJVUzElMCMGA1UE +ChMcU3RhcmZpZWxkIFRlY2hub2xvZ2llcywgSW5jLjEyMDAGA1UECxMpU3RhcmZp +ZWxkIENsYXNzIDIgQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkwggEgMA0GCSqGSIb3 +DQEBAQUAA4IBDQAwggEIAoIBAQC3Msj+6XGmBIWtDBFk385N78gDGIc/oav7PKaf +8MOh2tTYbitTkPskpD6E8J7oX+zlJ0T1KKY/e97gKvDIr1MvnsoFAZMej2YcOadN ++lq2cwQlZut3f+dZxkqZJRRU6ybH838Z1TBwj6+wRir/resp7defqgSHo9T5iaU0 +X9tDkYI22WY8sbi5gv2cOj4QyDvvBmVmepsZGD3/cVE8MC5fvj13c7JdBmzDI1aa +K4UmkhynArPkPw2vCHmCuDY96pzTNbO8acr1zJ3o/WSNF4Azbl5KXZnJHoe0nRrA +1W4TNSNe35tfPe/W93bC6j67eA0cQmdrBNj41tpvi/JEoAGrAgEDo4HFMIHCMB0G +A1UdDgQWBBS/X7fRzt0fhvRbVazc1xDCDqmI5zCBkgYDVR0jBIGKMIGHgBS/X7fR +zt0fhvRbVazc1xDCDqmI56FspGowaDELMAkGA1UEBhMCVVMxJTAjBgNVBAoTHFN0 +YXJmaWVsZCBUZWNobm9sb2dpZXMsIEluYy4xMjAwBgNVBAsTKVN0YXJmaWVsZCBD +bGFzcyAyIENlcnRpZmljYXRpb24gQXV0aG9yaXR5ggEAMAwGA1UdEwQFMAMBAf8w +DQYJKoZIhvcNAQEFBQADggEBAAWdP4id0ckaVaGsafPzWdqbAYcaT1epoXkJKtv3 +L7IezMdeatiDh6GX70k1PncGQVhiv45YuApnP+yz3SFmH8lU+nLMPUxA2IGvd56D +eruix/U0F47ZEUD0/CwqTRV/p2JdLiXTAAsgGh1o+Re49L2L7ShZ3U0WixeDyLJl +xy16paq8U4Zt3VekyvggQQto8PT7dL5WXXp59fkdheMtlb71cZBDzI0fmgAKhynp +VSJYACPq4xJDKVtHCN2MQWplBqjlIapBtJUhlbl90TSrE9atvNziPTnNvT51cKEY +WQPJIrSPnNVeKtelttQKbfi3QBFGmh95DmK/D5fs4C8fF5Q= +-----END CERTIFICATE----- + +# Issuer: O=Government Root Certification Authority +# Subject: O=Government Root Certification Authority +# Label: "Taiwan GRCA" +# Serial: 42023070807708724159991140556527066870 +# MD5 Fingerprint: 37:85:44:53:32:45:1f:20:f0:f3:95:e1:25:c4:43:4e +# SHA1 Fingerprint: f4:8b:11:bf:de:ab:be:94:54:20:71:e6:41:de:6b:be:88:2b:40:b9 +# SHA256 Fingerprint: 76:00:29:5e:ef:e8:5b:9e:1f:d6:24:db:76:06:2a:aa:ae:59:81:8a:54:d2:77:4c:d4:c0:b2:c0:11:31:e1:b3 +-----BEGIN CERTIFICATE----- +MIIFcjCCA1qgAwIBAgIQH51ZWtcvwgZEpYAIaeNe9jANBgkqhkiG9w0BAQUFADA/ +MQswCQYDVQQGEwJUVzEwMC4GA1UECgwnR292ZXJubWVudCBSb290IENlcnRpZmlj +YXRpb24gQXV0aG9yaXR5MB4XDTAyMTIwNTEzMjMzM1oXDTMyMTIwNTEzMjMzM1ow +PzELMAkGA1UEBhMCVFcxMDAuBgNVBAoMJ0dvdmVybm1lbnQgUm9vdCBDZXJ0aWZp +Y2F0aW9uIEF1dGhvcml0eTCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIB +AJoluOzMonWoe/fOW1mKydGGEghU7Jzy50b2iPN86aXfTEc2pBsBHH8eV4qNw8XR +IePaJD9IK/ufLqGU5ywck9G/GwGHU5nOp/UKIXZ3/6m3xnOUT0b3EEk3+qhZSV1q +gQdW8or5BtD3cCJNtLdBuTK4sfCxw5w/cP1T3YGq2GN49thTbqGsaoQkclSGxtKy +yhwOeYHWtXBiCAEuTk8O1RGvqa/lmr/czIdtJuTJV6L7lvnM4T9TjGxMfptTCAts +F/tnyMKtsc2AtJfcdgEWFelq16TheEfOhtX7MfP6Mb40qij7cEwdScevLJ1tZqa2 +jWR+tSBqnTuBto9AAGdLiYa4zGX+FVPpBMHWXx1E1wovJ5pGfaENda1UhhXcSTvx +ls4Pm6Dso3pdvtUqdULle96ltqqvKKyskKw4t9VoNSZ63Pc78/1Fm9G7Q3hub/FC +VGqY8A2tl+lSXunVanLeavcbYBT0peS2cWeqH+riTcFCQP5nRhc4L0c/cZyu5SHK +YS1tB6iEfC3uUSXxY5Ce/eFXiGvviiNtsea9P63RPZYLhY3Naye7twWb7LuRqQoH +EgKXTiCQ8P8NHuJBO9NAOueNXdpm5AKwB1KYXA6OM5zCppX7VRluTI6uSw+9wThN +Xo+EHWbNxWCWtFJaBYmOlXqYwZE8lSOyDvR5tMl8wUohAgMBAAGjajBoMB0GA1Ud +DgQWBBTMzO/MKWCkO7GStjz6MmKPrCUVOzAMBgNVHRMEBTADAQH/MDkGBGcqBwAE +MTAvMC0CAQAwCQYFKw4DAhoFADAHBgVnKgMAAAQUA5vwIhP/lSg209yewDL7MTqK 
+UWUwDQYJKoZIhvcNAQEFBQADggIBAECASvomyc5eMN1PhnR2WPWus4MzeKR6dBcZ +TulStbngCnRiqmjKeKBMmo4sIy7VahIkv9Ro04rQ2JyftB8M3jh+Vzj8jeJPXgyf +qzvS/3WXy6TjZwj/5cAWtUgBfen5Cv8b5Wppv3ghqMKnI6mGq3ZW6A4M9hPdKmaK +ZEk9GhiHkASfQlK3T8v+R0F2Ne//AHY2RTKbxkaFXeIksB7jSJaYV0eUVXoPQbFE +JPPB/hprv4j9wabak2BegUqZIJxIZhm1AHlUD7gsL0u8qV1bYH+Mh6XgUmMqvtg7 +hUAV/h62ZT/FS9p+tXo1KaMuephgIqP0fSdOLeq0dDzpD6QzDxARvBMB1uUO07+1 +EqLhRSPAzAhuYbeJq4PjJB7mXQfnHyA+z2fI56wwbSdLaG5LKlwCCDTb+HbkZ6Mm +nD+iMsJKxYEYMRBWqoTvLQr/uB930r+lWKBi5NdLkXWNiYCYfm3LU05er/ayl4WX +udpVBrkk7tfGOB5jGxI7leFYrPLfhNVfmS8NVVvmONsuP3LpSIXLuykTjx44Vbnz +ssQwmSNOXfJIoRIM3BKQCZBUkQM8R+XVyWXgt0t97EfTsws+rZ7QdAAO671RrcDe +LMDDav7v3Aun+kbfYNucpllQdSNpc5Oy+fwC00fmcc4QAu4njIT/rEUNE1yDMuAl +pYYsfPQS +-----END CERTIFICATE----- + +# Issuer: CN=DigiCert Assured ID Root CA O=DigiCert Inc OU=www.digicert.com +# Subject: CN=DigiCert Assured ID Root CA O=DigiCert Inc OU=www.digicert.com +# Label: "DigiCert Assured ID Root CA" +# Serial: 17154717934120587862167794914071425081 +# MD5 Fingerprint: 87:ce:0b:7b:2a:0e:49:00:e1:58:71:9b:37:a8:93:72 +# SHA1 Fingerprint: 05:63:b8:63:0d:62:d7:5a:bb:c8:ab:1e:4b:df:b5:a8:99:b2:4d:43 +# SHA256 Fingerprint: 3e:90:99:b5:01:5e:8f:48:6c:00:bc:ea:9d:11:1e:e7:21:fa:ba:35:5a:89:bc:f1:df:69:56:1e:3d:c6:32:5c +-----BEGIN CERTIFICATE----- +MIIDtzCCAp+gAwIBAgIQDOfg5RfYRv6P5WD8G/AwOTANBgkqhkiG9w0BAQUFADBl +MQswCQYDVQQGEwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3 +d3cuZGlnaWNlcnQuY29tMSQwIgYDVQQDExtEaWdpQ2VydCBBc3N1cmVkIElEIFJv +b3QgQ0EwHhcNMDYxMTEwMDAwMDAwWhcNMzExMTEwMDAwMDAwWjBlMQswCQYDVQQG +EwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3d3cuZGlnaWNl +cnQuY29tMSQwIgYDVQQDExtEaWdpQ2VydCBBc3N1cmVkIElEIFJvb3QgQ0EwggEi +MA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCtDhXO5EOAXLGH87dg+XESpa7c +JpSIqvTO9SA5KFhgDPiA2qkVlTJhPLWxKISKityfCgyDF3qPkKyK53lTXDGEKvYP +mDI2dsze3Tyoou9q+yHyUmHfnyDXH+Kx2f4YZNISW1/5WBg1vEfNoTb5a3/UsDg+ +wRvDjDPZ2C8Y/igPs6eD1sNuRMBhNZYW/lmci3Zt1/GiSw0r/wty2p5g0I6QNcZ4 +VYcgoc/lbQrISXwxmDNsIumH0DJaoroTghHtORedmTpyoeb6pNnVFzF1roV9Iq4/ +AUaG9ih5yLHa5FcXxH4cDrC0kqZWs72yl+2qp/C3xag/lRbQ/6GW6whfGHdPAgMB +AAGjYzBhMA4GA1UdDwEB/wQEAwIBhjAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW +BBRF66Kv9JLLgjEtUYunpyGd823IDzAfBgNVHSMEGDAWgBRF66Kv9JLLgjEtUYun +pyGd823IDzANBgkqhkiG9w0BAQUFAAOCAQEAog683+Lt8ONyc3pklL/3cmbYMuRC +dWKuh+vy1dneVrOfzM4UKLkNl2BcEkxY5NM9g0lFWJc1aRqoR+pWxnmrEthngYTf +fwk8lOa4JiwgvT2zKIn3X/8i4peEH+ll74fg38FnSbNd67IJKusm7Xi+fT8r87cm +NW1fiQG2SVufAQWbqz0lwcy2f8Lxb4bG+mRo64EtlOtCt/qMHt1i8b5QZ7dsvfPx +H2sMNgcWfzd8qVttevESRmCD1ycEvkvOl77DZypoEd+A5wwzZr8TDRRu838fYxAe ++o0bJW1sj6W3YQGx0qMmoRBxna3iw/nDmVG3KwcIzi7mULKn+gpFL6Lw8g== +-----END CERTIFICATE----- + +# Issuer: CN=DigiCert Global Root CA O=DigiCert Inc OU=www.digicert.com +# Subject: CN=DigiCert Global Root CA O=DigiCert Inc OU=www.digicert.com +# Label: "DigiCert Global Root CA" +# Serial: 10944719598952040374951832963794454346 +# MD5 Fingerprint: 79:e4:a9:84:0d:7d:3a:96:d7:c0:4f:e2:43:4c:89:2e +# SHA1 Fingerprint: a8:98:5d:3a:65:e5:e5:c4:b2:d7:d6:6d:40:c6:dd:2f:b1:9c:54:36 +# SHA256 Fingerprint: 43:48:a0:e9:44:4c:78:cb:26:5e:05:8d:5e:89:44:b4:d8:4f:96:62:bd:26:db:25:7f:89:34:a4:43:c7:01:61 +-----BEGIN CERTIFICATE----- +MIIDrzCCApegAwIBAgIQCDvgVpBCRrGhdWrJWZHHSjANBgkqhkiG9w0BAQUFADBh +MQswCQYDVQQGEwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3 +d3cuZGlnaWNlcnQuY29tMSAwHgYDVQQDExdEaWdpQ2VydCBHbG9iYWwgUm9vdCBD +QTAeFw0wNjExMTAwMDAwMDBaFw0zMTExMTAwMDAwMDBaMGExCzAJBgNVBAYTAlVT +MRUwEwYDVQQKEwxEaWdpQ2VydCBJbmMxGTAXBgNVBAsTEHd3dy5kaWdpY2VydC5j 
+b20xIDAeBgNVBAMTF0RpZ2lDZXJ0IEdsb2JhbCBSb290IENBMIIBIjANBgkqhkiG +9w0BAQEFAAOCAQ8AMIIBCgKCAQEA4jvhEXLeqKTTo1eqUKKPC3eQyaKl7hLOllsB +CSDMAZOnTjC3U/dDxGkAV53ijSLdhwZAAIEJzs4bg7/fzTtxRuLWZscFs3YnFo97 +nh6Vfe63SKMI2tavegw5BmV/Sl0fvBf4q77uKNd0f3p4mVmFaG5cIzJLv07A6Fpt +43C/dxC//AH2hdmoRBBYMql1GNXRor5H4idq9Joz+EkIYIvUX7Q6hL+hqkpMfT7P +T19sdl6gSzeRntwi5m3OFBqOasv+zbMUZBfHWymeMr/y7vrTC0LUq7dBMtoM1O/4 +gdW7jVg/tRvoSSiicNoxBN33shbyTApOB6jtSj1etX+jkMOvJwIDAQABo2MwYTAO +BgNVHQ8BAf8EBAMCAYYwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQUA95QNVbR +TLtm8KPiGxvDl7I90VUwHwYDVR0jBBgwFoAUA95QNVbRTLtm8KPiGxvDl7I90VUw +DQYJKoZIhvcNAQEFBQADggEBAMucN6pIExIK+t1EnE9SsPTfrgT1eXkIoyQY/Esr +hMAtudXH/vTBH1jLuG2cenTnmCmrEbXjcKChzUyImZOMkXDiqw8cvpOp/2PV5Adg +06O/nVsJ8dWO41P0jmP6P6fbtGbfYmbW0W5BjfIttep3Sp+dWOIrWcBAI+0tKIJF +PnlUkiaY4IBIqDfv8NZ5YBberOgOzW6sRBc4L0na4UU+Krk2U886UAb3LujEV0ls +YSEY1QSteDwsOoBrp+uvFRTp2InBuThs4pFsiv9kuXclVzDAGySj4dzp30d8tbQk +CAUw7C29C79Fv1C5qfPrmAESrciIxpg0X40KPMbp1ZWVbd4= +-----END CERTIFICATE----- + +# Issuer: CN=DigiCert High Assurance EV Root CA O=DigiCert Inc OU=www.digicert.com +# Subject: CN=DigiCert High Assurance EV Root CA O=DigiCert Inc OU=www.digicert.com +# Label: "DigiCert High Assurance EV Root CA" +# Serial: 3553400076410547919724730734378100087 +# MD5 Fingerprint: d4:74:de:57:5c:39:b2:d3:9c:85:83:c5:c0:65:49:8a +# SHA1 Fingerprint: 5f:b7:ee:06:33:e2:59:db:ad:0c:4c:9a:e6:d3:8f:1a:61:c7:dc:25 +# SHA256 Fingerprint: 74:31:e5:f4:c3:c1:ce:46:90:77:4f:0b:61:e0:54:40:88:3b:a9:a0:1e:d0:0b:a6:ab:d7:80:6e:d3:b1:18:cf +-----BEGIN CERTIFICATE----- +MIIDxTCCAq2gAwIBAgIQAqxcJmoLQJuPC3nyrkYldzANBgkqhkiG9w0BAQUFADBs +MQswCQYDVQQGEwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3 +d3cuZGlnaWNlcnQuY29tMSswKQYDVQQDEyJEaWdpQ2VydCBIaWdoIEFzc3VyYW5j +ZSBFViBSb290IENBMB4XDTA2MTExMDAwMDAwMFoXDTMxMTExMDAwMDAwMFowbDEL +MAkGA1UEBhMCVVMxFTATBgNVBAoTDERpZ2lDZXJ0IEluYzEZMBcGA1UECxMQd3d3 +LmRpZ2ljZXJ0LmNvbTErMCkGA1UEAxMiRGlnaUNlcnQgSGlnaCBBc3N1cmFuY2Ug +RVYgUm9vdCBDQTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMbM5XPm ++9S75S0tMqbf5YE/yc0lSbZxKsPVlDRnogocsF9ppkCxxLeyj9CYpKlBWTrT3JTW +PNt0OKRKzE0lgvdKpVMSOO7zSW1xkX5jtqumX8OkhPhPYlG++MXs2ziS4wblCJEM +xChBVfvLWokVfnHoNb9Ncgk9vjo4UFt3MRuNs8ckRZqnrG0AFFoEt7oT61EKmEFB +Ik5lYYeBQVCmeVyJ3hlKV9Uu5l0cUyx+mM0aBhakaHPQNAQTXKFx01p8VdteZOE3 +hzBWBOURtCmAEvF5OYiiAhF8J2a3iLd48soKqDirCmTCv2ZdlYTBoSUeh10aUAsg +EsxBu24LUTi4S8sCAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgGGMA8GA1UdEwEB/wQF +MAMBAf8wHQYDVR0OBBYEFLE+w2kD+L9HAdSYJhoIAu9jZCvDMB8GA1UdIwQYMBaA +FLE+w2kD+L9HAdSYJhoIAu9jZCvDMA0GCSqGSIb3DQEBBQUAA4IBAQAcGgaX3Nec +nzyIZgYIVyHbIUf4KmeqvxgydkAQV8GK83rZEWWONfqe/EW1ntlMMUu4kehDLI6z +eM7b41N5cdblIZQB2lWHmiRk9opmzN6cN82oNLFpmyPInngiK3BD41VHMWEZ71jF +hS9OMPagMRYjyOfiZRYzy78aG6A9+MpeizGLYAiJLQwGXFK3xPkKmNEVX58Svnw2 +Yzi9RKR/5CYrCsSXaQ3pjOLAEFe4yHYSkVXySGnYvCoCWw9E1CAx2/S6cCZdkGCe +vEsXCS+0yx5DaMkHJ8HSXPfqIbloEpw8nL+e/IBcm2PN7EeqJSdnoDfzAIJ9VNep ++OkuE6N36B9K +-----END CERTIFICATE----- + +# Issuer: CN=DST Root CA X3 O=Digital Signature Trust Co. +# Subject: CN=DST Root CA X3 O=Digital Signature Trust Co. 
+# Label: "DST Root CA X3" +# Serial: 91299735575339953335919266965803778155 +# MD5 Fingerprint: 41:03:52:dc:0f:f7:50:1b:16:f0:02:8e:ba:6f:45:c5 +# SHA1 Fingerprint: da:c9:02:4f:54:d8:f6:df:94:93:5f:b1:73:26:38:ca:6a:d7:7c:13 +# SHA256 Fingerprint: 06:87:26:03:31:a7:24:03:d9:09:f1:05:e6:9b:cf:0d:32:e1:bd:24:93:ff:c6:d9:20:6d:11:bc:d6:77:07:39 +-----BEGIN CERTIFICATE----- +MIIDSjCCAjKgAwIBAgIQRK+wgNajJ7qJMDmGLvhAazANBgkqhkiG9w0BAQUFADA/ +MSQwIgYDVQQKExtEaWdpdGFsIFNpZ25hdHVyZSBUcnVzdCBDby4xFzAVBgNVBAMT +DkRTVCBSb290IENBIFgzMB4XDTAwMDkzMDIxMTIxOVoXDTIxMDkzMDE0MDExNVow +PzEkMCIGA1UEChMbRGlnaXRhbCBTaWduYXR1cmUgVHJ1c3QgQ28uMRcwFQYDVQQD +Ew5EU1QgUm9vdCBDQSBYMzCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEB +AN+v6ZdQCINXtMxiZfaQguzH0yxrMMpb7NnDfcdAwRgUi+DoM3ZJKuM/IUmTrE4O +rz5Iy2Xu/NMhD2XSKtkyj4zl93ewEnu1lcCJo6m67XMuegwGMoOifooUMM0RoOEq +OLl5CjH9UL2AZd+3UWODyOKIYepLYYHsUmu5ouJLGiifSKOeDNoJjj4XLh7dIN9b +xiqKqy69cK3FCxolkHRyxXtqqzTWMIn/5WgTe1QLyNau7Fqckh49ZLOMxt+/yUFw +7BZy1SbsOFU5Q9D8/RhcQPGX69Wam40dutolucbY38EVAjqr2m7xPi71XAicPNaD +aeQQmxkqtilX4+U9m5/wAl0CAwEAAaNCMEAwDwYDVR0TAQH/BAUwAwEB/zAOBgNV +HQ8BAf8EBAMCAQYwHQYDVR0OBBYEFMSnsaR7LHH62+FLkHX/xBVghYkQMA0GCSqG +SIb3DQEBBQUAA4IBAQCjGiybFwBcqR7uKGY3Or+Dxz9LwwmglSBd49lZRNI+DT69 +ikugdB/OEIKcdBodfpga3csTS7MgROSR6cz8faXbauX+5v3gTt23ADq1cEmv8uXr +AvHRAosZy5Q6XkjEGB5YGV8eAlrwDPGxrancWYaLbumR9YbK+rlmM6pZW87ipxZz +R8srzJmwN0jP41ZL9c8PDHIyh8bwRLtTcm1D9SZImlJnt1ir/md2cXjbDaJWFBM5 +JDGFoqgCWjBH4d1QB7wCCZAA62RjYJsWvIjJEubSfZGL+T0yjWW06XyxV3bqxbYo +Ob8VZRzI9neWagqNdwvYkQsEjgfbKbYK7p2CNTUQ +-----END CERTIFICATE----- + +# Issuer: CN=SwissSign Gold CA - G2 O=SwissSign AG +# Subject: CN=SwissSign Gold CA - G2 O=SwissSign AG +# Label: "SwissSign Gold CA - G2" +# Serial: 13492815561806991280 +# MD5 Fingerprint: 24:77:d9:a8:91:d1:3b:fa:88:2d:c2:ff:f8:cd:33:93 +# SHA1 Fingerprint: d8:c5:38:8a:b7:30:1b:1b:6e:d4:7a:e6:45:25:3a:6f:9f:1a:27:61 +# SHA256 Fingerprint: 62:dd:0b:e9:b9:f5:0a:16:3e:a0:f8:e7:5c:05:3b:1e:ca:57:ea:55:c8:68:8f:64:7c:68:81:f2:c8:35:7b:95 +-----BEGIN CERTIFICATE----- +MIIFujCCA6KgAwIBAgIJALtAHEP1Xk+wMA0GCSqGSIb3DQEBBQUAMEUxCzAJBgNV +BAYTAkNIMRUwEwYDVQQKEwxTd2lzc1NpZ24gQUcxHzAdBgNVBAMTFlN3aXNzU2ln +biBHb2xkIENBIC0gRzIwHhcNMDYxMDI1MDgzMDM1WhcNMzYxMDI1MDgzMDM1WjBF +MQswCQYDVQQGEwJDSDEVMBMGA1UEChMMU3dpc3NTaWduIEFHMR8wHQYDVQQDExZT +d2lzc1NpZ24gR29sZCBDQSAtIEcyMIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIIC +CgKCAgEAr+TufoskDhJuqVAtFkQ7kpJcyrhdhJJCEyq8ZVeCQD5XJM1QiyUqt2/8 +76LQwB8CJEoTlo8jE+YoWACjR8cGp4QjK7u9lit/VcyLwVcfDmJlD909Vopz2q5+ +bbqBHH5CjCA12UNNhPqE21Is8w4ndwtrvxEvcnifLtg+5hg3Wipy+dpikJKVyh+c +6bM8K8vzARO/Ws/BtQpgvd21mWRTuKCWs2/iJneRjOBiEAKfNA+k1ZIzUd6+jbqE +emA8atufK+ze3gE/bk3lUIbLtK/tREDFylqM2tIrfKjuvqblCqoOpd8FUrdVxyJd +MmqXl2MT28nbeTZ7hTpKxVKJ+STnnXepgv9VHKVxaSvRAiTysybUa9oEVeXBCsdt +MDeQKuSeFDNeFhdVxVu1yzSJkvGdJo+hB9TGsnhQ2wwMC3wLjEHXuendjIj3o02y +MszYF9rNt85mndT9Xv+9lz4pded+p2JYryU0pUHHPbwNUMoDAw8IWh+Vc3hiv69y +FGkOpeUDDniOJihC8AcLYiAQZzlG+qkDzAQ4embvIIO1jEpWjpEA/I5cgt6IoMPi +aG59je883WX0XaxR7ySArqpWl2/5rX3aYT+YdzylkbYcjCbaZaIJbcHiVOO5ykxM +gI93e2CaHt+28kgeDrpOVG2Y4OGiGqJ3UM/EY5LsRxmd6+ZrzsECAwEAAaOBrDCB +qTAOBgNVHQ8BAf8EBAMCAQYwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQUWyV7 +lqRlUX64OfPAeGZe6Drn8O4wHwYDVR0jBBgwFoAUWyV7lqRlUX64OfPAeGZe6Drn +8O4wRgYDVR0gBD8wPTA7BglghXQBWQECAQEwLjAsBggrBgEFBQcCARYgaHR0cDov +L3JlcG9zaXRvcnkuc3dpc3NzaWduLmNvbS8wDQYJKoZIhvcNAQEFBQADggIBACe6 +45R88a7A3hfm5djV9VSwg/S7zV4Fe0+fdWavPOhWfvxyeDgD2StiGwC5+OlgzczO +UYrHUDFu4Up+GC9pWbY9ZIEr44OE5iKHjn3g7gKZYbge9LgriBIWhMIxkziWMaa5 +O1M/wySTVltpkuzFwbs4AOPsF6m43Md8AYOfMke6UiI0HTJ6CVanfCU2qT1L2sCC 
+bwq7EsiHSycR+R4tx5M/nttfJmtS2S6K8RTGRI0Vqbe/vd6mGu6uLftIdxf+u+yv +GPUqUfA5hJeVbG4bwyvEdGB5JbAKJ9/fXtI5z0V9QkvfsywexcZdylU6oJxpmo/a +77KwPJ+HbBIrZXAVUjEaJM9vMSNQH4xPjyPDdEFjHFWoFN0+4FFQz/EbMFYOkrCC +hdiDyyJkvC24JdVUorgG6q2SpCSgwYa1ShNqR88uC1aVVMvOmttqtKay20EIhid3 +92qgQmwLOM7XdVAyksLfKzAiSNDVQTglXaTpXZ/GlHXQRf0wl0OPkKsKx4ZzYEpp +Ld6leNcG2mqeSz53OiATIgHQv2ieY2BrNU0LbbqhPcCT4H8js1WtciVORvnSFu+w +ZMEBnunKoGqYDs/YYPIvSbjkQuE4NRb0yG5P94FW6LqjviOvrv1vA+ACOzB2+htt +Qc8Bsem4yWb02ybzOqR08kkkW8mw0FfB+j564ZfJ +-----END CERTIFICATE----- + +# Issuer: CN=SwissSign Silver CA - G2 O=SwissSign AG +# Subject: CN=SwissSign Silver CA - G2 O=SwissSign AG +# Label: "SwissSign Silver CA - G2" +# Serial: 5700383053117599563 +# MD5 Fingerprint: e0:06:a1:c9:7d:cf:c9:fc:0d:c0:56:75:96:d8:62:13 +# SHA1 Fingerprint: 9b:aa:e5:9f:56:ee:21:cb:43:5a:be:25:93:df:a7:f0:40:d1:1d:cb +# SHA256 Fingerprint: be:6c:4d:a2:bb:b9:ba:59:b6:f3:93:97:68:37:42:46:c3:c0:05:99:3f:a9:8f:02:0d:1d:ed:be:d4:8a:81:d5 +-----BEGIN CERTIFICATE----- +MIIFvTCCA6WgAwIBAgIITxvUL1S7L0swDQYJKoZIhvcNAQEFBQAwRzELMAkGA1UE +BhMCQ0gxFTATBgNVBAoTDFN3aXNzU2lnbiBBRzEhMB8GA1UEAxMYU3dpc3NTaWdu +IFNpbHZlciBDQSAtIEcyMB4XDTA2MTAyNTA4MzI0NloXDTM2MTAyNTA4MzI0Nlow +RzELMAkGA1UEBhMCQ0gxFTATBgNVBAoTDFN3aXNzU2lnbiBBRzEhMB8GA1UEAxMY +U3dpc3NTaWduIFNpbHZlciBDQSAtIEcyMIICIjANBgkqhkiG9w0BAQEFAAOCAg8A +MIICCgKCAgEAxPGHf9N4Mfc4yfjDmUO8x/e8N+dOcbpLj6VzHVxumK4DV644N0Mv +Fz0fyM5oEMF4rhkDKxD6LHmD9ui5aLlV8gREpzn5/ASLHvGiTSf5YXu6t+WiE7br +YT7QbNHm+/pe7R20nqA1W6GSy/BJkv6FCgU+5tkL4k+73JU3/JHpMjUi0R86TieF +nbAVlDLaYQ1HTWBCrpJH6INaUFjpiou5XaHc3ZlKHzZnu0jkg7Y360g6rw9njxcH +6ATK72oxh9TAtvmUcXtnZLi2kUpCe2UuMGoM9ZDulebyzYLs2aFK7PayS+VFheZt +eJMELpyCbTapxDFkH4aDCyr0NQp4yVXPQbBH6TCfmb5hqAaEuSh6XzjZG6k4sIN/ +c8HDO0gqgg8hm7jMqDXDhBuDsz6+pJVpATqJAHgE2cn0mRmrVn5bi4Y5FZGkECwJ +MoBgs5PAKrYYC51+jUnyEEp/+dVGLxmSo5mnJqy7jDzmDrxHB9xzUfFwZC8I+bRH +HTBsROopN4WSaGa8gzj+ezku01DwH/teYLappvonQfGbGHLy9YR0SslnxFSuSGTf +jNFusB3hB48IHpmccelM2KX3RxIfdNFRnobzwqIjQAtz20um53MGjMGg6cFZrEb6 +5i/4z3GcRm25xBWNOHkDRUjvxF3XCO6HOSKGsg0PWEP3calILv3q1h8CAwEAAaOB +rDCBqTAOBgNVHQ8BAf8EBAMCAQYwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQU +F6DNweRBtjpbO8tFnb0cwpj6hlgwHwYDVR0jBBgwFoAUF6DNweRBtjpbO8tFnb0c +wpj6hlgwRgYDVR0gBD8wPTA7BglghXQBWQEDAQEwLjAsBggrBgEFBQcCARYgaHR0 +cDovL3JlcG9zaXRvcnkuc3dpc3NzaWduLmNvbS8wDQYJKoZIhvcNAQEFBQADggIB +AHPGgeAn0i0P4JUw4ppBf1AsX19iYamGamkYDHRJ1l2E6kFSGG9YrVBWIGrGvShp +WJHckRE1qTodvBqlYJ7YH39FkWnZfrt4csEGDyrOj4VwYaygzQu4OSlWhDJOhrs9 +xCrZ1x9y7v5RoSJBsXECYxqCsGKrXlcSH9/L3XWgwF15kIwb4FDm3jH+mHtwX6WQ +2K34ArZv02DdQEsixT2tOnqfGhpHkXkzuoLcMmkDlm4fS/Bx/uNncqCxv1yL5PqZ +IseEuRuNI5c/7SXgz2W79WEE790eslpBIlqhn10s6FvJbakMDHiqYMZWjwFaDGi8 +aRl5xB9+lwW/xekkUV7U1UtT7dkjWjYDZaPBA61BMPNGG4WQr2W11bHkFlt4dR2X +em1ZqSqPe97Dh4kQmUlzeMg9vVE1dCrV8X5pGyq7O70luJpaPXJhkGaH7gzWTdQR +dAtq/gsD/KNVV4n+SsuuWxcFyPKNIzFTONItaj+CuY0IavdeQXRuwxF+B6wpYJE/ +OMpXEA29MC/HpeZBoNquBYeaoKRlbEwJDIm6uNO5wJOKMPqN5ZprFQFOZ6raYlY+ +hAhm0sQ2fac+EPyI4NSA5QC9qvNOBqN6avlicuMJT+ubDgEj8Z+7fNzcbBGXJbLy +tGMU0gYqZ4yD9c7qB9iaah7s5Aq7KkzrCWA5zspi2C5u +-----END CERTIFICATE----- + +# Issuer: CN=GeoTrust Primary Certification Authority O=GeoTrust Inc. +# Subject: CN=GeoTrust Primary Certification Authority O=GeoTrust Inc. 
+# Label: "GeoTrust Primary Certification Authority" +# Serial: 32798226551256963324313806436981982369 +# MD5 Fingerprint: 02:26:c3:01:5e:08:30:37:43:a9:d0:7d:cf:37:e6:bf +# SHA1 Fingerprint: 32:3c:11:8e:1b:f7:b8:b6:52:54:e2:e2:10:0d:d6:02:90:37:f0:96 +# SHA256 Fingerprint: 37:d5:10:06:c5:12:ea:ab:62:64:21:f1:ec:8c:92:01:3f:c5:f8:2a:e9:8e:e5:33:eb:46:19:b8:de:b4:d0:6c +-----BEGIN CERTIFICATE----- +MIIDfDCCAmSgAwIBAgIQGKy1av1pthU6Y2yv2vrEoTANBgkqhkiG9w0BAQUFADBY +MQswCQYDVQQGEwJVUzEWMBQGA1UEChMNR2VvVHJ1c3QgSW5jLjExMC8GA1UEAxMo +R2VvVHJ1c3QgUHJpbWFyeSBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eTAeFw0wNjEx +MjcwMDAwMDBaFw0zNjA3MTYyMzU5NTlaMFgxCzAJBgNVBAYTAlVTMRYwFAYDVQQK +Ew1HZW9UcnVzdCBJbmMuMTEwLwYDVQQDEyhHZW9UcnVzdCBQcmltYXJ5IENlcnRp +ZmljYXRpb24gQXV0aG9yaXR5MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKC +AQEAvrgVe//UfH1nrYNke8hCUy3f9oQIIGHWAVlqnEQRr+92/ZV+zmEwu3qDXwK9 +AWbK7hWNb6EwnL2hhZ6UOvNWiAAxz9juapYC2e0DjPt1befquFUWBRaa9OBesYjA +ZIVcFU2Ix7e64HXprQU9nceJSOC7KMgD4TCTZF5SwFlwIjVXiIrxlQqD17wxcwE0 +7e9GceBrAqg1cmuXm2bgyxx5X9gaBGgeRwLmnWDiNpcB3841kt++Z8dtd1k7j53W +kBWUvEI0EME5+bEnPn7WinXFsq+W06Lem+SYvn3h6YGttm/81w7a4DSwDRp35+MI +mO9Y+pyEtzavwt+s0vQQBnBxNQIDAQABo0IwQDAPBgNVHRMBAf8EBTADAQH/MA4G +A1UdDwEB/wQEAwIBBjAdBgNVHQ4EFgQULNVQQZcVi/CPNmFbSvtr2ZnJM5IwDQYJ +KoZIhvcNAQEFBQADggEBAFpwfyzdtzRP9YZRqSa+S7iq8XEN3GHHoOo0Hnp3DwQ1 +6CePbJC/kRYkRj5KTs4rFtULUh38H2eiAkUxT87z+gOneZ1TatnaYzr4gNfTmeGl +4b7UVXGYNTq+k+qurUKykG/g/CFNNWMziUnWm07Kx+dOCQD32sfvmWKZd7aVIl6K +oKv0uHiYyjgZmclynnjNS6yvGaBzEi38wkG6gZHaFloxt/m0cYASSJlyc1pZU8Fj +UjPtp8nSOQJw+uCxQmYpqptR7TBUIhRf2asdweSU8Pj1K/fqynhG1riR/aYNKxoU +AT6A8EKglQdebc3MS6RFjasS6LPeWuWgfOgPIh1a6Vk= +-----END CERTIFICATE----- + +# Issuer: CN=thawte Primary Root CA O=thawte, Inc. OU=Certification Services Division/(c) 2006 thawte, Inc. - For authorized use only +# Subject: CN=thawte Primary Root CA O=thawte, Inc. OU=Certification Services Division/(c) 2006 thawte, Inc. 
- For authorized use only +# Label: "thawte Primary Root CA" +# Serial: 69529181992039203566298953787712940909 +# MD5 Fingerprint: 8c:ca:dc:0b:22:ce:f5:be:72:ac:41:1a:11:a8:d8:12 +# SHA1 Fingerprint: 91:c6:d6:ee:3e:8a:c8:63:84:e5:48:c2:99:29:5c:75:6c:81:7b:81 +# SHA256 Fingerprint: 8d:72:2f:81:a9:c1:13:c0:79:1d:f1:36:a2:96:6d:b2:6c:95:0a:97:1d:b4:6b:41:99:f4:ea:54:b7:8b:fb:9f +-----BEGIN CERTIFICATE----- +MIIEIDCCAwigAwIBAgIQNE7VVyDV7exJ9C/ON9srbTANBgkqhkiG9w0BAQUFADCB +qTELMAkGA1UEBhMCVVMxFTATBgNVBAoTDHRoYXd0ZSwgSW5jLjEoMCYGA1UECxMf +Q2VydGlmaWNhdGlvbiBTZXJ2aWNlcyBEaXZpc2lvbjE4MDYGA1UECxMvKGMpIDIw +MDYgdGhhd3RlLCBJbmMuIC0gRm9yIGF1dGhvcml6ZWQgdXNlIG9ubHkxHzAdBgNV +BAMTFnRoYXd0ZSBQcmltYXJ5IFJvb3QgQ0EwHhcNMDYxMTE3MDAwMDAwWhcNMzYw +NzE2MjM1OTU5WjCBqTELMAkGA1UEBhMCVVMxFTATBgNVBAoTDHRoYXd0ZSwgSW5j +LjEoMCYGA1UECxMfQ2VydGlmaWNhdGlvbiBTZXJ2aWNlcyBEaXZpc2lvbjE4MDYG +A1UECxMvKGMpIDIwMDYgdGhhd3RlLCBJbmMuIC0gRm9yIGF1dGhvcml6ZWQgdXNl +IG9ubHkxHzAdBgNVBAMTFnRoYXd0ZSBQcmltYXJ5IFJvb3QgQ0EwggEiMA0GCSqG +SIb3DQEBAQUAA4IBDwAwggEKAoIBAQCsoPD7gFnUnMekz52hWXMJEEUMDSxuaPFs +W0hoSVk3/AszGcJ3f8wQLZU0HObrTQmnHNK4yZc2AreJ1CRfBsDMRJSUjQJib+ta +3RGNKJpchJAQeg29dGYvajig4tVUROsdB58Hum/u6f1OCyn1PoSgAfGcq/gcfomk +6KHYcWUNo1F77rzSImANuVud37r8UVsLr5iy6S7pBOhih94ryNdOwUxkHt3Ph1i6 +Sk/KaAcdHJ1KxtUvkcx8cXIcxcBn6zL9yZJclNqFwJu/U30rCfSMnZEfl2pSy94J +NqR32HuHUETVPm4pafs5SSYeCaWAe0At6+gnhcn+Yf1+5nyXHdWdAgMBAAGjQjBA +MA8GA1UdEwEB/wQFMAMBAf8wDgYDVR0PAQH/BAQDAgEGMB0GA1UdDgQWBBR7W0XP +r87Lev0xkhpqtvNG61dIUDANBgkqhkiG9w0BAQUFAAOCAQEAeRHAS7ORtvzw6WfU +DW5FvlXok9LOAz/t2iWwHVfLHjp2oEzsUHboZHIMpKnxuIvW1oeEuzLlQRHAd9mz +YJ3rG9XRbkREqaYB7FViHXe4XI5ISXycO1cRrK1zN44veFyQaEfZYGDm/Ac9IiAX +xPcW6cTYcvnIc3zfFi8VqT79aie2oetaupgf1eNNZAqdE8hhuvU5HIe6uL17In/2 +/qxAeeWsEG89jxt5dovEN7MhGITlNgDrYyCZuen+MwS7QcjBAvlEYyCegc5C09Y/ +LHbTY5xZ3Y+m4Q6gLkH3LpVHz7z9M/P2C2F+fpErgUfCJzDupxBdN49cOSvkBPB7 +jVaMaA== +-----END CERTIFICATE----- + +# Issuer: CN=VeriSign Class 3 Public Primary Certification Authority - G5 O=VeriSign, Inc. OU=VeriSign Trust Network/(c) 2006 VeriSign, Inc. - For authorized use only +# Subject: CN=VeriSign Class 3 Public Primary Certification Authority - G5 O=VeriSign, Inc. OU=VeriSign Trust Network/(c) 2006 VeriSign, Inc. 
- For authorized use only +# Label: "VeriSign Class 3 Public Primary Certification Authority - G5" +# Serial: 33037644167568058970164719475676101450 +# MD5 Fingerprint: cb:17:e4:31:67:3e:e2:09:fe:45:57:93:f3:0a:fa:1c +# SHA1 Fingerprint: 4e:b6:d5:78:49:9b:1c:cf:5f:58:1e:ad:56:be:3d:9b:67:44:a5:e5 +# SHA256 Fingerprint: 9a:cf:ab:7e:43:c8:d8:80:d0:6b:26:2a:94:de:ee:e4:b4:65:99:89:c3:d0:ca:f1:9b:af:64:05:e4:1a:b7:df +-----BEGIN CERTIFICATE----- +MIIE0zCCA7ugAwIBAgIQGNrRniZ96LtKIVjNzGs7SjANBgkqhkiG9w0BAQUFADCB +yjELMAkGA1UEBhMCVVMxFzAVBgNVBAoTDlZlcmlTaWduLCBJbmMuMR8wHQYDVQQL +ExZWZXJpU2lnbiBUcnVzdCBOZXR3b3JrMTowOAYDVQQLEzEoYykgMjAwNiBWZXJp +U2lnbiwgSW5jLiAtIEZvciBhdXRob3JpemVkIHVzZSBvbmx5MUUwQwYDVQQDEzxW +ZXJpU2lnbiBDbGFzcyAzIFB1YmxpYyBQcmltYXJ5IENlcnRpZmljYXRpb24gQXV0 +aG9yaXR5IC0gRzUwHhcNMDYxMTA4MDAwMDAwWhcNMzYwNzE2MjM1OTU5WjCByjEL +MAkGA1UEBhMCVVMxFzAVBgNVBAoTDlZlcmlTaWduLCBJbmMuMR8wHQYDVQQLExZW +ZXJpU2lnbiBUcnVzdCBOZXR3b3JrMTowOAYDVQQLEzEoYykgMjAwNiBWZXJpU2ln +biwgSW5jLiAtIEZvciBhdXRob3JpemVkIHVzZSBvbmx5MUUwQwYDVQQDEzxWZXJp +U2lnbiBDbGFzcyAzIFB1YmxpYyBQcmltYXJ5IENlcnRpZmljYXRpb24gQXV0aG9y +aXR5IC0gRzUwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCvJAgIKXo1 +nmAMqudLO07cfLw8RRy7K+D+KQL5VwijZIUVJ/XxrcgxiV0i6CqqpkKzj/i5Vbex +t0uz/o9+B1fs70PbZmIVYc9gDaTY3vjgw2IIPVQT60nKWVSFJuUrjxuf6/WhkcIz +SdhDY2pSS9KP6HBRTdGJaXvHcPaz3BJ023tdS1bTlr8Vd6Gw9KIl8q8ckmcY5fQG +BO+QueQA5N06tRn/Arr0PO7gi+s3i+z016zy9vA9r911kTMZHRxAy3QkGSGT2RT+ +rCpSx4/VBEnkjWNHiDxpg8v+R70rfk/Fla4OndTRQ8Bnc+MUCH7lP59zuDMKz10/ +NIeWiu5T6CUVAgMBAAGjgbIwga8wDwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8E +BAMCAQYwbQYIKwYBBQUHAQwEYTBfoV2gWzBZMFcwVRYJaW1hZ2UvZ2lmMCEwHzAH +BgUrDgMCGgQUj+XTGoasjY5rw8+AatRIGCx7GS4wJRYjaHR0cDovL2xvZ28udmVy +aXNpZ24uY29tL3ZzbG9nby5naWYwHQYDVR0OBBYEFH/TZafC3ey78DAJ80M5+gKv +MzEzMA0GCSqGSIb3DQEBBQUAA4IBAQCTJEowX2LP2BqYLz3q3JktvXf2pXkiOOzE +p6B4Eq1iDkVwZMXnl2YtmAl+X6/WzChl8gGqCBpH3vn5fJJaCGkgDdk+bW48DW7Y +5gaRQBi5+MHt39tBquCWIMnNZBU4gcmU7qKEKQsTb47bDN0lAtukixlE0kF6BWlK +WE9gyn6CagsCqiUXObXbf+eEZSqVir2G3l6BFoMtEMze/aiCKm0oHw0LxOXnGiYZ +4fQRbxC1lfznQgUy286dUV4otp6F01vvpX1FQHKOtw5rDgb7MzVIcbidJ4vEZV8N +hnacRHr2lVz2XTIIM6RUthg/aFzyQkqFOFSDX9HoLPKsEdao7WNq +-----END CERTIFICATE----- + +# Issuer: CN=SecureTrust CA O=SecureTrust Corporation +# Subject: CN=SecureTrust CA O=SecureTrust Corporation +# Label: "SecureTrust CA" +# Serial: 17199774589125277788362757014266862032 +# MD5 Fingerprint: dc:32:c3:a7:6d:25:57:c7:68:09:9d:ea:2d:a9:a2:d1 +# SHA1 Fingerprint: 87:82:c6:c3:04:35:3b:cf:d2:96:92:d2:59:3e:7d:44:d9:34:ff:11 +# SHA256 Fingerprint: f1:c1:b5:0a:e5:a2:0d:d8:03:0e:c9:f6:bc:24:82:3d:d3:67:b5:25:57:59:b4:e7:1b:61:fc:e9:f7:37:5d:73 +-----BEGIN CERTIFICATE----- +MIIDuDCCAqCgAwIBAgIQDPCOXAgWpa1Cf/DrJxhZ0DANBgkqhkiG9w0BAQUFADBI +MQswCQYDVQQGEwJVUzEgMB4GA1UEChMXU2VjdXJlVHJ1c3QgQ29ycG9yYXRpb24x +FzAVBgNVBAMTDlNlY3VyZVRydXN0IENBMB4XDTA2MTEwNzE5MzExOFoXDTI5MTIz +MTE5NDA1NVowSDELMAkGA1UEBhMCVVMxIDAeBgNVBAoTF1NlY3VyZVRydXN0IENv +cnBvcmF0aW9uMRcwFQYDVQQDEw5TZWN1cmVUcnVzdCBDQTCCASIwDQYJKoZIhvcN +AQEBBQADggEPADCCAQoCggEBAKukgeWVzfX2FI7CT8rU4niVWJxB4Q2ZQCQXOZEz +Zum+4YOvYlyJ0fwkW2Gz4BERQRwdbvC4u/jep4G6pkjGnx29vo6pQT64lO0pGtSO +0gMdA+9tDWccV9cGrcrI9f4Or2YlSASWC12juhbDCE/RRvgUXPLIXgGZbf2IzIao +wW8xQmxSPmjL8xk037uHGFaAJsTQ3MBv396gwpEWoGQRS0S8Hvbn+mPeZqx2pHGj +7DaUaHp3pLHnDi+BeuK1cobvomuL8A/b01k/unK8RCSc43Oz969XL0Imnal0ugBS +8kvNU3xHCzaFDmapCJcWNFfBZveA4+1wVMeT4C4oFVmHursCAwEAAaOBnTCBmjAT +BgkrBgEEAYI3FAIEBh4EAEMAQTALBgNVHQ8EBAMCAYYwDwYDVR0TAQH/BAUwAwEB +/zAdBgNVHQ4EFgQUQjK2FvoE/f5dS3rD/fdMQB1aQ68wNAYDVR0fBC0wKzApoCeg 
+JYYjaHR0cDovL2NybC5zZWN1cmV0cnVzdC5jb20vU1RDQS5jcmwwEAYJKwYBBAGC +NxUBBAMCAQAwDQYJKoZIhvcNAQEFBQADggEBADDtT0rhWDpSclu1pqNlGKa7UTt3 +6Z3q059c4EVlew3KW+JwULKUBRSuSceNQQcSc5R+DCMh/bwQf2AQWnL1mA6s7Ll/ +3XpvXdMc9P+IBWlCqQVxyLesJugutIxq/3HcuLHfmbx8IVQr5Fiiu1cprp6poxkm +D5kuCLDv/WnPmRoJjeOnnyvJNjR7JLN4TJUXpAYmHrZkUjZfYGfZnMUFdAvnZyPS +CPyI6a6Lf+Ew9Dd+/cYy2i2eRDAwbO4H3tI0/NL/QPZL9GZGBlSm8jIKYyYwa5vR +3ItHuuG51WLQoqD0ZwV4KWMabwTW+MZMo5qxN7SN5ShLHZ4swrhovO0C7jE= +-----END CERTIFICATE----- + +# Issuer: CN=Secure Global CA O=SecureTrust Corporation +# Subject: CN=Secure Global CA O=SecureTrust Corporation +# Label: "Secure Global CA" +# Serial: 9751836167731051554232119481456978597 +# MD5 Fingerprint: cf:f4:27:0d:d4:ed:dc:65:16:49:6d:3d:da:bf:6e:de +# SHA1 Fingerprint: 3a:44:73:5a:e5:81:90:1f:24:86:61:46:1e:3b:9c:c4:5f:f5:3a:1b +# SHA256 Fingerprint: 42:00:f5:04:3a:c8:59:0e:bb:52:7d:20:9e:d1:50:30:29:fb:cb:d4:1c:a1:b5:06:ec:27:f1:5a:de:7d:ac:69 +-----BEGIN CERTIFICATE----- +MIIDvDCCAqSgAwIBAgIQB1YipOjUiolN9BPI8PjqpTANBgkqhkiG9w0BAQUFADBK +MQswCQYDVQQGEwJVUzEgMB4GA1UEChMXU2VjdXJlVHJ1c3QgQ29ycG9yYXRpb24x +GTAXBgNVBAMTEFNlY3VyZSBHbG9iYWwgQ0EwHhcNMDYxMTA3MTk0MjI4WhcNMjkx +MjMxMTk1MjA2WjBKMQswCQYDVQQGEwJVUzEgMB4GA1UEChMXU2VjdXJlVHJ1c3Qg +Q29ycG9yYXRpb24xGTAXBgNVBAMTEFNlY3VyZSBHbG9iYWwgQ0EwggEiMA0GCSqG +SIb3DQEBAQUAA4IBDwAwggEKAoIBAQCvNS7YrGxVaQZx5RNoJLNP2MwhR/jxYDiJ +iQPpvepeRlMJ3Fz1Wuj3RSoC6zFh1ykzTM7HfAo3fg+6MpjhHZevj8fcyTiW89sa +/FHtaMbQbqR8JNGuQsiWUGMu4P51/pinX0kuleM5M2SOHqRfkNJnPLLZ/kG5VacJ +jnIFHovdRIWCQtBJwB1g8NEXLJXr9qXBkqPFwqcIYA1gBBCWeZ4WNOaptvolRTnI +HmX5k/Wq8VLcmZg9pYYaDDUz+kulBAYVHDGA76oYa8J719rO+TMg1fW9ajMtgQT7 +sFzUnKPiXB3jqUJ1XnvUd+85VLrJChgbEplJL4hL/VBi0XPnj3pDAgMBAAGjgZ0w +gZowEwYJKwYBBAGCNxQCBAYeBABDAEEwCwYDVR0PBAQDAgGGMA8GA1UdEwEB/wQF +MAMBAf8wHQYDVR0OBBYEFK9EBMJBfkiD2045AuzshHrmzsmkMDQGA1UdHwQtMCsw +KaAnoCWGI2h0dHA6Ly9jcmwuc2VjdXJldHJ1c3QuY29tL1NHQ0EuY3JsMBAGCSsG +AQQBgjcVAQQDAgEAMA0GCSqGSIb3DQEBBQUAA4IBAQBjGghAfaReUw132HquHw0L +URYD7xh8yOOvaliTFGCRsoTciE6+OYo68+aCiV0BN7OrJKQVDpI1WkpEXk5X+nXO +H0jOZvQ8QCaSmGwb7iRGDBezUqXbpZGRzzfTb+cnCDpOGR86p1hcF895P4vkp9Mm +I50mD1hp/Ed+stCNi5O/KU9DaXR2Z0vPB4zmAve14bRDtUstFJ/53CYNv6ZHdAbY +iNE6KTCEztI5gGIbqMdXSbxqVVFnFUq+NQfk1XWYN3kwFNspnWzFacxHVaIw98xc +f8LDmBxrThaA63p4ZUWiABqvDA1VZDRIuJK58bRQKfJPIx/abKwfROHdI3hRW8cW +-----END CERTIFICATE----- + +# Issuer: CN=COMODO Certification Authority O=COMODO CA Limited +# Subject: CN=COMODO Certification Authority O=COMODO CA Limited +# Label: "COMODO Certification Authority" +# Serial: 104350513648249232941998508985834464573 +# MD5 Fingerprint: 5c:48:dc:f7:42:72:ec:56:94:6d:1c:cc:71:35:80:75 +# SHA1 Fingerprint: 66:31:bf:9e:f7:4f:9e:b6:c9:d5:a6:0c:ba:6a:be:d1:f7:bd:ef:7b +# SHA256 Fingerprint: 0c:2c:d6:3d:f7:80:6f:a3:99:ed:e8:09:11:6b:57:5b:f8:79:89:f0:65:18:f9:80:8c:86:05:03:17:8b:af:66 +-----BEGIN CERTIFICATE----- +MIIEHTCCAwWgAwIBAgIQToEtioJl4AsC7j41AkblPTANBgkqhkiG9w0BAQUFADCB +gTELMAkGA1UEBhMCR0IxGzAZBgNVBAgTEkdyZWF0ZXIgTWFuY2hlc3RlcjEQMA4G +A1UEBxMHU2FsZm9yZDEaMBgGA1UEChMRQ09NT0RPIENBIExpbWl0ZWQxJzAlBgNV +BAMTHkNPTU9ETyBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eTAeFw0wNjEyMDEwMDAw +MDBaFw0yOTEyMzEyMzU5NTlaMIGBMQswCQYDVQQGEwJHQjEbMBkGA1UECBMSR3Jl +YXRlciBNYW5jaGVzdGVyMRAwDgYDVQQHEwdTYWxmb3JkMRowGAYDVQQKExFDT01P +RE8gQ0EgTGltaXRlZDEnMCUGA1UEAxMeQ09NT0RPIENlcnRpZmljYXRpb24gQXV0 +aG9yaXR5MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA0ECLi3LjkRv3 +UcEbVASY06m/weaKXTuH+7uIzg3jLz8GlvCiKVCZrts7oVewdFFxze1CkU1B/qnI +2GqGd0S7WWaXUF601CxwRM/aN5VCaTwwxHGzUvAhTaHYujl8HJ6jJJ3ygxaYqhZ8 
+Q5sVW7euNJH+1GImGEaaP+vB+fGQV+useg2L23IwambV4EajcNxo2f8ESIl33rXp ++2dtQem8Ob0y2WIC8bGoPW43nOIv4tOiJovGuFVDiOEjPqXSJDlqR6sA1KGzqSX+ +DT+nHbrTUcELpNqsOO9VUCQFZUaTNE8tja3G1CEZ0o7KBWFxB3NH5YoZEr0ETc5O +nKVIrLsm9wIDAQABo4GOMIGLMB0GA1UdDgQWBBQLWOWLxkwVN6RAqTCpIb5HNlpW +/zAOBgNVHQ8BAf8EBAMCAQYwDwYDVR0TAQH/BAUwAwEB/zBJBgNVHR8EQjBAMD6g +PKA6hjhodHRwOi8vY3JsLmNvbW9kb2NhLmNvbS9DT01PRE9DZXJ0aWZpY2F0aW9u +QXV0aG9yaXR5LmNybDANBgkqhkiG9w0BAQUFAAOCAQEAPpiem/Yb6dc5t3iuHXIY +SdOH5EOC6z/JqvWote9VfCFSZfnVDeFs9D6Mk3ORLgLETgdxb8CPOGEIqB6BCsAv +IC9Bi5HcSEW88cbeunZrM8gALTFGTO3nnc+IlP8zwFboJIYmuNg4ON8qa90SzMc/ +RxdMosIGlgnW2/4/PEZB31jiVg88O8EckzXZOFKs7sjsLjBOlDW0JB9LeGna8gI4 +zJVSk/BwJVmcIGfE7vmLV2H0knZ9P4SNVbfo5azV8fUZVqZa+5Acr5Pr5RzUZ5dd +BA6+C4OmF4O5MBKgxTMVBbkN+8cFduPYSo38NBejxiEovjBFMR7HeL5YYTisO+IB +ZQ== +-----END CERTIFICATE----- + +# Issuer: CN=Network Solutions Certificate Authority O=Network Solutions L.L.C. +# Subject: CN=Network Solutions Certificate Authority O=Network Solutions L.L.C. +# Label: "Network Solutions Certificate Authority" +# Serial: 116697915152937497490437556386812487904 +# MD5 Fingerprint: d3:f3:a6:16:c0:fa:6b:1d:59:b1:2d:96:4d:0e:11:2e +# SHA1 Fingerprint: 74:f8:a3:c3:ef:e7:b3:90:06:4b:83:90:3c:21:64:60:20:e5:df:ce +# SHA256 Fingerprint: 15:f0:ba:00:a3:ac:7a:f3:ac:88:4c:07:2b:10:11:a0:77:bd:77:c0:97:f4:01:64:b2:f8:59:8a:bd:83:86:0c +-----BEGIN CERTIFICATE----- +MIID5jCCAs6gAwIBAgIQV8szb8JcFuZHFhfjkDFo4DANBgkqhkiG9w0BAQUFADBi +MQswCQYDVQQGEwJVUzEhMB8GA1UEChMYTmV0d29yayBTb2x1dGlvbnMgTC5MLkMu +MTAwLgYDVQQDEydOZXR3b3JrIFNvbHV0aW9ucyBDZXJ0aWZpY2F0ZSBBdXRob3Jp +dHkwHhcNMDYxMjAxMDAwMDAwWhcNMjkxMjMxMjM1OTU5WjBiMQswCQYDVQQGEwJV +UzEhMB8GA1UEChMYTmV0d29yayBTb2x1dGlvbnMgTC5MLkMuMTAwLgYDVQQDEydO +ZXR3b3JrIFNvbHV0aW9ucyBDZXJ0aWZpY2F0ZSBBdXRob3JpdHkwggEiMA0GCSqG +SIb3DQEBAQUAA4IBDwAwggEKAoIBAQDkvH6SMG3G2I4rC7xGzuAnlt7e+foS0zwz +c7MEL7xxjOWftiJgPl9dzgn/ggwbmlFQGiaJ3dVhXRncEg8tCqJDXRfQNJIg6nPP +OCwGJgl6cvf6UDL4wpPTaaIjzkGxzOTVHzbRijr4jGPiFFlp7Q3Tf2vouAPlT2rl +mGNpSAW+Lv8ztumXWWn4Zxmuk2GWRBXTcrA/vGp97Eh/jcOrqnErU2lBUzS1sLnF +BgrEsEX1QV1uiUV7PTsmjHTC5dLRfbIR1PtYMiKagMnc/Qzpf14Dl847ABSHJ3A4 +qY5usyd2mFHgBeMhqxrVhSI8KbWaFsWAqPS7azCPL0YCorEMIuDTAgMBAAGjgZcw +gZQwHQYDVR0OBBYEFCEwyfsA106Y2oeqKtCnLrFAMadMMA4GA1UdDwEB/wQEAwIB +BjAPBgNVHRMBAf8EBTADAQH/MFIGA1UdHwRLMEkwR6BFoEOGQWh0dHA6Ly9jcmwu +bmV0c29sc3NsLmNvbS9OZXR3b3JrU29sdXRpb25zQ2VydGlmaWNhdGVBdXRob3Jp +dHkuY3JsMA0GCSqGSIb3DQEBBQUAA4IBAQC7rkvnt1frf6ott3NHhWrB5KUd5Oc8 +6fRZZXe1eltajSU24HqXLjjAV2CDmAaDn7l2em5Q4LqILPxFzBiwmZVRDuwduIj/ +h1AcgsLj4DKAv6ALR8jDMe+ZZzKATxcheQxpXN5eNK4CtSbqUN9/GGUsyfJj4akH +/nxxH2szJGoeBfcFaMBqEssuXmHLrijTfsK0ZpEmXzwuJF/LWA/rKOyvEZbz3Htv +wKeI8lN3s2Berq4o2jUsbzRF0ybh3uxbTydrFny9RAQYgrOJeRcQcT16ohZO9QHN +pGxlaKFJdlxDydi8NmdspZS11My5vWo1ViHe2MPr+8ukYEywVaCge1ey +-----END CERTIFICATE----- + +# Issuer: CN=COMODO ECC Certification Authority O=COMODO CA Limited +# Subject: CN=COMODO ECC Certification Authority O=COMODO CA Limited +# Label: "COMODO ECC Certification Authority" +# Serial: 41578283867086692638256921589707938090 +# MD5 Fingerprint: 7c:62:ff:74:9d:31:53:5e:68:4a:d5:78:aa:1e:bf:23 +# SHA1 Fingerprint: 9f:74:4e:9f:2b:4d:ba:ec:0f:31:2c:50:b6:56:3b:8e:2d:93:c3:11 +# SHA256 Fingerprint: 17:93:92:7a:06:14:54:97:89:ad:ce:2f:8f:34:f7:f0:b6:6d:0f:3a:e3:a3:b8:4d:21:ec:15:db:ba:4f:ad:c7 +-----BEGIN CERTIFICATE----- +MIICiTCCAg+gAwIBAgIQH0evqmIAcFBUTAGem2OZKjAKBggqhkjOPQQDAzCBhTEL +MAkGA1UEBhMCR0IxGzAZBgNVBAgTEkdyZWF0ZXIgTWFuY2hlc3RlcjEQMA4GA1UE +BxMHU2FsZm9yZDEaMBgGA1UEChMRQ09NT0RPIENBIExpbWl0ZWQxKzApBgNVBAMT 
+IkNPTU9ETyBFQ0MgQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkwHhcNMDgwMzA2MDAw +MDAwWhcNMzgwMTE4MjM1OTU5WjCBhTELMAkGA1UEBhMCR0IxGzAZBgNVBAgTEkdy +ZWF0ZXIgTWFuY2hlc3RlcjEQMA4GA1UEBxMHU2FsZm9yZDEaMBgGA1UEChMRQ09N +T0RPIENBIExpbWl0ZWQxKzApBgNVBAMTIkNPTU9ETyBFQ0MgQ2VydGlmaWNhdGlv +biBBdXRob3JpdHkwdjAQBgcqhkjOPQIBBgUrgQQAIgNiAAQDR3svdcmCFYX7deSR +FtSrYpn1PlILBs5BAH+X4QokPB0BBO490o0JlwzgdeT6+3eKKvUDYEs2ixYjFq0J +cfRK9ChQtP6IHG4/bC8vCVlbpVsLM5niwz2J+Wos77LTBumjQjBAMB0GA1UdDgQW +BBR1cacZSBm8nZ3qQUfflMRId5nTeTAOBgNVHQ8BAf8EBAMCAQYwDwYDVR0TAQH/ +BAUwAwEB/zAKBggqhkjOPQQDAwNoADBlAjEA7wNbeqy3eApyt4jf/7VGFAkK+qDm +fQjGGoe9GKhzvSbKYAydzpmfz1wPMOG+FDHqAjAU9JM8SaczepBGR7NjfRObTrdv +GDeAU/7dIOA1mjbRxwG55tzd8/8dLDoWV9mSOdY= +-----END CERTIFICATE----- + +# Issuer: CN=OISTE WISeKey Global Root GA CA O=WISeKey OU=Copyright (c) 2005/OISTE Foundation Endorsed +# Subject: CN=OISTE WISeKey Global Root GA CA O=WISeKey OU=Copyright (c) 2005/OISTE Foundation Endorsed +# Label: "OISTE WISeKey Global Root GA CA" +# Serial: 86718877871133159090080555911823548314 +# MD5 Fingerprint: bc:6c:51:33:a7:e9:d3:66:63:54:15:72:1b:21:92:93 +# SHA1 Fingerprint: 59:22:a1:e1:5a:ea:16:35:21:f8:98:39:6a:46:46:b0:44:1b:0f:a9 +# SHA256 Fingerprint: 41:c9:23:86:6a:b4:ca:d6:b7:ad:57:80:81:58:2e:02:07:97:a6:cb:df:4f:ff:78:ce:83:96:b3:89:37:d7:f5 +-----BEGIN CERTIFICATE----- +MIID8TCCAtmgAwIBAgIQQT1yx/RrH4FDffHSKFTfmjANBgkqhkiG9w0BAQUFADCB +ijELMAkGA1UEBhMCQ0gxEDAOBgNVBAoTB1dJU2VLZXkxGzAZBgNVBAsTEkNvcHly +aWdodCAoYykgMjAwNTEiMCAGA1UECxMZT0lTVEUgRm91bmRhdGlvbiBFbmRvcnNl +ZDEoMCYGA1UEAxMfT0lTVEUgV0lTZUtleSBHbG9iYWwgUm9vdCBHQSBDQTAeFw0w +NTEyMTExNjAzNDRaFw0zNzEyMTExNjA5NTFaMIGKMQswCQYDVQQGEwJDSDEQMA4G +A1UEChMHV0lTZUtleTEbMBkGA1UECxMSQ29weXJpZ2h0IChjKSAyMDA1MSIwIAYD +VQQLExlPSVNURSBGb3VuZGF0aW9uIEVuZG9yc2VkMSgwJgYDVQQDEx9PSVNURSBX +SVNlS2V5IEdsb2JhbCBSb290IEdBIENBMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8A +MIIBCgKCAQEAy0+zAJs9Nt350UlqaxBJH+zYK7LG+DKBKUOVTJoZIyEVRd7jyBxR +VVuuk+g3/ytr6dTqvirdqFEr12bDYVxgAsj1znJ7O7jyTmUIms2kahnBAbtzptf2 +w93NvKSLtZlhuAGio9RN1AU9ka34tAhxZK9w8RxrfvbDd50kc3vkDIzh2TbhmYsF +mQvtRTEJysIA2/dyoJaqlYfQjse2YXMNdmaM3Bu0Y6Kff5MTMPGhJ9vZ/yxViJGg +4E8HsChWjBgbl0SOid3gF27nKu+POQoxhILYQBRJLnpB5Kf+42TMwVlxSywhp1t9 +4B3RLoGbw9ho972WG6xwsRYUC9tguSYBBQIDAQABo1EwTzALBgNVHQ8EBAMCAYYw +DwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQUswN+rja8sHnR3JQmthG+IbJphpQw +EAYJKwYBBAGCNxUBBAMCAQAwDQYJKoZIhvcNAQEFBQADggEBAEuh/wuHbrP5wUOx +SPMowB0uyQlB+pQAHKSkq0lPjz0e701vvbyk9vImMMkQyh2I+3QZH4VFvbBsUfk2 +ftv1TDI6QU9bR8/oCy22xBmddMVHxjtqD6wU2zz0c5ypBd8A3HR4+vg1YFkCExh8 +vPtNsCBtQ7tgMHpnM1zFmdH4LTlSc/uMqpclXHLZCB6rTjzjgTGfA6b7wP4piFXa +hNVQA7bihKOmNqoROgHhGEvWRGizPflTdISzRpFGlgC3gCy24eMQ4tui5yiPAZZi +Fj4A4xylNoEYokxSdsARo27mHbrjWr42U8U+dY+GaSlYU7Wcu2+fXMUY7N0v4ZjJ +/L7fCg0= +-----END CERTIFICATE----- + +# Issuer: CN=Certigna O=Dhimyotis +# Subject: CN=Certigna O=Dhimyotis +# Label: "Certigna" +# Serial: 18364802974209362175 +# MD5 Fingerprint: ab:57:a6:5b:7d:42:82:19:b5:d8:58:26:28:5e:fd:ff +# SHA1 Fingerprint: b1:2e:13:63:45:86:a4:6f:1a:b2:60:68:37:58:2d:c4:ac:fd:94:97 +# SHA256 Fingerprint: e3:b6:a2:db:2e:d7:ce:48:84:2f:7a:c5:32:41:c7:b7:1d:54:14:4b:fb:40:c1:1f:3f:1d:0b:42:f5:ee:a1:2d +-----BEGIN CERTIFICATE----- +MIIDqDCCApCgAwIBAgIJAP7c4wEPyUj/MA0GCSqGSIb3DQEBBQUAMDQxCzAJBgNV +BAYTAkZSMRIwEAYDVQQKDAlEaGlteW90aXMxETAPBgNVBAMMCENlcnRpZ25hMB4X +DTA3MDYyOTE1MTMwNVoXDTI3MDYyOTE1MTMwNVowNDELMAkGA1UEBhMCRlIxEjAQ +BgNVBAoMCURoaW15b3RpczERMA8GA1UEAwwIQ2VydGlnbmEwggEiMA0GCSqGSIb3 +DQEBAQUAA4IBDwAwggEKAoIBAQDIaPHJ1tazNHUmgh7stL7qXOEm7RFHYeGifBZ4 
+QCHkYJ5ayGPhxLGWkv8YbWkj4Sti993iNi+RB7lIzw7sebYs5zRLcAglozyHGxny
+gQcPOJAZ0xH+hrTy0V4eHpbNgGzOOzGTtvKg0KmVEn2lmsxryIRWijOp5yIVUxbw
+zBfsV1/pogqYCd7jX5xv3EjjhQsVWqa6n6xI4wmy9/Qy3l40vhx4XUJbzg4ij02Q
+130yGLMLLGq/jj8UEYkgDncUtT2UCIf3JR7VsmAA7G8qKCVuKj4YYxclPz5EIBb2
+JsglrgVKtOdjLPOMFlN+XPsRGgjBRmKfIrjxwo1p3Po6WAbfAgMBAAGjgbwwgbkw
+DwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQUGu3+QTmQtCRZvgHyUtVF9lo53BEw
+ZAYDVR0jBF0wW4AUGu3+QTmQtCRZvgHyUtVF9lo53BGhOKQ2MDQxCzAJBgNVBAYT
+AkZSMRIwEAYDVQQKDAlEaGlteW90aXMxETAPBgNVBAMMCENlcnRpZ25hggkA/tzj
+AQ/JSP8wDgYDVR0PAQH/BAQDAgEGMBEGCWCGSAGG+EIBAQQEAwIABzANBgkqhkiG
+9w0BAQUFAAOCAQEAhQMeknH2Qq/ho2Ge6/PAD/Kl1NqV5ta+aDY9fm4fTIrv0Q8h
+bV6lUmPOEvjvKtpv6zf+EwLHyzs+ImvaYS5/1HI93TDhHkxAGYwP15zRgzB7mFnc
+fca5DClMoTOi62c6ZYTTluLtdkVwj7Ur3vkj1kluPBS1xp81HlDQwY9qcEQCYsuu
+HWhBp6pX6FOqB9IG9tUUBguRA3UsbHK1YZWaDYu5Def131TN3ubY1gkIl2PlwS6w
+t0QmwCbAr1UwnjvVNioZBPRcHv/PLLf/0P2HQBHVESO7SMAhqaQoLf0V+LBOK/Qw
+WyH8EZE0vkHve52Xdf+XlcCWWC/qu0bXu+TZLg==
+-----END CERTIFICATE-----
+
+# Issuer: CN=Cybertrust Global Root O=Cybertrust, Inc
+# Subject: CN=Cybertrust Global Root O=Cybertrust, Inc
+# Label: "Cybertrust Global Root"
+# Serial: 4835703278459682877484360
+# MD5 Fingerprint: 72:e4:4a:87:e3:69:40:80:77:ea:bc:e3:f4:ff:f0:e1
+# SHA1 Fingerprint: 5f:43:e5:b1:bf:f8:78:8c:ac:1c:c7:ca:4a:9a:c6:22:2b:cc:34:c6
+# SHA256 Fingerprint: 96:0a:df:00:63:e9:63:56:75:0c:29:65:dd:0a:08:67:da:0b:9c:bd:6e:77:71:4a:ea:fb:23:49:ab:39:3d:a3
+-----BEGIN CERTIFICATE-----
+MIIDoTCCAomgAwIBAgILBAAAAAABD4WqLUgwDQYJKoZIhvcNAQEFBQAwOzEYMBYG
+A1UEChMPQ3liZXJ0cnVzdCwgSW5jMR8wHQYDVQQDExZDeWJlcnRydXN0IEdsb2Jh
+bCBSb290MB4XDTA2MTIxNTA4MDAwMFoXDTIxMTIxNTA4MDAwMFowOzEYMBYGA1UE
+ChMPQ3liZXJ0cnVzdCwgSW5jMR8wHQYDVQQDExZDeWJlcnRydXN0IEdsb2JhbCBS
+b290MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA+Mi8vRRQZhP/8NN5
+7CPytxrHjoXxEnOmGaoQ25yiZXRadz5RfVb23CO21O1fWLE3TdVJDm71aofW0ozS
+J8bi/zafmGWgE07GKmSb1ZASzxQG9Dvj1Ci+6A74q05IlG2OlTEQXO2iLb3VOm2y
+HLtgwEZLAfVJrn5GitB0jaEMAs7u/OePuGtm839EAL9mJRQr3RAwHQeWP032a7iP
+t3sMpTjr3kfb1V05/Iin89cqdPHoWqI7n1C6poxFNcJQZZXcY4Lv3b93TZxiyWNz
+FtApD0mpSPCzqrdsxacwOUBdrsTiXSZT8M4cIwhhqJQZugRiQOwfOHB3EgZxpzAY
+XSUnpQIDAQABo4GlMIGiMA4GA1UdDwEB/wQEAwIBBjAPBgNVHRMBAf8EBTADAQH/
+MB0GA1UdDgQWBBS2CHsNesysIEyGVjJez6tuhS1wVzA/BgNVHR8EODA2MDSgMqAw
+hi5odHRwOi8vd3d3Mi5wdWJsaWMtdHJ1c3QuY29tL2NybC9jdC9jdHJvb3QuY3Js
+MB8GA1UdIwQYMBaAFLYIew16zKwgTIZWMl7Pq26FLXBXMA0GCSqGSIb3DQEBBQUA
+A4IBAQBW7wojoFROlZfJ+InaRcHUowAl9B8Tq7ejhVhpwjCt2BWKLePJzYFa+HMj
+Wqd8BfP9IjsO0QbE2zZMcwSO5bAi5MXzLqXZI+O4Tkogp24CJJ8iYGd7ix1yCcUx
+XOl5n4BHPa2hCwcUPUf/A2kaDAtE52Mlp3+yybh2hO0j9n0Hq0V+09+zv+mKts2o
+omcrUtW3ZfA5TGOgkXmTUg9U3YO7n9GPp1Nzw8v/MOx8BLjYRB+TX3EJIrduPuoc
+A06dGiBh+4E37F78CkWr1+cXVdCg6mCbpvbjjFspwgZgFJ0tl0ypkxWdYcQBX0jW
+WL1WMRJOEcgh4LMRkWXbtKaIOM5V
+-----END CERTIFICATE-----
+
+# Issuer: O=Chunghwa Telecom Co., Ltd. OU=ePKI Root Certification Authority
+# Subject: O=Chunghwa Telecom Co., Ltd. OU=ePKI Root Certification Authority
+# Label: "ePKI Root Certification Authority"
+# Serial: 28956088682735189655030529057352760477
+# MD5 Fingerprint: 1b:2e:00:ca:26:06:90:3d:ad:fe:6f:15:68:d3:6b:b3
+# SHA1 Fingerprint: 67:65:0d:f1:7e:8e:7e:5b:82:40:a4:f4:56:4b:cf:e2:3d:69:c6:f0
+# SHA256 Fingerprint: c0:a6:f4:dc:63:a2:4b:fd:cf:54:ef:2a:6a:08:2a:0a:72:de:35:80:3e:2f:f5:ff:52:7a:e5:d8:72:06:df:d5
+-----BEGIN CERTIFICATE-----
+MIIFsDCCA5igAwIBAgIQFci9ZUdcr7iXAF7kBtK8nTANBgkqhkiG9w0BAQUFADBe
+MQswCQYDVQQGEwJUVzEjMCEGA1UECgwaQ2h1bmdod2EgVGVsZWNvbSBDby4sIEx0
+ZC4xKjAoBgNVBAsMIWVQS0kgUm9vdCBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eTAe
+Fw0wNDEyMjAwMjMxMjdaFw0zNDEyMjAwMjMxMjdaMF4xCzAJBgNVBAYTAlRXMSMw
+IQYDVQQKDBpDaHVuZ2h3YSBUZWxlY29tIENvLiwgTHRkLjEqMCgGA1UECwwhZVBL
+SSBSb290IENlcnRpZmljYXRpb24gQXV0aG9yaXR5MIICIjANBgkqhkiG9w0BAQEF
+AAOCAg8AMIICCgKCAgEA4SUP7o3biDN1Z82tH306Tm2d0y8U82N0ywEhajfqhFAH
+SyZbCUNsIZ5qyNUD9WBpj8zwIuQf5/dqIjG3LBXy4P4AakP/h2XGtRrBp0xtInAh
+ijHyl3SJCRImHJ7K2RKilTza6We/CKBk49ZCt0Xvl/T29de1ShUCWH2YWEtgvM3X
+DZoTM1PRYfl61dd4s5oz9wCGzh1NlDivqOx4UXCKXBCDUSH3ET00hl7lSM2XgYI1
+TBnsZfZrxQWh7kcT1rMhJ5QQCtkkO7q+RBNGMD+XPNjX12ruOzjjK9SXDrkb5wdJ
+fzcq+Xd4z1TtW0ado4AOkUPB1ltfFLqfpo0kR0BZv3I4sjZsN/+Z0V0OWQqraffA
+sgRFelQArr5T9rXn4fg8ozHSqf4hUmTFpmfwdQcGlBSBVcYn5AGPF8Fqcde+S/uU
+WH1+ETOxQvdibBjWzwloPn9s9h6PYq2lY9sJpx8iQkEeb5mKPtf5P0B6ebClAZLS
+nT0IFaUQAS2zMnaolQ2zepr7BxB4EW/hj8e6DyUadCrlHJhBmd8hh+iVBmoKs2pH
+dmX2Os+PYhcZewoozRrSgx4hxyy/vv9haLdnG7t4TY3OZ+XkwY63I2binZB1NJip
+NiuKmpS5nezMirH4JYlcWrYvjB9teSSnUmjDhDXiZo1jDiVN1Rmy5nk3pyKdVDEC
+AwEAAaNqMGgwHQYDVR0OBBYEFB4M97Zn8uGSJglFwFU5Lnc/QkqiMAwGA1UdEwQF
+MAMBAf8wOQYEZyoHAAQxMC8wLQIBADAJBgUrDgMCGgUAMAcGBWcqAwAABBRFsMLH
+ClZ87lt4DJX5GFPBphzYEDANBgkqhkiG9w0BAQUFAAOCAgEACbODU1kBPpVJufGB
+uvl2ICO1J2B01GqZNF5sAFPZn/KmsSQHRGoqxqWOeBLoR9lYGxMqXnmbnwoqZ6Yl
+PwZpVnPDimZI+ymBV3QGypzqKOg4ZyYr8dW1P2WT+DZdjo2NQCCHGervJ8A9tDkP
+JXtoUHRVnAxZfVo9QZQlUgjgRywVMRnVvwdVxrsStZf0X4OFunHB2WyBEXYKCrC/
+gpf36j36+uwtqSiUO1bd0lEursC9CBWMd1I0ltabrNMdjmEPNXubrjlpC2JgQCA2
+j6/7Nu4tCEoduL+bXPjqpRugc6bY+G7gMwRfaKonh+3ZwZCc7b3jajWvY9+rGNm6
+5ulK6lCKD2GTHuItGeIwlDWSXQ62B68ZgI9HkFFLLk3dheLSClIKF5r8GrBQAuUB
+o2M3IUxExJtRmREOc5wGj1QupyheRDmHVi03vYVElOEMSyycw5KFNGHLD7ibSkNS
+/jQ6fbjpKdx2qcgw+BRxgMYeNkh0IkFch4LoGHGLQYlE535YW6i4jRPpp2zDR+2z
+Gp1iro2C6pSe3VkQw63d4k3jMdXH7OjysP6SHhYKGvzZ8/gntsm+HbRsZJB/9OTE
+W9c3rkIO3aQab3yIVMUWbuF6aC74Or8NpDyJO3inTmODBCEIZ43ygknQW/2xzQ+D
+hNQ+IIX3Sj0rnP0qCglN6oH4EZw=
+-----END CERTIFICATE-----
+
+# Issuer: O=certSIGN OU=certSIGN ROOT CA
+# Subject: O=certSIGN OU=certSIGN ROOT CA
+# Label: "certSIGN ROOT CA"
+# Serial: 35210227249154
+# MD5 Fingerprint: 18:98:c0:d6:e9:3a:fc:f9:b0:f5:0c:f7:4b:01:44:17
+# SHA1 Fingerprint: fa:b7:ee:36:97:26:62:fb:2d:b0:2a:f6:bf:03:fd:e8:7c:4b:2f:9b
+# SHA256 Fingerprint: ea:a9:62:c4:fa:4a:6b:af:eb:e4:15:19:6d:35:1c:cd:88:8d:4f:53:f3:fa:8a:e6:d7:c4:66:a9:4e:60:42:bb
+-----BEGIN CERTIFICATE-----
+MIIDODCCAiCgAwIBAgIGIAYFFnACMA0GCSqGSIb3DQEBBQUAMDsxCzAJBgNVBAYT
+AlJPMREwDwYDVQQKEwhjZXJ0U0lHTjEZMBcGA1UECxMQY2VydFNJR04gUk9PVCBD
+QTAeFw0wNjA3MDQxNzIwMDRaFw0zMTA3MDQxNzIwMDRaMDsxCzAJBgNVBAYTAlJP
+MREwDwYDVQQKEwhjZXJ0U0lHTjEZMBcGA1UECxMQY2VydFNJR04gUk9PVCBDQTCC
+ASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBALczuX7IJUqOtdu0KBuqV5Do
+0SLTZLrTk+jUrIZhQGpgV2hUhE28alQCBf/fm5oqrl0Hj0rDKH/v+yv6efHHrfAQ
+UySQi2bJqIirr1qjAOm+ukbuW3N7LBeCgV5iLKECZbO9xSsAfsT8AzNXDe3i+s5d
+RdY4zTW2ssHQnIFKquSyAVwdj1+ZxLGt24gh65AIgoDzMKND5pCCrlUoSe1b16kQ
+OA7+j0xbm0bqQfWwCHTD0IgztnzXdN/chNFDDnU5oSVAKOp4yw4sLjmdjItuFhwv
+JoIQ4uNllAoEwF73XVv4EOLQunpL+943AAAaWyjj0pxzPjKHmKHJUS/X3qwzs08C
+AwEAAaNCMEAwDwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMCAcYwHQYDVR0O
+BBYEFOCMm9slSbPxfIbWskKHC9BroNnkMA0GCSqGSIb3DQEBBQUAA4IBAQA+0hyJ
+LjX8+HXd5n9liPRyTMks1zJO890ZeUe9jjtbkw9QSSQTaxQGcu8J06Gh40CEyecY
+MnQ8SG4Pn0vU9x7Tk4ZkVJdjclDVVc/6IJMCopvDI5NOFlV2oHB5bc0hH88vLbwZ
+44gx+FkagQnIl6Z0x2DEW8xXjrJ1/RsCCdtZb3KTafcxQdaIOL+Hsr0Wefmq5L6I
+Jd1hJyMctTEHBDa0GpC9oHRxUIltvBTjD4au8as+x6AJzKNI0eDbZOeStc+vckNw
+i/nDhDwTqn6Sm1dTk/pwwpEOMfmbZ13pljheX7NzTogVZ96edhBiIL5VaZVDADlN
+9u6wWk5JRFRYX0KD
+-----END CERTIFICATE-----
+
+# Issuer: CN=GeoTrust Primary Certification Authority - G3 O=GeoTrust Inc. OU=(c) 2008 GeoTrust Inc. - For authorized use only
+# Subject: CN=GeoTrust Primary Certification Authority - G3 O=GeoTrust Inc. OU=(c) 2008 GeoTrust Inc. - For authorized use only
+# Label: "GeoTrust Primary Certification Authority - G3"
+# Serial: 28809105769928564313984085209975885599
+# MD5 Fingerprint: b5:e8:34:36:c9:10:44:58:48:70:6d:2e:83:d4:b8:05
+# SHA1 Fingerprint: 03:9e:ed:b8:0b:e7:a0:3c:69:53:89:3b:20:d2:d9:32:3a:4c:2a:fd
+# SHA256 Fingerprint: b4:78:b8:12:25:0d:f8:78:63:5c:2a:a7:ec:7d:15:5e:aa:62:5e:e8:29:16:e2:cd:29:43:61:88:6c:d1:fb:d4
+-----BEGIN CERTIFICATE-----
+MIID/jCCAuagAwIBAgIQFaxulBmyeUtB9iepwxgPHzANBgkqhkiG9w0BAQsFADCB
+mDELMAkGA1UEBhMCVVMxFjAUBgNVBAoTDUdlb1RydXN0IEluYy4xOTA3BgNVBAsT
+MChjKSAyMDA4IEdlb1RydXN0IEluYy4gLSBGb3IgYXV0aG9yaXplZCB1c2Ugb25s
+eTE2MDQGA1UEAxMtR2VvVHJ1c3QgUHJpbWFyeSBDZXJ0aWZpY2F0aW9uIEF1dGhv
+cml0eSAtIEczMB4XDTA4MDQwMjAwMDAwMFoXDTM3MTIwMTIzNTk1OVowgZgxCzAJ
+BgNVBAYTAlVTMRYwFAYDVQQKEw1HZW9UcnVzdCBJbmMuMTkwNwYDVQQLEzAoYykg
+MjAwOCBHZW9UcnVzdCBJbmMuIC0gRm9yIGF1dGhvcml6ZWQgdXNlIG9ubHkxNjA0
+BgNVBAMTLUdlb1RydXN0IFByaW1hcnkgQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkg
+LSBHMzCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBANziXmJYHTNXOTIz
++uvLh4yn1ErdBojqZI4xmKU4kB6Yzy5jK/BGvESyiaHAKAxJcCGVn2TAppMSAmUm
+hsalifD614SgcK9PGpc/BkTVyetyEH3kMSj7HGHmKAdEc5IiaacDiGydY8hS2pgn
+5whMcD60yRLBxWeDXTPzAxHsatBT4tG6NmCUgLthY2xbF37fQJQeqw3CIShwiP/W
+JmxsYAQlTlV+fe+/lEjetx3dcI0FX4ilm/LC7urRQEFtYjgdVgbFA0dRIBn8exAL
+DmKudlW/X3e+PkkBUz2YJQN2JFodtNuJ6nnltrM7P7pMKEF/BqxqjsHQ9gUdfeZC
+huOl1UcCAwEAAaNCMEAwDwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMCAQYw
+HQYDVR0OBBYEFMR5yo6hTgMdHNxr2zFblD4/MH8tMA0GCSqGSIb3DQEBCwUAA4IB
+AQAtxRPPVoB7eni9n64smefv2t+UXglpp+duaIy9cr5HqQ6XErhK8WTTOd8lNNTB
+zU6B8A8ExCSzNJbGpqow32hhc9f5joWJ7w5elShKKiePEI4ufIbEAp7aDHdlDkQN
+kv39sxY2+hENHYwOB4lqKVb3cvTdFZx3NWZXqxNT2I7BQMXXExZacse3aQHEerGD
+AWh9jUGhlBjBJVz88P6DAod8DQ3PLghcSkANPuyBYeYk28rgDi0Hsj5W3I31QYUH
+SJsMC8tJP33st/3LjWeJGqvtux6jAAgIFyqCXDFdRootD4abdNlF+9RAsXqqaC2G
+spki4cErx5z481+oghLrGREt
+-----END CERTIFICATE-----
+
+# Issuer: CN=thawte Primary Root CA - G2 O=thawte, Inc. OU=(c) 2007 thawte, Inc. - For authorized use only
+# Subject: CN=thawte Primary Root CA - G2 O=thawte, Inc. OU=(c) 2007 thawte, Inc. - For authorized use only
+# Label: "thawte Primary Root CA - G2"
+# Serial: 71758320672825410020661621085256472406
+# MD5 Fingerprint: 74:9d:ea:60:24:c4:fd:22:53:3e:cc:3a:72:d9:29:4f
+# SHA1 Fingerprint: aa:db:bc:22:23:8f:c4:01:a1:27:bb:38:dd:f4:1d:db:08:9e:f0:12
+# SHA256 Fingerprint: a4:31:0d:50:af:18:a6:44:71:90:37:2a:86:af:af:8b:95:1f:fb:43:1d:83:7f:1e:56:88:b4:59:71:ed:15:57
+-----BEGIN CERTIFICATE-----
+MIICiDCCAg2gAwIBAgIQNfwmXNmET8k9Jj1Xm67XVjAKBggqhkjOPQQDAzCBhDEL
+MAkGA1UEBhMCVVMxFTATBgNVBAoTDHRoYXd0ZSwgSW5jLjE4MDYGA1UECxMvKGMp
+IDIwMDcgdGhhd3RlLCBJbmMuIC0gRm9yIGF1dGhvcml6ZWQgdXNlIG9ubHkxJDAi
+BgNVBAMTG3RoYXd0ZSBQcmltYXJ5IFJvb3QgQ0EgLSBHMjAeFw0wNzExMDUwMDAw
+MDBaFw0zODAxMTgyMzU5NTlaMIGEMQswCQYDVQQGEwJVUzEVMBMGA1UEChMMdGhh
+d3RlLCBJbmMuMTgwNgYDVQQLEy8oYykgMjAwNyB0aGF3dGUsIEluYy4gLSBGb3Ig
+YXV0aG9yaXplZCB1c2Ugb25seTEkMCIGA1UEAxMbdGhhd3RlIFByaW1hcnkgUm9v
+dCBDQSAtIEcyMHYwEAYHKoZIzj0CAQYFK4EEACIDYgAEotWcgnuVnfFSeIf+iha/
+BebfowJPDQfGAFG6DAJSLSKkQjnE/o/qycG+1E3/n3qe4rF8mq2nhglzh9HnmuN6
+papu+7qzcMBniKI11KOasf2twu8x+qi58/sIxpHR+ymVo0IwQDAPBgNVHRMBAf8E
+BTADAQH/MA4GA1UdDwEB/wQEAwIBBjAdBgNVHQ4EFgQUmtgAMADna3+FGO6Lts6K
+DPgR4bswCgYIKoZIzj0EAwMDaQAwZgIxAN344FdHW6fmCsO99YCKlzUNG4k8VIZ3
+KMqh9HneteY4sPBlcIx/AlTCv//YoT7ZzwIxAMSNlPzcU9LcnXgWHxUzI1NS41ox
+XZ3Krr0TKUQNJ1uo52icEvdYPy5yAlejj6EULg==
+-----END CERTIFICATE-----
+
+# Issuer: CN=thawte Primary Root CA - G3 O=thawte, Inc. OU=Certification Services Division/(c) 2008 thawte, Inc. - For authorized use only
+# Subject: CN=thawte Primary Root CA - G3 O=thawte, Inc. OU=Certification Services Division/(c) 2008 thawte, Inc. - For authorized use only
+# Label: "thawte Primary Root CA - G3"
+# Serial: 127614157056681299805556476275995414779
+# MD5 Fingerprint: fb:1b:5d:43:8a:94:cd:44:c6:76:f2:43:4b:47:e7:31
+# SHA1 Fingerprint: f1:8b:53:8d:1b:e9:03:b6:a6:f0:56:43:5b:17:15:89:ca:f3:6b:f2
+# SHA256 Fingerprint: 4b:03:f4:58:07:ad:70:f2:1b:fc:2c:ae:71:c9:fd:e4:60:4c:06:4c:f5:ff:b6:86:ba:e5:db:aa:d7:fd:d3:4c
+-----BEGIN CERTIFICATE-----
+MIIEKjCCAxKgAwIBAgIQYAGXt0an6rS0mtZLL/eQ+zANBgkqhkiG9w0BAQsFADCB
+rjELMAkGA1UEBhMCVVMxFTATBgNVBAoTDHRoYXd0ZSwgSW5jLjEoMCYGA1UECxMf
+Q2VydGlmaWNhdGlvbiBTZXJ2aWNlcyBEaXZpc2lvbjE4MDYGA1UECxMvKGMpIDIw
+MDggdGhhd3RlLCBJbmMuIC0gRm9yIGF1dGhvcml6ZWQgdXNlIG9ubHkxJDAiBgNV
+BAMTG3RoYXd0ZSBQcmltYXJ5IFJvb3QgQ0EgLSBHMzAeFw0wODA0MDIwMDAwMDBa
+Fw0zNzEyMDEyMzU5NTlaMIGuMQswCQYDVQQGEwJVUzEVMBMGA1UEChMMdGhhd3Rl
+LCBJbmMuMSgwJgYDVQQLEx9DZXJ0aWZpY2F0aW9uIFNlcnZpY2VzIERpdmlzaW9u
+MTgwNgYDVQQLEy8oYykgMjAwOCB0aGF3dGUsIEluYy4gLSBGb3IgYXV0aG9yaXpl
+ZCB1c2Ugb25seTEkMCIGA1UEAxMbdGhhd3RlIFByaW1hcnkgUm9vdCBDQSAtIEcz
+MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAsr8nLPvb2FvdeHsbnndm
+gcs+vHyu86YnmjSjaDFxODNi5PNxZnmxqWWjpYvVj2AtP0LMqmsywCPLLEHd5N/8
+YZzic7IilRFDGF/Eth9XbAoFWCLINkw6fKXRz4aviKdEAhN0cXMKQlkC+BsUa0Lf
+b1+6a4KinVvnSr0eAXLbS3ToO39/fR8EtCab4LRarEc9VbjXsCZSKAExQGbY2SS9
+9irY7CFJXJv2eul/VTV+lmuNk5Mny5K76qxAwJ/C+IDPXfRa3M50hqY+bAtTyr2S
+zhkGcuYMXDhpxwTWvGzOW/b3aJzcJRVIiKHpqfiYnODz1TEoYRFsZ5aNOZnLwkUk
+OQIDAQABo0IwQDAPBgNVHRMBAf8EBTADAQH/MA4GA1UdDwEB/wQEAwIBBjAdBgNV
+HQ4EFgQUrWyqlGCc7eT/+j4KdCtjA/e2Wb8wDQYJKoZIhvcNAQELBQADggEBABpA
+2JVlrAmSicY59BDlqQ5mU1143vokkbvnRFHfxhY0Cu9qRFHqKweKA3rD6z8KLFIW
+oCtDuSWQP3CpMyVtRRooOyfPqsMpQhvfO0zAMzRbQYi/aytlryjvsvXDqmbOe1bu
+t8jLZ8HJnBoYuMTDSQPxYA5QzUbF83d597YV4Djbxy8ooAw/dyZ02SUS2jHaGh7c
+KUGRIjxpp7sC8rZcJwOJ9Abqm+RyguOhCcHpABnTPtRwa7pxpqpYrvS76Wy274fM
+m7v/OeZWYdMKp8RcTGB7BXcmer/YB1IsYvdwY9k5vG8cwnncdimvzsUsZAReiDZu
+MdRAGmI0Nj81Aa6sY6A=
+-----END CERTIFICATE-----
+
+# Issuer: CN=GeoTrust Primary Certification Authority - G2 O=GeoTrust Inc. OU=(c) 2007 GeoTrust Inc. - For authorized use only
+# Subject: CN=GeoTrust Primary Certification Authority - G2 O=GeoTrust Inc. OU=(c) 2007 GeoTrust Inc. - For authorized use only
+# Label: "GeoTrust Primary Certification Authority - G2"
+# Serial: 80682863203381065782177908751794619243
+# MD5 Fingerprint: 01:5e:d8:6b:bd:6f:3d:8e:a1:31:f8:12:e0:98:73:6a
+# SHA1 Fingerprint: 8d:17:84:d5:37:f3:03:7d:ec:70:fe:57:8b:51:9a:99:e6:10:d7:b0
+# SHA256 Fingerprint: 5e:db:7a:c4:3b:82:a0:6a:87:61:e8:d7:be:49:79:eb:f2:61:1f:7d:d7:9b:f9:1c:1c:6b:56:6a:21:9e:d7:66
+-----BEGIN CERTIFICATE-----
+MIICrjCCAjWgAwIBAgIQPLL0SAoA4v7rJDteYD7DazAKBggqhkjOPQQDAzCBmDEL
+MAkGA1UEBhMCVVMxFjAUBgNVBAoTDUdlb1RydXN0IEluYy4xOTA3BgNVBAsTMChj
+KSAyMDA3IEdlb1RydXN0IEluYy4gLSBGb3IgYXV0aG9yaXplZCB1c2Ugb25seTE2
+MDQGA1UEAxMtR2VvVHJ1c3QgUHJpbWFyeSBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0
+eSAtIEcyMB4XDTA3MTEwNTAwMDAwMFoXDTM4MDExODIzNTk1OVowgZgxCzAJBgNV
+BAYTAlVTMRYwFAYDVQQKEw1HZW9UcnVzdCBJbmMuMTkwNwYDVQQLEzAoYykgMjAw
+NyBHZW9UcnVzdCBJbmMuIC0gRm9yIGF1dGhvcml6ZWQgdXNlIG9ubHkxNjA0BgNV
+BAMTLUdlb1RydXN0IFByaW1hcnkgQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkgLSBH
+MjB2MBAGByqGSM49AgEGBSuBBAAiA2IABBWx6P0DFUPlrOuHNxFi79KDNlJ9RVcL
+So17VDs6bl8VAsBQps8lL33KSLjHUGMcKiEIfJo22Av+0SbFWDEwKCXzXV2juLal
+tJLtbCyf691DiaI8S0iRHVDsJt/WYC69IaNCMEAwDwYDVR0TAQH/BAUwAwEB/zAO
+BgNVHQ8BAf8EBAMCAQYwHQYDVR0OBBYEFBVfNVdRVfslsq0DafwBo/q+EVXVMAoG
+CCqGSM49BAMDA2cAMGQCMGSWWaboCd6LuvpaiIjwH5HTRqjySkwCY/tsXzjbLkGT
+qQ7mndwxHLKgpxgceeHHNgIwOlavmnRs9vuD4DPTCF+hnMJbn0bWtsuRBmOiBucz
+rD6ogRLQy7rQkgu2npaqBA+K
+-----END CERTIFICATE-----
+
+# Issuer: CN=VeriSign Universal Root Certification Authority O=VeriSign, Inc. OU=VeriSign Trust Network/(c) 2008 VeriSign, Inc. - For authorized use only
+# Subject: CN=VeriSign Universal Root Certification Authority O=VeriSign, Inc. OU=VeriSign Trust Network/(c) 2008 VeriSign, Inc. - For authorized use only
+# Label: "VeriSign Universal Root Certification Authority"
+# Serial: 85209574734084581917763752644031726877
+# MD5 Fingerprint: 8e:ad:b5:01:aa:4d:81:e4:8c:1d:d1:e1:14:00:95:19
+# SHA1 Fingerprint: 36:79:ca:35:66:87:72:30:4d:30:a5:fb:87:3b:0f:a7:7b:b7:0d:54
+# SHA256 Fingerprint: 23:99:56:11:27:a5:71:25:de:8c:ef:ea:61:0d:df:2f:a0:78:b5:c8:06:7f:4e:82:82:90:bf:b8:60:e8:4b:3c
+-----BEGIN CERTIFICATE-----
+MIIEuTCCA6GgAwIBAgIQQBrEZCGzEyEDDrvkEhrFHTANBgkqhkiG9w0BAQsFADCB
+vTELMAkGA1UEBhMCVVMxFzAVBgNVBAoTDlZlcmlTaWduLCBJbmMuMR8wHQYDVQQL
+ExZWZXJpU2lnbiBUcnVzdCBOZXR3b3JrMTowOAYDVQQLEzEoYykgMjAwOCBWZXJp
+U2lnbiwgSW5jLiAtIEZvciBhdXRob3JpemVkIHVzZSBvbmx5MTgwNgYDVQQDEy9W
+ZXJpU2lnbiBVbml2ZXJzYWwgUm9vdCBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eTAe
+Fw0wODA0MDIwMDAwMDBaFw0zNzEyMDEyMzU5NTlaMIG9MQswCQYDVQQGEwJVUzEX
+MBUGA1UEChMOVmVyaVNpZ24sIEluYy4xHzAdBgNVBAsTFlZlcmlTaWduIFRydXN0
+IE5ldHdvcmsxOjA4BgNVBAsTMShjKSAyMDA4IFZlcmlTaWduLCBJbmMuIC0gRm9y
+IGF1dGhvcml6ZWQgdXNlIG9ubHkxODA2BgNVBAMTL1ZlcmlTaWduIFVuaXZlcnNh
+bCBSb290IENlcnRpZmljYXRpb24gQXV0aG9yaXR5MIIBIjANBgkqhkiG9w0BAQEF
+AAOCAQ8AMIIBCgKCAQEAx2E3XrEBNNti1xWb/1hajCMj1mCOkdeQmIN65lgZOIzF
+9uVkhbSicfvtvbnazU0AtMgtc6XHaXGVHzk8skQHnOgO+k1KxCHfKWGPMiJhgsWH
+H26MfF8WIFFE0XBPV+rjHOPMee5Y2A7Cs0WTwCznmhcrewA3ekEzeOEz4vMQGn+H
+LL729fdC4uW/h2KJXwBL38Xd5HVEMkE6HnFuacsLdUYI0crSK5XQz/u5QGtkjFdN
+/BMReYTtXlT2NJ8IAfMQJQYXStrxHXpma5hgZqTZ79IugvHw7wnqRMkVauIDbjPT
+rJ9VAMf2CGqUuV/c4DPxhGD5WycRtPwW8rtWaoAljQIDAQABo4GyMIGvMA8GA1Ud
+EwEB/wQFMAMBAf8wDgYDVR0PAQH/BAQDAgEGMG0GCCsGAQUFBwEMBGEwX6FdoFsw
+WTBXMFUWCWltYWdlL2dpZjAhMB8wBwYFKw4DAhoEFI/l0xqGrI2Oa8PPgGrUSBgs
+exkuMCUWI2h0dHA6Ly9sb2dvLnZlcmlzaWduLmNvbS92c2xvZ28uZ2lmMB0GA1Ud
+DgQWBBS2d/ppSEefUxLVwuoHMnYH0ZcHGTANBgkqhkiG9w0BAQsFAAOCAQEASvj4
+sAPmLGd75JR3Y8xuTPl9Dg3cyLk1uXBPY/ok+myDjEedO2Pzmvl2MpWRsXe8rJq+
+seQxIcaBlVZaDrHC1LGmWazxY8u4TB1ZkErvkBYoH1quEPuBUDgMbMzxPcP1Y+Oz
+4yHJJDnp/RVmRvQbEdBNc6N9Rvk97ahfYtTxP/jgdFcrGJ2BtMQo2pSXpXDrrB2+
+BxHw1dvd5Yzw1TKwg+ZX4o+/vqGqvz0dtdQ46tewXDpPaj+PwGZsY6rp2aQW9IHR
+lRQOfc2VNNnSj3BzgXucfr2YYdhFh5iQxeuGMMY1v/D/w1WIg0vvBZIGcfK4mJO3
+7M2CYfE45k+XmCpajQ==
+-----END CERTIFICATE-----
+
+# Issuer: CN=VeriSign Class 3 Public Primary Certification Authority - G4 O=VeriSign, Inc. OU=VeriSign Trust Network/(c) 2007 VeriSign, Inc. - For authorized use only
+# Subject: CN=VeriSign Class 3 Public Primary Certification Authority - G4 O=VeriSign, Inc. OU=VeriSign Trust Network/(c) 2007 VeriSign, Inc. - For authorized use only
+# Label: "VeriSign Class 3 Public Primary Certification Authority - G4"
+# Serial: 63143484348153506665311985501458640051
+# MD5 Fingerprint: 3a:52:e1:e7:fd:6f:3a:e3:6f:f3:6f:99:1b:f9:22:41
+# SHA1 Fingerprint: 22:d5:d8:df:8f:02:31:d1:8d:f7:9d:b7:cf:8a:2d:64:c9:3f:6c:3a
+# SHA256 Fingerprint: 69:dd:d7:ea:90:bb:57:c9:3e:13:5d:c8:5e:a6:fc:d5:48:0b:60:32:39:bd:c4:54:fc:75:8b:2a:26:cf:7f:79
+-----BEGIN CERTIFICATE-----
+MIIDhDCCAwqgAwIBAgIQL4D+I4wOIg9IZxIokYesszAKBggqhkjOPQQDAzCByjEL
+MAkGA1UEBhMCVVMxFzAVBgNVBAoTDlZlcmlTaWduLCBJbmMuMR8wHQYDVQQLExZW
+ZXJpU2lnbiBUcnVzdCBOZXR3b3JrMTowOAYDVQQLEzEoYykgMjAwNyBWZXJpU2ln
+biwgSW5jLiAtIEZvciBhdXRob3JpemVkIHVzZSBvbmx5MUUwQwYDVQQDEzxWZXJp
+U2lnbiBDbGFzcyAzIFB1YmxpYyBQcmltYXJ5IENlcnRpZmljYXRpb24gQXV0aG9y
+aXR5IC0gRzQwHhcNMDcxMTA1MDAwMDAwWhcNMzgwMTE4MjM1OTU5WjCByjELMAkG
+A1UEBhMCVVMxFzAVBgNVBAoTDlZlcmlTaWduLCBJbmMuMR8wHQYDVQQLExZWZXJp
+U2lnbiBUcnVzdCBOZXR3b3JrMTowOAYDVQQLEzEoYykgMjAwNyBWZXJpU2lnbiwg
+SW5jLiAtIEZvciBhdXRob3JpemVkIHVzZSBvbmx5MUUwQwYDVQQDEzxWZXJpU2ln
+biBDbGFzcyAzIFB1YmxpYyBQcmltYXJ5IENlcnRpZmljYXRpb24gQXV0aG9yaXR5
+IC0gRzQwdjAQBgcqhkjOPQIBBgUrgQQAIgNiAASnVnp8Utpkmw4tXNherJI9/gHm
+GUo9FANL+mAnINmDiWn6VMaaGF5VKmTeBvaNSjutEDxlPZCIBIngMGGzrl0Bp3ve
+fLK+ymVhAIau2o970ImtTR1ZmkGxvEeA3J5iw/mjgbIwga8wDwYDVR0TAQH/BAUw
+AwEB/zAOBgNVHQ8BAf8EBAMCAQYwbQYIKwYBBQUHAQwEYTBfoV2gWzBZMFcwVRYJ
+aW1hZ2UvZ2lmMCEwHzAHBgUrDgMCGgQUj+XTGoasjY5rw8+AatRIGCx7GS4wJRYj
+aHR0cDovL2xvZ28udmVyaXNpZ24uY29tL3ZzbG9nby5naWYwHQYDVR0OBBYEFLMW
+kf3upm7ktS5Jj4d4gYDs5bG1MAoGCCqGSM49BAMDA2gAMGUCMGYhDBgmYFo4e1ZC
+4Kf8NoRRkSAsdk1DPcQdhCPQrNZ8NQbOzWm9kA3bbEhCHQ6qQgIxAJw9SDkjOVga
+FRJZap7v1VmyHVIsmXHNxynfGyphe3HR3vPA5Q06Sqotp9iGKt0uEA==
+-----END CERTIFICATE-----
+
+# Issuer: CN=NetLock Arany (Class Gold) F\u0151tan\xfas\xedtv\xe1ny O=NetLock Kft. OU=Tan\xfas\xedtv\xe1nykiad\xf3k (Certification Services)
+# Subject: CN=NetLock Arany (Class Gold) F\u0151tan\xfas\xedtv\xe1ny O=NetLock Kft. OU=Tan\xfas\xedtv\xe1nykiad\xf3k (Certification Services)
+# Label: "NetLock Arany (Class Gold) F\u0151tan\xfas\xedtv\xe1ny"
+# Serial: 80544274841616
+# MD5 Fingerprint: c5:a1:b7:ff:73:dd:d6:d7:34:32:18:df:fc:3c:ad:88
+# SHA1 Fingerprint: 06:08:3f:59:3f:15:a1:04:a0:69:a4:6b:a9:03:d0:06:b7:97:09:91
+# SHA256 Fingerprint: 6c:61:da:c3:a2:de:f0:31:50:6b:e0:36:d2:a6:fe:40:19:94:fb:d1:3d:f9:c8:d4:66:59:92:74:c4:46:ec:98
+-----BEGIN CERTIFICATE-----
+MIIEFTCCAv2gAwIBAgIGSUEs5AAQMA0GCSqGSIb3DQEBCwUAMIGnMQswCQYDVQQG
+EwJIVTERMA8GA1UEBwwIQnVkYXBlc3QxFTATBgNVBAoMDE5ldExvY2sgS2Z0LjE3
+MDUGA1UECwwuVGFuw7pzw610dsOhbnlraWFkw7NrIChDZXJ0aWZpY2F0aW9uIFNl
+cnZpY2VzKTE1MDMGA1UEAwwsTmV0TG9jayBBcmFueSAoQ2xhc3MgR29sZCkgRsWR
+dGFuw7pzw610dsOhbnkwHhcNMDgxMjExMTUwODIxWhcNMjgxMjA2MTUwODIxWjCB
+pzELMAkGA1UEBhMCSFUxETAPBgNVBAcMCEJ1ZGFwZXN0MRUwEwYDVQQKDAxOZXRM
+b2NrIEtmdC4xNzA1BgNVBAsMLlRhbsO6c8OtdHbDoW55a2lhZMOzayAoQ2VydGlm
+aWNhdGlvbiBTZXJ2aWNlcykxNTAzBgNVBAMMLE5ldExvY2sgQXJhbnkgKENsYXNz
+IEdvbGQpIEbFkXRhbsO6c8OtdHbDoW55MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8A
+MIIBCgKCAQEAxCRec75LbRTDofTjl5Bu0jBFHjzuZ9lk4BqKf8owyoPjIMHj9DrT
+lF8afFttvzBPhCf2nx9JvMaZCpDyD/V/Q4Q3Y1GLeqVw/HpYzY6b7cNGbIRwXdrz
+AZAj/E4wqX7hJ2Pn7WQ8oLjJM2P+FpD/sLj916jAwJRDC7bVWaaeVtAkH3B5r9s5
+VA1lddkVQZQBr17s9o3x/61k/iCa11zr/qYfCGSji3ZVrR47KGAuhyXoqq8fxmRG
+ILdwfzzeSNuWU7c5d+Qa4scWhHaXWy+7GRWF+GmF9ZmnqfI0p6m2pgP8b4Y9VHx2
+BJtr+UBdADTHLpl1neWIA6pN+APSQnbAGwIDAKiLo0UwQzASBgNVHRMBAf8ECDAG
+AQH/AgEEMA4GA1UdDwEB/wQEAwIBBjAdBgNVHQ4EFgQUzPpnk/C2uNClwB7zU/2M
+U9+D15YwDQYJKoZIhvcNAQELBQADggEBAKt/7hwWqZw8UQCgwBEIBaeZ5m8BiFRh
+bvG5GK1Krf6BQCOUL/t1fC8oS2IkgYIL9WHxHG64YTjrgfpioTtaYtOUZcTh5m2C
++C8lcLIhJsFyUR+MLMOEkMNaj7rP9KdlpeuY0fsFskZ1FSNqb4VjMIDw1Z4fKRzC
+bLBQWV2QWzuoDTDPv31/zvGdg73JRm4gpvlhUbohL3u+pRVjodSVh/GeufOJ8z2F
+uLjbvrW5KfnaNwUASZQDhETnv0Mxz3WLJdH0pmT1kvarBes96aULNmLazAZfNou2
+XjG4Kvte9nHfRCaexOYNkbQudZWAUWpLMKawYqGT8ZvYzsRjdT9ZR7E=
+-----END CERTIFICATE-----
+
+# Issuer: CN=Staat der Nederlanden Root CA - G2 O=Staat der Nederlanden
+# Subject: CN=Staat der Nederlanden Root CA - G2 O=Staat der Nederlanden
+# Label: "Staat der Nederlanden Root CA - G2"
+# Serial: 10000012
+# MD5 Fingerprint: 7c:a5:0f:f8:5b:9a:7d:6d:30:ae:54:5a:e3:42:a2:8a
+# SHA1 Fingerprint: 59:af:82:79:91:86:c7:b4:75:07:cb:cf:03:57:46:eb:04:dd:b7:16
+# SHA256 Fingerprint: 66:8c:83:94:7d:a6:3b:72:4b:ec:e1:74:3c:31:a0:e6:ae:d0:db:8e:c5:b3:1b:e3:77:bb:78:4f:91:b6:71:6f
+-----BEGIN CERTIFICATE-----
+MIIFyjCCA7KgAwIBAgIEAJiWjDANBgkqhkiG9w0BAQsFADBaMQswCQYDVQQGEwJO
+TDEeMBwGA1UECgwVU3RhYXQgZGVyIE5lZGVybGFuZGVuMSswKQYDVQQDDCJTdGFh
+dCBkZXIgTmVkZXJsYW5kZW4gUm9vdCBDQSAtIEcyMB4XDTA4MDMyNjExMTgxN1oX
+DTIwMDMyNTExMDMxMFowWjELMAkGA1UEBhMCTkwxHjAcBgNVBAoMFVN0YWF0IGRl
+ciBOZWRlcmxhbmRlbjErMCkGA1UEAwwiU3RhYXQgZGVyIE5lZGVybGFuZGVuIFJv
+b3QgQ0EgLSBHMjCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBAMVZ5291
+qj5LnLW4rJ4L5PnZyqtdj7U5EILXr1HgO+EASGrP2uEGQxGZqhQlEq0i6ABtQ8Sp
+uOUfiUtnvWFI7/3S4GCI5bkYYCjDdyutsDeqN95kWSpGV+RLufg3fNU254DBtvPU
+Z5uW6M7XxgpT0GtJlvOjCwV3SPcl5XCsMBQgJeN/dVrlSPhOewMHBPqCYYdu8DvE
+pMfQ9XQ+pV0aCPKbJdL2rAQmPlU6Yiile7Iwr/g3wtG61jj99O9JMDeZJiFIhQGp
+5Rbn3JBV3w/oOM2ZNyFPXfUib2rFEhZgF1XyZWampzCROME4HYYEhLoaJXhena/M
+UGDWE4dS7WMfbWV9whUYdMrhfmQpjHLYFhN9C0lK8SgbIHRrxT3dsKpICT0ugpTN
+GmXZK4iambwYfp/ufWZ8Pr2UuIHOzZgweMFvZ9C+X+Bo7d7iscksWXiSqt8rYGPy
+5V6548r6f1CGPqI0GAwJaCgRHOThuVw+R7oyPxjMW4T182t0xHJ04eOLoEq9jWYv
+6q012iDTiIJh8BIitrzQ1aTsr1SIJSQ8p22xcik/Plemf1WvbibG/ufMQFxRRIEK
+eN5KzlW/HdXZt1bv8Hb/C3m1r737qWmRRpdogBQ2HbN/uymYNqUg+oJgYjOk7Na6
+B6duxc8UpufWkjTYgfX8HV2qXB72o007uPc5AgMBAAGjgZcwgZQwDwYDVR0TAQH/
+BAUwAwEB/zBSBgNVHSAESzBJMEcGBFUdIAAwPzA9BggrBgEFBQcCARYxaHR0cDov +L3d3dy5wa2lvdmVyaGVpZC5ubC9wb2xpY2llcy9yb290LXBvbGljeS1HMjAOBgNV +HQ8BAf8EBAMCAQYwHQYDVR0OBBYEFJFoMocVHYnitfGsNig0jQt8YojrMA0GCSqG +SIb3DQEBCwUAA4ICAQCoQUpnKpKBglBu4dfYszk78wIVCVBR7y29JHuIhjv5tLyS +CZa59sCrI2AGeYwRTlHSeYAz+51IvuxBQ4EffkdAHOV6CMqqi3WtFMTC6GY8ggen +5ieCWxjmD27ZUD6KQhgpxrRW/FYQoAUXvQwjf/ST7ZwaUb7dRUG/kSS0H4zpX897 +IZmflZ85OkYcbPnNe5yQzSipx6lVu6xiNGI1E0sUOlWDuYaNkqbG9AclVMwWVxJK +gnjIFNkXgiYtXSAfea7+1HAWFpWD2DU5/1JddRwWxRNVz0fMdWVSSt7wsKfkCpYL ++63C4iWEst3kvX5ZbJvw8NjnyvLplzh+ib7M+zkXYT9y2zqR2GUBGR2tUKRXCnxL +vJxxcypFURmFzI79R6d0lR2o0a9OF7FpJsKqeFdbxU2n5Z4FF5TKsl+gSRiNNOkm +bEgeqmiSBeGCc1qb3AdbCG19ndeNIdn8FCCqwkXfP+cAslHkwvgFuXkajDTznlvk +N1trSt8sV4pAWja63XVECDdCcAz+3F4hoKOKwJCcaNpQ5kUQR3i2TtJlycM33+FC +Y7BXN0Ute4qcvwXqZVUz9zkQxSgqIXobisQk+T8VyJoVIPVVYpbtbZNQvOSqeK3Z +ywplh6ZmwcSBo3c6WB4L7oOLnR7SUqTMHW+wmG2UMbX4cQrcufx9MmDm66+KAQ== +-----END CERTIFICATE----- + +# Issuer: CN=Hongkong Post Root CA 1 O=Hongkong Post +# Subject: CN=Hongkong Post Root CA 1 O=Hongkong Post +# Label: "Hongkong Post Root CA 1" +# Serial: 1000 +# MD5 Fingerprint: a8:0d:6f:39:78:b9:43:6d:77:42:6d:98:5a:cc:23:ca +# SHA1 Fingerprint: d6:da:a8:20:8d:09:d2:15:4d:24:b5:2f:cb:34:6e:b2:58:b2:8a:58 +# SHA256 Fingerprint: f9:e6:7d:33:6c:51:00:2a:c0:54:c6:32:02:2d:66:dd:a2:e7:e3:ff:f1:0a:d0:61:ed:31:d8:bb:b4:10:cf:b2 +-----BEGIN CERTIFICATE----- +MIIDMDCCAhigAwIBAgICA+gwDQYJKoZIhvcNAQEFBQAwRzELMAkGA1UEBhMCSEsx +FjAUBgNVBAoTDUhvbmdrb25nIFBvc3QxIDAeBgNVBAMTF0hvbmdrb25nIFBvc3Qg +Um9vdCBDQSAxMB4XDTAzMDUxNTA1MTMxNFoXDTIzMDUxNTA0NTIyOVowRzELMAkG +A1UEBhMCSEsxFjAUBgNVBAoTDUhvbmdrb25nIFBvc3QxIDAeBgNVBAMTF0hvbmdr +b25nIFBvc3QgUm9vdCBDQSAxMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKC +AQEArP84tulmAknjorThkPlAj3n54r15/gK97iSSHSL22oVyaf7XPwnU3ZG1ApzQ +jVrhVcNQhrkpJsLj2aDxaQMoIIBFIi1WpztUlVYiWR8o3x8gPW2iNr4joLFutbEn +PzlTCeqrauh0ssJlXI6/fMN4hM2eFvz1Lk8gKgifd/PFHsSaUmYeSF7jEAaPIpjh +ZY4bXSNmO7ilMlHIhqqhqZ5/dpTCpmy3QfDVyAY45tQM4vM7TG1QjMSDJ8EThFk9 +nnV0ttgCXjqQesBCNnLsak3c78QA3xMYV18meMjWCnl3v/evt3a5pQuEF10Q6m/h +q5URX208o1xNg1vysxmKgIsLhwIDAQABoyYwJDASBgNVHRMBAf8ECDAGAQH/AgED +MA4GA1UdDwEB/wQEAwIBxjANBgkqhkiG9w0BAQUFAAOCAQEADkbVPK7ih9legYsC +mEEIjEy82tvuJxuC52pF7BaLT4Wg87JwvVqWuspube5Gi27nKi6Wsxkz67SfqLI3 +7piol7Yutmcn1KZJ/RyTZXaeQi/cImyaT/JaFTmxcdcrUehtHJjA2Sr0oYJ71clB +oiMBdDhViw+5LmeiIAQ32pwL0xch4I+XeTRvhEgCIDMb5jREn5Fw9IBehEPCKdJs +EhTkYY2sEJCehFC78JZvRZ+K88psT/oROhUVRsPNH4NbLUES7VBnQRM9IauUiqpO +fMGx+6fWtScvl6tu4B3i0RwsH0Ti/L6RoZz71ilTc4afU9hDDl3WY4JxHYB0yvbi +AmvZWg== +-----END CERTIFICATE----- + +# Issuer: CN=SecureSign RootCA11 O=Japan Certification Services, Inc. +# Subject: CN=SecureSign RootCA11 O=Japan Certification Services, Inc. 
+# Label: "SecureSign RootCA11" +# Serial: 1 +# MD5 Fingerprint: b7:52:74:e2:92:b4:80:93:f2:75:e4:cc:d7:f2:ea:26 +# SHA1 Fingerprint: 3b:c4:9f:48:f8:f3:73:a0:9c:1e:bd:f8:5b:b1:c3:65:c7:d8:11:b3 +# SHA256 Fingerprint: bf:0f:ee:fb:9e:3a:58:1a:d5:f9:e9:db:75:89:98:57:43:d2:61:08:5c:4d:31:4f:6f:5d:72:59:aa:42:16:12 +-----BEGIN CERTIFICATE----- +MIIDbTCCAlWgAwIBAgIBATANBgkqhkiG9w0BAQUFADBYMQswCQYDVQQGEwJKUDEr +MCkGA1UEChMiSmFwYW4gQ2VydGlmaWNhdGlvbiBTZXJ2aWNlcywgSW5jLjEcMBoG +A1UEAxMTU2VjdXJlU2lnbiBSb290Q0ExMTAeFw0wOTA0MDgwNDU2NDdaFw0yOTA0 +MDgwNDU2NDdaMFgxCzAJBgNVBAYTAkpQMSswKQYDVQQKEyJKYXBhbiBDZXJ0aWZp +Y2F0aW9uIFNlcnZpY2VzLCBJbmMuMRwwGgYDVQQDExNTZWN1cmVTaWduIFJvb3RD +QTExMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/XeqpRyQBTvLTJsz +i1oURaTnkBbR31fSIRCkF/3frNYfp+TbfPfs37gD2pRY/V1yfIw/XwFndBWW4wI8 +h9uuywGOwvNmxoVF9ALGOrVisq/6nL+k5tSAMJjzDbaTj6nU2DbysPyKyiyhFTOV +MdrAG/LuYpmGYz+/3ZMqg6h2uRMft85OQoWPIucuGvKVCbIFtUROd6EgvanyTgp9 +UK31BQ1FT0Zx/Sg+U/sE2C3XZR1KG/rPO7AxmjVuyIsG0wCR8pQIZUyxNAYAeoni +8McDWc/V1uinMrPmmECGxc0nEovMe863ETxiYAcjPitAbpSACW22s293bzUIUPsC +h8U+iQIDAQABo0IwQDAdBgNVHQ4EFgQUW/hNT7KlhtQ60vFjmqC+CfZXt94wDgYD +VR0PAQH/BAQDAgEGMA8GA1UdEwEB/wQFMAMBAf8wDQYJKoZIhvcNAQEFBQADggEB +AKChOBZmLqdWHyGcBvod7bkixTgm2E5P7KN/ed5GIaGHd48HCJqypMWvDzKYC3xm +KbabfSVSSUOrTC4rbnpwrxYO4wJs+0LmGJ1F2FXI6Dvd5+H0LgscNFxsWEr7jIhQ +X5Ucv+2rIrVls4W6ng+4reV6G4pQOh29Dbx7VFALuUKvVaAYga1lme++5Jy/xIWr +QbJUb9wlze144o4MjQlJ3WN7WmmWAiGovVJZ6X01y8hSyn+B/tlr0/cR7SXf+Of5 +pPpyl4RTDaXQMhhRdlkUbA/r7F+AjHVDg8OFmP9Mni0N5HeDk061lgeLKBObjBmN +QSdJQO7e5iNEOdyhIta6A/I= +-----END CERTIFICATE----- + +# Issuer: CN=Microsec e-Szigno Root CA 2009 O=Microsec Ltd. +# Subject: CN=Microsec e-Szigno Root CA 2009 O=Microsec Ltd. +# Label: "Microsec e-Szigno Root CA 2009" +# Serial: 14014712776195784473 +# MD5 Fingerprint: f8:49:f4:03:bc:44:2d:83:be:48:69:7d:29:64:fc:b1 +# SHA1 Fingerprint: 89:df:74:fe:5c:f4:0f:4a:80:f9:e3:37:7d:54:da:91:e1:01:31:8e +# SHA256 Fingerprint: 3c:5f:81:fe:a5:fa:b8:2c:64:bf:a2:ea:ec:af:cd:e8:e0:77:fc:86:20:a7:ca:e5:37:16:3d:f3:6e:db:f3:78 +-----BEGIN CERTIFICATE----- +MIIECjCCAvKgAwIBAgIJAMJ+QwRORz8ZMA0GCSqGSIb3DQEBCwUAMIGCMQswCQYD +VQQGEwJIVTERMA8GA1UEBwwIQnVkYXBlc3QxFjAUBgNVBAoMDU1pY3Jvc2VjIEx0 +ZC4xJzAlBgNVBAMMHk1pY3Jvc2VjIGUtU3ppZ25vIFJvb3QgQ0EgMjAwOTEfMB0G +CSqGSIb3DQEJARYQaW5mb0BlLXN6aWduby5odTAeFw0wOTA2MTYxMTMwMThaFw0y +OTEyMzAxMTMwMThaMIGCMQswCQYDVQQGEwJIVTERMA8GA1UEBwwIQnVkYXBlc3Qx +FjAUBgNVBAoMDU1pY3Jvc2VjIEx0ZC4xJzAlBgNVBAMMHk1pY3Jvc2VjIGUtU3pp +Z25vIFJvb3QgQ0EgMjAwOTEfMB0GCSqGSIb3DQEJARYQaW5mb0BlLXN6aWduby5o +dTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAOn4j/NjrdqG2KfgQvvP +kd6mJviZpWNwrZuuyjNAfW2WbqEORO7hE52UQlKavXWFdCyoDh2Tthi3jCyoz/tc +cbna7P7ofo/kLx2yqHWH2Leh5TvPmUpG0IMZfcChEhyVbUr02MelTTMuhTlAdX4U +fIASmFDHQWe4oIBhVKZsTh/gnQ4H6cm6M+f+wFUoLAKApxn1ntxVUwOXewdI/5n7 +N4okxFnMUBBjjqqpGrCEGob5X7uxUG6k0QrM1XF+H6cbfPVTbiJfyyvm1HxdrtbC +xkzlBQHZ7Vf8wSN5/PrIJIOV87VqUQHQd9bpEqH5GoP7ghu5sJf0dgYzQ0mg/wu1 ++rUCAwEAAaOBgDB+MA8GA1UdEwEB/wQFMAMBAf8wDgYDVR0PAQH/BAQDAgEGMB0G +A1UdDgQWBBTLD8bfQkPMPcu1SCOhGnqmKrs0aDAfBgNVHSMEGDAWgBTLD8bfQkPM +Pcu1SCOhGnqmKrs0aDAbBgNVHREEFDASgRBpbmZvQGUtc3ppZ25vLmh1MA0GCSqG +SIb3DQEBCwUAA4IBAQDJ0Q5eLtXMs3w+y/w9/w0olZMEyL/azXm4Q5DwpL7v8u8h +mLzU1F0G9u5C7DBsoKqpyvGvivo/C3NqPuouQH4frlRheesuCDfXI/OMn74dseGk +ddug4lQUsbocKaQY9hK6ohQU4zE1yED/t+AFdlfBHFny+L/k7SViXITwfn4fs775 +tyERzAMBVnCnEJIeGzSBHq2cGsMEPO0CYdYeBvNfOofyK/FFh+U9rNHHV4S9a67c +2Pm2G2JwCz02yULyMtd6YebS2z3PyKnJm9zbWETXbzivf3jTo60adbocwTZ8jx5t +HMN1Rq41Bab2XD0h7lbwyYIiLXpUq3DDfSJlgnCW +-----END CERTIFICATE----- + +# Issuer: 
CN=GlobalSign O=GlobalSign OU=GlobalSign Root CA - R3 +# Subject: CN=GlobalSign O=GlobalSign OU=GlobalSign Root CA - R3 +# Label: "GlobalSign Root CA - R3" +# Serial: 4835703278459759426209954 +# MD5 Fingerprint: c5:df:b8:49:ca:05:13:55:ee:2d:ba:1a:c3:3e:b0:28 +# SHA1 Fingerprint: d6:9b:56:11:48:f0:1c:77:c5:45:78:c1:09:26:df:5b:85:69:76:ad +# SHA256 Fingerprint: cb:b5:22:d7:b7:f1:27:ad:6a:01:13:86:5b:df:1c:d4:10:2e:7d:07:59:af:63:5a:7c:f4:72:0d:c9:63:c5:3b +-----BEGIN CERTIFICATE----- +MIIDXzCCAkegAwIBAgILBAAAAAABIVhTCKIwDQYJKoZIhvcNAQELBQAwTDEgMB4G +A1UECxMXR2xvYmFsU2lnbiBSb290IENBIC0gUjMxEzARBgNVBAoTCkdsb2JhbFNp +Z24xEzARBgNVBAMTCkdsb2JhbFNpZ24wHhcNMDkwMzE4MTAwMDAwWhcNMjkwMzE4 +MTAwMDAwWjBMMSAwHgYDVQQLExdHbG9iYWxTaWduIFJvb3QgQ0EgLSBSMzETMBEG +A1UEChMKR2xvYmFsU2lnbjETMBEGA1UEAxMKR2xvYmFsU2lnbjCCASIwDQYJKoZI +hvcNAQEBBQADggEPADCCAQoCggEBAMwldpB5BngiFvXAg7aEyiie/QV2EcWtiHL8 +RgJDx7KKnQRfJMsuS+FggkbhUqsMgUdwbN1k0ev1LKMPgj0MK66X17YUhhB5uzsT +gHeMCOFJ0mpiLx9e+pZo34knlTifBtc+ycsmWQ1z3rDI6SYOgxXG71uL0gRgykmm +KPZpO/bLyCiR5Z2KYVc3rHQU3HTgOu5yLy6c+9C7v/U9AOEGM+iCK65TpjoWc4zd +QQ4gOsC0p6Hpsk+QLjJg6VfLuQSSaGjlOCZgdbKfd/+RFO+uIEn8rUAVSNECMWEZ +XriX7613t2Saer9fwRPvm2L7DWzgVGkWqQPabumDk3F2xmmFghcCAwEAAaNCMEAw +DgYDVR0PAQH/BAQDAgEGMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFI/wS3+o +LkUkrk1Q+mOai97i3Ru8MA0GCSqGSIb3DQEBCwUAA4IBAQBLQNvAUKr+yAzv95ZU +RUm7lgAJQayzE4aGKAczymvmdLm6AC2upArT9fHxD4q/c2dKg8dEe3jgr25sbwMp +jjM5RcOO5LlXbKr8EpbsU8Yt5CRsuZRj+9xTaGdWPoO4zzUhw8lo/s7awlOqzJCK +6fBdRoyV3XpYKBovHd7NADdBj+1EbddTKJd+82cEHhXXipa0095MJ6RMG3NzdvQX +mcIfeg7jLQitChws/zyrVQ4PkX4268NXSb7hLi18YIvDQVETI53O9zJrlAGomecs +Mx86OyXShkDOOyyGeMlhLxS67ttVb9+E7gUJTb0o2HLO02JQZR7rkpeDMdmztcpH +WD9f +-----END CERTIFICATE----- + +# Issuer: CN=Autoridad de Certificacion Firmaprofesional CIF A62634068 +# Subject: CN=Autoridad de Certificacion Firmaprofesional CIF A62634068 +# Label: "Autoridad de Certificacion Firmaprofesional CIF A62634068" +# Serial: 6047274297262753887 +# MD5 Fingerprint: 73:3a:74:7a:ec:bb:a3:96:a6:c2:e4:e2:c8:9b:c0:c3 +# SHA1 Fingerprint: ae:c5:fb:3f:c8:e1:bf:c4:e5:4f:03:07:5a:9a:e8:00:b7:f7:b6:fa +# SHA256 Fingerprint: 04:04:80:28:bf:1f:28:64:d4:8f:9a:d4:d8:32:94:36:6a:82:88:56:55:3f:3b:14:30:3f:90:14:7f:5d:40:ef +-----BEGIN CERTIFICATE----- +MIIGFDCCA/ygAwIBAgIIU+w77vuySF8wDQYJKoZIhvcNAQEFBQAwUTELMAkGA1UE +BhMCRVMxQjBABgNVBAMMOUF1dG9yaWRhZCBkZSBDZXJ0aWZpY2FjaW9uIEZpcm1h +cHJvZmVzaW9uYWwgQ0lGIEE2MjYzNDA2ODAeFw0wOTA1MjAwODM4MTVaFw0zMDEy +MzEwODM4MTVaMFExCzAJBgNVBAYTAkVTMUIwQAYDVQQDDDlBdXRvcmlkYWQgZGUg +Q2VydGlmaWNhY2lvbiBGaXJtYXByb2Zlc2lvbmFsIENJRiBBNjI2MzQwNjgwggIi +MA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQDKlmuO6vj78aI14H9M2uDDUtd9 +thDIAl6zQyrET2qyyhxdKJp4ERppWVevtSBC5IsP5t9bpgOSL/UR5GLXMnE42QQM +cas9UX4PB99jBVzpv5RvwSmCwLTaUbDBPLutN0pcyvFLNg4kq7/DhHf9qFD0sefG +L9ItWY16Ck6WaVICqjaY7Pz6FIMMNx/Jkjd/14Et5cS54D40/mf0PmbR0/RAz15i +NA9wBj4gGFrO93IbJWyTdBSTo3OxDqqHECNZXyAFGUftaI6SEspd/NYrspI8IM/h +X68gvqB2f3bl7BqGYTM+53u0P6APjqK5am+5hyZvQWyIplD9amML9ZMWGxmPsu2b +m8mQ9QEM3xk9Dz44I8kvjwzRAv4bVdZO0I08r0+k8/6vKtMFnXkIoctXMbScyJCy +Z/QYFpM6/EfY0XiWMR+6KwxfXZmtY4laJCB22N/9q06mIqqdXuYnin1oKaPnirja +EbsXLZmdEyRG98Xi2J+Of8ePdG1asuhy9azuJBCtLxTa/y2aRnFHvkLfuwHb9H/T +KI8xWVvTyQKmtFLKbpf7Q8UIJm+K9Lv9nyiqDdVF8xM6HdjAeI9BZzwelGSuewvF +6NkBiDkal4ZkQdU7hwxu+g/GvUgUvzlN1J5Bto+WHWOWk9mVBngxaJ43BjuAiUVh +OSPHG0SjFeUc+JIwuwIDAQABo4HvMIHsMBIGA1UdEwEB/wQIMAYBAf8CAQEwDgYD +VR0PAQH/BAQDAgEGMB0GA1UdDgQWBBRlzeurNR4APn7VdMActHNHDhpkLzCBpgYD +VR0gBIGeMIGbMIGYBgRVHSAAMIGPMC8GCCsGAQUFBwIBFiNodHRwOi8vd3d3LmZp 
+cm1hcHJvZmVzaW9uYWwuY29tL2NwczBcBggrBgEFBQcCAjBQHk4AUABhAHMAZQBv +ACAAZABlACAAbABhACAAQgBvAG4AYQBuAG8AdgBhACAANAA3ACAAQgBhAHIAYwBl +AGwAbwBuAGEAIAAwADgAMAAxADcwDQYJKoZIhvcNAQEFBQADggIBABd9oPm03cXF +661LJLWhAqvdpYhKsg9VSytXjDvlMd3+xDLx51tkljYyGOylMnfX40S2wBEqgLk9 +am58m9Ot/MPWo+ZkKXzR4Tgegiv/J2Wv+xYVxC5xhOW1//qkR71kMrv2JYSiJ0L1 +ILDCExARzRAVukKQKtJE4ZYm6zFIEv0q2skGz3QeqUvVhyj5eTSSPi5E6PaPT481 +PyWzOdxjKpBrIF/EUhJOlywqrJ2X3kjyo2bbwtKDlaZmp54lD+kLM5FlClrD2VQS +3a/DTg4fJl4N3LON7NWBcN7STyQF82xO9UxJZo3R/9ILJUFI/lGExkKvgATP0H5k +SeTy36LssUzAKh3ntLFlosS88Zj0qnAHY7S42jtM+kAiMFsRpvAFDsYCA0irhpuF +3dvd6qJ2gHN99ZwExEWN57kci57q13XRcrHedUTnQn3iV2t93Jm8PYMo6oCTjcVM +ZcFwgbg4/EMxsvYDNEeyrPsiBsse3RdHHF9mudMaotoRsaS8I8nkvof/uZS2+F0g +StRf571oe2XyFR7SOqkt6dhrJKyXWERHrVkY8SFlcN7ONGCoQPHzPKTDKCOM/icz +Q0CgFzzr6juwcqajuUpLXhZI9LK8yIySxZ2frHI2vDSANGupi5LAuBft7HZT9SQB +jLMi6Et8Vcad+qMUu2WFbm5PEn4KPJ2V +-----END CERTIFICATE----- + +# Issuer: CN=Izenpe.com O=IZENPE S.A. +# Subject: CN=Izenpe.com O=IZENPE S.A. +# Label: "Izenpe.com" +# Serial: 917563065490389241595536686991402621 +# MD5 Fingerprint: a6:b0:cd:85:80:da:5c:50:34:a3:39:90:2f:55:67:73 +# SHA1 Fingerprint: 2f:78:3d:25:52:18:a7:4a:65:39:71:b5:2c:a2:9c:45:15:6f:e9:19 +# SHA256 Fingerprint: 25:30:cc:8e:98:32:15:02:ba:d9:6f:9b:1f:ba:1b:09:9e:2d:29:9e:0f:45:48:bb:91:4f:36:3b:c0:d4:53:1f +-----BEGIN CERTIFICATE----- +MIIF8TCCA9mgAwIBAgIQALC3WhZIX7/hy/WL1xnmfTANBgkqhkiG9w0BAQsFADA4 +MQswCQYDVQQGEwJFUzEUMBIGA1UECgwLSVpFTlBFIFMuQS4xEzARBgNVBAMMCkl6 +ZW5wZS5jb20wHhcNMDcxMjEzMTMwODI4WhcNMzcxMjEzMDgyNzI1WjA4MQswCQYD +VQQGEwJFUzEUMBIGA1UECgwLSVpFTlBFIFMuQS4xEzARBgNVBAMMCkl6ZW5wZS5j +b20wggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQDJ03rKDx6sp4boFmVq +scIbRTJxldn+EFvMr+eleQGPicPK8lVx93e+d5TzcqQsRNiekpsUOqHnJJAKClaO +xdgmlOHZSOEtPtoKct2jmRXagaKH9HtuJneJWK3W6wyyQXpzbm3benhB6QiIEn6H +LmYRY2xU+zydcsC8Lv/Ct90NduM61/e0aL6i9eOBbsFGb12N4E3GVFWJGjMxCrFX +uaOKmMPsOzTFlUFpfnXCPCDFYbpRR6AgkJOhkEvzTnyFRVSa0QUmQbC1TR0zvsQD +yCV8wXDbO/QJLVQnSKwv4cSsPsjLkkxTOTcj7NMB+eAJRE1NZMDhDVqHIrytG6P+ +JrUV86f8hBnp7KGItERphIPzidF0BqnMC9bC3ieFUCbKF7jJeodWLBoBHmy+E60Q +rLUk9TiRodZL2vG70t5HtfG8gfZZa88ZU+mNFctKy6lvROUbQc/hhqfK0GqfvEyN +BjNaooXlkDWgYlwWTvDjovoDGrQscbNYLN57C9saD+veIR8GdwYDsMnvmfzAuU8L +hij+0rnq49qlw0dpEuDb8PYZi+17cNcC1u2HGCgsBCRMd+RIihrGO5rUD8r6ddIB +QFqNeb+Lz0vPqhbBleStTIo+F5HUsWLlguWABKQDfo2/2n+iD5dPDNMN+9fR5XJ+ +HMh3/1uaD7euBUbl8agW7EekFwIDAQABo4H2MIHzMIGwBgNVHREEgagwgaWBD2lu +Zm9AaXplbnBlLmNvbaSBkTCBjjFHMEUGA1UECgw+SVpFTlBFIFMuQS4gLSBDSUYg +QTAxMzM3MjYwLVJNZXJjLlZpdG9yaWEtR2FzdGVpeiBUMTA1NSBGNjIgUzgxQzBB +BgNVBAkMOkF2ZGEgZGVsIE1lZGl0ZXJyYW5lbyBFdG9yYmlkZWEgMTQgLSAwMTAx +MCBWaXRvcmlhLUdhc3RlaXowDwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMC +AQYwHQYDVR0OBBYEFB0cZQ6o8iV7tJHP5LGx5r1VdGwFMA0GCSqGSIb3DQEBCwUA +A4ICAQB4pgwWSp9MiDrAyw6lFn2fuUhfGI8NYjb2zRlrrKvV9pF9rnHzP7MOeIWb +laQnIUdCSnxIOvVFfLMMjlF4rJUT3sb9fbgakEyrkgPH7UIBzg/YsfqikuFgba56 +awmqxinuaElnMIAkejEWOVt+8Rwu3WwJrfIxwYJOubv5vr8qhT/AQKM6WfxZSzwo +JNu0FXWuDYi6LnPAvViH5ULy617uHjAimcs30cQhbIHsvm0m5hzkQiCeR7Csg1lw +LDXWrzY0tM07+DKo7+N4ifuNRSzanLh+QBxh5z6ikixL8s36mLYp//Pye6kfLqCT +VyvehQP5aTfLnnhqBbTFMXiJ7HqnheG5ezzevh55hM6fcA5ZwjUukCox2eRFekGk +LhObNA5me0mrZJfQRsN5nXJQY6aYWwa9SG3YOYNw6DXwBdGqvOPbyALqfP2C2sJb +UjWumDqtujWTI6cfSN01RpiyEGjkpTHCClguGYEQyVB1/OpaFs4R1+7vUIgtYf8/ +QnMFlEPVjjxOAToZpR9GTnfQXeWBIiGH/pR9hNiTrdZoQ0iy2+tzJOeRf1SktoA+ +naM8THLCV8Sg1Mw4J87VBp6iSNnpn86CcDaTmjvfliHjWbcM2pE38P1ZWrOZyGls +QyYBNWNgVYkDOnXYukrZVP/u3oDYLdE41V4tC5h9Pmzb/CaIxw== +-----END CERTIFICATE----- + +# Issuer: CN=Chambers of Commerce Root - 2008 O=AC 
Camerfirma S.A. +# Subject: CN=Chambers of Commerce Root - 2008 O=AC Camerfirma S.A. +# Label: "Chambers of Commerce Root - 2008" +# Serial: 11806822484801597146 +# MD5 Fingerprint: 5e:80:9e:84:5a:0e:65:0b:17:02:f3:55:18:2a:3e:d7 +# SHA1 Fingerprint: 78:6a:74:ac:76:ab:14:7f:9c:6a:30:50:ba:9e:a8:7e:fe:9a:ce:3c +# SHA256 Fingerprint: 06:3e:4a:fa:c4:91:df:d3:32:f3:08:9b:85:42:e9:46:17:d8:93:d7:fe:94:4e:10:a7:93:7e:e2:9d:96:93:c0 +-----BEGIN CERTIFICATE----- +MIIHTzCCBTegAwIBAgIJAKPaQn6ksa7aMA0GCSqGSIb3DQEBBQUAMIGuMQswCQYD +VQQGEwJFVTFDMEEGA1UEBxM6TWFkcmlkIChzZWUgY3VycmVudCBhZGRyZXNzIGF0 +IHd3dy5jYW1lcmZpcm1hLmNvbS9hZGRyZXNzKTESMBAGA1UEBRMJQTgyNzQzMjg3 +MRswGQYDVQQKExJBQyBDYW1lcmZpcm1hIFMuQS4xKTAnBgNVBAMTIENoYW1iZXJz +IG9mIENvbW1lcmNlIFJvb3QgLSAyMDA4MB4XDTA4MDgwMTEyMjk1MFoXDTM4MDcz +MTEyMjk1MFowga4xCzAJBgNVBAYTAkVVMUMwQQYDVQQHEzpNYWRyaWQgKHNlZSBj +dXJyZW50IGFkZHJlc3MgYXQgd3d3LmNhbWVyZmlybWEuY29tL2FkZHJlc3MpMRIw +EAYDVQQFEwlBODI3NDMyODcxGzAZBgNVBAoTEkFDIENhbWVyZmlybWEgUy5BLjEp +MCcGA1UEAxMgQ2hhbWJlcnMgb2YgQ29tbWVyY2UgUm9vdCAtIDIwMDgwggIiMA0G +CSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQCvAMtwNyuAWko6bHiUfaN/Gh/2NdW9 +28sNRHI+JrKQUrpjOyhYb6WzbZSm891kDFX29ufyIiKAXuFixrYp4YFs8r/lfTJq +VKAyGVn+H4vXPWCGhSRv4xGzdz4gljUha7MI2XAuZPeEklPWDrCQiorjh40G072Q +DuKZoRuGDtqaCrsLYVAGUvGef3bsyw/QHg3PmTA9HMRFEFis1tPo1+XqxQEHd9ZR +5gN/ikilTWh1uem8nk4ZcfUyS5xtYBkL+8ydddy/Js2Pk3g5eXNeJQ7KXOt3EgfL +ZEFHcpOrUMPrCXZkNNI5t3YRCQ12RcSprj1qr7V9ZS+UWBDsXHyvfuK2GNnQm05a +Sd+pZgvMPMZ4fKecHePOjlO+Bd5gD2vlGts/4+EhySnB8esHnFIbAURRPHsl18Tl +UlRdJQfKFiC4reRB7noI/plvg6aRArBsNlVq5331lubKgdaX8ZSD6e2wsWsSaR6s ++12pxZjptFtYer49okQ6Y1nUCyXeG0+95QGezdIp1Z8XGQpvvwyQ0wlf2eOKNcx5 +Wk0ZN5K3xMGtr/R5JJqyAQuxr1yW84Ay+1w9mPGgP0revq+ULtlVmhduYJ1jbLhj +ya6BXBg14JC7vjxPNyK5fuvPnnchpj04gftI2jE9K+OJ9dC1vX7gUMQSibMjmhAx +hduub+84Mxh2EQIDAQABo4IBbDCCAWgwEgYDVR0TAQH/BAgwBgEB/wIBDDAdBgNV +HQ4EFgQU+SSsD7K1+HnA+mCIG8TZTQKeFxkwgeMGA1UdIwSB2zCB2IAU+SSsD7K1 ++HnA+mCIG8TZTQKeFxmhgbSkgbEwga4xCzAJBgNVBAYTAkVVMUMwQQYDVQQHEzpN +YWRyaWQgKHNlZSBjdXJyZW50IGFkZHJlc3MgYXQgd3d3LmNhbWVyZmlybWEuY29t +L2FkZHJlc3MpMRIwEAYDVQQFEwlBODI3NDMyODcxGzAZBgNVBAoTEkFDIENhbWVy +ZmlybWEgUy5BLjEpMCcGA1UEAxMgQ2hhbWJlcnMgb2YgQ29tbWVyY2UgUm9vdCAt +IDIwMDiCCQCj2kJ+pLGu2jAOBgNVHQ8BAf8EBAMCAQYwPQYDVR0gBDYwNDAyBgRV +HSAAMCowKAYIKwYBBQUHAgEWHGh0dHA6Ly9wb2xpY3kuY2FtZXJmaXJtYS5jb20w +DQYJKoZIhvcNAQEFBQADggIBAJASryI1wqM58C7e6bXpeHxIvj99RZJe6dqxGfwW +PJ+0W2aeaufDuV2I6A+tzyMP3iU6XsxPpcG1Lawk0lgH3qLPaYRgM+gQDROpI9CF +5Y57pp49chNyM/WqfcZjHwj0/gF/JM8rLFQJ3uIrbZLGOU8W6jx+ekbURWpGqOt1 +glanq6B8aBMz9p0w8G8nOSQjKpD9kCk18pPfNKXG9/jvjA9iSnyu0/VU+I22mlaH +FoI6M6taIgj3grrqLuBHmrS1RaMFO9ncLkVAO+rcf+g769HsJtg1pDDFOqxXnrN2 +pSB7+R5KBWIBpih1YJeSDW4+TTdDDZIVnBgizVGZoCkaPF+KMjNbMMeJL0eYD6MD +xvbxrN8y8NmBGuScvfaAFPDRLLmF9dijscilIeUcE5fuDr3fKanvNFNb0+RqE4QG +tjICxFKuItLcsiFCGtpA8CnJ7AoMXOLQusxI0zcKzBIKinmwPQN/aUv0NCB9szTq +jktk9T79syNnFQ0EuPAtwQlRPLJsFfClI9eDdOTlLsn+mCdCxqvGnrDQWzilm1De +fhiYtUU79nm06PcaewaD+9CL2rvHvRirCG88gGtAPxkZumWK5r7VXNM21+9AUiRg +OGcEMeyP84LG3rlV8zsxkVrctQgVrXYlCg17LofiDKYGvCYQbTed7N14jHyAxfDZ +d0jQ +-----END CERTIFICATE----- + +# Issuer: CN=Global Chambersign Root - 2008 O=AC Camerfirma S.A. +# Subject: CN=Global Chambersign Root - 2008 O=AC Camerfirma S.A. 
+# Label: "Global Chambersign Root - 2008" +# Serial: 14541511773111788494 +# MD5 Fingerprint: 9e:80:ff:78:01:0c:2e:c1:36:bd:fe:96:90:6e:08:f3 +# SHA1 Fingerprint: 4a:bd:ee:ec:95:0d:35:9c:89:ae:c7:52:a1:2c:5b:29:f6:d6:aa:0c +# SHA256 Fingerprint: 13:63:35:43:93:34:a7:69:80:16:a0:d3:24:de:72:28:4e:07:9d:7b:52:20:bb:8f:bd:74:78:16:ee:be:ba:ca +-----BEGIN CERTIFICATE----- +MIIHSTCCBTGgAwIBAgIJAMnN0+nVfSPOMA0GCSqGSIb3DQEBBQUAMIGsMQswCQYD +VQQGEwJFVTFDMEEGA1UEBxM6TWFkcmlkIChzZWUgY3VycmVudCBhZGRyZXNzIGF0 +IHd3dy5jYW1lcmZpcm1hLmNvbS9hZGRyZXNzKTESMBAGA1UEBRMJQTgyNzQzMjg3 +MRswGQYDVQQKExJBQyBDYW1lcmZpcm1hIFMuQS4xJzAlBgNVBAMTHkdsb2JhbCBD +aGFtYmVyc2lnbiBSb290IC0gMjAwODAeFw0wODA4MDExMjMxNDBaFw0zODA3MzEx +MjMxNDBaMIGsMQswCQYDVQQGEwJFVTFDMEEGA1UEBxM6TWFkcmlkIChzZWUgY3Vy +cmVudCBhZGRyZXNzIGF0IHd3dy5jYW1lcmZpcm1hLmNvbS9hZGRyZXNzKTESMBAG +A1UEBRMJQTgyNzQzMjg3MRswGQYDVQQKExJBQyBDYW1lcmZpcm1hIFMuQS4xJzAl +BgNVBAMTHkdsb2JhbCBDaGFtYmVyc2lnbiBSb290IC0gMjAwODCCAiIwDQYJKoZI +hvcNAQEBBQADggIPADCCAgoCggIBAMDfVtPkOpt2RbQT2//BthmLN0EYlVJH6xed +KYiONWwGMi5HYvNJBL99RDaxccy9Wglz1dmFRP+RVyXfXjaOcNFccUMd2drvXNL7 +G706tcuto8xEpw2uIRU/uXpbknXYpBI4iRmKt4DS4jJvVpyR1ogQC7N0ZJJ0YPP2 +zxhPYLIj0Mc7zmFLmY/CDNBAspjcDahOo7kKrmCgrUVSY7pmvWjg+b4aqIG7HkF4 +ddPB/gBVsIdU6CeQNR1MM62X/JcumIS/LMmjv9GYERTtY/jKmIhYF5ntRQOXfjyG +HoiMvvKRhI9lNNgATH23MRdaKXoKGCQwoze1eqkBfSbW+Q6OWfH9GzO1KTsXO0G2 +Id3UwD2ln58fQ1DJu7xsepeY7s2MH/ucUa6LcL0nn3HAa6x9kGbo1106DbDVwo3V +yJ2dwW3Q0L9R5OP4wzg2rtandeavhENdk5IMagfeOx2YItaswTXbo6Al/3K1dh3e +beksZixShNBFks4c5eUzHdwHU1SjqoI7mjcv3N2gZOnm3b2u/GSFHTynyQbehP9r +6GsaPMWis0L7iwk+XwhSx2LE1AVxv8Rk5Pihg+g+EpuoHtQ2TS9x9o0o9oOpE9Jh +wZG7SMA0j0GMS0zbaRL/UJScIINZc+18ofLx/d33SdNDWKBWY8o9PeU1VlnpDsog +zCtLkykPAgMBAAGjggFqMIIBZjASBgNVHRMBAf8ECDAGAQH/AgEMMB0GA1UdDgQW +BBS5CcqcHtvTbDprru1U8VuTBjUuXjCB4QYDVR0jBIHZMIHWgBS5CcqcHtvTbDpr +ru1U8VuTBjUuXqGBsqSBrzCBrDELMAkGA1UEBhMCRVUxQzBBBgNVBAcTOk1hZHJp +ZCAoc2VlIGN1cnJlbnQgYWRkcmVzcyBhdCB3d3cuY2FtZXJmaXJtYS5jb20vYWRk +cmVzcykxEjAQBgNVBAUTCUE4Mjc0MzI4NzEbMBkGA1UEChMSQUMgQ2FtZXJmaXJt +YSBTLkEuMScwJQYDVQQDEx5HbG9iYWwgQ2hhbWJlcnNpZ24gUm9vdCAtIDIwMDiC +CQDJzdPp1X0jzjAOBgNVHQ8BAf8EBAMCAQYwPQYDVR0gBDYwNDAyBgRVHSAAMCow +KAYIKwYBBQUHAgEWHGh0dHA6Ly9wb2xpY3kuY2FtZXJmaXJtYS5jb20wDQYJKoZI +hvcNAQEFBQADggIBAICIf3DekijZBZRG/5BXqfEv3xoNa/p8DhxJJHkn2EaqbylZ +UohwEurdPfWbU1Rv4WCiqAm57OtZfMY18dwY6fFn5a+6ReAJ3spED8IXDneRRXoz +X1+WLGiLwUePmJs9wOzL9dWCkoQ10b42OFZyMVtHLaoXpGNR6woBrX/sdZ7LoR/x +fxKxueRkf2fWIyr0uDldmOghp+G9PUIadJpwr2hsUF1Jz//7Dl3mLEfXgTpZALVz +a2Mg9jFFCDkO9HB+QHBaP9BrQql0PSgvAm11cpUJjUhjxsYjV5KTXjXBjfkK9yyd +Yhz2rXzdpjEetrHHfoUm+qRqtdpjMNHvkzeyZi99Bffnt0uYlDXA2TopwZ2yUDMd +SqlapskD7+3056huirRXhOukP9DuqqqHW2Pok+JrqNS4cnhrG+055F3Lm6qH1U9O +AP7Zap88MQ8oAgF9mOinsKJknnn4SPIVqczmyETrP3iZ8ntxPjzxmKfFGBI/5rso +M0LpRQp8bfKGeS/Fghl9CYl8slR2iK7ewfPM4W7bMdaTrpmg7yVqc5iJWzouE4ge +v8CSlDQb4ye3ix5vQv/n6TebUB0tovkC7stYWDpxvGjjqsGvHCgfotwjZT+B6q6Z +09gwzxMNTxXJhLynSC34MCN32EZLeW32jO06f2ARePTpm67VVMB0gNELQp/B +-----END CERTIFICATE----- + +# Issuer: CN=Go Daddy Root Certificate Authority - G2 O=GoDaddy.com, Inc. +# Subject: CN=Go Daddy Root Certificate Authority - G2 O=GoDaddy.com, Inc. 
+# Label: "Go Daddy Root Certificate Authority - G2" +# Serial: 0 +# MD5 Fingerprint: 80:3a:bc:22:c1:e6:fb:8d:9b:3b:27:4a:32:1b:9a:01 +# SHA1 Fingerprint: 47:be:ab:c9:22:ea:e8:0e:78:78:34:62:a7:9f:45:c2:54:fd:e6:8b +# SHA256 Fingerprint: 45:14:0b:32:47:eb:9c:c8:c5:b4:f0:d7:b5:30:91:f7:32:92:08:9e:6e:5a:63:e2:74:9d:d3:ac:a9:19:8e:da +-----BEGIN CERTIFICATE----- +MIIDxTCCAq2gAwIBAgIBADANBgkqhkiG9w0BAQsFADCBgzELMAkGA1UEBhMCVVMx +EDAOBgNVBAgTB0FyaXpvbmExEzARBgNVBAcTClNjb3R0c2RhbGUxGjAYBgNVBAoT +EUdvRGFkZHkuY29tLCBJbmMuMTEwLwYDVQQDEyhHbyBEYWRkeSBSb290IENlcnRp +ZmljYXRlIEF1dGhvcml0eSAtIEcyMB4XDTA5MDkwMTAwMDAwMFoXDTM3MTIzMTIz +NTk1OVowgYMxCzAJBgNVBAYTAlVTMRAwDgYDVQQIEwdBcml6b25hMRMwEQYDVQQH +EwpTY290dHNkYWxlMRowGAYDVQQKExFHb0RhZGR5LmNvbSwgSW5jLjExMC8GA1UE +AxMoR28gRGFkZHkgUm9vdCBDZXJ0aWZpY2F0ZSBBdXRob3JpdHkgLSBHMjCCASIw +DQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAL9xYgjx+lk09xvJGKP3gElY6SKD +E6bFIEMBO4Tx5oVJnyfq9oQbTqC023CYxzIBsQU+B07u9PpPL1kwIuerGVZr4oAH +/PMWdYA5UXvl+TW2dE6pjYIT5LY/qQOD+qK+ihVqf94Lw7YZFAXK6sOoBJQ7Rnwy +DfMAZiLIjWltNowRGLfTshxgtDj6AozO091GB94KPutdfMh8+7ArU6SSYmlRJQVh +GkSBjCypQ5Yj36w6gZoOKcUcqeldHraenjAKOc7xiID7S13MMuyFYkMlNAJWJwGR +tDtwKj9useiciAF9n9T521NtYJ2/LOdYq7hfRvzOxBsDPAnrSTFcaUaz4EcCAwEA +AaNCMEAwDwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMCAQYwHQYDVR0OBBYE +FDqahQcQZyi27/a9BUFuIMGU2g/eMA0GCSqGSIb3DQEBCwUAA4IBAQCZ21151fmX +WWcDYfF+OwYxdS2hII5PZYe096acvNjpL9DbWu7PdIxztDhC2gV7+AJ1uP2lsdeu +9tfeE8tTEH6KRtGX+rcuKxGrkLAngPnon1rpN5+r5N9ss4UXnT3ZJE95kTXWXwTr +gIOrmgIttRD02JDHBHNA7XIloKmf7J6raBKZV8aPEjoJpL1E/QYVN8Gb5DKj7Tjo +2GTzLH4U/ALqn83/B2gX2yKQOC16jdFU8WnjXzPKej17CuPKf1855eJ1usV2GDPO +LPAvTK33sefOT6jEm0pUBsV/fdUID+Ic/n4XuKxe9tQWskMJDE32p2u0mYRlynqI +4uJEvlz36hz1 +-----END CERTIFICATE----- + +# Issuer: CN=Starfield Root Certificate Authority - G2 O=Starfield Technologies, Inc. +# Subject: CN=Starfield Root Certificate Authority - G2 O=Starfield Technologies, Inc. 
+# Label: "Starfield Root Certificate Authority - G2" +# Serial: 0 +# MD5 Fingerprint: d6:39:81:c6:52:7e:96:69:fc:fc:ca:66:ed:05:f2:96 +# SHA1 Fingerprint: b5:1c:06:7c:ee:2b:0c:3d:f8:55:ab:2d:92:f4:fe:39:d4:e7:0f:0e +# SHA256 Fingerprint: 2c:e1:cb:0b:f9:d2:f9:e1:02:99:3f:be:21:51:52:c3:b2:dd:0c:ab:de:1c:68:e5:31:9b:83:91:54:db:b7:f5 +-----BEGIN CERTIFICATE----- +MIID3TCCAsWgAwIBAgIBADANBgkqhkiG9w0BAQsFADCBjzELMAkGA1UEBhMCVVMx +EDAOBgNVBAgTB0FyaXpvbmExEzARBgNVBAcTClNjb3R0c2RhbGUxJTAjBgNVBAoT +HFN0YXJmaWVsZCBUZWNobm9sb2dpZXMsIEluYy4xMjAwBgNVBAMTKVN0YXJmaWVs +ZCBSb290IENlcnRpZmljYXRlIEF1dGhvcml0eSAtIEcyMB4XDTA5MDkwMTAwMDAw +MFoXDTM3MTIzMTIzNTk1OVowgY8xCzAJBgNVBAYTAlVTMRAwDgYDVQQIEwdBcml6 +b25hMRMwEQYDVQQHEwpTY290dHNkYWxlMSUwIwYDVQQKExxTdGFyZmllbGQgVGVj +aG5vbG9naWVzLCBJbmMuMTIwMAYDVQQDEylTdGFyZmllbGQgUm9vdCBDZXJ0aWZp +Y2F0ZSBBdXRob3JpdHkgLSBHMjCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoC +ggEBAL3twQP89o/8ArFvW59I2Z154qK3A2FWGMNHttfKPTUuiUP3oWmb3ooa/RMg +nLRJdzIpVv257IzdIvpy3Cdhl+72WoTsbhm5iSzchFvVdPtrX8WJpRBSiUZV9Lh1 +HOZ/5FSuS/hVclcCGfgXcVnrHigHdMWdSL5stPSksPNkN3mSwOxGXn/hbVNMYq/N +Hwtjuzqd+/x5AJhhdM8mgkBj87JyahkNmcrUDnXMN/uLicFZ8WJ/X7NfZTD4p7dN +dloedl40wOiWVpmKs/B/pM293DIxfJHP4F8R+GuqSVzRmZTRouNjWwl2tVZi4Ut0 +HZbUJtQIBFnQmA4O5t78w+wfkPECAwEAAaNCMEAwDwYDVR0TAQH/BAUwAwEB/zAO +BgNVHQ8BAf8EBAMCAQYwHQYDVR0OBBYEFHwMMh+n2TB/xH1oo2Kooc6rB1snMA0G +CSqGSIb3DQEBCwUAA4IBAQARWfolTwNvlJk7mh+ChTnUdgWUXuEok21iXQnCoKjU +sHU48TRqneSfioYmUeYs0cYtbpUgSpIB7LiKZ3sx4mcujJUDJi5DnUox9g61DLu3 +4jd/IroAow57UvtruzvE03lRTs2Q9GcHGcg8RnoNAX3FWOdt5oUwF5okxBDgBPfg +8n/Uqgr/Qh037ZTlZFkSIHc40zI+OIF1lnP6aI+xy84fxez6nH7PfrHxBy22/L/K +pL/QlwVKvOoYKAKQvVR4CSFx09F9HdkWsKlhPdAKACL8x3vLCWRFCztAgfd9fDL1 +mMpYjn0q7pBZc2T5NnReJaH1ZgUufzkVqSr7UIuOhWn0 +-----END CERTIFICATE----- + +# Issuer: CN=Starfield Services Root Certificate Authority - G2 O=Starfield Technologies, Inc. +# Subject: CN=Starfield Services Root Certificate Authority - G2 O=Starfield Technologies, Inc. 
+# Label: "Starfield Services Root Certificate Authority - G2" +# Serial: 0 +# MD5 Fingerprint: 17:35:74:af:7b:61:1c:eb:f4:f9:3c:e2:ee:40:f9:a2 +# SHA1 Fingerprint: 92:5a:8f:8d:2c:6d:04:e0:66:5f:59:6a:ff:22:d8:63:e8:25:6f:3f +# SHA256 Fingerprint: 56:8d:69:05:a2:c8:87:08:a4:b3:02:51:90:ed:cf:ed:b1:97:4a:60:6a:13:c6:e5:29:0f:cb:2a:e6:3e:da:b5 +-----BEGIN CERTIFICATE----- +MIID7zCCAtegAwIBAgIBADANBgkqhkiG9w0BAQsFADCBmDELMAkGA1UEBhMCVVMx +EDAOBgNVBAgTB0FyaXpvbmExEzARBgNVBAcTClNjb3R0c2RhbGUxJTAjBgNVBAoT +HFN0YXJmaWVsZCBUZWNobm9sb2dpZXMsIEluYy4xOzA5BgNVBAMTMlN0YXJmaWVs +ZCBTZXJ2aWNlcyBSb290IENlcnRpZmljYXRlIEF1dGhvcml0eSAtIEcyMB4XDTA5 +MDkwMTAwMDAwMFoXDTM3MTIzMTIzNTk1OVowgZgxCzAJBgNVBAYTAlVTMRAwDgYD +VQQIEwdBcml6b25hMRMwEQYDVQQHEwpTY290dHNkYWxlMSUwIwYDVQQKExxTdGFy +ZmllbGQgVGVjaG5vbG9naWVzLCBJbmMuMTswOQYDVQQDEzJTdGFyZmllbGQgU2Vy +dmljZXMgUm9vdCBDZXJ0aWZpY2F0ZSBBdXRob3JpdHkgLSBHMjCCASIwDQYJKoZI +hvcNAQEBBQADggEPADCCAQoCggEBANUMOsQq+U7i9b4Zl1+OiFOxHz/Lz58gE20p +OsgPfTz3a3Y4Y9k2YKibXlwAgLIvWX/2h/klQ4bnaRtSmpDhcePYLQ1Ob/bISdm2 +8xpWriu2dBTrz/sm4xq6HZYuajtYlIlHVv8loJNwU4PahHQUw2eeBGg6345AWh1K +Ts9DkTvnVtYAcMtS7nt9rjrnvDH5RfbCYM8TWQIrgMw0R9+53pBlbQLPLJGmpufe +hRhJfGZOozptqbXuNC66DQO4M99H67FrjSXZm86B0UVGMpZwh94CDklDhbZsc7tk +6mFBrMnUVN+HL8cisibMn1lUaJ/8viovxFUcdUBgF4UCVTmLfwUCAwEAAaNCMEAw +DwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMCAQYwHQYDVR0OBBYEFJxfAN+q +AdcwKziIorhtSpzyEZGDMA0GCSqGSIb3DQEBCwUAA4IBAQBLNqaEd2ndOxmfZyMI +bw5hyf2E3F/YNoHN2BtBLZ9g3ccaaNnRbobhiCPPE95Dz+I0swSdHynVv/heyNXB +ve6SbzJ08pGCL72CQnqtKrcgfU28elUSwhXqvfdqlS5sdJ/PHLTyxQGjhdByPq1z +qwubdQxtRbeOlKyWN7Wg0I8VRw7j6IPdj/3vQQF3zCepYoUz8jcI73HPdwbeyBkd +iEDPfUYd/x7H4c7/I9vG+o1VTqkC50cRRj70/b17KSa7qWFiNyi2LSr2EIZkyXCn +0q23KXB56jzaYyWf/Wi3MOxw+3WKt21gZ7IeyLnp2KhvAotnDU0mV3HaIPzBSlCN +sSi6 +-----END CERTIFICATE----- + +# Issuer: CN=AffirmTrust Commercial O=AffirmTrust +# Subject: CN=AffirmTrust Commercial O=AffirmTrust +# Label: "AffirmTrust Commercial" +# Serial: 8608355977964138876 +# MD5 Fingerprint: 82:92:ba:5b:ef:cd:8a:6f:a6:3d:55:f9:84:f6:d6:b7 +# SHA1 Fingerprint: f9:b5:b6:32:45:5f:9c:be:ec:57:5f:80:dc:e9:6e:2c:c7:b2:78:b7 +# SHA256 Fingerprint: 03:76:ab:1d:54:c5:f9:80:3c:e4:b2:e2:01:a0:ee:7e:ef:7b:57:b6:36:e8:a9:3c:9b:8d:48:60:c9:6f:5f:a7 +-----BEGIN CERTIFICATE----- +MIIDTDCCAjSgAwIBAgIId3cGJyapsXwwDQYJKoZIhvcNAQELBQAwRDELMAkGA1UE +BhMCVVMxFDASBgNVBAoMC0FmZmlybVRydXN0MR8wHQYDVQQDDBZBZmZpcm1UcnVz +dCBDb21tZXJjaWFsMB4XDTEwMDEyOTE0MDYwNloXDTMwMTIzMTE0MDYwNlowRDEL +MAkGA1UEBhMCVVMxFDASBgNVBAoMC0FmZmlybVRydXN0MR8wHQYDVQQDDBZBZmZp +cm1UcnVzdCBDb21tZXJjaWFsMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKC +AQEA9htPZwcroRX1BiLLHwGy43NFBkRJLLtJJRTWzsO3qyxPxkEylFf6EqdbDuKP +Hx6GGaeqtS25Xw2Kwq+FNXkyLbscYjfysVtKPcrNcV/pQr6U6Mje+SJIZMblq8Yr +ba0F8PrVC8+a5fBQpIs7R6UjW3p6+DM/uO+Zl+MgwdYoic+U+7lF7eNAFxHUdPAL +MeIrJmqbTFeurCA+ukV6BfO9m2kVrn1OIGPENXY6BwLJN/3HR+7o8XYdcxXyl6S1 +yHp52UKqK39c/s4mT6NmgTWvRLpUHhwwMmWd5jyTXlBOeuM61G7MGvv50jeuJCqr +VwMiKA1JdX+3KNp1v47j3A55MQIDAQABo0IwQDAdBgNVHQ4EFgQUnZPGU4teyq8/ +nx4P5ZmVvCT2lI8wDwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMCAQYwDQYJ +KoZIhvcNAQELBQADggEBAFis9AQOzcAN/wr91LoWXym9e2iZWEnStB03TX8nfUYG +XUPGhi4+c7ImfU+TqbbEKpqrIZcUsd6M06uJFdhrJNTxFq7YpFzUf1GO7RgBsZNj +vbz4YYCanrHOQnDiqX0GJX0nof5v7LMeJNrjS1UaADs1tDvZ110w/YETifLCBivt +Z8SOyUOyXGsViQK8YvxO8rUzqrJv0wqiUOP2O+guRMLbZjipM1ZI8W0bM40NjD9g +N53Tym1+NH4Nn3J2ixufcv1SNUFFApYvHLKac0khsUlHRUe072o0EclNmsxZt9YC +nlpOZbWUrhvfKbAW8b8Angc6F2S1BLUjIZkKlTuXfO8= +-----END CERTIFICATE----- + +# Issuer: CN=AffirmTrust Networking O=AffirmTrust +# Subject: CN=AffirmTrust Networking 
O=AffirmTrust +# Label: "AffirmTrust Networking" +# Serial: 8957382827206547757 +# MD5 Fingerprint: 42:65:ca:be:01:9a:9a:4c:a9:8c:41:49:cd:c0:d5:7f +# SHA1 Fingerprint: 29:36:21:02:8b:20:ed:02:f5:66:c5:32:d1:d6:ed:90:9f:45:00:2f +# SHA256 Fingerprint: 0a:81:ec:5a:92:97:77:f1:45:90:4a:f3:8d:5d:50:9f:66:b5:e2:c5:8f:cd:b5:31:05:8b:0e:17:f3:f0:b4:1b +-----BEGIN CERTIFICATE----- +MIIDTDCCAjSgAwIBAgIIfE8EORzUmS0wDQYJKoZIhvcNAQEFBQAwRDELMAkGA1UE +BhMCVVMxFDASBgNVBAoMC0FmZmlybVRydXN0MR8wHQYDVQQDDBZBZmZpcm1UcnVz +dCBOZXR3b3JraW5nMB4XDTEwMDEyOTE0MDgyNFoXDTMwMTIzMTE0MDgyNFowRDEL +MAkGA1UEBhMCVVMxFDASBgNVBAoMC0FmZmlybVRydXN0MR8wHQYDVQQDDBZBZmZp +cm1UcnVzdCBOZXR3b3JraW5nMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKC +AQEAtITMMxcua5Rsa2FSoOujz3mUTOWUgJnLVWREZY9nZOIG41w3SfYvm4SEHi3y +YJ0wTsyEheIszx6e/jarM3c1RNg1lho9Nuh6DtjVR6FqaYvZ/Ls6rnla1fTWcbua +kCNrmreIdIcMHl+5ni36q1Mr3Lt2PpNMCAiMHqIjHNRqrSK6mQEubWXLviRmVSRL +QESxG9fhwoXA3hA/Pe24/PHxI1Pcv2WXb9n5QHGNfb2V1M6+oF4nI979ptAmDgAp +6zxG8D1gvz9Q0twmQVGeFDdCBKNwV6gbh+0t+nvujArjqWaJGctB+d1ENmHP4ndG +yH329JKBNv3bNPFyfvMMFr20FQIDAQABo0IwQDAdBgNVHQ4EFgQUBx/S55zawm6i +QLSwelAQUHTEyL0wDwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMCAQYwDQYJ +KoZIhvcNAQEFBQADggEBAIlXshZ6qML91tmbmzTCnLQyFE2npN/svqe++EPbkTfO +tDIuUFUaNU52Q3Eg75N3ThVwLofDwR1t3Mu1J9QsVtFSUzpE0nPIxBsFZVpikpzu +QY0x2+c06lkh1QF612S4ZDnNye2v7UsDSKegmQGA3GWjNq5lWUhPgkvIZfFXHeVZ +Lgo/bNjR9eUJtGxUAArgFU2HdW23WJZa3W3SAKD0m0i+wzekujbgfIeFlxoVot4u +olu9rxj5kFDNcFn4J2dHy8egBzp90SxdbBk6ZrV9/ZFvgrG+CJPbFEfxojfHRZ48 +x3evZKiT3/Zpg4Jg8klCNO1aAFSFHBY2kgxc+qatv9s= +-----END CERTIFICATE----- + +# Issuer: CN=AffirmTrust Premium O=AffirmTrust +# Subject: CN=AffirmTrust Premium O=AffirmTrust +# Label: "AffirmTrust Premium" +# Serial: 7893706540734352110 +# MD5 Fingerprint: c4:5d:0e:48:b6:ac:28:30:4e:0a:bc:f9:38:16:87:57 +# SHA1 Fingerprint: d8:a6:33:2c:e0:03:6f:b1:85:f6:63:4f:7d:6a:06:65:26:32:28:27 +# SHA256 Fingerprint: 70:a7:3f:7f:37:6b:60:07:42:48:90:45:34:b1:14:82:d5:bf:0e:69:8e:cc:49:8d:f5:25:77:eb:f2:e9:3b:9a +-----BEGIN CERTIFICATE----- +MIIFRjCCAy6gAwIBAgIIbYwURrGmCu4wDQYJKoZIhvcNAQEMBQAwQTELMAkGA1UE +BhMCVVMxFDASBgNVBAoMC0FmZmlybVRydXN0MRwwGgYDVQQDDBNBZmZpcm1UcnVz +dCBQcmVtaXVtMB4XDTEwMDEyOTE0MTAzNloXDTQwMTIzMTE0MTAzNlowQTELMAkG +A1UEBhMCVVMxFDASBgNVBAoMC0FmZmlybVRydXN0MRwwGgYDVQQDDBNBZmZpcm1U +cnVzdCBQcmVtaXVtMIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAxBLf +qV/+Qd3d9Z+K4/as4Tx4mrzY8H96oDMq3I0gW64tb+eT2TZwamjPjlGjhVtnBKAQ +JG9dKILBl1fYSCkTtuG+kU3fhQxTGJoeJKJPj/CihQvL9Cl/0qRY7iZNyaqoe5rZ ++jjeRFcV5fiMyNlI4g0WJx0eyIOFJbe6qlVBzAMiSy2RjYvmia9mx+n/K+k8rNrS +s8PhaJyJ+HoAVt70VZVs+7pk3WKL3wt3MutizCaam7uqYoNMtAZ6MMgpv+0GTZe5 +HMQxK9VfvFMSF5yZVylmd2EhMQcuJUmdGPLu8ytxjLW6OQdJd/zvLpKQBY0tL3d7 +70O/Nbua2Plzpyzy0FfuKE4mX4+QaAkvuPjcBukumj5Rp9EixAqnOEhss/n/fauG +V+O61oV4d7pD6kh/9ti+I20ev9E2bFhc8e6kGVQa9QPSdubhjL08s9NIS+LI+H+S +qHZGnEJlPqQewQcDWkYtuJfzt9WyVSHvutxMAJf7FJUnM7/oQ0dG0giZFmA7mn7S +5u046uwBHjxIVkkJx0w3AJ6IDsBz4W9m6XJHMD4Q5QsDyZpCAGzFlH5hxIrff4Ia +C1nEWTJ3s7xgaVY5/bQGeyzWZDbZvUjthB9+pSKPKrhC9IK31FOQeE4tGv2Bb0TX +OwF0lkLgAOIua+rF7nKsu7/+6qqo+Nz2snmKtmcCAwEAAaNCMEAwHQYDVR0OBBYE +FJ3AZ6YMItkm9UWrpmVSESfYRaxjMA8GA1UdEwEB/wQFMAMBAf8wDgYDVR0PAQH/ +BAQDAgEGMA0GCSqGSIb3DQEBDAUAA4ICAQCzV00QYk465KzquByvMiPIs0laUZx2 +KI15qldGF9X1Uva3ROgIRL8YhNILgM3FEv0AVQVhh0HctSSePMTYyPtwni94loMg +Nt58D2kTiKV1NpgIpsbfrM7jWNa3Pt668+s0QNiigfV4Py/VpfzZotReBA4Xrf5B +8OWycvpEgjNC6C1Y91aMYj+6QrCcDFx+LmUmXFNPALJ4fqENmS2NuB2OosSw/WDQ +MKSOyARiqcTtNd56l+0OOF6SL5Nwpamcb6d9Ex1+xghIsV5n61EIJenmJWtSKZGc +0jlzCFfemQa0W50QBuHCAKi4HEoCChTQwUHK+4w1IX2COPKpVJEZNZOUbWo6xbLQ 
+u4mGk+ibyQ86p3q4ofB4Rvr8Ny/lioTz3/4E2aFooC8k4gmVBtWVyuEklut89pMF +u+1z6S3RdTnX5yTb2E5fQ4+e0BQ5v1VwSJlXMbSc7kqYA5YwH2AG7hsj/oFgIxpH +YoWlzBk0gG+zrBrjn/B7SK3VAdlntqlyk+otZrWyuOQ9PLLvTIzq6we/qzWaVYa8 +GKa1qF60g2xraUDTn9zxw2lrueFtCfTxqlB2Cnp9ehehVZZCmTEJ3WARjQUwfuaO +RtGdFNrHF+QFlozEJLUbzxQHskD4o55BhrwE0GuWyCqANP2/7waj3VjFhT0+j/6e +KeC2uAloGRwYQw== +-----END CERTIFICATE----- + +# Issuer: CN=AffirmTrust Premium ECC O=AffirmTrust +# Subject: CN=AffirmTrust Premium ECC O=AffirmTrust +# Label: "AffirmTrust Premium ECC" +# Serial: 8401224907861490260 +# MD5 Fingerprint: 64:b0:09:55:cf:b1:d5:99:e2:be:13:ab:a6:5d:ea:4d +# SHA1 Fingerprint: b8:23:6b:00:2f:1d:16:86:53:01:55:6c:11:a4:37:ca:eb:ff:c3:bb +# SHA256 Fingerprint: bd:71:fd:f6:da:97:e4:cf:62:d1:64:7a:dd:25:81:b0:7d:79:ad:f8:39:7e:b4:ec:ba:9c:5e:84:88:82:14:23 +-----BEGIN CERTIFICATE----- +MIIB/jCCAYWgAwIBAgIIdJclisc/elQwCgYIKoZIzj0EAwMwRTELMAkGA1UEBhMC +VVMxFDASBgNVBAoMC0FmZmlybVRydXN0MSAwHgYDVQQDDBdBZmZpcm1UcnVzdCBQ +cmVtaXVtIEVDQzAeFw0xMDAxMjkxNDIwMjRaFw00MDEyMzExNDIwMjRaMEUxCzAJ +BgNVBAYTAlVTMRQwEgYDVQQKDAtBZmZpcm1UcnVzdDEgMB4GA1UEAwwXQWZmaXJt +VHJ1c3QgUHJlbWl1bSBFQ0MwdjAQBgcqhkjOPQIBBgUrgQQAIgNiAAQNMF4bFZ0D +0KF5Nbc6PJJ6yhUczWLznCZcBz3lVPqj1swS6vQUX+iOGasvLkjmrBhDeKzQN8O9 +ss0s5kfiGuZjuD0uL3jET9v0D6RoTFVya5UdThhClXjMNzyR4ptlKymjQjBAMB0G +A1UdDgQWBBSaryl6wBE1NSZRMADDav5A1a7WPDAPBgNVHRMBAf8EBTADAQH/MA4G +A1UdDwEB/wQEAwIBBjAKBggqhkjOPQQDAwNnADBkAjAXCfOHiFBar8jAQr9HX/Vs +aobgxCd05DhT1wV/GzTjxi+zygk8N53X57hG8f2h4nECMEJZh0PUUd+60wkyWs6I +flc9nF9Ca/UHLbXwgpP5WW+uZPpY5Yse42O+tYHNbwKMeQ== +-----END CERTIFICATE----- + +# Issuer: CN=Certum Trusted Network CA O=Unizeto Technologies S.A. OU=Certum Certification Authority +# Subject: CN=Certum Trusted Network CA O=Unizeto Technologies S.A. OU=Certum Certification Authority +# Label: "Certum Trusted Network CA" +# Serial: 279744 +# MD5 Fingerprint: d5:e9:81:40:c5:18:69:fc:46:2c:89:75:62:0f:aa:78 +# SHA1 Fingerprint: 07:e0:32:e0:20:b7:2c:3f:19:2f:06:28:a2:59:3a:19:a7:0f:06:9e +# SHA256 Fingerprint: 5c:58:46:8d:55:f5:8e:49:7e:74:39:82:d2:b5:00:10:b6:d1:65:37:4a:cf:83:a7:d4:a3:2d:b7:68:c4:40:8e +-----BEGIN CERTIFICATE----- +MIIDuzCCAqOgAwIBAgIDBETAMA0GCSqGSIb3DQEBBQUAMH4xCzAJBgNVBAYTAlBM +MSIwIAYDVQQKExlVbml6ZXRvIFRlY2hub2xvZ2llcyBTLkEuMScwJQYDVQQLEx5D +ZXJ0dW0gQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkxIjAgBgNVBAMTGUNlcnR1bSBU +cnVzdGVkIE5ldHdvcmsgQ0EwHhcNMDgxMDIyMTIwNzM3WhcNMjkxMjMxMTIwNzM3 +WjB+MQswCQYDVQQGEwJQTDEiMCAGA1UEChMZVW5pemV0byBUZWNobm9sb2dpZXMg +Uy5BLjEnMCUGA1UECxMeQ2VydHVtIENlcnRpZmljYXRpb24gQXV0aG9yaXR5MSIw +IAYDVQQDExlDZXJ0dW0gVHJ1c3RlZCBOZXR3b3JrIENBMIIBIjANBgkqhkiG9w0B +AQEFAAOCAQ8AMIIBCgKCAQEA4/t9o3K6wvDJFIf1awFO4W5AB7ptJ11/91sts1rH +UV+rpDKmYYe2bg+G0jACl/jXaVehGDldamR5xgFZrDwxSjh80gTSSyjoIF87B6LM +TXPb865Px1bVWqeWifrzq2jUI4ZZJ88JJ7ysbnKDHDBy3+Ci6dLhdHUZvSqeexVU +BBvXQzmtVSjF4hq79MDkrjhJM8x2hZ85RdKknvISjFH4fOQtf/WsX+sWn7Et0brM +kUJ3TCXJkDhv2/DM+44el1k+1WBO5gUo7Ul5E0u6SNsv+XLTOcr+H9g0cvW0QM8x +AcPs3hEtF10fuFDRXhmnad4HMyjKUJX5p1TLVIZQRan5SQIDAQABo0IwQDAPBgNV +HRMBAf8EBTADAQH/MB0GA1UdDgQWBBQIds3LB/8k9sXN7buQvOKEN0Z19zAOBgNV +HQ8BAf8EBAMCAQYwDQYJKoZIhvcNAQEFBQADggEBAKaorSLOAT2mo/9i0Eidi15y +sHhE49wcrwn9I0j6vSrEuVUEtRCjjSfeC4Jj0O7eDDd5QVsisrCaQVymcODU0HfL +I9MA4GxWL+FpDQ3Zqr8hgVDZBqWo/5U30Kr+4rP1mS1FhIrlQgnXdAIv94nYmem8 +J9RHjboNRhx3zxSkHLmkMcScKHQDNP8zGSal6Q10tz6XxnboJ5ajZt3hrvJBW8qY +VoNzcOSGGtIxQbovvi0TWnZvTuhOgQ4/WwMioBK+ZlgRSssDxLQqKi2WF+A5VLxI +03YnnZotBqbJ7DnSq9ufmgsnAjUpsUCV5/nonFWIGUbWtzT1fs45mtk48VH3Tyw= +-----END CERTIFICATE----- + +# Issuer: CN=TWCA Root Certification Authority O=TAIWAN-CA 
OU=Root CA +# Subject: CN=TWCA Root Certification Authority O=TAIWAN-CA OU=Root CA +# Label: "TWCA Root Certification Authority" +# Serial: 1 +# MD5 Fingerprint: aa:08:8f:f6:f9:7b:b7:f2:b1:a7:1e:9b:ea:ea:bd:79 +# SHA1 Fingerprint: cf:9e:87:6d:d3:eb:fc:42:26:97:a3:b5:a3:7a:a0:76:a9:06:23:48 +# SHA256 Fingerprint: bf:d8:8f:e1:10:1c:41:ae:3e:80:1b:f8:be:56:35:0e:e9:ba:d1:a6:b9:bd:51:5e:dc:5c:6d:5b:87:11:ac:44 +-----BEGIN CERTIFICATE----- +MIIDezCCAmOgAwIBAgIBATANBgkqhkiG9w0BAQUFADBfMQswCQYDVQQGEwJUVzES +MBAGA1UECgwJVEFJV0FOLUNBMRAwDgYDVQQLDAdSb290IENBMSowKAYDVQQDDCFU +V0NBIFJvb3QgQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkwHhcNMDgwODI4MDcyNDMz +WhcNMzAxMjMxMTU1OTU5WjBfMQswCQYDVQQGEwJUVzESMBAGA1UECgwJVEFJV0FO +LUNBMRAwDgYDVQQLDAdSb290IENBMSowKAYDVQQDDCFUV0NBIFJvb3QgQ2VydGlm +aWNhdGlvbiBBdXRob3JpdHkwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIB +AQCwfnK4pAOU5qfeCTiRShFAh6d8WWQUe7UREN3+v9XAu1bihSX0NXIP+FPQQeFE +AcK0HMMxQhZHhTMidrIKbw/lJVBPhYa+v5guEGcevhEFhgWQxFnQfHgQsIBct+HH +K3XLfJ+utdGdIzdjp9xCoi2SBBtQwXu4PhvJVgSLL1KbralW6cH/ralYhzC2gfeX +RfwZVzsrb+RH9JlF/h3x+JejiB03HFyP4HYlmlD4oFT/RJB2I9IyxsOrBr/8+7/z +rX2SYgJbKdM1o5OaQ2RgXbL6Mv87BK9NQGr5x+PvI/1ry+UPizgN7gr8/g+YnzAx +3WxSZfmLgb4i4RxYA7qRG4kHAgMBAAGjQjBAMA4GA1UdDwEB/wQEAwIBBjAPBgNV +HRMBAf8EBTADAQH/MB0GA1UdDgQWBBRqOFsmjd6LWvJPelSDGRjjCDWmujANBgkq +hkiG9w0BAQUFAAOCAQEAPNV3PdrfibqHDAhUaiBQkr6wQT25JmSDCi/oQMCXKCeC +MErJk/9q56YAf4lCmtYR5VPOL8zy2gXE/uJQxDqGfczafhAJO5I1KlOy/usrBdls +XebQ79NqZp4VKIV66IIArB6nCWlWQtNoURi+VJq/REG6Sb4gumlc7rh3zc5sH62D +lhh9DrUUOYTxKOkto557HnpyWoOzeW/vtPzQCqVYT0bf+215WfKEIlKuD8z7fDvn +aspHYcN6+NOSBB+4IIThNlQWx0DeO4pz3N/GCUzf7Nr/1FNCocnyYh0igzyXxfkZ +YiesZSLX0zzG5Y6yU8xJzrww/nsOM5D77dIUkR8Hrw== +-----END CERTIFICATE----- + +# Issuer: O=SECOM Trust Systems CO.,LTD. OU=Security Communication RootCA2 +# Subject: O=SECOM Trust Systems CO.,LTD. 
OU=Security Communication RootCA2 +# Label: "Security Communication RootCA2" +# Serial: 0 +# MD5 Fingerprint: 6c:39:7d:a4:0e:55:59:b2:3f:d6:41:b1:12:50:de:43 +# SHA1 Fingerprint: 5f:3b:8c:f2:f8:10:b3:7d:78:b4:ce:ec:19:19:c3:73:34:b9:c7:74 +# SHA256 Fingerprint: 51:3b:2c:ec:b8:10:d4:cd:e5:dd:85:39:1a:df:c6:c2:dd:60:d8:7b:b7:36:d2:b5:21:48:4a:a4:7a:0e:be:f6 +-----BEGIN CERTIFICATE----- +MIIDdzCCAl+gAwIBAgIBADANBgkqhkiG9w0BAQsFADBdMQswCQYDVQQGEwJKUDEl +MCMGA1UEChMcU0VDT00gVHJ1c3QgU3lzdGVtcyBDTy4sTFRELjEnMCUGA1UECxMe +U2VjdXJpdHkgQ29tbXVuaWNhdGlvbiBSb290Q0EyMB4XDTA5MDUyOTA1MDAzOVoX +DTI5MDUyOTA1MDAzOVowXTELMAkGA1UEBhMCSlAxJTAjBgNVBAoTHFNFQ09NIFRy +dXN0IFN5c3RlbXMgQ08uLExURC4xJzAlBgNVBAsTHlNlY3VyaXR5IENvbW11bmlj +YXRpb24gUm9vdENBMjCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBANAV +OVKxUrO6xVmCxF1SrjpDZYBLx/KWvNs2l9amZIyoXvDjChz335c9S672XewhtUGr +zbl+dp+++T42NKA7wfYxEUV0kz1XgMX5iZnK5atq1LXaQZAQwdbWQonCv/Q4EpVM +VAX3NuRFg3sUZdbcDE3R3n4MqzvEFb46VqZab3ZpUql6ucjrappdUtAtCms1FgkQ +hNBqyjoGADdH5H5XTz+L62e4iKrFvlNVspHEfbmwhRkGeC7bYRr6hfVKkaHnFtWO +ojnflLhwHyg/i/xAXmODPIMqGplrz95Zajv8bxbXH/1KEOtOghY6rCcMU/Gt1SSw +awNQwS08Ft1ENCcadfsCAwEAAaNCMEAwHQYDVR0OBBYEFAqFqXdlBZh8QIH4D5cs +OPEK7DzPMA4GA1UdDwEB/wQEAwIBBjAPBgNVHRMBAf8EBTADAQH/MA0GCSqGSIb3 +DQEBCwUAA4IBAQBMOqNErLlFsceTfsgLCkLfZOoc7llsCLqJX2rKSpWeeo8HxdpF +coJxDjrSzG+ntKEju/Ykn8sX/oymzsLS28yN/HH8AynBbF0zX2S2ZTuJbxh2ePXc +okgfGT+Ok+vx+hfuzU7jBBJV1uXk3fs+BXziHV7Gp7yXT2g69ekuCkO2r1dcYmh8 +t/2jioSgrGK+KwmHNPBqAbubKVY8/gA3zyNs8U6qtnRGEmyR7jTV7JqR50S+kDFy +1UkC9gLl9B/rfNmWVan/7Ir5mUf/NVoCqgTLiluHcSmRvaS0eg29mvVXIwAHIRc/ +SjnRBUkLp7Y3gaVdjKozXoEofKd9J+sAro03 +-----END CERTIFICATE----- + +# Issuer: CN=Hellenic Academic and Research Institutions RootCA 2011 O=Hellenic Academic and Research Institutions Cert. Authority +# Subject: CN=Hellenic Academic and Research Institutions RootCA 2011 O=Hellenic Academic and Research Institutions Cert. 
Authority +# Label: "Hellenic Academic and Research Institutions RootCA 2011" +# Serial: 0 +# MD5 Fingerprint: 73:9f:4c:4b:73:5b:79:e9:fa:ba:1c:ef:6e:cb:d5:c9 +# SHA1 Fingerprint: fe:45:65:9b:79:03:5b:98:a1:61:b5:51:2e:ac:da:58:09:48:22:4d +# SHA256 Fingerprint: bc:10:4f:15:a4:8b:e7:09:dc:a5:42:a7:e1:d4:b9:df:6f:05:45:27:e8:02:ea:a9:2d:59:54:44:25:8a:fe:71 +-----BEGIN CERTIFICATE----- +MIIEMTCCAxmgAwIBAgIBADANBgkqhkiG9w0BAQUFADCBlTELMAkGA1UEBhMCR1Ix +RDBCBgNVBAoTO0hlbGxlbmljIEFjYWRlbWljIGFuZCBSZXNlYXJjaCBJbnN0aXR1 +dGlvbnMgQ2VydC4gQXV0aG9yaXR5MUAwPgYDVQQDEzdIZWxsZW5pYyBBY2FkZW1p +YyBhbmQgUmVzZWFyY2ggSW5zdGl0dXRpb25zIFJvb3RDQSAyMDExMB4XDTExMTIw +NjEzNDk1MloXDTMxMTIwMTEzNDk1MlowgZUxCzAJBgNVBAYTAkdSMUQwQgYDVQQK +EztIZWxsZW5pYyBBY2FkZW1pYyBhbmQgUmVzZWFyY2ggSW5zdGl0dXRpb25zIENl +cnQuIEF1dGhvcml0eTFAMD4GA1UEAxM3SGVsbGVuaWMgQWNhZGVtaWMgYW5kIFJl +c2VhcmNoIEluc3RpdHV0aW9ucyBSb290Q0EgMjAxMTCCASIwDQYJKoZIhvcNAQEB +BQADggEPADCCAQoCggEBAKlTAOMupvaO+mDYLZU++CwqVE7NuYRhlFhPjz2L5EPz +dYmNUeTDN9KKiE15HrcS3UN4SoqS5tdI1Q+kOilENbgH9mgdVc04UfCMJDGFr4PJ +fel3r+0ae50X+bOdOFAPplp5kYCvN66m0zH7tSYJnTxa71HFK9+WXesyHgLacEns +bgzImjeN9/E2YEsmLIKe0HjzDQ9jpFEw4fkrJxIH2Oq9GGKYsFk3fb7u8yBRQlqD +75O6aRXxYp2fmTmCobd0LovUxQt7L/DICto9eQqakxylKHJzkUOap9FNhYS5qXSP +FEDH3N6sQWRstBmbAmNtJGSPRLIl6s5ddAxjMlyNh+UCAwEAAaOBiTCBhjAPBgNV +HRMBAf8EBTADAQH/MAsGA1UdDwQEAwIBBjAdBgNVHQ4EFgQUppFC/RNhSiOeCKQp +5dgTBCPuQSUwRwYDVR0eBEAwPqA8MAWCAy5ncjAFggMuZXUwBoIELmVkdTAGggQu +b3JnMAWBAy5ncjAFgQMuZXUwBoEELmVkdTAGgQQub3JnMA0GCSqGSIb3DQEBBQUA +A4IBAQAf73lB4XtuP7KMhjdCSk4cNx6NZrokgclPEg8hwAOXhiVtXdMiKahsog2p +6z0GW5k6x8zDmjR/qw7IThzh+uTczQ2+vyT+bOdrwg3IBp5OjWEopmr95fZi6hg8 +TqBTnbI6nOulnJEWtk2C4AwFSKls9cz4y51JtPACpf1wA+2KIaWuE4ZJwzNzvoc7 +dIsXRSZMFpGD/md9zU1jZ/rzAxKWeAaNsWftjj++n08C9bMJL/NMh98qy5V8Acys +Nnq/onN694/BtZqhFLKPM58N7yLcZnuEvUUXBj08yrl3NI/K6s8/MT7jiOOASSXI +l7WdmplNsDz4SgCbZN2fOUvRJ9e4 +-----END CERTIFICATE----- + +# Issuer: CN=Actalis Authentication Root CA O=Actalis S.p.A./03358520967 +# Subject: CN=Actalis Authentication Root CA O=Actalis S.p.A./03358520967 +# Label: "Actalis Authentication Root CA" +# Serial: 6271844772424770508 +# MD5 Fingerprint: 69:c1:0d:4f:07:a3:1b:c3:fe:56:3d:04:bc:11:f6:a6 +# SHA1 Fingerprint: f3:73:b3:87:06:5a:28:84:8a:f2:f3:4a:ce:19:2b:dd:c7:8e:9c:ac +# SHA256 Fingerprint: 55:92:60:84:ec:96:3a:64:b9:6e:2a:be:01:ce:0b:a8:6a:64:fb:fe:bc:c7:aa:b5:af:c1:55:b3:7f:d7:60:66 +-----BEGIN CERTIFICATE----- +MIIFuzCCA6OgAwIBAgIIVwoRl0LE48wwDQYJKoZIhvcNAQELBQAwazELMAkGA1UE +BhMCSVQxDjAMBgNVBAcMBU1pbGFuMSMwIQYDVQQKDBpBY3RhbGlzIFMucC5BLi8w +MzM1ODUyMDk2NzEnMCUGA1UEAwweQWN0YWxpcyBBdXRoZW50aWNhdGlvbiBSb290 +IENBMB4XDTExMDkyMjExMjIwMloXDTMwMDkyMjExMjIwMlowazELMAkGA1UEBhMC +SVQxDjAMBgNVBAcMBU1pbGFuMSMwIQYDVQQKDBpBY3RhbGlzIFMucC5BLi8wMzM1 +ODUyMDk2NzEnMCUGA1UEAwweQWN0YWxpcyBBdXRoZW50aWNhdGlvbiBSb290IENB +MIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAp8bEpSmkLO/lGMWwUKNv +UTufClrJwkg4CsIcoBh/kbWHuUA/3R1oHwiD1S0eiKD4j1aPbZkCkpAW1V8IbInX +4ay8IMKx4INRimlNAJZaby/ARH6jDuSRzVju3PvHHkVH3Se5CAGfpiEd9UEtL0z9 +KK3giq0itFZljoZUj5NDKd45RnijMCO6zfB9E1fAXdKDa0hMxKufgFpbOr3JpyI/ +gCczWw63igxdBzcIy2zSekciRDXFzMwujt0q7bd9Zg1fYVEiVRvjRuPjPdA1Yprb +rxTIW6HMiRvhMCb8oJsfgadHHwTrozmSBp+Z07/T6k9QnBn+locePGX2oxgkg4YQ +51Q+qDp2JE+BIcXjDwL4k5RHILv+1A7TaLndxHqEguNTVHnd25zS8gebLra8Pu2F +be8lEfKXGkJh90qX6IuxEAf6ZYGyojnP9zz/GPvG8VqLWeICrHuS0E4UT1lF9gxe +KF+w6D9Fz8+vm2/7hNN3WpVvrJSEnu68wEqPSpP4RCHiMUVhUE4Q2OM1fEwZtN4F +v6MGn8i1zeQf1xcGDXqVdFUNaBr8EBtiZJ1t4JWgw5QHVw0U5r0F+7if5t+L4sbn +fpb2U8WANFAoWPASUHEXMLrmeGO89LKtmyuy/uE5jF66CyCU3nuDuP/jVo23Eek7 
+jPKxwV2dpAtMK9myGPW1n0sCAwEAAaNjMGEwHQYDVR0OBBYEFFLYiDrIn3hm7Ynz +ezhwlMkCAjbQMA8GA1UdEwEB/wQFMAMBAf8wHwYDVR0jBBgwFoAUUtiIOsifeGbt +ifN7OHCUyQICNtAwDgYDVR0PAQH/BAQDAgEGMA0GCSqGSIb3DQEBCwUAA4ICAQAL +e3KHwGCmSUyIWOYdiPcUZEim2FgKDk8TNd81HdTtBjHIgT5q1d07GjLukD0R0i70 +jsNjLiNmsGe+b7bAEzlgqqI0JZN1Ut6nna0Oh4lScWoWPBkdg/iaKWW+9D+a2fDz +WochcYBNy+A4mz+7+uAwTc+G02UQGRjRlwKxK3JCaKygvU5a2hi/a5iB0P2avl4V +SM0RFbnAKVy06Ij3Pjaut2L9HmLecHgQHEhb2rykOLpn7VU+Xlff1ANATIGk0k9j +pwlCCRT8AKnCgHNPLsBA2RF7SOp6AsDT6ygBJlh0wcBzIm2Tlf05fbsq4/aC4yyX +X04fkZT6/iyj2HYauE2yOE+b+h1IYHkm4vP9qdCa6HCPSXrW5b0KDtst842/6+Ok +fcvHlXHo2qN8xcL4dJIEG4aspCJTQLas/kx2z/uUMsA1n3Y/buWQbqCmJqK4LL7R +K4X9p2jIugErsWx0Hbhzlefut8cl8ABMALJ+tguLHPPAUJ4lueAI3jZm/zel0btU +ZCzJJ7VLkn5l/9Mt4blOvH+kQSGQQXemOR/qnuOf0GZvBeyqdn6/axag67XH/JJU +LysRJyU3eExRarDzzFhdFPFqSBX/wge2sY0PjlxQRrM9vwGYT7JZVEc+NHt4bVaT +LnPqZih4zR0Uv6CPLy64Lo7yFIrM6bV8+2ydDKXhlg== +-----END CERTIFICATE----- + +# Issuer: O=Trustis Limited OU=Trustis FPS Root CA +# Subject: O=Trustis Limited OU=Trustis FPS Root CA +# Label: "Trustis FPS Root CA" +# Serial: 36053640375399034304724988975563710553 +# MD5 Fingerprint: 30:c9:e7:1e:6b:e6:14:eb:65:b2:16:69:20:31:67:4d +# SHA1 Fingerprint: 3b:c0:38:0b:33:c3:f6:a6:0c:86:15:22:93:d9:df:f5:4b:81:c0:04 +# SHA256 Fingerprint: c1:b4:82:99:ab:a5:20:8f:e9:63:0a:ce:55:ca:68:a0:3e:da:5a:51:9c:88:02:a0:d3:a6:73:be:8f:8e:55:7d +-----BEGIN CERTIFICATE----- +MIIDZzCCAk+gAwIBAgIQGx+ttiD5JNM2a/fH8YygWTANBgkqhkiG9w0BAQUFADBF +MQswCQYDVQQGEwJHQjEYMBYGA1UEChMPVHJ1c3RpcyBMaW1pdGVkMRwwGgYDVQQL +ExNUcnVzdGlzIEZQUyBSb290IENBMB4XDTAzMTIyMzEyMTQwNloXDTI0MDEyMTEx +MzY1NFowRTELMAkGA1UEBhMCR0IxGDAWBgNVBAoTD1RydXN0aXMgTGltaXRlZDEc +MBoGA1UECxMTVHJ1c3RpcyBGUFMgUm9vdCBDQTCCASIwDQYJKoZIhvcNAQEBBQAD +ggEPADCCAQoCggEBAMVQe547NdDfxIzNjpvto8A2mfRC6qc+gIMPpqdZh8mQRUN+ +AOqGeSoDvT03mYlmt+WKVoaTnGhLaASMk5MCPjDSNzoiYYkchU59j9WvezX2fihH +iTHcDnlkH5nSW7r+f2C/revnPDgpai/lkQtV/+xvWNUtyd5MZnGPDNcE2gfmHhjj +vSkCqPoc4Vu5g6hBSLwacY3nYuUtsuvffM/bq1rKMfFMIvMFE/eC+XN5DL7XSxzA +0RU8k0Fk0ea+IxciAIleH2ulrG6nS4zto3Lmr2NNL4XSFDWaLk6M6jKYKIahkQlB +OrTh4/L68MkKokHdqeMDx4gVOxzUGpTXn2RZEm0CAwEAAaNTMFEwDwYDVR0TAQH/ +BAUwAwEB/zAfBgNVHSMEGDAWgBS6+nEleYtXQSUhhgtx67JkDoshZzAdBgNVHQ4E +FgQUuvpxJXmLV0ElIYYLceuyZA6LIWcwDQYJKoZIhvcNAQEFBQADggEBAH5Y//01 +GX2cGE+esCu8jowU/yyg2kdbw++BLa8F6nRIW/M+TgfHbcWzk88iNVy2P3UnXwmW +zaD+vkAMXBJV+JOCyinpXj9WV4s4NvdFGkwozZ5BuO1WTISkQMi4sKUraXAEasP4 +1BIy+Q7DsdwyhEQsb8tGD+pmQQ9P8Vilpg0ND2HepZ5dfWWhPBfnqFVO76DH7cZE +f1T1o+CP8HxVIo8ptoGj4W1OLBuAZ+ytIJ8MYmHVl/9D7S3B2l0pKoU/rGXuhg8F +jZBf3+6f9L/uHfuY5H+QK4R4EA5sSVPvFVtlRkpdr7r7OnIdzfYliB6XzCGcKQEN +ZetX2fNXlrtIzYE= +-----END CERTIFICATE----- + +# Issuer: CN=Buypass Class 2 Root CA O=Buypass AS-983163327 +# Subject: CN=Buypass Class 2 Root CA O=Buypass AS-983163327 +# Label: "Buypass Class 2 Root CA" +# Serial: 2 +# MD5 Fingerprint: 46:a7:d2:fe:45:fb:64:5a:a8:59:90:9b:78:44:9b:29 +# SHA1 Fingerprint: 49:0a:75:74:de:87:0a:47:fe:58:ee:f6:c7:6b:eb:c6:0b:12:40:99 +# SHA256 Fingerprint: 9a:11:40:25:19:7c:5b:b9:5d:94:e6:3d:55:cd:43:79:08:47:b6:46:b2:3c:df:11:ad:a4:a0:0e:ff:15:fb:48 +-----BEGIN CERTIFICATE----- +MIIFWTCCA0GgAwIBAgIBAjANBgkqhkiG9w0BAQsFADBOMQswCQYDVQQGEwJOTzEd +MBsGA1UECgwUQnV5cGFzcyBBUy05ODMxNjMzMjcxIDAeBgNVBAMMF0J1eXBhc3Mg +Q2xhc3MgMiBSb290IENBMB4XDTEwMTAyNjA4MzgwM1oXDTQwMTAyNjA4MzgwM1ow +TjELMAkGA1UEBhMCTk8xHTAbBgNVBAoMFEJ1eXBhc3MgQVMtOTgzMTYzMzI3MSAw +HgYDVQQDDBdCdXlwYXNzIENsYXNzIDIgUm9vdCBDQTCCAiIwDQYJKoZIhvcNAQEB +BQADggIPADCCAgoCggIBANfHXvfBB9R3+0Mh9PT1aeTuMgHbo4Yf5FkNuud1g1Lr 
+6hxhFUi7HQfKjK6w3Jad6sNgkoaCKHOcVgb/S2TwDCo3SbXlzwx87vFKu3MwZfPV +L4O2fuPn9Z6rYPnT8Z2SdIrkHJasW4DptfQxh6NR/Md+oW+OU3fUl8FVM5I+GC91 +1K2GScuVr1QGbNgGE41b/+EmGVnAJLqBcXmQRFBoJJRfuLMR8SlBYaNByyM21cHx +MlAQTn/0hpPshNOOvEu/XAFOBz3cFIqUCqTqc/sLUegTBxj6DvEr0VQVfTzh97QZ +QmdiXnfgolXsttlpF9U6r0TtSsWe5HonfOV116rLJeffawrbD02TTqigzXsu8lkB +arcNuAeBfos4GzjmCleZPe4h6KP1DBbdi+w0jpwqHAAVF41og9JwnxgIzRFo1clr +Us3ERo/ctfPYV3Me6ZQ5BL/T3jjetFPsaRyifsSP5BtwrfKi+fv3FmRmaZ9JUaLi +FRhnBkp/1Wy1TbMz4GHrXb7pmA8y1x1LPC5aAVKRCfLf6o3YBkBjqhHk/sM3nhRS +P/TizPJhk9H9Z2vXUq6/aKtAQ6BXNVN48FP4YUIHZMbXb5tMOA1jrGKvNouicwoN +9SG9dKpN6nIDSdvHXx1iY8f93ZHsM+71bbRuMGjeyNYmsHVee7QHIJihdjK4TWxP +AgMBAAGjQjBAMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFMmAd+BikoL1Rpzz +uvdMw964o605MA4GA1UdDwEB/wQEAwIBBjANBgkqhkiG9w0BAQsFAAOCAgEAU18h +9bqwOlI5LJKwbADJ784g7wbylp7ppHR/ehb8t/W2+xUbP6umwHJdELFx7rxP462s +A20ucS6vxOOto70MEae0/0qyexAQH6dXQbLArvQsWdZHEIjzIVEpMMpghq9Gqx3t +OluwlN5E40EIosHsHdb9T7bWR9AUC8rmyrV7d35BH16Dx7aMOZawP5aBQW9gkOLo ++fsicdl9sz1Gv7SEr5AcD48Saq/v7h56rgJKihcrdv6sVIkkLE8/trKnToyokZf7 +KcZ7XC25y2a2t6hbElGFtQl+Ynhw/qlqYLYdDnkM/crqJIByw5c/8nerQyIKx+u2 +DISCLIBrQYoIwOula9+ZEsuK1V6ADJHgJgg2SMX6OBE1/yWDLfJ6v9r9jv6ly0Us +H8SIU653DtmadsWOLB2jutXsMq7Aqqz30XpN69QH4kj3Io6wpJ9qzo6ysmD0oyLQ +I+uUWnpp3Q+/QFesa1lQ2aOZ4W7+jQF5JyMV3pKdewlNWudLSDBaGOYKbeaP4NK7 +5t98biGCwWg5TbSYWGZizEqQXsP6JwSxeRV0mcy+rSDeJmAc61ZRpqPq5KM/p/9h +3PFaTWwyI0PurKju7koSCTxdccK+efrCh2gdC/1cacwG0Jp9VJkqyTkaGa9LKkPz +Y11aWOIv4x3kqdbQCtCev9eBCfHJxyYNrJgWVqA= +-----END CERTIFICATE----- + +# Issuer: CN=Buypass Class 3 Root CA O=Buypass AS-983163327 +# Subject: CN=Buypass Class 3 Root CA O=Buypass AS-983163327 +# Label: "Buypass Class 3 Root CA" +# Serial: 2 +# MD5 Fingerprint: 3d:3b:18:9e:2c:64:5a:e8:d5:88:ce:0e:f9:37:c2:ec +# SHA1 Fingerprint: da:fa:f7:fa:66:84:ec:06:8f:14:50:bd:c7:c2:81:a5:bc:a9:64:57 +# SHA256 Fingerprint: ed:f7:eb:bc:a2:7a:2a:38:4d:38:7b:7d:40:10:c6:66:e2:ed:b4:84:3e:4c:29:b4:ae:1d:5b:93:32:e6:b2:4d +-----BEGIN CERTIFICATE----- +MIIFWTCCA0GgAwIBAgIBAjANBgkqhkiG9w0BAQsFADBOMQswCQYDVQQGEwJOTzEd +MBsGA1UECgwUQnV5cGFzcyBBUy05ODMxNjMzMjcxIDAeBgNVBAMMF0J1eXBhc3Mg +Q2xhc3MgMyBSb290IENBMB4XDTEwMTAyNjA4Mjg1OFoXDTQwMTAyNjA4Mjg1OFow +TjELMAkGA1UEBhMCTk8xHTAbBgNVBAoMFEJ1eXBhc3MgQVMtOTgzMTYzMzI3MSAw +HgYDVQQDDBdCdXlwYXNzIENsYXNzIDMgUm9vdCBDQTCCAiIwDQYJKoZIhvcNAQEB +BQADggIPADCCAgoCggIBAKXaCpUWUOOV8l6ddjEGMnqb8RB2uACatVI2zSRHsJ8Y +ZLya9vrVediQYkwiL944PdbgqOkcLNt4EemOaFEVcsfzM4fkoF0LXOBXByow9c3E +N3coTRiR5r/VUv1xLXA+58bEiuPwKAv0dpihi4dVsjoT/Lc+JzeOIuOoTyrvYLs9 +tznDDgFHmV0ST9tD+leh7fmdvhFHJlsTmKtdFoqwNxxXnUX/iJY2v7vKB3tvh2PX +0DJq1l1sDPGzbjniazEuOQAnFN44wOwZZoYS6J1yFhNkUsepNxz9gjDthBgd9K5c +/3ATAOux9TN6S9ZV+AWNS2mw9bMoNlwUxFFzTWsL8TQH2xc519woe2v1n/MuwU8X +KhDzzMro6/1rqy6any2CbgTUUgGTLT2G/H783+9CHaZr77kgxve9oKeV/afmiSTY +zIw0bOIjL9kSGiG5VZFvC5F5GQytQIgLcOJ60g7YaEi7ghM5EFjp2CoHxhLbWNvS +O1UQRwUVZ2J+GGOmRj8JDlQyXr8NYnon74Do29lLBlo3WiXQCBJ31G8JUJc9yB3D +34xFMFbG02SrZvPAXpacw8Tvw3xrizp5f7NJzz3iiZ+gMEuFuZyUJHmPfWupRWgP +K9Dx2hzLabjKSWJtyNBjYt1gD1iqj6G8BaVmos8bdrKEZLFMOVLAMLrwjEsCsLa3 +AgMBAAGjQjBAMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFEe4zf/lb+74suwv +Tg75JbCOPGvDMA4GA1UdDwEB/wQEAwIBBjANBgkqhkiG9w0BAQsFAAOCAgEAACAj +QTUEkMJAYmDv4jVM1z+s4jSQuKFvdvoWFqRINyzpkMLyPPgKn9iB5btb2iUspKdV +cSQy9sgL8rxq+JOssgfCX5/bzMiKqr5qb+FJEMwx14C7u8jYog5kV+qi9cKpMRXS +IGrs/CIBKM+GuIAeqcwRpTzyFrNHnfzSgCHEy9BHcEGhyoMZCCxt8l13nIoUE9Q2 +HJLw5QY33KbmkJs4j1xrG0aGQ0JfPgEHU1RdZX33inOhmlRaHylDFCfChQ+1iHsa +O5S3HWCntZznKWlXWpuTekMwGwPXYshApqr8ZORK15FTAaggiG6cX0S5y2CBNOxv 
+033aSF/rtJC8LakcC6wc1aJoIIAE1vyxjy+7SjENSoYc6+I2KSb12tjE8nVhz36u +dmNKekBlk4f4HoCMhuWG1o8O/FMsYOgWYRqiPkN7zTlgVGr18okmAWiDSKIz6MkE +kbIRNBE+6tBDGR8Dk5AM/1E9V/RBbuHLoL7ryWPNbczk+DaqaJ3tvV2XcEQNtg41 +3OEMXbugUZTLfhbrES+jkkXITHHZvMmZUldGL1DPvTVp9D0VzgalLA8+9oG6lLvD +u79leNKGef9JOxqDDPDeeOzI8k1MGt6CKfjBWtrt7uYnXuhF0J0cUahoq0Tj0Itq +4/g7u9xN12TyUb7mqqta6THuBrxzvxNiCp/HuZc= +-----END CERTIFICATE----- + +# Issuer: CN=T-TeleSec GlobalRoot Class 3 O=T-Systems Enterprise Services GmbH OU=T-Systems Trust Center +# Subject: CN=T-TeleSec GlobalRoot Class 3 O=T-Systems Enterprise Services GmbH OU=T-Systems Trust Center +# Label: "T-TeleSec GlobalRoot Class 3" +# Serial: 1 +# MD5 Fingerprint: ca:fb:40:a8:4e:39:92:8a:1d:fe:8e:2f:c4:27:ea:ef +# SHA1 Fingerprint: 55:a6:72:3e:cb:f2:ec:cd:c3:23:74:70:19:9d:2a:be:11:e3:81:d1 +# SHA256 Fingerprint: fd:73:da:d3:1c:64:4f:f1:b4:3b:ef:0c:cd:da:96:71:0b:9c:d9:87:5e:ca:7e:31:70:7a:f3:e9:6d:52:2b:bd +-----BEGIN CERTIFICATE----- +MIIDwzCCAqugAwIBAgIBATANBgkqhkiG9w0BAQsFADCBgjELMAkGA1UEBhMCREUx +KzApBgNVBAoMIlQtU3lzdGVtcyBFbnRlcnByaXNlIFNlcnZpY2VzIEdtYkgxHzAd +BgNVBAsMFlQtU3lzdGVtcyBUcnVzdCBDZW50ZXIxJTAjBgNVBAMMHFQtVGVsZVNl +YyBHbG9iYWxSb290IENsYXNzIDMwHhcNMDgxMDAxMTAyOTU2WhcNMzMxMDAxMjM1 +OTU5WjCBgjELMAkGA1UEBhMCREUxKzApBgNVBAoMIlQtU3lzdGVtcyBFbnRlcnBy +aXNlIFNlcnZpY2VzIEdtYkgxHzAdBgNVBAsMFlQtU3lzdGVtcyBUcnVzdCBDZW50 +ZXIxJTAjBgNVBAMMHFQtVGVsZVNlYyBHbG9iYWxSb290IENsYXNzIDMwggEiMA0G +CSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC9dZPwYiJvJK7genasfb3ZJNW4t/zN +8ELg63iIVl6bmlQdTQyK9tPPcPRStdiTBONGhnFBSivwKixVA9ZIw+A5OO3yXDw/ +RLyTPWGrTs0NvvAgJ1gORH8EGoel15YUNpDQSXuhdfsaa3Ox+M6pCSzyU9XDFES4 +hqX2iys52qMzVNn6chr3IhUciJFrf2blw2qAsCTz34ZFiP0Zf3WHHx+xGwpzJFu5 +ZeAsVMhg02YXP+HMVDNzkQI6pn97djmiH5a2OK61yJN0HZ65tOVgnS9W0eDrXltM +EnAMbEQgqxHY9Bn20pxSN+f6tsIxO0rUFJmtxxr1XV/6B7h8DR/Wgx6zAgMBAAGj +QjBAMA8GA1UdEwEB/wQFMAMBAf8wDgYDVR0PAQH/BAQDAgEGMB0GA1UdDgQWBBS1 +A/d2O2GCahKqGFPrAyGUv/7OyjANBgkqhkiG9w0BAQsFAAOCAQEAVj3vlNW92nOy +WL6ukK2YJ5f+AbGwUgC4TeQbIXQbfsDuXmkqJa9c1h3a0nnJ85cp4IaH3gRZD/FZ +1GSFS5mvJQQeyUapl96Cshtwn5z2r3Ex3XsFpSzTucpH9sry9uetuUg/vBa3wW30 +6gmv7PO15wWeph6KU1HWk4HMdJP2udqmJQV0eVp+QD6CSyYRMG7hP0HHRwA11fXT +91Q+gT3aSWqas+8QPebrb9HIIkfLzM8BMZLZGOMivgkeGj5asuRrDFR6fUNOuIml +e9eiPZaGzPImNC1qkp2aGtAw4l1OBLBfiyB+d8E9lYLRRpo7PHi4b6HQDWSieB4p +TpPDpFQUWw== +-----END CERTIFICATE----- + +# Issuer: CN=EE Certification Centre Root CA O=AS Sertifitseerimiskeskus +# Subject: CN=EE Certification Centre Root CA O=AS Sertifitseerimiskeskus +# Label: "EE Certification Centre Root CA" +# Serial: 112324828676200291871926431888494945866 +# MD5 Fingerprint: 43:5e:88:d4:7d:1a:4a:7e:fd:84:2e:52:eb:01:d4:6f +# SHA1 Fingerprint: c9:a8:b9:e7:55:80:5e:58:e3:53:77:a7:25:eb:af:c3:7b:27:cc:d7 +# SHA256 Fingerprint: 3e:84:ba:43:42:90:85:16:e7:75:73:c0:99:2f:09:79:ca:08:4e:46:85:68:1f:f1:95:cc:ba:8a:22:9b:8a:76 +-----BEGIN CERTIFICATE----- +MIIEAzCCAuugAwIBAgIQVID5oHPtPwBMyonY43HmSjANBgkqhkiG9w0BAQUFADB1 +MQswCQYDVQQGEwJFRTEiMCAGA1UECgwZQVMgU2VydGlmaXRzZWVyaW1pc2tlc2t1 +czEoMCYGA1UEAwwfRUUgQ2VydGlmaWNhdGlvbiBDZW50cmUgUm9vdCBDQTEYMBYG +CSqGSIb3DQEJARYJcGtpQHNrLmVlMCIYDzIwMTAxMDMwMTAxMDMwWhgPMjAzMDEy +MTcyMzU5NTlaMHUxCzAJBgNVBAYTAkVFMSIwIAYDVQQKDBlBUyBTZXJ0aWZpdHNl +ZXJpbWlza2Vza3VzMSgwJgYDVQQDDB9FRSBDZXJ0aWZpY2F0aW9uIENlbnRyZSBS +b290IENBMRgwFgYJKoZIhvcNAQkBFglwa2lAc2suZWUwggEiMA0GCSqGSIb3DQEB +AQUAA4IBDwAwggEKAoIBAQDIIMDs4MVLqwd4lfNE7vsLDP90jmG7sWLqI9iroWUy +euuOF0+W2Ap7kaJjbMeMTC55v6kF/GlclY1i+blw7cNRfdCT5mzrMEvhvH2/UpvO +bntl8jixwKIy72KyaOBhU8E2lf/slLo2rpwcpzIP5Xy0xm90/XsY6KxX7QYgSzIw 
+WFv9zajmofxwvI6Sc9uXp3whrj3B9UiHbCe9nyV0gVWw93X2PaRka9ZP585ArQ/d +MtO8ihJTmMmJ+xAdTX7Nfh9WDSFwhfYggx/2uh8Ej+p3iDXE/+pOoYtNP2MbRMNE +1CV2yreN1x5KZmTNXMWcg+HCCIia7E6j8T4cLNlsHaFLAgMBAAGjgYowgYcwDwYD +VR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMCAQYwHQYDVR0OBBYEFBLyWj7qVhy/ +zQas8fElyalL1BSZMEUGA1UdJQQ+MDwGCCsGAQUFBwMCBggrBgEFBQcDAQYIKwYB +BQUHAwMGCCsGAQUFBwMEBggrBgEFBQcDCAYIKwYBBQUHAwkwDQYJKoZIhvcNAQEF +BQADggEBAHv25MANqhlHt01Xo/6tu7Fq1Q+e2+RjxY6hUFaTlrg4wCQiZrxTFGGV +v9DHKpY5P30osxBAIWrEr7BSdxjhlthWXePdNl4dp1BUoMUq5KqMlIpPnTX/dqQG +E5Gion0ARD9V04I8GtVbvFZMIi5GQ4okQC3zErg7cBqklrkar4dBGmoYDQZPxz5u +uSlNDUmJEYcyW+ZLBMjkXOZ0c5RdFpgTlf7727FE5TpwrDdr5rMzcijJs1eg9gIW +iAYLtqZLICjU3j2LrTcFU3T+bsy8QxdxXvnFzBqpYe73dgzzcvRyrc9yAjYHR8/v +GVCJYMzpJJUPwssd8m92kMfMdcGWxZ0= +-----END CERTIFICATE----- + +# Issuer: CN=D-TRUST Root Class 3 CA 2 2009 O=D-Trust GmbH +# Subject: CN=D-TRUST Root Class 3 CA 2 2009 O=D-Trust GmbH +# Label: "D-TRUST Root Class 3 CA 2 2009" +# Serial: 623603 +# MD5 Fingerprint: cd:e0:25:69:8d:47:ac:9c:89:35:90:f7:fd:51:3d:2f +# SHA1 Fingerprint: 58:e8:ab:b0:36:15:33:fb:80:f7:9b:1b:6d:29:d3:ff:8d:5f:00:f0 +# SHA256 Fingerprint: 49:e7:a4:42:ac:f0:ea:62:87:05:00:54:b5:25:64:b6:50:e4:f4:9e:42:e3:48:d6:aa:38:e0:39:e9:57:b1:c1 +-----BEGIN CERTIFICATE----- +MIIEMzCCAxugAwIBAgIDCYPzMA0GCSqGSIb3DQEBCwUAME0xCzAJBgNVBAYTAkRF +MRUwEwYDVQQKDAxELVRydXN0IEdtYkgxJzAlBgNVBAMMHkQtVFJVU1QgUm9vdCBD +bGFzcyAzIENBIDIgMjAwOTAeFw0wOTExMDUwODM1NThaFw0yOTExMDUwODM1NTha +ME0xCzAJBgNVBAYTAkRFMRUwEwYDVQQKDAxELVRydXN0IEdtYkgxJzAlBgNVBAMM +HkQtVFJVU1QgUm9vdCBDbGFzcyAzIENBIDIgMjAwOTCCASIwDQYJKoZIhvcNAQEB +BQADggEPADCCAQoCggEBANOySs96R+91myP6Oi/WUEWJNTrGa9v+2wBoqOADER03 +UAifTUpolDWzU9GUY6cgVq/eUXjsKj3zSEhQPgrfRlWLJ23DEE0NkVJD2IfgXU42 +tSHKXzlABF9bfsyjxiupQB7ZNoTWSPOSHjRGICTBpFGOShrvUD9pXRl/RcPHAY9R +ySPocq60vFYJfxLLHLGvKZAKyVXMD9O0Gu1HNVpK7ZxzBCHQqr0ME7UAyiZsxGsM +lFqVlNpQmvH/pStmMaTJOKDfHR+4CS7zp+hnUquVH+BGPtikw8paxTGA6Eian5Rp +/hnd2HN8gcqW3o7tszIFZYQ05ub9VxC1X3a/L7AQDcUCAwEAAaOCARowggEWMA8G +A1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFP3aFMSfMN4hvR5COfyrYyNJ4PGEMA4G +A1UdDwEB/wQEAwIBBjCB0wYDVR0fBIHLMIHIMIGAoH6gfIZ6bGRhcDovL2RpcmVj +dG9yeS5kLXRydXN0Lm5ldC9DTj1ELVRSVVNUJTIwUm9vdCUyMENsYXNzJTIwMyUy +MENBJTIwMiUyMDIwMDksTz1ELVRydXN0JTIwR21iSCxDPURFP2NlcnRpZmljYXRl +cmV2b2NhdGlvbmxpc3QwQ6BBoD+GPWh0dHA6Ly93d3cuZC10cnVzdC5uZXQvY3Js +L2QtdHJ1c3Rfcm9vdF9jbGFzc18zX2NhXzJfMjAwOS5jcmwwDQYJKoZIhvcNAQEL +BQADggEBAH+X2zDI36ScfSF6gHDOFBJpiBSVYEQBrLLpME+bUMJm2H6NMLVwMeni +acfzcNsgFYbQDfC+rAF1hM5+n02/t2A7nPPKHeJeaNijnZflQGDSNiH+0LS4F9p0 +o3/U37CYAqxva2ssJSRyoWXuJVrl5jLn8t+rSfrzkGkj2wTZ51xY/GXUl77M/C4K +zCUqNQT4YJEVdT1B/yMfGchs64JTBKbkTCJNjYy6zltz7GRUUG3RnFX7acM2w4y8 +PIWmawomDeCTmGCufsYkl4phX5GOZpIJhzbNi5stPvZR1FDUWSi9g/LMKHtThm3Y +Johw1+qRzT65ysCQblrGXnRl11z+o+I= +-----END CERTIFICATE----- + +# Issuer: CN=D-TRUST Root Class 3 CA 2 EV 2009 O=D-Trust GmbH +# Subject: CN=D-TRUST Root Class 3 CA 2 EV 2009 O=D-Trust GmbH +# Label: "D-TRUST Root Class 3 CA 2 EV 2009" +# Serial: 623604 +# MD5 Fingerprint: aa:c6:43:2c:5e:2d:cd:c4:34:c0:50:4f:11:02:4f:b6 +# SHA1 Fingerprint: 96:c9:1b:0b:95:b4:10:98:42:fa:d0:d8:22:79:fe:60:fa:b9:16:83 +# SHA256 Fingerprint: ee:c5:49:6b:98:8c:e9:86:25:b9:34:09:2e:ec:29:08:be:d0:b0:f3:16:c2:d4:73:0c:84:ea:f1:f3:d3:48:81 +-----BEGIN CERTIFICATE----- +MIIEQzCCAyugAwIBAgIDCYP0MA0GCSqGSIb3DQEBCwUAMFAxCzAJBgNVBAYTAkRF +MRUwEwYDVQQKDAxELVRydXN0IEdtYkgxKjAoBgNVBAMMIUQtVFJVU1QgUm9vdCBD +bGFzcyAzIENBIDIgRVYgMjAwOTAeFw0wOTExMDUwODUwNDZaFw0yOTExMDUwODUw +NDZaMFAxCzAJBgNVBAYTAkRFMRUwEwYDVQQKDAxELVRydXN0IEdtYkgxKjAoBgNV 
+BAMMIUQtVFJVU1QgUm9vdCBDbGFzcyAzIENBIDIgRVYgMjAwOTCCASIwDQYJKoZI +hvcNAQEBBQADggEPADCCAQoCggEBAJnxhDRwui+3MKCOvXwEz75ivJn9gpfSegpn +ljgJ9hBOlSJzmY3aFS3nBfwZcyK3jpgAvDw9rKFs+9Z5JUut8Mxk2og+KbgPCdM0 +3TP1YtHhzRnp7hhPTFiu4h7WDFsVWtg6uMQYZB7jM7K1iXdODL/ZlGsTl28So/6Z +qQTMFexgaDbtCHu39b+T7WYxg4zGcTSHThfqr4uRjRxWQa4iN1438h3Z0S0NL2lR +p75mpoo6Kr3HGrHhFPC+Oh25z1uxav60sUYgovseO3Dvk5h9jHOW8sXvhXCtKSb8 +HgQ+HKDYD8tSg2J87otTlZCpV6LqYQXY+U3EJ/pure3511H3a6UCAwEAAaOCASQw +ggEgMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFNOUikxiEyoZLsyvcop9Ntea +HNxnMA4GA1UdDwEB/wQEAwIBBjCB3QYDVR0fBIHVMIHSMIGHoIGEoIGBhn9sZGFw +Oi8vZGlyZWN0b3J5LmQtdHJ1c3QubmV0L0NOPUQtVFJVU1QlMjBSb290JTIwQ2xh +c3MlMjAzJTIwQ0ElMjAyJTIwRVYlMjAyMDA5LE89RC1UcnVzdCUyMEdtYkgsQz1E +RT9jZXJ0aWZpY2F0ZXJldm9jYXRpb25saXN0MEagRKBChkBodHRwOi8vd3d3LmQt +dHJ1c3QubmV0L2NybC9kLXRydXN0X3Jvb3RfY2xhc3NfM19jYV8yX2V2XzIwMDku +Y3JsMA0GCSqGSIb3DQEBCwUAA4IBAQA07XtaPKSUiO8aEXUHL7P+PPoeUSbrh/Yp +3uDx1MYkCenBz1UbtDDZzhr+BlGmFaQt77JLvyAoJUnRpjZ3NOhk31KxEcdzes05 +nsKtjHEh8lprr988TlWvsoRlFIm5d8sqMb7Po23Pb0iUMkZv53GMoKaEGTcH8gNF +CSuGdXzfX2lXANtu2KZyIktQ1HWYVt+3GP9DQ1CuekR78HlR10M9p9OB0/DJT7na +xpeG0ILD5EJt/rDiZE4OJudANCa1CInXCGNjOCd1HjPqbqjdn5lPdE2BiYBL3ZqX +KVwvvoFBuYz/6n1gBp7N1z3TLqMVvKjmJuVvw9y4AyHqnxbxLFS1 +-----END CERTIFICATE----- + +# Issuer: CN=CA Disig Root R2 O=Disig a.s. +# Subject: CN=CA Disig Root R2 O=Disig a.s. +# Label: "CA Disig Root R2" +# Serial: 10572350602393338211 +# MD5 Fingerprint: 26:01:fb:d8:27:a7:17:9a:45:54:38:1a:43:01:3b:03 +# SHA1 Fingerprint: b5:61:eb:ea:a4:de:e4:25:4b:69:1a:98:a5:57:47:c2:34:c7:d9:71 +# SHA256 Fingerprint: e2:3d:4a:03:6d:7b:70:e9:f5:95:b1:42:20:79:d2:b9:1e:df:bb:1f:b6:51:a0:63:3e:aa:8a:9d:c5:f8:07:03 +-----BEGIN CERTIFICATE----- +MIIFaTCCA1GgAwIBAgIJAJK4iNuwisFjMA0GCSqGSIb3DQEBCwUAMFIxCzAJBgNV +BAYTAlNLMRMwEQYDVQQHEwpCcmF0aXNsYXZhMRMwEQYDVQQKEwpEaXNpZyBhLnMu +MRkwFwYDVQQDExBDQSBEaXNpZyBSb290IFIyMB4XDTEyMDcxOTA5MTUzMFoXDTQy +MDcxOTA5MTUzMFowUjELMAkGA1UEBhMCU0sxEzARBgNVBAcTCkJyYXRpc2xhdmEx +EzARBgNVBAoTCkRpc2lnIGEucy4xGTAXBgNVBAMTEENBIERpc2lnIFJvb3QgUjIw +ggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQCio8QACdaFXS1tFPbCw3Oe +NcJxVX6B+6tGUODBfEl45qt5WDza/3wcn9iXAng+a0EE6UG9vgMsRfYvZNSrXaNH +PWSb6WiaxswbP7q+sos0Ai6YVRn8jG+qX9pMzk0DIaPY0jSTVpbLTAwAFjxfGs3I +x2ymrdMxp7zo5eFm1tL7A7RBZckQrg4FY8aAamkw/dLukO8NJ9+flXP04SXabBbe +QTg06ov80egEFGEtQX6sx3dOy1FU+16SGBsEWmjGycT6txOgmLcRK7fWV8x8nhfR +yyX+hk4kLlYMeE2eARKmK6cBZW58Yh2EhN/qwGu1pSqVg8NTEQxzHQuyRpDRQjrO +QG6Vrf/GlK1ul4SOfW+eioANSW1z4nuSHsPzwfPrLgVv2RvPN3YEyLRa5Beny912 +H9AZdugsBbPWnDTYltxhh5EF5EQIM8HauQhl1K6yNg3ruji6DOWbnuuNZt2Zz9aJ +QfYEkoopKW1rOhzndX0CcQ7zwOe9yxndnWCywmZgtrEE7snmhrmaZkCo5xHtgUUD +i/ZnWejBBhG93c+AAk9lQHhcR1DIm+YfgXvkRKhbhZri3lrVx/k6RGZL5DJUfORs +nLMOPReisjQS1n6yqEm70XooQL6iFh/f5DcfEXP7kAplQ6INfPgGAVUzfbANuPT1 +rqVCV3w2EYx7XsQDnYx5nQIDAQABo0IwQDAPBgNVHRMBAf8EBTADAQH/MA4GA1Ud +DwEB/wQEAwIBBjAdBgNVHQ4EFgQUtZn4r7CU9eMg1gqtzk5WpC5uQu0wDQYJKoZI +hvcNAQELBQADggIBACYGXnDnZTPIgm7ZnBc6G3pmsgH2eDtpXi/q/075KMOYKmFM +tCQSin1tERT3nLXK5ryeJ45MGcipvXrA1zYObYVybqjGom32+nNjf7xueQgcnYqf +GopTpti72TVVsRHFqQOzVju5hJMiXn7B9hJSi+osZ7z+Nkz1uM/Rs0mSO9MpDpkb +lvdhuDvEK7Z4bLQjb/D907JedR+Zlais9trhxTF7+9FGs9K8Z7RiVLoJ92Owk6Ka ++elSLotgEqv89WBW7xBci8QaQtyDW2QOy7W81k/BfDxujRNt+3vrMNDcTa/F1bal +TFtxyegxvug4BkihGuLq0t4SOVga/4AOgnXmt8kHbA7v/zjxmHHEt38OFdAlab0i +nSvtBfZGR6ztwPDUO+Ls7pZbkBNOHlY667DvlruWIxG68kOGdGSVyCh13x01utI3 +gzhTODY7z2zp+WsO0PsE6E9312UBeIYMej4hYvF/Y3EMyZ9E26gnonW+boE+18Dr +G5gPcFw0sorMwIUY6256s/daoQe/qUKS82Ail+QUoQebTnbAjn39pCXHR+3/H3Os 
+zMOl6W8KjptlwlCFtaOgUxLMVYdh84GuEEZhvUQhuMI9dM9+JDX6HAcOmz0iyu8x
+L4ysEr3vQCj8KWefshNPZiTEUxnpHikV7+ZtsH8tZ/3zbBt1RqPlShfppNcL
+-----END CERTIFICATE-----
+
+# Issuer: CN=ACCVRAIZ1 O=ACCV OU=PKIACCV
+# Subject: CN=ACCVRAIZ1 O=ACCV OU=PKIACCV
+# Label: "ACCVRAIZ1"
+# Serial: 6828503384748696800
+# MD5 Fingerprint: d0:a0:5a:ee:05:b6:09:94:21:a1:7d:f1:b2:29:82:02
+# SHA1 Fingerprint: 93:05:7a:88:15:c6:4f:ce:88:2f:fa:91:16:52:28:78:bc:53:64:17
+# SHA256 Fingerprint: 9a:6e:c0:12:e1:a7:da:9d:be:34:19:4d:47:8a:d7:c0:db:18:22:fb:07:1d:f1:29:81:49:6e:d1:04:38:41:13
+-----BEGIN CERTIFICATE-----
+MIIH0zCCBbugAwIBAgIIXsO3pkN/pOAwDQYJKoZIhvcNAQEFBQAwQjESMBAGA1UE
+AwwJQUNDVlJBSVoxMRAwDgYDVQQLDAdQS0lBQ0NWMQ0wCwYDVQQKDARBQ0NWMQsw
+CQYDVQQGEwJFUzAeFw0xMTA1MDUwOTM3MzdaFw0zMDEyMzEwOTM3MzdaMEIxEjAQ
+BgNVBAMMCUFDQ1ZSQUlaMTEQMA4GA1UECwwHUEtJQUNDVjENMAsGA1UECgwEQUND
+VjELMAkGA1UEBhMCRVMwggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQCb
+qau/YUqXry+XZpp0X9DZlv3P4uRm7x8fRzPCRKPfmt4ftVTdFXxpNRFvu8gMjmoY
+HtiP2Ra8EEg2XPBjs5BaXCQ316PWywlxufEBcoSwfdtNgM3802/J+Nq2DoLSRYWo
+G2ioPej0RGy9ocLLA76MPhMAhN9KSMDjIgro6TenGEyxCQ0jVn8ETdkXhBilyNpA
+lHPrzg5XPAOBOp0KoVdDaaxXbXmQeOW1tDvYvEyNKKGno6e6Ak4l0Squ7a4DIrhr
+IA8wKFSVf+DuzgpmndFALW4ir50awQUZ0m/A8p/4e7MCQvtQqR0tkw8jq8bBD5L/
+0KIV9VMJcRz/RROE5iZe+OCIHAr8Fraocwa48GOEAqDGWuzndN9wrqODJerWx5eH
+k6fGioozl2A3ED6XPm4pFdahD9GILBKfb6qkxkLrQaLjlUPTAYVtjrs78yM2x/47
+4KElB0iryYl0/wiPgL/AlmXz7uxLaL2diMMxs0Dx6M/2OLuc5NF/1OVYm3z61PMO
+m3WR5LpSLhl+0fXNWhn8ugb2+1KoS5kE3fj5tItQo05iifCHJPqDQsGH+tUtKSpa
+cXpkatcnYGMN285J9Y0fkIkyF/hzQ7jSWpOGYdbhdQrqeWZ2iE9x6wQl1gpaepPl
+uUsXQA+xtrn13k/c4LOsOxFwYIRKQ26ZIMApcQrAZQIDAQABo4ICyzCCAscwfQYI
+KwYBBQUHAQEEcTBvMEwGCCsGAQUFBzAChkBodHRwOi8vd3d3LmFjY3YuZXMvZmls
+ZWFkbWluL0FyY2hpdm9zL2NlcnRpZmljYWRvcy9yYWl6YWNjdjEuY3J0MB8GCCsG
+AQUFBzABhhNodHRwOi8vb2NzcC5hY2N2LmVzMB0GA1UdDgQWBBTSh7Tj3zcnk1X2
+VuqB5TbMjB4/vTAPBgNVHRMBAf8EBTADAQH/MB8GA1UdIwQYMBaAFNKHtOPfNyeT
+VfZW6oHlNsyMHj+9MIIBcwYDVR0gBIIBajCCAWYwggFiBgRVHSAAMIIBWDCCASIG
+CCsGAQUFBwICMIIBFB6CARAAQQB1AHQAbwByAGkAZABhAGQAIABkAGUAIABDAGUA
+cgB0AGkAZgBpAGMAYQBjAGkA8wBuACAAUgBhAO0AegAgAGQAZQAgAGwAYQAgAEEA
+QwBDAFYAIAAoAEEAZwBlAG4AYwBpAGEAIABkAGUAIABUAGUAYwBuAG8AbABvAGcA
+7QBhACAAeQAgAEMAZQByAHQAaQBmAGkAYwBhAGMAaQDzAG4AIABFAGwAZQBjAHQA
+cgDzAG4AaQBjAGEALAAgAEMASQBGACAAUQA0ADYAMAAxADEANQA2AEUAKQAuACAA
+QwBQAFMAIABlAG4AIABoAHQAdABwADoALwAvAHcAdwB3AC4AYQBjAGMAdgAuAGUA
+czAwBggrBgEFBQcCARYkaHR0cDovL3d3dy5hY2N2LmVzL2xlZ2lzbGFjaW9uX2Mu
+aHRtMFUGA1UdHwROMEwwSqBIoEaGRGh0dHA6Ly93d3cuYWNjdi5lcy9maWxlYWRt
+aW4vQXJjaGl2b3MvY2VydGlmaWNhZG9zL3JhaXphY2N2MV9kZXIuY3JsMA4GA1Ud
+DwEB/wQEAwIBBjAXBgNVHREEEDAOgQxhY2N2QGFjY3YuZXMwDQYJKoZIhvcNAQEF
+BQADggIBAJcxAp/n/UNnSEQU5CmH7UwoZtCPNdpNYbdKl02125DgBS4OxnnQ8pdp
+D70ER9m+27Up2pvZrqmZ1dM8MJP1jaGo/AaNRPTKFpV8M9xii6g3+CfYCS0b78gU
+JyCpZET/LtZ1qmxNYEAZSUNUY9rizLpm5U9EelvZaoErQNV/+QEnWCzI7UiRfD+m
+AM/EKXMRNt6GGT6d7hmKG9Ww7Y49nCrADdg9ZuM8Db3VlFzi4qc1GwQA9j9ajepD
+vV+JHanBsMyZ4k0ACtrJJ1vnE5Bc5PUzolVt3OAJTS+xJlsndQAJxGJ3KQhfnlms
+tn6tn1QwIgPBHnFk/vk4CpYY3QIUrCPLBhwepH2NDd4nQeit2hW3sCPdK6jT2iWH
+7ehVRE2I9DZ+hJp4rPcOVkkO1jMl1oRQQmwgEh0q1b688nCBpHBgvgW1m54ERL5h
+I6zppSSMEYCUWqKiuUnSwdzRp+0xESyeGabu4VXhwOrPDYTkF7eifKXeVSUG7szA
+h1xA2syVP1XgNce4hL60Xc16gwFy7ofmXx2utYXGJt/mwZrpHgJHnyqobalbz+xF
+d3+YJ5oyXSrjhO7FmGYvliAd3djDJ9ew+f7Zfc3Qn48LFFhRny+Lwzgt3uiP1o2H
+pPVWQxaZLPSkVrQ0uGE3ycJYgBugl6H8WY3pEfbRD0tVNEYqi4Y7
+-----END CERTIFICATE-----
+
+# Issuer: CN=TWCA Global Root CA O=TAIWAN-CA OU=Root CA
+# Subject: CN=TWCA Global Root CA O=TAIWAN-CA OU=Root CA
+# Label: "TWCA Global Root CA"
+# Serial: 3262
+# MD5 Fingerprint: f9:03:7e:cf:e6:9e:3c:73:7a:2a:90:07:69:ff:2b:96
+# SHA1 Fingerprint: 9c:bb:48:53:f6:a4:f6:d3:52:a4:e8:32:52:55:60:13:f5:ad:af:65
+# SHA256 Fingerprint: 59:76:90:07:f7:68:5d:0f:cd:50:87:2f:9f:95:d5:75:5a:5b:2b:45:7d:81:f3:69:2b:61:0a:98:67:2f:0e:1b
+-----BEGIN CERTIFICATE-----
+MIIFQTCCAymgAwIBAgICDL4wDQYJKoZIhvcNAQELBQAwUTELMAkGA1UEBhMCVFcx
+EjAQBgNVBAoTCVRBSVdBTi1DQTEQMA4GA1UECxMHUm9vdCBDQTEcMBoGA1UEAxMT
+VFdDQSBHbG9iYWwgUm9vdCBDQTAeFw0xMjA2MjcwNjI4MzNaFw0zMDEyMzExNTU5
+NTlaMFExCzAJBgNVBAYTAlRXMRIwEAYDVQQKEwlUQUlXQU4tQ0ExEDAOBgNVBAsT
+B1Jvb3QgQ0ExHDAaBgNVBAMTE1RXQ0EgR2xvYmFsIFJvb3QgQ0EwggIiMA0GCSqG
+SIb3DQEBAQUAA4ICDwAwggIKAoICAQCwBdvI64zEbooh745NnHEKH1Jw7W2CnJfF
+10xORUnLQEK1EjRsGcJ0pDFfhQKX7EMzClPSnIyOt7h52yvVavKOZsTuKwEHktSz
+0ALfUPZVr2YOy+BHYC8rMjk1Ujoog/h7FsYYuGLWRyWRzvAZEk2tY/XTP3VfKfCh
+MBwqoJimFb3u/Rk28OKRQ4/6ytYQJ0lM793B8YVwm8rqqFpD/G2Gb3PpN0Wp8DbH
+zIh1HrtsBv+baz4X7GGqcXzGHaL3SekVtTzWoWH1EfcFbx39Eb7QMAfCKbAJTibc
+46KokWofwpFFiFzlmLhxpRUZyXx1EcxwdE8tmx2RRP1WKKD+u4ZqyPpcC1jcxkt2
+yKsi2XMPpfRaAok/T54igu6idFMqPVMnaR1sjjIsZAAmY2E2TqNGtz99sy2sbZCi
+laLOz9qC5wc0GZbpuCGqKX6mOL6OKUohZnkfs8O1CWfe1tQHRvMq2uYiN2DLgbYP
+oA/pyJV/v1WRBXrPPRXAb94JlAGD1zQbzECl8LibZ9WYkTunhHiVJqRaCPgrdLQA
+BDzfuBSO6N+pjWxnkjMdwLfS7JLIvgm/LCkFbwJrnu+8vyq8W8BQj0FwcYeyTbcE
+qYSjMq+u7msXi7Kx/mzhkIyIqJdIzshNy/MGz19qCkKxHh53L46g5pIOBvwFItIm
+4TFRfTLcDwIDAQABoyMwITAOBgNVHQ8BAf8EBAMCAQYwDwYDVR0TAQH/BAUwAwEB
+/zANBgkqhkiG9w0BAQsFAAOCAgEAXzSBdu+WHdXltdkCY4QWwa6gcFGn90xHNcgL
+1yg9iXHZqjNB6hQbbCEAwGxCGX6faVsgQt+i0trEfJdLjbDorMjupWkEmQqSpqsn
+LhpNgb+E1HAerUf+/UqdM+DyucRFCCEK2mlpc3INvjT+lIutwx4116KD7+U4x6WF
+H6vPNOw/KP4M8VeGTslV9xzU2KV9Bnpv1d8Q34FOIWWxtuEXeZVFBs5fzNxGiWNo
+RI2T9GRwoD2dKAXDOXC4Ynsg/eTb6QihuJ49CcdP+yz4k3ZB3lLg4VfSnQO8d57+
+nile98FRYB/e2guyLXW3Q0iT5/Z5xoRdgFlglPx4mI88k1HtQJAH32RjJMtOcQWh
+15QaiDLxInQirqWm2BJpTGCjAu4r7NRjkgtevi92a6O2JryPA9gK8kxkRr05YuWW
+6zRjESjMlfGt7+/cgFhI6Uu46mWs6fyAtbXIRfmswZ/ZuepiiI7E8UuDEq3mi4TW
+nsLrgxifarsbJGAzcMzs9zLzXNl5fe+epP7JI8Mk7hWSsT2RTyaGvWZzJBPqpK5j
+wa19hAM8EHiGG3njxPPyBJUgriOCxLM6AGK/5jYk4Ve6xx6QddVfP5VhK8E7zeWz
+aGHQRiapIVJpLesux+t3zqY6tQMzT3bR51xUAV3LePTJDL/PEo4XLSNolOer/qmy
+KwbQBM0=
+-----END CERTIFICATE-----
+
+# Issuer: CN=TeliaSonera Root CA v1 O=TeliaSonera
+# Subject: CN=TeliaSonera Root CA v1 O=TeliaSonera
+# Label: "TeliaSonera Root CA v1"
+# Serial: 199041966741090107964904287217786801558
+# MD5 Fingerprint: 37:41:49:1b:18:56:9a:26:f5:ad:c2:66:fb:40:a5:4c
+# SHA1 Fingerprint: 43:13:bb:96:f1:d5:86:9b:c1:4e:6a:92:f6:cf:f6:34:69:87:82:37
+# SHA256 Fingerprint: dd:69:36:fe:21:f8:f0:77:c1:23:a1:a5:21:c1:22:24:f7:22:55:b7:3e:03:a7:26:06:93:e8:a2:4b:0f:a3:89
+-----BEGIN CERTIFICATE-----
+MIIFODCCAyCgAwIBAgIRAJW+FqD3LkbxezmCcvqLzZYwDQYJKoZIhvcNAQEFBQAw
+NzEUMBIGA1UECgwLVGVsaWFTb25lcmExHzAdBgNVBAMMFlRlbGlhU29uZXJhIFJv
+b3QgQ0EgdjEwHhcNMDcxMDE4MTIwMDUwWhcNMzIxMDE4MTIwMDUwWjA3MRQwEgYD
+VQQKDAtUZWxpYVNvbmVyYTEfMB0GA1UEAwwWVGVsaWFTb25lcmEgUm9vdCBDQSB2
+MTCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBAMK+6yfwIaPzaSZVfp3F
+VRaRXP3vIb9TgHot0pGMYzHw7CTww6XScnwQbfQ3t+XmfHnqjLWCi65ItqwA3GV1
+7CpNX8GH9SBlK4GoRz6JI5UwFpB/6FcHSOcZrr9FZ7E3GwYq/t75rH2D+1665I+X
+Z75Ljo1kB1c4VWk0Nj0TSO9P4tNmHqTPGrdeNjPUtAa9GAH9d4RQAEX1jF3oI7x+
+/jXh7VB7qTCNGdMJjmhnXb88lxhTuylixcpecsHHltTbLaC0H2kD7OriUPEMPPCs
+81Mt8Bz17Ww5OXOAFshSsCPN4D7c3TxHoLs1iuKYaIu+5b9y7tL6pe0S7fyYGKkm
+dtwoSxAgHNN/Fnct7W+A90m7UwW7XWjH1Mh1Fj+JWov3F0fUTPHSiXk+TT2YqGHe
+Oh7S+F4D4MHJHIzTjU3TlTazN19jY5szFPAtJmtTfImMMsJu7D0hADnJoWjiUIMu
+sDor8zagrC/kb2HCUQk5PotTubtn2txTuXZZNp1D5SDgPTJghSJRt8czu90VL6R4
+pgd7gUY2BIbdeTXHlSw7sKMXNeVzH7RcWe/a6hBle3rQf5+ztCo3O3CLm1u5K7fs +slESl1MpWtTwEhDcTwK7EpIvYtQ/aUN8Ddb8WHUBiJ1YFkveupD/RwGJBmr2X7KQ +arMCpgKIv7NHfirZ1fpoeDVNAgMBAAGjPzA9MA8GA1UdEwEB/wQFMAMBAf8wCwYD +VR0PBAQDAgEGMB0GA1UdDgQWBBTwj1k4ALP1j5qWDNXr+nuqF+gTEjANBgkqhkiG +9w0BAQUFAAOCAgEAvuRcYk4k9AwI//DTDGjkk0kiP0Qnb7tt3oNmzqjMDfz1mgbl +dxSR651Be5kqhOX//CHBXfDkH1e3damhXwIm/9fH907eT/j3HEbAek9ALCI18Bmx +0GtnLLCo4MBANzX2hFxc469CeP6nyQ1Q6g2EdvZR74NTxnr/DlZJLo961gzmJ1Tj +TQpgcmLNkQfWpb/ImWvtxBnmq0wROMVvMeJuScg/doAmAyYp4Db29iBT4xdwNBed +Y2gea+zDTYa4EzAvXUYNR0PVG6pZDrlcjQZIrXSHX8f8MVRBE+LHIQ6e4B4N4cB7 +Q4WQxYpYxmUKeFfyxiMPAdkgS94P+5KFdSpcc41teyWRyu5FrgZLAMzTsVlQ2jqI +OylDRl6XK1TOU2+NSueW+r9xDkKLfP0ooNBIytrEgUy7onOTJsjrDNYmiLbAJM+7 +vVvrdX3pCI6GMyx5dwlppYn8s3CQh3aP0yK7Qs69cwsgJirQmz1wHiRszYd2qReW +t88NkvuOGKmYSdGe/mBEciG5Ge3C9THxOUiIkCR1VBatzvT4aRRkOfujuLpwQMcn +HL/EVlP6Y2XQ8xwOFvVrhlhNGNTkDY6lnVuR3HYkUD/GKvvZt5y11ubQ2egZixVx +SK236thZiNSQvxaz2emsWWFUyBy6ysHK4bkgTI86k4mloMy/0/Z1pHWWbVY= +-----END CERTIFICATE----- + +# Issuer: CN=E-Tugra Certification Authority O=E-Tu\u011fra EBG Bili\u015fim Teknolojileri ve Hizmetleri A.\u015e. OU=E-Tugra Sertifikasyon Merkezi +# Subject: CN=E-Tugra Certification Authority O=E-Tu\u011fra EBG Bili\u015fim Teknolojileri ve Hizmetleri A.\u015e. OU=E-Tugra Sertifikasyon Merkezi +# Label: "E-Tugra Certification Authority" +# Serial: 7667447206703254355 +# MD5 Fingerprint: b8:a1:03:63:b0:bd:21:71:70:8a:6f:13:3a:bb:79:49 +# SHA1 Fingerprint: 51:c6:e7:08:49:06:6e:f3:92:d4:5c:a0:0d:6d:a3:62:8f:c3:52:39 +# SHA256 Fingerprint: b0:bf:d5:2b:b0:d7:d9:bd:92:bf:5d:4d:c1:3d:a2:55:c0:2c:54:2f:37:83:65:ea:89:39:11:f5:5e:55:f2:3c +-----BEGIN CERTIFICATE----- +MIIGSzCCBDOgAwIBAgIIamg+nFGby1MwDQYJKoZIhvcNAQELBQAwgbIxCzAJBgNV +BAYTAlRSMQ8wDQYDVQQHDAZBbmthcmExQDA+BgNVBAoMN0UtVHXEn3JhIEVCRyBC +aWxpxZ9pbSBUZWtub2xvamlsZXJpIHZlIEhpem1ldGxlcmkgQS7Fni4xJjAkBgNV +BAsMHUUtVHVncmEgU2VydGlmaWthc3lvbiBNZXJrZXppMSgwJgYDVQQDDB9FLVR1 +Z3JhIENlcnRpZmljYXRpb24gQXV0aG9yaXR5MB4XDTEzMDMwNTEyMDk0OFoXDTIz +MDMwMzEyMDk0OFowgbIxCzAJBgNVBAYTAlRSMQ8wDQYDVQQHDAZBbmthcmExQDA+ +BgNVBAoMN0UtVHXEn3JhIEVCRyBCaWxpxZ9pbSBUZWtub2xvamlsZXJpIHZlIEhp +em1ldGxlcmkgQS7Fni4xJjAkBgNVBAsMHUUtVHVncmEgU2VydGlmaWthc3lvbiBN +ZXJrZXppMSgwJgYDVQQDDB9FLVR1Z3JhIENlcnRpZmljYXRpb24gQXV0aG9yaXR5 +MIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEA4vU/kwVRHoViVF56C/UY +B4Oufq9899SKa6VjQzm5S/fDxmSJPZQuVIBSOTkHS0vdhQd2h8y/L5VMzH2nPbxH +D5hw+IyFHnSOkm0bQNGZDbt1bsipa5rAhDGvykPL6ys06I+XawGb1Q5KCKpbknSF +Q9OArqGIW66z6l7LFpp3RMih9lRozt6Plyu6W0ACDGQXwLWTzeHxE2bODHnv0ZEo +q1+gElIwcxmOj+GMB6LDu0rw6h8VqO4lzKRG+Bsi77MOQ7osJLjFLFzUHPhdZL3D +k14opz8n8Y4e0ypQBaNV2cvnOVPAmJ6MVGKLJrD3fY185MaeZkJVgkfnsliNZvcH +fC425lAcP9tDJMW/hkd5s3kc91r0E+xs+D/iWR+V7kI+ua2oMoVJl0b+SzGPWsut +dEcf6ZG33ygEIqDUD13ieU/qbIWGvaimzuT6w+Gzrt48Ue7LE3wBf4QOXVGUnhMM +ti6lTPk5cDZvlsouDERVxcr6XQKj39ZkjFqzAQqptQpHF//vkUAqjqFGOjGY5RH8 +zLtJVor8udBhmm9lbObDyz51Sf6Pp+KJxWfXnUYTTjF2OySznhFlhqt/7x3U+Lzn +rFpct1pHXFXOVbQicVtbC/DP3KBhZOqp12gKY6fgDT+gr9Oq0n7vUaDmUStVkhUX +U8u3Zg5mTPj5dUyQ5xJwx0UCAwEAAaNjMGEwHQYDVR0OBBYEFC7j27JJ0JxUeVz6 +Jyr+zE7S6E5UMA8GA1UdEwEB/wQFMAMBAf8wHwYDVR0jBBgwFoAULuPbsknQnFR5 +XPonKv7MTtLoTlQwDgYDVR0PAQH/BAQDAgEGMA0GCSqGSIb3DQEBCwUAA4ICAQAF +Nzr0TbdF4kV1JI+2d1LoHNgQk2Xz8lkGpD4eKexd0dCrfOAKkEh47U6YA5n+KGCR +HTAduGN8qOY1tfrTYXbm1gdLymmasoR6d5NFFxWfJNCYExL/u6Au/U5Mh/jOXKqY +GwXgAEZKgoClM4so3O0409/lPun++1ndYYRP0lSWE2ETPo+Aab6TR7U1Q9Jauz1c +77NCR807VRMGsAnb/WP2OogKmW9+4c4bU2pEZiNRCHu8W1Ki/QY3OEBhj0qWuJA3 ++GbHeJAAFS6LrVE1Uweoa2iu+U48BybNCAVwzDk/dr2l02cmAYamU9JgO3xDf1WK 
+vJUawSg5TB9D0pH0clmKuVb8P7Sd2nCcdlqMQ1DujjByTd//SffGqWfZbawCEeI6 +FiWnWAjLb1NBnEg4R2gz0dfHj9R0IdTDBZB6/86WiLEVKV0jq9BgoRJP3vQXzTLl +yb/IQ639Lo7xr+L0mPoSHyDYwKcMhcWQ9DstliaxLL5Mq+ux0orJ23gTDx4JnW2P +AJ8C2sH6H3p6CcRK5ogql5+Ji/03X186zjhZhkuvcQu02PJwT58yE+Owp1fl2tpD +y4Q08ijE6m30Ku/Ba3ba+367hTzSU8JNvnHhRdH9I2cNE3X7z2VnIp2usAnRCf8d +NL/+I5c30jn6PQ0GC7TbO6Orb1wdtn7os4I07QZcJA== +-----END CERTIFICATE----- + +# Issuer: CN=T-TeleSec GlobalRoot Class 2 O=T-Systems Enterprise Services GmbH OU=T-Systems Trust Center +# Subject: CN=T-TeleSec GlobalRoot Class 2 O=T-Systems Enterprise Services GmbH OU=T-Systems Trust Center +# Label: "T-TeleSec GlobalRoot Class 2" +# Serial: 1 +# MD5 Fingerprint: 2b:9b:9e:e4:7b:6c:1f:00:72:1a:cc:c1:77:79:df:6a +# SHA1 Fingerprint: 59:0d:2d:7d:88:4f:40:2e:61:7e:a5:62:32:17:65:cf:17:d8:94:e9 +# SHA256 Fingerprint: 91:e2:f5:78:8d:58:10:eb:a7:ba:58:73:7d:e1:54:8a:8e:ca:cd:01:45:98:bc:0b:14:3e:04:1b:17:05:25:52 +-----BEGIN CERTIFICATE----- +MIIDwzCCAqugAwIBAgIBATANBgkqhkiG9w0BAQsFADCBgjELMAkGA1UEBhMCREUx +KzApBgNVBAoMIlQtU3lzdGVtcyBFbnRlcnByaXNlIFNlcnZpY2VzIEdtYkgxHzAd +BgNVBAsMFlQtU3lzdGVtcyBUcnVzdCBDZW50ZXIxJTAjBgNVBAMMHFQtVGVsZVNl +YyBHbG9iYWxSb290IENsYXNzIDIwHhcNMDgxMDAxMTA0MDE0WhcNMzMxMDAxMjM1 +OTU5WjCBgjELMAkGA1UEBhMCREUxKzApBgNVBAoMIlQtU3lzdGVtcyBFbnRlcnBy +aXNlIFNlcnZpY2VzIEdtYkgxHzAdBgNVBAsMFlQtU3lzdGVtcyBUcnVzdCBDZW50 +ZXIxJTAjBgNVBAMMHFQtVGVsZVNlYyBHbG9iYWxSb290IENsYXNzIDIwggEiMA0G +CSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCqX9obX+hzkeXaXPSi5kfl82hVYAUd +AqSzm1nzHoqvNK38DcLZSBnuaY/JIPwhqgcZ7bBcrGXHX+0CfHt8LRvWurmAwhiC +FoT6ZrAIxlQjgeTNuUk/9k9uN0goOA/FvudocP05l03Sx5iRUKrERLMjfTlH6VJi +1hKTXrcxlkIF+3anHqP1wvzpesVsqXFP6st4vGCvx9702cu+fjOlbpSD8DT6Iavq +jnKgP6TeMFvvhk1qlVtDRKgQFRzlAVfFmPHmBiiRqiDFt1MmUUOyCxGVWOHAD3bZ +wI18gfNycJ5v/hqO2V81xrJvNHy+SE/iWjnX2J14np+GPgNeGYtEotXHAgMBAAGj +QjBAMA8GA1UdEwEB/wQFMAMBAf8wDgYDVR0PAQH/BAQDAgEGMB0GA1UdDgQWBBS/ +WSA2AHmgoCJrjNXyYdK4LMuCSjANBgkqhkiG9w0BAQsFAAOCAQEAMQOiYQsfdOhy +NsZt+U2e+iKo4YFWz827n+qrkRk4r6p8FU3ztqONpfSO9kSpp+ghla0+AGIWiPAC +uvxhI+YzmzB6azZie60EI4RYZeLbK4rnJVM3YlNfvNoBYimipidx5joifsFvHZVw +IEoHNN/q/xWA5brXethbdXwFeilHfkCoMRN3zUA7tFFHei4R40cR3p1m0IvVVGb6 +g1XqfMIpiRvpb7PO4gWEyS8+eIVibslfwXhjdFjASBgMmTnrpMwatXlajRWc2BQN +9noHV8cigwUtPJslJj0Ys6lDfMjIq2SPDqO/nBudMNva0Bkuqjzx+zOAduTNrRlP +BSeOE6Fuwg== +-----END CERTIFICATE----- + +# Issuer: CN=Atos TrustedRoot 2011 O=Atos +# Subject: CN=Atos TrustedRoot 2011 O=Atos +# Label: "Atos TrustedRoot 2011" +# Serial: 6643877497813316402 +# MD5 Fingerprint: ae:b9:c4:32:4b:ac:7f:5d:66:cc:77:94:bb:2a:77:56 +# SHA1 Fingerprint: 2b:b1:f5:3e:55:0c:1d:c5:f1:d4:e6:b7:6a:46:4b:55:06:02:ac:21 +# SHA256 Fingerprint: f3:56:be:a2:44:b7:a9:1e:b3:5d:53:ca:9a:d7:86:4a:ce:01:8e:2d:35:d5:f8:f9:6d:df:68:a6:f4:1a:a4:74 +-----BEGIN CERTIFICATE----- +MIIDdzCCAl+gAwIBAgIIXDPLYixfszIwDQYJKoZIhvcNAQELBQAwPDEeMBwGA1UE +AwwVQXRvcyBUcnVzdGVkUm9vdCAyMDExMQ0wCwYDVQQKDARBdG9zMQswCQYDVQQG +EwJERTAeFw0xMTA3MDcxNDU4MzBaFw0zMDEyMzEyMzU5NTlaMDwxHjAcBgNVBAMM +FUF0b3MgVHJ1c3RlZFJvb3QgMjAxMTENMAsGA1UECgwEQXRvczELMAkGA1UEBhMC +REUwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCVhTuXbyo7LjvPpvMp +Nb7PGKw+qtn4TaA+Gke5vJrf8v7MPkfoepbCJI419KkM/IL9bcFyYie96mvr54rM +VD6QUM+A1JX76LWC1BTFtqlVJVfbsVD2sGBkWXppzwO3bw2+yj5vdHLqqjAqc2K+ +SZFhyBH+DgMq92og3AIVDV4VavzjgsG1xZ1kCWyjWZgHJ8cblithdHFsQ/H3NYkQ +4J7sVaE3IqKHBAUsR320HLliKWYoyrfhk/WklAOZuXCFteZI6o1Q/NnezG8HDt0L +cp2AMBYHlT8oDv3FdU9T1nSatCQujgKRz3bFmx5VdJx4IbHwLfELn8LVlhgf8FQi +eowHAgMBAAGjfTB7MB0GA1UdDgQWBBSnpQaxLKYJYO7Rl+lwrrw7GWzbITAPBgNV 
+HRMBAf8EBTADAQH/MB8GA1UdIwQYMBaAFKelBrEspglg7tGX6XCuvDsZbNshMBgG +A1UdIAQRMA8wDQYLKwYBBAGwLQMEAQEwDgYDVR0PAQH/BAQDAgGGMA0GCSqGSIb3 +DQEBCwUAA4IBAQAmdzTblEiGKkGdLD4GkGDEjKwLVLgfuXvTBznk+j57sj1O7Z8j +vZfza1zv7v1Apt+hk6EKhqzvINB5Ab149xnYJDE0BAGmuhWawyfc2E8PzBhj/5kP +DpFrdRbhIfzYJsdHt6bPWHJxfrrhTZVHO8mvbaG0weyJ9rQPOLXiZNwlz6bb65pc +maHFCN795trV1lpFDMS3wrUU77QR/w4VtfX128a961qn8FYiqTxlVMYVqL2Gns2D +lmh6cYGJ4Qvh6hEbaAjMaZ7snkGeRDImeuKHCnE96+RapNLbxc3G3mB/ufNPRJLv +KrcYPqcZ2Qt9sTdBQrC6YB3y/gkRsPCHe6ed +-----END CERTIFICATE----- + +# Issuer: CN=QuoVadis Root CA 1 G3 O=QuoVadis Limited +# Subject: CN=QuoVadis Root CA 1 G3 O=QuoVadis Limited +# Label: "QuoVadis Root CA 1 G3" +# Serial: 687049649626669250736271037606554624078720034195 +# MD5 Fingerprint: a4:bc:5b:3f:fe:37:9a:fa:64:f0:e2:fa:05:3d:0b:ab +# SHA1 Fingerprint: 1b:8e:ea:57:96:29:1a:c9:39:ea:b8:0a:81:1a:73:73:c0:93:79:67 +# SHA256 Fingerprint: 8a:86:6f:d1:b2:76:b5:7e:57:8e:92:1c:65:82:8a:2b:ed:58:e9:f2:f2:88:05:41:34:b7:f1:f4:bf:c9:cc:74 +-----BEGIN CERTIFICATE----- +MIIFYDCCA0igAwIBAgIUeFhfLq0sGUvjNwc1NBMotZbUZZMwDQYJKoZIhvcNAQEL +BQAwSDELMAkGA1UEBhMCQk0xGTAXBgNVBAoTEFF1b1ZhZGlzIExpbWl0ZWQxHjAc +BgNVBAMTFVF1b1ZhZGlzIFJvb3QgQ0EgMSBHMzAeFw0xMjAxMTIxNzI3NDRaFw00 +MjAxMTIxNzI3NDRaMEgxCzAJBgNVBAYTAkJNMRkwFwYDVQQKExBRdW9WYWRpcyBM +aW1pdGVkMR4wHAYDVQQDExVRdW9WYWRpcyBSb290IENBIDEgRzMwggIiMA0GCSqG +SIb3DQEBAQUAA4ICDwAwggIKAoICAQCgvlAQjunybEC0BJyFuTHK3C3kEakEPBtV +wedYMB0ktMPvhd6MLOHBPd+C5k+tR4ds7FtJwUrVu4/sh6x/gpqG7D0DmVIB0jWe +rNrwU8lmPNSsAgHaJNM7qAJGr6Qc4/hzWHa39g6QDbXwz8z6+cZM5cOGMAqNF341 +68Xfuw6cwI2H44g4hWf6Pser4BOcBRiYz5P1sZK0/CPTz9XEJ0ngnjybCKOLXSoh +4Pw5qlPafX7PGglTvF0FBM+hSo+LdoINofjSxxR3W5A2B4GbPgb6Ul5jxaYA/qXp +UhtStZI5cgMJYr2wYBZupt0lwgNm3fME0UDiTouG9G/lg6AnhF4EwfWQvTA9xO+o +abw4m6SkltFi2mnAAZauy8RRNOoMqv8hjlmPSlzkYZqn0ukqeI1RPToV7qJZjqlc +3sX5kCLliEVx3ZGZbHqfPT2YfF72vhZooF6uCyP8Wg+qInYtyaEQHeTTRCOQiJ/G +KubX9ZqzWB4vMIkIG1SitZgj7Ah3HJVdYdHLiZxfokqRmu8hqkkWCKi9YSgxyXSt +hfbZxbGL0eUQMk1fiyA6PEkfM4VZDdvLCXVDaXP7a3F98N/ETH3Goy7IlXnLc6KO +Tk0k+17kBL5yG6YnLUlamXrXXAkgt3+UuU/xDRxeiEIbEbfnkduebPRq34wGmAOt +zCjvpUfzUwIDAQABo0IwQDAPBgNVHRMBAf8EBTADAQH/MA4GA1UdDwEB/wQEAwIB +BjAdBgNVHQ4EFgQUo5fW816iEOGrRZ88F2Q87gFwnMwwDQYJKoZIhvcNAQELBQAD +ggIBABj6W3X8PnrHX3fHyt/PX8MSxEBd1DKquGrX1RUVRpgjpeaQWxiZTOOtQqOC +MTaIzen7xASWSIsBx40Bz1szBpZGZnQdT+3Btrm0DWHMY37XLneMlhwqI2hrhVd2 +cDMT/uFPpiN3GPoajOi9ZcnPP/TJF9zrx7zABC4tRi9pZsMbj/7sPtPKlL92CiUN +qXsCHKnQO18LwIE6PWThv6ctTr1NxNgpxiIY0MWscgKCP6o6ojoilzHdCGPDdRS5 +YCgtW2jgFqlmgiNR9etT2DGbe+m3nUvriBbP+V04ikkwj+3x6xn0dxoxGE1nVGwv +b2X52z3sIexe9PSLymBlVNFxZPT5pqOBMzYzcfCkeF9OrYMh3jRJjehZrJ3ydlo2 +8hP0r+AJx2EqbPfgna67hkooby7utHnNkDPDs3b69fBsnQGQ+p6Q9pxyz0fawx/k +NSBT8lTR32GDpgLiJTjehTItXnOQUl1CxM49S+H5GYQd1aJQzEH7QRTDvdbJWqNj +ZgKAvQU6O0ec7AAmTPWIUb+oI38YB7AL7YsmoWTTYUrrXJ/es69nA7Mf3W1daWhp +q1467HxpvMc7hU6eFbm0FU/DlXpY18ls6Wy58yljXrQs8C097Vpl4KlbQMJImYFt +nh8GKjwStIsPm6Ik8KaN1nrgS7ZklmOVhMJKzRwuJIczYOXD +-----END CERTIFICATE----- + +# Issuer: CN=QuoVadis Root CA 2 G3 O=QuoVadis Limited +# Subject: CN=QuoVadis Root CA 2 G3 O=QuoVadis Limited +# Label: "QuoVadis Root CA 2 G3" +# Serial: 390156079458959257446133169266079962026824725800 +# MD5 Fingerprint: af:0c:86:6e:bf:40:2d:7f:0b:3e:12:50:ba:12:3d:06 +# SHA1 Fingerprint: 09:3c:61:f3:8b:8b:dc:7d:55:df:75:38:02:05:00:e1:25:f5:c8:36 +# SHA256 Fingerprint: 8f:e4:fb:0a:f9:3a:4d:0d:67:db:0b:eb:b2:3e:37:c7:1b:f3:25:dc:bc:dd:24:0e:a0:4d:af:58:b4:7e:18:40 +-----BEGIN CERTIFICATE----- +MIIFYDCCA0igAwIBAgIURFc0JFuBiZs18s64KztbpybwdSgwDQYJKoZIhvcNAQEL 
+BQAwSDELMAkGA1UEBhMCQk0xGTAXBgNVBAoTEFF1b1ZhZGlzIExpbWl0ZWQxHjAc +BgNVBAMTFVF1b1ZhZGlzIFJvb3QgQ0EgMiBHMzAeFw0xMjAxMTIxODU5MzJaFw00 +MjAxMTIxODU5MzJaMEgxCzAJBgNVBAYTAkJNMRkwFwYDVQQKExBRdW9WYWRpcyBM +aW1pdGVkMR4wHAYDVQQDExVRdW9WYWRpcyBSb290IENBIDIgRzMwggIiMA0GCSqG +SIb3DQEBAQUAA4ICDwAwggIKAoICAQChriWyARjcV4g/Ruv5r+LrI3HimtFhZiFf +qq8nUeVuGxbULX1QsFN3vXg6YOJkApt8hpvWGo6t/x8Vf9WVHhLL5hSEBMHfNrMW +n4rjyduYNM7YMxcoRvynyfDStNVNCXJJ+fKH46nafaF9a7I6JaltUkSs+L5u+9ym +c5GQYaYDFCDy54ejiK2toIz/pgslUiXnFgHVy7g1gQyjO/Dh4fxaXc6AcW34Sas+ +O7q414AB+6XrW7PFXmAqMaCvN+ggOp+oMiwMzAkd056OXbxMmO7FGmh77FOm6RQ1 +o9/NgJ8MSPsc9PG/Srj61YxxSscfrf5BmrODXfKEVu+lV0POKa2Mq1W/xPtbAd0j +IaFYAI7D0GoT7RPjEiuA3GfmlbLNHiJuKvhB1PLKFAeNilUSxmn1uIZoL1NesNKq +IcGY5jDjZ1XHm26sGahVpkUG0CM62+tlXSoREfA7T8pt9DTEceT/AFr2XK4jYIVz +8eQQsSWu1ZK7E8EM4DnatDlXtas1qnIhO4M15zHfeiFuuDIIfR0ykRVKYnLP43eh +vNURG3YBZwjgQQvD6xVu+KQZ2aKrr+InUlYrAoosFCT5v0ICvybIxo/gbjh9Uy3l +7ZizlWNof/k19N+IxWA1ksB8aRxhlRbQ694Lrz4EEEVlWFA4r0jyWbYW8jwNkALG +cC4BrTwV1wIDAQABo0IwQDAPBgNVHRMBAf8EBTADAQH/MA4GA1UdDwEB/wQEAwIB +BjAdBgNVHQ4EFgQU7edvdlq/YOxJW8ald7tyFnGbxD0wDQYJKoZIhvcNAQELBQAD +ggIBAJHfgD9DCX5xwvfrs4iP4VGyvD11+ShdyLyZm3tdquXK4Qr36LLTn91nMX66 +AarHakE7kNQIXLJgapDwyM4DYvmL7ftuKtwGTTwpD4kWilhMSA/ohGHqPHKmd+RC +roijQ1h5fq7KpVMNqT1wvSAZYaRsOPxDMuHBR//47PERIjKWnML2W2mWeyAMQ0Ga +W/ZZGYjeVYg3UQt4XAoeo0L9x52ID8DyeAIkVJOviYeIyUqAHerQbj5hLja7NQ4n +lv1mNDthcnPxFlxHBlRJAHpYErAK74X9sbgzdWqTHBLmYF5vHX/JHyPLhGGfHoJE ++V+tYlUkmlKY7VHnoX6XOuYvHxHaU4AshZ6rNRDbIl9qxV6XU/IyAgkwo1jwDQHV +csaxfGl7w/U2Rcxhbl5MlMVerugOXou/983g7aEOGzPuVBj+D77vfoRrQ+NwmNtd +dbINWQeFFSM51vHfqSYP1kjHs6Yi9TM3WpVHn3u6GBVv/9YUZINJ0gpnIdsPNWNg +KCLjsZWDzYWm3S8P52dSbrsvhXz1SnPnxT7AvSESBT/8twNJAlvIJebiVDj1eYeM +HVOyToV7BjjHLPj4sHKNJeV3UvQDHEimUF+IIDBu8oJDqz2XhOdT+yHBTw8imoa4 +WSr2Rz0ZiC3oheGe7IUIarFsNMkd7EgrO3jtZsSOeWmD3n+M +-----END CERTIFICATE----- + +# Issuer: CN=QuoVadis Root CA 3 G3 O=QuoVadis Limited +# Subject: CN=QuoVadis Root CA 3 G3 O=QuoVadis Limited +# Label: "QuoVadis Root CA 3 G3" +# Serial: 268090761170461462463995952157327242137089239581 +# MD5 Fingerprint: df:7d:b9:ad:54:6f:68:a1:df:89:57:03:97:43:b0:d7 +# SHA1 Fingerprint: 48:12:bd:92:3c:a8:c4:39:06:e7:30:6d:27:96:e6:a4:cf:22:2e:7d +# SHA256 Fingerprint: 88:ef:81:de:20:2e:b0:18:45:2e:43:f8:64:72:5c:ea:5f:bd:1f:c2:d9:d2:05:73:07:09:c5:d8:b8:69:0f:46 +-----BEGIN CERTIFICATE----- +MIIFYDCCA0igAwIBAgIULvWbAiin23r/1aOp7r0DoM8Sah0wDQYJKoZIhvcNAQEL +BQAwSDELMAkGA1UEBhMCQk0xGTAXBgNVBAoTEFF1b1ZhZGlzIExpbWl0ZWQxHjAc +BgNVBAMTFVF1b1ZhZGlzIFJvb3QgQ0EgMyBHMzAeFw0xMjAxMTIyMDI2MzJaFw00 +MjAxMTIyMDI2MzJaMEgxCzAJBgNVBAYTAkJNMRkwFwYDVQQKExBRdW9WYWRpcyBM +aW1pdGVkMR4wHAYDVQQDExVRdW9WYWRpcyBSb290IENBIDMgRzMwggIiMA0GCSqG +SIb3DQEBAQUAA4ICDwAwggIKAoICAQCzyw4QZ47qFJenMioKVjZ/aEzHs286IxSR +/xl/pcqs7rN2nXrpixurazHb+gtTTK/FpRp5PIpM/6zfJd5O2YIyC0TeytuMrKNu +FoM7pmRLMon7FhY4futD4tN0SsJiCnMK3UmzV9KwCoWdcTzeo8vAMvMBOSBDGzXR +U7Ox7sWTaYI+FrUoRqHe6okJ7UO4BUaKhvVZR74bbwEhELn9qdIoyhA5CcoTNs+c +ra1AdHkrAj80//ogaX3T7mH1urPnMNA3I4ZyYUUpSFlob3emLoG+B01vr87ERROR +FHAGjx+f+IdpsQ7vw4kZ6+ocYfx6bIrc1gMLnia6Et3UVDmrJqMz6nWB2i3ND0/k +A9HvFZcba5DFApCTZgIhsUfei5pKgLlVj7WiL8DWM2fafsSntARE60f75li59wzw +eyuxwHApw0BiLTtIadwjPEjrewl5qW3aqDCYz4ByA4imW0aucnl8CAMhZa634Ryl +sSqiMd5mBPfAdOhx3v89WcyWJhKLhZVXGqtrdQtEPREoPHtht+KPZ0/l7DxMYIBp +VzgeAVuNVejH38DMdyM0SXV89pgR6y3e7UEuFAUCf+D+IOs15xGsIs5XPd7JMG0Q +A4XN8f+MFrXBsj6IbGB/kE+V9/YtrQE5BwT6dYB9v0lQ7e/JxHwc64B+27bQ3RP+ +ydOc17KXqQIDAQABo0IwQDAPBgNVHRMBAf8EBTADAQH/MA4GA1UdDwEB/wQEAwIB +BjAdBgNVHQ4EFgQUxhfQvKjqAkPyGwaZXSuQILnXnOQwDQYJKoZIhvcNAQELBQAD 
+ggIBADRh2Va1EodVTd2jNTFGu6QHcrxfYWLopfsLN7E8trP6KZ1/AvWkyaiTt3px +KGmPc+FSkNrVvjrlt3ZqVoAh313m6Tqe5T72omnHKgqwGEfcIHB9UqM+WXzBusnI +FUBhynLWcKzSt/Ac5IYp8M7vaGPQtSCKFWGafoaYtMnCdvvMujAWzKNhxnQT5Wvv +oxXqA/4Ti2Tk08HS6IT7SdEQTXlm66r99I0xHnAUrdzeZxNMgRVhvLfZkXdxGYFg +u/BYpbWcC/ePIlUnwEsBbTuZDdQdm2NnL9DuDcpmvJRPpq3t/O5jrFc/ZSXPsoaP +0Aj/uHYUbt7lJ+yreLVTubY/6CD50qi+YUbKh4yE8/nxoGibIh6BJpsQBJFxwAYf +3KDTuVan45gtf4Od34wrnDKOMpTwATwiKp9Dwi7DmDkHOHv8XgBCH/MyJnmDhPbl +8MFREsALHgQjDFSlTC9JxUrRtm5gDWv8a4uFJGS3iQ6rJUdbPM9+Sb3H6QrG2vd+ +DhcI00iX0HGS8A85PjRqHH3Y8iKuu2n0M7SmSFXRDw4m6Oy2Cy2nhTXN/VnIn9HN +PlopNLk9hM6xZdRZkZFWdSHBd575euFgndOtBBj0fOtek49TSiIp+EgrPk2GrFt/ +ywaZWWDYWGWVjUTR939+J399roD1B0y2PpxxVJkES/1Y+Zj0 +-----END CERTIFICATE----- + +# Issuer: CN=DigiCert Assured ID Root G2 O=DigiCert Inc OU=www.digicert.com +# Subject: CN=DigiCert Assured ID Root G2 O=DigiCert Inc OU=www.digicert.com +# Label: "DigiCert Assured ID Root G2" +# Serial: 15385348160840213938643033620894905419 +# MD5 Fingerprint: 92:38:b9:f8:63:24:82:65:2c:57:33:e6:fe:81:8f:9d +# SHA1 Fingerprint: a1:4b:48:d9:43:ee:0a:0e:40:90:4f:3c:e0:a4:c0:91:93:51:5d:3f +# SHA256 Fingerprint: 7d:05:eb:b6:82:33:9f:8c:94:51:ee:09:4e:eb:fe:fa:79:53:a1:14:ed:b2:f4:49:49:45:2f:ab:7d:2f:c1:85 +-----BEGIN CERTIFICATE----- +MIIDljCCAn6gAwIBAgIQC5McOtY5Z+pnI7/Dr5r0SzANBgkqhkiG9w0BAQsFADBl +MQswCQYDVQQGEwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3 +d3cuZGlnaWNlcnQuY29tMSQwIgYDVQQDExtEaWdpQ2VydCBBc3N1cmVkIElEIFJv +b3QgRzIwHhcNMTMwODAxMTIwMDAwWhcNMzgwMTE1MTIwMDAwWjBlMQswCQYDVQQG +EwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3d3cuZGlnaWNl +cnQuY29tMSQwIgYDVQQDExtEaWdpQ2VydCBBc3N1cmVkIElEIFJvb3QgRzIwggEi +MA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDZ5ygvUj82ckmIkzTz+GoeMVSA +n61UQbVH35ao1K+ALbkKz3X9iaV9JPrjIgwrvJUXCzO/GU1BBpAAvQxNEP4Htecc +biJVMWWXvdMX0h5i89vqbFCMP4QMls+3ywPgym2hFEwbid3tALBSfK+RbLE4E9Hp +EgjAALAcKxHad3A2m67OeYfcgnDmCXRwVWmvo2ifv922ebPynXApVfSr/5Vh88lA +bx3RvpO704gqu52/clpWcTs/1PPRCv4o76Pu2ZmvA9OPYLfykqGxvYmJHzDNw6Yu +YjOuFgJ3RFrngQo8p0Quebg/BLxcoIfhG69Rjs3sLPr4/m3wOnyqi+RnlTGNAgMB +AAGjQjBAMA8GA1UdEwEB/wQFMAMBAf8wDgYDVR0PAQH/BAQDAgGGMB0GA1UdDgQW +BBTOw0q5mVXyuNtgv6l+vVa1lzan1jANBgkqhkiG9w0BAQsFAAOCAQEAyqVVjOPI +QW5pJ6d1Ee88hjZv0p3GeDgdaZaikmkuOGybfQTUiaWxMTeKySHMq2zNixya1r9I +0jJmwYrA8y8678Dj1JGG0VDjA9tzd29KOVPt3ibHtX2vK0LRdWLjSisCx1BL4Gni +lmwORGYQRI+tBev4eaymG+g3NJ1TyWGqolKvSnAWhsI6yLETcDbYz+70CjTVW0z9 +B5yiutkBclzzTcHdDrEcDcRjvq30FPuJ7KJBDkzMyFdA0G4Dqs0MjomZmWzwPDCv +ON9vvKO+KSAnq3T/EyJ43pdSVR6DtVQgA+6uwE9W3jfMw3+qBCe703e4YtsXfJwo +IhNzbM8m9Yop5w== +-----END CERTIFICATE----- + +# Issuer: CN=DigiCert Assured ID Root G3 O=DigiCert Inc OU=www.digicert.com +# Subject: CN=DigiCert Assured ID Root G3 O=DigiCert Inc OU=www.digicert.com +# Label: "DigiCert Assured ID Root G3" +# Serial: 15459312981008553731928384953135426796 +# MD5 Fingerprint: 7c:7f:65:31:0c:81:df:8d:ba:3e:99:e2:5c:ad:6e:fb +# SHA1 Fingerprint: f5:17:a2:4f:9a:48:c6:c9:f8:a2:00:26:9f:dc:0f:48:2c:ab:30:89 +# SHA256 Fingerprint: 7e:37:cb:8b:4c:47:09:0c:ab:36:55:1b:a6:f4:5d:b8:40:68:0f:ba:16:6a:95:2d:b1:00:71:7f:43:05:3f:c2 +-----BEGIN CERTIFICATE----- +MIICRjCCAc2gAwIBAgIQC6Fa+h3foLVJRK/NJKBs7DAKBggqhkjOPQQDAzBlMQsw +CQYDVQQGEwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3d3cu +ZGlnaWNlcnQuY29tMSQwIgYDVQQDExtEaWdpQ2VydCBBc3N1cmVkIElEIFJvb3Qg +RzMwHhcNMTMwODAxMTIwMDAwWhcNMzgwMTE1MTIwMDAwWjBlMQswCQYDVQQGEwJV +UzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3d3cuZGlnaWNlcnQu +Y29tMSQwIgYDVQQDExtEaWdpQ2VydCBBc3N1cmVkIElEIFJvb3QgRzMwdjAQBgcq 
+hkjOPQIBBgUrgQQAIgNiAAQZ57ysRGXtzbg/WPuNsVepRC0FFfLvC/8QdJ+1YlJf +Zn4f5dwbRXkLzMZTCp2NXQLZqVneAlr2lSoOjThKiknGvMYDOAdfVdp+CW7if17Q +RSAPWXYQ1qAk8C3eNvJsKTmjQjBAMA8GA1UdEwEB/wQFMAMBAf8wDgYDVR0PAQH/ +BAQDAgGGMB0GA1UdDgQWBBTL0L2p4ZgFUaFNN6KDec6NHSrkhDAKBggqhkjOPQQD +AwNnADBkAjAlpIFFAmsSS3V0T8gj43DydXLefInwz5FyYZ5eEJJZVrmDxxDnOOlY +JjZ91eQ0hjkCMHw2U/Aw5WJjOpnitqM7mzT6HtoQknFekROn3aRukswy1vUhZscv +6pZjamVFkpUBtA== +-----END CERTIFICATE----- + +# Issuer: CN=DigiCert Global Root G2 O=DigiCert Inc OU=www.digicert.com +# Subject: CN=DigiCert Global Root G2 O=DigiCert Inc OU=www.digicert.com +# Label: "DigiCert Global Root G2" +# Serial: 4293743540046975378534879503202253541 +# MD5 Fingerprint: e4:a6:8a:c8:54:ac:52:42:46:0a:fd:72:48:1b:2a:44 +# SHA1 Fingerprint: df:3c:24:f9:bf:d6:66:76:1b:26:80:73:fe:06:d1:cc:8d:4f:82:a4 +# SHA256 Fingerprint: cb:3c:cb:b7:60:31:e5:e0:13:8f:8d:d3:9a:23:f9:de:47:ff:c3:5e:43:c1:14:4c:ea:27:d4:6a:5a:b1:cb:5f +-----BEGIN CERTIFICATE----- +MIIDjjCCAnagAwIBAgIQAzrx5qcRqaC7KGSxHQn65TANBgkqhkiG9w0BAQsFADBh +MQswCQYDVQQGEwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3 +d3cuZGlnaWNlcnQuY29tMSAwHgYDVQQDExdEaWdpQ2VydCBHbG9iYWwgUm9vdCBH +MjAeFw0xMzA4MDExMjAwMDBaFw0zODAxMTUxMjAwMDBaMGExCzAJBgNVBAYTAlVT +MRUwEwYDVQQKEwxEaWdpQ2VydCBJbmMxGTAXBgNVBAsTEHd3dy5kaWdpY2VydC5j +b20xIDAeBgNVBAMTF0RpZ2lDZXJ0IEdsb2JhbCBSb290IEcyMIIBIjANBgkqhkiG +9w0BAQEFAAOCAQ8AMIIBCgKCAQEAuzfNNNx7a8myaJCtSnX/RrohCgiN9RlUyfuI +2/Ou8jqJkTx65qsGGmvPrC3oXgkkRLpimn7Wo6h+4FR1IAWsULecYxpsMNzaHxmx +1x7e/dfgy5SDN67sH0NO3Xss0r0upS/kqbitOtSZpLYl6ZtrAGCSYP9PIUkY92eQ +q2EGnI/yuum06ZIya7XzV+hdG82MHauVBJVJ8zUtluNJbd134/tJS7SsVQepj5Wz +tCO7TG1F8PapspUwtP1MVYwnSlcUfIKdzXOS0xZKBgyMUNGPHgm+F6HmIcr9g+UQ +vIOlCsRnKPZzFBQ9RnbDhxSJITRNrw9FDKZJobq7nMWxM4MphQIDAQABo0IwQDAP +BgNVHRMBAf8EBTADAQH/MA4GA1UdDwEB/wQEAwIBhjAdBgNVHQ4EFgQUTiJUIBiV +5uNu5g/6+rkS7QYXjzkwDQYJKoZIhvcNAQELBQADggEBAGBnKJRvDkhj6zHd6mcY +1Yl9PMWLSn/pvtsrF9+wX3N3KjITOYFnQoQj8kVnNeyIv/iPsGEMNKSuIEyExtv4 +NeF22d+mQrvHRAiGfzZ0JFrabA0UWTW98kndth/Jsw1HKj2ZL7tcu7XUIOGZX1NG +Fdtom/DzMNU+MeKNhJ7jitralj41E6Vf8PlwUHBHQRFXGU7Aj64GxJUTFy8bJZ91 +8rGOmaFvE7FBcf6IKshPECBV1/MUReXgRPTqh5Uykw7+U0b6LJ3/iyK5S9kJRaTe +pLiaWN0bfVKfjllDiIGknibVb63dDcY3fe0Dkhvld1927jyNxF1WW6LZZm6zNTfl +MrY= +-----END CERTIFICATE----- + +# Issuer: CN=DigiCert Global Root G3 O=DigiCert Inc OU=www.digicert.com +# Subject: CN=DigiCert Global Root G3 O=DigiCert Inc OU=www.digicert.com +# Label: "DigiCert Global Root G3" +# Serial: 7089244469030293291760083333884364146 +# MD5 Fingerprint: f5:5d:a4:50:a5:fb:28:7e:1e:0f:0d:cc:96:57:56:ca +# SHA1 Fingerprint: 7e:04:de:89:6a:3e:66:6d:00:e6:87:d3:3f:fa:d9:3b:e8:3d:34:9e +# SHA256 Fingerprint: 31:ad:66:48:f8:10:41:38:c7:38:f3:9e:a4:32:01:33:39:3e:3a:18:cc:02:29:6e:f9:7c:2a:c9:ef:67:31:d0 +-----BEGIN CERTIFICATE----- +MIICPzCCAcWgAwIBAgIQBVVWvPJepDU1w6QP1atFcjAKBggqhkjOPQQDAzBhMQsw +CQYDVQQGEwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3d3cu +ZGlnaWNlcnQuY29tMSAwHgYDVQQDExdEaWdpQ2VydCBHbG9iYWwgUm9vdCBHMzAe +Fw0xMzA4MDExMjAwMDBaFw0zODAxMTUxMjAwMDBaMGExCzAJBgNVBAYTAlVTMRUw +EwYDVQQKEwxEaWdpQ2VydCBJbmMxGTAXBgNVBAsTEHd3dy5kaWdpY2VydC5jb20x +IDAeBgNVBAMTF0RpZ2lDZXJ0IEdsb2JhbCBSb290IEczMHYwEAYHKoZIzj0CAQYF +K4EEACIDYgAE3afZu4q4C/sLfyHS8L6+c/MzXRq8NOrexpu80JX28MzQC7phW1FG +fp4tn+6OYwwX7Adw9c+ELkCDnOg/QW07rdOkFFk2eJ0DQ+4QE2xy3q6Ip6FrtUPO +Z9wj/wMco+I+o0IwQDAPBgNVHRMBAf8EBTADAQH/MA4GA1UdDwEB/wQEAwIBhjAd +BgNVHQ4EFgQUs9tIpPmhxdiuNkHMEWNpYim8S8YwCgYIKoZIzj0EAwMDaAAwZQIx +AK288mw/EkrRLTnDCgmXc/SINoyIJ7vmiI1Qhadj+Z4y3maTD/HMsQmP3Wyr+mt/ 
+oAIwOWZbwmSNuJ5Q3KjVSaLtx9zRSX8XAbjIho9OjIgrqJqpisXRAL34VOKa5Vt8 +sycX +-----END CERTIFICATE----- + +# Issuer: CN=DigiCert Trusted Root G4 O=DigiCert Inc OU=www.digicert.com +# Subject: CN=DigiCert Trusted Root G4 O=DigiCert Inc OU=www.digicert.com +# Label: "DigiCert Trusted Root G4" +# Serial: 7451500558977370777930084869016614236 +# MD5 Fingerprint: 78:f2:fc:aa:60:1f:2f:b4:eb:c9:37:ba:53:2e:75:49 +# SHA1 Fingerprint: dd:fb:16:cd:49:31:c9:73:a2:03:7d:3f:c8:3a:4d:7d:77:5d:05:e4 +# SHA256 Fingerprint: 55:2f:7b:dc:f1:a7:af:9e:6c:e6:72:01:7f:4f:12:ab:f7:72:40:c7:8e:76:1a:c2:03:d1:d9:d2:0a:c8:99:88 +-----BEGIN CERTIFICATE----- +MIIFkDCCA3igAwIBAgIQBZsbV56OITLiOQe9p3d1XDANBgkqhkiG9w0BAQwFADBi +MQswCQYDVQQGEwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3 +d3cuZGlnaWNlcnQuY29tMSEwHwYDVQQDExhEaWdpQ2VydCBUcnVzdGVkIFJvb3Qg +RzQwHhcNMTMwODAxMTIwMDAwWhcNMzgwMTE1MTIwMDAwWjBiMQswCQYDVQQGEwJV +UzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3d3cuZGlnaWNlcnQu +Y29tMSEwHwYDVQQDExhEaWdpQ2VydCBUcnVzdGVkIFJvb3QgRzQwggIiMA0GCSqG +SIb3DQEBAQUAA4ICDwAwggIKAoICAQC/5pBzaN675F1KPDAiMGkz7MKnJS7JIT3y +ithZwuEppz1Yq3aaza57G4QNxDAf8xukOBbrVsaXbR2rsnnyyhHS5F/WBTxSD1If +xp4VpX6+n6lXFllVcq9ok3DCsrp1mWpzMpTREEQQLt+C8weE5nQ7bXHiLQwb7iDV +ySAdYyktzuxeTsiT+CFhmzTrBcZe7FsavOvJz82sNEBfsXpm7nfISKhmV1efVFiO +DCu3T6cw2Vbuyntd463JT17lNecxy9qTXtyOj4DatpGYQJB5w3jHtrHEtWoYOAMQ +jdjUN6QuBX2I9YI+EJFwq1WCQTLX2wRzKm6RAXwhTNS8rhsDdV14Ztk6MUSaM0C/ +CNdaSaTC5qmgZ92kJ7yhTzm1EVgX9yRcRo9k98FpiHaYdj1ZXUJ2h4mXaXpI8OCi +EhtmmnTK3kse5w5jrubU75KSOp493ADkRSWJtppEGSt+wJS00mFt6zPZxd9LBADM +fRyVw4/3IbKyEbe7f/LVjHAsQWCqsWMYRJUadmJ+9oCw++hkpjPRiQfhvbfmQ6QY +uKZ3AeEPlAwhHbJUKSWJbOUOUlFHdL4mrLZBdd56rF+NP8m800ERElvlEFDrMcXK +chYiCd98THU/Y+whX8QgUWtvsauGi0/C1kVfnSD8oR7FwI+isX4KJpn15GkvmB0t +9dmpsh3lGwIDAQABo0IwQDAPBgNVHRMBAf8EBTADAQH/MA4GA1UdDwEB/wQEAwIB +hjAdBgNVHQ4EFgQU7NfjgtJxXWRM3y5nP+e6mK4cD08wDQYJKoZIhvcNAQEMBQAD +ggIBALth2X2pbL4XxJEbw6GiAI3jZGgPVs93rnD5/ZpKmbnJeFwMDF/k5hQpVgs2 +SV1EY+CtnJYYZhsjDT156W1r1lT40jzBQ0CuHVD1UvyQO7uYmWlrx8GnqGikJ9yd ++SeuMIW59mdNOj6PWTkiU0TryF0Dyu1Qen1iIQqAyHNm0aAFYF/opbSnr6j3bTWc +fFqK1qI4mfN4i/RN0iAL3gTujJtHgXINwBQy7zBZLq7gcfJW5GqXb5JQbZaNaHqa +sjYUegbyJLkJEVDXCLG4iXqEI2FCKeWjzaIgQdfRnGTZ6iahixTXTBmyUEFxPT9N +cCOGDErcgdLMMpSEDQgJlxxPwO5rIHQw0uA5NBCFIRUBCOhVMt5xSdkoF1BN5r5N +0XWs0Mr7QbhDparTwwVETyw2m+L64kW4I1NsBm9nVX9GtUw/bihaeSbSpKhil9Ie +4u1Ki7wb/UdKDd9nZn6yW0HQO+T0O/QEY+nvwlQAUaCKKsnOeMzV6ocEGLPOr0mI +r/OSmbaz5mEP0oUA51Aa5BuVnRmhuZyxm7EAHu/QD09CbMkKvO5D+jpxpchNJqU1 +/YldvIViHTLSoCtU7ZpXwdv6EM8Zt4tKG48BtieVU+i2iW1bvGjUI+iLUaJW+fCm +gKDWHrO8Dw9TdSmq6hN35N6MgSGtBxBHEa2HPQfRdbzP82Z+ +-----END CERTIFICATE----- + +# Issuer: CN=COMODO RSA Certification Authority O=COMODO CA Limited +# Subject: CN=COMODO RSA Certification Authority O=COMODO CA Limited +# Label: "COMODO RSA Certification Authority" +# Serial: 101909084537582093308941363524873193117 +# MD5 Fingerprint: 1b:31:b0:71:40:36:cc:14:36:91:ad:c4:3e:fd:ec:18 +# SHA1 Fingerprint: af:e5:d2:44:a8:d1:19:42:30:ff:47:9f:e2:f8:97:bb:cd:7a:8c:b4 +# SHA256 Fingerprint: 52:f0:e1:c4:e5:8e:c6:29:29:1b:60:31:7f:07:46:71:b8:5d:7e:a8:0d:5b:07:27:34:63:53:4b:32:b4:02:34 +-----BEGIN CERTIFICATE----- +MIIF2DCCA8CgAwIBAgIQTKr5yttjb+Af907YWwOGnTANBgkqhkiG9w0BAQwFADCB +hTELMAkGA1UEBhMCR0IxGzAZBgNVBAgTEkdyZWF0ZXIgTWFuY2hlc3RlcjEQMA4G +A1UEBxMHU2FsZm9yZDEaMBgGA1UEChMRQ09NT0RPIENBIExpbWl0ZWQxKzApBgNV +BAMTIkNPTU9ETyBSU0EgQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkwHhcNMTAwMTE5 +MDAwMDAwWhcNMzgwMTE4MjM1OTU5WjCBhTELMAkGA1UEBhMCR0IxGzAZBgNVBAgT 
+EkdyZWF0ZXIgTWFuY2hlc3RlcjEQMA4GA1UEBxMHU2FsZm9yZDEaMBgGA1UEChMR +Q09NT0RPIENBIExpbWl0ZWQxKzApBgNVBAMTIkNPTU9ETyBSU0EgQ2VydGlmaWNh +dGlvbiBBdXRob3JpdHkwggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQCR +6FSS0gpWsawNJN3Fz0RndJkrN6N9I3AAcbxT38T6KhKPS38QVr2fcHK3YX/JSw8X +pz3jsARh7v8Rl8f0hj4K+j5c+ZPmNHrZFGvnnLOFoIJ6dq9xkNfs/Q36nGz637CC +9BR++b7Epi9Pf5l/tfxnQ3K9DADWietrLNPtj5gcFKt+5eNu/Nio5JIk2kNrYrhV +/erBvGy2i/MOjZrkm2xpmfh4SDBF1a3hDTxFYPwyllEnvGfDyi62a+pGx8cgoLEf +Zd5ICLqkTqnyg0Y3hOvozIFIQ2dOciqbXL1MGyiKXCJ7tKuY2e7gUYPDCUZObT6Z ++pUX2nwzV0E8jVHtC7ZcryxjGt9XyD+86V3Em69FmeKjWiS0uqlWPc9vqv9JWL7w +qP/0uK3pN/u6uPQLOvnoQ0IeidiEyxPx2bvhiWC4jChWrBQdnArncevPDt09qZah +SL0896+1DSJMwBGB7FY79tOi4lu3sgQiUpWAk2nojkxl8ZEDLXB0AuqLZxUpaVIC +u9ffUGpVRr+goyhhf3DQw6KqLCGqR84onAZFdr+CGCe01a60y1Dma/RMhnEw6abf +Fobg2P9A3fvQQoh/ozM6LlweQRGBY84YcWsr7KaKtzFcOmpH4MN5WdYgGq/yapiq +crxXStJLnbsQ/LBMQeXtHT1eKJ2czL+zUdqnR+WEUwIDAQABo0IwQDAdBgNVHQ4E +FgQUu69+Aj36pvE8hI6t7jiY7NkyMtQwDgYDVR0PAQH/BAQDAgEGMA8GA1UdEwEB +/wQFMAMBAf8wDQYJKoZIhvcNAQEMBQADggIBAArx1UaEt65Ru2yyTUEUAJNMnMvl +wFTPoCWOAvn9sKIN9SCYPBMtrFaisNZ+EZLpLrqeLppysb0ZRGxhNaKatBYSaVqM +4dc+pBroLwP0rmEdEBsqpIt6xf4FpuHA1sj+nq6PK7o9mfjYcwlYRm6mnPTXJ9OV +2jeDchzTc+CiR5kDOF3VSXkAKRzH7JsgHAckaVd4sjn8OoSgtZx8jb8uk2Intzna +FxiuvTwJaP+EmzzV1gsD41eeFPfR60/IvYcjt7ZJQ3mFXLrrkguhxuhoqEwWsRqZ +CuhTLJK7oQkYdQxlqHvLI7cawiiFwxv/0Cti76R7CZGYZ4wUAc1oBmpjIXUDgIiK +boHGhfKppC3n9KUkEEeDys30jXlYsQab5xoq2Z0B15R97QNKyvDb6KkBPvVWmcke +jkk9u+UJueBPSZI9FoJAzMxZxuY67RIuaTxslbH9qh17f4a+Hg4yRvv7E491f0yL +S0Zj/gA0QHDBw7mh3aZw4gSzQbzpgJHqZJx64SIDqZxubw5lT2yHh17zbqD5daWb +QOhTsiedSrnAdyGN/4fy3ryM7xfft0kL0fJuMAsaDk527RH89elWsn2/x20Kk4yl +0MC2Hb46TpSi125sC8KKfPog88Tk5c0NqMuRkrF8hey1FGlmDoLnzc7ILaZRfyHB +NVOFBkpdn627G190 +-----END CERTIFICATE----- + +# Issuer: CN=USERTrust RSA Certification Authority O=The USERTRUST Network +# Subject: CN=USERTrust RSA Certification Authority O=The USERTRUST Network +# Label: "USERTrust RSA Certification Authority" +# Serial: 2645093764781058787591871645665788717 +# MD5 Fingerprint: 1b:fe:69:d1:91:b7:19:33:a3:72:a8:0f:e1:55:e5:b5 +# SHA1 Fingerprint: 2b:8f:1b:57:33:0d:bb:a2:d0:7a:6c:51:f7:0e:e9:0d:da:b9:ad:8e +# SHA256 Fingerprint: e7:93:c9:b0:2f:d8:aa:13:e2:1c:31:22:8a:cc:b0:81:19:64:3b:74:9c:89:89:64:b1:74:6d:46:c3:d4:cb:d2 +-----BEGIN CERTIFICATE----- +MIIF3jCCA8agAwIBAgIQAf1tMPyjylGoG7xkDjUDLTANBgkqhkiG9w0BAQwFADCB +iDELMAkGA1UEBhMCVVMxEzARBgNVBAgTCk5ldyBKZXJzZXkxFDASBgNVBAcTC0pl +cnNleSBDaXR5MR4wHAYDVQQKExVUaGUgVVNFUlRSVVNUIE5ldHdvcmsxLjAsBgNV +BAMTJVVTRVJUcnVzdCBSU0EgQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkwHhcNMTAw +MjAxMDAwMDAwWhcNMzgwMTE4MjM1OTU5WjCBiDELMAkGA1UEBhMCVVMxEzARBgNV +BAgTCk5ldyBKZXJzZXkxFDASBgNVBAcTC0plcnNleSBDaXR5MR4wHAYDVQQKExVU +aGUgVVNFUlRSVVNUIE5ldHdvcmsxLjAsBgNVBAMTJVVTRVJUcnVzdCBSU0EgQ2Vy +dGlmaWNhdGlvbiBBdXRob3JpdHkwggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAwggIK +AoICAQCAEmUXNg7D2wiz0KxXDXbtzSfTTK1Qg2HiqiBNCS1kCdzOiZ/MPans9s/B +3PHTsdZ7NygRK0faOca8Ohm0X6a9fZ2jY0K2dvKpOyuR+OJv0OwWIJAJPuLodMkY +tJHUYmTbf6MG8YgYapAiPLz+E/CHFHv25B+O1ORRxhFnRghRy4YUVD+8M/5+bJz/ +Fp0YvVGONaanZshyZ9shZrHUm3gDwFA66Mzw3LyeTP6vBZY1H1dat//O+T23LLb2 +VN3I5xI6Ta5MirdcmrS3ID3KfyI0rn47aGYBROcBTkZTmzNg95S+UzeQc0PzMsNT +79uq/nROacdrjGCT3sTHDN/hMq7MkztReJVni+49Vv4M0GkPGw/zJSZrM233bkf6 +c0Plfg6lZrEpfDKEY1WJxA3Bk1QwGROs0303p+tdOmw1XNtB1xLaqUkL39iAigmT +Yo61Zs8liM2EuLE/pDkP2QKe6xJMlXzzawWpXhaDzLhn4ugTncxbgtNMs+1b/97l +c6wjOy0AvzVVdAlJ2ElYGn+SNuZRkg7zJn0cTRe8yexDJtC/QV9AqURE9JnnV4ee +UB9XVKg+/XRjL7FQZQnmWEIuQxpMtPAlR1n6BB6T1CZGSlCBst6+eLf8ZxXhyVeE 
+Hg9j1uliutZfVS7qXMYoCAQlObgOK6nyTJccBz8NUvXt7y+CDwIDAQABo0IwQDAd +BgNVHQ4EFgQUU3m/WqorSs9UgOHYm8Cd8rIDZsswDgYDVR0PAQH/BAQDAgEGMA8G +A1UdEwEB/wQFMAMBAf8wDQYJKoZIhvcNAQEMBQADggIBAFzUfA3P9wF9QZllDHPF +Up/L+M+ZBn8b2kMVn54CVVeWFPFSPCeHlCjtHzoBN6J2/FNQwISbxmtOuowhT6KO +VWKR82kV2LyI48SqC/3vqOlLVSoGIG1VeCkZ7l8wXEskEVX/JJpuXior7gtNn3/3 +ATiUFJVDBwn7YKnuHKsSjKCaXqeYalltiz8I+8jRRa8YFWSQEg9zKC7F4iRO/Fjs +8PRF/iKz6y+O0tlFYQXBl2+odnKPi4w2r78NBc5xjeambx9spnFixdjQg3IM8WcR +iQycE0xyNN+81XHfqnHd4blsjDwSXWXavVcStkNr/+XeTWYRUc+ZruwXtuhxkYze +Sf7dNXGiFSeUHM9h4ya7b6NnJSFd5t0dCy5oGzuCr+yDZ4XUmFF0sbmZgIn/f3gZ +XHlKYC6SQK5MNyosycdiyA5d9zZbyuAlJQG03RoHnHcAP9Dc1ew91Pq7P8yF1m9/ +qS3fuQL39ZeatTXaw2ewh0qpKJ4jjv9cJ2vhsE/zB+4ALtRZh8tSQZXq9EfX7mRB +VXyNWQKV3WKdwrnuWih0hKWbt5DHDAff9Yk2dDLWKMGwsAvgnEzDHNb842m1R0aB +L6KCq9NjRHDEjf8tM7qtj3u1cIiuPhnPQCjY/MiQu12ZIvVS5ljFH4gxQ+6IHdfG +jjxDah2nGN59PRbxYvnKkKj9 +-----END CERTIFICATE----- + +# Issuer: CN=USERTrust ECC Certification Authority O=The USERTRUST Network +# Subject: CN=USERTrust ECC Certification Authority O=The USERTRUST Network +# Label: "USERTrust ECC Certification Authority" +# Serial: 123013823720199481456569720443997572134 +# MD5 Fingerprint: fa:68:bc:d9:b5:7f:ad:fd:c9:1d:06:83:28:cc:24:c1 +# SHA1 Fingerprint: d1:cb:ca:5d:b2:d5:2a:7f:69:3b:67:4d:e5:f0:5a:1d:0c:95:7d:f0 +# SHA256 Fingerprint: 4f:f4:60:d5:4b:9c:86:da:bf:bc:fc:57:12:e0:40:0d:2b:ed:3f:bc:4d:4f:bd:aa:86:e0:6a:dc:d2:a9:ad:7a +-----BEGIN CERTIFICATE----- +MIICjzCCAhWgAwIBAgIQXIuZxVqUxdJxVt7NiYDMJjAKBggqhkjOPQQDAzCBiDEL +MAkGA1UEBhMCVVMxEzARBgNVBAgTCk5ldyBKZXJzZXkxFDASBgNVBAcTC0plcnNl +eSBDaXR5MR4wHAYDVQQKExVUaGUgVVNFUlRSVVNUIE5ldHdvcmsxLjAsBgNVBAMT +JVVTRVJUcnVzdCBFQ0MgQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkwHhcNMTAwMjAx +MDAwMDAwWhcNMzgwMTE4MjM1OTU5WjCBiDELMAkGA1UEBhMCVVMxEzARBgNVBAgT +Ck5ldyBKZXJzZXkxFDASBgNVBAcTC0plcnNleSBDaXR5MR4wHAYDVQQKExVUaGUg +VVNFUlRSVVNUIE5ldHdvcmsxLjAsBgNVBAMTJVVTRVJUcnVzdCBFQ0MgQ2VydGlm +aWNhdGlvbiBBdXRob3JpdHkwdjAQBgcqhkjOPQIBBgUrgQQAIgNiAAQarFRaqflo +I+d61SRvU8Za2EurxtW20eZzca7dnNYMYf3boIkDuAUU7FfO7l0/4iGzzvfUinng +o4N+LZfQYcTxmdwlkWOrfzCjtHDix6EznPO/LlxTsV+zfTJ/ijTjeXmjQjBAMB0G +A1UdDgQWBBQ64QmG1M8ZwpZ2dEl23OA1xmNjmjAOBgNVHQ8BAf8EBAMCAQYwDwYD +VR0TAQH/BAUwAwEB/zAKBggqhkjOPQQDAwNoADBlAjA2Z6EWCNzklwBBHU6+4WMB +zzuqQhFkoJ2UOQIReVx7Hfpkue4WQrO/isIJxOzksU0CMQDpKmFHjFJKS04YcPbW +RNZu9YO6bVi9JNlWSOrvxKJGgYhqOkbRqZtNyWHa0V1Xahg= +-----END CERTIFICATE----- + +# Issuer: CN=GlobalSign O=GlobalSign OU=GlobalSign ECC Root CA - R4 +# Subject: CN=GlobalSign O=GlobalSign OU=GlobalSign ECC Root CA - R4 +# Label: "GlobalSign ECC Root CA - R4" +# Serial: 14367148294922964480859022125800977897474 +# MD5 Fingerprint: 20:f0:27:68:d1:7e:a0:9d:0e:e6:2a:ca:df:5c:89:8e +# SHA1 Fingerprint: 69:69:56:2e:40:80:f4:24:a1:e7:19:9f:14:ba:f3:ee:58:ab:6a:bb +# SHA256 Fingerprint: be:c9:49:11:c2:95:56:76:db:6c:0a:55:09:86:d7:6e:3b:a0:05:66:7c:44:2c:97:62:b4:fb:b7:73:de:22:8c +-----BEGIN CERTIFICATE----- +MIIB4TCCAYegAwIBAgIRKjikHJYKBN5CsiilC+g0mAIwCgYIKoZIzj0EAwIwUDEk +MCIGA1UECxMbR2xvYmFsU2lnbiBFQ0MgUm9vdCBDQSAtIFI0MRMwEQYDVQQKEwpH +bG9iYWxTaWduMRMwEQYDVQQDEwpHbG9iYWxTaWduMB4XDTEyMTExMzAwMDAwMFoX +DTM4MDExOTAzMTQwN1owUDEkMCIGA1UECxMbR2xvYmFsU2lnbiBFQ0MgUm9vdCBD +QSAtIFI0MRMwEQYDVQQKEwpHbG9iYWxTaWduMRMwEQYDVQQDEwpHbG9iYWxTaWdu +MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEuMZ5049sJQ6fLjkZHAOkrprlOQcJ +FspjsbmG+IpXwVfOQvpzofdlQv8ewQCybnMO/8ch5RikqtlxP6jUuc6MHaNCMEAw +DgYDVR0PAQH/BAQDAgEGMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFFSwe61F +uOJAf/sKbvu+M8k8o4TVMAoGCCqGSM49BAMCA0gAMEUCIQDckqGgE6bPA7DmxCGX 
+kPoUVy0D7O48027KqGx2vKLeuwIgJ6iFJzWbVsaj8kfSt24bAgAXqmemFZHe+pTs +ewv4n4Q= +-----END CERTIFICATE----- + +# Issuer: CN=GlobalSign O=GlobalSign OU=GlobalSign ECC Root CA - R5 +# Subject: CN=GlobalSign O=GlobalSign OU=GlobalSign ECC Root CA - R5 +# Label: "GlobalSign ECC Root CA - R5" +# Serial: 32785792099990507226680698011560947931244 +# MD5 Fingerprint: 9f:ad:3b:1c:02:1e:8a:ba:17:74:38:81:0c:a2:bc:08 +# SHA1 Fingerprint: 1f:24:c6:30:cd:a4:18:ef:20:69:ff:ad:4f:dd:5f:46:3a:1b:69:aa +# SHA256 Fingerprint: 17:9f:bc:14:8a:3d:d0:0f:d2:4e:a1:34:58:cc:43:bf:a7:f5:9c:81:82:d7:83:a5:13:f6:eb:ec:10:0c:89:24 +-----BEGIN CERTIFICATE----- +MIICHjCCAaSgAwIBAgIRYFlJ4CYuu1X5CneKcflK2GwwCgYIKoZIzj0EAwMwUDEk +MCIGA1UECxMbR2xvYmFsU2lnbiBFQ0MgUm9vdCBDQSAtIFI1MRMwEQYDVQQKEwpH +bG9iYWxTaWduMRMwEQYDVQQDEwpHbG9iYWxTaWduMB4XDTEyMTExMzAwMDAwMFoX +DTM4MDExOTAzMTQwN1owUDEkMCIGA1UECxMbR2xvYmFsU2lnbiBFQ0MgUm9vdCBD +QSAtIFI1MRMwEQYDVQQKEwpHbG9iYWxTaWduMRMwEQYDVQQDEwpHbG9iYWxTaWdu +MHYwEAYHKoZIzj0CAQYFK4EEACIDYgAER0UOlvt9Xb/pOdEh+J8LttV7HpI6SFkc +8GIxLcB6KP4ap1yztsyX50XUWPrRd21DosCHZTQKH3rd6zwzocWdTaRvQZU4f8ke +hOvRnkmSh5SHDDqFSmafnVmTTZdhBoZKo0IwQDAOBgNVHQ8BAf8EBAMCAQYwDwYD +VR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQUPeYpSJvqB8ohREom3m7e0oPQn1kwCgYI +KoZIzj0EAwMDaAAwZQIxAOVpEslu28YxuglB4Zf4+/2a4n0Sye18ZNPLBSWLVtmg +515dTguDnFt2KaAJJiFqYgIwcdK1j1zqO+F4CYWodZI7yFz9SO8NdCKoCOJuxUnO +xwy8p2Fp8fc74SrL+SvzZpA3 +-----END CERTIFICATE----- + +# Issuer: CN=Staat der Nederlanden Root CA - G3 O=Staat der Nederlanden +# Subject: CN=Staat der Nederlanden Root CA - G3 O=Staat der Nederlanden +# Label: "Staat der Nederlanden Root CA - G3" +# Serial: 10003001 +# MD5 Fingerprint: 0b:46:67:07:db:10:2f:19:8c:35:50:60:d1:0b:f4:37 +# SHA1 Fingerprint: d8:eb:6b:41:51:92:59:e0:f3:e7:85:00:c0:3d:b6:88:97:c9:ee:fc +# SHA256 Fingerprint: 3c:4f:b0:b9:5a:b8:b3:00:32:f4:32:b8:6f:53:5f:e1:72:c1:85:d0:fd:39:86:58:37:cf:36:18:7f:a6:f4:28 +-----BEGIN CERTIFICATE----- +MIIFdDCCA1ygAwIBAgIEAJiiOTANBgkqhkiG9w0BAQsFADBaMQswCQYDVQQGEwJO +TDEeMBwGA1UECgwVU3RhYXQgZGVyIE5lZGVybGFuZGVuMSswKQYDVQQDDCJTdGFh +dCBkZXIgTmVkZXJsYW5kZW4gUm9vdCBDQSAtIEczMB4XDTEzMTExNDExMjg0MloX +DTI4MTExMzIzMDAwMFowWjELMAkGA1UEBhMCTkwxHjAcBgNVBAoMFVN0YWF0IGRl +ciBOZWRlcmxhbmRlbjErMCkGA1UEAwwiU3RhYXQgZGVyIE5lZGVybGFuZGVuIFJv +b3QgQ0EgLSBHMzCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBAL4yolQP +cPssXFnrbMSkUeiFKrPMSjTysF/zDsccPVMeiAho2G89rcKezIJnByeHaHE6n3WW +IkYFsO2tx1ueKt6c/DrGlaf1F2cY5y9JCAxcz+bMNO14+1Cx3Gsy8KL+tjzk7FqX +xz8ecAgwoNzFs21v0IJyEavSgWhZghe3eJJg+szeP4TrjTgzkApyI/o1zCZxMdFy +KJLZWyNtZrVtB0LrpjPOktvA9mxjeM3KTj215VKb8b475lRgsGYeCasH/lSJEULR +9yS6YHgamPfJEf0WwTUaVHXvQ9Plrk7O53vDxk5hUUurmkVLoR9BvUhTFXFkC4az +5S6+zqQbwSmEorXLCCN2QyIkHxcE1G6cxvx/K2Ya7Irl1s9N9WMJtxU51nus6+N8 +6U78dULI7ViVDAZCopz35HCz33JvWjdAidiFpNfxC95DGdRKWCyMijmev4SH8RY7 +Ngzp07TKbBlBUgmhHbBqv4LvcFEhMtwFdozL92TkA1CvjJFnq8Xy7ljY3r735zHP +bMk7ccHViLVlvMDoFxcHErVc0qsgk7TmgoNwNsXNo42ti+yjwUOH5kPiNL6VizXt +BznaqB16nzaeErAMZRKQFWDZJkBE41ZgpRDUajz9QdwOWke275dhdU/Z/seyHdTt +XUmzqWrLZoQT1Vyg3N9udwbRcXXIV2+vD3dbAgMBAAGjQjBAMA8GA1UdEwEB/wQF +MAMBAf8wDgYDVR0PAQH/BAQDAgEGMB0GA1UdDgQWBBRUrfrHkleuyjWcLhL75Lpd +INyUVzANBgkqhkiG9w0BAQsFAAOCAgEAMJmdBTLIXg47mAE6iqTnB/d6+Oea31BD +U5cqPco8R5gu4RV78ZLzYdqQJRZlwJ9UXQ4DO1t3ApyEtg2YXzTdO2PCwyiBwpwp +LiniyMMB8jPqKqrMCQj3ZWfGzd/TtiunvczRDnBfuCPRy5FOCvTIeuXZYzbB1N/8 +Ipf3YF3qKS9Ysr1YvY2WTxB1v0h7PVGHoTx0IsL8B3+A3MSs/mrBcDCw6Y5p4ixp +gZQJut3+TcCDjJRYwEYgr5wfAvg1VUkvRtTA8KCWAg8zxXHzniN9lLf9OtMJgwYh +/WA9rjLA0u6NpvDntIJ8CsxwyXmA+P5M9zWEGYox+wrZ13+b8KKaa8MFSu1BYBQw 
+0aoRQm7TIwIEC8Zl3d1Sd9qBa7Ko+gE4uZbqKmxnl4mUnrzhVNXkanjvSr0rmj1A +fsbAddJu+2gw7OyLnflJNZoaLNmzlTnVHpL3prllL+U9bTpITAjc5CgSKL59NVzq +4BZ+Extq1z7XnvwtdbLBFNUjA9tbbws+eC8N3jONFrdI54OagQ97wUNNVQQXOEpR +1VmiiXTTn74eS9fGbbeIJG9gkaSChVtWQbzQRKtqE77RLFi3EjNYsjdj3BP1lB0/ +QFH1T/U67cjF68IeHRaVesd+QnGTbksVtzDfqu1XhUisHWrdOWnk4Xl4vs4Fv6EM +94B7IWcnMFk= +-----END CERTIFICATE----- + +# Issuer: CN=Staat der Nederlanden EV Root CA O=Staat der Nederlanden +# Subject: CN=Staat der Nederlanden EV Root CA O=Staat der Nederlanden +# Label: "Staat der Nederlanden EV Root CA" +# Serial: 10000013 +# MD5 Fingerprint: fc:06:af:7b:e8:1a:f1:9a:b4:e8:d2:70:1f:c0:f5:ba +# SHA1 Fingerprint: 76:e2:7e:c1:4f:db:82:c1:c0:a6:75:b5:05:be:3d:29:b4:ed:db:bb +# SHA256 Fingerprint: 4d:24:91:41:4c:fe:95:67:46:ec:4c:ef:a6:cf:6f:72:e2:8a:13:29:43:2f:9d:8a:90:7a:c4:cb:5d:ad:c1:5a +-----BEGIN CERTIFICATE----- +MIIFcDCCA1igAwIBAgIEAJiWjTANBgkqhkiG9w0BAQsFADBYMQswCQYDVQQGEwJO +TDEeMBwGA1UECgwVU3RhYXQgZGVyIE5lZGVybGFuZGVuMSkwJwYDVQQDDCBTdGFh +dCBkZXIgTmVkZXJsYW5kZW4gRVYgUm9vdCBDQTAeFw0xMDEyMDgxMTE5MjlaFw0y +MjEyMDgxMTEwMjhaMFgxCzAJBgNVBAYTAk5MMR4wHAYDVQQKDBVTdGFhdCBkZXIg +TmVkZXJsYW5kZW4xKTAnBgNVBAMMIFN0YWF0IGRlciBOZWRlcmxhbmRlbiBFViBS +b290IENBMIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEA48d+ifkkSzrS +M4M1LGns3Amk41GoJSt5uAg94JG6hIXGhaTK5skuU6TJJB79VWZxXSzFYGgEt9nC +UiY4iKTWO0Cmws0/zZiTs1QUWJZV1VD+hq2kY39ch/aO5ieSZxeSAgMs3NZmdO3d +Z//BYY1jTw+bbRcwJu+r0h8QoPnFfxZpgQNH7R5ojXKhTbImxrpsX23Wr9GxE46p +rfNeaXUmGD5BKyF/7otdBwadQ8QpCiv8Kj6GyzyDOvnJDdrFmeK8eEEzduG/L13l +pJhQDBXd4Pqcfzho0LKmeqfRMb1+ilgnQ7O6M5HTp5gVXJrm0w912fxBmJc+qiXb +j5IusHsMX/FjqTf5m3VpTCgmJdrV8hJwRVXj33NeN/UhbJCONVrJ0yPr08C+eKxC +KFhmpUZtcALXEPlLVPxdhkqHz3/KRawRWrUgUY0viEeXOcDPusBCAUCZSCELa6fS +/ZbV0b5GnUngC6agIk440ME8MLxwjyx1zNDFjFE7PZQIZCZhfbnDZY8UnCHQqv0X +cgOPvZuM5l5Tnrmd74K74bzickFbIZTTRTeU0d8JOV3nI6qaHcptqAqGhYqCvkIH +1vI4gnPah1vlPNOePqc7nvQDs/nxfRN0Av+7oeX6AHkcpmZBiFxgV6YuCcS6/ZrP +px9Aw7vMWgpVSzs4dlG4Y4uElBbmVvMCAwEAAaNCMEAwDwYDVR0TAQH/BAUwAwEB +/zAOBgNVHQ8BAf8EBAMCAQYwHQYDVR0OBBYEFP6rAJCYniT8qcwaivsnuL8wbqg7 +MA0GCSqGSIb3DQEBCwUAA4ICAQDPdyxuVr5Os7aEAJSrR8kN0nbHhp8dB9O2tLsI +eK9p0gtJ3jPFrK3CiAJ9Brc1AsFgyb/E6JTe1NOpEyVa/m6irn0F3H3zbPB+po3u +2dfOWBfoqSmuc0iH55vKbimhZF8ZE/euBhD/UcabTVUlT5OZEAFTdfETzsemQUHS +v4ilf0X8rLiltTMMgsT7B/Zq5SWEXwbKwYY5EdtYzXc7LMJMD16a4/CrPmEbUCTC +wPTxGfARKbalGAKb12NMcIxHowNDXLldRqANb/9Zjr7dn3LDWyvfjFvO5QxGbJKy +CqNMVEIYFRIYvdr8unRu/8G2oGTYqV9Vrp9canaW2HNnh/tNf1zuacpzEPuKqf2e +vTY4SUmH9A4U8OmHuD+nT3pajnnUk+S7aFKErGzp85hwVXIy+TSrK0m1zSBi5Dp6 +Z2Orltxtrpfs/J92VoguZs9btsmksNcFuuEnL5O7Jiqik7Ab846+HUCjuTaPPoIa +Gl6I6lD4WeKDRikL40Rc4ZW2aZCaFG+XroHPaO+Zmr615+F/+PoTRxZMzG0IQOeL +eG9QgkRQP2YGiqtDhFZKDyAthg710tvSeopLzaXoTvFeJiUBWSOgftL2fiFX1ye8 +FVdMpEbB4IMeDExNH08GGeL5qPQ6gqGyeUN51q1veieQA6TqJIc/2b3Z6fJfUEkc +7uzXLg== +-----END CERTIFICATE----- + +# Issuer: CN=IdenTrust Commercial Root CA 1 O=IdenTrust +# Subject: CN=IdenTrust Commercial Root CA 1 O=IdenTrust +# Label: "IdenTrust Commercial Root CA 1" +# Serial: 13298821034946342390520003877796839426 +# MD5 Fingerprint: b3:3e:77:73:75:ee:a0:d3:e3:7e:49:63:49:59:bb:c7 +# SHA1 Fingerprint: df:71:7e:aa:4a:d9:4e:c9:55:84:99:60:2d:48:de:5f:bc:f0:3a:25 +# SHA256 Fingerprint: 5d:56:49:9b:e4:d2:e0:8b:cf:ca:d0:8a:3e:38:72:3d:50:50:3b:de:70:69:48:e4:2f:55:60:30:19:e5:28:ae +-----BEGIN CERTIFICATE----- +MIIFYDCCA0igAwIBAgIQCgFCgAAAAUUjyES1AAAAAjANBgkqhkiG9w0BAQsFADBK +MQswCQYDVQQGEwJVUzESMBAGA1UEChMJSWRlblRydXN0MScwJQYDVQQDEx5JZGVu +VHJ1c3QgQ29tbWVyY2lhbCBSb290IENBIDEwHhcNMTQwMTE2MTgxMjIzWhcNMzQw 
+MTE2MTgxMjIzWjBKMQswCQYDVQQGEwJVUzESMBAGA1UEChMJSWRlblRydXN0MScw +JQYDVQQDEx5JZGVuVHJ1c3QgQ29tbWVyY2lhbCBSb290IENBIDEwggIiMA0GCSqG +SIb3DQEBAQUAA4ICDwAwggIKAoICAQCnUBneP5k91DNG8W9RYYKyqU+PZ4ldhNlT +3Qwo2dfw/66VQ3KZ+bVdfIrBQuExUHTRgQ18zZshq0PirK1ehm7zCYofWjK9ouuU ++ehcCuz/mNKvcbO0U59Oh++SvL3sTzIwiEsXXlfEU8L2ApeN2WIrvyQfYo3fw7gp +S0l4PJNgiCL8mdo2yMKi1CxUAGc1bnO/AljwpN3lsKImesrgNqUZFvX9t++uP0D1 +bVoE/c40yiTcdCMbXTMTEl3EASX2MN0CXZ/g1Ue9tOsbobtJSdifWwLziuQkkORi +T0/Br4sOdBeo0XKIanoBScy0RnnGF7HamB4HWfp1IYVl3ZBWzvurpWCdxJ35UrCL +vYf5jysjCiN2O/cz4ckA82n5S6LgTrx+kzmEB/dEcH7+B1rlsazRGMzyNeVJSQjK +Vsk9+w8YfYs7wRPCTY/JTw436R+hDmrfYi7LNQZReSzIJTj0+kuniVyc0uMNOYZK +dHzVWYfCP04MXFL0PfdSgvHqo6z9STQaKPNBiDoT7uje/5kdX7rL6B7yuVBgwDHT +c+XvvqDtMwt0viAgxGds8AgDelWAf0ZOlqf0Hj7h9tgJ4TNkK2PXMl6f+cB7D3hv +l7yTmvmcEpB4eoCHFddydJxVdHixuuFucAS6T6C6aMN7/zHwcz09lCqxC0EOoP5N +iGVreTO01wIDAQABo0IwQDAOBgNVHQ8BAf8EBAMCAQYwDwYDVR0TAQH/BAUwAwEB +/zAdBgNVHQ4EFgQU7UQZwNPwBovupHu+QucmVMiONnYwDQYJKoZIhvcNAQELBQAD +ggIBAA2ukDL2pkt8RHYZYR4nKM1eVO8lvOMIkPkp165oCOGUAFjvLi5+U1KMtlwH +6oi6mYtQlNeCgN9hCQCTrQ0U5s7B8jeUeLBfnLOic7iPBZM4zY0+sLj7wM+x8uwt +LRvM7Kqas6pgghstO8OEPVeKlh6cdbjTMM1gCIOQ045U8U1mwF10A0Cj7oV+wh93 +nAbowacYXVKV7cndJZ5t+qntozo00Fl72u1Q8zW/7esUTTHHYPTa8Yec4kjixsU3 ++wYQ+nVZZjFHKdp2mhzpgq7vmrlR94gjmmmVYjzlVYA211QC//G5Xc7UI2/YRYRK +W2XviQzdFKcgyxilJbQN+QHwotL0AMh0jqEqSI5l2xPE4iUXfeu+h1sXIFRRk0pT +AwvsXcoz7WL9RccvW9xYoIA55vrX/hMUpu09lEpCdNTDd1lzzY9GvlU47/rokTLq +l1gEIt44w8y8bckzOmoKaT+gyOpyj4xjhiO9bTyWnpXgSUyqorkqG5w2gXjtw+hG +4iZZRHUe2XWJUc0QhJ1hYMtd+ZciTY6Y5uN/9lu7rs3KSoFrXgvzUeF0K+l+J6fZ +mUlO+KWA2yUPHGNiiskzZ2s8EIPGrd6ozRaOjfAHN3Gf8qv8QfXBi+wAN10J5U6A +7/qxXDgGpRtK4dw4LTzcqx+QGtVKnO7RcGzM7vRX+Bi6hG6H +-----END CERTIFICATE----- + +# Issuer: CN=IdenTrust Public Sector Root CA 1 O=IdenTrust +# Subject: CN=IdenTrust Public Sector Root CA 1 O=IdenTrust +# Label: "IdenTrust Public Sector Root CA 1" +# Serial: 13298821034946342390521976156843933698 +# MD5 Fingerprint: 37:06:a5:b0:fc:89:9d:ba:f4:6b:8c:1a:64:cd:d5:ba +# SHA1 Fingerprint: ba:29:41:60:77:98:3f:f4:f3:ef:f2:31:05:3b:2e:ea:6d:4d:45:fd +# SHA256 Fingerprint: 30:d0:89:5a:9a:44:8a:26:20:91:63:55:22:d1:f5:20:10:b5:86:7a:ca:e1:2c:78:ef:95:8f:d4:f4:38:9f:2f +-----BEGIN CERTIFICATE----- +MIIFZjCCA06gAwIBAgIQCgFCgAAAAUUjz0Z8AAAAAjANBgkqhkiG9w0BAQsFADBN +MQswCQYDVQQGEwJVUzESMBAGA1UEChMJSWRlblRydXN0MSowKAYDVQQDEyFJZGVu +VHJ1c3QgUHVibGljIFNlY3RvciBSb290IENBIDEwHhcNMTQwMTE2MTc1MzMyWhcN +MzQwMTE2MTc1MzMyWjBNMQswCQYDVQQGEwJVUzESMBAGA1UEChMJSWRlblRydXN0 +MSowKAYDVQQDEyFJZGVuVHJ1c3QgUHVibGljIFNlY3RvciBSb290IENBIDEwggIi +MA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQC2IpT8pEiv6EdrCvsnduTyP4o7 +ekosMSqMjbCpwzFrqHd2hCa2rIFCDQjrVVi7evi8ZX3yoG2LqEfpYnYeEe4IFNGy +RBb06tD6Hi9e28tzQa68ALBKK0CyrOE7S8ItneShm+waOh7wCLPQ5CQ1B5+ctMlS +bdsHyo+1W/CD80/HLaXIrcuVIKQxKFdYWuSNG5qrng0M8gozOSI5Cpcu81N3uURF +/YTLNiCBWS2ab21ISGHKTN9T0a9SvESfqy9rg3LvdYDaBjMbXcjaY8ZNzaxmMc3R +3j6HEDbhuaR672BQssvKplbgN6+rNBM5Jeg5ZuSYeqoSmJxZZoY+rfGwyj4GD3vw +EUs3oERte8uojHH01bWRNszwFcYr3lEXsZdMUD2xlVl8BX0tIdUAvwFnol57plzy +9yLxkA2T26pEUWbMfXYD62qoKjgZl3YNa4ph+bz27nb9cCvdKTz4Ch5bQhyLVi9V +GxyhLrXHFub4qjySjmm2AcG1hp2JDws4lFTo6tyePSW8Uybt1as5qsVATFSrsrTZ +2fjXctscvG29ZV/viDUqZi/u9rNl8DONfJhBaUYPQxxp+pu10GFqzcpL2UyQRqsV +WaFHVCkugyhfHMKiq3IXAAaOReyL4jM9f9oZRORicsPfIsbyVtTdX5Vy7W1f90gD +W/3FKqD2cyOEEBsB5wIDAQABo0IwQDAOBgNVHQ8BAf8EBAMCAQYwDwYDVR0TAQH/ +BAUwAwEB/zAdBgNVHQ4EFgQU43HgntinQtnbcZFrlJPrw6PRFKMwDQYJKoZIhvcN +AQELBQADggIBAEf63QqwEZE4rU1d9+UOl1QZgkiHVIyqZJnYWv6IAcVYpZmxI1Qj 
+t2odIFflAWJBF9MJ23XLblSQdf4an4EKwt3X9wnQW3IV5B4Jaj0z8yGa5hV+rVHV
+DRDtfULAj+7AmgjVQdZcDiFpboBhDhXAuM/FSRJSzL46zNQuOAXeNf0fb7iAaJg9
+TaDKQGXSc3z1i9kKlT/YPyNtGtEqJBnZhbMX73huqVjRI9PHE+1yJX9dsXNw0H8G
+lwmEKYBhHfpe/3OsoOOJuBxxFcbeMX8S3OFtm6/n6J91eEyrRjuazr8FGF1NFTwW
+mhlQBJqymm9li1JfPFgEKCXAZmExfrngdbkaqIHWchezxQMxNRF4eKLg6TCMf4Df
+WN88uieW4oA0beOY02QnrEh+KHdcxiVhJfiFDGX6xDIvpZgF5PgLZxYWxoK4Mhn5
++bl53B/N66+rDt0b20XkeucC4pVd/GnwU2lhlXV5C15V5jgclKlZM57IcXR5f1GJ
+tshquDDIajjDbp7hNxbqBWJMWxJH7ae0s1hWx0nzfxJoCTFx8G34Tkf71oXuxVhA
+GaQdp/lLQzfcaFpPz+vCZHTetBXZ9FRUGi8c15dxVJCO2SCdUyt/q4/i6jC8UDfv
+8Ue1fXwsBOxonbRJRBD0ckscZOf85muQ3Wl9af0AVqW3rLatt8o+Ae+c
+-----END CERTIFICATE-----
+
+# Issuer: CN=Entrust Root Certification Authority - G2 O=Entrust, Inc. OU=See www.entrust.net/legal-terms/(c) 2009 Entrust, Inc. - for authorized use only
+# Subject: CN=Entrust Root Certification Authority - G2 O=Entrust, Inc. OU=See www.entrust.net/legal-terms/(c) 2009 Entrust, Inc. - for authorized use only
+# Label: "Entrust Root Certification Authority - G2"
+# Serial: 1246989352
+# MD5 Fingerprint: 4b:e2:c9:91:96:65:0c:f4:0e:5a:93:92:a0:0a:fe:b2
+# SHA1 Fingerprint: 8c:f4:27:fd:79:0c:3a:d1:66:06:8d:e8:1e:57:ef:bb:93:22:72:d4
+# SHA256 Fingerprint: 43:df:57:74:b0:3e:7f:ef:5f:e4:0d:93:1a:7b:ed:f1:bb:2e:6b:42:73:8c:4e:6d:38:41:10:3d:3a:a7:f3:39
+-----BEGIN CERTIFICATE-----
+MIIEPjCCAyagAwIBAgIESlOMKDANBgkqhkiG9w0BAQsFADCBvjELMAkGA1UEBhMC
+VVMxFjAUBgNVBAoTDUVudHJ1c3QsIEluYy4xKDAmBgNVBAsTH1NlZSB3d3cuZW50
+cnVzdC5uZXQvbGVnYWwtdGVybXMxOTA3BgNVBAsTMChjKSAyMDA5IEVudHJ1c3Qs
+IEluYy4gLSBmb3IgYXV0aG9yaXplZCB1c2Ugb25seTEyMDAGA1UEAxMpRW50cnVz
+dCBSb290IENlcnRpZmljYXRpb24gQXV0aG9yaXR5IC0gRzIwHhcNMDkwNzA3MTcy
+NTU0WhcNMzAxMjA3MTc1NTU0WjCBvjELMAkGA1UEBhMCVVMxFjAUBgNVBAoTDUVu
+dHJ1c3QsIEluYy4xKDAmBgNVBAsTH1NlZSB3d3cuZW50cnVzdC5uZXQvbGVnYWwt
+dGVybXMxOTA3BgNVBAsTMChjKSAyMDA5IEVudHJ1c3QsIEluYy4gLSBmb3IgYXV0
+aG9yaXplZCB1c2Ugb25seTEyMDAGA1UEAxMpRW50cnVzdCBSb290IENlcnRpZmlj
+YXRpb24gQXV0aG9yaXR5IC0gRzIwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEK
+AoIBAQC6hLZy254Ma+KZ6TABp3bqMriVQRrJ2mFOWHLP/vaCeb9zYQYKpSfYs1/T
+RU4cctZOMvJyig/3gxnQaoCAAEUesMfnmr8SVycco2gvCoe9amsOXmXzHHfV1IWN
+cCG0szLni6LVhjkCsbjSR87kyUnEO6fe+1R9V77w6G7CebI6C1XiUJgWMhNcL3hW
+wcKUs/Ja5CeanyTXxuzQmyWC48zCxEXFjJd6BmsqEZ+pCm5IO2/b1BEZQvePB7/1
+U1+cPvQXLOZprE4yTGJ36rfo5bs0vBmLrpxR57d+tVOxMyLlbc9wPBr64ptntoP0
+jaWvYkxN4FisZDQSA/i2jZRjJKRxAgMBAAGjQjBAMA4GA1UdDwEB/wQEAwIBBjAP
+BgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBRqciZ60B7vfec7aVHUbI2fkBJmqzAN
+BgkqhkiG9w0BAQsFAAOCAQEAeZ8dlsa2eT8ijYfThwMEYGprmi5ZiXMRrEPR9RP/
+jTkrwPK9T3CMqS/qF8QLVJ7UG5aYMzyorWKiAHarWWluBh1+xLlEjZivEtRh2woZ
+Rkfz6/djwUAFQKXSt/S1mja/qYh2iARVBCuch38aNzx+LaUa2NSJXsq9rD1s2G2v
+1fN2D807iDginWyTmsQ9v4IbZT+mD12q/OWyFcq1rca8PdCE6OoGcrBNOTJ4vz4R
+nAuknZoh8/CbCzB428Hch0P+vGOaysXCHMnHjf87ElgI5rY97HosTvuDls4MPGmH
+VHOkc8KT/1EQrBVUAdj8BbGJoX90g5pJ19xOe4pIb4tF9g==
+-----END CERTIFICATE-----
+
+# Issuer: CN=Entrust Root Certification Authority - EC1 O=Entrust, Inc. OU=See www.entrust.net/legal-terms/(c) 2012 Entrust, Inc. - for authorized use only
+# Subject: CN=Entrust Root Certification Authority - EC1 O=Entrust, Inc. OU=See www.entrust.net/legal-terms/(c) 2012 Entrust, Inc. - for authorized use only
+# Label: "Entrust Root Certification Authority - EC1"
+# Serial: 51543124481930649114116133369
+# MD5 Fingerprint: b6:7e:1d:f0:58:c5:49:6c:24:3b:3d:ed:98:18:ed:bc
+# SHA1 Fingerprint: 20:d8:06:40:df:9b:25:f5:12:25:3a:11:ea:f7:59:8a:eb:14:b5:47
+# SHA256 Fingerprint: 02:ed:0e:b2:8c:14:da:45:16:5c:56:67:91:70:0d:64:51:d7:fb:56:f0:b2:ab:1d:3b:8e:b0:70:e5:6e:df:f5
+-----BEGIN CERTIFICATE-----
+MIIC+TCCAoCgAwIBAgINAKaLeSkAAAAAUNCR+TAKBggqhkjOPQQDAzCBvzELMAkG
+A1UEBhMCVVMxFjAUBgNVBAoTDUVudHJ1c3QsIEluYy4xKDAmBgNVBAsTH1NlZSB3
+d3cuZW50cnVzdC5uZXQvbGVnYWwtdGVybXMxOTA3BgNVBAsTMChjKSAyMDEyIEVu
+dHJ1c3QsIEluYy4gLSBmb3IgYXV0aG9yaXplZCB1c2Ugb25seTEzMDEGA1UEAxMq
+RW50cnVzdCBSb290IENlcnRpZmljYXRpb24gQXV0aG9yaXR5IC0gRUMxMB4XDTEy
+MTIxODE1MjUzNloXDTM3MTIxODE1NTUzNlowgb8xCzAJBgNVBAYTAlVTMRYwFAYD
+VQQKEw1FbnRydXN0LCBJbmMuMSgwJgYDVQQLEx9TZWUgd3d3LmVudHJ1c3QubmV0
+L2xlZ2FsLXRlcm1zMTkwNwYDVQQLEzAoYykgMjAxMiBFbnRydXN0LCBJbmMuIC0g
+Zm9yIGF1dGhvcml6ZWQgdXNlIG9ubHkxMzAxBgNVBAMTKkVudHJ1c3QgUm9vdCBD
+ZXJ0aWZpY2F0aW9uIEF1dGhvcml0eSAtIEVDMTB2MBAGByqGSM49AgEGBSuBBAAi
+A2IABIQTydC6bUF74mzQ61VfZgIaJPRbiWlH47jCffHyAsWfoPZb1YsGGYZPUxBt
+ByQnoaD41UcZYUx9ypMn6nQM72+WCf5j7HBdNq1nd67JnXxVRDqiY1Ef9eNi1KlH
+Bz7MIKNCMEAwDgYDVR0PAQH/BAQDAgEGMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O
+BBYEFLdj5xrdjekIplWDpOBqUEFlEUJJMAoGCCqGSM49BAMDA2cAMGQCMGF52OVC
+R98crlOZF7ZvHH3hvxGU0QOIdeSNiaSKd0bebWHvAvX7td/M/k7//qnmpwIwW5nX
+hTcGtXsI/esni0qU+eH6p44mCOh8kmhtc9hvJqwhAriZtyZBWyVgrtBIGu4G
+-----END CERTIFICATE-----
+
+# Issuer: CN=CFCA EV ROOT O=China Financial Certification Authority
+# Subject: CN=CFCA EV ROOT O=China Financial Certification Authority
+# Label: "CFCA EV ROOT"
+# Serial: 407555286
+# MD5 Fingerprint: 74:e1:b6:ed:26:7a:7a:44:30:33:94:ab:7b:27:81:30
+# SHA1 Fingerprint: e2:b8:29:4b:55:84:ab:6b:58:c2:90:46:6c:ac:3f:b8:39:8f:84:83
+# SHA256 Fingerprint: 5c:c3:d7:8e:4e:1d:5e:45:54:7a:04:e6:87:3e:64:f9:0c:f9:53:6d:1c:cc:2e:f8:00:f3:55:c4:c5:fd:70:fd
+-----BEGIN CERTIFICATE-----
+MIIFjTCCA3WgAwIBAgIEGErM1jANBgkqhkiG9w0BAQsFADBWMQswCQYDVQQGEwJD
+TjEwMC4GA1UECgwnQ2hpbmEgRmluYW5jaWFsIENlcnRpZmljYXRpb24gQXV0aG9y
+aXR5MRUwEwYDVQQDDAxDRkNBIEVWIFJPT1QwHhcNMTIwODA4MDMwNzAxWhcNMjkx
+MjMxMDMwNzAxWjBWMQswCQYDVQQGEwJDTjEwMC4GA1UECgwnQ2hpbmEgRmluYW5j
+aWFsIENlcnRpZmljYXRpb24gQXV0aG9yaXR5MRUwEwYDVQQDDAxDRkNBIEVWIFJP
+T1QwggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQDXXWvNED8fBVnVBU03
+sQ7smCuOFR36k0sXgiFxEFLXUWRwFsJVaU2OFW2fvwwbwuCjZ9YMrM8irq93VCpL
+TIpTUnrD7i7es3ElweldPe6hL6P3KjzJIx1qqx2hp/Hz7KDVRM8Vz3IvHWOX6Jn5
+/ZOkVIBMUtRSqy5J35DNuF++P96hyk0g1CXohClTt7GIH//62pCfCqktQT+x8Rgp
+7hZZLDRJGqgG16iI0gNyejLi6mhNbiyWZXvKWfry4t3uMCz7zEasxGPrb382KzRz
+EpR/38wmnvFyXVBlWY9ps4deMm/DGIq1lY+wejfeWkU7xzbh72fROdOXW3NiGUgt
+hxwG+3SYIElz8AXSG7Ggo7cbcNOIabla1jj0Ytwli3i/+Oh+uFzJlU9fpy25IGvP
+a931DfSCt/SyZi4QKPaXWnuWFo8BGS1sbn85WAZkgwGDg8NNkt0yxoekN+kWzqot
+aK8KgWU6cMGbrU1tVMoqLUuFG7OA5nBFDWteNfB/O7ic5ARwiRIlk9oKmSJgamNg
+TnYGmE69g60dWIolhdLHZR4tjsbftsbhf4oEIRUpdPA+nJCdDC7xij5aqgwJHsfV
+PKPtl8MeNPo4+QgO48BdK4PRVmrJtqhUUy54Mmc9gn900PvhtgVguXDbjgv5E1hv
+cWAQUhC5wUEJ73IfZzF4/5YFjQIDAQABo2MwYTAfBgNVHSMEGDAWgBTj/i39KNAL
+tbq2osS/BqoFjJP7LzAPBgNVHRMBAf8EBTADAQH/MA4GA1UdDwEB/wQEAwIBBjAd
+BgNVHQ4EFgQU4/4t/SjQC7W6tqLEvwaqBYyT+y8wDQYJKoZIhvcNAQELBQADggIB
+ACXGumvrh8vegjmWPfBEp2uEcwPenStPuiB/vHiyz5ewG5zz13ku9Ui20vsXiObT
+ej/tUxPQ4i9qecsAIyjmHjdXNYmEwnZPNDatZ8POQQaIxffu2Bq41gt/UP+TqhdL
+jOztUmCypAbqTuv0axn96/Ua4CUqmtzHQTb3yHQFhDmVOdYLO6Qn+gjYXB74BGBS
+ESgoA//vU2YApUo0FmZ8/Qmkrp5nGm9BC2sGE5uPhnEFtC+NiWYzKXZUmhH4J/qy
+P5Hgzg0b8zAarb8iXRvTvyUFTeGSGn+ZnzxEk8rUQElsgIfXBDrDMlI1Dlb4pd19 +xIsNER9Tyx6yF7Zod1rg1MvIB671Oi6ON7fQAUtDKXeMOZePglr4UeWJoBjnaH9d +Ci77o0cOPaYjesYBx4/IXr9tgFa+iiS6M+qf4TIRnvHST4D2G0CvOJ4RUHlzEhLN +5mydLIhyPDCBBpEi6lmt2hkuIsKNuYyH4Ga8cyNfIWRjgEj1oDwYPZTISEEdQLpe +/v5WOaHIz16eGWRGENoXkbcFgKyLmZJ956LYBws2J+dIeWCKw9cTXPhyQN9Ky8+Z +AAoACxGV2lZFA4gKn2fQ1XmxqI1AbQ3CekD6819kR5LLU7m7Wc5P/dAVUwHY3+vZ +5nbv0CO7O6l5s9UCKc2Jo5YPSjXnTkLAdc0Hz+Ys63su +-----END CERTIFICATE----- + +# Issuer: CN=OISTE WISeKey Global Root GB CA O=WISeKey OU=OISTE Foundation Endorsed +# Subject: CN=OISTE WISeKey Global Root GB CA O=WISeKey OU=OISTE Foundation Endorsed +# Label: "OISTE WISeKey Global Root GB CA" +# Serial: 157768595616588414422159278966750757568 +# MD5 Fingerprint: a4:eb:b9:61:28:2e:b7:2f:98:b0:35:26:90:99:51:1d +# SHA1 Fingerprint: 0f:f9:40:76:18:d3:d7:6a:4b:98:f0:a8:35:9e:0c:fd:27:ac:cc:ed +# SHA256 Fingerprint: 6b:9c:08:e8:6e:b0:f7:67:cf:ad:65:cd:98:b6:21:49:e5:49:4a:67:f5:84:5e:7b:d1:ed:01:9f:27:b8:6b:d6 +-----BEGIN CERTIFICATE----- +MIIDtTCCAp2gAwIBAgIQdrEgUnTwhYdGs/gjGvbCwDANBgkqhkiG9w0BAQsFADBt +MQswCQYDVQQGEwJDSDEQMA4GA1UEChMHV0lTZUtleTEiMCAGA1UECxMZT0lTVEUg +Rm91bmRhdGlvbiBFbmRvcnNlZDEoMCYGA1UEAxMfT0lTVEUgV0lTZUtleSBHbG9i +YWwgUm9vdCBHQiBDQTAeFw0xNDEyMDExNTAwMzJaFw0zOTEyMDExNTEwMzFaMG0x +CzAJBgNVBAYTAkNIMRAwDgYDVQQKEwdXSVNlS2V5MSIwIAYDVQQLExlPSVNURSBG +b3VuZGF0aW9uIEVuZG9yc2VkMSgwJgYDVQQDEx9PSVNURSBXSVNlS2V5IEdsb2Jh +bCBSb290IEdCIENBMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA2Be3 +HEokKtaXscriHvt9OO+Y9bI5mE4nuBFde9IllIiCFSZqGzG7qFshISvYD06fWvGx +WuR51jIjK+FTzJlFXHtPrby/h0oLS5daqPZI7H17Dc0hBt+eFf1Biki3IPShehtX +1F1Q/7pn2COZH8g/497/b1t3sWtuuMlk9+HKQUYOKXHQuSP8yYFfTvdv37+ErXNk +u7dCjmn21HYdfp2nuFeKUWdy19SouJVUQHMD9ur06/4oQnc/nSMbsrY9gBQHTC5P +99UKFg29ZkM3fiNDecNAhvVMKdqOmq0NpQSHiB6F4+lT1ZvIiwNjeOvgGUpuuy9r +M2RYk61pv48b74JIxwIDAQABo1EwTzALBgNVHQ8EBAMCAYYwDwYDVR0TAQH/BAUw +AwEB/zAdBgNVHQ4EFgQUNQ/INmNe4qPs+TtmFc5RUuORmj0wEAYJKwYBBAGCNxUB +BAMCAQAwDQYJKoZIhvcNAQELBQADggEBAEBM+4eymYGQfp3FsLAmzYh7KzKNbrgh +cViXfa43FK8+5/ea4n32cZiZBKpDdHij40lhPnOMTZTg+XHEthYOU3gf1qKHLwI5 +gSk8rxWYITD+KJAAjNHhy/peyP34EEY7onhCkRd0VQreUGdNZtGn//3ZwLWoo4rO +ZvUPQ82nK1d7Y0Zqqi5S2PTt4W2tKZB4SLrhI6qjiey1q5bAtEuiHZeeevJuQHHf +aPFlTc58Bd9TZaml8LGXBHAVRgOY1NK/VLSgWH1Sb9pWJmLU2NuJMW8c8CLC02Ic +Nc1MaRVUGpCY3useX8p3x8uOPUNpnJpY0CQ73xtAln41rYHHTnG6iBM= +-----END CERTIFICATE----- + +# Issuer: CN=SZAFIR ROOT CA2 O=Krajowa Izba Rozliczeniowa S.A. +# Subject: CN=SZAFIR ROOT CA2 O=Krajowa Izba Rozliczeniowa S.A. 
+# Label: "SZAFIR ROOT CA2" +# Serial: 357043034767186914217277344587386743377558296292 +# MD5 Fingerprint: 11:64:c1:89:b0:24:b1:8c:b1:07:7e:89:9e:51:9e:99 +# SHA1 Fingerprint: e2:52:fa:95:3f:ed:db:24:60:bd:6e:28:f3:9c:cc:cf:5e:b3:3f:de +# SHA256 Fingerprint: a1:33:9d:33:28:1a:0b:56:e5:57:d3:d3:2b:1c:e7:f9:36:7e:b0:94:bd:5f:a7:2a:7e:50:04:c8:de:d7:ca:fe +-----BEGIN CERTIFICATE----- +MIIDcjCCAlqgAwIBAgIUPopdB+xV0jLVt+O2XwHrLdzk1uQwDQYJKoZIhvcNAQEL +BQAwUTELMAkGA1UEBhMCUEwxKDAmBgNVBAoMH0tyYWpvd2EgSXpiYSBSb3psaWN6 +ZW5pb3dhIFMuQS4xGDAWBgNVBAMMD1NaQUZJUiBST09UIENBMjAeFw0xNTEwMTkw +NzQzMzBaFw0zNTEwMTkwNzQzMzBaMFExCzAJBgNVBAYTAlBMMSgwJgYDVQQKDB9L +cmFqb3dhIEl6YmEgUm96bGljemVuaW93YSBTLkEuMRgwFgYDVQQDDA9TWkFGSVIg +Uk9PVCBDQTIwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC3vD5QqEvN +QLXOYeeWyrSh2gwisPq1e3YAd4wLz32ohswmUeQgPYUM1ljj5/QqGJ3a0a4m7utT +3PSQ1hNKDJA8w/Ta0o4NkjrcsbH/ON7Dui1fgLkCvUqdGw+0w8LBZwPd3BucPbOw +3gAeqDRHu5rr/gsUvTaE2g0gv/pby6kWIK05YO4vdbbnl5z5Pv1+TW9NL++IDWr6 +3fE9biCloBK0TXC5ztdyO4mTp4CEHCdJckm1/zuVnsHMyAHs6A6KCpbns6aH5db5 +BSsNl0BwPLqsdVqc1U2dAgrSS5tmS0YHF2Wtn2yIANwiieDhZNRnvDF5YTy7ykHN +XGoAyDw4jlivAgMBAAGjQjBAMA8GA1UdEwEB/wQFMAMBAf8wDgYDVR0PAQH/BAQD +AgEGMB0GA1UdDgQWBBQuFqlKGLXLzPVvUPMjX/hd56zwyDANBgkqhkiG9w0BAQsF +AAOCAQEAtXP4A9xZWx126aMqe5Aosk3AM0+qmrHUuOQn/6mWmc5G4G18TKI4pAZw +8PRBEew/R40/cof5O/2kbytTAOD/OblqBw7rHRz2onKQy4I9EYKL0rufKq8h5mOG +nXkZ7/e7DDWQw4rtTw/1zBLZpD67oPwglV9PJi8RI4NOdQcPv5vRtB3pEAT+ymCP +oky4rc/hkA/NrgrHXXu3UNLUYfrVFdvXn4dRVOul4+vJhaAlIDf7js4MNIThPIGy +d05DpYhfhmehPea0XGG2Ptv+tyjFogeutcrKjSoS75ftwjCkySp6+/NNIxuZMzSg +LvWpCz/UXeHPhJ/iGcJfitYgHuNztw== +-----END CERTIFICATE----- + +# Issuer: CN=Certum Trusted Network CA 2 O=Unizeto Technologies S.A. OU=Certum Certification Authority +# Subject: CN=Certum Trusted Network CA 2 O=Unizeto Technologies S.A. 
OU=Certum Certification Authority +# Label: "Certum Trusted Network CA 2" +# Serial: 44979900017204383099463764357512596969 +# MD5 Fingerprint: 6d:46:9e:d9:25:6d:08:23:5b:5e:74:7d:1e:27:db:f2 +# SHA1 Fingerprint: d3:dd:48:3e:2b:bf:4c:05:e8:af:10:f5:fa:76:26:cf:d3:dc:30:92 +# SHA256 Fingerprint: b6:76:f2:ed:da:e8:77:5c:d3:6c:b0:f6:3c:d1:d4:60:39:61:f4:9e:62:65:ba:01:3a:2f:03:07:b6:d0:b8:04 +-----BEGIN CERTIFICATE----- +MIIF0jCCA7qgAwIBAgIQIdbQSk8lD8kyN/yqXhKN6TANBgkqhkiG9w0BAQ0FADCB +gDELMAkGA1UEBhMCUEwxIjAgBgNVBAoTGVVuaXpldG8gVGVjaG5vbG9naWVzIFMu +QS4xJzAlBgNVBAsTHkNlcnR1bSBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eTEkMCIG +A1UEAxMbQ2VydHVtIFRydXN0ZWQgTmV0d29yayBDQSAyMCIYDzIwMTExMDA2MDgz +OTU2WhgPMjA0NjEwMDYwODM5NTZaMIGAMQswCQYDVQQGEwJQTDEiMCAGA1UEChMZ +VW5pemV0byBUZWNobm9sb2dpZXMgUy5BLjEnMCUGA1UECxMeQ2VydHVtIENlcnRp +ZmljYXRpb24gQXV0aG9yaXR5MSQwIgYDVQQDExtDZXJ0dW0gVHJ1c3RlZCBOZXR3 +b3JrIENBIDIwggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQC9+Xj45tWA +DGSdhhuWZGc/IjoedQF97/tcZ4zJzFxrqZHmuULlIEub2pt7uZld2ZuAS9eEQCsn +0+i6MLs+CRqnSZXvK0AkwpfHp+6bJe+oCgCXhVqqndwpyeI1B+twTUrWwbNWuKFB +OJvR+zF/j+Bf4bE/D44WSWDXBo0Y+aomEKsq09DRZ40bRr5HMNUuctHFY9rnY3lE +fktjJImGLjQ/KUxSiyqnwOKRKIm5wFv5HdnnJ63/mgKXwcZQkpsCLL2puTRZCr+E +Sv/f/rOf69me4Jgj7KZrdxYq28ytOxykh9xGc14ZYmhFV+SQgkK7QtbwYeDBoz1m +o130GO6IyY0XRSmZMnUCMe4pJshrAua1YkV/NxVaI2iJ1D7eTiew8EAMvE0Xy02i +sx7QBlrd9pPPV3WZ9fqGGmd4s7+W/jTcvedSVuWz5XV710GRBdxdaeOVDUO5/IOW +OZV7bIBaTxNyxtd9KXpEulKkKtVBRgkg/iKgtlswjbyJDNXXcPiHUv3a76xRLgez +Tv7QCdpw75j6VuZt27VXS9zlLCUVyJ4ueE742pyehizKV/Ma5ciSixqClnrDvFAS +adgOWkaLOusm+iPJtrCBvkIApPjW/jAux9JG9uWOdf3yzLnQh1vMBhBgu4M1t15n +3kfsmUjxpKEV/q2MYo45VU85FrmxY53/twIDAQABo0IwQDAPBgNVHRMBAf8EBTAD +AQH/MB0GA1UdDgQWBBS2oVQ5AsOgP46KvPrU+Bym0ToO/TAOBgNVHQ8BAf8EBAMC +AQYwDQYJKoZIhvcNAQENBQADggIBAHGlDs7k6b8/ONWJWsQCYftMxRQXLYtPU2sQ +F/xlhMcQSZDe28cmk4gmb3DWAl45oPePq5a1pRNcgRRtDoGCERuKTsZPpd1iHkTf +CVn0W3cLN+mLIMb4Ck4uWBzrM9DPhmDJ2vuAL55MYIR4PSFk1vtBHxgP58l1cb29 +XN40hz5BsA72udY/CROWFC/emh1auVbONTqwX3BNXuMp8SMoclm2q8KMZiYcdywm +djWLKKdpoPk79SPdhRB0yZADVpHnr7pH1BKXESLjokmUbOe3lEu6LaTaM4tMpkT/ +WjzGHWTYtTHkpjx6qFcL2+1hGsvxznN3Y6SHb0xRONbkX8eftoEq5IVIeVheO/jb +AoJnwTnbw3RLPTYe+SmTiGhbqEQZIfCn6IENLOiTNrQ3ssqwGyZ6miUfmpqAnksq +P/ujmv5zMnHCnsZy4YpoJ/HkD7TETKVhk/iXEAcqMCWpuchxuO9ozC1+9eB+D4Ko +b7a6bINDd82Kkhehnlt4Fj1F4jNy3eFmypnTycUm/Q1oBEauttmbjL4ZvrHG8hnj +XALKLNhvSgfZyTXaQHXyxKcZb55CEJh15pWLYLztxRLXis7VmFxWlgPF7ncGNf/P +5O4/E2Hu29othfDNrp2yGAlFw5Khchf8R7agCyzxxN5DaAhqXzvwdmP7zAYspsbi +DrW5viSP +-----END CERTIFICATE----- + +# Issuer: CN=Hellenic Academic and Research Institutions RootCA 2015 O=Hellenic Academic and Research Institutions Cert. Authority +# Subject: CN=Hellenic Academic and Research Institutions RootCA 2015 O=Hellenic Academic and Research Institutions Cert. 
Authority +# Label: "Hellenic Academic and Research Institutions RootCA 2015" +# Serial: 0 +# MD5 Fingerprint: ca:ff:e2:db:03:d9:cb:4b:e9:0f:ad:84:fd:7b:18:ce +# SHA1 Fingerprint: 01:0c:06:95:a6:98:19:14:ff:bf:5f:c6:b0:b6:95:ea:29:e9:12:a6 +# SHA256 Fingerprint: a0:40:92:9a:02:ce:53:b4:ac:f4:f2:ff:c6:98:1c:e4:49:6f:75:5e:6d:45:fe:0b:2a:69:2b:cd:52:52:3f:36 +-----BEGIN CERTIFICATE----- +MIIGCzCCA/OgAwIBAgIBADANBgkqhkiG9w0BAQsFADCBpjELMAkGA1UEBhMCR1Ix +DzANBgNVBAcTBkF0aGVuczFEMEIGA1UEChM7SGVsbGVuaWMgQWNhZGVtaWMgYW5k +IFJlc2VhcmNoIEluc3RpdHV0aW9ucyBDZXJ0LiBBdXRob3JpdHkxQDA+BgNVBAMT +N0hlbGxlbmljIEFjYWRlbWljIGFuZCBSZXNlYXJjaCBJbnN0aXR1dGlvbnMgUm9v +dENBIDIwMTUwHhcNMTUwNzA3MTAxMTIxWhcNNDAwNjMwMTAxMTIxWjCBpjELMAkG +A1UEBhMCR1IxDzANBgNVBAcTBkF0aGVuczFEMEIGA1UEChM7SGVsbGVuaWMgQWNh +ZGVtaWMgYW5kIFJlc2VhcmNoIEluc3RpdHV0aW9ucyBDZXJ0LiBBdXRob3JpdHkx +QDA+BgNVBAMTN0hlbGxlbmljIEFjYWRlbWljIGFuZCBSZXNlYXJjaCBJbnN0aXR1 +dGlvbnMgUm9vdENBIDIwMTUwggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoIC +AQDC+Kk/G4n8PDwEXT2QNrCROnk8ZlrvbTkBSRq0t89/TSNTt5AA4xMqKKYx8ZEA +4yjsriFBzh/a/X0SWwGDD7mwX5nh8hKDgE0GPt+sr+ehiGsxr/CL0BgzuNtFajT0 +AoAkKAoCFZVedioNmToUW/bLy1O8E00BiDeUJRtCvCLYjqOWXjrZMts+6PAQZe10 +4S+nfK8nNLspfZu2zwnI5dMK/IhlZXQK3HMcXM1AsRzUtoSMTFDPaI6oWa7CJ06C +ojXdFPQf/7J31Ycvqm59JCfnxssm5uX+Zwdj2EUN3TpZZTlYepKZcj2chF6IIbjV +9Cz82XBST3i4vTwri5WY9bPRaM8gFH5MXF/ni+X1NYEZN9cRCLdmvtNKzoNXADrD +gfgXy5I2XdGj2HUb4Ysn6npIQf1FGQatJ5lOwXBH3bWfgVMS5bGMSF0xQxfjjMZ6 +Y5ZLKTBOhE5iGV48zpeQpX8B653g+IuJ3SWYPZK2fu/Z8VFRfS0myGlZYeCsargq +NhEEelC9MoS+L9xy1dcdFkfkR2YgP/SWxa+OAXqlD3pk9Q0Yh9muiNX6hME6wGko +LfINaFGq46V3xqSQDqE3izEjR8EJCOtu93ib14L8hCCZSRm2Ekax+0VVFqmjZayc +Bw/qa9wfLgZy7IaIEuQt218FL+TwA9MmM+eAws1CoRc0CwIDAQABo0IwQDAPBgNV +HRMBAf8EBTADAQH/MA4GA1UdDwEB/wQEAwIBBjAdBgNVHQ4EFgQUcRVnyMjJvXVd +ctA4GGqd83EkVAswDQYJKoZIhvcNAQELBQADggIBAHW7bVRLqhBYRjTyYtcWNl0I +XtVsyIe9tC5G8jH4fOpCtZMWVdyhDBKg2mF+D1hYc2Ryx+hFjtyp8iY/xnmMsVMI +M4GwVhO+5lFc2JsKT0ucVlMC6U/2DWDqTUJV6HwbISHTGzrMd/K4kPFox/la/vot +9L/J9UUbzjgQKjeKeaO04wlshYaT/4mWJ3iBj2fjRnRUjtkNaeJK9E10A/+yd+2V +Z5fkscWrv2oj6NSU4kQoYsRL4vDY4ilrGnB+JGGTe08DMiUNRSQrlrRGar9KC/ea +j8GsGsVn82800vpzY4zvFrCopEYq+OsS7HK07/grfoxSwIuEVPkvPuNVqNxmsdnh +X9izjFk0WaSrT2y7HxjbdavYy5LNlDhhDgcGH0tGEPEVvo2FXDtKK4F5D7Rpn0lQ +l033DlZdwJVqwjbDG2jJ9SrcR5q+ss7FJej6A7na+RZukYT1HCjI/CbM1xyQVqdf +bzoEvM14iQuODy+jqk+iGxI9FghAD/FGTNeqewjBCvVtJ94Cj8rDtSvK6evIIVM4 +pcw72Hc3MKJP2W/R8kCtQXoXxdZKNYm3QdV8hn9VTYNKpXMgwDqvkPGaJI7ZjnHK +e7iG2rKPmT4dEw0SEe7Uq/DpFXYC5ODfqiAeW2GFZECpkJcNrVPSWh2HagCXZWK0 +vm9qp/UsQu0yrbYhnr68 +-----END CERTIFICATE----- + +# Issuer: CN=Hellenic Academic and Research Institutions ECC RootCA 2015 O=Hellenic Academic and Research Institutions Cert. Authority +# Subject: CN=Hellenic Academic and Research Institutions ECC RootCA 2015 O=Hellenic Academic and Research Institutions Cert. 
Authority +# Label: "Hellenic Academic and Research Institutions ECC RootCA 2015" +# Serial: 0 +# MD5 Fingerprint: 81:e5:b4:17:eb:c2:f5:e1:4b:0d:41:7b:49:92:fe:ef +# SHA1 Fingerprint: 9f:f1:71:8d:92:d5:9a:f3:7d:74:97:b4:bc:6f:84:68:0b:ba:b6:66 +# SHA256 Fingerprint: 44:b5:45:aa:8a:25:e6:5a:73:ca:15:dc:27:fc:36:d2:4c:1c:b9:95:3a:06:65:39:b1:15:82:dc:48:7b:48:33 +-----BEGIN CERTIFICATE----- +MIICwzCCAkqgAwIBAgIBADAKBggqhkjOPQQDAjCBqjELMAkGA1UEBhMCR1IxDzAN +BgNVBAcTBkF0aGVuczFEMEIGA1UEChM7SGVsbGVuaWMgQWNhZGVtaWMgYW5kIFJl +c2VhcmNoIEluc3RpdHV0aW9ucyBDZXJ0LiBBdXRob3JpdHkxRDBCBgNVBAMTO0hl +bGxlbmljIEFjYWRlbWljIGFuZCBSZXNlYXJjaCBJbnN0aXR1dGlvbnMgRUNDIFJv +b3RDQSAyMDE1MB4XDTE1MDcwNzEwMzcxMloXDTQwMDYzMDEwMzcxMlowgaoxCzAJ +BgNVBAYTAkdSMQ8wDQYDVQQHEwZBdGhlbnMxRDBCBgNVBAoTO0hlbGxlbmljIEFj +YWRlbWljIGFuZCBSZXNlYXJjaCBJbnN0aXR1dGlvbnMgQ2VydC4gQXV0aG9yaXR5 +MUQwQgYDVQQDEztIZWxsZW5pYyBBY2FkZW1pYyBhbmQgUmVzZWFyY2ggSW5zdGl0 +dXRpb25zIEVDQyBSb290Q0EgMjAxNTB2MBAGByqGSM49AgEGBSuBBAAiA2IABJKg +QehLgoRc4vgxEZmGZE4JJS+dQS8KrjVPdJWyUWRrjWvmP3CV8AVER6ZyOFB2lQJa +jq4onvktTpnvLEhvTCUp6NFxW98dwXU3tNf6e3pCnGoKVlp8aQuqgAkkbH7BRqNC +MEAwDwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMCAQYwHQYDVR0OBBYEFLQi +C4KZJAEOnLvkDv2/+5cgk5kqMAoGCCqGSM49BAMCA2cAMGQCMGfOFmI4oqxiRaep +lSTAGiecMjvAwNW6qef4BENThe5SId6d9SWDPp5YSy/XZxMOIQIwBeF1Ad5o7Sof +TUwJCA3sS61kFyjndc5FZXIhF8siQQ6ME5g4mlRtm8rifOoCWCKR +-----END CERTIFICATE----- + +# Issuer: CN=ISRG Root X1 O=Internet Security Research Group +# Subject: CN=ISRG Root X1 O=Internet Security Research Group +# Label: "ISRG Root X1" +# Serial: 172886928669790476064670243504169061120 +# MD5 Fingerprint: 0c:d2:f9:e0:da:17:73:e9:ed:86:4d:a5:e3:70:e7:4e +# SHA1 Fingerprint: ca:bd:2a:79:a1:07:6a:31:f2:1d:25:36:35:cb:03:9d:43:29:a5:e8 +# SHA256 Fingerprint: 96:bc:ec:06:26:49:76:f3:74:60:77:9a:cf:28:c5:a7:cf:e8:a3:c0:aa:e1:1a:8f:fc:ee:05:c0:bd:df:08:c6 +-----BEGIN CERTIFICATE----- +MIIFazCCA1OgAwIBAgIRAIIQz7DSQONZRGPgu2OCiwAwDQYJKoZIhvcNAQELBQAw +TzELMAkGA1UEBhMCVVMxKTAnBgNVBAoTIEludGVybmV0IFNlY3VyaXR5IFJlc2Vh +cmNoIEdyb3VwMRUwEwYDVQQDEwxJU1JHIFJvb3QgWDEwHhcNMTUwNjA0MTEwNDM4 +WhcNMzUwNjA0MTEwNDM4WjBPMQswCQYDVQQGEwJVUzEpMCcGA1UEChMgSW50ZXJu +ZXQgU2VjdXJpdHkgUmVzZWFyY2ggR3JvdXAxFTATBgNVBAMTDElTUkcgUm9vdCBY +MTCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBAK3oJHP0FDfzm54rVygc +h77ct984kIxuPOZXoHj3dcKi/vVqbvYATyjb3miGbESTtrFj/RQSa78f0uoxmyF+ +0TM8ukj13Xnfs7j/EvEhmkvBioZxaUpmZmyPfjxwv60pIgbz5MDmgK7iS4+3mX6U +A5/TR5d8mUgjU+g4rk8Kb4Mu0UlXjIB0ttov0DiNewNwIRt18jA8+o+u3dpjq+sW +T8KOEUt+zwvo/7V3LvSye0rgTBIlDHCNAymg4VMk7BPZ7hm/ELNKjD+Jo2FR3qyH +B5T0Y3HsLuJvW5iB4YlcNHlsdu87kGJ55tukmi8mxdAQ4Q7e2RCOFvu396j3x+UC +B5iPNgiV5+I3lg02dZ77DnKxHZu8A/lJBdiB3QW0KtZB6awBdpUKD9jf1b0SHzUv +KBds0pjBqAlkd25HN7rOrFleaJ1/ctaJxQZBKT5ZPt0m9STJEadao0xAH0ahmbWn +OlFuhjuefXKnEgV4We0+UXgVCwOPjdAvBbI+e0ocS3MFEvzG6uBQE3xDk3SzynTn +jh8BCNAw1FtxNrQHusEwMFxIt4I7mKZ9YIqioymCzLq9gwQbooMDQaHWBfEbwrbw +qHyGO0aoSCqI3Haadr8faqU9GY/rOPNk3sgrDQoo//fb4hVC1CLQJ13hef4Y53CI +rU7m2Ys6xt0nUW7/vGT1M0NPAgMBAAGjQjBAMA4GA1UdDwEB/wQEAwIBBjAPBgNV +HRMBAf8EBTADAQH/MB0GA1UdDgQWBBR5tFnme7bl5AFzgAiIyBpY9umbbjANBgkq +hkiG9w0BAQsFAAOCAgEAVR9YqbyyqFDQDLHYGmkgJykIrGF1XIpu+ILlaS/V9lZL +ubhzEFnTIZd+50xx+7LSYK05qAvqFyFWhfFQDlnrzuBZ6brJFe+GnY+EgPbk6ZGQ +3BebYhtF8GaV0nxvwuo77x/Py9auJ/GpsMiu/X1+mvoiBOv/2X/qkSsisRcOj/KK +NFtY2PwByVS5uCbMiogziUwthDyC3+6WVwW6LLv3xLfHTjuCvjHIInNzktHCgKQ5 +ORAzI4JMPJ+GslWYHb4phowim57iaztXOoJwTdwJx4nLCgdNbOhdjsnvzqvHu7Ur +TkXWStAmzOVyyghqpZXjFaH3pO3JLF+l+/+sKAIuvtd7u+Nxe5AW0wdeRlN8NwdC +jNPElpzVmbUq4JUagEiuTDkHzsxHpFKVK7q4+63SM1N95R1NbdWhscdCb+ZAJzVc 
+oyi3B43njTOQ5yOf+1CceWxG1bQVs5ZufpsMljq4Ui0/1lvh+wjChP4kqKOJ2qxq +4RgqsahDYVvTH9w7jXbyLeiNdd8XM2w9U/t7y0Ff/9yi0GE44Za4rF2LN9d11TPA +mRGunUHBcnWEvgJBQl9nJEiU0Zsnvgc/ubhPgXRR4Xq37Z0j4r7g1SgEEzwxA57d +emyPxgcYxn/eR44/KJ4EBs+lVDR3veyJm+kXQ99b21/+jh5Xos1AnX5iItreGCc= +-----END CERTIFICATE----- + +# Issuer: O=FNMT-RCM OU=AC RAIZ FNMT-RCM +# Subject: O=FNMT-RCM OU=AC RAIZ FNMT-RCM +# Label: "AC RAIZ FNMT-RCM" +# Serial: 485876308206448804701554682760554759 +# MD5 Fingerprint: e2:09:04:b4:d3:bd:d1:a0:14:fd:1a:d2:47:c4:57:1d +# SHA1 Fingerprint: ec:50:35:07:b2:15:c4:95:62:19:e2:a8:9a:5b:42:99:2c:4c:2c:20 +# SHA256 Fingerprint: eb:c5:57:0c:29:01:8c:4d:67:b1:aa:12:7b:af:12:f7:03:b4:61:1e:bc:17:b7:da:b5:57:38:94:17:9b:93:fa +-----BEGIN CERTIFICATE----- +MIIFgzCCA2ugAwIBAgIPXZONMGc2yAYdGsdUhGkHMA0GCSqGSIb3DQEBCwUAMDsx +CzAJBgNVBAYTAkVTMREwDwYDVQQKDAhGTk1ULVJDTTEZMBcGA1UECwwQQUMgUkFJ +WiBGTk1ULVJDTTAeFw0wODEwMjkxNTU5NTZaFw0zMDAxMDEwMDAwMDBaMDsxCzAJ +BgNVBAYTAkVTMREwDwYDVQQKDAhGTk1ULVJDTTEZMBcGA1UECwwQQUMgUkFJWiBG +Tk1ULVJDTTCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBALpxgHpMhm5/ +yBNtwMZ9HACXjywMI7sQmkCpGreHiPibVmr75nuOi5KOpyVdWRHbNi63URcfqQgf +BBckWKo3Shjf5TnUV/3XwSyRAZHiItQDwFj8d0fsjz50Q7qsNI1NOHZnjrDIbzAz +WHFctPVrbtQBULgTfmxKo0nRIBnuvMApGGWn3v7v3QqQIecaZ5JCEJhfTzC8PhxF +tBDXaEAUwED653cXeuYLj2VbPNmaUtu1vZ5Gzz3rkQUCwJaydkxNEJY7kvqcfw+Z +374jNUUeAlz+taibmSXaXvMiwzn15Cou08YfxGyqxRxqAQVKL9LFwag0Jl1mpdIC +IfkYtwb1TplvqKtMUejPUBjFd8g5CSxJkjKZqLsXF3mwWsXmo8RZZUc1g16p6DUL +mbvkzSDGm0oGObVo/CK67lWMK07q87Hj/LaZmtVC+nFNCM+HHmpxffnTtOmlcYF7 +wk5HlqX2doWjKI/pgG6BU6VtX7hI+cL5NqYuSf+4lsKMB7ObiFj86xsc3i1w4peS +MKGJ47xVqCfWS+2QrYv6YyVZLag13cqXM7zlzced0ezvXg5KkAYmY6252TUtB7p2 +ZSysV4999AeU14ECll2jB0nVetBX+RvnU0Z1qrB5QstocQjpYL05ac70r8NWQMet +UqIJ5G+GR4of6ygnXYMgrwTJbFaai0b1AgMBAAGjgYMwgYAwDwYDVR0TAQH/BAUw +AwEB/zAOBgNVHQ8BAf8EBAMCAQYwHQYDVR0OBBYEFPd9xf3E6Jobd2Sn9R2gzL+H +YJptMD4GA1UdIAQ3MDUwMwYEVR0gADArMCkGCCsGAQUFBwIBFh1odHRwOi8vd3d3 +LmNlcnQuZm5tdC5lcy9kcGNzLzANBgkqhkiG9w0BAQsFAAOCAgEAB5BK3/MjTvDD +nFFlm5wioooMhfNzKWtN/gHiqQxjAb8EZ6WdmF/9ARP67Jpi6Yb+tmLSbkyU+8B1 +RXxlDPiyN8+sD8+Nb/kZ94/sHvJwnvDKuO+3/3Y3dlv2bojzr2IyIpMNOmqOFGYM +LVN0V2Ue1bLdI4E7pWYjJ2cJj+F3qkPNZVEI7VFY/uY5+ctHhKQV8Xa7pO6kO8Rf +77IzlhEYt8llvhjho6Tc+hj507wTmzl6NLrTQfv6MooqtyuGC2mDOL7Nii4LcK2N +JpLuHvUBKwrZ1pebbuCoGRw6IYsMHkCtA+fdZn71uSANA+iW+YJF1DngoABd15jm +fZ5nc8OaKveri6E6FO80vFIOiZiaBECEHX5FaZNXzuvO+FB8TxxuBEOb+dY7Ixjp +6o7RTUaN8Tvkasq6+yO3m/qZASlaWFot4/nUbQ4mrcFuNLwy+AwF+mWj2zs3gyLp +1txyM/1d8iC9djwj2ij3+RvrWWTV3F9yfiD8zYm1kGdNYno/Tq0dwzn+evQoFt9B +9kiABdcPUXmsEKvU7ANm5mqwujGSQkBqvjrTcuFqN1W8rB2Vt2lh8kORdOag0wok +RqEIr9baRRmW1FMdW4R58MD3R++Lj8UGrp1MYp3/RgT408m2ECVAdf4WqslKYIYv +uu8wd+RU4riEmViAqhOLUTpPSPaLtrM= +-----END CERTIFICATE----- + +# Issuer: CN=Amazon Root CA 1 O=Amazon +# Subject: CN=Amazon Root CA 1 O=Amazon +# Label: "Amazon Root CA 1" +# Serial: 143266978916655856878034712317230054538369994 +# MD5 Fingerprint: 43:c6:bf:ae:ec:fe:ad:2f:18:c6:88:68:30:fc:c8:e6 +# SHA1 Fingerprint: 8d:a7:f9:65:ec:5e:fc:37:91:0f:1c:6e:59:fd:c1:cc:6a:6e:de:16 +# SHA256 Fingerprint: 8e:cd:e6:88:4f:3d:87:b1:12:5b:a3:1a:c3:fc:b1:3d:70:16:de:7f:57:cc:90:4f:e1:cb:97:c6:ae:98:19:6e +-----BEGIN CERTIFICATE----- +MIIDQTCCAimgAwIBAgITBmyfz5m/jAo54vB4ikPmljZbyjANBgkqhkiG9w0BAQsF +ADA5MQswCQYDVQQGEwJVUzEPMA0GA1UEChMGQW1hem9uMRkwFwYDVQQDExBBbWF6 +b24gUm9vdCBDQSAxMB4XDTE1MDUyNjAwMDAwMFoXDTM4MDExNzAwMDAwMFowOTEL +MAkGA1UEBhMCVVMxDzANBgNVBAoTBkFtYXpvbjEZMBcGA1UEAxMQQW1hem9uIFJv +b3QgQ0EgMTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBALJ4gHHKeNXj 
+ca9HgFB0fW7Y14h29Jlo91ghYPl0hAEvrAIthtOgQ3pOsqTQNroBvo3bSMgHFzZM
+9O6II8c+6zf1tRn4SWiw3te5djgdYZ6k/oI2peVKVuRF4fn9tBb6dNqcmzU5L/qw
+IFAGbHrQgLKm+a/sRxmPUDgH3KKHOVj4utWp+UhnMJbulHheb4mjUcAwhmahRWa6
+VOujw5H5SNz/0egwLX0tdHA114gk957EWW67c4cX8jJGKLhD+rcdqsq08p8kDi1L
+93FcXmn/6pUCyziKrlA4b9v7LWIbxcceVOF34GfID5yHI9Y/QCB/IIDEgEw+OyQm
+jgSubJrIqg0CAwEAAaNCMEAwDwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMC
+AYYwHQYDVR0OBBYEFIQYzIU07LwMlJQuCFmcx7IQTgoIMA0GCSqGSIb3DQEBCwUA
+A4IBAQCY8jdaQZChGsV2USggNiMOruYou6r4lK5IpDB/G/wkjUu0yKGX9rbxenDI
+U5PMCCjjmCXPI6T53iHTfIUJrU6adTrCC2qJeHZERxhlbI1Bjjt/msv0tadQ1wUs
+N+gDS63pYaACbvXy8MWy7Vu33PqUXHeeE6V/Uq2V8viTO96LXFvKWlJbYK8U90vv
+o/ufQJVtMVT8QtPHRh8jrdkPSHCa2XV4cdFyQzR1bldZwgJcJmApzyMZFo6IQ6XU
+5MsI+yMRQ+hDKXJioaldXgjUkK642M4UwtBV8ob2xJNDd2ZhwLnoQdeXeGADbkpy
+rqXRfboQnoZsG4q5WTP468SQvvG5
+-----END CERTIFICATE-----
+
+# Issuer: CN=Amazon Root CA 2 O=Amazon
+# Subject: CN=Amazon Root CA 2 O=Amazon
+# Label: "Amazon Root CA 2"
+# Serial: 143266982885963551818349160658925006970653239
+# MD5 Fingerprint: c8:e5:8d:ce:a8:42:e2:7a:c0:2a:5c:7c:9e:26:bf:66
+# SHA1 Fingerprint: 5a:8c:ef:45:d7:a6:98:59:76:7a:8c:8b:44:96:b5:78:cf:47:4b:1a
+# SHA256 Fingerprint: 1b:a5:b2:aa:8c:65:40:1a:82:96:01:18:f8:0b:ec:4f:62:30:4d:83:ce:c4:71:3a:19:c3:9c:01:1e:a4:6d:b4
+-----BEGIN CERTIFICATE-----
+MIIFQTCCAymgAwIBAgITBmyf0pY1hp8KD+WGePhbJruKNzANBgkqhkiG9w0BAQwF
+ADA5MQswCQYDVQQGEwJVUzEPMA0GA1UEChMGQW1hem9uMRkwFwYDVQQDExBBbWF6
+b24gUm9vdCBDQSAyMB4XDTE1MDUyNjAwMDAwMFoXDTQwMDUyNjAwMDAwMFowOTEL
+MAkGA1UEBhMCVVMxDzANBgNVBAoTBkFtYXpvbjEZMBcGA1UEAxMQQW1hem9uIFJv
+b3QgQ0EgMjCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBAK2Wny2cSkxK
+gXlRmeyKy2tgURO8TW0G/LAIjd0ZEGrHJgw12MBvIITplLGbhQPDW9tK6Mj4kHbZ
+W0/jTOgGNk3Mmqw9DJArktQGGWCsN0R5hYGCrVo34A3MnaZMUnbqQ523BNFQ9lXg
+1dKmSYXpN+nKfq5clU1Imj+uIFptiJXZNLhSGkOQsL9sBbm2eLfq0OQ6PBJTYv9K
+8nu+NQWpEjTj82R0Yiw9AElaKP4yRLuH3WUnAnE72kr3H9rN9yFVkE8P7K6C4Z9r
+2UXTu/Bfh+08LDmG2j/e7HJV63mjrdvdfLC6HM783k81ds8P+HgfajZRRidhW+me
+z/CiVX18JYpvL7TFz4QuK/0NURBs+18bvBt+xa47mAExkv8LV/SasrlX6avvDXbR
+8O70zoan4G7ptGmh32n2M8ZpLpcTnqWHsFcQgTfJU7O7f/aS0ZzQGPSSbtqDT6Zj
+mUyl+17vIWR6IF9sZIUVyzfpYgwLKhbcAS4y2j5L9Z469hdAlO+ekQiG+r5jqFoz
+7Mt0Q5X5bGlSNscpb/xVA1wf+5+9R+vnSUeVC06JIglJ4PVhHvG/LopyboBZ/1c6
++XUyo05f7O0oYtlNc/LMgRdg7c3r3NunysV+Ar3yVAhU/bQtCSwXVEqY0VThUWcI
+0u1ufm8/0i2BWSlmy5A5lREedCf+3euvAgMBAAGjQjBAMA8GA1UdEwEB/wQFMAMB
+Af8wDgYDVR0PAQH/BAQDAgGGMB0GA1UdDgQWBBSwDPBMMPQFWAJI/TPlUq9LhONm
+UjANBgkqhkiG9w0BAQwFAAOCAgEAqqiAjw54o+Ci1M3m9Zh6O+oAA7CXDpO8Wqj2
+LIxyh6mx/H9z/WNxeKWHWc8w4Q0QshNabYL1auaAn6AFC2jkR2vHat+2/XcycuUY
++gn0oJMsXdKMdYV2ZZAMA3m3MSNjrXiDCYZohMr/+c8mmpJ5581LxedhpxfL86kS
+k5Nrp+gvU5LEYFiwzAJRGFuFjWJZY7attN6a+yb3ACfAXVU3dJnJUH/jWS5E4ywl
+7uxMMne0nxrpS10gxdr9HIcWxkPo1LsmmkVwXqkLN1PiRnsn/eBG8om3zEK2yygm
+btmlyTrIQRNg91CMFa6ybRoVGld45pIq2WWQgj9sAq+uEjonljYE1x2igGOpm/Hl
+urR8FLBOybEfdF849lHqm/osohHUqS0nGkWxr7JOcQ3AWEbWaQbLU8uz/mtBzUF+
+fUwPfHJ5elnNXkoOrJupmHN5fLT0zLm4BwyydFy4x2+IoZCn9Kr5v2c69BoVYh63
+n749sSmvZ6ES8lgQGVMDMBu4Gon2nL2XA46jCfMdiyHxtN/kHNGfZQIG6lzWE7OE
+76KlXIx3KadowGuuQNKotOrN8I1LOJwZmhsoVLiJkO/KdYE+HvJkJMcYr07/R54H
+9jVlpNMKVv/1F2Rs76giJUmTtt8AF9pYfl3uxRuw0dFfIRDH+fO6AgonB8Xx1sfT
+4PsJYGw=
+-----END CERTIFICATE-----
+
+# Issuer: CN=Amazon Root CA 3 O=Amazon
+# Subject: CN=Amazon Root CA 3 O=Amazon
+# Label: "Amazon Root CA 3"
+# Serial: 143266986699090766294700635381230934788665930
+# MD5 Fingerprint: a0:d4:ef:0b:f7:b5:d8:49:95:2a:ec:f5:c4:fc:81:87
+# SHA1 Fingerprint: 0d:44:dd:8c:3c:8c:1a:1a:58:75:64:81:e9:0f:2e:2a:ff:b3:d2:6e
+# SHA256 Fingerprint: 18:ce:6c:fe:7b:f1:4e:60:b2:e3:47:b8:df:e8:68:cb:31:d0:2e:bb:3a:da:27:15:69:f5:03:43:b4:6d:b3:a4
+-----BEGIN CERTIFICATE-----
+MIIBtjCCAVugAwIBAgITBmyf1XSXNmY/Owua2eiedgPySjAKBggqhkjOPQQDAjA5
+MQswCQYDVQQGEwJVUzEPMA0GA1UEChMGQW1hem9uMRkwFwYDVQQDExBBbWF6b24g
+Um9vdCBDQSAzMB4XDTE1MDUyNjAwMDAwMFoXDTQwMDUyNjAwMDAwMFowOTELMAkG
+A1UEBhMCVVMxDzANBgNVBAoTBkFtYXpvbjEZMBcGA1UEAxMQQW1hem9uIFJvb3Qg
+Q0EgMzBZMBMGByqGSM49AgEGCCqGSM49AwEHA0IABCmXp8ZBf8ANm+gBG1bG8lKl
+ui2yEujSLtf6ycXYqm0fc4E7O5hrOXwzpcVOho6AF2hiRVd9RFgdszflZwjrZt6j
+QjBAMA8GA1UdEwEB/wQFMAMBAf8wDgYDVR0PAQH/BAQDAgGGMB0GA1UdDgQWBBSr
+ttvXBp43rDCGB5Fwx5zEGbF4wDAKBggqhkjOPQQDAgNJADBGAiEA4IWSoxe3jfkr
+BqWTrBqYaGFy+uGh0PsceGCmQ5nFuMQCIQCcAu/xlJyzlvnrxir4tiz+OpAUFteM
+YyRIHN8wfdVoOw==
+-----END CERTIFICATE-----
+
+# Issuer: CN=Amazon Root CA 4 O=Amazon
+# Subject: CN=Amazon Root CA 4 O=Amazon
+# Label: "Amazon Root CA 4"
+# Serial: 143266989758080763974105200630763877849284878
+# MD5 Fingerprint: 89:bc:27:d5:eb:17:8d:06:6a:69:d5:fd:89:47:b4:cd
+# SHA1 Fingerprint: f6:10:84:07:d6:f8:bb:67:98:0c:c2:e2:44:c2:eb:ae:1c:ef:63:be
+# SHA256 Fingerprint: e3:5d:28:41:9e:d0:20:25:cf:a6:90:38:cd:62:39:62:45:8d:a5:c6:95:fb:de:a3:c2:2b:0b:fb:25:89:70:92
+-----BEGIN CERTIFICATE-----
+MIIB8jCCAXigAwIBAgITBmyf18G7EEwpQ+Vxe3ssyBrBDjAKBggqhkjOPQQDAzA5
+MQswCQYDVQQGEwJVUzEPMA0GA1UEChMGQW1hem9uMRkwFwYDVQQDExBBbWF6b24g
+Um9vdCBDQSA0MB4XDTE1MDUyNjAwMDAwMFoXDTQwMDUyNjAwMDAwMFowOTELMAkG
+A1UEBhMCVVMxDzANBgNVBAoTBkFtYXpvbjEZMBcGA1UEAxMQQW1hem9uIFJvb3Qg
+Q0EgNDB2MBAGByqGSM49AgEGBSuBBAAiA2IABNKrijdPo1MN/sGKe0uoe0ZLY7Bi
+9i0b2whxIdIA6GO9mif78DluXeo9pcmBqqNbIJhFXRbb/egQbeOc4OO9X4Ri83Bk
+M6DLJC9wuoihKqB1+IGuYgbEgds5bimwHvouXKNCMEAwDwYDVR0TAQH/BAUwAwEB
+/zAOBgNVHQ8BAf8EBAMCAYYwHQYDVR0OBBYEFNPsxzplbszh2naaVvuc84ZtV+WB
+MAoGCCqGSM49BAMDA2gAMGUCMDqLIfG9fhGt0O9Yli/W651+kI0rz2ZVwyzjKKlw
+CkcO8DdZEv8tmZQoTipPNU0zWgIxAOp1AE47xDqUEpHJWEadIRNyp4iciuRMStuW
+1KyLa2tJElMzrdfkviT8tQp21KW8EA==
+-----END CERTIFICATE-----
+
+# Issuer: CN=LuxTrust Global Root 2 O=LuxTrust S.A.
+# Subject: CN=LuxTrust Global Root 2 O=LuxTrust S.A.
+# Label: "LuxTrust Global Root 2" +# Serial: 59914338225734147123941058376788110305822489521 +# MD5 Fingerprint: b2:e1:09:00:61:af:f7:f1:91:6f:c4:ad:8d:5e:3b:7c +# SHA1 Fingerprint: 1e:0e:56:19:0a:d1:8b:25:98:b2:04:44:ff:66:8a:04:17:99:5f:3f +# SHA256 Fingerprint: 54:45:5f:71:29:c2:0b:14:47:c4:18:f9:97:16:8f:24:c5:8f:c5:02:3b:f5:da:5b:e2:eb:6e:1d:d8:90:2e:d5 +-----BEGIN CERTIFICATE----- +MIIFwzCCA6ugAwIBAgIUCn6m30tEntpqJIWe5rgV0xZ/u7EwDQYJKoZIhvcNAQEL +BQAwRjELMAkGA1UEBhMCTFUxFjAUBgNVBAoMDUx1eFRydXN0IFMuQS4xHzAdBgNV +BAMMFkx1eFRydXN0IEdsb2JhbCBSb290IDIwHhcNMTUwMzA1MTMyMTU3WhcNMzUw +MzA1MTMyMTU3WjBGMQswCQYDVQQGEwJMVTEWMBQGA1UECgwNTHV4VHJ1c3QgUy5B +LjEfMB0GA1UEAwwWTHV4VHJ1c3QgR2xvYmFsIFJvb3QgMjCCAiIwDQYJKoZIhvcN +AQEBBQADggIPADCCAgoCggIBANeFl78RmOnwYoNMPIf5U2o3C/IPPIfOb9wmKb3F +ibrJgz337spbxm1Jc7TJRqMbNBM/wYlFV/TZsfs2ZUv7COJIcRHIbjuend+JZTem +hfY7RBi2xjcwYkSSl2l9QjAk5A0MiWtj3sXh306pFGxT4GHO9hcvHTy95iJMHZP1 +EMShduxq3sVs35a0VkBCwGKSMKEtFZSg0iAGCW5qbeXrt77U8PEVfIvmTroTzEsn +Xpk8F12PgX8zPU/TPxvsXD/wPEx1bvKm1Z3aLQdjAsZy6ZS8TEmVT4hSyNvoaYL4 +zDRbIvCGp4m9SAptZoFtyMhk+wHh9OHe2Z7d21vUKpkmFRseTJIpgp7VkoGSQXAZ +96Tlk0u8d2cx3Rz9MXANF5kM+Qw5GSoXtTBxVdUPrljhPS80m8+f9niFwpN6cj5m +j5wWEWCPnolvZ77gR1o7DJpni89Gxq44o/KnvObWhWszJHAiS8sIm7vI+AIpHb4g +DEa/a4ebsypmQjVGbKq6rfmYe+lQVRQxv7HaLe2ArWgk+2mr2HETMOZns4dA/Yl+ +8kPREd8vZS9kzl8UubG/Mb2HeFpZZYiq/FkySIbWTLkpS5XTdvN3JW1CHDiDTf2j +X5t/Lax5Gw5CMZdjpPuKadUiDTSQMC6otOBttpSsvItO13D8xTiOZCXhTTmQzsmH +hFhxAgMBAAGjgagwgaUwDwYDVR0TAQH/BAUwAwEB/zBCBgNVHSAEOzA5MDcGByuB +KwEBAQowLDAqBggrBgEFBQcCARYeaHR0cHM6Ly9yZXBvc2l0b3J5Lmx1eHRydXN0 +Lmx1MA4GA1UdDwEB/wQEAwIBBjAfBgNVHSMEGDAWgBT/GCh2+UgFLKGu8SsbK7JT ++Et8szAdBgNVHQ4EFgQU/xgodvlIBSyhrvErGyuyU/hLfLMwDQYJKoZIhvcNAQEL +BQADggIBAGoZFO1uecEsh9QNcH7X9njJCwROxLHOk3D+sFTAMs2ZMGQXvw/l4jP9 +BzZAcg4atmpZ1gDlaCDdLnINH2pkMSCEfUmmWjfrRcmF9dTHF5kH5ptV5AzoqbTO +jFu1EVzPig4N1qx3gf4ynCSecs5U89BvolbW7MM3LGVYvlcAGvI1+ut7MV3CwRI9 +loGIlonBWVx65n9wNOeD4rHh4bhY79SV5GCc8JaXcozrhAIuZY+kt9J/Z93I055c +qqmkoCUUBpvsT34tC38ddfEz2O3OuHVtPlu5mB0xDVbYQw8wkbIEa91WvpWAVWe+ +2M2D2RjuLg+GLZKecBPs3lHJQ3gCpU3I+V/EkVhGFndadKpAvAefMLmx9xIX3eP/ +JEAdemrRTxgKqpAd60Ae36EeRJIQmvKN4dFLRp7oRUKX6kWZ8+xm1QL68qZKJKre +zrnK+T+Tb/mjuuqlPpmt/f97mfVl7vBZKGfXkJWkE4SphMHozs51k2MavDzq1WQf +LSoSOcbDWjLtR5EWDrw4wVDej8oqkDQc7kGUnF4ZLvhFSZl0kbAEb+MEWrGrKqv+ +x9CWttrhSmQGbmBNvUJO/3jaJMobtNeWOWyu8Q6qp31IiyBMz2TWuJdGsE7RKlY6 +oJO9r4Ak4Ap+58rVyuiFVdw2KuGUaJPHZnJED4AhMmwlxyOAgwrr +-----END CERTIFICATE----- + +# Issuer: CN=TUBITAK Kamu SM SSL Kok Sertifikasi - Surum 1 O=Turkiye Bilimsel ve Teknolojik Arastirma Kurumu - TUBITAK OU=Kamu Sertifikasyon Merkezi - Kamu SM +# Subject: CN=TUBITAK Kamu SM SSL Kok Sertifikasi - Surum 1 O=Turkiye Bilimsel ve Teknolojik Arastirma Kurumu - TUBITAK OU=Kamu Sertifikasyon Merkezi - Kamu SM +# Label: "TUBITAK Kamu SM SSL Kok Sertifikasi - Surum 1" +# Serial: 1 +# MD5 Fingerprint: dc:00:81:dc:69:2f:3e:2f:b0:3b:f6:3d:5a:91:8e:49 +# SHA1 Fingerprint: 31:43:64:9b:ec:ce:27:ec:ed:3a:3f:0b:8f:0d:e4:e8:91:dd:ee:ca +# SHA256 Fingerprint: 46:ed:c3:68:90:46:d5:3a:45:3f:b3:10:4a:b8:0d:ca:ec:65:8b:26:60:ea:16:29:dd:7e:86:79:90:64:87:16 +-----BEGIN CERTIFICATE----- +MIIEYzCCA0ugAwIBAgIBATANBgkqhkiG9w0BAQsFADCB0jELMAkGA1UEBhMCVFIx +GDAWBgNVBAcTD0dlYnplIC0gS29jYWVsaTFCMEAGA1UEChM5VHVya2l5ZSBCaWxp +bXNlbCB2ZSBUZWtub2xvamlrIEFyYXN0aXJtYSBLdXJ1bXUgLSBUVUJJVEFLMS0w +KwYDVQQLEyRLYW11IFNlcnRpZmlrYXN5b24gTWVya2V6aSAtIEthbXUgU00xNjA0 +BgNVBAMTLVRVQklUQUsgS2FtdSBTTSBTU0wgS29rIFNlcnRpZmlrYXNpIC0gU3Vy +dW0gMTAeFw0xMzExMjUwODI1NTVaFw00MzEwMjUwODI1NTVaMIHSMQswCQYDVQQG 
+EwJUUjEYMBYGA1UEBxMPR2ViemUgLSBLb2NhZWxpMUIwQAYDVQQKEzlUdXJraXll +IEJpbGltc2VsIHZlIFRla25vbG9qaWsgQXJhc3Rpcm1hIEt1cnVtdSAtIFRVQklU +QUsxLTArBgNVBAsTJEthbXUgU2VydGlmaWthc3lvbiBNZXJrZXppIC0gS2FtdSBT +TTE2MDQGA1UEAxMtVFVCSVRBSyBLYW11IFNNIFNTTCBLb2sgU2VydGlmaWthc2kg +LSBTdXJ1bSAxMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAr3UwM6q7 +a9OZLBI3hNmNe5eA027n/5tQlT6QlVZC1xl8JoSNkvoBHToP4mQ4t4y86Ij5iySr +LqP1N+RAjhgleYN1Hzv/bKjFxlb4tO2KRKOrbEz8HdDc72i9z+SqzvBV96I01INr +N3wcwv61A+xXzry0tcXtAA9TNypN9E8Mg/uGz8v+jE69h/mniyFXnHrfA2eJLJ2X +YacQuFWQfw4tJzh03+f92k4S400VIgLI4OD8D62K18lUUMw7D8oWgITQUVbDjlZ/ +iSIzL+aFCr2lqBs23tPcLG07xxO9WSMs5uWk99gL7eqQQESolbuT1dCANLZGeA4f +AJNG4e7p+exPFwIDAQABo0IwQDAdBgNVHQ4EFgQUZT/HiobGPN08VFw1+DrtUgxH +V8gwDgYDVR0PAQH/BAQDAgEGMA8GA1UdEwEB/wQFMAMBAf8wDQYJKoZIhvcNAQEL +BQADggEBACo/4fEyjq7hmFxLXs9rHmoJ0iKpEsdeV31zVmSAhHqT5Am5EM2fKifh +AHe+SMg1qIGf5LgsyX8OsNJLN13qudULXjS99HMpw+0mFZx+CFOKWI3QSyjfwbPf +IPP54+M638yclNhOT8NrF7f3cuitZjO1JVOr4PhMqZ398g26rrnZqsZr+ZO7rqu4 +lzwDGrpDxpa5RXI4s6ehlj2Re37AIVNMh+3yC1SVUZPVIqUNivGTDj5UDrDYyU7c +8jEyVupk+eq1nRZmQnLzf9OxMUP8pI4X8W0jq5Rm+K37DwhuJi1/FwcJsoz7UMCf +lo3Ptv0AnVoUmr8CRPXBwp8iXqIPoeM= +-----END CERTIFICATE----- + +# Issuer: CN=GDCA TrustAUTH R5 ROOT O=GUANG DONG CERTIFICATE AUTHORITY CO.,LTD. +# Subject: CN=GDCA TrustAUTH R5 ROOT O=GUANG DONG CERTIFICATE AUTHORITY CO.,LTD. +# Label: "GDCA TrustAUTH R5 ROOT" +# Serial: 9009899650740120186 +# MD5 Fingerprint: 63:cc:d9:3d:34:35:5c:6f:53:a3:e2:08:70:48:1f:b4 +# SHA1 Fingerprint: 0f:36:38:5b:81:1a:25:c3:9b:31:4e:83:ca:e9:34:66:70:cc:74:b4 +# SHA256 Fingerprint: bf:ff:8f:d0:44:33:48:7d:6a:8a:a6:0c:1a:29:76:7a:9f:c2:bb:b0:5e:42:0f:71:3a:13:b9:92:89:1d:38:93 +-----BEGIN CERTIFICATE----- +MIIFiDCCA3CgAwIBAgIIfQmX/vBH6nowDQYJKoZIhvcNAQELBQAwYjELMAkGA1UE +BhMCQ04xMjAwBgNVBAoMKUdVQU5HIERPTkcgQ0VSVElGSUNBVEUgQVVUSE9SSVRZ +IENPLixMVEQuMR8wHQYDVQQDDBZHRENBIFRydXN0QVVUSCBSNSBST09UMB4XDTE0 +MTEyNjA1MTMxNVoXDTQwMTIzMTE1NTk1OVowYjELMAkGA1UEBhMCQ04xMjAwBgNV +BAoMKUdVQU5HIERPTkcgQ0VSVElGSUNBVEUgQVVUSE9SSVRZIENPLixMVEQuMR8w +HQYDVQQDDBZHRENBIFRydXN0QVVUSCBSNSBST09UMIICIjANBgkqhkiG9w0BAQEF +AAOCAg8AMIICCgKCAgEA2aMW8Mh0dHeb7zMNOwZ+Vfy1YI92hhJCfVZmPoiC7XJj +Dp6L3TQsAlFRwxn9WVSEyfFrs0yw6ehGXTjGoqcuEVe6ghWinI9tsJlKCvLriXBj +TnnEt1u9ol2x8kECK62pOqPseQrsXzrj/e+APK00mxqriCZ7VqKChh/rNYmDf1+u +KU49tm7srsHwJ5uu4/Ts765/94Y9cnrrpftZTqfrlYwiOXnhLQiPzLyRuEH3FMEj +qcOtmkVEs7LXLM3GKeJQEK5cy4KOFxg2fZfmiJqwTTQJ9Cy5WmYqsBebnh52nUpm +MUHfP/vFBu8btn4aRjb3ZGM74zkYI+dndRTVdVeSN72+ahsmUPI2JgaQxXABZG12 +ZuGR224HwGGALrIuL4xwp9E7PLOR5G62xDtw8mySlwnNR30YwPO7ng/Wi64HtloP +zgsMR6flPri9fcebNaBhlzpBdRfMK5Z3KpIhHtmVdiBnaM8Nvd/WHwlqmuLMc3Gk +L30SgLdTMEZeS1SZD2fJpcjyIMGC7J0R38IC+xo70e0gmu9lZJIQDSri3nDxGGeC +jGHeuLzRL5z7D9Ar7Rt2ueQ5Vfj4oR24qoAATILnsn8JuLwwoC8N9VKejveSswoA +HQBUlwbgsQfZxw9cZX08bVlX5O2ljelAU58VS6Bx9hoh49pwBiFYFIeFd3mqgnkC +AwEAAaNCMEAwHQYDVR0OBBYEFOLJQJ9NzuiaoXzPDj9lxSmIahlRMA8GA1UdEwEB +/wQFMAMBAf8wDgYDVR0PAQH/BAQDAgGGMA0GCSqGSIb3DQEBCwUAA4ICAQDRSVfg +p8xoWLoBDysZzY2wYUWsEe1jUGn4H3++Fo/9nesLqjJHdtJnJO29fDMylyrHBYZm +DRd9FBUb1Ov9H5r2XpdptxolpAqzkT9fNqyL7FeoPueBihhXOYV0GkLH6VsTX4/5 +COmSdI31R9KrO9b7eGZONn356ZLpBN79SWP8bfsUcZNnL0dKt7n/HipzcEYwv1ry +L3ml4Y0M2fmyYzeMN2WFcGpcWwlyua1jPLHd+PwyvzeG5LuOmCd+uh8W4XAR8gPf +JWIyJyYYMoSf/wA6E7qaTfRPuBRwIrHKK5DOKcFw9C+df/KQHtZa37dG/OaG+svg +IHZ6uqbL9XzeYqWxi+7egmaKTjowHz+Ay60nugxe19CxVsp3cbK1daFQqUBDF8Io +2c9Si1vIY9RCPqAzekYu9wogRlR+ak8x8YF+QnQ4ZXMn7sZ8uI7XpTrXmKGcjBBV +09tL7ECQ8s1uV9JiDnxXk7Gnbc2dg7sq5+W2O3FYrf3RRbxake5TFW/TRQl1brqQ 
+XR4EzzffHqhmsYzmIGrv/EhOdJhCrylvLmrH+33RZjEizIYAfmaDDEL0vTSSwxrq
+T8p+ck0LcIymSLumoRT2+1hEmRSuqguTaaApJUqlyyvdimYHFngVV3Eb7PVHhPOe
+MTd61X8kreS8/f3MboPoDKi3QWwH3b08hpcv0g==
+-----END CERTIFICATE-----
+
+# Issuer: CN=TrustCor RootCert CA-1 O=TrustCor Systems S. de R.L. OU=TrustCor Certificate Authority
+# Subject: CN=TrustCor RootCert CA-1 O=TrustCor Systems S. de R.L. OU=TrustCor Certificate Authority
+# Label: "TrustCor RootCert CA-1"
+# Serial: 15752444095811006489
+# MD5 Fingerprint: 6e:85:f1:dc:1a:00:d3:22:d5:b2:b2:ac:6b:37:05:45
+# SHA1 Fingerprint: ff:bd:cd:e7:82:c8:43:5e:3c:6f:26:86:5c:ca:a8:3a:45:5b:c3:0a
+# SHA256 Fingerprint: d4:0e:9c:86:cd:8f:e4:68:c1:77:69:59:f4:9e:a7:74:fa:54:86:84:b6:c4:06:f3:90:92:61:f4:dc:e2:57:5c
+-----BEGIN CERTIFICATE-----
+MIIEMDCCAxigAwIBAgIJANqb7HHzA7AZMA0GCSqGSIb3DQEBCwUAMIGkMQswCQYD
+VQQGEwJQQTEPMA0GA1UECAwGUGFuYW1hMRQwEgYDVQQHDAtQYW5hbWEgQ2l0eTEk
+MCIGA1UECgwbVHJ1c3RDb3IgU3lzdGVtcyBTLiBkZSBSLkwuMScwJQYDVQQLDB5U
+cnVzdENvciBDZXJ0aWZpY2F0ZSBBdXRob3JpdHkxHzAdBgNVBAMMFlRydXN0Q29y
+IFJvb3RDZXJ0IENBLTEwHhcNMTYwMjA0MTIzMjE2WhcNMjkxMjMxMTcyMzE2WjCB
+pDELMAkGA1UEBhMCUEExDzANBgNVBAgMBlBhbmFtYTEUMBIGA1UEBwwLUGFuYW1h
+IENpdHkxJDAiBgNVBAoMG1RydXN0Q29yIFN5c3RlbXMgUy4gZGUgUi5MLjEnMCUG
+A1UECwweVHJ1c3RDb3IgQ2VydGlmaWNhdGUgQXV0aG9yaXR5MR8wHQYDVQQDDBZU
+cnVzdENvciBSb290Q2VydCBDQS0xMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIB
+CgKCAQEAv463leLCJhJrMxnHQFgKq1mqjQCj/IDHUHuO1CAmujIS2CNUSSUQIpid
+RtLByZ5OGy4sDjjzGiVoHKZaBeYei0i/mJZ0PmnK6bV4pQa81QBeCQryJ3pS/C3V
+seq0iWEk8xoT26nPUu0MJLq5nux+AHT6k61sKZKuUbS701e/s/OojZz0JEsq1pme
+9J7+wH5COucLlVPat2gOkEz7cD+PSiyU8ybdY2mplNgQTsVHCJCZGxdNuWxu72CV
+EY4hgLW9oHPY0LJ3xEXqWib7ZnZ2+AYfYW0PVcWDtxBWcgYHpfOxGgMFZA6dWorW
+hnAbJN7+KIor0Gqw/Hqi3LJ5DotlDwIDAQABo2MwYTAdBgNVHQ4EFgQU7mtJPHo/
+DeOxCbeKyKsZn3MzUOcwHwYDVR0jBBgwFoAU7mtJPHo/DeOxCbeKyKsZn3MzUOcw
+DwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMCAYYwDQYJKoZIhvcNAQELBQAD
+ggEBACUY1JGPE+6PHh0RU9otRCkZoB5rMZ5NDp6tPVxBb5UrJKF5mDo4Nvu7Zp5I
+/5CQ7z3UuJu0h3U/IJvOcs+hVcFNZKIZBqEHMwwLKeXx6quj7LUKdJDHfXLy11yf
+ke+Ri7fc7Waiz45mO7yfOgLgJ90WmMCV1Aqk5IGadZQ1nJBfiDcGrVmVCrDRZ9MZ
+yonnMlo2HD6CqFqTvsbQZJG2z9m2GM/bftJlo6bEjhcxwft+dtvTheNYsnd6djts
+L1Ac59v2Z3kf9YKVmgenFK+P3CghZwnS1k1aHBkcjndcw5QkPTJrS37UeJSDvjdN
+zl/HHk484IkzlQsPpTLWPFp5LBk=
+-----END CERTIFICATE-----
+
+# Issuer: CN=TrustCor RootCert CA-2 O=TrustCor Systems S. de R.L. OU=TrustCor Certificate Authority
+# Subject: CN=TrustCor RootCert CA-2 O=TrustCor Systems S. de R.L. OU=TrustCor Certificate Authority
+# Label: "TrustCor RootCert CA-2"
+# Serial: 2711694510199101698
+# MD5 Fingerprint: a2:e1:f8:18:0b:ba:45:d5:c7:41:2a:bb:37:52:45:64
+# SHA1 Fingerprint: b8:be:6d:cb:56:f1:55:b9:63:d4:12:ca:4e:06:34:c7:94:b2:1c:c0
+# SHA256 Fingerprint: 07:53:e9:40:37:8c:1b:d5:e3:83:6e:39:5d:ae:a5:cb:83:9e:50:46:f1:bd:0e:ae:19:51:cf:10:fe:c7:c9:65
+-----BEGIN CERTIFICATE-----
+MIIGLzCCBBegAwIBAgIIJaHfyjPLWQIwDQYJKoZIhvcNAQELBQAwgaQxCzAJBgNV
+BAYTAlBBMQ8wDQYDVQQIDAZQYW5hbWExFDASBgNVBAcMC1BhbmFtYSBDaXR5MSQw
+IgYDVQQKDBtUcnVzdENvciBTeXN0ZW1zIFMuIGRlIFIuTC4xJzAlBgNVBAsMHlRy
+dXN0Q29yIENlcnRpZmljYXRlIEF1dGhvcml0eTEfMB0GA1UEAwwWVHJ1c3RDb3Ig
+Um9vdENlcnQgQ0EtMjAeFw0xNjAyMDQxMjMyMjNaFw0zNDEyMzExNzI2MzlaMIGk
+MQswCQYDVQQGEwJQQTEPMA0GA1UECAwGUGFuYW1hMRQwEgYDVQQHDAtQYW5hbWEg
+Q2l0eTEkMCIGA1UECgwbVHJ1c3RDb3IgU3lzdGVtcyBTLiBkZSBSLkwuMScwJQYD
+VQQLDB5UcnVzdENvciBDZXJ0aWZpY2F0ZSBBdXRob3JpdHkxHzAdBgNVBAMMFlRy
+dXN0Q29yIFJvb3RDZXJ0IENBLTIwggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAwggIK
+AoICAQCnIG7CKqJiJJWQdsg4foDSq8GbZQWU9MEKENUCrO2fk8eHyLAnK0IMPQo+
+QVqedd2NyuCb7GgypGmSaIwLgQ5WoD4a3SwlFIIvl9NkRvRUqdw6VC0xK5mC8tkq
+1+9xALgxpL56JAfDQiDyitSSBBtlVkxs1Pu2YVpHI7TYabS3OtB0PAx1oYxOdqHp
+2yqlO/rOsP9+aij9JxzIsekp8VduZLTQwRVtDr4uDkbIXvRR/u8OYzo7cbrPb1nK
+DOObXUm4TOJXsZiKQlecdu/vvdFoqNL0Cbt3Nb4lggjEFixEIFapRBF37120Hape
+az6LMvYHL1cEksr1/p3C6eizjkxLAjHZ5DxIgif3GIJ2SDpxsROhOdUuxTTCHWKF
+3wP+TfSvPd9cW436cOGlfifHhi5qjxLGhF5DUVCcGZt45vz27Ud+ez1m7xMTiF88
+oWP7+ayHNZ/zgp6kPwqcMWmLmaSISo5uZk3vFsQPeSghYA2FFn3XVDjxklb9tTNM
+g9zXEJ9L/cb4Qr26fHMC4P99zVvh1Kxhe1fVSntb1IVYJ12/+CtgrKAmrhQhJ8Z3
+mjOAPF5GP/fDsaOGM8boXg25NSyqRsGFAnWAoOsk+xWq5Gd/bnc/9ASKL3x74xdh
+8N0JqSDIvgmk0H5Ew7IwSjiqqewYmgeCK9u4nBit2uBGF6zPXQIDAQABo2MwYTAd
+BgNVHQ4EFgQU2f4hQG6UnrybPZx9mCAZ5YwwYrIwHwYDVR0jBBgwFoAU2f4hQG6U
+nrybPZx9mCAZ5YwwYrIwDwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMCAYYw
+DQYJKoZIhvcNAQELBQADggIBAJ5Fngw7tu/hOsh80QA9z+LqBrWyOrsGS2h60COX
+dKcs8AjYeVrXWoSK2BKaG9l9XE1wxaX5q+WjiYndAfrs3fnpkpfbsEZC89NiqpX+
+MWcUaViQCqoL7jcjx1BRtPV+nuN79+TMQjItSQzL/0kMmx40/W5ulop5A7Zv2wnL
+/V9lFDfhOPXzYRZY5LVtDQsEGz9QLX+zx3oaFoBg+Iof6Rsqxvm6ARppv9JYx1RX
+CI/hOWB3S6xZhBqI8d3LT3jX5+EzLfzuQfogsL7L9ziUwOHQhQ+77Sxzq+3+knYa
+ZH9bDTMJBzN7Bj8RpFxwPIXAz+OQqIN3+tvmxYxoZxBnpVIt8MSZj3+/0WvitUfW
+2dCFmU2Umw9Lje4AWkcdEQOsQRivh7dvDDqPys/cA8GiCcjl/YBeyGBCARsaU1q7
+N6a3vLqE6R5sGtRk2tRD/pOLS/IseRYQ1JMLiI+h2IYURpFHmygk71dSTlxCnKr3
+Sewn6EAes6aJInKc9Q0ztFijMDvd1GpUk74aTfOTlPf8hAs/hCBcNANExdqtvArB
+As8e5ZTZ845b2EzwnexhF7sUMlQMAimTHpKG9n/v55IFDlndmQguLvqcAFLTxWYp
+5KeXRKQOKIETNcX2b2TmQcTVL8w0RSXPQQCWPUouwpaYT05KnJe32x+SMsj/D1Fu
+1uwJ
+-----END CERTIFICATE-----
+
+# Issuer: CN=TrustCor ECA-1 O=TrustCor Systems S. de R.L. OU=TrustCor Certificate Authority
+# Subject: CN=TrustCor ECA-1 O=TrustCor Systems S. de R.L. OU=TrustCor Certificate Authority
+# Label: "TrustCor ECA-1"
+# Serial: 9548242946988625984
+# MD5 Fingerprint: 27:92:23:1d:0a:f5:40:7c:e9:e6:6b:9d:d8:f5:e7:6c
+# SHA1 Fingerprint: 58:d1:df:95:95:67:6b:63:c0:f0:5b:1c:17:4d:8b:84:0b:c8:78:bd
+# SHA256 Fingerprint: 5a:88:5d:b1:9c:01:d9:12:c5:75:93:88:93:8c:af:bb:df:03:1a:b2:d4:8e:91:ee:15:58:9b:42:97:1d:03:9c
+-----BEGIN CERTIFICATE-----
+MIIEIDCCAwigAwIBAgIJAISCLF8cYtBAMA0GCSqGSIb3DQEBCwUAMIGcMQswCQYD
+VQQGEwJQQTEPMA0GA1UECAwGUGFuYW1hMRQwEgYDVQQHDAtQYW5hbWEgQ2l0eTEk
+MCIGA1UECgwbVHJ1c3RDb3IgU3lzdGVtcyBTLiBkZSBSLkwuMScwJQYDVQQLDB5U
+cnVzdENvciBDZXJ0aWZpY2F0ZSBBdXRob3JpdHkxFzAVBgNVBAMMDlRydXN0Q29y
+IEVDQS0xMB4XDTE2MDIwNDEyMzIzM1oXDTI5MTIzMTE3MjgwN1owgZwxCzAJBgNV
+BAYTAlBBMQ8wDQYDVQQIDAZQYW5hbWExFDASBgNVBAcMC1BhbmFtYSBDaXR5MSQw
+IgYDVQQKDBtUcnVzdENvciBTeXN0ZW1zIFMuIGRlIFIuTC4xJzAlBgNVBAsMHlRy
+dXN0Q29yIENlcnRpZmljYXRlIEF1dGhvcml0eTEXMBUGA1UEAwwOVHJ1c3RDb3Ig
+RUNBLTEwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDPj+ARtZ+odnbb
+3w9U73NjKYKtR8aja+3+XzP4Q1HpGjORMRegdMTUpwHmspI+ap3tDvl0mEDTPwOA
+BoJA6LHip1GnHYMma6ve+heRK9jGrB6xnhkB1Zem6g23xFUfJ3zSCNV2HykVh0A5
+3ThFEXXQmqc04L/NyFIduUd+Dbi7xgz2c1cWWn5DkR9VOsZtRASqnKmcp0yJF4Ou
+owReUoCLHhIlERnXDH19MURB6tuvsBzvgdAsxZohmz3tQjtQJvLsznFhBmIhVE5/
+wZ0+fyCMgMsq2JdiyIMzkX2woloPV+g7zPIlstR8L+xNxqE6FXrntl019fZISjZF
+ZtS6mFjBAgMBAAGjYzBhMB0GA1UdDgQWBBREnkj1zG1I1KBLf/5ZJC+Dl5mahjAf
+BgNVHSMEGDAWgBREnkj1zG1I1KBLf/5ZJC+Dl5mahjAPBgNVHRMBAf8EBTADAQH/
+MA4GA1UdDwEB/wQEAwIBhjANBgkqhkiG9w0BAQsFAAOCAQEABT41XBVwm8nHc2Fv
+civUwo/yQ10CzsSUuZQRg2dd4mdsdXa/uwyqNsatR5Nj3B5+1t4u/ukZMjgDfxT2
+AHMsWbEhBuH7rBiVDKP/mZb3Kyeb1STMHd3BOuCYRLDE5D53sXOpZCz2HAF8P11F
+hcCF5yWPldwX8zyfGm6wyuMdKulMY/okYWLW2n62HGz1Ah3UKt1VkOsqEUc8Ll50
+soIipX1TH0XsJ5F95yIW6MBoNtjG8U+ARDL54dHRHareqKucBK+tIA5kmE2la8BI
+WJZpTdwHjFGTot+fDz2LYLSCjaoITmJF4PkL0uDgPFveXHEnJcLmA4GLEFPjx1Wi
+tJ/X5g==
+-----END CERTIFICATE-----
+
+# Issuer: CN=SSL.com Root Certification Authority RSA O=SSL Corporation
+# Subject: CN=SSL.com Root Certification Authority RSA O=SSL Corporation
+# Label: "SSL.com Root Certification Authority RSA"
+# Serial: 8875640296558310041
+# MD5 Fingerprint: 86:69:12:c0:70:f1:ec:ac:ac:c2:d5:bc:a5:5b:a1:29
+# SHA1 Fingerprint: b7:ab:33:08:d1:ea:44:77:ba:14:80:12:5a:6f:bd:a9:36:49:0c:bb
+# SHA256 Fingerprint: 85:66:6a:56:2e:e0:be:5c:e9:25:c1:d8:89:0a:6f:76:a8:7e:c1:6d:4d:7d:5f:29:ea:74:19:cf:20:12:3b:69
+-----BEGIN CERTIFICATE-----
+MIIF3TCCA8WgAwIBAgIIeyyb0xaAMpkwDQYJKoZIhvcNAQELBQAwfDELMAkGA1UE
+BhMCVVMxDjAMBgNVBAgMBVRleGFzMRAwDgYDVQQHDAdIb3VzdG9uMRgwFgYDVQQK
+DA9TU0wgQ29ycG9yYXRpb24xMTAvBgNVBAMMKFNTTC5jb20gUm9vdCBDZXJ0aWZp
+Y2F0aW9uIEF1dGhvcml0eSBSU0EwHhcNMTYwMjEyMTczOTM5WhcNNDEwMjEyMTcz
+OTM5WjB8MQswCQYDVQQGEwJVUzEOMAwGA1UECAwFVGV4YXMxEDAOBgNVBAcMB0hv
+dXN0b24xGDAWBgNVBAoMD1NTTCBDb3Jwb3JhdGlvbjExMC8GA1UEAwwoU1NMLmNv
+bSBSb290IENlcnRpZmljYXRpb24gQXV0aG9yaXR5IFJTQTCCAiIwDQYJKoZIhvcN
+AQEBBQADggIPADCCAgoCggIBAPkP3aMrfcvQKv7sZ4Wm5y4bunfh4/WvpOz6Sl2R
+xFdHaxh3a3by/ZPkPQ/CFp4LZsNWlJ4Xg4XOVu/yFv0AYvUiCVToZRdOQbngT0aX
+qhvIuG5iXmmxX9sqAn78bMrzQdjt0Oj8P2FI7bADFB0QDksZ4LtO7IZl/zbzXmcC
+C52GVWH9ejjt/uIZALdvoVBidXQ8oPrIJZK0bnoix/geoeOy3ZExqysdBP+lSgQ3
+6YWkMyv94tZVNHwZpEpox7Ko07fKoZOI68GXvIz5HdkihCR0xwQ9aqkpk8zruFvh
+/l8lqjRYyMEjVJ0bmBHDOJx+PYZspQ9AhnwC9FwCTyjLrnGfDzrIM/4RJTXq/LrF
+YD3ZfBjVsqnTdXgDciLKOsMf7yzlLqn6niy2UUb9rwPW6mBo6oUWNmuF6R7As93E
+JNyAKoFBbZQ+yODJgUEAnl6/f8UImKIYLEJAs/lvOCdLToD0PYFH4Ih86hzOtXVc
+US4cK38acijnALXRdMbX5J+tB5O2UzU1/Dfkw/ZdFr4hc96SCvigY2q8lpJqPvi8
+ZVWb3vUNiSYE/CUapiVpy8JtynziWV+XrOvvLsi81xtZPCvM8hnIk2snYxnP/Okm
++Mpxm3+T/jRnhE6Z6/yzeAkzcLpmpnbtG3PrGqUNxCITIJRWCk4sbE6x/c+cCbqi +M+2HAgMBAAGjYzBhMB0GA1UdDgQWBBTdBAkHovV6fVJTEpKV7jiAJQ2mWTAPBgNV +HRMBAf8EBTADAQH/MB8GA1UdIwQYMBaAFN0ECQei9Xp9UlMSkpXuOIAlDaZZMA4G +A1UdDwEB/wQEAwIBhjANBgkqhkiG9w0BAQsFAAOCAgEAIBgRlCn7Jp0cHh5wYfGV +cpNxJK1ok1iOMq8bs3AD/CUrdIWQPXhq9LmLpZc7tRiRux6n+UBbkflVma8eEdBc +Hadm47GUBwwyOabqG7B52B2ccETjit3E+ZUfijhDPwGFpUenPUayvOUiaPd7nNgs +PgohyC0zrL/FgZkxdMF1ccW+sfAjRfSda/wZY52jvATGGAslu1OJD7OAUN5F7kR/ +q5R4ZJjT9ijdh9hwZXT7DrkT66cPYakylszeu+1jTBi7qUD3oFRuIIhxdRjqerQ0 +cuAjJ3dctpDqhiVAq+8zD8ufgr6iIPv2tS0a5sKFsXQP+8hlAqRSAUfdSSLBv9jr +a6x+3uxjMxW3IwiPxg+NQVrdjsW5j+VFP3jbutIbQLH+cU0/4IGiul607BXgk90I +H37hVZkLId6Tngr75qNJvTYw/ud3sqB1l7UtgYgXZSD32pAAn8lSzDLKNXz1PQ/Y +K9f1JmzJBjSWFupwWRoyeXkLtoh/D1JIPb9s2KJELtFOt3JY04kTlf5Eq/jXixtu +nLwsoFvVagCvXzfh1foQC5ichucmj87w7G6KVwuA406ywKBjYZC6VWg3dGq2ktuf +oYYitmUnDuy2n0Jg5GfCtdpBC8TTi2EbvPofkSvXRAdeuims2cXp71NIWuuA8ShY +Ic2wBlX7Jz9TkHCpBB5XJ7k= +-----END CERTIFICATE----- + +# Issuer: CN=SSL.com Root Certification Authority ECC O=SSL Corporation +# Subject: CN=SSL.com Root Certification Authority ECC O=SSL Corporation +# Label: "SSL.com Root Certification Authority ECC" +# Serial: 8495723813297216424 +# MD5 Fingerprint: 2e:da:e4:39:7f:9c:8f:37:d1:70:9f:26:17:51:3a:8e +# SHA1 Fingerprint: c3:19:7c:39:24:e6:54:af:1b:c4:ab:20:95:7a:e2:c3:0e:13:02:6a +# SHA256 Fingerprint: 34:17:bb:06:cc:60:07:da:1b:96:1c:92:0b:8a:b4:ce:3f:ad:82:0e:4a:a3:0b:9a:cb:c4:a7:4e:bd:ce:bc:65 +-----BEGIN CERTIFICATE----- +MIICjTCCAhSgAwIBAgIIdebfy8FoW6gwCgYIKoZIzj0EAwIwfDELMAkGA1UEBhMC +VVMxDjAMBgNVBAgMBVRleGFzMRAwDgYDVQQHDAdIb3VzdG9uMRgwFgYDVQQKDA9T +U0wgQ29ycG9yYXRpb24xMTAvBgNVBAMMKFNTTC5jb20gUm9vdCBDZXJ0aWZpY2F0 +aW9uIEF1dGhvcml0eSBFQ0MwHhcNMTYwMjEyMTgxNDAzWhcNNDEwMjEyMTgxNDAz +WjB8MQswCQYDVQQGEwJVUzEOMAwGA1UECAwFVGV4YXMxEDAOBgNVBAcMB0hvdXN0 +b24xGDAWBgNVBAoMD1NTTCBDb3Jwb3JhdGlvbjExMC8GA1UEAwwoU1NMLmNvbSBS +b290IENlcnRpZmljYXRpb24gQXV0aG9yaXR5IEVDQzB2MBAGByqGSM49AgEGBSuB +BAAiA2IABEVuqVDEpiM2nl8ojRfLliJkP9x6jh3MCLOicSS6jkm5BBtHllirLZXI +7Z4INcgn64mMU1jrYor+8FsPazFSY0E7ic3s7LaNGdM0B9y7xgZ/wkWV7Mt/qCPg +CemB+vNH06NjMGEwHQYDVR0OBBYEFILRhXMw5zUE044CkvvlpNHEIejNMA8GA1Ud +EwEB/wQFMAMBAf8wHwYDVR0jBBgwFoAUgtGFczDnNQTTjgKS++Wk0cQh6M0wDgYD +VR0PAQH/BAQDAgGGMAoGCCqGSM49BAMCA2cAMGQCMG/n61kRpGDPYbCWe+0F+S8T +kdzt5fxQaxFGRrMcIQBiu77D5+jNB5n5DQtdcj7EqgIwH7y6C+IwJPt8bYBVCpk+ +gA0z5Wajs6O7pdWLjwkspl1+4vAHCGht0nxpbl/f5Wpl +-----END CERTIFICATE----- + +# Issuer: CN=SSL.com EV Root Certification Authority RSA R2 O=SSL Corporation +# Subject: CN=SSL.com EV Root Certification Authority RSA R2 O=SSL Corporation +# Label: "SSL.com EV Root Certification Authority RSA R2" +# Serial: 6248227494352943350 +# MD5 Fingerprint: e1:1e:31:58:1a:ae:54:53:02:f6:17:6a:11:7b:4d:95 +# SHA1 Fingerprint: 74:3a:f0:52:9b:d0:32:a0:f4:4a:83:cd:d4:ba:a9:7b:7c:2e:c4:9a +# SHA256 Fingerprint: 2e:7b:f1:6c:c2:24:85:a7:bb:e2:aa:86:96:75:07:61:b0:ae:39:be:3b:2f:e9:d0:cc:6d:4e:f7:34:91:42:5c +-----BEGIN CERTIFICATE----- +MIIF6zCCA9OgAwIBAgIIVrYpzTS8ePYwDQYJKoZIhvcNAQELBQAwgYIxCzAJBgNV +BAYTAlVTMQ4wDAYDVQQIDAVUZXhhczEQMA4GA1UEBwwHSG91c3RvbjEYMBYGA1UE +CgwPU1NMIENvcnBvcmF0aW9uMTcwNQYDVQQDDC5TU0wuY29tIEVWIFJvb3QgQ2Vy +dGlmaWNhdGlvbiBBdXRob3JpdHkgUlNBIFIyMB4XDTE3MDUzMTE4MTQzN1oXDTQy +MDUzMDE4MTQzN1owgYIxCzAJBgNVBAYTAlVTMQ4wDAYDVQQIDAVUZXhhczEQMA4G +A1UEBwwHSG91c3RvbjEYMBYGA1UECgwPU1NMIENvcnBvcmF0aW9uMTcwNQYDVQQD +DC5TU0wuY29tIEVWIFJvb3QgQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkgUlNBIFIy +MIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAjzZlQOHWTcDXtOlG2mvq 
+M0fNTPl9fb69LT3w23jhhqXZuglXaO1XPqDQCEGD5yhBJB/jchXQARr7XnAjssuf
+OePPxU7Gkm0mxnu7s9onnQqG6YE3Bf7wcXHswxzpY6IXFJ3vG2fThVUCAtZJycxa
+4bH3bzKfydQ7iEGonL3Lq9ttewkfokxykNorCPzPPFTOZw+oz12WGQvE43LrrdF9
+HSfvkusQv1vrO6/PgN3B0pYEW3p+pKk8OHakYo6gOV7qd89dAFmPZiw+B6KjBSYR
+aZfqhbcPlgtLyEDhULouisv3D5oi53+aNxPN8k0TayHRwMwi8qFG9kRpnMphNQcA
+b9ZhCBHqurj26bNg5U257J8UZslXWNvNh2n4ioYSA0e/ZhN2rHd9NCSFg83XqpyQ
+Gp8hLH94t2S42Oim9HizVcuE0jLEeK6jj2HdzghTreyI/BXkmg3mnxp3zkyPuBQV
+PWKchjgGAGYS5Fl2WlPAApiiECtoRHuOec4zSnaqW4EWG7WK2NAAe15itAnWhmMO
+pgWVSbooi4iTsjQc2KRVbrcc0N6ZVTsj9CLg+SlmJuwgUHfbSguPvuUCYHBBXtSu
+UDkiFCbLsjtzdFVHB3mBOagwE0TlBIqulhMlQg+5U8Sb/M3kHN48+qvWBkofZ6aY
+MBzdLNvcGJVXZsb/XItW9XcCAwEAAaNjMGEwDwYDVR0TAQH/BAUwAwEB/zAfBgNV
+HSMEGDAWgBT5YLvU49U09rj1BoAlp3PbRmmonjAdBgNVHQ4EFgQU+WC71OPVNPa4
+9QaAJadz20ZpqJ4wDgYDVR0PAQH/BAQDAgGGMA0GCSqGSIb3DQEBCwUAA4ICAQBW
+s47LCp1Jjr+kxJG7ZhcFUZh1++VQLHqe8RT6q9OKPv+RKY9ji9i0qVQBDb6Thi/5
+Sm3HXvVX+cpVHBK+Rw82xd9qt9t1wkclf7nxY/hoLVUE0fKNsKTPvDxeH3jnpaAg
+cLAExbf3cqfeIg29MyVGjGSSJuM+LmOW2puMPfgYCdcDzH2GguDKBAdRUNf/ktUM
+79qGn5nX67evaOI5JpS6aLe/g9Pqemc9YmeuJeVy6OLk7K4S9ksrPJ/psEDzOFSz
+/bdoyNrGj1E8svuR3Bznm53htw1yj+KkxKl4+esUrMZDBcJlOSgYAsOCsp0FvmXt
+ll9ldDz7CTUue5wT/RsPXcdtgTpWD8w74a8CLyKsRspGPKAcTNZEtF4uXBVmCeEm
+Kf7GUmG6sXP/wwyc5WxqlD8UykAWlYTzWamsX0xhk23RO8yilQwipmdnRC652dKK
+QbNmC1r7fSOl8hqw/96bg5Qu0T/fkreRrwU7ZcegbLHNYhLDkBvjJc40vG93drEQ
+w/cFGsDWr3RiSBd3kmmQYRzelYB0VI8YHMPzA9C/pEN1hlMYegouCRw2n5H9gooi
+S9EOUCXdywMMF8mDAAhONU2Ki+3wApRmLER/y5UnlhetCTCstnEXbosX9hwJ1C07
+mKVx01QT2WDz9UtmT/rx7iASjbSsV7FFY6GsdqnC+w==
+-----END CERTIFICATE-----
+
+# Issuer: CN=SSL.com EV Root Certification Authority ECC O=SSL Corporation
+# Subject: CN=SSL.com EV Root Certification Authority ECC O=SSL Corporation
+# Label: "SSL.com EV Root Certification Authority ECC"
+# Serial: 3182246526754555285
+# MD5 Fingerprint: 59:53:22:65:83:42:01:54:c0:ce:42:b9:5a:7c:f2:90
+# SHA1 Fingerprint: 4c:dd:51:a3:d1:f5:20:32:14:b0:c6:c5:32:23:03:91:c7:46:42:6d
+# SHA256 Fingerprint: 22:a2:c1:f7:bd:ed:70:4c:c1:e7:01:b5:f4:08:c3:10:88:0f:e9:56:b5:de:2a:4a:44:f9:9c:87:3a:25:a7:c8
+-----BEGIN CERTIFICATE-----
+MIIClDCCAhqgAwIBAgIILCmcWxbtBZUwCgYIKoZIzj0EAwIwfzELMAkGA1UEBhMC
+VVMxDjAMBgNVBAgMBVRleGFzMRAwDgYDVQQHDAdIb3VzdG9uMRgwFgYDVQQKDA9T
+U0wgQ29ycG9yYXRpb24xNDAyBgNVBAMMK1NTTC5jb20gRVYgUm9vdCBDZXJ0aWZp
+Y2F0aW9uIEF1dGhvcml0eSBFQ0MwHhcNMTYwMjEyMTgxNTIzWhcNNDEwMjEyMTgx
+NTIzWjB/MQswCQYDVQQGEwJVUzEOMAwGA1UECAwFVGV4YXMxEDAOBgNVBAcMB0hv
+dXN0b24xGDAWBgNVBAoMD1NTTCBDb3Jwb3JhdGlvbjE0MDIGA1UEAwwrU1NMLmNv
+bSBFViBSb290IENlcnRpZmljYXRpb24gQXV0aG9yaXR5IEVDQzB2MBAGByqGSM49
+AgEGBSuBBAAiA2IABKoSR5CYG/vvw0AHgyBO8TCCogbR8pKGYfL2IWjKAMTH6kMA
+VIbc/R/fALhBYlzccBYy3h+Z1MzFB8gIH2EWB1E9fVwHU+M1OIzfzZ/ZLg1Kthku
+WnBaBu2+8KGwytAJKaNjMGEwHQYDVR0OBBYEFFvKXuXe0oGqzagtZFG22XKbl+ZP
+MA8GA1UdEwEB/wQFMAMBAf8wHwYDVR0jBBgwFoAUW8pe5d7SgarNqC1kUbbZcpuX
+5k8wDgYDVR0PAQH/BAQDAgGGMAoGCCqGSM49BAMCA2gAMGUCMQCK5kCJN+vp1RPZ
+ytRrJPOwPYdGWBrssd9v+1a6cGvHOMzosYxPD/fxZ3YOg9AeUY8CMD32IygmTMZg
+h5Mmm7I1HrrW9zzRHM76JTymGoEVW/MSD2zuZYrJh6j5B+BimoxcSg==
+-----END CERTIFICATE-----
+
+# Issuer: CN=GlobalSign O=GlobalSign OU=GlobalSign Root CA - R6
+# Subject: CN=GlobalSign O=GlobalSign OU=GlobalSign Root CA - R6
+# Label: "GlobalSign Root CA - R6"
+# Serial: 1417766617973444989252670301619537
+# MD5 Fingerprint: 4f:dd:07:e4:d4:22:64:39:1e:0c:37:42:ea:d1:c6:ae
+# SHA1 Fingerprint: 80:94:64:0e:b5:a7:a1:ca:11:9c:1f:dd:d5:9f:81:02:63:a7:fb:d1
+# SHA256 Fingerprint: 2c:ab:ea:fe:37:d0:6c:a2:2a:ba:73:91:c0:03:3d:25:98:29:52:c4:53:64:73:49:76:3a:3a:b5:ad:6c:cf:69
+-----BEGIN CERTIFICATE-----
+MIIFgzCCA2ugAwIBAgIORea7A4Mzw4VlSOb/RVEwDQYJKoZIhvcNAQEMBQAwTDEg
+MB4GA1UECxMXR2xvYmFsU2lnbiBSb290IENBIC0gUjYxEzARBgNVBAoTCkdsb2Jh
+bFNpZ24xEzARBgNVBAMTCkdsb2JhbFNpZ24wHhcNMTQxMjEwMDAwMDAwWhcNMzQx
+MjEwMDAwMDAwWjBMMSAwHgYDVQQLExdHbG9iYWxTaWduIFJvb3QgQ0EgLSBSNjET
+MBEGA1UEChMKR2xvYmFsU2lnbjETMBEGA1UEAxMKR2xvYmFsU2lnbjCCAiIwDQYJ
+KoZIhvcNAQEBBQADggIPADCCAgoCggIBAJUH6HPKZvnsFMp7PPcNCPG0RQssgrRI
+xutbPK6DuEGSMxSkb3/pKszGsIhrxbaJ0cay/xTOURQh7ErdG1rG1ofuTToVBu1k
+ZguSgMpE3nOUTvOniX9PeGMIyBJQbUJmL025eShNUhqKGoC3GYEOfsSKvGRMIRxD
+aNc9PIrFsmbVkJq3MQbFvuJtMgamHvm566qjuL++gmNQ0PAYid/kD3n16qIfKtJw
+LnvnvJO7bVPiSHyMEAc4/2ayd2F+4OqMPKq0pPbzlUoSB239jLKJz9CgYXfIWHSw
+1CM69106yqLbnQneXUQtkPGBzVeS+n68UARjNN9rkxi+azayOeSsJDa38O+2HBNX
+k7besvjihbdzorg1qkXy4J02oW9UivFyVm4uiMVRQkQVlO6jxTiWm05OWgtH8wY2
+SXcwvHE35absIQh1/OZhFj931dmRl4QKbNQCTXTAFO39OfuD8l4UoQSwC+n+7o/h
+bguyCLNhZglqsQY6ZZZZwPA1/cnaKI0aEYdwgQqomnUdnjqGBQCe24DWJfncBZ4n
+WUx2OVvq+aWh2IMP0f/fMBH5hc8zSPXKbWQULHpYT9NLCEnFlWQaYw55PfWzjMpY
+rZxCRXluDocZXFSxZba/jJvcE+kNb7gu3GduyYsRtYQUigAZcIN5kZeR1Bonvzce
+MgfYFGM8KEyvAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwIBBjAPBgNVHRMBAf8EBTAD
+AQH/MB0GA1UdDgQWBBSubAWjkxPioufi1xzWx/B/yGdToDAfBgNVHSMEGDAWgBSu
+bAWjkxPioufi1xzWx/B/yGdToDANBgkqhkiG9w0BAQwFAAOCAgEAgyXt6NH9lVLN
+nsAEoJFp5lzQhN7craJP6Ed41mWYqVuoPId8AorRbrcWc+ZfwFSY1XS+wc3iEZGt
+Ixg93eFyRJa0lV7Ae46ZeBZDE1ZXs6KzO7V33EByrKPrmzU+sQghoefEQzd5Mr61
+55wsTLxDKZmOMNOsIeDjHfrYBzN2VAAiKrlNIC5waNrlU/yDXNOd8v9EDERm8tLj
+vUYAGm0CuiVdjaExUd1URhxN25mW7xocBFymFe944Hn+Xds+qkxV/ZoVqW/hpvvf
+cDDpw+5CRu3CkwWJ+n1jez/QcYF8AOiYrg54NMMl+68KnyBr3TsTjxKM4kEaSHpz
+oHdpx7Zcf4LIHv5YGygrqGytXm3ABdJ7t+uA/iU3/gKbaKxCXcPu9czc8FB10jZp
+nOZ7BN9uBmm23goJSFmH63sUYHpkqmlD75HHTOwY3WzvUy2MmeFe8nI+z1TIvWfs
+pA9MRf/TuTAjB0yPEL+GltmZWrSZVxykzLsViVO6LAUP5MSeGbEYNNVMnbrt9x+v
+JJUEeKgDu+6B5dpffItKoZB0JaezPkvILFa9x8jvOOJckvB595yEunQtYQEgfn7R
+8k8HWV+LLUNS60YMlOH1Zkd5d9VUWx+tJDfLRVpOoERIyNiwmcUVhAn21klJwGW4
+5hpxbqCo8YLoRT5s1gLXCmeDBVrJpBA=
+-----END CERTIFICATE-----
+
+# Issuer: CN=OISTE WISeKey Global Root GC CA O=WISeKey OU=OISTE Foundation Endorsed
+# Subject: CN=OISTE WISeKey Global Root GC CA O=WISeKey OU=OISTE Foundation Endorsed
+# Label: "OISTE WISeKey Global Root GC CA"
+# Serial: 44084345621038548146064804565436152554
+# MD5 Fingerprint: a9:d6:b9:2d:2f:93:64:f8:a5:69:ca:91:e9:68:07:23
+# SHA1 Fingerprint: e0:11:84:5e:34:de:be:88:81:b9:9c:f6:16:26:d1:96:1f:c3:b9:31
+# SHA256 Fingerprint: 85:60:f9:1c:36:24:da:ba:95:70:b5:fe:a0:db:e3:6f:f1:1a:83:23:be:94:86:85:4f:b3:f3:4a:55:71:19:8d
+-----BEGIN CERTIFICATE-----
+MIICaTCCAe+gAwIBAgIQISpWDK7aDKtARb8roi066jAKBggqhkjOPQQDAzBtMQsw
+CQYDVQQGEwJDSDEQMA4GA1UEChMHV0lTZUtleTEiMCAGA1UECxMZT0lTVEUgRm91
+bmRhdGlvbiBFbmRvcnNlZDEoMCYGA1UEAxMfT0lTVEUgV0lTZUtleSBHbG9iYWwg
+Um9vdCBHQyBDQTAeFw0xNzA1MDkwOTQ4MzRaFw00MjA1MDkwOTU4MzNaMG0xCzAJ
+BgNVBAYTAkNIMRAwDgYDVQQKEwdXSVNlS2V5MSIwIAYDVQQLExlPSVNURSBGb3Vu
+ZGF0aW9uIEVuZG9yc2VkMSgwJgYDVQQDEx9PSVNURSBXSVNlS2V5IEdsb2JhbCBS
+b290IEdDIENBMHYwEAYHKoZIzj0CAQYFK4EEACIDYgAETOlQwMYPchi82PG6s4ni
+eUqjFqdrVCTbUf/q9Akkwwsin8tqJ4KBDdLArzHkdIJuyiXZjHWd8dvQmqJLIX4W
+p2OQ0jnUsYd4XxiWD1AbNTcPasbc2RNNpI6QN+a9WzGRo1QwUjAOBgNVHQ8BAf8E
+BAMCAQYwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQUSIcUrOPDnpBgOtfKie7T
+rYy0UGYwEAYJKwYBBAGCNxUBBAMCAQAwCgYIKoZIzj0EAwMDaAAwZQIwJsdpW9zV
+57LnyAyMjMPdeYwbY9XJUpROTYJKcx6ygISpJcBMWm1JKWB4E+J+SOtkAjEA2zQg
+Mgj/mkkCtojeFK9dbJlxjRo/i9fgojaGHAeCOnZT/cKi7e97sIBPWA9LUzm9
+-----END CERTIFICATE-----
+
+# Issuer: CN=GTS Root R1 O=Google Trust Services LLC
+# Subject: CN=GTS Root R1 O=Google Trust Services LLC
+# Label: "GTS Root R1"
+# Serial: 146587175971765017618439757810265552097
+# MD5 Fingerprint: 82:1a:ef:d4:d2:4a:f2:9f:e2:3d:97:06:14:70:72:85
+# SHA1 Fingerprint: e1:c9:50:e6:ef:22:f8:4c:56:45:72:8b:92:20:60:d7:d5:a7:a3:e8
+# SHA256 Fingerprint: 2a:57:54:71:e3:13:40:bc:21:58:1c:bd:2c:f1:3e:15:84:63:20:3e:ce:94:bc:f9:d3:cc:19:6b:f0:9a:54:72
+-----BEGIN CERTIFICATE-----
+MIIFWjCCA0KgAwIBAgIQbkepxUtHDA3sM9CJuRz04TANBgkqhkiG9w0BAQwFADBH
+MQswCQYDVQQGEwJVUzEiMCAGA1UEChMZR29vZ2xlIFRydXN0IFNlcnZpY2VzIExM
+QzEUMBIGA1UEAxMLR1RTIFJvb3QgUjEwHhcNMTYwNjIyMDAwMDAwWhcNMzYwNjIy
+MDAwMDAwWjBHMQswCQYDVQQGEwJVUzEiMCAGA1UEChMZR29vZ2xlIFRydXN0IFNl
+cnZpY2VzIExMQzEUMBIGA1UEAxMLR1RTIFJvb3QgUjEwggIiMA0GCSqGSIb3DQEB
+AQUAA4ICDwAwggIKAoICAQC2EQKLHuOhd5s73L+UPreVp0A8of2C+X0yBoJx9vaM
+f/vo27xqLpeXo4xL+Sv2sfnOhB2x+cWX3u+58qPpvBKJXqeqUqv4IyfLpLGcY9vX
+mX7wCl7raKb0xlpHDU0QM+NOsROjyBhsS+z8CZDfnWQpJSMHobTSPS5g4M/SCYe7
+zUjwTcLCeoiKu7rPWRnWr4+wB7CeMfGCwcDfLqZtbBkOtdh+JhpFAz2weaSUKK0P
+fyblqAj+lug8aJRT7oM6iCsVlgmy4HqMLnXWnOunVmSPlk9orj2XwoSPwLxAwAtc
+vfaHszVsrBhQf4TgTM2S0yDpM7xSma8ytSmzJSq0SPly4cpk9+aCEI3oncKKiPo4
+Zor8Y/kB+Xj9e1x3+naH+uzfsQ55lVe0vSbv1gHR6xYKu44LtcXFilWr06zqkUsp
+zBmkMiVOKvFlRNACzqrOSbTqn3yDsEB750Orp2yjj32JgfpMpf/VjsPOS+C12LOO
+Rc92wO1AK/1TD7Cn1TsNsYqiA94xrcx36m97PtbfkSIS5r762DL8EGMUUXLeXdYW
+k70paDPvOmbsB4om3xPXV2V4J95eSRQAogB/mqghtqmxlbCluQ0WEdrHbEg8QOB+
+DVrNVjzRlwW5y0vtOUucxD/SVRNuJLDWcfr0wbrM7Rv1/oFB2ACYPTrIrnqYNxgF
+lQIDAQABo0IwQDAOBgNVHQ8BAf8EBAMCAQYwDwYDVR0TAQH/BAUwAwEB/zAdBgNV
+HQ4EFgQU5K8rJnEaK0gnhS9SZizv8IkTcT4wDQYJKoZIhvcNAQEMBQADggIBADiW
+Cu49tJYeX++dnAsznyvgyv3SjgofQXSlfKqE1OXyHuY3UjKcC9FhHb8owbZEKTV1
+d5iyfNm9dKyKaOOpMQkpAWBz40d8U6iQSifvS9efk+eCNs6aaAyC58/UEBZvXw6Z
+XPYfcX3v73svfuo21pdwCxXu11xWajOl40k4DLh9+42FpLFZXvRq4d2h9mREruZR
+gyFmxhE+885H7pwoHyXa/6xmld01D1zvICxi/ZG6qcz8WpyTgYMpl0p8WnK0OdC3
+d8t5/Wk6kjftbjhlRn7pYL15iJdfOBL07q9bgsiG1eGZbYwE8na6SfZu6W0eX6Dv
+J4J2QPim01hcDyxC2kLGe4g0x8HYRZvBPsVhHdljUEn2NIVq4BjFbkerQUIpm/Zg
+DdIx02OYI5NaAIFItO/Nis3Jz5nu2Z6qNuFoS3FJFDYoOj0dzpqPJeaAcWErtXvM
++SUWgeExX6GjfhaknBZqlxi9dnKlC54dNuYvoS++cJEPqOba+MSSQGwlfnuzCdyy
+F62ARPBopY+Udf90WuioAnwMCeKpSwughQtiue+hMZL77/ZRBIls6Kl0obsXs7X9
+SQ98POyDGCBDTtWTurQ0sR8WNh8M5mQ5Fkzc4P4dyKliPUDqysU0ArSuiYgzNdws
+E3PYJ/HQcu51OyLemGhmW/HGY0dVHLqlCFF1pkgl
+-----END CERTIFICATE-----
+
+# Issuer: CN=GTS Root R2 O=Google Trust Services LLC
+# Subject: CN=GTS Root R2 O=Google Trust Services LLC
+# Label: "GTS Root R2"
+# Serial: 146587176055767053814479386953112547951
+# MD5 Fingerprint: 44:ed:9a:0e:a4:09:3b:00:f2:ae:4c:a3:c6:61:b0:8b
+# SHA1 Fingerprint: d2:73:96:2a:2a:5e:39:9f:73:3f:e1:c7:1e:64:3f:03:38:34:fc:4d
+# SHA256 Fingerprint: c4:5d:7b:b0:8e:6d:67:e6:2e:42:35:11:0b:56:4e:5f:78:fd:92:ef:05:8c:84:0a:ea:4e:64:55:d7:58:5c:60
+-----BEGIN CERTIFICATE-----
+MIIFWjCCA0KgAwIBAgIQbkepxlqz5yDFMJo/aFLybzANBgkqhkiG9w0BAQwFADBH
+MQswCQYDVQQGEwJVUzEiMCAGA1UEChMZR29vZ2xlIFRydXN0IFNlcnZpY2VzIExM
+QzEUMBIGA1UEAxMLR1RTIFJvb3QgUjIwHhcNMTYwNjIyMDAwMDAwWhcNMzYwNjIy
+MDAwMDAwWjBHMQswCQYDVQQGEwJVUzEiMCAGA1UEChMZR29vZ2xlIFRydXN0IFNl
+cnZpY2VzIExMQzEUMBIGA1UEAxMLR1RTIFJvb3QgUjIwggIiMA0GCSqGSIb3DQEB
+AQUAA4ICDwAwggIKAoICAQDO3v2m++zsFDQ8BwZabFn3GTXd98GdVarTzTukk3Lv
+CvptnfbwhYBboUhSnznFt+4orO/LdmgUud+tAWyZH8QiHZ/+cnfgLFuv5AS/T3Kg
+GjSY6Dlo7JUle3ah5mm5hRm9iYz+re026nO8/4Piy33B0s5Ks40FnotJk9/BW9Bu
+XvAuMC6C/Pq8tBcKSOWIm8Wba96wyrQD8Nr0kLhlZPdcTK3ofmZemde4wj7I0BOd
+re7kRXuJVfeKH2JShBKzwkCX44ofR5GmdFrS+LFjKBC4swm4VndAoiaYecb+3yXu
+PuWgf9RhD1FLPD+M2uFwdNjCaKH5wQzpoeJ/u1U8dgbuak7MkogwTZq9TwtImoS1 +mKPV+3PBV2HdKFZ1E66HjucMUQkQdYhMvI35ezzUIkgfKtzra7tEscszcTJGr61K +8YzodDqs5xoic4DSMPclQsciOzsSrZYuxsN2B6ogtzVJV+mSSeh2FnIxZyuWfoqj +x5RWIr9qS34BIbIjMt/kmkRtWVtd9QCgHJvGeJeNkP+byKq0rxFROV7Z+2et1VsR +nTKaG73VululycslaVNVJ1zgyjbLiGH7HrfQy+4W+9OmTN6SpdTi3/UGVN4unUu0 +kzCqgc7dGtxRcw1PcOnlthYhGXmy5okLdWTK1au8CcEYof/UVKGFPP0UJAOyh9Ok +twIDAQABo0IwQDAOBgNVHQ8BAf8EBAMCAQYwDwYDVR0TAQH/BAUwAwEB/zAdBgNV +HQ4EFgQUu//KjiOfT5nK2+JopqUVJxce2Q4wDQYJKoZIhvcNAQEMBQADggIBALZp +8KZ3/p7uC4Gt4cCpx/k1HUCCq+YEtN/L9x0Pg/B+E02NjO7jMyLDOfxA325BS0JT +vhaI8dI4XsRomRyYUpOM52jtG2pzegVATX9lO9ZY8c6DR2Dj/5epnGB3GFW1fgiT +z9D2PGcDFWEJ+YF59exTpJ/JjwGLc8R3dtyDovUMSRqodt6Sm2T4syzFJ9MHwAiA +pJiS4wGWAqoC7o87xdFtCjMwc3i5T1QWvwsHoaRc5svJXISPD+AVdyx+Jn7axEvb +pxZ3B7DNdehyQtaVhJ2Gg/LkkM0JR9SLA3DaWsYDQvTtN6LwG1BUSw7YhN4ZKJmB +R64JGz9I0cNv4rBgF/XuIwKl2gBbbZCr7qLpGzvpx0QnRY5rn/WkhLx3+WuXrD5R +RaIRpsyF7gpo8j5QOHokYh4XIDdtak23CZvJ/KRY9bb7nE4Yu5UC56GtmwfuNmsk +0jmGwZODUNKBRqhfYlcsu2xkiAhu7xNUX90txGdj08+JN7+dIPT7eoOboB6BAFDC +5AwiWVIQ7UNWhwD4FFKnHYuTjKJNRn8nxnGbJN7k2oaLDX5rIMHAnuFl2GqjpuiF +izoHCBy69Y9Vmhh1fuXsgWbRIXOhNUQLgD1bnF5vKheW0YMjiGZt5obicDIvUiLn +yOd/xCxgXS/Dr55FBcOEArf9LAhST4Ldo/DUhgkC +-----END CERTIFICATE----- + +# Issuer: CN=GTS Root R3 O=Google Trust Services LLC +# Subject: CN=GTS Root R3 O=Google Trust Services LLC +# Label: "GTS Root R3" +# Serial: 146587176140553309517047991083707763997 +# MD5 Fingerprint: 1a:79:5b:6b:04:52:9c:5d:c7:74:33:1b:25:9a:f9:25 +# SHA1 Fingerprint: 30:d4:24:6f:07:ff:db:91:89:8a:0b:e9:49:66:11:eb:8c:5e:46:e5 +# SHA256 Fingerprint: 15:d5:b8:77:46:19:ea:7d:54:ce:1c:a6:d0:b0:c4:03:e0:37:a9:17:f1:31:e8:a0:4e:1e:6b:7a:71:ba:bc:e5 +-----BEGIN CERTIFICATE----- +MIICDDCCAZGgAwIBAgIQbkepx2ypcyRAiQ8DVd2NHTAKBggqhkjOPQQDAzBHMQsw +CQYDVQQGEwJVUzEiMCAGA1UEChMZR29vZ2xlIFRydXN0IFNlcnZpY2VzIExMQzEU +MBIGA1UEAxMLR1RTIFJvb3QgUjMwHhcNMTYwNjIyMDAwMDAwWhcNMzYwNjIyMDAw +MDAwWjBHMQswCQYDVQQGEwJVUzEiMCAGA1UEChMZR29vZ2xlIFRydXN0IFNlcnZp +Y2VzIExMQzEUMBIGA1UEAxMLR1RTIFJvb3QgUjMwdjAQBgcqhkjOPQIBBgUrgQQA +IgNiAAQfTzOHMymKoYTey8chWEGJ6ladK0uFxh1MJ7x/JlFyb+Kf1qPKzEUURout +736GjOyxfi//qXGdGIRFBEFVbivqJn+7kAHjSxm65FSWRQmx1WyRRK2EE46ajA2A +DDL24CejQjBAMA4GA1UdDwEB/wQEAwIBBjAPBgNVHRMBAf8EBTADAQH/MB0GA1Ud +DgQWBBTB8Sa6oC2uhYHP0/EqEr24Cmf9vDAKBggqhkjOPQQDAwNpADBmAjEAgFuk +fCPAlaUs3L6JbyO5o91lAFJekazInXJ0glMLfalAvWhgxeG4VDvBNhcl2MG9AjEA +njWSdIUlUfUk7GRSJFClH9voy8l27OyCbvWFGFPouOOaKaqW04MjyaR7YbPMAuhd +-----END CERTIFICATE----- + +# Issuer: CN=GTS Root R4 O=Google Trust Services LLC +# Subject: CN=GTS Root R4 O=Google Trust Services LLC +# Label: "GTS Root R4" +# Serial: 146587176229350439916519468929765261721 +# MD5 Fingerprint: 5d:b6:6a:c4:60:17:24:6a:1a:99:a8:4b:ee:5e:b4:26 +# SHA1 Fingerprint: 2a:1d:60:27:d9:4a:b1:0a:1c:4d:91:5c:cd:33:a0:cb:3e:2d:54:cb +# SHA256 Fingerprint: 71:cc:a5:39:1f:9e:79:4b:04:80:25:30:b3:63:e1:21:da:8a:30:43:bb:26:66:2f:ea:4d:ca:7f:c9:51:a4:bd +-----BEGIN CERTIFICATE----- +MIICCjCCAZGgAwIBAgIQbkepyIuUtui7OyrYorLBmTAKBggqhkjOPQQDAzBHMQsw +CQYDVQQGEwJVUzEiMCAGA1UEChMZR29vZ2xlIFRydXN0IFNlcnZpY2VzIExMQzEU +MBIGA1UEAxMLR1RTIFJvb3QgUjQwHhcNMTYwNjIyMDAwMDAwWhcNMzYwNjIyMDAw +MDAwWjBHMQswCQYDVQQGEwJVUzEiMCAGA1UEChMZR29vZ2xlIFRydXN0IFNlcnZp +Y2VzIExMQzEUMBIGA1UEAxMLR1RTIFJvb3QgUjQwdjAQBgcqhkjOPQIBBgUrgQQA +IgNiAATzdHOnaItgrkO4NcWBMHtLSZ37wWHO5t5GvWvVYRg1rkDdc/eJkTBa6zzu +hXyiQHY7qca4R9gq55KRanPpsXI5nymfopjTX15YhmUPoYRlBtHci8nHc8iMai/l +xKvRHYqjQjBAMA4GA1UdDwEB/wQEAwIBBjAPBgNVHRMBAf8EBTADAQH/MB0GA1Ud 
+DgQWBBSATNbrdP9JNqPV2Py1PsVq8JQdjDAKBggqhkjOPQQDAwNnADBkAjBqUFJ0 +CMRw3J5QdCHojXohw0+WbhXRIjVhLfoIN+4Zba3bssx9BzT1YBkstTTZbyACMANx +sbqjYAuG7ZoIapVon+Kz4ZNkfF6Tpt95LY2F45TPI11xzPKwTdb+mciUqXWi4w== +-----END CERTIFICATE----- + +# Issuer: CN=UCA Global G2 Root O=UniTrust +# Subject: CN=UCA Global G2 Root O=UniTrust +# Label: "UCA Global G2 Root" +# Serial: 124779693093741543919145257850076631279 +# MD5 Fingerprint: 80:fe:f0:c4:4a:f0:5c:62:32:9f:1c:ba:78:a9:50:f8 +# SHA1 Fingerprint: 28:f9:78:16:19:7a:ff:18:25:18:aa:44:fe:c1:a0:ce:5c:b6:4c:8a +# SHA256 Fingerprint: 9b:ea:11:c9:76:fe:01:47:64:c1:be:56:a6:f9:14:b5:a5:60:31:7a:bd:99:88:39:33:82:e5:16:1a:a0:49:3c +-----BEGIN CERTIFICATE----- +MIIFRjCCAy6gAwIBAgIQXd+x2lqj7V2+WmUgZQOQ7zANBgkqhkiG9w0BAQsFADA9 +MQswCQYDVQQGEwJDTjERMA8GA1UECgwIVW5pVHJ1c3QxGzAZBgNVBAMMElVDQSBH +bG9iYWwgRzIgUm9vdDAeFw0xNjAzMTEwMDAwMDBaFw00MDEyMzEwMDAwMDBaMD0x +CzAJBgNVBAYTAkNOMREwDwYDVQQKDAhVbmlUcnVzdDEbMBkGA1UEAwwSVUNBIEds +b2JhbCBHMiBSb290MIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAxeYr +b3zvJgUno4Ek2m/LAfmZmqkywiKHYUGRO8vDaBsGxUypK8FnFyIdK+35KYmToni9 +kmugow2ifsqTs6bRjDXVdfkX9s9FxeV67HeToI8jrg4aA3++1NDtLnurRiNb/yzm +VHqUwCoV8MmNsHo7JOHXaOIxPAYzRrZUEaalLyJUKlgNAQLx+hVRZ2zA+te2G3/R +VogvGjqNO7uCEeBHANBSh6v7hn4PJGtAnTRnvI3HLYZveT6OqTwXS3+wmeOwcWDc +C/Vkw85DvG1xudLeJ1uK6NjGruFZfc8oLTW4lVYa8bJYS7cSN8h8s+1LgOGN+jIj +tm+3SJUIsUROhYw6AlQgL9+/V087OpAh18EmNVQg7Mc/R+zvWr9LesGtOxdQXGLY +D0tK3Cv6brxzks3sx1DoQZbXqX5t2Okdj4q1uViSukqSKwxW/YDrCPBeKW4bHAyv +j5OJrdu9o54hyokZ7N+1wxrrFv54NkzWbtA+FxyQF2smuvt6L78RHBgOLXMDj6Dl +NaBa4kx1HXHhOThTeEDMg5PXCp6dW4+K5OXgSORIskfNTip1KnvyIvbJvgmRlld6 +iIis7nCs+dwp4wwcOxJORNanTrAmyPPZGpeRaOrvjUYG0lZFWJo8DA+DuAUlwznP +O6Q0ibd5Ei9Hxeepl2n8pndntd978XplFeRhVmUCAwEAAaNCMEAwDgYDVR0PAQH/ +BAQDAgEGMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFIHEjMz15DD/pQwIX4wV +ZyF0Ad/fMA0GCSqGSIb3DQEBCwUAA4ICAQATZSL1jiutROTL/7lo5sOASD0Ee/oj +L3rtNtqyzm325p7lX1iPyzcyochltq44PTUbPrw7tgTQvPlJ9Zv3hcU2tsu8+Mg5 +1eRfB70VVJd0ysrtT7q6ZHafgbiERUlMjW+i67HM0cOU2kTC5uLqGOiiHycFutfl +1qnN3e92mI0ADs0b+gO3joBYDic/UvuUospeZcnWhNq5NXHzJsBPd+aBJ9J3O5oU +b3n09tDh05S60FdRvScFDcH9yBIw7m+NESsIndTUv4BFFJqIRNow6rSn4+7vW4LV +PtateJLbXDzz2K36uGt/xDYotgIVilQsnLAXc47QN6MUPJiVAAwpBVueSUmxX8fj +y88nZY41F7dXyDDZQVu5FLbowg+UMaeUmMxq67XhJ/UQqAHojhJi6IjMtX9Gl8Cb +EGY4GjZGXyJoPd/JxhMnq1MGrKI8hgZlb7F+sSlEmqO6SWkoaY/X5V+tBIZkbxqg +DMUIYs6Ao9Dz7GjevjPHF1t/gMRMTLGmhIrDO7gJzRSBuhjjVFc2/tsvfEehOjPI ++Vg7RE+xygKJBJYoaMVLuCaJu9YzL1DV/pqJuhgyklTGW+Cd+V7lDSKb9triyCGy +YiGqhkCyLmTTX8jjfhFnRR8F/uOi77Oos/N9j/gMHyIfLXC0uAE0djAA5SN4p1bX +UB+K+wb1whnw0A== +-----END CERTIFICATE----- + +# Issuer: CN=UCA Extended Validation Root O=UniTrust +# Subject: CN=UCA Extended Validation Root O=UniTrust +# Label: "UCA Extended Validation Root" +# Serial: 106100277556486529736699587978573607008 +# MD5 Fingerprint: a1:f3:5f:43:c6:34:9b:da:bf:8c:7e:05:53:ad:96:e2 +# SHA1 Fingerprint: a3:a1:b0:6f:24:61:23:4a:e3:36:a5:c2:37:fc:a6:ff:dd:f0:d7:3a +# SHA256 Fingerprint: d4:3a:f9:b3:54:73:75:5c:96:84:fc:06:d7:d8:cb:70:ee:5c:28:e7:73:fb:29:4e:b4:1e:e7:17:22:92:4d:24 +-----BEGIN CERTIFICATE----- +MIIFWjCCA0KgAwIBAgIQT9Irj/VkyDOeTzRYZiNwYDANBgkqhkiG9w0BAQsFADBH +MQswCQYDVQQGEwJDTjERMA8GA1UECgwIVW5pVHJ1c3QxJTAjBgNVBAMMHFVDQSBF +eHRlbmRlZCBWYWxpZGF0aW9uIFJvb3QwHhcNMTUwMzEzMDAwMDAwWhcNMzgxMjMx +MDAwMDAwWjBHMQswCQYDVQQGEwJDTjERMA8GA1UECgwIVW5pVHJ1c3QxJTAjBgNV +BAMMHFVDQSBFeHRlbmRlZCBWYWxpZGF0aW9uIFJvb3QwggIiMA0GCSqGSIb3DQEB +AQUAA4ICDwAwggIKAoICAQCpCQcoEwKwmeBkqh5DFnpzsZGgdT6o+uM4AHrsiWog +D4vFsJszA1qGxliG1cGFu0/GnEBNyr7uaZa4rYEwmnySBesFK5pI0Lh2PpbIILvS 
+sPGP2KxFRv+qZ2C0d35qHzwaUnoEPQc8hQ2E0B92CvdqFN9y4zR8V05WAT558aop +O2z6+I9tTcg1367r3CTueUWnhbYFiN6IXSV8l2RnCdm/WhUFhvMJHuxYMjMR83dk +sHYf5BA1FxvyDrFspCqjc/wJHx4yGVMR59mzLC52LqGj3n5qiAno8geK+LLNEOfi +c0CTuwjRP+H8C5SzJe98ptfRr5//lpr1kXuYC3fUfugH0mK1lTnj8/FtDw5lhIpj +VMWAtuCeS31HJqcBCF3RiJ7XwzJE+oJKCmhUfzhTA8ykADNkUVkLo4KRel7sFsLz +KuZi2irbWWIQJUoqgQtHB0MGcIfS+pMRKXpITeuUx3BNr2fVUbGAIAEBtHoIppB/ +TuDvB0GHr2qlXov7z1CymlSvw4m6WC31MJixNnI5fkkE/SmnTHnkBVfblLkWU41G +sx2VYVdWf6/wFlthWG82UBEL2KwrlRYaDh8IzTY0ZRBiZtWAXxQgXy0MoHgKaNYs +1+lvK9JKBZP8nm9rZ/+I8U6laUpSNwXqxhaN0sSZ0YIrO7o1dfdRUVjzyAfd5LQD +fwIDAQABo0IwQDAdBgNVHQ4EFgQU2XQ65DA9DfcS3H5aBZ8eNJr34RQwDwYDVR0T +AQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMCAYYwDQYJKoZIhvcNAQELBQADggIBADaN +l8xCFWQpN5smLNb7rhVpLGsaGvdftvkHTFnq88nIua7Mui563MD1sC3AO6+fcAUR +ap8lTwEpcOPlDOHqWnzcSbvBHiqB9RZLcpHIojG5qtr8nR/zXUACE/xOHAbKsxSQ +VBcZEhrxH9cMaVr2cXj0lH2RC47skFSOvG+hTKv8dGT9cZr4QQehzZHkPJrgmzI5 +c6sq1WnIeJEmMX3ixzDx/BR4dxIOE/TdFpS/S2d7cFOFyrC78zhNLJA5wA3CXWvp +4uXViI3WLL+rG761KIcSF3Ru/H38j9CHJrAb+7lsq+KePRXBOy5nAliRn+/4Qh8s +t2j1da3Ptfb/EX3C8CSlrdP6oDyp+l3cpaDvRKS+1ujl5BOWF3sGPjLtx7dCvHaj +2GU4Kzg1USEODm8uNBNA4StnDG1KQTAYI1oyVZnJF+A83vbsea0rWBmirSwiGpWO +vpaQXUJXxPkUAzUrHC1RVwinOt4/5Mi0A3PCwSaAuwtCH60NryZy2sy+s6ODWA2C +xR9GUeOcGMyNm43sSet1UNWMKFnKdDTajAshqx7qG+XH/RU+wBeq+yNuJkbL+vmx +cmtpzyKEC2IPrNkZAJSidjzULZrtBJ4tBmIQN1IchXIbJ+XMxjHsN+xjWZsLHXbM +fjKaiJUINlK73nZfdklJrX+9ZSCyycErdhh2n1ax +-----END CERTIFICATE----- + +# Issuer: CN=Certigna Root CA O=Dhimyotis OU=0002 48146308100036 +# Subject: CN=Certigna Root CA O=Dhimyotis OU=0002 48146308100036 +# Label: "Certigna Root CA" +# Serial: 269714418870597844693661054334862075617 +# MD5 Fingerprint: 0e:5c:30:62:27:eb:5b:bc:d7:ae:62:ba:e9:d5:df:77 +# SHA1 Fingerprint: 2d:0d:52:14:ff:9e:ad:99:24:01:74:20:47:6e:6c:85:27:27:f5:43 +# SHA256 Fingerprint: d4:8d:3d:23:ee:db:50:a4:59:e5:51:97:60:1c:27:77:4b:9d:7b:18:c9:4d:5a:05:95:11:a1:02:50:b9:31:68 +-----BEGIN CERTIFICATE----- +MIIGWzCCBEOgAwIBAgIRAMrpG4nxVQMNo+ZBbcTjpuEwDQYJKoZIhvcNAQELBQAw +WjELMAkGA1UEBhMCRlIxEjAQBgNVBAoMCURoaW15b3RpczEcMBoGA1UECwwTMDAw +MiA0ODE0NjMwODEwMDAzNjEZMBcGA1UEAwwQQ2VydGlnbmEgUm9vdCBDQTAeFw0x +MzEwMDEwODMyMjdaFw0zMzEwMDEwODMyMjdaMFoxCzAJBgNVBAYTAkZSMRIwEAYD +VQQKDAlEaGlteW90aXMxHDAaBgNVBAsMEzAwMDIgNDgxNDYzMDgxMDAwMzYxGTAX +BgNVBAMMEENlcnRpZ25hIFJvb3QgQ0EwggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAw +ggIKAoICAQDNGDllGlmx6mQWDoyUJJV8g9PFOSbcDO8WV43X2KyjQn+Cyu3NW9sO +ty3tRQgXstmzy9YXUnIo245Onoq2C/mehJpNdt4iKVzSs9IGPjA5qXSjklYcoW9M +CiBtnyN6tMbaLOQdLNyzKNAT8kxOAkmhVECe5uUFoC2EyP+YbNDrihqECB63aCPu +I9Vwzm1RaRDuoXrC0SIxwoKF0vJVdlB8JXrJhFwLrN1CTivngqIkicuQstDuI7pm +TLtipPlTWmR7fJj6o0ieD5Wupxj0auwuA0Wv8HT4Ks16XdG+RCYyKfHx9WzMfgIh +C59vpD++nVPiz32pLHxYGpfhPTc3GGYo0kDFUYqMwy3OU4gkWGQwFsWq4NYKpkDf +ePb1BHxpE4S80dGnBs8B92jAqFe7OmGtBIyT46388NtEbVncSVmurJqZNjBBe3Yz +IoejwpKGbvlw7q6Hh5UbxHq9MfPU0uWZ/75I7HX1eBYdpnDBfzwboZL7z8g81sWT +Co/1VTp2lc5ZmIoJlXcymoO6LAQ6l73UL77XbJuiyn1tJslV1c/DeVIICZkHJC1k +JWumIWmbat10TWuXekG9qxf5kBdIjzb5LdXF2+6qhUVB+s06RbFo5jZMm5BX7CO5 +hwjCxAnxl4YqKE3idMDaxIzb3+KhF1nOJFl0Mdp//TBt2dzhauH8XwIDAQABo4IB +GjCCARYwDwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMCAQYwHQYDVR0OBBYE +FBiHVuBud+4kNTxOc5of1uHieX4rMB8GA1UdIwQYMBaAFBiHVuBud+4kNTxOc5of +1uHieX4rMEQGA1UdIAQ9MDswOQYEVR0gADAxMC8GCCsGAQUFBwIBFiNodHRwczov +L3d3d3cuY2VydGlnbmEuZnIvYXV0b3JpdGVzLzBtBgNVHR8EZjBkMC+gLaArhilo +dHRwOi8vY3JsLmNlcnRpZ25hLmZyL2NlcnRpZ25hcm9vdGNhLmNybDAxoC+gLYYr +aHR0cDovL2NybC5kaGlteW90aXMuY29tL2NlcnRpZ25hcm9vdGNhLmNybDANBgkq +hkiG9w0BAQsFAAOCAgEAlLieT/DjlQgi581oQfccVdV8AOItOoldaDgvUSILSo3L 
+6btdPrtcPbEo/uRTVRPPoZAbAh1fZkYJMyjhDSSXcNMQH+pkV5a7XdrnxIxPTGRG +HVyH41neQtGbqH6mid2PHMkwgu07nM3A6RngatgCdTer9zQoKJHyBApPNeNgJgH6 +0BGM+RFq7q89w1DTj18zeTyGqHNFkIwgtnJzFyO+B2XleJINugHA64wcZr+shncB +lA2c5uk5jR+mUYyZDDl34bSb+hxnV29qao6pK0xXeXpXIs/NX2NGjVxZOob4Mkdi +o2cNGJHc+6Zr9UhhcyNZjgKnvETq9Emd8VRY+WCv2hikLyhF3HqgiIZd8zvn/yk1 +gPxkQ5Tm4xxvvq0OKmOZK8l+hfZx6AYDlf7ej0gcWtSS6Cvu5zHbugRqh5jnxV/v +faci9wHYTfmJ0A6aBVmknpjZbyvKcL5kwlWj9Omvw5Ip3IgWJJk8jSaYtlu3zM63 +Nwf9JtmYhST/WSMDmu2dnajkXjjO11INb9I/bbEFa0nOipFGc/T2L/Coc3cOZayh +jWZSaX5LaAzHHjcng6WMxwLkFM1JAbBzs/3GkDpv0mztO+7skb6iQ12LAEpmJURw +3kAP+HwV96LOPNdeE4yBFxgX0b3xdxA61GU5wSesVywlVP+i2k+KYTlerj1KjL0= +-----END CERTIFICATE----- + +# Issuer: CN=emSign Root CA - G1 O=eMudhra Technologies Limited OU=emSign PKI +# Subject: CN=emSign Root CA - G1 O=eMudhra Technologies Limited OU=emSign PKI +# Label: "emSign Root CA - G1" +# Serial: 235931866688319308814040 +# MD5 Fingerprint: 9c:42:84:57:dd:cb:0b:a7:2e:95:ad:b6:f3:da:bc:ac +# SHA1 Fingerprint: 8a:c7:ad:8f:73:ac:4e:c1:b5:75:4d:a5:40:f4:fc:cf:7c:b5:8e:8c +# SHA256 Fingerprint: 40:f6:af:03:46:a9:9a:a1:cd:1d:55:5a:4e:9c:ce:62:c7:f9:63:46:03:ee:40:66:15:83:3d:c8:c8:d0:03:67 +-----BEGIN CERTIFICATE----- +MIIDlDCCAnygAwIBAgIKMfXkYgxsWO3W2DANBgkqhkiG9w0BAQsFADBnMQswCQYD +VQQGEwJJTjETMBEGA1UECxMKZW1TaWduIFBLSTElMCMGA1UEChMcZU11ZGhyYSBU +ZWNobm9sb2dpZXMgTGltaXRlZDEcMBoGA1UEAxMTZW1TaWduIFJvb3QgQ0EgLSBH +MTAeFw0xODAyMTgxODMwMDBaFw00MzAyMTgxODMwMDBaMGcxCzAJBgNVBAYTAklO +MRMwEQYDVQQLEwplbVNpZ24gUEtJMSUwIwYDVQQKExxlTXVkaHJhIFRlY2hub2xv +Z2llcyBMaW1pdGVkMRwwGgYDVQQDExNlbVNpZ24gUm9vdCBDQSAtIEcxMIIBIjAN +BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAk0u76WaK7p1b1TST0Bsew+eeuGQz +f2N4aLTNLnF115sgxk0pvLZoYIr3IZpWNVrzdr3YzZr/k1ZLpVkGoZM0Kd0WNHVO +8oG0x5ZOrRkVUkr+PHB1cM2vK6sVmjM8qrOLqs1D/fXqcP/tzxE7lM5OMhbTI0Aq +d7OvPAEsbO2ZLIvZTmmYsvePQbAyeGHWDV/D+qJAkh1cF+ZwPjXnorfCYuKrpDhM +tTk1b+oDafo6VGiFbdbyL0NVHpENDtjVaqSW0RM8LHhQ6DqS0hdW5TUaQBw+jSzt +Od9C4INBdN+jzcKGYEho42kLVACL5HZpIQ15TjQIXhTCzLG3rdd8cIrHhQIDAQAB +o0IwQDAdBgNVHQ4EFgQU++8Nhp6w492pufEhF38+/PB3KxowDgYDVR0PAQH/BAQD +AgEGMA8GA1UdEwEB/wQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBAFn/8oz1h31x +PaOfG1vR2vjTnGs2vZupYeveFix0PZ7mddrXuqe8QhfnPZHr5X3dPpzxz5KsbEjM +wiI/aTvFthUvozXGaCocV685743QNcMYDHsAVhzNixl03r4PEuDQqqE/AjSxcM6d +GNYIAwlG7mDgfrbESQRRfXBgvKqy/3lyeqYdPV8q+Mri/Tm3R7nrft8EI6/6nAYH +6ftjk4BAtcZsCjEozgyfz7MjNYBBjWzEN3uBL4ChQEKF6dk4jeihU80Bv2noWgby +RQuQ+q7hv53yrlc8pa6yVvSLZUDp/TGBLPQ5Cdjua6e0ph0VpZj3AYHYhX3zUVxx +iN66zB+Afko= +-----END CERTIFICATE----- + +# Issuer: CN=emSign ECC Root CA - G3 O=eMudhra Technologies Limited OU=emSign PKI +# Subject: CN=emSign ECC Root CA - G3 O=eMudhra Technologies Limited OU=emSign PKI +# Label: "emSign ECC Root CA - G3" +# Serial: 287880440101571086945156 +# MD5 Fingerprint: ce:0b:72:d1:9f:88:8e:d0:50:03:e8:e3:b8:8b:67:40 +# SHA1 Fingerprint: 30:43:fa:4f:f2:57:dc:a0:c3:80:ee:2e:58:ea:78:b2:3f:e6:bb:c1 +# SHA256 Fingerprint: 86:a1:ec:ba:08:9c:4a:8d:3b:be:27:34:c6:12:ba:34:1d:81:3e:04:3c:f9:e8:a8:62:cd:5c:57:a3:6b:be:6b +-----BEGIN CERTIFICATE----- +MIICTjCCAdOgAwIBAgIKPPYHqWhwDtqLhDAKBggqhkjOPQQDAzBrMQswCQYDVQQG +EwJJTjETMBEGA1UECxMKZW1TaWduIFBLSTElMCMGA1UEChMcZU11ZGhyYSBUZWNo +bm9sb2dpZXMgTGltaXRlZDEgMB4GA1UEAxMXZW1TaWduIEVDQyBSb290IENBIC0g +RzMwHhcNMTgwMjE4MTgzMDAwWhcNNDMwMjE4MTgzMDAwWjBrMQswCQYDVQQGEwJJ +TjETMBEGA1UECxMKZW1TaWduIFBLSTElMCMGA1UEChMcZU11ZGhyYSBUZWNobm9s +b2dpZXMgTGltaXRlZDEgMB4GA1UEAxMXZW1TaWduIEVDQyBSb290IENBIC0gRzMw +djAQBgcqhkjOPQIBBgUrgQQAIgNiAAQjpQy4LRL1KPOxst3iAhKAnjlfSU2fySU0 
+WXTsuwYc58Byr+iuL+FBVIcUqEqy6HyC5ltqtdyzdc6LBtCGI79G1Y4PPwT01xyS +fvalY8L1X44uT6EYGQIrMgqCZH0Wk9GjQjBAMB0GA1UdDgQWBBR8XQKEE9TMipuB +zhccLikenEhjQjAOBgNVHQ8BAf8EBAMCAQYwDwYDVR0TAQH/BAUwAwEB/zAKBggq +hkjOPQQDAwNpADBmAjEAvvNhzwIQHWSVB7gYboiFBS+DCBeQyh+KTOgNG3qxrdWB +CUfvO6wIBHxcmbHtRwfSAjEAnbpV/KlK6O3t5nYBQnvI+GDZjVGLVTv7jHvrZQnD ++JbNR6iC8hZVdyR+EhCVBCyj +-----END CERTIFICATE----- + +# Issuer: CN=emSign Root CA - C1 O=eMudhra Inc OU=emSign PKI +# Subject: CN=emSign Root CA - C1 O=eMudhra Inc OU=emSign PKI +# Label: "emSign Root CA - C1" +# Serial: 825510296613316004955058 +# MD5 Fingerprint: d8:e3:5d:01:21:fa:78:5a:b0:df:ba:d2:ee:2a:5f:68 +# SHA1 Fingerprint: e7:2e:f1:df:fc:b2:09:28:cf:5d:d4:d5:67:37:b1:51:cb:86:4f:01 +# SHA256 Fingerprint: 12:56:09:aa:30:1d:a0:a2:49:b9:7a:82:39:cb:6a:34:21:6f:44:dc:ac:9f:39:54:b1:42:92:f2:e8:c8:60:8f +-----BEGIN CERTIFICATE----- +MIIDczCCAlugAwIBAgILAK7PALrEzzL4Q7IwDQYJKoZIhvcNAQELBQAwVjELMAkG +A1UEBhMCVVMxEzARBgNVBAsTCmVtU2lnbiBQS0kxFDASBgNVBAoTC2VNdWRocmEg +SW5jMRwwGgYDVQQDExNlbVNpZ24gUm9vdCBDQSAtIEMxMB4XDTE4MDIxODE4MzAw +MFoXDTQzMDIxODE4MzAwMFowVjELMAkGA1UEBhMCVVMxEzARBgNVBAsTCmVtU2ln +biBQS0kxFDASBgNVBAoTC2VNdWRocmEgSW5jMRwwGgYDVQQDExNlbVNpZ24gUm9v +dCBDQSAtIEMxMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAz+upufGZ +BczYKCFK83M0UYRWEPWgTywS4/oTmifQz/l5GnRfHXk5/Fv4cI7gklL35CX5VIPZ +HdPIWoU/Xse2B+4+wM6ar6xWQio5JXDWv7V7Nq2s9nPczdcdioOl+yuQFTdrHCZH +3DspVpNqs8FqOp099cGXOFgFixwR4+S0uF2FHYP+eF8LRWgYSKVGczQ7/g/IdrvH +GPMF0Ybzhe3nudkyrVWIzqa2kbBPrH4VI5b2P/AgNBbeCsbEBEV5f6f9vtKppa+c +xSMq9zwhbL2vj07FOrLzNBL834AaSaTUqZX3noleoomslMuoaJuvimUnzYnu3Yy1 +aylwQ6BpC+S5DwIDAQABo0IwQDAdBgNVHQ4EFgQU/qHgcB4qAzlSWkK+XJGFehiq +TbUwDgYDVR0PAQH/BAQDAgEGMA8GA1UdEwEB/wQFMAMBAf8wDQYJKoZIhvcNAQEL +BQADggEBAMJKVvoVIXsoounlHfv4LcQ5lkFMOycsxGwYFYDGrK9HWS8mC+M2sO87 +/kOXSTKZEhVb3xEp/6tT+LvBeA+snFOvV71ojD1pM/CjoCNjO2RnIkSt1XHLVip4 +kqNPEjE2NuLe/gDEo2APJ62gsIq1NnpSob0n9CAnYuhNlCQT5AoE6TyrLshDCUrG +YQTlSTR+08TI9Q/Aqum6VF7zYytPT1DU/rl7mYw9wC68AivTxEDkigcxHpvOJpkT ++xHqmiIMERnHXhuBUDDIlhJu58tBf5E7oke3VIAb3ADMmpDqw8NQBmIMMMAVSKeo +WXzhriKi4gp6D/piq1JM4fHfyr6DDUI= +-----END CERTIFICATE----- + +# Issuer: CN=emSign ECC Root CA - C3 O=eMudhra Inc OU=emSign PKI +# Subject: CN=emSign ECC Root CA - C3 O=eMudhra Inc OU=emSign PKI +# Label: "emSign ECC Root CA - C3" +# Serial: 582948710642506000014504 +# MD5 Fingerprint: 3e:53:b3:a3:81:ee:d7:10:f8:d3:b0:1d:17:92:f5:d5 +# SHA1 Fingerprint: b6:af:43:c2:9b:81:53:7d:f6:ef:6b:c3:1f:1f:60:15:0c:ee:48:66 +# SHA256 Fingerprint: bc:4d:80:9b:15:18:9d:78:db:3e:1d:8c:f4:f9:72:6a:79:5d:a1:64:3c:a5:f1:35:8e:1d:db:0e:dc:0d:7e:b3 +-----BEGIN CERTIFICATE----- +MIICKzCCAbGgAwIBAgIKe3G2gla4EnycqDAKBggqhkjOPQQDAzBaMQswCQYDVQQG +EwJVUzETMBEGA1UECxMKZW1TaWduIFBLSTEUMBIGA1UEChMLZU11ZGhyYSBJbmMx +IDAeBgNVBAMTF2VtU2lnbiBFQ0MgUm9vdCBDQSAtIEMzMB4XDTE4MDIxODE4MzAw +MFoXDTQzMDIxODE4MzAwMFowWjELMAkGA1UEBhMCVVMxEzARBgNVBAsTCmVtU2ln +biBQS0kxFDASBgNVBAoTC2VNdWRocmEgSW5jMSAwHgYDVQQDExdlbVNpZ24gRUND +IFJvb3QgQ0EgLSBDMzB2MBAGByqGSM49AgEGBSuBBAAiA2IABP2lYa57JhAd6bci +MK4G9IGzsUJxlTm801Ljr6/58pc1kjZGDoeVjbk5Wum739D+yAdBPLtVb4Ojavti +sIGJAnB9SMVK4+kiVCJNk7tCDK93nCOmfddhEc5lx/h//vXyqaNCMEAwHQYDVR0O +BBYEFPtaSNCAIEDyqOkAB2kZd6fmw/TPMA4GA1UdDwEB/wQEAwIBBjAPBgNVHRMB +Af8EBTADAQH/MAoGCCqGSM49BAMDA2gAMGUCMQC02C8Cif22TGK6Q04ThHK1rt0c +3ta13FaPWEBaLd4gTCKDypOofu4SQMfWh0/434UCMBwUZOR8loMRnLDRWmFLpg9J +0wD8ofzkpf9/rdcw0Md3f76BB1UwUCAU9Vc4CqgxUQ== +-----END CERTIFICATE----- + +# Issuer: CN=Hongkong Post Root CA 3 O=Hongkong Post +# Subject: CN=Hongkong Post Root CA 3 O=Hongkong Post +# Label: 
"Hongkong Post Root CA 3" +# Serial: 46170865288971385588281144162979347873371282084 +# MD5 Fingerprint: 11:fc:9f:bd:73:30:02:8a:fd:3f:f3:58:b9:cb:20:f0 +# SHA1 Fingerprint: 58:a2:d0:ec:20:52:81:5b:c1:f3:f8:64:02:24:4e:c2:8e:02:4b:02 +# SHA256 Fingerprint: 5a:2f:c0:3f:0c:83:b0:90:bb:fa:40:60:4b:09:88:44:6c:76:36:18:3d:f9:84:6e:17:10:1a:44:7f:b8:ef:d6 +-----BEGIN CERTIFICATE----- +MIIFzzCCA7egAwIBAgIUCBZfikyl7ADJk0DfxMauI7gcWqQwDQYJKoZIhvcNAQEL +BQAwbzELMAkGA1UEBhMCSEsxEjAQBgNVBAgTCUhvbmcgS29uZzESMBAGA1UEBxMJ +SG9uZyBLb25nMRYwFAYDVQQKEw1Ib25na29uZyBQb3N0MSAwHgYDVQQDExdIb25n +a29uZyBQb3N0IFJvb3QgQ0EgMzAeFw0xNzA2MDMwMjI5NDZaFw00MjA2MDMwMjI5 +NDZaMG8xCzAJBgNVBAYTAkhLMRIwEAYDVQQIEwlIb25nIEtvbmcxEjAQBgNVBAcT +CUhvbmcgS29uZzEWMBQGA1UEChMNSG9uZ2tvbmcgUG9zdDEgMB4GA1UEAxMXSG9u +Z2tvbmcgUG9zdCBSb290IENBIDMwggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAwggIK +AoICAQCziNfqzg8gTr7m1gNt7ln8wlffKWihgw4+aMdoWJwcYEuJQwy51BWy7sFO +dem1p+/l6TWZ5Mwc50tfjTMwIDNT2aa71T4Tjukfh0mtUC1Qyhi+AViiE3CWu4mI +VoBc+L0sPOFMV4i707mV78vH9toxdCim5lSJ9UExyuUmGs2C4HDaOym71QP1mbpV +9WTRYA6ziUm4ii8F0oRFKHyPaFASePwLtVPLwpgchKOesL4jpNrcyCse2m5FHomY +2vkALgbpDDtw1VAliJnLzXNg99X/NWfFobxeq81KuEXryGgeDQ0URhLj0mRiikKY +vLTGCAj4/ahMZJx2Ab0vqWwzD9g/KLg8aQFChn5pwckGyuV6RmXpwtZQQS4/t+Tt +bNe/JgERohYpSms0BpDsE9K2+2p20jzt8NYt3eEV7KObLyzJPivkaTv/ciWxNoZb +x39ri1UbSsUgYT2uy1DhCDq+sI9jQVMwCFk8mB13umOResoQUGC/8Ne8lYePl8X+ +l2oBlKN8W4UdKjk60FSh0Tlxnf0h+bV78OLgAo9uliQlLKAeLKjEiafv7ZkGL7YK +TE/bosw3Gq9HhS2KX8Q0NEwA/RiTZxPRN+ZItIsGxVd7GYYKecsAyVKvQv83j+Gj +Hno9UKtjBucVtT+2RTeUN7F+8kjDf8V1/peNRY8apxpyKBpADwIDAQABo2MwYTAP +BgNVHRMBAf8EBTADAQH/MA4GA1UdDwEB/wQEAwIBBjAfBgNVHSMEGDAWgBQXnc0e +i9Y5K3DTXNSguB+wAPzFYTAdBgNVHQ4EFgQUF53NHovWOStw01zUoLgfsAD8xWEw +DQYJKoZIhvcNAQELBQADggIBAFbVe27mIgHSQpsY1Q7XZiNc4/6gx5LS6ZStS6LG +7BJ8dNVI0lkUmcDrudHr9EgwW62nV3OZqdPlt9EuWSRY3GguLmLYauRwCy0gUCCk +MpXRAJi70/33MvJJrsZ64Ee+bs7Lo3I6LWldy8joRTnU+kLBEUx3XZL7av9YROXr +gZ6voJmtvqkBZss4HTzfQx/0TW60uhdG/H39h4F5ag0zD/ov+BS5gLNdTaqX4fnk +GMX41TiMJjz98iji7lpJiCzfeT2OnpA8vUFKOt1b9pq0zj8lMH8yfaIDlNDceqFS +3m6TjRgm/VWsvY+b0s+v54Ysyx8Jb6NvqYTUc79NoXQbTiNg8swOqn+knEwlqLJm +Ozj/2ZQw9nKEvmhVEA/GcywWaZMH/rFF7buiVWqw2rVKAiUnhde3t4ZEFolsgCs+ +l6mc1X5VTMbeRRAc6uk7nwNT7u56AQIWeNTowr5GdogTPyK7SBIdUgC0An4hGh6c +JfTzPV4e0hz5sy229zdcxsshTrD3mUcYhcErulWuBurQB7Lcq9CClnXO0lD+mefP +L5/ndtFhKvshuzHQqp9HpLIiyhY6UFfEW0NnxWViA0kB60PZ2Pierc+xYw5F9KBa +LJstxabArahH9CdMOA0uG0k7UvToiIMrVCjU8jVStDKDYmlkDJGcn5fqdBb9HxEG +mpv0 +-----END CERTIFICATE----- + +# Issuer: CN=Entrust Root Certification Authority - G4 O=Entrust, Inc. OU=See www.entrust.net/legal-terms/(c) 2015 Entrust, Inc. - for authorized use only +# Subject: CN=Entrust Root Certification Authority - G4 O=Entrust, Inc. OU=See www.entrust.net/legal-terms/(c) 2015 Entrust, Inc. 
- for authorized use only +# Label: "Entrust Root Certification Authority - G4" +# Serial: 289383649854506086828220374796556676440 +# MD5 Fingerprint: 89:53:f1:83:23:b7:7c:8e:05:f1:8c:71:38:4e:1f:88 +# SHA1 Fingerprint: 14:88:4e:86:26:37:b0:26:af:59:62:5c:40:77:ec:35:29:ba:96:01 +# SHA256 Fingerprint: db:35:17:d1:f6:73:2a:2d:5a:b9:7c:53:3e:c7:07:79:ee:32:70:a6:2f:b4:ac:42:38:37:24:60:e6:f0:1e:88 +-----BEGIN CERTIFICATE----- +MIIGSzCCBDOgAwIBAgIRANm1Q3+vqTkPAAAAAFVlrVgwDQYJKoZIhvcNAQELBQAw +gb4xCzAJBgNVBAYTAlVTMRYwFAYDVQQKEw1FbnRydXN0LCBJbmMuMSgwJgYDVQQL +Ex9TZWUgd3d3LmVudHJ1c3QubmV0L2xlZ2FsLXRlcm1zMTkwNwYDVQQLEzAoYykg +MjAxNSBFbnRydXN0LCBJbmMuIC0gZm9yIGF1dGhvcml6ZWQgdXNlIG9ubHkxMjAw +BgNVBAMTKUVudHJ1c3QgUm9vdCBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eSAtIEc0 +MB4XDTE1MDUyNzExMTExNloXDTM3MTIyNzExNDExNlowgb4xCzAJBgNVBAYTAlVT +MRYwFAYDVQQKEw1FbnRydXN0LCBJbmMuMSgwJgYDVQQLEx9TZWUgd3d3LmVudHJ1 +c3QubmV0L2xlZ2FsLXRlcm1zMTkwNwYDVQQLEzAoYykgMjAxNSBFbnRydXN0LCBJ +bmMuIC0gZm9yIGF1dGhvcml6ZWQgdXNlIG9ubHkxMjAwBgNVBAMTKUVudHJ1c3Qg +Um9vdCBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eSAtIEc0MIICIjANBgkqhkiG9w0B +AQEFAAOCAg8AMIICCgKCAgEAsewsQu7i0TD/pZJH4i3DumSXbcr3DbVZwbPLqGgZ +2K+EbTBwXX7zLtJTmeH+H17ZSK9dE43b/2MzTdMAArzE+NEGCJR5WIoV3imz/f3E +T+iq4qA7ec2/a0My3dl0ELn39GjUu9CH1apLiipvKgS1sqbHoHrmSKvS0VnM1n4j +5pds8ELl3FFLFUHtSUrJ3hCX1nbB76W1NhSXNdh4IjVS70O92yfbYVaCNNzLiGAM +C1rlLAHGVK/XqsEQe9IFWrhAnoanw5CGAlZSCXqc0ieCU0plUmr1POeo8pyvi73T +DtTUXm6Hnmo9RR3RXRv06QqsYJn7ibT/mCzPfB3pAqoEmh643IhuJbNsZvc8kPNX +wbMv9W3y+8qh+CmdRouzavbmZwe+LGcKKh9asj5XxNMhIWNlUpEbsZmOeX7m640A +2Vqq6nPopIICR5b+W45UYaPrL0swsIsjdXJ8ITzI9vF01Bx7owVV7rtNOzK+mndm +nqxpkCIHH2E6lr7lmk/MBTwoWdPBDFSoWWG9yHJM6Nyfh3+9nEg2XpWjDrk4JFX8 +dWbrAuMINClKxuMrLzOg2qOGpRKX/YAr2hRC45K9PvJdXmd0LhyIRyk0X+IyqJwl +N4y6mACXi0mWHv0liqzc2thddG5msP9E36EYxr5ILzeUePiVSj9/E15dWf10hkNj +c0kCAwEAAaNCMEAwDwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMCAQYwHQYD +VR0OBBYEFJ84xFYjwznooHFs6FRM5Og6sb9nMA0GCSqGSIb3DQEBCwUAA4ICAQAS +5UKme4sPDORGpbZgQIeMJX6tuGguW8ZAdjwD+MlZ9POrYs4QjbRaZIxowLByQzTS +Gwv2LFPSypBLhmb8qoMi9IsabyZIrHZ3CL/FmFz0Jomee8O5ZDIBf9PD3Vht7LGr +hFV0d4QEJ1JrhkzO3bll/9bGXp+aEJlLdWr+aumXIOTkdnrG0CSqkM0gkLpHZPt/ +B7NTeLUKYvJzQ85BK4FqLoUWlFPUa19yIqtRLULVAJyZv967lDtX/Zr1hstWO1uI +AeV8KEsD+UmDfLJ/fOPtjqF/YFOOVZ1QNBIPt5d7bIdKROf1beyAN/BYGW5KaHbw +H5Lk6rWS02FREAutp9lfx1/cH6NcjKF+m7ee01ZvZl4HliDtC3T7Zk6LERXpgUl+ +b7DUUH8i119lAg2m9IUe2K4GS0qn0jFmwvjO5QimpAKWRGhXxNUzzxkvFMSUHHuk +2fCfDrGA4tGeEWSpiBE6doLlYsKA2KSD7ZPvfC+QsDJMlhVoSFLUmQjAJOgc47Ol +IQ6SwJAfzyBfyjs4x7dtOvPmRLgOMWuIjnDrnBdSqEGULoe256YSxXXfW8AKbnuk +5F6G+TaU33fD6Q3AOfF5u0aOq0NZJ7cguyPpVkAh7DE9ZapD8j3fcEThuk0mEDuY +n/PIjhs4ViFqUZPTkcpG2om3PVODLAgfi49T3f+sHw== +-----END CERTIFICATE----- diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/certifi/core.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/certifi/core.py new file mode 100644 index 0000000000000000000000000000000000000000..53404e133ff5a22e80bb3d87e37cac0d172c498e --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/certifi/core.py @@ -0,0 +1,15 @@ +# -*- coding: utf-8 -*- + +""" +certifi.py +~~~~~~~~~~ + +This module returns the installation location of cacert.pem. 
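+
+Note that this vendored copy appears to carry the Debian-style patch: the
+package directory is computed but ignored, and the hard-coded system bundle
+path is returned instead, so consumers such as requests pick up the
+distro-managed certificates. Illustrative check of that behaviour:
+
+    >>> import certifi
+    >>> certifi.where()
+    '/etc/ssl/certs/ca-certificates.crt'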
+""" +import os + + +def where(): + f = os.path.dirname(__file__) + + return '/etc/ssl/certs/ca-certificates.crt' diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cffi-1.14.5.dist-info/INSTALLER b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cffi-1.14.5.dist-info/INSTALLER new file mode 100644 index 0000000000000000000000000000000000000000..a1b589e38a32041e49332e5e81c2d363dc418d68 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cffi-1.14.5.dist-info/INSTALLER @@ -0,0 +1 @@ +pip diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cffi-1.14.5.dist-info/LICENSE b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cffi-1.14.5.dist-info/LICENSE new file mode 100644 index 0000000000000000000000000000000000000000..29225eee9edcd72c6a354550a5a3bedf1932b2ef --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cffi-1.14.5.dist-info/LICENSE @@ -0,0 +1,26 @@ + +Except when otherwise stated (look for LICENSE files in directories or +information at the beginning of each file) all software and +documentation is licensed as follows: + + The MIT License + + Permission is hereby granted, free of charge, to any person + obtaining a copy of this software and associated documentation + files (the "Software"), to deal in the Software without + restriction, including without limitation the rights to use, + copy, modify, merge, publish, distribute, sublicense, and/or + sell copies of the Software, and to permit persons to whom the + Software is furnished to do so, subject to the following conditions: + + The above copyright notice and this permission notice shall be included + in all copies or substantial portions of the Software. + + THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS + OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER + LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING + FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER + DEALINGS IN THE SOFTWARE. + diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cffi-1.14.5.dist-info/METADATA b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cffi-1.14.5.dist-info/METADATA new file mode 100644 index 0000000000000000000000000000000000000000..417da722da6484391c30736a4379b56e3b26f4d7 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cffi-1.14.5.dist-info/METADATA @@ -0,0 +1,37 @@ +Metadata-Version: 2.1 +Name: cffi +Version: 1.14.5 +Summary: Foreign Function Interface for Python calling C code. 
+Home-page: http://cffi.readthedocs.org +Author: Armin Rigo, Maciej Fijalkowski +Author-email: python-cffi@googlegroups.com +License: MIT +Platform: UNKNOWN +Classifier: Programming Language :: Python +Classifier: Programming Language :: Python :: 2 +Classifier: Programming Language :: Python :: 2.6 +Classifier: Programming Language :: Python :: 2.7 +Classifier: Programming Language :: Python :: 3 +Classifier: Programming Language :: Python :: 3.2 +Classifier: Programming Language :: Python :: 3.3 +Classifier: Programming Language :: Python :: 3.4 +Classifier: Programming Language :: Python :: 3.5 +Classifier: Programming Language :: Python :: 3.6 +Classifier: Programming Language :: Python :: Implementation :: CPython +Classifier: Programming Language :: Python :: Implementation :: PyPy +Classifier: License :: OSI Approved :: MIT License +Requires-Dist: pycparser + + +CFFI +==== + +Foreign Function Interface for Python calling C code. +Please see the `Documentation `_. + +Contact +------- + +`Mailing list `_ + + diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cffi-1.14.5.dist-info/RECORD b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cffi-1.14.5.dist-info/RECORD new file mode 100644 index 0000000000000000000000000000000000000000..187f18b00e33dff80278024c8e7d01f09f85dc65 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cffi-1.14.5.dist-info/RECORD @@ -0,0 +1,45 @@ +_cffi_backend.cpython-38-x86_64-linux-gnu.so,sha256=xaT4emFoLaKFKgxsPR-DzE_RnqPMEWbaY2DtdmdQrmQ,893296 +cffi-1.14.5.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4 +cffi-1.14.5.dist-info/LICENSE,sha256=BLgPWwd7vtaICM_rreteNSPyqMmpZJXFh72W3x6sKjM,1294 +cffi-1.14.5.dist-info/METADATA,sha256=9cQJcfX8MjM9nlAXlHcCe-YmRy7Ez9IsA3eSoOdYYWY,1191 +cffi-1.14.5.dist-info/RECORD,, +cffi-1.14.5.dist-info/WHEEL,sha256=Dh4w5P6PPWbqyqoE6MHlzbFQwZXlM-voWJDf2WUsS2g,108 +cffi-1.14.5.dist-info/entry_points.txt,sha256=Q9f5C9IpjYxo0d2PK9eUcnkgxHc9pHWwjEMaANPKNCI,76 +cffi-1.14.5.dist-info/top_level.txt,sha256=rE7WR3rZfNKxWI9-jn6hsHCAl7MDkB-FmuQbxWjFehQ,19 +cffi.libs/libffi-806b1a9d.so.6.0.4,sha256=0MxSFdTpPwKsulw1XmWPRPVreyFK3AIBBg7i6a3-rWY,46632 +cffi/__init__.py,sha256=xTe6YZU3-T1_hzu4x7LMs6afwrbhtwYaQ4oYCYTfy3M,513 +cffi/__pycache__/__init__.cpython-38.pyc,, +cffi/__pycache__/api.cpython-38.pyc,, +cffi/__pycache__/backend_ctypes.cpython-38.pyc,, +cffi/__pycache__/cffi_opcode.cpython-38.pyc,, +cffi/__pycache__/commontypes.cpython-38.pyc,, +cffi/__pycache__/cparser.cpython-38.pyc,, +cffi/__pycache__/error.cpython-38.pyc,, +cffi/__pycache__/ffiplatform.cpython-38.pyc,, +cffi/__pycache__/lock.cpython-38.pyc,, +cffi/__pycache__/model.cpython-38.pyc,, +cffi/__pycache__/pkgconfig.cpython-38.pyc,, +cffi/__pycache__/recompiler.cpython-38.pyc,, +cffi/__pycache__/setuptools_ext.cpython-38.pyc,, +cffi/__pycache__/vengine_cpy.cpython-38.pyc,, +cffi/__pycache__/vengine_gen.cpython-38.pyc,, +cffi/__pycache__/verifier.cpython-38.pyc,, +cffi/_cffi_errors.h,sha256=6nFQ-4dRQI1bXRoSeqdvyKU33TmutQJB_2fAhWSzdl8,3856 +cffi/_cffi_include.h,sha256=tKnA1rdSoPHp23FnDL1mDGwFo-Uj6fXfA6vA6kcoEUc,14800 +cffi/_embedding.h,sha256=vXP95nMKN_jwCGWenL3XugNPwa2Ko-yqqp0lA-Nh5-I,17581 +cffi/api.py,sha256=yxJalIePbr1mz_WxAHokSwyP5CVYde44m-nolHnbJNo,42064 +cffi/backend_ctypes.py,sha256=h5ZIzLc6BFVXnGyc9xPqZWUS7qGy7yFSDqXe68Sa8z4,42454 +cffi/cffi_opcode.py,sha256=v9RdD_ovA8rCtqsC95Ivki5V667rAOhGgs3fb2q9xpM,5724 
+cffi/commontypes.py,sha256=QS4uxCDI7JhtTyjh1hlnCA-gynmaszWxJaRRLGkJa1A,2689 +cffi/cparser.py,sha256=rO_1pELRw1gI1DE1m4gi2ik5JMfpxouAACLXpRPlVEA,44231 +cffi/error.py,sha256=v6xTiS4U0kvDcy4h_BDRo5v39ZQuj-IMRYLv5ETddZs,877 +cffi/ffiplatform.py,sha256=HMXqR8ks2wtdsNxGaWpQ_PyqIvtiuos_vf1qKCy-cwg,4046 +cffi/lock.py,sha256=l9TTdwMIMpi6jDkJGnQgE9cvTIR7CAntIJr8EGHt3pY,747 +cffi/model.py,sha256=_GH_UF1Rn9vC4AvmgJm6qj7RUXXG3eqKPc8bPxxyBKE,21768 +cffi/parse_c_type.h,sha256=OdwQfwM9ktq6vlCB43exFQmxDBtj2MBNdK8LYl15tjw,5976 +cffi/pkgconfig.py,sha256=LP1w7vmWvmKwyqLaU1Z243FOWGNQMrgMUZrvgFuOlco,4374 +cffi/recompiler.py,sha256=7OBdKr0dAzRnEbgQvjCAikoFAygjTvitaJHdRGc1k24,64568 +cffi/setuptools_ext.py,sha256=RUR17N5f8gpiQBBlXL34P9FtOu1mhHIaAf3WJlg5S4I,8931 +cffi/vengine_cpy.py,sha256=YglN8YS-UaHEv2k2cxgotNWE87dHX20-68EyKoiKUYA,43320 +cffi/vengine_gen.py,sha256=5dX7s1DU6pTBOMI6oTVn_8Bnmru_lj932B6b4v29Hlg,26684 +cffi/verifier.py,sha256=J9Enz2rbJb9CHPqWlWQ5uQESoyr0uc7MNWugchjXBv4,11207 diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cffi-1.14.5.dist-info/WHEEL b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cffi-1.14.5.dist-info/WHEEL new file mode 100644 index 0000000000000000000000000000000000000000..69d594f055a5127401ebe017f8837cef4c76c020 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cffi-1.14.5.dist-info/WHEEL @@ -0,0 +1,5 @@ +Wheel-Version: 1.0 +Generator: bdist_wheel (0.36.2) +Root-Is-Purelib: false +Tag: cp38-cp38-manylinux1_x86_64 + diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cffi-1.14.5.dist-info/entry_points.txt b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cffi-1.14.5.dist-info/entry_points.txt new file mode 100644 index 0000000000000000000000000000000000000000..eee7e0fb1f493ea015a66ea46ae9c05c51ec5e18 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cffi-1.14.5.dist-info/entry_points.txt @@ -0,0 +1,3 @@ +[distutils.setup_keywords] +cffi_modules = cffi.setuptools_ext:cffi_modules + diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cffi-1.14.5.dist-info/top_level.txt b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cffi-1.14.5.dist-info/top_level.txt new file mode 100644 index 0000000000000000000000000000000000000000..f64577957eb0d893196994ae517759f3fa8e48dd --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cffi-1.14.5.dist-info/top_level.txt @@ -0,0 +1,2 @@ +_cffi_backend +cffi diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cffi.libs/libffi-806b1a9d.so.6.0.4 b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cffi.libs/libffi-806b1a9d.so.6.0.4 new file mode 100755 index 0000000000000000000000000000000000000000..7e0b3012805ee9dd64076988820be79345407832 Binary files /dev/null and b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cffi.libs/libffi-806b1a9d.so.6.0.4 differ diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cffi/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cffi/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..b79c21f91d3cad22e1a07caad0960a2d0cfab388 --- 
/dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cffi/__init__.py @@ -0,0 +1,14 @@ +__all__ = ['FFI', 'VerificationError', 'VerificationMissing', 'CDefError', + 'FFIError'] + +from .api import FFI +from .error import CDefError, FFIError, VerificationError, VerificationMissing +from .error import PkgConfigError + +__version__ = "1.14.5" +__version_info__ = (1, 14, 5) + +# The verifier module file names are based on the CRC32 of a string that +# contains the following version number. It may be older than __version__ +# if nothing is clearly incompatible. +__version_verifier_modules__ = "0.8.6" diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cffi/_cffi_errors.h b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cffi/_cffi_errors.h new file mode 100644 index 0000000000000000000000000000000000000000..83cdad068d071fc4b0205051aae3c5b34416b29d --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cffi/_cffi_errors.h @@ -0,0 +1,147 @@ +#ifndef CFFI_MESSAGEBOX +# ifdef _MSC_VER +# define CFFI_MESSAGEBOX 1 +# else +# define CFFI_MESSAGEBOX 0 +# endif +#endif + + +#if CFFI_MESSAGEBOX +/* Windows only: logic to take the Python-CFFI embedding logic + initialization errors and display them in a background thread + with MessageBox. The idea is that if the whole program closes + as a result of this problem, then likely it is already a console + program and you can read the stderr output in the console too. + If it is not a console program, then it will likely show its own + dialog to complain, or generally not abruptly close, and for this + case the background thread should stay alive. +*/ +static void *volatile _cffi_bootstrap_text; + +static PyObject *_cffi_start_error_capture(void) +{ + PyObject *result = NULL; + PyObject *x, *m, *bi; + + if (InterlockedCompareExchangePointer(&_cffi_bootstrap_text, + (void *)1, NULL) != NULL) + return (PyObject *)1; + + m = PyImport_AddModule("_cffi_error_capture"); + if (m == NULL) + goto error; + + result = PyModule_GetDict(m); + if (result == NULL) + goto error; + +#if PY_MAJOR_VERSION >= 3 + bi = PyImport_ImportModule("builtins"); +#else + bi = PyImport_ImportModule("__builtin__"); +#endif + if (bi == NULL) + goto error; + PyDict_SetItemString(result, "__builtins__", bi); + Py_DECREF(bi); + + x = PyRun_String( + "import sys\n" + "class FileLike:\n" + " def write(self, x):\n" + " try:\n" + " of.write(x)\n" + " except: pass\n" + " self.buf += x\n" + "fl = FileLike()\n" + "fl.buf = ''\n" + "of = sys.stderr\n" + "sys.stderr = fl\n" + "def done():\n" + " sys.stderr = of\n" + " return fl.buf\n", /* make sure the returned value stays alive */ + Py_file_input, + result, result); + Py_XDECREF(x); + + error: + if (PyErr_Occurred()) + { + PyErr_WriteUnraisable(Py_None); + PyErr_Clear(); + } + return result; +} + +#pragma comment(lib, "user32.lib") + +static DWORD WINAPI _cffi_bootstrap_dialog(LPVOID ignored) +{ + Sleep(666); /* may be interrupted if the whole process is closing */ +#if PY_MAJOR_VERSION >= 3 + MessageBoxW(NULL, (wchar_t *)_cffi_bootstrap_text, + L"Python-CFFI error", + MB_OK | MB_ICONERROR); +#else + MessageBoxA(NULL, (char *)_cffi_bootstrap_text, + "Python-CFFI error", + MB_OK | MB_ICONERROR); +#endif + _cffi_bootstrap_text = NULL; + return 0; +} + +static void _cffi_stop_error_capture(PyObject *ecap) +{ + PyObject *s; + void *text; + + if (ecap == (PyObject *)1) + return; + + if (ecap == 
NULL) + goto error; + + s = PyRun_String("done()", Py_eval_input, ecap, ecap); + if (s == NULL) + goto error; + + /* Show a dialog box, but in a background thread, and + never show multiple dialog boxes at once. */ +#if PY_MAJOR_VERSION >= 3 + text = PyUnicode_AsWideCharString(s, NULL); +#else + text = PyString_AsString(s); +#endif + + _cffi_bootstrap_text = text; + + if (text != NULL) + { + HANDLE h; + h = CreateThread(NULL, 0, _cffi_bootstrap_dialog, + NULL, 0, NULL); + if (h != NULL) + CloseHandle(h); + } + /* decref the string, but it should stay alive as 'fl.buf' + in the small module above. It will really be freed only if + we later get another similar error. So it's a leak of at + most one copy of the small module. That's fine for this + situation which is usually a "fatal error" anyway. */ + Py_DECREF(s); + PyErr_Clear(); + return; + + error: + _cffi_bootstrap_text = NULL; + PyErr_Clear(); +} + +#else + +static PyObject *_cffi_start_error_capture(void) { return NULL; } +static void _cffi_stop_error_capture(PyObject *ecap) { } + +#endif diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cffi/_cffi_include.h b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cffi/_cffi_include.h new file mode 100644 index 0000000000000000000000000000000000000000..e4c0a672405298ddb3dcb2e2ca6da9eea3d2e162 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cffi/_cffi_include.h @@ -0,0 +1,385 @@ +#define _CFFI_ + +/* We try to define Py_LIMITED_API before including Python.h. + + Mess: we can only define it if Py_DEBUG, Py_TRACE_REFS and + Py_REF_DEBUG are not defined. This is a best-effort approximation: + we can learn about Py_DEBUG from pyconfig.h, but it is unclear if + the same works for the other two macros. Py_DEBUG implies them, + but not the other way around. + + The implementation is messy (issue #350): on Windows, with _MSC_VER, + we have to define Py_LIMITED_API even before including pyconfig.h. + In that case, we guess what pyconfig.h will do to the macros above, + and check our guess after the #include. + + Note that on Windows, with CPython 3.x, you need >= 3.5 and virtualenv + version >= 16.0.0. With older versions of either, you don't get a + copy of PYTHON3.DLL in the virtualenv. We can't check the version of + CPython *before* we even include pyconfig.h. ffi.set_source() puts + a ``#define _CFFI_NO_LIMITED_API'' at the start of this file if it is + running on Windows < 3.5, as an attempt at fixing it, but that's + arguably wrong because it may not be the target version of Python. + Still better than nothing I guess. As another workaround, you can + remove the definition of Py_LIMITED_API here. + + See also 'py_limited_api' in cffi/setuptools_ext.py. +*/ +#if !defined(_CFFI_USE_EMBEDDING) && !defined(Py_LIMITED_API) +# ifdef _MSC_VER +# if !defined(_DEBUG) && !defined(Py_DEBUG) && !defined(Py_TRACE_REFS) && !defined(Py_REF_DEBUG) && !defined(_CFFI_NO_LIMITED_API) +# define Py_LIMITED_API +# endif +# include + /* sanity-check: Py_LIMITED_API will cause crashes if any of these + are also defined. Normally, the Python file PC/pyconfig.h does not + cause any of these to be defined, with the exception that _DEBUG + causes Py_DEBUG. Double-check that. 
*/ +# ifdef Py_LIMITED_API +# if defined(Py_DEBUG) +# error "pyconfig.h unexpectedly defines Py_DEBUG, but Py_LIMITED_API is set" +# endif +# if defined(Py_TRACE_REFS) +# error "pyconfig.h unexpectedly defines Py_TRACE_REFS, but Py_LIMITED_API is set" +# endif +# if defined(Py_REF_DEBUG) +# error "pyconfig.h unexpectedly defines Py_REF_DEBUG, but Py_LIMITED_API is set" +# endif +# endif +# else +# include +# if !defined(Py_DEBUG) && !defined(Py_TRACE_REFS) && !defined(Py_REF_DEBUG) && !defined(_CFFI_NO_LIMITED_API) +# define Py_LIMITED_API +# endif +# endif +#endif + +#include +#ifdef __cplusplus +extern "C" { +#endif +#include +#include "parse_c_type.h" + +/* this block of #ifs should be kept exactly identical between + c/_cffi_backend.c, cffi/vengine_cpy.py, cffi/vengine_gen.py + and cffi/_cffi_include.h */ +#if defined(_MSC_VER) +# include /* for alloca() */ +# if _MSC_VER < 1600 /* MSVC < 2010 */ + typedef __int8 int8_t; + typedef __int16 int16_t; + typedef __int32 int32_t; + typedef __int64 int64_t; + typedef unsigned __int8 uint8_t; + typedef unsigned __int16 uint16_t; + typedef unsigned __int32 uint32_t; + typedef unsigned __int64 uint64_t; + typedef __int8 int_least8_t; + typedef __int16 int_least16_t; + typedef __int32 int_least32_t; + typedef __int64 int_least64_t; + typedef unsigned __int8 uint_least8_t; + typedef unsigned __int16 uint_least16_t; + typedef unsigned __int32 uint_least32_t; + typedef unsigned __int64 uint_least64_t; + typedef __int8 int_fast8_t; + typedef __int16 int_fast16_t; + typedef __int32 int_fast32_t; + typedef __int64 int_fast64_t; + typedef unsigned __int8 uint_fast8_t; + typedef unsigned __int16 uint_fast16_t; + typedef unsigned __int32 uint_fast32_t; + typedef unsigned __int64 uint_fast64_t; + typedef __int64 intmax_t; + typedef unsigned __int64 uintmax_t; +# else +# include +# endif +# if _MSC_VER < 1800 /* MSVC < 2013 */ +# ifndef __cplusplus + typedef unsigned char _Bool; +# endif +# endif +#else +# include +# if (defined (__SVR4) && defined (__sun)) || defined(_AIX) || defined(__hpux) +# include +# endif +#endif + +#ifdef __GNUC__ +# define _CFFI_UNUSED_FN __attribute__((unused)) +#else +# define _CFFI_UNUSED_FN /* nothing */ +#endif + +#ifdef __cplusplus +# ifndef _Bool + typedef bool _Bool; /* semi-hackish: C++ has no _Bool; bool is builtin */ +# endif +#endif + +/********** CPython-specific section **********/ +#ifndef PYPY_VERSION + + +#if PY_MAJOR_VERSION >= 3 +# define PyInt_FromLong PyLong_FromLong +#endif + +#define _cffi_from_c_double PyFloat_FromDouble +#define _cffi_from_c_float PyFloat_FromDouble +#define _cffi_from_c_long PyInt_FromLong +#define _cffi_from_c_ulong PyLong_FromUnsignedLong +#define _cffi_from_c_longlong PyLong_FromLongLong +#define _cffi_from_c_ulonglong PyLong_FromUnsignedLongLong +#define _cffi_from_c__Bool PyBool_FromLong + +#define _cffi_to_c_double PyFloat_AsDouble +#define _cffi_to_c_float PyFloat_AsDouble + +#define _cffi_from_c_int(x, type) \ + (((type)-1) > 0 ? /* unsigned */ \ + (sizeof(type) < sizeof(long) ? \ + PyInt_FromLong((long)x) : \ + sizeof(type) == sizeof(long) ? \ + PyLong_FromUnsignedLong((unsigned long)x) : \ + PyLong_FromUnsignedLongLong((unsigned long long)x)) : \ + (sizeof(type) <= sizeof(long) ? \ + PyInt_FromLong((long)x) : \ + PyLong_FromLongLong((long long)x))) + +#define _cffi_to_c_int(o, type) \ + ((type)( \ + sizeof(type) == 1 ? (((type)-1) > 0 ? (type)_cffi_to_c_u8(o) \ + : (type)_cffi_to_c_i8(o)) : \ + sizeof(type) == 2 ? (((type)-1) > 0 ? 
(type)_cffi_to_c_u16(o) \ + : (type)_cffi_to_c_i16(o)) : \ + sizeof(type) == 4 ? (((type)-1) > 0 ? (type)_cffi_to_c_u32(o) \ + : (type)_cffi_to_c_i32(o)) : \ + sizeof(type) == 8 ? (((type)-1) > 0 ? (type)_cffi_to_c_u64(o) \ + : (type)_cffi_to_c_i64(o)) : \ + (Py_FatalError("unsupported size for type " #type), (type)0))) + +#define _cffi_to_c_i8 \ + ((int(*)(PyObject *))_cffi_exports[1]) +#define _cffi_to_c_u8 \ + ((int(*)(PyObject *))_cffi_exports[2]) +#define _cffi_to_c_i16 \ + ((int(*)(PyObject *))_cffi_exports[3]) +#define _cffi_to_c_u16 \ + ((int(*)(PyObject *))_cffi_exports[4]) +#define _cffi_to_c_i32 \ + ((int(*)(PyObject *))_cffi_exports[5]) +#define _cffi_to_c_u32 \ + ((unsigned int(*)(PyObject *))_cffi_exports[6]) +#define _cffi_to_c_i64 \ + ((long long(*)(PyObject *))_cffi_exports[7]) +#define _cffi_to_c_u64 \ + ((unsigned long long(*)(PyObject *))_cffi_exports[8]) +#define _cffi_to_c_char \ + ((int(*)(PyObject *))_cffi_exports[9]) +#define _cffi_from_c_pointer \ + ((PyObject *(*)(char *, struct _cffi_ctypedescr *))_cffi_exports[10]) +#define _cffi_to_c_pointer \ + ((char *(*)(PyObject *, struct _cffi_ctypedescr *))_cffi_exports[11]) +#define _cffi_get_struct_layout \ + not used any more +#define _cffi_restore_errno \ + ((void(*)(void))_cffi_exports[13]) +#define _cffi_save_errno \ + ((void(*)(void))_cffi_exports[14]) +#define _cffi_from_c_char \ + ((PyObject *(*)(char))_cffi_exports[15]) +#define _cffi_from_c_deref \ + ((PyObject *(*)(char *, struct _cffi_ctypedescr *))_cffi_exports[16]) +#define _cffi_to_c \ + ((int(*)(char *, struct _cffi_ctypedescr *, PyObject *))_cffi_exports[17]) +#define _cffi_from_c_struct \ + ((PyObject *(*)(char *, struct _cffi_ctypedescr *))_cffi_exports[18]) +#define _cffi_to_c_wchar_t \ + ((_cffi_wchar_t(*)(PyObject *))_cffi_exports[19]) +#define _cffi_from_c_wchar_t \ + ((PyObject *(*)(_cffi_wchar_t))_cffi_exports[20]) +#define _cffi_to_c_long_double \ + ((long double(*)(PyObject *))_cffi_exports[21]) +#define _cffi_to_c__Bool \ + ((_Bool(*)(PyObject *))_cffi_exports[22]) +#define _cffi_prepare_pointer_call_argument \ + ((Py_ssize_t(*)(struct _cffi_ctypedescr *, \ + PyObject *, char **))_cffi_exports[23]) +#define _cffi_convert_array_from_object \ + ((int(*)(char *, struct _cffi_ctypedescr *, PyObject *))_cffi_exports[24]) +#define _CFFI_CPIDX 25 +#define _cffi_call_python \ + ((void(*)(struct _cffi_externpy_s *, char *))_cffi_exports[_CFFI_CPIDX]) +#define _cffi_to_c_wchar3216_t \ + ((int(*)(PyObject *))_cffi_exports[26]) +#define _cffi_from_c_wchar3216_t \ + ((PyObject *(*)(int))_cffi_exports[27]) +#define _CFFI_NUM_EXPORTS 28 + +struct _cffi_ctypedescr; + +static void *_cffi_exports[_CFFI_NUM_EXPORTS]; + +#define _cffi_type(index) ( \ + assert((((uintptr_t)_cffi_types[index]) & 1) == 0), \ + (struct _cffi_ctypedescr *)_cffi_types[index]) + +static PyObject *_cffi_init(const char *module_name, Py_ssize_t version, + const struct _cffi_type_context_s *ctx) +{ + PyObject *module, *o_arg, *new_module; + void *raw[] = { + (void *)module_name, + (void *)version, + (void *)_cffi_exports, + (void *)ctx, + }; + + module = PyImport_ImportModule("_cffi_backend"); + if (module == NULL) + goto failure; + + o_arg = PyLong_FromVoidPtr((void *)raw); + if (o_arg == NULL) + goto failure; + + new_module = PyObject_CallMethod( + module, (char *)"_init_cffi_1_0_external_module", (char *)"O", o_arg); + + Py_DECREF(o_arg); + Py_DECREF(module); + return new_module; + + failure: + Py_XDECREF(module); + return NULL; +} + + +#ifdef HAVE_WCHAR_H +typedef wchar_t 
_cffi_wchar_t; +#else +typedef uint16_t _cffi_wchar_t; /* same random pick as _cffi_backend.c */ +#endif + +_CFFI_UNUSED_FN static uint16_t _cffi_to_c_char16_t(PyObject *o) +{ + if (sizeof(_cffi_wchar_t) == 2) + return (uint16_t)_cffi_to_c_wchar_t(o); + else + return (uint16_t)_cffi_to_c_wchar3216_t(o); +} + +_CFFI_UNUSED_FN static PyObject *_cffi_from_c_char16_t(uint16_t x) +{ + if (sizeof(_cffi_wchar_t) == 2) + return _cffi_from_c_wchar_t((_cffi_wchar_t)x); + else + return _cffi_from_c_wchar3216_t((int)x); +} + +_CFFI_UNUSED_FN static int _cffi_to_c_char32_t(PyObject *o) +{ + if (sizeof(_cffi_wchar_t) == 4) + return (int)_cffi_to_c_wchar_t(o); + else + return (int)_cffi_to_c_wchar3216_t(o); +} + +_CFFI_UNUSED_FN static PyObject *_cffi_from_c_char32_t(unsigned int x) +{ + if (sizeof(_cffi_wchar_t) == 4) + return _cffi_from_c_wchar_t((_cffi_wchar_t)x); + else + return _cffi_from_c_wchar3216_t((int)x); +} + +union _cffi_union_alignment_u { + unsigned char m_char; + unsigned short m_short; + unsigned int m_int; + unsigned long m_long; + unsigned long long m_longlong; + float m_float; + double m_double; + long double m_longdouble; +}; + +struct _cffi_freeme_s { + struct _cffi_freeme_s *next; + union _cffi_union_alignment_u alignment; +}; + +_CFFI_UNUSED_FN static int +_cffi_convert_array_argument(struct _cffi_ctypedescr *ctptr, PyObject *arg, + char **output_data, Py_ssize_t datasize, + struct _cffi_freeme_s **freeme) +{ + char *p; + if (datasize < 0) + return -1; + + p = *output_data; + if (p == NULL) { + struct _cffi_freeme_s *fp = (struct _cffi_freeme_s *)PyObject_Malloc( + offsetof(struct _cffi_freeme_s, alignment) + (size_t)datasize); + if (fp == NULL) + return -1; + fp->next = *freeme; + *freeme = fp; + p = *output_data = (char *)&fp->alignment; + } + memset((void *)p, 0, (size_t)datasize); + return _cffi_convert_array_from_object(p, ctptr, arg); +} + +_CFFI_UNUSED_FN static void +_cffi_free_array_arguments(struct _cffi_freeme_s *freeme) +{ + do { + void *p = (void *)freeme; + freeme = freeme->next; + PyObject_Free(p); + } while (freeme != NULL); +} + +/********** end CPython-specific section **********/ +#else +_CFFI_UNUSED_FN +static void (*_cffi_call_python_org)(struct _cffi_externpy_s *, char *); +# define _cffi_call_python _cffi_call_python_org +#endif + + +#define _cffi_array_len(array) (sizeof(array) / sizeof((array)[0])) + +#define _cffi_prim_int(size, sign) \ + ((size) == 1 ? ((sign) ? _CFFI_PRIM_INT8 : _CFFI_PRIM_UINT8) : \ + (size) == 2 ? ((sign) ? _CFFI_PRIM_INT16 : _CFFI_PRIM_UINT16) : \ + (size) == 4 ? ((sign) ? _CFFI_PRIM_INT32 : _CFFI_PRIM_UINT32) : \ + (size) == 8 ? ((sign) ? _CFFI_PRIM_INT64 : _CFFI_PRIM_UINT64) : \ + _CFFI__UNKNOWN_PRIM) + +#define _cffi_prim_float(size) \ + ((size) == sizeof(float) ? _CFFI_PRIM_FLOAT : \ + (size) == sizeof(double) ? _CFFI_PRIM_DOUBLE : \ + (size) == sizeof(long double) ? 
_CFFI__UNKNOWN_LONG_DOUBLE : \ + _CFFI__UNKNOWN_FLOAT_PRIM) + +#define _cffi_check_int(got, got_nonpos, expected) \ + ((got_nonpos) == (expected <= 0) && \ + (got) == (unsigned long long)expected) + +#ifdef MS_WIN32 +# define _cffi_stdcall __stdcall +#else +# define _cffi_stdcall /* nothing */ +#endif + +#ifdef __cplusplus +} +#endif diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cffi/_embedding.h b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cffi/_embedding.h new file mode 100644 index 0000000000000000000000000000000000000000..c36d79342d4b5a048fcd1e8deaa918fcd3b938a0 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cffi/_embedding.h @@ -0,0 +1,527 @@ + +/***** Support code for embedding *****/ + +#ifdef __cplusplus +extern "C" { +#endif + + +#if defined(_WIN32) +# define CFFI_DLLEXPORT __declspec(dllexport) +#elif defined(__GNUC__) +# define CFFI_DLLEXPORT __attribute__((visibility("default"))) +#else +# define CFFI_DLLEXPORT /* nothing */ +#endif + + +/* There are two global variables of type _cffi_call_python_fnptr: + + * _cffi_call_python, which we declare just below, is the one called + by ``extern "Python"`` implementations. + + * _cffi_call_python_org, which on CPython is actually part of the + _cffi_exports[] array, is the function pointer copied from + _cffi_backend. + + After initialization is complete, both are equal. However, the + first one remains equal to &_cffi_start_and_call_python until the + very end of initialization, when we are (or should be) sure that + concurrent threads also see a completely initialized world, and + only then is it changed. +*/ +#undef _cffi_call_python +typedef void (*_cffi_call_python_fnptr)(struct _cffi_externpy_s *, char *); +static void _cffi_start_and_call_python(struct _cffi_externpy_s *, char *); +static _cffi_call_python_fnptr _cffi_call_python = &_cffi_start_and_call_python; + + +#ifndef _MSC_VER + /* --- Assuming a GCC not infinitely old --- */ +# define cffi_compare_and_swap(l,o,n) __sync_bool_compare_and_swap(l,o,n) +# define cffi_write_barrier() __sync_synchronize() +# if !defined(__amd64__) && !defined(__x86_64__) && \ + !defined(__i386__) && !defined(__i386) +# define cffi_read_barrier() __sync_synchronize() +# else +# define cffi_read_barrier() (void)0 +# endif +#else + /* --- Windows threads version --- */ +# include +# define cffi_compare_and_swap(l,o,n) \ + (InterlockedCompareExchangePointer(l,n,o) == (o)) +# define cffi_write_barrier() InterlockedCompareExchange(&_cffi_dummy,0,0) +# define cffi_read_barrier() (void)0 +static volatile LONG _cffi_dummy; +#endif + +#ifdef WITH_THREAD +# ifndef _MSC_VER +# include + static pthread_mutex_t _cffi_embed_startup_lock; +# else + static CRITICAL_SECTION _cffi_embed_startup_lock; +# endif + static char _cffi_embed_startup_lock_ready = 0; +#endif + +static void _cffi_acquire_reentrant_mutex(void) +{ + static void *volatile lock = NULL; + + while (!cffi_compare_and_swap(&lock, NULL, (void *)1)) { + /* should ideally do a spin loop instruction here, but + hard to do it portably and doesn't really matter I + think: pthread_mutex_init() should be very fast, and + this is only run at start-up anyway. 
*/ + } + +#ifdef WITH_THREAD + if (!_cffi_embed_startup_lock_ready) { +# ifndef _MSC_VER + pthread_mutexattr_t attr; + pthread_mutexattr_init(&attr); + pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE); + pthread_mutex_init(&_cffi_embed_startup_lock, &attr); +# else + InitializeCriticalSection(&_cffi_embed_startup_lock); +# endif + _cffi_embed_startup_lock_ready = 1; + } +#endif + + while (!cffi_compare_and_swap(&lock, (void *)1, NULL)) + ; + +#ifndef _MSC_VER + pthread_mutex_lock(&_cffi_embed_startup_lock); +#else + EnterCriticalSection(&_cffi_embed_startup_lock); +#endif +} + +static void _cffi_release_reentrant_mutex(void) +{ +#ifndef _MSC_VER + pthread_mutex_unlock(&_cffi_embed_startup_lock); +#else + LeaveCriticalSection(&_cffi_embed_startup_lock); +#endif +} + + +/********** CPython-specific section **********/ +#ifndef PYPY_VERSION + +#include "_cffi_errors.h" + + +#define _cffi_call_python_org _cffi_exports[_CFFI_CPIDX] + +PyMODINIT_FUNC _CFFI_PYTHON_STARTUP_FUNC(void); /* forward */ + +static void _cffi_py_initialize(void) +{ + /* XXX use initsigs=0, which "skips initialization registration of + signal handlers, which might be useful when Python is + embedded" according to the Python docs. But review and think + if it should be a user-controllable setting. + + XXX we should also give a way to write errors to a buffer + instead of to stderr. + + XXX if importing 'site' fails, CPython (any version) calls + exit(). Should we try to work around this behavior here? + */ + Py_InitializeEx(0); +} + +static int _cffi_initialize_python(void) +{ + /* This initializes Python, imports _cffi_backend, and then the + present .dll/.so is set up as a CPython C extension module. + */ + int result; + PyGILState_STATE state; + PyObject *pycode=NULL, *global_dict=NULL, *x; + PyObject *builtins; + + state = PyGILState_Ensure(); + + /* Call the initxxx() function from the present module. It will + create and initialize us as a CPython extension module, instead + of letting the startup Python code do it---it might reimport + the same .dll/.so and get maybe confused on some platforms. + It might also have troubles locating the .dll/.so again for all + I know. + */ + (void)_CFFI_PYTHON_STARTUP_FUNC(); + if (PyErr_Occurred()) + goto error; + + /* Now run the Python code provided to ffi.embedding_init_code(). + */ + pycode = Py_CompileString(_CFFI_PYTHON_STARTUP_CODE, + "", + Py_file_input); + if (pycode == NULL) + goto error; + global_dict = PyDict_New(); + if (global_dict == NULL) + goto error; + builtins = PyEval_GetBuiltins(); + if (builtins == NULL) + goto error; + if (PyDict_SetItemString(global_dict, "__builtins__", builtins) < 0) + goto error; + x = PyEval_EvalCode( +#if PY_MAJOR_VERSION < 3 + (PyCodeObject *) +#endif + pycode, global_dict, global_dict); + if (x == NULL) + goto error; + Py_DECREF(x); + + /* Done! Now if we've been called from + _cffi_start_and_call_python() in an ``extern "Python"``, we can + only hope that the Python code did correctly set up the + corresponding @ffi.def_extern() function. Otherwise, the + general logic of ``extern "Python"`` functions (inside the + _cffi_backend module) will find that the reference is still + missing and print an error. + */ + result = 0; + done: + Py_XDECREF(pycode); + Py_XDECREF(global_dict); + PyGILState_Release(state); + return result; + + error:; + { + /* Print as much information as potentially useful. 
+ Debugging load-time failures with embedding is not fun + */ + PyObject *ecap; + PyObject *exception, *v, *tb, *f, *modules, *mod; + PyErr_Fetch(&exception, &v, &tb); + ecap = _cffi_start_error_capture(); + f = PySys_GetObject((char *)"stderr"); + if (f != NULL && f != Py_None) { + PyFile_WriteString( + "Failed to initialize the Python-CFFI embedding logic:\n\n", f); + } + + if (exception != NULL) { + PyErr_NormalizeException(&exception, &v, &tb); + PyErr_Display(exception, v, tb); + } + Py_XDECREF(exception); + Py_XDECREF(v); + Py_XDECREF(tb); + + if (f != NULL && f != Py_None) { + PyFile_WriteString("\nFrom: " _CFFI_MODULE_NAME + "\ncompiled with cffi version: 1.14.5" + "\n_cffi_backend module: ", f); + modules = PyImport_GetModuleDict(); + mod = PyDict_GetItemString(modules, "_cffi_backend"); + if (mod == NULL) { + PyFile_WriteString("not loaded", f); + } + else { + v = PyObject_GetAttrString(mod, "__file__"); + PyFile_WriteObject(v, f, 0); + Py_XDECREF(v); + } + PyFile_WriteString("\nsys.path: ", f); + PyFile_WriteObject(PySys_GetObject((char *)"path"), f, 0); + PyFile_WriteString("\n\n", f); + } + _cffi_stop_error_capture(ecap); + } + result = -1; + goto done; +} + +#if PY_VERSION_HEX < 0x03080000 +PyAPI_DATA(char *) _PyParser_TokenNames[]; /* from CPython */ +#endif + +static int _cffi_carefully_make_gil(void) +{ + /* This does the basic initialization of Python. It can be called + completely concurrently from unrelated threads. It assumes + that we don't hold the GIL before (if it exists), and we don't + hold it afterwards. + + (What it really does used to be completely different in Python 2 + and Python 3, with the Python 2 solution avoiding the spin-lock + around the Py_InitializeEx() call. However, after recent changes + to CPython 2.7 (issue #358) it no longer works. So we use the + Python 3 solution everywhere.) + + This initializes Python by calling Py_InitializeEx(). + Important: this must not be called concurrently at all. + So we use a global variable as a simple spin lock. This global + variable must be from 'libpythonX.Y.so', not from this + cffi-based extension module, because it must be shared from + different cffi-based extension modules. + + In Python < 3.8, we choose + _PyParser_TokenNames[0] as a completely arbitrary pointer value + that is never written to. The default is to point to the + string "ENDMARKER". We change it temporarily to point to the + next character in that string. (Yes, I know it's REALLY + obscure.) + + In Python >= 3.8, this string array is no longer writable, so + instead we pick PyCapsuleType.tp_version_tag. We can't change + Python < 3.8 because someone might use a mixture of cffi + embedded modules, some of which were compiled before this file + changed. + */ + +#ifdef WITH_THREAD +# if PY_VERSION_HEX < 0x03080000 + char *volatile *lock = (char *volatile *)_PyParser_TokenNames; + char *old_value, *locked_value; + + while (1) { /* spin loop */ + old_value = *lock; + locked_value = old_value + 1; + if (old_value[0] == 'E') { + assert(old_value[1] == 'N'); + if (cffi_compare_and_swap(lock, old_value, locked_value)) + break; + } + else { + assert(old_value[0] == 'N'); + /* should ideally do a spin loop instruction here, but + hard to do it portably and doesn't really matter I + think: PyEval_InitThreads() should be very fast, and + this is only run at start-up anyway. 
*/ + } + } +# else + int volatile *lock = (int volatile *)&PyCapsule_Type.tp_version_tag; + int old_value, locked_value; + assert(!(PyCapsule_Type.tp_flags & Py_TPFLAGS_HAVE_VERSION_TAG)); + + while (1) { /* spin loop */ + old_value = *lock; + locked_value = -42; + if (old_value == 0) { + if (cffi_compare_and_swap(lock, old_value, locked_value)) + break; + } + else { + assert(old_value == locked_value); + /* should ideally do a spin loop instruction here, but + hard to do it portably and doesn't really matter I + think: PyEval_InitThreads() should be very fast, and + this is only run at start-up anyway. */ + } + } +# endif +#endif + + /* call Py_InitializeEx() */ + if (!Py_IsInitialized()) { + _cffi_py_initialize(); +#if PY_VERSION_HEX < 0x03070000 + PyEval_InitThreads(); +#endif + PyEval_SaveThread(); /* release the GIL */ + /* the returned tstate must be the one that has been stored into the + autoTLSkey by _PyGILState_Init() called from Py_Initialize(). */ + } + else { +#if PY_VERSION_HEX < 0x03070000 + /* PyEval_InitThreads() is always a no-op from CPython 3.7 */ + PyGILState_STATE state = PyGILState_Ensure(); + PyEval_InitThreads(); + PyGILState_Release(state); +#endif + } + +#ifdef WITH_THREAD + /* release the lock */ + while (!cffi_compare_and_swap(lock, locked_value, old_value)) + ; +#endif + + return 0; +} + +/********** end CPython-specific section **********/ + + +#else + + +/********** PyPy-specific section **********/ + +PyMODINIT_FUNC _CFFI_PYTHON_STARTUP_FUNC(const void *[]); /* forward */ + +static struct _cffi_pypy_init_s { + const char *name; + void *func; /* function pointer */ + const char *code; +} _cffi_pypy_init = { + _CFFI_MODULE_NAME, + _CFFI_PYTHON_STARTUP_FUNC, + _CFFI_PYTHON_STARTUP_CODE, +}; + +extern int pypy_carefully_make_gil(const char *); +extern int pypy_init_embedded_cffi_module(int, struct _cffi_pypy_init_s *); + +static int _cffi_carefully_make_gil(void) +{ + return pypy_carefully_make_gil(_CFFI_MODULE_NAME); +} + +static int _cffi_initialize_python(void) +{ + return pypy_init_embedded_cffi_module(0xB011, &_cffi_pypy_init); +} + +/********** end PyPy-specific section **********/ + + +#endif + + +#ifdef __GNUC__ +__attribute__((noinline)) +#endif +static _cffi_call_python_fnptr _cffi_start_python(void) +{ + /* Delicate logic to initialize Python. This function can be + called multiple times concurrently, e.g. when the process calls + its first ``extern "Python"`` functions in multiple threads at + once. It can also be called recursively, in which case we must + ignore it. We also have to consider what occurs if several + different cffi-based extensions reach this code in parallel + threads---it is a different copy of the code, then, and we + can't have any shared global variable unless it comes from + 'libpythonX.Y.so'. + + Idea: + + * _cffi_carefully_make_gil(): "carefully" call + PyEval_InitThreads() (possibly with Py_InitializeEx() first). + + * then we use a (local) custom lock to make sure that a call to this + cffi-based extension will wait if another call to the *same* + extension is running the initialization in another thread. + It is reentrant, so that a recursive call will not block, but + only one from a different thread. + + * then we grab the GIL and (Python 2) we call Py_InitializeEx(). + At this point, concurrent calls to Py_InitializeEx() are not + possible: we have the GIL. + + * do the rest of the specific initialization, which may + temporarily release the GIL but not the custom lock. + Only release the custom lock when we are done. 
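+
+       (As the comment on the helper at the end of this file also notes,
+       user C code may call cffi_start_python() once up front to run all
+       of the above eagerly, instead of paying the initialization cost on
+       the first ``extern "Python"`` call.)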
+ */ + static char called = 0; + + if (_cffi_carefully_make_gil() != 0) + return NULL; + + _cffi_acquire_reentrant_mutex(); + + /* Here the GIL exists, but we don't have it. We're only protected + from concurrency by the reentrant mutex. */ + + /* This file only initializes the embedded module once, the first + time this is called, even if there are subinterpreters. */ + if (!called) { + called = 1; /* invoke _cffi_initialize_python() only once, + but don't set '_cffi_call_python' right now, + otherwise concurrent threads won't call + this function at all (we need them to wait) */ + if (_cffi_initialize_python() == 0) { + /* now initialization is finished. Switch to the fast-path. */ + + /* We would like nobody to see the new value of + '_cffi_call_python' without also seeing the rest of the + data initialized. However, this is not possible. But + the new value of '_cffi_call_python' is the function + 'cffi_call_python()' from _cffi_backend. So: */ + cffi_write_barrier(); + /* ^^^ we put a write barrier here, and a corresponding + read barrier at the start of cffi_call_python(). This + ensures that after that read barrier, we see everything + done here before the write barrier. + */ + + assert(_cffi_call_python_org != NULL); + _cffi_call_python = (_cffi_call_python_fnptr)_cffi_call_python_org; + } + else { + /* initialization failed. Reset this to NULL, even if it was + already set to some other value. Future calls to + _cffi_start_python() are still forced to occur, and will + always return NULL from now on. */ + _cffi_call_python_org = NULL; + } + } + + _cffi_release_reentrant_mutex(); + + return (_cffi_call_python_fnptr)_cffi_call_python_org; +} + +static +void _cffi_start_and_call_python(struct _cffi_externpy_s *externpy, char *args) +{ + _cffi_call_python_fnptr fnptr; + int current_err = errno; +#ifdef _MSC_VER + int current_lasterr = GetLastError(); +#endif + fnptr = _cffi_start_python(); + if (fnptr == NULL) { + fprintf(stderr, "function %s() called, but initialization code " + "failed. Returning 0.\n", externpy->name); + memset(args, 0, externpy->size_of_result); + } +#ifdef _MSC_VER + SetLastError(current_lasterr); +#endif + errno = current_err; + + if (fnptr != NULL) + fnptr(externpy, args); +} + + +/* The cffi_start_python() function makes sure Python is initialized + and our cffi module is set up. It can be called manually from the + user C code. The same effect is obtained automatically from any + dll-exported ``extern "Python"`` function. This function returns + -1 if initialization failed, 0 if all is OK. */ +_CFFI_UNUSED_FN +static int cffi_start_python(void) +{ + if (_cffi_call_python == &_cffi_start_and_call_python) { + if (_cffi_start_python() == NULL) + return -1; + } + cffi_read_barrier(); + return 0; +} + +#undef cffi_compare_and_swap +#undef cffi_write_barrier +#undef cffi_read_barrier + +#ifdef __cplusplus +} +#endif diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cffi/api.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cffi/api.py new file mode 100644 index 0000000000000000000000000000000000000000..999a8aefc4af0b27120823116212b0afe8484aad --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cffi/api.py @@ -0,0 +1,965 @@ +import sys, types +from .lock import allocate_lock +from .error import CDefError +from . 
import model + +try: + callable +except NameError: + # Python 3.1 + from collections import Callable + callable = lambda x: isinstance(x, Callable) + +try: + basestring +except NameError: + # Python 3.x + basestring = str + +_unspecified = object() + + + +class FFI(object): + r''' + The main top-level class that you instantiate once, or once per module. + + Example usage: + + ffi = FFI() + ffi.cdef(""" + int printf(const char *, ...); + """) + + C = ffi.dlopen(None) # standard library + -or- + C = ffi.verify() # use a C compiler: verify the decl above is right + + C.printf("hello, %s!\n", ffi.new("char[]", "world")) + ''' + + def __init__(self, backend=None): + """Create an FFI instance. The 'backend' argument is used to + select a non-default backend, mostly for tests. + """ + if backend is None: + # You need PyPy (>= 2.0 beta), or a CPython (>= 2.6) with + # _cffi_backend.so compiled. + import _cffi_backend as backend + from . import __version__ + if backend.__version__ != __version__: + # bad version! Try to be as explicit as possible. + if hasattr(backend, '__file__'): + # CPython + raise Exception("Version mismatch: this is the 'cffi' package version %s, located in %r. When we import the top-level '_cffi_backend' extension module, we get version %s, located in %r. The two versions should be equal; check your installation." % ( + __version__, __file__, + backend.__version__, backend.__file__)) + else: + # PyPy + raise Exception("Version mismatch: this is the 'cffi' package version %s, located in %r. This interpreter comes with a built-in '_cffi_backend' module, which is version %s. The two versions should be equal; check your installation." % ( + __version__, __file__, backend.__version__)) + # (If you insist you can also try to pass the option + # 'backend=backend_ctypes.CTypesBackend()', but don't + # rely on it! It's probably not going to work well.) + + from . import cparser + self._backend = backend + self._lock = allocate_lock() + self._parser = cparser.Parser() + self._cached_btypes = {} + self._parsed_types = types.ModuleType('parsed_types').__dict__ + self._new_types = types.ModuleType('new_types').__dict__ + self._function_caches = [] + self._libraries = [] + self._cdefsources = [] + self._included_ffis = [] + self._windows_unicode = None + self._init_once_cache = {} + self._cdef_version = None + self._embedding = None + self._typecache = model.get_typecache(backend) + if hasattr(backend, 'set_ffi'): + backend.set_ffi(self) + for name in list(backend.__dict__): + if name.startswith('RTLD_'): + setattr(self, name, getattr(backend, name)) + # + with self._lock: + self.BVoidP = self._get_cached_btype(model.voidp_type) + self.BCharA = self._get_cached_btype(model.char_array_type) + if isinstance(backend, types.ModuleType): + # _cffi_backend: attach these constants to the class + if not hasattr(FFI, 'NULL'): + FFI.NULL = self.cast(self.BVoidP, 0) + FFI.CData, FFI.CType = backend._get_types() + else: + # ctypes backend: attach these constants to the instance + self.NULL = self.cast(self.BVoidP, 0) + self.CData, self.CType = backend._get_types() + self.buffer = backend.buffer + + def cdef(self, csource, override=False, packed=False, pack=None): + """Parse the given C source. This registers all declared functions, + types, and global variables. The functions and global variables can + then be accessed via either 'ffi.dlopen()' or 'ffi.verify()'. + The types can be used in 'ffi.new()' and other functions. 
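+
+        A tiny illustrative round trip, reusing the standard-library
+        pattern from the class docstring above::
+
+            ffi.cdef("int atoi(const char *);")
+            C = ffi.dlopen(None)        # standard C library (POSIX)
+            assert C.atoi(b"42") == 42
+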
+ If 'packed' is specified as True, all structs declared inside this + cdef are packed, i.e. laid out without any field alignment at all. + Alternatively, 'pack' can be a small integer, and requests for + alignment greater than that are ignored (pack=1 is equivalent to + packed=True). + """ + self._cdef(csource, override=override, packed=packed, pack=pack) + + def embedding_api(self, csource, packed=False, pack=None): + self._cdef(csource, packed=packed, pack=pack, dllexport=True) + if self._embedding is None: + self._embedding = '' + + def _cdef(self, csource, override=False, **options): + if not isinstance(csource, str): # unicode, on Python 2 + if not isinstance(csource, basestring): + raise TypeError("cdef() argument must be a string") + csource = csource.encode('ascii') + with self._lock: + self._cdef_version = object() + self._parser.parse(csource, override=override, **options) + self._cdefsources.append(csource) + if override: + for cache in self._function_caches: + cache.clear() + finishlist = self._parser._recomplete + if finishlist: + self._parser._recomplete = [] + for tp in finishlist: + tp.finish_backend_type(self, finishlist) + + def dlopen(self, name, flags=0): + """Load and return a dynamic library identified by 'name'. + The standard C library can be loaded by passing None. + Note that functions and types declared by 'ffi.cdef()' are not + linked to a particular library, just like C headers; in the + library we only look for the actual (untyped) symbols. + """ + if not (isinstance(name, basestring) or + name is None or + isinstance(name, self.CData)): + raise TypeError("dlopen(name): name must be a file name, None, " + "or an already-opened 'void *' handle") + with self._lock: + lib, function_cache = _make_ffi_library(self, name, flags) + self._function_caches.append(function_cache) + self._libraries.append(lib) + return lib + + def dlclose(self, lib): + """Close a library obtained with ffi.dlopen(). After this call, + access to functions or variables from the library will fail + (possibly with a segmentation fault). + """ + type(lib).__cffi_close__(lib) + + def _typeof_locked(self, cdecl): + # call me with the lock! + key = cdecl + if key in self._parsed_types: + return self._parsed_types[key] + # + if not isinstance(cdecl, str): # unicode, on Python 2 + cdecl = cdecl.encode('ascii') + # + type = self._parser.parse_type(cdecl) + really_a_function_type = type.is_raw_function + if really_a_function_type: + type = type.as_function_pointer() + btype = self._get_cached_btype(type) + result = btype, really_a_function_type + self._parsed_types[key] = result + return result + + def _typeof(self, cdecl, consider_function_as_funcptr=False): + # string -> ctype object + try: + result = self._parsed_types[cdecl] + except KeyError: + with self._lock: + result = self._typeof_locked(cdecl) + # + btype, really_a_function_type = result + if really_a_function_type and not consider_function_as_funcptr: + raise CDefError("the type %r is a function type, not a " + "pointer-to-function type" % (cdecl,)) + return btype + + def typeof(self, cdecl): + """Parse the C type given as a string and return the + corresponding object. + It can also be used on 'cdata' instance to get its C type. 
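+
+        For example (a minimal sketch):
+
+            t = ffi.typeof("int[10]")   # t.kind == 'array', t.length == 10
+            n = ffi.new("int *")
+            assert ffi.typeof(n) == ffi.typeof("int *")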
+ """ + if isinstance(cdecl, basestring): + return self._typeof(cdecl) + if isinstance(cdecl, self.CData): + return self._backend.typeof(cdecl) + if isinstance(cdecl, types.BuiltinFunctionType): + res = _builtin_function_type(cdecl) + if res is not None: + return res + if (isinstance(cdecl, types.FunctionType) + and hasattr(cdecl, '_cffi_base_type')): + with self._lock: + return self._get_cached_btype(cdecl._cffi_base_type) + raise TypeError(type(cdecl)) + + def sizeof(self, cdecl): + """Return the size in bytes of the argument. It can be a + string naming a C type, or a 'cdata' instance. + """ + if isinstance(cdecl, basestring): + BType = self._typeof(cdecl) + return self._backend.sizeof(BType) + else: + return self._backend.sizeof(cdecl) + + def alignof(self, cdecl): + """Return the natural alignment size in bytes of the C type + given as a string. + """ + if isinstance(cdecl, basestring): + cdecl = self._typeof(cdecl) + return self._backend.alignof(cdecl) + + def offsetof(self, cdecl, *fields_or_indexes): + """Return the offset of the named field inside the given + structure or array, which must be given as a C type name. + You can give several field names in case of nested structures. + You can also give numeric values which correspond to array + items, in case of an array type. + """ + if isinstance(cdecl, basestring): + cdecl = self._typeof(cdecl) + return self._typeoffsetof(cdecl, *fields_or_indexes)[1] + + def new(self, cdecl, init=None): + """Allocate an instance according to the specified C type and + return a pointer to it. The specified C type must be either a + pointer or an array: ``new('X *')`` allocates an X and returns + a pointer to it, whereas ``new('X[n]')`` allocates an array of + n X'es and returns an array referencing it (which works + mostly like a pointer, like in C). You can also use + ``new('X[]', n)`` to allocate an array of a non-constant + length n. + + The memory is initialized following the rules of declaring a + global variable in C: by default it is zero-initialized, but + an explicit initializer can be given which can be used to + fill all or part of the memory. + + When the returned object goes out of scope, the memory + is freed. In other words the returned object has + ownership of the value of type 'cdecl' that it points to. This + means that the raw data can be used as long as this object is + kept alive, but must not be used for a longer time. Be careful + about that when copying the pointer to the memory somewhere + else, e.g. into another structure. + """ + if isinstance(cdecl, basestring): + cdecl = self._typeof(cdecl) + return self._backend.newp(cdecl, init) + + def new_allocator(self, alloc=None, free=None, + should_clear_after_alloc=True): + """Return a new allocator, i.e. a function that behaves like ffi.new() + but uses the provided low-level 'alloc' and 'free' functions. + + 'alloc' is called with the size as argument. If it returns NULL, a + MemoryError is raised. 'free' is called with the result of 'alloc' + as argument. Both can be either Python function or directly C + functions. If 'free' is None, then no free function is called. + If both 'alloc' and 'free' are None, the default is used. + + If 'should_clear_after_alloc' is set to False, then the memory + returned by 'alloc' is assumed to be already cleared (or you are + fine with garbage); otherwise CFFI will clear it. 
+ """ + compiled_ffi = self._backend.FFI() + allocator = compiled_ffi.new_allocator(alloc, free, + should_clear_after_alloc) + def allocate(cdecl, init=None): + if isinstance(cdecl, basestring): + cdecl = self._typeof(cdecl) + return allocator(cdecl, init) + return allocate + + def cast(self, cdecl, source): + """Similar to a C cast: returns an instance of the named C + type initialized with the given 'source'. The source is + casted between integers or pointers of any type. + """ + if isinstance(cdecl, basestring): + cdecl = self._typeof(cdecl) + return self._backend.cast(cdecl, source) + + def string(self, cdata, maxlen=-1): + """Return a Python string (or unicode string) from the 'cdata'. + If 'cdata' is a pointer or array of characters or bytes, returns + the null-terminated string. The returned string extends until + the first null character, or at most 'maxlen' characters. If + 'cdata' is an array then 'maxlen' defaults to its length. + + If 'cdata' is a pointer or array of wchar_t, returns a unicode + string following the same rules. + + If 'cdata' is a single character or byte or a wchar_t, returns + it as a string or unicode string. + + If 'cdata' is an enum, returns the value of the enumerator as a + string, or 'NUMBER' if the value is out of range. + """ + return self._backend.string(cdata, maxlen) + + def unpack(self, cdata, length): + """Unpack an array of C data of the given length, + returning a Python string/unicode/list. + + If 'cdata' is a pointer to 'char', returns a byte string. + It does not stop at the first null. This is equivalent to: + ffi.buffer(cdata, length)[:] + + If 'cdata' is a pointer to 'wchar_t', returns a unicode string. + 'length' is measured in wchar_t's; it is not the size in bytes. + + If 'cdata' is a pointer to anything else, returns a list of + 'length' items. This is a faster equivalent to: + [cdata[i] for i in range(length)] + """ + return self._backend.unpack(cdata, length) + + #def buffer(self, cdata, size=-1): + # """Return a read-write buffer object that references the raw C data + # pointed to by the given 'cdata'. The 'cdata' must be a pointer or + # an array. Can be passed to functions expecting a buffer, or directly + # manipulated with: + # + # buf[:] get a copy of it in a regular string, or + # buf[idx] as a single character + # buf[:] = ... + # buf[idx] = ... change the content + # """ + # note that 'buffer' is a type, set on this instance by __init__ + + def from_buffer(self, cdecl, python_buffer=_unspecified, + require_writable=False): + """Return a cdata of the given type pointing to the data of the + given Python object, which must support the buffer interface. + Note that this is not meant to be used on the built-in types + str or unicode (you can build 'char[]' arrays explicitly) + but only on objects containing large quantities of raw data + in some other format, like 'array.array' or numpy arrays. + + The first argument is optional and default to 'char[]'. + """ + if python_buffer is _unspecified: + cdecl, python_buffer = self.BCharA, cdecl + elif isinstance(cdecl, basestring): + cdecl = self._typeof(cdecl) + return self._backend.from_buffer(cdecl, python_buffer, + require_writable) + + def memmove(self, dest, src, n): + """ffi.memmove(dest, src, n) copies n bytes of memory from src to dest. + + Like the C function memmove(), the memory areas may overlap; + apart from that it behaves like the C function memcpy(). + + 'src' can be any cdata ptr or array, or any Python buffer object. 
+ 'dest' can be any cdata ptr or array, or a writable Python buffer + object. The size to copy, 'n', is always measured in bytes. + + Unlike other methods, this one supports all Python buffer including + byte strings and bytearrays---but it still does not support + non-contiguous buffers. + """ + return self._backend.memmove(dest, src, n) + + def callback(self, cdecl, python_callable=None, error=None, onerror=None): + """Return a callback object or a decorator making such a + callback object. 'cdecl' must name a C function pointer type. + The callback invokes the specified 'python_callable' (which may + be provided either directly or via a decorator). Important: the + callback object must be manually kept alive for as long as the + callback may be invoked from the C level. + """ + def callback_decorator_wrap(python_callable): + if not callable(python_callable): + raise TypeError("the 'python_callable' argument " + "is not callable") + return self._backend.callback(cdecl, python_callable, + error, onerror) + if isinstance(cdecl, basestring): + cdecl = self._typeof(cdecl, consider_function_as_funcptr=True) + if python_callable is None: + return callback_decorator_wrap # decorator mode + else: + return callback_decorator_wrap(python_callable) # direct mode + + def getctype(self, cdecl, replace_with=''): + """Return a string giving the C type 'cdecl', which may be itself + a string or a object. If 'replace_with' is given, it gives + extra text to append (or insert for more complicated C types), like + a variable name, or '*' to get actually the C type 'pointer-to-cdecl'. + """ + if isinstance(cdecl, basestring): + cdecl = self._typeof(cdecl) + replace_with = replace_with.strip() + if (replace_with.startswith('*') + and '&[' in self._backend.getcname(cdecl, '&')): + replace_with = '(%s)' % replace_with + elif replace_with and not replace_with[0] in '[(': + replace_with = ' ' + replace_with + return self._backend.getcname(cdecl, replace_with) + + def gc(self, cdata, destructor, size=0): + """Return a new cdata object that points to the same + data. Later, when this new cdata object is garbage-collected, + 'destructor(old_cdata_object)' will be called. + + The optional 'size' gives an estimate of the size, used to + trigger the garbage collection more eagerly. So far only used + on PyPy. It tells the GC that the returned object keeps alive + roughly 'size' bytes of external memory. + """ + return self._backend.gcp(cdata, destructor, size) + + def _get_cached_btype(self, type): + assert self._lock.acquire(False) is False + # call me with the lock! + try: + BType = self._cached_btypes[type] + except KeyError: + finishlist = [] + BType = type.get_cached_btype(self, finishlist) + for type in finishlist: + type.finish_backend_type(self, finishlist) + return BType + + def verify(self, source='', tmpdir=None, **kwargs): + """Verify that the current ffi signatures compile on this + machine, and return a dynamic library object. The dynamic + library can be used to call functions and access global + variables declared in this 'ffi'. The library is compiled + by the C compiler: it gives you C-level API compatibility + (including calling macros). This is unlike 'ffi.dlopen()', + which requires binary compatibility in the signatures. 
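+
+        A minimal sketch (note that current cffi documentation recommends
+        set_source()/compile() over verify()):
+
+            ffi.cdef("double sqrt(double x);")
+            lib = ffi.verify("#include <math.h>")  # may need libraries=['m']
+            assert lib.sqrt(4.0) == 2.0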
+ """ + from .verifier import Verifier, _caller_dir_pycache + # + # If set_unicode(True) was called, insert the UNICODE and + # _UNICODE macro declarations + if self._windows_unicode: + self._apply_windows_unicode(kwargs) + # + # Set the tmpdir here, and not in Verifier.__init__: it picks + # up the caller's directory, which we want to be the caller of + # ffi.verify(), as opposed to the caller of Veritier(). + tmpdir = tmpdir or _caller_dir_pycache() + # + # Make a Verifier() and use it to load the library. + self.verifier = Verifier(self, source, tmpdir, **kwargs) + lib = self.verifier.load_library() + # + # Save the loaded library for keep-alive purposes, even + # if the caller doesn't keep it alive itself (it should). + self._libraries.append(lib) + return lib + + def _get_errno(self): + return self._backend.get_errno() + def _set_errno(self, errno): + self._backend.set_errno(errno) + errno = property(_get_errno, _set_errno, None, + "the value of 'errno' from/to the C calls") + + def getwinerror(self, code=-1): + return self._backend.getwinerror(code) + + def _pointer_to(self, ctype): + with self._lock: + return model.pointer_cache(self, ctype) + + def addressof(self, cdata, *fields_or_indexes): + """Return the address of a . + If 'fields_or_indexes' are given, returns the address of that + field or array item in the structure or array, recursively in + case of nested structures. + """ + try: + ctype = self._backend.typeof(cdata) + except TypeError: + if '__addressof__' in type(cdata).__dict__: + return type(cdata).__addressof__(cdata, *fields_or_indexes) + raise + if fields_or_indexes: + ctype, offset = self._typeoffsetof(ctype, *fields_or_indexes) + else: + if ctype.kind == "pointer": + raise TypeError("addressof(pointer)") + offset = 0 + ctypeptr = self._pointer_to(ctype) + return self._backend.rawaddressof(ctypeptr, cdata, offset) + + def _typeoffsetof(self, ctype, field_or_index, *fields_or_indexes): + ctype, offset = self._backend.typeoffsetof(ctype, field_or_index) + for field1 in fields_or_indexes: + ctype, offset1 = self._backend.typeoffsetof(ctype, field1, 1) + offset += offset1 + return ctype, offset + + def include(self, ffi_to_include): + """Includes the typedefs, structs, unions and enums defined + in another FFI instance. Usage is similar to a #include in C, + where a part of the program might include types defined in + another part for its own usage. Note that the include() + method has no effect on functions, constants and global + variables, which must anyway be accessed directly from the + lib object returned by the original FFI instance. + """ + if not isinstance(ffi_to_include, FFI): + raise TypeError("ffi.include() expects an argument that is also of" + " type cffi.FFI, not %r" % ( + type(ffi_to_include).__name__,)) + if ffi_to_include is self: + raise ValueError("self.include(self)") + with ffi_to_include._lock: + with self._lock: + self._parser.include(ffi_to_include._parser) + self._cdefsources.append('[') + self._cdefsources.extend(ffi_to_include._cdefsources) + self._cdefsources.append(']') + self._included_ffis.append(ffi_to_include) + + def new_handle(self, x): + return self._backend.newp_handle(self.BVoidP, x) + + def from_handle(self, x): + return self._backend.from_handle(x) + + def release(self, x): + self._backend.release(x) + + def set_unicode(self, enabled_flag): + """Windows: if 'enabled_flag' is True, enable the UNICODE and + _UNICODE defines in C, and declare the types like TCHAR and LPTCSTR + to be (pointers to) wchar_t. 
If 'enabled_flag' is False, + declare these types to be (pointers to) plain 8-bit characters. + This is mostly for backward compatibility; you usually want True. + """ + if self._windows_unicode is not None: + raise ValueError("set_unicode() can only be called once") + enabled_flag = bool(enabled_flag) + if enabled_flag: + self.cdef("typedef wchar_t TBYTE;" + "typedef wchar_t TCHAR;" + "typedef const wchar_t *LPCTSTR;" + "typedef const wchar_t *PCTSTR;" + "typedef wchar_t *LPTSTR;" + "typedef wchar_t *PTSTR;" + "typedef TBYTE *PTBYTE;" + "typedef TCHAR *PTCHAR;") + else: + self.cdef("typedef char TBYTE;" + "typedef char TCHAR;" + "typedef const char *LPCTSTR;" + "typedef const char *PCTSTR;" + "typedef char *LPTSTR;" + "typedef char *PTSTR;" + "typedef TBYTE *PTBYTE;" + "typedef TCHAR *PTCHAR;") + self._windows_unicode = enabled_flag + + def _apply_windows_unicode(self, kwds): + defmacros = kwds.get('define_macros', ()) + if not isinstance(defmacros, (list, tuple)): + raise TypeError("'define_macros' must be a list or tuple") + defmacros = list(defmacros) + [('UNICODE', '1'), + ('_UNICODE', '1')] + kwds['define_macros'] = defmacros + + def _apply_embedding_fix(self, kwds): + # must include an argument like "-lpython2.7" for the compiler + def ensure(key, value): + lst = kwds.setdefault(key, []) + if value not in lst: + lst.append(value) + # + if '__pypy__' in sys.builtin_module_names: + import os + if sys.platform == "win32": + # we need 'libpypy-c.lib'. Current distributions of + # pypy (>= 4.1) contain it as 'libs/python27.lib'. + pythonlib = "python{0[0]}{0[1]}".format(sys.version_info) + if hasattr(sys, 'prefix'): + ensure('library_dirs', os.path.join(sys.prefix, 'libs')) + else: + # we need 'libpypy-c.{so,dylib}', which should be by + # default located in 'sys.prefix/bin' for installed + # systems. + if sys.version_info < (3,): + pythonlib = "pypy-c" + else: + pythonlib = "pypy3-c" + if hasattr(sys, 'prefix'): + ensure('library_dirs', os.path.join(sys.prefix, 'bin')) + # On uninstalled pypy's, the libpypy-c is typically found in + # .../pypy/goal/. + if hasattr(sys, 'prefix'): + ensure('library_dirs', os.path.join(sys.prefix, 'pypy', 'goal')) + else: + if sys.platform == "win32": + template = "python%d%d" + if hasattr(sys, 'gettotalrefcount'): + template += '_d' + else: + try: + import sysconfig + except ImportError: # 2.6 + from distutils import sysconfig + template = "python%d.%d" + if sysconfig.get_config_var('DEBUG_EXT'): + template += sysconfig.get_config_var('DEBUG_EXT') + pythonlib = (template % + (sys.hexversion >> 24, (sys.hexversion >> 16) & 0xff)) + if hasattr(sys, 'abiflags'): + pythonlib += sys.abiflags + ensure('libraries', pythonlib) + if sys.platform == "win32": + ensure('extra_link_args', '/MANIFEST') + + def set_source(self, module_name, source, source_extension='.c', **kwds): + import os + if hasattr(self, '_assigned_source'): + raise ValueError("set_source() cannot be called several times " + "per ffi object") + if not isinstance(module_name, basestring): + raise TypeError("'module_name' must be a string") + if os.sep in module_name or (os.altsep and os.altsep in module_name): + raise ValueError("'module_name' must not contain '/': use a dotted " + "name to make a 'package.module' location") + self._assigned_source = (str(module_name), source, + source_extension, kwds) + + def set_source_pkgconfig(self, module_name, pkgconfig_libs, source, + source_extension='.c', **kwds): + from . 
import pkgconfig
+        if not isinstance(pkgconfig_libs, list):
+            raise TypeError("the pkgconfig_libs argument must be a list "
+                            "of package names")
+        kwds2 = pkgconfig.flags_from_pkgconfig(pkgconfig_libs)
+        pkgconfig.merge_flags(kwds, kwds2)
+        self.set_source(module_name, source, source_extension, **kwds)
+
+    def distutils_extension(self, tmpdir='build', verbose=True):
+        from distutils.dir_util import mkpath
+        from .recompiler import recompile
+        #
+        if not hasattr(self, '_assigned_source'):
+            if hasattr(self, 'verifier'):       # fallback, 'tmpdir' ignored
+                return self.verifier.get_extension()
+            raise ValueError("set_source() must be called before"
+                             " distutils_extension()")
+        module_name, source, source_extension, kwds = self._assigned_source
+        if source is None:
+            raise TypeError("distutils_extension() is only for C extension "
+                            "modules, not for dlopen()-style pure Python "
+                            "modules")
+        mkpath(tmpdir)
+        ext, updated = recompile(self, module_name,
+                                 source, tmpdir=tmpdir, extradir=tmpdir,
+                                 source_extension=source_extension,
+                                 call_c_compiler=False, **kwds)
+        if verbose:
+            if updated:
+                sys.stderr.write("regenerated: %r\n" % (ext.sources[0],))
+            else:
+                sys.stderr.write("not modified: %r\n" % (ext.sources[0],))
+        return ext
+
+    def emit_c_code(self, filename):
+        from .recompiler import recompile
+        #
+        if not hasattr(self, '_assigned_source'):
+            raise ValueError("set_source() must be called before emit_c_code()")
+        module_name, source, source_extension, kwds = self._assigned_source
+        if source is None:
+            raise TypeError("emit_c_code() is only for C extension modules, "
+                            "not for dlopen()-style pure Python modules")
+        recompile(self, module_name, source,
+                  c_file=filename, call_c_compiler=False, **kwds)
+
+    def emit_python_code(self, filename):
+        from .recompiler import recompile
+        #
+        if not hasattr(self, '_assigned_source'):
+            raise ValueError("set_source() must be called before "
+                             "emit_python_code()")
+        module_name, source, source_extension, kwds = self._assigned_source
+        if source is not None:
+            raise TypeError("emit_python_code() is only for dlopen()-style "
+                            "pure Python modules, not for C extension modules")
+        recompile(self, module_name, source,
+                  c_file=filename, call_c_compiler=False, **kwds)
+
+    def compile(self, tmpdir='.', verbose=0, target=None, debug=None):
+        """The 'target' argument gives the final file name of the
+        compiled DLL. Use '*' to force distutils' choice, suitable for
+        regular CPython C API modules. Use a file name ending in '.*'
+        to ask for the system's default extension for dynamic libraries
+        (.so/.dll/.dylib).
+
+        The default is '*' when building a non-embedded C API extension,
+        and (module_name + '.*') when building an embedded library.
+        """
+        from .recompiler import recompile
+        #
+        if not hasattr(self, '_assigned_source'):
+            raise ValueError("set_source() must be called before compile()")
+        module_name, source, source_extension, kwds = self._assigned_source
+        return recompile(self, module_name, source, tmpdir=tmpdir,
+                         target=target, source_extension=source_extension,
+                         compiler_verbose=verbose, debug=debug, **kwds)
+
+    def init_once(self, func, tag):
+        # Read _init_once_cache[tag], which is either (False, lock) if
+        # we're calling the function now in some thread, or (True, result).
+        # Don't call setdefault() in most cases, to avoid allocating and
+        # immediately freeing a lock; but still use setdefault() to avoid
+        # races.
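+        # Sketch of typical use ('make_connection' is an illustrative
+        # placeholder, not part of this module):
+        #     db = ffi.init_once(make_connection, "db")  # 1st call: runs it
+        #     db = ffi.init_once(make_connection, "db")  # later: cached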
+ try: + x = self._init_once_cache[tag] + except KeyError: + x = self._init_once_cache.setdefault(tag, (False, allocate_lock())) + # Common case: we got (True, result), so we return the result. + if x[0]: + return x[1] + # Else, it's a lock. Acquire it to serialize the following tests. + with x[1]: + # Read again from _init_once_cache the current status. + x = self._init_once_cache[tag] + if x[0]: + return x[1] + # Call the function and store the result back. + result = func() + self._init_once_cache[tag] = (True, result) + return result + + def embedding_init_code(self, pysource): + if self._embedding: + raise ValueError("embedding_init_code() can only be called once") + # fix 'pysource' before it gets dumped into the C file: + # - remove empty lines at the beginning, so it starts at "line 1" + # - dedent, if all non-empty lines are indented + # - check for SyntaxErrors + import re + match = re.match(r'\s*\n', pysource) + if match: + pysource = pysource[match.end():] + lines = pysource.splitlines() or [''] + prefix = re.match(r'\s*', lines[0]).group() + for i in range(1, len(lines)): + line = lines[i] + if line.rstrip(): + while not line.startswith(prefix): + prefix = prefix[:-1] + i = len(prefix) + lines = [line[i:]+'\n' for line in lines] + pysource = ''.join(lines) + # + compile(pysource, "cffi_init", "exec") + # + self._embedding = pysource + + def def_extern(self, *args, **kwds): + raise ValueError("ffi.def_extern() is only available on API-mode FFI " + "objects") + + def list_types(self): + """Returns the user type names known to this FFI instance. + This returns a tuple containing three lists of names: + (typedef_names, names_of_structs, names_of_unions) + """ + typedefs = [] + structs = [] + unions = [] + for key in self._parser._declarations: + if key.startswith('typedef '): + typedefs.append(key[8:]) + elif key.startswith('struct '): + structs.append(key[7:]) + elif key.startswith('union '): + unions.append(key[6:]) + typedefs.sort() + structs.sort() + unions.sort() + return (typedefs, structs, unions) + + +def _load_backend_lib(backend, name, flags): + import os + if not isinstance(name, basestring): + if sys.platform != "win32" or name is not None: + return backend.load_library(name, flags) + name = "c" # Windows: load_library(None) fails, but this works + # on Python 2 (backward compatibility hack only) + first_error = None + if '.' in name or '/' in name or os.sep in name: + try: + return backend.load_library(name, flags) + except OSError as e: + first_error = e + import ctypes.util + path = ctypes.util.find_library(name) + if path is None: + if name == "c" and sys.platform == "win32" and sys.version_info >= (3,): + raise OSError("dlopen(None) cannot work on Windows for Python 3 " + "(see http://bugs.python.org/issue23606)") + msg = ("ctypes.util.find_library() did not manage " + "to locate a library called %r" % (name,)) + if first_error is not None: + msg = "%s. 
Additionally, %s" % (first_error, msg) + raise OSError(msg) + return backend.load_library(path, flags) + +def _make_ffi_library(ffi, libname, flags): + backend = ffi._backend + backendlib = _load_backend_lib(backend, libname, flags) + # + def accessor_function(name): + key = 'function ' + name + tp, _ = ffi._parser._declarations[key] + BType = ffi._get_cached_btype(tp) + value = backendlib.load_function(BType, name) + library.__dict__[name] = value + # + def accessor_variable(name): + key = 'variable ' + name + tp, _ = ffi._parser._declarations[key] + BType = ffi._get_cached_btype(tp) + read_variable = backendlib.read_variable + write_variable = backendlib.write_variable + setattr(FFILibrary, name, property( + lambda self: read_variable(BType, name), + lambda self, value: write_variable(BType, name, value))) + # + def addressof_var(name): + try: + return addr_variables[name] + except KeyError: + with ffi._lock: + if name not in addr_variables: + key = 'variable ' + name + tp, _ = ffi._parser._declarations[key] + BType = ffi._get_cached_btype(tp) + if BType.kind != 'array': + BType = model.pointer_cache(ffi, BType) + p = backendlib.load_function(BType, name) + addr_variables[name] = p + return addr_variables[name] + # + def accessor_constant(name): + raise NotImplementedError("non-integer constant '%s' cannot be " + "accessed from a dlopen() library" % (name,)) + # + def accessor_int_constant(name): + library.__dict__[name] = ffi._parser._int_constants[name] + # + accessors = {} + accessors_version = [False] + addr_variables = {} + # + def update_accessors(): + if accessors_version[0] is ffi._cdef_version: + return + # + for key, (tp, _) in ffi._parser._declarations.items(): + if not isinstance(tp, model.EnumType): + tag, name = key.split(' ', 1) + if tag == 'function': + accessors[name] = accessor_function + elif tag == 'variable': + accessors[name] = accessor_variable + elif tag == 'constant': + accessors[name] = accessor_constant + else: + for i, enumname in enumerate(tp.enumerators): + def accessor_enum(name, tp=tp, i=i): + tp.check_not_partial() + library.__dict__[name] = tp.enumvalues[i] + accessors[enumname] = accessor_enum + for name in ffi._parser._int_constants: + accessors.setdefault(name, accessor_int_constant) + accessors_version[0] = ffi._cdef_version + # + def make_accessor(name): + with ffi._lock: + if name in library.__dict__ or name in FFILibrary.__dict__: + return # added by another thread while waiting for the lock + if name not in accessors: + update_accessors() + if name not in accessors: + raise AttributeError(name) + accessors[name](name) + # + class FFILibrary(object): + def __getattr__(self, name): + make_accessor(name) + return getattr(self, name) + def __setattr__(self, name, value): + try: + property = getattr(self.__class__, name) + except AttributeError: + make_accessor(name) + setattr(self, name, value) + else: + property.__set__(self, value) + def __dir__(self): + with ffi._lock: + update_accessors() + return accessors.keys() + def __addressof__(self, name): + if name in library.__dict__: + return library.__dict__[name] + if name in FFILibrary.__dict__: + return addressof_var(name) + make_accessor(name) + if name in library.__dict__: + return library.__dict__[name] + if name in FFILibrary.__dict__: + return addressof_var(name) + raise AttributeError("cffi library has no function or " + "global variable named '%s'" % (name,)) + def __cffi_close__(self): + backendlib.close_lib() + self.__dict__.clear() + # + if isinstance(libname, basestring): + try: + if not 
isinstance(libname, str):    # unicode, on Python 2
+                libname = libname.encode('utf-8')
+            FFILibrary.__name__ = 'FFILibrary_%s' % libname
+        except UnicodeError:
+            pass
+    library = FFILibrary()
+    return library, library.__dict__
+
+def _builtin_function_type(func):
+    # a hack to make at least ffi.typeof(builtin_function) work,
+    # if the builtin function was obtained by 'vengine_cpy'.
+    import sys
+    try:
+        module = sys.modules[func.__module__]
+        ffi = module._cffi_original_ffi
+        types_of_builtin_funcs = module._cffi_types_of_builtin_funcs
+        tp = types_of_builtin_funcs[func]
+    except (KeyError, AttributeError, TypeError):
+        return None
+    else:
+        with ffi._lock:
+            return ffi._get_cached_btype(tp)
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cffi/backend_ctypes.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cffi/backend_ctypes.py
new file mode 100644
index 0000000000000000000000000000000000000000..e7956a79cfb1c3d28a2ad22a40b261ae7dbbbb5f
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cffi/backend_ctypes.py
@@ -0,0 +1,1121 @@
+import ctypes, ctypes.util, operator, sys
+from . import model
+
+if sys.version_info < (3,):
+    bytechr = chr
+else:
+    unicode = str
+    long = int
+    xrange = range
+    bytechr = lambda num: bytes([num])
+
+class CTypesType(type):
+    pass
+
+class CTypesData(object):
+    __metaclass__ = CTypesType
+    __slots__ = ['__weakref__']
+    __name__ = '<cdata>'
+
+    def __init__(self, *args):
+        raise TypeError("cannot instantiate %r" % (self.__class__,))
+
+    @classmethod
+    def _newp(cls, init):
+        raise TypeError("expected a pointer or array ctype, got '%s'"
+                        % (cls._get_c_name(),))
+
+    @staticmethod
+    def _to_ctypes(value):
+        raise TypeError
+
+    @classmethod
+    def _arg_to_ctypes(cls, *value):
+        try:
+            ctype = cls._ctype
+        except AttributeError:
+            raise TypeError("cannot create an instance of %r" % (cls,))
+        if value:
+            res = cls._to_ctypes(*value)
+            if not isinstance(res, ctype):
+                res = cls._ctype(res)
+        else:
+            res = cls._ctype()
+        return res
+
+    @classmethod
+    def _create_ctype_obj(cls, init):
+        if init is None:
+            return cls._arg_to_ctypes()
+        else:
+            return cls._arg_to_ctypes(init)
+
+    @staticmethod
+    def _from_ctypes(ctypes_value):
+        raise TypeError
+
+    @classmethod
+    def _get_c_name(cls, replace_with=''):
+        return cls._reftypename.replace(' &', replace_with)
+
+    @classmethod
+    def _fix_class(cls):
+        cls.__name__ = 'CData<%s>' % (cls._get_c_name(),)
+        cls.__qualname__ = 'CData<%s>' % (cls._get_c_name(),)
+        cls.__module__ = 'ffi'
+
+    def _get_own_repr(self):
+        raise NotImplementedError
+
+    def _addr_repr(self, address):
+        if address == 0:
+            return 'NULL'
+        else:
+            if address < 0:
+                address += 1 << (8*ctypes.sizeof(ctypes.c_void_p))
+            return '0x%x' % address
+
+    def __repr__(self, c_name=None):
+        own = self._get_own_repr()
+        return '<cdata %r %s>' % (c_name or self._get_c_name(), own)
+
+    def _convert_to_address(self, BClass):
+        if BClass is None:
+            raise TypeError("cannot convert %r to an address" % (
+                self._get_c_name(),))
+        else:
+            raise TypeError("cannot convert %r to %r" % (
+                self._get_c_name(), BClass._get_c_name()))
+
+    @classmethod
+    def _get_size(cls):
+        return ctypes.sizeof(cls._ctype)
+
+    def _get_size_of_instance(self):
+        return ctypes.sizeof(self._ctype)
+
+    @classmethod
+    def _cast_from(cls, source):
+        raise TypeError("cannot cast to %r" % (cls._get_c_name(),))
+
+    def _cast_to_integer(self):
+        return self._convert_to_address(None)
+
+    @classmethod
+    def 
_alignment(cls): + return ctypes.alignment(cls._ctype) + + def __iter__(self): + raise TypeError("cdata %r does not support iteration" % ( + self._get_c_name()),) + + def _make_cmp(name): + cmpfunc = getattr(operator, name) + def cmp(self, other): + v_is_ptr = not isinstance(self, CTypesGenericPrimitive) + w_is_ptr = (isinstance(other, CTypesData) and + not isinstance(other, CTypesGenericPrimitive)) + if v_is_ptr and w_is_ptr: + return cmpfunc(self._convert_to_address(None), + other._convert_to_address(None)) + elif v_is_ptr or w_is_ptr: + return NotImplemented + else: + if isinstance(self, CTypesGenericPrimitive): + self = self._value + if isinstance(other, CTypesGenericPrimitive): + other = other._value + return cmpfunc(self, other) + cmp.func_name = name + return cmp + + __eq__ = _make_cmp('__eq__') + __ne__ = _make_cmp('__ne__') + __lt__ = _make_cmp('__lt__') + __le__ = _make_cmp('__le__') + __gt__ = _make_cmp('__gt__') + __ge__ = _make_cmp('__ge__') + + def __hash__(self): + return hash(self._convert_to_address(None)) + + def _to_string(self, maxlen): + raise TypeError("string(): %r" % (self,)) + + +class CTypesGenericPrimitive(CTypesData): + __slots__ = [] + + def __hash__(self): + return hash(self._value) + + def _get_own_repr(self): + return repr(self._from_ctypes(self._value)) + + +class CTypesGenericArray(CTypesData): + __slots__ = [] + + @classmethod + def _newp(cls, init): + return cls(init) + + def __iter__(self): + for i in xrange(len(self)): + yield self[i] + + def _get_own_repr(self): + return self._addr_repr(ctypes.addressof(self._blob)) + + +class CTypesGenericPtr(CTypesData): + __slots__ = ['_address', '_as_ctype_ptr'] + _automatic_casts = False + kind = "pointer" + + @classmethod + def _newp(cls, init): + return cls(init) + + @classmethod + def _cast_from(cls, source): + if source is None: + address = 0 + elif isinstance(source, CTypesData): + address = source._cast_to_integer() + elif isinstance(source, (int, long)): + address = source + else: + raise TypeError("bad type for cast to %r: %r" % + (cls, type(source).__name__)) + return cls._new_pointer_at(address) + + @classmethod + def _new_pointer_at(cls, address): + self = cls.__new__(cls) + self._address = address + self._as_ctype_ptr = ctypes.cast(address, cls._ctype) + return self + + def _get_own_repr(self): + try: + return self._addr_repr(self._address) + except AttributeError: + return '???' 
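+        # (The '???' above is the fallback used while the pointer object is
+        # only partially constructed and '_address' is not yet set.)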
+ + def _cast_to_integer(self): + return self._address + + def __nonzero__(self): + return bool(self._address) + __bool__ = __nonzero__ + + @classmethod + def _to_ctypes(cls, value): + if not isinstance(value, CTypesData): + raise TypeError("unexpected %s object" % type(value).__name__) + address = value._convert_to_address(cls) + return ctypes.cast(address, cls._ctype) + + @classmethod + def _from_ctypes(cls, ctypes_ptr): + address = ctypes.cast(ctypes_ptr, ctypes.c_void_p).value or 0 + return cls._new_pointer_at(address) + + @classmethod + def _initialize(cls, ctypes_ptr, value): + if value: + ctypes_ptr.contents = cls._to_ctypes(value).contents + + def _convert_to_address(self, BClass): + if (BClass in (self.__class__, None) or BClass._automatic_casts + or self._automatic_casts): + return self._address + else: + return CTypesData._convert_to_address(self, BClass) + + +class CTypesBaseStructOrUnion(CTypesData): + __slots__ = ['_blob'] + + @classmethod + def _create_ctype_obj(cls, init): + # may be overridden + raise TypeError("cannot instantiate opaque type %s" % (cls,)) + + def _get_own_repr(self): + return self._addr_repr(ctypes.addressof(self._blob)) + + @classmethod + def _offsetof(cls, fieldname): + return getattr(cls._ctype, fieldname).offset + + def _convert_to_address(self, BClass): + if getattr(BClass, '_BItem', None) is self.__class__: + return ctypes.addressof(self._blob) + else: + return CTypesData._convert_to_address(self, BClass) + + @classmethod + def _from_ctypes(cls, ctypes_struct_or_union): + self = cls.__new__(cls) + self._blob = ctypes_struct_or_union + return self + + @classmethod + def _to_ctypes(cls, value): + return value._blob + + def __repr__(self, c_name=None): + return CTypesData.__repr__(self, c_name or self._get_c_name(' &')) + + +class CTypesBackend(object): + + PRIMITIVE_TYPES = { + 'char': ctypes.c_char, + 'short': ctypes.c_short, + 'int': ctypes.c_int, + 'long': ctypes.c_long, + 'long long': ctypes.c_longlong, + 'signed char': ctypes.c_byte, + 'unsigned char': ctypes.c_ubyte, + 'unsigned short': ctypes.c_ushort, + 'unsigned int': ctypes.c_uint, + 'unsigned long': ctypes.c_ulong, + 'unsigned long long': ctypes.c_ulonglong, + 'float': ctypes.c_float, + 'double': ctypes.c_double, + '_Bool': ctypes.c_bool, + } + + for _name in ['unsigned long long', 'unsigned long', + 'unsigned int', 'unsigned short', 'unsigned char']: + _size = ctypes.sizeof(PRIMITIVE_TYPES[_name]) + PRIMITIVE_TYPES['uint%d_t' % (8*_size)] = PRIMITIVE_TYPES[_name] + if _size == ctypes.sizeof(ctypes.c_void_p): + PRIMITIVE_TYPES['uintptr_t'] = PRIMITIVE_TYPES[_name] + if _size == ctypes.sizeof(ctypes.c_size_t): + PRIMITIVE_TYPES['size_t'] = PRIMITIVE_TYPES[_name] + + for _name in ['long long', 'long', 'int', 'short', 'signed char']: + _size = ctypes.sizeof(PRIMITIVE_TYPES[_name]) + PRIMITIVE_TYPES['int%d_t' % (8*_size)] = PRIMITIVE_TYPES[_name] + if _size == ctypes.sizeof(ctypes.c_void_p): + PRIMITIVE_TYPES['intptr_t'] = PRIMITIVE_TYPES[_name] + PRIMITIVE_TYPES['ptrdiff_t'] = PRIMITIVE_TYPES[_name] + if _size == ctypes.sizeof(ctypes.c_size_t): + PRIMITIVE_TYPES['ssize_t'] = PRIMITIVE_TYPES[_name] + + + def __init__(self): + self.RTLD_LAZY = 0 # not supported anyway by ctypes + self.RTLD_NOW = 0 + self.RTLD_GLOBAL = ctypes.RTLD_GLOBAL + self.RTLD_LOCAL = ctypes.RTLD_LOCAL + + def set_ffi(self, ffi): + self.ffi = ffi + + def _get_types(self): + return CTypesData, CTypesType + + def load_library(self, path, flags=0): + cdll = ctypes.CDLL(path, flags) + return CTypesLibrary(self, cdll) + + def 
new_void_type(self): + class CTypesVoid(CTypesData): + __slots__ = [] + _reftypename = 'void &' + @staticmethod + def _from_ctypes(novalue): + return None + @staticmethod + def _to_ctypes(novalue): + if novalue is not None: + raise TypeError("None expected, got %s object" % + (type(novalue).__name__,)) + return None + CTypesVoid._fix_class() + return CTypesVoid + + def new_primitive_type(self, name): + if name == 'wchar_t': + raise NotImplementedError(name) + ctype = self.PRIMITIVE_TYPES[name] + if name == 'char': + kind = 'char' + elif name in ('float', 'double'): + kind = 'float' + else: + if name in ('signed char', 'unsigned char'): + kind = 'byte' + elif name == '_Bool': + kind = 'bool' + else: + kind = 'int' + is_signed = (ctype(-1).value == -1) + # + def _cast_source_to_int(source): + if isinstance(source, (int, long, float)): + source = int(source) + elif isinstance(source, CTypesData): + source = source._cast_to_integer() + elif isinstance(source, bytes): + source = ord(source) + elif source is None: + source = 0 + else: + raise TypeError("bad type for cast to %r: %r" % + (CTypesPrimitive, type(source).__name__)) + return source + # + kind1 = kind + class CTypesPrimitive(CTypesGenericPrimitive): + __slots__ = ['_value'] + _ctype = ctype + _reftypename = '%s &' % name + kind = kind1 + + def __init__(self, value): + self._value = value + + @staticmethod + def _create_ctype_obj(init): + if init is None: + return ctype() + return ctype(CTypesPrimitive._to_ctypes(init)) + + if kind == 'int' or kind == 'byte': + @classmethod + def _cast_from(cls, source): + source = _cast_source_to_int(source) + source = ctype(source).value # cast within range + return cls(source) + def __int__(self): + return self._value + + if kind == 'bool': + @classmethod + def _cast_from(cls, source): + if not isinstance(source, (int, long, float)): + source = _cast_source_to_int(source) + return cls(bool(source)) + def __int__(self): + return int(self._value) + + if kind == 'char': + @classmethod + def _cast_from(cls, source): + source = _cast_source_to_int(source) + source = bytechr(source & 0xFF) + return cls(source) + def __int__(self): + return ord(self._value) + + if kind == 'float': + @classmethod + def _cast_from(cls, source): + if isinstance(source, float): + pass + elif isinstance(source, CTypesGenericPrimitive): + if hasattr(source, '__float__'): + source = float(source) + else: + source = int(source) + else: + source = _cast_source_to_int(source) + source = ctype(source).value # fix precision + return cls(source) + def __int__(self): + return int(self._value) + def __float__(self): + return self._value + + _cast_to_integer = __int__ + + if kind == 'int' or kind == 'byte' or kind == 'bool': + @staticmethod + def _to_ctypes(x): + if not isinstance(x, (int, long)): + if isinstance(x, CTypesData): + x = int(x) + else: + raise TypeError("integer expected, got %s" % + type(x).__name__) + if ctype(x).value != x: + if not is_signed and x < 0: + raise OverflowError("%s: negative integer" % name) + else: + raise OverflowError("%s: integer out of bounds" + % name) + return x + + if kind == 'char': + @staticmethod + def _to_ctypes(x): + if isinstance(x, bytes) and len(x) == 1: + return x + if isinstance(x, CTypesPrimitive): # > + return x._value + raise TypeError("character expected, got %s" % + type(x).__name__) + def __nonzero__(self): + return ord(self._value) != 0 + else: + def __nonzero__(self): + return self._value != 0 + __bool__ = __nonzero__ + + if kind == 'float': + @staticmethod + def _to_ctypes(x): + if 
not isinstance(x, (int, long, float, CTypesData)): + raise TypeError("float expected, got %s" % + type(x).__name__) + return ctype(x).value + + @staticmethod + def _from_ctypes(value): + return getattr(value, 'value', value) + + @staticmethod + def _initialize(blob, init): + blob.value = CTypesPrimitive._to_ctypes(init) + + if kind == 'char': + def _to_string(self, maxlen): + return self._value + if kind == 'byte': + def _to_string(self, maxlen): + return chr(self._value & 0xff) + # + CTypesPrimitive._fix_class() + return CTypesPrimitive + + def new_pointer_type(self, BItem): + getbtype = self.ffi._get_cached_btype + if BItem is getbtype(model.PrimitiveType('char')): + kind = 'charp' + elif BItem in (getbtype(model.PrimitiveType('signed char')), + getbtype(model.PrimitiveType('unsigned char'))): + kind = 'bytep' + elif BItem is getbtype(model.void_type): + kind = 'voidp' + else: + kind = 'generic' + # + class CTypesPtr(CTypesGenericPtr): + __slots__ = ['_own'] + if kind == 'charp': + __slots__ += ['__as_strbuf'] + _BItem = BItem + if hasattr(BItem, '_ctype'): + _ctype = ctypes.POINTER(BItem._ctype) + _bitem_size = ctypes.sizeof(BItem._ctype) + else: + _ctype = ctypes.c_void_p + if issubclass(BItem, CTypesGenericArray): + _reftypename = BItem._get_c_name('(* &)') + else: + _reftypename = BItem._get_c_name(' * &') + + def __init__(self, init): + ctypeobj = BItem._create_ctype_obj(init) + if kind == 'charp': + self.__as_strbuf = ctypes.create_string_buffer( + ctypeobj.value + b'\x00') + self._as_ctype_ptr = ctypes.cast( + self.__as_strbuf, self._ctype) + else: + self._as_ctype_ptr = ctypes.pointer(ctypeobj) + self._address = ctypes.cast(self._as_ctype_ptr, + ctypes.c_void_p).value + self._own = True + + def __add__(self, other): + if isinstance(other, (int, long)): + return self._new_pointer_at(self._address + + other * self._bitem_size) + else: + return NotImplemented + + def __sub__(self, other): + if isinstance(other, (int, long)): + return self._new_pointer_at(self._address - + other * self._bitem_size) + elif type(self) is type(other): + return (self._address - other._address) // self._bitem_size + else: + return NotImplemented + + def __getitem__(self, index): + if getattr(self, '_own', False) and index != 0: + raise IndexError + return BItem._from_ctypes(self._as_ctype_ptr[index]) + + def __setitem__(self, index, value): + self._as_ctype_ptr[index] = BItem._to_ctypes(value) + + if kind == 'charp' or kind == 'voidp': + @classmethod + def _arg_to_ctypes(cls, *value): + if value and isinstance(value[0], bytes): + return ctypes.c_char_p(value[0]) + else: + return super(CTypesPtr, cls)._arg_to_ctypes(*value) + + if kind == 'charp' or kind == 'bytep': + def _to_string(self, maxlen): + if maxlen < 0: + maxlen = sys.maxsize + p = ctypes.cast(self._as_ctype_ptr, + ctypes.POINTER(ctypes.c_char)) + n = 0 + while n < maxlen and p[n] != b'\x00': + n += 1 + return b''.join([p[i] for i in range(n)]) + + def _get_own_repr(self): + if getattr(self, '_own', False): + return 'owning %d bytes' % ( + ctypes.sizeof(self._as_ctype_ptr.contents),) + return super(CTypesPtr, self)._get_own_repr() + # + if (BItem is self.ffi._get_cached_btype(model.void_type) or + BItem is self.ffi._get_cached_btype(model.PrimitiveType('char'))): + CTypesPtr._automatic_casts = True + # + CTypesPtr._fix_class() + return CTypesPtr + + def new_array_type(self, CTypesPtr, length): + if length is None: + brackets = ' &[]' + else: + brackets = ' &[%d]' % length + BItem = CTypesPtr._BItem + getbtype = self.ffi._get_cached_btype + if 
BItem is getbtype(model.PrimitiveType('char')): + kind = 'char' + elif BItem in (getbtype(model.PrimitiveType('signed char')), + getbtype(model.PrimitiveType('unsigned char'))): + kind = 'byte' + else: + kind = 'generic' + # + class CTypesArray(CTypesGenericArray): + __slots__ = ['_blob', '_own'] + if length is not None: + _ctype = BItem._ctype * length + else: + __slots__.append('_ctype') + _reftypename = BItem._get_c_name(brackets) + _declared_length = length + _CTPtr = CTypesPtr + + def __init__(self, init): + if length is None: + if isinstance(init, (int, long)): + len1 = init + init = None + elif kind == 'char' and isinstance(init, bytes): + len1 = len(init) + 1 # extra null + else: + init = tuple(init) + len1 = len(init) + self._ctype = BItem._ctype * len1 + self._blob = self._ctype() + self._own = True + if init is not None: + self._initialize(self._blob, init) + + @staticmethod + def _initialize(blob, init): + if isinstance(init, bytes): + init = [init[i:i+1] for i in range(len(init))] + else: + if isinstance(init, CTypesGenericArray): + if (len(init) != len(blob) or + not isinstance(init, CTypesArray)): + raise TypeError("length/type mismatch: %s" % (init,)) + init = tuple(init) + if len(init) > len(blob): + raise IndexError("too many initializers") + addr = ctypes.cast(blob, ctypes.c_void_p).value + PTR = ctypes.POINTER(BItem._ctype) + itemsize = ctypes.sizeof(BItem._ctype) + for i, value in enumerate(init): + p = ctypes.cast(addr + i * itemsize, PTR) + BItem._initialize(p.contents, value) + + def __len__(self): + return len(self._blob) + + def __getitem__(self, index): + if not (0 <= index < len(self._blob)): + raise IndexError + return BItem._from_ctypes(self._blob[index]) + + def __setitem__(self, index, value): + if not (0 <= index < len(self._blob)): + raise IndexError + self._blob[index] = BItem._to_ctypes(value) + + if kind == 'char' or kind == 'byte': + def _to_string(self, maxlen): + if maxlen < 0: + maxlen = len(self._blob) + p = ctypes.cast(self._blob, + ctypes.POINTER(ctypes.c_char)) + n = 0 + while n < maxlen and p[n] != b'\x00': + n += 1 + return b''.join([p[i] for i in range(n)]) + + def _get_own_repr(self): + if getattr(self, '_own', False): + return 'owning %d bytes' % (ctypes.sizeof(self._blob),) + return super(CTypesArray, self)._get_own_repr() + + def _convert_to_address(self, BClass): + if BClass in (CTypesPtr, None) or BClass._automatic_casts: + return ctypes.addressof(self._blob) + else: + return CTypesData._convert_to_address(self, BClass) + + @staticmethod + def _from_ctypes(ctypes_array): + self = CTypesArray.__new__(CTypesArray) + self._blob = ctypes_array + return self + + @staticmethod + def _arg_to_ctypes(value): + return CTypesPtr._arg_to_ctypes(value) + + def __add__(self, other): + if isinstance(other, (int, long)): + return CTypesPtr._new_pointer_at( + ctypes.addressof(self._blob) + + other * ctypes.sizeof(BItem._ctype)) + else: + return NotImplemented + + @classmethod + def _cast_from(cls, source): + raise NotImplementedError("casting to %r" % ( + cls._get_c_name(),)) + # + CTypesArray._fix_class() + return CTypesArray + + def _new_struct_or_union(self, kind, name, base_ctypes_class): + # + class struct_or_union(base_ctypes_class): + pass + struct_or_union.__name__ = '%s_%s' % (kind, name) + kind1 = kind + # + class CTypesStructOrUnion(CTypesBaseStructOrUnion): + __slots__ = ['_blob'] + _ctype = struct_or_union + _reftypename = '%s &' % (name,) + _kind = kind = kind1 + # + CTypesStructOrUnion._fix_class() + return CTypesStructOrUnion + + def 
new_struct_type(self, name): + return self._new_struct_or_union('struct', name, ctypes.Structure) + + def new_union_type(self, name): + return self._new_struct_or_union('union', name, ctypes.Union) + + def complete_struct_or_union(self, CTypesStructOrUnion, fields, tp, + totalsize=-1, totalalignment=-1, sflags=0, + pack=0): + if totalsize >= 0 or totalalignment >= 0: + raise NotImplementedError("the ctypes backend of CFFI does not support " + "structures completed by verify(); please " + "compile and install the _cffi_backend module.") + struct_or_union = CTypesStructOrUnion._ctype + fnames = [fname for (fname, BField, bitsize) in fields] + btypes = [BField for (fname, BField, bitsize) in fields] + bitfields = [bitsize for (fname, BField, bitsize) in fields] + # + bfield_types = {} + cfields = [] + for (fname, BField, bitsize) in fields: + if bitsize < 0: + cfields.append((fname, BField._ctype)) + bfield_types[fname] = BField + else: + cfields.append((fname, BField._ctype, bitsize)) + bfield_types[fname] = Ellipsis + if sflags & 8: + struct_or_union._pack_ = 1 + elif pack: + struct_or_union._pack_ = pack + struct_or_union._fields_ = cfields + CTypesStructOrUnion._bfield_types = bfield_types + # + @staticmethod + def _create_ctype_obj(init): + result = struct_or_union() + if init is not None: + initialize(result, init) + return result + CTypesStructOrUnion._create_ctype_obj = _create_ctype_obj + # + def initialize(blob, init): + if is_union: + if len(init) > 1: + raise ValueError("union initializer: %d items given, but " + "only one supported (use a dict if needed)" + % (len(init),)) + if not isinstance(init, dict): + if isinstance(init, (bytes, unicode)): + raise TypeError("union initializer: got a str") + init = tuple(init) + if len(init) > len(fnames): + raise ValueError("too many values for %s initializer" % + CTypesStructOrUnion._get_c_name()) + init = dict(zip(fnames, init)) + addr = ctypes.addressof(blob) + for fname, value in init.items(): + BField, bitsize = name2fieldtype[fname] + assert bitsize < 0, \ + "not implemented: initializer with bit fields" + offset = CTypesStructOrUnion._offsetof(fname) + PTR = ctypes.POINTER(BField._ctype) + p = ctypes.cast(addr + offset, PTR) + BField._initialize(p.contents, value) + is_union = CTypesStructOrUnion._kind == 'union' + name2fieldtype = dict(zip(fnames, zip(btypes, bitfields))) + # + for fname, BField, bitsize in fields: + if fname == '': + raise NotImplementedError("nested anonymous structs/unions") + if hasattr(CTypesStructOrUnion, fname): + raise ValueError("the field name %r conflicts in " + "the ctypes backend" % fname) + if bitsize < 0: + def getter(self, fname=fname, BField=BField, + offset=CTypesStructOrUnion._offsetof(fname), + PTR=ctypes.POINTER(BField._ctype)): + addr = ctypes.addressof(self._blob) + p = ctypes.cast(addr + offset, PTR) + return BField._from_ctypes(p.contents) + def setter(self, value, fname=fname, BField=BField): + setattr(self._blob, fname, BField._to_ctypes(value)) + # + if issubclass(BField, CTypesGenericArray): + setter = None + if BField._declared_length == 0: + def getter(self, fname=fname, BFieldPtr=BField._CTPtr, + offset=CTypesStructOrUnion._offsetof(fname), + PTR=ctypes.POINTER(BField._ctype)): + addr = ctypes.addressof(self._blob) + p = ctypes.cast(addr + offset, PTR) + return BFieldPtr._from_ctypes(p) + # + else: + def getter(self, fname=fname, BField=BField): + return BField._from_ctypes(getattr(self._blob, fname)) + def setter(self, value, fname=fname, BField=BField): + # xxx obscure workaround + 
value = BField._to_ctypes(value) + oldvalue = getattr(self._blob, fname) + setattr(self._blob, fname, value) + if value != getattr(self._blob, fname): + setattr(self._blob, fname, oldvalue) + raise OverflowError("value too large for bitfield") + setattr(CTypesStructOrUnion, fname, property(getter, setter)) + # + CTypesPtr = self.ffi._get_cached_btype(model.PointerType(tp)) + for fname in fnames: + if hasattr(CTypesPtr, fname): + raise ValueError("the field name %r conflicts in " + "the ctypes backend" % fname) + def getter(self, fname=fname): + return getattr(self[0], fname) + def setter(self, value, fname=fname): + setattr(self[0], fname, value) + setattr(CTypesPtr, fname, property(getter, setter)) + + def new_function_type(self, BArgs, BResult, has_varargs): + nameargs = [BArg._get_c_name() for BArg in BArgs] + if has_varargs: + nameargs.append('...') + nameargs = ', '.join(nameargs) + # + class CTypesFunctionPtr(CTypesGenericPtr): + __slots__ = ['_own_callback', '_name'] + _ctype = ctypes.CFUNCTYPE(getattr(BResult, '_ctype', None), + *[BArg._ctype for BArg in BArgs], + use_errno=True) + _reftypename = BResult._get_c_name('(* &)(%s)' % (nameargs,)) + + def __init__(self, init, error=None): + # create a callback to the Python callable init() + import traceback + assert not has_varargs, "varargs not supported for callbacks" + if getattr(BResult, '_ctype', None) is not None: + error = BResult._from_ctypes( + BResult._create_ctype_obj(error)) + else: + error = None + def callback(*args): + args2 = [] + for arg, BArg in zip(args, BArgs): + args2.append(BArg._from_ctypes(arg)) + try: + res2 = init(*args2) + res2 = BResult._to_ctypes(res2) + except: + traceback.print_exc() + res2 = error + if issubclass(BResult, CTypesGenericPtr): + if res2: + res2 = ctypes.cast(res2, ctypes.c_void_p).value + # .value: http://bugs.python.org/issue1574593 + else: + res2 = None + #print repr(res2) + return res2 + if issubclass(BResult, CTypesGenericPtr): + # The only pointers callbacks can return are void*s: + # http://bugs.python.org/issue5710 + callback_ctype = ctypes.CFUNCTYPE( + ctypes.c_void_p, + *[BArg._ctype for BArg in BArgs], + use_errno=True) + else: + callback_ctype = CTypesFunctionPtr._ctype + self._as_ctype_ptr = callback_ctype(callback) + self._address = ctypes.cast(self._as_ctype_ptr, + ctypes.c_void_p).value + self._own_callback = init + + @staticmethod + def _initialize(ctypes_ptr, value): + if value: + raise NotImplementedError("ctypes backend: not supported: " + "initializers for function pointers") + + def __repr__(self): + c_name = getattr(self, '_name', None) + if c_name: + i = self._reftypename.index('(* &)') + if self._reftypename[i-1] not in ' )*': + c_name = ' ' + c_name + c_name = self._reftypename.replace('(* &)', c_name) + return CTypesData.__repr__(self, c_name) + + def _get_own_repr(self): + if getattr(self, '_own_callback', None) is not None: + return 'calling %r' % (self._own_callback,) + return super(CTypesFunctionPtr, self)._get_own_repr() + + def __call__(self, *args): + if has_varargs: + assert len(args) >= len(BArgs) + extraargs = args[len(BArgs):] + args = args[:len(BArgs)] + else: + assert len(args) == len(BArgs) + ctypes_args = [] + for arg, BArg in zip(args, BArgs): + ctypes_args.append(BArg._arg_to_ctypes(arg)) + if has_varargs: + for i, arg in enumerate(extraargs): + if arg is None: + ctypes_args.append(ctypes.c_void_p(0)) # NULL + continue + if not isinstance(arg, CTypesData): + raise TypeError( + "argument %d passed in the variadic part " + "needs to be a cdata object 
(got %s)" % + (1 + len(BArgs) + i, type(arg).__name__)) + ctypes_args.append(arg._arg_to_ctypes(arg)) + result = self._as_ctype_ptr(*ctypes_args) + return BResult._from_ctypes(result) + # + CTypesFunctionPtr._fix_class() + return CTypesFunctionPtr + + def new_enum_type(self, name, enumerators, enumvalues, CTypesInt): + assert isinstance(name, str) + reverse_mapping = dict(zip(reversed(enumvalues), + reversed(enumerators))) + # + class CTypesEnum(CTypesInt): + __slots__ = [] + _reftypename = '%s &' % name + + def _get_own_repr(self): + value = self._value + try: + return '%d: %s' % (value, reverse_mapping[value]) + except KeyError: + return str(value) + + def _to_string(self, maxlen): + value = self._value + try: + return reverse_mapping[value] + except KeyError: + return str(value) + # + CTypesEnum._fix_class() + return CTypesEnum + + def get_errno(self): + return ctypes.get_errno() + + def set_errno(self, value): + ctypes.set_errno(value) + + def string(self, b, maxlen=-1): + return b._to_string(maxlen) + + def buffer(self, bptr, size=-1): + raise NotImplementedError("buffer() with ctypes backend") + + def sizeof(self, cdata_or_BType): + if isinstance(cdata_or_BType, CTypesData): + return cdata_or_BType._get_size_of_instance() + else: + assert issubclass(cdata_or_BType, CTypesData) + return cdata_or_BType._get_size() + + def alignof(self, BType): + assert issubclass(BType, CTypesData) + return BType._alignment() + + def newp(self, BType, source): + if not issubclass(BType, CTypesData): + raise TypeError + return BType._newp(source) + + def cast(self, BType, source): + return BType._cast_from(source) + + def callback(self, BType, source, error, onerror): + assert onerror is None # XXX not implemented + return BType(source, error) + + _weakref_cache_ref = None + + def gcp(self, cdata, destructor, size=0): + if self._weakref_cache_ref is None: + import weakref + class MyRef(weakref.ref): + def __eq__(self, other): + myref = self() + return self is other or ( + myref is not None and myref is other()) + def __ne__(self, other): + return not (self == other) + def __hash__(self): + try: + return self._hash + except AttributeError: + self._hash = hash(self()) + return self._hash + self._weakref_cache_ref = {}, MyRef + weak_cache, MyRef = self._weakref_cache_ref + + if destructor is None: + try: + del weak_cache[MyRef(cdata)] + except KeyError: + raise TypeError("Can remove destructor only on a object " + "previously returned by ffi.gc()") + return None + + def remove(k): + cdata, destructor = weak_cache.pop(k, (None, None)) + if destructor is not None: + destructor(cdata) + + new_cdata = self.cast(self.typeof(cdata), cdata) + assert new_cdata is not cdata + weak_cache[MyRef(new_cdata, remove)] = (cdata, destructor) + return new_cdata + + typeof = type + + def getcname(self, BType, replace_with): + return BType._get_c_name(replace_with) + + def typeoffsetof(self, BType, fieldname, num=0): + if isinstance(fieldname, str): + if num == 0 and issubclass(BType, CTypesGenericPtr): + BType = BType._BItem + if not issubclass(BType, CTypesBaseStructOrUnion): + raise TypeError("expected a struct or union ctype") + BField = BType._bfield_types[fieldname] + if BField is Ellipsis: + raise TypeError("not supported for bitfields") + return (BField, BType._offsetof(fieldname)) + elif isinstance(fieldname, (int, long)): + if issubclass(BType, CTypesGenericArray): + BType = BType._CTPtr + if not issubclass(BType, CTypesGenericPtr): + raise TypeError("expected an array or ptr ctype") + BItem = BType._BItem + offset 
= BItem._get_size() * fieldname + if offset > sys.maxsize: + raise OverflowError + return (BItem, offset) + else: + raise TypeError(type(fieldname)) + + def rawaddressof(self, BTypePtr, cdata, offset=None): + if isinstance(cdata, CTypesBaseStructOrUnion): + ptr = ctypes.pointer(type(cdata)._to_ctypes(cdata)) + elif isinstance(cdata, CTypesGenericPtr): + if offset is None or not issubclass(type(cdata)._BItem, + CTypesBaseStructOrUnion): + raise TypeError("unexpected cdata type") + ptr = type(cdata)._to_ctypes(cdata) + elif isinstance(cdata, CTypesGenericArray): + ptr = type(cdata)._to_ctypes(cdata) + else: + raise TypeError("expected a <cdata 'struct-or-union'>") + if offset: + ptr = ctypes.cast( + ctypes.c_void_p( + ctypes.cast(ptr, ctypes.c_void_p).value + offset), + type(ptr)) + return BTypePtr._from_ctypes(ptr) + + +class CTypesLibrary(object): + + def __init__(self, backend, cdll): + self.backend = backend + self.cdll = cdll + + def load_function(self, BType, name): + c_func = getattr(self.cdll, name) + funcobj = BType._from_ctypes(c_func) + funcobj._name = name + return funcobj + + def read_variable(self, BType, name): + try: + ctypes_obj = BType._ctype.in_dll(self.cdll, name) + except AttributeError as e: + raise NotImplementedError(e) + return BType._from_ctypes(ctypes_obj) + + def write_variable(self, BType, name, value): + new_ctypes_obj = BType._to_ctypes(value) + ctypes_obj = BType._ctype.in_dll(self.cdll, name) + ctypes.memmove(ctypes.addressof(ctypes_obj), + ctypes.addressof(new_ctypes_obj), + ctypes.sizeof(BType._ctype)) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cffi/cffi_opcode.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cffi/cffi_opcode.py new file mode 100644 index 0000000000000000000000000000000000000000..a0df98d1c743790f4047672abcae0d00f993a2ce --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cffi/cffi_opcode.py @@ -0,0 +1,187 @@ +from .error import VerificationError + +class CffiOp(object): + def __init__(self, op, arg): + self.op = op + self.arg = arg + + def as_c_expr(self): + if self.op is None: + assert isinstance(self.arg, str) + return '(_cffi_opcode_t)(%s)' % (self.arg,) + classname = CLASS_NAME[self.op] + return '_CFFI_OP(_CFFI_OP_%s, %s)' % (classname, self.arg) + + def as_python_bytes(self): + if self.op is None and self.arg.isdigit(): + value = int(self.arg) # non-negative: '-' not in self.arg + if value >= 2**31: + raise OverflowError("cannot emit %r: limited to 2**31-1" + % (self.arg,)) + return format_four_bytes(value) + if isinstance(self.arg, str): + raise VerificationError("cannot emit to Python: %r" % (self.arg,)) + return format_four_bytes((self.arg << 8) | self.op) + + def __str__(self): + classname = CLASS_NAME.get(self.op, self.op) + return '(%s %s)' % (classname, self.arg) + +def format_four_bytes(num): + return '\\x%02X\\x%02X\\x%02X\\x%02X' % ( + (num >> 24) & 0xFF, + (num >> 16) & 0xFF, + (num >> 8) & 0xFF, + (num ) & 0xFF) + +OP_PRIMITIVE = 1 +OP_POINTER = 3 +OP_ARRAY = 5 +OP_OPEN_ARRAY = 7 +OP_STRUCT_UNION = 9 +OP_ENUM = 11 +OP_FUNCTION = 13 +OP_FUNCTION_END = 15 +OP_NOOP = 17 +OP_BITFIELD = 19 +OP_TYPENAME = 21 +OP_CPYTHON_BLTN_V = 23 # varargs +OP_CPYTHON_BLTN_N = 25 # noargs +OP_CPYTHON_BLTN_O = 27 # O (i.e. 
a single arg) +OP_CONSTANT = 29 +OP_CONSTANT_INT = 31 +OP_GLOBAL_VAR = 33 +OP_DLOPEN_FUNC = 35 +OP_DLOPEN_CONST = 37 +OP_GLOBAL_VAR_F = 39 +OP_EXTERN_PYTHON = 41 + +PRIM_VOID = 0 +PRIM_BOOL = 1 +PRIM_CHAR = 2 +PRIM_SCHAR = 3 +PRIM_UCHAR = 4 +PRIM_SHORT = 5 +PRIM_USHORT = 6 +PRIM_INT = 7 +PRIM_UINT = 8 +PRIM_LONG = 9 +PRIM_ULONG = 10 +PRIM_LONGLONG = 11 +PRIM_ULONGLONG = 12 +PRIM_FLOAT = 13 +PRIM_DOUBLE = 14 +PRIM_LONGDOUBLE = 15 + +PRIM_WCHAR = 16 +PRIM_INT8 = 17 +PRIM_UINT8 = 18 +PRIM_INT16 = 19 +PRIM_UINT16 = 20 +PRIM_INT32 = 21 +PRIM_UINT32 = 22 +PRIM_INT64 = 23 +PRIM_UINT64 = 24 +PRIM_INTPTR = 25 +PRIM_UINTPTR = 26 +PRIM_PTRDIFF = 27 +PRIM_SIZE = 28 +PRIM_SSIZE = 29 +PRIM_INT_LEAST8 = 30 +PRIM_UINT_LEAST8 = 31 +PRIM_INT_LEAST16 = 32 +PRIM_UINT_LEAST16 = 33 +PRIM_INT_LEAST32 = 34 +PRIM_UINT_LEAST32 = 35 +PRIM_INT_LEAST64 = 36 +PRIM_UINT_LEAST64 = 37 +PRIM_INT_FAST8 = 38 +PRIM_UINT_FAST8 = 39 +PRIM_INT_FAST16 = 40 +PRIM_UINT_FAST16 = 41 +PRIM_INT_FAST32 = 42 +PRIM_UINT_FAST32 = 43 +PRIM_INT_FAST64 = 44 +PRIM_UINT_FAST64 = 45 +PRIM_INTMAX = 46 +PRIM_UINTMAX = 47 +PRIM_FLOATCOMPLEX = 48 +PRIM_DOUBLECOMPLEX = 49 +PRIM_CHAR16 = 50 +PRIM_CHAR32 = 51 + +_NUM_PRIM = 52 +_UNKNOWN_PRIM = -1 +_UNKNOWN_FLOAT_PRIM = -2 +_UNKNOWN_LONG_DOUBLE = -3 + +_IO_FILE_STRUCT = -1 + +PRIMITIVE_TO_INDEX = { + 'char': PRIM_CHAR, + 'short': PRIM_SHORT, + 'int': PRIM_INT, + 'long': PRIM_LONG, + 'long long': PRIM_LONGLONG, + 'signed char': PRIM_SCHAR, + 'unsigned char': PRIM_UCHAR, + 'unsigned short': PRIM_USHORT, + 'unsigned int': PRIM_UINT, + 'unsigned long': PRIM_ULONG, + 'unsigned long long': PRIM_ULONGLONG, + 'float': PRIM_FLOAT, + 'double': PRIM_DOUBLE, + 'long double': PRIM_LONGDOUBLE, + 'float _Complex': PRIM_FLOATCOMPLEX, + 'double _Complex': PRIM_DOUBLECOMPLEX, + '_Bool': PRIM_BOOL, + 'wchar_t': PRIM_WCHAR, + 'char16_t': PRIM_CHAR16, + 'char32_t': PRIM_CHAR32, + 'int8_t': PRIM_INT8, + 'uint8_t': PRIM_UINT8, + 'int16_t': PRIM_INT16, + 'uint16_t': PRIM_UINT16, + 'int32_t': PRIM_INT32, + 'uint32_t': PRIM_UINT32, + 'int64_t': PRIM_INT64, + 'uint64_t': PRIM_UINT64, + 'intptr_t': PRIM_INTPTR, + 'uintptr_t': PRIM_UINTPTR, + 'ptrdiff_t': PRIM_PTRDIFF, + 'size_t': PRIM_SIZE, + 'ssize_t': PRIM_SSIZE, + 'int_least8_t': PRIM_INT_LEAST8, + 'uint_least8_t': PRIM_UINT_LEAST8, + 'int_least16_t': PRIM_INT_LEAST16, + 'uint_least16_t': PRIM_UINT_LEAST16, + 'int_least32_t': PRIM_INT_LEAST32, + 'uint_least32_t': PRIM_UINT_LEAST32, + 'int_least64_t': PRIM_INT_LEAST64, + 'uint_least64_t': PRIM_UINT_LEAST64, + 'int_fast8_t': PRIM_INT_FAST8, + 'uint_fast8_t': PRIM_UINT_FAST8, + 'int_fast16_t': PRIM_INT_FAST16, + 'uint_fast16_t': PRIM_UINT_FAST16, + 'int_fast32_t': PRIM_INT_FAST32, + 'uint_fast32_t': PRIM_UINT_FAST32, + 'int_fast64_t': PRIM_INT_FAST64, + 'uint_fast64_t': PRIM_UINT_FAST64, + 'intmax_t': PRIM_INTMAX, + 'uintmax_t': PRIM_UINTMAX, + } + +F_UNION = 0x01 +F_CHECK_FIELDS = 0x02 +F_PACKED = 0x04 +F_EXTERNAL = 0x08 +F_OPAQUE = 0x10 + +G_FLAGS = dict([('_CFFI_' + _key, globals()[_key]) + for _key in ['F_UNION', 'F_CHECK_FIELDS', 'F_PACKED', + 'F_EXTERNAL', 'F_OPAQUE']]) + +CLASS_NAME = {} +for _name, _value in list(globals().items()): + if _name.startswith('OP_') and isinstance(_value, int): + CLASS_NAME[_value] = _name[3:] diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cffi/commontypes.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cffi/commontypes.py new file mode 100644 index 
0000000000000000000000000000000000000000..8ec97c756a4b1023fd3963dd39b706f7c0e34373 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cffi/commontypes.py @@ -0,0 +1,80 @@ +import sys +from . import model +from .error import FFIError + + +COMMON_TYPES = {} + +try: + # fetch "bool" and all simple Windows types + from _cffi_backend import _get_common_types + _get_common_types(COMMON_TYPES) +except ImportError: + pass + +COMMON_TYPES['FILE'] = model.unknown_type('FILE', '_IO_FILE') +COMMON_TYPES['bool'] = '_Bool' # in case we got ImportError above + +for _type in model.PrimitiveType.ALL_PRIMITIVE_TYPES: + if _type.endswith('_t'): + COMMON_TYPES[_type] = _type +del _type + +_CACHE = {} + +def resolve_common_type(parser, commontype): + try: + return _CACHE[commontype] + except KeyError: + cdecl = COMMON_TYPES.get(commontype, commontype) + if not isinstance(cdecl, str): + result, quals = cdecl, 0 # cdecl is already a BaseType + elif cdecl in model.PrimitiveType.ALL_PRIMITIVE_TYPES: + result, quals = model.PrimitiveType(cdecl), 0 + elif cdecl == 'set-unicode-needed': + raise FFIError("The Windows type %r is only available after " + "you call ffi.set_unicode()" % (commontype,)) + else: + if commontype == cdecl: + raise FFIError( + "Unsupported type: %r. Please look at " + "http://cffi.readthedocs.io/en/latest/cdef.html#ffi-cdef-limitations " + "and file an issue if you think this type should really " + "be supported." % (commontype,)) + result, quals = parser.parse_type_and_quals(cdecl) # recursive + + assert isinstance(result, model.BaseTypeByIdentity) + _CACHE[commontype] = result, quals + return result, quals + + +# ____________________________________________________________ +# extra types for Windows (most of them are in commontypes.c) + + +def win_common_types(): + return { + "UNICODE_STRING": model.StructType( + "_UNICODE_STRING", + ["Length", + "MaximumLength", + "Buffer"], + [model.PrimitiveType("unsigned short"), + model.PrimitiveType("unsigned short"), + model.PointerType(model.PrimitiveType("wchar_t"))], + [-1, -1, -1]), + "PUNICODE_STRING": "UNICODE_STRING *", + "PCUNICODE_STRING": "const UNICODE_STRING *", + + "TBYTE": "set-unicode-needed", + "TCHAR": "set-unicode-needed", + "LPCTSTR": "set-unicode-needed", + "PCTSTR": "set-unicode-needed", + "LPTSTR": "set-unicode-needed", + "PTSTR": "set-unicode-needed", + "PTBYTE": "set-unicode-needed", + "PTCHAR": "set-unicode-needed", + } + +if sys.platform == 'win32': + COMMON_TYPES.update(win_common_types()) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cffi/cparser.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cffi/cparser.py new file mode 100644 index 0000000000000000000000000000000000000000..74830e913f21409f536febddae7769d0364cd24b --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cffi/cparser.py @@ -0,0 +1,1006 @@ +from . import model +from .commontypes import COMMON_TYPES, resolve_common_type +from .error import FFIError, CDefError +try: + from . import _pycparser as pycparser +except ImportError: + import pycparser +import weakref, re, sys + +try: + if sys.version_info < (3,): + import thread as _thread + else: + import _thread + lock = _thread.allocate_lock() +except ImportError: + lock = None + +def _workaround_for_static_import_finders(): + # Issue #392: packaging tools like cx_Freeze can not find these + # because pycparser uses exec dynamic import. 
This is an obscure + # workaround. This function is never called. + import pycparser.yacctab + import pycparser.lextab + +CDEF_SOURCE_STRING = "<cdef source string>" +_r_comment = re.compile(r"/\*.*?\*/|//([^\n\\]|\\.)*?$", + re.DOTALL | re.MULTILINE) +_r_define = re.compile(r"^\s*#\s*define\s+([A-Za-z_][A-Za-z_0-9]*)" + r"\b((?:[^\n\\]|\\.)*?)$", + re.DOTALL | re.MULTILINE) +_r_line_directive = re.compile(r"^[ \t]*#[ \t]*(?:line|\d+)\b.*$", re.MULTILINE) +_r_partial_enum = re.compile(r"=\s*\.\.\.\s*[,}]|\.\.\.\s*\}") +_r_enum_dotdotdot = re.compile(r"__dotdotdot\d+__$") +_r_partial_array = re.compile(r"\[\s*\.\.\.\s*\]") +_r_words = re.compile(r"\w+|\S") +_parser_cache = None +_r_int_literal = re.compile(r"-?0?x?[0-9a-f]+[lu]*$", re.IGNORECASE) +_r_stdcall1 = re.compile(r"\b(__stdcall|WINAPI)\b") +_r_stdcall2 = re.compile(r"[(]\s*(__stdcall|WINAPI)\b") +_r_cdecl = re.compile(r"\b__cdecl\b") +_r_extern_python = re.compile(r'\bextern\s*"' + r'(Python|Python\s*\+\s*C|C\s*\+\s*Python)"\s*.') +_r_star_const_space = re.compile( # matches "* const " + r"[*]\s*((const|volatile|restrict)\b\s*)+") +_r_int_dotdotdot = re.compile(r"(\b(int|long|short|signed|unsigned|char)\s*)+" + r"\.\.\.") +_r_float_dotdotdot = re.compile(r"\b(double|float)\s*\.\.\.") + +def _get_parser(): + global _parser_cache + if _parser_cache is None: + _parser_cache = pycparser.CParser() + return _parser_cache + +def _workaround_for_old_pycparser(csource): + # Workaround for a pycparser issue (fixed between pycparser 2.10 and + # 2.14): "char*const***" gives us a wrong syntax tree, the same as + # for "char***(*const)". This means we can't tell the difference + # afterwards. But "char(*const(***))" gives us the right syntax + # tree. The issue only occurs if there are several stars in + # sequence with no parenthesis inbetween, just possibly qualifiers. + # Attempt to fix it by adding some parentheses in the source: each + # time we see "* const" or "* const *", we add an opening + # parenthesis before each star---the hard part is figuring out where + # to close them. + parts = [] + while True: + match = _r_star_const_space.search(csource) + if not match: + break + #print repr(''.join(parts)+csource), '=>', + parts.append(csource[:match.start()]) + parts.append('('); closing = ')' + parts.append(match.group()) # e.g. 
"* const " + endpos = match.end() + if csource.startswith('*', endpos): + parts.append('('); closing += ')' + level = 0 + i = endpos + while i < len(csource): + c = csource[i] + if c == '(': + level += 1 + elif c == ')': + if level == 0: + break + level -= 1 + elif c in ',;=': + if level == 0: + break + i += 1 + csource = csource[endpos:i] + closing + csource[i:] + #print repr(''.join(parts)+csource) + parts.append(csource) + return ''.join(parts) + +def _preprocess_extern_python(csource): + # input: `extern "Python" int foo(int);` or + # `extern "Python" { int foo(int); }` + # output: + # void __cffi_extern_python_start; + # int foo(int); + # void __cffi_extern_python_stop; + # + # input: `extern "Python+C" int foo(int);` + # output: + # void __cffi_extern_python_plus_c_start; + # int foo(int); + # void __cffi_extern_python_stop; + parts = [] + while True: + match = _r_extern_python.search(csource) + if not match: + break + endpos = match.end() - 1 + #print + #print ''.join(parts)+csource + #print '=>' + parts.append(csource[:match.start()]) + if 'C' in match.group(1): + parts.append('void __cffi_extern_python_plus_c_start; ') + else: + parts.append('void __cffi_extern_python_start; ') + if csource[endpos] == '{': + # grouping variant + closing = csource.find('}', endpos) + if closing < 0: + raise CDefError("'extern \"Python\" {': no '}' found") + if csource.find('{', endpos + 1, closing) >= 0: + raise NotImplementedError("cannot use { } inside a block " + "'extern \"Python\" { ... }'") + parts.append(csource[endpos+1:closing]) + csource = csource[closing+1:] + else: + # non-grouping variant + semicolon = csource.find(';', endpos) + if semicolon < 0: + raise CDefError("'extern \"Python\": no ';' found") + parts.append(csource[endpos:semicolon+1]) + csource = csource[semicolon+1:] + parts.append(' void __cffi_extern_python_stop;') + #print ''.join(parts)+csource + #print + parts.append(csource) + return ''.join(parts) + +def _warn_for_string_literal(csource): + if '"' not in csource: + return + for line in csource.splitlines(): + if '"' in line and not line.lstrip().startswith('#'): + import warnings + warnings.warn("String literal found in cdef() or type source. " + "String literals are ignored here, but you should " + "remove them anyway because some character sequences " + "confuse pre-parsing.") + break + +def _warn_for_non_extern_non_static_global_variable(decl): + if not decl.storage: + import warnings + warnings.warn("Global variable '%s' in cdef(): for consistency " + "with C it should have a storage class specifier " + "(usually 'extern')" % (decl.name,)) + +def _remove_line_directives(csource): + # _r_line_directive matches whole lines, without the final \n, if they + # start with '#line' with some spacing allowed, or '#NUMBER'. This + # function stores them away and replaces them with exactly the string + # '#line@N', where N is the index in the list 'line_directives'. 
+ line_directives = [] + def replace(m): + i = len(line_directives) + line_directives.append(m.group()) + return '#line@%d' % i + csource = _r_line_directive.sub(replace, csource) + return csource, line_directives + +def _put_back_line_directives(csource, line_directives): + def replace(m): + s = m.group() + if not s.startswith('#line@'): + raise AssertionError("unexpected #line directive " + "(should have been processed and removed)") + return line_directives[int(s[6:])] + return _r_line_directive.sub(replace, csource) + +def _preprocess(csource): + # First, remove the lines of the form '#line N "filename"' because + # the "filename" part could confuse the rest + csource, line_directives = _remove_line_directives(csource) + # Remove comments. NOTE: this only works because the cdef() section + # should not contain any string literals (except in line directives)! + def replace_keeping_newlines(m): + return ' ' + m.group().count('\n') * '\n' + csource = _r_comment.sub(replace_keeping_newlines, csource) + # Remove the "#define FOO x" lines + macros = {} + for match in _r_define.finditer(csource): + macroname, macrovalue = match.groups() + macrovalue = macrovalue.replace('\\\n', '').strip() + macros[macroname] = macrovalue + csource = _r_define.sub('', csource) + # + if pycparser.__version__ < '2.14': + csource = _workaround_for_old_pycparser(csource) + # + # BIG HACK: replace WINAPI or __stdcall with "volatile volatile const". + # It doesn't make sense for the return type of a function to be + # "volatile volatile const", so we abuse it to detect __stdcall... + # Hack number 2 is that "int(volatile *fptr)();" is not valid C + # syntax, so we place the "volatile" before the opening parenthesis. + csource = _r_stdcall2.sub(' volatile volatile const(', csource) + csource = _r_stdcall1.sub(' volatile volatile const ', csource) + csource = _r_cdecl.sub(' ', csource) + # + # Replace `extern "Python"` with start/end markers + csource = _preprocess_extern_python(csource) + # + # Now there should not be any string literal left; warn if we get one + _warn_for_string_literal(csource) + # + # Replace "[...]" with "[__dotdotdotarray__]" + csource = _r_partial_array.sub('[__dotdotdotarray__]', csource) + # + # Replace "...}" with "__dotdotdotNUM__}". This construction should + # occur only at the end of enums; at the end of structs we have "...;}" + # and at the end of vararg functions "...);". Also replace "=...[,}]" + # with ",__dotdotdotNUM__[,}]": this occurs in the enums too, when + # giving an unknown value. + matches = list(_r_partial_enum.finditer(csource)) + for number, match in enumerate(reversed(matches)): + p = match.start() + if csource[p] == '=': + p2 = csource.find('...', p, match.end()) + assert p2 > p + csource = '%s,__dotdotdot%d__ %s' % (csource[:p], number, + csource[p2+3:]) + else: + assert csource[p:p+3] == '...' + csource = '%s __dotdotdot%d__ %s' % (csource[:p], number, + csource[p+3:]) + # Replace "int ..." or "unsigned long int..." with "__dotdotdotint__" + csource = _r_int_dotdotdot.sub(' __dotdotdotint__ ', csource) + # Replace "float ..." or "double..." with "__dotdotdotfloat__" + csource = _r_float_dotdotdot.sub(' __dotdotdotfloat__ ', csource) + # Replace all remaining "..." with the same name, "__dotdotdot__", + # which is declared with a typedef for the purpose of C parsing. 
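+    # Illustrative sketch (not in upstream cffi): at this point a cdef
+    # such as "int foo(int, ...);" becomes "int foo(int,  __dotdotdot__ );",
+    # which pycparser can parse because _parse() typedefs __dotdotdot__
+    # to int before handing the source over.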
+ csource = csource.replace('...', ' __dotdotdot__ ') + # Finally, put back the line directives + csource = _put_back_line_directives(csource, line_directives) + return csource, macros + +def _common_type_names(csource): + # Look in the source for what looks like usages of types from the + # list of common types. A "usage" is approximated here as the + # appearance of the word, minus a "definition" of the type, which + # is the last word in a "typedef" statement. Approximative only + # but should be fine for all the common types. + look_for_words = set(COMMON_TYPES) + look_for_words.add(';') + look_for_words.add(',') + look_for_words.add('(') + look_for_words.add(')') + look_for_words.add('typedef') + words_used = set() + is_typedef = False + paren = 0 + previous_word = '' + for word in _r_words.findall(csource): + if word in look_for_words: + if word == ';': + if is_typedef: + words_used.discard(previous_word) + look_for_words.discard(previous_word) + is_typedef = False + elif word == 'typedef': + is_typedef = True + paren = 0 + elif word == '(': + paren += 1 + elif word == ')': + paren -= 1 + elif word == ',': + if is_typedef and paren == 0: + words_used.discard(previous_word) + look_for_words.discard(previous_word) + else: # word in COMMON_TYPES + words_used.add(word) + previous_word = word + return words_used + + +class Parser(object): + + def __init__(self): + self._declarations = {} + self._included_declarations = set() + self._anonymous_counter = 0 + self._structnode2type = weakref.WeakKeyDictionary() + self._options = {} + self._int_constants = {} + self._recomplete = [] + self._uses_new_feature = None + + def _parse(self, csource): + csource, macros = _preprocess(csource) + # XXX: for more efficiency we would need to poke into the + # internals of CParser... the following registers the + # typedefs, because their presence or absence influences the + # parsing itself (but what they are typedef'ed to plays no role) + ctn = _common_type_names(csource) + typenames = [] + for name in sorted(self._declarations): + if name.startswith('typedef '): + name = name[8:] + typenames.append(name) + ctn.discard(name) + typenames += sorted(ctn) + # + csourcelines = [] + csourcelines.append('# 1 "<cdef source string>"') + for typename in typenames: + csourcelines.append('typedef int %s;' % typename) + csourcelines.append('typedef int __dotdotdotint__, __dotdotdotfloat__,' + ' __dotdotdot__;') + # this forces pycparser to consider the following in the file + # called <cdef source string> from line 1 + csourcelines.append('# 1 "%s"' % (CDEF_SOURCE_STRING,)) + csourcelines.append(csource) + fullcsource = '\n'.join(csourcelines) + if lock is not None: + lock.acquire() # pycparser is not thread-safe... + try: + ast = _get_parser().parse(fullcsource) + except pycparser.c_parser.ParseError as e: + self.convert_pycparser_error(e, csource) + finally: + if lock is not None: + lock.release() + # csource will be used to find buggy source text + return ast, macros, csource + + def _convert_pycparser_error(self, e, csource): + # xxx look for "<cdef source string>:NUM:" at the start of str(e) + # and interpret that as a line number. This will not work if + # the user gives explicit ``# NUM "FILE"`` directives. 
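+        # Illustrative example (assumed pycparser message format, not in
+        # upstream cffi): an error whose str() begins with
+        # '<cdef source string>:3:10:' would be mapped back to the 3rd
+        # line of the user's csource.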
+ line = None + msg = str(e) + match = re.match(r"%s:(\d+):" % (CDEF_SOURCE_STRING,), msg) + if match: + linenum = int(match.group(1), 10) + csourcelines = csource.splitlines() + if 1 <= linenum <= len(csourcelines): + line = csourcelines[linenum-1] + return line + + def convert_pycparser_error(self, e, csource): + line = self._convert_pycparser_error(e, csource) + + msg = str(e) + if line: + msg = 'cannot parse "%s"\n%s' % (line.strip(), msg) + else: + msg = 'parse error\n%s' % (msg,) + raise CDefError(msg) + + def parse(self, csource, override=False, packed=False, pack=None, + dllexport=False): + if packed: + if packed != True: + raise ValueError("'packed' should be False or True; use " + "'pack' to give another value") + if pack: + raise ValueError("cannot give both 'pack' and 'packed'") + pack = 1 + elif pack: + if pack & (pack - 1): + raise ValueError("'pack' must be a power of two, not %r" % + (pack,)) + else: + pack = 0 + prev_options = self._options + try: + self._options = {'override': override, + 'packed': pack, + 'dllexport': dllexport} + self._internal_parse(csource) + finally: + self._options = prev_options + + def _internal_parse(self, csource): + ast, macros, csource = self._parse(csource) + # add the macros + self._process_macros(macros) + # find the first "__dotdotdot__" and use that as a separator + # between the repeated typedefs and the real csource + iterator = iter(ast.ext) + for decl in iterator: + if decl.name == '__dotdotdot__': + break + else: + assert 0 + current_decl = None + # + try: + self._inside_extern_python = '__cffi_extern_python_stop' + for decl in iterator: + current_decl = decl + if isinstance(decl, pycparser.c_ast.Decl): + self._parse_decl(decl) + elif isinstance(decl, pycparser.c_ast.Typedef): + if not decl.name: + raise CDefError("typedef does not declare any name", + decl) + quals = 0 + if (isinstance(decl.type.type, pycparser.c_ast.IdentifierType) and + decl.type.type.names[-1].startswith('__dotdotdot')): + realtype = self._get_unknown_type(decl) + elif (isinstance(decl.type, pycparser.c_ast.PtrDecl) and + isinstance(decl.type.type, pycparser.c_ast.TypeDecl) and + isinstance(decl.type.type.type, + pycparser.c_ast.IdentifierType) and + decl.type.type.type.names[-1].startswith('__dotdotdot')): + realtype = self._get_unknown_ptr_type(decl) + else: + realtype, quals = self._get_type_and_quals( + decl.type, name=decl.name, partial_length_ok=True, + typedef_example="*(%s *)0" % (decl.name,)) + self._declare('typedef ' + decl.name, realtype, quals=quals) + elif decl.__class__.__name__ == 'Pragma': + pass # skip pragma, only in pycparser 2.15 + else: + raise CDefError("unexpected <%s>: this construct is valid " + "C but not valid in cdef()" % + decl.__class__.__name__, decl) + except CDefError as e: + if len(e.args) == 1: + e.args = e.args + (current_decl,) + raise + except FFIError as e: + msg = self._convert_pycparser_error(e, csource) + if msg: + e.args = (e.args[0] + "\n *** Err: %s" % msg,) + raise + + def _add_constants(self, key, val): + if key in self._int_constants: + if self._int_constants[key] == val: + return # ignore identical double declarations + raise FFIError( + "multiple declarations of constant: %s" % (key,)) + self._int_constants[key] = val + + def _add_integer_constant(self, name, int_str): + int_str = int_str.lower().rstrip("ul") + neg = int_str.startswith('-') + if neg: + int_str = int_str[1:] + # "010" is not valid oct in py3 + if (int_str.startswith("0") and int_str != '0' + and not int_str.startswith("0x")): + int_str = "0o" + 
int_str[1:] + pyvalue = int(int_str, 0) + if neg: + pyvalue = -pyvalue + self._add_constants(name, pyvalue) + self._declare('macro ' + name, pyvalue) + + def _process_macros(self, macros): + for key, value in macros.items(): + value = value.strip() + if _r_int_literal.match(value): + self._add_integer_constant(key, value) + elif value == '...': + self._declare('macro ' + key, value) + else: + raise CDefError( + 'only supports one of the following syntax:\n' + ' #define %s ... (literally dot-dot-dot)\n' + ' #define %s NUMBER (with NUMBER an integer' + ' constant, decimal/hex/octal)\n' + 'got:\n' + ' #define %s %s' + % (key, key, key, value)) + + def _declare_function(self, tp, quals, decl): + tp = self._get_type_pointer(tp, quals) + if self._options.get('dllexport'): + tag = 'dllexport_python ' + elif self._inside_extern_python == '__cffi_extern_python_start': + tag = 'extern_python ' + elif self._inside_extern_python == '__cffi_extern_python_plus_c_start': + tag = 'extern_python_plus_c ' + else: + tag = 'function ' + self._declare(tag + decl.name, tp) + + def _parse_decl(self, decl): + node = decl.type + if isinstance(node, pycparser.c_ast.FuncDecl): + tp, quals = self._get_type_and_quals(node, name=decl.name) + assert isinstance(tp, model.RawFunctionType) + self._declare_function(tp, quals, decl) + else: + if isinstance(node, pycparser.c_ast.Struct): + self._get_struct_union_enum_type('struct', node) + elif isinstance(node, pycparser.c_ast.Union): + self._get_struct_union_enum_type('union', node) + elif isinstance(node, pycparser.c_ast.Enum): + self._get_struct_union_enum_type('enum', node) + elif not decl.name: + raise CDefError("construct does not declare any variable", + decl) + # + if decl.name: + tp, quals = self._get_type_and_quals(node, + partial_length_ok=True) + if tp.is_raw_function: + self._declare_function(tp, quals, decl) + elif (tp.is_integer_type() and + hasattr(decl, 'init') and + hasattr(decl.init, 'value') and + _r_int_literal.match(decl.init.value)): + self._add_integer_constant(decl.name, decl.init.value) + elif (tp.is_integer_type() and + isinstance(decl.init, pycparser.c_ast.UnaryOp) and + decl.init.op == '-' and + hasattr(decl.init.expr, 'value') and + _r_int_literal.match(decl.init.expr.value)): + self._add_integer_constant(decl.name, + '-' + decl.init.expr.value) + elif (tp is model.void_type and + decl.name.startswith('__cffi_extern_python_')): + # hack: `extern "Python"` in the C source is replaced + # with "void __cffi_extern_python_start;" and + # "void __cffi_extern_python_stop;" + self._inside_extern_python = decl.name + else: + if self._inside_extern_python !='__cffi_extern_python_stop': + raise CDefError( + "cannot declare constants or " + "variables with 'extern \"Python\"'") + if (quals & model.Q_CONST) and not tp.is_array_type: + self._declare('constant ' + decl.name, tp, quals=quals) + else: + _warn_for_non_extern_non_static_global_variable(decl) + self._declare('variable ' + decl.name, tp, quals=quals) + + def parse_type(self, cdecl): + return self.parse_type_and_quals(cdecl)[0] + + def parse_type_and_quals(self, cdecl): + ast, macros = self._parse('void __dummy(\n%s\n);' % cdecl)[:2] + assert not macros + exprnode = ast.ext[-1].type.args.params[0] + if isinstance(exprnode, pycparser.c_ast.ID): + raise CDefError("unknown identifier '%s'" % (exprnode.name,)) + return self._get_type_and_quals(exprnode.type) + + def _declare(self, name, obj, included=False, quals=0): + if name in self._declarations: + prevobj, prevquals = self._declarations[name] + if 
prevobj is obj and prevquals == quals: + return + if not self._options.get('override'): + raise FFIError( + "multiple declarations of %s (for interactive usage, " + "try cdef(xx, override=True))" % (name,)) + assert '__dotdotdot__' not in name.split() + self._declarations[name] = (obj, quals) + if included: + self._included_declarations.add(obj) + + def _extract_quals(self, type): + quals = 0 + if isinstance(type, (pycparser.c_ast.TypeDecl, + pycparser.c_ast.PtrDecl)): + if 'const' in type.quals: + quals |= model.Q_CONST + if 'volatile' in type.quals: + quals |= model.Q_VOLATILE + if 'restrict' in type.quals: + quals |= model.Q_RESTRICT + return quals + + def _get_type_pointer(self, type, quals, declname=None): + if isinstance(type, model.RawFunctionType): + return type.as_function_pointer() + if (isinstance(type, model.StructOrUnionOrEnum) and + type.name.startswith('$') and type.name[1:].isdigit() and + type.forcename is None and declname is not None): + return model.NamedPointerType(type, declname, quals) + return model.PointerType(type, quals) + + def _get_type_and_quals(self, typenode, name=None, partial_length_ok=False, + typedef_example=None): + # first, dereference typedefs, if we have it already parsed, we're good + if (isinstance(typenode, pycparser.c_ast.TypeDecl) and + isinstance(typenode.type, pycparser.c_ast.IdentifierType) and + len(typenode.type.names) == 1 and + ('typedef ' + typenode.type.names[0]) in self._declarations): + tp, quals = self._declarations['typedef ' + typenode.type.names[0]] + quals |= self._extract_quals(typenode) + return tp, quals + # + if isinstance(typenode, pycparser.c_ast.ArrayDecl): + # array type + if typenode.dim is None: + length = None + else: + length = self._parse_constant( + typenode.dim, partial_length_ok=partial_length_ok) + # a hack: in 'typedef int foo_t[...][...];', don't use '...' as + # the length but use directly the C expression that would be + # generated by recompiler.py. This lets the typedef be used in + # many more places within recompiler.py + if typedef_example is not None: + if length == '...': + length = '_cffi_array_len(%s)' % (typedef_example,) + typedef_example = "*" + typedef_example + # + tp, quals = self._get_type_and_quals(typenode.type, + partial_length_ok=partial_length_ok, + typedef_example=typedef_example) + return model.ArrayType(tp, length), quals + # + if isinstance(typenode, pycparser.c_ast.PtrDecl): + # pointer type + itemtype, itemquals = self._get_type_and_quals(typenode.type) + tp = self._get_type_pointer(itemtype, itemquals, declname=name) + quals = self._extract_quals(typenode) + return tp, quals + # + if isinstance(typenode, pycparser.c_ast.TypeDecl): + quals = self._extract_quals(typenode) + type = typenode.type + if isinstance(type, pycparser.c_ast.IdentifierType): + # assume a primitive type. 
get it from .names, but reduce + # synonyms to a single chosen combination + names = list(type.names) + if names != ['signed', 'char']: # keep this unmodified + prefixes = {} + while names: + name = names[0] + if name in ('short', 'long', 'signed', 'unsigned'): + prefixes[name] = prefixes.get(name, 0) + 1 + del names[0] + else: + break + # ignore the 'signed' prefix below, and reorder the others + newnames = [] + for prefix in ('unsigned', 'short', 'long'): + for i in range(prefixes.get(prefix, 0)): + newnames.append(prefix) + if not names: + names = ['int'] # implicitly + if names == ['int']: # but kill it if 'short' or 'long' + if 'short' in prefixes or 'long' in prefixes: + names = [] + names = newnames + names + ident = ' '.join(names) + if ident == 'void': + return model.void_type, quals + if ident == '__dotdotdot__': + raise FFIError(':%d: bad usage of "..."' % + typenode.coord.line) + tp0, quals0 = resolve_common_type(self, ident) + return tp0, (quals | quals0) + # + if isinstance(type, pycparser.c_ast.Struct): + # 'struct foobar' + tp = self._get_struct_union_enum_type('struct', type, name) + return tp, quals + # + if isinstance(type, pycparser.c_ast.Union): + # 'union foobar' + tp = self._get_struct_union_enum_type('union', type, name) + return tp, quals + # + if isinstance(type, pycparser.c_ast.Enum): + # 'enum foobar' + tp = self._get_struct_union_enum_type('enum', type, name) + return tp, quals + # + if isinstance(typenode, pycparser.c_ast.FuncDecl): + # a function type + return self._parse_function_type(typenode, name), 0 + # + # nested anonymous structs or unions end up here + if isinstance(typenode, pycparser.c_ast.Struct): + return self._get_struct_union_enum_type('struct', typenode, name, + nested=True), 0 + if isinstance(typenode, pycparser.c_ast.Union): + return self._get_struct_union_enum_type('union', typenode, name, + nested=True), 0 + # + raise FFIError(":%d: bad or unsupported type declaration" % + typenode.coord.line) + + def _parse_function_type(self, typenode, funcname=None): + params = list(getattr(typenode.args, 'params', [])) + for i, arg in enumerate(params): + if not hasattr(arg, 'type'): + raise CDefError("%s arg %d: unknown type '%s'" + " (if you meant to use the old C syntax of giving" + " untyped arguments, it is not supported)" + % (funcname or 'in expression', i + 1, + getattr(arg, 'name', '?'))) + ellipsis = ( + len(params) > 0 and + isinstance(params[-1].type, pycparser.c_ast.TypeDecl) and + isinstance(params[-1].type.type, + pycparser.c_ast.IdentifierType) and + params[-1].type.type.names == ['__dotdotdot__']) + if ellipsis: + params.pop() + if not params: + raise CDefError( + "%s: a function with only '(...)' as argument" + " is not correct C" % (funcname or 'in expression')) + args = [self._as_func_arg(*self._get_type_and_quals(argdeclnode.type)) + for argdeclnode in params] + if not ellipsis and args == [model.void_type]: + args = [] + result, quals = self._get_type_and_quals(typenode.type) + # the 'quals' on the result type are ignored. HACK: we abuse them + # to detect __stdcall functions: we textually replace "__stdcall" + # with "volatile volatile const" above. 
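+        # Illustrative walk-through (not in upstream cffi): for
+        # 'int __stdcall f(long);', _preprocess() produced
+        # 'int  volatile volatile const  f(long);', so the return type's
+        # qualifier list ends with ['volatile', 'volatile', 'const'] and
+        # the check below recovers abi = '__stdcall'.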
+ abi = None + if hasattr(typenode.type, 'quals'): # else, probable syntax error anyway + if typenode.type.quals[-3:] == ['volatile', 'volatile', 'const']: + abi = '__stdcall' + return model.RawFunctionType(tuple(args), result, ellipsis, abi) + + def _as_func_arg(self, type, quals): + if isinstance(type, model.ArrayType): + return model.PointerType(type.item, quals) + elif isinstance(type, model.RawFunctionType): + return type.as_function_pointer() + else: + return type + + def _get_struct_union_enum_type(self, kind, type, name=None, nested=False): + # First, a level of caching on the exact 'type' node of the AST. + # This is obscure, but needed because pycparser "unrolls" declarations + # such as "typedef struct { } foo_t, *foo_p" and we end up with + # an AST that is not a tree, but a DAG, with the "type" node of the + # two branches foo_t and foo_p of the trees being the same node. + # It's a bit silly but detecting "DAG-ness" in the AST tree seems + # to be the only way to distinguish this case from two independent + # structs. See test_struct_with_two_usages. + try: + return self._structnode2type[type] + except KeyError: + pass + # + # Note that this must handle parsing "struct foo" any number of + # times and always return the same StructType object. Additionally, + # one of these times (not necessarily the first), the fields of + # the struct can be specified with "struct foo { ...fields... }". + # If no name is given, then we have to create a new anonymous struct + # with no caching; in this case, the fields are either specified + # right now or never. + # + force_name = name + name = type.name + # + # get the type or create it if needed + if name is None: + # 'force_name' is used to guess a more readable name for + # anonymous structs, for the common case "typedef struct { } foo". + if force_name is not None: + explicit_name = '$%s' % force_name + else: + self._anonymous_counter += 1 + explicit_name = '$%d' % self._anonymous_counter + tp = None + else: + explicit_name = name + key = '%s %s' % (kind, name) + tp, _ = self._declarations.get(key, (None, None)) + # + if tp is None: + if kind == 'struct': + tp = model.StructType(explicit_name, None, None, None) + elif kind == 'union': + tp = model.UnionType(explicit_name, None, None, None) + elif kind == 'enum': + if explicit_name == '__dotdotdot__': + raise CDefError("Enums cannot be declared with ...") + tp = self._build_enum_type(explicit_name, type.values) + else: + raise AssertionError("kind = %r" % (kind,)) + if name is not None: + self._declare(key, tp) + else: + if kind == 'enum' and type.values is not None: + raise NotImplementedError( + "enum %s: the '{}' declaration should appear on the first " + "time the enum is mentioned, not later" % explicit_name) + if not tp.forcename: + tp.force_the_name(force_name) + if tp.forcename and '$' in tp.name: + self._declare('anonymous %s' % tp.forcename, tp) + # + self._structnode2type[type] = tp + # + # enums: done here + if kind == 'enum': + return tp + # + # is there a 'type.decls'? If yes, then this is the place in the + # C sources that declare the fields. If no, then just return the + # existing type, possibly still incomplete. 
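+        # Illustrative example: 'struct foo;' arrives with type.decls set
+        # to None and we return the still-opaque StructType, whereas
+        # 'struct foo { int x; };' carries the field declarations that the
+        # code below turns into fldnames/fldtypes/fldbitsize/fldquals.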
+ if type.decls is None: + return tp + # + if tp.fldnames is not None: + raise CDefError("duplicate declaration of struct %s" % name) + fldnames = [] + fldtypes = [] + fldbitsize = [] + fldquals = [] + for decl in type.decls: + if (isinstance(decl.type, pycparser.c_ast.IdentifierType) and + ''.join(decl.type.names) == '__dotdotdot__'): + # XXX pycparser is inconsistent: 'names' should be a list + # of strings, but is sometimes just one string. Use + # str.join() as a way to cope with both. + self._make_partial(tp, nested) + continue + if decl.bitsize is None: + bitsize = -1 + else: + bitsize = self._parse_constant(decl.bitsize) + self._partial_length = False + type, fqual = self._get_type_and_quals(decl.type, + partial_length_ok=True) + if self._partial_length: + self._make_partial(tp, nested) + if isinstance(type, model.StructType) and type.partial: + self._make_partial(tp, nested) + fldnames.append(decl.name or '') + fldtypes.append(type) + fldbitsize.append(bitsize) + fldquals.append(fqual) + tp.fldnames = tuple(fldnames) + tp.fldtypes = tuple(fldtypes) + tp.fldbitsize = tuple(fldbitsize) + tp.fldquals = tuple(fldquals) + if fldbitsize != [-1] * len(fldbitsize): + if isinstance(tp, model.StructType) and tp.partial: + raise NotImplementedError("%s: using both bitfields and '...;'" + % (tp,)) + tp.packed = self._options.get('packed') + if tp.completed: # must be re-completed: it is not opaque any more + tp.completed = 0 + self._recomplete.append(tp) + return tp + + def _make_partial(self, tp, nested): + if not isinstance(tp, model.StructOrUnion): + raise CDefError("%s cannot be partial" % (tp,)) + if not tp.has_c_name() and not nested: + raise NotImplementedError("%s is partial but has no C name" %(tp,)) + tp.partial = True + + def _parse_constant(self, exprnode, partial_length_ok=False): + # for now, limited to expressions that are an immediate number + # or positive/negative number + if isinstance(exprnode, pycparser.c_ast.Constant): + s = exprnode.value + if '0' <= s[0] <= '9': + s = s.rstrip('uUlL') + try: + if s.startswith('0'): + return int(s, 8) + else: + return int(s, 10) + except ValueError: + if len(s) > 1: + if s.lower()[0:2] == '0x': + return int(s, 16) + elif s.lower()[0:2] == '0b': + return int(s, 2) + raise CDefError("invalid constant %r" % (s,)) + elif s[0] == "'" and s[-1] == "'" and ( + len(s) == 3 or (len(s) == 4 and s[1] == "\\")): + return ord(s[-2]) + else: + raise CDefError("invalid constant %r" % (s,)) + # + if (isinstance(exprnode, pycparser.c_ast.UnaryOp) and + exprnode.op == '+'): + return self._parse_constant(exprnode.expr) + # + if (isinstance(exprnode, pycparser.c_ast.UnaryOp) and + exprnode.op == '-'): + return -self._parse_constant(exprnode.expr) + # load previously defined int constant + if (isinstance(exprnode, pycparser.c_ast.ID) and + exprnode.name in self._int_constants): + return self._int_constants[exprnode.name] + # + if (isinstance(exprnode, pycparser.c_ast.ID) and + exprnode.name == '__dotdotdotarray__'): + if partial_length_ok: + self._partial_length = True + return '...' 
+ raise FFIError(":%d: unsupported '[...]' here, cannot derive " + "the actual array length in this context" + % exprnode.coord.line) + # + if isinstance(exprnode, pycparser.c_ast.BinaryOp): + left = self._parse_constant(exprnode.left) + right = self._parse_constant(exprnode.right) + if exprnode.op == '+': + return left + right + elif exprnode.op == '-': + return left - right + elif exprnode.op == '*': + return left * right + elif exprnode.op == '/': + return self._c_div(left, right) + elif exprnode.op == '%': + return left - self._c_div(left, right) * right + elif exprnode.op == '<<': + return left << right + elif exprnode.op == '>>': + return left >> right + elif exprnode.op == '&': + return left & right + elif exprnode.op == '|': + return left | right + elif exprnode.op == '^': + return left ^ right + # + raise FFIError(":%d: unsupported expression: expected a " + "simple numeric constant" % exprnode.coord.line) + + def _c_div(self, a, b): + result = a // b + if ((a < 0) ^ (b < 0)) and (a % b) != 0: + result += 1 + return result + + def _build_enum_type(self, explicit_name, decls): + if decls is not None: + partial = False + enumerators = [] + enumvalues = [] + nextenumvalue = 0 + for enum in decls.enumerators: + if _r_enum_dotdotdot.match(enum.name): + partial = True + continue + if enum.value is not None: + nextenumvalue = self._parse_constant(enum.value) + enumerators.append(enum.name) + enumvalues.append(nextenumvalue) + self._add_constants(enum.name, nextenumvalue) + nextenumvalue += 1 + enumerators = tuple(enumerators) + enumvalues = tuple(enumvalues) + tp = model.EnumType(explicit_name, enumerators, enumvalues) + tp.partial = partial + else: # opaque enum + tp = model.EnumType(explicit_name, (), ()) + return tp + + def include(self, other): + for name, (tp, quals) in other._declarations.items(): + if name.startswith('anonymous $enum_$'): + continue # fix for test_anonymous_enum_include + kind = name.split(' ', 1)[0] + if kind in ('struct', 'union', 'enum', 'anonymous', 'typedef'): + self._declare(name, tp, included=True, quals=quals) + for k, v in other._int_constants.items(): + self._add_constants(k, v) + + def _get_unknown_type(self, decl): + typenames = decl.type.type.names + if typenames == ['__dotdotdot__']: + return model.unknown_type(decl.name) + + if typenames == ['__dotdotdotint__']: + if self._uses_new_feature is None: + self._uses_new_feature = "'typedef int... %s'" % decl.name + return model.UnknownIntegerType(decl.name) + + if typenames == ['__dotdotdotfloat__']: + # note: not for 'long double' so far + if self._uses_new_feature is None: + self._uses_new_feature = "'typedef float... %s'" % decl.name + return model.UnknownFloatType(decl.name) + + raise FFIError(':%d: unsupported usage of "..." in typedef' + % decl.coord.line) + + def _get_unknown_ptr_type(self, decl): + if decl.type.type.type.names == ['__dotdotdot__']: + return model.unknown_ptr_type(decl.name) + raise FFIError(':%d: unsupported usage of "..." 
in typedef' + % decl.coord.line) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cffi/error.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cffi/error.py new file mode 100644 index 0000000000000000000000000000000000000000..0a27247c32a381ab7cecedd0f985b781619c1ea5 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cffi/error.py @@ -0,0 +1,31 @@ + +class FFIError(Exception): + __module__ = 'cffi' + +class CDefError(Exception): + __module__ = 'cffi' + def __str__(self): + try: + current_decl = self.args[1] + filename = current_decl.coord.file + linenum = current_decl.coord.line + prefix = '%s:%d: ' % (filename, linenum) + except (AttributeError, TypeError, IndexError): + prefix = '' + return '%s%s' % (prefix, self.args[0]) + +class VerificationError(Exception): + """ An error raised when verification fails + """ + __module__ = 'cffi' + +class VerificationMissing(Exception): + """ An error raised when incomplete structures are passed into + cdef, but no verification has been done + """ + __module__ = 'cffi' + +class PkgConfigError(Exception): + """ An error raised for missing modules in pkg-config + """ + __module__ = 'cffi' diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cffi/ffiplatform.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cffi/ffiplatform.py new file mode 100644 index 0000000000000000000000000000000000000000..85313460a69477513c8e00f4df430925f2c4ecc9 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cffi/ffiplatform.py @@ -0,0 +1,127 @@ +import sys, os +from .error import VerificationError + + +LIST_OF_FILE_NAMES = ['sources', 'include_dirs', 'library_dirs', + 'extra_objects', 'depends'] + +def get_extension(srcfilename, modname, sources=(), **kwds): + _hack_at_distutils() + from distutils.core import Extension + allsources = [srcfilename] + for src in sources: + allsources.append(os.path.normpath(src)) + return Extension(name=modname, sources=allsources, **kwds) + +def compile(tmpdir, ext, compiler_verbose=0, debug=None): + """Compile a C extension module using distutils.""" + + _hack_at_distutils() + saved_environ = os.environ.copy() + try: + outputfilename = _build(tmpdir, ext, compiler_verbose, debug) + outputfilename = os.path.abspath(outputfilename) + finally: + # workaround for a distutils bug where some env vars can + # become longer and longer every time it is used + for key, value in saved_environ.items(): + if os.environ.get(key) != value: + os.environ[key] = value + return outputfilename + +def _build(tmpdir, ext, compiler_verbose=0, debug=None): + # XXX compact but horrible :-( + from distutils.core import Distribution + import distutils.errors, distutils.log + # + dist = Distribution({'ext_modules': [ext]}) + dist.parse_config_files() + options = dist.get_option_dict('build_ext') + if debug is None: + debug = sys.flags.debug + options['debug'] = ('ffiplatform', debug) + options['force'] = ('ffiplatform', True) + options['build_lib'] = ('ffiplatform', tmpdir) + options['build_temp'] = ('ffiplatform', tmpdir) + # + try: + old_level = distutils.log.set_threshold(0) or 0 + try: + distutils.log.set_verbosity(compiler_verbose) + dist.run_command('build_ext') + cmd_obj = dist.get_command_obj('build_ext') + [soname] = cmd_obj.get_outputs() + finally: + distutils.log.set_threshold(old_level) + except 
(distutils.errors.CompileError, + distutils.errors.LinkError) as e: + raise VerificationError('%s: %s' % (e.__class__.__name__, e)) + # + return soname + +try: + from os.path import samefile +except ImportError: + def samefile(f1, f2): + return os.path.abspath(f1) == os.path.abspath(f2) + +def maybe_relative_path(path): + if not os.path.isabs(path): + return path # already relative + dir = path + names = [] + while True: + prevdir = dir + dir, name = os.path.split(prevdir) + if dir == prevdir or not dir: + return path # failed to make it relative + names.append(name) + try: + if samefile(dir, os.curdir): + names.reverse() + return os.path.join(*names) + except OSError: + pass + +# ____________________________________________________________ + +try: + int_or_long = (int, long) + import cStringIO +except NameError: + int_or_long = int # Python 3 + import io as cStringIO + +def _flatten(x, f): + if isinstance(x, str): + f.write('%ds%s' % (len(x), x)) + elif isinstance(x, dict): + keys = sorted(x.keys()) + f.write('%dd' % len(keys)) + for key in keys: + _flatten(key, f) + _flatten(x[key], f) + elif isinstance(x, (list, tuple)): + f.write('%dl' % len(x)) + for value in x: + _flatten(value, f) + elif isinstance(x, int_or_long): + f.write('%di' % (x,)) + else: + raise TypeError( + "the keywords to verify() contain unsupported object %r" % (x,)) + +def flatten(x): + f = cStringIO.StringIO() + _flatten(x, f) + return f.getvalue() + +def _hack_at_distutils(): + # Windows-only workaround for some configurations: see + # https://bugs.python.org/issue23246 (Python 2.7 with + # a specific MS compiler suite download) + if sys.platform == "win32": + try: + import setuptools # for side-effects, patches distutils + except ImportError: + pass diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cffi/lock.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cffi/lock.py new file mode 100644 index 0000000000000000000000000000000000000000..db91b7158c4ee9aa653462fe38e79ed1b553db87 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cffi/lock.py @@ -0,0 +1,30 @@ +import sys + +if sys.version_info < (3,): + try: + from thread import allocate_lock + except ImportError: + from dummy_thread import allocate_lock +else: + try: + from _thread import allocate_lock + except ImportError: + from _dummy_thread import allocate_lock + + +##import sys +##l1 = allocate_lock + +##class allocate_lock(object): +## def __init__(self): +## self._real = l1() +## def __enter__(self): +## for i in range(4, 0, -1): +## print sys._getframe(i).f_code +## print +## return self._real.__enter__() +## def __exit__(self, *args): +## return self._real.__exit__(*args) +## def acquire(self, f): +## assert f is False +## return self._real.acquire(f) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cffi/model.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cffi/model.py new file mode 100644 index 0000000000000000000000000000000000000000..ad1c1764893d0257c0e75eeb61b0a359e89adf0f --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cffi/model.py @@ -0,0 +1,617 @@ +import types +import weakref + +from .lock import allocate_lock +from .error import CDefError, VerificationError, VerificationMissing + +# type qualifiers +Q_CONST = 0x01 +Q_RESTRICT = 0x02 +Q_VOLATILE = 0x04 + +def qualify(quals, 
replace_with): + if quals & Q_CONST: + replace_with = ' const ' + replace_with.lstrip() + if quals & Q_VOLATILE: + replace_with = ' volatile ' + replace_with.lstrip() + if quals & Q_RESTRICT: + # It seems that __restrict is supported by gcc and msvc. + # If you hit some different compiler, add a #define in + # _cffi_include.h for it (and in its copies, documented there) + replace_with = ' __restrict ' + replace_with.lstrip() + return replace_with + + +class BaseTypeByIdentity(object): + is_array_type = False + is_raw_function = False + + def get_c_name(self, replace_with='', context='a C file', quals=0): + result = self.c_name_with_marker + assert result.count('&') == 1 + # some logic duplication with ffi.getctype()... :-( + replace_with = replace_with.strip() + if replace_with: + if replace_with.startswith('*') and '&[' in result: + replace_with = '(%s)' % replace_with + elif not replace_with[0] in '[(': + replace_with = ' ' + replace_with + replace_with = qualify(quals, replace_with) + result = result.replace('&', replace_with) + if '$' in result: + raise VerificationError( + "cannot generate '%s' in %s: unknown type name" + % (self._get_c_name(), context)) + return result + + def _get_c_name(self): + return self.c_name_with_marker.replace('&', '') + + def has_c_name(self): + return '$' not in self._get_c_name() + + def is_integer_type(self): + return False + + def get_cached_btype(self, ffi, finishlist, can_delay=False): + try: + BType = ffi._cached_btypes[self] + except KeyError: + BType = self.build_backend_type(ffi, finishlist) + BType2 = ffi._cached_btypes.setdefault(self, BType) + assert BType2 is BType + return BType + + def __repr__(self): + return '<%s>' % (self._get_c_name(),) + + def _get_items(self): + return [(name, getattr(self, name)) for name in self._attrs_] + + +class BaseType(BaseTypeByIdentity): + + def __eq__(self, other): + return (self.__class__ == other.__class__ and + self._get_items() == other._get_items()) + + def __ne__(self, other): + return not self == other + + def __hash__(self): + return hash((self.__class__, tuple(self._get_items()))) + + +class VoidType(BaseType): + _attrs_ = () + + def __init__(self): + self.c_name_with_marker = 'void&' + + def build_backend_type(self, ffi, finishlist): + return global_cache(self, ffi, 'new_void_type') + +void_type = VoidType() + + +class BasePrimitiveType(BaseType): + def is_complex_type(self): + return False + + +class PrimitiveType(BasePrimitiveType): + _attrs_ = ('name',) + + ALL_PRIMITIVE_TYPES = { + 'char': 'c', + 'short': 'i', + 'int': 'i', + 'long': 'i', + 'long long': 'i', + 'signed char': 'i', + 'unsigned char': 'i', + 'unsigned short': 'i', + 'unsigned int': 'i', + 'unsigned long': 'i', + 'unsigned long long': 'i', + 'float': 'f', + 'double': 'f', + 'long double': 'f', + 'float _Complex': 'j', + 'double _Complex': 'j', + '_Bool': 'i', + # the following types are not primitive in the C sense + 'wchar_t': 'c', + 'char16_t': 'c', + 'char32_t': 'c', + 'int8_t': 'i', + 'uint8_t': 'i', + 'int16_t': 'i', + 'uint16_t': 'i', + 'int32_t': 'i', + 'uint32_t': 'i', + 'int64_t': 'i', + 'uint64_t': 'i', + 'int_least8_t': 'i', + 'uint_least8_t': 'i', + 'int_least16_t': 'i', + 'uint_least16_t': 'i', + 'int_least32_t': 'i', + 'uint_least32_t': 'i', + 'int_least64_t': 'i', + 'uint_least64_t': 'i', + 'int_fast8_t': 'i', + 'uint_fast8_t': 'i', + 'int_fast16_t': 'i', + 'uint_fast16_t': 'i', + 'int_fast32_t': 'i', + 'uint_fast32_t': 'i', + 'int_fast64_t': 'i', + 'uint_fast64_t': 'i', + 'intptr_t': 'i', + 'uintptr_t': 'i', + 
'intmax_t': 'i', + 'uintmax_t': 'i', + 'ptrdiff_t': 'i', + 'size_t': 'i', + 'ssize_t': 'i', + } + + def __init__(self, name): + assert name in self.ALL_PRIMITIVE_TYPES + self.name = name + self.c_name_with_marker = name + '&' + + def is_char_type(self): + return self.ALL_PRIMITIVE_TYPES[self.name] == 'c' + def is_integer_type(self): + return self.ALL_PRIMITIVE_TYPES[self.name] == 'i' + def is_float_type(self): + return self.ALL_PRIMITIVE_TYPES[self.name] == 'f' + def is_complex_type(self): + return self.ALL_PRIMITIVE_TYPES[self.name] == 'j' + + def build_backend_type(self, ffi, finishlist): + return global_cache(self, ffi, 'new_primitive_type', self.name) + + +class UnknownIntegerType(BasePrimitiveType): + _attrs_ = ('name',) + + def __init__(self, name): + self.name = name + self.c_name_with_marker = name + '&' + + def is_integer_type(self): + return True + + def build_backend_type(self, ffi, finishlist): + raise NotImplementedError("integer type '%s' can only be used after " + "compilation" % self.name) + +class UnknownFloatType(BasePrimitiveType): + _attrs_ = ('name', ) + + def __init__(self, name): + self.name = name + self.c_name_with_marker = name + '&' + + def build_backend_type(self, ffi, finishlist): + raise NotImplementedError("float type '%s' can only be used after " + "compilation" % self.name) + + +class BaseFunctionType(BaseType): + _attrs_ = ('args', 'result', 'ellipsis', 'abi') + + def __init__(self, args, result, ellipsis, abi=None): + self.args = args + self.result = result + self.ellipsis = ellipsis + self.abi = abi + # + reprargs = [arg._get_c_name() for arg in self.args] + if self.ellipsis: + reprargs.append('...') + reprargs = reprargs or ['void'] + replace_with = self._base_pattern % (', '.join(reprargs),) + if abi is not None: + replace_with = replace_with[:1] + abi + ' ' + replace_with[1:] + self.c_name_with_marker = ( + self.result.c_name_with_marker.replace('&', replace_with)) + + +class RawFunctionType(BaseFunctionType): + # Corresponds to a C type like 'int(int)', which is the C type of + # a function, but not a pointer-to-function. The backend has no + # notion of such a type; it's used temporarily by parsing. 
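+    # Added illustrative note (not upstream cffi): the parser produces a
+    # RawFunctionType for a declaration such as "int f(int);" and callers
+    # convert it before handing it to the backend, e.g.:
+    #     raw = RawFunctionType([int_tp], int_tp, ellipsis=False)
+    #     ptr = raw.as_function_pointer()   # renders as 'int(*)(int)'
+    # where 'int_tp' stands for a PrimitiveType('int') built elsewhere.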
+ _base_pattern = '(&)(%s)' + is_raw_function = True + + def build_backend_type(self, ffi, finishlist): + raise CDefError("cannot render the type %r: it is a function " + "type, not a pointer-to-function type" % (self,)) + + def as_function_pointer(self): + return FunctionPtrType(self.args, self.result, self.ellipsis, self.abi) + + +class FunctionPtrType(BaseFunctionType): + _base_pattern = '(*&)(%s)' + + def build_backend_type(self, ffi, finishlist): + result = self.result.get_cached_btype(ffi, finishlist) + args = [] + for tp in self.args: + args.append(tp.get_cached_btype(ffi, finishlist)) + abi_args = () + if self.abi == "__stdcall": + if not self.ellipsis: # __stdcall ignored for variadic funcs + try: + abi_args = (ffi._backend.FFI_STDCALL,) + except AttributeError: + pass + return global_cache(self, ffi, 'new_function_type', + tuple(args), result, self.ellipsis, *abi_args) + + def as_raw_function(self): + return RawFunctionType(self.args, self.result, self.ellipsis, self.abi) + + +class PointerType(BaseType): + _attrs_ = ('totype', 'quals') + + def __init__(self, totype, quals=0): + self.totype = totype + self.quals = quals + extra = qualify(quals, " *&") + if totype.is_array_type: + extra = "(%s)" % (extra.lstrip(),) + self.c_name_with_marker = totype.c_name_with_marker.replace('&', extra) + + def build_backend_type(self, ffi, finishlist): + BItem = self.totype.get_cached_btype(ffi, finishlist, can_delay=True) + return global_cache(self, ffi, 'new_pointer_type', BItem) + +voidp_type = PointerType(void_type) + +def ConstPointerType(totype): + return PointerType(totype, Q_CONST) + +const_voidp_type = ConstPointerType(void_type) + + +class NamedPointerType(PointerType): + _attrs_ = ('totype', 'name') + + def __init__(self, totype, name, quals=0): + PointerType.__init__(self, totype, quals) + self.name = name + self.c_name_with_marker = name + '&' + + +class ArrayType(BaseType): + _attrs_ = ('item', 'length') + is_array_type = True + + def __init__(self, item, length): + self.item = item + self.length = length + # + if length is None: + brackets = '&[]' + elif length == '...': + brackets = '&[/*...*/]' + else: + brackets = '&[%s]' % length + self.c_name_with_marker = ( + self.item.c_name_with_marker.replace('&', brackets)) + + def length_is_unknown(self): + return isinstance(self.length, str) + + def resolve_length(self, newlength): + return ArrayType(self.item, newlength) + + def build_backend_type(self, ffi, finishlist): + if self.length_is_unknown(): + raise CDefError("cannot render the type %r: unknown length" % + (self,)) + self.item.get_cached_btype(ffi, finishlist) # force the item BType + BPtrItem = PointerType(self.item).get_cached_btype(ffi, finishlist) + return global_cache(self, ffi, 'new_array_type', BPtrItem, self.length) + +char_array_type = ArrayType(PrimitiveType('char'), None) + + +class StructOrUnionOrEnum(BaseTypeByIdentity): + _attrs_ = ('name',) + forcename = None + + def build_c_name_with_marker(self): + name = self.forcename or '%s %s' % (self.kind, self.name) + self.c_name_with_marker = name + '&' + + def force_the_name(self, forcename): + self.forcename = forcename + self.build_c_name_with_marker() + + def get_official_name(self): + assert self.c_name_with_marker.endswith('&') + return self.c_name_with_marker[:-1] + + +class StructOrUnion(StructOrUnionOrEnum): + fixedlayout = None + completed = 0 + partial = False + packed = 0 + + def __init__(self, name, fldnames, fldtypes, fldbitsize, fldquals=None): + self.name = name + self.fldnames = fldnames + 
self.fldtypes = fldtypes + self.fldbitsize = fldbitsize + self.fldquals = fldquals + self.build_c_name_with_marker() + + def anonymous_struct_fields(self): + if self.fldtypes is not None: + for name, type in zip(self.fldnames, self.fldtypes): + if name == '' and isinstance(type, StructOrUnion): + yield type + + def enumfields(self, expand_anonymous_struct_union=True): + fldquals = self.fldquals + if fldquals is None: + fldquals = (0,) * len(self.fldnames) + for name, type, bitsize, quals in zip(self.fldnames, self.fldtypes, + self.fldbitsize, fldquals): + if (name == '' and isinstance(type, StructOrUnion) + and expand_anonymous_struct_union): + # nested anonymous struct/union + for result in type.enumfields(): + yield result + else: + yield (name, type, bitsize, quals) + + def force_flatten(self): + # force the struct or union to have a declaration that lists + # directly all fields returned by enumfields(), flattening + # nested anonymous structs/unions. + names = [] + types = [] + bitsizes = [] + fldquals = [] + for name, type, bitsize, quals in self.enumfields(): + names.append(name) + types.append(type) + bitsizes.append(bitsize) + fldquals.append(quals) + self.fldnames = tuple(names) + self.fldtypes = tuple(types) + self.fldbitsize = tuple(bitsizes) + self.fldquals = tuple(fldquals) + + def get_cached_btype(self, ffi, finishlist, can_delay=False): + BType = StructOrUnionOrEnum.get_cached_btype(self, ffi, finishlist, + can_delay) + if not can_delay: + self.finish_backend_type(ffi, finishlist) + return BType + + def finish_backend_type(self, ffi, finishlist): + if self.completed: + if self.completed != 2: + raise NotImplementedError("recursive structure declaration " + "for '%s'" % (self.name,)) + return + BType = ffi._cached_btypes[self] + # + self.completed = 1 + # + if self.fldtypes is None: + pass # not completing it: it's an opaque struct + # + elif self.fixedlayout is None: + fldtypes = [tp.get_cached_btype(ffi, finishlist) + for tp in self.fldtypes] + lst = list(zip(self.fldnames, fldtypes, self.fldbitsize)) + extra_flags = () + if self.packed: + if self.packed == 1: + extra_flags = (8,) # SF_PACKED + else: + extra_flags = (0, self.packed) + ffi._backend.complete_struct_or_union(BType, lst, self, + -1, -1, *extra_flags) + # + else: + fldtypes = [] + fieldofs, fieldsize, totalsize, totalalignment = self.fixedlayout + for i in range(len(self.fldnames)): + fsize = fieldsize[i] + ftype = self.fldtypes[i] + # + if isinstance(ftype, ArrayType) and ftype.length_is_unknown(): + # fix the length to match the total size + BItemType = ftype.item.get_cached_btype(ffi, finishlist) + nlen, nrest = divmod(fsize, ffi.sizeof(BItemType)) + if nrest != 0: + self._verification_error( + "field '%s.%s' has a bogus size?" 
% ( + self.name, self.fldnames[i] or '{}')) + ftype = ftype.resolve_length(nlen) + self.fldtypes = (self.fldtypes[:i] + (ftype,) + + self.fldtypes[i+1:]) + # + BFieldType = ftype.get_cached_btype(ffi, finishlist) + if isinstance(ftype, ArrayType) and ftype.length is None: + assert fsize == 0 + else: + bitemsize = ffi.sizeof(BFieldType) + if bitemsize != fsize: + self._verification_error( + "field '%s.%s' is declared as %d bytes, but is " + "really %d bytes" % (self.name, + self.fldnames[i] or '{}', + bitemsize, fsize)) + fldtypes.append(BFieldType) + # + lst = list(zip(self.fldnames, fldtypes, self.fldbitsize, fieldofs)) + ffi._backend.complete_struct_or_union(BType, lst, self, + totalsize, totalalignment) + self.completed = 2 + + def _verification_error(self, msg): + raise VerificationError(msg) + + def check_not_partial(self): + if self.partial and self.fixedlayout is None: + raise VerificationMissing(self._get_c_name()) + + def build_backend_type(self, ffi, finishlist): + self.check_not_partial() + finishlist.append(self) + # + return global_cache(self, ffi, 'new_%s_type' % self.kind, + self.get_official_name(), key=self) + + +class StructType(StructOrUnion): + kind = 'struct' + + +class UnionType(StructOrUnion): + kind = 'union' + + +class EnumType(StructOrUnionOrEnum): + kind = 'enum' + partial = False + partial_resolved = False + + def __init__(self, name, enumerators, enumvalues, baseinttype=None): + self.name = name + self.enumerators = enumerators + self.enumvalues = enumvalues + self.baseinttype = baseinttype + self.build_c_name_with_marker() + + def force_the_name(self, forcename): + StructOrUnionOrEnum.force_the_name(self, forcename) + if self.forcename is None: + name = self.get_official_name() + self.forcename = '$' + name.replace(' ', '_') + + def check_not_partial(self): + if self.partial and not self.partial_resolved: + raise VerificationMissing(self._get_c_name()) + + def build_backend_type(self, ffi, finishlist): + self.check_not_partial() + base_btype = self.build_baseinttype(ffi, finishlist) + return global_cache(self, ffi, 'new_enum_type', + self.get_official_name(), + self.enumerators, self.enumvalues, + base_btype, key=self) + + def build_baseinttype(self, ffi, finishlist): + if self.baseinttype is not None: + return self.baseinttype.get_cached_btype(ffi, finishlist) + # + if self.enumvalues: + smallest_value = min(self.enumvalues) + largest_value = max(self.enumvalues) + else: + import warnings + try: + # XXX! The goal is to ensure that the warnings.warn() + # will not suppress the warning. We want to get it + # several times if we reach this point several times. 
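+                # (added note, not upstream: warnings.warn() records each
+                # warning it emits in the calling module's
+                # __warningregistry__ and suppresses duplicates; clearing
+                # the registry makes the warning below fire again on
+                # every call)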
+ __warningregistry__.clear() + except NameError: + pass + warnings.warn("%r has no values explicitly defined; " + "guessing that it is equivalent to 'unsigned int'" + % self._get_c_name()) + smallest_value = largest_value = 0 + if smallest_value < 0: # needs a signed type + sign = 1 + candidate1 = PrimitiveType("int") + candidate2 = PrimitiveType("long") + else: + sign = 0 + candidate1 = PrimitiveType("unsigned int") + candidate2 = PrimitiveType("unsigned long") + btype1 = candidate1.get_cached_btype(ffi, finishlist) + btype2 = candidate2.get_cached_btype(ffi, finishlist) + size1 = ffi.sizeof(btype1) + size2 = ffi.sizeof(btype2) + if (smallest_value >= ((-1) << (8*size1-1)) and + largest_value < (1 << (8*size1-sign))): + return btype1 + if (smallest_value >= ((-1) << (8*size2-1)) and + largest_value < (1 << (8*size2-sign))): + return btype2 + raise CDefError("%s values don't all fit into either 'long' " + "or 'unsigned long'" % self._get_c_name()) + +def unknown_type(name, structname=None): + if structname is None: + structname = '$%s' % name + tp = StructType(structname, None, None, None) + tp.force_the_name(name) + tp.origin = "unknown_type" + return tp + +def unknown_ptr_type(name, structname=None): + if structname is None: + structname = '$$%s' % name + tp = StructType(structname, None, None, None) + return NamedPointerType(tp, name) + + +global_lock = allocate_lock() +_typecache_cffi_backend = weakref.WeakValueDictionary() + +def get_typecache(backend): + # returns _typecache_cffi_backend if backend is the _cffi_backend + # module, or type(backend).__typecache if backend is an instance of + # CTypesBackend (or some FakeBackend class during tests) + if isinstance(backend, types.ModuleType): + return _typecache_cffi_backend + with global_lock: + if not hasattr(type(backend), '__typecache'): + type(backend).__typecache = weakref.WeakValueDictionary() + return type(backend).__typecache + +def global_cache(srctype, ffi, funcname, *args, **kwds): + key = kwds.pop('key', (funcname, args)) + assert not kwds + try: + return ffi._typecache[key] + except KeyError: + pass + try: + res = getattr(ffi._backend, funcname)(*args) + except NotImplementedError as e: + raise NotImplementedError("%s: %r: %s" % (funcname, srctype, e)) + # note that setdefault() on WeakValueDictionary is not atomic + # and contains a rare bug (http://bugs.python.org/issue19542); + # we have to use a lock and do it ourselves + cache = ffi._typecache + with global_lock: + res1 = cache.get(key) + if res1 is None: + cache[key] = res + return res + else: + return res1 + +def pointer_cache(ffi, BType): + return global_cache('?', ffi, 'new_pointer_type', BType) + +def attach_exception_info(e, name): + if e.args and type(e.args[0]) is str: + e.args = ('%s: %s' % (name, e.args[0]),) + e.args[1:] diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cffi/parse_c_type.h b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cffi/parse_c_type.h new file mode 100644 index 0000000000000000000000000000000000000000..84e4ef85659eb63e6453d8af9f024f1866182342 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cffi/parse_c_type.h @@ -0,0 +1,181 @@ + +/* This part is from file 'cffi/parse_c_type.h'. It is copied at the + beginning of C sources generated by CFFI's ffi.set_source(). 
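+
+   Added illustrative note (not upstream text): each _cffi_opcode_t packs
+   an opcode in its low byte and an argument in the remaining bits, so
+   _CFFI_OP(_CFFI_OP_POINTER, 5) encodes "pointer to the type at index 5"
+   and _CFFI_GETARG() recovers the 5.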
*/ + +typedef void *_cffi_opcode_t; + +#define _CFFI_OP(opcode, arg) (_cffi_opcode_t)(opcode | (((uintptr_t)(arg)) << 8)) +#define _CFFI_GETOP(cffi_opcode) ((unsigned char)(uintptr_t)cffi_opcode) +#define _CFFI_GETARG(cffi_opcode) (((intptr_t)cffi_opcode) >> 8) + +#define _CFFI_OP_PRIMITIVE 1 +#define _CFFI_OP_POINTER 3 +#define _CFFI_OP_ARRAY 5 +#define _CFFI_OP_OPEN_ARRAY 7 +#define _CFFI_OP_STRUCT_UNION 9 +#define _CFFI_OP_ENUM 11 +#define _CFFI_OP_FUNCTION 13 +#define _CFFI_OP_FUNCTION_END 15 +#define _CFFI_OP_NOOP 17 +#define _CFFI_OP_BITFIELD 19 +#define _CFFI_OP_TYPENAME 21 +#define _CFFI_OP_CPYTHON_BLTN_V 23 // varargs +#define _CFFI_OP_CPYTHON_BLTN_N 25 // noargs +#define _CFFI_OP_CPYTHON_BLTN_O 27 // O (i.e. a single arg) +#define _CFFI_OP_CONSTANT 29 +#define _CFFI_OP_CONSTANT_INT 31 +#define _CFFI_OP_GLOBAL_VAR 33 +#define _CFFI_OP_DLOPEN_FUNC 35 +#define _CFFI_OP_DLOPEN_CONST 37 +#define _CFFI_OP_GLOBAL_VAR_F 39 +#define _CFFI_OP_EXTERN_PYTHON 41 + +#define _CFFI_PRIM_VOID 0 +#define _CFFI_PRIM_BOOL 1 +#define _CFFI_PRIM_CHAR 2 +#define _CFFI_PRIM_SCHAR 3 +#define _CFFI_PRIM_UCHAR 4 +#define _CFFI_PRIM_SHORT 5 +#define _CFFI_PRIM_USHORT 6 +#define _CFFI_PRIM_INT 7 +#define _CFFI_PRIM_UINT 8 +#define _CFFI_PRIM_LONG 9 +#define _CFFI_PRIM_ULONG 10 +#define _CFFI_PRIM_LONGLONG 11 +#define _CFFI_PRIM_ULONGLONG 12 +#define _CFFI_PRIM_FLOAT 13 +#define _CFFI_PRIM_DOUBLE 14 +#define _CFFI_PRIM_LONGDOUBLE 15 + +#define _CFFI_PRIM_WCHAR 16 +#define _CFFI_PRIM_INT8 17 +#define _CFFI_PRIM_UINT8 18 +#define _CFFI_PRIM_INT16 19 +#define _CFFI_PRIM_UINT16 20 +#define _CFFI_PRIM_INT32 21 +#define _CFFI_PRIM_UINT32 22 +#define _CFFI_PRIM_INT64 23 +#define _CFFI_PRIM_UINT64 24 +#define _CFFI_PRIM_INTPTR 25 +#define _CFFI_PRIM_UINTPTR 26 +#define _CFFI_PRIM_PTRDIFF 27 +#define _CFFI_PRIM_SIZE 28 +#define _CFFI_PRIM_SSIZE 29 +#define _CFFI_PRIM_INT_LEAST8 30 +#define _CFFI_PRIM_UINT_LEAST8 31 +#define _CFFI_PRIM_INT_LEAST16 32 +#define _CFFI_PRIM_UINT_LEAST16 33 +#define _CFFI_PRIM_INT_LEAST32 34 +#define _CFFI_PRIM_UINT_LEAST32 35 +#define _CFFI_PRIM_INT_LEAST64 36 +#define _CFFI_PRIM_UINT_LEAST64 37 +#define _CFFI_PRIM_INT_FAST8 38 +#define _CFFI_PRIM_UINT_FAST8 39 +#define _CFFI_PRIM_INT_FAST16 40 +#define _CFFI_PRIM_UINT_FAST16 41 +#define _CFFI_PRIM_INT_FAST32 42 +#define _CFFI_PRIM_UINT_FAST32 43 +#define _CFFI_PRIM_INT_FAST64 44 +#define _CFFI_PRIM_UINT_FAST64 45 +#define _CFFI_PRIM_INTMAX 46 +#define _CFFI_PRIM_UINTMAX 47 +#define _CFFI_PRIM_FLOATCOMPLEX 48 +#define _CFFI_PRIM_DOUBLECOMPLEX 49 +#define _CFFI_PRIM_CHAR16 50 +#define _CFFI_PRIM_CHAR32 51 + +#define _CFFI__NUM_PRIM 52 +#define _CFFI__UNKNOWN_PRIM (-1) +#define _CFFI__UNKNOWN_FLOAT_PRIM (-2) +#define _CFFI__UNKNOWN_LONG_DOUBLE (-3) + +#define _CFFI__IO_FILE_STRUCT (-1) + + +struct _cffi_global_s { + const char *name; + void *address; + _cffi_opcode_t type_op; + void *size_or_direct_fn; // OP_GLOBAL_VAR: size, or 0 if unknown + // OP_CPYTHON_BLTN_*: addr of direct function +}; + +struct _cffi_getconst_s { + unsigned long long value; + const struct _cffi_type_context_s *ctx; + int gindex; +}; + +struct _cffi_struct_union_s { + const char *name; + int type_index; // -> _cffi_types, on a OP_STRUCT_UNION + int flags; // _CFFI_F_* flags below + size_t size; + int alignment; + int first_field_index; // -> _cffi_fields array + int num_fields; +}; +#define _CFFI_F_UNION 0x01 // is a union, not a struct +#define _CFFI_F_CHECK_FIELDS 0x02 // complain if fields are not in the + // "standard layout" or if some are missing +#define 
_CFFI_F_PACKED 0x04 // for CHECK_FIELDS, assume a packed struct +#define _CFFI_F_EXTERNAL 0x08 // in some other ffi.include() +#define _CFFI_F_OPAQUE 0x10 // opaque + +struct _cffi_field_s { + const char *name; + size_t field_offset; + size_t field_size; + _cffi_opcode_t field_type_op; +}; + +struct _cffi_enum_s { + const char *name; + int type_index; // -> _cffi_types, on a OP_ENUM + int type_prim; // _CFFI_PRIM_xxx + const char *enumerators; // comma-delimited string +}; + +struct _cffi_typename_s { + const char *name; + int type_index; /* if opaque, points to a possibly artificial + OP_STRUCT which is itself opaque */ +}; + +struct _cffi_type_context_s { + _cffi_opcode_t *types; + const struct _cffi_global_s *globals; + const struct _cffi_field_s *fields; + const struct _cffi_struct_union_s *struct_unions; + const struct _cffi_enum_s *enums; + const struct _cffi_typename_s *typenames; + int num_globals; + int num_struct_unions; + int num_enums; + int num_typenames; + const char *const *includes; + int num_types; + int flags; /* future extension */ +}; + +struct _cffi_parse_info_s { + const struct _cffi_type_context_s *ctx; + _cffi_opcode_t *output; + unsigned int output_size; + size_t error_location; + const char *error_message; +}; + +struct _cffi_externpy_s { + const char *name; + size_t size_of_result; + void *reserved1, *reserved2; +}; + +#ifdef _CFFI_INTERNAL +static int parse_c_type(struct _cffi_parse_info_s *info, const char *input); +static int search_in_globals(const struct _cffi_type_context_s *ctx, + const char *search, size_t search_len); +static int search_in_struct_unions(const struct _cffi_type_context_s *ctx, + const char *search, size_t search_len); +#endif diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cffi/pkgconfig.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cffi/pkgconfig.py new file mode 100644 index 0000000000000000000000000000000000000000..5c93f15a60e6f904b2dd108d6e22044a5890bcb4 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cffi/pkgconfig.py @@ -0,0 +1,121 @@ +# pkg-config, https://www.freedesktop.org/wiki/Software/pkg-config/ integration for cffi +import sys, os, subprocess + +from .error import PkgConfigError + + +def merge_flags(cfg1, cfg2): + """Merge values from cffi config flags cfg2 to cf1 + + Example: + merge_flags({"libraries": ["one"]}, {"libraries": ["two"]}) + {"libraries": ["one", "two"]} + """ + for key, value in cfg2.items(): + if key not in cfg1: + cfg1[key] = value + else: + if not isinstance(cfg1[key], list): + raise TypeError("cfg1[%r] should be a list of strings" % (key,)) + if not isinstance(value, list): + raise TypeError("cfg2[%r] should be a list of strings" % (key,)) + cfg1[key].extend(value) + return cfg1 + + +def call(libname, flag, encoding=sys.getfilesystemencoding()): + """Calls pkg-config and returns the output if found + """ + a = ["pkg-config", "--print-errors"] + a.append(flag) + a.append(libname) + try: + pc = subprocess.Popen(a, stdout=subprocess.PIPE, stderr=subprocess.PIPE) + except EnvironmentError as e: + raise PkgConfigError("cannot run pkg-config: %s" % (str(e).strip(),)) + + bout, berr = pc.communicate() + if pc.returncode != 0: + try: + berr = berr.decode(encoding) + except Exception: + pass + raise PkgConfigError(berr.strip()) + + if sys.version_info >= (3,) and not isinstance(bout, str): # Python 3.x + try: + bout = bout.decode(encoding) + except UnicodeDecodeError: + raise 
PkgConfigError("pkg-config %s %s returned bytes that cannot " + "be decoded with encoding %r:\n%r" % + (flag, libname, encoding, bout)) + + if os.altsep != '\\' and '\\' in bout: + raise PkgConfigError("pkg-config %s %s returned an unsupported " + "backslash-escaped output:\n%r" % + (flag, libname, bout)) + return bout + + +def flags_from_pkgconfig(libs): + r"""Return compiler line flags for FFI.set_source based on pkg-config output + + Usage + ... + ffibuilder.set_source("_foo", pkgconfig = ["libfoo", "libbar >= 1.8.3"]) + + If pkg-config is installed on build machine, then arguments include_dirs, + library_dirs, libraries, define_macros, extra_compile_args and + extra_link_args are extended with an output of pkg-config for libfoo and + libbar. + + Raises PkgConfigError in case the pkg-config call fails. + """ + + def get_include_dirs(string): + return [x[2:] for x in string.split() if x.startswith("-I")] + + def get_library_dirs(string): + return [x[2:] for x in string.split() if x.startswith("-L")] + + def get_libraries(string): + return [x[2:] for x in string.split() if x.startswith("-l")] + + # convert -Dfoo=bar to list of tuples [("foo", "bar")] expected by distutils + def get_macros(string): + def _macro(x): + x = x[2:] # drop "-D" + if '=' in x: + return tuple(x.split("=", 1)) # "-Dfoo=bar" => ("foo", "bar") + else: + return (x, None) # "-Dfoo" => ("foo", None) + return [_macro(x) for x in string.split() if x.startswith("-D")] + + def get_other_cflags(string): + return [x for x in string.split() if not x.startswith("-I") and + not x.startswith("-D")] + + def get_other_libs(string): + return [x for x in string.split() if not x.startswith("-L") and + not x.startswith("-l")] + + # return kwargs for given libname + def kwargs(libname): + fse = sys.getfilesystemencoding() + all_cflags = call(libname, "--cflags") + all_libs = call(libname, "--libs") + return { + "include_dirs": get_include_dirs(all_cflags), + "library_dirs": get_library_dirs(all_libs), + "libraries": get_libraries(all_libs), + "define_macros": get_macros(all_cflags), + "extra_compile_args": get_other_cflags(all_cflags), + "extra_link_args": get_other_libs(all_libs), + } + + # merge all arguments together + ret = {} + for libname in libs: + lib_flags = kwargs(libname) + merge_flags(ret, lib_flags) + return ret diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cffi/recompiler.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cffi/recompiler.py new file mode 100644 index 0000000000000000000000000000000000000000..86b37d7ffcb77f62783e2394f6c0c6d3bcbe3f26 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cffi/recompiler.py @@ -0,0 +1,1581 @@ +import os, sys, io +from . 
import ffiplatform, model +from .error import VerificationError +from .cffi_opcode import * + +VERSION_BASE = 0x2601 +VERSION_EMBEDDED = 0x2701 +VERSION_CHAR16CHAR32 = 0x2801 + +USE_LIMITED_API = (sys.platform != 'win32' or sys.version_info < (3, 0) or + sys.version_info >= (3, 5)) + + +class GlobalExpr: + def __init__(self, name, address, type_op, size=0, check_value=0): + self.name = name + self.address = address + self.type_op = type_op + self.size = size + self.check_value = check_value + + def as_c_expr(self): + return ' { "%s", (void *)%s, %s, (void *)%s },' % ( + self.name, self.address, self.type_op.as_c_expr(), self.size) + + def as_python_expr(self): + return "b'%s%s',%d" % (self.type_op.as_python_bytes(), self.name, + self.check_value) + +class FieldExpr: + def __init__(self, name, field_offset, field_size, fbitsize, field_type_op): + self.name = name + self.field_offset = field_offset + self.field_size = field_size + self.fbitsize = fbitsize + self.field_type_op = field_type_op + + def as_c_expr(self): + spaces = " " * len(self.name) + return (' { "%s", %s,\n' % (self.name, self.field_offset) + + ' %s %s,\n' % (spaces, self.field_size) + + ' %s %s },' % (spaces, self.field_type_op.as_c_expr())) + + def as_python_expr(self): + raise NotImplementedError + + def as_field_python_expr(self): + if self.field_type_op.op == OP_NOOP: + size_expr = '' + elif self.field_type_op.op == OP_BITFIELD: + size_expr = format_four_bytes(self.fbitsize) + else: + raise NotImplementedError + return "b'%s%s%s'" % (self.field_type_op.as_python_bytes(), + size_expr, + self.name) + +class StructUnionExpr: + def __init__(self, name, type_index, flags, size, alignment, comment, + first_field_index, c_fields): + self.name = name + self.type_index = type_index + self.flags = flags + self.size = size + self.alignment = alignment + self.comment = comment + self.first_field_index = first_field_index + self.c_fields = c_fields + + def as_c_expr(self): + return (' { "%s", %d, %s,' % (self.name, self.type_index, self.flags) + + '\n %s, %s, ' % (self.size, self.alignment) + + '%d, %d ' % (self.first_field_index, len(self.c_fields)) + + ('/* %s */ ' % self.comment if self.comment else '') + + '},') + + def as_python_expr(self): + flags = eval(self.flags, G_FLAGS) + fields_expr = [c_field.as_field_python_expr() + for c_field in self.c_fields] + return "(b'%s%s%s',%s)" % ( + format_four_bytes(self.type_index), + format_four_bytes(flags), + self.name, + ','.join(fields_expr)) + +class EnumExpr: + def __init__(self, name, type_index, size, signed, allenums): + self.name = name + self.type_index = type_index + self.size = size + self.signed = signed + self.allenums = allenums + + def as_c_expr(self): + return (' { "%s", %d, _cffi_prim_int(%s, %s),\n' + ' "%s" },' % (self.name, self.type_index, + self.size, self.signed, self.allenums)) + + def as_python_expr(self): + prim_index = { + (1, 0): PRIM_UINT8, (1, 1): PRIM_INT8, + (2, 0): PRIM_UINT16, (2, 1): PRIM_INT16, + (4, 0): PRIM_UINT32, (4, 1): PRIM_INT32, + (8, 0): PRIM_UINT64, (8, 1): PRIM_INT64, + }[self.size, self.signed] + return "b'%s%s%s\\x00%s'" % (format_four_bytes(self.type_index), + format_four_bytes(prim_index), + self.name, self.allenums) + +class TypenameExpr: + def __init__(self, name, type_index): + self.name = name + self.type_index = type_index + + def as_c_expr(self): + return ' { "%s", %d },' % (self.name, self.type_index) + + def as_python_expr(self): + return "b'%s%s'" % (format_four_bytes(self.type_index), self.name) + + +# 
____________________________________________________________ + + +class Recompiler: + _num_externpy = 0 + + def __init__(self, ffi, module_name, target_is_python=False): + self.ffi = ffi + self.module_name = module_name + self.target_is_python = target_is_python + self._version = VERSION_BASE + + def needs_version(self, ver): + self._version = max(self._version, ver) + + def collect_type_table(self): + self._typesdict = {} + self._generate("collecttype") + # + all_decls = sorted(self._typesdict, key=str) + # + # prepare all FUNCTION bytecode sequences first + self.cffi_types = [] + for tp in all_decls: + if tp.is_raw_function: + assert self._typesdict[tp] is None + self._typesdict[tp] = len(self.cffi_types) + self.cffi_types.append(tp) # placeholder + for tp1 in tp.args: + assert isinstance(tp1, (model.VoidType, + model.BasePrimitiveType, + model.PointerType, + model.StructOrUnionOrEnum, + model.FunctionPtrType)) + if self._typesdict[tp1] is None: + self._typesdict[tp1] = len(self.cffi_types) + self.cffi_types.append(tp1) # placeholder + self.cffi_types.append('END') # placeholder + # + # prepare all OTHER bytecode sequences + for tp in all_decls: + if not tp.is_raw_function and self._typesdict[tp] is None: + self._typesdict[tp] = len(self.cffi_types) + self.cffi_types.append(tp) # placeholder + if tp.is_array_type and tp.length is not None: + self.cffi_types.append('LEN') # placeholder + assert None not in self._typesdict.values() + # + # collect all structs and unions and enums + self._struct_unions = {} + self._enums = {} + for tp in all_decls: + if isinstance(tp, model.StructOrUnion): + self._struct_unions[tp] = None + elif isinstance(tp, model.EnumType): + self._enums[tp] = None + for i, tp in enumerate(sorted(self._struct_unions, + key=lambda tp: tp.name)): + self._struct_unions[tp] = i + for i, tp in enumerate(sorted(self._enums, + key=lambda tp: tp.name)): + self._enums[tp] = i + # + # emit all bytecode sequences now + for tp in all_decls: + method = getattr(self, '_emit_bytecode_' + tp.__class__.__name__) + method(tp, self._typesdict[tp]) + # + # consistency check + for op in self.cffi_types: + assert isinstance(op, CffiOp) + self.cffi_types = tuple(self.cffi_types) # don't change any more + + def _enum_fields(self, tp): + # When producing C, expand all anonymous struct/union fields. + # That's necessary to have C code checking the offsets of the + # individual fields contained in them. When producing Python, + # don't do it and instead write it like it is, with the + # corresponding fields having an empty name. Empty names are + # recognized at runtime when we import the generated Python + # file. 
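+        # Added example (not upstream): given
+        #     struct s { struct { int a; }; int b; };
+        # the C output enumerates 'a' and 'b' directly so their offsets
+        # can be checked at compile time, while the Python (ABI) output
+        # keeps a single field with an empty name holding the anonymous
+        # inner struct.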
+ expand_anonymous_struct_union = not self.target_is_python + return tp.enumfields(expand_anonymous_struct_union) + + def _do_collect_type(self, tp): + if not isinstance(tp, model.BaseTypeByIdentity): + if isinstance(tp, tuple): + for x in tp: + self._do_collect_type(x) + return + if tp not in self._typesdict: + self._typesdict[tp] = None + if isinstance(tp, model.FunctionPtrType): + self._do_collect_type(tp.as_raw_function()) + elif isinstance(tp, model.StructOrUnion): + if tp.fldtypes is not None and ( + tp not in self.ffi._parser._included_declarations): + for name1, tp1, _, _ in self._enum_fields(tp): + self._do_collect_type(self._field_type(tp, name1, tp1)) + else: + for _, x in tp._get_items(): + self._do_collect_type(x) + + def _generate(self, step_name): + lst = self.ffi._parser._declarations.items() + for name, (tp, quals) in sorted(lst): + kind, realname = name.split(' ', 1) + try: + method = getattr(self, '_generate_cpy_%s_%s' % (kind, + step_name)) + except AttributeError: + raise VerificationError( + "not implemented in recompile(): %r" % name) + try: + self._current_quals = quals + method(tp, realname) + except Exception as e: + model.attach_exception_info(e, name) + raise + + # ---------- + + ALL_STEPS = ["global", "field", "struct_union", "enum", "typename"] + + def collect_step_tables(self): + # collect the declarations for '_cffi_globals', '_cffi_typenames', etc. + self._lsts = {} + for step_name in self.ALL_STEPS: + self._lsts[step_name] = [] + self._seen_struct_unions = set() + self._generate("ctx") + self._add_missing_struct_unions() + # + for step_name in self.ALL_STEPS: + lst = self._lsts[step_name] + if step_name != "field": + lst.sort(key=lambda entry: entry.name) + self._lsts[step_name] = tuple(lst) # don't change any more + # + # check for a possible internal inconsistency: _cffi_struct_unions + # should have been generated with exactly self._struct_unions + lst = self._lsts["struct_union"] + for tp, i in self._struct_unions.items(): + assert i < len(lst) + assert lst[i].name == tp.name + assert len(lst) == len(self._struct_unions) + # same with enums + lst = self._lsts["enum"] + for tp, i in self._enums.items(): + assert i < len(lst) + assert lst[i].name == tp.name + assert len(lst) == len(self._enums) + + # ---------- + + def _prnt(self, what=''): + self._f.write(what + '\n') + + def write_source_to_f(self, f, preamble): + if self.target_is_python: + assert preamble is None + self.write_py_source_to_f(f) + else: + assert preamble is not None + self.write_c_source_to_f(f, preamble) + + def _rel_readlines(self, filename): + g = open(os.path.join(os.path.dirname(__file__), filename), 'r') + lines = g.readlines() + g.close() + return lines + + def write_c_source_to_f(self, f, preamble): + self._f = f + prnt = self._prnt + if self.ffi._embedding is not None: + prnt('#define _CFFI_USE_EMBEDDING') + if not USE_LIMITED_API: + prnt('#define _CFFI_NO_LIMITED_API') + # + # first the '#include' (actually done by inlining the file's content) + lines = self._rel_readlines('_cffi_include.h') + i = lines.index('#include "parse_c_type.h"\n') + lines[i:i+1] = self._rel_readlines('parse_c_type.h') + prnt(''.join(lines)) + # + # if we have ffi._embedding != None, we give it here as a macro + # and include an extra file + base_module_name = self.module_name.split('.')[-1] + if self.ffi._embedding is not None: + prnt('#define _CFFI_MODULE_NAME "%s"' % (self.module_name,)) + prnt('static const char _CFFI_PYTHON_STARTUP_CODE[] = {') + 
self._print_string_literal_in_array(self.ffi._embedding) + prnt('0 };') + prnt('#ifdef PYPY_VERSION') + prnt('# define _CFFI_PYTHON_STARTUP_FUNC _cffi_pypyinit_%s' % ( + base_module_name,)) + prnt('#elif PY_MAJOR_VERSION >= 3') + prnt('# define _CFFI_PYTHON_STARTUP_FUNC PyInit_%s' % ( + base_module_name,)) + prnt('#else') + prnt('# define _CFFI_PYTHON_STARTUP_FUNC init%s' % ( + base_module_name,)) + prnt('#endif') + lines = self._rel_readlines('_embedding.h') + i = lines.index('#include "_cffi_errors.h"\n') + lines[i:i+1] = self._rel_readlines('_cffi_errors.h') + prnt(''.join(lines)) + self.needs_version(VERSION_EMBEDDED) + # + # then paste the C source given by the user, verbatim. + prnt('/************************************************************/') + prnt() + prnt(preamble) + prnt() + prnt('/************************************************************/') + prnt() + # + # the declaration of '_cffi_types' + prnt('static void *_cffi_types[] = {') + typeindex2type = dict([(i, tp) for (tp, i) in self._typesdict.items()]) + for i, op in enumerate(self.cffi_types): + comment = '' + if i in typeindex2type: + comment = ' // ' + typeindex2type[i]._get_c_name() + prnt('/* %2d */ %s,%s' % (i, op.as_c_expr(), comment)) + if not self.cffi_types: + prnt(' 0') + prnt('};') + prnt() + # + # call generate_cpy_xxx_decl(), for every xxx found from + # ffi._parser._declarations. This generates all the functions. + self._seen_constants = set() + self._generate("decl") + # + # the declaration of '_cffi_globals' and '_cffi_typenames' + nums = {} + for step_name in self.ALL_STEPS: + lst = self._lsts[step_name] + nums[step_name] = len(lst) + if nums[step_name] > 0: + prnt('static const struct _cffi_%s_s _cffi_%ss[] = {' % ( + step_name, step_name)) + for entry in lst: + prnt(entry.as_c_expr()) + prnt('};') + prnt() + # + # the declaration of '_cffi_includes' + if self.ffi._included_ffis: + prnt('static const char * const _cffi_includes[] = {') + for ffi_to_include in self.ffi._included_ffis: + try: + included_module_name, included_source = ( + ffi_to_include._assigned_source[:2]) + except AttributeError: + raise VerificationError( + "ffi object %r includes %r, but the latter has not " + "been prepared with set_source()" % ( + self.ffi, ffi_to_include,)) + if included_source is None: + raise VerificationError( + "not implemented yet: ffi.include() of a Python-based " + "ffi inside a C-based ffi") + prnt(' "%s",' % (included_module_name,)) + prnt(' NULL') + prnt('};') + prnt() + # + # the declaration of '_cffi_type_context' + prnt('static const struct _cffi_type_context_s _cffi_type_context = {') + prnt(' _cffi_types,') + for step_name in self.ALL_STEPS: + if nums[step_name] > 0: + prnt(' _cffi_%ss,' % step_name) + else: + prnt(' NULL, /* no %ss */' % step_name) + for step_name in self.ALL_STEPS: + if step_name != "field": + prnt(' %d, /* num_%ss */' % (nums[step_name], step_name)) + if self.ffi._included_ffis: + prnt(' _cffi_includes,') + else: + prnt(' NULL, /* no includes */') + prnt(' %d, /* num_types */' % (len(self.cffi_types),)) + flags = 0 + if self._num_externpy: + flags |= 1 # set to mean that we use extern "Python" + prnt(' %d, /* flags */' % flags) + prnt('};') + prnt() + # + # the init function + prnt('#ifdef __GNUC__') + prnt('# pragma GCC visibility push(default) /* for -fvisibility= */') + prnt('#endif') + prnt() + prnt('#ifdef PYPY_VERSION') + prnt('PyMODINIT_FUNC') + prnt('_cffi_pypyinit_%s(const void *p[])' % (base_module_name,)) + prnt('{') + if self._num_externpy: + prnt(' if 
(((intptr_t)p[0]) >= 0x0A03) {') + prnt(' _cffi_call_python_org = ' + '(void(*)(struct _cffi_externpy_s *, char *))p[1];') + prnt(' }') + prnt(' p[0] = (const void *)0x%x;' % self._version) + prnt(' p[1] = &_cffi_type_context;') + prnt('#if PY_MAJOR_VERSION >= 3') + prnt(' return NULL;') + prnt('#endif') + prnt('}') + # on Windows, distutils insists on putting init_cffi_xyz in + # 'export_symbols', so instead of fighting it, just give up and + # give it one + prnt('# ifdef _MSC_VER') + prnt(' PyMODINIT_FUNC') + prnt('# if PY_MAJOR_VERSION >= 3') + prnt(' PyInit_%s(void) { return NULL; }' % (base_module_name,)) + prnt('# else') + prnt(' init%s(void) { }' % (base_module_name,)) + prnt('# endif') + prnt('# endif') + prnt('#elif PY_MAJOR_VERSION >= 3') + prnt('PyMODINIT_FUNC') + prnt('PyInit_%s(void)' % (base_module_name,)) + prnt('{') + prnt(' return _cffi_init("%s", 0x%x, &_cffi_type_context);' % ( + self.module_name, self._version)) + prnt('}') + prnt('#else') + prnt('PyMODINIT_FUNC') + prnt('init%s(void)' % (base_module_name,)) + prnt('{') + prnt(' _cffi_init("%s", 0x%x, &_cffi_type_context);' % ( + self.module_name, self._version)) + prnt('}') + prnt('#endif') + prnt() + prnt('#ifdef __GNUC__') + prnt('# pragma GCC visibility pop') + prnt('#endif') + self._version = None + + def _to_py(self, x): + if isinstance(x, str): + return "b'%s'" % (x,) + if isinstance(x, (list, tuple)): + rep = [self._to_py(item) for item in x] + if len(rep) == 1: + rep.append('') + return "(%s)" % (','.join(rep),) + return x.as_python_expr() # Py2: unicode unexpected; Py3: bytes unexp. + + def write_py_source_to_f(self, f): + self._f = f + prnt = self._prnt + # + # header + prnt("# auto-generated file") + prnt("import _cffi_backend") + # + # the 'import' of the included ffis + num_includes = len(self.ffi._included_ffis or ()) + for i in range(num_includes): + ffi_to_include = self.ffi._included_ffis[i] + try: + included_module_name, included_source = ( + ffi_to_include._assigned_source[:2]) + except AttributeError: + raise VerificationError( + "ffi object %r includes %r, but the latter has not " + "been prepared with set_source()" % ( + self.ffi, ffi_to_include,)) + if included_source is not None: + raise VerificationError( + "not implemented yet: ffi.include() of a C-based " + "ffi inside a Python-based ffi") + prnt('from %s import ffi as _ffi%d' % (included_module_name, i)) + prnt() + prnt("ffi = _cffi_backend.FFI('%s'," % (self.module_name,)) + prnt(" _version = 0x%x," % (self._version,)) + self._version = None + # + # the '_types' keyword argument + self.cffi_types = tuple(self.cffi_types) # don't change any more + types_lst = [op.as_python_bytes() for op in self.cffi_types] + prnt(' _types = %s,' % (self._to_py(''.join(types_lst)),)) + typeindex2type = dict([(i, tp) for (tp, i) in self._typesdict.items()]) + # + # the keyword arguments from ALL_STEPS + for step_name in self.ALL_STEPS: + lst = self._lsts[step_name] + if len(lst) > 0 and step_name != "field": + prnt(' _%ss = %s,' % (step_name, self._to_py(lst))) + # + # the '_includes' keyword argument + if num_includes > 0: + prnt(' _includes = (%s,),' % ( + ', '.join(['_ffi%d' % i for i in range(num_includes)]),)) + # + # the footer + prnt(')') + + # ---------- + + def _gettypenum(self, type): + # a KeyError here is a bug. please report it! 
:-) + return self._typesdict[type] + + def _convert_funcarg_to_c(self, tp, fromvar, tovar, errcode): + extraarg = '' + if isinstance(tp, model.BasePrimitiveType) and not tp.is_complex_type(): + if tp.is_integer_type() and tp.name != '_Bool': + converter = '_cffi_to_c_int' + extraarg = ', %s' % tp.name + elif isinstance(tp, model.UnknownFloatType): + # don't check with is_float_type(): it may be a 'long + # double' here, and _cffi_to_c_double would loose precision + converter = '(%s)_cffi_to_c_double' % (tp.get_c_name(''),) + else: + cname = tp.get_c_name('') + converter = '(%s)_cffi_to_c_%s' % (cname, + tp.name.replace(' ', '_')) + if cname in ('char16_t', 'char32_t'): + self.needs_version(VERSION_CHAR16CHAR32) + errvalue = '-1' + # + elif isinstance(tp, model.PointerType): + self._convert_funcarg_to_c_ptr_or_array(tp, fromvar, + tovar, errcode) + return + # + elif (isinstance(tp, model.StructOrUnionOrEnum) or + isinstance(tp, model.BasePrimitiveType)): + # a struct (not a struct pointer) as a function argument; + # or, a complex (the same code works) + self._prnt(' if (_cffi_to_c((char *)&%s, _cffi_type(%d), %s) < 0)' + % (tovar, self._gettypenum(tp), fromvar)) + self._prnt(' %s;' % errcode) + return + # + elif isinstance(tp, model.FunctionPtrType): + converter = '(%s)_cffi_to_c_pointer' % tp.get_c_name('') + extraarg = ', _cffi_type(%d)' % self._gettypenum(tp) + errvalue = 'NULL' + # + else: + raise NotImplementedError(tp) + # + self._prnt(' %s = %s(%s%s);' % (tovar, converter, fromvar, extraarg)) + self._prnt(' if (%s == (%s)%s && PyErr_Occurred())' % ( + tovar, tp.get_c_name(''), errvalue)) + self._prnt(' %s;' % errcode) + + def _extra_local_variables(self, tp, localvars, freelines): + if isinstance(tp, model.PointerType): + localvars.add('Py_ssize_t datasize') + localvars.add('struct _cffi_freeme_s *large_args_free = NULL') + freelines.add('if (large_args_free != NULL)' + ' _cffi_free_array_arguments(large_args_free);') + + def _convert_funcarg_to_c_ptr_or_array(self, tp, fromvar, tovar, errcode): + self._prnt(' datasize = _cffi_prepare_pointer_call_argument(') + self._prnt(' _cffi_type(%d), %s, (char **)&%s);' % ( + self._gettypenum(tp), fromvar, tovar)) + self._prnt(' if (datasize != 0) {') + self._prnt(' %s = ((size_t)datasize) <= 640 ? 
' + '(%s)alloca((size_t)datasize) : NULL;' % ( + tovar, tp.get_c_name(''))) + self._prnt(' if (_cffi_convert_array_argument(_cffi_type(%d), %s, ' + '(char **)&%s,' % (self._gettypenum(tp), fromvar, tovar)) + self._prnt(' datasize, &large_args_free) < 0)') + self._prnt(' %s;' % errcode) + self._prnt(' }') + + def _convert_expr_from_c(self, tp, var, context): + if isinstance(tp, model.BasePrimitiveType): + if tp.is_integer_type() and tp.name != '_Bool': + return '_cffi_from_c_int(%s, %s)' % (var, tp.name) + elif isinstance(tp, model.UnknownFloatType): + return '_cffi_from_c_double(%s)' % (var,) + elif tp.name != 'long double' and not tp.is_complex_type(): + cname = tp.name.replace(' ', '_') + if cname in ('char16_t', 'char32_t'): + self.needs_version(VERSION_CHAR16CHAR32) + return '_cffi_from_c_%s(%s)' % (cname, var) + else: + return '_cffi_from_c_deref((char *)&%s, _cffi_type(%d))' % ( + var, self._gettypenum(tp)) + elif isinstance(tp, (model.PointerType, model.FunctionPtrType)): + return '_cffi_from_c_pointer((char *)%s, _cffi_type(%d))' % ( + var, self._gettypenum(tp)) + elif isinstance(tp, model.ArrayType): + return '_cffi_from_c_pointer((char *)%s, _cffi_type(%d))' % ( + var, self._gettypenum(model.PointerType(tp.item))) + elif isinstance(tp, model.StructOrUnion): + if tp.fldnames is None: + raise TypeError("'%s' is used as %s, but is opaque" % ( + tp._get_c_name(), context)) + return '_cffi_from_c_struct((char *)&%s, _cffi_type(%d))' % ( + var, self._gettypenum(tp)) + elif isinstance(tp, model.EnumType): + return '_cffi_from_c_deref((char *)&%s, _cffi_type(%d))' % ( + var, self._gettypenum(tp)) + else: + raise NotImplementedError(tp) + + # ---------- + # typedefs + + def _typedef_type(self, tp, name): + return self._global_type(tp, "(*(%s *)0)" % (name,)) + + def _generate_cpy_typedef_collecttype(self, tp, name): + self._do_collect_type(self._typedef_type(tp, name)) + + def _generate_cpy_typedef_decl(self, tp, name): + pass + + def _typedef_ctx(self, tp, name): + type_index = self._typesdict[tp] + self._lsts["typename"].append(TypenameExpr(name, type_index)) + + def _generate_cpy_typedef_ctx(self, tp, name): + tp = self._typedef_type(tp, name) + self._typedef_ctx(tp, name) + if getattr(tp, "origin", None) == "unknown_type": + self._struct_ctx(tp, tp.name, approxname=None) + elif isinstance(tp, model.NamedPointerType): + self._struct_ctx(tp.totype, tp.totype.name, approxname=tp.name, + named_ptr=tp) + + # ---------- + # function declarations + + def _generate_cpy_function_collecttype(self, tp, name): + self._do_collect_type(tp.as_raw_function()) + if tp.ellipsis and not self.target_is_python: + self._do_collect_type(tp) + + def _generate_cpy_function_decl(self, tp, name): + assert not self.target_is_python + assert isinstance(tp, model.FunctionPtrType) + if tp.ellipsis: + # cannot support vararg functions better than this: check for its + # exact type (including the fixed arguments), and build it as a + # constant function pointer (no CPython wrapper) + self._generate_cpy_constant_decl(tp, name) + return + prnt = self._prnt + numargs = len(tp.args) + if numargs == 0: + argname = 'noarg' + elif numargs == 1: + argname = 'arg0' + else: + argname = 'args' + # + # ------------------------------ + # the 'd' version of the function, only for addressof(lib, 'func') + arguments = [] + call_arguments = [] + context = 'argument of %s' % name + for i, type in enumerate(tp.args): + arguments.append(type.get_c_name(' x%d' % i, context)) + call_arguments.append('x%d' % i) + repr_arguments = ', 
'.join(arguments) + repr_arguments = repr_arguments or 'void' + if tp.abi: + abi = tp.abi + ' ' + else: + abi = '' + name_and_arguments = '%s_cffi_d_%s(%s)' % (abi, name, repr_arguments) + prnt('static %s' % (tp.result.get_c_name(name_and_arguments),)) + prnt('{') + call_arguments = ', '.join(call_arguments) + result_code = 'return ' + if isinstance(tp.result, model.VoidType): + result_code = '' + prnt(' %s%s(%s);' % (result_code, name, call_arguments)) + prnt('}') + # + prnt('#ifndef PYPY_VERSION') # ------------------------------ + # + prnt('static PyObject *') + prnt('_cffi_f_%s(PyObject *self, PyObject *%s)' % (name, argname)) + prnt('{') + # + context = 'argument of %s' % name + for i, type in enumerate(tp.args): + arg = type.get_c_name(' x%d' % i, context) + prnt(' %s;' % arg) + # + localvars = set() + freelines = set() + for type in tp.args: + self._extra_local_variables(type, localvars, freelines) + for decl in sorted(localvars): + prnt(' %s;' % (decl,)) + # + if not isinstance(tp.result, model.VoidType): + result_code = 'result = ' + context = 'result of %s' % name + result_decl = ' %s;' % tp.result.get_c_name(' result', context) + prnt(result_decl) + prnt(' PyObject *pyresult;') + else: + result_decl = None + result_code = '' + # + if len(tp.args) > 1: + rng = range(len(tp.args)) + for i in rng: + prnt(' PyObject *arg%d;' % i) + prnt() + prnt(' if (!PyArg_UnpackTuple(args, "%s", %d, %d, %s))' % ( + name, len(rng), len(rng), + ', '.join(['&arg%d' % i for i in rng]))) + prnt(' return NULL;') + prnt() + # + for i, type in enumerate(tp.args): + self._convert_funcarg_to_c(type, 'arg%d' % i, 'x%d' % i, + 'return NULL') + prnt() + # + prnt(' Py_BEGIN_ALLOW_THREADS') + prnt(' _cffi_restore_errno();') + call_arguments = ['x%d' % i for i in range(len(tp.args))] + call_arguments = ', '.join(call_arguments) + prnt(' { %s%s(%s); }' % (result_code, name, call_arguments)) + prnt(' _cffi_save_errno();') + prnt(' Py_END_ALLOW_THREADS') + prnt() + # + prnt(' (void)self; /* unused */') + if numargs == 0: + prnt(' (void)noarg; /* unused */') + if result_code: + prnt(' pyresult = %s;' % + self._convert_expr_from_c(tp.result, 'result', 'result type')) + for freeline in freelines: + prnt(' ' + freeline) + prnt(' return pyresult;') + else: + for freeline in freelines: + prnt(' ' + freeline) + prnt(' Py_INCREF(Py_None);') + prnt(' return Py_None;') + prnt('}') + # + prnt('#else') # ------------------------------ + # + # the PyPy version: need to replace struct/union arguments with + # pointers, and if the result is a struct/union, insert a first + # arg that is a pointer to the result. We also do that for + # complex args and return type. 
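+        # Added sketch (not upstream): a C function
+        #     struct point f(struct point p);
+        # is re-emitted for PyPy as
+        #     static void _cffi_f_f(struct point *result, struct point *x0);
+        # and the body performs '*result = f(*x0);' through the
+        # indirections.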
+ def need_indirection(type): + return (isinstance(type, model.StructOrUnion) or + (isinstance(type, model.PrimitiveType) and + type.is_complex_type())) + difference = False + arguments = [] + call_arguments = [] + context = 'argument of %s' % name + for i, type in enumerate(tp.args): + indirection = '' + if need_indirection(type): + indirection = '*' + difference = True + arg = type.get_c_name(' %sx%d' % (indirection, i), context) + arguments.append(arg) + call_arguments.append('%sx%d' % (indirection, i)) + tp_result = tp.result + if need_indirection(tp_result): + context = 'result of %s' % name + arg = tp_result.get_c_name(' *result', context) + arguments.insert(0, arg) + tp_result = model.void_type + result_decl = None + result_code = '*result = ' + difference = True + if difference: + repr_arguments = ', '.join(arguments) + repr_arguments = repr_arguments or 'void' + name_and_arguments = '%s_cffi_f_%s(%s)' % (abi, name, + repr_arguments) + prnt('static %s' % (tp_result.get_c_name(name_and_arguments),)) + prnt('{') + if result_decl: + prnt(result_decl) + call_arguments = ', '.join(call_arguments) + prnt(' { %s%s(%s); }' % (result_code, name, call_arguments)) + if result_decl: + prnt(' return result;') + prnt('}') + else: + prnt('# define _cffi_f_%s _cffi_d_%s' % (name, name)) + # + prnt('#endif') # ------------------------------ + prnt() + + def _generate_cpy_function_ctx(self, tp, name): + if tp.ellipsis and not self.target_is_python: + self._generate_cpy_constant_ctx(tp, name) + return + type_index = self._typesdict[tp.as_raw_function()] + numargs = len(tp.args) + if self.target_is_python: + meth_kind = OP_DLOPEN_FUNC + elif numargs == 0: + meth_kind = OP_CPYTHON_BLTN_N # 'METH_NOARGS' + elif numargs == 1: + meth_kind = OP_CPYTHON_BLTN_O # 'METH_O' + else: + meth_kind = OP_CPYTHON_BLTN_V # 'METH_VARARGS' + self._lsts["global"].append( + GlobalExpr(name, '_cffi_f_%s' % name, + CffiOp(meth_kind, type_index), + size='_cffi_d_%s' % name)) + + # ---------- + # named structs or unions + + def _field_type(self, tp_struct, field_name, tp_field): + if isinstance(tp_field, model.ArrayType): + actual_length = tp_field.length + if actual_length == '...': + ptr_struct_name = tp_struct.get_c_name('*') + actual_length = '_cffi_array_len(((%s)0)->%s)' % ( + ptr_struct_name, field_name) + tp_item = self._field_type(tp_struct, '%s[0]' % field_name, + tp_field.item) + tp_field = model.ArrayType(tp_item, actual_length) + return tp_field + + def _struct_collecttype(self, tp): + self._do_collect_type(tp) + if self.target_is_python: + # also requires nested anon struct/unions in ABI mode, recursively + for fldtype in tp.anonymous_struct_fields(): + self._struct_collecttype(fldtype) + + def _struct_decl(self, tp, cname, approxname): + if tp.fldtypes is None: + return + prnt = self._prnt + checkfuncname = '_cffi_checkfld_%s' % (approxname,) + prnt('_CFFI_UNUSED_FN') + prnt('static void %s(%s *p)' % (checkfuncname, cname)) + prnt('{') + prnt(' /* only to generate compile-time warnings or errors */') + prnt(' (void)p;') + for fname, ftype, fbitsize, fqual in self._enum_fields(tp): + try: + if ftype.is_integer_type() or fbitsize >= 0: + # accept all integers, but complain on float or double + if fname != '': + prnt(" (void)((p->%s) | 0); /* check that '%s.%s' is " + "an integer */" % (fname, cname, fname)) + continue + # only accept exactly the type declared, except that '[]' + # is interpreted as a '*' and so will match any array length. + # (It would also match '*', but that's harder to detect...) 
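+                # Added example (not upstream): for a field declared
+                # "int a[];" the check emitted below takes &p->a[0] into
+                # an 'int *tmp', which compiles whatever the real array
+                # length turns out to be.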
+ while (isinstance(ftype, model.ArrayType) + and (ftype.length is None or ftype.length == '...')): + ftype = ftype.item + fname = fname + '[0]' + prnt(' { %s = &p->%s; (void)tmp; }' % ( + ftype.get_c_name('*tmp', 'field %r'%fname, quals=fqual), + fname)) + except VerificationError as e: + prnt(' /* %s */' % str(e)) # cannot verify it, ignore + prnt('}') + prnt('struct _cffi_align_%s { char x; %s y; };' % (approxname, cname)) + prnt() + + def _struct_ctx(self, tp, cname, approxname, named_ptr=None): + type_index = self._typesdict[tp] + reason_for_not_expanding = None + flags = [] + if isinstance(tp, model.UnionType): + flags.append("_CFFI_F_UNION") + if tp.fldtypes is None: + flags.append("_CFFI_F_OPAQUE") + reason_for_not_expanding = "opaque" + if (tp not in self.ffi._parser._included_declarations and + (named_ptr is None or + named_ptr not in self.ffi._parser._included_declarations)): + if tp.fldtypes is None: + pass # opaque + elif tp.partial or any(tp.anonymous_struct_fields()): + pass # field layout obtained silently from the C compiler + else: + flags.append("_CFFI_F_CHECK_FIELDS") + if tp.packed: + if tp.packed > 1: + raise NotImplementedError( + "%r is declared with 'pack=%r'; only 0 or 1 are " + "supported in API mode (try to use \"...;\", which " + "does not require a 'pack' declaration)" % + (tp, tp.packed)) + flags.append("_CFFI_F_PACKED") + else: + flags.append("_CFFI_F_EXTERNAL") + reason_for_not_expanding = "external" + flags = '|'.join(flags) or '0' + c_fields = [] + if reason_for_not_expanding is None: + enumfields = list(self._enum_fields(tp)) + for fldname, fldtype, fbitsize, fqual in enumfields: + fldtype = self._field_type(tp, fldname, fldtype) + self._check_not_opaque(fldtype, + "field '%s.%s'" % (tp.name, fldname)) + # cname is None for _add_missing_struct_unions() only + op = OP_NOOP + if fbitsize >= 0: + op = OP_BITFIELD + size = '%d /* bits */' % fbitsize + elif cname is None or ( + isinstance(fldtype, model.ArrayType) and + fldtype.length is None): + size = '(size_t)-1' + else: + size = 'sizeof(((%s)0)->%s)' % ( + tp.get_c_name('*') if named_ptr is None + else named_ptr.name, + fldname) + if cname is None or fbitsize >= 0: + offset = '(size_t)-1' + elif named_ptr is not None: + offset = '((char *)&((%s)0)->%s) - (char *)0' % ( + named_ptr.name, fldname) + else: + offset = 'offsetof(%s, %s)' % (tp.get_c_name(''), fldname) + c_fields.append( + FieldExpr(fldname, offset, size, fbitsize, + CffiOp(op, self._typesdict[fldtype]))) + first_field_index = len(self._lsts["field"]) + self._lsts["field"].extend(c_fields) + # + if cname is None: # unknown name, for _add_missing_struct_unions + size = '(size_t)-2' + align = -2 + comment = "unnamed" + else: + if named_ptr is not None: + size = 'sizeof(*(%s)0)' % (named_ptr.name,) + align = '-1 /* unknown alignment */' + else: + size = 'sizeof(%s)' % (cname,) + align = 'offsetof(struct _cffi_align_%s, y)' % (approxname,) + comment = None + else: + size = '(size_t)-1' + align = -1 + first_field_index = -1 + comment = reason_for_not_expanding + self._lsts["struct_union"].append( + StructUnionExpr(tp.name, type_index, flags, size, align, comment, + first_field_index, c_fields)) + self._seen_struct_unions.add(tp) + + def _check_not_opaque(self, tp, location): + while isinstance(tp, model.ArrayType): + tp = tp.item + if isinstance(tp, model.StructOrUnion) and tp.fldtypes is None: + raise TypeError( + "%s is of an opaque type (not declared in cdef())" % location) + + def _add_missing_struct_unions(self): + # not very nice, but some 
struct declarations might be missing + # because they don't have any known C name. Check that they are + # not partial (we can't complete or verify them!) and emit them + # anonymously. + lst = list(self._struct_unions.items()) + lst.sort(key=lambda tp_order: tp_order[1]) + for tp, order in lst: + if tp not in self._seen_struct_unions: + if tp.partial: + raise NotImplementedError("internal inconsistency: %r is " + "partial but was not seen at " + "this point" % (tp,)) + if tp.name.startswith('$') and tp.name[1:].isdigit(): + approxname = tp.name[1:] + elif tp.name == '_IO_FILE' and tp.forcename == 'FILE': + approxname = 'FILE' + self._typedef_ctx(tp, 'FILE') + else: + raise NotImplementedError("internal inconsistency: %r" % + (tp,)) + self._struct_ctx(tp, None, approxname) + + def _generate_cpy_struct_collecttype(self, tp, name): + self._struct_collecttype(tp) + _generate_cpy_union_collecttype = _generate_cpy_struct_collecttype + + def _struct_names(self, tp): + cname = tp.get_c_name('') + if ' ' in cname: + return cname, cname.replace(' ', '_') + else: + return cname, '_' + cname + + def _generate_cpy_struct_decl(self, tp, name): + self._struct_decl(tp, *self._struct_names(tp)) + _generate_cpy_union_decl = _generate_cpy_struct_decl + + def _generate_cpy_struct_ctx(self, tp, name): + self._struct_ctx(tp, *self._struct_names(tp)) + _generate_cpy_union_ctx = _generate_cpy_struct_ctx + + # ---------- + # 'anonymous' declarations. These are produced for anonymous structs + # or unions; the 'name' is obtained by a typedef. + + def _generate_cpy_anonymous_collecttype(self, tp, name): + if isinstance(tp, model.EnumType): + self._generate_cpy_enum_collecttype(tp, name) + else: + self._struct_collecttype(tp) + + def _generate_cpy_anonymous_decl(self, tp, name): + if isinstance(tp, model.EnumType): + self._generate_cpy_enum_decl(tp) + else: + self._struct_decl(tp, name, 'typedef_' + name) + + def _generate_cpy_anonymous_ctx(self, tp, name): + if isinstance(tp, model.EnumType): + self._enum_ctx(tp, name) + else: + self._struct_ctx(tp, name, 'typedef_' + name) + + # ---------- + # constants, declared with "static const ..." 
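+    # Added sketch (not upstream): for "static const int FOO = 42;" the
+    # generated checker is roughly
+    #     static int _cffi_const_FOO(unsigned long long *o)
+    #     {
+    #         int n = (FOO) <= 0;
+    #         *o = (unsigned long long)((FOO) | 0); /* integer check */
+    #         return n;
+    #     }
+    # so the runtime learns the value and whether it is nonpositive.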
+ + def _generate_cpy_const(self, is_int, name, tp=None, category='const', + check_value=None): + if (category, name) in self._seen_constants: + raise VerificationError( + "duplicate declaration of %s '%s'" % (category, name)) + self._seen_constants.add((category, name)) + # + prnt = self._prnt + funcname = '_cffi_%s_%s' % (category, name) + if is_int: + prnt('static int %s(unsigned long long *o)' % funcname) + prnt('{') + prnt(' int n = (%s) <= 0;' % (name,)) + prnt(' *o = (unsigned long long)((%s) | 0);' + ' /* check that %s is an integer */' % (name, name)) + if check_value is not None: + if check_value > 0: + check_value = '%dU' % (check_value,) + prnt(' if (!_cffi_check_int(*o, n, %s))' % (check_value,)) + prnt(' n |= 2;') + prnt(' return n;') + prnt('}') + else: + assert check_value is None + prnt('static void %s(char *o)' % funcname) + prnt('{') + prnt(' *(%s)o = %s;' % (tp.get_c_name('*'), name)) + prnt('}') + prnt() + + def _generate_cpy_constant_collecttype(self, tp, name): + is_int = tp.is_integer_type() + if not is_int or self.target_is_python: + self._do_collect_type(tp) + + def _generate_cpy_constant_decl(self, tp, name): + is_int = tp.is_integer_type() + self._generate_cpy_const(is_int, name, tp) + + def _generate_cpy_constant_ctx(self, tp, name): + if not self.target_is_python and tp.is_integer_type(): + type_op = CffiOp(OP_CONSTANT_INT, -1) + else: + if self.target_is_python: + const_kind = OP_DLOPEN_CONST + else: + const_kind = OP_CONSTANT + type_index = self._typesdict[tp] + type_op = CffiOp(const_kind, type_index) + self._lsts["global"].append( + GlobalExpr(name, '_cffi_const_%s' % name, type_op)) + + # ---------- + # enums + + def _generate_cpy_enum_collecttype(self, tp, name): + self._do_collect_type(tp) + + def _generate_cpy_enum_decl(self, tp, name=None): + for enumerator in tp.enumerators: + self._generate_cpy_const(True, enumerator) + + def _enum_ctx(self, tp, cname): + type_index = self._typesdict[tp] + type_op = CffiOp(OP_ENUM, -1) + if self.target_is_python: + tp.check_not_partial() + for enumerator, enumvalue in zip(tp.enumerators, tp.enumvalues): + self._lsts["global"].append( + GlobalExpr(enumerator, '_cffi_const_%s' % enumerator, type_op, + check_value=enumvalue)) + # + if cname is not None and '$' not in cname and not self.target_is_python: + size = "sizeof(%s)" % cname + signed = "((%s)-1) <= 0" % cname + else: + basetp = tp.build_baseinttype(self.ffi, []) + size = self.ffi.sizeof(basetp) + signed = int(int(self.ffi.cast(basetp, -1)) < 0) + allenums = ",".join(tp.enumerators) + self._lsts["enum"].append( + EnumExpr(tp.name, type_index, size, signed, allenums)) + + def _generate_cpy_enum_ctx(self, tp, name): + self._enum_ctx(tp, tp._get_c_name()) + + # ---------- + # macros: for now only for integers + + def _generate_cpy_macro_collecttype(self, tp, name): + pass + + def _generate_cpy_macro_decl(self, tp, name): + if tp == '...': + check_value = None + else: + check_value = tp # an integer + self._generate_cpy_const(True, name, check_value=check_value) + + def _generate_cpy_macro_ctx(self, tp, name): + if tp == '...': + if self.target_is_python: + raise VerificationError( + "cannot use the syntax '...' in '#define %s ...' 
when " + "using the ABI mode" % (name,)) + check_value = None + else: + check_value = tp # an integer + type_op = CffiOp(OP_CONSTANT_INT, -1) + self._lsts["global"].append( + GlobalExpr(name, '_cffi_const_%s' % name, type_op, + check_value=check_value)) + + # ---------- + # global variables + + def _global_type(self, tp, global_name): + if isinstance(tp, model.ArrayType): + actual_length = tp.length + if actual_length == '...': + actual_length = '_cffi_array_len(%s)' % (global_name,) + tp_item = self._global_type(tp.item, '%s[0]' % global_name) + tp = model.ArrayType(tp_item, actual_length) + return tp + + def _generate_cpy_variable_collecttype(self, tp, name): + self._do_collect_type(self._global_type(tp, name)) + + def _generate_cpy_variable_decl(self, tp, name): + prnt = self._prnt + tp = self._global_type(tp, name) + if isinstance(tp, model.ArrayType) and tp.length is None: + tp = tp.item + ampersand = '' + else: + ampersand = '&' + # This code assumes that casts from "tp *" to "void *" is a + # no-op, i.e. a function that returns a "tp *" can be called + # as if it returned a "void *". This should be generally true + # on any modern machine. The only exception to that rule (on + # uncommon architectures, and as far as I can tell) might be + # if 'tp' were a function type, but that is not possible here. + # (If 'tp' is a function _pointer_ type, then casts from "fn_t + # **" to "void *" are again no-ops, as far as I can tell.) + decl = '*_cffi_var_%s(void)' % (name,) + prnt('static ' + tp.get_c_name(decl, quals=self._current_quals)) + prnt('{') + prnt(' return %s(%s);' % (ampersand, name)) + prnt('}') + prnt() + + def _generate_cpy_variable_ctx(self, tp, name): + tp = self._global_type(tp, name) + type_index = self._typesdict[tp] + if self.target_is_python: + op = OP_GLOBAL_VAR + else: + op = OP_GLOBAL_VAR_F + self._lsts["global"].append( + GlobalExpr(name, '_cffi_var_%s' % name, CffiOp(op, type_index))) + + # ---------- + # extern "Python" + + def _generate_cpy_extern_python_collecttype(self, tp, name): + assert isinstance(tp, model.FunctionPtrType) + self._do_collect_type(tp) + _generate_cpy_dllexport_python_collecttype = \ + _generate_cpy_extern_python_plus_c_collecttype = \ + _generate_cpy_extern_python_collecttype + + def _extern_python_decl(self, tp, name, tag_and_space): + prnt = self._prnt + if isinstance(tp.result, model.VoidType): + size_of_result = '0' + else: + context = 'result of %s' % name + size_of_result = '(int)sizeof(%s)' % ( + tp.result.get_c_name('', context),) + prnt('static struct _cffi_externpy_s _cffi_externpy__%s =' % name) + prnt(' { "%s.%s", %s, 0, 0 };' % ( + self.module_name, name, size_of_result)) + prnt() + # + arguments = [] + context = 'argument of %s' % name + for i, type in enumerate(tp.args): + arg = type.get_c_name(' a%d' % i, context) + arguments.append(arg) + # + repr_arguments = ', '.join(arguments) + repr_arguments = repr_arguments or 'void' + name_and_arguments = '%s(%s)' % (name, repr_arguments) + if tp.abi == "__stdcall": + name_and_arguments = '_cffi_stdcall ' + name_and_arguments + # + def may_need_128_bits(tp): + return (isinstance(tp, model.PrimitiveType) and + tp.name == 'long double') + # + size_of_a = max(len(tp.args)*8, 8) + if may_need_128_bits(tp.result): + size_of_a = max(size_of_a, 16) + if isinstance(tp.result, model.StructOrUnion): + size_of_a = 'sizeof(%s) > %d ? 
sizeof(%s) : %d' % ( + tp.result.get_c_name(''), size_of_a, + tp.result.get_c_name(''), size_of_a) + prnt('%s%s' % (tag_and_space, tp.result.get_c_name(name_and_arguments))) + prnt('{') + prnt(' char a[%s];' % size_of_a) + prnt(' char *p = a;') + for i, type in enumerate(tp.args): + arg = 'a%d' % i + if (isinstance(type, model.StructOrUnion) or + may_need_128_bits(type)): + arg = '&' + arg + type = model.PointerType(type) + prnt(' *(%s)(p + %d) = %s;' % (type.get_c_name('*'), i*8, arg)) + prnt(' _cffi_call_python(&_cffi_externpy__%s, p);' % name) + if not isinstance(tp.result, model.VoidType): + prnt(' return *(%s)p;' % (tp.result.get_c_name('*'),)) + prnt('}') + prnt() + self._num_externpy += 1 + + def _generate_cpy_extern_python_decl(self, tp, name): + self._extern_python_decl(tp, name, 'static ') + + def _generate_cpy_dllexport_python_decl(self, tp, name): + self._extern_python_decl(tp, name, 'CFFI_DLLEXPORT ') + + def _generate_cpy_extern_python_plus_c_decl(self, tp, name): + self._extern_python_decl(tp, name, '') + + def _generate_cpy_extern_python_ctx(self, tp, name): + if self.target_is_python: + raise VerificationError( + "cannot use 'extern \"Python\"' in the ABI mode") + if tp.ellipsis: + raise NotImplementedError("a vararg function is extern \"Python\"") + type_index = self._typesdict[tp] + type_op = CffiOp(OP_EXTERN_PYTHON, type_index) + self._lsts["global"].append( + GlobalExpr(name, '&_cffi_externpy__%s' % name, type_op, name)) + + _generate_cpy_dllexport_python_ctx = \ + _generate_cpy_extern_python_plus_c_ctx = \ + _generate_cpy_extern_python_ctx + + def _print_string_literal_in_array(self, s): + prnt = self._prnt + prnt('// # NB. this is not a string because of a size limit in MSVC') + if not isinstance(s, bytes): # unicode + s = s.encode('utf-8') # -> bytes + else: + s.decode('utf-8') # got bytes, check for valid utf-8 + try: + s.decode('ascii') + except UnicodeDecodeError: + s = b'# -*- encoding: utf8 -*-\n' + s + for line in s.splitlines(True): + comment = line + if type('//') is bytes: # python2 + line = map(ord, line) # make a list of integers + else: # python3 + # type(line) is bytes, which enumerates like a list of integers + comment = ascii(comment)[1:-1] + prnt(('// ' + comment).rstrip()) + printed_line = '' + for c in line: + if len(printed_line) >= 76: + prnt(printed_line) + printed_line = '' + printed_line += '%d,' % (c,) + prnt(printed_line) + + # ---------- + # emitting the opcodes for individual types + + def _emit_bytecode_VoidType(self, tp, index): + self.cffi_types[index] = CffiOp(OP_PRIMITIVE, PRIM_VOID) + + def _emit_bytecode_PrimitiveType(self, tp, index): + prim_index = PRIMITIVE_TO_INDEX[tp.name] + self.cffi_types[index] = CffiOp(OP_PRIMITIVE, prim_index) + + def _emit_bytecode_UnknownIntegerType(self, tp, index): + s = ('_cffi_prim_int(sizeof(%s), (\n' + ' ((%s)-1) | 0 /* check that %s is an integer type */\n' + ' ) <= 0)' % (tp.name, tp.name, tp.name)) + self.cffi_types[index] = CffiOp(OP_PRIMITIVE, s) + + def _emit_bytecode_UnknownFloatType(self, tp, index): + s = ('_cffi_prim_float(sizeof(%s) *\n' + ' (((%s)1) / 2) * 2 /* integer => 0, float => 1 */\n' + ' )' % (tp.name, tp.name)) + self.cffi_types[index] = CffiOp(OP_PRIMITIVE, s) + + def _emit_bytecode_RawFunctionType(self, tp, index): + self.cffi_types[index] = CffiOp(OP_FUNCTION, self._typesdict[tp.result]) + index += 1 + for tp1 in tp.args: + realindex = self._typesdict[tp1] + if index != realindex: + if isinstance(tp1, model.PrimitiveType): + self._emit_bytecode_PrimitiveType(tp1, index) + 
else: + self.cffi_types[index] = CffiOp(OP_NOOP, realindex) + index += 1 + flags = int(tp.ellipsis) + if tp.abi is not None: + if tp.abi == '__stdcall': + flags |= 2 + else: + raise NotImplementedError("abi=%r" % (tp.abi,)) + self.cffi_types[index] = CffiOp(OP_FUNCTION_END, flags) + + def _emit_bytecode_PointerType(self, tp, index): + self.cffi_types[index] = CffiOp(OP_POINTER, self._typesdict[tp.totype]) + + _emit_bytecode_ConstPointerType = _emit_bytecode_PointerType + _emit_bytecode_NamedPointerType = _emit_bytecode_PointerType + + def _emit_bytecode_FunctionPtrType(self, tp, index): + raw = tp.as_raw_function() + self.cffi_types[index] = CffiOp(OP_POINTER, self._typesdict[raw]) + + def _emit_bytecode_ArrayType(self, tp, index): + item_index = self._typesdict[tp.item] + if tp.length is None: + self.cffi_types[index] = CffiOp(OP_OPEN_ARRAY, item_index) + elif tp.length == '...': + raise VerificationError( + "type %s badly placed: the '...' array length can only be " + "used on global arrays or on fields of structures" % ( + str(tp).replace('/*...*/', '...'),)) + else: + assert self.cffi_types[index + 1] == 'LEN' + self.cffi_types[index] = CffiOp(OP_ARRAY, item_index) + self.cffi_types[index + 1] = CffiOp(None, str(tp.length)) + + def _emit_bytecode_StructType(self, tp, index): + struct_index = self._struct_unions[tp] + self.cffi_types[index] = CffiOp(OP_STRUCT_UNION, struct_index) + _emit_bytecode_UnionType = _emit_bytecode_StructType + + def _emit_bytecode_EnumType(self, tp, index): + enum_index = self._enums[tp] + self.cffi_types[index] = CffiOp(OP_ENUM, enum_index) + + +if sys.version_info >= (3,): + NativeIO = io.StringIO +else: + class NativeIO(io.BytesIO): + def write(self, s): + if isinstance(s, unicode): + s = s.encode('ascii') + super(NativeIO, self).write(s) + +def _make_c_or_py_source(ffi, module_name, preamble, target_file, verbose): + if verbose: + print("generating %s" % (target_file,)) + recompiler = Recompiler(ffi, module_name, + target_is_python=(preamble is None)) + recompiler.collect_type_table() + recompiler.collect_step_tables() + f = NativeIO() + recompiler.write_source_to_f(f, preamble) + output = f.getvalue() + try: + with open(target_file, 'r') as f1: + if f1.read(len(output) + 1) != output: + raise IOError + if verbose: + print("(already up-to-date)") + return False # already up-to-date + except IOError: + tmp_file = '%s.~%d' % (target_file, os.getpid()) + with open(tmp_file, 'w') as f1: + f1.write(output) + try: + os.rename(tmp_file, target_file) + except OSError: + os.unlink(target_file) + os.rename(tmp_file, target_file) + return True + +def make_c_source(ffi, module_name, preamble, target_c_file, verbose=False): + assert preamble is not None + return _make_c_or_py_source(ffi, module_name, preamble, target_c_file, + verbose) + +def make_py_source(ffi, module_name, target_py_file, verbose=False): + return _make_c_or_py_source(ffi, module_name, None, target_py_file, + verbose) + +def _modname_to_file(outputdir, modname, extension): + parts = modname.split('.') + try: + os.makedirs(os.path.join(outputdir, *parts[:-1])) + except OSError: + pass + parts[-1] += extension + return os.path.join(outputdir, *parts), parts + + +# Aaargh. Distutils is not tested at all for the purpose of compiling +# DLLs that are not extension modules. Here are some hacks to work +# around that, in the _patch_for_*() functions... 
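+# A small illustration of the _patch_meth()/_unpatch_meths() pair defined
+# below (the compiler class and replacement method are hypothetical): a
+# method is swapped out for the duration of a build and restored afterwards,
+# which is exactly the pattern recompile() follows further down:
+#
+#     patchlist = []
+#     _patch_meth(patchlist, SomeCompilerClass, 'link_shared_object', my_link)
+#     try:
+#         ...   # run the distutils build with the patched method
+#     finally:
+#         _unpatch_meths(patchlist)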
+ +def _patch_meth(patchlist, cls, name, new_meth): + old = getattr(cls, name) + patchlist.append((cls, name, old)) + setattr(cls, name, new_meth) + return old + +def _unpatch_meths(patchlist): + for cls, name, old_meth in reversed(patchlist): + setattr(cls, name, old_meth) + +def _patch_for_embedding(patchlist): + if sys.platform == 'win32': + # we must not remove the manifest when building for embedding! + from distutils.msvc9compiler import MSVCCompiler + _patch_meth(patchlist, MSVCCompiler, '_remove_visual_c_ref', + lambda self, manifest_file: manifest_file) + + if sys.platform == 'darwin': + # we must not make a '-bundle', but a '-dynamiclib' instead + from distutils.ccompiler import CCompiler + def my_link_shared_object(self, *args, **kwds): + if '-bundle' in self.linker_so: + self.linker_so = list(self.linker_so) + i = self.linker_so.index('-bundle') + self.linker_so[i] = '-dynamiclib' + return old_link_shared_object(self, *args, **kwds) + old_link_shared_object = _patch_meth(patchlist, CCompiler, + 'link_shared_object', + my_link_shared_object) + +def _patch_for_target(patchlist, target): + from distutils.command.build_ext import build_ext + # if 'target' is different from '*', we need to patch some internal + # method to just return this 'target' value, instead of having it + # built from module_name + if target.endswith('.*'): + target = target[:-2] + if sys.platform == 'win32': + target += '.dll' + elif sys.platform == 'darwin': + target += '.dylib' + else: + target += '.so' + _patch_meth(patchlist, build_ext, 'get_ext_filename', + lambda self, ext_name: target) + + +def recompile(ffi, module_name, preamble, tmpdir='.', call_c_compiler=True, + c_file=None, source_extension='.c', extradir=None, + compiler_verbose=1, target=None, debug=None, **kwds): + if not isinstance(module_name, str): + module_name = module_name.encode('ascii') + if ffi._windows_unicode: + ffi._apply_windows_unicode(kwds) + if preamble is not None: + embedding = (ffi._embedding is not None) + if embedding: + ffi._apply_embedding_fix(kwds) + if c_file is None: + c_file, parts = _modname_to_file(tmpdir, module_name, + source_extension) + if extradir: + parts = [extradir] + parts + ext_c_file = os.path.join(*parts) + else: + ext_c_file = c_file + # + if target is None: + if embedding: + target = '%s.*' % module_name + else: + target = '*' + # + ext = ffiplatform.get_extension(ext_c_file, module_name, **kwds) + updated = make_c_source(ffi, module_name, preamble, c_file, + verbose=compiler_verbose) + if call_c_compiler: + patchlist = [] + cwd = os.getcwd() + try: + if embedding: + _patch_for_embedding(patchlist) + if target != '*': + _patch_for_target(patchlist, target) + if compiler_verbose: + if tmpdir == '.': + msg = 'the current directory is' + else: + msg = 'setting the current directory to' + print('%s %r' % (msg, os.path.abspath(tmpdir))) + os.chdir(tmpdir) + outputfilename = ffiplatform.compile('.', ext, + compiler_verbose, debug) + finally: + os.chdir(cwd) + _unpatch_meths(patchlist) + return outputfilename + else: + return ext, updated + else: + if c_file is None: + c_file, _ = _modname_to_file(tmpdir, module_name, '.py') + updated = make_py_source(ffi, module_name, c_file, + verbose=compiler_verbose) + if call_c_compiler: + return c_file + else: + return None, updated + diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cffi/setuptools_ext.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cffi/setuptools_ext.py new file mode 
100644 index 0000000000000000000000000000000000000000..8fe361487e469b3a87b80ddec1c5585b3801c587 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cffi/setuptools_ext.py @@ -0,0 +1,219 @@ +import os +import sys + +try: + basestring +except NameError: + # Python 3.x + basestring = str + +def error(msg): + from distutils.errors import DistutilsSetupError + raise DistutilsSetupError(msg) + + +def execfile(filename, glob): + # We use execfile() (here rewritten for Python 3) instead of + # __import__() to load the build script. The problem with + # a normal import is that in some packages, the intermediate + # __init__.py files may already try to import the file that + # we are generating. + with open(filename) as f: + src = f.read() + src += '\n' # Python 2.6 compatibility + code = compile(src, filename, 'exec') + exec(code, glob, glob) + + +def add_cffi_module(dist, mod_spec): + from cffi.api import FFI + + if not isinstance(mod_spec, basestring): + error("argument to 'cffi_modules=...' must be a str or a list of str," + " not %r" % (type(mod_spec).__name__,)) + mod_spec = str(mod_spec) + try: + build_file_name, ffi_var_name = mod_spec.split(':') + except ValueError: + error("%r must be of the form 'path/build.py:ffi_variable'" % + (mod_spec,)) + if not os.path.exists(build_file_name): + ext = '' + rewritten = build_file_name.replace('.', '/') + '.py' + if os.path.exists(rewritten): + ext = ' (rewrite cffi_modules to [%r])' % ( + rewritten + ':' + ffi_var_name,) + error("%r does not name an existing file%s" % (build_file_name, ext)) + + mod_vars = {'__name__': '__cffi__', '__file__': build_file_name} + execfile(build_file_name, mod_vars) + + try: + ffi = mod_vars[ffi_var_name] + except KeyError: + error("%r: object %r not found in module" % (mod_spec, + ffi_var_name)) + if not isinstance(ffi, FFI): + ffi = ffi() # maybe it's a function instead of directly an ffi + if not isinstance(ffi, FFI): + error("%r is not an FFI instance (got %r)" % (mod_spec, + type(ffi).__name__)) + if not hasattr(ffi, '_assigned_source'): + error("%r: the set_source() method was not called" % (mod_spec,)) + module_name, source, source_extension, kwds = ffi._assigned_source + if ffi._windows_unicode: + kwds = kwds.copy() + ffi._apply_windows_unicode(kwds) + + if source is None: + _add_py_module(dist, ffi, module_name) + else: + _add_c_module(dist, ffi, module_name, source, source_extension, kwds) + +def _set_py_limited_api(Extension, kwds): + """ + Add py_limited_api to kwds if setuptools >= 26 is in use. + Do not alter the setting if it already exists. + Setuptools takes care of ignoring the flag on Python 2 and PyPy. + + CPython itself should ignore the flag in a debugging version + (by not listing .abi3.so in the extensions it supports), but + it doesn't so far, creating troubles. That's why we check + for "not hasattr(sys, 'gettotalrefcount')" (the 2.7 compatible equivalent + of 'd' not in sys.abiflags). (http://bugs.python.org/issue28401) + + On Windows, with CPython <= 3.4, it's better not to use py_limited_api + because virtualenv *still* doesn't copy PYTHON3.DLL on these versions. + Recently (2020) we started shipping only >= 3.5 wheels, though. So + we'll give it another try and set py_limited_api on Windows >= 3.5. 
+ """ + from cffi import recompiler + + if ('py_limited_api' not in kwds and not hasattr(sys, 'gettotalrefcount') + and recompiler.USE_LIMITED_API): + import setuptools + try: + setuptools_major_version = int(setuptools.__version__.partition('.')[0]) + if setuptools_major_version >= 26: + kwds['py_limited_api'] = True + except ValueError: # certain development versions of setuptools + # If we don't know the version number of setuptools, we + # try to set 'py_limited_api' anyway. At worst, we get a + # warning. + kwds['py_limited_api'] = True + return kwds + +def _add_c_module(dist, ffi, module_name, source, source_extension, kwds): + from distutils.core import Extension + # We are a setuptools extension. Need this build_ext for py_limited_api. + from setuptools.command.build_ext import build_ext + from distutils.dir_util import mkpath + from distutils import log + from cffi import recompiler + + allsources = ['$PLACEHOLDER'] + allsources.extend(kwds.pop('sources', [])) + kwds = _set_py_limited_api(Extension, kwds) + ext = Extension(name=module_name, sources=allsources, **kwds) + + def make_mod(tmpdir, pre_run=None): + c_file = os.path.join(tmpdir, module_name + source_extension) + log.info("generating cffi module %r" % c_file) + mkpath(tmpdir) + # a setuptools-only, API-only hook: called with the "ext" and "ffi" + # arguments just before we turn the ffi into C code. To use it, + # subclass the 'distutils.command.build_ext.build_ext' class and + # add a method 'def pre_run(self, ext, ffi)'. + if pre_run is not None: + pre_run(ext, ffi) + updated = recompiler.make_c_source(ffi, module_name, source, c_file) + if not updated: + log.info("already up-to-date") + return c_file + + if dist.ext_modules is None: + dist.ext_modules = [] + dist.ext_modules.append(ext) + + base_class = dist.cmdclass.get('build_ext', build_ext) + class build_ext_make_mod(base_class): + def run(self): + if ext.sources[0] == '$PLACEHOLDER': + pre_run = getattr(self, 'pre_run', None) + ext.sources[0] = make_mod(self.build_temp, pre_run) + base_class.run(self) + dist.cmdclass['build_ext'] = build_ext_make_mod + # NB. multiple runs here will create multiple 'build_ext_make_mod' + # classes. Even in this case the 'build_ext' command should be + # run once; but just in case, the logic above does nothing if + # called again. + + +def _add_py_module(dist, ffi, module_name): + from distutils.dir_util import mkpath + from setuptools.command.build_py import build_py + from setuptools.command.build_ext import build_ext + from distutils import log + from cffi import recompiler + + def generate_mod(py_file): + log.info("generating cffi module %r" % py_file) + mkpath(os.path.dirname(py_file)) + updated = recompiler.make_py_source(ffi, module_name, py_file) + if not updated: + log.info("already up-to-date") + + base_class = dist.cmdclass.get('build_py', build_py) + class build_py_make_mod(base_class): + def run(self): + base_class.run(self) + module_path = module_name.split('.') + module_path[-1] += '.py' + generate_mod(os.path.join(self.build_lib, *module_path)) + def get_source_files(self): + # This is called from 'setup.py sdist' only. Exclude + # the generate .py module in this case. 
+ saved_py_modules = self.py_modules + try: + if saved_py_modules: + self.py_modules = [m for m in saved_py_modules + if m != module_name] + return base_class.get_source_files(self) + finally: + self.py_modules = saved_py_modules + dist.cmdclass['build_py'] = build_py_make_mod + + # distutils and setuptools have no notion I could find of a + # generated python module. If we don't add module_name to + # dist.py_modules, then things mostly work but there are some + # combination of options (--root and --record) that will miss + # the module. So we add it here, which gives a few apparently + # harmless warnings about not finding the file outside the + # build directory. + # Then we need to hack more in get_source_files(); see above. + if dist.py_modules is None: + dist.py_modules = [] + dist.py_modules.append(module_name) + + # the following is only for "build_ext -i" + base_class_2 = dist.cmdclass.get('build_ext', build_ext) + class build_ext_make_mod(base_class_2): + def run(self): + base_class_2.run(self) + if self.inplace: + # from get_ext_fullpath() in distutils/command/build_ext.py + module_path = module_name.split('.') + package = '.'.join(module_path[:-1]) + build_py = self.get_finalized_command('build_py') + package_dir = build_py.get_package_dir(package) + file_name = module_path[-1] + '.py' + generate_mod(os.path.join(package_dir, file_name)) + dist.cmdclass['build_ext'] = build_ext_make_mod + +def cffi_modules(dist, attr, value): + assert attr == 'cffi_modules' + if isinstance(value, basestring): + value = [value] + + for cffi_module in value: + add_cffi_module(dist, cffi_module) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cffi/vengine_cpy.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cffi/vengine_cpy.py new file mode 100644 index 0000000000000000000000000000000000000000..6de0df0ea4e1a98e65964ab61588df9abf536bac --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cffi/vengine_cpy.py @@ -0,0 +1,1076 @@ +# +# DEPRECATED: implementation for ffi.verify() +# +import sys, imp +from . import model +from .error import VerificationError + + +class VCPythonEngine(object): + _class_key = 'x' + _gen_python_module = True + + def __init__(self, verifier): + self.verifier = verifier + self.ffi = verifier.ffi + self._struct_pending_verification = {} + self._types_of_builtin_functions = {} + + def patch_extension_kwds(self, kwds): + pass + + def find_module(self, module_name, path, so_suffixes): + try: + f, filename, descr = imp.find_module(module_name, path) + except ImportError: + return None + if f is not None: + f.close() + # Note that after a setuptools installation, there are both .py + # and .so files with the same basename. The code here relies on + # imp.find_module() locating the .so in priority. + if descr[0] not in so_suffixes: + return None + return filename + + def collect_types(self): + self._typesdict = {} + self._generate("collecttype") + + def _prnt(self, what=''): + self._f.write(what + '\n') + + def _gettypenum(self, type): + # a KeyError here is a bug. please report it! 
:-) + return self._typesdict[type] + + def _do_collect_type(self, tp): + if ((not isinstance(tp, model.PrimitiveType) + or tp.name == 'long double') + and tp not in self._typesdict): + num = len(self._typesdict) + self._typesdict[tp] = num + + def write_source_to_f(self): + self.collect_types() + # + # The new module will have a _cffi_setup() function that receives + # objects from the ffi world, and that calls some setup code in + # the module. This setup code is split in several independent + # functions, e.g. one per constant. The functions are "chained" + # by ending in a tail call to each other. + # + # This is further split in two chained lists, depending on if we + # can do it at import-time or if we must wait for _cffi_setup() to + # provide us with the <ctype> objects. This is needed because we + # need the values of the enum constants in order to build the + # <ctype 'enum'> that we may have to pass to _cffi_setup(). + # + # The following two 'chained_list_constants' items contain + # the head of these two chained lists, as a string that gives the + # call to do, if any. + self._chained_list_constants = ['((void)lib,0)', '((void)lib,0)'] + # + prnt = self._prnt + # first paste some standard set of lines that are mostly '#define' + prnt(cffimod_header) + prnt() + # then paste the C source given by the user, verbatim. + prnt(self.verifier.preamble) + prnt() + # + # call generate_cpy_xxx_decl(), for every xxx found from + # ffi._parser._declarations. This generates all the functions. + self._generate("decl") + # + # implement the function _cffi_setup_custom() as calling the + # head of the chained list. + self._generate_setup_custom() + prnt() + # + # produce the method table, including the entries for the + # generated Python->C function wrappers, which are done + # by generate_cpy_function_method(). + prnt('static PyMethodDef _cffi_methods[] = {') + self._generate("method") + prnt(' {"_cffi_setup", _cffi_setup, METH_VARARGS, NULL},') + prnt(' {NULL, NULL, 0, NULL} /* Sentinel */') + prnt('};') + prnt() + # + # standard init. + modname = self.verifier.get_module_name() + constants = self._chained_list_constants[False] + prnt('#if PY_MAJOR_VERSION >= 3') + prnt() + prnt('static struct PyModuleDef _cffi_module_def = {') + prnt(' PyModuleDef_HEAD_INIT,') + prnt(' "%s",' % modname) + prnt(' NULL,') + prnt(' -1,') + prnt(' _cffi_methods,') + prnt(' NULL, NULL, NULL, NULL') + prnt('};') + prnt() + prnt('PyMODINIT_FUNC') + prnt('PyInit_%s(void)' % modname) + prnt('{') + prnt(' PyObject *lib;') + prnt(' lib = PyModule_Create(&_cffi_module_def);') + prnt(' if (lib == NULL)') + prnt(' return NULL;') + prnt(' if (%s < 0 || _cffi_init() < 0) {' % (constants,)) + prnt(' Py_DECREF(lib);') + prnt(' return NULL;') + prnt(' }') + prnt(' return lib;') + prnt('}') + prnt() + prnt('#else') + prnt() + prnt('PyMODINIT_FUNC') + prnt('init%s(void)' % modname) + prnt('{') + prnt(' PyObject *lib;') + prnt(' lib = Py_InitModule("%s", _cffi_methods);' % modname) + prnt(' if (lib == NULL)') + prnt(' return;') + prnt(' if (%s < 0 || _cffi_init() < 0)' % (constants,)) + prnt(' return;') + prnt(' return;') + prnt('}') + prnt() + prnt('#endif') + + def load_library(self, flags=None): + # XXX review all usages of 'self' here!
+ # import it as a new extension module + imp.acquire_lock() + try: + if hasattr(sys, "getdlopenflags"): + previous_flags = sys.getdlopenflags() + try: + if hasattr(sys, "setdlopenflags") and flags is not None: + sys.setdlopenflags(flags) + module = imp.load_dynamic(self.verifier.get_module_name(), + self.verifier.modulefilename) + except ImportError as e: + error = "importing %r: %s" % (self.verifier.modulefilename, e) + raise VerificationError(error) + finally: + if hasattr(sys, "setdlopenflags"): + sys.setdlopenflags(previous_flags) + finally: + imp.release_lock() + # + # call loading_cpy_struct() to get the struct layout inferred by + # the C compiler + self._load(module, 'loading') + # + # the C code will need the <ctype> objects. Collect them in + # order in a list. + revmapping = dict([(value, key) + for (key, value) in self._typesdict.items()]) + lst = [revmapping[i] for i in range(len(revmapping))] + lst = list(map(self.ffi._get_cached_btype, lst)) + # + # build the FFILibrary class and instance and call _cffi_setup(). + # this will set up some fields like '_cffi_types', and only then + # it will invoke the chained list of functions that will really + # build (notably) the constant objects, as <cdata> objects if they + # are pointers, and store them as attributes on the 'library' object. + class FFILibrary(object): + _cffi_python_module = module + _cffi_ffi = self.ffi + _cffi_dir = [] + def __dir__(self): + return FFILibrary._cffi_dir + list(self.__dict__) + library = FFILibrary() + if module._cffi_setup(lst, VerificationError, library): + import warnings + warnings.warn("reimporting %r might overwrite older definitions" + % (self.verifier.get_module_name())) + # + # finally, call the loaded_cpy_xxx() functions. This will perform + # the final adjustments, like copying the Python->C wrapper + # functions from the module to the 'library' object, and setting + # up the FFILibrary class with properties for the global C variables. 
+ self._load(module, 'loaded', library=library) + module._cffi_original_ffi = self.ffi + module._cffi_types_of_builtin_funcs = self._types_of_builtin_functions + return library + + def _get_declarations(self): + lst = [(key, tp) for (key, (tp, qual)) in + self.ffi._parser._declarations.items()] + lst.sort() + return lst + + def _generate(self, step_name): + for name, tp in self._get_declarations(): + kind, realname = name.split(' ', 1) + try: + method = getattr(self, '_generate_cpy_%s_%s' % (kind, + step_name)) + except AttributeError: + raise VerificationError( + "not implemented in verify(): %r" % name) + try: + method(tp, realname) + except Exception as e: + model.attach_exception_info(e, name) + raise + + def _load(self, module, step_name, **kwds): + for name, tp in self._get_declarations(): + kind, realname = name.split(' ', 1) + method = getattr(self, '_%s_cpy_%s' % (step_name, kind)) + try: + method(tp, realname, module, **kwds) + except Exception as e: + model.attach_exception_info(e, name) + raise + + def _generate_nothing(self, tp, name): + pass + + def _loaded_noop(self, tp, name, module, **kwds): + pass + + # ---------- + + def _convert_funcarg_to_c(self, tp, fromvar, tovar, errcode): + extraarg = '' + if isinstance(tp, model.PrimitiveType): + if tp.is_integer_type() and tp.name != '_Bool': + converter = '_cffi_to_c_int' + extraarg = ', %s' % tp.name + else: + converter = '(%s)_cffi_to_c_%s' % (tp.get_c_name(''), + tp.name.replace(' ', '_')) + errvalue = '-1' + # + elif isinstance(tp, model.PointerType): + self._convert_funcarg_to_c_ptr_or_array(tp, fromvar, + tovar, errcode) + return + # + elif isinstance(tp, (model.StructOrUnion, model.EnumType)): + # a struct (not a struct pointer) as a function argument + self._prnt(' if (_cffi_to_c((char *)&%s, _cffi_type(%d), %s) < 0)' + % (tovar, self._gettypenum(tp), fromvar)) + self._prnt(' %s;' % errcode) + return + # + elif isinstance(tp, model.FunctionPtrType): + converter = '(%s)_cffi_to_c_pointer' % tp.get_c_name('') + extraarg = ', _cffi_type(%d)' % self._gettypenum(tp) + errvalue = 'NULL' + # + else: + raise NotImplementedError(tp) + # + self._prnt(' %s = %s(%s%s);' % (tovar, converter, fromvar, extraarg)) + self._prnt(' if (%s == (%s)%s && PyErr_Occurred())' % ( + tovar, tp.get_c_name(''), errvalue)) + self._prnt(' %s;' % errcode) + + def _extra_local_variables(self, tp, localvars, freelines): + if isinstance(tp, model.PointerType): + localvars.add('Py_ssize_t datasize') + localvars.add('struct _cffi_freeme_s *large_args_free = NULL') + freelines.add('if (large_args_free != NULL)' + ' _cffi_free_array_arguments(large_args_free);') + + def _convert_funcarg_to_c_ptr_or_array(self, tp, fromvar, tovar, errcode): + self._prnt(' datasize = _cffi_prepare_pointer_call_argument(') + self._prnt(' _cffi_type(%d), %s, (char **)&%s);' % ( + self._gettypenum(tp), fromvar, tovar)) + self._prnt(' if (datasize != 0) {') + self._prnt(' %s = ((size_t)datasize) <= 640 ? 
' + 'alloca((size_t)datasize) : NULL;' % (tovar,)) + self._prnt(' if (_cffi_convert_array_argument(_cffi_type(%d), %s, ' + '(char **)&%s,' % (self._gettypenum(tp), fromvar, tovar)) + self._prnt(' datasize, &large_args_free) < 0)') + self._prnt(' %s;' % errcode) + self._prnt(' }') + + def _convert_expr_from_c(self, tp, var, context): + if isinstance(tp, model.PrimitiveType): + if tp.is_integer_type() and tp.name != '_Bool': + return '_cffi_from_c_int(%s, %s)' % (var, tp.name) + elif tp.name != 'long double': + return '_cffi_from_c_%s(%s)' % (tp.name.replace(' ', '_'), var) + else: + return '_cffi_from_c_deref((char *)&%s, _cffi_type(%d))' % ( + var, self._gettypenum(tp)) + elif isinstance(tp, (model.PointerType, model.FunctionPtrType)): + return '_cffi_from_c_pointer((char *)%s, _cffi_type(%d))' % ( + var, self._gettypenum(tp)) + elif isinstance(tp, model.ArrayType): + return '_cffi_from_c_pointer((char *)%s, _cffi_type(%d))' % ( + var, self._gettypenum(model.PointerType(tp.item))) + elif isinstance(tp, model.StructOrUnion): + if tp.fldnames is None: + raise TypeError("'%s' is used as %s, but is opaque" % ( + tp._get_c_name(), context)) + return '_cffi_from_c_struct((char *)&%s, _cffi_type(%d))' % ( + var, self._gettypenum(tp)) + elif isinstance(tp, model.EnumType): + return '_cffi_from_c_deref((char *)&%s, _cffi_type(%d))' % ( + var, self._gettypenum(tp)) + else: + raise NotImplementedError(tp) + + # ---------- + # typedefs: generates no code so far + + _generate_cpy_typedef_collecttype = _generate_nothing + _generate_cpy_typedef_decl = _generate_nothing + _generate_cpy_typedef_method = _generate_nothing + _loading_cpy_typedef = _loaded_noop + _loaded_cpy_typedef = _loaded_noop + + # ---------- + # function declarations + + def _generate_cpy_function_collecttype(self, tp, name): + assert isinstance(tp, model.FunctionPtrType) + if tp.ellipsis: + self._do_collect_type(tp) + else: + # don't call _do_collect_type(tp) in this common case, + # otherwise test_autofilled_struct_as_argument fails + for type in tp.args: + self._do_collect_type(type) + self._do_collect_type(tp.result) + + def _generate_cpy_function_decl(self, tp, name): + assert isinstance(tp, model.FunctionPtrType) + if tp.ellipsis: + # cannot support vararg functions better than this: check for its + # exact type (including the fixed arguments), and build it as a + # constant function pointer (no CPython wrapper) + self._generate_cpy_const(False, name, tp) + return + prnt = self._prnt + numargs = len(tp.args) + if numargs == 0: + argname = 'noarg' + elif numargs == 1: + argname = 'arg0' + else: + argname = 'args' + prnt('static PyObject *') + prnt('_cffi_f_%s(PyObject *self, PyObject *%s)' % (name, argname)) + prnt('{') + # + context = 'argument of %s' % name + for i, type in enumerate(tp.args): + prnt(' %s;' % type.get_c_name(' x%d' % i, context)) + # + localvars = set() + freelines = set() + for type in tp.args: + self._extra_local_variables(type, localvars, freelines) + for decl in sorted(localvars): + prnt(' %s;' % (decl,)) + # + if not isinstance(tp.result, model.VoidType): + result_code = 'result = ' + context = 'result of %s' % name + prnt(' %s;' % tp.result.get_c_name(' result', context)) + prnt(' PyObject *pyresult;') + else: + result_code = '' + # + if len(tp.args) > 1: + rng = range(len(tp.args)) + for i in rng: + prnt(' PyObject *arg%d;' % i) + prnt() + prnt(' if (!PyArg_ParseTuple(args, "%s:%s", %s))' % ( + 'O' * numargs, name, ', '.join(['&arg%d' % i for i in rng]))) + prnt(' return NULL;') + prnt() + # + for i, type 
in enumerate(tp.args): + self._convert_funcarg_to_c(type, 'arg%d' % i, 'x%d' % i, + 'return NULL') + prnt() + # + prnt(' Py_BEGIN_ALLOW_THREADS') + prnt(' _cffi_restore_errno();') + prnt(' { %s%s(%s); }' % ( + result_code, name, + ', '.join(['x%d' % i for i in range(len(tp.args))]))) + prnt(' _cffi_save_errno();') + prnt(' Py_END_ALLOW_THREADS') + prnt() + # + prnt(' (void)self; /* unused */') + if numargs == 0: + prnt(' (void)noarg; /* unused */') + if result_code: + prnt(' pyresult = %s;' % + self._convert_expr_from_c(tp.result, 'result', 'result type')) + for freeline in freelines: + prnt(' ' + freeline) + prnt(' return pyresult;') + else: + for freeline in freelines: + prnt(' ' + freeline) + prnt(' Py_INCREF(Py_None);') + prnt(' return Py_None;') + prnt('}') + prnt() + + def _generate_cpy_function_method(self, tp, name): + if tp.ellipsis: + return + numargs = len(tp.args) + if numargs == 0: + meth = 'METH_NOARGS' + elif numargs == 1: + meth = 'METH_O' + else: + meth = 'METH_VARARGS' + self._prnt(' {"%s", _cffi_f_%s, %s, NULL},' % (name, name, meth)) + + _loading_cpy_function = _loaded_noop + + def _loaded_cpy_function(self, tp, name, module, library): + if tp.ellipsis: + return + func = getattr(module, name) + setattr(library, name, func) + self._types_of_builtin_functions[func] = tp + + # ---------- + # named structs + + _generate_cpy_struct_collecttype = _generate_nothing + def _generate_cpy_struct_decl(self, tp, name): + assert name == tp.name + self._generate_struct_or_union_decl(tp, 'struct', name) + def _generate_cpy_struct_method(self, tp, name): + self._generate_struct_or_union_method(tp, 'struct', name) + def _loading_cpy_struct(self, tp, name, module): + self._loading_struct_or_union(tp, 'struct', name, module) + def _loaded_cpy_struct(self, tp, name, module, **kwds): + self._loaded_struct_or_union(tp) + + _generate_cpy_union_collecttype = _generate_nothing + def _generate_cpy_union_decl(self, tp, name): + assert name == tp.name + self._generate_struct_or_union_decl(tp, 'union', name) + def _generate_cpy_union_method(self, tp, name): + self._generate_struct_or_union_method(tp, 'union', name) + def _loading_cpy_union(self, tp, name, module): + self._loading_struct_or_union(tp, 'union', name, module) + def _loaded_cpy_union(self, tp, name, module, **kwds): + self._loaded_struct_or_union(tp) + + def _generate_struct_or_union_decl(self, tp, prefix, name): + if tp.fldnames is None: + return # nothing to do with opaque structs + checkfuncname = '_cffi_check_%s_%s' % (prefix, name) + layoutfuncname = '_cffi_layout_%s_%s' % (prefix, name) + cname = ('%s %s' % (prefix, name)).strip() + # + prnt = self._prnt + prnt('static void %s(%s *p)' % (checkfuncname, cname)) + prnt('{') + prnt(' /* only to generate compile-time warnings or errors */') + prnt(' (void)p;') + for fname, ftype, fbitsize, fqual in tp.enumfields(): + if (isinstance(ftype, model.PrimitiveType) + and ftype.is_integer_type()) or fbitsize >= 0: + # accept all integers, but complain on float or double + prnt(' (void)((p->%s) << 1);' % fname) + else: + # only accept exactly the type declared. 
+ try: + prnt(' { %s = &p->%s; (void)tmp; }' % ( + ftype.get_c_name('*tmp', 'field %r'%fname, quals=fqual), + fname)) + except VerificationError as e: + prnt(' /* %s */' % str(e)) # cannot verify it, ignore + prnt('}') + prnt('static PyObject *') + prnt('%s(PyObject *self, PyObject *noarg)' % (layoutfuncname,)) + prnt('{') + prnt(' struct _cffi_aligncheck { char x; %s y; };' % cname) + prnt(' static Py_ssize_t nums[] = {') + prnt(' sizeof(%s),' % cname) + prnt(' offsetof(struct _cffi_aligncheck, y),') + for fname, ftype, fbitsize, fqual in tp.enumfields(): + if fbitsize >= 0: + continue # xxx ignore fbitsize for now + prnt(' offsetof(%s, %s),' % (cname, fname)) + if isinstance(ftype, model.ArrayType) and ftype.length is None: + prnt(' 0, /* %s */' % ftype._get_c_name()) + else: + prnt(' sizeof(((%s *)0)->%s),' % (cname, fname)) + prnt(' -1') + prnt(' };') + prnt(' (void)self; /* unused */') + prnt(' (void)noarg; /* unused */') + prnt(' return _cffi_get_struct_layout(nums);') + prnt(' /* the next line is not executed, but compiled */') + prnt(' %s(0);' % (checkfuncname,)) + prnt('}') + prnt() + + def _generate_struct_or_union_method(self, tp, prefix, name): + if tp.fldnames is None: + return # nothing to do with opaque structs + layoutfuncname = '_cffi_layout_%s_%s' % (prefix, name) + self._prnt(' {"%s", %s, METH_NOARGS, NULL},' % (layoutfuncname, + layoutfuncname)) + + def _loading_struct_or_union(self, tp, prefix, name, module): + if tp.fldnames is None: + return # nothing to do with opaque structs + layoutfuncname = '_cffi_layout_%s_%s' % (prefix, name) + # + function = getattr(module, layoutfuncname) + layout = function() + if isinstance(tp, model.StructOrUnion) and tp.partial: + # use the function()'s sizes and offsets to guide the + # layout of the struct + totalsize = layout[0] + totalalignment = layout[1] + fieldofs = layout[2::2] + fieldsize = layout[3::2] + tp.force_flatten() + assert len(fieldofs) == len(fieldsize) == len(tp.fldnames) + tp.fixedlayout = fieldofs, fieldsize, totalsize, totalalignment + else: + cname = ('%s %s' % (prefix, name)).strip() + self._struct_pending_verification[tp] = layout, cname + + def _loaded_struct_or_union(self, tp): + if tp.fldnames is None: + return # nothing to do with opaque structs + self.ffi._get_cached_btype(tp) # force 'fixedlayout' to be considered + + if tp in self._struct_pending_verification: + # check that the layout sizes and offsets match the real ones + def check(realvalue, expectedvalue, msg): + if realvalue != expectedvalue: + raise VerificationError( + "%s (we have %d, but C compiler says %d)" + % (msg, expectedvalue, realvalue)) + ffi = self.ffi + BStruct = ffi._get_cached_btype(tp) + layout, cname = self._struct_pending_verification.pop(tp) + check(layout[0], ffi.sizeof(BStruct), "wrong total size") + check(layout[1], ffi.alignof(BStruct), "wrong total alignment") + i = 2 + for fname, ftype, fbitsize, fqual in tp.enumfields(): + if fbitsize >= 0: + continue # xxx ignore fbitsize for now + check(layout[i], ffi.offsetof(BStruct, fname), + "wrong offset for field %r" % (fname,)) + if layout[i+1] != 0: + BField = ffi._get_cached_btype(ftype) + check(layout[i+1], ffi.sizeof(BField), + "wrong size for field %r" % (fname,)) + i += 2 + assert i == len(layout) + + # ---------- + # 'anonymous' declarations. These are produced for anonymous structs + # or unions; the 'name' is obtained by a typedef. 
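+    # For example (an assumed cdef() fragment, not taken from this file),
+    # a struct declared with no tag of its own:
+    #
+    #     typedef struct { int x, y; } point_t;
+    #
+    # is only reachable through the typedef, so the methods below are
+    # invoked with name='point_t'.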
+ + _generate_cpy_anonymous_collecttype = _generate_nothing + + def _generate_cpy_anonymous_decl(self, tp, name): + if isinstance(tp, model.EnumType): + self._generate_cpy_enum_decl(tp, name, '') + else: + self._generate_struct_or_union_decl(tp, '', name) + + def _generate_cpy_anonymous_method(self, tp, name): + if not isinstance(tp, model.EnumType): + self._generate_struct_or_union_method(tp, '', name) + + def _loading_cpy_anonymous(self, tp, name, module): + if isinstance(tp, model.EnumType): + self._loading_cpy_enum(tp, name, module) + else: + self._loading_struct_or_union(tp, '', name, module) + + def _loaded_cpy_anonymous(self, tp, name, module, **kwds): + if isinstance(tp, model.EnumType): + self._loaded_cpy_enum(tp, name, module, **kwds) + else: + self._loaded_struct_or_union(tp) + + # ---------- + # constants, likely declared with '#define' + + def _generate_cpy_const(self, is_int, name, tp=None, category='const', + vartp=None, delayed=True, size_too=False, + check_value=None): + prnt = self._prnt + funcname = '_cffi_%s_%s' % (category, name) + prnt('static int %s(PyObject *lib)' % funcname) + prnt('{') + prnt(' PyObject *o;') + prnt(' int res;') + if not is_int: + prnt(' %s;' % (vartp or tp).get_c_name(' i', name)) + else: + assert category == 'const' + # + if check_value is not None: + self._check_int_constant_value(name, check_value) + # + if not is_int: + if category == 'var': + realexpr = '&' + name + else: + realexpr = name + prnt(' i = (%s);' % (realexpr,)) + prnt(' o = %s;' % (self._convert_expr_from_c(tp, 'i', + 'variable type'),)) + assert delayed + else: + prnt(' o = _cffi_from_c_int_const(%s);' % name) + prnt(' if (o == NULL)') + prnt(' return -1;') + if size_too: + prnt(' {') + prnt(' PyObject *o1 = o;') + prnt(' o = Py_BuildValue("On", o1, (Py_ssize_t)sizeof(%s));' + % (name,)) + prnt(' Py_DECREF(o1);') + prnt(' if (o == NULL)') + prnt(' return -1;') + prnt(' }') + prnt(' res = PyObject_SetAttrString(lib, "%s", o);' % name) + prnt(' Py_DECREF(o);') + prnt(' if (res < 0)') + prnt(' return -1;') + prnt(' return %s;' % self._chained_list_constants[delayed]) + self._chained_list_constants[delayed] = funcname + '(lib)' + prnt('}') + prnt() + + def _generate_cpy_constant_collecttype(self, tp, name): + is_int = isinstance(tp, model.PrimitiveType) and tp.is_integer_type() + if not is_int: + self._do_collect_type(tp) + + def _generate_cpy_constant_decl(self, tp, name): + is_int = isinstance(tp, model.PrimitiveType) and tp.is_integer_type() + self._generate_cpy_const(is_int, name, tp) + + _generate_cpy_constant_method = _generate_nothing + _loading_cpy_constant = _loaded_noop + _loaded_cpy_constant = _loaded_noop + + # ---------- + # enums + + def _check_int_constant_value(self, name, value, err_prefix=''): + prnt = self._prnt + if value <= 0: + prnt(' if ((%s) > 0 || (long)(%s) != %dL) {' % ( + name, name, value)) + else: + prnt(' if ((%s) <= 0 || (unsigned long)(%s) != %dUL) {' % ( + name, name, value)) + prnt(' char buf[64];') + prnt(' if ((%s) <= 0)' % name) + prnt(' snprintf(buf, 63, "%%ld", (long)(%s));' % name) + prnt(' else') + prnt(' snprintf(buf, 63, "%%lu", (unsigned long)(%s));' % + name) + prnt(' PyErr_Format(_cffi_VerificationError,') + prnt(' "%s%s has the real value %s, not %s",') + prnt(' "%s", "%s", buf, "%d");' % ( + err_prefix, name, value)) + prnt(' return -1;') + prnt(' }') + + def _enum_funcname(self, prefix, name): + # "$enum_$1" => "___D_enum____D_1" + name = name.replace('$', '___D_') + return '_cffi_e_%s_%s' % (prefix, name) + + def 
_generate_cpy_enum_decl(self, tp, name, prefix='enum'): + if tp.partial: + for enumerator in tp.enumerators: + self._generate_cpy_const(True, enumerator, delayed=False) + return + # + funcname = self._enum_funcname(prefix, name) + prnt = self._prnt + prnt('static int %s(PyObject *lib)' % funcname) + prnt('{') + for enumerator, enumvalue in zip(tp.enumerators, tp.enumvalues): + self._check_int_constant_value(enumerator, enumvalue, + "enum %s: " % name) + prnt(' return %s;' % self._chained_list_constants[True]) + self._chained_list_constants[True] = funcname + '(lib)' + prnt('}') + prnt() + + _generate_cpy_enum_collecttype = _generate_nothing + _generate_cpy_enum_method = _generate_nothing + + def _loading_cpy_enum(self, tp, name, module): + if tp.partial: + enumvalues = [getattr(module, enumerator) + for enumerator in tp.enumerators] + tp.enumvalues = tuple(enumvalues) + tp.partial_resolved = True + + def _loaded_cpy_enum(self, tp, name, module, library): + for enumerator, enumvalue in zip(tp.enumerators, tp.enumvalues): + setattr(library, enumerator, enumvalue) + + # ---------- + # macros: for now only for integers + + def _generate_cpy_macro_decl(self, tp, name): + if tp == '...': + check_value = None + else: + check_value = tp # an integer + self._generate_cpy_const(True, name, check_value=check_value) + + _generate_cpy_macro_collecttype = _generate_nothing + _generate_cpy_macro_method = _generate_nothing + _loading_cpy_macro = _loaded_noop + _loaded_cpy_macro = _loaded_noop + + # ---------- + # global variables + + def _generate_cpy_variable_collecttype(self, tp, name): + if isinstance(tp, model.ArrayType): + tp_ptr = model.PointerType(tp.item) + else: + tp_ptr = model.PointerType(tp) + self._do_collect_type(tp_ptr) + + def _generate_cpy_variable_decl(self, tp, name): + if isinstance(tp, model.ArrayType): + tp_ptr = model.PointerType(tp.item) + self._generate_cpy_const(False, name, tp, vartp=tp_ptr, + size_too = tp.length_is_unknown()) + else: + tp_ptr = model.PointerType(tp) + self._generate_cpy_const(False, name, tp_ptr, category='var') + + _generate_cpy_variable_method = _generate_nothing + _loading_cpy_variable = _loaded_noop + + def _loaded_cpy_variable(self, tp, name, module, library): + value = getattr(library, name) + if isinstance(tp, model.ArrayType): # int a[5] is "constant" in the + # sense that "a=..." is forbidden + if tp.length_is_unknown(): + assert isinstance(value, tuple) + (value, size) = value + BItemType = self.ffi._get_cached_btype(tp.item) + length, rest = divmod(size, self.ffi.sizeof(BItemType)) + if rest != 0: + raise VerificationError( + "bad size: %r does not seem to be an array of %s" % + (name, tp.item)) + tp = tp.resolve_length(length) + # 'value' is a <cdata 'type (*)[]'> which we have to replace with + # a <cdata 'type[N]'> if the N is actually known + if tp.length is not None: + BArray = self.ffi._get_cached_btype(tp) + value = self.ffi.cast(BArray, value) + setattr(library, name, value) + return + # remove ptr=<cdata 'int *'> from the library instance, and replace + # it by a property on the class, which reads/writes into ptr[0]. 
+ ptr = value + delattr(library, name) + def getter(library): + return ptr[0] + def setter(library, value): + ptr[0] = value + setattr(type(library), name, property(getter, setter)) + type(library)._cffi_dir.append(name) + + # ---------- + + def _generate_setup_custom(self): + prnt = self._prnt + prnt('static int _cffi_setup_custom(PyObject *lib)') + prnt('{') + prnt(' return %s;' % self._chained_list_constants[True]) + prnt('}') + +cffimod_header = r''' +#include <Python.h> +#include <stddef.h> + +/* this block of #ifs should be kept exactly identical between + c/_cffi_backend.c, cffi/vengine_cpy.py, cffi/vengine_gen.py + and cffi/_cffi_include.h */ +#if defined(_MSC_VER) +# include <malloc.h> /* for alloca() */ +# if _MSC_VER < 1600 /* MSVC < 2010 */ + typedef __int8 int8_t; + typedef __int16 int16_t; + typedef __int32 int32_t; + typedef __int64 int64_t; + typedef unsigned __int8 uint8_t; + typedef unsigned __int16 uint16_t; + typedef unsigned __int32 uint32_t; + typedef unsigned __int64 uint64_t; + typedef __int8 int_least8_t; + typedef __int16 int_least16_t; + typedef __int32 int_least32_t; + typedef __int64 int_least64_t; + typedef unsigned __int8 uint_least8_t; + typedef unsigned __int16 uint_least16_t; + typedef unsigned __int32 uint_least32_t; + typedef unsigned __int64 uint_least64_t; + typedef __int8 int_fast8_t; + typedef __int16 int_fast16_t; + typedef __int32 int_fast32_t; + typedef __int64 int_fast64_t; + typedef unsigned __int8 uint_fast8_t; + typedef unsigned __int16 uint_fast16_t; + typedef unsigned __int32 uint_fast32_t; + typedef unsigned __int64 uint_fast64_t; + typedef __int64 intmax_t; + typedef unsigned __int64 uintmax_t; +# else +# include <stdint.h> +# endif +# if _MSC_VER < 1800 /* MSVC < 2013 */ +# ifndef __cplusplus + typedef unsigned char _Bool; +# endif +# endif +#else +# include <stdint.h> +# if (defined (__SVR4) && defined (__sun)) || defined(_AIX) || defined(__hpux) +# include <alloca.h> +# endif +#endif + +#if PY_MAJOR_VERSION < 3 +# undef PyCapsule_CheckExact +# undef PyCapsule_GetPointer +# define PyCapsule_CheckExact(capsule) (PyCObject_Check(capsule)) +# define PyCapsule_GetPointer(capsule, name) \ + (PyCObject_AsVoidPtr(capsule)) +#endif + +#if PY_MAJOR_VERSION >= 3 +# define PyInt_FromLong PyLong_FromLong +#endif + +#define _cffi_from_c_double PyFloat_FromDouble +#define _cffi_from_c_float PyFloat_FromDouble +#define _cffi_from_c_long PyInt_FromLong +#define _cffi_from_c_ulong PyLong_FromUnsignedLong +#define _cffi_from_c_longlong PyLong_FromLongLong +#define _cffi_from_c_ulonglong PyLong_FromUnsignedLongLong +#define _cffi_from_c__Bool PyBool_FromLong + +#define _cffi_to_c_double PyFloat_AsDouble +#define _cffi_to_c_float PyFloat_AsDouble + +#define _cffi_from_c_int_const(x) \ + (((x) > 0) ? \ + ((unsigned long long)(x) <= (unsigned long long)LONG_MAX) ? \ + PyInt_FromLong((long)(x)) : \ + PyLong_FromUnsignedLongLong((unsigned long long)(x)) : \ + ((long long)(x) >= (long long)LONG_MIN) ? \ + PyInt_FromLong((long)(x)) : \ + PyLong_FromLongLong((long long)(x))) + +#define _cffi_from_c_int(x, type) \ + (((type)-1) > 0 ? /* unsigned */ \ + (sizeof(type) < sizeof(long) ? \ + PyInt_FromLong((long)x) : \ + sizeof(type) == sizeof(long) ? \ + PyLong_FromUnsignedLong((unsigned long)x) : \ + PyLong_FromUnsignedLongLong((unsigned long long)x)) : \ + (sizeof(type) <= sizeof(long) ? \ + PyInt_FromLong((long)x) : \ + PyLong_FromLongLong((long long)x))) + +#define _cffi_to_c_int(o, type) \ + ((type)( \ + sizeof(type) == 1 ? (((type)-1) > 0 ? (type)_cffi_to_c_u8(o) \ + : (type)_cffi_to_c_i8(o)) : \ + sizeof(type) == 2 ? 
(((type)-1) > 0 ? (type)_cffi_to_c_u16(o) \ + : (type)_cffi_to_c_i16(o)) : \ + sizeof(type) == 4 ? (((type)-1) > 0 ? (type)_cffi_to_c_u32(o) \ + : (type)_cffi_to_c_i32(o)) : \ + sizeof(type) == 8 ? (((type)-1) > 0 ? (type)_cffi_to_c_u64(o) \ + : (type)_cffi_to_c_i64(o)) : \ + (Py_FatalError("unsupported size for type " #type), (type)0))) + +#define _cffi_to_c_i8 \ + ((int(*)(PyObject *))_cffi_exports[1]) +#define _cffi_to_c_u8 \ + ((int(*)(PyObject *))_cffi_exports[2]) +#define _cffi_to_c_i16 \ + ((int(*)(PyObject *))_cffi_exports[3]) +#define _cffi_to_c_u16 \ + ((int(*)(PyObject *))_cffi_exports[4]) +#define _cffi_to_c_i32 \ + ((int(*)(PyObject *))_cffi_exports[5]) +#define _cffi_to_c_u32 \ + ((unsigned int(*)(PyObject *))_cffi_exports[6]) +#define _cffi_to_c_i64 \ + ((long long(*)(PyObject *))_cffi_exports[7]) +#define _cffi_to_c_u64 \ + ((unsigned long long(*)(PyObject *))_cffi_exports[8]) +#define _cffi_to_c_char \ + ((int(*)(PyObject *))_cffi_exports[9]) +#define _cffi_from_c_pointer \ + ((PyObject *(*)(char *, CTypeDescrObject *))_cffi_exports[10]) +#define _cffi_to_c_pointer \ + ((char *(*)(PyObject *, CTypeDescrObject *))_cffi_exports[11]) +#define _cffi_get_struct_layout \ + ((PyObject *(*)(Py_ssize_t[]))_cffi_exports[12]) +#define _cffi_restore_errno \ + ((void(*)(void))_cffi_exports[13]) +#define _cffi_save_errno \ + ((void(*)(void))_cffi_exports[14]) +#define _cffi_from_c_char \ + ((PyObject *(*)(char))_cffi_exports[15]) +#define _cffi_from_c_deref \ + ((PyObject *(*)(char *, CTypeDescrObject *))_cffi_exports[16]) +#define _cffi_to_c \ + ((int(*)(char *, CTypeDescrObject *, PyObject *))_cffi_exports[17]) +#define _cffi_from_c_struct \ + ((PyObject *(*)(char *, CTypeDescrObject *))_cffi_exports[18]) +#define _cffi_to_c_wchar_t \ + ((wchar_t(*)(PyObject *))_cffi_exports[19]) +#define _cffi_from_c_wchar_t \ + ((PyObject *(*)(wchar_t))_cffi_exports[20]) +#define _cffi_to_c_long_double \ + ((long double(*)(PyObject *))_cffi_exports[21]) +#define _cffi_to_c__Bool \ + ((_Bool(*)(PyObject *))_cffi_exports[22]) +#define _cffi_prepare_pointer_call_argument \ + ((Py_ssize_t(*)(CTypeDescrObject *, PyObject *, char **))_cffi_exports[23]) +#define _cffi_convert_array_from_object \ + ((int(*)(char *, CTypeDescrObject *, PyObject *))_cffi_exports[24]) +#define _CFFI_NUM_EXPORTS 25 + +typedef struct _ctypedescr CTypeDescrObject; + +static void *_cffi_exports[_CFFI_NUM_EXPORTS]; +static PyObject *_cffi_types, *_cffi_VerificationError; + +static int _cffi_setup_custom(PyObject *lib); /* forward */ + +static PyObject *_cffi_setup(PyObject *self, PyObject *args) +{ + PyObject *library; + int was_alive = (_cffi_types != NULL); + (void)self; /* unused */ + if (!PyArg_ParseTuple(args, "OOO", &_cffi_types, &_cffi_VerificationError, + &library)) + return NULL; + Py_INCREF(_cffi_types); + Py_INCREF(_cffi_VerificationError); + if (_cffi_setup_custom(library) < 0) + return NULL; + return PyBool_FromLong(was_alive); +} + +union _cffi_union_alignment_u { + unsigned char m_char; + unsigned short m_short; + unsigned int m_int; + unsigned long m_long; + unsigned long long m_longlong; + float m_float; + double m_double; + long double m_longdouble; +}; + +struct _cffi_freeme_s { + struct _cffi_freeme_s *next; + union _cffi_union_alignment_u alignment; +}; + +#ifdef __GNUC__ + __attribute__((unused)) +#endif +static int _cffi_convert_array_argument(CTypeDescrObject *ctptr, PyObject *arg, + char **output_data, Py_ssize_t datasize, + struct _cffi_freeme_s **freeme) +{ + char *p; + if (datasize < 0) + return -1; + + 
p = *output_data; + if (p == NULL) { + struct _cffi_freeme_s *fp = (struct _cffi_freeme_s *)PyObject_Malloc( + offsetof(struct _cffi_freeme_s, alignment) + (size_t)datasize); + if (fp == NULL) + return -1; + fp->next = *freeme; + *freeme = fp; + p = *output_data = (char *)&fp->alignment; + } + memset((void *)p, 0, (size_t)datasize); + return _cffi_convert_array_from_object(p, ctptr, arg); +} + +#ifdef __GNUC__ + __attribute__((unused)) +#endif +static void _cffi_free_array_arguments(struct _cffi_freeme_s *freeme) +{ + do { + void *p = (void *)freeme; + freeme = freeme->next; + PyObject_Free(p); + } while (freeme != NULL); +} + +static int _cffi_init(void) +{ + PyObject *module, *c_api_object = NULL; + + module = PyImport_ImportModule("_cffi_backend"); + if (module == NULL) + goto failure; + + c_api_object = PyObject_GetAttrString(module, "_C_API"); + if (c_api_object == NULL) + goto failure; + if (!PyCapsule_CheckExact(c_api_object)) { + PyErr_SetNone(PyExc_ImportError); + goto failure; + } + memcpy(_cffi_exports, PyCapsule_GetPointer(c_api_object, "cffi"), + _CFFI_NUM_EXPORTS * sizeof(void *)); + + Py_DECREF(module); + Py_DECREF(c_api_object); + return 0; + + failure: + Py_XDECREF(module); + Py_XDECREF(c_api_object); + return -1; +} + +#define _cffi_type(num) ((CTypeDescrObject *)PyList_GET_ITEM(_cffi_types, num)) + +/**********/ +''' diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cffi/vengine_gen.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cffi/vengine_gen.py new file mode 100644 index 0000000000000000000000000000000000000000..26421526f62a07e04419cd57f1f19a64ecd36452 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cffi/vengine_gen.py @@ -0,0 +1,675 @@ +# +# DEPRECATED: implementation for ffi.verify() +# +import sys, os +import types + +from . import model +from .error import VerificationError + + +class VGenericEngine(object): + _class_key = 'g' + _gen_python_module = False + + def __init__(self, verifier): + self.verifier = verifier + self.ffi = verifier.ffi + self.export_symbols = [] + self._struct_pending_verification = {} + + def patch_extension_kwds(self, kwds): + # add 'export_symbols' to the dictionary. Note that we add the + # list before filling it. When we fill it, it will thus also show + # up in kwds['export_symbols']. + kwds.setdefault('export_symbols', self.export_symbols) + + def find_module(self, module_name, path, so_suffixes): + for so_suffix in so_suffixes: + basename = module_name + so_suffix + if path is None: + path = sys.path + for dirname in path: + filename = os.path.join(dirname, basename) + if os.path.isfile(filename): + return filename + + def collect_types(self): + pass # not needed in the generic engine + + def _prnt(self, what=''): + self._f.write(what + '\n') + + def write_source_to_f(self): + prnt = self._prnt + # first paste some standard set of lines that are mostly '#include' + prnt(cffimod_header) + # then paste the C source given by the user, verbatim. + prnt(self.verifier.preamble) + # + # call generate_gen_xxx_decl(), for every xxx found from + # ffi._parser._declarations. This generates all the functions. 
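+        # (Illustrative note: a declaration key such as "function foo" is
+        # split by _generate() into kind="function" and realname="foo",
+        # and then dispatched to _generate_gen_function_decl(tp, "foo");
+        # see _generate() below.)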
+ self._generate('decl') + # + # on Windows, distutils insists on putting init_cffi_xyz in + # 'export_symbols', so instead of fighting it, just give up and + # give it one + if sys.platform == 'win32': + if sys.version_info >= (3,): + prefix = 'PyInit_' + else: + prefix = 'init' + modname = self.verifier.get_module_name() + prnt("void %s%s(void) { }\n" % (prefix, modname)) + + def load_library(self, flags=0): + # import it with the CFFI backend + backend = self.ffi._backend + # needs to make a path that contains '/', on Posix + filename = os.path.join(os.curdir, self.verifier.modulefilename) + module = backend.load_library(filename, flags) + # + # call loading_gen_struct() to get the struct layout inferred by + # the C compiler + self._load(module, 'loading') + + # build the FFILibrary class and instance, this is a module subclass + # because modules are expected to have usually-constant-attributes and + # in PyPy this means the JIT is able to treat attributes as constant, + # which we want. + class FFILibrary(types.ModuleType): + _cffi_generic_module = module + _cffi_ffi = self.ffi + _cffi_dir = [] + def __dir__(self): + return FFILibrary._cffi_dir + library = FFILibrary("") + # + # finally, call the loaded_gen_xxx() functions. This will set + # up the 'library' object. + self._load(module, 'loaded', library=library) + return library + + def _get_declarations(self): + lst = [(key, tp) for (key, (tp, qual)) in + self.ffi._parser._declarations.items()] + lst.sort() + return lst + + def _generate(self, step_name): + for name, tp in self._get_declarations(): + kind, realname = name.split(' ', 1) + try: + method = getattr(self, '_generate_gen_%s_%s' % (kind, + step_name)) + except AttributeError: + raise VerificationError( + "not implemented in verify(): %r" % name) + try: + method(tp, realname) + except Exception as e: + model.attach_exception_info(e, name) + raise + + def _load(self, module, step_name, **kwds): + for name, tp in self._get_declarations(): + kind, realname = name.split(' ', 1) + method = getattr(self, '_%s_gen_%s' % (step_name, kind)) + try: + method(tp, realname, module, **kwds) + except Exception as e: + model.attach_exception_info(e, name) + raise + + def _generate_nothing(self, tp, name): + pass + + def _loaded_noop(self, tp, name, module, **kwds): + pass + + # ---------- + # typedefs: generates no code so far + + _generate_gen_typedef_decl = _generate_nothing + _loading_gen_typedef = _loaded_noop + _loaded_gen_typedef = _loaded_noop + + # ---------- + # function declarations + + def _generate_gen_function_decl(self, tp, name): + assert isinstance(tp, model.FunctionPtrType) + if tp.ellipsis: + # cannot support vararg functions better than this: check for its + # exact type (including the fixed arguments), and build it as a + # constant function pointer (no _cffi_f_%s wrapper) + self._generate_gen_const(False, name, tp) + return + prnt = self._prnt + numargs = len(tp.args) + argnames = [] + for i, type in enumerate(tp.args): + indirection = '' + if isinstance(type, model.StructOrUnion): + indirection = '*' + argnames.append('%sx%d' % (indirection, i)) + context = 'argument of %s' % name + arglist = [type.get_c_name(' %s' % arg, context) + for type, arg in zip(tp.args, argnames)] + tpresult = tp.result + if isinstance(tpresult, model.StructOrUnion): + arglist.insert(0, tpresult.get_c_name(' *r', context)) + tpresult = model.void_type + arglist = ', '.join(arglist) or 'void' + wrappername = '_cffi_f_%s' % name + self.export_symbols.append(wrappername) + if tp.abi: + abi = 
tp.abi + ' ' + else: + abi = '' + funcdecl = ' %s%s(%s)' % (abi, wrappername, arglist) + context = 'result of %s' % name + prnt(tpresult.get_c_name(funcdecl, context)) + prnt('{') + # + if isinstance(tp.result, model.StructOrUnion): + result_code = '*r = ' + elif not isinstance(tp.result, model.VoidType): + result_code = 'return ' + else: + result_code = '' + prnt(' %s%s(%s);' % (result_code, name, ', '.join(argnames))) + prnt('}') + prnt() + + _loading_gen_function = _loaded_noop + + def _loaded_gen_function(self, tp, name, module, library): + assert isinstance(tp, model.FunctionPtrType) + if tp.ellipsis: + newfunction = self._load_constant(False, tp, name, module) + else: + indirections = [] + base_tp = tp + if (any(isinstance(typ, model.StructOrUnion) for typ in tp.args) + or isinstance(tp.result, model.StructOrUnion)): + indirect_args = [] + for i, typ in enumerate(tp.args): + if isinstance(typ, model.StructOrUnion): + typ = model.PointerType(typ) + indirections.append((i, typ)) + indirect_args.append(typ) + indirect_result = tp.result + if isinstance(indirect_result, model.StructOrUnion): + if indirect_result.fldtypes is None: + raise TypeError("'%s' is used as result type, " + "but is opaque" % ( + indirect_result._get_c_name(),)) + indirect_result = model.PointerType(indirect_result) + indirect_args.insert(0, indirect_result) + indirections.insert(0, ("result", indirect_result)) + indirect_result = model.void_type + tp = model.FunctionPtrType(tuple(indirect_args), + indirect_result, tp.ellipsis) + BFunc = self.ffi._get_cached_btype(tp) + wrappername = '_cffi_f_%s' % name + newfunction = module.load_function(BFunc, wrappername) + for i, typ in indirections: + newfunction = self._make_struct_wrapper(newfunction, i, typ, + base_tp) + setattr(library, name, newfunction) + type(library)._cffi_dir.append(name) + + def _make_struct_wrapper(self, oldfunc, i, tp, base_tp): + backend = self.ffi._backend + BType = self.ffi._get_cached_btype(tp) + if i == "result": + ffi = self.ffi + def newfunc(*args): + res = ffi.new(BType) + oldfunc(res, *args) + return res[0] + else: + def newfunc(*args): + args = args[:i] + (backend.newp(BType, args[i]),) + args[i+1:] + return oldfunc(*args) + newfunc._cffi_base_type = base_tp + return newfunc + + # ---------- + # named structs + + def _generate_gen_struct_decl(self, tp, name): + assert name == tp.name + self._generate_struct_or_union_decl(tp, 'struct', name) + + def _loading_gen_struct(self, tp, name, module): + self._loading_struct_or_union(tp, 'struct', name, module) + + def _loaded_gen_struct(self, tp, name, module, **kwds): + self._loaded_struct_or_union(tp) + + def _generate_gen_union_decl(self, tp, name): + assert name == tp.name + self._generate_struct_or_union_decl(tp, 'union', name) + + def _loading_gen_union(self, tp, name, module): + self._loading_struct_or_union(tp, 'union', name, module) + + def _loaded_gen_union(self, tp, name, module, **kwds): + self._loaded_struct_or_union(tp) + + def _generate_struct_or_union_decl(self, tp, prefix, name): + if tp.fldnames is None: + return # nothing to do with opaque structs + checkfuncname = '_cffi_check_%s_%s' % (prefix, name) + layoutfuncname = '_cffi_layout_%s_%s' % (prefix, name) + cname = ('%s %s' % (prefix, name)).strip() + # + prnt = self._prnt + prnt('static void %s(%s *p)' % (checkfuncname, cname)) + prnt('{') + prnt(' /* only to generate compile-time warnings or errors */') + prnt(' (void)p;') + for fname, ftype, fbitsize, fqual in tp.enumfields(): + if (isinstance(ftype, model.PrimitiveType) + 
and ftype.is_integer_type()) or fbitsize >= 0: + # accept all integers, but complain on float or double + prnt(' (void)((p->%s) << 1);' % fname) + else: + # only accept exactly the type declared. + try: + prnt(' { %s = &p->%s; (void)tmp; }' % ( + ftype.get_c_name('*tmp', 'field %r'%fname, quals=fqual), + fname)) + except VerificationError as e: + prnt(' /* %s */' % str(e)) # cannot verify it, ignore + prnt('}') + self.export_symbols.append(layoutfuncname) + prnt('intptr_t %s(intptr_t i)' % (layoutfuncname,)) + prnt('{') + prnt(' struct _cffi_aligncheck { char x; %s y; };' % cname) + prnt(' static intptr_t nums[] = {') + prnt(' sizeof(%s),' % cname) + prnt(' offsetof(struct _cffi_aligncheck, y),') + for fname, ftype, fbitsize, fqual in tp.enumfields(): + if fbitsize >= 0: + continue # xxx ignore fbitsize for now + prnt(' offsetof(%s, %s),' % (cname, fname)) + if isinstance(ftype, model.ArrayType) and ftype.length is None: + prnt(' 0, /* %s */' % ftype._get_c_name()) + else: + prnt(' sizeof(((%s *)0)->%s),' % (cname, fname)) + prnt(' -1') + prnt(' };') + prnt(' return nums[i];') + prnt(' /* the next line is not executed, but compiled */') + prnt(' %s(0);' % (checkfuncname,)) + prnt('}') + prnt() + + def _loading_struct_or_union(self, tp, prefix, name, module): + if tp.fldnames is None: + return # nothing to do with opaque structs + layoutfuncname = '_cffi_layout_%s_%s' % (prefix, name) + # + BFunc = self.ffi._typeof_locked("intptr_t(*)(intptr_t)")[0] + function = module.load_function(BFunc, layoutfuncname) + layout = [] + num = 0 + while True: + x = function(num) + if x < 0: break + layout.append(x) + num += 1 + if isinstance(tp, model.StructOrUnion) and tp.partial: + # use the function()'s sizes and offsets to guide the + # layout of the struct + totalsize = layout[0] + totalalignment = layout[1] + fieldofs = layout[2::2] + fieldsize = layout[3::2] + tp.force_flatten() + assert len(fieldofs) == len(fieldsize) == len(tp.fldnames) + tp.fixedlayout = fieldofs, fieldsize, totalsize, totalalignment + else: + cname = ('%s %s' % (prefix, name)).strip() + self._struct_pending_verification[tp] = layout, cname + + def _loaded_struct_or_union(self, tp): + if tp.fldnames is None: + return # nothing to do with opaque structs + self.ffi._get_cached_btype(tp) # force 'fixedlayout' to be considered + + if tp in self._struct_pending_verification: + # check that the layout sizes and offsets match the real ones + def check(realvalue, expectedvalue, msg): + if realvalue != expectedvalue: + raise VerificationError( + "%s (we have %d, but C compiler says %d)" + % (msg, expectedvalue, realvalue)) + ffi = self.ffi + BStruct = ffi._get_cached_btype(tp) + layout, cname = self._struct_pending_verification.pop(tp) + check(layout[0], ffi.sizeof(BStruct), "wrong total size") + check(layout[1], ffi.alignof(BStruct), "wrong total alignment") + i = 2 + for fname, ftype, fbitsize, fqual in tp.enumfields(): + if fbitsize >= 0: + continue # xxx ignore fbitsize for now + check(layout[i], ffi.offsetof(BStruct, fname), + "wrong offset for field %r" % (fname,)) + if layout[i+1] != 0: + BField = ffi._get_cached_btype(ftype) + check(layout[i+1], ffi.sizeof(BField), + "wrong size for field %r" % (fname,)) + i += 2 + assert i == len(layout) + + # ---------- + # 'anonymous' declarations. These are produced for anonymous structs + # or unions; the 'name' is obtained by a typedef. 
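+    # (For example, assuming the parser's usual naming, a cdef like
+    # "typedef struct { int x; } foo_t;" reaches this point as an
+    # 'anonymous' declaration whose realname is the typedef name foo_t.)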
+ + def _generate_gen_anonymous_decl(self, tp, name): + if isinstance(tp, model.EnumType): + self._generate_gen_enum_decl(tp, name, '') + else: + self._generate_struct_or_union_decl(tp, '', name) + + def _loading_gen_anonymous(self, tp, name, module): + if isinstance(tp, model.EnumType): + self._loading_gen_enum(tp, name, module, '') + else: + self._loading_struct_or_union(tp, '', name, module) + + def _loaded_gen_anonymous(self, tp, name, module, **kwds): + if isinstance(tp, model.EnumType): + self._loaded_gen_enum(tp, name, module, **kwds) + else: + self._loaded_struct_or_union(tp) + + # ---------- + # constants, likely declared with '#define' + + def _generate_gen_const(self, is_int, name, tp=None, category='const', + check_value=None): + prnt = self._prnt + funcname = '_cffi_%s_%s' % (category, name) + self.export_symbols.append(funcname) + if check_value is not None: + assert is_int + assert category == 'const' + prnt('int %s(char *out_error)' % funcname) + prnt('{') + self._check_int_constant_value(name, check_value) + prnt(' return 0;') + prnt('}') + elif is_int: + assert category == 'const' + prnt('int %s(long long *out_value)' % funcname) + prnt('{') + prnt(' *out_value = (long long)(%s);' % (name,)) + prnt(' return (%s) <= 0;' % (name,)) + prnt('}') + else: + assert tp is not None + assert check_value is None + if category == 'var': + ampersand = '&' + else: + ampersand = '' + extra = '' + if category == 'const' and isinstance(tp, model.StructOrUnion): + extra = 'const *' + ampersand = '&' + prnt(tp.get_c_name(' %s%s(void)' % (extra, funcname), name)) + prnt('{') + prnt(' return (%s%s);' % (ampersand, name)) + prnt('}') + prnt() + + def _generate_gen_constant_decl(self, tp, name): + is_int = isinstance(tp, model.PrimitiveType) and tp.is_integer_type() + self._generate_gen_const(is_int, name, tp) + + _loading_gen_constant = _loaded_noop + + def _load_constant(self, is_int, tp, name, module, check_value=None): + funcname = '_cffi_const_%s' % name + if check_value is not None: + assert is_int + self._load_known_int_constant(module, funcname) + value = check_value + elif is_int: + BType = self.ffi._typeof_locked("long long*")[0] + BFunc = self.ffi._typeof_locked("int(*)(long long*)")[0] + function = module.load_function(BFunc, funcname) + p = self.ffi.new(BType) + negative = function(p) + value = int(p[0]) + if value < 0 and not negative: + BLongLong = self.ffi._typeof_locked("long long")[0] + value += (1 << (8*self.ffi.sizeof(BLongLong))) + else: + assert check_value is None + fntypeextra = '(*)(void)' + if isinstance(tp, model.StructOrUnion): + fntypeextra = '*' + fntypeextra + BFunc = self.ffi._typeof_locked(tp.get_c_name(fntypeextra, name))[0] + function = module.load_function(BFunc, funcname) + value = function() + if isinstance(tp, model.StructOrUnion): + value = value[0] + return value + + def _loaded_gen_constant(self, tp, name, module, library): + is_int = isinstance(tp, model.PrimitiveType) and tp.is_integer_type() + value = self._load_constant(is_int, tp, name, module) + setattr(library, name, value) + type(library)._cffi_dir.append(name) + + # ---------- + # enums + + def _check_int_constant_value(self, name, value): + prnt = self._prnt + if value <= 0: + prnt(' if ((%s) > 0 || (long)(%s) != %dL) {' % ( + name, name, value)) + else: + prnt(' if ((%s) <= 0 || (unsigned long)(%s) != %dUL) {' % ( + name, name, value)) + prnt(' char buf[64];') + prnt(' if ((%s) <= 0)' % name) + prnt(' sprintf(buf, "%%ld", (long)(%s));' % name) + prnt(' else') + prnt(' sprintf(buf, "%%lu", 
(unsigned long)(%s));' % + name) + prnt(' sprintf(out_error, "%s has the real value %s, not %s",') + prnt(' "%s", buf, "%d");' % (name[:100], value)) + prnt(' return -1;') + prnt(' }') + + def _load_known_int_constant(self, module, funcname): + BType = self.ffi._typeof_locked("char[]")[0] + BFunc = self.ffi._typeof_locked("int(*)(char*)")[0] + function = module.load_function(BFunc, funcname) + p = self.ffi.new(BType, 256) + if function(p) < 0: + error = self.ffi.string(p) + if sys.version_info >= (3,): + error = str(error, 'utf-8') + raise VerificationError(error) + + def _enum_funcname(self, prefix, name): + # "$enum_$1" => "___D_enum____D_1" + name = name.replace('$', '___D_') + return '_cffi_e_%s_%s' % (prefix, name) + + def _generate_gen_enum_decl(self, tp, name, prefix='enum'): + if tp.partial: + for enumerator in tp.enumerators: + self._generate_gen_const(True, enumerator) + return + # + funcname = self._enum_funcname(prefix, name) + self.export_symbols.append(funcname) + prnt = self._prnt + prnt('int %s(char *out_error)' % funcname) + prnt('{') + for enumerator, enumvalue in zip(tp.enumerators, tp.enumvalues): + self._check_int_constant_value(enumerator, enumvalue) + prnt(' return 0;') + prnt('}') + prnt() + + def _loading_gen_enum(self, tp, name, module, prefix='enum'): + if tp.partial: + enumvalues = [self._load_constant(True, tp, enumerator, module) + for enumerator in tp.enumerators] + tp.enumvalues = tuple(enumvalues) + tp.partial_resolved = True + else: + funcname = self._enum_funcname(prefix, name) + self._load_known_int_constant(module, funcname) + + def _loaded_gen_enum(self, tp, name, module, library): + for enumerator, enumvalue in zip(tp.enumerators, tp.enumvalues): + setattr(library, enumerator, enumvalue) + type(library)._cffi_dir.append(enumerator) + + # ---------- + # macros: for now only for integers + + def _generate_gen_macro_decl(self, tp, name): + if tp == '...': + check_value = None + else: + check_value = tp # an integer + self._generate_gen_const(True, name, check_value=check_value) + + _loading_gen_macro = _loaded_noop + + def _loaded_gen_macro(self, tp, name, module, library): + if tp == '...': + check_value = None + else: + check_value = tp # an integer + value = self._load_constant(True, tp, name, module, + check_value=check_value) + setattr(library, name, value) + type(library)._cffi_dir.append(name) + + # ---------- + # global variables + + def _generate_gen_variable_decl(self, tp, name): + if isinstance(tp, model.ArrayType): + if tp.length_is_unknown(): + prnt = self._prnt + funcname = '_cffi_sizeof_%s' % (name,) + self.export_symbols.append(funcname) + prnt("size_t %s(void)" % funcname) + prnt("{") + prnt(" return sizeof(%s);" % (name,)) + prnt("}") + tp_ptr = model.PointerType(tp.item) + self._generate_gen_const(False, name, tp_ptr) + else: + tp_ptr = model.PointerType(tp) + self._generate_gen_const(False, name, tp_ptr, category='var') + + _loading_gen_variable = _loaded_noop + + def _loaded_gen_variable(self, tp, name, module, library): + if isinstance(tp, model.ArrayType): # int a[5] is "constant" in the + # sense that "a=..." 
is forbidden
+            if tp.length_is_unknown():
+                funcname = '_cffi_sizeof_%s' % (name,)
+                BFunc = self.ffi._typeof_locked('size_t(*)(void)')[0]
+                function = module.load_function(BFunc, funcname)
+                size = function()
+                BItemType = self.ffi._get_cached_btype(tp.item)
+                length, rest = divmod(size, self.ffi.sizeof(BItemType))
+                if rest != 0:
+                    raise VerificationError(
+                        "bad size: %r does not seem to be an array of %s" %
+                        (name, tp.item))
+                tp = tp.resolve_length(length)
+            tp_ptr = model.PointerType(tp.item)
+            value = self._load_constant(False, tp_ptr, name, module)
+            # 'value' is a <cdata 'type *'> which we have to replace with
+            # a <cdata 'type[N]'> if the N is actually known
+            if tp.length is not None:
+                BArray = self.ffi._get_cached_btype(tp)
+                value = self.ffi.cast(BArray, value)
+            setattr(library, name, value)
+            type(library)._cffi_dir.append(name)
+            return
+        # remove ptr=<cdata 'int *'> from the library instance, and replace
+        # it by a property on the class, which reads/writes into ptr[0].
+        funcname = '_cffi_var_%s' % name
+        BFunc = self.ffi._typeof_locked(tp.get_c_name('*(*)(void)', name))[0]
+        function = module.load_function(BFunc, funcname)
+        ptr = function()
+        def getter(library):
+            return ptr[0]
+        def setter(library, value):
+            ptr[0] = value
+        setattr(type(library), name, property(getter, setter))
+        type(library)._cffi_dir.append(name)
+
+cffimod_header = r'''
+#include <stdio.h>
+#include <stddef.h>
+#include <stdarg.h>
+#include <errno.h>
+#include <sys/types.h>   /* XXX for ssize_t on some platforms */
+
+/* this block of #ifs should be kept exactly identical between
+   c/_cffi_backend.c, cffi/vengine_cpy.py, cffi/vengine_gen.py
+   and cffi/_cffi_include.h */
+#if defined(_MSC_VER)
+# include <malloc.h>   /* for alloca() */
+# if _MSC_VER < 1600   /* MSVC < 2010 */
+   typedef __int8 int8_t;
+   typedef __int16 int16_t;
+   typedef __int32 int32_t;
+   typedef __int64 int64_t;
+   typedef unsigned __int8 uint8_t;
+   typedef unsigned __int16 uint16_t;
+   typedef unsigned __int32 uint32_t;
+   typedef unsigned __int64 uint64_t;
+   typedef __int8 int_least8_t;
+   typedef __int16 int_least16_t;
+   typedef __int32 int_least32_t;
+   typedef __int64 int_least64_t;
+   typedef unsigned __int8 uint_least8_t;
+   typedef unsigned __int16 uint_least16_t;
+   typedef unsigned __int32 uint_least32_t;
+   typedef unsigned __int64 uint_least64_t;
+   typedef __int8 int_fast8_t;
+   typedef __int16 int_fast16_t;
+   typedef __int32 int_fast32_t;
+   typedef __int64 int_fast64_t;
+   typedef unsigned __int8 uint_fast8_t;
+   typedef unsigned __int16 uint_fast16_t;
+   typedef unsigned __int32 uint_fast32_t;
+   typedef unsigned __int64 uint_fast64_t;
+   typedef __int64 intmax_t;
+   typedef unsigned __int64 uintmax_t;
+# else
+#  include <stdint.h>
+# endif
+# if _MSC_VER < 1800   /* MSVC < 2013 */
+#  ifndef __cplusplus
+    typedef unsigned char _Bool;
+#  endif
+# endif
+#else
+# include <stdint.h>
+# if (defined (__SVR4) && defined (__sun)) || defined(_AIX) || defined(__hpux)
+#  include <alloca.h>
+# endif
+#endif
+'''
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cffi/verifier.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cffi/verifier.py
new file mode 100644
index 0000000000000000000000000000000000000000..59b78c216f36044501f7443955cfac3782885529
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/cffi/verifier.py
@@ -0,0 +1,306 @@
+#
+# DEPRECATED: implementation for ffi.verify()
+#
+import sys, os, binascii, shutil, io
+from . import __version_verifier_modules__
+from . 
import ffiplatform +from .error import VerificationError + +if sys.version_info >= (3, 3): + import importlib.machinery + def _extension_suffixes(): + return importlib.machinery.EXTENSION_SUFFIXES[:] +else: + import imp + def _extension_suffixes(): + return [suffix for suffix, _, type in imp.get_suffixes() + if type == imp.C_EXTENSION] + + +if sys.version_info >= (3,): + NativeIO = io.StringIO +else: + class NativeIO(io.BytesIO): + def write(self, s): + if isinstance(s, unicode): + s = s.encode('ascii') + super(NativeIO, self).write(s) + + +class Verifier(object): + + def __init__(self, ffi, preamble, tmpdir=None, modulename=None, + ext_package=None, tag='', force_generic_engine=False, + source_extension='.c', flags=None, relative_to=None, **kwds): + if ffi._parser._uses_new_feature: + raise VerificationError( + "feature not supported with ffi.verify(), but only " + "with ffi.set_source(): %s" % (ffi._parser._uses_new_feature,)) + self.ffi = ffi + self.preamble = preamble + if not modulename: + flattened_kwds = ffiplatform.flatten(kwds) + vengine_class = _locate_engine_class(ffi, force_generic_engine) + self._vengine = vengine_class(self) + self._vengine.patch_extension_kwds(kwds) + self.flags = flags + self.kwds = self.make_relative_to(kwds, relative_to) + # + if modulename: + if tag: + raise TypeError("can't specify both 'modulename' and 'tag'") + else: + key = '\x00'.join([sys.version[:3], __version_verifier_modules__, + preamble, flattened_kwds] + + ffi._cdefsources) + if sys.version_info >= (3,): + key = key.encode('utf-8') + k1 = hex(binascii.crc32(key[0::2]) & 0xffffffff) + k1 = k1.lstrip('0x').rstrip('L') + k2 = hex(binascii.crc32(key[1::2]) & 0xffffffff) + k2 = k2.lstrip('0').rstrip('L') + modulename = '_cffi_%s_%s%s%s' % (tag, self._vengine._class_key, + k1, k2) + suffix = _get_so_suffixes()[0] + self.tmpdir = tmpdir or _caller_dir_pycache() + self.sourcefilename = os.path.join(self.tmpdir, modulename + source_extension) + self.modulefilename = os.path.join(self.tmpdir, modulename + suffix) + self.ext_package = ext_package + self._has_source = False + self._has_module = False + + def write_source(self, file=None): + """Write the C source code. It is produced in 'self.sourcefilename', + which can be tweaked beforehand.""" + with self.ffi._lock: + if self._has_source and file is None: + raise VerificationError( + "source code already written") + self._write_source(file) + + def compile_module(self): + """Write the C source code (if not done already) and compile it. + This produces a dynamic link library in 'self.modulefilename'.""" + with self.ffi._lock: + if self._has_module: + raise VerificationError("module already compiled") + if not self._has_source: + self._write_source() + self._compile_module() + + def load_library(self): + """Get a C module from this Verifier instance. + Returns an instance of a FFILibrary class that behaves like the + objects returned by ffi.dlopen(), but that delegates all + operations to the C module. If necessary, the C code is written + and compiled first. 
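+
+        A minimal, illustrative flow (the names below are examples, not
+        part of this class's API):
+
+            v = Verifier(ffi, "/* C source */")
+            lib = v.load_library()
+            lib.some_cdef_function()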
+ """ + with self.ffi._lock: + if not self._has_module: + self._locate_module() + if not self._has_module: + if not self._has_source: + self._write_source() + self._compile_module() + return self._load_library() + + def get_module_name(self): + basename = os.path.basename(self.modulefilename) + # kill both the .so extension and the other .'s, as introduced + # by Python 3: 'basename.cpython-33m.so' + basename = basename.split('.', 1)[0] + # and the _d added in Python 2 debug builds --- but try to be + # conservative and not kill a legitimate _d + if basename.endswith('_d') and hasattr(sys, 'gettotalrefcount'): + basename = basename[:-2] + return basename + + def get_extension(self): + ffiplatform._hack_at_distutils() # backward compatibility hack + if not self._has_source: + with self.ffi._lock: + if not self._has_source: + self._write_source() + sourcename = ffiplatform.maybe_relative_path(self.sourcefilename) + modname = self.get_module_name() + return ffiplatform.get_extension(sourcename, modname, **self.kwds) + + def generates_python_module(self): + return self._vengine._gen_python_module + + def make_relative_to(self, kwds, relative_to): + if relative_to and os.path.dirname(relative_to): + dirname = os.path.dirname(relative_to) + kwds = kwds.copy() + for key in ffiplatform.LIST_OF_FILE_NAMES: + if key in kwds: + lst = kwds[key] + if not isinstance(lst, (list, tuple)): + raise TypeError("keyword '%s' should be a list or tuple" + % (key,)) + lst = [os.path.join(dirname, fn) for fn in lst] + kwds[key] = lst + return kwds + + # ---------- + + def _locate_module(self): + if not os.path.isfile(self.modulefilename): + if self.ext_package: + try: + pkg = __import__(self.ext_package, None, None, ['__doc__']) + except ImportError: + return # cannot import the package itself, give up + # (e.g. it might be called differently before installation) + path = pkg.__path__ + else: + path = None + filename = self._vengine.find_module(self.get_module_name(), path, + _get_so_suffixes()) + if filename is None: + return + self.modulefilename = filename + self._vengine.collect_types() + self._has_module = True + + def _write_source_to(self, file): + self._vengine._f = file + try: + self._vengine.write_source_to_f() + finally: + del self._vengine._f + + def _write_source(self, file=None): + if file is not None: + self._write_source_to(file) + else: + # Write our source file to an in memory file. 
+ f = NativeIO() + self._write_source_to(f) + source_data = f.getvalue() + + # Determine if this matches the current file + if os.path.exists(self.sourcefilename): + with open(self.sourcefilename, "r") as fp: + needs_written = not (fp.read() == source_data) + else: + needs_written = True + + # Actually write the file out if it doesn't match + if needs_written: + _ensure_dir(self.sourcefilename) + with open(self.sourcefilename, "w") as fp: + fp.write(source_data) + + # Set this flag + self._has_source = True + + def _compile_module(self): + # compile this C source + tmpdir = os.path.dirname(self.sourcefilename) + outputfilename = ffiplatform.compile(tmpdir, self.get_extension()) + try: + same = ffiplatform.samefile(outputfilename, self.modulefilename) + except OSError: + same = False + if not same: + _ensure_dir(self.modulefilename) + shutil.move(outputfilename, self.modulefilename) + self._has_module = True + + def _load_library(self): + assert self._has_module + if self.flags is not None: + return self._vengine.load_library(self.flags) + else: + return self._vengine.load_library() + +# ____________________________________________________________ + +_FORCE_GENERIC_ENGINE = False # for tests + +def _locate_engine_class(ffi, force_generic_engine): + if _FORCE_GENERIC_ENGINE: + force_generic_engine = True + if not force_generic_engine: + if '__pypy__' in sys.builtin_module_names: + force_generic_engine = True + else: + try: + import _cffi_backend + except ImportError: + _cffi_backend = '?' + if ffi._backend is not _cffi_backend: + force_generic_engine = True + if force_generic_engine: + from . import vengine_gen + return vengine_gen.VGenericEngine + else: + from . import vengine_cpy + return vengine_cpy.VCPythonEngine + +# ____________________________________________________________ + +_TMPDIR = None + +def _caller_dir_pycache(): + if _TMPDIR: + return _TMPDIR + result = os.environ.get('CFFI_TMPDIR') + if result: + return result + filename = sys._getframe(2).f_code.co_filename + return os.path.abspath(os.path.join(os.path.dirname(filename), + '__pycache__')) + +def set_tmpdir(dirname): + """Set the temporary directory to use instead of __pycache__.""" + global _TMPDIR + _TMPDIR = dirname + +def cleanup_tmpdir(tmpdir=None, keep_so=False): + """Clean up the temporary directory by removing all files in it + called `_cffi_*.{c,so}` as well as the `build` subdirectory.""" + tmpdir = tmpdir or _caller_dir_pycache() + try: + filelist = os.listdir(tmpdir) + except OSError: + return + if keep_so: + suffix = '.c' # only remove .c files + else: + suffix = _get_so_suffixes()[0].lower() + for fn in filelist: + if fn.lower().startswith('_cffi_') and ( + fn.lower().endswith(suffix) or fn.lower().endswith('.c')): + try: + os.unlink(os.path.join(tmpdir, fn)) + except OSError: + pass + clean_dir = [os.path.join(tmpdir, 'build')] + for dir in clean_dir: + try: + for fn in os.listdir(dir): + fn = os.path.join(dir, fn) + if os.path.isdir(fn): + clean_dir.append(fn) + else: + os.unlink(fn) + except OSError: + pass + +def _get_so_suffixes(): + suffixes = _extension_suffixes() + if not suffixes: + # bah, no C_EXTENSION available. 
Occurs on pypy without cpyext + if sys.platform == 'win32': + suffixes = [".pyd"] + else: + suffixes = [".so"] + + return suffixes + +def _ensure_dir(filename): + dirname = os.path.dirname(filename) + if dirname and not os.path.isdir(dirname): + os.makedirs(dirname) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet-3.0.4.dist-info/AUTHORS.txt b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet-3.0.4.dist-info/AUTHORS.txt new file mode 100644 index 0000000000000000000000000000000000000000..72c87d7d38ae7bf859717c333a5ee8230f6ce624 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet-3.0.4.dist-info/AUTHORS.txt @@ -0,0 +1,562 @@ +A_Rog +Aakanksha Agrawal <11389424+rasponic@users.noreply.github.com> +Abhinav Sagar <40603139+abhinavsagar@users.noreply.github.com> +ABHYUDAY PRATAP SINGH +abs51295 +AceGentile +Adam Chainz +Adam Tse +Adam Tse +Adam Wentz +admin +Adrien Morison +ahayrapetyan +Ahilya +AinsworthK +Akash Srivastava +Alan Yee +Albert Tugushev +Albert-Guan +albertg +Aleks Bunin +Alethea Flowers +Alex Gaynor +Alex Grönholm +Alex Loosley +Alex Morega +Alex Stachowiak +Alexander Shtyrov +Alexandre Conrad +Alexey Popravka +Alexey Popravka +Alli +Ami Fischman +Ananya Maiti +Anatoly Techtonik +Anders Kaseorg +Andreas Lutro +Andrei Geacar +Andrew Gaul +Andrey Bulgakov +Andrés Delfino <34587441+andresdelfino@users.noreply.github.com> +Andrés Delfino +Andy Freeland +Andy Freeland +Andy Kluger +Ani Hayrapetyan +Aniruddha Basak +Anish Tambe +Anrs Hu +Anthony Sottile +Antoine Musso +Anton Ovchinnikov +Anton Patrushev +Antonio Alvarado Hernandez +Antony Lee +Antti Kaihola +Anubhav Patel +Anuj Godase +AQNOUCH Mohammed +AraHaan +Arindam Choudhury +Armin Ronacher +Artem +Ashley Manton +Ashwin Ramaswami +atse +Atsushi Odagiri +Avner Cohen +Baptiste Mispelon +Barney Gale +barneygale +Bartek Ogryczak +Bastian Venthur +Ben Darnell +Ben Hoyt +Ben Rosser +Bence Nagy +Benjamin Peterson +Benjamin VanEvery +Benoit Pierre +Berker Peksag +Bernardo B. Marques +Bernhard M. Wiedemann +Bertil Hatt +Bogdan Opanchuk +BorisZZZ +Brad Erickson +Bradley Ayers +Brandon L. 
Reiss +Brandt Bucher +Brett Randall +Brian Cristante <33549821+brcrista@users.noreply.github.com> +Brian Cristante +Brian Rosner +BrownTruck +Bruno Oliveira +Bruno Renié +Bstrdsmkr +Buck Golemon +burrows +Bussonnier Matthias +c22 +Caleb Martinez +Calvin Smith +Carl Meyer +Carlos Liam +Carol Willing +Carter Thayer +Cass +Chandrasekhar Atina +Chih-Hsuan Yen +Chih-Hsuan Yen +Chris Brinker +Chris Hunt +Chris Jerdonek +Chris McDonough +Chris Wolfe +Christian Heimes +Christian Oudard +Christopher Hunt +Christopher Snyder +Clark Boylan +Clay McClure +Cody +Cody Soyland +Colin Watson +Connor Osborn +Cooper Lees +Cooper Ry Lees +Cory Benfield +Cory Wright +Craig Kerstiens +Cristian Sorinel +Curtis Doty +cytolentino +Damian Quiroga +Dan Black +Dan Savilonis +Dan Sully +daniel +Daniel Collins +Daniel Hahler +Daniel Holth +Daniel Jost +Daniel Shaulov +Daniele Esposti +Daniele Procida +Danny Hermes +Dav Clark +Dave Abrahams +Dave Jones +David Aguilar +David Black +David Bordeynik +David Bordeynik +David Caro +David Evans +David Linke +David Pursehouse +David Tucker +David Wales +Davidovich +derwolfe +Desetude +Diego Caraballo +DiegoCaraballo +Dmitry Gladkov +Domen Kožar +Donald Stufft +Dongweiming +Douglas Thor +DrFeathers +Dustin Ingram +Dwayne Bailey +Ed Morley <501702+edmorley@users.noreply.github.com> +Ed Morley +Eitan Adler +ekristina +elainechan +Eli Schwartz +Eli Schwartz +Emil Burzo +Emil Styrke +Endoh Takanao +enoch +Erdinc Mutlu +Eric Gillingham +Eric Hanchrow +Eric Hopper +Erik M. Bray +Erik Rose +Ernest W Durbin III +Ernest W. Durbin III +Erwin Janssen +Eugene Vereshchagin +everdimension +Felix Yan +fiber-space +Filip Kokosiński +Florian Briand +Florian Rathgeber +Francesco +Francesco Montesano +Frost Ming +Gabriel Curio +Gabriel de Perthuis +Garry Polley +gdanielson +Geoffrey Lehée +Geoffrey Sneddon +George Song +Georgi Valkov +Giftlin Rajaiah +gizmoguy1 +gkdoc <40815324+gkdoc@users.noreply.github.com> +Gopinath M <31352222+mgopi1990@users.noreply.github.com> +GOTO Hayato <3532528+gh640@users.noreply.github.com> +gpiks +Guilherme Espada +Guy Rozendorn +gzpan123 +Hanjun Kim +Hari Charan +Harsh Vardhan +Herbert Pfennig +Hsiaoming Yang +Hugo +Hugo Lopes Tavares +Hugo van Kemenade +hugovk +Hynek Schlawack +Ian Bicking +Ian Cordasco +Ian Lee +Ian Stapleton Cordasco +Ian Wienand +Ian Wienand +Igor Kuzmitshov +Igor Sobreira +Ilya Baryshev +INADA Naoki +Ionel Cristian Mărieș +Ionel Maries Cristian +Ivan Pozdeev +Jacob Kim +jakirkham +Jakub Stasiak +Jakub Vysoky +Jakub Wilk +James Cleveland +James Cleveland +James Firth +James Polley +Jan Pokorný +Jannis Leidel +jarondl +Jason R. 
Coombs +Jay Graves +Jean-Christophe Fillion-Robin +Jeff Barber +Jeff Dairiki +Jelmer Vernooij +jenix21 +Jeremy Stanley +Jeremy Zafran +Jiashuo Li +Jim Garrison +Jivan Amara +John Paton +John-Scott Atlakson +johnthagen +johnthagen +Jon Banafato +Jon Dufresne +Jon Parise +Jonas Nockert +Jonathan Herbert +Joost Molenaar +Jorge Niedbalski +Joseph Long +Josh Bronson +Josh Hansen +Josh Schneier +Juanjo Bazán +Julian Berman +Julian Gethmann +Julien Demoor +jwg4 +Jyrki Pulliainen +Kai Chen +Kamal Bin Mustafa +kaustav haldar +keanemind +Keith Maxwell +Kelsey Hightower +Kenneth Belitzky +Kenneth Reitz +Kenneth Reitz +Kevin Burke +Kevin Carter +Kevin Frommelt +Kevin R Patterson +Kexuan Sun +Kit Randel +kpinc +Krishna Oza +Kumar McMillan +Kyle Persohn +lakshmanaram +Laszlo Kiss-Kollar +Laurent Bristiel +Laurie Opperman +Leon Sasson +Lev Givon +Lincoln de Sousa +Lipis +Loren Carvalho +Lucas Cimon +Ludovic Gasc +Luke Macken +Luo Jiebin +luojiebin +luz.paz +László Kiss Kollár +László Kiss Kollár +Marc Abramowitz +Marc Tamlyn +Marcus Smith +Mariatta +Mark Kohler +Mark Williams +Mark Williams +Markus Hametner +Masaki +Masklinn +Matej Stuchlik +Mathew Jennings +Mathieu Bridon +Matt Good +Matt Maker +Matt Robenolt +matthew +Matthew Einhorn +Matthew Gilliard +Matthew Iversen +Matthew Trumbell +Matthew Willson +Matthias Bussonnier +mattip +Maxim Kurnikov +Maxime Rouyrre +mayeut +mbaluna <44498973+mbaluna@users.noreply.github.com> +mdebi <17590103+mdebi@users.noreply.github.com> +memoselyk +Michael +Michael Aquilina +Michael E. Karpeles +Michael Klich +Michael Williamson +michaelpacer +Mickaël Schoentgen +Miguel Araujo Perez +Mihir Singh +Mike +Mike Hendricks +Min RK +MinRK +Miro Hrončok +Monica Baluna +montefra +Monty Taylor +Nate Coraor +Nathaniel J. Smith +Nehal J Wani +Neil Botelho +Nick Coghlan +Nick Stenning +Nick Timkovich +Nicolas Bock +Nikhil Benesch +Nitesh Sharma +Nowell Strite +NtaleGrey +nvdv +Ofekmeister +ofrinevo +Oliver Jeeves +Oliver Tonnhofer +Olivier Girardot +Olivier Grisel +Ollie Rutherfurd +OMOTO Kenji +Omry Yadan +Oren Held +Oscar Benjamin +Oz N Tiram +Pachwenko <32424503+Pachwenko@users.noreply.github.com> +Patrick Dubroy +Patrick Jenkins +Patrick Lawson +patricktokeeffe +Patrik Kopkan +Paul Kehrer +Paul Moore +Paul Nasrat +Paul Oswald +Paul van der Linden +Paulus Schoutsen +Pavithra Eswaramoorthy <33131404+QueenCoffee@users.noreply.github.com> +Pawel Jasinski +Pekka Klärck +Peter Lisák +Peter Waller +petr-tik +Phaneendra Chiruvella +Phil Freo +Phil Pennock +Phil Whelan +Philip Jägenstedt +Philip Molloy +Philippe Ombredanne +Pi Delport +Pierre-Yves Rofes +pip +Prabakaran Kumaresshan +Prabhjyotsing Surjit Singh Sodhi +Prabhu Marappan +Pradyun Gedam +Pratik Mallya +Preet Thakkar +Preston Holmes +Przemek Wrzos +Pulkit Goyal <7895pulkit@gmail.com> +Qiangning Hong +Quentin Pradet +R. David Murray +Rafael Caricio +Ralf Schmitt +Razzi Abuissa +rdb +Remi Rampin +Remi Rampin +Rene Dudfield +Riccardo Magliocchetti +Richard Jones +RobberPhex +Robert Collins +Robert McGibbon +Robert T. 
McGibbon +robin elisha robinson +Roey Berman +Rohan Jain +Rohan Jain +Rohan Jain +Roman Bogorodskiy +Romuald Brunet +Ronny Pfannschmidt +Rory McCann +Ross Brattain +Roy Wellington Ⅳ +Roy Wellington Ⅳ +Ryan Wooden +ryneeverett +Sachi King +Salvatore Rinchiera +Savio Jomton +schlamar +Scott Kitterman +Sean +seanj +Sebastian Jordan +Sebastian Schaetz +Segev Finer +SeongSoo Cho +Sergey Vasilyev +Seth Woodworth +Shlomi Fish +Shovan Maity +Simeon Visser +Simon Cross +Simon Pichugin +sinoroc +Sorin Sbarnea +Stavros Korokithakis +Stefan Scherfke +Stephan Erb +stepshal +Steve (Gadget) Barnes +Steve Barnes +Steve Dower +Steve Kowalik +Steven Myint +stonebig +Stéphane Bidoul (ACSONE) +Stéphane Bidoul +Stéphane Klein +Sumana Harihareswara +Sviatoslav Sydorenko +Sviatoslav Sydorenko +Swat009 +Takayuki SHIMIZUKAWA +tbeswick +Thijs Triemstra +Thomas Fenzl +Thomas Grainger +Thomas Guettler +Thomas Johansson +Thomas Kluyver +Thomas Smith +Tim D. Smith +Tim Gates +Tim Harder +Tim Heap +tim smith +tinruufu +Tom Forbes +Tom Freudenheim +Tom V +Tomas Orsava +Tomer Chachamu +Tony Beswick +Tony Zhaocheng Tan +TonyBeswick +toonarmycaptain +Toshio Kuratomi +Travis Swicegood +Tzu-ping Chung +Valentin Haenel +Victor Stinner +victorvpaulo +Viktor Szépe +Ville Skyttä +Vinay Sajip +Vincent Philippon +Vinicyus Macedo <7549205+vinicyusmacedo@users.noreply.github.com> +Vitaly Babiy +Vladimir Rutsky +W. Trevor King +Wil Tan +Wilfred Hughes +William ML Leslie +William T Olson +Wilson Mo +wim glenn +Wolfgang Maier +Xavier Fernandez +Xavier Fernandez +xoviat +xtreak +YAMAMOTO Takashi +Yen Chi Hsuan +Yeray Diaz Diaz +Yoval P +Yu Jian +Yuan Jing Vincent Yan +Zearin +Zearin +Zhiping Deng +Zvezdan Petkovic +Łukasz Langa +Семён Марьясин diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet-3.0.4.dist-info/INSTALLER b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet-3.0.4.dist-info/INSTALLER new file mode 100644 index 0000000000000000000000000000000000000000..a1b589e38a32041e49332e5e81c2d363dc418d68 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet-3.0.4.dist-info/INSTALLER @@ -0,0 +1 @@ +pip diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet-3.0.4.dist-info/LICENSE.txt b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet-3.0.4.dist-info/LICENSE.txt new file mode 100644 index 0000000000000000000000000000000000000000..737fec5c5352af3d9a6a47a0670da4bdb52c5725 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet-3.0.4.dist-info/LICENSE.txt @@ -0,0 +1,20 @@ +Copyright (c) 2008-2019 The pip developers (see AUTHORS.txt file) + +Permission is hereby granted, free of charge, to any person obtaining +a copy of this software and associated documentation files (the +"Software"), to deal in the Software without restriction, including +without limitation the rights to use, copy, modify, merge, publish, +distribute, sublicense, and/or sell copies of the Software, and to +permit persons to whom the Software is furnished to do so, subject to +the following conditions: + +The above copyright notice and this permission notice shall be +included in all copies or substantial portions of the Software. 
+ +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE +LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION +OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION +WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet-3.0.4.dist-info/METADATA b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet-3.0.4.dist-info/METADATA new file mode 100644 index 0000000000000000000000000000000000000000..f1f13a85571a799a7fd2d62cc1b96b0e0689d052 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet-3.0.4.dist-info/METADATA @@ -0,0 +1,98 @@ +Metadata-Version: 2.1 +Name: chardet +Version: 3.0.4 +Summary: Universal encoding detector for Python 2 and 3 +Home-page: https://github.com/chardet/chardet +Author: Mark Pilgrim +Author-email: mark@diveintomark.org +Maintainer: Daniel Blanchard +Maintainer-email: dan.blanchard@gmail.com +License: LGPL +Keywords: encoding,i18n,xml +Platform: UNKNOWN +Classifier: Development Status :: 4 - Beta +Classifier: Intended Audience :: Developers +Classifier: License :: OSI Approved :: GNU Library or Lesser General Public License (LGPL) +Classifier: Operating System :: OS Independent +Classifier: Programming Language :: Python +Classifier: Programming Language :: Python :: 2 +Classifier: Programming Language :: Python :: 2.6 +Classifier: Programming Language :: Python :: 2.7 +Classifier: Programming Language :: Python :: 3 +Classifier: Programming Language :: Python :: 3.3 +Classifier: Programming Language :: Python :: 3.4 +Classifier: Programming Language :: Python :: 3.5 +Classifier: Programming Language :: Python :: 3.6 +Classifier: Topic :: Software Development :: Libraries :: Python Modules +Classifier: Topic :: Text Processing :: Linguistic + +Chardet: The Universal Character Encoding Detector +-------------------------------------------------- + +.. image:: https://img.shields.io/travis/chardet/chardet/stable.svg + :alt: Build status + :target: https://travis-ci.org/chardet/chardet + +.. image:: https://img.shields.io/coveralls/chardet/chardet/stable.svg + :target: https://coveralls.io/r/chardet/chardet + +.. image:: https://img.shields.io/pypi/v/chardet.svg + :target: https://warehouse.python.org/project/chardet/ + :alt: Latest version on PyPI + +.. image:: https://img.shields.io/pypi/l/chardet.svg + :alt: License + + +Detects + - ASCII, UTF-8, UTF-16 (2 variants), UTF-32 (4 variants) + - Big5, GB2312, EUC-TW, HZ-GB-2312, ISO-2022-CN (Traditional and Simplified Chinese) + - EUC-JP, SHIFT_JIS, CP932, ISO-2022-JP (Japanese) + - EUC-KR, ISO-2022-KR (Korean) + - KOI8-R, MacCyrillic, IBM855, IBM866, ISO-8859-5, windows-1251 (Cyrillic) + - ISO-8859-5, windows-1251 (Bulgarian) + - ISO-8859-1, windows-1252 (Western European languages) + - ISO-8859-7, windows-1253 (Greek) + - ISO-8859-8, windows-1255 (Visual and Logical Hebrew) + - TIS-620 (Thai) + +.. note:: + Our ISO-8859-2 and windows-1250 (Hungarian) probers have been temporarily + disabled until we can retrain the models. + +Requires Python 2.6, 2.7, or 3.3+. 
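+
+A short usage sketch (hedged; ``detect`` and the result keys shown are the
+public API exposed by this package's ``__init__.py``)::
+
+    import chardet
+    result = chardet.detect(b'\xc3\xa9chantillon')
+    # e.g. {'encoding': 'utf-8', 'confidence': ..., 'language': ''}
+    print(result['encoding'])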
+ +Installation +------------ + +Install from `PyPI `_:: + + pip install chardet + +Documentation +------------- + +For users, docs are now available at https://chardet.readthedocs.io/. + +Command-line Tool +----------------- + +chardet comes with a command-line script which reports on the encodings of one +or more files:: + + % chardetect somefile someotherfile + somefile: windows-1252 with confidence 0.5 + someotherfile: ascii with confidence 1.0 + +About +----- + +This is a continuation of Mark Pilgrim's excellent chardet. Previously, two +versions needed to be maintained: one that supported python 2.x and one that +supported python 3.x. We've recently merged with `Ian Cordasco `_'s +`charade `_ fork, so now we have one +coherent version that works for Python 2.6+. + +:maintainer: Dan Blanchard + + diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet-3.0.4.dist-info/RECORD b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet-3.0.4.dist-info/RECORD new file mode 100644 index 0000000000000000000000000000000000000000..03310b99907c5e4b600ffb99dd63a89415a44485 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet-3.0.4.dist-info/RECORD @@ -0,0 +1,97 @@ +chardet/__init__.py,sha256=YsP5wQlsHJ2auF1RZJfypiSrCA7_bQiRm3ES_NI76-Y,1559 +chardet/big5freq.py,sha256=D_zK5GyzoVsRes0HkLJziltFQX0bKCLOrFe9_xDvO_8,31254 +chardet/big5prober.py,sha256=kBxHbdetBpPe7xrlb-e990iot64g_eGSLd32lB7_h3M,1757 +chardet/chardistribution.py,sha256=3woWS62KrGooKyqz4zQSnjFbJpa6V7g02daAibTwcl8,9411 +chardet/charsetgroupprober.py,sha256=6bDu8YIiRuScX4ca9Igb0U69TA2PGXXDej6Cc4_9kO4,3787 +chardet/charsetprober.py,sha256=KSmwJErjypyj0bRZmC5F5eM7c8YQgLYIjZXintZNstg,5110 +chardet/codingstatemachine.py,sha256=VYp_6cyyki5sHgXDSZnXW4q1oelHc3cu9AyQTX7uug8,3590 +chardet/compat.py,sha256=PKTzHkSbtbHDqS9PyujMbX74q1a8mMpeQTDVsQhZMRw,1134 +chardet/cp949prober.py,sha256=TZ434QX8zzBsnUvL_8wm4AQVTZ2ZkqEEQL_lNw9f9ow,1855 +chardet/enums.py,sha256=Aimwdb9as1dJKZaFNUH2OhWIVBVd6ZkJJ_WK5sNY8cU,1661 +chardet/escprober.py,sha256=kkyqVg1Yw3DIOAMJ2bdlyQgUFQhuHAW8dUGskToNWSc,3950 +chardet/escsm.py,sha256=RuXlgNvTIDarndvllNCk5WZBIpdCxQ0kcd9EAuxUh84,10510 +chardet/eucjpprober.py,sha256=iD8Jdp0ISRjgjiVN7f0e8xGeQJ5GM2oeZ1dA8nbSeUw,3749 +chardet/euckrfreq.py,sha256=-7GdmvgWez4-eO4SuXpa7tBiDi5vRXQ8WvdFAzVaSfo,13546 +chardet/euckrprober.py,sha256=MqFMTQXxW4HbzIpZ9lKDHB3GN8SP4yiHenTmf8g_PxY,1748 +chardet/euctwfreq.py,sha256=No1WyduFOgB5VITUA7PLyC5oJRNzRyMbBxaKI1l16MA,31621 +chardet/euctwprober.py,sha256=13p6EP4yRaxqnP4iHtxHOJ6R2zxHq1_m8hTRjzVZ95c,1747 +chardet/gb2312freq.py,sha256=JX8lsweKLmnCwmk8UHEQsLgkr_rP_kEbvivC4qPOrlc,20715 +chardet/gb2312prober.py,sha256=gGvIWi9WhDjE-xQXHvNIyrnLvEbMAYgyUSZ65HUfylw,1754 +chardet/hebrewprober.py,sha256=c3SZ-K7hvyzGY6JRAZxJgwJ_sUS9k0WYkvMY00YBYFo,13838 +chardet/jisfreq.py,sha256=vpmJv2Bu0J8gnMVRPHMFefTRvo_ha1mryLig8CBwgOg,25777 +chardet/jpcntx.py,sha256=PYlNqRUQT8LM3cT5FmHGP0iiscFlTWED92MALvBungo,19643 +chardet/langbulgarianmodel.py,sha256=1HqQS9Pbtnj1xQgxitJMvw8X6kKr5OockNCZWfEQrPE,12839 +chardet/langcyrillicmodel.py,sha256=LODajvsetH87yYDDQKA2CULXUH87tI223dhfjh9Zx9c,17948 +chardet/langgreekmodel.py,sha256=8YAW7bU8YwSJap0kIJSbPMw1BEqzGjWzqcqf0WgUKAA,12688 +chardet/langhebrewmodel.py,sha256=JSnqmE5E62tDLTPTvLpQsg5gOMO4PbdWRvV7Avkc0HA,11345 +chardet/langhungarianmodel.py,sha256=RhapYSG5l0ZaO-VV4Fan5sW0WRGQqhwBM61yx3yxyOA,12592 
+chardet/langthaimodel.py,sha256=8l0173Gu_W6G8mxmQOTEF4ls2YdE7FxWf3QkSxEGXJQ,11290 +chardet/langturkishmodel.py,sha256=W22eRNJsqI6uWAfwXSKVWWnCerYqrI8dZQTm_M0lRFk,11102 +chardet/latin1prober.py,sha256=S2IoORhFk39FEFOlSFWtgVybRiP6h7BlLldHVclNkU8,5370 +chardet/mbcharsetprober.py,sha256=AR95eFH9vuqSfvLQZN-L5ijea25NOBCoXqw8s5O9xLQ,3413 +chardet/mbcsgroupprober.py,sha256=h6TRnnYq2OxG1WdD5JOyxcdVpn7dG0q-vB8nWr5mbh4,2012 +chardet/mbcssm.py,sha256=SY32wVIF3HzcjY3BaEspy9metbNSKxIIB0RKPn7tjpI,25481 +chardet/sbcharsetprober.py,sha256=LDSpCldDCFlYwUkGkwD2oFxLlPWIWXT09akH_2PiY74,5657 +chardet/sbcsgroupprober.py,sha256=1IprcCB_k1qfmnxGC6MBbxELlKqD3scW6S8YIwdeyXA,3546 +chardet/sjisprober.py,sha256=IIt-lZj0WJqK4rmUZzKZP4GJlE8KUEtFYVuY96ek5MQ,3774 +chardet/universaldetector.py,sha256=qL0174lSZE442eB21nnktT9_VcAye07laFWUeUrjttY,12485 +chardet/utf8prober.py,sha256=IdD8v3zWOsB8OLiyPi-y_fqwipRFxV9Nc1eKBLSuIEw,2766 +chardet/version.py,sha256=sp3B08mrDXB-pf3K9fqJ_zeDHOCLC8RrngQyDFap_7g,242 +chardet/cli/__init__.py,sha256=AbpHGcgLb-kRsJGnwFEktk7uzpZOCcBY74-YBdrKVGs,1 +chardet/cli/chardetect.py,sha256=YBO8L4mXo0WR6_-Fjh_8QxPBoEBNqB9oNxNrdc54AQs,2738 +chardet-3.0.4.dist-info/AUTHORS.txt,sha256=RtqU9KfonVGhI48DAA4-yTOBUhBtQTjFhaDzHoyh7uU,21518 +chardet-3.0.4.dist-info/LICENSE.txt,sha256=W6Ifuwlk-TatfRU2LR7W1JMcyMj5_y1NkRkOEJvnRDE,1090 +chardet-3.0.4.dist-info/METADATA,sha256=o6XNN41EUioeDnklH1-8haOSjI60AkAaI823ANFkOM4,3304 +chardet-3.0.4.dist-info/WHEEL,sha256=kGT74LWyRUZrL4VgLh6_g12IeVl_9u9ZVhadrgXZUEY,110 +chardet-3.0.4.dist-info/entry_points.txt,sha256=fAMmhu5eJ-zAJ-smfqQwRClQ3-nozOCmvJ6-E8lgGJo,60 +chardet-3.0.4.dist-info/top_level.txt,sha256=AowzBbZy4x8EirABDdJSLJZMkJ_53iIag8xfKR6D7kI,8 +chardet-3.0.4.dist-info/RECORD,, +chardet/euctwprober.cpython-38.pyc,, +chardet/charsetgroupprober.cpython-38.pyc,, +chardet/hebrewprober.cpython-38.pyc,, +chardet/cp949prober.cpython-38.pyc,, +chardet/__pycache__,, +chardet/sjisprober.cpython-38.pyc,, +chardet/big5prober.cpython-38.pyc,, +chardet-3.0.4.virtualenv,, +chardet/euckrfreq.cpython-38.pyc,, +chardet/charsetprober.cpython-38.pyc,, +chardet/cli/__pycache__,, +chardet/gb2312prober.cpython-38.pyc,, +../../../bin/chardetect-3.8,, +chardet/cli/chardetect.cpython-38.pyc,, +chardet/eucjpprober.cpython-38.pyc,, +chardet/jisfreq.cpython-38.pyc,, +chardet/codingstatemachine.cpython-38.pyc,, +chardet/chardistribution.cpython-38.pyc,, +chardet/universaldetector.cpython-38.pyc,, +chardet/escprober.cpython-38.pyc,, +chardet/langcyrillicmodel.cpython-38.pyc,, +chardet/mbcsgroupprober.cpython-38.pyc,, +chardet/escsm.cpython-38.pyc,, +chardet-3.0.4.dist-info/__pycache__,, +chardet/sbcsgroupprober.cpython-38.pyc,, +chardet/langturkishmodel.cpython-38.pyc,, +chardet/latin1prober.cpython-38.pyc,, +chardet/compat.cpython-38.pyc,, +chardet/__init__.cpython-38.pyc,, +chardet/langhungarianmodel.cpython-38.pyc,, +chardet/utf8prober.cpython-38.pyc,, +chardet/langhebrewmodel.cpython-38.pyc,, +chardet/langgreekmodel.cpython-38.pyc,, +chardet/big5freq.cpython-38.pyc,, +chardet/mbcssm.cpython-38.pyc,, +../../../bin/chardetect3,, +chardet/langbulgarianmodel.cpython-38.pyc,, +chardet/jpcntx.cpython-38.pyc,, +chardet/sbcharsetprober.cpython-38.pyc,, +../../../bin/chardetect,, +chardet-3.0.4.dist-info/INSTALLER,, +chardet/cli/__init__.cpython-38.pyc,, +chardet/euckrprober.cpython-38.pyc,, +chardet/gb2312freq.cpython-38.pyc,, +chardet/enums.cpython-38.pyc,, +chardet/langthaimodel.cpython-38.pyc,, +chardet/version.cpython-38.pyc,, +chardet/euctwfreq.cpython-38.pyc,, 
+chardet/mbcharsetprober.cpython-38.pyc,, \ No newline at end of file diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet-3.0.4.dist-info/WHEEL b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet-3.0.4.dist-info/WHEEL new file mode 100644 index 0000000000000000000000000000000000000000..ef99c6cf3283b50a273ac4c6d009a0aa85597070 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet-3.0.4.dist-info/WHEEL @@ -0,0 +1,6 @@ +Wheel-Version: 1.0 +Generator: bdist_wheel (0.34.2) +Root-Is-Purelib: true +Tag: py2-none-any +Tag: py3-none-any + diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet-3.0.4.dist-info/entry_points.txt b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet-3.0.4.dist-info/entry_points.txt new file mode 100644 index 0000000000000000000000000000000000000000..a884269e7fc0d5f0cd96ba63c8d473c6560264b6 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet-3.0.4.dist-info/entry_points.txt @@ -0,0 +1,3 @@ +[console_scripts] +chardetect = chardet.cli.chardetect:main + diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet-3.0.4.dist-info/top_level.txt b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet-3.0.4.dist-info/top_level.txt new file mode 100644 index 0000000000000000000000000000000000000000..79236f25cda563eb57cd7b5838256d9f3fdf18ab --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet-3.0.4.dist-info/top_level.txt @@ -0,0 +1 @@ +chardet diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet-3.0.4.virtualenv b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet-3.0.4.virtualenv new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..0f9f820ef6e5f21042efd0e9dc18d097388d7207 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/__init__.py @@ -0,0 +1,39 @@ +######################## BEGIN LICENSE BLOCK ######################## +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License, or (at your option) any later version. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# Lesser General Public License for more details. 
+# +# You should have received a copy of the GNU Lesser General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA +# 02110-1301 USA +######################### END LICENSE BLOCK ######################### + + +from .compat import PY2, PY3 +from .universaldetector import UniversalDetector +from .version import __version__, VERSION + + +def detect(byte_str): + """ + Detect the encoding of the given byte string. + + :param byte_str: The byte sequence to examine. + :type byte_str: ``bytes`` or ``bytearray`` + """ + if not isinstance(byte_str, bytearray): + if not isinstance(byte_str, bytes): + raise TypeError('Expected object of type bytes or bytearray, got: ' + '{0}'.format(type(byte_str))) + else: + byte_str = bytearray(byte_str) + detector = UniversalDetector() + detector.feed(byte_str) + return detector.close() diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/big5freq.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/big5freq.py new file mode 100644 index 0000000000000000000000000000000000000000..38f32517aa8f6cf5970f7ceddd1a415289184c3e --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/big5freq.py @@ -0,0 +1,386 @@ +######################## BEGIN LICENSE BLOCK ######################## +# The Original Code is Mozilla Communicator client code. +# +# The Initial Developer of the Original Code is +# Netscape Communications Corporation. +# Portions created by the Initial Developer are Copyright (C) 1998 +# the Initial Developer. All Rights Reserved. +# +# Contributor(s): +# Mark Pilgrim - port to Python +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License, or (at your option) any later version. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# Lesser General Public License for more details. 
+# +# You should have received a copy of the GNU Lesser General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA +# 02110-1301 USA +######################### END LICENSE BLOCK ######################### + +# Big5 frequency table +# by Taiwan's Mandarin Promotion Council +# +# +# 128 --> 0.42261 +# 256 --> 0.57851 +# 512 --> 0.74851 +# 1024 --> 0.89384 +# 2048 --> 0.97583 +# +# Ideal Distribution Ratio = 0.74851/(1-0.74851) =2.98 +# Random Distribution Ration = 512/(5401-512)=0.105 +# +# Typical Distribution Ratio about 25% of Ideal one, still much higher than RDR + +BIG5_TYPICAL_DISTRIBUTION_RATIO = 0.75 + +#Char to FreqOrder table +BIG5_TABLE_SIZE = 5376 + +BIG5_CHAR_TO_FREQ_ORDER = ( + 1,1801,1506, 255,1431, 198, 9, 82, 6,5008, 177, 202,3681,1256,2821, 110, # 16 +3814, 33,3274, 261, 76, 44,2114, 16,2946,2187,1176, 659,3971, 26,3451,2653, # 32 +1198,3972,3350,4202, 410,2215, 302, 590, 361,1964, 8, 204, 58,4510,5009,1932, # 48 + 63,5010,5011, 317,1614, 75, 222, 159,4203,2417,1480,5012,3555,3091, 224,2822, # 64 +3682, 3, 10,3973,1471, 29,2787,1135,2866,1940, 873, 130,3275,1123, 312,5013, # 80 +4511,2052, 507, 252, 682,5014, 142,1915, 124, 206,2947, 34,3556,3204, 64, 604, # 96 +5015,2501,1977,1978, 155,1991, 645, 641,1606,5016,3452, 337, 72, 406,5017, 80, # 112 + 630, 238,3205,1509, 263, 939,1092,2654, 756,1440,1094,3453, 449, 69,2987, 591, # 128 + 179,2096, 471, 115,2035,1844, 60, 50,2988, 134, 806,1869, 734,2036,3454, 180, # 144 + 995,1607, 156, 537,2907, 688,5018, 319,1305, 779,2145, 514,2379, 298,4512, 359, # 160 +2502, 90,2716,1338, 663, 11, 906,1099,2553, 20,2441, 182, 532,1716,5019, 732, # 176 +1376,4204,1311,1420,3206, 25,2317,1056, 113, 399, 382,1950, 242,3455,2474, 529, # 192 +3276, 475,1447,3683,5020, 117, 21, 656, 810,1297,2300,2334,3557,5021, 126,4205, # 208 + 706, 456, 150, 613,4513, 71,1118,2037,4206, 145,3092, 85, 835, 486,2115,1246, # 224 +1426, 428, 727,1285,1015, 800, 106, 623, 303,1281,5022,2128,2359, 347,3815, 221, # 240 +3558,3135,5023,1956,1153,4207, 83, 296,1199,3093, 192, 624, 93,5024, 822,1898, # 256 +2823,3136, 795,2065, 991,1554,1542,1592, 27, 43,2867, 859, 139,1456, 860,4514, # 272 + 437, 712,3974, 164,2397,3137, 695, 211,3037,2097, 195,3975,1608,3559,3560,3684, # 288 +3976, 234, 811,2989,2098,3977,2233,1441,3561,1615,2380, 668,2077,1638, 305, 228, # 304 +1664,4515, 467, 415,5025, 262,2099,1593, 239, 108, 300, 200,1033, 512,1247,2078, # 320 +5026,5027,2176,3207,3685,2682, 593, 845,1062,3277, 88,1723,2038,3978,1951, 212, # 336 + 266, 152, 149, 468,1899,4208,4516, 77, 187,5028,3038, 37, 5,2990,5029,3979, # 352 +5030,5031, 39,2524,4517,2908,3208,2079, 55, 148, 74,4518, 545, 483,1474,1029, # 368 +1665, 217,1870,1531,3138,1104,2655,4209, 24, 172,3562, 900,3980,3563,3564,4519, # 384 + 32,1408,2824,1312, 329, 487,2360,2251,2717, 784,2683, 4,3039,3351,1427,1789, # 400 + 188, 109, 499,5032,3686,1717,1790, 888,1217,3040,4520,5033,3565,5034,3352,1520, # 416 +3687,3981, 196,1034, 775,5035,5036, 929,1816, 249, 439, 38,5037,1063,5038, 794, # 432 +3982,1435,2301, 46, 178,3278,2066,5039,2381,5040, 214,1709,4521, 804, 35, 707, # 448 + 324,3688,1601,2554, 140, 459,4210,5041,5042,1365, 839, 272, 978,2262,2580,3456, # 464 +2129,1363,3689,1423, 697, 100,3094, 48, 70,1231, 495,3139,2196,5043,1294,5044, # 480 +2080, 462, 586,1042,3279, 853, 256, 988, 185,2382,3457,1698, 434,1084,5045,3458, # 496 + 314,2625,2788,4522,2335,2336, 569,2285, 637,1817,2525, 757,1162,1879,1616,3459, # 512 + 287,1577,2116, 
768,4523,1671,2868,3566,2526,1321,3816, 909,2418,5046,4211, 933, # 528 +3817,4212,2053,2361,1222,4524, 765,2419,1322, 786,4525,5047,1920,1462,1677,2909, # 544 +1699,5048,4526,1424,2442,3140,3690,2600,3353,1775,1941,3460,3983,4213, 309,1369, # 560 +1130,2825, 364,2234,1653,1299,3984,3567,3985,3986,2656, 525,1085,3041, 902,2001, # 576 +1475, 964,4527, 421,1845,1415,1057,2286, 940,1364,3141, 376,4528,4529,1381, 7, # 592 +2527, 983,2383, 336,1710,2684,1846, 321,3461, 559,1131,3042,2752,1809,1132,1313, # 608 + 265,1481,1858,5049, 352,1203,2826,3280, 167,1089, 420,2827, 776, 792,1724,3568, # 624 +4214,2443,3281,5050,4215,5051, 446, 229, 333,2753, 901,3818,1200,1557,4530,2657, # 640 +1921, 395,2754,2685,3819,4216,1836, 125, 916,3209,2626,4531,5052,5053,3820,5054, # 656 +5055,5056,4532,3142,3691,1133,2555,1757,3462,1510,2318,1409,3569,5057,2146, 438, # 672 +2601,2910,2384,3354,1068, 958,3043, 461, 311,2869,2686,4217,1916,3210,4218,1979, # 688 + 383, 750,2755,2627,4219, 274, 539, 385,1278,1442,5058,1154,1965, 384, 561, 210, # 704 + 98,1295,2556,3570,5059,1711,2420,1482,3463,3987,2911,1257, 129,5060,3821, 642, # 720 + 523,2789,2790,2658,5061, 141,2235,1333, 68, 176, 441, 876, 907,4220, 603,2602, # 736 + 710, 171,3464, 404, 549, 18,3143,2398,1410,3692,1666,5062,3571,4533,2912,4534, # 752 +5063,2991, 368,5064, 146, 366, 99, 871,3693,1543, 748, 807,1586,1185, 22,2263, # 768 + 379,3822,3211,5065,3212, 505,1942,2628,1992,1382,2319,5066, 380,2362, 218, 702, # 784 +1818,1248,3465,3044,3572,3355,3282,5067,2992,3694, 930,3283,3823,5068, 59,5069, # 800 + 585, 601,4221, 497,3466,1112,1314,4535,1802,5070,1223,1472,2177,5071, 749,1837, # 816 + 690,1900,3824,1773,3988,1476, 429,1043,1791,2236,2117, 917,4222, 447,1086,1629, # 832 +5072, 556,5073,5074,2021,1654, 844,1090, 105, 550, 966,1758,2828,1008,1783, 686, # 848 +1095,5075,2287, 793,1602,5076,3573,2603,4536,4223,2948,2302,4537,3825, 980,2503, # 864 + 544, 353, 527,4538, 908,2687,2913,5077, 381,2629,1943,1348,5078,1341,1252, 560, # 880 +3095,5079,3467,2870,5080,2054, 973, 886,2081, 143,4539,5081,5082, 157,3989, 496, # 896 +4224, 57, 840, 540,2039,4540,4541,3468,2118,1445, 970,2264,1748,1966,2082,4225, # 912 +3144,1234,1776,3284,2829,3695, 773,1206,2130,1066,2040,1326,3990,1738,1725,4226, # 928 + 279,3145, 51,1544,2604, 423,1578,2131,2067, 173,4542,1880,5083,5084,1583, 264, # 944 + 610,3696,4543,2444, 280, 154,5085,5086,5087,1739, 338,1282,3096, 693,2871,1411, # 960 +1074,3826,2445,5088,4544,5089,5090,1240, 952,2399,5091,2914,1538,2688, 685,1483, # 976 +4227,2475,1436, 953,4228,2055,4545, 671,2400, 79,4229,2446,3285, 608, 567,2689, # 992 +3469,4230,4231,1691, 393,1261,1792,2401,5092,4546,5093,5094,5095,5096,1383,1672, # 1008 +3827,3213,1464, 522,1119, 661,1150, 216, 675,4547,3991,1432,3574, 609,4548,2690, # 1024 +2402,5097,5098,5099,4232,3045, 0,5100,2476, 315, 231,2447, 301,3356,4549,2385, # 1040 +5101, 233,4233,3697,1819,4550,4551,5102, 96,1777,1315,2083,5103, 257,5104,1810, # 1056 +3698,2718,1139,1820,4234,2022,1124,2164,2791,1778,2659,5105,3097, 363,1655,3214, # 1072 +5106,2993,5107,5108,5109,3992,1567,3993, 718, 103,3215, 849,1443, 341,3357,2949, # 1088 +1484,5110,1712, 127, 67, 339,4235,2403, 679,1412, 821,5111,5112, 834, 738, 351, # 1104 +2994,2147, 846, 235,1497,1881, 418,1993,3828,2719, 186,1100,2148,2756,3575,1545, # 1120 +1355,2950,2872,1377, 583,3994,4236,2581,2995,5113,1298,3699,1078,2557,3700,2363, # 1136 + 78,3829,3830, 267,1289,2100,2002,1594,4237, 348, 369,1274,2197,2178,1838,4552, # 1152 +1821,2830,3701,2757,2288,2003,4553,2951,2758, 144,3358, 
882,4554,3995,2759,3470, # 1168 +4555,2915,5114,4238,1726, 320,5115,3996,3046, 788,2996,5116,2831,1774,1327,2873, # 1184 +3997,2832,5117,1306,4556,2004,1700,3831,3576,2364,2660, 787,2023, 506, 824,3702, # 1200 + 534, 323,4557,1044,3359,2024,1901, 946,3471,5118,1779,1500,1678,5119,1882,4558, # 1216 + 165, 243,4559,3703,2528, 123, 683,4239, 764,4560, 36,3998,1793, 589,2916, 816, # 1232 + 626,1667,3047,2237,1639,1555,1622,3832,3999,5120,4000,2874,1370,1228,1933, 891, # 1248 +2084,2917, 304,4240,5121, 292,2997,2720,3577, 691,2101,4241,1115,4561, 118, 662, # 1264 +5122, 611,1156, 854,2386,1316,2875, 2, 386, 515,2918,5123,5124,3286, 868,2238, # 1280 +1486, 855,2661, 785,2216,3048,5125,1040,3216,3578,5126,3146, 448,5127,1525,5128, # 1296 +2165,4562,5129,3833,5130,4242,2833,3579,3147, 503, 818,4001,3148,1568, 814, 676, # 1312 +1444, 306,1749,5131,3834,1416,1030, 197,1428, 805,2834,1501,4563,5132,5133,5134, # 1328 +1994,5135,4564,5136,5137,2198, 13,2792,3704,2998,3149,1229,1917,5138,3835,2132, # 1344 +5139,4243,4565,2404,3580,5140,2217,1511,1727,1120,5141,5142, 646,3836,2448, 307, # 1360 +5143,5144,1595,3217,5145,5146,5147,3705,1113,1356,4002,1465,2529,2530,5148, 519, # 1376 +5149, 128,2133, 92,2289,1980,5150,4003,1512, 342,3150,2199,5151,2793,2218,1981, # 1392 +3360,4244, 290,1656,1317, 789, 827,2365,5152,3837,4566, 562, 581,4004,5153, 401, # 1408 +4567,2252, 94,4568,5154,1399,2794,5155,1463,2025,4569,3218,1944,5156, 828,1105, # 1424 +4245,1262,1394,5157,4246, 605,4570,5158,1784,2876,5159,2835, 819,2102, 578,2200, # 1440 +2952,5160,1502, 436,3287,4247,3288,2836,4005,2919,3472,3473,5161,2721,2320,5162, # 1456 +5163,2337,2068, 23,4571, 193, 826,3838,2103, 699,1630,4248,3098, 390,1794,1064, # 1472 +3581,5164,1579,3099,3100,1400,5165,4249,1839,1640,2877,5166,4572,4573, 137,4250, # 1488 + 598,3101,1967, 780, 104, 974,2953,5167, 278, 899, 253, 402, 572, 504, 493,1339, # 1504 +5168,4006,1275,4574,2582,2558,5169,3706,3049,3102,2253, 565,1334,2722, 863, 41, # 1520 +5170,5171,4575,5172,1657,2338, 19, 463,2760,4251, 606,5173,2999,3289,1087,2085, # 1536 +1323,2662,3000,5174,1631,1623,1750,4252,2691,5175,2878, 791,2723,2663,2339, 232, # 1552 +2421,5176,3001,1498,5177,2664,2630, 755,1366,3707,3290,3151,2026,1609, 119,1918, # 1568 +3474, 862,1026,4253,5178,4007,3839,4576,4008,4577,2265,1952,2477,5179,1125, 817, # 1584 +4254,4255,4009,1513,1766,2041,1487,4256,3050,3291,2837,3840,3152,5180,5181,1507, # 1600 +5182,2692, 733, 40,1632,1106,2879, 345,4257, 841,2531, 230,4578,3002,1847,3292, # 1616 +3475,5183,1263, 986,3476,5184, 735, 879, 254,1137, 857, 622,1300,1180,1388,1562, # 1632 +4010,4011,2954, 967,2761,2665,1349, 592,2134,1692,3361,3003,1995,4258,1679,4012, # 1648 +1902,2188,5185, 739,3708,2724,1296,1290,5186,4259,2201,2202,1922,1563,2605,2559, # 1664 +1871,2762,3004,5187, 435,5188, 343,1108, 596, 17,1751,4579,2239,3477,3709,5189, # 1680 +4580, 294,3582,2955,1693, 477, 979, 281,2042,3583, 643,2043,3710,2631,2795,2266, # 1696 +1031,2340,2135,2303,3584,4581, 367,1249,2560,5190,3585,5191,4582,1283,3362,2005, # 1712 + 240,1762,3363,4583,4584, 836,1069,3153, 474,5192,2149,2532, 268,3586,5193,3219, # 1728 +1521,1284,5194,1658,1546,4260,5195,3587,3588,5196,4261,3364,2693,1685,4262, 961, # 1744 +1673,2632, 190,2006,2203,3841,4585,4586,5197, 570,2504,3711,1490,5198,4587,2633, # 1760 +3293,1957,4588, 584,1514, 396,1045,1945,5199,4589,1968,2449,5200,5201,4590,4013, # 1776 + 619,5202,3154,3294, 215,2007,2796,2561,3220,4591,3221,4592, 763,4263,3842,4593, # 1792 +5203,5204,1958,1767,2956,3365,3712,1174, 
452,1477,4594,3366,3155,5205,2838,1253, # 1808 +2387,2189,1091,2290,4264, 492,5206, 638,1169,1825,2136,1752,4014, 648, 926,1021, # 1824 +1324,4595, 520,4596, 997, 847,1007, 892,4597,3843,2267,1872,3713,2405,1785,4598, # 1840 +1953,2957,3103,3222,1728,4265,2044,3714,4599,2008,1701,3156,1551, 30,2268,4266, # 1856 +5207,2027,4600,3589,5208, 501,5209,4267, 594,3478,2166,1822,3590,3479,3591,3223, # 1872 + 829,2839,4268,5210,1680,3157,1225,4269,5211,3295,4601,4270,3158,2341,5212,4602, # 1888 +4271,5213,4015,4016,5214,1848,2388,2606,3367,5215,4603, 374,4017, 652,4272,4273, # 1904 + 375,1140, 798,5216,5217,5218,2366,4604,2269, 546,1659, 138,3051,2450,4605,5219, # 1920 +2254, 612,1849, 910, 796,3844,1740,1371, 825,3845,3846,5220,2920,2562,5221, 692, # 1936 + 444,3052,2634, 801,4606,4274,5222,1491, 244,1053,3053,4275,4276, 340,5223,4018, # 1952 +1041,3005, 293,1168, 87,1357,5224,1539, 959,5225,2240, 721, 694,4277,3847, 219, # 1968 +1478, 644,1417,3368,2666,1413,1401,1335,1389,4019,5226,5227,3006,2367,3159,1826, # 1984 + 730,1515, 184,2840, 66,4607,5228,1660,2958, 246,3369, 378,1457, 226,3480, 975, # 2000 +4020,2959,1264,3592, 674, 696,5229, 163,5230,1141,2422,2167, 713,3593,3370,4608, # 2016 +4021,5231,5232,1186, 15,5233,1079,1070,5234,1522,3224,3594, 276,1050,2725, 758, # 2032 +1126, 653,2960,3296,5235,2342, 889,3595,4022,3104,3007, 903,1250,4609,4023,3481, # 2048 +3596,1342,1681,1718, 766,3297, 286, 89,2961,3715,5236,1713,5237,2607,3371,3008, # 2064 +5238,2962,2219,3225,2880,5239,4610,2505,2533, 181, 387,1075,4024, 731,2190,3372, # 2080 +5240,3298, 310, 313,3482,2304, 770,4278, 54,3054, 189,4611,3105,3848,4025,5241, # 2096 +1230,1617,1850, 355,3597,4279,4612,3373, 111,4280,3716,1350,3160,3483,3055,4281, # 2112 +2150,3299,3598,5242,2797,4026,4027,3009, 722,2009,5243,1071, 247,1207,2343,2478, # 2128 +1378,4613,2010, 864,1437,1214,4614, 373,3849,1142,2220, 667,4615, 442,2763,2563, # 2144 +3850,4028,1969,4282,3300,1840, 837, 170,1107, 934,1336,1883,5244,5245,2119,4283, # 2160 +2841, 743,1569,5246,4616,4284, 582,2389,1418,3484,5247,1803,5248, 357,1395,1729, # 2176 +3717,3301,2423,1564,2241,5249,3106,3851,1633,4617,1114,2086,4285,1532,5250, 482, # 2192 +2451,4618,5251,5252,1492, 833,1466,5253,2726,3599,1641,2842,5254,1526,1272,3718, # 2208 +4286,1686,1795, 416,2564,1903,1954,1804,5255,3852,2798,3853,1159,2321,5256,2881, # 2224 +4619,1610,1584,3056,2424,2764, 443,3302,1163,3161,5257,5258,4029,5259,4287,2506, # 2240 +3057,4620,4030,3162,2104,1647,3600,2011,1873,4288,5260,4289, 431,3485,5261, 250, # 2256 + 97, 81,4290,5262,1648,1851,1558, 160, 848,5263, 866, 740,1694,5264,2204,2843, # 2272 +3226,4291,4621,3719,1687, 950,2479, 426, 469,3227,3720,3721,4031,5265,5266,1188, # 2288 + 424,1996, 861,3601,4292,3854,2205,2694, 168,1235,3602,4293,5267,2087,1674,4622, # 2304 +3374,3303, 220,2565,1009,5268,3855, 670,3010, 332,1208, 717,5269,5270,3603,2452, # 2320 +4032,3375,5271, 513,5272,1209,2882,3376,3163,4623,1080,5273,5274,5275,5276,2534, # 2336 +3722,3604, 815,1587,4033,4034,5277,3605,3486,3856,1254,4624,1328,3058,1390,4035, # 2352 +1741,4036,3857,4037,5278, 236,3858,2453,3304,5279,5280,3723,3859,1273,3860,4625, # 2368 +5281, 308,5282,4626, 245,4627,1852,2480,1307,2583, 430, 715,2137,2454,5283, 270, # 2384 + 199,2883,4038,5284,3606,2727,1753, 761,1754, 725,1661,1841,4628,3487,3724,5285, # 2400 +5286, 587, 14,3305, 227,2608, 326, 480,2270, 943,2765,3607, 291, 650,1884,5287, # 2416 +1702,1226, 102,1547, 62,3488, 904,4629,3489,1164,4294,5288,5289,1224,1548,2766, # 2432 + 391, 
498,1493,5290,1386,1419,5291,2056,1177,4630, 813, 880,1081,2368, 566,1145, # 2448 +4631,2291,1001,1035,2566,2609,2242, 394,1286,5292,5293,2069,5294, 86,1494,1730, # 2464 +4039, 491,1588, 745, 897,2963, 843,3377,4040,2767,2884,3306,1768, 998,2221,2070, # 2480 + 397,1827,1195,1970,3725,3011,3378, 284,5295,3861,2507,2138,2120,1904,5296,4041, # 2496 +2151,4042,4295,1036,3490,1905, 114,2567,4296, 209,1527,5297,5298,2964,2844,2635, # 2512 +2390,2728,3164, 812,2568,5299,3307,5300,1559, 737,1885,3726,1210, 885, 28,2695, # 2528 +3608,3862,5301,4297,1004,1780,4632,5302, 346,1982,2222,2696,4633,3863,1742, 797, # 2544 +1642,4043,1934,1072,1384,2152, 896,4044,3308,3727,3228,2885,3609,5303,2569,1959, # 2560 +4634,2455,1786,5304,5305,5306,4045,4298,1005,1308,3728,4299,2729,4635,4636,1528, # 2576 +2610, 161,1178,4300,1983, 987,4637,1101,4301, 631,4046,1157,3229,2425,1343,1241, # 2592 +1016,2243,2570, 372, 877,2344,2508,1160, 555,1935, 911,4047,5307, 466,1170, 169, # 2608 +1051,2921,2697,3729,2481,3012,1182,2012,2571,1251,2636,5308, 992,2345,3491,1540, # 2624 +2730,1201,2071,2406,1997,2482,5309,4638, 528,1923,2191,1503,1874,1570,2369,3379, # 2640 +3309,5310, 557,1073,5311,1828,3492,2088,2271,3165,3059,3107, 767,3108,2799,4639, # 2656 +1006,4302,4640,2346,1267,2179,3730,3230, 778,4048,3231,2731,1597,2667,5312,4641, # 2672 +5313,3493,5314,5315,5316,3310,2698,1433,3311, 131, 95,1504,4049, 723,4303,3166, # 2688 +1842,3610,2768,2192,4050,2028,2105,3731,5317,3013,4051,1218,5318,3380,3232,4052, # 2704 +4304,2584, 248,1634,3864, 912,5319,2845,3732,3060,3865, 654, 53,5320,3014,5321, # 2720 +1688,4642, 777,3494,1032,4053,1425,5322, 191, 820,2121,2846, 971,4643, 931,3233, # 2736 + 135, 664, 783,3866,1998, 772,2922,1936,4054,3867,4644,2923,3234, 282,2732, 640, # 2752 +1372,3495,1127, 922, 325,3381,5323,5324, 711,2045,5325,5326,4055,2223,2800,1937, # 2768 +4056,3382,2224,2255,3868,2305,5327,4645,3869,1258,3312,4057,3235,2139,2965,4058, # 2784 +4059,5328,2225, 258,3236,4646, 101,1227,5329,3313,1755,5330,1391,3314,5331,2924, # 2800 +2057, 893,5332,5333,5334,1402,4305,2347,5335,5336,3237,3611,5337,5338, 878,1325, # 2816 +1781,2801,4647, 259,1385,2585, 744,1183,2272,4648,5339,4060,2509,5340, 684,1024, # 2832 +4306,5341, 472,3612,3496,1165,3315,4061,4062, 322,2153, 881, 455,1695,1152,1340, # 2848 + 660, 554,2154,4649,1058,4650,4307, 830,1065,3383,4063,4651,1924,5342,1703,1919, # 2864 +5343, 932,2273, 122,5344,4652, 947, 677,5345,3870,2637, 297,1906,1925,2274,4653, # 2880 +2322,3316,5346,5347,4308,5348,4309, 84,4310, 112, 989,5349, 547,1059,4064, 701, # 2896 +3613,1019,5350,4311,5351,3497, 942, 639, 457,2306,2456, 993,2966, 407, 851, 494, # 2912 +4654,3384, 927,5352,1237,5353,2426,3385, 573,4312, 680, 921,2925,1279,1875, 285, # 2928 + 790,1448,1984, 719,2168,5354,5355,4655,4065,4066,1649,5356,1541, 563,5357,1077, # 2944 +5358,3386,3061,3498, 511,3015,4067,4068,3733,4069,1268,2572,3387,3238,4656,4657, # 2960 +5359, 535,1048,1276,1189,2926,2029,3167,1438,1373,2847,2967,1134,2013,5360,4313, # 2976 +1238,2586,3109,1259,5361, 700,5362,2968,3168,3734,4314,5363,4315,1146,1876,1907, # 2992 +4658,2611,4070, 781,2427, 132,1589, 203, 147, 273,2802,2407, 898,1787,2155,4071, # 3008 +4072,5364,3871,2803,5365,5366,4659,4660,5367,3239,5368,1635,3872, 965,5369,1805, # 3024 +2699,1516,3614,1121,1082,1329,3317,4073,1449,3873, 65,1128,2848,2927,2769,1590, # 3040 +3874,5370,5371, 12,2668, 45, 976,2587,3169,4661, 517,2535,1013,1037,3240,5372, # 3056 +3875,2849,5373,3876,5374,3499,5375,2612, 614,1999,2323,3877,3110,2733,2638,5376, # 3072 
+2588,4316, 599,1269,5377,1811,3735,5378,2700,3111, 759,1060, 489,1806,3388,3318, # 3088 +1358,5379,5380,2391,1387,1215,2639,2256, 490,5381,5382,4317,1759,2392,2348,5383, # 3104 +4662,3878,1908,4074,2640,1807,3241,4663,3500,3319,2770,2349, 874,5384,5385,3501, # 3120 +3736,1859, 91,2928,3737,3062,3879,4664,5386,3170,4075,2669,5387,3502,1202,1403, # 3136 +3880,2969,2536,1517,2510,4665,3503,2511,5388,4666,5389,2701,1886,1495,1731,4076, # 3152 +2370,4667,5390,2030,5391,5392,4077,2702,1216, 237,2589,4318,2324,4078,3881,4668, # 3168 +4669,2703,3615,3504, 445,4670,5393,5394,5395,5396,2771, 61,4079,3738,1823,4080, # 3184 +5397, 687,2046, 935, 925, 405,2670, 703,1096,1860,2734,4671,4081,1877,1367,2704, # 3200 +3389, 918,2106,1782,2483, 334,3320,1611,1093,4672, 564,3171,3505,3739,3390, 945, # 3216 +2641,2058,4673,5398,1926, 872,4319,5399,3506,2705,3112, 349,4320,3740,4082,4674, # 3232 +3882,4321,3741,2156,4083,4675,4676,4322,4677,2408,2047, 782,4084, 400, 251,4323, # 3248 +1624,5400,5401, 277,3742, 299,1265, 476,1191,3883,2122,4324,4325,1109, 205,5402, # 3264 +2590,1000,2157,3616,1861,5403,5404,5405,4678,5406,4679,2573, 107,2484,2158,4085, # 3280 +3507,3172,5407,1533, 541,1301, 158, 753,4326,2886,3617,5408,1696, 370,1088,4327, # 3296 +4680,3618, 579, 327, 440, 162,2244, 269,1938,1374,3508, 968,3063, 56,1396,3113, # 3312 +2107,3321,3391,5409,1927,2159,4681,3016,5410,3619,5411,5412,3743,4682,2485,5413, # 3328 +2804,5414,1650,4683,5415,2613,5416,5417,4086,2671,3392,1149,3393,4087,3884,4088, # 3344 +5418,1076, 49,5419, 951,3242,3322,3323, 450,2850, 920,5420,1812,2805,2371,4328, # 3360 +1909,1138,2372,3885,3509,5421,3243,4684,1910,1147,1518,2428,4685,3886,5422,4686, # 3376 +2393,2614, 260,1796,3244,5423,5424,3887,3324, 708,5425,3620,1704,5426,3621,1351, # 3392 +1618,3394,3017,1887, 944,4329,3395,4330,3064,3396,4331,5427,3744, 422, 413,1714, # 3408 +3325, 500,2059,2350,4332,2486,5428,1344,1911, 954,5429,1668,5430,5431,4089,2409, # 3424 +4333,3622,3888,4334,5432,2307,1318,2512,3114, 133,3115,2887,4687, 629, 31,2851, # 3440 +2706,3889,4688, 850, 949,4689,4090,2970,1732,2089,4335,1496,1853,5433,4091, 620, # 3456 +3245, 981,1242,3745,3397,1619,3746,1643,3326,2140,2457,1971,1719,3510,2169,5434, # 3472 +3246,5435,5436,3398,1829,5437,1277,4690,1565,2048,5438,1636,3623,3116,5439, 869, # 3488 +2852, 655,3890,3891,3117,4092,3018,3892,1310,3624,4691,5440,5441,5442,1733, 558, # 3504 +4692,3747, 335,1549,3065,1756,4336,3748,1946,3511,1830,1291,1192, 470,2735,2108, # 3520 +2806, 913,1054,4093,5443,1027,5444,3066,4094,4693, 982,2672,3399,3173,3512,3247, # 3536 +3248,1947,2807,5445, 571,4694,5446,1831,5447,3625,2591,1523,2429,5448,2090, 984, # 3552 +4695,3749,1960,5449,3750, 852, 923,2808,3513,3751, 969,1519, 999,2049,2325,1705, # 3568 +5450,3118, 615,1662, 151, 597,4095,2410,2326,1049, 275,4696,3752,4337, 568,3753, # 3584 +3626,2487,4338,3754,5451,2430,2275, 409,3249,5452,1566,2888,3514,1002, 769,2853, # 3600 + 194,2091,3174,3755,2226,3327,4339, 628,1505,5453,5454,1763,2180,3019,4096, 521, # 3616 +1161,2592,1788,2206,2411,4697,4097,1625,4340,4341, 412, 42,3119, 464,5455,2642, # 3632 +4698,3400,1760,1571,2889,3515,2537,1219,2207,3893,2643,2141,2373,4699,4700,3328, # 3648 +1651,3401,3627,5456,5457,3628,2488,3516,5458,3756,5459,5460,2276,2092, 460,5461, # 3664 +4701,5462,3020, 962, 588,3629, 289,3250,2644,1116, 52,5463,3067,1797,5464,5465, # 3680 +5466,1467,5467,1598,1143,3757,4342,1985,1734,1067,4702,1280,3402, 465,4703,1572, # 3696 + 510,5468,1928,2245,1813,1644,3630,5469,4704,3758,5470,5471,2673,1573,1534,5472, # 3712 
+5473, 536,1808,1761,3517,3894,3175,2645,5474,5475,5476,4705,3518,2929,1912,2809, # 3728 +5477,3329,1122, 377,3251,5478, 360,5479,5480,4343,1529, 551,5481,2060,3759,1769, # 3744 +2431,5482,2930,4344,3330,3120,2327,2109,2031,4706,1404, 136,1468,1479, 672,1171, # 3760 +3252,2308, 271,3176,5483,2772,5484,2050, 678,2736, 865,1948,4707,5485,2014,4098, # 3776 +2971,5486,2737,2227,1397,3068,3760,4708,4709,1735,2931,3403,3631,5487,3895, 509, # 3792 +2854,2458,2890,3896,5488,5489,3177,3178,4710,4345,2538,4711,2309,1166,1010, 552, # 3808 + 681,1888,5490,5491,2972,2973,4099,1287,1596,1862,3179, 358, 453, 736, 175, 478, # 3824 +1117, 905,1167,1097,5492,1854,1530,5493,1706,5494,2181,3519,2292,3761,3520,3632, # 3840 +4346,2093,4347,5495,3404,1193,2489,4348,1458,2193,2208,1863,1889,1421,3331,2932, # 3856 +3069,2182,3521, 595,2123,5496,4100,5497,5498,4349,1707,2646, 223,3762,1359, 751, # 3872 +3121, 183,3522,5499,2810,3021, 419,2374, 633, 704,3897,2394, 241,5500,5501,5502, # 3888 + 838,3022,3763,2277,2773,2459,3898,1939,2051,4101,1309,3122,2246,1181,5503,1136, # 3904 +2209,3899,2375,1446,4350,2310,4712,5504,5505,4351,1055,2615, 484,3764,5506,4102, # 3920 + 625,4352,2278,3405,1499,4353,4103,5507,4104,4354,3253,2279,2280,3523,5508,5509, # 3936 +2774, 808,2616,3765,3406,4105,4355,3123,2539, 526,3407,3900,4356, 955,5510,1620, # 3952 +4357,2647,2432,5511,1429,3766,1669,1832, 994, 928,5512,3633,1260,5513,5514,5515, # 3968 +1949,2293, 741,2933,1626,4358,2738,2460, 867,1184, 362,3408,1392,5516,5517,4106, # 3984 +4359,1770,1736,3254,2934,4713,4714,1929,2707,1459,1158,5518,3070,3409,2891,1292, # 4000 +1930,2513,2855,3767,1986,1187,2072,2015,2617,4360,5519,2574,2514,2170,3768,2490, # 4016 +3332,5520,3769,4715,5521,5522, 666,1003,3023,1022,3634,4361,5523,4716,1814,2257, # 4032 + 574,3901,1603, 295,1535, 705,3902,4362, 283, 858, 417,5524,5525,3255,4717,4718, # 4048 +3071,1220,1890,1046,2281,2461,4107,1393,1599, 689,2575, 388,4363,5526,2491, 802, # 4064 +5527,2811,3903,2061,1405,2258,5528,4719,3904,2110,1052,1345,3256,1585,5529, 809, # 4080 +5530,5531,5532, 575,2739,3524, 956,1552,1469,1144,2328,5533,2329,1560,2462,3635, # 4096 +3257,4108, 616,2210,4364,3180,2183,2294,5534,1833,5535,3525,4720,5536,1319,3770, # 4112 +3771,1211,3636,1023,3258,1293,2812,5537,5538,5539,3905, 607,2311,3906, 762,2892, # 4128 +1439,4365,1360,4721,1485,3072,5540,4722,1038,4366,1450,2062,2648,4367,1379,4723, # 4144 +2593,5541,5542,4368,1352,1414,2330,2935,1172,5543,5544,3907,3908,4724,1798,1451, # 4160 +5545,5546,5547,5548,2936,4109,4110,2492,2351, 411,4111,4112,3637,3333,3124,4725, # 4176 +1561,2674,1452,4113,1375,5549,5550, 47,2974, 316,5551,1406,1591,2937,3181,5552, # 4192 +1025,2142,3125,3182, 354,2740, 884,2228,4369,2412, 508,3772, 726,3638, 996,2433, # 4208 +3639, 729,5553, 392,2194,1453,4114,4726,3773,5554,5555,2463,3640,2618,1675,2813, # 4224 + 919,2352,2975,2353,1270,4727,4115, 73,5556,5557, 647,5558,3259,2856,2259,1550, # 4240 +1346,3024,5559,1332, 883,3526,5560,5561,5562,5563,3334,2775,5564,1212, 831,1347, # 4256 +4370,4728,2331,3909,1864,3073, 720,3910,4729,4730,3911,5565,4371,5566,5567,4731, # 4272 +5568,5569,1799,4732,3774,2619,4733,3641,1645,2376,4734,5570,2938, 669,2211,2675, # 4288 +2434,5571,2893,5572,5573,1028,3260,5574,4372,2413,5575,2260,1353,5576,5577,4735, # 4304 +3183, 518,5578,4116,5579,4373,1961,5580,2143,4374,5581,5582,3025,2354,2355,3912, # 4320 + 516,1834,1454,4117,2708,4375,4736,2229,2620,1972,1129,3642,5583,2776,5584,2976, # 4336 +1422, 577,1470,3026,1524,3410,5585,5586, 432,4376,3074,3527,5587,2594,1455,2515, # 
4352 +2230,1973,1175,5588,1020,2741,4118,3528,4737,5589,2742,5590,1743,1361,3075,3529, # 4368 +2649,4119,4377,4738,2295, 895, 924,4378,2171, 331,2247,3076, 166,1627,3077,1098, # 4384 +5591,1232,2894,2231,3411,4739, 657, 403,1196,2377, 542,3775,3412,1600,4379,3530, # 4400 +5592,4740,2777,3261, 576, 530,1362,4741,4742,2540,2676,3776,4120,5593, 842,3913, # 4416 +5594,2814,2032,1014,4121, 213,2709,3413, 665, 621,4380,5595,3777,2939,2435,5596, # 4432 +2436,3335,3643,3414,4743,4381,2541,4382,4744,3644,1682,4383,3531,1380,5597, 724, # 4448 +2282, 600,1670,5598,1337,1233,4745,3126,2248,5599,1621,4746,5600, 651,4384,5601, # 4464 +1612,4385,2621,5602,2857,5603,2743,2312,3078,5604, 716,2464,3079, 174,1255,2710, # 4480 +4122,3645, 548,1320,1398, 728,4123,1574,5605,1891,1197,3080,4124,5606,3081,3082, # 4496 +3778,3646,3779, 747,5607, 635,4386,4747,5608,5609,5610,4387,5611,5612,4748,5613, # 4512 +3415,4749,2437, 451,5614,3780,2542,2073,4388,2744,4389,4125,5615,1764,4750,5616, # 4528 +4390, 350,4751,2283,2395,2493,5617,4391,4126,2249,1434,4127, 488,4752, 458,4392, # 4544 +4128,3781, 771,1330,2396,3914,2576,3184,2160,2414,1553,2677,3185,4393,5618,2494, # 4560 +2895,2622,1720,2711,4394,3416,4753,5619,2543,4395,5620,3262,4396,2778,5621,2016, # 4576 +2745,5622,1155,1017,3782,3915,5623,3336,2313, 201,1865,4397,1430,5624,4129,5625, # 4592 +5626,5627,5628,5629,4398,1604,5630, 414,1866, 371,2595,4754,4755,3532,2017,3127, # 4608 +4756,1708, 960,4399, 887, 389,2172,1536,1663,1721,5631,2232,4130,2356,2940,1580, # 4624 +5632,5633,1744,4757,2544,4758,4759,5634,4760,5635,2074,5636,4761,3647,3417,2896, # 4640 +4400,5637,4401,2650,3418,2815, 673,2712,2465, 709,3533,4131,3648,4402,5638,1148, # 4656 + 502, 634,5639,5640,1204,4762,3649,1575,4763,2623,3783,5641,3784,3128, 948,3263, # 4672 + 121,1745,3916,1110,5642,4403,3083,2516,3027,4132,3785,1151,1771,3917,1488,4133, # 4688 +1987,5643,2438,3534,5644,5645,2094,5646,4404,3918,1213,1407,2816, 531,2746,2545, # 4704 +3264,1011,1537,4764,2779,4405,3129,1061,5647,3786,3787,1867,2897,5648,2018, 120, # 4720 +4406,4407,2063,3650,3265,2314,3919,2678,3419,1955,4765,4134,5649,3535,1047,2713, # 4736 +1266,5650,1368,4766,2858, 649,3420,3920,2546,2747,1102,2859,2679,5651,5652,2000, # 4752 +5653,1111,3651,2977,5654,2495,3921,3652,2817,1855,3421,3788,5655,5656,3422,2415, # 4768 +2898,3337,3266,3653,5657,2577,5658,3654,2818,4135,1460, 856,5659,3655,5660,2899, # 4784 +2978,5661,2900,3922,5662,4408, 632,2517, 875,3923,1697,3924,2296,5663,5664,4767, # 4800 +3028,1239, 580,4768,4409,5665, 914, 936,2075,1190,4136,1039,2124,5666,5667,5668, # 4816 +5669,3423,1473,5670,1354,4410,3925,4769,2173,3084,4137, 915,3338,4411,4412,3339, # 4832 +1605,1835,5671,2748, 398,3656,4413,3926,4138, 328,1913,2860,4139,3927,1331,4414, # 4848 +3029, 937,4415,5672,3657,4140,4141,3424,2161,4770,3425, 524, 742, 538,3085,1012, # 4864 +5673,5674,3928,2466,5675, 658,1103, 225,3929,5676,5677,4771,5678,4772,5679,3267, # 4880 +1243,5680,4142, 963,2250,4773,5681,2714,3658,3186,5682,5683,2596,2332,5684,4774, # 4896 +5685,5686,5687,3536, 957,3426,2547,2033,1931,2941,2467, 870,2019,3659,1746,2780, # 4912 +2781,2439,2468,5688,3930,5689,3789,3130,3790,3537,3427,3791,5690,1179,3086,5691, # 4928 +3187,2378,4416,3792,2548,3188,3131,2749,4143,5692,3428,1556,2549,2297, 977,2901, # 4944 +2034,4144,1205,3429,5693,1765,3430,3189,2125,1271, 714,1689,4775,3538,5694,2333, # 4960 +3931, 533,4417,3660,2184, 617,5695,2469,3340,3539,2315,5696,5697,3190,5698,5699, # 4976 +3932,1988, 618, 
427,2651,3540,3431,5700,5701,1244,1690,5702,2819,4418,4776,5703, # 4992 +3541,4777,5704,2284,1576, 473,3661,4419,3432, 972,5705,3662,5706,3087,5707,5708, # 5008 +4778,4779,5709,3793,4145,4146,5710, 153,4780, 356,5711,1892,2902,4420,2144, 408, # 5024 + 803,2357,5712,3933,5713,4421,1646,2578,2518,4781,4782,3934,5714,3935,4422,5715, # 5040 +2416,3433, 752,5716,5717,1962,3341,2979,5718, 746,3030,2470,4783,4423,3794, 698, # 5056 +4784,1893,4424,3663,2550,4785,3664,3936,5719,3191,3434,5720,1824,1302,4147,2715, # 5072 +3937,1974,4425,5721,4426,3192, 823,1303,1288,1236,2861,3542,4148,3435, 774,3938, # 5088 +5722,1581,4786,1304,2862,3939,4787,5723,2440,2162,1083,3268,4427,4149,4428, 344, # 5104 +1173, 288,2316, 454,1683,5724,5725,1461,4788,4150,2597,5726,5727,4789, 985, 894, # 5120 +5728,3436,3193,5729,1914,2942,3795,1989,5730,2111,1975,5731,4151,5732,2579,1194, # 5136 + 425,5733,4790,3194,1245,3796,4429,5734,5735,2863,5736, 636,4791,1856,3940, 760, # 5152 +1800,5737,4430,2212,1508,4792,4152,1894,1684,2298,5738,5739,4793,4431,4432,2213, # 5168 + 479,5740,5741, 832,5742,4153,2496,5743,2980,2497,3797, 990,3132, 627,1815,2652, # 5184 +4433,1582,4434,2126,2112,3543,4794,5744, 799,4435,3195,5745,4795,2113,1737,3031, # 5200 +1018, 543, 754,4436,3342,1676,4796,4797,4154,4798,1489,5746,3544,5747,2624,2903, # 5216 +4155,5748,5749,2981,5750,5751,5752,5753,3196,4799,4800,2185,1722,5754,3269,3270, # 5232 +1843,3665,1715, 481, 365,1976,1857,5755,5756,1963,2498,4801,5757,2127,3666,3271, # 5248 + 433,1895,2064,2076,5758, 602,2750,5759,5760,5761,5762,5763,3032,1628,3437,5764, # 5264 +3197,4802,4156,2904,4803,2519,5765,2551,2782,5766,5767,5768,3343,4804,2905,5769, # 5280 +4805,5770,2864,4806,4807,1221,2982,4157,2520,5771,5772,5773,1868,1990,5774,5775, # 5296 +5776,1896,5777,5778,4808,1897,4158, 318,5779,2095,4159,4437,5780,5781, 485,5782, # 5312 + 938,3941, 553,2680, 116,5783,3942,3667,5784,3545,2681,2783,3438,3344,2820,5785, # 5328 +3668,2943,4160,1747,2944,2983,5786,5787, 207,5788,4809,5789,4810,2521,5790,3033, # 5344 + 890,3669,3943,5791,1878,3798,3439,5792,2186,2358,3440,1652,5793,5794,5795, 941, # 5360 +2299, 208,3546,4161,2020, 330,4438,3944,2906,2499,3799,4439,4811,5796,5797,5798, # 5376 +) + diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/big5prober.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/big5prober.py new file mode 100644 index 0000000000000000000000000000000000000000..98f9970122088c14a5830e091ca8a12fc8e4c563 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/big5prober.py @@ -0,0 +1,47 @@ +######################## BEGIN LICENSE BLOCK ######################## +# The Original Code is Mozilla Communicator client code. +# +# The Initial Developer of the Original Code is +# Netscape Communications Corporation. +# Portions created by the Initial Developer are Copyright (C) 1998 +# the Initial Developer. All Rights Reserved. +# +# Contributor(s): +# Mark Pilgrim - port to Python +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License, or (at your option) any later version. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the GNU +# Lesser General Public License for more details. +# +# You should have received a copy of the GNU Lesser General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA +# 02110-1301 USA +######################### END LICENSE BLOCK ######################### + +from .mbcharsetprober import MultiByteCharSetProber +from .codingstatemachine import CodingStateMachine +from .chardistribution import Big5DistributionAnalysis +from .mbcssm import BIG5_SM_MODEL + + +class Big5Prober(MultiByteCharSetProber): + def __init__(self): + super(Big5Prober, self).__init__() + self.coding_sm = CodingStateMachine(BIG5_SM_MODEL) + self.distribution_analyzer = Big5DistributionAnalysis() + self.reset() + + @property + def charset_name(self): + return "Big5" + + @property + def language(self): + return "Chinese" diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/chardistribution.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/chardistribution.py new file mode 100644 index 0000000000000000000000000000000000000000..c0395f4a45aaa5c4ba1824a81d8ef8f69b46dc60 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/chardistribution.py @@ -0,0 +1,233 @@ +######################## BEGIN LICENSE BLOCK ######################## +# The Original Code is Mozilla Communicator client code. +# +# The Initial Developer of the Original Code is +# Netscape Communications Corporation. +# Portions created by the Initial Developer are Copyright (C) 1998 +# the Initial Developer. All Rights Reserved. +# +# Contributor(s): +# Mark Pilgrim - port to Python +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License, or (at your option) any later version. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# Lesser General Public License for more details. +# +# You should have received a copy of the GNU Lesser General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA +# 02110-1301 USA +######################### END LICENSE BLOCK ######################### + +from .euctwfreq import (EUCTW_CHAR_TO_FREQ_ORDER, EUCTW_TABLE_SIZE, + EUCTW_TYPICAL_DISTRIBUTION_RATIO) +from .euckrfreq import (EUCKR_CHAR_TO_FREQ_ORDER, EUCKR_TABLE_SIZE, + EUCKR_TYPICAL_DISTRIBUTION_RATIO) +from .gb2312freq import (GB2312_CHAR_TO_FREQ_ORDER, GB2312_TABLE_SIZE, + GB2312_TYPICAL_DISTRIBUTION_RATIO) +from .big5freq import (BIG5_CHAR_TO_FREQ_ORDER, BIG5_TABLE_SIZE, + BIG5_TYPICAL_DISTRIBUTION_RATIO) +from .jisfreq import (JIS_CHAR_TO_FREQ_ORDER, JIS_TABLE_SIZE, + JIS_TYPICAL_DISTRIBUTION_RATIO) + + +class CharDistributionAnalysis(object): + ENOUGH_DATA_THRESHOLD = 1024 + SURE_YES = 0.99 + SURE_NO = 0.01 + MINIMUM_DATA_THRESHOLD = 3 + + def __init__(self): + # Mapping table to get frequency order from char order (get from + # GetOrder()) + self._char_to_freq_order = None + self._table_size = None # Size of above table + # This is a constant value which varies from language to language, + # used in calculating confidence. 
See + # http://www.mozilla.org/projects/intl/UniversalCharsetDetection.html + # for further detail. + self.typical_distribution_ratio = None + self._done = None + self._total_chars = None + self._freq_chars = None + self.reset() + + def reset(self): + """reset analyser, clear any state""" + # If this flag is set to True, detection is done and a conclusion has + # been made + self._done = False + self._total_chars = 0 # Total characters encountered + # The number of characters whose frequency order is less than 512 + self._freq_chars = 0 + + def feed(self, char, char_len): + """feed a character with known length""" + if char_len == 2: + # we only care about 2-byte characters in our distribution analysis + order = self.get_order(char) + else: + order = -1 + if order >= 0: + self._total_chars += 1 + # order is valid + if order < self._table_size: + if 512 > self._char_to_freq_order[order]: + self._freq_chars += 1 + + def get_confidence(self): + """return confidence based on existing data""" + # if we didn't receive any character in our consideration range, + # return negative answer + if self._total_chars <= 0 or self._freq_chars <= self.MINIMUM_DATA_THRESHOLD: + return self.SURE_NO + + if self._total_chars != self._freq_chars: + r = (self._freq_chars / ((self._total_chars - self._freq_chars) + * self.typical_distribution_ratio)) + if r < self.SURE_YES: + return r + + # normalize confidence (we don't want to be 100% sure) + return self.SURE_YES + + def got_enough_data(self): + # It is not necessary to receive all data to draw a conclusion. + # For charset detection, a certain amount of data is enough + return self._total_chars > self.ENOUGH_DATA_THRESHOLD + + def get_order(self, byte_str): + # We do not handle characters based on the original encoding string, + # but convert this encoding string to a number, here called order. + # This allows multiple encodings of a language to share one frequency + # table. + return -1 + + +class EUCTWDistributionAnalysis(CharDistributionAnalysis): + def __init__(self): + super(EUCTWDistributionAnalysis, self).__init__() + self._char_to_freq_order = EUCTW_CHAR_TO_FREQ_ORDER + self._table_size = EUCTW_TABLE_SIZE + self.typical_distribution_ratio = EUCTW_TYPICAL_DISTRIBUTION_RATIO + + def get_order(self, byte_str): + # for euc-TW encoding, we are interested + # first byte range: 0xc4 -- 0xfe + # second byte range: 0xa1 -- 0xfe + # no validation needed here. State machine has done that + first_char = byte_str[0] + if first_char >= 0xC4: + return 94 * (first_char - 0xC4) + byte_str[1] - 0xA1 + else: + return -1 + + +class EUCKRDistributionAnalysis(CharDistributionAnalysis): + def __init__(self): + super(EUCKRDistributionAnalysis, self).__init__() + self._char_to_freq_order = EUCKR_CHAR_TO_FREQ_ORDER + self._table_size = EUCKR_TABLE_SIZE + self.typical_distribution_ratio = EUCKR_TYPICAL_DISTRIBUTION_RATIO + + def get_order(self, byte_str): + # for euc-KR encoding, we are interested + # first byte range: 0xb0 -- 0xfe + # second byte range: 0xa1 -- 0xfe + # no validation needed here.
State machine has done that + first_char = byte_str[0] + if first_char >= 0xB0: + return 94 * (first_char - 0xB0) + byte_str[1] - 0xA1 + else: + return -1 + + +class GB2312DistributionAnalysis(CharDistributionAnalysis): + def __init__(self): + super(GB2312DistributionAnalysis, self).__init__() + self._char_to_freq_order = GB2312_CHAR_TO_FREQ_ORDER + self._table_size = GB2312_TABLE_SIZE + self.typical_distribution_ratio = GB2312_TYPICAL_DISTRIBUTION_RATIO + + def get_order(self, byte_str): + # for GB2312 encoding, we are interested + # first byte range: 0xb0 -- 0xfe + # second byte range: 0xa1 -- 0xfe + # no validation needed here. State machine has done that + first_char, second_char = byte_str[0], byte_str[1] + if (first_char >= 0xB0) and (second_char >= 0xA1): + return 94 * (first_char - 0xB0) + second_char - 0xA1 + else: + return -1 + + +class Big5DistributionAnalysis(CharDistributionAnalysis): + def __init__(self): + super(Big5DistributionAnalysis, self).__init__() + self._char_to_freq_order = BIG5_CHAR_TO_FREQ_ORDER + self._table_size = BIG5_TABLE_SIZE + self.typical_distribution_ratio = BIG5_TYPICAL_DISTRIBUTION_RATIO + + def get_order(self, byte_str): + # for big5 encoding, we are interested + # first byte range: 0xa4 -- 0xfe + # second byte range: 0x40 -- 0x7e , 0xa1 -- 0xfe + # no validation needed here. State machine has done that + first_char, second_char = byte_str[0], byte_str[1] + if first_char >= 0xA4: + if second_char >= 0xA1: + return 157 * (first_char - 0xA4) + second_char - 0xA1 + 63 + else: + return 157 * (first_char - 0xA4) + second_char - 0x40 + else: + return -1 + + +class SJISDistributionAnalysis(CharDistributionAnalysis): + def __init__(self): + super(SJISDistributionAnalysis, self).__init__() + self._char_to_freq_order = JIS_CHAR_TO_FREQ_ORDER + self._table_size = JIS_TABLE_SIZE + self.typical_distribution_ratio = JIS_TYPICAL_DISTRIBUTION_RATIO + + def get_order(self, byte_str): + # for sjis encoding, we are interested + # first byte range: 0x81 -- 0x9f , 0xe0 -- 0xfe + # second byte range: 0x40 -- 0x7e, 0x81 -- 0xfe + # no validation needed here. State machine has done that + first_char, second_char = byte_str[0], byte_str[1] + if (first_char >= 0x81) and (first_char <= 0x9F): + order = 188 * (first_char - 0x81) + elif (first_char >= 0xE0) and (first_char <= 0xEF): + order = 188 * (first_char - 0xE0 + 31) + else: + return -1 + order = order + second_char - 0x40 + if second_char > 0x7F: + order = -1 + return order + + +class EUCJPDistributionAnalysis(CharDistributionAnalysis): + def __init__(self): + super(EUCJPDistributionAnalysis, self).__init__() + self._char_to_freq_order = JIS_CHAR_TO_FREQ_ORDER + self._table_size = JIS_TABLE_SIZE + self.typical_distribution_ratio = JIS_TYPICAL_DISTRIBUTION_RATIO + + def get_order(self, byte_str): + # for euc-JP encoding, we are interested + # first byte range: 0xa0 -- 0xfe + # second byte range: 0xa1 -- 0xfe + # no validation needed here.
State machine has done that + char = byte_str[0] + if char >= 0xA0: + return 94 * (char - 0xA1) + byte_str[1] - 0xa1 + else: + return -1 diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/charsetgroupprober.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/charsetgroupprober.py new file mode 100644 index 0000000000000000000000000000000000000000..8b3738efd8ea1e42d196d381d2809beae7f4c649 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/charsetgroupprober.py @@ -0,0 +1,106 @@ +######################## BEGIN LICENSE BLOCK ######################## +# The Original Code is Mozilla Communicator client code. +# +# The Initial Developer of the Original Code is +# Netscape Communications Corporation. +# Portions created by the Initial Developer are Copyright (C) 1998 +# the Initial Developer. All Rights Reserved. +# +# Contributor(s): +# Mark Pilgrim - port to Python +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License, or (at your option) any later version. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# Lesser General Public License for more details. +# +# You should have received a copy of the GNU Lesser General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA +# 02110-1301 USA +######################### END LICENSE BLOCK ######################### + +from .enums import ProbingState +from .charsetprober import CharSetProber + + +class CharSetGroupProber(CharSetProber): + def __init__(self, lang_filter=None): + super(CharSetGroupProber, self).__init__(lang_filter=lang_filter) + self._active_num = 0 + self.probers = [] + self._best_guess_prober = None + + def reset(self): + super(CharSetGroupProber, self).reset() + self._active_num = 0 + for prober in self.probers: + if prober: + prober.reset() + prober.active = True + self._active_num += 1 + self._best_guess_prober = None + + @property + def charset_name(self): + if not self._best_guess_prober: + self.get_confidence() + if not self._best_guess_prober: + return None + return self._best_guess_prober.charset_name + + @property + def language(self): + if not self._best_guess_prober: + self.get_confidence() + if not self._best_guess_prober: + return None + return self._best_guess_prober.language + + def feed(self, byte_str): + for prober in self.probers: + if not prober: + continue + if not prober.active: + continue + state = prober.feed(byte_str) + if not state: + continue + if state == ProbingState.FOUND_IT: + self._best_guess_prober = prober + return self.state + elif state == ProbingState.NOT_ME: + prober.active = False + self._active_num -= 1 + if self._active_num <= 0: + self._state = ProbingState.NOT_ME + return self.state + return self.state + + def get_confidence(self): + state = self.state + if state == ProbingState.FOUND_IT: + return 0.99 + elif state == ProbingState.NOT_ME: + return 0.01 + best_conf = 0.0 + self._best_guess_prober = None + for prober in self.probers: + if not prober: + continue + if not prober.active: + self.logger.debug('%s not active', prober.charset_name) + 
continue + conf = prober.get_confidence() + self.logger.debug('%s %s confidence = %s', prober.charset_name, prober.language, conf) + if best_conf < conf: + best_conf = conf + self._best_guess_prober = prober + if not self._best_guess_prober: + return 0.0 + return best_conf diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/charsetprober.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/charsetprober.py new file mode 100644 index 0000000000000000000000000000000000000000..eac4e5986578636ad414648e6015e8b7e9f10432 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/charsetprober.py @@ -0,0 +1,145 @@ +######################## BEGIN LICENSE BLOCK ######################## +# The Original Code is Mozilla Universal charset detector code. +# +# The Initial Developer of the Original Code is +# Netscape Communications Corporation. +# Portions created by the Initial Developer are Copyright (C) 2001 +# the Initial Developer. All Rights Reserved. +# +# Contributor(s): +# Mark Pilgrim - port to Python +# Shy Shalom - original C code +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License, or (at your option) any later version. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# Lesser General Public License for more details. +# +# You should have received a copy of the GNU Lesser General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA +# 02110-1301 USA +######################### END LICENSE BLOCK ######################### + +import logging +import re + +from .enums import ProbingState + + +class CharSetProber(object): + + SHORTCUT_THRESHOLD = 0.95 + + def __init__(self, lang_filter=None): + self._state = None + self.lang_filter = lang_filter + self.logger = logging.getLogger(__name__) + + def reset(self): + self._state = ProbingState.DETECTING + + @property + def charset_name(self): + return None + + def feed(self, buf): + pass + + @property + def state(self): + return self._state + + def get_confidence(self): + return 0.0 + + @staticmethod + def filter_high_byte_only(buf): + buf = re.sub(b'([\x00-\x7F])+', b' ', buf) + return buf + + @staticmethod + def filter_international_words(buf): + """ + We define three types of bytes: + alphabet: english alphabets [a-zA-Z] + international: international characters [\x80-\xFF] + marker: everything else [^a-zA-Z\x80-\xFF] + + The input buffer can be thought to contain a series of words delimited + by markers. This function works to filter all words that contain at + least one international character. All contiguous sequences of markers + are replaced by a single space ASCII character. + + This filter applies to all scripts which do not use English characters. + """ + filtered = bytearray() + + # This regex filters out only words that have at least one + # international character. The word may include one marker character at + # the end.
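+ # For example, with buf = b'foo\xc3\xa9, bar' the regex below matches + # only b'foo\xc3\xa9,' ('bar' contains no international bytes); the loop + # that follows keeps b'foo\xc3\xa9' and rewrites the trailing marker b',' + # to a single space, so the filtered result is b'foo\xc3\xa9 '.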
+ words = re.findall(b'[a-zA-Z]*[\x80-\xFF]+[a-zA-Z]*[^a-zA-Z\x80-\xFF]?', + buf) + + for word in words: + filtered.extend(word[:-1]) + + # If the last character in the word is a marker, replace it with a + # space as markers shouldn't affect our analysis (they are used + # similarly across all languages and may thus have similar + # frequencies). + last_char = word[-1:] + if not last_char.isalpha() and last_char < b'\x80': + last_char = b' ' + filtered.extend(last_char) + + return filtered + + @staticmethod + def filter_with_english_letters(buf): + """ + Returns a copy of ``buf`` that retains only the sequences of English + alphabet and high byte characters that are not between <> characters. + Also retains English alphabet and high byte characters immediately + before occurrences of >. + + This filter can be applied to all scripts which contain both English + characters and extended ASCII characters, but is currently only used by + ``Latin1Prober``. + """ + filtered = bytearray() + in_tag = False + prev = 0 + + for curr in range(len(buf)): + # Slice here to get bytes instead of an int with Python 3 + buf_char = buf[curr:curr + 1] + # Check if we're coming out of or entering an HTML tag + if buf_char == b'>': + in_tag = False + elif buf_char == b'<': + in_tag = True + + # If current character is not extended-ASCII and not alphabetic... + if buf_char < b'\x80' and not buf_char.isalpha(): + # ...and we're not in a tag + if curr > prev and not in_tag: + # Keep everything after last non-extended-ASCII, + # non-alphabetic character + filtered.extend(buf[prev:curr]) + # Output a space to delimit stretch we kept + filtered.extend(b' ') + prev = curr + 1 + + # If we're not in a tag... + if not in_tag: + # Keep everything after last non-extended-ASCII, non-alphabetic + # character + filtered.extend(buf[prev:]) + + return filtered diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/cli/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/cli/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..8b137891791fe96927ad78e64b0aad7bded08bdc --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/cli/__init__.py @@ -0,0 +1 @@ + diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/cli/chardetect.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/cli/chardetect.py new file mode 100644 index 0000000000000000000000000000000000000000..f0a4cc5d79f2b42486631f96558a47093dd6298e --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/cli/chardetect.py @@ -0,0 +1,85 @@ +#!/usr/bin/env python +""" +Script which takes one or more file paths and reports on their detected +encodings + +Example:: + + % chardetect somefile someotherfile + somefile: windows-1252 with confidence 0.5 + someotherfile: ascii with confidence 1.0 + +If no paths are provided, it takes its input from stdin. + +""" + +from __future__ import absolute_import, print_function, unicode_literals + +import argparse +import sys + +from chardet import __version__ +from chardet.compat import PY2 +from chardet.universaldetector import UniversalDetector + + +def description_of(lines, name='stdin'): + """ + Return a string describing the probable encoding of a file or + list of strings. + + :param lines: The lines to get the encoding of. 
+ :type lines: Iterable of bytes + :param name: Name of file or collection of lines + :type name: str + """ + u = UniversalDetector() + for line in lines: + line = bytearray(line) + u.feed(line) + # shortcut out of the loop to save reading further - particularly useful if we read a BOM. + if u.done: + break + u.close() + result = u.result + if PY2: + name = name.decode(sys.getfilesystemencoding(), 'ignore') + if result['encoding']: + return '{0}: {1} with confidence {2}'.format(name, result['encoding'], + result['confidence']) + else: + return '{0}: no result'.format(name) + + +def main(argv=None): + """ + Handles command line arguments and gets things started. + + :param argv: List of arguments, as if specified on the command-line. + If None, ``sys.argv[1:]`` is used instead. + :type argv: list of str + """ + # Get command line arguments + parser = argparse.ArgumentParser( + description="Takes one or more file paths and reports their detected \ + encodings") + parser.add_argument('input', + help='File whose encoding we would like to determine. \ + (default: stdin)', + type=argparse.FileType('rb'), nargs='*', + default=[sys.stdin if PY2 else sys.stdin.buffer]) + parser.add_argument('--version', action='version', + version='%(prog)s {0}'.format(__version__)) + args = parser.parse_args(argv) + + for f in args.input: + if f.isatty(): + print("You are running chardetect interactively. Press " + + "CTRL-D twice at the start of a blank line to signal the " + + "end of your input. If you want help, run chardetect " + + "--help\n", file=sys.stderr) + print(description_of(f, f.name)) + + +if __name__ == '__main__': + main() diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/codingstatemachine.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/codingstatemachine.py new file mode 100644 index 0000000000000000000000000000000000000000..68fba44f14366c448f13db3cf9cf1665af2e498c --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/codingstatemachine.py @@ -0,0 +1,88 @@ +######################## BEGIN LICENSE BLOCK ######################## +# The Original Code is mozilla.org code. +# +# The Initial Developer of the Original Code is +# Netscape Communications Corporation. +# Portions created by the Initial Developer are Copyright (C) 1998 +# the Initial Developer. All Rights Reserved. +# +# Contributor(s): +# Mark Pilgrim - port to Python +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License, or (at your option) any later version. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# Lesser General Public License for more details. +# +# You should have received a copy of the GNU Lesser General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA +# 02110-1301 USA +######################### END LICENSE BLOCK ######################### + +import logging + +from .enums import MachineState + + +class CodingStateMachine(object): + """ + A state machine to verify a byte sequence for a particular encoding. 
For
+ each byte the detector receives, it will feed that byte to every active
+ state machine available, one byte at a time. The state machine changes its
+ state based on its previous state and the byte it receives. There are 3
+ states in a state machine that are of interest to an auto-detector:
+
+ START state: This is the state to start with, or a legal byte sequence
+ (i.e. a valid code point) for a character has been identified.
+
+ ME state: This indicates that the state machine identified a byte sequence
+ that is specific to the charset it is designed for and that
+ there is no other possible encoding which can contain this byte
+ sequence. This will lead to an immediate positive answer for
+ the detector.
+
+ ERROR state: This indicates the state machine identified an illegal byte
+ sequence for that encoding. This will lead to an immediate
+ negative answer for this encoding. The detector will exclude this
+ encoding from consideration from here on.
+ """
+ def __init__(self, sm):
+ self._model = sm
+ self._curr_byte_pos = 0
+ self._curr_char_len = 0
+ self._curr_state = None
+ self.logger = logging.getLogger(__name__)
+ self.reset()
+
+ def reset(self):
+ self._curr_state = MachineState.START
+
+ def next_state(self, c):
+ # for each byte we get its class
+ # if it is first byte, we also get byte length
+ byte_class = self._model['class_table'][c]
+ if self._curr_state == MachineState.START:
+ self._curr_byte_pos = 0
+ self._curr_char_len = self._model['char_len_table'][byte_class]
+ # from byte's class and state_table, we get its next state
+ curr_state = (self._curr_state * self._model['class_factor']
+ + byte_class)
+ self._curr_state = self._model['state_table'][curr_state]
+ self._curr_byte_pos += 1
+ return self._curr_state
+
+ def get_current_charlen(self):
+ return self._curr_char_len
+
+ def get_coding_state_machine(self):
+ return self._model['name']
+
+ @property
+ def language(self):
+ return self._model['language']
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/compat.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/compat.py
new file mode 100644
index 0000000000000000000000000000000000000000..ddd74687c02a12ecc296ee803997a345e984bc9d
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/compat.py
@@ -0,0 +1,34 @@
+######################## BEGIN LICENSE BLOCK ########################
+# Contributor(s):
+# Dan Blanchard
+# Ian Cordasco
+#
+# This library is free software; you can redistribute it and/or
+# modify it under the terms of the GNU Lesser General Public
+# License as published by the Free Software Foundation; either
+# version 2.1 of the License, or (at your option) any later version.
+#
+# This library is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+# Lesser General Public License for more details.
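To make the three states above concrete, here is a short sketch of hand-driving a `CodingStateMachine`; it assumes the `UTF8_SM_MODEL` that chardet ships in `chardet.mbcssm`, and the loop consumes one complete UTF-8 code point:

```python
# Sketch: feeding bytes to a CodingStateMachine one at a time, assuming
# the UTF8_SM_MODEL shipped in chardet.mbcssm.
from chardet.codingstatemachine import CodingStateMachine
from chardet.enums import MachineState
from chardet.mbcssm import UTF8_SM_MODEL

sm = CodingStateMachine(UTF8_SM_MODEL)
for byte in bytearray(b'\xc3\xa9'):     # UTF-8 for "e-acute"
    state = sm.next_state(byte)
    if state == MachineState.ERROR:     # illegal sequence: rule UTF-8 out
        break
# the machine is back at START here: a complete, legal code point was read
```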
+#
+# You should have received a copy of the GNU Lesser General Public
+# License along with this library; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
+# 02110-1301 USA
+######################### END LICENSE BLOCK #########################
+
+import sys
+
+
+if sys.version_info < (3, 0):
+ PY2 = True
+ PY3 = False
+ base_str = (str, unicode)
+ text_type = unicode
+else:
+ PY2 = False
+ PY3 = True
+ base_str = (bytes, str)
+ text_type = str
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/cp949prober.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/cp949prober.py
new file mode 100644
index 0000000000000000000000000000000000000000..efd793abca4bf496001a4e46a67557e5a6f16bba
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/cp949prober.py
@@ -0,0 +1,49 @@
+######################## BEGIN LICENSE BLOCK ########################
+# The Original Code is mozilla.org code.
+#
+# The Initial Developer of the Original Code is
+# Netscape Communications Corporation.
+# Portions created by the Initial Developer are Copyright (C) 1998
+# the Initial Developer. All Rights Reserved.
+#
+# Contributor(s):
+# Mark Pilgrim - port to Python
+#
+# This library is free software; you can redistribute it and/or
+# modify it under the terms of the GNU Lesser General Public
+# License as published by the Free Software Foundation; either
+# version 2.1 of the License, or (at your option) any later version.
+#
+# This library is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+# Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public
+# License along with this library; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
+# 02110-1301 USA
+######################### END LICENSE BLOCK #########################
+
+from .chardistribution import EUCKRDistributionAnalysis
+from .codingstatemachine import CodingStateMachine
+from .mbcharsetprober import MultiByteCharSetProber
+from .mbcssm import CP949_SM_MODEL
+
+
+class CP949Prober(MultiByteCharSetProber):
+ def __init__(self):
+ super(CP949Prober, self).__init__()
+ self.coding_sm = CodingStateMachine(CP949_SM_MODEL)
+ # NOTE: CP949 is a superset of EUC-KR, so the distribution should be
+ # the same.
+ self.distribution_analyzer = EUCKRDistributionAnalysis()
+ self.reset()
+
+ @property
+ def charset_name(self):
+ return "CP949"
+
+ @property
+ def language(self):
+ return "Korean"
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/enums.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/enums.py
new file mode 100644
index 0000000000000000000000000000000000000000..04512072251c429e63ed110cdbafaf4b3cca3412
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/enums.py
@@ -0,0 +1,76 @@
+"""
+All of the Enums that are used throughout the chardet package.
+
+:author: Dan Blanchard (dan.blanchard@gmail.com)
+"""
+
+
+class InputState(object):
+ """
+ This enum represents the different states a universal detector can be in.
+ """ + PURE_ASCII = 0 + ESC_ASCII = 1 + HIGH_BYTE = 2 + + +class LanguageFilter(object): + """ + This enum represents the different language filters we can apply to a + ``UniversalDetector``. + """ + CHINESE_SIMPLIFIED = 0x01 + CHINESE_TRADITIONAL = 0x02 + JAPANESE = 0x04 + KOREAN = 0x08 + NON_CJK = 0x10 + ALL = 0x1F + CHINESE = CHINESE_SIMPLIFIED | CHINESE_TRADITIONAL + CJK = CHINESE | JAPANESE | KOREAN + + +class ProbingState(object): + """ + This enum represents the different states a prober can be in. + """ + DETECTING = 0 + FOUND_IT = 1 + NOT_ME = 2 + + +class MachineState(object): + """ + This enum represents the different states a state machine can be in. + """ + START = 0 + ERROR = 1 + ITS_ME = 2 + + +class SequenceLikelihood(object): + """ + This enum represents the likelihood of a character following the previous one. + """ + NEGATIVE = 0 + UNLIKELY = 1 + LIKELY = 2 + POSITIVE = 3 + + @classmethod + def get_num_categories(cls): + """:returns: The number of likelihood categories in the enum.""" + return 4 + + +class CharacterCategory(object): + """ + This enum represents the different categories language models for + ``SingleByteCharsetProber`` put characters into. + + Anything less than CONTROL is considered a letter. + """ + UNDEFINED = 255 + LINE_BREAK = 254 + SYMBOL = 253 + DIGIT = 252 + CONTROL = 251 diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/escprober.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/escprober.py new file mode 100644 index 0000000000000000000000000000000000000000..c70493f2b131b32378612044f30173eabbfbc3f4 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/escprober.py @@ -0,0 +1,101 @@ +######################## BEGIN LICENSE BLOCK ######################## +# The Original Code is mozilla.org code. +# +# The Initial Developer of the Original Code is +# Netscape Communications Corporation. +# Portions created by the Initial Developer are Copyright (C) 1998 +# the Initial Developer. All Rights Reserved. +# +# Contributor(s): +# Mark Pilgrim - port to Python +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License, or (at your option) any later version. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# Lesser General Public License for more details. +# +# You should have received a copy of the GNU Lesser General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA +# 02110-1301 USA +######################### END LICENSE BLOCK ######################### + +from .charsetprober import CharSetProber +from .codingstatemachine import CodingStateMachine +from .enums import LanguageFilter, ProbingState, MachineState +from .escsm import (HZ_SM_MODEL, ISO2022CN_SM_MODEL, ISO2022JP_SM_MODEL, + ISO2022KR_SM_MODEL) + + +class EscCharSetProber(CharSetProber): + """ + This CharSetProber uses a "code scheme" approach for detecting encodings, + whereby easily recognizable escape or shift sequences are relied on to + identify these encodings. 
+ """ + + def __init__(self, lang_filter=None): + super(EscCharSetProber, self).__init__(lang_filter=lang_filter) + self.coding_sm = [] + if self.lang_filter & LanguageFilter.CHINESE_SIMPLIFIED: + self.coding_sm.append(CodingStateMachine(HZ_SM_MODEL)) + self.coding_sm.append(CodingStateMachine(ISO2022CN_SM_MODEL)) + if self.lang_filter & LanguageFilter.JAPANESE: + self.coding_sm.append(CodingStateMachine(ISO2022JP_SM_MODEL)) + if self.lang_filter & LanguageFilter.KOREAN: + self.coding_sm.append(CodingStateMachine(ISO2022KR_SM_MODEL)) + self.active_sm_count = None + self._detected_charset = None + self._detected_language = None + self._state = None + self.reset() + + def reset(self): + super(EscCharSetProber, self).reset() + for coding_sm in self.coding_sm: + if not coding_sm: + continue + coding_sm.active = True + coding_sm.reset() + self.active_sm_count = len(self.coding_sm) + self._detected_charset = None + self._detected_language = None + + @property + def charset_name(self): + return self._detected_charset + + @property + def language(self): + return self._detected_language + + def get_confidence(self): + if self._detected_charset: + return 0.99 + else: + return 0.00 + + def feed(self, byte_str): + for c in byte_str: + for coding_sm in self.coding_sm: + if not coding_sm or not coding_sm.active: + continue + coding_state = coding_sm.next_state(c) + if coding_state == MachineState.ERROR: + coding_sm.active = False + self.active_sm_count -= 1 + if self.active_sm_count <= 0: + self._state = ProbingState.NOT_ME + return self.state + elif coding_state == MachineState.ITS_ME: + self._state = ProbingState.FOUND_IT + self._detected_charset = coding_sm.get_coding_state_machine() + self._detected_language = coding_sm.language + return self.state + + return self.state diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/escsm.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/escsm.py new file mode 100644 index 0000000000000000000000000000000000000000..0069523a04969dd920ea4b43a497162157729174 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/escsm.py @@ -0,0 +1,246 @@ +######################## BEGIN LICENSE BLOCK ######################## +# The Original Code is mozilla.org code. +# +# The Initial Developer of the Original Code is +# Netscape Communications Corporation. +# Portions created by the Initial Developer are Copyright (C) 1998 +# the Initial Developer. All Rights Reserved. +# +# Contributor(s): +# Mark Pilgrim - port to Python +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License, or (at your option) any later version. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# Lesser General Public License for more details. 
+# +# You should have received a copy of the GNU Lesser General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA +# 02110-1301 USA +######################### END LICENSE BLOCK ######################### + +from .enums import MachineState + +HZ_CLS = ( +1,0,0,0,0,0,0,0, # 00 - 07 +0,0,0,0,0,0,0,0, # 08 - 0f +0,0,0,0,0,0,0,0, # 10 - 17 +0,0,0,1,0,0,0,0, # 18 - 1f +0,0,0,0,0,0,0,0, # 20 - 27 +0,0,0,0,0,0,0,0, # 28 - 2f +0,0,0,0,0,0,0,0, # 30 - 37 +0,0,0,0,0,0,0,0, # 38 - 3f +0,0,0,0,0,0,0,0, # 40 - 47 +0,0,0,0,0,0,0,0, # 48 - 4f +0,0,0,0,0,0,0,0, # 50 - 57 +0,0,0,0,0,0,0,0, # 58 - 5f +0,0,0,0,0,0,0,0, # 60 - 67 +0,0,0,0,0,0,0,0, # 68 - 6f +0,0,0,0,0,0,0,0, # 70 - 77 +0,0,0,4,0,5,2,0, # 78 - 7f +1,1,1,1,1,1,1,1, # 80 - 87 +1,1,1,1,1,1,1,1, # 88 - 8f +1,1,1,1,1,1,1,1, # 90 - 97 +1,1,1,1,1,1,1,1, # 98 - 9f +1,1,1,1,1,1,1,1, # a0 - a7 +1,1,1,1,1,1,1,1, # a8 - af +1,1,1,1,1,1,1,1, # b0 - b7 +1,1,1,1,1,1,1,1, # b8 - bf +1,1,1,1,1,1,1,1, # c0 - c7 +1,1,1,1,1,1,1,1, # c8 - cf +1,1,1,1,1,1,1,1, # d0 - d7 +1,1,1,1,1,1,1,1, # d8 - df +1,1,1,1,1,1,1,1, # e0 - e7 +1,1,1,1,1,1,1,1, # e8 - ef +1,1,1,1,1,1,1,1, # f0 - f7 +1,1,1,1,1,1,1,1, # f8 - ff +) + +HZ_ST = ( +MachineState.START,MachineState.ERROR, 3,MachineState.START,MachineState.START,MachineState.START,MachineState.ERROR,MachineState.ERROR,# 00-07 +MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,# 08-0f +MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ERROR,MachineState.ERROR,MachineState.START,MachineState.START, 4,MachineState.ERROR,# 10-17 + 5,MachineState.ERROR, 6,MachineState.ERROR, 5, 5, 4,MachineState.ERROR,# 18-1f + 4,MachineState.ERROR, 4, 4, 4,MachineState.ERROR, 4,MachineState.ERROR,# 20-27 + 4,MachineState.ITS_ME,MachineState.START,MachineState.START,MachineState.START,MachineState.START,MachineState.START,MachineState.START,# 28-2f +) + +HZ_CHAR_LEN_TABLE = (0, 0, 0, 0, 0, 0) + +HZ_SM_MODEL = {'class_table': HZ_CLS, + 'class_factor': 6, + 'state_table': HZ_ST, + 'char_len_table': HZ_CHAR_LEN_TABLE, + 'name': "HZ-GB-2312", + 'language': 'Chinese'} + +ISO2022CN_CLS = ( +2,0,0,0,0,0,0,0, # 00 - 07 +0,0,0,0,0,0,0,0, # 08 - 0f +0,0,0,0,0,0,0,0, # 10 - 17 +0,0,0,1,0,0,0,0, # 18 - 1f +0,0,0,0,0,0,0,0, # 20 - 27 +0,3,0,0,0,0,0,0, # 28 - 2f +0,0,0,0,0,0,0,0, # 30 - 37 +0,0,0,0,0,0,0,0, # 38 - 3f +0,0,0,4,0,0,0,0, # 40 - 47 +0,0,0,0,0,0,0,0, # 48 - 4f +0,0,0,0,0,0,0,0, # 50 - 57 +0,0,0,0,0,0,0,0, # 58 - 5f +0,0,0,0,0,0,0,0, # 60 - 67 +0,0,0,0,0,0,0,0, # 68 - 6f +0,0,0,0,0,0,0,0, # 70 - 77 +0,0,0,0,0,0,0,0, # 78 - 7f +2,2,2,2,2,2,2,2, # 80 - 87 +2,2,2,2,2,2,2,2, # 88 - 8f +2,2,2,2,2,2,2,2, # 90 - 97 +2,2,2,2,2,2,2,2, # 98 - 9f +2,2,2,2,2,2,2,2, # a0 - a7 +2,2,2,2,2,2,2,2, # a8 - af +2,2,2,2,2,2,2,2, # b0 - b7 +2,2,2,2,2,2,2,2, # b8 - bf +2,2,2,2,2,2,2,2, # c0 - c7 +2,2,2,2,2,2,2,2, # c8 - cf +2,2,2,2,2,2,2,2, # d0 - d7 +2,2,2,2,2,2,2,2, # d8 - df +2,2,2,2,2,2,2,2, # e0 - e7 +2,2,2,2,2,2,2,2, # e8 - ef +2,2,2,2,2,2,2,2, # f0 - f7 +2,2,2,2,2,2,2,2, # f8 - ff +) + +ISO2022CN_ST = ( +MachineState.START, 3,MachineState.ERROR,MachineState.START,MachineState.START,MachineState.START,MachineState.START,MachineState.START,# 00-07 +MachineState.START,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,# 08-0f 
+MachineState.ERROR,MachineState.ERROR,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,# 10-17 +MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR, 4,MachineState.ERROR,# 18-1f +MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ITS_ME,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,# 20-27 + 5, 6,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,# 28-2f +MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ITS_ME,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,# 30-37 +MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ITS_ME,MachineState.ERROR,MachineState.START,# 38-3f +) + +ISO2022CN_CHAR_LEN_TABLE = (0, 0, 0, 0, 0, 0, 0, 0, 0) + +ISO2022CN_SM_MODEL = {'class_table': ISO2022CN_CLS, + 'class_factor': 9, + 'state_table': ISO2022CN_ST, + 'char_len_table': ISO2022CN_CHAR_LEN_TABLE, + 'name': "ISO-2022-CN", + 'language': 'Chinese'} + +ISO2022JP_CLS = ( +2,0,0,0,0,0,0,0, # 00 - 07 +0,0,0,0,0,0,2,2, # 08 - 0f +0,0,0,0,0,0,0,0, # 10 - 17 +0,0,0,1,0,0,0,0, # 18 - 1f +0,0,0,0,7,0,0,0, # 20 - 27 +3,0,0,0,0,0,0,0, # 28 - 2f +0,0,0,0,0,0,0,0, # 30 - 37 +0,0,0,0,0,0,0,0, # 38 - 3f +6,0,4,0,8,0,0,0, # 40 - 47 +0,9,5,0,0,0,0,0, # 48 - 4f +0,0,0,0,0,0,0,0, # 50 - 57 +0,0,0,0,0,0,0,0, # 58 - 5f +0,0,0,0,0,0,0,0, # 60 - 67 +0,0,0,0,0,0,0,0, # 68 - 6f +0,0,0,0,0,0,0,0, # 70 - 77 +0,0,0,0,0,0,0,0, # 78 - 7f +2,2,2,2,2,2,2,2, # 80 - 87 +2,2,2,2,2,2,2,2, # 88 - 8f +2,2,2,2,2,2,2,2, # 90 - 97 +2,2,2,2,2,2,2,2, # 98 - 9f +2,2,2,2,2,2,2,2, # a0 - a7 +2,2,2,2,2,2,2,2, # a8 - af +2,2,2,2,2,2,2,2, # b0 - b7 +2,2,2,2,2,2,2,2, # b8 - bf +2,2,2,2,2,2,2,2, # c0 - c7 +2,2,2,2,2,2,2,2, # c8 - cf +2,2,2,2,2,2,2,2, # d0 - d7 +2,2,2,2,2,2,2,2, # d8 - df +2,2,2,2,2,2,2,2, # e0 - e7 +2,2,2,2,2,2,2,2, # e8 - ef +2,2,2,2,2,2,2,2, # f0 - f7 +2,2,2,2,2,2,2,2, # f8 - ff +) + +ISO2022JP_ST = ( +MachineState.START, 3,MachineState.ERROR,MachineState.START,MachineState.START,MachineState.START,MachineState.START,MachineState.START,# 00-07 +MachineState.START,MachineState.START,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,# 08-0f +MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,# 10-17 +MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ERROR,MachineState.ERROR,# 18-1f +MachineState.ERROR, 5,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR, 4,MachineState.ERROR,MachineState.ERROR,# 20-27 +MachineState.ERROR,MachineState.ERROR,MachineState.ERROR, 6,MachineState.ITS_ME,MachineState.ERROR,MachineState.ITS_ME,MachineState.ERROR,# 28-2f +MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ITS_ME,MachineState.ITS_ME,# 30-37 +MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ITS_ME,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,# 38-3f +MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ITS_ME,MachineState.ERROR,MachineState.START,MachineState.START,# 40-47 +) + 
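The ISO2022JP tables above encode exactly the escape sequences that make this charset self-announcing. A short sketch of the end-to-end effect through the public `chardet.detect` entry point; the byte string is ISO-2022-JP for "konnichiwa" and should be reported as such:

```python
# Sketch: ISO-2022-JP is identified purely from its escape sequences
# (ESC $ B enters JIS X 0208, ESC ( B returns to ASCII).
from chardet import detect

sample = b'\x1b$B$3$s$K$A$O\x1b(B'   # ISO-2022-JP "konnichiwa"
print(detect(sample))                # expected encoding: 'ISO-2022-JP'
```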
+ISO2022JP_CHAR_LEN_TABLE = (0, 0, 0, 0, 0, 0, 0, 0, 0, 0) + +ISO2022JP_SM_MODEL = {'class_table': ISO2022JP_CLS, + 'class_factor': 10, + 'state_table': ISO2022JP_ST, + 'char_len_table': ISO2022JP_CHAR_LEN_TABLE, + 'name': "ISO-2022-JP", + 'language': 'Japanese'} + +ISO2022KR_CLS = ( +2,0,0,0,0,0,0,0, # 00 - 07 +0,0,0,0,0,0,0,0, # 08 - 0f +0,0,0,0,0,0,0,0, # 10 - 17 +0,0,0,1,0,0,0,0, # 18 - 1f +0,0,0,0,3,0,0,0, # 20 - 27 +0,4,0,0,0,0,0,0, # 28 - 2f +0,0,0,0,0,0,0,0, # 30 - 37 +0,0,0,0,0,0,0,0, # 38 - 3f +0,0,0,5,0,0,0,0, # 40 - 47 +0,0,0,0,0,0,0,0, # 48 - 4f +0,0,0,0,0,0,0,0, # 50 - 57 +0,0,0,0,0,0,0,0, # 58 - 5f +0,0,0,0,0,0,0,0, # 60 - 67 +0,0,0,0,0,0,0,0, # 68 - 6f +0,0,0,0,0,0,0,0, # 70 - 77 +0,0,0,0,0,0,0,0, # 78 - 7f +2,2,2,2,2,2,2,2, # 80 - 87 +2,2,2,2,2,2,2,2, # 88 - 8f +2,2,2,2,2,2,2,2, # 90 - 97 +2,2,2,2,2,2,2,2, # 98 - 9f +2,2,2,2,2,2,2,2, # a0 - a7 +2,2,2,2,2,2,2,2, # a8 - af +2,2,2,2,2,2,2,2, # b0 - b7 +2,2,2,2,2,2,2,2, # b8 - bf +2,2,2,2,2,2,2,2, # c0 - c7 +2,2,2,2,2,2,2,2, # c8 - cf +2,2,2,2,2,2,2,2, # d0 - d7 +2,2,2,2,2,2,2,2, # d8 - df +2,2,2,2,2,2,2,2, # e0 - e7 +2,2,2,2,2,2,2,2, # e8 - ef +2,2,2,2,2,2,2,2, # f0 - f7 +2,2,2,2,2,2,2,2, # f8 - ff +) + +ISO2022KR_ST = ( +MachineState.START, 3,MachineState.ERROR,MachineState.START,MachineState.START,MachineState.START,MachineState.ERROR,MachineState.ERROR,# 00-07 +MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,# 08-0f +MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR, 4,MachineState.ERROR,MachineState.ERROR,# 10-17 +MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR, 5,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,# 18-1f +MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ITS_ME,MachineState.START,MachineState.START,MachineState.START,MachineState.START,# 20-27 +) + +ISO2022KR_CHAR_LEN_TABLE = (0, 0, 0, 0, 0, 0) + +ISO2022KR_SM_MODEL = {'class_table': ISO2022KR_CLS, + 'class_factor': 6, + 'state_table': ISO2022KR_ST, + 'char_len_table': ISO2022KR_CHAR_LEN_TABLE, + 'name': "ISO-2022-KR", + 'language': 'Korean'} + + diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/eucjpprober.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/eucjpprober.py new file mode 100644 index 0000000000000000000000000000000000000000..20ce8f7d15bad9d48a3e0363de2286093a4a28cc --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/eucjpprober.py @@ -0,0 +1,92 @@ +######################## BEGIN LICENSE BLOCK ######################## +# The Original Code is mozilla.org code. +# +# The Initial Developer of the Original Code is +# Netscape Communications Corporation. +# Portions created by the Initial Developer are Copyright (C) 1998 +# the Initial Developer. All Rights Reserved. +# +# Contributor(s): +# Mark Pilgrim - port to Python +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License, or (at your option) any later version. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the GNU +# Lesser General Public License for more details. +# +# You should have received a copy of the GNU Lesser General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA +# 02110-1301 USA +######################### END LICENSE BLOCK ######################### + +from .enums import ProbingState, MachineState +from .mbcharsetprober import MultiByteCharSetProber +from .codingstatemachine import CodingStateMachine +from .chardistribution import EUCJPDistributionAnalysis +from .jpcntx import EUCJPContextAnalysis +from .mbcssm import EUCJP_SM_MODEL + + +class EUCJPProber(MultiByteCharSetProber): + def __init__(self): + super(EUCJPProber, self).__init__() + self.coding_sm = CodingStateMachine(EUCJP_SM_MODEL) + self.distribution_analyzer = EUCJPDistributionAnalysis() + self.context_analyzer = EUCJPContextAnalysis() + self.reset() + + def reset(self): + super(EUCJPProber, self).reset() + self.context_analyzer.reset() + + @property + def charset_name(self): + return "EUC-JP" + + @property + def language(self): + return "Japanese" + + def feed(self, byte_str): + for i in range(len(byte_str)): + # PY3K: byte_str is a byte array, so byte_str[i] is an int, not a byte + coding_state = self.coding_sm.next_state(byte_str[i]) + if coding_state == MachineState.ERROR: + self.logger.debug('%s %s prober hit error at byte %s', + self.charset_name, self.language, i) + self._state = ProbingState.NOT_ME + break + elif coding_state == MachineState.ITS_ME: + self._state = ProbingState.FOUND_IT + break + elif coding_state == MachineState.START: + char_len = self.coding_sm.get_current_charlen() + if i == 0: + self._last_char[1] = byte_str[0] + self.context_analyzer.feed(self._last_char, char_len) + self.distribution_analyzer.feed(self._last_char, char_len) + else: + self.context_analyzer.feed(byte_str[i - 1:i + 1], + char_len) + self.distribution_analyzer.feed(byte_str[i - 1:i + 1], + char_len) + + self._last_char[0] = byte_str[-1] + + if self.state == ProbingState.DETECTING: + if (self.context_analyzer.got_enough_data() and + (self.get_confidence() > self.SHORTCUT_THRESHOLD)): + self._state = ProbingState.FOUND_IT + + return self.state + + def get_confidence(self): + context_conf = self.context_analyzer.get_confidence() + distrib_conf = self.distribution_analyzer.get_confidence() + return max(context_conf, distrib_conf) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/euckrfreq.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/euckrfreq.py new file mode 100644 index 0000000000000000000000000000000000000000..b68078cb9680de4cca65b1145632a37c5e751c38 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/euckrfreq.py @@ -0,0 +1,195 @@ +######################## BEGIN LICENSE BLOCK ######################## +# The Original Code is Mozilla Communicator client code. +# +# The Initial Developer of the Original Code is +# Netscape Communications Corporation. +# Portions created by the Initial Developer are Copyright (C) 1998 +# the Initial Developer. All Rights Reserved. +# +# Contributor(s): +# Mark Pilgrim - port to Python +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License, or (at your option) any later version. 
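Because `EUCJPProber.get_confidence()` above takes the max of its context and distribution analysers, either signal alone can push it over `SHORTCUT_THRESHOLD`. A sketch of exercising the prober directly, assuming chardet's module layout:

```python
# Sketch: exercising EUCJPProber directly on known EUC-JP bytes.
from chardet.eucjpprober import EUCJPProber

prober = EUCJPProber()
prober.feed(u'こんにちは'.encode('euc-jp'))
print(prober.charset_name, prober.get_confidence())
```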
+#
+# This library is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+# Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public
+# License along with this library; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
+# 02110-1301 USA
+######################### END LICENSE BLOCK #########################
+
+# Sampling from about 20M text materials, including literature and computer technology
+
+# 128 --> 0.79
+# 256 --> 0.92
+# 512 --> 0.986
+# 1024 --> 0.99944
+# 2048 --> 0.99999
+#
+# Ideal Distribution Ratio = 0.98653 / (1-0.98653) = 73.24
+# Random Distribution Ratio = 512 / (2350-512) = 0.279
+#
+# Typical Distribution Ratio
+
+EUCKR_TYPICAL_DISTRIBUTION_RATIO = 6.0
+
+EUCKR_TABLE_SIZE = 2352
+
+# Char to FreqOrder table
+EUCKR_CHAR_TO_FREQ_ORDER = (
+  13, 130, 120,1396, 481,1719,1720, 328, 609, 212,1721, 707, 400, 299,1722, 87,
+1397,1723, 104, 536,1117,1203,1724,1267, 685,1268, 508,1725,1726,1727,1728,1398,
+1399,1729,1730,1731, 141, 621, 326,1057, 368,1732, 267, 488, 20,1733,1269,1734,
+ 945,1400,1735, 47, 904,1270,1736,1737, 773, 248,1738, 409, 313, 786, 429,1739,
+ 116, 987, 813,1401, 683, 75,1204, 145,1740,1741,1742,1743, 16, 847, 667, 622,
+ 708,1744,1745,1746, 966, 787, 304, 129,1747, 60, 820, 123, 676,1748,1749,1750,
+1751, 617,1752, 626,1753,1754,1755,1756, 653,1757,1758,1759,1760,1761,1762, 856,
+ 344,1763,1764,1765,1766, 89, 401, 418, 806, 905, 848,1767,1768,1769, 946,1205,
+ 709,1770,1118,1771, 241,1772,1773,1774,1271,1775, 569,1776, 999,1777,1778,1779,
+1780, 337, 751,1058, 28, 628, 254,1781, 177, 906, 270, 349, 891,1079,1782, 19,
+1783, 379,1784, 315,1785, 629, 754,1402, 559,1786, 636, 203,1206,1787, 710, 567,
+1788, 935, 814,1789,1790,1207, 766, 528,1791,1792,1208,1793,1794,1795,1796,1797,
+1403,1798,1799, 533,1059,1404,1405,1156,1406, 936, 884,1080,1800, 351,1801,1802,
+1803,1804,1805, 801,1806,1807,1808,1119,1809,1157, 714, 474,1407,1810, 298, 899,
+ 885,1811,1120, 802,1158,1812, 892,1813,1814,1408, 659,1815,1816,1121,1817,1818,
+1819,1820,1821,1822, 319,1823, 594, 545,1824, 815, 937,1209,1825,1826, 573,1409,
+1022,1827,1210,1828,1829,1830,1831,1832,1833, 556, 722, 807,1122,1060,1834, 697,
+1835, 900, 557, 715,1836,1410, 540,1411, 752,1159, 294, 597,1211, 976, 803, 770,
+1412,1837,1838, 39, 794,1413, 358,1839, 371, 925,1840, 453, 661, 788, 531, 723,
+ 544,1023,1081, 869, 91,1841, 392, 430, 790, 602,1414, 677,1082, 457,1415,1416,
+1842,1843, 475, 327,1024,1417, 795, 121,1844, 733, 403,1418,1845,1846,1847, 300,
+ 119, 711,1212, 627,1848,1272, 207,1849,1850, 796,1213, 382,1851, 519,1852,1083,
+ 893,1853,1854,1855, 367, 809, 487, 671,1856, 663,1857,1858, 956, 471, 306, 857,
+1859,1860,1160,1084,1861,1862,1863,1864,1865,1061,1866,1867,1868,1869,1870,1871,
+ 282, 96, 574,1872, 502,1085,1873,1214,1874, 907,1875,1876, 827, 977,1419,1420,
+1421, 268,1877,1422,1878,1879,1880, 308,1881, 2, 537,1882,1883,1215,1884,1885,
+ 127, 791,1886,1273,1423,1887, 34, 336, 404, 643,1888, 571, 654, 894, 840,1889,
+ 0, 886,1274, 122, 575, 260, 908, 938,1890,1275, 410, 316,1891,1892, 100,1893,
+1894,1123, 48,1161,1124,1025,1895, 633, 901,1276,1896,1897, 115, 816,1898, 317,
+1899, 694,1900, 909, 734,1424, 572, 866,1425, 691, 85, 524,1010, 543, 394, 841,
+1901,1902,1903,1026,1904,1905,1906,1907,1908,1909, 30, 451, 651, 988, 310,1910,
+1911,1426, 810,1216,
93,1912,1913,1277,1217,1914, 858, 759, 45, 58, 181, 610, + 269,1915,1916, 131,1062, 551, 443,1000, 821,1427, 957, 895,1086,1917,1918, 375, +1919, 359,1920, 687,1921, 822,1922, 293,1923,1924, 40, 662, 118, 692, 29, 939, + 887, 640, 482, 174,1925, 69,1162, 728,1428, 910,1926,1278,1218,1279, 386, 870, + 217, 854,1163, 823,1927,1928,1929,1930, 834,1931, 78,1932, 859,1933,1063,1934, +1935,1936,1937, 438,1164, 208, 595,1938,1939,1940,1941,1219,1125,1942, 280, 888, +1429,1430,1220,1431,1943,1944,1945,1946,1947,1280, 150, 510,1432,1948,1949,1950, +1951,1952,1953,1954,1011,1087,1955,1433,1043,1956, 881,1957, 614, 958,1064,1065, +1221,1958, 638,1001, 860, 967, 896,1434, 989, 492, 553,1281,1165,1959,1282,1002, +1283,1222,1960,1961,1962,1963, 36, 383, 228, 753, 247, 454,1964, 876, 678,1965, +1966,1284, 126, 464, 490, 835, 136, 672, 529, 940,1088,1435, 473,1967,1968, 467, + 50, 390, 227, 587, 279, 378, 598, 792, 968, 240, 151, 160, 849, 882,1126,1285, + 639,1044, 133, 140, 288, 360, 811, 563,1027, 561, 142, 523,1969,1970,1971, 7, + 103, 296, 439, 407, 506, 634, 990,1972,1973,1974,1975, 645,1976,1977,1978,1979, +1980,1981, 236,1982,1436,1983,1984,1089, 192, 828, 618, 518,1166, 333,1127,1985, + 818,1223,1986,1987,1988,1989,1990,1991,1992,1993, 342,1128,1286, 746, 842,1994, +1995, 560, 223,1287, 98, 8, 189, 650, 978,1288,1996,1437,1997, 17, 345, 250, + 423, 277, 234, 512, 226, 97, 289, 42, 167,1998, 201,1999,2000, 843, 836, 824, + 532, 338, 783,1090, 182, 576, 436,1438,1439, 527, 500,2001, 947, 889,2002,2003, +2004,2005, 262, 600, 314, 447,2006, 547,2007, 693, 738,1129,2008, 71,1440, 745, + 619, 688,2009, 829,2010,2011, 147,2012, 33, 948,2013,2014, 74, 224,2015, 61, + 191, 918, 399, 637,2016,1028,1130, 257, 902,2017,2018,2019,2020,2021,2022,2023, +2024,2025,2026, 837,2027,2028,2029,2030, 179, 874, 591, 52, 724, 246,2031,2032, +2033,2034,1167, 969,2035,1289, 630, 605, 911,1091,1168,2036,2037,2038,1441, 912, +2039, 623,2040,2041, 253,1169,1290,2042,1442, 146, 620, 611, 577, 433,2043,1224, + 719,1170, 959, 440, 437, 534, 84, 388, 480,1131, 159, 220, 198, 679,2044,1012, + 819,1066,1443, 113,1225, 194, 318,1003,1029,2045,2046,2047,2048,1067,2049,2050, +2051,2052,2053, 59, 913, 112,2054, 632,2055, 455, 144, 739,1291,2056, 273, 681, + 499,2057, 448,2058,2059, 760,2060,2061, 970, 384, 169, 245,1132,2062,2063, 414, +1444,2064,2065, 41, 235,2066, 157, 252, 877, 568, 919, 789, 580,2067, 725,2068, +2069,1292,2070,2071,1445,2072,1446,2073,2074, 55, 588, 66,1447, 271,1092,2075, +1226,2076, 960,1013, 372,2077,2078,2079,2080,2081,1293,2082,2083,2084,2085, 850, +2086,2087,2088,2089,2090, 186,2091,1068, 180,2092,2093,2094, 109,1227, 522, 606, +2095, 867,1448,1093, 991,1171, 926, 353,1133,2096, 581,2097,2098,2099,1294,1449, +1450,2100, 596,1172,1014,1228,2101,1451,1295,1173,1229,2102,2103,1296,1134,1452, + 949,1135,2104,2105,1094,1453,1454,1455,2106,1095,2107,2108,2109,2110,2111,2112, +2113,2114,2115,2116,2117, 804,2118,2119,1230,1231, 805,1456, 405,1136,2120,2121, +2122,2123,2124, 720, 701,1297, 992,1457, 927,1004,2125,2126,2127,2128,2129,2130, + 22, 417,2131, 303,2132, 385,2133, 971, 520, 513,2134,1174, 73,1096, 231, 274, + 962,1458, 673,2135,1459,2136, 152,1137,2137,2138,2139,2140,1005,1138,1460,1139, +2141,2142,2143,2144, 11, 374, 844,2145, 154,1232, 46,1461,2146, 838, 830, 721, +1233, 106,2147, 90, 428, 462, 578, 566,1175, 352,2148,2149, 538,1234, 124,1298, +2150,1462, 761, 565,2151, 686,2152, 649,2153, 72, 173,2154, 460, 415,2155,1463, +2156,1235, 305,2157,2158,2159,2160,2161,2162, 579,2163,2164,2165,2166,2167, 747, 
+2168,2169,2170,2171,1464, 669,2172,2173,2174,2175,2176,1465,2177, 23, 530, 285, +2178, 335, 729,2179, 397,2180,2181,2182,1030,2183,2184, 698,2185,2186, 325,2187, +2188, 369,2189, 799,1097,1015, 348,2190,1069, 680,2191, 851,1466,2192,2193, 10, +2194, 613, 424,2195, 979, 108, 449, 589, 27, 172, 81,1031, 80, 774, 281, 350, +1032, 525, 301, 582,1176,2196, 674,1045,2197,2198,1467, 730, 762,2199,2200,2201, +2202,1468,2203, 993,2204,2205, 266,1070, 963,1140,2206,2207,2208, 664,1098, 972, +2209,2210,2211,1177,1469,1470, 871,2212,2213,2214,2215,2216,1471,2217,2218,2219, +2220,2221,2222,2223,2224,2225,2226,2227,1472,1236,2228,2229,2230,2231,2232,2233, +2234,2235,1299,2236,2237, 200,2238, 477, 373,2239,2240, 731, 825, 777,2241,2242, +2243, 521, 486, 548,2244,2245,2246,1473,1300, 53, 549, 137, 875, 76, 158,2247, +1301,1474, 469, 396,1016, 278, 712,2248, 321, 442, 503, 767, 744, 941,1237,1178, +1475,2249, 82, 178,1141,1179, 973,2250,1302,2251, 297,2252,2253, 570,2254,2255, +2256, 18, 450, 206,2257, 290, 292,1142,2258, 511, 162, 99, 346, 164, 735,2259, +1476,1477, 4, 554, 343, 798,1099,2260,1100,2261, 43, 171,1303, 139, 215,2262, +2263, 717, 775,2264,1033, 322, 216,2265, 831,2266, 149,2267,1304,2268,2269, 702, +1238, 135, 845, 347, 309,2270, 484,2271, 878, 655, 238,1006,1478,2272, 67,2273, + 295,2274,2275, 461,2276, 478, 942, 412,2277,1034,2278,2279,2280, 265,2281, 541, +2282,2283,2284,2285,2286, 70, 852,1071,2287,2288,2289,2290, 21, 56, 509, 117, + 432,2291,2292, 331, 980, 552,1101, 148, 284, 105, 393,1180,1239, 755,2293, 187, +2294,1046,1479,2295, 340,2296, 63,1047, 230,2297,2298,1305, 763,1306, 101, 800, + 808, 494,2299,2300,2301, 903,2302, 37,1072, 14, 5,2303, 79, 675,2304, 312, +2305,2306,2307,2308,2309,1480, 6,1307,2310,2311,2312, 1, 470, 35, 24, 229, +2313, 695, 210, 86, 778, 15, 784, 592, 779, 32, 77, 855, 964,2314, 259,2315, + 501, 380,2316,2317, 83, 981, 153, 689,1308,1481,1482,1483,2318,2319, 716,1484, +2320,2321,2322,2323,2324,2325,1485,2326,2327, 128, 57, 68, 261,1048, 211, 170, +1240, 31,2328, 51, 435, 742,2329,2330,2331, 635,2332, 264, 456,2333,2334,2335, + 425,2336,1486, 143, 507, 263, 943,2337, 363, 920,1487, 256,1488,1102, 243, 601, +1489,2338,2339,2340,2341,2342,2343,2344, 861,2345,2346,2347,2348,2349,2350, 395, +2351,1490,1491, 62, 535, 166, 225,2352,2353, 668, 419,1241, 138, 604, 928,2354, +1181,2355,1492,1493,2356,2357,2358,1143,2359, 696,2360, 387, 307,1309, 682, 476, +2361,2362, 332, 12, 222, 156,2363, 232,2364, 641, 276, 656, 517,1494,1495,1035, + 416, 736,1496,2365,1017, 586,2366,2367,2368,1497,2369, 242,2370,2371,2372,1498, +2373, 965, 713,2374,2375,2376,2377, 740, 982,1499, 944,1500,1007,2378,2379,1310, +1501,2380,2381,2382, 785, 329,2383,2384,1502,2385,2386,2387, 932,2388,1503,2389, +2390,2391,2392,1242,2393,2394,2395,2396,2397, 994, 950,2398,2399,2400,2401,1504, +1311,2402,2403,2404,2405,1049, 749,2406,2407, 853, 718,1144,1312,2408,1182,1505, +2409,2410, 255, 516, 479, 564, 550, 214,1506,1507,1313, 413, 239, 444, 339,1145, +1036,1508,1509,1314,1037,1510,1315,2411,1511,2412,2413,2414, 176, 703, 497, 624, + 593, 921, 302,2415, 341, 165,1103,1512,2416,1513,2417,2418,2419, 376,2420, 700, +2421,2422,2423, 258, 768,1316,2424,1183,2425, 995, 608,2426,2427,2428,2429, 221, +2430,2431,2432,2433,2434,2435,2436,2437, 195, 323, 726, 188, 897, 983,1317, 377, + 644,1050, 879,2438, 452,2439,2440,2441,2442,2443,2444, 914,2445,2446,2447,2448, + 915, 489,2449,1514,1184,2450,2451, 515, 64, 427, 495,2452, 583,2453, 483, 485, +1038, 562, 213,1515, 748, 666,2454,2455,2456,2457, 334,2458, 780, 
996,1008, 705, +1243,2459,2460,2461,2462,2463, 114,2464, 493,1146, 366, 163,1516, 961,1104,2465, + 291,2466,1318,1105,2467,1517, 365,2468, 355, 951,1244,2469,1319,2470, 631,2471, +2472, 218,1320, 364, 320, 756,1518,1519,1321,1520,1322,2473,2474,2475,2476, 997, +2477,2478,2479,2480, 665,1185,2481, 916,1521,2482,2483,2484, 584, 684,2485,2486, + 797,2487,1051,1186,2488,2489,2490,1522,2491,2492, 370,2493,1039,1187, 65,2494, + 434, 205, 463,1188,2495, 125, 812, 391, 402, 826, 699, 286, 398, 155, 781, 771, + 585,2496, 590, 505,1073,2497, 599, 244, 219, 917,1018, 952, 646,1523,2498,1323, +2499,2500, 49, 984, 354, 741,2501, 625,2502,1324,2503,1019, 190, 357, 757, 491, + 95, 782, 868,2504,2505,2506,2507,2508,2509, 134,1524,1074, 422,1525, 898,2510, + 161,2511,2512,2513,2514, 769,2515,1526,2516,2517, 411,1325,2518, 472,1527,2519, +2520,2521,2522,2523,2524, 985,2525,2526,2527,2528,2529,2530, 764,2531,1245,2532, +2533, 25, 204, 311,2534, 496,2535,1052,2536,2537,2538,2539,2540,2541,2542, 199, + 704, 504, 468, 758, 657,1528, 196, 44, 839,1246, 272, 750,2543, 765, 862,2544, +2545,1326,2546, 132, 615, 933,2547, 732,2548,2549,2550,1189,1529,2551, 283,1247, +1053, 607, 929,2552,2553,2554, 930, 183, 872, 616,1040,1147,2555,1148,1020, 441, + 249,1075,2556,2557,2558, 466, 743,2559,2560,2561, 92, 514, 426, 420, 526,2562, +2563,2564,2565,2566,2567,2568, 185,2569,2570,2571,2572, 776,1530, 658,2573, 362, +2574, 361, 922,1076, 793,2575,2576,2577,2578,2579,2580,1531, 251,2581,2582,2583, +2584,1532, 54, 612, 237,1327,2585,2586, 275, 408, 647, 111,2587,1533,1106, 465, + 3, 458, 9, 38,2588, 107, 110, 890, 209, 26, 737, 498,2589,1534,2590, 431, + 202, 88,1535, 356, 287,1107, 660,1149,2591, 381,1536, 986,1150, 445,1248,1151, + 974,2592,2593, 846,2594, 446, 953, 184,1249,1250, 727,2595, 923, 193, 883,2596, +2597,2598, 102, 324, 539, 817,2599, 421,1041,2600, 832,2601, 94, 175, 197, 406, +2602, 459,2603,2604,2605,2606,2607, 330, 555,2608,2609,2610, 706,1108, 389,2611, +2612,2613,2614, 233,2615, 833, 558, 931, 954,1251,2616,2617,1537, 546,2618,2619, +1009,2620,2621,2622,1538, 690,1328,2623, 955,2624,1539,2625,2626, 772,2627,2628, +2629,2630,2631, 924, 648, 863, 603,2632,2633, 934,1540, 864, 865,2634, 642,1042, + 670,1190,2635,2636,2637,2638, 168,2639, 652, 873, 542,1054,1541,2640,2641,2642, # 512, 256 +) + diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/euckrprober.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/euckrprober.py new file mode 100644 index 0000000000000000000000000000000000000000..345a060d0230ada58f769b0f6b55f43a428fc3bd --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/euckrprober.py @@ -0,0 +1,47 @@ +######################## BEGIN LICENSE BLOCK ######################## +# The Original Code is mozilla.org code. +# +# The Initial Developer of the Original Code is +# Netscape Communications Corporation. +# Portions created by the Initial Developer are Copyright (C) 1998 +# the Initial Developer. All Rights Reserved. +# +# Contributor(s): +# Mark Pilgrim - port to Python +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License, or (at your option) any later version. 
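The "Ideal Distribution Ratio" quoted in the euckrfreq comments above is just p / (1 - p), where p is the share of real-world text covered by the 512 most frequent characters. A sketch of the arithmetic, with a hypothetical helper name:

```python
# Sketch of the ratio arithmetic from the euckrfreq comments:
# if the top-512 characters cover fraction p of typical text,
# the distribution ratio is p / (1 - p).
def distribution_ratio(p):
    return p / (1.0 - p)

print(round(distribution_ratio(0.98653), 2))   # -> 73.24 (EUC-KR)
print(round(distribution_ratio(0.74851), 2))   # -> 2.98  (EUC-TW)
```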
+# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# Lesser General Public License for more details. +# +# You should have received a copy of the GNU Lesser General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA +# 02110-1301 USA +######################### END LICENSE BLOCK ######################### + +from .mbcharsetprober import MultiByteCharSetProber +from .codingstatemachine import CodingStateMachine +from .chardistribution import EUCKRDistributionAnalysis +from .mbcssm import EUCKR_SM_MODEL + + +class EUCKRProber(MultiByteCharSetProber): + def __init__(self): + super(EUCKRProber, self).__init__() + self.coding_sm = CodingStateMachine(EUCKR_SM_MODEL) + self.distribution_analyzer = EUCKRDistributionAnalysis() + self.reset() + + @property + def charset_name(self): + return "EUC-KR" + + @property + def language(self): + return "Korean" diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/euctwfreq.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/euctwfreq.py new file mode 100644 index 0000000000000000000000000000000000000000..ed7a995a3aa311583efa5a47e316d4490e5f3463 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/euctwfreq.py @@ -0,0 +1,387 @@ +######################## BEGIN LICENSE BLOCK ######################## +# The Original Code is Mozilla Communicator client code. +# +# The Initial Developer of the Original Code is +# Netscape Communications Corporation. +# Portions created by the Initial Developer are Copyright (C) 1998 +# the Initial Developer. All Rights Reserved. +# +# Contributor(s): +# Mark Pilgrim - port to Python +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License, or (at your option) any later version. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# Lesser General Public License for more details. 
+#
+# You should have received a copy of the GNU Lesser General Public
+# License along with this library; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
+# 02110-1301 USA
+######################### END LICENSE BLOCK #########################
+
+# EUCTW frequency table
+# Converted from big5 work
+# by Taiwan's Mandarin Promotion Council
+#
+
+# 128 --> 0.42261
+# 256 --> 0.57851
+# 512 --> 0.74851
+# 1024 --> 0.89384
+# 2048 --> 0.97583
+#
+# Ideal Distribution Ratio = 0.74851/(1-0.74851) = 2.98
+# Random Distribution Ratio = 512/(5401-512) = 0.105
+#
+# Typical Distribution Ratio is about 25% of the ideal one, still much higher than RDR
+
+EUCTW_TYPICAL_DISTRIBUTION_RATIO = 0.75
+
+# Char to FreqOrder table
+EUCTW_TABLE_SIZE = 5376
+
+EUCTW_CHAR_TO_FREQ_ORDER = (
+   1,1800,1506, 255,1431, 198, 9, 82, 6,7310, 177, 202,3615,1256,2808, 110, # 2742
+3735, 33,3241, 261, 76, 44,2113, 16,2931,2184,1176, 659,3868, 26,3404,2643, # 2758
+1198,3869,3313,4060, 410,2211, 302, 590, 361,1963, 8, 204, 58,4296,7311,1931, # 2774
+ 63,7312,7313, 317,1614, 75, 222, 159,4061,2412,1480,7314,3500,3068, 224,2809, # 2790
+3616, 3, 10,3870,1471, 29,2774,1135,2852,1939, 873, 130,3242,1123, 312,7315, # 2806
+4297,2051, 507, 252, 682,7316, 142,1914, 124, 206,2932, 34,3501,3173, 64, 604, # 2822
+7317,2494,1976,1977, 155,1990, 645, 641,1606,7318,3405, 337, 72, 406,7319, 80, # 2838
+ 630, 238,3174,1509, 263, 939,1092,2644, 756,1440,1094,3406, 449, 69,2969, 591, # 2854
+ 179,2095, 471, 115,2034,1843, 60, 50,2970, 134, 806,1868, 734,2035,3407, 180, # 2870
+ 995,1607, 156, 537,2893, 688,7320, 319,1305, 779,2144, 514,2374, 298,4298, 359, # 2886
+2495, 90,2707,1338, 663, 11, 906,1099,2545, 20,2436, 182, 532,1716,7321, 732, # 2902
+1376,4062,1311,1420,3175, 25,2312,1056, 113, 399, 382,1949, 242,3408,2467, 529, # 2918
+3243, 475,1447,3617,7322, 117, 21, 656, 810,1297,2295,2329,3502,7323, 126,4063, # 2934
+ 706, 456, 150, 613,4299, 71,1118,2036,4064, 145,3069, 85, 835, 486,2114,1246, # 2950
+1426, 428, 727,1285,1015, 800, 106, 623, 303,1281,7324,2127,2354, 347,3736, 221, # 2966
+3503,3110,7325,1955,1153,4065, 83, 296,1199,3070, 192, 624, 93,7326, 822,1897, # 2982
+2810,3111, 795,2064, 991,1554,1542,1592, 27, 43,2853, 859, 139,1456, 860,4300, # 2998
+ 437, 712,3871, 164,2392,3112, 695, 211,3017,2096, 195,3872,1608,3504,3505,3618, # 3014
+3873, 234, 811,2971,2097,3874,2229,1441,3506,1615,2375, 668,2076,1638, 305, 228, # 3030
+1664,4301, 467, 415,7327, 262,2098,1593, 239, 108, 300, 200,1033, 512,1247,2077, # 3046
+7328,7329,2173,3176,3619,2673, 593, 845,1062,3244, 88,1723,2037,3875,1950, 212, # 3062
+ 266, 152, 149, 468,1898,4066,4302, 77, 187,7330,3018, 37, 5,2972,7331,3876, # 3078
+7332,7333, 39,2517,4303,2894,3177,2078, 55, 148, 74,4304, 545, 483,1474,1029, # 3094
+1665, 217,1869,1531,3113,1104,2645,4067, 24, 172,3507, 900,3877,3508,3509,4305, # 3110
+ 32,1408,2811,1312, 329, 487,2355,2247,2708, 784,2674, 4,3019,3314,1427,1788, # 3126
+ 188, 109, 499,7334,3620,1717,1789, 888,1217,3020,4306,7335,3510,7336,3315,1520, # 3142
+3621,3878, 196,1034, 775,7337,7338, 929,1815, 249, 439, 38,7339,1063,7340, 794, # 3158
+3879,1435,2296, 46, 178,3245,2065,7341,2376,7342, 214,1709,4307, 804, 35, 707, # 3174
+ 324,3622,1601,2546, 140, 459,4068,7343,7344,1365, 839, 272, 978,2257,2572,3409, # 3190
+2128,1363,3623,1423, 697, 100,3071, 48, 70,1231, 495,3114,2193,7345,1294,7346, # 3206
+2079, 462, 586,1042,3246, 853, 256, 988, 185,2377,3410,1698, 434,1084,7347,3411, # 3222
+ 314,2615,2775,4308,2330,2331,
569,2280, 637,1816,2518, 757,1162,1878,1616,3412, # 3238 + 287,1577,2115, 768,4309,1671,2854,3511,2519,1321,3737, 909,2413,7348,4069, 933, # 3254 +3738,7349,2052,2356,1222,4310, 765,2414,1322, 786,4311,7350,1919,1462,1677,2895, # 3270 +1699,7351,4312,1424,2437,3115,3624,2590,3316,1774,1940,3413,3880,4070, 309,1369, # 3286 +1130,2812, 364,2230,1653,1299,3881,3512,3882,3883,2646, 525,1085,3021, 902,2000, # 3302 +1475, 964,4313, 421,1844,1415,1057,2281, 940,1364,3116, 376,4314,4315,1381, 7, # 3318 +2520, 983,2378, 336,1710,2675,1845, 321,3414, 559,1131,3022,2742,1808,1132,1313, # 3334 + 265,1481,1857,7352, 352,1203,2813,3247, 167,1089, 420,2814, 776, 792,1724,3513, # 3350 +4071,2438,3248,7353,4072,7354, 446, 229, 333,2743, 901,3739,1200,1557,4316,2647, # 3366 +1920, 395,2744,2676,3740,4073,1835, 125, 916,3178,2616,4317,7355,7356,3741,7357, # 3382 +7358,7359,4318,3117,3625,1133,2547,1757,3415,1510,2313,1409,3514,7360,2145, 438, # 3398 +2591,2896,2379,3317,1068, 958,3023, 461, 311,2855,2677,4074,1915,3179,4075,1978, # 3414 + 383, 750,2745,2617,4076, 274, 539, 385,1278,1442,7361,1154,1964, 384, 561, 210, # 3430 + 98,1295,2548,3515,7362,1711,2415,1482,3416,3884,2897,1257, 129,7363,3742, 642, # 3446 + 523,2776,2777,2648,7364, 141,2231,1333, 68, 176, 441, 876, 907,4077, 603,2592, # 3462 + 710, 171,3417, 404, 549, 18,3118,2393,1410,3626,1666,7365,3516,4319,2898,4320, # 3478 +7366,2973, 368,7367, 146, 366, 99, 871,3627,1543, 748, 807,1586,1185, 22,2258, # 3494 + 379,3743,3180,7368,3181, 505,1941,2618,1991,1382,2314,7369, 380,2357, 218, 702, # 3510 +1817,1248,3418,3024,3517,3318,3249,7370,2974,3628, 930,3250,3744,7371, 59,7372, # 3526 + 585, 601,4078, 497,3419,1112,1314,4321,1801,7373,1223,1472,2174,7374, 749,1836, # 3542 + 690,1899,3745,1772,3885,1476, 429,1043,1790,2232,2116, 917,4079, 447,1086,1629, # 3558 +7375, 556,7376,7377,2020,1654, 844,1090, 105, 550, 966,1758,2815,1008,1782, 686, # 3574 +1095,7378,2282, 793,1602,7379,3518,2593,4322,4080,2933,2297,4323,3746, 980,2496, # 3590 + 544, 353, 527,4324, 908,2678,2899,7380, 381,2619,1942,1348,7381,1341,1252, 560, # 3606 +3072,7382,3420,2856,7383,2053, 973, 886,2080, 143,4325,7384,7385, 157,3886, 496, # 3622 +4081, 57, 840, 540,2038,4326,4327,3421,2117,1445, 970,2259,1748,1965,2081,4082, # 3638 +3119,1234,1775,3251,2816,3629, 773,1206,2129,1066,2039,1326,3887,1738,1725,4083, # 3654 + 279,3120, 51,1544,2594, 423,1578,2130,2066, 173,4328,1879,7386,7387,1583, 264, # 3670 + 610,3630,4329,2439, 280, 154,7388,7389,7390,1739, 338,1282,3073, 693,2857,1411, # 3686 +1074,3747,2440,7391,4330,7392,7393,1240, 952,2394,7394,2900,1538,2679, 685,1483, # 3702 +4084,2468,1436, 953,4085,2054,4331, 671,2395, 79,4086,2441,3252, 608, 567,2680, # 3718 +3422,4087,4088,1691, 393,1261,1791,2396,7395,4332,7396,7397,7398,7399,1383,1672, # 3734 +3748,3182,1464, 522,1119, 661,1150, 216, 675,4333,3888,1432,3519, 609,4334,2681, # 3750 +2397,7400,7401,7402,4089,3025, 0,7403,2469, 315, 231,2442, 301,3319,4335,2380, # 3766 +7404, 233,4090,3631,1818,4336,4337,7405, 96,1776,1315,2082,7406, 257,7407,1809, # 3782 +3632,2709,1139,1819,4091,2021,1124,2163,2778,1777,2649,7408,3074, 363,1655,3183, # 3798 +7409,2975,7410,7411,7412,3889,1567,3890, 718, 103,3184, 849,1443, 341,3320,2934, # 3814 +1484,7413,1712, 127, 67, 339,4092,2398, 679,1412, 821,7414,7415, 834, 738, 351, # 3830 +2976,2146, 846, 235,1497,1880, 418,1992,3749,2710, 186,1100,2147,2746,3520,1545, # 3846 +1355,2935,2858,1377, 583,3891,4093,2573,2977,7416,1298,3633,1078,2549,3634,2358, # 3862 + 78,3750,3751, 
267,1289,2099,2001,1594,4094, 348, 369,1274,2194,2175,1837,4338, # 3878 +1820,2817,3635,2747,2283,2002,4339,2936,2748, 144,3321, 882,4340,3892,2749,3423, # 3894 +4341,2901,7417,4095,1726, 320,7418,3893,3026, 788,2978,7419,2818,1773,1327,2859, # 3910 +3894,2819,7420,1306,4342,2003,1700,3752,3521,2359,2650, 787,2022, 506, 824,3636, # 3926 + 534, 323,4343,1044,3322,2023,1900, 946,3424,7421,1778,1500,1678,7422,1881,4344, # 3942 + 165, 243,4345,3637,2521, 123, 683,4096, 764,4346, 36,3895,1792, 589,2902, 816, # 3958 + 626,1667,3027,2233,1639,1555,1622,3753,3896,7423,3897,2860,1370,1228,1932, 891, # 3974 +2083,2903, 304,4097,7424, 292,2979,2711,3522, 691,2100,4098,1115,4347, 118, 662, # 3990 +7425, 611,1156, 854,2381,1316,2861, 2, 386, 515,2904,7426,7427,3253, 868,2234, # 4006 +1486, 855,2651, 785,2212,3028,7428,1040,3185,3523,7429,3121, 448,7430,1525,7431, # 4022 +2164,4348,7432,3754,7433,4099,2820,3524,3122, 503, 818,3898,3123,1568, 814, 676, # 4038 +1444, 306,1749,7434,3755,1416,1030, 197,1428, 805,2821,1501,4349,7435,7436,7437, # 4054 +1993,7438,4350,7439,7440,2195, 13,2779,3638,2980,3124,1229,1916,7441,3756,2131, # 4070 +7442,4100,4351,2399,3525,7443,2213,1511,1727,1120,7444,7445, 646,3757,2443, 307, # 4086 +7446,7447,1595,3186,7448,7449,7450,3639,1113,1356,3899,1465,2522,2523,7451, 519, # 4102 +7452, 128,2132, 92,2284,1979,7453,3900,1512, 342,3125,2196,7454,2780,2214,1980, # 4118 +3323,7455, 290,1656,1317, 789, 827,2360,7456,3758,4352, 562, 581,3901,7457, 401, # 4134 +4353,2248, 94,4354,1399,2781,7458,1463,2024,4355,3187,1943,7459, 828,1105,4101, # 4150 +1262,1394,7460,4102, 605,4356,7461,1783,2862,7462,2822, 819,2101, 578,2197,2937, # 4166 +7463,1502, 436,3254,4103,3255,2823,3902,2905,3425,3426,7464,2712,2315,7465,7466, # 4182 +2332,2067, 23,4357, 193, 826,3759,2102, 699,1630,4104,3075, 390,1793,1064,3526, # 4198 +7467,1579,3076,3077,1400,7468,4105,1838,1640,2863,7469,4358,4359, 137,4106, 598, # 4214 +3078,1966, 780, 104, 974,2938,7470, 278, 899, 253, 402, 572, 504, 493,1339,7471, # 4230 +3903,1275,4360,2574,2550,7472,3640,3029,3079,2249, 565,1334,2713, 863, 41,7473, # 4246 +7474,4361,7475,1657,2333, 19, 463,2750,4107, 606,7476,2981,3256,1087,2084,1323, # 4262 +2652,2982,7477,1631,1623,1750,4108,2682,7478,2864, 791,2714,2653,2334, 232,2416, # 4278 +7479,2983,1498,7480,2654,2620, 755,1366,3641,3257,3126,2025,1609, 119,1917,3427, # 4294 + 862,1026,4109,7481,3904,3760,4362,3905,4363,2260,1951,2470,7482,1125, 817,4110, # 4310 +4111,3906,1513,1766,2040,1487,4112,3030,3258,2824,3761,3127,7483,7484,1507,7485, # 4326 +2683, 733, 40,1632,1106,2865, 345,4113, 841,2524, 230,4364,2984,1846,3259,3428, # 4342 +7486,1263, 986,3429,7487, 735, 879, 254,1137, 857, 622,1300,1180,1388,1562,3907, # 4358 +3908,2939, 967,2751,2655,1349, 592,2133,1692,3324,2985,1994,4114,1679,3909,1901, # 4374 +2185,7488, 739,3642,2715,1296,1290,7489,4115,2198,2199,1921,1563,2595,2551,1870, # 4390 +2752,2986,7490, 435,7491, 343,1108, 596, 17,1751,4365,2235,3430,3643,7492,4366, # 4406 + 294,3527,2940,1693, 477, 979, 281,2041,3528, 643,2042,3644,2621,2782,2261,1031, # 4422 +2335,2134,2298,3529,4367, 367,1249,2552,7493,3530,7494,4368,1283,3325,2004, 240, # 4438 +1762,3326,4369,4370, 836,1069,3128, 474,7495,2148,2525, 268,3531,7496,3188,1521, # 4454 +1284,7497,1658,1546,4116,7498,3532,3533,7499,4117,3327,2684,1685,4118, 961,1673, # 4470 +2622, 190,2005,2200,3762,4371,4372,7500, 570,2497,3645,1490,7501,4373,2623,3260, # 4486 +1956,4374, 584,1514, 396,1045,1944,7502,4375,1967,2444,7503,7504,4376,3910, 619, # 4502 +7505,3129,3261, 
215,2006,2783,2553,3189,4377,3190,4378, 763,4119,3763,4379,7506, # 4518 +7507,1957,1767,2941,3328,3646,1174, 452,1477,4380,3329,3130,7508,2825,1253,2382, # 4534 +2186,1091,2285,4120, 492,7509, 638,1169,1824,2135,1752,3911, 648, 926,1021,1324, # 4550 +4381, 520,4382, 997, 847,1007, 892,4383,3764,2262,1871,3647,7510,2400,1784,4384, # 4566 +1952,2942,3080,3191,1728,4121,2043,3648,4385,2007,1701,3131,1551, 30,2263,4122, # 4582 +7511,2026,4386,3534,7512, 501,7513,4123, 594,3431,2165,1821,3535,3432,3536,3192, # 4598 + 829,2826,4124,7514,1680,3132,1225,4125,7515,3262,4387,4126,3133,2336,7516,4388, # 4614 +4127,7517,3912,3913,7518,1847,2383,2596,3330,7519,4389, 374,3914, 652,4128,4129, # 4630 + 375,1140, 798,7520,7521,7522,2361,4390,2264, 546,1659, 138,3031,2445,4391,7523, # 4646 +2250, 612,1848, 910, 796,3765,1740,1371, 825,3766,3767,7524,2906,2554,7525, 692, # 4662 + 444,3032,2624, 801,4392,4130,7526,1491, 244,1053,3033,4131,4132, 340,7527,3915, # 4678 +1041,2987, 293,1168, 87,1357,7528,1539, 959,7529,2236, 721, 694,4133,3768, 219, # 4694 +1478, 644,1417,3331,2656,1413,1401,1335,1389,3916,7530,7531,2988,2362,3134,1825, # 4710 + 730,1515, 184,2827, 66,4393,7532,1660,2943, 246,3332, 378,1457, 226,3433, 975, # 4726 +3917,2944,1264,3537, 674, 696,7533, 163,7534,1141,2417,2166, 713,3538,3333,4394, # 4742 +3918,7535,7536,1186, 15,7537,1079,1070,7538,1522,3193,3539, 276,1050,2716, 758, # 4758 +1126, 653,2945,3263,7539,2337, 889,3540,3919,3081,2989, 903,1250,4395,3920,3434, # 4774 +3541,1342,1681,1718, 766,3264, 286, 89,2946,3649,7540,1713,7541,2597,3334,2990, # 4790 +7542,2947,2215,3194,2866,7543,4396,2498,2526, 181, 387,1075,3921, 731,2187,3335, # 4806 +7544,3265, 310, 313,3435,2299, 770,4134, 54,3034, 189,4397,3082,3769,3922,7545, # 4822 +1230,1617,1849, 355,3542,4135,4398,3336, 111,4136,3650,1350,3135,3436,3035,4137, # 4838 +2149,3266,3543,7546,2784,3923,3924,2991, 722,2008,7547,1071, 247,1207,2338,2471, # 4854 +1378,4399,2009, 864,1437,1214,4400, 373,3770,1142,2216, 667,4401, 442,2753,2555, # 4870 +3771,3925,1968,4138,3267,1839, 837, 170,1107, 934,1336,1882,7548,7549,2118,4139, # 4886 +2828, 743,1569,7550,4402,4140, 582,2384,1418,3437,7551,1802,7552, 357,1395,1729, # 4902 +3651,3268,2418,1564,2237,7553,3083,3772,1633,4403,1114,2085,4141,1532,7554, 482, # 4918 +2446,4404,7555,7556,1492, 833,1466,7557,2717,3544,1641,2829,7558,1526,1272,3652, # 4934 +4142,1686,1794, 416,2556,1902,1953,1803,7559,3773,2785,3774,1159,2316,7560,2867, # 4950 +4405,1610,1584,3036,2419,2754, 443,3269,1163,3136,7561,7562,3926,7563,4143,2499, # 4966 +3037,4406,3927,3137,2103,1647,3545,2010,1872,4144,7564,4145, 431,3438,7565, 250, # 4982 + 97, 81,4146,7566,1648,1850,1558, 160, 848,7567, 866, 740,1694,7568,2201,2830, # 4998 +3195,4147,4407,3653,1687, 950,2472, 426, 469,3196,3654,3655,3928,7569,7570,1188, # 5014 + 424,1995, 861,3546,4148,3775,2202,2685, 168,1235,3547,4149,7571,2086,1674,4408, # 5030 +3337,3270, 220,2557,1009,7572,3776, 670,2992, 332,1208, 717,7573,7574,3548,2447, # 5046 +3929,3338,7575, 513,7576,1209,2868,3339,3138,4409,1080,7577,7578,7579,7580,2527, # 5062 +3656,3549, 815,1587,3930,3931,7581,3550,3439,3777,1254,4410,1328,3038,1390,3932, # 5078 +1741,3933,3778,3934,7582, 236,3779,2448,3271,7583,7584,3657,3780,1273,3781,4411, # 5094 +7585, 308,7586,4412, 245,4413,1851,2473,1307,2575, 430, 715,2136,2449,7587, 270, # 5110 + 199,2869,3935,7588,3551,2718,1753, 761,1754, 725,1661,1840,4414,3440,3658,7589, # 5126 +7590, 587, 14,3272, 227,2598, 326, 480,2265, 943,2755,3552, 291, 650,1883,7591, # 5142 +1702,1226, 
102,1547, 62,3441, 904,4415,3442,1164,4150,7592,7593,1224,1548,2756, # 5158 + 391, 498,1493,7594,1386,1419,7595,2055,1177,4416, 813, 880,1081,2363, 566,1145, # 5174 +4417,2286,1001,1035,2558,2599,2238, 394,1286,7596,7597,2068,7598, 86,1494,1730, # 5190 +3936, 491,1588, 745, 897,2948, 843,3340,3937,2757,2870,3273,1768, 998,2217,2069, # 5206 + 397,1826,1195,1969,3659,2993,3341, 284,7599,3782,2500,2137,2119,1903,7600,3938, # 5222 +2150,3939,4151,1036,3443,1904, 114,2559,4152, 209,1527,7601,7602,2949,2831,2625, # 5238 +2385,2719,3139, 812,2560,7603,3274,7604,1559, 737,1884,3660,1210, 885, 28,2686, # 5254 +3553,3783,7605,4153,1004,1779,4418,7606, 346,1981,2218,2687,4419,3784,1742, 797, # 5270 +1642,3940,1933,1072,1384,2151, 896,3941,3275,3661,3197,2871,3554,7607,2561,1958, # 5286 +4420,2450,1785,7608,7609,7610,3942,4154,1005,1308,3662,4155,2720,4421,4422,1528, # 5302 +2600, 161,1178,4156,1982, 987,4423,1101,4157, 631,3943,1157,3198,2420,1343,1241, # 5318 +1016,2239,2562, 372, 877,2339,2501,1160, 555,1934, 911,3944,7611, 466,1170, 169, # 5334 +1051,2907,2688,3663,2474,2994,1182,2011,2563,1251,2626,7612, 992,2340,3444,1540, # 5350 +2721,1201,2070,2401,1996,2475,7613,4424, 528,1922,2188,1503,1873,1570,2364,3342, # 5366 +3276,7614, 557,1073,7615,1827,3445,2087,2266,3140,3039,3084, 767,3085,2786,4425, # 5382 +1006,4158,4426,2341,1267,2176,3664,3199, 778,3945,3200,2722,1597,2657,7616,4427, # 5398 +7617,3446,7618,7619,7620,3277,2689,1433,3278, 131, 95,1504,3946, 723,4159,3141, # 5414 +1841,3555,2758,2189,3947,2027,2104,3665,7621,2995,3948,1218,7622,3343,3201,3949, # 5430 +4160,2576, 248,1634,3785, 912,7623,2832,3666,3040,3786, 654, 53,7624,2996,7625, # 5446 +1688,4428, 777,3447,1032,3950,1425,7626, 191, 820,2120,2833, 971,4429, 931,3202, # 5462 + 135, 664, 783,3787,1997, 772,2908,1935,3951,3788,4430,2909,3203, 282,2723, 640, # 5478 +1372,3448,1127, 922, 325,3344,7627,7628, 711,2044,7629,7630,3952,2219,2787,1936, # 5494 +3953,3345,2220,2251,3789,2300,7631,4431,3790,1258,3279,3954,3204,2138,2950,3955, # 5510 +3956,7632,2221, 258,3205,4432, 101,1227,7633,3280,1755,7634,1391,3281,7635,2910, # 5526 +2056, 893,7636,7637,7638,1402,4161,2342,7639,7640,3206,3556,7641,7642, 878,1325, # 5542 +1780,2788,4433, 259,1385,2577, 744,1183,2267,4434,7643,3957,2502,7644, 684,1024, # 5558 +4162,7645, 472,3557,3449,1165,3282,3958,3959, 322,2152, 881, 455,1695,1152,1340, # 5574 + 660, 554,2153,4435,1058,4436,4163, 830,1065,3346,3960,4437,1923,7646,1703,1918, # 5590 +7647, 932,2268, 122,7648,4438, 947, 677,7649,3791,2627, 297,1905,1924,2269,4439, # 5606 +2317,3283,7650,7651,4164,7652,4165, 84,4166, 112, 989,7653, 547,1059,3961, 701, # 5622 +3558,1019,7654,4167,7655,3450, 942, 639, 457,2301,2451, 993,2951, 407, 851, 494, # 5638 +4440,3347, 927,7656,1237,7657,2421,3348, 573,4168, 680, 921,2911,1279,1874, 285, # 5654 + 790,1448,1983, 719,2167,7658,7659,4441,3962,3963,1649,7660,1541, 563,7661,1077, # 5670 +7662,3349,3041,3451, 511,2997,3964,3965,3667,3966,1268,2564,3350,3207,4442,4443, # 5686 +7663, 535,1048,1276,1189,2912,2028,3142,1438,1373,2834,2952,1134,2012,7664,4169, # 5702 +1238,2578,3086,1259,7665, 700,7666,2953,3143,3668,4170,7667,4171,1146,1875,1906, # 5718 +4444,2601,3967, 781,2422, 132,1589, 203, 147, 273,2789,2402, 898,1786,2154,3968, # 5734 +3969,7668,3792,2790,7669,7670,4445,4446,7671,3208,7672,1635,3793, 965,7673,1804, # 5750 +2690,1516,3559,1121,1082,1329,3284,3970,1449,3794, 65,1128,2835,2913,2759,1590, # 5766 +3795,7674,7675, 12,2658, 45, 976,2579,3144,4447, 517,2528,1013,1037,3209,7676, # 5782 
+3796,2836,7677,3797,7678,3452,7679,2602, 614,1998,2318,3798,3087,2724,2628,7680, # 5798 +2580,4172, 599,1269,7681,1810,3669,7682,2691,3088, 759,1060, 489,1805,3351,3285, # 5814 +1358,7683,7684,2386,1387,1215,2629,2252, 490,7685,7686,4173,1759,2387,2343,7687, # 5830 +4448,3799,1907,3971,2630,1806,3210,4449,3453,3286,2760,2344, 874,7688,7689,3454, # 5846 +3670,1858, 91,2914,3671,3042,3800,4450,7690,3145,3972,2659,7691,3455,1202,1403, # 5862 +3801,2954,2529,1517,2503,4451,3456,2504,7692,4452,7693,2692,1885,1495,1731,3973, # 5878 +2365,4453,7694,2029,7695,7696,3974,2693,1216, 237,2581,4174,2319,3975,3802,4454, # 5894 +4455,2694,3560,3457, 445,4456,7697,7698,7699,7700,2761, 61,3976,3672,1822,3977, # 5910 +7701, 687,2045, 935, 925, 405,2660, 703,1096,1859,2725,4457,3978,1876,1367,2695, # 5926 +3352, 918,2105,1781,2476, 334,3287,1611,1093,4458, 564,3146,3458,3673,3353, 945, # 5942 +2631,2057,4459,7702,1925, 872,4175,7703,3459,2696,3089, 349,4176,3674,3979,4460, # 5958 +3803,4177,3675,2155,3980,4461,4462,4178,4463,2403,2046, 782,3981, 400, 251,4179, # 5974 +1624,7704,7705, 277,3676, 299,1265, 476,1191,3804,2121,4180,4181,1109, 205,7706, # 5990 +2582,1000,2156,3561,1860,7707,7708,7709,4464,7710,4465,2565, 107,2477,2157,3982, # 6006 +3460,3147,7711,1533, 541,1301, 158, 753,4182,2872,3562,7712,1696, 370,1088,4183, # 6022 +4466,3563, 579, 327, 440, 162,2240, 269,1937,1374,3461, 968,3043, 56,1396,3090, # 6038 +2106,3288,3354,7713,1926,2158,4467,2998,7714,3564,7715,7716,3677,4468,2478,7717, # 6054 +2791,7718,1650,4469,7719,2603,7720,7721,3983,2661,3355,1149,3356,3984,3805,3985, # 6070 +7722,1076, 49,7723, 951,3211,3289,3290, 450,2837, 920,7724,1811,2792,2366,4184, # 6086 +1908,1138,2367,3806,3462,7725,3212,4470,1909,1147,1518,2423,4471,3807,7726,4472, # 6102 +2388,2604, 260,1795,3213,7727,7728,3808,3291, 708,7729,3565,1704,7730,3566,1351, # 6118 +1618,3357,2999,1886, 944,4185,3358,4186,3044,3359,4187,7731,3678, 422, 413,1714, # 6134 +3292, 500,2058,2345,4188,2479,7732,1344,1910, 954,7733,1668,7734,7735,3986,2404, # 6150 +4189,3567,3809,4190,7736,2302,1318,2505,3091, 133,3092,2873,4473, 629, 31,2838, # 6166 +2697,3810,4474, 850, 949,4475,3987,2955,1732,2088,4191,1496,1852,7737,3988, 620, # 6182 +3214, 981,1242,3679,3360,1619,3680,1643,3293,2139,2452,1970,1719,3463,2168,7738, # 6198 +3215,7739,7740,3361,1828,7741,1277,4476,1565,2047,7742,1636,3568,3093,7743, 869, # 6214 +2839, 655,3811,3812,3094,3989,3000,3813,1310,3569,4477,7744,7745,7746,1733, 558, # 6230 +4478,3681, 335,1549,3045,1756,4192,3682,1945,3464,1829,1291,1192, 470,2726,2107, # 6246 +2793, 913,1054,3990,7747,1027,7748,3046,3991,4479, 982,2662,3362,3148,3465,3216, # 6262 +3217,1946,2794,7749, 571,4480,7750,1830,7751,3570,2583,1523,2424,7752,2089, 984, # 6278 +4481,3683,1959,7753,3684, 852, 923,2795,3466,3685, 969,1519, 999,2048,2320,1705, # 6294 +7754,3095, 615,1662, 151, 597,3992,2405,2321,1049, 275,4482,3686,4193, 568,3687, # 6310 +3571,2480,4194,3688,7755,2425,2270, 409,3218,7756,1566,2874,3467,1002, 769,2840, # 6326 + 194,2090,3149,3689,2222,3294,4195, 628,1505,7757,7758,1763,2177,3001,3993, 521, # 6342 +1161,2584,1787,2203,2406,4483,3994,1625,4196,4197, 412, 42,3096, 464,7759,2632, # 6358 +4484,3363,1760,1571,2875,3468,2530,1219,2204,3814,2633,2140,2368,4485,4486,3295, # 6374 +1651,3364,3572,7760,7761,3573,2481,3469,7762,3690,7763,7764,2271,2091, 460,7765, # 6390 +4487,7766,3002, 962, 588,3574, 289,3219,2634,1116, 52,7767,3047,1796,7768,7769, # 6406 +7770,1467,7771,1598,1143,3691,4198,1984,1734,1067,4488,1280,3365, 465,4489,1572, # 6422 + 
510,7772,1927,2241,1812,1644,3575,7773,4490,3692,7774,7775,2663,1573,1534,7776, # 6438 +7777,4199, 536,1807,1761,3470,3815,3150,2635,7778,7779,7780,4491,3471,2915,1911, # 6454 +2796,7781,3296,1122, 377,3220,7782, 360,7783,7784,4200,1529, 551,7785,2059,3693, # 6470 +1769,2426,7786,2916,4201,3297,3097,2322,2108,2030,4492,1404, 136,1468,1479, 672, # 6486 +1171,3221,2303, 271,3151,7787,2762,7788,2049, 678,2727, 865,1947,4493,7789,2013, # 6502 +3995,2956,7790,2728,2223,1397,3048,3694,4494,4495,1735,2917,3366,3576,7791,3816, # 6518 + 509,2841,2453,2876,3817,7792,7793,3152,3153,4496,4202,2531,4497,2304,1166,1010, # 6534 + 552, 681,1887,7794,7795,2957,2958,3996,1287,1596,1861,3154, 358, 453, 736, 175, # 6550 + 478,1117, 905,1167,1097,7796,1853,1530,7797,1706,7798,2178,3472,2287,3695,3473, # 6566 +3577,4203,2092,4204,7799,3367,1193,2482,4205,1458,2190,2205,1862,1888,1421,3298, # 6582 +2918,3049,2179,3474, 595,2122,7800,3997,7801,7802,4206,1707,2636, 223,3696,1359, # 6598 + 751,3098, 183,3475,7803,2797,3003, 419,2369, 633, 704,3818,2389, 241,7804,7805, # 6614 +7806, 838,3004,3697,2272,2763,2454,3819,1938,2050,3998,1309,3099,2242,1181,7807, # 6630 +1136,2206,3820,2370,1446,4207,2305,4498,7808,7809,4208,1055,2605, 484,3698,7810, # 6646 +3999, 625,4209,2273,3368,1499,4210,4000,7811,4001,4211,3222,2274,2275,3476,7812, # 6662 +7813,2764, 808,2606,3699,3369,4002,4212,3100,2532, 526,3370,3821,4213, 955,7814, # 6678 +1620,4214,2637,2427,7815,1429,3700,1669,1831, 994, 928,7816,3578,1260,7817,7818, # 6694 +7819,1948,2288, 741,2919,1626,4215,2729,2455, 867,1184, 362,3371,1392,7820,7821, # 6710 +4003,4216,1770,1736,3223,2920,4499,4500,1928,2698,1459,1158,7822,3050,3372,2877, # 6726 +1292,1929,2506,2842,3701,1985,1187,2071,2014,2607,4217,7823,2566,2507,2169,3702, # 6742 +2483,3299,7824,3703,4501,7825,7826, 666,1003,3005,1022,3579,4218,7827,4502,1813, # 6758 +2253, 574,3822,1603, 295,1535, 705,3823,4219, 283, 858, 417,7828,7829,3224,4503, # 6774 +4504,3051,1220,1889,1046,2276,2456,4004,1393,1599, 689,2567, 388,4220,7830,2484, # 6790 + 802,7831,2798,3824,2060,1405,2254,7832,4505,3825,2109,1052,1345,3225,1585,7833, # 6806 + 809,7834,7835,7836, 575,2730,3477, 956,1552,1469,1144,2323,7837,2324,1560,2457, # 6822 +3580,3226,4005, 616,2207,3155,2180,2289,7838,1832,7839,3478,4506,7840,1319,3704, # 6838 +3705,1211,3581,1023,3227,1293,2799,7841,7842,7843,3826, 607,2306,3827, 762,2878, # 6854 +1439,4221,1360,7844,1485,3052,7845,4507,1038,4222,1450,2061,2638,4223,1379,4508, # 6870 +2585,7846,7847,4224,1352,1414,2325,2921,1172,7848,7849,3828,3829,7850,1797,1451, # 6886 +7851,7852,7853,7854,2922,4006,4007,2485,2346, 411,4008,4009,3582,3300,3101,4509, # 6902 +1561,2664,1452,4010,1375,7855,7856, 47,2959, 316,7857,1406,1591,2923,3156,7858, # 6918 +1025,2141,3102,3157, 354,2731, 884,2224,4225,2407, 508,3706, 726,3583, 996,2428, # 6934 +3584, 729,7859, 392,2191,1453,4011,4510,3707,7860,7861,2458,3585,2608,1675,2800, # 6950 + 919,2347,2960,2348,1270,4511,4012, 73,7862,7863, 647,7864,3228,2843,2255,1550, # 6966 +1346,3006,7865,1332, 883,3479,7866,7867,7868,7869,3301,2765,7870,1212, 831,1347, # 6982 +4226,4512,2326,3830,1863,3053, 720,3831,4513,4514,3832,7871,4227,7872,7873,4515, # 6998 +7874,7875,1798,4516,3708,2609,4517,3586,1645,2371,7876,7877,2924, 669,2208,2665, # 7014 +2429,7878,2879,7879,7880,1028,3229,7881,4228,2408,7882,2256,1353,7883,7884,4518, # 7030 +3158, 518,7885,4013,7886,4229,1960,7887,2142,4230,7888,7889,3007,2349,2350,3833, # 7046 + 516,1833,1454,4014,2699,4231,4519,2225,2610,1971,1129,3587,7890,2766,7891,2961, # 
7062 +1422, 577,1470,3008,1524,3373,7892,7893, 432,4232,3054,3480,7894,2586,1455,2508, # 7078 +2226,1972,1175,7895,1020,2732,4015,3481,4520,7896,2733,7897,1743,1361,3055,3482, # 7094 +2639,4016,4233,4521,2290, 895, 924,4234,2170, 331,2243,3056, 166,1627,3057,1098, # 7110 +7898,1232,2880,2227,3374,4522, 657, 403,1196,2372, 542,3709,3375,1600,4235,3483, # 7126 +7899,4523,2767,3230, 576, 530,1362,7900,4524,2533,2666,3710,4017,7901, 842,3834, # 7142 +7902,2801,2031,1014,4018, 213,2700,3376, 665, 621,4236,7903,3711,2925,2430,7904, # 7158 +2431,3302,3588,3377,7905,4237,2534,4238,4525,3589,1682,4239,3484,1380,7906, 724, # 7174 +2277, 600,1670,7907,1337,1233,4526,3103,2244,7908,1621,4527,7909, 651,4240,7910, # 7190 +1612,4241,2611,7911,2844,7912,2734,2307,3058,7913, 716,2459,3059, 174,1255,2701, # 7206 +4019,3590, 548,1320,1398, 728,4020,1574,7914,1890,1197,3060,4021,7915,3061,3062, # 7222 +3712,3591,3713, 747,7916, 635,4242,4528,7917,7918,7919,4243,7920,7921,4529,7922, # 7238 +3378,4530,2432, 451,7923,3714,2535,2072,4244,2735,4245,4022,7924,1764,4531,7925, # 7254 +4246, 350,7926,2278,2390,2486,7927,4247,4023,2245,1434,4024, 488,4532, 458,4248, # 7270 +4025,3715, 771,1330,2391,3835,2568,3159,2159,2409,1553,2667,3160,4249,7928,2487, # 7286 +2881,2612,1720,2702,4250,3379,4533,7929,2536,4251,7930,3231,4252,2768,7931,2015, # 7302 +2736,7932,1155,1017,3716,3836,7933,3303,2308, 201,1864,4253,1430,7934,4026,7935, # 7318 +7936,7937,7938,7939,4254,1604,7940, 414,1865, 371,2587,4534,4535,3485,2016,3104, # 7334 +4536,1708, 960,4255, 887, 389,2171,1536,1663,1721,7941,2228,4027,2351,2926,1580, # 7350 +7942,7943,7944,1744,7945,2537,4537,4538,7946,4539,7947,2073,7948,7949,3592,3380, # 7366 +2882,4256,7950,4257,2640,3381,2802, 673,2703,2460, 709,3486,4028,3593,4258,7951, # 7382 +1148, 502, 634,7952,7953,1204,4540,3594,1575,4541,2613,3717,7954,3718,3105, 948, # 7398 +3232, 121,1745,3837,1110,7955,4259,3063,2509,3009,4029,3719,1151,1771,3838,1488, # 7414 +4030,1986,7956,2433,3487,7957,7958,2093,7959,4260,3839,1213,1407,2803, 531,2737, # 7430 +2538,3233,1011,1537,7960,2769,4261,3106,1061,7961,3720,3721,1866,2883,7962,2017, # 7446 + 120,4262,4263,2062,3595,3234,2309,3840,2668,3382,1954,4542,7963,7964,3488,1047, # 7462 +2704,1266,7965,1368,4543,2845, 649,3383,3841,2539,2738,1102,2846,2669,7966,7967, # 7478 +1999,7968,1111,3596,2962,7969,2488,3842,3597,2804,1854,3384,3722,7970,7971,3385, # 7494 +2410,2884,3304,3235,3598,7972,2569,7973,3599,2805,4031,1460, 856,7974,3600,7975, # 7510 +2885,2963,7976,2886,3843,7977,4264, 632,2510, 875,3844,1697,3845,2291,7978,7979, # 7526 +4544,3010,1239, 580,4545,4265,7980, 914, 936,2074,1190,4032,1039,2123,7981,7982, # 7542 +7983,3386,1473,7984,1354,4266,3846,7985,2172,3064,4033, 915,3305,4267,4268,3306, # 7558 +1605,1834,7986,2739, 398,3601,4269,3847,4034, 328,1912,2847,4035,3848,1331,4270, # 7574 +3011, 937,4271,7987,3602,4036,4037,3387,2160,4546,3388, 524, 742, 538,3065,1012, # 7590 +7988,7989,3849,2461,7990, 658,1103, 225,3850,7991,7992,4547,7993,4548,7994,3236, # 7606 +1243,7995,4038, 963,2246,4549,7996,2705,3603,3161,7997,7998,2588,2327,7999,4550, # 7622 +8000,8001,8002,3489,3307, 957,3389,2540,2032,1930,2927,2462, 870,2018,3604,1746, # 7638 +2770,2771,2434,2463,8003,3851,8004,3723,3107,3724,3490,3390,3725,8005,1179,3066, # 7654 +8006,3162,2373,4272,3726,2541,3163,3108,2740,4039,8007,3391,1556,2542,2292, 977, # 7670 +2887,2033,4040,1205,3392,8008,1765,3393,3164,2124,1271,1689, 714,4551,3491,8009, # 7686 +2328,3852, 533,4273,3605,2181, 
617,8010,2464,3308,3492,2310,8011,8012,3165,8013, # 7702 +8014,3853,1987, 618, 427,2641,3493,3394,8015,8016,1244,1690,8017,2806,4274,4552, # 7718 +8018,3494,8019,8020,2279,1576, 473,3606,4275,3395, 972,8021,3607,8022,3067,8023, # 7734 +8024,4553,4554,8025,3727,4041,4042,8026, 153,4555, 356,8027,1891,2888,4276,2143, # 7750 + 408, 803,2352,8028,3854,8029,4277,1646,2570,2511,4556,4557,3855,8030,3856,4278, # 7766 +8031,2411,3396, 752,8032,8033,1961,2964,8034, 746,3012,2465,8035,4279,3728, 698, # 7782 +4558,1892,4280,3608,2543,4559,3609,3857,8036,3166,3397,8037,1823,1302,4043,2706, # 7798 +3858,1973,4281,8038,4282,3167, 823,1303,1288,1236,2848,3495,4044,3398, 774,3859, # 7814 +8039,1581,4560,1304,2849,3860,4561,8040,2435,2161,1083,3237,4283,4045,4284, 344, # 7830 +1173, 288,2311, 454,1683,8041,8042,1461,4562,4046,2589,8043,8044,4563, 985, 894, # 7846 +8045,3399,3168,8046,1913,2928,3729,1988,8047,2110,1974,8048,4047,8049,2571,1194, # 7862 + 425,8050,4564,3169,1245,3730,4285,8051,8052,2850,8053, 636,4565,1855,3861, 760, # 7878 +1799,8054,4286,2209,1508,4566,4048,1893,1684,2293,8055,8056,8057,4287,4288,2210, # 7894 + 479,8058,8059, 832,8060,4049,2489,8061,2965,2490,3731, 990,3109, 627,1814,2642, # 7910 +4289,1582,4290,2125,2111,3496,4567,8062, 799,4291,3170,8063,4568,2112,1737,3013, # 7926 +1018, 543, 754,4292,3309,1676,4569,4570,4050,8064,1489,8065,3497,8066,2614,2889, # 7942 +4051,8067,8068,2966,8069,8070,8071,8072,3171,4571,4572,2182,1722,8073,3238,3239, # 7958 +1842,3610,1715, 481, 365,1975,1856,8074,8075,1962,2491,4573,8076,2126,3611,3240, # 7974 + 433,1894,2063,2075,8077, 602,2741,8078,8079,8080,8081,8082,3014,1628,3400,8083, # 7990 +3172,4574,4052,2890,4575,2512,8084,2544,2772,8085,8086,8087,3310,4576,2891,8088, # 8006 +4577,8089,2851,4578,4579,1221,2967,4053,2513,8090,8091,8092,1867,1989,8093,8094, # 8022 +8095,1895,8096,8097,4580,1896,4054, 318,8098,2094,4055,4293,8099,8100, 485,8101, # 8038 + 938,3862, 553,2670, 116,8102,3863,3612,8103,3498,2671,2773,3401,3311,2807,8104, # 8054 +3613,2929,4056,1747,2930,2968,8105,8106, 207,8107,8108,2672,4581,2514,8109,3015, # 8070 + 890,3614,3864,8110,1877,3732,3402,8111,2183,2353,3403,1652,8112,8113,8114, 941, # 8086 +2294, 208,3499,4057,2019, 330,4294,3865,2892,2492,3733,4295,8115,8116,8117,8118, # 8102 +) + diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/euctwprober.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/euctwprober.py new file mode 100644 index 0000000000000000000000000000000000000000..35669cc4dd809fa4007ee67c752f1be991135e77 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/euctwprober.py @@ -0,0 +1,46 @@ +######################## BEGIN LICENSE BLOCK ######################## +# The Original Code is mozilla.org code. +# +# The Initial Developer of the Original Code is +# Netscape Communications Corporation. +# Portions created by the Initial Developer are Copyright (C) 1998 +# the Initial Developer. All Rights Reserved. +# +# Contributor(s): +# Mark Pilgrim - port to Python +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License, or (at your option) any later version. 
+# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# Lesser General Public License for more details. +# +# You should have received a copy of the GNU Lesser General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA +# 02110-1301 USA +######################### END LICENSE BLOCK ######################### + +from .mbcharsetprober import MultiByteCharSetProber +from .codingstatemachine import CodingStateMachine +from .chardistribution import EUCTWDistributionAnalysis +from .mbcssm import EUCTW_SM_MODEL + +class EUCTWProber(MultiByteCharSetProber): + def __init__(self): + super(EUCTWProber, self).__init__() + self.coding_sm = CodingStateMachine(EUCTW_SM_MODEL) + self.distribution_analyzer = EUCTWDistributionAnalysis() + self.reset() + + @property + def charset_name(self): + return "EUC-TW" + + @property + def language(self): + return "Taiwan" diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/gb2312freq.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/gb2312freq.py new file mode 100644 index 0000000000000000000000000000000000000000..697837bd9a87a77fc468fd1f10dd470d01701362 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/gb2312freq.py @@ -0,0 +1,283 @@ +######################## BEGIN LICENSE BLOCK ######################## +# The Original Code is Mozilla Communicator client code. +# +# The Initial Developer of the Original Code is +# Netscape Communications Corporation. +# Portions created by the Initial Developer are Copyright (C) 1998 +# the Initial Developer. All Rights Reserved. +# +# Contributor(s): +# Mark Pilgrim - port to Python +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License, or (at your option) any later version. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# Lesser General Public License for more details. 
+# +# You should have received a copy of the GNU Lesser General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA +# 02110-1301 USA +######################### END LICENSE BLOCK ######################### + +# GB2312 most frequently used character table +# +# Char to FreqOrder table , from hz6763 + +# 512 --> 0.79 -- 0.79 +# 1024 --> 0.92 -- 0.13 +# 2048 --> 0.98 -- 0.06 +# 6768 --> 1.00 -- 0.02 +# +# Ideal Distribution Ratio = 0.79135/(1-0.79135) = 3.79 +# Random Distribution Ration = 512 / (3755 - 512) = 0.157 +# +# Typical Distribution Ratio about 25% of Ideal one, still much higher that RDR + +GB2312_TYPICAL_DISTRIBUTION_RATIO = 0.9 + +GB2312_TABLE_SIZE = 3760 + +GB2312_CHAR_TO_FREQ_ORDER = ( +1671, 749,1443,2364,3924,3807,2330,3921,1704,3463,2691,1511,1515, 572,3191,2205, +2361, 224,2558, 479,1711, 963,3162, 440,4060,1905,2966,2947,3580,2647,3961,3842, +2204, 869,4207, 970,2678,5626,2944,2956,1479,4048, 514,3595, 588,1346,2820,3409, + 249,4088,1746,1873,2047,1774, 581,1813, 358,1174,3590,1014,1561,4844,2245, 670, +1636,3112, 889,1286, 953, 556,2327,3060,1290,3141, 613, 185,3477,1367, 850,3820, +1715,2428,2642,2303,2732,3041,2562,2648,3566,3946,1349, 388,3098,2091,1360,3585, + 152,1687,1539, 738,1559, 59,1232,2925,2267,1388,1249,1741,1679,2960, 151,1566, +1125,1352,4271, 924,4296, 385,3166,4459, 310,1245,2850, 70,3285,2729,3534,3575, +2398,3298,3466,1960,2265, 217,3647, 864,1909,2084,4401,2773,1010,3269,5152, 853, +3051,3121,1244,4251,1895, 364,1499,1540,2313,1180,3655,2268, 562, 715,2417,3061, + 544, 336,3768,2380,1752,4075, 950, 280,2425,4382, 183,2759,3272, 333,4297,2155, +1688,2356,1444,1039,4540, 736,1177,3349,2443,2368,2144,2225, 565, 196,1482,3406, + 927,1335,4147, 692, 878,1311,1653,3911,3622,1378,4200,1840,2969,3149,2126,1816, +2534,1546,2393,2760, 737,2494, 13, 447, 245,2747, 38,2765,2129,2589,1079, 606, + 360, 471,3755,2890, 404, 848, 699,1785,1236, 370,2221,1023,3746,2074,2026,2023, +2388,1581,2119, 812,1141,3091,2536,1519, 804,2053, 406,1596,1090, 784, 548,4414, +1806,2264,2936,1100, 343,4114,5096, 622,3358, 743,3668,1510,1626,5020,3567,2513, +3195,4115,5627,2489,2991, 24,2065,2697,1087,2719, 48,1634, 315, 68, 985,2052, + 198,2239,1347,1107,1439, 597,2366,2172, 871,3307, 919,2487,2790,1867, 236,2570, +1413,3794, 906,3365,3381,1701,1982,1818,1524,2924,1205, 616,2586,2072,2004, 575, + 253,3099, 32,1365,1182, 197,1714,2454,1201, 554,3388,3224,2748, 756,2587, 250, +2567,1507,1517,3529,1922,2761,2337,3416,1961,1677,2452,2238,3153, 615, 911,1506, +1474,2495,1265,1906,2749,3756,3280,2161, 898,2714,1759,3450,2243,2444, 563, 26, +3286,2266,3769,3344,2707,3677, 611,1402, 531,1028,2871,4548,1375, 261,2948, 835, +1190,4134, 353, 840,2684,1900,3082,1435,2109,1207,1674, 329,1872,2781,4055,2686, +2104, 608,3318,2423,2957,2768,1108,3739,3512,3271,3985,2203,1771,3520,1418,2054, +1681,1153, 225,1627,2929, 162,2050,2511,3687,1954, 124,1859,2431,1684,3032,2894, + 585,4805,3969,2869,2704,2088,2032,2095,3656,2635,4362,2209, 256, 518,2042,2105, +3777,3657, 643,2298,1148,1779, 190, 989,3544, 414, 11,2135,2063,2979,1471, 403, +3678, 126, 770,1563, 671,2499,3216,2877, 600,1179, 307,2805,4937,1268,1297,2694, + 252,4032,1448,1494,1331,1394, 127,2256, 222,1647,1035,1481,3056,1915,1048, 873, +3651, 210, 33,1608,2516, 200,1520, 415, 102, 0,3389,1287, 817, 91,3299,2940, + 836,1814, 549,2197,1396,1669,2987,3582,2297,2848,4528,1070, 687, 20,1819, 121, +1552,1364,1461,1968,2617,3540,2824,2083, 177, 948,4938,2291, 
110,4549,2066, 648, +3359,1755,2110,2114,4642,4845,1693,3937,3308,1257,1869,2123, 208,1804,3159,2992, +2531,2549,3361,2418,1350,2347,2800,2568,1291,2036,2680, 72, 842,1990, 212,1233, +1154,1586, 75,2027,3410,4900,1823,1337,2710,2676, 728,2810,1522,3026,4995, 157, + 755,1050,4022, 710, 785,1936,2194,2085,1406,2777,2400, 150,1250,4049,1206, 807, +1910, 534, 529,3309,1721,1660, 274, 39,2827, 661,2670,1578, 925,3248,3815,1094, +4278,4901,4252, 41,1150,3747,2572,2227,4501,3658,4902,3813,3357,3617,2884,2258, + 887, 538,4187,3199,1294,2439,3042,2329,2343,2497,1255, 107, 543,1527, 521,3478, +3568, 194,5062, 15, 961,3870,1241,1192,2664, 66,5215,3260,2111,1295,1127,2152, +3805,4135, 901,1164,1976, 398,1278, 530,1460, 748, 904,1054,1966,1426, 53,2909, + 509, 523,2279,1534, 536,1019, 239,1685, 460,2353, 673,1065,2401,3600,4298,2272, +1272,2363, 284,1753,3679,4064,1695, 81, 815,2677,2757,2731,1386, 859, 500,4221, +2190,2566, 757,1006,2519,2068,1166,1455, 337,2654,3203,1863,1682,1914,3025,1252, +1409,1366, 847, 714,2834,2038,3209, 964,2970,1901, 885,2553,1078,1756,3049, 301, +1572,3326, 688,2130,1996,2429,1805,1648,2930,3421,2750,3652,3088, 262,1158,1254, + 389,1641,1812, 526,1719, 923,2073,1073,1902, 468, 489,4625,1140, 857,2375,3070, +3319,2863, 380, 116,1328,2693,1161,2244, 273,1212,1884,2769,3011,1775,1142, 461, +3066,1200,2147,2212, 790, 702,2695,4222,1601,1058, 434,2338,5153,3640, 67,2360, +4099,2502, 618,3472,1329, 416,1132, 830,2782,1807,2653,3211,3510,1662, 192,2124, + 296,3979,1739,1611,3684, 23, 118, 324, 446,1239,1225, 293,2520,3814,3795,2535, +3116, 17,1074, 467,2692,2201, 387,2922, 45,1326,3055,1645,3659,2817, 958, 243, +1903,2320,1339,2825,1784,3289, 356, 576, 865,2315,2381,3377,3916,1088,3122,1713, +1655, 935, 628,4689,1034,1327, 441, 800, 720, 894,1979,2183,1528,5289,2702,1071, +4046,3572,2399,1571,3281, 79, 761,1103, 327, 134, 758,1899,1371,1615, 879, 442, + 215,2605,2579, 173,2048,2485,1057,2975,3317,1097,2253,3801,4263,1403,1650,2946, + 814,4968,3487,1548,2644,1567,1285, 2, 295,2636, 97, 946,3576, 832, 141,4257, +3273, 760,3821,3521,3156,2607, 949,1024,1733,1516,1803,1920,2125,2283,2665,3180, +1501,2064,3560,2171,1592, 803,3518,1416, 732,3897,4258,1363,1362,2458, 119,1427, + 602,1525,2608,1605,1639,3175, 694,3064, 10, 465, 76,2000,4846,4208, 444,3781, +1619,3353,2206,1273,3796, 740,2483, 320,1723,2377,3660,2619,1359,1137,1762,1724, +2345,2842,1850,1862, 912, 821,1866, 612,2625,1735,2573,3369,1093, 844, 89, 937, + 930,1424,3564,2413,2972,1004,3046,3019,2011, 711,3171,1452,4178, 428, 801,1943, + 432, 445,2811, 206,4136,1472, 730, 349, 73, 397,2802,2547, 998,1637,1167, 789, + 396,3217, 154,1218, 716,1120,1780,2819,4826,1931,3334,3762,2139,1215,2627, 552, +3664,3628,3232,1405,2383,3111,1356,2652,3577,3320,3101,1703, 640,1045,1370,1246, +4996, 371,1575,2436,1621,2210, 984,4033,1734,2638, 16,4529, 663,2755,3255,1451, +3917,2257,1253,1955,2234,1263,2951, 214,1229, 617, 485, 359,1831,1969, 473,2310, + 750,2058, 165, 80,2864,2419, 361,4344,2416,2479,1134, 796,3726,1266,2943, 860, +2715, 938, 390,2734,1313,1384, 248, 202, 877,1064,2854, 522,3907, 279,1602, 297, +2357, 395,3740, 137,2075, 944,4089,2584,1267,3802, 62,1533,2285, 178, 176, 780, +2440, 201,3707, 590, 478,1560,4354,2117,1075, 30, 74,4643,4004,1635,1441,2745, + 776,2596, 238,1077,1692,1912,2844, 605, 499,1742,3947, 241,3053, 980,1749, 936, +2640,4511,2582, 515,1543,2162,5322,2892,2993, 890,2148,1924, 665,1827,3581,1032, + 968,3163, 339,1044,1896, 270, 583,1791,1720,4367,1194,3488,3669, 43,2523,1657, + 163,2167, 290,1209,1622,3378, 
550, 634,2508,2510, 695,2634,2384,2512,1476,1414, + 220,1469,2341,2138,2852,3183,2900,4939,2865,3502,1211,3680, 854,3227,1299,2976, +3172, 186,2998,1459, 443,1067,3251,1495, 321,1932,3054, 909, 753,1410,1828, 436, +2441,1119,1587,3164,2186,1258, 227, 231,1425,1890,3200,3942, 247, 959, 725,5254, +2741, 577,2158,2079, 929, 120, 174, 838,2813, 591,1115, 417,2024, 40,3240,1536, +1037, 291,4151,2354, 632,1298,2406,2500,3535,1825,1846,3451, 205,1171, 345,4238, + 18,1163, 811, 685,2208,1217, 425,1312,1508,1175,4308,2552,1033, 587,1381,3059, +2984,3482, 340,1316,4023,3972, 792,3176, 519, 777,4690, 918, 933,4130,2981,3741, + 90,3360,2911,2200,5184,4550, 609,3079,2030, 272,3379,2736, 363,3881,1130,1447, + 286, 779, 357,1169,3350,3137,1630,1220,2687,2391, 747,1277,3688,2618,2682,2601, +1156,3196,5290,4034,3102,1689,3596,3128, 874, 219,2783, 798, 508,1843,2461, 269, +1658,1776,1392,1913,2983,3287,2866,2159,2372, 829,4076, 46,4253,2873,1889,1894, + 915,1834,1631,2181,2318, 298, 664,2818,3555,2735, 954,3228,3117, 527,3511,2173, + 681,2712,3033,2247,2346,3467,1652, 155,2164,3382, 113,1994, 450, 899, 494, 994, +1237,2958,1875,2336,1926,3727, 545,1577,1550, 633,3473, 204,1305,3072,2410,1956, +2471, 707,2134, 841,2195,2196,2663,3843,1026,4940, 990,3252,4997, 368,1092, 437, +3212,3258,1933,1829, 675,2977,2893, 412, 943,3723,4644,3294,3283,2230,2373,5154, +2389,2241,2661,2323,1404,2524, 593, 787, 677,3008,1275,2059, 438,2709,2609,2240, +2269,2246,1446, 36,1568,1373,3892,1574,2301,1456,3962, 693,2276,5216,2035,1143, +2720,1919,1797,1811,2763,4137,2597,1830,1699,1488,1198,2090, 424,1694, 312,3634, +3390,4179,3335,2252,1214, 561,1059,3243,2295,2561, 975,5155,2321,2751,3772, 472, +1537,3282,3398,1047,2077,2348,2878,1323,3340,3076, 690,2906, 51, 369, 170,3541, +1060,2187,2688,3670,2541,1083,1683, 928,3918, 459, 109,4427, 599,3744,4286, 143, +2101,2730,2490, 82,1588,3036,2121, 281,1860, 477,4035,1238,2812,3020,2716,3312, +1530,2188,2055,1317, 843, 636,1808,1173,3495, 649, 181,1002, 147,3641,1159,2414, +3750,2289,2795, 813,3123,2610,1136,4368, 5,3391,4541,2174, 420, 429,1728, 754, +1228,2115,2219, 347,2223,2733, 735,1518,3003,2355,3134,1764,3948,3329,1888,2424, +1001,1234,1972,3321,3363,1672,1021,1450,1584, 226, 765, 655,2526,3404,3244,2302, +3665, 731, 594,2184, 319,1576, 621, 658,2656,4299,2099,3864,1279,2071,2598,2739, + 795,3086,3699,3908,1707,2352,2402,1382,3136,2475,1465,4847,3496,3865,1085,3004, +2591,1084, 213,2287,1963,3565,2250, 822, 793,4574,3187,1772,1789,3050, 595,1484, +1959,2770,1080,2650, 456, 422,2996, 940,3322,4328,4345,3092,2742, 965,2784, 739, +4124, 952,1358,2498,2949,2565, 332,2698,2378, 660,2260,2473,4194,3856,2919, 535, +1260,2651,1208,1428,1300,1949,1303,2942, 433,2455,2450,1251,1946, 614,1269, 641, +1306,1810,2737,3078,2912, 564,2365,1419,1415,1497,4460,2367,2185,1379,3005,1307, +3218,2175,1897,3063, 682,1157,4040,4005,1712,1160,1941,1399, 394, 402,2952,1573, +1151,2986,2404, 862, 299,2033,1489,3006, 346, 171,2886,3401,1726,2932, 168,2533, + 47,2507,1030,3735,1145,3370,1395,1318,1579,3609,4560,2857,4116,1457,2529,1965, + 504,1036,2690,2988,2405, 745,5871, 849,2397,2056,3081, 863,2359,3857,2096, 99, +1397,1769,2300,4428,1643,3455,1978,1757,3718,1440, 35,4879,3742,1296,4228,2280, + 160,5063,1599,2013, 166, 520,3479,1646,3345,3012, 490,1937,1545,1264,2182,2505, +1096,1188,1369,1436,2421,1667,2792,2460,1270,2122, 727,3167,2143, 806,1706,1012, +1800,3037, 960,2218,1882, 805, 139,2456,1139,1521, 851,1052,3093,3089, 342,2039, + 744,5097,1468,1502,1585,2087, 223, 939, 326,2140,2577, 
892,2481,1623,4077, 982, +3708, 135,2131, 87,2503,3114,2326,1106, 876,1616, 547,2997,2831,2093,3441,4530, +4314, 9,3256,4229,4148, 659,1462,1986,1710,2046,2913,2231,4090,4880,5255,3392, +3274,1368,3689,4645,1477, 705,3384,3635,1068,1529,2941,1458,3782,1509, 100,1656, +2548, 718,2339, 408,1590,2780,3548,1838,4117,3719,1345,3530, 717,3442,2778,3220, +2898,1892,4590,3614,3371,2043,1998,1224,3483, 891, 635, 584,2559,3355, 733,1766, +1729,1172,3789,1891,2307, 781,2982,2271,1957,1580,5773,2633,2005,4195,3097,1535, +3213,1189,1934,5693,3262, 586,3118,1324,1598, 517,1564,2217,1868,1893,4445,3728, +2703,3139,1526,1787,1992,3882,2875,1549,1199,1056,2224,1904,2711,5098,4287, 338, +1993,3129,3489,2689,1809,2815,1997, 957,1855,3898,2550,3275,3057,1105,1319, 627, +1505,1911,1883,3526, 698,3629,3456,1833,1431, 746, 77,1261,2017,2296,1977,1885, + 125,1334,1600, 525,1798,1109,2222,1470,1945, 559,2236,1186,3443,2476,1929,1411, +2411,3135,1777,3372,2621,1841,1613,3229, 668,1430,1839,2643,2916, 195,1989,2671, +2358,1387, 629,3205,2293,5256,4439, 123,1310, 888,1879,4300,3021,3605,1003,1162, +3192,2910,2010, 140,2395,2859, 55,1082,2012,2901, 662, 419,2081,1438, 680,2774, +4654,3912,1620,1731,1625,5035,4065,2328, 512,1344, 802,5443,2163,2311,2537, 524, +3399, 98,1155,2103,1918,2606,3925,2816,1393,2465,1504,3773,2177,3963,1478,4346, + 180,1113,4655,3461,2028,1698, 833,2696,1235,1322,1594,4408,3623,3013,3225,2040, +3022, 541,2881, 607,3632,2029,1665,1219, 639,1385,1686,1099,2803,3231,1938,3188, +2858, 427, 676,2772,1168,2025, 454,3253,2486,3556, 230,1950, 580, 791,1991,1280, +1086,1974,2034, 630, 257,3338,2788,4903,1017, 86,4790, 966,2789,1995,1696,1131, + 259,3095,4188,1308, 179,1463,5257, 289,4107,1248, 42,3413,1725,2288, 896,1947, + 774,4474,4254, 604,3430,4264, 392,2514,2588, 452, 237,1408,3018, 988,4531,1970, +3034,3310, 540,2370,1562,1288,2990, 502,4765,1147, 4,1853,2708, 207, 294,2814, +4078,2902,2509, 684, 34,3105,3532,2551, 644, 709,2801,2344, 573,1727,3573,3557, +2021,1081,3100,4315,2100,3681, 199,2263,1837,2385, 146,3484,1195,2776,3949, 997, +1939,3973,1008,1091,1202,1962,1847,1149,4209,5444,1076, 493, 117,5400,2521, 972, +1490,2934,1796,4542,2374,1512,2933,2657, 413,2888,1135,2762,2314,2156,1355,2369, + 766,2007,2527,2170,3124,2491,2593,2632,4757,2437, 234,3125,3591,1898,1750,1376, +1942,3468,3138, 570,2127,2145,3276,4131, 962, 132,1445,4196, 19, 941,3624,3480, +3366,1973,1374,4461,3431,2629, 283,2415,2275, 808,2887,3620,2112,2563,1353,3610, + 955,1089,3103,1053, 96, 88,4097, 823,3808,1583, 399, 292,4091,3313, 421,1128, + 642,4006, 903,2539,1877,2082, 596, 29,4066,1790, 722,2157, 130, 995,1569, 769, +1485, 464, 513,2213, 288,1923,1101,2453,4316, 133, 486,2445, 50, 625, 487,2207, + 57, 423, 481,2962, 159,3729,1558, 491, 303, 482, 501, 240,2837, 112,3648,2392, +1783, 362, 8,3433,3422, 610,2793,3277,1390,1284,1654, 21,3823, 734, 367, 623, + 193, 287, 374,1009,1483, 816, 476, 313,2255,2340,1262,2150,2899,1146,2581, 782, +2116,1659,2018,1880, 255,3586,3314,1110,2867,2137,2564, 986,2767,5185,2006, 650, + 158, 926, 762, 881,3157,2717,2362,3587, 306,3690,3245,1542,3077,2427,1691,2478, +2118,2985,3490,2438, 539,2305, 983, 129,1754, 355,4201,2386, 827,2923, 104,1773, +2838,2771, 411,2905,3919, 376, 767, 122,1114, 828,2422,1817,3506, 266,3460,1007, +1609,4998, 945,2612,4429,2274, 726,1247,1964,2914,2199,2070,4002,4108, 657,3323, +1422, 579, 455,2764,4737,1222,2895,1670, 824,1223,1487,2525, 558, 861,3080, 598, +2659,2515,1967, 752,2583,2376,2214,4180, 977, 704,2464,4999,2622,4109,1210,2961, + 819,1541, 142,2284, 
44, 418, 457,1126,3730,4347,4626,1644,1876,3671,1864, 302, +1063,5694, 624, 723,1984,3745,1314,1676,2488,1610,1449,3558,3569,2166,2098, 409, +1011,2325,3704,2306, 818,1732,1383,1824,1844,3757, 999,2705,3497,1216,1423,2683, +2426,2954,2501,2726,2229,1475,2554,5064,1971,1794,1666,2014,1343, 783, 724, 191, +2434,1354,2220,5065,1763,2752,2472,4152, 131, 175,2885,3434, 92,1466,4920,2616, +3871,3872,3866, 128,1551,1632, 669,1854,3682,4691,4125,1230, 188,2973,3290,1302, +1213, 560,3266, 917, 763,3909,3249,1760, 868,1958, 764,1782,2097, 145,2277,3774, +4462, 64,1491,3062, 971,2132,3606,2442, 221,1226,1617, 218, 323,1185,3207,3147, + 571, 619,1473,1005,1744,2281, 449,1887,2396,3685, 275, 375,3816,1743,3844,3731, + 845,1983,2350,4210,1377, 773, 967,3499,3052,3743,2725,4007,1697,1022,3943,1464, +3264,2855,2722,1952,1029,2839,2467, 84,4383,2215, 820,1391,2015,2448,3672, 377, +1948,2168, 797,2545,3536,2578,2645, 94,2874,1678, 405,1259,3071, 771, 546,1315, + 470,1243,3083, 895,2468, 981, 969,2037, 846,4181, 653,1276,2928, 14,2594, 557, +3007,2474, 156, 902,1338,1740,2574, 537,2518, 973,2282,2216,2433,1928, 138,2903, +1293,2631,1612, 646,3457, 839,2935, 111, 496,2191,2847, 589,3186, 149,3994,2060, +4031,2641,4067,3145,1870, 37,3597,2136,1025,2051,3009,3383,3549,1121,1016,3261, +1301, 251,2446,2599,2153, 872,3246, 637, 334,3705, 831, 884, 921,3065,3140,4092, +2198,1944, 246,2964, 108,2045,1152,1921,2308,1031, 203,3173,4170,1907,3890, 810, +1401,2003,1690, 506, 647,1242,2828,1761,1649,3208,2249,1589,3709,2931,5156,1708, + 498, 666,2613, 834,3817,1231, 184,2851,1124, 883,3197,2261,3710,1765,1553,2658, +1178,2639,2351, 93,1193, 942,2538,2141,4402, 235,1821, 870,1591,2192,1709,1871, +3341,1618,4126,2595,2334, 603, 651, 69, 701, 268,2662,3411,2555,1380,1606, 503, + 448, 254,2371,2646, 574,1187,2309,1770, 322,2235,1292,1801, 305, 566,1133, 229, +2067,2057, 706, 167, 483,2002,2672,3295,1820,3561,3067, 316, 378,2746,3452,1112, + 136,1981, 507,1651,2917,1117, 285,4591, 182,2580,3522,1304, 335,3303,1835,2504, +1795,1792,2248, 674,1018,2106,2449,1857,2292,2845, 976,3047,1781,2600,2727,1389, +1281, 52,3152, 153, 265,3950, 672,3485,3951,4463, 430,1183, 365, 278,2169, 27, +1407,1336,2304, 209,1340,1730,2202,1852,2403,2883, 979,1737,1062, 631,2829,2542, +3876,2592, 825,2086,2226,3048,3625, 352,1417,3724, 542, 991, 431,1351,3938,1861, +2294, 826,1361,2927,3142,3503,1738, 463,2462,2723, 582,1916,1595,2808, 400,3845, +3891,2868,3621,2254, 58,2492,1123, 910,2160,2614,1372,1603,1196,1072,3385,1700, +3267,1980, 696, 480,2430, 920, 799,1570,2920,1951,2041,4047,2540,1321,4223,2469, +3562,2228,1271,2602, 401,2833,3351,2575,5157, 907,2312,1256, 410, 263,3507,1582, + 996, 678,1849,2316,1480, 908,3545,2237, 703,2322, 667,1826,2849,1531,2604,2999, +2407,3146,2151,2630,1786,3711, 469,3542, 497,3899,2409, 858, 837,4446,3393,1274, + 786, 620,1845,2001,3311, 484, 308,3367,1204,1815,3691,2332,1532,2557,1842,2020, +2724,1927,2333,4440, 567, 22,1673,2728,4475,1987,1858,1144,1597, 101,1832,3601, + 12, 974,3783,4391, 951,1412, 1,3720, 453,4608,4041, 528,1041,1027,3230,2628, +1129, 875,1051,3291,1203,2262,1069,2860,2799,2149,2615,3278, 144,1758,3040, 31, + 475,1680, 366,2685,3184, 311,1642,4008,2466,5036,1593,1493,2809, 216,1420,1668, + 233, 304,2128,3284, 232,1429,1768,1040,2008,3407,2740,2967,2543, 242,2133, 778, +1565,2022,2620, 505,2189,2756,1098,2273, 372,1614, 708, 553,2846,2094,2278, 169, +3626,2835,4161, 228,2674,3165, 809,1454,1309, 466,1705,1095, 900,3423, 880,2667, +3751,5258,2317,3109,2571,4317,2766,1503,1342, 866,4447,1118, 
63,2076, 314,1881, +1348,1061, 172, 978,3515,1747, 532, 511,3970, 6, 601, 905,2699,3300,1751, 276, +1467,3725,2668, 65,4239,2544,2779,2556,1604, 578,2451,1802, 992,2331,2624,1320, +3446, 713,1513,1013, 103,2786,2447,1661, 886,1702, 916, 654,3574,2031,1556, 751, +2178,2821,2179,1498,1538,2176, 271, 914,2251,2080,1325, 638,1953,2937,3877,2432, +2754, 95,3265,1716, 260,1227,4083, 775, 106,1357,3254, 426,1607, 555,2480, 772, +1985, 244,2546, 474, 495,1046,2611,1851,2061, 71,2089,1675,2590, 742,3758,2843, +3222,1433, 267,2180,2576,2826,2233,2092,3913,2435, 956,1745,3075, 856,2113,1116, + 451, 3,1988,2896,1398, 993,2463,1878,2049,1341,2718,2721,2870,2108, 712,2904, +4363,2753,2324, 277,2872,2349,2649, 384, 987, 435, 691,3000, 922, 164,3939, 652, +1500,1184,4153,2482,3373,2165,4848,2335,3775,3508,3154,2806,2830,1554,2102,1664, +2530,1434,2408, 893,1547,2623,3447,2832,2242,2532,3169,2856,3223,2078, 49,3770, +3469, 462, 318, 656,2259,3250,3069, 679,1629,2758, 344,1138,1104,3120,1836,1283, +3115,2154,1437,4448, 934, 759,1999, 794,2862,1038, 533,2560,1722,2342, 855,2626, +1197,1663,4476,3127, 85,4240,2528, 25,1111,1181,3673, 407,3470,4561,2679,2713, + 768,1925,2841,3986,1544,1165, 932, 373,1240,2146,1930,2673, 721,4766, 354,4333, + 391,2963, 187, 61,3364,1442,1102, 330,1940,1767, 341,3809,4118, 393,2496,2062, +2211, 105, 331, 300, 439, 913,1332, 626, 379,3304,1557, 328, 689,3952, 309,1555, + 931, 317,2517,3027, 325, 569, 686,2107,3084, 60,1042,1333,2794, 264,3177,4014, +1628, 258,3712, 7,4464,1176,1043,1778, 683, 114,1975, 78,1492, 383,1886, 510, + 386, 645,5291,2891,2069,3305,4138,3867,2939,2603,2493,1935,1066,1848,3588,1015, +1282,1289,4609, 697,1453,3044,2666,3611,1856,2412, 54, 719,1330, 568,3778,2459, +1748, 788, 492, 551,1191,1000, 488,3394,3763, 282,1799, 348,2016,1523,3155,2390, +1049, 382,2019,1788,1170, 729,2968,3523, 897,3926,2785,2938,3292, 350,2319,3238, +1718,1717,2655,3453,3143,4465, 161,2889,2980,2009,1421, 56,1908,1640,2387,2232, +1917,1874,2477,4921, 148, 83,3438, 592,4245,2882,1822,1055, 741, 115,1496,1624, + 381,1638,4592,1020, 516,3214, 458, 947,4575,1432, 211,1514,2926,1865,2142, 189, + 852,1221,1400,1486, 882,2299,4036, 351, 28,1122, 700,6479,6480,6481,6482,6483, #last 512 +) + diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/gb2312prober.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/gb2312prober.py new file mode 100644 index 0000000000000000000000000000000000000000..8446d2dd959721cc86d4ae5a7699197454f3aa91 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/gb2312prober.py @@ -0,0 +1,46 @@ +######################## BEGIN LICENSE BLOCK ######################## +# The Original Code is mozilla.org code. +# +# The Initial Developer of the Original Code is +# Netscape Communications Corporation. +# Portions created by the Initial Developer are Copyright (C) 1998 +# the Initial Developer. All Rights Reserved. +# +# Contributor(s): +# Mark Pilgrim - port to Python +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License, or (at your option) any later version. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the GNU +# Lesser General Public License for more details. +# +# You should have received a copy of the GNU Lesser General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA +# 02110-1301 USA +######################### END LICENSE BLOCK ######################### + +from .mbcharsetprober import MultiByteCharSetProber +from .codingstatemachine import CodingStateMachine +from .chardistribution import GB2312DistributionAnalysis +from .mbcssm import GB2312_SM_MODEL + +class GB2312Prober(MultiByteCharSetProber): + def __init__(self): + super(GB2312Prober, self).__init__() + self.coding_sm = CodingStateMachine(GB2312_SM_MODEL) + self.distribution_analyzer = GB2312DistributionAnalysis() + self.reset() + + @property + def charset_name(self): + return "GB2312" + + @property + def language(self): + return "Chinese" diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/hebrewprober.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/hebrewprober.py new file mode 100644 index 0000000000000000000000000000000000000000..b0e1bf49268203d1f9d14cbe73753d95dc66c8a4 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/hebrewprober.py @@ -0,0 +1,292 @@ +######################## BEGIN LICENSE BLOCK ######################## +# The Original Code is Mozilla Universal charset detector code. +# +# The Initial Developer of the Original Code is +# Shy Shalom +# Portions created by the Initial Developer are Copyright (C) 2005 +# the Initial Developer. All Rights Reserved. +# +# Contributor(s): +# Mark Pilgrim - port to Python +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License, or (at your option) any later version. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# Lesser General Public License for more details. +# +# You should have received a copy of the GNU Lesser General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA +# 02110-1301 USA +######################### END LICENSE BLOCK ######################### + +from .charsetprober import CharSetProber +from .enums import ProbingState + +# This prober doesn't actually recognize a language or a charset. +# It is a helper prober for the use of the Hebrew model probers + +### General ideas of the Hebrew charset recognition ### +# +# Four main charsets exist in Hebrew: +# "ISO-8859-8" - Visual Hebrew +# "windows-1255" - Logical Hebrew +# "ISO-8859-8-I" - Logical Hebrew +# "x-mac-hebrew" - ?? Logical Hebrew ?? +# +# Both "ISO" charsets use a completely identical set of code points, whereas +# "windows-1255" and "x-mac-hebrew" are two different proper supersets of +# these code points. windows-1255 defines additional characters in the range +# 0x80-0x9F as some misc punctuation marks as well as some Hebrew-specific +# diacritics and additional 'Yiddish' ligature letters in the range 0xc0-0xd6. +# x-mac-hebrew defines similar additional code points but with a different +# mapping. 
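+#
+# (A minimal sketch of that shared core, assuming only Python's stdlib
+# 'iso-8859-8' and 'cp1255' codecs -- the Hebrew letters occupy 0xE0-0xFA
+# in both encodings:
+#
+#     core = bytes(range(0xe0, 0xfb))  # alef..tav, final forms included
+#     assert core.decode('iso-8859-8') == core.decode('cp1255')
+#
+# so for plain, undiacritized text these charsets really do hand the
+# model probers identical byte sequences.)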
+#
+# As far as an average Hebrew text with no diacritics is concerned, all four
+# charsets are identical with respect to code points, meaning that for the
+# main Hebrew alphabet, all four map the same values to all 27 Hebrew letters
+# (including final letters).
+#
+# The dominant difference between these charsets is their directionality.
+# "Visual" directionality means that the text is ordered as if the renderer is
+# not aware of a BIDI rendering algorithm. The renderer sees the text and
+# draws it from left to right. The text itself when ordered naturally is read
+# backwards. A buffer of Visual Hebrew generally looks like so:
+# "[last word of first line spelled backwards] [whole line ordered backwards
+# and spelled backwards] [first word of first line spelled backwards]
+# [end of line] [last word of second line] ... etc' "
+# Adding punctuation marks, numbers and English text to visual text is
+# naturally also "visual" and from left to right.
+#
+# "Logical" directionality means the text is ordered "naturally" according to
+# the order it is read. It is the responsibility of the renderer to display
+# the text from right to left. A BIDI algorithm is used to place general
+# punctuation marks, numbers and English text in the text.
+#
+# Texts in x-mac-hebrew are almost impossible to find on the Internet. From
+# what little evidence I could find, it seems that its general directionality
+# is Logical.
+#
+# To sum up all of the above, the Hebrew probing mechanism knows about two
+# charsets:
+# Visual Hebrew - "ISO-8859-8" - backwards text - Words and sentences are
+# backwards while line order is natural. For charset recognition purposes
+# the line order is unimportant (In fact, for this implementation, even
+# word order is unimportant).
+# Logical Hebrew - "windows-1255" - normal, naturally ordered text.
+#
+# "ISO-8859-8-I" is a subset of windows-1255 and doesn't need to be
+# specifically identified.
+# "x-mac-hebrew" is also identified as windows-1255. A text in x-mac-hebrew
+# that contains special punctuation marks or diacritics is displayed with
+# some unconverted characters showing as question marks. This problem might
+# be corrected using another model prober for x-mac-hebrew. Because
+# x-mac-hebrew texts are so rare, writing another model prober isn't
+# worth the effort and performance hit.
+#
+#### The Prober ####
+#
+# The prober is divided between two SBCharSetProbers and a HebrewProber,
+# all of which are managed, created, fed data, inquired and deleted by the
+# SBCSGroupProber. The two SBCharSetProbers identify that the text is in
+# fact some kind of Hebrew, Logical or Visual. The final decision about which
+# one it is is made by the HebrewProber by combining final-letter scores
+# with the scores of the two SBCharSetProbers to produce a final answer.
+#
+# The SBCSGroupProber is responsible for stripping the original text of HTML
+# tags, English characters, numbers, low-ASCII punctuation characters, spaces
+# and new lines. It reduces any sequence of such characters to a single space.
+# The buffer fed to each prober in the SBCS group prober is pure text in
+# high-ASCII.
+# The two SBCharSetProbers (model probers) share the same language model:
+# Win1255Model.
+# The first SBCharSetProber uses the model normally as any other
+# SBCharSetProber does, to recognize windows-1255, upon which this model was
+# built. The second SBCharSetProber is told to make the pair-of-letter
+# lookup in the language model backwards.
This in practice exactly simulates +# a visual Hebrew model using the windows-1255 logical Hebrew model. +# +# The HebrewProber is not using any language model. All it does is look for +# final-letter evidence suggesting the text is either logical Hebrew or visual +# Hebrew. Disjointed from the model probers, the results of the HebrewProber +# alone are meaningless. HebrewProber always returns 0.00 as confidence +# since it never identifies a charset by itself. Instead, the pointer to the +# HebrewProber is passed to the model probers as a helper "Name Prober". +# When the Group prober receives a positive identification from any prober, +# it asks for the name of the charset identified. If the prober queried is a +# Hebrew model prober, the model prober forwards the call to the +# HebrewProber to make the final decision. In the HebrewProber, the +# decision is made according to the final-letters scores maintained and Both +# model probers scores. The answer is returned in the form of the name of the +# charset identified, either "windows-1255" or "ISO-8859-8". + +class HebrewProber(CharSetProber): + # windows-1255 / ISO-8859-8 code points of interest + FINAL_KAF = 0xea + NORMAL_KAF = 0xeb + FINAL_MEM = 0xed + NORMAL_MEM = 0xee + FINAL_NUN = 0xef + NORMAL_NUN = 0xf0 + FINAL_PE = 0xf3 + NORMAL_PE = 0xf4 + FINAL_TSADI = 0xf5 + NORMAL_TSADI = 0xf6 + + # Minimum Visual vs Logical final letter score difference. + # If the difference is below this, don't rely solely on the final letter score + # distance. + MIN_FINAL_CHAR_DISTANCE = 5 + + # Minimum Visual vs Logical model score difference. + # If the difference is below this, don't rely at all on the model score + # distance. + MIN_MODEL_DISTANCE = 0.01 + + VISUAL_HEBREW_NAME = "ISO-8859-8" + LOGICAL_HEBREW_NAME = "windows-1255" + + def __init__(self): + super(HebrewProber, self).__init__() + self._final_char_logical_score = None + self._final_char_visual_score = None + self._prev = None + self._before_prev = None + self._logical_prober = None + self._visual_prober = None + self.reset() + + def reset(self): + self._final_char_logical_score = 0 + self._final_char_visual_score = 0 + # The two last characters seen in the previous buffer, + # mPrev and mBeforePrev are initialized to space in order to simulate + # a word delimiter at the beginning of the data + self._prev = ' ' + self._before_prev = ' ' + # These probers are owned by the group prober. + + def set_model_probers(self, logicalProber, visualProber): + self._logical_prober = logicalProber + self._visual_prober = visualProber + + def is_final(self, c): + return c in [self.FINAL_KAF, self.FINAL_MEM, self.FINAL_NUN, + self.FINAL_PE, self.FINAL_TSADI] + + def is_non_final(self, c): + # The normal Tsadi is not a good Non-Final letter due to words like + # 'lechotet' (to chat) containing an apostrophe after the tsadi. This + # apostrophe is converted to a space in FilterWithoutEnglishLetters + # causing the Non-Final tsadi to appear at an end of a word even + # though this is not the case in the original text. + # The letters Pe and Kaf rarely display a related behavior of not being + # a good Non-Final letter. Words like 'Pop', 'Winamp' and 'Mubarak' + # for example legally end with a Non-Final Pe or Kaf. However, the + # benefit of these letters as Non-Final letters outweighs the damage + # since these words are quite rare. + return c in [self.NORMAL_KAF, self.NORMAL_MEM, + self.NORMAL_NUN, self.NORMAL_PE] + + def feed(self, byte_str): + # Final letter analysis for logical-visual decision. 
+ # Look for evidence that the received buffer is either logical Hebrew + # or visual Hebrew. + # The following cases are checked: + # 1) A word longer than 1 letter, ending with a final letter. This is + # an indication that the text is laid out "naturally" since the + # final letter really appears at the end. +1 for logical score. + # 2) A word longer than 1 letter, ending with a Non-Final letter. In + # normal Hebrew, words ending with Kaf, Mem, Nun, Pe or Tsadi, + # should not end with the Non-Final form of that letter. Exceptions + # to this rule are mentioned above in isNonFinal(). This is an + # indication that the text is laid out backwards. +1 for visual + # score + # 3) A word longer than 1 letter, starting with a final letter. Final + # letters should not appear at the beginning of a word. This is an + # indication that the text is laid out backwards. +1 for visual + # score. + # + # The visual score and logical score are accumulated throughout the + # text and are finally checked against each other in GetCharSetName(). + # No checking for final letters in the middle of words is done since + # that case is not an indication for either Logical or Visual text. + # + # We automatically filter out all 7-bit characters (replace them with + # spaces) so the word boundary detection works properly. [MAP] + + if self.state == ProbingState.NOT_ME: + # Both model probers say it's not them. No reason to continue. + return ProbingState.NOT_ME + + byte_str = self.filter_high_byte_only(byte_str) + + for cur in byte_str: + if cur == ' ': + # We stand on a space - a word just ended + if self._before_prev != ' ': + # next-to-last char was not a space so self._prev is not a + # 1 letter word + if self.is_final(self._prev): + # case (1) [-2:not space][-1:final letter][cur:space] + self._final_char_logical_score += 1 + elif self.is_non_final(self._prev): + # case (2) [-2:not space][-1:Non-Final letter][ + # cur:space] + self._final_char_visual_score += 1 + else: + # Not standing on a space + if ((self._before_prev == ' ') and + (self.is_final(self._prev)) and (cur != ' ')): + # case (3) [-2:space][-1:final letter][cur:not space] + self._final_char_visual_score += 1 + self._before_prev = self._prev + self._prev = cur + + # Forever detecting, till the end or until both model probers return + # ProbingState.NOT_ME (handled above) + return ProbingState.DETECTING + + @property + def charset_name(self): + # Make the decision: is it Logical or Visual? + # If the final letter score distance is dominant enough, rely on it. + finalsub = self._final_char_logical_score - self._final_char_visual_score + if finalsub >= self.MIN_FINAL_CHAR_DISTANCE: + return self.LOGICAL_HEBREW_NAME + if finalsub <= -self.MIN_FINAL_CHAR_DISTANCE: + return self.VISUAL_HEBREW_NAME + + # It's not dominant enough, try to rely on the model scores instead. + modelsub = (self._logical_prober.get_confidence() + - self._visual_prober.get_confidence()) + if modelsub > self.MIN_MODEL_DISTANCE: + return self.LOGICAL_HEBREW_NAME + if modelsub < -self.MIN_MODEL_DISTANCE: + return self.VISUAL_HEBREW_NAME + + # Still no good, back to final letter distance, maybe it'll save the + # day. + if finalsub < 0.0: + return self.VISUAL_HEBREW_NAME + + # (finalsub > 0 - Logical) or (don't know what to do) default to + # Logical. + return self.LOGICAL_HEBREW_NAME + + @property + def language(self): + return 'Hebrew' + + @property + def state(self): + # Remain active as long as any of the model probers are active. 
+        if (self._logical_prober.state == ProbingState.NOT_ME) and \
+                (self._visual_prober.state == ProbingState.NOT_ME):
+            return ProbingState.NOT_ME
+        return ProbingState.DETECTING
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/jisfreq.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/jisfreq.py
new file mode 100644
index 0000000000000000000000000000000000000000..83fc082b545106d02622de20f2083e8a7562f96c
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/jisfreq.py
@@ -0,0 +1,325 @@
+######################## BEGIN LICENSE BLOCK ########################
+# The Original Code is Mozilla Communicator client code.
+#
+# The Initial Developer of the Original Code is
+# Netscape Communications Corporation.
+# Portions created by the Initial Developer are Copyright (C) 1998
+# the Initial Developer. All Rights Reserved.
+#
+# Contributor(s):
+#   Mark Pilgrim - port to Python
+#
+# This library is free software; you can redistribute it and/or
+# modify it under the terms of the GNU Lesser General Public
+# License as published by the Free Software Foundation; either
+# version 2.1 of the License, or (at your option) any later version.
+#
+# This library is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+# Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public
+# License along with this library; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
+# 02110-1301 USA
+######################### END LICENSE BLOCK #########################
+
+# Sampled from about 20M of text material, including literature and
+# computer technology.
+#
+# Japanese frequency table, applied to both S-JIS and EUC-JP.
+# The entries are sorted in frequency order.
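+#
+# (A sketch of how a table like this is consumed, assuming the analyser
+# works like chardet's CharDistributionAnalysis: each decoded character is
+# mapped through the table to its frequency rank, and the detector counts
+# how many characters fall among the 512 most frequent ones. With
+# ranks_seen standing in for the list of ranks looked up so far:
+#
+#     freq_chars = sum(1 for rank in ranks_seen if rank < 512)
+#     total_chars = len(ranks_seen)
+#     confidence = freq_chars / ((total_chars - freq_chars)
+#                                * JIS_TYPICAL_DISTRIBUTION_RATIO)
+#
+# (the real implementation guards the degenerate all-frequent case).
+# Genuine Japanese text pushes this quotient towards 1.0 (see the coverage
+# figures below); random bytes keep it near zero.)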
+ +# 128 --> 0.77094 +# 256 --> 0.85710 +# 512 --> 0.92635 +# 1024 --> 0.97130 +# 2048 --> 0.99431 +# +# Ideal Distribution Ratio = 0.92635 / (1-0.92635) = 12.58 +# Random Distribution Ration = 512 / (2965+62+83+86-512) = 0.191 +# +# Typical Distribution Ratio, 25% of IDR + +JIS_TYPICAL_DISTRIBUTION_RATIO = 3.0 + +# Char to FreqOrder table , +JIS_TABLE_SIZE = 4368 + +JIS_CHAR_TO_FREQ_ORDER = ( + 40, 1, 6, 182, 152, 180, 295,2127, 285, 381,3295,4304,3068,4606,3165,3510, # 16 +3511,1822,2785,4607,1193,2226,5070,4608, 171,2996,1247, 18, 179,5071, 856,1661, # 32 +1262,5072, 619, 127,3431,3512,3230,1899,1700, 232, 228,1294,1298, 284, 283,2041, # 48 +2042,1061,1062, 48, 49, 44, 45, 433, 434,1040,1041, 996, 787,2997,1255,4305, # 64 +2108,4609,1684,1648,5073,5074,5075,5076,5077,5078,3687,5079,4610,5080,3927,3928, # 80 +5081,3296,3432, 290,2285,1471,2187,5082,2580,2825,1303,2140,1739,1445,2691,3375, # 96 +1691,3297,4306,4307,4611, 452,3376,1182,2713,3688,3069,4308,5083,5084,5085,5086, # 112 +5087,5088,5089,5090,5091,5092,5093,5094,5095,5096,5097,5098,5099,5100,5101,5102, # 128 +5103,5104,5105,5106,5107,5108,5109,5110,5111,5112,4097,5113,5114,5115,5116,5117, # 144 +5118,5119,5120,5121,5122,5123,5124,5125,5126,5127,5128,5129,5130,5131,5132,5133, # 160 +5134,5135,5136,5137,5138,5139,5140,5141,5142,5143,5144,5145,5146,5147,5148,5149, # 176 +5150,5151,5152,4612,5153,5154,5155,5156,5157,5158,5159,5160,5161,5162,5163,5164, # 192 +5165,5166,5167,5168,5169,5170,5171,5172,5173,5174,5175,1472, 598, 618, 820,1205, # 208 +1309,1412,1858,1307,1692,5176,5177,5178,5179,5180,5181,5182,1142,1452,1234,1172, # 224 +1875,2043,2149,1793,1382,2973, 925,2404,1067,1241, 960,1377,2935,1491, 919,1217, # 240 +1865,2030,1406,1499,2749,4098,5183,5184,5185,5186,5187,5188,2561,4099,3117,1804, # 256 +2049,3689,4309,3513,1663,5189,3166,3118,3298,1587,1561,3433,5190,3119,1625,2998, # 272 +3299,4613,1766,3690,2786,4614,5191,5192,5193,5194,2161, 26,3377, 2,3929, 20, # 288 +3691, 47,4100, 50, 17, 16, 35, 268, 27, 243, 42, 155, 24, 154, 29, 184, # 304 + 4, 91, 14, 92, 53, 396, 33, 289, 9, 37, 64, 620, 21, 39, 321, 5, # 320 + 12, 11, 52, 13, 3, 208, 138, 0, 7, 60, 526, 141, 151,1069, 181, 275, # 336 +1591, 83, 132,1475, 126, 331, 829, 15, 69, 160, 59, 22, 157, 55,1079, 312, # 352 + 109, 38, 23, 25, 10, 19, 79,5195, 61, 382,1124, 8, 30,5196,5197,5198, # 368 +5199,5200,5201,5202,5203,5204,5205,5206, 89, 62, 74, 34,2416, 112, 139, 196, # 384 + 271, 149, 84, 607, 131, 765, 46, 88, 153, 683, 76, 874, 101, 258, 57, 80, # 400 + 32, 364, 121,1508, 169,1547, 68, 235, 145,2999, 41, 360,3027, 70, 63, 31, # 416 + 43, 259, 262,1383, 99, 533, 194, 66, 93, 846, 217, 192, 56, 106, 58, 565, # 432 + 280, 272, 311, 256, 146, 82, 308, 71, 100, 128, 214, 655, 110, 261, 104,1140, # 448 + 54, 51, 36, 87, 67,3070, 185,2618,2936,2020, 28,1066,2390,2059,5207,5208, # 464 +5209,5210,5211,5212,5213,5214,5215,5216,4615,5217,5218,5219,5220,5221,5222,5223, # 480 +5224,5225,5226,5227,5228,5229,5230,5231,5232,5233,5234,5235,5236,3514,5237,5238, # 496 +5239,5240,5241,5242,5243,5244,2297,2031,4616,4310,3692,5245,3071,5246,3598,5247, # 512 +4617,3231,3515,5248,4101,4311,4618,3808,4312,4102,5249,4103,4104,3599,5250,5251, # 528 +5252,5253,5254,5255,5256,5257,5258,5259,5260,5261,5262,5263,5264,5265,5266,5267, # 544 +5268,5269,5270,5271,5272,5273,5274,5275,5276,5277,5278,5279,5280,5281,5282,5283, # 560 +5284,5285,5286,5287,5288,5289,5290,5291,5292,5293,5294,5295,5296,5297,5298,5299, # 576 +5300,5301,5302,5303,5304,5305,5306,5307,5308,5309,5310,5311,5312,5313,5314,5315, # 592 
+5316,5317,5318,5319,5320,5321,5322,5323,5324,5325,5326,5327,5328,5329,5330,5331, # 608 +5332,5333,5334,5335,5336,5337,5338,5339,5340,5341,5342,5343,5344,5345,5346,5347, # 624 +5348,5349,5350,5351,5352,5353,5354,5355,5356,5357,5358,5359,5360,5361,5362,5363, # 640 +5364,5365,5366,5367,5368,5369,5370,5371,5372,5373,5374,5375,5376,5377,5378,5379, # 656 +5380,5381, 363, 642,2787,2878,2788,2789,2316,3232,2317,3434,2011, 165,1942,3930, # 672 +3931,3932,3933,5382,4619,5383,4620,5384,5385,5386,5387,5388,5389,5390,5391,5392, # 688 +5393,5394,5395,5396,5397,5398,5399,5400,5401,5402,5403,5404,5405,5406,5407,5408, # 704 +5409,5410,5411,5412,5413,5414,5415,5416,5417,5418,5419,5420,5421,5422,5423,5424, # 720 +5425,5426,5427,5428,5429,5430,5431,5432,5433,5434,5435,5436,5437,5438,5439,5440, # 736 +5441,5442,5443,5444,5445,5446,5447,5448,5449,5450,5451,5452,5453,5454,5455,5456, # 752 +5457,5458,5459,5460,5461,5462,5463,5464,5465,5466,5467,5468,5469,5470,5471,5472, # 768 +5473,5474,5475,5476,5477,5478,5479,5480,5481,5482,5483,5484,5485,5486,5487,5488, # 784 +5489,5490,5491,5492,5493,5494,5495,5496,5497,5498,5499,5500,5501,5502,5503,5504, # 800 +5505,5506,5507,5508,5509,5510,5511,5512,5513,5514,5515,5516,5517,5518,5519,5520, # 816 +5521,5522,5523,5524,5525,5526,5527,5528,5529,5530,5531,5532,5533,5534,5535,5536, # 832 +5537,5538,5539,5540,5541,5542,5543,5544,5545,5546,5547,5548,5549,5550,5551,5552, # 848 +5553,5554,5555,5556,5557,5558,5559,5560,5561,5562,5563,5564,5565,5566,5567,5568, # 864 +5569,5570,5571,5572,5573,5574,5575,5576,5577,5578,5579,5580,5581,5582,5583,5584, # 880 +5585,5586,5587,5588,5589,5590,5591,5592,5593,5594,5595,5596,5597,5598,5599,5600, # 896 +5601,5602,5603,5604,5605,5606,5607,5608,5609,5610,5611,5612,5613,5614,5615,5616, # 912 +5617,5618,5619,5620,5621,5622,5623,5624,5625,5626,5627,5628,5629,5630,5631,5632, # 928 +5633,5634,5635,5636,5637,5638,5639,5640,5641,5642,5643,5644,5645,5646,5647,5648, # 944 +5649,5650,5651,5652,5653,5654,5655,5656,5657,5658,5659,5660,5661,5662,5663,5664, # 960 +5665,5666,5667,5668,5669,5670,5671,5672,5673,5674,5675,5676,5677,5678,5679,5680, # 976 +5681,5682,5683,5684,5685,5686,5687,5688,5689,5690,5691,5692,5693,5694,5695,5696, # 992 +5697,5698,5699,5700,5701,5702,5703,5704,5705,5706,5707,5708,5709,5710,5711,5712, # 1008 +5713,5714,5715,5716,5717,5718,5719,5720,5721,5722,5723,5724,5725,5726,5727,5728, # 1024 +5729,5730,5731,5732,5733,5734,5735,5736,5737,5738,5739,5740,5741,5742,5743,5744, # 1040 +5745,5746,5747,5748,5749,5750,5751,5752,5753,5754,5755,5756,5757,5758,5759,5760, # 1056 +5761,5762,5763,5764,5765,5766,5767,5768,5769,5770,5771,5772,5773,5774,5775,5776, # 1072 +5777,5778,5779,5780,5781,5782,5783,5784,5785,5786,5787,5788,5789,5790,5791,5792, # 1088 +5793,5794,5795,5796,5797,5798,5799,5800,5801,5802,5803,5804,5805,5806,5807,5808, # 1104 +5809,5810,5811,5812,5813,5814,5815,5816,5817,5818,5819,5820,5821,5822,5823,5824, # 1120 +5825,5826,5827,5828,5829,5830,5831,5832,5833,5834,5835,5836,5837,5838,5839,5840, # 1136 +5841,5842,5843,5844,5845,5846,5847,5848,5849,5850,5851,5852,5853,5854,5855,5856, # 1152 +5857,5858,5859,5860,5861,5862,5863,5864,5865,5866,5867,5868,5869,5870,5871,5872, # 1168 +5873,5874,5875,5876,5877,5878,5879,5880,5881,5882,5883,5884,5885,5886,5887,5888, # 1184 +5889,5890,5891,5892,5893,5894,5895,5896,5897,5898,5899,5900,5901,5902,5903,5904, # 1200 +5905,5906,5907,5908,5909,5910,5911,5912,5913,5914,5915,5916,5917,5918,5919,5920, # 1216 +5921,5922,5923,5924,5925,5926,5927,5928,5929,5930,5931,5932,5933,5934,5935,5936, # 1232 
+5937,5938,5939,5940,5941,5942,5943,5944,5945,5946,5947,5948,5949,5950,5951,5952, # 1248 +5953,5954,5955,5956,5957,5958,5959,5960,5961,5962,5963,5964,5965,5966,5967,5968, # 1264 +5969,5970,5971,5972,5973,5974,5975,5976,5977,5978,5979,5980,5981,5982,5983,5984, # 1280 +5985,5986,5987,5988,5989,5990,5991,5992,5993,5994,5995,5996,5997,5998,5999,6000, # 1296 +6001,6002,6003,6004,6005,6006,6007,6008,6009,6010,6011,6012,6013,6014,6015,6016, # 1312 +6017,6018,6019,6020,6021,6022,6023,6024,6025,6026,6027,6028,6029,6030,6031,6032, # 1328 +6033,6034,6035,6036,6037,6038,6039,6040,6041,6042,6043,6044,6045,6046,6047,6048, # 1344 +6049,6050,6051,6052,6053,6054,6055,6056,6057,6058,6059,6060,6061,6062,6063,6064, # 1360 +6065,6066,6067,6068,6069,6070,6071,6072,6073,6074,6075,6076,6077,6078,6079,6080, # 1376 +6081,6082,6083,6084,6085,6086,6087,6088,6089,6090,6091,6092,6093,6094,6095,6096, # 1392 +6097,6098,6099,6100,6101,6102,6103,6104,6105,6106,6107,6108,6109,6110,6111,6112, # 1408 +6113,6114,2044,2060,4621, 997,1235, 473,1186,4622, 920,3378,6115,6116, 379,1108, # 1424 +4313,2657,2735,3934,6117,3809, 636,3233, 573,1026,3693,3435,2974,3300,2298,4105, # 1440 + 854,2937,2463, 393,2581,2417, 539, 752,1280,2750,2480, 140,1161, 440, 708,1569, # 1456 + 665,2497,1746,1291,1523,3000, 164,1603, 847,1331, 537,1997, 486, 508,1693,2418, # 1472 +1970,2227, 878,1220, 299,1030, 969, 652,2751, 624,1137,3301,2619, 65,3302,2045, # 1488 +1761,1859,3120,1930,3694,3516, 663,1767, 852, 835,3695, 269, 767,2826,2339,1305, # 1504 + 896,1150, 770,1616,6118, 506,1502,2075,1012,2519, 775,2520,2975,2340,2938,4314, # 1520 +3028,2086,1224,1943,2286,6119,3072,4315,2240,1273,1987,3935,1557, 175, 597, 985, # 1536 +3517,2419,2521,1416,3029, 585, 938,1931,1007,1052,1932,1685,6120,3379,4316,4623, # 1552 + 804, 599,3121,1333,2128,2539,1159,1554,2032,3810, 687,2033,2904, 952, 675,1467, # 1568 +3436,6121,2241,1096,1786,2440,1543,1924, 980,1813,2228, 781,2692,1879, 728,1918, # 1584 +3696,4624, 548,1950,4625,1809,1088,1356,3303,2522,1944, 502, 972, 373, 513,2827, # 1600 + 586,2377,2391,1003,1976,1631,6122,2464,1084, 648,1776,4626,2141, 324, 962,2012, # 1616 +2177,2076,1384, 742,2178,1448,1173,1810, 222, 102, 301, 445, 125,2420, 662,2498, # 1632 + 277, 200,1476,1165,1068, 224,2562,1378,1446, 450,1880, 659, 791, 582,4627,2939, # 1648 +3936,1516,1274, 555,2099,3697,1020,1389,1526,3380,1762,1723,1787,2229, 412,2114, # 1664 +1900,2392,3518, 512,2597, 427,1925,2341,3122,1653,1686,2465,2499, 697, 330, 273, # 1680 + 380,2162, 951, 832, 780, 991,1301,3073, 965,2270,3519, 668,2523,2636,1286, 535, # 1696 +1407, 518, 671, 957,2658,2378, 267, 611,2197,3030,6123, 248,2299, 967,1799,2356, # 1712 + 850,1418,3437,1876,1256,1480,2828,1718,6124,6125,1755,1664,2405,6126,4628,2879, # 1728 +2829, 499,2179, 676,4629, 557,2329,2214,2090, 325,3234, 464, 811,3001, 992,2342, # 1744 +2481,1232,1469, 303,2242, 466,1070,2163, 603,1777,2091,4630,2752,4631,2714, 322, # 1760 +2659,1964,1768, 481,2188,1463,2330,2857,3600,2092,3031,2421,4632,2318,2070,1849, # 1776 +2598,4633,1302,2254,1668,1701,2422,3811,2905,3032,3123,2046,4106,1763,1694,4634, # 1792 +1604, 943,1724,1454, 917, 868,2215,1169,2940, 552,1145,1800,1228,1823,1955, 316, # 1808 +1080,2510, 361,1807,2830,4107,2660,3381,1346,1423,1134,4108,6127, 541,1263,1229, # 1824 +1148,2540, 545, 465,1833,2880,3438,1901,3074,2482, 816,3937, 713,1788,2500, 122, # 1840 +1575, 195,1451,2501,1111,6128, 859, 374,1225,2243,2483,4317, 390,1033,3439,3075, # 1856 +2524,1687, 266, 793,1440,2599, 946, 779, 802, 507, 897,1081, 528,2189,1292, 711, # 
1872 +1866,1725,1167,1640, 753, 398,2661,1053, 246, 348,4318, 137,1024,3440,1600,2077, # 1888 +2129, 825,4319, 698, 238, 521, 187,2300,1157,2423,1641,1605,1464,1610,1097,2541, # 1904 +1260,1436, 759,2255,1814,2150, 705,3235, 409,2563,3304, 561,3033,2005,2564, 726, # 1920 +1956,2343,3698,4109, 949,3812,3813,3520,1669, 653,1379,2525, 881,2198, 632,2256, # 1936 +1027, 778,1074, 733,1957, 514,1481,2466, 554,2180, 702,3938,1606,1017,1398,6129, # 1952 +1380,3521, 921, 993,1313, 594, 449,1489,1617,1166, 768,1426,1360, 495,1794,3601, # 1968 +1177,3602,1170,4320,2344, 476, 425,3167,4635,3168,1424, 401,2662,1171,3382,1998, # 1984 +1089,4110, 477,3169, 474,6130,1909, 596,2831,1842, 494, 693,1051,1028,1207,3076, # 2000 + 606,2115, 727,2790,1473,1115, 743,3522, 630, 805,1532,4321,2021, 366,1057, 838, # 2016 + 684,1114,2142,4322,2050,1492,1892,1808,2271,3814,2424,1971,1447,1373,3305,1090, # 2032 +1536,3939,3523,3306,1455,2199, 336, 369,2331,1035, 584,2393, 902, 718,2600,6131, # 2048 +2753, 463,2151,1149,1611,2467, 715,1308,3124,1268, 343,1413,3236,1517,1347,2663, # 2064 +2093,3940,2022,1131,1553,2100,2941,1427,3441,2942,1323,2484,6132,1980, 872,2368, # 2080 +2441,2943, 320,2369,2116,1082, 679,1933,3941,2791,3815, 625,1143,2023, 422,2200, # 2096 +3816,6133, 730,1695, 356,2257,1626,2301,2858,2637,1627,1778, 937, 883,2906,2693, # 2112 +3002,1769,1086, 400,1063,1325,3307,2792,4111,3077, 456,2345,1046, 747,6134,1524, # 2128 + 884,1094,3383,1474,2164,1059, 974,1688,2181,2258,1047, 345,1665,1187, 358, 875, # 2144 +3170, 305, 660,3524,2190,1334,1135,3171,1540,1649,2542,1527, 927, 968,2793, 885, # 2160 +1972,1850, 482, 500,2638,1218,1109,1085,2543,1654,2034, 876, 78,2287,1482,1277, # 2176 + 861,1675,1083,1779, 724,2754, 454, 397,1132,1612,2332, 893, 672,1237, 257,2259, # 2192 +2370, 135,3384, 337,2244, 547, 352, 340, 709,2485,1400, 788,1138,2511, 540, 772, # 2208 +1682,2260,2272,2544,2013,1843,1902,4636,1999,1562,2288,4637,2201,1403,1533, 407, # 2224 + 576,3308,1254,2071, 978,3385, 170, 136,1201,3125,2664,3172,2394, 213, 912, 873, # 2240 +3603,1713,2202, 699,3604,3699, 813,3442, 493, 531,1054, 468,2907,1483, 304, 281, # 2256 +4112,1726,1252,2094, 339,2319,2130,2639, 756,1563,2944, 748, 571,2976,1588,2425, # 2272 +2715,1851,1460,2426,1528,1392,1973,3237, 288,3309, 685,3386, 296, 892,2716,2216, # 2288 +1570,2245, 722,1747,2217, 905,3238,1103,6135,1893,1441,1965, 251,1805,2371,3700, # 2304 +2601,1919,1078, 75,2182,1509,1592,1270,2640,4638,2152,6136,3310,3817, 524, 706, # 2320 +1075, 292,3818,1756,2602, 317, 98,3173,3605,3525,1844,2218,3819,2502, 814, 567, # 2336 + 385,2908,1534,6137, 534,1642,3239, 797,6138,1670,1529, 953,4323, 188,1071, 538, # 2352 + 178, 729,3240,2109,1226,1374,2000,2357,2977, 731,2468,1116,2014,2051,6139,1261, # 2368 +1593, 803,2859,2736,3443, 556, 682, 823,1541,6140,1369,2289,1706,2794, 845, 462, # 2384 +2603,2665,1361, 387, 162,2358,1740, 739,1770,1720,1304,1401,3241,1049, 627,1571, # 2400 +2427,3526,1877,3942,1852,1500, 431,1910,1503, 677, 297,2795, 286,1433,1038,1198, # 2416 +2290,1133,1596,4113,4639,2469,1510,1484,3943,6141,2442, 108, 712,4640,2372, 866, # 2432 +3701,2755,3242,1348, 834,1945,1408,3527,2395,3243,1811, 824, 994,1179,2110,1548, # 2448 +1453, 790,3003, 690,4324,4325,2832,2909,3820,1860,3821, 225,1748, 310, 346,1780, # 2464 +2470, 821,1993,2717,2796, 828, 877,3528,2860,2471,1702,2165,2910,2486,1789, 453, # 2480 + 359,2291,1676, 73,1164,1461,1127,3311, 421, 604, 314,1037, 589, 116,2487, 737, # 2496 + 837,1180, 111, 244, 735,6142,2261,1861,1362, 986, 523, 418, 581,2666,3822, 103, 
# 2512 + 855, 503,1414,1867,2488,1091, 657,1597, 979, 605,1316,4641,1021,2443,2078,2001, # 2528 +1209, 96, 587,2166,1032, 260,1072,2153, 173, 94, 226,3244, 819,2006,4642,4114, # 2544 +2203, 231,1744, 782, 97,2667, 786,3387, 887, 391, 442,2219,4326,1425,6143,2694, # 2560 + 633,1544,1202, 483,2015, 592,2052,1958,2472,1655, 419, 129,4327,3444,3312,1714, # 2576 +1257,3078,4328,1518,1098, 865,1310,1019,1885,1512,1734, 469,2444, 148, 773, 436, # 2592 +1815,1868,1128,1055,4329,1245,2756,3445,2154,1934,1039,4643, 579,1238, 932,2320, # 2608 + 353, 205, 801, 115,2428, 944,2321,1881, 399,2565,1211, 678, 766,3944, 335,2101, # 2624 +1459,1781,1402,3945,2737,2131,1010, 844, 981,1326,1013, 550,1816,1545,2620,1335, # 2640 +1008, 371,2881, 936,1419,1613,3529,1456,1395,2273,1834,2604,1317,2738,2503, 416, # 2656 +1643,4330, 806,1126, 229, 591,3946,1314,1981,1576,1837,1666, 347,1790, 977,3313, # 2672 + 764,2861,1853, 688,2429,1920,1462, 77, 595, 415,2002,3034, 798,1192,4115,6144, # 2688 +2978,4331,3035,2695,2582,2072,2566, 430,2430,1727, 842,1396,3947,3702, 613, 377, # 2704 + 278, 236,1417,3388,3314,3174, 757,1869, 107,3530,6145,1194, 623,2262, 207,1253, # 2720 +2167,3446,3948, 492,1117,1935, 536,1838,2757,1246,4332, 696,2095,2406,1393,1572, # 2736 +3175,1782, 583, 190, 253,1390,2230, 830,3126,3389, 934,3245,1703,1749,2979,1870, # 2752 +2545,1656,2204, 869,2346,4116,3176,1817, 496,1764,4644, 942,1504, 404,1903,1122, # 2768 +1580,3606,2945,1022, 515, 372,1735, 955,2431,3036,6146,2797,1110,2302,2798, 617, # 2784 +6147, 441, 762,1771,3447,3607,3608,1904, 840,3037, 86, 939,1385, 572,1370,2445, # 2800 +1336, 114,3703, 898, 294, 203,3315, 703,1583,2274, 429, 961,4333,1854,1951,3390, # 2816 +2373,3704,4334,1318,1381, 966,1911,2322,1006,1155, 309, 989, 458,2718,1795,1372, # 2832 +1203, 252,1689,1363,3177, 517,1936, 168,1490, 562, 193,3823,1042,4117,1835, 551, # 2848 + 470,4645, 395, 489,3448,1871,1465,2583,2641, 417,1493, 279,1295, 511,1236,1119, # 2864 + 72,1231,1982,1812,3004, 871,1564, 984,3449,1667,2696,2096,4646,2347,2833,1673, # 2880 +3609, 695,3246,2668, 807,1183,4647, 890, 388,2333,1801,1457,2911,1765,1477,1031, # 2896 +3316,3317,1278,3391,2799,2292,2526, 163,3450,4335,2669,1404,1802,6148,2323,2407, # 2912 +1584,1728,1494,1824,1269, 298, 909,3318,1034,1632, 375, 776,1683,2061, 291, 210, # 2928 +1123, 809,1249,1002,2642,3038, 206,1011,2132, 144, 975, 882,1565, 342, 667, 754, # 2944 +1442,2143,1299,2303,2062, 447, 626,2205,1221,2739,2912,1144,1214,2206,2584, 760, # 2960 +1715, 614, 950,1281,2670,2621, 810, 577,1287,2546,4648, 242,2168, 250,2643, 691, # 2976 + 123,2644, 647, 313,1029, 689,1357,2946,1650, 216, 771,1339,1306, 808,2063, 549, # 2992 + 913,1371,2913,2914,6149,1466,1092,1174,1196,1311,2605,2396,1783,1796,3079, 406, # 3008 +2671,2117,3949,4649, 487,1825,2220,6150,2915, 448,2348,1073,6151,2397,1707, 130, # 3024 + 900,1598, 329, 176,1959,2527,1620,6152,2275,4336,3319,1983,2191,3705,3610,2155, # 3040 +3706,1912,1513,1614,6153,1988, 646, 392,2304,1589,3320,3039,1826,1239,1352,1340, # 3056 +2916, 505,2567,1709,1437,2408,2547, 906,6154,2672, 384,1458,1594,1100,1329, 710, # 3072 + 423,3531,2064,2231,2622,1989,2673,1087,1882, 333, 841,3005,1296,2882,2379, 580, # 3088 +1937,1827,1293,2585, 601, 574, 249,1772,4118,2079,1120, 645, 901,1176,1690, 795, # 3104 +2207, 478,1434, 516,1190,1530, 761,2080, 930,1264, 355, 435,1552, 644,1791, 987, # 3120 + 220,1364,1163,1121,1538, 306,2169,1327,1222, 546,2645, 218, 241, 610,1704,3321, # 3136 +1984,1839,1966,2528, 451,6155,2586,3707,2568, 907,3178, 254,2947, 186,1845,4650, 
# 3152 + 745, 432,1757, 428,1633, 888,2246,2221,2489,3611,2118,1258,1265, 956,3127,1784, # 3168 +4337,2490, 319, 510, 119, 457,3612, 274,2035,2007,4651,1409,3128, 970,2758, 590, # 3184 +2800, 661,2247,4652,2008,3950,1420,1549,3080,3322,3951,1651,1375,2111, 485,2491, # 3200 +1429,1156,6156,2548,2183,1495, 831,1840,2529,2446, 501,1657, 307,1894,3247,1341, # 3216 + 666, 899,2156,1539,2549,1559, 886, 349,2208,3081,2305,1736,3824,2170,2759,1014, # 3232 +1913,1386, 542,1397,2948, 490, 368, 716, 362, 159, 282,2569,1129,1658,1288,1750, # 3248 +2674, 276, 649,2016, 751,1496, 658,1818,1284,1862,2209,2087,2512,3451, 622,2834, # 3264 + 376, 117,1060,2053,1208,1721,1101,1443, 247,1250,3179,1792,3952,2760,2398,3953, # 3280 +6157,2144,3708, 446,2432,1151,2570,3452,2447,2761,2835,1210,2448,3082, 424,2222, # 3296 +1251,2449,2119,2836, 504,1581,4338, 602, 817, 857,3825,2349,2306, 357,3826,1470, # 3312 +1883,2883, 255, 958, 929,2917,3248, 302,4653,1050,1271,1751,2307,1952,1430,2697, # 3328 +2719,2359, 354,3180, 777, 158,2036,4339,1659,4340,4654,2308,2949,2248,1146,2232, # 3344 +3532,2720,1696,2623,3827,6158,3129,1550,2698,1485,1297,1428, 637, 931,2721,2145, # 3360 + 914,2550,2587, 81,2450, 612, 827,2646,1242,4655,1118,2884, 472,1855,3181,3533, # 3376 +3534, 569,1353,2699,1244,1758,2588,4119,2009,2762,2171,3709,1312,1531,6159,1152, # 3392 +1938, 134,1830, 471,3710,2276,1112,1535,3323,3453,3535, 982,1337,2950, 488, 826, # 3408 + 674,1058,1628,4120,2017, 522,2399, 211, 568,1367,3454, 350, 293,1872,1139,3249, # 3424 +1399,1946,3006,1300,2360,3324, 588, 736,6160,2606, 744, 669,3536,3828,6161,1358, # 3440 + 199, 723, 848, 933, 851,1939,1505,1514,1338,1618,1831,4656,1634,3613, 443,2740, # 3456 +3829, 717,1947, 491,1914,6162,2551,1542,4121,1025,6163,1099,1223, 198,3040,2722, # 3472 + 370, 410,1905,2589, 998,1248,3182,2380, 519,1449,4122,1710, 947, 928,1153,4341, # 3488 +2277, 344,2624,1511, 615, 105, 161,1212,1076,1960,3130,2054,1926,1175,1906,2473, # 3504 + 414,1873,2801,6164,2309, 315,1319,3325, 318,2018,2146,2157, 963, 631, 223,4342, # 3520 +4343,2675, 479,3711,1197,2625,3712,2676,2361,6165,4344,4123,6166,2451,3183,1886, # 3536 +2184,1674,1330,1711,1635,1506, 799, 219,3250,3083,3954,1677,3713,3326,2081,3614, # 3552 +1652,2073,4657,1147,3041,1752, 643,1961, 147,1974,3955,6167,1716,2037, 918,3007, # 3568 +1994, 120,1537, 118, 609,3184,4345, 740,3455,1219, 332,1615,3830,6168,1621,2980, # 3584 +1582, 783, 212, 553,2350,3714,1349,2433,2082,4124, 889,6169,2310,1275,1410, 973, # 3600 + 166,1320,3456,1797,1215,3185,2885,1846,2590,2763,4658, 629, 822,3008, 763, 940, # 3616 +1990,2862, 439,2409,1566,1240,1622, 926,1282,1907,2764, 654,2210,1607, 327,1130, # 3632 +3956,1678,1623,6170,2434,2192, 686, 608,3831,3715, 903,3957,3042,6171,2741,1522, # 3648 +1915,1105,1555,2552,1359, 323,3251,4346,3457, 738,1354,2553,2311,2334,1828,2003, # 3664 +3832,1753,2351,1227,6172,1887,4125,1478,6173,2410,1874,1712,1847, 520,1204,2607, # 3680 + 264,4659, 836,2677,2102, 600,4660,3833,2278,3084,6174,4347,3615,1342, 640, 532, # 3696 + 543,2608,1888,2400,2591,1009,4348,1497, 341,1737,3616,2723,1394, 529,3252,1321, # 3712 + 983,4661,1515,2120, 971,2592, 924, 287,1662,3186,4349,2700,4350,1519, 908,1948, # 3728 +2452, 156, 796,1629,1486,2223,2055, 694,4126,1259,1036,3392,1213,2249,2742,1889, # 3744 +1230,3958,1015, 910, 408, 559,3617,4662, 746, 725, 935,4663,3959,3009,1289, 563, # 3760 + 867,4664,3960,1567,2981,2038,2626, 988,2263,2381,4351, 143,2374, 704,1895,6175, # 3776 +1188,3716,2088, 673,3085,2362,4352, 484,1608,1921,2765,2918, 215, 
904,3618,3537, # 3792 + 894, 509, 976,3043,2701,3961,4353,2837,2982, 498,6176,6177,1102,3538,1332,3393, # 3808 +1487,1636,1637, 233, 245,3962, 383, 650, 995,3044, 460,1520,1206,2352, 749,3327, # 3824 + 530, 700, 389,1438,1560,1773,3963,2264, 719,2951,2724,3834, 870,1832,1644,1000, # 3840 + 839,2474,3717, 197,1630,3394, 365,2886,3964,1285,2133, 734, 922, 818,1106, 732, # 3856 + 480,2083,1774,3458, 923,2279,1350, 221,3086, 85,2233,2234,3835,1585,3010,2147, # 3872 +1387,1705,2382,1619,2475, 133, 239,2802,1991,1016,2084,2383, 411,2838,1113, 651, # 3888 +1985,1160,3328, 990,1863,3087,1048,1276,2647, 265,2627,1599,3253,2056, 150, 638, # 3904 +2019, 656, 853, 326,1479, 680,1439,4354,1001,1759, 413,3459,3395,2492,1431, 459, # 3920 +4355,1125,3329,2265,1953,1450,2065,2863, 849, 351,2678,3131,3254,3255,1104,1577, # 3936 + 227,1351,1645,2453,2193,1421,2887, 812,2121, 634, 95,2435, 201,2312,4665,1646, # 3952 +1671,2743,1601,2554,2702,2648,2280,1315,1366,2089,3132,1573,3718,3965,1729,1189, # 3968 + 328,2679,1077,1940,1136, 558,1283, 964,1195, 621,2074,1199,1743,3460,3619,1896, # 3984 +1916,1890,3836,2952,1154,2112,1064, 862, 378,3011,2066,2113,2803,1568,2839,6178, # 4000 +3088,2919,1941,1660,2004,1992,2194, 142, 707,1590,1708,1624,1922,1023,1836,1233, # 4016 +1004,2313, 789, 741,3620,6179,1609,2411,1200,4127,3719,3720,4666,2057,3721, 593, # 4032 +2840, 367,2920,1878,6180,3461,1521, 628,1168, 692,2211,2649, 300, 720,2067,2571, # 4048 +2953,3396, 959,2504,3966,3539,3462,1977, 701,6181, 954,1043, 800, 681, 183,3722, # 4064 +1803,1730,3540,4128,2103, 815,2314, 174, 467, 230,2454,1093,2134, 755,3541,3397, # 4080 +1141,1162,6182,1738,2039, 270,3256,2513,1005,1647,2185,3837, 858,1679,1897,1719, # 4096 +2954,2324,1806, 402, 670, 167,4129,1498,2158,2104, 750,6183, 915, 189,1680,1551, # 4112 + 455,4356,1501,2455, 405,1095,2955, 338,1586,1266,1819, 570, 641,1324, 237,1556, # 4128 +2650,1388,3723,6184,1368,2384,1343,1978,3089,2436, 879,3724, 792,1191, 758,3012, # 4144 +1411,2135,1322,4357, 240,4667,1848,3725,1574,6185, 420,3045,1546,1391, 714,4358, # 4160 +1967, 941,1864, 863, 664, 426, 560,1731,2680,1785,2864,1949,2363, 403,3330,1415, # 4176 +1279,2136,1697,2335, 204, 721,2097,3838, 90,6186,2085,2505, 191,3967, 124,2148, # 4192 +1376,1798,1178,1107,1898,1405, 860,4359,1243,1272,2375,2983,1558,2456,1638, 113, # 4208 +3621, 578,1923,2609, 880, 386,4130, 784,2186,2266,1422,2956,2172,1722, 497, 263, # 4224 +2514,1267,2412,2610, 177,2703,3542, 774,1927,1344, 616,1432,1595,1018, 172,4360, # 4240 +2325, 911,4361, 438,1468,3622, 794,3968,2024,2173,1681,1829,2957, 945, 895,3090, # 4256 + 575,2212,2476, 475,2401,2681, 785,2744,1745,2293,2555,1975,3133,2865, 394,4668, # 4272 +3839, 635,4131, 639, 202,1507,2195,2766,1345,1435,2572,3726,1908,1184,1181,2457, # 4288 +3727,3134,4362, 843,2611, 437, 916,4669, 234, 769,1884,3046,3047,3623, 833,6187, # 4304 +1639,2250,2402,1355,1185,2010,2047, 999, 525,1732,1290,1488,2612, 948,1578,3728, # 4320 +2413,2477,1216,2725,2159, 334,3840,1328,3624,2921,1525,4132, 564,1056, 891,4363, # 4336 +1444,1698,2385,2251,3729,1365,2281,2235,1717,6188, 864,3841,2515, 444, 527,2767, # 4352 +2922,3625, 544, 461,6189, 566, 209,2437,3398,2098,1065,2068,3331,3626,3257,2137, # 4368 #last 512 +) + + diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/jpcntx.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/jpcntx.py new file mode 100644 index 
0000000000000000000000000000000000000000..20044e4bc8f5a9775d82be0ecf387ce752732507 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/jpcntx.py @@ -0,0 +1,233 @@ +######################## BEGIN LICENSE BLOCK ######################## +# The Original Code is Mozilla Communicator client code. +# +# The Initial Developer of the Original Code is +# Netscape Communications Corporation. +# Portions created by the Initial Developer are Copyright (C) 1998 +# the Initial Developer. All Rights Reserved. +# +# Contributor(s): +# Mark Pilgrim - port to Python +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License, or (at your option) any later version. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# Lesser General Public License for more details. +# +# You should have received a copy of the GNU Lesser General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA +# 02110-1301 USA +######################### END LICENSE BLOCK ######################### + + +# This is hiragana 2-char sequence table, the number in each cell represents its frequency category +jp2CharContext = ( +(0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,1), +(2,4,0,4,0,3,0,4,0,3,4,4,4,2,4,3,3,4,3,2,3,3,4,2,3,3,3,2,4,1,4,3,3,1,5,4,3,4,3,4,3,5,3,0,3,5,4,2,0,3,1,0,3,3,0,3,3,0,1,1,0,4,3,0,3,3,0,4,0,2,0,3,5,5,5,5,4,0,4,1,0,3,4), +(0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2), +(0,4,0,5,0,5,0,4,0,4,5,4,4,3,5,3,5,1,5,3,4,3,4,4,3,4,3,3,4,3,5,4,4,3,5,5,3,5,5,5,3,5,5,3,4,5,5,3,1,3,2,0,3,4,0,4,2,0,4,2,1,5,3,2,3,5,0,4,0,2,0,5,4,4,5,4,5,0,4,0,0,4,4), +(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0), +(0,3,0,4,0,3,0,3,0,4,5,4,3,3,3,3,4,3,5,4,4,3,5,4,4,3,4,3,4,4,4,4,5,3,4,4,3,4,5,5,4,5,5,1,4,5,4,3,0,3,3,1,3,3,0,4,4,0,3,3,1,5,3,3,3,5,0,4,0,3,0,4,4,3,4,3,3,0,4,1,1,3,4), +(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0), +(0,4,0,3,0,3,0,4,0,3,4,4,3,2,2,1,2,1,3,1,3,3,3,3,3,4,3,1,3,3,5,3,3,0,4,3,0,5,4,3,3,5,4,4,3,4,4,5,0,1,2,0,1,2,0,2,2,0,1,0,0,5,2,2,1,4,0,3,0,1,0,4,4,3,5,4,3,0,2,1,0,4,3), +(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0), +(0,3,0,5,0,4,0,2,1,4,4,2,4,1,4,2,4,2,4,3,3,3,4,3,3,3,3,1,4,2,3,3,3,1,4,4,1,1,1,4,3,3,2,0,2,4,3,2,0,3,3,0,3,1,1,0,0,0,3,3,0,4,2,2,3,4,0,4,0,3,0,4,4,5,3,4,4,0,3,0,0,1,4), +(1,4,0,4,0,4,0,4,0,3,5,4,4,3,4,3,5,4,3,3,4,3,5,4,4,4,4,3,4,2,4,3,3,1,5,4,3,2,4,5,4,5,5,4,4,5,4,4,0,3,2,2,3,3,0,4,3,1,3,2,1,4,3,3,4,5,0,3,0,2,0,4,5,5,4,5,4,0,4,0,0,5,4), 
+(0,5,0,5,0,4,0,3,0,4,4,3,4,3,3,3,4,0,4,4,4,3,4,3,4,3,3,1,4,2,4,3,4,0,5,4,1,4,5,4,4,5,3,2,4,3,4,3,2,4,1,3,3,3,2,3,2,0,4,3,3,4,3,3,3,4,0,4,0,3,0,4,5,4,4,4,3,0,4,1,0,1,3), +(0,3,1,4,0,3,0,2,0,3,4,4,3,1,4,2,3,3,4,3,4,3,4,3,4,4,3,2,3,1,5,4,4,1,4,4,3,5,4,4,3,5,5,4,3,4,4,3,1,2,3,1,2,2,0,3,2,0,3,1,0,5,3,3,3,4,3,3,3,3,4,4,4,4,5,4,2,0,3,3,2,4,3), +(0,2,0,3,0,1,0,1,0,0,3,2,0,0,2,0,1,0,2,1,3,3,3,1,2,3,1,0,1,0,4,2,1,1,3,3,0,4,3,3,1,4,3,3,0,3,3,2,0,0,0,0,1,0,0,2,0,0,0,0,0,4,1,0,2,3,2,2,2,1,3,3,3,4,4,3,2,0,3,1,0,3,3), +(0,4,0,4,0,3,0,3,0,4,4,4,3,3,3,3,3,3,4,3,4,2,4,3,4,3,3,2,4,3,4,5,4,1,4,5,3,5,4,5,3,5,4,0,3,5,5,3,1,3,3,2,2,3,0,3,4,1,3,3,2,4,3,3,3,4,0,4,0,3,0,4,5,4,4,5,3,0,4,1,0,3,4), +(0,2,0,3,0,3,0,0,0,2,2,2,1,0,1,0,0,0,3,0,3,0,3,0,1,3,1,0,3,1,3,3,3,1,3,3,3,0,1,3,1,3,4,0,0,3,1,1,0,3,2,0,0,0,0,1,3,0,1,0,0,3,3,2,0,3,0,0,0,0,0,3,4,3,4,3,3,0,3,0,0,2,3), +(2,3,0,3,0,2,0,1,0,3,3,4,3,1,3,1,1,1,3,1,4,3,4,3,3,3,0,0,3,1,5,4,3,1,4,3,2,5,5,4,4,4,4,3,3,4,4,4,0,2,1,1,3,2,0,1,2,0,0,1,0,4,1,3,3,3,0,3,0,1,0,4,4,4,5,5,3,0,2,0,0,4,4), +(0,2,0,1,0,3,1,3,0,2,3,3,3,0,3,1,0,0,3,0,3,2,3,1,3,2,1,1,0,0,4,2,1,0,2,3,1,4,3,2,0,4,4,3,1,3,1,3,0,1,0,0,1,0,0,0,1,0,0,0,0,4,1,1,1,2,0,3,0,0,0,3,4,2,4,3,2,0,1,0,0,3,3), +(0,1,0,4,0,5,0,4,0,2,4,4,2,3,3,2,3,3,5,3,3,3,4,3,4,2,3,0,4,3,3,3,4,1,4,3,2,1,5,5,3,4,5,1,3,5,4,2,0,3,3,0,1,3,0,4,2,0,1,3,1,4,3,3,3,3,0,3,0,1,0,3,4,4,4,5,5,0,3,0,1,4,5), +(0,2,0,3,0,3,0,0,0,2,3,1,3,0,4,0,1,1,3,0,3,4,3,2,3,1,0,3,3,2,3,1,3,0,2,3,0,2,1,4,1,2,2,0,0,3,3,0,0,2,0,0,0,1,0,0,0,0,2,2,0,3,2,1,3,3,0,2,0,2,0,0,3,3,1,2,4,0,3,0,2,2,3), +(2,4,0,5,0,4,0,4,0,2,4,4,4,3,4,3,3,3,1,2,4,3,4,3,4,4,5,0,3,3,3,3,2,0,4,3,1,4,3,4,1,4,4,3,3,4,4,3,1,2,3,0,4,2,0,4,1,0,3,3,0,4,3,3,3,4,0,4,0,2,0,3,5,3,4,5,2,0,3,0,0,4,5), +(0,3,0,4,0,1,0,1,0,1,3,2,2,1,3,0,3,0,2,0,2,0,3,0,2,0,0,0,1,0,1,1,0,0,3,1,0,0,0,4,0,3,1,0,2,1,3,0,0,0,0,0,0,3,0,0,0,0,0,0,0,4,2,2,3,1,0,3,0,0,0,1,4,4,4,3,0,0,4,0,0,1,4), +(1,4,1,5,0,3,0,3,0,4,5,4,4,3,5,3,3,4,4,3,4,1,3,3,3,3,2,1,4,1,5,4,3,1,4,4,3,5,4,4,3,5,4,3,3,4,4,4,0,3,3,1,2,3,0,3,1,0,3,3,0,5,4,4,4,4,4,4,3,3,5,4,4,3,3,5,4,0,3,2,0,4,4), +(0,2,0,3,0,1,0,0,0,1,3,3,3,2,4,1,3,0,3,1,3,0,2,2,1,1,0,0,2,0,4,3,1,0,4,3,0,4,4,4,1,4,3,1,1,3,3,1,0,2,0,0,1,3,0,0,0,0,2,0,0,4,3,2,4,3,5,4,3,3,3,4,3,3,4,3,3,0,2,1,0,3,3), +(0,2,0,4,0,3,0,2,0,2,5,5,3,4,4,4,4,1,4,3,3,0,4,3,4,3,1,3,3,2,4,3,0,3,4,3,0,3,4,4,2,4,4,0,4,5,3,3,2,2,1,1,1,2,0,1,5,0,3,3,2,4,3,3,3,4,0,3,0,2,0,4,4,3,5,5,0,0,3,0,2,3,3), +(0,3,0,4,0,3,0,1,0,3,4,3,3,1,3,3,3,0,3,1,3,0,4,3,3,1,1,0,3,0,3,3,0,0,4,4,0,1,5,4,3,3,5,0,3,3,4,3,0,2,0,1,1,1,0,1,3,0,1,2,1,3,3,2,3,3,0,3,0,1,0,1,3,3,4,4,1,0,1,2,2,1,3), +(0,1,0,4,0,4,0,3,0,1,3,3,3,2,3,1,1,0,3,0,3,3,4,3,2,4,2,0,1,0,4,3,2,0,4,3,0,5,3,3,2,4,4,4,3,3,3,4,0,1,3,0,0,1,0,0,1,0,0,0,0,4,2,3,3,3,0,3,0,0,0,4,4,4,5,3,2,0,3,3,0,3,5), +(0,2,0,3,0,0,0,3,0,1,3,0,2,0,0,0,1,0,3,1,1,3,3,0,0,3,0,0,3,0,2,3,1,0,3,1,0,3,3,2,0,4,2,2,0,2,0,0,0,4,0,0,0,0,0,0,0,0,0,0,0,2,1,2,0,1,0,1,0,0,0,1,3,1,2,0,0,0,1,0,0,1,4), +(0,3,0,3,0,5,0,1,0,2,4,3,1,3,3,2,1,1,5,2,1,0,5,1,2,0,0,0,3,3,2,2,3,2,4,3,0,0,3,3,1,3,3,0,2,5,3,4,0,3,3,0,1,2,0,2,2,0,3,2,0,2,2,3,3,3,0,2,0,1,0,3,4,4,2,5,4,0,3,0,0,3,5), +(0,3,0,3,0,3,0,1,0,3,3,3,3,0,3,0,2,0,2,1,1,0,2,0,1,0,0,0,2,1,0,0,1,0,3,2,0,0,3,3,1,2,3,1,0,3,3,0,0,1,0,0,0,0,0,2,0,0,0,0,0,2,3,1,2,3,0,3,0,1,0,3,2,1,0,4,3,0,1,1,0,3,3), +(0,4,0,5,0,3,0,3,0,4,5,5,4,3,5,3,4,3,5,3,3,2,5,3,4,4,4,3,4,3,4,5,5,3,4,4,3,4,4,5,4,4,4,3,4,5,5,4,2,3,4,2,3,4,0,3,3,1,4,3,2,4,3,3,5,5,0,3,0,3,0,5,5,5,5,4,4,0,4,0,1,4,4), 
+(0,4,0,4,0,3,0,3,0,3,5,4,4,2,3,2,5,1,3,2,5,1,4,2,3,2,3,3,4,3,3,3,3,2,5,4,1,3,3,5,3,4,4,0,4,4,3,1,1,3,1,0,2,3,0,2,3,0,3,0,0,4,3,1,3,4,0,3,0,2,0,4,4,4,3,4,5,0,4,0,0,3,4), +(0,3,0,3,0,3,1,2,0,3,4,4,3,3,3,0,2,2,4,3,3,1,3,3,3,1,1,0,3,1,4,3,2,3,4,4,2,4,4,4,3,4,4,3,2,4,4,3,1,3,3,1,3,3,0,4,1,0,2,2,1,4,3,2,3,3,5,4,3,3,5,4,4,3,3,0,4,0,3,2,2,4,4), +(0,2,0,1,0,0,0,0,0,1,2,1,3,0,0,0,0,0,2,0,1,2,1,0,0,1,0,0,0,0,3,0,0,1,0,1,1,3,1,0,0,0,1,1,0,1,1,0,0,0,0,0,2,0,0,0,0,0,0,0,0,1,1,2,2,0,3,4,0,0,0,1,1,0,0,1,0,0,0,0,0,1,1), +(0,1,0,0,0,1,0,0,0,0,4,0,4,1,4,0,3,0,4,0,3,0,4,0,3,0,3,0,4,1,5,1,4,0,0,3,0,5,0,5,2,0,1,0,0,0,2,1,4,0,1,3,0,0,3,0,0,3,1,1,4,1,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0), +(1,4,0,5,0,3,0,2,0,3,5,4,4,3,4,3,5,3,4,3,3,0,4,3,3,3,3,3,3,2,4,4,3,1,3,4,4,5,4,4,3,4,4,1,3,5,4,3,3,3,1,2,2,3,3,1,3,1,3,3,3,5,3,3,4,5,0,3,0,3,0,3,4,3,4,4,3,0,3,0,2,4,3), +(0,1,0,4,0,0,0,0,0,1,4,0,4,1,4,2,4,0,3,0,1,0,1,0,0,0,0,0,2,0,3,1,1,1,0,3,0,0,0,1,2,1,0,0,1,1,1,1,0,1,0,0,0,1,0,0,3,0,0,0,0,3,2,0,2,2,0,1,0,0,0,2,3,2,3,3,0,0,0,0,2,1,0), +(0,5,1,5,0,3,0,3,0,5,4,4,5,1,5,3,3,0,4,3,4,3,5,3,4,3,3,2,4,3,4,3,3,0,3,3,1,4,4,3,4,4,4,3,4,5,5,3,2,3,1,1,3,3,1,3,1,1,3,3,2,4,5,3,3,5,0,4,0,3,0,4,4,3,5,3,3,0,3,4,0,4,3), +(0,5,0,5,0,3,0,2,0,4,4,3,5,2,4,3,3,3,4,4,4,3,5,3,5,3,3,1,4,0,4,3,3,0,3,3,0,4,4,4,4,5,4,3,3,5,5,3,2,3,1,2,3,2,0,1,0,0,3,2,2,4,4,3,1,5,0,4,0,3,0,4,3,1,3,2,1,0,3,3,0,3,3), +(0,4,0,5,0,5,0,4,0,4,5,5,5,3,4,3,3,2,5,4,4,3,5,3,5,3,4,0,4,3,4,4,3,2,4,4,3,4,5,4,4,5,5,0,3,5,5,4,1,3,3,2,3,3,1,3,1,0,4,3,1,4,4,3,4,5,0,4,0,2,0,4,3,4,4,3,3,0,4,0,0,5,5), +(0,4,0,4,0,5,0,1,1,3,3,4,4,3,4,1,3,0,5,1,3,0,3,1,3,1,1,0,3,0,3,3,4,0,4,3,0,4,4,4,3,4,4,0,3,5,4,1,0,3,0,0,2,3,0,3,1,0,3,1,0,3,2,1,3,5,0,3,0,1,0,3,2,3,3,4,4,0,2,2,0,4,4), +(2,4,0,5,0,4,0,3,0,4,5,5,4,3,5,3,5,3,5,3,5,2,5,3,4,3,3,4,3,4,5,3,2,1,5,4,3,2,3,4,5,3,4,1,2,5,4,3,0,3,3,0,3,2,0,2,3,0,4,1,0,3,4,3,3,5,0,3,0,1,0,4,5,5,5,4,3,0,4,2,0,3,5), +(0,5,0,4,0,4,0,2,0,5,4,3,4,3,4,3,3,3,4,3,4,2,5,3,5,3,4,1,4,3,4,4,4,0,3,5,0,4,4,4,4,5,3,1,3,4,5,3,3,3,3,3,3,3,0,2,2,0,3,3,2,4,3,3,3,5,3,4,1,3,3,5,3,2,0,0,0,0,4,3,1,3,3), +(0,1,0,3,0,3,0,1,0,1,3,3,3,2,3,3,3,0,3,0,0,0,3,1,3,0,0,0,2,2,2,3,0,0,3,2,0,1,2,4,1,3,3,0,0,3,3,3,0,1,0,0,2,1,0,0,3,0,3,1,0,3,0,0,1,3,0,2,0,1,0,3,3,1,3,3,0,0,1,1,0,3,3), +(0,2,0,3,0,2,1,4,0,2,2,3,1,1,3,1,1,0,2,0,3,1,2,3,1,3,0,0,1,0,4,3,2,3,3,3,1,4,2,3,3,3,3,1,0,3,1,4,0,1,1,0,1,2,0,1,1,0,1,1,0,3,1,3,2,2,0,1,0,0,0,2,3,3,3,1,0,0,0,0,0,2,3), +(0,5,0,4,0,5,0,2,0,4,5,5,3,3,4,3,3,1,5,4,4,2,4,4,4,3,4,2,4,3,5,5,4,3,3,4,3,3,5,5,4,5,5,1,3,4,5,3,1,4,3,1,3,3,0,3,3,1,4,3,1,4,5,3,3,5,0,4,0,3,0,5,3,3,1,4,3,0,4,0,1,5,3), +(0,5,0,5,0,4,0,2,0,4,4,3,4,3,3,3,3,3,5,4,4,4,4,4,4,5,3,3,5,2,4,4,4,3,4,4,3,3,4,4,5,5,3,3,4,3,4,3,3,4,3,3,3,3,1,2,2,1,4,3,3,5,4,4,3,4,0,4,0,3,0,4,4,4,4,4,1,0,4,2,0,2,4), +(0,4,0,4,0,3,0,1,0,3,5,2,3,0,3,0,2,1,4,2,3,3,4,1,4,3,3,2,4,1,3,3,3,0,3,3,0,0,3,3,3,5,3,3,3,3,3,2,0,2,0,0,2,0,0,2,0,0,1,0,0,3,1,2,2,3,0,3,0,2,0,4,4,3,3,4,1,0,3,0,0,2,4), +(0,0,0,4,0,0,0,0,0,0,1,0,1,0,2,0,0,0,0,0,1,0,2,0,1,0,0,0,0,0,3,1,3,0,3,2,0,0,0,1,0,3,2,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,4,0,2,0,0,0,0,0,0,2), +(0,2,1,3,0,2,0,2,0,3,3,3,3,1,3,1,3,3,3,3,3,3,4,2,2,1,2,1,4,0,4,3,1,3,3,3,2,4,3,5,4,3,3,3,3,3,3,3,0,1,3,0,2,0,0,1,0,0,1,0,0,4,2,0,2,3,0,3,3,0,3,3,4,2,3,1,4,0,1,2,0,2,3), +(0,3,0,3,0,1,0,3,0,2,3,3,3,0,3,1,2,0,3,3,2,3,3,2,3,2,3,1,3,0,4,3,2,0,3,3,1,4,3,3,2,3,4,3,1,3,3,1,1,0,1,1,0,1,0,1,0,1,0,0,0,4,1,1,0,3,0,3,1,0,2,3,3,3,3,3,1,0,0,2,0,3,3), 
+(0,0,0,0,0,0,0,0,0,0,3,0,2,0,3,0,0,0,0,0,0,0,3,0,0,0,0,0,0,0,3,0,3,0,3,1,0,1,0,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,0,2,0,2,3,0,0,0,0,0,0,0,0,3), +(0,2,0,3,1,3,0,3,0,2,3,3,3,1,3,1,3,1,3,1,3,3,3,1,3,0,2,3,1,1,4,3,3,2,3,3,1,2,2,4,1,3,3,0,1,4,2,3,0,1,3,0,3,0,0,1,3,0,2,0,0,3,3,2,1,3,0,3,0,2,0,3,4,4,4,3,1,0,3,0,0,3,3), +(0,2,0,1,0,2,0,0,0,1,3,2,2,1,3,0,1,1,3,0,3,2,3,1,2,0,2,0,1,1,3,3,3,0,3,3,1,1,2,3,2,3,3,1,2,3,2,0,0,1,0,0,0,0,0,0,3,0,1,0,0,2,1,2,1,3,0,3,0,0,0,3,4,4,4,3,2,0,2,0,0,2,4), +(0,0,0,1,0,1,0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,1,1,1,0,0,0,0,0,0,0,0,0,2,2,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,1,3,1,0,0,0,0,0,0,0,3), +(0,3,0,3,0,2,0,3,0,3,3,3,2,3,2,2,2,0,3,1,3,3,3,2,3,3,0,0,3,0,3,2,2,0,2,3,1,4,3,4,3,3,2,3,1,5,4,4,0,3,1,2,1,3,0,3,1,1,2,0,2,3,1,3,1,3,0,3,0,1,0,3,3,4,4,2,1,0,2,1,0,2,4), +(0,1,0,3,0,1,0,2,0,1,4,2,5,1,4,0,2,0,2,1,3,1,4,0,2,1,0,0,2,1,4,1,1,0,3,3,0,5,1,3,2,3,3,1,0,3,2,3,0,1,0,0,0,0,0,0,1,0,0,0,0,4,0,1,0,3,0,2,0,1,0,3,3,3,4,3,3,0,0,0,0,2,3), +(0,0,0,1,0,0,0,0,0,0,2,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,3,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,0,0,1,0,0,0,0,0,3), +(0,1,0,3,0,4,0,3,0,2,4,3,1,0,3,2,2,1,3,1,2,2,3,1,1,1,2,1,3,0,1,2,0,1,3,2,1,3,0,5,5,1,0,0,1,3,2,1,0,3,0,0,1,0,0,0,0,0,3,4,0,1,1,1,3,2,0,2,0,1,0,2,3,3,1,2,3,0,1,0,1,0,4), +(0,0,0,1,0,3,0,3,0,2,2,1,0,0,4,0,3,0,3,1,3,0,3,0,3,0,1,0,3,0,3,1,3,0,3,3,0,0,1,2,1,1,1,0,1,2,0,0,0,1,0,0,1,0,0,0,0,0,0,0,0,2,2,1,2,0,0,2,0,0,0,0,2,3,3,3,3,0,0,0,0,1,4), +(0,0,0,3,0,3,0,0,0,0,3,1,1,0,3,0,1,0,2,0,1,0,0,0,0,0,0,0,1,0,3,0,2,0,2,3,0,0,2,2,3,1,2,0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,0,0,2,0,0,0,0,2,3), +(2,4,0,5,0,5,0,4,0,3,4,3,3,3,4,3,3,3,4,3,4,4,5,4,5,5,5,2,3,0,5,5,4,1,5,4,3,1,5,4,3,4,4,3,3,4,3,3,0,3,2,0,2,3,0,3,0,0,3,3,0,5,3,2,3,3,0,3,0,3,0,3,4,5,4,5,3,0,4,3,0,3,4), +(0,3,0,3,0,3,0,3,0,3,3,4,3,2,3,2,3,0,4,3,3,3,3,3,3,3,3,0,3,2,4,3,3,1,3,4,3,4,4,4,3,4,4,3,2,4,4,1,0,2,0,0,1,1,0,2,0,0,3,1,0,5,3,2,1,3,0,3,0,1,2,4,3,2,4,3,3,0,3,2,0,4,4), +(0,3,0,3,0,1,0,0,0,1,4,3,3,2,3,1,3,1,4,2,3,2,4,2,3,4,3,0,2,2,3,3,3,0,3,3,3,0,3,4,1,3,3,0,3,4,3,3,0,1,1,0,1,0,0,0,4,0,3,0,0,3,1,2,1,3,0,4,0,1,0,4,3,3,4,3,3,0,2,0,0,3,3), +(0,3,0,4,0,1,0,3,0,3,4,3,3,0,3,3,3,1,3,1,3,3,4,3,3,3,0,0,3,1,5,3,3,1,3,3,2,5,4,3,3,4,5,3,2,5,3,4,0,1,0,0,0,0,0,2,0,0,1,1,0,4,2,2,1,3,0,3,0,2,0,4,4,3,5,3,2,0,1,1,0,3,4), +(0,5,0,4,0,5,0,2,0,4,4,3,3,2,3,3,3,1,4,3,4,1,5,3,4,3,4,0,4,2,4,3,4,1,5,4,0,4,4,4,4,5,4,1,3,5,4,2,1,4,1,1,3,2,0,3,1,0,3,2,1,4,3,3,3,4,0,4,0,3,0,4,4,4,3,3,3,0,4,2,0,3,4), +(1,4,0,4,0,3,0,1,0,3,3,3,1,1,3,3,2,2,3,3,1,0,3,2,2,1,2,0,3,1,2,1,2,0,3,2,0,2,2,3,3,4,3,0,3,3,1,2,0,1,1,3,1,2,0,0,3,0,1,1,0,3,2,2,3,3,0,3,0,0,0,2,3,3,4,3,3,0,1,0,0,1,4), +(0,4,0,4,0,4,0,0,0,3,4,4,3,1,4,2,3,2,3,3,3,1,4,3,4,0,3,0,4,2,3,3,2,2,5,4,2,1,3,4,3,4,3,1,3,3,4,2,0,2,1,0,3,3,0,0,2,0,3,1,0,4,4,3,4,3,0,4,0,1,0,2,4,4,4,4,4,0,3,2,0,3,3), +(0,0,0,1,0,4,0,0,0,0,0,0,1,1,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,1,0,3,2,0,0,1,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,2), +(0,2,0,3,0,4,0,4,0,1,3,3,3,0,4,0,2,1,2,1,1,1,2,0,3,1,1,0,1,0,3,1,0,0,3,3,2,0,1,1,0,0,0,0,0,1,0,2,0,2,2,0,3,1,0,0,1,0,1,1,0,1,2,0,3,0,0,0,0,1,0,0,3,3,4,3,1,0,1,0,3,0,2), +(0,0,0,3,0,5,0,0,0,0,1,0,2,0,3,1,0,1,3,0,0,0,2,0,0,0,1,0,0,0,1,1,0,0,4,0,0,0,2,3,0,1,4,1,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,3,0,0,0,0,0,1,0,0,0,0,0,0,0,2,0,0,3,0,0,0,0,0,3), 
+(0,2,0,5,0,5,0,1,0,2,4,3,3,2,5,1,3,2,3,3,3,0,4,1,2,0,3,0,4,0,2,2,1,1,5,3,0,0,1,4,2,3,2,0,3,3,3,2,0,2,4,1,1,2,0,1,1,0,3,1,0,1,3,1,2,3,0,2,0,0,0,1,3,5,4,4,4,0,3,0,0,1,3), +(0,4,0,5,0,4,0,4,0,4,5,4,3,3,4,3,3,3,4,3,4,4,5,3,4,5,4,2,4,2,3,4,3,1,4,4,1,3,5,4,4,5,5,4,4,5,5,5,2,3,3,1,4,3,1,3,3,0,3,3,1,4,3,4,4,4,0,3,0,4,0,3,3,4,4,5,0,0,4,3,0,4,5), +(0,4,0,4,0,3,0,3,0,3,4,4,4,3,3,2,4,3,4,3,4,3,5,3,4,3,2,1,4,2,4,4,3,1,3,4,2,4,5,5,3,4,5,4,1,5,4,3,0,3,2,2,3,2,1,3,1,0,3,3,3,5,3,3,3,5,4,4,2,3,3,4,3,3,3,2,1,0,3,2,1,4,3), +(0,4,0,5,0,4,0,3,0,3,5,5,3,2,4,3,4,0,5,4,4,1,4,4,4,3,3,3,4,3,5,5,2,3,3,4,1,2,5,5,3,5,5,2,3,5,5,4,0,3,2,0,3,3,1,1,5,1,4,1,0,4,3,2,3,5,0,4,0,3,0,5,4,3,4,3,0,0,4,1,0,4,4), +(1,3,0,4,0,2,0,2,0,2,5,5,3,3,3,3,3,0,4,2,3,4,4,4,3,4,0,0,3,4,5,4,3,3,3,3,2,5,5,4,5,5,5,4,3,5,5,5,1,3,1,0,1,0,0,3,2,0,4,2,0,5,2,3,2,4,1,3,0,3,0,4,5,4,5,4,3,0,4,2,0,5,4), +(0,3,0,4,0,5,0,3,0,3,4,4,3,2,3,2,3,3,3,3,3,2,4,3,3,2,2,0,3,3,3,3,3,1,3,3,3,0,4,4,3,4,4,1,1,4,4,2,0,3,1,0,1,1,0,4,1,0,2,3,1,3,3,1,3,4,0,3,0,1,0,3,1,3,0,0,1,0,2,0,0,4,4), +(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0), +(0,3,0,3,0,2,0,3,0,1,5,4,3,3,3,1,4,2,1,2,3,4,4,2,4,4,5,0,3,1,4,3,4,0,4,3,3,3,2,3,2,5,3,4,3,2,2,3,0,0,3,0,2,1,0,1,2,0,0,0,0,2,1,1,3,1,0,2,0,4,0,3,4,4,4,5,2,0,2,0,0,1,3), +(0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,1,1,1,0,0,1,1,0,0,0,4,2,1,1,0,1,0,3,2,0,0,3,1,1,1,2,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,0,1,0,0,0,2,0,0,0,1,4,0,4,2,1,0,0,0,0,0,1), +(0,0,0,0,0,0,0,0,0,1,0,1,0,0,0,0,1,0,0,0,0,0,0,1,0,1,0,0,0,0,3,1,0,0,0,2,0,2,1,0,0,1,2,1,0,1,1,0,0,3,0,0,0,0,0,0,0,0,0,0,0,1,3,1,0,0,0,0,0,1,0,0,2,1,0,0,0,0,0,0,0,0,2), +(0,4,0,4,0,4,0,3,0,4,4,3,4,2,4,3,2,0,4,4,4,3,5,3,5,3,3,2,4,2,4,3,4,3,1,4,0,2,3,4,4,4,3,3,3,4,4,4,3,4,1,3,4,3,2,1,2,1,3,3,3,4,4,3,3,5,0,4,0,3,0,4,3,3,3,2,1,0,3,0,0,3,3), +(0,4,0,3,0,3,0,3,0,3,5,5,3,3,3,3,4,3,4,3,3,3,4,4,4,3,3,3,3,4,3,5,3,3,1,3,2,4,5,5,5,5,4,3,4,5,5,3,2,2,3,3,3,3,2,3,3,1,2,3,2,4,3,3,3,4,0,4,0,2,0,4,3,2,2,1,2,0,3,0,0,4,1), +) + +class JapaneseContextAnalysis(object): + NUM_OF_CATEGORY = 6 + DONT_KNOW = -1 + ENOUGH_REL_THRESHOLD = 100 + MAX_REL_THRESHOLD = 1000 + MINIMUM_DATA_THRESHOLD = 4 + + def __init__(self): + self._total_rel = None + self._rel_sample = None + self._need_to_skip_char_num = None + self._last_char_order = None + self._done = None + self.reset() + + def reset(self): + self._total_rel = 0 # total sequence received + # category counters, each integer counts sequence in its category + self._rel_sample = [0] * self.NUM_OF_CATEGORY + # if last byte in current buffer is not the last byte of a character, + # we need to know how many bytes to skip in next buffer + self._need_to_skip_char_num = 0 + self._last_char_order = -1 # The order of previous char + # If this flag is set to True, detection is done and conclusion has + # been made + self._done = False + + def feed(self, byte_str, num_bytes): + if self._done: + return + + # The buffer we got is byte oriented, and a character may span in more than one + # buffers. In case the last one or two byte in last buffer is not + # complete, we record how many byte needed to complete that character + # and skip these bytes here. We can choose to record those bytes as + # well and analyse the character once it is complete, but since a + # character will not make much difference, by simply skipping + # this character will simply our logic and improve performance. 
+        i = self._need_to_skip_char_num
+        while i < num_bytes:
+            order, char_len = self.get_order(byte_str[i:i + 2])
+            i += char_len
+            if i > num_bytes:
+                self._need_to_skip_char_num = i - num_bytes
+                self._last_char_order = -1
+            else:
+                if (order != -1) and (self._last_char_order != -1):
+                    self._total_rel += 1
+                    if self._total_rel > self.MAX_REL_THRESHOLD:
+                        self._done = True
+                        break
+                    self._rel_sample[jp2CharContext[self._last_char_order][order]] += 1
+                self._last_char_order = order
+
+    def got_enough_data(self):
+        return self._total_rel > self.ENOUGH_REL_THRESHOLD
+
+    def get_confidence(self):
+        # This is just one way to calculate confidence; it works well in
+        # practice.
+        if self._total_rel > self.MINIMUM_DATA_THRESHOLD:
+            return (self._total_rel - self._rel_sample[0]) / self._total_rel
+        else:
+            return self.DONT_KNOW
+
+    def get_order(self, byte_str):
+        return -1, 1
+
+
+class SJISContextAnalysis(JapaneseContextAnalysis):
+    def __init__(self):
+        super(SJISContextAnalysis, self).__init__()
+        self._charset_name = "SHIFT_JIS"
+
+    @property
+    def charset_name(self):
+        return self._charset_name
+
+    def get_order(self, byte_str):
+        if not byte_str:
+            return -1, 1
+        # find out current char's byte length
+        first_char = byte_str[0]
+        if (0x81 <= first_char <= 0x9F) or (0xE0 <= first_char <= 0xFC):
+            char_len = 2
+            if (first_char == 0x87) or (0xFA <= first_char <= 0xFC):
+                self._charset_name = "CP932"
+        else:
+            char_len = 1
+
+        # return its order if it is hiragana; 0x82 is the Shift_JIS lead
+        # byte for hiragana (0x829F - 0x82F1)
+        if len(byte_str) > 1:
+            second_char = byte_str[1]
+            if (first_char == 0x82) and (0x9F <= second_char <= 0xF1):
+                return second_char - 0x9F, char_len
+
+        return -1, char_len
+
+
+class EUCJPContextAnalysis(JapaneseContextAnalysis):
+    def get_order(self, byte_str):
+        if not byte_str:
+            return -1, 1
+        # find out current char's byte length
+        first_char = byte_str[0]
+        if (first_char == 0x8E) or (0xA1 <= first_char <= 0xFE):
+            char_len = 2
+        elif first_char == 0x8F:
+            char_len = 3
+        else:
+            char_len = 1
+
+        # return its order if it is hiragana
+        if len(byte_str) > 1:
+            second_char = byte_str[1]
+            if (first_char == 0xA4) and (0xA1 <= second_char <= 0xF3):
+                return second_char - 0xA1, char_len
+
+        return -1, char_len
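+
+# A short usage sketch (an illustrative addition, not part of the original
+# module): feeding EUC-JP hiragana drives the 2-char-sequence counters via
+# jp2CharContext. The sample bytes spell hiragana "konnichiwa" in EUC-JP,
+# and the split at an odd offset exercises the skip logic in feed() for a
+# character cut in half between buffers. With the 0x82 lead byte in
+# SJISContextAnalysis.get_order, b'\x82\xa0' (Shift_JIS hiragana "a")
+# yields order 1 with a character length of 2.
+if __name__ == '__main__':
+    analyser = EUCJPContextAnalysis()
+    sample = b'\xa4\xb3\xa4\xf3\xa4\xcb\xa4\xc1\xa4\xcf' * 30
+    analyser.feed(sample[:15], 15)                 # ends mid-character
+    analyser.feed(sample[15:], len(sample) - 15)   # resumes on a boundary
+    print(analyser.got_enough_data(), analyser.get_confidence())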
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/langbulgarianmodel.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/langbulgarianmodel.py
new file mode 100644
index 0000000000000000000000000000000000000000..2aa4fb2e22fc3bfa26b569273c6cb18c4c415dd9
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/langbulgarianmodel.py
@@ -0,0 +1,228 @@
+######################## BEGIN LICENSE BLOCK ########################
+# The Original Code is Mozilla Communicator client code.
+#
+# The Initial Developer of the Original Code is
+# Netscape Communications Corporation.
+# Portions created by the Initial Developer are Copyright (C) 1998
+# the Initial Developer. All Rights Reserved.
+#
+# Contributor(s):
+# Mark Pilgrim - port to Python
+#
+# This library is free software; you can redistribute it and/or
+# modify it under the terms of the GNU Lesser General Public
+# License as published by the Free Software Foundation; either
+# version 2.1 of the License, or (at your option) any later version.
+#
+# This library is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+# Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public
+# License along with this library; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
+# 02110-1301 USA
+######################### END LICENSE BLOCK #########################
+
+# 255: Control characters that usually do not exist in any text
+# 254: Carriage/Return
+# 253: symbol (punctuation) that does not belong to a word
+# 252: 0 - 9
+
+# Character Mapping Table:
+# this table is modified based on win1251BulgarianCharToOrderMap, so
+# only numbers < 64 are known to be valid
+
+Latin5_BulgarianCharToOrderMap = (
+255,255,255,255,255,255,255,255,255,255,254,255,255,254,255,255, # 00
+255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255, # 10
+253,253,253,253,253,253,253,253,253,253,253,253,253,253,253,253, # 20
+252,252,252,252,252,252,252,252,252,252,253,253,253,253,253,253, # 30
+253, 77, 90, 99,100, 72,109,107,101, 79,185, 81,102, 76, 94, 82, # 40
+110,186,108, 91, 74,119, 84, 96,111,187,115,253,253,253,253,253, # 50
+253, 65, 69, 70, 66, 63, 68,112,103, 92,194,104, 95, 86, 87, 71, # 60
+116,195, 85, 93, 97,113,196,197,198,199,200,253,253,253,253,253, # 70
+194,195,196,197,198,199,200,201,202,203,204,205,206,207,208,209, # 80
+210,211,212,213,214,215,216,217,218,219,220,221,222,223,224,225, # 90
+ 81,226,227,228,229,230,105,231,232,233,234,235,236, 45,237,238, # a0
+ 31, 32, 35, 43, 37, 44, 55, 47, 40, 59, 33, 46, 38, 36, 41, 30, # b0
+ 39, 28, 34, 51, 48, 49, 53, 50, 54, 57, 61,239, 67,240, 60, 56, # c0
+  1, 18,  9, 20, 11,  3, 23, 15,  2, 26, 12, 10, 14,  6,  4, 13, # d0
+  7,  8,  5, 19, 29, 25, 22, 21, 27, 24, 17, 75, 52,241, 42, 16, # e0
+ 62,242,243,244, 58,245, 98,246,247,248,249,250,251, 91,252,253, # f0
+)
+
+win1251BulgarianCharToOrderMap = (
+255,255,255,255,255,255,255,255,255,255,254,255,255,254,255,255, # 00
+255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255, # 10
+253,253,253,253,253,253,253,253,253,253,253,253,253,253,253,253, # 20
+252,252,252,252,252,252,252,252,252,252,253,253,253,253,253,253, # 30
+253, 77, 90, 99,100, 72,109,107,101, 79,185, 81,102, 76, 94, 82, # 40
+110,186,108, 91, 74,119, 84, 96,111,187,115,253,253,253,253,253, # 50
+253, 65, 69, 70, 66, 63, 68,112,103, 92,194,104, 95, 86, 87, 71, # 60
+116,195, 85, 93, 97,113,196,197,198,199,200,253,253,253,253,253, # 70
+206,207,208,209,210,211,212,213,120,214,215,216,217,218,219,220, # 80
+221, 78, 64, 83,121, 98,117,105,222,223,224,225,226,227,228,229, # 90
+ 88,230,231,232,233,122, 89,106,234,235,236,237,238, 45,239,240, # a0
+ 73, 80,118,114,241,242,243,244,245, 62, 58,246,247,248,249,250, # b0
+ 31, 32, 35, 43, 37, 44, 55, 47, 40, 59, 33, 46, 38, 36, 41, 30, # c0
+ 39, 28, 34, 51, 48, 49, 53, 50, 54, 57, 61,251, 67,252, 60, 56, # d0
+  1, 18,  9, 20, 11,  3, 23, 15,  2, 26, 12, 10, 14,  6,  4, 13, # e0
+  7,  8,  5, 19, 29, 25, 22, 21, 27, 24, 17, 75, 52,253, 42, 16, # f0
+)
+
+# Model Table:
+# total sequences: 100%
+# first 512 sequences: 96.9392%
+# first 1024 sequences: 3.0618%
+# rest sequences: 0.2992%
+# negative sequences: 0.0020%
+BulgarianLangModel = (
+0,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,2,3,3,3,3,3,3,3,3,2,3,3,3,3,3,
+3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,0,3,3,3,2,2,3,2,2,1,2,2,
+3,1,3,3,2,3,3,3,3,3,3,3,3,3,3,3,3,0,3,3,3,3,3,3,3,3,3,3,0,3,0,1,
+0,0,0,0,0,0,0,0,0,0,1,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,
+3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,2,3,2,3,3,3,3,3,3,3,3,0,3,1,0,
+0,1,0,0,0,0,0,0,0,0,1,1,0,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1, +3,2,2,2,3,3,3,3,3,3,3,3,3,3,3,3,3,1,3,2,3,3,3,3,3,3,3,3,0,3,0,0, +0,0,0,0,0,0,0,0,0,0,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +3,2,3,3,2,3,3,3,3,3,3,3,3,3,3,3,3,1,3,2,3,3,3,3,3,3,3,3,0,3,0,0, +0,0,0,0,0,0,0,0,0,0,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +3,3,3,3,3,3,3,3,3,3,3,2,3,2,2,1,3,3,3,3,2,2,2,1,1,2,0,1,0,1,0,0, +0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,1, +3,3,3,3,3,3,3,2,3,2,2,3,3,1,1,2,3,3,2,3,3,3,3,2,1,2,0,2,0,3,0,0, +0,0,0,0,0,0,0,1,0,0,2,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,1, +3,3,3,3,3,3,3,1,3,3,3,3,3,2,3,2,3,3,3,3,3,2,3,3,1,3,0,3,0,2,0,0, +0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,1, +3,3,3,3,3,3,3,3,1,3,3,2,3,3,3,1,3,3,2,3,2,2,2,0,0,2,0,2,0,2,0,0, +0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,1, +3,3,3,3,3,3,3,3,3,0,3,3,3,2,2,3,3,3,1,2,2,3,2,1,1,2,0,2,0,0,0,0, +1,0,0,0,0,0,0,0,0,0,2,0,0,1,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,1, +3,3,3,3,3,3,3,2,3,3,1,2,3,2,2,2,3,3,3,3,3,2,2,3,1,2,0,2,1,2,0,0, +0,0,0,0,0,0,0,0,0,0,3,0,0,1,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,1, +3,3,3,3,3,1,3,3,3,3,3,2,3,3,3,2,3,3,2,3,2,2,2,3,1,2,0,1,0,1,0,0, +0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,1, +3,3,3,3,3,3,3,3,3,3,3,1,1,1,2,2,1,3,1,3,2,2,3,0,0,1,0,1,0,1,0,0, +0,0,0,1,0,0,0,0,1,0,2,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,1, +3,3,3,3,3,2,2,3,2,2,3,1,2,1,1,1,2,3,1,3,1,2,2,0,1,1,1,1,0,1,0,0, +0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,1, +3,3,3,3,3,1,3,2,2,3,3,1,2,3,1,1,3,3,3,3,1,2,2,1,1,1,0,2,0,2,0,1, +0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,1, +3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,1,2,2,3,3,3,2,2,1,1,2,0,2,0,1,0,0, +0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,1, +3,0,1,2,1,3,3,2,3,3,3,3,3,2,3,2,1,0,3,1,2,1,2,1,2,3,2,1,0,1,0,0, +0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +1,1,1,2,3,3,3,3,3,3,3,3,3,3,3,3,0,0,3,1,3,3,2,3,3,2,2,2,0,1,0,0, +0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +2,3,3,3,3,0,3,3,3,3,3,2,1,1,2,1,3,3,0,3,1,1,1,1,3,2,0,1,0,0,0,0, +0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,1, +3,3,2,2,2,3,3,3,3,3,3,3,3,3,3,3,1,1,3,1,3,3,2,3,2,2,2,3,0,2,0,0, +0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +3,3,3,3,3,2,3,3,2,2,3,2,1,1,1,1,1,3,1,3,1,1,0,0,0,1,0,0,0,1,0,0, +0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0, +3,3,3,3,3,2,3,2,0,3,2,0,3,0,2,0,0,2,1,3,1,0,0,1,0,0,0,1,0,0,0,0, +0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1, +3,3,3,3,2,1,1,1,1,2,1,1,2,1,1,1,2,2,1,2,1,1,1,0,1,1,0,1,0,1,0,0, +0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,1, +3,3,3,3,2,1,3,1,1,2,1,3,2,1,1,0,1,2,3,2,1,1,1,0,0,0,0,0,0,0,0,0, +0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +2,3,3,3,3,2,2,1,0,1,0,0,1,0,0,0,2,1,0,3,0,0,1,0,0,0,0,0,0,0,0,0, +0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1, +3,3,3,2,3,2,3,3,1,3,2,1,1,1,2,1,1,2,1,3,0,1,0,0,0,1,0,0,0,0,0,0, +0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +3,1,1,2,2,3,3,2,3,2,2,2,3,1,2,2,1,1,2,1,1,2,2,0,1,1,0,1,0,2,0,0, +0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +3,3,3,3,2,1,3,1,0,2,2,1,3,2,1,0,0,2,0,2,0,1,0,0,0,0,0,0,0,1,0,0, +0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,1, +3,3,3,3,3,3,1,2,0,2,3,1,2,3,2,0,1,3,1,2,1,1,1,0,0,1,0,0,2,2,2,3, +2,2,2,2,1,2,1,1,2,2,1,1,2,0,1,1,1,0,0,1,1,0,0,1,1,0,0,0,1,1,0,1, 
+3,3,3,3,3,2,1,2,2,1,2,0,2,0,1,0,1,2,1,2,1,1,0,0,0,1,0,1,0,0,0,0, +0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,1, +3,3,2,3,3,1,1,3,1,0,3,2,1,0,0,0,1,2,0,2,0,1,0,0,0,1,0,1,2,1,2,2, +1,1,1,1,1,1,1,2,2,2,1,1,1,1,1,1,1,0,1,2,1,1,1,0,0,0,0,0,1,1,0,0, +3,1,0,1,0,2,3,2,2,2,3,2,2,2,2,2,1,0,2,1,2,1,1,1,0,1,2,1,2,2,2,1, +1,1,2,2,2,2,1,2,1,1,0,1,2,1,2,2,2,1,1,1,0,1,1,1,1,2,0,1,0,0,0,0, +2,3,2,3,3,0,0,2,1,0,2,1,0,0,0,0,2,3,0,2,0,0,0,0,0,1,0,0,2,0,1,2, +2,1,2,1,2,2,1,1,1,2,1,1,1,0,1,2,2,1,1,1,1,1,0,1,1,1,0,0,1,2,0,0, +3,3,2,2,3,0,2,3,1,1,2,0,0,0,1,0,0,2,0,2,0,0,0,1,0,1,0,1,2,0,2,2, +1,1,1,1,2,1,0,1,2,2,2,1,1,1,1,1,1,1,0,1,1,1,0,0,0,0,0,0,1,1,0,0, +2,3,2,3,3,0,0,3,0,1,1,0,1,0,0,0,2,2,1,2,0,0,0,0,0,0,0,0,2,0,1,2, +2,2,1,1,1,1,1,2,2,2,1,0,2,0,1,0,1,0,0,1,0,1,0,0,1,0,0,0,0,1,0,0, +3,3,3,3,2,2,2,2,2,0,2,1,1,1,1,2,1,2,1,1,0,2,0,1,0,1,0,0,2,0,1,2, +1,1,1,1,1,1,1,2,2,1,1,0,2,0,1,0,2,0,0,1,1,1,0,0,2,0,0,0,1,1,0,0, +2,3,3,3,3,1,0,0,0,0,0,0,0,0,0,0,2,0,0,1,1,0,0,0,0,0,0,1,2,0,1,2, +2,2,2,1,1,2,1,1,2,2,2,1,2,0,1,1,1,1,1,1,0,1,1,1,1,0,0,1,1,1,0,0, +2,3,3,3,3,0,2,2,0,2,1,0,0,0,1,1,1,2,0,2,0,0,0,3,0,0,0,0,2,0,2,2, +1,1,1,2,1,2,1,1,2,2,2,1,2,0,1,1,1,0,1,1,1,1,0,2,1,0,0,0,1,1,0,0, +2,3,3,3,3,0,2,1,0,0,2,0,0,0,0,0,1,2,0,2,0,0,0,0,0,0,0,0,2,0,1,2, +1,1,1,2,1,1,1,1,2,2,2,0,1,0,1,1,1,0,0,1,1,1,0,0,1,0,0,0,0,1,0,0, +3,3,2,2,3,0,1,0,1,0,0,0,0,0,0,0,1,1,0,3,0,0,0,0,0,0,0,0,1,0,2,2, +1,1,1,1,1,2,1,1,2,2,1,2,2,1,0,1,1,1,1,1,0,1,0,0,1,0,0,0,1,1,0,0, +3,1,0,1,0,2,2,2,2,3,2,1,1,1,2,3,0,0,1,0,2,1,1,0,1,1,1,1,2,1,1,1, +1,2,2,1,2,1,2,2,1,1,0,1,2,1,2,2,1,1,1,0,0,1,1,1,2,1,0,1,0,0,0,0, +2,1,0,1,0,3,1,2,2,2,2,1,2,2,1,1,1,0,2,1,2,2,1,1,2,1,1,0,2,1,1,1, +1,2,2,2,2,2,2,2,1,2,0,1,1,0,2,1,1,1,1,1,0,0,1,1,1,1,0,1,0,0,0,0, +2,1,1,1,1,2,2,2,2,1,2,2,2,1,2,2,1,1,2,1,2,3,2,2,1,1,1,1,0,1,0,0, +0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +2,2,2,3,2,0,1,2,0,1,2,1,1,0,1,0,1,2,1,2,0,0,0,1,1,0,0,0,1,0,0,2, +1,1,0,0,1,1,0,1,1,1,1,0,2,0,1,1,1,0,0,1,1,0,0,0,0,1,0,0,0,1,0,0, +2,0,0,0,0,1,2,2,2,2,2,2,2,1,2,1,1,1,1,1,1,1,0,1,1,1,1,1,2,1,1,1, +1,2,2,2,2,1,1,2,1,2,1,1,1,0,2,1,2,1,1,1,0,2,1,1,1,1,0,1,0,0,0,0, +3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,0, +1,1,0,1,0,1,1,1,1,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +2,2,2,3,2,0,0,0,0,1,0,0,0,0,0,0,1,1,0,2,0,0,0,0,0,0,0,0,1,0,1,2, +1,1,1,1,1,1,0,0,2,2,2,2,2,0,1,1,0,1,1,1,1,1,0,0,1,0,0,0,1,1,0,1, +2,3,1,2,1,0,1,1,0,2,2,2,0,0,1,0,0,1,1,1,1,0,0,0,0,0,0,0,1,0,1,2, +1,1,1,1,2,1,1,1,1,1,1,1,1,0,1,1,0,1,0,1,0,1,0,0,1,0,0,0,0,1,0,0, +2,2,2,2,2,0,0,2,0,0,2,0,0,0,0,0,0,1,0,1,0,0,0,0,0,0,0,0,2,0,2,2, +1,1,1,1,1,0,0,1,2,1,1,0,1,0,1,0,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0, +1,2,2,2,2,0,0,2,0,1,1,0,0,0,1,0,0,2,0,2,0,0,0,0,0,0,0,0,0,0,1,1, +0,0,0,1,1,1,1,1,1,1,1,1,1,0,1,0,0,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0, +1,2,2,3,2,0,0,1,0,0,1,0,0,0,0,0,0,1,0,2,0,0,0,1,0,0,0,0,0,0,0,2, +1,1,0,0,1,0,0,0,1,1,0,0,1,0,1,1,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0, +2,1,2,2,2,1,2,1,2,2,1,1,2,1,1,1,0,1,1,1,1,2,0,1,0,1,1,1,1,0,1,1, +1,1,2,1,1,1,1,1,1,0,0,1,2,1,1,1,1,1,1,0,0,1,1,1,0,0,0,0,0,0,0,0, +1,0,0,1,3,1,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0, +0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +2,2,2,2,1,0,0,1,0,2,0,0,0,0,0,1,1,1,0,1,0,0,0,0,0,0,0,0,2,0,0,1, +0,2,0,1,0,0,1,1,2,0,1,0,1,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0, +1,2,2,2,2,0,1,1,0,2,1,0,1,1,1,0,0,1,0,2,0,1,0,0,0,0,0,0,0,0,0,1, +0,1,0,0,1,0,0,0,1,1,0,0,1,0,0,1,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0, +2,2,2,2,2,0,0,1,0,0,0,1,0,1,0,0,0,1,0,1,0,0,0,0,0,0,0,0,0,0,0,1, 
+0,1,0,1,1,1,0,0,1,1,1,0,1,0,0,0,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,
+2,0,1,0,0,1,2,1,1,1,1,1,1,2,2,1,0,0,1,0,1,0,0,0,0,1,1,1,1,0,0,0,
+1,1,2,1,1,1,1,0,0,0,1,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
+2,2,1,2,1,0,0,1,0,0,0,0,0,0,0,0,1,1,0,1,0,0,0,0,0,0,0,0,0,0,0,1,
+0,0,0,0,0,0,0,0,1,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
+3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
+0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
+1,0,0,1,2,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,
+0,1,1,0,1,1,1,0,0,1,0,0,1,0,1,0,0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,
+1,0,1,0,0,1,1,1,1,1,1,1,1,1,1,1,0,0,1,0,2,0,0,2,0,1,0,0,1,0,0,1,
+1,1,0,0,1,1,0,1,0,0,0,1,0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,
+0,0,0,0,0,0,1,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,0,
+1,1,1,1,1,1,1,2,0,0,0,0,0,0,2,1,0,1,1,0,0,1,1,1,0,1,0,0,0,0,0,0,
+2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
+0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
+1,0,0,1,1,1,1,1,1,1,1,1,1,1,1,1,1,0,1,0,1,1,0,1,1,1,1,1,0,1,0,0,
+0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,
+)
+
+Latin5BulgarianModel = {
+    'char_to_order_map': Latin5_BulgarianCharToOrderMap,
+    'precedence_matrix': BulgarianLangModel,
+    'typical_positive_ratio': 0.969392,
+    'keep_english_letter': False,
+    'charset_name': "ISO-8859-5",
+    'language': 'Bulgarian',
+}
+
+Win1251BulgarianModel = {
+    'char_to_order_map': win1251BulgarianCharToOrderMap,
+    'precedence_matrix': BulgarianLangModel,
+    'typical_positive_ratio': 0.969392,
+    'keep_english_letter': False,
+    'charset_name': "windows-1251",
+    'language': 'Bulgarian',
+}
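+
+# A brief sketch of how these model dictionaries are consumed (simplified
+# from the single-byte prober's scoring loop; the helper name and sample
+# bytes are illustrative only): every byte is mapped through
+# char_to_order_map, and a pair of frequent letters (both orders < 64)
+# indexes the 64x64 precedence_matrix to fetch a likelihood category 0-3.
+def _sequence_category(byte_pair, model=Win1251BulgarianModel):
+    first = model['char_to_order_map'][byte_pair[0]]
+    second = model['char_to_order_map'][byte_pair[1]]
+    if first < 64 and second < 64:
+        return model['precedence_matrix'][first * 64 + second]
+    return None  # digits, punctuation and rare letters are not modelled
+
+# e.g. _sequence_category(b'\xed\xee') rates the windows-1251 letter pair
+# for Cyrillic "но", a common sequence in Bulgarian text.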
+# +# You should have received a copy of the GNU Lesser General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA +# 02110-1301 USA +######################### END LICENSE BLOCK ######################### + +# KOI8-R language model +# Character Mapping Table: +KOI8R_char_to_order_map = ( +255,255,255,255,255,255,255,255,255,255,254,255,255,254,255,255, # 00 +255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255, # 10 +253,253,253,253,253,253,253,253,253,253,253,253,253,253,253,253, # 20 +252,252,252,252,252,252,252,252,252,252,253,253,253,253,253,253, # 30 +253,142,143,144,145,146,147,148,149,150,151,152, 74,153, 75,154, # 40 +155,156,157,158,159,160,161,162,163,164,165,253,253,253,253,253, # 50 +253, 71,172, 66,173, 65,174, 76,175, 64,176,177, 77, 72,178, 69, # 60 + 67,179, 78, 73,180,181, 79,182,183,184,185,253,253,253,253,253, # 70 +191,192,193,194,195,196,197,198,199,200,201,202,203,204,205,206, # 80 +207,208,209,210,211,212,213,214,215,216,217,218,219,220,221,222, # 90 +223,224,225, 68,226,227,228,229,230,231,232,233,234,235,236,237, # a0 +238,239,240,241,242,243,244,245,246,247,248,249,250,251,252,253, # b0 + 27, 3, 21, 28, 13, 2, 39, 19, 26, 4, 23, 11, 8, 12, 5, 1, # c0 + 15, 16, 9, 7, 6, 14, 24, 10, 17, 18, 20, 25, 30, 29, 22, 54, # d0 + 59, 37, 44, 58, 41, 48, 53, 46, 55, 42, 60, 36, 49, 38, 31, 34, # e0 + 35, 43, 45, 32, 40, 52, 56, 33, 61, 62, 51, 57, 47, 63, 50, 70, # f0 +) + +win1251_char_to_order_map = ( +255,255,255,255,255,255,255,255,255,255,254,255,255,254,255,255, # 00 +255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255, # 10 +253,253,253,253,253,253,253,253,253,253,253,253,253,253,253,253, # 20 +252,252,252,252,252,252,252,252,252,252,253,253,253,253,253,253, # 30 +253,142,143,144,145,146,147,148,149,150,151,152, 74,153, 75,154, # 40 +155,156,157,158,159,160,161,162,163,164,165,253,253,253,253,253, # 50 +253, 71,172, 66,173, 65,174, 76,175, 64,176,177, 77, 72,178, 69, # 60 + 67,179, 78, 73,180,181, 79,182,183,184,185,253,253,253,253,253, # 70 +191,192,193,194,195,196,197,198,199,200,201,202,203,204,205,206, +207,208,209,210,211,212,213,214,215,216,217,218,219,220,221,222, +223,224,225,226,227,228,229,230,231,232,233,234,235,236,237,238, +239,240,241,242,243,244,245,246, 68,247,248,249,250,251,252,253, + 37, 44, 33, 46, 41, 48, 56, 51, 42, 60, 36, 49, 38, 31, 34, 35, + 45, 32, 40, 52, 53, 55, 58, 50, 57, 63, 70, 62, 61, 47, 59, 43, + 3, 21, 10, 19, 13, 2, 24, 20, 4, 23, 11, 8, 12, 5, 1, 15, + 9, 7, 6, 14, 39, 26, 28, 22, 25, 29, 54, 18, 17, 30, 27, 16, +) + +latin5_char_to_order_map = ( +255,255,255,255,255,255,255,255,255,255,254,255,255,254,255,255, # 00 +255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255, # 10 +253,253,253,253,253,253,253,253,253,253,253,253,253,253,253,253, # 20 +252,252,252,252,252,252,252,252,252,252,253,253,253,253,253,253, # 30 +253,142,143,144,145,146,147,148,149,150,151,152, 74,153, 75,154, # 40 +155,156,157,158,159,160,161,162,163,164,165,253,253,253,253,253, # 50 +253, 71,172, 66,173, 65,174, 76,175, 64,176,177, 77, 72,178, 69, # 60 + 67,179, 78, 73,180,181, 79,182,183,184,185,253,253,253,253,253, # 70 +191,192,193,194,195,196,197,198,199,200,201,202,203,204,205,206, +207,208,209,210,211,212,213,214,215,216,217,218,219,220,221,222, +223,224,225,226,227,228,229,230,231,232,233,234,235,236,237,238, + 37, 44, 33, 46, 41, 48, 56, 51, 42, 60, 36, 49, 38, 31, 34, 35, + 45, 32, 40, 52, 53, 55, 58, 50, 57, 63, 70, 62, 61, 47, 59, 
43, + 3, 21, 10, 19, 13, 2, 24, 20, 4, 23, 11, 8, 12, 5, 1, 15, + 9, 7, 6, 14, 39, 26, 28, 22, 25, 29, 54, 18, 17, 30, 27, 16, +239, 68,240,241,242,243,244,245,246,247,248,249,250,251,252,255, +) + +macCyrillic_char_to_order_map = ( +255,255,255,255,255,255,255,255,255,255,254,255,255,254,255,255, # 00 +255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255, # 10 +253,253,253,253,253,253,253,253,253,253,253,253,253,253,253,253, # 20 +252,252,252,252,252,252,252,252,252,252,253,253,253,253,253,253, # 30 +253,142,143,144,145,146,147,148,149,150,151,152, 74,153, 75,154, # 40 +155,156,157,158,159,160,161,162,163,164,165,253,253,253,253,253, # 50 +253, 71,172, 66,173, 65,174, 76,175, 64,176,177, 77, 72,178, 69, # 60 + 67,179, 78, 73,180,181, 79,182,183,184,185,253,253,253,253,253, # 70 + 37, 44, 33, 46, 41, 48, 56, 51, 42, 60, 36, 49, 38, 31, 34, 35, + 45, 32, 40, 52, 53, 55, 58, 50, 57, 63, 70, 62, 61, 47, 59, 43, +191,192,193,194,195,196,197,198,199,200,201,202,203,204,205,206, +207,208,209,210,211,212,213,214,215,216,217,218,219,220,221,222, +223,224,225,226,227,228,229,230,231,232,233,234,235,236,237,238, +239,240,241,242,243,244,245,246,247,248,249,250,251,252, 68, 16, + 3, 21, 10, 19, 13, 2, 24, 20, 4, 23, 11, 8, 12, 5, 1, 15, + 9, 7, 6, 14, 39, 26, 28, 22, 25, 29, 54, 18, 17, 30, 27,255, +) + +IBM855_char_to_order_map = ( +255,255,255,255,255,255,255,255,255,255,254,255,255,254,255,255, # 00 +255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255, # 10 +253,253,253,253,253,253,253,253,253,253,253,253,253,253,253,253, # 20 +252,252,252,252,252,252,252,252,252,252,253,253,253,253,253,253, # 30 +253,142,143,144,145,146,147,148,149,150,151,152, 74,153, 75,154, # 40 +155,156,157,158,159,160,161,162,163,164,165,253,253,253,253,253, # 50 +253, 71,172, 66,173, 65,174, 76,175, 64,176,177, 77, 72,178, 69, # 60 + 67,179, 78, 73,180,181, 79,182,183,184,185,253,253,253,253,253, # 70 +191,192,193,194, 68,195,196,197,198,199,200,201,202,203,204,205, +206,207,208,209,210,211,212,213,214,215,216,217, 27, 59, 54, 70, + 3, 37, 21, 44, 28, 58, 13, 41, 2, 48, 39, 53, 19, 46,218,219, +220,221,222,223,224, 26, 55, 4, 42,225,226,227,228, 23, 60,229, +230,231,232,233,234,235, 11, 36,236,237,238,239,240,241,242,243, + 8, 49, 12, 38, 5, 31, 1, 34, 15,244,245,246,247, 35, 16,248, + 43, 9, 45, 7, 32, 6, 40, 14, 52, 24, 56, 10, 33, 17, 61,249, +250, 18, 62, 20, 51, 25, 57, 30, 47, 29, 63, 22, 50,251,252,255, +) + +IBM866_char_to_order_map = ( +255,255,255,255,255,255,255,255,255,255,254,255,255,254,255,255, # 00 +255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255, # 10 +253,253,253,253,253,253,253,253,253,253,253,253,253,253,253,253, # 20 +252,252,252,252,252,252,252,252,252,252,253,253,253,253,253,253, # 30 +253,142,143,144,145,146,147,148,149,150,151,152, 74,153, 75,154, # 40 +155,156,157,158,159,160,161,162,163,164,165,253,253,253,253,253, # 50 +253, 71,172, 66,173, 65,174, 76,175, 64,176,177, 77, 72,178, 69, # 60 + 67,179, 78, 73,180,181, 79,182,183,184,185,253,253,253,253,253, # 70 + 37, 44, 33, 46, 41, 48, 56, 51, 42, 60, 36, 49, 38, 31, 34, 35, + 45, 32, 40, 52, 53, 55, 58, 50, 57, 63, 70, 62, 61, 47, 59, 43, + 3, 21, 10, 19, 13, 2, 24, 20, 4, 23, 11, 8, 12, 5, 1, 15, +191,192,193,194,195,196,197,198,199,200,201,202,203,204,205,206, +207,208,209,210,211,212,213,214,215,216,217,218,219,220,221,222, +223,224,225,226,227,228,229,230,231,232,233,234,235,236,237,238, + 9, 7, 6, 14, 39, 26, 28, 22, 25, 29, 54, 18, 17, 30, 27, 16, +239, 
68,240,241,242,243,244,245,246,247,248,249,250,251,252,255, +) + +# Model Table: +# total sequences: 100% +# first 512 sequences: 97.6601% +# first 1024 sequences: 2.3389% +# rest sequences: 0.1237% +# negative sequences: 0.0009% +RussianLangModel = ( +0,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,1,1,3,3,3,3,1,3,3,3,2,3,2,3,3, +3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,0,3,2,2,2,2,2,0,0,2, +3,3,3,2,3,3,3,3,3,3,3,3,3,3,2,3,3,0,0,3,3,3,3,3,3,3,3,3,2,3,2,0, +0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +3,3,3,2,2,3,3,3,3,3,3,3,3,3,2,3,3,0,0,3,3,3,3,3,3,3,3,2,3,3,1,0, +0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +3,2,3,2,3,3,3,3,3,3,3,3,3,3,3,3,3,0,0,3,3,3,3,3,3,3,3,3,3,3,2,1, +0,0,0,0,0,0,0,2,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +3,3,3,3,3,3,3,3,3,3,3,3,3,3,2,3,3,0,0,3,3,3,3,3,3,3,3,3,3,3,2,1, +0,0,0,0,0,1,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +3,3,3,3,3,3,3,3,2,2,2,3,1,3,3,1,3,3,3,3,2,2,3,0,2,2,2,3,3,2,1,0, +0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0, +3,3,3,3,3,3,2,3,3,3,3,3,2,2,3,2,3,3,3,2,1,2,2,0,1,2,2,2,2,2,2,0, +0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0, +3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,2,2,2,3,0,2,2,3,3,2,1,2,0, +0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,1,0,0,2,0,0,0,0,0,0,0,0,0, +3,3,3,3,3,3,2,3,3,1,2,3,2,2,3,2,3,3,3,3,2,2,3,0,3,2,2,3,1,1,1,0, +0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +3,3,3,3,3,3,3,3,2,2,3,3,3,3,3,2,3,3,3,3,2,2,2,0,3,3,3,2,2,2,2,0, +0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +3,3,3,3,3,3,3,3,3,3,2,3,2,3,3,3,3,3,3,2,3,2,2,0,1,3,2,1,2,2,1,0, +0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0, +3,3,3,3,3,3,3,3,3,3,3,2,1,1,3,0,1,1,1,1,2,1,1,0,2,2,2,1,2,0,1,0, +0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +3,3,3,3,3,3,2,3,3,2,2,2,2,1,3,2,3,2,3,2,1,2,2,0,1,1,2,1,2,1,2,0, +0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +3,3,3,3,3,3,3,3,3,3,3,3,2,2,3,2,3,3,3,2,2,2,2,0,2,2,2,2,3,1,1,0, +0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0, +3,2,3,2,2,3,3,3,3,3,3,3,3,3,1,3,2,0,0,3,3,3,3,2,3,3,3,3,2,3,2,0, +0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +2,3,3,3,3,3,2,2,3,3,0,2,1,0,3,2,3,2,3,0,0,1,2,0,0,1,0,1,2,1,1,0, +0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +3,0,3,0,2,3,3,3,3,2,3,3,3,3,1,2,2,0,0,2,3,2,2,2,3,2,3,2,2,3,0,0, +0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +3,2,3,0,2,3,2,3,0,1,2,3,3,2,0,2,3,0,0,2,3,2,2,0,1,3,1,3,2,2,1,0, +0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +3,1,3,0,2,3,3,3,3,3,3,3,3,2,1,3,2,0,0,2,2,3,3,3,2,3,3,0,2,2,0,0, +0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +3,3,3,3,3,3,2,2,3,3,2,2,2,3,3,0,0,1,1,1,1,1,2,0,0,1,1,1,1,0,1,0, +0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +3,3,3,3,3,3,2,2,3,3,3,3,3,3,3,0,3,2,3,3,2,3,2,0,2,1,0,1,1,0,1,0, +0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0, +3,3,3,3,3,3,2,3,3,3,2,2,2,2,3,1,3,2,3,1,1,2,1,0,2,2,2,2,1,3,1,0, +0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0, +2,2,3,3,3,3,3,1,2,2,1,3,1,0,3,0,0,3,0,0,0,1,1,0,1,2,1,0,0,0,0,0, +0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +3,2,2,1,1,3,3,3,2,2,1,2,2,3,1,1,2,0,0,2,2,1,3,0,0,2,1,1,2,1,1,0, +0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +3,2,3,3,3,3,1,2,2,2,1,2,1,3,3,1,1,2,1,2,1,2,2,0,2,0,0,1,1,0,1,0, +0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 
+2,3,3,3,3,3,2,1,3,2,2,3,2,0,3,2,0,3,0,1,0,1,1,0,0,1,1,1,1,0,1,0, +0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +3,3,2,3,3,3,2,2,2,3,3,1,2,1,2,1,0,1,0,1,1,0,1,0,0,2,1,1,1,0,1,0, +0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0, +3,1,1,2,1,2,3,3,2,2,1,2,2,3,0,2,1,0,0,2,2,3,2,1,2,2,2,2,2,3,1,0, +0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +3,3,3,3,3,1,1,0,1,1,2,2,1,1,3,0,0,1,3,1,1,1,0,0,0,1,0,1,1,0,0,0, +0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +2,1,3,3,3,2,0,0,0,2,1,0,1,0,2,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +2,0,1,0,0,2,3,2,2,2,1,2,2,2,1,2,1,0,0,1,1,1,0,2,0,1,1,1,0,0,1,1, +1,0,0,0,0,0,1,2,0,0,0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0, +2,3,3,3,3,0,0,0,0,1,0,0,0,0,3,0,1,2,1,0,0,0,0,0,0,0,1,1,0,0,1,1, +1,0,1,0,1,2,0,0,1,1,2,1,0,1,1,1,1,0,1,1,1,1,0,1,0,0,1,0,0,1,1,0, +2,2,3,2,2,2,3,1,2,2,2,2,2,2,2,2,1,1,1,1,1,1,1,0,1,0,1,1,1,0,2,1, +1,1,1,1,1,1,1,1,2,1,1,1,1,1,1,1,1,1,1,0,1,0,1,1,0,1,1,1,0,1,1,0, +3,3,3,2,2,2,2,3,2,2,1,1,2,2,2,2,1,1,3,1,2,1,2,0,0,1,1,0,1,0,2,1, +1,1,1,1,1,2,1,0,1,1,1,1,0,1,0,0,1,1,0,0,1,0,1,0,0,1,0,0,0,1,1,0, +2,0,0,1,0,3,2,2,2,2,1,2,1,2,1,2,0,0,0,2,1,2,2,1,1,2,2,0,1,1,0,2, +1,1,1,1,1,0,1,1,1,2,1,1,1,2,1,0,1,2,1,1,1,1,0,1,1,1,0,0,1,0,0,1, +1,3,2,2,2,1,1,1,2,3,0,0,0,0,2,0,2,2,1,0,0,0,0,0,0,1,0,0,0,0,1,1, +1,0,1,1,0,1,0,1,1,0,1,1,0,2,0,0,1,1,0,0,1,0,0,0,0,0,0,0,0,1,1,0, +2,3,2,3,2,1,2,2,2,2,1,0,0,0,2,0,0,1,1,0,0,0,0,0,0,0,1,1,0,0,2,1, +1,1,2,1,0,2,0,0,1,0,1,0,0,1,0,0,1,1,0,1,1,0,0,0,0,0,1,0,0,0,0,0, +3,0,0,1,0,2,2,2,3,2,2,2,2,2,2,2,0,0,0,2,1,2,1,1,1,2,2,0,0,0,1,2, +1,1,1,1,1,0,1,2,1,1,1,1,1,1,1,0,1,1,1,1,1,1,0,1,1,1,1,1,1,0,0,1, +2,3,2,3,3,2,0,1,1,1,0,0,1,0,2,0,1,1,3,1,0,0,0,0,0,0,0,1,0,0,2,1, +1,1,1,1,1,1,1,0,1,0,1,1,1,1,0,1,1,1,0,0,1,1,0,1,0,0,0,0,0,0,1,0, +2,3,3,3,3,1,2,2,2,2,0,1,1,0,2,1,1,1,2,1,0,1,1,0,0,1,0,1,0,0,2,0, +0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +2,3,3,3,2,0,0,1,1,2,2,1,0,0,2,0,1,1,3,0,0,1,0,0,0,0,0,1,0,1,2,1, +1,1,2,0,1,1,1,0,1,0,1,1,0,1,0,1,1,1,1,0,1,0,0,0,0,0,0,1,0,1,1,0, +1,3,2,3,2,1,0,0,2,2,2,0,1,0,2,0,1,1,1,0,1,0,0,0,3,0,1,1,0,0,2,1, +1,1,1,0,1,1,0,0,0,0,1,1,0,1,0,0,2,1,1,0,1,0,0,0,1,0,1,0,0,1,1,0, +3,1,2,1,1,2,2,2,2,2,2,1,2,2,1,1,0,0,0,2,2,2,0,0,0,1,2,1,0,1,0,1, +2,1,1,1,1,1,1,1,1,1,1,1,1,1,1,0,2,1,1,1,0,1,0,1,1,0,1,1,1,0,0,1, +3,0,0,0,0,2,0,1,1,1,1,1,1,1,0,1,0,0,0,1,1,1,0,1,0,1,1,0,0,1,0,1, +1,1,0,0,1,0,0,0,1,0,1,1,0,0,1,0,1,0,1,0,0,0,0,1,0,0,0,1,0,0,0,1, +1,3,3,2,2,0,0,0,2,2,0,0,0,1,2,0,1,1,2,0,0,0,0,0,0,0,0,1,0,0,2,1, +0,1,1,0,0,1,1,0,0,0,1,1,0,1,1,0,1,1,0,0,1,0,0,0,0,0,0,0,0,0,1,0, +2,3,2,3,2,0,0,0,0,1,1,0,0,0,2,0,2,0,2,0,0,0,0,0,1,0,0,1,0,0,1,1, +1,1,2,0,1,2,1,0,1,1,2,1,1,1,1,1,2,1,1,0,1,0,0,1,1,1,1,1,0,1,1,0, +1,3,2,2,2,1,0,0,2,2,1,0,1,2,2,0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,1,1, +0,0,1,1,0,1,1,0,0,1,1,0,1,1,0,0,1,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0, +1,0,0,1,0,2,3,1,2,2,2,2,2,2,1,1,0,0,0,1,0,1,0,2,1,1,1,0,0,0,0,1, +1,1,0,1,1,0,1,1,1,1,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0, +2,0,2,0,0,1,0,3,2,1,2,1,2,2,0,1,0,0,0,2,1,0,0,2,1,1,1,1,0,2,0,2, +2,1,1,1,1,1,1,1,1,1,1,1,1,2,1,0,1,1,1,1,0,0,0,1,1,1,1,0,1,0,0,1, +1,2,2,2,2,1,0,0,1,0,0,0,0,0,2,0,1,1,1,1,0,0,0,0,1,0,1,2,0,0,2,0, +1,0,1,1,1,2,1,0,1,0,1,1,0,0,1,0,1,1,1,0,1,0,0,0,1,0,0,1,0,1,1,0, +2,1,2,2,2,0,3,0,1,1,0,0,0,0,2,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1, +0,0,0,1,1,1,0,0,1,0,1,0,0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,1,0,0, +1,2,2,3,2,2,0,0,1,1,2,0,1,2,1,0,1,0,1,0,0,1,0,0,0,0,0,0,0,0,0,1, 
+0,1,1,0,0,1,1,0,0,1,1,0,0,1,1,0,1,1,0,0,1,0,0,0,0,0,0,0,0,1,1,0, +2,2,1,1,2,1,2,2,2,2,2,1,2,2,0,1,0,0,0,1,2,2,2,1,2,1,1,1,1,1,2,1, +1,1,1,1,1,1,1,1,1,1,0,0,1,1,1,0,1,1,1,0,0,0,0,1,1,1,0,1,1,0,0,1, +1,2,2,2,2,0,1,0,2,2,0,0,0,0,2,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,2,0, +0,0,1,0,0,1,0,0,0,0,1,0,1,1,0,0,1,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0, +0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0, +0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +1,2,2,2,2,0,0,0,2,2,2,0,1,0,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,1,1, +0,1,1,0,0,1,1,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +1,2,2,2,2,0,0,0,0,1,0,0,1,1,2,0,0,0,0,1,0,1,0,0,1,0,0,2,0,0,0,1, +0,0,1,0,0,1,0,0,0,1,1,0,0,0,0,0,1,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0, +1,2,2,2,1,1,2,0,2,1,1,1,1,0,2,2,0,0,0,0,0,0,0,0,0,1,1,0,0,0,1,1, +0,0,1,0,1,1,0,0,0,0,1,0,0,0,0,0,1,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0, +1,0,2,1,2,0,0,0,0,0,1,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0, +0,0,1,0,1,1,0,0,0,0,1,0,0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,1,0, +1,0,0,0,0,2,0,1,2,1,0,1,1,1,0,1,0,0,0,1,0,1,0,0,1,0,1,0,0,0,0,1, +0,0,0,0,0,1,0,0,1,1,0,0,1,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,1, +2,2,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1, +1,0,0,0,1,0,0,0,1,1,0,0,0,0,0,0,0,1,0,0,0,0,0,1,0,0,1,0,0,0,0,0, +2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1, +1,1,1,0,1,0,1,0,0,1,1,1,1,0,0,0,1,0,0,0,0,1,0,0,0,1,0,1,0,0,0,0, +1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1, +1,1,0,1,1,0,1,0,1,0,0,0,0,1,1,0,1,1,0,0,0,0,0,1,0,1,1,0,1,0,0,0, +0,1,1,1,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +0,0,0,0,0,1,0,0,0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0, +) + +Koi8rModel = { + 'char_to_order_map': KOI8R_char_to_order_map, + 'precedence_matrix': RussianLangModel, + 'typical_positive_ratio': 0.976601, + 'keep_english_letter': False, + 'charset_name': "KOI8-R", + 'language': 'Russian', +} + +Win1251CyrillicModel = { + 'char_to_order_map': win1251_char_to_order_map, + 'precedence_matrix': RussianLangModel, + 'typical_positive_ratio': 0.976601, + 'keep_english_letter': False, + 'charset_name': "windows-1251", + 'language': 'Russian', +} + +Latin5CyrillicModel = { + 'char_to_order_map': latin5_char_to_order_map, + 'precedence_matrix': RussianLangModel, + 'typical_positive_ratio': 0.976601, + 'keep_english_letter': False, + 'charset_name': "ISO-8859-5", + 'language': 'Russian', +} + +MacCyrillicModel = { + 'char_to_order_map': macCyrillic_char_to_order_map, + 'precedence_matrix': RussianLangModel, + 'typical_positive_ratio': 0.976601, + 'keep_english_letter': False, + 'charset_name': "MacCyrillic", + 'language': 'Russian', +} + +Ibm866Model = { + 'char_to_order_map': IBM866_char_to_order_map, + 'precedence_matrix': RussianLangModel, + 'typical_positive_ratio': 0.976601, + 'keep_english_letter': False, + 'charset_name': "IBM866", + 'language': 'Russian', +} + +Ibm855Model = { + 'char_to_order_map': IBM855_char_to_order_map, + 'precedence_matrix': RussianLangModel, + 'typical_positive_ratio': 0.976601, + 'keep_english_letter': False, + 'charset_name': "IBM855", + 'language': 'Russian', +} diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/langgreekmodel.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/langgreekmodel.py new file mode 100644 index 0000000000000000000000000000000000000000..533222166cca9fce442655d9f3098126f50e6140 --- /dev/null +++ 
b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/langgreekmodel.py
@@ -0,0 +1,225 @@
+######################## BEGIN LICENSE BLOCK ########################
+# The Original Code is Mozilla Communicator client code.
+#
+# The Initial Developer of the Original Code is
+# Netscape Communications Corporation.
+# Portions created by the Initial Developer are Copyright (C) 1998
+# the Initial Developer. All Rights Reserved.
+#
+# Contributor(s):
+# Mark Pilgrim - port to Python
+#
+# This library is free software; you can redistribute it and/or
+# modify it under the terms of the GNU Lesser General Public
+# License as published by the Free Software Foundation; either
+# version 2.1 of the License, or (at your option) any later version.
+#
+# This library is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+# Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public
+# License along with this library; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
+# 02110-1301 USA
+######################### END LICENSE BLOCK #########################
+
+# 255: Control characters that usually do not exist in any text
+# 254: Carriage/Return
+# 253: symbols (punctuation) that do not belong to words
+# 252: 0 - 9
+
+# Character Mapping Table:
+Latin7_char_to_order_map = (
+255,255,255,255,255,255,255,255,255,255,254,255,255,254,255,255, # 00
+255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255, # 10
+253,253,253,253,253,253,253,253,253,253,253,253,253,253,253,253, # 20
+252,252,252,252,252,252,252,252,252,252,253,253,253,253,253,253, # 30
+253, 82,100,104, 94, 98,101,116,102,111,187,117, 92, 88,113, 85, # 40
+ 79,118,105, 83, 67,114,119, 95, 99,109,188,253,253,253,253,253, # 50
+253, 72, 70, 80, 81, 60, 96, 93, 89, 68,120, 97, 77, 86, 69, 55, # 60
+ 78,115, 65, 66, 58, 76,106,103, 87,107,112,253,253,253,253,253, # 70
+255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255, # 80
+255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255, # 90
+253,233, 90,253,253,253,253,253,253,253,253,253,253, 74,253,253, # a0
+253,253,253,253,247,248, 61, 36, 46, 71, 73,253, 54,253,108,123, # b0
+110, 31, 51, 43, 41, 34, 91, 40, 52, 47, 44, 53, 38, 49, 59, 39, # c0
+ 35, 48,250, 37, 33, 45, 56, 50, 84, 57,120,121, 17, 18, 22, 15, # d0
+124, 1, 29, 20, 21, 3, 32, 13, 25, 5, 11, 16, 10, 6, 30, 4, # e0
+ 9, 8, 14, 7, 2, 12, 28, 23, 42, 24, 64, 75, 19, 26, 27,253, # f0
+)
+
+win1253_char_to_order_map = (
+255,255,255,255,255,255,255,255,255,255,254,255,255,254,255,255, # 00
+255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255, # 10
+253,253,253,253,253,253,253,253,253,253,253,253,253,253,253,253, # 20
+252,252,252,252,252,252,252,252,252,252,253,253,253,253,253,253, # 30
+253, 82,100,104, 94, 98,101,116,102,111,187,117, 92, 88,113, 85, # 40
+ 79,118,105, 83, 67,114,119, 95, 99,109,188,253,253,253,253,253, # 50
+253, 72, 70, 80, 81, 60, 96, 93, 89, 68,120, 97, 77, 86, 69, 55, # 60
+ 78,115, 65, 66, 58, 76,106,103, 87,107,112,253,253,253,253,253, # 70
+255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255, # 80
+255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255, # 90
+253,233, 61,253,253,253,253,253,253,253,253,253,253, 74,253,253, # a0
+253,253,253,253,247,253,253, 36, 46, 71, 73,253,
54,253,108,123, # b0 +110, 31, 51, 43, 41, 34, 91, 40, 52, 47, 44, 53, 38, 49, 59, 39, # c0 + 35, 48,250, 37, 33, 45, 56, 50, 84, 57,120,121, 17, 18, 22, 15, # d0 +124, 1, 29, 20, 21, 3, 32, 13, 25, 5, 11, 16, 10, 6, 30, 4, # e0 + 9, 8, 14, 7, 2, 12, 28, 23, 42, 24, 64, 75, 19, 26, 27,253, # f0 +) + +# Model Table: +# total sequences: 100% +# first 512 sequences: 98.2851% +# first 1024 sequences:1.7001% +# rest sequences: 0.0359% +# negative sequences: 0.0148% +GreekLangModel = ( +0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +0,0,3,2,2,3,3,3,3,3,3,3,3,1,3,3,3,0,2,2,3,3,0,3,0,3,2,0,3,3,3,0, +3,0,0,0,2,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +0,3,3,3,3,3,0,3,3,0,3,2,3,3,0,3,2,3,3,3,0,0,3,0,3,0,3,3,2,0,0,0, +2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0, +0,2,3,2,2,3,3,3,3,3,3,3,3,0,3,3,3,3,0,2,3,3,0,3,3,3,3,2,3,3,3,0, +2,0,0,0,2,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +0,2,3,3,2,3,3,3,3,3,3,3,3,3,3,3,3,0,2,1,3,3,3,3,2,3,3,2,3,3,2,0, +0,0,0,0,2,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +0,3,3,3,3,0,3,3,3,3,3,3,0,3,3,0,3,3,3,3,3,3,3,3,3,3,0,3,2,3,3,0, +2,0,1,0,2,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0, +0,3,3,3,3,3,2,3,0,0,0,0,3,3,0,3,1,3,3,3,0,3,3,0,3,3,3,3,0,0,0,0, +2,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +0,3,3,3,3,3,0,3,0,3,3,3,3,3,0,3,2,2,2,3,0,2,3,3,3,3,3,2,3,3,0,0, +0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +0,3,3,3,3,3,3,2,2,2,3,3,3,3,0,3,1,3,3,3,3,2,3,3,3,3,3,3,3,2,2,0, +0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +0,3,3,3,3,3,2,0,3,0,0,0,3,3,2,3,3,3,3,3,0,0,3,2,3,0,2,3,0,0,0,0, +0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +0,3,0,3,3,3,3,0,0,3,3,0,2,3,0,3,0,3,3,3,0,0,3,0,3,0,2,2,3,3,0,0, +0,0,1,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +0,3,3,3,3,3,2,0,3,2,3,3,3,3,0,3,3,3,3,3,0,3,3,2,3,2,3,3,2,0,0,0, +0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +0,3,3,2,3,2,3,3,3,3,3,3,0,2,3,2,3,2,2,2,3,2,3,3,2,3,0,2,2,2,3,0, +2,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +0,0,3,0,0,0,3,3,3,2,3,3,0,0,3,0,3,0,0,0,3,2,0,3,0,3,0,0,2,0,2,0, +0,0,0,0,2,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +0,3,3,3,3,0,3,3,3,3,3,3,0,3,3,0,3,0,0,0,3,3,0,3,3,3,0,0,1,2,3,0, +3,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +0,3,3,3,3,3,2,0,0,3,2,2,3,3,0,3,3,3,3,3,2,1,3,0,3,2,3,3,2,1,0,0, +0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +0,0,3,3,0,2,3,3,3,3,3,3,0,0,3,0,3,0,0,0,3,3,0,3,2,3,0,0,3,3,3,0, +3,0,0,0,2,0,0,0,0,0,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +0,3,3,3,3,0,3,3,3,3,3,3,0,0,3,0,3,0,0,0,3,2,0,3,2,3,0,0,3,2,3,0, +2,0,0,0,0,0,0,0,0,0,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +0,0,3,1,2,2,3,3,3,3,3,3,0,2,3,0,3,0,0,0,3,3,0,3,0,2,0,0,2,3,1,0, +2,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +0,3,0,3,3,3,3,0,3,0,3,3,2,3,0,3,3,3,3,3,3,0,3,3,3,0,2,3,0,0,3,0, +0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +0,3,0,3,3,3,0,0,3,0,0,0,3,3,0,3,0,2,3,3,0,0,3,0,3,0,3,3,0,0,0,0, +0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +0,0,3,0,0,0,3,3,3,3,3,3,0,0,3,0,2,0,0,0,3,3,0,3,0,3,0,0,2,0,2,0, +0,0,0,0,1,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 
+0,3,3,3,3,3,3,0,3,0,2,0,3,2,0,3,2,3,2,3,0,0,3,2,3,2,3,3,0,0,0,0, +0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +0,0,3,0,0,2,3,3,3,3,3,0,0,0,3,0,2,1,0,0,3,2,2,2,0,3,0,0,2,2,0,0, +0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +0,3,0,3,3,3,2,0,3,0,3,0,3,3,0,2,1,2,3,3,0,0,3,0,3,0,3,3,0,0,0,0, +0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +0,2,3,3,3,0,3,3,3,3,3,3,0,2,3,0,3,0,0,0,2,1,0,2,2,3,0,0,2,2,2,0, +0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +0,0,3,0,0,2,3,3,3,2,3,0,0,1,3,0,2,0,0,0,0,3,0,1,0,2,0,0,1,1,1,0, +0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +0,3,3,3,3,3,1,0,3,0,0,0,3,2,0,3,2,3,3,3,0,0,3,0,3,2,2,2,1,0,0,0, +0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +0,3,0,3,3,3,0,0,3,0,0,0,0,2,0,2,3,3,2,2,2,2,3,0,2,0,2,2,0,0,0,0, +0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +0,3,3,3,3,2,0,0,0,0,0,0,2,3,0,2,0,2,3,2,0,0,3,0,3,0,3,1,0,0,0,0, +0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +0,0,0,0,0,0,3,2,3,3,2,2,3,0,2,0,3,0,0,0,2,0,0,0,0,1,2,0,2,0,2,0, +0,2,0,2,0,2,2,0,0,1,0,2,2,2,0,2,2,2,0,2,2,2,0,0,2,0,0,1,0,0,0,0, +0,2,0,3,3,2,0,0,0,0,0,0,1,3,0,2,0,2,2,2,0,0,2,0,3,0,0,2,0,0,0,0, +0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +0,3,0,2,3,2,0,2,2,0,2,0,2,2,0,2,0,2,2,2,0,0,0,0,0,0,2,3,0,0,0,2, +0,1,2,0,0,0,0,2,2,0,0,0,2,1,0,2,2,0,0,0,0,0,0,1,0,2,0,0,0,0,0,0, +0,0,2,1,0,2,3,2,2,3,2,3,2,0,0,3,3,3,0,0,3,2,0,0,0,1,1,0,2,0,2,2, +0,2,0,2,0,2,2,0,0,2,0,2,2,2,0,2,2,2,2,0,0,2,0,0,0,2,0,1,0,0,0,0, +0,3,0,3,3,2,2,0,3,0,0,0,2,2,0,2,2,2,1,2,0,0,1,2,2,0,0,3,0,0,0,2, +0,1,2,0,0,0,1,2,0,0,0,0,0,0,0,2,2,0,1,0,0,2,0,0,0,2,0,0,0,0,0,0, +0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +0,2,3,3,2,2,0,0,0,2,0,2,3,3,0,2,0,0,0,0,0,0,2,2,2,0,2,2,0,2,0,2, +0,2,2,0,0,2,2,2,2,1,0,0,2,2,0,2,0,0,2,0,0,0,0,0,0,2,0,0,0,0,0,0, +0,2,0,3,2,3,0,0,0,3,0,0,2,2,0,2,0,2,2,2,0,0,2,0,0,0,0,0,0,0,0,2, +0,0,2,2,0,0,2,2,2,0,0,0,0,0,0,2,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0, +0,0,2,0,0,3,2,0,2,2,2,2,2,0,0,0,2,0,0,0,0,2,0,1,0,0,2,0,1,0,0,0, +0,2,2,2,0,2,2,0,1,2,0,2,2,2,0,2,2,2,2,1,2,2,0,0,2,0,0,0,0,0,0,0, +0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0, +0,2,0,2,0,2,2,0,0,0,0,1,2,1,0,0,2,2,0,0,2,0,0,0,0,0,0,0,0,0,0,0, +0,0,0,3,2,3,0,0,2,0,0,0,2,2,0,2,0,0,0,1,0,0,2,0,2,0,2,2,0,0,0,0, +0,0,2,0,0,0,0,2,2,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0, +0,2,2,3,2,2,0,0,0,0,0,0,1,3,0,2,0,2,2,0,0,0,1,0,2,0,0,0,0,0,0,0, +0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +0,2,0,2,0,3,2,0,2,0,0,0,0,0,0,2,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1, +0,0,2,0,0,0,0,1,1,0,0,2,1,2,0,2,2,0,1,0,0,1,0,0,0,2,0,0,0,0,0,0, +0,3,0,2,2,2,0,0,2,0,0,0,2,0,0,0,2,3,0,2,0,0,0,0,0,0,2,2,0,0,0,2, +0,1,2,0,0,0,1,2,2,1,0,0,0,2,0,0,2,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0, +0,0,0,0,0,0,0,0,0,3,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +0,2,1,2,0,2,2,0,2,0,0,2,0,0,0,0,1,2,1,0,2,1,0,0,0,0,0,0,0,0,0,0, +0,0,2,0,0,0,3,1,2,2,0,2,0,0,0,0,2,0,0,0,2,0,0,3,0,0,0,0,2,2,2,0, +0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +0,2,1,0,2,0,1,2,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,0,1,0,0,0,0,0,0,2, +0,2,2,0,0,2,2,2,2,2,0,1,2,0,0,0,2,2,0,1,0,2,0,0,2,2,0,0,0,0,0,0, +0,0,0,0,1,0,0,0,0,0,0,0,3,0,0,2,0,0,0,0,0,0,0,0,2,0,2,0,0,0,0,2, +0,1,2,0,0,0,0,2,2,1,0,1,0,1,0,2,2,2,1,0,0,0,0,0,0,1,0,0,0,0,0,0, +0,2,0,1,2,0,0,0,0,0,0,0,0,0,0,2,0,0,2,2,0,0,0,0,1,0,0,0,0,0,0,2, 
+0,2,2,0,0,0,0,2,2,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,2,0,0,2,0,0,0, +0,2,2,2,2,0,0,0,3,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,2,0,0,0,0,0,0,1, +0,0,2,0,0,0,0,1,2,0,0,0,0,0,0,2,2,1,1,0,0,0,0,0,0,1,0,0,0,0,0,0, +0,2,0,2,2,2,0,0,2,0,0,0,0,0,0,0,2,2,2,0,0,0,2,0,0,0,0,0,0,0,0,2, +0,0,1,0,0,0,0,2,1,0,0,0,0,0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0, +0,3,0,2,0,0,0,0,0,0,0,0,2,0,0,0,0,0,2,0,0,0,0,0,0,0,2,0,0,0,0,2, +0,0,2,0,0,0,0,2,2,0,0,0,0,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +0,2,0,2,2,1,0,0,0,0,0,0,2,0,0,2,0,2,2,2,0,0,0,0,0,0,2,0,0,0,0,2, +0,0,2,0,0,2,0,2,2,0,0,0,0,2,0,2,0,0,0,0,0,2,0,0,0,2,0,0,0,0,0,0, +0,0,3,0,0,0,2,2,0,2,2,0,0,0,0,0,2,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0, +0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +0,0,0,0,0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,0,2,0,0,0,0,0, +0,2,2,2,2,2,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,0,0,0,1, +0,0,0,0,0,0,0,2,1,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +0,0,0,0,0,0,0,2,2,0,0,0,0,0,2,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0, +0,2,0,0,0,2,0,0,0,0,0,1,0,0,0,0,2,2,0,0,0,1,0,0,0,0,0,0,0,0,0,0, +0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,1,0,2,0,0,0, +0,2,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0, +0,0,1,0,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,1,0,0,2,0,2,0,0,0, +0,0,0,0,0,0,0,0,2,1,0,0,0,0,0,0,2,0,0,0,1,2,0,0,0,0,0,0,0,0,0,0, +0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +) + +Latin7GreekModel = { + 'char_to_order_map': Latin7_char_to_order_map, + 'precedence_matrix': GreekLangModel, + 'typical_positive_ratio': 0.982851, + 'keep_english_letter': False, + 'charset_name': "ISO-8859-7", + 'language': 'Greek', +} + +Win1253GreekModel = { + 'char_to_order_map': win1253_char_to_order_map, + 'precedence_matrix': GreekLangModel, + 'typical_positive_ratio': 0.982851, + 'keep_english_letter': False, + 'charset_name': "windows-1253", + 'language': 'Greek', +} diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/langhebrewmodel.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/langhebrewmodel.py new file mode 100644 index 0000000000000000000000000000000000000000..58f4c875ec926b85256fd3866369fc8a81a14350 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/langhebrewmodel.py @@ -0,0 +1,200 @@ +######################## BEGIN LICENSE BLOCK ######################## +# The Original Code is Mozilla Universal charset detector code. +# +# The Initial Developer of the Original Code is +# Simon Montagu +# Portions created by the Initial Developer are Copyright (C) 2005 +# the Initial Developer. All Rights Reserved. +# +# Contributor(s): +# Mark Pilgrim - port to Python +# Shy Shalom - original C code +# Shoshannah Forbes - original C code (?) 
+#
+# This library is free software; you can redistribute it and/or
+# modify it under the terms of the GNU Lesser General Public
+# License as published by the Free Software Foundation; either
+# version 2.1 of the License, or (at your option) any later version.
+#
+# This library is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+# Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public
+# License along with this library; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
+# 02110-1301 USA
+######################### END LICENSE BLOCK #########################
+
+# 255: Control characters that usually do not exist in any text
+# 254: Carriage/Return
+# 253: symbols (punctuation) that do not belong to words
+# 252: 0 - 9
+
+# Windows-1255 language model
+# Character Mapping Table:
+WIN1255_CHAR_TO_ORDER_MAP = (
+255,255,255,255,255,255,255,255,255,255,254,255,255,254,255,255, # 00
+255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255, # 10
+253,253,253,253,253,253,253,253,253,253,253,253,253,253,253,253, # 20
+252,252,252,252,252,252,252,252,252,252,253,253,253,253,253,253, # 30
+253, 69, 91, 79, 80, 92, 89, 97, 90, 68,111,112, 82, 73, 95, 85, # 40
+ 78,121, 86, 71, 67,102,107, 84,114,103,115,253,253,253,253,253, # 50
+253, 50, 74, 60, 61, 42, 76, 70, 64, 53,105, 93, 56, 65, 54, 49, # 60
+ 66,110, 51, 43, 44, 63, 81, 77, 98, 75,108,253,253,253,253,253, # 70
+124,202,203,204,205, 40, 58,206,207,208,209,210,211,212,213,214,
+215, 83, 52, 47, 46, 72, 32, 94,216,113,217,109,218,219,220,221,
+ 34,116,222,118,100,223,224,117,119,104,125,225,226, 87, 99,227,
+106,122,123,228, 55,229,230,101,231,232,120,233, 48, 39, 57,234,
+ 30, 59, 41, 88, 33, 37, 36, 31, 29, 35,235, 62, 28,236,126,237,
+238, 38, 45,239,240,241,242,243,127,244,245,246,247,248,249,250,
+ 9, 8, 20, 16, 3, 2, 24, 14, 22, 1, 25, 15, 4, 11, 6, 23,
+ 12, 19, 13, 26, 18, 27, 21, 17, 7, 10, 5,251,252,128, 96,253,
+)
+
+# Model Table:
+# total sequences: 100%
+# first 512 sequences: 98.4004%
+# first 1024 sequences: 1.5981%
+# rest sequences: 0.087%
+# negative sequences: 0.0015%
+HEBREW_LANG_MODEL = (
+0,3,3,3,3,3,3,3,3,3,3,2,3,3,3,3,3,3,3,3,3,3,3,2,3,2,1,2,0,1,0,0,
+3,0,3,1,0,0,1,3,2,0,1,1,2,0,2,2,2,1,1,1,1,2,1,1,1,2,0,0,2,2,0,1,
+3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,2,2,2,2,
+1,2,1,2,1,2,0,0,2,0,0,0,0,0,1,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0,
+3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,2,2,2,
+1,2,1,3,1,1,0,0,2,0,0,0,1,0,1,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,
+3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,1,0,1,2,2,1,3,
+1,2,1,1,2,2,0,0,2,2,0,0,0,0,1,0,1,0,0,0,1,0,0,0,0,0,0,1,0,1,1,0,
+3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,2,3,3,2,2,2,2,3,2,
+1,2,1,2,2,2,0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0,
+3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,2,3,3,2,3,2,2,3,2,2,2,1,2,2,2,2,
+1,2,1,1,2,2,0,1,2,0,0,0,0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,1,0,
+3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,2,0,2,2,2,2,2,
+0,2,0,2,2,2,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0,
+3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,2,3,0,2,2,2,
+0,2,1,2,2,2,0,0,2,1,0,0,0,0,1,0,1,0,0,0,0,0,0,2,0,0,0,0,0,0,1,0,
+3,3,3,3,3,3,3,3,3,3,3,2,3,3,3,3,3,3,3,3,3,3,3,3,3,2,1,2,3,2,2,2,
+1,2,1,2,2,2,0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,1,1,0,
+3,3,3,3,3,3,3,3,3,2,3,3,3,2,3,3,3,3,3,3,3,3,3,3,3,3,3,1,0,2,0,2, +0,2,1,2,2,2,0,0,1,2,0,0,0,0,1,0,1,0,0,0,0,0,0,1,0,0,0,2,0,0,1,0, +3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,2,3,2,3,2,2,3,2,1,2,1,1,1, +0,1,1,1,1,1,3,0,1,0,0,0,0,2,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0, +3,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,0,1,1,0,1,1,0,0,1,0,0,1,0,0,0,0, +0,0,1,0,0,0,0,0,2,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,2,2,2,2,2,2,2, +0,2,0,1,2,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0, +3,3,3,3,3,3,3,3,3,2,3,3,3,2,1,2,3,3,2,3,3,3,3,2,3,2,1,2,0,2,1,2, +0,2,0,2,2,2,0,0,1,2,0,0,0,0,1,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,1,0, +3,3,3,3,3,3,3,3,3,2,3,3,3,1,2,2,3,3,2,3,2,3,2,2,3,1,2,2,0,2,2,2, +0,2,1,2,2,2,0,0,1,2,0,0,0,0,1,0,0,0,0,0,1,0,0,1,0,0,0,1,0,0,1,0, +3,3,3,3,3,3,3,3,3,3,3,3,3,2,3,3,3,2,3,3,2,2,2,3,3,3,3,1,3,2,2,2, +0,2,0,1,2,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0, +3,3,3,3,3,3,3,3,3,3,3,3,3,3,2,2,3,3,3,2,3,2,2,2,1,2,2,0,2,2,2,2, +0,2,0,2,2,2,0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0, +3,3,3,3,3,3,3,3,3,3,3,2,3,3,3,1,3,2,3,3,2,3,3,2,2,1,2,2,2,2,2,2, +0,2,1,2,1,2,0,0,1,0,0,0,0,0,1,0,0,0,0,0,1,0,0,1,0,0,0,0,0,0,1,0, +3,3,3,3,3,3,2,3,2,3,3,2,3,3,3,3,2,3,2,3,3,3,3,3,2,2,2,2,2,2,2,1, +0,2,0,1,2,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0, +3,3,3,3,3,3,3,3,3,2,1,2,3,3,3,3,3,3,3,2,3,2,3,2,1,2,3,0,2,1,2,2, +0,2,1,1,2,1,0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,2,0, +3,3,3,3,3,3,3,3,3,2,3,3,3,3,2,1,3,1,2,2,2,1,2,3,3,1,2,1,2,2,2,2, +0,1,1,1,1,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,1,0,0,2,0,0,0,0,0,0,0,0, +3,3,3,3,3,3,3,3,3,3,0,2,3,3,3,1,3,3,3,1,2,2,2,2,1,1,2,2,2,2,2,2, +0,2,0,1,1,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0, +3,3,3,3,3,3,2,3,3,3,2,2,3,3,3,2,1,2,3,2,3,2,2,2,2,1,2,1,1,1,2,2, +0,2,1,1,1,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0, +3,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,0,1,0,0,0,1,0,0,0,0,0, +1,0,1,0,0,0,0,0,2,0,0,0,0,0,1,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +3,3,3,3,3,2,3,3,2,3,1,2,2,2,2,3,2,3,1,1,2,2,1,2,2,1,1,0,2,2,2,2, +0,1,0,1,2,2,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0, +3,0,0,1,1,0,1,0,0,1,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,2,0, +0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +3,0,1,0,1,0,1,1,0,1,1,0,0,0,1,1,0,1,1,1,0,0,0,0,0,0,1,0,0,0,0,0, +0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +3,0,0,0,1,1,0,1,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0, +0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0, +3,2,2,1,2,2,2,2,2,2,2,1,2,2,1,2,2,1,1,1,1,1,1,1,1,2,1,1,0,3,3,3, +0,3,0,2,2,2,2,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0, +2,2,2,3,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,1,2,2,1,2,2,2,1,1,1,2,0,1, +0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +2,2,2,2,2,2,2,2,2,2,2,1,2,2,2,2,2,2,2,2,2,2,2,0,2,2,0,0,0,0,0,0, +0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +2,3,1,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,1,2,1,0,2,1,0, +0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +3,1,1,1,1,1,1,1,1,1,1,0,0,1,1,1,1,0,1,1,1,1,0,0,0,0,0,0,0,0,0,0, +0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0, +0,3,1,1,2,2,2,2,2,1,2,2,2,1,1,2,2,2,2,2,2,2,1,2,2,1,0,1,1,1,1,0, +0,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +3,2,1,1,1,1,2,1,1,2,1,0,1,1,1,1,1,1,1,1,1,1,1,0,1,0,0,0,0,0,0,0, +0,0,2,0,0,0,0,0,0,0,0,1,1,0,0,0,0,1,1,0,0,1,1,0,0,0,0,0,0,1,0,0, +2,1,1,2,2,2,2,2,2,2,2,2,2,2,1,2,2,2,2,2,1,2,1,2,1,1,1,1,0,0,0,0, 
+0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +1,2,1,2,2,2,2,2,2,2,2,2,2,1,2,1,2,1,1,2,1,1,1,2,1,2,1,2,0,1,0,1, +0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +0,3,1,2,2,2,1,2,2,2,2,2,2,2,2,1,2,1,1,1,1,1,1,2,1,2,1,1,0,1,0,1, +0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +2,1,2,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,2,2, +0,2,0,1,2,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0, +3,0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0, +0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +2,1,1,1,1,1,1,1,0,1,1,0,1,0,0,1,0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0, +0,0,0,0,0,0,0,0,2,0,1,1,1,0,1,0,0,0,1,1,0,1,1,0,0,0,0,0,1,1,0,0, +0,1,1,1,2,1,2,2,2,0,2,0,2,0,1,1,2,1,1,1,1,2,1,0,1,1,0,0,0,0,0,0, +0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0, +1,0,1,0,0,0,0,0,1,0,1,2,2,0,1,0,0,1,1,2,2,1,2,0,2,0,0,0,1,2,0,1, +2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +0,0,0,0,0,0,0,0,2,0,2,1,2,0,2,0,0,1,1,1,1,1,1,0,1,0,0,0,1,0,0,1, +2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +0,0,1,0,0,0,0,0,1,0,2,1,1,0,1,0,0,1,1,1,2,2,0,0,1,0,0,0,1,0,0,1, +1,1,2,1,0,1,1,1,0,1,0,1,1,1,1,0,0,0,1,0,1,0,0,0,0,0,0,0,0,2,2,1, +0,2,0,1,2,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +2,1,0,0,1,0,1,1,1,1,0,0,0,0,0,1,0,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0, +0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +1,1,1,1,1,1,1,1,1,2,1,0,1,1,1,1,1,1,1,1,1,1,1,0,1,0,0,0,0,0,0,0, +0,0,0,0,0,0,0,0,0,0,1,1,1,0,0,0,0,1,1,1,0,1,1,0,1,0,0,0,1,1,0,1, +2,0,1,0,1,0,1,0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +0,0,0,0,0,0,0,0,1,0,1,1,1,0,1,0,0,1,1,2,1,1,2,0,1,0,0,0,1,1,0,1, +1,0,0,1,0,0,1,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +0,0,0,0,0,0,0,0,1,0,1,1,2,0,1,0,0,0,0,2,1,1,2,0,2,0,0,0,1,1,0,1, +1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +0,0,0,0,0,0,0,0,1,0,2,1,1,0,1,0,0,2,2,1,2,1,1,0,1,0,0,0,1,1,0,1, +2,0,1,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +0,0,0,0,0,0,0,0,0,0,1,2,2,0,0,0,0,0,1,1,0,1,0,0,1,0,0,0,0,1,0,1, +1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +0,0,0,0,0,0,0,0,0,0,1,2,2,0,0,0,0,2,1,1,1,0,2,1,1,0,0,0,2,1,0,1, +1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +0,0,0,0,0,0,0,0,1,0,1,1,2,0,1,0,0,1,1,0,2,1,1,0,1,0,0,0,1,1,0,1, +2,2,1,1,1,0,1,1,0,1,1,0,1,0,0,0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0, +0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +0,0,0,0,0,0,0,0,1,0,2,1,1,0,1,0,0,1,1,0,1,2,1,0,2,0,0,0,1,1,0,1, +2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0, +0,1,0,0,2,0,2,1,1,0,1,0,1,0,0,1,0,0,0,0,1,0,0,0,1,0,0,0,0,0,1,0, +0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +1,0,0,1,0,0,1,0,0,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +0,0,0,0,0,0,0,0,1,0,1,1,2,0,1,0,0,1,1,1,0,1,0,0,1,0,0,0,1,0,0,1, +1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +1,0,0,0,0,0,0,0,1,0,1,1,0,0,1,0,0,2,1,1,1,1,1,0,1,0,0,0,0,1,0,1, 
+0,1,1,1,2,1,1,1,1,0,1,1,1,1,1,1,1,1,1,1,1,1,0,1,1,0,0,0,0,0,0,0, +0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +0,0,0,0,0,0,0,0,0,0,1,2,1,0,0,0,0,0,1,1,1,1,1,0,1,0,0,0,1,1,0,0, +) + +Win1255HebrewModel = { + 'char_to_order_map': WIN1255_CHAR_TO_ORDER_MAP, + 'precedence_matrix': HEBREW_LANG_MODEL, + 'typical_positive_ratio': 0.984004, + 'keep_english_letter': False, + 'charset_name': "windows-1255", + 'language': 'Hebrew', +} diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/langhungarianmodel.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/langhungarianmodel.py new file mode 100644 index 0000000000000000000000000000000000000000..bb7c095e1ea6523bd00365384e4c662954c678a0 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/langhungarianmodel.py @@ -0,0 +1,225 @@ +######################## BEGIN LICENSE BLOCK ######################## +# The Original Code is Mozilla Communicator client code. +# +# The Initial Developer of the Original Code is +# Netscape Communications Corporation. +# Portions created by the Initial Developer are Copyright (C) 1998 +# the Initial Developer. All Rights Reserved. +# +# Contributor(s): +# Mark Pilgrim - port to Python +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License, or (at your option) any later version. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# Lesser General Public License for more details. 
+#
+# You should have received a copy of the GNU Lesser General Public
+# License along with this library; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
+# 02110-1301 USA
+######################### END LICENSE BLOCK #########################
+
+# 255: Control characters that usually do not exist in any text
+# 254: Carriage/Return
+# 253: symbols (punctuation) that do not belong to words
+# 252: 0 - 9
+
+# Character Mapping Table:
+Latin2_HungarianCharToOrderMap = (
+255,255,255,255,255,255,255,255,255,255,254,255,255,254,255,255, # 00
+255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255, # 10
+253,253,253,253,253,253,253,253,253,253,253,253,253,253,253,253, # 20
+252,252,252,252,252,252,252,252,252,252,253,253,253,253,253,253, # 30
+253, 28, 40, 54, 45, 32, 50, 49, 38, 39, 53, 36, 41, 34, 35, 47,
+ 46, 71, 43, 33, 37, 57, 48, 64, 68, 55, 52,253,253,253,253,253,
+253, 2, 18, 26, 17, 1, 27, 12, 20, 9, 22, 7, 6, 13, 4, 8,
+ 23, 67, 10, 5, 3, 21, 19, 65, 62, 16, 11,253,253,253,253,253,
+159,160,161,162,163,164,165,166,167,168,169,170,171,172,173,174,
+175,176,177,178,179,180,181,182,183,184,185,186,187,188,189,190,
+191,192,193,194,195,196,197, 75,198,199,200,201,202,203,204,205,
+ 79,206,207,208,209,210,211,212,213,214,215,216,217,218,219,220,
+221, 51, 81,222, 78,223,224,225,226, 44,227,228,229, 61,230,231,
+232,233,234, 58,235, 66, 59,236,237,238, 60, 69, 63,239,240,241,
+ 82, 14, 74,242, 70, 80,243, 72,244, 15, 83, 77, 84, 30, 76, 85,
+245,246,247, 25, 73, 42, 24,248,249,250, 31, 56, 29,251,252,253,
+)
+
+win1250HungarianCharToOrderMap = (
+255,255,255,255,255,255,255,255,255,255,254,255,255,254,255,255, # 00
+255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255, # 10
+253,253,253,253,253,253,253,253,253,253,253,253,253,253,253,253, # 20
+252,252,252,252,252,252,252,252,252,252,253,253,253,253,253,253, # 30
+253, 28, 40, 54, 45, 32, 50, 49, 38, 39, 53, 36, 41, 34, 35, 47,
+ 46, 72, 43, 33, 37, 57, 48, 64, 68, 55, 52,253,253,253,253,253,
+253, 2, 18, 26, 17, 1, 27, 12, 20, 9, 22, 7, 6, 13, 4, 8,
+ 23, 67, 10, 5, 3, 21, 19, 65, 62, 16, 11,253,253,253,253,253,
+161,162,163,164,165,166,167,168,169,170,171,172,173,174,175,176,
+177,178,179,180, 78,181, 69,182,183,184,185,186,187,188,189,190,
+191,192,193,194,195,196,197, 76,198,199,200,201,202,203,204,205,
+ 81,206,207,208,209,210,211,212,213,214,215,216,217,218,219,220,
+221, 51, 83,222, 80,223,224,225,226, 44,227,228,229, 61,230,231,
+232,233,234, 58,235, 66, 59,236,237,238, 60, 70, 63,239,240,241,
+ 84, 14, 75,242, 71, 82,243, 73,244, 15, 85, 79, 86, 30, 77, 87,
+245,246,247, 25, 74, 42, 24,248,249,250, 31, 56, 29,251,252,253,
+)
+
+# Model Table:
+# total sequences: 100%
+# first 512 sequences: 94.7368%
+# first 1024 sequences: 5.2623%
+# rest sequences: 0.8894%
+# negative sequences: 0.0009%
+HungarianLangModel = (
+0,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,1,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,
+3,3,3,3,3,3,3,3,3,3,2,3,3,3,3,3,3,3,3,2,2,3,3,1,1,2,2,2,2,2,1,2,
+3,2,2,3,3,3,3,3,2,3,3,3,3,3,3,1,2,3,3,3,3,2,3,3,1,1,3,3,0,1,1,1,
+0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,
+3,2,1,3,3,3,3,3,2,3,3,3,3,3,1,1,2,3,3,3,3,3,3,3,1,1,3,2,0,1,1,1,
+0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,
+3,3,3,3,3,3,3,3,3,3,3,1,1,2,3,3,3,1,3,3,3,3,3,1,3,3,2,2,0,3,2,3,
+0,0,0,0,0,0,0,0,0,0,3,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,
+3,3,3,3,3,3,2,3,3,3,2,3,3,2,3,3,3,3,3,2,3,3,2,2,3,2,3,2,0,3,2,2,
+0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,1,0,
+3,3,3,3,3,3,2,3,3,3,3,3,2,3,3,3,1,2,3,2,2,3,1,2,3,3,2,2,0,3,3,3, +0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0, +3,3,3,3,3,3,3,3,3,3,2,2,3,3,3,3,3,3,2,3,3,3,3,2,3,3,3,3,0,2,3,2, +0,0,0,1,1,0,0,0,0,0,3,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0, +3,3,3,3,3,3,3,3,3,3,3,1,1,1,3,3,2,1,3,2,2,3,2,1,3,2,2,1,0,3,3,1, +0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0, +3,2,2,3,3,3,3,3,1,2,3,3,3,3,1,2,1,3,3,3,3,2,2,3,1,1,3,2,0,1,1,1, +0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0, +3,3,3,3,3,3,3,3,2,2,3,3,3,3,3,2,1,3,3,3,3,3,2,2,1,3,3,3,0,1,1,2, +0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,1,0, +3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,2,3,3,3,2,3,3,2,3,3,3,2,0,3,2,3, +0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,1,0, +3,3,3,3,3,3,2,3,3,3,2,3,2,3,3,3,1,3,2,2,2,3,1,1,3,3,1,1,0,3,3,2, +0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0, +3,3,3,3,3,3,3,2,3,3,3,2,3,2,3,3,3,2,3,3,3,3,3,1,2,3,2,2,0,2,2,2, +0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0, +3,3,3,2,2,2,3,1,3,3,2,2,1,3,3,3,1,1,3,1,2,3,2,3,2,2,2,1,0,2,2,2, +0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0, +3,1,1,3,3,3,3,3,1,2,3,3,3,3,1,2,1,3,3,3,2,2,3,2,1,0,3,2,0,1,1,0, +0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +3,1,1,3,3,3,3,3,1,2,3,3,3,3,1,1,0,3,3,3,3,0,2,3,0,0,2,1,0,1,0,0, +0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +3,3,3,3,3,3,2,2,3,3,2,2,2,2,3,3,0,1,2,3,2,3,2,2,3,2,1,2,0,2,2,2, +0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0, +3,3,3,3,3,3,1,2,3,3,3,2,1,2,3,3,2,2,2,3,2,3,3,1,3,3,1,1,0,2,3,2, +0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0, +3,3,3,1,2,2,2,2,3,3,3,1,1,1,3,3,1,1,3,1,1,3,2,1,2,3,1,1,0,2,2,2, +0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0, +3,3,3,2,1,2,1,1,3,3,1,1,1,1,3,3,1,1,2,2,1,2,1,1,2,2,1,1,0,2,2,1, +0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0, +3,3,3,1,1,2,1,1,3,3,1,0,1,1,3,3,2,0,1,1,2,3,1,0,2,2,1,0,0,1,3,2, +0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0, +3,2,1,3,3,3,3,3,1,2,3,2,3,3,2,1,1,3,2,3,2,1,2,2,0,1,2,1,0,0,1,1, +0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0, +3,3,3,3,2,2,2,2,3,1,2,2,1,1,3,3,0,3,2,1,2,3,2,1,3,3,1,1,0,2,1,3, +0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0, +3,3,3,2,2,2,3,2,3,3,3,2,1,1,3,3,1,1,1,2,2,3,2,3,2,2,2,1,0,2,2,1, +0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0, +1,0,0,3,3,3,3,3,0,0,3,3,2,3,0,0,0,2,3,3,1,0,1,2,0,0,1,1,0,0,0,0, +0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +3,1,2,3,3,3,3,3,1,2,3,3,2,2,1,1,0,3,3,2,2,1,2,2,1,0,2,2,0,1,1,1, +0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +3,3,2,2,1,3,1,2,3,3,2,2,1,1,2,2,1,1,1,1,3,2,1,1,1,1,2,1,0,1,2,1, +0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0, +2,3,3,1,1,1,1,1,3,3,3,0,1,1,3,3,1,1,1,1,1,2,2,0,3,1,1,2,0,2,1,1, +0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0, +3,1,0,1,2,1,2,2,0,1,2,3,1,2,0,0,0,2,1,1,1,1,1,2,0,0,1,1,0,0,0,0, +1,2,1,2,2,2,1,2,1,2,0,2,0,2,2,1,1,2,1,1,2,1,1,1,0,1,0,0,0,1,1,0, +1,1,1,2,3,2,3,3,0,1,2,2,3,1,0,1,0,2,1,2,2,0,1,1,0,0,1,1,0,0,0,0, +0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +1,0,0,3,3,2,2,1,0,0,3,2,3,2,0,0,0,1,1,3,0,0,1,1,0,0,2,1,0,0,0,0, +0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +3,1,1,2,2,3,3,1,0,1,3,2,3,1,1,1,0,1,1,1,1,1,3,1,0,0,2,2,0,0,0,0, 
+0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +3,1,1,1,2,2,2,1,0,1,2,3,3,2,0,0,0,2,1,1,1,2,1,1,1,0,1,1,1,0,0,0, +1,2,2,2,2,2,1,1,1,2,0,2,1,1,1,1,1,2,1,1,1,1,1,1,0,1,1,1,0,0,1,1, +3,2,2,1,0,0,1,1,2,2,0,3,0,1,2,1,1,0,0,1,1,1,0,1,1,1,1,0,2,1,1,1, +2,2,1,1,1,2,1,2,1,1,1,1,1,1,1,2,1,1,1,2,3,1,1,1,1,1,1,1,1,1,0,1, +2,3,3,0,1,0,0,0,3,3,1,0,0,1,2,2,1,0,0,0,0,2,0,0,1,1,1,0,2,1,1,1, +2,1,1,1,1,1,1,2,1,1,0,1,1,0,1,1,1,0,1,2,1,1,0,1,1,1,1,1,1,1,0,1, +2,3,3,0,1,0,0,0,2,2,0,0,0,0,1,2,2,0,0,0,0,1,0,0,1,1,0,0,2,0,1,0, +2,1,1,1,1,2,1,1,1,1,1,1,1,2,1,1,1,1,1,1,1,1,1,2,0,1,1,1,1,1,0,1, +3,2,2,0,1,0,1,0,2,3,2,0,0,1,2,2,1,0,0,1,1,1,0,0,2,1,0,1,2,2,1,1, +2,1,1,1,1,1,1,2,1,1,1,1,1,1,0,2,1,0,1,1,0,1,1,1,0,1,1,2,1,1,0,1, +2,2,2,0,0,1,0,0,2,2,1,1,0,0,2,1,1,0,0,0,1,2,0,0,2,1,0,0,2,1,1,1, +2,1,1,1,1,2,1,2,1,1,1,2,2,1,1,2,1,1,1,2,1,1,1,1,1,1,1,1,1,1,0,1, +1,2,3,0,0,0,1,0,3,2,1,0,0,1,2,1,1,0,0,0,0,2,1,0,1,1,0,0,2,1,2,1, +1,1,0,0,0,1,0,1,1,1,1,1,2,0,0,1,0,0,0,2,0,0,1,1,1,1,1,1,1,1,0,1, +3,0,0,2,1,2,2,1,0,0,2,1,2,2,0,0,0,2,1,1,1,0,1,1,0,0,1,1,2,0,0,0, +1,2,1,2,2,1,1,2,1,2,0,1,1,1,1,1,1,1,1,1,2,1,1,0,0,1,1,1,1,0,0,1, +1,3,2,0,0,0,1,0,2,2,2,0,0,0,2,2,1,0,0,0,0,3,1,1,1,1,0,0,2,1,1,1, +2,1,0,1,1,1,0,1,1,1,1,1,1,1,0,2,1,0,0,1,0,1,1,0,1,1,1,1,1,1,0,1, +2,3,2,0,0,0,1,0,2,2,0,0,0,0,2,1,1,0,0,0,0,2,1,0,1,1,0,0,2,1,1,0, +2,1,1,1,1,2,1,2,1,2,0,1,1,1,0,2,1,1,1,2,1,1,1,1,0,1,1,1,1,1,0,1, +3,1,1,2,2,2,3,2,1,1,2,2,1,1,0,1,0,2,2,1,1,1,1,1,0,0,1,1,0,1,1,0, +0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +2,2,2,0,0,0,0,0,2,2,0,0,0,0,2,2,1,0,0,0,1,1,0,0,1,2,0,0,2,1,1,1, +2,2,1,1,1,2,1,2,1,1,0,1,1,1,1,2,1,1,1,2,1,1,1,1,0,1,2,1,1,1,0,1, +1,0,0,1,2,3,2,1,0,0,2,0,1,1,0,0,0,1,1,1,1,0,1,1,0,0,1,0,0,0,0,0, +1,2,1,2,1,2,1,1,1,2,0,2,1,1,1,0,1,2,0,0,1,1,1,0,0,0,0,0,0,0,0,0, +2,3,2,0,0,0,0,0,1,1,2,1,0,0,1,1,1,0,0,0,0,2,0,0,1,1,0,0,2,1,1,1, +2,1,1,1,1,1,1,2,1,0,1,1,1,1,0,2,1,1,1,1,1,1,0,1,0,1,1,1,1,1,0,1, +1,2,2,0,1,1,1,0,2,2,2,0,0,0,3,2,1,0,0,0,1,1,0,0,1,1,0,1,1,1,0,0, +1,1,0,1,1,1,1,1,1,1,1,2,1,1,1,1,1,1,1,2,1,1,1,0,0,1,1,1,0,1,0,1, +2,1,0,2,1,1,2,2,1,1,2,1,1,1,0,0,0,1,1,0,1,1,1,1,0,0,1,1,1,0,0,0, +1,2,2,2,2,2,1,1,1,2,0,2,1,1,1,1,1,1,1,1,1,1,1,1,0,1,1,0,0,0,1,0, +1,2,3,0,0,0,1,0,2,2,0,0,0,0,2,2,0,0,0,0,0,1,0,0,1,0,0,0,2,0,1,0, +2,1,1,1,1,1,0,2,0,0,0,1,2,1,1,1,1,0,1,2,0,1,0,1,0,1,1,1,0,1,0,1, +2,2,2,0,0,0,1,0,2,1,2,0,0,0,1,1,2,0,0,0,0,1,0,0,1,1,0,0,2,1,0,1, +2,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,2,0,1,1,1,1,1,0,1, +1,2,2,0,0,0,1,0,2,2,2,0,0,0,1,1,0,0,0,0,0,1,1,0,2,0,0,1,1,1,0,1, +1,0,1,1,1,1,1,1,0,1,1,1,1,0,0,1,0,0,1,1,0,1,0,1,1,1,1,1,0,0,0,1, +1,0,0,1,0,1,2,1,0,0,1,1,1,2,0,0,0,1,1,0,1,0,1,1,0,0,1,0,0,0,0,0, +0,2,1,2,1,1,1,1,1,2,0,2,0,1,1,0,1,2,1,0,1,1,1,0,0,0,0,0,0,1,0,0, +2,1,1,0,1,2,0,0,1,1,1,0,0,0,1,1,0,0,0,0,0,1,0,0,1,0,0,0,2,1,0,1, +2,2,1,1,1,1,1,2,1,1,0,1,1,1,1,2,1,1,1,2,1,1,0,1,0,1,1,1,1,1,0,1, +1,2,2,0,0,0,0,0,1,1,0,0,0,0,2,1,0,0,0,0,0,2,0,0,2,2,0,0,2,0,0,1, +2,1,1,1,1,1,1,1,0,1,1,0,1,1,0,1,0,0,0,1,1,1,1,0,0,1,1,1,1,0,0,1, +1,1,2,0,0,3,1,0,2,1,1,1,0,0,1,1,1,0,0,0,1,1,0,0,0,1,0,0,1,0,1,0, +1,2,1,0,1,1,1,2,1,1,0,1,1,1,1,1,0,0,0,1,1,1,1,1,0,1,0,0,0,1,0,0, +2,1,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,1,0,1,0,0,0,1,0,0,0,0,2,0,0,0, +2,1,1,1,1,1,1,1,1,1,0,1,1,1,1,1,1,1,1,1,2,1,1,0,0,1,1,1,1,1,0,1, +2,1,1,1,2,1,1,1,0,1,1,2,1,0,0,0,0,1,1,1,1,0,1,0,0,0,0,1,0,0,0,0, +0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +1,1,0,1,1,1,1,1,0,0,1,1,2,1,0,0,0,1,1,0,0,0,1,1,0,0,1,0,1,0,0,0, +1,2,1,1,1,1,1,1,1,1,0,1,0,1,1,1,1,1,1,0,1,1,1,0,0,0,0,0,0,1,0,0, 
+2,0,0,0,1,1,1,1,0,0,1,1,0,0,0,0,0,1,1,1,2,0,0,1,0,0,1,0,1,0,0,0,
+0,1,1,1,1,1,1,1,1,2,0,1,1,1,1,0,1,1,1,0,1,1,1,0,0,0,0,0,0,0,0,0,
+1,0,0,1,1,1,1,1,0,0,2,1,0,1,0,0,0,1,0,1,0,0,0,0,0,0,1,0,0,0,0,0,
+0,1,1,1,1,1,1,0,1,1,0,1,0,1,1,0,1,1,0,0,1,1,1,0,0,0,0,0,0,0,0,0,
+1,0,0,1,1,1,0,0,0,0,1,0,2,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,
+0,1,1,1,1,1,0,0,1,1,0,1,0,1,0,0,1,1,1,0,1,1,1,0,0,0,0,0,0,0,0,0,
+0,0,0,1,0,0,0,0,0,0,1,1,2,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
+0,1,1,1,0,1,0,0,1,1,0,1,0,1,1,0,1,1,1,0,1,1,1,0,0,0,0,0,0,0,0,0,
+2,1,1,1,1,1,1,1,1,1,1,0,0,1,1,1,0,0,1,0,0,1,0,1,0,1,1,1,0,0,1,0,
+0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
+1,0,0,1,1,1,1,0,0,0,1,1,1,0,0,0,0,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,
+0,1,1,1,1,1,1,0,1,1,0,1,0,1,0,0,1,1,0,0,1,1,0,0,0,0,0,0,0,0,0,0,
+)
+
+Latin2HungarianModel = {
+ 'char_to_order_map': Latin2_HungarianCharToOrderMap,
+ 'precedence_matrix': HungarianLangModel,
+ 'typical_positive_ratio': 0.947368,
+ 'keep_english_letter': True,
+ 'charset_name': "ISO-8859-2",
+ 'language': 'Hungarian',
+}
+
+Win1250HungarianModel = {
+ 'char_to_order_map': win1250HungarianCharToOrderMap,
+ 'precedence_matrix': HungarianLangModel,
+ 'typical_positive_ratio': 0.947368,
+ 'keep_english_letter': True,
+ 'charset_name': "windows-1250",
+ 'language': 'Hungarian',
+}
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/langthaimodel.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/langthaimodel.py
new file mode 100644
index 0000000000000000000000000000000000000000..15f94c2df021c9cccc761ebeec80146edbb000c9
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/langthaimodel.py
@@ -0,0 +1,199 @@
+######################## BEGIN LICENSE BLOCK ########################
+# The Original Code is Mozilla Communicator client code.
+#
+# The Initial Developer of the Original Code is
+# Netscape Communications Corporation.
+# Portions created by the Initial Developer are Copyright (C) 1998
+# the Initial Developer. All Rights Reserved.
+#
+# Contributor(s):
+# Mark Pilgrim - port to Python
+#
+# This library is free software; you can redistribute it and/or
+# modify it under the terms of the GNU Lesser General Public
+# License as published by the Free Software Foundation; either
+# version 2.1 of the License, or (at your option) any later version.
+#
+# This library is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+# Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public
+# License along with this library; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
+# 02110-1301 USA
+######################### END LICENSE BLOCK #########################
+
+# 255: Control characters that usually do not exist in any text
+# 254: Carriage/Return
+# 253: symbols (punctuation) that do not belong to words
+# 252: 0 - 9
+
+# The following result for Thai was collected from a limited sample (1M).
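# A minimal sketch of how the model tables in these vendored files fit together,
# mirroring chardet's SingleByteCharSetProber; illustrative only, so the helper
# name `score` is ours, and the real prober additionally weights the result by
# the share of alphabetic characters. Each char_to_order_map turns a byte into a
# frequency rank (255/254/253/252 mark control characters, CR/LF, symbols and
# digits; ranks below 64 are the most frequent letters), and each
# precedence_matrix is a flattened 64x64 grid of bigram likelihood categories
# (0 = negative up to 3 = positive).

SAMPLE_SIZE = 64   # only the 64 most frequent letters index the matrix
POSITIVE_CAT = 3   # matrix categories: 0=negative, 1=unlikely, 2=likely, 3=positive

def score(raw, model):
    """Rough confidence that the bytes `raw` are text in the model's charset/language."""
    seq_counters = [0, 0, 0, 0]
    total_seqs = 0
    last_order = 255                        # sentinel: no previous letter yet
    for byte in raw:
        order = model['char_to_order_map'][byte]
        # Only a pair of frequent letters contributes a bigram observation.
        if order < SAMPLE_SIZE and last_order < SAMPLE_SIZE:
            total_seqs += 1
            cat = model['precedence_matrix'][last_order * SAMPLE_SIZE + order]
            seq_counters[cat] += 1
        last_order = order
    if total_seqs == 0:
        return 0.0
    # 'typical_positive_ratio' normalises the observed share of "positive" bigrams.
    return (seq_counters[POSITIVE_CAT] / total_seqs) / model['typical_positive_ratio']

# For example, score(text.encode('windows-1255'), Win1255HebrewModel) tends toward
# 1.0 for genuine Hebrew text and stays low for bytes in the wrong encoding.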
+ +# Character Mapping Table: +TIS620CharToOrderMap = ( +255,255,255,255,255,255,255,255,255,255,254,255,255,254,255,255, # 00 +255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255, # 10 +253,253,253,253,253,253,253,253,253,253,253,253,253,253,253,253, # 20 +252,252,252,252,252,252,252,252,252,252,253,253,253,253,253,253, # 30 +253,182,106,107,100,183,184,185,101, 94,186,187,108,109,110,111, # 40 +188,189,190, 89, 95,112,113,191,192,193,194,253,253,253,253,253, # 50 +253, 64, 72, 73,114, 74,115,116,102, 81,201,117, 90,103, 78, 82, # 60 + 96,202, 91, 79, 84,104,105, 97, 98, 92,203,253,253,253,253,253, # 70 +209,210,211,212,213, 88,214,215,216,217,218,219,220,118,221,222, +223,224, 99, 85, 83,225,226,227,228,229,230,231,232,233,234,235, +236, 5, 30,237, 24,238, 75, 8, 26, 52, 34, 51,119, 47, 58, 57, + 49, 53, 55, 43, 20, 19, 44, 14, 48, 3, 17, 25, 39, 62, 31, 54, + 45, 9, 16, 2, 61, 15,239, 12, 42, 46, 18, 21, 76, 4, 66, 63, + 22, 10, 1, 36, 23, 13, 40, 27, 32, 35, 86,240,241,242,243,244, + 11, 28, 41, 29, 33,245, 50, 37, 6, 7, 67, 77, 38, 93,246,247, + 68, 56, 59, 65, 69, 60, 70, 80, 71, 87,248,249,250,251,252,253, +) + +# Model Table: +# total sequences: 100% +# first 512 sequences: 92.6386% +# first 1024 sequences:7.3177% +# rest sequences: 1.0230% +# negative sequences: 0.0436% +ThaiLangModel = ( +0,1,3,3,3,3,0,0,3,3,0,3,3,0,3,3,3,3,3,3,3,3,0,0,3,3,3,0,3,3,3,3, +0,3,3,0,0,0,1,3,0,3,3,2,3,3,0,1,2,3,3,3,3,0,2,0,2,0,0,3,2,1,2,2, +3,0,3,3,2,3,0,0,3,3,0,3,3,0,3,3,3,3,3,3,3,3,3,0,3,2,3,0,2,2,2,3, +0,2,3,0,0,0,0,1,0,1,2,3,1,1,3,2,2,0,1,1,0,0,1,0,0,0,0,0,0,0,1,1, +3,3,3,2,3,3,3,3,3,3,3,3,3,3,3,2,2,2,2,2,2,2,3,3,2,3,2,3,3,2,2,2, +3,1,2,3,0,3,3,2,2,1,2,3,3,1,2,0,1,3,0,1,0,0,1,0,0,0,0,0,0,0,1,1, +3,3,2,2,3,3,3,3,1,2,3,3,3,3,3,2,2,2,2,3,3,2,2,3,3,2,2,3,2,3,2,2, +3,3,1,2,3,1,2,2,3,3,1,0,2,1,0,0,3,1,2,1,0,0,1,0,0,0,0,0,0,1,0,1, +3,3,3,3,3,3,2,2,3,3,3,3,2,3,2,2,3,3,2,2,3,2,2,2,2,1,1,3,1,2,1,1, +3,2,1,0,2,1,0,1,0,1,1,0,1,1,0,0,1,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0, +3,3,3,2,3,2,3,3,2,2,3,2,3,3,2,3,1,1,2,3,2,2,2,3,2,2,2,2,2,1,2,1, +2,2,1,1,3,3,2,1,0,1,2,2,0,1,3,0,0,0,1,1,0,0,0,0,0,2,3,0,0,2,1,1, +3,3,2,3,3,2,0,0,3,3,0,3,3,0,2,2,3,1,2,2,1,1,1,0,2,2,2,0,2,2,1,1, +0,2,1,0,2,0,0,2,0,1,0,0,1,0,0,0,1,1,1,1,0,0,0,0,0,0,0,0,0,0,1,0, +3,3,2,3,3,2,0,0,3,3,0,2,3,0,2,1,2,2,2,2,1,2,0,0,2,2,2,0,2,2,1,1, +0,2,1,0,2,0,0,2,0,1,1,0,1,0,0,0,0,0,0,1,0,0,1,0,0,0,0,0,0,0,0,0, +3,3,2,3,2,3,2,0,2,2,1,3,2,1,3,2,1,2,3,2,2,3,0,2,3,2,2,1,2,2,2,2, +1,2,2,0,0,0,0,2,0,1,2,0,1,1,1,0,1,0,3,1,1,0,0,0,0,0,0,0,0,0,1,0, +3,3,2,3,3,2,3,2,2,2,3,2,2,3,2,2,1,2,3,2,2,3,1,3,2,2,2,3,2,2,2,3, +3,2,1,3,0,1,1,1,0,2,1,1,1,1,1,0,1,0,1,1,0,0,0,0,0,0,0,0,0,2,0,0, +1,0,0,3,0,3,3,3,3,3,0,0,3,0,2,2,3,3,3,3,3,0,0,0,1,1,3,0,0,0,0,2, +0,0,1,0,0,0,0,0,0,0,2,3,0,0,0,3,0,2,0,0,0,0,0,3,0,0,0,0,0,0,0,0, +2,0,3,3,3,3,0,0,2,3,0,0,3,0,3,3,2,3,3,3,3,3,0,0,3,3,3,0,0,0,3,3, +0,0,3,0,0,0,0,2,0,0,2,1,1,3,0,0,1,0,0,2,3,0,1,0,0,0,0,0,0,0,1,0, +3,3,3,3,2,3,3,3,3,3,3,3,1,2,1,3,3,2,2,1,2,2,2,3,1,1,2,0,2,1,2,1, +2,2,1,0,0,0,1,1,0,1,0,1,1,0,0,0,0,0,1,1,0,0,1,0,0,0,0,0,0,0,0,0, +3,0,2,1,2,3,3,3,0,2,0,2,2,0,2,1,3,2,2,1,2,1,0,0,2,2,1,0,2,1,2,2, +0,1,1,0,0,0,0,1,0,1,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0, +3,3,3,3,2,1,3,3,1,1,3,0,2,3,1,1,3,2,1,1,2,0,2,2,3,2,1,1,1,1,1,2, +3,0,0,1,3,1,2,1,2,0,3,0,0,0,1,0,3,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0, +3,3,1,1,3,2,3,3,3,1,3,2,1,3,2,1,3,2,2,2,2,1,3,3,1,2,1,3,1,2,3,0, +2,1,1,3,2,2,2,1,2,1,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2, +3,3,2,3,2,3,3,2,3,2,3,2,3,3,2,1,0,3,2,2,2,1,2,2,2,1,2,2,1,2,1,1, 
+2,2,2,3,0,1,3,1,1,1,1,0,1,1,0,2,1,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0, +3,3,3,3,2,3,2,2,1,1,3,2,3,2,3,2,0,3,2,2,1,2,0,2,2,2,1,2,2,2,2,1, +3,2,1,2,2,1,0,2,0,1,0,0,1,1,0,0,0,0,0,1,1,0,1,0,0,0,0,0,0,0,0,1, +3,3,3,3,3,2,3,1,2,3,3,2,2,3,0,1,1,2,0,3,3,2,2,3,0,1,1,3,0,0,0,0, +3,1,0,3,3,0,2,0,2,1,0,0,3,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +3,3,3,2,3,2,3,3,0,1,3,1,1,2,1,2,1,1,3,1,1,0,2,3,1,1,1,1,1,1,1,1, +3,1,1,2,2,2,2,1,1,1,0,0,2,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1, +3,2,2,1,1,2,1,3,3,2,3,2,2,3,2,2,3,1,2,2,1,2,0,3,2,1,2,2,2,2,2,1, +3,2,1,2,2,2,1,1,1,1,0,0,1,1,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0, +3,3,3,3,3,3,3,3,1,3,3,0,2,1,0,3,2,0,0,3,1,0,1,1,0,1,0,0,0,0,0,1, +1,0,0,1,0,3,2,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +3,0,2,2,2,3,0,0,1,3,0,3,2,0,3,2,2,3,3,3,3,3,1,0,2,2,2,0,2,2,1,2, +0,2,3,0,0,0,0,1,0,1,0,0,1,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1, +3,0,2,3,1,3,3,2,3,3,0,3,3,0,3,2,2,3,2,3,3,3,0,0,2,2,3,0,1,1,1,3, +0,0,3,0,0,0,2,2,0,1,3,0,1,2,2,2,3,0,0,0,0,0,1,0,0,0,0,0,0,0,0,1, +3,2,3,3,2,0,3,3,2,2,3,1,3,2,1,3,2,0,1,2,2,0,2,3,2,1,0,3,0,0,0,0, +3,0,0,2,3,1,3,0,0,3,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +3,1,3,2,2,2,1,2,0,1,3,1,1,3,1,3,0,0,2,1,1,1,1,2,1,1,1,0,2,1,0,1, +1,2,0,0,0,3,1,1,0,0,0,0,1,0,1,0,0,1,0,1,0,0,0,0,0,3,1,0,0,0,1,0, +3,3,3,3,2,2,2,2,2,1,3,1,1,1,2,0,1,1,2,1,2,1,3,2,0,0,3,1,1,1,1,1, +3,1,0,2,3,0,0,0,3,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +0,0,0,2,3,0,3,3,0,2,0,0,0,0,0,0,0,3,0,0,1,0,0,0,0,0,0,0,0,0,0,0, +0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +0,0,2,3,1,3,0,0,1,2,0,0,2,0,3,3,2,3,3,3,2,3,0,0,2,2,2,0,0,0,2,2, +0,0,1,0,0,0,0,3,0,0,0,0,2,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0, +0,0,0,3,0,2,0,0,0,0,0,0,0,0,0,0,1,2,3,1,3,3,0,0,1,0,3,0,0,0,0,0, +0,0,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +3,3,1,2,3,1,2,3,1,0,3,0,2,2,1,0,2,1,1,2,0,1,0,0,1,1,1,1,0,1,0,0, +1,0,0,0,0,1,1,0,3,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +3,3,3,3,2,1,0,1,1,1,3,1,2,2,2,2,2,2,1,1,1,1,0,3,1,0,1,3,1,1,1,1, +1,1,0,2,0,1,3,1,1,0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,2,0,1, +3,0,2,2,1,3,3,2,3,3,0,1,1,0,2,2,1,2,1,3,3,1,0,0,3,2,0,0,0,0,2,1, +0,1,0,0,0,0,1,2,0,1,1,3,1,1,2,2,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0, +0,0,3,0,0,1,0,0,0,3,0,0,3,0,3,1,0,1,1,1,3,2,0,0,0,3,0,0,0,0,2,0, +0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,2,0,0,0,0,0,0,0,0,0, +3,3,1,3,2,1,3,3,1,2,2,0,1,2,1,0,1,2,0,0,0,0,0,3,0,0,0,3,0,0,0,0, +3,0,0,1,1,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +3,0,1,2,0,3,3,3,2,2,0,1,1,0,1,3,0,0,0,2,2,0,0,0,0,3,1,0,1,0,0,0, +0,0,0,0,0,0,0,0,0,1,0,1,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +3,0,2,3,1,2,0,0,2,1,0,3,1,0,1,2,0,1,1,1,1,3,0,0,3,1,1,0,2,2,1,1, +0,2,0,0,0,0,0,1,0,1,0,0,1,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +3,0,0,3,1,2,0,0,2,2,0,1,2,0,1,0,1,3,1,2,1,0,0,0,2,0,3,0,0,0,1,0, +0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +3,0,1,1,2,2,0,0,0,2,0,2,1,0,1,1,0,1,1,1,2,1,0,0,1,1,1,0,2,1,1,1, +0,1,1,0,0,0,0,0,0,1,0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0,1, +0,0,0,2,0,1,3,1,1,1,1,0,0,0,0,3,2,0,1,0,0,0,1,2,0,0,0,1,0,0,0,0, +0,0,0,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +0,0,0,0,0,3,3,3,3,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0, +0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +1,0,2,3,2,2,0,0,0,1,0,0,0,0,2,3,2,1,2,2,3,0,0,0,2,3,1,0,0,0,1,1, +0,0,1,0,0,0,0,0,0,0,1,0,0,1,0,0,0,0,0,1,1,0,1,0,0,0,0,0,0,0,0,0, +3,3,2,2,0,1,0,0,0,0,2,0,2,0,1,0,0,0,1,1,0,0,0,2,1,0,1,0,1,1,0,0, +0,1,0,2,0,0,1,0,3,0,1,0,0,0,2,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 
+3,3,1,0,0,1,0,0,0,0,0,1,1,2,0,0,0,0,1,0,0,1,3,1,0,0,0,0,1,1,0,0, +0,1,0,0,0,0,3,0,0,0,0,0,0,3,0,0,0,0,0,0,0,3,0,0,0,0,0,0,0,0,0,0, +3,3,1,1,1,1,2,3,0,0,2,1,1,1,1,1,0,2,1,1,0,0,0,2,1,0,1,2,1,1,0,1, +2,1,0,3,0,0,0,0,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +1,3,1,0,0,0,0,0,0,0,3,0,0,0,3,0,0,0,0,0,0,0,0,1,1,0,0,0,0,0,0,1, +0,0,0,2,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +3,3,2,0,0,0,0,0,0,1,2,1,0,1,1,0,2,0,0,1,0,0,2,0,0,0,0,0,0,0,0,0, +0,0,0,0,0,0,2,0,0,0,1,3,0,1,0,0,0,2,0,0,0,0,0,0,0,1,2,0,0,0,0,0, +3,3,0,0,1,1,2,0,0,1,2,1,0,1,1,1,0,1,1,0,0,2,1,1,0,1,0,0,1,1,1,0, +0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,3,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0, +2,2,2,1,0,0,0,0,1,0,0,0,0,3,0,0,0,0,0,0,0,0,0,3,0,0,0,0,0,0,0,0, +2,0,0,0,0,0,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +2,3,0,0,1,1,0,0,0,2,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +0,0,0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +1,1,0,1,2,0,1,2,0,0,1,1,0,2,0,1,0,0,1,0,0,0,0,1,0,0,0,2,0,0,0,0, +1,0,0,1,0,1,1,0,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +0,1,0,0,0,0,0,0,0,1,1,0,1,1,0,2,1,3,0,0,0,0,1,1,0,0,0,0,0,0,0,3, +1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0, +0,0,0,0,0,0,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +2,0,1,0,1,0,0,2,0,0,2,0,0,1,1,2,0,0,1,1,0,0,0,1,0,0,0,1,1,0,0,0, +1,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0, +1,0,0,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,1, +0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,0,1,1,0,0,0, +2,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,3,0,0,0,0,0,0,0,0, +0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +2,0,0,0,0,2,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,1,0,1,0,0,0,0,0,0,0,0, +0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,1,3,0,0,0, +2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,0,1,0,0,0,0, +1,0,0,0,0,0,0,0,0,1,0,0,0,0,2,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0, +0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +0,0,1,1,0,0,2,1,0,0,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +) + +TIS620ThaiModel = { + 'char_to_order_map': TIS620CharToOrderMap, + 'precedence_matrix': ThaiLangModel, + 'typical_positive_ratio': 0.926386, + 'keep_english_letter': False, + 'charset_name': "TIS-620", + 'language': 'Thai', +} diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/langturkishmodel.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/langturkishmodel.py new file mode 100644 index 0000000000000000000000000000000000000000..a427a457398de8076cdcefb5a6c391e89500bce8 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/langturkishmodel.py @@ -0,0 +1,193 @@ +# -*- coding: utf-8 -*- +######################## BEGIN 
LICENSE BLOCK ######################## +# The Original Code is Mozilla Communicator client code. +# +# The Initial Developer of the Original Code is +# Netscape Communications Corporation. +# Portions created by the Initial Developer are Copyright (C) 1998 +# the Initial Developer. All Rights Reserved. +# +# Contributor(s): +# Mark Pilgrim - port to Python +# Özgür Baskın - Turkish Language Model +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License, or (at your option) any later version. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# Lesser General Public License for more details. +# +# You should have received a copy of the GNU Lesser General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA +# 02110-1301 USA +######################### END LICENSE BLOCK ######################### + +# 255: Control characters that usually does not exist in any text +# 254: Carriage/Return +# 253: symbol (punctuation) that does not belong to word +# 252: 0 - 9 + +# Character Mapping Table: +Latin5_TurkishCharToOrderMap = ( +255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255, +255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255, +255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255, +255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255, +255, 23, 37, 47, 39, 29, 52, 36, 45, 53, 60, 16, 49, 20, 46, 42, + 48, 69, 44, 35, 31, 51, 38, 62, 65, 43, 56,255,255,255,255,255, +255, 1, 21, 28, 12, 2, 18, 27, 25, 3, 24, 10, 5, 13, 4, 15, + 26, 64, 7, 8, 9, 14, 32, 57, 58, 11, 22,255,255,255,255,255, +180,179,178,177,176,175,174,173,172,171,170,169,168,167,166,165, +164,163,162,161,160,159,101,158,157,156,155,154,153,152,151,106, +150,149,148,147,146,145,144,100,143,142,141,140,139,138,137,136, + 94, 80, 93,135,105,134,133, 63,132,131,130,129,128,127,126,125, +124,104, 73, 99, 79, 85,123, 54,122, 98, 92,121,120, 91,103,119, + 68,118,117, 97,116,115, 50, 90,114,113,112,111, 55, 41, 40, 86, + 89, 70, 59, 78, 71, 82, 88, 33, 77, 66, 84, 83,110, 75, 61, 96, + 30, 67,109, 74, 87,102, 34, 95, 81,108, 76, 72, 17, 6, 19,107, +) + +TurkishLangModel = ( +3,2,3,3,3,1,3,3,3,3,3,3,3,3,2,1,1,3,3,1,3,3,0,3,3,3,3,3,0,3,1,3, +3,2,1,0,0,1,1,0,0,0,1,0,0,1,1,1,1,0,0,0,0,0,0,0,2,2,0,0,1,0,0,1, +3,2,2,3,3,0,3,3,3,3,3,3,3,2,3,1,0,3,3,1,3,3,0,3,3,3,3,3,0,3,0,3, +3,1,1,0,1,0,1,0,0,0,0,0,0,1,1,1,1,0,0,0,0,0,0,0,2,2,0,0,0,1,0,1, +3,3,2,3,3,0,3,3,3,3,3,3,3,2,3,1,1,3,3,0,3,3,1,2,3,3,3,3,0,3,0,3, +3,1,1,0,0,0,1,0,0,0,0,1,1,0,1,2,1,0,0,0,1,0,0,0,0,2,0,0,0,0,0,1, +3,3,3,3,3,3,2,3,3,3,3,3,3,3,3,1,3,3,2,0,3,2,1,2,2,1,3,3,0,0,0,2, +2,2,0,1,0,0,1,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,1,0,1,0,0,1, +3,3,3,2,3,3,1,2,3,3,3,3,3,3,3,1,3,2,1,0,3,2,0,1,2,3,3,2,1,0,0,2, +2,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,2,0,2,0,0,0, +1,0,1,3,3,1,3,3,3,3,3,3,3,1,2,0,0,2,3,0,2,3,0,0,2,2,2,3,0,3,0,1, +2,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,0,3,3,3,0,3,2,0,2,3,2,3,3,1,0,0,2, +3,2,0,0,1,0,0,0,0,0,0,2,0,0,1,0,0,0,0,0,0,0,0,0,1,1,1,0,2,0,0,1, +3,3,3,2,3,3,2,3,3,3,3,2,3,3,3,0,3,3,0,0,2,1,0,0,2,3,2,2,0,0,0,2, 
+2,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,0,0,0,1,0,1,0,2,0,0,1, +3,3,3,2,3,3,3,3,3,3,3,2,3,3,3,0,3,2,0,1,3,2,1,1,3,2,3,2,1,0,0,2, +2,2,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,0,0,0,0,0, +3,3,3,2,3,3,3,3,3,3,3,2,3,3,3,0,3,2,2,0,2,3,0,0,2,2,2,2,0,0,0,2, +3,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,2,0,1,0,0,0, +3,3,3,3,3,3,3,2,2,2,2,3,2,3,3,0,3,3,1,1,2,2,0,0,2,2,3,2,0,0,1,3, +0,3,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,0,0,0,0,1, +3,3,3,2,3,3,3,2,1,2,2,3,2,3,3,0,3,2,0,0,1,1,0,1,1,2,1,2,0,0,0,1, +0,3,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,0,1,0,0,0, +3,3,3,2,3,3,2,3,2,2,2,3,3,3,3,1,3,1,1,0,3,2,1,1,3,3,2,3,1,0,0,1, +1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,2,0,0,1, +3,2,2,3,3,0,3,3,3,3,3,3,3,2,2,1,0,3,3,1,3,3,0,1,3,3,2,3,0,3,0,3, +2,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0, +2,2,2,3,3,0,3,3,3,3,3,3,3,3,3,0,0,3,2,0,3,3,0,3,2,3,3,3,0,3,1,3, +2,0,0,0,0,0,0,0,0,0,0,1,0,1,2,0,1,0,0,0,0,0,0,0,2,2,0,0,1,0,0,1, +3,3,3,1,2,3,3,1,0,0,1,0,0,3,3,2,3,0,0,2,0,0,2,0,2,0,0,0,2,0,2,0, +0,3,1,0,1,0,0,0,2,2,1,0,1,1,2,1,2,2,2,0,2,1,1,0,0,0,2,0,0,0,0,0, +1,2,1,3,3,0,3,3,3,3,3,2,3,0,0,0,0,2,3,0,2,3,1,0,2,3,1,3,0,3,0,2, +3,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +3,3,3,1,3,3,2,2,3,2,2,0,1,2,3,0,1,2,1,0,1,0,0,0,1,0,2,2,0,0,0,1, +1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,1,1,0,0,1,0,0,0, +3,3,3,1,3,3,1,1,3,3,1,1,3,3,1,0,2,1,2,0,2,1,0,0,1,1,2,1,0,0,0,2, +2,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +3,3,3,1,0,2,1,3,0,0,2,0,0,3,3,0,3,0,0,1,0,1,2,0,0,1,1,2,2,0,1,0, +0,1,2,1,1,0,1,0,1,1,1,1,1,0,1,1,1,2,2,1,2,0,1,0,0,0,0,0,0,1,0,0, +3,3,3,2,3,2,3,3,0,2,2,2,3,3,3,0,3,0,0,0,2,2,0,1,2,1,1,1,0,0,0,1, +0,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0, +3,3,3,3,3,3,2,1,2,2,3,3,3,3,2,0,2,0,0,0,2,2,0,0,2,1,3,3,0,0,1,1, +1,1,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,0,0,0, +1,1,2,3,3,0,3,3,3,3,3,3,2,2,0,2,0,2,3,2,3,2,2,2,2,2,2,2,1,3,2,3, +2,0,2,1,2,2,2,2,1,1,2,2,1,2,2,1,2,0,0,2,1,1,0,2,1,0,0,1,0,0,0,1, +2,3,3,1,1,1,0,1,1,1,2,3,2,1,1,0,0,0,0,0,0,0,0,0,0,1,0,1,0,0,0,0, +0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +3,3,3,2,2,2,3,2,3,2,2,1,3,3,3,0,2,1,2,0,2,1,0,0,1,1,1,1,1,0,0,1, +2,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,1,0,2,0,1,0,0,0, +3,3,3,2,3,3,3,3,3,2,3,1,2,3,3,1,2,0,0,0,0,0,0,0,3,2,1,1,0,0,0,0, +2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0, +3,3,3,2,2,3,3,2,1,1,1,1,1,3,3,0,3,1,0,0,1,1,0,0,3,1,2,1,0,0,0,0, +0,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0, +3,3,3,2,2,3,2,2,2,3,2,1,1,3,3,0,3,0,0,0,0,1,0,0,3,1,1,2,0,0,0,1, +1,0,0,1,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1, +1,1,1,3,3,0,3,3,3,3,3,2,2,2,1,2,0,2,1,2,2,1,1,0,1,2,2,2,2,2,2,2, +0,0,2,1,2,1,2,1,0,1,1,3,1,2,1,1,2,0,0,2,0,1,0,1,0,1,0,0,0,1,0,1, +3,3,3,1,3,3,3,0,1,1,0,2,2,3,1,0,3,0,0,0,1,0,0,0,1,0,0,1,0,1,0,0, +1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +3,3,2,0,0,2,2,1,0,0,1,0,0,3,3,1,3,0,0,1,1,0,2,0,3,0,0,0,2,0,1,1, +0,1,2,0,1,2,2,0,2,2,2,2,1,0,2,1,1,0,2,0,2,1,2,0,0,0,0,0,0,0,0,0, +3,3,3,1,3,2,3,2,0,2,2,2,1,3,2,0,2,1,2,0,1,2,0,0,1,0,2,2,0,0,0,2, +1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,0,1,0,0,0, +3,3,3,0,3,3,1,1,2,3,1,0,3,2,3,0,3,0,0,0,1,0,0,0,1,0,1,0,0,0,0,0, +1,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +0,0,0,3,3,0,3,3,2,3,3,2,2,0,0,0,0,1,2,0,1,3,0,0,0,3,1,1,0,3,0,2, +2,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 
+3,3,3,1,2,2,1,0,3,1,1,1,1,3,3,2,3,0,0,1,0,1,2,0,2,2,0,2,2,0,2,1, +0,2,2,1,1,1,1,0,2,1,1,0,1,1,1,1,2,1,2,1,2,0,1,0,1,0,0,0,0,0,0,0, +3,3,3,0,1,1,3,0,0,1,1,0,0,2,2,0,3,0,0,1,1,0,1,0,0,0,0,0,2,0,0,0, +0,3,1,0,1,0,1,0,2,0,0,1,0,1,0,1,1,1,2,1,1,0,2,0,0,0,0,0,0,0,0,0, +3,3,3,0,2,0,2,0,1,1,1,0,0,3,3,0,2,0,0,1,0,0,2,1,1,0,1,0,1,0,1,0, +0,2,0,1,2,0,2,0,2,1,1,0,1,0,2,1,1,0,2,1,1,0,1,0,0,0,1,1,0,0,0,0, +3,2,3,0,1,0,0,0,0,0,0,0,0,1,2,0,1,0,0,1,0,0,1,0,0,0,0,0,2,0,0,0, +0,0,1,1,0,0,1,0,1,0,0,1,0,0,0,2,1,0,1,0,2,0,0,0,0,0,0,0,0,0,0,0, +3,3,3,0,0,2,3,0,0,1,0,1,0,2,3,2,3,0,0,1,3,0,2,1,0,0,0,0,2,0,1,0, +0,2,1,0,0,1,1,0,2,1,0,0,1,0,0,1,1,0,1,1,2,0,1,0,0,0,0,1,0,0,0,0, +3,2,2,0,0,1,1,0,0,0,0,0,0,3,1,1,1,0,0,0,0,0,1,0,0,0,0,0,2,0,1,0, +0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,1,0,1,0,0,0,0,0,0,0,0,0, +0,0,0,3,3,0,2,3,2,2,1,2,2,1,1,2,0,1,3,2,2,2,0,0,2,2,0,0,0,1,2,1, +3,0,2,1,1,0,1,1,1,0,1,2,2,2,1,1,2,0,0,0,0,1,0,1,1,0,0,0,0,0,0,0, +0,1,1,2,3,0,3,3,3,2,2,2,2,1,0,1,0,1,0,1,2,2,0,0,2,2,1,3,1,1,2,1, +0,0,1,1,2,0,1,1,0,0,1,2,0,2,1,1,2,0,0,1,0,0,0,1,0,1,0,1,0,0,0,0, +3,3,2,0,0,3,1,0,0,0,0,0,0,3,2,1,2,0,0,1,0,0,2,0,0,0,0,0,2,0,1,0, +0,2,1,1,0,0,1,0,1,2,0,0,1,1,0,0,2,1,1,1,1,0,2,0,0,0,0,0,0,0,0,0, +3,3,2,0,0,1,0,0,0,0,1,0,0,3,3,2,2,0,0,1,0,0,2,0,1,0,0,0,2,0,1,0, +0,0,1,1,0,0,2,0,2,1,0,0,1,1,2,1,2,0,2,1,2,1,1,1,0,0,1,1,0,0,0,0, +3,3,2,0,0,2,2,0,0,0,1,1,0,2,2,1,3,1,0,1,0,1,2,0,0,0,0,0,1,0,1,0, +0,1,1,0,0,0,0,0,1,0,0,1,0,0,0,1,1,0,1,0,1,0,0,0,0,0,0,0,0,0,0,0, +3,3,3,2,0,0,0,1,0,0,1,0,0,2,3,1,2,0,0,1,0,0,2,0,0,0,1,0,2,0,2,0, +0,1,1,2,2,1,2,0,2,1,1,0,0,1,1,0,1,1,1,1,2,1,1,0,0,0,0,0,0,0,0,0, +3,3,3,0,2,1,2,1,0,0,1,1,0,3,3,1,2,0,0,1,0,0,2,0,2,0,1,1,2,0,0,0, +0,0,1,1,1,1,2,0,1,1,0,1,1,1,1,0,0,0,1,1,1,0,1,0,0,0,1,0,0,0,0,0, +3,3,3,0,2,2,3,2,0,0,1,0,0,2,3,1,0,0,0,0,0,0,2,0,2,0,0,0,2,0,0,0, +0,1,1,0,0,0,1,0,0,1,0,1,1,0,1,0,1,1,1,0,1,0,0,0,0,0,0,0,0,0,0,0, +3,2,3,0,0,0,0,0,0,0,1,0,0,2,2,2,2,0,0,1,0,0,2,0,0,0,0,0,2,0,1,0, +0,0,2,1,1,0,1,0,2,1,1,0,0,1,1,2,1,0,2,0,2,0,1,0,0,0,2,0,0,0,0,0, +0,0,0,2,2,0,2,1,1,1,1,2,2,0,0,1,0,1,0,0,1,3,0,0,0,0,1,0,0,2,1,0, +0,0,1,0,1,0,0,0,0,0,2,1,0,1,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0, +2,0,0,2,3,0,2,3,1,2,2,0,2,0,0,2,0,2,1,1,1,2,1,0,0,1,2,1,1,2,1,0, +1,0,2,0,1,0,1,1,0,0,2,2,1,2,1,1,2,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0, +3,3,3,0,2,1,2,0,0,0,1,0,0,3,2,0,1,0,0,1,0,0,2,0,0,0,1,2,1,0,1,0, +0,0,0,0,1,0,1,0,0,1,0,0,0,0,1,0,1,0,1,1,1,0,1,0,0,0,0,0,0,0,0,0, +0,0,0,2,2,0,2,2,1,1,0,1,1,1,1,1,0,0,1,2,1,1,1,0,1,0,0,0,1,1,1,1, +0,0,2,1,0,1,1,1,0,1,1,2,1,2,1,1,2,0,1,1,2,1,0,2,0,0,0,0,0,0,0,0, +3,2,2,0,0,2,0,0,0,0,0,0,0,2,2,0,2,0,0,1,0,0,2,0,0,0,0,0,2,0,0,0, +0,2,1,0,0,0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,1,0,1,0,0,0,0,0,0,0,0,0, +0,0,0,3,2,0,2,2,0,1,1,0,1,0,0,1,0,0,0,1,0,1,0,0,0,0,0,1,0,0,0,0, +2,0,1,0,1,0,1,1,0,0,1,2,0,1,0,1,1,0,0,1,0,1,0,2,0,0,0,0,0,0,0,0, +2,2,2,0,1,1,0,0,0,1,0,0,0,1,2,0,1,0,0,1,0,0,1,0,0,0,0,1,2,0,1,0, +0,0,1,0,0,0,1,0,0,1,0,0,0,0,0,0,1,0,1,0,2,0,0,0,0,0,0,0,0,0,0,0, +2,2,2,2,1,0,1,1,1,0,0,0,0,1,2,0,0,1,0,0,0,1,0,0,1,0,0,0,0,0,0,0, +0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0, +1,1,2,0,1,0,0,0,1,0,1,0,0,0,1,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0, +0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,2,0,0,0,0,0,1, +0,0,1,2,2,0,2,1,2,1,1,2,2,0,0,0,0,1,0,0,1,1,0,0,2,0,0,0,0,1,0,0, +0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0, +2,2,2,0,0,0,1,0,0,0,0,0,0,2,2,1,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0, +0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +0,0,0,1,1,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0, 
+0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +2,2,2,0,1,0,1,0,0,0,0,0,0,1,1,0,0,0,0,0,0,0,1,0,1,0,0,0,0,0,0,0, +0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,1,0,0, +0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +0,0,1,0,0,0,0,0,0,0,0,0,0,2,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, +) + +Latin5TurkishModel = { + 'char_to_order_map': Latin5_TurkishCharToOrderMap, + 'precedence_matrix': TurkishLangModel, + 'typical_positive_ratio': 0.970290, + 'keep_english_letter': True, + 'charset_name': "ISO-8859-9", + 'language': 'Turkish', +} diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/latin1prober.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/latin1prober.py new file mode 100644 index 0000000000000000000000000000000000000000..7d1e8c20fb09ddaa0254ae74cbd4425ffdc5dcdc --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/latin1prober.py @@ -0,0 +1,145 @@ +######################## BEGIN LICENSE BLOCK ######################## +# The Original Code is Mozilla Universal charset detector code. +# +# The Initial Developer of the Original Code is +# Netscape Communications Corporation. +# Portions created by the Initial Developer are Copyright (C) 2001 +# the Initial Developer. All Rights Reserved. +# +# Contributor(s): +# Mark Pilgrim - port to Python +# Shy Shalom - original C code +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License, or (at your option) any later version. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# Lesser General Public License for more details. 
+# +# You should have received a copy of the GNU Lesser General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA +# 02110-1301 USA +######################### END LICENSE BLOCK ######################### + +from .charsetprober import CharSetProber +from .enums import ProbingState + +FREQ_CAT_NUM = 4 + +UDF = 0 # undefined +OTH = 1 # other +ASC = 2 # ascii capital letter +ASS = 3 # ascii small letter +ACV = 4 # accent capital vowel +ACO = 5 # accent capital other +ASV = 6 # accent small vowel +ASO = 7 # accent small other +CLASS_NUM = 8 # total classes + +Latin1_CharToClass = ( + OTH, OTH, OTH, OTH, OTH, OTH, OTH, OTH, # 00 - 07 + OTH, OTH, OTH, OTH, OTH, OTH, OTH, OTH, # 08 - 0F + OTH, OTH, OTH, OTH, OTH, OTH, OTH, OTH, # 10 - 17 + OTH, OTH, OTH, OTH, OTH, OTH, OTH, OTH, # 18 - 1F + OTH, OTH, OTH, OTH, OTH, OTH, OTH, OTH, # 20 - 27 + OTH, OTH, OTH, OTH, OTH, OTH, OTH, OTH, # 28 - 2F + OTH, OTH, OTH, OTH, OTH, OTH, OTH, OTH, # 30 - 37 + OTH, OTH, OTH, OTH, OTH, OTH, OTH, OTH, # 38 - 3F + OTH, ASC, ASC, ASC, ASC, ASC, ASC, ASC, # 40 - 47 + ASC, ASC, ASC, ASC, ASC, ASC, ASC, ASC, # 48 - 4F + ASC, ASC, ASC, ASC, ASC, ASC, ASC, ASC, # 50 - 57 + ASC, ASC, ASC, OTH, OTH, OTH, OTH, OTH, # 58 - 5F + OTH, ASS, ASS, ASS, ASS, ASS, ASS, ASS, # 60 - 67 + ASS, ASS, ASS, ASS, ASS, ASS, ASS, ASS, # 68 - 6F + ASS, ASS, ASS, ASS, ASS, ASS, ASS, ASS, # 70 - 77 + ASS, ASS, ASS, OTH, OTH, OTH, OTH, OTH, # 78 - 7F + OTH, UDF, OTH, ASO, OTH, OTH, OTH, OTH, # 80 - 87 + OTH, OTH, ACO, OTH, ACO, UDF, ACO, UDF, # 88 - 8F + UDF, OTH, OTH, OTH, OTH, OTH, OTH, OTH, # 90 - 97 + OTH, OTH, ASO, OTH, ASO, UDF, ASO, ACO, # 98 - 9F + OTH, OTH, OTH, OTH, OTH, OTH, OTH, OTH, # A0 - A7 + OTH, OTH, OTH, OTH, OTH, OTH, OTH, OTH, # A8 - AF + OTH, OTH, OTH, OTH, OTH, OTH, OTH, OTH, # B0 - B7 + OTH, OTH, OTH, OTH, OTH, OTH, OTH, OTH, # B8 - BF + ACV, ACV, ACV, ACV, ACV, ACV, ACO, ACO, # C0 - C7 + ACV, ACV, ACV, ACV, ACV, ACV, ACV, ACV, # C8 - CF + ACO, ACO, ACV, ACV, ACV, ACV, ACV, OTH, # D0 - D7 + ACV, ACV, ACV, ACV, ACV, ACO, ACO, ACO, # D8 - DF + ASV, ASV, ASV, ASV, ASV, ASV, ASO, ASO, # E0 - E7 + ASV, ASV, ASV, ASV, ASV, ASV, ASV, ASV, # E8 - EF + ASO, ASO, ASV, ASV, ASV, ASV, ASV, OTH, # F0 - F7 + ASV, ASV, ASV, ASV, ASV, ASO, ASO, ASO, # F8 - FF +) + +# 0 : illegal +# 1 : very unlikely +# 2 : normal +# 3 : very likely +Latin1ClassModel = ( +# UDF OTH ASC ASS ACV ACO ASV ASO + 0, 0, 0, 0, 0, 0, 0, 0, # UDF + 0, 3, 3, 3, 3, 3, 3, 3, # OTH + 0, 3, 3, 3, 3, 3, 3, 3, # ASC + 0, 3, 3, 3, 1, 1, 3, 3, # ASS + 0, 3, 3, 3, 1, 2, 1, 2, # ACV + 0, 3, 3, 3, 3, 3, 3, 3, # ACO + 0, 3, 1, 3, 1, 1, 1, 3, # ASV + 0, 3, 1, 3, 1, 1, 3, 3, # ASO +) + + +class Latin1Prober(CharSetProber): + def __init__(self): + super(Latin1Prober, self).__init__() + self._last_char_class = None + self._freq_counter = None + self.reset() + + def reset(self): + self._last_char_class = OTH + self._freq_counter = [0] * FREQ_CAT_NUM + CharSetProber.reset(self) + + @property + def charset_name(self): + return "ISO-8859-1" + + @property + def language(self): + return "" + + def feed(self, byte_str): + byte_str = self.filter_with_english_letters(byte_str) + for c in byte_str: + char_class = Latin1_CharToClass[c] + freq = Latin1ClassModel[(self._last_char_class * CLASS_NUM) + + char_class] + if freq == 0: + self._state = ProbingState.NOT_ME + break + self._freq_counter[freq] += 1 + self._last_char_class = char_class + + return self.state + + def get_confidence(self): + if self.state == 
ProbingState.NOT_ME:
+            return 0.01
+
+        total = sum(self._freq_counter)
+        if total < 0.01:
+            confidence = 0.0
+        else:
+            confidence = ((self._freq_counter[3] - self._freq_counter[1] * 20.0)
+                          / total)
+        if confidence < 0.0:
+            confidence = 0.0
+        # Lower the confidence of Latin-1 so that other, more accurate
+        # detectors can take priority.
+        confidence = confidence * 0.73
+        return confidence
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/mbcharsetprober.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/mbcharsetprober.py
new file mode 100644
index 0000000000000000000000000000000000000000..6256ecfd1e2c9ac4cfa3fac359cd12dce85b759c
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/mbcharsetprober.py
@@ -0,0 +1,91 @@
+######################## BEGIN LICENSE BLOCK ########################
+# The Original Code is Mozilla Universal charset detector code.
+#
+# The Initial Developer of the Original Code is
+# Netscape Communications Corporation.
+# Portions created by the Initial Developer are Copyright (C) 2001
+# the Initial Developer. All Rights Reserved.
+#
+# Contributor(s):
+#   Mark Pilgrim - port to Python
+#   Shy Shalom - original C code
+#   Proofpoint, Inc.
+#
+# This library is free software; you can redistribute it and/or
+# modify it under the terms of the GNU Lesser General Public
+# License as published by the Free Software Foundation; either
+# version 2.1 of the License, or (at your option) any later version.
+#
+# This library is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+# Lesser General Public License for more details.
+# +# You should have received a copy of the GNU Lesser General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA +# 02110-1301 USA +######################### END LICENSE BLOCK ######################### + +from .charsetprober import CharSetProber +from .enums import ProbingState, MachineState + + +class MultiByteCharSetProber(CharSetProber): + """ + MultiByteCharSetProber + """ + + def __init__(self, lang_filter=None): + super(MultiByteCharSetProber, self).__init__(lang_filter=lang_filter) + self.distribution_analyzer = None + self.coding_sm = None + self._last_char = [0, 0] + + def reset(self): + super(MultiByteCharSetProber, self).reset() + if self.coding_sm: + self.coding_sm.reset() + if self.distribution_analyzer: + self.distribution_analyzer.reset() + self._last_char = [0, 0] + + @property + def charset_name(self): + raise NotImplementedError + + @property + def language(self): + raise NotImplementedError + + def feed(self, byte_str): + for i in range(len(byte_str)): + coding_state = self.coding_sm.next_state(byte_str[i]) + if coding_state == MachineState.ERROR: + self.logger.debug('%s %s prober hit error at byte %s', + self.charset_name, self.language, i) + self._state = ProbingState.NOT_ME + break + elif coding_state == MachineState.ITS_ME: + self._state = ProbingState.FOUND_IT + break + elif coding_state == MachineState.START: + char_len = self.coding_sm.get_current_charlen() + if i == 0: + self._last_char[1] = byte_str[0] + self.distribution_analyzer.feed(self._last_char, char_len) + else: + self.distribution_analyzer.feed(byte_str[i - 1:i + 1], + char_len) + + self._last_char[0] = byte_str[-1] + + if self.state == ProbingState.DETECTING: + if (self.distribution_analyzer.got_enough_data() and + (self.get_confidence() > self.SHORTCUT_THRESHOLD)): + self._state = ProbingState.FOUND_IT + + return self.state + + def get_confidence(self): + return self.distribution_analyzer.get_confidence() diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/mbcsgroupprober.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/mbcsgroupprober.py new file mode 100644 index 0000000000000000000000000000000000000000..530abe75e0c00cbfcb2a310d872866f320977d0a --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/mbcsgroupprober.py @@ -0,0 +1,54 @@ +######################## BEGIN LICENSE BLOCK ######################## +# The Original Code is Mozilla Universal charset detector code. +# +# The Initial Developer of the Original Code is +# Netscape Communications Corporation. +# Portions created by the Initial Developer are Copyright (C) 2001 +# the Initial Developer. All Rights Reserved. +# +# Contributor(s): +# Mark Pilgrim - port to Python +# Shy Shalom - original C code +# Proofpoint, Inc. +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License, or (at your option) any later version. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# Lesser General Public License for more details. 
+# +# You should have received a copy of the GNU Lesser General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA +# 02110-1301 USA +######################### END LICENSE BLOCK ######################### + +from .charsetgroupprober import CharSetGroupProber +from .utf8prober import UTF8Prober +from .sjisprober import SJISProber +from .eucjpprober import EUCJPProber +from .gb2312prober import GB2312Prober +from .euckrprober import EUCKRProber +from .cp949prober import CP949Prober +from .big5prober import Big5Prober +from .euctwprober import EUCTWProber + + +class MBCSGroupProber(CharSetGroupProber): + def __init__(self, lang_filter=None): + super(MBCSGroupProber, self).__init__(lang_filter=lang_filter) + self.probers = [ + UTF8Prober(), + SJISProber(), + EUCJPProber(), + GB2312Prober(), + EUCKRProber(), + CP949Prober(), + Big5Prober(), + EUCTWProber() + ] + self.reset() diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/mbcssm.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/mbcssm.py new file mode 100644 index 0000000000000000000000000000000000000000..8360d0f284ef394f2980b5bb89548e234385cdf1 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/mbcssm.py @@ -0,0 +1,572 @@ +######################## BEGIN LICENSE BLOCK ######################## +# The Original Code is mozilla.org code. +# +# The Initial Developer of the Original Code is +# Netscape Communications Corporation. +# Portions created by the Initial Developer are Copyright (C) 1998 +# the Initial Developer. All Rights Reserved. +# +# Contributor(s): +# Mark Pilgrim - port to Python +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License, or (at your option) any later version. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# Lesser General Public License for more details. 
+# +# You should have received a copy of the GNU Lesser General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA +# 02110-1301 USA +######################### END LICENSE BLOCK ######################### + +from .enums import MachineState + +# BIG5 + +BIG5_CLS = ( + 1,1,1,1,1,1,1,1, # 00 - 07 #allow 0x00 as legal value + 1,1,1,1,1,1,0,0, # 08 - 0f + 1,1,1,1,1,1,1,1, # 10 - 17 + 1,1,1,0,1,1,1,1, # 18 - 1f + 1,1,1,1,1,1,1,1, # 20 - 27 + 1,1,1,1,1,1,1,1, # 28 - 2f + 1,1,1,1,1,1,1,1, # 30 - 37 + 1,1,1,1,1,1,1,1, # 38 - 3f + 2,2,2,2,2,2,2,2, # 40 - 47 + 2,2,2,2,2,2,2,2, # 48 - 4f + 2,2,2,2,2,2,2,2, # 50 - 57 + 2,2,2,2,2,2,2,2, # 58 - 5f + 2,2,2,2,2,2,2,2, # 60 - 67 + 2,2,2,2,2,2,2,2, # 68 - 6f + 2,2,2,2,2,2,2,2, # 70 - 77 + 2,2,2,2,2,2,2,1, # 78 - 7f + 4,4,4,4,4,4,4,4, # 80 - 87 + 4,4,4,4,4,4,4,4, # 88 - 8f + 4,4,4,4,4,4,4,4, # 90 - 97 + 4,4,4,4,4,4,4,4, # 98 - 9f + 4,3,3,3,3,3,3,3, # a0 - a7 + 3,3,3,3,3,3,3,3, # a8 - af + 3,3,3,3,3,3,3,3, # b0 - b7 + 3,3,3,3,3,3,3,3, # b8 - bf + 3,3,3,3,3,3,3,3, # c0 - c7 + 3,3,3,3,3,3,3,3, # c8 - cf + 3,3,3,3,3,3,3,3, # d0 - d7 + 3,3,3,3,3,3,3,3, # d8 - df + 3,3,3,3,3,3,3,3, # e0 - e7 + 3,3,3,3,3,3,3,3, # e8 - ef + 3,3,3,3,3,3,3,3, # f0 - f7 + 3,3,3,3,3,3,3,0 # f8 - ff +) + +BIG5_ST = ( + MachineState.ERROR,MachineState.START,MachineState.START, 3,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,#00-07 + MachineState.ERROR,MachineState.ERROR,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ERROR,#08-0f + MachineState.ERROR,MachineState.START,MachineState.START,MachineState.START,MachineState.START,MachineState.START,MachineState.START,MachineState.START#10-17 +) + +BIG5_CHAR_LEN_TABLE = (0, 1, 1, 2, 0) + +BIG5_SM_MODEL = {'class_table': BIG5_CLS, + 'class_factor': 5, + 'state_table': BIG5_ST, + 'char_len_table': BIG5_CHAR_LEN_TABLE, + 'name': 'Big5'} + +# CP949 + +CP949_CLS = ( + 1,1,1,1,1,1,1,1, 1,1,1,1,1,1,0,0, # 00 - 0f + 1,1,1,1,1,1,1,1, 1,1,1,0,1,1,1,1, # 10 - 1f + 1,1,1,1,1,1,1,1, 1,1,1,1,1,1,1,1, # 20 - 2f + 1,1,1,1,1,1,1,1, 1,1,1,1,1,1,1,1, # 30 - 3f + 1,4,4,4,4,4,4,4, 4,4,4,4,4,4,4,4, # 40 - 4f + 4,4,5,5,5,5,5,5, 5,5,5,1,1,1,1,1, # 50 - 5f + 1,5,5,5,5,5,5,5, 5,5,5,5,5,5,5,5, # 60 - 6f + 5,5,5,5,5,5,5,5, 5,5,5,1,1,1,1,1, # 70 - 7f + 0,6,6,6,6,6,6,6, 6,6,6,6,6,6,6,6, # 80 - 8f + 6,6,6,6,6,6,6,6, 6,6,6,6,6,6,6,6, # 90 - 9f + 6,7,7,7,7,7,7,7, 7,7,7,7,7,8,8,8, # a0 - af + 7,7,7,7,7,7,7,7, 7,7,7,7,7,7,7,7, # b0 - bf + 7,7,7,7,7,7,9,2, 2,3,2,2,2,2,2,2, # c0 - cf + 2,2,2,2,2,2,2,2, 2,2,2,2,2,2,2,2, # d0 - df + 2,2,2,2,2,2,2,2, 2,2,2,2,2,2,2,2, # e0 - ef + 2,2,2,2,2,2,2,2, 2,2,2,2,2,2,2,0, # f0 - ff +) + +CP949_ST = ( +#cls= 0 1 2 3 4 5 6 7 8 9 # previous state = + MachineState.ERROR,MachineState.START, 3,MachineState.ERROR,MachineState.START,MachineState.START, 4, 5,MachineState.ERROR, 6, # MachineState.START + MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR, # MachineState.ERROR + MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME, # MachineState.ITS_ME + 
MachineState.ERROR,MachineState.ERROR,MachineState.START,MachineState.START,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.START,MachineState.START,MachineState.START, # 3 + MachineState.ERROR,MachineState.ERROR,MachineState.START,MachineState.START,MachineState.START,MachineState.START,MachineState.START,MachineState.START,MachineState.START,MachineState.START, # 4 + MachineState.ERROR,MachineState.START,MachineState.START,MachineState.START,MachineState.START,MachineState.START,MachineState.START,MachineState.START,MachineState.START,MachineState.START, # 5 + MachineState.ERROR,MachineState.START,MachineState.START,MachineState.START,MachineState.START,MachineState.ERROR,MachineState.ERROR,MachineState.START,MachineState.START,MachineState.START, # 6 +) + +CP949_CHAR_LEN_TABLE = (0, 1, 2, 0, 1, 1, 2, 2, 0, 2) + +CP949_SM_MODEL = {'class_table': CP949_CLS, + 'class_factor': 10, + 'state_table': CP949_ST, + 'char_len_table': CP949_CHAR_LEN_TABLE, + 'name': 'CP949'} + +# EUC-JP + +EUCJP_CLS = ( + 4,4,4,4,4,4,4,4, # 00 - 07 + 4,4,4,4,4,4,5,5, # 08 - 0f + 4,4,4,4,4,4,4,4, # 10 - 17 + 4,4,4,5,4,4,4,4, # 18 - 1f + 4,4,4,4,4,4,4,4, # 20 - 27 + 4,4,4,4,4,4,4,4, # 28 - 2f + 4,4,4,4,4,4,4,4, # 30 - 37 + 4,4,4,4,4,4,4,4, # 38 - 3f + 4,4,4,4,4,4,4,4, # 40 - 47 + 4,4,4,4,4,4,4,4, # 48 - 4f + 4,4,4,4,4,4,4,4, # 50 - 57 + 4,4,4,4,4,4,4,4, # 58 - 5f + 4,4,4,4,4,4,4,4, # 60 - 67 + 4,4,4,4,4,4,4,4, # 68 - 6f + 4,4,4,4,4,4,4,4, # 70 - 77 + 4,4,4,4,4,4,4,4, # 78 - 7f + 5,5,5,5,5,5,5,5, # 80 - 87 + 5,5,5,5,5,5,1,3, # 88 - 8f + 5,5,5,5,5,5,5,5, # 90 - 97 + 5,5,5,5,5,5,5,5, # 98 - 9f + 5,2,2,2,2,2,2,2, # a0 - a7 + 2,2,2,2,2,2,2,2, # a8 - af + 2,2,2,2,2,2,2,2, # b0 - b7 + 2,2,2,2,2,2,2,2, # b8 - bf + 2,2,2,2,2,2,2,2, # c0 - c7 + 2,2,2,2,2,2,2,2, # c8 - cf + 2,2,2,2,2,2,2,2, # d0 - d7 + 2,2,2,2,2,2,2,2, # d8 - df + 0,0,0,0,0,0,0,0, # e0 - e7 + 0,0,0,0,0,0,0,0, # e8 - ef + 0,0,0,0,0,0,0,0, # f0 - f7 + 0,0,0,0,0,0,0,5 # f8 - ff +) + +EUCJP_ST = ( + 3, 4, 3, 5,MachineState.START,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,#00-07 + MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,#08-0f + MachineState.ITS_ME,MachineState.ITS_ME,MachineState.START,MachineState.ERROR,MachineState.START,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,#10-17 + MachineState.ERROR,MachineState.ERROR,MachineState.START,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR, 3,MachineState.ERROR,#18-1f + 3,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.START,MachineState.START,MachineState.START,MachineState.START#20-27 +) + +EUCJP_CHAR_LEN_TABLE = (2, 2, 2, 3, 1, 0) + +EUCJP_SM_MODEL = {'class_table': EUCJP_CLS, + 'class_factor': 6, + 'state_table': EUCJP_ST, + 'char_len_table': EUCJP_CHAR_LEN_TABLE, + 'name': 'EUC-JP'} + +# EUC-KR + +EUCKR_CLS = ( + 1,1,1,1,1,1,1,1, # 00 - 07 + 1,1,1,1,1,1,0,0, # 08 - 0f + 1,1,1,1,1,1,1,1, # 10 - 17 + 1,1,1,0,1,1,1,1, # 18 - 1f + 1,1,1,1,1,1,1,1, # 20 - 27 + 1,1,1,1,1,1,1,1, # 28 - 2f + 1,1,1,1,1,1,1,1, # 30 - 37 + 1,1,1,1,1,1,1,1, # 38 - 3f + 1,1,1,1,1,1,1,1, # 40 - 47 + 1,1,1,1,1,1,1,1, # 48 - 4f + 1,1,1,1,1,1,1,1, # 50 - 57 + 1,1,1,1,1,1,1,1, # 58 - 5f + 1,1,1,1,1,1,1,1, # 60 - 67 + 1,1,1,1,1,1,1,1, # 68 - 6f + 1,1,1,1,1,1,1,1, # 70 - 77 + 1,1,1,1,1,1,1,1, # 78 - 7f + 0,0,0,0,0,0,0,0, # 80 - 87 + 0,0,0,0,0,0,0,0, # 88 - 8f + 0,0,0,0,0,0,0,0, # 90 - 97 + 0,0,0,0,0,0,0,0, # 98 - 9f + 0,2,2,2,2,2,2,2, # a0 - a7 + 2,2,2,2,2,3,3,3, # 
a8 - af + 2,2,2,2,2,2,2,2, # b0 - b7 + 2,2,2,2,2,2,2,2, # b8 - bf + 2,2,2,2,2,2,2,2, # c0 - c7 + 2,3,2,2,2,2,2,2, # c8 - cf + 2,2,2,2,2,2,2,2, # d0 - d7 + 2,2,2,2,2,2,2,2, # d8 - df + 2,2,2,2,2,2,2,2, # e0 - e7 + 2,2,2,2,2,2,2,2, # e8 - ef + 2,2,2,2,2,2,2,2, # f0 - f7 + 2,2,2,2,2,2,2,0 # f8 - ff +) + +EUCKR_ST = ( + MachineState.ERROR,MachineState.START, 3,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,#00-07 + MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ERROR,MachineState.ERROR,MachineState.START,MachineState.START #08-0f +) + +EUCKR_CHAR_LEN_TABLE = (0, 1, 2, 0) + +EUCKR_SM_MODEL = {'class_table': EUCKR_CLS, + 'class_factor': 4, + 'state_table': EUCKR_ST, + 'char_len_table': EUCKR_CHAR_LEN_TABLE, + 'name': 'EUC-KR'} + +# EUC-TW + +EUCTW_CLS = ( + 2,2,2,2,2,2,2,2, # 00 - 07 + 2,2,2,2,2,2,0,0, # 08 - 0f + 2,2,2,2,2,2,2,2, # 10 - 17 + 2,2,2,0,2,2,2,2, # 18 - 1f + 2,2,2,2,2,2,2,2, # 20 - 27 + 2,2,2,2,2,2,2,2, # 28 - 2f + 2,2,2,2,2,2,2,2, # 30 - 37 + 2,2,2,2,2,2,2,2, # 38 - 3f + 2,2,2,2,2,2,2,2, # 40 - 47 + 2,2,2,2,2,2,2,2, # 48 - 4f + 2,2,2,2,2,2,2,2, # 50 - 57 + 2,2,2,2,2,2,2,2, # 58 - 5f + 2,2,2,2,2,2,2,2, # 60 - 67 + 2,2,2,2,2,2,2,2, # 68 - 6f + 2,2,2,2,2,2,2,2, # 70 - 77 + 2,2,2,2,2,2,2,2, # 78 - 7f + 0,0,0,0,0,0,0,0, # 80 - 87 + 0,0,0,0,0,0,6,0, # 88 - 8f + 0,0,0,0,0,0,0,0, # 90 - 97 + 0,0,0,0,0,0,0,0, # 98 - 9f + 0,3,4,4,4,4,4,4, # a0 - a7 + 5,5,1,1,1,1,1,1, # a8 - af + 1,1,1,1,1,1,1,1, # b0 - b7 + 1,1,1,1,1,1,1,1, # b8 - bf + 1,1,3,1,3,3,3,3, # c0 - c7 + 3,3,3,3,3,3,3,3, # c8 - cf + 3,3,3,3,3,3,3,3, # d0 - d7 + 3,3,3,3,3,3,3,3, # d8 - df + 3,3,3,3,3,3,3,3, # e0 - e7 + 3,3,3,3,3,3,3,3, # e8 - ef + 3,3,3,3,3,3,3,3, # f0 - f7 + 3,3,3,3,3,3,3,0 # f8 - ff +) + +EUCTW_ST = ( + MachineState.ERROR,MachineState.ERROR,MachineState.START, 3, 3, 3, 4,MachineState.ERROR,#00-07 + MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ITS_ME,MachineState.ITS_ME,#08-0f + MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ERROR,MachineState.START,MachineState.ERROR,#10-17 + MachineState.START,MachineState.START,MachineState.START,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,#18-1f + 5,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.START,MachineState.ERROR,MachineState.START,MachineState.START,#20-27 + MachineState.START,MachineState.ERROR,MachineState.START,MachineState.START,MachineState.START,MachineState.START,MachineState.START,MachineState.START #28-2f +) + +EUCTW_CHAR_LEN_TABLE = (0, 0, 1, 2, 2, 2, 3) + +EUCTW_SM_MODEL = {'class_table': EUCTW_CLS, + 'class_factor': 7, + 'state_table': EUCTW_ST, + 'char_len_table': EUCTW_CHAR_LEN_TABLE, + 'name': 'x-euc-tw'} + +# GB2312 + +GB2312_CLS = ( + 1,1,1,1,1,1,1,1, # 00 - 07 + 1,1,1,1,1,1,0,0, # 08 - 0f + 1,1,1,1,1,1,1,1, # 10 - 17 + 1,1,1,0,1,1,1,1, # 18 - 1f + 1,1,1,1,1,1,1,1, # 20 - 27 + 1,1,1,1,1,1,1,1, # 28 - 2f + 3,3,3,3,3,3,3,3, # 30 - 37 + 3,3,1,1,1,1,1,1, # 38 - 3f + 2,2,2,2,2,2,2,2, # 40 - 47 + 2,2,2,2,2,2,2,2, # 48 - 4f + 2,2,2,2,2,2,2,2, # 50 - 57 + 2,2,2,2,2,2,2,2, # 58 - 5f + 2,2,2,2,2,2,2,2, # 60 - 67 + 2,2,2,2,2,2,2,2, # 68 - 6f + 2,2,2,2,2,2,2,2, # 70 - 77 + 2,2,2,2,2,2,2,4, # 78 - 7f + 5,6,6,6,6,6,6,6, # 80 - 87 + 6,6,6,6,6,6,6,6, # 88 - 8f + 6,6,6,6,6,6,6,6, # 90 - 97 + 6,6,6,6,6,6,6,6, # 98 - 9f + 6,6,6,6,6,6,6,6, # a0 - a7 + 
6,6,6,6,6,6,6,6, # a8 - af + 6,6,6,6,6,6,6,6, # b0 - b7 + 6,6,6,6,6,6,6,6, # b8 - bf + 6,6,6,6,6,6,6,6, # c0 - c7 + 6,6,6,6,6,6,6,6, # c8 - cf + 6,6,6,6,6,6,6,6, # d0 - d7 + 6,6,6,6,6,6,6,6, # d8 - df + 6,6,6,6,6,6,6,6, # e0 - e7 + 6,6,6,6,6,6,6,6, # e8 - ef + 6,6,6,6,6,6,6,6, # f0 - f7 + 6,6,6,6,6,6,6,0 # f8 - ff +) + +GB2312_ST = ( + MachineState.ERROR,MachineState.START,MachineState.START,MachineState.START,MachineState.START,MachineState.START, 3,MachineState.ERROR,#00-07 + MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ITS_ME,MachineState.ITS_ME,#08-0f + MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ERROR,MachineState.ERROR,MachineState.START,#10-17 + 4,MachineState.ERROR,MachineState.START,MachineState.START,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,#18-1f + MachineState.ERROR,MachineState.ERROR, 5,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ITS_ME,MachineState.ERROR,#20-27 + MachineState.ERROR,MachineState.ERROR,MachineState.START,MachineState.START,MachineState.START,MachineState.START,MachineState.START,MachineState.START #28-2f +) + +# To be accurate, the length of class 6 can be either 2 or 4. +# But it is not necessary to discriminate between the two since +# it is used for frequency analysis only, and we are validating +# each code range there as well. So it is safe to set it to be +# 2 here. +GB2312_CHAR_LEN_TABLE = (0, 1, 1, 1, 1, 1, 2) + +GB2312_SM_MODEL = {'class_table': GB2312_CLS, + 'class_factor': 7, + 'state_table': GB2312_ST, + 'char_len_table': GB2312_CHAR_LEN_TABLE, + 'name': 'GB2312'} + +# Shift_JIS + +SJIS_CLS = ( + 1,1,1,1,1,1,1,1, # 00 - 07 + 1,1,1,1,1,1,0,0, # 08 - 0f + 1,1,1,1,1,1,1,1, # 10 - 17 + 1,1,1,0,1,1,1,1, # 18 - 1f + 1,1,1,1,1,1,1,1, # 20 - 27 + 1,1,1,1,1,1,1,1, # 28 - 2f + 1,1,1,1,1,1,1,1, # 30 - 37 + 1,1,1,1,1,1,1,1, # 38 - 3f + 2,2,2,2,2,2,2,2, # 40 - 47 + 2,2,2,2,2,2,2,2, # 48 - 4f + 2,2,2,2,2,2,2,2, # 50 - 57 + 2,2,2,2,2,2,2,2, # 58 - 5f + 2,2,2,2,2,2,2,2, # 60 - 67 + 2,2,2,2,2,2,2,2, # 68 - 6f + 2,2,2,2,2,2,2,2, # 70 - 77 + 2,2,2,2,2,2,2,1, # 78 - 7f + 3,3,3,3,3,2,2,3, # 80 - 87 + 3,3,3,3,3,3,3,3, # 88 - 8f + 3,3,3,3,3,3,3,3, # 90 - 97 + 3,3,3,3,3,3,3,3, # 98 - 9f + #0xa0 is illegal in sjis encoding, but some pages does + #contain such byte. We need to be more error forgiven. 
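+    #0xa1 - 0xdf are legal single-byte half-width katakana, which is why
+    #the whole a0 - df range is class 2 (one byte) below; a stray 0xa0
+    #then passes through harmlessly instead of forcing an ERROR state.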
+ 2,2,2,2,2,2,2,2, # a0 - a7 + 2,2,2,2,2,2,2,2, # a8 - af + 2,2,2,2,2,2,2,2, # b0 - b7 + 2,2,2,2,2,2,2,2, # b8 - bf + 2,2,2,2,2,2,2,2, # c0 - c7 + 2,2,2,2,2,2,2,2, # c8 - cf + 2,2,2,2,2,2,2,2, # d0 - d7 + 2,2,2,2,2,2,2,2, # d8 - df + 3,3,3,3,3,3,3,3, # e0 - e7 + 3,3,3,3,3,4,4,4, # e8 - ef + 3,3,3,3,3,3,3,3, # f0 - f7 + 3,3,3,3,3,0,0,0) # f8 - ff + + +SJIS_ST = ( + MachineState.ERROR,MachineState.START,MachineState.START, 3,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,#00-07 + MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,#08-0f + MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ERROR,MachineState.ERROR,MachineState.START,MachineState.START,MachineState.START,MachineState.START #10-17 +) + +SJIS_CHAR_LEN_TABLE = (0, 1, 1, 2, 0, 0) + +SJIS_SM_MODEL = {'class_table': SJIS_CLS, + 'class_factor': 6, + 'state_table': SJIS_ST, + 'char_len_table': SJIS_CHAR_LEN_TABLE, + 'name': 'Shift_JIS'} + +# UCS2-BE + +UCS2BE_CLS = ( + 0,0,0,0,0,0,0,0, # 00 - 07 + 0,0,1,0,0,2,0,0, # 08 - 0f + 0,0,0,0,0,0,0,0, # 10 - 17 + 0,0,0,3,0,0,0,0, # 18 - 1f + 0,0,0,0,0,0,0,0, # 20 - 27 + 0,3,3,3,3,3,0,0, # 28 - 2f + 0,0,0,0,0,0,0,0, # 30 - 37 + 0,0,0,0,0,0,0,0, # 38 - 3f + 0,0,0,0,0,0,0,0, # 40 - 47 + 0,0,0,0,0,0,0,0, # 48 - 4f + 0,0,0,0,0,0,0,0, # 50 - 57 + 0,0,0,0,0,0,0,0, # 58 - 5f + 0,0,0,0,0,0,0,0, # 60 - 67 + 0,0,0,0,0,0,0,0, # 68 - 6f + 0,0,0,0,0,0,0,0, # 70 - 77 + 0,0,0,0,0,0,0,0, # 78 - 7f + 0,0,0,0,0,0,0,0, # 80 - 87 + 0,0,0,0,0,0,0,0, # 88 - 8f + 0,0,0,0,0,0,0,0, # 90 - 97 + 0,0,0,0,0,0,0,0, # 98 - 9f + 0,0,0,0,0,0,0,0, # a0 - a7 + 0,0,0,0,0,0,0,0, # a8 - af + 0,0,0,0,0,0,0,0, # b0 - b7 + 0,0,0,0,0,0,0,0, # b8 - bf + 0,0,0,0,0,0,0,0, # c0 - c7 + 0,0,0,0,0,0,0,0, # c8 - cf + 0,0,0,0,0,0,0,0, # d0 - d7 + 0,0,0,0,0,0,0,0, # d8 - df + 0,0,0,0,0,0,0,0, # e0 - e7 + 0,0,0,0,0,0,0,0, # e8 - ef + 0,0,0,0,0,0,0,0, # f0 - f7 + 0,0,0,0,0,0,4,5 # f8 - ff +) + +UCS2BE_ST = ( + 5, 7, 7,MachineState.ERROR, 4, 3,MachineState.ERROR,MachineState.ERROR,#00-07 + MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,#08-0f + MachineState.ITS_ME,MachineState.ITS_ME, 6, 6, 6, 6,MachineState.ERROR,MachineState.ERROR,#10-17 + 6, 6, 6, 6, 6,MachineState.ITS_ME, 6, 6,#18-1f + 6, 6, 6, 6, 5, 7, 7,MachineState.ERROR,#20-27 + 5, 8, 6, 6,MachineState.ERROR, 6, 6, 6,#28-2f + 6, 6, 6, 6,MachineState.ERROR,MachineState.ERROR,MachineState.START,MachineState.START #30-37 +) + +UCS2BE_CHAR_LEN_TABLE = (2, 2, 2, 0, 2, 2) + +UCS2BE_SM_MODEL = {'class_table': UCS2BE_CLS, + 'class_factor': 6, + 'state_table': UCS2BE_ST, + 'char_len_table': UCS2BE_CHAR_LEN_TABLE, + 'name': 'UTF-16BE'} + +# UCS2-LE + +UCS2LE_CLS = ( + 0,0,0,0,0,0,0,0, # 00 - 07 + 0,0,1,0,0,2,0,0, # 08 - 0f + 0,0,0,0,0,0,0,0, # 10 - 17 + 0,0,0,3,0,0,0,0, # 18 - 1f + 0,0,0,0,0,0,0,0, # 20 - 27 + 0,3,3,3,3,3,0,0, # 28 - 2f + 0,0,0,0,0,0,0,0, # 30 - 37 + 0,0,0,0,0,0,0,0, # 38 - 3f + 0,0,0,0,0,0,0,0, # 40 - 47 + 0,0,0,0,0,0,0,0, # 48 - 4f + 0,0,0,0,0,0,0,0, # 50 - 57 + 0,0,0,0,0,0,0,0, # 58 - 5f + 0,0,0,0,0,0,0,0, # 60 - 67 + 0,0,0,0,0,0,0,0, # 68 - 6f + 0,0,0,0,0,0,0,0, # 70 - 77 + 0,0,0,0,0,0,0,0, # 78 - 7f + 0,0,0,0,0,0,0,0, # 80 - 87 + 0,0,0,0,0,0,0,0, # 88 - 8f + 0,0,0,0,0,0,0,0, # 90 - 97 + 0,0,0,0,0,0,0,0, # 98 - 9f + 0,0,0,0,0,0,0,0, # a0 - a7 + 0,0,0,0,0,0,0,0, # a8 - af + 0,0,0,0,0,0,0,0, # b0 - b7 + 0,0,0,0,0,0,0,0, # b8 - bf + 
0,0,0,0,0,0,0,0, # c0 - c7 + 0,0,0,0,0,0,0,0, # c8 - cf + 0,0,0,0,0,0,0,0, # d0 - d7 + 0,0,0,0,0,0,0,0, # d8 - df + 0,0,0,0,0,0,0,0, # e0 - e7 + 0,0,0,0,0,0,0,0, # e8 - ef + 0,0,0,0,0,0,0,0, # f0 - f7 + 0,0,0,0,0,0,4,5 # f8 - ff +) + +UCS2LE_ST = ( + 6, 6, 7, 6, 4, 3,MachineState.ERROR,MachineState.ERROR,#00-07 + MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,#08-0f + MachineState.ITS_ME,MachineState.ITS_ME, 5, 5, 5,MachineState.ERROR,MachineState.ITS_ME,MachineState.ERROR,#10-17 + 5, 5, 5,MachineState.ERROR, 5,MachineState.ERROR, 6, 6,#18-1f + 7, 6, 8, 8, 5, 5, 5,MachineState.ERROR,#20-27 + 5, 5, 5,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR, 5, 5,#28-2f + 5, 5, 5,MachineState.ERROR, 5,MachineState.ERROR,MachineState.START,MachineState.START #30-37 +) + +UCS2LE_CHAR_LEN_TABLE = (2, 2, 2, 2, 2, 2) + +UCS2LE_SM_MODEL = {'class_table': UCS2LE_CLS, + 'class_factor': 6, + 'state_table': UCS2LE_ST, + 'char_len_table': UCS2LE_CHAR_LEN_TABLE, + 'name': 'UTF-16LE'} + +# UTF-8 + +UTF8_CLS = ( + 1,1,1,1,1,1,1,1, # 00 - 07 #allow 0x00 as a legal value + 1,1,1,1,1,1,0,0, # 08 - 0f + 1,1,1,1,1,1,1,1, # 10 - 17 + 1,1,1,0,1,1,1,1, # 18 - 1f + 1,1,1,1,1,1,1,1, # 20 - 27 + 1,1,1,1,1,1,1,1, # 28 - 2f + 1,1,1,1,1,1,1,1, # 30 - 37 + 1,1,1,1,1,1,1,1, # 38 - 3f + 1,1,1,1,1,1,1,1, # 40 - 47 + 1,1,1,1,1,1,1,1, # 48 - 4f + 1,1,1,1,1,1,1,1, # 50 - 57 + 1,1,1,1,1,1,1,1, # 58 - 5f + 1,1,1,1,1,1,1,1, # 60 - 67 + 1,1,1,1,1,1,1,1, # 68 - 6f + 1,1,1,1,1,1,1,1, # 70 - 77 + 1,1,1,1,1,1,1,1, # 78 - 7f + 2,2,2,2,3,3,3,3, # 80 - 87 + 4,4,4,4,4,4,4,4, # 88 - 8f + 4,4,4,4,4,4,4,4, # 90 - 97 + 4,4,4,4,4,4,4,4, # 98 - 9f + 5,5,5,5,5,5,5,5, # a0 - a7 + 5,5,5,5,5,5,5,5, # a8 - af + 5,5,5,5,5,5,5,5, # b0 - b7 + 5,5,5,5,5,5,5,5, # b8 - bf + 0,0,6,6,6,6,6,6, # c0 - c7 + 6,6,6,6,6,6,6,6, # c8 - cf + 6,6,6,6,6,6,6,6, # d0 - d7 + 6,6,6,6,6,6,6,6, # d8 - df + 7,8,8,8,8,8,8,8, # e0 - e7 + 8,8,8,8,8,9,8,8, # e8 - ef + 10,11,11,11,11,11,11,11, # f0 - f7 + 12,13,13,13,14,15,0,0 # f8 - ff +) + +UTF8_ST = ( + MachineState.ERROR,MachineState.START,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR, 12, 10,#00-07 + 9, 11, 8, 7, 6, 5, 4, 3,#08-0f + MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,#10-17 + MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,#18-1f + MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,#20-27 + MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,#28-2f + MachineState.ERROR,MachineState.ERROR, 5, 5, 5, 5,MachineState.ERROR,MachineState.ERROR,#30-37 + MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,#38-3f + MachineState.ERROR,MachineState.ERROR,MachineState.ERROR, 5, 5, 5,MachineState.ERROR,MachineState.ERROR,#40-47 + MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,#48-4f + MachineState.ERROR,MachineState.ERROR, 7, 7, 7, 
7,MachineState.ERROR,MachineState.ERROR,#50-57 + MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,#58-5f + MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR, 7, 7,MachineState.ERROR,MachineState.ERROR,#60-67 + MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,#68-6f + MachineState.ERROR,MachineState.ERROR, 9, 9, 9, 9,MachineState.ERROR,MachineState.ERROR,#70-77 + MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,#78-7f + MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR, 9,MachineState.ERROR,MachineState.ERROR,#80-87 + MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,#88-8f + MachineState.ERROR,MachineState.ERROR, 12, 12, 12, 12,MachineState.ERROR,MachineState.ERROR,#90-97 + MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,#98-9f + MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR, 12,MachineState.ERROR,MachineState.ERROR,#a0-a7 + MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,#a8-af + MachineState.ERROR,MachineState.ERROR, 12, 12, 12,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,#b0-b7 + MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,#b8-bf + MachineState.ERROR,MachineState.ERROR,MachineState.START,MachineState.START,MachineState.START,MachineState.START,MachineState.ERROR,MachineState.ERROR,#c0-c7 + MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR #c8-cf +) + +UTF8_CHAR_LEN_TABLE = (0, 1, 0, 0, 0, 0, 2, 3, 3, 3, 4, 4, 5, 5, 6, 6) + +UTF8_SM_MODEL = {'class_table': UTF8_CLS, + 'class_factor': 16, + 'state_table': UTF8_ST, + 'char_len_table': UTF8_CHAR_LEN_TABLE, + 'name': 'UTF-8'} diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/sbcharsetprober.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/sbcharsetprober.py new file mode 100644 index 0000000000000000000000000000000000000000..0adb51de5a210aa36849cd149ed0f4ae424fce42 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/sbcharsetprober.py @@ -0,0 +1,132 @@ +######################## BEGIN LICENSE BLOCK ######################## +# The Original Code is Mozilla Universal charset detector code. +# +# The Initial Developer of the Original Code is +# Netscape Communications Corporation. +# Portions created by the Initial Developer are Copyright (C) 2001 +# the Initial Developer. All Rights Reserved. 
+# +# Contributor(s): +# Mark Pilgrim - port to Python +# Shy Shalom - original C code +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License, or (at your option) any later version. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# Lesser General Public License for more details. +# +# You should have received a copy of the GNU Lesser General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA +# 02110-1301 USA +######################### END LICENSE BLOCK ######################### + +from .charsetprober import CharSetProber +from .enums import CharacterCategory, ProbingState, SequenceLikelihood + + +class SingleByteCharSetProber(CharSetProber): + SAMPLE_SIZE = 64 + SB_ENOUGH_REL_THRESHOLD = 1024 # 0.25 * SAMPLE_SIZE^2 + POSITIVE_SHORTCUT_THRESHOLD = 0.95 + NEGATIVE_SHORTCUT_THRESHOLD = 0.05 + + def __init__(self, model, reversed=False, name_prober=None): + super(SingleByteCharSetProber, self).__init__() + self._model = model + # TRUE if we need to reverse every pair in the model lookup + self._reversed = reversed + # Optional auxiliary prober for name decision + self._name_prober = name_prober + self._last_order = None + self._seq_counters = None + self._total_seqs = None + self._total_char = None + self._freq_char = None + self.reset() + + def reset(self): + super(SingleByteCharSetProber, self).reset() + # char order of last character + self._last_order = 255 + self._seq_counters = [0] * SequenceLikelihood.get_num_categories() + self._total_seqs = 0 + self._total_char = 0 + # characters that fall in our sampling range + self._freq_char = 0 + + @property + def charset_name(self): + if self._name_prober: + return self._name_prober.charset_name + else: + return self._model['charset_name'] + + @property + def language(self): + if self._name_prober: + return self._name_prober.language + else: + return self._model.get('language') + + def feed(self, byte_str): + if not self._model['keep_english_letter']: + byte_str = self.filter_international_words(byte_str) + if not byte_str: + return self.state + char_to_order_map = self._model['char_to_order_map'] + for i, c in enumerate(byte_str): + # XXX: Order is in range 1-64, so one would think we want 0-63 here, + # but that leads to 27 more test failures than before. + order = char_to_order_map[c] + # XXX: This was SYMBOL_CAT_ORDER before, with a value of 250, but + # CharacterCategory.SYMBOL is actually 253, so we use CONTROL + # to make it closer to the original intent. The only difference + # is whether or not we count digits and control characters for + # _total_char purposes. 
+            if order < CharacterCategory.CONTROL:
+                self._total_char += 1
+            if order < self.SAMPLE_SIZE:
+                self._freq_char += 1
+                if self._last_order < self.SAMPLE_SIZE:
+                    self._total_seqs += 1
+                    if not self._reversed:
+                        i = (self._last_order * self.SAMPLE_SIZE) + order
+                        model = self._model['precedence_matrix'][i]
+                    else:  # reverse the order of the letters in the lookup
+                        i = (order * self.SAMPLE_SIZE) + self._last_order
+                        model = self._model['precedence_matrix'][i]
+                    self._seq_counters[model] += 1
+            self._last_order = order
+
+        charset_name = self._model['charset_name']
+        if self.state == ProbingState.DETECTING:
+            if self._total_seqs > self.SB_ENOUGH_REL_THRESHOLD:
+                confidence = self.get_confidence()
+                if confidence > self.POSITIVE_SHORTCUT_THRESHOLD:
+                    self.logger.debug('%s confidence = %s, we have a winner',
+                                      charset_name, confidence)
+                    self._state = ProbingState.FOUND_IT
+                elif confidence < self.NEGATIVE_SHORTCUT_THRESHOLD:
+                    self.logger.debug('%s confidence = %s, below negative '
+                                      'shortcut threshold %s', charset_name,
+                                      confidence,
+                                      self.NEGATIVE_SHORTCUT_THRESHOLD)
+                    self._state = ProbingState.NOT_ME
+
+        return self.state
+
+    def get_confidence(self):
+        r = 0.01
+        if self._total_seqs > 0:
+            r = ((1.0 * self._seq_counters[SequenceLikelihood.POSITIVE]) /
+                 self._total_seqs / self._model['typical_positive_ratio'])
+            r = r * self._freq_char / self._total_char
+            if r >= 1.0:
+                r = 0.99
+        return r
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/sbcsgroupprober.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/sbcsgroupprober.py
new file mode 100644
index 0000000000000000000000000000000000000000..98e95dc1a3cbc65e97bc726ab7000955132719dd
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/sbcsgroupprober.py
@@ -0,0 +1,73 @@
+######################## BEGIN LICENSE BLOCK ########################
+# The Original Code is Mozilla Universal charset detector code.
+#
+# The Initial Developer of the Original Code is
+# Netscape Communications Corporation.
+# Portions created by the Initial Developer are Copyright (C) 2001
+# the Initial Developer. All Rights Reserved.
+#
+# Contributor(s):
+#   Mark Pilgrim - port to Python
+#   Shy Shalom - original C code
+#
+# This library is free software; you can redistribute it and/or
+# modify it under the terms of the GNU Lesser General Public
+# License as published by the Free Software Foundation; either
+# version 2.1 of the License, or (at your option) any later version.
+#
+# This library is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+# Lesser General Public License for more details.
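(Editor's note: to make the arithmetic in `get_confidence()` above concrete, here is the same formula with invented tallies. Only the formula and the 0.99 cap come from the code; every count below is hypothetical.)

```python
# Hypothetical tallies after feeding some text to a single-byte prober.
positive_seqs = 800            # bigrams the language model rates as likely
total_seqs = 1200              # all sampled bigrams
typical_positive_ratio = 0.9   # per-charset constant from the language model
freq_char, total_char = 900, 1000  # chars inside vs. outside the sample range

r = (positive_seqs / total_seqs) / typical_positive_ratio
r = r * freq_char / total_char     # penalize text outside the sampling range
if r >= 1.0:                       # capped at 0.99, exactly as in the prober
    r = 0.99
print(round(r, 3))                 # 0.667 for these made-up counts
```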
+# +# You should have received a copy of the GNU Lesser General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA +# 02110-1301 USA +######################### END LICENSE BLOCK ######################### + +from .charsetgroupprober import CharSetGroupProber +from .sbcharsetprober import SingleByteCharSetProber +from .langcyrillicmodel import (Win1251CyrillicModel, Koi8rModel, + Latin5CyrillicModel, MacCyrillicModel, + Ibm866Model, Ibm855Model) +from .langgreekmodel import Latin7GreekModel, Win1253GreekModel +from .langbulgarianmodel import Latin5BulgarianModel, Win1251BulgarianModel +# from .langhungarianmodel import Latin2HungarianModel, Win1250HungarianModel +from .langthaimodel import TIS620ThaiModel +from .langhebrewmodel import Win1255HebrewModel +from .hebrewprober import HebrewProber +from .langturkishmodel import Latin5TurkishModel + + +class SBCSGroupProber(CharSetGroupProber): + def __init__(self): + super(SBCSGroupProber, self).__init__() + self.probers = [ + SingleByteCharSetProber(Win1251CyrillicModel), + SingleByteCharSetProber(Koi8rModel), + SingleByteCharSetProber(Latin5CyrillicModel), + SingleByteCharSetProber(MacCyrillicModel), + SingleByteCharSetProber(Ibm866Model), + SingleByteCharSetProber(Ibm855Model), + SingleByteCharSetProber(Latin7GreekModel), + SingleByteCharSetProber(Win1253GreekModel), + SingleByteCharSetProber(Latin5BulgarianModel), + SingleByteCharSetProber(Win1251BulgarianModel), + # TODO: Restore Hungarian encodings (iso-8859-2 and windows-1250) + # after we retrain model. + # SingleByteCharSetProber(Latin2HungarianModel), + # SingleByteCharSetProber(Win1250HungarianModel), + SingleByteCharSetProber(TIS620ThaiModel), + SingleByteCharSetProber(Latin5TurkishModel), + ] + hebrew_prober = HebrewProber() + logical_hebrew_prober = SingleByteCharSetProber(Win1255HebrewModel, + False, hebrew_prober) + visual_hebrew_prober = SingleByteCharSetProber(Win1255HebrewModel, True, + hebrew_prober) + hebrew_prober.set_model_probers(logical_hebrew_prober, visual_hebrew_prober) + self.probers.extend([hebrew_prober, logical_hebrew_prober, + visual_hebrew_prober]) + + self.reset() diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/sjisprober.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/sjisprober.py new file mode 100644 index 0000000000000000000000000000000000000000..9e29623bdc54a7c6d11bcc167d71bb44cc9be39d --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/sjisprober.py @@ -0,0 +1,92 @@ +######################## BEGIN LICENSE BLOCK ######################## +# The Original Code is mozilla.org code. +# +# The Initial Developer of the Original Code is +# Netscape Communications Corporation. +# Portions created by the Initial Developer are Copyright (C) 1998 +# the Initial Developer. All Rights Reserved. +# +# Contributor(s): +# Mark Pilgrim - port to Python +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License, or (at your option) any later version. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the GNU +# Lesser General Public License for more details. +# +# You should have received a copy of the GNU Lesser General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA +# 02110-1301 USA +######################### END LICENSE BLOCK ######################### + +from .mbcharsetprober import MultiByteCharSetProber +from .codingstatemachine import CodingStateMachine +from .chardistribution import SJISDistributionAnalysis +from .jpcntx import SJISContextAnalysis +from .mbcssm import SJIS_SM_MODEL +from .enums import ProbingState, MachineState + + +class SJISProber(MultiByteCharSetProber): + def __init__(self): + super(SJISProber, self).__init__() + self.coding_sm = CodingStateMachine(SJIS_SM_MODEL) + self.distribution_analyzer = SJISDistributionAnalysis() + self.context_analyzer = SJISContextAnalysis() + self.reset() + + def reset(self): + super(SJISProber, self).reset() + self.context_analyzer.reset() + + @property + def charset_name(self): + return self.context_analyzer.charset_name + + @property + def language(self): + return "Japanese" + + def feed(self, byte_str): + for i in range(len(byte_str)): + coding_state = self.coding_sm.next_state(byte_str[i]) + if coding_state == MachineState.ERROR: + self.logger.debug('%s %s prober hit error at byte %s', + self.charset_name, self.language, i) + self._state = ProbingState.NOT_ME + break + elif coding_state == MachineState.ITS_ME: + self._state = ProbingState.FOUND_IT + break + elif coding_state == MachineState.START: + char_len = self.coding_sm.get_current_charlen() + if i == 0: + self._last_char[1] = byte_str[0] + self.context_analyzer.feed(self._last_char[2 - char_len:], + char_len) + self.distribution_analyzer.feed(self._last_char, char_len) + else: + self.context_analyzer.feed(byte_str[i + 1 - char_len:i + 3 + - char_len], char_len) + self.distribution_analyzer.feed(byte_str[i - 1:i + 1], + char_len) + + self._last_char[0] = byte_str[-1] + + if self.state == ProbingState.DETECTING: + if (self.context_analyzer.got_enough_data() and + (self.get_confidence() > self.SHORTCUT_THRESHOLD)): + self._state = ProbingState.FOUND_IT + + return self.state + + def get_confidence(self): + context_conf = self.context_analyzer.get_confidence() + distrib_conf = self.distribution_analyzer.get_confidence() + return max(context_conf, distrib_conf) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/universaldetector.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/universaldetector.py new file mode 100644 index 0000000000000000000000000000000000000000..7b4e92d6158527736e3d14d6f725edada8f94b00 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/universaldetector.py @@ -0,0 +1,286 @@ +######################## BEGIN LICENSE BLOCK ######################## +# The Original Code is Mozilla Universal charset detector code. +# +# The Initial Developer of the Original Code is +# Netscape Communications Corporation. +# Portions created by the Initial Developer are Copyright (C) 2001 +# the Initial Developer. All Rights Reserved. 
+# +# Contributor(s): +# Mark Pilgrim - port to Python +# Shy Shalom - original C code +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License, or (at your option) any later version. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# Lesser General Public License for more details. +# +# You should have received a copy of the GNU Lesser General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA +# 02110-1301 USA +######################### END LICENSE BLOCK ######################### +""" +Module containing the UniversalDetector detector class, which is the primary +class a user of ``chardet`` should use. + +:author: Mark Pilgrim (initial port to Python) +:author: Shy Shalom (original C code) +:author: Dan Blanchard (major refactoring for 3.0) +:author: Ian Cordasco +""" + + +import codecs +import logging +import re + +from .charsetgroupprober import CharSetGroupProber +from .enums import InputState, LanguageFilter, ProbingState +from .escprober import EscCharSetProber +from .latin1prober import Latin1Prober +from .mbcsgroupprober import MBCSGroupProber +from .sbcsgroupprober import SBCSGroupProber + + +class UniversalDetector(object): + """ + The ``UniversalDetector`` class underlies the ``chardet.detect`` function + and coordinates all of the different charset probers. + + To get a ``dict`` containing an encoding and its confidence, you can simply + run: + + .. code:: + + u = UniversalDetector() + u.feed(some_bytes) + u.close() + detected = u.result + + """ + + MINIMUM_THRESHOLD = 0.20 + HIGH_BYTE_DETECTOR = re.compile(b'[\x80-\xFF]') + ESC_DETECTOR = re.compile(b'(\033|~{)') + WIN_BYTE_DETECTOR = re.compile(b'[\x80-\x9F]') + ISO_WIN_MAP = {'iso-8859-1': 'Windows-1252', + 'iso-8859-2': 'Windows-1250', + 'iso-8859-5': 'Windows-1251', + 'iso-8859-6': 'Windows-1256', + 'iso-8859-7': 'Windows-1253', + 'iso-8859-8': 'Windows-1255', + 'iso-8859-9': 'Windows-1254', + 'iso-8859-13': 'Windows-1257'} + + def __init__(self, lang_filter=LanguageFilter.ALL): + self._esc_charset_prober = None + self._charset_probers = [] + self.result = None + self.done = None + self._got_data = None + self._input_state = None + self._last_char = None + self.lang_filter = lang_filter + self.logger = logging.getLogger(__name__) + self._has_win_bytes = None + self.reset() + + def reset(self): + """ + Reset the UniversalDetector and all of its probers back to their + initial states. This is called by ``__init__``, so you only need to + call this directly in between analyses of different documents. + """ + self.result = {'encoding': None, 'confidence': 0.0, 'language': None} + self.done = False + self._got_data = False + self._has_win_bytes = False + self._input_state = InputState.PURE_ASCII + self._last_char = b'' + if self._esc_charset_prober: + self._esc_charset_prober.reset() + for prober in self._charset_probers: + prober.reset() + + def feed(self, byte_str): + """ + Takes a chunk of a document and feeds it through all of the relevant + charset probers. 
+
+        After calling ``feed``, you can check the value of the ``done``
+        attribute to see if you need to continue feeding the
+        ``UniversalDetector`` more data, or if it has made a prediction
+        (in the ``result`` attribute).
+
+        .. note::
+           You should always call ``close`` when you're done feeding in your
+           document if ``done`` is not already ``True``.
+        """
+        if self.done:
+            return
+
+        if not len(byte_str):
+            return
+
+        if not isinstance(byte_str, bytearray):
+            byte_str = bytearray(byte_str)
+
+        # First check for known BOMs, since these are guaranteed to be correct
+        if not self._got_data:
+            # If the data starts with BOM, we know it is UTF
+            if byte_str.startswith(codecs.BOM_UTF8):
+                # EF BB BF  UTF-8 with BOM
+                self.result = {'encoding': "UTF-8-SIG",
+                               'confidence': 1.0,
+                               'language': ''}
+            elif byte_str.startswith((codecs.BOM_UTF32_LE,
+                                      codecs.BOM_UTF32_BE)):
+                # FF FE 00 00  UTF-32, little-endian BOM
+                # 00 00 FE FF  UTF-32, big-endian BOM
+                self.result = {'encoding': "UTF-32",
+                               'confidence': 1.0,
+                               'language': ''}
+            elif byte_str.startswith(b'\xFE\xFF\x00\x00'):
+                # FE FF 00 00  UCS-4, unusual octet order BOM (3412)
+                self.result = {'encoding': "X-ISO-10646-UCS-4-3412",
+                               'confidence': 1.0,
+                               'language': ''}
+            elif byte_str.startswith(b'\x00\x00\xFF\xFE'):
+                # 00 00 FF FE  UCS-4, unusual octet order BOM (2143)
+                self.result = {'encoding': "X-ISO-10646-UCS-4-2143",
+                               'confidence': 1.0,
+                               'language': ''}
+            elif byte_str.startswith((codecs.BOM_LE, codecs.BOM_BE)):
+                # FF FE  UTF-16, little endian BOM
+                # FE FF  UTF-16, big endian BOM
+                self.result = {'encoding': "UTF-16",
+                               'confidence': 1.0,
+                               'language': ''}
+
+            self._got_data = True
+            if self.result['encoding'] is not None:
+                self.done = True
+                return
+
+        # If none of those matched and we've only seen ASCII so far, check
+        # for high bytes and escape sequences
+        if self._input_state == InputState.PURE_ASCII:
+            if self.HIGH_BYTE_DETECTOR.search(byte_str):
+                self._input_state = InputState.HIGH_BYTE
+            elif self._input_state == InputState.PURE_ASCII and \
+                    self.ESC_DETECTOR.search(self._last_char + byte_str):
+                self._input_state = InputState.ESC_ASCII
+
+        self._last_char = byte_str[-1:]
+
+        # If we've seen escape sequences, use the EscCharSetProber, which
+        # uses a simple state machine to check for known escape sequences in
+        # HZ and ISO-2022 encodings, since those are the only encodings that
+        # use such sequences.
+        if self._input_state == InputState.ESC_ASCII:
+            if not self._esc_charset_prober:
+                self._esc_charset_prober = EscCharSetProber(self.lang_filter)
+            if self._esc_charset_prober.feed(byte_str) == ProbingState.FOUND_IT:
+                self.result = {'encoding':
+                               self._esc_charset_prober.charset_name,
+                               'confidence':
+                               self._esc_charset_prober.get_confidence(),
+                               'language':
+                               self._esc_charset_prober.language}
+                self.done = True
+        # If we've seen high bytes (i.e., those with values greater than 127),
+        # we need to do more complicated checks using all our multi-byte and
+        # single-byte probers that are left.  The single-byte probers
+        # use character bigram distributions to determine the encoding, whereas
+        # the multi-byte probers use a combination of character unigram and
+        # bigram distributions.
+ elif self._input_state == InputState.HIGH_BYTE: + if not self._charset_probers: + self._charset_probers = [MBCSGroupProber(self.lang_filter)] + # If we're checking non-CJK encodings, use single-byte prober + if self.lang_filter & LanguageFilter.NON_CJK: + self._charset_probers.append(SBCSGroupProber()) + self._charset_probers.append(Latin1Prober()) + for prober in self._charset_probers: + if prober.feed(byte_str) == ProbingState.FOUND_IT: + self.result = {'encoding': prober.charset_name, + 'confidence': prober.get_confidence(), + 'language': prober.language} + self.done = True + break + if self.WIN_BYTE_DETECTOR.search(byte_str): + self._has_win_bytes = True + + def close(self): + """ + Stop analyzing the current document and come up with a final + prediction. + + :returns: The ``result`` attribute, a ``dict`` with the keys + `encoding`, `confidence`, and `language`. + """ + # Don't bother with checks if we're already done + if self.done: + return self.result + self.done = True + + if not self._got_data: + self.logger.debug('no data received!') + + # Default to ASCII if it is all we've seen so far + elif self._input_state == InputState.PURE_ASCII: + self.result = {'encoding': 'ascii', + 'confidence': 1.0, + 'language': ''} + + # If we have seen non-ASCII, return the best that met MINIMUM_THRESHOLD + elif self._input_state == InputState.HIGH_BYTE: + prober_confidence = None + max_prober_confidence = 0.0 + max_prober = None + for prober in self._charset_probers: + if not prober: + continue + prober_confidence = prober.get_confidence() + if prober_confidence > max_prober_confidence: + max_prober_confidence = prober_confidence + max_prober = prober + if max_prober and (max_prober_confidence > self.MINIMUM_THRESHOLD): + charset_name = max_prober.charset_name + lower_charset_name = max_prober.charset_name.lower() + confidence = max_prober.get_confidence() + # Use Windows encoding name instead of ISO-8859 if we saw any + # extra Windows-specific bytes + if lower_charset_name.startswith('iso-8859'): + if self._has_win_bytes: + charset_name = self.ISO_WIN_MAP.get(lower_charset_name, + charset_name) + self.result = {'encoding': charset_name, + 'confidence': confidence, + 'language': max_prober.language} + + # Log all prober confidences if none met MINIMUM_THRESHOLD + if self.logger.getEffectiveLevel() == logging.DEBUG: + if self.result['encoding'] is None: + self.logger.debug('no probers hit minimum threshold') + for group_prober in self._charset_probers: + if not group_prober: + continue + if isinstance(group_prober, CharSetGroupProber): + for prober in group_prober.probers: + self.logger.debug('%s %s confidence = %s', + prober.charset_name, + prober.language, + prober.get_confidence()) + else: + self.logger.debug('%s %s confidence = %s', + prober.charset_name, + prober.language, + prober.get_confidence()) + return self.result diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/utf8prober.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/utf8prober.py new file mode 100644 index 0000000000000000000000000000000000000000..6c3196cc2d7e46e6756580267f5643c6f7b448dd --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/utf8prober.py @@ -0,0 +1,82 @@ +######################## BEGIN LICENSE BLOCK ######################## +# The Original Code is mozilla.org code. +# +# The Initial Developer of the Original Code is +# Netscape Communications Corporation. 
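(Editor's note: for orientation, the detector above is normally driven incrementally. The sketch mirrors the pattern from the class docstring, extended to chunked reads so ``done`` can short-circuit large inputs; `'some.txt'` is a placeholder path.)

```python
from chardet.universaldetector import UniversalDetector

detector = UniversalDetector()
with open('some.txt', 'rb') as handle:
    for chunk in iter(lambda: handle.read(4096), b''):
        detector.feed(chunk)
        if detector.done:   # BOM hit or a confident prober: stop early
            break
detector.close()            # always finalize, even if done is already True
print(detector.result)      # {'encoding': ..., 'confidence': ..., 'language': ...}
```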
+# Portions created by the Initial Developer are Copyright (C) 1998 +# the Initial Developer. All Rights Reserved. +# +# Contributor(s): +# Mark Pilgrim - port to Python +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License, or (at your option) any later version. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# Lesser General Public License for more details. +# +# You should have received a copy of the GNU Lesser General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA +# 02110-1301 USA +######################### END LICENSE BLOCK ######################### + +from .charsetprober import CharSetProber +from .enums import ProbingState, MachineState +from .codingstatemachine import CodingStateMachine +from .mbcssm import UTF8_SM_MODEL + + + +class UTF8Prober(CharSetProber): + ONE_CHAR_PROB = 0.5 + + def __init__(self): + super(UTF8Prober, self).__init__() + self.coding_sm = CodingStateMachine(UTF8_SM_MODEL) + self._num_mb_chars = None + self.reset() + + def reset(self): + super(UTF8Prober, self).reset() + self.coding_sm.reset() + self._num_mb_chars = 0 + + @property + def charset_name(self): + return "utf-8" + + @property + def language(self): + return "" + + def feed(self, byte_str): + for c in byte_str: + coding_state = self.coding_sm.next_state(c) + if coding_state == MachineState.ERROR: + self._state = ProbingState.NOT_ME + break + elif coding_state == MachineState.ITS_ME: + self._state = ProbingState.FOUND_IT + break + elif coding_state == MachineState.START: + if self.coding_sm.get_current_charlen() >= 2: + self._num_mb_chars += 1 + + if self.state == ProbingState.DETECTING: + if self.get_confidence() > self.SHORTCUT_THRESHOLD: + self._state = ProbingState.FOUND_IT + + return self.state + + def get_confidence(self): + unlike = 0.99 + if self._num_mb_chars < 6: + unlike *= self.ONE_CHAR_PROB ** self._num_mb_chars + return 1.0 - unlike + else: + return unlike diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/version.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/version.py new file mode 100644 index 0000000000000000000000000000000000000000..bb2a34a70ea760ad1f58979acfc8fa466e30c511 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/chardet/version.py @@ -0,0 +1,9 @@ +""" +This module exists only to simplify retrieving the version number of chardet +from within setup.py and from chardet subpackages. 
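(Editor's note: the shape of ``UTF8Prober.get_confidence()`` above can be tabulated with plain arithmetic: confidence starts at 0.01 when no multi-byte characters have been seen and saturates at 0.99 once six have.)

```python
ONE_CHAR_PROB = 0.5  # same constant as the prober's class attribute

for n in range(8):   # n = number of multi-byte characters seen so far
    unlike = 0.99
    if n < 6:
        unlike *= ONE_CHAR_PROB ** n
        conf = 1.0 - unlike
    else:
        conf = unlike
    print(n, round(conf, 4))
# 0 -> 0.01, 1 -> 0.505, 2 -> 0.7525, ...: each hit roughly halves the doubt;
# from six multi-byte characters on, the prober reports a flat 0.99.
```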
+ +:author: Dan Blanchard (dan.blanchard@gmail.com) +""" + +__version__ = "3.0.4" +VERSION = __version__.split('.') diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft-0.8.1.dist-info/INSTALLER b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft-0.8.1.dist-info/INSTALLER new file mode 100644 index 0000000000000000000000000000000000000000..a1b589e38a32041e49332e5e81c2d363dc418d68 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft-0.8.1.dist-info/INSTALLER @@ -0,0 +1 @@ +pip diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft-0.8.1.dist-info/LICENSE b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft-0.8.1.dist-info/LICENSE new file mode 100644 index 0000000000000000000000000000000000000000..261eeb9e9f8b2b4b0d119366dda99c6fd7d35c64 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft-0.8.1.dist-info/LICENSE @@ -0,0 +1,201 @@ + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. 
+ + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. 
You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. 
In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright [yyyy] [name of copyright owner] + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft-0.8.1.dist-info/METADATA b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft-0.8.1.dist-info/METADATA new file mode 100644 index 0000000000000000000000000000000000000000..ad14c77137c4111a12f8c66870017c486a233b10 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft-0.8.1.dist-info/METADATA @@ -0,0 +1,184 @@ +Metadata-Version: 2.1 +Name: charmcraft +Version: 0.8.1 +Summary: The main tool to build, upload, and develop in general the Juju charms. 
+Home-page: https://github.com/canonical/charmcraft +Author: Facundo Batista +Author-email: facundo.batista@canonical.com +License: Apache-2.0 +Platform: UNKNOWN +Classifier: Environment :: Console +Classifier: License :: OSI Approved :: Apache Software License +Classifier: Operating System :: OS Independent +Classifier: Programming Language :: Python :: 3 +Requires-Python: >=3 +Description-Content-Type: text/markdown +Requires-Dist: Jinja2 (==2.11.2) +Requires-Dist: PyYAML (==5.3.1) +Requires-Dist: appdirs (==1.4.4) +Requires-Dist: attrs (==20.3.0) +Requires-Dist: jsonschema (==3.2.0) +Requires-Dist: macaroonbakery (==1.3.1) +Requires-Dist: python-dateutil (==2.8.1) +Requires-Dist: requests-toolbelt (==0.9.1) +Requires-Dist: requests (==2.24.0) +Requires-Dist: tabulate (==0.8.7) + +# Charmcraft is for Kubernetes operator developers + +Charmcraft supports Kubernetes operator development. + +Charmcraft enables collaboration between operator developers, and +publication on [Charmhub](https://charmhub.io/), home of the Open Operator +Collection. + +Use `charmcraft` to: + + * Init a new charm file structure + * Build your operator into a charm for distribution + * Register your charm name on Charmhub + * Upload your charms to Charmhub + * Release your charms into channels + +You can use charmcraft with operators written in any language but we +recommend the [Python Operator Framework on +Github](https://github.com/canonical/operator) which is also [on +PyPI](https://pypi.org/project/ops/) for ease of development and +collaboration. + +Charmcraft and the Python Operator Framework extend the operator pattern +beyond Kubernetes with [universal +operators](https://juju.is/universal-operators) that drive Linux and +Windows apps. The universal operator pattern is very exciting for +multi-cloud application management. + +## Install + +The easiest way to install `charmcraft` is with + + sudo snap install charmcraft --beta + +There are multiple channels other than `--beta`. See the full list with +`snap info charmcraft`. We recommend either `latest/stable` or `latest/beta` +for everyday charming. With the snap you will always be up to date as +Charmhub services and APIs evolve. + +You can also install from PyPI with `pip3 install --user charmcraft` + +## Initialize a charm operator package file structure + +Use `charmcraft init` to create a new template charm operator file tree: + +```bash +$ mkdir my-new-charm; cd my-new-charm +$ charmcraft init +All done. +There are some notes about things we think you should do. +These are marked with ‘TODO:’, as is customary. Namely: + README.md: fill out the description + README.md: explain how to use the charm + metadata.yaml: fill out the charm's description + metadata.yaml: fill out the charm's summary +``` + +You will now have all the essential files for a charmed operator, including +the actual `src/charm.py` skeleton and various items of metadata. Charmcraft +assumes you want to work in Python so it will add `requirements.txt` with +the Python operator framework `ops`, and other conventional development +support files. + +## Build your charm + +With a correct `metadata.yaml` and with `ops` in `requirements.txt` you can +build a charm with: + +```text +$ charmcraft build +Created 'test-charm.charm'. +``` + +`charmcraft build` will fetch additional files into the tree from PyPI based +on `requirements.txt` and will compile modules using a virtualenv. 
+ +The charm is just a zipfile with metadata and the operator code itself: + +```text +$ unzip -l test-charm.charm +Archive: test-charm.charm + Length Date Time Name +--------- ---------- ----- ---- + 221 2020-11-15 08:10 metadata.yaml +[...] + 25304 2020-11-15 08:14 venv/yaml/__pycache__/scanner.cpython-38.pyc +--------- ------- + 812617 84 files +``` + +Now, if you have a Kubernetes cluster with the Juju OLM accessible you can +directly `juju deploy ` to the cluster. You do not need to +publish your operator on Charmhub, you can pass the charm file around +directly to users, or for CI/CD purposes. + +## Charmhub login and charm name reservations + +[Charmhub](https://charmhub.io/) is the world's largest repository of +operators. It makes it easy to share and collaborate on operators. The +community are interested in operators for a very wide range of purposes, +including infrastructure-as-code and legacy application management, and of +course Kubernetes operators. + +Use `charmcraft login` and `charmcraft logout` to sign into Charmhub. + +## Charmhub name registration + +You can register operator names in Charmhub with `charmcraft register +`. Many common names have been reserved, you are encouraged to discuss +your interest in leading or collaborating on a charm in [Charmhub +Discourse](https://discourse.charmhub.io/). + +Charmhub naming policy is the principle of least surprise - a well-known +name should map to an operator that most people would expect to get for that +name. + +Operators in Charmhub can be renamed as needed, so feel free to register a +temporary name, such as - as a placeholder. + +## Operator upload and release + +Charmhub operators are published in channels, like: + +```text +latest/stable +latest/candidate +latest/beta +latest/edge +1.3/beta +1.3/edge +1.2/stable +1.2/candidate +1.0/stable +``` + +Use `charmcraft upload` to get a new revision number for your freshly built +charm, and `charmcraft release` to release a revision into any particular +channel for your users. + +# Charmcraft source + +Get the source from github with: + + git clone https://github.com/canonical/charmcraft.git + cd charmcraft + virtualenv venv + . venv/bin/activate + pip install -r requirements.txt + python -m charmcraft + +If you would like to run the tests you can do so with + + pip install -r requirements-dev.txt + ./run_tests + +Contributions welcome! 
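(Editor's note: if you would rather stay in Python than shell out to `unzip`, the same inspection works with the standard library; `test-charm.charm` is the artifact produced by the build step above.)

```python
import zipfile

with zipfile.ZipFile('test-charm.charm') as charm:
    for info in charm.infolist()[:3]:          # first few entries only
        print(info.file_size, info.filename)   # e.g. 221 metadata.yaml
```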
+ + diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft-0.8.1.dist-info/RECORD b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft-0.8.1.dist-info/RECORD new file mode 100644 index 0000000000000000000000000000000000000000..f647a035fc59ec815e424c33b786a5e4a2dcf528 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft-0.8.1.dist-info/RECORD @@ -0,0 +1,60 @@ +../../../bin/charmcraft,sha256=fAaIBsliYVWVXzAYVjcVYridqqw0bPQK50dmRbE-RIk,293 +charmcraft-0.8.1.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4 +charmcraft-0.8.1.dist-info/LICENSE,sha256=xx0jnfkXJvxRnG63LTGOxlggYnIysveWIZ6H3PNdCrQ,11357 +charmcraft-0.8.1.dist-info/METADATA,sha256=5lgDbi7x0cy_yZBVDkwH5THtStmFeX5Ml0z6MhJV4Mw,5960 +charmcraft-0.8.1.dist-info/RECORD,, +charmcraft-0.8.1.dist-info/WHEEL,sha256=g4nMs7d-Xl9-xC9XovUrsDHGXt-FT0E17Yqo92DEfvY,92 +charmcraft-0.8.1.dist-info/entry_points.txt,sha256=FdKmbFVt0I-CrHjl2mkwDf8fEx8GUIay0WIqdShVCoE,53 +charmcraft-0.8.1.dist-info/top_level.txt,sha256=ka37dQ3nvtNMpOvQhGWe3mn6HAXzkIyXk_sDS-_pqwg,11 +charmcraft/__init__.py,sha256=U0JaBQpFXw-eN4coTMHoAYL3ta6H3NhYzjcUwrTTvcM,778 +charmcraft/__main__.py,sha256=1E7b40xpslduLrUAy1SJJGrv0aiS_z9cOBo99Na_1jo,769 +charmcraft/__pycache__/__init__.cpython-38.pyc,, +charmcraft/__pycache__/__main__.cpython-38.pyc,, +charmcraft/__pycache__/cmdbase.cpython-38.pyc,, +charmcraft/__pycache__/config.cpython-38.pyc,, +charmcraft/__pycache__/helptexts.cpython-38.pyc,, +charmcraft/__pycache__/jujuignore.cpython-38.pyc,, +charmcraft/__pycache__/logsetup.cpython-38.pyc,, +charmcraft/__pycache__/main.cpython-38.pyc,, +charmcraft/__pycache__/utils.cpython-38.pyc,, +charmcraft/__pycache__/version.cpython-38.pyc,, +charmcraft/cmdbase.py,sha256=QLDNWfO_nEZsmo4CBe7uwcsntu-S00GvIw_mO3CgSh8,2868 +charmcraft/commands/__init__.py,sha256=LZ2OkKYoSMuPEKljK6kdFuR4weflnaCEkgnWU51_fuo,719 +charmcraft/commands/__pycache__/__init__.cpython-38.pyc,, +charmcraft/commands/__pycache__/build.cpython-38.pyc,, +charmcraft/commands/__pycache__/init.cpython-38.pyc,, +charmcraft/commands/__pycache__/pack.cpython-38.pyc,, +charmcraft/commands/__pycache__/utils.cpython-38.pyc,, +charmcraft/commands/__pycache__/version.cpython-38.pyc,, +charmcraft/commands/build.py,sha256=JqFYEE53ClwCHXi_KaWsrgfJNZNC1fmWvwzEgu_i0U4,15598 +charmcraft/commands/init.py,sha256=-ER6ibBYVM0t1QwU-rgVvzyFEnEpS2JI_cZxkfBcZBA,5756 +charmcraft/commands/pack.py,sha256=euv21xou4T5NTshqAErhktYCpa0_SI8o4BWhLwcBzzo,3465 +charmcraft/commands/store/__init__.py,sha256=nhWO4DIutpMvuhfirHXQdjo3IiHRlP5XvvMtFBoGq7c,37408 +charmcraft/commands/store/__pycache__/__init__.cpython-38.pyc,, +charmcraft/commands/store/__pycache__/client.cpython-38.pyc,, +charmcraft/commands/store/__pycache__/store.cpython-38.pyc,, +charmcraft/commands/store/client.py,sha256=UCZus7S9Lef0pfNGc9XwFPktzHy7BCy9tPafptGg26w,10811 +charmcraft/commands/store/store.py,sha256=IrFDMxnbUJa-TgJb4acE9CJ6jn1qK4WPcs5kj04QHEQ,9756 +charmcraft/commands/utils.py,sha256=KtpKn0KcA_oRh03ccLp17_KAmIlu5qgsur8Y5RWoC6c,2205 +charmcraft/commands/version.py,sha256=0EYCsGFQh4Mt3LW5hkpGWtki9YCR02-pI_nfytaVEQg,1572 +charmcraft/config.py,sha256=eZfs5VBw1OMZU8hXTnemYkZXR5u2EhdPXiMaCFrm_ws,6730 +charmcraft/helptexts.py,sha256=6iWbDScQdLqajeXPVXf5oLIZ_oyDXU4NKPNrJyRJsbk,9143 +charmcraft/jujuignore.py,sha256=7ZQTyxeSzfj2LW-NB-AYxNunt0KN7tU5C45gCUdbe9w,6954 
+charmcraft/logsetup.py,sha256=nkfnzexNZTVgEHp77skcDdhY1wXKLhfKtQRoJMeHGrQ,4303 +charmcraft/main.py,sha256=4Lbn9MNGjUQyXmrfx9YBSjIlhXHtaDu6Auj6GFWNE3M,12578 +charmcraft/templates/charmlibs/new_library.py.j2,sha256=_m3DD9Pppdi4AY5reBb-GBTklTvfoc3TKDASUdIhwK0,1028 +charmcraft/templates/init/.flake8.j2,sha256=yOvP97zjncARsnYbYuS_w-t_iVxGrBtw1FCC59spFyY,99 +charmcraft/templates/init/.jujuignore.j2,sha256=d3D298wbcb9lpzaJazhatEIPVe2Ad6wtWqQy7m5E28A,24 +charmcraft/templates/init/LICENSE.j2,sha256=z8d0m5b2O9McPEK1xHG_dWgUBT6EfBDz6wA0F7xSPTA,11358 +charmcraft/templates/init/README.md.j2,sha256=zxVTpo1f9DgqObe7QbjZlvnTCe2Tq8CgyEKCtUiRx64,517 +charmcraft/templates/init/actions.yaml.j2,sha256=qenRrrPa-ktCVSNI-QszxXnvbYDeT6HRGedrgiyseAY,447 +charmcraft/templates/init/config.yaml.j2,sha256=IGe6LZ8H_Hdrbvk-R9huvEv9ndGmes0otlZH0VkTMpo,319 +charmcraft/templates/init/metadata.yaml.j2,sha256=_w0f5m5Usi_tLEu3XGoSUTbB3MIvOeNPWBBq_BFhDek,221 +charmcraft/templates/init/requirements-dev.txt.j2,sha256=k_Yz7UH3mvSaAIHSnZDUMB_jwKtyYd0B2BZ9kWt439c,36 +charmcraft/templates/init/requirements.txt.j2,sha256=enC05wWafSg8iDKIvj3gvtAtEP2kYCyN5Gmd689q-_I,4 +charmcraft/templates/init/run_tests.j2,sha256=HHNPI8NLJGkR65WqOmDdHikW5r_t9D6VFo3VNO6zKVk,340 +charmcraft/templates/init/src/charm.py.j2,sha256=6K9peWQyDAQM5iDzKqmjELWv-8ykKX-tWSQCeshMZrE,1531 +charmcraft/templates/init/tests/__init__.py.j2,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +charmcraft/templates/init/tests/test_charm.py.j2,sha256=ur5WN8-yoKI5_9Bbobs8_TauyBuLWV1EOn3a93sbYi4,1175 +charmcraft/utils.py,sha256=MHJzQH3JgTcpxhkfrR1TJG-UA7dI0_LsCEQiqQpot5M,2253 +charmcraft/version.py,sha256=HbQcNycvTh_GG9dkE3Cfg9zBZDRhdfe0iiVe9I18Msk,47 diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft-0.8.1.dist-info/WHEEL b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft-0.8.1.dist-info/WHEEL new file mode 100644 index 0000000000000000000000000000000000000000..b552003ff90e66227ec90d1b159324f140d46001 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft-0.8.1.dist-info/WHEEL @@ -0,0 +1,5 @@ +Wheel-Version: 1.0 +Generator: bdist_wheel (0.34.2) +Root-Is-Purelib: true +Tag: py3-none-any + diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft-0.8.1.dist-info/entry_points.txt b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft-0.8.1.dist-info/entry_points.txt new file mode 100644 index 0000000000000000000000000000000000000000..1d892b95ec32cc7169077c20d49122b7a66b2db3 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft-0.8.1.dist-info/entry_points.txt @@ -0,0 +1,3 @@ +[console_scripts] +charmcraft = charmcraft.main:main + diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft-0.8.1.dist-info/top_level.txt b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft-0.8.1.dist-info/top_level.txt new file mode 100644 index 0000000000000000000000000000000000000000..8b826e4b45a3d1aeeb0ce928caddef6300dccd05 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft-0.8.1.dist-info/top_level.txt @@ -0,0 +1 @@ +charmcraft diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/__init__.py 
b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..8420e7fe48adf694f12eca5029dc018d53c0fe08 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/__init__.py @@ -0,0 +1,19 @@ +# Copyright 2020-2021 Canonical Ltd. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# For further info, check https://github.com/canonical/charmcraft + +"""Expose needed names at main package level.""" + +from .version import version as __version__ # noqa: F401 (imported but unused) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/__main__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/__main__.py new file mode 100644 index 0000000000000000000000000000000000000000..68a3d121172e6d5b809773c6ea35b310bdbaac49 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/__main__.py @@ -0,0 +1,23 @@ +# Copyright 2020 Canonical Ltd. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# For further info, check https://github.com/canonical/charmcraft + +"""Init file to allow execution of charmcraft as a module.""" + +import sys + +from charmcraft import main + +sys.exit(main.main()) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/cmdbase.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/cmdbase.py new file mode 100644 index 0000000000000000000000000000000000000000..fcaba30645eb54f6873ccd7e2ce944164a5b8085 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/cmdbase.py @@ -0,0 +1,81 @@ +# Copyright 2020-2021 Canonical Ltd. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+#
+# For further info, check https://github.com/canonical/charmcraft
+
+"""Infrastructure for common base commands functionality."""
+
+
+class CommandError(Exception):
+    """Base exception for all command errors.
+
+    It optionally receives a `retcode` parameter that will be the returned code
+    by the process on exit, and an `argsparsing` one to indicate that the problem
+    is in the command line usage.
+
+    XXX Facundo 2020-09-21: This will be refactored soon in the branch where all
+    output messages are standardized.
+    """
+
+    def __init__(self, message, retcode=1, argsparsing=False):
+        self.retcode = retcode
+        self.argsparsing = argsparsing
+        super().__init__(message)
+
+
+class BaseCommand:
+    """Base class to build charmcraft commands.
+
+    Subclass this to create a new command; the subclass must define the following attributes:
+
+    - name: the identifier in the command line
+    - help_msg: a one line help for user documentation
+    - common: if it's a common/starter command, which are prioritized in the help
+    - needs_config: will ensure a config is provided when executing the command
+
+    It also must/can override some methods for the proper command behaviour (see each
+    method's docstring).
+
+    The subclass must be declared in the corresponding section of main.COMMAND_GROUPS,
+    and will receive and store this group on instantiation (if overriding `__init__`, the
+    subclass must pass it through upwards).
+    """
+
+    name = None
+    help_msg = None
+    overview = None
+    common = False
+    needs_config = False
+
+    def __init__(self, group, config):
+        self.group = group
+        self.config = config
+
+    def fill_parser(self, parser):
+        """Specify the command's specific parameters.
+
+        Each command's parameters are independent of other commands, but note there are some
+        global ones (see `main.Dispatcher._build_argument_parser`).
+
+        If this method is not overridden, the command will not have any parameters.
+        """
+
+    def run(self, parsed_args):
+        """Execute the command's actual functionality.
+
+        It must be overridden by the command implementation.
+
+        This will receive parsed arguments that were defined in :meth:.fill_parser.
+        """
+        raise NotImplementedError()
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/commands/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/commands/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..85360b3784f1929cbb0cb04d2d873c19049c97e2
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/commands/__init__.py
@@ -0,0 +1,17 @@
+# Copyright 2020-2021 Canonical Ltd.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
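(Editor's note: as a sketch of the contract documented in `BaseCommand` above, here is a hypothetical subclass. The name, flag, and behaviour are invented for illustration; a real command must also be declared in `main.COMMAND_GROUPS`.)

```python
class HelloCommand(BaseCommand):
    """Toy command showing the documented attributes and overrides."""
    name = 'hello'
    help_msg = "Print a greeting (demo only)."
    common = False
    needs_config = False

    def fill_parser(self, parser):
        # parser is an argparse-style parser handed in by the dispatcher
        parser.add_argument('--name', default='world', help="Who to greet.")

    def run(self, parsed_args):
        print("Hello, {}!".format(parsed_args.name))
```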
+#
+# For further info, check https://github.com/canonical/charmcraft
+
+"""Flag the sub-package for files to be included when distributing."""
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/commands/build.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/commands/build.py
new file mode 100644
index 0000000000000000000000000000000000000000..25775c650a9fa0fcfd8faf6cb4600992c96747e7
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/commands/build.py
@@ -0,0 +1,398 @@
+# Copyright 2020-2021 Canonical Ltd.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# For further info, check https://github.com/canonical/charmcraft
+
+"""Infrastructure for the 'build' command."""
+
+import errno
+import logging
+import os
+import pathlib
+import shutil
+import subprocess
+import zipfile
+
+import yaml
+
+from charmcraft.cmdbase import BaseCommand, CommandError
+from charmcraft.jujuignore import JujuIgnore, default_juju_ignore
+from charmcraft.utils import make_executable
+
+logger = logging.getLogger(__name__)
+
+# Some constants that are used throughout the code.
+CHARM_METADATA = 'metadata.yaml'
+BUILD_DIRNAME = 'build'
+VENV_DIRNAME = 'venv'
+
+# The file name and template for the dispatch script.
+DISPATCH_FILENAME = 'dispatch'
+# If Juju doesn't support the dispatch mechanism, it will execute the
+# hook directly, and we'd need sys.argv[0] to be the name of the hook,
+# but that gets lost by going through this dispatch script, so we fake
+# JUJU_DISPATCH_PATH to be the value it would otherwise have been.
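+# For illustration: assuming the default entrypoint 'src/charm.py', the
+# DISPATCH_CONTENT template below renders to this shell script:
+#
+#   #!/bin/sh
+#
+#   JUJU_DISPATCH_PATH="${JUJU_DISPATCH_PATH:-$0}" PYTHONPATH=lib:venv ./src/charm.py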
+DISPATCH_CONTENT = """#!/bin/sh + +JUJU_DISPATCH_PATH="${{JUJU_DISPATCH_PATH:-$0}}" PYTHONPATH=lib:venv ./{entrypoint_relative_path} +""" + +# The minimum set of hooks to be provided for compatibility with old Juju +MANDATORY_HOOK_NAMES = {'install', 'start', 'upgrade-charm'} +HOOKS_DIR = 'hooks' + + +def _pip_needs_system(): + """Determine whether pip3 defaults to --user, needing --system to turn it off.""" + try: + from pip.commands.install import InstallCommand + return InstallCommand().cmd_opts.get_option('--system') is not None + except (ImportError, AttributeError, TypeError): + # probably not the bionic pip version then + return False + + +def polite_exec(cmd): + """Execute a command, only showing output if error.""" + logger.debug("Running external command %s", cmd) + try: + proc = subprocess.Popen( + cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, universal_newlines=True) + except Exception as err: + logger.error("Executing %s crashed with %r", cmd, err) + return 1 + + for line in proc.stdout: + logger.debug(":: %s", line.rstrip()) + retcode = proc.wait() + + if retcode: + logger.error("Executing %s failed with return code %d", cmd, retcode) + return retcode + + +def relativise(src, dst): + """Build a relative path from src to dst.""" + return pathlib.Path(os.path.relpath(str(dst), str(src.parent))) + + +class Builder: + """The package builder.""" + + def __init__(self, args): + self.charmdir = args['from'] + self.entrypoint = args['entrypoint'] + self.requirement_paths = args['requirement'] + + self.buildpath = self.charmdir / BUILD_DIRNAME + self.ignore_rules = self._load_juju_ignore() + + def run(self): + """Build the charm.""" + logger.debug("Building charm in '%s'", self.buildpath) + + if self.buildpath.exists(): + shutil.rmtree(str(self.buildpath)) + self.buildpath.mkdir() + + linked_entrypoint = self.handle_generic_paths() + self.handle_dispatcher(linked_entrypoint) + self.handle_dependencies() + zipname = self.handle_package() + + logger.info("Created '%s'.", zipname) + return zipname + + def _load_juju_ignore(self): + ignore = JujuIgnore(default_juju_ignore) + path = self.charmdir / '.jujuignore' + if path.exists(): + with path.open('r', encoding='utf-8') as ignores: + ignore.extend_patterns(ignores) + return ignore + + def create_symlink(self, src_path, dest_path): + """Create a symlink in dest_path pointing relatively like src_path. + + It also verifies that the linked dir or file is inside the project. + """ + resolved_path = src_path.resolve() + if self.charmdir in resolved_path.parents: + relative_link = relativise(src_path, resolved_path) + dest_path.symlink_to(relative_link) + else: + rel_path = src_path.relative_to(self.charmdir) + logger.warning( + "Ignoring symlink because targets outside the project: '%s'", rel_path) + + def handle_generic_paths(self): + """Handle all files and dirs except what's ignored and what will be handled later. 
+ + Works differently for the different file types: + - regular files: hard links + - directories: created + - symlinks: respected if are internal to the project + - other types (blocks, mount points, etc): ignored + """ + logger.debug("Linking in generic paths") + + for basedir, dirnames, filenames in os.walk(str(self.charmdir), followlinks=False): + abs_basedir = pathlib.Path(basedir) + rel_basedir = abs_basedir.relative_to(self.charmdir) + + # process the directories + ignored = [] + for pos, name in enumerate(dirnames): + rel_path = rel_basedir / name + abs_path = abs_basedir / name + + if self.ignore_rules.match(str(rel_path), is_dir=True): + logger.debug("Ignoring directory because of rules: '%s'", rel_path) + ignored.append(pos) + elif abs_path.is_symlink(): + dest_path = self.buildpath / rel_path + self.create_symlink(abs_path, dest_path) + else: + dest_path = self.buildpath / rel_path + dest_path.mkdir(mode=abs_path.stat().st_mode) + + # in the future don't go inside ignored directories + for pos in reversed(ignored): + del dirnames[pos] + + # process the files + for name in filenames: + rel_path = rel_basedir / name + abs_path = abs_basedir / name + + if self.ignore_rules.match(str(rel_path), is_dir=False): + logger.debug("Ignoring file because of rules: '%s'", rel_path) + elif abs_path.is_symlink(): + dest_path = self.buildpath / rel_path + self.create_symlink(abs_path, dest_path) + elif abs_path.is_file(): + dest_path = self.buildpath / rel_path + try: + os.link(str(abs_path), str(dest_path)) + except PermissionError: + # when not allowed to create hard links + shutil.copy2(str(abs_path), str(dest_path)) + except OSError as e: + if e.errno != errno.EXDEV: + raise + shutil.copy2(str(abs_path), str(dest_path)) + else: + logger.debug("Ignoring file because of type: '%s'", rel_path) + + # the linked entrypoint is calculated here because it's when it's really in the build dir + linked_entrypoint = self.buildpath / self.entrypoint.relative_to(self.charmdir) + return linked_entrypoint + + def handle_dispatcher(self, linked_entrypoint): + """Handle modern and classic dispatch mechanisms.""" + # dispatch mechanism, create one if wasn't provided by the project + dispatch_path = self.buildpath / DISPATCH_FILENAME + if not dispatch_path.exists(): + logger.debug("Creating the dispatch mechanism") + dispatch_content = DISPATCH_CONTENT.format( + entrypoint_relative_path=linked_entrypoint.relative_to(self.buildpath)) + with dispatch_path.open("wt", encoding="utf8") as fh: + fh.write(dispatch_content) + make_executable(fh) + + # bunch of symlinks, to support old juju: verify that any of the already included hooks + # in the directory is not linking directly to the entrypoint, and also check all the + # mandatory ones are present + dest_hookpath = self.buildpath / HOOKS_DIR + if not dest_hookpath.exists(): + dest_hookpath.mkdir() + + # get those built hooks that we need to replace because they are pointing to the + # entrypoint directly and we need to fix the environment in the middle + current_hooks_to_replace = [] + for node in dest_hookpath.iterdir(): + if node.resolve() == linked_entrypoint: + current_hooks_to_replace.append(node) + node.unlink() + logger.debug( + "Replacing existing hook %r as it's a symlink to the entrypoint", node.name) + + # include the mandatory ones and those we need to replace + hooknames = MANDATORY_HOOK_NAMES | {x.name for x in current_hooks_to_replace} + for hookname in hooknames: + logger.debug("Creating the %r hook script pointing to dispatch", hookname) + 
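+            # illustrative note: for the mandatory 'install' hook, this loop
+            # ends up creating the relative symlink hooks/install -> ../dispatch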
dest_hook = dest_hookpath / hookname + if not dest_hook.exists(): + relative_link = relativise(dest_hook, dispatch_path) + dest_hook.symlink_to(relative_link) + + def handle_dependencies(self): + """Handle from-directory and virtualenv dependencies.""" + logger.debug("Installing dependencies") + + # virtualenv with other dependencies (if any) + if self.requirement_paths: + retcode = polite_exec(['pip3', 'list']) + if retcode: + raise CommandError("problems using pip") + + venvpath = self.buildpath / VENV_DIRNAME + cmd = [ + 'pip3', 'install', # base command + '--target={}'.format(venvpath), # put all the resulting files in that specific dir + ] + if _pip_needs_system(): + logger.debug("adding --system to work around pip3 defaulting to --user") + cmd.append("--system") + for reqspath in self.requirement_paths: + cmd.append('--requirement={}'.format(reqspath)) # the dependencies file(s) + retcode = polite_exec(cmd) + if retcode: + raise CommandError("problems installing dependencies") + + def handle_package(self): + """Handle the final package creation.""" + logger.debug("Parsing the project's metadata") + with (self.charmdir / CHARM_METADATA).open('rt', encoding='utf8') as fh: + metadata = yaml.safe_load(fh) + + logger.debug("Creating the package itself") + zipname = metadata['name'] + '.charm' + zipfh = zipfile.ZipFile(zipname, 'w', zipfile.ZIP_DEFLATED) + buildpath_str = str(self.buildpath) # os.walk does not support pathlib in 3.5 + for dirpath, dirnames, filenames in os.walk(buildpath_str, followlinks=True): + dirpath = pathlib.Path(dirpath) + for filename in filenames: + filepath = dirpath / filename + zipfh.write(str(filepath), str(filepath.relative_to(self.buildpath))) + + zipfh.close() + return zipname + + +class Validator: + """A validator of all received options.""" + + _options = [ + 'from', # this needs to be processed first, as it's a base dir to find other files + 'entrypoint', + 'requirement', + ] + + def __init__(self): + self.basedir = None # this will be fulfilled when processing 'from' + + def process(self, parsed_args): + """Process the received options.""" + result = {} + for opt in self._options: + meth = getattr(self, 'validate_' + opt) + result[opt] = meth(getattr(parsed_args, opt, None)) + return result + + def validate_from(self, dirpath): + """Validate that the charm dir is there and yes, a directory.""" + if dirpath is None: + dirpath = pathlib.Path.cwd() + else: + dirpath = dirpath.expanduser().absolute() + + if not dirpath.exists(): + raise CommandError("Charm directory was not found: {!r}".format(str(dirpath))) + if not dirpath.is_dir(): + raise CommandError( + "Charm directory is not really a directory: {!r}".format(str(dirpath))) + + self.basedir = dirpath + return dirpath + + def validate_entrypoint(self, filepath): + """Validate that the entrypoint exists and is executable.""" + if filepath is None: + filepath = self.basedir / 'src' / 'charm.py' + else: + filepath = filepath.expanduser().absolute() + + if not filepath.exists(): + raise CommandError("Charm entry point was not found: {!r}".format(str(filepath))) + if self.basedir not in filepath.parents: + raise CommandError( + "Charm entry point must be inside the project: {!r}".format(str(filepath))) + if not os.access(str(filepath), os.X_OK): # access does not support pathlib in 3.5 + raise CommandError( + "Charm entry point must be executable: {!r}".format(str(filepath))) + return filepath + + def validate_requirement(self, filepaths): + """Validate that the given requirement(s) (if any) exist. 
+ + If not specified, default to requirements.txt if there. + """ + if filepaths is None: + req = self.basedir / 'requirements.txt' + if req.exists() and os.access(str(req), os.R_OK): # access doesn't support pathlib 3.5 + return [req] + return [] + + filepaths = [x.expanduser().absolute() for x in filepaths] + for fpath in filepaths: + if not fpath.exists(): + raise CommandError("the requirements file was not found: {!r}".format(str(fpath))) + return filepaths + + +_overview = """ +Build a charm operator package. + +You can `juju deploy` the resulting `.charm` file directly, or upload it +to Charmhub with `charmcraft upload`. + +You must be inside a charm directory with a valid `metadata.yaml`, +`requirements.txt` including the `ops` package for the Python operator +framework, and an operator entrypoint, usually `src/charm.py`. + +See `charmcraft init` to create a template charm directory structure. +""" + + +class BuildCommand(BaseCommand): + """Build the charm.""" + + name = 'build' + help_msg = "Build the charm" + overview = _overview + common = True + + def fill_parser(self, parser): + """Add own parameters to the general parser.""" + parser.add_argument( + '-f', '--from', type=pathlib.Path, + help="Charm directory with metadata.yaml where the build " + "takes place; defaults to '.'") + parser.add_argument( + '-e', '--entrypoint', type=pathlib.Path, + help="The executable which is the operator entry point; " + "defaults to 'src/charm.py'") + parser.add_argument( + '-r', '--requirement', action='append', type=pathlib.Path, + help="File(s) listing needed PyPI dependencies (can be used multiple " + "times); defaults to 'requirements.txt'") + + def run(self, parsed_args): + """Run the command.""" + validator = Validator() + args = validator.process(parsed_args) + logger.debug("working arguments: %s", args) + builder = Builder(args) + builder.run() diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/commands/init.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/commands/init.py new file mode 100644 index 0000000000000000000000000000000000000000..04793e828ddcefe2a37d10c1357656425ba473c9 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/commands/init.py @@ -0,0 +1,145 @@ +# Copyright 2020-2021 Canonical Ltd. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# For further info, check https://github.com/canonical/charmcraft + +"""Infrastructure for the 'init' command.""" + +import logging +import os +import pwd +import re +from datetime import date + +import yaml + +from charmcraft.cmdbase import BaseCommand, CommandError +from charmcraft.utils import make_executable, get_templates_environment + +logger = logging.getLogger(__name__) + +_overview = """ +Initialize a charm operator package tree and files. + +This command will modify the directory to create the necessary files for a +charm operator package. 
By default it will work in the current directory. +It will setup the following tree of files and directories: + +. +├── metadata.yaml - Charm operator package description +├── README.md - Frontpage for your charmhub.io/charm/ +├── actions.yaml - Day-2 action declarations, e.g. backup, restore +├── config.yaml - Config schema for your operator +├── LICENSE - Your charm license, we recommend Apache 2 +├── requirements.txt - PyPI dependencies for your charm, with `ops` +├── requirements-dev.txt - PyPI for development tooling, notably flake8 +├── run_tests +├── src +│   └── charm.py - Minimal operator using Python operator framework +└── tests + ├── __init__.py + └── test_charm.py + +You will need to edit at least metadata.yaml and README.md. + +Your minimal operator code is in src/charm.py which uses the Python operator +framework from https://github.com/canonical/operator and there are some +example tests with a harness to run them. +""" + + +class InitCommand(BaseCommand): + """Initialize a directory to be a charm project.""" + + name = "init" + help_msg = "Initialize a charm operator package tree and files" + overview = _overview + common = True + + def fill_parser(self, parser): + """Specify command's specific parameters.""" + parser.add_argument( + "--name", + help="The name of the charm; defaults to the directory name") + parser.add_argument( + "--author", + help="The charm author; defaults to the current user name per GECOS") + parser.add_argument( + "--series", default="kubernetes", + help="A comma-separated list of supported platform series;" + " defaults to 'kubernetes'") + parser.add_argument( + "-f", "--force", action="store_true", + help="Initialize even if the directory is not empty (will not overwrite files)") + + def run(self, args): + """Execute command's actual functionality.""" + if any(self.config.project.dirpath.iterdir()) and not args.force: + raise CommandError( + "{} is not empty (consider using --force to work on nonempty directories)" + .format(self.config.project.dirpath)) + logger.debug("Using project directory '%s'", self.config.project.dirpath) + + if args.author is None: + gecos = pwd.getpwuid(os.getuid()).pw_gecos.split(',', 1)[0] + if not gecos: + raise CommandError("Author not given, and nothing in GECOS field") + logger.debug("Setting author to %r from GECOS field", gecos) + args.author = gecos + + if not args.name: + args.name = self.config.project.dirpath.name + logger.debug("Set project name to '%s'", args.name) + + if not re.match(r"[a-z][a-z0-9-]*[a-z0-9]$", args.name): + raise CommandError("{} is not a valid charm name".format(args.name)) + + context = { + "name": args.name, + "author": args.author, + "year": date.today().year, + "class_name": "".join(re.split(r"\W+", args.name.title())) + "Charm", + "series": yaml.dump(args.series.split(","), default_flow_style=True), + } + + env = get_templates_environment('init') + + _todo_rx = re.compile("TODO: (.*)") + todos = [] + executables = ["run_tests", "src/charm.py"] + for template_name in env.list_templates(): + if not template_name.endswith(".j2"): + continue + template = env.get_template(template_name) + template_name = template_name[:-3] + logger.debug("Rendering %s", template_name) + path = self.config.project.dirpath / template_name + if path.exists(): + continue + path.parent.mkdir(parents=True, exist_ok=True) + with path.open("wt", encoding="utf8") as fh: + out = template.render(context) + fh.write(out) + for todo in _todo_rx.findall(out): + todos.append((template_name, todo)) + if template_name in 
executables: + make_executable(fh) + logger.debug(" made executable") + logger.info("Charm operator package file and directory tree initialized.") + if todos: + logger.info("TODO:") + logger.info("") + w = max(len(i[0]) for i in todos) + for fn, todo in todos: + logger.info("%*s: %s", w + 2, fn, todo) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/commands/pack.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/commands/pack.py new file mode 100644 index 0000000000000000000000000000000000000000..7eb58a6226c4fbda9ef37378da31b28acd2f8fc0 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/commands/pack.py @@ -0,0 +1,104 @@ +# Copyright 2020-2021 Canonical Ltd. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# For further info, check https://github.com/canonical/charmcraft + +"""Infrastructure for the 'pack' command.""" + +import logging +import zipfile + +from charmcraft.cmdbase import BaseCommand, CommandError +from charmcraft.utils import load_yaml + +logger = logging.getLogger(__name__) + +# the minimum set of files in a bundle +MANDATORY_FILES = {'bundle.yaml'} + + +def build_zip(zippath, basedir, fpaths): + """Build the final file. + + Note we convert all paths to str to support Python 3.5. + """ + zipfh = zipfile.ZipFile(str(zippath), 'w', zipfile.ZIP_DEFLATED) + for fpath in fpaths: + zipfh.write(str(fpath), str(fpath.relative_to(basedir))) + zipfh.close() + + +def get_paths_to_include(config): + """Get all file/dir paths to include.""" + dirpath = config.project.dirpath + allpaths = set() + + # all mandatory files, which must exist (currently only bundles.yaml is mandatory, and + # it's verified before) + for fname in MANDATORY_FILES: + allpaths.add(dirpath / fname) + + # the extra files (relative paths) + for spec in config.parts: + fpaths = sorted(fpath for fpath in dirpath.glob(spec) if fpath.is_file()) + logger.debug("Including per prime config %r: %s.", spec, fpaths) + allpaths.update(fpaths) + + return sorted(allpaths) + + +_overview = """ +Build the bundle and package it as a zip archive. + +You can `juju deploy` the bundle .zip file or upload it to +the store (see the "upload" command). +""" + + +class PackCommand(BaseCommand): + """Build the bundle or the charm. + + Eventually this command will also support charms, but for now it will work only + on bundles. 
+ """ + + name = 'pack' + help_msg = "Build the bundle" + overview = _overview + needs_config = True + + def run(self, parsed_args): + """Run the command.""" + # get the config files + bundle_filepath = self.config.project.dirpath / 'bundle.yaml' + bundle_config = load_yaml(bundle_filepath) + if bundle_config is None: + raise CommandError( + "Missing or invalid main bundle file: '{}'.".format(bundle_filepath)) + bundle_name = bundle_config.get('name') + if not bundle_name: + raise CommandError( + "Invalid bundle config; missing a 'name' field indicating the bundle's name in " + "file '{}'.".format(bundle_filepath)) + + # so far 'pack' works for bundles only (later this will operate also on charms) + if self.config.type != 'bundle': + raise CommandError( + "Bad config: 'type' field in charmcraft.yaml must be 'bundle' for this command.") + + # pack everything + paths = get_paths_to_include(self.config) + zipname = self.config.project.dirpath / (bundle_name + '.zip') + build_zip(zipname, self.config.project.dirpath, paths) + logger.info("Created '%s'.", zipname) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/commands/store/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/commands/store/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..ff0cf214c7c2a02dfc453710e8d3b4d364ea06e3 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/commands/store/__init__.py @@ -0,0 +1,977 @@ +# Copyright 2020-2021 Canonical Ltd. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# For further info, check https://github.com/canonical/charmcraft + +"""Commands related to Charmhub.""" + +import ast +import hashlib +import logging +import os +import pathlib +import string +import textwrap +from collections import namedtuple +from operator import attrgetter + +import yaml +from tabulate import tabulate + +from charmcraft.cmdbase import BaseCommand, CommandError +from charmcraft.utils import get_templates_environment + +from .store import Store + +logger = logging.getLogger('charmcraft.commands.store') + +LibData = namedtuple( + 'LibData', 'lib_id api patch content content_hash full_name path lib_name charm_name') + + +def get_name_from_metadata(): + """Return the name if present and plausible in metadata.yaml.""" + try: + with open('metadata.yaml', 'rb') as fh: + metadata = yaml.safe_load(fh) + charm_name = metadata['name'] + except (yaml.error.YAMLError, OSError, KeyError): + return + return charm_name + + +class LoginCommand(BaseCommand): + """Login to Charmhub.""" + + name = 'login' + help_msg = "Login to Charmhub" + overview = textwrap.dedent(""" + Login to Charmhub. + + Charmcraft will provide a URL for the Charmhub login. When you have + successfully logged in, charmcraft will store a token for ongoing + access to Charmhub at the CLI. 
+ + Remember to `charmcraft logout` if you want to remove that token + from your local system, especially in a shared environment. + + See also `charmcraft whoami` to verify that you are logged in. + + """) + + def run(self, parsed_args): + """Run the command.""" + store = Store(self.config.charmhub) + store.login() + logger.info("Logged in as '%s'.", store.whoami().username) + + +class LogoutCommand(BaseCommand): + """Clear Charmhub token.""" + + name = 'logout' + help_msg = "Logout from Charmhub and remove token" + overview = textwrap.dedent(""" + Clear the Charmhub token. + + Charmcraft will remove the local token used for Charmhub access. + This is important on any shared system because the token allows + manipulation of your published charms. + + See also `charmcraft whoami` to verify that you are logged in, + and `charmcraft login`. + + """) + + def run(self, parsed_args): + """Run the command.""" + store = Store(self.config.charmhub) + store.logout() + logger.info("Charmhub token cleared.") + + +class WhoamiCommand(BaseCommand): + """Show login information.""" + + name = 'whoami' + help_msg = "Show your Charmhub login status" + overview = textwrap.dedent(""" + Show your Charmhub login status. + + See also `charmcraft login` and `charmcraft logout`. + """) + + def run(self, parsed_args): + """Run the command.""" + store = Store(self.config.charmhub) + result = store.whoami() + + data = [ + ('name:', result.name), + ('username:', result.username), + ('id:', result.userid), + ] + table = tabulate(data, tablefmt='plain') + for line in table.splitlines(): + logger.info(line) + + +class RegisterNameCommand(BaseCommand): + """Register a name in Charmhub.""" + + name = 'register' + help_msg = "Register a charm name in Charmhub" + overview = textwrap.dedent(""" + Register a charm name in Charmhub. + + Claim a name for your operator in Charmhub. Once you have registered + a name, you can upload charm operator packages for that name and + release them for wider consumption. + + Charmhub operates on the 'principle of least surprise' with regard + to naming. A charm with a well-known name should provide the best + operator for the microservice most people associate with that name. + Charms can be renamed in the Charmhub, but we would nonetheless ask + you to use a qualified name, such as `yourname-charmname` if you are + in any doubt about your ability to meet that standard. + + We discuss registrations in Charmhub Discourse: + + https://discourse.charmhub.io/c/charm + + Registration will take you through login if needed. + + """) + common = True + + def fill_parser(self, parser): + """Add own parameters to the general parser.""" + parser.add_argument('name', help="The name to register in Charmhub") + + def run(self, parsed_args): + """Run the command.""" + store = Store(self.config.charmhub) + store.register_name(parsed_args.name) + logger.info("You are now the publisher of %r in Charmhub.", parsed_args.name) + + +class ListNamesCommand(BaseCommand): + """List the charms registered in Charmhub.""" + + name = 'names' + help_msg = "List your registered charm names in Charmhub" + overview = textwrap.dedent(""" + An overview of names you have registered to publish in Charmhub. + + $ charmcraft names + Name Visibility Status + sabdfl-hello-world public registered + + Visibility and status are shown for each name. `public` items can be + seen by any user, while `private` items are only for you and the + other accounts with permission to collaborate on that specific name. 
+ + Listing names will take you through login if needed. + + """) + common = True + + def run(self, parsed_args): + """Run the command.""" + store = Store(self.config.charmhub) + result = store.list_registered_names() + if not result: + logger.info("No charms registered.") + return + + headers = ['Name', 'Visibility', 'Status'] + data = [] + for item in result: + visibility = 'private' if item.private else 'public' + data.append([ + item.name, + visibility, + item.status, + ]) + + table = tabulate(data, headers=headers, tablefmt='plain') + for line in table.splitlines(): + logger.info(line) + + +class UploadCommand(BaseCommand): + """Upload a charm to Charmhub.""" + + name = 'upload' + help_msg = "Upload a charm to Charmhub" + overview = textwrap.dedent(""" + Upload a charm to Charmhub. + + Push a charm to Charmhub where it will be verified for conformance + to the packaging standard. This command will finish successfully + once the package is approved by Charmhub. + + In the event of a failure in the verification process, charmcraft + will report details of the failure, otherwise it will give you the + new charm revision. + + Upload will take you through login if needed. + + """) + common = True + + def _discover_charm(self, charm_filepath): + """Discover the charm name and file path. + + If received path is None, a metadata.yaml will be searched in the current directory. If + path is given the name is taken from the filename. + + """ + if charm_filepath is None: + # discover the info using project's metadata, asume the file has the project's name + # with a .charm extension + charm_name = get_name_from_metadata() + if charm_name is None: + raise CommandError( + "Cannot find a valid charm name in metadata.yaml to upload. Check you are in " + "a charm directory with metadata.yaml, or use --charm-file=foo.charm.") + + charm_filepath = pathlib.Path(charm_name + '.charm').absolute() + if not os.access(str(charm_filepath), os.R_OK): # access doesnt support pathlib in 3.5 + raise CommandError( + "Cannot access charm at {!r}. Try --charm-file=foo.charm" + .format(str(charm_filepath))) + + else: + # the path is given, asume the charm name is part of the file name + # XXX Facundo 2020-06-30: Actually, we need to open the ZIP file, extract the + # included metadata.yaml file, and read the name from there. Issue: #77. 
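+            # e.g. (illustrative): --charm-file=./dist/super.charm results in
+            # the charm name 'super', taken from the file stem below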
+ charm_filepath = charm_filepath.expanduser() + if not os.access(str(charm_filepath), os.R_OK): # access doesnt support pathlib in 3.5 + raise CommandError( + "Cannot access {!r}.".format(str(charm_filepath))) + if not charm_filepath.is_file(): + raise CommandError( + "{!r} is not a file.".format(str(charm_filepath))) + + charm_name = charm_filepath.stem + + return charm_name, charm_filepath + + def fill_parser(self, parser): + """Add own parameters to the general parser.""" + parser.add_argument( + '--charm-file', type=pathlib.Path, + help="The charm to upload") + + def run(self, parsed_args): + """Run the command.""" + name, path = self._discover_charm(parsed_args.charm_file) + store = Store(self.config.charmhub) + result = store.upload(name, path) + if result.ok: + logger.info("Revision %s of %r created", result.revision, str(name)) + else: + logger.info("Upload failed with status %r:", result.status) + for error in result.errors: + logger.info("- %s: %s", error.code, error.message) + + +class ListRevisionsCommand(BaseCommand): + """List revisions for a charm.""" + + name = 'revisions' + help_msg = "List revisions for a charm in Charmhub" + overview = textwrap.dedent(""" + Show version, date and status for each revision in Charmhub. + + For example: + + $ charmcraft revisions + Revision Version Created at Status + 1 1 2020-11-15 released + + Listing revisions will take you through login if needed. + + """) + common = True + + def fill_parser(self, parser): + """Add own parameters to the general parser.""" + parser.add_argument('--name', help="The name of the charm") + + def run(self, parsed_args): + """Run the command.""" + if parsed_args.name: + charm_name = parsed_args.name + else: + charm_name = get_name_from_metadata() + if charm_name is None: + raise CommandError( + "Cannot find a valid charm name in metadata.yaml. Check you are in a charm " + "directory with metadata.yaml, or use --name=foo.") + + store = Store(self.config.charmhub) + result = store.list_revisions(charm_name) + if not result: + logger.info("No revisions found.") + return + + headers = ['Revision', 'Version', 'Created at', 'Status'] + data = [] + for item in sorted(result, key=attrgetter('revision'), reverse=True): + # use just the status or include error message/code in it (if exist) + if item.errors: + errors = ("{0.message} [{0.code}]".format(e) for e in item.errors) + status = "{}: {}".format(item.status, '; '.join(errors)) + else: + status = item.status + + data.append([ + item.revision, + item.version, + item.created_at.strftime('%Y-%m-%d'), + status, + ]) + + table = tabulate(data, headers=headers, tablefmt='plain', numalign='left') + for line in table.splitlines(): + logger.info(line) + + +class ReleaseCommand(BaseCommand): + """Release a charm revision to specific channels.""" + + name = 'release' + help_msg = "Release a charm revision in one or more channels" + overview = textwrap.dedent(""" + Release a charm revision in the channel(s) provided. + + Charm revisions are not published for anybody else until you release + them in a channel. When you release a revision into a channel, users + who deploy the charm from that channel will get see the new revision + as a potential update. + + A channel is made up of `track/risk/branch` with both the track and + the branch as optional items, so formally: + + [track/]risk[/branch] + + Channel risk must be one of stable, candidate, beta or edge. The + track defaults to `latest` and branch has no default. 
+ + It is enough just to provide a channel risk, like `stable` because + the track will be assumed to be `latest` and branch is not required. + + Some channel examples: + + stable + edge + 2.0/candidate + beta/hotfix-23425 + 1.3/beta/feature-foo + + Listing revisions will take you through login if needed. + """) + common = True + + def fill_parser(self, parser): + """Add own parameters to the general parser.""" + parser.add_argument( + 'revision', type=int, help='The revision to release') + parser.add_argument( + 'channels', metavar='channel', nargs='+', + help="The channel(s) to release to") + parser.add_argument('--name', help="The name of the charm") + + def run(self, parsed_args): + """Run the command.""" + store = Store(self.config.charmhub) + + if parsed_args.name: + charm_name = parsed_args.name + else: + charm_name = get_name_from_metadata() + if charm_name is None: + raise CommandError( + "Cannot find a valid charm name in metadata.yaml. Check you are in a charm " + "directory with metadata.yaml, or use --name=foo.") + + store.release(charm_name, parsed_args.revision, parsed_args.channels) + logger.info( + "Revision %d of charm %r released to %s", + parsed_args.revision, charm_name, ", ".join(parsed_args.channels)) + + +class StatusCommand(BaseCommand): + """Show channel status for a charm.""" + + name = 'status' + help_msg = "Show channel and released revisions" + overview = textwrap.dedent(""" + Show channels and released revisions in Charmhub + + Charm revisions are not available to users until they are released + into a channel. This command shows the various channels for a charm + and whether there is a charm released. + + For example: + + $ charmcraft status + Track Channel Version Revision + latest stable - - + candidate - - + beta - - + edge 1 1 + + Showing channels will take you through login if needed. + """) + common = True + + def fill_parser(self, parser): + """Add own parameters to the general parser.""" + parser.add_argument('--name', help="The name of the charm") + + def run(self, parsed_args): + """Run the command.""" + if parsed_args.name: + charm_name = parsed_args.name + else: + charm_name = get_name_from_metadata() + if charm_name is None: + raise CommandError( + "Cannot find a valid charm name in metadata.yaml. 
Check you are in a charm " + "directory with metadata.yaml, or use --name=foo.") + + store = Store(self.config.charmhub) + channel_map, channels, revisions = store.list_releases(charm_name) + if not channel_map: + logger.info("Nothing has been released yet.") + return + + # build easier to access structures + releases_by_channel = {item.channel: item for item in channel_map} + revisions_by_revno = {item.revision: item for item in revisions} + + # process and order the channels, while preserving the tracks order + all_tracks = [] + per_track = {} + branch_present = False + for channel in channels: + # it's super rare to have a more than just a bunch of tracks (furthermore, normally + # there's only one), so it's ok to do this sequential search + if channel.track not in all_tracks: + all_tracks.append(channel.track) + + nonbranches_list, branches_list = per_track.setdefault(channel.track, ([], [])) + if channel.branch is None: + # insert branch right after its fallback + for idx, stored in enumerate(nonbranches_list, 1): + if stored.name == channel.fallback: + nonbranches_list.insert(idx, channel) + break + else: + nonbranches_list.append(channel) + else: + branches_list.append(channel) + branch_present = True + + headers = ['Track', 'Channel', 'Version', 'Revision'] + if branch_present: + headers.append('Expires at') + + # show everything, grouped by tracks, with regular channels at first and + # branches (if any) after those + data = [] + for track in all_tracks: + release_shown_for_this_track = False + shown_track = track + channels, branches = per_track[track] + + for channel in channels: + description = channel.risk + + # get the release of the channel, fallbacking accordingly + release = releases_by_channel.get(channel.name) + if release is None: + version = revno = '↑' if release_shown_for_this_track else '-' + else: + release_shown_for_this_track = True + revno = release.revision + revision = revisions_by_revno[revno] + version = revision.version + + data.append([shown_track, description, version, revno]) + + # stop showing the track name for the rest of the track + shown_track = '' + + for branch in branches: + description = '/'.join((branch.risk, branch.branch)) + release = releases_by_channel[branch.name] + expiration = release.expires_at.isoformat() + revision = revisions_by_revno[release.revision] + data.append(['', description, revision.version, release.revision, expiration]) + + table = tabulate(data, headers=headers, tablefmt='plain', numalign='left') + for line in table.splitlines(): + logger.info(line) + + +class _BadLibraryPathError(CommandError): + """Subclass to provide a specific error for a bad library path.""" + + def __init__(self, path): + super().__init__( + "Charm library path {} must conform to lib/charms//vN/.py" + "".format(path)) + + +class _BadLibraryNameError(CommandError): + """Subclass to provide a specific error for a bad library name.""" + + def __init__(self, name): + super().__init__( + "Charm library name {!r} must conform to charms..vN." 
+ .format(name)) + + +def _get_positive_int(raw_value): + """Convert the raw value for api/patch into a positive integer.""" + value = int(raw_value) + if value < 0: + raise ValueError('negative') + return value + + +def _get_lib_info(*, full_name=None, lib_path=None): + """Get the whole lib info from the path/file.""" + if full_name is None: + # get it from the lib_path + try: + libsdir, charmsdir, charm_name, v_api = lib_path.parts[:-1] + except ValueError: + raise _BadLibraryPathError(lib_path) + if libsdir != 'lib' or charmsdir != 'charms' or lib_path.suffix != '.py': + raise _BadLibraryPathError(lib_path) + full_name = '.'.join((charmsdir, charm_name, v_api, lib_path.stem)) + + else: + # build the path! convert a lib name with dots to the full path, including lib + # dir and Python extension. + # e.g.: charms.mycharm.v4.foo -> lib/charms/mycharm/v4/foo.py + try: + charmsdir, charm_name, v_api, libfile = full_name.split('.') + except ValueError: + raise _BadLibraryNameError(full_name) + if charmsdir != 'charms': + raise _BadLibraryNameError(full_name) + lib_path = pathlib.Path('lib') / charmsdir / charm_name / v_api / (libfile + '.py') + + if v_api[0] != 'v' or not v_api[1:].isdigit(): + raise CommandError( + "The API version in the library path must be 'vN' where N is an integer.") + api_from_path = int(v_api[1:]) + + lib_name = lib_path.stem + if not lib_path.exists(): + return LibData( + lib_id=None, api=api_from_path, patch=-1, content_hash=None, content=None, + full_name=full_name, path=lib_path, lib_name=lib_name, charm_name=charm_name) + + # parse the file and extract metadata from it, while hashing + metadata_fields = (b'LIBAPI', b'LIBPATCH', b'LIBID') + metadata = dict.fromkeys(metadata_fields) + hasher = hashlib.sha256() + with lib_path.open('rb') as fh: + for line in fh: + if line.startswith(metadata_fields): + try: + field, value = [x.strip() for x in line.split(b'=')] + except ValueError: + raise CommandError("Bad metadata line in {}: {!r}".format(lib_path, line)) + metadata[field] = value + else: + hasher.update(line) + + missing = [k.decode('ascii') for k, v in metadata.items() if v is None] + if missing: + raise CommandError( + "Library {} is missing the mandatory metadata fields: {}." + .format(lib_path, ', '.join(sorted(missing)))) + + bad_api_patch_msg = "Library {} metadata field {} is not zero or a positive integer." + try: + libapi = _get_positive_int(metadata[b'LIBAPI']) + except ValueError: + raise CommandError(bad_api_patch_msg.format(lib_path, 'LIBAPI')) + try: + libpatch = _get_positive_int(metadata[b'LIBPATCH']) + except ValueError: + raise CommandError(bad_api_patch_msg.format(lib_path, 'LIBPATCH')) + + if libapi == 0 and libpatch == 0: + raise CommandError( + "Library {} metadata fields LIBAPI and LIBPATCH cannot both be zero." + .format(lib_path)) + + if libapi != api_from_path: + raise CommandError( + "Library {} metadata field LIBAPI is different from the version in the path." + .format(lib_path)) + + bad_libid_msg = "Library {} metadata field LIBID must be a non-empty ASCII string." 
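+    # LIBID appears in the library source as a quoted literal, so it is parsed
+    # with ast.literal_eval; e.g. a line like LIBID = "0123abcd" (this id is
+    # purely illustrative) yields the plain string '0123abcd'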
+ try: + libid = ast.literal_eval(metadata[b'LIBID'].decode('ascii')) + except (ValueError, UnicodeDecodeError): + raise CommandError(bad_libid_msg.format(lib_path)) + if not libid or not isinstance(libid, str): + raise CommandError(bad_libid_msg.format(lib_path)) + + content_hash = hasher.hexdigest() + content = lib_path.read_text() + + return LibData( + lib_id=libid, api=libapi, patch=libpatch, content_hash=content_hash, content=content, + full_name=full_name, path=lib_path, lib_name=lib_name, charm_name=charm_name) + + +def _get_libs_from_tree(charm_name=None): + """Get library info from the directories tree (for a specific charm if specified). + + It only follows/uses the the directories/files for a correct charmlibs + disk structure. + """ + local_libs_data = [] + + if charm_name is None: + base_dir = pathlib.Path('lib') / 'charms' + charm_dirs = sorted(base_dir.iterdir()) if base_dir.is_dir() else [] + else: + base_dir = pathlib.Path('lib') / 'charms' / charm_name + charm_dirs = [base_dir] if base_dir.is_dir() else [] + + for charm_dir in charm_dirs: + for v_dir in sorted(charm_dir.iterdir()): + if v_dir.is_dir() and v_dir.name[0] == 'v' and v_dir.name[1:].isdigit(): + for libfile in sorted(v_dir.glob('*.py')): + local_libs_data.append(_get_lib_info(lib_path=libfile)) + + found_libs = [lib_data.full_name for lib_data in local_libs_data] + logger.debug("Libraries found under %s: %s", base_dir, found_libs) + return local_libs_data + + +class CreateLibCommand(BaseCommand): + """Create a charm library.""" + + name = 'create-lib' + help_msg = "Create a charm library" + overview = textwrap.dedent(""" + Create a charm library. + + Charmcraft manages charm libraries, which are published by charmers + to help other charmers integrate their charms. This command creates + a new library in your charm which you are publishing for others. + + This command MUST be run inside your charm directory with a valid + metadata.yaml. It will create the Python library with API version 0 + initially: + + lib/charms//v0/.py + + Each library has a unique identifier assigned by Charmhub that + supports accurate updates of libraries even if charms are renamed. + Charmcraft will request a unique ID from Charmhub and initialise a + template Python library. + + Creating a charm library will take you through login if needed. + + """) + + def fill_parser(self, parser): + """Add own parameters to the general parser.""" + parser.add_argument( + 'name', metavar='name', + help="The name of the library file (e.g. 'db')") + + def run(self, parsed_args): + """Run the command.""" + lib_name = parsed_args.name + valid_all_chars = set(string.ascii_lowercase + string.digits + '_') + valid_first_char = string.ascii_lowercase + if set(lib_name) - valid_all_chars or not lib_name or lib_name[0] not in valid_first_char: + raise CommandError( + "Invalid library name. Must only use lowercase alphanumeric " + "characters and underscore, starting with alpha.") + + charm_name = get_name_from_metadata() + if charm_name is None: + raise CommandError( + "Cannot find a valid charm name in metadata.yaml. 
Check you are in a charm " + "directory with metadata.yaml.") + + # all libraries born with API version 0 + full_name = 'charms.{}.v0.{}'.format(charm_name, lib_name) + lib_data = _get_lib_info(full_name=full_name) + lib_path = lib_data.path + if lib_path.exists(): + raise CommandError('This library already exists: {}'.format(lib_path)) + + store = Store(self.config.charmhub) + lib_id = store.create_library_id(charm_name, lib_name) + + # create the new library file from the template + env = get_templates_environment('charmlibs') + template = env.get_template('new_library.py.j2') + context = dict(lib_id=lib_id) + try: + lib_path.parent.mkdir(parents=True, exist_ok=True) + lib_path.write_text(template.render(context)) + except OSError as exc: + raise CommandError( + "Error writing the library in {}: {!r}.".format(lib_path, exc)) + + logger.info("Library %s created with id %s.", full_name, lib_id) + logger.info("Consider 'git add %s'.", lib_path) + + +class PublishLibCommand(BaseCommand): + """Publish one or more charm libraries.""" + + name = 'publish-lib' + help_msg = "Publish one or more charm libraries" + overview = textwrap.dedent(""" + Publish charm libraries. + + Upload and release in Charmhub the new api/patch version of the + indicated library, or all the charm libraries if --all is used. + + It will automatically take you through the login process if + your credentials are missing or too old. + """) + + def fill_parser(self, parser): + """Add own parameters to the general parser.""" + parser.add_argument( + 'library', nargs='?', + help="Library to publish (e.g. charms.mycharm.v2.foo.); optional, default to all") + + def run(self, parsed_args): + """Run the command.""" + charm_name = get_name_from_metadata() + if charm_name is None: + raise CommandError( + "Can't access name in 'metadata.yaml' file. The 'publish-lib' command needs to " + "be executed in a valid project's directory.") + + if parsed_args.library: + lib_data = _get_lib_info(full_name=parsed_args.library) + if not lib_data.path.exists(): + raise CommandError( + "The specified library was not found at path {}.".format(lib_data.path)) + if lib_data.charm_name != charm_name: + raise CommandError( + "The library {} does not belong to this charm {!r}.".format( + lib_data.full_name, charm_name)) + local_libs_data = [lib_data] + else: + local_libs_data = _get_libs_from_tree(charm_name) + + # check if something needs to be done + store = Store(self.config.charmhub) + to_query = [dict(lib_id=lib.lib_id, api=lib.api) for lib in local_libs_data] + libs_tips = store.get_libraries_tips(to_query) + to_publish = [] + for lib_data in local_libs_data: + logger.debug("Verifying local lib %s", lib_data) + tip = libs_tips.get((lib_data.lib_id, lib_data.api)) + logger.debug("Store tip: %s", tip) + if tip is None: + # needs to first publish + to_publish.append(lib_data) + continue + + if tip.patch > lib_data.patch: + # the store is more advanced than local + logger.info( + "Library %s is out-of-date locally, Charmhub has version %d.%d, please " + "fetch the updates before publishing.", lib_data.full_name, tip.api, tip.patch) + elif tip.patch == lib_data.patch: + # the store has same version numbers than local + if tip.content_hash == lib_data.content_hash: + logger.info("Library %s is already updated in Charmhub.", lib_data.full_name) + else: + # but shouldn't as hash is different! 
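+                    # (i.e. same LIBAPI/LIBPATCH as Charmhub but different
+                    # content: the local copy changed without bumping LIBPATCH)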
+ logger.info( + "Library %s version %d.%d is the same than in Charmhub but content is " + "different", lib_data.full_name, tip.api, tip.patch) + elif tip.patch + 1 == lib_data.patch: + # local is correctly incremented + if tip.content_hash == lib_data.content_hash: + # but shouldn't as hash is the same! + logger.info( + "Library %s LIBPATCH number was incorrectly incremented, Charmhub has the " + "same content in version %d.%d.", lib_data.full_name, tip.api, tip.patch) + else: + to_publish.append(lib_data) + else: + logger.info( + "Library %s has a wrong LIBPATCH number, it's too high, Charmhub " + "highest version is %d.%d.", lib_data.full_name, tip.api, tip.patch) + + for lib_data in to_publish: + store.create_library_revision( + lib_data.charm_name, lib_data.lib_id, lib_data.api, lib_data.patch, + lib_data.content, lib_data.content_hash) + logger.info( + "Library %s sent to the store with version %d.%d", + lib_data.full_name, lib_data.api, lib_data.patch) + + +class FetchLibCommand(BaseCommand): + """Fetch one or more charm libraries.""" + + name = 'fetch-lib' + help_msg = "Fetch one or more charm libraries" + overview = textwrap.dedent(""" + Fetch charm libraries. + + The first time a library is downloaded the command will create the needed + directories to place it, subsequent fetches will just update the local copy. + """) + + def fill_parser(self, parser): + """Add own parameters to the general parser.""" + parser.add_argument( + 'library', nargs='?', + help="Library to fetch (e.g. charms.mycharm.v2.foo.); optional, default to all") + + def run(self, parsed_args): + """Run the command.""" + if parsed_args.library: + local_libs_data = [_get_lib_info(full_name=parsed_args.library)] + else: + local_libs_data = _get_libs_from_tree() + + # get tips from the Store + store = Store(self.config.charmhub) + to_query = [] + for lib in local_libs_data: + if lib.lib_id is None: + item = dict(charm_name=lib.charm_name, lib_name=lib.lib_name) + else: + item = dict(lib_id=lib.lib_id) + item['api'] = lib.api + to_query.append(item) + libs_tips = store.get_libraries_tips(to_query) + + # check if something needs to be done + to_fetch = [] + for lib_data in local_libs_data: + logger.debug("Verifying local lib %s", lib_data) + # fix any missing lib id using the Store info + if lib_data.lib_id is None: + for tip in libs_tips.values(): + if lib_data.charm_name == tip.charm_name and lib_data.lib_name == tip.lib_name: + lib_data = lib_data._replace(lib_id=tip.lib_id) + break + + tip = libs_tips.get((lib_data.lib_id, lib_data.api)) + logger.debug("Store tip: %s", tip) + if tip is None: + logger.info("Library %s not found in Charmhub.", lib_data.full_name) + continue + + if tip.patch > lib_data.patch: + # the store has a higher version than local + to_fetch.append(lib_data) + elif tip.patch < lib_data.patch: + # the store has a lower version numbers than local + logger.info( + "Library %s has local changes, can not be updated.", lib_data.full_name) + else: + # same versions locally and in the store + if tip.content_hash == lib_data.content_hash: + logger.info( + "Library %s was already up to date in version %d.%d.", + lib_data.full_name, tip.api, tip.patch) + else: + logger.info( + "Library %s has local changes, can not be updated.", lib_data.full_name) + + for lib_data in to_fetch: + downloaded = store.get_library(lib_data.charm_name, lib_data.lib_id, lib_data.api) + if lib_data.content is None: + # locally new + lib_data.path.parent.mkdir(parents=True, exist_ok=True) + 
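+                # write the fetched content to the standard location, e.g.
+                # lib/charms/<somecharm>/v0/<somelib>.py (path was built by
+                # _get_lib_info; the names here are illustrative)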
lib_data.path.write_text(downloaded.content) + logger.info( + "Library %s version %d.%d downloaded.", + lib_data.full_name, downloaded.api, downloaded.patch) + else: + # XXX Facundo 2020-12-17: manage the case where the library was renamed + # (related GH issue: #214) + lib_data.path.write_text(downloaded.content) + logger.info( + "Library %s updated to version %d.%d.", + lib_data.full_name, downloaded.api, downloaded.patch) + + +class ListLibCommand(BaseCommand): + """List all libraries belonging to a charm.""" + + name = 'list-lib' + help_msg = "List all libraries from a charm" + overview = textwrap.dedent(""" + List all libraries from a charm. + + For each library, it will show the name and the api and patch versions + for its tip. + """) + + def fill_parser(self, parser): + """Add own parameters to the general parser.""" + parser.add_argument( + 'name', nargs='?', help=( + "The name of the charm (optional, will get the name from" + "metadata.yaml if not given)")) + + def run(self, parsed_args): + """Run the command.""" + if parsed_args.name: + charm_name = parsed_args.name + else: + charm_name = get_name_from_metadata() + if charm_name is None: + raise CommandError( + "Can't access name in 'metadata.yaml' file. The 'list-lib' command must " + "either be executed from a valid project directory, or specify a charm " + "name using the --charm-name option.") + + # get tips from the Store + store = Store(self.config.charmhub) + to_query = [{'charm_name': charm_name}] + libs_tips = store.get_libraries_tips(to_query) + + if not libs_tips: + logger.info("No libraries found for charm %s.", charm_name) + return + + headers = ['Library name', 'API', 'Patch'] + data = sorted((item.lib_name, item.api, item.patch) for item in libs_tips.values()) + + table = tabulate(data, headers=headers, tablefmt='plain', numalign='left') + for line in table.splitlines(): + logger.info(line) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/commands/store/client.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/commands/store/client.py new file mode 100644 index 0000000000000000000000000000000000000000..8bd2024a002f506db5ec208752647e64ed18a1f5 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/commands/store/client.py @@ -0,0 +1,278 @@ +# Copyright 2020-2021 Canonical Ltd. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# +# For further info, check https://github.com/canonical/charmcraft + +"""A client to hit the Store.""" + +import logging +import os +import pathlib +import platform +import webbrowser +from http.cookiejar import MozillaCookieJar + +import appdirs +import requests +from macaroonbakery import httpbakery +from requests.adapters import HTTPAdapter +from requests.exceptions import RequestException +from requests.packages.urllib3.util.retry import Retry +from requests_toolbelt import MultipartEncoder, MultipartEncoderMonitor + +from charmcraft import __version__ +from charmcraft.cmdbase import CommandError + +# set urllib3's logger to only emit errors, not warnings. Otherwise even +# retries are printed, and they're nasty. +logging.getLogger(requests.packages.urllib3.__package__).setLevel(logging.ERROR) + +logger = logging.getLogger('charmcraft.commands.store') + +TESTING_ENV_PREFIXES = ["TRAVIS", "AUTOPKGTEST_TMP"] + + +def _get_os_platform(filepath=pathlib.Path("/etc/os-release")): + """Determine a system/release combo for an OS using /etc/os-release if available.""" + system = platform.system() + release = platform.release() + machine = platform.machine() + + if system == "Linux": + os_release = {} + try: + with filepath.open("r", encoding='utf-8') as f: + for line in f: + if "=" in line: + key, value = line.rstrip().split("=", 1) + os_release[key] = value.strip('"') + except FileNotFoundError: + logger.debug("Unable to locate 'os-release' file, using default values") + finally: + system = os_release.get("NAME", system) + release = os_release.get("VERSION_ID", release) + + return "{}/{} ({})".format(system, release, machine) + + +def build_user_agent(): + """Build the charmcraft's user agent.""" + if any(key.startswith(prefix) for prefix in TESTING_ENV_PREFIXES for key in os.environ.keys()): + testing = " (testing) " + else: + testing = " " + return "charmcraft/{}{}{} python/{}".format(__version__, + testing, + _get_os_platform(), + platform.python_version()) + + +def visit_page_with_browser(visit_url): + """Open a browser so the user can validate its identity.""" + logger.warning( + "Opening an authorization web page in your browser; if it does not open, " + "please open this URL: %s", visit_url) + webbrowser.open(visit_url, new=1) + + +class _AuthHolder: + """Holder and wrapper of all authentication bits. + + Main two purposes of this class: + + - deal with credentials persistence + + - wrap HTTP calls to ensure authentication + + XXX Facundo 2020-06-18: right now for functionality bootstrapping we're storing credentials + on disk, we may move to a keyring, wallet, other solution, or firmly remain here when we + get a "security" recommendation (related: issue #52). 
+ """ + + def __init__(self): + self._cookiejar_filepath = appdirs.user_config_dir('charmcraft.credentials') + self._cookiejar = None + self._client = None + + def clear_credentials(self): + """Clear stored credentials.""" + if os.path.exists(self._cookiejar_filepath): + os.unlink(self._cookiejar_filepath) + logger.debug("Credentials cleared: file '%s' removed", self._cookiejar_filepath) + else: + logger.debug( + "Credentials file not found to be removed: '%s'", self._cookiejar_filepath) + + def _save_credentials_if_changed(self): + """Save credentials if changed.""" + if list(self._cookiejar) != self._old_cookies: + logger.debug("Saving credentials to file: '%s'", self._cookiejar_filepath) + dirpath = os.path.dirname(self._cookiejar_filepath) + os.makedirs(dirpath, exist_ok=True) + + fd = os.open(self._cookiejar_filepath, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600) + os.fchmod(fd, 0o600) + self._cookiejar.save(fd) + + def _load_credentials(self): + """Load credentials and set up internal auth request objects.""" + wbi = httpbakery.WebBrowserInteractor(open=visit_page_with_browser) + self._cookiejar = MozillaCookieJar(self._cookiejar_filepath) + self._client = httpbakery.Client(cookies=self._cookiejar, interaction_methods=[wbi]) + + if os.path.exists(self._cookiejar_filepath): + logger.debug("Loading credentials from file: '%s'", self._cookiejar_filepath) + try: + self._cookiejar.load() + except Exception as err: + # alert and continue processing (without having credentials, of course, the user + # will be asked to authenticate) + logger.warning("Failed to read credentials: %r", err) + else: + logger.debug("Credentials file not found: '%s'", self._cookiejar_filepath) + + # iterates the cookiejar (which is mutable, may change later) and get the cookies + # for comparison after hitting the endpoint + self._old_cookies = list(self._cookiejar) + + def request(self, method, url, body): + """Do a request.""" + if self._client is None: + # load everything on first usage + self._load_credentials() + + headers = {"User-Agent": build_user_agent()} + + # this request through the bakery lib will automatically catch any authentication + # problem and (if any) ask the user to authenticate and retry the original request; if + # that fails we capture it and raise a proper error + try: + resp = self._client.request(method, url, json=body, headers=headers) + except httpbakery.InteractionError as err: + raise CommandError("Authentication failure: {}".format(err)) + + self._save_credentials_if_changed() + return resp + + +def _storage_push(monitor, storage_base_url): + """Push bytes to the storage.""" + url = storage_base_url + '/unscanned-upload/' + headers = { + 'Content-Type': monitor.content_type, + 'Accept': 'application/json', + 'User-Agent': build_user_agent(), + } + retries = Retry(total=5, backoff_factor=2, status_forcelist=[500, 502, 503, 504]) + + with requests.Session() as session: + session.mount("https://", HTTPAdapter(max_retries=retries)) + + try: + response = session.post(url, headers=headers, data=monitor) + except RequestException as err: + raise CommandError("Network error when pushing file: {}({!r})".format( + err.__class__.__name__, str(err))) + + return response + + +class Client: + """Lightweight layer above _AuthHolder to present a more network oriented interface.""" + + def __init__(self, api_base_url, storage_base_url): + self._auth_client = _AuthHolder() + self.api_base_url = api_base_url.rstrip('/') + self.storage_base_url = storage_base_url.rstrip('/') + + def 
clear_credentials(self): + """Clear stored credentials.""" + self._auth_client.clear_credentials() + + def _parse_store_error(self, response): + """Get the proper error from the Store response.""" + default_msg = "Failure working with the Store: [{}] {!r}".format( + response.status_code, response.content) + try: + error_data = response.json() + except ValueError: + return default_msg + + try: + error_info = [(error['message'], error['code']) for error in error_data['error-list']] + except (KeyError, TypeError): + return default_msg + + if not error_info: + return default_msg + + messages = [] + for msg, code in error_info: + if code: + msg += " [code: {}]".format(code) + messages.append(msg) + return "Store failure! " + "; ".join(messages) + + def _hit(self, method, urlpath, body=None): + """Issue a request to the Store.""" + url = self.api_base_url + urlpath + logger.debug("Hitting the store: %s %s %s", method, url, body) + resp = self._auth_client.request(method, url, body) + if not resp.ok: + raise CommandError(self._parse_store_error(resp)) + + logger.debug("Store ok: %s", resp.status_code) + # XXX Facundo 2020-06-30: we need to wrap this .json() call, and raise UnknownError (after + # logging in debug the received raw response). This would catch weird "html" responses, + # for example, without making charmcraft crash badly. Related: issue #73. + data = resp.json() + return data + + def get(self, urlpath): + """GET something from the Store.""" + return self._hit('GET', urlpath) + + def post(self, urlpath, body): + """POST a body (json-encoded) to the Store.""" + return self._hit('POST', urlpath, body) + + def push(self, filepath): + """Push the bytes from filepath to the Storage.""" + logger.debug("Starting to push %s", filepath) + + def _progress(monitor): + # XXX Facundo 2020-07-01: use a real progress bar + if monitor.bytes_read <= monitor.len: + progress = 100 * monitor.bytes_read / monitor.len + print("Uploading... {:.2f}%\r".format(progress), end='', flush=True) + + with filepath.open('rb') as fh: + encoder = MultipartEncoder( + fields={"binary": (filepath.name, fh, "application/octet-stream")}) + + # create a monitor (so that progress can be displayed) and call the real pusher + monitor = MultipartEncoderMonitor(encoder, _progress) + response = _storage_push(monitor, self.storage_base_url) + + if not response.ok: + raise CommandError("Failure while pushing file: [{}] {!r}".format( + response.status_code, response.content)) + + result = response.json() + if not result['successful']: + raise CommandError("Server error while pushing file: {}".format(result)) + + upload_id = result['upload_id'] + logger.debug("Uploading bytes ended, id %s", upload_id) + return upload_id diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/commands/store/store.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/commands/store/store.py new file mode 100644 index 0000000000000000000000000000000000000000..05511e9fadb31237fea268d7fa38be746dff8bdf --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/commands/store/store.py @@ -0,0 +1,252 @@ +# Copyright 2020-2021 Canonical Ltd. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# For further info, check https://github.com/canonical/charmcraft + +"""The Store API handling.""" + +import logging +import time +from collections import namedtuple + +from dateutil import parser + +from charmcraft.commands.store.client import Client + +logger = logging.getLogger('charmcraft.commands.store') + +# helpers to build responses from this layer +User = namedtuple('User', 'name username userid') +Charm = namedtuple('Charm', 'name private status') +Uploaded = namedtuple('Uploaded', 'ok status revision errors') +# XXX Facundo 2020-07-23: Need to do a massive rename to call `revno` to the "revision as +# the number" inside the "revision as the structure", this gets super confusing in the code with +# time, and now it's the moment to do it (also in Release below!) +Revision = namedtuple('Revision', 'revision version created_at status errors') +Error = namedtuple('Error', 'message code') +Release = namedtuple('Release', 'revision channel expires_at') +Channel = namedtuple('Channel', 'name fallback track risk branch') +Library = namedtuple('Library', 'api content content_hash lib_id lib_name charm_name patch') + +# those statuses after upload that flag that the review ended (and if it ended successfully or not) +UPLOAD_ENDING_STATUSES = { + 'approved': True, + 'rejected': False, +} +POLL_DELAY = 1 + + +def _build_errors(item): + """Build a list of Errors from response item.""" + return [Error(message=e['message'], code=e['code']) for e in (item['errors'] or [])] + + +def _build_revision(item): + """Build a Revision from a response item.""" + rev = Revision( + revision=item['revision'], + version=item['version'], + created_at=parser.parse(item['created-at']), + status=item['status'], + errors=_build_errors(item), + ) + return rev + + +def _build_library(resp): + """Build a Library from a response.""" + lib = Library( + api=resp['api'], + content=resp.get('content'), # not always present + content_hash=resp['hash'], + lib_id=resp['library-id'], + lib_name=resp['library-name'], + charm_name=resp['charm-name'], + patch=resp['patch'], + ) + return lib + + +class Store: + """The main interface to the Store's API.""" + + def __init__(self, charmhub_config): + self._client = Client(charmhub_config.api_url, charmhub_config.storage_url) + + def login(self): + """Log in to the store. + + The login happens on every request to the Store (if current credentials were not + enough), so to trigger a new login we... + + - remove local credentials + + - exercise the simplest command regarding developer identity + """ + self._client.clear_credentials() + self._client.get('/v1/whoami') + + def logout(self): + """Log out from the store. + + There's no action really in the Store to log out; we just remove local credentials. + """ + self._client.clear_credentials() + + def whoami(self): + """Return authenticated user details.""" + response = self._client.get('/v1/whoami') + # XXX Facundo 2020-06-30: Every time we consume data from the Store (after a successful + # call) we need to wrap it with a context manager that will raise UnknownError (after + # logging in debug the received response).
This would catch API changes, for example, + # without making charmcraft crash badly. Related: issue #73. + result = User( + name=response['display-name'], + username=response['username'], + userid=response['id'], + ) + return result + + def register_name(self, name): + """Register the specified name for the authenticated user.""" + self._client.post('/v1/charm', {'name': name}) + + def list_registered_names(self): + """Return names registered by the authenticated user.""" + response = self._client.get('/v1/charm') + result = [] + for item in response['results']: + result.append(Charm(name=item['name'], private=item['private'], status=item['status'])) + return result + + def upload(self, name, filepath): + """Upload the content of filepath to the indicated charm.""" + upload_id = self._client.push(filepath) + + endpoint = '/v1/charm/{}/revisions'.format(name) + response = self._client.post(endpoint, {'upload-id': upload_id}) + status_url = response['status-url'] + logger.debug("Upload %s started, got status url %s", upload_id, status_url) + + while True: + response = self._client.get(status_url) + logger.debug("Status checked: %s", response) + + # as we're asking for a single upload_id, the response will always have only one item + (revision,) = response['revisions'] + status = revision['status'] + + if status in UPLOAD_ENDING_STATUSES: + return Uploaded( + ok=UPLOAD_ENDING_STATUSES[status], errors=_build_errors(revision), + status=status, revision=revision['revision']) + + # XXX Facundo 2020-06-30: Implement a slight backoff algorithm and fallout after + # N attempts (which should be big, as per snapcraft experience). Issue: #79. + time.sleep(POLL_DELAY) + + def list_revisions(self, name): + """Return charm revisions for the indicated charm.""" + response = self._client.get('/v1/charm/{}/revisions'.format(name)) + result = [_build_revision(item) for item in response['revisions']] + return result + + def release(self, name, revision, channels): + """Release one or more revisions for a package.""" + endpoint = '/v1/charm/{}/releases'.format(name) + items = [{'revision': revision, 'channel': channel} for channel in channels] + self._client.post(endpoint, items) + + def list_releases(self, name): + """List current releases for a package.""" + endpoint = '/v1/charm/{}/releases'.format(name) + response = self._client.get(endpoint) + + channel_map = [] + for item in response['channel-map']: + expires_at = item['expiration-date'] + if expires_at is not None: + # `datetime.datetime.fromisoformat` is available only since Py3.7 + expires_at = parser.parse(expires_at) + channel_map.append( + Release(revision=item['revision'], channel=item['channel'], expires_at=expires_at)) + + channels = [ + Channel( + name=item['name'], + fallback=item['fallback'], + track=item['track'], + risk=item['risk'], + branch=item['branch'], + ) for item in response['package']['channels']] + + revisions = [_build_revision(item) for item in response['revisions']] + + return channel_map, channels, revisions + + def create_library_id(self, charm_name, lib_name): + """Create a new library id.""" + endpoint = '/v1/charm/libraries/{}'.format(charm_name) + response = self._client.post(endpoint, {'library-name': lib_name}) + lib_id = response['library-id'] + return lib_id + + def create_library_revision(self, charm_name, lib_id, api, patch, content, content_hash): + """Create a new library revision.""" + endpoint = '/v1/charm/libraries/{}/{}'.format(charm_name, lib_id) + payload = { + 'api': api, + 'patch': patch, + 'content': 
content, + 'hash': content_hash, + } + response = self._client.post(endpoint, payload) + result = _build_library(response) + return result + + def get_library(self, charm_name, lib_id, api): + """Get the library tip by id for a given api version.""" + endpoint = '/v1/charm/libraries/{}/{}?api={}'.format(charm_name, lib_id, api) + response = self._client.get(endpoint) + result = _build_library(response) + return result + + def get_libraries_tips(self, libraries): + """Get the tip details for several libraries at once. + + Each requested library can be specified in different ways: using the library id + or the charm and library names (both will pinpoint the library), but in the latter + case the library name is optional (so all libraries for that charm will be + returned). Also, for all those cases, an API version can be specified. + """ + endpoint = '/v1/charm/libraries/bulk' + payload = [] + for lib in libraries: + if 'lib_id' in lib: + item = { + 'library-id': lib['lib_id'], + } + else: + item = { + 'charm-name': lib['charm_name'], + } + if 'lib_name' in lib: + item['library-name'] = lib['lib_name'] + if 'api' in lib: + item['api'] = lib['api'] + payload.append(item) + response = self._client.post(endpoint, payload) + libraries = response['libraries'] + result = {(item['library-id'], item['api']): _build_library(item) for item in libraries} + return result diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/commands/utils.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/commands/utils.py new file mode 100644 index 0000000000000000000000000000000000000000..3ba5da08a125b572efc7222a90ebd99655ef94fc --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/commands/utils.py @@ -0,0 +1,64 @@ +# Copyright 2020 Canonical Ltd. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License.
+# +# For further info, check https://github.com/canonical/charmcraft + +import logging +import os +from stat import S_IXUSR, S_IXGRP, S_IXOTH, S_IRUSR, S_IRGRP, S_IROTH + +import yaml +from jinja2 import Environment, PackageLoader, StrictUndefined + +logger = logging.getLogger('charmcraft.commands') + + +# handy masks for execution and reading for everybody +S_IXALL = S_IXUSR | S_IXGRP | S_IXOTH +S_IRALL = S_IRUSR | S_IRGRP | S_IROTH + + +def make_executable(fh): + """Make the open file fh executable.""" + fileno = fh.fileno() + mode = os.fstat(fileno).st_mode + mode_r = mode & S_IRALL + mode_x = mode_r >> 2 + mode = mode | mode_x + os.fchmod(fileno, mode) + + +def load_yaml(fpath): + """Return the content of a YAML file.""" + if not fpath.is_file(): + logger.debug("Couldn't find config file %s", fpath) + return + try: + with fpath.open('rb') as fh: + content = yaml.safe_load(fh) + except (yaml.error.YAMLError, OSError) as err: + logger.error("Failed to read/parse config file %s (got %r)", fpath, err) + return + return content + + +def get_templates_environment(templates_dir): + """Create and return a Jinja environment to deal with the templates.""" + env = Environment( + loader=PackageLoader('charmcraft', 'templates/{}'.format(templates_dir)), + autoescape=False, # no need to escape things here :-) + keep_trailing_newline=True, # they're not text files if they don't end in newline! + optimized=False, # optimization doesn't make sense for one-offs + undefined=StrictUndefined) # fail on undefined + return env diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/commands/version.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/commands/version.py new file mode 100644 index 0000000000000000000000000000000000000000..cdc7cff0cffc4f780bb9f70a45147cd6f8ee19bf --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/commands/version.py @@ -0,0 +1,56 @@ +# Copyright 2020-2021 Canonical Ltd. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# For further info, check https://github.com/canonical/charmcraft + +"""Infrastructure for the 'version' command.""" + +import logging + +from charmcraft import __version__ +from charmcraft.cmdbase import BaseCommand + +logger = logging.getLogger(__name__) + +_overview = """ +Show charmcraft version.
+ +The output has the following format: X.Y.Z[+N.gHASH[.dirty]] + +Where: + +- X, Y and Z are the major, minor and patch version numbers, + upgraded when a release is done + +- +N.gHASH is present if using charmcraft from the project (how many + commits after last release, and last commit's hash) + +- .dirty is present if the branch you're executing charmcraft from has + modifications + +Example: 0.3.1+40.g883455b.dirty +""" + + +class VersionCommand(BaseCommand): + """Show the charmcraft version.""" + + name = 'version' + help_msg = "Show charmcraft version" + overview = _overview + common = True + + def run(self, parsed_args): + """Run the command.""" + logger.info("%s", __version__) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/config.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/config.py new file mode 100644 index 0000000000000000000000000000000000000000..b08b4336379817ac39f0f733ad35eccc4cd03023 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/config.py @@ -0,0 +1,202 @@ +# Copyright 2020-2021 Canonical Ltd. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# For further info, check https://github.com/canonical/charmcraft + +"""Central configuration management.""" + +import pathlib +from urllib.parse import urlparse + +import attr +import jsonschema + +from charmcraft.cmdbase import CommandError +from charmcraft.utils import load_yaml + +format_checker = jsonschema.FormatChecker() + +# translator between "json" and pythonic yaml names +TYPE_TRANSLATOR = { + 'object': 'dict', + 'array': 'list', +} + + +def get_field_reference(path): + """Get a field indicator from the received path.""" + if isinstance(path[-1], int): + field = '.'.join(list(path)[:-1]) + ref = "item {} in '{}' field".format(path[-1], field) + else: + field = '.'.join(path) + ref = "'{}' field".format(field) + return ref + + +def adapt_validation_error(error): + """Take a jsonschema.ValidationError and create a proper CommandError.""" + if error.validator == 'required': + msg = "Bad charmcraft.yaml content; missing fields: {}.".format( + ', '.join(error.validator_value)) + elif error.validator == 'type': + expected_type = TYPE_TRANSLATOR.get(error.validator_value, error.validator_value) + field_ref = get_field_reference(error.absolute_path) + msg = "Bad charmcraft.yaml content; the {} must be a {}: got '{}'.".format( + field_ref, expected_type, error.instance.__class__.__name__) + elif error.validator == 'enum': + field_ref = get_field_reference(error.absolute_path) + msg = "Bad charmcraft.yaml content; the {} must be one of: {}.".format( + field_ref, ', '.join(map(repr, error.validator_value))) + elif error.validator == 'format': + field_ref = get_field_reference(error.absolute_path) + msg = "Bad charmcraft.yaml content; the {} {}: got {!r}.".format( + field_ref, error.cause, error.instance) + else: + # safe fallback + msg = error.message + + raise 
CommandError(msg) + + +@format_checker.checks('url', raises=ValueError) +def check_url(value): + """Check that the URL has at least scheme and net location.""" + if isinstance(value, str): + url = urlparse(value) + if url.scheme and url.netloc: + return True + raise ValueError("must be a full URL (e.g. 'https://some.server.com')") + + +@format_checker.checks('relative_path', raises=ValueError) +def check_relative_paths(value): + """Check that the received paths are all valid relative ones.""" + if isinstance(value, str): + # check if it's an absolute path using POSIX's '/' (not os.path.sep, as the charm's + # config is independent of the platform where charmcraft is running) + if value and value[0] != '/': + return True + raise ValueError("must be a valid relative path") + + +@attr.s(kw_only=True, frozen=True) +class CharmhubConfig: + """Configuration for all Charmhub related options.""" + + api_url = attr.ib(default='https://api.charmhub.io') + storage_url = attr.ib(default='https://storage.snapcraftcontent.com') + + @classmethod + def from_dict(cls, source): + """Build from a raw dict.""" + return cls(**source) + + +class BasicPrime(tuple): + """Hold the list of files to include, specified under parts/bundle/prime configs. + + This is an intermediate structure until we have the full Lifecycle in place. + """ + + @classmethod + def from_dict(cls, parts): + """Build from a dicts sequence.""" + prime = parts.get('bundle', {}).get('prime', []) + return cls(prime) + + +@attr.s(kw_only=True, frozen=True) +class Project: + """Configuration for all project-related options, used internally.""" + + dirpath = attr.ib(default=None) + config_provided = attr.ib(default=None) + + +@attr.s(kw_only=True, frozen=True) +class Config: + """Root of all the configuration.""" + + charmhub = attr.ib(default={}, converter=CharmhubConfig.from_dict) + parts = attr.ib(default={}, converter=BasicPrime.from_dict) + type = attr.ib(default=None) + + # this item is provided by the code itself, not the user, as convenience for the + # rest of the code + project = attr.ib(default=None) + + +CONFIG_SCHEMA = { + 'type': 'object', + 'properties': { + 'type': {'type': 'string', 'enum': ['charm', 'bundle']}, + 'charmhub': { + 'type': 'object', + 'properties': { + 'api_url': {'type': 'string', 'format': 'url'}, + 'storage_url': {'type': 'string', 'format': 'url'}, + }, + 'additionalProperties': False, + }, + 'parts': { + 'type': 'object', + 'properties': { + 'bundle': { + 'type': 'object', + 'properties': { + 'prime': { + 'type': 'array', + 'items': { + 'type': 'string', + 'format': 'relative_path', + }, + }, + }, + }, + }, + }, + }, + 'required': ['type'], + 'additionalProperties': False, +} + + +def load(dirpath): + """Load the config from charmcraft.yaml in the indicated directory.""" + if dirpath is None: + dirpath = pathlib.Path.cwd() + else: + dirpath = pathlib.Path(dirpath).expanduser().resolve() + + content = load_yaml(dirpath / 'charmcraft.yaml') + if content is None: + # configuration is mandatory only for some commands; when not provided, it will + # be initialized all with defaults (but marked as not provided for later verification) + content = {} + config_provided = False + + else: + # config provided! 
validate the loaded config is ok and mark as such + try: + jsonschema.validate( + instance=content, schema=CONFIG_SCHEMA, format_checker=format_checker) + except jsonschema.ValidationError as exc: + adapt_validation_error(exc) + config_provided = True + + # inject project's config + content['project'] = Project(dirpath=dirpath, config_provided=config_provided) + + return Config(**content) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/helptexts.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/helptexts.py new file mode 100644 index 0000000000000000000000000000000000000000..16e037d51841c2338bc09625f4efceedbb19eeeb --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/helptexts.py @@ -0,0 +1,271 @@ +# Copyright 2020 Canonical Ltd. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# For further info, check https://github.com/canonical/charmcraft + +"""Provide all help texts.""" + +import textwrap +from operator import attrgetter + +# max columns used in the terminal +TERMINAL_WIDTH = 72 + +# the summary of the whole program, already indented so it represents the real +# "columns spread", for easier human editing. +GENERAL_SUMMARY = """ + Charmcraft helps build, package and publish operators on Charmhub. + + Together with the Python Operator Framework, charmcraft simplifies + operator development and collaboration. + + See https://charmhub.io/publishing for more information. +""" +# XXX Facundo 2020-09-10: we should add an extra (separated) line to the summary with: +# See for additional documentation. +# Related issue: https://github.com/canonical/charmcraft/issues/161 + +# generic intro and outro texts +HEADER = """ +Usage: + charmcraft [help] <command> +""" + +USAGE = """\ +Usage: charmcraft [options] command [args]... +Try '{fullcommand} -h' for help. + +Error: {error_message} +""" + + +def get_usage_message(fullcommand, error_message): + """Build a usage and error message. + + The fullcommand is the command used by the user (`charmcraft`, `charmcraft build`, etc), + and the error message is the specific problem in the given parameters. + """ + return USAGE.format(fullcommand=fullcommand, error_message=error_message) + + +def _build_item(title, text, title_space): + """Show an item in the help, generically a title and a text aligned. + + The title starts in column 4 with an extra ':'. The text starts in + 4 plus the title space; if too wide it's wrapped. 
+ """ + # wrap the general text to the desired max width (discounting the space for the title, + # the first 4 spaces, the two spaces to separate title/text, and the ':' + not_title_space = 7 + text_space = TERMINAL_WIDTH - title_space - not_title_space + wrapped_lines = textwrap.wrap(text, text_space) + + # first line goes with the title at column 4 + first = " {:>{title_space}s}: {}".format(title, wrapped_lines[0], title_space=title_space) + result = [first] + + # the rest (if any) still aligned but without title + for line in wrapped_lines[1:]: + result.append(" " * (title_space + not_title_space) + line) + + return result + + +def get_full_help(command_groups, global_options): + """Produce the text for the default help. + + - command_groups: list of grouped commands, as it's defined in the main + module + + - global_options: options defined at charmcraft level (not in the commands), + with the (options, description) structure + + The help text has the following structure: + + - usage + - summary + - common commands listed and described shortly + - all commands grouped, just listed + - more help + """ + textblocks = [] + + # title + textblocks.append(HEADER) + + # summary + textblocks.append("Summary:" + GENERAL_SUMMARY) + + # column alignment is dictated by longest common commands names and groups names + max_title_len = 0 + + # collect common commands + common_commands = [] + for group, _, commands in command_groups: + max_title_len = max(len(group), max_title_len) + for cmd in commands: + if cmd.common: + common_commands.append(cmd) + max_title_len = max(len(cmd.name), max_title_len) + + for title, _ in global_options: + max_title_len = max(len(title), max_title_len) + + global_lines = ["Global options:"] + for title, text in global_options: + global_lines.extend(_build_item(title, text, max_title_len)) + textblocks.append("\n".join(global_lines)) + + common_lines = ["Starter commands:"] + for cmd in sorted(common_commands, key=attrgetter('name')): + common_lines.extend(_build_item(cmd.name, cmd.help_msg, max_title_len)) + textblocks.append("\n".join(common_lines)) + + grouped_lines = ["Commands can be classified as follows:"] + for group, _, commands in sorted(command_groups): + command_names = ", ".join(sorted(cmd.name for cmd in commands)) + grouped_lines.extend(_build_item(group, command_names, max_title_len)) + textblocks.append("\n".join(grouped_lines)) + + textblocks.append(textwrap.dedent(""" + For more information about a command, run 'charmcraft help '. + For a summary of all commands, run 'charmcraft help --all'. + """)) + + # join all stripped blocks, leaving ONE empty blank line between + text = '\n\n'.join(block.strip() for block in textblocks) + '\n' + return text + + +def get_detailed_help(command_groups, global_options): + """Produce the text for the detailed help. 
+ + - command_groups: list of grouped commands, as it's defined in the main + module + + - global_options: options defined at charmcraft level (not in the commands), + with the (options, description) structure + + The help text has the following structure: + + - usage + - summary + - global options + - all commands shown with description, grouped + - more help + """ + textblocks = [] + + # title + textblocks.append(HEADER) + + # summary + textblocks.append("Summary:" + GENERAL_SUMMARY) + + # column alignment is dictated by longest common commands names and groups names + max_title_len = 0 + for _, _, commands in command_groups: + for cmd in commands: + max_title_len = max(len(cmd.name), max_title_len) + for title, _ in global_options: + max_title_len = max(len(title), max_title_len) + + global_lines = ["Global options:"] + for title, text in global_options: + global_lines.extend(_build_item(title, text, max_title_len)) + textblocks.append("\n".join(global_lines)) + + textblocks.append("Commands can be classified as follows:") + + for _, group_description, commands in command_groups: + group_lines = ["{}:".format(group_description)] + for cmd in commands: + group_lines.extend(_build_item(cmd.name, cmd.help_msg, max_title_len)) + textblocks.append("\n".join(group_lines)) + + textblocks.append(textwrap.dedent(""" + For more information about a specific command, run 'charmcraft help <command>'. + """)) + + # join all stripped blocks, leaving ONE empty blank line between + text = '\n\n'.join(block.strip() for block in textblocks) + '\n' + return text + + +def get_command_help(command_groups, command, arguments): + """Produce the text for each command's help. + + - command_groups: list of grouped commands, as it's defined in the main + module + + - command: the instantiated command for which help is prepared + + - arguments: all command options and parameters, with the (name, description) structure + + The help text has the following structure: + + - usage + - summary + - options + - other related commands + - footer + """ + textblocks = [] + + # separate all arguments into the parameters and optional ones, just checking + # if first char is a dash + parameters = [] + options = [] + for name, title in arguments: + if name[0] == '-': + options.append((name, title)) + else: + parameters.append(name) + + textblocks.append(textwrap.dedent("""\ + Usage: + charmcraft {} [options] {} + """.format(command.name, " ".join("<{}>".format(parameter) for parameter in parameters)))) + + textblocks.append("Summary:{}".format(textwrap.indent(command.overview, ' '))) + + # column alignment is dictated by longest options title + max_title_len = max(len(title) for title, text in options) + + # command options + option_lines = ["Options:"] + for title, text in options: + option_lines.extend(_build_item(title, text, max_title_len)) + textblocks.append("\n".join(option_lines)) + + # recommend other commands of the same group + for group_name, _, command_classes in command_groups: + if group_name == command.group: + break + else: + raise RuntimeError("Internal inconsistency in commands groups") + other_command_names = [c.name for c in command_classes if not isinstance(command, c)] + if other_command_names: + see_also_block = ["See also:"] + see_also_block.extend((" " + name) for name in sorted(other_command_names)) + textblocks.append('\n'.join(see_also_block)) + + # footer + textblocks.append(""" + For a summary of all commands, run 'charmcraft help --all'. 
+ """) + + # join all stripped blocks, leaving ONE empty blank line between + text = '\n\n'.join(block.strip() for block in textblocks) + '\n' + return text diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/jujuignore.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/jujuignore.py new file mode 100644 index 0000000000000000000000000000000000000000..f0c103f76a185536b38ac948335e6393cc83396b --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/jujuignore.py @@ -0,0 +1,228 @@ +# Copyright 2020-2021 Canonical Ltd. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# For further info, check https://github.com/canonical/charmcraft + +"""Indicate which files are ignored by Juju.""" + +import logging +import re +import typing + +logger = logging.getLogger(__name__) + +KEEP = 'keep' +SKIP = 'skip' +FORCEKEEP = 'forcekeep' + +_unescapes = { + r'\!': '!', + r'\ ': ' ', + r'\#': '#', + +} + + +def _rstrip_unescaped(rule): + """Remove trailing whitespace that isn't escaped.""" + i = len(rule) - 1 + last = len(rule) + while i >= 0: + if rule[i] == '\n' or rule[i] == '\r': + last = i + elif rule[i] != ' ': + break + elif i == 0 or rule[i - 1] != '\\': + last = i + i -= 1 + rule = rule[:last] + return rule + + +def _unescape_rule(rule): + """Take out escape characters and trailing unescaped whitespace from the rule.""" + rule = rule.lstrip() + rule = _rstrip_unescaped(rule) + for old, new in _unescapes.items(): + rule = rule.replace(old, new) + return rule + + +def _rule_to_regex(rule): + """Turn a rule into a regex that we can use. + + This assumes that all the meta processing, like 'ends with /' and 'starts with !' have + already been checked. + """ + # Things we currently care about: + # * = matches only within a directory "[^/]*" + # ** = matches across directories ".*" + # ? = matches a single character + # [0-9] can match anything 0-9 + # This is taken from fnmatch.fnmatch, but that doesn't handle '**' and '*' also matches + # directories + + i, n = 0, len(rule) + res = '' + while i < n: + c = rule[i] + i += 1 + if c == '*': + if i < n and rule[i] == '*': + i += 1 + res += '.*' + else: + res += '[^/]*' + elif c == '?': + res += '[^/]' + elif c == '[': + j = i + if j < n and rule[j] == '!': + j += 1 + if j < n and rule[j] == ']': + j += 1 + while j < n and rule[j] != ']': + j += 1 + if j >= n: + res += '\\[' + else: + stuff = rule[i:j] + # Escape regex set operations (&~|). 
+ stuff = re.sub(r'([&~|])', r'\\\1', stuff) + i = j + 1 + if stuff[0] == '!': + stuff = '^' + stuff[1:] + elif stuff[0] in ('['): + stuff = '\\' + stuff + res = '%s[%s]' % (res, stuff) + elif c == '/': + # Special case of '/**/' which can match a single '/' + if i < n and rule[i] == '*' and rule[i - 1:i + 3] == '/**/': + i += 3 + res = res + '.*/' + else: + res = res + '/' + else: + res += re.escape(c) + res += r'\Z' + return res + + +class _Matcher: + """Couple a regex with other metadata for how we should match a given pattern.""" + + def __init__(self, line_num: int, orig_rule: str, invert: bool, only_dirs: bool, + regex: typing.Pattern): + self.line_num = line_num + self.orig_rule = orig_rule + self.invert = invert + self.only_dirs = only_dirs + self.compiled = re.compile(regex, re.DOTALL) + + def match(self, path: str, is_dir: bool) -> str: + """Check if a path matches. + + Returns: + Can return one of KEEP, SKIP, FORCEKEEP + """ + if self.only_dirs and not is_dir: + return KEEP + if self.compiled.match(path): + if self.invert: + return FORCEKEEP + return SKIP + return KEEP + + +class JujuIgnore: + """Track a set of ignore patterns from a .jujuignore file.""" + + def __init__(self, patterns: typing.Iterable[str]): + self._matchers = [] + self._compile_from(patterns) + + def extend_patterns(self, patterns: typing.Iterable[str]) -> None: + """Add more patterns to the ignore list.""" + self._compile_from(patterns) + + def _compile_from(self, patterns: typing.Iterable[str]): + for line_num, rule in enumerate(patterns, 1): + orig_rule = rule + rule = rule.lstrip().rstrip('\r\n') + if not rule or rule.startswith('#'): + continue + invert = False + if rule.startswith('!'): + invert = True + rule = rule.lstrip('!') + rule = _unescape_rule(rule) + only_dirs = False + if rule.endswith('/'): + only_dirs = True + rule = rule.rstrip('/') + if not rule.startswith('/'): + # A rule that doesn't start with '/' means to match any + # subdirectory + rule = '**/' + rule + regex = _rule_to_regex(rule) + m = _Matcher( + line_num=line_num, + orig_rule=orig_rule, + invert=invert, + only_dirs=only_dirs, + regex=regex, + ) + self._matchers.append(m) + logger.debug('Translated .jujuignore %d "%s" => "%s"', line_num, orig_rule, regex) + + def match(self, path: str, is_dir: bool) -> bool: + """Check if the given path should be ignored. + + Args: + path: A local path (eg /foo/bar or foo/bar) from the root directory of the project. + is_dir: Indicate whether the given path is a directory (because of special handling + from ignore files when the path ends with a '/') + Return: + A boolean indicating whether the ignore rules matched the given path (thus the path + should be ignored). + """ + if not path.startswith('/'): + path = '/' + path + keep = True + for matcher in self._matchers: + matchRes = matcher.match(path, is_dir) + if matchRes == SKIP: + keep = False + elif matchRes == FORCEKEEP: + keep = True + break + return not keep + + +# default_juju_ignore is the initial set of ignores. 
+# juju itself always includes these before adding the contents of .jujuignore +# NOTE that this diverges from Juju ignore list, which also ignores "version", +# because we need the version file to populate the store +default_juju_ignore = ''' +.git +.svn +.hg +.bzr +.tox + +/build/ +/revision + +.jujuignore +'''.split('\n') diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/logsetup.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/logsetup.py new file mode 100644 index 0000000000000000000000000000000000000000..de2cf2f1bb9c12d7a0b8d96334794d9e09445437 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/logsetup.py @@ -0,0 +1,121 @@ +# Copyright 2020 Canonical Ltd. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# For further info, check https://github.com/canonical/charmcraft + +"""Set up logging.""" + +import logging +import os +import tempfile + +from charmcraft import __version__ + +FORMATTER_SIMPLE = "%(message)s" +FORMATTER_DETAILED = "%(asctime)s %(name)-30s %(levelname)-8s %(message)s" + +_logger = logging.getLogger('charmcraft') +_logger.setLevel(logging.DEBUG) + + +class _MessageHandler: + """Handle all the messages to the user. 
+ + This class deals with several combinations of the following dimensions: + + - the mode: quiet, normal or verbose + - the output: sometimes, some messages, to the terminal; always to the file + - the execution result: what happens if succeeded, raise a controlled error, or crashed + """ + + _modes = { + 'quiet': (logging.WARNING, FORMATTER_SIMPLE), + 'normal': (logging.INFO, FORMATTER_SIMPLE), + 'verbose': (logging.DEBUG, FORMATTER_DETAILED), + } + + def __init__(self): + self._stderr_handler = logging.StreamHandler() + _logger.addHandler(self._stderr_handler) + + # autoset modes constants for simpler interface + for k in self._modes: + setattr(self, k.upper(), k) + + def init(self, initial_mode): + """Initialize internal structures; this must be done before start logging.""" + self._set_filehandler() + self.set_mode(initial_mode) + + def set_mode(self, mode): + """Set logging in different modes.""" + self.mode = mode + level, format_string = self._modes[mode] + self._stderr_handler.setFormatter(logging.Formatter(format_string)) + self._stderr_handler.setLevel(level) + if mode == self.VERBOSE: + _logger.debug("Starting charmcraft version %s", __version__) + + def _set_filehandler(self): + """Set the file handler to log everything to the temp file.""" + _, self._log_filepath = tempfile.mkstemp(prefix='charmcraft-log-') + + file_handler = logging.FileHandler(self._log_filepath) + file_handler.setFormatter(logging.Formatter(FORMATTER_DETAILED)) + file_handler.setLevel(0) # log eeeeeverything + _logger.addHandler(file_handler) + + # a logger for only the file + self._file_logger = logging.getLogger('charmcraft.guard') + self._file_logger.propagate = False + self._file_logger.addHandler(file_handler) + self._file_logger.debug("Starting charmcraft version %s", __version__) + + def ended_ok(self): + """Cleanup after successful execution.""" + os.unlink(self._log_filepath) + + def ended_interrupt(self): + """Clean up on keyboard interrupt.""" + if self.mode == self.VERBOSE: + _logger.exception("Interrupted.") + else: + _logger.error("Interrupted.") + os.unlink(self._log_filepath) + + def ended_cmderror(self, err): + """Report the (expected) problem and (maybe) logfile location.""" + if err.argsparsing: + print(err) + else: + msg = "{} (full execution logs in {})".format(err, self._log_filepath) + _logger.error(msg) + + def ended_crash(self, err): + """Report the internal error and logfile location. + + Show just the error to the user, but send the whole traceback to the log file. + """ + msg = "charmcraft internal error! {}: {} (full execution logs in {})".format( + err.__class__.__name__, err, self._log_filepath) + if self.mode == self.VERBOSE: + # both to screen and file! + _logger.exception(msg) + else: + # the error to screen and file, plus the traceback to the file + _logger.error(msg) + self._file_logger.exception('') + + +message_handler = _MessageHandler() diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/main.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/main.py new file mode 100644 index 0000000000000000000000000000000000000000..282836725d6024912e00d3e580688aef3c8035df --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/main.py @@ -0,0 +1,319 @@ +# Copyright 2020-2021 Canonical Ltd. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# For further info, check https://github.com/canonical/charmcraft + +"""Main entry point module for all the tool functionality.""" + +import argparse +import logging +import sys +from collections import namedtuple + +from charmcraft import helptexts, config +from charmcraft.commands import version, build, store, init, pack +from charmcraft.cmdbase import CommandError, BaseCommand +from charmcraft.logsetup import message_handler + +logger = logging.getLogger(__name__) + + +class HelpCommand(BaseCommand): + """Special internal command to produce help and usage messages. + + It bends the rules for parameters (we have an optional parameter without dashes); the + idea is to lower the barrier as much as possible for the user getting help. + """ + + name = 'help' + help_msg = "Provide help on charmcraft usage" + overview = "Produce a general or a detailed charmcraft help, or a specific command one." + common = True + + def fill_parser(self, parser): + """Add own parameters to the general parser.""" + parser.add_argument( + '--all', action='store_true', help="Produce an extensive help of all commands") + parser.add_argument( + 'command_to_help', nargs='?', metavar='command', + help="Produce a detailed help of the specified command") + + def run(self, parsed_args, all_commands): + """Present different help messages to the user. + + Unlike other commands, this one receives an extra parameter with all commands, + to validate if the help requested is on a valid one, or even parse its data. + """ + retcode = 0 + if parsed_args.command_to_help is None or parsed_args.command_to_help == self.name: + # help on no command in particular, get general text + help_text = get_general_help(detailed=parsed_args.all) + elif parsed_args.command_to_help not in all_commands: + # asked help on a command that doesn't exist + msg = "no such command {!r}".format(parsed_args.command_to_help) + help_text = helptexts.get_usage_message('charmcraft', msg) + retcode = 1 + else: + cmd_class, group = all_commands[parsed_args.command_to_help] + cmd = cmd_class(group, None) + parser = CustomArgumentParser(prog=cmd.name, add_help=False) + cmd.fill_parser(parser) + help_text = get_command_help(parser, cmd) + raise CommandError(help_text, argsparsing=True, retcode=retcode) + + +# Collect commands in different groups, for easier human consumption. Note that this is not +# declared in each command because it's much easier to do this separation/grouping in one +# central place and not distributed in several classes/files. Also note that order here is +# important when listing commands and showing help.
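+# Each group below is a (group name, group description, list of command classes) tuple; +# the group name is used internally by the dispatcher, while the description is what the +# detailed help texts show to the user.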
+COMMAND_GROUPS = [ + ('basic', "Basic", [ + HelpCommand, + build.BuildCommand, + pack.PackCommand, + init.InitCommand, + version.VersionCommand, + ]), + ('store', "Charmhub", [ + # auth + store.LoginCommand, store.LogoutCommand, store.WhoamiCommand, + # name handling + store.RegisterNameCommand, store.ListNamesCommand, + # pushing files and checking revisions + store.UploadCommand, store.ListRevisionsCommand, + # release process, and show status + store.ReleaseCommand, store.StatusCommand, + # libraries support + store.CreateLibCommand, store.PublishLibCommand, store.ListLibCommand, + store.FetchLibCommand, + ]), +] + + +# global options: the name used internally, its type, short and long parameters, and help text +_Global = namedtuple('Global', 'name type short_option long_option help_message') +GLOBAL_ARGS = [ + _Global('help', 'flag', '-h', '--help', "Show this help message and exit"), + _Global('verbose', 'flag', '-v', '--verbose', "Show debug information and be more verbose"), + _Global('quiet', 'flag', '-q', '--quiet', "Only show warnings and errors, not progress"), + _Global( + 'project_dir', 'option', '-p', '--project-dir', + "Specify the project's directory (defaults to current)"), +] + + +class CustomArgumentParser(argparse.ArgumentParser): + """ArgumentParser with custom error handling.""" + + def error(self, message): + """Show the usage, the error message, and no more.""" + fullcommand = "charmcraft " + self.prog + full_msg = helptexts.get_usage_message(fullcommand, message) + raise CommandError(full_msg, argsparsing=True) + + +def _get_global_options(): + """Return the global flags ready to present as options in the help messages.""" + options = [] + for arg in GLOBAL_ARGS: + options.append(("{}, {}".format(arg.short_option, arg.long_option), arg.help_message)) + return options + + +def get_command_help(parser, command): + """Produce the complete help message for a command.""" + options = _get_global_options() + + for action in parser._actions: + # store the different options if present, otherwise it's just the dest + if action.option_strings: + options.append((', '.join(action.option_strings), action.help)) + else: + options.append((action.dest, action.help)) + + help_text = helptexts.get_command_help(COMMAND_GROUPS, command, options) + return help_text + + +def get_general_help(detailed=False): + """Produce the "general charmcraft" help.""" + options = _get_global_options() + if detailed: + help_text = helptexts.get_detailed_help(COMMAND_GROUPS, options) + else: + help_text = helptexts.get_full_help(COMMAND_GROUPS, options) + return help_text + + +class Dispatcher: + """Set up infrastructure and let the needed command run. 
+ + ♪♫"Leeeeeet, the command ruuun"♪♫ https://www.youtube.com/watch?v=cv-0mmVnxPA + """ + + def __init__(self, sysargs, commands_groups): + self.commands = self._get_commands_info(commands_groups) + command_name, cmd_args, charmcraft_config = self._pre_parse_args(sysargs) + self.command, self.parsed_args = self._load_command( + command_name, cmd_args, charmcraft_config) + + def _get_commands_info(self, commands_groups): + """Process the commands groups structure for easier programmable access.""" + commands = {} + for _cmd_group, _, _cmd_classes in commands_groups: + for _cmd_class in _cmd_classes: + if _cmd_class.name in commands: + _stored_class, _ = commands[_cmd_class.name] + raise RuntimeError( + "Multiple commands with same name: {} and {}".format( + _cmd_class.__name__, _stored_class.__name__)) + commands[_cmd_class.name] = (_cmd_class, _cmd_group) + return commands + + def _load_command(self, command_name, cmd_args, charmcraft_config): + """Load a command.""" + cmd_class, group = self.commands[command_name] + cmd = cmd_class(group, charmcraft_config) + + # load and parse the command specific options/params + parser = CustomArgumentParser(prog=cmd.name) + cmd.fill_parser(parser) + parsed_args = parser.parse_args(cmd_args) + logger.debug("Command parsed sysargs: %s", parsed_args) + + return cmd, parsed_args + + def _pre_parse_args(self, sysargs): + """Pre-parse sys args. + + Several steps: + + - extract the global options and detects the possible command and its args + + - validate global options and apply them + + - validate that command is correct (NOT loading and parsing its arguments) + """ + # get all arguments (default to what's specified) and those per options, to filter sysargs + global_args = {} + arg_per_option = {} + options_with_equal = [] + for arg in GLOBAL_ARGS: + arg_per_option[arg.short_option] = arg + arg_per_option[arg.long_option] = arg + if arg.type == 'flag': + default = False + elif arg.type == 'option': + default = None + options_with_equal.append(arg.long_option + '=') + else: + raise ValueError("Bad GLOBAL_ARGS structure.") + global_args[arg.name] = default + + filtered_sysargs = [] + sysargs = iter(sysargs) + options_with_equal = tuple(options_with_equal) + for sysarg in sysargs: + if sysarg in arg_per_option: + arg = arg_per_option[sysarg] + if arg.type == 'flag': + value = True + else: + try: + value = next(sysargs) + except StopIteration: + raise CommandError("The 'project-dir' option expects one argument.") + global_args[arg.name] = value + elif sysarg.startswith(options_with_equal): + option, value = sysarg.split('=', 1) + if not value: + raise CommandError("The 'project-dir' option expects one argument.") + arg = arg_per_option[option] + global_args[arg.name] = value + else: + filtered_sysargs.append(sysarg) + + # control and use quiet/verbose options + if global_args['quiet'] and global_args['verbose']: + raise CommandError("The 'verbose' and 'quiet' options are mutually exclusive.") + if global_args['quiet']: + message_handler.set_mode(message_handler.QUIET) + elif global_args['verbose']: + message_handler.set_mode(message_handler.VERBOSE) + logger.debug("Raw pre-parsed sysargs: args=%s filtered=%s", global_args, filtered_sysargs) + + # if help requested, transform the parameters to make that explicit + if global_args['help']: + command = HelpCommand.name + cmd_args = filtered_sysargs + elif filtered_sysargs: + command = filtered_sysargs[0] + cmd_args = filtered_sysargs[1:] + if command not in self.commands: + msg = "no such command 
{!r}".format(command) + help_text = helptexts.get_usage_message('charmcraft', msg) + raise CommandError(help_text, argsparsing=True) + else: + # no command! + help_text = get_general_help() + raise CommandError(help_text, argsparsing=True) + + # load the system's config + charmcraft_config = config.load(global_args['project_dir']) + + logger.debug("General parsed sysargs: command=%r args=%s", command, cmd_args) + return command, cmd_args, charmcraft_config + + def run(self): + """Really run the command.""" + if isinstance(self.command, HelpCommand): + self.command.run(self.parsed_args, self.commands) + else: + if self.command.needs_config and not self.command.config.project.config_provided: + raise CommandError( + "The specified command needs a valid 'charmcraft.yaml' configuration file (in " + "the current directory or where specified with --project-dir option); see " + "the reference: https://discourse.charmhub.io/t/charmcraft-configuration/4138") + self.command.run(self.parsed_args) + + +def main(argv=None): + """Provide the main entry point.""" + message_handler.init(message_handler.NORMAL) + + if argv is None: + argv = sys.argv + + # process + try: + dispatcher = Dispatcher(argv[1:], COMMAND_GROUPS) + dispatcher.run() + except CommandError as err: + message_handler.ended_cmderror(err) + retcode = err.retcode + except KeyboardInterrupt: + message_handler.ended_interrupt() + retcode = 1 + except Exception as err: + message_handler.ended_crash(err) + retcode = 1 + else: + message_handler.ended_ok() + retcode = 0 + + return retcode + + +if __name__ == '__main__': + sys.exit(main(sys.argv)) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/templates/charmlibs/new_library.py.j2 b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/templates/charmlibs/new_library.py.j2 new file mode 100644 index 0000000000000000000000000000000000000000..4a7da958b28af51c10c52ec890ef5996c5b1c6d0 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/templates/charmlibs/new_library.py.j2 @@ -0,0 +1,29 @@ +"""TODO: Add a proper docstring here. + +This is a placeholder docstring for this charm library. Docstrings are +presented on Charmhub and updated whenever you push a new version of the +library. + +See `charmcraft push-lib` and `charmcraft fetch-lib` for details of how to +share and consume charm libraries. They serve to enhance collaboration +between charmers. Use a charmer's libraries for classes that handle +integration with their charm. + +Bear in mind that new revisions of the different major API versions (v0, v1, +v2 etc) are maintained independently. You can continue to update v0 and v1 +after you have pushed v3. + +Markdown is supported, following the CommonMark specification. +""" + +# The unique Charmhub library identifier, never change it +LIBID = "{{ lib_id }}" + +# Increment this major API version when introducing breaking changes +LIBAPI = 0 + +# Increment this PATCH version before using `charmcraft push-lib` or reset +# to 0 if you are raising the major API version +LIBPATCH = 1 + +# TODO: add your code here! Happy coding! 
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/templates/init/.flake8.j2 b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/templates/init/.flake8.j2 new file mode 100644 index 0000000000000000000000000000000000000000..8ef84fcd43f3b7a46768c31b20f36cab48ffdfe0 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/templates/init/.flake8.j2 @@ -0,0 +1,9 @@ +[flake8] +max-line-length = 99 +select: E,W,F,C,N +exclude: + venv + .git + build + dist + *.egg_info diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/templates/init/.jujuignore.j2 b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/templates/init/.jujuignore.j2 new file mode 100644 index 0000000000000000000000000000000000000000..6ccd559eabeae93e4d23215fa450130fa9b37ace --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/templates/init/.jujuignore.j2 @@ -0,0 +1,3 @@ +/venv +*.py[cod] +*.charm diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/templates/init/LICENSE.j2 b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/templates/init/LICENSE.j2 new file mode 100644 index 0000000000000000000000000000000000000000..d645695673349e3947e8e5ae42332d0ac3164cd7 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/templates/init/LICENSE.j2 @@ -0,0 +1,202 @@ + + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). 
+ + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. 
You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. 
In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright [yyyy] [name of copyright owner] + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. 
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/templates/init/README.md.j2 b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/templates/init/README.md.j2 new file mode 100644 index 0000000000000000000000000000000000000000..a1a54301787085d192da3c1908d3a312794eab4c --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/templates/init/README.md.j2 @@ -0,0 +1,25 @@ +# {{ name }} + +## Description + +TODO: Describe your charm in a few paragraphs of Markdown + +## Usage + +TODO: Provide high-level usage, such as required config or relations + + +## Developing + +Create and activate a virtualenv with the development requirements: + + virtualenv -p python3 venv + source venv/bin/activate + pip install -r requirements-dev.txt + +## Testing + +The Python operator framework includes a very nice harness for testing +operator behaviour without full deployment. Just `run_tests`: + + ./run_tests diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/templates/init/actions.yaml.j2 b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/templates/init/actions.yaml.j2 new file mode 100644 index 0000000000000000000000000000000000000000..98b14cea956651115a3851241e1e8737bd3500d2 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/templates/init/actions.yaml.j2 @@ -0,0 +1,14 @@ +# Copyright {{ year }} {{ author }} +# See LICENSE file for licensing details. +# +# This is only an example, and you should edit to suit your needs. +# If you don't need actions, you can remove the file entirely. +# It ties in to the example _on_fortune_action handler in src/charm.py +# +# fortune: +# description: Returns a pithy phrase. +# params: +# fail: +# description: "Fail with this message" +# type: string +# default: "" diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/templates/init/config.yaml.j2 b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/templates/init/config.yaml.j2 new file mode 100644 index 0000000000000000000000000000000000000000..5f47d30fc928b54fbc5ad314ab9dd086302a43c1 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/templates/init/config.yaml.j2 @@ -0,0 +1,11 @@ +# Copyright {{ year }} {{ author }} +# See LICENSE file for licensing details. +# +# This is only an example, and you should edit to suit your needs. +# If you don't need config, you can remove the file entirely. +# +# options: +# thing: +# default: 🎁 +# description: A thing used by the charm. +# type: string diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/templates/init/metadata.yaml.j2 b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/templates/init/metadata.yaml.j2 new file mode 100644 index 0000000000000000000000000000000000000000..c26951c23c21059d96eedd51d7db7268c3aa08f3 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/templates/init/metadata.yaml.j2 @@ -0,0 +1,9 @@ +# Copyright {{ year }} {{ author }} +# See LICENSE file for licensing details. 
+name: {{ name }} +description: | + TODO: fill out the charm's description +summary: | + TODO: fill out the charm's summary +series: {{ series }} + diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/templates/init/requirements-dev.txt.j2 b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/templates/init/requirements-dev.txt.j2 new file mode 100644 index 0000000000000000000000000000000000000000..4f2a3f5bcd8cf79122b586c767c812b481079593 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/templates/init/requirements-dev.txt.j2 @@ -0,0 +1,3 @@ +-r requirements.txt +coverage +flake8 diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/templates/init/requirements.txt.j2 b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/templates/init/requirements.txt.j2 new file mode 100644 index 0000000000000000000000000000000000000000..2d81d3bb6fea804d1db7a1549d67244b513aa145 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/templates/init/requirements.txt.j2 @@ -0,0 +1 @@ +ops diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/templates/init/run_tests.j2 b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/templates/init/run_tests.j2 new file mode 100755 index 0000000000000000000000000000000000000000..da70cb36f2fbbac9264aa89430d571532bebf9ec --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/templates/init/run_tests.j2 @@ -0,0 +1,17 @@ +#!/bin/sh -e +# Copyright {{ year }} {{ author }} +# See LICENSE file for licensing details. + +if [ -z "$VIRTUAL_ENV" -a -d venv/ ]; then + . venv/bin/activate +fi + +if [ -z "$PYTHONPATH" ]; then + export PYTHONPATH=src +else + export PYTHONPATH="src:$PYTHONPATH" +fi + +flake8 +coverage run --source=src -m unittest -v "$@" +coverage report -m diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/templates/init/src/charm.py.j2 b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/templates/init/src/charm.py.j2 new file mode 100644 index 0000000000000000000000000000000000000000..d7ea6197e1570a2c1ac87b915c861db91e26fcb8 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/templates/init/src/charm.py.j2 @@ -0,0 +1,46 @@ +#!/usr/bin/env python3 +# Copyright {{ year }} {{ author }} +# See LICENSE file for licensing details. 
+ +"""Charm the service.""" + +import logging + +from ops.charm import CharmBase +from ops.main import main +from ops.framework import StoredState + +logger = logging.getLogger(__name__) + + +class {{ class_name }}(CharmBase): + """Charm the service.""" + + _stored = StoredState() + + def __init__(self, *args): + super().__init__(*args) + self.framework.observe(self.on.config_changed, self._on_config_changed) + self.framework.observe(self.on.fortune_action, self._on_fortune_action) + self._stored.set_default(things=[]) + + def _on_config_changed(self, _): + # Note: you need to uncomment the example in the config.yaml file for this to work (ensure + # to not just leave the example, but adapt to your configuration needs) + current = self.config["thing"] + if current not in self._stored.things: + logger.debug("found a new thing: %r", current) + self._stored.things.append(current) + + def _on_fortune_action(self, event): + # Note: you need to uncomment the example in the actions.yaml file for this to work (ensure + # to not just leave the example, but adapt to your needs for actions commands) + fail = event.params["fail"] + if fail: + event.fail(fail) + else: + event.set_results({"fortune": "A bug in the code is worth two in the documentation."}) + + +if __name__ == "__main__": + main({{ class_name }}) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/templates/init/tests/__init__.py.j2 b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/templates/init/tests/__init__.py.j2 new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/templates/init/tests/test_charm.py.j2 b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/templates/init/tests/test_charm.py.j2 new file mode 100644 index 0000000000000000000000000000000000000000..e0348ff32d7f1cf8d8201dd2499f9feb87cc973c --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/templates/init/tests/test_charm.py.j2 @@ -0,0 +1,35 @@ +# Copyright {{ year }} {{ author }} +# See LICENSE file for licensing details. + +import unittest +from unittest.mock import Mock + +from ops.testing import Harness +from charm import {{ class_name }} + + +class TestCharm(unittest.TestCase): + def test_config_changed(self): + harness = Harness({{ class_name }}) + self.addCleanup(harness.cleanup) + harness.begin() + self.assertEqual(list(harness.charm._stored.things), []) + harness.update_config({"thing": "foo"}) + self.assertEqual(list(harness.charm._stored.things), ["foo"]) + + def test_action(self): + harness = Harness({{ class_name }}) + harness.begin() + # the harness doesn't (yet!) 
help much with actions themselves + action_event = Mock(params={"fail": ""}) + harness.charm._on_fortune_action(action_event) + + self.assertTrue(action_event.set_results.called) + + def test_action_fail(self): + harness = Harness({{ class_name }}) + harness.begin() + action_event = Mock(params={"fail": "fail this"}) + harness.charm._on_fortune_action(action_event) + + self.assertEqual(action_event.fail.call_args, [("fail this",)]) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/utils.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/utils.py new file mode 100644 index 0000000000000000000000000000000000000000..8dc03a5b773040f0048a654b2a5c752f3d6cc41d --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/utils.py @@ -0,0 +1,66 @@ +# Copyright 2020-2021 Canonical Ltd. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# For further info, check https://github.com/canonical/charmcraft + +"""Collection of utilities for charmcraft.""" + +import logging +import os +from stat import S_IXUSR, S_IXGRP, S_IXOTH, S_IRUSR, S_IRGRP, S_IROTH + +import yaml +from jinja2 import Environment, PackageLoader, StrictUndefined + +logger = logging.getLogger('charmcraft.commands') + + +# handy masks for execution and reading for everybody +S_IXALL = S_IXUSR | S_IXGRP | S_IXOTH +S_IRALL = S_IRUSR | S_IRGRP | S_IROTH + + +def make_executable(fh): + """Make open file fh executable.""" + fileno = fh.fileno() + mode = os.fstat(fileno).st_mode + mode_r = mode & S_IRALL + mode_x = mode_r >> 2 + mode = mode | mode_x + os.fchmod(fileno, mode) + + +def load_yaml(fpath): + """Return the content of a YAML file.""" + if not fpath.is_file(): + logger.debug("Couldn't find config file %s", fpath) + return + try: + with fpath.open('rb') as fh: + content = yaml.safe_load(fh) + except (yaml.error.YAMLError, OSError) as err: + logger.error("Failed to read/parse config file %s: %r", fpath, err) + return + return content + + +def get_templates_environment(templates_dir): + """Create and return a Jinja environment to deal with the templates.""" + env = Environment( + loader=PackageLoader('charmcraft', 'templates/{}'.format(templates_dir)), + autoescape=False, # no need to escape things here :-) + keep_trailing_newline=True, # they're not text files if they don't end in newline! 
+ optimized=False, # optimization doesn't make sense for one-offs + undefined=StrictUndefined) # fail on undefined + return env diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/version.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/version.py new file mode 100644 index 0000000000000000000000000000000000000000..eac7dc5ef0c3dfd7cd8e9297ce0ba6bc8521ea95 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/charmcraft/version.py @@ -0,0 +1,4 @@ + +# this is a generated file + +version = '0.8.1' diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/colorama-0.4.3.dist-info/AUTHORS.txt b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/colorama-0.4.3.dist-info/AUTHORS.txt new file mode 100644 index 0000000000000000000000000000000000000000..72c87d7d38ae7bf859717c333a5ee8230f6ce624 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/colorama-0.4.3.dist-info/AUTHORS.txt @@ -0,0 +1,562 @@ +A_Rog +Aakanksha Agrawal <11389424+rasponic@users.noreply.github.com> +Abhinav Sagar <40603139+abhinavsagar@users.noreply.github.com> +ABHYUDAY PRATAP SINGH +abs51295 +AceGentile +Adam Chainz +Adam Tse +Adam Tse +Adam Wentz +admin +Adrien Morison +ahayrapetyan +Ahilya +AinsworthK +Akash Srivastava +Alan Yee +Albert Tugushev +Albert-Guan +albertg +Aleks Bunin +Alethea Flowers +Alex Gaynor +Alex Grönholm +Alex Loosley +Alex Morega +Alex Stachowiak +Alexander Shtyrov +Alexandre Conrad +Alexey Popravka +Alexey Popravka +Alli +Ami Fischman +Ananya Maiti +Anatoly Techtonik +Anders Kaseorg +Andreas Lutro +Andrei Geacar +Andrew Gaul +Andrey Bulgakov +Andrés Delfino <34587441+andresdelfino@users.noreply.github.com> +Andrés Delfino +Andy Freeland +Andy Freeland +Andy Kluger +Ani Hayrapetyan +Aniruddha Basak +Anish Tambe +Anrs Hu +Anthony Sottile +Antoine Musso +Anton Ovchinnikov +Anton Patrushev +Antonio Alvarado Hernandez +Antony Lee +Antti Kaihola +Anubhav Patel +Anuj Godase +AQNOUCH Mohammed +AraHaan +Arindam Choudhury +Armin Ronacher +Artem +Ashley Manton +Ashwin Ramaswami +atse +Atsushi Odagiri +Avner Cohen +Baptiste Mispelon +Barney Gale +barneygale +Bartek Ogryczak +Bastian Venthur +Ben Darnell +Ben Hoyt +Ben Rosser +Bence Nagy +Benjamin Peterson +Benjamin VanEvery +Benoit Pierre +Berker Peksag +Bernardo B. Marques +Bernhard M. Wiedemann +Bertil Hatt +Bogdan Opanchuk +BorisZZZ +Brad Erickson +Bradley Ayers +Brandon L. 
Reiss +Brandt Bucher +Brett Randall +Brian Cristante <33549821+brcrista@users.noreply.github.com> +Brian Cristante +Brian Rosner +BrownTruck +Bruno Oliveira +Bruno Renié +Bstrdsmkr +Buck Golemon +burrows +Bussonnier Matthias +c22 +Caleb Martinez +Calvin Smith +Carl Meyer +Carlos Liam +Carol Willing +Carter Thayer +Cass +Chandrasekhar Atina +Chih-Hsuan Yen +Chih-Hsuan Yen +Chris Brinker +Chris Hunt +Chris Jerdonek +Chris McDonough +Chris Wolfe +Christian Heimes +Christian Oudard +Christopher Hunt +Christopher Snyder +Clark Boylan +Clay McClure +Cody +Cody Soyland +Colin Watson +Connor Osborn +Cooper Lees +Cooper Ry Lees +Cory Benfield +Cory Wright +Craig Kerstiens +Cristian Sorinel +Curtis Doty +cytolentino +Damian Quiroga +Dan Black +Dan Savilonis +Dan Sully +daniel +Daniel Collins +Daniel Hahler +Daniel Holth +Daniel Jost +Daniel Shaulov +Daniele Esposti +Daniele Procida +Danny Hermes +Dav Clark +Dave Abrahams +Dave Jones +David Aguilar +David Black +David Bordeynik +David Bordeynik +David Caro +David Evans +David Linke +David Pursehouse +David Tucker +David Wales +Davidovich +derwolfe +Desetude +Diego Caraballo +DiegoCaraballo +Dmitry Gladkov +Domen Kožar +Donald Stufft +Dongweiming +Douglas Thor +DrFeathers +Dustin Ingram +Dwayne Bailey +Ed Morley <501702+edmorley@users.noreply.github.com> +Ed Morley +Eitan Adler +ekristina +elainechan +Eli Schwartz +Eli Schwartz +Emil Burzo +Emil Styrke +Endoh Takanao +enoch +Erdinc Mutlu +Eric Gillingham +Eric Hanchrow +Eric Hopper +Erik M. Bray +Erik Rose +Ernest W Durbin III +Ernest W. Durbin III +Erwin Janssen +Eugene Vereshchagin +everdimension +Felix Yan +fiber-space +Filip Kokosiński +Florian Briand +Florian Rathgeber +Francesco +Francesco Montesano +Frost Ming +Gabriel Curio +Gabriel de Perthuis +Garry Polley +gdanielson +Geoffrey Lehée +Geoffrey Sneddon +George Song +Georgi Valkov +Giftlin Rajaiah +gizmoguy1 +gkdoc <40815324+gkdoc@users.noreply.github.com> +Gopinath M <31352222+mgopi1990@users.noreply.github.com> +GOTO Hayato <3532528+gh640@users.noreply.github.com> +gpiks +Guilherme Espada +Guy Rozendorn +gzpan123 +Hanjun Kim +Hari Charan +Harsh Vardhan +Herbert Pfennig +Hsiaoming Yang +Hugo +Hugo Lopes Tavares +Hugo van Kemenade +hugovk +Hynek Schlawack +Ian Bicking +Ian Cordasco +Ian Lee +Ian Stapleton Cordasco +Ian Wienand +Ian Wienand +Igor Kuzmitshov +Igor Sobreira +Ilya Baryshev +INADA Naoki +Ionel Cristian Mărieș +Ionel Maries Cristian +Ivan Pozdeev +Jacob Kim +jakirkham +Jakub Stasiak +Jakub Vysoky +Jakub Wilk +James Cleveland +James Cleveland +James Firth +James Polley +Jan Pokorný +Jannis Leidel +jarondl +Jason R. 
Coombs +Jay Graves +Jean-Christophe Fillion-Robin +Jeff Barber +Jeff Dairiki +Jelmer Vernooij +jenix21 +Jeremy Stanley +Jeremy Zafran +Jiashuo Li +Jim Garrison +Jivan Amara +John Paton +John-Scott Atlakson +johnthagen +johnthagen +Jon Banafato +Jon Dufresne +Jon Parise +Jonas Nockert +Jonathan Herbert +Joost Molenaar +Jorge Niedbalski +Joseph Long +Josh Bronson +Josh Hansen +Josh Schneier +Juanjo Bazán +Julian Berman +Julian Gethmann +Julien Demoor +jwg4 +Jyrki Pulliainen +Kai Chen +Kamal Bin Mustafa +kaustav haldar +keanemind +Keith Maxwell +Kelsey Hightower +Kenneth Belitzky +Kenneth Reitz +Kenneth Reitz +Kevin Burke +Kevin Carter +Kevin Frommelt +Kevin R Patterson +Kexuan Sun +Kit Randel +kpinc +Krishna Oza +Kumar McMillan +Kyle Persohn +lakshmanaram +Laszlo Kiss-Kollar +Laurent Bristiel +Laurie Opperman +Leon Sasson +Lev Givon +Lincoln de Sousa +Lipis +Loren Carvalho +Lucas Cimon +Ludovic Gasc +Luke Macken +Luo Jiebin +luojiebin +luz.paz +László Kiss Kollár +László Kiss Kollár +Marc Abramowitz +Marc Tamlyn +Marcus Smith +Mariatta +Mark Kohler +Mark Williams +Mark Williams +Markus Hametner +Masaki +Masklinn +Matej Stuchlik +Mathew Jennings +Mathieu Bridon +Matt Good +Matt Maker +Matt Robenolt +matthew +Matthew Einhorn +Matthew Gilliard +Matthew Iversen +Matthew Trumbell +Matthew Willson +Matthias Bussonnier +mattip +Maxim Kurnikov +Maxime Rouyrre +mayeut +mbaluna <44498973+mbaluna@users.noreply.github.com> +mdebi <17590103+mdebi@users.noreply.github.com> +memoselyk +Michael +Michael Aquilina +Michael E. Karpeles +Michael Klich +Michael Williamson +michaelpacer +Mickaël Schoentgen +Miguel Araujo Perez +Mihir Singh +Mike +Mike Hendricks +Min RK +MinRK +Miro Hrončok +Monica Baluna +montefra +Monty Taylor +Nate Coraor +Nathaniel J. Smith +Nehal J Wani +Neil Botelho +Nick Coghlan +Nick Stenning +Nick Timkovich +Nicolas Bock +Nikhil Benesch +Nitesh Sharma +Nowell Strite +NtaleGrey +nvdv +Ofekmeister +ofrinevo +Oliver Jeeves +Oliver Tonnhofer +Olivier Girardot +Olivier Grisel +Ollie Rutherfurd +OMOTO Kenji +Omry Yadan +Oren Held +Oscar Benjamin +Oz N Tiram +Pachwenko <32424503+Pachwenko@users.noreply.github.com> +Patrick Dubroy +Patrick Jenkins +Patrick Lawson +patricktokeeffe +Patrik Kopkan +Paul Kehrer +Paul Moore +Paul Nasrat +Paul Oswald +Paul van der Linden +Paulus Schoutsen +Pavithra Eswaramoorthy <33131404+QueenCoffee@users.noreply.github.com> +Pawel Jasinski +Pekka Klärck +Peter Lisák +Peter Waller +petr-tik +Phaneendra Chiruvella +Phil Freo +Phil Pennock +Phil Whelan +Philip Jägenstedt +Philip Molloy +Philippe Ombredanne +Pi Delport +Pierre-Yves Rofes +pip +Prabakaran Kumaresshan +Prabhjyotsing Surjit Singh Sodhi +Prabhu Marappan +Pradyun Gedam +Pratik Mallya +Preet Thakkar +Preston Holmes +Przemek Wrzos +Pulkit Goyal <7895pulkit@gmail.com> +Qiangning Hong +Quentin Pradet +R. David Murray +Rafael Caricio +Ralf Schmitt +Razzi Abuissa +rdb +Remi Rampin +Remi Rampin +Rene Dudfield +Riccardo Magliocchetti +Richard Jones +RobberPhex +Robert Collins +Robert McGibbon +Robert T. 
McGibbon +robin elisha robinson +Roey Berman +Rohan Jain +Rohan Jain +Rohan Jain +Roman Bogorodskiy +Romuald Brunet +Ronny Pfannschmidt +Rory McCann +Ross Brattain +Roy Wellington Ⅳ +Roy Wellington Ⅳ +Ryan Wooden +ryneeverett +Sachi King +Salvatore Rinchiera +Savio Jomton +schlamar +Scott Kitterman +Sean +seanj +Sebastian Jordan +Sebastian Schaetz +Segev Finer +SeongSoo Cho +Sergey Vasilyev +Seth Woodworth +Shlomi Fish +Shovan Maity +Simeon Visser +Simon Cross +Simon Pichugin +sinoroc +Sorin Sbarnea +Stavros Korokithakis +Stefan Scherfke +Stephan Erb +stepshal +Steve (Gadget) Barnes +Steve Barnes +Steve Dower +Steve Kowalik +Steven Myint +stonebig +Stéphane Bidoul (ACSONE) +Stéphane Bidoul +Stéphane Klein +Sumana Harihareswara +Sviatoslav Sydorenko +Sviatoslav Sydorenko +Swat009 +Takayuki SHIMIZUKAWA +tbeswick +Thijs Triemstra +Thomas Fenzl +Thomas Grainger +Thomas Guettler +Thomas Johansson +Thomas Kluyver +Thomas Smith +Tim D. Smith +Tim Gates +Tim Harder +Tim Heap +tim smith +tinruufu +Tom Forbes +Tom Freudenheim +Tom V +Tomas Orsava +Tomer Chachamu +Tony Beswick +Tony Zhaocheng Tan +TonyBeswick +toonarmycaptain +Toshio Kuratomi +Travis Swicegood +Tzu-ping Chung +Valentin Haenel +Victor Stinner +victorvpaulo +Viktor Szépe +Ville Skyttä +Vinay Sajip +Vincent Philippon +Vinicyus Macedo <7549205+vinicyusmacedo@users.noreply.github.com> +Vitaly Babiy +Vladimir Rutsky +W. Trevor King +Wil Tan +Wilfred Hughes +William ML Leslie +William T Olson +Wilson Mo +wim glenn +Wolfgang Maier +Xavier Fernandez +Xavier Fernandez +xoviat +xtreak +YAMAMOTO Takashi +Yen Chi Hsuan +Yeray Diaz Diaz +Yoval P +Yu Jian +Yuan Jing Vincent Yan +Zearin +Zearin +Zhiping Deng +Zvezdan Petkovic +Łukasz Langa +Семён Марьясин diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/colorama-0.4.3.dist-info/INSTALLER b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/colorama-0.4.3.dist-info/INSTALLER new file mode 100644 index 0000000000000000000000000000000000000000..a1b589e38a32041e49332e5e81c2d363dc418d68 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/colorama-0.4.3.dist-info/INSTALLER @@ -0,0 +1 @@ +pip diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/colorama-0.4.3.dist-info/LICENSE.txt b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/colorama-0.4.3.dist-info/LICENSE.txt new file mode 100644 index 0000000000000000000000000000000000000000..737fec5c5352af3d9a6a47a0670da4bdb52c5725 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/colorama-0.4.3.dist-info/LICENSE.txt @@ -0,0 +1,20 @@ +Copyright (c) 2008-2019 The pip developers (see AUTHORS.txt file) + +Permission is hereby granted, free of charge, to any person obtaining +a copy of this software and associated documentation files (the +"Software"), to deal in the Software without restriction, including +without limitation the rights to use, copy, modify, merge, publish, +distribute, sublicense, and/or sell copies of the Software, and to +permit persons to whom the Software is furnished to do so, subject to +the following conditions: + +The above copyright notice and this permission notice shall be +included in all copies or substantial portions of the Software. 
+ +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE +LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION +OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION +WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/colorama-0.4.3.dist-info/METADATA b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/colorama-0.4.3.dist-info/METADATA new file mode 100644 index 0000000000000000000000000000000000000000..c455cb515632088458de0546ddad104e029b64ba --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/colorama-0.4.3.dist-info/METADATA @@ -0,0 +1,411 @@ +Metadata-Version: 2.1 +Name: colorama +Version: 0.4.3 +Summary: Cross-platform colored terminal text. +Home-page: https://github.com/tartley/colorama +Author: Jonathan Hartley +Author-email: tartley@tartley.com +Maintainer: Arnon Yaari +License: BSD +Keywords: color colour terminal text ansi windows crossplatform xplatform +Platform: UNKNOWN +Classifier: Development Status :: 5 - Production/Stable +Classifier: Environment :: Console +Classifier: Intended Audience :: Developers +Classifier: License :: OSI Approved :: BSD License +Classifier: Operating System :: OS Independent +Classifier: Programming Language :: Python +Classifier: Programming Language :: Python :: 2 +Classifier: Programming Language :: Python :: 2.7 +Classifier: Programming Language :: Python :: 3 +Classifier: Programming Language :: Python :: 3.5 +Classifier: Programming Language :: Python :: 3.6 +Classifier: Programming Language :: Python :: 3.7 +Classifier: Programming Language :: Python :: 3.8 +Classifier: Programming Language :: Python :: Implementation :: CPython +Classifier: Programming Language :: Python :: Implementation :: PyPy +Classifier: Topic :: Terminals +Requires-Python: >=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.* + +.. image:: https://img.shields.io/pypi/v/colorama.svg + :target: https://pypi.org/project/colorama/ + :alt: Latest Version + +.. image:: https://img.shields.io/pypi/pyversions/colorama.svg + :target: https://pypi.org/project/colorama/ + :alt: Supported Python versions + +.. image:: https://travis-ci.org/tartley/colorama.svg?branch=master + :target: https://travis-ci.org/tartley/colorama + :alt: Build Status + +Download and docs: + https://pypi.org/project/colorama/ +Source code & Development: + https://github.com/tartley/colorama +Colorama for Enterprise: + https://github.com/tartley/colorama/blob/master/ENTERPRISE.md + +Description +=========== + +Makes ANSI escape character sequences (for producing colored terminal text and +cursor positioning) work under MS Windows. + +ANSI escape character sequences have long been used to produce colored terminal +text and cursor positioning on Unix and Macs. Colorama makes this work on +Windows, too, by wrapping ``stdout``, stripping ANSI sequences it finds (which +would appear as gobbledygook in the output), and converting them into the +appropriate win32 calls to modify the state of the terminal. On other platforms, +Colorama does nothing. 
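To make the "stripping" half of that behaviour concrete, here is a standalone sketch (not colorama's public API) built around the same Control Sequence Introducer regex that ``ansitowin32.py``, later in this diff, compiles as ``ANSI_CSI_RE``:

    import re

    # CSI pattern as found in colorama/ansitowin32.py
    ANSI_CSI_RE = re.compile('\001?\033\\[((?:\\d|;)*)([a-zA-Z])\002?')

    def strip_csi(text):
        # drop color/cursor escape sequences, keep the printable text
        return ANSI_CSI_RE.sub('', text)

    print(strip_csi('\033[31m' + 'some red text' + '\033[39m'))  # -> some red text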
+
+Colorama also provides some shortcuts to help generate ANSI sequences
+but works fine in conjunction with any other ANSI sequence generation library,
+such as the venerable Termcolor (https://pypi.org/project/termcolor/)
+or the fabulous Blessings (https://pypi.org/project/blessings/).
+
+This has the upshot of providing a simple cross-platform API for printing
+colored terminal text from Python, and has the happy side-effect that existing
+applications or libraries which use ANSI sequences to produce colored output on
+Linux or Macs can now also work on Windows, simply by calling
+``colorama.init()``.
+
+An alternative approach is to install ``ansi.sys`` on Windows machines, which
+provides the same behaviour for all applications running in terminals. Colorama
+is intended for situations where that isn't easy (e.g., maybe your app doesn't
+have an installer.)
+
+Demo scripts in the source code repository print some colored text using
+ANSI sequences. Compare their output under Gnome-terminal's built-in ANSI
+handling, versus on Windows Command-Prompt using Colorama:
+
+.. image:: https://github.com/tartley/colorama/raw/master/screenshots/ubuntu-demo.png
+    :width: 661
+    :height: 357
+    :alt: ANSI sequences on Ubuntu under gnome-terminal.
+
+.. image:: https://github.com/tartley/colorama/raw/master/screenshots/windows-demo.png
+    :width: 668
+    :height: 325
+    :alt: Same ANSI sequences on Windows, using Colorama.
+
+These screengrabs show that, on Windows, Colorama does not support ANSI 'dim
+text'; it looks the same as 'normal text'.
+
+
+License
+=======
+
+Copyright Jonathan Hartley & Arnon Yaari, 2013. BSD 3-Clause license; see LICENSE file.
+
+
+Dependencies
+============
+
+None, other than Python. Tested on Python 2.7, 3.5, 3.6, 3.7 and 3.8.
+
+Usage
+=====
+
+Initialisation
+--------------
+
+Applications should initialise Colorama using:
+
+.. code-block:: python
+
+    from colorama import init
+    init()
+
+On Windows, calling ``init()`` will filter ANSI escape sequences out of any
+text sent to ``stdout`` or ``stderr``, and replace them with equivalent Win32
+calls.
+
+On other platforms, calling ``init()`` has no effect (unless you request other
+optional functionality; see "Init Keyword Args", below). By design, this permits
+applications to call ``init()`` unconditionally on all platforms, after which
+ANSI output should just work.
+
+To stop using colorama before your program exits, simply call ``deinit()``.
+This will restore ``stdout`` and ``stderr`` to their original values, so that
+Colorama is disabled. To resume using Colorama again, call ``reinit()``; it is
+cheaper than calling ``init()`` again (but does the same thing).
+
+
+Colored Output
+--------------
+
+Cross-platform printing of colored text can then be done using Colorama's
+constant shorthand for ANSI escape sequences:
+
+.. code-block:: python
+
+    from colorama import Fore, Back, Style
+    print(Fore.RED + 'some red text')
+    print(Back.GREEN + 'and with a green background')
+    print(Style.DIM + 'and in dim text')
+    print(Style.RESET_ALL)
+    print('back to normal now')
+
+...or simply by manually printing ANSI sequences from your own code:
+
+.. code-block:: python
+
+    print('\033[31m' + 'some red text')
+    print('\033[39m') # and reset to default color
+
+...or, Colorama can be used happily in conjunction with existing ANSI libraries
+such as Termcolor:
+
+.. code-block:: python
+
+    from colorama import init
+    from termcolor import colored
+
+    # use Colorama to make Termcolor work on Windows too
+    init()
+
+    # then use Termcolor for all colored text output
+    print(colored('Hello, World!', 'green', 'on_red'))
+
+Available formatting constants are::
+
+    Fore: BLACK, RED, GREEN, YELLOW, BLUE, MAGENTA, CYAN, WHITE, RESET.
+    Back: BLACK, RED, GREEN, YELLOW, BLUE, MAGENTA, CYAN, WHITE, RESET.
+    Style: DIM, NORMAL, BRIGHT, RESET_ALL
+
+``Style.RESET_ALL`` resets foreground, background, and brightness. Colorama will
+perform this reset automatically on program exit.
+
+
+Cursor Positioning
+------------------
+
+ANSI codes to reposition the cursor are supported. See ``demos/demo06.py`` for
+an example of how to generate them.
+
+
+Init Keyword Args
+-----------------
+
+``init()`` accepts some ``**kwargs`` to override default behaviour.
+
+init(autoreset=False):
+    If you find yourself repeatedly sending reset sequences to turn off color
+    changes at the end of every print, then ``init(autoreset=True)`` will
+    automate that:
+
+    .. code-block:: python
+
+        from colorama import init
+        init(autoreset=True)
+        print(Fore.RED + 'some red text')
+        print('automatically back to default color again')
+
+init(strip=None):
+    Pass ``True`` or ``False`` to override whether ansi codes should be
+    stripped from the output. The default behaviour is to strip if on Windows
+    or if output is redirected (not a tty).
+
+init(convert=None):
+    Pass ``True`` or ``False`` to override whether to convert ANSI codes in the
+    output into win32 calls. The default behaviour is to convert if on Windows
+    and output is to a tty (terminal).
+
+init(wrap=True):
+    On Windows, colorama works by replacing ``sys.stdout`` and ``sys.stderr``
+    with proxy objects, which override the ``.write()`` method to do their work.
+    If this wrapping causes you problems, then this can be disabled by passing
+    ``init(wrap=False)``. The default behaviour is to wrap if ``autoreset`` or
+    ``strip`` or ``convert`` are True.
+
+    When wrapping is disabled, colored printing on non-Windows platforms will
+    continue to work as normal. To do cross-platform colored output, you can
+    use Colorama's ``AnsiToWin32`` proxy directly:
+
+    .. code-block:: python
+
+        import sys
+        from colorama import init, AnsiToWin32
+        init(wrap=False)
+        stream = AnsiToWin32(sys.stderr).stream
+
+        # Python 2
+        print >>stream, Fore.BLUE + 'blue text on stderr'
+
+        # Python 3
+        print(Fore.BLUE + 'blue text on stderr', file=stream)
+
+
+Installation
+=======================
+colorama is currently installable from PyPI:
+
+    pip install colorama
+
+colorama also can be installed by the conda package manager:
+
+    conda install -c anaconda colorama
+
+
+Status & Known Problems
+=======================
+
+I've personally only tested it on Windows XP (CMD, Console2), Ubuntu
+(gnome-terminal, xterm), and OS X.
+
+Some presumably valid ANSI sequences aren't recognised (see details below),
+but to my knowledge nobody has yet complained about this. Puzzling.
+
+See outstanding issues and wishlist:
+https://github.com/tartley/colorama/issues
+
+If anything doesn't work for you, or doesn't do what you expected or hoped for,
+I'd love to hear about it on that issues list, would be delighted by patches,
+and would be happy to grant commit access to anyone who submits a working patch
+or two.
+
+
+Recognised ANSI Sequences
+=========================
+
+ANSI sequences generally take the form:
+
+    ESC [ <param> ; <param> ... <command>
+
+Where ``<param>`` is an integer, and ``<command>`` is a single letter. Zero or
+more params are passed to a ``<command>``. If no params are passed, it is
+generally synonymous with passing a single zero. No spaces exist in the
+sequence; they have been inserted here simply to read more easily.
+
+The only ANSI sequences that colorama converts into win32 calls are::
+
+    ESC [ 0 m       # reset all (colors and brightness)
+    ESC [ 1 m       # bright
+    ESC [ 2 m       # dim (looks same as normal brightness)
+    ESC [ 22 m      # normal brightness
+
+    # FOREGROUND:
+    ESC [ 30 m      # black
+    ESC [ 31 m      # red
+    ESC [ 32 m      # green
+    ESC [ 33 m      # yellow
+    ESC [ 34 m      # blue
+    ESC [ 35 m      # magenta
+    ESC [ 36 m      # cyan
+    ESC [ 37 m      # white
+    ESC [ 39 m      # reset
+
+    # BACKGROUND
+    ESC [ 40 m      # black
+    ESC [ 41 m      # red
+    ESC [ 42 m      # green
+    ESC [ 43 m      # yellow
+    ESC [ 44 m      # blue
+    ESC [ 45 m      # magenta
+    ESC [ 46 m      # cyan
+    ESC [ 47 m      # white
+    ESC [ 49 m      # reset
+
+    # cursor positioning
+    ESC [ y;x H     # position cursor at x across, y down
+    ESC [ y;x f     # position cursor at x across, y down
+    ESC [ n A       # move cursor n lines up
+    ESC [ n B       # move cursor n lines down
+    ESC [ n C       # move cursor n characters forward
+    ESC [ n D       # move cursor n characters backward
+
+    # clear the screen
+    ESC [ mode J    # clear the screen
+
+    # clear the line
+    ESC [ mode K    # clear the line
+
+Multiple numeric params to the ``'m'`` command can be combined into a single
+sequence::
+
+    ESC [ 36 ; 45 ; 1 m     # bright cyan text on magenta background
+
+All other ANSI sequences of the form ``ESC [ <param> ; <param> ... <command>``
+are silently stripped from the output on Windows.
+
+Any other form of ANSI sequence, such as single-character codes or alternative
+initial characters, are not recognised or stripped. It would be cool to add
+them though. Let me know if it would be useful for you, via the Issues on
+GitHub.
+
+
+Development
+===========
+
+Help and fixes welcome!
+
+Running tests requires:
+
+- Michael Foord's ``mock`` module to be installed.
+- Tests are written using 2010-era updates to ``unittest``
+
+To run tests::
+
+    python -m unittest discover -p *_test.py
+
+This, like a few other handy commands, is captured in a ``Makefile``.
+
+If you use nose to run the tests, you must pass the ``-s`` flag; otherwise,
+``nosetests`` applies its own proxy to ``stdout``, which confuses the unit
+tests.
+
+
+Professional support
+====================
+
+.. |tideliftlogo| image:: https://cdn2.hubspot.net/hubfs/4008838/website/logos/logos_for_download/Tidelift_primary-shorthand-logo.png
+   :alt: Tidelift
+   :target: https://tidelift.com/subscription/pkg/pypi-colorama?utm_source=pypi-colorama&utm_medium=referral&utm_campaign=readme
+
+.. list-table::
+   :widths: 10 100
+
+   * - |tideliftlogo|
+     - Professional support for colorama is available as part of the
+       `Tidelift Subscription`_.
+       Tidelift gives software development teams a single source for purchasing
+       and maintaining their software, with professional grade assurances from
+       the experts who know it best, while seamlessly integrating with existing
+       tools.
+
+.. _Tidelift Subscription: https://tidelift.com/subscription/pkg/pypi-colorama?utm_source=pypi-colorama&utm_medium=referral&utm_campaign=readme
+
+
+Thanks
+======
+* Marc Schlaich (schlamar) for a ``setup.py`` fix for Python2.5.
+* Marc Abramowitz, reported & fixed a crash on exit with closed ``stdout``,
+  providing a solution to issue #7's setuptools/distutils debate,
+  and other fixes.
+* User 'eryksun', for guidance on correctly instantiating ``ctypes.windll``.
+* Matthew McCormick for politely pointing out a longstanding crash on non-Win. +* Ben Hoyt, for a magnificent fix under 64-bit Windows. +* Jesse at Empty Square for submitting a fix for examples in the README. +* User 'jamessp', an observant documentation fix for cursor positioning. +* User 'vaal1239', Dave Mckee & Lackner Kristof for a tiny but much-needed Win7 + fix. +* Julien Stuyck, for wisely suggesting Python3 compatible updates to README. +* Daniel Griffith for multiple fabulous patches. +* Oscar Lesta for a valuable fix to stop ANSI chars being sent to non-tty + output. +* Roger Binns, for many suggestions, valuable feedback, & bug reports. +* Tim Golden for thought and much appreciated feedback on the initial idea. +* User 'Zearin' for updates to the README file. +* John Szakmeister for adding support for light colors +* Charles Merriam for adding documentation to demos +* Jurko for a fix on 64-bit Windows CPython2.5 w/o ctypes +* Florian Bruhin for a fix when stdout or stderr are None +* Thomas Weininger for fixing ValueError on Windows +* Remi Rampin for better Github integration and fixes to the README file +* Simeon Visser for closing a file handle using 'with' and updating classifiers + to include Python 3.3 and 3.4 +* Andy Neff for fixing RESET of LIGHT_EX colors. +* Jonathan Hartley for the initial idea and implementation. + + diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/colorama-0.4.3.dist-info/RECORD b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/colorama-0.4.3.dist-info/RECORD new file mode 100644 index 0000000000000000000000000000000000000000..f9bdc1a7459d9ef1d2e133712e2c56cadbf2d773 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/colorama-0.4.3.dist-info/RECORD @@ -0,0 +1,22 @@ +colorama/__init__.py,sha256=DqjXH9URVP3IJwmMt7peYw50ns1RNAymIB9-XdPEFV8,239 +colorama/ansi.py,sha256=Fi0un-QLqRm-v7o_nKiOqyC8PapBJK7DLV_q9LKtTO0,2524 +colorama/ansitowin32.py,sha256=u8QaqdqS_xYSfNkPM1eRJLHz6JMWPodaJaP0mxgHCDc,10462 +colorama/initialise.py,sha256=PprovDNxMTrvoNHFcL2NZjpH2XzDc8BLxLxiErfUl4k,1915 +colorama/win32.py,sha256=bJ8Il9jwaBN5BJ8bmN6FoYZ1QYuMKv2j8fGrXh7TJjw,5404 +colorama/winterm.py,sha256=2y_2b7Zsv34feAsP67mLOVc-Bgq51mdYGo571VprlrM,6438 +colorama-0.4.3.dist-info/AUTHORS.txt,sha256=RtqU9KfonVGhI48DAA4-yTOBUhBtQTjFhaDzHoyh7uU,21518 +colorama-0.4.3.dist-info/LICENSE.txt,sha256=W6Ifuwlk-TatfRU2LR7W1JMcyMj5_y1NkRkOEJvnRDE,1090 +colorama-0.4.3.dist-info/METADATA,sha256=-ovqULHfBHs9pV2e_Ua8-w2VSVLno-6x36SXSTsWvSc,14432 +colorama-0.4.3.dist-info/WHEEL,sha256=kGT74LWyRUZrL4VgLh6_g12IeVl_9u9ZVhadrgXZUEY,110 +colorama-0.4.3.dist-info/top_level.txt,sha256=_Kx6-Cni2BT1PEATPhrSRxo0d7kSgfBbHf5o7IF1ABw,9 +colorama-0.4.3.dist-info/RECORD,, +colorama/__pycache__,, +colorama/winterm.cpython-38.pyc,, +colorama-0.4.3.dist-info/__pycache__,, +colorama/ansi.cpython-38.pyc,, +colorama/ansitowin32.cpython-38.pyc,, +colorama/win32.cpython-38.pyc,, +colorama-0.4.3.virtualenv,, +colorama/initialise.cpython-38.pyc,, +colorama/__init__.cpython-38.pyc,, +colorama-0.4.3.dist-info/INSTALLER,, \ No newline at end of file diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/colorama-0.4.3.dist-info/WHEEL b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/colorama-0.4.3.dist-info/WHEEL new file mode 100644 index 0000000000000000000000000000000000000000..ef99c6cf3283b50a273ac4c6d009a0aa85597070 --- 
/dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/colorama-0.4.3.dist-info/WHEEL
@@ -0,0 +1,6 @@
+Wheel-Version: 1.0
+Generator: bdist_wheel (0.34.2)
+Root-Is-Purelib: true
+Tag: py2-none-any
+Tag: py3-none-any
+
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/colorama-0.4.3.dist-info/top_level.txt b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/colorama-0.4.3.dist-info/top_level.txt
new file mode 100644
index 0000000000000000000000000000000000000000..3fcfb51b2ad06cc0cb6c7155260a79fd2896b3d1
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/colorama-0.4.3.dist-info/top_level.txt
@@ -0,0 +1 @@
+colorama
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/colorama-0.4.3.virtualenv b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/colorama-0.4.3.virtualenv
new file mode 100644
index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/colorama/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/colorama/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..34c263cc8bb4a12d99b9d375193d4eb634ed094a
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/colorama/__init__.py
@@ -0,0 +1,6 @@
+# Copyright Jonathan Hartley 2013. BSD 3-Clause license, see LICENSE file.
+from .initialise import init, deinit, reinit, colorama_text
+from .ansi import Fore, Back, Style, Cursor
+from .ansitowin32 import AnsiToWin32
+
+__version__ = '0.4.3'
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/colorama/ansi.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/colorama/ansi.py
new file mode 100644
index 0000000000000000000000000000000000000000..78776588db9410924d8e4af0922fbc3960a37624
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/colorama/ansi.py
@@ -0,0 +1,102 @@
+# Copyright Jonathan Hartley 2013. BSD 3-Clause license, see LICENSE file.
+'''
+This module generates ANSI character codes for printing colors to terminals.
+See: http://en.wikipedia.org/wiki/ANSI_escape_code
+'''
+
+CSI = '\033['
+OSC = '\033]'
+BEL = '\007'
+
+
+def code_to_chars(code):
+    return CSI + str(code) + 'm'
+
+def set_title(title):
+    return OSC + '2;' + title + BEL
+
+def clear_screen(mode=2):
+    return CSI + str(mode) + 'J'
+
+def clear_line(mode=2):
+    return CSI + str(mode) + 'K'
+
+
+class AnsiCodes(object):
+    def __init__(self):
+        # the subclasses declare class attributes which are numbers.
+ # Upon instantiation we define instance attributes, which are the same + # as the class attributes but wrapped with the ANSI escape sequence + for name in dir(self): + if not name.startswith('_'): + value = getattr(self, name) + setattr(self, name, code_to_chars(value)) + + +class AnsiCursor(object): + def UP(self, n=1): + return CSI + str(n) + 'A' + def DOWN(self, n=1): + return CSI + str(n) + 'B' + def FORWARD(self, n=1): + return CSI + str(n) + 'C' + def BACK(self, n=1): + return CSI + str(n) + 'D' + def POS(self, x=1, y=1): + return CSI + str(y) + ';' + str(x) + 'H' + + +class AnsiFore(AnsiCodes): + BLACK = 30 + RED = 31 + GREEN = 32 + YELLOW = 33 + BLUE = 34 + MAGENTA = 35 + CYAN = 36 + WHITE = 37 + RESET = 39 + + # These are fairly well supported, but not part of the standard. + LIGHTBLACK_EX = 90 + LIGHTRED_EX = 91 + LIGHTGREEN_EX = 92 + LIGHTYELLOW_EX = 93 + LIGHTBLUE_EX = 94 + LIGHTMAGENTA_EX = 95 + LIGHTCYAN_EX = 96 + LIGHTWHITE_EX = 97 + + +class AnsiBack(AnsiCodes): + BLACK = 40 + RED = 41 + GREEN = 42 + YELLOW = 43 + BLUE = 44 + MAGENTA = 45 + CYAN = 46 + WHITE = 47 + RESET = 49 + + # These are fairly well supported, but not part of the standard. + LIGHTBLACK_EX = 100 + LIGHTRED_EX = 101 + LIGHTGREEN_EX = 102 + LIGHTYELLOW_EX = 103 + LIGHTBLUE_EX = 104 + LIGHTMAGENTA_EX = 105 + LIGHTCYAN_EX = 106 + LIGHTWHITE_EX = 107 + + +class AnsiStyle(AnsiCodes): + BRIGHT = 1 + DIM = 2 + NORMAL = 22 + RESET_ALL = 0 + +Fore = AnsiFore() +Back = AnsiBack() +Style = AnsiStyle() +Cursor = AnsiCursor() diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/colorama/ansitowin32.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/colorama/ansitowin32.py new file mode 100644 index 0000000000000000000000000000000000000000..359c92be50ee3c97aafbc8d31c16e5fbf2e6d022 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/colorama/ansitowin32.py @@ -0,0 +1,257 @@ +# Copyright Jonathan Hartley 2013. BSD 3-Clause license, see LICENSE file. +import re +import sys +import os + +from .ansi import AnsiFore, AnsiBack, AnsiStyle, Style +from .winterm import WinTerm, WinColor, WinStyle +from .win32 import windll, winapi_test + + +winterm = None +if windll is not None: + winterm = WinTerm() + + +class StreamWrapper(object): + ''' + Wraps a stream (such as stdout), acting as a transparent proxy for all + attribute access apart from method 'write()', which is delegated to our + Converter instance. + ''' + def __init__(self, wrapped, converter): + # double-underscore everything to prevent clashes with names of + # attributes on the wrapped stream object. 
+        self.__wrapped = wrapped
+        self.__convertor = converter
+
+    def __getattr__(self, name):
+        return getattr(self.__wrapped, name)
+
+    def __enter__(self, *args, **kwargs):
+        # special method lookup bypasses __getattr__/__getattribute__, see
+        # https://stackoverflow.com/questions/12632894/why-doesnt-getattr-work-with-exit
+        # thus, contextlib magic methods are not proxied via __getattr__
+        return self.__wrapped.__enter__(*args, **kwargs)
+
+    def __exit__(self, *args, **kwargs):
+        return self.__wrapped.__exit__(*args, **kwargs)
+
+    def write(self, text):
+        self.__convertor.write(text)
+
+    def isatty(self):
+        stream = self.__wrapped
+        if 'PYCHARM_HOSTED' in os.environ:
+            if stream is not None and (stream is sys.__stdout__ or stream is sys.__stderr__):
+                return True
+        try:
+            stream_isatty = stream.isatty
+        except AttributeError:
+            return False
+        else:
+            return stream_isatty()
+
+    @property
+    def closed(self):
+        stream = self.__wrapped
+        try:
+            return stream.closed
+        except AttributeError:
+            return True
+
+
+class AnsiToWin32(object):
+    '''
+    Implements a 'write()' method which, on Windows, will strip ANSI character
+    sequences from the text, and if outputting to a tty, will convert them into
+    win32 function calls.
+    '''
+    ANSI_CSI_RE = re.compile('\001?\033\\[((?:\\d|;)*)([a-zA-Z])\002?')   # Control Sequence Introducer
+    ANSI_OSC_RE = re.compile('\001?\033\\]((?:.|;)*?)(\x07)\002?')        # Operating System Command
+
+    def __init__(self, wrapped, convert=None, strip=None, autoreset=False):
+        # The wrapped stream (normally sys.stdout or sys.stderr)
+        self.wrapped = wrapped
+
+        # should we reset colors to defaults after every .write()
+        self.autoreset = autoreset
+
+        # create the proxy wrapping our output stream
+        self.stream = StreamWrapper(wrapped, self)
+
+        on_windows = os.name == 'nt'
+        # We test if the WinAPI works, because even if we are on Windows
+        # we may be using a terminal that doesn't support the WinAPI
+        # (e.g. Cygwin Terminal). In this case it's up to the terminal
+        # to support the ANSI codes.
+        conversion_supported = on_windows and winapi_test()
+
+        # should we strip ANSI sequences from our output?
+        if strip is None:
+            strip = conversion_supported or (not self.stream.closed and not self.stream.isatty())
+        self.strip = strip
+
+        # should we convert ANSI sequences into win32 calls?
+        if convert is None:
+            convert = conversion_supported and not self.stream.closed and self.stream.isatty()
+        self.convert = convert
+
+        # dict of ansi codes to win32 functions and parameters
+        self.win32_calls = self.get_win32_calls()
+
+        # are we wrapping stderr?
+        self.on_stderr = self.wrapped is sys.stderr
+
+    def should_wrap(self):
+        '''
+        True if this class is actually needed. If false, then the output
+        stream will not be affected, nor will win32 calls be issued, so
+        wrapping stdout is not actually required.
This will generally be + False on non-Windows platforms, unless optional functionality like + autoreset has been requested using kwargs to init() + ''' + return self.convert or self.strip or self.autoreset + + def get_win32_calls(self): + if self.convert and winterm: + return { + AnsiStyle.RESET_ALL: (winterm.reset_all, ), + AnsiStyle.BRIGHT: (winterm.style, WinStyle.BRIGHT), + AnsiStyle.DIM: (winterm.style, WinStyle.NORMAL), + AnsiStyle.NORMAL: (winterm.style, WinStyle.NORMAL), + AnsiFore.BLACK: (winterm.fore, WinColor.BLACK), + AnsiFore.RED: (winterm.fore, WinColor.RED), + AnsiFore.GREEN: (winterm.fore, WinColor.GREEN), + AnsiFore.YELLOW: (winterm.fore, WinColor.YELLOW), + AnsiFore.BLUE: (winterm.fore, WinColor.BLUE), + AnsiFore.MAGENTA: (winterm.fore, WinColor.MAGENTA), + AnsiFore.CYAN: (winterm.fore, WinColor.CYAN), + AnsiFore.WHITE: (winterm.fore, WinColor.GREY), + AnsiFore.RESET: (winterm.fore, ), + AnsiFore.LIGHTBLACK_EX: (winterm.fore, WinColor.BLACK, True), + AnsiFore.LIGHTRED_EX: (winterm.fore, WinColor.RED, True), + AnsiFore.LIGHTGREEN_EX: (winterm.fore, WinColor.GREEN, True), + AnsiFore.LIGHTYELLOW_EX: (winterm.fore, WinColor.YELLOW, True), + AnsiFore.LIGHTBLUE_EX: (winterm.fore, WinColor.BLUE, True), + AnsiFore.LIGHTMAGENTA_EX: (winterm.fore, WinColor.MAGENTA, True), + AnsiFore.LIGHTCYAN_EX: (winterm.fore, WinColor.CYAN, True), + AnsiFore.LIGHTWHITE_EX: (winterm.fore, WinColor.GREY, True), + AnsiBack.BLACK: (winterm.back, WinColor.BLACK), + AnsiBack.RED: (winterm.back, WinColor.RED), + AnsiBack.GREEN: (winterm.back, WinColor.GREEN), + AnsiBack.YELLOW: (winterm.back, WinColor.YELLOW), + AnsiBack.BLUE: (winterm.back, WinColor.BLUE), + AnsiBack.MAGENTA: (winterm.back, WinColor.MAGENTA), + AnsiBack.CYAN: (winterm.back, WinColor.CYAN), + AnsiBack.WHITE: (winterm.back, WinColor.GREY), + AnsiBack.RESET: (winterm.back, ), + AnsiBack.LIGHTBLACK_EX: (winterm.back, WinColor.BLACK, True), + AnsiBack.LIGHTRED_EX: (winterm.back, WinColor.RED, True), + AnsiBack.LIGHTGREEN_EX: (winterm.back, WinColor.GREEN, True), + AnsiBack.LIGHTYELLOW_EX: (winterm.back, WinColor.YELLOW, True), + AnsiBack.LIGHTBLUE_EX: (winterm.back, WinColor.BLUE, True), + AnsiBack.LIGHTMAGENTA_EX: (winterm.back, WinColor.MAGENTA, True), + AnsiBack.LIGHTCYAN_EX: (winterm.back, WinColor.CYAN, True), + AnsiBack.LIGHTWHITE_EX: (winterm.back, WinColor.GREY, True), + } + return dict() + + def write(self, text): + if self.strip or self.convert: + self.write_and_convert(text) + else: + self.wrapped.write(text) + self.wrapped.flush() + if self.autoreset: + self.reset_all() + + + def reset_all(self): + if self.convert: + self.call_win32('m', (0,)) + elif not self.strip and not self.stream.closed: + self.wrapped.write(Style.RESET_ALL) + + + def write_and_convert(self, text): + ''' + Write the given text to our wrapped stream, stripping any ANSI + sequences from the text, and optionally converting them into win32 + calls. 
+ ''' + cursor = 0 + text = self.convert_osc(text) + for match in self.ANSI_CSI_RE.finditer(text): + start, end = match.span() + self.write_plain_text(text, cursor, start) + self.convert_ansi(*match.groups()) + cursor = end + self.write_plain_text(text, cursor, len(text)) + + + def write_plain_text(self, text, start, end): + if start < end: + self.wrapped.write(text[start:end]) + self.wrapped.flush() + + + def convert_ansi(self, paramstring, command): + if self.convert: + params = self.extract_params(command, paramstring) + self.call_win32(command, params) + + + def extract_params(self, command, paramstring): + if command in 'Hf': + params = tuple(int(p) if len(p) != 0 else 1 for p in paramstring.split(';')) + while len(params) < 2: + # defaults: + params = params + (1,) + else: + params = tuple(int(p) for p in paramstring.split(';') if len(p) != 0) + if len(params) == 0: + # defaults: + if command in 'JKm': + params = (0,) + elif command in 'ABCD': + params = (1,) + + return params + + + def call_win32(self, command, params): + if command == 'm': + for param in params: + if param in self.win32_calls: + func_args = self.win32_calls[param] + func = func_args[0] + args = func_args[1:] + kwargs = dict(on_stderr=self.on_stderr) + func(*args, **kwargs) + elif command in 'J': + winterm.erase_screen(params[0], on_stderr=self.on_stderr) + elif command in 'K': + winterm.erase_line(params[0], on_stderr=self.on_stderr) + elif command in 'Hf': # cursor position - absolute + winterm.set_cursor_position(params, on_stderr=self.on_stderr) + elif command in 'ABCD': # cursor position - relative + n = params[0] + # A - up, B - down, C - forward, D - back + x, y = {'A': (0, -n), 'B': (0, n), 'C': (n, 0), 'D': (-n, 0)}[command] + winterm.cursor_adjust(x, y, on_stderr=self.on_stderr) + + + def convert_osc(self, text): + for match in self.ANSI_OSC_RE.finditer(text): + start, end = match.span() + text = text[:start] + text[end:] + paramstring, command = match.groups() + if command in '\x07': # \x07 = BEL + params = paramstring.split(";") + # 0 - change title and icon (we will only change title) + # 1 - change icon (we don't support this) + # 2 - change title + if params[0] in '02': + winterm.set_title(params[1]) + return text diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/colorama/initialise.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/colorama/initialise.py new file mode 100644 index 0000000000000000000000000000000000000000..430d0668727cd0521270a52c962867907d743b34 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/colorama/initialise.py @@ -0,0 +1,80 @@ +# Copyright Jonathan Hartley 2013. BSD 3-Clause license, see LICENSE file. 
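+# Editor's note -- a usage sketch, not part of the vendored file. The
+# public helpers defined below are typically driven like this (assumes
+# colorama is importable):
+#
+#     from colorama import init, deinit, Fore
+#
+#     init(autoreset=True)         # wrap sys.stdout / sys.stderr
+#     print(Fore.RED + 'error!')   # style is reset after each write
+#     deinit()                     # restore the original streams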
+import atexit +import contextlib +import sys + +from .ansitowin32 import AnsiToWin32 + + +orig_stdout = None +orig_stderr = None + +wrapped_stdout = None +wrapped_stderr = None + +atexit_done = False + + +def reset_all(): + if AnsiToWin32 is not None: # Issue #74: objects might become None at exit + AnsiToWin32(orig_stdout).reset_all() + + +def init(autoreset=False, convert=None, strip=None, wrap=True): + + if not wrap and any([autoreset, convert, strip]): + raise ValueError('wrap=False conflicts with any other arg=True') + + global wrapped_stdout, wrapped_stderr + global orig_stdout, orig_stderr + + orig_stdout = sys.stdout + orig_stderr = sys.stderr + + if sys.stdout is None: + wrapped_stdout = None + else: + sys.stdout = wrapped_stdout = \ + wrap_stream(orig_stdout, convert, strip, autoreset, wrap) + if sys.stderr is None: + wrapped_stderr = None + else: + sys.stderr = wrapped_stderr = \ + wrap_stream(orig_stderr, convert, strip, autoreset, wrap) + + global atexit_done + if not atexit_done: + atexit.register(reset_all) + atexit_done = True + + +def deinit(): + if orig_stdout is not None: + sys.stdout = orig_stdout + if orig_stderr is not None: + sys.stderr = orig_stderr + + +@contextlib.contextmanager +def colorama_text(*args, **kwargs): + init(*args, **kwargs) + try: + yield + finally: + deinit() + + +def reinit(): + if wrapped_stdout is not None: + sys.stdout = wrapped_stdout + if wrapped_stderr is not None: + sys.stderr = wrapped_stderr + + +def wrap_stream(stream, convert, strip, autoreset, wrap): + if wrap: + wrapper = AnsiToWin32(stream, + convert=convert, strip=strip, autoreset=autoreset) + if wrapper.should_wrap(): + stream = wrapper.stream + return stream diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/colorama/win32.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/colorama/win32.py new file mode 100644 index 0000000000000000000000000000000000000000..c2d836033673993e00a02d5a3802b61cd051cf08 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/colorama/win32.py @@ -0,0 +1,152 @@ +# Copyright Jonathan Hartley 2013. BSD 3-Clause license, see LICENSE file. 
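+# Editor's note -- a minimal sketch of the ctypes binding pattern used
+# throughout this module, not part of the vendored file. Declaring
+# argtypes/restype up front is what makes the kernel32 calls below safe:
+#
+#     import ctypes
+#     from ctypes import wintypes
+#
+#     kernel32 = ctypes.WinDLL('kernel32')
+#     GetStdHandle = kernel32.GetStdHandle
+#     GetStdHandle.argtypes = [wintypes.DWORD]
+#     GetStdHandle.restype = wintypes.HANDLE
+#     handle = GetStdHandle(-11)   # -11 == STDOUT, as defined below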
+ +# from winbase.h +STDOUT = -11 +STDERR = -12 + +try: + import ctypes + from ctypes import LibraryLoader + windll = LibraryLoader(ctypes.WinDLL) + from ctypes import wintypes +except (AttributeError, ImportError): + windll = None + SetConsoleTextAttribute = lambda *_: None + winapi_test = lambda *_: None +else: + from ctypes import byref, Structure, c_char, POINTER + + COORD = wintypes._COORD + + class CONSOLE_SCREEN_BUFFER_INFO(Structure): + """struct in wincon.h.""" + _fields_ = [ + ("dwSize", COORD), + ("dwCursorPosition", COORD), + ("wAttributes", wintypes.WORD), + ("srWindow", wintypes.SMALL_RECT), + ("dwMaximumWindowSize", COORD), + ] + def __str__(self): + return '(%d,%d,%d,%d,%d,%d,%d,%d,%d,%d,%d)' % ( + self.dwSize.Y, self.dwSize.X + , self.dwCursorPosition.Y, self.dwCursorPosition.X + , self.wAttributes + , self.srWindow.Top, self.srWindow.Left, self.srWindow.Bottom, self.srWindow.Right + , self.dwMaximumWindowSize.Y, self.dwMaximumWindowSize.X + ) + + _GetStdHandle = windll.kernel32.GetStdHandle + _GetStdHandle.argtypes = [ + wintypes.DWORD, + ] + _GetStdHandle.restype = wintypes.HANDLE + + _GetConsoleScreenBufferInfo = windll.kernel32.GetConsoleScreenBufferInfo + _GetConsoleScreenBufferInfo.argtypes = [ + wintypes.HANDLE, + POINTER(CONSOLE_SCREEN_BUFFER_INFO), + ] + _GetConsoleScreenBufferInfo.restype = wintypes.BOOL + + _SetConsoleTextAttribute = windll.kernel32.SetConsoleTextAttribute + _SetConsoleTextAttribute.argtypes = [ + wintypes.HANDLE, + wintypes.WORD, + ] + _SetConsoleTextAttribute.restype = wintypes.BOOL + + _SetConsoleCursorPosition = windll.kernel32.SetConsoleCursorPosition + _SetConsoleCursorPosition.argtypes = [ + wintypes.HANDLE, + COORD, + ] + _SetConsoleCursorPosition.restype = wintypes.BOOL + + _FillConsoleOutputCharacterA = windll.kernel32.FillConsoleOutputCharacterA + _FillConsoleOutputCharacterA.argtypes = [ + wintypes.HANDLE, + c_char, + wintypes.DWORD, + COORD, + POINTER(wintypes.DWORD), + ] + _FillConsoleOutputCharacterA.restype = wintypes.BOOL + + _FillConsoleOutputAttribute = windll.kernel32.FillConsoleOutputAttribute + _FillConsoleOutputAttribute.argtypes = [ + wintypes.HANDLE, + wintypes.WORD, + wintypes.DWORD, + COORD, + POINTER(wintypes.DWORD), + ] + _FillConsoleOutputAttribute.restype = wintypes.BOOL + + _SetConsoleTitleW = windll.kernel32.SetConsoleTitleW + _SetConsoleTitleW.argtypes = [ + wintypes.LPCWSTR + ] + _SetConsoleTitleW.restype = wintypes.BOOL + + def _winapi_test(handle): + csbi = CONSOLE_SCREEN_BUFFER_INFO() + success = _GetConsoleScreenBufferInfo( + handle, byref(csbi)) + return bool(success) + + def winapi_test(): + return any(_winapi_test(h) for h in + (_GetStdHandle(STDOUT), _GetStdHandle(STDERR))) + + def GetConsoleScreenBufferInfo(stream_id=STDOUT): + handle = _GetStdHandle(stream_id) + csbi = CONSOLE_SCREEN_BUFFER_INFO() + success = _GetConsoleScreenBufferInfo( + handle, byref(csbi)) + return csbi + + def SetConsoleTextAttribute(stream_id, attrs): + handle = _GetStdHandle(stream_id) + return _SetConsoleTextAttribute(handle, attrs) + + def SetConsoleCursorPosition(stream_id, position, adjust=True): + position = COORD(*position) + # If the position is out of range, do nothing. + if position.Y <= 0 or position.X <= 0: + return + # Adjust for Windows' SetConsoleCursorPosition: + # 1. being 0-based, while ANSI is 1-based. + # 2. expecting (x,y), while ANSI uses (y,x). 
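+        # (Editor's note, worked example: ANSI 'ESC[5;10H' reaches this
+        # function as position (5, 10); the line below converts it to the
+        # 0-based, (x, y)-ordered COORD(X=9, Y=4) that Windows expects.)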
+ adjusted_position = COORD(position.Y - 1, position.X - 1) + if adjust: + # Adjust for viewport's scroll position + sr = GetConsoleScreenBufferInfo(STDOUT).srWindow + adjusted_position.Y += sr.Top + adjusted_position.X += sr.Left + # Resume normal processing + handle = _GetStdHandle(stream_id) + return _SetConsoleCursorPosition(handle, adjusted_position) + + def FillConsoleOutputCharacter(stream_id, char, length, start): + handle = _GetStdHandle(stream_id) + char = c_char(char.encode()) + length = wintypes.DWORD(length) + num_written = wintypes.DWORD(0) + # Note that this is hard-coded for ANSI (vs wide) bytes. + success = _FillConsoleOutputCharacterA( + handle, char, length, start, byref(num_written)) + return num_written.value + + def FillConsoleOutputAttribute(stream_id, attr, length, start): + ''' FillConsoleOutputAttribute( hConsole, csbi.wAttributes, dwConSize, coordScreen, &cCharsWritten )''' + handle = _GetStdHandle(stream_id) + attribute = wintypes.WORD(attr) + length = wintypes.DWORD(length) + num_written = wintypes.DWORD(0) + # Note that this is hard-coded for ANSI (vs wide) bytes. + return _FillConsoleOutputAttribute( + handle, attribute, length, start, byref(num_written)) + + def SetConsoleTitle(title): + return _SetConsoleTitleW(title) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/colorama/winterm.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/colorama/winterm.py new file mode 100644 index 0000000000000000000000000000000000000000..0fdb4ec4e91090876dc3fbf207049b521fa0dd73 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/colorama/winterm.py @@ -0,0 +1,169 @@ +# Copyright Jonathan Hartley 2013. BSD 3-Clause license, see LICENSE file. +from . import win32 + + +# from wincon.h +class WinColor(object): + BLACK = 0 + BLUE = 1 + GREEN = 2 + CYAN = 3 + RED = 4 + MAGENTA = 5 + YELLOW = 6 + GREY = 7 + +# from wincon.h +class WinStyle(object): + NORMAL = 0x00 # dim text, dim background + BRIGHT = 0x08 # bright text, dim background + BRIGHT_BACKGROUND = 0x80 # dim text, bright background + +class WinTerm(object): + + def __init__(self): + self._default = win32.GetConsoleScreenBufferInfo(win32.STDOUT).wAttributes + self.set_attrs(self._default) + self._default_fore = self._fore + self._default_back = self._back + self._default_style = self._style + # In order to emulate LIGHT_EX in windows, we borrow the BRIGHT style. + # So that LIGHT_EX colors and BRIGHT style do not clobber each other, + # we track them separately, since LIGHT_EX is overwritten by Fore/Back + # and BRIGHT is overwritten by Style codes. 
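+        # (Editor's note, added for clarity: console attributes pack as
+        # fore | back << 4 | style bits, which is the layout that
+        # get_attrs() and set_attrs() below encode and decode.)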
+ self._light = 0 + + def get_attrs(self): + return self._fore + self._back * 16 + (self._style | self._light) + + def set_attrs(self, value): + self._fore = value & 7 + self._back = (value >> 4) & 7 + self._style = value & (WinStyle.BRIGHT | WinStyle.BRIGHT_BACKGROUND) + + def reset_all(self, on_stderr=None): + self.set_attrs(self._default) + self.set_console(attrs=self._default) + self._light = 0 + + def fore(self, fore=None, light=False, on_stderr=False): + if fore is None: + fore = self._default_fore + self._fore = fore + # Emulate LIGHT_EX with BRIGHT Style + if light: + self._light |= WinStyle.BRIGHT + else: + self._light &= ~WinStyle.BRIGHT + self.set_console(on_stderr=on_stderr) + + def back(self, back=None, light=False, on_stderr=False): + if back is None: + back = self._default_back + self._back = back + # Emulate LIGHT_EX with BRIGHT_BACKGROUND Style + if light: + self._light |= WinStyle.BRIGHT_BACKGROUND + else: + self._light &= ~WinStyle.BRIGHT_BACKGROUND + self.set_console(on_stderr=on_stderr) + + def style(self, style=None, on_stderr=False): + if style is None: + style = self._default_style + self._style = style + self.set_console(on_stderr=on_stderr) + + def set_console(self, attrs=None, on_stderr=False): + if attrs is None: + attrs = self.get_attrs() + handle = win32.STDOUT + if on_stderr: + handle = win32.STDERR + win32.SetConsoleTextAttribute(handle, attrs) + + def get_position(self, handle): + position = win32.GetConsoleScreenBufferInfo(handle).dwCursorPosition + # Because Windows coordinates are 0-based, + # and win32.SetConsoleCursorPosition expects 1-based. + position.X += 1 + position.Y += 1 + return position + + def set_cursor_position(self, position=None, on_stderr=False): + if position is None: + # I'm not currently tracking the position, so there is no default. + # position = self.get_position() + return + handle = win32.STDOUT + if on_stderr: + handle = win32.STDERR + win32.SetConsoleCursorPosition(handle, position) + + def cursor_adjust(self, x, y, on_stderr=False): + handle = win32.STDOUT + if on_stderr: + handle = win32.STDERR + position = self.get_position(handle) + adjusted_position = (position.Y + y, position.X + x) + win32.SetConsoleCursorPosition(handle, adjusted_position, adjust=False) + + def erase_screen(self, mode=0, on_stderr=False): + # 0 should clear from the cursor to the end of the screen. + # 1 should clear from the cursor to the beginning of the screen. 
+ # 2 should clear the entire screen, and move cursor to (1,1) + handle = win32.STDOUT + if on_stderr: + handle = win32.STDERR + csbi = win32.GetConsoleScreenBufferInfo(handle) + # get the number of character cells in the current buffer + cells_in_screen = csbi.dwSize.X * csbi.dwSize.Y + # get number of character cells before current cursor position + cells_before_cursor = csbi.dwSize.X * csbi.dwCursorPosition.Y + csbi.dwCursorPosition.X + if mode == 0: + from_coord = csbi.dwCursorPosition + cells_to_erase = cells_in_screen - cells_before_cursor + elif mode == 1: + from_coord = win32.COORD(0, 0) + cells_to_erase = cells_before_cursor + elif mode == 2: + from_coord = win32.COORD(0, 0) + cells_to_erase = cells_in_screen + else: + # invalid mode + return + # fill the entire screen with blanks + win32.FillConsoleOutputCharacter(handle, ' ', cells_to_erase, from_coord) + # now set the buffer's attributes accordingly + win32.FillConsoleOutputAttribute(handle, self.get_attrs(), cells_to_erase, from_coord) + if mode == 2: + # put the cursor where needed + win32.SetConsoleCursorPosition(handle, (1, 1)) + + def erase_line(self, mode=0, on_stderr=False): + # 0 should clear from the cursor to the end of the line. + # 1 should clear from the cursor to the beginning of the line. + # 2 should clear the entire line. + handle = win32.STDOUT + if on_stderr: + handle = win32.STDERR + csbi = win32.GetConsoleScreenBufferInfo(handle) + if mode == 0: + from_coord = csbi.dwCursorPosition + cells_to_erase = csbi.dwSize.X - csbi.dwCursorPosition.X + elif mode == 1: + from_coord = win32.COORD(0, csbi.dwCursorPosition.Y) + cells_to_erase = csbi.dwCursorPosition.X + elif mode == 2: + from_coord = win32.COORD(0, csbi.dwCursorPosition.Y) + cells_to_erase = csbi.dwSize.X + else: + # invalid mode + return + # fill the entire screen with blanks + win32.FillConsoleOutputCharacter(handle, ' ', cells_to_erase, from_coord) + # now set the buffer's attributes accordingly + win32.FillConsoleOutputAttribute(handle, self.get_attrs(), cells_to_erase, from_coord) + + def set_title(self, title): + win32.SetConsoleTitle(title) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/contextlib2-0.6.0.dist-info/AUTHORS.txt b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/contextlib2-0.6.0.dist-info/AUTHORS.txt new file mode 100644 index 0000000000000000000000000000000000000000..72c87d7d38ae7bf859717c333a5ee8230f6ce624 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/contextlib2-0.6.0.dist-info/AUTHORS.txt @@ -0,0 +1,562 @@ +A_Rog +Aakanksha Agrawal <11389424+rasponic@users.noreply.github.com> +Abhinav Sagar <40603139+abhinavsagar@users.noreply.github.com> +ABHYUDAY PRATAP SINGH +abs51295 +AceGentile +Adam Chainz +Adam Tse +Adam Tse +Adam Wentz +admin +Adrien Morison +ahayrapetyan +Ahilya +AinsworthK +Akash Srivastava +Alan Yee +Albert Tugushev +Albert-Guan +albertg +Aleks Bunin +Alethea Flowers +Alex Gaynor +Alex Grönholm +Alex Loosley +Alex Morega +Alex Stachowiak +Alexander Shtyrov +Alexandre Conrad +Alexey Popravka +Alexey Popravka +Alli +Ami Fischman +Ananya Maiti +Anatoly Techtonik +Anders Kaseorg +Andreas Lutro +Andrei Geacar +Andrew Gaul +Andrey Bulgakov +Andrés Delfino <34587441+andresdelfino@users.noreply.github.com> +Andrés Delfino +Andy Freeland +Andy Freeland +Andy Kluger +Ani Hayrapetyan +Aniruddha Basak +Anish Tambe +Anrs Hu +Anthony Sottile +Antoine Musso +Anton Ovchinnikov +Anton Patrushev +Antonio 
Alvarado Hernandez +Antony Lee +Antti Kaihola +Anubhav Patel +Anuj Godase +AQNOUCH Mohammed +AraHaan +Arindam Choudhury +Armin Ronacher +Artem +Ashley Manton +Ashwin Ramaswami +atse +Atsushi Odagiri +Avner Cohen +Baptiste Mispelon +Barney Gale +barneygale +Bartek Ogryczak +Bastian Venthur +Ben Darnell +Ben Hoyt +Ben Rosser +Bence Nagy +Benjamin Peterson +Benjamin VanEvery +Benoit Pierre +Berker Peksag +Bernardo B. Marques +Bernhard M. Wiedemann +Bertil Hatt +Bogdan Opanchuk +BorisZZZ +Brad Erickson +Bradley Ayers +Brandon L. Reiss +Brandt Bucher +Brett Randall +Brian Cristante <33549821+brcrista@users.noreply.github.com> +Brian Cristante +Brian Rosner +BrownTruck +Bruno Oliveira +Bruno Renié +Bstrdsmkr +Buck Golemon +burrows +Bussonnier Matthias +c22 +Caleb Martinez +Calvin Smith +Carl Meyer +Carlos Liam +Carol Willing +Carter Thayer +Cass +Chandrasekhar Atina +Chih-Hsuan Yen +Chih-Hsuan Yen +Chris Brinker +Chris Hunt +Chris Jerdonek +Chris McDonough +Chris Wolfe +Christian Heimes +Christian Oudard +Christopher Hunt +Christopher Snyder +Clark Boylan +Clay McClure +Cody +Cody Soyland +Colin Watson +Connor Osborn +Cooper Lees +Cooper Ry Lees +Cory Benfield +Cory Wright +Craig Kerstiens +Cristian Sorinel +Curtis Doty +cytolentino +Damian Quiroga +Dan Black +Dan Savilonis +Dan Sully +daniel +Daniel Collins +Daniel Hahler +Daniel Holth +Daniel Jost +Daniel Shaulov +Daniele Esposti +Daniele Procida +Danny Hermes +Dav Clark +Dave Abrahams +Dave Jones +David Aguilar +David Black +David Bordeynik +David Bordeynik +David Caro +David Evans +David Linke +David Pursehouse +David Tucker +David Wales +Davidovich +derwolfe +Desetude +Diego Caraballo +DiegoCaraballo +Dmitry Gladkov +Domen Kožar +Donald Stufft +Dongweiming +Douglas Thor +DrFeathers +Dustin Ingram +Dwayne Bailey +Ed Morley <501702+edmorley@users.noreply.github.com> +Ed Morley +Eitan Adler +ekristina +elainechan +Eli Schwartz +Eli Schwartz +Emil Burzo +Emil Styrke +Endoh Takanao +enoch +Erdinc Mutlu +Eric Gillingham +Eric Hanchrow +Eric Hopper +Erik M. Bray +Erik Rose +Ernest W Durbin III +Ernest W. Durbin III +Erwin Janssen +Eugene Vereshchagin +everdimension +Felix Yan +fiber-space +Filip Kokosiński +Florian Briand +Florian Rathgeber +Francesco +Francesco Montesano +Frost Ming +Gabriel Curio +Gabriel de Perthuis +Garry Polley +gdanielson +Geoffrey Lehée +Geoffrey Sneddon +George Song +Georgi Valkov +Giftlin Rajaiah +gizmoguy1 +gkdoc <40815324+gkdoc@users.noreply.github.com> +Gopinath M <31352222+mgopi1990@users.noreply.github.com> +GOTO Hayato <3532528+gh640@users.noreply.github.com> +gpiks +Guilherme Espada +Guy Rozendorn +gzpan123 +Hanjun Kim +Hari Charan +Harsh Vardhan +Herbert Pfennig +Hsiaoming Yang +Hugo +Hugo Lopes Tavares +Hugo van Kemenade +hugovk +Hynek Schlawack +Ian Bicking +Ian Cordasco +Ian Lee +Ian Stapleton Cordasco +Ian Wienand +Ian Wienand +Igor Kuzmitshov +Igor Sobreira +Ilya Baryshev +INADA Naoki +Ionel Cristian Mărieș +Ionel Maries Cristian +Ivan Pozdeev +Jacob Kim +jakirkham +Jakub Stasiak +Jakub Vysoky +Jakub Wilk +James Cleveland +James Cleveland +James Firth +James Polley +Jan Pokorný +Jannis Leidel +jarondl +Jason R. 
Coombs +Jay Graves +Jean-Christophe Fillion-Robin +Jeff Barber +Jeff Dairiki +Jelmer Vernooij +jenix21 +Jeremy Stanley +Jeremy Zafran +Jiashuo Li +Jim Garrison +Jivan Amara +John Paton +John-Scott Atlakson +johnthagen +johnthagen +Jon Banafato +Jon Dufresne +Jon Parise +Jonas Nockert +Jonathan Herbert +Joost Molenaar +Jorge Niedbalski +Joseph Long +Josh Bronson +Josh Hansen +Josh Schneier +Juanjo Bazán +Julian Berman +Julian Gethmann +Julien Demoor +jwg4 +Jyrki Pulliainen +Kai Chen +Kamal Bin Mustafa +kaustav haldar +keanemind +Keith Maxwell +Kelsey Hightower +Kenneth Belitzky +Kenneth Reitz +Kenneth Reitz +Kevin Burke +Kevin Carter +Kevin Frommelt +Kevin R Patterson +Kexuan Sun +Kit Randel +kpinc +Krishna Oza +Kumar McMillan +Kyle Persohn +lakshmanaram +Laszlo Kiss-Kollar +Laurent Bristiel +Laurie Opperman +Leon Sasson +Lev Givon +Lincoln de Sousa +Lipis +Loren Carvalho +Lucas Cimon +Ludovic Gasc +Luke Macken +Luo Jiebin +luojiebin +luz.paz +László Kiss Kollár +László Kiss Kollár +Marc Abramowitz +Marc Tamlyn +Marcus Smith +Mariatta +Mark Kohler +Mark Williams +Mark Williams +Markus Hametner +Masaki +Masklinn +Matej Stuchlik +Mathew Jennings +Mathieu Bridon +Matt Good +Matt Maker +Matt Robenolt +matthew +Matthew Einhorn +Matthew Gilliard +Matthew Iversen +Matthew Trumbell +Matthew Willson +Matthias Bussonnier +mattip +Maxim Kurnikov +Maxime Rouyrre +mayeut +mbaluna <44498973+mbaluna@users.noreply.github.com> +mdebi <17590103+mdebi@users.noreply.github.com> +memoselyk +Michael +Michael Aquilina +Michael E. Karpeles +Michael Klich +Michael Williamson +michaelpacer +Mickaël Schoentgen +Miguel Araujo Perez +Mihir Singh +Mike +Mike Hendricks +Min RK +MinRK +Miro Hrončok +Monica Baluna +montefra +Monty Taylor +Nate Coraor +Nathaniel J. Smith +Nehal J Wani +Neil Botelho +Nick Coghlan +Nick Stenning +Nick Timkovich +Nicolas Bock +Nikhil Benesch +Nitesh Sharma +Nowell Strite +NtaleGrey +nvdv +Ofekmeister +ofrinevo +Oliver Jeeves +Oliver Tonnhofer +Olivier Girardot +Olivier Grisel +Ollie Rutherfurd +OMOTO Kenji +Omry Yadan +Oren Held +Oscar Benjamin +Oz N Tiram +Pachwenko <32424503+Pachwenko@users.noreply.github.com> +Patrick Dubroy +Patrick Jenkins +Patrick Lawson +patricktokeeffe +Patrik Kopkan +Paul Kehrer +Paul Moore +Paul Nasrat +Paul Oswald +Paul van der Linden +Paulus Schoutsen +Pavithra Eswaramoorthy <33131404+QueenCoffee@users.noreply.github.com> +Pawel Jasinski +Pekka Klärck +Peter Lisák +Peter Waller +petr-tik +Phaneendra Chiruvella +Phil Freo +Phil Pennock +Phil Whelan +Philip Jägenstedt +Philip Molloy +Philippe Ombredanne +Pi Delport +Pierre-Yves Rofes +pip +Prabakaran Kumaresshan +Prabhjyotsing Surjit Singh Sodhi +Prabhu Marappan +Pradyun Gedam +Pratik Mallya +Preet Thakkar +Preston Holmes +Przemek Wrzos +Pulkit Goyal <7895pulkit@gmail.com> +Qiangning Hong +Quentin Pradet +R. David Murray +Rafael Caricio +Ralf Schmitt +Razzi Abuissa +rdb +Remi Rampin +Remi Rampin +Rene Dudfield +Riccardo Magliocchetti +Richard Jones +RobberPhex +Robert Collins +Robert McGibbon +Robert T. 
McGibbon +robin elisha robinson +Roey Berman +Rohan Jain +Rohan Jain +Rohan Jain +Roman Bogorodskiy +Romuald Brunet +Ronny Pfannschmidt +Rory McCann +Ross Brattain +Roy Wellington Ⅳ +Roy Wellington Ⅳ +Ryan Wooden +ryneeverett +Sachi King +Salvatore Rinchiera +Savio Jomton +schlamar +Scott Kitterman +Sean +seanj +Sebastian Jordan +Sebastian Schaetz +Segev Finer +SeongSoo Cho +Sergey Vasilyev +Seth Woodworth +Shlomi Fish +Shovan Maity +Simeon Visser +Simon Cross +Simon Pichugin +sinoroc +Sorin Sbarnea +Stavros Korokithakis +Stefan Scherfke +Stephan Erb +stepshal +Steve (Gadget) Barnes +Steve Barnes +Steve Dower +Steve Kowalik +Steven Myint +stonebig +Stéphane Bidoul (ACSONE) +Stéphane Bidoul +Stéphane Klein +Sumana Harihareswara +Sviatoslav Sydorenko +Sviatoslav Sydorenko +Swat009 +Takayuki SHIMIZUKAWA +tbeswick +Thijs Triemstra +Thomas Fenzl +Thomas Grainger +Thomas Guettler +Thomas Johansson +Thomas Kluyver +Thomas Smith +Tim D. Smith +Tim Gates +Tim Harder +Tim Heap +tim smith +tinruufu +Tom Forbes +Tom Freudenheim +Tom V +Tomas Orsava +Tomer Chachamu +Tony Beswick +Tony Zhaocheng Tan +TonyBeswick +toonarmycaptain +Toshio Kuratomi +Travis Swicegood +Tzu-ping Chung +Valentin Haenel +Victor Stinner +victorvpaulo +Viktor Szépe +Ville Skyttä +Vinay Sajip +Vincent Philippon +Vinicyus Macedo <7549205+vinicyusmacedo@users.noreply.github.com> +Vitaly Babiy +Vladimir Rutsky +W. Trevor King +Wil Tan +Wilfred Hughes +William ML Leslie +William T Olson +Wilson Mo +wim glenn +Wolfgang Maier +Xavier Fernandez +Xavier Fernandez +xoviat +xtreak +YAMAMOTO Takashi +Yen Chi Hsuan +Yeray Diaz Diaz +Yoval P +Yu Jian +Yuan Jing Vincent Yan +Zearin +Zearin +Zhiping Deng +Zvezdan Petkovic +Łukasz Langa +Семён Марьясин diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/contextlib2-0.6.0.dist-info/INSTALLER b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/contextlib2-0.6.0.dist-info/INSTALLER new file mode 100644 index 0000000000000000000000000000000000000000..a1b589e38a32041e49332e5e81c2d363dc418d68 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/contextlib2-0.6.0.dist-info/INSTALLER @@ -0,0 +1 @@ +pip diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/contextlib2-0.6.0.dist-info/LICENSE.txt b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/contextlib2-0.6.0.dist-info/LICENSE.txt new file mode 100644 index 0000000000000000000000000000000000000000..737fec5c5352af3d9a6a47a0670da4bdb52c5725 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/contextlib2-0.6.0.dist-info/LICENSE.txt @@ -0,0 +1,20 @@ +Copyright (c) 2008-2019 The pip developers (see AUTHORS.txt file) + +Permission is hereby granted, free of charge, to any person obtaining +a copy of this software and associated documentation files (the +"Software"), to deal in the Software without restriction, including +without limitation the rights to use, copy, modify, merge, publish, +distribute, sublicense, and/or sell copies of the Software, and to +permit persons to whom the Software is furnished to do so, subject to +the following conditions: + +The above copyright notice and this permission notice shall be +included in all copies or substantial portions of the Software. 
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
+LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
+OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
+WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/contextlib2-0.6.0.dist-info/METADATA b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/contextlib2-0.6.0.dist-info/METADATA
new file mode 100644
index 0000000000000000000000000000000000000000..f4d89c410dc876674e85238d4a455546ab9e9f5f
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/contextlib2-0.6.0.dist-info/METADATA
@@ -0,0 +1,70 @@
+Metadata-Version: 2.1
+Name: contextlib2
+Version: 0.6.0
+Summary: Backports and enhancements for the contextlib module
+Home-page: http://contextlib2.readthedocs.org
+Author: Nick Coghlan
+Author-email: ncoghlan@gmail.com
+License: PSF License
+Platform: UNKNOWN
+Classifier: Development Status :: 5 - Production/Stable
+Classifier: License :: OSI Approved :: Python Software Foundation License
+Classifier: Programming Language :: Python :: 2
+Classifier: Programming Language :: Python :: 2.7
+Classifier: Programming Language :: Python :: 3
+Classifier: Programming Language :: Python :: 3.4
+Classifier: Programming Language :: Python :: 3.5
+Classifier: Programming Language :: Python :: 3.6
+Classifier: Programming Language :: Python :: 3.7
+Requires-Python: >=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*
+
+.. image:: https://jazzband.co/static/img/badge.svg
+   :target: https://jazzband.co/
+   :alt: Jazzband
+
+.. image:: https://readthedocs.org/projects/contextlib2/badge/?version=latest
+   :target: https://contextlib2.readthedocs.org/
+   :alt: Latest Docs
+
+.. image:: https://img.shields.io/travis/jazzband/contextlib2/master.svg
+   :target: http://travis-ci.org/jazzband/contextlib2
+
+.. image:: https://coveralls.io/repos/github/jazzband/contextlib2/badge.svg?branch=master
+   :target: https://coveralls.io/github/jazzband/contextlib2?branch=master
+
+.. image:: https://landscape.io/github/jazzband/contextlib2/master/landscape.svg
+   :target: https://landscape.io/github/jazzband/contextlib2/
+
+contextlib2 is a backport of the `standard library's contextlib
+module <https://docs.python.org/3/library/contextlib.html>`_ to
+earlier Python versions.
+
+It also serves as a real world proving ground for possible future
+enhancements to the standard library version.
+
+Development
+-----------
+
+contextlib2 has no runtime dependencies, but requires ``unittest2`` for testing
+on Python 2.x, as well as ``setuptools`` and ``wheel`` to generate universal
+wheel archives.
+
+Local testing is just a matter of running ``python test_contextlib2.py``.
+
+You can test against multiple versions of Python with
+`tox <https://tox.readthedocs.io>`_::
+
+    pip install tox
+    tox
+
+Versions currently tested in both tox and Travis CI are:
+
+* CPython 2.7
+* CPython 3.4
+* CPython 3.5
+* CPython 3.6
+* CPython 3.7
+* PyPy
+* PyPy3
+
+
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/contextlib2-0.6.0.dist-info/RECORD b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/contextlib2-0.6.0.dist-info/RECORD
new file mode 100644
index 0000000000000000000000000000000000000000..d64bd47430432624a8c047657a380d425af8abd2
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/contextlib2-0.6.0.dist-info/RECORD
@@ -0,0 +1,11 @@
+contextlib2.py,sha256=5HjGflUzwWAUfcILhSmC2GqvoYdZZzFzVfIDztHigUs,16915
+contextlib2-0.6.0.dist-info/AUTHORS.txt,sha256=RtqU9KfonVGhI48DAA4-yTOBUhBtQTjFhaDzHoyh7uU,21518
+contextlib2-0.6.0.dist-info/LICENSE.txt,sha256=W6Ifuwlk-TatfRU2LR7W1JMcyMj5_y1NkRkOEJvnRDE,1090
+contextlib2-0.6.0.dist-info/METADATA,sha256=gSVIyF9xprVGAJdZdrVoZR9VXpXRWURqyrugGzj66Rk,2291
+contextlib2-0.6.0.dist-info/WHEEL,sha256=kGT74LWyRUZrL4VgLh6_g12IeVl_9u9ZVhadrgXZUEY,110
+contextlib2-0.6.0.dist-info/top_level.txt,sha256=RxWWBMkHA_rsw1laXJ8L3yE_fyYaBmvt2bVUvj3WbMg,12
+contextlib2-0.6.0.dist-info/RECORD,,
+contextlib2-0.6.0.dist-info/INSTALLER,,
+contextlib2-0.6.0.dist-info/__pycache__,,
+contextlib2.cpython-38.pyc,,
+contextlib2-0.6.0.virtualenv,,
\ No newline at end of file
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/contextlib2-0.6.0.dist-info/WHEEL b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/contextlib2-0.6.0.dist-info/WHEEL
new file mode 100644
index 0000000000000000000000000000000000000000..ef99c6cf3283b50a273ac4c6d009a0aa85597070
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/contextlib2-0.6.0.dist-info/WHEEL
@@ -0,0 +1,6 @@
+Wheel-Version: 1.0
+Generator: bdist_wheel (0.34.2)
+Root-Is-Purelib: true
+Tag: py2-none-any
+Tag: py3-none-any
+
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/contextlib2-0.6.0.dist-info/top_level.txt b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/contextlib2-0.6.0.dist-info/top_level.txt
new file mode 100644
index 0000000000000000000000000000000000000000..03fdf8ed24f99c297b402190021f7c75f205a70f
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/contextlib2-0.6.0.dist-info/top_level.txt
@@ -0,0 +1 @@
+contextlib2
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/contextlib2-0.6.0.virtualenv b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/contextlib2-0.6.0.virtualenv
new file mode 100644
index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/contextlib2.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/contextlib2.py
new file mode 100644
index 0000000000000000000000000000000000000000..3aae8f4117cf8824748f5b7bc74b0aca4516bb6a
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/contextlib2.py
@@ -0,0 +1,518 @@
+"""contextlib2 - backports and enhancements to the contextlib module"""
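+# Editor's note -- a usage sketch, not part of the vendored file; it
+# assumes this module is importable as contextlib2:
+#
+#     from contextlib2 import contextmanager, ExitStack
+#
+#     @contextmanager
+#     def tag(name):
+#         print('<%s>' % name)
+#         try:
+#             yield
+#         finally:
+#             print('</%s>' % name)
+#
+#     with ExitStack() as stack:
+#         stack.enter_context(tag('body'))   # closed in LIFO order on exit
+#         print('hello')                     # -> <body> / hello / </body>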
+ +import abc +import sys +import warnings +from collections import deque +from functools import wraps + +__all__ = ["contextmanager", "closing", "nullcontext", + "AbstractContextManager", + "ContextDecorator", "ExitStack", + "redirect_stdout", "redirect_stderr", "suppress"] + +# Backwards compatibility +__all__ += ["ContextStack"] + + +# Backport abc.ABC +if sys.version_info[:2] >= (3, 4): + _abc_ABC = abc.ABC +else: + _abc_ABC = abc.ABCMeta('ABC', (object,), {'__slots__': ()}) + + +# Backport classic class MRO +def _classic_mro(C, result): + if C in result: + return + result.append(C) + for B in C.__bases__: + _classic_mro(B, result) + return result + + +# Backport _collections_abc._check_methods +def _check_methods(C, *methods): + try: + mro = C.__mro__ + except AttributeError: + mro = tuple(_classic_mro(C, [])) + + for method in methods: + for B in mro: + if method in B.__dict__: + if B.__dict__[method] is None: + return NotImplemented + break + else: + return NotImplemented + return True + + +class AbstractContextManager(_abc_ABC): + """An abstract base class for context managers.""" + + def __enter__(self): + """Return `self` upon entering the runtime context.""" + return self + + @abc.abstractmethod + def __exit__(self, exc_type, exc_value, traceback): + """Raise any exception triggered within the runtime context.""" + return None + + @classmethod + def __subclasshook__(cls, C): + """Check whether subclass is considered a subclass of this ABC.""" + if cls is AbstractContextManager: + return _check_methods(C, "__enter__", "__exit__") + return NotImplemented + + +class ContextDecorator(object): + """A base class or mixin that enables context managers to work as decorators.""" + + def refresh_cm(self): + """Returns the context manager used to actually wrap the call to the + decorated function. + + The default implementation just returns *self*. + + Overriding this method allows otherwise one-shot context managers + like _GeneratorContextManager to support use as decorators via + implicit recreation. + + DEPRECATED: refresh_cm was never added to the standard library's + ContextDecorator API + """ + warnings.warn("refresh_cm was never added to the standard library", + DeprecationWarning) + return self._recreate_cm() + + def _recreate_cm(self): + """Return a recreated instance of self. + + Allows an otherwise one-shot context manager like + _GeneratorContextManager to support use as + a decorator via implicit recreation. + + This is a private interface just for _GeneratorContextManager. + See issue #11647 for details. + """ + return self + + def __call__(self, func): + @wraps(func) + def inner(*args, **kwds): + with self._recreate_cm(): + return func(*args, **kwds) + return inner + + +class _GeneratorContextManager(ContextDecorator): + """Helper for @contextmanager decorator.""" + + def __init__(self, func, args, kwds): + self.gen = func(*args, **kwds) + self.func, self.args, self.kwds = func, args, kwds + # Issue 19330: ensure context manager instances have good docstrings + doc = getattr(func, "__doc__", None) + if doc is None: + doc = type(self).__doc__ + self.__doc__ = doc + # Unfortunately, this still doesn't provide good help output when + # inspecting the created context manager instances, since pydoc + # currently bypasses the instance docstring and shows the docstring + # for the class instead. + # See http://bugs.python.org/issue19404 for more details. 
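+    # (Editor's note, usage sketch -- not part of the vendored file: because
+    # _recreate_cm() below returns a fresh instance, a context manager built
+    # with @contextmanager can also be applied as a decorator repeatedly:
+    #
+    #     @contextmanager
+    #     def announce():
+    #         print('start')
+    #         try:
+    #             yield
+    #         finally:
+    #             print('done')
+    #
+    #     @announce()
+    #     def task():
+    #         print('working')
+    #
+    #     task()   # prints: start / working / done
+    #     task()   # works again -- the CM is recreated per call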
+
+    def _recreate_cm(self):
+        # _GCM instances are one-shot context managers, so the
+        # CM must be recreated each time a decorated function is
+        # called
+        return self.__class__(self.func, self.args, self.kwds)
+
+    def __enter__(self):
+        try:
+            return next(self.gen)
+        except StopIteration:
+            raise RuntimeError("generator didn't yield")
+
+    def __exit__(self, type, value, traceback):
+        if type is None:
+            try:
+                next(self.gen)
+            except StopIteration:
+                return
+            else:
+                raise RuntimeError("generator didn't stop")
+        else:
+            if value is None:
+                # Need to force instantiation so we can reliably
+                # tell if we get the same exception back
+                value = type()
+            try:
+                self.gen.throw(type, value, traceback)
+                raise RuntimeError("generator didn't stop after throw()")
+            except StopIteration as exc:
+                # Suppress StopIteration *unless* it's the same exception that
+                # was passed to throw().  This prevents a StopIteration
+                # raised inside the "with" statement from being suppressed.
+                return exc is not value
+            except RuntimeError as exc:
+                # Don't re-raise the passed in exception
+                if exc is value:
+                    return False
+                # Likewise, avoid suppressing if a StopIteration exception
+                # was passed to throw() and later wrapped into a RuntimeError
+                # (see PEP 479).
+                if _HAVE_EXCEPTION_CHAINING and exc.__cause__ is value:
+                    return False
+                raise
+            except:
+                # only re-raise if it's *not* the exception that was
+                # passed to throw(), because __exit__() must not raise
+                # an exception unless __exit__() itself failed.  But throw()
+                # has to raise the exception to signal propagation, so this
+                # fixes the impedance mismatch between the throw() protocol
+                # and the __exit__() protocol.
+                #
+                if sys.exc_info()[1] is not value:
+                    raise
+
+
+def contextmanager(func):
+    """@contextmanager decorator.
+
+    Typical usage:
+
+        @contextmanager
+        def some_generator(<arguments>):
+            <setup>
+            try:
+                yield <value>
+            finally:
+                <cleanup>
+
+    This makes this:
+
+        with some_generator(<arguments>) as <variable>:
+            <body>
+
+    equivalent to this:
+
+        <setup>
+        try:
+            <variable> = <value>
+            <body>
+        finally:
+            <cleanup>
+
+    """
+    @wraps(func)
+    def helper(*args, **kwds):
+        return _GeneratorContextManager(func, args, kwds)
+    return helper
+
+
+class closing(object):
+    """Context to automatically close something at the end of a block.
+
+    Code like this:
+
+        with closing(<module>.open(<arguments>)) as f:
+            <block>
+
+    is equivalent to this:
+
+        f = <module>.open(<arguments>)
+        try:
+            <block>
+        finally:
+            f.close()
+
+    """
+    def __init__(self, thing):
+        self.thing = thing
+
+    def __enter__(self):
+        return self.thing
+
+    def __exit__(self, *exc_info):
+        self.thing.close()
+
+
+class _RedirectStream(object):
+
+    _stream = None
+
+    def __init__(self, new_target):
+        self._new_target = new_target
+        # We use a list of old targets to make this CM re-entrant
+        self._old_targets = []
+
+    def __enter__(self):
+        self._old_targets.append(getattr(sys, self._stream))
+        setattr(sys, self._stream, self._new_target)
+        return self._new_target
+
+    def __exit__(self, exctype, excinst, exctb):
+        setattr(sys, self._stream, self._old_targets.pop())
+
+
+class redirect_stdout(_RedirectStream):
+    """Context manager for temporarily redirecting stdout to another file.
+ + # How to send help() to stderr + with redirect_stdout(sys.stderr): + help(dir) + + # How to write help() to a file + with open('help.txt', 'w') as f: + with redirect_stdout(f): + help(pow) + """ + + _stream = "stdout" + + +class redirect_stderr(_RedirectStream): + """Context manager for temporarily redirecting stderr to another file.""" + + _stream = "stderr" + + +class suppress(object): + """Context manager to suppress specified exceptions + + After the exception is suppressed, execution proceeds with the next + statement following the with statement. + + with suppress(FileNotFoundError): + os.remove(somefile) + # Execution still resumes here if the file was already removed + """ + + def __init__(self, *exceptions): + self._exceptions = exceptions + + def __enter__(self): + pass + + def __exit__(self, exctype, excinst, exctb): + # Unlike isinstance and issubclass, CPython exception handling + # currently only looks at the concrete type hierarchy (ignoring + # the instance and subclass checking hooks). While Guido considers + # that a bug rather than a feature, it's a fairly hard one to fix + # due to various internal implementation details. suppress provides + # the simpler issubclass based semantics, rather than trying to + # exactly reproduce the limitations of the CPython interpreter. + # + # See http://bugs.python.org/issue12029 for more details + return exctype is not None and issubclass(exctype, self._exceptions) + + +# Context manipulation is Python 3 only +_HAVE_EXCEPTION_CHAINING = sys.version_info[0] >= 3 +if _HAVE_EXCEPTION_CHAINING: + def _make_context_fixer(frame_exc): + def _fix_exception_context(new_exc, old_exc): + # Context may not be correct, so find the end of the chain + while 1: + exc_context = new_exc.__context__ + if exc_context is old_exc: + # Context is already set correctly (see issue 20317) + return + if exc_context is None or exc_context is frame_exc: + break + new_exc = exc_context + # Change the end of the chain to point to the exception + # we expect it to reference + new_exc.__context__ = old_exc + return _fix_exception_context + + def _reraise_with_existing_context(exc_details): + try: + # bare "raise exc_details[1]" replaces our carefully + # set-up context + fixed_ctx = exc_details[1].__context__ + raise exc_details[1] + except BaseException: + exc_details[1].__context__ = fixed_ctx + raise +else: + # No exception context in Python 2 + def _make_context_fixer(frame_exc): + return lambda new_exc, old_exc: None + + # Use 3 argument raise in Python 2, + # but use exec to avoid SyntaxError in Python 3 + def _reraise_with_existing_context(exc_details): + exc_type, exc_value, exc_tb = exc_details + exec("raise exc_type, exc_value, exc_tb") + +# Handle old-style classes if they exist +try: + from types import InstanceType +except ImportError: + # Python 3 doesn't have old-style classes + _get_type = type +else: + # Need to handle old-style context managers on Python 2 + def _get_type(obj): + obj_type = type(obj) + if obj_type is InstanceType: + return obj.__class__ # Old-style class + return obj_type # New-style class + + +# Inspired by discussions on http://bugs.python.org/issue13585 +class ExitStack(object): + """Context manager for dynamic management of a stack of exit callbacks + + For example: + + with ExitStack() as stack: + files = [stack.enter_context(open(fname)) for fname in filenames] + # All opened files will automatically be closed at the end of + # the with statement, even if attempts to open files later + # in the list raise an exception + + 
""" + def __init__(self): + self._exit_callbacks = deque() + + def pop_all(self): + """Preserve the context stack by transferring it to a new instance""" + new_stack = type(self)() + new_stack._exit_callbacks = self._exit_callbacks + self._exit_callbacks = deque() + return new_stack + + def _push_cm_exit(self, cm, cm_exit): + """Helper to correctly register callbacks to __exit__ methods""" + def _exit_wrapper(*exc_details): + return cm_exit(cm, *exc_details) + _exit_wrapper.__self__ = cm + self.push(_exit_wrapper) + + def push(self, exit): + """Registers a callback with the standard __exit__ method signature + + Can suppress exceptions the same way __exit__ methods can. + + Also accepts any object with an __exit__ method (registering a call + to the method instead of the object itself) + """ + # We use an unbound method rather than a bound method to follow + # the standard lookup behaviour for special methods + _cb_type = _get_type(exit) + try: + exit_method = _cb_type.__exit__ + except AttributeError: + # Not a context manager, so assume its a callable + self._exit_callbacks.append(exit) + else: + self._push_cm_exit(exit, exit_method) + return exit # Allow use as a decorator + + def callback(self, callback, *args, **kwds): + """Registers an arbitrary callback and arguments. + + Cannot suppress exceptions. + """ + def _exit_wrapper(exc_type, exc, tb): + callback(*args, **kwds) + # We changed the signature, so using @wraps is not appropriate, but + # setting __wrapped__ may still help with introspection + _exit_wrapper.__wrapped__ = callback + self.push(_exit_wrapper) + return callback # Allow use as a decorator + + def enter_context(self, cm): + """Enters the supplied context manager + + If successful, also pushes its __exit__ method as a callback and + returns the result of the __enter__ method. 
+ """ + # We look up the special methods on the type to match the with statement + _cm_type = _get_type(cm) + _exit = _cm_type.__exit__ + result = _cm_type.__enter__(cm) + self._push_cm_exit(cm, _exit) + return result + + def close(self): + """Immediately unwind the context stack""" + self.__exit__(None, None, None) + + def __enter__(self): + return self + + def __exit__(self, *exc_details): + received_exc = exc_details[0] is not None + + # We manipulate the exception state so it behaves as though + # we were actually nesting multiple with statements + frame_exc = sys.exc_info()[1] + _fix_exception_context = _make_context_fixer(frame_exc) + + # Callbacks are invoked in LIFO order to match the behaviour of + # nested context managers + suppressed_exc = False + pending_raise = False + while self._exit_callbacks: + cb = self._exit_callbacks.pop() + try: + if cb(*exc_details): + suppressed_exc = True + pending_raise = False + exc_details = (None, None, None) + except: + new_exc_details = sys.exc_info() + # simulate the stack of exceptions by setting the context + _fix_exception_context(new_exc_details[1], exc_details[1]) + pending_raise = True + exc_details = new_exc_details + if pending_raise: + _reraise_with_existing_context(exc_details) + return received_exc and suppressed_exc + + +# Preserve backwards compatibility +class ContextStack(ExitStack): + """Backwards compatibility alias for ExitStack""" + + def __init__(self): + warnings.warn("ContextStack has been renamed to ExitStack", + DeprecationWarning) + super(ContextStack, self).__init__() + + def register_exit(self, callback): + return self.push(callback) + + def register(self, callback, *args, **kwds): + return self.callback(callback, *args, **kwds) + + def preserve(self): + return self.pop_all() + + +class nullcontext(AbstractContextManager): + """Context manager that does no additional processing. + Used as a stand-in for a normal context manager, when a particular + block of code is only sometimes used with a normal context manager: + cm = optional_cm if condition else nullcontext() + with cm: + # Perform operation, using optional_cm if condition is True + """ + + def __init__(self, enter_result=None): + self.enter_result = enter_result + + def __enter__(self): + return self.enter_result + + def __exit__(self, *excinfo): + pass diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/dateutil/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/dateutil/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..0defb82e21f21da442706e25145b4ef0b59d576c --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/dateutil/__init__.py @@ -0,0 +1,8 @@ +# -*- coding: utf-8 -*- +try: + from ._version import version as __version__ +except ImportError: + __version__ = 'unknown' + +__all__ = ['easter', 'parser', 'relativedelta', 'rrule', 'tz', + 'utils', 'zoneinfo'] diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/dateutil/_common.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/dateutil/_common.py new file mode 100644 index 0000000000000000000000000000000000000000..4eb2659bd2986125fcfb4afea5bae9efc2dcd1a0 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/dateutil/_common.py @@ -0,0 +1,43 @@ +""" +Common code used in multiple modules. 
+""" + + +class weekday(object): + __slots__ = ["weekday", "n"] + + def __init__(self, weekday, n=None): + self.weekday = weekday + self.n = n + + def __call__(self, n): + if n == self.n: + return self + else: + return self.__class__(self.weekday, n) + + def __eq__(self, other): + try: + if self.weekday != other.weekday or self.n != other.n: + return False + except AttributeError: + return False + return True + + def __hash__(self): + return hash(( + self.weekday, + self.n, + )) + + def __ne__(self, other): + return not (self == other) + + def __repr__(self): + s = ("MO", "TU", "WE", "TH", "FR", "SA", "SU")[self.weekday] + if not self.n: + return s + else: + return "%s(%+d)" % (s, self.n) + +# vim:ts=4:sw=4:et diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/dateutil/_version.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/dateutil/_version.py new file mode 100644 index 0000000000000000000000000000000000000000..eac12096982504fedcf97d9dd75970ac513af9d9 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/dateutil/_version.py @@ -0,0 +1,4 @@ +# coding: utf-8 +# file generated by setuptools_scm +# don't change, don't track in version control +version = '2.8.1' diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/dateutil/easter.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/dateutil/easter.py new file mode 100644 index 0000000000000000000000000000000000000000..53b7c78938f193d5b0a10216bb2e3888e9710dfa --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/dateutil/easter.py @@ -0,0 +1,89 @@ +# -*- coding: utf-8 -*- +""" +This module offers a generic easter computing method for any given year, using +Western, Orthodox or Julian algorithms. +""" + +import datetime + +__all__ = ["easter", "EASTER_JULIAN", "EASTER_ORTHODOX", "EASTER_WESTERN"] + +EASTER_JULIAN = 1 +EASTER_ORTHODOX = 2 +EASTER_WESTERN = 3 + + +def easter(year, method=EASTER_WESTERN): + """ + This method was ported from the work done by GM Arts, + on top of the algorithm by Claus Tondering, which was + based in part on the algorithm of Ouding (1940), as + quoted in "Explanatory Supplement to the Astronomical + Almanac", P. Kenneth Seidelmann, editor. + + This algorithm implements three different easter + calculation methods: + + 1 - Original calculation in Julian calendar, valid in + dates after 326 AD + 2 - Original method, with date converted to Gregorian + calendar, valid in years 1583 to 4099 + 3 - Revised method, in Gregorian calendar, valid in + years 1583 to 4099 as well + + These methods are represented by the constants: + + * ``EASTER_JULIAN = 1`` + * ``EASTER_ORTHODOX = 2`` + * ``EASTER_WESTERN = 3`` + + The default method is method 3. 
+ + More about the algorithm may be found at: + + `GM Arts: Easter Algorithms `_ + + and + + `The Calendar FAQ: Easter `_ + + """ + + if not (1 <= method <= 3): + raise ValueError("invalid method") + + # g - Golden year - 1 + # c - Century + # h - (23 - Epact) mod 30 + # i - Number of days from March 21 to Paschal Full Moon + # j - Weekday for PFM (0=Sunday, etc) + # p - Number of days from March 21 to Sunday on or before PFM + # (-6 to 28 methods 1 & 3, to 56 for method 2) + # e - Extra days to add for method 2 (converting Julian + # date to Gregorian date) + + y = year + g = y % 19 + e = 0 + if method < 3: + # Old method + i = (19*g + 15) % 30 + j = (y + y//4 + i) % 7 + if method == 2: + # Extra dates to convert Julian to Gregorian date + e = 10 + if y > 1600: + e = e + y//100 - 16 - (y//100 - 16)//4 + else: + # New method + c = y//100 + h = (c - c//4 - (8*c + 13)//25 + 19*g + 15) % 30 + i = h - (h//28)*(1 - (h//28)*(29//(h + 1))*((21 - g)//11)) + j = (y + y//4 + i + 2 - c + c//4) % 7 + + # p can be from -6 to 56 corresponding to dates 22 March to 23 May + # (later dates apply to method 2, although 23 May never actually occurs) + p = i - j + e + d = 1 + (p + 27 + (p + 6)//40) % 31 + m = 3 + (p + 26)//30 + return datetime.date(int(y), int(m), int(d)) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/dateutil/parser/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/dateutil/parser/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..d174b0e4dcc472999b75e55ebb88af320ae38081 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/dateutil/parser/__init__.py @@ -0,0 +1,61 @@ +# -*- coding: utf-8 -*- +from ._parser import parse, parser, parserinfo, ParserError +from ._parser import DEFAULTPARSER, DEFAULTTZPARSER +from ._parser import UnknownTimezoneWarning + +from ._parser import __doc__ + +from .isoparser import isoparser, isoparse + +__all__ = ['parse', 'parser', 'parserinfo', + 'isoparse', 'isoparser', + 'ParserError', + 'UnknownTimezoneWarning'] + + +### +# Deprecate portions of the private interface so that downstream code that +# is improperly relying on it is given *some* notice. 
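+# The wrappers below implement the usual soft-deprecation pattern: the old
+# private names keep working, but constructing them emits a
+# DeprecationWarning. Roughly (a sketch, assuming warnings are unfiltered):
+#
+#     >>> import warnings
+#     >>> warnings.simplefilter("always")
+#     >>> from dateutil.parser import _timelex
+#     >>> _timelex.split("2003-09-25")   # still works, but warns
+#     ['2003', '-', '09', '-', '25']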
+ + +def __deprecated_private_func(f): + from functools import wraps + import warnings + + msg = ('{name} is a private function and may break without warning, ' + 'it will be moved and or renamed in future versions.') + msg = msg.format(name=f.__name__) + + @wraps(f) + def deprecated_func(*args, **kwargs): + warnings.warn(msg, DeprecationWarning) + return f(*args, **kwargs) + + return deprecated_func + +def __deprecate_private_class(c): + import warnings + + msg = ('{name} is a private class and may break without warning, ' + 'it will be moved and or renamed in future versions.') + msg = msg.format(name=c.__name__) + + class private_class(c): + __doc__ = c.__doc__ + + def __init__(self, *args, **kwargs): + warnings.warn(msg, DeprecationWarning) + super(private_class, self).__init__(*args, **kwargs) + + private_class.__name__ = c.__name__ + + return private_class + + +from ._parser import _timelex, _resultbase +from ._parser import _tzparser, _parsetz + +_timelex = __deprecate_private_class(_timelex) +_tzparser = __deprecate_private_class(_tzparser) +_resultbase = __deprecate_private_class(_resultbase) +_parsetz = __deprecated_private_func(_parsetz) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/dateutil/parser/_parser.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/dateutil/parser/_parser.py new file mode 100644 index 0000000000000000000000000000000000000000..458aa6a3294fcd70efff38e6f447b232ae2afc21 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/dateutil/parser/_parser.py @@ -0,0 +1,1609 @@ +# -*- coding: utf-8 -*- +""" +This module offers a generic date/time string parser which is able to parse +most known formats to represent a date and/or time. + +This module attempts to be forgiving with regards to unlikely input formats, +returning a datetime object even for dates which are ambiguous. If an element +of a date/time stamp is omitted, the following rules are applied: + +- If AM or PM is left unspecified, a 24-hour clock is assumed, however, an hour + on a 12-hour clock (``0 <= hour <= 12``) *must* be specified if AM or PM is + specified. +- If a time zone is omitted, a timezone-naive datetime is returned. + +If any other elements are missing, they are taken from the +:class:`datetime.datetime` object passed to the parameter ``default``. If this +results in a day number exceeding the valid number of days per month, the +value falls back to the end of the month. + +Additional resources about date/time string formats can be found below: + +- `A summary of the international standard date and time notation + `_ +- `W3C Date and Time Formats `_ +- `Time Formats (Planetary Rings Node) `_ +- `CPAN ParseDate module + `_ +- `Java SimpleDateFormat Class + `_ +""" +from __future__ import unicode_literals + +import datetime +import re +import string +import time +import warnings + +from calendar import monthrange +from io import StringIO + +import six +from six import integer_types, text_type + +from decimal import Decimal + +from warnings import warn + +from .. import relativedelta +from .. import tz + +__all__ = ["parse", "parserinfo", "ParserError"] + + +# TODO: pandas.core.tools.datetimes imports this explicitly. Might be worth +# making public and/or figuring out if there is something we can +# take off their plate. 
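+# For orientation, the lexer below splits on character-class changes and
+# resolves the dot/comma ambiguity after the fact. Observed behaviour
+# (private API, subject to change):
+#
+#     >>> _timelex.split("Sep.20.2009")
+#     ['Sep', '.', '20', '.', '2009']
+#     >>> _timelex.split("4:30:21.447")
+#     ['4', ':', '30', ':', '21.447']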
+class _timelex(object): + # Fractional seconds are sometimes split by a comma + _split_decimal = re.compile("([.,])") + + def __init__(self, instream): + if six.PY2: + # In Python 2, we can't duck type properly because unicode has + # a 'decode' function, and we'd be double-decoding + if isinstance(instream, (bytes, bytearray)): + instream = instream.decode() + else: + if getattr(instream, 'decode', None) is not None: + instream = instream.decode() + + if isinstance(instream, text_type): + instream = StringIO(instream) + elif getattr(instream, 'read', None) is None: + raise TypeError('Parser must be a string or character stream, not ' + '{itype}'.format(itype=instream.__class__.__name__)) + + self.instream = instream + self.charstack = [] + self.tokenstack = [] + self.eof = False + + def get_token(self): + """ + This function breaks the time string into lexical units (tokens), which + can be parsed by the parser. Lexical units are demarcated by changes in + the character set, so any continuous string of letters is considered + one unit, any continuous string of numbers is considered one unit. + + The main complication arises from the fact that dots ('.') can be used + both as separators (e.g. "Sep.20.2009") or decimal points (e.g. + "4:30:21.447"). As such, it is necessary to read the full context of + any dot-separated strings before breaking it into tokens; as such, this + function maintains a "token stack", for when the ambiguous context + demands that multiple tokens be parsed at once. + """ + if self.tokenstack: + return self.tokenstack.pop(0) + + seenletters = False + token = None + state = None + + while not self.eof: + # We only realize that we've reached the end of a token when we + # find a character that's not part of the current token - since + # that character may be part of the next token, it's stored in the + # charstack. + if self.charstack: + nextchar = self.charstack.pop(0) + else: + nextchar = self.instream.read(1) + while nextchar == '\x00': + nextchar = self.instream.read(1) + + if not nextchar: + self.eof = True + break + elif not state: + # First character of the token - determines if we're starting + # to parse a word, a number or something else. + token = nextchar + if self.isword(nextchar): + state = 'a' + elif self.isnum(nextchar): + state = '0' + elif self.isspace(nextchar): + token = ' ' + break # emit token + else: + break # emit token + elif state == 'a': + # If we've already started reading a word, we keep reading + # letters until we find something that's not part of a word. + seenletters = True + if self.isword(nextchar): + token += nextchar + elif nextchar == '.': + token += nextchar + state = 'a.' + else: + self.charstack.append(nextchar) + break # emit token + elif state == '0': + # If we've already started reading a number, we keep reading + # numbers until we find something that doesn't fit. + if self.isnum(nextchar): + token += nextchar + elif nextchar == '.' or (nextchar == ',' and len(token) >= 2): + token += nextchar + state = '0.' + else: + self.charstack.append(nextchar) + break # emit token + elif state == 'a.': + # If we've seen some letters and a dot separator, continue + # parsing, and the tokens will be broken up later. + seenletters = True + if nextchar == '.' or self.isword(nextchar): + token += nextchar + elif self.isnum(nextchar) and token[-1] == '.': + token += nextchar + state = '0.' 
+ else: + self.charstack.append(nextchar) + break # emit token + elif state == '0.': + # If we've seen at least one dot separator, keep going, we'll + # break up the tokens later. + if nextchar == '.' or self.isnum(nextchar): + token += nextchar + elif self.isword(nextchar) and token[-1] == '.': + token += nextchar + state = 'a.' + else: + self.charstack.append(nextchar) + break # emit token + + if (state in ('a.', '0.') and (seenletters or token.count('.') > 1 or + token[-1] in '.,')): + l = self._split_decimal.split(token) + token = l[0] + for tok in l[1:]: + if tok: + self.tokenstack.append(tok) + + if state == '0.' and token.count('.') == 0: + token = token.replace(',', '.') + + return token + + def __iter__(self): + return self + + def __next__(self): + token = self.get_token() + if token is None: + raise StopIteration + + return token + + def next(self): + return self.__next__() # Python 2.x support + + @classmethod + def split(cls, s): + return list(cls(s)) + + @classmethod + def isword(cls, nextchar): + """ Whether or not the next character is part of a word """ + return nextchar.isalpha() + + @classmethod + def isnum(cls, nextchar): + """ Whether the next character is part of a number """ + return nextchar.isdigit() + + @classmethod + def isspace(cls, nextchar): + """ Whether the next character is whitespace """ + return nextchar.isspace() + + +class _resultbase(object): + + def __init__(self): + for attr in self.__slots__: + setattr(self, attr, None) + + def _repr(self, classname): + l = [] + for attr in self.__slots__: + value = getattr(self, attr) + if value is not None: + l.append("%s=%s" % (attr, repr(value))) + return "%s(%s)" % (classname, ", ".join(l)) + + def __len__(self): + return (sum(getattr(self, attr) is not None + for attr in self.__slots__)) + + def __repr__(self): + return self._repr(self.__class__.__name__) + + +class parserinfo(object): + """ + Class which handles what inputs are accepted. Subclass this to customize + the language and acceptable values for each parameter. + + :param dayfirst: + Whether to interpret the first value in an ambiguous 3-integer date + (e.g. 01/05/09) as the day (``True``) or month (``False``). If + ``yearfirst`` is set to ``True``, this distinguishes between YDM + and YMD. Default is ``False``. + + :param yearfirst: + Whether to interpret the first value in an ambiguous 3-integer date + (e.g. 01/05/09) as the year. If ``True``, the first number is taken + to be the year, otherwise the last number is taken to be the year. + Default is ``False``. 
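+
+    For example, with the ambiguous date ``01/05/09`` (a sketch):
+
+    .. doctest::
+
+        >>> from dateutil.parser import parse, parserinfo
+        >>> parse("01/05/09", parserinfo=parserinfo(dayfirst=True))
+        datetime.datetime(2009, 5, 1, 0, 0)
+        >>> parse("01/05/09", parserinfo=parserinfo(yearfirst=True))
+        datetime.datetime(2001, 5, 9, 0, 0)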
+ """ + + # m from a.m/p.m, t from ISO T separator + JUMP = [" ", ".", ",", ";", "-", "/", "'", + "at", "on", "and", "ad", "m", "t", "of", + "st", "nd", "rd", "th"] + + WEEKDAYS = [("Mon", "Monday"), + ("Tue", "Tuesday"), # TODO: "Tues" + ("Wed", "Wednesday"), + ("Thu", "Thursday"), # TODO: "Thurs" + ("Fri", "Friday"), + ("Sat", "Saturday"), + ("Sun", "Sunday")] + MONTHS = [("Jan", "January"), + ("Feb", "February"), # TODO: "Febr" + ("Mar", "March"), + ("Apr", "April"), + ("May", "May"), + ("Jun", "June"), + ("Jul", "July"), + ("Aug", "August"), + ("Sep", "Sept", "September"), + ("Oct", "October"), + ("Nov", "November"), + ("Dec", "December")] + HMS = [("h", "hour", "hours"), + ("m", "minute", "minutes"), + ("s", "second", "seconds")] + AMPM = [("am", "a"), + ("pm", "p")] + UTCZONE = ["UTC", "GMT", "Z", "z"] + PERTAIN = ["of"] + TZOFFSET = {} + # TODO: ERA = ["AD", "BC", "CE", "BCE", "Stardate", + # "Anno Domini", "Year of Our Lord"] + + def __init__(self, dayfirst=False, yearfirst=False): + self._jump = self._convert(self.JUMP) + self._weekdays = self._convert(self.WEEKDAYS) + self._months = self._convert(self.MONTHS) + self._hms = self._convert(self.HMS) + self._ampm = self._convert(self.AMPM) + self._utczone = self._convert(self.UTCZONE) + self._pertain = self._convert(self.PERTAIN) + + self.dayfirst = dayfirst + self.yearfirst = yearfirst + + self._year = time.localtime().tm_year + self._century = self._year // 100 * 100 + + def _convert(self, lst): + dct = {} + for i, v in enumerate(lst): + if isinstance(v, tuple): + for v in v: + dct[v.lower()] = i + else: + dct[v.lower()] = i + return dct + + def jump(self, name): + return name.lower() in self._jump + + def weekday(self, name): + try: + return self._weekdays[name.lower()] + except KeyError: + pass + return None + + def month(self, name): + try: + return self._months[name.lower()] + 1 + except KeyError: + pass + return None + + def hms(self, name): + try: + return self._hms[name.lower()] + except KeyError: + return None + + def ampm(self, name): + try: + return self._ampm[name.lower()] + except KeyError: + return None + + def pertain(self, name): + return name.lower() in self._pertain + + def utczone(self, name): + return name.lower() in self._utczone + + def tzoffset(self, name): + if name in self._utczone: + return 0 + + return self.TZOFFSET.get(name) + + def convertyear(self, year, century_specified=False): + """ + Converts two-digit years to year within [-50, 49] + range of self._year (current local time) + """ + + # Function contract is that the year is always positive + assert year >= 0 + + if year < 100 and not century_specified: + # assume current century to start + year += self._century + + if year >= self._year + 50: # if too far in future + year -= 100 + elif year < self._year - 50: # if too far in past + year += 100 + + return year + + def validate(self, res): + # move to info + if res.year is not None: + res.year = self.convertyear(res.year, res.century_specified) + + if ((res.tzoffset == 0 and not res.tzname) or + (res.tzname == 'Z' or res.tzname == 'z')): + res.tzname = "UTC" + res.tzoffset = 0 + elif res.tzoffset != 0 and res.tzname and self.utczone(res.tzname): + res.tzoffset = 0 + return True + + +class _ymd(list): + def __init__(self, *args, **kwargs): + super(self.__class__, self).__init__(*args, **kwargs) + self.century_specified = False + self.dstridx = None + self.mstridx = None + self.ystridx = None + + @property + def has_year(self): + return self.ystridx is not None + + @property + def has_month(self): + 
return self.mstridx is not None + + @property + def has_day(self): + return self.dstridx is not None + + def could_be_day(self, value): + if self.has_day: + return False + elif not self.has_month: + return 1 <= value <= 31 + elif not self.has_year: + # Be permissive, assume leap year + month = self[self.mstridx] + return 1 <= value <= monthrange(2000, month)[1] + else: + month = self[self.mstridx] + year = self[self.ystridx] + return 1 <= value <= monthrange(year, month)[1] + + def append(self, val, label=None): + if hasattr(val, '__len__'): + if val.isdigit() and len(val) > 2: + self.century_specified = True + if label not in [None, 'Y']: # pragma: no cover + raise ValueError(label) + label = 'Y' + elif val > 100: + self.century_specified = True + if label not in [None, 'Y']: # pragma: no cover + raise ValueError(label) + label = 'Y' + + super(self.__class__, self).append(int(val)) + + if label == 'M': + if self.has_month: + raise ValueError('Month is already set') + self.mstridx = len(self) - 1 + elif label == 'D': + if self.has_day: + raise ValueError('Day is already set') + self.dstridx = len(self) - 1 + elif label == 'Y': + if self.has_year: + raise ValueError('Year is already set') + self.ystridx = len(self) - 1 + + def _resolve_from_stridxs(self, strids): + """ + Try to resolve the identities of year/month/day elements using + ystridx, mstridx, and dstridx, if enough of these are specified. + """ + if len(self) == 3 and len(strids) == 2: + # we can back out the remaining stridx value + missing = [x for x in range(3) if x not in strids.values()] + key = [x for x in ['y', 'm', 'd'] if x not in strids] + assert len(missing) == len(key) == 1 + key = key[0] + val = missing[0] + strids[key] = val + + assert len(self) == len(strids) # otherwise this should not be called + out = {key: self[strids[key]] for key in strids} + return (out.get('y'), out.get('m'), out.get('d')) + + def resolve_ymd(self, yearfirst, dayfirst): + len_ymd = len(self) + year, month, day = (None, None, None) + + strids = (('y', self.ystridx), + ('m', self.mstridx), + ('d', self.dstridx)) + + strids = {key: val for key, val in strids if val is not None} + if (len(self) == len(strids) > 0 or + (len(self) == 3 and len(strids) == 2)): + return self._resolve_from_stridxs(strids) + + mstridx = self.mstridx + + if len_ymd > 3: + raise ValueError("More than three YMD values") + elif len_ymd == 1 or (mstridx is not None and len_ymd == 2): + # One member, or two members with a month string + if mstridx is not None: + month = self[mstridx] + # since mstridx is 0 or 1, self[mstridx-1] always + # looks up the other element + other = self[mstridx - 1] + else: + other = self[0] + + if len_ymd > 1 or mstridx is None: + if other > 31: + year = other + else: + day = other + + elif len_ymd == 2: + # Two members with numbers + if self[0] > 31: + # 99-01 + year, month = self + elif self[1] > 31: + # 01-99 + month, year = self + elif dayfirst and self[1] <= 12: + # 13-01 + day, month = self + else: + # 01-13 + month, day = self + + elif len_ymd == 3: + # Three members + if mstridx == 0: + if self[1] > 31: + # Apr-2003-25 + month, year, day = self + else: + month, day, year = self + elif mstridx == 1: + if self[0] > 31 or (yearfirst and self[2] <= 31): + # 99-Jan-01 + year, month, day = self + else: + # 01-Jan-01 + # Give precedence to day-first, since + # two-digit years is usually hand-written. + day, month, year = self + + elif mstridx == 2: + # WTF!? 
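+ # i.e. the month name is the *last* of the three tokens, as in
+ # "01-99-Jan"; whichever remaining number cannot be a day (> 31)
+ # is taken to be the year.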
+ if self[1] > 31: + # 01-99-Jan + day, year, month = self + else: + # 99-01-Jan + year, day, month = self + + else: + if (self[0] > 31 or + self.ystridx == 0 or + (yearfirst and self[1] <= 12 and self[2] <= 31)): + # 99-01-01 + if dayfirst and self[2] <= 12: + year, day, month = self + else: + year, month, day = self + elif self[0] > 12 or (dayfirst and self[1] <= 12): + # 13-01-01 + day, month, year = self + else: + # 01-13-01 + month, day, year = self + + return year, month, day + + +class parser(object): + def __init__(self, info=None): + self.info = info or parserinfo() + + def parse(self, timestr, default=None, + ignoretz=False, tzinfos=None, **kwargs): + """ + Parse the date/time string into a :class:`datetime.datetime` object. + + :param timestr: + Any date/time string using the supported formats. + + :param default: + The default datetime object, if this is a datetime object and not + ``None``, elements specified in ``timestr`` replace elements in the + default object. + + :param ignoretz: + If set ``True``, time zones in parsed strings are ignored and a + naive :class:`datetime.datetime` object is returned. + + :param tzinfos: + Additional time zone names / aliases which may be present in the + string. This argument maps time zone names (and optionally offsets + from those time zones) to time zones. This parameter can be a + dictionary with timezone aliases mapping time zone names to time + zones or a function taking two parameters (``tzname`` and + ``tzoffset``) and returning a time zone. + + The timezones to which the names are mapped can be an integer + offset from UTC in seconds or a :class:`tzinfo` object. + + .. doctest:: + :options: +NORMALIZE_WHITESPACE + + >>> from dateutil.parser import parse + >>> from dateutil.tz import gettz + >>> tzinfos = {"BRST": -7200, "CST": gettz("America/Chicago")} + >>> parse("2012-01-19 17:21:00 BRST", tzinfos=tzinfos) + datetime.datetime(2012, 1, 19, 17, 21, tzinfo=tzoffset(u'BRST', -7200)) + >>> parse("2012-01-19 17:21:00 CST", tzinfos=tzinfos) + datetime.datetime(2012, 1, 19, 17, 21, + tzinfo=tzfile('/usr/share/zoneinfo/America/Chicago')) + + This parameter is ignored if ``ignoretz`` is set. + + :param \\*\\*kwargs: + Keyword arguments as passed to ``_parse()``. + + :return: + Returns a :class:`datetime.datetime` object or, if the + ``fuzzy_with_tokens`` option is ``True``, returns a tuple, the + first element being a :class:`datetime.datetime` object, the second + a tuple containing the fuzzy tokens. + + :raises ParserError: + Raised for invalid or unknown string format, if the provided + :class:`tzinfo` is not in a valid format, or if an invalid date + would be created. + + :raises TypeError: + Raised for non-string or character stream input. + + :raises OverflowError: + Raised if the parsed date exceeds the largest valid C integer on + your system. 
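+
+        For illustration, components missing from ``timestr`` are taken
+        from ``default`` (a sketch):
+
+        .. doctest::
+
+            >>> import datetime
+            >>> from dateutil.parser import parse
+            >>> parse("14:30", default=datetime.datetime(2003, 9, 25))
+            datetime.datetime(2003, 9, 25, 14, 30)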
+ """ + + if default is None: + default = datetime.datetime.now().replace(hour=0, minute=0, + second=0, microsecond=0) + + res, skipped_tokens = self._parse(timestr, **kwargs) + + if res is None: + raise ParserError("Unknown string format: %s", timestr) + + if len(res) == 0: + raise ParserError("String does not contain a date: %s", timestr) + + try: + ret = self._build_naive(res, default) + except ValueError as e: + six.raise_from(ParserError(e.args[0] + ": %s", timestr), e) + + if not ignoretz: + ret = self._build_tzaware(ret, res, tzinfos) + + if kwargs.get('fuzzy_with_tokens', False): + return ret, skipped_tokens + else: + return ret + + class _result(_resultbase): + __slots__ = ["year", "month", "day", "weekday", + "hour", "minute", "second", "microsecond", + "tzname", "tzoffset", "ampm","any_unused_tokens"] + + def _parse(self, timestr, dayfirst=None, yearfirst=None, fuzzy=False, + fuzzy_with_tokens=False): + """ + Private method which performs the heavy lifting of parsing, called from + ``parse()``, which passes on its ``kwargs`` to this function. + + :param timestr: + The string to parse. + + :param dayfirst: + Whether to interpret the first value in an ambiguous 3-integer date + (e.g. 01/05/09) as the day (``True``) or month (``False``). If + ``yearfirst`` is set to ``True``, this distinguishes between YDM + and YMD. If set to ``None``, this value is retrieved from the + current :class:`parserinfo` object (which itself defaults to + ``False``). + + :param yearfirst: + Whether to interpret the first value in an ambiguous 3-integer date + (e.g. 01/05/09) as the year. If ``True``, the first number is taken + to be the year, otherwise the last number is taken to be the year. + If this is set to ``None``, the value is retrieved from the current + :class:`parserinfo` object (which itself defaults to ``False``). + + :param fuzzy: + Whether to allow fuzzy parsing, allowing for string like "Today is + January 1, 2047 at 8:21:00AM". + + :param fuzzy_with_tokens: + If ``True``, ``fuzzy`` is automatically set to True, and the parser + will return a tuple where the first element is the parsed + :class:`datetime.datetime` datetimestamp and the second element is + a tuple containing the portions of the string which were ignored: + + .. 
doctest:: + + >>> from dateutil.parser import parse + >>> parse("Today is January 1, 2047 at 8:21:00AM", fuzzy_with_tokens=True) + (datetime.datetime(2047, 1, 1, 8, 21), (u'Today is ', u' ', u'at ')) + + """ + if fuzzy_with_tokens: + fuzzy = True + + info = self.info + + if dayfirst is None: + dayfirst = info.dayfirst + + if yearfirst is None: + yearfirst = info.yearfirst + + res = self._result() + l = _timelex.split(timestr) # Splits the timestr into tokens + + skipped_idxs = [] + + # year/month/day list + ymd = _ymd() + + len_l = len(l) + i = 0 + try: + while i < len_l: + + # Check if it's a number + value_repr = l[i] + try: + value = float(value_repr) + except ValueError: + value = None + + if value is not None: + # Numeric token + i = self._parse_numeric_token(l, i, info, ymd, res, fuzzy) + + # Check weekday + elif info.weekday(l[i]) is not None: + value = info.weekday(l[i]) + res.weekday = value + + # Check month name + elif info.month(l[i]) is not None: + value = info.month(l[i]) + ymd.append(value, 'M') + + if i + 1 < len_l: + if l[i + 1] in ('-', '/'): + # Jan-01[-99] + sep = l[i + 1] + ymd.append(l[i + 2]) + + if i + 3 < len_l and l[i + 3] == sep: + # Jan-01-99 + ymd.append(l[i + 4]) + i += 2 + + i += 2 + + elif (i + 4 < len_l and l[i + 1] == l[i + 3] == ' ' and + info.pertain(l[i + 2])): + # Jan of 01 + # In this case, 01 is clearly year + if l[i + 4].isdigit(): + # Convert it here to become unambiguous + value = int(l[i + 4]) + year = str(info.convertyear(value)) + ymd.append(year, 'Y') + else: + # Wrong guess + pass + # TODO: not hit in tests + i += 4 + + # Check am/pm + elif info.ampm(l[i]) is not None: + value = info.ampm(l[i]) + val_is_ampm = self._ampm_valid(res.hour, res.ampm, fuzzy) + + if val_is_ampm: + res.hour = self._adjust_ampm(res.hour, value) + res.ampm = value + + elif fuzzy: + skipped_idxs.append(i) + + # Check for a timezone name + elif self._could_be_tzname(res.hour, res.tzname, res.tzoffset, l[i]): + res.tzname = l[i] + res.tzoffset = info.tzoffset(res.tzname) + + # Check for something like GMT+3, or BRST+3. Notice + # that it doesn't mean "I am 3 hours after GMT", but + # "my time +3 is GMT". If found, we reverse the + # logic so that timezone parsing code will get it + # right. + if i + 1 < len_l and l[i + 1] in ('+', '-'): + l[i + 1] = ('+', '-')[l[i + 1] == '+'] + res.tzoffset = None + if info.utczone(res.tzname): + # With something like GMT+3, the timezone + # is *not* GMT. + res.tzname = None + + # Check for a numbered timezone + elif res.hour is not None and l[i] in ('+', '-'): + signal = (-1, 1)[l[i] == '+'] + len_li = len(l[i + 1]) + + # TODO: check that l[i + 1] is integer? + if len_li == 4: + # -0300 + hour_offset = int(l[i + 1][:2]) + min_offset = int(l[i + 1][2:]) + elif i + 2 < len_l and l[i + 2] == ':': + # -03:00 + hour_offset = int(l[i + 1]) + min_offset = int(l[i + 3]) # TODO: Check that l[i+3] is minute-like? 
+ i += 2 + elif len_li <= 2: + # -[0]3 + hour_offset = int(l[i + 1][:2]) + min_offset = 0 + else: + raise ValueError(timestr) + + res.tzoffset = signal * (hour_offset * 3600 + min_offset * 60) + + # Look for a timezone name between parenthesis + if (i + 5 < len_l and + info.jump(l[i + 2]) and l[i + 3] == '(' and + l[i + 5] == ')' and + 3 <= len(l[i + 4]) and + self._could_be_tzname(res.hour, res.tzname, + None, l[i + 4])): + # -0300 (BRST) + res.tzname = l[i + 4] + i += 4 + + i += 1 + + # Check jumps + elif not (info.jump(l[i]) or fuzzy): + raise ValueError(timestr) + + else: + skipped_idxs.append(i) + i += 1 + + # Process year/month/day + year, month, day = ymd.resolve_ymd(yearfirst, dayfirst) + + res.century_specified = ymd.century_specified + res.year = year + res.month = month + res.day = day + + except (IndexError, ValueError): + return None, None + + if not info.validate(res): + return None, None + + if fuzzy_with_tokens: + skipped_tokens = self._recombine_skipped(l, skipped_idxs) + return res, tuple(skipped_tokens) + else: + return res, None + + def _parse_numeric_token(self, tokens, idx, info, ymd, res, fuzzy): + # Token is a number + value_repr = tokens[idx] + try: + value = self._to_decimal(value_repr) + except Exception as e: + six.raise_from(ValueError('Unknown numeric token'), e) + + len_li = len(value_repr) + + len_l = len(tokens) + + if (len(ymd) == 3 and len_li in (2, 4) and + res.hour is None and + (idx + 1 >= len_l or + (tokens[idx + 1] != ':' and + info.hms(tokens[idx + 1]) is None))): + # 19990101T23[59] + s = tokens[idx] + res.hour = int(s[:2]) + + if len_li == 4: + res.minute = int(s[2:]) + + elif len_li == 6 or (len_li > 6 and tokens[idx].find('.') == 6): + # YYMMDD or HHMMSS[.ss] + s = tokens[idx] + + if not ymd and '.' not in tokens[idx]: + ymd.append(s[:2]) + ymd.append(s[2:4]) + ymd.append(s[4:]) + else: + # 19990101T235959[.59] + + # TODO: Check if res attributes already set. + res.hour = int(s[:2]) + res.minute = int(s[2:4]) + res.second, res.microsecond = self._parsems(s[4:]) + + elif len_li in (8, 12, 14): + # YYYYMMDD + s = tokens[idx] + ymd.append(s[:4], 'Y') + ymd.append(s[4:6]) + ymd.append(s[6:8]) + + if len_li > 8: + res.hour = int(s[8:10]) + res.minute = int(s[10:12]) + + if len_li > 12: + res.second = int(s[12:]) + + elif self._find_hms_idx(idx, tokens, info, allow_jump=True) is not None: + # HH[ ]h or MM[ ]m or SS[.ss][ ]s + hms_idx = self._find_hms_idx(idx, tokens, info, allow_jump=True) + (idx, hms) = self._parse_hms(idx, tokens, info, hms_idx) + if hms is not None: + # TODO: checking that hour/minute/second are not + # already set? + self._assign_hms(res, value_repr, hms) + + elif idx + 2 < len_l and tokens[idx + 1] == ':': + # HH:MM[:SS[.ss]] + res.hour = int(value) + value = self._to_decimal(tokens[idx + 2]) # TODO: try/except for this? 
+ (res.minute, res.second) = self._parse_min_sec(value) + + if idx + 4 < len_l and tokens[idx + 3] == ':': + res.second, res.microsecond = self._parsems(tokens[idx + 4]) + + idx += 2 + + idx += 2 + + elif idx + 1 < len_l and tokens[idx + 1] in ('-', '/', '.'): + sep = tokens[idx + 1] + ymd.append(value_repr) + + if idx + 2 < len_l and not info.jump(tokens[idx + 2]): + if tokens[idx + 2].isdigit(): + # 01-01[-01] + ymd.append(tokens[idx + 2]) + else: + # 01-Jan[-01] + value = info.month(tokens[idx + 2]) + + if value is not None: + ymd.append(value, 'M') + else: + raise ValueError() + + if idx + 3 < len_l and tokens[idx + 3] == sep: + # We have three members + value = info.month(tokens[idx + 4]) + + if value is not None: + ymd.append(value, 'M') + else: + ymd.append(tokens[idx + 4]) + idx += 2 + + idx += 1 + idx += 1 + + elif idx + 1 >= len_l or info.jump(tokens[idx + 1]): + if idx + 2 < len_l and info.ampm(tokens[idx + 2]) is not None: + # 12 am + hour = int(value) + res.hour = self._adjust_ampm(hour, info.ampm(tokens[idx + 2])) + idx += 1 + else: + # Year, month or day + ymd.append(value) + idx += 1 + + elif info.ampm(tokens[idx + 1]) is not None and (0 <= value < 24): + # 12am + hour = int(value) + res.hour = self._adjust_ampm(hour, info.ampm(tokens[idx + 1])) + idx += 1 + + elif ymd.could_be_day(value): + ymd.append(value) + + elif not fuzzy: + raise ValueError() + + return idx + + def _find_hms_idx(self, idx, tokens, info, allow_jump): + len_l = len(tokens) + + if idx+1 < len_l and info.hms(tokens[idx+1]) is not None: + # There is an "h", "m", or "s" label following this token. We take + # assign the upcoming label to the current token. + # e.g. the "12" in 12h" + hms_idx = idx + 1 + + elif (allow_jump and idx+2 < len_l and tokens[idx+1] == ' ' and + info.hms(tokens[idx+2]) is not None): + # There is a space and then an "h", "m", or "s" label. + # e.g. the "12" in "12 h" + hms_idx = idx + 2 + + elif idx > 0 and info.hms(tokens[idx-1]) is not None: + # There is a "h", "m", or "s" preceding this token. Since neither + # of the previous cases was hit, there is no label following this + # token, so we use the previous label. + # e.g. the "04" in "12h04" + hms_idx = idx-1 + + elif (1 < idx == len_l-1 and tokens[idx-1] == ' ' and + info.hms(tokens[idx-2]) is not None): + # If we are looking at the final token, we allow for a + # backward-looking check to skip over a space. + # TODO: Are we sure this is the right condition here? + hms_idx = idx - 2 + + else: + hms_idx = None + + return hms_idx + + def _assign_hms(self, res, value_repr, hms): + # See GH issue #427, fixing float rounding + value = self._to_decimal(value_repr) + + if hms == 0: + # Hour + res.hour = int(value) + if value % 1: + res.minute = int(60*(value % 1)) + + elif hms == 1: + (res.minute, res.second) = self._parse_min_sec(value) + + elif hms == 2: + (res.second, res.microsecond) = self._parsems(value_repr) + + def _could_be_tzname(self, hour, tzname, tzoffset, token): + return (hour is not None and + tzname is None and + tzoffset is None and + len(token) <= 5 and + (all(x in string.ascii_uppercase for x in token) + or token in self.info.UTCZONE)) + + def _ampm_valid(self, hour, ampm, fuzzy): + """ + For fuzzy parsing, 'a' or 'am' (both valid English words) + may erroneously trigger the AM/PM flag. Deal with that + here. + """ + val_is_ampm = True + + # If there's already an AM/PM flag, this one isn't one. 
+ if fuzzy and ampm is not None: + val_is_ampm = False + + # If AM/PM is found and hour is not, raise a ValueError + if hour is None: + if fuzzy: + val_is_ampm = False + else: + raise ValueError('No hour specified with AM or PM flag.') + elif not 0 <= hour <= 12: + # If AM/PM is found, it's a 12 hour clock, so raise + # an error for invalid range + if fuzzy: + val_is_ampm = False + else: + raise ValueError('Invalid hour specified for 12-hour clock.') + + return val_is_ampm + + def _adjust_ampm(self, hour, ampm): + if hour < 12 and ampm == 1: + hour += 12 + elif hour == 12 and ampm == 0: + hour = 0 + return hour + + def _parse_min_sec(self, value): + # TODO: Every usage of this function sets res.second to the return + # value. Are there any cases where second will be returned as None and + # we *don't* want to set res.second = None? + minute = int(value) + second = None + + sec_remainder = value % 1 + if sec_remainder: + second = int(60 * sec_remainder) + return (minute, second) + + def _parse_hms(self, idx, tokens, info, hms_idx): + # TODO: Is this going to admit a lot of false-positives for when we + # just happen to have digits and "h", "m" or "s" characters in non-date + # text? I guess hex hashes won't have that problem, but there's plenty + # of random junk out there. + if hms_idx is None: + hms = None + new_idx = idx + elif hms_idx > idx: + hms = info.hms(tokens[hms_idx]) + new_idx = hms_idx + else: + # Looking backwards, increment one. + hms = info.hms(tokens[hms_idx]) + 1 + new_idx = idx + + return (new_idx, hms) + + # ------------------------------------------------------------------ + # Handling for individual tokens. These are kept as methods instead + # of functions for the sake of customizability via subclassing. + + def _parsems(self, value): + """Parse a I[.F] seconds value into (seconds, microseconds).""" + if "." not in value: + return int(value), 0 + else: + i, f = value.split(".") + return int(i), int(f.ljust(6, "0")[:6]) + + def _to_decimal(self, val): + try: + decimal_value = Decimal(val) + # See GH 662, edge case, infinite value should not be converted + # via `_to_decimal` + if not decimal_value.is_finite(): + raise ValueError("Converted decimal value is infinite or NaN") + except Exception as e: + msg = "Could not convert %s to decimal" % val + six.raise_from(ValueError(msg), e) + else: + return decimal_value + + # ------------------------------------------------------------------ + # Post-Parsing construction of datetime output. These are kept as + # methods instead of functions for the sake of customizability via + # subclassing. 
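+    # A subclass can override these hooks, or pair the parser with a
+    # customized parserinfo. A minimal sketch (hypothetical subclass,
+    # not part of this module):
+    #
+    #     class GermanParserInfo(parserinfo):
+    #         WEEKDAYS = [("Mo", "Montag"), ("Di", "Dienstag"),
+    #                     ("Mi", "Mittwoch"), ("Do", "Donnerstag"),
+    #                     ("Fr", "Freitag"), ("Sa", "Samstag"),
+    #                     ("So", "Sonntag")]
+    #
+    #     parse("Samstag, 2003-09-25", parserinfo=GermanParserInfo())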
+ + def _build_tzinfo(self, tzinfos, tzname, tzoffset): + if callable(tzinfos): + tzdata = tzinfos(tzname, tzoffset) + else: + tzdata = tzinfos.get(tzname) + # handle case where tzinfo is paased an options that returns None + # eg tzinfos = {'BRST' : None} + if isinstance(tzdata, datetime.tzinfo) or tzdata is None: + tzinfo = tzdata + elif isinstance(tzdata, text_type): + tzinfo = tz.tzstr(tzdata) + elif isinstance(tzdata, integer_types): + tzinfo = tz.tzoffset(tzname, tzdata) + else: + raise TypeError("Offset must be tzinfo subclass, tz string, " + "or int offset.") + return tzinfo + + def _build_tzaware(self, naive, res, tzinfos): + if (callable(tzinfos) or (tzinfos and res.tzname in tzinfos)): + tzinfo = self._build_tzinfo(tzinfos, res.tzname, res.tzoffset) + aware = naive.replace(tzinfo=tzinfo) + aware = self._assign_tzname(aware, res.tzname) + + elif res.tzname and res.tzname in time.tzname: + aware = naive.replace(tzinfo=tz.tzlocal()) + + # Handle ambiguous local datetime + aware = self._assign_tzname(aware, res.tzname) + + # This is mostly relevant for winter GMT zones parsed in the UK + if (aware.tzname() != res.tzname and + res.tzname in self.info.UTCZONE): + aware = aware.replace(tzinfo=tz.UTC) + + elif res.tzoffset == 0: + aware = naive.replace(tzinfo=tz.UTC) + + elif res.tzoffset: + aware = naive.replace(tzinfo=tz.tzoffset(res.tzname, res.tzoffset)) + + elif not res.tzname and not res.tzoffset: + # i.e. no timezone information was found. + aware = naive + + elif res.tzname: + # tz-like string was parsed but we don't know what to do + # with it + warnings.warn("tzname {tzname} identified but not understood. " + "Pass `tzinfos` argument in order to correctly " + "return a timezone-aware datetime. In a future " + "version, this will raise an " + "exception.".format(tzname=res.tzname), + category=UnknownTimezoneWarning) + aware = naive + + return aware + + def _build_naive(self, res, default): + repl = {} + for attr in ("year", "month", "day", "hour", + "minute", "second", "microsecond"): + value = getattr(res, attr) + if value is not None: + repl[attr] = value + + if 'day' not in repl: + # If the default day exceeds the last day of the month, fall back + # to the end of the month. + cyear = default.year if res.year is None else res.year + cmonth = default.month if res.month is None else res.month + cday = default.day if res.day is None else res.day + + if cday > monthrange(cyear, cmonth)[1]: + repl['day'] = monthrange(cyear, cmonth)[1] + + naive = default.replace(**repl) + + if res.weekday is not None and not res.day: + naive = naive + relativedelta.relativedelta(weekday=res.weekday) + + return naive + + def _assign_tzname(self, dt, tzname): + if dt.tzname() != tzname: + new_dt = tz.enfold(dt, fold=1) + if new_dt.tzname() == tzname: + return new_dt + + return dt + + def _recombine_skipped(self, tokens, skipped_idxs): + """ + >>> tokens = ["foo", " ", "bar", " ", "19June2000", "baz"] + >>> skipped_idxs = [0, 1, 2, 5] + >>> _recombine_skipped(tokens, skipped_idxs) + ["foo bar", "baz"] + """ + skipped_tokens = [] + for i, idx in enumerate(sorted(skipped_idxs)): + if i > 0 and idx - 1 == skipped_idxs[i - 1]: + skipped_tokens[-1] = skipped_tokens[-1] + tokens[idx] + else: + skipped_tokens.append(tokens[idx]) + + return skipped_tokens + + +DEFAULTPARSER = parser() + + +def parse(timestr, parserinfo=None, **kwargs): + """ + + Parse a string in one of the supported formats, using the + ``parserinfo`` parameters. + + :param timestr: + A string containing a date/time stamp. 
+ + :param parserinfo: + A :class:`parserinfo` object containing parameters for the parser. + If ``None``, the default arguments to the :class:`parserinfo` + constructor are used. + + The ``**kwargs`` parameter takes the following keyword arguments: + + :param default: + The default datetime object, if this is a datetime object and not + ``None``, elements specified in ``timestr`` replace elements in the + default object. + + :param ignoretz: + If set ``True``, time zones in parsed strings are ignored and a naive + :class:`datetime` object is returned. + + :param tzinfos: + Additional time zone names / aliases which may be present in the + string. This argument maps time zone names (and optionally offsets + from those time zones) to time zones. This parameter can be a + dictionary with timezone aliases mapping time zone names to time + zones or a function taking two parameters (``tzname`` and + ``tzoffset``) and returning a time zone. + + The timezones to which the names are mapped can be an integer + offset from UTC in seconds or a :class:`tzinfo` object. + + .. doctest:: + :options: +NORMALIZE_WHITESPACE + + >>> from dateutil.parser import parse + >>> from dateutil.tz import gettz + >>> tzinfos = {"BRST": -7200, "CST": gettz("America/Chicago")} + >>> parse("2012-01-19 17:21:00 BRST", tzinfos=tzinfos) + datetime.datetime(2012, 1, 19, 17, 21, tzinfo=tzoffset(u'BRST', -7200)) + >>> parse("2012-01-19 17:21:00 CST", tzinfos=tzinfos) + datetime.datetime(2012, 1, 19, 17, 21, + tzinfo=tzfile('/usr/share/zoneinfo/America/Chicago')) + + This parameter is ignored if ``ignoretz`` is set. + + :param dayfirst: + Whether to interpret the first value in an ambiguous 3-integer date + (e.g. 01/05/09) as the day (``True``) or month (``False``). If + ``yearfirst`` is set to ``True``, this distinguishes between YDM and + YMD. If set to ``None``, this value is retrieved from the current + :class:`parserinfo` object (which itself defaults to ``False``). + + :param yearfirst: + Whether to interpret the first value in an ambiguous 3-integer date + (e.g. 01/05/09) as the year. If ``True``, the first number is taken to + be the year, otherwise the last number is taken to be the year. If + this is set to ``None``, the value is retrieved from the current + :class:`parserinfo` object (which itself defaults to ``False``). + + :param fuzzy: + Whether to allow fuzzy parsing, allowing for string like "Today is + January 1, 2047 at 8:21:00AM". + + :param fuzzy_with_tokens: + If ``True``, ``fuzzy`` is automatically set to True, and the parser + will return a tuple where the first element is the parsed + :class:`datetime.datetime` datetimestamp and the second element is + a tuple containing the portions of the string which were ignored: + + .. doctest:: + + >>> from dateutil.parser import parse + >>> parse("Today is January 1, 2047 at 8:21:00AM", fuzzy_with_tokens=True) + (datetime.datetime(2047, 1, 1, 8, 21), (u'Today is ', u' ', u'at ')) + + :return: + Returns a :class:`datetime.datetime` object or, if the + ``fuzzy_with_tokens`` option is ``True``, returns a tuple, the + first element being a :class:`datetime.datetime` object, the second + a tuple containing the fuzzy tokens. + + :raises ValueError: + Raised for invalid or unknown string format, if the provided + :class:`tzinfo` is not in a valid format, or if an invalid date + would be created. + + :raises OverflowError: + Raised if the parsed date exceeds the largest valid C integer on + your system. 
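+
+    A couple of representative calls (a sketch):
+
+    .. doctest::
+
+        >>> from dateutil.parser import parse
+        >>> parse("Thu Sep 25 10:36:28 2003")
+        datetime.datetime(2003, 9, 25, 10, 36, 28)
+        >>> parse("Today is January 1, 2047 at 8:21:00AM", fuzzy=True)
+        datetime.datetime(2047, 1, 1, 8, 21)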
+ """ + if parserinfo: + return parser(parserinfo).parse(timestr, **kwargs) + else: + return DEFAULTPARSER.parse(timestr, **kwargs) + + +class _tzparser(object): + + class _result(_resultbase): + + __slots__ = ["stdabbr", "stdoffset", "dstabbr", "dstoffset", + "start", "end"] + + class _attr(_resultbase): + __slots__ = ["month", "week", "weekday", + "yday", "jyday", "day", "time"] + + def __repr__(self): + return self._repr("") + + def __init__(self): + _resultbase.__init__(self) + self.start = self._attr() + self.end = self._attr() + + def parse(self, tzstr): + res = self._result() + l = [x for x in re.split(r'([,:.]|[a-zA-Z]+|[0-9]+)',tzstr) if x] + used_idxs = list() + try: + + len_l = len(l) + + i = 0 + while i < len_l: + # BRST+3[BRDT[+2]] + j = i + while j < len_l and not [x for x in l[j] + if x in "0123456789:,-+"]: + j += 1 + if j != i: + if not res.stdabbr: + offattr = "stdoffset" + res.stdabbr = "".join(l[i:j]) + else: + offattr = "dstoffset" + res.dstabbr = "".join(l[i:j]) + + for ii in range(j): + used_idxs.append(ii) + i = j + if (i < len_l and (l[i] in ('+', '-') or l[i][0] in + "0123456789")): + if l[i] in ('+', '-'): + # Yes, that's right. See the TZ variable + # documentation. + signal = (1, -1)[l[i] == '+'] + used_idxs.append(i) + i += 1 + else: + signal = -1 + len_li = len(l[i]) + if len_li == 4: + # -0300 + setattr(res, offattr, (int(l[i][:2]) * 3600 + + int(l[i][2:]) * 60) * signal) + elif i + 1 < len_l and l[i + 1] == ':': + # -03:00 + setattr(res, offattr, + (int(l[i]) * 3600 + + int(l[i + 2]) * 60) * signal) + used_idxs.append(i) + i += 2 + elif len_li <= 2: + # -[0]3 + setattr(res, offattr, + int(l[i][:2]) * 3600 * signal) + else: + return None + used_idxs.append(i) + i += 1 + if res.dstabbr: + break + else: + break + + + if i < len_l: + for j in range(i, len_l): + if l[j] == ';': + l[j] = ',' + + assert l[i] == ',' + + i += 1 + + if i >= len_l: + pass + elif (8 <= l.count(',') <= 9 and + not [y for x in l[i:] if x != ',' + for y in x if y not in "0123456789+-"]): + # GMT0BST,3,0,30,3600,10,0,26,7200[,3600] + for x in (res.start, res.end): + x.month = int(l[i]) + used_idxs.append(i) + i += 2 + if l[i] == '-': + value = int(l[i + 1]) * -1 + used_idxs.append(i) + i += 1 + else: + value = int(l[i]) + used_idxs.append(i) + i += 2 + if value: + x.week = value + x.weekday = (int(l[i]) - 1) % 7 + else: + x.day = int(l[i]) + used_idxs.append(i) + i += 2 + x.time = int(l[i]) + used_idxs.append(i) + i += 2 + if i < len_l: + if l[i] in ('-', '+'): + signal = (-1, 1)[l[i] == "+"] + used_idxs.append(i) + i += 1 + else: + signal = 1 + used_idxs.append(i) + res.dstoffset = (res.stdoffset + int(l[i]) * signal) + + # This was a made-up format that is not in normal use + warn(('Parsed time zone "%s"' % tzstr) + + 'is in a non-standard dateutil-specific format, which ' + + 'is now deprecated; support for parsing this format ' + + 'will be removed in future versions. 
It is recommended ' + + 'that you switch to a standard format like the GNU ' + + 'TZ variable format.', tz.DeprecatedTzFormatWarning) + elif (l.count(',') == 2 and l[i:].count('/') <= 2 and + not [y for x in l[i:] if x not in (',', '/', 'J', 'M', + '.', '-', ':') + for y in x if y not in "0123456789"]): + for x in (res.start, res.end): + if l[i] == 'J': + # non-leap year day (1 based) + used_idxs.append(i) + i += 1 + x.jyday = int(l[i]) + elif l[i] == 'M': + # month[-.]week[-.]weekday + used_idxs.append(i) + i += 1 + x.month = int(l[i]) + used_idxs.append(i) + i += 1 + assert l[i] in ('-', '.') + used_idxs.append(i) + i += 1 + x.week = int(l[i]) + if x.week == 5: + x.week = -1 + used_idxs.append(i) + i += 1 + assert l[i] in ('-', '.') + used_idxs.append(i) + i += 1 + x.weekday = (int(l[i]) - 1) % 7 + else: + # year day (zero based) + x.yday = int(l[i]) + 1 + + used_idxs.append(i) + i += 1 + + if i < len_l and l[i] == '/': + used_idxs.append(i) + i += 1 + # start time + len_li = len(l[i]) + if len_li == 4: + # -0300 + x.time = (int(l[i][:2]) * 3600 + + int(l[i][2:]) * 60) + elif i + 1 < len_l and l[i + 1] == ':': + # -03:00 + x.time = int(l[i]) * 3600 + int(l[i + 2]) * 60 + used_idxs.append(i) + i += 2 + if i + 1 < len_l and l[i + 1] == ':': + used_idxs.append(i) + i += 2 + x.time += int(l[i]) + elif len_li <= 2: + # -[0]3 + x.time = (int(l[i][:2]) * 3600) + else: + return None + used_idxs.append(i) + i += 1 + + assert i == len_l or l[i] == ',' + + i += 1 + + assert i >= len_l + + except (IndexError, ValueError, AssertionError): + return None + + unused_idxs = set(range(len_l)).difference(used_idxs) + res.any_unused_tokens = not {l[n] for n in unused_idxs}.issubset({",",":"}) + return res + + +DEFAULTTZPARSER = _tzparser() + + +def _parsetz(tzstr): + return DEFAULTTZPARSER.parse(tzstr) + + +class ParserError(ValueError): + """Error class for representing failure to parse a datetime string.""" + def __str__(self): + try: + return self.args[0] % self.args[1:] + except (TypeError, IndexError): + return super(ParserError, self).__str__() + + def __repr__(self): + return "%s(%s)" % (self.__class__.__name__, str(self)) + + +class UnknownTimezoneWarning(RuntimeWarning): + """Raised when the parser finds a timezone it cannot parse into a tzinfo""" +# vim:ts=4:sw=4:et diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/dateutil/parser/isoparser.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/dateutil/parser/isoparser.py new file mode 100644 index 0000000000000000000000000000000000000000..48f86a335591ed4cc9a42320c59205f9bbfb5b8e --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/dateutil/parser/isoparser.py @@ -0,0 +1,411 @@ +# -*- coding: utf-8 -*- +""" +This module offers a parser for ISO-8601 strings + +It is intended to support all valid date, time and datetime formats per the +ISO-8601 specification. 
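+
+Typical usage goes through the module-level ``isoparse`` helper (a sketch):
+
+.. doctest::
+
+    >>> from dateutil.parser import isoparse
+    >>> isoparse("2003-09-25T10:49:41+03:00")
+    datetime.datetime(2003, 9, 25, 10, 49, 41, tzinfo=tzoffset(None, 10800))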
+ +..versionadded:: 2.7.0 +""" +from datetime import datetime, timedelta, time, date +import calendar +from dateutil import tz + +from functools import wraps + +import re +import six + +__all__ = ["isoparse", "isoparser"] + + +def _takes_ascii(f): + @wraps(f) + def func(self, str_in, *args, **kwargs): + # If it's a stream, read the whole thing + str_in = getattr(str_in, 'read', lambda: str_in)() + + # If it's unicode, turn it into bytes, since ISO-8601 only covers ASCII + if isinstance(str_in, six.text_type): + # ASCII is the same in UTF-8 + try: + str_in = str_in.encode('ascii') + except UnicodeEncodeError as e: + msg = 'ISO-8601 strings should contain only ASCII characters' + six.raise_from(ValueError(msg), e) + + return f(self, str_in, *args, **kwargs) + + return func + + +class isoparser(object): + def __init__(self, sep=None): + """ + :param sep: + A single character that separates date and time portions. If + ``None``, the parser will accept any single character. + For strict ISO-8601 adherence, pass ``'T'``. + """ + if sep is not None: + if (len(sep) != 1 or ord(sep) >= 128 or sep in '0123456789'): + raise ValueError('Separator must be a single, non-numeric ' + + 'ASCII character') + + sep = sep.encode('ascii') + + self._sep = sep + + @_takes_ascii + def isoparse(self, dt_str): + """ + Parse an ISO-8601 datetime string into a :class:`datetime.datetime`. + + An ISO-8601 datetime string consists of a date portion, followed + optionally by a time portion - the date and time portions are separated + by a single character separator, which is ``T`` in the official + standard. Incomplete date formats (such as ``YYYY-MM``) may *not* be + combined with a time portion. + + Supported date formats are: + + Common: + + - ``YYYY`` + - ``YYYY-MM`` or ``YYYYMM`` + - ``YYYY-MM-DD`` or ``YYYYMMDD`` + + Uncommon: + + - ``YYYY-Www`` or ``YYYYWww`` - ISO week (day defaults to 0) + - ``YYYY-Www-D`` or ``YYYYWwwD`` - ISO week and day + + The ISO week and day numbering follows the same logic as + :func:`datetime.date.isocalendar`. + + Supported time formats are: + + - ``hh`` + - ``hh:mm`` or ``hhmm`` + - ``hh:mm:ss`` or ``hhmmss`` + - ``hh:mm:ss.ssssss`` (Up to 6 sub-second digits) + + Midnight is a special case for `hh`, as the standard supports both + 00:00 and 24:00 as a representation. The decimal separator can be + either a dot or a comma. + + + .. caution:: + + Support for fractional components other than seconds is part of the + ISO-8601 standard, but is not currently implemented in this parser. + + Supported time zone offset formats are: + + - `Z` (UTC) + - `±HH:MM` + - `±HHMM` + - `±HH` + + Offsets will be represented as :class:`dateutil.tz.tzoffset` objects, + with the exception of UTC, which will be represented as + :class:`dateutil.tz.tzutc`. Time zone offsets equivalent to UTC (such + as `+00:00`) will also be represented as :class:`dateutil.tz.tzutc`. + + :param dt_str: + A string or stream containing only an ISO-8601 datetime string + + :return: + Returns a :class:`datetime.datetime` representing the string. + Unspecified components default to their lowest value. + + .. warning:: + + As of version 2.7.0, the strictness of the parser should not be + considered a stable part of the contract. Any valid ISO-8601 string + that parses correctly with the default settings will continue to + parse correctly in future versions, but invalid strings that + currently fail (e.g. ``2017-01-01T00:00+00:00:00``) are not + guaranteed to continue failing in future versions if they encode + a valid date. + + .. 
versionadded:: 2.7.0 + """ + components, pos = self._parse_isodate(dt_str) + + if len(dt_str) > pos: + if self._sep is None or dt_str[pos:pos + 1] == self._sep: + components += self._parse_isotime(dt_str[pos + 1:]) + else: + raise ValueError('String contains unknown ISO components') + + if len(components) > 3 and components[3] == 24: + components[3] = 0 + return datetime(*components) + timedelta(days=1) + + return datetime(*components) + + @_takes_ascii + def parse_isodate(self, datestr): + """ + Parse the date portion of an ISO string. + + :param datestr: + The string portion of an ISO string, without a separator + + :return: + Returns a :class:`datetime.date` object + """ + components, pos = self._parse_isodate(datestr) + if pos < len(datestr): + raise ValueError('String contains unknown ISO ' + + 'components: {}'.format(datestr)) + return date(*components) + + @_takes_ascii + def parse_isotime(self, timestr): + """ + Parse the time portion of an ISO string. + + :param timestr: + The time portion of an ISO string, without a separator + + :return: + Returns a :class:`datetime.time` object + """ + components = self._parse_isotime(timestr) + if components[0] == 24: + components[0] = 0 + return time(*components) + + @_takes_ascii + def parse_tzstr(self, tzstr, zero_as_utc=True): + """ + Parse a valid ISO time zone string. + + See :func:`isoparser.isoparse` for details on supported formats. + + :param tzstr: + A string representing an ISO time zone offset + + :param zero_as_utc: + Whether to return :class:`dateutil.tz.tzutc` for zero-offset zones + + :return: + Returns :class:`dateutil.tz.tzoffset` for offsets and + :class:`dateutil.tz.tzutc` for ``Z`` and (if ``zero_as_utc`` is + specified) offsets equivalent to UTC. + """ + return self._parse_tzstr(tzstr, zero_as_utc=zero_as_utc) + + # Constants + _DATE_SEP = b'-' + _TIME_SEP = b':' + _FRACTION_REGEX = re.compile(b'[\\.,]([0-9]+)') + + def _parse_isodate(self, dt_str): + try: + return self._parse_isodate_common(dt_str) + except ValueError: + return self._parse_isodate_uncommon(dt_str) + + def _parse_isodate_common(self, dt_str): + len_str = len(dt_str) + components = [1, 1, 1] + + if len_str < 4: + raise ValueError('ISO string too short') + + # Year + components[0] = int(dt_str[0:4]) + pos = 4 + if pos >= len_str: + return components, pos + + has_sep = dt_str[pos:pos + 1] == self._DATE_SEP + if has_sep: + pos += 1 + + # Month + if len_str - pos < 2: + raise ValueError('Invalid common month') + + components[1] = int(dt_str[pos:pos + 2]) + pos += 2 + + if pos >= len_str: + if has_sep: + return components, pos + else: + raise ValueError('Invalid ISO format') + + if has_sep: + if dt_str[pos:pos + 1] != self._DATE_SEP: + raise ValueError('Invalid separator in ISO string') + pos += 1 + + # Day + if len_str - pos < 2: + raise ValueError('Invalid common day') + components[2] = int(dt_str[pos:pos + 2]) + return components, pos + 2 + + def _parse_isodate_uncommon(self, dt_str): + if len(dt_str) < 4: + raise ValueError('ISO string too short') + + # All ISO formats start with the year + year = int(dt_str[0:4]) + + has_sep = dt_str[4:5] == self._DATE_SEP + + pos = 4 + has_sep # Skip '-' if it's there + if dt_str[pos:pos + 1] == b'W': + # YYYY-?Www-?D? 
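+ # e.g. b"2020-W01-1" resolves to 2019-12-30: ISO week 1 is the week
+ # containing January 4th, so it can begin in the previous year.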
+ pos += 1 + weekno = int(dt_str[pos:pos + 2]) + pos += 2 + + dayno = 1 + if len(dt_str) > pos: + if (dt_str[pos:pos + 1] == self._DATE_SEP) != has_sep: + raise ValueError('Inconsistent use of dash separator') + + pos += has_sep + + dayno = int(dt_str[pos:pos + 1]) + pos += 1 + + base_date = self._calculate_weekdate(year, weekno, dayno) + else: + # YYYYDDD or YYYY-DDD + if len(dt_str) - pos < 3: + raise ValueError('Invalid ordinal day') + + ordinal_day = int(dt_str[pos:pos + 3]) + pos += 3 + + if ordinal_day < 1 or ordinal_day > (365 + calendar.isleap(year)): + raise ValueError('Invalid ordinal day' + + ' {} for year {}'.format(ordinal_day, year)) + + base_date = date(year, 1, 1) + timedelta(days=ordinal_day - 1) + + components = [base_date.year, base_date.month, base_date.day] + return components, pos + + def _calculate_weekdate(self, year, week, day): + """ + Calculate the day of corresponding to the ISO year-week-day calendar. + + This function is effectively the inverse of + :func:`datetime.date.isocalendar`. + + :param year: + The year in the ISO calendar + + :param week: + The week in the ISO calendar - range is [1, 53] + + :param day: + The day in the ISO calendar - range is [1 (MON), 7 (SUN)] + + :return: + Returns a :class:`datetime.date` + """ + if not 0 < week < 54: + raise ValueError('Invalid week: {}'.format(week)) + + if not 0 < day < 8: # Range is 1-7 + raise ValueError('Invalid weekday: {}'.format(day)) + + # Get week 1 for the specific year: + jan_4 = date(year, 1, 4) # Week 1 always has January 4th in it + week_1 = jan_4 - timedelta(days=jan_4.isocalendar()[2] - 1) + + # Now add the specific number of weeks and days to get what we want + week_offset = (week - 1) * 7 + (day - 1) + return week_1 + timedelta(days=week_offset) + + def _parse_isotime(self, timestr): + len_str = len(timestr) + components = [0, 0, 0, 0, None] + pos = 0 + comp = -1 + + if len(timestr) < 2: + raise ValueError('ISO time too short') + + has_sep = len_str >= 3 and timestr[2:3] == self._TIME_SEP + + while pos < len_str and comp < 5: + comp += 1 + + if timestr[pos:pos + 1] in b'-+Zz': + # Detect time zone boundary + components[-1] = self._parse_tzstr(timestr[pos:]) + pos = len_str + break + + if comp < 3: + # Hour, minute, second + components[comp] = int(timestr[pos:pos + 2]) + pos += 2 + if (has_sep and pos < len_str and + timestr[pos:pos + 1] == self._TIME_SEP): + pos += 1 + + if comp == 3: + # Fraction of a second + frac = self._FRACTION_REGEX.match(timestr[pos:]) + if not frac: + continue + + us_str = frac.group(1)[:6] # Truncate to microseconds + components[comp] = int(us_str) * 10**(6 - len(us_str)) + pos += len(frac.group()) + + if pos < len_str: + raise ValueError('Unused components in ISO string') + + if components[0] == 24: + # Standard supports 00:00 and 24:00 as representations of midnight + if any(component != 0 for component in components[1:4]): + raise ValueError('Hour may only be 24 at 24:00:00.000') + + return components + + def _parse_tzstr(self, tzstr, zero_as_utc=True): + if tzstr == b'Z' or tzstr == b'z': + return tz.UTC + + if len(tzstr) not in {3, 5, 6}: + raise ValueError('Time zone offset must be 1, 3, 5 or 6 characters') + + if tzstr[0:1] == b'-': + mult = -1 + elif tzstr[0:1] == b'+': + mult = 1 + else: + raise ValueError('Time zone offset requires sign') + + hours = int(tzstr[1:3]) + if len(tzstr) == 3: + minutes = 0 + else: + minutes = int(tzstr[(4 if tzstr[3:4] == self._TIME_SEP else 3):]) + + if zero_as_utc and hours == 0 and minutes == 0: + return tz.UTC + else: + if 
minutes > 59: + raise ValueError('Invalid minutes in time zone offset') + + if hours > 23: + raise ValueError('Invalid hours in time zone offset') + + return tz.tzoffset(None, mult * (hours * 60 + minutes) * 60) + + +DEFAULT_ISOPARSER = isoparser() +isoparse = DEFAULT_ISOPARSER.isoparse diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/dateutil/relativedelta.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/dateutil/relativedelta.py new file mode 100644 index 0000000000000000000000000000000000000000..a9e85f7e6cd7488e6b2f4b249d5cf6af314c3859 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/dateutil/relativedelta.py @@ -0,0 +1,599 @@ +# -*- coding: utf-8 -*- +import datetime +import calendar + +import operator +from math import copysign + +from six import integer_types +from warnings import warn + +from ._common import weekday + +MO, TU, WE, TH, FR, SA, SU = weekdays = tuple(weekday(x) for x in range(7)) + +__all__ = ["relativedelta", "MO", "TU", "WE", "TH", "FR", "SA", "SU"] + + +class relativedelta(object): + """ + The relativedelta type is designed to be applied to an existing datetime and + can replace specific components of that datetime, or represents an interval + of time. + + It is based on the specification of the excellent work done by M.-A. Lemburg + in his + `mx.DateTime `_ extension. + However, notice that this type does *NOT* implement the same algorithm as + his work. Do *NOT* expect it to behave like mx.DateTime's counterpart. + + There are two different ways to build a relativedelta instance. The + first one is passing it two date/datetime classes:: + + relativedelta(datetime1, datetime2) + + The second one is passing it any number of the following keyword arguments:: + + relativedelta(arg1=x,arg2=y,arg3=z...) + + year, month, day, hour, minute, second, microsecond: + Absolute information (argument is singular); adding or subtracting a + relativedelta with absolute information does not perform an arithmetic + operation, but rather REPLACES the corresponding value in the + original datetime with the value(s) in relativedelta. + + years, months, weeks, days, hours, minutes, seconds, microseconds: + Relative information, may be negative (argument is plural); adding + or subtracting a relativedelta with relative information performs + the corresponding arithmetic operation on the original datetime value + with the information in the relativedelta. + + weekday: + One of the weekday instances (MO, TU, etc) available in the + relativedelta module. These instances may receive a parameter N, + specifying the Nth weekday, which could be positive or negative + (like MO(+1) or MO(-2)). Not specifying it is the same as specifying + +1. You can also use an integer, where 0=MO. This argument is always + relative e.g. if the calculated date is already Monday, using MO(1) + or MO(-1) won't change the day. To effectively make it absolute, use + it in combination with the day argument (e.g. day=1, MO(1) for first + Monday of the month). + + leapdays: + Will add given days to the date found, if year is a leap + year, and the date found is post 28 of february. + + yearday, nlyearday: + Set the yearday or the non-leap year day (jump leap days). + These are converted to day/month/leapdays information. + + There are relative and absolute forms of the keyword + arguments. The plural is relative, and the singular is + absolute. 
For each argument in the order below, the absolute form + is applied first (by setting each attribute to that value) and + then the relative form (by adding the value to the attribute). + + The order of attributes considered when this relativedelta is + added to a datetime is: + + 1. Year + 2. Month + 3. Day + 4. Hours + 5. Minutes + 6. Seconds + 7. Microseconds + + Finally, weekday is applied, using the rule described above. + + For example + + >>> from datetime import datetime + >>> from dateutil.relativedelta import relativedelta, MO + >>> dt = datetime(2018, 4, 9, 13, 37, 0) + >>> delta = relativedelta(hours=25, day=1, weekday=MO(1)) + >>> dt + delta + datetime.datetime(2018, 4, 2, 14, 37) + + First, the day is set to 1 (the first of the month), then 25 hours + are added, to get to the 2nd day and 14th hour, finally the + weekday is applied, but since the 2nd is already a Monday there is + no effect. + + """ + + def __init__(self, dt1=None, dt2=None, + years=0, months=0, days=0, leapdays=0, weeks=0, + hours=0, minutes=0, seconds=0, microseconds=0, + year=None, month=None, day=None, weekday=None, + yearday=None, nlyearday=None, + hour=None, minute=None, second=None, microsecond=None): + + if dt1 and dt2: + # datetime is a subclass of date. So both must be date + if not (isinstance(dt1, datetime.date) and + isinstance(dt2, datetime.date)): + raise TypeError("relativedelta only diffs datetime/date") + + # We allow two dates, or two datetimes, so we coerce them to be + # of the same type + if (isinstance(dt1, datetime.datetime) != + isinstance(dt2, datetime.datetime)): + if not isinstance(dt1, datetime.datetime): + dt1 = datetime.datetime.fromordinal(dt1.toordinal()) + elif not isinstance(dt2, datetime.datetime): + dt2 = datetime.datetime.fromordinal(dt2.toordinal()) + + self.years = 0 + self.months = 0 + self.days = 0 + self.leapdays = 0 + self.hours = 0 + self.minutes = 0 + self.seconds = 0 + self.microseconds = 0 + self.year = None + self.month = None + self.day = None + self.weekday = None + self.hour = None + self.minute = None + self.second = None + self.microsecond = None + self._has_time = 0 + + # Get year / month delta between the two + months = (dt1.year - dt2.year) * 12 + (dt1.month - dt2.month) + self._set_months(months) + + # Remove the year/month delta so the timedelta is just well-defined + # time units (seconds, days and microseconds) + dtm = self.__radd__(dt2) + + # If we've overshot our target, make an adjustment + if dt1 < dt2: + compare = operator.gt + increment = 1 + else: + compare = operator.lt + increment = -1 + + while compare(dt1, dtm): + months += increment + self._set_months(months) + dtm = self.__radd__(dt2) + + # Get the timedelta between the "months-adjusted" date and dt1 + delta = dt1 - dtm + self.seconds = delta.seconds + delta.days * 86400 + self.microseconds = delta.microseconds + else: + # Check for non-integer values in integer-only quantities + if any(x is not None and x != int(x) for x in (years, months)): + raise ValueError("Non-integer years and months are " + "ambiguous and not currently supported.") + + # Relative information + self.years = int(years) + self.months = int(months) + self.days = days + weeks * 7 + self.leapdays = leapdays + self.hours = hours + self.minutes = minutes + self.seconds = seconds + self.microseconds = microseconds + + # Absolute information + self.year = year + self.month = month + self.day = day + self.hour = hour + self.minute = minute + self.second = second + self.microsecond = microsecond + + if any(x is not None and 
int(x) != x + for x in (year, month, day, hour, + minute, second, microsecond)): + # For now we'll deprecate floats - later it'll be an error. + warn("Non-integer value passed as absolute information. " + + "This is not a well-defined condition and will raise " + + "errors in future versions.", DeprecationWarning) + + if isinstance(weekday, integer_types): + self.weekday = weekdays[weekday] + else: + self.weekday = weekday + + yday = 0 + if nlyearday: + yday = nlyearday + elif yearday: + yday = yearday + if yearday > 59: + self.leapdays = -1 + if yday: + ydayidx = [31, 59, 90, 120, 151, 181, 212, + 243, 273, 304, 334, 366] + for idx, ydays in enumerate(ydayidx): + if yday <= ydays: + self.month = idx+1 + if idx == 0: + self.day = yday + else: + self.day = yday-ydayidx[idx-1] + break + else: + raise ValueError("invalid year day (%d)" % yday) + + self._fix() + + def _fix(self): + if abs(self.microseconds) > 999999: + s = _sign(self.microseconds) + div, mod = divmod(self.microseconds * s, 1000000) + self.microseconds = mod * s + self.seconds += div * s + if abs(self.seconds) > 59: + s = _sign(self.seconds) + div, mod = divmod(self.seconds * s, 60) + self.seconds = mod * s + self.minutes += div * s + if abs(self.minutes) > 59: + s = _sign(self.minutes) + div, mod = divmod(self.minutes * s, 60) + self.minutes = mod * s + self.hours += div * s + if abs(self.hours) > 23: + s = _sign(self.hours) + div, mod = divmod(self.hours * s, 24) + self.hours = mod * s + self.days += div * s + if abs(self.months) > 11: + s = _sign(self.months) + div, mod = divmod(self.months * s, 12) + self.months = mod * s + self.years += div * s + if (self.hours or self.minutes or self.seconds or self.microseconds + or self.hour is not None or self.minute is not None or + self.second is not None or self.microsecond is not None): + self._has_time = 1 + else: + self._has_time = 0 + + @property + def weeks(self): + return int(self.days / 7.0) + + @weeks.setter + def weeks(self, value): + self.days = self.days - (self.weeks * 7) + value * 7 + + def _set_months(self, months): + self.months = months + if abs(self.months) > 11: + s = _sign(self.months) + div, mod = divmod(self.months * s, 12) + self.months = mod * s + self.years = div * s + else: + self.years = 0 + + def normalized(self): + """ + Return a version of this object represented entirely using integer + values for the relative attributes. + + >>> relativedelta(days=1.5, hours=2).normalized() + relativedelta(days=+1, hours=+14) + + :return: + Returns a :class:`dateutil.relativedelta.relativedelta` object. 
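+
+        For instance, in the example above the fractional half day of
+        ``days=1.5`` cascades down as 12 hours, which combine with the
+        explicit ``hours=2`` to give ``hours=+14``.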
+ """ + # Cascade remainders down (rounding each to roughly nearest microsecond) + days = int(self.days) + + hours_f = round(self.hours + 24 * (self.days - days), 11) + hours = int(hours_f) + + minutes_f = round(self.minutes + 60 * (hours_f - hours), 10) + minutes = int(minutes_f) + + seconds_f = round(self.seconds + 60 * (minutes_f - minutes), 8) + seconds = int(seconds_f) + + microseconds = round(self.microseconds + 1e6 * (seconds_f - seconds)) + + # Constructor carries overflow back up with call to _fix() + return self.__class__(years=self.years, months=self.months, + days=days, hours=hours, minutes=minutes, + seconds=seconds, microseconds=microseconds, + leapdays=self.leapdays, year=self.year, + month=self.month, day=self.day, + weekday=self.weekday, hour=self.hour, + minute=self.minute, second=self.second, + microsecond=self.microsecond) + + def __add__(self, other): + if isinstance(other, relativedelta): + return self.__class__(years=other.years + self.years, + months=other.months + self.months, + days=other.days + self.days, + hours=other.hours + self.hours, + minutes=other.minutes + self.minutes, + seconds=other.seconds + self.seconds, + microseconds=(other.microseconds + + self.microseconds), + leapdays=other.leapdays or self.leapdays, + year=(other.year if other.year is not None + else self.year), + month=(other.month if other.month is not None + else self.month), + day=(other.day if other.day is not None + else self.day), + weekday=(other.weekday if other.weekday is not None + else self.weekday), + hour=(other.hour if other.hour is not None + else self.hour), + minute=(other.minute if other.minute is not None + else self.minute), + second=(other.second if other.second is not None + else self.second), + microsecond=(other.microsecond if other.microsecond + is not None else + self.microsecond)) + if isinstance(other, datetime.timedelta): + return self.__class__(years=self.years, + months=self.months, + days=self.days + other.days, + hours=self.hours, + minutes=self.minutes, + seconds=self.seconds + other.seconds, + microseconds=self.microseconds + other.microseconds, + leapdays=self.leapdays, + year=self.year, + month=self.month, + day=self.day, + weekday=self.weekday, + hour=self.hour, + minute=self.minute, + second=self.second, + microsecond=self.microsecond) + if not isinstance(other, datetime.date): + return NotImplemented + elif self._has_time and not isinstance(other, datetime.datetime): + other = datetime.datetime.fromordinal(other.toordinal()) + year = (self.year or other.year)+self.years + month = self.month or other.month + if self.months: + assert 1 <= abs(self.months) <= 12 + month += self.months + if month > 12: + year += 1 + month -= 12 + elif month < 1: + year -= 1 + month += 12 + day = min(calendar.monthrange(year, month)[1], + self.day or other.day) + repl = {"year": year, "month": month, "day": day} + for attr in ["hour", "minute", "second", "microsecond"]: + value = getattr(self, attr) + if value is not None: + repl[attr] = value + days = self.days + if self.leapdays and month > 2 and calendar.isleap(year): + days += self.leapdays + ret = (other.replace(**repl) + + datetime.timedelta(days=days, + hours=self.hours, + minutes=self.minutes, + seconds=self.seconds, + microseconds=self.microseconds)) + if self.weekday: + weekday, nth = self.weekday.weekday, self.weekday.n or 1 + jumpdays = (abs(nth) - 1) * 7 + if nth > 0: + jumpdays += (7 - ret.weekday() + weekday) % 7 + else: + jumpdays += (ret.weekday() - weekday) % 7 + jumpdays *= -1 + ret += 
datetime.timedelta(days=jumpdays) + return ret + + def __radd__(self, other): + return self.__add__(other) + + def __rsub__(self, other): + return self.__neg__().__radd__(other) + + def __sub__(self, other): + if not isinstance(other, relativedelta): + return NotImplemented # In case the other object defines __rsub__ + return self.__class__(years=self.years - other.years, + months=self.months - other.months, + days=self.days - other.days, + hours=self.hours - other.hours, + minutes=self.minutes - other.minutes, + seconds=self.seconds - other.seconds, + microseconds=self.microseconds - other.microseconds, + leapdays=self.leapdays or other.leapdays, + year=(self.year if self.year is not None + else other.year), + month=(self.month if self.month is not None else + other.month), + day=(self.day if self.day is not None else + other.day), + weekday=(self.weekday if self.weekday is not None else + other.weekday), + hour=(self.hour if self.hour is not None else + other.hour), + minute=(self.minute if self.minute is not None else + other.minute), + second=(self.second if self.second is not None else + other.second), + microsecond=(self.microsecond if self.microsecond + is not None else + other.microsecond)) + + def __abs__(self): + return self.__class__(years=abs(self.years), + months=abs(self.months), + days=abs(self.days), + hours=abs(self.hours), + minutes=abs(self.minutes), + seconds=abs(self.seconds), + microseconds=abs(self.microseconds), + leapdays=self.leapdays, + year=self.year, + month=self.month, + day=self.day, + weekday=self.weekday, + hour=self.hour, + minute=self.minute, + second=self.second, + microsecond=self.microsecond) + + def __neg__(self): + return self.__class__(years=-self.years, + months=-self.months, + days=-self.days, + hours=-self.hours, + minutes=-self.minutes, + seconds=-self.seconds, + microseconds=-self.microseconds, + leapdays=self.leapdays, + year=self.year, + month=self.month, + day=self.day, + weekday=self.weekday, + hour=self.hour, + minute=self.minute, + second=self.second, + microsecond=self.microsecond) + + def __bool__(self): + return not (not self.years and + not self.months and + not self.days and + not self.hours and + not self.minutes and + not self.seconds and + not self.microseconds and + not self.leapdays and + self.year is None and + self.month is None and + self.day is None and + self.weekday is None and + self.hour is None and + self.minute is None and + self.second is None and + self.microsecond is None) + # Compatibility with Python 2.x + __nonzero__ = __bool__ + + def __mul__(self, other): + try: + f = float(other) + except TypeError: + return NotImplemented + + return self.__class__(years=int(self.years * f), + months=int(self.months * f), + days=int(self.days * f), + hours=int(self.hours * f), + minutes=int(self.minutes * f), + seconds=int(self.seconds * f), + microseconds=int(self.microseconds * f), + leapdays=self.leapdays, + year=self.year, + month=self.month, + day=self.day, + weekday=self.weekday, + hour=self.hour, + minute=self.minute, + second=self.second, + microsecond=self.microsecond) + + __rmul__ = __mul__ + + def __eq__(self, other): + if not isinstance(other, relativedelta): + return NotImplemented + if self.weekday or other.weekday: + if not self.weekday or not other.weekday: + return False + if self.weekday.weekday != other.weekday.weekday: + return False + n1, n2 = self.weekday.n, other.weekday.n + if n1 != n2 and not ((not n1 or n1 == 1) and (not n2 or n2 == 1)): + return False + return (self.years == other.years and + 
self.months == other.months and + self.days == other.days and + self.hours == other.hours and + self.minutes == other.minutes and + self.seconds == other.seconds and + self.microseconds == other.microseconds and + self.leapdays == other.leapdays and + self.year == other.year and + self.month == other.month and + self.day == other.day and + self.hour == other.hour and + self.minute == other.minute and + self.second == other.second and + self.microsecond == other.microsecond) + + def __hash__(self): + return hash(( + self.weekday, + self.years, + self.months, + self.days, + self.hours, + self.minutes, + self.seconds, + self.microseconds, + self.leapdays, + self.year, + self.month, + self.day, + self.hour, + self.minute, + self.second, + self.microsecond, + )) + + def __ne__(self, other): + return not self.__eq__(other) + + def __div__(self, other): + try: + reciprocal = 1 / float(other) + except TypeError: + return NotImplemented + + return self.__mul__(reciprocal) + + __truediv__ = __div__ + + def __repr__(self): + l = [] + for attr in ["years", "months", "days", "leapdays", + "hours", "minutes", "seconds", "microseconds"]: + value = getattr(self, attr) + if value: + l.append("{attr}={value:+g}".format(attr=attr, value=value)) + for attr in ["year", "month", "day", "weekday", + "hour", "minute", "second", "microsecond"]: + value = getattr(self, attr) + if value is not None: + l.append("{attr}={value}".format(attr=attr, value=repr(value))) + return "{classname}({attrs})".format(classname=self.__class__.__name__, + attrs=", ".join(l)) + + +def _sign(x): + return int(copysign(1, x)) + +# vim:ts=4:sw=4:et diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/dateutil/rrule.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/dateutil/rrule.py new file mode 100644 index 0000000000000000000000000000000000000000..6bf0ea9c648ba8c7fcca72badadf02b652436707 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/dateutil/rrule.py @@ -0,0 +1,1735 @@ +# -*- coding: utf-8 -*- +""" +The rrule module offers a small, complete, and very fast, implementation of +the recurrence rules documented in the +`iCalendar RFC `_, +including support for caching of results. +""" +import itertools +import datetime +import calendar +import re +import sys + +try: + from math import gcd +except ImportError: + from fractions import gcd + +from six import advance_iterator, integer_types +from six.moves import _thread, range +import heapq + +from ._common import weekday as weekdaybase + +# For warning about deprecation of until and count +from warnings import warn + +__all__ = ["rrule", "rruleset", "rrulestr", + "YEARLY", "MONTHLY", "WEEKLY", "DAILY", + "HOURLY", "MINUTELY", "SECONDLY", + "MO", "TU", "WE", "TH", "FR", "SA", "SU"] + +# Every mask is 7 days longer to handle cross-year weekly periods. 
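+# Illustrative reading of the masks below: M366MASK maps a 0-based
+# day-of-year index in a leap year to its month number, so M366MASK[0] == 1
+# (January 1st) and M366MASK[59] == 2 (February 29th); the trailing [1]*7
+# spills into January of the following year for cross-year weeks.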
+M366MASK = tuple([1]*31+[2]*29+[3]*31+[4]*30+[5]*31+[6]*30 + + [7]*31+[8]*31+[9]*30+[10]*31+[11]*30+[12]*31+[1]*7) +M365MASK = list(M366MASK) +M29, M30, M31 = list(range(1, 30)), list(range(1, 31)), list(range(1, 32)) +MDAY366MASK = tuple(M31+M29+M31+M30+M31+M30+M31+M31+M30+M31+M30+M31+M31[:7]) +MDAY365MASK = list(MDAY366MASK) +M29, M30, M31 = list(range(-29, 0)), list(range(-30, 0)), list(range(-31, 0)) +NMDAY366MASK = tuple(M31+M29+M31+M30+M31+M30+M31+M31+M30+M31+M30+M31+M31[:7]) +NMDAY365MASK = list(NMDAY366MASK) +M366RANGE = (0, 31, 60, 91, 121, 152, 182, 213, 244, 274, 305, 335, 366) +M365RANGE = (0, 31, 59, 90, 120, 151, 181, 212, 243, 273, 304, 334, 365) +WDAYMASK = [0, 1, 2, 3, 4, 5, 6]*55 +del M29, M30, M31, M365MASK[59], MDAY365MASK[59], NMDAY365MASK[31] +MDAY365MASK = tuple(MDAY365MASK) +M365MASK = tuple(M365MASK) + +FREQNAMES = ['YEARLY', 'MONTHLY', 'WEEKLY', 'DAILY', 'HOURLY', 'MINUTELY', 'SECONDLY'] + +(YEARLY, + MONTHLY, + WEEKLY, + DAILY, + HOURLY, + MINUTELY, + SECONDLY) = list(range(7)) + +# Imported on demand. +easter = None +parser = None + + +class weekday(weekdaybase): + """ + This version of weekday does not allow n = 0. + """ + def __init__(self, wkday, n=None): + if n == 0: + raise ValueError("Can't create weekday with n==0") + + super(weekday, self).__init__(wkday, n) + + +MO, TU, WE, TH, FR, SA, SU = weekdays = tuple(weekday(x) for x in range(7)) + + +def _invalidates_cache(f): + """ + Decorator for rruleset methods which may invalidate the + cached length. + """ + def inner_func(self, *args, **kwargs): + rv = f(self, *args, **kwargs) + self._invalidate_cache() + return rv + + return inner_func + + +class rrulebase(object): + def __init__(self, cache=False): + if cache: + self._cache = [] + self._cache_lock = _thread.allocate_lock() + self._invalidate_cache() + else: + self._cache = None + self._cache_complete = False + self._len = None + + def __iter__(self): + if self._cache_complete: + return iter(self._cache) + elif self._cache is None: + return self._iter() + else: + return self._iter_cached() + + def _invalidate_cache(self): + if self._cache is not None: + self._cache = [] + self._cache_complete = False + self._cache_gen = self._iter() + + if self._cache_lock.locked(): + self._cache_lock.release() + + self._len = None + + def _iter_cached(self): + i = 0 + gen = self._cache_gen + cache = self._cache + acquire = self._cache_lock.acquire + release = self._cache_lock.release + while gen: + if i == len(cache): + acquire() + if self._cache_complete: + break + try: + for j in range(10): + cache.append(advance_iterator(gen)) + except StopIteration: + self._cache_gen = gen = None + self._cache_complete = True + break + release() + yield cache[i] + i += 1 + while i < self._len: + yield cache[i] + i += 1 + + def __getitem__(self, item): + if self._cache_complete: + return self._cache[item] + elif isinstance(item, slice): + if item.step and item.step < 0: + return list(iter(self))[item] + else: + return list(itertools.islice(self, + item.start or 0, + item.stop or sys.maxsize, + item.step or 1)) + elif item >= 0: + gen = iter(self) + try: + for i in range(item+1): + res = advance_iterator(gen) + except StopIteration: + raise IndexError + return res + else: + return list(iter(self))[item] + + def __contains__(self, item): + if self._cache_complete: + return item in self._cache + else: + for i in self: + if i == item: + return True + elif i > item: + return False + return False + + # __len__() introduces a large performance penalty. 
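+    # (Assumed rationale: with __len__ defined, implicit calls such as
+    # bool(rule) would silently iterate the entire recurrence, so the
+    # explicit count() method below is provided instead.)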
+    def count(self):
+        """ Returns the number of recurrences in this set. It will have to go
+        through the whole recurrence, if this hasn't been done before. """
+        if self._len is None:
+            for x in self:
+                pass
+        return self._len
+
+    def before(self, dt, inc=False):
+        """ Returns the last recurrence before the given datetime instance. The
+        inc keyword defines what happens if dt is an occurrence. With
+        inc=True, if dt itself is an occurrence, it will be returned. """
+        if self._cache_complete:
+            gen = self._cache
+        else:
+            gen = self
+        last = None
+        if inc:
+            for i in gen:
+                if i > dt:
+                    break
+                last = i
+        else:
+            for i in gen:
+                if i >= dt:
+                    break
+                last = i
+        return last
+
+    def after(self, dt, inc=False):
+        """ Returns the first recurrence after the given datetime instance. The
+        inc keyword defines what happens if dt is an occurrence. With
+        inc=True, if dt itself is an occurrence, it will be returned. """
+        if self._cache_complete:
+            gen = self._cache
+        else:
+            gen = self
+        if inc:
+            for i in gen:
+                if i >= dt:
+                    return i
+        else:
+            for i in gen:
+                if i > dt:
+                    return i
+        return None
+
+    def xafter(self, dt, count=None, inc=False):
+        """
+        Generator which yields up to `count` recurrences after the given
+        datetime instance, equivalent to `after`.
+
+        :param dt:
+            The datetime at which to start generating recurrences.
+
+        :param count:
+            The maximum number of recurrences to generate. If `None` (default),
+            dates are generated until the recurrence rule is exhausted.
+
+        :param inc:
+            If `dt` is an instance of the rule and `inc` is `True`, it is
+            included in the output.
+
+        :yields: Yields a sequence of `datetime` objects.
+        """
+
+        if self._cache_complete:
+            gen = self._cache
+        else:
+            gen = self
+
+        # Select the comparison function
+        if inc:
+            comp = lambda dc, dtc: dc >= dtc
+        else:
+            comp = lambda dc, dtc: dc > dtc
+
+        # Generate dates
+        n = 0
+        for d in gen:
+            if comp(d, dt):
+                if count is not None:
+                    n += 1
+                    if n > count:
+                        break
+
+                yield d
+
+    def between(self, after, before, inc=False, count=1):
+        """ Returns all the occurrences of the rrule between after and before.
+        The inc keyword defines what happens if after and/or before are
+        themselves occurrences. With inc=True, they will be included in the
+        list, if they are found in the recurrence set. """
+        if self._cache_complete:
+            gen = self._cache
+        else:
+            gen = self
+        started = False
+        l = []
+        if inc:
+            for i in gen:
+                if i > before:
+                    break
+                elif not started:
+                    if i >= after:
+                        started = True
+                        l.append(i)
+                else:
+                    l.append(i)
+        else:
+            for i in gen:
+                if i >= before:
+                    break
+                elif not started:
+                    if i > after:
+                        started = True
+                        l.append(i)
+                else:
+                    l.append(i)
+        return l
+
+
+class rrule(rrulebase):
+    """
+    That's the base of the rrule operation. It accepts all the keywords
+    defined in the RFC as its constructor parameters (except byday,
+    which was renamed to byweekday) and more. The constructor prototype is::
+
+        rrule(freq)
+
+    Where freq must be one of YEARLY, MONTHLY, WEEKLY, DAILY, HOURLY, MINUTELY,
+    or SECONDLY.
+
+    .. note::
+        Per RFC section 3.3.10, recurrence instances falling on invalid dates
+        and times are ignored rather than coerced:
+
+            Recurrence rules may generate recurrence instances with an invalid
+            date (e.g., February 30) or nonexistent local time (e.g., 1:30 AM
+            on a day where the local time is moved forward by an hour at 1:00
+            AM). Such recurrence instances MUST be ignored and MUST NOT be
+            counted as part of the recurrence set.
+ + This can lead to possibly surprising behavior when, for example, the + start date occurs at the end of the month: + + >>> from dateutil.rrule import rrule, MONTHLY + >>> from datetime import datetime + >>> start_date = datetime(2014, 12, 31) + >>> list(rrule(freq=MONTHLY, count=4, dtstart=start_date)) + ... # doctest: +NORMALIZE_WHITESPACE + [datetime.datetime(2014, 12, 31, 0, 0), + datetime.datetime(2015, 1, 31, 0, 0), + datetime.datetime(2015, 3, 31, 0, 0), + datetime.datetime(2015, 5, 31, 0, 0)] + + Additionally, it supports the following keyword arguments: + + :param dtstart: + The recurrence start. Besides being the base for the recurrence, + missing parameters in the final recurrence instances will also be + extracted from this date. If not given, datetime.now() will be used + instead. + :param interval: + The interval between each freq iteration. For example, when using + YEARLY, an interval of 2 means once every two years, but with HOURLY, + it means once every two hours. The default interval is 1. + :param wkst: + The week start day. Must be one of the MO, TU, WE constants, or an + integer, specifying the first day of the week. This will affect + recurrences based on weekly periods. The default week start is got + from calendar.firstweekday(), and may be modified by + calendar.setfirstweekday(). + :param count: + If given, this determines how many occurrences will be generated. + + .. note:: + As of version 2.5.0, the use of the keyword ``until`` in conjunction + with ``count`` is deprecated, to make sure ``dateutil`` is fully + compliant with `RFC-5545 Sec. 3.3.10 `_. Therefore, ``until`` and ``count`` + **must not** occur in the same call to ``rrule``. + :param until: + If given, this must be a datetime instance specifying the upper-bound + limit of the recurrence. The last recurrence in the rule is the greatest + datetime that is less than or equal to the value specified in the + ``until`` parameter. + + .. note:: + As of version 2.5.0, the use of the keyword ``until`` in conjunction + with ``count`` is deprecated, to make sure ``dateutil`` is fully + compliant with `RFC-5545 Sec. 3.3.10 `_. Therefore, ``until`` and ``count`` + **must not** occur in the same call to ``rrule``. + :param bysetpos: + If given, it must be either an integer, or a sequence of integers, + positive or negative. Each given integer will specify an occurrence + number, corresponding to the nth occurrence of the rule inside the + frequency period. For example, a bysetpos of -1 if combined with a + MONTHLY frequency, and a byweekday of (MO, TU, WE, TH, FR), will + result in the last work day of every month. + :param bymonth: + If given, it must be either an integer, or a sequence of integers, + meaning the months to apply the recurrence to. + :param bymonthday: + If given, it must be either an integer, or a sequence of integers, + meaning the month days to apply the recurrence to. + :param byyearday: + If given, it must be either an integer, or a sequence of integers, + meaning the year days to apply the recurrence to. + :param byeaster: + If given, it must be either an integer, or a sequence of integers, + positive or negative. Each integer will define an offset from the + Easter Sunday. Passing the offset 0 to byeaster will yield the Easter + Sunday itself. This is an extension to the RFC specification. + :param byweekno: + If given, it must be either an integer, or a sequence of integers, + meaning the week numbers to apply the recurrence to. 
Week numbers + have the meaning described in ISO8601, that is, the first week of + the year is that containing at least four days of the new year. + :param byweekday: + If given, it must be either an integer (0 == MO), a sequence of + integers, one of the weekday constants (MO, TU, etc), or a sequence + of these constants. When given, these variables will define the + weekdays where the recurrence will be applied. It's also possible to + use an argument n for the weekday instances, which will mean the nth + occurrence of this weekday in the period. For example, with MONTHLY, + or with YEARLY and BYMONTH, using FR(+1) in byweekday will specify the + first friday of the month where the recurrence happens. Notice that in + the RFC documentation, this is specified as BYDAY, but was renamed to + avoid the ambiguity of that keyword. + :param byhour: + If given, it must be either an integer, or a sequence of integers, + meaning the hours to apply the recurrence to. + :param byminute: + If given, it must be either an integer, or a sequence of integers, + meaning the minutes to apply the recurrence to. + :param bysecond: + If given, it must be either an integer, or a sequence of integers, + meaning the seconds to apply the recurrence to. + :param cache: + If given, it must be a boolean value specifying to enable or disable + caching of results. If you will use the same rrule instance multiple + times, enabling caching will improve the performance considerably. + """ + def __init__(self, freq, dtstart=None, + interval=1, wkst=None, count=None, until=None, bysetpos=None, + bymonth=None, bymonthday=None, byyearday=None, byeaster=None, + byweekno=None, byweekday=None, + byhour=None, byminute=None, bysecond=None, + cache=False): + super(rrule, self).__init__(cache) + global easter + if not dtstart: + if until and until.tzinfo: + dtstart = datetime.datetime.now(tz=until.tzinfo).replace(microsecond=0) + else: + dtstart = datetime.datetime.now().replace(microsecond=0) + elif not isinstance(dtstart, datetime.datetime): + dtstart = datetime.datetime.fromordinal(dtstart.toordinal()) + else: + dtstart = dtstart.replace(microsecond=0) + self._dtstart = dtstart + self._tzinfo = dtstart.tzinfo + self._freq = freq + self._interval = interval + self._count = count + + # Cache the original byxxx rules, if they are provided, as the _byxxx + # attributes do not necessarily map to the inputs, and this can be + # a problem in generating the strings. Only store things if they've + # been supplied (the string retrieval will just use .get()) + self._original_rule = {} + + if until and not isinstance(until, datetime.datetime): + until = datetime.datetime.fromordinal(until.toordinal()) + self._until = until + + if self._dtstart and self._until: + if (self._dtstart.tzinfo is not None) != (self._until.tzinfo is not None): + # According to RFC5545 Section 3.3.10: + # https://tools.ietf.org/html/rfc5545#section-3.3.10 + # + # > If the "DTSTART" property is specified as a date with UTC + # > time or a date with local time and time zone reference, + # > then the UNTIL rule part MUST be specified as a date with + # > UTC time. + raise ValueError( + 'RRULE UNTIL values must be specified in UTC when DTSTART ' + 'is timezone-aware' + ) + + if count is not None and until: + warn("Using both 'count' and 'until' is inconsistent with RFC 5545" + " and has been deprecated in dateutil. 
Future versions will " + "raise an error.", DeprecationWarning) + + if wkst is None: + self._wkst = calendar.firstweekday() + elif isinstance(wkst, integer_types): + self._wkst = wkst + else: + self._wkst = wkst.weekday + + if bysetpos is None: + self._bysetpos = None + elif isinstance(bysetpos, integer_types): + if bysetpos == 0 or not (-366 <= bysetpos <= 366): + raise ValueError("bysetpos must be between 1 and 366, " + "or between -366 and -1") + self._bysetpos = (bysetpos,) + else: + self._bysetpos = tuple(bysetpos) + for pos in self._bysetpos: + if pos == 0 or not (-366 <= pos <= 366): + raise ValueError("bysetpos must be between 1 and 366, " + "or between -366 and -1") + + if self._bysetpos: + self._original_rule['bysetpos'] = self._bysetpos + + if (byweekno is None and byyearday is None and bymonthday is None and + byweekday is None and byeaster is None): + if freq == YEARLY: + if bymonth is None: + bymonth = dtstart.month + self._original_rule['bymonth'] = None + bymonthday = dtstart.day + self._original_rule['bymonthday'] = None + elif freq == MONTHLY: + bymonthday = dtstart.day + self._original_rule['bymonthday'] = None + elif freq == WEEKLY: + byweekday = dtstart.weekday() + self._original_rule['byweekday'] = None + + # bymonth + if bymonth is None: + self._bymonth = None + else: + if isinstance(bymonth, integer_types): + bymonth = (bymonth,) + + self._bymonth = tuple(sorted(set(bymonth))) + + if 'bymonth' not in self._original_rule: + self._original_rule['bymonth'] = self._bymonth + + # byyearday + if byyearday is None: + self._byyearday = None + else: + if isinstance(byyearday, integer_types): + byyearday = (byyearday,) + + self._byyearday = tuple(sorted(set(byyearday))) + self._original_rule['byyearday'] = self._byyearday + + # byeaster + if byeaster is not None: + if not easter: + from dateutil import easter + if isinstance(byeaster, integer_types): + self._byeaster = (byeaster,) + else: + self._byeaster = tuple(sorted(byeaster)) + + self._original_rule['byeaster'] = self._byeaster + else: + self._byeaster = None + + # bymonthday + if bymonthday is None: + self._bymonthday = () + self._bynmonthday = () + else: + if isinstance(bymonthday, integer_types): + bymonthday = (bymonthday,) + + bymonthday = set(bymonthday) # Ensure it's unique + + self._bymonthday = tuple(sorted(x for x in bymonthday if x > 0)) + self._bynmonthday = tuple(sorted(x for x in bymonthday if x < 0)) + + # Storing positive numbers first, then negative numbers + if 'bymonthday' not in self._original_rule: + self._original_rule['bymonthday'] = tuple( + itertools.chain(self._bymonthday, self._bynmonthday)) + + # byweekno + if byweekno is None: + self._byweekno = None + else: + if isinstance(byweekno, integer_types): + byweekno = (byweekno,) + + self._byweekno = tuple(sorted(set(byweekno))) + + self._original_rule['byweekno'] = self._byweekno + + # byweekday / bynweekday + if byweekday is None: + self._byweekday = None + self._bynweekday = None + else: + # If it's one of the valid non-sequence types, convert to a + # single-element sequence before the iterator that builds the + # byweekday set. 
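+            # e.g. byweekday=2 becomes (2,) and byweekday=MO becomes (MO,),
+            # so the loop below can treat every input as a sequence; the
+            # hasattr(byweekday, "n") test is what detects a bare weekday
+            # instance such as MO or FR(+1).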
+ if isinstance(byweekday, integer_types) or hasattr(byweekday, "n"): + byweekday = (byweekday,) + + self._byweekday = set() + self._bynweekday = set() + for wday in byweekday: + if isinstance(wday, integer_types): + self._byweekday.add(wday) + elif not wday.n or freq > MONTHLY: + self._byweekday.add(wday.weekday) + else: + self._bynweekday.add((wday.weekday, wday.n)) + + if not self._byweekday: + self._byweekday = None + elif not self._bynweekday: + self._bynweekday = None + + if self._byweekday is not None: + self._byweekday = tuple(sorted(self._byweekday)) + orig_byweekday = [weekday(x) for x in self._byweekday] + else: + orig_byweekday = () + + if self._bynweekday is not None: + self._bynweekday = tuple(sorted(self._bynweekday)) + orig_bynweekday = [weekday(*x) for x in self._bynweekday] + else: + orig_bynweekday = () + + if 'byweekday' not in self._original_rule: + self._original_rule['byweekday'] = tuple(itertools.chain( + orig_byweekday, orig_bynweekday)) + + # byhour + if byhour is None: + if freq < HOURLY: + self._byhour = {dtstart.hour} + else: + self._byhour = None + else: + if isinstance(byhour, integer_types): + byhour = (byhour,) + + if freq == HOURLY: + self._byhour = self.__construct_byset(start=dtstart.hour, + byxxx=byhour, + base=24) + else: + self._byhour = set(byhour) + + self._byhour = tuple(sorted(self._byhour)) + self._original_rule['byhour'] = self._byhour + + # byminute + if byminute is None: + if freq < MINUTELY: + self._byminute = {dtstart.minute} + else: + self._byminute = None + else: + if isinstance(byminute, integer_types): + byminute = (byminute,) + + if freq == MINUTELY: + self._byminute = self.__construct_byset(start=dtstart.minute, + byxxx=byminute, + base=60) + else: + self._byminute = set(byminute) + + self._byminute = tuple(sorted(self._byminute)) + self._original_rule['byminute'] = self._byminute + + # bysecond + if bysecond is None: + if freq < SECONDLY: + self._bysecond = ((dtstart.second,)) + else: + self._bysecond = None + else: + if isinstance(bysecond, integer_types): + bysecond = (bysecond,) + + self._bysecond = set(bysecond) + + if freq == SECONDLY: + self._bysecond = self.__construct_byset(start=dtstart.second, + byxxx=bysecond, + base=60) + else: + self._bysecond = set(bysecond) + + self._bysecond = tuple(sorted(self._bysecond)) + self._original_rule['bysecond'] = self._bysecond + + if self._freq >= HOURLY: + self._timeset = None + else: + self._timeset = [] + for hour in self._byhour: + for minute in self._byminute: + for second in self._bysecond: + self._timeset.append( + datetime.time(hour, minute, second, + tzinfo=self._tzinfo)) + self._timeset.sort() + self._timeset = tuple(self._timeset) + + def __str__(self): + """ + Output a string that would generate this RRULE if passed to rrulestr. + This is mostly compatible with RFC5545, except for the + dateutil-specific extension BYEASTER. 
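+
+        A minimal illustration (assumes the default
+        ``calendar.firstweekday() == 0``, so no WKST part appears; the
+        default INTERVAL of 1 is likewise omitted):
+
+        >>> from dateutil.rrule import rrule, DAILY
+        >>> from datetime import datetime
+        >>> print(rrule(DAILY, count=3, dtstart=datetime(2020, 1, 1)))
+        DTSTART:20200101T000000
+        RRULE:FREQ=DAILY;COUNT=3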
+ """ + + output = [] + h, m, s = [None] * 3 + if self._dtstart: + output.append(self._dtstart.strftime('DTSTART:%Y%m%dT%H%M%S')) + h, m, s = self._dtstart.timetuple()[3:6] + + parts = ['FREQ=' + FREQNAMES[self._freq]] + if self._interval != 1: + parts.append('INTERVAL=' + str(self._interval)) + + if self._wkst: + parts.append('WKST=' + repr(weekday(self._wkst))[0:2]) + + if self._count is not None: + parts.append('COUNT=' + str(self._count)) + + if self._until: + parts.append(self._until.strftime('UNTIL=%Y%m%dT%H%M%S')) + + if self._original_rule.get('byweekday') is not None: + # The str() method on weekday objects doesn't generate + # RFC5545-compliant strings, so we should modify that. + original_rule = dict(self._original_rule) + wday_strings = [] + for wday in original_rule['byweekday']: + if wday.n: + wday_strings.append('{n:+d}{wday}'.format( + n=wday.n, + wday=repr(wday)[0:2])) + else: + wday_strings.append(repr(wday)) + + original_rule['byweekday'] = wday_strings + else: + original_rule = self._original_rule + + partfmt = '{name}={vals}' + for name, key in [('BYSETPOS', 'bysetpos'), + ('BYMONTH', 'bymonth'), + ('BYMONTHDAY', 'bymonthday'), + ('BYYEARDAY', 'byyearday'), + ('BYWEEKNO', 'byweekno'), + ('BYDAY', 'byweekday'), + ('BYHOUR', 'byhour'), + ('BYMINUTE', 'byminute'), + ('BYSECOND', 'bysecond'), + ('BYEASTER', 'byeaster')]: + value = original_rule.get(key) + if value: + parts.append(partfmt.format(name=name, vals=(','.join(str(v) + for v in value)))) + + output.append('RRULE:' + ';'.join(parts)) + return '\n'.join(output) + + def replace(self, **kwargs): + """Return new rrule with same attributes except for those attributes given new + values by whichever keyword arguments are specified.""" + new_kwargs = {"interval": self._interval, + "count": self._count, + "dtstart": self._dtstart, + "freq": self._freq, + "until": self._until, + "wkst": self._wkst, + "cache": False if self._cache is None else True } + new_kwargs.update(self._original_rule) + new_kwargs.update(kwargs) + return rrule(**new_kwargs) + + def _iter(self): + year, month, day, hour, minute, second, weekday, yearday, _ = \ + self._dtstart.timetuple() + + # Some local variables to speed things up a bit + freq = self._freq + interval = self._interval + wkst = self._wkst + until = self._until + bymonth = self._bymonth + byweekno = self._byweekno + byyearday = self._byyearday + byweekday = self._byweekday + byeaster = self._byeaster + bymonthday = self._bymonthday + bynmonthday = self._bynmonthday + bysetpos = self._bysetpos + byhour = self._byhour + byminute = self._byminute + bysecond = self._bysecond + + ii = _iterinfo(self) + ii.rebuild(year, month) + + getdayset = {YEARLY: ii.ydayset, + MONTHLY: ii.mdayset, + WEEKLY: ii.wdayset, + DAILY: ii.ddayset, + HOURLY: ii.ddayset, + MINUTELY: ii.ddayset, + SECONDLY: ii.ddayset}[freq] + + if freq < HOURLY: + timeset = self._timeset + else: + gettimeset = {HOURLY: ii.htimeset, + MINUTELY: ii.mtimeset, + SECONDLY: ii.stimeset}[freq] + if ((freq >= HOURLY and + self._byhour and hour not in self._byhour) or + (freq >= MINUTELY and + self._byminute and minute not in self._byminute) or + (freq >= SECONDLY and + self._bysecond and second not in self._bysecond)): + timeset = () + else: + timeset = gettimeset(hour, minute, second) + + total = 0 + count = self._count + while True: + # Get dayset with the right frequency + dayset, start, end = getdayset(year, month, day) + + # Do the "hard" work ;-) + filtered = False + for i in dayset[start:end]: + if ((bymonth and ii.mmask[i] not in 
bymonth) or + (byweekno and not ii.wnomask[i]) or + (byweekday and ii.wdaymask[i] not in byweekday) or + (ii.nwdaymask and not ii.nwdaymask[i]) or + (byeaster and not ii.eastermask[i]) or + ((bymonthday or bynmonthday) and + ii.mdaymask[i] not in bymonthday and + ii.nmdaymask[i] not in bynmonthday) or + (byyearday and + ((i < ii.yearlen and i+1 not in byyearday and + -ii.yearlen+i not in byyearday) or + (i >= ii.yearlen and i+1-ii.yearlen not in byyearday and + -ii.nextyearlen+i-ii.yearlen not in byyearday)))): + dayset[i] = None + filtered = True + + # Output results + if bysetpos and timeset: + poslist = [] + for pos in bysetpos: + if pos < 0: + daypos, timepos = divmod(pos, len(timeset)) + else: + daypos, timepos = divmod(pos-1, len(timeset)) + try: + i = [x for x in dayset[start:end] + if x is not None][daypos] + time = timeset[timepos] + except IndexError: + pass + else: + date = datetime.date.fromordinal(ii.yearordinal+i) + res = datetime.datetime.combine(date, time) + if res not in poslist: + poslist.append(res) + poslist.sort() + for res in poslist: + if until and res > until: + self._len = total + return + elif res >= self._dtstart: + if count is not None: + count -= 1 + if count < 0: + self._len = total + return + total += 1 + yield res + else: + for i in dayset[start:end]: + if i is not None: + date = datetime.date.fromordinal(ii.yearordinal + i) + for time in timeset: + res = datetime.datetime.combine(date, time) + if until and res > until: + self._len = total + return + elif res >= self._dtstart: + if count is not None: + count -= 1 + if count < 0: + self._len = total + return + + total += 1 + yield res + + # Handle frequency and interval + fixday = False + if freq == YEARLY: + year += interval + if year > datetime.MAXYEAR: + self._len = total + return + ii.rebuild(year, month) + elif freq == MONTHLY: + month += interval + if month > 12: + div, mod = divmod(month, 12) + month = mod + year += div + if month == 0: + month = 12 + year -= 1 + if year > datetime.MAXYEAR: + self._len = total + return + ii.rebuild(year, month) + elif freq == WEEKLY: + if wkst > weekday: + day += -(weekday+1+(6-wkst))+self._interval*7 + else: + day += -(weekday-wkst)+self._interval*7 + weekday = wkst + fixday = True + elif freq == DAILY: + day += interval + fixday = True + elif freq == HOURLY: + if filtered: + # Jump to one iteration before next day + hour += ((23-hour)//interval)*interval + + if byhour: + ndays, hour = self.__mod_distance(value=hour, + byxxx=self._byhour, + base=24) + else: + ndays, hour = divmod(hour+interval, 24) + + if ndays: + day += ndays + fixday = True + + timeset = gettimeset(hour, minute, second) + elif freq == MINUTELY: + if filtered: + # Jump to one iteration before next day + minute += ((1439-(hour*60+minute))//interval)*interval + + valid = False + rep_rate = (24*60) + for j in range(rep_rate // gcd(interval, rep_rate)): + if byminute: + nhours, minute = \ + self.__mod_distance(value=minute, + byxxx=self._byminute, + base=60) + else: + nhours, minute = divmod(minute+interval, 60) + + div, hour = divmod(hour+nhours, 24) + if div: + day += div + fixday = True + filtered = False + + if not byhour or hour in byhour: + valid = True + break + + if not valid: + raise ValueError('Invalid combination of interval and ' + + 'byhour resulting in empty rule.') + + timeset = gettimeset(hour, minute, second) + elif freq == SECONDLY: + if filtered: + # Jump to one iteration before next day + second += (((86399 - (hour * 3600 + minute * 60 + second)) + // interval) * interval) + + 
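+                # At most one repetition period of distinct second values is
+                # scanned (24*3600 reduced by its gcd with the interval); if
+                # no step satisfies byhour/byminute/bysecond within that
+                # window, the rule can never produce a value and the
+                # ValueError below is raised.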
rep_rate = (24 * 3600) + valid = False + for j in range(0, rep_rate // gcd(interval, rep_rate)): + if bysecond: + nminutes, second = \ + self.__mod_distance(value=second, + byxxx=self._bysecond, + base=60) + else: + nminutes, second = divmod(second+interval, 60) + + div, minute = divmod(minute+nminutes, 60) + if div: + hour += div + div, hour = divmod(hour, 24) + if div: + day += div + fixday = True + + if ((not byhour or hour in byhour) and + (not byminute or minute in byminute) and + (not bysecond or second in bysecond)): + valid = True + break + + if not valid: + raise ValueError('Invalid combination of interval, ' + + 'byhour and byminute resulting in empty' + + ' rule.') + + timeset = gettimeset(hour, minute, second) + + if fixday and day > 28: + daysinmonth = calendar.monthrange(year, month)[1] + if day > daysinmonth: + while day > daysinmonth: + day -= daysinmonth + month += 1 + if month == 13: + month = 1 + year += 1 + if year > datetime.MAXYEAR: + self._len = total + return + daysinmonth = calendar.monthrange(year, month)[1] + ii.rebuild(year, month) + + def __construct_byset(self, start, byxxx, base): + """ + If a `BYXXX` sequence is passed to the constructor at the same level as + `FREQ` (e.g. `FREQ=HOURLY,BYHOUR={2,4,7},INTERVAL=3`), there are some + specifications which cannot be reached given some starting conditions. + + This occurs whenever the interval is not coprime with the base of a + given unit and the difference between the starting position and the + ending position is not coprime with the greatest common denominator + between the interval and the base. For example, with a FREQ of hourly + starting at 17:00 and an interval of 4, the only valid values for + BYHOUR would be {21, 1, 5, 9, 13, 17}, because 4 and 24 are not + coprime. + + :param start: + Specifies the starting position. + :param byxxx: + An iterable containing the list of allowed values. + :param base: + The largest allowable value for the specified frequency (e.g. + 24 hours, 60 minutes). + + This does not preserve the type of the iterable, returning a set, since + the values should be unique and the order is irrelevant, this will + speed up later lookups. + + In the event of an empty set, raises a :exception:`ValueError`, as this + results in an empty rrule. + """ + + cset = set() + + # Support a single byxxx value. + if isinstance(byxxx, integer_types): + byxxx = (byxxx, ) + + for num in byxxx: + i_gcd = gcd(self._interval, base) + # Use divmod rather than % because we need to wrap negative nums. + if i_gcd == 1 or divmod(num - start, i_gcd)[1] == 0: + cset.add(num) + + if len(cset) == 0: + raise ValueError("Invalid rrule byxxx generates an empty set.") + + return cset + + def __mod_distance(self, value, byxxx, base): + """ + Calculates the next value in a sequence where the `FREQ` parameter is + specified along with a `BYXXX` parameter at the same "level" + (e.g. `HOURLY` specified with `BYHOUR`). + + :param value: + The old value of the component. + :param byxxx: + The `BYXXX` set, which should have been generated by + `rrule._construct_byset`, or something else which checks that a + valid rule is present. + :param base: + The largest allowable value for the specified frequency (e.g. + 24 hours, 60 minutes). + + If a valid value is not found after `base` iterations (the maximum + number before the sequence would start to repeat), this raises a + :exception:`ValueError`, as no valid values were found. 
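+
+        Worked illustration (values assumed for the example): with
+        ``self._interval = 4``, ``base = 24``, ``value = 17`` and
+        ``byxxx = {21}``, the first step gives
+        ``divmod(17 + 4, 24) == (0, 21)`` and ``21`` is in ``byxxx``, so
+        ``(0, 21)`` is returned: one interval step, with no carry into
+        the next day.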
+ + This returns a tuple of `divmod(n*interval, base)`, where `n` is the + smallest number of `interval` repetitions until the next specified + value in `byxxx` is found. + """ + accumulator = 0 + for ii in range(1, base + 1): + # Using divmod() over % to account for negative intervals + div, value = divmod(value + self._interval, base) + accumulator += div + if value in byxxx: + return (accumulator, value) + + +class _iterinfo(object): + __slots__ = ["rrule", "lastyear", "lastmonth", + "yearlen", "nextyearlen", "yearordinal", "yearweekday", + "mmask", "mrange", "mdaymask", "nmdaymask", + "wdaymask", "wnomask", "nwdaymask", "eastermask"] + + def __init__(self, rrule): + for attr in self.__slots__: + setattr(self, attr, None) + self.rrule = rrule + + def rebuild(self, year, month): + # Every mask is 7 days longer to handle cross-year weekly periods. + rr = self.rrule + if year != self.lastyear: + self.yearlen = 365 + calendar.isleap(year) + self.nextyearlen = 365 + calendar.isleap(year + 1) + firstyday = datetime.date(year, 1, 1) + self.yearordinal = firstyday.toordinal() + self.yearweekday = firstyday.weekday() + + wday = datetime.date(year, 1, 1).weekday() + if self.yearlen == 365: + self.mmask = M365MASK + self.mdaymask = MDAY365MASK + self.nmdaymask = NMDAY365MASK + self.wdaymask = WDAYMASK[wday:] + self.mrange = M365RANGE + else: + self.mmask = M366MASK + self.mdaymask = MDAY366MASK + self.nmdaymask = NMDAY366MASK + self.wdaymask = WDAYMASK[wday:] + self.mrange = M366RANGE + + if not rr._byweekno: + self.wnomask = None + else: + self.wnomask = [0]*(self.yearlen+7) + # no1wkst = firstwkst = self.wdaymask.index(rr._wkst) + no1wkst = firstwkst = (7-self.yearweekday+rr._wkst) % 7 + if no1wkst >= 4: + no1wkst = 0 + # Number of days in the year, plus the days we got + # from last year. + wyearlen = self.yearlen+(self.yearweekday-rr._wkst) % 7 + else: + # Number of days in the year, minus the days we + # left in last year. + wyearlen = self.yearlen-no1wkst + div, mod = divmod(wyearlen, 7) + numweeks = div+mod//4 + for n in rr._byweekno: + if n < 0: + n += numweeks+1 + if not (0 < n <= numweeks): + continue + if n > 1: + i = no1wkst+(n-1)*7 + if no1wkst != firstwkst: + i -= 7-firstwkst + else: + i = no1wkst + for j in range(7): + self.wnomask[i] = 1 + i += 1 + if self.wdaymask[i] == rr._wkst: + break + if 1 in rr._byweekno: + # Check week number 1 of next year as well + # TODO: Check -numweeks for next year. + i = no1wkst+numweeks*7 + if no1wkst != firstwkst: + i -= 7-firstwkst + if i < self.yearlen: + # If week starts in next year, we + # don't care about it. + for j in range(7): + self.wnomask[i] = 1 + i += 1 + if self.wdaymask[i] == rr._wkst: + break + if no1wkst: + # Check last week number of last year as + # well. If no1wkst is 0, either the year + # started on week start, or week number 1 + # got days from last year, so there are no + # days from last year's last week number in + # this year. 
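+                # The first no1wkst days of this year belong to the previous
+                # year's final week; that week is selected either by its own
+                # number (lnumweeks, computed below) or by byweekno == -1.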
+ if -1 not in rr._byweekno: + lyearweekday = datetime.date(year-1, 1, 1).weekday() + lno1wkst = (7-lyearweekday+rr._wkst) % 7 + lyearlen = 365+calendar.isleap(year-1) + if lno1wkst >= 4: + lno1wkst = 0 + lnumweeks = 52+(lyearlen + + (lyearweekday-rr._wkst) % 7) % 7//4 + else: + lnumweeks = 52+(self.yearlen-no1wkst) % 7//4 + else: + lnumweeks = -1 + if lnumweeks in rr._byweekno: + for i in range(no1wkst): + self.wnomask[i] = 1 + + if (rr._bynweekday and (month != self.lastmonth or + year != self.lastyear)): + ranges = [] + if rr._freq == YEARLY: + if rr._bymonth: + for month in rr._bymonth: + ranges.append(self.mrange[month-1:month+1]) + else: + ranges = [(0, self.yearlen)] + elif rr._freq == MONTHLY: + ranges = [self.mrange[month-1:month+1]] + if ranges: + # Weekly frequency won't get here, so we may not + # care about cross-year weekly periods. + self.nwdaymask = [0]*self.yearlen + for first, last in ranges: + last -= 1 + for wday, n in rr._bynweekday: + if n < 0: + i = last+(n+1)*7 + i -= (self.wdaymask[i]-wday) % 7 + else: + i = first+(n-1)*7 + i += (7-self.wdaymask[i]+wday) % 7 + if first <= i <= last: + self.nwdaymask[i] = 1 + + if rr._byeaster: + self.eastermask = [0]*(self.yearlen+7) + eyday = easter.easter(year).toordinal()-self.yearordinal + for offset in rr._byeaster: + self.eastermask[eyday+offset] = 1 + + self.lastyear = year + self.lastmonth = month + + def ydayset(self, year, month, day): + return list(range(self.yearlen)), 0, self.yearlen + + def mdayset(self, year, month, day): + dset = [None]*self.yearlen + start, end = self.mrange[month-1:month+1] + for i in range(start, end): + dset[i] = i + return dset, start, end + + def wdayset(self, year, month, day): + # We need to handle cross-year weeks here. + dset = [None]*(self.yearlen+7) + i = datetime.date(year, month, day).toordinal()-self.yearordinal + start = i + for j in range(7): + dset[i] = i + i += 1 + # if (not (0 <= i < self.yearlen) or + # self.wdaymask[i] == self.rrule._wkst): + # This will cross the year boundary, if necessary. + if self.wdaymask[i] == self.rrule._wkst: + break + return dset, start, i + + def ddayset(self, year, month, day): + dset = [None] * self.yearlen + i = datetime.date(year, month, day).toordinal() - self.yearordinal + dset[i] = i + return dset, i, i + 1 + + def htimeset(self, hour, minute, second): + tset = [] + rr = self.rrule + for minute in rr._byminute: + for second in rr._bysecond: + tset.append(datetime.time(hour, minute, second, + tzinfo=rr._tzinfo)) + tset.sort() + return tset + + def mtimeset(self, hour, minute, second): + tset = [] + rr = self.rrule + for second in rr._bysecond: + tset.append(datetime.time(hour, minute, second, tzinfo=rr._tzinfo)) + tset.sort() + return tset + + def stimeset(self, hour, minute, second): + return (datetime.time(hour, minute, second, + tzinfo=self.rrule._tzinfo),) + + +class rruleset(rrulebase): + """ The rruleset type allows more complex recurrence setups, mixing + multiple rules, dates, exclusion rules, and exclusion dates. The type + constructor takes the following keyword arguments: + + :param cache: If True, caching of results will be enabled, improving + performance of multiple queries considerably. 
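+
+    A minimal illustration (dates assumed for the example):
+
+    >>> from dateutil.rrule import rruleset, rrule, DAILY
+    >>> from datetime import datetime
+    >>> rset = rruleset()
+    >>> rset.rrule(rrule(DAILY, count=3, dtstart=datetime(2020, 1, 1)))
+    >>> rset.exdate(datetime(2020, 1, 2))
+    >>> list(rset)
+    [datetime.datetime(2020, 1, 1, 0, 0), datetime.datetime(2020, 1, 3, 0, 0)]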
""" + + class _genitem(object): + def __init__(self, genlist, gen): + try: + self.dt = advance_iterator(gen) + genlist.append(self) + except StopIteration: + pass + self.genlist = genlist + self.gen = gen + + def __next__(self): + try: + self.dt = advance_iterator(self.gen) + except StopIteration: + if self.genlist[0] is self: + heapq.heappop(self.genlist) + else: + self.genlist.remove(self) + heapq.heapify(self.genlist) + + next = __next__ + + def __lt__(self, other): + return self.dt < other.dt + + def __gt__(self, other): + return self.dt > other.dt + + def __eq__(self, other): + return self.dt == other.dt + + def __ne__(self, other): + return self.dt != other.dt + + def __init__(self, cache=False): + super(rruleset, self).__init__(cache) + self._rrule = [] + self._rdate = [] + self._exrule = [] + self._exdate = [] + + @_invalidates_cache + def rrule(self, rrule): + """ Include the given :py:class:`rrule` instance in the recurrence set + generation. """ + self._rrule.append(rrule) + + @_invalidates_cache + def rdate(self, rdate): + """ Include the given :py:class:`datetime` instance in the recurrence + set generation. """ + self._rdate.append(rdate) + + @_invalidates_cache + def exrule(self, exrule): + """ Include the given rrule instance in the recurrence set exclusion + list. Dates which are part of the given recurrence rules will not + be generated, even if some inclusive rrule or rdate matches them. + """ + self._exrule.append(exrule) + + @_invalidates_cache + def exdate(self, exdate): + """ Include the given datetime instance in the recurrence set + exclusion list. Dates included that way will not be generated, + even if some inclusive rrule or rdate matches them. """ + self._exdate.append(exdate) + + def _iter(self): + rlist = [] + self._rdate.sort() + self._genitem(rlist, iter(self._rdate)) + for gen in [iter(x) for x in self._rrule]: + self._genitem(rlist, gen) + exlist = [] + self._exdate.sort() + self._genitem(exlist, iter(self._exdate)) + for gen in [iter(x) for x in self._exrule]: + self._genitem(exlist, gen) + lastdt = None + total = 0 + heapq.heapify(rlist) + heapq.heapify(exlist) + while rlist: + ritem = rlist[0] + if not lastdt or lastdt != ritem.dt: + while exlist and exlist[0] < ritem: + exitem = exlist[0] + advance_iterator(exitem) + if exlist and exlist[0] is exitem: + heapq.heapreplace(exlist, exitem) + if not exlist or ritem != exlist[0]: + total += 1 + yield ritem.dt + lastdt = ritem.dt + advance_iterator(ritem) + if rlist and rlist[0] is ritem: + heapq.heapreplace(rlist, ritem) + self._len = total + + + + +class _rrulestr(object): + """ Parses a string representation of a recurrence rule or set of + recurrence rules. + + :param s: + Required, a string defining one or more recurrence rules. + + :param dtstart: + If given, used as the default recurrence start if not specified in the + rule string. + + :param cache: + If set ``True`` caching of results will be enabled, improving + performance of multiple queries considerably. + + :param unfold: + If set ``True`` indicates that a rule string is split over more + than one line and should be joined before processing. + + :param forceset: + If set ``True`` forces a :class:`dateutil.rrule.rruleset` to + be returned. + + :param compatible: + If set ``True`` forces ``unfold`` and ``forceset`` to be ``True``. + + :param ignoretz: + If set ``True``, time zones in parsed strings are ignored and a naive + :class:`datetime.datetime` object is returned. 
+ + :param tzids: + If given, a callable or mapping used to retrieve a + :class:`datetime.tzinfo` from a string representation. + Defaults to :func:`dateutil.tz.gettz`. + + :param tzinfos: + Additional time zone names / aliases which may be present in a string + representation. See :func:`dateutil.parser.parse` for more + information. + + :return: + Returns a :class:`dateutil.rrule.rruleset` or + :class:`dateutil.rrule.rrule` + """ + + _freq_map = {"YEARLY": YEARLY, + "MONTHLY": MONTHLY, + "WEEKLY": WEEKLY, + "DAILY": DAILY, + "HOURLY": HOURLY, + "MINUTELY": MINUTELY, + "SECONDLY": SECONDLY} + + _weekday_map = {"MO": 0, "TU": 1, "WE": 2, "TH": 3, + "FR": 4, "SA": 5, "SU": 6} + + def _handle_int(self, rrkwargs, name, value, **kwargs): + rrkwargs[name.lower()] = int(value) + + def _handle_int_list(self, rrkwargs, name, value, **kwargs): + rrkwargs[name.lower()] = [int(x) for x in value.split(',')] + + _handle_INTERVAL = _handle_int + _handle_COUNT = _handle_int + _handle_BYSETPOS = _handle_int_list + _handle_BYMONTH = _handle_int_list + _handle_BYMONTHDAY = _handle_int_list + _handle_BYYEARDAY = _handle_int_list + _handle_BYEASTER = _handle_int_list + _handle_BYWEEKNO = _handle_int_list + _handle_BYHOUR = _handle_int_list + _handle_BYMINUTE = _handle_int_list + _handle_BYSECOND = _handle_int_list + + def _handle_FREQ(self, rrkwargs, name, value, **kwargs): + rrkwargs["freq"] = self._freq_map[value] + + def _handle_UNTIL(self, rrkwargs, name, value, **kwargs): + global parser + if not parser: + from dateutil import parser + try: + rrkwargs["until"] = parser.parse(value, + ignoretz=kwargs.get("ignoretz"), + tzinfos=kwargs.get("tzinfos")) + except ValueError: + raise ValueError("invalid until date") + + def _handle_WKST(self, rrkwargs, name, value, **kwargs): + rrkwargs["wkst"] = self._weekday_map[value] + + def _handle_BYWEEKDAY(self, rrkwargs, name, value, **kwargs): + """ + Two ways to specify this: +1MO or MO(+1) + """ + l = [] + for wday in value.split(','): + if '(' in wday: + # If it's of the form TH(+1), etc. + splt = wday.split('(') + w = splt[0] + n = int(splt[1][:-1]) + elif len(wday): + # If it's of the form +1MO + for i in range(len(wday)): + if wday[i] not in '+-0123456789': + break + n = wday[:i] or None + w = wday[i:] + if n: + n = int(n) + else: + raise ValueError("Invalid (empty) BYDAY specification.") + + l.append(weekdays[self._weekday_map[w]](n)) + rrkwargs["byweekday"] = l + + _handle_BYDAY = _handle_BYWEEKDAY + + def _parse_rfc_rrule(self, line, + dtstart=None, + cache=False, + ignoretz=False, + tzinfos=None): + if line.find(':') != -1: + name, value = line.split(':') + if name != "RRULE": + raise ValueError("unknown parameter name") + else: + value = line + rrkwargs = {} + for pair in value.split(';'): + name, value = pair.split('=') + name = name.upper() + value = value.upper() + try: + getattr(self, "_handle_"+name)(rrkwargs, name, value, + ignoretz=ignoretz, + tzinfos=tzinfos) + except AttributeError: + raise ValueError("unknown parameter '%s'" % name) + except (KeyError, ValueError): + raise ValueError("invalid '%s': %s" % (name, value)) + return rrule(dtstart=dtstart, cache=cache, **rrkwargs) + + def _parse_date_value(self, date_value, parms, rule_tzids, + ignoretz, tzids, tzinfos): + global parser + if not parser: + from dateutil import parser + + datevals = [] + value_found = False + TZID = None + + for parm in parms: + if parm.startswith("TZID="): + try: + tzkey = rule_tzids[parm.split('TZID=')[-1]] + except KeyError: + continue + if tzids is None: + from . 
import tz
+                    tzlookup = tz.gettz
+                elif callable(tzids):
+                    tzlookup = tzids
+                else:
+                    tzlookup = getattr(tzids, 'get', None)
+                    if tzlookup is None:
+                        msg = ('tzids must be a callable, mapping, or None, '
+                               'not %s' % tzids)
+                        raise ValueError(msg)
+
+                TZID = tzlookup(tzkey)
+                continue
+
+            # RFC 5545 3.8.2.4: The VALUE parameter is optional, but may be
+            # found only once.
+            if parm not in {"VALUE=DATE-TIME", "VALUE=DATE"}:
+                raise ValueError("unsupported parm: " + parm)
+            else:
+                if value_found:
+                    msg = ("Duplicate value parameter found in: " + parm)
+                    raise ValueError(msg)
+                value_found = True
+
+        for datestr in date_value.split(','):
+            date = parser.parse(datestr, ignoretz=ignoretz, tzinfos=tzinfos)
+            if TZID is not None:
+                if date.tzinfo is None:
+                    date = date.replace(tzinfo=TZID)
+                else:
+                    raise ValueError('DTSTART/EXDATE specifies multiple timezones')
+            datevals.append(date)
+
+        return datevals
+
+    def _parse_rfc(self, s,
+                   dtstart=None,
+                   cache=False,
+                   unfold=False,
+                   forceset=False,
+                   compatible=False,
+                   ignoretz=False,
+                   tzids=None,
+                   tzinfos=None):
+        global parser
+        if compatible:
+            forceset = True
+            unfold = True
+
+        TZID_NAMES = dict(map(
+            lambda x: (x.upper(), x),
+            re.findall('TZID=(?P<tzid>[^:]+):', s)
+        ))
+        s = s.upper()
+        if not s.strip():
+            raise ValueError("empty string")
+        if unfold:
+            lines = s.splitlines()
+            i = 0
+            while i < len(lines):
+                line = lines[i].rstrip()
+                if not line:
+                    del lines[i]
+                elif i > 0 and line[0] == " ":
+                    lines[i-1] += line[1:]
+                    del lines[i]
+                else:
+                    i += 1
+        else:
+            lines = s.split()
+        if (not forceset and len(lines) == 1 and (s.find(':') == -1 or
+                                                  s.startswith('RRULE:'))):
+            return self._parse_rfc_rrule(lines[0], cache=cache,
+                                         dtstart=dtstart, ignoretz=ignoretz,
+                                         tzinfos=tzinfos)
+        else:
+            rrulevals = []
+            rdatevals = []
+            exrulevals = []
+            exdatevals = []
+            for line in lines:
+                if not line:
+                    continue
+                if line.find(':') == -1:
+                    name = "RRULE"
+                    value = line
+                else:
+                    name, value = line.split(':', 1)
+                parms = name.split(';')
+                if not parms:
+                    raise ValueError("empty property name")
+                name = parms[0]
+                parms = parms[1:]
+                if name == "RRULE":
+                    for parm in parms:
+                        raise ValueError("unsupported RRULE parm: "+parm)
+                    rrulevals.append(value)
+                elif name == "RDATE":
+                    for parm in parms:
+                        if parm != "VALUE=DATE-TIME":
+                            raise ValueError("unsupported RDATE parm: "+parm)
+                    rdatevals.append(value)
+                elif name == "EXRULE":
+                    for parm in parms:
+                        raise ValueError("unsupported EXRULE parm: "+parm)
+                    exrulevals.append(value)
+                elif name == "EXDATE":
+                    exdatevals.extend(
+                        self._parse_date_value(value, parms,
+                                               TZID_NAMES, ignoretz,
+                                               tzids, tzinfos)
+                    )
+                elif name == "DTSTART":
+                    dtvals = self._parse_date_value(value, parms, TZID_NAMES,
+                                                    ignoretz, tzids, tzinfos)
+                    if len(dtvals) != 1:
+                        raise ValueError("Multiple DTSTART values specified:" +
+                                         value)
+                    dtstart = dtvals[0]
+                else:
+                    raise ValueError("unsupported property: "+name)
+            if (forceset or len(rrulevals) > 1 or rdatevals
+                    or exrulevals or exdatevals):
+                if not parser and (rdatevals or exdatevals):
+                    from dateutil import parser
+                rset = rruleset(cache=cache)
+                for value in rrulevals:
+                    rset.rrule(self._parse_rfc_rrule(value, dtstart=dtstart,
+                                                     ignoretz=ignoretz,
+                                                     tzinfos=tzinfos))
+                for value in rdatevals:
+                    for datestr in value.split(','):
+                        rset.rdate(parser.parse(datestr,
+                                                ignoretz=ignoretz,
+                                                tzinfos=tzinfos))
+                for value in exrulevals:
+                    rset.exrule(self._parse_rfc_rrule(value, dtstart=dtstart,
+                                                      ignoretz=ignoretz,
+                                                      tzinfos=tzinfos))
+                for value in exdatevals:
+                    rset.exdate(value)
+                if compatible and dtstart:
+                    rset.rdate(dtstart)
+                return rset
+            else:
+                return self._parse_rfc_rrule(rrulevals[0],
+                                             dtstart=dtstart,
+                                             cache=cache,
+                                             ignoretz=ignoretz,
+                                             tzinfos=tzinfos)
+
+    def __call__(self, s, **kwargs):
+        return self._parse_rfc(s, **kwargs)
+
+
+rrulestr = _rrulestr()
+
+# vim:ts=4:sw=4:et
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/dateutil/tz/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/dateutil/tz/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..af1352c47292f4eebc5cae8da45641b5544558e3
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/dateutil/tz/__init__.py
@@ -0,0 +1,12 @@
+# -*- coding: utf-8 -*-
+from .tz import *
+from .tz import __doc__
+
+__all__ = ["tzutc", "tzoffset", "tzlocal", "tzfile", "tzrange",
+           "tzstr", "tzical", "tzwin", "tzwinlocal", "gettz",
+           "enfold", "datetime_ambiguous", "datetime_exists",
+           "resolve_imaginary", "UTC", "DeprecatedTzFormatWarning"]
+
+
+class DeprecatedTzFormatWarning(Warning):
+    """Warning raised when time zones are parsed from deprecated formats."""
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/dateutil/tz/_common.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/dateutil/tz/_common.py
new file mode 100644
index 0000000000000000000000000000000000000000..e6ac11831522b266114d5b68ee1da298e3aeb14a
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/dateutil/tz/_common.py
@@ -0,0 +1,419 @@
+from six import PY2
+
+from functools import wraps
+
+from datetime import datetime, timedelta, tzinfo
+
+
+ZERO = timedelta(0)
+
+__all__ = ['tzname_in_python2', 'enfold']
+
+
+def tzname_in_python2(namefunc):
+    """Change unicode output into bytestrings in Python 2
+
+    tzname() API changed in Python 3. It used to return bytes, but was changed
+    to unicode strings
+    """
+    if PY2:
+        @wraps(namefunc)
+        def adjust_encoding(*args, **kwargs):
+            name = namefunc(*args, **kwargs)
+            if name is not None:
+                name = name.encode()
+
+            return name
+
+        return adjust_encoding
+    else:
+        return namefunc
+
+
+# The following is adapted from Alexander Belopolsky's tz library
+# https://github.com/abalkin/tz
+if hasattr(datetime, 'fold'):
+    # This is the Python 3.6+ situation: datetime natively carries the
+    # PEP 495 ``fold`` attribute, so enfold() can simply use replace().
+    def enfold(dt, fold=1):
+        """
+        Provides a unified interface for assigning the ``fold`` attribute to
+        datetimes both before and after the implementation of PEP-495.
+
+        :param fold:
+            The value for the ``fold`` attribute in the returned datetime. This
+            should be either 0 or 1.
+
+        :return:
+            Returns an object for which ``getattr(dt, 'fold', 0)`` returns
+            ``fold`` for all versions of Python. In versions prior to
+            Python 3.6, this is a ``_DatetimeWithFold`` object, which is a
+            subclass of :py:class:`datetime.datetime` with the ``fold``
+            attribute added, if ``fold`` is 1.
+
+        .. versionadded:: 2.6.0
+        """
+        return dt.replace(fold=fold)
+
+else:
+    class _DatetimeWithFold(datetime):
+        """
+        This is a class designed to provide a PEP 495-compliant interface for
+        Python versions before 3.6. It is used only for dates in a fold, so
+        the ``fold`` attribute is fixed at ``1``.
+
+        ..
versionadded:: 2.6.0 + """ + __slots__ = () + + def replace(self, *args, **kwargs): + """ + Return a datetime with the same attributes, except for those + attributes given new values by whichever keyword arguments are + specified. Note that tzinfo=None can be specified to create a naive + datetime from an aware datetime with no conversion of date and time + data. + + This is reimplemented in ``_DatetimeWithFold`` because pypy3 will + return a ``datetime.datetime`` even if ``fold`` is unchanged. + """ + argnames = ( + 'year', 'month', 'day', 'hour', 'minute', 'second', + 'microsecond', 'tzinfo' + ) + + for arg, argname in zip(args, argnames): + if argname in kwargs: + raise TypeError('Duplicate argument: {}'.format(argname)) + + kwargs[argname] = arg + + for argname in argnames: + if argname not in kwargs: + kwargs[argname] = getattr(self, argname) + + dt_class = self.__class__ if kwargs.get('fold', 1) else datetime + + return dt_class(**kwargs) + + @property + def fold(self): + return 1 + + def enfold(dt, fold=1): + """ + Provides a unified interface for assigning the ``fold`` attribute to + datetimes both before and after the implementation of PEP-495. + + :param fold: + The value for the ``fold`` attribute in the returned datetime. This + should be either 0 or 1. + + :return: + Returns an object for which ``getattr(dt, 'fold', 0)`` returns + ``fold`` for all versions of Python. In versions prior to + Python 3.6, this is a ``_DatetimeWithFold`` object, which is a + subclass of :py:class:`datetime.datetime` with the ``fold`` + attribute added, if ``fold`` is 1. + + .. versionadded:: 2.6.0 + """ + if getattr(dt, 'fold', 0) == fold: + return dt + + args = dt.timetuple()[:6] + args += (dt.microsecond, dt.tzinfo) + + if fold: + return _DatetimeWithFold(*args) + else: + return datetime(*args) + + +def _validate_fromutc_inputs(f): + """ + The CPython version of ``fromutc`` checks that the input is a ``datetime`` + object and that ``self`` is attached as its ``tzinfo``. + """ + @wraps(f) + def fromutc(self, dt): + if not isinstance(dt, datetime): + raise TypeError("fromutc() requires a datetime argument") + if dt.tzinfo is not self: + raise ValueError("dt.tzinfo is not self") + + return f(self, dt) + + return fromutc + + +class _tzinfo(tzinfo): + """ + Base class for all ``dateutil`` ``tzinfo`` objects. + """ + + def is_ambiguous(self, dt): + """ + Whether or not the "wall time" of a given datetime is ambiguous in this + zone. + + :param dt: + A :py:class:`datetime.datetime`, naive or time zone aware. + + + :return: + Returns ``True`` if ambiguous, ``False`` otherwise. + + .. versionadded:: 2.6.0 + """ + + dt = dt.replace(tzinfo=self) + + wall_0 = enfold(dt, fold=0) + wall_1 = enfold(dt, fold=1) + + same_offset = wall_0.utcoffset() == wall_1.utcoffset() + same_dt = wall_0.replace(tzinfo=None) == wall_1.replace(tzinfo=None) + + return same_dt and not same_offset + + def _fold_status(self, dt_utc, dt_wall): + """ + Determine the fold status of a "wall" datetime, given a representation + of the same datetime as a (naive) UTC datetime. This is calculated based + on the assumption that ``dt.utcoffset() - dt.dst()`` is constant for all + datetimes, and that this offset is the actual number of hours separating + ``dt_utc`` and ``dt_wall``. + + :param dt_utc: + Representation of the datetime as UTC + + :param dt_wall: + Representation of the datetime as "wall time". 
This parameter must + either have a `fold` attribute or have a fold-naive + :class:`datetime.tzinfo` attached, otherwise the calculation may + fail. + """ + if self.is_ambiguous(dt_wall): + delta_wall = dt_wall - dt_utc + _fold = int(delta_wall == (dt_utc.utcoffset() - dt_utc.dst())) + else: + _fold = 0 + + return _fold + + def _fold(self, dt): + return getattr(dt, 'fold', 0) + + def _fromutc(self, dt): + """ + Given a timezone-aware datetime in a given timezone, calculates a + timezone-aware datetime in a new timezone. + + Since this is the one time that we *know* we have an unambiguous + datetime object, we take this opportunity to determine whether the + datetime is ambiguous and in a "fold" state (e.g. if it's the first + occurrence, chronologically, of the ambiguous datetime). + + :param dt: + A timezone-aware :class:`datetime.datetime` object. + """ + + # Re-implement the algorithm from Python's datetime.py + dtoff = dt.utcoffset() + if dtoff is None: + raise ValueError("fromutc() requires a non-None utcoffset() " + "result") + + # The original datetime.py code assumes that `dst()` defaults to + # zero during ambiguous times. PEP 495 inverts this presumption, so + # for pre-PEP 495 versions of python, we need to tweak the algorithm. + dtdst = dt.dst() + if dtdst is None: + raise ValueError("fromutc() requires a non-None dst() result") + delta = dtoff - dtdst + + dt += delta + # Set fold=1 so we can default to being in the fold for + # ambiguous dates. + dtdst = enfold(dt, fold=1).dst() + if dtdst is None: + raise ValueError("fromutc(): dt.dst gave inconsistent " + "results; cannot convert") + return dt + dtdst + + @_validate_fromutc_inputs + def fromutc(self, dt): + """ + Given a timezone-aware datetime in a given timezone, calculates a + timezone-aware datetime in a new timezone. + + Since this is the one time that we *know* we have an unambiguous + datetime object, we take this opportunity to determine whether the + datetime is ambiguous and in a "fold" state (e.g. if it's the first + occurrence, chronologically, of the ambiguous datetime). + + :param dt: + A timezone-aware :class:`datetime.datetime` object. + """ + dt_wall = self._fromutc(dt) + + # Calculate the fold status given the two datetimes. + _fold = self._fold_status(dt, dt_wall) + + # Set the default fold value for ambiguous dates + return enfold(dt_wall, fold=_fold) + + +class tzrangebase(_tzinfo): + """ + This is an abstract base class for time zones represented by an annual + transition into and out of DST. Child classes should implement the following + methods: + + * ``__init__(self, *args, **kwargs)`` + * ``transitions(self, year)`` - this is expected to return a tuple of + datetimes representing the DST on and off transitions in standard + time. + + A fully initialized ``tzrangebase`` subclass should also provide the + following attributes: + * ``hasdst``: Boolean whether or not the zone uses DST. + * ``_dst_offset`` / ``_std_offset``: :class:`datetime.timedelta` objects + representing the respective UTC offsets. + * ``_dst_abbr`` / ``_std_abbr``: Strings representing the timezone short + abbreviations in DST and STD, respectively. + * ``_hasdst``: Whether or not the zone has DST. + + .. 
versionadded:: 2.6.0 + """ + def __init__(self): + raise NotImplementedError('tzrangebase is an abstract base class') + + def utcoffset(self, dt): + isdst = self._isdst(dt) + + if isdst is None: + return None + elif isdst: + return self._dst_offset + else: + return self._std_offset + + def dst(self, dt): + isdst = self._isdst(dt) + + if isdst is None: + return None + elif isdst: + return self._dst_base_offset + else: + return ZERO + + @tzname_in_python2 + def tzname(self, dt): + if self._isdst(dt): + return self._dst_abbr + else: + return self._std_abbr + + def fromutc(self, dt): + """ Given a datetime in UTC, return local time """ + if not isinstance(dt, datetime): + raise TypeError("fromutc() requires a datetime argument") + + if dt.tzinfo is not self: + raise ValueError("dt.tzinfo is not self") + + # Get transitions - if there are none, fixed offset + transitions = self.transitions(dt.year) + if transitions is None: + return dt + self.utcoffset(dt) + + # Get the transition times in UTC + dston, dstoff = transitions + + dston -= self._std_offset + dstoff -= self._std_offset + + utc_transitions = (dston, dstoff) + dt_utc = dt.replace(tzinfo=None) + + isdst = self._naive_isdst(dt_utc, utc_transitions) + + if isdst: + dt_wall = dt + self._dst_offset + else: + dt_wall = dt + self._std_offset + + _fold = int(not isdst and self.is_ambiguous(dt_wall)) + + return enfold(dt_wall, fold=_fold) + + def is_ambiguous(self, dt): + """ + Whether or not the "wall time" of a given datetime is ambiguous in this + zone. + + :param dt: + A :py:class:`datetime.datetime`, naive or time zone aware. + + + :return: + Returns ``True`` if ambiguous, ``False`` otherwise. + + .. versionadded:: 2.6.0 + """ + if not self.hasdst: + return False + + start, end = self.transitions(dt.year) + + dt = dt.replace(tzinfo=None) + return (end <= dt < end + self._dst_base_offset) + + def _isdst(self, dt): + if not self.hasdst: + return False + elif dt is None: + return None + + transitions = self.transitions(dt.year) + + if transitions is None: + return False + + dt = dt.replace(tzinfo=None) + + isdst = self._naive_isdst(dt, transitions) + + # Handle ambiguous dates + if not isdst and self.is_ambiguous(dt): + return not self._fold(dt) + else: + return isdst + + def _naive_isdst(self, dt, transitions): + dston, dstoff = transitions + + dt = dt.replace(tzinfo=None) + + if dston < dstoff: + isdst = dston <= dt < dstoff + else: + isdst = not dstoff <= dt < dston + + return isdst + + @property + def _dst_base_offset(self): + return self._dst_offset - self._std_offset + + __hash__ = None + + def __ne__(self, other): + return not (self == other) + + def __repr__(self): + return "%s(...)" % self.__class__.__name__ + + __reduce__ = object.__reduce__ diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/dateutil/tz/_factories.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/dateutil/tz/_factories.py new file mode 100644 index 0000000000000000000000000000000000000000..f8a65891a023ebf9eb0c24d391ba67541b7133f1 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/dateutil/tz/_factories.py @@ -0,0 +1,80 @@ +from datetime import timedelta +import weakref +from collections import OrderedDict + +from six.moves import _thread + + +class _TzSingleton(type): + def __init__(cls, *args, **kwargs): + cls.__instance = None + super(_TzSingleton, cls).__init__(*args, **kwargs) + + def __call__(cls): + if cls.__instance is None: + 
cls.__instance = super(_TzSingleton, cls).__call__()
+        return cls.__instance
+
+
+class _TzFactory(type):
+    def instance(cls, *args, **kwargs):
+        """Alternate constructor that returns a fresh instance"""
+        return type.__call__(cls, *args, **kwargs)
+
+
+class _TzOffsetFactory(_TzFactory):
+    def __init__(cls, *args, **kwargs):
+        cls.__instances = weakref.WeakValueDictionary()
+        cls.__strong_cache = OrderedDict()
+        cls.__strong_cache_size = 8
+
+        cls._cache_lock = _thread.allocate_lock()
+
+    def __call__(cls, name, offset):
+        if isinstance(offset, timedelta):
+            key = (name, offset.total_seconds())
+        else:
+            key = (name, offset)
+
+        instance = cls.__instances.get(key, None)
+        if instance is None:
+            instance = cls.__instances.setdefault(key,
+                                                  cls.instance(name, offset))
+
+        # This lock may not be necessary in Python 3. See GH issue #901
+        with cls._cache_lock:
+            cls.__strong_cache[key] = cls.__strong_cache.pop(key, instance)
+
+            # Remove an item if the strong cache is overpopulated
+            if len(cls.__strong_cache) > cls.__strong_cache_size:
+                cls.__strong_cache.popitem(last=False)
+
+        return instance
+
+
+class _TzStrFactory(_TzFactory):
+    def __init__(cls, *args, **kwargs):
+        cls.__instances = weakref.WeakValueDictionary()
+        cls.__strong_cache = OrderedDict()
+        cls.__strong_cache_size = 8
+
+        cls.__cache_lock = _thread.allocate_lock()
+
+    def __call__(cls, s, posix_offset=False):
+        key = (s, posix_offset)
+        instance = cls.__instances.get(key, None)
+
+        if instance is None:
+            instance = cls.__instances.setdefault(key,
+                                                  cls.instance(s, posix_offset))
+
+        # This lock may not be necessary in Python 3. See GH issue #901
+        with cls.__cache_lock:
+            cls.__strong_cache[key] = cls.__strong_cache.pop(key, instance)
+
+            # Remove an item if the strong cache is overpopulated
+            if len(cls.__strong_cache) > cls.__strong_cache_size:
+                cls.__strong_cache.popitem(last=False)
+
+        return instance
+
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/dateutil/tz/tz.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/dateutil/tz/tz.py
new file mode 100644
index 0000000000000000000000000000000000000000..af81e88e1188112051414c9f2222a1c0781aaed3
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/dateutil/tz/tz.py
@@ -0,0 +1,1849 @@
+# -*- coding: utf-8 -*-
+"""
+This module offers timezone implementations subclassing the abstract
+:py:class:`datetime.tzinfo` type. There are classes to handle tzfile format
+files (usually found in :file:`/etc/localtime`, :file:`/usr/share/zoneinfo`,
+etc.), TZ environment strings (in all known formats), given ranges (with help
+from relative deltas), the local machine timezone, fixed offset timezones, and
+the UTC timezone.
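+
+A short illustrative example (``America/New_York`` is an assumption about
+what the local zoneinfo database provides)::
+
+    >>> from datetime import datetime
+    >>> from dateutil import tz
+    >>> NYC = tz.gettz('America/New_York')
+    >>> datetime(2016, 7, 7, tzinfo=NYC).tzname()
+    'EDT'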
+""" +import datetime +import struct +import time +import sys +import os +import bisect +import weakref +from collections import OrderedDict + +import six +from six import string_types +from six.moves import _thread +from ._common import tzname_in_python2, _tzinfo +from ._common import tzrangebase, enfold +from ._common import _validate_fromutc_inputs + +from ._factories import _TzSingleton, _TzOffsetFactory +from ._factories import _TzStrFactory +try: + from .win import tzwin, tzwinlocal +except ImportError: + tzwin = tzwinlocal = None + +# For warning about rounding tzinfo +from warnings import warn + +ZERO = datetime.timedelta(0) +EPOCH = datetime.datetime.utcfromtimestamp(0) +EPOCHORDINAL = EPOCH.toordinal() + + +@six.add_metaclass(_TzSingleton) +class tzutc(datetime.tzinfo): + """ + This is a tzinfo object that represents the UTC time zone. + + **Examples:** + + .. doctest:: + + >>> from datetime import * + >>> from dateutil.tz import * + + >>> datetime.now() + datetime.datetime(2003, 9, 27, 9, 40, 1, 521290) + + >>> datetime.now(tzutc()) + datetime.datetime(2003, 9, 27, 12, 40, 12, 156379, tzinfo=tzutc()) + + >>> datetime.now(tzutc()).tzname() + 'UTC' + + .. versionchanged:: 2.7.0 + ``tzutc()`` is now a singleton, so the result of ``tzutc()`` will + always return the same object. + + .. doctest:: + + >>> from dateutil.tz import tzutc, UTC + >>> tzutc() is tzutc() + True + >>> tzutc() is UTC + True + """ + def utcoffset(self, dt): + return ZERO + + def dst(self, dt): + return ZERO + + @tzname_in_python2 + def tzname(self, dt): + return "UTC" + + def is_ambiguous(self, dt): + """ + Whether or not the "wall time" of a given datetime is ambiguous in this + zone. + + :param dt: + A :py:class:`datetime.datetime`, naive or time zone aware. + + + :return: + Returns ``True`` if ambiguous, ``False`` otherwise. + + .. versionadded:: 2.6.0 + """ + return False + + @_validate_fromutc_inputs + def fromutc(self, dt): + """ + Fast track version of fromutc() returns the original ``dt`` object for + any valid :py:class:`datetime.datetime` object. + """ + return dt + + def __eq__(self, other): + if not isinstance(other, (tzutc, tzoffset)): + return NotImplemented + + return (isinstance(other, tzutc) or + (isinstance(other, tzoffset) and other._offset == ZERO)) + + __hash__ = None + + def __ne__(self, other): + return not (self == other) + + def __repr__(self): + return "%s()" % self.__class__.__name__ + + __reduce__ = object.__reduce__ + + +#: Convenience constant providing a :class:`tzutc()` instance +#: +#: .. versionadded:: 2.7.0 +UTC = tzutc() + + +@six.add_metaclass(_TzOffsetFactory) +class tzoffset(datetime.tzinfo): + """ + A simple class for representing a fixed offset from UTC. + + :param name: + The timezone name, to be returned when ``tzname()`` is called. + :param offset: + The time zone offset in seconds, or (since version 2.6.0, represented + as a :py:class:`datetime.timedelta` object). + """ + def __init__(self, name, offset): + self._name = name + + try: + # Allow a timedelta + offset = offset.total_seconds() + except (TypeError, AttributeError): + pass + + self._offset = datetime.timedelta(seconds=_get_supported_offset(offset)) + + def utcoffset(self, dt): + return self._offset + + def dst(self, dt): + return ZERO + + @tzname_in_python2 + def tzname(self, dt): + return self._name + + @_validate_fromutc_inputs + def fromutc(self, dt): + return dt + self._offset + + def is_ambiguous(self, dt): + """ + Whether or not the "wall time" of a given datetime is ambiguous in this + zone. 
+ + :param dt: + A :py:class:`datetime.datetime`, naive or time zone aware. + :return: + Returns ``True`` if ambiguous, ``False`` otherwise. + + .. versionadded:: 2.6.0 + """ + return False + + def __eq__(self, other): + if not isinstance(other, tzoffset): + return NotImplemented + + return self._offset == other._offset + + __hash__ = None + + def __ne__(self, other): + return not (self == other) + + def __repr__(self): + return "%s(%s, %s)" % (self.__class__.__name__, + repr(self._name), + int(self._offset.total_seconds())) + + __reduce__ = object.__reduce__ + + +class tzlocal(_tzinfo): + """ + A :class:`tzinfo` subclass built around the ``time`` timezone functions. + """ + def __init__(self): + super(tzlocal, self).__init__() + + self._std_offset = datetime.timedelta(seconds=-time.timezone) + if time.daylight: + self._dst_offset = datetime.timedelta(seconds=-time.altzone) + else: + self._dst_offset = self._std_offset + + self._dst_saved = self._dst_offset - self._std_offset + self._hasdst = bool(self._dst_saved) + self._tznames = tuple(time.tzname) + + def utcoffset(self, dt): + if dt is None and self._hasdst: + return None + + if self._isdst(dt): + return self._dst_offset + else: + return self._std_offset + + def dst(self, dt): + if dt is None and self._hasdst: + return None + + if self._isdst(dt): + return self._dst_offset - self._std_offset + else: + return ZERO + + @tzname_in_python2 + def tzname(self, dt): + return self._tznames[self._isdst(dt)] + + def is_ambiguous(self, dt): + """ + Whether or not the "wall time" of a given datetime is ambiguous in this + zone. + + :param dt: + A :py:class:`datetime.datetime`, naive or time zone aware. + + + :return: + Returns ``True`` if ambiguous, ``False`` otherwise. + + .. versionadded:: 2.6.0 + """ + naive_dst = self._naive_is_dst(dt) + return (not naive_dst and + (naive_dst != self._naive_is_dst(dt - self._dst_saved))) + + def _naive_is_dst(self, dt): + timestamp = _datetime_to_timestamp(dt) + return time.localtime(timestamp + time.timezone).tm_isdst + + def _isdst(self, dt, fold_naive=True): + # We can't use mktime here. It is unstable when deciding if + # the hour near to a change is DST or not. 
+ # + # timestamp = time.mktime((dt.year, dt.month, dt.day, dt.hour, + # dt.minute, dt.second, dt.weekday(), 0, -1)) + # return time.localtime(timestamp).tm_isdst + # + # The code above yields the following result: + # + # >>> import tz, datetime + # >>> t = tz.tzlocal() + # >>> datetime.datetime(2003,2,15,23,tzinfo=t).tzname() + # 'BRDT' + # >>> datetime.datetime(2003,2,16,0,tzinfo=t).tzname() + # 'BRST' + # >>> datetime.datetime(2003,2,15,23,tzinfo=t).tzname() + # 'BRST' + # >>> datetime.datetime(2003,2,15,22,tzinfo=t).tzname() + # 'BRDT' + # >>> datetime.datetime(2003,2,15,23,tzinfo=t).tzname() + # 'BRDT' + # + # Here is a more stable implementation: + # + if not self._hasdst: + return False + + # Check for ambiguous times: + dstval = self._naive_is_dst(dt) + fold = getattr(dt, 'fold', None) + + if self.is_ambiguous(dt): + if fold is not None: + return not self._fold(dt) + else: + return True + + return dstval + + def __eq__(self, other): + if isinstance(other, tzlocal): + return (self._std_offset == other._std_offset and + self._dst_offset == other._dst_offset) + elif isinstance(other, tzutc): + return (not self._hasdst and + self._tznames[0] in {'UTC', 'GMT'} and + self._std_offset == ZERO) + elif isinstance(other, tzoffset): + return (not self._hasdst and + self._tznames[0] == other._name and + self._std_offset == other._offset) + else: + return NotImplemented + + __hash__ = None + + def __ne__(self, other): + return not (self == other) + + def __repr__(self): + return "%s()" % self.__class__.__name__ + + __reduce__ = object.__reduce__ + + +class _ttinfo(object): + __slots__ = ["offset", "delta", "isdst", "abbr", + "isstd", "isgmt", "dstoffset"] + + def __init__(self): + for attr in self.__slots__: + setattr(self, attr, None) + + def __repr__(self): + l = [] + for attr in self.__slots__: + value = getattr(self, attr) + if value is not None: + l.append("%s=%s" % (attr, repr(value))) + return "%s(%s)" % (self.__class__.__name__, ", ".join(l)) + + def __eq__(self, other): + if not isinstance(other, _ttinfo): + return NotImplemented + + return (self.offset == other.offset and + self.delta == other.delta and + self.isdst == other.isdst and + self.abbr == other.abbr and + self.isstd == other.isstd and + self.isgmt == other.isgmt and + self.dstoffset == other.dstoffset) + + __hash__ = None + + def __ne__(self, other): + return not (self == other) + + def __getstate__(self): + state = {} + for name in self.__slots__: + state[name] = getattr(self, name, None) + return state + + def __setstate__(self, state): + for name in self.__slots__: + if name in state: + setattr(self, name, state[name]) + + +class _tzfile(object): + """ + Lightweight class for holding the relevant transition and time zone + information read from binary tzfiles. + """ + attrs = ['trans_list', 'trans_list_utc', 'trans_idx', 'ttinfo_list', + 'ttinfo_std', 'ttinfo_dst', 'ttinfo_before', 'ttinfo_first'] + + def __init__(self, **kwargs): + for attr in self.attrs: + setattr(self, attr, kwargs.get(attr, None)) + + +class tzfile(_tzinfo): + """ + This is a ``tzinfo`` subclass that allows one to use the ``tzfile(5)`` + format timezone files to extract current and historical zone information. + + :param fileobj: + This can be an opened file stream or a file name that the time zone + information can be read from. + + :param filename: + This is an optional parameter specifying the source of the time zone + information in the event that ``fileobj`` is a file object. 
If omitted
+        and ``fileobj`` is a file stream, this parameter will be set either to
+        ``fileobj``'s ``name`` attribute or to ``repr(fileobj)``.
+
+    See `Sources for Time Zone and Daylight Saving Time Data
+    <https://data.iana.org/time-zones/tz-link.html>`_ for more information.
+    Time zone files can be compiled from the `IANA Time Zone database files
+    <https://www.iana.org/time-zones>`_ with the `zic time zone compiler
+    <https://www.freebsd.org/cgi/man.cgi?query=zic&sektion=8>`_.
+
+    .. note::
+
+        Only construct a ``tzfile`` directly if you have a specific timezone
+        file on disk that you want to read into a Python ``tzinfo`` object.
+        If you want to get a ``tzfile`` representing a specific IANA zone
+        (e.g. ``'America/New_York'``), you should call
+        :func:`dateutil.tz.gettz` with the zone identifier.
+
+
+    **Examples:**
+
+    Using the US Eastern time zone as an example, we can see that a ``tzfile``
+    provides time zone information for the standard Daylight Saving offsets:
+
+    .. testsetup:: tzfile
+
+        from dateutil.tz import gettz
+        from datetime import datetime
+
+    .. doctest:: tzfile
+
+        >>> NYC = gettz('America/New_York')
+        >>> NYC
+        tzfile('/usr/share/zoneinfo/America/New_York')
+
+        >>> print(datetime(2016, 1, 3, tzinfo=NYC))     # EST
+        2016-01-03 00:00:00-05:00
+
+        >>> print(datetime(2016, 7, 7, tzinfo=NYC))     # EDT
+        2016-07-07 00:00:00-04:00
+
+
+    The ``tzfile`` structure contains a full history of the time zone,
+    so historical dates will also have the right offsets. For example, before
+    the adoption of the UTC standards, New York used local solar mean time:
+
+    .. doctest:: tzfile
+
+        >>> print(datetime(1901, 4, 12, tzinfo=NYC))    # LMT
+        1901-04-12 00:00:00-04:56
+
+    And during World War II, New York was on "Eastern War Time", which was a
+    state of permanent daylight saving time:
+
+    .. doctest:: tzfile
+
+        >>> print(datetime(1944, 2, 7, tzinfo=NYC))    # EWT
+        1944-02-07 00:00:00-04:00
+
+    """
+
+    def __init__(self, fileobj, filename=None):
+        super(tzfile, self).__init__()
+
+        file_opened_here = False
+        if isinstance(fileobj, string_types):
+            self._filename = fileobj
+            fileobj = open(fileobj, 'rb')
+            file_opened_here = True
+        elif filename is not None:
+            self._filename = filename
+        elif hasattr(fileobj, "name"):
+            self._filename = fileobj.name
+        else:
+            self._filename = repr(fileobj)
+
+        if fileobj is not None:
+            if not file_opened_here:
+                fileobj = _nullcontext(fileobj)
+
+            with fileobj as file_stream:
+                tzobj = self._read_tzfile(file_stream)
+
+            self._set_tzdata(tzobj)
+
+    def _set_tzdata(self, tzobj):
+        """ Set the time zone data of this object from a _tzfile object """
+        # Copy the relevant attributes over as private attributes
+        for attr in _tzfile.attrs:
+            setattr(self, '_' + attr, getattr(tzobj, attr))
+
+    def _read_tzfile(self, fileobj):
+        out = _tzfile()
+
+        # From tzfile(5):
+        #
+        #   The time zone information files used by tzset(3)
+        #   begin with the magic characters "TZif" to identify
+        #   them as time zone information files, followed by
+        #   sixteen bytes reserved for future use, followed by
+        #   six four-byte values of type long, written in a
+        #   ``standard'' byte order (the high-order byte
+        #   of the value is written first).
+        if fileobj.read(4).decode() != "TZif":
+            raise ValueError("magic not found")
+
+        fileobj.read(16)
+
+        (
+            # The number of UTC/local indicators stored in the file.
+            ttisgmtcnt,
+
+            # The number of standard/wall indicators stored in the file.
+            ttisstdcnt,
+
+            # The number of leap seconds for which data is
+            # stored in the file.
+            leapcnt,
+
+            # The number of "transition times" for which data
+            # is stored in the file.
+ timecnt, + + # The number of "local time types" for which data + # is stored in the file (must not be zero). + typecnt, + + # The number of characters of "time zone + # abbreviation strings" stored in the file. + charcnt, + + ) = struct.unpack(">6l", fileobj.read(24)) + + # The above header is followed by tzh_timecnt four-byte + # values of type long, sorted in ascending order. + # These values are written in ``standard'' byte order. + # Each is used as a transition time (as returned by + # time(2)) at which the rules for computing local time + # change. + + if timecnt: + out.trans_list_utc = list(struct.unpack(">%dl" % timecnt, + fileobj.read(timecnt*4))) + else: + out.trans_list_utc = [] + + # Next come tzh_timecnt one-byte values of type unsigned + # char; each one tells which of the different types of + # ``local time'' types described in the file is associated + # with the same-indexed transition time. These values + # serve as indices into an array of ttinfo structures that + # appears next in the file. + + if timecnt: + out.trans_idx = struct.unpack(">%dB" % timecnt, + fileobj.read(timecnt)) + else: + out.trans_idx = [] + + # Each ttinfo structure is written as a four-byte value + # for tt_gmtoff of type long, in a standard byte + # order, followed by a one-byte value for tt_isdst + # and a one-byte value for tt_abbrind. In each + # structure, tt_gmtoff gives the number of + # seconds to be added to UTC, tt_isdst tells whether + # tm_isdst should be set by localtime(3), and + # tt_abbrind serves as an index into the array of + # time zone abbreviation characters that follow the + # ttinfo structure(s) in the file. + + ttinfo = [] + + for i in range(typecnt): + ttinfo.append(struct.unpack(">lbb", fileobj.read(6))) + + abbr = fileobj.read(charcnt).decode() + + # Then there are tzh_leapcnt pairs of four-byte + # values, written in standard byte order; the + # first value of each pair gives the time (as + # returned by time(2)) at which a leap second + # occurs; the second gives the total number of + # leap seconds to be applied after the given time. + # The pairs of values are sorted in ascending order + # by time. + + # Not used, for now (but seek for correct file position) + if leapcnt: + fileobj.seek(leapcnt * 8, os.SEEK_CUR) + + # Then there are tzh_ttisstdcnt standard/wall + # indicators, each stored as a one-byte value; + # they tell whether the transition times associated + # with local time types were specified as standard + # time or wall clock time, and are used when + # a time zone file is used in handling POSIX-style + # time zone environment variables. + + if ttisstdcnt: + isstd = struct.unpack(">%db" % ttisstdcnt, + fileobj.read(ttisstdcnt)) + + # Finally, there are tzh_ttisgmtcnt UTC/local + # indicators, each stored as a one-byte value; + # they tell whether the transition times associated + # with local time types were specified as UTC or + # local time, and are used when a time zone file + # is used in handling POSIX-style time zone envi- + # ronment variables. 
+ + if ttisgmtcnt: + isgmt = struct.unpack(">%db" % ttisgmtcnt, + fileobj.read(ttisgmtcnt)) + + # Build ttinfo list + out.ttinfo_list = [] + for i in range(typecnt): + gmtoff, isdst, abbrind = ttinfo[i] + gmtoff = _get_supported_offset(gmtoff) + tti = _ttinfo() + tti.offset = gmtoff + tti.dstoffset = datetime.timedelta(0) + tti.delta = datetime.timedelta(seconds=gmtoff) + tti.isdst = isdst + tti.abbr = abbr[abbrind:abbr.find('\x00', abbrind)] + tti.isstd = (ttisstdcnt > i and isstd[i] != 0) + tti.isgmt = (ttisgmtcnt > i and isgmt[i] != 0) + out.ttinfo_list.append(tti) + + # Replace ttinfo indexes for ttinfo objects. + out.trans_idx = [out.ttinfo_list[idx] for idx in out.trans_idx] + + # Set standard, dst, and before ttinfos. before will be + # used when a given time is before any transitions, + # and will be set to the first non-dst ttinfo, or to + # the first dst, if all of them are dst. + out.ttinfo_std = None + out.ttinfo_dst = None + out.ttinfo_before = None + if out.ttinfo_list: + if not out.trans_list_utc: + out.ttinfo_std = out.ttinfo_first = out.ttinfo_list[0] + else: + for i in range(timecnt-1, -1, -1): + tti = out.trans_idx[i] + if not out.ttinfo_std and not tti.isdst: + out.ttinfo_std = tti + elif not out.ttinfo_dst and tti.isdst: + out.ttinfo_dst = tti + + if out.ttinfo_std and out.ttinfo_dst: + break + else: + if out.ttinfo_dst and not out.ttinfo_std: + out.ttinfo_std = out.ttinfo_dst + + for tti in out.ttinfo_list: + if not tti.isdst: + out.ttinfo_before = tti + break + else: + out.ttinfo_before = out.ttinfo_list[0] + + # Now fix transition times to become relative to wall time. + # + # I'm not sure about this. In my tests, the tz source file + # is setup to wall time, and in the binary file isstd and + # isgmt are off, so it should be in wall time. OTOH, it's + # always in gmt time. Let me know if you have comments + # about this. + lastdst = None + lastoffset = None + lastdstoffset = None + lastbaseoffset = None + out.trans_list = [] + + for i, tti in enumerate(out.trans_idx): + offset = tti.offset + dstoffset = 0 + + if lastdst is not None: + if tti.isdst: + if not lastdst: + dstoffset = offset - lastoffset + + if not dstoffset and lastdstoffset: + dstoffset = lastdstoffset + + tti.dstoffset = datetime.timedelta(seconds=dstoffset) + lastdstoffset = dstoffset + + # If a time zone changes its base offset during a DST transition, + # then you need to adjust by the previous base offset to get the + # transition time in local time. Otherwise you use the current + # base offset. Ideally, I would have some mathematical proof of + # why this is true, but I haven't really thought about it enough. + baseoffset = offset - dstoffset + adjustment = baseoffset + if (lastbaseoffset is not None and baseoffset != lastbaseoffset + and tti.isdst != lastdst): + # The base DST has changed + adjustment = lastbaseoffset + + lastdst = tti.isdst + lastoffset = offset + lastbaseoffset = baseoffset + + out.trans_list.append(out.trans_list_utc[i] + adjustment) + + out.trans_idx = tuple(out.trans_idx) + out.trans_list = tuple(out.trans_list) + out.trans_list_utc = tuple(out.trans_list_utc) + + return out + + def _find_last_transition(self, dt, in_utc=False): + # If there's no list, there are no transitions to find + if not self._trans_list: + return None + + timestamp = _datetime_to_timestamp(dt) + + # Find where the timestamp fits in the transition list - if the + # timestamp is a transition time, it's part of the "after" period. 
+ trans_list = self._trans_list_utc if in_utc else self._trans_list + idx = bisect.bisect_right(trans_list, timestamp) + + # We want to know when the previous transition was, so subtract off 1 + return idx - 1 + + def _get_ttinfo(self, idx): + # For no list or after the last transition, default to _ttinfo_std + if idx is None or (idx + 1) >= len(self._trans_list): + return self._ttinfo_std + + # If there is a list and the time is before it, return _ttinfo_before + if idx < 0: + return self._ttinfo_before + + return self._trans_idx[idx] + + def _find_ttinfo(self, dt): + idx = self._resolve_ambiguous_time(dt) + + return self._get_ttinfo(idx) + + def fromutc(self, dt): + """ + The ``tzfile`` implementation of :py:func:`datetime.tzinfo.fromutc`. + + :param dt: + A :py:class:`datetime.datetime` object. + + :raises TypeError: + Raised if ``dt`` is not a :py:class:`datetime.datetime` object. + + :raises ValueError: + Raised if this is called with a ``dt`` which does not have this + ``tzinfo`` attached. + + :return: + Returns a :py:class:`datetime.datetime` object representing the + wall time in ``self``'s time zone. + """ + # These isinstance checks are in datetime.tzinfo, so we'll preserve + # them, even if we don't care about duck typing. + if not isinstance(dt, datetime.datetime): + raise TypeError("fromutc() requires a datetime argument") + + if dt.tzinfo is not self: + raise ValueError("dt.tzinfo is not self") + + # First treat UTC as wall time and get the transition we're in. + idx = self._find_last_transition(dt, in_utc=True) + tti = self._get_ttinfo(idx) + + dt_out = dt + datetime.timedelta(seconds=tti.offset) + + fold = self.is_ambiguous(dt_out, idx=idx) + + return enfold(dt_out, fold=int(fold)) + + def is_ambiguous(self, dt, idx=None): + """ + Whether or not the "wall time" of a given datetime is ambiguous in this + zone. + + :param dt: + A :py:class:`datetime.datetime`, naive or time zone aware. + + + :return: + Returns ``True`` if ambiguous, ``False`` otherwise. + + .. versionadded:: 2.6.0 + """ + if idx is None: + idx = self._find_last_transition(dt) + + # Calculate the difference in offsets from current to previous + timestamp = _datetime_to_timestamp(dt) + tti = self._get_ttinfo(idx) + + if idx is None or idx <= 0: + return False + + od = self._get_ttinfo(idx - 1).offset - tti.offset + tt = self._trans_list[idx] # Transition time + + return timestamp < tt + od + + def _resolve_ambiguous_time(self, dt): + idx = self._find_last_transition(dt) + + # If we have no transitions, return the index + _fold = self._fold(dt) + if idx is None or idx == 0: + return idx + + # If it's ambiguous and we're in a fold, shift to a different index. + idx_offset = int(not _fold and self.is_ambiguous(dt, idx)) + + return idx - idx_offset + + def utcoffset(self, dt): + if dt is None: + return None + + if not self._ttinfo_std: + return ZERO + + return self._find_ttinfo(dt).delta + + def dst(self, dt): + if dt is None: + return None + + if not self._ttinfo_dst: + return ZERO + + tti = self._find_ttinfo(dt) + + if not tti.isdst: + return ZERO + + # The documentation says that utcoffset()-dst() must + # be constant for every dt. 
+        return tti.dstoffset
+
+    @tzname_in_python2
+    def tzname(self, dt):
+        if not self._ttinfo_std or dt is None:
+            return None
+        return self._find_ttinfo(dt).abbr
+
+    def __eq__(self, other):
+        if not isinstance(other, tzfile):
+            return NotImplemented
+        return (self._trans_list == other._trans_list and
+                self._trans_idx == other._trans_idx and
+                self._ttinfo_list == other._ttinfo_list)
+
+    __hash__ = None
+
+    def __ne__(self, other):
+        return not (self == other)
+
+    def __repr__(self):
+        return "%s(%s)" % (self.__class__.__name__, repr(self._filename))
+
+    def __reduce__(self):
+        return self.__reduce_ex__(None)
+
+    def __reduce_ex__(self, protocol):
+        return (self.__class__, (None, self._filename), self.__dict__)
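+
+# A minimal usage sketch for ``tzfile`` (illustrative only; the path below is
+# an assumption about the host's zoneinfo layout, and ``gettz`` is generally
+# preferred for looking up zones by name):
+#
+#     >>> import datetime
+#     >>> from dateutil.tz import tzfile
+#     >>> NYC = tzfile('/usr/share/zoneinfo/America/New_York')
+#     >>> NYC.tzname(datetime.datetime(2016, 7, 7))
+#     'EDT'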
+
+
+class tzrange(tzrangebase):
+    """
+    The ``tzrange`` object is a time zone specified by a set of offsets and
+    abbreviations, equivalent to the way the ``TZ`` variable can be specified
+    in POSIX-like systems, but using Python delta objects to specify DST
+    start, end and offsets.
+
+    :param stdabbr:
+        The abbreviation for standard time (e.g. ``'EST'``).
+
+    :param stdoffset:
+        An integer or :class:`datetime.timedelta` object or equivalent
+        specifying the base offset from UTC.
+
+        If unspecified, +00:00 is used.
+
+    :param dstabbr:
+        The abbreviation for DST / "Summer" time (e.g. ``'EDT'``).
+
+        If specified, with no other DST information, DST is assumed to occur
+        and the default behavior of ``dstoffset``, ``start`` and ``end`` is
+        used. If unspecified and no other DST information is specified, it
+        is assumed that this zone has no DST.
+
+        If this is unspecified and other DST information *is* specified,
+        DST occurs in the zone but the time zone abbreviation is left
+        unchanged.
+
+    :param dstoffset:
+        An integer or :class:`datetime.timedelta` object or equivalent
+        specifying the UTC offset during DST. If unspecified and any other DST
+        information is specified, it is assumed to be the STD offset +1 hour.
+
+    :param start:
+        A :class:`relativedelta.relativedelta` object or equivalent specifying
+        the time and time of year that daylight savings time starts. To
+        specify, for example, that DST starts at 2AM on the 2nd Sunday in
+        March, pass:
+
+            ``relativedelta(hours=2, month=3, day=1, weekday=SU(+2))``
+
+        If unspecified and any other DST information is specified, the default
+        value is 2 AM on the first Sunday in April.
+
+    :param end:
+        A :class:`relativedelta.relativedelta` object or equivalent
+        representing the time and time of year that daylight savings time
+        ends, with the same specification method as in ``start``. One note is
+        that this should point to the first time in the *standard* zone, so if
+        a transition occurs at 2AM in the DST zone and the clocks are set back
+        1 hour to 1AM, set the ``hours`` parameter to +1.
+
+
+    **Examples:**
+
+    .. testsetup:: tzrange
+
+        from dateutil.tz import tzrange, tzstr
+
+    .. doctest:: tzrange
+
+        >>> tzstr('EST5EDT') == tzrange("EST", -18000, "EDT")
+        True
+
+        >>> from dateutil.relativedelta import *
+        >>> range1 = tzrange("EST", -18000, "EDT")
+        >>> range2 = tzrange("EST", -18000, "EDT", -14400,
+        ...                  relativedelta(hours=+2, month=4, day=1,
+        ...                                weekday=SU(+1)),
+        ...                  relativedelta(hours=+1, month=10, day=31,
+        ...                                weekday=SU(-1)))
+        >>> tzstr('EST5EDT') == range1 == range2
+        True
+
+    """
+    def __init__(self, stdabbr, stdoffset=None,
+                 dstabbr=None, dstoffset=None,
+                 start=None, end=None):
+
+        global relativedelta
+        from dateutil import relativedelta
+
+        self._std_abbr = stdabbr
+        self._dst_abbr = dstabbr
+
+        try:
+            stdoffset = stdoffset.total_seconds()
+        except (TypeError, AttributeError):
+            pass
+
+        try:
+            dstoffset = dstoffset.total_seconds()
+        except (TypeError, AttributeError):
+            pass
+
+        if stdoffset is not None:
+            self._std_offset = datetime.timedelta(seconds=stdoffset)
+        else:
+            self._std_offset = ZERO
+
+        if dstoffset is not None:
+            self._dst_offset = datetime.timedelta(seconds=dstoffset)
+        elif dstabbr and stdoffset is not None:
+            self._dst_offset = self._std_offset + datetime.timedelta(hours=+1)
+        else:
+            self._dst_offset = ZERO
+
+        if dstabbr and start is None:
+            self._start_delta = relativedelta.relativedelta(
+                hours=+2, month=4, day=1, weekday=relativedelta.SU(+1))
+        else:
+            self._start_delta = start
+
+        if dstabbr and end is None:
+            self._end_delta = relativedelta.relativedelta(
+                hours=+1, month=10, day=31, weekday=relativedelta.SU(-1))
+        else:
+            self._end_delta = end
+
+        self._dst_base_offset_ = self._dst_offset - self._std_offset
+        self.hasdst = bool(self._start_delta)
+
+    def transitions(self, year):
+        """
+        For a given year, get the DST on and off transition times, expressed
+        always on the standard time side. For zones with no transitions, this
+        function returns ``None``.
+
+        :param year:
+            The year whose transitions you would like to query.
+
+        :return:
+            Returns a :class:`tuple` of :class:`datetime.datetime` objects,
+            ``(dston, dstoff)`` for zones with an annual DST transition, or
+            ``None`` for fixed offset zones.
+        """
+        if not self.hasdst:
+            return None
+
+        base_year = datetime.datetime(year, 1, 1)
+
+        start = base_year + self._start_delta
+        end = base_year + self._end_delta
+
+        return (start, end)
+
+    def __eq__(self, other):
+        if not isinstance(other, tzrange):
+            return NotImplemented
+
+        return (self._std_abbr == other._std_abbr and
+                self._dst_abbr == other._dst_abbr and
+                self._std_offset == other._std_offset and
+                self._dst_offset == other._dst_offset and
+                self._start_delta == other._start_delta and
+                self._end_delta == other._end_delta)
+
+    @property
+    def _dst_base_offset(self):
+        return self._dst_base_offset_
+
+
+@six.add_metaclass(_TzStrFactory)
+class tzstr(tzrange):
+    """
+    ``tzstr`` objects are time zone objects specified by a time-zone string as
+    it would be passed to a ``TZ`` variable on POSIX-style systems (see
+    the `GNU C Library: TZ Variable`_ for more details).
+
+    There is one notable exception, which is that POSIX-style time zones use an
+    inverted offset format, so normally ``GMT+3`` would be parsed as an offset
+    3 hours *behind* GMT. The ``tzstr`` time zone object will parse this as an
+    offset 3 hours *ahead* of GMT. If you would like to maintain the POSIX
+    behavior, pass a ``True`` value to ``posix_offset``.
+
+    The :class:`tzrange` object provides the same functionality, but is
+    specified using :class:`relativedelta.relativedelta` objects rather than
+    strings.
+
+    :param s:
+        A time zone string in ``TZ`` variable format. This can be a
+        :class:`bytes` (2.x: :class:`str`), :class:`str` (2.x:
+        :class:`unicode`) or a stream emitting unicode characters
+        (e.g. :class:`StringIO`).
+
+    :param posix_offset:
+        Optional. If set to ``True``, interpret strings such as ``GMT+3`` or
+        ``UTC+3`` as being 3 hours *behind* UTC rather than ahead, per the
+        POSIX standard.
+
+    .. caution::
+
+        Prior to version 2.7.0, this function also supported time zones
+        in the format:
+
+            * ``EST5EDT,4,0,6,7200,10,0,26,7200,3600``
+            * ``EST5EDT,4,1,0,7200,10,-1,0,7200,3600``
+
+        This format is non-standard and has been deprecated; this function
+        will raise a :class:`DeprecatedTzFormatWarning` until
+        support is removed in a future version.
+
+    .. _`GNU C Library: TZ Variable`:
+        https://www.gnu.org/software/libc/manual/html_node/TZ-Variable.html
+    """
+    def __init__(self, s, posix_offset=False):
+        global parser
+        from dateutil.parser import _parser as parser
+
+        self._s = s
+
+        res = parser._parsetz(s)
+        if res is None or res.any_unused_tokens:
+            raise ValueError("unknown string format")
+
+        # Here we break the compatibility with the TZ variable handling.
+        # GMT-3 actually *means* the timezone -3.
+        if res.stdabbr in ("GMT", "UTC") and not posix_offset:
+            res.stdoffset *= -1
+
+        # We must initialize it first, since _delta() needs
+        # _std_offset and _dst_offset set. Use False in start/end
+        # to avoid building it two times.
+        tzrange.__init__(self, res.stdabbr, res.stdoffset,
+                         res.dstabbr, res.dstoffset,
+                         start=False, end=False)
+
+        if not res.dstabbr:
+            self._start_delta = None
+            self._end_delta = None
+        else:
+            self._start_delta = self._delta(res.start)
+            if self._start_delta:
+                self._end_delta = self._delta(res.end, isend=1)
+
+        self.hasdst = bool(self._start_delta)
+
+    def _delta(self, x, isend=0):
+        from dateutil import relativedelta
+        kwargs = {}
+        if x.month is not None:
+            kwargs["month"] = x.month
+            if x.weekday is not None:
+                kwargs["weekday"] = relativedelta.weekday(x.weekday, x.week)
+                if x.week > 0:
+                    kwargs["day"] = 1
+                else:
+                    kwargs["day"] = 31
+            elif x.day:
+                kwargs["day"] = x.day
+        elif x.yday is not None:
+            kwargs["yearday"] = x.yday
+        elif x.jyday is not None:
+            kwargs["nlyearday"] = x.jyday
+        if not kwargs:
+            # Default is to start on first sunday of april, and end
+            # on last sunday of october.
+            if not isend:
+                kwargs["month"] = 4
+                kwargs["day"] = 1
+                kwargs["weekday"] = relativedelta.SU(+1)
+            else:
+                kwargs["month"] = 10
+                kwargs["day"] = 31
+                kwargs["weekday"] = relativedelta.SU(-1)
+        if x.time is not None:
+            kwargs["seconds"] = x.time
+        else:
+            # Default is 2AM.
+            kwargs["seconds"] = 7200
+        if isend:
+            # Convert to standard time, to follow the documented way
+            # of working with the extra hour. See the documentation
+            # of the tzinfo class.
+            delta = self._dst_offset - self._std_offset
+            kwargs["seconds"] -= delta.seconds + delta.days * 86400
+        return relativedelta.relativedelta(**kwargs)
+
+    def __repr__(self):
+        return "%s(%s)" % (self.__class__.__name__, repr(self._s))
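+
+# A minimal usage sketch for ``tzstr`` (illustrative only; note the inverted
+# offset convention described in the class docstring above):
+#
+#     >>> import datetime
+#     >>> from dateutil.tz import tzstr
+#     >>> est = tzstr('EST5EDT')
+#     >>> est.tzname(datetime.datetime(2003, 7, 1))
+#     'EDT'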
+
+
+class _tzicalvtzcomp(object):
+    def __init__(self, tzoffsetfrom, tzoffsetto, isdst,
+                 tzname=None, rrule=None):
+        self.tzoffsetfrom = datetime.timedelta(seconds=tzoffsetfrom)
+        self.tzoffsetto = datetime.timedelta(seconds=tzoffsetto)
+        self.tzoffsetdiff = self.tzoffsetto - self.tzoffsetfrom
+        self.isdst = isdst
+        self.tzname = tzname
+        self.rrule = rrule
+
+
+class _tzicalvtz(_tzinfo):
+    def __init__(self, tzid, comps=[]):
+        super(_tzicalvtz, self).__init__()
+
+        self._tzid = tzid
+        self._comps = comps
+        self._cachedate = []
+        self._cachecomp = []
+        self._cache_lock = _thread.allocate_lock()
+
+    def _find_comp(self, dt):
+        if len(self._comps) == 1:
+            return self._comps[0]
+
+        dt = dt.replace(tzinfo=None)
+
+        try:
+            with self._cache_lock:
+                return self._cachecomp[self._cachedate.index(
+                    (dt, self._fold(dt)))]
+        except ValueError:
+            pass
+
+        lastcompdt = None
+        lastcomp = None
+
+        for comp in self._comps:
+            compdt = self._find_compdt(comp, dt)
+
+            if compdt and (not lastcompdt or lastcompdt < compdt):
+                lastcompdt = compdt
+                lastcomp = comp
+
+        if not lastcomp:
+            # RFC says nothing about what to do when a given
+            # time is before the first onset date. We'll look for the
+            # first standard component, or the first component, if
+            # none is found.
+            for comp in self._comps:
+                if not comp.isdst:
+                    lastcomp = comp
+                    break
+            else:
+                lastcomp = self._comps[0]
+
+        with self._cache_lock:
+            self._cachedate.insert(0, (dt, self._fold(dt)))
+            self._cachecomp.insert(0, lastcomp)
+
+            if len(self._cachedate) > 10:
+                self._cachedate.pop()
+                self._cachecomp.pop()
+
+        return lastcomp
+
+    def _find_compdt(self, comp, dt):
+        if comp.tzoffsetdiff < ZERO and self._fold(dt):
+            dt -= comp.tzoffsetdiff
+
+        compdt = comp.rrule.before(dt, inc=True)
+
+        return compdt
+
+    def utcoffset(self, dt):
+        if dt is None:
+            return None
+
+        return self._find_comp(dt).tzoffsetto
+
+    def dst(self, dt):
+        comp = self._find_comp(dt)
+        if comp.isdst:
+            return comp.tzoffsetdiff
+        else:
+            return ZERO
+
+    @tzname_in_python2
+    def tzname(self, dt):
+        return self._find_comp(dt).tzname
+
+    def __repr__(self):
+        return "<tzicalvtz %s>" % repr(self._tzid)
+
+    __reduce__ = object.__reduce__
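+
+# A minimal usage sketch for the ``tzical`` parser defined below (illustrative
+# only; ``VTIMEZONE_TEXT`` stands in for a hypothetical iCalendar string
+# containing a single VTIMEZONE block):
+#
+#     >>> from io import StringIO
+#     >>> from dateutil.tz import tzical
+#     >>> ical = tzical(StringIO(VTIMEZONE_TEXT))
+#     >>> ical.keys()
+#     ['US-Eastern']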
+            key (which can be retrieved from :func:`keys`) is required.
+
+        :raises ValueError:
+            Raised if ``tzid`` is not specified but there are either more
+            or fewer than 1 zone defined.
+
+        :returns:
+            Returns either a :py:class:`datetime.tzinfo` object representing
+            the relevant time zone or :py:const:`None` if the ``tzid`` was
+            not found.
+        """
+        if tzid is None:
+            if len(self._vtz) == 0:
+                raise ValueError("no timezones defined")
+            elif len(self._vtz) > 1:
+                raise ValueError("more than one timezone available")
+            tzid = next(iter(self._vtz))
+
+        return self._vtz.get(tzid)
+
+    def _parse_offset(self, s):
+        s = s.strip()
+        if not s:
+            raise ValueError("empty offset")
+        if s[0] in ('+', '-'):
+            signal = (-1, +1)[s[0] == '+']
+            s = s[1:]
+        else:
+            signal = +1
+        if len(s) == 4:
+            return (int(s[:2]) * 3600 + int(s[2:]) * 60) * signal
+        elif len(s) == 6:
+            return (int(s[:2]) * 3600 + int(s[2:4]) * 60 + int(s[4:])) * signal
+        else:
+            raise ValueError("invalid offset: " + s)
+
+    def _parse_rfc(self, s):
+        lines = s.splitlines()
+        if not lines:
+            raise ValueError("empty string")
+
+        # Unfold
+        i = 0
+        while i < len(lines):
+            line = lines[i].rstrip()
+            if not line:
+                del lines[i]
+            elif i > 0 and line[0] == " ":
+                lines[i-1] += line[1:]
+                del lines[i]
+            else:
+                i += 1
+
+        tzid = None
+        comps = []
+        invtz = False
+        comptype = None
+        for line in lines:
+            if not line:
+                continue
+            name, value = line.split(':', 1)
+            parms = name.split(';')
+            if not parms:
+                raise ValueError("empty property name")
+            name = parms[0].upper()
+            parms = parms[1:]
+            if invtz:
+                if name == "BEGIN":
+                    if value in ("STANDARD", "DAYLIGHT"):
+                        # Process component
+                        pass
+                    else:
+                        raise ValueError("unknown component: "+value)
+                    comptype = value
+                    founddtstart = False
+                    tzoffsetfrom = None
+                    tzoffsetto = None
+                    rrulelines = []
+                    tzname = None
+                elif name == "END":
+                    if value == "VTIMEZONE":
+                        if comptype:
+                            raise ValueError("component not closed: "+comptype)
+                        if not tzid:
+                            raise ValueError("mandatory TZID not found")
+                        if not comps:
+                            raise ValueError(
+                                "at least one component is needed")
+                        # Process vtimezone
+                        self._vtz[tzid] = _tzicalvtz(tzid, comps)
+                        invtz = False
+                    elif value == comptype:
+                        if not founddtstart:
+                            raise ValueError("mandatory DTSTART not found")
+                        if tzoffsetfrom is None:
+                            raise ValueError(
+                                "mandatory TZOFFSETFROM not found")
+                        if tzoffsetto is None:
+                            raise ValueError(
+                                "mandatory TZOFFSETTO not found")
+                        # Process component
+                        rr = None
+                        if rrulelines:
+                            rr = rrule.rrulestr("\n".join(rrulelines),
+                                                compatible=True,
+                                                ignoretz=True,
+                                                cache=True)
+                        comp = _tzicalvtzcomp(tzoffsetfrom, tzoffsetto,
+                                              (comptype == "DAYLIGHT"),
+                                              tzname, rr)
+                        comps.append(comp)
+                        comptype = None
+                    else:
+                        raise ValueError("invalid component end: "+value)
+                elif comptype:
+                    if name == "DTSTART":
+                        # DTSTART in VTIMEZONE takes a subset of valid RRULE
+                        # values under RFC 5545.
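+                        # Only the default DATE-TIME form is accepted here;
+                        # any other DTSTART parameter is rejected below.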
+ for parm in parms: + if parm != 'VALUE=DATE-TIME': + msg = ('Unsupported DTSTART param in ' + + 'VTIMEZONE: ' + parm) + raise ValueError(msg) + rrulelines.append(line) + founddtstart = True + elif name in ("RRULE", "RDATE", "EXRULE", "EXDATE"): + rrulelines.append(line) + elif name == "TZOFFSETFROM": + if parms: + raise ValueError( + "unsupported %s parm: %s " % (name, parms[0])) + tzoffsetfrom = self._parse_offset(value) + elif name == "TZOFFSETTO": + if parms: + raise ValueError( + "unsupported TZOFFSETTO parm: "+parms[0]) + tzoffsetto = self._parse_offset(value) + elif name == "TZNAME": + if parms: + raise ValueError( + "unsupported TZNAME parm: "+parms[0]) + tzname = value + elif name == "COMMENT": + pass + else: + raise ValueError("unsupported property: "+name) + else: + if name == "TZID": + if parms: + raise ValueError( + "unsupported TZID parm: "+parms[0]) + tzid = value + elif name in ("TZURL", "LAST-MODIFIED", "COMMENT"): + pass + else: + raise ValueError("unsupported property: "+name) + elif name == "BEGIN" and value == "VTIMEZONE": + tzid = None + comps = [] + invtz = True + + def __repr__(self): + return "%s(%s)" % (self.__class__.__name__, repr(self._s)) + + +if sys.platform != "win32": + TZFILES = ["/etc/localtime", "localtime"] + TZPATHS = ["/usr/share/zoneinfo", + "/usr/lib/zoneinfo", + "/usr/share/lib/zoneinfo", + "/etc/zoneinfo"] +else: + TZFILES = [] + TZPATHS = [] + + +def __get_gettz(): + tzlocal_classes = (tzlocal,) + if tzwinlocal is not None: + tzlocal_classes += (tzwinlocal,) + + class GettzFunc(object): + """ + Retrieve a time zone object from a string representation + + This function is intended to retrieve the :py:class:`tzinfo` subclass + that best represents the time zone that would be used if a POSIX + `TZ variable`_ were set to the same value. + + If no argument or an empty string is passed to ``gettz``, local time + is returned: + + .. code-block:: python3 + + >>> gettz() + tzfile('/etc/localtime') + + This function is also the preferred way to map IANA tz database keys + to :class:`tzfile` objects: + + .. code-block:: python3 + + >>> gettz('Pacific/Kiritimati') + tzfile('/usr/share/zoneinfo/Pacific/Kiritimati') + + On Windows, the standard is extended to include the Windows-specific + zone names provided by the operating system: + + .. code-block:: python3 + + >>> gettz('Egypt Standard Time') + tzwin('Egypt Standard Time') + + Passing a GNU ``TZ`` style string time zone specification returns a + :class:`tzstr` object: + + .. code-block:: python3 + + >>> gettz('AEST-10AEDT-11,M10.1.0/2,M4.1.0/3') + tzstr('AEST-10AEDT-11,M10.1.0/2,M4.1.0/3') + + :param name: + A time zone name (IANA, or, on Windows, Windows keys), location of + a ``tzfile(5)`` zoneinfo file or ``TZ`` variable style time zone + specifier. An empty string, no argument or ``None`` is interpreted + as local time. + + :return: + Returns an instance of one of ``dateutil``'s :py:class:`tzinfo` + subclasses. + + .. versionchanged:: 2.7.0 + + After version 2.7.0, any two calls to ``gettz`` using the same + input strings will return the same object: + + .. code-block:: python3 + + >>> tz.gettz('America/Chicago') is tz.gettz('America/Chicago') + True + + In addition to improving performance, this ensures that + `"same zone" semantics`_ are used for datetimes in the same zone. + + + .. _`TZ variable`: + https://www.gnu.org/software/libc/manual/html_node/TZ-Variable.html + + .. 
_`"same zone" semantics`: + https://blog.ganssle.io/articles/2018/02/aware-datetime-arithmetic.html + """ + def __init__(self): + + self.__instances = weakref.WeakValueDictionary() + self.__strong_cache_size = 8 + self.__strong_cache = OrderedDict() + self._cache_lock = _thread.allocate_lock() + + def __call__(self, name=None): + with self._cache_lock: + rv = self.__instances.get(name, None) + + if rv is None: + rv = self.nocache(name=name) + if not (name is None + or isinstance(rv, tzlocal_classes) + or rv is None): + # tzlocal is slightly more complicated than the other + # time zone providers because it depends on environment + # at construction time, so don't cache that. + # + # We also cannot store weak references to None, so we + # will also not store that. + self.__instances[name] = rv + else: + # No need for strong caching, return immediately + return rv + + self.__strong_cache[name] = self.__strong_cache.pop(name, rv) + + if len(self.__strong_cache) > self.__strong_cache_size: + self.__strong_cache.popitem(last=False) + + return rv + + def set_cache_size(self, size): + with self._cache_lock: + self.__strong_cache_size = size + while len(self.__strong_cache) > size: + self.__strong_cache.popitem(last=False) + + def cache_clear(self): + with self._cache_lock: + self.__instances = weakref.WeakValueDictionary() + self.__strong_cache.clear() + + @staticmethod + def nocache(name=None): + """A non-cached version of gettz""" + tz = None + if not name: + try: + name = os.environ["TZ"] + except KeyError: + pass + if name is None or name == ":": + for filepath in TZFILES: + if not os.path.isabs(filepath): + filename = filepath + for path in TZPATHS: + filepath = os.path.join(path, filename) + if os.path.isfile(filepath): + break + else: + continue + if os.path.isfile(filepath): + try: + tz = tzfile(filepath) + break + except (IOError, OSError, ValueError): + pass + else: + tz = tzlocal() + else: + try: + if name.startswith(":"): + name = name[1:] + except TypeError as e: + if isinstance(name, bytes): + new_msg = "gettz argument should be str, not bytes" + six.raise_from(TypeError(new_msg), e) + else: + raise + if os.path.isabs(name): + if os.path.isfile(name): + tz = tzfile(name) + else: + tz = None + else: + for path in TZPATHS: + filepath = os.path.join(path, name) + if not os.path.isfile(filepath): + filepath = filepath.replace(' ', '_') + if not os.path.isfile(filepath): + continue + try: + tz = tzfile(filepath) + break + except (IOError, OSError, ValueError): + pass + else: + tz = None + if tzwin is not None: + try: + tz = tzwin(name) + except (WindowsError, UnicodeEncodeError): + # UnicodeEncodeError is for Python 2.7 compat + tz = None + + if not tz: + from dateutil.zoneinfo import get_zonefile_instance + tz = get_zonefile_instance().get(name) + + if not tz: + for c in name: + # name is not a tzstr unless it has at least + # one offset. For short values of "name", an + # explicit for loop seems to be the fastest way + # To determine if a string contains a digit + if c in "0123456789": + try: + tz = tzstr(name) + except ValueError: + pass + break + else: + if name in ("GMT", "UTC"): + tz = UTC + elif name in time.tzname: + tz = tzlocal() + return tz + + return GettzFunc() + + +gettz = __get_gettz() +del __get_gettz + + +def datetime_exists(dt, tz=None): + """ + Given a datetime and a time zone, determine whether or not a given datetime + would fall in a gap. + + :param dt: + A :class:`datetime.datetime` (whose time zone will be ignored if ``tz`` + is provided.) 
+ + :param tz: + A :class:`datetime.tzinfo` with support for the ``fold`` attribute. If + ``None`` or not provided, the datetime's own time zone will be used. + + :return: + Returns a boolean value whether or not the "wall time" exists in + ``tz``. + + .. versionadded:: 2.7.0 + """ + if tz is None: + if dt.tzinfo is None: + raise ValueError('Datetime is naive and no time zone provided.') + tz = dt.tzinfo + + dt = dt.replace(tzinfo=None) + + # This is essentially a test of whether or not the datetime can survive + # a round trip to UTC. + dt_rt = dt.replace(tzinfo=tz).astimezone(UTC).astimezone(tz) + dt_rt = dt_rt.replace(tzinfo=None) + + return dt == dt_rt + + +def datetime_ambiguous(dt, tz=None): + """ + Given a datetime and a time zone, determine whether or not a given datetime + is ambiguous (i.e if there are two times differentiated only by their DST + status). + + :param dt: + A :class:`datetime.datetime` (whose time zone will be ignored if ``tz`` + is provided.) + + :param tz: + A :class:`datetime.tzinfo` with support for the ``fold`` attribute. If + ``None`` or not provided, the datetime's own time zone will be used. + + :return: + Returns a boolean value whether or not the "wall time" is ambiguous in + ``tz``. + + .. versionadded:: 2.6.0 + """ + if tz is None: + if dt.tzinfo is None: + raise ValueError('Datetime is naive and no time zone provided.') + + tz = dt.tzinfo + + # If a time zone defines its own "is_ambiguous" function, we'll use that. + is_ambiguous_fn = getattr(tz, 'is_ambiguous', None) + if is_ambiguous_fn is not None: + try: + return tz.is_ambiguous(dt) + except Exception: + pass + + # If it doesn't come out and tell us it's ambiguous, we'll just check if + # the fold attribute has any effect on this particular date and time. + dt = dt.replace(tzinfo=tz) + wall_0 = enfold(dt, fold=0) + wall_1 = enfold(dt, fold=1) + + same_offset = wall_0.utcoffset() == wall_1.utcoffset() + same_dst = wall_0.dst() == wall_1.dst() + + return not (same_offset and same_dst) + + +def resolve_imaginary(dt): + """ + Given a datetime that may be imaginary, return an existing datetime. + + This function assumes that an imaginary datetime represents what the + wall time would be in a zone had the offset transition not occurred, so + it will always fall forward by the transition's change in offset. + + .. doctest:: + + >>> from dateutil import tz + >>> from datetime import datetime + >>> NYC = tz.gettz('America/New_York') + >>> print(tz.resolve_imaginary(datetime(2017, 3, 12, 2, 30, tzinfo=NYC))) + 2017-03-12 03:30:00-04:00 + + >>> KIR = tz.gettz('Pacific/Kiritimati') + >>> print(tz.resolve_imaginary(datetime(1995, 1, 1, 12, 30, tzinfo=KIR))) + 1995-01-02 12:30:00+14:00 + + As a note, :func:`datetime.astimezone` is guaranteed to produce a valid, + existing datetime, so a round-trip to and from UTC is sufficient to get + an extant datetime, however, this generally "falls back" to an earlier time + rather than falling forward to the STD side (though no guarantees are made + about this behavior). + + :param dt: + A :class:`datetime.datetime` which may or may not exist. + + :return: + Returns an existing :class:`datetime.datetime`. If ``dt`` was not + imaginary, the datetime returned is guaranteed to be the same object + passed to the function. + + .. 
versionadded:: 2.7.0 + """ + if dt.tzinfo is not None and not datetime_exists(dt): + + curr_offset = (dt + datetime.timedelta(hours=24)).utcoffset() + old_offset = (dt - datetime.timedelta(hours=24)).utcoffset() + + dt += curr_offset - old_offset + + return dt + + +def _datetime_to_timestamp(dt): + """ + Convert a :class:`datetime.datetime` object to an epoch timestamp in + seconds since January 1, 1970, ignoring the time zone. + """ + return (dt.replace(tzinfo=None) - EPOCH).total_seconds() + + +if sys.version_info >= (3, 6): + def _get_supported_offset(second_offset): + return second_offset +else: + def _get_supported_offset(second_offset): + # For python pre-3.6, round to full-minutes if that's not the case. + # Python's datetime doesn't accept sub-minute timezones. Check + # http://python.org/sf/1447945 or https://bugs.python.org/issue5288 + # for some information. + old_offset = second_offset + calculated_offset = 60 * ((second_offset + 30) // 60) + return calculated_offset + + +try: + # Python 3.7 feature + from contextlib import nullcontext as _nullcontext +except ImportError: + class _nullcontext(object): + """ + Class for wrapping contexts so that they are passed through in a + with statement. + """ + def __init__(self, context): + self.context = context + + def __enter__(self): + return self.context + + def __exit__(*args, **kwargs): + pass + +# vim:ts=4:sw=4:et diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/dateutil/tz/win.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/dateutil/tz/win.py new file mode 100644 index 0000000000000000000000000000000000000000..cde07ba792c40903f0c334839140173b39fd8124 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/dateutil/tz/win.py @@ -0,0 +1,370 @@ +# -*- coding: utf-8 -*- +""" +This module provides an interface to the native time zone data on Windows, +including :py:class:`datetime.tzinfo` implementations. + +Attempting to import this module on a non-Windows platform will raise an +:py:obj:`ImportError`. +""" +# This code was originally contributed by Jeffrey Harris. +import datetime +import struct + +from six.moves import winreg +from six import text_type + +try: + import ctypes + from ctypes import wintypes +except ValueError: + # ValueError is raised on non-Windows systems for some horrible reason. + raise ImportError("Running tzwin on non-Windows system") + +from ._common import tzrangebase + +__all__ = ["tzwin", "tzwinlocal", "tzres"] + +ONEWEEK = datetime.timedelta(7) + +TZKEYNAMENT = r"SOFTWARE\Microsoft\Windows NT\CurrentVersion\Time Zones" +TZKEYNAME9X = r"SOFTWARE\Microsoft\Windows\CurrentVersion\Time Zones" +TZLOCALKEYNAME = r"SYSTEM\CurrentControlSet\Control\TimeZoneInformation" + + +def _settzkeyname(): + handle = winreg.ConnectRegistry(None, winreg.HKEY_LOCAL_MACHINE) + try: + winreg.OpenKey(handle, TZKEYNAMENT).Close() + TZKEYNAME = TZKEYNAMENT + except WindowsError: + TZKEYNAME = TZKEYNAME9X + handle.Close() + return TZKEYNAME + + +TZKEYNAME = _settzkeyname() + + +class tzres(object): + """ + Class for accessing ``tzres.dll``, which contains timezone name related + resources. + + .. 
versionadded:: 2.5.0
+    """
+    p_wchar = ctypes.POINTER(wintypes.WCHAR)  # Pointer to a wide char
+
+    def __init__(self, tzres_loc='tzres.dll'):
+        # Load the user32 DLL so we can load strings from tzres
+        user32 = ctypes.WinDLL('user32')
+
+        # Specify the LoadStringW function
+        user32.LoadStringW.argtypes = (wintypes.HINSTANCE,
+                                       wintypes.UINT,
+                                       wintypes.LPWSTR,
+                                       ctypes.c_int)
+
+        self.LoadStringW = user32.LoadStringW
+        self._tzres = ctypes.WinDLL(tzres_loc)
+        self.tzres_loc = tzres_loc
+
+    def load_name(self, offset):
+        """
+        Load a timezone name from a DLL offset (integer).
+
+        >>> from dateutil.tzwin import tzres
+        >>> tzr = tzres()
+        >>> print(tzr.load_name(112))
+        Eastern Standard Time
+
+        :param offset:
+            A positive integer value referring to a string from the tzres dll.
+
+        .. note::
+
+            Offsets found in the registry are generally of the form
+            ``@tzres.dll,-114``. The offset in this case is 114, not -114.
+
+        """
+        resource = self.p_wchar()
+        lpBuffer = ctypes.cast(ctypes.byref(resource), wintypes.LPWSTR)
+        nchar = self.LoadStringW(self._tzres._handle, offset, lpBuffer, 0)
+        return resource[:nchar]
+
+    def name_from_string(self, tzname_str):
+        """
+        Parse strings as returned from the Windows registry into the time zone
+        name as defined in the registry.
+
+        >>> from dateutil.tzwin import tzres
+        >>> tzr = tzres()
+        >>> print(tzr.name_from_string('@tzres.dll,-251'))
+        Dateline Daylight Time
+        >>> print(tzr.name_from_string('Eastern Standard Time'))
+        Eastern Standard Time
+
+        :param tzname_str:
+            A timezone name string as returned from a Windows registry key.
+
+        :return:
+            Returns the localized timezone string from tzres.dll if the string
+            is of the form `@tzres.dll,-offset`, else returns the input string.
+        """
+        if not tzname_str.startswith('@'):
+            return tzname_str
+
+        name_splt = tzname_str.split(',-')
+        try:
+            offset = int(name_splt[1])
+        except (IndexError, ValueError):
+            raise ValueError("Malformed timezone string.")
+
+        return self.load_name(offset)
+
+
+class tzwinbase(tzrangebase):
+    """tzinfo class based on win32's timezones available in the registry."""
+    def __init__(self):
+        raise NotImplementedError('tzwinbase is an abstract base class')
+
+    def __eq__(self, other):
+        # Compare on all relevant dimensions, including name.
+        if not isinstance(other, tzwinbase):
+            return NotImplemented
+
+        return (self._std_offset == other._std_offset and
+                self._dst_offset == other._dst_offset and
+                self._stddayofweek == other._stddayofweek and
+                self._dstdayofweek == other._dstdayofweek and
+                self._stdweeknumber == other._stdweeknumber and
+                self._dstweeknumber == other._dstweeknumber and
+                self._stdhour == other._stdhour and
+                self._dsthour == other._dsthour and
+                self._stdminute == other._stdminute and
+                self._dstminute == other._dstminute and
+                self._std_abbr == other._std_abbr and
+                self._dst_abbr == other._dst_abbr)
+
+    @staticmethod
+    def list():
+        """Return a list of all time zones known to the system."""
+        with winreg.ConnectRegistry(None, winreg.HKEY_LOCAL_MACHINE) as handle:
+            with winreg.OpenKey(handle, TZKEYNAME) as tzkey:
+                result = [winreg.EnumKey(tzkey, i)
+                          for i in range(winreg.QueryInfoKey(tzkey)[0])]
+        return result
+
+    def display(self):
+        """
+        Return the display name of the time zone.
+        """
+        return self._display
+
+    def transitions(self, year):
+        """
+        For a given year, get the DST on and off transition times, expressed
+        always on the standard time side. For zones with no transitions, this
+        function returns ``None``.
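+
+        An illustrative sketch (the exact datetimes depend on the zone
+        queried and the Windows registry data; shown here for US Eastern
+        rules in 2020):
+
+        .. code-block:: python3
+
+            >>> from dateutil.tz.win import tzwin
+            >>> tz = tzwin('Eastern Standard Time')
+            >>> tz.transitions(2020)
+            (datetime.datetime(2020, 3, 8, 2, 0), datetime.datetime(2020, 11, 1, 1, 0))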
+
+        :param year:
+            The year whose transitions you would like to query.
+
+        :return:
+            Returns a :class:`tuple` of :class:`datetime.datetime` objects,
+            ``(dston, dstoff)`` for zones with an annual DST transition, or
+            ``None`` for fixed offset zones.
+        """
+
+        if not self.hasdst:
+            return None
+
+        dston = picknthweekday(year, self._dstmonth, self._dstdayofweek,
+                               self._dsthour, self._dstminute,
+                               self._dstweeknumber)
+
+        dstoff = picknthweekday(year, self._stdmonth, self._stddayofweek,
+                                self._stdhour, self._stdminute,
+                                self._stdweeknumber)
+
+        # Ambiguous dates default to the STD side
+        dstoff -= self._dst_base_offset
+
+        return dston, dstoff
+
+    def _get_hasdst(self):
+        return self._dstmonth != 0
+
+    @property
+    def _dst_base_offset(self):
+        return self._dst_base_offset_
+
+
+class tzwin(tzwinbase):
+    """
+    Time zone object created from the zone info in the Windows registry
+
+    These are similar to :py:class:`dateutil.tz.tzrange` objects in that
+    the time zone data is provided in the format of a single offset rule
+    for either 0 or 2 time zone transitions per year.
+
+    :param name:
+        The name of a Windows time zone key, e.g. "Eastern Standard Time".
+        The full list of keys can be retrieved with :func:`tzwin.list`.
+    """
+
+    def __init__(self, name):
+        self._name = name
+
+        with winreg.ConnectRegistry(None, winreg.HKEY_LOCAL_MACHINE) as handle:
+            tzkeyname = text_type("{kn}\\{name}").format(kn=TZKEYNAME, name=name)
+            with winreg.OpenKey(handle, tzkeyname) as tzkey:
+                keydict = valuestodict(tzkey)
+
+        self._std_abbr = keydict["Std"]
+        self._dst_abbr = keydict["Dlt"]
+
+        self._display = keydict["Display"]
+
+        # See http://ww1.jsiinc.com/SUBA/tip0300/rh0398.htm
+        tup = struct.unpack("=3l16h", keydict["TZI"])
+        stdoffset = -tup[0]-tup[1]          # Bias + StandardBias * -1
+        dstoffset = stdoffset-tup[2]        # + DaylightBias * -1
+        self._std_offset = datetime.timedelta(minutes=stdoffset)
+        self._dst_offset = datetime.timedelta(minutes=dstoffset)
+
+        # for the meaning see the win32 TIME_ZONE_INFORMATION structure docs
+        # http://msdn.microsoft.com/en-us/library/windows/desktop/ms725481(v=vs.85).aspx
+        (self._stdmonth,
+         self._stddayofweek,   # Sunday = 0
+         self._stdweeknumber,  # Last = 5
+         self._stdhour,
+         self._stdminute) = tup[4:9]
+
+        (self._dstmonth,
+         self._dstdayofweek,   # Sunday = 0
+         self._dstweeknumber,  # Last = 5
+         self._dsthour,
+         self._dstminute) = tup[12:17]
+
+        self._dst_base_offset_ = self._dst_offset - self._std_offset
+        self.hasdst = self._get_hasdst()
+
+    def __repr__(self):
+        return "tzwin(%s)" % repr(self._name)
+
+    def __reduce__(self):
+        return (self.__class__, (self._name,))
+
+
+class tzwinlocal(tzwinbase):
+    """
+    Class representing the local time zone information in the Windows registry
+
+    While :class:`dateutil.tz.tzlocal` makes system calls (via the :mod:`time`
+    module) to retrieve time zone information, ``tzwinlocal`` retrieves the
+    rules directly from the Windows registry and creates an object like
+    :class:`dateutil.tz.tzwin`.
+
+    Because Windows does not have an equivalent of :func:`time.tzset`, on
+    Windows, :class:`dateutil.tz.tzlocal` instances will always reflect the
+    time zone settings *at the time that the process was started*, meaning
+    changes to the machine's time zone settings during the run of a program
+    on Windows will **not** be reflected by :class:`dateutil.tz.tzlocal`.
+    Because ``tzwinlocal`` reads the registry directly, it is unaffected by
+    this issue.
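+
+    A minimal usage sketch (the reported name depends on the machine's
+    current settings):
+
+    .. code-block:: python3
+
+        >>> from dateutil.tz.win import tzwinlocal
+        >>> str(tzwinlocal())
+        "tzwinlocal('Eastern Standard Time')"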
+ """ + def __init__(self): + with winreg.ConnectRegistry(None, winreg.HKEY_LOCAL_MACHINE) as handle: + with winreg.OpenKey(handle, TZLOCALKEYNAME) as tzlocalkey: + keydict = valuestodict(tzlocalkey) + + self._std_abbr = keydict["StandardName"] + self._dst_abbr = keydict["DaylightName"] + + try: + tzkeyname = text_type('{kn}\\{sn}').format(kn=TZKEYNAME, + sn=self._std_abbr) + with winreg.OpenKey(handle, tzkeyname) as tzkey: + _keydict = valuestodict(tzkey) + self._display = _keydict["Display"] + except OSError: + self._display = None + + stdoffset = -keydict["Bias"]-keydict["StandardBias"] + dstoffset = stdoffset-keydict["DaylightBias"] + + self._std_offset = datetime.timedelta(minutes=stdoffset) + self._dst_offset = datetime.timedelta(minutes=dstoffset) + + # For reasons unclear, in this particular key, the day of week has been + # moved to the END of the SYSTEMTIME structure. + tup = struct.unpack("=8h", keydict["StandardStart"]) + + (self._stdmonth, + self._stdweeknumber, # Last = 5 + self._stdhour, + self._stdminute) = tup[1:5] + + self._stddayofweek = tup[7] + + tup = struct.unpack("=8h", keydict["DaylightStart"]) + + (self._dstmonth, + self._dstweeknumber, # Last = 5 + self._dsthour, + self._dstminute) = tup[1:5] + + self._dstdayofweek = tup[7] + + self._dst_base_offset_ = self._dst_offset - self._std_offset + self.hasdst = self._get_hasdst() + + def __repr__(self): + return "tzwinlocal()" + + def __str__(self): + # str will return the standard name, not the daylight name. + return "tzwinlocal(%s)" % repr(self._std_abbr) + + def __reduce__(self): + return (self.__class__, ()) + + +def picknthweekday(year, month, dayofweek, hour, minute, whichweek): + """ dayofweek == 0 means Sunday, whichweek 5 means last instance """ + first = datetime.datetime(year, month, 1, hour, minute) + + # This will work if dayofweek is ISO weekday (1-7) or Microsoft-style (0-6), + # Because 7 % 7 = 0 + weekdayone = first.replace(day=((dayofweek - first.isoweekday()) % 7) + 1) + wd = weekdayone + ((whichweek - 1) * ONEWEEK) + if (wd.month != month): + wd -= ONEWEEK + + return wd + + +def valuestodict(key): + """Convert a registry key's values to a dictionary.""" + dout = {} + size = winreg.QueryInfoKey(key)[1] + tz_res = None + + for i in range(size): + key_name, value, dtype = winreg.EnumValue(key, i) + if dtype == winreg.REG_DWORD or dtype == winreg.REG_DWORD_LITTLE_ENDIAN: + # If it's a DWORD (32-bit integer), it's stored as unsigned - convert + # that to a proper signed integer + if value & (1 << 31): + value = value - (1 << 32) + elif dtype == winreg.REG_SZ: + # If it's a reference to the tzres DLL, load the actual string + if value.startswith('@tzres'): + tz_res = tz_res or tzres() + value = tz_res.name_from_string(value) + + value = value.rstrip('\x00') # Remove trailing nulls + + dout[key_name] = value + + return dout diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/dateutil/tzwin.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/dateutil/tzwin.py new file mode 100644 index 0000000000000000000000000000000000000000..cebc673e40fc376653ebf037e96f0a6d0b33e906 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/dateutil/tzwin.py @@ -0,0 +1,2 @@ +# tzwin has moved to dateutil.tz.win +from .tz.win import * diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/dateutil/utils.py 
b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/dateutil/utils.py
new file mode 100644
index 0000000000000000000000000000000000000000..44d9c99455f0147cd5cd38fa41e758f6638d1f1c
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/dateutil/utils.py
@@ -0,0 +1,71 @@
+# -*- coding: utf-8 -*-
+"""
+This module offers general convenience and utility functions for dealing with
+datetimes.
+
+.. versionadded:: 2.7.0
+"""
+from __future__ import unicode_literals
+
+from datetime import datetime, time
+
+
+def today(tzinfo=None):
+    """
+    Returns a :py:class:`datetime` representing the current day at midnight
+
+    :param tzinfo:
+        The time zone to attach (also used to determine the current day).
+
+    :return:
+        A :py:class:`datetime.datetime` object representing the current day
+        at midnight.
+    """
+
+    dt = datetime.now(tzinfo)
+    return datetime.combine(dt.date(), time(0, tzinfo=tzinfo))
+
+
+def default_tzinfo(dt, tzinfo):
+    """
+    Sets the ``tzinfo`` parameter on naive datetimes only
+
+    This is useful for example when you are provided a datetime that may have
+    either an implicit or explicit time zone, such as when parsing a time zone
+    string.
+
+    .. doctest::
+
+        >>> from dateutil.tz import tzoffset
+        >>> from dateutil.parser import parse
+        >>> from dateutil.utils import default_tzinfo
+        >>> dflt_tz = tzoffset("EST", -18000)
+        >>> print(default_tzinfo(parse('2014-01-01 12:30 UTC'), dflt_tz))
+        2014-01-01 12:30:00+00:00
+        >>> print(default_tzinfo(parse('2014-01-01 12:30'), dflt_tz))
+        2014-01-01 12:30:00-05:00
+
+    :param dt:
+        The datetime on which to replace the time zone
+
+    :param tzinfo:
+        The :py:class:`datetime.tzinfo` subclass instance to assign to
+        ``dt`` if (and only if) it is naive.
+
+    :return:
+        Returns an aware :py:class:`datetime.datetime`.
+    """
+    if dt.tzinfo is not None:
+        return dt
+    else:
+        return dt.replace(tzinfo=tzinfo)
+
+
+def within_delta(dt1, dt2, delta):
+    """
+    Useful for comparing two datetimes that may have a negligible difference
+    and should still be considered equal.
+    """
+    delta = abs(delta)
+    difference = dt1 - dt2
+    return -delta <= difference <= delta
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/dateutil/zoneinfo/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/dateutil/zoneinfo/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..34f11ad66c88047f2c049a4cdcc937b4b78ea6d6
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/dateutil/zoneinfo/__init__.py
@@ -0,0 +1,167 @@
+# -*- coding: utf-8 -*-
+import warnings
+import json
+
+from tarfile import TarFile
+from pkgutil import get_data
+from io import BytesIO
+
+from dateutil.tz import tzfile as _tzfile
+
+__all__ = ["get_zonefile_instance", "gettz", "gettz_db_metadata"]
+
+ZONEFILENAME = "dateutil-zoneinfo.tar.gz"
+METADATA_FN = 'METADATA'
+
+
+class tzfile(_tzfile):
+    def __reduce__(self):
+        return (gettz, (self._filename,))
+
+
+def getzoneinfofile_stream():
+    try:
+        return BytesIO(get_data(__name__, ZONEFILENAME))
+    except IOError as e:  # TODO  switch to FileNotFoundError?
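+        # The bundled tarball may be absent (e.g. stripped by downstream
+        # packagers), so warn and return None; callers such as
+        # get_zonefile_instance() then simply see an empty ZoneInfoFile.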
+ warnings.warn("I/O error({0}): {1}".format(e.errno, e.strerror)) + return None + + +class ZoneInfoFile(object): + def __init__(self, zonefile_stream=None): + if zonefile_stream is not None: + with TarFile.open(fileobj=zonefile_stream) as tf: + self.zones = {zf.name: tzfile(tf.extractfile(zf), filename=zf.name) + for zf in tf.getmembers() + if zf.isfile() and zf.name != METADATA_FN} + # deal with links: They'll point to their parent object. Less + # waste of memory + links = {zl.name: self.zones[zl.linkname] + for zl in tf.getmembers() if + zl.islnk() or zl.issym()} + self.zones.update(links) + try: + metadata_json = tf.extractfile(tf.getmember(METADATA_FN)) + metadata_str = metadata_json.read().decode('UTF-8') + self.metadata = json.loads(metadata_str) + except KeyError: + # no metadata in tar file + self.metadata = None + else: + self.zones = {} + self.metadata = None + + def get(self, name, default=None): + """ + Wrapper for :func:`ZoneInfoFile.zones.get`. This is a convenience method + for retrieving zones from the zone dictionary. + + :param name: + The name of the zone to retrieve. (Generally IANA zone names) + + :param default: + The value to return in the event of a missing key. + + .. versionadded:: 2.6.0 + + """ + return self.zones.get(name, default) + + +# The current API has gettz as a module function, although in fact it taps into +# a stateful class. So as a workaround for now, without changing the API, we +# will create a new "global" class instance the first time a user requests a +# timezone. Ugly, but adheres to the api. +# +# TODO: Remove after deprecation period. +_CLASS_ZONE_INSTANCE = [] + + +def get_zonefile_instance(new_instance=False): + """ + This is a convenience function which provides a :class:`ZoneInfoFile` + instance using the data provided by the ``dateutil`` package. By default, it + caches a single instance of the ZoneInfoFile object and returns that. + + :param new_instance: + If ``True``, a new instance of :class:`ZoneInfoFile` is instantiated and + used as the cached instance for the next call. Otherwise, new instances + are created only as necessary. + + :return: + Returns a :class:`ZoneInfoFile` object. + + .. versionadded:: 2.6 + """ + if new_instance: + zif = None + else: + zif = getattr(get_zonefile_instance, '_cached_instance', None) + + if zif is None: + zif = ZoneInfoFile(getzoneinfofile_stream()) + + get_zonefile_instance._cached_instance = zif + + return zif + + +def gettz(name): + """ + This retrieves a time zone from the local zoneinfo tarball that is packaged + with dateutil. + + :param name: + An IANA-style time zone name, as found in the zoneinfo file. + + :return: + Returns a :class:`dateutil.tz.tzfile` time zone object. + + .. warning:: + It is generally inadvisable to use this function, and it is only + provided for API compatibility with earlier versions. This is *not* + equivalent to ``dateutil.tz.gettz()``, which selects an appropriate + time zone based on the inputs, favoring system zoneinfo. This is ONLY + for accessing the dateutil-specific zoneinfo (which may be out of + date compared to the system zoneinfo). + + .. deprecated:: 2.6 + If you need to use a specific zoneinfofile over the system zoneinfo, + instantiate a :class:`dateutil.zoneinfo.ZoneInfoFile` object and call + :func:`dateutil.zoneinfo.ZoneInfoFile.get(name)` instead. + + Use :func:`get_zonefile_instance` to retrieve an instance of the + dateutil-provided zoneinfo. 
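+
+        A minimal sketch of the recommended replacement:
+
+        .. code-block:: python3
+
+            >>> from dateutil.zoneinfo import get_zonefile_instance
+            >>> zoneinfo = get_zonefile_instance()
+            >>> tz = zoneinfo.get('America/New_York')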
+ """ + warnings.warn("zoneinfo.gettz() will be removed in future versions, " + "to use the dateutil-provided zoneinfo files, instantiate a " + "ZoneInfoFile object and use ZoneInfoFile.zones.get() " + "instead. See the documentation for details.", + DeprecationWarning) + + if len(_CLASS_ZONE_INSTANCE) == 0: + _CLASS_ZONE_INSTANCE.append(ZoneInfoFile(getzoneinfofile_stream())) + return _CLASS_ZONE_INSTANCE[0].zones.get(name) + + +def gettz_db_metadata(): + """ Get the zonefile metadata + + See `zonefile_metadata`_ + + :returns: + A dictionary with the database metadata + + .. deprecated:: 2.6 + See deprecation warning in :func:`zoneinfo.gettz`. To get metadata, + query the attribute ``zoneinfo.ZoneInfoFile.metadata``. + """ + warnings.warn("zoneinfo.gettz_db_metadata() will be removed in future " + "versions, to use the dateutil-provided zoneinfo files, " + "ZoneInfoFile object and query the 'metadata' attribute " + "instead. See the documentation for details.", + DeprecationWarning) + + if len(_CLASS_ZONE_INSTANCE) == 0: + _CLASS_ZONE_INSTANCE.append(ZoneInfoFile(getzoneinfofile_stream())) + return _CLASS_ZONE_INSTANCE[0].metadata diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/dateutil/zoneinfo/rebuild.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/dateutil/zoneinfo/rebuild.py new file mode 100644 index 0000000000000000000000000000000000000000..78f0d1a0ce57f188c28156555a6cbe85e3e08d19 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/dateutil/zoneinfo/rebuild.py @@ -0,0 +1,53 @@ +import logging +import os +import tempfile +import shutil +import json +from subprocess import check_call +from tarfile import TarFile + +from dateutil.zoneinfo import METADATA_FN, ZONEFILENAME + + +def rebuild(filename, tag=None, format="gz", zonegroups=[], metadata=None): + """Rebuild the internal timezone info in dateutil/zoneinfo/zoneinfo*tar* + + filename is the timezone tarball from ``ftp.iana.org/tz``. + + """ + tmpdir = tempfile.mkdtemp() + zonedir = os.path.join(tmpdir, "zoneinfo") + moduledir = os.path.dirname(__file__) + try: + with TarFile.open(filename) as tf: + for name in zonegroups: + tf.extract(name, tmpdir) + filepaths = [os.path.join(tmpdir, n) for n in zonegroups] + try: + check_call(["zic", "-d", zonedir] + filepaths) + except OSError as e: + _print_on_nosuchfile(e) + raise + # write metadata file + with open(os.path.join(zonedir, METADATA_FN), 'w') as f: + json.dump(metadata, f, indent=4, sort_keys=True) + target = os.path.join(moduledir, ZONEFILENAME) + with TarFile.open(target, "w:%s" % format) as tf: + for entry in os.listdir(zonedir): + entrypath = os.path.join(zonedir, entry) + tf.add(entrypath, entry) + finally: + shutil.rmtree(tmpdir) + + +def _print_on_nosuchfile(e): + """Print helpful troubleshooting message + + e is an exception raised by subprocess.check_call() + + """ + if e.errno == 2: + logging.error( + "Could not find zic. 
Perhaps you need to install " + "libc-bin or some other package that provides it, " + "or it's not in your PATH?") diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distlib-0.3.0.dist-info/AUTHORS.txt b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distlib-0.3.0.dist-info/AUTHORS.txt new file mode 100644 index 0000000000000000000000000000000000000000..72c87d7d38ae7bf859717c333a5ee8230f6ce624 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distlib-0.3.0.dist-info/AUTHORS.txt @@ -0,0 +1,562 @@ +A_Rog +Aakanksha Agrawal <11389424+rasponic@users.noreply.github.com> +Abhinav Sagar <40603139+abhinavsagar@users.noreply.github.com> +ABHYUDAY PRATAP SINGH +abs51295 +AceGentile +Adam Chainz +Adam Tse +Adam Tse +Adam Wentz +admin +Adrien Morison +ahayrapetyan +Ahilya +AinsworthK +Akash Srivastava +Alan Yee +Albert Tugushev +Albert-Guan +albertg +Aleks Bunin +Alethea Flowers +Alex Gaynor +Alex Grönholm +Alex Loosley +Alex Morega +Alex Stachowiak +Alexander Shtyrov +Alexandre Conrad +Alexey Popravka +Alexey Popravka +Alli +Ami Fischman +Ananya Maiti +Anatoly Techtonik +Anders Kaseorg +Andreas Lutro +Andrei Geacar +Andrew Gaul +Andrey Bulgakov +Andrés Delfino <34587441+andresdelfino@users.noreply.github.com> +Andrés Delfino +Andy Freeland +Andy Freeland +Andy Kluger +Ani Hayrapetyan +Aniruddha Basak +Anish Tambe +Anrs Hu +Anthony Sottile +Antoine Musso +Anton Ovchinnikov +Anton Patrushev +Antonio Alvarado Hernandez +Antony Lee +Antti Kaihola +Anubhav Patel +Anuj Godase +AQNOUCH Mohammed +AraHaan +Arindam Choudhury +Armin Ronacher +Artem +Ashley Manton +Ashwin Ramaswami +atse +Atsushi Odagiri +Avner Cohen +Baptiste Mispelon +Barney Gale +barneygale +Bartek Ogryczak +Bastian Venthur +Ben Darnell +Ben Hoyt +Ben Rosser +Bence Nagy +Benjamin Peterson +Benjamin VanEvery +Benoit Pierre +Berker Peksag +Bernardo B. Marques +Bernhard M. Wiedemann +Bertil Hatt +Bogdan Opanchuk +BorisZZZ +Brad Erickson +Bradley Ayers +Brandon L. 
Reiss +Brandt Bucher +Brett Randall +Brian Cristante <33549821+brcrista@users.noreply.github.com> +Brian Cristante +Brian Rosner +BrownTruck +Bruno Oliveira +Bruno Renié +Bstrdsmkr +Buck Golemon +burrows +Bussonnier Matthias +c22 +Caleb Martinez +Calvin Smith +Carl Meyer +Carlos Liam +Carol Willing +Carter Thayer +Cass +Chandrasekhar Atina +Chih-Hsuan Yen +Chih-Hsuan Yen +Chris Brinker +Chris Hunt +Chris Jerdonek +Chris McDonough +Chris Wolfe +Christian Heimes +Christian Oudard +Christopher Hunt +Christopher Snyder +Clark Boylan +Clay McClure +Cody +Cody Soyland +Colin Watson +Connor Osborn +Cooper Lees +Cooper Ry Lees +Cory Benfield +Cory Wright +Craig Kerstiens +Cristian Sorinel +Curtis Doty +cytolentino +Damian Quiroga +Dan Black +Dan Savilonis +Dan Sully +daniel +Daniel Collins +Daniel Hahler +Daniel Holth +Daniel Jost +Daniel Shaulov +Daniele Esposti +Daniele Procida +Danny Hermes +Dav Clark +Dave Abrahams +Dave Jones +David Aguilar +David Black +David Bordeynik +David Bordeynik +David Caro +David Evans +David Linke +David Pursehouse +David Tucker +David Wales +Davidovich +derwolfe +Desetude +Diego Caraballo +DiegoCaraballo +Dmitry Gladkov +Domen Kožar +Donald Stufft +Dongweiming +Douglas Thor +DrFeathers +Dustin Ingram +Dwayne Bailey +Ed Morley <501702+edmorley@users.noreply.github.com> +Ed Morley +Eitan Adler +ekristina +elainechan +Eli Schwartz +Eli Schwartz +Emil Burzo +Emil Styrke +Endoh Takanao +enoch +Erdinc Mutlu +Eric Gillingham +Eric Hanchrow +Eric Hopper +Erik M. Bray +Erik Rose +Ernest W Durbin III +Ernest W. Durbin III +Erwin Janssen +Eugene Vereshchagin +everdimension +Felix Yan +fiber-space +Filip Kokosiński +Florian Briand +Florian Rathgeber +Francesco +Francesco Montesano +Frost Ming +Gabriel Curio +Gabriel de Perthuis +Garry Polley +gdanielson +Geoffrey Lehée +Geoffrey Sneddon +George Song +Georgi Valkov +Giftlin Rajaiah +gizmoguy1 +gkdoc <40815324+gkdoc@users.noreply.github.com> +Gopinath M <31352222+mgopi1990@users.noreply.github.com> +GOTO Hayato <3532528+gh640@users.noreply.github.com> +gpiks +Guilherme Espada +Guy Rozendorn +gzpan123 +Hanjun Kim +Hari Charan +Harsh Vardhan +Herbert Pfennig +Hsiaoming Yang +Hugo +Hugo Lopes Tavares +Hugo van Kemenade +hugovk +Hynek Schlawack +Ian Bicking +Ian Cordasco +Ian Lee +Ian Stapleton Cordasco +Ian Wienand +Ian Wienand +Igor Kuzmitshov +Igor Sobreira +Ilya Baryshev +INADA Naoki +Ionel Cristian Mărieș +Ionel Maries Cristian +Ivan Pozdeev +Jacob Kim +jakirkham +Jakub Stasiak +Jakub Vysoky +Jakub Wilk +James Cleveland +James Cleveland +James Firth +James Polley +Jan Pokorný +Jannis Leidel +jarondl +Jason R. 
Coombs +Jay Graves +Jean-Christophe Fillion-Robin +Jeff Barber +Jeff Dairiki +Jelmer Vernooij +jenix21 +Jeremy Stanley +Jeremy Zafran +Jiashuo Li +Jim Garrison +Jivan Amara +John Paton +John-Scott Atlakson +johnthagen +johnthagen +Jon Banafato +Jon Dufresne +Jon Parise +Jonas Nockert +Jonathan Herbert +Joost Molenaar +Jorge Niedbalski +Joseph Long +Josh Bronson +Josh Hansen +Josh Schneier +Juanjo Bazán +Julian Berman +Julian Gethmann +Julien Demoor +jwg4 +Jyrki Pulliainen +Kai Chen +Kamal Bin Mustafa +kaustav haldar +keanemind +Keith Maxwell +Kelsey Hightower +Kenneth Belitzky +Kenneth Reitz +Kenneth Reitz +Kevin Burke +Kevin Carter +Kevin Frommelt +Kevin R Patterson +Kexuan Sun +Kit Randel +kpinc +Krishna Oza +Kumar McMillan +Kyle Persohn +lakshmanaram +Laszlo Kiss-Kollar +Laurent Bristiel +Laurie Opperman +Leon Sasson +Lev Givon +Lincoln de Sousa +Lipis +Loren Carvalho +Lucas Cimon +Ludovic Gasc +Luke Macken +Luo Jiebin +luojiebin +luz.paz +László Kiss Kollár +László Kiss Kollár +Marc Abramowitz +Marc Tamlyn +Marcus Smith +Mariatta +Mark Kohler +Mark Williams +Mark Williams +Markus Hametner +Masaki +Masklinn +Matej Stuchlik +Mathew Jennings +Mathieu Bridon +Matt Good +Matt Maker +Matt Robenolt +matthew +Matthew Einhorn +Matthew Gilliard +Matthew Iversen +Matthew Trumbell +Matthew Willson +Matthias Bussonnier +mattip +Maxim Kurnikov +Maxime Rouyrre +mayeut +mbaluna <44498973+mbaluna@users.noreply.github.com> +mdebi <17590103+mdebi@users.noreply.github.com> +memoselyk +Michael +Michael Aquilina +Michael E. Karpeles +Michael Klich +Michael Williamson +michaelpacer +Mickaël Schoentgen +Miguel Araujo Perez +Mihir Singh +Mike +Mike Hendricks +Min RK +MinRK +Miro Hrončok +Monica Baluna +montefra +Monty Taylor +Nate Coraor +Nathaniel J. Smith +Nehal J Wani +Neil Botelho +Nick Coghlan +Nick Stenning +Nick Timkovich +Nicolas Bock +Nikhil Benesch +Nitesh Sharma +Nowell Strite +NtaleGrey +nvdv +Ofekmeister +ofrinevo +Oliver Jeeves +Oliver Tonnhofer +Olivier Girardot +Olivier Grisel +Ollie Rutherfurd +OMOTO Kenji +Omry Yadan +Oren Held +Oscar Benjamin +Oz N Tiram +Pachwenko <32424503+Pachwenko@users.noreply.github.com> +Patrick Dubroy +Patrick Jenkins +Patrick Lawson +patricktokeeffe +Patrik Kopkan +Paul Kehrer +Paul Moore +Paul Nasrat +Paul Oswald +Paul van der Linden +Paulus Schoutsen +Pavithra Eswaramoorthy <33131404+QueenCoffee@users.noreply.github.com> +Pawel Jasinski +Pekka Klärck +Peter Lisák +Peter Waller +petr-tik +Phaneendra Chiruvella +Phil Freo +Phil Pennock +Phil Whelan +Philip Jägenstedt +Philip Molloy +Philippe Ombredanne +Pi Delport +Pierre-Yves Rofes +pip +Prabakaran Kumaresshan +Prabhjyotsing Surjit Singh Sodhi +Prabhu Marappan +Pradyun Gedam +Pratik Mallya +Preet Thakkar +Preston Holmes +Przemek Wrzos +Pulkit Goyal <7895pulkit@gmail.com> +Qiangning Hong +Quentin Pradet +R. David Murray +Rafael Caricio +Ralf Schmitt +Razzi Abuissa +rdb +Remi Rampin +Remi Rampin +Rene Dudfield +Riccardo Magliocchetti +Richard Jones +RobberPhex +Robert Collins +Robert McGibbon +Robert T. 
McGibbon +robin elisha robinson +Roey Berman +Rohan Jain +Rohan Jain +Rohan Jain +Roman Bogorodskiy +Romuald Brunet +Ronny Pfannschmidt +Rory McCann +Ross Brattain +Roy Wellington Ⅳ +Roy Wellington Ⅳ +Ryan Wooden +ryneeverett +Sachi King +Salvatore Rinchiera +Savio Jomton +schlamar +Scott Kitterman +Sean +seanj +Sebastian Jordan +Sebastian Schaetz +Segev Finer +SeongSoo Cho +Sergey Vasilyev +Seth Woodworth +Shlomi Fish +Shovan Maity +Simeon Visser +Simon Cross +Simon Pichugin +sinoroc +Sorin Sbarnea +Stavros Korokithakis +Stefan Scherfke +Stephan Erb +stepshal +Steve (Gadget) Barnes +Steve Barnes +Steve Dower +Steve Kowalik +Steven Myint +stonebig +Stéphane Bidoul (ACSONE) +Stéphane Bidoul +Stéphane Klein +Sumana Harihareswara +Sviatoslav Sydorenko +Sviatoslav Sydorenko +Swat009 +Takayuki SHIMIZUKAWA +tbeswick +Thijs Triemstra +Thomas Fenzl +Thomas Grainger +Thomas Guettler +Thomas Johansson +Thomas Kluyver +Thomas Smith +Tim D. Smith +Tim Gates +Tim Harder +Tim Heap +tim smith +tinruufu +Tom Forbes +Tom Freudenheim +Tom V +Tomas Orsava +Tomer Chachamu +Tony Beswick +Tony Zhaocheng Tan +TonyBeswick +toonarmycaptain +Toshio Kuratomi +Travis Swicegood +Tzu-ping Chung +Valentin Haenel +Victor Stinner +victorvpaulo +Viktor Szépe +Ville Skyttä +Vinay Sajip +Vincent Philippon +Vinicyus Macedo <7549205+vinicyusmacedo@users.noreply.github.com> +Vitaly Babiy +Vladimir Rutsky +W. Trevor King +Wil Tan +Wilfred Hughes +William ML Leslie +William T Olson +Wilson Mo +wim glenn +Wolfgang Maier +Xavier Fernandez +Xavier Fernandez +xoviat +xtreak +YAMAMOTO Takashi +Yen Chi Hsuan +Yeray Diaz Diaz +Yoval P +Yu Jian +Yuan Jing Vincent Yan +Zearin +Zearin +Zhiping Deng +Zvezdan Petkovic +Łukasz Langa +Семён Марьясин diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distlib-0.3.0.dist-info/INSTALLER b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distlib-0.3.0.dist-info/INSTALLER new file mode 100644 index 0000000000000000000000000000000000000000..a1b589e38a32041e49332e5e81c2d363dc418d68 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distlib-0.3.0.dist-info/INSTALLER @@ -0,0 +1 @@ +pip diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distlib-0.3.0.dist-info/LICENSE.txt b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distlib-0.3.0.dist-info/LICENSE.txt new file mode 100644 index 0000000000000000000000000000000000000000..737fec5c5352af3d9a6a47a0670da4bdb52c5725 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distlib-0.3.0.dist-info/LICENSE.txt @@ -0,0 +1,20 @@ +Copyright (c) 2008-2019 The pip developers (see AUTHORS.txt file) + +Permission is hereby granted, free of charge, to any person obtaining +a copy of this software and associated documentation files (the +"Software"), to deal in the Software without restriction, including +without limitation the rights to use, copy, modify, merge, publish, +distribute, sublicense, and/or sell copies of the Software, and to +permit persons to whom the Software is furnished to do so, subject to +the following conditions: + +The above copyright notice and this permission notice shall be +included in all copies or substantial portions of the Software. 
+ +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE +LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION +OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION +WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distlib-0.3.0.dist-info/METADATA b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distlib-0.3.0.dist-info/METADATA new file mode 100644 index 0000000000000000000000000000000000000000..7905cc971c8dded864bb6ade93ba3d02e9d322e8 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distlib-0.3.0.dist-info/METADATA @@ -0,0 +1,31 @@ +Metadata-Version: 2.1 +Name: distlib +Version: 0.3.0 +Summary: Distribution utilities +Home-page: https://bitbucket.org/pypa/distlib +Author: Vinay Sajip +Author-email: vinay_sajip@red-dove.com +License: Python license +Download-URL: https://bitbucket.org/pypa/distlib/downloads/distlib-0.3.0.zip +Platform: any +Classifier: Development Status :: 5 - Production/Stable +Classifier: Environment :: Console +Classifier: Intended Audience :: Developers +Classifier: License :: OSI Approved :: Python Software Foundation License +Classifier: Operating System :: OS Independent +Classifier: Programming Language :: Python +Classifier: Programming Language :: Python :: 2 +Classifier: Programming Language :: Python :: 3 +Classifier: Programming Language :: Python :: 2.7 +Classifier: Programming Language :: Python :: 3.2 +Classifier: Programming Language :: Python :: 3.3 +Classifier: Programming Language :: Python :: 3.4 +Classifier: Programming Language :: Python :: 3.5 +Classifier: Programming Language :: Python :: 3.6 +Classifier: Programming Language :: Python :: 3.7 +Classifier: Programming Language :: Python :: 3.8 +Classifier: Topic :: Software Development + +Low-level components of distutils2/packaging, augmented with higher-level APIs for making packaging easier. 
+ + diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distlib-0.3.0.dist-info/RECORD b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distlib-0.3.0.dist-info/RECORD new file mode 100644 index 0000000000000000000000000000000000000000..39025d47cae1e96de2287eec47a0fb0bd84258de --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distlib-0.3.0.dist-info/RECORD @@ -0,0 +1,48 @@ +distlib/__init__.py,sha256=gzl1hjUXmDGrqRyU7ZLjBwJGAcMimQbrZ22XPVaKaRE,581 +distlib/compat.py,sha256=xdNZmqFN5HwF30HjRn5M415pcC2kgXRBXn767xS8v-M,41404 +distlib/database.py,sha256=fhNzEDtb4HXrpxKyQvhVzDXcOiJlzrOM--UYnvCeZrI,51045 +distlib/index.py,sha256=SXKzpQCERctxYDMp_OLee2f0J0e19ZhGdCIoMlUfUQM,21066 +distlib/locators.py,sha256=c9E4cDEacJ_uKbuE5BqAVocoWp6rsuBGTkiNDQq3zV4,52100 +distlib/manifest.py,sha256=nQEhYmgoreaBZzyFzwYsXxJARu3fo4EkunU163U16iE,14811 +distlib/markers.py,sha256=6Ac3cCfFBERexiESWIOXmg-apIP8l2esafNSX3KMy-8,4387 +distlib/metadata.py,sha256=OhbCKmf5lswE8unWBopI1hj7tRpHp4ZbFvU4d6aAEMM,40234 +distlib/resources.py,sha256=2FGv0ZHF14KXjLIlL0R991lyQQGcewOS4mJ-5n-JVnc,10766 +distlib/scripts.py,sha256=OAkEwxRvIzX-VSfhEttQEKJFVLA47gbW0OgQXJRs7OQ,16998 +distlib/util.py,sha256=f2jZCPrcLCt6LcnC0gUy-Fur60tXD8reA7k4rDpHMDw,59845 +distlib/version.py,sha256=_n7F6juvQGAcn769E_SHa7fOcf5ERlEVymJ_EjPRwGw,23391 +distlib/wheel.py,sha256=bRtR5bNR_u_DwkwktN1bgZuwLVOJT1p_vNIUPyN8kJc,40452 +distlib/_backport/__init__.py,sha256=bqS_dTOH6uW9iGgd0uzfpPjo6vZ4xpPZ7kyfZJ2vNaw,274 +distlib/_backport/misc.py,sha256=KWecINdbFNOxSOP1fGF680CJnaC6S4fBRgEtaYTw0ig,971 +distlib/_backport/shutil.py,sha256=VW1t3uYqUjWZH7jV-6QiimLhnldoV5uIpH4EuiT1jfw,25647 +distlib/_backport/sysconfig.cfg,sha256=swZKxq9RY5e9r3PXCrlvQPMsvOdiWZBTHLEbqS8LJLU,2617 +distlib/_backport/sysconfig.py,sha256=BQHFlb6pubCl_dvT1NjtzIthylofjKisox239stDg0U,26854 +distlib/_backport/tarfile.py,sha256=Ihp7rXRcjbIKw8COm9wSePV9ARGXbSF9gGXAMn2Q-KU,92628 +distlib-0.3.0.dist-info/AUTHORS.txt,sha256=RtqU9KfonVGhI48DAA4-yTOBUhBtQTjFhaDzHoyh7uU,21518 +distlib-0.3.0.dist-info/LICENSE.txt,sha256=W6Ifuwlk-TatfRU2LR7W1JMcyMj5_y1NkRkOEJvnRDE,1090 +distlib-0.3.0.dist-info/METADATA,sha256=K-1KbKXfYWPNkxh639g8H_epsxZqI8SOWXsMxZfD3j4,1251 +distlib-0.3.0.dist-info/WHEEL,sha256=kGT74LWyRUZrL4VgLh6_g12IeVl_9u9ZVhadrgXZUEY,110 +distlib-0.3.0.dist-info/top_level.txt,sha256=9BERqitu_vzyeyILOcGzX9YyA2AB_xlC4-81V6xoizk,8 +distlib-0.3.0.dist-info/RECORD,, +distlib/index.cpython-38.pyc,, +distlib/__pycache__,, +distlib/resources.cpython-38.pyc,, +distlib/_backport/sysconfig.cpython-38.pyc,, +distlib-0.3.0.dist-info/__pycache__,, +distlib/_backport/tarfile.cpython-38.pyc,, +distlib/util.cpython-38.pyc,, +distlib/_backport/__pycache__,, +distlib/scripts.cpython-38.pyc,, +distlib/_backport/shutil.cpython-38.pyc,, +distlib/locators.cpython-38.pyc,, +distlib-0.3.0.virtualenv,, +distlib/manifest.cpython-38.pyc,, +distlib-0.3.0.dist-info/INSTALLER,, +distlib/compat.cpython-38.pyc,, +distlib/markers.cpython-38.pyc,, +distlib/__init__.cpython-38.pyc,, +distlib/metadata.cpython-38.pyc,, +distlib/wheel.cpython-38.pyc,, +distlib/_backport/misc.cpython-38.pyc,, +distlib/database.cpython-38.pyc,, +distlib/version.cpython-38.pyc,, +distlib/_backport/__init__.cpython-38.pyc,, \ No newline at end of file diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distlib-0.3.0.dist-info/WHEEL 
b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distlib-0.3.0.dist-info/WHEEL new file mode 100644 index 0000000000000000000000000000000000000000..ef99c6cf3283b50a273ac4c6d009a0aa85597070 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distlib-0.3.0.dist-info/WHEEL @@ -0,0 +1,6 @@ +Wheel-Version: 1.0 +Generator: bdist_wheel (0.34.2) +Root-Is-Purelib: true +Tag: py2-none-any +Tag: py3-none-any + diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distlib-0.3.0.dist-info/top_level.txt b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distlib-0.3.0.dist-info/top_level.txt new file mode 100644 index 0000000000000000000000000000000000000000..f68bb07272ddc517a1ca4c3d4e50ac67bf78d3e2 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distlib-0.3.0.dist-info/top_level.txt @@ -0,0 +1 @@ +distlib diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distlib-0.3.0.virtualenv b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distlib-0.3.0.virtualenv new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distlib/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distlib/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..e19aebdc4cc8700bb740176baca1cda6dce1c99c --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distlib/__init__.py @@ -0,0 +1,23 @@ +# -*- coding: utf-8 -*- +# +# Copyright (C) 2012-2019 Vinay Sajip. +# Licensed to the Python Software Foundation under a contributor agreement. +# See LICENSE.txt and CONTRIBUTORS.txt. +# +import logging + +__version__ = '0.3.0' + +class DistlibException(Exception): + pass + +try: + from logging import NullHandler +except ImportError: # pragma: no cover + class NullHandler(logging.Handler): + def handle(self, record): pass + def emit(self, record): pass + def createLock(self): self.lock = None + +logger = logging.getLogger(__name__) +logger.addHandler(NullHandler()) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distlib/_backport/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distlib/_backport/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..f7dbf4c9aa8314816f9bcbe5357146369ee71391 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distlib/_backport/__init__.py @@ -0,0 +1,6 @@ +"""Modules copied from Python 3 standard libraries, for internal use only. + +Individual classes and functions are found in d2._backport.misc. Intended +usage is to always import things missing from 3.1 from that module: the +built-in/stdlib objects will be used if found. 
+""" diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distlib/_backport/misc.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distlib/_backport/misc.py new file mode 100644 index 0000000000000000000000000000000000000000..cfb318d34f761cde906d8b02dde3f9329688b8c6 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distlib/_backport/misc.py @@ -0,0 +1,41 @@ +# -*- coding: utf-8 -*- +# +# Copyright (C) 2012 The Python Software Foundation. +# See LICENSE.txt and CONTRIBUTORS.txt. +# +"""Backports for individual classes and functions.""" + +import os +import sys + +__all__ = ['cache_from_source', 'callable', 'fsencode'] + + +try: + from imp import cache_from_source +except ImportError: + def cache_from_source(py_file, debug=__debug__): + ext = debug and 'c' or 'o' + return py_file + ext + + +try: + callable = callable +except NameError: + from collections import Callable + + def callable(obj): + return isinstance(obj, Callable) + + +try: + fsencode = os.fsencode +except AttributeError: + def fsencode(filename): + if isinstance(filename, bytes): + return filename + elif isinstance(filename, str): + return filename.encode(sys.getfilesystemencoding()) + else: + raise TypeError("expect bytes or str, not %s" % + type(filename).__name__) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distlib/_backport/shutil.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distlib/_backport/shutil.py new file mode 100644 index 0000000000000000000000000000000000000000..159e49ee8c2aa698aea9a191db91a14716187324 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distlib/_backport/shutil.py @@ -0,0 +1,761 @@ +# -*- coding: utf-8 -*- +# +# Copyright (C) 2012 The Python Software Foundation. +# See LICENSE.txt and CONTRIBUTORS.txt. +# +"""Utility functions for copying and archiving files and directory trees. + +XXX The functions here don't copy the resource fork or other metadata on Mac. + +""" + +import os +import sys +import stat +from os.path import abspath +import fnmatch +import collections +import errno +from . import tarfile + +try: + import bz2 + _BZ2_SUPPORTED = True +except ImportError: + _BZ2_SUPPORTED = False + +try: + from pwd import getpwnam +except ImportError: + getpwnam = None + +try: + from grp import getgrnam +except ImportError: + getgrnam = None + +__all__ = ["copyfileobj", "copyfile", "copymode", "copystat", "copy", "copy2", + "copytree", "move", "rmtree", "Error", "SpecialFileError", + "ExecError", "make_archive", "get_archive_formats", + "register_archive_format", "unregister_archive_format", + "get_unpack_formats", "register_unpack_format", + "unregister_unpack_format", "unpack_archive", "ignore_patterns"] + +class Error(EnvironmentError): + pass + +class SpecialFileError(EnvironmentError): + """Raised when trying to do a kind of operation (e.g. copying) which is + not supported on a special file (e.g. 
a named pipe)""" + +class ExecError(EnvironmentError): + """Raised when a command could not be executed""" + +class ReadError(EnvironmentError): + """Raised when an archive cannot be read""" + +class RegistryError(Exception): + """Raised when a registry operation with the archiving + and unpacking registries fails""" + + +try: + WindowsError +except NameError: + WindowsError = None + +def copyfileobj(fsrc, fdst, length=16*1024): + """copy data from file-like object fsrc to file-like object fdst""" + while 1: + buf = fsrc.read(length) + if not buf: + break + fdst.write(buf) + +def _samefile(src, dst): + # Macintosh, Unix. + if hasattr(os.path, 'samefile'): + try: + return os.path.samefile(src, dst) + except OSError: + return False + + # All other platforms: check for same pathname. + return (os.path.normcase(os.path.abspath(src)) == + os.path.normcase(os.path.abspath(dst))) + +def copyfile(src, dst): + """Copy data from src to dst""" + if _samefile(src, dst): + raise Error("`%s` and `%s` are the same file" % (src, dst)) + + for fn in [src, dst]: + try: + st = os.stat(fn) + except OSError: + # File most likely does not exist + pass + else: + # XXX What about other special files? (sockets, devices...) + if stat.S_ISFIFO(st.st_mode): + raise SpecialFileError("`%s` is a named pipe" % fn) + + with open(src, 'rb') as fsrc: + with open(dst, 'wb') as fdst: + copyfileobj(fsrc, fdst) + +def copymode(src, dst): + """Copy mode bits from src to dst""" + if hasattr(os, 'chmod'): + st = os.stat(src) + mode = stat.S_IMODE(st.st_mode) + os.chmod(dst, mode) + +def copystat(src, dst): + """Copy all stat info (mode bits, atime, mtime, flags) from src to dst""" + st = os.stat(src) + mode = stat.S_IMODE(st.st_mode) + if hasattr(os, 'utime'): + os.utime(dst, (st.st_atime, st.st_mtime)) + if hasattr(os, 'chmod'): + os.chmod(dst, mode) + if hasattr(os, 'chflags') and hasattr(st, 'st_flags'): + try: + os.chflags(dst, st.st_flags) + except OSError as why: + if (not hasattr(errno, 'EOPNOTSUPP') or + why.errno != errno.EOPNOTSUPP): + raise + +def copy(src, dst): + """Copy data and mode bits ("cp src dst"). + + The destination may be a directory. + + """ + if os.path.isdir(dst): + dst = os.path.join(dst, os.path.basename(src)) + copyfile(src, dst) + copymode(src, dst) + +def copy2(src, dst): + """Copy data and all stat info ("cp -p src dst"). + + The destination may be a directory. + + """ + if os.path.isdir(dst): + dst = os.path.join(dst, os.path.basename(src)) + copyfile(src, dst) + copystat(src, dst) + +def ignore_patterns(*patterns): + """Function that can be used as copytree() ignore parameter. + + Patterns is a sequence of glob-style patterns + that are used to exclude files""" + def _ignore_patterns(path, names): + ignored_names = [] + for pattern in patterns: + ignored_names.extend(fnmatch.filter(names, pattern)) + return set(ignored_names) + return _ignore_patterns + +def copytree(src, dst, symlinks=False, ignore=None, copy_function=copy2, + ignore_dangling_symlinks=False): + """Recursively copy a directory tree. + + The destination directory must not already exist. + If exception(s) occur, an Error is raised with a list of reasons. + + If the optional symlinks flag is true, symbolic links in the + source tree result in symbolic links in the destination tree; if + it is false, the contents of the files pointed to by symbolic + links are copied. If the file pointed by the symlink doesn't + exist, an exception will be added in the list of errors raised in + an Error exception at the end of the copy process. 
+ + You can set the optional ignore_dangling_symlinks flag to true if you + want to silence this exception. Notice that this has no effect on + platforms that don't support os.symlink. + + The optional ignore argument is a callable. If given, it + is called with the `src` parameter, which is the directory + being visited by copytree(), and `names` which is the list of + `src` contents, as returned by os.listdir(): + + callable(src, names) -> ignored_names + + Since copytree() is called recursively, the callable will be + called once for each directory that is copied. It returns a + list of names relative to the `src` directory that should + not be copied. + + The optional copy_function argument is a callable that will be used + to copy each file. It will be called with the source path and the + destination path as arguments. By default, copy2() is used, but any + function that supports the same signature (like copy()) can be used. + + """ + names = os.listdir(src) + if ignore is not None: + ignored_names = ignore(src, names) + else: + ignored_names = set() + + os.makedirs(dst) + errors = [] + for name in names: + if name in ignored_names: + continue + srcname = os.path.join(src, name) + dstname = os.path.join(dst, name) + try: + if os.path.islink(srcname): + linkto = os.readlink(srcname) + if symlinks: + os.symlink(linkto, dstname) + else: + # ignore dangling symlink if the flag is on + if not os.path.exists(linkto) and ignore_dangling_symlinks: + continue + # otherwise let the copy occurs. copy2 will raise an error + copy_function(srcname, dstname) + elif os.path.isdir(srcname): + copytree(srcname, dstname, symlinks, ignore, copy_function) + else: + # Will raise a SpecialFileError for unsupported file types + copy_function(srcname, dstname) + # catch the Error from the recursive copytree so that we can + # continue with other files + except Error as err: + errors.extend(err.args[0]) + except EnvironmentError as why: + errors.append((srcname, dstname, str(why))) + try: + copystat(src, dst) + except OSError as why: + if WindowsError is not None and isinstance(why, WindowsError): + # Copying file access times may fail on Windows + pass + else: + errors.extend((src, dst, str(why))) + if errors: + raise Error(errors) + +def rmtree(path, ignore_errors=False, onerror=None): + """Recursively delete a directory tree. + + If ignore_errors is set, errors are ignored; otherwise, if onerror + is set, it is called to handle the error with arguments (func, + path, exc_info) where func is os.listdir, os.remove, or os.rmdir; + path is the argument to that function that caused it to fail; and + exc_info is a tuple returned by sys.exc_info(). If ignore_errors + is false and onerror is None, an exception is raised. 
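# [Editorial sketch, not part of the vendored files] The
# copytree()/ignore_patterns() combination documented above, with
# hypothetical paths; note the destination tree must not already exist.
from distlib._backport.shutil import copytree, ignore_patterns

copytree('src_pkg', 'dst_pkg',
         ignore=ignore_patterns('*.pyc', '.git*'))  # skip matching names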
+ + """ + if ignore_errors: + def onerror(*args): + pass + elif onerror is None: + def onerror(*args): + raise + try: + if os.path.islink(path): + # symlinks to directories are forbidden, see bug #1669 + raise OSError("Cannot call rmtree on a symbolic link") + except OSError: + onerror(os.path.islink, path, sys.exc_info()) + # can't continue even if onerror hook returns + return + names = [] + try: + names = os.listdir(path) + except os.error: + onerror(os.listdir, path, sys.exc_info()) + for name in names: + fullname = os.path.join(path, name) + try: + mode = os.lstat(fullname).st_mode + except os.error: + mode = 0 + if stat.S_ISDIR(mode): + rmtree(fullname, ignore_errors, onerror) + else: + try: + os.remove(fullname) + except os.error: + onerror(os.remove, fullname, sys.exc_info()) + try: + os.rmdir(path) + except os.error: + onerror(os.rmdir, path, sys.exc_info()) + + +def _basename(path): + # A basename() variant which first strips the trailing slash, if present. + # Thus we always get the last component of the path, even for directories. + return os.path.basename(path.rstrip(os.path.sep)) + +def move(src, dst): + """Recursively move a file or directory to another location. This is + similar to the Unix "mv" command. + + If the destination is a directory or a symlink to a directory, the source + is moved inside the directory. The destination path must not already + exist. + + If the destination already exists but is not a directory, it may be + overwritten depending on os.rename() semantics. + + If the destination is on our current filesystem, then rename() is used. + Otherwise, src is copied to the destination and then removed. + A lot more could be done here... A look at a mv.c shows a lot of + the issues this implementation glosses over. + + """ + real_dst = dst + if os.path.isdir(dst): + if _samefile(src, dst): + # We might be on a case insensitive filesystem, + # perform the rename anyway. + os.rename(src, dst) + return + + real_dst = os.path.join(dst, _basename(src)) + if os.path.exists(real_dst): + raise Error("Destination path '%s' already exists" % real_dst) + try: + os.rename(src, real_dst) + except OSError: + if os.path.isdir(src): + if _destinsrc(src, dst): + raise Error("Cannot move a directory '%s' into itself '%s'." % (src, dst)) + copytree(src, real_dst, symlinks=True) + rmtree(src) + else: + copy2(src, real_dst) + os.unlink(src) + +def _destinsrc(src, dst): + src = abspath(src) + dst = abspath(dst) + if not src.endswith(os.path.sep): + src += os.path.sep + if not dst.endswith(os.path.sep): + dst += os.path.sep + return dst.startswith(src) + +def _get_gid(name): + """Returns a gid, given a group name.""" + if getgrnam is None or name is None: + return None + try: + result = getgrnam(name) + except KeyError: + result = None + if result is not None: + return result[2] + return None + +def _get_uid(name): + """Returns an uid, given a user name.""" + if getpwnam is None or name is None: + return None + try: + result = getpwnam(name) + except KeyError: + result = None + if result is not None: + return result[2] + return None + +def _make_tarball(base_name, base_dir, compress="gzip", verbose=0, dry_run=0, + owner=None, group=None, logger=None): + """Create a (possibly compressed) tar file from all the files under + 'base_dir'. + + 'compress' must be "gzip" (the default), "bzip2", or None. + + 'owner' and 'group' can be used to define an owner and a group for the + archive that is being built. If not provided, the current owner and group + will be used. 
+
+    The output tar file will be named 'base_name' + ".tar", possibly plus
+    the appropriate compression extension (".gz", or ".bz2").
+
+    Returns the output filename.
+    """
+    tar_compression = {'gzip': 'gz', None: ''}
+    compress_ext = {'gzip': '.gz'}
+
+    if _BZ2_SUPPORTED:
+        tar_compression['bzip2'] = 'bz2'
+        compress_ext['bzip2'] = '.bz2'
+
+    # flags for compression program, each element of list will be an argument
+    if compress is not None and compress not in compress_ext:
+        raise ValueError("bad value for 'compress', or compression format not "
+                         "supported : {0}".format(compress))
+
+    archive_name = base_name + '.tar' + compress_ext.get(compress, '')
+    archive_dir = os.path.dirname(archive_name)
+
+    if not os.path.exists(archive_dir):
+        if logger is not None:
+            logger.info("creating %s", archive_dir)
+        if not dry_run:
+            os.makedirs(archive_dir)
+
+    # creating the tarball
+    if logger is not None:
+        logger.info('Creating tar archive')
+
+    uid = _get_uid(owner)
+    gid = _get_gid(group)
+
+    def _set_uid_gid(tarinfo):
+        if gid is not None:
+            tarinfo.gid = gid
+            tarinfo.gname = group
+        if uid is not None:
+            tarinfo.uid = uid
+            tarinfo.uname = owner
+        return tarinfo
+
+    if not dry_run:
+        tar = tarfile.open(archive_name, 'w|%s' % tar_compression[compress])
+        try:
+            tar.add(base_dir, filter=_set_uid_gid)
+        finally:
+            tar.close()
+
+    return archive_name
+
+def _call_external_zip(base_dir, zip_filename, verbose=False, dry_run=False):
+    # XXX see if we want to keep an external call here
+    if verbose:
+        zipoptions = "-r"
+    else:
+        zipoptions = "-rq"
+    from distutils.errors import DistutilsExecError
+    from distutils.spawn import spawn
+    try:
+        spawn(["zip", zipoptions, zip_filename, base_dir], dry_run=dry_run)
+    except DistutilsExecError:
+        # XXX really should distinguish between "couldn't find
+        # external 'zip' command" and "zip failed".
+        raise ExecError("unable to create zip file '%s': "
+                        "could neither import the 'zipfile' module nor "
+                        "find a standalone zip utility" % zip_filename)
+
+def _make_zipfile(base_name, base_dir, verbose=0, dry_run=0, logger=None):
+    """Create a zip file from all the files under 'base_dir'.
+
+    The output zip file will be named 'base_name' + ".zip". Uses either the
+    "zipfile" Python module (if available) or the InfoZIP "zip" utility
+    (if installed and found on the default search path). If neither tool is
+    available, raises ExecError. Returns the name of the output zip
+    file.
+    """
+    zip_filename = base_name + ".zip"
+    archive_dir = os.path.dirname(base_name)
+
+    if not os.path.exists(archive_dir):
+        if logger is not None:
+            logger.info("creating %s", archive_dir)
+        if not dry_run:
+            os.makedirs(archive_dir)
+
+    # If zipfile module is not available, try spawning an external 'zip'
+    # command.
+ try: + import zipfile + except ImportError: + zipfile = None + + if zipfile is None: + _call_external_zip(base_dir, zip_filename, verbose, dry_run) + else: + if logger is not None: + logger.info("creating '%s' and adding '%s' to it", + zip_filename, base_dir) + + if not dry_run: + zip = zipfile.ZipFile(zip_filename, "w", + compression=zipfile.ZIP_DEFLATED) + + for dirpath, dirnames, filenames in os.walk(base_dir): + for name in filenames: + path = os.path.normpath(os.path.join(dirpath, name)) + if os.path.isfile(path): + zip.write(path, path) + if logger is not None: + logger.info("adding '%s'", path) + zip.close() + + return zip_filename + +_ARCHIVE_FORMATS = { + 'gztar': (_make_tarball, [('compress', 'gzip')], "gzip'ed tar-file"), + 'bztar': (_make_tarball, [('compress', 'bzip2')], "bzip2'ed tar-file"), + 'tar': (_make_tarball, [('compress', None)], "uncompressed tar file"), + 'zip': (_make_zipfile, [], "ZIP file"), + } + +if _BZ2_SUPPORTED: + _ARCHIVE_FORMATS['bztar'] = (_make_tarball, [('compress', 'bzip2')], + "bzip2'ed tar-file") + +def get_archive_formats(): + """Returns a list of supported formats for archiving and unarchiving. + + Each element of the returned sequence is a tuple (name, description) + """ + formats = [(name, registry[2]) for name, registry in + _ARCHIVE_FORMATS.items()] + formats.sort() + return formats + +def register_archive_format(name, function, extra_args=None, description=''): + """Registers an archive format. + + name is the name of the format. function is the callable that will be + used to create archives. If provided, extra_args is a sequence of + (name, value) tuples that will be passed as arguments to the callable. + description can be provided to describe the format, and will be returned + by the get_archive_formats() function. + """ + if extra_args is None: + extra_args = [] + if not isinstance(function, collections.Callable): + raise TypeError('The %s object is not callable' % function) + if not isinstance(extra_args, (tuple, list)): + raise TypeError('extra_args needs to be a sequence') + for element in extra_args: + if not isinstance(element, (tuple, list)) or len(element) !=2: + raise TypeError('extra_args elements are : (arg_name, value)') + + _ARCHIVE_FORMATS[name] = (function, extra_args, description) + +def unregister_archive_format(name): + del _ARCHIVE_FORMATS[name] + +def make_archive(base_name, format, root_dir=None, base_dir=None, verbose=0, + dry_run=0, owner=None, group=None, logger=None): + """Create an archive file (eg. zip or tar). + + 'base_name' is the name of the file to create, minus any format-specific + extension; 'format' is the archive format: one of "zip", "tar", "bztar" + or "gztar". + + 'root_dir' is a directory that will be the root directory of the + archive; ie. we typically chdir into 'root_dir' before creating the + archive. 'base_dir' is the directory where we start archiving from; + ie. 'base_dir' will be the common prefix of all files and + directories in the archive. 'root_dir' and 'base_dir' both default + to the current directory. Returns the name of the archive file. + + 'owner' and 'group' are used when creating a tar archive. By default, + uses the current owner and group. 
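# [Editorial sketch, not part of the vendored files] Using the
# register_archive_format() hook defined above with a hypothetical 'xztar'
# format. The registered callable receives (base_name, base_dir) plus
# keyword arguments such as dry_run/logger/owner/group, and must return
# the archive filename.
import tarfile
from distlib._backport import shutil as backport_shutil

def make_xztar(base_name, base_dir, **kwargs):
    filename = base_name + '.tar.xz'
    with tarfile.open(filename, 'w:xz') as tar:  # stdlib tarfile
        tar.add(base_dir)
    return filename

backport_shutil.register_archive_format('xztar', make_xztar,
                                        description="xz'ed tar-file")
# backport_shutil.make_archive('out', 'xztar', root_dir='build')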
+ """ + save_cwd = os.getcwd() + if root_dir is not None: + if logger is not None: + logger.debug("changing into '%s'", root_dir) + base_name = os.path.abspath(base_name) + if not dry_run: + os.chdir(root_dir) + + if base_dir is None: + base_dir = os.curdir + + kwargs = {'dry_run': dry_run, 'logger': logger} + + try: + format_info = _ARCHIVE_FORMATS[format] + except KeyError: + raise ValueError("unknown archive format '%s'" % format) + + func = format_info[0] + for arg, val in format_info[1]: + kwargs[arg] = val + + if format != 'zip': + kwargs['owner'] = owner + kwargs['group'] = group + + try: + filename = func(base_name, base_dir, **kwargs) + finally: + if root_dir is not None: + if logger is not None: + logger.debug("changing back to '%s'", save_cwd) + os.chdir(save_cwd) + + return filename + + +def get_unpack_formats(): + """Returns a list of supported formats for unpacking. + + Each element of the returned sequence is a tuple + (name, extensions, description) + """ + formats = [(name, info[0], info[3]) for name, info in + _UNPACK_FORMATS.items()] + formats.sort() + return formats + +def _check_unpack_options(extensions, function, extra_args): + """Checks what gets registered as an unpacker.""" + # first make sure no other unpacker is registered for this extension + existing_extensions = {} + for name, info in _UNPACK_FORMATS.items(): + for ext in info[0]: + existing_extensions[ext] = name + + for extension in extensions: + if extension in existing_extensions: + msg = '%s is already registered for "%s"' + raise RegistryError(msg % (extension, + existing_extensions[extension])) + + if not isinstance(function, collections.Callable): + raise TypeError('The registered function must be a callable') + + +def register_unpack_format(name, extensions, function, extra_args=None, + description=''): + """Registers an unpack format. + + `name` is the name of the format. `extensions` is a list of extensions + corresponding to the format. + + `function` is the callable that will be + used to unpack archives. The callable will receive archives to unpack. + If it's unable to handle an archive, it needs to raise a ReadError + exception. + + If provided, `extra_args` is a sequence of + (name, value) tuples that will be passed as arguments to the callable. + description can be provided to describe the format, and will be returned + by the get_unpack_formats() function. + """ + if extra_args is None: + extra_args = [] + _check_unpack_options(extensions, function, extra_args) + _UNPACK_FORMATS[name] = extensions, function, extra_args, description + +def unregister_unpack_format(name): + """Removes the pack format from the registry.""" + del _UNPACK_FORMATS[name] + +def _ensure_directory(path): + """Ensure that the parent directory of `path` exists""" + dirname = os.path.dirname(path) + if not os.path.isdir(dirname): + os.makedirs(dirname) + +def _unpack_zipfile(filename, extract_dir): + """Unpack zip `filename` to `extract_dir` + """ + try: + import zipfile + except ImportError: + raise ReadError('zlib not supported, cannot unpack this archive.') + + if not zipfile.is_zipfile(filename): + raise ReadError("%s is not a zip file" % filename) + + zip = zipfile.ZipFile(filename) + try: + for info in zip.infolist(): + name = info.filename + + # don't extract absolute paths or ones with .. in them + if name.startswith('/') or '..' 
in name: + continue + + target = os.path.join(extract_dir, *name.split('/')) + if not target: + continue + + _ensure_directory(target) + if not name.endswith('/'): + # file + data = zip.read(info.filename) + f = open(target, 'wb') + try: + f.write(data) + finally: + f.close() + del data + finally: + zip.close() + +def _unpack_tarfile(filename, extract_dir): + """Unpack tar/tar.gz/tar.bz2 `filename` to `extract_dir` + """ + try: + tarobj = tarfile.open(filename) + except tarfile.TarError: + raise ReadError( + "%s is not a compressed or uncompressed tar file" % filename) + try: + tarobj.extractall(extract_dir) + finally: + tarobj.close() + +_UNPACK_FORMATS = { + 'gztar': (['.tar.gz', '.tgz'], _unpack_tarfile, [], "gzip'ed tar-file"), + 'tar': (['.tar'], _unpack_tarfile, [], "uncompressed tar file"), + 'zip': (['.zip'], _unpack_zipfile, [], "ZIP file") + } + +if _BZ2_SUPPORTED: + _UNPACK_FORMATS['bztar'] = (['.bz2'], _unpack_tarfile, [], + "bzip2'ed tar-file") + +def _find_unpack_format(filename): + for name, info in _UNPACK_FORMATS.items(): + for extension in info[0]: + if filename.endswith(extension): + return name + return None + +def unpack_archive(filename, extract_dir=None, format=None): + """Unpack an archive. + + `filename` is the name of the archive. + + `extract_dir` is the name of the target directory, where the archive + is unpacked. If not provided, the current working directory is used. + + `format` is the archive format: one of "zip", "tar", or "gztar". Or any + other registered format. If not provided, unpack_archive will use the + filename extension and see if an unpacker was registered for that + extension. + + In case none is found, a ValueError is raised. + """ + if extract_dir is None: + extract_dir = os.getcwd() + + if format is not None: + try: + format_info = _UNPACK_FORMATS[format] + except KeyError: + raise ValueError("Unknown unpack format '{0}'".format(format)) + + func = format_info[1] + func(filename, extract_dir, **dict(format_info[2])) + else: + # we need to look at the registered unpackers supported extensions + format = _find_unpack_format(filename) + if format is None: + raise ReadError("Unknown archive format '{0}'".format(filename)) + + func = _UNPACK_FORMATS[format][1] + kwargs = dict(_UNPACK_FORMATS[format][2]) + func(filename, extract_dir, **kwargs) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distlib/_backport/sysconfig.cfg b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distlib/_backport/sysconfig.cfg new file mode 100644 index 0000000000000000000000000000000000000000..1746bd01c1ad915b728ab58b4668c82b2bd578e1 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distlib/_backport/sysconfig.cfg @@ -0,0 +1,84 @@ +[posix_prefix] +# Configuration directories. Some of these come straight out of the +# configure script. They are for implementing the other variables, not to +# be used directly in [resource_locations]. 
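# [Editorial sketch, not part of the vendored files] A round trip through
# the backported shutil module's two public entry points; paths are
# hypothetical. unpack_archive() dispatches on the explicit `format`
# argument or, failing that, on the filename extension via the
# _UNPACK_FORMATS registry.
from distlib._backport.shutil import make_archive, unpack_archive

name = make_archive('/tmp/release-1.0', 'gztar', root_dir='build')
unpack_archive(name, extract_dir='/tmp/unpacked')  # '.tar.gz' -> 'gztar'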
+confdir = /etc +datadir = /usr/share +libdir = /usr/lib +statedir = /var +# User resource directory +local = ~/.local/{distribution.name} + +stdlib = {base}/lib/python{py_version_short} +platstdlib = {platbase}/lib/python{py_version_short} +purelib = {base}/lib/python{py_version_short}/site-packages +platlib = {platbase}/lib/python{py_version_short}/site-packages +include = {base}/include/python{py_version_short}{abiflags} +platinclude = {platbase}/include/python{py_version_short}{abiflags} +data = {base} + +[posix_home] +stdlib = {base}/lib/python +platstdlib = {base}/lib/python +purelib = {base}/lib/python +platlib = {base}/lib/python +include = {base}/include/python +platinclude = {base}/include/python +scripts = {base}/bin +data = {base} + +[nt] +stdlib = {base}/Lib +platstdlib = {base}/Lib +purelib = {base}/Lib/site-packages +platlib = {base}/Lib/site-packages +include = {base}/Include +platinclude = {base}/Include +scripts = {base}/Scripts +data = {base} + +[os2] +stdlib = {base}/Lib +platstdlib = {base}/Lib +purelib = {base}/Lib/site-packages +platlib = {base}/Lib/site-packages +include = {base}/Include +platinclude = {base}/Include +scripts = {base}/Scripts +data = {base} + +[os2_home] +stdlib = {userbase}/lib/python{py_version_short} +platstdlib = {userbase}/lib/python{py_version_short} +purelib = {userbase}/lib/python{py_version_short}/site-packages +platlib = {userbase}/lib/python{py_version_short}/site-packages +include = {userbase}/include/python{py_version_short} +scripts = {userbase}/bin +data = {userbase} + +[nt_user] +stdlib = {userbase}/Python{py_version_nodot} +platstdlib = {userbase}/Python{py_version_nodot} +purelib = {userbase}/Python{py_version_nodot}/site-packages +platlib = {userbase}/Python{py_version_nodot}/site-packages +include = {userbase}/Python{py_version_nodot}/Include +scripts = {userbase}/Scripts +data = {userbase} + +[posix_user] +stdlib = {userbase}/lib/python{py_version_short} +platstdlib = {userbase}/lib/python{py_version_short} +purelib = {userbase}/lib/python{py_version_short}/site-packages +platlib = {userbase}/lib/python{py_version_short}/site-packages +include = {userbase}/include/python{py_version_short} +scripts = {userbase}/bin +data = {userbase} + +[osx_framework_user] +stdlib = {userbase}/lib/python +platstdlib = {userbase}/lib/python +purelib = {userbase}/lib/python/site-packages +platlib = {userbase}/lib/python/site-packages +include = {userbase}/include +scripts = {userbase}/bin +data = {userbase} diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distlib/_backport/sysconfig.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distlib/_backport/sysconfig.py new file mode 100644 index 0000000000000000000000000000000000000000..b470a373c8664325d0d00c367c0a93e926f468fa --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distlib/_backport/sysconfig.py @@ -0,0 +1,786 @@ +# -*- coding: utf-8 -*- +# +# Copyright (C) 2012 The Python Software Foundation. +# See LICENSE.txt and CONTRIBUTORS.txt. 
+# +"""Access to Python's configuration information.""" + +import codecs +import os +import re +import sys +from os.path import pardir, realpath +try: + import configparser +except ImportError: + import ConfigParser as configparser + + +__all__ = [ + 'get_config_h_filename', + 'get_config_var', + 'get_config_vars', + 'get_makefile_filename', + 'get_path', + 'get_path_names', + 'get_paths', + 'get_platform', + 'get_python_version', + 'get_scheme_names', + 'parse_config_h', +] + + +def _safe_realpath(path): + try: + return realpath(path) + except OSError: + return path + + +if sys.executable: + _PROJECT_BASE = os.path.dirname(_safe_realpath(sys.executable)) +else: + # sys.executable can be empty if argv[0] has been changed and Python is + # unable to retrieve the real program name + _PROJECT_BASE = _safe_realpath(os.getcwd()) + +if os.name == "nt" and "pcbuild" in _PROJECT_BASE[-8:].lower(): + _PROJECT_BASE = _safe_realpath(os.path.join(_PROJECT_BASE, pardir)) +# PC/VS7.1 +if os.name == "nt" and "\\pc\\v" in _PROJECT_BASE[-10:].lower(): + _PROJECT_BASE = _safe_realpath(os.path.join(_PROJECT_BASE, pardir, pardir)) +# PC/AMD64 +if os.name == "nt" and "\\pcbuild\\amd64" in _PROJECT_BASE[-14:].lower(): + _PROJECT_BASE = _safe_realpath(os.path.join(_PROJECT_BASE, pardir, pardir)) + + +def is_python_build(): + for fn in ("Setup.dist", "Setup.local"): + if os.path.isfile(os.path.join(_PROJECT_BASE, "Modules", fn)): + return True + return False + +_PYTHON_BUILD = is_python_build() + +_cfg_read = False + +def _ensure_cfg_read(): + global _cfg_read + if not _cfg_read: + from ..resources import finder + backport_package = __name__.rsplit('.', 1)[0] + _finder = finder(backport_package) + _cfgfile = _finder.find('sysconfig.cfg') + assert _cfgfile, 'sysconfig.cfg exists' + with _cfgfile.as_stream() as s: + _SCHEMES.readfp(s) + if _PYTHON_BUILD: + for scheme in ('posix_prefix', 'posix_home'): + _SCHEMES.set(scheme, 'include', '{srcdir}/Include') + _SCHEMES.set(scheme, 'platinclude', '{projectbase}/.') + + _cfg_read = True + + +_SCHEMES = configparser.RawConfigParser() +_VAR_REPL = re.compile(r'\{([^{]*?)\}') + +def _expand_globals(config): + _ensure_cfg_read() + if config.has_section('globals'): + globals = config.items('globals') + else: + globals = tuple() + + sections = config.sections() + for section in sections: + if section == 'globals': + continue + for option, value in globals: + if config.has_option(section, option): + continue + config.set(section, option, value) + config.remove_section('globals') + + # now expanding local variables defined in the cfg file + # + for section in config.sections(): + variables = dict(config.items(section)) + + def _replacer(matchobj): + name = matchobj.group(1) + if name in variables: + return variables[name] + return matchobj.group(0) + + for option, value in config.items(section): + config.set(section, option, _VAR_REPL.sub(_replacer, value)) + +#_expand_globals(_SCHEMES) + +_PY_VERSION = '%s.%s.%s' % sys.version_info[:3] +_PY_VERSION_SHORT = '%s.%s' % sys.version_info[:2] +_PY_VERSION_SHORT_NO_DOT = '%s%s' % sys.version_info[:2] +_PREFIX = os.path.normpath(sys.prefix) +_EXEC_PREFIX = os.path.normpath(sys.exec_prefix) +_CONFIG_VARS = None +_USER_BASE = None + + +def _subst_vars(path, local_vars): + """In the string `path`, replace tokens like {some.thing} with the + corresponding value from the map `local_vars`. + + If there is no corresponding value, leave the token unchanged. 
+ """ + def _replacer(matchobj): + name = matchobj.group(1) + if name in local_vars: + return local_vars[name] + elif name in os.environ: + return os.environ[name] + return matchobj.group(0) + return _VAR_REPL.sub(_replacer, path) + + +def _extend_dict(target_dict, other_dict): + target_keys = target_dict.keys() + for key, value in other_dict.items(): + if key in target_keys: + continue + target_dict[key] = value + + +def _expand_vars(scheme, vars): + res = {} + if vars is None: + vars = {} + _extend_dict(vars, get_config_vars()) + + for key, value in _SCHEMES.items(scheme): + if os.name in ('posix', 'nt'): + value = os.path.expanduser(value) + res[key] = os.path.normpath(_subst_vars(value, vars)) + return res + + +def format_value(value, vars): + def _replacer(matchobj): + name = matchobj.group(1) + if name in vars: + return vars[name] + return matchobj.group(0) + return _VAR_REPL.sub(_replacer, value) + + +def _get_default_scheme(): + if os.name == 'posix': + # the default scheme for posix is posix_prefix + return 'posix_prefix' + return os.name + + +def _getuserbase(): + env_base = os.environ.get("PYTHONUSERBASE", None) + + def joinuser(*args): + return os.path.expanduser(os.path.join(*args)) + + # what about 'os2emx', 'riscos' ? + if os.name == "nt": + base = os.environ.get("APPDATA") or "~" + if env_base: + return env_base + else: + return joinuser(base, "Python") + + if sys.platform == "darwin": + framework = get_config_var("PYTHONFRAMEWORK") + if framework: + if env_base: + return env_base + else: + return joinuser("~", "Library", framework, "%d.%d" % + sys.version_info[:2]) + + if env_base: + return env_base + else: + return joinuser("~", ".local") + + +def _parse_makefile(filename, vars=None): + """Parse a Makefile-style file. + + A dictionary containing name/value pairs is returned. If an + optional dictionary is passed in as the second argument, it is + used instead of a new dictionary. + """ + # Regexes needed for parsing Makefile (and similar syntaxes, + # like old-style Setup files). + _variable_rx = re.compile(r"([a-zA-Z][a-zA-Z0-9_]+)\s*=\s*(.*)") + _findvar1_rx = re.compile(r"\$\(([A-Za-z][A-Za-z0-9_]*)\)") + _findvar2_rx = re.compile(r"\${([A-Za-z][A-Za-z0-9_]*)}") + + if vars is None: + vars = {} + done = {} + notdone = {} + + with codecs.open(filename, encoding='utf-8', errors="surrogateescape") as f: + lines = f.readlines() + + for line in lines: + if line.startswith('#') or line.strip() == '': + continue + m = _variable_rx.match(line) + if m: + n, v = m.group(1, 2) + v = v.strip() + # `$$' is a literal `$' in make + tmpv = v.replace('$$', '') + + if "$" in tmpv: + notdone[n] = v + else: + try: + v = int(v) + except ValueError: + # insert literal `$' + done[n] = v.replace('$$', '$') + else: + done[n] = v + + # do variable interpolation here + variables = list(notdone.keys()) + + # Variables with a 'PY_' prefix in the makefile. These need to + # be made available without that prefix through sysconfig. + # Special care is needed to ensure that variable expansion works, even + # if the expansion uses the name without a prefix. 
+ renamed_variables = ('CFLAGS', 'LDFLAGS', 'CPPFLAGS') + + while len(variables) > 0: + for name in tuple(variables): + value = notdone[name] + m = _findvar1_rx.search(value) or _findvar2_rx.search(value) + if m is not None: + n = m.group(1) + found = True + if n in done: + item = str(done[n]) + elif n in notdone: + # get it on a subsequent round + found = False + elif n in os.environ: + # do it like make: fall back to environment + item = os.environ[n] + + elif n in renamed_variables: + if (name.startswith('PY_') and + name[3:] in renamed_variables): + item = "" + + elif 'PY_' + n in notdone: + found = False + + else: + item = str(done['PY_' + n]) + + else: + done[n] = item = "" + + if found: + after = value[m.end():] + value = value[:m.start()] + item + after + if "$" in after: + notdone[name] = value + else: + try: + value = int(value) + except ValueError: + done[name] = value.strip() + else: + done[name] = value + variables.remove(name) + + if (name.startswith('PY_') and + name[3:] in renamed_variables): + + name = name[3:] + if name not in done: + done[name] = value + + else: + # bogus variable reference (e.g. "prefix=$/opt/python"); + # just drop it since we can't deal + done[name] = value + variables.remove(name) + + # strip spurious spaces + for k, v in done.items(): + if isinstance(v, str): + done[k] = v.strip() + + # save the results in the global dictionary + vars.update(done) + return vars + + +def get_makefile_filename(): + """Return the path of the Makefile.""" + if _PYTHON_BUILD: + return os.path.join(_PROJECT_BASE, "Makefile") + if hasattr(sys, 'abiflags'): + config_dir_name = 'config-%s%s' % (_PY_VERSION_SHORT, sys.abiflags) + else: + config_dir_name = 'config' + return os.path.join(get_path('stdlib'), config_dir_name, 'Makefile') + + +def _init_posix(vars): + """Initialize the module as appropriate for POSIX systems.""" + # load the installed Makefile: + makefile = get_makefile_filename() + try: + _parse_makefile(makefile, vars) + except IOError as e: + msg = "invalid Python installation: unable to open %s" % makefile + if hasattr(e, "strerror"): + msg = msg + " (%s)" % e.strerror + raise IOError(msg) + # load the installed pyconfig.h: + config_h = get_config_h_filename() + try: + with open(config_h) as f: + parse_config_h(f, vars) + except IOError as e: + msg = "invalid Python installation: unable to open %s" % config_h + if hasattr(e, "strerror"): + msg = msg + " (%s)" % e.strerror + raise IOError(msg) + # On AIX, there are wrong paths to the linker scripts in the Makefile + # -- these paths are relative to the Python source, but when installed + # the scripts are in another directory. + if _PYTHON_BUILD: + vars['LDSHARED'] = vars['BLDSHARED'] + + +def _init_non_posix(vars): + """Initialize the module as appropriate for NT""" + # set basic install directories + vars['LIBDEST'] = get_path('stdlib') + vars['BINLIBDEST'] = get_path('platstdlib') + vars['INCLUDEPY'] = get_path('include') + vars['SO'] = '.pyd' + vars['EXE'] = '.exe' + vars['VERSION'] = _PY_VERSION_SHORT_NO_DOT + vars['BINDIR'] = os.path.dirname(_safe_realpath(sys.executable)) + +# +# public APIs +# + + +def parse_config_h(fp, vars=None): + """Parse a config.h-style file. + + A dictionary containing name/value pairs is returned. If an + optional dictionary is passed in as the second argument, it is + used instead of a new dictionary. 
+ """ + if vars is None: + vars = {} + define_rx = re.compile("#define ([A-Z][A-Za-z0-9_]+) (.*)\n") + undef_rx = re.compile("/[*] #undef ([A-Z][A-Za-z0-9_]+) [*]/\n") + + while True: + line = fp.readline() + if not line: + break + m = define_rx.match(line) + if m: + n, v = m.group(1, 2) + try: + v = int(v) + except ValueError: + pass + vars[n] = v + else: + m = undef_rx.match(line) + if m: + vars[m.group(1)] = 0 + return vars + + +def get_config_h_filename(): + """Return the path of pyconfig.h.""" + if _PYTHON_BUILD: + if os.name == "nt": + inc_dir = os.path.join(_PROJECT_BASE, "PC") + else: + inc_dir = _PROJECT_BASE + else: + inc_dir = get_path('platinclude') + return os.path.join(inc_dir, 'pyconfig.h') + + +def get_scheme_names(): + """Return a tuple containing the schemes names.""" + return tuple(sorted(_SCHEMES.sections())) + + +def get_path_names(): + """Return a tuple containing the paths names.""" + # xxx see if we want a static list + return _SCHEMES.options('posix_prefix') + + +def get_paths(scheme=_get_default_scheme(), vars=None, expand=True): + """Return a mapping containing an install scheme. + + ``scheme`` is the install scheme name. If not provided, it will + return the default scheme for the current platform. + """ + _ensure_cfg_read() + if expand: + return _expand_vars(scheme, vars) + else: + return dict(_SCHEMES.items(scheme)) + + +def get_path(name, scheme=_get_default_scheme(), vars=None, expand=True): + """Return a path corresponding to the scheme. + + ``scheme`` is the install scheme name. + """ + return get_paths(scheme, vars, expand)[name] + + +def get_config_vars(*args): + """With no arguments, return a dictionary of all configuration + variables relevant for the current platform. + + On Unix, this means every variable defined in Python's installed Makefile; + On Windows and Mac OS it's a much smaller set. + + With arguments, return a list of values that result from looking up + each argument in the configuration variable dictionary. + """ + global _CONFIG_VARS + if _CONFIG_VARS is None: + _CONFIG_VARS = {} + # Normalized versions of prefix and exec_prefix are handy to have; + # in fact, these are the standard versions used most places in the + # distutils2 module. + _CONFIG_VARS['prefix'] = _PREFIX + _CONFIG_VARS['exec_prefix'] = _EXEC_PREFIX + _CONFIG_VARS['py_version'] = _PY_VERSION + _CONFIG_VARS['py_version_short'] = _PY_VERSION_SHORT + _CONFIG_VARS['py_version_nodot'] = _PY_VERSION[0] + _PY_VERSION[2] + _CONFIG_VARS['base'] = _PREFIX + _CONFIG_VARS['platbase'] = _EXEC_PREFIX + _CONFIG_VARS['projectbase'] = _PROJECT_BASE + try: + _CONFIG_VARS['abiflags'] = sys.abiflags + except AttributeError: + # sys.abiflags may not be defined on all platforms. + _CONFIG_VARS['abiflags'] = '' + + if os.name in ('nt', 'os2'): + _init_non_posix(_CONFIG_VARS) + if os.name == 'posix': + _init_posix(_CONFIG_VARS) + # Setting 'userbase' is done below the call to the + # init function to enable using 'get_config_var' in + # the init-function. + if sys.version >= '2.6': + _CONFIG_VARS['userbase'] = _getuserbase() + + if 'srcdir' not in _CONFIG_VARS: + _CONFIG_VARS['srcdir'] = _PROJECT_BASE + else: + _CONFIG_VARS['srcdir'] = _safe_realpath(_CONFIG_VARS['srcdir']) + + # Convert srcdir into an absolute path if it appears necessary. + # Normally it is relative to the build directory. However, during + # testing, for example, we might be running a non-installed python + # from a different directory. 
+ if _PYTHON_BUILD and os.name == "posix": + base = _PROJECT_BASE + try: + cwd = os.getcwd() + except OSError: + cwd = None + if (not os.path.isabs(_CONFIG_VARS['srcdir']) and + base != cwd): + # srcdir is relative and we are not in the same directory + # as the executable. Assume executable is in the build + # directory and make srcdir absolute. + srcdir = os.path.join(base, _CONFIG_VARS['srcdir']) + _CONFIG_VARS['srcdir'] = os.path.normpath(srcdir) + + if sys.platform == 'darwin': + kernel_version = os.uname()[2] # Kernel version (8.4.3) + major_version = int(kernel_version.split('.')[0]) + + if major_version < 8: + # On Mac OS X before 10.4, check if -arch and -isysroot + # are in CFLAGS or LDFLAGS and remove them if they are. + # This is needed when building extensions on a 10.3 system + # using a universal build of python. + for key in ('LDFLAGS', 'BASECFLAGS', + # a number of derived variables. These need to be + # patched up as well. + 'CFLAGS', 'PY_CFLAGS', 'BLDSHARED'): + flags = _CONFIG_VARS[key] + flags = re.sub(r'-arch\s+\w+\s', ' ', flags) + flags = re.sub('-isysroot [^ \t]*', ' ', flags) + _CONFIG_VARS[key] = flags + else: + # Allow the user to override the architecture flags using + # an environment variable. + # NOTE: This name was introduced by Apple in OSX 10.5 and + # is used by several scripting languages distributed with + # that OS release. + if 'ARCHFLAGS' in os.environ: + arch = os.environ['ARCHFLAGS'] + for key in ('LDFLAGS', 'BASECFLAGS', + # a number of derived variables. These need to be + # patched up as well. + 'CFLAGS', 'PY_CFLAGS', 'BLDSHARED'): + + flags = _CONFIG_VARS[key] + flags = re.sub(r'-arch\s+\w+\s', ' ', flags) + flags = flags + ' ' + arch + _CONFIG_VARS[key] = flags + + # If we're on OSX 10.5 or later and the user tries to + # compiles an extension using an SDK that is not present + # on the current machine it is better to not use an SDK + # than to fail. + # + # The major usecase for this is users using a Python.org + # binary installer on OSX 10.6: that installer uses + # the 10.4u SDK, but that SDK is not installed by default + # when you install Xcode. + # + CFLAGS = _CONFIG_VARS.get('CFLAGS', '') + m = re.search(r'-isysroot\s+(\S+)', CFLAGS) + if m is not None: + sdk = m.group(1) + if not os.path.exists(sdk): + for key in ('LDFLAGS', 'BASECFLAGS', + # a number of derived variables. These need to be + # patched up as well. + 'CFLAGS', 'PY_CFLAGS', 'BLDSHARED'): + + flags = _CONFIG_VARS[key] + flags = re.sub(r'-isysroot\s+\S+(\s|$)', ' ', flags) + _CONFIG_VARS[key] = flags + + if args: + vals = [] + for name in args: + vals.append(_CONFIG_VARS.get(name)) + return vals + else: + return _CONFIG_VARS + + +def get_config_var(name): + """Return the value of a single variable using the dictionary returned by + 'get_config_vars()'. + + Equivalent to get_config_vars().get(name) + """ + return get_config_vars().get(name) + + +def get_platform(): + """Return a string that identifies the current platform. + + This is used mainly to distinguish platform-specific build directories and + platform-specific built distributions. Typically includes the OS name + and version and the architecture (as supplied by 'os.uname()'), + although the exact information included depends on the OS; eg. for IRIX + the architecture isn't particularly important (IRIX only runs on SGI + hardware), but for Linux the kernel version isn't particularly + important. + + Examples of returned values: + linux-i586 + linux-alpha (?) 
+       solaris-2.6-sun4u
+       irix-5.3
+       irix64-6.2
+
+    Windows will return one of:
+       win-amd64 (64bit Windows on AMD64 (aka x86_64, Intel64, EM64T, etc)
+       win-ia64 (64bit Windows on Itanium)
+       win32 (all others - specifically, sys.platform is returned)
+
+    For other non-POSIX platforms, currently just returns 'sys.platform'.
+    """
+    if os.name == 'nt':
+        # sniff sys.version for architecture.
+        prefix = " bit ("
+        i = sys.version.find(prefix)
+        if i == -1:
+            return sys.platform
+        j = sys.version.find(")", i)
+        look = sys.version[i+len(prefix):j].lower()
+        if look == 'amd64':
+            return 'win-amd64'
+        if look == 'itanium':
+            return 'win-ia64'
+        return sys.platform
+
+    if os.name != "posix" or not hasattr(os, 'uname'):
+        # XXX what about the architecture? NT is Intel or Alpha,
+        # Mac OS is M68k or PPC, etc.
+        return sys.platform
+
+    # Try to distinguish various flavours of Unix
+    osname, host, release, version, machine = os.uname()
+
+    # Convert the OS name to lowercase, remove '/' characters
+    # (to accommodate BSD/OS), and translate spaces (for "Power Macintosh")
+    osname = osname.lower().replace('/', '')
+    machine = machine.replace(' ', '_')
+    machine = machine.replace('/', '-')
+
+    if osname[:5] == "linux":
+        # At least on Linux/Intel, 'machine' is the processor --
+        # i386, etc.
+        # XXX what about Alpha, SPARC, etc?
+        return "%s-%s" % (osname, machine)
+    elif osname[:5] == "sunos":
+        if release[0] >= "5":           # SunOS 5 == Solaris 2
+            osname = "solaris"
+            release = "%d.%s" % (int(release[0]) - 3, release[2:])
+        # fall through to standard osname-release-machine representation
+    elif osname[:4] == "irix":          # could be "irix64"!
+        return "%s-%s" % (osname, release)
+    elif osname[:3] == "aix":
+        return "%s-%s.%s" % (osname, version, release)
+    elif osname[:6] == "cygwin":
+        osname = "cygwin"
+        rel_re = re.compile(r'[\d.]+')
+        m = rel_re.match(release)
+        if m:
+            release = m.group()
+    elif osname[:6] == "darwin":
+        #
+        # For our purposes, we'll assume that the system version from
+        # distutils' perspective is what MACOSX_DEPLOYMENT_TARGET is set
+        # to. This makes the compatibility story a bit more sane because the
+        # machine is going to compile and link as if it were
+        # MACOSX_DEPLOYMENT_TARGET.
+        cfgvars = get_config_vars()
+        macver = cfgvars.get('MACOSX_DEPLOYMENT_TARGET')
+
+        if True:
+            # Always calculate the release of the running machine,
+            # needed to determine if we can build fat binaries or not.
+
+            macrelease = macver
+            # Get the system version. Reading this plist is a documented
+            # way to get the system version (see the documentation for
+            # the Gestalt Manager)
+            try:
+                f = open('/System/Library/CoreServices/SystemVersion.plist')
+            except IOError:
+                # We're on a plain darwin box, fall back to the default
+                # behaviour.
+                pass
+            else:
+                try:
+                    m = re.search(r'<key>ProductUserVisibleVersion</key>\s*'
+                                  r'<string>(.*?)</string>', f.read())
+                finally:
+                    f.close()
+                if m is not None:
+                    macrelease = '.'.join(m.group(1).split('.')[:2])
+                # else: fall back to the default behaviour
+
+        if not macver:
+            macver = macrelease
+
+        if macver:
+            release = macver
+            osname = "macosx"
+
+            if ((macrelease + '.') >= '10.4.' and
+                '-arch' in get_config_vars().get('CFLAGS', '').strip()):
+                # The universal build will build fat binaries, but not on
+                # systems before 10.4
+                #
+                # Try to detect 4-way universal builds, those have machine-type
+                # 'universal' instead of 'fat'.
+ + machine = 'fat' + cflags = get_config_vars().get('CFLAGS') + + archs = re.findall(r'-arch\s+(\S+)', cflags) + archs = tuple(sorted(set(archs))) + + if len(archs) == 1: + machine = archs[0] + elif archs == ('i386', 'ppc'): + machine = 'fat' + elif archs == ('i386', 'x86_64'): + machine = 'intel' + elif archs == ('i386', 'ppc', 'x86_64'): + machine = 'fat3' + elif archs == ('ppc64', 'x86_64'): + machine = 'fat64' + elif archs == ('i386', 'ppc', 'ppc64', 'x86_64'): + machine = 'universal' + else: + raise ValueError( + "Don't know machine value for archs=%r" % (archs,)) + + elif machine == 'i386': + # On OSX the machine type returned by uname is always the + # 32-bit variant, even if the executable architecture is + # the 64-bit variant + if sys.maxsize >= 2**32: + machine = 'x86_64' + + elif machine in ('PowerPC', 'Power_Macintosh'): + # Pick a sane name for the PPC architecture. + # See 'i386' case + if sys.maxsize >= 2**32: + machine = 'ppc64' + else: + machine = 'ppc' + + return "%s-%s-%s" % (osname, release, machine) + + +def get_python_version(): + return _PY_VERSION_SHORT + + +def _print_dict(title, data): + for index, (key, value) in enumerate(sorted(data.items())): + if index == 0: + print('%s: ' % (title)) + print('\t%s = "%s"' % (key, value)) + + +def _main(): + """Display all information sysconfig detains.""" + print('Platform: "%s"' % get_platform()) + print('Python version: "%s"' % get_python_version()) + print('Current installation scheme: "%s"' % _get_default_scheme()) + print() + _print_dict('Paths', get_paths()) + print() + _print_dict('Variables', get_config_vars()) + + +if __name__ == '__main__': + _main() diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distlib/_backport/tarfile.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distlib/_backport/tarfile.py new file mode 100644 index 0000000000000000000000000000000000000000..d66d8566374be661085213468bce338c6a747702 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distlib/_backport/tarfile.py @@ -0,0 +1,2607 @@ +#------------------------------------------------------------------- +# tarfile.py +#------------------------------------------------------------------- +# Copyright (C) 2002 Lars Gustaebel +# All rights reserved. +# +# Permission is hereby granted, free of charge, to any person +# obtaining a copy of this software and associated documentation +# files (the "Software"), to deal in the Software without +# restriction, including without limitation the rights to use, +# copy, modify, merge, publish, distribute, sublicense, and/or sell +# copies of the Software, and to permit persons to whom the +# Software is furnished to do so, subject to the following +# conditions: +# +# The above copyright notice and this permission notice shall be +# included in all copies or substantial portions of the Software. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES +# OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT +# HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, +# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING +# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR +# OTHER DEALINGS IN THE SOFTWARE. +# +from __future__ import print_function + +"""Read from and write to tar format archives. 
+""" + +__version__ = "$Revision$" + +version = "0.9.0" +__author__ = "Lars Gust\u00e4bel (lars@gustaebel.de)" +__date__ = "$Date: 2011-02-25 17:42:01 +0200 (Fri, 25 Feb 2011) $" +__cvsid__ = "$Id: tarfile.py 88586 2011-02-25 15:42:01Z marc-andre.lemburg $" +__credits__ = "Gustavo Niemeyer, Niels Gust\u00e4bel, Richard Townsend." + +#--------- +# Imports +#--------- +import sys +import os +import stat +import errno +import time +import struct +import copy +import re + +try: + import grp, pwd +except ImportError: + grp = pwd = None + +# os.symlink on Windows prior to 6.0 raises NotImplementedError +symlink_exception = (AttributeError, NotImplementedError) +try: + # WindowsError (1314) will be raised if the caller does not hold the + # SeCreateSymbolicLinkPrivilege privilege + symlink_exception += (WindowsError,) +except NameError: + pass + +# from tarfile import * +__all__ = ["TarFile", "TarInfo", "is_tarfile", "TarError"] + +if sys.version_info[0] < 3: + import __builtin__ as builtins +else: + import builtins + +_open = builtins.open # Since 'open' is TarFile.open + +#--------------------------------------------------------- +# tar constants +#--------------------------------------------------------- +NUL = b"\0" # the null character +BLOCKSIZE = 512 # length of processing blocks +RECORDSIZE = BLOCKSIZE * 20 # length of records +GNU_MAGIC = b"ustar \0" # magic gnu tar string +POSIX_MAGIC = b"ustar\x0000" # magic posix tar string + +LENGTH_NAME = 100 # maximum length of a filename +LENGTH_LINK = 100 # maximum length of a linkname +LENGTH_PREFIX = 155 # maximum length of the prefix field + +REGTYPE = b"0" # regular file +AREGTYPE = b"\0" # regular file +LNKTYPE = b"1" # link (inside tarfile) +SYMTYPE = b"2" # symbolic link +CHRTYPE = b"3" # character special device +BLKTYPE = b"4" # block special device +DIRTYPE = b"5" # directory +FIFOTYPE = b"6" # fifo special device +CONTTYPE = b"7" # contiguous file + +GNUTYPE_LONGNAME = b"L" # GNU tar longname +GNUTYPE_LONGLINK = b"K" # GNU tar longlink +GNUTYPE_SPARSE = b"S" # GNU tar sparse file + +XHDTYPE = b"x" # POSIX.1-2001 extended header +XGLTYPE = b"g" # POSIX.1-2001 global header +SOLARIS_XHDTYPE = b"X" # Solaris extended header + +USTAR_FORMAT = 0 # POSIX.1-1988 (ustar) format +GNU_FORMAT = 1 # GNU tar format +PAX_FORMAT = 2 # POSIX.1-2001 (pax) format +DEFAULT_FORMAT = GNU_FORMAT + +#--------------------------------------------------------- +# tarfile constants +#--------------------------------------------------------- +# File types that tarfile supports: +SUPPORTED_TYPES = (REGTYPE, AREGTYPE, LNKTYPE, + SYMTYPE, DIRTYPE, FIFOTYPE, + CONTTYPE, CHRTYPE, BLKTYPE, + GNUTYPE_LONGNAME, GNUTYPE_LONGLINK, + GNUTYPE_SPARSE) + +# File types that will be treated as a regular file. +REGULAR_TYPES = (REGTYPE, AREGTYPE, + CONTTYPE, GNUTYPE_SPARSE) + +# File types that are part of the GNU tar format. +GNU_TYPES = (GNUTYPE_LONGNAME, GNUTYPE_LONGLINK, + GNUTYPE_SPARSE) + +# Fields from a pax header that override a TarInfo attribute. +PAX_FIELDS = ("path", "linkpath", "size", "mtime", + "uid", "gid", "uname", "gname") + +# Fields from a pax header that are affected by hdrcharset. +PAX_NAME_FIELDS = set(("path", "linkpath", "uname", "gname")) + +# Fields in a pax header that are numbers, all other fields +# are treated as strings. 
+PAX_NUMBER_FIELDS = { + "atime": float, + "ctime": float, + "mtime": float, + "uid": int, + "gid": int, + "size": int +} + +#--------------------------------------------------------- +# Bits used in the mode field, values in octal. +#--------------------------------------------------------- +S_IFLNK = 0o120000 # symbolic link +S_IFREG = 0o100000 # regular file +S_IFBLK = 0o060000 # block device +S_IFDIR = 0o040000 # directory +S_IFCHR = 0o020000 # character device +S_IFIFO = 0o010000 # fifo + +TSUID = 0o4000 # set UID on execution +TSGID = 0o2000 # set GID on execution +TSVTX = 0o1000 # reserved + +TUREAD = 0o400 # read by owner +TUWRITE = 0o200 # write by owner +TUEXEC = 0o100 # execute/search by owner +TGREAD = 0o040 # read by group +TGWRITE = 0o020 # write by group +TGEXEC = 0o010 # execute/search by group +TOREAD = 0o004 # read by other +TOWRITE = 0o002 # write by other +TOEXEC = 0o001 # execute/search by other + +#--------------------------------------------------------- +# initialization +#--------------------------------------------------------- +if os.name in ("nt", "ce"): + ENCODING = "utf-8" +else: + ENCODING = sys.getfilesystemencoding() + +#--------------------------------------------------------- +# Some useful functions +#--------------------------------------------------------- + +def stn(s, length, encoding, errors): + """Convert a string to a null-terminated bytes object. + """ + s = s.encode(encoding, errors) + return s[:length] + (length - len(s)) * NUL + +def nts(s, encoding, errors): + """Convert a null-terminated bytes object to a string. + """ + p = s.find(b"\0") + if p != -1: + s = s[:p] + return s.decode(encoding, errors) + +def nti(s): + """Convert a number field to a python number. + """ + # There are two possible encodings for a number field, see + # itn() below. + if s[0] != chr(0o200): + try: + n = int(nts(s, "ascii", "strict") or "0", 8) + except ValueError: + raise InvalidHeaderError("invalid header") + else: + n = 0 + for i in range(len(s) - 1): + n <<= 8 + n += ord(s[i + 1]) + return n + +def itn(n, digits=8, format=DEFAULT_FORMAT): + """Convert a python number to a number field. + """ + # POSIX 1003.1-1988 requires numbers to be encoded as a string of + # octal digits followed by a null-byte, this allows values up to + # (8**(digits-1))-1. GNU tar allows storing numbers greater than + # that if necessary. A leading 0o200 byte indicates this particular + # encoding, the following digits-1 bytes are a big-endian + # representation. This allows values up to (256**(digits-1))-1. + if 0 <= n < 8 ** (digits - 1): + s = ("%0*o" % (digits - 1, n)).encode("ascii") + NUL + else: + if format != GNU_FORMAT or n >= 256 ** (digits - 1): + raise ValueError("overflow in number field") + + if n < 0: + # XXX We mimic GNU tar's behaviour with negative numbers, + # this could raise OverflowError. + n = struct.unpack("L", struct.pack("l", n))[0] + + s = bytearray() + for i in range(digits - 1): + s.insert(0, n & 0o377) + n >>= 8 + s.insert(0, 0o200) + return s + +def calc_chksums(buf): + """Calculate the checksum for a member's header by summing up all + characters except for the chksum field which is treated as if + it was filled with spaces. According to the GNU tar sources, + some tars (Sun and NeXT) calculate chksum with signed char, + which will be different if there are chars in the buffer with + the high bit set. So we calculate two checksums, unsigned and + signed. 
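# [Editorial sketch, not part of the vendored files] Round-tripping the
# header number fields via itn()/nti() defined above: small values use
# NUL-terminated octal, while values too large for the field fall back to
# GNU base-256 with a leading 0o200 byte (itn() raises ValueError for
# non-GNU formats).
from distlib._backport.tarfile import itn, nti

assert itn(511) == b"0000777\x00"   # octal in an 8-byte field
assert nti(itn(511)) == 511
big = itn(2 ** 33)                  # base-256 encoding
assert big[0] == 0o200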
+ """ + unsigned_chksum = 256 + sum(struct.unpack("148B", buf[:148]) + struct.unpack("356B", buf[156:512])) + signed_chksum = 256 + sum(struct.unpack("148b", buf[:148]) + struct.unpack("356b", buf[156:512])) + return unsigned_chksum, signed_chksum + +def copyfileobj(src, dst, length=None): + """Copy length bytes from fileobj src to fileobj dst. + If length is None, copy the entire content. + """ + if length == 0: + return + if length is None: + while True: + buf = src.read(16*1024) + if not buf: + break + dst.write(buf) + return + + BUFSIZE = 16 * 1024 + blocks, remainder = divmod(length, BUFSIZE) + for b in range(blocks): + buf = src.read(BUFSIZE) + if len(buf) < BUFSIZE: + raise IOError("end of file reached") + dst.write(buf) + + if remainder != 0: + buf = src.read(remainder) + if len(buf) < remainder: + raise IOError("end of file reached") + dst.write(buf) + return + +filemode_table = ( + ((S_IFLNK, "l"), + (S_IFREG, "-"), + (S_IFBLK, "b"), + (S_IFDIR, "d"), + (S_IFCHR, "c"), + (S_IFIFO, "p")), + + ((TUREAD, "r"),), + ((TUWRITE, "w"),), + ((TUEXEC|TSUID, "s"), + (TSUID, "S"), + (TUEXEC, "x")), + + ((TGREAD, "r"),), + ((TGWRITE, "w"),), + ((TGEXEC|TSGID, "s"), + (TSGID, "S"), + (TGEXEC, "x")), + + ((TOREAD, "r"),), + ((TOWRITE, "w"),), + ((TOEXEC|TSVTX, "t"), + (TSVTX, "T"), + (TOEXEC, "x")) +) + +def filemode(mode): + """Convert a file's mode to a string of the form + -rwxrwxrwx. + Used by TarFile.list() + """ + perm = [] + for table in filemode_table: + for bit, char in table: + if mode & bit == bit: + perm.append(char) + break + else: + perm.append("-") + return "".join(perm) + +class TarError(Exception): + """Base exception.""" + pass +class ExtractError(TarError): + """General exception for extract errors.""" + pass +class ReadError(TarError): + """Exception for unreadable tar archives.""" + pass +class CompressionError(TarError): + """Exception for unavailable compression methods.""" + pass +class StreamError(TarError): + """Exception for unsupported operations on stream-like TarFiles.""" + pass +class HeaderError(TarError): + """Base exception for header errors.""" + pass +class EmptyHeaderError(HeaderError): + """Exception for empty headers.""" + pass +class TruncatedHeaderError(HeaderError): + """Exception for truncated headers.""" + pass +class EOFHeaderError(HeaderError): + """Exception for end of file headers.""" + pass +class InvalidHeaderError(HeaderError): + """Exception for invalid headers.""" + pass +class SubsequentHeaderError(HeaderError): + """Exception for missing and invalid extended headers.""" + pass + +#--------------------------- +# internal stream interface +#--------------------------- +class _LowLevelFile(object): + """Low-level file object. Supports reading and writing. + It is used instead of a regular file object for streaming + access. + """ + + def __init__(self, name, mode): + mode = { + "r": os.O_RDONLY, + "w": os.O_WRONLY | os.O_CREAT | os.O_TRUNC, + }[mode] + if hasattr(os, "O_BINARY"): + mode |= os.O_BINARY + self.fd = os.open(name, mode, 0o666) + + def close(self): + os.close(self.fd) + + def read(self, size): + return os.read(self.fd, size) + + def write(self, s): + os.write(self.fd, s) + +class _Stream(object): + """Class that serves as an adapter between TarFile and + a stream-like object. The stream-like object only + needs to have a read() or write() method and is accessed + blockwise. Use of gzip or bzip2 compression is possible. + A stream-like object could be for example: sys.stdin, + sys.stdout, a socket, a tape device etc. 
+
+       _Stream is intended to be used only internally.
+    """
+
+    def __init__(self, name, mode, comptype, fileobj, bufsize):
+        """Construct a _Stream object.
+        """
+        self._extfileobj = True
+        if fileobj is None:
+            fileobj = _LowLevelFile(name, mode)
+            self._extfileobj = False
+
+        if comptype == '*':
+            # Enable transparent compression detection for the
+            # stream interface
+            fileobj = _StreamProxy(fileobj)
+            comptype = fileobj.getcomptype()
+
+        self.name = name or ""
+        self.mode = mode
+        self.comptype = comptype
+        self.fileobj = fileobj
+        self.bufsize = bufsize
+        self.buf = b""
+        self.pos = 0
+        self.closed = False
+
+        try:
+            if comptype == "gz":
+                try:
+                    import zlib
+                except ImportError:
+                    raise CompressionError("zlib module is not available")
+                self.zlib = zlib
+                self.crc = zlib.crc32(b"")
+                if mode == "r":
+                    self._init_read_gz()
+                else:
+                    self._init_write_gz()
+
+            if comptype == "bz2":
+                try:
+                    import bz2
+                except ImportError:
+                    raise CompressionError("bz2 module is not available")
+                if mode == "r":
+                    self.dbuf = b""
+                    self.cmp = bz2.BZ2Decompressor()
+                else:
+                    self.cmp = bz2.BZ2Compressor()
+        except:
+            if not self._extfileobj:
+                self.fileobj.close()
+            self.closed = True
+            raise
+
+    def __del__(self):
+        if hasattr(self, "closed") and not self.closed:
+            self.close()
+
+    def _init_write_gz(self):
+        """Initialize for writing with gzip compression.
+        """
+        self.cmp = self.zlib.compressobj(9, self.zlib.DEFLATED,
+                                         -self.zlib.MAX_WBITS,
+                                         self.zlib.DEF_MEM_LEVEL,
+                                         0)
+        # 4-byte little-endian mtime for the gzip header
+        timestamp = struct.pack("<L", int(time.time()))
+        self.__write(b"\037\213\010\010" + timestamp + b"\002\377")
+        if self.name.endswith(".gz"):
+            self.name = self.name[:-3]
+        # RFC1952 says we must use ISO-8859-1 for the FNAME field.
+        self.__write(self.name.encode("iso-8859-1", "replace") + NUL)
+
+    def write(self, s):
+        """Write string s to the stream.
+        """
+        if self.comptype == "gz":
+            self.crc = self.zlib.crc32(s, self.crc)
+        self.pos += len(s)
+        if self.comptype != "tar":
+            s = self.cmp.compress(s)
+        self.__write(s)
+
+    def __write(self, s):
+        """Write string s to the stream if a whole new block
+           is ready to be written.
+        """
+        self.buf += s
+        while len(self.buf) > self.bufsize:
+            self.fileobj.write(self.buf[:self.bufsize])
+            self.buf = self.buf[self.bufsize:]
+
+    def close(self):
+        """Close the _Stream object. No operation should be
+           done on it afterwards.
+        """
+        if self.closed:
+            return
+
+        if self.mode == "w" and self.comptype != "tar":
+            self.buf += self.cmp.flush()
+
+        if self.mode == "w" and self.buf:
+            self.fileobj.write(self.buf)
+            self.buf = b""
+            if self.comptype == "gz":
+                # The native zlib crc is an unsigned 32-bit integer, but
+                # the Python wrapper implicitly casts that to a signed C
+                # long. So, on a 32-bit box self.crc may "look negative",
+                # while the same crc on a 64-bit box may "look positive".
+                # To avoid irksome warnings from the `struct` module, force
+                # it to look positive on all boxes.
+                self.fileobj.write(struct.pack("<L", self.crc & 0xffffffff))
+                self.fileobj.write(struct.pack("<L", self.pos & 0xffffFFFF))
+
+        if not self._extfileobj:
+            self.fileobj.close()
+
+        self.closed = True
+
+    def _init_read_gz(self):
+        """Initialize for reading a gzip compressed fileobj.
+        """
+        self.cmp = self.zlib.decompressobj(-self.zlib.MAX_WBITS)
+        self.dbuf = b""
+
+        # taken from gzip.GzipFile with some alterations
+        if self.__read(2) != b"\037\213":
+            raise ReadError("not a gzip file")
+        if self.__read(1) != b"\010":
+            raise CompressionError("unsupported compression method")
+
+        flag = ord(self.__read(1))
+        self.__read(6)
+
+        if flag & 4:
+            xlen = ord(self.__read(1)) + 256 * ord(self.__read(1))
+            self.read(xlen)
+        if flag & 8:
+            while True:
+                s = self.__read(1)
+                if not s or s == NUL:
+                    break
+        if flag & 16:
+            while True:
+                s = self.__read(1)
+                if not s or s == NUL:
+                    break
+        if flag & 2:
+            self.__read(2)
+
+    def tell(self):
+        """Return the stream's file pointer position.
+        """
+        return self.pos
+
+    def seek(self, pos=0):
+        """Set the stream's file pointer to pos. Negative seeking
+           is forbidden.
+        """
+        if pos - self.pos >= 0:
+            blocks, remainder = divmod(pos - self.pos, self.bufsize)
+            for i in range(blocks):
+                self.read(self.bufsize)
+            self.read(remainder)
+        else:
+            raise StreamError("seeking backwards is not allowed")
+        return self.pos
+
+    def read(self, size=None):
+        """Return the next size number of bytes from the stream.
+           If size is not defined, return all bytes of the stream
+           up to EOF.
+        """
+        if size is None:
+            t = []
+            while True:
+                buf = self._read(self.bufsize)
+                if not buf:
+                    break
+                t.append(buf)
+            # join on a bytes object; the chunks are bytes
+            buf = b"".join(t)
+        else:
+            buf = self._read(size)
+        self.pos += len(buf)
+        return buf
+
+    def _read(self, size):
+        """Return size bytes from the stream.
+        """
+        if self.comptype == "tar":
+            return self.__read(size)
+
+        c = len(self.dbuf)
+        while c < size:
+            buf = self.__read(self.bufsize)
+            if not buf:
+                break
+            try:
+                buf = self.cmp.decompress(buf)
+            except IOError:
+                raise ReadError("invalid compressed data")
+            self.dbuf += buf
+            c += len(buf)
+        buf = self.dbuf[:size]
+        self.dbuf = self.dbuf[size:]
+        return buf
+
+    def __read(self, size):
+        """Return size bytes from stream. If internal buffer is empty,
+           read another block from the stream.
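+
+           Added note: this returns bytes straight from the underlying
+           file object (still compressed for gz/bz2 streams); _read()
+           above is the decompressing counterpart built on top of it.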
+ """ + c = len(self.buf) + while c < size: + buf = self.fileobj.read(self.bufsize) + if not buf: + break + self.buf += buf + c += len(buf) + buf = self.buf[:size] + self.buf = self.buf[size:] + return buf +# class _Stream + +class _StreamProxy(object): + """Small proxy class that enables transparent compression + detection for the Stream interface (mode 'r|*'). + """ + + def __init__(self, fileobj): + self.fileobj = fileobj + self.buf = self.fileobj.read(BLOCKSIZE) + + def read(self, size): + self.read = self.fileobj.read + return self.buf + + def getcomptype(self): + if self.buf.startswith(b"\037\213\010"): + return "gz" + if self.buf.startswith(b"BZh91"): + return "bz2" + return "tar" + + def close(self): + self.fileobj.close() +# class StreamProxy + +class _BZ2Proxy(object): + """Small proxy class that enables external file object + support for "r:bz2" and "w:bz2" modes. This is actually + a workaround for a limitation in bz2 module's BZ2File + class which (unlike gzip.GzipFile) has no support for + a file object argument. + """ + + blocksize = 16 * 1024 + + def __init__(self, fileobj, mode): + self.fileobj = fileobj + self.mode = mode + self.name = getattr(self.fileobj, "name", None) + self.init() + + def init(self): + import bz2 + self.pos = 0 + if self.mode == "r": + self.bz2obj = bz2.BZ2Decompressor() + self.fileobj.seek(0) + self.buf = b"" + else: + self.bz2obj = bz2.BZ2Compressor() + + def read(self, size): + x = len(self.buf) + while x < size: + raw = self.fileobj.read(self.blocksize) + if not raw: + break + data = self.bz2obj.decompress(raw) + self.buf += data + x += len(data) + + buf = self.buf[:size] + self.buf = self.buf[size:] + self.pos += len(buf) + return buf + + def seek(self, pos): + if pos < self.pos: + self.init() + self.read(pos - self.pos) + + def tell(self): + return self.pos + + def write(self, data): + self.pos += len(data) + raw = self.bz2obj.compress(data) + self.fileobj.write(raw) + + def close(self): + if self.mode == "w": + raw = self.bz2obj.flush() + self.fileobj.write(raw) +# class _BZ2Proxy + +#------------------------ +# Extraction file object +#------------------------ +class _FileInFile(object): + """A thin wrapper around an existing file object that + provides a part of its data as an individual file + object. + """ + + def __init__(self, fileobj, offset, size, blockinfo=None): + self.fileobj = fileobj + self.offset = offset + self.size = size + self.position = 0 + + if blockinfo is None: + blockinfo = [(0, size)] + + # Construct a map with data and zero blocks. + self.map_index = 0 + self.map = [] + lastpos = 0 + realpos = self.offset + for offset, size in blockinfo: + if offset > lastpos: + self.map.append((False, lastpos, offset, None)) + self.map.append((True, offset, offset + size, realpos)) + realpos += size + lastpos = offset + size + if lastpos < self.size: + self.map.append((False, lastpos, self.size, None)) + + def seekable(self): + if not hasattr(self.fileobj, "seekable"): + # XXX gzip.GzipFile and bz2.BZ2File + return True + return self.fileobj.seekable() + + def tell(self): + """Return the current file position. + """ + return self.position + + def seek(self, position): + """Seek to a position in the file. + """ + self.position = position + + def read(self, size=None): + """Read data from the file. 
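+
+           Illustrative note (added): reads are served from self.map built
+           in __init__; e.g. blockinfo [(0, 3), (10, 5)] with size 20 and
+           offset 0 yields the map entries (True, 0, 3, 0),
+           (False, 3, 10, None), (True, 10, 15, 3), (False, 15, 20, None).
+           False spans are produced as NUL bytes, True spans are read from
+           the underlying file object.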
+ """ + if size is None: + size = self.size - self.position + else: + size = min(size, self.size - self.position) + + buf = b"" + while size > 0: + while True: + data, start, stop, offset = self.map[self.map_index] + if start <= self.position < stop: + break + else: + self.map_index += 1 + if self.map_index == len(self.map): + self.map_index = 0 + length = min(size, stop - self.position) + if data: + self.fileobj.seek(offset + (self.position - start)) + buf += self.fileobj.read(length) + else: + buf += NUL * length + size -= length + self.position += length + return buf +#class _FileInFile + + +class ExFileObject(object): + """File-like object for reading an archive member. + Is returned by TarFile.extractfile(). + """ + blocksize = 1024 + + def __init__(self, tarfile, tarinfo): + self.fileobj = _FileInFile(tarfile.fileobj, + tarinfo.offset_data, + tarinfo.size, + tarinfo.sparse) + self.name = tarinfo.name + self.mode = "r" + self.closed = False + self.size = tarinfo.size + + self.position = 0 + self.buffer = b"" + + def readable(self): + return True + + def writable(self): + return False + + def seekable(self): + return self.fileobj.seekable() + + def read(self, size=None): + """Read at most size bytes from the file. If size is not + present or None, read all data until EOF is reached. + """ + if self.closed: + raise ValueError("I/O operation on closed file") + + buf = b"" + if self.buffer: + if size is None: + buf = self.buffer + self.buffer = b"" + else: + buf = self.buffer[:size] + self.buffer = self.buffer[size:] + + if size is None: + buf += self.fileobj.read() + else: + buf += self.fileobj.read(size - len(buf)) + + self.position += len(buf) + return buf + + # XXX TextIOWrapper uses the read1() method. + read1 = read + + def readline(self, size=-1): + """Read one entire line from the file. If size is present + and non-negative, return a string with at most that + size, which may be an incomplete line. + """ + if self.closed: + raise ValueError("I/O operation on closed file") + + pos = self.buffer.find(b"\n") + 1 + if pos == 0: + # no newline found. + while True: + buf = self.fileobj.read(self.blocksize) + self.buffer += buf + if not buf or b"\n" in buf: + pos = self.buffer.find(b"\n") + 1 + if pos == 0: + # no newline found. + pos = len(self.buffer) + break + + if size != -1: + pos = min(size, pos) + + buf = self.buffer[:pos] + self.buffer = self.buffer[pos:] + self.position += len(buf) + return buf + + def readlines(self): + """Return a list with all remaining lines. + """ + result = [] + while True: + line = self.readline() + if not line: break + result.append(line) + return result + + def tell(self): + """Return the current file position. + """ + if self.closed: + raise ValueError("I/O operation on closed file") + + return self.position + + def seek(self, pos, whence=os.SEEK_SET): + """Seek to a position in the file. + """ + if self.closed: + raise ValueError("I/O operation on closed file") + + if whence == os.SEEK_SET: + self.position = min(max(pos, 0), self.size) + elif whence == os.SEEK_CUR: + if pos < 0: + self.position = max(self.position + pos, 0) + else: + self.position = min(self.position + pos, self.size) + elif whence == os.SEEK_END: + self.position = max(min(self.size + pos, self.size), 0) + else: + raise ValueError("Invalid argument") + + self.buffer = b"" + self.fileobj.seek(self.position) + + def close(self): + """Close the file object. + """ + self.closed = True + + def __iter__(self): + """Get an iterator over the file's lines. 
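+
+           Illustrative usage (added; assumes `tf` is an open TarFile and
+           "member.txt" the name of a text member):
+
+           >>> for line in tf.extractfile("member.txt"):  # doctest: +SKIP
+           ...     pass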
+ """ + while True: + line = self.readline() + if not line: + break + yield line +#class ExFileObject + +#------------------ +# Exported Classes +#------------------ +class TarInfo(object): + """Informational class which holds the details about an + archive member given by a tar header block. + TarInfo objects are returned by TarFile.getmember(), + TarFile.getmembers() and TarFile.gettarinfo() and are + usually created internally. + """ + + __slots__ = ("name", "mode", "uid", "gid", "size", "mtime", + "chksum", "type", "linkname", "uname", "gname", + "devmajor", "devminor", + "offset", "offset_data", "pax_headers", "sparse", + "tarfile", "_sparse_structs", "_link_target") + + def __init__(self, name=""): + """Construct a TarInfo object. name is the optional name + of the member. + """ + self.name = name # member name + self.mode = 0o644 # file permissions + self.uid = 0 # user id + self.gid = 0 # group id + self.size = 0 # file size + self.mtime = 0 # modification time + self.chksum = 0 # header checksum + self.type = REGTYPE # member type + self.linkname = "" # link name + self.uname = "" # user name + self.gname = "" # group name + self.devmajor = 0 # device major number + self.devminor = 0 # device minor number + + self.offset = 0 # the tar header starts here + self.offset_data = 0 # the file's data starts here + + self.sparse = None # sparse member information + self.pax_headers = {} # pax header information + + # In pax headers the "name" and "linkname" field are called + # "path" and "linkpath". + def _getpath(self): + return self.name + def _setpath(self, name): + self.name = name + path = property(_getpath, _setpath) + + def _getlinkpath(self): + return self.linkname + def _setlinkpath(self, linkname): + self.linkname = linkname + linkpath = property(_getlinkpath, _setlinkpath) + + def __repr__(self): + return "<%s %r at %#x>" % (self.__class__.__name__,self.name,id(self)) + + def get_info(self): + """Return the TarInfo's attributes as a dictionary. + """ + info = { + "name": self.name, + "mode": self.mode & 0o7777, + "uid": self.uid, + "gid": self.gid, + "size": self.size, + "mtime": self.mtime, + "chksum": self.chksum, + "type": self.type, + "linkname": self.linkname, + "uname": self.uname, + "gname": self.gname, + "devmajor": self.devmajor, + "devminor": self.devminor + } + + if info["type"] == DIRTYPE and not info["name"].endswith("/"): + info["name"] += "/" + + return info + + def tobuf(self, format=DEFAULT_FORMAT, encoding=ENCODING, errors="surrogateescape"): + """Return a tar header as a string of 512 byte blocks. + """ + info = self.get_info() + + if format == USTAR_FORMAT: + return self.create_ustar_header(info, encoding, errors) + elif format == GNU_FORMAT: + return self.create_gnu_header(info, encoding, errors) + elif format == PAX_FORMAT: + return self.create_pax_header(info, encoding) + else: + raise ValueError("invalid format") + + def create_ustar_header(self, info, encoding, errors): + """Return the object as a ustar header block. + """ + info["magic"] = POSIX_MAGIC + + if len(info["linkname"]) > LENGTH_LINK: + raise ValueError("linkname is too long") + + if len(info["name"]) > LENGTH_NAME: + info["prefix"], info["name"] = self._posix_split_name(info["name"]) + + return self._create_header(info, USTAR_FORMAT, encoding, errors) + + def create_gnu_header(self, info, encoding, errors): + """Return the object as a GNU header block sequence. 
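+
+           Added note: "sequence" because an over-long name or linkname is
+           emitted first as a separate GNUTYPE_LONGNAME/GNUTYPE_LONGLINK
+           pseudo-member, followed by the real header block.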
+ """ + info["magic"] = GNU_MAGIC + + buf = b"" + if len(info["linkname"]) > LENGTH_LINK: + buf += self._create_gnu_long_header(info["linkname"], GNUTYPE_LONGLINK, encoding, errors) + + if len(info["name"]) > LENGTH_NAME: + buf += self._create_gnu_long_header(info["name"], GNUTYPE_LONGNAME, encoding, errors) + + return buf + self._create_header(info, GNU_FORMAT, encoding, errors) + + def create_pax_header(self, info, encoding): + """Return the object as a ustar header block. If it cannot be + represented this way, prepend a pax extended header sequence + with supplement information. + """ + info["magic"] = POSIX_MAGIC + pax_headers = self.pax_headers.copy() + + # Test string fields for values that exceed the field length or cannot + # be represented in ASCII encoding. + for name, hname, length in ( + ("name", "path", LENGTH_NAME), ("linkname", "linkpath", LENGTH_LINK), + ("uname", "uname", 32), ("gname", "gname", 32)): + + if hname in pax_headers: + # The pax header has priority. + continue + + # Try to encode the string as ASCII. + try: + info[name].encode("ascii", "strict") + except UnicodeEncodeError: + pax_headers[hname] = info[name] + continue + + if len(info[name]) > length: + pax_headers[hname] = info[name] + + # Test number fields for values that exceed the field limit or values + # that like to be stored as float. + for name, digits in (("uid", 8), ("gid", 8), ("size", 12), ("mtime", 12)): + if name in pax_headers: + # The pax header has priority. Avoid overflow. + info[name] = 0 + continue + + val = info[name] + if not 0 <= val < 8 ** (digits - 1) or isinstance(val, float): + pax_headers[name] = str(val) + info[name] = 0 + + # Create a pax extended header if necessary. + if pax_headers: + buf = self._create_pax_generic_header(pax_headers, XHDTYPE, encoding) + else: + buf = b"" + + return buf + self._create_header(info, USTAR_FORMAT, "ascii", "replace") + + @classmethod + def create_pax_global_header(cls, pax_headers): + """Return the object as a pax global header block sequence. + """ + return cls._create_pax_generic_header(pax_headers, XGLTYPE, "utf8") + + def _posix_split_name(self, name): + """Split a name longer than 100 chars into a prefix + and a name part. + """ + prefix = name[:LENGTH_PREFIX + 1] + while prefix and prefix[-1] != "/": + prefix = prefix[:-1] + + name = name[len(prefix):] + prefix = prefix[:-1] + + if not prefix or len(name) > LENGTH_NAME: + raise ValueError("name is too long") + return prefix, name + + @staticmethod + def _create_header(info, format, encoding, errors): + """Return a header block. info is a dictionary with file + information, format must be one of the *_FORMAT constants. 
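+
+           Illustrative doctest (added; not in the original module):
+
+           >>> buf = TarInfo("foo").tobuf(USTAR_FORMAT)
+           >>> len(buf) == BLOCKSIZE
+           True
+           >>> buf[257:262] == b"ustar"    # the magic field
+           True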
+ """ + parts = [ + stn(info.get("name", ""), 100, encoding, errors), + itn(info.get("mode", 0) & 0o7777, 8, format), + itn(info.get("uid", 0), 8, format), + itn(info.get("gid", 0), 8, format), + itn(info.get("size", 0), 12, format), + itn(info.get("mtime", 0), 12, format), + b" ", # checksum field + info.get("type", REGTYPE), + stn(info.get("linkname", ""), 100, encoding, errors), + info.get("magic", POSIX_MAGIC), + stn(info.get("uname", ""), 32, encoding, errors), + stn(info.get("gname", ""), 32, encoding, errors), + itn(info.get("devmajor", 0), 8, format), + itn(info.get("devminor", 0), 8, format), + stn(info.get("prefix", ""), 155, encoding, errors) + ] + + buf = struct.pack("%ds" % BLOCKSIZE, b"".join(parts)) + chksum = calc_chksums(buf[-BLOCKSIZE:])[0] + buf = buf[:-364] + ("%06o\0" % chksum).encode("ascii") + buf[-357:] + return buf + + @staticmethod + def _create_payload(payload): + """Return the string payload filled with zero bytes + up to the next 512 byte border. + """ + blocks, remainder = divmod(len(payload), BLOCKSIZE) + if remainder > 0: + payload += (BLOCKSIZE - remainder) * NUL + return payload + + @classmethod + def _create_gnu_long_header(cls, name, type, encoding, errors): + """Return a GNUTYPE_LONGNAME or GNUTYPE_LONGLINK sequence + for name. + """ + name = name.encode(encoding, errors) + NUL + + info = {} + info["name"] = "././@LongLink" + info["type"] = type + info["size"] = len(name) + info["magic"] = GNU_MAGIC + + # create extended header + name blocks. + return cls._create_header(info, USTAR_FORMAT, encoding, errors) + \ + cls._create_payload(name) + + @classmethod + def _create_pax_generic_header(cls, pax_headers, type, encoding): + """Return a POSIX.1-2008 extended or global header sequence + that contains a list of keyword, value pairs. The values + must be strings. + """ + # Check if one of the fields contains surrogate characters and thereby + # forces hdrcharset=BINARY, see _proc_pax() for more information. + binary = False + for keyword, value in pax_headers.items(): + try: + value.encode("utf8", "strict") + except UnicodeEncodeError: + binary = True + break + + records = b"" + if binary: + # Put the hdrcharset field at the beginning of the header. + records += b"21 hdrcharset=BINARY\n" + + for keyword, value in pax_headers.items(): + keyword = keyword.encode("utf8") + if binary: + # Try to restore the original byte representation of `value'. + # Needless to say, that the encoding must match the string. + value = value.encode(encoding, "surrogateescape") + else: + value = value.encode("utf8") + + l = len(keyword) + len(value) + 3 # ' ' + '=' + '\n' + n = p = 0 + while True: + n = l + len(str(p)) + if n == p: + break + p = n + records += bytes(str(p), "ascii") + b" " + keyword + b"=" + value + b"\n" + + # We use a hardcoded "././@PaxHeader" name like star does + # instead of the one that POSIX recommends. + info = {} + info["name"] = "././@PaxHeader" + info["type"] = type + info["size"] = len(records) + info["magic"] = POSIX_MAGIC + + # Create pax header + record blocks. + return cls._create_header(info, USTAR_FORMAT, "ascii", "replace") + \ + cls._create_payload(records) + + @classmethod + def frombuf(cls, buf, encoding, errors): + """Construct a TarInfo object from a 512 byte bytes object. 
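+
+           Illustrative doctest (added): a header produced by tobuf()
+           parses back to an equivalent object:
+
+           >>> t = TarInfo("foo")
+           >>> u = TarInfo.frombuf(t.tobuf(), ENCODING, "surrogateescape")
+           >>> u.name == "foo"
+           True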
+ """ + if len(buf) == 0: + raise EmptyHeaderError("empty header") + if len(buf) != BLOCKSIZE: + raise TruncatedHeaderError("truncated header") + if buf.count(NUL) == BLOCKSIZE: + raise EOFHeaderError("end of file header") + + chksum = nti(buf[148:156]) + if chksum not in calc_chksums(buf): + raise InvalidHeaderError("bad checksum") + + obj = cls() + obj.name = nts(buf[0:100], encoding, errors) + obj.mode = nti(buf[100:108]) + obj.uid = nti(buf[108:116]) + obj.gid = nti(buf[116:124]) + obj.size = nti(buf[124:136]) + obj.mtime = nti(buf[136:148]) + obj.chksum = chksum + obj.type = buf[156:157] + obj.linkname = nts(buf[157:257], encoding, errors) + obj.uname = nts(buf[265:297], encoding, errors) + obj.gname = nts(buf[297:329], encoding, errors) + obj.devmajor = nti(buf[329:337]) + obj.devminor = nti(buf[337:345]) + prefix = nts(buf[345:500], encoding, errors) + + # Old V7 tar format represents a directory as a regular + # file with a trailing slash. + if obj.type == AREGTYPE and obj.name.endswith("/"): + obj.type = DIRTYPE + + # The old GNU sparse format occupies some of the unused + # space in the buffer for up to 4 sparse structures. + # Save the them for later processing in _proc_sparse(). + if obj.type == GNUTYPE_SPARSE: + pos = 386 + structs = [] + for i in range(4): + try: + offset = nti(buf[pos:pos + 12]) + numbytes = nti(buf[pos + 12:pos + 24]) + except ValueError: + break + structs.append((offset, numbytes)) + pos += 24 + isextended = bool(buf[482]) + origsize = nti(buf[483:495]) + obj._sparse_structs = (structs, isextended, origsize) + + # Remove redundant slashes from directories. + if obj.isdir(): + obj.name = obj.name.rstrip("/") + + # Reconstruct a ustar longname. + if prefix and obj.type not in GNU_TYPES: + obj.name = prefix + "/" + obj.name + return obj + + @classmethod + def fromtarfile(cls, tarfile): + """Return the next TarInfo object from TarFile object + tarfile. + """ + buf = tarfile.fileobj.read(BLOCKSIZE) + obj = cls.frombuf(buf, tarfile.encoding, tarfile.errors) + obj.offset = tarfile.fileobj.tell() - BLOCKSIZE + return obj._proc_member(tarfile) + + #-------------------------------------------------------------------------- + # The following are methods that are called depending on the type of a + # member. The entry point is _proc_member() which can be overridden in a + # subclass to add custom _proc_*() methods. A _proc_*() method MUST + # implement the following + # operations: + # 1. Set self.offset_data to the position where the data blocks begin, + # if there is data that follows. + # 2. Set tarfile.offset to the position where the next member's header will + # begin. + # 3. Return self or another valid TarInfo object. + def _proc_member(self, tarfile): + """Choose the right processing method depending on + the type and call it. + """ + if self.type in (GNUTYPE_LONGNAME, GNUTYPE_LONGLINK): + return self._proc_gnulong(tarfile) + elif self.type == GNUTYPE_SPARSE: + return self._proc_sparse(tarfile) + elif self.type in (XHDTYPE, XGLTYPE, SOLARIS_XHDTYPE): + return self._proc_pax(tarfile) + else: + return self._proc_builtin(tarfile) + + def _proc_builtin(self, tarfile): + """Process a builtin type or an unknown type which + will be treated as a regular file. + """ + self.offset_data = tarfile.fileobj.tell() + offset = self.offset_data + if self.isreg() or self.type not in SUPPORTED_TYPES: + # Skip the following data blocks. + offset += self._block(self.size) + tarfile.offset = offset + + # Patch the TarInfo object with saved global + # header information. 
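+        # (Added note: this is how values from a pax global header seen
+        # earlier in the archive get merged into every later member.)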
+ self._apply_pax_info(tarfile.pax_headers, tarfile.encoding, tarfile.errors) + + return self + + def _proc_gnulong(self, tarfile): + """Process the blocks that hold a GNU longname + or longlink member. + """ + buf = tarfile.fileobj.read(self._block(self.size)) + + # Fetch the next header and process it. + try: + next = self.fromtarfile(tarfile) + except HeaderError: + raise SubsequentHeaderError("missing or bad subsequent header") + + # Patch the TarInfo object from the next header with + # the longname information. + next.offset = self.offset + if self.type == GNUTYPE_LONGNAME: + next.name = nts(buf, tarfile.encoding, tarfile.errors) + elif self.type == GNUTYPE_LONGLINK: + next.linkname = nts(buf, tarfile.encoding, tarfile.errors) + + return next + + def _proc_sparse(self, tarfile): + """Process a GNU sparse header plus extra headers. + """ + # We already collected some sparse structures in frombuf(). + structs, isextended, origsize = self._sparse_structs + del self._sparse_structs + + # Collect sparse structures from extended header blocks. + while isextended: + buf = tarfile.fileobj.read(BLOCKSIZE) + pos = 0 + for i in range(21): + try: + offset = nti(buf[pos:pos + 12]) + numbytes = nti(buf[pos + 12:pos + 24]) + except ValueError: + break + if offset and numbytes: + structs.append((offset, numbytes)) + pos += 24 + isextended = bool(buf[504]) + self.sparse = structs + + self.offset_data = tarfile.fileobj.tell() + tarfile.offset = self.offset_data + self._block(self.size) + self.size = origsize + return self + + def _proc_pax(self, tarfile): + """Process an extended or global header as described in + POSIX.1-2008. + """ + # Read the header information. + buf = tarfile.fileobj.read(self._block(self.size)) + + # A pax header stores supplemental information for either + # the following file (extended) or all following files + # (global). + if self.type == XGLTYPE: + pax_headers = tarfile.pax_headers + else: + pax_headers = tarfile.pax_headers.copy() + + # Check if the pax header contains a hdrcharset field. This tells us + # the encoding of the path, linkpath, uname and gname fields. Normally, + # these fields are UTF-8 encoded but since POSIX.1-2008 tar + # implementations are allowed to store them as raw binary strings if + # the translation to UTF-8 fails. + match = re.search(br"\d+ hdrcharset=([^\n]+)\n", buf) + if match is not None: + pax_headers["hdrcharset"] = match.group(1).decode("utf8") + + # For the time being, we don't care about anything other than "BINARY". + # The only other value that is currently allowed by the standard is + # "ISO-IR 10646 2000 UTF-8" in other words UTF-8. + hdrcharset = pax_headers.get("hdrcharset") + if hdrcharset == "BINARY": + encoding = tarfile.encoding + else: + encoding = "utf8" + + # Parse pax header information. A record looks like that: + # "%d %s=%s\n" % (length, keyword, value). length is the size + # of the complete record including the length field itself and + # the newline. keyword and value are both UTF-8 encoded strings. + regex = re.compile(br"(\d+) ([^=]+)=") + pos = 0 + while True: + match = regex.match(buf, pos) + if not match: + break + + length, keyword = match.groups() + length = int(length) + value = buf[match.end(2) + 1:match.start(1) + length - 1] + + # Normally, we could just use "utf8" as the encoding and "strict" + # as the error handler, but we better not take the risk. 
For + # example, GNU tar <= 1.23 is known to store filenames it cannot + # translate to UTF-8 as raw strings (unfortunately without a + # hdrcharset=BINARY header). + # We first try the strict standard encoding, and if that fails we + # fall back on the user's encoding and error handler. + keyword = self._decode_pax_field(keyword, "utf8", "utf8", + tarfile.errors) + if keyword in PAX_NAME_FIELDS: + value = self._decode_pax_field(value, encoding, tarfile.encoding, + tarfile.errors) + else: + value = self._decode_pax_field(value, "utf8", "utf8", + tarfile.errors) + + pax_headers[keyword] = value + pos += length + + # Fetch the next header. + try: + next = self.fromtarfile(tarfile) + except HeaderError: + raise SubsequentHeaderError("missing or bad subsequent header") + + # Process GNU sparse information. + if "GNU.sparse.map" in pax_headers: + # GNU extended sparse format version 0.1. + self._proc_gnusparse_01(next, pax_headers) + + elif "GNU.sparse.size" in pax_headers: + # GNU extended sparse format version 0.0. + self._proc_gnusparse_00(next, pax_headers, buf) + + elif pax_headers.get("GNU.sparse.major") == "1" and pax_headers.get("GNU.sparse.minor") == "0": + # GNU extended sparse format version 1.0. + self._proc_gnusparse_10(next, pax_headers, tarfile) + + if self.type in (XHDTYPE, SOLARIS_XHDTYPE): + # Patch the TarInfo object with the extended header info. + next._apply_pax_info(pax_headers, tarfile.encoding, tarfile.errors) + next.offset = self.offset + + if "size" in pax_headers: + # If the extended header replaces the size field, + # we need to recalculate the offset where the next + # header starts. + offset = next.offset_data + if next.isreg() or next.type not in SUPPORTED_TYPES: + offset += next._block(next.size) + tarfile.offset = offset + + return next + + def _proc_gnusparse_00(self, next, pax_headers, buf): + """Process a GNU tar extended sparse header, version 0.0. + """ + offsets = [] + for match in re.finditer(br"\d+ GNU.sparse.offset=(\d+)\n", buf): + offsets.append(int(match.group(1))) + numbytes = [] + for match in re.finditer(br"\d+ GNU.sparse.numbytes=(\d+)\n", buf): + numbytes.append(int(match.group(1))) + next.sparse = list(zip(offsets, numbytes)) + + def _proc_gnusparse_01(self, next, pax_headers): + """Process a GNU tar extended sparse header, version 0.1. + """ + sparse = [int(x) for x in pax_headers["GNU.sparse.map"].split(",")] + next.sparse = list(zip(sparse[::2], sparse[1::2])) + + def _proc_gnusparse_10(self, next, pax_headers, tarfile): + """Process a GNU tar extended sparse header, version 1.0. + """ + fields = None + sparse = [] + buf = tarfile.fileobj.read(BLOCKSIZE) + fields, buf = buf.split(b"\n", 1) + fields = int(fields) + while len(sparse) < fields * 2: + if b"\n" not in buf: + buf += tarfile.fileobj.read(BLOCKSIZE) + number, buf = buf.split(b"\n", 1) + sparse.append(int(number)) + next.offset_data = tarfile.fileobj.tell() + next.sparse = list(zip(sparse[::2], sparse[1::2])) + + def _apply_pax_info(self, pax_headers, encoding, errors): + """Replace fields with supplemental information from a previous + pax extended or global header. 
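+
+           Illustrative example (added): pax_headers such as
+           {"path": "some/very/long/name", "size": "1048576"} override the
+           name and size parsed from the 512-byte ustar header; keys listed
+           in PAX_NUMBER_FIELDS are converted to int or float first.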
+ """ + for keyword, value in pax_headers.items(): + if keyword == "GNU.sparse.name": + setattr(self, "path", value) + elif keyword == "GNU.sparse.size": + setattr(self, "size", int(value)) + elif keyword == "GNU.sparse.realsize": + setattr(self, "size", int(value)) + elif keyword in PAX_FIELDS: + if keyword in PAX_NUMBER_FIELDS: + try: + value = PAX_NUMBER_FIELDS[keyword](value) + except ValueError: + value = 0 + if keyword == "path": + value = value.rstrip("/") + setattr(self, keyword, value) + + self.pax_headers = pax_headers.copy() + + def _decode_pax_field(self, value, encoding, fallback_encoding, fallback_errors): + """Decode a single field from a pax record. + """ + try: + return value.decode(encoding, "strict") + except UnicodeDecodeError: + return value.decode(fallback_encoding, fallback_errors) + + def _block(self, count): + """Round up a byte count by BLOCKSIZE and return it, + e.g. _block(834) => 1024. + """ + blocks, remainder = divmod(count, BLOCKSIZE) + if remainder: + blocks += 1 + return blocks * BLOCKSIZE + + def isreg(self): + return self.type in REGULAR_TYPES + def isfile(self): + return self.isreg() + def isdir(self): + return self.type == DIRTYPE + def issym(self): + return self.type == SYMTYPE + def islnk(self): + return self.type == LNKTYPE + def ischr(self): + return self.type == CHRTYPE + def isblk(self): + return self.type == BLKTYPE + def isfifo(self): + return self.type == FIFOTYPE + def issparse(self): + return self.sparse is not None + def isdev(self): + return self.type in (CHRTYPE, BLKTYPE, FIFOTYPE) +# class TarInfo + +class TarFile(object): + """The TarFile Class provides an interface to tar archives. + """ + + debug = 0 # May be set from 0 (no msgs) to 3 (all msgs) + + dereference = False # If true, add content of linked file to the + # tar file, else the link. + + ignore_zeros = False # If true, skips empty or invalid blocks and + # continues processing. + + errorlevel = 1 # If 0, fatal errors only appear in debug + # messages (if debug >= 0). If > 0, errors + # are passed to the caller as exceptions. + + format = DEFAULT_FORMAT # The format to use when creating an archive. + + encoding = ENCODING # Encoding for 8-bit character strings. + + errors = None # Error handler for unicode conversion. + + tarinfo = TarInfo # The default TarInfo class to use. + + fileobject = ExFileObject # The default ExFileObject class to use. + + def __init__(self, name=None, mode="r", fileobj=None, format=None, + tarinfo=None, dereference=None, ignore_zeros=None, encoding=None, + errors="surrogateescape", pax_headers=None, debug=None, errorlevel=None): + """Open an (uncompressed) tar archive `name'. `mode' is either 'r' to + read from an existing archive, 'a' to append data to an existing + file or 'w' to create a new file overwriting an existing one. `mode' + defaults to 'r'. + If `fileobj' is given, it is used for reading or writing data. If it + can be determined, `mode' is overridden by `fileobj's mode. + `fileobj' is not closed, when TarFile is closed. + """ + if len(mode) > 1 or mode not in "raw": + raise ValueError("mode must be 'r', 'a' or 'w'") + self.mode = mode + self._mode = {"r": "rb", "a": "r+b", "w": "wb"}[mode] + + if not fileobj: + if self.mode == "a" and not os.path.exists(name): + # Create nonexistent files in append mode. 
+ self.mode = "w" + self._mode = "wb" + fileobj = bltn_open(name, self._mode) + self._extfileobj = False + else: + if name is None and hasattr(fileobj, "name"): + name = fileobj.name + if hasattr(fileobj, "mode"): + self._mode = fileobj.mode + self._extfileobj = True + self.name = os.path.abspath(name) if name else None + self.fileobj = fileobj + + # Init attributes. + if format is not None: + self.format = format + if tarinfo is not None: + self.tarinfo = tarinfo + if dereference is not None: + self.dereference = dereference + if ignore_zeros is not None: + self.ignore_zeros = ignore_zeros + if encoding is not None: + self.encoding = encoding + self.errors = errors + + if pax_headers is not None and self.format == PAX_FORMAT: + self.pax_headers = pax_headers + else: + self.pax_headers = {} + + if debug is not None: + self.debug = debug + if errorlevel is not None: + self.errorlevel = errorlevel + + # Init datastructures. + self.closed = False + self.members = [] # list of members as TarInfo objects + self._loaded = False # flag if all members have been read + self.offset = self.fileobj.tell() + # current position in the archive file + self.inodes = {} # dictionary caching the inodes of + # archive members already added + + try: + if self.mode == "r": + self.firstmember = None + self.firstmember = self.next() + + if self.mode == "a": + # Move to the end of the archive, + # before the first empty block. + while True: + self.fileobj.seek(self.offset) + try: + tarinfo = self.tarinfo.fromtarfile(self) + self.members.append(tarinfo) + except EOFHeaderError: + self.fileobj.seek(self.offset) + break + except HeaderError as e: + raise ReadError(str(e)) + + if self.mode in "aw": + self._loaded = True + + if self.pax_headers: + buf = self.tarinfo.create_pax_global_header(self.pax_headers.copy()) + self.fileobj.write(buf) + self.offset += len(buf) + except: + if not self._extfileobj: + self.fileobj.close() + self.closed = True + raise + + #-------------------------------------------------------------------------- + # Below are the classmethods which act as alternate constructors to the + # TarFile class. The open() method is the only one that is needed for + # public use; it is the "super"-constructor and is able to select an + # adequate "sub"-constructor for a particular compression using the mapping + # from OPEN_METH. + # + # This concept allows one to subclass TarFile without losing the comfort of + # the super-constructor. A sub-constructor is registered and made available + # by adding it to the mapping in OPEN_METH. + + @classmethod + def open(cls, name=None, mode="r", fileobj=None, bufsize=RECORDSIZE, **kwargs): + """Open a tar archive for reading, writing or appending. Return + an appropriate TarFile class. 
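+
+           Illustrative usage (added; "sample.tar.gz" is a hypothetical
+           archive path):
+
+               with TarFile.open("sample.tar.gz", "r:gz") as tf:
+                   tf.extractall("/tmp/out")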
+ + mode: + 'r' or 'r:*' open for reading with transparent compression + 'r:' open for reading exclusively uncompressed + 'r:gz' open for reading with gzip compression + 'r:bz2' open for reading with bzip2 compression + 'a' or 'a:' open for appending, creating the file if necessary + 'w' or 'w:' open for writing without compression + 'w:gz' open for writing with gzip compression + 'w:bz2' open for writing with bzip2 compression + + 'r|*' open a stream of tar blocks with transparent compression + 'r|' open an uncompressed stream of tar blocks for reading + 'r|gz' open a gzip compressed stream of tar blocks + 'r|bz2' open a bzip2 compressed stream of tar blocks + 'w|' open an uncompressed stream for writing + 'w|gz' open a gzip compressed stream for writing + 'w|bz2' open a bzip2 compressed stream for writing + """ + + if not name and not fileobj: + raise ValueError("nothing to open") + + if mode in ("r", "r:*"): + # Find out which *open() is appropriate for opening the file. + for comptype in cls.OPEN_METH: + func = getattr(cls, cls.OPEN_METH[comptype]) + if fileobj is not None: + saved_pos = fileobj.tell() + try: + return func(name, "r", fileobj, **kwargs) + except (ReadError, CompressionError) as e: + if fileobj is not None: + fileobj.seek(saved_pos) + continue + raise ReadError("file could not be opened successfully") + + elif ":" in mode: + filemode, comptype = mode.split(":", 1) + filemode = filemode or "r" + comptype = comptype or "tar" + + # Select the *open() function according to + # given compression. + if comptype in cls.OPEN_METH: + func = getattr(cls, cls.OPEN_METH[comptype]) + else: + raise CompressionError("unknown compression type %r" % comptype) + return func(name, filemode, fileobj, **kwargs) + + elif "|" in mode: + filemode, comptype = mode.split("|", 1) + filemode = filemode or "r" + comptype = comptype or "tar" + + if filemode not in "rw": + raise ValueError("mode must be 'r' or 'w'") + + stream = _Stream(name, filemode, comptype, fileobj, bufsize) + try: + t = cls(name, filemode, stream, **kwargs) + except: + stream.close() + raise + t._extfileobj = False + return t + + elif mode in "aw": + return cls.taropen(name, mode, fileobj, **kwargs) + + raise ValueError("undiscernible mode") + + @classmethod + def taropen(cls, name, mode="r", fileobj=None, **kwargs): + """Open uncompressed tar archive name for reading or writing. + """ + if len(mode) > 1 or mode not in "raw": + raise ValueError("mode must be 'r', 'a' or 'w'") + return cls(name, mode, fileobj, **kwargs) + + @classmethod + def gzopen(cls, name, mode="r", fileobj=None, compresslevel=9, **kwargs): + """Open gzip compressed tar archive name for reading or writing. + Appending is not allowed. + """ + if len(mode) > 1 or mode not in "rw": + raise ValueError("mode must be 'r' or 'w'") + + try: + import gzip + gzip.GzipFile + except (ImportError, AttributeError): + raise CompressionError("gzip module is not available") + + extfileobj = fileobj is not None + try: + fileobj = gzip.GzipFile(name, mode + "b", compresslevel, fileobj) + t = cls.taropen(name, mode, fileobj, **kwargs) + except IOError: + if not extfileobj and fileobj is not None: + fileobj.close() + if fileobj is None: + raise + raise ReadError("not a gzip file") + except: + if not extfileobj and fileobj is not None: + fileobj.close() + raise + t._extfileobj = extfileobj + return t + + @classmethod + def bz2open(cls, name, mode="r", fileobj=None, compresslevel=9, **kwargs): + """Open bzip2 compressed tar archive name for reading or writing. 
+ Appending is not allowed. + """ + if len(mode) > 1 or mode not in "rw": + raise ValueError("mode must be 'r' or 'w'.") + + try: + import bz2 + except ImportError: + raise CompressionError("bz2 module is not available") + + if fileobj is not None: + fileobj = _BZ2Proxy(fileobj, mode) + else: + fileobj = bz2.BZ2File(name, mode, compresslevel=compresslevel) + + try: + t = cls.taropen(name, mode, fileobj, **kwargs) + except (IOError, EOFError): + fileobj.close() + raise ReadError("not a bzip2 file") + t._extfileobj = False + return t + + # All *open() methods are registered here. + OPEN_METH = { + "tar": "taropen", # uncompressed tar + "gz": "gzopen", # gzip compressed tar + "bz2": "bz2open" # bzip2 compressed tar + } + + #-------------------------------------------------------------------------- + # The public methods which TarFile provides: + + def close(self): + """Close the TarFile. In write-mode, two finishing zero blocks are + appended to the archive. + """ + if self.closed: + return + + if self.mode in "aw": + self.fileobj.write(NUL * (BLOCKSIZE * 2)) + self.offset += (BLOCKSIZE * 2) + # fill up the end with zero-blocks + # (like option -b20 for tar does) + blocks, remainder = divmod(self.offset, RECORDSIZE) + if remainder > 0: + self.fileobj.write(NUL * (RECORDSIZE - remainder)) + + if not self._extfileobj: + self.fileobj.close() + self.closed = True + + def getmember(self, name): + """Return a TarInfo object for member `name'. If `name' can not be + found in the archive, KeyError is raised. If a member occurs more + than once in the archive, its last occurrence is assumed to be the + most up-to-date version. + """ + tarinfo = self._getmember(name) + if tarinfo is None: + raise KeyError("filename %r not found" % name) + return tarinfo + + def getmembers(self): + """Return the members of the archive as a list of TarInfo objects. The + list has the same order as the members in the archive. + """ + self._check() + if not self._loaded: # if we want to obtain a list of + self._load() # all members, we first have to + # scan the whole archive. + return self.members + + def getnames(self): + """Return the members of the archive as a list of their names. It has + the same order as the list returned by getmembers(). + """ + return [tarinfo.name for tarinfo in self.getmembers()] + + def gettarinfo(self, name=None, arcname=None, fileobj=None): + """Create a TarInfo object for either the file `name' or the file + object `fileobj' (using os.fstat on its file descriptor). You can + modify some of the TarInfo's attributes before you add it using + addfile(). If given, `arcname' specifies an alternative name for the + file in the archive. + """ + self._check("aw") + + # When fileobj is given, replace name by + # fileobj's real name. + if fileobj is not None: + name = fileobj.name + + # Building the name of the member in the archive. + # Backward slashes are converted to forward slashes, + # Absolute paths are turned to relative paths. + if arcname is None: + arcname = name + drv, arcname = os.path.splitdrive(arcname) + arcname = arcname.replace(os.sep, "/") + arcname = arcname.lstrip("/") + + # Now, fill the TarInfo object with + # information specific for the file. + tarinfo = self.tarinfo() + tarinfo.tarfile = self + + # Use os.stat or os.lstat, depending on platform + # and if symlinks shall be resolved. 
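+        # (Added note: the inodes cache below maps (st_ino, st_dev) to the
+        # first archived name, so a hard-linked file seen again is stored
+        # as a LNKTYPE member instead of a second copy of its data.)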
+ if fileobj is None: + if hasattr(os, "lstat") and not self.dereference: + statres = os.lstat(name) + else: + statres = os.stat(name) + else: + statres = os.fstat(fileobj.fileno()) + linkname = "" + + stmd = statres.st_mode + if stat.S_ISREG(stmd): + inode = (statres.st_ino, statres.st_dev) + if not self.dereference and statres.st_nlink > 1 and \ + inode in self.inodes and arcname != self.inodes[inode]: + # Is it a hardlink to an already + # archived file? + type = LNKTYPE + linkname = self.inodes[inode] + else: + # The inode is added only if its valid. + # For win32 it is always 0. + type = REGTYPE + if inode[0]: + self.inodes[inode] = arcname + elif stat.S_ISDIR(stmd): + type = DIRTYPE + elif stat.S_ISFIFO(stmd): + type = FIFOTYPE + elif stat.S_ISLNK(stmd): + type = SYMTYPE + linkname = os.readlink(name) + elif stat.S_ISCHR(stmd): + type = CHRTYPE + elif stat.S_ISBLK(stmd): + type = BLKTYPE + else: + return None + + # Fill the TarInfo object with all + # information we can get. + tarinfo.name = arcname + tarinfo.mode = stmd + tarinfo.uid = statres.st_uid + tarinfo.gid = statres.st_gid + if type == REGTYPE: + tarinfo.size = statres.st_size + else: + tarinfo.size = 0 + tarinfo.mtime = statres.st_mtime + tarinfo.type = type + tarinfo.linkname = linkname + if pwd: + try: + tarinfo.uname = pwd.getpwuid(tarinfo.uid)[0] + except KeyError: + pass + if grp: + try: + tarinfo.gname = grp.getgrgid(tarinfo.gid)[0] + except KeyError: + pass + + if type in (CHRTYPE, BLKTYPE): + if hasattr(os, "major") and hasattr(os, "minor"): + tarinfo.devmajor = os.major(statres.st_rdev) + tarinfo.devminor = os.minor(statres.st_rdev) + return tarinfo + + def list(self, verbose=True): + """Print a table of contents to sys.stdout. If `verbose' is False, only + the names of the members are printed. If it is True, an `ls -l'-like + output is produced. + """ + self._check() + + for tarinfo in self: + if verbose: + print(filemode(tarinfo.mode), end=' ') + print("%s/%s" % (tarinfo.uname or tarinfo.uid, + tarinfo.gname or tarinfo.gid), end=' ') + if tarinfo.ischr() or tarinfo.isblk(): + print("%10s" % ("%d,%d" \ + % (tarinfo.devmajor, tarinfo.devminor)), end=' ') + else: + print("%10d" % tarinfo.size, end=' ') + print("%d-%02d-%02d %02d:%02d:%02d" \ + % time.localtime(tarinfo.mtime)[:6], end=' ') + + print(tarinfo.name + ("/" if tarinfo.isdir() else ""), end=' ') + + if verbose: + if tarinfo.issym(): + print("->", tarinfo.linkname, end=' ') + if tarinfo.islnk(): + print("link to", tarinfo.linkname, end=' ') + print() + + def add(self, name, arcname=None, recursive=True, exclude=None, filter=None): + """Add the file `name' to the archive. `name' may be any type of file + (directory, fifo, symbolic link, etc.). If given, `arcname' + specifies an alternative name for the file in the archive. + Directories are added recursively by default. This can be avoided by + setting `recursive' to False. `exclude' is a function that should + return True for each filename to be excluded. `filter' is a function + that expects a TarInfo object argument and returns the changed + TarInfo object, if it returns None the TarInfo object will be + excluded from the archive. + """ + self._check("aw") + + if arcname is None: + arcname = name + + # Exclude pathnames. + if exclude is not None: + import warnings + warnings.warn("use the filter argument instead", + DeprecationWarning, 2) + if exclude(name): + self._dbg(2, "tarfile: Excluded %r" % name) + return + + # Skip if somebody tries to archive the archive... 
+ if self.name is not None and os.path.abspath(name) == self.name: + self._dbg(2, "tarfile: Skipped %r" % name) + return + + self._dbg(1, name) + + # Create a TarInfo object from the file. + tarinfo = self.gettarinfo(name, arcname) + + if tarinfo is None: + self._dbg(1, "tarfile: Unsupported type %r" % name) + return + + # Change or exclude the TarInfo object. + if filter is not None: + tarinfo = filter(tarinfo) + if tarinfo is None: + self._dbg(2, "tarfile: Excluded %r" % name) + return + + # Append the tar header and data to the archive. + if tarinfo.isreg(): + f = bltn_open(name, "rb") + self.addfile(tarinfo, f) + f.close() + + elif tarinfo.isdir(): + self.addfile(tarinfo) + if recursive: + for f in os.listdir(name): + self.add(os.path.join(name, f), os.path.join(arcname, f), + recursive, exclude, filter=filter) + + else: + self.addfile(tarinfo) + + def addfile(self, tarinfo, fileobj=None): + """Add the TarInfo object `tarinfo' to the archive. If `fileobj' is + given, tarinfo.size bytes are read from it and added to the archive. + You can create TarInfo objects using gettarinfo(). + On Windows platforms, `fileobj' should always be opened with mode + 'rb' to avoid irritation about the file size. + """ + self._check("aw") + + tarinfo = copy.copy(tarinfo) + + buf = tarinfo.tobuf(self.format, self.encoding, self.errors) + self.fileobj.write(buf) + self.offset += len(buf) + + # If there's data to follow, append it. + if fileobj is not None: + copyfileobj(fileobj, self.fileobj, tarinfo.size) + blocks, remainder = divmod(tarinfo.size, BLOCKSIZE) + if remainder > 0: + self.fileobj.write(NUL * (BLOCKSIZE - remainder)) + blocks += 1 + self.offset += blocks * BLOCKSIZE + + self.members.append(tarinfo) + + def extractall(self, path=".", members=None): + """Extract all members from the archive to the current working + directory and set owner, modification time and permissions on + directories afterwards. `path' specifies a different directory + to extract to. `members' is optional and must be a subset of the + list returned by getmembers(). + """ + directories = [] + + if members is None: + members = self + + for tarinfo in members: + if tarinfo.isdir(): + # Extract directories with a safe mode. + directories.append(tarinfo) + tarinfo = copy.copy(tarinfo) + tarinfo.mode = 0o700 + # Do not set_attrs directories, as we will do that further down + self.extract(tarinfo, path, set_attrs=not tarinfo.isdir()) + + # Reverse sort directories. + directories.sort(key=lambda a: a.name) + directories.reverse() + + # Set correct owner, mtime and filemode on directories. + for tarinfo in directories: + dirpath = os.path.join(path, tarinfo.name) + try: + self.chown(tarinfo, dirpath) + self.utime(tarinfo, dirpath) + self.chmod(tarinfo, dirpath) + except ExtractError as e: + if self.errorlevel > 1: + raise + else: + self._dbg(1, "tarfile: %s" % e) + + def extract(self, member, path="", set_attrs=True): + """Extract a member from the archive to the current working directory, + using its full name. Its file information is extracted as accurately + as possible. `member' may be a filename or a TarInfo object. You can + specify a different directory using `path'. File attributes (owner, + mtime, mode) are set unless `set_attrs' is False. + """ + self._check("r") + + if isinstance(member, str): + tarinfo = self.getmember(member) + else: + tarinfo = member + + # Prepare the link target for makelink(). 
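+        # (Added note: joining under `path` lets makelink() os.link()
+        # against the file that was already extracted there.)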
+ if tarinfo.islnk(): + tarinfo._link_target = os.path.join(path, tarinfo.linkname) + + try: + self._extract_member(tarinfo, os.path.join(path, tarinfo.name), + set_attrs=set_attrs) + except EnvironmentError as e: + if self.errorlevel > 0: + raise + else: + if e.filename is None: + self._dbg(1, "tarfile: %s" % e.strerror) + else: + self._dbg(1, "tarfile: %s %r" % (e.strerror, e.filename)) + except ExtractError as e: + if self.errorlevel > 1: + raise + else: + self._dbg(1, "tarfile: %s" % e) + + def extractfile(self, member): + """Extract a member from the archive as a file object. `member' may be + a filename or a TarInfo object. If `member' is a regular file, a + file-like object is returned. If `member' is a link, a file-like + object is constructed from the link's target. If `member' is none of + the above, None is returned. + The file-like object is read-only and provides the following + methods: read(), readline(), readlines(), seek() and tell() + """ + self._check("r") + + if isinstance(member, str): + tarinfo = self.getmember(member) + else: + tarinfo = member + + if tarinfo.isreg(): + return self.fileobject(self, tarinfo) + + elif tarinfo.type not in SUPPORTED_TYPES: + # If a member's type is unknown, it is treated as a + # regular file. + return self.fileobject(self, tarinfo) + + elif tarinfo.islnk() or tarinfo.issym(): + if isinstance(self.fileobj, _Stream): + # A small but ugly workaround for the case that someone tries + # to extract a (sym)link as a file-object from a non-seekable + # stream of tar blocks. + raise StreamError("cannot extract (sym)link as file object") + else: + # A (sym)link's file object is its target's file object. + return self.extractfile(self._find_link_target(tarinfo)) + else: + # If there's no data associated with the member (directory, chrdev, + # blkdev, etc.), return None instead of a file object. + return None + + def _extract_member(self, tarinfo, targetpath, set_attrs=True): + """Extract the TarInfo object tarinfo to a physical + file called targetpath. + """ + # Fetch the TarInfo object for the given name + # and build the destination pathname, replacing + # forward slashes to platform specific separators. + targetpath = targetpath.rstrip("/") + targetpath = targetpath.replace("/", os.sep) + + # Create all upper directories. + upperdirs = os.path.dirname(targetpath) + if upperdirs and not os.path.exists(upperdirs): + # Create directories that are not part of the archive with + # default permissions. + os.makedirs(upperdirs) + + if tarinfo.islnk() or tarinfo.issym(): + self._dbg(1, "%s -> %s" % (tarinfo.name, tarinfo.linkname)) + else: + self._dbg(1, tarinfo.name) + + if tarinfo.isreg(): + self.makefile(tarinfo, targetpath) + elif tarinfo.isdir(): + self.makedir(tarinfo, targetpath) + elif tarinfo.isfifo(): + self.makefifo(tarinfo, targetpath) + elif tarinfo.ischr() or tarinfo.isblk(): + self.makedev(tarinfo, targetpath) + elif tarinfo.islnk() or tarinfo.issym(): + self.makelink(tarinfo, targetpath) + elif tarinfo.type not in SUPPORTED_TYPES: + self.makeunknown(tarinfo, targetpath) + else: + self.makefile(tarinfo, targetpath) + + if set_attrs: + self.chown(tarinfo, targetpath) + if not tarinfo.issym(): + self.chmod(tarinfo, targetpath) + self.utime(tarinfo, targetpath) + + #-------------------------------------------------------------------------- + # Below are the different file methods. They are called via + # _extract_member() when extract() is called. They can be replaced in a + # subclass to implement other functionality. 
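+
+    # Illustrative sketch (added; not part of the module): any of these
+    # hooks can be overridden in a subclass. The name LoggingTarFile is
+    # hypothetical:
+    #
+    #     class LoggingTarFile(TarFile):
+    #         def makefile(self, tarinfo, targetpath):
+    #             self._dbg(1, "writing %s" % targetpath)
+    #             TarFile.makefile(self, tarinfo, targetpath)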
+ + def makedir(self, tarinfo, targetpath): + """Make a directory called targetpath. + """ + try: + # Use a safe mode for the directory, the real mode is set + # later in _extract_member(). + os.mkdir(targetpath, 0o700) + except EnvironmentError as e: + if e.errno != errno.EEXIST: + raise + + def makefile(self, tarinfo, targetpath): + """Make a file called targetpath. + """ + source = self.fileobj + source.seek(tarinfo.offset_data) + target = bltn_open(targetpath, "wb") + if tarinfo.sparse is not None: + for offset, size in tarinfo.sparse: + target.seek(offset) + copyfileobj(source, target, size) + else: + copyfileobj(source, target, tarinfo.size) + target.seek(tarinfo.size) + target.truncate() + target.close() + + def makeunknown(self, tarinfo, targetpath): + """Make a file from a TarInfo object with an unknown type + at targetpath. + """ + self.makefile(tarinfo, targetpath) + self._dbg(1, "tarfile: Unknown file type %r, " \ + "extracted as regular file." % tarinfo.type) + + def makefifo(self, tarinfo, targetpath): + """Make a fifo called targetpath. + """ + if hasattr(os, "mkfifo"): + os.mkfifo(targetpath) + else: + raise ExtractError("fifo not supported by system") + + def makedev(self, tarinfo, targetpath): + """Make a character or block device called targetpath. + """ + if not hasattr(os, "mknod") or not hasattr(os, "makedev"): + raise ExtractError("special devices not supported by system") + + mode = tarinfo.mode + if tarinfo.isblk(): + mode |= stat.S_IFBLK + else: + mode |= stat.S_IFCHR + + os.mknod(targetpath, mode, + os.makedev(tarinfo.devmajor, tarinfo.devminor)) + + def makelink(self, tarinfo, targetpath): + """Make a (symbolic) link called targetpath. If it cannot be created + (platform limitation), we try to make a copy of the referenced file + instead of a link. + """ + try: + # For systems that support symbolic and hard links. + if tarinfo.issym(): + os.symlink(tarinfo.linkname, targetpath) + else: + # See extract(). + if os.path.exists(tarinfo._link_target): + os.link(tarinfo._link_target, targetpath) + else: + self._extract_member(self._find_link_target(tarinfo), + targetpath) + except symlink_exception: + if tarinfo.issym(): + linkpath = os.path.join(os.path.dirname(tarinfo.name), + tarinfo.linkname) + else: + linkpath = tarinfo.linkname + else: + try: + self._extract_member(self._find_link_target(tarinfo), + targetpath) + except KeyError: + raise ExtractError("unable to resolve link inside archive") + + def chown(self, tarinfo, targetpath): + """Set owner of targetpath according to tarinfo. + """ + if pwd and hasattr(os, "geteuid") and os.geteuid() == 0: + # We have to be root to do so. + try: + g = grp.getgrnam(tarinfo.gname)[2] + except KeyError: + g = tarinfo.gid + try: + u = pwd.getpwnam(tarinfo.uname)[2] + except KeyError: + u = tarinfo.uid + try: + if tarinfo.issym() and hasattr(os, "lchown"): + os.lchown(targetpath, u, g) + else: + if sys.platform != "os2emx": + os.chown(targetpath, u, g) + except EnvironmentError as e: + raise ExtractError("could not change owner") + + def chmod(self, tarinfo, targetpath): + """Set file permissions of targetpath according to tarinfo. + """ + if hasattr(os, 'chmod'): + try: + os.chmod(targetpath, tarinfo.mode) + except EnvironmentError as e: + raise ExtractError("could not change mode") + + def utime(self, tarinfo, targetpath): + """Set modification time of targetpath according to tarinfo. 
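+
+           Added note: both atime and mtime are set to tarinfo.mtime,
+           since the tar header records only one timestamp.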
+ """ + if not hasattr(os, 'utime'): + return + try: + os.utime(targetpath, (tarinfo.mtime, tarinfo.mtime)) + except EnvironmentError as e: + raise ExtractError("could not change modification time") + + #-------------------------------------------------------------------------- + def next(self): + """Return the next member of the archive as a TarInfo object, when + TarFile is opened for reading. Return None if there is no more + available. + """ + self._check("ra") + if self.firstmember is not None: + m = self.firstmember + self.firstmember = None + return m + + # Read the next block. + self.fileobj.seek(self.offset) + tarinfo = None + while True: + try: + tarinfo = self.tarinfo.fromtarfile(self) + except EOFHeaderError as e: + if self.ignore_zeros: + self._dbg(2, "0x%X: %s" % (self.offset, e)) + self.offset += BLOCKSIZE + continue + except InvalidHeaderError as e: + if self.ignore_zeros: + self._dbg(2, "0x%X: %s" % (self.offset, e)) + self.offset += BLOCKSIZE + continue + elif self.offset == 0: + raise ReadError(str(e)) + except EmptyHeaderError: + if self.offset == 0: + raise ReadError("empty file") + except TruncatedHeaderError as e: + if self.offset == 0: + raise ReadError(str(e)) + except SubsequentHeaderError as e: + raise ReadError(str(e)) + break + + if tarinfo is not None: + self.members.append(tarinfo) + else: + self._loaded = True + + return tarinfo + + #-------------------------------------------------------------------------- + # Little helper methods: + + def _getmember(self, name, tarinfo=None, normalize=False): + """Find an archive member by name from bottom to top. + If tarinfo is given, it is used as the starting point. + """ + # Ensure that all members have been loaded. + members = self.getmembers() + + # Limit the member search list up to tarinfo. + if tarinfo is not None: + members = members[:members.index(tarinfo)] + + if normalize: + name = os.path.normpath(name) + + for member in reversed(members): + if normalize: + member_name = os.path.normpath(member.name) + else: + member_name = member.name + + if name == member_name: + return member + + def _load(self): + """Read through the entire archive file and look for readable + members. + """ + while True: + tarinfo = self.next() + if tarinfo is None: + break + self._loaded = True + + def _check(self, mode=None): + """Check if TarFile is still open, and if the operation's mode + corresponds to TarFile's mode. + """ + if self.closed: + raise IOError("%s is closed" % self.__class__.__name__) + if mode is not None and self.mode not in mode: + raise IOError("bad operation for mode %r" % self.mode) + + def _find_link_target(self, tarinfo): + """Find the target member of a symlink or hardlink member in the + archive. + """ + if tarinfo.issym(): + # Always search the entire archive. + linkname = os.path.dirname(tarinfo.name) + "/" + tarinfo.linkname + limit = None + else: + # Search the archive before the link, because a hard link is + # just a reference to an already archived file. + linkname = tarinfo.linkname + limit = tarinfo + + member = self._getmember(linkname, tarinfo=limit, normalize=True) + if member is None: + raise KeyError("linkname %r not found" % linkname) + return member + + def __iter__(self): + """Provide an iterator object. + """ + if self._loaded: + return iter(self.members) + else: + return TarIter(self) + + def _dbg(self, level, msg): + """Write debugging output to sys.stderr. 
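+
+           Added note: a message is printed only when `level` does not
+           exceed self.debug (0 = silent ... 3 = most verbose).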
+ """ + if level <= self.debug: + print(msg, file=sys.stderr) + + def __enter__(self): + self._check() + return self + + def __exit__(self, type, value, traceback): + if type is None: + self.close() + else: + # An exception occurred. We must not call close() because + # it would try to write end-of-archive blocks and padding. + if not self._extfileobj: + self.fileobj.close() + self.closed = True +# class TarFile + +class TarIter(object): + """Iterator Class. + + for tarinfo in TarFile(...): + suite... + """ + + def __init__(self, tarfile): + """Construct a TarIter object. + """ + self.tarfile = tarfile + self.index = 0 + def __iter__(self): + """Return iterator object. + """ + return self + + def __next__(self): + """Return the next item using TarFile's next() method. + When all members have been read, set TarFile as _loaded. + """ + # Fix for SF #1100429: Under rare circumstances it can + # happen that getmembers() is called during iteration, + # which will cause TarIter to stop prematurely. + if not self.tarfile._loaded: + tarinfo = self.tarfile.next() + if not tarinfo: + self.tarfile._loaded = True + raise StopIteration + else: + try: + tarinfo = self.tarfile.members[self.index] + except IndexError: + raise StopIteration + self.index += 1 + return tarinfo + + next = __next__ # for Python 2.x + +#-------------------- +# exported functions +#-------------------- +def is_tarfile(name): + """Return True if name points to a tar archive that we + are able to handle, else return False. + """ + try: + t = open(name) + t.close() + return True + except TarError: + return False + +bltn_open = open +open = TarFile.open diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distlib/compat.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distlib/compat.py new file mode 100644 index 0000000000000000000000000000000000000000..ff328c8ee491f9f3c943cf637dc7ea17ea33ee3b --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distlib/compat.py @@ -0,0 +1,1120 @@ +# -*- coding: utf-8 -*- +# +# Copyright (C) 2013-2017 Vinay Sajip. +# Licensed to the Python Software Foundation under a contributor agreement. +# See LICENSE.txt and CONTRIBUTORS.txt. 
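+#
+# Note: this module smooths over Python 2/3 stdlib renames by exposing a
+# single set of names for both major versions. An illustrative use (the
+# URL is hypothetical):
+#
+#     >>> from distlib.compat import urlparse, queue, httplib
+#     >>> urlparse('http://example.com/pkg').netloc
+#     'example.com'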
+# +from __future__ import absolute_import + +import os +import re +import sys + +try: + import ssl +except ImportError: # pragma: no cover + ssl = None + +if sys.version_info[0] < 3: # pragma: no cover + from StringIO import StringIO + string_types = basestring, + text_type = unicode + from types import FileType as file_type + import __builtin__ as builtins + import ConfigParser as configparser + from ._backport import shutil + from urlparse import urlparse, urlunparse, urljoin, urlsplit, urlunsplit + from urllib import (urlretrieve, quote as _quote, unquote, url2pathname, + pathname2url, ContentTooShortError, splittype) + + def quote(s): + if isinstance(s, unicode): + s = s.encode('utf-8') + return _quote(s) + + import urllib2 + from urllib2 import (Request, urlopen, URLError, HTTPError, + HTTPBasicAuthHandler, HTTPPasswordMgr, + HTTPHandler, HTTPRedirectHandler, + build_opener) + if ssl: + from urllib2 import HTTPSHandler + import httplib + import xmlrpclib + import Queue as queue + from HTMLParser import HTMLParser + import htmlentitydefs + raw_input = raw_input + from itertools import ifilter as filter + from itertools import ifilterfalse as filterfalse + + _userprog = None + def splituser(host): + """splituser('user[:passwd]@host[:port]') --> 'user[:passwd]', 'host[:port]'.""" + global _userprog + if _userprog is None: + import re + _userprog = re.compile('^(.*)@(.*)$') + + match = _userprog.match(host) + if match: return match.group(1, 2) + return None, host + +else: # pragma: no cover + from io import StringIO + string_types = str, + text_type = str + from io import TextIOWrapper as file_type + import builtins + import configparser + import shutil + from urllib.parse import (urlparse, urlunparse, urljoin, splituser, quote, + unquote, urlsplit, urlunsplit, splittype) + from urllib.request import (urlopen, urlretrieve, Request, url2pathname, + pathname2url, + HTTPBasicAuthHandler, HTTPPasswordMgr, + HTTPHandler, HTTPRedirectHandler, + build_opener) + if ssl: + from urllib.request import HTTPSHandler + from urllib.error import HTTPError, URLError, ContentTooShortError + import http.client as httplib + import urllib.request as urllib2 + import xmlrpc.client as xmlrpclib + import queue + from html.parser import HTMLParser + import html.entities as htmlentitydefs + raw_input = input + from itertools import filterfalse + filter = filter + +try: + from ssl import match_hostname, CertificateError +except ImportError: # pragma: no cover + class CertificateError(ValueError): + pass + + + def _dnsname_match(dn, hostname, max_wildcards=1): + """Matching according to RFC 6125, section 6.4.3 + + http://tools.ietf.org/html/rfc6125#section-6.4.3 + """ + pats = [] + if not dn: + return False + + parts = dn.split('.') + leftmost, remainder = parts[0], parts[1:] + + wildcards = leftmost.count('*') + if wildcards > max_wildcards: + # Issue #17980: avoid denials of service by refusing more + # than one wildcard per fragment. A survey of established + # policy among SSL implementations showed it to be a + # reasonable choice. + raise CertificateError( + "too many wildcards in certificate DNS name: " + repr(dn)) + + # speed up common case w/o wildcards + if not wildcards: + return dn.lower() == hostname.lower() + + # RFC 6125, section 6.4.3, subitem 1. + # The client SHOULD NOT attempt to match a presented identifier in which + # the wildcard character comprises a label other than the left-most label. + if leftmost == '*': + # When '*' is a fragment by itself, it matches a non-empty dotless + # fragment. 
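+            # For example (illustrative hostnames):
+            #     '*.example.com'  matches 'www.example.com'
+            #                      but not 'example.com' or 'a.b.example.com'
+            #     'f*.example.com' matches 'foo.example.com'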
+ pats.append('[^.]+') + elif leftmost.startswith('xn--') or hostname.startswith('xn--'): + # RFC 6125, section 6.4.3, subitem 3. + # The client SHOULD NOT attempt to match a presented identifier + # where the wildcard character is embedded within an A-label or + # U-label of an internationalized domain name. + pats.append(re.escape(leftmost)) + else: + # Otherwise, '*' matches any dotless string, e.g. www* + pats.append(re.escape(leftmost).replace(r'\*', '[^.]*')) + + # add the remaining fragments, ignore any wildcards + for frag in remainder: + pats.append(re.escape(frag)) + + pat = re.compile(r'\A' + r'\.'.join(pats) + r'\Z', re.IGNORECASE) + return pat.match(hostname) + + + def match_hostname(cert, hostname): + """Verify that *cert* (in decoded format as returned by + SSLSocket.getpeercert()) matches the *hostname*. RFC 2818 and RFC 6125 + rules are followed, but IP addresses are not accepted for *hostname*. + + CertificateError is raised on failure. On success, the function + returns nothing. + """ + if not cert: + raise ValueError("empty or no certificate, match_hostname needs a " + "SSL socket or SSL context with either " + "CERT_OPTIONAL or CERT_REQUIRED") + dnsnames = [] + san = cert.get('subjectAltName', ()) + for key, value in san: + if key == 'DNS': + if _dnsname_match(value, hostname): + return + dnsnames.append(value) + if not dnsnames: + # The subject is only checked when there is no dNSName entry + # in subjectAltName + for sub in cert.get('subject', ()): + for key, value in sub: + # XXX according to RFC 2818, the most specific Common Name + # must be used. + if key == 'commonName': + if _dnsname_match(value, hostname): + return + dnsnames.append(value) + if len(dnsnames) > 1: + raise CertificateError("hostname %r " + "doesn't match either of %s" + % (hostname, ', '.join(map(repr, dnsnames)))) + elif len(dnsnames) == 1: + raise CertificateError("hostname %r " + "doesn't match %r" + % (hostname, dnsnames[0])) + else: + raise CertificateError("no appropriate commonName or " + "subjectAltName fields were found") + + +try: + from types import SimpleNamespace as Container +except ImportError: # pragma: no cover + class Container(object): + """ + A generic container for when multiple values need to be returned + """ + def __init__(self, **kwargs): + self.__dict__.update(kwargs) + + +try: + from shutil import which +except ImportError: # pragma: no cover + # Implementation from Python 3.3 + def which(cmd, mode=os.F_OK | os.X_OK, path=None): + """Given a command, mode, and a PATH string, return the path which + conforms to the given mode on the PATH, or None if there is no such + file. + + `mode` defaults to os.F_OK | os.X_OK. `path` defaults to the result + of os.environ.get("PATH"), or can be overridden with a custom search + path. + + """ + # Check that a given file can be accessed with the correct mode. + # Additionally check that `file` is not a directory, as on Windows + # directories pass the os.access check. + def _access_check(fn, mode): + return (os.path.exists(fn) and os.access(fn, mode) + and not os.path.isdir(fn)) + + # If we're given a path with a directory part, look it up directly rather + # than referring to PATH directories. This includes checking relative to the + # current directory, e.g. 
./script + if os.path.dirname(cmd): + if _access_check(cmd, mode): + return cmd + return None + + if path is None: + path = os.environ.get("PATH", os.defpath) + if not path: + return None + path = path.split(os.pathsep) + + if sys.platform == "win32": + # The current directory takes precedence on Windows. + if not os.curdir in path: + path.insert(0, os.curdir) + + # PATHEXT is necessary to check on Windows. + pathext = os.environ.get("PATHEXT", "").split(os.pathsep) + # See if the given file matches any of the expected path extensions. + # This will allow us to short circuit when given "python.exe". + # If it does match, only test that one, otherwise we have to try + # others. + if any(cmd.lower().endswith(ext.lower()) for ext in pathext): + files = [cmd] + else: + files = [cmd + ext for ext in pathext] + else: + # On other platforms you don't have things like PATHEXT to tell you + # what file suffixes are executable, so just pass on cmd as-is. + files = [cmd] + + seen = set() + for dir in path: + normdir = os.path.normcase(dir) + if not normdir in seen: + seen.add(normdir) + for thefile in files: + name = os.path.join(dir, thefile) + if _access_check(name, mode): + return name + return None + + +# ZipFile is a context manager in 2.7, but not in 2.6 + +from zipfile import ZipFile as BaseZipFile + +if hasattr(BaseZipFile, '__enter__'): # pragma: no cover + ZipFile = BaseZipFile +else: # pragma: no cover + from zipfile import ZipExtFile as BaseZipExtFile + + class ZipExtFile(BaseZipExtFile): + def __init__(self, base): + self.__dict__.update(base.__dict__) + + def __enter__(self): + return self + + def __exit__(self, *exc_info): + self.close() + # return None, so if an exception occurred, it will propagate + + class ZipFile(BaseZipFile): + def __enter__(self): + return self + + def __exit__(self, *exc_info): + self.close() + # return None, so if an exception occurred, it will propagate + + def open(self, *args, **kwargs): + base = BaseZipFile.open(self, *args, **kwargs) + return ZipExtFile(base) + +try: + from platform import python_implementation +except ImportError: # pragma: no cover + def python_implementation(): + """Return a string identifying the Python implementation.""" + if 'PyPy' in sys.version: + return 'PyPy' + if os.name == 'java': + return 'Jython' + if sys.version.startswith('IronPython'): + return 'IronPython' + return 'CPython' + +try: + import sysconfig +except ImportError: # pragma: no cover + from ._backport import sysconfig + +try: + callable = callable +except NameError: # pragma: no cover + from collections import Callable + + def callable(obj): + return isinstance(obj, Callable) + + +try: + fsencode = os.fsencode + fsdecode = os.fsdecode +except AttributeError: # pragma: no cover + # Issue #99: on some systems (e.g. containerised), + # sys.getfilesystemencoding() returns None, and we need a real value, + # so fall back to utf-8. From the CPython 2.7 docs relating to Unix and + # sys.getfilesystemencoding(): the return value is "the user’s preference + # according to the result of nl_langinfo(CODESET), or None if the + # nl_langinfo(CODESET) failed." 
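+    #
+    # Illustration of the round-trip the fallbacks below provide, assuming
+    # a utf-8 filesystem encoding:
+    #
+    #     >>> fsencode(u'caf\xe9')
+    #     b'caf\xc3\xa9'
+    #     >>> fsdecode(b'caf\xc3\xa9') == u'caf\xe9'
+    #     True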
+ _fsencoding = sys.getfilesystemencoding() or 'utf-8' + if _fsencoding == 'mbcs': + _fserrors = 'strict' + else: + _fserrors = 'surrogateescape' + + def fsencode(filename): + if isinstance(filename, bytes): + return filename + elif isinstance(filename, text_type): + return filename.encode(_fsencoding, _fserrors) + else: + raise TypeError("expect bytes or str, not %s" % + type(filename).__name__) + + def fsdecode(filename): + if isinstance(filename, text_type): + return filename + elif isinstance(filename, bytes): + return filename.decode(_fsencoding, _fserrors) + else: + raise TypeError("expect bytes or str, not %s" % + type(filename).__name__) + +try: + from tokenize import detect_encoding +except ImportError: # pragma: no cover + from codecs import BOM_UTF8, lookup + import re + + cookie_re = re.compile(r"coding[:=]\s*([-\w.]+)") + + def _get_normal_name(orig_enc): + """Imitates get_normal_name in tokenizer.c.""" + # Only care about the first 12 characters. + enc = orig_enc[:12].lower().replace("_", "-") + if enc == "utf-8" or enc.startswith("utf-8-"): + return "utf-8" + if enc in ("latin-1", "iso-8859-1", "iso-latin-1") or \ + enc.startswith(("latin-1-", "iso-8859-1-", "iso-latin-1-")): + return "iso-8859-1" + return orig_enc + + def detect_encoding(readline): + """ + The detect_encoding() function is used to detect the encoding that should + be used to decode a Python source file. It requires one argument, readline, + in the same way as the tokenize() generator. + + It will call readline a maximum of twice, and return the encoding used + (as a string) and a list of any lines (left as bytes) it has read in. + + It detects the encoding from the presence of a utf-8 bom or an encoding + cookie as specified in pep-0263. If both a bom and a cookie are present, + but disagree, a SyntaxError will be raised. If the encoding cookie is an + invalid charset, raise a SyntaxError. Note that if a utf-8 bom is found, + 'utf-8-sig' is returned. + + If no encoding is specified, then the default of 'utf-8' will be returned. + """ + try: + filename = readline.__self__.name + except AttributeError: + filename = None + bom_found = False + encoding = None + default = 'utf-8' + def read_or_stop(): + try: + return readline() + except StopIteration: + return b'' + + def find_cookie(line): + try: + # Decode as UTF-8. Either the line is an encoding declaration, + # in which case it should be pure ASCII, or it must be UTF-8 + # per default encoding. 
+ line_string = line.decode('utf-8') + except UnicodeDecodeError: + msg = "invalid or missing encoding declaration" + if filename is not None: + msg = '{} for {!r}'.format(msg, filename) + raise SyntaxError(msg) + + matches = cookie_re.findall(line_string) + if not matches: + return None + encoding = _get_normal_name(matches[0]) + try: + codec = lookup(encoding) + except LookupError: + # This behaviour mimics the Python interpreter + if filename is None: + msg = "unknown encoding: " + encoding + else: + msg = "unknown encoding for {!r}: {}".format(filename, + encoding) + raise SyntaxError(msg) + + if bom_found: + if codec.name != 'utf-8': + # This behaviour mimics the Python interpreter + if filename is None: + msg = 'encoding problem: utf-8' + else: + msg = 'encoding problem for {!r}: utf-8'.format(filename) + raise SyntaxError(msg) + encoding += '-sig' + return encoding + + first = read_or_stop() + if first.startswith(BOM_UTF8): + bom_found = True + first = first[3:] + default = 'utf-8-sig' + if not first: + return default, [] + + encoding = find_cookie(first) + if encoding: + return encoding, [first] + + second = read_or_stop() + if not second: + return default, [first] + + encoding = find_cookie(second) + if encoding: + return encoding, [first, second] + + return default, [first, second] + +# For converting & <-> & etc. +try: + from html import escape +except ImportError: + from cgi import escape +if sys.version_info[:2] < (3, 4): + unescape = HTMLParser().unescape +else: + from html import unescape + +try: + from collections import ChainMap +except ImportError: # pragma: no cover + from collections import MutableMapping + + try: + from reprlib import recursive_repr as _recursive_repr + except ImportError: + def _recursive_repr(fillvalue='...'): + ''' + Decorator to make a repr function return fillvalue for a recursive + call + ''' + + def decorating_function(user_function): + repr_running = set() + + def wrapper(self): + key = id(self), get_ident() + if key in repr_running: + return fillvalue + repr_running.add(key) + try: + result = user_function(self) + finally: + repr_running.discard(key) + return result + + # Can't use functools.wraps() here because of bootstrap issues + wrapper.__module__ = getattr(user_function, '__module__') + wrapper.__doc__ = getattr(user_function, '__doc__') + wrapper.__name__ = getattr(user_function, '__name__') + wrapper.__annotations__ = getattr(user_function, '__annotations__', {}) + return wrapper + + return decorating_function + + class ChainMap(MutableMapping): + ''' A ChainMap groups multiple dicts (or other mappings) together + to create a single, updateable view. + + The underlying mappings are stored in a list. That list is public and can + accessed or updated using the *maps* attribute. There is no other state. + + Lookups search the underlying mappings successively until a key is found. + In contrast, writes, updates, and deletions only operate on the first + mapping. + + ''' + + def __init__(self, *maps): + '''Initialize a ChainMap by setting *maps* to the given mappings. + If no mappings are provided, a single empty dictionary is used. 
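+
+        An illustrative session (dict repr order may vary on older
+        Pythons):
+
+            >>> cm = ChainMap({'x': 1}, {'x': 2, 'y': 3})
+            >>> cm['x'], cm['y']        # lookups search maps left to right
+            (1, 3)
+            >>> cm['z'] = 9             # writes always go to the first map
+            >>> cm.maps[0]
+            {'x': 1, 'z': 9}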
+ + ''' + self.maps = list(maps) or [{}] # always at least one map + + def __missing__(self, key): + raise KeyError(key) + + def __getitem__(self, key): + for mapping in self.maps: + try: + return mapping[key] # can't use 'key in mapping' with defaultdict + except KeyError: + pass + return self.__missing__(key) # support subclasses that define __missing__ + + def get(self, key, default=None): + return self[key] if key in self else default + + def __len__(self): + return len(set().union(*self.maps)) # reuses stored hash values if possible + + def __iter__(self): + return iter(set().union(*self.maps)) + + def __contains__(self, key): + return any(key in m for m in self.maps) + + def __bool__(self): + return any(self.maps) + + @_recursive_repr() + def __repr__(self): + return '{0.__class__.__name__}({1})'.format( + self, ', '.join(map(repr, self.maps))) + + @classmethod + def fromkeys(cls, iterable, *args): + 'Create a ChainMap with a single dict created from the iterable.' + return cls(dict.fromkeys(iterable, *args)) + + def copy(self): + 'New ChainMap or subclass with a new copy of maps[0] and refs to maps[1:]' + return self.__class__(self.maps[0].copy(), *self.maps[1:]) + + __copy__ = copy + + def new_child(self): # like Django's Context.push() + 'New ChainMap with a new dict followed by all previous maps.' + return self.__class__({}, *self.maps) + + @property + def parents(self): # like Django's Context.pop() + 'New ChainMap from maps[1:].' + return self.__class__(*self.maps[1:]) + + def __setitem__(self, key, value): + self.maps[0][key] = value + + def __delitem__(self, key): + try: + del self.maps[0][key] + except KeyError: + raise KeyError('Key not found in the first mapping: {!r}'.format(key)) + + def popitem(self): + 'Remove and return an item pair from maps[0]. Raise KeyError is maps[0] is empty.' + try: + return self.maps[0].popitem() + except KeyError: + raise KeyError('No keys found in the first mapping.') + + def pop(self, key, *args): + 'Remove *key* from maps[0] and return its value. Raise KeyError if *key* not in maps[0].' + try: + return self.maps[0].pop(key, *args) + except KeyError: + raise KeyError('Key not found in the first mapping: {!r}'.format(key)) + + def clear(self): + 'Clear maps[0], leaving maps[1:] intact.' + self.maps[0].clear() + +try: + from importlib.util import cache_from_source # Python >= 3.4 +except ImportError: # pragma: no cover + try: + from imp import cache_from_source + except ImportError: # pragma: no cover + def cache_from_source(path, debug_override=None): + assert path.endswith('.py') + if debug_override is None: + debug_override = __debug__ + if debug_override: + suffix = 'c' + else: + suffix = 'o' + return path + suffix + +try: + from collections import OrderedDict +except ImportError: # pragma: no cover +## {{{ http://code.activestate.com/recipes/576693/ (r9) +# Backport of OrderedDict() class that runs on Python 2.4, 2.5, 2.6, 2.7 and pypy. +# Passes Python2.7's test suite and incorporates all the latest updates. + try: + from thread import get_ident as _get_ident + except ImportError: + from dummy_thread import get_ident as _get_ident + + try: + from _abcoll import KeysView, ValuesView, ItemsView + except ImportError: + pass + + + class OrderedDict(dict): + 'Dictionary that remembers insertion order' + # An inherited dict maps keys to values. + # The inherited dict provides __getitem__, __len__, __contains__, and get. + # The remaining methods are order-aware. + # Big-O running times for all methods are the same as for regular dictionaries. 
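+
+        # An illustrative sketch of the layout described below:
+        #
+        #     root = [root, root, None]            # empty dict: sentinel only
+        #     after od['a'] = 1:
+        #         link_a = [root, root, 'a']       # [PREV, NEXT, KEY]
+        #         root   = [link_a, link_a, None]  # root <-> link_a <-> root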
+ + # The internal self.__map dictionary maps keys to links in a doubly linked list. + # The circular doubly linked list starts and ends with a sentinel element. + # The sentinel element never gets deleted (this simplifies the algorithm). + # Each link is stored as a list of length three: [PREV, NEXT, KEY]. + + def __init__(self, *args, **kwds): + '''Initialize an ordered dictionary. Signature is the same as for + regular dictionaries, but keyword arguments are not recommended + because their insertion order is arbitrary. + + ''' + if len(args) > 1: + raise TypeError('expected at most 1 arguments, got %d' % len(args)) + try: + self.__root + except AttributeError: + self.__root = root = [] # sentinel node + root[:] = [root, root, None] + self.__map = {} + self.__update(*args, **kwds) + + def __setitem__(self, key, value, dict_setitem=dict.__setitem__): + 'od.__setitem__(i, y) <==> od[i]=y' + # Setting a new item creates a new link which goes at the end of the linked + # list, and the inherited dictionary is updated with the new key/value pair. + if key not in self: + root = self.__root + last = root[0] + last[1] = root[0] = self.__map[key] = [last, root, key] + dict_setitem(self, key, value) + + def __delitem__(self, key, dict_delitem=dict.__delitem__): + 'od.__delitem__(y) <==> del od[y]' + # Deleting an existing item uses self.__map to find the link which is + # then removed by updating the links in the predecessor and successor nodes. + dict_delitem(self, key) + link_prev, link_next, key = self.__map.pop(key) + link_prev[1] = link_next + link_next[0] = link_prev + + def __iter__(self): + 'od.__iter__() <==> iter(od)' + root = self.__root + curr = root[1] + while curr is not root: + yield curr[2] + curr = curr[1] + + def __reversed__(self): + 'od.__reversed__() <==> reversed(od)' + root = self.__root + curr = root[0] + while curr is not root: + yield curr[2] + curr = curr[0] + + def clear(self): + 'od.clear() -> None. Remove all items from od.' + try: + for node in self.__map.itervalues(): + del node[:] + root = self.__root + root[:] = [root, root, None] + self.__map.clear() + except AttributeError: + pass + dict.clear(self) + + def popitem(self, last=True): + '''od.popitem() -> (k, v), return and remove a (key, value) pair. + Pairs are returned in LIFO order if last is true or FIFO order if false. + + ''' + if not self: + raise KeyError('dictionary is empty') + root = self.__root + if last: + link = root[0] + link_prev = link[0] + link_prev[1] = root + root[0] = link_prev + else: + link = root[1] + link_next = link[1] + root[1] = link_next + link_next[0] = root + key = link[2] + del self.__map[key] + value = dict.pop(self, key) + return key, value + + # -- the following methods do not depend on the internal structure -- + + def keys(self): + 'od.keys() -> list of keys in od' + return list(self) + + def values(self): + 'od.values() -> list of values in od' + return [self[key] for key in self] + + def items(self): + 'od.items() -> list of (key, value) pairs in od' + return [(key, self[key]) for key in self] + + def iterkeys(self): + 'od.iterkeys() -> an iterator over the keys in od' + return iter(self) + + def itervalues(self): + 'od.itervalues -> an iterator over the values in od' + for k in self: + yield self[k] + + def iteritems(self): + 'od.iteritems -> an iterator over the (key, value) items in od' + for k in self: + yield (k, self[k]) + + def update(*args, **kwds): + '''od.update(E, **F) -> None. Update od from dict/iterable E and F. 
+ + If E is a dict instance, does: for k in E: od[k] = E[k] + If E has a .keys() method, does: for k in E.keys(): od[k] = E[k] + Or if E is an iterable of items, does: for k, v in E: od[k] = v + In either case, this is followed by: for k, v in F.items(): od[k] = v + + ''' + if len(args) > 2: + raise TypeError('update() takes at most 2 positional ' + 'arguments (%d given)' % (len(args),)) + elif not args: + raise TypeError('update() takes at least 1 argument (0 given)') + self = args[0] + # Make progressively weaker assumptions about "other" + other = () + if len(args) == 2: + other = args[1] + if isinstance(other, dict): + for key in other: + self[key] = other[key] + elif hasattr(other, 'keys'): + for key in other.keys(): + self[key] = other[key] + else: + for key, value in other: + self[key] = value + for key, value in kwds.items(): + self[key] = value + + __update = update # let subclasses override update without breaking __init__ + + __marker = object() + + def pop(self, key, default=__marker): + '''od.pop(k[,d]) -> v, remove specified key and return the corresponding value. + If key is not found, d is returned if given, otherwise KeyError is raised. + + ''' + if key in self: + result = self[key] + del self[key] + return result + if default is self.__marker: + raise KeyError(key) + return default + + def setdefault(self, key, default=None): + 'od.setdefault(k[,d]) -> od.get(k,d), also set od[k]=d if k not in od' + if key in self: + return self[key] + self[key] = default + return default + + def __repr__(self, _repr_running=None): + 'od.__repr__() <==> repr(od)' + if not _repr_running: _repr_running = {} + call_key = id(self), _get_ident() + if call_key in _repr_running: + return '...' + _repr_running[call_key] = 1 + try: + if not self: + return '%s()' % (self.__class__.__name__,) + return '%s(%r)' % (self.__class__.__name__, self.items()) + finally: + del _repr_running[call_key] + + def __reduce__(self): + 'Return state information for pickling' + items = [[k, self[k]] for k in self] + inst_dict = vars(self).copy() + for k in vars(OrderedDict()): + inst_dict.pop(k, None) + if inst_dict: + return (self.__class__, (items,), inst_dict) + return self.__class__, (items,) + + def copy(self): + 'od.copy() -> a shallow copy of od' + return self.__class__(self) + + @classmethod + def fromkeys(cls, iterable, value=None): + '''OD.fromkeys(S[, v]) -> New ordered dictionary with keys from S + and values equal to v (which defaults to None). + + ''' + d = cls() + for key in iterable: + d[key] = value + return d + + def __eq__(self, other): + '''od.__eq__(y) <==> od==y. Comparison to another OD is order-sensitive + while comparison to a regular mapping is order-insensitive. 
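+
+        For example:
+
+            >>> OrderedDict([('a', 1), ('b', 2)]) == OrderedDict([('b', 2), ('a', 1)])
+            False
+            >>> OrderedDict([('a', 1), ('b', 2)]) == {'b': 2, 'a': 1}
+            True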
+ + ''' + if isinstance(other, OrderedDict): + return len(self)==len(other) and self.items() == other.items() + return dict.__eq__(self, other) + + def __ne__(self, other): + return not self == other + + # -- the following methods are only used in Python 2.7 -- + + def viewkeys(self): + "od.viewkeys() -> a set-like object providing a view on od's keys" + return KeysView(self) + + def viewvalues(self): + "od.viewvalues() -> an object providing a view on od's values" + return ValuesView(self) + + def viewitems(self): + "od.viewitems() -> a set-like object providing a view on od's items" + return ItemsView(self) + +try: + from logging.config import BaseConfigurator, valid_ident +except ImportError: # pragma: no cover + IDENTIFIER = re.compile('^[a-z_][a-z0-9_]*$', re.I) + + + def valid_ident(s): + m = IDENTIFIER.match(s) + if not m: + raise ValueError('Not a valid Python identifier: %r' % s) + return True + + + # The ConvertingXXX classes are wrappers around standard Python containers, + # and they serve to convert any suitable values in the container. The + # conversion converts base dicts, lists and tuples to their wrapped + # equivalents, whereas strings which match a conversion format are converted + # appropriately. + # + # Each wrapper should have a configurator attribute holding the actual + # configurator to use for conversion. + + class ConvertingDict(dict): + """A converting dictionary wrapper.""" + + def __getitem__(self, key): + value = dict.__getitem__(self, key) + result = self.configurator.convert(value) + #If the converted value is different, save for next time + if value is not result: + self[key] = result + if type(result) in (ConvertingDict, ConvertingList, + ConvertingTuple): + result.parent = self + result.key = key + return result + + def get(self, key, default=None): + value = dict.get(self, key, default) + result = self.configurator.convert(value) + #If the converted value is different, save for next time + if value is not result: + self[key] = result + if type(result) in (ConvertingDict, ConvertingList, + ConvertingTuple): + result.parent = self + result.key = key + return result + + def pop(self, key, default=None): + value = dict.pop(self, key, default) + result = self.configurator.convert(value) + if value is not result: + if type(result) in (ConvertingDict, ConvertingList, + ConvertingTuple): + result.parent = self + result.key = key + return result + + class ConvertingList(list): + """A converting list wrapper.""" + def __getitem__(self, key): + value = list.__getitem__(self, key) + result = self.configurator.convert(value) + #If the converted value is different, save for next time + if value is not result: + self[key] = result + if type(result) in (ConvertingDict, ConvertingList, + ConvertingTuple): + result.parent = self + result.key = key + return result + + def pop(self, idx=-1): + value = list.pop(self, idx) + result = self.configurator.convert(value) + if value is not result: + if type(result) in (ConvertingDict, ConvertingList, + ConvertingTuple): + result.parent = self + return result + + class ConvertingTuple(tuple): + """A converting tuple wrapper.""" + def __getitem__(self, key): + value = tuple.__getitem__(self, key) + result = self.configurator.convert(value) + if value is not result: + if type(result) in (ConvertingDict, ConvertingList, + ConvertingTuple): + result.parent = self + result.key = key + return result + + class BaseConfigurator(object): + """ + The configurator base class which defines some useful defaults. 
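+
+        For example, given the (hypothetical) config
+        ``{'handlers': {'h1': {'level': 'DEBUG'}}}``, the two protocols
+        wired up in ``value_converters`` below behave as follows:
+
+            'cfg://handlers.h1.level'  resolves to 'DEBUG'     (cfg_convert)
+            'ext://sys.stderr'         resolves to sys.stderr  (ext_convert)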
+        """
+
+        CONVERT_PATTERN = re.compile(r'^(?P<prefix>[a-z]+)://(?P<suffix>.*)$')
+
+        WORD_PATTERN = re.compile(r'^\s*(\w+)\s*')
+        DOT_PATTERN = re.compile(r'^\.\s*(\w+)\s*')
+        INDEX_PATTERN = re.compile(r'^\[\s*(\w+)\s*\]\s*')
+        DIGIT_PATTERN = re.compile(r'^\d+$')
+
+        value_converters = {
+            'ext' : 'ext_convert',
+            'cfg' : 'cfg_convert',
+        }
+
+        # We might want to use a different one, e.g. importlib
+        importer = staticmethod(__import__)
+
+        def __init__(self, config):
+            self.config = ConvertingDict(config)
+            self.config.configurator = self
+
+        def resolve(self, s):
+            """
+            Resolve strings to objects using standard import and attribute
+            syntax.
+            """
+            name = s.split('.')
+            used = name.pop(0)
+            try:
+                found = self.importer(used)
+                for frag in name:
+                    used += '.' + frag
+                    try:
+                        found = getattr(found, frag)
+                    except AttributeError:
+                        self.importer(used)
+                        found = getattr(found, frag)
+                return found
+            except ImportError:
+                e, tb = sys.exc_info()[1:]
+                v = ValueError('Cannot resolve %r: %s' % (s, e))
+                v.__cause__, v.__traceback__ = e, tb
+                raise v
+
+        def ext_convert(self, value):
+            """Default converter for the ext:// protocol."""
+            return self.resolve(value)
+
+        def cfg_convert(self, value):
+            """Default converter for the cfg:// protocol."""
+            rest = value
+            m = self.WORD_PATTERN.match(rest)
+            if m is None:
+                raise ValueError("Unable to convert %r" % value)
+            else:
+                rest = rest[m.end():]
+                d = self.config[m.groups()[0]]
+                #print d, rest
+                while rest:
+                    m = self.DOT_PATTERN.match(rest)
+                    if m:
+                        d = d[m.groups()[0]]
+                    else:
+                        m = self.INDEX_PATTERN.match(rest)
+                        if m:
+                            idx = m.groups()[0]
+                            if not self.DIGIT_PATTERN.match(idx):
+                                d = d[idx]
+                            else:
+                                try:
+                                    n = int(idx) # try as number first (most likely)
+                                    d = d[n]
+                                except TypeError:
+                                    d = d[idx]
+                    if m:
+                        rest = rest[m.end():]
+                    else:
+                        raise ValueError('Unable to convert '
+                                         '%r at %r' % (value, rest))
+            #rest should be empty
+            return d
+
+        def convert(self, value):
+            """
+            Convert values to an appropriate type. dicts, lists and tuples are
+            replaced by their converting alternatives. Strings are checked to
+            see if they have a conversion format and are converted if they do.
+ """ + if not isinstance(value, ConvertingDict) and isinstance(value, dict): + value = ConvertingDict(value) + value.configurator = self + elif not isinstance(value, ConvertingList) and isinstance(value, list): + value = ConvertingList(value) + value.configurator = self + elif not isinstance(value, ConvertingTuple) and\ + isinstance(value, tuple): + value = ConvertingTuple(value) + value.configurator = self + elif isinstance(value, string_types): + m = self.CONVERT_PATTERN.match(value) + if m: + d = m.groupdict() + prefix = d['prefix'] + converter = self.value_converters.get(prefix, None) + if converter: + suffix = d['suffix'] + converter = getattr(self, converter) + value = converter(suffix) + return value + + def configure_custom(self, config): + """Configure an object with a user-supplied factory.""" + c = config.pop('()') + if not callable(c): + c = self.resolve(c) + props = config.pop('.', None) + # Check for valid identifiers + kwargs = dict([(k, config[k]) for k in config if valid_ident(k)]) + result = c(**kwargs) + if props: + for name, value in props.items(): + setattr(result, name, value) + return result + + def as_tuple(self, value): + """Utility function which converts lists to tuples.""" + if isinstance(value, list): + value = tuple(value) + return value diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distlib/database.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distlib/database.py new file mode 100644 index 0000000000000000000000000000000000000000..c16c0c8d9edd833e2a2bcf5322766c4709fc1e64 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distlib/database.py @@ -0,0 +1,1339 @@ +# -*- coding: utf-8 -*- +# +# Copyright (C) 2012-2017 The Python Software Foundation. +# See LICENSE.txt and CONTRIBUTORS.txt. +# +"""PEP 376 implementation.""" + +from __future__ import unicode_literals + +import base64 +import codecs +import contextlib +import hashlib +import logging +import os +import posixpath +import sys +import zipimport + +from . import DistlibException, resources +from .compat import StringIO +from .version import get_scheme, UnsupportedVersionError +from .metadata import (Metadata, METADATA_FILENAME, WHEEL_METADATA_FILENAME, + LEGACY_METADATA_FILENAME) +from .util import (parse_requirement, cached_property, parse_name_and_version, + read_exports, write_exports, CSVReader, CSVWriter) + + +__all__ = ['Distribution', 'BaseInstalledDistribution', + 'InstalledDistribution', 'EggInfoDistribution', + 'DistributionPath'] + + +logger = logging.getLogger(__name__) + +EXPORTS_FILENAME = 'pydist-exports.json' +COMMANDS_FILENAME = 'pydist-commands.json' + +DIST_FILES = ('INSTALLER', METADATA_FILENAME, 'RECORD', 'REQUESTED', + 'RESOURCES', EXPORTS_FILENAME, 'SHARED') + +DISTINFO_EXT = '.dist-info' + + +class _Cache(object): + """ + A simple cache mapping names and .dist-info paths to distributions + """ + def __init__(self): + """ + Initialise an instance. There is normally one for each DistributionPath. + """ + self.name = {} + self.path = {} + self.generated = False + + def clear(self): + """ + Clear the cache, setting it to its initial state. + """ + self.name.clear() + self.path.clear() + self.generated = False + + def add(self, dist): + """ + Add a distribution to the cache. + :param dist: The distribution to add. 
+ """ + if dist.path not in self.path: + self.path[dist.path] = dist + self.name.setdefault(dist.key, []).append(dist) + + +class DistributionPath(object): + """ + Represents a set of distributions installed on a path (typically sys.path). + """ + def __init__(self, path=None, include_egg=False): + """ + Create an instance from a path, optionally including legacy (distutils/ + setuptools/distribute) distributions. + :param path: The path to use, as a list of directories. If not specified, + sys.path is used. + :param include_egg: If True, this instance will look for and return legacy + distributions as well as those based on PEP 376. + """ + if path is None: + path = sys.path + self.path = path + self._include_dist = True + self._include_egg = include_egg + + self._cache = _Cache() + self._cache_egg = _Cache() + self._cache_enabled = True + self._scheme = get_scheme('default') + + def _get_cache_enabled(self): + return self._cache_enabled + + def _set_cache_enabled(self, value): + self._cache_enabled = value + + cache_enabled = property(_get_cache_enabled, _set_cache_enabled) + + def clear_cache(self): + """ + Clears the internal cache. + """ + self._cache.clear() + self._cache_egg.clear() + + + def _yield_distributions(self): + """ + Yield .dist-info and/or .egg(-info) distributions. + """ + # We need to check if we've seen some resources already, because on + # some Linux systems (e.g. some Debian/Ubuntu variants) there are + # symlinks which alias other files in the environment. + seen = set() + for path in self.path: + finder = resources.finder_for_path(path) + if finder is None: + continue + r = finder.find('') + if not r or not r.is_container: + continue + rset = sorted(r.resources) + for entry in rset: + r = finder.find(entry) + if not r or r.path in seen: + continue + if self._include_dist and entry.endswith(DISTINFO_EXT): + possible_filenames = [METADATA_FILENAME, + WHEEL_METADATA_FILENAME, + LEGACY_METADATA_FILENAME] + for metadata_filename in possible_filenames: + metadata_path = posixpath.join(entry, metadata_filename) + pydist = finder.find(metadata_path) + if pydist: + break + else: + continue + + with contextlib.closing(pydist.as_stream()) as stream: + metadata = Metadata(fileobj=stream, scheme='legacy') + logger.debug('Found %s', r.path) + seen.add(r.path) + yield new_dist_class(r.path, metadata=metadata, + env=self) + elif self._include_egg and entry.endswith(('.egg-info', + '.egg')): + logger.debug('Found %s', r.path) + seen.add(r.path) + yield old_dist_class(r.path, self) + + def _generate_cache(self): + """ + Scan the path for distributions and populate the cache with + those that are found. + """ + gen_dist = not self._cache.generated + gen_egg = self._include_egg and not self._cache_egg.generated + if gen_dist or gen_egg: + for dist in self._yield_distributions(): + if isinstance(dist, InstalledDistribution): + self._cache.add(dist) + else: + self._cache_egg.add(dist) + + if gen_dist: + self._cache.generated = True + if gen_egg: + self._cache_egg.generated = True + + @classmethod + def distinfo_dirname(cls, name, version): + """ + The *name* and *version* parameters are converted into their + filename-escaped form, i.e. any ``'-'`` characters are replaced + with ``'_'`` other than the one in ``'dist-info'`` and the one + separating the name from the version number. + + :parameter name: is converted to a standard distribution name by replacing + any runs of non- alphanumeric characters with a single + ``'-'``. 
+ :type name: string + :parameter version: is converted to a standard version string. Spaces + become dots, and all other non-alphanumeric characters + (except dots) become dashes, with runs of multiple + dashes condensed to a single dash. + :type version: string + :returns: directory name + :rtype: string""" + name = name.replace('-', '_') + return '-'.join([name, version]) + DISTINFO_EXT + + def get_distributions(self): + """ + Provides an iterator that looks for distributions and returns + :class:`InstalledDistribution` or + :class:`EggInfoDistribution` instances for each one of them. + + :rtype: iterator of :class:`InstalledDistribution` and + :class:`EggInfoDistribution` instances + """ + if not self._cache_enabled: + for dist in self._yield_distributions(): + yield dist + else: + self._generate_cache() + + for dist in self._cache.path.values(): + yield dist + + if self._include_egg: + for dist in self._cache_egg.path.values(): + yield dist + + def get_distribution(self, name): + """ + Looks for a named distribution on the path. + + This function only returns the first result found, as no more than one + value is expected. If nothing is found, ``None`` is returned. + + :rtype: :class:`InstalledDistribution`, :class:`EggInfoDistribution` + or ``None`` + """ + result = None + name = name.lower() + if not self._cache_enabled: + for dist in self._yield_distributions(): + if dist.key == name: + result = dist + break + else: + self._generate_cache() + + if name in self._cache.name: + result = self._cache.name[name][0] + elif self._include_egg and name in self._cache_egg.name: + result = self._cache_egg.name[name][0] + return result + + def provides_distribution(self, name, version=None): + """ + Iterates over all distributions to find which distributions provide *name*. + If a *version* is provided, it will be used to filter the results. + + This function only returns the first result found, since no more than + one values are expected. If the directory is not found, returns ``None``. + + :parameter version: a version specifier that indicates the version + required, conforming to the format in ``PEP-345`` + + :type name: string + :type version: string + """ + matcher = None + if version is not None: + try: + matcher = self._scheme.matcher('%s (%s)' % (name, version)) + except ValueError: + raise DistlibException('invalid name or version: %r, %r' % + (name, version)) + + for dist in self.get_distributions(): + # We hit a problem on Travis where enum34 was installed and doesn't + # have a provides attribute ... + if not hasattr(dist, 'provides'): + logger.debug('No "provides": %s', dist) + else: + provided = dist.provides + + for p in provided: + p_name, p_ver = parse_name_and_version(p) + if matcher is None: + if p_name == name: + yield dist + break + else: + if p_name == name and matcher.match(p_ver): + yield dist + break + + def get_file_path(self, name, relative_path): + """ + Return the path to a resource file. + """ + dist = self.get_distribution(name) + if dist is None: + raise LookupError('no distribution named %r found' % name) + return dist.get_resource_path(relative_path) + + def get_exported_entries(self, category, name=None): + """ + Return all of the exported entries in a particular category. + + :param category: The category to search for entries. + :param name: If specified, only entries with that name are returned. 
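+
+        Illustrative usage (``dp`` is a hypothetical instance):
+
+            >>> dp = DistributionPath()
+            >>> for entry in dp.get_exported_entries('console_scripts'):
+            ...     print(entry.name)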
+ """ + for dist in self.get_distributions(): + r = dist.exports + if category in r: + d = r[category] + if name is not None: + if name in d: + yield d[name] + else: + for v in d.values(): + yield v + + +class Distribution(object): + """ + A base class for distributions, whether installed or from indexes. + Either way, it must have some metadata, so that's all that's needed + for construction. + """ + + build_time_dependency = False + """ + Set to True if it's known to be only a build-time dependency (i.e. + not needed after installation). + """ + + requested = False + """A boolean that indicates whether the ``REQUESTED`` metadata file is + present (in other words, whether the package was installed by user + request or it was installed as a dependency).""" + + def __init__(self, metadata): + """ + Initialise an instance. + :param metadata: The instance of :class:`Metadata` describing this + distribution. + """ + self.metadata = metadata + self.name = metadata.name + self.key = self.name.lower() # for case-insensitive comparisons + self.version = metadata.version + self.locator = None + self.digest = None + self.extras = None # additional features requested + self.context = None # environment marker overrides + self.download_urls = set() + self.digests = {} + + @property + def source_url(self): + """ + The source archive download URL for this distribution. + """ + return self.metadata.source_url + + download_url = source_url # Backward compatibility + + @property + def name_and_version(self): + """ + A utility property which displays the name and version in parentheses. + """ + return '%s (%s)' % (self.name, self.version) + + @property + def provides(self): + """ + A set of distribution names and versions provided by this distribution. + :return: A set of "name (version)" strings. + """ + plist = self.metadata.provides + s = '%s (%s)' % (self.name, self.version) + if s not in plist: + plist.append(s) + return plist + + def _get_requirements(self, req_attr): + md = self.metadata + logger.debug('Getting requirements from metadata %r', md.todict()) + reqts = getattr(md, req_attr) + return set(md.get_requirements(reqts, extras=self.extras, + env=self.context)) + + @property + def run_requires(self): + return self._get_requirements('run_requires') + + @property + def meta_requires(self): + return self._get_requirements('meta_requires') + + @property + def build_requires(self): + return self._get_requirements('build_requires') + + @property + def test_requires(self): + return self._get_requirements('test_requires') + + @property + def dev_requires(self): + return self._get_requirements('dev_requires') + + def matches_requirement(self, req): + """ + Say if this instance matches (fulfills) a requirement. + :param req: The requirement to match. + :rtype req: str + :return: True if it matches, else False. 
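+
+        Illustrative example, assuming a distribution whose metadata
+        provides ``foo (1.0)``:
+
+            >>> dist.matches_requirement('foo (>= 0.9)')
+            True
+            >>> dist.matches_requirement('bar')
+            False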
+        """
+        # Requirement may contain extras - parse to lose those
+        # from what's passed to the matcher
+        r = parse_requirement(req)
+        scheme = get_scheme(self.metadata.scheme)
+        try:
+            matcher = scheme.matcher(r.requirement)
+        except UnsupportedVersionError:
+            # XXX compat-mode if cannot read the version
+            logger.warning('could not read version %r - using name only',
+                           req)
+            name = req.split()[0]
+            matcher = scheme.matcher(name)
+
+        name = matcher.key      # case-insensitive
+
+        result = False
+        for p in self.provides:
+            p_name, p_ver = parse_name_and_version(p)
+            if p_name != name:
+                continue
+            try:
+                result = matcher.match(p_ver)
+                break
+            except UnsupportedVersionError:
+                pass
+        return result
+
+    def __repr__(self):
+        """
+        Return a textual representation of this instance.
+        """
+        if self.source_url:
+            suffix = ' [%s]' % self.source_url
+        else:
+            suffix = ''
+        return '<Distribution %s (%s)%s>' % (self.name, self.version, suffix)
+
+    def __eq__(self, other):
+        """
+        See if this distribution is the same as another.
+        :param other: The distribution to compare with. To be equal to one
+                      another, distributions must have the same type, name,
+                      version and source_url.
+        :return: True if it is the same, else False.
+        """
+        if type(other) is not type(self):
+            result = False
+        else:
+            result = (self.name == other.name and
+                      self.version == other.version and
+                      self.source_url == other.source_url)
+        return result
+
+    def __hash__(self):
+        """
+        Compute hash in a way which matches the equality test.
+        """
+        return hash(self.name) + hash(self.version) + hash(self.source_url)
+
+
+class BaseInstalledDistribution(Distribution):
+    """
+    This is the base class for installed distributions (whether PEP 376 or
+    legacy).
+    """
+
+    hasher = None
+
+    def __init__(self, metadata, path, env=None):
+        """
+        Initialise an instance.
+        :param metadata: An instance of :class:`Metadata` which describes the
+                         distribution. This will normally have been initialised
+                         from a metadata file in the ``path``.
+        :param path: The path of the ``.dist-info`` or ``.egg-info``
+                     directory for the distribution.
+        :param env: This is normally the :class:`DistributionPath`
+                    instance where this distribution was found.
+        """
+        super(BaseInstalledDistribution, self).__init__(metadata)
+        self.path = path
+        self.dist_path = env
+
+    def get_hash(self, data, hasher=None):
+        """
+        Get the hash of some data, using a particular hash algorithm, if
+        specified.
+
+        :param data: The data to be hashed.
+        :type data: bytes
+        :param hasher: The name of a hash implementation, supported by hashlib,
+                       or ``None``. Examples of valid values are ``'sha1'``,
+                       ``'sha224'``, ``'sha384'``, ``'sha256'``, ``'md5'`` and
+                       ``'sha512'``. If no hasher is specified, the ``hasher``
+                       attribute of the :class:`InstalledDistribution` instance
+                       is used. If the hasher is determined to be ``None``, MD5
+                       is used as the hashing algorithm.
+        :returns: The hash of the data. If a hasher was explicitly specified,
+                  the returned hash will be prefixed with the specified hasher
+                  followed by '='.
+        :rtype: str
+        """
+        if hasher is None:
+            hasher = self.hasher
+        if hasher is None:
+            hasher = hashlib.md5
+            prefix = ''
+        else:
+            hasher = getattr(hashlib, hasher)
+            prefix = '%s=' % self.hasher
+        digest = hasher(data).digest()
+        digest = base64.urlsafe_b64encode(digest).rstrip(b'=').decode('ascii')
+        return '%s%s' % (prefix, digest)
+
+
+class InstalledDistribution(BaseInstalledDistribution):
+    """
+    Created with the *path* of the ``.dist-info`` directory provided to the
+    constructor. It reads the metadata contained in ``pydist.json`` when it is
+    instantiated, or uses a passed-in Metadata instance (useful for when
+    dry-run mode is being used).
+    """
+
+    hasher = 'sha256'
+
+    def __init__(self, path, metadata=None, env=None):
+        self.modules = []
+        self.finder = finder = resources.finder_for_path(path)
+        if finder is None:
+            raise ValueError('finder unavailable for %s' % path)
+        if env and env._cache_enabled and path in env._cache.path:
+            metadata = env._cache.path[path].metadata
+        elif metadata is None:
+            r = finder.find(METADATA_FILENAME)
+            # Temporary - for Wheel 0.23 support
+            if r is None:
+                r = finder.find(WHEEL_METADATA_FILENAME)
+            # Temporary - for legacy support
+            if r is None:
+                r = finder.find('METADATA')
+            if r is None:
+                raise ValueError('no %s found in %s' % (METADATA_FILENAME,
+                                                        path))
+            with contextlib.closing(r.as_stream()) as stream:
+                metadata = Metadata(fileobj=stream, scheme='legacy')
+
+        super(InstalledDistribution, self).__init__(metadata, path, env)
+
+        if env and env._cache_enabled:
+            env._cache.add(self)
+
+        r = finder.find('REQUESTED')
+        self.requested = r is not None
+        p = os.path.join(path, 'top_level.txt')
+        if os.path.exists(p):
+            with open(p, 'rb') as f:
+                data = f.read().decode('utf-8')
+            self.modules = data.splitlines()
+
+    def __repr__(self):
+        return '<InstalledDistribution %r %s at %r>' % (
+            self.name, self.version, self.path)
+
+    def __str__(self):
+        return "%s %s" % (self.name, self.version)
+
+    def _get_records(self):
+        """
+        Get the list of installed files for the distribution.
+        :return: A list of tuples of path, hash and size. Note that hash and
+                 size might be ``None`` for some entries. The path is exactly
+                 as stored in the file (which is as in PEP 376).
+        """
+        results = []
+        r = self.get_distinfo_resource('RECORD')
+        with contextlib.closing(r.as_stream()) as stream:
+            with CSVReader(stream=stream) as record_reader:
+                # Base location is parent dir of .dist-info dir
+                #base_location = os.path.dirname(self.path)
+                #base_location = os.path.abspath(base_location)
+                for row in record_reader:
+                    missing = [None for i in range(len(row), 3)]
+                    path, checksum, size = row + missing
+                    #if not os.path.isabs(path):
+                    #    path = path.replace('/', os.sep)
+                    #    path = os.path.join(base_location, path)
+                    results.append((path, checksum, size))
+        return results
+
+    @cached_property
+    def exports(self):
+        """
+        Return the information exported by this distribution.
+        :return: A dictionary of exports, mapping an export category to a dict
+                 of :class:`ExportEntry` instances describing the individual
+                 export entries, and keyed by name.
+        """
+        result = {}
+        r = self.get_distinfo_resource(EXPORTS_FILENAME)
+        if r:
+            result = self.read_exports()
+        return result
+
+    def read_exports(self):
+        """
+        Read exports data from a file in .ini format.
+
+        :return: A dictionary of exports, mapping an export category to a list
+                 of :class:`ExportEntry` instances describing the individual
+                 export entries.
+        """
+        result = {}
+        r = self.get_distinfo_resource(EXPORTS_FILENAME)
+        if r:
+            with contextlib.closing(r.as_stream()) as stream:
+                result = read_exports(stream)
+        return result
+
+    def write_exports(self, exports):
+        """
+        Write a dictionary of exports to a file in .ini format.
+        :param exports: A dictionary of exports, mapping an export category to
+                        a list of :class:`ExportEntry` instances describing the
+                        individual export entries.
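+
+        Illustrative call shape (``entries`` stands for a hypothetical list
+        of :class:`ExportEntry` instances):
+
+            >>> dist.write_exports({'console_scripts': entries})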
+ """ + rf = self.get_distinfo_file(EXPORTS_FILENAME) + with open(rf, 'w') as f: + write_exports(exports, f) + + def get_resource_path(self, relative_path): + """ + NOTE: This API may change in the future. + + Return the absolute path to a resource file with the given relative + path. + + :param relative_path: The path, relative to .dist-info, of the resource + of interest. + :return: The absolute path where the resource is to be found. + """ + r = self.get_distinfo_resource('RESOURCES') + with contextlib.closing(r.as_stream()) as stream: + with CSVReader(stream=stream) as resources_reader: + for relative, destination in resources_reader: + if relative == relative_path: + return destination + raise KeyError('no resource file with relative path %r ' + 'is installed' % relative_path) + + def list_installed_files(self): + """ + Iterates over the ``RECORD`` entries and returns a tuple + ``(path, hash, size)`` for each line. + + :returns: iterator of (path, hash, size) + """ + for result in self._get_records(): + yield result + + def write_installed_files(self, paths, prefix, dry_run=False): + """ + Writes the ``RECORD`` file, using the ``paths`` iterable passed in. Any + existing ``RECORD`` file is silently overwritten. + + prefix is used to determine when to write absolute paths. + """ + prefix = os.path.join(prefix, '') + base = os.path.dirname(self.path) + base_under_prefix = base.startswith(prefix) + base = os.path.join(base, '') + record_path = self.get_distinfo_file('RECORD') + logger.info('creating %s', record_path) + if dry_run: + return None + with CSVWriter(record_path) as writer: + for path in paths: + if os.path.isdir(path) or path.endswith(('.pyc', '.pyo')): + # do not put size and hash, as in PEP-376 + hash_value = size = '' + else: + size = '%d' % os.path.getsize(path) + with open(path, 'rb') as fp: + hash_value = self.get_hash(fp.read()) + if path.startswith(base) or (base_under_prefix and + path.startswith(prefix)): + path = os.path.relpath(path, base) + writer.writerow((path, hash_value, size)) + + # add the RECORD file itself + if record_path.startswith(base): + record_path = os.path.relpath(record_path, base) + writer.writerow((record_path, '', '')) + return record_path + + def check_installed_files(self): + """ + Checks that the hashes and sizes of the files in ``RECORD`` are + matched by the files themselves. Returns a (possibly empty) list of + mismatches. Each entry in the mismatch list will be a tuple consisting + of the path, 'exists', 'size' or 'hash' according to what didn't match + (existence is checked first, then size, then hash), the expected + value and the actual value. 
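+
+        For example, a file whose recorded size no longer matches would be
+        reported as (hypothetical path):
+
+            ('.../site-packages/foo/bar.py', 'size', '1024', '2048')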
+ """ + mismatches = [] + base = os.path.dirname(self.path) + record_path = self.get_distinfo_file('RECORD') + for path, hash_value, size in self.list_installed_files(): + if not os.path.isabs(path): + path = os.path.join(base, path) + if path == record_path: + continue + if not os.path.exists(path): + mismatches.append((path, 'exists', True, False)) + elif os.path.isfile(path): + actual_size = str(os.path.getsize(path)) + if size and actual_size != size: + mismatches.append((path, 'size', size, actual_size)) + elif hash_value: + if '=' in hash_value: + hasher = hash_value.split('=', 1)[0] + else: + hasher = None + + with open(path, 'rb') as f: + actual_hash = self.get_hash(f.read(), hasher) + if actual_hash != hash_value: + mismatches.append((path, 'hash', hash_value, actual_hash)) + return mismatches + + @cached_property + def shared_locations(self): + """ + A dictionary of shared locations whose keys are in the set 'prefix', + 'purelib', 'platlib', 'scripts', 'headers', 'data' and 'namespace'. + The corresponding value is the absolute path of that category for + this distribution, and takes into account any paths selected by the + user at installation time (e.g. via command-line arguments). In the + case of the 'namespace' key, this would be a list of absolute paths + for the roots of namespace packages in this distribution. + + The first time this property is accessed, the relevant information is + read from the SHARED file in the .dist-info directory. + """ + result = {} + shared_path = os.path.join(self.path, 'SHARED') + if os.path.isfile(shared_path): + with codecs.open(shared_path, 'r', encoding='utf-8') as f: + lines = f.read().splitlines() + for line in lines: + key, value = line.split('=', 1) + if key == 'namespace': + result.setdefault(key, []).append(value) + else: + result[key] = value + return result + + def write_shared_locations(self, paths, dry_run=False): + """ + Write shared location information to the SHARED file in .dist-info. + :param paths: A dictionary as described in the documentation for + :meth:`shared_locations`. + :param dry_run: If True, the action is logged but no file is actually + written. + :return: The path of the file written to. + """ + shared_path = os.path.join(self.path, 'SHARED') + logger.info('creating %s', shared_path) + if dry_run: + return None + lines = [] + for key in ('prefix', 'lib', 'headers', 'scripts', 'data'): + path = paths[key] + if os.path.isdir(paths[key]): + lines.append('%s=%s' % (key, path)) + for ns in paths.get('namespace', ()): + lines.append('namespace=%s' % ns) + + with codecs.open(shared_path, 'w', encoding='utf-8') as f: + f.write('\n'.join(lines)) + return shared_path + + def get_distinfo_resource(self, path): + if path not in DIST_FILES: + raise DistlibException('invalid path for a dist-info file: ' + '%r at %r' % (path, self.path)) + finder = resources.finder_for_path(self.path) + if finder is None: + raise DistlibException('Unable to get a finder for %s' % self.path) + return finder.find(path) + + def get_distinfo_file(self, path): + """ + Returns a path located under the ``.dist-info`` directory. Returns a + string representing the path. 
+ + :parameter path: a ``'/'``-separated path relative to the + ``.dist-info`` directory or an absolute path; + If *path* is an absolute path and doesn't start + with the ``.dist-info`` directory path, + a :class:`DistlibException` is raised + :type path: str + :rtype: str + """ + # Check if it is an absolute path # XXX use relpath, add tests + if path.find(os.sep) >= 0: + # it's an absolute path? + distinfo_dirname, path = path.split(os.sep)[-2:] + if distinfo_dirname != self.path.split(os.sep)[-1]: + raise DistlibException( + 'dist-info file %r does not belong to the %r %s ' + 'distribution' % (path, self.name, self.version)) + + # The file must be relative + if path not in DIST_FILES: + raise DistlibException('invalid path for a dist-info file: ' + '%r at %r' % (path, self.path)) + + return os.path.join(self.path, path) + + def list_distinfo_files(self): + """ + Iterates over the ``RECORD`` entries and returns paths for each line if + the path is pointing to a file located in the ``.dist-info`` directory + or one of its subdirectories. + + :returns: iterator of paths + """ + base = os.path.dirname(self.path) + for path, checksum, size in self._get_records(): + # XXX add separator or use real relpath algo + if not os.path.isabs(path): + path = os.path.join(base, path) + if path.startswith(self.path): + yield path + + def __eq__(self, other): + return (isinstance(other, InstalledDistribution) and + self.path == other.path) + + # See http://docs.python.org/reference/datamodel#object.__hash__ + __hash__ = object.__hash__ + + +class EggInfoDistribution(BaseInstalledDistribution): + """Created with the *path* of the ``.egg-info`` directory or file provided + to the constructor. It reads the metadata contained in the file itself, or + if the given path happens to be a directory, the metadata is read from the + file ``PKG-INFO`` under that directory.""" + + requested = True # as we have no way of knowing, assume it was + shared_locations = {} + + def __init__(self, path, env=None): + def set_name_and_version(s, n, v): + s.name = n + s.key = n.lower() # for case-insensitive comparisons + s.version = v + + self.path = path + self.dist_path = env + if env and env._cache_enabled and path in env._cache_egg.path: + metadata = env._cache_egg.path[path].metadata + set_name_and_version(self, metadata.name, metadata.version) + else: + metadata = self._get_metadata(path) + + # Need to be set before caching + set_name_and_version(self, metadata.name, metadata.version) + + if env and env._cache_enabled: + env._cache_egg.add(self) + super(EggInfoDistribution, self).__init__(metadata, path, env) + + def _get_metadata(self, path): + requires = None + + def parse_requires_data(data): + """Create a list of dependencies from a requires.txt file. + + *data*: the contents of a setuptools-produced requires.txt file. + """ + reqs = [] + lines = data.splitlines() + for line in lines: + line = line.strip() + if line.startswith('['): + logger.warning('Unexpected line: quitting requirement scan: %r', + line) + break + r = parse_requirement(line) + if not r: + logger.warning('Not recognised as a requirement: %r', line) + continue + if r.extras: + logger.warning('extra requirements in requires.txt are ' + 'not supported') + if not r.constraints: + reqs.append(r.name) + else: + cons = ', '.join('%s%s' % c for c in r.constraints) + reqs.append('%s (%s)' % (r.name, cons)) + return reqs + + def parse_requires_path(req_path): + """Create a list of dependencies from a requires.txt file. 
+
+            *req_path*: the path to a setuptools-produced requires.txt file.
+            """
+
+            reqs = []
+            try:
+                with codecs.open(req_path, 'r', 'utf-8') as fp:
+                    reqs = parse_requires_data(fp.read())
+            except IOError:
+                pass
+            return reqs
+
+        tl_path = tl_data = None
+        if path.endswith('.egg'):
+            if os.path.isdir(path):
+                p = os.path.join(path, 'EGG-INFO')
+                meta_path = os.path.join(p, 'PKG-INFO')
+                metadata = Metadata(path=meta_path, scheme='legacy')
+                req_path = os.path.join(p, 'requires.txt')
+                tl_path = os.path.join(p, 'top_level.txt')
+                requires = parse_requires_path(req_path)
+            else:
+                # FIXME handle the case where zipfile is not available
+                zipf = zipimport.zipimporter(path)
+                fileobj = StringIO(
+                    zipf.get_data('EGG-INFO/PKG-INFO').decode('utf8'))
+                metadata = Metadata(fileobj=fileobj, scheme='legacy')
+                try:
+                    data = zipf.get_data('EGG-INFO/requires.txt')
+                    tl_data = zipf.get_data('EGG-INFO/top_level.txt').decode('utf-8')
+                    requires = parse_requires_data(data.decode('utf-8'))
+                except IOError:
+                    requires = None
+        elif path.endswith('.egg-info'):
+            if os.path.isdir(path):
+                req_path = os.path.join(path, 'requires.txt')
+                requires = parse_requires_path(req_path)
+                # compute tl_path before path is reassigned to PKG-INFO, so
+                # that top_level.txt is looked up in the .egg-info directory
+                tl_path = os.path.join(path, 'top_level.txt')
+                path = os.path.join(path, 'PKG-INFO')
+            metadata = Metadata(path=path, scheme='legacy')
+        else:
+            raise DistlibException('path must end with .egg-info or .egg, '
+                                   'got %r' % path)
+
+        if requires:
+            metadata.add_requirements(requires)
+        # look for top-level modules in top_level.txt, if present
+        if tl_data is None:
+            if tl_path is not None and os.path.exists(tl_path):
+                with open(tl_path, 'rb') as f:
+                    tl_data = f.read().decode('utf-8')
+        if not tl_data:
+            tl_data = []
+        else:
+            tl_data = tl_data.splitlines()
+        self.modules = tl_data
+        return metadata
+
+    def __repr__(self):
+        return '<EggInfoDistribution %r %s at %r>' % (
+            self.name, self.version, self.path)
+
+    def __str__(self):
+        return "%s %s" % (self.name, self.version)
+
+    def check_installed_files(self):
+        """
+        Checks that the hashes and sizes of the files in ``RECORD`` are
+        matched by the files themselves. Returns a (possibly empty) list of
+        mismatches. Each entry in the mismatch list will be a tuple consisting
+        of the path, 'exists', 'size' or 'hash' according to what didn't match
+        (existence is checked first, then size, then hash), the expected
+        value and the actual value.
+        """
+        mismatches = []
+        record_path = os.path.join(self.path, 'installed-files.txt')
+        if os.path.exists(record_path):
+            for path, _, _ in self.list_installed_files():
+                if path == record_path:
+                    continue
+                if not os.path.exists(path):
+                    mismatches.append((path, 'exists', True, False))
+        return mismatches
+
+    def list_installed_files(self):
+        """
+        Iterates over the ``installed-files.txt`` entries and returns a tuple
+        ``(path, hash, size)`` for each line.
+ + :returns: a list of (path, hash, size) + """ + + def _md5(path): + f = open(path, 'rb') + try: + content = f.read() + finally: + f.close() + return hashlib.md5(content).hexdigest() + + def _size(path): + return os.stat(path).st_size + + record_path = os.path.join(self.path, 'installed-files.txt') + result = [] + if os.path.exists(record_path): + with codecs.open(record_path, 'r', encoding='utf-8') as f: + for line in f: + line = line.strip() + p = os.path.normpath(os.path.join(self.path, line)) + # "./" is present as a marker between installed files + # and installation metadata files + if not os.path.exists(p): + logger.warning('Non-existent file: %s', p) + if p.endswith(('.pyc', '.pyo')): + continue + #otherwise fall through and fail + if not os.path.isdir(p): + result.append((p, _md5(p), _size(p))) + result.append((record_path, None, None)) + return result + + def list_distinfo_files(self, absolute=False): + """ + Iterates over the ``installed-files.txt`` entries and returns paths for + each line if the path is pointing to a file located in the + ``.egg-info`` directory or one of its subdirectories. + + :parameter absolute: If *absolute* is ``True``, each returned path is + transformed into a local absolute path. Otherwise the + raw value from ``installed-files.txt`` is returned. + :type absolute: boolean + :returns: iterator of paths + """ + record_path = os.path.join(self.path, 'installed-files.txt') + if os.path.exists(record_path): + skip = True + with codecs.open(record_path, 'r', encoding='utf-8') as f: + for line in f: + line = line.strip() + if line == './': + skip = False + continue + if not skip: + p = os.path.normpath(os.path.join(self.path, line)) + if p.startswith(self.path): + if absolute: + yield p + else: + yield line + + def __eq__(self, other): + return (isinstance(other, EggInfoDistribution) and + self.path == other.path) + + # See http://docs.python.org/reference/datamodel#object.__hash__ + __hash__ = object.__hash__ + +new_dist_class = InstalledDistribution +old_dist_class = EggInfoDistribution + + +class DependencyGraph(object): + """ + Represents a dependency graph between distributions. + + The dependency relationships are stored in an ``adjacency_list`` that maps + distributions to a list of ``(other, label)`` tuples where ``other`` + is a distribution and the edge is labeled with ``label`` (i.e. the version + specifier, if such was provided). Also, for more efficient traversal, for + every distribution ``x``, a list of predecessors is kept in + ``reverse_list[x]``. An edge from distribution ``a`` to + distribution ``b`` means that ``a`` depends on ``b``. If any missing + dependencies are found, they are stored in ``missing``, which is a + dictionary that maps distributions to a list of requirements that were not + provided by any other distributions. + """ + + def __init__(self): + self.adjacency_list = {} + self.reverse_list = {} + self.missing = {} + + def add_distribution(self, distribution): + """Add the *distribution* to the graph. + + :type distribution: :class:`distutils2.database.InstalledDistribution` + or :class:`distutils2.database.EggInfoDistribution` + """ + self.adjacency_list[distribution] = [] + self.reverse_list[distribution] = [] + #self.missing[distribution] = [] + + def add_edge(self, x, y, label=None): + """Add an edge from distribution *x* to distribution *y* with the given + *label*. 
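+
+        For illustration (``d1`` and ``d2`` are assumed to be distributions
+        already registered via :meth:`add_distribution`)::
+
+            graph.add_edge(d1, d2, 'bar (>= 1.0)')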
+ + :type x: :class:`distutils2.database.InstalledDistribution` or + :class:`distutils2.database.EggInfoDistribution` + :type y: :class:`distutils2.database.InstalledDistribution` or + :class:`distutils2.database.EggInfoDistribution` + :type label: ``str`` or ``None`` + """ + self.adjacency_list[x].append((y, label)) + # multiple edges are allowed, so be careful + if x not in self.reverse_list[y]: + self.reverse_list[y].append(x) + + def add_missing(self, distribution, requirement): + """ + Add a missing *requirement* for the given *distribution*. + + :type distribution: :class:`distutils2.database.InstalledDistribution` + or :class:`distutils2.database.EggInfoDistribution` + :type requirement: ``str`` + """ + logger.debug('%s missing %r', distribution, requirement) + self.missing.setdefault(distribution, []).append(requirement) + + def _repr_dist(self, dist): + return '%s %s' % (dist.name, dist.version) + + def repr_node(self, dist, level=1): + """Prints only a subgraph""" + output = [self._repr_dist(dist)] + for other, label in self.adjacency_list[dist]: + dist = self._repr_dist(other) + if label is not None: + dist = '%s [%s]' % (dist, label) + output.append(' ' * level + str(dist)) + suboutput = self.repr_node(other, level + 1) + subs = suboutput.split('\n') + output.extend(subs[1:]) + return '\n'.join(output) + + def to_dot(self, f, skip_disconnected=True): + """Writes a DOT output for the graph to the provided file *f*. + + If *skip_disconnected* is set to ``True``, then all distributions + that are not dependent on any other distribution are skipped. + + :type f: has to support ``file``-like operations + :type skip_disconnected: ``bool`` + """ + disconnected = [] + + f.write("digraph dependencies {\n") + for dist, adjs in self.adjacency_list.items(): + if len(adjs) == 0 and not skip_disconnected: + disconnected.append(dist) + for other, label in adjs: + if not label is None: + f.write('"%s" -> "%s" [label="%s"]\n' % + (dist.name, other.name, label)) + else: + f.write('"%s" -> "%s"\n' % (dist.name, other.name)) + if not skip_disconnected and len(disconnected) > 0: + f.write('subgraph disconnected {\n') + f.write('label = "Disconnected"\n') + f.write('bgcolor = red\n') + + for dist in disconnected: + f.write('"%s"' % dist.name) + f.write('\n') + f.write('}\n') + f.write('}\n') + + def topological_sort(self): + """ + Perform a topological sort of the graph. + :return: A tuple, the first element of which is a topologically sorted + list of distributions, and the second element of which is a + list of distributions that cannot be sorted because they have + circular dependencies and so form a cycle. + """ + result = [] + # Make a shallow copy of the adjacency list + alist = {} + for k, v in self.adjacency_list.items(): + alist[k] = v[:] + while True: + # See what we can remove in this run + to_remove = [] + for k, v in list(alist.items())[:]: + if not v: + to_remove.append(k) + del alist[k] + if not to_remove: + # What's left in alist (if anything) is a cycle. 
+                break
+            # Remove from the adjacency list of others
+            for k, v in alist.items():
+                alist[k] = [(d, r) for d, r in v if d not in to_remove]
+            logger.debug('Moving to result: %s',
+                         ['%s (%s)' % (d.name, d.version) for d in to_remove])
+            result.extend(to_remove)
+        return result, list(alist.keys())
+
+    def __repr__(self):
+        """Representation of the graph"""
+        output = []
+        for dist, adjs in self.adjacency_list.items():
+            output.append(self.repr_node(dist))
+        return '\n'.join(output)
+
+
+def make_graph(dists, scheme='default'):
+    """Makes a dependency graph from the given distributions.
+
+    :parameter dists: a list of distributions
+    :type dists: list of :class:`distutils2.database.InstalledDistribution` and
+                 :class:`distutils2.database.EggInfoDistribution` instances
+    :rtype: a :class:`DependencyGraph` instance
+    """
+    scheme = get_scheme(scheme)
+    graph = DependencyGraph()
+    provided = {}  # maps names to lists of (version, dist) tuples
+
+    # first, build the graph and find out what's provided
+    for dist in dists:
+        graph.add_distribution(dist)
+
+        for p in dist.provides:
+            name, version = parse_name_and_version(p)
+            logger.debug('Add to provided: %s, %s, %s', name, version, dist)
+            provided.setdefault(name, []).append((version, dist))
+
+    # now make the edges
+    for dist in dists:
+        requires = (dist.run_requires | dist.meta_requires |
+                    dist.build_requires | dist.dev_requires)
+        for req in requires:
+            try:
+                matcher = scheme.matcher(req)
+            except UnsupportedVersionError:
+                # XXX compat-mode if cannot read the version
+                logger.warning('could not read version %r - using name only',
+                               req)
+                name = req.split()[0]
+                matcher = scheme.matcher(name)
+
+            name = matcher.key  # case-insensitive
+
+            matched = False
+            if name in provided:
+                for version, provider in provided[name]:
+                    try:
+                        match = matcher.match(version)
+                    except UnsupportedVersionError:
+                        match = False
+
+                    if match:
+                        graph.add_edge(dist, provider, req)
+                        matched = True
+                        break
+            if not matched:
+                graph.add_missing(dist, req)
+    return graph
+
+
+def get_dependent_dists(dists, dist):
+    """Recursively generate a list of distributions from *dists* that are
+    dependent on *dist*.
+
+    :param dists: a list of distributions
+    :param dist: a distribution, a member of *dists*, in which we are
+                 interested
+    """
+    if dist not in dists:
+        raise DistlibException('given distribution %r is not a member '
+                               'of the list' % dist.name)
+    graph = make_graph(dists)
+
+    dep = [dist]  # dependent distributions
+    todo = graph.reverse_list[dist]  # list of nodes we should inspect
+
+    while todo:
+        d = todo.pop()
+        dep.append(d)
+        for succ in graph.reverse_list[d]:
+            if succ not in dep:
+                todo.append(succ)
+
+    dep.pop(0)  # remove dist from dep, was there to prevent infinite loops
+    return dep
+
+
+def get_required_dists(dists, dist):
+    """Recursively generate a list of distributions from *dists* that are
+    required by *dist*.
+
+    :param dists: a list of distributions
+    :param dist: a distribution, a member of *dists*, in which we are
+                 interested
+    """
+    if dist not in dists:
+        raise DistlibException('given distribution %r is not a member '
+                               'of the list' % dist.name)
+    graph = make_graph(dists)
+
+    req = []  # required distributions
+    todo = graph.adjacency_list[dist]  # list of nodes we should inspect
+
+    while todo:
+        d = todo.pop()[0]
+        req.append(d)
+        for pred in graph.adjacency_list[d]:
+            # compare the distribution itself, not the (dist, label) tuple,
+            # so that already-seen nodes are skipped and cycles terminate
+            if pred[0] not in req:
+                todo.append(pred)
+
+    return req
+
+
+def make_dist(name, version, **kwargs):
+    """
+    A convenience method for making a dist given just a name and version.
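+
+    A short sketch (the names used are purely illustrative)::
+
+        dist = make_dist('foo', '1.0', summary='An example distribution')
+        print(dist.name, dist.version)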
+ """ + summary = kwargs.pop('summary', 'Placeholder for summary') + md = Metadata(**kwargs) + md.name = name + md.version = version + md.summary = summary or 'Placeholder for summary' + return Distribution(md) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distlib/index.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distlib/index.py new file mode 100644 index 0000000000000000000000000000000000000000..7a87cdcf7a19987ad0344b8c5d161027b345e212 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distlib/index.py @@ -0,0 +1,516 @@ +# -*- coding: utf-8 -*- +# +# Copyright (C) 2013 Vinay Sajip. +# Licensed to the Python Software Foundation under a contributor agreement. +# See LICENSE.txt and CONTRIBUTORS.txt. +# +import hashlib +import logging +import os +import shutil +import subprocess +import tempfile +try: + from threading import Thread +except ImportError: + from dummy_threading import Thread + +from . import DistlibException +from .compat import (HTTPBasicAuthHandler, Request, HTTPPasswordMgr, + urlparse, build_opener, string_types) +from .util import cached_property, zip_dir, ServerProxy + +logger = logging.getLogger(__name__) + +DEFAULT_INDEX = 'https://pypi.org/pypi' +DEFAULT_REALM = 'pypi' + +class PackageIndex(object): + """ + This class represents a package index compatible with PyPI, the Python + Package Index. + """ + + boundary = b'----------ThIs_Is_tHe_distlib_index_bouNdaRY_$' + + def __init__(self, url=None): + """ + Initialise an instance. + + :param url: The URL of the index. If not specified, the URL for PyPI is + used. + """ + self.url = url or DEFAULT_INDEX + self.read_configuration() + scheme, netloc, path, params, query, frag = urlparse(self.url) + if params or query or frag or scheme not in ('http', 'https'): + raise DistlibException('invalid repository: %s' % self.url) + self.password_handler = None + self.ssl_verifier = None + self.gpg = None + self.gpg_home = None + with open(os.devnull, 'w') as sink: + # Use gpg by default rather than gpg2, as gpg2 insists on + # prompting for passwords + for s in ('gpg', 'gpg2'): + try: + rc = subprocess.check_call([s, '--version'], stdout=sink, + stderr=sink) + if rc == 0: + self.gpg = s + break + except OSError: + pass + + def _get_pypirc_command(self): + """ + Get the distutils command for interacting with PyPI configurations. + :return: the command. + """ + from distutils.core import Distribution + from distutils.config import PyPIRCCommand + d = Distribution() + return PyPIRCCommand(d) + + def read_configuration(self): + """ + Read the PyPI access configuration as supported by distutils, getting + PyPI to do the actual work. This populates ``username``, ``password``, + ``realm`` and ``url`` attributes from the configuration. + """ + # get distutils to do the work + c = self._get_pypirc_command() + c.repository = self.url + cfg = c._read_pypirc() + self.username = cfg.get('username') + self.password = cfg.get('password') + self.realm = cfg.get('realm', 'pypi') + self.url = cfg.get('repository', self.url) + + def save_configuration(self): + """ + Save the PyPI access configuration. You must have set ``username`` and + ``password`` attributes before calling this method. + + Again, distutils is used to do the actual work. 
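+
+        A usage sketch (the credentials shown are placeholders)::
+
+            index = PackageIndex()
+            index.username = 'user'
+            index.password = 'secret'
+            index.save_configuration()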
+        """
+        self.check_credentials()
+        # get distutils to do the work
+        c = self._get_pypirc_command()
+        c._store_pypirc(self.username, self.password)
+
+    def check_credentials(self):
+        """
+        Check that ``username`` and ``password`` have been set, and raise an
+        exception if not.
+        """
+        if self.username is None or self.password is None:
+            raise DistlibException('username and password must be set')
+        pm = HTTPPasswordMgr()
+        _, netloc, _, _, _, _ = urlparse(self.url)
+        pm.add_password(self.realm, netloc, self.username, self.password)
+        self.password_handler = HTTPBasicAuthHandler(pm)
+
+    def register(self, metadata):
+        """
+        Register a distribution on PyPI, using the provided metadata.
+
+        :param metadata: A :class:`Metadata` instance defining at least a name
+                         and version number for the distribution to be
+                         registered.
+        :return: The HTTP response received from PyPI upon submission of the
+                 request.
+        """
+        self.check_credentials()
+        metadata.validate()
+        d = metadata.todict()
+        d[':action'] = 'verify'
+        request = self.encode_request(d.items(), [])
+        response = self.send_request(request)
+        d[':action'] = 'submit'
+        request = self.encode_request(d.items(), [])
+        return self.send_request(request)
+
+    def _reader(self, name, stream, outbuf):
+        """
+        Thread runner for reading lines of output from a subprocess into a
+        buffer.
+
+        :param name: The logical name of the stream (used for logging only).
+        :param stream: The stream to read from. This will typically be a pipe
+                       connected to the output stream of a subprocess.
+        :param outbuf: The list to append the read lines to.
+        """
+        while True:
+            s = stream.readline()
+            if not s:
+                break
+            s = s.decode('utf-8').rstrip()
+            outbuf.append(s)
+            logger.debug('%s: %s' % (name, s))
+        stream.close()
+
+    def get_sign_command(self, filename, signer, sign_password,
+                         keystore=None):
+        """
+        Return a suitable command for signing a file.
+
+        :param filename: The pathname to the file to be signed.
+        :param signer: The identifier of the signer of the file.
+        :param sign_password: The passphrase for the signer's
+                              private key used for signing.
+        :param keystore: The path to a directory which contains the keys
+                         used in signing. If not specified, the
+                         instance's ``gpg_home`` attribute is used instead.
+        :return: The signing command as a list suitable to be
+                 passed to :class:`subprocess.Popen`.
+        """
+        cmd = [self.gpg, '--status-fd', '2', '--no-tty']
+        if keystore is None:
+            keystore = self.gpg_home
+        if keystore:
+            cmd.extend(['--homedir', keystore])
+        if sign_password is not None:
+            cmd.extend(['--batch', '--passphrase-fd', '0'])
+        td = tempfile.mkdtemp()
+        sf = os.path.join(td, os.path.basename(filename) + '.asc')
+        cmd.extend(['--detach-sign', '--armor', '--local-user',
+                    signer, '--output', sf, filename])
+        logger.debug('invoking: %s', ' '.join(cmd))
+        return cmd, sf
+
+    def run_command(self, cmd, input_data=None):
+        """
+        Run a command in a child process, passing it any input data specified.
+
+        :param cmd: The command to run.
+        :param input_data: If specified, this must be a byte string containing
+                           data to be sent to the child process.
+        :return: A tuple consisting of the subprocess' exit code, a list of
+                 lines read from the subprocess' ``stdout``, and a list of
+                 lines read from the subprocess' ``stderr``.
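+
+        For example (assuming ``index`` is a :class:`PackageIndex` instance
+        and ``gpg`` is installed)::
+
+            rc, out, err = index.run_command(['gpg', '--version'])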
+ """ + kwargs = { + 'stdout': subprocess.PIPE, + 'stderr': subprocess.PIPE, + } + if input_data is not None: + kwargs['stdin'] = subprocess.PIPE + stdout = [] + stderr = [] + p = subprocess.Popen(cmd, **kwargs) + # We don't use communicate() here because we may need to + # get clever with interacting with the command + t1 = Thread(target=self._reader, args=('stdout', p.stdout, stdout)) + t1.start() + t2 = Thread(target=self._reader, args=('stderr', p.stderr, stderr)) + t2.start() + if input_data is not None: + p.stdin.write(input_data) + p.stdin.close() + + p.wait() + t1.join() + t2.join() + return p.returncode, stdout, stderr + + def sign_file(self, filename, signer, sign_password, keystore=None): + """ + Sign a file. + + :param filename: The pathname to the file to be signed. + :param signer: The identifier of the signer of the file. + :param sign_password: The passphrase for the signer's + private key used for signing. + :param keystore: The path to a directory which contains the keys + used in signing. If not specified, the instance's + ``gpg_home`` attribute is used instead. + :return: The absolute pathname of the file where the signature is + stored. + """ + cmd, sig_file = self.get_sign_command(filename, signer, sign_password, + keystore) + rc, stdout, stderr = self.run_command(cmd, + sign_password.encode('utf-8')) + if rc != 0: + raise DistlibException('sign command failed with error ' + 'code %s' % rc) + return sig_file + + def upload_file(self, metadata, filename, signer=None, sign_password=None, + filetype='sdist', pyversion='source', keystore=None): + """ + Upload a release file to the index. + + :param metadata: A :class:`Metadata` instance defining at least a name + and version number for the file to be uploaded. + :param filename: The pathname of the file to be uploaded. + :param signer: The identifier of the signer of the file. + :param sign_password: The passphrase for the signer's + private key used for signing. + :param filetype: The type of the file being uploaded. This is the + distutils command which produced that file, e.g. + ``sdist`` or ``bdist_wheel``. + :param pyversion: The version of Python which the release relates + to. For code compatible with any Python, this would + be ``source``, otherwise it would be e.g. ``3.2``. + :param keystore: The path to a directory which contains the keys + used in signing. If not specified, the instance's + ``gpg_home`` attribute is used instead. + :return: The HTTP response received from PyPI upon submission of the + request. 
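+
+        An illustrative call (``md`` is assumed to be a populated
+        :class:`Metadata` instance and the filename is hypothetical)::
+
+            response = index.upload_file(md, '/tmp/foo-1.0.tar.gz')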
+ """ + self.check_credentials() + if not os.path.exists(filename): + raise DistlibException('not found: %s' % filename) + metadata.validate() + d = metadata.todict() + sig_file = None + if signer: + if not self.gpg: + logger.warning('no signing program available - not signed') + else: + sig_file = self.sign_file(filename, signer, sign_password, + keystore) + with open(filename, 'rb') as f: + file_data = f.read() + md5_digest = hashlib.md5(file_data).hexdigest() + sha256_digest = hashlib.sha256(file_data).hexdigest() + d.update({ + ':action': 'file_upload', + 'protocol_version': '1', + 'filetype': filetype, + 'pyversion': pyversion, + 'md5_digest': md5_digest, + 'sha256_digest': sha256_digest, + }) + files = [('content', os.path.basename(filename), file_data)] + if sig_file: + with open(sig_file, 'rb') as f: + sig_data = f.read() + files.append(('gpg_signature', os.path.basename(sig_file), + sig_data)) + shutil.rmtree(os.path.dirname(sig_file)) + request = self.encode_request(d.items(), files) + return self.send_request(request) + + def upload_documentation(self, metadata, doc_dir): + """ + Upload documentation to the index. + + :param metadata: A :class:`Metadata` instance defining at least a name + and version number for the documentation to be + uploaded. + :param doc_dir: The pathname of the directory which contains the + documentation. This should be the directory that + contains the ``index.html`` for the documentation. + :return: The HTTP response received from PyPI upon submission of the + request. + """ + self.check_credentials() + if not os.path.isdir(doc_dir): + raise DistlibException('not a directory: %r' % doc_dir) + fn = os.path.join(doc_dir, 'index.html') + if not os.path.exists(fn): + raise DistlibException('not found: %r' % fn) + metadata.validate() + name, version = metadata.name, metadata.version + zip_data = zip_dir(doc_dir).getvalue() + fields = [(':action', 'doc_upload'), + ('name', name), ('version', version)] + files = [('content', name, zip_data)] + request = self.encode_request(fields, files) + return self.send_request(request) + + def get_verify_command(self, signature_filename, data_filename, + keystore=None): + """ + Return a suitable command for verifying a file. + + :param signature_filename: The pathname to the file containing the + signature. + :param data_filename: The pathname to the file containing the + signed data. + :param keystore: The path to a directory which contains the keys + used in verification. If not specified, the + instance's ``gpg_home`` attribute is used instead. + :return: The verifying command as a list suitable to be + passed to :class:`subprocess.Popen`. + """ + cmd = [self.gpg, '--status-fd', '2', '--no-tty'] + if keystore is None: + keystore = self.gpg_home + if keystore: + cmd.extend(['--homedir', keystore]) + cmd.extend(['--verify', signature_filename, data_filename]) + logger.debug('invoking: %s', ' '.join(cmd)) + return cmd + + def verify_signature(self, signature_filename, data_filename, + keystore=None): + """ + Verify a signature for a file. + + :param signature_filename: The pathname to the file containing the + signature. + :param data_filename: The pathname to the file containing the + signed data. + :param keystore: The path to a directory which contains the keys + used in verification. If not specified, the + instance's ``gpg_home`` attribute is used instead. + :return: True if the signature was verified, else False. 
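+
+        A usage sketch (the filenames are hypothetical and ``gpg`` must be
+        available)::
+
+            ok = index.verify_signature('foo-1.0.tar.gz.asc',
+                                        'foo-1.0.tar.gz')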
+ """ + if not self.gpg: + raise DistlibException('verification unavailable because gpg ' + 'unavailable') + cmd = self.get_verify_command(signature_filename, data_filename, + keystore) + rc, stdout, stderr = self.run_command(cmd) + if rc not in (0, 1): + raise DistlibException('verify command failed with error ' + 'code %s' % rc) + return rc == 0 + + def download_file(self, url, destfile, digest=None, reporthook=None): + """ + This is a convenience method for downloading a file from an URL. + Normally, this will be a file from the index, though currently + no check is made for this (i.e. a file can be downloaded from + anywhere). + + The method is just like the :func:`urlretrieve` function in the + standard library, except that it allows digest computation to be + done during download and checking that the downloaded data + matched any expected value. + + :param url: The URL of the file to be downloaded (assumed to be + available via an HTTP GET request). + :param destfile: The pathname where the downloaded file is to be + saved. + :param digest: If specified, this must be a (hasher, value) + tuple, where hasher is the algorithm used (e.g. + ``'md5'``) and ``value`` is the expected value. + :param reporthook: The same as for :func:`urlretrieve` in the + standard library. + """ + if digest is None: + digester = None + logger.debug('No digest specified') + else: + if isinstance(digest, (list, tuple)): + hasher, digest = digest + else: + hasher = 'md5' + digester = getattr(hashlib, hasher)() + logger.debug('Digest specified: %s' % digest) + # The following code is equivalent to urlretrieve. + # We need to do it this way so that we can compute the + # digest of the file as we go. + with open(destfile, 'wb') as dfp: + # addinfourl is not a context manager on 2.x + # so we have to use try/finally + sfp = self.send_request(Request(url)) + try: + headers = sfp.info() + blocksize = 8192 + size = -1 + read = 0 + blocknum = 0 + if "content-length" in headers: + size = int(headers["Content-Length"]) + if reporthook: + reporthook(blocknum, blocksize, size) + while True: + block = sfp.read(blocksize) + if not block: + break + read += len(block) + dfp.write(block) + if digester: + digester.update(block) + blocknum += 1 + if reporthook: + reporthook(blocknum, blocksize, size) + finally: + sfp.close() + + # check that we got the whole file, if we can + if size >= 0 and read < size: + raise DistlibException( + 'retrieval incomplete: got only %d out of %d bytes' + % (read, size)) + # if we have a digest, it must match. + if digester: + actual = digester.hexdigest() + if digest != actual: + raise DistlibException('%s digest mismatch for %s: expected ' + '%s, got %s' % (hasher, destfile, + digest, actual)) + logger.debug('Digest verified: %s', digest) + + def send_request(self, req): + """ + Send a standard library :class:`Request` to PyPI and return its + response. + + :param req: The request to send. + :return: The HTTP response from PyPI (a standard library HTTPResponse). + """ + handlers = [] + if self.password_handler: + handlers.append(self.password_handler) + if self.ssl_verifier: + handlers.append(self.ssl_verifier) + opener = build_opener(*handlers) + return opener.open(req) + + def encode_request(self, fields, files): + """ + Encode fields and files for posting to an HTTP server. + + :param fields: The fields to send as a list of (fieldname, value) + tuples. + :param files: The files to send as a list of (fieldname, filename, + file_bytes) tuple. 
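+
+        A minimal sketch, posting a single field and no files::
+
+            req = index.encode_request([(':action', 'verify')], [])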
+ """ + # Adapted from packaging, which in turn was adapted from + # http://code.activestate.com/recipes/146306 + + parts = [] + boundary = self.boundary + for k, values in fields: + if not isinstance(values, (list, tuple)): + values = [values] + + for v in values: + parts.extend(( + b'--' + boundary, + ('Content-Disposition: form-data; name="%s"' % + k).encode('utf-8'), + b'', + v.encode('utf-8'))) + for key, filename, value in files: + parts.extend(( + b'--' + boundary, + ('Content-Disposition: form-data; name="%s"; filename="%s"' % + (key, filename)).encode('utf-8'), + b'', + value)) + + parts.extend((b'--' + boundary + b'--', b'')) + + body = b'\r\n'.join(parts) + ct = b'multipart/form-data; boundary=' + boundary + headers = { + 'Content-type': ct, + 'Content-length': str(len(body)) + } + return Request(self.url, body, headers) + + def search(self, terms, operator=None): + if isinstance(terms, string_types): + terms = {'name': terms} + rpc_proxy = ServerProxy(self.url, timeout=3.0) + try: + return rpc_proxy.search(terms, operator or 'and') + finally: + rpc_proxy('close')() diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distlib/locators.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distlib/locators.py new file mode 100644 index 0000000000000000000000000000000000000000..12a1d06351e50b52c8672974d4a9053b7641abaf --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distlib/locators.py @@ -0,0 +1,1302 @@ +# -*- coding: utf-8 -*- +# +# Copyright (C) 2012-2015 Vinay Sajip. +# Licensed to the Python Software Foundation under a contributor agreement. +# See LICENSE.txt and CONTRIBUTORS.txt. +# + +import gzip +from io import BytesIO +import json +import logging +import os +import posixpath +import re +try: + import threading +except ImportError: # pragma: no cover + import dummy_threading as threading +import zlib + +from . import DistlibException +from .compat import (urljoin, urlparse, urlunparse, url2pathname, pathname2url, + queue, quote, unescape, string_types, build_opener, + HTTPRedirectHandler as BaseRedirectHandler, text_type, + Request, HTTPError, URLError) +from .database import Distribution, DistributionPath, make_dist +from .metadata import Metadata, MetadataInvalidError +from .util import (cached_property, parse_credentials, ensure_slash, + split_filename, get_project_data, parse_requirement, + parse_name_and_version, ServerProxy, normalize_name) +from .version import get_scheme, UnsupportedVersionError +from .wheel import Wheel, is_compatible + +logger = logging.getLogger(__name__) + +HASHER_HASH = re.compile(r'^(\w+)=([a-f0-9]+)') +CHARSET = re.compile(r';\s*charset\s*=\s*(.*)\s*$', re.I) +HTML_CONTENT_TYPE = re.compile('text/html|application/x(ht)?ml') +DEFAULT_INDEX = 'https://pypi.org/pypi' + +def get_all_distribution_names(url=None): + """ + Return all distribution names known by an index. + :param url: The URL of the index. + :return: A list of all known distribution names. + """ + if url is None: + url = DEFAULT_INDEX + client = ServerProxy(url, timeout=3.0) + try: + return client.list_packages() + finally: + client('close')() + +class RedirectHandler(BaseRedirectHandler): + """ + A class to work around a bug in some Python 3.2.x releases. + """ + # There's a bug in the base version for some 3.2.x + # (e.g. 3.2.2 on Ubuntu Oneiric). If a Location header + # returns e.g. 
/abc, it bails because it says the scheme ''
+    # is bogus, when actually it should use the request's
+    # URL for the scheme. See Python issue #13696.
+    def http_error_302(self, req, fp, code, msg, headers):
+        # Some servers (incorrectly) return multiple Location headers
+        # (so probably same goes for URI). Use first header.
+        newurl = None
+        for key in ('location', 'uri'):
+            if key in headers:
+                newurl = headers[key]
+                break
+        if newurl is None:  # pragma: no cover
+            return
+        urlparts = urlparse(newurl)
+        if urlparts.scheme == '':
+            newurl = urljoin(req.get_full_url(), newurl)
+            if hasattr(headers, 'replace_header'):
+                headers.replace_header(key, newurl)
+            else:
+                headers[key] = newurl
+        return BaseRedirectHandler.http_error_302(self, req, fp, code, msg,
+                                                  headers)
+
+    http_error_301 = http_error_303 = http_error_307 = http_error_302
+
+class Locator(object):
+    """
+    A base class for locators - things that locate distributions.
+    """
+    source_extensions = ('.tar.gz', '.tar.bz2', '.tar', '.zip', '.tgz', '.tbz')
+    binary_extensions = ('.egg', '.exe', '.whl')
+    excluded_extensions = ('.pdf',)
+
+    # A list of tags indicating which wheels you want to match. The default
+    # value of None matches against the tags compatible with the running
+    # Python. If you want to match other values, set wheel_tags on a locator
+    # instance to a list of tuples (pyver, abi, arch) which you want to match.
+    wheel_tags = None
+
+    downloadable_extensions = source_extensions + ('.whl',)
+
+    def __init__(self, scheme='default'):
+        """
+        Initialise an instance.
+        :param scheme: Because locators look for most recent versions, they
+                       need to know the version scheme to use. This specifies
+                       the current PEP-recommended scheme - use ``'legacy'``
+                       if you need to support existing distributions on PyPI.
+        """
+        self._cache = {}
+        self.scheme = scheme
+        # Because of bugs in some of the handlers on some of the platforms,
+        # we use our own opener rather than just using urlopen.
+        self.opener = build_opener(RedirectHandler())
+        # If get_project() is called from locate(), the matcher instance
+        # is set from the requirement passed to locate(). See issue #18 for
+        # why this can be useful to know.
+        self.matcher = None
+        self.errors = queue.Queue()
+
+    def get_errors(self):
+        """
+        Return any errors which have occurred.
+        """
+        result = []
+        while not self.errors.empty():  # pragma: no cover
+            try:
+                e = self.errors.get(False)
+                result.append(e)
+            except queue.Empty:
+                continue
+            self.errors.task_done()
+        return result
+
+    def clear_errors(self):
+        """
+        Clear any errors which may have been logged.
+        """
+        # Just get the errors and throw them away
+        self.get_errors()
+
+    def clear_cache(self):
+        self._cache.clear()
+
+    def _get_scheme(self):
+        return self._scheme
+
+    def _set_scheme(self, value):
+        self._scheme = value
+
+    scheme = property(_get_scheme, _set_scheme)
+
+    def _get_project(self, name):
+        """
+        For a given project, get a dictionary mapping available versions to Distribution
+        instances.
+
+        This should be implemented in subclasses.
+
+        If called from a locate() request, self.matcher will be set to a
+        matcher for the requirement to satisfy, otherwise it will be None.
+        """
+        raise NotImplementedError('Please implement in the subclass')
+
+    def get_distribution_names(self):
+        """
+        Return all the distribution names known to this locator.
+ """ + raise NotImplementedError('Please implement in the subclass') + + def get_project(self, name): + """ + For a given project, get a dictionary mapping available versions to Distribution + instances. + + This calls _get_project to do all the work, and just implements a caching layer on top. + """ + if self._cache is None: # pragma: no cover + result = self._get_project(name) + elif name in self._cache: + result = self._cache[name] + else: + self.clear_errors() + result = self._get_project(name) + self._cache[name] = result + return result + + def score_url(self, url): + """ + Give an url a score which can be used to choose preferred URLs + for a given project release. + """ + t = urlparse(url) + basename = posixpath.basename(t.path) + compatible = True + is_wheel = basename.endswith('.whl') + is_downloadable = basename.endswith(self.downloadable_extensions) + if is_wheel: + compatible = is_compatible(Wheel(basename), self.wheel_tags) + return (t.scheme == 'https', 'pypi.org' in t.netloc, + is_downloadable, is_wheel, compatible, basename) + + def prefer_url(self, url1, url2): + """ + Choose one of two URLs where both are candidates for distribution + archives for the same version of a distribution (for example, + .tar.gz vs. zip). + + The current implementation favours https:// URLs over http://, archives + from PyPI over those from other locations, wheel compatibility (if a + wheel) and then the archive name. + """ + result = url2 + if url1: + s1 = self.score_url(url1) + s2 = self.score_url(url2) + if s1 > s2: + result = url1 + if result != url2: + logger.debug('Not replacing %r with %r', url1, url2) + else: + logger.debug('Replacing %r with %r', url1, url2) + return result + + def split_filename(self, filename, project_name): + """ + Attempt to split a filename in project name, version and Python version. + """ + return split_filename(filename, project_name) + + def convert_url_to_download_info(self, url, project_name): + """ + See if a URL is a candidate for a download URL for a project (the URL + has typically been scraped from an HTML page). + + If it is, a dictionary is returned with keys "name", "version", + "filename" and "url"; otherwise, None is returned. 
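+
+        A hedged sketch of the result for a typical sdist URL (the URL and
+        project name are hypothetical)::
+
+            info = locator.convert_url_to_download_info(
+                'https://example.com/foo-1.0.tar.gz', 'foo')
+            # info would include 'name', 'version', 'filename' and 'url'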
+ """ + def same_project(name1, name2): + return normalize_name(name1) == normalize_name(name2) + + result = None + scheme, netloc, path, params, query, frag = urlparse(url) + if frag.lower().startswith('egg='): # pragma: no cover + logger.debug('%s: version hint in fragment: %r', + project_name, frag) + m = HASHER_HASH.match(frag) + if m: + algo, digest = m.groups() + else: + algo, digest = None, None + origpath = path + if path and path[-1] == '/': # pragma: no cover + path = path[:-1] + if path.endswith('.whl'): + try: + wheel = Wheel(path) + if not is_compatible(wheel, self.wheel_tags): + logger.debug('Wheel not compatible: %s', path) + else: + if project_name is None: + include = True + else: + include = same_project(wheel.name, project_name) + if include: + result = { + 'name': wheel.name, + 'version': wheel.version, + 'filename': wheel.filename, + 'url': urlunparse((scheme, netloc, origpath, + params, query, '')), + 'python-version': ', '.join( + ['.'.join(list(v[2:])) for v in wheel.pyver]), + } + except Exception as e: # pragma: no cover + logger.warning('invalid path for wheel: %s', path) + elif not path.endswith(self.downloadable_extensions): # pragma: no cover + logger.debug('Not downloadable: %s', path) + else: # downloadable extension + path = filename = posixpath.basename(path) + for ext in self.downloadable_extensions: + if path.endswith(ext): + path = path[:-len(ext)] + t = self.split_filename(path, project_name) + if not t: # pragma: no cover + logger.debug('No match for project/version: %s', path) + else: + name, version, pyver = t + if not project_name or same_project(project_name, name): + result = { + 'name': name, + 'version': version, + 'filename': filename, + 'url': urlunparse((scheme, netloc, origpath, + params, query, '')), + #'packagetype': 'sdist', + } + if pyver: # pragma: no cover + result['python-version'] = pyver + break + if result and algo: + result['%s_digest' % algo] = digest + return result + + def _get_digest(self, info): + """ + Get a digest from a dictionary by looking at a "digests" dictionary + or keys of the form 'algo_digest'. + + Returns a 2-tuple (algo, digest) if found, else None. Currently + looks only for SHA256, then MD5. + """ + result = None + if 'digests' in info: + digests = info['digests'] + for algo in ('sha256', 'md5'): + if algo in digests: + result = (algo, digests[algo]) + break + if not result: + for algo in ('sha256', 'md5'): + key = '%s_digest' % algo + if key in info: + result = (algo, info[key]) + break + return result + + def _update_version_data(self, result, info): + """ + Update a result dictionary (the final result from _get_project) with a + dictionary for a specific version, which typically holds information + gleaned from a filename or URL for an archive for the distribution. + """ + name = info.pop('name') + version = info.pop('version') + if version in result: + dist = result[version] + md = dist.metadata + else: + dist = make_dist(name, version, scheme=self.scheme) + md = dist.metadata + dist.digest = digest = self._get_digest(info) + url = info['url'] + result['digests'][url] = digest + if md.source_url != info['url']: + md.source_url = self.prefer_url(md.source_url, url) + result['urls'].setdefault(version, set()).add(url) + dist.locator = self + result[version] = dist + + def locate(self, requirement, prereleases=False): + """ + Find the most recent distribution which matches the given + requirement. 
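+
+        For example (the project name is illustrative)::
+
+            dist = locator.locate('foo (>= 1.0)')
+            if dist is not None:
+                print(dist.name, dist.version)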
+ + :param requirement: A requirement of the form 'foo (1.0)' or perhaps + 'foo (>= 1.0, < 2.0, != 1.3)' + :param prereleases: If ``True``, allow pre-release versions + to be located. Otherwise, pre-release versions + are not returned. + :return: A :class:`Distribution` instance, or ``None`` if no such + distribution could be located. + """ + result = None + r = parse_requirement(requirement) + if r is None: # pragma: no cover + raise DistlibException('Not a valid requirement: %r' % requirement) + scheme = get_scheme(self.scheme) + self.matcher = matcher = scheme.matcher(r.requirement) + logger.debug('matcher: %s (%s)', matcher, type(matcher).__name__) + versions = self.get_project(r.name) + if len(versions) > 2: # urls and digests keys are present + # sometimes, versions are invalid + slist = [] + vcls = matcher.version_class + for k in versions: + if k in ('urls', 'digests'): + continue + try: + if not matcher.match(k): + logger.debug('%s did not match %r', matcher, k) + else: + if prereleases or not vcls(k).is_prerelease: + slist.append(k) + else: + logger.debug('skipping pre-release ' + 'version %s of %s', k, matcher.name) + except Exception: # pragma: no cover + logger.warning('error matching %s with %r', matcher, k) + pass # slist.append(k) + if len(slist) > 1: + slist = sorted(slist, key=scheme.key) + if slist: + logger.debug('sorted list: %s', slist) + version = slist[-1] + result = versions[version] + if result: + if r.extras: + result.extras = r.extras + result.download_urls = versions.get('urls', {}).get(version, set()) + d = {} + sd = versions.get('digests', {}) + for url in result.download_urls: + if url in sd: # pragma: no cover + d[url] = sd[url] + result.digests = d + self.matcher = None + return result + + +class PyPIRPCLocator(Locator): + """ + This locator uses XML-RPC to locate distributions. It therefore + cannot be used with simple mirrors (that only mirror file content). + """ + def __init__(self, url, **kwargs): + """ + Initialise an instance. + + :param url: The URL to use for XML-RPC. + :param kwargs: Passed to the superclass constructor. + """ + super(PyPIRPCLocator, self).__init__(**kwargs) + self.base_url = url + self.client = ServerProxy(url, timeout=3.0) + + def get_distribution_names(self): + """ + Return all the distribution names known to this locator. + """ + return set(self.client.list_packages()) + + def _get_project(self, name): + result = {'urls': {}, 'digests': {}} + versions = self.client.package_releases(name, True) + for v in versions: + urls = self.client.release_urls(name, v) + data = self.client.release_data(name, v) + metadata = Metadata(scheme=self.scheme) + metadata.name = data['name'] + metadata.version = data['version'] + metadata.license = data.get('license') + metadata.keywords = data.get('keywords', []) + metadata.summary = data.get('summary') + dist = Distribution(metadata) + if urls: + info = urls[0] + metadata.source_url = info['url'] + dist.digest = self._get_digest(info) + dist.locator = self + result[v] = dist + for info in urls: + url = info['url'] + digest = self._get_digest(info) + result['urls'].setdefault(v, set()).add(url) + result['digests'][url] = digest + return result + +class PyPIJSONLocator(Locator): + """ + This locator uses PyPI's JSON interface. It's very limited in functionality + and probably not worth using. 
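+
+    A minimal construction sketch::
+
+        locator = PyPIJSONLocator('https://pypi.org/pypi/')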
+ """ + def __init__(self, url, **kwargs): + super(PyPIJSONLocator, self).__init__(**kwargs) + self.base_url = ensure_slash(url) + + def get_distribution_names(self): + """ + Return all the distribution names known to this locator. + """ + raise NotImplementedError('Not available from this locator') + + def _get_project(self, name): + result = {'urls': {}, 'digests': {}} + url = urljoin(self.base_url, '%s/json' % quote(name)) + try: + resp = self.opener.open(url) + data = resp.read().decode() # for now + d = json.loads(data) + md = Metadata(scheme=self.scheme) + data = d['info'] + md.name = data['name'] + md.version = data['version'] + md.license = data.get('license') + md.keywords = data.get('keywords', []) + md.summary = data.get('summary') + dist = Distribution(md) + dist.locator = self + urls = d['urls'] + result[md.version] = dist + for info in d['urls']: + url = info['url'] + dist.download_urls.add(url) + dist.digests[url] = self._get_digest(info) + result['urls'].setdefault(md.version, set()).add(url) + result['digests'][url] = self._get_digest(info) + # Now get other releases + for version, infos in d['releases'].items(): + if version == md.version: + continue # already done + omd = Metadata(scheme=self.scheme) + omd.name = md.name + omd.version = version + odist = Distribution(omd) + odist.locator = self + result[version] = odist + for info in infos: + url = info['url'] + odist.download_urls.add(url) + odist.digests[url] = self._get_digest(info) + result['urls'].setdefault(version, set()).add(url) + result['digests'][url] = self._get_digest(info) +# for info in urls: +# md.source_url = info['url'] +# dist.digest = self._get_digest(info) +# dist.locator = self +# for info in urls: +# url = info['url'] +# result['urls'].setdefault(md.version, set()).add(url) +# result['digests'][url] = self._get_digest(info) + except Exception as e: + self.errors.put(text_type(e)) + logger.exception('JSON fetch failed: %s', e) + return result + + +class Page(object): + """ + This class represents a scraped HTML page. + """ + # The following slightly hairy-looking regex just looks for the contents of + # an anchor link, which has an attribute "href" either immediately preceded + # or immediately followed by a "rel" attribute. The attribute values can be + # declared with double quotes, single quotes or no quotes - which leads to + # the length of the expression. + _href = re.compile(""" +(rel\\s*=\\s*(?:"(?P[^"]*)"|'(?P[^']*)'|(?P[^>\\s\n]*))\\s+)? +href\\s*=\\s*(?:"(?P[^"]*)"|'(?P[^']*)'|(?P[^>\\s\n]*)) +(\\s+rel\\s*=\\s*(?:"(?P[^"]*)"|'(?P[^']*)'|(?P[^>\\s\n]*)))? +""", re.I | re.S | re.X) + _base = re.compile(r"""]+)""", re.I | re.S) + + def __init__(self, data, url): + """ + Initialise an instance with the Unicode page contents and the URL they + came from. + """ + self.data = data + self.base_url = self.url = url + m = self._base.search(self.data) + if m: + self.base_url = m.group(1) + + _clean_re = re.compile(r'[^a-z0-9$&+,/:;=?@.#%_\\|-]', re.I) + + @cached_property + def links(self): + """ + Return the URLs of all the links on a page together with information + about their "rel" attribute, for determining which ones to treat as + downloads and which ones to queue for further scraping. + """ + def clean(url): + "Tidy up an URL." 
+            scheme, netloc, path, params, query, frag = urlparse(url)
+            return urlunparse((scheme, netloc, quote(path),
+                               params, query, frag))
+
+        result = set()
+        for match in self._href.finditer(self.data):
+            d = match.groupdict('')
+            rel = (d['rel1'] or d['rel2'] or d['rel3'] or
+                   d['rel4'] or d['rel5'] or d['rel6'])
+            url = d['url1'] or d['url2'] or d['url3']
+            url = urljoin(self.base_url, url)
+            url = unescape(url)
+            url = self._clean_re.sub(lambda m: '%%%2x' % ord(m.group(0)), url)
+            result.add((url, rel))
+        # We sort the result, hoping to bring the most recent versions
+        # to the front
+        result = sorted(result, key=lambda t: t[0], reverse=True)
+        return result
+
+
+class SimpleScrapingLocator(Locator):
+    """
+    A locator which scrapes HTML pages to locate downloads for a distribution.
+    This runs multiple threads to do the I/O; performance is at least as good
+    as pip's PackageFinder, which works in an analogous fashion.
+    """
+
+    # These are used to deal with various Content-Encoding schemes.
+    decoders = {
+        'deflate': zlib.decompress,
+        'gzip': lambda b: gzip.GzipFile(fileobj=BytesIO(b)).read(),
+        'none': lambda b: b,
+    }
+
+    def __init__(self, url, timeout=None, num_workers=10, **kwargs):
+        """
+        Initialise an instance.
+        :param url: The root URL to use for scraping.
+        :param timeout: The timeout, in seconds, to be applied to requests.
+                        This defaults to ``None`` (no timeout specified).
+        :param num_workers: The number of worker threads you want to do I/O.
+                            This defaults to 10.
+        :param kwargs: Passed to the superclass.
+        """
+        super(SimpleScrapingLocator, self).__init__(**kwargs)
+        self.base_url = ensure_slash(url)
+        self.timeout = timeout
+        self._page_cache = {}
+        self._seen = set()
+        self._to_fetch = queue.Queue()
+        self._bad_hosts = set()
+        self.skip_externals = False
+        self.num_workers = num_workers
+        self._lock = threading.RLock()
+        # See issue #45: we need to be resilient when the locator is used
+        # in a thread, e.g. with concurrent.futures. We can't use self._lock
+        # as it is for coordinating our internal threads - the ones created
+        # in _prepare_threads.
+        self._gplock = threading.RLock()
+        self.platform_check = False  # See issue #112
+
+    def _prepare_threads(self):
+        """
+        Threads are created only when get_project is called, and terminate
+        before it returns. They are there primarily to parallelise I/O (i.e.
+        fetching web pages).
+        """
+        self._threads = []
+        for i in range(self.num_workers):
+            t = threading.Thread(target=self._fetch)
+            t.daemon = True
+            t.start()
+            self._threads.append(t)
+
+    def _wait_threads(self):
+        """
+        Tell all the threads to terminate (by sending a sentinel value) and
+        wait for them to do so.
+        """
+        # Note that you need two loops, since you can't say which
+        # thread will get each sentinel
+        for t in self._threads:
+            self._to_fetch.put(None)    # sentinel
+        for t in self._threads:
+            t.join()
+        self._threads = []
+
+    def _get_project(self, name):
+        result = {'urls': {}, 'digests': {}}
+        with self._gplock:
+            self.result = result
+            self.project_name = name
+            url = urljoin(self.base_url, '%s/' % quote(name))
+            self._seen.clear()
+            self._page_cache.clear()
+            self._prepare_threads()
+            try:
+                logger.debug('Queueing %s', url)
+                self._to_fetch.put(url)
+                self._to_fetch.join()
+            finally:
+                self._wait_threads()
+            del self.result
+        return result
+
+    platform_dependent = re.compile(r'\b(linux_(i\d86|x86_64|arm\w+)|'
+                                    r'win(32|_amd64)|macosx_?\d+)\b', re.I)
+
+    def _is_platform_dependent(self, url):
+        """
+        Does an URL refer to a platform-specific download?
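+
+        For example, a URL whose filename contains ``win_amd64`` or
+        ``linux_x86_64`` matches the pattern above, while a plain sdist
+        name such as ``foo-1.0.tar.gz`` does not.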
+ """ + return self.platform_dependent.search(url) + + def _process_download(self, url): + """ + See if an URL is a suitable download for a project. + + If it is, register information in the result dictionary (for + _get_project) about the specific version it's for. + + Note that the return value isn't actually used other than as a boolean + value. + """ + if self.platform_check and self._is_platform_dependent(url): + info = None + else: + info = self.convert_url_to_download_info(url, self.project_name) + logger.debug('process_download: %s -> %s', url, info) + if info: + with self._lock: # needed because self.result is shared + self._update_version_data(self.result, info) + return info + + def _should_queue(self, link, referrer, rel): + """ + Determine whether a link URL from a referring page and with a + particular "rel" attribute should be queued for scraping. + """ + scheme, netloc, path, _, _, _ = urlparse(link) + if path.endswith(self.source_extensions + self.binary_extensions + + self.excluded_extensions): + result = False + elif self.skip_externals and not link.startswith(self.base_url): + result = False + elif not referrer.startswith(self.base_url): + result = False + elif rel not in ('homepage', 'download'): + result = False + elif scheme not in ('http', 'https', 'ftp'): + result = False + elif self._is_platform_dependent(link): + result = False + else: + host = netloc.split(':', 1)[0] + if host.lower() == 'localhost': + result = False + else: + result = True + logger.debug('should_queue: %s (%s) from %s -> %s', link, rel, + referrer, result) + return result + + def _fetch(self): + """ + Get a URL to fetch from the work queue, get the HTML page, examine its + links for download candidates and candidates for further scraping. + + This is a handy method to run in a thread. + """ + while True: + url = self._to_fetch.get() + try: + if url: + page = self.get_page(url) + if page is None: # e.g. after an error + continue + for link, rel in page.links: + if link not in self._seen: + try: + self._seen.add(link) + if (not self._process_download(link) and + self._should_queue(link, url, rel)): + logger.debug('Queueing %s from %s', link, url) + self._to_fetch.put(link) + except MetadataInvalidError: # e.g. invalid versions + pass + except Exception as e: # pragma: no cover + self.errors.put(text_type(e)) + finally: + # always do this, to avoid hangs :-) + self._to_fetch.task_done() + if not url: + #logger.debug('Sentinel seen, quitting.') + break + + def get_page(self, url): + """ + Get the HTML for an URL, possibly from an in-memory cache. + + XXX TODO Note: this cache is never actually cleared. It's assumed that + the data won't get stale over the lifetime of a locator instance (not + necessarily true for the default_locator). 
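+
+        Illustrative use (the URL is hypothetical)::
+
+            page = locator.get_page('https://pypi.org/simple/foo/')
+            if page is not None:
+                for url, rel in page.links:
+                    print(url, rel)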
+ """ + # http://peak.telecommunity.com/DevCenter/EasyInstall#package-index-api + scheme, netloc, path, _, _, _ = urlparse(url) + if scheme == 'file' and os.path.isdir(url2pathname(path)): + url = urljoin(ensure_slash(url), 'index.html') + + if url in self._page_cache: + result = self._page_cache[url] + logger.debug('Returning %s from cache: %s', url, result) + else: + host = netloc.split(':', 1)[0] + result = None + if host in self._bad_hosts: + logger.debug('Skipping %s due to bad host %s', url, host) + else: + req = Request(url, headers={'Accept-encoding': 'identity'}) + try: + logger.debug('Fetching %s', url) + resp = self.opener.open(req, timeout=self.timeout) + logger.debug('Fetched %s', url) + headers = resp.info() + content_type = headers.get('Content-Type', '') + if HTML_CONTENT_TYPE.match(content_type): + final_url = resp.geturl() + data = resp.read() + encoding = headers.get('Content-Encoding') + if encoding: + decoder = self.decoders[encoding] # fail if not found + data = decoder(data) + encoding = 'utf-8' + m = CHARSET.search(content_type) + if m: + encoding = m.group(1) + try: + data = data.decode(encoding) + except UnicodeError: # pragma: no cover + data = data.decode('latin-1') # fallback + result = Page(data, final_url) + self._page_cache[final_url] = result + except HTTPError as e: + if e.code != 404: + logger.exception('Fetch failed: %s: %s', url, e) + except URLError as e: # pragma: no cover + logger.exception('Fetch failed: %s: %s', url, e) + with self._lock: + self._bad_hosts.add(host) + except Exception as e: # pragma: no cover + logger.exception('Fetch failed: %s: %s', url, e) + finally: + self._page_cache[url] = result # even if None (failure) + return result + + _distname_re = re.compile(']*>([^<]+)<') + + def get_distribution_names(self): + """ + Return all the distribution names known to this locator. + """ + result = set() + page = self.get_page(self.base_url) + if not page: + raise DistlibException('Unable to get %s' % self.base_url) + for match in self._distname_re.finditer(page.data): + result.add(match.group(1)) + return result + +class DirectoryLocator(Locator): + """ + This class locates distributions in a directory tree. + """ + + def __init__(self, path, **kwargs): + """ + Initialise an instance. + :param path: The root of the directory tree to search. + :param kwargs: Passed to the superclass constructor, + except for: + * recursive - if True (the default), subdirectories are + recursed into. If False, only the top-level directory + is searched, + """ + self.recursive = kwargs.pop('recursive', True) + super(DirectoryLocator, self).__init__(**kwargs) + path = os.path.abspath(path) + if not os.path.isdir(path): # pragma: no cover + raise DistlibException('Not a directory: %r' % path) + self.base_dir = path + + def should_include(self, filename, parent): + """ + Should a filename be considered as a candidate for a distribution + archive? As well as the filename, the directory which contains it + is provided, though not used by the current implementation. 
+ """ + return filename.endswith(self.downloadable_extensions) + + def _get_project(self, name): + result = {'urls': {}, 'digests': {}} + for root, dirs, files in os.walk(self.base_dir): + for fn in files: + if self.should_include(fn, root): + fn = os.path.join(root, fn) + url = urlunparse(('file', '', + pathname2url(os.path.abspath(fn)), + '', '', '')) + info = self.convert_url_to_download_info(url, name) + if info: + self._update_version_data(result, info) + if not self.recursive: + break + return result + + def get_distribution_names(self): + """ + Return all the distribution names known to this locator. + """ + result = set() + for root, dirs, files in os.walk(self.base_dir): + for fn in files: + if self.should_include(fn, root): + fn = os.path.join(root, fn) + url = urlunparse(('file', '', + pathname2url(os.path.abspath(fn)), + '', '', '')) + info = self.convert_url_to_download_info(url, None) + if info: + result.add(info['name']) + if not self.recursive: + break + return result + +class JSONLocator(Locator): + """ + This locator uses special extended metadata (not available on PyPI) and is + the basis of performant dependency resolution in distlib. Other locators + require archive downloads before dependencies can be determined! As you + might imagine, that can be slow. + """ + def get_distribution_names(self): + """ + Return all the distribution names known to this locator. + """ + raise NotImplementedError('Not available from this locator') + + def _get_project(self, name): + result = {'urls': {}, 'digests': {}} + data = get_project_data(name) + if data: + for info in data.get('files', []): + if info['ptype'] != 'sdist' or info['pyversion'] != 'source': + continue + # We don't store summary in project metadata as it makes + # the data bigger for no benefit during dependency + # resolution + dist = make_dist(data['name'], info['version'], + summary=data.get('summary', + 'Placeholder for summary'), + scheme=self.scheme) + md = dist.metadata + md.source_url = info['url'] + # TODO SHA256 digest + if 'digest' in info and info['digest']: + dist.digest = ('md5', info['digest']) + md.dependencies = info.get('requirements', {}) + dist.exports = info.get('exports', {}) + result[dist.version] = dist + result['urls'].setdefault(dist.version, set()).add(info['url']) + return result + +class DistPathLocator(Locator): + """ + This locator finds installed distributions in a path. It can be useful for + adding to an :class:`AggregatingLocator`. + """ + def __init__(self, distpath, **kwargs): + """ + Initialise an instance. + + :param distpath: A :class:`DistributionPath` instance to search. + """ + super(DistPathLocator, self).__init__(**kwargs) + assert isinstance(distpath, DistributionPath) + self.distpath = distpath + + def _get_project(self, name): + dist = self.distpath.get_distribution(name) + if dist is None: + result = {'urls': {}, 'digests': {}} + else: + result = { + dist.version: dist, + 'urls': {dist.version: set([dist.source_url])}, + 'digests': {dist.version: set([None])} + } + return result + + +class AggregatingLocator(Locator): + """ + This class allows you to chain and/or merge a list of locators. + """ + def __init__(self, *locators, **kwargs): + """ + Initialise an instance. + + :param locators: The list of locators to search. + :param kwargs: Passed to the superclass constructor, + except for: + * merge - if False (the default), the first successful + search from any of the locators is returned. If True, + the results from all locators are merged (this can be + slow). 
+ """ + self.merge = kwargs.pop('merge', False) + self.locators = locators + super(AggregatingLocator, self).__init__(**kwargs) + + def clear_cache(self): + super(AggregatingLocator, self).clear_cache() + for locator in self.locators: + locator.clear_cache() + + def _set_scheme(self, value): + self._scheme = value + for locator in self.locators: + locator.scheme = value + + scheme = property(Locator.scheme.fget, _set_scheme) + + def _get_project(self, name): + result = {} + for locator in self.locators: + d = locator.get_project(name) + if d: + if self.merge: + files = result.get('urls', {}) + digests = result.get('digests', {}) + # next line could overwrite result['urls'], result['digests'] + result.update(d) + df = result.get('urls') + if files and df: + for k, v in files.items(): + if k in df: + df[k] |= v + else: + df[k] = v + dd = result.get('digests') + if digests and dd: + dd.update(digests) + else: + # See issue #18. If any dists are found and we're looking + # for specific constraints, we only return something if + # a match is found. For example, if a DirectoryLocator + # returns just foo (1.0) while we're looking for + # foo (>= 2.0), we'll pretend there was nothing there so + # that subsequent locators can be queried. Otherwise we + # would just return foo (1.0) which would then lead to a + # failure to find foo (>= 2.0), because other locators + # weren't searched. Note that this only matters when + # merge=False. + if self.matcher is None: + found = True + else: + found = False + for k in d: + if self.matcher.match(k): + found = True + break + if found: + result = d + break + return result + + def get_distribution_names(self): + """ + Return all the distribution names known to this locator. + """ + result = set() + for locator in self.locators: + try: + result |= locator.get_distribution_names() + except NotImplementedError: + pass + return result + + +# We use a legacy scheme simply because most of the dists on PyPI use legacy +# versions which don't conform to PEP 426 / PEP 440. +default_locator = AggregatingLocator( + JSONLocator(), + SimpleScrapingLocator('https://pypi.org/simple/', + timeout=3.0), + scheme='legacy') + +locate = default_locator.locate + +NAME_VERSION_RE = re.compile(r'(?P[\w-]+)\s*' + r'\(\s*(==\s*)?(?P[^)]+)\)$') + +class DependencyFinder(object): + """ + Locate dependencies for distributions. + """ + + def __init__(self, locator=None): + """ + Initialise an instance, using the specified locator + to locate distributions. + """ + self.locator = locator or default_locator + self.scheme = get_scheme(self.locator.scheme) + + def add_distribution(self, dist): + """ + Add a distribution to the finder. This will update internal information + about who provides what. + :param dist: The distribution to add. + """ + logger.debug('adding distribution %s', dist) + name = dist.key + self.dists_by_name[name] = dist + self.dists[(name, dist.version)] = dist + for p in dist.provides: + name, version = parse_name_and_version(p) + logger.debug('Add to provided: %s, %s, %s', name, version, dist) + self.provided.setdefault(name, set()).add((version, dist)) + + def remove_distribution(self, dist): + """ + Remove a distribution from the finder. This will update internal + information about who provides what. + :param dist: The distribution to remove. 
+ """ + logger.debug('removing distribution %s', dist) + name = dist.key + del self.dists_by_name[name] + del self.dists[(name, dist.version)] + for p in dist.provides: + name, version = parse_name_and_version(p) + logger.debug('Remove from provided: %s, %s, %s', name, version, dist) + s = self.provided[name] + s.remove((version, dist)) + if not s: + del self.provided[name] + + def get_matcher(self, reqt): + """ + Get a version matcher for a requirement. + :param reqt: The requirement + :type reqt: str + :return: A version matcher (an instance of + :class:`distlib.version.Matcher`). + """ + try: + matcher = self.scheme.matcher(reqt) + except UnsupportedVersionError: # pragma: no cover + # XXX compat-mode if cannot read the version + name = reqt.split()[0] + matcher = self.scheme.matcher(name) + return matcher + + def find_providers(self, reqt): + """ + Find the distributions which can fulfill a requirement. + + :param reqt: The requirement. + :type reqt: str + :return: A set of distribution which can fulfill the requirement. + """ + matcher = self.get_matcher(reqt) + name = matcher.key # case-insensitive + result = set() + provided = self.provided + if name in provided: + for version, provider in provided[name]: + try: + match = matcher.match(version) + except UnsupportedVersionError: + match = False + + if match: + result.add(provider) + break + return result + + def try_to_replace(self, provider, other, problems): + """ + Attempt to replace one provider with another. This is typically used + when resolving dependencies from multiple sources, e.g. A requires + (B >= 1.0) while C requires (B >= 1.1). + + For successful replacement, ``provider`` must meet all the requirements + which ``other`` fulfills. + + :param provider: The provider we are trying to replace with. + :param other: The provider we're trying to replace. + :param problems: If False is returned, this will contain what + problems prevented replacement. This is currently + a tuple of the literal string 'cantreplace', + ``provider``, ``other`` and the set of requirements + that ``provider`` couldn't fulfill. + :return: True if we can replace ``other`` with ``provider``, else + False. + """ + rlist = self.reqts[other] + unmatched = set() + for s in rlist: + matcher = self.get_matcher(s) + if not matcher.match(provider.version): + unmatched.add(s) + if unmatched: + # can't replace other with provider + problems.add(('cantreplace', provider, other, + frozenset(unmatched))) + result = False + else: + # can replace other with provider + self.remove_distribution(other) + del self.reqts[other] + for s in rlist: + self.reqts.setdefault(provider, set()).add(s) + self.add_distribution(provider) + result = True + return result + + def find(self, requirement, meta_extras=None, prereleases=False): + """ + Find a distribution and all distributions it depends on. + + :param requirement: The requirement specifying the distribution to + find, or a Distribution instance. + :param meta_extras: A list of meta extras such as :test:, :build: and + so on. + :param prereleases: If ``True``, allow pre-release versions to be + returned - otherwise, don't return prereleases + unless they're all that's available. + + Return a set of :class:`Distribution` instances and a set of + problems. 
+ + The distributions returned should be such that they have the + :attr:`required` attribute set to ``True`` if they were + from the ``requirement`` passed to ``find()``, and they have the + :attr:`build_time_dependency` attribute set to ``True`` unless they + are post-installation dependencies of the ``requirement``. + + The problems should be a tuple consisting of the string + ``'unsatisfied'`` and the requirement which couldn't be satisfied + by any distribution known to the locator. + """ + + self.provided = {} + self.dists = {} + self.dists_by_name = {} + self.reqts = {} + + meta_extras = set(meta_extras or []) + if ':*:' in meta_extras: + meta_extras.remove(':*:') + # :meta: and :run: are implicitly included + meta_extras |= set([':test:', ':build:', ':dev:']) + + if isinstance(requirement, Distribution): + dist = odist = requirement + logger.debug('passed %s as requirement', odist) + else: + dist = odist = self.locator.locate(requirement, + prereleases=prereleases) + if dist is None: + raise DistlibException('Unable to locate %r' % requirement) + logger.debug('located %s', odist) + dist.requested = True + problems = set() + todo = set([dist]) + install_dists = set([odist]) + while todo: + dist = todo.pop() + name = dist.key # case-insensitive + if name not in self.dists_by_name: + self.add_distribution(dist) + else: + #import pdb; pdb.set_trace() + other = self.dists_by_name[name] + if other != dist: + self.try_to_replace(dist, other, problems) + + ireqts = dist.run_requires | dist.meta_requires + sreqts = dist.build_requires + ereqts = set() + if meta_extras and dist in install_dists: + for key in ('test', 'build', 'dev'): + e = ':%s:' % key + if e in meta_extras: + ereqts |= getattr(dist, '%s_requires' % key) + all_reqts = ireqts | sreqts | ereqts + for r in all_reqts: + providers = self.find_providers(r) + if not providers: + logger.debug('No providers found for %r', r) + provider = self.locator.locate(r, prereleases=prereleases) + # If no provider is found and we didn't consider + # prereleases, consider them now. 
+ if provider is None and not prereleases: + provider = self.locator.locate(r, prereleases=True) + if provider is None: + logger.debug('Cannot satisfy %r', r) + problems.add(('unsatisfied', r)) + else: + n, v = provider.key, provider.version + if (n, v) not in self.dists: + todo.add(provider) + providers.add(provider) + if r in ireqts and dist in install_dists: + install_dists.add(provider) + logger.debug('Adding %s to install_dists', + provider.name_and_version) + for p in providers: + name = p.key + if name not in self.dists_by_name: + self.reqts.setdefault(p, set()).add(r) + else: + other = self.dists_by_name[name] + if other != p: + # see if other can be replaced by p + self.try_to_replace(p, other, problems) + + dists = set(self.dists.values()) + for dist in dists: + dist.build_time_dependency = dist not in install_dists + if dist.build_time_dependency: + logger.debug('%s is a build-time dependency only.', + dist.name_and_version) + logger.debug('find done for %s', odist) + return dists, problems diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distlib/manifest.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distlib/manifest.py new file mode 100644 index 0000000000000000000000000000000000000000..ca0fe442d9ca499466df9438df16eca405c5f102 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distlib/manifest.py @@ -0,0 +1,393 @@ +# -*- coding: utf-8 -*- +# +# Copyright (C) 2012-2013 Python Software Foundation. +# See LICENSE.txt and CONTRIBUTORS.txt. +# +""" +Class representing the list of files in a distribution. + +Equivalent to distutils.filelist, but fixes some problems. +""" +import fnmatch +import logging +import os +import re +import sys + +from . import DistlibException +from .compat import fsdecode +from .util import convert_path + + +__all__ = ['Manifest'] + +logger = logging.getLogger(__name__) + +# a \ followed by some spaces + EOL +_COLLAPSE_PATTERN = re.compile('\\\\w*\n', re.M) +_COMMENTED_LINE = re.compile('#.*?(?=\n)|\n(?=$)', re.M | re.S) + +# +# Due to the different results returned by fnmatch.translate, we need +# to do slightly different processing for Python 2.7 and 3.2 ... this needed +# to be brought in for Python 3.6 onwards. +# +_PYTHON_VERSION = sys.version_info[:2] + +class Manifest(object): + """A list of files built by on exploring the filesystem and filtered by + applying various patterns to what we find there. + """ + + def __init__(self, base=None): + """ + Initialise an instance. + + :param base: The base directory to explore under. + """ + self.base = os.path.abspath(os.path.normpath(base or os.getcwd())) + self.prefix = self.base + os.sep + self.allfiles = None + self.files = set() + + # + # Public API + # + + def findall(self): + """Find all files under the base and set ``allfiles`` to the absolute + pathnames of files found. + """ + from stat import S_ISREG, S_ISDIR, S_ISLNK + + self.allfiles = allfiles = [] + root = self.base + stack = [root] + pop = stack.pop + push = stack.append + + while stack: + root = pop() + names = os.listdir(root) + + for name in names: + fullname = os.path.join(root, name) + + # Avoid excess stat calls -- just one will do, thank you! + stat = os.stat(fullname) + mode = stat.st_mode + if S_ISREG(mode): + allfiles.append(fsdecode(fullname)) + elif S_ISDIR(mode) and not S_ISLNK(mode): + push(fullname) + + def add(self, item): + """ + Add a file to the manifest. + + :param item: The pathname to add. 
This can be relative to the base. + """ + if not item.startswith(self.prefix): + item = os.path.join(self.base, item) + self.files.add(os.path.normpath(item)) + + def add_many(self, items): + """ + Add a list of files to the manifest. + + :param items: The pathnames to add. These can be relative to the base. + """ + for item in items: + self.add(item) + + def sorted(self, wantdirs=False): + """ + Return sorted files in directory order + """ + + def add_dir(dirs, d): + dirs.add(d) + logger.debug('add_dir added %s', d) + if d != self.base: + parent, _ = os.path.split(d) + assert parent not in ('', '/') + add_dir(dirs, parent) + + result = set(self.files) # make a copy! + if wantdirs: + dirs = set() + for f in result: + add_dir(dirs, os.path.dirname(f)) + result |= dirs + return [os.path.join(*path_tuple) for path_tuple in + sorted(os.path.split(path) for path in result)] + + def clear(self): + """Clear all collected files.""" + self.files = set() + self.allfiles = [] + + def process_directive(self, directive): + """ + Process a directive which either adds some files from ``allfiles`` to + ``files``, or removes some files from ``files``. + + :param directive: The directive to process. This should be in a format + compatible with distutils ``MANIFEST.in`` files: + + http://docs.python.org/distutils/sourcedist.html#commands + """ + # Parse the line: split it up, make sure the right number of words + # is there, and return the relevant words. 'action' is always + # defined: it's the first word of the line. Which of the other + # three are defined depends on the action; it'll be either + # patterns, (dir and patterns), or (dirpattern). + action, patterns, thedir, dirpattern = self._parse_directive(directive) + + # OK, now we know that the action is valid and we have the + # right number of words on the line for that action -- so we + # can proceed with minimal error-checking. 
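+        # Illustrative directive forms (examples, not from the original
+        # source); each is dispatched to one of the branches below:
+        #   include README.rst LICENSE
+        #   recursive-include examples *.py *.txt
+        #   graft docs
+        #   prune build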
+        if action == 'include':
+            for pattern in patterns:
+                if not self._include_pattern(pattern, anchor=True):
+                    logger.warning('no files found matching %r', pattern)
+
+        elif action == 'exclude':
+            for pattern in patterns:
+                found = self._exclude_pattern(pattern, anchor=True)
+                #if not found:
+                #    logger.warning('no previously-included files '
+                #                   'found matching %r', pattern)
+
+        elif action == 'global-include':
+            for pattern in patterns:
+                if not self._include_pattern(pattern, anchor=False):
+                    logger.warning('no files found matching %r '
+                                   'anywhere in distribution', pattern)
+
+        elif action == 'global-exclude':
+            for pattern in patterns:
+                found = self._exclude_pattern(pattern, anchor=False)
+                #if not found:
+                #    logger.warning('no previously-included files '
+                #                   'matching %r found anywhere in '
+                #                   'distribution', pattern)
+
+        elif action == 'recursive-include':
+            for pattern in patterns:
+                if not self._include_pattern(pattern, prefix=thedir):
+                    logger.warning('no files found matching %r '
+                                   'under directory %r', pattern, thedir)
+
+        elif action == 'recursive-exclude':
+            for pattern in patterns:
+                found = self._exclude_pattern(pattern, prefix=thedir)
+                #if not found:
+                #    logger.warning('no previously-included files '
+                #                   'matching %r found under directory %r',
+                #                   pattern, thedir)
+
+        elif action == 'graft':
+            if not self._include_pattern(None, prefix=dirpattern):
+                logger.warning('no directories found matching %r',
+                               dirpattern)
+
+        elif action == 'prune':
+            if not self._exclude_pattern(None, prefix=dirpattern):
+                logger.warning('no previously-included directories found '
+                               'matching %r', dirpattern)
+        else:   # pragma: no cover
+            # This should never happen, as it should be caught in
+            # _parse_template_line
+            raise DistlibException(
+                'invalid action %r' % action)
+
+    #
+    # Private API
+    #
+
+    def _parse_directive(self, directive):
+        """
+        Validate a directive.
+        :param directive: The directive to validate.
+        :return: A tuple of action, patterns, thedir, dir_patterns
+        """
+        words = directive.split()
+        if len(words) == 1 and words[0] not in ('include', 'exclude',
+                                                'global-include',
+                                                'global-exclude',
+                                                'recursive-include',
+                                                'recursive-exclude',
+                                                'graft', 'prune'):
+            # no action given, let's use the default 'include'
+            words.insert(0, 'include')
+
+        action = words[0]
+        patterns = thedir = dir_pattern = None
+
+        if action in ('include', 'exclude',
+                      'global-include', 'global-exclude'):
+            if len(words) < 2:
+                raise DistlibException(
+                    '%r expects <pattern1> <pattern2> ...' % action)
+
+            patterns = [convert_path(word) for word in words[1:]]
+
+        elif action in ('recursive-include', 'recursive-exclude'):
+            if len(words) < 3:
+                raise DistlibException(
+                    '%r expects <dir> <pattern1> <pattern2> ...' % action)
+
+            thedir = convert_path(words[1])
+            patterns = [convert_path(word) for word in words[2:]]
+
+        elif action in ('graft', 'prune'):
+            if len(words) != 2:
+                raise DistlibException(
+                    '%r expects a single <dir_pattern>' % action)
+
+            dir_pattern = convert_path(words[1])
+
+        else:
+            raise DistlibException('unknown action %r' % action)
+
+        return action, patterns, thedir, dir_pattern
+
+    def _include_pattern(self, pattern, anchor=True, prefix=None,
+                         is_regex=False):
+        """Select strings (presumably filenames) from 'self.files' that
+        match 'pattern', a Unix-style wildcard (glob) pattern.
+
+        Patterns are not quite the same as implemented by the 'fnmatch'
+        module: '*' and '?' match non-special characters, where "special"
+        is platform-dependent: slash on Unix; colon, slash, and backslash on
+        DOS/Windows; and colon on Mac OS.
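+
+        For example (illustrative), on Unix 'foo?bar' matches 'fooxbar' but
+        not 'foo/bar', because '?' is translated so that it cannot match
+        the path separator.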
+ + If 'anchor' is true (the default), then the pattern match is more + stringent: "*.py" will match "foo.py" but not "foo/bar.py". If + 'anchor' is false, both of these will match. + + If 'prefix' is supplied, then only filenames starting with 'prefix' + (itself a pattern) and ending with 'pattern', with anything in between + them, will match. 'anchor' is ignored in this case. + + If 'is_regex' is true, 'anchor' and 'prefix' are ignored, and + 'pattern' is assumed to be either a string containing a regex or a + regex object -- no translation is done, the regex is just compiled + and used as-is. + + Selected strings will be added to self.files. + + Return True if files are found. + """ + # XXX docstring lying about what the special chars are? + found = False + pattern_re = self._translate_pattern(pattern, anchor, prefix, is_regex) + + # delayed loading of allfiles list + if self.allfiles is None: + self.findall() + + for name in self.allfiles: + if pattern_re.search(name): + self.files.add(name) + found = True + return found + + def _exclude_pattern(self, pattern, anchor=True, prefix=None, + is_regex=False): + """Remove strings (presumably filenames) from 'files' that match + 'pattern'. + + Other parameters are the same as for 'include_pattern()', above. + The list 'self.files' is modified in place. Return True if files are + found. + + This API is public to allow e.g. exclusion of SCM subdirs, e.g. when + packaging source distributions + """ + found = False + pattern_re = self._translate_pattern(pattern, anchor, prefix, is_regex) + for f in list(self.files): + if pattern_re.search(f): + self.files.remove(f) + found = True + return found + + def _translate_pattern(self, pattern, anchor=True, prefix=None, + is_regex=False): + """Translate a shell-like wildcard pattern to a compiled regular + expression. + + Return the compiled regex. If 'is_regex' true, + then 'pattern' is directly compiled to a regex (if it's a string) + or just returned as-is (assumes it's a regex object). + """ + if is_regex: + if isinstance(pattern, str): + return re.compile(pattern) + else: + return pattern + + if _PYTHON_VERSION > (3, 2): + # ditch start and end characters + start, _, end = self._glob_to_re('_').partition('_') + + if pattern: + pattern_re = self._glob_to_re(pattern) + if _PYTHON_VERSION > (3, 2): + assert pattern_re.startswith(start) and pattern_re.endswith(end) + else: + pattern_re = '' + + base = re.escape(os.path.join(self.base, '')) + if prefix is not None: + # ditch end of pattern character + if _PYTHON_VERSION <= (3, 2): + empty_pattern = self._glob_to_re('') + prefix_re = self._glob_to_re(prefix)[:-len(empty_pattern)] + else: + prefix_re = self._glob_to_re(prefix) + assert prefix_re.startswith(start) and prefix_re.endswith(end) + prefix_re = prefix_re[len(start): len(prefix_re) - len(end)] + sep = os.sep + if os.sep == '\\': + sep = r'\\' + if _PYTHON_VERSION <= (3, 2): + pattern_re = '^' + base + sep.join((prefix_re, + '.*' + pattern_re)) + else: + pattern_re = pattern_re[len(start): len(pattern_re) - len(end)] + pattern_re = r'%s%s%s%s.*%s%s' % (start, base, prefix_re, sep, + pattern_re, end) + else: # no prefix -- respect anchor flag + if anchor: + if _PYTHON_VERSION <= (3, 2): + pattern_re = '^' + base + pattern_re + else: + pattern_re = r'%s%s%s' % (start, base, pattern_re[len(start):]) + + return re.compile(pattern_re) + + def _glob_to_re(self, pattern): + """Translate a shell-like glob pattern to a regular expression. + + Return a string containing the regex. 
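+        (For example -- illustrative -- on POSIX, '*.py' ends up as
+        something like '(?s:[^/]*\.py)\Z'; the exact form varies across
+        Python versions.)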
+        Differs from
+        'fnmatch.translate()' in that '*' does not match "special characters"
+        (which are platform-specific).
+        """
+        pattern_re = fnmatch.translate(pattern)
+
+        # '?' and '*' in the glob pattern become '.' and '.*' in the RE, which
+        # IMHO is wrong -- '?' and '*' aren't supposed to match slash in Unix,
+        # and by extension they shouldn't match such "special characters" under
+        # any OS. So change all non-escaped dots in the RE to match any
+        # character except the special characters (currently: just os.sep).
+        sep = os.sep
+        if os.sep == '\\':
+            # we're using a regex to manipulate a regex, so we need
+            # to escape the backslash twice
+            sep = r'\\\\'
+        escaped = r'\1[^%s]' % sep
+        pattern_re = re.sub(r'((?<!\\)(\\\\)*)\.', escaped, pattern_re)
+        return pattern_re
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distlib/markers.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distlib/markers.py
new file mode 100644
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distlib/markers.py
+# -*- coding: utf-8 -*-
+#
+# Copyright (C) 2012-2017 Vinay Sajip.
+# Licensed to the Python Software Foundation under a contributor agreement.
+# See LICENSE.txt and CONTRIBUTORS.txt.
+#
+"""
+Parser for the environment markers micro-language defined in PEP 508.
+"""
+
+# Note: In PEP 345, the micro-language was Python compatible, so the ast
+# module could be used to parse it. However, PEP 508 introduced operators such
+# as ~= and === which aren't in Python, necessitating a different approach.
+
+import os
+import platform
+import re
+import sys
+
+from .compat import string_types
+from .util import in_venv, parse_marker
+
+__all__ = ['interpret']
+
+
+def _is_literal(o):
+    if not isinstance(o, string_types) or not o:
+        return False
+    return o[0] in '\'"'
+
+
+class Evaluator(object):
+    """
+    This class is used to evaluate marker expressions.
+    """
+
+    operations = {
+        '==': lambda x, y: x == y,
+        '===': lambda x, y: x == y,
+        '~=': lambda x, y: x == y or x > y,
+        '!=': lambda x, y: x != y,
+        '<': lambda x, y: x < y,
+        '<=': lambda x, y: x == y or x < y,
+        '>': lambda x, y: x > y,
+        '>=': lambda x, y: x == y or x > y,
+        'and': lambda x, y: x and y,
+        'or': lambda x, y: x or y,
+        'in': lambda x, y: x in y,
+        'not in': lambda x, y: x not in y,
+    }
+
+    def evaluate(self, expr, context):
+        """
+        Evaluate a marker expression returned by the :func:`parse_requirement`
+        function in the specified context.
+        """
+        if isinstance(expr, string_types):
+            if expr[0] in '\'"':
+                result = expr[1:-1]
+            else:
+                if expr not in context:
+                    raise SyntaxError('unknown variable: %s' % expr)
+                result = context[expr]
+        else:
+            assert isinstance(expr, dict)
+            op = expr['op']
+            if op not in self.operations:
+                raise NotImplementedError('op not implemented: %s' % op)
+            elhs = expr['lhs']
+            erhs = expr['rhs']
+            if _is_literal(expr['lhs']) and _is_literal(expr['rhs']):
+                raise SyntaxError('invalid comparison: %s %s %s' % (elhs, op, erhs))
+
+            lhs = self.evaluate(elhs, context)
+            rhs = self.evaluate(erhs, context)
+            result = self.operations[op](lhs, rhs)
+        return result
+
+def default_context():
+    def format_full_version(info):
+        version = '%s.%s.%s' % (info.major, info.minor, info.micro)
+        kind = info.releaselevel
+        if kind != 'final':
+            version += kind[0] + str(info.serial)
+        return version
+
+    if hasattr(sys, 'implementation'):
+        implementation_version = format_full_version(sys.implementation.version)
+        implementation_name = sys.implementation.name
+    else:
+        implementation_version = '0'
+        implementation_name = ''
+
+    result = {
+        'implementation_name': implementation_name,
+        'implementation_version': implementation_version,
+        'os_name': os.name,
+        'platform_machine': platform.machine(),
+        'platform_python_implementation': platform.python_implementation(),
+        'platform_release': platform.release(),
+        'platform_system': platform.system(),
+        'platform_version': platform.version(),
+        'platform_in_venv': str(in_venv()),
+        'python_full_version': platform.python_version(),
+        'python_version': platform.python_version()[:3],
+        'sys_platform': sys.platform,
+    }
+    return result
+
+DEFAULT_CONTEXT = default_context()
+del default_context
+
+evaluator = Evaluator()
+
+def interpret(marker, execution_context=None):
+    """
+    Interpret a marker and return a result depending on environment.
+
+    :param marker: The marker to interpret.
+    :type marker: str
+    :param execution_context: The context used for name lookup.
+ :type execution_context: mapping + """ + try: + expr, rest = parse_marker(marker) + except Exception as e: + raise SyntaxError('Unable to interpret marker syntax: %s: %s' % (marker, e)) + if rest and rest[0] != '#': + raise SyntaxError('unexpected trailing data in marker: %s: %s' % (marker, rest)) + context = dict(DEFAULT_CONTEXT) + if execution_context: + context.update(execution_context) + return evaluator.evaluate(expr, context) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distlib/metadata.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distlib/metadata.py new file mode 100644 index 0000000000000000000000000000000000000000..2d61378e9942fb67c15544db57bac502fa50376b --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distlib/metadata.py @@ -0,0 +1,1096 @@ +# -*- coding: utf-8 -*- +# +# Copyright (C) 2012 The Python Software Foundation. +# See LICENSE.txt and CONTRIBUTORS.txt. +# +"""Implementation of the Metadata for Python packages PEPs. + +Supports all metadata formats (1.0, 1.1, 1.2, and 2.0 experimental). +""" +from __future__ import unicode_literals + +import codecs +from email import message_from_file +import json +import logging +import re + + +from . import DistlibException, __version__ +from .compat import StringIO, string_types, text_type +from .markers import interpret +from .util import extract_by_key, get_extras +from .version import get_scheme, PEP440_VERSION_RE + +logger = logging.getLogger(__name__) + + +class MetadataMissingError(DistlibException): + """A required metadata is missing""" + + +class MetadataConflictError(DistlibException): + """Attempt to read or write metadata fields that are conflictual.""" + + +class MetadataUnrecognizedVersionError(DistlibException): + """Unknown metadata version number.""" + + +class MetadataInvalidError(DistlibException): + """A metadata value is invalid""" + +# public API of this module +__all__ = ['Metadata', 'PKG_INFO_ENCODING', 'PKG_INFO_PREFERRED_VERSION'] + +# Encoding used for the PKG-INFO files +PKG_INFO_ENCODING = 'utf-8' + +# preferred version. 
Hopefully will be changed +# to 1.2 once PEP 345 is supported everywhere +PKG_INFO_PREFERRED_VERSION = '1.1' + +_LINE_PREFIX_1_2 = re.compile('\n \\|') +_LINE_PREFIX_PRE_1_2 = re.compile('\n ') +_241_FIELDS = ('Metadata-Version', 'Name', 'Version', 'Platform', + 'Summary', 'Description', + 'Keywords', 'Home-page', 'Author', 'Author-email', + 'License') + +_314_FIELDS = ('Metadata-Version', 'Name', 'Version', 'Platform', + 'Supported-Platform', 'Summary', 'Description', + 'Keywords', 'Home-page', 'Author', 'Author-email', + 'License', 'Classifier', 'Download-URL', 'Obsoletes', + 'Provides', 'Requires') + +_314_MARKERS = ('Obsoletes', 'Provides', 'Requires', 'Classifier', + 'Download-URL') + +_345_FIELDS = ('Metadata-Version', 'Name', 'Version', 'Platform', + 'Supported-Platform', 'Summary', 'Description', + 'Keywords', 'Home-page', 'Author', 'Author-email', + 'Maintainer', 'Maintainer-email', 'License', + 'Classifier', 'Download-URL', 'Obsoletes-Dist', + 'Project-URL', 'Provides-Dist', 'Requires-Dist', + 'Requires-Python', 'Requires-External') + +_345_MARKERS = ('Provides-Dist', 'Requires-Dist', 'Requires-Python', + 'Obsoletes-Dist', 'Requires-External', 'Maintainer', + 'Maintainer-email', 'Project-URL') + +_426_FIELDS = ('Metadata-Version', 'Name', 'Version', 'Platform', + 'Supported-Platform', 'Summary', 'Description', + 'Keywords', 'Home-page', 'Author', 'Author-email', + 'Maintainer', 'Maintainer-email', 'License', + 'Classifier', 'Download-URL', 'Obsoletes-Dist', + 'Project-URL', 'Provides-Dist', 'Requires-Dist', + 'Requires-Python', 'Requires-External', 'Private-Version', + 'Obsoleted-By', 'Setup-Requires-Dist', 'Extension', + 'Provides-Extra') + +_426_MARKERS = ('Private-Version', 'Provides-Extra', 'Obsoleted-By', + 'Setup-Requires-Dist', 'Extension') + +# See issue #106: Sometimes 'Requires' and 'Provides' occur wrongly in +# the metadata. Include them in the tuple literal below to allow them +# (for now). 
+_566_FIELDS = _426_FIELDS + ('Description-Content-Type', + 'Requires', 'Provides') + +_566_MARKERS = ('Description-Content-Type',) + +_ALL_FIELDS = set() +_ALL_FIELDS.update(_241_FIELDS) +_ALL_FIELDS.update(_314_FIELDS) +_ALL_FIELDS.update(_345_FIELDS) +_ALL_FIELDS.update(_426_FIELDS) +_ALL_FIELDS.update(_566_FIELDS) + +EXTRA_RE = re.compile(r'''extra\s*==\s*("([^"]+)"|'([^']+)')''') + + +def _version2fieldlist(version): + if version == '1.0': + return _241_FIELDS + elif version == '1.1': + return _314_FIELDS + elif version == '1.2': + return _345_FIELDS + elif version in ('1.3', '2.1'): + return _345_FIELDS + _566_FIELDS + elif version == '2.0': + return _426_FIELDS + raise MetadataUnrecognizedVersionError(version) + + +def _best_version(fields): + """Detect the best version depending on the fields used.""" + def _has_marker(keys, markers): + for marker in markers: + if marker in keys: + return True + return False + + keys = [] + for key, value in fields.items(): + if value in ([], 'UNKNOWN', None): + continue + keys.append(key) + + possible_versions = ['1.0', '1.1', '1.2', '1.3', '2.0', '2.1'] + + # first let's try to see if a field is not part of one of the version + for key in keys: + if key not in _241_FIELDS and '1.0' in possible_versions: + possible_versions.remove('1.0') + logger.debug('Removed 1.0 due to %s', key) + if key not in _314_FIELDS and '1.1' in possible_versions: + possible_versions.remove('1.1') + logger.debug('Removed 1.1 due to %s', key) + if key not in _345_FIELDS and '1.2' in possible_versions: + possible_versions.remove('1.2') + logger.debug('Removed 1.2 due to %s', key) + if key not in _566_FIELDS and '1.3' in possible_versions: + possible_versions.remove('1.3') + logger.debug('Removed 1.3 due to %s', key) + if key not in _566_FIELDS and '2.1' in possible_versions: + if key != 'Description': # In 2.1, description allowed after headers + possible_versions.remove('2.1') + logger.debug('Removed 2.1 due to %s', key) + if key not in _426_FIELDS and '2.0' in possible_versions: + possible_versions.remove('2.0') + logger.debug('Removed 2.0 due to %s', key) + + # possible_version contains qualified versions + if len(possible_versions) == 1: + return possible_versions[0] # found ! 
+ elif len(possible_versions) == 0: + logger.debug('Out of options - unknown metadata set: %s', fields) + raise MetadataConflictError('Unknown metadata set') + + # let's see if one unique marker is found + is_1_1 = '1.1' in possible_versions and _has_marker(keys, _314_MARKERS) + is_1_2 = '1.2' in possible_versions and _has_marker(keys, _345_MARKERS) + is_2_1 = '2.1' in possible_versions and _has_marker(keys, _566_MARKERS) + is_2_0 = '2.0' in possible_versions and _has_marker(keys, _426_MARKERS) + if int(is_1_1) + int(is_1_2) + int(is_2_1) + int(is_2_0) > 1: + raise MetadataConflictError('You used incompatible 1.1/1.2/2.0/2.1 fields') + + # we have the choice, 1.0, or 1.2, or 2.0 + # - 1.0 has a broken Summary field but works with all tools + # - 1.1 is to avoid + # - 1.2 fixes Summary but has little adoption + # - 2.0 adds more features and is very new + if not is_1_1 and not is_1_2 and not is_2_1 and not is_2_0: + # we couldn't find any specific marker + if PKG_INFO_PREFERRED_VERSION in possible_versions: + return PKG_INFO_PREFERRED_VERSION + if is_1_1: + return '1.1' + if is_1_2: + return '1.2' + if is_2_1: + return '2.1' + + return '2.0' + +_ATTR2FIELD = { + 'metadata_version': 'Metadata-Version', + 'name': 'Name', + 'version': 'Version', + 'platform': 'Platform', + 'supported_platform': 'Supported-Platform', + 'summary': 'Summary', + 'description': 'Description', + 'keywords': 'Keywords', + 'home_page': 'Home-page', + 'author': 'Author', + 'author_email': 'Author-email', + 'maintainer': 'Maintainer', + 'maintainer_email': 'Maintainer-email', + 'license': 'License', + 'classifier': 'Classifier', + 'download_url': 'Download-URL', + 'obsoletes_dist': 'Obsoletes-Dist', + 'provides_dist': 'Provides-Dist', + 'requires_dist': 'Requires-Dist', + 'setup_requires_dist': 'Setup-Requires-Dist', + 'requires_python': 'Requires-Python', + 'requires_external': 'Requires-External', + 'requires': 'Requires', + 'provides': 'Provides', + 'obsoletes': 'Obsoletes', + 'project_url': 'Project-URL', + 'private_version': 'Private-Version', + 'obsoleted_by': 'Obsoleted-By', + 'extension': 'Extension', + 'provides_extra': 'Provides-Extra', +} + +_PREDICATE_FIELDS = ('Requires-Dist', 'Obsoletes-Dist', 'Provides-Dist') +_VERSIONS_FIELDS = ('Requires-Python',) +_VERSION_FIELDS = ('Version',) +_LISTFIELDS = ('Platform', 'Classifier', 'Obsoletes', + 'Requires', 'Provides', 'Obsoletes-Dist', + 'Provides-Dist', 'Requires-Dist', 'Requires-External', + 'Project-URL', 'Supported-Platform', 'Setup-Requires-Dist', + 'Provides-Extra', 'Extension') +_LISTTUPLEFIELDS = ('Project-URL',) + +_ELEMENTSFIELD = ('Keywords',) + +_UNICODEFIELDS = ('Author', 'Maintainer', 'Summary', 'Description') + +_MISSING = object() + +_FILESAFE = re.compile('[^A-Za-z0-9.]+') + + +def _get_name_and_version(name, version, for_filename=False): + """Return the distribution name with version. + + If for_filename is true, return a filename-escaped form.""" + if for_filename: + # For both name and version any runs of non-alphanumeric or '.' + # characters are replaced with a single '-'. Additionally any + # spaces in the version string become '.' + name = _FILESAFE.sub('-', name) + version = _FILESAFE.sub('-', version.replace(' ', '.')) + return '%s-%s' % (name, version) + + +class LegacyMetadata(object): + """The legacy metadata of a release. + + Supports versions 1.0, 1.1 and 1.2 (auto-detected). 
You can + instantiate the class with one of these arguments (or none): + - *path*, the path to a metadata file + - *fileobj* give a file-like object with metadata as content + - *mapping* is a dict-like object + - *scheme* is a version scheme name + """ + # TODO document the mapping API and UNKNOWN default key + + def __init__(self, path=None, fileobj=None, mapping=None, + scheme='default'): + if [path, fileobj, mapping].count(None) < 2: + raise TypeError('path, fileobj and mapping are exclusive') + self._fields = {} + self.requires_files = [] + self._dependencies = None + self.scheme = scheme + if path is not None: + self.read(path) + elif fileobj is not None: + self.read_file(fileobj) + elif mapping is not None: + self.update(mapping) + self.set_metadata_version() + + def set_metadata_version(self): + self._fields['Metadata-Version'] = _best_version(self._fields) + + def _write_field(self, fileobj, name, value): + fileobj.write('%s: %s\n' % (name, value)) + + def __getitem__(self, name): + return self.get(name) + + def __setitem__(self, name, value): + return self.set(name, value) + + def __delitem__(self, name): + field_name = self._convert_name(name) + try: + del self._fields[field_name] + except KeyError: + raise KeyError(name) + + def __contains__(self, name): + return (name in self._fields or + self._convert_name(name) in self._fields) + + def _convert_name(self, name): + if name in _ALL_FIELDS: + return name + name = name.replace('-', '_').lower() + return _ATTR2FIELD.get(name, name) + + def _default_value(self, name): + if name in _LISTFIELDS or name in _ELEMENTSFIELD: + return [] + return 'UNKNOWN' + + def _remove_line_prefix(self, value): + if self.metadata_version in ('1.0', '1.1'): + return _LINE_PREFIX_PRE_1_2.sub('\n', value) + else: + return _LINE_PREFIX_1_2.sub('\n', value) + + def __getattr__(self, name): + if name in _ATTR2FIELD: + return self[name] + raise AttributeError(name) + + # + # Public API + # + +# dependencies = property(_get_dependencies, _set_dependencies) + + def get_fullname(self, filesafe=False): + """Return the distribution name with version. 
+ + If filesafe is true, return a filename-escaped form.""" + return _get_name_and_version(self['Name'], self['Version'], filesafe) + + def is_field(self, name): + """return True if name is a valid metadata key""" + name = self._convert_name(name) + return name in _ALL_FIELDS + + def is_multi_field(self, name): + name = self._convert_name(name) + return name in _LISTFIELDS + + def read(self, filepath): + """Read the metadata values from a file path.""" + fp = codecs.open(filepath, 'r', encoding='utf-8') + try: + self.read_file(fp) + finally: + fp.close() + + def read_file(self, fileob): + """Read the metadata values from a file object.""" + msg = message_from_file(fileob) + self._fields['Metadata-Version'] = msg['metadata-version'] + + # When reading, get all the fields we can + for field in _ALL_FIELDS: + if field not in msg: + continue + if field in _LISTFIELDS: + # we can have multiple lines + values = msg.get_all(field) + if field in _LISTTUPLEFIELDS and values is not None: + values = [tuple(value.split(',')) for value in values] + self.set(field, values) + else: + # single line + value = msg[field] + if value is not None and value != 'UNKNOWN': + self.set(field, value) + # logger.debug('Attempting to set metadata for %s', self) + # self.set_metadata_version() + + def write(self, filepath, skip_unknown=False): + """Write the metadata fields to filepath.""" + fp = codecs.open(filepath, 'w', encoding='utf-8') + try: + self.write_file(fp, skip_unknown) + finally: + fp.close() + + def write_file(self, fileobject, skip_unknown=False): + """Write the PKG-INFO format data to a file object.""" + self.set_metadata_version() + + for field in _version2fieldlist(self['Metadata-Version']): + values = self.get(field) + if skip_unknown and values in ('UNKNOWN', [], ['UNKNOWN']): + continue + if field in _ELEMENTSFIELD: + self._write_field(fileobject, field, ','.join(values)) + continue + if field not in _LISTFIELDS: + if field == 'Description': + if self.metadata_version in ('1.0', '1.1'): + values = values.replace('\n', '\n ') + else: + values = values.replace('\n', '\n |') + values = [values] + + if field in _LISTTUPLEFIELDS: + values = [','.join(value) for value in values] + + for value in values: + self._write_field(fileobject, field, value) + + def update(self, other=None, **kwargs): + """Set metadata values from the given iterable `other` and kwargs. + + Behavior is like `dict.update`: If `other` has a ``keys`` method, + they are looped over and ``self[key]`` is assigned ``other[key]``. + Else, ``other`` is an iterable of ``(key, value)`` iterables. + + Keys that don't match a metadata field or that have an empty value are + dropped. 
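+
+        An illustrative example (the names are placeholders)::
+
+            md = LegacyMetadata()
+            md.update({'name': 'foo', 'version': '1.0'},
+                      summary='An example summary')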
+ """ + def _set(key, value): + if key in _ATTR2FIELD and value: + self.set(self._convert_name(key), value) + + if not other: + # other is None or empty container + pass + elif hasattr(other, 'keys'): + for k in other.keys(): + _set(k, other[k]) + else: + for k, v in other: + _set(k, v) + + if kwargs: + for k, v in kwargs.items(): + _set(k, v) + + def set(self, name, value): + """Control then set a metadata field.""" + name = self._convert_name(name) + + if ((name in _ELEMENTSFIELD or name == 'Platform') and + not isinstance(value, (list, tuple))): + if isinstance(value, string_types): + value = [v.strip() for v in value.split(',')] + else: + value = [] + elif (name in _LISTFIELDS and + not isinstance(value, (list, tuple))): + if isinstance(value, string_types): + value = [value] + else: + value = [] + + if logger.isEnabledFor(logging.WARNING): + project_name = self['Name'] + + scheme = get_scheme(self.scheme) + if name in _PREDICATE_FIELDS and value is not None: + for v in value: + # check that the values are valid + if not scheme.is_valid_matcher(v.split(';')[0]): + logger.warning( + "'%s': '%s' is not valid (field '%s')", + project_name, v, name) + # FIXME this rejects UNKNOWN, is that right? + elif name in _VERSIONS_FIELDS and value is not None: + if not scheme.is_valid_constraint_list(value): + logger.warning("'%s': '%s' is not a valid version (field '%s')", + project_name, value, name) + elif name in _VERSION_FIELDS and value is not None: + if not scheme.is_valid_version(value): + logger.warning("'%s': '%s' is not a valid version (field '%s')", + project_name, value, name) + + if name in _UNICODEFIELDS: + if name == 'Description': + value = self._remove_line_prefix(value) + + self._fields[name] = value + + def get(self, name, default=_MISSING): + """Get a metadata field.""" + name = self._convert_name(name) + if name not in self._fields: + if default is _MISSING: + default = self._default_value(name) + return default + if name in _UNICODEFIELDS: + value = self._fields[name] + return value + elif name in _LISTFIELDS: + value = self._fields[name] + if value is None: + return [] + res = [] + for val in value: + if name not in _LISTTUPLEFIELDS: + res.append(val) + else: + # That's for Project-URL + res.append((val[0], val[1])) + return res + + elif name in _ELEMENTSFIELD: + value = self._fields[name] + if isinstance(value, string_types): + return value.split(',') + return self._fields[name] + + def check(self, strict=False): + """Check if the metadata is compliant. 
If strict is True then raise if + no Name or Version are provided""" + self.set_metadata_version() + + # XXX should check the versions (if the file was loaded) + missing, warnings = [], [] + + for attr in ('Name', 'Version'): # required by PEP 345 + if attr not in self: + missing.append(attr) + + if strict and missing != []: + msg = 'missing required metadata: %s' % ', '.join(missing) + raise MetadataMissingError(msg) + + for attr in ('Home-page', 'Author'): + if attr not in self: + missing.append(attr) + + # checking metadata 1.2 (XXX needs to check 1.1, 1.0) + if self['Metadata-Version'] != '1.2': + return missing, warnings + + scheme = get_scheme(self.scheme) + + def are_valid_constraints(value): + for v in value: + if not scheme.is_valid_matcher(v.split(';')[0]): + return False + return True + + for fields, controller in ((_PREDICATE_FIELDS, are_valid_constraints), + (_VERSIONS_FIELDS, + scheme.is_valid_constraint_list), + (_VERSION_FIELDS, + scheme.is_valid_version)): + for field in fields: + value = self.get(field, None) + if value is not None and not controller(value): + warnings.append("Wrong value for '%s': %s" % (field, value)) + + return missing, warnings + + def todict(self, skip_missing=False): + """Return fields as a dict. + + Field names will be converted to use the underscore-lowercase style + instead of hyphen-mixed case (i.e. home_page instead of Home-page). + """ + self.set_metadata_version() + + mapping_1_0 = ( + ('metadata_version', 'Metadata-Version'), + ('name', 'Name'), + ('version', 'Version'), + ('summary', 'Summary'), + ('home_page', 'Home-page'), + ('author', 'Author'), + ('author_email', 'Author-email'), + ('license', 'License'), + ('description', 'Description'), + ('keywords', 'Keywords'), + ('platform', 'Platform'), + ('classifiers', 'Classifier'), + ('download_url', 'Download-URL'), + ) + + data = {} + for key, field_name in mapping_1_0: + if not skip_missing or field_name in self._fields: + data[key] = self[field_name] + + if self['Metadata-Version'] == '1.2': + mapping_1_2 = ( + ('requires_dist', 'Requires-Dist'), + ('requires_python', 'Requires-Python'), + ('requires_external', 'Requires-External'), + ('provides_dist', 'Provides-Dist'), + ('obsoletes_dist', 'Obsoletes-Dist'), + ('project_url', 'Project-URL'), + ('maintainer', 'Maintainer'), + ('maintainer_email', 'Maintainer-email'), + ) + for key, field_name in mapping_1_2: + if not skip_missing or field_name in self._fields: + if key != 'project_url': + data[key] = self[field_name] + else: + data[key] = [','.join(u) for u in self[field_name]] + + elif self['Metadata-Version'] == '1.1': + mapping_1_1 = ( + ('provides', 'Provides'), + ('requires', 'Requires'), + ('obsoletes', 'Obsoletes'), + ) + for key, field_name in mapping_1_1: + if not skip_missing or field_name in self._fields: + data[key] = self[field_name] + + return data + + def add_requirements(self, requirements): + if self['Metadata-Version'] == '1.1': + # we can't have 1.1 metadata *and* Setuptools requires + for field in ('Obsoletes', 'Requires', 'Provides'): + if field in self: + del self[field] + self['Requires-Dist'] += requirements + + # Mapping API + # TODO could add iter* variants + + def keys(self): + return list(_version2fieldlist(self['Metadata-Version'])) + + def __iter__(self): + for key in self.keys(): + yield key + + def values(self): + return [self[key] for key in self.keys()] + + def items(self): + return [(key, self[key]) for key in self.keys()] + + def __repr__(self): + return '<%s %s %s>' % (self.__class__.__name__, 
self.name, + self.version) + + +METADATA_FILENAME = 'pydist.json' +WHEEL_METADATA_FILENAME = 'metadata.json' +LEGACY_METADATA_FILENAME = 'METADATA' + + +class Metadata(object): + """ + The metadata of a release. This implementation uses 2.0 (JSON) + metadata where possible. If not possible, it wraps a LegacyMetadata + instance which handles the key-value metadata format. + """ + + METADATA_VERSION_MATCHER = re.compile(r'^\d+(\.\d+)*$') + + NAME_MATCHER = re.compile('^[0-9A-Z]([0-9A-Z_.-]*[0-9A-Z])?$', re.I) + + VERSION_MATCHER = PEP440_VERSION_RE + + SUMMARY_MATCHER = re.compile('.{1,2047}') + + METADATA_VERSION = '2.0' + + GENERATOR = 'distlib (%s)' % __version__ + + MANDATORY_KEYS = { + 'name': (), + 'version': (), + 'summary': ('legacy',), + } + + INDEX_KEYS = ('name version license summary description author ' + 'author_email keywords platform home_page classifiers ' + 'download_url') + + DEPENDENCY_KEYS = ('extras run_requires test_requires build_requires ' + 'dev_requires provides meta_requires obsoleted_by ' + 'supports_environments') + + SYNTAX_VALIDATORS = { + 'metadata_version': (METADATA_VERSION_MATCHER, ()), + 'name': (NAME_MATCHER, ('legacy',)), + 'version': (VERSION_MATCHER, ('legacy',)), + 'summary': (SUMMARY_MATCHER, ('legacy',)), + } + + __slots__ = ('_legacy', '_data', 'scheme') + + def __init__(self, path=None, fileobj=None, mapping=None, + scheme='default'): + if [path, fileobj, mapping].count(None) < 2: + raise TypeError('path, fileobj and mapping are exclusive') + self._legacy = None + self._data = None + self.scheme = scheme + #import pdb; pdb.set_trace() + if mapping is not None: + try: + self._validate_mapping(mapping, scheme) + self._data = mapping + except MetadataUnrecognizedVersionError: + self._legacy = LegacyMetadata(mapping=mapping, scheme=scheme) + self.validate() + else: + data = None + if path: + with open(path, 'rb') as f: + data = f.read() + elif fileobj: + data = fileobj.read() + if data is None: + # Initialised with no args - to be added + self._data = { + 'metadata_version': self.METADATA_VERSION, + 'generator': self.GENERATOR, + } + else: + if not isinstance(data, text_type): + data = data.decode('utf-8') + try: + self._data = json.loads(data) + self._validate_mapping(self._data, scheme) + except ValueError: + # Note: MetadataUnrecognizedVersionError does not + # inherit from ValueError (it's a DistlibException, + # which should not inherit from ValueError). 
+ # The ValueError comes from the json.load - if that + # succeeds and we get a validation error, we want + # that to propagate + self._legacy = LegacyMetadata(fileobj=StringIO(data), + scheme=scheme) + self.validate() + + common_keys = set(('name', 'version', 'license', 'keywords', 'summary')) + + none_list = (None, list) + none_dict = (None, dict) + + mapped_keys = { + 'run_requires': ('Requires-Dist', list), + 'build_requires': ('Setup-Requires-Dist', list), + 'dev_requires': none_list, + 'test_requires': none_list, + 'meta_requires': none_list, + 'extras': ('Provides-Extra', list), + 'modules': none_list, + 'namespaces': none_list, + 'exports': none_dict, + 'commands': none_dict, + 'classifiers': ('Classifier', list), + 'source_url': ('Download-URL', None), + 'metadata_version': ('Metadata-Version', None), + } + + del none_list, none_dict + + def __getattribute__(self, key): + common = object.__getattribute__(self, 'common_keys') + mapped = object.__getattribute__(self, 'mapped_keys') + if key in mapped: + lk, maker = mapped[key] + if self._legacy: + if lk is None: + result = None if maker is None else maker() + else: + result = self._legacy.get(lk) + else: + value = None if maker is None else maker() + if key not in ('commands', 'exports', 'modules', 'namespaces', + 'classifiers'): + result = self._data.get(key, value) + else: + # special cases for PEP 459 + sentinel = object() + result = sentinel + d = self._data.get('extensions') + if d: + if key == 'commands': + result = d.get('python.commands', value) + elif key == 'classifiers': + d = d.get('python.details') + if d: + result = d.get(key, value) + else: + d = d.get('python.exports') + if not d: + d = self._data.get('python.exports') + if d: + result = d.get(key, value) + if result is sentinel: + result = value + elif key not in common: + result = object.__getattribute__(self, key) + elif self._legacy: + result = self._legacy.get(key) + else: + result = self._data.get(key) + return result + + def _validate_value(self, key, value, scheme=None): + if key in self.SYNTAX_VALIDATORS: + pattern, exclusions = self.SYNTAX_VALIDATORS[key] + if (scheme or self.scheme) not in exclusions: + m = pattern.match(value) + if not m: + raise MetadataInvalidError("'%s' is an invalid value for " + "the '%s' property" % (value, + key)) + + def __setattr__(self, key, value): + self._validate_value(key, value) + common = object.__getattribute__(self, 'common_keys') + mapped = object.__getattribute__(self, 'mapped_keys') + if key in mapped: + lk, _ = mapped[key] + if self._legacy: + if lk is None: + raise NotImplementedError + self._legacy[lk] = value + elif key not in ('commands', 'exports', 'modules', 'namespaces', + 'classifiers'): + self._data[key] = value + else: + # special cases for PEP 459 + d = self._data.setdefault('extensions', {}) + if key == 'commands': + d['python.commands'] = value + elif key == 'classifiers': + d = d.setdefault('python.details', {}) + d[key] = value + else: + d = d.setdefault('python.exports', {}) + d[key] = value + elif key not in common: + object.__setattr__(self, key, value) + else: + if key == 'keywords': + if isinstance(value, string_types): + value = value.strip() + if value: + value = value.split() + else: + value = [] + if self._legacy: + self._legacy[key] = value + else: + self._data[key] = value + + @property + def name_and_version(self): + return _get_name_and_version(self.name, self.version, True) + + @property + def provides(self): + if self._legacy: + result = self._legacy['Provides-Dist'] + else: + result = 
self._data.setdefault('provides', []) + s = '%s (%s)' % (self.name, self.version) + if s not in result: + result.append(s) + return result + + @provides.setter + def provides(self, value): + if self._legacy: + self._legacy['Provides-Dist'] = value + else: + self._data['provides'] = value + + def get_requirements(self, reqts, extras=None, env=None): + """ + Base method to get dependencies, given a set of extras + to satisfy and an optional environment context. + :param reqts: A list of sometimes-wanted dependencies, + perhaps dependent on extras and environment. + :param extras: A list of optional components being requested. + :param env: An optional environment for marker evaluation. + """ + if self._legacy: + result = reqts + else: + result = [] + extras = get_extras(extras or [], self.extras) + for d in reqts: + if 'extra' not in d and 'environment' not in d: + # unconditional + include = True + else: + if 'extra' not in d: + # Not extra-dependent - only environment-dependent + include = True + else: + include = d.get('extra') in extras + if include: + # Not excluded because of extras, check environment + marker = d.get('environment') + if marker: + include = interpret(marker, env) + if include: + result.extend(d['requires']) + for key in ('build', 'dev', 'test'): + e = ':%s:' % key + if e in extras: + extras.remove(e) + # A recursive call, but it should terminate since 'test' + # has been removed from the extras + reqts = self._data.get('%s_requires' % key, []) + result.extend(self.get_requirements(reqts, extras=extras, + env=env)) + return result + + @property + def dictionary(self): + if self._legacy: + return self._from_legacy() + return self._data + + @property + def dependencies(self): + if self._legacy: + raise NotImplementedError + else: + return extract_by_key(self._data, self.DEPENDENCY_KEYS) + + @dependencies.setter + def dependencies(self, value): + if self._legacy: + raise NotImplementedError + else: + self._data.update(value) + + def _validate_mapping(self, mapping, scheme): + if mapping.get('metadata_version') != self.METADATA_VERSION: + raise MetadataUnrecognizedVersionError() + missing = [] + for key, exclusions in self.MANDATORY_KEYS.items(): + if key not in mapping: + if scheme not in exclusions: + missing.append(key) + if missing: + msg = 'Missing metadata items: %s' % ', '.join(missing) + raise MetadataMissingError(msg) + for k, v in mapping.items(): + self._validate_value(k, v, scheme) + + def validate(self): + if self._legacy: + missing, warnings = self._legacy.check(True) + if missing or warnings: + logger.warning('Metadata: missing: %s, warnings: %s', + missing, warnings) + else: + self._validate_mapping(self._data, self.scheme) + + def todict(self): + if self._legacy: + return self._legacy.todict(True) + else: + result = extract_by_key(self._data, self.INDEX_KEYS) + return result + + def _from_legacy(self): + assert self._legacy and not self._data + result = { + 'metadata_version': self.METADATA_VERSION, + 'generator': self.GENERATOR, + } + lmd = self._legacy.todict(True) # skip missing ones + for k in ('name', 'version', 'license', 'summary', 'description', + 'classifier'): + if k in lmd: + if k == 'classifier': + nk = 'classifiers' + else: + nk = k + result[nk] = lmd[k] + kw = lmd.get('Keywords', []) + if kw == ['']: + kw = [] + result['keywords'] = kw + keys = (('requires_dist', 'run_requires'), + ('setup_requires_dist', 'build_requires')) + for ok, nk in keys: + if ok in lmd and lmd[ok]: + result[nk] = [{'requires': lmd[ok]}] + result['provides'] = 
self.provides + author = {} + maintainer = {} + return result + + LEGACY_MAPPING = { + 'name': 'Name', + 'version': 'Version', + 'license': 'License', + 'summary': 'Summary', + 'description': 'Description', + 'classifiers': 'Classifier', + } + + def _to_legacy(self): + def process_entries(entries): + reqts = set() + for e in entries: + extra = e.get('extra') + env = e.get('environment') + rlist = e['requires'] + for r in rlist: + if not env and not extra: + reqts.add(r) + else: + marker = '' + if extra: + marker = 'extra == "%s"' % extra + if env: + if marker: + marker = '(%s) and %s' % (env, marker) + else: + marker = env + reqts.add(';'.join((r, marker))) + return reqts + + assert self._data and not self._legacy + result = LegacyMetadata() + nmd = self._data + for nk, ok in self.LEGACY_MAPPING.items(): + if nk in nmd: + result[ok] = nmd[nk] + r1 = process_entries(self.run_requires + self.meta_requires) + r2 = process_entries(self.build_requires + self.dev_requires) + if self.extras: + result['Provides-Extra'] = sorted(self.extras) + result['Requires-Dist'] = sorted(r1) + result['Setup-Requires-Dist'] = sorted(r2) + # TODO: other fields such as contacts + return result + + def write(self, path=None, fileobj=None, legacy=False, skip_unknown=True): + if [path, fileobj].count(None) != 1: + raise ValueError('Exactly one of path and fileobj is needed') + self.validate() + if legacy: + if self._legacy: + legacy_md = self._legacy + else: + legacy_md = self._to_legacy() + if path: + legacy_md.write(path, skip_unknown=skip_unknown) + else: + legacy_md.write_file(fileobj, skip_unknown=skip_unknown) + else: + if self._legacy: + d = self._from_legacy() + else: + d = self._data + if fileobj: + json.dump(d, fileobj, ensure_ascii=True, indent=2, + sort_keys=True) + else: + with codecs.open(path, 'w', 'utf-8') as f: + json.dump(d, f, ensure_ascii=True, indent=2, + sort_keys=True) + + def add_requirements(self, requirements): + if self._legacy: + self._legacy.add_requirements(requirements) + else: + run_requires = self._data.setdefault('run_requires', []) + always = None + for entry in run_requires: + if 'environment' not in entry and 'extra' not in entry: + always = entry + break + if always is None: + always = { 'requires': requirements } + run_requires.insert(0, always) + else: + rset = set(always['requires']) | set(requirements) + always['requires'] = sorted(rset) + + def __repr__(self): + name = self.name or '(no name)' + version = self.version or 'no version' + return '<%s %s %s (%s)>' % (self.__class__.__name__, + self.metadata_version, name, version) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distlib/resources.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distlib/resources.py new file mode 100644 index 0000000000000000000000000000000000000000..18840167a9e3ea1cf443b0803230e41481df41d2 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distlib/resources.py @@ -0,0 +1,355 @@ +# -*- coding: utf-8 -*- +# +# Copyright (C) 2013-2017 Vinay Sajip. +# Licensed to the Python Software Foundation under a contributor agreement. +# See LICENSE.txt and CONTRIBUTORS.txt. +# +from __future__ import unicode_literals + +import bisect +import io +import logging +import os +import pkgutil +import shutil +import sys +import types +import zipimport + +from . 
import DistlibException
+from .util import cached_property, get_cache_base, path_to_cache_dir, Cache
+
+logger = logging.getLogger(__name__)
+
+
+cache = None    # created when needed
+
+
+class ResourceCache(Cache):
+    def __init__(self, base=None):
+        if base is None:
+            # Use native string to avoid issues on 2.x: see Python #20140.
+            base = os.path.join(get_cache_base(), str('resource-cache'))
+        super(ResourceCache, self).__init__(base)
+
+    def is_stale(self, resource, path):
+        """
+        Is the cache stale for the given resource?
+
+        :param resource: The :class:`Resource` being cached.
+        :param path: The path of the resource in the cache.
+        :return: True if the cache is stale.
+        """
+        # Cache invalidation is a hard problem :-)
+        return True
+
+    def get(self, resource):
+        """
+        Get a resource into the cache.
+
+        :param resource: A :class:`Resource` instance.
+        :return: The pathname of the resource in the cache.
+        """
+        prefix, path = resource.finder.get_cache_info(resource)
+        if prefix is None:
+            result = path
+        else:
+            result = os.path.join(self.base, self.prefix_to_dir(prefix), path)
+            dirname = os.path.dirname(result)
+            if not os.path.isdir(dirname):
+                os.makedirs(dirname)
+            if not os.path.exists(result):
+                stale = True
+            else:
+                stale = self.is_stale(resource, path)
+            if stale:
+                # write the bytes of the resource to the cache location
+                with open(result, 'wb') as f:
+                    f.write(resource.bytes)
+        return result
+
+
+class ResourceBase(object):
+    def __init__(self, finder, name):
+        self.finder = finder
+        self.name = name
+
+
+class Resource(ResourceBase):
+    """
+    A class representing an in-package resource, such as a data file. This is
+    not normally instantiated by user code, but rather by a
+    :class:`ResourceFinder` which manages the resource.
+    """
+    is_container = False        # Backwards compatibility
+
+    def as_stream(self):
+        """
+        Get the resource as a stream.
+
+        This is not a property to make it obvious that it returns a new stream
+        each time.
+        """
+        return self.finder.get_stream(self)
+
+    @cached_property
+    def file_path(self):
+        global cache
+        if cache is None:
+            cache = ResourceCache()
+        return cache.get(self)
+
+    @cached_property
+    def bytes(self):
+        return self.finder.get_bytes(self)
+
+    @cached_property
+    def size(self):
+        return self.finder.get_size(self)
+
+
+class ResourceContainer(ResourceBase):
+    is_container = True     # Backwards compatibility
+
+    @cached_property
+    def resources(self):
+        return self.finder.get_resources(self)
+
+
+class ResourceFinder(object):
+    """
+    Resource finder for file system resources.
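+
+    A hedged usage sketch (``mypkg`` and ``data.txt`` are hypothetical
+    names, not part of distlib)::
+
+        import mypkg
+        rf = ResourceFinder(mypkg)
+        res = rf.find('data.txt')
+        data = None if res is None else res.bytes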
+ """ + + if sys.platform.startswith('java'): + skipped_extensions = ('.pyc', '.pyo', '.class') + else: + skipped_extensions = ('.pyc', '.pyo') + + def __init__(self, module): + self.module = module + self.loader = getattr(module, '__loader__', None) + self.base = os.path.dirname(getattr(module, '__file__', '')) + + def _adjust_path(self, path): + return os.path.realpath(path) + + def _make_path(self, resource_name): + # Issue #50: need to preserve type of path on Python 2.x + # like os.path._get_sep + if isinstance(resource_name, bytes): # should only happen on 2.x + sep = b'/' + else: + sep = '/' + parts = resource_name.split(sep) + parts.insert(0, self.base) + result = os.path.join(*parts) + return self._adjust_path(result) + + def _find(self, path): + return os.path.exists(path) + + def get_cache_info(self, resource): + return None, resource.path + + def find(self, resource_name): + path = self._make_path(resource_name) + if not self._find(path): + result = None + else: + if self._is_directory(path): + result = ResourceContainer(self, resource_name) + else: + result = Resource(self, resource_name) + result.path = path + return result + + def get_stream(self, resource): + return open(resource.path, 'rb') + + def get_bytes(self, resource): + with open(resource.path, 'rb') as f: + return f.read() + + def get_size(self, resource): + return os.path.getsize(resource.path) + + def get_resources(self, resource): + def allowed(f): + return (f != '__pycache__' and not + f.endswith(self.skipped_extensions)) + return set([f for f in os.listdir(resource.path) if allowed(f)]) + + def is_container(self, resource): + return self._is_directory(resource.path) + + _is_directory = staticmethod(os.path.isdir) + + def iterator(self, resource_name): + resource = self.find(resource_name) + if resource is not None: + todo = [resource] + while todo: + resource = todo.pop(0) + yield resource + if resource.is_container: + rname = resource.name + for name in resource.resources: + if not rname: + new_name = name + else: + new_name = '/'.join([rname, name]) + child = self.find(new_name) + if child.is_container: + todo.append(child) + else: + yield child + + +class ZipResourceFinder(ResourceFinder): + """ + Resource finder for resources in .zip files. 
+ """ + def __init__(self, module): + super(ZipResourceFinder, self).__init__(module) + archive = self.loader.archive + self.prefix_len = 1 + len(archive) + # PyPy doesn't have a _files attr on zipimporter, and you can't set one + if hasattr(self.loader, '_files'): + self._files = self.loader._files + else: + self._files = zipimport._zip_directory_cache[archive] + self.index = sorted(self._files) + + def _adjust_path(self, path): + return path + + def _find(self, path): + path = path[self.prefix_len:] + if path in self._files: + result = True + else: + if path and path[-1] != os.sep: + path = path + os.sep + i = bisect.bisect(self.index, path) + try: + result = self.index[i].startswith(path) + except IndexError: + result = False + if not result: + logger.debug('_find failed: %r %r', path, self.loader.prefix) + else: + logger.debug('_find worked: %r %r', path, self.loader.prefix) + return result + + def get_cache_info(self, resource): + prefix = self.loader.archive + path = resource.path[1 + len(prefix):] + return prefix, path + + def get_bytes(self, resource): + return self.loader.get_data(resource.path) + + def get_stream(self, resource): + return io.BytesIO(self.get_bytes(resource)) + + def get_size(self, resource): + path = resource.path[self.prefix_len:] + return self._files[path][3] + + def get_resources(self, resource): + path = resource.path[self.prefix_len:] + if path and path[-1] != os.sep: + path += os.sep + plen = len(path) + result = set() + i = bisect.bisect(self.index, path) + while i < len(self.index): + if not self.index[i].startswith(path): + break + s = self.index[i][plen:] + result.add(s.split(os.sep, 1)[0]) # only immediate children + i += 1 + return result + + def _is_directory(self, path): + path = path[self.prefix_len:] + if path and path[-1] != os.sep: + path += os.sep + i = bisect.bisect(self.index, path) + try: + result = self.index[i].startswith(path) + except IndexError: + result = False + return result + +_finder_registry = { + type(None): ResourceFinder, + zipimport.zipimporter: ZipResourceFinder +} + +try: + # In Python 3.6, _frozen_importlib -> _frozen_importlib_external + try: + import _frozen_importlib_external as _fi + except ImportError: + import _frozen_importlib as _fi + _finder_registry[_fi.SourceFileLoader] = ResourceFinder + _finder_registry[_fi.FileFinder] = ResourceFinder + del _fi +except (ImportError, AttributeError): + pass + + +def register_finder(loader, finder_maker): + _finder_registry[type(loader)] = finder_maker + +_finder_cache = {} + + +def finder(package): + """ + Return a resource finder for a package. + :param package: The name of the package. + :return: A :class:`ResourceFinder` instance for the package. + """ + if package in _finder_cache: + result = _finder_cache[package] + else: + if package not in sys.modules: + __import__(package) + module = sys.modules[package] + path = getattr(module, '__path__', None) + if path is None: + raise DistlibException('You cannot get a finder for a module, ' + 'only for a package') + loader = getattr(module, '__loader__', None) + finder_maker = _finder_registry.get(type(loader)) + if finder_maker is None: + raise DistlibException('Unable to locate finder for %r' % package) + result = finder_maker(module) + _finder_cache[package] = result + return result + + +_dummy_module = types.ModuleType(str('__dummy__')) + + +def finder_for_path(path): + """ + Return a resource finder for a path, which should represent a container. + + :param path: The path. 
+    :return: A :class:`ResourceFinder` instance for the path.
+    """
+    result = None
+    # calls any path hooks, gets importer into cache
+    pkgutil.get_importer(path)
+    loader = sys.path_importer_cache.get(path)
+    finder = _finder_registry.get(type(loader))
+    if finder:
+        module = _dummy_module
+        module.__file__ = os.path.join(path, '')
+        module.__loader__ = loader
+        result = finder(module)
+    return result
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distlib/scripts.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distlib/scripts.py
new file mode 100644
index 0000000000000000000000000000000000000000..51859741867d195f91d8ba84b87baa37d0c4a8f0
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distlib/scripts.py
@@ -0,0 +1,416 @@
+# -*- coding: utf-8 -*-
+#
+# Copyright (C) 2013-2015 Vinay Sajip.
+# Licensed to the Python Software Foundation under a contributor agreement.
+# See LICENSE.txt and CONTRIBUTORS.txt.
+#
+from io import BytesIO
+import logging
+import os
+import re
+import struct
+import sys
+
+from .compat import sysconfig, detect_encoding, ZipFile
+from .resources import finder
+from .util import (FileOperator, get_export_entry, convert_path,
+                   get_executable, in_venv)
+
+logger = logging.getLogger(__name__)
+
+_DEFAULT_MANIFEST = '''
+<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
+<assembly xmlns="urn:schemas-microsoft-com:asm.v1"
+ manifestVersion="1.0">
+ <assemblyIdentity version="1.0.0.0"
+ processorArchitecture="X86"
+ name="%s"
+ type="win32"/>
+
+ <!-- Identify the application security requirements. -->
+ <trustInfo xmlns="urn:schemas-microsoft-com:asm.v3">
+  <security>
+   <requestedPrivileges>
+    <requestedExecutionLevel level="asInvoker" uiAccess="false"/>
+   </requestedPrivileges>
+  </security>
+ </trustInfo>
+</assembly>'''.strip()
+
+# check if Python is called on the first line with this expression
+FIRST_LINE_RE = re.compile(b'^#!.*pythonw?[0-9.]*([ \t].*)?$')
+SCRIPT_TEMPLATE = r'''# -*- coding: utf-8 -*-
+import re
+import sys
+from %(module)s import %(import_name)s
+if __name__ == '__main__':
+    sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
+    sys.exit(%(func)s())
+'''
+
+
+def _enquote_executable(executable):
+    if ' ' in executable:
+        # make sure we quote only the executable in case of env
+        # for example /usr/bin/env "/dir with spaces/bin/jython"
+        # instead of "/usr/bin/env /dir with spaces/bin/jython"
+        # otherwise the whole 'env executable' string would be quoted
+        # as a single name
+        if executable.startswith('/usr/bin/env '):
+            env, _executable = executable.split(' ', 1)
+            if ' ' in _executable and not _executable.startswith('"'):
+                executable = '%s "%s"' % (env, _executable)
+        else:
+            if not executable.startswith('"'):
+                executable = '"%s"' % executable
+    return executable
+
+
+class ScriptMaker(object):
+    """
+    A class to copy or create scripts from source scripts or callable
+    specifications.
+    """
+    script_template = SCRIPT_TEMPLATE
+
+    executable = None  # for shebangs
+
+    def __init__(self, source_dir, target_dir, add_launchers=True,
+                 dry_run=False, fileop=None):
+        self.source_dir = source_dir
+        self.target_dir = target_dir
+        self.add_launchers = add_launchers
+        self.force = False
+        self.clobber = False
+        # It only makes sense to set mode bits on POSIX.
+        self.set_mode = (os.name == 'posix') or (os.name == 'java' and
+                                                 os._name == 'posix')
+        self.variants = set(('', 'X.Y'))
+        self._fileop = fileop or FileOperator(dry_run)
+
+        self._is_nt = os.name == 'nt' or (
+            os.name == 'java' and os._name == 'nt')
+
+    def _get_alternate_executable(self, executable, options):
+        if options.get('gui', False) and self._is_nt:  # pragma: no cover
+            dn, fn = os.path.split(executable)
+            fn = fn.replace('python', 'pythonw')
+            executable = os.path.join(dn, fn)
+        return executable
+
+    if sys.platform.startswith('java'):  # pragma: no cover
+        def _is_shell(self, executable):
+            """
+            Determine if the specified executable is a script
+            (contains a #!
line) + """ + try: + with open(executable) as fp: + return fp.read(2) == '#!' + except (OSError, IOError): + logger.warning('Failed to open %s', executable) + return False + + def _fix_jython_executable(self, executable): + if self._is_shell(executable): + # Workaround for Jython is not needed on Linux systems. + import java + + if java.lang.System.getProperty('os.name') == 'Linux': + return executable + elif executable.lower().endswith('jython.exe'): + # Use wrapper exe for Jython on Windows + return executable + return '/usr/bin/env %s' % executable + + def _build_shebang(self, executable, post_interp): + """ + Build a shebang line. In the simple case (on Windows, or a shebang line + which is not too long or contains spaces) use a simple formulation for + the shebang. Otherwise, use /bin/sh as the executable, with a contrived + shebang which allows the script to run either under Python or sh, using + suitable quoting. Thanks to Harald Nordgren for his input. + + See also: http://www.in-ulm.de/~mascheck/various/shebang/#length + https://hg.mozilla.org/mozilla-central/file/tip/mach + """ + if os.name != 'posix': + simple_shebang = True + else: + # Add 3 for '#!' prefix and newline suffix. + shebang_length = len(executable) + len(post_interp) + 3 + if sys.platform == 'darwin': + max_shebang_length = 512 + else: + max_shebang_length = 127 + simple_shebang = ((b' ' not in executable) and + (shebang_length <= max_shebang_length)) + + if simple_shebang: + result = b'#!' + executable + post_interp + b'\n' + else: + result = b'#!/bin/sh\n' + result += b"'''exec' " + executable + post_interp + b' "$0" "$@"\n' + result += b"' '''" + return result + + def _get_shebang(self, encoding, post_interp=b'', options=None): + enquote = True + if self.executable: + executable = self.executable + enquote = False # assume this will be taken care of + elif not sysconfig.is_python_build(): + executable = get_executable() + elif in_venv(): # pragma: no cover + executable = os.path.join(sysconfig.get_path('scripts'), + 'python%s' % sysconfig.get_config_var('EXE')) + else: # pragma: no cover + executable = os.path.join( + sysconfig.get_config_var('BINDIR'), + 'python%s%s' % (sysconfig.get_config_var('VERSION'), + sysconfig.get_config_var('EXE'))) + if options: + executable = self._get_alternate_executable(executable, options) + + if sys.platform.startswith('java'): # pragma: no cover + executable = self._fix_jython_executable(executable) + + # Normalise case for Windows - COMMENTED OUT + # executable = os.path.normcase(executable) + # N.B. The normalising operation above has been commented out: See + # issue #124. Although paths in Windows are generally case-insensitive, + # they aren't always. For example, a path containing a ẞ (which is a + # LATIN CAPITAL LETTER SHARP S - U+1E9E) is normcased to ß (which is a + # LATIN SMALL LETTER SHARP S' - U+00DF). The two are not considered by + # Windows as equivalent in path names. + + # If the user didn't specify an executable, it may be necessary to + # cater for executable paths with spaces (not uncommon on Windows) + if enquote: + executable = _enquote_executable(executable) + # Issue #51: don't use fsencode, since we later try to + # check that the shebang is decodable using utf-8. 
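+            # Illustrative note (added comment, not upstream distlib): with
+            # an executable of '/usr/bin/python3' and an empty post_interp,
+            # the _build_shebang call below yields b'#!/usr/bin/python3\n'.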
+ executable = executable.encode('utf-8') + # in case of IronPython, play safe and enable frames support + if (sys.platform == 'cli' and '-X:Frames' not in post_interp + and '-X:FullFrames' not in post_interp): # pragma: no cover + post_interp += b' -X:Frames' + shebang = self._build_shebang(executable, post_interp) + # Python parser starts to read a script using UTF-8 until + # it gets a #coding:xxx cookie. The shebang has to be the + # first line of a file, the #coding:xxx cookie cannot be + # written before. So the shebang has to be decodable from + # UTF-8. + try: + shebang.decode('utf-8') + except UnicodeDecodeError: # pragma: no cover + raise ValueError( + 'The shebang (%r) is not decodable from utf-8' % shebang) + # If the script is encoded to a custom encoding (use a + # #coding:xxx cookie), the shebang has to be decodable from + # the script encoding too. + if encoding != 'utf-8': + try: + shebang.decode(encoding) + except UnicodeDecodeError: # pragma: no cover + raise ValueError( + 'The shebang (%r) is not decodable ' + 'from the script encoding (%r)' % (shebang, encoding)) + return shebang + + def _get_script_text(self, entry): + return self.script_template % dict(module=entry.prefix, + import_name=entry.suffix.split('.')[0], + func=entry.suffix) + + manifest = _DEFAULT_MANIFEST + + def get_manifest(self, exename): + base = os.path.basename(exename) + return self.manifest % base + + def _write_script(self, names, shebang, script_bytes, filenames, ext): + use_launcher = self.add_launchers and self._is_nt + linesep = os.linesep.encode('utf-8') + if not shebang.endswith(linesep): + shebang += linesep + if not use_launcher: + script_bytes = shebang + script_bytes + else: # pragma: no cover + if ext == 'py': + launcher = self._get_launcher('t') + else: + launcher = self._get_launcher('w') + stream = BytesIO() + with ZipFile(stream, 'w') as zf: + zf.writestr('__main__.py', script_bytes) + zip_data = stream.getvalue() + script_bytes = launcher + shebang + zip_data + for name in names: + outname = os.path.join(self.target_dir, name) + if use_launcher: # pragma: no cover + n, e = os.path.splitext(outname) + if e.startswith('.py'): + outname = n + outname = '%s.exe' % outname + try: + self._fileop.write_binary_file(outname, script_bytes) + except Exception: + # Failed writing an executable - it might be in use. + logger.warning('Failed to write executable - trying to ' + 'use .deleteme logic') + dfname = '%s.deleteme' % outname + if os.path.exists(dfname): + os.remove(dfname) # Not allowed to fail here + os.rename(outname, dfname) # nor here + self._fileop.write_binary_file(outname, script_bytes) + logger.debug('Able to replace executable using ' + '.deleteme logic') + try: + os.remove(dfname) + except Exception: + pass # still in use - ignore error + else: + if self._is_nt and not outname.endswith('.' 
+ ext): # pragma: no cover + outname = '%s.%s' % (outname, ext) + if os.path.exists(outname) and not self.clobber: + logger.warning('Skipping existing file %s', outname) + continue + self._fileop.write_binary_file(outname, script_bytes) + if self.set_mode: + self._fileop.set_executable_mode([outname]) + filenames.append(outname) + + def _make_script(self, entry, filenames, options=None): + post_interp = b'' + if options: + args = options.get('interpreter_args', []) + if args: + args = ' %s' % ' '.join(args) + post_interp = args.encode('utf-8') + shebang = self._get_shebang('utf-8', post_interp, options=options) + script = self._get_script_text(entry).encode('utf-8') + name = entry.name + scriptnames = set() + if '' in self.variants: + scriptnames.add(name) + if 'X' in self.variants: + scriptnames.add('%s%s' % (name, sys.version_info[0])) + if 'X.Y' in self.variants: + scriptnames.add('%s-%s.%s' % (name, sys.version_info[0], + sys.version_info[1])) + if options and options.get('gui', False): + ext = 'pyw' + else: + ext = 'py' + self._write_script(scriptnames, shebang, script, filenames, ext) + + def _copy_script(self, script, filenames): + adjust = False + script = os.path.join(self.source_dir, convert_path(script)) + outname = os.path.join(self.target_dir, os.path.basename(script)) + if not self.force and not self._fileop.newer(script, outname): + logger.debug('not copying %s (up-to-date)', script) + return + + # Always open the file, but ignore failures in dry-run mode -- + # that way, we'll get accurate feedback if we can read the + # script. + try: + f = open(script, 'rb') + except IOError: # pragma: no cover + if not self.dry_run: + raise + f = None + else: + first_line = f.readline() + if not first_line: # pragma: no cover + logger.warning('%s: %s is an empty file (skipping)', + self.get_command_name(), script) + return + + match = FIRST_LINE_RE.match(first_line.replace(b'\r\n', b'\n')) + if match: + adjust = True + post_interp = match.group(1) or b'' + + if not adjust: + if f: + f.close() + self._fileop.copy_file(script, outname) + if self.set_mode: + self._fileop.set_executable_mode([outname]) + filenames.append(outname) + else: + logger.info('copying and adjusting %s -> %s', script, + self.target_dir) + if not self._fileop.dry_run: + encoding, lines = detect_encoding(f.readline) + f.seek(0) + shebang = self._get_shebang(encoding, post_interp) + if b'pythonw' in first_line: # pragma: no cover + ext = 'pyw' + else: + ext = 'py' + n = os.path.basename(outname) + self._write_script([n], shebang, f.read(), filenames, ext) + if f: + f.close() + + @property + def dry_run(self): + return self._fileop.dry_run + + @dry_run.setter + def dry_run(self, value): + self._fileop.dry_run = value + + if os.name == 'nt' or (os.name == 'java' and os._name == 'nt'): # pragma: no cover + # Executable launcher support. + # Launchers are from https://bitbucket.org/vinay.sajip/simple_launcher/ + + def _get_launcher(self, kind): + if struct.calcsize('P') == 8: # 64-bit + bits = '64' + else: + bits = '32' + name = '%s%s.exe' % (kind, bits) + # Issue 31: don't hardcode an absolute package name, but + # determine it relative to the current package + distlib_package = __name__.rsplit('.', 1)[0] + resource = finder(distlib_package).find(name) + if not resource: + msg = ('Unable to find resource %s in package %s' % (name, + distlib_package)) + raise ValueError(msg) + return resource.bytes + + # Public API follows + + def make(self, specification, options=None): + """ + Make a script. 
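+
+        A hedged example (the directories and the ``mypkg.cli:main`` export
+        entry are hypothetical)::
+
+            maker = ScriptMaker('scripts', '/tmp/bin')
+            maker.make('foo = mypkg.cli:main')  # script wrapping a callable
+            maker.make('run.py')                # copy of scripts/run.py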
+
+        :param specification: The specification, which is either a valid export
+                              entry specification (to make a script from a
+                              callable) or a filename (to make a script by
+                              copying from a source location).
+        :param options: A dictionary of options controlling script generation.
+        :return: A list of all absolute pathnames written to.
+        """
+        filenames = []
+        entry = get_export_entry(specification)
+        if entry is None:
+            self._copy_script(specification, filenames)
+        else:
+            self._make_script(entry, filenames, options=options)
+        return filenames
+
+    def make_multiple(self, specifications, options=None):
+        """
+        Take a list of specifications and make scripts from them.
+
+        :param specifications: A list of specifications.
+        :return: A list of all absolute pathnames written to.
+        """
+        filenames = []
+        for specification in specifications:
+            filenames.extend(self.make(specification, options))
+        return filenames
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distlib/util.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distlib/util.py
new file mode 100644
index 0000000000000000000000000000000000000000..01324eae462f3384358a77e649b9c59a26cdfa9e
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distlib/util.py
@@ -0,0 +1,1761 @@
+#
+# Copyright (C) 2012-2017 The Python Software Foundation.
+# See LICENSE.txt and CONTRIBUTORS.txt.
+#
+import codecs
+from collections import deque
+import contextlib
+import csv
+from glob import iglob as std_iglob
+import io
+import json
+import logging
+import os
+import py_compile
+import re
+import socket
+try:
+    import ssl
+except ImportError:  # pragma: no cover
+    ssl = None
+import subprocess
+import sys
+import tarfile
+import tempfile
+import textwrap
+
+try:
+    import threading
+except ImportError:  # pragma: no cover
+    import dummy_threading as threading
+import time
+
+from . import DistlibException
+from .compat import (string_types, text_type, shutil, raw_input, StringIO,
+                     cache_from_source, urlopen, urljoin, httplib, xmlrpclib,
+                     splittype, HTTPHandler, BaseConfigurator, valid_ident,
+                     Container, configparser, URLError, ZipFile, fsdecode,
+                     unquote, urlparse)
+
+logger = logging.getLogger(__name__)
+
+#
+# Requirement parsing code as per PEP 508
+#
+
+IDENTIFIER = re.compile(r'^([\w\.-]+)\s*')
+VERSION_IDENTIFIER = re.compile(r'^([\w\.*+-]+)\s*')
+COMPARE_OP = re.compile(r'^(<=?|>=?|={2,3}|[~!]=)\s*')
+MARKER_OP = re.compile(r'^((<=?)|(>=?)|={2,3}|[~!]=|in|not\s+in)\s*')
+OR = re.compile(r'^or\b\s*')
+AND = re.compile(r'^and\b\s*')
+NON_SPACE = re.compile(r'(\S+)\s*')
+STRING_CHUNK = re.compile(r'([\s\w\.{}()*+#:;,/?!~`@$%^&=|<>\[\]-]+)')
+
+
+def parse_marker(marker_string):
+    """
+    Parse a marker string and return a dictionary containing a marker
+    expression.
+
+    The dictionary will contain keys "op", "lhs" and "rhs" for non-terminals
+    in the expression grammar, or strings. A string contained in quotes is to
+    be interpreted as a literal string, and a string not contained in quotes
+    is a variable (such as os_name).
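+
+    For example (illustrative)::
+
+        parse_marker('python_version >= "3.6"')
+        # -> ({'op': '>=', 'lhs': 'python_version', 'rhs': '"3.6"'}, '')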
+ """ + def marker_var(remaining): + # either identifier, or literal string + m = IDENTIFIER.match(remaining) + if m: + result = m.groups()[0] + remaining = remaining[m.end():] + elif not remaining: + raise SyntaxError('unexpected end of input') + else: + q = remaining[0] + if q not in '\'"': + raise SyntaxError('invalid expression: %s' % remaining) + oq = '\'"'.replace(q, '') + remaining = remaining[1:] + parts = [q] + while remaining: + # either a string chunk, or oq, or q to terminate + if remaining[0] == q: + break + elif remaining[0] == oq: + parts.append(oq) + remaining = remaining[1:] + else: + m = STRING_CHUNK.match(remaining) + if not m: + raise SyntaxError('error in string literal: %s' % remaining) + parts.append(m.groups()[0]) + remaining = remaining[m.end():] + else: + s = ''.join(parts) + raise SyntaxError('unterminated string: %s' % s) + parts.append(q) + result = ''.join(parts) + remaining = remaining[1:].lstrip() # skip past closing quote + return result, remaining + + def marker_expr(remaining): + if remaining and remaining[0] == '(': + result, remaining = marker(remaining[1:].lstrip()) + if remaining[0] != ')': + raise SyntaxError('unterminated parenthesis: %s' % remaining) + remaining = remaining[1:].lstrip() + else: + lhs, remaining = marker_var(remaining) + while remaining: + m = MARKER_OP.match(remaining) + if not m: + break + op = m.groups()[0] + remaining = remaining[m.end():] + rhs, remaining = marker_var(remaining) + lhs = {'op': op, 'lhs': lhs, 'rhs': rhs} + result = lhs + return result, remaining + + def marker_and(remaining): + lhs, remaining = marker_expr(remaining) + while remaining: + m = AND.match(remaining) + if not m: + break + remaining = remaining[m.end():] + rhs, remaining = marker_expr(remaining) + lhs = {'op': 'and', 'lhs': lhs, 'rhs': rhs} + return lhs, remaining + + def marker(remaining): + lhs, remaining = marker_and(remaining) + while remaining: + m = OR.match(remaining) + if not m: + break + remaining = remaining[m.end():] + rhs, remaining = marker_and(remaining) + lhs = {'op': 'or', 'lhs': lhs, 'rhs': rhs} + return lhs, remaining + + return marker(marker_string) + + +def parse_requirement(req): + """ + Parse a requirement passed in as a string. Return a Container + whose attributes contain the various parts of the requirement. + """ + remaining = req.strip() + if not remaining or remaining.startswith('#'): + return None + m = IDENTIFIER.match(remaining) + if not m: + raise SyntaxError('name expected: %s' % remaining) + distname = m.groups()[0] + remaining = remaining[m.end():] + extras = mark_expr = versions = uri = None + if remaining and remaining[0] == '[': + i = remaining.find(']', 1) + if i < 0: + raise SyntaxError('unterminated extra: %s' % remaining) + s = remaining[1:i] + remaining = remaining[i + 1:].lstrip() + extras = [] + while s: + m = IDENTIFIER.match(s) + if not m: + raise SyntaxError('malformed extra: %s' % s) + extras.append(m.groups()[0]) + s = s[m.end():] + if not s: + break + if s[0] != ',': + raise SyntaxError('comma expected in extras: %s' % s) + s = s[1:].lstrip() + if not extras: + extras = None + if remaining: + if remaining[0] == '@': + # it's a URI + remaining = remaining[1:].lstrip() + m = NON_SPACE.match(remaining) + if not m: + raise SyntaxError('invalid URI: %s' % remaining) + uri = m.groups()[0] + t = urlparse(uri) + # there are issues with Python and URL parsing, so this test + # is a bit crude. See bpo-20271, bpo-23505. 
Python doesn't + # always parse invalid URLs correctly - it should raise + # exceptions for malformed URLs + if not (t.scheme and t.netloc): + raise SyntaxError('Invalid URL: %s' % uri) + remaining = remaining[m.end():].lstrip() + else: + + def get_versions(ver_remaining): + """ + Return a list of operator, version tuples if any are + specified, else None. + """ + m = COMPARE_OP.match(ver_remaining) + versions = None + if m: + versions = [] + while True: + op = m.groups()[0] + ver_remaining = ver_remaining[m.end():] + m = VERSION_IDENTIFIER.match(ver_remaining) + if not m: + raise SyntaxError('invalid version: %s' % ver_remaining) + v = m.groups()[0] + versions.append((op, v)) + ver_remaining = ver_remaining[m.end():] + if not ver_remaining or ver_remaining[0] != ',': + break + ver_remaining = ver_remaining[1:].lstrip() + m = COMPARE_OP.match(ver_remaining) + if not m: + raise SyntaxError('invalid constraint: %s' % ver_remaining) + if not versions: + versions = None + return versions, ver_remaining + + if remaining[0] != '(': + versions, remaining = get_versions(remaining) + else: + i = remaining.find(')', 1) + if i < 0: + raise SyntaxError('unterminated parenthesis: %s' % remaining) + s = remaining[1:i] + remaining = remaining[i + 1:].lstrip() + # As a special diversion from PEP 508, allow a version number + # a.b.c in parentheses as a synonym for ~= a.b.c (because this + # is allowed in earlier PEPs) + if COMPARE_OP.match(s): + versions, _ = get_versions(s) + else: + m = VERSION_IDENTIFIER.match(s) + if not m: + raise SyntaxError('invalid constraint: %s' % s) + v = m.groups()[0] + s = s[m.end():].lstrip() + if s: + raise SyntaxError('invalid constraint: %s' % s) + versions = [('~=', v)] + + if remaining: + if remaining[0] != ';': + raise SyntaxError('invalid requirement: %s' % remaining) + remaining = remaining[1:].lstrip() + + mark_expr, remaining = parse_marker(remaining) + + if remaining and remaining[0] != '#': + raise SyntaxError('unexpected trailing data: %s' % remaining) + + if not versions: + rs = distname + else: + rs = '%s %s' % (distname, ', '.join(['%s %s' % con for con in versions])) + return Container(name=distname, extras=extras, constraints=versions, + marker=mark_expr, url=uri, requirement=rs) + + +def get_resources_dests(resources_root, rules): + """Find destinations for resources files""" + + def get_rel_path(root, path): + # normalizes and returns a lstripped-/-separated path + root = root.replace(os.path.sep, '/') + path = path.replace(os.path.sep, '/') + assert path.startswith(root) + return path[len(root):].lstrip('/') + + destinations = {} + for base, suffix, dest in rules: + prefix = os.path.join(resources_root, base) + for abs_base in iglob(prefix): + abs_glob = os.path.join(abs_base, suffix) + for abs_path in iglob(abs_glob): + resource_file = get_rel_path(resources_root, abs_path) + if dest is None: # remove the entry if it was here + destinations.pop(resource_file, None) + else: + rel_path = get_rel_path(abs_base, abs_path) + rel_dest = dest.replace(os.path.sep, '/').rstrip('/') + destinations[resource_file] = rel_dest + '/' + rel_path + return destinations + + +def in_venv(): + if hasattr(sys, 'real_prefix'): + # virtualenv venvs + result = True + else: + # PEP 405 venvs + result = sys.prefix != getattr(sys, 'base_prefix', sys.prefix) + return result + + +def get_executable(): +# The __PYVENV_LAUNCHER__ dance is apparently no longer needed, as +# changes to the stub launcher mean that sys.executable always points +# to the stub on OS X +# if sys.platform == 
'darwin' and ('__PYVENV_LAUNCHER__' +# in os.environ): +# result = os.environ['__PYVENV_LAUNCHER__'] +# else: +# result = sys.executable +# return result + result = os.path.normcase(sys.executable) + if not isinstance(result, text_type): + result = fsdecode(result) + return result + + +def proceed(prompt, allowed_chars, error_prompt=None, default=None): + p = prompt + while True: + s = raw_input(p) + p = prompt + if not s and default: + s = default + if s: + c = s[0].lower() + if c in allowed_chars: + break + if error_prompt: + p = '%c: %s\n%s' % (c, error_prompt, prompt) + return c + + +def extract_by_key(d, keys): + if isinstance(keys, string_types): + keys = keys.split() + result = {} + for key in keys: + if key in d: + result[key] = d[key] + return result + +def read_exports(stream): + if sys.version_info[0] >= 3: + # needs to be a text stream + stream = codecs.getreader('utf-8')(stream) + # Try to load as JSON, falling back on legacy format + data = stream.read() + stream = StringIO(data) + try: + jdata = json.load(stream) + result = jdata['extensions']['python.exports']['exports'] + for group, entries in result.items(): + for k, v in entries.items(): + s = '%s = %s' % (k, v) + entry = get_export_entry(s) + assert entry is not None + entries[k] = entry + return result + except Exception: + stream.seek(0, 0) + + def read_stream(cp, stream): + if hasattr(cp, 'read_file'): + cp.read_file(stream) + else: + cp.readfp(stream) + + cp = configparser.ConfigParser() + try: + read_stream(cp, stream) + except configparser.MissingSectionHeaderError: + stream.close() + data = textwrap.dedent(data) + stream = StringIO(data) + read_stream(cp, stream) + + result = {} + for key in cp.sections(): + result[key] = entries = {} + for name, value in cp.items(key): + s = '%s = %s' % (name, value) + entry = get_export_entry(s) + assert entry is not None + #entry.dist = self + entries[name] = entry + return result + + +def write_exports(exports, stream): + if sys.version_info[0] >= 3: + # needs to be a text stream + stream = codecs.getwriter('utf-8')(stream) + cp = configparser.ConfigParser() + for k, v in exports.items(): + # TODO check k, v for valid values + cp.add_section(k) + for entry in v.values(): + if entry.suffix is None: + s = entry.prefix + else: + s = '%s:%s' % (entry.prefix, entry.suffix) + if entry.flags: + s = '%s [%s]' % (s, ', '.join(entry.flags)) + cp.set(k, entry.name, s) + cp.write(stream) + + +@contextlib.contextmanager +def tempdir(): + td = tempfile.mkdtemp() + try: + yield td + finally: + shutil.rmtree(td) + +@contextlib.contextmanager +def chdir(d): + cwd = os.getcwd() + try: + os.chdir(d) + yield + finally: + os.chdir(cwd) + + +@contextlib.contextmanager +def socket_timeout(seconds=15): + cto = socket.getdefaulttimeout() + try: + socket.setdefaulttimeout(seconds) + yield + finally: + socket.setdefaulttimeout(cto) + + +class cached_property(object): + def __init__(self, func): + self.func = func + #for attr in ('__name__', '__module__', '__doc__'): + # setattr(self, attr, getattr(func, attr, None)) + + def __get__(self, obj, cls=None): + if obj is None: + return self + value = self.func(obj) + object.__setattr__(obj, self.func.__name__, value) + #obj.__dict__[self.func.__name__] = value = self.func(obj) + return value + +def convert_path(pathname): + """Return 'pathname' as a name that will work on the native filesystem. + + The path is split on '/' and put back together again using the current + directory separator. 
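+    For example, on Windows (illustrative) ``convert_path('a/b/c')`` returns
+    ``'a\\b\\c'``, while on POSIX the path is returned unchanged.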
Needed because filenames in the setup script are + always supplied in Unix style, and have to be converted to the local + convention before we can actually use them in the filesystem. Raises + ValueError on non-Unix-ish systems if 'pathname' either starts or + ends with a slash. + """ + if os.sep == '/': + return pathname + if not pathname: + return pathname + if pathname[0] == '/': + raise ValueError("path '%s' cannot be absolute" % pathname) + if pathname[-1] == '/': + raise ValueError("path '%s' cannot end with '/'" % pathname) + + paths = pathname.split('/') + while os.curdir in paths: + paths.remove(os.curdir) + if not paths: + return os.curdir + return os.path.join(*paths) + + +class FileOperator(object): + def __init__(self, dry_run=False): + self.dry_run = dry_run + self.ensured = set() + self._init_record() + + def _init_record(self): + self.record = False + self.files_written = set() + self.dirs_created = set() + + def record_as_written(self, path): + if self.record: + self.files_written.add(path) + + def newer(self, source, target): + """Tell if the target is newer than the source. + + Returns true if 'source' exists and is more recently modified than + 'target', or if 'source' exists and 'target' doesn't. + + Returns false if both exist and 'target' is the same age or younger + than 'source'. Raise PackagingFileError if 'source' does not exist. + + Note that this test is not very accurate: files created in the same + second will have the same "age". + """ + if not os.path.exists(source): + raise DistlibException("file '%r' does not exist" % + os.path.abspath(source)) + if not os.path.exists(target): + return True + + return os.stat(source).st_mtime > os.stat(target).st_mtime + + def copy_file(self, infile, outfile, check=True): + """Copy a file respecting dry-run and force flags. + """ + self.ensure_dir(os.path.dirname(outfile)) + logger.info('Copying %s to %s', infile, outfile) + if not self.dry_run: + msg = None + if check: + if os.path.islink(outfile): + msg = '%s is a symlink' % outfile + elif os.path.exists(outfile) and not os.path.isfile(outfile): + msg = '%s is a non-regular file' % outfile + if msg: + raise ValueError(msg + ' which would be overwritten') + shutil.copyfile(infile, outfile) + self.record_as_written(outfile) + + def copy_stream(self, instream, outfile, encoding=None): + assert not os.path.isdir(outfile) + self.ensure_dir(os.path.dirname(outfile)) + logger.info('Copying stream %s to %s', instream, outfile) + if not self.dry_run: + if encoding is None: + outstream = open(outfile, 'wb') + else: + outstream = codecs.open(outfile, 'w', encoding=encoding) + try: + shutil.copyfileobj(instream, outstream) + finally: + outstream.close() + self.record_as_written(outfile) + + def write_binary_file(self, path, data): + self.ensure_dir(os.path.dirname(path)) + if not self.dry_run: + if os.path.exists(path): + os.remove(path) + with open(path, 'wb') as f: + f.write(data) + self.record_as_written(path) + + def write_text_file(self, path, data, encoding): + self.write_binary_file(path, data.encode(encoding)) + + def set_mode(self, bits, mask, files): + if os.name == 'posix' or (os.name == 'java' and os._name == 'posix'): + # Set the executable bits (owner, group, and world) on + # all the files specified. 
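+            # Illustrative arithmetic (added comment): set_executable_mode
+            # below passes bits=0o555 and mask=0o7777, so a file with mode
+            # 0o644 becomes (0o644 | 0o555) & 0o7777 == 0o755.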
+ for f in files: + if self.dry_run: + logger.info("changing mode of %s", f) + else: + mode = (os.stat(f).st_mode | bits) & mask + logger.info("changing mode of %s to %o", f, mode) + os.chmod(f, mode) + + set_executable_mode = lambda s, f: s.set_mode(0o555, 0o7777, f) + + def ensure_dir(self, path): + path = os.path.abspath(path) + if path not in self.ensured and not os.path.exists(path): + self.ensured.add(path) + d, f = os.path.split(path) + self.ensure_dir(d) + logger.info('Creating %s' % path) + if not self.dry_run: + os.mkdir(path) + if self.record: + self.dirs_created.add(path) + + def byte_compile(self, path, optimize=False, force=False, prefix=None, hashed_invalidation=False): + dpath = cache_from_source(path, not optimize) + logger.info('Byte-compiling %s to %s', path, dpath) + if not self.dry_run: + if force or self.newer(path, dpath): + if not prefix: + diagpath = None + else: + assert path.startswith(prefix) + diagpath = path[len(prefix):] + compile_kwargs = {} + if hashed_invalidation and hasattr(py_compile, 'PycInvalidationMode'): + compile_kwargs['invalidation_mode'] = py_compile.PycInvalidationMode.CHECKED_HASH + py_compile.compile(path, dpath, diagpath, True, **compile_kwargs) # raise error + self.record_as_written(dpath) + return dpath + + def ensure_removed(self, path): + if os.path.exists(path): + if os.path.isdir(path) and not os.path.islink(path): + logger.debug('Removing directory tree at %s', path) + if not self.dry_run: + shutil.rmtree(path) + if self.record: + if path in self.dirs_created: + self.dirs_created.remove(path) + else: + if os.path.islink(path): + s = 'link' + else: + s = 'file' + logger.debug('Removing %s %s', s, path) + if not self.dry_run: + os.remove(path) + if self.record: + if path in self.files_written: + self.files_written.remove(path) + + def is_writable(self, path): + result = False + while not result: + if os.path.exists(path): + result = os.access(path, os.W_OK) + break + parent = os.path.dirname(path) + if parent == path: + break + path = parent + return result + + def commit(self): + """ + Commit recorded changes, turn off recording, return + changes. 
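+
+        An illustrative use (``fileop`` being a :class:`FileOperator` with
+        ``record`` set to True beforehand)::
+
+            files_written, dirs_created = fileop.commit()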
+ """ + assert self.record + result = self.files_written, self.dirs_created + self._init_record() + return result + + def rollback(self): + if not self.dry_run: + for f in list(self.files_written): + if os.path.exists(f): + os.remove(f) + # dirs should all be empty now, except perhaps for + # __pycache__ subdirs + # reverse so that subdirs appear before their parents + dirs = sorted(self.dirs_created, reverse=True) + for d in dirs: + flist = os.listdir(d) + if flist: + assert flist == ['__pycache__'] + sd = os.path.join(d, flist[0]) + os.rmdir(sd) + os.rmdir(d) # should fail if non-empty + self._init_record() + +def resolve(module_name, dotted_path): + if module_name in sys.modules: + mod = sys.modules[module_name] + else: + mod = __import__(module_name) + if dotted_path is None: + result = mod + else: + parts = dotted_path.split('.') + result = getattr(mod, parts.pop(0)) + for p in parts: + result = getattr(result, p) + return result + + +class ExportEntry(object): + def __init__(self, name, prefix, suffix, flags): + self.name = name + self.prefix = prefix + self.suffix = suffix + self.flags = flags + + @cached_property + def value(self): + return resolve(self.prefix, self.suffix) + + def __repr__(self): # pragma: no cover + return '' % (self.name, self.prefix, + self.suffix, self.flags) + + def __eq__(self, other): + if not isinstance(other, ExportEntry): + result = False + else: + result = (self.name == other.name and + self.prefix == other.prefix and + self.suffix == other.suffix and + self.flags == other.flags) + return result + + __hash__ = object.__hash__ + + +ENTRY_RE = re.compile(r'''(?P(\w|[-.+])+) + \s*=\s*(?P(\w+)([:\.]\w+)*) + \s*(\[\s*(?P[\w-]+(=\w+)?(,\s*\w+(=\w+)?)*)\s*\])? + ''', re.VERBOSE) + +def get_export_entry(specification): + m = ENTRY_RE.search(specification) + if not m: + result = None + if '[' in specification or ']' in specification: + raise DistlibException("Invalid specification " + "'%s'" % specification) + else: + d = m.groupdict() + name = d['name'] + path = d['callable'] + colons = path.count(':') + if colons == 0: + prefix, suffix = path, None + else: + if colons != 1: + raise DistlibException("Invalid specification " + "'%s'" % specification) + prefix, suffix = path.split(':') + flags = d['flags'] + if flags is None: + if '[' in specification or ']' in specification: + raise DistlibException("Invalid specification " + "'%s'" % specification) + flags = [] + else: + flags = [f.strip() for f in flags.split(',')] + result = ExportEntry(name, prefix, suffix, flags) + return result + + +def get_cache_base(suffix=None): + """ + Return the default base location for distlib caches. If the directory does + not exist, it is created. Use the suffix provided for the base directory, + and default to '.distlib' if it isn't provided. + + On Windows, if LOCALAPPDATA is defined in the environment, then it is + assumed to be a directory, and will be the parent directory of the result. + On POSIX, and on Windows if LOCALAPPDATA is not defined, the user's home + directory - using os.expanduser('~') - will be the parent directory of + the result. + + The result is just the directory '.distlib' in the parent directory as + determined above, or with the name specified with ``suffix``. 
+ """ + if suffix is None: + suffix = '.distlib' + if os.name == 'nt' and 'LOCALAPPDATA' in os.environ: + result = os.path.expandvars('$localappdata') + else: + # Assume posix, or old Windows + result = os.path.expanduser('~') + # we use 'isdir' instead of 'exists', because we want to + # fail if there's a file with that name + if os.path.isdir(result): + usable = os.access(result, os.W_OK) + if not usable: + logger.warning('Directory exists but is not writable: %s', result) + else: + try: + os.makedirs(result) + usable = True + except OSError: + logger.warning('Unable to create %s', result, exc_info=True) + usable = False + if not usable: + result = tempfile.mkdtemp() + logger.warning('Default location unusable, using %s', result) + return os.path.join(result, suffix) + + +def path_to_cache_dir(path): + """ + Convert an absolute path to a directory name for use in a cache. + + The algorithm used is: + + #. On Windows, any ``':'`` in the drive is replaced with ``'---'``. + #. Any occurrence of ``os.sep`` is replaced with ``'--'``. + #. ``'.cache'`` is appended. + """ + d, p = os.path.splitdrive(os.path.abspath(path)) + if d: + d = d.replace(':', '---') + p = p.replace(os.sep, '--') + return d + p + '.cache' + + +def ensure_slash(s): + if not s.endswith('/'): + return s + '/' + return s + + +def parse_credentials(netloc): + username = password = None + if '@' in netloc: + prefix, netloc = netloc.rsplit('@', 1) + if ':' not in prefix: + username = prefix + else: + username, password = prefix.split(':', 1) + if username: + username = unquote(username) + if password: + password = unquote(password) + return username, password, netloc + + +def get_process_umask(): + result = os.umask(0o22) + os.umask(result) + return result + +def is_string_sequence(seq): + result = True + i = None + for i, s in enumerate(seq): + if not isinstance(s, string_types): + result = False + break + assert i is not None + return result + +PROJECT_NAME_AND_VERSION = re.compile('([a-z0-9_]+([.-][a-z_][a-z0-9_]*)*)-' + '([a-z0-9_.+-]+)', re.I) +PYTHON_VERSION = re.compile(r'-py(\d\.?\d?)') + + +def split_filename(filename, project_name=None): + """ + Extract name, version, python version from a filename (no extension) + + Return name, version, pyver or None + """ + result = None + pyver = None + filename = unquote(filename).replace(' ', '-') + m = PYTHON_VERSION.search(filename) + if m: + pyver = m.group(1) + filename = filename[:m.start()] + if project_name and len(filename) > len(project_name) + 1: + m = re.match(re.escape(project_name) + r'\b', filename) + if m: + n = m.end() + result = filename[:n], filename[n + 1:], pyver + if result is None: + m = PROJECT_NAME_AND_VERSION.match(filename) + if m: + result = m.group(1), m.group(3), pyver + return result + +# Allow spaces in name because of legacy dists like "Twisted Core" +NAME_VERSION_RE = re.compile(r'(?P[\w .-]+)\s*' + r'\(\s*(?P[^\s)]+)\)$') + +def parse_name_and_version(p): + """ + A utility method used to get name and version from a string. + + From e.g. a Provides-Dist value. + + :param p: A value in a form 'foo (1.0)' + :return: The name and version as a tuple. 
+ """ + m = NAME_VERSION_RE.match(p) + if not m: + raise DistlibException('Ill-formed name/version string: \'%s\'' % p) + d = m.groupdict() + return d['name'].strip().lower(), d['ver'] + +def get_extras(requested, available): + result = set() + requested = set(requested or []) + available = set(available or []) + if '*' in requested: + requested.remove('*') + result |= available + for r in requested: + if r == '-': + result.add(r) + elif r.startswith('-'): + unwanted = r[1:] + if unwanted not in available: + logger.warning('undeclared extra: %s' % unwanted) + if unwanted in result: + result.remove(unwanted) + else: + if r not in available: + logger.warning('undeclared extra: %s' % r) + result.add(r) + return result +# +# Extended metadata functionality +# + +def _get_external_data(url): + result = {} + try: + # urlopen might fail if it runs into redirections, + # because of Python issue #13696. Fixed in locators + # using a custom redirect handler. + resp = urlopen(url) + headers = resp.info() + ct = headers.get('Content-Type') + if not ct.startswith('application/json'): + logger.debug('Unexpected response for JSON request: %s', ct) + else: + reader = codecs.getreader('utf-8')(resp) + #data = reader.read().decode('utf-8') + #result = json.loads(data) + result = json.load(reader) + except Exception as e: + logger.exception('Failed to get external data for %s: %s', url, e) + return result + +_external_data_base_url = 'https://www.red-dove.com/pypi/projects/' + +def get_project_data(name): + url = '%s/%s/project.json' % (name[0].upper(), name) + url = urljoin(_external_data_base_url, url) + result = _get_external_data(url) + return result + +def get_package_data(name, version): + url = '%s/%s/package-%s.json' % (name[0].upper(), name, version) + url = urljoin(_external_data_base_url, url) + return _get_external_data(url) + + +class Cache(object): + """ + A class implementing a cache for resources that need to live in the file system + e.g. shared libraries. This class was moved from resources to here because it + could be used by other modules, e.g. the wheel module. + """ + + def __init__(self, base): + """ + Initialise an instance. + + :param base: The base directory where the cache should be located. + """ + # we use 'isdir' instead of 'exists', because we want to + # fail if there's a file with that name + if not os.path.isdir(base): # pragma: no cover + os.makedirs(base) + if (os.stat(base).st_mode & 0o77) != 0: + logger.warning('Directory \'%s\' is not private', base) + self.base = os.path.abspath(os.path.normpath(base)) + + def prefix_to_dir(self, prefix): + """ + Converts a resource prefix to a directory name in the cache. + """ + return path_to_cache_dir(prefix) + + def clear(self): + """ + Clear the cache. + """ + not_removed = [] + for fn in os.listdir(self.base): + fn = os.path.join(self.base, fn) + try: + if os.path.islink(fn) or os.path.isfile(fn): + os.remove(fn) + elif os.path.isdir(fn): + shutil.rmtree(fn) + except Exception: + not_removed.append(fn) + return not_removed + + +class EventMixin(object): + """ + A very simple publish/subscribe system. + """ + def __init__(self): + self._subscribers = {} + + def add(self, event, subscriber, append=True): + """ + Add a subscriber for an event. + + :param event: The name of an event. + :param subscriber: The subscriber to be added (and called when the + event is published). + :param append: Whether to append or prepend the subscriber to an + existing subscriber list for the event. 
+ """ + subs = self._subscribers + if event not in subs: + subs[event] = deque([subscriber]) + else: + sq = subs[event] + if append: + sq.append(subscriber) + else: + sq.appendleft(subscriber) + + def remove(self, event, subscriber): + """ + Remove a subscriber for an event. + + :param event: The name of an event. + :param subscriber: The subscriber to be removed. + """ + subs = self._subscribers + if event not in subs: + raise ValueError('No subscribers: %r' % event) + subs[event].remove(subscriber) + + def get_subscribers(self, event): + """ + Return an iterator for the subscribers for an event. + :param event: The event to return subscribers for. + """ + return iter(self._subscribers.get(event, ())) + + def publish(self, event, *args, **kwargs): + """ + Publish a event and return a list of values returned by its + subscribers. + + :param event: The event to publish. + :param args: The positional arguments to pass to the event's + subscribers. + :param kwargs: The keyword arguments to pass to the event's + subscribers. + """ + result = [] + for subscriber in self.get_subscribers(event): + try: + value = subscriber(event, *args, **kwargs) + except Exception: + logger.exception('Exception during event publication') + value = None + result.append(value) + logger.debug('publish %s: args = %s, kwargs = %s, result = %s', + event, args, kwargs, result) + return result + +# +# Simple sequencing +# +class Sequencer(object): + def __init__(self): + self._preds = {} + self._succs = {} + self._nodes = set() # nodes with no preds/succs + + def add_node(self, node): + self._nodes.add(node) + + def remove_node(self, node, edges=False): + if node in self._nodes: + self._nodes.remove(node) + if edges: + for p in set(self._preds.get(node, ())): + self.remove(p, node) + for s in set(self._succs.get(node, ())): + self.remove(node, s) + # Remove empties + for k, v in list(self._preds.items()): + if not v: + del self._preds[k] + for k, v in list(self._succs.items()): + if not v: + del self._succs[k] + + def add(self, pred, succ): + assert pred != succ + self._preds.setdefault(succ, set()).add(pred) + self._succs.setdefault(pred, set()).add(succ) + + def remove(self, pred, succ): + assert pred != succ + try: + preds = self._preds[succ] + succs = self._succs[pred] + except KeyError: # pragma: no cover + raise ValueError('%r not a successor of anything' % succ) + try: + preds.remove(pred) + succs.remove(succ) + except KeyError: # pragma: no cover + raise ValueError('%r not a successor of %r' % (succ, pred)) + + def is_step(self, step): + return (step in self._preds or step in self._succs or + step in self._nodes) + + def get_steps(self, final): + if not self.is_step(final): + raise ValueError('Unknown: %r' % final) + result = [] + todo = [] + seen = set() + todo.append(final) + while todo: + step = todo.pop(0) + if step in seen: + # if a step was already seen, + # move it to the end (so it will appear earlier + # when reversed on return) ... 
but not for the + # final step, as that would be confusing for + # users + if step != final: + result.remove(step) + result.append(step) + else: + seen.add(step) + result.append(step) + preds = self._preds.get(step, ()) + todo.extend(preds) + return reversed(result) + + @property + def strong_connections(self): + #http://en.wikipedia.org/wiki/Tarjan%27s_strongly_connected_components_algorithm + index_counter = [0] + stack = [] + lowlinks = {} + index = {} + result = [] + + graph = self._succs + + def strongconnect(node): + # set the depth index for this node to the smallest unused index + index[node] = index_counter[0] + lowlinks[node] = index_counter[0] + index_counter[0] += 1 + stack.append(node) + + # Consider successors + try: + successors = graph[node] + except Exception: + successors = [] + for successor in successors: + if successor not in lowlinks: + # Successor has not yet been visited + strongconnect(successor) + lowlinks[node] = min(lowlinks[node],lowlinks[successor]) + elif successor in stack: + # the successor is in the stack and hence in the current + # strongly connected component (SCC) + lowlinks[node] = min(lowlinks[node],index[successor]) + + # If `node` is a root node, pop the stack and generate an SCC + if lowlinks[node] == index[node]: + connected_component = [] + + while True: + successor = stack.pop() + connected_component.append(successor) + if successor == node: break + component = tuple(connected_component) + # storing the result + result.append(component) + + for node in graph: + if node not in lowlinks: + strongconnect(node) + + return result + + @property + def dot(self): + result = ['digraph G {'] + for succ in self._preds: + preds = self._preds[succ] + for pred in preds: + result.append(' %s -> %s;' % (pred, succ)) + for node in self._nodes: + result.append(' %s;' % node) + result.append('}') + return '\n'.join(result) + +# +# Unarchiving functionality for zip, tar, tgz, tbz, whl +# + +ARCHIVE_EXTENSIONS = ('.tar.gz', '.tar.bz2', '.tar', '.zip', + '.tgz', '.tbz', '.whl') + +def unarchive(archive_filename, dest_dir, format=None, check=True): + + def check_path(path): + if not isinstance(path, text_type): + path = path.decode('utf-8') + p = os.path.abspath(os.path.join(dest_dir, path)) + if not p.startswith(dest_dir) or p[plen] != os.sep: + raise ValueError('path outside destination: %r' % p) + + dest_dir = os.path.abspath(dest_dir) + plen = len(dest_dir) + archive = None + if format is None: + if archive_filename.endswith(('.zip', '.whl')): + format = 'zip' + elif archive_filename.endswith(('.tar.gz', '.tgz')): + format = 'tgz' + mode = 'r:gz' + elif archive_filename.endswith(('.tar.bz2', '.tbz')): + format = 'tbz' + mode = 'r:bz2' + elif archive_filename.endswith('.tar'): + format = 'tar' + mode = 'r' + else: # pragma: no cover + raise ValueError('Unknown format for %r' % archive_filename) + try: + if format == 'zip': + archive = ZipFile(archive_filename, 'r') + if check: + names = archive.namelist() + for name in names: + check_path(name) + else: + archive = tarfile.open(archive_filename, mode) + if check: + names = archive.getnames() + for name in names: + check_path(name) + if format != 'zip' and sys.version_info[0] < 3: + # See Python issue 17153. If the dest path contains Unicode, + # tarfile extraction fails on Python 2.x if a member path name + # contains non-ASCII characters - it leads to an implicit + # bytes -> unicode conversion using ASCII to decode. 
+ for tarinfo in archive.getmembers(): + if not isinstance(tarinfo.name, text_type): + tarinfo.name = tarinfo.name.decode('utf-8') + archive.extractall(dest_dir) + + finally: + if archive: + archive.close() + + +def zip_dir(directory): + """zip a directory tree into a BytesIO object""" + result = io.BytesIO() + dlen = len(directory) + with ZipFile(result, "w") as zf: + for root, dirs, files in os.walk(directory): + for name in files: + full = os.path.join(root, name) + rel = root[dlen:] + dest = os.path.join(rel, name) + zf.write(full, dest) + return result + +# +# Simple progress bar +# + +UNITS = ('', 'K', 'M', 'G','T','P') + + +class Progress(object): + unknown = 'UNKNOWN' + + def __init__(self, minval=0, maxval=100): + assert maxval is None or maxval >= minval + self.min = self.cur = minval + self.max = maxval + self.started = None + self.elapsed = 0 + self.done = False + + def update(self, curval): + assert self.min <= curval + assert self.max is None or curval <= self.max + self.cur = curval + now = time.time() + if self.started is None: + self.started = now + else: + self.elapsed = now - self.started + + def increment(self, incr): + assert incr >= 0 + self.update(self.cur + incr) + + def start(self): + self.update(self.min) + return self + + def stop(self): + if self.max is not None: + self.update(self.max) + self.done = True + + @property + def maximum(self): + return self.unknown if self.max is None else self.max + + @property + def percentage(self): + if self.done: + result = '100 %' + elif self.max is None: + result = ' ?? %' + else: + v = 100.0 * (self.cur - self.min) / (self.max - self.min) + result = '%3d %%' % v + return result + + def format_duration(self, duration): + if (duration <= 0) and self.max is None or self.cur == self.min: + result = '??:??:??' 
+ #elif duration < 1: + # result = '--:--:--' + else: + result = time.strftime('%H:%M:%S', time.gmtime(duration)) + return result + + @property + def ETA(self): + if self.done: + prefix = 'Done' + t = self.elapsed + #import pdb; pdb.set_trace() + else: + prefix = 'ETA ' + if self.max is None: + t = -1 + elif self.elapsed == 0 or (self.cur == self.min): + t = 0 + else: + #import pdb; pdb.set_trace() + t = float(self.max - self.min) + t /= self.cur - self.min + t = (t - 1) * self.elapsed + return '%s: %s' % (prefix, self.format_duration(t)) + + @property + def speed(self): + if self.elapsed == 0: + result = 0.0 + else: + result = (self.cur - self.min) / self.elapsed + for unit in UNITS: + if result < 1000: + break + result /= 1000.0 + return '%d %sB/s' % (result, unit) + +# +# Glob functionality +# + +RICH_GLOB = re.compile(r'\{([^}]*)\}') +_CHECK_RECURSIVE_GLOB = re.compile(r'[^/\\,{]\*\*|\*\*[^/\\,}]') +_CHECK_MISMATCH_SET = re.compile(r'^[^{]*\}|\{[^}]*$') + + +def iglob(path_glob): + """Extended globbing function that supports ** and {opt1,opt2,opt3}.""" + if _CHECK_RECURSIVE_GLOB.search(path_glob): + msg = """invalid glob %r: recursive glob "**" must be used alone""" + raise ValueError(msg % path_glob) + if _CHECK_MISMATCH_SET.search(path_glob): + msg = """invalid glob %r: mismatching set marker '{' or '}'""" + raise ValueError(msg % path_glob) + return _iglob(path_glob) + + +def _iglob(path_glob): + rich_path_glob = RICH_GLOB.split(path_glob, 1) + if len(rich_path_glob) > 1: + assert len(rich_path_glob) == 3, rich_path_glob + prefix, set, suffix = rich_path_glob + for item in set.split(','): + for path in _iglob(''.join((prefix, item, suffix))): + yield path + else: + if '**' not in path_glob: + for item in std_iglob(path_glob): + yield item + else: + prefix, radical = path_glob.split('**', 1) + if prefix == '': + prefix = '.' 
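+            # e.g. (illustrative): '**/*.py' splits into prefix ''
+            # (treated as '.') and radical '/*.py'; the radical is
+            # stripped of leading separators and then matched under
+            # each directory found by os.walk.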
+ if radical == '': + radical = '*' + else: + # we support both + radical = radical.lstrip('/') + radical = radical.lstrip('\\') + for path, dir, files in os.walk(prefix): + path = os.path.normpath(path) + for fn in _iglob(os.path.join(path, radical)): + yield fn + +if ssl: + from .compat import (HTTPSHandler as BaseHTTPSHandler, match_hostname, + CertificateError) + + +# +# HTTPSConnection which verifies certificates/matches domains +# + + class HTTPSConnection(httplib.HTTPSConnection): + ca_certs = None # set this to the path to the certs file (.pem) + check_domain = True # only used if ca_certs is not None + + # noinspection PyPropertyAccess + def connect(self): + sock = socket.create_connection((self.host, self.port), self.timeout) + if getattr(self, '_tunnel_host', False): + self.sock = sock + self._tunnel() + + if not hasattr(ssl, 'SSLContext'): + # For 2.x + if self.ca_certs: + cert_reqs = ssl.CERT_REQUIRED + else: + cert_reqs = ssl.CERT_NONE + self.sock = ssl.wrap_socket(sock, self.key_file, self.cert_file, + cert_reqs=cert_reqs, + ssl_version=ssl.PROTOCOL_SSLv23, + ca_certs=self.ca_certs) + else: # pragma: no cover + context = ssl.SSLContext(ssl.PROTOCOL_SSLv23) + if hasattr(ssl, 'OP_NO_SSLv2'): + context.options |= ssl.OP_NO_SSLv2 + if self.cert_file: + context.load_cert_chain(self.cert_file, self.key_file) + kwargs = {} + if self.ca_certs: + context.verify_mode = ssl.CERT_REQUIRED + context.load_verify_locations(cafile=self.ca_certs) + if getattr(ssl, 'HAS_SNI', False): + kwargs['server_hostname'] = self.host + self.sock = context.wrap_socket(sock, **kwargs) + if self.ca_certs and self.check_domain: + try: + match_hostname(self.sock.getpeercert(), self.host) + logger.debug('Host verified: %s', self.host) + except CertificateError: # pragma: no cover + self.sock.shutdown(socket.SHUT_RDWR) + self.sock.close() + raise + + class HTTPSHandler(BaseHTTPSHandler): + def __init__(self, ca_certs, check_domain=True): + BaseHTTPSHandler.__init__(self) + self.ca_certs = ca_certs + self.check_domain = check_domain + + def _conn_maker(self, *args, **kwargs): + """ + This is called to create a connection instance. Normally you'd + pass a connection class to do_open, but it doesn't actually check for + a class, and just expects a callable. As long as we behave just as a + constructor would have, we should be OK. If it ever changes so that + we *must* pass a class, we'll create an UnsafeHTTPSConnection class + which just sets check_domain to False in the class definition, and + choose which one to pass to do_open. + """ + result = HTTPSConnection(*args, **kwargs) + if self.ca_certs: + result.ca_certs = self.ca_certs + result.check_domain = self.check_domain + return result + + def https_open(self, req): + try: + return self.do_open(self._conn_maker, req) + except URLError as e: + if 'certificate verify failed' in str(e.reason): + raise CertificateError('Unable to verify server certificate ' + 'for %s' % req.host) + else: + raise + + # + # To prevent against mixing HTTP traffic with HTTPS (examples: A Man-In-The- + # Middle proxy using HTTP listens on port 443, or an index mistakenly serves + # HTML containing a http://xyz link when it should be https://xyz), + # you can use the following handler class, which does not allow HTTP traffic. + # + # It works by inheriting from HTTPHandler - so build_opener won't add a + # handler for HTTP itself. 
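+    #
+    # A minimal usage sketch (illustrative; the URL and certificate path
+    # below are assumptions, not values used by this module):
+    #
+    #   handler = HTTPSOnlyHandler(ca_certs='/path/to/ca.pem')
+    #   opener = build_opener(handler)
+    #   opener.open('https://example.com/')  # normal, verified HTTPS
+    #   opener.open('http://example.com/')   # raises URLError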
+ # + class HTTPSOnlyHandler(HTTPSHandler, HTTPHandler): + def http_open(self, req): + raise URLError('Unexpected HTTP request on what should be a secure ' + 'connection: %s' % req) + +# +# XML-RPC with timeouts +# + +_ver_info = sys.version_info[:2] + +if _ver_info == (2, 6): + class HTTP(httplib.HTTP): + def __init__(self, host='', port=None, **kwargs): + if port == 0: # 0 means use port 0, not the default port + port = None + self._setup(self._connection_class(host, port, **kwargs)) + + + if ssl: + class HTTPS(httplib.HTTPS): + def __init__(self, host='', port=None, **kwargs): + if port == 0: # 0 means use port 0, not the default port + port = None + self._setup(self._connection_class(host, port, **kwargs)) + + +class Transport(xmlrpclib.Transport): + def __init__(self, timeout, use_datetime=0): + self.timeout = timeout + xmlrpclib.Transport.__init__(self, use_datetime) + + def make_connection(self, host): + h, eh, x509 = self.get_host_info(host) + if _ver_info == (2, 6): + result = HTTP(h, timeout=self.timeout) + else: + if not self._connection or host != self._connection[0]: + self._extra_headers = eh + self._connection = host, httplib.HTTPConnection(h) + result = self._connection[1] + return result + +if ssl: + class SafeTransport(xmlrpclib.SafeTransport): + def __init__(self, timeout, use_datetime=0): + self.timeout = timeout + xmlrpclib.SafeTransport.__init__(self, use_datetime) + + def make_connection(self, host): + h, eh, kwargs = self.get_host_info(host) + if not kwargs: + kwargs = {} + kwargs['timeout'] = self.timeout + if _ver_info == (2, 6): + result = HTTPS(host, None, **kwargs) + else: + if not self._connection or host != self._connection[0]: + self._extra_headers = eh + self._connection = host, httplib.HTTPSConnection(h, None, + **kwargs) + result = self._connection[1] + return result + + +class ServerProxy(xmlrpclib.ServerProxy): + def __init__(self, uri, **kwargs): + self.timeout = timeout = kwargs.pop('timeout', None) + # The above classes only come into play if a timeout + # is specified + if timeout is not None: + scheme, _ = splittype(uri) + use_datetime = kwargs.get('use_datetime', 0) + if scheme == 'https': + tcls = SafeTransport + else: + tcls = Transport + kwargs['transport'] = t = tcls(timeout, use_datetime=use_datetime) + self.transport = t + xmlrpclib.ServerProxy.__init__(self, uri, **kwargs) + +# +# CSV functionality. This is provided because on 2.x, the csv module can't +# handle Unicode. However, we need to deal with Unicode in e.g. RECORD files. +# + +def _csv_open(fn, mode, **kwargs): + if sys.version_info[0] < 3: + mode += 'b' + else: + kwargs['newline'] = '' + # Python 3 determines encoding from locale. 
Force 'utf-8' + # file encoding to match other forced utf-8 encoding + kwargs['encoding'] = 'utf-8' + return open(fn, mode, **kwargs) + + +class CSVBase(object): + defaults = { + 'delimiter': str(','), # The strs are used because we need native + 'quotechar': str('"'), # str in the csv API (2.x won't take + 'lineterminator': str('\n') # Unicode) + } + + def __enter__(self): + return self + + def __exit__(self, *exc_info): + self.stream.close() + + +class CSVReader(CSVBase): + def __init__(self, **kwargs): + if 'stream' in kwargs: + stream = kwargs['stream'] + if sys.version_info[0] >= 3: + # needs to be a text stream + stream = codecs.getreader('utf-8')(stream) + self.stream = stream + else: + self.stream = _csv_open(kwargs['path'], 'r') + self.reader = csv.reader(self.stream, **self.defaults) + + def __iter__(self): + return self + + def next(self): + result = next(self.reader) + if sys.version_info[0] < 3: + for i, item in enumerate(result): + if not isinstance(item, text_type): + result[i] = item.decode('utf-8') + return result + + __next__ = next + +class CSVWriter(CSVBase): + def __init__(self, fn, **kwargs): + self.stream = _csv_open(fn, 'w') + self.writer = csv.writer(self.stream, **self.defaults) + + def writerow(self, row): + if sys.version_info[0] < 3: + r = [] + for item in row: + if isinstance(item, text_type): + item = item.encode('utf-8') + r.append(item) + row = r + self.writer.writerow(row) + +# +# Configurator functionality +# + +class Configurator(BaseConfigurator): + + value_converters = dict(BaseConfigurator.value_converters) + value_converters['inc'] = 'inc_convert' + + def __init__(self, config, base=None): + super(Configurator, self).__init__(config) + self.base = base or os.getcwd() + + def configure_custom(self, config): + def convert(o): + if isinstance(o, (list, tuple)): + result = type(o)([convert(i) for i in o]) + elif isinstance(o, dict): + if '()' in o: + result = self.configure_custom(o) + else: + result = {} + for k in o: + result[k] = convert(o[k]) + else: + result = self.convert(o) + return result + + c = config.pop('()') + if not callable(c): + c = self.resolve(c) + props = config.pop('.', None) + # Check for valid identifiers + args = config.pop('[]', ()) + if args: + args = tuple([convert(o) for o in args]) + items = [(k, convert(config[k])) for k in config if valid_ident(k)] + kwargs = dict(items) + result = c(*args, **kwargs) + if props: + for n, v in props.items(): + setattr(result, n, convert(v)) + return result + + def __getitem__(self, key): + result = self.config[key] + if isinstance(result, dict) and '()' in result: + self.config[key] = result = self.configure_custom(result) + return result + + def inc_convert(self, value): + """Default converter for the inc:// protocol.""" + if not os.path.isabs(value): + value = os.path.join(self.base, value) + with codecs.open(value, 'r', encoding='utf-8') as f: + result = json.load(f) + return result + + +class SubprocessMixin(object): + """ + Mixin for running subprocesses and capturing their output + """ + def __init__(self, verbose=False, progress=None): + self.verbose = verbose + self.progress = progress + + def reader(self, stream, context): + """ + Read lines from a subprocess' output stream and either pass to a progress + callable (if specified) or write progress information to sys.stderr. 
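+
+        If supplied, the progress callable is invoked once per line read,
+        and once with 'done.' by run_command. An illustrative signature
+        (an assumption, not prescribed by this class):
+
+            def progress(line, context):
+                # line is a bytes line from the stream; context is
+                # 'stdout', 'stderr' or 'main'
+                ...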
+ """ + progress = self.progress + verbose = self.verbose + while True: + s = stream.readline() + if not s: + break + if progress is not None: + progress(s, context) + else: + if not verbose: + sys.stderr.write('.') + else: + sys.stderr.write(s.decode('utf-8')) + sys.stderr.flush() + stream.close() + + def run_command(self, cmd, **kwargs): + p = subprocess.Popen(cmd, stdout=subprocess.PIPE, + stderr=subprocess.PIPE, **kwargs) + t1 = threading.Thread(target=self.reader, args=(p.stdout, 'stdout')) + t1.start() + t2 = threading.Thread(target=self.reader, args=(p.stderr, 'stderr')) + t2.start() + p.wait() + t1.join() + t2.join() + if self.progress is not None: + self.progress('done.', 'main') + elif self.verbose: + sys.stderr.write('done.\n') + return p + + +def normalize_name(name): + """Normalize a python package name a la PEP 503""" + # https://www.python.org/dev/peps/pep-0503/#normalized-names + return re.sub('[-_.]+', '-', name).lower() diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distlib/version.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distlib/version.py new file mode 100644 index 0000000000000000000000000000000000000000..3eebe18ee849dcafe39246ce2f4cabbe8e5a2b33 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distlib/version.py @@ -0,0 +1,736 @@ +# -*- coding: utf-8 -*- +# +# Copyright (C) 2012-2017 The Python Software Foundation. +# See LICENSE.txt and CONTRIBUTORS.txt. +# +""" +Implementation of a flexible versioning scheme providing support for PEP-440, +setuptools-compatible and semantic versioning. +""" + +import logging +import re + +from .compat import string_types +from .util import parse_requirement + +__all__ = ['NormalizedVersion', 'NormalizedMatcher', + 'LegacyVersion', 'LegacyMatcher', + 'SemanticVersion', 'SemanticMatcher', + 'UnsupportedVersionError', 'get_scheme'] + +logger = logging.getLogger(__name__) + + +class UnsupportedVersionError(ValueError): + """This is an unsupported version.""" + pass + + +class Version(object): + def __init__(self, s): + self._string = s = s.strip() + self._parts = parts = self.parse(s) + assert isinstance(parts, tuple) + assert len(parts) > 0 + + def parse(self, s): + raise NotImplementedError('please implement in a subclass') + + def _check_compatible(self, other): + if type(self) != type(other): + raise TypeError('cannot compare %r and %r' % (self, other)) + + def __eq__(self, other): + self._check_compatible(other) + return self._parts == other._parts + + def __ne__(self, other): + return not self.__eq__(other) + + def __lt__(self, other): + self._check_compatible(other) + return self._parts < other._parts + + def __gt__(self, other): + return not (self.__lt__(other) or self.__eq__(other)) + + def __le__(self, other): + return self.__lt__(other) or self.__eq__(other) + + def __ge__(self, other): + return self.__gt__(other) or self.__eq__(other) + + # See http://docs.python.org/reference/datamodel#object.__hash__ + def __hash__(self): + return hash(self._parts) + + def __repr__(self): + return "%s('%s')" % (self.__class__.__name__, self._string) + + def __str__(self): + return self._string + + @property + def is_prerelease(self): + raise NotImplementedError('Please implement in subclasses.') + + +class Matcher(object): + version_class = None + + # value is either a callable or the name of a method + _operators = { + '<': lambda v, c, p: v < c, + '>': lambda v, c, p: v > c, + '<=': lambda v, c, 
p: v == c or v < c, + '>=': lambda v, c, p: v == c or v > c, + '==': lambda v, c, p: v == c, + '===': lambda v, c, p: v == c, + # by default, compatible => >=. + '~=': lambda v, c, p: v == c or v > c, + '!=': lambda v, c, p: v != c, + } + + # this is a method only to support alternative implementations + # via overriding + def parse_requirement(self, s): + return parse_requirement(s) + + def __init__(self, s): + if self.version_class is None: + raise ValueError('Please specify a version class') + self._string = s = s.strip() + r = self.parse_requirement(s) + if not r: + raise ValueError('Not valid: %r' % s) + self.name = r.name + self.key = self.name.lower() # for case-insensitive comparisons + clist = [] + if r.constraints: + # import pdb; pdb.set_trace() + for op, s in r.constraints: + if s.endswith('.*'): + if op not in ('==', '!='): + raise ValueError('\'.*\' not allowed for ' + '%r constraints' % op) + # Could be a partial version (e.g. for '2.*') which + # won't parse as a version, so keep it as a string + vn, prefix = s[:-2], True + # Just to check that vn is a valid version + self.version_class(vn) + else: + # Should parse as a version, so we can create an + # instance for the comparison + vn, prefix = self.version_class(s), False + clist.append((op, vn, prefix)) + self._parts = tuple(clist) + + def match(self, version): + """ + Check if the provided version matches the constraints. + + :param version: The version to match against this instance. + :type version: String or :class:`Version` instance. + """ + if isinstance(version, string_types): + version = self.version_class(version) + for operator, constraint, prefix in self._parts: + f = self._operators.get(operator) + if isinstance(f, string_types): + f = getattr(self, f) + if not f: + msg = ('%r not implemented ' + 'for %s' % (operator, self.__class__.__name__)) + raise NotImplementedError(msg) + if not f(version, constraint, prefix): + return False + return True + + @property + def exact_version(self): + result = None + if len(self._parts) == 1 and self._parts[0][0] in ('==', '==='): + result = self._parts[0][1] + return result + + def _check_compatible(self, other): + if type(self) != type(other) or self.name != other.name: + raise TypeError('cannot compare %s and %s' % (self, other)) + + def __eq__(self, other): + self._check_compatible(other) + return self.key == other.key and self._parts == other._parts + + def __ne__(self, other): + return not self.__eq__(other) + + # See http://docs.python.org/reference/datamodel#object.__hash__ + def __hash__(self): + return hash(self.key) + hash(self._parts) + + def __repr__(self): + return "%s(%r)" % (self.__class__.__name__, self._string) + + def __str__(self): + return self._string + + +PEP440_VERSION_RE = re.compile(r'^v?(\d+!)?(\d+(\.\d+)*)((a|b|c|rc)(\d+))?' + r'(\.(post)(\d+))?(\.(dev)(\d+))?' 
+                               r'(\+([a-zA-Z\d]+(\.[a-zA-Z\d]+)?))?$')
+
+
+def _pep_440_key(s):
+    s = s.strip()
+    m = PEP440_VERSION_RE.match(s)
+    if not m:
+        raise UnsupportedVersionError('Not a valid version: %s' % s)
+    groups = m.groups()
+    nums = tuple(int(v) for v in groups[1].split('.'))
+    while len(nums) > 1 and nums[-1] == 0:
+        nums = nums[:-1]
+
+    if not groups[0]:
+        epoch = 0
+    else:
+        # groups[0] includes the trailing '!' (e.g. '1!'), so drop it
+        # before converting to an integer
+        epoch = int(groups[0][:-1])
+    pre = groups[4:6]
+    post = groups[7:9]
+    dev = groups[10:12]
+    local = groups[13]
+    if pre == (None, None):
+        pre = ()
+    else:
+        pre = pre[0], int(pre[1])
+    if post == (None, None):
+        post = ()
+    else:
+        post = post[0], int(post[1])
+    if dev == (None, None):
+        dev = ()
+    else:
+        dev = dev[0], int(dev[1])
+    if local is None:
+        local = ()
+    else:
+        parts = []
+        for part in local.split('.'):
+            # to ensure that numeric compares as > lexicographic, avoid
+            # comparing them directly, but encode a tuple which ensures
+            # correct sorting
+            if part.isdigit():
+                part = (1, int(part))
+            else:
+                part = (0, part)
+            parts.append(part)
+        local = tuple(parts)
+    if not pre:
+        # either before pre-release, or final release and after
+        if not post and dev:
+            # before pre-release
+            pre = ('a', -1)     # to sort before a0
+        else:
+            pre = ('z',)        # to sort after all pre-releases
+    # now look at the state of post and dev.
+    if not post:
+        post = ('_',)   # sort before 'a'
+    if not dev:
+        dev = ('final',)
+
+    #print('%s -> %s' % (s, m.groups()))
+    return epoch, nums, pre, post, dev, local
+
+
+_normalized_key = _pep_440_key
+
+
+class NormalizedVersion(Version):
+    """A rational version.
+
+    Good:
+        1.2         # equivalent to "1.2.0"
+        1.2.0
+        1.2a1
+        1.2.3a2
+        1.2.3b1
+        1.2.3c1
+        1.2.3.4
+        TODO: fill this out
+
+    Bad:
+        1           # minimum two numbers
+        1.2a        # release level must have a release serial
+        1.2.3b
+    """
+    def parse(self, s):
+        result = _normalized_key(s)
+        # _normalized_key loses trailing zeroes in the release
+        # clause, since that's needed to ensure that X.Y == X.Y.0 == X.Y.0.0
+        # However, PEP 440 prefix matching needs it: for example,
+        # (~= 1.4.5.0) matches differently to (~= 1.4.5.0.0).
+        m = PEP440_VERSION_RE.match(s)      # must succeed
+        groups = m.groups()
+        self._release_clause = tuple(int(v) for v in groups[1].split('.'))
+        return result
+
+    PREREL_TAGS = set(['a', 'b', 'c', 'rc', 'dev'])
+
+    @property
+    def is_prerelease(self):
+        return any(t[0] in self.PREREL_TAGS for t in self._parts if t)
+
+
+def _match_prefix(x, y):
+    x = str(x)
+    y = str(y)
+    if x == y:
+        return True
+    if not x.startswith(y):
+        return False
+    n = len(y)
+    return x[n] == '.'
+
+
+class NormalizedMatcher(Matcher):
+    version_class = NormalizedVersion
+
+    # value is either a callable or the name of a method
+    _operators = {
+        '~=': '_match_compatible',
+        '<': '_match_lt',
+        '>': '_match_gt',
+        '<=': '_match_le',
+        '>=': '_match_ge',
+        '==': '_match_eq',
+        '===': '_match_arbitrary',
+        '!=': '_match_ne',
+    }
+
+    def _adjust_local(self, version, constraint, prefix):
+        if prefix:
+            strip_local = '+' not in constraint and version._parts[-1]
+        else:
+            # both constraint and version are
+            # NormalizedVersion instances.
+            # If constraint does not have a local component,
+            # ensure the version doesn't, either.
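+            # e.g. (illustrative): a constraint '== 1.0' has no local
+            # component, so a candidate '1.0+local.1' is compared as
+            # plain '1.0' (PEP 440 local-version handling).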
+ strip_local = not constraint._parts[-1] and version._parts[-1] + if strip_local: + s = version._string.split('+', 1)[0] + version = self.version_class(s) + return version, constraint + + def _match_lt(self, version, constraint, prefix): + version, constraint = self._adjust_local(version, constraint, prefix) + if version >= constraint: + return False + release_clause = constraint._release_clause + pfx = '.'.join([str(i) for i in release_clause]) + return not _match_prefix(version, pfx) + + def _match_gt(self, version, constraint, prefix): + version, constraint = self._adjust_local(version, constraint, prefix) + if version <= constraint: + return False + release_clause = constraint._release_clause + pfx = '.'.join([str(i) for i in release_clause]) + return not _match_prefix(version, pfx) + + def _match_le(self, version, constraint, prefix): + version, constraint = self._adjust_local(version, constraint, prefix) + return version <= constraint + + def _match_ge(self, version, constraint, prefix): + version, constraint = self._adjust_local(version, constraint, prefix) + return version >= constraint + + def _match_eq(self, version, constraint, prefix): + version, constraint = self._adjust_local(version, constraint, prefix) + if not prefix: + result = (version == constraint) + else: + result = _match_prefix(version, constraint) + return result + + def _match_arbitrary(self, version, constraint, prefix): + return str(version) == str(constraint) + + def _match_ne(self, version, constraint, prefix): + version, constraint = self._adjust_local(version, constraint, prefix) + if not prefix: + result = (version != constraint) + else: + result = not _match_prefix(version, constraint) + return result + + def _match_compatible(self, version, constraint, prefix): + version, constraint = self._adjust_local(version, constraint, prefix) + if version == constraint: + return True + if version < constraint: + return False +# if not prefix: +# return True + release_clause = constraint._release_clause + if len(release_clause) > 1: + release_clause = release_clause[:-1] + pfx = '.'.join([str(i) for i in release_clause]) + return _match_prefix(version, pfx) + +_REPLACEMENTS = ( + (re.compile('[.+-]$'), ''), # remove trailing puncts + (re.compile(r'^[.](\d)'), r'0.\1'), # .N -> 0.N at start + (re.compile('^[.-]'), ''), # remove leading puncts + (re.compile(r'^\((.*)\)$'), r'\1'), # remove parentheses + (re.compile(r'^v(ersion)?\s*(\d+)'), r'\2'), # remove leading v(ersion) + (re.compile(r'^r(ev)?\s*(\d+)'), r'\2'), # remove leading v(ersion) + (re.compile('[.]{2,}'), '.'), # multiple runs of '.' + (re.compile(r'\b(alfa|apha)\b'), 'alpha'), # misspelt alpha + (re.compile(r'\b(pre-alpha|prealpha)\b'), + 'pre.alpha'), # standardise + (re.compile(r'\(beta\)$'), 'beta'), # remove parentheses +) + +_SUFFIX_REPLACEMENTS = ( + (re.compile('^[:~._+-]+'), ''), # remove leading puncts + (re.compile('[,*")([\\]]'), ''), # remove unwanted chars + (re.compile('[~:+_ -]'), '.'), # replace illegal chars + (re.compile('[.]{2,}'), '.'), # multiple runs of '.' + (re.compile(r'\.$'), ''), # trailing '.' +) + +_NUMERIC_PREFIX = re.compile(r'(\d+(\.\d+)*)') + + +def _suggest_semantic_version(s): + """ + Try to suggest a semantic form for a version for which + _suggest_normalized_version couldn't come up with anything. + """ + result = s.strip().lower() + for pat, repl in _REPLACEMENTS: + result = pat.sub(repl, result) + if not result: + result = '0.0.0' + + # Now look for numeric prefix, and separate it out from + # the rest. 
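+    # e.g. (illustrative): '1.0' has numeric prefix [1, 0], padded to
+    # three components with an empty suffix, so the suggestion is '1.0.0'.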
+ #import pdb; pdb.set_trace() + m = _NUMERIC_PREFIX.match(result) + if not m: + prefix = '0.0.0' + suffix = result + else: + prefix = m.groups()[0].split('.') + prefix = [int(i) for i in prefix] + while len(prefix) < 3: + prefix.append(0) + if len(prefix) == 3: + suffix = result[m.end():] + else: + suffix = '.'.join([str(i) for i in prefix[3:]]) + result[m.end():] + prefix = prefix[:3] + prefix = '.'.join([str(i) for i in prefix]) + suffix = suffix.strip() + if suffix: + #import pdb; pdb.set_trace() + # massage the suffix. + for pat, repl in _SUFFIX_REPLACEMENTS: + suffix = pat.sub(repl, suffix) + + if not suffix: + result = prefix + else: + sep = '-' if 'dev' in suffix else '+' + result = prefix + sep + suffix + if not is_semver(result): + result = None + return result + + +def _suggest_normalized_version(s): + """Suggest a normalized version close to the given version string. + + If you have a version string that isn't rational (i.e. NormalizedVersion + doesn't like it) then you might be able to get an equivalent (or close) + rational version from this function. + + This does a number of simple normalizations to the given string, based + on observation of versions currently in use on PyPI. Given a dump of + those version during PyCon 2009, 4287 of them: + - 2312 (53.93%) match NormalizedVersion without change + with the automatic suggestion + - 3474 (81.04%) match when using this suggestion method + + @param s {str} An irrational version string. + @returns A rational version string, or None, if couldn't determine one. + """ + try: + _normalized_key(s) + return s # already rational + except UnsupportedVersionError: + pass + + rs = s.lower() + + # part of this could use maketrans + for orig, repl in (('-alpha', 'a'), ('-beta', 'b'), ('alpha', 'a'), + ('beta', 'b'), ('rc', 'c'), ('-final', ''), + ('-pre', 'c'), + ('-release', ''), ('.release', ''), ('-stable', ''), + ('+', '.'), ('_', '.'), (' ', ''), ('.final', ''), + ('final', '')): + rs = rs.replace(orig, repl) + + # if something ends with dev or pre, we add a 0 + rs = re.sub(r"pre$", r"pre0", rs) + rs = re.sub(r"dev$", r"dev0", rs) + + # if we have something like "b-2" or "a.2" at the end of the + # version, that is probably beta, alpha, etc + # let's remove the dash or dot + rs = re.sub(r"([abc]|rc)[\-\.](\d+)$", r"\1\2", rs) + + # 1.0-dev-r371 -> 1.0.dev371 + # 0.1-dev-r79 -> 0.1.dev79 + rs = re.sub(r"[\-\.](dev)[\-\.]?r?(\d+)$", r".\1\2", rs) + + # Clean: 2.0.a.3, 2.0.b1, 0.9.0~c1 + rs = re.sub(r"[.~]?([abc])\.?", r"\1", rs) + + # Clean: v0.3, v1.0 + if rs.startswith('v'): + rs = rs[1:] + + # Clean leading '0's on numbers. + #TODO: unintended side-effect on, e.g., "2003.05.09" + # PyPI stats: 77 (~2%) better + rs = re.sub(r"\b0+(\d+)(?!\d)", r"\1", rs) + + # Clean a/b/c with no version. E.g. "1.0a" -> "1.0a0". Setuptools infers + # zero. 
+ # PyPI stats: 245 (7.56%) better + rs = re.sub(r"(\d+[abc])$", r"\g<1>0", rs) + + # the 'dev-rNNN' tag is a dev tag + rs = re.sub(r"\.?(dev-r|dev\.r)\.?(\d+)$", r".dev\2", rs) + + # clean the - when used as a pre delimiter + rs = re.sub(r"-(a|b|c)(\d+)$", r"\1\2", rs) + + # a terminal "dev" or "devel" can be changed into ".dev0" + rs = re.sub(r"[\.\-](dev|devel)$", r".dev0", rs) + + # a terminal "dev" can be changed into ".dev0" + rs = re.sub(r"(?![\.\-])dev$", r".dev0", rs) + + # a terminal "final" or "stable" can be removed + rs = re.sub(r"(final|stable)$", "", rs) + + # The 'r' and the '-' tags are post release tags + # 0.4a1.r10 -> 0.4a1.post10 + # 0.9.33-17222 -> 0.9.33.post17222 + # 0.9.33-r17222 -> 0.9.33.post17222 + rs = re.sub(r"\.?(r|-|-r)\.?(\d+)$", r".post\2", rs) + + # Clean 'r' instead of 'dev' usage: + # 0.9.33+r17222 -> 0.9.33.dev17222 + # 1.0dev123 -> 1.0.dev123 + # 1.0.git123 -> 1.0.dev123 + # 1.0.bzr123 -> 1.0.dev123 + # 0.1a0dev.123 -> 0.1a0.dev123 + # PyPI stats: ~150 (~4%) better + rs = re.sub(r"\.?(dev|git|bzr)\.?(\d+)$", r".dev\2", rs) + + # Clean '.pre' (normalized from '-pre' above) instead of 'c' usage: + # 0.2.pre1 -> 0.2c1 + # 0.2-c1 -> 0.2c1 + # 1.0preview123 -> 1.0c123 + # PyPI stats: ~21 (0.62%) better + rs = re.sub(r"\.?(pre|preview|-c)(\d+)$", r"c\g<2>", rs) + + # Tcl/Tk uses "px" for their post release markers + rs = re.sub(r"p(\d+)$", r".post\1", rs) + + try: + _normalized_key(rs) + except UnsupportedVersionError: + rs = None + return rs + +# +# Legacy version processing (distribute-compatible) +# + +_VERSION_PART = re.compile(r'([a-z]+|\d+|[\.-])', re.I) +_VERSION_REPLACE = { + 'pre': 'c', + 'preview': 'c', + '-': 'final-', + 'rc': 'c', + 'dev': '@', + '': None, + '.': None, +} + + +def _legacy_key(s): + def get_parts(s): + result = [] + for p in _VERSION_PART.split(s.lower()): + p = _VERSION_REPLACE.get(p, p) + if p: + if '0' <= p[:1] <= '9': + p = p.zfill(8) + else: + p = '*' + p + result.append(p) + result.append('*final') + return result + + result = [] + for p in get_parts(s): + if p.startswith('*'): + if p < '*final': + while result and result[-1] == '*final-': + result.pop() + while result and result[-1] == '00000000': + result.pop() + result.append(p) + return tuple(result) + + +class LegacyVersion(Version): + def parse(self, s): + return _legacy_key(s) + + @property + def is_prerelease(self): + result = False + for x in self._parts: + if (isinstance(x, string_types) and x.startswith('*') and + x < '*final'): + result = True + break + return result + + +class LegacyMatcher(Matcher): + version_class = LegacyVersion + + _operators = dict(Matcher._operators) + _operators['~='] = '_match_compatible' + + numeric_re = re.compile(r'^(\d+(\.\d+)*)') + + def _match_compatible(self, version, constraint, prefix): + if version < constraint: + return False + m = self.numeric_re.match(str(constraint)) + if not m: + logger.warning('Cannot compute compatible match for version %s ' + ' and constraint %s', version, constraint) + return True + s = m.groups()[0] + if '.' in s: + s = s.rsplit('.', 1)[0] + return _match_prefix(version, s) + +# +# Semantic versioning +# + +_SEMVER_RE = re.compile(r'^(\d+)\.(\d+)\.(\d+)' + r'(-[a-z0-9]+(\.[a-z0-9-]+)*)?' 
+ r'(\+[a-z0-9]+(\.[a-z0-9-]+)*)?$', re.I) + + +def is_semver(s): + return _SEMVER_RE.match(s) + + +def _semantic_key(s): + def make_tuple(s, absent): + if s is None: + result = (absent,) + else: + parts = s[1:].split('.') + # We can't compare ints and strings on Python 3, so fudge it + # by zero-filling numeric values so simulate a numeric comparison + result = tuple([p.zfill(8) if p.isdigit() else p for p in parts]) + return result + + m = is_semver(s) + if not m: + raise UnsupportedVersionError(s) + groups = m.groups() + major, minor, patch = [int(i) for i in groups[:3]] + # choose the '|' and '*' so that versions sort correctly + pre, build = make_tuple(groups[3], '|'), make_tuple(groups[5], '*') + return (major, minor, patch), pre, build + + +class SemanticVersion(Version): + def parse(self, s): + return _semantic_key(s) + + @property + def is_prerelease(self): + return self._parts[1][0] != '|' + + +class SemanticMatcher(Matcher): + version_class = SemanticVersion + + +class VersionScheme(object): + def __init__(self, key, matcher, suggester=None): + self.key = key + self.matcher = matcher + self.suggester = suggester + + def is_valid_version(self, s): + try: + self.matcher.version_class(s) + result = True + except UnsupportedVersionError: + result = False + return result + + def is_valid_matcher(self, s): + try: + self.matcher(s) + result = True + except UnsupportedVersionError: + result = False + return result + + def is_valid_constraint_list(self, s): + """ + Used for processing some metadata fields + """ + return self.is_valid_matcher('dummy_name (%s)' % s) + + def suggest(self, s): + if self.suggester is None: + result = None + else: + result = self.suggester(s) + return result + +_SCHEMES = { + 'normalized': VersionScheme(_normalized_key, NormalizedMatcher, + _suggest_normalized_version), + 'legacy': VersionScheme(_legacy_key, LegacyMatcher, lambda self, s: s), + 'semantic': VersionScheme(_semantic_key, SemanticMatcher, + _suggest_semantic_version), +} + +_SCHEMES['default'] = _SCHEMES['normalized'] + + +def get_scheme(name): + if name not in _SCHEMES: + raise ValueError('unknown scheme name: %r' % name) + return _SCHEMES[name] diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distlib/wheel.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distlib/wheel.py new file mode 100644 index 0000000000000000000000000000000000000000..bd179383ac98bb0b7ff3571666837e1f4a789c72 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distlib/wheel.py @@ -0,0 +1,1004 @@ +# -*- coding: utf-8 -*- +# +# Copyright (C) 2013-2017 Vinay Sajip. +# Licensed to the Python Software Foundation under a contributor agreement. +# See LICENSE.txt and CONTRIBUTORS.txt. +# +from __future__ import unicode_literals + +import base64 +import codecs +import datetime +import distutils.util +from email import message_from_file +import hashlib +import imp +import json +import logging +import os +import posixpath +import re +import shutil +import sys +import tempfile +import zipfile + +from . 
import __version__, DistlibException
+from .compat import sysconfig, ZipFile, fsdecode, text_type, filter
+from .database import InstalledDistribution
+from .metadata import Metadata, METADATA_FILENAME, WHEEL_METADATA_FILENAME
+from .util import (FileOperator, convert_path, CSVReader, CSVWriter, Cache,
+                   cached_property, get_cache_base, read_exports, tempdir)
+from .version import NormalizedVersion, UnsupportedVersionError
+
+logger = logging.getLogger(__name__)
+
+cache = None    # created when needed
+
+if hasattr(sys, 'pypy_version_info'):  # pragma: no cover
+    IMP_PREFIX = 'pp'
+elif sys.platform.startswith('java'):  # pragma: no cover
+    IMP_PREFIX = 'jy'
+elif sys.platform == 'cli':  # pragma: no cover
+    IMP_PREFIX = 'ip'
+else:
+    IMP_PREFIX = 'cp'
+
+VER_SUFFIX = sysconfig.get_config_var('py_version_nodot')
+if not VER_SUFFIX:   # pragma: no cover
+    VER_SUFFIX = '%s%s' % sys.version_info[:2]
+PYVER = 'py' + VER_SUFFIX
+IMPVER = IMP_PREFIX + VER_SUFFIX
+
+ARCH = distutils.util.get_platform().replace('-', '_').replace('.', '_')
+
+ABI = sysconfig.get_config_var('SOABI')
+if ABI and ABI.startswith('cpython-'):
+    ABI = ABI.replace('cpython-', 'cp')
+else:
+    def _derive_abi():
+        parts = ['cp', VER_SUFFIX]
+        if sysconfig.get_config_var('Py_DEBUG'):
+            parts.append('d')
+        if sysconfig.get_config_var('WITH_PYMALLOC'):
+            parts.append('m')
+        if sysconfig.get_config_var('Py_UNICODE_SIZE') == 4:
+            parts.append('u')
+        return ''.join(parts)
+    ABI = _derive_abi()
+    del _derive_abi
+
+FILENAME_RE = re.compile(r'''
+(?P<nm>[^-]+)
+-(?P<vn>\d+[^-]*)
+(-(?P<bn>\d+[^-]*))?
+-(?P<py>\w+\d+(\.\w+\d+)*)
+-(?P<bi>\w+)
+-(?P<ar>\w+(\.\w+)*)
+\.whl$
+''', re.IGNORECASE | re.VERBOSE)
+
+NAME_VERSION_RE = re.compile(r'''
+(?P<nm>[^-]+)
+-(?P<vn>\d+[^-]*)
+(-(?P<bn>\d+[^-]*))?$
+''', re.IGNORECASE | re.VERBOSE)
+
+SHEBANG_RE = re.compile(br'\s*#![^\r\n]*')
+SHEBANG_DETAIL_RE = re.compile(br'^(\s*#!("[^"]+"|\S+))\s+(.*)$')
+SHEBANG_PYTHON = b'#!python'
+SHEBANG_PYTHONW = b'#!pythonw'
+
+if os.sep == '/':
+    to_posix = lambda o: o
+else:
+    to_posix = lambda o: o.replace(os.sep, '/')
+
+
+class Mounter(object):
+    def __init__(self):
+        self.impure_wheels = {}
+        self.libs = {}
+
+    def add(self, pathname, extensions):
+        self.impure_wheels[pathname] = extensions
+        self.libs.update(extensions)
+
+    def remove(self, pathname):
+        extensions = self.impure_wheels.pop(pathname)
+        for k, v in extensions:
+            if k in self.libs:
+                del self.libs[k]
+
+    def find_module(self, fullname, path=None):
+        if fullname in self.libs:
+            result = self
+        else:
+            result = None
+        return result
+
+    def load_module(self, fullname):
+        if fullname in sys.modules:
+            result = sys.modules[fullname]
+        else:
+            if fullname not in self.libs:
+                raise ImportError('unable to find extension for %s' % fullname)
+            result = imp.load_dynamic(fullname, self.libs[fullname])
+            result.__loader__ = self
+            parts = fullname.rsplit('.', 1)
+            if len(parts) > 1:
+                result.__package__ = parts[0]
+        return result
+
+_hook = Mounter()
+
+
+class Wheel(object):
+    """
+    Class to build and install from Wheel files (PEP 427).
+    """
+
+    wheel_version = (1, 1)
+    hash_kind = 'sha256'
+
+    def __init__(self, filename=None, sign=False, verify=False):
+        """
+        Initialise an instance using a (valid) filename.
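+
+        Illustrative example (not part of the original source; the
+        filename is hypothetical):
+
+        >>> w = Wheel('pkg-1.0-py3-none-any.whl')
+        >>> w.name, w.version, w.pyver, w.abi, w.arch
+        ('pkg', '1.0', ['py3'], ['none'], ['any'])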
+ """ + self.sign = sign + self.should_verify = verify + self.buildver = '' + self.pyver = [PYVER] + self.abi = ['none'] + self.arch = ['any'] + self.dirname = os.getcwd() + if filename is None: + self.name = 'dummy' + self.version = '0.1' + self._filename = self.filename + else: + m = NAME_VERSION_RE.match(filename) + if m: + info = m.groupdict('') + self.name = info['nm'] + # Reinstate the local version separator + self.version = info['vn'].replace('_', '-') + self.buildver = info['bn'] + self._filename = self.filename + else: + dirname, filename = os.path.split(filename) + m = FILENAME_RE.match(filename) + if not m: + raise DistlibException('Invalid name or ' + 'filename: %r' % filename) + if dirname: + self.dirname = os.path.abspath(dirname) + self._filename = filename + info = m.groupdict('') + self.name = info['nm'] + self.version = info['vn'] + self.buildver = info['bn'] + self.pyver = info['py'].split('.') + self.abi = info['bi'].split('.') + self.arch = info['ar'].split('.') + + @property + def filename(self): + """ + Build and return a filename from the various components. + """ + if self.buildver: + buildver = '-' + self.buildver + else: + buildver = '' + pyver = '.'.join(self.pyver) + abi = '.'.join(self.abi) + arch = '.'.join(self.arch) + # replace - with _ as a local version separator + version = self.version.replace('-', '_') + return '%s-%s%s-%s-%s-%s.whl' % (self.name, version, buildver, + pyver, abi, arch) + + @property + def exists(self): + path = os.path.join(self.dirname, self.filename) + return os.path.isfile(path) + + @property + def tags(self): + for pyver in self.pyver: + for abi in self.abi: + for arch in self.arch: + yield pyver, abi, arch + + @cached_property + def metadata(self): + pathname = os.path.join(self.dirname, self.filename) + name_ver = '%s-%s' % (self.name, self.version) + info_dir = '%s.dist-info' % name_ver + wrapper = codecs.getreader('utf-8') + with ZipFile(pathname, 'r') as zf: + wheel_metadata = self.get_wheel_metadata(zf) + wv = wheel_metadata['Wheel-Version'].split('.', 1) + file_version = tuple([int(i) for i in wv]) + if file_version < (1, 1): + fns = [WHEEL_METADATA_FILENAME, METADATA_FILENAME, 'METADATA'] + else: + fns = [WHEEL_METADATA_FILENAME, METADATA_FILENAME] + result = None + for fn in fns: + try: + metadata_filename = posixpath.join(info_dir, fn) + with zf.open(metadata_filename) as bf: + wf = wrapper(bf) + result = Metadata(fileobj=wf) + if result: + break + except KeyError: + pass + if not result: + raise ValueError('Invalid wheel, because metadata is ' + 'missing: looked in %s' % ', '.join(fns)) + return result + + def get_wheel_metadata(self, zf): + name_ver = '%s-%s' % (self.name, self.version) + info_dir = '%s.dist-info' % name_ver + metadata_filename = posixpath.join(info_dir, 'WHEEL') + with zf.open(metadata_filename) as bf: + wf = codecs.getreader('utf-8')(bf) + message = message_from_file(wf) + return dict(message) + + @cached_property + def info(self): + pathname = os.path.join(self.dirname, self.filename) + with ZipFile(pathname, 'r') as zf: + result = self.get_wheel_metadata(zf) + return result + + def process_shebang(self, data): + m = SHEBANG_RE.match(data) + if m: + end = m.end() + shebang, data_after_shebang = data[:end], data[end:] + # Preserve any arguments after the interpreter + if b'pythonw' in shebang.lower(): + shebang_python = SHEBANG_PYTHONW + else: + shebang_python = SHEBANG_PYTHON + m = SHEBANG_DETAIL_RE.match(shebang) + if m: + args = b' ' + m.groups()[-1] + else: + args = b'' + shebang = shebang_python 
+ args + data = shebang + data_after_shebang + else: + cr = data.find(b'\r') + lf = data.find(b'\n') + if cr < 0 or cr > lf: + term = b'\n' + else: + if data[cr:cr + 2] == b'\r\n': + term = b'\r\n' + else: + term = b'\r' + data = SHEBANG_PYTHON + term + data + return data + + def get_hash(self, data, hash_kind=None): + if hash_kind is None: + hash_kind = self.hash_kind + try: + hasher = getattr(hashlib, hash_kind) + except AttributeError: + raise DistlibException('Unsupported hash algorithm: %r' % hash_kind) + result = hasher(data).digest() + result = base64.urlsafe_b64encode(result).rstrip(b'=').decode('ascii') + return hash_kind, result + + def write_record(self, records, record_path, base): + records = list(records) # make a copy for sorting + p = to_posix(os.path.relpath(record_path, base)) + records.append((p, '', '')) + records.sort() + with CSVWriter(record_path) as writer: + for row in records: + writer.writerow(row) + + def write_records(self, info, libdir, archive_paths): + records = [] + distinfo, info_dir = info + hasher = getattr(hashlib, self.hash_kind) + for ap, p in archive_paths: + with open(p, 'rb') as f: + data = f.read() + digest = '%s=%s' % self.get_hash(data) + size = os.path.getsize(p) + records.append((ap, digest, size)) + + p = os.path.join(distinfo, 'RECORD') + self.write_record(records, p, libdir) + ap = to_posix(os.path.join(info_dir, 'RECORD')) + archive_paths.append((ap, p)) + + def build_zip(self, pathname, archive_paths): + with ZipFile(pathname, 'w', zipfile.ZIP_DEFLATED) as zf: + for ap, p in archive_paths: + logger.debug('Wrote %s to %s in wheel', p, ap) + zf.write(p, ap) + + def build(self, paths, tags=None, wheel_version=None): + """ + Build a wheel from files in specified paths, and use any specified tags + when determining the name of the wheel. + """ + if tags is None: + tags = {} + + libkey = list(filter(lambda o: o in paths, ('purelib', 'platlib')))[0] + if libkey == 'platlib': + is_pure = 'false' + default_pyver = [IMPVER] + default_abi = [ABI] + default_arch = [ARCH] + else: + is_pure = 'true' + default_pyver = [PYVER] + default_abi = ['none'] + default_arch = ['any'] + + self.pyver = tags.get('pyver', default_pyver) + self.abi = tags.get('abi', default_abi) + self.arch = tags.get('arch', default_arch) + + libdir = paths[libkey] + + name_ver = '%s-%s' % (self.name, self.version) + data_dir = '%s.data' % name_ver + info_dir = '%s.dist-info' % name_ver + + archive_paths = [] + + # First, stuff which is not in site-packages + for key in ('data', 'headers', 'scripts'): + if key not in paths: + continue + path = paths[key] + if os.path.isdir(path): + for root, dirs, files in os.walk(path): + for fn in files: + p = fsdecode(os.path.join(root, fn)) + rp = os.path.relpath(p, path) + ap = to_posix(os.path.join(data_dir, key, rp)) + archive_paths.append((ap, p)) + if key == 'scripts' and not p.endswith('.exe'): + with open(p, 'rb') as f: + data = f.read() + data = self.process_shebang(data) + with open(p, 'wb') as f: + f.write(data) + + # Now, stuff which is in site-packages, other than the + # distinfo stuff. 
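+        # e.g. (illustrative): for name_ver 'pkg-1.0', modules end up at
+        # the archive root, metadata under 'pkg-1.0.dist-info/', and any
+        # scripts/headers/data under 'pkg-1.0.data/<key>/'.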
+        path = libdir
+        distinfo = None
+        for root, dirs, files in os.walk(path):
+            if root == path:
+                # At the top level only, save distinfo for later
+                # and skip it for now
+                for i, dn in enumerate(dirs):
+                    dn = fsdecode(dn)
+                    if dn.endswith('.dist-info'):
+                        distinfo = os.path.join(root, dn)
+                        del dirs[i]
+                        break
+                assert distinfo, '.dist-info directory expected, not found'
+
+            for fn in files:
+                # comment out next suite to leave .pyc files in
+                if fsdecode(fn).endswith(('.pyc', '.pyo')):
+                    continue
+                p = os.path.join(root, fn)
+                rp = to_posix(os.path.relpath(p, path))
+                archive_paths.append((rp, p))
+
+        # Now distinfo. Assumed to be flat, i.e. os.listdir is enough.
+        files = os.listdir(distinfo)
+        for fn in files:
+            if fn not in ('RECORD', 'INSTALLER', 'SHARED', 'WHEEL'):
+                p = fsdecode(os.path.join(distinfo, fn))
+                ap = to_posix(os.path.join(info_dir, fn))
+                archive_paths.append((ap, p))
+
+        wheel_metadata = [
+            'Wheel-Version: %d.%d' % (wheel_version or self.wheel_version),
+            'Generator: distlib %s' % __version__,
+            'Root-Is-Purelib: %s' % is_pure,
+        ]
+        for pyver, abi, arch in self.tags:
+            wheel_metadata.append('Tag: %s-%s-%s' % (pyver, abi, arch))
+        p = os.path.join(distinfo, 'WHEEL')
+        with open(p, 'w') as f:
+            f.write('\n'.join(wheel_metadata))
+        ap = to_posix(os.path.join(info_dir, 'WHEEL'))
+        archive_paths.append((ap, p))
+
+        # Now, at last, RECORD.
+        # Paths in here are archive paths - nothing else makes sense.
+        self.write_records((distinfo, info_dir), libdir, archive_paths)
+        # Now, ready to build the zip file
+        pathname = os.path.join(self.dirname, self.filename)
+        self.build_zip(pathname, archive_paths)
+        return pathname
+
+    def skip_entry(self, arcname):
+        """
+        Determine whether an archive entry should be skipped when verifying
+        or installing.
+        """
+        # The signature file won't be in RECORD,
+        # and we don't currently do anything with it.
+        # We also skip directories, as they won't be in RECORD
+        # either. See:
+        #
+        # https://github.com/pypa/wheel/issues/294
+        # https://github.com/pypa/wheel/issues/287
+        # https://github.com/pypa/wheel/pull/289
+        #
+        return arcname.endswith(('/', '/RECORD.jws'))
+
+    def install(self, paths, maker, **kwargs):
+        """
+        Install a wheel to the specified paths. If kwarg ``warner`` is
+        specified, it should be a callable, which will be called with two
+        tuples indicating the wheel version of this software and the wheel
+        version in the file, if there is a discrepancy in the versions.
+        This can be used to issue any warnings or to raise any exceptions.
+        If kwarg ``lib_only`` is True, only the purelib/platlib files are
+        installed, and the headers, scripts, data and dist-info metadata are
+        not written. If kwarg ``bytecode_hashed_invalidation`` is True, written
+        bytecode will try to use file-hash based invalidation (PEP-552) on
+        supported interpreter versions (CPython 3.7+).
+
+        The return value is a :class:`InstalledDistribution` instance unless
+        ``lib_only`` is True, in which case the return value is ``None``.
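+
+        A minimal calling sketch (illustrative; the filename, paths and
+        ScriptMaker arguments are assumptions, not prescribed values):
+
+            from distlib.scripts import ScriptMaker
+            maker = ScriptMaker(None, None)
+            paths = {'purelib': '/tmp/lib', 'platlib': '/tmp/lib',
+                     'scripts': '/tmp/bin', 'headers': '/tmp/include',
+                     'data': '/tmp/data', 'prefix': '/tmp'}
+            dist = Wheel('pkg-1.0-py3-none-any.whl').install(paths, maker)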
+ """ + + dry_run = maker.dry_run + warner = kwargs.get('warner') + lib_only = kwargs.get('lib_only', False) + bc_hashed_invalidation = kwargs.get('bytecode_hashed_invalidation', False) + + pathname = os.path.join(self.dirname, self.filename) + name_ver = '%s-%s' % (self.name, self.version) + data_dir = '%s.data' % name_ver + info_dir = '%s.dist-info' % name_ver + + metadata_name = posixpath.join(info_dir, METADATA_FILENAME) + wheel_metadata_name = posixpath.join(info_dir, 'WHEEL') + record_name = posixpath.join(info_dir, 'RECORD') + + wrapper = codecs.getreader('utf-8') + + with ZipFile(pathname, 'r') as zf: + with zf.open(wheel_metadata_name) as bwf: + wf = wrapper(bwf) + message = message_from_file(wf) + wv = message['Wheel-Version'].split('.', 1) + file_version = tuple([int(i) for i in wv]) + if (file_version != self.wheel_version) and warner: + warner(self.wheel_version, file_version) + + if message['Root-Is-Purelib'] == 'true': + libdir = paths['purelib'] + else: + libdir = paths['platlib'] + + records = {} + with zf.open(record_name) as bf: + with CSVReader(stream=bf) as reader: + for row in reader: + p = row[0] + records[p] = row + + data_pfx = posixpath.join(data_dir, '') + info_pfx = posixpath.join(info_dir, '') + script_pfx = posixpath.join(data_dir, 'scripts', '') + + # make a new instance rather than a copy of maker's, + # as we mutate it + fileop = FileOperator(dry_run=dry_run) + fileop.record = True # so we can rollback if needed + + bc = not sys.dont_write_bytecode # Double negatives. Lovely! + + outfiles = [] # for RECORD writing + + # for script copying/shebang processing + workdir = tempfile.mkdtemp() + # set target dir later + # we default add_launchers to False, as the + # Python Launcher should be used instead + maker.source_dir = workdir + maker.target_dir = None + try: + for zinfo in zf.infolist(): + arcname = zinfo.filename + if isinstance(arcname, text_type): + u_arcname = arcname + else: + u_arcname = arcname.decode('utf-8') + if self.skip_entry(u_arcname): + continue + row = records[u_arcname] + if row[2] and str(zinfo.file_size) != row[2]: + raise DistlibException('size mismatch for ' + '%s' % u_arcname) + if row[1]: + kind, value = row[1].split('=', 1) + with zf.open(arcname) as bf: + data = bf.read() + _, digest = self.get_hash(data, kind) + if digest != value: + raise DistlibException('digest mismatch for ' + '%s' % arcname) + + if lib_only and u_arcname.startswith((info_pfx, data_pfx)): + logger.debug('lib_only: skipping %s', u_arcname) + continue + is_script = (u_arcname.startswith(script_pfx) + and not u_arcname.endswith('.exe')) + + if u_arcname.startswith(data_pfx): + _, where, rp = u_arcname.split('/', 2) + outfile = os.path.join(paths[where], convert_path(rp)) + else: + # meant for site-packages. 
+ if u_arcname in (wheel_metadata_name, record_name): + continue + outfile = os.path.join(libdir, convert_path(u_arcname)) + if not is_script: + with zf.open(arcname) as bf: + fileop.copy_stream(bf, outfile) + outfiles.append(outfile) + # Double check the digest of the written file + if not dry_run and row[1]: + with open(outfile, 'rb') as bf: + data = bf.read() + _, newdigest = self.get_hash(data, kind) + if newdigest != digest: + raise DistlibException('digest mismatch ' + 'on write for ' + '%s' % outfile) + if bc and outfile.endswith('.py'): + try: + pyc = fileop.byte_compile(outfile, + hashed_invalidation=bc_hashed_invalidation) + outfiles.append(pyc) + except Exception: + # Don't give up if byte-compilation fails, + # but log it and perhaps warn the user + logger.warning('Byte-compilation failed', + exc_info=True) + else: + fn = os.path.basename(convert_path(arcname)) + workname = os.path.join(workdir, fn) + with zf.open(arcname) as bf: + fileop.copy_stream(bf, workname) + + dn, fn = os.path.split(outfile) + maker.target_dir = dn + filenames = maker.make(fn) + fileop.set_executable_mode(filenames) + outfiles.extend(filenames) + + if lib_only: + logger.debug('lib_only: returning None') + dist = None + else: + # Generate scripts + + # Try to get pydist.json so we can see if there are + # any commands to generate. If this fails (e.g. because + # of a legacy wheel), log a warning but don't give up. + commands = None + file_version = self.info['Wheel-Version'] + if file_version == '1.0': + # Use legacy info + ep = posixpath.join(info_dir, 'entry_points.txt') + try: + with zf.open(ep) as bwf: + epdata = read_exports(bwf) + commands = {} + for key in ('console', 'gui'): + k = '%s_scripts' % key + if k in epdata: + commands['wrap_%s' % key] = d = {} + for v in epdata[k].values(): + s = '%s:%s' % (v.prefix, v.suffix) + if v.flags: + s += ' %s' % v.flags + d[v.name] = s + except Exception: + logger.warning('Unable to read legacy script ' + 'metadata, so cannot generate ' + 'scripts') + else: + try: + with zf.open(metadata_name) as bwf: + wf = wrapper(bwf) + commands = json.load(wf).get('extensions') + if commands: + commands = commands.get('python.commands') + except Exception: + logger.warning('Unable to read JSON metadata, so ' + 'cannot generate scripts') + if commands: + console_scripts = commands.get('wrap_console', {}) + gui_scripts = commands.get('wrap_gui', {}) + if console_scripts or gui_scripts: + script_dir = paths.get('scripts', '') + if not os.path.isdir(script_dir): + raise ValueError('Valid script path not ' + 'specified') + maker.target_dir = script_dir + for k, v in console_scripts.items(): + script = '%s = %s' % (k, v) + filenames = maker.make(script) + fileop.set_executable_mode(filenames) + + if gui_scripts: + options = {'gui': True } + for k, v in gui_scripts.items(): + script = '%s = %s' % (k, v) + filenames = maker.make(script, options) + fileop.set_executable_mode(filenames) + + p = os.path.join(libdir, info_dir) + dist = InstalledDistribution(p) + + # Write SHARED + paths = dict(paths) # don't change passed in dict + del paths['purelib'] + del paths['platlib'] + paths['lib'] = libdir + p = dist.write_shared_locations(paths, dry_run) + if p: + outfiles.append(p) + + # Write RECORD + dist.write_installed_files(outfiles, paths['prefix'], + dry_run) + return dist + except Exception: # pragma: no cover + logger.exception('installation failed.') + fileop.rollback() + raise + finally: + shutil.rmtree(workdir) + + def _get_dylib_cache(self): + global cache + if cache is None: + 
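# Lazily create a per-user cache directory for native extensions
+ # extracted from mounted wheels (used by _get_extensions()).
+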
# Use native string to avoid issues on 2.x: see Python #20140. + base = os.path.join(get_cache_base(), str('dylib-cache'), + '%s.%s' % sys.version_info[:2]) + cache = Cache(base) + return cache + + def _get_extensions(self): + pathname = os.path.join(self.dirname, self.filename) + name_ver = '%s-%s' % (self.name, self.version) + info_dir = '%s.dist-info' % name_ver + arcname = posixpath.join(info_dir, 'EXTENSIONS') + wrapper = codecs.getreader('utf-8') + result = [] + with ZipFile(pathname, 'r') as zf: + try: + with zf.open(arcname) as bf: + wf = wrapper(bf) + extensions = json.load(wf) + cache = self._get_dylib_cache() + prefix = cache.prefix_to_dir(pathname) + cache_base = os.path.join(cache.base, prefix) + if not os.path.isdir(cache_base): + os.makedirs(cache_base) + for name, relpath in extensions.items(): + dest = os.path.join(cache_base, convert_path(relpath)) + if not os.path.exists(dest): + extract = True + else: + file_time = os.stat(dest).st_mtime + file_time = datetime.datetime.fromtimestamp(file_time) + info = zf.getinfo(relpath) + wheel_time = datetime.datetime(*info.date_time) + extract = wheel_time > file_time + if extract: + zf.extract(relpath, cache_base) + result.append((name, dest)) + except KeyError: + pass + return result + + def is_compatible(self): + """ + Determine if a wheel is compatible with the running system. + """ + return is_compatible(self) + + def is_mountable(self): + """ + Determine if a wheel is asserted as mountable by its metadata. + """ + return True # for now - metadata details TBD + + def mount(self, append=False): + pathname = os.path.abspath(os.path.join(self.dirname, self.filename)) + if not self.is_compatible(): + msg = 'Wheel %s not compatible with this Python.' % pathname + raise DistlibException(msg) + if not self.is_mountable(): + msg = 'Wheel %s is marked as not mountable.' 
% pathname
+ raise DistlibException(msg)
+ if pathname in sys.path:
+ logger.debug('%s already in path', pathname)
+ else:
+ if append:
+ sys.path.append(pathname)
+ else:
+ sys.path.insert(0, pathname)
+ extensions = self._get_extensions()
+ if extensions:
+ if _hook not in sys.meta_path:
+ sys.meta_path.append(_hook)
+ _hook.add(pathname, extensions)
+
+ def unmount(self):
+ pathname = os.path.abspath(os.path.join(self.dirname, self.filename))
+ if pathname not in sys.path:
+ logger.debug('%s not in path', pathname)
+ else:
+ sys.path.remove(pathname)
+ if pathname in _hook.impure_wheels:
+ _hook.remove(pathname)
+ if not _hook.impure_wheels:
+ if _hook in sys.meta_path:
+ sys.meta_path.remove(_hook)
+
+ def verify(self):
+ pathname = os.path.join(self.dirname, self.filename)
+ name_ver = '%s-%s' % (self.name, self.version)
+ data_dir = '%s.data' % name_ver
+ info_dir = '%s.dist-info' % name_ver
+
+ metadata_name = posixpath.join(info_dir, METADATA_FILENAME)
+ wheel_metadata_name = posixpath.join(info_dir, 'WHEEL')
+ record_name = posixpath.join(info_dir, 'RECORD')
+
+ wrapper = codecs.getreader('utf-8')
+
+ with ZipFile(pathname, 'r') as zf:
+ with zf.open(wheel_metadata_name) as bwf:
+ wf = wrapper(bwf)
+ message = message_from_file(wf)
+ wv = message['Wheel-Version'].split('.', 1)
+ file_version = tuple([int(i) for i in wv])
+ # TODO version verification
+
+ records = {}
+ with zf.open(record_name) as bf:
+ with CSVReader(stream=bf) as reader:
+ for row in reader:
+ p = row[0]
+ records[p] = row
+
+ for zinfo in zf.infolist():
+ arcname = zinfo.filename
+ if isinstance(arcname, text_type):
+ u_arcname = arcname
+ else:
+ u_arcname = arcname.decode('utf-8')
+ # See issue #115: some wheels have .. in their entries, but
+ # only in the filename ... e.g. __main__..py! So the check is
+ # updated to look for .. in the directory portions
+ p = u_arcname.split('/')
+ if '..' in p:
+ raise DistlibException('invalid entry in '
+ 'wheel: %r' % u_arcname)
+
+ if self.skip_entry(u_arcname):
+ continue
+ row = records[u_arcname]
+ if row[2] and str(zinfo.file_size) != row[2]:
+ raise DistlibException('size mismatch for '
+ '%s' % u_arcname)
+ if row[1]:
+ kind, value = row[1].split('=', 1)
+ with zf.open(arcname) as bf:
+ data = bf.read()
+ _, digest = self.get_hash(data, kind)
+ if digest != value:
+ raise DistlibException('digest mismatch for '
+ '%s' % arcname)
+
+ def update(self, modifier, dest_dir=None, **kwargs):
+ """
+ Update the contents of a wheel in a generic way. The modifier should
+ be a callable which expects a dictionary argument: its keys are
+ archive-entry paths, and its values are absolute filesystem paths
+ where the contents of the corresponding archive entries can be found. The
+ modifier is free to change the contents of the files pointed to, add
+ new entries and remove entries, before returning. This method will
+ extract the entire contents of the wheel to a temporary location, call
+ the modifier, and then use the passed (and possibly updated)
+ dictionary to write a new wheel. If ``dest_dir`` is specified, the new
+ wheel is written there -- otherwise, the original wheel is overwritten.
+
+ The modifier should return True if it updated the wheel, else False.
+ This method returns the same value the modifier returns.
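+
+ A minimal sketch of a modifier which appends a marker line to one
+ file in place, given an instance ``wheel`` (the archive entry name
+ here is illustrative)::
+
+ def add_marker(path_map):
+ # keys are archive entry paths, values are extracted files
+ path = path_map.get('foo/__init__.py')
+ if path is None:
+ return False
+ with open(path, 'a') as f:
+ f.write('\nMARKER = True\n')
+ return True
+
+ modified = wheel.update(add_marker)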
+ """ + + def get_version(path_map, info_dir): + version = path = None + key = '%s/%s' % (info_dir, METADATA_FILENAME) + if key not in path_map: + key = '%s/PKG-INFO' % info_dir + if key in path_map: + path = path_map[key] + version = Metadata(path=path).version + return version, path + + def update_version(version, path): + updated = None + try: + v = NormalizedVersion(version) + i = version.find('-') + if i < 0: + updated = '%s+1' % version + else: + parts = [int(s) for s in version[i + 1:].split('.')] + parts[-1] += 1 + updated = '%s+%s' % (version[:i], + '.'.join(str(i) for i in parts)) + except UnsupportedVersionError: + logger.debug('Cannot update non-compliant (PEP-440) ' + 'version %r', version) + if updated: + md = Metadata(path=path) + md.version = updated + legacy = not path.endswith(METADATA_FILENAME) + md.write(path=path, legacy=legacy) + logger.debug('Version updated from %r to %r', version, + updated) + + pathname = os.path.join(self.dirname, self.filename) + name_ver = '%s-%s' % (self.name, self.version) + info_dir = '%s.dist-info' % name_ver + record_name = posixpath.join(info_dir, 'RECORD') + with tempdir() as workdir: + with ZipFile(pathname, 'r') as zf: + path_map = {} + for zinfo in zf.infolist(): + arcname = zinfo.filename + if isinstance(arcname, text_type): + u_arcname = arcname + else: + u_arcname = arcname.decode('utf-8') + if u_arcname == record_name: + continue + if '..' in u_arcname: + raise DistlibException('invalid entry in ' + 'wheel: %r' % u_arcname) + zf.extract(zinfo, workdir) + path = os.path.join(workdir, convert_path(u_arcname)) + path_map[u_arcname] = path + + # Remember the version. + original_version, _ = get_version(path_map, info_dir) + # Files extracted. Call the modifier. + modified = modifier(path_map, **kwargs) + if modified: + # Something changed - need to build a new wheel. + current_version, path = get_version(path_map, info_dir) + if current_version and (current_version == original_version): + # Add or update local version to signify changes. + update_version(current_version, path) + # Decide where the new wheel goes. + if dest_dir is None: + fd, newpath = tempfile.mkstemp(suffix='.whl', + prefix='wheel-update-', + dir=workdir) + os.close(fd) + else: + if not os.path.isdir(dest_dir): + raise DistlibException('Not a directory: %r' % dest_dir) + newpath = os.path.join(dest_dir, self.filename) + archive_paths = list(path_map.items()) + distinfo = os.path.join(workdir, info_dir) + info = distinfo, info_dir + self.write_records(info, workdir, archive_paths) + self.build_zip(newpath, archive_paths) + if dest_dir is None: + shutil.copyfile(newpath, pathname) + return modified + +def compatible_tags(): + """ + Return (pyver, abi, arch) tuples compatible with this Python. 
+ """ + versions = [VER_SUFFIX] + major = VER_SUFFIX[0] + for minor in range(sys.version_info[1] - 1, - 1, -1): + versions.append(''.join([major, str(minor)])) + + abis = [] + for suffix, _, _ in imp.get_suffixes(): + if suffix.startswith('.abi'): + abis.append(suffix.split('.', 2)[1]) + abis.sort() + if ABI != 'none': + abis.insert(0, ABI) + abis.append('none') + result = [] + + arches = [ARCH] + if sys.platform == 'darwin': + m = re.match(r'(\w+)_(\d+)_(\d+)_(\w+)$', ARCH) + if m: + name, major, minor, arch = m.groups() + minor = int(minor) + matches = [arch] + if arch in ('i386', 'ppc'): + matches.append('fat') + if arch in ('i386', 'ppc', 'x86_64'): + matches.append('fat3') + if arch in ('ppc64', 'x86_64'): + matches.append('fat64') + if arch in ('i386', 'x86_64'): + matches.append('intel') + if arch in ('i386', 'x86_64', 'intel', 'ppc', 'ppc64'): + matches.append('universal') + while minor >= 0: + for match in matches: + s = '%s_%s_%s_%s' % (name, major, minor, match) + if s != ARCH: # already there + arches.append(s) + minor -= 1 + + # Most specific - our Python version, ABI and arch + for abi in abis: + for arch in arches: + result.append((''.join((IMP_PREFIX, versions[0])), abi, arch)) + + # where no ABI / arch dependency, but IMP_PREFIX dependency + for i, version in enumerate(versions): + result.append((''.join((IMP_PREFIX, version)), 'none', 'any')) + if i == 0: + result.append((''.join((IMP_PREFIX, version[0])), 'none', 'any')) + + # no IMP_PREFIX, ABI or arch dependency + for i, version in enumerate(versions): + result.append((''.join(('py', version)), 'none', 'any')) + if i == 0: + result.append((''.join(('py', version[0])), 'none', 'any')) + return set(result) + + +COMPATIBLE_TAGS = compatible_tags() + +del compatible_tags + + +def is_compatible(wheel, tags=None): + if not isinstance(wheel, Wheel): + wheel = Wheel(wheel) # assume it's a filename + result = False + if tags is None: + tags = COMPATIBLE_TAGS + for ver, abi, arch in tags: + if ver in wheel.pyver and abi in wheel.abi and arch in wheel.arch: + result = True + break + return result diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distro-1.4.0.dist-info/AUTHORS.txt b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distro-1.4.0.dist-info/AUTHORS.txt new file mode 100644 index 0000000000000000000000000000000000000000..72c87d7d38ae7bf859717c333a5ee8230f6ce624 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distro-1.4.0.dist-info/AUTHORS.txt @@ -0,0 +1,562 @@ +A_Rog +Aakanksha Agrawal <11389424+rasponic@users.noreply.github.com> +Abhinav Sagar <40603139+abhinavsagar@users.noreply.github.com> +ABHYUDAY PRATAP SINGH +abs51295 +AceGentile +Adam Chainz +Adam Tse +Adam Tse +Adam Wentz +admin +Adrien Morison +ahayrapetyan +Ahilya +AinsworthK +Akash Srivastava +Alan Yee +Albert Tugushev +Albert-Guan +albertg +Aleks Bunin +Alethea Flowers +Alex Gaynor +Alex Grönholm +Alex Loosley +Alex Morega +Alex Stachowiak +Alexander Shtyrov +Alexandre Conrad +Alexey Popravka +Alexey Popravka +Alli +Ami Fischman +Ananya Maiti +Anatoly Techtonik +Anders Kaseorg +Andreas Lutro +Andrei Geacar +Andrew Gaul +Andrey Bulgakov +Andrés Delfino <34587441+andresdelfino@users.noreply.github.com> +Andrés Delfino +Andy Freeland +Andy Freeland +Andy Kluger +Ani Hayrapetyan +Aniruddha Basak +Anish Tambe +Anrs Hu +Anthony Sottile +Antoine Musso +Anton Ovchinnikov +Anton Patrushev +Antonio Alvarado Hernandez +Antony Lee +Antti 
Kaihola +Anubhav Patel +Anuj Godase +AQNOUCH Mohammed +AraHaan +Arindam Choudhury +Armin Ronacher +Artem +Ashley Manton +Ashwin Ramaswami +atse +Atsushi Odagiri +Avner Cohen +Baptiste Mispelon +Barney Gale +barneygale +Bartek Ogryczak +Bastian Venthur +Ben Darnell +Ben Hoyt +Ben Rosser +Bence Nagy +Benjamin Peterson +Benjamin VanEvery +Benoit Pierre +Berker Peksag +Bernardo B. Marques +Bernhard M. Wiedemann +Bertil Hatt +Bogdan Opanchuk +BorisZZZ +Brad Erickson +Bradley Ayers +Brandon L. Reiss +Brandt Bucher +Brett Randall +Brian Cristante <33549821+brcrista@users.noreply.github.com> +Brian Cristante +Brian Rosner +BrownTruck +Bruno Oliveira +Bruno Renié +Bstrdsmkr +Buck Golemon +burrows +Bussonnier Matthias +c22 +Caleb Martinez +Calvin Smith +Carl Meyer +Carlos Liam +Carol Willing +Carter Thayer +Cass +Chandrasekhar Atina +Chih-Hsuan Yen +Chih-Hsuan Yen +Chris Brinker +Chris Hunt +Chris Jerdonek +Chris McDonough +Chris Wolfe +Christian Heimes +Christian Oudard +Christopher Hunt +Christopher Snyder +Clark Boylan +Clay McClure +Cody +Cody Soyland +Colin Watson +Connor Osborn +Cooper Lees +Cooper Ry Lees +Cory Benfield +Cory Wright +Craig Kerstiens +Cristian Sorinel +Curtis Doty +cytolentino +Damian Quiroga +Dan Black +Dan Savilonis +Dan Sully +daniel +Daniel Collins +Daniel Hahler +Daniel Holth +Daniel Jost +Daniel Shaulov +Daniele Esposti +Daniele Procida +Danny Hermes +Dav Clark +Dave Abrahams +Dave Jones +David Aguilar +David Black +David Bordeynik +David Bordeynik +David Caro +David Evans +David Linke +David Pursehouse +David Tucker +David Wales +Davidovich +derwolfe +Desetude +Diego Caraballo +DiegoCaraballo +Dmitry Gladkov +Domen Kožar +Donald Stufft +Dongweiming +Douglas Thor +DrFeathers +Dustin Ingram +Dwayne Bailey +Ed Morley <501702+edmorley@users.noreply.github.com> +Ed Morley +Eitan Adler +ekristina +elainechan +Eli Schwartz +Eli Schwartz +Emil Burzo +Emil Styrke +Endoh Takanao +enoch +Erdinc Mutlu +Eric Gillingham +Eric Hanchrow +Eric Hopper +Erik M. Bray +Erik Rose +Ernest W Durbin III +Ernest W. Durbin III +Erwin Janssen +Eugene Vereshchagin +everdimension +Felix Yan +fiber-space +Filip Kokosiński +Florian Briand +Florian Rathgeber +Francesco +Francesco Montesano +Frost Ming +Gabriel Curio +Gabriel de Perthuis +Garry Polley +gdanielson +Geoffrey Lehée +Geoffrey Sneddon +George Song +Georgi Valkov +Giftlin Rajaiah +gizmoguy1 +gkdoc <40815324+gkdoc@users.noreply.github.com> +Gopinath M <31352222+mgopi1990@users.noreply.github.com> +GOTO Hayato <3532528+gh640@users.noreply.github.com> +gpiks +Guilherme Espada +Guy Rozendorn +gzpan123 +Hanjun Kim +Hari Charan +Harsh Vardhan +Herbert Pfennig +Hsiaoming Yang +Hugo +Hugo Lopes Tavares +Hugo van Kemenade +hugovk +Hynek Schlawack +Ian Bicking +Ian Cordasco +Ian Lee +Ian Stapleton Cordasco +Ian Wienand +Ian Wienand +Igor Kuzmitshov +Igor Sobreira +Ilya Baryshev +INADA Naoki +Ionel Cristian Mărieș +Ionel Maries Cristian +Ivan Pozdeev +Jacob Kim +jakirkham +Jakub Stasiak +Jakub Vysoky +Jakub Wilk +James Cleveland +James Cleveland +James Firth +James Polley +Jan Pokorný +Jannis Leidel +jarondl +Jason R. 
Coombs +Jay Graves +Jean-Christophe Fillion-Robin +Jeff Barber +Jeff Dairiki +Jelmer Vernooij +jenix21 +Jeremy Stanley +Jeremy Zafran +Jiashuo Li +Jim Garrison +Jivan Amara +John Paton +John-Scott Atlakson +johnthagen +johnthagen +Jon Banafato +Jon Dufresne +Jon Parise +Jonas Nockert +Jonathan Herbert +Joost Molenaar +Jorge Niedbalski +Joseph Long +Josh Bronson +Josh Hansen +Josh Schneier +Juanjo Bazán +Julian Berman +Julian Gethmann +Julien Demoor +jwg4 +Jyrki Pulliainen +Kai Chen +Kamal Bin Mustafa +kaustav haldar +keanemind +Keith Maxwell +Kelsey Hightower +Kenneth Belitzky +Kenneth Reitz +Kenneth Reitz +Kevin Burke +Kevin Carter +Kevin Frommelt +Kevin R Patterson +Kexuan Sun +Kit Randel +kpinc +Krishna Oza +Kumar McMillan +Kyle Persohn +lakshmanaram +Laszlo Kiss-Kollar +Laurent Bristiel +Laurie Opperman +Leon Sasson +Lev Givon +Lincoln de Sousa +Lipis +Loren Carvalho +Lucas Cimon +Ludovic Gasc +Luke Macken +Luo Jiebin +luojiebin +luz.paz +László Kiss Kollár +László Kiss Kollár +Marc Abramowitz +Marc Tamlyn +Marcus Smith +Mariatta +Mark Kohler +Mark Williams +Mark Williams +Markus Hametner +Masaki +Masklinn +Matej Stuchlik +Mathew Jennings +Mathieu Bridon +Matt Good +Matt Maker +Matt Robenolt +matthew +Matthew Einhorn +Matthew Gilliard +Matthew Iversen +Matthew Trumbell +Matthew Willson +Matthias Bussonnier +mattip +Maxim Kurnikov +Maxime Rouyrre +mayeut +mbaluna <44498973+mbaluna@users.noreply.github.com> +mdebi <17590103+mdebi@users.noreply.github.com> +memoselyk +Michael +Michael Aquilina +Michael E. Karpeles +Michael Klich +Michael Williamson +michaelpacer +Mickaël Schoentgen +Miguel Araujo Perez +Mihir Singh +Mike +Mike Hendricks +Min RK +MinRK +Miro Hrončok +Monica Baluna +montefra +Monty Taylor +Nate Coraor +Nathaniel J. Smith +Nehal J Wani +Neil Botelho +Nick Coghlan +Nick Stenning +Nick Timkovich +Nicolas Bock +Nikhil Benesch +Nitesh Sharma +Nowell Strite +NtaleGrey +nvdv +Ofekmeister +ofrinevo +Oliver Jeeves +Oliver Tonnhofer +Olivier Girardot +Olivier Grisel +Ollie Rutherfurd +OMOTO Kenji +Omry Yadan +Oren Held +Oscar Benjamin +Oz N Tiram +Pachwenko <32424503+Pachwenko@users.noreply.github.com> +Patrick Dubroy +Patrick Jenkins +Patrick Lawson +patricktokeeffe +Patrik Kopkan +Paul Kehrer +Paul Moore +Paul Nasrat +Paul Oswald +Paul van der Linden +Paulus Schoutsen +Pavithra Eswaramoorthy <33131404+QueenCoffee@users.noreply.github.com> +Pawel Jasinski +Pekka Klärck +Peter Lisák +Peter Waller +petr-tik +Phaneendra Chiruvella +Phil Freo +Phil Pennock +Phil Whelan +Philip Jägenstedt +Philip Molloy +Philippe Ombredanne +Pi Delport +Pierre-Yves Rofes +pip +Prabakaran Kumaresshan +Prabhjyotsing Surjit Singh Sodhi +Prabhu Marappan +Pradyun Gedam +Pratik Mallya +Preet Thakkar +Preston Holmes +Przemek Wrzos +Pulkit Goyal <7895pulkit@gmail.com> +Qiangning Hong +Quentin Pradet +R. David Murray +Rafael Caricio +Ralf Schmitt +Razzi Abuissa +rdb +Remi Rampin +Remi Rampin +Rene Dudfield +Riccardo Magliocchetti +Richard Jones +RobberPhex +Robert Collins +Robert McGibbon +Robert T. 
McGibbon +robin elisha robinson +Roey Berman +Rohan Jain +Rohan Jain +Rohan Jain +Roman Bogorodskiy +Romuald Brunet +Ronny Pfannschmidt +Rory McCann +Ross Brattain +Roy Wellington Ⅳ +Roy Wellington Ⅳ +Ryan Wooden +ryneeverett +Sachi King +Salvatore Rinchiera +Savio Jomton +schlamar +Scott Kitterman +Sean +seanj +Sebastian Jordan +Sebastian Schaetz +Segev Finer +SeongSoo Cho +Sergey Vasilyev +Seth Woodworth +Shlomi Fish +Shovan Maity +Simeon Visser +Simon Cross +Simon Pichugin +sinoroc +Sorin Sbarnea +Stavros Korokithakis +Stefan Scherfke +Stephan Erb +stepshal +Steve (Gadget) Barnes +Steve Barnes +Steve Dower +Steve Kowalik +Steven Myint +stonebig +Stéphane Bidoul (ACSONE) +Stéphane Bidoul +Stéphane Klein +Sumana Harihareswara +Sviatoslav Sydorenko +Sviatoslav Sydorenko +Swat009 +Takayuki SHIMIZUKAWA +tbeswick +Thijs Triemstra +Thomas Fenzl +Thomas Grainger +Thomas Guettler +Thomas Johansson +Thomas Kluyver +Thomas Smith +Tim D. Smith +Tim Gates +Tim Harder +Tim Heap +tim smith +tinruufu +Tom Forbes +Tom Freudenheim +Tom V +Tomas Orsava +Tomer Chachamu +Tony Beswick +Tony Zhaocheng Tan +TonyBeswick +toonarmycaptain +Toshio Kuratomi +Travis Swicegood +Tzu-ping Chung +Valentin Haenel +Victor Stinner +victorvpaulo +Viktor Szépe +Ville Skyttä +Vinay Sajip +Vincent Philippon +Vinicyus Macedo <7549205+vinicyusmacedo@users.noreply.github.com> +Vitaly Babiy +Vladimir Rutsky +W. Trevor King +Wil Tan +Wilfred Hughes +William ML Leslie +William T Olson +Wilson Mo +wim glenn +Wolfgang Maier +Xavier Fernandez +Xavier Fernandez +xoviat +xtreak +YAMAMOTO Takashi +Yen Chi Hsuan +Yeray Diaz Diaz +Yoval P +Yu Jian +Yuan Jing Vincent Yan +Zearin +Zearin +Zhiping Deng +Zvezdan Petkovic +Łukasz Langa +Семён Марьясин diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distro-1.4.0.dist-info/INSTALLER b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distro-1.4.0.dist-info/INSTALLER new file mode 100644 index 0000000000000000000000000000000000000000..a1b589e38a32041e49332e5e81c2d363dc418d68 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distro-1.4.0.dist-info/INSTALLER @@ -0,0 +1 @@ +pip diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distro-1.4.0.dist-info/LICENSE.txt b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distro-1.4.0.dist-info/LICENSE.txt new file mode 100644 index 0000000000000000000000000000000000000000..737fec5c5352af3d9a6a47a0670da4bdb52c5725 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distro-1.4.0.dist-info/LICENSE.txt @@ -0,0 +1,20 @@ +Copyright (c) 2008-2019 The pip developers (see AUTHORS.txt file) + +Permission is hereby granted, free of charge, to any person obtaining +a copy of this software and associated documentation files (the +"Software"), to deal in the Software without restriction, including +without limitation the rights to use, copy, modify, merge, publish, +distribute, sublicense, and/or sell copies of the Software, and to +permit persons to whom the Software is furnished to do so, subject to +the following conditions: + +The above copyright notice and this permission notice shall be +included in all copies or substantial portions of the Software. 
+ +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE +LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION +OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION +WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distro-1.4.0.dist-info/METADATA b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distro-1.4.0.dist-info/METADATA new file mode 100644 index 0000000000000000000000000000000000000000..b34cf6205d1de724dd38179b318dc0f4172d8257 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distro-1.4.0.dist-info/METADATA @@ -0,0 +1,170 @@ +Metadata-Version: 2.1 +Name: distro +Version: 1.4.0 +Summary: Distro - an OS platform information API +Home-page: https://github.com/nir0s/distro +Author: Nir Cohen +Author-email: nir36g@gmail.com +License: Apache License, Version 2.0 +Platform: All +Classifier: Development Status :: 5 - Production/Stable +Classifier: Intended Audience :: Developers +Classifier: Intended Audience :: System Administrators +Classifier: License :: OSI Approved :: Apache Software License +Classifier: Operating System :: POSIX :: Linux +Classifier: Operating System :: POSIX :: BSD +Classifier: Operating System :: POSIX :: BSD :: FreeBSD +Classifier: Operating System :: POSIX :: BSD :: NetBSD +Classifier: Operating System :: POSIX :: BSD :: OpenBSD +Classifier: Programming Language :: Python :: 2 +Classifier: Programming Language :: Python :: 2.7 +Classifier: Programming Language :: Python :: 3 +Classifier: Programming Language :: Python :: 3.4 +Classifier: Programming Language :: Python :: 3.5 +Classifier: Programming Language :: Python :: 3.6 +Classifier: Topic :: Software Development :: Libraries :: Python Modules +Classifier: Topic :: System :: Operating System +Description-Content-Type: text/markdown + +Distro - an OS platform information API +======================================= + +[![Build Status](https://travis-ci.org/nir0s/distro.svg?branch=master)](https://travis-ci.org/nir0s/distro) +[![Build status](https://ci.appveyor.com/api/projects/status/e812qjk1gf0f74r5/branch/master?svg=true)](https://ci.appveyor.com/project/nir0s/distro/branch/master) +[![PyPI version](http://img.shields.io/pypi/v/distro.svg)](https://pypi.python.org/pypi/distro) +[![Supported Python Versions](https://img.shields.io/pypi/pyversions/distro.svg)](https://img.shields.io/pypi/pyversions/distro.svg) +[![Requirements Status](https://requires.io/github/nir0s/distro/requirements.svg?branch=master)](https://requires.io/github/nir0s/distro/requirements/?branch=master) +[![Code Coverage](https://codecov.io/github/nir0s/distro/coverage.svg?branch=master)](https://codecov.io/github/nir0s/distro?branch=master) +[![Code Quality](https://landscape.io/github/nir0s/distro/master/landscape.svg?style=flat)](https://landscape.io/github/nir0s/distro) +[![Is Wheel](https://img.shields.io/pypi/wheel/distro.svg?style=flat)](https://pypi.python.org/pypi/distro) +[![Latest Github Release](https://readthedocs.org/projects/distro/badge/?version=stable)](http://distro.readthedocs.io/en/latest/) +[![Join the chat at 
https://gitter.im/nir0s/distro](https://badges.gitter.im/nir0s/distro.svg)](https://gitter.im/nir0s/distro?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
+
+`distro` provides information about the
+OS distribution it runs on, such as a reliable machine-readable ID, or
+version information.
+
+It is the recommended replacement for Python's original
+[`platform.linux_distribution`](https://docs.python.org/3.7/library/platform.html#platform.linux_distribution)
+function (which will be removed in Python 3.8).
+It also provides much more functionality which isn't necessarily Python bound,
+like a command-line interface.
+
+Distro currently supports Linux and BSD based systems but [Windows and OS X support](https://github.com/nir0s/distro/issues/177) is also planned.
+
+For Python 2.6 support, see https://github.com/nir0s/distro/tree/python2.6-support
+
+## Installation
+
+Installation of the latest released version from PyPI:
+
+```shell
+pip install distro
+```
+
+Installation of the latest development version:
+
+```shell
+pip install https://github.com/nir0s/distro/archive/master.tar.gz
+```
+
+
+## Usage
+
+```bash
+$ distro
+Name: Antergos Linux
+Version: 2015.10 (ISO-Rolling)
+Codename: ISO-Rolling
+
+$ distro -j
+{
+ "codename": "ISO-Rolling",
+ "id": "antergos",
+ "like": "arch",
+ "version": "16.9",
+ "version_parts": {
+ "build_number": "",
+ "major": "16",
+ "minor": "9"
+ }
+}
+
+
+$ python
+>>> import distro
+>>> distro.linux_distribution(full_distribution_name=False)
+('centos', '7.1.1503', 'Core')
+```
+
+
+## Documentation
+
+On top of the aforementioned API, several more functions are available. For a complete description of the
+API, see the [latest API documentation](http://distro.readthedocs.org/en/latest/).
+
+## Background
+
+An alternative implementation became necessary because Python 3.5 deprecated
+this function, and Python 3.8 will remove it altogether.
+Its predecessor function `platform.dist` has been deprecated since
+Python 2.6 and will also be removed in Python 3.8.
+Still, there are many cases in which access to that information is needed.
+See [Python issue 1322](https://bugs.python.org/issue1322) for more
+information.
+
+The `distro` package implements a robust and inclusive way of retrieving the
+information about a distribution based on new standards and old methods,
+namely from these data sources (from high to low precedence):
+
+* The os-release file `/etc/os-release`, if present.
+* The output of the `lsb_release` command, if available.
+* The distro release file (`/etc/*(-|_)(release|version)`), if present.
+* The `uname` command for BSD based distributions.
+
+
+## Python and Distribution Support
+
+`distro` is supported and tested on Python 2.7, 3.4+ and PyPy and on
+any distribution that provides one or more of the data sources
+covered.
+
+This package is tested with test data that mimics the exact behavior of the data sources of [a number of Linux distributions](https://github.com/nir0s/distro/tree/master/tests/resources/distros).
+
+
+## Testing
+
+```shell
+git clone git@github.com:nir0s/distro.git
+cd distro
+pip install tox
+tox
+```
+
+
+## Contributions
+
+Pull requests are always welcome to deal with specific distributions or just
+for general merriment.
+
+See [CONTRIBUTIONS](https://github.com/nir0s/distro/blob/master/CONTRIBUTING.md) for contribution info.
+ +Reference implementations for supporting additional distributions and file +formats can be found here: + +* https://github.com/saltstack/salt/blob/develop/salt/grains/core.py#L1172 +* https://github.com/chef/ohai/blob/master/lib/ohai/plugins/linux/platform.rb +* https://github.com/ansible/ansible/blob/devel/lib/ansible/module_utils/facts/system/distribution.py +* https://github.com/puppetlabs/facter/blob/master/lib/src/facts/linux/os_linux.cc + +## Package manager distributions + +* https://src.fedoraproject.org/rpms/python-distro +* https://www.archlinux.org/packages/community/any/python-distro/ +* https://launchpad.net/ubuntu/+source/python-distro +* https://packages.debian.org/sid/python-distro +* https://packages.gentoo.org/packages/dev-python/distro +* https://pkgs.org/download/python2-distro +* https://slackbuilds.org/repository/14.2/python/python-distro/ + + diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distro-1.4.0.dist-info/RECORD b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distro-1.4.0.dist-info/RECORD new file mode 100644 index 0000000000000000000000000000000000000000..0ad020978447d7a6505ef3c39349ecf0a8f47251 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distro-1.4.0.dist-info/RECORD @@ -0,0 +1,15 @@ +distro.py,sha256=X2So5kjrRKyMbQJ90Xgy93HU5eFtujCzKaYNeoy1k1c,43251 +distro-1.4.0.dist-info/AUTHORS.txt,sha256=RtqU9KfonVGhI48DAA4-yTOBUhBtQTjFhaDzHoyh7uU,21518 +distro-1.4.0.dist-info/LICENSE.txt,sha256=W6Ifuwlk-TatfRU2LR7W1JMcyMj5_y1NkRkOEJvnRDE,1090 +distro-1.4.0.dist-info/METADATA,sha256=7u13dPkDA9zu5ahg3Ns-H2vfMpR6gBBS55EaBzlXIeg,6648 +distro-1.4.0.dist-info/WHEEL,sha256=kGT74LWyRUZrL4VgLh6_g12IeVl_9u9ZVhadrgXZUEY,110 +distro-1.4.0.dist-info/entry_points.txt,sha256=mDMyvS_AzB0WhRYe_6xrRkAAET1LwFiDTL5Sx57UFiY,40 +distro-1.4.0.dist-info/top_level.txt,sha256=ikde_V_XEdSBqaGd5tEriN_wzYHLgTX_zVtlsGLHvwQ,7 +distro-1.4.0.dist-info/RECORD,, +distro-1.4.0.dist-info/INSTALLER,, +distro-1.4.0.virtualenv,, +../../../bin/distro3,, +distro-1.4.0.dist-info/__pycache__,, +../../../bin/distro,, +distro.cpython-38.pyc,, +../../../bin/distro-3.8,, \ No newline at end of file diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distro-1.4.0.dist-info/WHEEL b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distro-1.4.0.dist-info/WHEEL new file mode 100644 index 0000000000000000000000000000000000000000..ef99c6cf3283b50a273ac4c6d009a0aa85597070 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distro-1.4.0.dist-info/WHEEL @@ -0,0 +1,6 @@ +Wheel-Version: 1.0 +Generator: bdist_wheel (0.34.2) +Root-Is-Purelib: true +Tag: py2-none-any +Tag: py3-none-any + diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distro-1.4.0.dist-info/entry_points.txt b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distro-1.4.0.dist-info/entry_points.txt new file mode 100644 index 0000000000000000000000000000000000000000..dd4023997f5b0b48c6939d03086d2caccd209d4c --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distro-1.4.0.dist-info/entry_points.txt @@ -0,0 +1,3 @@ +[console_scripts] +distro = distro:main + diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distro-1.4.0.dist-info/top_level.txt 
b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distro-1.4.0.dist-info/top_level.txt new file mode 100644 index 0000000000000000000000000000000000000000..0e0933171d3eb977bbc6911eec27b9c0ba289500 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distro-1.4.0.dist-info/top_level.txt @@ -0,0 +1 @@ +distro diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distro-1.4.0.virtualenv b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distro-1.4.0.virtualenv new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distro.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distro.py new file mode 100644 index 0000000000000000000000000000000000000000..33061633efb6b03f6ca845ec5b0e69c90c25fdb8 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/distro.py @@ -0,0 +1,1216 @@ +# Copyright 2015,2016,2017 Nir Cohen
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""
+The ``distro`` package (``distro`` stands for Linux Distribution) provides
+information about the Linux distribution it runs on, such as a reliable
+machine-readable distro ID, or version information.
+
+It is the recommended replacement for Python's original
+:py:func:`platform.linux_distribution` function, but it provides much more
+functionality. An alternative implementation became necessary because Python
+3.5 deprecated this function, and Python 3.8 will remove it altogether.
+Its predecessor function :py:func:`platform.dist` has been
+deprecated since Python 2.6 and will also be removed in Python 3.8.
+Still, there are many cases in which access to OS distribution information
+is needed. See `Python issue 1322 <https://bugs.python.org/issue1322>`_ for
+more information.
+"""
+
+import os
+import re
+import sys
+import json
+import shlex
+import logging
+import argparse
+import subprocess
+
+
+_UNIXCONFDIR = os.environ.get('UNIXCONFDIR', '/etc')
+_OS_RELEASE_BASENAME = 'os-release'
+
+#: Translation table for normalizing the "ID" attribute defined in os-release
+#: files, for use by the :func:`distro.id` method.
+#:
+#: * Key: Value as defined in the os-release file, translated to lower case,
+#: with blanks translated to underscores.
+#:
+#: * Value: Normalized value.
+NORMALIZED_OS_ID = {
+ 'ol': 'oracle', # Oracle Enterprise Linux
+}
+
+#: Translation table for normalizing the "Distributor ID" attribute returned by
+#: the lsb_release command, for use by the :func:`distro.id` method.
+#:
+#: * Key: Value as returned by the lsb_release command, translated to lower
+#: case, with blanks translated to underscores.
+#:
+#: * Value: Normalized value.
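+#:
+#: For example, a "Distributor ID" of "RedHatEnterpriseServer" is
+#: normalized to "rhel".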
+NORMALIZED_LSB_ID = { + 'enterpriseenterprise': 'oracle', # Oracle Enterprise Linux + 'redhatenterpriseworkstation': 'rhel', # RHEL 6, 7 Workstation + 'redhatenterpriseserver': 'rhel', # RHEL 6, 7 Server +} + +#: Translation table for normalizing the distro ID derived from the file name +#: of distro release files, for use by the :func:`distro.id` method. +#: +#: * Key: Value as derived from the file name of a distro release file, +#: translated to lower case, with blanks translated to underscores. +#: +#: * Value: Normalized value. +NORMALIZED_DISTRO_ID = { + 'redhat': 'rhel', # RHEL 6.x, 7.x +} + +# Pattern for content of distro release file (reversed) +_DISTRO_RELEASE_CONTENT_REVERSED_PATTERN = re.compile( + r'(?:[^)]*\)(.*)\()? *(?:STL )?([\d.+\-a-z]*\d) *(?:esaeler *)?(.+)') + +# Pattern for base file name of distro release file +_DISTRO_RELEASE_BASENAME_PATTERN = re.compile( + r'(\w+)[-_](release|version)$') + +# Base file names to be ignored when searching for distro release file +_DISTRO_RELEASE_IGNORE_BASENAMES = ( + 'debian_version', + 'lsb-release', + 'oem-release', + _OS_RELEASE_BASENAME, + 'system-release' +) + + +def linux_distribution(full_distribution_name=True): + """ + Return information about the current OS distribution as a tuple + ``(id_name, version, codename)`` with items as follows: + + * ``id_name``: If *full_distribution_name* is false, the result of + :func:`distro.id`. Otherwise, the result of :func:`distro.name`. + + * ``version``: The result of :func:`distro.version`. + + * ``codename``: The result of :func:`distro.codename`. + + The interface of this function is compatible with the original + :py:func:`platform.linux_distribution` function, supporting a subset of + its parameters. + + The data it returns may not exactly be the same, because it uses more data + sources than the original function, and that may lead to different data if + the OS distribution is not consistent across multiple data sources it + provides (there are indeed such distributions ...). + + Another reason for differences is the fact that the :func:`distro.id` + method normalizes the distro ID string to a reliable machine-readable value + for a number of popular OS distributions. + """ + return _distro.linux_distribution(full_distribution_name) + + +def id(): + """ + Return the distro ID of the current distribution, as a + machine-readable string. + + For a number of OS distributions, the returned distro ID value is + *reliable*, in the sense that it is documented and that it does not change + across releases of the distribution. 
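+ For example, the ID "centos" is returned whether it is read from the
+ os-release attribute ``ID=centos`` or derived from a distro release
+ file named ``centos-release``.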
+
+ This package maintains the following reliable distro ID values:
+
+ ============== =========================================
+ Distro ID Distribution
+ ============== =========================================
+ "ubuntu" Ubuntu
+ "debian" Debian
+ "rhel" Red Hat Enterprise Linux
+ "centos" CentOS
+ "fedora" Fedora
+ "sles" SUSE Linux Enterprise Server
+ "opensuse" openSUSE
+ "amazon" Amazon Linux
+ "arch" Arch Linux
+ "cloudlinux" CloudLinux OS
+ "exherbo" Exherbo Linux
+ "gentoo" Gentoo Linux
+ "ibm_powerkvm" IBM PowerKVM
+ "kvmibm" KVM for IBM z Systems
+ "linuxmint" Linux Mint
+ "mageia" Mageia
+ "mandriva" Mandriva Linux
+ "parallels" Parallels
+ "pidora" Pidora
+ "raspbian" Raspbian
+ "oracle" Oracle Linux (and Oracle Enterprise Linux)
+ "scientific" Scientific Linux
+ "slackware" Slackware
+ "xenserver" XenServer
+ "openbsd" OpenBSD
+ "netbsd" NetBSD
+ "freebsd" FreeBSD
+ ============== =========================================
+
+ If you need a distro added to this set of reliable IDs, or if you find
+ that the :func:`distro.id` function returns a different distro ID for
+ one of the listed distros, please create an issue in the
+ `distro issue tracker`_.
+
+ **Lookup hierarchy and transformations:**
+
+ First, the ID is obtained from the following sources, in the specified
+ order. The first available and non-empty value is used:
+
+ * the value of the "ID" attribute of the os-release file,
+
+ * the value of the "Distributor ID" attribute returned by the lsb_release
+ command,
+
+ * the first part of the file name of the distro release file.
+
+ The ID value determined this way is then passed through the following
+ transformations before it is returned by this method:
+
+ * it is translated to lower case,
+
+ * blanks (which should not be there anyway) are translated to underscores,
+
+ * a normalization of the ID is performed, based upon
+ `normalization tables`_. The purpose of this normalization is to ensure
+ that the ID is as reliable as possible, even across incompatible changes
+ in the OS distributions. A common reason for an incompatible change is
+ the addition of an os-release file, or the addition of the lsb_release
+ command, with ID values that differ from what was previously determined
+ from the distro release file name.
+ """
+ return _distro.id()
+
+
+def name(pretty=False):
+ """
+ Return the name of the current OS distribution, as a human-readable
+ string.
+
+ If *pretty* is false, the name is returned without version or codename
+ (e.g. "CentOS Linux").
+
+ If *pretty* is true, the version and codename are appended
+ (e.g. "CentOS Linux 7.1.1503 (Core)").
+
+ **Lookup hierarchy:**
+
+ The name is obtained from the following sources, in the specified order.
+ The first available and non-empty value is used:
+
+ * If *pretty* is false:
+
+ - the value of the "NAME" attribute of the os-release file,
+
+ - the value of the "Distributor ID" attribute returned by the lsb_release
+ command,
+
+ - the value of the "<name>" field of the distro release file.
+
+ * If *pretty* is true:
+
+ - the value of the "PRETTY_NAME" attribute of the os-release file,
+
+ - the value of the "Description" attribute returned by the lsb_release
+ command,
+
+ - the value of the "<name>" field of the distro release file, appended
+ with the value of the pretty version ("<version_id>" and "<codename>"
+ fields) of the distro release file, if available.
+ """
+ return _distro.name(pretty)
+
+
+def version(pretty=False, best=False):
+ """
+ Return the version of the current OS distribution, as a human-readable
+ string.
+
+ If *pretty* is false, the version is returned without codename (e.g.
+ "7.0").
+
+ If *pretty* is true, the codename in parentheses is appended, if the
+ codename is non-empty (e.g. "7.0 (Maipo)").
+
+ Some distributions provide version numbers with different precisions in
+ the different sources of distribution information. Examining the different
+ sources in a fixed priority order does not always yield the most precise
+ version (e.g. for Debian 8.2, or CentOS 7.1).
+
+ The *best* parameter can be used to control the approach for the returned
+ version:
+
+ If *best* is false, the first non-empty version number in priority order of
+ the examined sources is returned.
+
+ If *best* is true, the most precise version number out of all examined
+ sources is returned.
+
+ **Lookup hierarchy:**
+
+ In all cases, the version number is obtained from the following sources.
+ If *best* is false, this order represents the priority order:
+
+ * the value of the "VERSION_ID" attribute of the os-release file,
+ * the value of the "Release" attribute returned by the lsb_release
+ command,
+ * the version number parsed from the "<version_id>" field of the first line
+ of the distro release file,
+ * the version number parsed from the "PRETTY_NAME" attribute of the
+ os-release file, if it follows the format of the distro release files,
+ * the version number parsed from the "Description" attribute returned by
+ the lsb_release command, if it follows the format of the distro release
+ files.
+ """
+ return _distro.version(pretty, best)
+
+
+def version_parts(best=False):
+ """
+ Return the version of the current OS distribution as a tuple
+ ``(major, minor, build_number)`` with items as follows:
+
+ * ``major``: The result of :func:`distro.major_version`.
+
+ * ``minor``: The result of :func:`distro.minor_version`.
+
+ * ``build_number``: The result of :func:`distro.build_number`.
+
+ For a description of the *best* parameter, see the :func:`distro.version`
+ method.
+ """
+ return _distro.version_parts(best)
+
+
+def major_version(best=False):
+ """
+ Return the major version of the current OS distribution, as a string,
+ if provided.
+ Otherwise, the empty string is returned. The major version is the first
+ part of the dot-separated version string.
+
+ For a description of the *best* parameter, see the :func:`distro.version`
+ method.
+ """
+ return _distro.major_version(best)
+
+
+def minor_version(best=False):
+ """
+ Return the minor version of the current OS distribution, as a string,
+ if provided.
+ Otherwise, the empty string is returned. The minor version is the second
+ part of the dot-separated version string.
+
+ For a description of the *best* parameter, see the :func:`distro.version`
+ method.
+ """
+ return _distro.minor_version(best)
+
+
+def build_number(best=False):
+ """
+ Return the build number of the current OS distribution, as a string,
+ if provided.
+ Otherwise, the empty string is returned. The build number is the third part
+ of the dot-separated version string.
+
+ For a description of the *best* parameter, see the :func:`distro.version`
+ method.
+ """
+ return _distro.build_number(best)
+
+
+def like():
+ """
+ Return a space-separated list of distro IDs of distributions that are
+ closely related to the current OS distribution in regards to packaging
+ and programming interfaces, for example, distributions of which the
+ current distribution is a derivative.
+
+ **Lookup hierarchy:**
+
+ This information item is only provided by the os-release file.
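+ For example, on Ubuntu the os-release file typically contains
+ ``ID_LIKE=debian``, so this function returns "debian" there.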
+ For details, see the description of the "ID_LIKE" attribute in the
+ `os-release man page
+ <https://www.freedesktop.org/software/systemd/man/os-release.html>`_.
+ """
+ return _distro.like()
+
+
+def codename():
+ """
+ Return the codename for the release of the current OS distribution,
+ as a string.
+
+ If the distribution does not have a codename, an empty string is returned.
+
+ Note that the returned codename is not always really a codename. For
+ example, openSUSE returns "x86_64". This function does not handle such
+ cases in any special way and just returns the string it finds, if any.
+
+ **Lookup hierarchy:**
+
+ * the codename within the "VERSION" attribute of the os-release file, if
+ provided,
+
+ * the value of the "Codename" attribute returned by the lsb_release
+ command,
+
+ * the value of the "<codename>" field of the distro release file.
+ """
+ return _distro.codename()
+
+
+def info(pretty=False, best=False):
+ """
+ Return certain machine-readable information items about the current OS
+ distribution in a dictionary, as shown in the following example:
+
+ .. sourcecode:: python
+
+ {
+ 'id': 'rhel',
+ 'version': '7.0',
+ 'version_parts': {
+ 'major': '7',
+ 'minor': '0',
+ 'build_number': ''
+ },
+ 'like': 'fedora',
+ 'codename': 'Maipo'
+ }
+
+ The dictionary structure and keys are always the same, regardless of which
+ information items are available in the underlying data sources. The values
+ for the various keys are as follows:
+
+ * ``id``: The result of :func:`distro.id`.
+
+ * ``version``: The result of :func:`distro.version`.
+
+ * ``version_parts -> major``: The result of :func:`distro.major_version`.
+
+ * ``version_parts -> minor``: The result of :func:`distro.minor_version`.
+
+ * ``version_parts -> build_number``: The result of
+ :func:`distro.build_number`.
+
+ * ``like``: The result of :func:`distro.like`.
+
+ * ``codename``: The result of :func:`distro.codename`.
+
+ For a description of the *pretty* and *best* parameters, see the
+ :func:`distro.version` method.
+ """
+ return _distro.info(pretty, best)
+
+
+def os_release_info():
+ """
+ Return a dictionary containing key-value pairs for the information items
+ from the os-release file data source of the current OS distribution.
+
+ See `os-release file`_ for details about these information items.
+ """
+ return _distro.os_release_info()
+
+
+def lsb_release_info():
+ """
+ Return a dictionary containing key-value pairs for the information items
+ from the lsb_release command data source of the current OS distribution.
+
+ See `lsb_release command output`_ for details about these information
+ items.
+ """
+ return _distro.lsb_release_info()
+
+
+def distro_release_info():
+ """
+ Return a dictionary containing key-value pairs for the information items
+ from the distro release file data source of the current OS distribution.
+
+ See `distro release file`_ for details about these information items.
+ """
+ return _distro.distro_release_info()
+
+
+def uname_info():
+ """
+ Return a dictionary containing key-value pairs for the information items
+ from the uname command data source of the current OS distribution.
+ """
+ return _distro.uname_info()
+
+
+def os_release_attr(attribute):
+ """
+ Return a single named information item from the os-release file data source
+ of the current OS distribution.
+
+ Parameters:
+
+ * ``attribute`` (string): Key of the information item.
+
+ Returns:
+
+ * (string): Value of the information item, if the item exists.
+ The empty string, if the item does not exist.
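+
+ For example, ``distro.os_release_attr('id_like')`` returns the raw
+ "ID_LIKE" value that also backs :func:`distro.like`.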
+
+ See `os-release file`_ for details about these information items.
+ """
+ return _distro.os_release_attr(attribute)
+
+
+def lsb_release_attr(attribute):
+ """
+ Return a single named information item from the lsb_release command output
+ data source of the current OS distribution.
+
+ Parameters:
+
+ * ``attribute`` (string): Key of the information item.
+
+ Returns:
+
+ * (string): Value of the information item, if the item exists.
+ The empty string, if the item does not exist.
+
+ See `lsb_release command output`_ for details about these information
+ items.
+ """
+ return _distro.lsb_release_attr(attribute)
+
+
+def distro_release_attr(attribute):
+ """
+ Return a single named information item from the distro release file
+ data source of the current OS distribution.
+
+ Parameters:
+
+ * ``attribute`` (string): Key of the information item.
+
+ Returns:
+
+ * (string): Value of the information item, if the item exists.
+ The empty string, if the item does not exist.
+
+ See `distro release file`_ for details about these information items.
+ """
+ return _distro.distro_release_attr(attribute)
+
+
+def uname_attr(attribute):
+ """
+ Return a single named information item from the uname command output
+ data source of the current OS distribution.
+
+ Parameters:
+
+ * ``attribute`` (string): Key of the information item.
+
+ Returns:
+
+ * (string): Value of the information item, if the item exists.
+ The empty string, if the item does not exist.
+ """
+ return _distro.uname_attr(attribute)
+
+
+class cached_property(object):
+ """A version of @property which caches the value. On access, it calls the
+ underlying function and sets the value in `__dict__` so future accesses
+ will not re-call the property.
+ """
+ def __init__(self, f):
+ self._fname = f.__name__
+ self._f = f
+
+ def __get__(self, obj, owner):
+ assert obj is not None, 'call {} on an instance'.format(self._fname)
+ ret = obj.__dict__[self._fname] = self._f(obj)
+ return ret
+
+
+class LinuxDistribution(object):
+ """
+ Provides information about an OS distribution.
+
+ This package creates a private module-global instance of this class with
+ default initialization arguments, which is used by the
+ `consolidated accessor functions`_ and `single source accessor functions`_.
+ By using default initialization arguments, that module-global instance
+ returns data about the current OS distribution (i.e. the distro this
+ package runs on).
+
+ Normally, it is not necessary to create additional instances of this class.
+ However, in situations where control is needed over the exact data sources
+ that are used, instances of this class can be created with a specific
+ distro release file, or a specific os-release file, or without invoking the
+ lsb_release command.
+ """
+
+ def __init__(self,
+ include_lsb=True,
+ os_release_file='',
+ distro_release_file='',
+ include_uname=True):
+ """
+ The initialization method of this class gathers information from the
+ available data sources, and stores that in private instance attributes.
+ Subsequent access to the information items uses these private instance
+ attributes, so that the data sources are read only once.
+
+ Parameters:
+
+ * ``include_lsb`` (bool): Controls whether the
+ `lsb_release command output`_ is included as a data source.
+
+ If the lsb_release command is not available in the program execution
+ path, the data source for the lsb_release command will be empty.
+
+ * ``os_release_file`` (string): The path name of the
+ `os-release file`_ that is to be used as a data source.
+
+ An empty string (the default) will cause the default path name to
+ be used (see `os-release file`_ for details).
+
+ If the specified or defaulted os-release file does not exist, the
+ data source for the os-release file will be empty.
+
+ * ``distro_release_file`` (string): The path name of the
+ `distro release file`_ that is to be used as a data source.
+
+ An empty string (the default) will cause a default search algorithm
+ to be used (see `distro release file`_ for details).
+
+ If the specified distro release file does not exist, or if no default
+ distro release file can be found, the data source for the distro
+ release file will be empty.
+
+ * ``include_uname`` (bool): Controls whether uname command output is
+ included as a data source. If the uname command is not available in
+ the program execution path the data source for the uname command will
+ be empty.
+
+ Public instance attributes:
+
+ * ``os_release_file`` (string): The path name of the
+ `os-release file`_ that is actually used as a data source. The
+ empty string if no os-release file is used as a data source.
+
+ * ``distro_release_file`` (string): The path name of the
+ `distro release file`_ that is actually used as a data source. The
+ empty string if no distro release file is used as a data source.
+
+ * ``include_lsb`` (bool): The result of the ``include_lsb`` parameter.
+ This controls whether the lsb information will be loaded.
+
+ * ``include_uname`` (bool): The result of the ``include_uname``
+ parameter. This controls whether the uname information will
+ be loaded.
+
+ Raises:
+
+ * :py:exc:`IOError`: Some I/O issue with an os-release file or distro
+ release file.
+
+ * :py:exc:`subprocess.CalledProcessError`: The lsb_release command had
+ some issue (other than not being available in the program execution
+ path).
+
+ * :py:exc:`UnicodeError`: A data source has unexpected characters or
+ uses an unexpected encoding.
+ """
+ self.os_release_file = os_release_file or \
+ os.path.join(_UNIXCONFDIR, _OS_RELEASE_BASENAME)
+ self.distro_release_file = distro_release_file or '' # updated later
+ self.include_lsb = include_lsb
+ self.include_uname = include_uname
+
+ def __repr__(self):
+ """Return repr of all info.
+ """
+ return \
+ "LinuxDistribution(" \
+ "os_release_file={self.os_release_file!r}, " \
+ "distro_release_file={self.distro_release_file!r}, " \
+ "include_lsb={self.include_lsb!r}, " \
+ "include_uname={self.include_uname!r}, " \
+ "_os_release_info={self._os_release_info!r}, " \
+ "_lsb_release_info={self._lsb_release_info!r}, " \
+ "_distro_release_info={self._distro_release_info!r}, " \
+ "_uname_info={self._uname_info!r})".format(
+ self=self)
+
+ def linux_distribution(self, full_distribution_name=True):
+ """
+ Return information about the OS distribution that is compatible
+ with Python's :func:`platform.linux_distribution`, supporting a subset
+ of its parameters.
+
+ For details, see :func:`distro.linux_distribution`.
+ """
+ return (
+ self.name() if full_distribution_name else self.id(),
+ self.version(),
+ self.codename()
+ )
+
+ def id(self):
+ """Return the distro ID of the OS distribution, as a string.
+
+ For details, see :func:`distro.id`.
+ """ + def normalize(distro_id, table): + distro_id = distro_id.lower().replace(' ', '_') + return table.get(distro_id, distro_id) + + distro_id = self.os_release_attr('id') + if distro_id: + return normalize(distro_id, NORMALIZED_OS_ID) + + distro_id = self.lsb_release_attr('distributor_id') + if distro_id: + return normalize(distro_id, NORMALIZED_LSB_ID) + + distro_id = self.distro_release_attr('id') + if distro_id: + return normalize(distro_id, NORMALIZED_DISTRO_ID) + + distro_id = self.uname_attr('id') + if distro_id: + return normalize(distro_id, NORMALIZED_DISTRO_ID) + + return '' + + def name(self, pretty=False): + """ + Return the name of the OS distribution, as a string. + + For details, see :func:`distro.name`. + """ + name = self.os_release_attr('name') \ + or self.lsb_release_attr('distributor_id') \ + or self.distro_release_attr('name') \ + or self.uname_attr('name') + if pretty: + name = self.os_release_attr('pretty_name') \ + or self.lsb_release_attr('description') + if not name: + name = self.distro_release_attr('name') \ + or self.uname_attr('name') + version = self.version(pretty=True) + if version: + name = name + ' ' + version + return name or '' + + def version(self, pretty=False, best=False): + """ + Return the version of the OS distribution, as a string. + + For details, see :func:`distro.version`. + """ + versions = [ + self.os_release_attr('version_id'), + self.lsb_release_attr('release'), + self.distro_release_attr('version_id'), + self._parse_distro_release_content( + self.os_release_attr('pretty_name')).get('version_id', ''), + self._parse_distro_release_content( + self.lsb_release_attr('description')).get('version_id', ''), + self.uname_attr('release') + ] + version = '' + if best: + # This algorithm uses the last version in priority order that has + # the best precision. If the versions are not in conflict, that + # does not matter; otherwise, using the last one instead of the + # first one might be considered a surprise. + for v in versions: + if v.count(".") > version.count(".") or version == '': + version = v + else: + for v in versions: + if v != '': + version = v + break + if pretty and version and self.codename(): + version = u'{0} ({1})'.format(version, self.codename()) + return version + + def version_parts(self, best=False): + """ + Return the version of the OS distribution, as a tuple of version + numbers. + + For details, see :func:`distro.version_parts`. + """ + version_str = self.version(best=best) + if version_str: + version_regex = re.compile(r'(\d+)\.?(\d+)?\.?(\d+)?') + matches = version_regex.match(version_str) + if matches: + major, minor, build_number = matches.groups() + return major, minor or '', build_number or '' + return '', '', '' + + def major_version(self, best=False): + """ + Return the major version number of the current distribution. + + For details, see :func:`distro.major_version`. + """ + return self.version_parts(best)[0] + + def minor_version(self, best=False): + """ + Return the minor version number of the current distribution. + + For details, see :func:`distro.minor_version`. + """ + return self.version_parts(best)[1] + + def build_number(self, best=False): + """ + Return the build number of the current distribution. + + For details, see :func:`distro.build_number`. + """ + return self.version_parts(best)[2] + + def like(self): + """ + Return the IDs of distributions that are like the OS distribution. + + For details, see :func:`distro.like`. 
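+
+        Example (editor's illustrative sketch, not part of the vendored
+        module; the value depends on the host system, shown here via the
+        module-level accessor)::
+
+            >>> import distro
+            >>> distro.like()    # e.g. on a CentOS host, from ID_LIKE
+            'rhel fedora'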
+        """
+        return self.os_release_attr('id_like') or ''
+
+    def codename(self):
+        """
+        Return the codename of the OS distribution.
+
+        For details, see :func:`distro.codename`.
+        """
+        try:
+            # Handle os_release specially since distros might purposefully set
+            # this to empty string to have no codename
+            return self._os_release_info['codename']
+        except KeyError:
+            return self.lsb_release_attr('codename') \
+                or self.distro_release_attr('codename') \
+                or ''
+
+    def info(self, pretty=False, best=False):
+        """
+        Return certain machine-readable information about the OS
+        distribution.
+
+        For details, see :func:`distro.info`.
+        """
+        return dict(
+            id=self.id(),
+            version=self.version(pretty, best),
+            version_parts=dict(
+                major=self.major_version(best),
+                minor=self.minor_version(best),
+                build_number=self.build_number(best)
+            ),
+            like=self.like(),
+            codename=self.codename(),
+        )
+
+    def os_release_info(self):
+        """
+        Return a dictionary containing key-value pairs for the information
+        items from the os-release file data source of the OS distribution.
+
+        For details, see :func:`distro.os_release_info`.
+        """
+        return self._os_release_info
+
+    def lsb_release_info(self):
+        """
+        Return a dictionary containing key-value pairs for the information
+        items from the lsb_release command data source of the OS
+        distribution.
+
+        For details, see :func:`distro.lsb_release_info`.
+        """
+        return self._lsb_release_info
+
+    def distro_release_info(self):
+        """
+        Return a dictionary containing key-value pairs for the information
+        items from the distro release file data source of the OS
+        distribution.
+
+        For details, see :func:`distro.distro_release_info`.
+        """
+        return self._distro_release_info
+
+    def uname_info(self):
+        """
+        Return a dictionary containing key-value pairs for the information
+        items from the uname command data source of the OS distribution.
+
+        For details, see :func:`distro.uname_info`.
+        """
+        return self._uname_info
+
+    def os_release_attr(self, attribute):
+        """
+        Return a single named information item from the os-release file data
+        source of the OS distribution.
+
+        For details, see :func:`distro.os_release_attr`.
+        """
+        return self._os_release_info.get(attribute, '')
+
+    def lsb_release_attr(self, attribute):
+        """
+        Return a single named information item from the lsb_release command
+        output data source of the OS distribution.
+
+        For details, see :func:`distro.lsb_release_attr`.
+        """
+        return self._lsb_release_info.get(attribute, '')
+
+    def distro_release_attr(self, attribute):
+        """
+        Return a single named information item from the distro release file
+        data source of the OS distribution.
+
+        For details, see :func:`distro.distro_release_attr`.
+        """
+        return self._distro_release_info.get(attribute, '')
+
+    def uname_attr(self, attribute):
+        """
+        Return a single named information item from the uname command
+        output data source of the OS distribution.
+
+        For details, see :func:`distro.uname_attr`.
+        """
+        return self._uname_info.get(attribute, '')
+
+    @cached_property
+    def _os_release_info(self):
+        """
+        Get the information items from the specified os-release file.
+
+        Returns:
+            A dictionary containing all information items.
+        """
+        if os.path.isfile(self.os_release_file):
+            with open(self.os_release_file) as release_file:
+                return self._parse_os_release_content(release_file)
+        return {}
+
+    @staticmethod
+    def _parse_os_release_content(lines):
+        """
+        Parse the lines of an os-release file.
+
+        Parameters:
+
+        * lines: Iterable through the lines in the os-release file.
+                 Each line must be a unicode string or a UTF-8 encoded byte
+                 string.
+
+        Returns:
+            A dictionary containing all information items.
+        """
+        props = {}
+        lexer = shlex.shlex(lines, posix=True)
+        lexer.whitespace_split = True
+
+        # The shlex module defines its `wordchars` variable using literals,
+        # making it dependent on the encoding of the Python source file.
+        # In Python 2.6 and 2.7, the shlex source file is encoded in
+        # 'iso-8859-1', and the `wordchars` variable is defined as a byte
+        # string. This causes a UnicodeDecodeError to be raised when the
+        # parsed content is a unicode object. The following fix resolves that
+        # (... but it should be fixed in shlex...):
+        if sys.version_info[0] == 2 and isinstance(lexer.wordchars, bytes):
+            lexer.wordchars = lexer.wordchars.decode('iso-8859-1')
+
+        tokens = list(lexer)
+        for token in tokens:
+            # At this point, all shell-like parsing has been done (i.e.
+            # comments processed, quotes and backslash escape sequences
+            # processed, multi-line values assembled, trailing newlines
+            # stripped, etc.), so the tokens are now either:
+            # * variable assignments: var=value
+            # * commands or their arguments (not allowed in os-release)
+            if '=' in token:
+                k, v = token.split('=', 1)
+                if isinstance(v, bytes):
+                    v = v.decode('utf-8')
+                props[k.lower()] = v
+            else:
+                # Ignore any tokens that are not variable assignments
+                pass
+
+        if 'version_codename' in props:
+            # os-release added a version_codename field. Use that in
+            # preference to anything else. Note that some distros purposefully
+            # do not have code names. They should be setting
+            # version_codename=""
+            props['codename'] = props['version_codename']
+        elif 'ubuntu_codename' in props:
+            # Same as above but a non-standard field name used on older Ubuntus
+            props['codename'] = props['ubuntu_codename']
+        elif 'version' in props:
+            # If there is no version_codename, parse it from the version
+            codename = re.search(r'(\(\D+\))|,(\s+)?\D+', props['version'])
+            if codename:
+                # The codename appears within parentheses.
+                codename = codename.group()
+                codename = codename.strip('()')
+                codename = codename.strip(',')
+                codename = codename.strip()
+                props['codename'] = codename
+
+        return props
+
+    @cached_property
+    def _lsb_release_info(self):
+        """
+        Get the information items from the lsb_release command output.
+
+        Returns:
+            A dictionary containing all information items.
+        """
+        if not self.include_lsb:
+            return {}
+        with open(os.devnull, 'w') as devnull:
+            try:
+                cmd = ('lsb_release', '-a')
+                stdout = subprocess.check_output(cmd, stderr=devnull)
+            except OSError:  # Command not found
+                return {}
+        content = stdout.decode(sys.getfilesystemencoding()).splitlines()
+        return self._parse_lsb_release_content(content)
+
+    @staticmethod
+    def _parse_lsb_release_content(lines):
+        """
+        Parse the output of the lsb_release command.
+
+        Parameters:
+
+        * lines: Iterable through the lines of the lsb_release output.
+                 Each line must be a unicode string or a UTF-8 encoded byte
+                 string.
+
+        Returns:
+            A dictionary containing all information items.
+        """
+        props = {}
+        for line in lines:
+            kv = line.strip('\n').split(':', 1)
+            if len(kv) != 2:
+                # Ignore lines without colon.
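+                # (Editor's note, illustrative only: lines that do contain a
+                # colon are folded into the dict below with the key
+                # lower-cased and spaces replaced by underscores, so a line
+                # such as "Distributor ID:\tUbuntu" would end up as
+                # props['distributor_id'] = 'Ubuntu'.)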
+ continue + k, v = kv + props.update({k.replace(' ', '_').lower(): v.strip()}) + return props + + @cached_property + def _uname_info(self): + with open(os.devnull, 'w') as devnull: + try: + cmd = ('uname', '-rs') + stdout = subprocess.check_output(cmd, stderr=devnull) + except OSError: + return {} + content = stdout.decode(sys.getfilesystemencoding()).splitlines() + return self._parse_uname_content(content) + + @staticmethod + def _parse_uname_content(lines): + props = {} + match = re.search(r'^([^\s]+)\s+([\d\.]+)', lines[0].strip()) + if match: + name, version = match.groups() + + # This is to prevent the Linux kernel version from + # appearing as the 'best' version on otherwise + # identifiable distributions. + if name == 'Linux': + return {} + props['id'] = name.lower() + props['name'] = name + props['release'] = version + return props + + @cached_property + def _distro_release_info(self): + """ + Get the information items from the specified distro release file. + + Returns: + A dictionary containing all information items. + """ + if self.distro_release_file: + # If it was specified, we use it and parse what we can, even if + # its file name or content does not match the expected pattern. + distro_info = self._parse_distro_release_file( + self.distro_release_file) + basename = os.path.basename(self.distro_release_file) + # The file name pattern for user-specified distro release files + # is somewhat more tolerant (compared to when searching for the + # file), because we want to use what was specified as best as + # possible. + match = _DISTRO_RELEASE_BASENAME_PATTERN.match(basename) + if 'name' in distro_info \ + and 'cloudlinux' in distro_info['name'].lower(): + distro_info['id'] = 'cloudlinux' + elif match: + distro_info['id'] = match.group(1) + return distro_info + else: + try: + basenames = os.listdir(_UNIXCONFDIR) + # We sort for repeatability in cases where there are multiple + # distro specific files; e.g. CentOS, Oracle, Enterprise all + # containing `redhat-release` on top of their own. + basenames.sort() + except OSError: + # This may occur when /etc is not readable but we can't be + # sure about the *-release files. Check common entries of + # /etc for information. If they turn out to not be there the + # error is handled in `_parse_distro_release_file()`. + basenames = ['SuSE-release', + 'arch-release', + 'base-release', + 'centos-release', + 'fedora-release', + 'gentoo-release', + 'mageia-release', + 'mandrake-release', + 'mandriva-release', + 'mandrivalinux-release', + 'manjaro-release', + 'oracle-release', + 'redhat-release', + 'sl-release', + 'slackware-version'] + for basename in basenames: + if basename in _DISTRO_RELEASE_IGNORE_BASENAMES: + continue + match = _DISTRO_RELEASE_BASENAME_PATTERN.match(basename) + if match: + filepath = os.path.join(_UNIXCONFDIR, basename) + distro_info = self._parse_distro_release_file(filepath) + if 'name' in distro_info: + # The name is always present if the pattern matches + self.distro_release_file = filepath + distro_info['id'] = match.group(1) + if 'cloudlinux' in distro_info['name'].lower(): + distro_info['id'] = 'cloudlinux' + return distro_info + return {} + + def _parse_distro_release_file(self, filepath): + """ + Parse a distro release file. + + Parameters: + + * filepath: Path name of the distro release file. + + Returns: + A dictionary containing all information items. + """ + try: + with open(filepath) as fp: + # Only parse the first line. For instance, on SLES there + # are multiple lines. We don't want them... 
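+                # (Editor's note, illustrative only: a typical first line
+                # such as "CentOS Linux release 8.1.1911 (Core)" is parsed
+                # by _parse_distro_release_content() into roughly
+                # {'name': 'CentOS Linux', 'version_id': '8.1.1911',
+                #  'codename': 'Core'}.)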
+ return self._parse_distro_release_content(fp.readline()) + except (OSError, IOError): + # Ignore not being able to read a specific, seemingly version + # related file. + # See https://github.com/nir0s/distro/issues/162 + return {} + + @staticmethod + def _parse_distro_release_content(line): + """ + Parse a line from a distro release file. + + Parameters: + * line: Line from the distro release file. Must be a unicode string + or a UTF-8 encoded byte string. + + Returns: + A dictionary containing all information items. + """ + if isinstance(line, bytes): + line = line.decode('utf-8') + matches = _DISTRO_RELEASE_CONTENT_REVERSED_PATTERN.match( + line.strip()[::-1]) + distro_info = {} + if matches: + # regexp ensures non-None + distro_info['name'] = matches.group(3)[::-1] + if matches.group(2): + distro_info['version_id'] = matches.group(2)[::-1] + if matches.group(1): + distro_info['codename'] = matches.group(1)[::-1] + elif line: + distro_info['name'] = line.strip() + return distro_info + + +_distro = LinuxDistribution() + + +def main(): + logger = logging.getLogger(__name__) + logger.setLevel(logging.DEBUG) + logger.addHandler(logging.StreamHandler(sys.stdout)) + + parser = argparse.ArgumentParser(description="OS distro info tool") + parser.add_argument( + '--json', + '-j', + help="Output in machine readable format", + action="store_true") + args = parser.parse_args() + + if args.json: + logger.info(json.dumps(info(), indent=4, sort_keys=True)) + else: + logger.info('Name: %s', name(pretty=True)) + distribution_version = version(pretty=True) + logger.info('Version: %s', distribution_version) + distribution_codename = codename() + logger.info('Codename: %s', distribution_codename) + + +if __name__ == '__main__': + main() diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/easy_install.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/easy_install.py new file mode 100644 index 0000000000000000000000000000000000000000..d87e984034b6e6e9eb456ebcb2b3f420c07a48bc --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/easy_install.py @@ -0,0 +1,5 @@ +"""Run the EasyInstall command""" + +if __name__ == '__main__': + from setuptools.command.easy_install import main + main() diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..ecd917aa016dea4e807ef051ea048b9828b5f450 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/__init__.py @@ -0,0 +1,33 @@ +# Protocol Buffers - Google's data interchange format +# Copyright 2008 Google Inc. All rights reserved. +# https://developers.google.com/protocol-buffers/ +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions are +# met: +# +# * Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# * Redistributions in binary form must reproduce the above +# copyright notice, this list of conditions and the following disclaimer +# in the documentation and/or other materials provided with the +# distribution. +# * Neither the name of Google Inc. 
nor the names of its +# contributors may be used to endorse or promote products derived from +# this software without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +# Copyright 2007 Google Inc. All Rights Reserved. + +__version__ = '3.15.4' diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/any_pb2.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/any_pb2.py new file mode 100644 index 0000000000000000000000000000000000000000..e3b6ad81b200e190c3fe632d2ec271dfc8247de5 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/any_pb2.py @@ -0,0 +1,78 @@ +# -*- coding: utf-8 -*- +# Generated by the protocol buffer compiler. DO NOT EDIT! +# source: google/protobuf/any.proto +"""Generated protocol buffer code.""" +from google.protobuf import descriptor as _descriptor +from google.protobuf import message as _message +from google.protobuf import reflection as _reflection +from google.protobuf import symbol_database as _symbol_database +# @@protoc_insertion_point(imports) + +_sym_db = _symbol_database.Default() + + + + +DESCRIPTOR = _descriptor.FileDescriptor( + name='google/protobuf/any.proto', + package='google.protobuf', + syntax='proto3', + serialized_options=b'\n\023com.google.protobufB\010AnyProtoP\001Z,google.golang.org/protobuf/types/known/anypb\242\002\003GPB\252\002\036Google.Protobuf.WellKnownTypes', + create_key=_descriptor._internal_create_key, + serialized_pb=b'\n\x19google/protobuf/any.proto\x12\x0fgoogle.protobuf\"&\n\x03\x41ny\x12\x10\n\x08type_url\x18\x01 \x01(\t\x12\r\n\x05value\x18\x02 \x01(\x0c\x42v\n\x13\x63om.google.protobufB\x08\x41nyProtoP\x01Z,google.golang.org/protobuf/types/known/anypb\xa2\x02\x03GPB\xaa\x02\x1eGoogle.Protobuf.WellKnownTypesb\x06proto3' +) + + + + +_ANY = _descriptor.Descriptor( + name='Any', + full_name='google.protobuf.Any', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='type_url', full_name='google.protobuf.Any.type_url', index=0, + number=1, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='value', full_name='google.protobuf.Any.value', index=1, + number=2, type=12, cpp_type=9, label=1, + has_default_value=False, default_value=b"", + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + 
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=None, + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=46, + serialized_end=84, +) + +DESCRIPTOR.message_types_by_name['Any'] = _ANY +_sym_db.RegisterFileDescriptor(DESCRIPTOR) + +Any = _reflection.GeneratedProtocolMessageType('Any', (_message.Message,), { + 'DESCRIPTOR' : _ANY, + '__module__' : 'google.protobuf.any_pb2' + # @@protoc_insertion_point(class_scope:google.protobuf.Any) + }) +_sym_db.RegisterMessage(Any) + + +DESCRIPTOR._options = None +# @@protoc_insertion_point(module_scope) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/api_pb2.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/api_pb2.py new file mode 100644 index 0000000000000000000000000000000000000000..5ecc064979ef0dcfef435e3f4f8cdc1e728a6918 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/api_pb2.py @@ -0,0 +1,252 @@ +# -*- coding: utf-8 -*- +# Generated by the protocol buffer compiler. DO NOT EDIT! +# source: google/protobuf/api.proto +"""Generated protocol buffer code.""" +from google.protobuf import descriptor as _descriptor +from google.protobuf import message as _message +from google.protobuf import reflection as _reflection +from google.protobuf import symbol_database as _symbol_database +# @@protoc_insertion_point(imports) + +_sym_db = _symbol_database.Default() + + +from google.protobuf import source_context_pb2 as google_dot_protobuf_dot_source__context__pb2 +from google.protobuf import type_pb2 as google_dot_protobuf_dot_type__pb2 + + +DESCRIPTOR = _descriptor.FileDescriptor( + name='google/protobuf/api.proto', + package='google.protobuf', + syntax='proto3', + serialized_options=b'\n\023com.google.protobufB\010ApiProtoP\001Z,google.golang.org/protobuf/types/known/apipb\242\002\003GPB\252\002\036Google.Protobuf.WellKnownTypes', + create_key=_descriptor._internal_create_key, + serialized_pb=b'\n\x19google/protobuf/api.proto\x12\x0fgoogle.protobuf\x1a$google/protobuf/source_context.proto\x1a\x1agoogle/protobuf/type.proto\"\x81\x02\n\x03\x41pi\x12\x0c\n\x04name\x18\x01 \x01(\t\x12(\n\x07methods\x18\x02 \x03(\x0b\x32\x17.google.protobuf.Method\x12(\n\x07options\x18\x03 \x03(\x0b\x32\x17.google.protobuf.Option\x12\x0f\n\x07version\x18\x04 \x01(\t\x12\x36\n\x0esource_context\x18\x05 \x01(\x0b\x32\x1e.google.protobuf.SourceContext\x12&\n\x06mixins\x18\x06 \x03(\x0b\x32\x16.google.protobuf.Mixin\x12\'\n\x06syntax\x18\x07 \x01(\x0e\x32\x17.google.protobuf.Syntax\"\xd5\x01\n\x06Method\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x18\n\x10request_type_url\x18\x02 \x01(\t\x12\x19\n\x11request_streaming\x18\x03 \x01(\x08\x12\x19\n\x11response_type_url\x18\x04 \x01(\t\x12\x1a\n\x12response_streaming\x18\x05 \x01(\x08\x12(\n\x07options\x18\x06 \x03(\x0b\x32\x17.google.protobuf.Option\x12\'\n\x06syntax\x18\x07 \x01(\x0e\x32\x17.google.protobuf.Syntax\"#\n\x05Mixin\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x0c\n\x04root\x18\x02 \x01(\tBv\n\x13\x63om.google.protobufB\x08\x41piProtoP\x01Z,google.golang.org/protobuf/types/known/apipb\xa2\x02\x03GPB\xaa\x02\x1eGoogle.Protobuf.WellKnownTypesb\x06proto3' + , + dependencies=[google_dot_protobuf_dot_source__context__pb2.DESCRIPTOR,google_dot_protobuf_dot_type__pb2.DESCRIPTOR,]) + + + + +_API = 
_descriptor.Descriptor( + name='Api', + full_name='google.protobuf.Api', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='name', full_name='google.protobuf.Api.name', index=0, + number=1, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='methods', full_name='google.protobuf.Api.methods', index=1, + number=2, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='options', full_name='google.protobuf.Api.options', index=2, + number=3, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='version', full_name='google.protobuf.Api.version', index=3, + number=4, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='source_context', full_name='google.protobuf.Api.source_context', index=4, + number=5, type=11, cpp_type=10, label=1, + has_default_value=False, default_value=None, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='mixins', full_name='google.protobuf.Api.mixins', index=5, + number=6, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='syntax', full_name='google.protobuf.Api.syntax', index=6, + number=7, type=14, cpp_type=8, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=None, + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=113, + serialized_end=370, +) + + +_METHOD = _descriptor.Descriptor( + name='Method', + full_name='google.protobuf.Method', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='name', full_name='google.protobuf.Method.name', index=0, + number=1, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, 
containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='request_type_url', full_name='google.protobuf.Method.request_type_url', index=1, + number=2, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='request_streaming', full_name='google.protobuf.Method.request_streaming', index=2, + number=3, type=8, cpp_type=7, label=1, + has_default_value=False, default_value=False, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='response_type_url', full_name='google.protobuf.Method.response_type_url', index=3, + number=4, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='response_streaming', full_name='google.protobuf.Method.response_streaming', index=4, + number=5, type=8, cpp_type=7, label=1, + has_default_value=False, default_value=False, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='options', full_name='google.protobuf.Method.options', index=5, + number=6, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='syntax', full_name='google.protobuf.Method.syntax', index=6, + number=7, type=14, cpp_type=8, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=None, + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=373, + serialized_end=586, +) + + +_MIXIN = _descriptor.Descriptor( + name='Mixin', + full_name='google.protobuf.Mixin', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='name', full_name='google.protobuf.Mixin.name', index=0, + number=1, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='root', full_name='google.protobuf.Mixin.root', index=1, + number=2, type=9, cpp_type=9, label=1, + has_default_value=False, 
default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=None, + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=588, + serialized_end=623, +) + +_API.fields_by_name['methods'].message_type = _METHOD +_API.fields_by_name['options'].message_type = google_dot_protobuf_dot_type__pb2._OPTION +_API.fields_by_name['source_context'].message_type = google_dot_protobuf_dot_source__context__pb2._SOURCECONTEXT +_API.fields_by_name['mixins'].message_type = _MIXIN +_API.fields_by_name['syntax'].enum_type = google_dot_protobuf_dot_type__pb2._SYNTAX +_METHOD.fields_by_name['options'].message_type = google_dot_protobuf_dot_type__pb2._OPTION +_METHOD.fields_by_name['syntax'].enum_type = google_dot_protobuf_dot_type__pb2._SYNTAX +DESCRIPTOR.message_types_by_name['Api'] = _API +DESCRIPTOR.message_types_by_name['Method'] = _METHOD +DESCRIPTOR.message_types_by_name['Mixin'] = _MIXIN +_sym_db.RegisterFileDescriptor(DESCRIPTOR) + +Api = _reflection.GeneratedProtocolMessageType('Api', (_message.Message,), { + 'DESCRIPTOR' : _API, + '__module__' : 'google.protobuf.api_pb2' + # @@protoc_insertion_point(class_scope:google.protobuf.Api) + }) +_sym_db.RegisterMessage(Api) + +Method = _reflection.GeneratedProtocolMessageType('Method', (_message.Message,), { + 'DESCRIPTOR' : _METHOD, + '__module__' : 'google.protobuf.api_pb2' + # @@protoc_insertion_point(class_scope:google.protobuf.Method) + }) +_sym_db.RegisterMessage(Method) + +Mixin = _reflection.GeneratedProtocolMessageType('Mixin', (_message.Message,), { + 'DESCRIPTOR' : _MIXIN, + '__module__' : 'google.protobuf.api_pb2' + # @@protoc_insertion_point(class_scope:google.protobuf.Mixin) + }) +_sym_db.RegisterMessage(Mixin) + + +DESCRIPTOR._options = None +# @@protoc_insertion_point(module_scope) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/compiler/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/compiler/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/compiler/plugin_pb2.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/compiler/plugin_pb2.py new file mode 100644 index 0000000000000000000000000000000000000000..ff90d18f1b88f555e15d7cdbb6535617f1fc916e --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/compiler/plugin_pb2.py @@ -0,0 +1,301 @@ +# -*- coding: utf-8 -*- +# Generated by the protocol buffer compiler. DO NOT EDIT! 
+# source: google/protobuf/compiler/plugin.proto +"""Generated protocol buffer code.""" +from google.protobuf import descriptor as _descriptor +from google.protobuf import message as _message +from google.protobuf import reflection as _reflection +from google.protobuf import symbol_database as _symbol_database +# @@protoc_insertion_point(imports) + +_sym_db = _symbol_database.Default() + + +from google.protobuf import descriptor_pb2 as google_dot_protobuf_dot_descriptor__pb2 + + +DESCRIPTOR = _descriptor.FileDescriptor( + name='google/protobuf/compiler/plugin.proto', + package='google.protobuf.compiler', + syntax='proto2', + serialized_options=b'\n\034com.google.protobuf.compilerB\014PluginProtosZ)google.golang.org/protobuf/types/pluginpb', + create_key=_descriptor._internal_create_key, + serialized_pb=b'\n%google/protobuf/compiler/plugin.proto\x12\x18google.protobuf.compiler\x1a google/protobuf/descriptor.proto\"F\n\x07Version\x12\r\n\x05major\x18\x01 \x01(\x05\x12\r\n\x05minor\x18\x02 \x01(\x05\x12\r\n\x05patch\x18\x03 \x01(\x05\x12\x0e\n\x06suffix\x18\x04 \x01(\t\"\xba\x01\n\x14\x43odeGeneratorRequest\x12\x18\n\x10\x66ile_to_generate\x18\x01 \x03(\t\x12\x11\n\tparameter\x18\x02 \x01(\t\x12\x38\n\nproto_file\x18\x0f \x03(\x0b\x32$.google.protobuf.FileDescriptorProto\x12;\n\x10\x63ompiler_version\x18\x03 \x01(\x0b\x32!.google.protobuf.compiler.Version\"\xc1\x02\n\x15\x43odeGeneratorResponse\x12\r\n\x05\x65rror\x18\x01 \x01(\t\x12\x1a\n\x12supported_features\x18\x02 \x01(\x04\x12\x42\n\x04\x66ile\x18\x0f \x03(\x0b\x32\x34.google.protobuf.compiler.CodeGeneratorResponse.File\x1a\x7f\n\x04\x46ile\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x17\n\x0finsertion_point\x18\x02 \x01(\t\x12\x0f\n\x07\x63ontent\x18\x0f \x01(\t\x12?\n\x13generated_code_info\x18\x10 \x01(\x0b\x32\".google.protobuf.GeneratedCodeInfo\"8\n\x07\x46\x65\x61ture\x12\x10\n\x0c\x46\x45\x41TURE_NONE\x10\x00\x12\x1b\n\x17\x46\x45\x41TURE_PROTO3_OPTIONAL\x10\x01\x42W\n\x1c\x63om.google.protobuf.compilerB\x0cPluginProtosZ)google.golang.org/protobuf/types/pluginpb' + , + dependencies=[google_dot_protobuf_dot_descriptor__pb2.DESCRIPTOR,]) + + + +_CODEGENERATORRESPONSE_FEATURE = _descriptor.EnumDescriptor( + name='Feature', + full_name='google.protobuf.compiler.CodeGeneratorResponse.Feature', + filename=None, + file=DESCRIPTOR, + create_key=_descriptor._internal_create_key, + values=[ + _descriptor.EnumValueDescriptor( + name='FEATURE_NONE', index=0, number=0, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + _descriptor.EnumValueDescriptor( + name='FEATURE_PROTO3_OPTIONAL', index=1, number=1, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + ], + containing_type=None, + serialized_options=None, + serialized_start=628, + serialized_end=684, +) +_sym_db.RegisterEnumDescriptor(_CODEGENERATORRESPONSE_FEATURE) + + +_VERSION = _descriptor.Descriptor( + name='Version', + full_name='google.protobuf.compiler.Version', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='major', full_name='google.protobuf.compiler.Version.major', index=0, + number=1, type=5, cpp_type=1, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='minor', 
full_name='google.protobuf.compiler.Version.minor', index=1, + number=2, type=5, cpp_type=1, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='patch', full_name='google.protobuf.compiler.Version.patch', index=2, + number=3, type=5, cpp_type=1, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='suffix', full_name='google.protobuf.compiler.Version.suffix', index=3, + number=4, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=None, + is_extendable=False, + syntax='proto2', + extension_ranges=[], + oneofs=[ + ], + serialized_start=101, + serialized_end=171, +) + + +_CODEGENERATORREQUEST = _descriptor.Descriptor( + name='CodeGeneratorRequest', + full_name='google.protobuf.compiler.CodeGeneratorRequest', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='file_to_generate', full_name='google.protobuf.compiler.CodeGeneratorRequest.file_to_generate', index=0, + number=1, type=9, cpp_type=9, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='parameter', full_name='google.protobuf.compiler.CodeGeneratorRequest.parameter', index=1, + number=2, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='proto_file', full_name='google.protobuf.compiler.CodeGeneratorRequest.proto_file', index=2, + number=15, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='compiler_version', full_name='google.protobuf.compiler.CodeGeneratorRequest.compiler_version', index=3, + number=3, type=11, cpp_type=10, label=1, + has_default_value=False, default_value=None, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=None, + is_extendable=False, + syntax='proto2', + extension_ranges=[], + oneofs=[ + ], + serialized_start=174, + serialized_end=360, +) + + +_CODEGENERATORRESPONSE_FILE 
= _descriptor.Descriptor( + name='File', + full_name='google.protobuf.compiler.CodeGeneratorResponse.File', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='name', full_name='google.protobuf.compiler.CodeGeneratorResponse.File.name', index=0, + number=1, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='insertion_point', full_name='google.protobuf.compiler.CodeGeneratorResponse.File.insertion_point', index=1, + number=2, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='content', full_name='google.protobuf.compiler.CodeGeneratorResponse.File.content', index=2, + number=15, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='generated_code_info', full_name='google.protobuf.compiler.CodeGeneratorResponse.File.generated_code_info', index=3, + number=16, type=11, cpp_type=10, label=1, + has_default_value=False, default_value=None, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=None, + is_extendable=False, + syntax='proto2', + extension_ranges=[], + oneofs=[ + ], + serialized_start=499, + serialized_end=626, +) + +_CODEGENERATORRESPONSE = _descriptor.Descriptor( + name='CodeGeneratorResponse', + full_name='google.protobuf.compiler.CodeGeneratorResponse', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='error', full_name='google.protobuf.compiler.CodeGeneratorResponse.error', index=0, + number=1, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='supported_features', full_name='google.protobuf.compiler.CodeGeneratorResponse.supported_features', index=1, + number=2, type=4, cpp_type=4, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='file', full_name='google.protobuf.compiler.CodeGeneratorResponse.file', index=2, + number=15, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, 
+ is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[_CODEGENERATORRESPONSE_FILE, ], + enum_types=[ + _CODEGENERATORRESPONSE_FEATURE, + ], + serialized_options=None, + is_extendable=False, + syntax='proto2', + extension_ranges=[], + oneofs=[ + ], + serialized_start=363, + serialized_end=684, +) + +_CODEGENERATORREQUEST.fields_by_name['proto_file'].message_type = google_dot_protobuf_dot_descriptor__pb2._FILEDESCRIPTORPROTO +_CODEGENERATORREQUEST.fields_by_name['compiler_version'].message_type = _VERSION +_CODEGENERATORRESPONSE_FILE.fields_by_name['generated_code_info'].message_type = google_dot_protobuf_dot_descriptor__pb2._GENERATEDCODEINFO +_CODEGENERATORRESPONSE_FILE.containing_type = _CODEGENERATORRESPONSE +_CODEGENERATORRESPONSE.fields_by_name['file'].message_type = _CODEGENERATORRESPONSE_FILE +_CODEGENERATORRESPONSE_FEATURE.containing_type = _CODEGENERATORRESPONSE +DESCRIPTOR.message_types_by_name['Version'] = _VERSION +DESCRIPTOR.message_types_by_name['CodeGeneratorRequest'] = _CODEGENERATORREQUEST +DESCRIPTOR.message_types_by_name['CodeGeneratorResponse'] = _CODEGENERATORRESPONSE +_sym_db.RegisterFileDescriptor(DESCRIPTOR) + +Version = _reflection.GeneratedProtocolMessageType('Version', (_message.Message,), { + 'DESCRIPTOR' : _VERSION, + '__module__' : 'google.protobuf.compiler.plugin_pb2' + # @@protoc_insertion_point(class_scope:google.protobuf.compiler.Version) + }) +_sym_db.RegisterMessage(Version) + +CodeGeneratorRequest = _reflection.GeneratedProtocolMessageType('CodeGeneratorRequest', (_message.Message,), { + 'DESCRIPTOR' : _CODEGENERATORREQUEST, + '__module__' : 'google.protobuf.compiler.plugin_pb2' + # @@protoc_insertion_point(class_scope:google.protobuf.compiler.CodeGeneratorRequest) + }) +_sym_db.RegisterMessage(CodeGeneratorRequest) + +CodeGeneratorResponse = _reflection.GeneratedProtocolMessageType('CodeGeneratorResponse', (_message.Message,), { + + 'File' : _reflection.GeneratedProtocolMessageType('File', (_message.Message,), { + 'DESCRIPTOR' : _CODEGENERATORRESPONSE_FILE, + '__module__' : 'google.protobuf.compiler.plugin_pb2' + # @@protoc_insertion_point(class_scope:google.protobuf.compiler.CodeGeneratorResponse.File) + }) + , + 'DESCRIPTOR' : _CODEGENERATORRESPONSE, + '__module__' : 'google.protobuf.compiler.plugin_pb2' + # @@protoc_insertion_point(class_scope:google.protobuf.compiler.CodeGeneratorResponse) + }) +_sym_db.RegisterMessage(CodeGeneratorResponse) +_sym_db.RegisterMessage(CodeGeneratorResponse.File) + + +DESCRIPTOR._options = None +# @@protoc_insertion_point(module_scope) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/descriptor.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/descriptor.py new file mode 100644 index 0000000000000000000000000000000000000000..190b89536fdfd3c13126e2d862643da05c3093f3 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/descriptor.py @@ -0,0 +1,1165 @@ +# Protocol Buffers - Google's data interchange format +# Copyright 2008 Google Inc. All rights reserved. 
+# https://developers.google.com/protocol-buffers/
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions are
+# met:
+#
+#     * Redistributions of source code must retain the above copyright
+# notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above
+# copyright notice, this list of conditions and the following disclaimer
+# in the documentation and/or other materials provided with the
+# distribution.
+#     * Neither the name of Google Inc. nor the names of its
+# contributors may be used to endorse or promote products derived from
+# this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+"""Descriptors essentially contain exactly the information found in a .proto
+file, in types that make this information accessible in Python.
+"""
+
+__author__ = 'robinson@google.com (Will Robinson)'
+
+import threading
+import warnings
+import six
+
+from google.protobuf.internal import api_implementation
+
+_USE_C_DESCRIPTORS = False
+if api_implementation.Type() == 'cpp':
+  # Used by MakeDescriptor in cpp mode
+  import binascii
+  import os
+  from google.protobuf.pyext import _message
+  _USE_C_DESCRIPTORS = True
+
+
+class Error(Exception):
+  """Base error for this module."""
+
+
+class TypeTransformationError(Error):
+  """Error transforming between Python proto type and corresponding C++ type."""
+
+
+if _USE_C_DESCRIPTORS:
+  # This metaclass allows overriding the behavior of code like
+  #   isinstance(my_descriptor, FieldDescriptor)
+  # and makes it return True when the descriptor is an instance of the
+  # extension type written in C++.
+  class DescriptorMetaclass(type):
+    def __instancecheck__(cls, obj):
+      if super(DescriptorMetaclass, cls).__instancecheck__(obj):
+        return True
+      if isinstance(obj, cls._C_DESCRIPTOR_CLASS):
+        return True
+      return False
+else:
+  # The standard metaclass; nothing changes.
+  DescriptorMetaclass = type
+
+
+class _Lock(object):
+  """Wrapper class of threading.Lock(), which can be used in a 'with'
+  statement."""
+
+  def __new__(cls):
+    self = object.__new__(cls)
+    self._lock = threading.Lock()  # pylint: disable=protected-access
+    return self
+
+  def __enter__(self):
+    self._lock.acquire()
+
+  def __exit__(self, exc_type, exc_value, exc_tb):
+    self._lock.release()
+
+
+_lock = threading.Lock()
+
+
+def _Deprecated(name):
+  if _Deprecated.count > 0:
+    _Deprecated.count -= 1
+    warnings.warn(
+        'Call to deprecated create function %s(). Note: Creating unlinked '
+        'descriptors is going to go away. Please use get/find descriptors from '
+        'generated code or query the descriptor_pool.'
+        % name,
+        category=DeprecationWarning, stacklevel=3)
+
+
+# Deprecation warnings are printed at most 100 times, which should be enough
+# for users to notice without causing a timeout.
+_Deprecated.count = 100
+
+
+_internal_create_key = object()
+
+
+class DescriptorBase(six.with_metaclass(DescriptorMetaclass)):
+
+  """Descriptors base class.
+
+  This class is the base of all descriptor classes. It provides common options
+  related functionality.
+
+  Attributes:
+    has_options: True if the descriptor has non-default options. Usually it
+      is not necessary to read this -- just call GetOptions() which will
+      happily return the default instance. However, it's sometimes useful
+      for efficiency, and also useful inside the protobuf implementation to
+      avoid some bootstrapping issues.
+  """
+
+  if _USE_C_DESCRIPTORS:
+    # The class, or tuple of classes, that are considered as "virtual
+    # subclasses" of this descriptor class.
+    _C_DESCRIPTOR_CLASS = ()
+
+  def __init__(self, options, serialized_options, options_class_name):
+    """Initialize the descriptor given its options message and the name of the
+    class of the options message. The name of the class is required in case
+    the options message is None and has to be created.
+    """
+    self._options = options
+    self._options_class_name = options_class_name
+    self._serialized_options = serialized_options
+
+    # Does this descriptor have non-default options?
+    self.has_options = (options is not None) or (serialized_options is not None)
+
+  def _SetOptions(self, options, options_class_name):
+    """Sets the descriptor's options.
+
+    This function is used in generated proto2 files to update descriptor
+    options. It must not be used outside proto2.
+    """
+    self._options = options
+    self._options_class_name = options_class_name
+
+    # Does this descriptor have non-default options?
+    self.has_options = options is not None
+
+  def GetOptions(self):
+    """Retrieves descriptor options.
+
+    This method returns the options set or creates the default options for the
+    descriptor.
+    """
+    if self._options:
+      return self._options
+
+    from google.protobuf import descriptor_pb2
+    try:
+      options_class = getattr(descriptor_pb2,
+                              self._options_class_name)
+    except AttributeError:
+      raise RuntimeError('Unknown options class name %s!' %
+                         (self._options_class_name))
+
+    with _lock:
+      if self._serialized_options is None:
+        self._options = options_class()
+      else:
+        self._options = _ParseOptions(options_class(),
+                                      self._serialized_options)
+
+      return self._options
+
+
+class _NestedDescriptorBase(DescriptorBase):
+  """Common class for descriptors that can be nested."""
+
+  def __init__(self, options, options_class_name, name, full_name,
+               file, containing_type, serialized_start=None,
+               serialized_end=None, serialized_options=None):
+    """Constructor.
+
+    Args:
+      options: Protocol message options or None
+        to use default message options.
+      options_class_name (str): The class name of the above options.
+      name (str): Name of this protocol message type.
+      full_name (str): Fully-qualified name of this protocol message type,
+        which will include protocol "package" name and the name of any
+        enclosing types.
+      file (FileDescriptor): Reference to file info.
+      containing_type: if provided, this is a nested descriptor, with this
+        descriptor as parent, otherwise None.
+      serialized_start: The start index (inclusive) of the block in
+        file.serialized_pb that describes this descriptor.
+      serialized_end: The end index (exclusive) of the block in
+        file.serialized_pb that describes this descriptor.
+      serialized_options: Protocol message serialized options or None.
+    """
+    super(_NestedDescriptorBase, self).__init__(
+        options, serialized_options, options_class_name)
+
+    self.name = name
+    # TODO(falk): Add function to calculate full_name instead of having it in
+    # memory?
+    self.full_name = full_name
+    self.file = file
+    self.containing_type = containing_type
+
+    self._serialized_start = serialized_start
+    self._serialized_end = serialized_end
+
+  def CopyToProto(self, proto):
+    """Copies this to the matching proto in descriptor_pb2.
+
+    Args:
+      proto: An empty proto instance from descriptor_pb2.
+
+    Raises:
+      Error: If self couldn't be serialized, due to too few constructor
+        arguments.
+    """
+    if (self.file is not None and
+        self._serialized_start is not None and
+        self._serialized_end is not None):
+      proto.ParseFromString(self.file.serialized_pb[
+          self._serialized_start:self._serialized_end])
+    else:
+      raise Error('Descriptor does not contain serialization.')
+
+
+class Descriptor(_NestedDescriptorBase):
+
+  """Descriptor for a protocol message type.
+
+  Attributes:
+    name (str): Name of this protocol message type.
+    full_name (str): Fully-qualified name of this protocol message type,
+      which will include protocol "package" name and the name of any
+      enclosing types.
+    containing_type (Descriptor): Reference to the descriptor of the type
+      containing us, or None if this is top-level.
+    fields (list[FieldDescriptor]): Field descriptors for all fields in
+      this type.
+    fields_by_number (dict(int, FieldDescriptor)): Same
+      :class:`FieldDescriptor` objects as in :attr:`fields`, but indexed
+      by "number" attribute in each FieldDescriptor.
+    fields_by_name (dict(str, FieldDescriptor)): Same
+      :class:`FieldDescriptor` objects as in :attr:`fields`, but indexed by
+      "name" attribute in each :class:`FieldDescriptor`.
+    nested_types (list[Descriptor]): Descriptor references
+      for all protocol message types nested within this one.
+    nested_types_by_name (dict(str, Descriptor)): Same Descriptor
+      objects as in :attr:`nested_types`, but indexed by "name" attribute
+      in each Descriptor.
+    enum_types (list[EnumDescriptor]): :class:`EnumDescriptor` references
+      for all enums contained within this type.
+    enum_types_by_name (dict(str, EnumDescriptor)): Same
+      :class:`EnumDescriptor` objects as in :attr:`enum_types`, but
+      indexed by "name" attribute in each EnumDescriptor.
+    enum_values_by_name (dict(str, EnumValueDescriptor)): Dict mapping
+      from enum value name to :class:`EnumValueDescriptor` for that value.
+    extensions (list[FieldDescriptor]): All extensions defined directly
+      within this message type (NOT within a nested type).
+    extensions_by_name (dict(str, FieldDescriptor)): Same FieldDescriptor
+      objects as :attr:`extensions`, but indexed by "name" attribute of each
+      FieldDescriptor.
+    is_extendable (bool): Does this type define any extension ranges?
+    oneofs (list[OneofDescriptor]): The list of descriptors for oneof fields
+      in this message.
+    oneofs_by_name (dict(str, OneofDescriptor)): Same objects as in
+      :attr:`oneofs`, but indexed by "name" attribute.
+    file (FileDescriptor): Reference to file descriptor.
+
+ """
+
+ if _USE_C_DESCRIPTORS:
+ _C_DESCRIPTOR_CLASS = _message.Descriptor
+
+ def __new__(
+ cls,
+ name=None,
+ full_name=None,
+ filename=None,
+ containing_type=None,
+ fields=None,
+ nested_types=None,
+ enum_types=None,
+ extensions=None,
+ options=None,
+ serialized_options=None,
+ is_extendable=True,
+ extension_ranges=None,
+ oneofs=None,
+ file=None, # pylint: disable=redefined-builtin
+ serialized_start=None,
+ serialized_end=None,
+ syntax=None,
+ create_key=None):
+ _message.Message._CheckCalledFromGeneratedFile()
+ return _message.default_pool.FindMessageTypeByName(full_name)
+
+ # NOTE(tmarek): The file argument redefining a builtin is nothing we can
+ # fix right now since we don't know how many clients already rely on the
+ # name of the argument.
+ def __init__(self, name, full_name, filename, containing_type, fields,
+ nested_types, enum_types, extensions, options=None,
+ serialized_options=None,
+ is_extendable=True, extension_ranges=None, oneofs=None,
+ file=None, serialized_start=None, serialized_end=None, # pylint: disable=redefined-builtin
+ syntax=None, create_key=None):
+ """Arguments to __init__() are as described in the description
+ of Descriptor fields above.
+
+ Note that filename is an obsolete argument that is no longer used.
+ Please use file.name to access this as an attribute.
+ """
+ if create_key is not _internal_create_key:
+ _Deprecated('Descriptor')
+
+ super(Descriptor, self).__init__(
+ options, 'MessageOptions', name, full_name, file,
+ containing_type, serialized_start=serialized_start,
+ serialized_end=serialized_end, serialized_options=serialized_options)
+
+ # We have fields in addition to fields_by_name and fields_by_number,
+ # so that:
+ # 1. Clients can index fields by "order in which they're listed."
+ # 2. Clients can easily iterate over all fields with the terse
+ # syntax: for f in descriptor.fields: ...
+ self.fields = fields
+ for field in self.fields:
+ field.containing_type = self
+ self.fields_by_number = dict((f.number, f) for f in fields)
+ self.fields_by_name = dict((f.name, f) for f in fields)
+ self._fields_by_camelcase_name = None
+
+ self.nested_types = nested_types
+ for nested_type in nested_types:
+ nested_type.containing_type = self
+ self.nested_types_by_name = dict((t.name, t) for t in nested_types)
+
+ self.enum_types = enum_types
+ for enum_type in self.enum_types:
+ enum_type.containing_type = self
+ self.enum_types_by_name = dict((t.name, t) for t in enum_types)
+ self.enum_values_by_name = dict(
+ (v.name, v) for t in enum_types for v in t.values)
+
+ self.extensions = extensions
+ for extension in self.extensions:
+ extension.extension_scope = self
+ self.extensions_by_name = dict((f.name, f) for f in extensions)
+ self.is_extendable = is_extendable
+ self.extension_ranges = extension_ranges
+ self.oneofs = oneofs if oneofs is not None else []
+ self.oneofs_by_name = dict((o.name, o) for o in self.oneofs)
+ for oneof in self.oneofs:
+ oneof.containing_type = self
+ self.syntax = syntax or "proto2"
+
+ @property
+ def fields_by_camelcase_name(self):
+ """Same FieldDescriptor objects as in :attr:`fields`, but indexed by
+ :attr:`FieldDescriptor.camelcase_name`.
+ """
+ if self._fields_by_camelcase_name is None:
+ self._fields_by_camelcase_name = dict(
+ (f.camelcase_name, f) for f in self.fields)
+ return self._fields_by_camelcase_name
+
+ def EnumValueName(self, enum, value):
+ """Returns the string name of an enum value.
+
+ This is just a small helper method to simplify a common operation.
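+
+ For example (illustrative, assuming this message declares
+ enum Color { RED = 1; }):
+
+ desc.EnumValueName('Color', 1) # -> 'RED'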
+ + Args: + enum: string name of the Enum. + value: int, value of the enum. + + Returns: + string name of the enum value. + + Raises: + KeyError if either the Enum doesn't exist or the value is not a valid + value for the enum. + """ + return self.enum_types_by_name[enum].values_by_number[value].name + + def CopyToProto(self, proto): + """Copies this to a descriptor_pb2.DescriptorProto. + + Args: + proto: An empty descriptor_pb2.DescriptorProto. + """ + # This function is overridden to give a better doc comment. + super(Descriptor, self).CopyToProto(proto) + + +# TODO(robinson): We should have aggressive checking here, +# for example: +# * If you specify a repeated field, you should not be allowed +# to specify a default value. +# * [Other examples here as needed]. +# +# TODO(robinson): for this and other *Descriptor classes, we +# might also want to lock things down aggressively (e.g., +# prevent clients from setting the attributes). Having +# stronger invariants here in general will reduce the number +# of runtime checks we must do in reflection.py... +class FieldDescriptor(DescriptorBase): + + """Descriptor for a single field in a .proto file. + + Attributes: + name (str): Name of this field, exactly as it appears in .proto. + full_name (str): Name of this field, including containing scope. This is + particularly relevant for extensions. + index (int): Dense, 0-indexed index giving the order that this + field textually appears within its message in the .proto file. + number (int): Tag number declared for this field in the .proto file. + + type (int): (One of the TYPE_* constants below) Declared type. + cpp_type (int): (One of the CPPTYPE_* constants below) C++ type used to + represent this field. + + label (int): (One of the LABEL_* constants below) Tells whether this + field is optional, required, or repeated. + has_default_value (bool): True if this field has a default value defined, + otherwise false. + default_value (Varies): Default value of this field. Only + meaningful for non-repeated scalar fields. Repeated fields + should always set this to [], and non-repeated composite + fields should always set this to None. + + containing_type (Descriptor): Descriptor of the protocol message + type that contains this field. Set by the Descriptor constructor + if we're passed into one. + Somewhat confusingly, for extension fields, this is the + descriptor of the EXTENDED message, not the descriptor + of the message containing this field. (See is_extension and + extension_scope below). + message_type (Descriptor): If a composite field, a descriptor + of the message type contained in this field. Otherwise, this is None. + enum_type (EnumDescriptor): If this field contains an enum, a + descriptor of that enum. Otherwise, this is None. + + is_extension: True iff this describes an extension field. + extension_scope (Descriptor): Only meaningful if is_extension is True. + Gives the message that immediately contains this extension field. + Will be None iff we're a top-level (file-level) extension field. + + options (descriptor_pb2.FieldOptions): Protocol message field options or + None to use default field options. + + containing_oneof (OneofDescriptor): If the field is a member of a oneof + union, contains its descriptor. Otherwise, None. + + file (FileDescriptor): Reference to file descriptor. + """ + + # Must be consistent with C++ FieldDescriptor::Type enum in + # descriptor.h. + # + # TODO(robinson): Find a way to eliminate this repetition. 
+ TYPE_DOUBLE = 1 + TYPE_FLOAT = 2 + TYPE_INT64 = 3 + TYPE_UINT64 = 4 + TYPE_INT32 = 5 + TYPE_FIXED64 = 6 + TYPE_FIXED32 = 7 + TYPE_BOOL = 8 + TYPE_STRING = 9 + TYPE_GROUP = 10 + TYPE_MESSAGE = 11 + TYPE_BYTES = 12 + TYPE_UINT32 = 13 + TYPE_ENUM = 14 + TYPE_SFIXED32 = 15 + TYPE_SFIXED64 = 16 + TYPE_SINT32 = 17 + TYPE_SINT64 = 18 + MAX_TYPE = 18 + + # Must be consistent with C++ FieldDescriptor::CppType enum in + # descriptor.h. + # + # TODO(robinson): Find a way to eliminate this repetition. + CPPTYPE_INT32 = 1 + CPPTYPE_INT64 = 2 + CPPTYPE_UINT32 = 3 + CPPTYPE_UINT64 = 4 + CPPTYPE_DOUBLE = 5 + CPPTYPE_FLOAT = 6 + CPPTYPE_BOOL = 7 + CPPTYPE_ENUM = 8 + CPPTYPE_STRING = 9 + CPPTYPE_MESSAGE = 10 + MAX_CPPTYPE = 10 + + _PYTHON_TO_CPP_PROTO_TYPE_MAP = { + TYPE_DOUBLE: CPPTYPE_DOUBLE, + TYPE_FLOAT: CPPTYPE_FLOAT, + TYPE_ENUM: CPPTYPE_ENUM, + TYPE_INT64: CPPTYPE_INT64, + TYPE_SINT64: CPPTYPE_INT64, + TYPE_SFIXED64: CPPTYPE_INT64, + TYPE_UINT64: CPPTYPE_UINT64, + TYPE_FIXED64: CPPTYPE_UINT64, + TYPE_INT32: CPPTYPE_INT32, + TYPE_SFIXED32: CPPTYPE_INT32, + TYPE_SINT32: CPPTYPE_INT32, + TYPE_UINT32: CPPTYPE_UINT32, + TYPE_FIXED32: CPPTYPE_UINT32, + TYPE_BYTES: CPPTYPE_STRING, + TYPE_STRING: CPPTYPE_STRING, + TYPE_BOOL: CPPTYPE_BOOL, + TYPE_MESSAGE: CPPTYPE_MESSAGE, + TYPE_GROUP: CPPTYPE_MESSAGE + } + + # Must be consistent with C++ FieldDescriptor::Label enum in + # descriptor.h. + # + # TODO(robinson): Find a way to eliminate this repetition. + LABEL_OPTIONAL = 1 + LABEL_REQUIRED = 2 + LABEL_REPEATED = 3 + MAX_LABEL = 3 + + # Must be consistent with C++ constants kMaxNumber, kFirstReservedNumber, + # and kLastReservedNumber in descriptor.h + MAX_FIELD_NUMBER = (1 << 29) - 1 + FIRST_RESERVED_FIELD_NUMBER = 19000 + LAST_RESERVED_FIELD_NUMBER = 19999 + + if _USE_C_DESCRIPTORS: + _C_DESCRIPTOR_CLASS = _message.FieldDescriptor + + def __new__(cls, name, full_name, index, number, type, cpp_type, label, + default_value, message_type, enum_type, containing_type, + is_extension, extension_scope, options=None, + serialized_options=None, + has_default_value=True, containing_oneof=None, json_name=None, + file=None, create_key=None): # pylint: disable=redefined-builtin + _message.Message._CheckCalledFromGeneratedFile() + if is_extension: + return _message.default_pool.FindExtensionByName(full_name) + else: + return _message.default_pool.FindFieldByName(full_name) + + def __init__(self, name, full_name, index, number, type, cpp_type, label, + default_value, message_type, enum_type, containing_type, + is_extension, extension_scope, options=None, + serialized_options=None, + has_default_value=True, containing_oneof=None, json_name=None, + file=None, create_key=None): # pylint: disable=redefined-builtin + """The arguments are as described in the description of FieldDescriptor + attributes above. + + Note that containing_type may be None, and may be set later if necessary + (to deal with circular references between message types, for example). + Likewise for extension_scope. 
+ """ + if create_key is not _internal_create_key: + _Deprecated('FieldDescriptor') + + super(FieldDescriptor, self).__init__( + options, serialized_options, 'FieldOptions') + self.name = name + self.full_name = full_name + self.file = file + self._camelcase_name = None + if json_name is None: + self.json_name = _ToJsonName(name) + else: + self.json_name = json_name + self.index = index + self.number = number + self.type = type + self.cpp_type = cpp_type + self.label = label + self.has_default_value = has_default_value + self.default_value = default_value + self.containing_type = containing_type + self.message_type = message_type + self.enum_type = enum_type + self.is_extension = is_extension + self.extension_scope = extension_scope + self.containing_oneof = containing_oneof + if api_implementation.Type() == 'cpp': + if is_extension: + self._cdescriptor = _message.default_pool.FindExtensionByName(full_name) + else: + self._cdescriptor = _message.default_pool.FindFieldByName(full_name) + else: + self._cdescriptor = None + + @property + def camelcase_name(self): + """Camelcase name of this field. + + Returns: + str: the name in CamelCase. + """ + if self._camelcase_name is None: + self._camelcase_name = _ToCamelCase(self.name) + return self._camelcase_name + + @staticmethod + def ProtoTypeToCppProtoType(proto_type): + """Converts from a Python proto type to a C++ Proto Type. + + The Python ProtocolBuffer classes specify both the 'Python' datatype and the + 'C++' datatype - and they're not the same. This helper method should + translate from one to another. + + Args: + proto_type: the Python proto type (descriptor.FieldDescriptor.TYPE_*) + Returns: + int: descriptor.FieldDescriptor.CPPTYPE_*, the C++ type. + Raises: + TypeTransformationError: when the Python proto type isn't known. + """ + try: + return FieldDescriptor._PYTHON_TO_CPP_PROTO_TYPE_MAP[proto_type] + except KeyError: + raise TypeTransformationError('Unknown proto_type: %s' % proto_type) + + +class EnumDescriptor(_NestedDescriptorBase): + + """Descriptor for an enum defined in a .proto file. + + Attributes: + name (str): Name of the enum type. + full_name (str): Full name of the type, including package name + and any enclosing type(s). + + values (list[EnumValueDescriptors]): List of the values + in this enum. + values_by_name (dict(str, EnumValueDescriptor)): Same as :attr:`values`, + but indexed by the "name" field of each EnumValueDescriptor. + values_by_number (dict(int, EnumValueDescriptor)): Same as :attr:`values`, + but indexed by the "number" field of each EnumValueDescriptor. + containing_type (Descriptor): Descriptor of the immediate containing + type of this enum, or None if this is an enum defined at the + top level in a .proto file. Set by Descriptor's constructor + if we're passed into one. + file (FileDescriptor): Reference to file descriptor. + options (descriptor_pb2.EnumOptions): Enum options message or + None to use default enum options. 
+ """ + + if _USE_C_DESCRIPTORS: + _C_DESCRIPTOR_CLASS = _message.EnumDescriptor + + def __new__(cls, name, full_name, filename, values, + containing_type=None, options=None, + serialized_options=None, file=None, # pylint: disable=redefined-builtin + serialized_start=None, serialized_end=None, create_key=None): + _message.Message._CheckCalledFromGeneratedFile() + return _message.default_pool.FindEnumTypeByName(full_name) + + def __init__(self, name, full_name, filename, values, + containing_type=None, options=None, + serialized_options=None, file=None, # pylint: disable=redefined-builtin + serialized_start=None, serialized_end=None, create_key=None): + """Arguments are as described in the attribute description above. + + Note that filename is an obsolete argument, that is not used anymore. + Please use file.name to access this as an attribute. + """ + if create_key is not _internal_create_key: + _Deprecated('EnumDescriptor') + + super(EnumDescriptor, self).__init__( + options, 'EnumOptions', name, full_name, file, + containing_type, serialized_start=serialized_start, + serialized_end=serialized_end, serialized_options=serialized_options) + + self.values = values + for value in self.values: + value.type = self + self.values_by_name = dict((v.name, v) for v in values) + # Values are reversed to ensure that the first alias is retained. + self.values_by_number = dict((v.number, v) for v in reversed(values)) + + def CopyToProto(self, proto): + """Copies this to a descriptor_pb2.EnumDescriptorProto. + + Args: + proto (descriptor_pb2.EnumDescriptorProto): An empty descriptor proto. + """ + # This function is overridden to give a better doc comment. + super(EnumDescriptor, self).CopyToProto(proto) + + +class EnumValueDescriptor(DescriptorBase): + + """Descriptor for a single value within an enum. + + Attributes: + name (str): Name of this value. + index (int): Dense, 0-indexed index giving the order that this + value appears textually within its enum in the .proto file. + number (int): Actual number assigned to this enum value. + type (EnumDescriptor): :class:`EnumDescriptor` to which this value + belongs. Set by :class:`EnumDescriptor`'s constructor if we're + passed into one. + options (descriptor_pb2.EnumValueOptions): Enum value options message or + None to use default enum value options options. + """ + + if _USE_C_DESCRIPTORS: + _C_DESCRIPTOR_CLASS = _message.EnumValueDescriptor + + def __new__(cls, name, index, number, + type=None, # pylint: disable=redefined-builtin + options=None, serialized_options=None, create_key=None): + _message.Message._CheckCalledFromGeneratedFile() + # There is no way we can build a complete EnumValueDescriptor with the + # given parameters (the name of the Enum is not known, for example). + # Fortunately generated files just pass it to the EnumDescriptor() + # constructor, which will ignore it, so returning None is good enough. + return None + + def __init__(self, name, index, number, + type=None, # pylint: disable=redefined-builtin + options=None, serialized_options=None, create_key=None): + """Arguments are as described in the attribute description above.""" + if create_key is not _internal_create_key: + _Deprecated('EnumValueDescriptor') + + super(EnumValueDescriptor, self).__init__( + options, serialized_options, 'EnumValueOptions') + self.name = name + self.index = index + self.number = number + self.type = type + + +class OneofDescriptor(DescriptorBase): + """Descriptor for a oneof field. + + Attributes: + name (str): Name of the oneof field. 
+ full_name (str): Full name of the oneof field, including package name.
+ index (int): 0-based index giving the order of the oneof field inside
+ its containing type.
+ containing_type (Descriptor): :class:`Descriptor` of the protocol message
+ type that contains this field. Set by the :class:`Descriptor` constructor
+ if we're passed into one.
+ fields (list[FieldDescriptor]): The list of field descriptors this
+ oneof can contain.
+ """
+
+ if _USE_C_DESCRIPTORS:
+ _C_DESCRIPTOR_CLASS = _message.OneofDescriptor
+
+ def __new__(
+ cls, name, full_name, index, containing_type, fields, options=None,
+ serialized_options=None, create_key=None):
+ _message.Message._CheckCalledFromGeneratedFile()
+ return _message.default_pool.FindOneofByName(full_name)
+
+ def __init__(
+ self, name, full_name, index, containing_type, fields, options=None,
+ serialized_options=None, create_key=None):
+ """Arguments are as described in the attribute description above."""
+ if create_key is not _internal_create_key:
+ _Deprecated('OneofDescriptor')
+
+ super(OneofDescriptor, self).__init__(
+ options, serialized_options, 'OneofOptions')
+ self.name = name
+ self.full_name = full_name
+ self.index = index
+ self.containing_type = containing_type
+ self.fields = fields
+
+
+class ServiceDescriptor(_NestedDescriptorBase):
+
+ """Descriptor for a service.
+
+ Attributes:
+ name (str): Name of the service.
+ full_name (str): Full name of the service, including package name.
+ index (int): 0-indexed index giving the order that this service's
+ definition appears within the .proto file.
+ methods (list[MethodDescriptor]): List of methods provided by this
+ service.
+ methods_by_name (dict(str, MethodDescriptor)): Same
+ :class:`MethodDescriptor` objects as in :attr:`methods`, but
+ indexed by "name" attribute in each :class:`MethodDescriptor`.
+ options (descriptor_pb2.ServiceOptions): Service options message or
+ None to use default service options.
+ file (FileDescriptor): Reference to file info.
+ """
+
+ if _USE_C_DESCRIPTORS:
+ _C_DESCRIPTOR_CLASS = _message.ServiceDescriptor
+
+ def __new__(
+ cls,
+ name=None,
+ full_name=None,
+ index=None,
+ methods=None,
+ options=None,
+ serialized_options=None,
+ file=None, # pylint: disable=redefined-builtin
+ serialized_start=None,
+ serialized_end=None,
+ create_key=None):
+ _message.Message._CheckCalledFromGeneratedFile() # pylint: disable=protected-access
+ return _message.default_pool.FindServiceByName(full_name)
+
+ def __init__(self, name, full_name, index, methods, options=None,
+ serialized_options=None, file=None, # pylint: disable=redefined-builtin
+ serialized_start=None, serialized_end=None, create_key=None):
+ if create_key is not _internal_create_key:
+ _Deprecated('ServiceDescriptor')
+
+ super(ServiceDescriptor, self).__init__(
+ options, 'ServiceOptions', name, full_name, file,
+ None, serialized_start=serialized_start,
+ serialized_end=serialized_end, serialized_options=serialized_options)
+ self.index = index
+ self.methods = methods
+ self.methods_by_name = dict((m.name, m) for m in methods)
+ # Set the containing service for each method in this service.
+ for method in self.methods:
+ method.containing_service = self
+
+ def FindMethodByName(self, name):
+ """Searches for the specified method, and returns its descriptor.
+
+ Args:
+ name (str): Name of the method.
+ Returns:
+ MethodDescriptor or None: the descriptor for the requested method, if
+ found.
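+
+ For example (illustrative), service_desc.FindMethodByName('GetFeature')
+ returns the MethodDescriptor for a method named 'GetFeature', or None if
+ the service defines no such method.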
+ """ + return self.methods_by_name.get(name, None) + + def CopyToProto(self, proto): + """Copies this to a descriptor_pb2.ServiceDescriptorProto. + + Args: + proto (descriptor_pb2.ServiceDescriptorProto): An empty descriptor proto. + """ + # This function is overridden to give a better doc comment. + super(ServiceDescriptor, self).CopyToProto(proto) + + +class MethodDescriptor(DescriptorBase): + + """Descriptor for a method in a service. + + Attributes: + name (str): Name of the method within the service. + full_name (str): Full name of method. + index (int): 0-indexed index of the method inside the service. + containing_service (ServiceDescriptor): The service that contains this + method. + input_type (Descriptor): The descriptor of the message that this method + accepts. + output_type (Descriptor): The descriptor of the message that this method + returns. + options (descriptor_pb2.MethodOptions or None): Method options message, or + None to use default method options. + """ + + if _USE_C_DESCRIPTORS: + _C_DESCRIPTOR_CLASS = _message.MethodDescriptor + + def __new__(cls, name, full_name, index, containing_service, + input_type, output_type, options=None, serialized_options=None, + create_key=None): + _message.Message._CheckCalledFromGeneratedFile() # pylint: disable=protected-access + return _message.default_pool.FindMethodByName(full_name) + + def __init__(self, name, full_name, index, containing_service, + input_type, output_type, options=None, serialized_options=None, + create_key=None): + """The arguments are as described in the description of MethodDescriptor + attributes above. + + Note that containing_service may be None, and may be set later if necessary. + """ + if create_key is not _internal_create_key: + _Deprecated('MethodDescriptor') + + super(MethodDescriptor, self).__init__( + options, serialized_options, 'MethodOptions') + self.name = name + self.full_name = full_name + self.index = index + self.containing_service = containing_service + self.input_type = input_type + self.output_type = output_type + + +class FileDescriptor(DescriptorBase): + """Descriptor for a file. Mimics the descriptor_pb2.FileDescriptorProto. + + Note that :attr:`enum_types_by_name`, :attr:`extensions_by_name`, and + :attr:`dependencies` fields are only set by the + :py:mod:`google.protobuf.message_factory` module, and not by the generated + proto code. + + Attributes: + name (str): Name of file, relative to root of source tree. + package (str): Name of the package + syntax (str): string indicating syntax of the file (can be "proto2" or + "proto3") + serialized_pb (bytes): Byte string of serialized + :class:`descriptor_pb2.FileDescriptorProto`. + dependencies (list[FileDescriptor]): List of other :class:`FileDescriptor` + objects this :class:`FileDescriptor` depends on. + public_dependencies (list[FileDescriptor]): A subset of + :attr:`dependencies`, which were declared as "public". + message_types_by_name (dict(str, Descriptor)): Mapping from message names + to their :class:`Desctiptor`. + enum_types_by_name (dict(str, EnumDescriptor)): Mapping from enum names to + their :class:`EnumDescriptor`. + extensions_by_name (dict(str, FieldDescriptor)): Mapping from extension + names declared at file scope to their :class:`FieldDescriptor`. + services_by_name (dict(str, ServiceDescriptor)): Mapping from services' + names to their :class:`ServiceDescriptor`. + pool (DescriptorPool): The pool this descriptor belongs to. When not + passed to the constructor, the global default pool is used. 
+ """ + + if _USE_C_DESCRIPTORS: + _C_DESCRIPTOR_CLASS = _message.FileDescriptor + + def __new__(cls, name, package, options=None, + serialized_options=None, serialized_pb=None, + dependencies=None, public_dependencies=None, + syntax=None, pool=None, create_key=None): + # FileDescriptor() is called from various places, not only from generated + # files, to register dynamic proto files and messages. + # pylint: disable=g-explicit-bool-comparison + if serialized_pb == b'': + # Cpp generated code must be linked in if serialized_pb is '' + try: + return _message.default_pool.FindFileByName(name) + except KeyError: + raise RuntimeError('Please link in cpp generated lib for %s' % (name)) + elif serialized_pb: + return _message.default_pool.AddSerializedFile(serialized_pb) + else: + return super(FileDescriptor, cls).__new__(cls) + + def __init__(self, name, package, options=None, + serialized_options=None, serialized_pb=None, + dependencies=None, public_dependencies=None, + syntax=None, pool=None, create_key=None): + """Constructor.""" + if create_key is not _internal_create_key: + _Deprecated('FileDescriptor') + + super(FileDescriptor, self).__init__( + options, serialized_options, 'FileOptions') + + if pool is None: + from google.protobuf import descriptor_pool + pool = descriptor_pool.Default() + self.pool = pool + self.message_types_by_name = {} + self.name = name + self.package = package + self.syntax = syntax or "proto2" + self.serialized_pb = serialized_pb + + self.enum_types_by_name = {} + self.extensions_by_name = {} + self.services_by_name = {} + self.dependencies = (dependencies or []) + self.public_dependencies = (public_dependencies or []) + + def CopyToProto(self, proto): + """Copies this to a descriptor_pb2.FileDescriptorProto. + + Args: + proto: An empty descriptor_pb2.FileDescriptorProto. + """ + proto.ParseFromString(self.serialized_pb) + + +def _ParseOptions(message, string): + """Parses serialized options. + + This helper function is used to parse serialized options in generated + proto2 files. It must not be used outside proto2. + """ + message.ParseFromString(string) + return message + + +def _ToCamelCase(name): + """Converts name to camel-case and returns it.""" + capitalize_next = False + result = [] + + for c in name: + if c == '_': + if result: + capitalize_next = True + elif capitalize_next: + result.append(c.upper()) + capitalize_next = False + else: + result += c + + # Lower-case the first letter. + if result and result[0].isupper(): + result[0] = result[0].lower() + return ''.join(result) + + +def _OptionsOrNone(descriptor_proto): + """Returns the value of the field `options`, or None if it is not set.""" + if descriptor_proto.HasField('options'): + return descriptor_proto.options + else: + return None + + +def _ToJsonName(name): + """Converts name to Json name and returns it.""" + capitalize_next = False + result = [] + + for c in name: + if c == '_': + capitalize_next = True + elif capitalize_next: + result.append(c.upper()) + capitalize_next = False + else: + result += c + + return ''.join(result) + + +def MakeDescriptor(desc_proto, package='', build_file_if_cpp=True, + syntax=None): + """Make a protobuf Descriptor given a DescriptorProto protobuf. + + Handles nested descriptors. Note that this is limited to the scope of defining + a message inside of another message. Composite fields can currently only be + resolved if the message is defined in the same scope as the field. + + Args: + desc_proto: The descriptor_pb2.DescriptorProto protobuf message. 
+ package: Optional package name for the new message Descriptor (string). + build_file_if_cpp: Update the C++ descriptor pool if api matches. + Set to False on recursion, so no duplicates are created. + syntax: The syntax/semantics that should be used. Set to "proto3" to get + proto3 field presence semantics. + Returns: + A Descriptor for protobuf messages. + """ + if api_implementation.Type() == 'cpp' and build_file_if_cpp: + # The C++ implementation requires all descriptors to be backed by the same + # definition in the C++ descriptor pool. To do this, we build a + # FileDescriptorProto with the same definition as this descriptor and build + # it into the pool. + from google.protobuf import descriptor_pb2 + file_descriptor_proto = descriptor_pb2.FileDescriptorProto() + file_descriptor_proto.message_type.add().MergeFrom(desc_proto) + + # Generate a random name for this proto file to prevent conflicts with any + # imported ones. We need to specify a file name so the descriptor pool + # accepts our FileDescriptorProto, but it is not important what that file + # name is actually set to. + proto_name = binascii.hexlify(os.urandom(16)).decode('ascii') + + if package: + file_descriptor_proto.name = os.path.join(package.replace('.', '/'), + proto_name + '.proto') + file_descriptor_proto.package = package + else: + file_descriptor_proto.name = proto_name + '.proto' + + _message.default_pool.Add(file_descriptor_proto) + result = _message.default_pool.FindFileByName(file_descriptor_proto.name) + + if _USE_C_DESCRIPTORS: + return result.message_types_by_name[desc_proto.name] + + full_message_name = [desc_proto.name] + if package: full_message_name.insert(0, package) + + # Create Descriptors for enum types + enum_types = {} + for enum_proto in desc_proto.enum_type: + full_name = '.'.join(full_message_name + [enum_proto.name]) + enum_desc = EnumDescriptor( + enum_proto.name, full_name, None, [ + EnumValueDescriptor(enum_val.name, ii, enum_val.number, + create_key=_internal_create_key) + for ii, enum_val in enumerate(enum_proto.value)], + create_key=_internal_create_key) + enum_types[full_name] = enum_desc + + # Create Descriptors for nested types + nested_types = {} + for nested_proto in desc_proto.nested_type: + full_name = '.'.join(full_message_name + [nested_proto.name]) + # Nested types are just those defined inside of the message, not all types + # used by fields in the message, so no loops are possible here. 
+ nested_desc = MakeDescriptor(nested_proto, + package='.'.join(full_message_name), + build_file_if_cpp=False, + syntax=syntax) + nested_types[full_name] = nested_desc + + fields = [] + for field_proto in desc_proto.field: + full_name = '.'.join(full_message_name + [field_proto.name]) + enum_desc = None + nested_desc = None + if field_proto.json_name: + json_name = field_proto.json_name + else: + json_name = None + if field_proto.HasField('type_name'): + type_name = field_proto.type_name + full_type_name = '.'.join(full_message_name + + [type_name[type_name.rfind('.')+1:]]) + if full_type_name in nested_types: + nested_desc = nested_types[full_type_name] + elif full_type_name in enum_types: + enum_desc = enum_types[full_type_name] + # Else type_name references a non-local type, which isn't implemented + field = FieldDescriptor( + field_proto.name, full_name, field_proto.number - 1, + field_proto.number, field_proto.type, + FieldDescriptor.ProtoTypeToCppProtoType(field_proto.type), + field_proto.label, None, nested_desc, enum_desc, None, False, None, + options=_OptionsOrNone(field_proto), has_default_value=False, + json_name=json_name, create_key=_internal_create_key) + fields.append(field) + + desc_name = '.'.join(full_message_name) + return Descriptor(desc_proto.name, desc_name, None, None, fields, + list(nested_types.values()), list(enum_types.values()), [], + options=_OptionsOrNone(desc_proto), + create_key=_internal_create_key) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/descriptor_database.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/descriptor_database.py new file mode 100644 index 0000000000000000000000000000000000000000..073eddc711571a7f510ff8b189e2a9a863d53454 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/descriptor_database.py @@ -0,0 +1,177 @@ +# Protocol Buffers - Google's data interchange format +# Copyright 2008 Google Inc. All rights reserved. +# https://developers.google.com/protocol-buffers/ +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions are +# met: +# +# * Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# * Redistributions in binary form must reproduce the above +# copyright notice, this list of conditions and the following disclaimer +# in the documentation and/or other materials provided with the +# distribution. +# * Neither the name of Google Inc. nor the names of its +# contributors may be used to endorse or promote products derived from +# this software without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+"""Provides a container for DescriptorProtos."""
+
+__author__ = 'matthewtoia@google.com (Matt Toia)'
+
+import warnings
+
+
+class Error(Exception):
+ pass
+
+
+class DescriptorDatabaseConflictingDefinitionError(Error):
+ """Raised when a proto is added with the same name & different descriptor."""
+
+
+class DescriptorDatabase(object):
+ """A container that accepts FileDescriptorProtos and maps DescriptorProtos."""
+
+ def __init__(self):
+ self._file_desc_protos_by_file = {}
+ self._file_desc_protos_by_symbol = {}
+
+ def Add(self, file_desc_proto):
+ """Adds the FileDescriptorProto and its types to this database.
+
+ Args:
+ file_desc_proto: The FileDescriptorProto to add.
+ Raises:
+ DescriptorDatabaseConflictingDefinitionError: if an attempt is made to
+ add a proto with the same name but different definition than an
+ existing proto in the database.
+ """
+ proto_name = file_desc_proto.name
+ if proto_name not in self._file_desc_protos_by_file:
+ self._file_desc_protos_by_file[proto_name] = file_desc_proto
+ elif self._file_desc_protos_by_file[proto_name] != file_desc_proto:
+ raise DescriptorDatabaseConflictingDefinitionError(
+ '%s already added, but with different descriptor.' % proto_name)
+ else:
+ return
+
+ # Add all the top-level descriptors to the index.
+ package = file_desc_proto.package
+ for message in file_desc_proto.message_type:
+ for name in _ExtractSymbols(message, package):
+ self._AddSymbol(name, file_desc_proto)
+ for enum in file_desc_proto.enum_type:
+ self._AddSymbol(('.'.join((package, enum.name))), file_desc_proto)
+ for enum_value in enum.value:
+ self._file_desc_protos_by_symbol[
+ '.'.join((package, enum_value.name))] = file_desc_proto
+ for extension in file_desc_proto.extension:
+ self._AddSymbol(('.'.join((package, extension.name))), file_desc_proto)
+ for service in file_desc_proto.service:
+ self._AddSymbol(('.'.join((package, service.name))), file_desc_proto)
+
+ def FindFileByName(self, name):
+ """Finds the file descriptor proto by file name.
+
+ Typically the file name is a relative path ending in a .proto file. The
+ proto with the given name must have been added to this database using the
+ Add method, or else an error will be raised.
+
+ Args:
+ name: The file name to find.
+
+ Returns:
+ The file descriptor proto matching the name.
+
+ Raises:
+ KeyError if no file by the given name was added.
+ """
+
+ return self._file_desc_protos_by_file[name]
+
+ def FindFileContainingSymbol(self, symbol):
+ """Finds the file descriptor proto containing the specified symbol.
+
+ The symbol should be a fully qualified name including the file descriptor's
+ package and any containing messages. Some examples:
+
+ 'some.package.name.Message'
+ 'some.package.name.Message.NestedEnum'
+ 'some.package.name.Message.some_field'
+
+ The file descriptor proto containing the specified symbol must have been
+ added to this database using the Add method, or else an error will be
+ raised.
+
+ Args:
+ symbol: The fully qualified symbol name.
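+
+ For example (illustrative), looking up
+ 'some.package.name.Message.some_field' falls back to the entry for
+ 'some.package.name.Message', since fields are not indexed individually.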
+
+ Returns:
+ The file descriptor proto containing the symbol.
+
+ Raises:
+ KeyError if no file contains the specified symbol.
+ """
+ try:
+ return self._file_desc_protos_by_symbol[symbol]
+ except KeyError:
+ # Fields, enum values, and nested extensions are not in
+ # _file_desc_protos_by_symbol. Try to find the top level
+ # descriptor. A non-existent nested symbol under a valid top level
+ # descriptor can also be found. The behavior matches that of the
+ # protobuf C++ implementation.
+ top_level, _, _ = symbol.rpartition('.')
+ try:
+ return self._file_desc_protos_by_symbol[top_level]
+ except KeyError:
+ # Raise a KeyError for the original symbol for better diagnostics.
+ raise KeyError(symbol)
+
+ def FindFileContainingExtension(self, extendee_name, extension_number):
+ # TODO(jieluo): implement this API.
+ return None
+
+ def FindAllExtensionNumbers(self, extendee_name):
+ # TODO(jieluo): implement this API.
+ return []
+
+ def _AddSymbol(self, name, file_desc_proto):
+ if name in self._file_desc_protos_by_symbol:
+ warn_msg = ('Conflicting symbol registration for file "' +
+ file_desc_proto.name + '": ' + name +
+ ' is already defined in file "' +
+ self._file_desc_protos_by_symbol[name].name + '"')
+ warnings.warn(warn_msg, RuntimeWarning)
+ self._file_desc_protos_by_symbol[name] = file_desc_proto
+
+
+def _ExtractSymbols(desc_proto, package):
+ """Pulls out all the symbols from a descriptor proto.
+
+ Args:
+ desc_proto: The proto to extract symbols from.
+ package: The package containing the descriptor type.
+
+ Yields:
+ The fully qualified name found in the descriptor.
+ """
+ message_name = package + '.' + desc_proto.name if package else desc_proto.name
+ yield message_name
+ for nested_type in desc_proto.nested_type:
+ for symbol in _ExtractSymbols(nested_type, message_name):
+ yield symbol
+ for enum_type in desc_proto.enum_type:
+ yield '.'.join((message_name, enum_type.name))
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/descriptor_pb2.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/descriptor_pb2.py
new file mode 100644
index 0000000000000000000000000000000000000000..0de963f8b5933b5ca7eefb3165c156011c678bc4
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/descriptor_pb2.py
@@ -0,0 +1,2106 @@
+# -*- coding: utf-8 -*-
+# Generated by the protocol buffer compiler. DO NOT EDIT!
+# source: google/protobuf/descriptor.proto +"""Generated protocol buffer code.""" +from google.protobuf import descriptor as _descriptor +from google.protobuf import message as _message +from google.protobuf import reflection as _reflection +from google.protobuf import symbol_database as _symbol_database +# @@protoc_insertion_point(imports) + +_sym_db = _symbol_database.Default() + + + + +DESCRIPTOR = _descriptor.FileDescriptor( + name='google/protobuf/descriptor.proto', + package='google.protobuf', + syntax='proto2', + serialized_options=None, + create_key=_descriptor._internal_create_key, + serialized_pb=b'\n google/protobuf/descriptor.proto\x12\x0fgoogle.protobuf\"G\n\x11\x46ileDescriptorSet\x12\x32\n\x04\x66ile\x18\x01 \x03(\x0b\x32$.google.protobuf.FileDescriptorProto\"\xdb\x03\n\x13\x46ileDescriptorProto\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x0f\n\x07package\x18\x02 \x01(\t\x12\x12\n\ndependency\x18\x03 \x03(\t\x12\x19\n\x11public_dependency\x18\n \x03(\x05\x12\x17\n\x0fweak_dependency\x18\x0b \x03(\x05\x12\x36\n\x0cmessage_type\x18\x04 \x03(\x0b\x32 .google.protobuf.DescriptorProto\x12\x37\n\tenum_type\x18\x05 \x03(\x0b\x32$.google.protobuf.EnumDescriptorProto\x12\x38\n\x07service\x18\x06 \x03(\x0b\x32\'.google.protobuf.ServiceDescriptorProto\x12\x38\n\textension\x18\x07 \x03(\x0b\x32%.google.protobuf.FieldDescriptorProto\x12-\n\x07options\x18\x08 \x01(\x0b\x32\x1c.google.protobuf.FileOptions\x12\x39\n\x10source_code_info\x18\t \x01(\x0b\x32\x1f.google.protobuf.SourceCodeInfo\x12\x0e\n\x06syntax\x18\x0c \x01(\t\"\xa9\x05\n\x0f\x44\x65scriptorProto\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x34\n\x05\x66ield\x18\x02 \x03(\x0b\x32%.google.protobuf.FieldDescriptorProto\x12\x38\n\textension\x18\x06 \x03(\x0b\x32%.google.protobuf.FieldDescriptorProto\x12\x35\n\x0bnested_type\x18\x03 \x03(\x0b\x32 .google.protobuf.DescriptorProto\x12\x37\n\tenum_type\x18\x04 \x03(\x0b\x32$.google.protobuf.EnumDescriptorProto\x12H\n\x0f\x65xtension_range\x18\x05 \x03(\x0b\x32/.google.protobuf.DescriptorProto.ExtensionRange\x12\x39\n\noneof_decl\x18\x08 \x03(\x0b\x32%.google.protobuf.OneofDescriptorProto\x12\x30\n\x07options\x18\x07 \x01(\x0b\x32\x1f.google.protobuf.MessageOptions\x12\x46\n\x0ereserved_range\x18\t \x03(\x0b\x32..google.protobuf.DescriptorProto.ReservedRange\x12\x15\n\rreserved_name\x18\n \x03(\t\x1a\x65\n\x0e\x45xtensionRange\x12\r\n\x05start\x18\x01 \x01(\x05\x12\x0b\n\x03\x65nd\x18\x02 \x01(\x05\x12\x37\n\x07options\x18\x03 \x01(\x0b\x32&.google.protobuf.ExtensionRangeOptions\x1a+\n\rReservedRange\x12\r\n\x05start\x18\x01 \x01(\x05\x12\x0b\n\x03\x65nd\x18\x02 \x01(\x05\"g\n\x15\x45xtensionRangeOptions\x12\x43\n\x14uninterpreted_option\x18\xe7\x07 \x03(\x0b\x32$.google.protobuf.UninterpretedOption*\t\x08\xe8\x07\x10\x80\x80\x80\x80\x02\"\xd5\x05\n\x14\x46ieldDescriptorProto\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x0e\n\x06number\x18\x03 \x01(\x05\x12:\n\x05label\x18\x04 \x01(\x0e\x32+.google.protobuf.FieldDescriptorProto.Label\x12\x38\n\x04type\x18\x05 \x01(\x0e\x32*.google.protobuf.FieldDescriptorProto.Type\x12\x11\n\ttype_name\x18\x06 \x01(\t\x12\x10\n\x08\x65xtendee\x18\x02 \x01(\t\x12\x15\n\rdefault_value\x18\x07 \x01(\t\x12\x13\n\x0boneof_index\x18\t \x01(\x05\x12\x11\n\tjson_name\x18\n \x01(\t\x12.\n\x07options\x18\x08 \x01(\x0b\x32\x1d.google.protobuf.FieldOptions\x12\x17\n\x0fproto3_optional\x18\x11 
\x01(\x08\"\xb6\x02\n\x04Type\x12\x0f\n\x0bTYPE_DOUBLE\x10\x01\x12\x0e\n\nTYPE_FLOAT\x10\x02\x12\x0e\n\nTYPE_INT64\x10\x03\x12\x0f\n\x0bTYPE_UINT64\x10\x04\x12\x0e\n\nTYPE_INT32\x10\x05\x12\x10\n\x0cTYPE_FIXED64\x10\x06\x12\x10\n\x0cTYPE_FIXED32\x10\x07\x12\r\n\tTYPE_BOOL\x10\x08\x12\x0f\n\x0bTYPE_STRING\x10\t\x12\x0e\n\nTYPE_GROUP\x10\n\x12\x10\n\x0cTYPE_MESSAGE\x10\x0b\x12\x0e\n\nTYPE_BYTES\x10\x0c\x12\x0f\n\x0bTYPE_UINT32\x10\r\x12\r\n\tTYPE_ENUM\x10\x0e\x12\x11\n\rTYPE_SFIXED32\x10\x0f\x12\x11\n\rTYPE_SFIXED64\x10\x10\x12\x0f\n\x0bTYPE_SINT32\x10\x11\x12\x0f\n\x0bTYPE_SINT64\x10\x12\"C\n\x05Label\x12\x12\n\x0eLABEL_OPTIONAL\x10\x01\x12\x12\n\x0eLABEL_REQUIRED\x10\x02\x12\x12\n\x0eLABEL_REPEATED\x10\x03\"T\n\x14OneofDescriptorProto\x12\x0c\n\x04name\x18\x01 \x01(\t\x12.\n\x07options\x18\x02 \x01(\x0b\x32\x1d.google.protobuf.OneofOptions\"\xa4\x02\n\x13\x45numDescriptorProto\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x38\n\x05value\x18\x02 \x03(\x0b\x32).google.protobuf.EnumValueDescriptorProto\x12-\n\x07options\x18\x03 \x01(\x0b\x32\x1c.google.protobuf.EnumOptions\x12N\n\x0ereserved_range\x18\x04 \x03(\x0b\x32\x36.google.protobuf.EnumDescriptorProto.EnumReservedRange\x12\x15\n\rreserved_name\x18\x05 \x03(\t\x1a/\n\x11\x45numReservedRange\x12\r\n\x05start\x18\x01 \x01(\x05\x12\x0b\n\x03\x65nd\x18\x02 \x01(\x05\"l\n\x18\x45numValueDescriptorProto\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x0e\n\x06number\x18\x02 \x01(\x05\x12\x32\n\x07options\x18\x03 \x01(\x0b\x32!.google.protobuf.EnumValueOptions\"\x90\x01\n\x16ServiceDescriptorProto\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x36\n\x06method\x18\x02 \x03(\x0b\x32&.google.protobuf.MethodDescriptorProto\x12\x30\n\x07options\x18\x03 \x01(\x0b\x32\x1f.google.protobuf.ServiceOptions\"\xc1\x01\n\x15MethodDescriptorProto\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x12\n\ninput_type\x18\x02 \x01(\t\x12\x13\n\x0boutput_type\x18\x03 \x01(\t\x12/\n\x07options\x18\x04 \x01(\x0b\x32\x1e.google.protobuf.MethodOptions\x12\x1f\n\x10\x63lient_streaming\x18\x05 \x01(\x08:\x05\x66\x61lse\x12\x1f\n\x10server_streaming\x18\x06 \x01(\x08:\x05\x66\x61lse\"\xa5\x06\n\x0b\x46ileOptions\x12\x14\n\x0cjava_package\x18\x01 \x01(\t\x12\x1c\n\x14java_outer_classname\x18\x08 \x01(\t\x12\"\n\x13java_multiple_files\x18\n \x01(\x08:\x05\x66\x61lse\x12)\n\x1djava_generate_equals_and_hash\x18\x14 \x01(\x08\x42\x02\x18\x01\x12%\n\x16java_string_check_utf8\x18\x1b \x01(\x08:\x05\x66\x61lse\x12\x46\n\x0coptimize_for\x18\t \x01(\x0e\x32).google.protobuf.FileOptions.OptimizeMode:\x05SPEED\x12\x12\n\ngo_package\x18\x0b \x01(\t\x12\"\n\x13\x63\x63_generic_services\x18\x10 \x01(\x08:\x05\x66\x61lse\x12$\n\x15java_generic_services\x18\x11 \x01(\x08:\x05\x66\x61lse\x12\"\n\x13py_generic_services\x18\x12 \x01(\x08:\x05\x66\x61lse\x12#\n\x14php_generic_services\x18* \x01(\x08:\x05\x66\x61lse\x12\x19\n\ndeprecated\x18\x17 \x01(\x08:\x05\x66\x61lse\x12\x1e\n\x10\x63\x63_enable_arenas\x18\x1f \x01(\x08:\x04true\x12\x19\n\x11objc_class_prefix\x18$ \x01(\t\x12\x18\n\x10\x63sharp_namespace\x18% \x01(\t\x12\x14\n\x0cswift_prefix\x18\' \x01(\t\x12\x18\n\x10php_class_prefix\x18( \x01(\t\x12\x15\n\rphp_namespace\x18) \x01(\t\x12\x1e\n\x16php_metadata_namespace\x18, \x01(\t\x12\x14\n\x0cruby_package\x18- \x01(\t\x12\x43\n\x14uninterpreted_option\x18\xe7\x07 
\x03(\x0b\x32$.google.protobuf.UninterpretedOption\":\n\x0cOptimizeMode\x12\t\n\x05SPEED\x10\x01\x12\r\n\tCODE_SIZE\x10\x02\x12\x10\n\x0cLITE_RUNTIME\x10\x03*\t\x08\xe8\x07\x10\x80\x80\x80\x80\x02J\x04\x08&\x10\'\"\xf2\x01\n\x0eMessageOptions\x12&\n\x17message_set_wire_format\x18\x01 \x01(\x08:\x05\x66\x61lse\x12.\n\x1fno_standard_descriptor_accessor\x18\x02 \x01(\x08:\x05\x66\x61lse\x12\x19\n\ndeprecated\x18\x03 \x01(\x08:\x05\x66\x61lse\x12\x11\n\tmap_entry\x18\x07 \x01(\x08\x12\x43\n\x14uninterpreted_option\x18\xe7\x07 \x03(\x0b\x32$.google.protobuf.UninterpretedOption*\t\x08\xe8\x07\x10\x80\x80\x80\x80\x02J\x04\x08\x08\x10\tJ\x04\x08\t\x10\n\"\x9e\x03\n\x0c\x46ieldOptions\x12:\n\x05\x63type\x18\x01 \x01(\x0e\x32#.google.protobuf.FieldOptions.CType:\x06STRING\x12\x0e\n\x06packed\x18\x02 \x01(\x08\x12?\n\x06jstype\x18\x06 \x01(\x0e\x32$.google.protobuf.FieldOptions.JSType:\tJS_NORMAL\x12\x13\n\x04lazy\x18\x05 \x01(\x08:\x05\x66\x61lse\x12\x19\n\ndeprecated\x18\x03 \x01(\x08:\x05\x66\x61lse\x12\x13\n\x04weak\x18\n \x01(\x08:\x05\x66\x61lse\x12\x43\n\x14uninterpreted_option\x18\xe7\x07 \x03(\x0b\x32$.google.protobuf.UninterpretedOption\"/\n\x05\x43Type\x12\n\n\x06STRING\x10\x00\x12\x08\n\x04\x43ORD\x10\x01\x12\x10\n\x0cSTRING_PIECE\x10\x02\"5\n\x06JSType\x12\r\n\tJS_NORMAL\x10\x00\x12\r\n\tJS_STRING\x10\x01\x12\r\n\tJS_NUMBER\x10\x02*\t\x08\xe8\x07\x10\x80\x80\x80\x80\x02J\x04\x08\x04\x10\x05\"^\n\x0cOneofOptions\x12\x43\n\x14uninterpreted_option\x18\xe7\x07 \x03(\x0b\x32$.google.protobuf.UninterpretedOption*\t\x08\xe8\x07\x10\x80\x80\x80\x80\x02\"\x93\x01\n\x0b\x45numOptions\x12\x13\n\x0b\x61llow_alias\x18\x02 \x01(\x08\x12\x19\n\ndeprecated\x18\x03 \x01(\x08:\x05\x66\x61lse\x12\x43\n\x14uninterpreted_option\x18\xe7\x07 \x03(\x0b\x32$.google.protobuf.UninterpretedOption*\t\x08\xe8\x07\x10\x80\x80\x80\x80\x02J\x04\x08\x05\x10\x06\"}\n\x10\x45numValueOptions\x12\x19\n\ndeprecated\x18\x01 \x01(\x08:\x05\x66\x61lse\x12\x43\n\x14uninterpreted_option\x18\xe7\x07 \x03(\x0b\x32$.google.protobuf.UninterpretedOption*\t\x08\xe8\x07\x10\x80\x80\x80\x80\x02\"{\n\x0eServiceOptions\x12\x19\n\ndeprecated\x18! \x01(\x08:\x05\x66\x61lse\x12\x43\n\x14uninterpreted_option\x18\xe7\x07 \x03(\x0b\x32$.google.protobuf.UninterpretedOption*\t\x08\xe8\x07\x10\x80\x80\x80\x80\x02\"\xad\x02\n\rMethodOptions\x12\x19\n\ndeprecated\x18! 
\x01(\x08:\x05\x66\x61lse\x12_\n\x11idempotency_level\x18\" \x01(\x0e\x32/.google.protobuf.MethodOptions.IdempotencyLevel:\x13IDEMPOTENCY_UNKNOWN\x12\x43\n\x14uninterpreted_option\x18\xe7\x07 \x03(\x0b\x32$.google.protobuf.UninterpretedOption\"P\n\x10IdempotencyLevel\x12\x17\n\x13IDEMPOTENCY_UNKNOWN\x10\x00\x12\x13\n\x0fNO_SIDE_EFFECTS\x10\x01\x12\x0e\n\nIDEMPOTENT\x10\x02*\t\x08\xe8\x07\x10\x80\x80\x80\x80\x02\"\x9e\x02\n\x13UninterpretedOption\x12;\n\x04name\x18\x02 \x03(\x0b\x32-.google.protobuf.UninterpretedOption.NamePart\x12\x18\n\x10identifier_value\x18\x03 \x01(\t\x12\x1a\n\x12positive_int_value\x18\x04 \x01(\x04\x12\x1a\n\x12negative_int_value\x18\x05 \x01(\x03\x12\x14\n\x0c\x64ouble_value\x18\x06 \x01(\x01\x12\x14\n\x0cstring_value\x18\x07 \x01(\x0c\x12\x17\n\x0f\x61ggregate_value\x18\x08 \x01(\t\x1a\x33\n\x08NamePart\x12\x11\n\tname_part\x18\x01 \x02(\t\x12\x14\n\x0cis_extension\x18\x02 \x02(\x08\"\xd5\x01\n\x0eSourceCodeInfo\x12:\n\x08location\x18\x01 \x03(\x0b\x32(.google.protobuf.SourceCodeInfo.Location\x1a\x86\x01\n\x08Location\x12\x10\n\x04path\x18\x01 \x03(\x05\x42\x02\x10\x01\x12\x10\n\x04span\x18\x02 \x03(\x05\x42\x02\x10\x01\x12\x18\n\x10leading_comments\x18\x03 \x01(\t\x12\x19\n\x11trailing_comments\x18\x04 \x01(\t\x12!\n\x19leading_detached_comments\x18\x06 \x03(\t\"\xa7\x01\n\x11GeneratedCodeInfo\x12\x41\n\nannotation\x18\x01 \x03(\x0b\x32-.google.protobuf.GeneratedCodeInfo.Annotation\x1aO\n\nAnnotation\x12\x10\n\x04path\x18\x01 \x03(\x05\x42\x02\x10\x01\x12\x13\n\x0bsource_file\x18\x02 \x01(\t\x12\r\n\x05\x62\x65gin\x18\x03 \x01(\x05\x12\x0b\n\x03\x65nd\x18\x04 \x01(\x05\x42~\n\x13\x63om.google.protobufB\x10\x44\x65scriptorProtosH\x01Z-google.golang.org/protobuf/types/descriptorpb\xf8\x01\x01\xa2\x02\x03GPB\xaa\x02\x1aGoogle.Protobuf.Reflection' +) + + + +_FIELDDESCRIPTORPROTO_TYPE = _descriptor.EnumDescriptor( + name='Type', + full_name='google.protobuf.FieldDescriptorProto.Type', + filename=None, + file=DESCRIPTOR, + create_key=_descriptor._internal_create_key, + values=[ + _descriptor.EnumValueDescriptor( + name='TYPE_DOUBLE', index=0, number=1, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + _descriptor.EnumValueDescriptor( + name='TYPE_FLOAT', index=1, number=2, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + _descriptor.EnumValueDescriptor( + name='TYPE_INT64', index=2, number=3, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + _descriptor.EnumValueDescriptor( + name='TYPE_UINT64', index=3, number=4, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + _descriptor.EnumValueDescriptor( + name='TYPE_INT32', index=4, number=5, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + _descriptor.EnumValueDescriptor( + name='TYPE_FIXED64', index=5, number=6, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + _descriptor.EnumValueDescriptor( + name='TYPE_FIXED32', index=6, number=7, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + _descriptor.EnumValueDescriptor( + name='TYPE_BOOL', index=7, number=8, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + _descriptor.EnumValueDescriptor( + name='TYPE_STRING', index=8, number=9, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + _descriptor.EnumValueDescriptor( + 
name='TYPE_GROUP', index=9, number=10, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + _descriptor.EnumValueDescriptor( + name='TYPE_MESSAGE', index=10, number=11, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + _descriptor.EnumValueDescriptor( + name='TYPE_BYTES', index=11, number=12, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + _descriptor.EnumValueDescriptor( + name='TYPE_UINT32', index=12, number=13, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + _descriptor.EnumValueDescriptor( + name='TYPE_ENUM', index=13, number=14, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + _descriptor.EnumValueDescriptor( + name='TYPE_SFIXED32', index=14, number=15, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + _descriptor.EnumValueDescriptor( + name='TYPE_SFIXED64', index=15, number=16, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + _descriptor.EnumValueDescriptor( + name='TYPE_SINT32', index=16, number=17, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + _descriptor.EnumValueDescriptor( + name='TYPE_SINT64', index=17, number=18, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + ], + containing_type=None, + serialized_options=None, + serialized_start=1740, + serialized_end=2050, +) +_sym_db.RegisterEnumDescriptor(_FIELDDESCRIPTORPROTO_TYPE) + +_FIELDDESCRIPTORPROTO_LABEL = _descriptor.EnumDescriptor( + name='Label', + full_name='google.protobuf.FieldDescriptorProto.Label', + filename=None, + file=DESCRIPTOR, + create_key=_descriptor._internal_create_key, + values=[ + _descriptor.EnumValueDescriptor( + name='LABEL_OPTIONAL', index=0, number=1, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + _descriptor.EnumValueDescriptor( + name='LABEL_REQUIRED', index=1, number=2, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + _descriptor.EnumValueDescriptor( + name='LABEL_REPEATED', index=2, number=3, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + ], + containing_type=None, + serialized_options=None, + serialized_start=2052, + serialized_end=2119, +) +_sym_db.RegisterEnumDescriptor(_FIELDDESCRIPTORPROTO_LABEL) + +_FILEOPTIONS_OPTIMIZEMODE = _descriptor.EnumDescriptor( + name='OptimizeMode', + full_name='google.protobuf.FileOptions.OptimizeMode', + filename=None, + file=DESCRIPTOR, + create_key=_descriptor._internal_create_key, + values=[ + _descriptor.EnumValueDescriptor( + name='SPEED', index=0, number=1, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + _descriptor.EnumValueDescriptor( + name='CODE_SIZE', index=1, number=2, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + _descriptor.EnumValueDescriptor( + name='LITE_RUNTIME', index=2, number=3, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + ], + containing_type=None, + serialized_options=None, + serialized_start=3686, + serialized_end=3744, +) +_sym_db.RegisterEnumDescriptor(_FILEOPTIONS_OPTIMIZEMODE) + +_FIELDOPTIONS_CTYPE = _descriptor.EnumDescriptor( + name='CType', + full_name='google.protobuf.FieldOptions.CType', + filename=None, + 
file=DESCRIPTOR, + create_key=_descriptor._internal_create_key, + values=[ + _descriptor.EnumValueDescriptor( + name='STRING', index=0, number=0, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + _descriptor.EnumValueDescriptor( + name='CORD', index=1, number=1, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + _descriptor.EnumValueDescriptor( + name='STRING_PIECE', index=2, number=2, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + ], + containing_type=None, + serialized_options=None, + serialized_start=4304, + serialized_end=4351, +) +_sym_db.RegisterEnumDescriptor(_FIELDOPTIONS_CTYPE) + +_FIELDOPTIONS_JSTYPE = _descriptor.EnumDescriptor( + name='JSType', + full_name='google.protobuf.FieldOptions.JSType', + filename=None, + file=DESCRIPTOR, + create_key=_descriptor._internal_create_key, + values=[ + _descriptor.EnumValueDescriptor( + name='JS_NORMAL', index=0, number=0, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + _descriptor.EnumValueDescriptor( + name='JS_STRING', index=1, number=1, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + _descriptor.EnumValueDescriptor( + name='JS_NUMBER', index=2, number=2, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + ], + containing_type=None, + serialized_options=None, + serialized_start=4353, + serialized_end=4406, +) +_sym_db.RegisterEnumDescriptor(_FIELDOPTIONS_JSTYPE) + +_METHODOPTIONS_IDEMPOTENCYLEVEL = _descriptor.EnumDescriptor( + name='IdempotencyLevel', + full_name='google.protobuf.MethodOptions.IdempotencyLevel', + filename=None, + file=DESCRIPTOR, + create_key=_descriptor._internal_create_key, + values=[ + _descriptor.EnumValueDescriptor( + name='IDEMPOTENCY_UNKNOWN', index=0, number=0, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + _descriptor.EnumValueDescriptor( + name='NO_SIDE_EFFECTS', index=1, number=1, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + _descriptor.EnumValueDescriptor( + name='IDEMPOTENT', index=2, number=2, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + ], + containing_type=None, + serialized_options=None, + serialized_start=5134, + serialized_end=5214, +) +_sym_db.RegisterEnumDescriptor(_METHODOPTIONS_IDEMPOTENCYLEVEL) + + +_FILEDESCRIPTORSET = _descriptor.Descriptor( + name='FileDescriptorSet', + full_name='google.protobuf.FileDescriptorSet', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='file', full_name='google.protobuf.FileDescriptorSet.file', index=0, + number=1, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=None, + is_extendable=False, + syntax='proto2', + extension_ranges=[], + oneofs=[ + ], + serialized_start=53, + serialized_end=124, +) + + +_FILEDESCRIPTORPROTO = _descriptor.Descriptor( + name='FileDescriptorProto', + full_name='google.protobuf.FileDescriptorProto', + filename=None, + file=DESCRIPTOR, + containing_type=None, + 
create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='name', full_name='google.protobuf.FileDescriptorProto.name', index=0, + number=1, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='package', full_name='google.protobuf.FileDescriptorProto.package', index=1, + number=2, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='dependency', full_name='google.protobuf.FileDescriptorProto.dependency', index=2, + number=3, type=9, cpp_type=9, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='public_dependency', full_name='google.protobuf.FileDescriptorProto.public_dependency', index=3, + number=10, type=5, cpp_type=1, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='weak_dependency', full_name='google.protobuf.FileDescriptorProto.weak_dependency', index=4, + number=11, type=5, cpp_type=1, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='message_type', full_name='google.protobuf.FileDescriptorProto.message_type', index=5, + number=4, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='enum_type', full_name='google.protobuf.FileDescriptorProto.enum_type', index=6, + number=5, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='service', full_name='google.protobuf.FileDescriptorProto.service', index=7, + number=6, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='extension', full_name='google.protobuf.FileDescriptorProto.extension', index=8, + number=7, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, 
containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='options', full_name='google.protobuf.FileDescriptorProto.options', index=9, + number=8, type=11, cpp_type=10, label=1, + has_default_value=False, default_value=None, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='source_code_info', full_name='google.protobuf.FileDescriptorProto.source_code_info', index=10, + number=9, type=11, cpp_type=10, label=1, + has_default_value=False, default_value=None, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='syntax', full_name='google.protobuf.FileDescriptorProto.syntax', index=11, + number=12, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=None, + is_extendable=False, + syntax='proto2', + extension_ranges=[], + oneofs=[ + ], + serialized_start=127, + serialized_end=602, +) + + +_DESCRIPTORPROTO_EXTENSIONRANGE = _descriptor.Descriptor( + name='ExtensionRange', + full_name='google.protobuf.DescriptorProto.ExtensionRange', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='start', full_name='google.protobuf.DescriptorProto.ExtensionRange.start', index=0, + number=1, type=5, cpp_type=1, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='end', full_name='google.protobuf.DescriptorProto.ExtensionRange.end', index=1, + number=2, type=5, cpp_type=1, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='options', full_name='google.protobuf.DescriptorProto.ExtensionRange.options', index=2, + number=3, type=11, cpp_type=10, label=1, + has_default_value=False, default_value=None, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=None, + is_extendable=False, + syntax='proto2', + extension_ranges=[], + oneofs=[ + ], + serialized_start=1140, + serialized_end=1241, +) + +_DESCRIPTORPROTO_RESERVEDRANGE = _descriptor.Descriptor( + name='ReservedRange', + full_name='google.protobuf.DescriptorProto.ReservedRange', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + 
fields=[ + _descriptor.FieldDescriptor( + name='start', full_name='google.protobuf.DescriptorProto.ReservedRange.start', index=0, + number=1, type=5, cpp_type=1, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='end', full_name='google.protobuf.DescriptorProto.ReservedRange.end', index=1, + number=2, type=5, cpp_type=1, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=None, + is_extendable=False, + syntax='proto2', + extension_ranges=[], + oneofs=[ + ], + serialized_start=1243, + serialized_end=1286, +) + +_DESCRIPTORPROTO = _descriptor.Descriptor( + name='DescriptorProto', + full_name='google.protobuf.DescriptorProto', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='name', full_name='google.protobuf.DescriptorProto.name', index=0, + number=1, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='field', full_name='google.protobuf.DescriptorProto.field', index=1, + number=2, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='extension', full_name='google.protobuf.DescriptorProto.extension', index=2, + number=6, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='nested_type', full_name='google.protobuf.DescriptorProto.nested_type', index=3, + number=3, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='enum_type', full_name='google.protobuf.DescriptorProto.enum_type', index=4, + number=4, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='extension_range', full_name='google.protobuf.DescriptorProto.extension_range', index=5, + number=5, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, 
extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='oneof_decl', full_name='google.protobuf.DescriptorProto.oneof_decl', index=6, + number=8, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='options', full_name='google.protobuf.DescriptorProto.options', index=7, + number=7, type=11, cpp_type=10, label=1, + has_default_value=False, default_value=None, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='reserved_range', full_name='google.protobuf.DescriptorProto.reserved_range', index=8, + number=9, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='reserved_name', full_name='google.protobuf.DescriptorProto.reserved_name', index=9, + number=10, type=9, cpp_type=9, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[_DESCRIPTORPROTO_EXTENSIONRANGE, _DESCRIPTORPROTO_RESERVEDRANGE, ], + enum_types=[ + ], + serialized_options=None, + is_extendable=False, + syntax='proto2', + extension_ranges=[], + oneofs=[ + ], + serialized_start=605, + serialized_end=1286, +) + + +_EXTENSIONRANGEOPTIONS = _descriptor.Descriptor( + name='ExtensionRangeOptions', + full_name='google.protobuf.ExtensionRangeOptions', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='uninterpreted_option', full_name='google.protobuf.ExtensionRangeOptions.uninterpreted_option', index=0, + number=999, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=None, + is_extendable=True, + syntax='proto2', + extension_ranges=[(1000, 536870912), ], + oneofs=[ + ], + serialized_start=1288, + serialized_end=1391, +) + + +_FIELDDESCRIPTORPROTO = _descriptor.Descriptor( + name='FieldDescriptorProto', + full_name='google.protobuf.FieldDescriptorProto', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='name', full_name='google.protobuf.FieldDescriptorProto.name', index=0, + number=1, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, 
create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='number', full_name='google.protobuf.FieldDescriptorProto.number', index=1, + number=3, type=5, cpp_type=1, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='label', full_name='google.protobuf.FieldDescriptorProto.label', index=2, + number=4, type=14, cpp_type=8, label=1, + has_default_value=False, default_value=1, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='type', full_name='google.protobuf.FieldDescriptorProto.type', index=3, + number=5, type=14, cpp_type=8, label=1, + has_default_value=False, default_value=1, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='type_name', full_name='google.protobuf.FieldDescriptorProto.type_name', index=4, + number=6, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='extendee', full_name='google.protobuf.FieldDescriptorProto.extendee', index=5, + number=2, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='default_value', full_name='google.protobuf.FieldDescriptorProto.default_value', index=6, + number=7, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='oneof_index', full_name='google.protobuf.FieldDescriptorProto.oneof_index', index=7, + number=9, type=5, cpp_type=1, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='json_name', full_name='google.protobuf.FieldDescriptorProto.json_name', index=8, + number=10, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='options', full_name='google.protobuf.FieldDescriptorProto.options', index=9, + number=8, type=11, cpp_type=10, label=1, + has_default_value=False, default_value=None, + message_type=None, enum_type=None, 
containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='proto3_optional', full_name='google.protobuf.FieldDescriptorProto.proto3_optional', index=10, + number=17, type=8, cpp_type=7, label=1, + has_default_value=False, default_value=False, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + _FIELDDESCRIPTORPROTO_TYPE, + _FIELDDESCRIPTORPROTO_LABEL, + ], + serialized_options=None, + is_extendable=False, + syntax='proto2', + extension_ranges=[], + oneofs=[ + ], + serialized_start=1394, + serialized_end=2119, +) + + +_ONEOFDESCRIPTORPROTO = _descriptor.Descriptor( + name='OneofDescriptorProto', + full_name='google.protobuf.OneofDescriptorProto', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='name', full_name='google.protobuf.OneofDescriptorProto.name', index=0, + number=1, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='options', full_name='google.protobuf.OneofDescriptorProto.options', index=1, + number=2, type=11, cpp_type=10, label=1, + has_default_value=False, default_value=None, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=None, + is_extendable=False, + syntax='proto2', + extension_ranges=[], + oneofs=[ + ], + serialized_start=2121, + serialized_end=2205, +) + + +_ENUMDESCRIPTORPROTO_ENUMRESERVEDRANGE = _descriptor.Descriptor( + name='EnumReservedRange', + full_name='google.protobuf.EnumDescriptorProto.EnumReservedRange', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='start', full_name='google.protobuf.EnumDescriptorProto.EnumReservedRange.start', index=0, + number=1, type=5, cpp_type=1, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='end', full_name='google.protobuf.EnumDescriptorProto.EnumReservedRange.end', index=1, + number=2, type=5, cpp_type=1, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=None, + is_extendable=False, + syntax='proto2', + extension_ranges=[], + oneofs=[ + ], + serialized_start=2453, + serialized_end=2500, +) + +_ENUMDESCRIPTORPROTO = _descriptor.Descriptor( + name='EnumDescriptorProto', + 
full_name='google.protobuf.EnumDescriptorProto', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='name', full_name='google.protobuf.EnumDescriptorProto.name', index=0, + number=1, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='value', full_name='google.protobuf.EnumDescriptorProto.value', index=1, + number=2, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='options', full_name='google.protobuf.EnumDescriptorProto.options', index=2, + number=3, type=11, cpp_type=10, label=1, + has_default_value=False, default_value=None, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='reserved_range', full_name='google.protobuf.EnumDescriptorProto.reserved_range', index=3, + number=4, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='reserved_name', full_name='google.protobuf.EnumDescriptorProto.reserved_name', index=4, + number=5, type=9, cpp_type=9, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[_ENUMDESCRIPTORPROTO_ENUMRESERVEDRANGE, ], + enum_types=[ + ], + serialized_options=None, + is_extendable=False, + syntax='proto2', + extension_ranges=[], + oneofs=[ + ], + serialized_start=2208, + serialized_end=2500, +) + + +_ENUMVALUEDESCRIPTORPROTO = _descriptor.Descriptor( + name='EnumValueDescriptorProto', + full_name='google.protobuf.EnumValueDescriptorProto', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='name', full_name='google.protobuf.EnumValueDescriptorProto.name', index=0, + number=1, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='number', full_name='google.protobuf.EnumValueDescriptorProto.number', index=1, + number=2, type=5, cpp_type=1, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + 
name='options', full_name='google.protobuf.EnumValueDescriptorProto.options', index=2, + number=3, type=11, cpp_type=10, label=1, + has_default_value=False, default_value=None, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=None, + is_extendable=False, + syntax='proto2', + extension_ranges=[], + oneofs=[ + ], + serialized_start=2502, + serialized_end=2610, +) + + +_SERVICEDESCRIPTORPROTO = _descriptor.Descriptor( + name='ServiceDescriptorProto', + full_name='google.protobuf.ServiceDescriptorProto', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='name', full_name='google.protobuf.ServiceDescriptorProto.name', index=0, + number=1, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='method', full_name='google.protobuf.ServiceDescriptorProto.method', index=1, + number=2, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='options', full_name='google.protobuf.ServiceDescriptorProto.options', index=2, + number=3, type=11, cpp_type=10, label=1, + has_default_value=False, default_value=None, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=None, + is_extendable=False, + syntax='proto2', + extension_ranges=[], + oneofs=[ + ], + serialized_start=2613, + serialized_end=2757, +) + + +_METHODDESCRIPTORPROTO = _descriptor.Descriptor( + name='MethodDescriptorProto', + full_name='google.protobuf.MethodDescriptorProto', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='name', full_name='google.protobuf.MethodDescriptorProto.name', index=0, + number=1, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='input_type', full_name='google.protobuf.MethodDescriptorProto.input_type', index=1, + number=2, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='output_type', full_name='google.protobuf.MethodDescriptorProto.output_type', index=2, + number=3, type=9, cpp_type=9, label=1, + 
has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='options', full_name='google.protobuf.MethodDescriptorProto.options', index=3, + number=4, type=11, cpp_type=10, label=1, + has_default_value=False, default_value=None, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='client_streaming', full_name='google.protobuf.MethodDescriptorProto.client_streaming', index=4, + number=5, type=8, cpp_type=7, label=1, + has_default_value=True, default_value=False, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='server_streaming', full_name='google.protobuf.MethodDescriptorProto.server_streaming', index=5, + number=6, type=8, cpp_type=7, label=1, + has_default_value=True, default_value=False, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=None, + is_extendable=False, + syntax='proto2', + extension_ranges=[], + oneofs=[ + ], + serialized_start=2760, + serialized_end=2953, +) + + +_FILEOPTIONS = _descriptor.Descriptor( + name='FileOptions', + full_name='google.protobuf.FileOptions', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='java_package', full_name='google.protobuf.FileOptions.java_package', index=0, + number=1, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='java_outer_classname', full_name='google.protobuf.FileOptions.java_outer_classname', index=1, + number=8, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='java_multiple_files', full_name='google.protobuf.FileOptions.java_multiple_files', index=2, + number=10, type=8, cpp_type=7, label=1, + has_default_value=True, default_value=False, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='java_generate_equals_and_hash', full_name='google.protobuf.FileOptions.java_generate_equals_and_hash', index=3, + number=20, type=8, cpp_type=7, label=1, + has_default_value=False, default_value=False, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + 
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='java_string_check_utf8', full_name='google.protobuf.FileOptions.java_string_check_utf8', index=4, + number=27, type=8, cpp_type=7, label=1, + has_default_value=True, default_value=False, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='optimize_for', full_name='google.protobuf.FileOptions.optimize_for', index=5, + number=9, type=14, cpp_type=8, label=1, + has_default_value=True, default_value=1, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='go_package', full_name='google.protobuf.FileOptions.go_package', index=6, + number=11, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='cc_generic_services', full_name='google.protobuf.FileOptions.cc_generic_services', index=7, + number=16, type=8, cpp_type=7, label=1, + has_default_value=True, default_value=False, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='java_generic_services', full_name='google.protobuf.FileOptions.java_generic_services', index=8, + number=17, type=8, cpp_type=7, label=1, + has_default_value=True, default_value=False, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='py_generic_services', full_name='google.protobuf.FileOptions.py_generic_services', index=9, + number=18, type=8, cpp_type=7, label=1, + has_default_value=True, default_value=False, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='php_generic_services', full_name='google.protobuf.FileOptions.php_generic_services', index=10, + number=42, type=8, cpp_type=7, label=1, + has_default_value=True, default_value=False, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='deprecated', full_name='google.protobuf.FileOptions.deprecated', index=11, + number=23, type=8, cpp_type=7, label=1, + has_default_value=True, default_value=False, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='cc_enable_arenas', full_name='google.protobuf.FileOptions.cc_enable_arenas', index=12, + number=31, type=8, cpp_type=7, label=1, + 
has_default_value=True, default_value=True, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='objc_class_prefix', full_name='google.protobuf.FileOptions.objc_class_prefix', index=13, + number=36, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='csharp_namespace', full_name='google.protobuf.FileOptions.csharp_namespace', index=14, + number=37, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='swift_prefix', full_name='google.protobuf.FileOptions.swift_prefix', index=15, + number=39, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='php_class_prefix', full_name='google.protobuf.FileOptions.php_class_prefix', index=16, + number=40, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='php_namespace', full_name='google.protobuf.FileOptions.php_namespace', index=17, + number=41, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='php_metadata_namespace', full_name='google.protobuf.FileOptions.php_metadata_namespace', index=18, + number=44, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='ruby_package', full_name='google.protobuf.FileOptions.ruby_package', index=19, + number=45, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='uninterpreted_option', full_name='google.protobuf.FileOptions.uninterpreted_option', index=20, + number=999, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, 
file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + _FILEOPTIONS_OPTIMIZEMODE, + ], + serialized_options=None, + is_extendable=True, + syntax='proto2', + extension_ranges=[(1000, 536870912), ], + oneofs=[ + ], + serialized_start=2956, + serialized_end=3761, +) + + +_MESSAGEOPTIONS = _descriptor.Descriptor( + name='MessageOptions', + full_name='google.protobuf.MessageOptions', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='message_set_wire_format', full_name='google.protobuf.MessageOptions.message_set_wire_format', index=0, + number=1, type=8, cpp_type=7, label=1, + has_default_value=True, default_value=False, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='no_standard_descriptor_accessor', full_name='google.protobuf.MessageOptions.no_standard_descriptor_accessor', index=1, + number=2, type=8, cpp_type=7, label=1, + has_default_value=True, default_value=False, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='deprecated', full_name='google.protobuf.MessageOptions.deprecated', index=2, + number=3, type=8, cpp_type=7, label=1, + has_default_value=True, default_value=False, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='map_entry', full_name='google.protobuf.MessageOptions.map_entry', index=3, + number=7, type=8, cpp_type=7, label=1, + has_default_value=False, default_value=False, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='uninterpreted_option', full_name='google.protobuf.MessageOptions.uninterpreted_option', index=4, + number=999, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=None, + is_extendable=True, + syntax='proto2', + extension_ranges=[(1000, 536870912), ], + oneofs=[ + ], + serialized_start=3764, + serialized_end=4006, +) + + +_FIELDOPTIONS = _descriptor.Descriptor( + name='FieldOptions', + full_name='google.protobuf.FieldOptions', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='ctype', full_name='google.protobuf.FieldOptions.ctype', index=0, + number=1, type=14, cpp_type=8, label=1, + has_default_value=True, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='packed', 
full_name='google.protobuf.FieldOptions.packed', index=1, + number=2, type=8, cpp_type=7, label=1, + has_default_value=False, default_value=False, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='jstype', full_name='google.protobuf.FieldOptions.jstype', index=2, + number=6, type=14, cpp_type=8, label=1, + has_default_value=True, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='lazy', full_name='google.protobuf.FieldOptions.lazy', index=3, + number=5, type=8, cpp_type=7, label=1, + has_default_value=True, default_value=False, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='deprecated', full_name='google.protobuf.FieldOptions.deprecated', index=4, + number=3, type=8, cpp_type=7, label=1, + has_default_value=True, default_value=False, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='weak', full_name='google.protobuf.FieldOptions.weak', index=5, + number=10, type=8, cpp_type=7, label=1, + has_default_value=True, default_value=False, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='uninterpreted_option', full_name='google.protobuf.FieldOptions.uninterpreted_option', index=6, + number=999, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + _FIELDOPTIONS_CTYPE, + _FIELDOPTIONS_JSTYPE, + ], + serialized_options=None, + is_extendable=True, + syntax='proto2', + extension_ranges=[(1000, 536870912), ], + oneofs=[ + ], + serialized_start=4009, + serialized_end=4423, +) + + +_ONEOFOPTIONS = _descriptor.Descriptor( + name='OneofOptions', + full_name='google.protobuf.OneofOptions', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='uninterpreted_option', full_name='google.protobuf.OneofOptions.uninterpreted_option', index=0, + number=999, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=None, + is_extendable=True, + syntax='proto2', + extension_ranges=[(1000, 536870912), ], + oneofs=[ + ], + serialized_start=4425, + serialized_end=4519, +) + + +_ENUMOPTIONS = _descriptor.Descriptor( + name='EnumOptions', + 
full_name='google.protobuf.EnumOptions', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='allow_alias', full_name='google.protobuf.EnumOptions.allow_alias', index=0, + number=2, type=8, cpp_type=7, label=1, + has_default_value=False, default_value=False, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='deprecated', full_name='google.protobuf.EnumOptions.deprecated', index=1, + number=3, type=8, cpp_type=7, label=1, + has_default_value=True, default_value=False, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='uninterpreted_option', full_name='google.protobuf.EnumOptions.uninterpreted_option', index=2, + number=999, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=None, + is_extendable=True, + syntax='proto2', + extension_ranges=[(1000, 536870912), ], + oneofs=[ + ], + serialized_start=4522, + serialized_end=4669, +) + + +_ENUMVALUEOPTIONS = _descriptor.Descriptor( + name='EnumValueOptions', + full_name='google.protobuf.EnumValueOptions', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='deprecated', full_name='google.protobuf.EnumValueOptions.deprecated', index=0, + number=1, type=8, cpp_type=7, label=1, + has_default_value=True, default_value=False, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='uninterpreted_option', full_name='google.protobuf.EnumValueOptions.uninterpreted_option', index=1, + number=999, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=None, + is_extendable=True, + syntax='proto2', + extension_ranges=[(1000, 536870912), ], + oneofs=[ + ], + serialized_start=4671, + serialized_end=4796, +) + + +_SERVICEOPTIONS = _descriptor.Descriptor( + name='ServiceOptions', + full_name='google.protobuf.ServiceOptions', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='deprecated', full_name='google.protobuf.ServiceOptions.deprecated', index=0, + number=33, type=8, cpp_type=7, label=1, + has_default_value=True, default_value=False, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + 
_descriptor.FieldDescriptor( + name='uninterpreted_option', full_name='google.protobuf.ServiceOptions.uninterpreted_option', index=1, + number=999, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=None, + is_extendable=True, + syntax='proto2', + extension_ranges=[(1000, 536870912), ], + oneofs=[ + ], + serialized_start=4798, + serialized_end=4921, +) + + +_METHODOPTIONS = _descriptor.Descriptor( + name='MethodOptions', + full_name='google.protobuf.MethodOptions', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='deprecated', full_name='google.protobuf.MethodOptions.deprecated', index=0, + number=33, type=8, cpp_type=7, label=1, + has_default_value=True, default_value=False, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='idempotency_level', full_name='google.protobuf.MethodOptions.idempotency_level', index=1, + number=34, type=14, cpp_type=8, label=1, + has_default_value=True, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='uninterpreted_option', full_name='google.protobuf.MethodOptions.uninterpreted_option', index=2, + number=999, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + _METHODOPTIONS_IDEMPOTENCYLEVEL, + ], + serialized_options=None, + is_extendable=True, + syntax='proto2', + extension_ranges=[(1000, 536870912), ], + oneofs=[ + ], + serialized_start=4924, + serialized_end=5225, +) + + +_UNINTERPRETEDOPTION_NAMEPART = _descriptor.Descriptor( + name='NamePart', + full_name='google.protobuf.UninterpretedOption.NamePart', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='name_part', full_name='google.protobuf.UninterpretedOption.NamePart.name_part', index=0, + number=1, type=9, cpp_type=9, label=2, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='is_extension', full_name='google.protobuf.UninterpretedOption.NamePart.is_extension', index=1, + number=2, type=8, cpp_type=7, label=2, + has_default_value=False, default_value=False, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + 
serialized_options=None, + is_extendable=False, + syntax='proto2', + extension_ranges=[], + oneofs=[ + ], + serialized_start=5463, + serialized_end=5514, +) + +_UNINTERPRETEDOPTION = _descriptor.Descriptor( + name='UninterpretedOption', + full_name='google.protobuf.UninterpretedOption', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='name', full_name='google.protobuf.UninterpretedOption.name', index=0, + number=2, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='identifier_value', full_name='google.protobuf.UninterpretedOption.identifier_value', index=1, + number=3, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='positive_int_value', full_name='google.protobuf.UninterpretedOption.positive_int_value', index=2, + number=4, type=4, cpp_type=4, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='negative_int_value', full_name='google.protobuf.UninterpretedOption.negative_int_value', index=3, + number=5, type=3, cpp_type=2, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='double_value', full_name='google.protobuf.UninterpretedOption.double_value', index=4, + number=6, type=1, cpp_type=5, label=1, + has_default_value=False, default_value=float(0), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='string_value', full_name='google.protobuf.UninterpretedOption.string_value', index=5, + number=7, type=12, cpp_type=9, label=1, + has_default_value=False, default_value=b"", + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='aggregate_value', full_name='google.protobuf.UninterpretedOption.aggregate_value', index=6, + number=8, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[_UNINTERPRETEDOPTION_NAMEPART, ], + enum_types=[ + ], + serialized_options=None, + is_extendable=False, + syntax='proto2', + extension_ranges=[], + oneofs=[ + ], + serialized_start=5228, + serialized_end=5514, +) + 
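+# Note on the numeric codes used throughout these descriptors: `number`
+# is the field's wire tag, while the (type, cpp_type, label) triples
+# encode the field kind -- e.g. type=11/cpp_type=10 is a message field,
+# type=9/cpp_type=9 a string, and label=3 a repeated field, matching the
+# Type and Label enums defined above. `serialized_start` and
+# `serialized_end` are byte offsets into the module-level DESCRIPTOR's
+# serialized_pb, and the options messages marked is_extendable=True with
+# extension_ranges=[(1000, 536870912)] mirror `extensions 1000 to max;`
+# in descriptor.proto. None of these generated constants should be
+# edited by hand.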
+ +_SOURCECODEINFO_LOCATION = _descriptor.Descriptor( + name='Location', + full_name='google.protobuf.SourceCodeInfo.Location', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='path', full_name='google.protobuf.SourceCodeInfo.Location.path', index=0, + number=1, type=5, cpp_type=1, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='span', full_name='google.protobuf.SourceCodeInfo.Location.span', index=1, + number=2, type=5, cpp_type=1, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='leading_comments', full_name='google.protobuf.SourceCodeInfo.Location.leading_comments', index=2, + number=3, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='trailing_comments', full_name='google.protobuf.SourceCodeInfo.Location.trailing_comments', index=3, + number=4, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='leading_detached_comments', full_name='google.protobuf.SourceCodeInfo.Location.leading_detached_comments', index=4, + number=6, type=9, cpp_type=9, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=None, + is_extendable=False, + syntax='proto2', + extension_ranges=[], + oneofs=[ + ], + serialized_start=5596, + serialized_end=5730, +) + +_SOURCECODEINFO = _descriptor.Descriptor( + name='SourceCodeInfo', + full_name='google.protobuf.SourceCodeInfo', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='location', full_name='google.protobuf.SourceCodeInfo.location', index=0, + number=1, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[_SOURCECODEINFO_LOCATION, ], + enum_types=[ + ], + serialized_options=None, + is_extendable=False, + syntax='proto2', + extension_ranges=[], + oneofs=[ + ], + serialized_start=5517, + serialized_end=5730, +) + + +_GENERATEDCODEINFO_ANNOTATION = _descriptor.Descriptor( + name='Annotation', + 
full_name='google.protobuf.GeneratedCodeInfo.Annotation', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='path', full_name='google.protobuf.GeneratedCodeInfo.Annotation.path', index=0, + number=1, type=5, cpp_type=1, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='source_file', full_name='google.protobuf.GeneratedCodeInfo.Annotation.source_file', index=1, + number=2, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='begin', full_name='google.protobuf.GeneratedCodeInfo.Annotation.begin', index=2, + number=3, type=5, cpp_type=1, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='end', full_name='google.protobuf.GeneratedCodeInfo.Annotation.end', index=3, + number=4, type=5, cpp_type=1, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=None, + is_extendable=False, + syntax='proto2', + extension_ranges=[], + oneofs=[ + ], + serialized_start=5821, + serialized_end=5900, +) + +_GENERATEDCODEINFO = _descriptor.Descriptor( + name='GeneratedCodeInfo', + full_name='google.protobuf.GeneratedCodeInfo', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='annotation', full_name='google.protobuf.GeneratedCodeInfo.annotation', index=0, + number=1, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[_GENERATEDCODEINFO_ANNOTATION, ], + enum_types=[ + ], + serialized_options=None, + is_extendable=False, + syntax='proto2', + extension_ranges=[], + oneofs=[ + ], + serialized_start=5733, + serialized_end=5900, +) + +_FILEDESCRIPTORSET.fields_by_name['file'].message_type = _FILEDESCRIPTORPROTO +_FILEDESCRIPTORPROTO.fields_by_name['message_type'].message_type = _DESCRIPTORPROTO +_FILEDESCRIPTORPROTO.fields_by_name['enum_type'].message_type = _ENUMDESCRIPTORPROTO +_FILEDESCRIPTORPROTO.fields_by_name['service'].message_type = _SERVICEDESCRIPTORPROTO +_FILEDESCRIPTORPROTO.fields_by_name['extension'].message_type = _FIELDDESCRIPTORPROTO +_FILEDESCRIPTORPROTO.fields_by_name['options'].message_type = _FILEOPTIONS +_FILEDESCRIPTORPROTO.fields_by_name['source_code_info'].message_type = _SOURCECODEINFO 
+_DESCRIPTORPROTO_EXTENSIONRANGE.fields_by_name['options'].message_type = _EXTENSIONRANGEOPTIONS +_DESCRIPTORPROTO_EXTENSIONRANGE.containing_type = _DESCRIPTORPROTO +_DESCRIPTORPROTO_RESERVEDRANGE.containing_type = _DESCRIPTORPROTO +_DESCRIPTORPROTO.fields_by_name['field'].message_type = _FIELDDESCRIPTORPROTO +_DESCRIPTORPROTO.fields_by_name['extension'].message_type = _FIELDDESCRIPTORPROTO +_DESCRIPTORPROTO.fields_by_name['nested_type'].message_type = _DESCRIPTORPROTO +_DESCRIPTORPROTO.fields_by_name['enum_type'].message_type = _ENUMDESCRIPTORPROTO +_DESCRIPTORPROTO.fields_by_name['extension_range'].message_type = _DESCRIPTORPROTO_EXTENSIONRANGE +_DESCRIPTORPROTO.fields_by_name['oneof_decl'].message_type = _ONEOFDESCRIPTORPROTO +_DESCRIPTORPROTO.fields_by_name['options'].message_type = _MESSAGEOPTIONS +_DESCRIPTORPROTO.fields_by_name['reserved_range'].message_type = _DESCRIPTORPROTO_RESERVEDRANGE +_EXTENSIONRANGEOPTIONS.fields_by_name['uninterpreted_option'].message_type = _UNINTERPRETEDOPTION +_FIELDDESCRIPTORPROTO.fields_by_name['label'].enum_type = _FIELDDESCRIPTORPROTO_LABEL +_FIELDDESCRIPTORPROTO.fields_by_name['type'].enum_type = _FIELDDESCRIPTORPROTO_TYPE +_FIELDDESCRIPTORPROTO.fields_by_name['options'].message_type = _FIELDOPTIONS +_FIELDDESCRIPTORPROTO_TYPE.containing_type = _FIELDDESCRIPTORPROTO +_FIELDDESCRIPTORPROTO_LABEL.containing_type = _FIELDDESCRIPTORPROTO +_ONEOFDESCRIPTORPROTO.fields_by_name['options'].message_type = _ONEOFOPTIONS +_ENUMDESCRIPTORPROTO_ENUMRESERVEDRANGE.containing_type = _ENUMDESCRIPTORPROTO +_ENUMDESCRIPTORPROTO.fields_by_name['value'].message_type = _ENUMVALUEDESCRIPTORPROTO +_ENUMDESCRIPTORPROTO.fields_by_name['options'].message_type = _ENUMOPTIONS +_ENUMDESCRIPTORPROTO.fields_by_name['reserved_range'].message_type = _ENUMDESCRIPTORPROTO_ENUMRESERVEDRANGE +_ENUMVALUEDESCRIPTORPROTO.fields_by_name['options'].message_type = _ENUMVALUEOPTIONS +_SERVICEDESCRIPTORPROTO.fields_by_name['method'].message_type = _METHODDESCRIPTORPROTO +_SERVICEDESCRIPTORPROTO.fields_by_name['options'].message_type = _SERVICEOPTIONS +_METHODDESCRIPTORPROTO.fields_by_name['options'].message_type = _METHODOPTIONS +_FILEOPTIONS.fields_by_name['optimize_for'].enum_type = _FILEOPTIONS_OPTIMIZEMODE +_FILEOPTIONS.fields_by_name['uninterpreted_option'].message_type = _UNINTERPRETEDOPTION +_FILEOPTIONS_OPTIMIZEMODE.containing_type = _FILEOPTIONS +_MESSAGEOPTIONS.fields_by_name['uninterpreted_option'].message_type = _UNINTERPRETEDOPTION +_FIELDOPTIONS.fields_by_name['ctype'].enum_type = _FIELDOPTIONS_CTYPE +_FIELDOPTIONS.fields_by_name['jstype'].enum_type = _FIELDOPTIONS_JSTYPE +_FIELDOPTIONS.fields_by_name['uninterpreted_option'].message_type = _UNINTERPRETEDOPTION +_FIELDOPTIONS_CTYPE.containing_type = _FIELDOPTIONS +_FIELDOPTIONS_JSTYPE.containing_type = _FIELDOPTIONS +_ONEOFOPTIONS.fields_by_name['uninterpreted_option'].message_type = _UNINTERPRETEDOPTION +_ENUMOPTIONS.fields_by_name['uninterpreted_option'].message_type = _UNINTERPRETEDOPTION +_ENUMVALUEOPTIONS.fields_by_name['uninterpreted_option'].message_type = _UNINTERPRETEDOPTION +_SERVICEOPTIONS.fields_by_name['uninterpreted_option'].message_type = _UNINTERPRETEDOPTION +_METHODOPTIONS.fields_by_name['idempotency_level'].enum_type = _METHODOPTIONS_IDEMPOTENCYLEVEL +_METHODOPTIONS.fields_by_name['uninterpreted_option'].message_type = _UNINTERPRETEDOPTION +_METHODOPTIONS_IDEMPOTENCYLEVEL.containing_type = _METHODOPTIONS +_UNINTERPRETEDOPTION_NAMEPART.containing_type = _UNINTERPRETEDOPTION 
+_UNINTERPRETEDOPTION.fields_by_name['name'].message_type = _UNINTERPRETEDOPTION_NAMEPART +_SOURCECODEINFO_LOCATION.containing_type = _SOURCECODEINFO +_SOURCECODEINFO.fields_by_name['location'].message_type = _SOURCECODEINFO_LOCATION +_GENERATEDCODEINFO_ANNOTATION.containing_type = _GENERATEDCODEINFO +_GENERATEDCODEINFO.fields_by_name['annotation'].message_type = _GENERATEDCODEINFO_ANNOTATION +DESCRIPTOR.message_types_by_name['FileDescriptorSet'] = _FILEDESCRIPTORSET +DESCRIPTOR.message_types_by_name['FileDescriptorProto'] = _FILEDESCRIPTORPROTO +DESCRIPTOR.message_types_by_name['DescriptorProto'] = _DESCRIPTORPROTO +DESCRIPTOR.message_types_by_name['ExtensionRangeOptions'] = _EXTENSIONRANGEOPTIONS +DESCRIPTOR.message_types_by_name['FieldDescriptorProto'] = _FIELDDESCRIPTORPROTO +DESCRIPTOR.message_types_by_name['OneofDescriptorProto'] = _ONEOFDESCRIPTORPROTO +DESCRIPTOR.message_types_by_name['EnumDescriptorProto'] = _ENUMDESCRIPTORPROTO +DESCRIPTOR.message_types_by_name['EnumValueDescriptorProto'] = _ENUMVALUEDESCRIPTORPROTO +DESCRIPTOR.message_types_by_name['ServiceDescriptorProto'] = _SERVICEDESCRIPTORPROTO +DESCRIPTOR.message_types_by_name['MethodDescriptorProto'] = _METHODDESCRIPTORPROTO +DESCRIPTOR.message_types_by_name['FileOptions'] = _FILEOPTIONS +DESCRIPTOR.message_types_by_name['MessageOptions'] = _MESSAGEOPTIONS +DESCRIPTOR.message_types_by_name['FieldOptions'] = _FIELDOPTIONS +DESCRIPTOR.message_types_by_name['OneofOptions'] = _ONEOFOPTIONS +DESCRIPTOR.message_types_by_name['EnumOptions'] = _ENUMOPTIONS +DESCRIPTOR.message_types_by_name['EnumValueOptions'] = _ENUMVALUEOPTIONS +DESCRIPTOR.message_types_by_name['ServiceOptions'] = _SERVICEOPTIONS +DESCRIPTOR.message_types_by_name['MethodOptions'] = _METHODOPTIONS +DESCRIPTOR.message_types_by_name['UninterpretedOption'] = _UNINTERPRETEDOPTION +DESCRIPTOR.message_types_by_name['SourceCodeInfo'] = _SOURCECODEINFO +DESCRIPTOR.message_types_by_name['GeneratedCodeInfo'] = _GENERATEDCODEINFO +_sym_db.RegisterFileDescriptor(DESCRIPTOR) + +FileDescriptorSet = _reflection.GeneratedProtocolMessageType('FileDescriptorSet', (_message.Message,), { + 'DESCRIPTOR' : _FILEDESCRIPTORSET, + '__module__' : 'google.protobuf.descriptor_pb2' + # @@protoc_insertion_point(class_scope:google.protobuf.FileDescriptorSet) + }) +_sym_db.RegisterMessage(FileDescriptorSet) + +FileDescriptorProto = _reflection.GeneratedProtocolMessageType('FileDescriptorProto', (_message.Message,), { + 'DESCRIPTOR' : _FILEDESCRIPTORPROTO, + '__module__' : 'google.protobuf.descriptor_pb2' + # @@protoc_insertion_point(class_scope:google.protobuf.FileDescriptorProto) + }) +_sym_db.RegisterMessage(FileDescriptorProto) + +DescriptorProto = _reflection.GeneratedProtocolMessageType('DescriptorProto', (_message.Message,), { + + 'ExtensionRange' : _reflection.GeneratedProtocolMessageType('ExtensionRange', (_message.Message,), { + 'DESCRIPTOR' : _DESCRIPTORPROTO_EXTENSIONRANGE, + '__module__' : 'google.protobuf.descriptor_pb2' + # @@protoc_insertion_point(class_scope:google.protobuf.DescriptorProto.ExtensionRange) + }) + , + + 'ReservedRange' : _reflection.GeneratedProtocolMessageType('ReservedRange', (_message.Message,), { + 'DESCRIPTOR' : _DESCRIPTORPROTO_RESERVEDRANGE, + '__module__' : 'google.protobuf.descriptor_pb2' + # @@protoc_insertion_point(class_scope:google.protobuf.DescriptorProto.ReservedRange) + }) + , + 'DESCRIPTOR' : _DESCRIPTORPROTO, + '__module__' : 'google.protobuf.descriptor_pb2' + # @@protoc_insertion_point(class_scope:google.protobuf.DescriptorProto) + }) 
+_sym_db.RegisterMessage(DescriptorProto) +_sym_db.RegisterMessage(DescriptorProto.ExtensionRange) +_sym_db.RegisterMessage(DescriptorProto.ReservedRange) + +ExtensionRangeOptions = _reflection.GeneratedProtocolMessageType('ExtensionRangeOptions', (_message.Message,), { + 'DESCRIPTOR' : _EXTENSIONRANGEOPTIONS, + '__module__' : 'google.protobuf.descriptor_pb2' + # @@protoc_insertion_point(class_scope:google.protobuf.ExtensionRangeOptions) + }) +_sym_db.RegisterMessage(ExtensionRangeOptions) + +FieldDescriptorProto = _reflection.GeneratedProtocolMessageType('FieldDescriptorProto', (_message.Message,), { + 'DESCRIPTOR' : _FIELDDESCRIPTORPROTO, + '__module__' : 'google.protobuf.descriptor_pb2' + # @@protoc_insertion_point(class_scope:google.protobuf.FieldDescriptorProto) + }) +_sym_db.RegisterMessage(FieldDescriptorProto) + +OneofDescriptorProto = _reflection.GeneratedProtocolMessageType('OneofDescriptorProto', (_message.Message,), { + 'DESCRIPTOR' : _ONEOFDESCRIPTORPROTO, + '__module__' : 'google.protobuf.descriptor_pb2' + # @@protoc_insertion_point(class_scope:google.protobuf.OneofDescriptorProto) + }) +_sym_db.RegisterMessage(OneofDescriptorProto) + +EnumDescriptorProto = _reflection.GeneratedProtocolMessageType('EnumDescriptorProto', (_message.Message,), { + + 'EnumReservedRange' : _reflection.GeneratedProtocolMessageType('EnumReservedRange', (_message.Message,), { + 'DESCRIPTOR' : _ENUMDESCRIPTORPROTO_ENUMRESERVEDRANGE, + '__module__' : 'google.protobuf.descriptor_pb2' + # @@protoc_insertion_point(class_scope:google.protobuf.EnumDescriptorProto.EnumReservedRange) + }) + , + 'DESCRIPTOR' : _ENUMDESCRIPTORPROTO, + '__module__' : 'google.protobuf.descriptor_pb2' + # @@protoc_insertion_point(class_scope:google.protobuf.EnumDescriptorProto) + }) +_sym_db.RegisterMessage(EnumDescriptorProto) +_sym_db.RegisterMessage(EnumDescriptorProto.EnumReservedRange) + +EnumValueDescriptorProto = _reflection.GeneratedProtocolMessageType('EnumValueDescriptorProto', (_message.Message,), { + 'DESCRIPTOR' : _ENUMVALUEDESCRIPTORPROTO, + '__module__' : 'google.protobuf.descriptor_pb2' + # @@protoc_insertion_point(class_scope:google.protobuf.EnumValueDescriptorProto) + }) +_sym_db.RegisterMessage(EnumValueDescriptorProto) + +ServiceDescriptorProto = _reflection.GeneratedProtocolMessageType('ServiceDescriptorProto', (_message.Message,), { + 'DESCRIPTOR' : _SERVICEDESCRIPTORPROTO, + '__module__' : 'google.protobuf.descriptor_pb2' + # @@protoc_insertion_point(class_scope:google.protobuf.ServiceDescriptorProto) + }) +_sym_db.RegisterMessage(ServiceDescriptorProto) + +MethodDescriptorProto = _reflection.GeneratedProtocolMessageType('MethodDescriptorProto', (_message.Message,), { + 'DESCRIPTOR' : _METHODDESCRIPTORPROTO, + '__module__' : 'google.protobuf.descriptor_pb2' + # @@protoc_insertion_point(class_scope:google.protobuf.MethodDescriptorProto) + }) +_sym_db.RegisterMessage(MethodDescriptorProto) + +FileOptions = _reflection.GeneratedProtocolMessageType('FileOptions', (_message.Message,), { + 'DESCRIPTOR' : _FILEOPTIONS, + '__module__' : 'google.protobuf.descriptor_pb2' + # @@protoc_insertion_point(class_scope:google.protobuf.FileOptions) + }) +_sym_db.RegisterMessage(FileOptions) + +MessageOptions = _reflection.GeneratedProtocolMessageType('MessageOptions', (_message.Message,), { + 'DESCRIPTOR' : _MESSAGEOPTIONS, + '__module__' : 'google.protobuf.descriptor_pb2' + # @@protoc_insertion_point(class_scope:google.protobuf.MessageOptions) + }) +_sym_db.RegisterMessage(MessageOptions) + +FieldOptions = 
_reflection.GeneratedProtocolMessageType('FieldOptions', (_message.Message,), { + 'DESCRIPTOR' : _FIELDOPTIONS, + '__module__' : 'google.protobuf.descriptor_pb2' + # @@protoc_insertion_point(class_scope:google.protobuf.FieldOptions) + }) +_sym_db.RegisterMessage(FieldOptions) + +OneofOptions = _reflection.GeneratedProtocolMessageType('OneofOptions', (_message.Message,), { + 'DESCRIPTOR' : _ONEOFOPTIONS, + '__module__' : 'google.protobuf.descriptor_pb2' + # @@protoc_insertion_point(class_scope:google.protobuf.OneofOptions) + }) +_sym_db.RegisterMessage(OneofOptions) + +EnumOptions = _reflection.GeneratedProtocolMessageType('EnumOptions', (_message.Message,), { + 'DESCRIPTOR' : _ENUMOPTIONS, + '__module__' : 'google.protobuf.descriptor_pb2' + # @@protoc_insertion_point(class_scope:google.protobuf.EnumOptions) + }) +_sym_db.RegisterMessage(EnumOptions) + +EnumValueOptions = _reflection.GeneratedProtocolMessageType('EnumValueOptions', (_message.Message,), { + 'DESCRIPTOR' : _ENUMVALUEOPTIONS, + '__module__' : 'google.protobuf.descriptor_pb2' + # @@protoc_insertion_point(class_scope:google.protobuf.EnumValueOptions) + }) +_sym_db.RegisterMessage(EnumValueOptions) + +ServiceOptions = _reflection.GeneratedProtocolMessageType('ServiceOptions', (_message.Message,), { + 'DESCRIPTOR' : _SERVICEOPTIONS, + '__module__' : 'google.protobuf.descriptor_pb2' + # @@protoc_insertion_point(class_scope:google.protobuf.ServiceOptions) + }) +_sym_db.RegisterMessage(ServiceOptions) + +MethodOptions = _reflection.GeneratedProtocolMessageType('MethodOptions', (_message.Message,), { + 'DESCRIPTOR' : _METHODOPTIONS, + '__module__' : 'google.protobuf.descriptor_pb2' + # @@protoc_insertion_point(class_scope:google.protobuf.MethodOptions) + }) +_sym_db.RegisterMessage(MethodOptions) + +UninterpretedOption = _reflection.GeneratedProtocolMessageType('UninterpretedOption', (_message.Message,), { + + 'NamePart' : _reflection.GeneratedProtocolMessageType('NamePart', (_message.Message,), { + 'DESCRIPTOR' : _UNINTERPRETEDOPTION_NAMEPART, + '__module__' : 'google.protobuf.descriptor_pb2' + # @@protoc_insertion_point(class_scope:google.protobuf.UninterpretedOption.NamePart) + }) + , + 'DESCRIPTOR' : _UNINTERPRETEDOPTION, + '__module__' : 'google.protobuf.descriptor_pb2' + # @@protoc_insertion_point(class_scope:google.protobuf.UninterpretedOption) + }) +_sym_db.RegisterMessage(UninterpretedOption) +_sym_db.RegisterMessage(UninterpretedOption.NamePart) + +SourceCodeInfo = _reflection.GeneratedProtocolMessageType('SourceCodeInfo', (_message.Message,), { + + 'Location' : _reflection.GeneratedProtocolMessageType('Location', (_message.Message,), { + 'DESCRIPTOR' : _SOURCECODEINFO_LOCATION, + '__module__' : 'google.protobuf.descriptor_pb2' + # @@protoc_insertion_point(class_scope:google.protobuf.SourceCodeInfo.Location) + }) + , + 'DESCRIPTOR' : _SOURCECODEINFO, + '__module__' : 'google.protobuf.descriptor_pb2' + # @@protoc_insertion_point(class_scope:google.protobuf.SourceCodeInfo) + }) +_sym_db.RegisterMessage(SourceCodeInfo) +_sym_db.RegisterMessage(SourceCodeInfo.Location) + +GeneratedCodeInfo = _reflection.GeneratedProtocolMessageType('GeneratedCodeInfo', (_message.Message,), { + + 'Annotation' : _reflection.GeneratedProtocolMessageType('Annotation', (_message.Message,), { + 'DESCRIPTOR' : _GENERATEDCODEINFO_ANNOTATION, + '__module__' : 'google.protobuf.descriptor_pb2' + # @@protoc_insertion_point(class_scope:google.protobuf.GeneratedCodeInfo.Annotation) + }) + , + 'DESCRIPTOR' : _GENERATEDCODEINFO, + '__module__' : 
'google.protobuf.descriptor_pb2'
+  # @@protoc_insertion_point(class_scope:google.protobuf.GeneratedCodeInfo)
+  })
+_sym_db.RegisterMessage(GeneratedCodeInfo)
+_sym_db.RegisterMessage(GeneratedCodeInfo.Annotation)
+
+
+# @@protoc_insertion_point(module_scope)
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/descriptor_pool.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/descriptor_pool.py
new file mode 100644
index 0000000000000000000000000000000000000000..de9100b09ca590d229e3fd8518e98664bd4e7205
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/descriptor_pool.py
@@ -0,0 +1,1271 @@
+# Protocol Buffers - Google's data interchange format
+# Copyright 2008 Google Inc. All rights reserved.
+# https://developers.google.com/protocol-buffers/
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions are
+# met:
+#
+# * Redistributions of source code must retain the above copyright
+# notice, this list of conditions and the following disclaimer.
+# * Redistributions in binary form must reproduce the above
+# copyright notice, this list of conditions and the following disclaimer
+# in the documentation and/or other materials provided with the
+# distribution.
+# * Neither the name of Google Inc. nor the names of its
+# contributors may be used to endorse or promote products derived from
+# this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+"""Provides DescriptorPool to use as a container for proto2 descriptors.
+
+The DescriptorPool is used in conjunction with a DescriptorDatabase to maintain
+a collection of protocol buffer descriptors for use when dynamically creating
+message types at runtime.
+
+For most applications protocol buffers should be used via modules generated by
+the protocol buffer compiler tool. This should only be used when the type of
+protocol buffers used in an application or library cannot be predetermined.
+
+Below is a straightforward example of how to use this class::
+
+  pool = DescriptorPool()
+  file_descriptor_protos = [ ... ]
+  for file_descriptor_proto in file_descriptor_protos:
+    pool.Add(file_descriptor_proto)
+  my_message_descriptor = pool.FindMessageTypeByName('some.package.MessageType')
+
+The message descriptor can be used in conjunction with the message_factory
+module in order to create a protocol buffer class that can be encoded and
+decoded.
+
+If you want to get a Python class for the specified proto, use the
+helper functions inside google.protobuf.message_factory
+directly instead of this class.
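+
+As a minimal sketch (assuming 'some.package.MessageType' was added to the
+pool as in the example above), the descriptor can be turned into a usable
+message class like so::
+
+  from google.protobuf import message_factory
+
+  factory = message_factory.MessageFactory(pool)
+  MessageClass = factory.GetPrototype(my_message_descriptor)
+  message = MessageClass()
+  data = message.SerializeToString()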
+"""
+
+__author__ = 'matthewtoia@google.com (Matt Toia)'
+
+import collections
+import warnings
+
+from google.protobuf import descriptor
+from google.protobuf import descriptor_database
+from google.protobuf import text_encoding
+
+
+_USE_C_DESCRIPTORS = descriptor._USE_C_DESCRIPTORS  # pylint: disable=protected-access
+
+
+def _Deprecated(func):
+  """Mark functions as deprecated."""
+
+  def NewFunc(*args, **kwargs):
+    warnings.warn(
+        'Call to deprecated function %s(). Note: Adding unlinked descriptors '
+        'to descriptor_pool is wrong. Use Add() or AddSerializedFile() '
+        'instead.' % func.__name__,
+        category=DeprecationWarning)
+    return func(*args, **kwargs)
+  NewFunc.__name__ = func.__name__
+  NewFunc.__doc__ = func.__doc__
+  NewFunc.__dict__.update(func.__dict__)
+  return NewFunc
+
+
+def _NormalizeFullyQualifiedName(name):
+  """Remove leading period from fully-qualified type name.
+
+  Due to b/13860351 in descriptor_database.py, types in the root namespace are
+  generated with a leading period. This function removes that prefix.
+
+  Args:
+    name (str): The fully-qualified symbol name.
+
+  Returns:
+    str: The normalized fully-qualified symbol name.
+  """
+  return name.lstrip('.')
+
+
+def _OptionsOrNone(descriptor_proto):
+  """Returns the value of the field `options`, or None if it is not set."""
+  if descriptor_proto.HasField('options'):
+    return descriptor_proto.options
+  else:
+    return None
+
+
+def _IsMessageSetExtension(field):
+  return (field.is_extension and
+          field.containing_type.has_options and
+          field.containing_type.GetOptions().message_set_wire_format and
+          field.type == descriptor.FieldDescriptor.TYPE_MESSAGE and
+          field.label == descriptor.FieldDescriptor.LABEL_OPTIONAL)
+
+
+class DescriptorPool(object):
+  """A collection of protobufs dynamically constructed by descriptor protos."""
+
+  if _USE_C_DESCRIPTORS:
+
+    def __new__(cls, descriptor_db=None):
+      # pylint: disable=protected-access
+      return descriptor._message.DescriptorPool(descriptor_db)
+
+  def __init__(self, descriptor_db=None):
+    """Initializes a Pool of protocol buffer descriptors.
+
+    The descriptor_db argument to the constructor is provided to allow
+    specialized file descriptor proto lookup code to be triggered on demand. An
+    example would be an implementation which will read and compile a file
+    specified in a call to FindFileByName() and not require the call to Add()
+    at all. Results from this database will be cached internally here as well.
+
+    Args:
+      descriptor_db: A secondary source of file descriptors.
+    """
+
+    self._internal_db = descriptor_database.DescriptorDatabase()
+    self._descriptor_db = descriptor_db
+    self._descriptors = {}
+    self._enum_descriptors = {}
+    self._service_descriptors = {}
+    self._file_descriptors = {}
+    self._toplevel_extensions = {}
+    # TODO(jieluo): Remove _file_desc_by_toplevel_extension after
+    # maybe year 2020 for compatibility issue (with 3.4.1 only).
+    self._file_desc_by_toplevel_extension = {}
+    self._top_enum_values = {}
+    # We store extensions in two two-level mappings: The first key is the
+    # descriptor of the message being extended, the second key is the extension
+    # full name or its tag number.
+    self._extensions_by_name = collections.defaultdict(dict)
+    self._extensions_by_number = collections.defaultdict(dict)
+
+  def _CheckConflictRegister(self, desc, desc_name, file_name):
+    """Check if the descriptor name conflicts with another of the same name.
+
+    Args:
+      desc: Descriptor of a message, enum, service, extension or enum value.
+      desc_name (str): The full name of desc.
+      file_name (str): The file name of the descriptor.
+    """
+    for register, descriptor_type in [
+        (self._descriptors, descriptor.Descriptor),
+        (self._enum_descriptors, descriptor.EnumDescriptor),
+        (self._service_descriptors, descriptor.ServiceDescriptor),
+        (self._toplevel_extensions, descriptor.FieldDescriptor),
+        (self._top_enum_values, descriptor.EnumValueDescriptor)]:
+      if desc_name in register:
+        old_desc = register[desc_name]
+        if isinstance(old_desc, descriptor.EnumValueDescriptor):
+          old_file = old_desc.type.file.name
+        else:
+          old_file = old_desc.file.name
+
+        if not isinstance(desc, descriptor_type) or (
+            old_file != file_name):
+          error_msg = ('Conflicting registration for file "' + file_name +
+                       '": ' + desc_name +
+                       ' is already defined in file "' +
+                       old_file + '". Please fix the conflict by adding a '
+                       'package name to the proto file, or use a different '
+                       'name for the duplicate.')
+          if isinstance(desc, descriptor.EnumValueDescriptor):
+            error_msg += ('\nNote: enum values appear as '
+                          'siblings of the enum type instead of '
+                          'children of it.')
+
+          raise TypeError(error_msg)
+
+    return
+
+  def Add(self, file_desc_proto):
+    """Adds the FileDescriptorProto and its types to this pool.
+
+    Args:
+      file_desc_proto (FileDescriptorProto): The file descriptor to add.
+    """
+
+    self._internal_db.Add(file_desc_proto)
+
+  def AddSerializedFile(self, serialized_file_desc_proto):
+    """Adds the FileDescriptorProto and its types to this pool.
+
+    Args:
+      serialized_file_desc_proto (bytes): A bytes string, serialization of the
+        :class:`FileDescriptorProto` to add.
+    """
+
+    # pylint: disable=g-import-not-at-top
+    from google.protobuf import descriptor_pb2
+    file_desc_proto = descriptor_pb2.FileDescriptorProto.FromString(
+        serialized_file_desc_proto)
+    self.Add(file_desc_proto)
+
+  # Adding a Descriptor to the descriptor pool is deprecated. Please use Add()
+  # or AddSerializedFile() to add a FileDescriptorProto instead.
+  @_Deprecated
+  def AddDescriptor(self, desc):
+    self._AddDescriptor(desc)
+
+  # Never call this method. It is for internal usage only.
+  def _AddDescriptor(self, desc):
+    """Adds a Descriptor to the pool, non-recursively.
+
+    If the Descriptor contains nested messages or enums, the caller must
+    explicitly register them. This method also registers the FileDescriptor
+    associated with the message.
+
+    Args:
+      desc: A Descriptor.
+    """
+    if not isinstance(desc, descriptor.Descriptor):
+      raise TypeError('Expected instance of descriptor.Descriptor.')
+
+    self._CheckConflictRegister(desc, desc.full_name, desc.file.name)
+
+    self._descriptors[desc.full_name] = desc
+    self._AddFileDescriptor(desc.file)
+
+  # Adding an EnumDescriptor to the descriptor pool is deprecated. Please use
+  # Add() or AddSerializedFile() to add a FileDescriptorProto instead.
+  @_Deprecated
+  def AddEnumDescriptor(self, enum_desc):
+    self._AddEnumDescriptor(enum_desc)
+
+  # Never call this method. It is for internal usage only.
+  def _AddEnumDescriptor(self, enum_desc):
+    """Adds an EnumDescriptor to the pool.
+
+    This method also registers the FileDescriptor associated with the enum.
+
+    Args:
+      enum_desc: An EnumDescriptor.
+    """
+
+    if not isinstance(enum_desc, descriptor.EnumDescriptor):
+      raise TypeError('Expected instance of descriptor.EnumDescriptor.')
+
+    file_name = enum_desc.file.name
+    self._CheckConflictRegister(enum_desc, enum_desc.full_name, file_name)
+    self._enum_descriptors[enum_desc.full_name] = enum_desc
+
+    # Top enum values need to be indexed.
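+    # (Top level here means declared directly in the .proto file rather than
+    # nested in a message: with package 'pkg', an enum 'pkg.Color' is top
+    # level, while 'pkg.Shape.Color' is nested inside the message 'Shape'.)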
+    # Count the number of dots to see whether the enum is toplevel or nested
+    # in a message. We cannot use enum_desc.containing_type at this stage.
+    if enum_desc.file.package:
+      top_level = (enum_desc.full_name.count('.')
+                   - enum_desc.file.package.count('.') == 1)
+    else:
+      top_level = enum_desc.full_name.count('.') == 0
+    if top_level:
+      file_name = enum_desc.file.name
+      package = enum_desc.file.package
+      for enum_value in enum_desc.values:
+        full_name = _NormalizeFullyQualifiedName(
+            '.'.join((package, enum_value.name)))
+        self._CheckConflictRegister(enum_value, full_name, file_name)
+        self._top_enum_values[full_name] = enum_value
+    self._AddFileDescriptor(enum_desc.file)
+
+  # Adding a ServiceDescriptor to the descriptor pool is deprecated. Please use
+  # Add() or AddSerializedFile() to add a FileDescriptorProto instead.
+  @_Deprecated
+  def AddServiceDescriptor(self, service_desc):
+    self._AddServiceDescriptor(service_desc)
+
+  # Never call this method. It is for internal usage only.
+  def _AddServiceDescriptor(self, service_desc):
+    """Adds a ServiceDescriptor to the pool.
+
+    Args:
+      service_desc: A ServiceDescriptor.
+    """
+
+    if not isinstance(service_desc, descriptor.ServiceDescriptor):
+      raise TypeError('Expected instance of descriptor.ServiceDescriptor.')
+
+    self._CheckConflictRegister(service_desc, service_desc.full_name,
+                                service_desc.file.name)
+    self._service_descriptors[service_desc.full_name] = service_desc
+
+  # Adding an ExtensionDescriptor to the descriptor pool is deprecated. Please
+  # use Add() or AddSerializedFile() to add a FileDescriptorProto instead.
+  @_Deprecated
+  def AddExtensionDescriptor(self, extension):
+    self._AddExtensionDescriptor(extension)
+
+  # Never call this method. It is for internal usage only.
+  def _AddExtensionDescriptor(self, extension):
+    """Adds a FieldDescriptor describing an extension to the pool.
+
+    Args:
+      extension: A FieldDescriptor.
+
+    Raises:
+      AssertionError: when another extension with the same number extends the
+        same message.
+      TypeError: when the specified extension is not a
+        descriptor.FieldDescriptor.
+    """
+    if not (isinstance(extension, descriptor.FieldDescriptor) and
+            extension.is_extension):
+      raise TypeError('Expected an extension descriptor.')
+
+    if extension.extension_scope is None:
+      self._toplevel_extensions[extension.full_name] = extension
+
+    try:
+      existing_desc = self._extensions_by_number[
+          extension.containing_type][extension.number]
+    except KeyError:
+      pass
+    else:
+      if extension is not existing_desc:
+        raise AssertionError(
+            'Extensions "%s" and "%s" both try to extend message type "%s" '
+            'with field number %d.' %
+            (extension.full_name, existing_desc.full_name,
+             extension.containing_type.full_name, extension.number))
+
+    self._extensions_by_number[extension.containing_type][
+        extension.number] = extension
+    self._extensions_by_name[extension.containing_type][
+        extension.full_name] = extension
+
+    # Also register MessageSet extensions with the type name.
+    if _IsMessageSetExtension(extension):
+      self._extensions_by_name[extension.containing_type][
+          extension.message_type.full_name] = extension
+
+  @_Deprecated
+  def AddFileDescriptor(self, file_desc):
+    self._InternalAddFileDescriptor(file_desc)
+
+  # Never call this method. It is for internal usage only.
+  def _InternalAddFileDescriptor(self, file_desc):
+    """Adds a FileDescriptor to the pool, non-recursively.
+
+    If the FileDescriptor contains messages or enums, the caller must
+    explicitly register them.
+
+    Args:
+      file_desc: A FileDescriptor.
+ """ + + self._AddFileDescriptor(file_desc) + # TODO(jieluo): This is a temporary solution for FieldDescriptor.file. + # FieldDescriptor.file is added in code gen. Remove this solution after + # maybe 2020 for compatibility reason (with 3.4.1 only). + for extension in file_desc.extensions_by_name.values(): + self._file_desc_by_toplevel_extension[ + extension.full_name] = file_desc + + def _AddFileDescriptor(self, file_desc): + """Adds a FileDescriptor to the pool, non-recursively. + + If the FileDescriptor contains messages or enums, the caller must explicitly + register them. + + Args: + file_desc: A FileDescriptor. + """ + + if not isinstance(file_desc, descriptor.FileDescriptor): + raise TypeError('Expected instance of descriptor.FileDescriptor.') + self._file_descriptors[file_desc.name] = file_desc + + def FindFileByName(self, file_name): + """Gets a FileDescriptor by file name. + + Args: + file_name (str): The path to the file to get a descriptor for. + + Returns: + FileDescriptor: The descriptor for the named file. + + Raises: + KeyError: if the file cannot be found in the pool. + """ + + try: + return self._file_descriptors[file_name] + except KeyError: + pass + + try: + file_proto = self._internal_db.FindFileByName(file_name) + except KeyError as error: + if self._descriptor_db: + file_proto = self._descriptor_db.FindFileByName(file_name) + else: + raise error + if not file_proto: + raise KeyError('Cannot find a file named %s' % file_name) + return self._ConvertFileProtoToFileDescriptor(file_proto) + + def FindFileContainingSymbol(self, symbol): + """Gets the FileDescriptor for the file containing the specified symbol. + + Args: + symbol (str): The name of the symbol to search for. + + Returns: + FileDescriptor: Descriptor for the file that contains the specified + symbol. + + Raises: + KeyError: if the file cannot be found in the pool. + """ + + symbol = _NormalizeFullyQualifiedName(symbol) + try: + return self._InternalFindFileContainingSymbol(symbol) + except KeyError: + pass + + try: + # Try fallback database. Build and find again if possible. + self._FindFileContainingSymbolInDb(symbol) + return self._InternalFindFileContainingSymbol(symbol) + except KeyError: + raise KeyError('Cannot find a file containing %s' % symbol) + + def _InternalFindFileContainingSymbol(self, symbol): + """Gets the already built FileDescriptor containing the specified symbol. + + Args: + symbol (str): The name of the symbol to search for. + + Returns: + FileDescriptor: Descriptor for the file that contains the specified + symbol. + + Raises: + KeyError: if the file cannot be found in the pool. + """ + try: + return self._descriptors[symbol].file + except KeyError: + pass + + try: + return self._enum_descriptors[symbol].file + except KeyError: + pass + + try: + return self._service_descriptors[symbol].file + except KeyError: + pass + + try: + return self._top_enum_values[symbol].type.file + except KeyError: + pass + + try: + return self._file_desc_by_toplevel_extension[symbol] + except KeyError: + pass + + # Try fields, enum values and nested extensions inside a message. 
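+    # For example, for the symbol 'pkg.Message.field', rpartition('.') below
+    # yields top_name 'pkg.Message' and sub_name 'field'.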
+ top_name, _, sub_name = symbol.rpartition('.') + try: + message = self.FindMessageTypeByName(top_name) + assert (sub_name in message.extensions_by_name or + sub_name in message.fields_by_name or + sub_name in message.enum_values_by_name) + return message.file + except (KeyError, AssertionError): + raise KeyError('Cannot find a file containing %s' % symbol) + + def FindMessageTypeByName(self, full_name): + """Loads the named descriptor from the pool. + + Args: + full_name (str): The full name of the descriptor to load. + + Returns: + Descriptor: The descriptor for the named type. + + Raises: + KeyError: if the message cannot be found in the pool. + """ + + full_name = _NormalizeFullyQualifiedName(full_name) + if full_name not in self._descriptors: + self._FindFileContainingSymbolInDb(full_name) + return self._descriptors[full_name] + + def FindEnumTypeByName(self, full_name): + """Loads the named enum descriptor from the pool. + + Args: + full_name (str): The full name of the enum descriptor to load. + + Returns: + EnumDescriptor: The enum descriptor for the named type. + + Raises: + KeyError: if the enum cannot be found in the pool. + """ + + full_name = _NormalizeFullyQualifiedName(full_name) + if full_name not in self._enum_descriptors: + self._FindFileContainingSymbolInDb(full_name) + return self._enum_descriptors[full_name] + + def FindFieldByName(self, full_name): + """Loads the named field descriptor from the pool. + + Args: + full_name (str): The full name of the field descriptor to load. + + Returns: + FieldDescriptor: The field descriptor for the named field. + + Raises: + KeyError: if the field cannot be found in the pool. + """ + full_name = _NormalizeFullyQualifiedName(full_name) + message_name, _, field_name = full_name.rpartition('.') + message_descriptor = self.FindMessageTypeByName(message_name) + return message_descriptor.fields_by_name[field_name] + + def FindOneofByName(self, full_name): + """Loads the named oneof descriptor from the pool. + + Args: + full_name (str): The full name of the oneof descriptor to load. + + Returns: + OneofDescriptor: The oneof descriptor for the named oneof. + + Raises: + KeyError: if the oneof cannot be found in the pool. + """ + full_name = _NormalizeFullyQualifiedName(full_name) + message_name, _, oneof_name = full_name.rpartition('.') + message_descriptor = self.FindMessageTypeByName(message_name) + return message_descriptor.oneofs_by_name[oneof_name] + + def FindExtensionByName(self, full_name): + """Loads the named extension descriptor from the pool. + + Args: + full_name (str): The full name of the extension descriptor to load. + + Returns: + FieldDescriptor: The field descriptor for the named extension. + + Raises: + KeyError: if the extension cannot be found in the pool. + """ + full_name = _NormalizeFullyQualifiedName(full_name) + try: + # The proto compiler does not give any link between the FileDescriptor + # and top-level extensions unless the FileDescriptorProto is added to + # the DescriptorDatabase, but this can impact memory usage. + # So we registered these extensions by name explicitly. + return self._toplevel_extensions[full_name] + except KeyError: + pass + message_name, _, extension_name = full_name.rpartition('.') + try: + # Most extensions are nested inside a message. + scope = self.FindMessageTypeByName(message_name) + except KeyError: + # Some extensions are defined at file scope. 
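+      # In that case _FindFileContainingSymbolInDb returns the enclosing
+      # FileDescriptor, whose extensions_by_name mapping is indexed below.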
+      scope = self._FindFileContainingSymbolInDb(full_name)
+    return scope.extensions_by_name[extension_name]
+
+  def FindExtensionByNumber(self, message_descriptor, number):
+    """Gets the extension of the specified message with the specified number.
+
+    Extensions have to be registered to this pool by calling :func:`Add` or
+    :func:`AddExtensionDescriptor`.
+
+    Args:
+      message_descriptor (Descriptor): Descriptor of the extended message.
+      number (int): Number of the extension field.
+
+    Returns:
+      FieldDescriptor: The descriptor for the extension.
+
+    Raises:
+      KeyError: when no extension with the given number is known for the
+        specified message.
+    """
+    try:
+      return self._extensions_by_number[message_descriptor][number]
+    except KeyError:
+      self._TryLoadExtensionFromDB(message_descriptor, number)
+      return self._extensions_by_number[message_descriptor][number]
+
+  def FindAllExtensions(self, message_descriptor):
+    """Gets all the known extensions of a given message.
+
+    Extensions have to be registered to this pool by calling :func:`Add` or
+    :func:`AddExtensionDescriptor`.
+
+    Args:
+      message_descriptor (Descriptor): Descriptor of the extended message.
+
+    Returns:
+      list[FieldDescriptor]: Field descriptors describing the extensions.
+    """
+    # Fall back to the descriptor db if FindAllExtensionNumbers is provided.
+    if self._descriptor_db and hasattr(
+        self._descriptor_db, 'FindAllExtensionNumbers'):
+      full_name = message_descriptor.full_name
+      all_numbers = self._descriptor_db.FindAllExtensionNumbers(full_name)
+      for number in all_numbers:
+        if number in self._extensions_by_number[message_descriptor]:
+          continue
+        self._TryLoadExtensionFromDB(message_descriptor, number)
+
+    return list(self._extensions_by_number[message_descriptor].values())
+
+  def _TryLoadExtensionFromDB(self, message_descriptor, number):
+    """Try to load extensions from the descriptor db.
+
+    Args:
+      message_descriptor: Descriptor of the extended message.
+      number: The extension number that needs to be loaded.
+    """
+    if not self._descriptor_db:
+      return
+    # Only supported when FindFileContainingExtension is provided.
+    if not hasattr(
+        self._descriptor_db, 'FindFileContainingExtension'):
+      return
+
+    full_name = message_descriptor.full_name
+    file_proto = self._descriptor_db.FindFileContainingExtension(
+        full_name, number)
+
+    if file_proto is None:
+      return
+
+    try:
+      self._ConvertFileProtoToFileDescriptor(file_proto)
+    except Exception:  # Catch conversion errors broadly; warn instead of crashing.
+      warn_msg = ('Unable to load proto file %s for extension number %d.' %
+                  (file_proto.name, number))
+      warnings.warn(warn_msg, RuntimeWarning)
+
+  def FindServiceByName(self, full_name):
+    """Loads the named service descriptor from the pool.
+
+    Args:
+      full_name (str): The full name of the service descriptor to load.
+
+    Returns:
+      ServiceDescriptor: The service descriptor for the named service.
+
+    Raises:
+      KeyError: if the service cannot be found in the pool.
+    """
+    full_name = _NormalizeFullyQualifiedName(full_name)
+    if full_name not in self._service_descriptors:
+      self._FindFileContainingSymbolInDb(full_name)
+    return self._service_descriptors[full_name]
+
+  def FindMethodByName(self, full_name):
+    """Loads the named service method descriptor from the pool.
+
+    Args:
+      full_name (str): The full name of the method descriptor to load.
+
+    Returns:
+      MethodDescriptor: The method descriptor for the service method.
+
+    Raises:
+      KeyError: if the method cannot be found in the pool.
+ """ + full_name = _NormalizeFullyQualifiedName(full_name) + service_name, _, method_name = full_name.rpartition('.') + service_descriptor = self.FindServiceByName(service_name) + return service_descriptor.methods_by_name[method_name] + + def _FindFileContainingSymbolInDb(self, symbol): + """Finds the file in descriptor DB containing the specified symbol. + + Args: + symbol (str): The name of the symbol to search for. + + Returns: + FileDescriptor: The file that contains the specified symbol. + + Raises: + KeyError: if the file cannot be found in the descriptor database. + """ + try: + file_proto = self._internal_db.FindFileContainingSymbol(symbol) + except KeyError as error: + if self._descriptor_db: + file_proto = self._descriptor_db.FindFileContainingSymbol(symbol) + else: + raise error + if not file_proto: + raise KeyError('Cannot find a file containing %s' % symbol) + return self._ConvertFileProtoToFileDescriptor(file_proto) + + def _ConvertFileProtoToFileDescriptor(self, file_proto): + """Creates a FileDescriptor from a proto or returns a cached copy. + + This method also has the side effect of loading all the symbols found in + the file into the appropriate dictionaries in the pool. + + Args: + file_proto: The proto to convert. + + Returns: + A FileDescriptor matching the passed in proto. + """ + if file_proto.name not in self._file_descriptors: + built_deps = list(self._GetDeps(file_proto.dependency)) + direct_deps = [self.FindFileByName(n) for n in file_proto.dependency] + public_deps = [direct_deps[i] for i in file_proto.public_dependency] + + file_descriptor = descriptor.FileDescriptor( + pool=self, + name=file_proto.name, + package=file_proto.package, + syntax=file_proto.syntax, + options=_OptionsOrNone(file_proto), + serialized_pb=file_proto.SerializeToString(), + dependencies=direct_deps, + public_dependencies=public_deps, + # pylint: disable=protected-access + create_key=descriptor._internal_create_key) + scope = {} + + # This loop extracts all the message and enum types from all the + # dependencies of the file_proto. This is necessary to create the + # scope of available message types when defining the passed in + # file proto. 
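+      # For example, if a dependency declares 'message Foo' in package 'dep',
+      # scope gains an entry mapping '.dep.Foo' to its Descriptor, which lets
+      # fields of file_proto resolve the type name '.dep.Foo' below.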
+ for dependency in built_deps: + scope.update(self._ExtractSymbols( + dependency.message_types_by_name.values())) + scope.update((_PrefixWithDot(enum.full_name), enum) + for enum in dependency.enum_types_by_name.values()) + + for message_type in file_proto.message_type: + message_desc = self._ConvertMessageDescriptor( + message_type, file_proto.package, file_descriptor, scope, + file_proto.syntax) + file_descriptor.message_types_by_name[message_desc.name] = ( + message_desc) + + for enum_type in file_proto.enum_type: + file_descriptor.enum_types_by_name[enum_type.name] = ( + self._ConvertEnumDescriptor(enum_type, file_proto.package, + file_descriptor, None, scope, True)) + + for index, extension_proto in enumerate(file_proto.extension): + extension_desc = self._MakeFieldDescriptor( + extension_proto, file_proto.package, index, file_descriptor, + is_extension=True) + extension_desc.containing_type = self._GetTypeFromScope( + file_descriptor.package, extension_proto.extendee, scope) + self._SetFieldType(extension_proto, extension_desc, + file_descriptor.package, scope) + file_descriptor.extensions_by_name[extension_desc.name] = ( + extension_desc) + self._file_desc_by_toplevel_extension[extension_desc.full_name] = ( + file_descriptor) + + for desc_proto in file_proto.message_type: + self._SetAllFieldTypes(file_proto.package, desc_proto, scope) + + if file_proto.package: + desc_proto_prefix = _PrefixWithDot(file_proto.package) + else: + desc_proto_prefix = '' + + for desc_proto in file_proto.message_type: + desc = self._GetTypeFromScope( + desc_proto_prefix, desc_proto.name, scope) + file_descriptor.message_types_by_name[desc_proto.name] = desc + + for index, service_proto in enumerate(file_proto.service): + file_descriptor.services_by_name[service_proto.name] = ( + self._MakeServiceDescriptor(service_proto, index, scope, + file_proto.package, file_descriptor)) + + self.Add(file_proto) + self._file_descriptors[file_proto.name] = file_descriptor + + # Add extensions to the pool + file_desc = self._file_descriptors[file_proto.name] + for extension in file_desc.extensions_by_name.values(): + self._AddExtensionDescriptor(extension) + for message_type in file_desc.message_types_by_name.values(): + for extension in message_type.extensions: + self._AddExtensionDescriptor(extension) + + return file_desc + + def _ConvertMessageDescriptor(self, desc_proto, package=None, file_desc=None, + scope=None, syntax=None): + """Adds the proto to the pool in the specified package. + + Args: + desc_proto: The descriptor_pb2.DescriptorProto protobuf message. + package: The package the proto should be located in. + file_desc: The file containing this message. + scope: Dict mapping short and full symbols to message and enum types. + syntax: string indicating syntax of the file ("proto2" or "proto3") + + Returns: + The added descriptor. 
+ """ + + if package: + desc_name = '.'.join((package, desc_proto.name)) + else: + desc_name = desc_proto.name + + if file_desc is None: + file_name = None + else: + file_name = file_desc.name + + if scope is None: + scope = {} + + nested = [ + self._ConvertMessageDescriptor( + nested, desc_name, file_desc, scope, syntax) + for nested in desc_proto.nested_type] + enums = [ + self._ConvertEnumDescriptor(enum, desc_name, file_desc, None, + scope, False) + for enum in desc_proto.enum_type] + fields = [self._MakeFieldDescriptor(field, desc_name, index, file_desc) + for index, field in enumerate(desc_proto.field)] + extensions = [ + self._MakeFieldDescriptor(extension, desc_name, index, file_desc, + is_extension=True) + for index, extension in enumerate(desc_proto.extension)] + oneofs = [ + # pylint: disable=g-complex-comprehension + descriptor.OneofDescriptor(desc.name, '.'.join((desc_name, desc.name)), + index, None, [], desc.options, + # pylint: disable=protected-access + create_key=descriptor._internal_create_key) + for index, desc in enumerate(desc_proto.oneof_decl)] + extension_ranges = [(r.start, r.end) for r in desc_proto.extension_range] + if extension_ranges: + is_extendable = True + else: + is_extendable = False + desc = descriptor.Descriptor( + name=desc_proto.name, + full_name=desc_name, + filename=file_name, + containing_type=None, + fields=fields, + oneofs=oneofs, + nested_types=nested, + enum_types=enums, + extensions=extensions, + options=_OptionsOrNone(desc_proto), + is_extendable=is_extendable, + extension_ranges=extension_ranges, + file=file_desc, + serialized_start=None, + serialized_end=None, + syntax=syntax, + # pylint: disable=protected-access + create_key=descriptor._internal_create_key) + for nested in desc.nested_types: + nested.containing_type = desc + for enum in desc.enum_types: + enum.containing_type = desc + for field_index, field_desc in enumerate(desc_proto.field): + if field_desc.HasField('oneof_index'): + oneof_index = field_desc.oneof_index + oneofs[oneof_index].fields.append(fields[field_index]) + fields[field_index].containing_oneof = oneofs[oneof_index] + + scope[_PrefixWithDot(desc_name)] = desc + self._CheckConflictRegister(desc, desc.full_name, desc.file.name) + self._descriptors[desc_name] = desc + return desc + + def _ConvertEnumDescriptor(self, enum_proto, package=None, file_desc=None, + containing_type=None, scope=None, top_level=False): + """Make a protobuf EnumDescriptor given an EnumDescriptorProto protobuf. + + Args: + enum_proto: The descriptor_pb2.EnumDescriptorProto protobuf message. + package: Optional package name for the new message EnumDescriptor. + file_desc: The file containing the enum descriptor. + containing_type: The type containing this enum. + scope: Scope containing available types. + top_level: If True, the enum is a top level symbol. If False, the enum + is defined inside a message. 
+ + Returns: + The added descriptor + """ + + if package: + enum_name = '.'.join((package, enum_proto.name)) + else: + enum_name = enum_proto.name + + if file_desc is None: + file_name = None + else: + file_name = file_desc.name + + values = [self._MakeEnumValueDescriptor(value, index) + for index, value in enumerate(enum_proto.value)] + desc = descriptor.EnumDescriptor(name=enum_proto.name, + full_name=enum_name, + filename=file_name, + file=file_desc, + values=values, + containing_type=containing_type, + options=_OptionsOrNone(enum_proto), + # pylint: disable=protected-access + create_key=descriptor._internal_create_key) + scope['.%s' % enum_name] = desc + self._CheckConflictRegister(desc, desc.full_name, desc.file.name) + self._enum_descriptors[enum_name] = desc + + # Add top level enum values. + if top_level: + for value in values: + full_name = _NormalizeFullyQualifiedName( + '.'.join((package, value.name))) + self._CheckConflictRegister(value, full_name, file_name) + self._top_enum_values[full_name] = value + + return desc + + def _MakeFieldDescriptor(self, field_proto, message_name, index, + file_desc, is_extension=False): + """Creates a field descriptor from a FieldDescriptorProto. + + For message and enum type fields, this method will do a look up + in the pool for the appropriate descriptor for that type. If it + is unavailable, it will fall back to the _source function to + create it. If this type is still unavailable, construction will + fail. + + Args: + field_proto: The proto describing the field. + message_name: The name of the containing message. + index: Index of the field + file_desc: The file containing the field descriptor. + is_extension: Indication that this field is for an extension. + + Returns: + An initialized FieldDescriptor object + """ + + if message_name: + full_name = '.'.join((message_name, field_proto.name)) + else: + full_name = field_proto.name + + return descriptor.FieldDescriptor( + name=field_proto.name, + full_name=full_name, + index=index, + number=field_proto.number, + type=field_proto.type, + cpp_type=None, + message_type=None, + enum_type=None, + containing_type=None, + label=field_proto.label, + has_default_value=False, + default_value=None, + is_extension=is_extension, + extension_scope=None, + options=_OptionsOrNone(field_proto), + file=file_desc, + # pylint: disable=protected-access + create_key=descriptor._internal_create_key) + + def _SetAllFieldTypes(self, package, desc_proto, scope): + """Sets all the descriptor's fields's types. + + This method also sets the containing types on any extensions. + + Args: + package: The current package of desc_proto. + desc_proto: The message descriptor to update. + scope: Enclosing scope of available types. 
+ """ + + package = _PrefixWithDot(package) + + main_desc = self._GetTypeFromScope(package, desc_proto.name, scope) + + if package == '.': + nested_package = _PrefixWithDot(desc_proto.name) + else: + nested_package = '.'.join([package, desc_proto.name]) + + for field_proto, field_desc in zip(desc_proto.field, main_desc.fields): + self._SetFieldType(field_proto, field_desc, nested_package, scope) + + for extension_proto, extension_desc in ( + zip(desc_proto.extension, main_desc.extensions)): + extension_desc.containing_type = self._GetTypeFromScope( + nested_package, extension_proto.extendee, scope) + self._SetFieldType(extension_proto, extension_desc, nested_package, scope) + + for nested_type in desc_proto.nested_type: + self._SetAllFieldTypes(nested_package, nested_type, scope) + + def _SetFieldType(self, field_proto, field_desc, package, scope): + """Sets the field's type, cpp_type, message_type and enum_type. + + Args: + field_proto: Data about the field in proto format. + field_desc: The descriptor to modify. + package: The package the field's container is in. + scope: Enclosing scope of available types. + """ + if field_proto.type_name: + desc = self._GetTypeFromScope(package, field_proto.type_name, scope) + else: + desc = None + + if not field_proto.HasField('type'): + if isinstance(desc, descriptor.Descriptor): + field_proto.type = descriptor.FieldDescriptor.TYPE_MESSAGE + else: + field_proto.type = descriptor.FieldDescriptor.TYPE_ENUM + + field_desc.cpp_type = descriptor.FieldDescriptor.ProtoTypeToCppProtoType( + field_proto.type) + + if (field_proto.type == descriptor.FieldDescriptor.TYPE_MESSAGE + or field_proto.type == descriptor.FieldDescriptor.TYPE_GROUP): + field_desc.message_type = desc + + if field_proto.type == descriptor.FieldDescriptor.TYPE_ENUM: + field_desc.enum_type = desc + + if field_proto.label == descriptor.FieldDescriptor.LABEL_REPEATED: + field_desc.has_default_value = False + field_desc.default_value = [] + elif field_proto.HasField('default_value'): + field_desc.has_default_value = True + if (field_proto.type == descriptor.FieldDescriptor.TYPE_DOUBLE or + field_proto.type == descriptor.FieldDescriptor.TYPE_FLOAT): + field_desc.default_value = float(field_proto.default_value) + elif field_proto.type == descriptor.FieldDescriptor.TYPE_STRING: + field_desc.default_value = field_proto.default_value + elif field_proto.type == descriptor.FieldDescriptor.TYPE_BOOL: + field_desc.default_value = field_proto.default_value.lower() == 'true' + elif field_proto.type == descriptor.FieldDescriptor.TYPE_ENUM: + field_desc.default_value = field_desc.enum_type.values_by_name[ + field_proto.default_value].number + elif field_proto.type == descriptor.FieldDescriptor.TYPE_BYTES: + field_desc.default_value = text_encoding.CUnescape( + field_proto.default_value) + elif field_proto.type == descriptor.FieldDescriptor.TYPE_MESSAGE: + field_desc.default_value = None + else: + # All other types are of the "int" type. 
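+        # (e.g. TYPE_INT32, TYPE_INT64, TYPE_UINT32, TYPE_SINT64 and the
+        # fixed-width variants all parse their textual default with int().)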
+        field_desc.default_value = int(field_proto.default_value)
+    else:
+      field_desc.has_default_value = False
+      if (field_proto.type == descriptor.FieldDescriptor.TYPE_DOUBLE or
+          field_proto.type == descriptor.FieldDescriptor.TYPE_FLOAT):
+        field_desc.default_value = 0.0
+      elif field_proto.type == descriptor.FieldDescriptor.TYPE_STRING:
+        field_desc.default_value = u''
+      elif field_proto.type == descriptor.FieldDescriptor.TYPE_BOOL:
+        field_desc.default_value = False
+      elif field_proto.type == descriptor.FieldDescriptor.TYPE_ENUM:
+        field_desc.default_value = field_desc.enum_type.values[0].number
+      elif field_proto.type == descriptor.FieldDescriptor.TYPE_BYTES:
+        field_desc.default_value = b''
+      elif field_proto.type == descriptor.FieldDescriptor.TYPE_MESSAGE:
+        field_desc.default_value = None
+      else:
+        # All other types are of the "int" type.
+        field_desc.default_value = 0
+
+    field_desc.type = field_proto.type
+
+  def _MakeEnumValueDescriptor(self, value_proto, index):
+    """Creates an enum value descriptor object from an enum value proto.
+
+    Args:
+      value_proto: The proto describing the enum value.
+      index: The index of the enum value.
+
+    Returns:
+      An initialized EnumValueDescriptor object.
+    """
+
+    return descriptor.EnumValueDescriptor(
+        name=value_proto.name,
+        index=index,
+        number=value_proto.number,
+        options=_OptionsOrNone(value_proto),
+        type=None,
+        # pylint: disable=protected-access
+        create_key=descriptor._internal_create_key)
+
+  def _MakeServiceDescriptor(self, service_proto, service_index, scope,
+                             package, file_desc):
+    """Make a protobuf ServiceDescriptor given a ServiceDescriptorProto.
+
+    Args:
+      service_proto: The descriptor_pb2.ServiceDescriptorProto protobuf message.
+      service_index: The index of the service in the File.
+      scope: Dict mapping short and full symbols to message and enum types.
+      package: Optional package name for the new service descriptor.
+      file_desc: The file containing the service descriptor.
+
+    Returns:
+      The added descriptor.
+    """
+
+    if package:
+      service_name = '.'.join((package, service_proto.name))
+    else:
+      service_name = service_proto.name
+
+    methods = [self._MakeMethodDescriptor(method_proto, service_name, package,
+                                          scope, index)
+               for index, method_proto in enumerate(service_proto.method)]
+    desc = descriptor.ServiceDescriptor(
+        name=service_proto.name,
+        full_name=service_name,
+        index=service_index,
+        methods=methods,
+        options=_OptionsOrNone(service_proto),
+        file=file_desc,
+        # pylint: disable=protected-access
+        create_key=descriptor._internal_create_key)
+    self._CheckConflictRegister(desc, desc.full_name, desc.file.name)
+    self._service_descriptors[service_name] = desc
+    return desc
+
+  def _MakeMethodDescriptor(self, method_proto, service_name, package, scope,
+                            index):
+    """Creates a method descriptor from a MethodDescriptorProto.
+
+    Args:
+      method_proto: The proto describing the method.
+      service_name: The name of the containing service.
+      package: Optional package name to look up for types.
+      scope: Scope containing available types.
+      index: Index of the method in the service.
+
+    Returns:
+      An initialized MethodDescriptor object.
+ """ + full_name = '.'.join((service_name, method_proto.name)) + input_type = self._GetTypeFromScope( + package, method_proto.input_type, scope) + output_type = self._GetTypeFromScope( + package, method_proto.output_type, scope) + return descriptor.MethodDescriptor( + name=method_proto.name, + full_name=full_name, + index=index, + containing_service=None, + input_type=input_type, + output_type=output_type, + options=_OptionsOrNone(method_proto), + # pylint: disable=protected-access + create_key=descriptor._internal_create_key) + + def _ExtractSymbols(self, descriptors): + """Pulls out all the symbols from descriptor protos. + + Args: + descriptors: The messages to extract descriptors from. + Yields: + A two element tuple of the type name and descriptor object. + """ + + for desc in descriptors: + yield (_PrefixWithDot(desc.full_name), desc) + for symbol in self._ExtractSymbols(desc.nested_types): + yield symbol + for enum in desc.enum_types: + yield (_PrefixWithDot(enum.full_name), enum) + + def _GetDeps(self, dependencies): + """Recursively finds dependencies for file protos. + + Args: + dependencies: The names of the files being depended on. + + Yields: + Each direct and indirect dependency. + """ + + for dependency in dependencies: + dep_desc = self.FindFileByName(dependency) + yield dep_desc + for parent_dep in dep_desc.dependencies: + yield parent_dep + + def _GetTypeFromScope(self, package, type_name, scope): + """Finds a given type name in the current scope. + + Args: + package: The package the proto should be located in. + type_name: The name of the type to be found in the scope. + scope: Dict mapping short and full symbols to message and enum types. + + Returns: + The descriptor for the requested type. + """ + if type_name not in scope: + components = _PrefixWithDot(package).split('.') + while components: + possible_match = '.'.join(components + [type_name]) + if possible_match in scope: + type_name = possible_match + break + else: + components.pop(-1) + return scope[type_name] + + +def _PrefixWithDot(name): + return name if name.startswith('.') else '.%s' % name + + +if _USE_C_DESCRIPTORS: + # TODO(amauryfa): This pool could be constructed from Python code, when we + # support a flag like 'use_cpp_generated_pool=True'. + # pylint: disable=protected-access + _DEFAULT = descriptor._message.default_pool +else: + _DEFAULT = DescriptorPool() + + +def Default(): + return _DEFAULT diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/duration_pb2.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/duration_pb2.py new file mode 100644 index 0000000000000000000000000000000000000000..813a92dfcc82567c4c1e61ff1d928527cff00231 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/duration_pb2.py @@ -0,0 +1,78 @@ +# -*- coding: utf-8 -*- +# Generated by the protocol buffer compiler. DO NOT EDIT! 
+# source: google/protobuf/duration.proto +"""Generated protocol buffer code.""" +from google.protobuf import descriptor as _descriptor +from google.protobuf import message as _message +from google.protobuf import reflection as _reflection +from google.protobuf import symbol_database as _symbol_database +# @@protoc_insertion_point(imports) + +_sym_db = _symbol_database.Default() + + + + +DESCRIPTOR = _descriptor.FileDescriptor( + name='google/protobuf/duration.proto', + package='google.protobuf', + syntax='proto3', + serialized_options=b'\n\023com.google.protobufB\rDurationProtoP\001Z1google.golang.org/protobuf/types/known/durationpb\370\001\001\242\002\003GPB\252\002\036Google.Protobuf.WellKnownTypes', + create_key=_descriptor._internal_create_key, + serialized_pb=b'\n\x1egoogle/protobuf/duration.proto\x12\x0fgoogle.protobuf\"*\n\x08\x44uration\x12\x0f\n\x07seconds\x18\x01 \x01(\x03\x12\r\n\x05nanos\x18\x02 \x01(\x05\x42\x83\x01\n\x13\x63om.google.protobufB\rDurationProtoP\x01Z1google.golang.org/protobuf/types/known/durationpb\xf8\x01\x01\xa2\x02\x03GPB\xaa\x02\x1eGoogle.Protobuf.WellKnownTypesb\x06proto3' +) + + + + +_DURATION = _descriptor.Descriptor( + name='Duration', + full_name='google.protobuf.Duration', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='seconds', full_name='google.protobuf.Duration.seconds', index=0, + number=1, type=3, cpp_type=2, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='nanos', full_name='google.protobuf.Duration.nanos', index=1, + number=2, type=5, cpp_type=1, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=None, + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=51, + serialized_end=93, +) + +DESCRIPTOR.message_types_by_name['Duration'] = _DURATION +_sym_db.RegisterFileDescriptor(DESCRIPTOR) + +Duration = _reflection.GeneratedProtocolMessageType('Duration', (_message.Message,), { + 'DESCRIPTOR' : _DURATION, + '__module__' : 'google.protobuf.duration_pb2' + # @@protoc_insertion_point(class_scope:google.protobuf.Duration) + }) +_sym_db.RegisterMessage(Duration) + + +DESCRIPTOR._options = None +# @@protoc_insertion_point(module_scope) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/empty_pb2.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/empty_pb2.py new file mode 100644 index 0000000000000000000000000000000000000000..009d4d580a26ecd2df9694258a97a759d15062a3 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/empty_pb2.py @@ -0,0 +1,64 @@ +# -*- coding: utf-8 -*- +# Generated by the protocol buffer compiler. DO NOT EDIT! 
+# source: google/protobuf/empty.proto +"""Generated protocol buffer code.""" +from google.protobuf import descriptor as _descriptor +from google.protobuf import message as _message +from google.protobuf import reflection as _reflection +from google.protobuf import symbol_database as _symbol_database +# @@protoc_insertion_point(imports) + +_sym_db = _symbol_database.Default() + + + + +DESCRIPTOR = _descriptor.FileDescriptor( + name='google/protobuf/empty.proto', + package='google.protobuf', + syntax='proto3', + serialized_options=b'\n\023com.google.protobufB\nEmptyProtoP\001Z.google.golang.org/protobuf/types/known/emptypb\370\001\001\242\002\003GPB\252\002\036Google.Protobuf.WellKnownTypes', + create_key=_descriptor._internal_create_key, + serialized_pb=b'\n\x1bgoogle/protobuf/empty.proto\x12\x0fgoogle.protobuf\"\x07\n\x05\x45mptyB}\n\x13\x63om.google.protobufB\nEmptyProtoP\x01Z.google.golang.org/protobuf/types/known/emptypb\xf8\x01\x01\xa2\x02\x03GPB\xaa\x02\x1eGoogle.Protobuf.WellKnownTypesb\x06proto3' +) + + + + +_EMPTY = _descriptor.Descriptor( + name='Empty', + full_name='google.protobuf.Empty', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=None, + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=48, + serialized_end=55, +) + +DESCRIPTOR.message_types_by_name['Empty'] = _EMPTY +_sym_db.RegisterFileDescriptor(DESCRIPTOR) + +Empty = _reflection.GeneratedProtocolMessageType('Empty', (_message.Message,), { + 'DESCRIPTOR' : _EMPTY, + '__module__' : 'google.protobuf.empty_pb2' + # @@protoc_insertion_point(class_scope:google.protobuf.Empty) + }) +_sym_db.RegisterMessage(Empty) + + +DESCRIPTOR._options = None +# @@protoc_insertion_point(module_scope) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/field_mask_pb2.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/field_mask_pb2.py new file mode 100644 index 0000000000000000000000000000000000000000..c7494248aca8e17a23d99338725fed9ae2cdaaf6 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/field_mask_pb2.py @@ -0,0 +1,71 @@ +# -*- coding: utf-8 -*- +# Generated by the protocol buffer compiler. DO NOT EDIT! 
+# source: google/protobuf/field_mask.proto +"""Generated protocol buffer code.""" +from google.protobuf import descriptor as _descriptor +from google.protobuf import message as _message +from google.protobuf import reflection as _reflection +from google.protobuf import symbol_database as _symbol_database +# @@protoc_insertion_point(imports) + +_sym_db = _symbol_database.Default() + + + + +DESCRIPTOR = _descriptor.FileDescriptor( + name='google/protobuf/field_mask.proto', + package='google.protobuf', + syntax='proto3', + serialized_options=b'\n\023com.google.protobufB\016FieldMaskProtoP\001Z2google.golang.org/protobuf/types/known/fieldmaskpb\370\001\001\242\002\003GPB\252\002\036Google.Protobuf.WellKnownTypes', + create_key=_descriptor._internal_create_key, + serialized_pb=b'\n google/protobuf/field_mask.proto\x12\x0fgoogle.protobuf\"\x1a\n\tFieldMask\x12\r\n\x05paths\x18\x01 \x03(\tB\x85\x01\n\x13\x63om.google.protobufB\x0e\x46ieldMaskProtoP\x01Z2google.golang.org/protobuf/types/known/fieldmaskpb\xf8\x01\x01\xa2\x02\x03GPB\xaa\x02\x1eGoogle.Protobuf.WellKnownTypesb\x06proto3' +) + + + + +_FIELDMASK = _descriptor.Descriptor( + name='FieldMask', + full_name='google.protobuf.FieldMask', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='paths', full_name='google.protobuf.FieldMask.paths', index=0, + number=1, type=9, cpp_type=9, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=None, + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=53, + serialized_end=79, +) + +DESCRIPTOR.message_types_by_name['FieldMask'] = _FIELDMASK +_sym_db.RegisterFileDescriptor(DESCRIPTOR) + +FieldMask = _reflection.GeneratedProtocolMessageType('FieldMask', (_message.Message,), { + 'DESCRIPTOR' : _FIELDMASK, + '__module__' : 'google.protobuf.field_mask_pb2' + # @@protoc_insertion_point(class_scope:google.protobuf.FieldMask) + }) +_sym_db.RegisterMessage(FieldMask) + + +DESCRIPTOR._options = None +# @@protoc_insertion_point(module_scope) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/internal/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/internal/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..7d2e571a143a51d4b385a645a77c0316edc87b98 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/internal/__init__.py @@ -0,0 +1,30 @@ +# Protocol Buffers - Google's data interchange format +# Copyright 2008 Google Inc. All rights reserved. +# https://developers.google.com/protocol-buffers/ +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions are +# met: +# +# * Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. 
+# * Redistributions in binary form must reproduce the above +# copyright notice, this list of conditions and the following disclaimer +# in the documentation and/or other materials provided with the +# distribution. +# * Neither the name of Google Inc. nor the names of its +# contributors may be used to endorse or promote products derived from +# this software without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/internal/_api_implementation.cpython-38-x86_64-linux-gnu.so b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/internal/_api_implementation.cpython-38-x86_64-linux-gnu.so new file mode 100755 index 0000000000000000000000000000000000000000..255ab1a7e7a4db8f025e40ceb82f71c9b2ad2621 Binary files /dev/null and b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/internal/_api_implementation.cpython-38-x86_64-linux-gnu.so differ diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/internal/api_implementation.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/internal/api_implementation.py new file mode 100644 index 0000000000000000000000000000000000000000..7592be4eed3d3d14d3545490fa11977084dd7e1b --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/internal/api_implementation.py @@ -0,0 +1,158 @@ +# Protocol Buffers - Google's data interchange format +# Copyright 2008 Google Inc. All rights reserved. +# https://developers.google.com/protocol-buffers/ +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions are +# met: +# +# * Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# * Redistributions in binary form must reproduce the above +# copyright notice, this list of conditions and the following disclaimer +# in the documentation and/or other materials provided with the +# distribution. +# * Neither the name of Google Inc. nor the names of its +# contributors may be used to endorse or promote products derived from +# this software without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +"""Determine which implementation of the protobuf API is used in this process. +""" + +import os +import warnings +import sys + +try: + # pylint: disable=g-import-not-at-top + from google.protobuf.internal import _api_implementation + # The compile-time constants in the _api_implementation module can be used to + # switch to a certain implementation of the Python API at build time. + _api_version = _api_implementation.api_version + _proto_extension_modules_exist_in_build = True +except ImportError: + _api_version = -1 # Unspecified by compiler flags. + _proto_extension_modules_exist_in_build = False + +if _api_version == 1: + raise ValueError('api_version=1 is no longer supported.') +if _api_version < 0: # Still unspecified? + try: + # The presence of this module in a build allows the proto implementation to + # be upgraded merely via build deps rather than a compiler flag or the + # runtime environment variable. + # pylint: disable=g-import-not-at-top + from google.protobuf import _use_fast_cpp_protos + # Work around a known issue in the classic bootstrap .par import hook. + if not _use_fast_cpp_protos: + raise ImportError('_use_fast_cpp_protos import succeeded but was None') + del _use_fast_cpp_protos + _api_version = 2 + # Can not import both use_fast_cpp_protos and use_pure_python. + from google.protobuf import use_pure_python + raise RuntimeError( + 'Conflict depend on both use_fast_cpp_protos and use_pure_python') + except ImportError: + try: + # pylint: disable=g-import-not-at-top + from google.protobuf import use_pure_python + del use_pure_python # Avoids a pylint error and namespace pollution. + _api_version = 0 + except ImportError: + # TODO(b/74017912): It's unsafe to enable :use_fast_cpp_protos by default; + # it can cause data loss if you have any Python-only extensions to any + # message passed back and forth with C++ code. + # + # TODO(b/17427486): Once that bug is fixed, we want to make both Python 2 + # and Python 3 default to `_api_version = 2` (C++ implementation V2). + pass + +_default_implementation_type = ( + 'python' if _api_version <= 0 else 'cpp') + +# This environment variable can be used to switch to a certain implementation +# of the Python API, overriding the compile-time constants in the +# _api_implementation module. Right now only 'python' and 'cpp' are valid +# values. Any other value will be ignored. +_implementation_type = os.getenv('PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION', + _default_implementation_type) + +if _implementation_type != 'python': + _implementation_type = 'cpp' + +if 'PyPy' in sys.version and _implementation_type == 'cpp': + warnings.warn('PyPy does not work yet with cpp protocol buffers. ' + 'Falling back to the python implementation.') + _implementation_type = 'python' + +# This environment variable can be used to switch between the two +# 'cpp' implementations, overriding the compile-time constants in the +# _api_implementation module. Right now only '2' is supported. 
Any other +# value will cause an error to be raised. +_implementation_version_str = os.getenv( + 'PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION_VERSION', '2') + +if _implementation_version_str != '2': + raise ValueError( + 'unsupported PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION_VERSION: "' + + _implementation_version_str + '" (supported versions: 2)' + ) + +_implementation_version = int(_implementation_version_str) + + +# Detect if serialization should be deterministic by default +try: + # The presence of this module in a build allows the proto implementation to + # be upgraded merely via build deps. + # + # NOTE: Merely importing this automatically enables deterministic proto + # serialization for C++ code, but we still need to export it as a boolean so + # that we can do the same for `_implementation_type == 'python'`. + # + # NOTE2: It is possible for C++ code to enable deterministic serialization by + # default _without_ affecting Python code, if the C++ implementation is not in + # use by this module. That is intended behavior, so we don't actually expose + # this boolean outside of this module. + # + # pylint: disable=g-import-not-at-top,unused-import + from google.protobuf import enable_deterministic_proto_serialization + _python_deterministic_proto_serialization = True +except ImportError: + _python_deterministic_proto_serialization = False + + +# Usage of this function is discouraged. Clients shouldn't care which +# implementation of the API is in use. Note that there is no guarantee +# that differences between APIs will be maintained. +# Please don't use this function if possible. +def Type(): + return _implementation_type + + +def _SetType(implementation_type): + """Never use! Only for protobuf benchmark.""" + global _implementation_type + _implementation_type = implementation_type + + +# See comment on 'Type' above. +def Version(): + return _implementation_version + + +# For internal use only +def IsPythonDefaultSerializationDeterministic(): + return _python_deterministic_proto_serialization diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/internal/containers.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/internal/containers.py new file mode 100644 index 0000000000000000000000000000000000000000..92793490bbc3e1f664e82d9993562c2e6da20acb --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/internal/containers.py @@ -0,0 +1,785 @@ +# Protocol Buffers - Google's data interchange format +# Copyright 2008 Google Inc. All rights reserved. +# https://developers.google.com/protocol-buffers/ +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions are +# met: +# +# * Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# * Redistributions in binary form must reproduce the above +# copyright notice, this list of conditions and the following disclaimer +# in the documentation and/or other materials provided with the +# distribution. +# * Neither the name of Google Inc. nor the names of its +# contributors may be used to endorse or promote products derived from +# this software without specific prior written permission. 
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+"""Contains container classes to represent different protocol buffer types.
+
+This file defines container classes which represent categories of protocol
+buffer field types which need extra maintenance. Currently these categories
+are:
+
+- Repeated scalar fields - These are all repeated fields which aren't
+  composite (e.g. they are of simple types like int32, string, etc).
+- Repeated composite fields - Repeated fields which are composite. This
+  includes groups and nested messages.
+"""
+
+__author__ = 'petar@google.com (Petar Petrov)'
+
+import sys
+try:
+  # This fallback applies for all versions of Python before 3.3
+  import collections.abc as collections_abc
+except ImportError:
+  import collections as collections_abc
+
+if sys.version_info[0] < 3:
+  # We would use collections_abc.MutableMapping all the time, but in Python 2
+  # it doesn't define __slots__. This causes two significant problems:
+  #
+  # 1. we can't disallow arbitrary attribute assignment, even if our derived
+  #    classes *do* define __slots__.
+  #
+  # 2. we can't safely derive a C type from it without __slots__ defined (the
+  #    interpreter expects to find a dict at tp_dictoffset, which we can't
+  #    robustly provide). And we don't want an instance dict anyway.
+  #
+  # So this is the Python 2.7 definition of Mapping/MutableMapping functions
+  # verbatim, except that:
+  # 1. We declare __slots__.
+  # 2. We don't declare this as a virtual base class. The classes defined
+  #    in collections_abc are the interesting base classes, not us.
+  #
+  # Note: deriving from object is critical. It is the only thing that makes
+  # this a true type, allowing us to derive from it in C++ cleanly and making
+  # __slots__ properly disallow arbitrary element assignment.
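+  #
+  # Illustrative sketch (hypothetical snippet, not part of the vendored
+  # source): because these container classes declare __slots__, a misspelled
+  # attribute assignment such as
+  #
+  #   container.vaules = []   # note the typo
+  #
+  # raises AttributeError instead of silently creating a new instance
+  # attribute.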
+ + class Mapping(object): + __slots__ = () + + def get(self, key, default=None): + try: + return self[key] + except KeyError: + return default + + def __contains__(self, key): + try: + self[key] + except KeyError: + return False + else: + return True + + def iterkeys(self): + return iter(self) + + def itervalues(self): + for key in self: + yield self[key] + + def iteritems(self): + for key in self: + yield (key, self[key]) + + def keys(self): + return list(self) + + def items(self): + return [(key, self[key]) for key in self] + + def values(self): + return [self[key] for key in self] + + # Mappings are not hashable by default, but subclasses can change this + __hash__ = None + + def __eq__(self, other): + if not isinstance(other, collections_abc.Mapping): + return NotImplemented + return dict(self.items()) == dict(other.items()) + + def __ne__(self, other): + return not (self == other) + + class MutableMapping(Mapping): + __slots__ = () + + __marker = object() + + def pop(self, key, default=__marker): + try: + value = self[key] + except KeyError: + if default is self.__marker: + raise + return default + else: + del self[key] + return value + + def popitem(self): + try: + key = next(iter(self)) + except StopIteration: + raise KeyError + value = self[key] + del self[key] + return key, value + + def clear(self): + try: + while True: + self.popitem() + except KeyError: + pass + + def update(*args, **kwds): + if len(args) > 2: + raise TypeError("update() takes at most 2 positional " + "arguments ({} given)".format(len(args))) + elif not args: + raise TypeError("update() takes at least 1 argument (0 given)") + self = args[0] + other = args[1] if len(args) >= 2 else () + + if isinstance(other, Mapping): + for key in other: + self[key] = other[key] + elif hasattr(other, "keys"): + for key in other.keys(): + self[key] = other[key] + else: + for key, value in other: + self[key] = value + for key, value in kwds.items(): + self[key] = value + + def setdefault(self, key, default=None): + try: + return self[key] + except KeyError: + self[key] = default + return default + + collections_abc.Mapping.register(Mapping) + collections_abc.MutableMapping.register(MutableMapping) + +else: + # In Python 3 we can just use MutableMapping directly, because it defines + # __slots__. + MutableMapping = collections_abc.MutableMapping + + +class BaseContainer(object): + + """Base container class.""" + + # Minimizes memory usage and disallows assignment to other attributes. + __slots__ = ['_message_listener', '_values'] + + def __init__(self, message_listener): + """ + Args: + message_listener: A MessageListener implementation. + The RepeatedScalarFieldContainer will call this object's + Modified() method when it is modified. + """ + self._message_listener = message_listener + self._values = [] + + def __getitem__(self, key): + """Retrieves item by the specified key.""" + return self._values[key] + + def __len__(self): + """Returns the number of elements in the container.""" + return len(self._values) + + def __ne__(self, other): + """Checks if another instance isn't equal to this one.""" + # The concrete classes should define __eq__. + return not self == other + + def __hash__(self): + raise TypeError('unhashable object') + + def __repr__(self): + return repr(self._values) + + def sort(self, *args, **kwargs): + # Continue to support the old sort_function keyword argument. + # This is expected to be a rare occurrence, so use LBYL to avoid + # the overhead of actually catching KeyError. 
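+    # (Note: the 'cmp' keyword below only exists on Python 2's list.sort();
+    # under Python 3, passing sort_function would surface as a TypeError from
+    # list.sort(), since cmp-style comparisons were removed from the
+    # language.)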
+ if 'sort_function' in kwargs: + kwargs['cmp'] = kwargs.pop('sort_function') + self._values.sort(*args, **kwargs) + + def reverse(self): + self._values.reverse() + + +collections_abc.MutableSequence.register(BaseContainer) + + +class RepeatedScalarFieldContainer(BaseContainer): + """Simple, type-checked, list-like container for holding repeated scalars.""" + + # Disallows assignment to other attributes. + __slots__ = ['_type_checker'] + + def __init__(self, message_listener, type_checker): + """Args: + + message_listener: A MessageListener implementation. The + RepeatedScalarFieldContainer will call this object's Modified() method + when it is modified. + type_checker: A type_checkers.ValueChecker instance to run on elements + inserted into this container. + """ + super(RepeatedScalarFieldContainer, self).__init__(message_listener) + self._type_checker = type_checker + + def append(self, value): + """Appends an item to the list. Similar to list.append().""" + self._values.append(self._type_checker.CheckValue(value)) + if not self._message_listener.dirty: + self._message_listener.Modified() + + def insert(self, key, value): + """Inserts the item at the specified position. Similar to list.insert().""" + self._values.insert(key, self._type_checker.CheckValue(value)) + if not self._message_listener.dirty: + self._message_listener.Modified() + + def extend(self, elem_seq): + """Extends by appending the given iterable. Similar to list.extend().""" + + if elem_seq is None: + return + try: + elem_seq_iter = iter(elem_seq) + except TypeError: + if not elem_seq: + # silently ignore falsy inputs :-/. + # TODO(ptucker): Deprecate this behavior. b/18413862 + return + raise + + new_values = [self._type_checker.CheckValue(elem) for elem in elem_seq_iter] + if new_values: + self._values.extend(new_values) + self._message_listener.Modified() + + def MergeFrom(self, other): + """Appends the contents of another repeated field of the same type to this + one. We do not check the types of the individual fields. + """ + self._values.extend(other._values) + self._message_listener.Modified() + + def remove(self, elem): + """Removes an item from the list. Similar to list.remove().""" + self._values.remove(elem) + self._message_listener.Modified() + + def pop(self, key=-1): + """Removes and returns an item at a given index. 
Similar to list.pop()."""
+    value = self._values[key]
+    self.__delitem__(key)
+    return value
+
+  def __setitem__(self, key, value):
+    """Sets the item on the specified position."""
+    if isinstance(key, slice):  # PY3
+      if key.step is not None:
+        raise ValueError('Extended slices not supported')
+      self.__setslice__(key.start, key.stop, value)
+    else:
+      self._values[key] = self._type_checker.CheckValue(value)
+      self._message_listener.Modified()
+
+  def __getslice__(self, start, stop):
+    """Retrieves the subset of items from between the specified indices."""
+    return self._values[start:stop]
+
+  def __setslice__(self, start, stop, values):
+    """Sets the subset of items from between the specified indices."""
+    new_values = []
+    for value in values:
+      new_values.append(self._type_checker.CheckValue(value))
+    self._values[start:stop] = new_values
+    self._message_listener.Modified()
+
+  def __delitem__(self, key):
+    """Deletes the item at the specified position."""
+    del self._values[key]
+    self._message_listener.Modified()
+
+  def __delslice__(self, start, stop):
+    """Deletes the subset of items from between the specified indices."""
+    del self._values[start:stop]
+    self._message_listener.Modified()
+
+  def __eq__(self, other):
+    """Compares the current instance with another one."""
+    if self is other:
+      return True
+    # Special case for the same type which should be common and fast.
+    if isinstance(other, self.__class__):
+      return other._values == self._values
+    # We are presumably comparing against some other sequence type.
+    return other == self._values
+
+
+class RepeatedCompositeFieldContainer(BaseContainer):
+
+  """Simple, list-like container for holding repeated composite fields."""
+
+  # Disallows assignment to other attributes.
+  __slots__ = ['_message_descriptor']
+
+  def __init__(self, message_listener, message_descriptor):
+    """
+    Note that we pass in a descriptor instead of the generated class directly,
+    since at the time we construct a _RepeatedCompositeFieldContainer we
+    haven't yet necessarily initialized the type that will be contained in the
+    container.
+
+    Args:
+      message_listener: A MessageListener implementation.
+        The RepeatedCompositeFieldContainer will call this object's
+        Modified() method when it is modified.
+      message_descriptor: A Descriptor instance describing the protocol type
+        that should be present in this container. We'll use the
+        _concrete_class field of this descriptor when the client calls add().
+    """
+    super(RepeatedCompositeFieldContainer, self).__init__(message_listener)
+    self._message_descriptor = message_descriptor
+
+  def add(self, **kwargs):
+    """Adds a new element at the end of the list and returns it. Keyword
+    arguments may be used to initialize the element.
+ """ + new_element = self._message_descriptor._concrete_class(**kwargs) + new_element._SetListener(self._message_listener) + self._values.append(new_element) + if not self._message_listener.dirty: + self._message_listener.Modified() + return new_element + + def append(self, value): + """Appends one element by copying the message.""" + new_element = self._message_descriptor._concrete_class() + new_element._SetListener(self._message_listener) + new_element.CopyFrom(value) + self._values.append(new_element) + if not self._message_listener.dirty: + self._message_listener.Modified() + + def insert(self, key, value): + """Inserts the item at the specified position by copying.""" + new_element = self._message_descriptor._concrete_class() + new_element._SetListener(self._message_listener) + new_element.CopyFrom(value) + self._values.insert(key, new_element) + if not self._message_listener.dirty: + self._message_listener.Modified() + + def extend(self, elem_seq): + """Extends by appending the given sequence of elements of the same type + + as this one, copying each individual message. + """ + message_class = self._message_descriptor._concrete_class + listener = self._message_listener + values = self._values + for message in elem_seq: + new_element = message_class() + new_element._SetListener(listener) + new_element.MergeFrom(message) + values.append(new_element) + listener.Modified() + + def MergeFrom(self, other): + """Appends the contents of another repeated field of the same type to this + one, copying each individual message. + """ + self.extend(other._values) + + def remove(self, elem): + """Removes an item from the list. Similar to list.remove().""" + self._values.remove(elem) + self._message_listener.Modified() + + def pop(self, key=-1): + """Removes and returns an item at a given index. Similar to list.pop().""" + value = self._values[key] + self.__delitem__(key) + return value + + def __getslice__(self, start, stop): + """Retrieves the subset of items from between the specified indices.""" + return self._values[start:stop] + + def __delitem__(self, key): + """Deletes the item at the specified position.""" + del self._values[key] + self._message_listener.Modified() + + def __delslice__(self, start, stop): + """Deletes the subset of items from between the specified indices.""" + del self._values[start:stop] + self._message_listener.Modified() + + def __eq__(self, other): + """Compares the current instance with another one.""" + if self is other: + return True + if not isinstance(other, self.__class__): + raise TypeError('Can only compare repeated composite fields against ' + 'other repeated composite fields.') + return self._values == other._values + + +class ScalarMap(MutableMapping): + + """Simple, type-checked, dict-like container for holding repeated scalars.""" + + # Disallows assignment to other attributes. + __slots__ = ['_key_checker', '_value_checker', '_values', '_message_listener', + '_entry_descriptor'] + + def __init__(self, message_listener, key_checker, value_checker, + entry_descriptor): + """ + Args: + message_listener: A MessageListener implementation. + The ScalarMap will call this object's Modified() method when it + is modified. + key_checker: A type_checkers.ValueChecker instance to run on keys + inserted into this container. + value_checker: A type_checkers.ValueChecker instance to run on values + inserted into this container. + entry_descriptor: The MessageDescriptor of a map entry: key and value. 
+    """
+    self._message_listener = message_listener
+    self._key_checker = key_checker
+    self._value_checker = value_checker
+    self._entry_descriptor = entry_descriptor
+    self._values = {}
+
+  def __getitem__(self, key):
+    try:
+      return self._values[key]
+    except KeyError:
+      key = self._key_checker.CheckValue(key)
+      val = self._value_checker.DefaultValue()
+      self._values[key] = val
+      return val
+
+  def __contains__(self, item):
+    # We check the key's type to match the strong-typing flavor of the API.
+    # Also this makes it easier to match the behavior of the C++ implementation.
+    self._key_checker.CheckValue(item)
+    return item in self._values
+
+  # We need to override this explicitly, because our defaultdict-like behavior
+  # will make the default implementation (from our base class) always insert
+  # the key.
+  def get(self, key, default=None):
+    if key in self:
+      return self[key]
+    else:
+      return default
+
+  def __setitem__(self, key, value):
+    checked_key = self._key_checker.CheckValue(key)
+    checked_value = self._value_checker.CheckValue(value)
+    self._values[checked_key] = checked_value
+    self._message_listener.Modified()
+
+  def __delitem__(self, key):
+    del self._values[key]
+    self._message_listener.Modified()
+
+  def __len__(self):
+    return len(self._values)
+
+  def __iter__(self):
+    return iter(self._values)
+
+  def __repr__(self):
+    return repr(self._values)
+
+  def MergeFrom(self, other):
+    self._values.update(other._values)
+    self._message_listener.Modified()
+
+  def InvalidateIterators(self):
+    # It appears that the only way to reliably invalidate iterators to
+    # self._values is to ensure that its size changes.
+    original = self._values
+    self._values = original.copy()
+    original[None] = None
+
+  # This is defined in the abstract base, but we can do it much more cheaply.
+  def clear(self):
+    self._values.clear()
+    self._message_listener.Modified()
+
+  def GetEntryClass(self):
+    return self._entry_descriptor._concrete_class
+
+
+class MessageMap(MutableMapping):
+
+  """Simple, type-checked, dict-like container for holding submessage values."""
+
+  # Disallows assignment to other attributes.
+  __slots__ = ['_key_checker', '_values', '_message_listener',
+               '_message_descriptor', '_entry_descriptor']
+
+  def __init__(self, message_listener, message_descriptor, key_checker,
+               entry_descriptor):
+    """
+    Args:
+      message_listener: A MessageListener implementation.
+        The MessageMap will call this object's Modified() method when it
+        is modified.
+      message_descriptor: The MessageDescriptor of the submessage type
+        stored as values in this container.
+      key_checker: A type_checkers.ValueChecker instance to run on keys
+        inserted into this container.
+      entry_descriptor: The MessageDescriptor of a map entry: key and value.
+    """
+    self._message_listener = message_listener
+    self._message_descriptor = message_descriptor
+    self._key_checker = key_checker
+    self._entry_descriptor = entry_descriptor
+    self._values = {}
+
+  def __getitem__(self, key):
+    key = self._key_checker.CheckValue(key)
+    try:
+      return self._values[key]
+    except KeyError:
+      new_element = self._message_descriptor._concrete_class()
+      new_element._SetListener(self._message_listener)
+      self._values[key] = new_element
+      self._message_listener.Modified()
+
+      return new_element
+
+  def get_or_create(self, key):
+    """get_or_create() is an alias for getitem (ie. map[key]).
+
+    Args:
+      key: The key to get or create in the map.
+
+    This is useful in cases where you want to be explicit that the call is
+    mutating the map. 
This can avoid lint errors for statements like this + that otherwise would appear to be pointless statements: + + msg.my_map[key] + """ + return self[key] + + # We need to override this explicitly, because our defaultdict-like behavior + # will make the default implementation (from our base class) always insert + # the key. + def get(self, key, default=None): + if key in self: + return self[key] + else: + return default + + def __contains__(self, item): + item = self._key_checker.CheckValue(item) + return item in self._values + + def __setitem__(self, key, value): + raise ValueError('May not set values directly, call my_map[key].foo = 5') + + def __delitem__(self, key): + key = self._key_checker.CheckValue(key) + del self._values[key] + self._message_listener.Modified() + + def __len__(self): + return len(self._values) + + def __iter__(self): + return iter(self._values) + + def __repr__(self): + return repr(self._values) + + def MergeFrom(self, other): + # pylint: disable=protected-access + for key in other._values: + # According to documentation: "When parsing from the wire or when merging, + # if there are duplicate map keys the last key seen is used". + if key in self: + del self[key] + self[key].CopyFrom(other[key]) + # self._message_listener.Modified() not required here, because + # mutations to submessages already propagate. + + def InvalidateIterators(self): + # It appears that the only way to reliably invalidate iterators to + # self._values is to ensure that its size changes. + original = self._values + self._values = original.copy() + original[None] = None + + # This is defined in the abstract base, but we can do it much more cheaply. + def clear(self): + self._values.clear() + self._message_listener.Modified() + + def GetEntryClass(self): + return self._entry_descriptor._concrete_class + + +class _UnknownField(object): + + """A parsed unknown field.""" + + # Disallows assignment to other attributes. + __slots__ = ['_field_number', '_wire_type', '_data'] + + def __init__(self, field_number, wire_type, data): + self._field_number = field_number + self._wire_type = wire_type + self._data = data + return + + def __lt__(self, other): + # pylint: disable=protected-access + return self._field_number < other._field_number + + def __eq__(self, other): + if self is other: + return True + # pylint: disable=protected-access + return (self._field_number == other._field_number and + self._wire_type == other._wire_type and + self._data == other._data) + + +class UnknownFieldRef(object): + + def __init__(self, parent, index): + self._parent = parent + self._index = index + return + + def _check_valid(self): + if not self._parent: + raise ValueError('UnknownField does not exist. ' + 'The parent message might be cleared.') + if self._index >= len(self._parent): + raise ValueError('UnknownField does not exist. ' + 'The parent message might be cleared.') + + @property + def field_number(self): + self._check_valid() + # pylint: disable=protected-access + return self._parent._internal_get(self._index)._field_number + + @property + def wire_type(self): + self._check_valid() + # pylint: disable=protected-access + return self._parent._internal_get(self._index)._wire_type + + @property + def data(self): + self._check_valid() + # pylint: disable=protected-access + return self._parent._internal_get(self._index)._data + + +class UnknownFieldSet(object): + + """UnknownField container""" + + # Disallows assignment to other attributes. 
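+  # Note: _values also doubles as a liveness flag -- _clear() below replaces
+  # it with None, after which __getitem__ and __len__ raise ValueError
+  # because the parent message has been cleared.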
+  __slots__ = ['_values']
+
+  def __init__(self):
+    self._values = []
+
+  def __getitem__(self, index):
+    if self._values is None:
+      raise ValueError('UnknownFields does not exist. '
+                       'The parent message might be cleared.')
+    size = len(self._values)
+    if index < 0:
+      index += size
+    if index < 0 or index >= size:
+      raise IndexError('index %d out of range' % index)
+
+    return UnknownFieldRef(self, index)
+
+  def _internal_get(self, index):
+    return self._values[index]
+
+  def __len__(self):
+    if self._values is None:
+      raise ValueError('UnknownFields does not exist. '
+                       'The parent message might be cleared.')
+    return len(self._values)
+
+  def _add(self, field_number, wire_type, data):
+    unknown_field = _UnknownField(field_number, wire_type, data)
+    self._values.append(unknown_field)
+    return unknown_field
+
+  def __iter__(self):
+    for i in range(len(self)):
+      yield UnknownFieldRef(self, i)
+
+  def _extend(self, other):
+    if other is None:
+      return
+    # pylint: disable=protected-access
+    self._values.extend(other._values)
+
+  def __eq__(self, other):
+    if self is other:
+      return True
+    # Sort unknown fields because their order shouldn't
+    # affect equality test.
+    values = list(self._values)
+    if other is None:
+      return not values
+    values.sort()
+    # pylint: disable=protected-access
+    other_values = sorted(other._values)
+    return values == other_values
+
+  def _clear(self):
+    for value in self._values:
+      # pylint: disable=protected-access
+      if isinstance(value._data, UnknownFieldSet):
+        value._data._clear()  # pylint: disable=protected-access
+    self._values = None
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/internal/decoder.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/internal/decoder.py
new file mode 100644
index 0000000000000000000000000000000000000000..6804986b6ebe2cd91fa3156503f4d81446967ec6
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/internal/decoder.py
@@ -0,0 +1,1057 @@
+# Protocol Buffers - Google's data interchange format
+# Copyright 2008 Google Inc. All rights reserved.
+# https://developers.google.com/protocol-buffers/
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions are
+# met:
+#
+# * Redistributions of source code must retain the above copyright
+# notice, this list of conditions and the following disclaimer.
+# * Redistributions in binary form must reproduce the above
+# copyright notice, this list of conditions and the following disclaimer
+# in the documentation and/or other materials provided with the
+# distribution.
+# * Neither the name of Google Inc. nor the names of its
+# contributors may be used to endorse or promote products derived from
+# this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+"""Code for decoding protocol buffer primitives.
+
+This code is very similar to encoder.py -- read the docs for that module first.
+
+A "decoder" is a function with the signature:
+  Decode(buffer, pos, end, message, field_dict)
+The arguments are:
+  buffer: The string containing the encoded message.
+  pos: The current position in the string.
+  end: The position in the string where the current message ends. May be
+    less than len(buffer) if we're reading a sub-message.
+  message: The message object into which we're parsing.
+  field_dict: message._fields (avoids a hashtable lookup).
+The decoder reads the field and stores it into field_dict, returning the new
+buffer position. A decoder for a repeated field may proactively decode all of
+the elements of that field, if they appear consecutively.
+
+Note that decoders may throw any of the following:
+  IndexError: Indicates a truncated message.
+  struct.error: Unpacking of a fixed-width field failed.
+  message.DecodeError: Other errors.
+
+Decoders are expected to raise an exception if they are called with pos > end.
+This allows callers to be lax about bounds checking: it's fine to read past
+"end" as long as you are sure that someone else will notice and throw an
+exception later on.
+
+Something up the call stack is expected to catch IndexError and struct.error
+and convert them to message.DecodeError.
+
+Decoders are constructed using decoder constructors with the signature:
+  MakeDecoder(field_number, is_repeated, is_packed, key, new_default)
+The arguments are:
+  field_number: The field number of the field we want to decode.
+  is_repeated: Is the field a repeated field? (bool)
+  is_packed: Is the field a packed field? (bool)
+  key: The key to use when looking up the field within field_dict.
+    (This is actually the FieldDescriptor but nothing in this
+    file should depend on that.)
+  new_default: A function which takes a message object as a parameter and
+    returns a new instance of the default value for this field.
+    (This is called for repeated fields and sub-messages, when an
+    instance does not already exist.)
+
+As with encoders, we define a decoder constructor for every type of field.
+Then, for every field of every message class we construct an actual decoder.
+That decoder goes into a dict indexed by tag, so when we decode a message
+we repeatedly read a tag, look up the corresponding decoder, and invoke it.
+"""
+
+__author__ = 'kenton@google.com (Kenton Varda)'
+
+import struct
+import sys
+import six
+
+_UCS2_MAXUNICODE = 65535
+if six.PY3:
+  long = int
+else:
+  import re  # pylint: disable=g-import-not-at-top
+  _SURROGATE_PATTERN = re.compile(six.u(r'[\ud800-\udfff]'))
+
+from google.protobuf.internal import containers
+from google.protobuf.internal import encoder
+from google.protobuf.internal import wire_format
+from google.protobuf import message
+
+
+# This will overflow and thus become IEEE-754 "infinity". We would use
+# "float('inf')" but it doesn't work on Windows pre-Python-2.6.
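+# (Illustrative: the literal 1e10000 exceeds the largest finite IEEE-754
+# double, so it evaluates to inf; negating it gives -inf, and inf * 0 below
+# produces a NaN. In particular _NAN != _NAN, as usual for NaN.)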
+_POS_INF = 1e10000
+_NEG_INF = -_POS_INF
+_NAN = _POS_INF * 0
+
+
+# This is not for optimization, but rather to avoid conflicts with local
+# variables named "message".
+_DecodeError = message.DecodeError
+
+
+def _VarintDecoder(mask, result_type):
+  """Return a decoder for a basic varint value (does not include tag).
+
+  Decoded values will be bitwise-anded with the given mask before being
+  returned, e.g. to limit them to 32 bits. The returned decoder does not
+  take the usual "end" parameter -- the caller is expected to do bounds checking
+  after the fact (often the caller can defer such checking until later). The
+  decoder returns a (value, new_pos) pair.
+  """
+
+  def DecodeVarint(buffer, pos):
+    result = 0
+    shift = 0
+    while 1:
+      b = six.indexbytes(buffer, pos)
+      result |= ((b & 0x7f) << shift)
+      pos += 1
+      if not (b & 0x80):
+        result &= mask
+        result = result_type(result)
+        return (result, pos)
+      shift += 7
+      if shift >= 64:
+        raise _DecodeError('Too many bytes when decoding varint.')
+  return DecodeVarint
+
+
+def _SignedVarintDecoder(bits, result_type):
+  """Like _VarintDecoder() but decodes signed values."""
+
+  signbit = 1 << (bits - 1)
+  mask = (1 << bits) - 1
+
+  def DecodeVarint(buffer, pos):
+    result = 0
+    shift = 0
+    while 1:
+      b = six.indexbytes(buffer, pos)
+      result |= ((b & 0x7f) << shift)
+      pos += 1
+      if not (b & 0x80):
+        result &= mask
+        result = (result ^ signbit) - signbit
+        result = result_type(result)
+        return (result, pos)
+      shift += 7
+      if shift >= 64:
+        raise _DecodeError('Too many bytes when decoding varint.')
+  return DecodeVarint
+
+# We force 32-bit values to int and 64-bit values to long to make
+# alternate implementations where the distinction is more significant
+# (e.g. the C++ implementation) simpler.
+
+_DecodeVarint = _VarintDecoder((1 << 64) - 1, long)
+_DecodeSignedVarint = _SignedVarintDecoder(64, long)
+
+# Use these versions for values which must be limited to 32 bits.
+_DecodeVarint32 = _VarintDecoder((1 << 32) - 1, int)
+_DecodeSignedVarint32 = _SignedVarintDecoder(32, int)
+
+
+def ReadTag(buffer, pos):
+  """Read a tag from the memoryview, and return a (tag_bytes, new_pos) tuple.
+
+  We return the raw bytes of the tag rather than decoding them. The raw
+  bytes can then be used to look up the proper decoder. This effectively allows
+  us to trade some work that would be done in pure-python (decoding a varint)
+  for work that is done in C (searching for a byte string in a hash table).
+  In a low-level language it would be much cheaper to decode the varint and
+  use that, but not in Python.
+
+  Args:
+    buffer: memoryview object of the encoded bytes
+    pos: int of the current position to start from
+
+  Returns:
+    Tuple[bytes, int] of the tag data and new position.
+  """
+  start = pos
+  while six.indexbytes(buffer, pos) & 0x80:
+    pos += 1
+  pos += 1
+
+  tag_bytes = buffer[start:pos].tobytes()
+  return tag_bytes, pos
+
+
+# --------------------------------------------------------------------
+
+
+def _SimpleDecoder(wire_type, decode_value):
+  """Return a constructor for a decoder for fields of a particular type.
+
+  Args:
+      wire_type: The field's wire type.
+      decode_value: A function which decodes an individual value, e.g. 
+ _DecodeVarint() + """ + + def SpecificDecoder(field_number, is_repeated, is_packed, key, new_default, + clear_if_default=False): + if is_packed: + local_DecodeVarint = _DecodeVarint + def DecodePackedField(buffer, pos, end, message, field_dict): + value = field_dict.get(key) + if value is None: + value = field_dict.setdefault(key, new_default(message)) + (endpoint, pos) = local_DecodeVarint(buffer, pos) + endpoint += pos + if endpoint > end: + raise _DecodeError('Truncated message.') + while pos < endpoint: + (element, pos) = decode_value(buffer, pos) + value.append(element) + if pos > endpoint: + del value[-1] # Discard corrupt value. + raise _DecodeError('Packed element was truncated.') + return pos + return DecodePackedField + elif is_repeated: + tag_bytes = encoder.TagBytes(field_number, wire_type) + tag_len = len(tag_bytes) + def DecodeRepeatedField(buffer, pos, end, message, field_dict): + value = field_dict.get(key) + if value is None: + value = field_dict.setdefault(key, new_default(message)) + while 1: + (element, new_pos) = decode_value(buffer, pos) + value.append(element) + # Predict that the next tag is another copy of the same repeated + # field. + pos = new_pos + tag_len + if buffer[new_pos:pos] != tag_bytes or new_pos >= end: + # Prediction failed. Return. + if new_pos > end: + raise _DecodeError('Truncated message.') + return new_pos + return DecodeRepeatedField + else: + def DecodeField(buffer, pos, end, message, field_dict): + (new_value, pos) = decode_value(buffer, pos) + if pos > end: + raise _DecodeError('Truncated message.') + if clear_if_default and not new_value: + field_dict.pop(key, None) + else: + field_dict[key] = new_value + return pos + return DecodeField + + return SpecificDecoder + + +def _ModifiedDecoder(wire_type, decode_value, modify_value): + """Like SimpleDecoder but additionally invokes modify_value on every value + before storing it. Usually modify_value is ZigZagDecode. + """ + + # Reusing _SimpleDecoder is slightly slower than copying a bunch of code, but + # not enough to make a significant difference. + + def InnerDecode(buffer, pos): + (result, new_pos) = decode_value(buffer, pos) + return (modify_value(result), new_pos) + return _SimpleDecoder(wire_type, InnerDecode) + + +def _StructPackDecoder(wire_type, format): + """Return a constructor for a decoder for a fixed-width field. + + Args: + wire_type: The field's wire type. + format: The format string to pass to struct.unpack(). + """ + + value_size = struct.calcsize(format) + local_unpack = struct.unpack + + # Reusing _SimpleDecoder is slightly slower than copying a bunch of code, but + # not enough to make a significant difference. + + # Note that we expect someone up-stack to catch struct.error and convert + # it to _DecodeError -- this way we don't have to set up exception- + # handling blocks every time we parse one value. + + def InnerDecode(buffer, pos): + new_pos = pos + value_size + result = local_unpack(format, buffer[pos:new_pos])[0] + return (result, new_pos) + return _SimpleDecoder(wire_type, InnerDecode) + + +def _FloatDecoder(): + """Returns a decoder for a float field. + + This code works around a bug in struct.unpack for non-finite 32-bit + floating-point values. + """ + + local_unpack = struct.unpack + + def InnerDecode(buffer, pos): + """Decode serialized float to a float and new position. + + Args: + buffer: memoryview of the serialized bytes + pos: int, position in the memory view to start at. 
+
+    Returns:
+      Tuple[float, int] of the deserialized float value and new position
+      in the serialized data.
+    """
+    # We expect a 32-bit value in little-endian byte order.  Bit 1 is the sign
+    # bit, bits 2-9 represent the exponent, and bits 10-32 are the significand.
+    new_pos = pos + 4
+    float_bytes = buffer[pos:new_pos].tobytes()
+
+    # If this value has all its exponent bits set, then it's non-finite.
+    # In Python 2.4, struct.unpack will convert it to a finite 64-bit value.
+    # To avoid that, we parse it specially.
+    if (float_bytes[3:4] in b'\x7F\xFF' and float_bytes[2:3] >= b'\x80'):
+      # If at least one significand bit is set...
+      if float_bytes[0:3] != b'\x00\x00\x80':
+        return (_NAN, new_pos)
+      # If sign bit is set...
+      if float_bytes[3:4] == b'\xFF':
+        return (_NEG_INF, new_pos)
+      return (_POS_INF, new_pos)
+
+    # Note that we expect someone up-stack to catch struct.error and convert
+    # it to _DecodeError -- this way we don't have to set up exception-
+    # handling blocks every time we parse one value.
+    result = local_unpack('<f', float_bytes)[0]
+    return (result, new_pos)
+  return _SimpleDecoder(wire_format.WIRETYPE_FIXED32, InnerDecode)
+
+
+def _DoubleDecoder():
+  """Returns a decoder for a double field.
+
+  This code works around a bug in struct.unpack for not-a-number.
+  """
+
+  local_unpack = struct.unpack
+
+  def InnerDecode(buffer, pos):
+    """Decode serialized double to a double and new position.
+
+    Args:
+      buffer: memoryview of the serialized bytes.
+      pos: int, position in the memory view to start at.
+
+    Returns:
+      Tuple[float, int] of the decoded double value and new position
+      in the serialized data.
+    """
+    # We expect a 64-bit value in little-endian byte order.  Bit 1 is the sign
+    # bit, bits 2-12 represent the exponent, and bits 13-64 are the significand.
+    new_pos = pos + 8
+    double_bytes = buffer[pos:new_pos].tobytes()
+
+    # If this value has all its exponent bits set and at least one significand
+    # bit set, it's not a number.  In Python 2.4, struct.unpack will treat it
+    # as inf or -inf.  To avoid that, we treat it specially.
+    if ((double_bytes[7:8] in b'\x7F\xFF')
+        and (double_bytes[6:7] >= b'\xF0')
+        and (double_bytes[0:7] != b'\x00\x00\x00\x00\x00\x00\xF0')):
+      return (_NAN, new_pos)
+
+    # Note that we expect someone up-stack to catch struct.error and convert
+    # it to _DecodeError -- this way we don't have to set up exception-
+    # handling blocks every time we parse one value.
+    result = local_unpack('<d', double_bytes)[0]
+    return (result, new_pos)
+  return _SimpleDecoder(wire_format.WIRETYPE_FIXED64, InnerDecode)
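+
+
+# Illustrative sketch (a hypothetical helper, not part of the upstream
+# module): the little-endian IEEE-754 byte patterns the two decoders above
+# special-case.  struct.pack is simply the inverse of the unpacking they do.
+def _IllustrateNonFiniteEncodings():
+  """Show the serialized forms of inf, -inf and NaN for float fields."""
+  assert struct.pack('<f', _POS_INF) == b'\x00\x00\x80\x7f'
+  assert struct.pack('<f', _NEG_INF) == b'\x00\x00\x80\xff'
+  # A NaN has every exponent bit set and at least one significand bit set,
+  # which is exactly the test InnerDecode performs on the raw bytes.
+  nan_bytes = struct.pack('<f', _NAN)
+  assert nan_bytes[3:4] in b'\x7f\xff' and nan_bytes[2:3] >= b'\x80'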
+ """ + value = field_dict.get(key) + if value is None: + value = field_dict.setdefault(key, new_default(message)) + while 1: + (element, new_pos) = _DecodeSignedVarint32(buffer, pos) + # pylint: disable=protected-access + if element in enum_type.values_by_number: + value.append(element) + else: + if not message._unknown_fields: + message._unknown_fields = [] + message._unknown_fields.append( + (tag_bytes, buffer[pos:new_pos].tobytes())) + if message._unknown_field_set is None: + message._unknown_field_set = containers.UnknownFieldSet() + message._unknown_field_set._add( + field_number, wire_format.WIRETYPE_VARINT, element) + # pylint: enable=protected-access + # Predict that the next tag is another copy of the same repeated + # field. + pos = new_pos + tag_len + if buffer[new_pos:pos] != tag_bytes or new_pos >= end: + # Prediction failed. Return. + if new_pos > end: + raise _DecodeError('Truncated message.') + return new_pos + return DecodeRepeatedField + else: + def DecodeField(buffer, pos, end, message, field_dict): + """Decode serialized repeated enum to its value and a new position. + + Args: + buffer: memoryview of the serialized bytes. + pos: int, position in the memory view to start at. + end: int, end position of serialized data + message: Message object to store unknown fields in + field_dict: Map[Descriptor, Any] to store decoded values in. + + Returns: + int, new position in serialized data. + """ + value_start_pos = pos + (enum_value, pos) = _DecodeSignedVarint32(buffer, pos) + if pos > end: + raise _DecodeError('Truncated message.') + if clear_if_default and not enum_value: + field_dict.pop(key, None) + return pos + # pylint: disable=protected-access + if enum_value in enum_type.values_by_number: + field_dict[key] = enum_value + else: + if not message._unknown_fields: + message._unknown_fields = [] + tag_bytes = encoder.TagBytes(field_number, + wire_format.WIRETYPE_VARINT) + message._unknown_fields.append( + (tag_bytes, buffer[value_start_pos:pos].tobytes())) + if message._unknown_field_set is None: + message._unknown_field_set = containers.UnknownFieldSet() + message._unknown_field_set._add( + field_number, wire_format.WIRETYPE_VARINT, enum_value) + # pylint: enable=protected-access + return pos + return DecodeField + + +# -------------------------------------------------------------------- + + +Int32Decoder = _SimpleDecoder( + wire_format.WIRETYPE_VARINT, _DecodeSignedVarint32) + +Int64Decoder = _SimpleDecoder( + wire_format.WIRETYPE_VARINT, _DecodeSignedVarint) + +UInt32Decoder = _SimpleDecoder(wire_format.WIRETYPE_VARINT, _DecodeVarint32) +UInt64Decoder = _SimpleDecoder(wire_format.WIRETYPE_VARINT, _DecodeVarint) + +SInt32Decoder = _ModifiedDecoder( + wire_format.WIRETYPE_VARINT, _DecodeVarint32, wire_format.ZigZagDecode) +SInt64Decoder = _ModifiedDecoder( + wire_format.WIRETYPE_VARINT, _DecodeVarint, wire_format.ZigZagDecode) + +# Note that Python conveniently guarantees that when using the '<' prefix on +# formats, they will also have the same size across all platforms (as opposed +# to without the prefix, where their sizes depend on the C compiler's basic +# type sizes). +Fixed32Decoder = _StructPackDecoder(wire_format.WIRETYPE_FIXED32, ' _UCS2_MAXUNICODE: + # Only do the check for python2 ucs4 when is_strict_utf8 enabled + if _SURROGATE_PATTERN.search(value): + reason = ('String field %s contains invalid UTF-8 data when parsing' + 'a protocol buffer: surrogates not allowed. 
+
+
+def BytesDecoder(field_number, is_repeated, is_packed, key, new_default,
+                 clear_if_default=False):
+  """Returns a decoder for a bytes field."""
+
+  local_DecodeVarint = _DecodeVarint
+
+  assert not is_packed
+  if is_repeated:
+    tag_bytes = encoder.TagBytes(field_number,
+                                 wire_format.WIRETYPE_LENGTH_DELIMITED)
+    tag_len = len(tag_bytes)
+    def DecodeRepeatedField(buffer, pos, end, message, field_dict):
+      value = field_dict.get(key)
+      if value is None:
+        value = field_dict.setdefault(key, new_default(message))
+      while 1:
+        (size, pos) = local_DecodeVarint(buffer, pos)
+        new_pos = pos + size
+        if new_pos > end:
+          raise _DecodeError('Truncated string.')
+        value.append(buffer[pos:new_pos].tobytes())
+        # Predict that the next tag is another copy of the same repeated field.
+        pos = new_pos + tag_len
+        if buffer[new_pos:pos] != tag_bytes or new_pos == end:
+          # Prediction failed.  Return.
+          return new_pos
+    return DecodeRepeatedField
+  else:
+    def DecodeField(buffer, pos, end, message, field_dict):
+      (size, pos) = local_DecodeVarint(buffer, pos)
+      new_pos = pos + size
+      if new_pos > end:
+        raise _DecodeError('Truncated string.')
+      if clear_if_default and not size:
+        field_dict.pop(key, None)
+      else:
+        field_dict[key] = buffer[pos:new_pos].tobytes()
+      return new_pos
+    return DecodeField
+
+
+def GroupDecoder(field_number, is_repeated, is_packed, key, new_default):
+  """Returns a decoder for a group field."""
+
+  end_tag_bytes = encoder.TagBytes(field_number,
+                                   wire_format.WIRETYPE_END_GROUP)
+  end_tag_len = len(end_tag_bytes)
+
+  assert not is_packed
+  if is_repeated:
+    tag_bytes = encoder.TagBytes(field_number,
+                                 wire_format.WIRETYPE_START_GROUP)
+    tag_len = len(tag_bytes)
+    def DecodeRepeatedField(buffer, pos, end, message, field_dict):
+      value = field_dict.get(key)
+      if value is None:
+        value = field_dict.setdefault(key, new_default(message))
+      while 1:
+        value = field_dict.get(key)
+        if value is None:
+          value = field_dict.setdefault(key, new_default(message))
+        # Read sub-message.
+        pos = value.add()._InternalParse(buffer, pos, end)
+        # Read end tag.
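+        # Groups carry no length prefix: the nested parse above stops at this
+        # field's END_GROUP tag, and the check below verifies the tag really
+        # is there before skipping past it.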
+ new_pos = pos+end_tag_len + if buffer[pos:new_pos] != end_tag_bytes or new_pos > end: + raise _DecodeError('Missing group end tag.') + # Predict that the next tag is another copy of the same repeated field. + pos = new_pos + tag_len + if buffer[new_pos:pos] != tag_bytes or new_pos == end: + # Prediction failed. Return. + return new_pos + return DecodeRepeatedField + else: + def DecodeField(buffer, pos, end, message, field_dict): + value = field_dict.get(key) + if value is None: + value = field_dict.setdefault(key, new_default(message)) + # Read sub-message. + pos = value._InternalParse(buffer, pos, end) + # Read end tag. + new_pos = pos+end_tag_len + if buffer[pos:new_pos] != end_tag_bytes or new_pos > end: + raise _DecodeError('Missing group end tag.') + return new_pos + return DecodeField + + +def MessageDecoder(field_number, is_repeated, is_packed, key, new_default): + """Returns a decoder for a message field.""" + + local_DecodeVarint = _DecodeVarint + + assert not is_packed + if is_repeated: + tag_bytes = encoder.TagBytes(field_number, + wire_format.WIRETYPE_LENGTH_DELIMITED) + tag_len = len(tag_bytes) + def DecodeRepeatedField(buffer, pos, end, message, field_dict): + value = field_dict.get(key) + if value is None: + value = field_dict.setdefault(key, new_default(message)) + while 1: + # Read length. + (size, pos) = local_DecodeVarint(buffer, pos) + new_pos = pos + size + if new_pos > end: + raise _DecodeError('Truncated message.') + # Read sub-message. + if value.add()._InternalParse(buffer, pos, new_pos) != new_pos: + # The only reason _InternalParse would return early is if it + # encountered an end-group tag. + raise _DecodeError('Unexpected end-group tag.') + # Predict that the next tag is another copy of the same repeated field. + pos = new_pos + tag_len + if buffer[new_pos:pos] != tag_bytes or new_pos == end: + # Prediction failed. Return. + return new_pos + return DecodeRepeatedField + else: + def DecodeField(buffer, pos, end, message, field_dict): + value = field_dict.get(key) + if value is None: + value = field_dict.setdefault(key, new_default(message)) + # Read length. + (size, pos) = local_DecodeVarint(buffer, pos) + new_pos = pos + size + if new_pos > end: + raise _DecodeError('Truncated message.') + # Read sub-message. + if value._InternalParse(buffer, pos, new_pos) != new_pos: + # The only reason _InternalParse would return early is if it encountered + # an end-group tag. + raise _DecodeError('Unexpected end-group tag.') + return new_pos + return DecodeField + + +# -------------------------------------------------------------------- + +MESSAGE_SET_ITEM_TAG = encoder.TagBytes(1, wire_format.WIRETYPE_START_GROUP) + +def MessageSetItemDecoder(descriptor): + """Returns a decoder for a MessageSet item. + + The parameter is the message Descriptor. + + The message set message looks like this: + message MessageSet { + repeated group Item = 1 { + required int32 type_id = 2; + required string message = 3; + } + } + """ + + type_id_tag_bytes = encoder.TagBytes(2, wire_format.WIRETYPE_VARINT) + message_tag_bytes = encoder.TagBytes(3, wire_format.WIRETYPE_LENGTH_DELIMITED) + item_end_tag_bytes = encoder.TagBytes(1, wire_format.WIRETYPE_END_GROUP) + + local_ReadTag = ReadTag + local_DecodeVarint = _DecodeVarint + local_SkipField = SkipField + + def DecodeItem(buffer, pos, end, message, field_dict): + """Decode serialized message set to its value and new position. + + Args: + buffer: memoryview of the serialized bytes. + pos: int, position in the memory view to start at. 
+ end: int, end position of serialized data + message: Message object to store unknown fields in + field_dict: Map[Descriptor, Any] to store decoded values in. + + Returns: + int, new position in serialized data. + """ + message_set_item_start = pos + type_id = -1 + message_start = -1 + message_end = -1 + + # Technically, type_id and message can appear in any order, so we need + # a little loop here. + while 1: + (tag_bytes, pos) = local_ReadTag(buffer, pos) + if tag_bytes == type_id_tag_bytes: + (type_id, pos) = local_DecodeVarint(buffer, pos) + elif tag_bytes == message_tag_bytes: + (size, message_start) = local_DecodeVarint(buffer, pos) + pos = message_end = message_start + size + elif tag_bytes == item_end_tag_bytes: + break + else: + pos = SkipField(buffer, pos, end, tag_bytes) + if pos == -1: + raise _DecodeError('Missing group end tag.') + + if pos > end: + raise _DecodeError('Truncated message.') + + if type_id == -1: + raise _DecodeError('MessageSet item missing type_id.') + if message_start == -1: + raise _DecodeError('MessageSet item missing message.') + + extension = message.Extensions._FindExtensionByNumber(type_id) + # pylint: disable=protected-access + if extension is not None: + value = field_dict.get(extension) + if value is None: + message_type = extension.message_type + if not hasattr(message_type, '_concrete_class'): + # pylint: disable=protected-access + message._FACTORY.GetPrototype(message_type) + value = field_dict.setdefault( + extension, message_type._concrete_class()) + if value._InternalParse(buffer, message_start,message_end) != message_end: + # The only reason _InternalParse would return early is if it encountered + # an end-group tag. + raise _DecodeError('Unexpected end-group tag.') + else: + if not message._unknown_fields: + message._unknown_fields = [] + message._unknown_fields.append( + (MESSAGE_SET_ITEM_TAG, buffer[message_set_item_start:pos].tobytes())) + if message._unknown_field_set is None: + message._unknown_field_set = containers.UnknownFieldSet() + message._unknown_field_set._add( + type_id, + wire_format.WIRETYPE_LENGTH_DELIMITED, + buffer[message_start:message_end].tobytes()) + # pylint: enable=protected-access + + return pos + + return DecodeItem + +# -------------------------------------------------------------------- + +def MapDecoder(field_descriptor, new_default, is_message_map): + """Returns a decoder for a map field.""" + + key = field_descriptor + tag_bytes = encoder.TagBytes(field_descriptor.number, + wire_format.WIRETYPE_LENGTH_DELIMITED) + tag_len = len(tag_bytes) + local_DecodeVarint = _DecodeVarint + # Can't read _concrete_class yet; might not be initialized. + message_type = field_descriptor.message_type + + def DecodeMap(buffer, pos, end, message, field_dict): + submsg = message_type._concrete_class() + value = field_dict.get(key) + if value is None: + value = field_dict.setdefault(key, new_default(message)) + while 1: + # Read length. + (size, pos) = local_DecodeVarint(buffer, pos) + new_pos = pos + size + if new_pos > end: + raise _DecodeError('Truncated message.') + # Read sub-message. + submsg.Clear() + if submsg._InternalParse(buffer, pos, new_pos) != new_pos: + # The only reason _InternalParse would return early is if it + # encountered an end-group tag. + raise _DecodeError('Unexpected end-group tag.') + + if is_message_map: + value[submsg.key].CopyFrom(submsg.value) + else: + value[submsg.key] = submsg.value + + # Predict that the next tag is another copy of the same repeated field. 
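+      # Comparing the upcoming bytes against the precomputed tag is cheaper
+      # than decoding a tag and looking up its decoder, so consecutive map
+      # entries are consumed here without another trip through ReadTag.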
+      pos = new_pos + tag_len
+      if buffer[new_pos:pos] != tag_bytes or new_pos == end:
+        # Prediction failed.  Return.
+        return new_pos
+
+  return DecodeMap
+
+# --------------------------------------------------------------------
+# Optimization is not as heavy here because calls to SkipField() are rare,
+# except for handling end-group tags.
+
+def _SkipVarint(buffer, pos, end):
+  """Skip a varint value.  Returns the new position."""
+  # Previously ord(buffer[pos]) raised IndexError when pos is out of range.
+  # With this code, ord(b'') raises TypeError.  Both are handled in
+  # python_message.py to generate a 'Truncated message' error.
+  while ord(buffer[pos:pos+1].tobytes()) & 0x80:
+    pos += 1
+  pos += 1
+  if pos > end:
+    raise _DecodeError('Truncated message.')
+  return pos
+
+def _SkipFixed64(buffer, pos, end):
+  """Skip a fixed64 value.  Returns the new position."""
+
+  pos += 8
+  if pos > end:
+    raise _DecodeError('Truncated message.')
+  return pos
+
+
+def _DecodeFixed64(buffer, pos):
+  """Decode a fixed64."""
+  new_pos = pos + 8
+  return (struct.unpack('<Q', buffer[pos:new_pos])[0], new_pos)
+
+
+def _SkipLengthDelimited(buffer, pos, end):
+  """Skip a length-delimited value.  Returns the new position."""
+
+  (size, pos) = _DecodeVarint(buffer, pos)
+  pos += size
+  if pos > end:
+    raise _DecodeError('Truncated message.')
+  return pos
+
+
+def _SkipGroup(buffer, pos, end):
+  """Skip sub-group.  Returns the new position."""
+
+  while 1:
+    (tag_bytes, pos) = ReadTag(buffer, pos)
+    new_pos = SkipField(buffer, pos, end, tag_bytes)
+    if new_pos == -1:
+      return pos
+    pos = new_pos
+
+
+def _DecodeUnknownFieldSet(buffer, pos, end_pos=None):
+  """Decode UnknownFieldSet.  Returns the UnknownFieldSet and new position."""
+
+  unknown_field_set = containers.UnknownFieldSet()
+  while end_pos is None or pos < end_pos:
+    (tag_bytes, pos) = ReadTag(buffer, pos)
+    (tag, _) = _DecodeVarint(tag_bytes, 0)
+    field_number, wire_type = wire_format.UnpackTag(tag)
+    if wire_type == wire_format.WIRETYPE_END_GROUP:
+      break
+    (data, pos) = _DecodeUnknownField(buffer, pos, wire_type)
+    # pylint: disable=protected-access
+    unknown_field_set._add(field_number, wire_type, data)
+
+  return (unknown_field_set, pos)
+
+
+def _DecodeUnknownField(buffer, pos, wire_type):
+  """Decode an unknown field.  Returns the UnknownField and new position."""
+
+  if wire_type == wire_format.WIRETYPE_VARINT:
+    (data, pos) = _DecodeVarint(buffer, pos)
+  elif wire_type == wire_format.WIRETYPE_FIXED64:
+    (data, pos) = _DecodeFixed64(buffer, pos)
+  elif wire_type == wire_format.WIRETYPE_FIXED32:
+    (data, pos) = _DecodeFixed32(buffer, pos)
+  elif wire_type == wire_format.WIRETYPE_LENGTH_DELIMITED:
+    (size, pos) = _DecodeVarint(buffer, pos)
+    data = buffer[pos:pos+size].tobytes()
+    pos += size
+  elif wire_type == wire_format.WIRETYPE_START_GROUP:
+    (data, pos) = _DecodeUnknownFieldSet(buffer, pos)
+  elif wire_type == wire_format.WIRETYPE_END_GROUP:
+    return (0, -1)
+  else:
+    raise _DecodeError('Wrong wire type in tag.')
+
+  return (data, pos)
+
+
+def _EndGroup(buffer, pos, end):
+  """Skipping an END_GROUP tag returns -1 to tell the parent loop to break."""
+
+  return -1
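+
+
+# Illustrative sketch (a hypothetical helper, not part of the upstream
+# module): a tag packs the field number and wire type into one varint,
+# tag = (field_number << 3) | wire_type, so the skipper table built below
+# can dispatch on the low three bits alone.
+def _IllustrateTagLayout():
+  """Show how field 1 with the varint wire type becomes tag 0x08."""
+  tag = wire_format.PackTag(1, wire_format.WIRETYPE_VARINT)
+  assert tag == 0x08
+  assert wire_format.UnpackTag(tag) == (1, wire_format.WIRETYPE_VARINT)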
+
+
+def _SkipFixed32(buffer, pos, end):
+  """Skip a fixed32 value.  Returns the new position."""
+
+  pos += 4
+  if pos > end:
+    raise _DecodeError('Truncated message.')
+  return pos
+
+
+def _DecodeFixed32(buffer, pos):
+  """Decode a fixed32."""
+
+  new_pos = pos + 4
+  return (struct.unpack('<I', buffer[pos:new_pos])[0], new_pos)
+
+
+def _RaiseInvalidWireType(buffer, pos, end):
+  """Skip function for unknown wire types.  Raises an exception."""
+
+  raise _DecodeError('Tag had invalid wire type.')
+
+
+def _FieldSkipper():
+  """Constructs the SkipField function."""
+
+  WIRETYPE_TO_SKIPPER = [
+      _SkipVarint,
+      _SkipFixed64,
+      _SkipLengthDelimited,
+      _SkipGroup,
+      _EndGroup,
+      _SkipFixed32,
+      _RaiseInvalidWireType,
+      _RaiseInvalidWireType,
+      ]
+
+  wiretype_mask = wire_format.TAG_TYPE_MASK
+
+  def SkipField(buffer, pos, end, tag_bytes):
+    """Skips a field with the specified tag.
+
+    |pos| should point to the byte immediately after the tag.
+
+    Returns:
+        The new position (after the tag value), or -1 if the tag is an
+        end-group tag (in which case the calling loop should break).
+    """
+    # The wire type is always in the first byte since varints are little-endian.
+    wire_type = ord(tag_bytes[0:1]) & wiretype_mask
+    return WIRETYPE_TO_SKIPPER[wire_type](buffer, pos, end)
+
+  return SkipField
+
+
+SkipField = _FieldSkipper()
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/internal/encoder.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/internal/encoder.py
new file mode 100644
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/internal/encoder.py
+"""Code for encoding protocol message primitives.
+
+Contains the logic for encoding every logical protocol field type
+into one of the 5 physical wire types.
+"""
+
+import struct
+
+import six
+
+from google.protobuf.internal import wire_format
+
+
+def _VarintSize(value):
+  """Compute the size of a varint value."""
+  if value <= 0x7f: return 1
+  if value <= 0x3fff: return 2
+  if value <= 0x1fffff: return 3
+  if value <= 0xfffffff: return 4
+  if value <= 0x7ffffffff: return 5
+  if value <= 0x3ffffffffff: return 6
+  if value <= 0x1ffffffffffff: return 7
+  if value <= 0xffffffffffffff: return 8
+  if value <= 0x7fffffffffffffff: return 9
+  return 10
+
+
+def _SignedVarintSize(value):
+  """Compute the size of a signed varint value."""
+  if value < 0: return 10
+  if value <= 0x7f: return 1
+  if value <= 0x3fff: return 2
+  if value <= 0x1fffff: return 3
+  if value <= 0xfffffff: return 4
+  if value <= 0x7ffffffff: return 5
+  if value <= 0x3ffffffffff: return 6
+  if value <= 0x1ffffffffffff: return 7
+  if value <= 0xffffffffffffff: return 8
+  if value <= 0x7fffffffffffffff: return 9
+  return 10
+
+
+# ====================================================================
+# Encoders!
+
+
+def _VarintEncoder():
+  """Return an encoder for a basic varint value (does not include tag)."""
+
+  local_int2byte = six.int2byte
+  def EncodeVarint(write, value, unused_deterministic=None):
+    bits = value & 0x7f
+    value >>= 7
+    while value:
+      write(local_int2byte(0x80|bits))
+      bits = value & 0x7f
+      value >>= 7
+    return write(local_int2byte(bits))
+
+  return EncodeVarint
+
+
+def _SignedVarintEncoder():
+  """Return an encoder for a basic signed varint value (does not include
+  tag)."""
+
+  local_int2byte = six.int2byte
+  def EncodeSignedVarint(write, value, unused_deterministic=None):
+    if value < 0:
+      value += (1 << 64)
+    bits = value & 0x7f
+    value >>= 7
+    while value:
+      write(local_int2byte(0x80|bits))
+      bits = value & 0x7f
+      value >>= 7
+    return write(local_int2byte(bits))
+
+  return EncodeSignedVarint
+
+
+_EncodeVarint = _VarintEncoder()
+_EncodeSignedVarint = _SignedVarintEncoder()
+
+
+def _VarintBytes(value):
+  """Encode the given integer as a varint and return the bytes.  This is only
+  called at startup time so it doesn't need to be fast."""
+
+  pieces = []
+  _EncodeVarint(pieces.append, value, True)
+  return b"".join(pieces)
+
+
+def TagBytes(field_number, wire_type):
+  """Encode the given tag and return the bytes.  Only called at startup."""
+
+  return six.binary_type(
+      _VarintBytes(wire_format.PackTag(field_number, wire_type)))
+
+# --------------------------------------------------------------------
+# As with sizers (see above), we have a number of common encoder
+# implementations.
+
+
+def _SimpleEncoder(wire_type, encode_value, compute_value_size):
+  """Return a constructor for an encoder for fields of a particular type.
+
+  Args:
+      wire_type:  The field's wire type, for encoding tags.
+      encode_value:  A function which encodes an individual value, e.g.
+        _EncodeVarint().
+      compute_value_size:  A function which computes the size of an individual
+        value, e.g. _VarintSize().
+  """
+
+  def SpecificEncoder(field_number, is_repeated, is_packed):
+    if is_packed:
+      tag_bytes = TagBytes(field_number, wire_format.WIRETYPE_LENGTH_DELIMITED)
+      local_EncodeVarint = _EncodeVarint
+      def EncodePackedField(write, value, deterministic):
+        write(tag_bytes)
+        size = 0
+        for element in value:
+          size += compute_value_size(element)
+        local_EncodeVarint(write, size, deterministic)
+        for element in value:
+          encode_value(write, element, deterministic)
+      return EncodePackedField
+    elif is_repeated:
+      tag_bytes = TagBytes(field_number, wire_type)
+      def EncodeRepeatedField(write, value, deterministic):
+        for element in value:
+          write(tag_bytes)
+          encode_value(write, element, deterministic)
+      return EncodeRepeatedField
+    else:
+      tag_bytes = TagBytes(field_number, wire_type)
+      def EncodeField(write, value, deterministic):
+        write(tag_bytes)
+        return encode_value(write, value, deterministic)
+      return EncodeField
+
+  return SpecificEncoder
+
+
+def _ModifiedEncoder(wire_type, encode_value, compute_value_size, modify_value):
+  """Like SimpleEncoder but additionally invokes modify_value on every value
+  before passing it to encode_value.
Usually modify_value is ZigZagEncode.""" + + def SpecificEncoder(field_number, is_repeated, is_packed): + if is_packed: + tag_bytes = TagBytes(field_number, wire_format.WIRETYPE_LENGTH_DELIMITED) + local_EncodeVarint = _EncodeVarint + def EncodePackedField(write, value, deterministic): + write(tag_bytes) + size = 0 + for element in value: + size += compute_value_size(modify_value(element)) + local_EncodeVarint(write, size, deterministic) + for element in value: + encode_value(write, modify_value(element), deterministic) + return EncodePackedField + elif is_repeated: + tag_bytes = TagBytes(field_number, wire_type) + def EncodeRepeatedField(write, value, deterministic): + for element in value: + write(tag_bytes) + encode_value(write, modify_value(element), deterministic) + return EncodeRepeatedField + else: + tag_bytes = TagBytes(field_number, wire_type) + def EncodeField(write, value, deterministic): + write(tag_bytes) + return encode_value(write, modify_value(value), deterministic) + return EncodeField + + return SpecificEncoder + + +def _StructPackEncoder(wire_type, format): + """Return a constructor for an encoder for a fixed-width field. + + Args: + wire_type: The field's wire type, for encoding tags. + format: The format string to pass to struct.pack(). + """ + + value_size = struct.calcsize(format) + + def SpecificEncoder(field_number, is_repeated, is_packed): + local_struct_pack = struct.pack + if is_packed: + tag_bytes = TagBytes(field_number, wire_format.WIRETYPE_LENGTH_DELIMITED) + local_EncodeVarint = _EncodeVarint + def EncodePackedField(write, value, deterministic): + write(tag_bytes) + local_EncodeVarint(write, len(value) * value_size, deterministic) + for element in value: + write(local_struct_pack(format, element)) + return EncodePackedField + elif is_repeated: + tag_bytes = TagBytes(field_number, wire_type) + def EncodeRepeatedField(write, value, unused_deterministic=None): + for element in value: + write(tag_bytes) + write(local_struct_pack(format, element)) + return EncodeRepeatedField + else: + tag_bytes = TagBytes(field_number, wire_type) + def EncodeField(write, value, unused_deterministic=None): + write(tag_bytes) + return write(local_struct_pack(format, value)) + return EncodeField + + return SpecificEncoder + + +def _FloatingPointEncoder(wire_type, format): + """Return a constructor for an encoder for float fields. + + This is like StructPackEncoder, but catches errors that may be due to + passing non-finite floating-point values to struct.pack, and makes a + second attempt to encode those values. + + Args: + wire_type: The field's wire type, for encoding tags. + format: The format string to pass to struct.pack(). + """ + + value_size = struct.calcsize(format) + if value_size == 4: + def EncodeNonFiniteOrRaise(write, value): + # Remember that the serialized form uses little-endian byte order. 
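+      # For example, float('inf') serializes to b'\x00\x00\x80\x7f': a zero
+      # significand in the low three bytes, then the exponent bits (all ones)
+      # split across 0x80 and 0x7f, with the sign bit clear.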
+      if value == _POS_INF:
+        write(b'\x00\x00\x80\x7F')
+      elif value == _NEG_INF:
+        write(b'\x00\x00\x80\xFF')
+      elif value != value:          # NaN
+        write(b'\x00\x00\xC0\x7F')
+      else:
+        raise
+  elif value_size == 8:
+    def EncodeNonFiniteOrRaise(write, value):
+      if value == _POS_INF:
+        write(b'\x00\x00\x00\x00\x00\x00\xF0\x7F')
+      elif value == _NEG_INF:
+        write(b'\x00\x00\x00\x00\x00\x00\xF0\xFF')
+      elif value != value:          # NaN
+        write(b'\x00\x00\x00\x00\x00\x00\xF8\x7F')
+      else:
+        raise
+  else:
+    raise ValueError('Can\'t encode floating-point values that are '
+                     '%d bytes long (only 4 or 8)' % value_size)
+
+  def SpecificEncoder(field_number, is_repeated, is_packed):
+    local_struct_pack = struct.pack
+    if is_packed:
+      tag_bytes = TagBytes(field_number, wire_format.WIRETYPE_LENGTH_DELIMITED)
+      local_EncodeVarint = _EncodeVarint
+      def EncodePackedField(write, value, deterministic):
+        write(tag_bytes)
+        local_EncodeVarint(write, len(value) * value_size, deterministic)
+        for element in value:
+          # This try/except block is going to be faster than any code that
+          # we could write to check whether element is finite.
+          try:
+            write(local_struct_pack(format, element))
+          except SystemError:
+            EncodeNonFiniteOrRaise(write, element)
+      return EncodePackedField
+    elif is_repeated:
+      tag_bytes = TagBytes(field_number, wire_type)
+      def EncodeRepeatedField(write, value, unused_deterministic=None):
+        for element in value:
+          write(tag_bytes)
+          try:
+            write(local_struct_pack(format, element))
+          except SystemError:
+            EncodeNonFiniteOrRaise(write, element)
+      return EncodeRepeatedField
+    else:
+      tag_bytes = TagBytes(field_number, wire_type)
+      def EncodeField(write, value, unused_deterministic=None):
+        write(tag_bytes)
+        try:
+          write(local_struct_pack(format, value))
+        except SystemError:
+          EncodeNonFiniteOrRaise(write, value)
+      return EncodeField
+
+  return SpecificEncoder
+
+
+# ====================================================================
+# Here we declare an encoder constructor for each field type.  These work
+# very similarly to sizer constructors, described earlier.
+
+
+Int32Encoder = Int64Encoder = EnumEncoder = _SimpleEncoder(
+    wire_format.WIRETYPE_VARINT, _EncodeSignedVarint, _SignedVarintSize)
+
+UInt32Encoder = UInt64Encoder = _SimpleEncoder(
+    wire_format.WIRETYPE_VARINT, _EncodeVarint, _VarintSize)
+
+SInt32Encoder = SInt64Encoder = _ModifiedEncoder(
+    wire_format.WIRETYPE_VARINT, _EncodeVarint, _VarintSize,
+    wire_format.ZigZagEncode)
+
+# Note that Python conveniently guarantees that when using the '<' prefix on
+# formats, they will also have the same size across all platforms (as opposed
+# to without the prefix, where their sizes depend on the C compiler's basic
+# type sizes).
+Fixed32Encoder = _StructPackEncoder(wire_format.WIRETYPE_FIXED32, '<I')
+Fixed64Encoder = _StructPackEncoder(wire_format.WIRETYPE_FIXED64, '<Q')
+SFixed32Encoder = _StructPackEncoder(wire_format.WIRETYPE_FIXED32, '<i')
+SFixed64Encoder = _StructPackEncoder(wire_format.WIRETYPE_FIXED64, '<q')
+FloatEncoder = _FloatingPointEncoder(wire_format.WIRETYPE_FIXED32, '<f')
+DoubleEncoder = _FloatingPointEncoder(wire_format.WIRETYPE_FIXED64, '<d')
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/internal/python_message.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/internal/python_message.py
new file mode 100644
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/internal/python_message.py
+"""Contains a metaclass and helper functions used to create
+protocol message classes from Descriptor objects at runtime.
+"""
+
+from io import BytesIO
+import struct
+import sys
+import weakref
+
+import six
+
+from google.protobuf.internal import api_implementation
+from google.protobuf.internal import containers
+from google.protobuf.internal import decoder
+from google.protobuf.internal import encoder
+from google.protobuf.internal import enum_type_wrapper
+from google.protobuf.internal import extension_dict
+from google.protobuf.internal import message_listener as message_listener_mod
+from google.protobuf.internal import type_checkers
+from google.protobuf.internal import wire_format
+from google.protobuf import descriptor as descriptor_mod
+from google.protobuf import message as message_mod
+from google.protobuf import text_format
+
+_FieldDescriptor = descriptor_mod.FieldDescriptor
+_AnyFullTypeName = 'google.protobuf.Any'
+_ExtensionDict = extension_dict._ExtensionDict
+
+
+def _AddInitMethod(message_descriptor, cls):
+  """Adds an __init__ method to cls."""
+
+  def _GetIntegerEnumValue(enum_type, value):
+    """Convert a string or integer enum value to an integer.
+
+    If the value is a string, it is converted to the enum value in
+    enum_type with the same name.  If the value is not a string, it's
+    returned as-is.  (No conversion or bounds-checking is done.)
+    """
+    if isinstance(value, six.string_types):
+      try:
+        return enum_type.values_by_name[value].number
+      except KeyError:
+        raise ValueError('Enum type %s: unknown label "%s"' % (
+            enum_type.full_name, value))
+    return value
+
+  def init(self, **kwargs):
+    self._cached_byte_size = 0
+    self._cached_byte_size_dirty = len(kwargs) > 0
+    self._fields = {}
+    # Contains a mapping from oneof field descriptors to the descriptor
+    # of the currently set field in that oneof field.
+    self._oneofs = {}
+
+    # _unknown_fields is () when empty for efficiency, and will be turned into
+    # a list if fields are added.
+    self._unknown_fields = ()
+    # _unknown_field_set is None when empty for efficiency, and will be
+    # turned into UnknownFieldSet struct if fields are added.
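+    # Keeping unknown fields means a message parsed from a newer schema can
+    # be reserialized without silently dropping the fields this version of
+    # the schema does not know about.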
+ self._unknown_field_set = None # pylint: disable=protected-access + self._is_present_in_parent = False + self._listener = message_listener_mod.NullMessageListener() + self._listener_for_children = _Listener(self) + for field_name, field_value in kwargs.items(): + field = _GetFieldByName(message_descriptor, field_name) + if field is None: + raise TypeError('%s() got an unexpected keyword argument "%s"' % + (message_descriptor.name, field_name)) + if field_value is None: + # field=None is the same as no field at all. + continue + if field.label == _FieldDescriptor.LABEL_REPEATED: + copy = field._default_constructor(self) + if field.cpp_type == _FieldDescriptor.CPPTYPE_MESSAGE: # Composite + if _IsMapField(field): + if _IsMessageMapField(field): + for key in field_value: + copy[key].MergeFrom(field_value[key]) + else: + copy.update(field_value) + else: + for val in field_value: + if isinstance(val, dict): + copy.add(**val) + else: + copy.add().MergeFrom(val) + else: # Scalar + if field.cpp_type == _FieldDescriptor.CPPTYPE_ENUM: + field_value = [_GetIntegerEnumValue(field.enum_type, val) + for val in field_value] + copy.extend(field_value) + self._fields[field] = copy + elif field.cpp_type == _FieldDescriptor.CPPTYPE_MESSAGE: + copy = field._default_constructor(self) + new_val = field_value + if isinstance(field_value, dict): + new_val = field.message_type._concrete_class(**field_value) + try: + copy.MergeFrom(new_val) + except TypeError: + _ReraiseTypeErrorWithFieldName(message_descriptor.name, field_name) + self._fields[field] = copy + else: + if field.cpp_type == _FieldDescriptor.CPPTYPE_ENUM: + field_value = _GetIntegerEnumValue(field.enum_type, field_value) + try: + setattr(self, field_name, field_value) + except TypeError: + _ReraiseTypeErrorWithFieldName(message_descriptor.name, field_name) + + init.__module__ = None + init.__doc__ = None + cls.__init__ = init + + +def _GetFieldByName(message_descriptor, field_name): + """Returns a field descriptor by field name. + + Args: + message_descriptor: A Descriptor describing all fields in message. + field_name: The name of the field to retrieve. + Returns: + The field descriptor associated with the field name. + """ + try: + return message_descriptor.fields_by_name[field_name] + except KeyError: + raise ValueError('Protocol message %s has no "%s" field.' % + (message_descriptor.name, field_name)) + + +def _AddPropertiesForFields(descriptor, cls): + """Adds properties for all fields in this protocol message type.""" + for field in descriptor.fields: + _AddPropertiesForField(field, cls) + + if descriptor.is_extendable: + # _ExtensionDict is just an adaptor with no state so we allocate a new one + # every time it is accessed. + cls.Extensions = property(lambda self: _ExtensionDict(self)) + + +def _AddPropertiesForField(field, cls): + """Adds a public property for a protocol message field. + Clients can use this property to get and (in the case + of non-repeated scalar fields) directly set the value + of a protocol message field. + + Args: + field: A FieldDescriptor for this field. + cls: The class we're constructing. + """ + # Catch it if we add other types that we should + # handle specially here. 
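+  # (CPPTYPE_MESSAGE, currently 10, is the highest cpp type; the assert below
+  # trips if a new cpp type is ever added without updating this dispatch.)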
+ assert _FieldDescriptor.MAX_CPPTYPE == 10 + + constant_name = field.name.upper() + '_FIELD_NUMBER' + setattr(cls, constant_name, field.number) + + if field.label == _FieldDescriptor.LABEL_REPEATED: + _AddPropertiesForRepeatedField(field, cls) + elif field.cpp_type == _FieldDescriptor.CPPTYPE_MESSAGE: + _AddPropertiesForNonRepeatedCompositeField(field, cls) + else: + _AddPropertiesForNonRepeatedScalarField(field, cls) + + +class _FieldProperty(property): + __slots__ = ('DESCRIPTOR',) + + def __init__(self, descriptor, getter, setter, doc): + property.__init__(self, getter, setter, doc=doc) + self.DESCRIPTOR = descriptor + + +def _AddPropertiesForRepeatedField(field, cls): + """Adds a public property for a "repeated" protocol message field. Clients + can use this property to get the value of the field, which will be either a + RepeatedScalarFieldContainer or RepeatedCompositeFieldContainer (see + below). + + Note that when clients add values to these containers, we perform + type-checking in the case of repeated scalar fields, and we also set any + necessary "has" bits as a side-effect. + + Args: + field: A FieldDescriptor for this field. + cls: The class we're constructing. + """ + proto_field_name = field.name + property_name = _PropertyName(proto_field_name) + + def getter(self): + field_value = self._fields.get(field) + if field_value is None: + # Construct a new object to represent this field. + field_value = field._default_constructor(self) + + # Atomically check if another thread has preempted us and, if not, swap + # in the new object we just created. If someone has preempted us, we + # take that object and discard ours. + # WARNING: We are relying on setdefault() being atomic. This is true + # in CPython but we haven't investigated others. This warning appears + # in several other locations in this file. + field_value = self._fields.setdefault(field, field_value) + return field_value + getter.__module__ = None + getter.__doc__ = 'Getter for %s.' % proto_field_name + + # We define a setter just so we can throw an exception with a more + # helpful error message. + def setter(self, new_value): + raise AttributeError('Assignment not allowed to repeated field ' + '"%s" in protocol message object.' % proto_field_name) + + doc = 'Magic attribute generated for "%s" proto field.' % proto_field_name + setattr(cls, property_name, _FieldProperty(field, getter, setter, doc=doc)) + + +def _AddPropertiesForNonRepeatedScalarField(field, cls): + """Adds a public property for a nonrepeated, scalar protocol message field. + Clients can use this property to get and directly set the value of the field. + Note that when the client sets the value of a field by using this property, + all necessary "has" bits are set as a side-effect, and we also perform + type-checking. + + Args: + field: A FieldDescriptor for this field. + cls: The class we're constructing. + """ + proto_field_name = field.name + property_name = _PropertyName(proto_field_name) + type_checker = type_checkers.GetTypeChecker(field) + default_value = field.default_value + is_proto3 = field.containing_type.syntax == 'proto3' + + def getter(self): + # TODO(protobuf-team): This may be broken since there may not be + # default_value. Combine with has_default_value somehow. + return self._fields.get(field, default_value) + getter.__module__ = None + getter.__doc__ = 'Getter for %s.' 
% proto_field_name + + clear_when_set_to_default = is_proto3 and not field.containing_oneof + + def field_setter(self, new_value): + # pylint: disable=protected-access + # Testing the value for truthiness captures all of the proto3 defaults + # (0, 0.0, enum 0, and False). + try: + new_value = type_checker.CheckValue(new_value) + except TypeError as e: + raise TypeError( + 'Cannot set %s to %.1024r: %s' % (field.full_name, new_value, e)) + if clear_when_set_to_default and not new_value: + self._fields.pop(field, None) + else: + self._fields[field] = new_value + # Check _cached_byte_size_dirty inline to improve performance, since scalar + # setters are called frequently. + if not self._cached_byte_size_dirty: + self._Modified() + + if field.containing_oneof: + def setter(self, new_value): + field_setter(self, new_value) + self._UpdateOneofState(field) + else: + setter = field_setter + + setter.__module__ = None + setter.__doc__ = 'Setter for %s.' % proto_field_name + + # Add a property to encapsulate the getter/setter. + doc = 'Magic attribute generated for "%s" proto field.' % proto_field_name + setattr(cls, property_name, _FieldProperty(field, getter, setter, doc=doc)) + + +def _AddPropertiesForNonRepeatedCompositeField(field, cls): + """Adds a public property for a nonrepeated, composite protocol message field. + A composite field is a "group" or "message" field. + + Clients can use this property to get the value of the field, but cannot + assign to the property directly. + + Args: + field: A FieldDescriptor for this field. + cls: The class we're constructing. + """ + # TODO(robinson): Remove duplication with similar method + # for non-repeated scalars. + proto_field_name = field.name + property_name = _PropertyName(proto_field_name) + + def getter(self): + field_value = self._fields.get(field) + if field_value is None: + # Construct a new object to represent this field. + field_value = field._default_constructor(self) + + # Atomically check if another thread has preempted us and, if not, swap + # in the new object we just created. If someone has preempted us, we + # take that object and discard ours. + # WARNING: We are relying on setdefault() being atomic. This is true + # in CPython but we haven't investigated others. This warning appears + # in several other locations in this file. + field_value = self._fields.setdefault(field, field_value) + return field_value + getter.__module__ = None + getter.__doc__ = 'Getter for %s.' % proto_field_name + + # We define a setter just so we can throw an exception with a more + # helpful error message. + def setter(self, new_value): + raise AttributeError('Assignment not allowed to composite field ' + '"%s" in protocol message object.' % proto_field_name) + + # Add a property to encapsulate the getter. + doc = 'Magic attribute generated for "%s" proto field.' % proto_field_name + setattr(cls, property_name, _FieldProperty(field, getter, setter, doc=doc)) + + +def _AddPropertiesForExtensions(descriptor, cls): + """Adds properties for all fields in this protocol message type.""" + extensions = descriptor.extensions_by_name + for extension_name, extension_field in extensions.items(): + constant_name = extension_name.upper() + '_FIELD_NUMBER' + setattr(cls, constant_name, extension_field.number) + + # TODO(amauryfa): Migrate all users of these attributes to functions like + # pool.FindExtensionByNumber(descriptor). + if descriptor.file is not None: + # TODO(amauryfa): Use cls.MESSAGE_FACTORY.pool when available. 
+ pool = descriptor.file.pool + cls._extensions_by_number = pool._extensions_by_number[descriptor] + cls._extensions_by_name = pool._extensions_by_name[descriptor] + +def _AddStaticMethods(cls): + # TODO(robinson): This probably needs to be thread-safe(?) + def RegisterExtension(extension_handle): + extension_handle.containing_type = cls.DESCRIPTOR + # TODO(amauryfa): Use cls.MESSAGE_FACTORY.pool when available. + # pylint: disable=protected-access + cls.DESCRIPTOR.file.pool._AddExtensionDescriptor(extension_handle) + _AttachFieldHelpers(cls, extension_handle) + cls.RegisterExtension = staticmethod(RegisterExtension) + + def FromString(s): + message = cls() + message.MergeFromString(s) + return message + cls.FromString = staticmethod(FromString) + + +def _IsPresent(item): + """Given a (FieldDescriptor, value) tuple from _fields, return true if the + value should be included in the list returned by ListFields().""" + + if item[0].label == _FieldDescriptor.LABEL_REPEATED: + return bool(item[1]) + elif item[0].cpp_type == _FieldDescriptor.CPPTYPE_MESSAGE: + return item[1]._is_present_in_parent + else: + return True + + +def _AddListFieldsMethod(message_descriptor, cls): + """Helper for _AddMessageMethods().""" + + def ListFields(self): + all_fields = [item for item in self._fields.items() if _IsPresent(item)] + all_fields.sort(key = lambda item: item[0].number) + return all_fields + + cls.ListFields = ListFields + +_PROTO3_ERROR_TEMPLATE = \ + ('Protocol message %s has no non-repeated submessage field "%s" ' + 'nor marked as optional') +_PROTO2_ERROR_TEMPLATE = 'Protocol message %s has no non-repeated field "%s"' + +def _AddHasFieldMethod(message_descriptor, cls): + """Helper for _AddMessageMethods().""" + + is_proto3 = (message_descriptor.syntax == "proto3") + error_msg = _PROTO3_ERROR_TEMPLATE if is_proto3 else _PROTO2_ERROR_TEMPLATE + + hassable_fields = {} + for field in message_descriptor.fields: + if field.label == _FieldDescriptor.LABEL_REPEATED: + continue + # For proto3, only submessages and fields inside a oneof have presence. + if (is_proto3 and field.cpp_type != _FieldDescriptor.CPPTYPE_MESSAGE and + not field.containing_oneof): + continue + hassable_fields[field.name] = field + + # Has methods are supported for oneof descriptors. + for oneof in message_descriptor.oneofs: + hassable_fields[oneof.name] = oneof + + def HasField(self, field_name): + try: + field = hassable_fields[field_name] + except KeyError: + raise ValueError(error_msg % (message_descriptor.full_name, field_name)) + + if isinstance(field, descriptor_mod.OneofDescriptor): + try: + return HasField(self, self._oneofs[field].name) + except KeyError: + return False + else: + if field.cpp_type == _FieldDescriptor.CPPTYPE_MESSAGE: + value = self._fields.get(field) + return value is not None and value._is_present_in_parent + else: + return field in self._fields + + cls.HasField = HasField + + +def _AddClearFieldMethod(message_descriptor, cls): + """Helper for _AddMessageMethods().""" + def ClearField(self, field_name): + try: + field = message_descriptor.fields_by_name[field_name] + except KeyError: + try: + field = message_descriptor.oneofs_by_name[field_name] + if field in self._oneofs: + field = self._oneofs[field] + else: + return + except KeyError: + raise ValueError('Protocol message %s has no "%s" field.' % + (message_descriptor.name, field_name)) + + if field in self._fields: + # To match the C++ implementation, we need to invalidate iterators + # for map fields when ClearField() happens. 
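+      # Only the map containers define InvalidateIterators(); every other
+      # field value simply lacks the attribute, hence the hasattr() probe.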
+ if hasattr(self._fields[field], 'InvalidateIterators'): + self._fields[field].InvalidateIterators() + + # Note: If the field is a sub-message, its listener will still point + # at us. That's fine, because the worst than can happen is that it + # will call _Modified() and invalidate our byte size. Big deal. + del self._fields[field] + + if self._oneofs.get(field.containing_oneof, None) is field: + del self._oneofs[field.containing_oneof] + + # Always call _Modified() -- even if nothing was changed, this is + # a mutating method, and thus calling it should cause the field to become + # present in the parent message. + self._Modified() + + cls.ClearField = ClearField + + +def _AddClearExtensionMethod(cls): + """Helper for _AddMessageMethods().""" + def ClearExtension(self, extension_handle): + extension_dict._VerifyExtensionHandle(self, extension_handle) + + # Similar to ClearField(), above. + if extension_handle in self._fields: + del self._fields[extension_handle] + self._Modified() + cls.ClearExtension = ClearExtension + + +def _AddHasExtensionMethod(cls): + """Helper for _AddMessageMethods().""" + def HasExtension(self, extension_handle): + extension_dict._VerifyExtensionHandle(self, extension_handle) + if extension_handle.label == _FieldDescriptor.LABEL_REPEATED: + raise KeyError('"%s" is repeated.' % extension_handle.full_name) + + if extension_handle.cpp_type == _FieldDescriptor.CPPTYPE_MESSAGE: + value = self._fields.get(extension_handle) + return value is not None and value._is_present_in_parent + else: + return extension_handle in self._fields + cls.HasExtension = HasExtension + +def _InternalUnpackAny(msg): + """Unpacks Any message and returns the unpacked message. + + This internal method is different from public Any Unpack method which takes + the target message as argument. _InternalUnpackAny method does not have + target message type and need to find the message type in descriptor pool. + + Args: + msg: An Any message to be unpacked. + + Returns: + The unpacked message. + """ + # TODO(amauryfa): Don't use the factory of generated messages. + # To make Any work with custom factories, use the message factory of the + # parent message. + # pylint: disable=g-import-not-at-top + from google.protobuf import symbol_database + factory = symbol_database.Default() + + type_url = msg.type_url + + if not type_url: + return None + + # TODO(haberman): For now we just strip the hostname. Better logic will be + # required. + type_name = type_url.split('/')[-1] + descriptor = factory.pool.FindMessageTypeByName(type_name) + + if descriptor is None: + return None + + message_class = factory.GetPrototype(descriptor) + message = message_class() + + message.ParseFromString(msg.value) + return message + + +def _AddEqualsMethod(message_descriptor, cls): + """Helper for _AddMessageMethods().""" + def __eq__(self, other): + if (not isinstance(other, message_mod.Message) or + other.DESCRIPTOR != self.DESCRIPTOR): + return False + + if self is other: + return True + + if self.DESCRIPTOR.full_name == _AnyFullTypeName: + any_a = _InternalUnpackAny(self) + any_b = _InternalUnpackAny(other) + if any_a and any_b: + return any_a == any_b + + if not self.ListFields() == other.ListFields(): + return False + + # TODO(jieluo): Fix UnknownFieldSet to consider MessageSet extensions, + # then use it for the comparison. 
+ unknown_fields = list(self._unknown_fields) + unknown_fields.sort() + other_unknown_fields = list(other._unknown_fields) + other_unknown_fields.sort() + return unknown_fields == other_unknown_fields + + cls.__eq__ = __eq__ + + +def _AddStrMethod(message_descriptor, cls): + """Helper for _AddMessageMethods().""" + def __str__(self): + return text_format.MessageToString(self) + cls.__str__ = __str__ + + +def _AddReprMethod(message_descriptor, cls): + """Helper for _AddMessageMethods().""" + def __repr__(self): + return text_format.MessageToString(self) + cls.__repr__ = __repr__ + + +def _AddUnicodeMethod(unused_message_descriptor, cls): + """Helper for _AddMessageMethods().""" + + def __unicode__(self): + return text_format.MessageToString(self, as_utf8=True).decode('utf-8') + cls.__unicode__ = __unicode__ + + +def _BytesForNonRepeatedElement(value, field_number, field_type): + """Returns the number of bytes needed to serialize a non-repeated element. + The returned byte count includes space for tag information and any + other additional space associated with serializing value. + + Args: + value: Value we're serializing. + field_number: Field number of this value. (Since the field number + is stored as part of a varint-encoded tag, this has an impact + on the total bytes required to serialize the value). + field_type: The type of the field. One of the TYPE_* constants + within FieldDescriptor. + """ + try: + fn = type_checkers.TYPE_TO_BYTE_SIZE_FN[field_type] + return fn(field_number, value) + except KeyError: + raise message_mod.EncodeError('Unrecognized field type: %d' % field_type) + + +def _AddByteSizeMethod(message_descriptor, cls): + """Helper for _AddMessageMethods().""" + + def ByteSize(self): + if not self._cached_byte_size_dirty: + return self._cached_byte_size + + size = 0 + descriptor = self.DESCRIPTOR + if descriptor.GetOptions().map_entry: + # Fields of map entry should always be serialized. + size = descriptor.fields_by_name['key']._sizer(self.key) + size += descriptor.fields_by_name['value']._sizer(self.value) + else: + for field_descriptor, field_value in self.ListFields(): + size += field_descriptor._sizer(field_value) + for tag_bytes, value_bytes in self._unknown_fields: + size += len(tag_bytes) + len(value_bytes) + + self._cached_byte_size = size + self._cached_byte_size_dirty = False + self._listener_for_children.dirty = False + return size + + cls.ByteSize = ByteSize + + +def _AddSerializeToStringMethod(message_descriptor, cls): + """Helper for _AddMessageMethods().""" + + def SerializeToString(self, **kwargs): + # Check if the message has all of its required fields set. 
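+    # Required fields exist only in proto2; for proto3 messages the set of
+    # required fields is empty, so this check normally passes trivially.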
+ if not self.IsInitialized(): + raise message_mod.EncodeError( + 'Message %s is missing required fields: %s' % ( + self.DESCRIPTOR.full_name, ','.join(self.FindInitializationErrors()))) + return self.SerializePartialToString(**kwargs) + cls.SerializeToString = SerializeToString + + +def _AddSerializePartialToStringMethod(message_descriptor, cls): + """Helper for _AddMessageMethods().""" + + def SerializePartialToString(self, **kwargs): + out = BytesIO() + self._InternalSerialize(out.write, **kwargs) + return out.getvalue() + cls.SerializePartialToString = SerializePartialToString + + def InternalSerialize(self, write_bytes, deterministic=None): + if deterministic is None: + deterministic = ( + api_implementation.IsPythonDefaultSerializationDeterministic()) + else: + deterministic = bool(deterministic) + + descriptor = self.DESCRIPTOR + if descriptor.GetOptions().map_entry: + # Fields of map entry should always be serialized. + descriptor.fields_by_name['key']._encoder( + write_bytes, self.key, deterministic) + descriptor.fields_by_name['value']._encoder( + write_bytes, self.value, deterministic) + else: + for field_descriptor, field_value in self.ListFields(): + field_descriptor._encoder(write_bytes, field_value, deterministic) + for tag_bytes, value_bytes in self._unknown_fields: + write_bytes(tag_bytes) + write_bytes(value_bytes) + cls._InternalSerialize = InternalSerialize + + +def _AddMergeFromStringMethod(message_descriptor, cls): + """Helper for _AddMessageMethods().""" + def MergeFromString(self, serialized): + if isinstance(serialized, memoryview) and six.PY2: + raise TypeError( + 'memoryview not supported in Python 2 with the pure Python proto ' + 'implementation: this is to maintain compatibility with the C++ ' + 'implementation') + + serialized = memoryview(serialized) + length = len(serialized) + try: + if self._InternalParse(serialized, 0, length) != length: + # The only reason _InternalParse would return early is if it + # encountered an end-group tag. + raise message_mod.DecodeError('Unexpected end-group tag.') + except (IndexError, TypeError): + # Now ord(buf[p:p+1]) == ord('') gets TypeError. + raise message_mod.DecodeError('Truncated message.') + except struct.error as e: + raise message_mod.DecodeError(e) + return length # Return this for legacy reasons. + cls.MergeFromString = MergeFromString + + local_ReadTag = decoder.ReadTag + local_SkipField = decoder.SkipField + decoders_by_tag = cls._decoders_by_tag + + def InternalParse(self, buffer, pos, end): + """Create a message from serialized bytes. + + Args: + self: Message, instance of the proto message object. + buffer: memoryview of the serialized data. + pos: int, position to start in the serialized data. + end: int, end position of the serialized data. + + Returns: + Message object. + """ + # Guard against internal misuse, since this function is called internally + # quite extensively, and its easy to accidentally pass bytes. 
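+    # A memoryview keeps every buffer[a:b] slice below zero-copy, and the
+    # decoders rely on slice.tobytes(); a plain bytes object would copy on
+    # every slice and has no tobytes() method.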
+ assert isinstance(buffer, memoryview) + self._Modified() + field_dict = self._fields + # pylint: disable=protected-access + unknown_field_set = self._unknown_field_set + while pos != end: + (tag_bytes, new_pos) = local_ReadTag(buffer, pos) + field_decoder, field_desc = decoders_by_tag.get(tag_bytes, (None, None)) + if field_decoder is None: + if not self._unknown_fields: # pylint: disable=protected-access + self._unknown_fields = [] # pylint: disable=protected-access + if unknown_field_set is None: + # pylint: disable=protected-access + self._unknown_field_set = containers.UnknownFieldSet() + # pylint: disable=protected-access + unknown_field_set = self._unknown_field_set + # pylint: disable=protected-access + (tag, _) = decoder._DecodeVarint(tag_bytes, 0) + field_number, wire_type = wire_format.UnpackTag(tag) + if field_number == 0: + raise message_mod.DecodeError('Field number 0 is illegal.') + # TODO(jieluo): remove old_pos. + old_pos = new_pos + (data, new_pos) = decoder._DecodeUnknownField( + buffer, new_pos, wire_type) # pylint: disable=protected-access + if new_pos == -1: + return pos + # pylint: disable=protected-access + unknown_field_set._add(field_number, wire_type, data) + # TODO(jieluo): remove _unknown_fields. + new_pos = local_SkipField(buffer, old_pos, end, tag_bytes) + if new_pos == -1: + return pos + self._unknown_fields.append( + (tag_bytes, buffer[old_pos:new_pos].tobytes())) + pos = new_pos + else: + pos = field_decoder(buffer, new_pos, end, self, field_dict) + if field_desc: + self._UpdateOneofState(field_desc) + return pos + cls._InternalParse = InternalParse + + +def _AddIsInitializedMethod(message_descriptor, cls): + """Adds the IsInitialized and FindInitializationError methods to the + protocol message class.""" + + required_fields = [field for field in message_descriptor.fields + if field.label == _FieldDescriptor.LABEL_REQUIRED] + + def IsInitialized(self, errors=None): + """Checks if all required fields of a message are set. + + Args: + errors: A list which, if provided, will be populated with the field + paths of all missing required fields. + + Returns: + True iff the specified message has all required fields set. + """ + + # Performance is critical so we avoid HasField() and ListFields(). + + for field in required_fields: + if (field not in self._fields or + (field.cpp_type == _FieldDescriptor.CPPTYPE_MESSAGE and + not self._fields[field]._is_present_in_parent)): + if errors is not None: + errors.extend(self.FindInitializationErrors()) + return False + + for field, value in list(self._fields.items()): # dict can change size! + if field.cpp_type == _FieldDescriptor.CPPTYPE_MESSAGE: + if field.label == _FieldDescriptor.LABEL_REPEATED: + if (field.message_type.has_options and + field.message_type.GetOptions().map_entry): + continue + for element in value: + if not element.IsInitialized(): + if errors is not None: + errors.extend(self.FindInitializationErrors()) + return False + elif value._is_present_in_parent and not value.IsInitialized(): + if errors is not None: + errors.extend(self.FindInitializationErrors()) + return False + + return True + + cls.IsInitialized = IsInitialized + + def FindInitializationErrors(self): + """Finds required fields which are not initialized. + + Returns: + A list of strings. Each string is a path to an uninitialized field from + the top-level message, e.g. "foo.bar[5].baz". 
+ """ + + errors = [] # simplify things + + for field in required_fields: + if not self.HasField(field.name): + errors.append(field.name) + + for field, value in self.ListFields(): + if field.cpp_type == _FieldDescriptor.CPPTYPE_MESSAGE: + if field.is_extension: + name = '(%s)' % field.full_name + else: + name = field.name + + if _IsMapField(field): + if _IsMessageMapField(field): + for key in value: + element = value[key] + prefix = '%s[%s].' % (name, key) + sub_errors = element.FindInitializationErrors() + errors += [prefix + error for error in sub_errors] + else: + # ScalarMaps can't have any initialization errors. + pass + elif field.label == _FieldDescriptor.LABEL_REPEATED: + for i in range(len(value)): + element = value[i] + prefix = '%s[%d].' % (name, i) + sub_errors = element.FindInitializationErrors() + errors += [prefix + error for error in sub_errors] + else: + prefix = name + '.' + sub_errors = value.FindInitializationErrors() + errors += [prefix + error for error in sub_errors] + + return errors + + cls.FindInitializationErrors = FindInitializationErrors + + +def _AddMergeFromMethod(cls): + LABEL_REPEATED = _FieldDescriptor.LABEL_REPEATED + CPPTYPE_MESSAGE = _FieldDescriptor.CPPTYPE_MESSAGE + + def MergeFrom(self, msg): + if not isinstance(msg, cls): + raise TypeError( + 'Parameter to MergeFrom() must be instance of same class: ' + 'expected %s got %s.' % (cls.__name__, msg.__class__.__name__)) + + assert msg is not self + self._Modified() + + fields = self._fields + + for field, value in msg._fields.items(): + if field.label == LABEL_REPEATED: + field_value = fields.get(field) + if field_value is None: + # Construct a new object to represent this field. + field_value = field._default_constructor(self) + fields[field] = field_value + field_value.MergeFrom(value) + elif field.cpp_type == CPPTYPE_MESSAGE: + if value._is_present_in_parent: + field_value = fields.get(field) + if field_value is None: + # Construct a new object to represent this field. + field_value = field._default_constructor(self) + fields[field] = field_value + field_value.MergeFrom(value) + else: + self._fields[field] = value + if field.containing_oneof: + self._UpdateOneofState(field) + + if msg._unknown_fields: + if not self._unknown_fields: + self._unknown_fields = [] + self._unknown_fields.extend(msg._unknown_fields) + # pylint: disable=protected-access + if self._unknown_field_set is None: + self._unknown_field_set = containers.UnknownFieldSet() + self._unknown_field_set._extend(msg._unknown_field_set) + + cls.MergeFrom = MergeFrom + + +def _AddWhichOneofMethod(message_descriptor, cls): + def WhichOneof(self, oneof_name): + """Returns the name of the currently set field inside a oneof, or None.""" + try: + field = message_descriptor.oneofs_by_name[oneof_name] + except KeyError: + raise ValueError( + 'Protocol message has no oneof "%s" field.' % oneof_name) + + nested_field = self._oneofs.get(field, None) + if nested_field is not None and self.HasField(nested_field.name): + return nested_field.name + else: + return None + + cls.WhichOneof = WhichOneof + + +def _Clear(self): + # Clear fields. 
+ self._fields = {} + self._unknown_fields = () + # pylint: disable=protected-access + if self._unknown_field_set is not None: + self._unknown_field_set._clear() + self._unknown_field_set = None + + self._oneofs = {} + self._Modified() + + +def _UnknownFields(self): + if self._unknown_field_set is None: # pylint: disable=protected-access + # pylint: disable=protected-access + self._unknown_field_set = containers.UnknownFieldSet() + return self._unknown_field_set # pylint: disable=protected-access + + +def _DiscardUnknownFields(self): + self._unknown_fields = [] + self._unknown_field_set = None # pylint: disable=protected-access + for field, value in self.ListFields(): + if field.cpp_type == _FieldDescriptor.CPPTYPE_MESSAGE: + if _IsMapField(field): + if _IsMessageMapField(field): + for key in value: + value[key].DiscardUnknownFields() + elif field.label == _FieldDescriptor.LABEL_REPEATED: + for sub_message in value: + sub_message.DiscardUnknownFields() + else: + value.DiscardUnknownFields() + + +def _SetListener(self, listener): + if listener is None: + self._listener = message_listener_mod.NullMessageListener() + else: + self._listener = listener + + +def _AddMessageMethods(message_descriptor, cls): + """Adds implementations of all Message methods to cls.""" + _AddListFieldsMethod(message_descriptor, cls) + _AddHasFieldMethod(message_descriptor, cls) + _AddClearFieldMethod(message_descriptor, cls) + if message_descriptor.is_extendable: + _AddClearExtensionMethod(cls) + _AddHasExtensionMethod(cls) + _AddEqualsMethod(message_descriptor, cls) + _AddStrMethod(message_descriptor, cls) + _AddReprMethod(message_descriptor, cls) + _AddUnicodeMethod(message_descriptor, cls) + _AddByteSizeMethod(message_descriptor, cls) + _AddSerializeToStringMethod(message_descriptor, cls) + _AddSerializePartialToStringMethod(message_descriptor, cls) + _AddMergeFromStringMethod(message_descriptor, cls) + _AddIsInitializedMethod(message_descriptor, cls) + _AddMergeFromMethod(cls) + _AddWhichOneofMethod(message_descriptor, cls) + # Adds methods which do not depend on cls. + cls.Clear = _Clear + cls.UnknownFields = _UnknownFields + cls.DiscardUnknownFields = _DiscardUnknownFields + cls._SetListener = _SetListener + + +def _AddPrivateHelperMethods(message_descriptor, cls): + """Adds implementation of private helper methods to cls.""" + + def Modified(self): + """Sets the _cached_byte_size_dirty bit to true, + and propagates this to our listener iff this was a state change. + """ + + # Note: Some callers check _cached_byte_size_dirty before calling + # _Modified() as an extra optimization. So, if this method is ever + # changed such that it does stuff even when _cached_byte_size_dirty is + # already true, the callers need to be updated. + if not self._cached_byte_size_dirty: + self._cached_byte_size_dirty = True + self._listener_for_children.dirty = True + self._is_present_in_parent = True + self._listener.Modified() + + def _UpdateOneofState(self, field): + """Sets field as the active field in its containing oneof. + + Will also delete currently active field in the oneof, if it is different + from the argument. Does not mark the message as modified. 
+ """ + other_field = self._oneofs.setdefault(field.containing_oneof, field) + if other_field is not field: + del self._fields[other_field] + self._oneofs[field.containing_oneof] = field + + cls._Modified = Modified + cls.SetInParent = Modified + cls._UpdateOneofState = _UpdateOneofState + + +class _Listener(object): + + """MessageListener implementation that a parent message registers with its + child message. + + In order to support semantics like: + + foo.bar.baz.qux = 23 + assert foo.HasField('bar') + + ...child objects must have back references to their parents. + This helper class is at the heart of this support. + """ + + def __init__(self, parent_message): + """Args: + parent_message: The message whose _Modified() method we should call when + we receive Modified() messages. + """ + # This listener establishes a back reference from a child (contained) object + # to its parent (containing) object. We make this a weak reference to avoid + # creating cyclic garbage when the client finishes with the 'parent' object + # in the tree. + if isinstance(parent_message, weakref.ProxyType): + self._parent_message_weakref = parent_message + else: + self._parent_message_weakref = weakref.proxy(parent_message) + + # As an optimization, we also indicate directly on the listener whether + # or not the parent message is dirty. This way we can avoid traversing + # up the tree in the common case. + self.dirty = False + + def Modified(self): + if self.dirty: + return + try: + # Propagate the signal to our parents iff this is the first field set. + self._parent_message_weakref._Modified() + except ReferenceError: + # We can get here if a client has kept a reference to a child object, + # and is now setting a field on it, but the child's parent has been + # garbage-collected. This is not an error. + pass + + +class _OneofListener(_Listener): + """Special listener implementation for setting composite oneof fields.""" + + def __init__(self, parent_message, field): + """Args: + parent_message: The message whose _Modified() method we should call when + we receive Modified() messages. + field: The descriptor of the field being set in the parent message. + """ + super(_OneofListener, self).__init__(parent_message) + self._field = field + + def Modified(self): + """Also updates the state of the containing oneof in the parent message.""" + try: + self._parent_message_weakref._UpdateOneofState(self._field) + super(_OneofListener, self).Modified() + except ReferenceError: + pass diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/internal/type_checkers.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/internal/type_checkers.py new file mode 100644 index 0000000000000000000000000000000000000000..eb66f9f6fbaeeaf190792b76eb0b7cc62b82e782 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/internal/type_checkers.py @@ -0,0 +1,426 @@ +# Protocol Buffers - Google's data interchange format +# Copyright 2008 Google Inc. All rights reserved. +# https://developers.google.com/protocol-buffers/ +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions are +# met: +# +# * Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. 
+# * Redistributions in binary form must reproduce the above
+# copyright notice, this list of conditions and the following disclaimer
+# in the documentation and/or other materials provided with the
+# distribution.
+# * Neither the name of Google Inc. nor the names of its
+# contributors may be used to endorse or promote products derived from
+# this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+"""Provides type checking routines.
+
+This module defines type checking utilities in the form of dictionaries:
+
+VALUE_CHECKERS: A dictionary mapping field types to a value validation object.
+TYPE_TO_BYTE_SIZE_FN: A dictionary mapping field types to a size computing
+  function.
+TYPE_TO_ENCODER: A dictionary mapping field types to an encoder constructor.
+TYPE_TO_SIZER: A dictionary mapping field types to a sizer constructor.
+TYPE_TO_DECODER: A dictionary mapping field types to a decoder constructor.
+FIELD_TYPE_TO_WIRE_TYPE: A dictionary mapping field types to their
+  corresponding wire types.
+"""
+
+__author__ = 'robinson@google.com (Will Robinson)'
+
+try:
+  import ctypes
+except Exception:  # pylint: disable=broad-except
+  ctypes = None
+  import struct
+import numbers
+import six
+
+if six.PY3:
+  long = int
+
+from google.protobuf.internal import api_implementation
+from google.protobuf.internal import decoder
+from google.protobuf.internal import encoder
+from google.protobuf.internal import wire_format
+from google.protobuf import descriptor
+
+_FieldDescriptor = descriptor.FieldDescriptor
+
+
+def TruncateToFourByteFloat(original):
+  if ctypes:
+    return ctypes.c_float(original).value
+  else:
+    return struct.unpack('<f', struct.pack('<f', original))[0]
+
+
+# The max 4 byte float is about 3.4028234663852886e+38
+_FLOAT_MAX = float.fromhex('0x1.fffffep+127')
+_FLOAT_MIN = -_FLOAT_MAX
+_INF = float('inf')
+_NEG_INF = float('-inf')
+
+
+class FloatValueChecker(object):
+
+  """Checker used for float fields.  Performs type-check and range check."""
+
+  def CheckValue(self, proposed_value):
+    """Check and convert proposed_value to float."""
+    if not isinstance(proposed_value, numbers.Real):
+      message = ('%.1024r has type %s, but expected one of: numbers.Real' %
+                 (proposed_value, type(proposed_value)))
+      raise TypeError(message)
+    converted_value = float(proposed_value)
+    # This inf rounding matches the C++ proto SimpleItoa rounding.
+    if converted_value > _FLOAT_MAX:
+      return _INF
+    if converted_value < _FLOAT_MIN:
+      return _NEG_INF
+
+    return TruncateToFourByteFloat(converted_value)
+
+  def DefaultValue(self):
+    return 0.0
+
+
+# Type-checkers for all scalar CPPTYPEs.
+_VALUE_CHECKERS = {
+    _FieldDescriptor.CPPTYPE_INT32: Int32ValueChecker(),
+    _FieldDescriptor.CPPTYPE_INT64: Int64ValueChecker(),
+    _FieldDescriptor.CPPTYPE_UINT32: Uint32ValueChecker(),
+    _FieldDescriptor.CPPTYPE_UINT64: Uint64ValueChecker(),
+    _FieldDescriptor.CPPTYPE_DOUBLE: TypeCheckerWithDefault(
+        0.0, float, numbers.Real),
+    _FieldDescriptor.CPPTYPE_FLOAT: FloatValueChecker(),
+    _FieldDescriptor.CPPTYPE_BOOL: TypeCheckerWithDefault(
+        False, bool, numbers.Integral),
+    _FieldDescriptor.CPPTYPE_STRING: TypeCheckerWithDefault(b'', bytes),
+    }
+
+
+# Map from field type to a function F, such that F(field_num, value)
+# gives the total byte size for a value of the given type.  This
+# byte size includes tag information and any other additional space
+# associated with serializing "value".
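+# (Illustrative example, not from upstream: for varint fields the size
+# depends on both the tag and the value.  Assuming field number 1,
+#   TYPE_TO_BYTE_SIZE_FN[_FieldDescriptor.TYPE_INT64](1, 300) == 3
+# -- one byte for the tag of field 1 plus two bytes for the varint 300.)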
+TYPE_TO_BYTE_SIZE_FN = { + _FieldDescriptor.TYPE_DOUBLE: wire_format.DoubleByteSize, + _FieldDescriptor.TYPE_FLOAT: wire_format.FloatByteSize, + _FieldDescriptor.TYPE_INT64: wire_format.Int64ByteSize, + _FieldDescriptor.TYPE_UINT64: wire_format.UInt64ByteSize, + _FieldDescriptor.TYPE_INT32: wire_format.Int32ByteSize, + _FieldDescriptor.TYPE_FIXED64: wire_format.Fixed64ByteSize, + _FieldDescriptor.TYPE_FIXED32: wire_format.Fixed32ByteSize, + _FieldDescriptor.TYPE_BOOL: wire_format.BoolByteSize, + _FieldDescriptor.TYPE_STRING: wire_format.StringByteSize, + _FieldDescriptor.TYPE_GROUP: wire_format.GroupByteSize, + _FieldDescriptor.TYPE_MESSAGE: wire_format.MessageByteSize, + _FieldDescriptor.TYPE_BYTES: wire_format.BytesByteSize, + _FieldDescriptor.TYPE_UINT32: wire_format.UInt32ByteSize, + _FieldDescriptor.TYPE_ENUM: wire_format.EnumByteSize, + _FieldDescriptor.TYPE_SFIXED32: wire_format.SFixed32ByteSize, + _FieldDescriptor.TYPE_SFIXED64: wire_format.SFixed64ByteSize, + _FieldDescriptor.TYPE_SINT32: wire_format.SInt32ByteSize, + _FieldDescriptor.TYPE_SINT64: wire_format.SInt64ByteSize + } + + +# Maps from field types to encoder constructors. +TYPE_TO_ENCODER = { + _FieldDescriptor.TYPE_DOUBLE: encoder.DoubleEncoder, + _FieldDescriptor.TYPE_FLOAT: encoder.FloatEncoder, + _FieldDescriptor.TYPE_INT64: encoder.Int64Encoder, + _FieldDescriptor.TYPE_UINT64: encoder.UInt64Encoder, + _FieldDescriptor.TYPE_INT32: encoder.Int32Encoder, + _FieldDescriptor.TYPE_FIXED64: encoder.Fixed64Encoder, + _FieldDescriptor.TYPE_FIXED32: encoder.Fixed32Encoder, + _FieldDescriptor.TYPE_BOOL: encoder.BoolEncoder, + _FieldDescriptor.TYPE_STRING: encoder.StringEncoder, + _FieldDescriptor.TYPE_GROUP: encoder.GroupEncoder, + _FieldDescriptor.TYPE_MESSAGE: encoder.MessageEncoder, + _FieldDescriptor.TYPE_BYTES: encoder.BytesEncoder, + _FieldDescriptor.TYPE_UINT32: encoder.UInt32Encoder, + _FieldDescriptor.TYPE_ENUM: encoder.EnumEncoder, + _FieldDescriptor.TYPE_SFIXED32: encoder.SFixed32Encoder, + _FieldDescriptor.TYPE_SFIXED64: encoder.SFixed64Encoder, + _FieldDescriptor.TYPE_SINT32: encoder.SInt32Encoder, + _FieldDescriptor.TYPE_SINT64: encoder.SInt64Encoder, + } + + +# Maps from field types to sizer constructors. +TYPE_TO_SIZER = { + _FieldDescriptor.TYPE_DOUBLE: encoder.DoubleSizer, + _FieldDescriptor.TYPE_FLOAT: encoder.FloatSizer, + _FieldDescriptor.TYPE_INT64: encoder.Int64Sizer, + _FieldDescriptor.TYPE_UINT64: encoder.UInt64Sizer, + _FieldDescriptor.TYPE_INT32: encoder.Int32Sizer, + _FieldDescriptor.TYPE_FIXED64: encoder.Fixed64Sizer, + _FieldDescriptor.TYPE_FIXED32: encoder.Fixed32Sizer, + _FieldDescriptor.TYPE_BOOL: encoder.BoolSizer, + _FieldDescriptor.TYPE_STRING: encoder.StringSizer, + _FieldDescriptor.TYPE_GROUP: encoder.GroupSizer, + _FieldDescriptor.TYPE_MESSAGE: encoder.MessageSizer, + _FieldDescriptor.TYPE_BYTES: encoder.BytesSizer, + _FieldDescriptor.TYPE_UINT32: encoder.UInt32Sizer, + _FieldDescriptor.TYPE_ENUM: encoder.EnumSizer, + _FieldDescriptor.TYPE_SFIXED32: encoder.SFixed32Sizer, + _FieldDescriptor.TYPE_SFIXED64: encoder.SFixed64Sizer, + _FieldDescriptor.TYPE_SINT32: encoder.SInt32Sizer, + _FieldDescriptor.TYPE_SINT64: encoder.SInt64Sizer, + } + + +# Maps from field type to a decoder constructor. 
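+# (Illustrative note, not from upstream: together with TYPE_TO_ENCODER and
+# TYPE_TO_SIZER above, this table lets the pure-Python message implementation
+# build per-field encode/size/decode callables once per descriptor instead of
+# dispatching on the field type for every message instance.)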
+TYPE_TO_DECODER = { + _FieldDescriptor.TYPE_DOUBLE: decoder.DoubleDecoder, + _FieldDescriptor.TYPE_FLOAT: decoder.FloatDecoder, + _FieldDescriptor.TYPE_INT64: decoder.Int64Decoder, + _FieldDescriptor.TYPE_UINT64: decoder.UInt64Decoder, + _FieldDescriptor.TYPE_INT32: decoder.Int32Decoder, + _FieldDescriptor.TYPE_FIXED64: decoder.Fixed64Decoder, + _FieldDescriptor.TYPE_FIXED32: decoder.Fixed32Decoder, + _FieldDescriptor.TYPE_BOOL: decoder.BoolDecoder, + _FieldDescriptor.TYPE_STRING: decoder.StringDecoder, + _FieldDescriptor.TYPE_GROUP: decoder.GroupDecoder, + _FieldDescriptor.TYPE_MESSAGE: decoder.MessageDecoder, + _FieldDescriptor.TYPE_BYTES: decoder.BytesDecoder, + _FieldDescriptor.TYPE_UINT32: decoder.UInt32Decoder, + _FieldDescriptor.TYPE_ENUM: decoder.EnumDecoder, + _FieldDescriptor.TYPE_SFIXED32: decoder.SFixed32Decoder, + _FieldDescriptor.TYPE_SFIXED64: decoder.SFixed64Decoder, + _FieldDescriptor.TYPE_SINT32: decoder.SInt32Decoder, + _FieldDescriptor.TYPE_SINT64: decoder.SInt64Decoder, + } + +# Maps from field type to expected wiretype. +FIELD_TYPE_TO_WIRE_TYPE = { + _FieldDescriptor.TYPE_DOUBLE: wire_format.WIRETYPE_FIXED64, + _FieldDescriptor.TYPE_FLOAT: wire_format.WIRETYPE_FIXED32, + _FieldDescriptor.TYPE_INT64: wire_format.WIRETYPE_VARINT, + _FieldDescriptor.TYPE_UINT64: wire_format.WIRETYPE_VARINT, + _FieldDescriptor.TYPE_INT32: wire_format.WIRETYPE_VARINT, + _FieldDescriptor.TYPE_FIXED64: wire_format.WIRETYPE_FIXED64, + _FieldDescriptor.TYPE_FIXED32: wire_format.WIRETYPE_FIXED32, + _FieldDescriptor.TYPE_BOOL: wire_format.WIRETYPE_VARINT, + _FieldDescriptor.TYPE_STRING: + wire_format.WIRETYPE_LENGTH_DELIMITED, + _FieldDescriptor.TYPE_GROUP: wire_format.WIRETYPE_START_GROUP, + _FieldDescriptor.TYPE_MESSAGE: + wire_format.WIRETYPE_LENGTH_DELIMITED, + _FieldDescriptor.TYPE_BYTES: + wire_format.WIRETYPE_LENGTH_DELIMITED, + _FieldDescriptor.TYPE_UINT32: wire_format.WIRETYPE_VARINT, + _FieldDescriptor.TYPE_ENUM: wire_format.WIRETYPE_VARINT, + _FieldDescriptor.TYPE_SFIXED32: wire_format.WIRETYPE_FIXED32, + _FieldDescriptor.TYPE_SFIXED64: wire_format.WIRETYPE_FIXED64, + _FieldDescriptor.TYPE_SINT32: wire_format.WIRETYPE_VARINT, + _FieldDescriptor.TYPE_SINT64: wire_format.WIRETYPE_VARINT, + } diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/internal/well_known_types.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/internal/well_known_types.py new file mode 100644 index 0000000000000000000000000000000000000000..6f55d6b17b42528b276703db572925ce9a70b083 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/internal/well_known_types.py @@ -0,0 +1,863 @@ +# Protocol Buffers - Google's data interchange format +# Copyright 2008 Google Inc. All rights reserved. +# https://developers.google.com/protocol-buffers/ +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions are +# met: +# +# * Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# * Redistributions in binary form must reproduce the above +# copyright notice, this list of conditions and the following disclaimer +# in the documentation and/or other materials provided with the +# distribution. +# * Neither the name of Google Inc. 
nor the names of its
+# contributors may be used to endorse or promote products derived from
+# this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+"""Contains well known classes.
+
+This file defines well known classes which need extra maintenance including:
+  - Any
+  - Duration
+  - FieldMask
+  - Struct
+  - Timestamp
+"""
+
+__author__ = 'jieluo@google.com (Jie Luo)'
+
+import calendar
+from datetime import datetime
+from datetime import timedelta
+import six
+
+try:
+  # Since python 3
+  import collections.abc as collections_abc
+except ImportError:
+  # Won't work after python 3.8
+  import collections as collections_abc
+
+from google.protobuf.descriptor import FieldDescriptor
+
+_TIMESTAMPFOMAT = '%Y-%m-%dT%H:%M:%S'
+_NANOS_PER_SECOND = 1000000000
+_NANOS_PER_MILLISECOND = 1000000
+_NANOS_PER_MICROSECOND = 1000
+_MILLIS_PER_SECOND = 1000
+_MICROS_PER_SECOND = 1000000
+_SECONDS_PER_DAY = 24 * 3600
+_DURATION_SECONDS_MAX = 315576000000
+
+
+class Any(object):
+  """Class for Any Message type."""
+
+  __slots__ = ()
+
+  def Pack(self, msg, type_url_prefix='type.googleapis.com/',
+           deterministic=None):
+    """Packs the specified message into current Any message."""
+    if len(type_url_prefix) < 1 or type_url_prefix[-1] != '/':
+      self.type_url = '%s/%s' % (type_url_prefix, msg.DESCRIPTOR.full_name)
+    else:
+      self.type_url = '%s%s' % (type_url_prefix, msg.DESCRIPTOR.full_name)
+    self.value = msg.SerializeToString(deterministic=deterministic)
+
+  def Unpack(self, msg):
+    """Unpacks the current Any message into specified message."""
+    descriptor = msg.DESCRIPTOR
+    if not self.Is(descriptor):
+      return False
+    msg.ParseFromString(self.value)
+    return True
+
+  def TypeName(self):
+    """Returns the protobuf type name of the inner message."""
+    # Only last part is to be used: b/25630112
+    return self.type_url.split('/')[-1]
+
+  def Is(self, descriptor):
+    """Checks if this Any represents the given protobuf type."""
+    return '/' in self.type_url and self.TypeName() == descriptor.full_name
+
+
+_EPOCH_DATETIME = datetime.utcfromtimestamp(0)
+
+
+class Timestamp(object):
+  """Class for Timestamp message type."""
+
+  __slots__ = ()
+
+  def ToJsonString(self):
+    """Converts Timestamp to RFC 3339 date string format.
+
+    Returns:
+      A string converted from timestamp. The string is always Z-normalized
+      and uses 3, 6 or 9 fractional digits as required to represent the
+      exact time.
Example of the return format: '1972-01-01T10:00:20.021Z'
+    """
+    nanos = self.nanos % _NANOS_PER_SECOND
+    total_sec = self.seconds + (self.nanos - nanos) // _NANOS_PER_SECOND
+    seconds = total_sec % _SECONDS_PER_DAY
+    days = (total_sec - seconds) // _SECONDS_PER_DAY
+    dt = datetime(1970, 1, 1) + timedelta(days, seconds)
+
+    result = dt.isoformat()
+    if (nanos % 1e9) == 0:
+      # If there are 0 fractional digits, the fractional
+      # point '.' should be omitted when serializing.
+      return result + 'Z'
+    if (nanos % 1e6) == 0:
+      # Serialize 3 fractional digits.
+      return result + '.%03dZ' % (nanos / 1e6)
+    if (nanos % 1e3) == 0:
+      # Serialize 6 fractional digits.
+      return result + '.%06dZ' % (nanos / 1e3)
+    # Serialize 9 fractional digits.
+    return result + '.%09dZ' % nanos
+
+  def FromJsonString(self, value):
+    """Parses an RFC 3339 date string into Timestamp.
+
+    Args:
+      value: A date string. Any fractional digits (or none) and any offset are
+        accepted as long as they fit into nano-seconds precision.
+        Example of accepted format: '1972-01-01T10:00:20.021-05:00'
+
+    Raises:
+      ValueError: On parsing problems.
+    """
+    timezone_offset = value.find('Z')
+    if timezone_offset == -1:
+      timezone_offset = value.find('+')
+    if timezone_offset == -1:
+      timezone_offset = value.rfind('-')
+    if timezone_offset == -1:
+      raise ValueError(
+          'Failed to parse timestamp: missing valid timezone offset.')
+    time_value = value[0:timezone_offset]
+    # Parse datetime and nanos.
+    point_position = time_value.find('.')
+    if point_position == -1:
+      second_value = time_value
+      nano_value = ''
+    else:
+      second_value = time_value[:point_position]
+      nano_value = time_value[point_position + 1:]
+    if 't' in second_value:
+      raise ValueError(
+          'time data \'{0}\' does not match format \'%Y-%m-%dT%H:%M:%S\', '
+          'lowercase \'t\' is not accepted'.format(second_value))
+    date_object = datetime.strptime(second_value, _TIMESTAMPFOMAT)
+    td = date_object - datetime(1970, 1, 1)
+    seconds = td.seconds + td.days * _SECONDS_PER_DAY
+    if len(nano_value) > 9:
+      raise ValueError(
+          'Failed to parse Timestamp: nanos {0} more than '
+          '9 fractional digits.'.format(nano_value))
+    if nano_value:
+      nanos = round(float('0.' + nano_value) * 1e9)
+    else:
+      nanos = 0
+    # Parse timezone offsets.
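+    # (Illustrative note, not from upstream: for
+    # '1972-01-01T10:00:20.021-05:00' the wall-clock part parsed above is
+    # local time; the '-05:00' offset means it lags UTC, so 5*3600 seconds
+    # are added back below, while a trailing 'Z' contributes nothing.)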
+    if value[timezone_offset] == 'Z':
+      if len(value) != timezone_offset + 1:
+        raise ValueError('Failed to parse timestamp: invalid trailing'
+                         ' data {0}.'.format(value))
+    else:
+      timezone = value[timezone_offset:]
+      pos = timezone.find(':')
+      if pos == -1:
+        raise ValueError(
+            'Invalid timezone offset value: {0}.'.format(timezone))
+      if timezone[0] == '+':
+        seconds -= (int(timezone[1:pos])*60+int(timezone[pos+1:]))*60
+      else:
+        seconds += (int(timezone[1:pos])*60+int(timezone[pos+1:]))*60
+    # Set seconds and nanos
+    self.seconds = int(seconds)
+    self.nanos = int(nanos)
+
+  def GetCurrentTime(self):
+    """Gets the current UTC time into this Timestamp."""
+    self.FromDatetime(datetime.utcnow())
+
+  def ToNanoseconds(self):
+    """Converts Timestamp to nanoseconds since epoch."""
+    return self.seconds * _NANOS_PER_SECOND + self.nanos
+
+  def ToMicroseconds(self):
+    """Converts Timestamp to microseconds since epoch."""
+    return (self.seconds * _MICROS_PER_SECOND +
+            self.nanos // _NANOS_PER_MICROSECOND)
+
+  def ToMilliseconds(self):
+    """Converts Timestamp to milliseconds since epoch."""
+    return (self.seconds * _MILLIS_PER_SECOND +
+            self.nanos // _NANOS_PER_MILLISECOND)
+
+  def ToSeconds(self):
+    """Converts Timestamp to seconds since epoch."""
+    return self.seconds
+
+  def FromNanoseconds(self, nanos):
+    """Converts nanoseconds since epoch to Timestamp."""
+    self.seconds = nanos // _NANOS_PER_SECOND
+    self.nanos = nanos % _NANOS_PER_SECOND
+
+  def FromMicroseconds(self, micros):
+    """Converts microseconds since epoch to Timestamp."""
+    self.seconds = micros // _MICROS_PER_SECOND
+    self.nanos = (micros % _MICROS_PER_SECOND) * _NANOS_PER_MICROSECOND
+
+  def FromMilliseconds(self, millis):
+    """Converts milliseconds since epoch to Timestamp."""
+    self.seconds = millis // _MILLIS_PER_SECOND
+    self.nanos = (millis % _MILLIS_PER_SECOND) * _NANOS_PER_MILLISECOND
+
+  def FromSeconds(self, seconds):
+    """Converts seconds since epoch to Timestamp."""
+    self.seconds = seconds
+    self.nanos = 0
+
+  def ToDatetime(self):
+    """Converts Timestamp to datetime."""
+    return _EPOCH_DATETIME + timedelta(
+        seconds=self.seconds, microseconds=_RoundTowardZero(
+            self.nanos, _NANOS_PER_MICROSECOND))
+
+  def FromDatetime(self, dt):
+    """Converts datetime to Timestamp."""
+    # Using this guide: http://wiki.python.org/moin/WorkingWithTime
+    # And this conversion guide: http://docs.python.org/library/time.html
+
+    # Turn the date parameter into a tuple (struct_time) that can then be
+    # manipulated into a long value of seconds.  During the conversion from
+    # struct_time to long, the source date is in UTC, so it follows that the
+    # correct transformation is calendar.timegm().
+    self.seconds = calendar.timegm(dt.utctimetuple())
+    self.nanos = dt.microsecond * _NANOS_PER_MICROSECOND
+
+
+class Duration(object):
+  """Class for Duration message type."""
+
+  __slots__ = ()
+
+  def ToJsonString(self):
+    """Converts Duration to string format.
+
+    Returns:
+      A string converted from self. The string format will contain
+      3, 6, or 9 fractional digits depending on the precision required to
+      represent the exact Duration value.
For example: "1s", "1.010s", + "1.000000100s", "-3.100s" + """ + _CheckDurationValid(self.seconds, self.nanos) + if self.seconds < 0 or self.nanos < 0: + result = '-' + seconds = - self.seconds + int((0 - self.nanos) // 1e9) + nanos = (0 - self.nanos) % 1e9 + else: + result = '' + seconds = self.seconds + int(self.nanos // 1e9) + nanos = self.nanos % 1e9 + result += '%d' % seconds + if (nanos % 1e9) == 0: + # If there are 0 fractional digits, the fractional + # point '.' should be omitted when serializing. + return result + 's' + if (nanos % 1e6) == 0: + # Serialize 3 fractional digits. + return result + '.%03ds' % (nanos / 1e6) + if (nanos % 1e3) == 0: + # Serialize 6 fractional digits. + return result + '.%06ds' % (nanos / 1e3) + # Serialize 9 fractional digits. + return result + '.%09ds' % nanos + + def FromJsonString(self, value): + """Converts a string to Duration. + + Args: + value: A string to be converted. The string must end with 's'. Any + fractional digits (or none) are accepted as long as they fit into + precision. For example: "1s", "1.01s", "1.0000001s", "-3.100s + + Raises: + ValueError: On parsing problems. + """ + if len(value) < 1 or value[-1] != 's': + raise ValueError( + 'Duration must end with letter "s": {0}.'.format(value)) + try: + pos = value.find('.') + if pos == -1: + seconds = int(value[:-1]) + nanos = 0 + else: + seconds = int(value[:pos]) + if value[0] == '-': + nanos = int(round(float('-0{0}'.format(value[pos: -1])) *1e9)) + else: + nanos = int(round(float('0{0}'.format(value[pos: -1])) *1e9)) + _CheckDurationValid(seconds, nanos) + self.seconds = seconds + self.nanos = nanos + except ValueError as e: + raise ValueError( + 'Couldn\'t parse duration: {0} : {1}.'.format(value, e)) + + def ToNanoseconds(self): + """Converts a Duration to nanoseconds.""" + return self.seconds * _NANOS_PER_SECOND + self.nanos + + def ToMicroseconds(self): + """Converts a Duration to microseconds.""" + micros = _RoundTowardZero(self.nanos, _NANOS_PER_MICROSECOND) + return self.seconds * _MICROS_PER_SECOND + micros + + def ToMilliseconds(self): + """Converts a Duration to milliseconds.""" + millis = _RoundTowardZero(self.nanos, _NANOS_PER_MILLISECOND) + return self.seconds * _MILLIS_PER_SECOND + millis + + def ToSeconds(self): + """Converts a Duration to seconds.""" + return self.seconds + + def FromNanoseconds(self, nanos): + """Converts nanoseconds to Duration.""" + self._NormalizeDuration(nanos // _NANOS_PER_SECOND, + nanos % _NANOS_PER_SECOND) + + def FromMicroseconds(self, micros): + """Converts microseconds to Duration.""" + self._NormalizeDuration( + micros // _MICROS_PER_SECOND, + (micros % _MICROS_PER_SECOND) * _NANOS_PER_MICROSECOND) + + def FromMilliseconds(self, millis): + """Converts milliseconds to Duration.""" + self._NormalizeDuration( + millis // _MILLIS_PER_SECOND, + (millis % _MILLIS_PER_SECOND) * _NANOS_PER_MILLISECOND) + + def FromSeconds(self, seconds): + """Converts seconds to Duration.""" + self.seconds = seconds + self.nanos = 0 + + def ToTimedelta(self): + """Converts Duration to timedelta.""" + return timedelta( + seconds=self.seconds, microseconds=_RoundTowardZero( + self.nanos, _NANOS_PER_MICROSECOND)) + + def FromTimedelta(self, td): + """Converts timedelta to Duration.""" + self._NormalizeDuration(td.seconds + td.days * _SECONDS_PER_DAY, + td.microseconds * _NANOS_PER_MICROSECOND) + + def _NormalizeDuration(self, seconds, nanos): + """Set Duration by seconds and nanos.""" + # Force nanos to be negative if the duration is negative. 
+ if seconds < 0 and nanos > 0: + seconds += 1 + nanos -= _NANOS_PER_SECOND + self.seconds = seconds + self.nanos = nanos + + +def _CheckDurationValid(seconds, nanos): + if seconds < -_DURATION_SECONDS_MAX or seconds > _DURATION_SECONDS_MAX: + raise ValueError( + 'Duration is not valid: Seconds {0} must be in range ' + '[-315576000000, 315576000000].'.format(seconds)) + if nanos <= -_NANOS_PER_SECOND or nanos >= _NANOS_PER_SECOND: + raise ValueError( + 'Duration is not valid: Nanos {0} must be in range ' + '[-999999999, 999999999].'.format(nanos)) + if (nanos < 0 and seconds > 0) or (nanos > 0 and seconds < 0): + raise ValueError( + 'Duration is not valid: Sign mismatch.') + + +def _RoundTowardZero(value, divider): + """Truncates the remainder part after division.""" + # For some languages, the sign of the remainder is implementation + # dependent if any of the operands is negative. Here we enforce + # "rounded toward zero" semantics. For example, for (-5) / 2 an + # implementation may give -3 as the result with the remainder being + # 1. This function ensures we always return -2 (closer to zero). + result = value // divider + remainder = value % divider + if result < 0 and remainder > 0: + return result + 1 + else: + return result + + +class FieldMask(object): + """Class for FieldMask message type.""" + + __slots__ = () + + def ToJsonString(self): + """Converts FieldMask to string according to proto3 JSON spec.""" + camelcase_paths = [] + for path in self.paths: + camelcase_paths.append(_SnakeCaseToCamelCase(path)) + return ','.join(camelcase_paths) + + def FromJsonString(self, value): + """Converts string to FieldMask according to proto3 JSON spec.""" + self.Clear() + if value: + for path in value.split(','): + self.paths.append(_CamelCaseToSnakeCase(path)) + + def IsValidForDescriptor(self, message_descriptor): + """Checks whether the FieldMask is valid for Message Descriptor.""" + for path in self.paths: + if not _IsValidPath(message_descriptor, path): + return False + return True + + def AllFieldsFromDescriptor(self, message_descriptor): + """Gets all direct fields of Message Descriptor to FieldMask.""" + self.Clear() + for field in message_descriptor.fields: + self.paths.append(field.name) + + def CanonicalFormFromMask(self, mask): + """Converts a FieldMask to the canonical form. + + Removes paths that are covered by another path. For example, + "foo.bar" is covered by "foo" and will be removed if "foo" + is also in the FieldMask. Then sorts all paths in alphabetical order. + + Args: + mask: The original FieldMask to be converted. + """ + tree = _FieldMaskTree(mask) + tree.ToFieldMask(self) + + def Union(self, mask1, mask2): + """Merges mask1 and mask2 into this FieldMask.""" + _CheckFieldMaskMessage(mask1) + _CheckFieldMaskMessage(mask2) + tree = _FieldMaskTree(mask1) + tree.MergeFromFieldMask(mask2) + tree.ToFieldMask(self) + + def Intersect(self, mask1, mask2): + """Intersects mask1 and mask2 into this FieldMask.""" + _CheckFieldMaskMessage(mask1) + _CheckFieldMaskMessage(mask2) + tree = _FieldMaskTree(mask1) + intersection = _FieldMaskTree() + for path in mask2.paths: + tree.IntersectPath(path, intersection) + intersection.ToFieldMask(self) + + def MergeMessage( + self, source, destination, + replace_message_field=False, replace_repeated_field=False): + """Merges fields specified in FieldMask from source to destination. + + Args: + source: Source message. + destination: The destination message to be merged into. + replace_message_field: Replace message field if True. 
Merge message + field if False. + replace_repeated_field: Replace repeated field if True. Append + elements of repeated field if False. + """ + tree = _FieldMaskTree(self) + tree.MergeMessage( + source, destination, replace_message_field, replace_repeated_field) + + +def _IsValidPath(message_descriptor, path): + """Checks whether the path is valid for Message Descriptor.""" + parts = path.split('.') + last = parts.pop() + for name in parts: + field = message_descriptor.fields_by_name.get(name) + if (field is None or + field.label == FieldDescriptor.LABEL_REPEATED or + field.type != FieldDescriptor.TYPE_MESSAGE): + return False + message_descriptor = field.message_type + return last in message_descriptor.fields_by_name + + +def _CheckFieldMaskMessage(message): + """Raises ValueError if message is not a FieldMask.""" + message_descriptor = message.DESCRIPTOR + if (message_descriptor.name != 'FieldMask' or + message_descriptor.file.name != 'google/protobuf/field_mask.proto'): + raise ValueError('Message {0} is not a FieldMask.'.format( + message_descriptor.full_name)) + + +def _SnakeCaseToCamelCase(path_name): + """Converts a path name from snake_case to camelCase.""" + result = [] + after_underscore = False + for c in path_name: + if c.isupper(): + raise ValueError( + 'Fail to print FieldMask to Json string: Path name ' + '{0} must not contain uppercase letters.'.format(path_name)) + if after_underscore: + if c.islower(): + result.append(c.upper()) + after_underscore = False + else: + raise ValueError( + 'Fail to print FieldMask to Json string: The ' + 'character after a "_" must be a lowercase letter ' + 'in path name {0}.'.format(path_name)) + elif c == '_': + after_underscore = True + else: + result += c + + if after_underscore: + raise ValueError('Fail to print FieldMask to Json string: Trailing "_" ' + 'in path name {0}.'.format(path_name)) + return ''.join(result) + + +def _CamelCaseToSnakeCase(path_name): + """Converts a field name from camelCase to snake_case.""" + result = [] + for c in path_name: + if c == '_': + raise ValueError('Fail to parse FieldMask: Path name ' + '{0} must not contain "_"s.'.format(path_name)) + if c.isupper(): + result += '_' + result += c.lower() + else: + result += c + return ''.join(result) + + +class _FieldMaskTree(object): + """Represents a FieldMask in a tree structure. + + For example, given a FieldMask "foo.bar,foo.baz,bar.baz", + the FieldMaskTree will be: + [_root] -+- foo -+- bar + | | + | +- baz + | + +- bar --- baz + In the tree, each leaf node represents a field path. + """ + + __slots__ = ('_root',) + + def __init__(self, field_mask=None): + """Initializes the tree by FieldMask.""" + self._root = {} + if field_mask: + self.MergeFromFieldMask(field_mask) + + def MergeFromFieldMask(self, field_mask): + """Merges a FieldMask to the tree.""" + for path in field_mask.paths: + self.AddPath(path) + + def AddPath(self, path): + """Adds a field path into the tree. + + If the field path to add is a sub-path of an existing field path + in the tree (i.e., a leaf node), it means the tree already matches + the given path so nothing will be added to the tree. If the path + matches an existing non-leaf node in the tree, that non-leaf node + will be turned into a leaf node with all its children removed because + the path matches all the node's children. Otherwise, a new path will + be added. + + Args: + path: The field path to add. 
+ """ + node = self._root + for name in path.split('.'): + if name not in node: + node[name] = {} + elif not node[name]: + # Pre-existing empty node implies we already have this entire tree. + return + node = node[name] + # Remove any sub-trees we might have had. + node.clear() + + def ToFieldMask(self, field_mask): + """Converts the tree to a FieldMask.""" + field_mask.Clear() + _AddFieldPaths(self._root, '', field_mask) + + def IntersectPath(self, path, intersection): + """Calculates the intersection part of a field path with this tree. + + Args: + path: The field path to calculates. + intersection: The out tree to record the intersection part. + """ + node = self._root + for name in path.split('.'): + if name not in node: + return + elif not node[name]: + intersection.AddPath(path) + return + node = node[name] + intersection.AddLeafNodes(path, node) + + def AddLeafNodes(self, prefix, node): + """Adds leaf nodes begin with prefix to this tree.""" + if not node: + self.AddPath(prefix) + for name in node: + child_path = prefix + '.' + name + self.AddLeafNodes(child_path, node[name]) + + def MergeMessage( + self, source, destination, + replace_message, replace_repeated): + """Merge all fields specified by this tree from source to destination.""" + _MergeMessage( + self._root, source, destination, replace_message, replace_repeated) + + +def _StrConvert(value): + """Converts value to str if it is not.""" + # This file is imported by c extension and some methods like ClearField + # requires string for the field name. py2/py3 has different text + # type and may use unicode. + if not isinstance(value, str): + return value.encode('utf-8') + return value + + +def _MergeMessage( + node, source, destination, replace_message, replace_repeated): + """Merge all fields specified by a sub-tree from source to destination.""" + source_descriptor = source.DESCRIPTOR + for name in node: + child = node[name] + field = source_descriptor.fields_by_name[name] + if field is None: + raise ValueError('Error: Can\'t find field {0} in message {1}.'.format( + name, source_descriptor.full_name)) + if child: + # Sub-paths are only allowed for singular message fields. + if (field.label == FieldDescriptor.LABEL_REPEATED or + field.cpp_type != FieldDescriptor.CPPTYPE_MESSAGE): + raise ValueError('Error: Field {0} in message {1} is not a singular ' + 'message field and cannot have sub-fields.'.format( + name, source_descriptor.full_name)) + if source.HasField(name): + _MergeMessage( + child, getattr(source, name), getattr(destination, name), + replace_message, replace_repeated) + continue + if field.label == FieldDescriptor.LABEL_REPEATED: + if replace_repeated: + destination.ClearField(_StrConvert(name)) + repeated_source = getattr(source, name) + repeated_destination = getattr(destination, name) + repeated_destination.MergeFrom(repeated_source) + else: + if field.cpp_type == FieldDescriptor.CPPTYPE_MESSAGE: + if replace_message: + destination.ClearField(_StrConvert(name)) + if source.HasField(name): + getattr(destination, name).MergeFrom(getattr(source, name)) + else: + setattr(destination, name, getattr(source, name)) + + +def _AddFieldPaths(node, prefix, field_mask): + """Adds the field paths descended from node to field_mask.""" + if not node and prefix: + field_mask.paths.append(prefix) + return + for name in sorted(node): + if prefix: + child_path = prefix + '.' 
+ name + else: + child_path = name + _AddFieldPaths(node[name], child_path, field_mask) + + +_INT_OR_FLOAT = six.integer_types + (float,) + + +def _SetStructValue(struct_value, value): + if value is None: + struct_value.null_value = 0 + elif isinstance(value, bool): + # Note: this check must come before the number check because in Python + # True and False are also considered numbers. + struct_value.bool_value = value + elif isinstance(value, six.string_types): + struct_value.string_value = value + elif isinstance(value, _INT_OR_FLOAT): + struct_value.number_value = value + elif isinstance(value, (dict, Struct)): + struct_value.struct_value.Clear() + struct_value.struct_value.update(value) + elif isinstance(value, (list, ListValue)): + struct_value.list_value.Clear() + struct_value.list_value.extend(value) + else: + raise ValueError('Unexpected type') + + +def _GetStructValue(struct_value): + which = struct_value.WhichOneof('kind') + if which == 'struct_value': + return struct_value.struct_value + elif which == 'null_value': + return None + elif which == 'number_value': + return struct_value.number_value + elif which == 'string_value': + return struct_value.string_value + elif which == 'bool_value': + return struct_value.bool_value + elif which == 'list_value': + return struct_value.list_value + elif which is None: + raise ValueError('Value not set') + + +class Struct(object): + """Class for Struct message type.""" + + __slots__ = () + + def __getitem__(self, key): + return _GetStructValue(self.fields[key]) + + def __contains__(self, item): + return item in self.fields + + def __setitem__(self, key, value): + _SetStructValue(self.fields[key], value) + + def __delitem__(self, key): + del self.fields[key] + + def __len__(self): + return len(self.fields) + + def __iter__(self): + return iter(self.fields) + + def keys(self): # pylint: disable=invalid-name + return self.fields.keys() + + def values(self): # pylint: disable=invalid-name + return [self[key] for key in self] + + def items(self): # pylint: disable=invalid-name + return [(key, self[key]) for key in self] + + def get_or_create_list(self, key): + """Returns a list for this key, creating if it didn't exist already.""" + if not self.fields[key].HasField('list_value'): + # Clear will mark list_value modified which will indeed create a list. + self.fields[key].list_value.Clear() + return self.fields[key].list_value + + def get_or_create_struct(self, key): + """Returns a struct for this key, creating if it didn't exist already.""" + if not self.fields[key].HasField('struct_value'): + # Clear will mark struct_value modified which will indeed create a struct. 
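+      # (Illustrative note, not from upstream: merely reading .struct_value
+      # returns an unset sub-message; calling Clear() on it mutates the Value,
+      # flipping its 'kind' oneof to struct_value so that
+      # HasField('struct_value') becomes true.)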
+ self.fields[key].struct_value.Clear() + return self.fields[key].struct_value + + def update(self, dictionary): # pylint: disable=invalid-name + for key, value in dictionary.items(): + _SetStructValue(self.fields[key], value) + +collections_abc.MutableMapping.register(Struct) + + +class ListValue(object): + """Class for ListValue message type.""" + + __slots__ = () + + def __len__(self): + return len(self.values) + + def append(self, value): + _SetStructValue(self.values.add(), value) + + def extend(self, elem_seq): + for value in elem_seq: + self.append(value) + + def __getitem__(self, index): + """Retrieves item by the specified index.""" + return _GetStructValue(self.values.__getitem__(index)) + + def __setitem__(self, index, value): + _SetStructValue(self.values.__getitem__(index), value) + + def __delitem__(self, key): + del self.values[key] + + def items(self): + for i in range(len(self)): + yield self[i] + + def add_struct(self): + """Appends and returns a struct value as the next value in the list.""" + struct_value = self.values.add().struct_value + # Clear will mark struct_value modified which will indeed create a struct. + struct_value.Clear() + return struct_value + + def add_list(self): + """Appends and returns a list value as the next value in the list.""" + list_value = self.values.add().list_value + # Clear will mark list_value modified which will indeed create a list. + list_value.Clear() + return list_value + +collections_abc.MutableSequence.register(ListValue) + + +WKTBASES = { + 'google.protobuf.Any': Any, + 'google.protobuf.Duration': Duration, + 'google.protobuf.FieldMask': FieldMask, + 'google.protobuf.ListValue': ListValue, + 'google.protobuf.Struct': Struct, + 'google.protobuf.Timestamp': Timestamp, +} diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/internal/wire_format.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/internal/wire_format.py new file mode 100644 index 0000000000000000000000000000000000000000..883f525585139493438c3c8922bbb82cf1b0084e --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/internal/wire_format.py @@ -0,0 +1,268 @@ +# Protocol Buffers - Google's data interchange format +# Copyright 2008 Google Inc. All rights reserved. +# https://developers.google.com/protocol-buffers/ +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions are +# met: +# +# * Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# * Redistributions in binary form must reproduce the above +# copyright notice, this list of conditions and the following disclaimer +# in the documentation and/or other materials provided with the +# distribution. +# * Neither the name of Google Inc. nor the names of its +# contributors may be used to endorse or promote products derived from +# this software without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+"""Constants and static functions to support protocol buffer wire format."""
+
+__author__ = 'robinson@google.com (Will Robinson)'
+
+import struct
+from google.protobuf import descriptor
+from google.protobuf import message
+
+
+TAG_TYPE_BITS = 3  # Number of bits used to hold type info in a proto tag.
+TAG_TYPE_MASK = (1 << TAG_TYPE_BITS) - 1  # 0x7
+
+# These numbers identify the wire type of a protocol buffer value.
+# We use the least-significant TAG_TYPE_BITS bits of the varint-encoded
+# tag-and-type to store one of these WIRETYPE_* constants.
+# These values must match WireType enum in google/protobuf/wire_format.h.
+WIRETYPE_VARINT = 0
+WIRETYPE_FIXED64 = 1
+WIRETYPE_LENGTH_DELIMITED = 2
+WIRETYPE_START_GROUP = 3
+WIRETYPE_END_GROUP = 4
+WIRETYPE_FIXED32 = 5
+_WIRETYPE_MAX = 5
+
+
+# Bounds for various integer types.
+INT32_MAX = int((1 << 31) - 1)
+INT32_MIN = int(-(1 << 31))
+UINT32_MAX = (1 << 32) - 1
+
+INT64_MAX = (1 << 63) - 1
+INT64_MIN = -(1 << 63)
+UINT64_MAX = (1 << 64) - 1
+
+# "struct" format strings that will encode/decode the specified formats.
+FORMAT_UINT32_LITTLE_ENDIAN = '<I'
+FORMAT_UINT64_LITTLE_ENDIAN = '<Q'
+FORMAT_FLOAT_LITTLE_ENDIAN = '<f'
+FORMAT_DOUBLE_LITTLE_ENDIAN = '<d'
+
+
+# We'll have to provide alternate implementations of AppendLittleEndian*() on
+# any architectures where these checks fail.
+if struct.calcsize(FORMAT_UINT32_LITTLE_ENDIAN) != 4:
+  raise AssertionError('Format "I" is not a 32-bit number.')
+if struct.calcsize(FORMAT_UINT64_LITTLE_ENDIAN) != 8:
+  raise AssertionError('Format "Q" is not a 64-bit number.')
+
+
+def PackTag(field_number, wire_type):
+  """Returns an unsigned 32-bit integer that encodes the field number and
+  wire type information in standard protocol message wire format.
+
+  Args:
+    field_number: Expected to be an integer in the range [1, 1 << 29)
+    wire_type: One of the WIRETYPE_* constants.
+  """
+  if not 0 <= wire_type <= _WIRETYPE_MAX:
+    raise message.EncodeError('Unknown wire type: %d' % wire_type)
+  return (field_number << TAG_TYPE_BITS) | wire_type
+
+
+def UnpackTag(tag):
+  """The inverse of PackTag().  Given an unsigned 32-bit number,
+  returns a (field_number, wire_type) tuple.
+  """
+  return (tag >> TAG_TYPE_BITS), (tag & TAG_TYPE_MASK)
+
+
+def ZigZagEncode(value):
+  """ZigZag Transform: Encodes signed integers so that they can be
+  effectively used with varint encoding. See wire_format.h for
+  more details.
+  """
+  if value >= 0:
+    return value << 1
+  return (value << 1) ^ (~0)
+
+
+def ZigZagDecode(value):
+  """Inverse of ZigZagEncode()."""
+  if not value & 0x1:
+    return value >> 1
+  return (value >> 1) ^ (~0)
+
+
+
+# The *ByteSize() functions below return the number of bytes required to
+# serialize "field number + type" information and then serialize the value.
+
+
+def Int32ByteSize(field_number, int32):
+  return Int64ByteSize(field_number, int32)
+
+
+def Int32ByteSizeNoTag(int32):
+  return _VarUInt64ByteSizeNoTag(0xffffffffffffffff & int32)
+
+
+def Int64ByteSize(field_number, int64):
+  # Have to convert to uint before calling UInt64ByteSize().
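+  # (Illustrative example, not from upstream: -1 & 0xffffffffffffffff equals
+  # 0xffffffffffffffff, so negative int64 values always cost the full
+  # 10-byte varint.)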
+  return UInt64ByteSize(field_number, 0xffffffffffffffff & int64)
+
+
+def UInt32ByteSize(field_number, uint32):
+  return UInt64ByteSize(field_number, uint32)
+
+
+def UInt64ByteSize(field_number, uint64):
+  return TagByteSize(field_number) + _VarUInt64ByteSizeNoTag(uint64)
+
+
+def SInt32ByteSize(field_number, int32):
+  return UInt32ByteSize(field_number, ZigZagEncode(int32))
+
+
+def SInt64ByteSize(field_number, int64):
+  return UInt64ByteSize(field_number, ZigZagEncode(int64))
+
+
+def Fixed32ByteSize(field_number, fixed32):
+  return TagByteSize(field_number) + 4
+
+
+def Fixed64ByteSize(field_number, fixed64):
+  return TagByteSize(field_number) + 8
+
+
+def SFixed32ByteSize(field_number, sfixed32):
+  return TagByteSize(field_number) + 4
+
+
+def SFixed64ByteSize(field_number, sfixed64):
+  return TagByteSize(field_number) + 8
+
+
+def FloatByteSize(field_number, flt):
+  return TagByteSize(field_number) + 4
+
+
+def DoubleByteSize(field_number, double):
+  return TagByteSize(field_number) + 8
+
+
+def BoolByteSize(field_number, b):
+  return TagByteSize(field_number) + 1
+
+
+def EnumByteSize(field_number, enum):
+  return UInt32ByteSize(field_number, enum)
+
+
+def StringByteSize(field_number, string):
+  return BytesByteSize(field_number, string.encode('utf-8'))
+
+
+def BytesByteSize(field_number, b):
+  return (TagByteSize(field_number) +
+          _VarUInt64ByteSizeNoTag(len(b)) +
+          len(b))
+
+
+def GroupByteSize(field_number, message):
+  return (2 * TagByteSize(field_number)  # START and END group.
+          + message.ByteSize())
+
+
+def MessageByteSize(field_number, message):
+  return (TagByteSize(field_number) +
+          _VarUInt64ByteSizeNoTag(message.ByteSize()) +
+          message.ByteSize())
+
+
+def MessageSetItemByteSize(field_number, msg):
+  # First compute the sizes of the tags.
+  # There are two tags for the beginning and end of the repeated group (field
+  # number 1), one tag with field number 2 (type_id) and one with field
+  # number 3 (message).
+  total_size = (2 * TagByteSize(1) + TagByteSize(2) + TagByteSize(3))
+
+  # Add the number of bytes for type_id.
+  total_size += _VarUInt64ByteSizeNoTag(field_number)
+
+  message_size = msg.ByteSize()
+
+  # The number of bytes for encoding the length of the message.
+  total_size += _VarUInt64ByteSizeNoTag(message_size)
+
+  # The size of the message.
+  total_size += message_size
+  return total_size
+
+
+def TagByteSize(field_number):
+  """Returns the bytes required to serialize a tag with this field number."""
+  # Just pass in type 0, since the type won't affect the tag+type size.
+  return _VarUInt64ByteSizeNoTag(PackTag(field_number, 0))
+
+
+# Private helper function for the *ByteSize() functions above.
+
+def _VarUInt64ByteSizeNoTag(uint64):
+  """Returns the number of bytes required to serialize a single varint
+  using boundary value comparisons (unrolled loop optimization, -WPierce).
+  uint64 must be unsigned.
+ """ + if uint64 <= 0x7f: return 1 + if uint64 <= 0x3fff: return 2 + if uint64 <= 0x1fffff: return 3 + if uint64 <= 0xfffffff: return 4 + if uint64 <= 0x7ffffffff: return 5 + if uint64 <= 0x3ffffffffff: return 6 + if uint64 <= 0x1ffffffffffff: return 7 + if uint64 <= 0xffffffffffffff: return 8 + if uint64 <= 0x7fffffffffffffff: return 9 + if uint64 > UINT64_MAX: + raise message.EncodeError('Value out of range: %d' % uint64) + return 10 + + +NON_PACKABLE_TYPES = ( + descriptor.FieldDescriptor.TYPE_STRING, + descriptor.FieldDescriptor.TYPE_GROUP, + descriptor.FieldDescriptor.TYPE_MESSAGE, + descriptor.FieldDescriptor.TYPE_BYTES +) + + +def IsTypePackable(field_type): + """Return true iff packable = true is valid for fields of this type. + + Args: + field_type: a FieldDescriptor::Type value. + + Returns: + True iff fields of this type are packable. + """ + return field_type not in NON_PACKABLE_TYPES diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/json_format.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/json_format.py new file mode 100644 index 0000000000000000000000000000000000000000..c8f5602f36b112d7ce2615144e496b200d546f8c --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/json_format.py @@ -0,0 +1,865 @@ +# Protocol Buffers - Google's data interchange format +# Copyright 2008 Google Inc. All rights reserved. +# https://developers.google.com/protocol-buffers/ +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions are +# met: +# +# * Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# * Redistributions in binary form must reproduce the above +# copyright notice, this list of conditions and the following disclaimer +# in the documentation and/or other materials provided with the +# distribution. +# * Neither the name of Google Inc. nor the names of its +# contributors may be used to endorse or promote products derived from +# this software without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +"""Contains routines for printing protocol messages in JSON format. + +Simple usage example: + + # Create a proto object and serialize it to a json format string. + message = my_proto_pb2.MyMessage(foo='bar') + json_string = json_format.MessageToJson(message) + + # Parse a json format string to proto object. 
+  message = json_format.Parse(json_string, my_proto_pb2.MyMessage())
+"""
+
+__author__ = 'jieluo@google.com (Jie Luo)'
+
+# pylint: disable=g-statement-before-imports,g-import-not-at-top
+try:
+  from collections import OrderedDict
+except ImportError:
+  from ordereddict import OrderedDict  # PY26
+# pylint: enable=g-statement-before-imports,g-import-not-at-top
+
+import base64
+import json
+import math
+
+from operator import methodcaller
+
+import re
+import sys
+
+import six
+
+from google.protobuf.internal import type_checkers
+from google.protobuf import descriptor
+from google.protobuf import symbol_database
+
+
+_TIMESTAMPFOMAT = '%Y-%m-%dT%H:%M:%S'
+_INT_TYPES = frozenset([descriptor.FieldDescriptor.CPPTYPE_INT32,
+                        descriptor.FieldDescriptor.CPPTYPE_UINT32,
+                        descriptor.FieldDescriptor.CPPTYPE_INT64,
+                        descriptor.FieldDescriptor.CPPTYPE_UINT64])
+_INT64_TYPES = frozenset([descriptor.FieldDescriptor.CPPTYPE_INT64,
+                          descriptor.FieldDescriptor.CPPTYPE_UINT64])
+_FLOAT_TYPES = frozenset([descriptor.FieldDescriptor.CPPTYPE_FLOAT,
+                          descriptor.FieldDescriptor.CPPTYPE_DOUBLE])
+_INFINITY = 'Infinity'
+_NEG_INFINITY = '-Infinity'
+_NAN = 'NaN'
+
+_UNPAIRED_SURROGATE_PATTERN = re.compile(six.u(
+    r'[\ud800-\udbff](?![\udc00-\udfff])|(?<![\ud800-\udbff])[\udc00-\udfff]'))
+
+
+def _ConvertFloat(value, field):
+  """Convert a floating point number."""
+  if isinstance(value, float):
+    if math.isnan(value):
+      raise ParseError('Couldn\'t parse NaN, use quoted "NaN" instead.')
+    if math.isinf(value):
+      if value > 0:
+        raise ParseError('Couldn\'t parse Infinity or value too large, '
+                         'use quoted "Infinity" instead.')
+      else:
+        raise ParseError('Couldn\'t parse -Infinity or value too small, '
+                         'use quoted "-Infinity" instead.')
+    if field.cpp_type == descriptor.FieldDescriptor.CPPTYPE_FLOAT:
+      # pylint: disable=protected-access
+      if value > type_checkers._FLOAT_MAX:
+        raise ParseError('Float value too large')
+      # pylint: disable=protected-access
+      if value < type_checkers._FLOAT_MIN:
+        raise ParseError('Float value too small')
+  if value == 'nan':
+    raise ParseError('Couldn\'t parse float "nan", use "NaN" instead.')
+  try:
+    # Assume Python compatible syntax.
+    return float(value)
+  except ValueError:
+    # Check alternative spellings.
+    if value == _NEG_INFINITY:
+      return float('-inf')
+    elif value == _INFINITY:
+      return float('inf')
+    elif value == _NAN:
+      return float('nan')
+    else:
+      raise ParseError('Couldn\'t parse float: {0}.'.format(value))
+
+
+def _ConvertBool(value, require_str):
+  """Convert a boolean value.
+
+  Args:
+    value: A scalar value to convert.
+    require_str: If True, value must be a str.
+
+  Returns:
+    The bool parsed.
+
+  Raises:
+    ParseError: If a boolean value couldn't be consumed.
+ """ + if require_str: + if value == 'true': + return True + elif value == 'false': + return False + else: + raise ParseError('Expected "true" or "false", not {0}.'.format(value)) + + if not isinstance(value, bool): + raise ParseError('Expected true or false without quotes.') + return value + +_WKTJSONMETHODS = { + 'google.protobuf.Any': ['_AnyMessageToJsonObject', + '_ConvertAnyMessage'], + 'google.protobuf.Duration': ['_GenericMessageToJsonObject', + '_ConvertGenericMessage'], + 'google.protobuf.FieldMask': ['_GenericMessageToJsonObject', + '_ConvertGenericMessage'], + 'google.protobuf.ListValue': ['_ListValueMessageToJsonObject', + '_ConvertListValueMessage'], + 'google.protobuf.Struct': ['_StructMessageToJsonObject', + '_ConvertStructMessage'], + 'google.protobuf.Timestamp': ['_GenericMessageToJsonObject', + '_ConvertGenericMessage'], + 'google.protobuf.Value': ['_ValueMessageToJsonObject', + '_ConvertValueMessage'] +} diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/message.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/message.py new file mode 100644 index 0000000000000000000000000000000000000000..224d2fc49178169fbe5add2a1989e839d154c746 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/message.py @@ -0,0 +1,413 @@ +# Protocol Buffers - Google's data interchange format +# Copyright 2008 Google Inc. All rights reserved. +# https://developers.google.com/protocol-buffers/ +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions are +# met: +# +# * Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# * Redistributions in binary form must reproduce the above +# copyright notice, this list of conditions and the following disclaimer +# in the documentation and/or other materials provided with the +# distribution. +# * Neither the name of Google Inc. nor the names of its +# contributors may be used to endorse or promote products derived from +# this software without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +# TODO(robinson): We should just make these methods all "pure-virtual" and move +# all implementation out, into reflection.py for now. 
+ + +"""Contains an abstract base class for protocol messages.""" + +__author__ = 'robinson@google.com (Will Robinson)' + +class Error(Exception): + """Base error type for this module.""" + pass + + +class DecodeError(Error): + """Exception raised when deserializing messages.""" + pass + + +class EncodeError(Error): + """Exception raised when serializing messages.""" + pass + + +class Message(object): + + """Abstract base class for protocol messages. + + Protocol message classes are almost always generated by the protocol + compiler. These generated types subclass Message and implement the methods + shown below. + """ + + # TODO(robinson): Link to an HTML document here. + + # TODO(robinson): Document that instances of this class will also + # have an Extensions attribute with __getitem__ and __setitem__. + # Again, not sure how to best convey this. + + # TODO(robinson): Document that the class must also have a static + # RegisterExtension(extension_field) method. + # Not sure how to best express at this point. + + # TODO(robinson): Document these fields and methods. + + __slots__ = [] + + #: The :class:`google.protobuf.descriptor.Descriptor` for this message type. + DESCRIPTOR = None + + def __deepcopy__(self, memo=None): + clone = type(self)() + clone.MergeFrom(self) + return clone + + def __eq__(self, other_msg): + """Recursively compares two messages by value and structure.""" + raise NotImplementedError + + def __ne__(self, other_msg): + # Can't just say self != other_msg, since that would infinitely recurse. :) + return not self == other_msg + + def __hash__(self): + raise TypeError('unhashable object') + + def __str__(self): + """Outputs a human-readable representation of the message.""" + raise NotImplementedError + + def __unicode__(self): + """Outputs a human-readable representation of the message.""" + raise NotImplementedError + + def MergeFrom(self, other_msg): + """Merges the contents of the specified message into current message. + + This method merges the contents of the specified message into the current + message. Singular fields that are set in the specified message overwrite + the corresponding fields in the current message. Repeated fields are + appended. Singular sub-messages and groups are recursively merged. + + Args: + other_msg (Message): A message to merge into the current message. + """ + raise NotImplementedError + + def CopyFrom(self, other_msg): + """Copies the content of the specified message into the current message. + + The method clears the current message and then merges the specified + message using MergeFrom. + + Args: + other_msg (Message): A message to copy into the current one. + """ + if self is other_msg: + return + self.Clear() + self.MergeFrom(other_msg) + + def Clear(self): + """Clears all data that was set in the message.""" + raise NotImplementedError + + def SetInParent(self): + """Mark this as present in the parent. + + This normally happens automatically when you assign a field of a + sub-message, but sometimes you want to make the sub-message + present while keeping it empty. If you find yourself using this, + you may want to reconsider your design. + """ + raise NotImplementedError + + def IsInitialized(self): + """Checks if the message is initialized. + + Returns: + bool: The method returns True if the message is initialized (i.e. all of + its required fields are set). 
+ """ + raise NotImplementedError + + # TODO(robinson): MergeFromString() should probably return None and be + # implemented in terms of a helper that returns the # of bytes read. Our + # deserialization routines would use the helper when recursively + # deserializing, but the end user would almost always just want the no-return + # MergeFromString(). + + def MergeFromString(self, serialized): + """Merges serialized protocol buffer data into this message. + + When we find a field in `serialized` that is already present + in this message: + + - If it's a "repeated" field, we append to the end of our list. + - Else, if it's a scalar, we overwrite our field. + - Else, (it's a nonrepeated composite), we recursively merge + into the existing composite. + + Args: + serialized (bytes): Any object that allows us to call + ``memoryview(serialized)`` to access a string of bytes using the + buffer interface. + + Returns: + int: The number of bytes read from `serialized`. + For non-group messages, this will always be `len(serialized)`, + but for messages which are actually groups, this will + generally be less than `len(serialized)`, since we must + stop when we reach an ``END_GROUP`` tag. Note that if + we *do* stop because of an ``END_GROUP`` tag, the number + of bytes returned does not include the bytes + for the ``END_GROUP`` tag information. + + Raises: + DecodeError: if the input cannot be parsed. + """ + # TODO(robinson): Document handling of unknown fields. + # TODO(robinson): When we switch to a helper, this will return None. + raise NotImplementedError + + def ParseFromString(self, serialized): + """Parse serialized protocol buffer data into this message. + + Like :func:`MergeFromString()`, except we clear the object first. + """ + self.Clear() + return self.MergeFromString(serialized) + + def SerializeToString(self, **kwargs): + """Serializes the protocol message to a binary string. + + Keyword Args: + deterministic (bool): If true, requests deterministic serialization + of the protobuf, with predictable ordering of map keys. + + Returns: + A binary string representation of the message if all of the required + fields in the message are set (i.e. the message is initialized). + + Raises: + EncodeError: if the message isn't initialized (see :func:`IsInitialized`). + """ + raise NotImplementedError + + def SerializePartialToString(self, **kwargs): + """Serializes the protocol message to a binary string. + + This method is similar to SerializeToString but doesn't check if the + message is initialized. + + Keyword Args: + deterministic (bool): If true, requests deterministic serialization + of the protobuf, with predictable ordering of map keys. + + Returns: + bytes: A serialized representation of the partial message. + """ + raise NotImplementedError + + # TODO(robinson): Decide whether we like these better + # than auto-generated has_foo() and clear_foo() methods + # on the instances themselves. This way is less consistent + # with C++, but it makes reflection-type access easier and + # reduces the number of magically autogenerated things. + # + # TODO(robinson): Be sure to document (and test) exactly + # which field names are accepted here. Are we case-sensitive? + # What do we do with fields that share names with Python keywords + # like 'lambda' and 'yield'? + # + # nnorwitz says: + # """ + # Typically (in python), an underscore is appended to names that are + # keywords. So they would become lambda_ or yield_. 
+ # """ + def ListFields(self): + """Returns a list of (FieldDescriptor, value) tuples for present fields. + + A message field is non-empty if HasField() would return true. A singular + primitive field is non-empty if HasField() would return true in proto2 or it + is non zero in proto3. A repeated field is non-empty if it contains at least + one element. The fields are ordered by field number. + + Returns: + list[tuple(FieldDescriptor, value)]: field descriptors and values + for all fields in the message which are not empty. The values vary by + field type. + """ + raise NotImplementedError + + def HasField(self, field_name): + """Checks if a certain field is set for the message. + + For a oneof group, checks if any field inside is set. Note that if the + field_name is not defined in the message descriptor, :exc:`ValueError` will + be raised. + + Args: + field_name (str): The name of the field to check for presence. + + Returns: + bool: Whether a value has been set for the named field. + + Raises: + ValueError: if the `field_name` is not a member of this message. + """ + raise NotImplementedError + + def ClearField(self, field_name): + """Clears the contents of a given field. + + Inside a oneof group, clears the field set. If the name neither refers to a + defined field or oneof group, :exc:`ValueError` is raised. + + Args: + field_name (str): The name of the field to check for presence. + + Raises: + ValueError: if the `field_name` is not a member of this message. + """ + raise NotImplementedError + + def WhichOneof(self, oneof_group): + """Returns the name of the field that is set inside a oneof group. + + If no field is set, returns None. + + Args: + oneof_group (str): the name of the oneof group to check. + + Returns: + str or None: The name of the group that is set, or None. + + Raises: + ValueError: no group with the given name exists + """ + raise NotImplementedError + + def HasExtension(self, extension_handle): + """Checks if a certain extension is present for this message. + + Extensions are retrieved using the :attr:`Extensions` mapping (if present). + + Args: + extension_handle: The handle for the extension to check. + + Returns: + bool: Whether the extension is present for this message. + + Raises: + KeyError: if the extension is repeated. Similar to repeated fields, + there is no separate notion of presence: a "not present" repeated + extension is an empty list. + """ + raise NotImplementedError + + def ClearExtension(self, extension_handle): + """Clears the contents of a given extension. + + Args: + extension_handle: The handle for the extension to clear. + """ + raise NotImplementedError + + def UnknownFields(self): + """Returns the UnknownFieldSet. + + Returns: + UnknownFieldSet: The unknown fields stored in this message. + """ + raise NotImplementedError + + def DiscardUnknownFields(self): + """Clears all fields in the :class:`UnknownFieldSet`. + + This operation is recursive for nested message. + """ + raise NotImplementedError + + def ByteSize(self): + """Returns the serialized size of this message. + + Recursively calls ByteSize() on all contained messages. + + Returns: + int: The number of bytes required to serialize this message. + """ + raise NotImplementedError + + def _SetListener(self, message_listener): + """Internal method used by the protocol message implementation. + Clients should not call this directly. + + Sets a listener that this message will call on certain state transitions. 
+ + The purpose of this method is to register back-edges from children to + parents at runtime, for the purpose of setting "has" bits and + byte-size-dirty bits in the parent and ancestor objects whenever a child or + descendant object is modified. + + If the client wants to disconnect this Message from the object tree, she + explicitly sets callback to None. + + If message_listener is None, unregisters any existing listener. Otherwise, + message_listener must implement the MessageListener interface in + internal/message_listener.py, and we discard any listener registered + via a previous _SetListener() call. + """ + raise NotImplementedError + + def __getstate__(self): + """Support the pickle protocol.""" + return dict(serialized=self.SerializePartialToString()) + + def __setstate__(self, state): + """Support the pickle protocol.""" + self.__init__() + serialized = state['serialized'] + # On Python 3, using encoding='latin1' is required for unpickling + # protos pickled by Python 2. + if not isinstance(serialized, bytes): + serialized = serialized.encode('latin1') + self.ParseFromString(serialized) + + def __reduce__(self): + message_descriptor = self.DESCRIPTOR + if message_descriptor.containing_type is None: + return type(self), (), self.__getstate__() + # the message type must be nested. + # Python does not pickle nested classes; use the symbol_database on the + # receiving end. + container = message_descriptor + return (_InternalConstructMessage, (container.full_name,), + self.__getstate__()) + + +def _InternalConstructMessage(full_name): + """Constructs a nested message.""" + from google.protobuf import symbol_database # pylint:disable=g-import-not-at-top + + return symbol_database.Default().GetSymbol(full_name)() diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/message_factory.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/message_factory.py new file mode 100644 index 0000000000000000000000000000000000000000..7dfaec88e15e176adb7363c288c53d189b37a294 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/message_factory.py @@ -0,0 +1,187 @@ +# Protocol Buffers - Google's data interchange format +# Copyright 2008 Google Inc. All rights reserved. +# https://developers.google.com/protocol-buffers/ +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions are +# met: +# +# * Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# * Redistributions in binary form must reproduce the above +# copyright notice, this list of conditions and the following disclaimer +# in the documentation and/or other materials provided with the +# distribution. +# * Neither the name of Google Inc. nor the names of its +# contributors may be used to endorse or promote products derived from +# this software without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+"""Provides a factory class for generating dynamic messages.
+
+The easiest way to use this class is if you have access to the FileDescriptor
+protos containing the messages you want to create: you can then just do the
+following:
+
+message_classes = message_factory.GetMessages(iterable_of_file_descriptors)
+my_proto_instance = message_classes['some.proto.package.MessageName']()
+"""
+
+__author__ = 'matthewtoia@google.com (Matt Toia)'
+
+from google.protobuf.internal import api_implementation
+from google.protobuf import descriptor_pool
+from google.protobuf import message
+
+if api_implementation.Type() == 'cpp':
+  from google.protobuf.pyext import cpp_message as message_impl
+else:
+  from google.protobuf.internal import python_message as message_impl
+
+
+# The type of all Message classes.
+_GENERATED_PROTOCOL_MESSAGE_TYPE = message_impl.GeneratedProtocolMessageType
+
+
+class MessageFactory(object):
+  """Factory for creating Proto2 messages from descriptors in a pool."""
+
+  def __init__(self, pool=None):
+    """Initializes a new factory."""
+    self.pool = pool or descriptor_pool.DescriptorPool()
+
+    # local cache of all classes built from protobuf descriptors
+    self._classes = {}
+
+  def GetPrototype(self, descriptor):
+    """Obtains a proto2 message class based on the passed in descriptor.
+
+    Passing a descriptor with a fully qualified name matching a previous
+    invocation will cause the same class to be returned.
+
+    Args:
+      descriptor: The descriptor to build from.
+
+    Returns:
+      A class describing the passed in descriptor.
+    """
+    if descriptor not in self._classes:
+      result_class = self.CreatePrototype(descriptor)
+      # The assignment to _classes is redundant for the base implementation, but
+      # might avoid confusion in cases where CreatePrototype gets overridden and
+      # does not call the base implementation.
+      self._classes[descriptor] = result_class
+      return result_class
+    return self._classes[descriptor]
+
+  def CreatePrototype(self, descriptor):
+    """Builds a proto2 message class based on the passed in descriptor.
+
+    Don't call this function directly; it always creates a new class. Call
+    GetPrototype() instead. This method is meant to be overridden in
+    subclasses to perform additional operations on the newly constructed
+    class.
+
+    Args:
+      descriptor: The descriptor to build from.
+
+    Returns:
+      A class describing the passed in descriptor.
+    """
+    descriptor_name = descriptor.name
+    if str is bytes:  # PY2
+      descriptor_name = descriptor.name.encode('ascii', 'ignore')
+    result_class = _GENERATED_PROTOCOL_MESSAGE_TYPE(
+        descriptor_name,
+        (message.Message,),
+        {
+            'DESCRIPTOR': descriptor,
+            # If module not set, it wrongly points to message_factory module.
+            '__module__': None,
+        })
+    result_class._FACTORY = self  # pylint: disable=protected-access
+    # Assign in _classes before doing recursive calls to avoid infinite
+    # recursion.
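+    # (Example of why: for a self-referential type such as
+    #   message Node { Node child = 1; }
+    # the GetPrototype(field.message_type) call below would re-enter
+    # CreatePrototype for Node itself; caching the class first breaks
+    # the cycle.)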
+    self._classes[descriptor] = result_class
+    for field in descriptor.fields:
+      if field.message_type:
+        self.GetPrototype(field.message_type)
+    for extension in result_class.DESCRIPTOR.extensions:
+      if extension.containing_type not in self._classes:
+        self.GetPrototype(extension.containing_type)
+      extended_class = self._classes[extension.containing_type]
+      extended_class.RegisterExtension(extension)
+    return result_class
+
+  def GetMessages(self, files):
+    """Gets all the messages from the specified files.
+
+    This will find and resolve dependencies, failing if the descriptor
+    pool cannot satisfy them.
+
+    Args:
+      files: The file names to extract messages from.
+
+    Returns:
+      A dictionary mapping proto names to the message classes. This will include
+      any dependent messages as well as any messages defined in the same file as
+      a specified message.
+    """
+    result = {}
+    for file_name in files:
+      file_desc = self.pool.FindFileByName(file_name)
+      for desc in file_desc.message_types_by_name.values():
+        result[desc.full_name] = self.GetPrototype(desc)
+
+      # While the extension FieldDescriptors are created by the descriptor pool,
+      # the python classes created in the factory need them to be registered
+      # explicitly, which is done below.
+      #
+      # The call to RegisterExtension will specifically check if the
+      # extension was already registered on the object and either
+      # ignore the registration if the original was the same, or raise
+      # an error if they were different.
+
+      for extension in file_desc.extensions_by_name.values():
+        if extension.containing_type not in self._classes:
+          self.GetPrototype(extension.containing_type)
+        extended_class = self._classes[extension.containing_type]
+        extended_class.RegisterExtension(extension)
+    return result
+
+
+_FACTORY = MessageFactory()
+
+
+def GetMessages(file_protos):
+  """Builds a dictionary of all the messages available in a set of files.
+
+  Args:
+    file_protos: Iterable of FileDescriptorProto to build messages out of.
+
+  Returns:
+    A dictionary mapping proto names to the message classes. This will include
+    any dependent messages as well as any messages defined in the same file as
+    a specified message.
+  """
+  # The cpp implementation of the protocol buffer library requires the files
+  # to be added in topological order of the dependency graph.
+  file_by_name = {file_proto.name: file_proto for file_proto in file_protos}
+  def _AddFile(file_proto):
+    for dependency in file_proto.dependency:
+      if dependency in file_by_name:
+        # Remove from elements to be visited, in order to cut cycles.
+        _AddFile(file_by_name.pop(dependency))
+    _FACTORY.pool.Add(file_proto)
+  while file_by_name:
+    _AddFile(file_by_name.popitem()[1])
+  return _FACTORY.GetMessages([file_proto.name for file_proto in file_protos])
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/proto_builder.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/proto_builder.py
new file mode 100644
index 0000000000000000000000000000000000000000..736caed385947c5c0c446b86a4d9592571d33326
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/proto_builder.py
@@ -0,0 +1,130 @@
+# Protocol Buffers - Google's data interchange format
+# Copyright 2008 Google Inc. All rights reserved.
+# https://developers.google.com/protocol-buffers/
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions are
+# met:
+#
+# * Redistributions of source code must retain the above copyright
+# notice, this list of conditions and the following disclaimer.
+# * Redistributions in binary form must reproduce the above
+# copyright notice, this list of conditions and the following disclaimer
+# in the documentation and/or other materials provided with the
+# distribution.
+# * Neither the name of Google Inc. nor the names of its
+# contributors may be used to endorse or promote products derived from
+# this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+"""Dynamic Protobuf class creator."""
+
+try:
+  from collections import OrderedDict
+except ImportError:
+  from ordereddict import OrderedDict  # PY26
+import hashlib
+import os
+
+from google.protobuf import descriptor_pb2
+from google.protobuf import message_factory
+
+
+def _GetMessageFromFactory(factory, full_name):
+  """Get a proto class from the MessageFactory by name.
+
+  Args:
+    factory: a MessageFactory instance.
+    full_name: str, the fully qualified name of the proto type.
+  Returns:
+    A class, for the type identified by full_name.
+  Raises:
+    KeyError: if the proto is not found in the factory's descriptor pool.
+  """
+  proto_descriptor = factory.pool.FindMessageTypeByName(full_name)
+  proto_cls = factory.GetPrototype(proto_descriptor)
+  return proto_cls
+
+
+def MakeSimpleProtoClass(fields, full_name=None, pool=None):
+  """Create a Protobuf class whose fields are basic types.
+
+  Note: this doesn't validate field names!
+
+  Args:
+    fields: dict of {name: field_type} mappings for each field in the proto. If
+      this is an OrderedDict the order will be maintained, otherwise the
+      fields will be sorted by name.
+    full_name: optional str, the fully-qualified name of the proto type.
+    pool: optional DescriptorPool instance.
+  Returns:
+    a class, the new protobuf class with a FileDescriptor.
+  """
+  factory = message_factory.MessageFactory(pool=pool)
+
+  if full_name is not None:
+    try:
+      proto_cls = _GetMessageFromFactory(factory, full_name)
+      return proto_cls
+    except KeyError:
+      # The factory's DescriptorPool doesn't know about this class yet.
+      pass
+
+  # Get a list of (name, field_type) tuples from the fields dict. If fields was
+  # an OrderedDict we keep the order, but otherwise we sort the fields to
+  # ensure consistent ordering.
+  field_items = fields.items()
+  if not isinstance(fields, OrderedDict):
+    field_items = sorted(field_items)
+
+  # Use a consistent file name that is unlikely to conflict with any imported
+  # proto files.
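+  # (Illustration: for fields such as
+  #   {'foo': descriptor_pb2.FieldDescriptorProto.TYPE_INT64},
+  # the loop below hashes 'foo' and str(TYPE_INT64), i.e. '3', so every call
+  # with the same field layout maps to the same '<sha1 hex>.proto' name.)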
+ fields_hash = hashlib.sha1() + for f_name, f_type in field_items: + fields_hash.update(f_name.encode('utf-8')) + fields_hash.update(str(f_type).encode('utf-8')) + proto_file_name = fields_hash.hexdigest() + '.proto' + + # If the proto is anonymous, use the same hash to name it. + if full_name is None: + full_name = ('net.proto2.python.public.proto_builder.AnonymousProto_' + + fields_hash.hexdigest()) + try: + proto_cls = _GetMessageFromFactory(factory, full_name) + return proto_cls + except KeyError: + # The factory's DescriptorPool doesn't know about this class yet. + pass + + # This is the first time we see this proto: add a new descriptor to the pool. + factory.pool.Add( + _MakeFileDescriptorProto(proto_file_name, full_name, field_items)) + return _GetMessageFromFactory(factory, full_name) + + +def _MakeFileDescriptorProto(proto_file_name, full_name, field_items): + """Populate FileDescriptorProto for MessageFactory's DescriptorPool.""" + package, name = full_name.rsplit('.', 1) + file_proto = descriptor_pb2.FileDescriptorProto() + file_proto.name = os.path.join(package.replace('.', '/'), proto_file_name) + file_proto.package = package + desc_proto = file_proto.message_type.add() + desc_proto.name = name + for f_number, (f_name, f_type) in enumerate(field_items, 1): + field_proto = desc_proto.field.add() + field_proto.name = f_name + field_proto.number = f_number + field_proto.label = descriptor_pb2.FieldDescriptorProto.LABEL_OPTIONAL + field_proto.type = f_type + return file_proto diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/pyext/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/pyext/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/pyext/_message.cpython-38-x86_64-linux-gnu.so b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/pyext/_message.cpython-38-x86_64-linux-gnu.so new file mode 100755 index 0000000000000000000000000000000000000000..5e77c633f0d3df7910413eed356dbc9b03400ceb Binary files /dev/null and b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/pyext/_message.cpython-38-x86_64-linux-gnu.so differ diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/pyext/cpp_message.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/pyext/cpp_message.py new file mode 100644 index 0000000000000000000000000000000000000000..fc8eb32d79f60ff95b328ec5a828593ab78e1802 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/pyext/cpp_message.py @@ -0,0 +1,65 @@ +# Protocol Buffers - Google's data interchange format +# Copyright 2008 Google Inc. All rights reserved. +# https://developers.google.com/protocol-buffers/ +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions are +# met: +# +# * Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. 
+# * Redistributions in binary form must reproduce the above
+# copyright notice, this list of conditions and the following disclaimer
+# in the documentation and/or other materials provided with the
+# distribution.
+# * Neither the name of Google Inc. nor the names of its
+# contributors may be used to endorse or promote products derived from
+# this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+"""Protocol message implementation hooks for C++ implementation.
+
+Contains helper functions used to create protocol message classes from
+Descriptor objects at runtime backed by the protocol buffer C++ API.
+"""
+
+__author__ = 'tibell@google.com (Johan Tibell)'
+
+from google.protobuf.pyext import _message
+
+
+class GeneratedProtocolMessageType(_message.MessageMeta):
+
+  """Metaclass for protocol message classes created at runtime from Descriptors.
+
+  The protocol compiler currently uses this metaclass to create protocol
+  message classes at runtime. Clients can also manually create their own
+  classes at runtime, as in this example:
+
+    mydescriptor = Descriptor(.....)
+    factory = symbol_database.Default()
+    factory.pool.AddDescriptor(mydescriptor)
+    MyProtoClass = factory.GetPrototype(mydescriptor)
+    myproto_instance = MyProtoClass()
+    myproto_instance.foo_field = 23
+    ...
+
+  The above example will not work for nested types. If you wish to include them,
+  use reflection.MakeClass() instead of manually instantiating the class in
+  order to create the appropriate class structure.
+  """
+
+  # Must be consistent with the protocol-compiler code in
+  # proto2/compiler/internal/generator.*.
+  _DESCRIPTOR_KEY = 'DESCRIPTOR'
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/reflection.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/reflection.py
new file mode 100644
index 0000000000000000000000000000000000000000..81e18859a804d589d67b0a0642de718ba1bbce13
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/reflection.py
@@ -0,0 +1,95 @@
+# Protocol Buffers - Google's data interchange format
+# Copyright 2008 Google Inc. All rights reserved.
+# https://developers.google.com/protocol-buffers/
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions are
+# met:
+#
+# * Redistributions of source code must retain the above copyright
+# notice, this list of conditions and the following disclaimer.
+# * Redistributions in binary form must reproduce the above +# copyright notice, this list of conditions and the following disclaimer +# in the documentation and/or other materials provided with the +# distribution. +# * Neither the name of Google Inc. nor the names of its +# contributors may be used to endorse or promote products derived from +# this software without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +# This code is meant to work on Python 2.4 and above only. + +"""Contains a metaclass and helper functions used to create +protocol message classes from Descriptor objects at runtime. + +Recall that a metaclass is the "type" of a class. +(A class is to a metaclass what an instance is to a class.) + +In this case, we use the GeneratedProtocolMessageType metaclass +to inject all the useful functionality into the classes +output by the protocol compiler at compile-time. + +The upshot of all this is that the real implementation +details for ALL pure-Python protocol buffers are *here in +this file*. +""" + +__author__ = 'robinson@google.com (Will Robinson)' + + +from google.protobuf import message_factory +from google.protobuf import symbol_database + +# The type of all Message classes. +# Part of the public interface, but normally only used by message factories. +GeneratedProtocolMessageType = message_factory._GENERATED_PROTOCOL_MESSAGE_TYPE + +MESSAGE_CLASS_CACHE = {} + + +# Deprecated. Please NEVER use reflection.ParseMessage(). +def ParseMessage(descriptor, byte_str): + """Generate a new Message instance from this Descriptor and a byte string. + + DEPRECATED: ParseMessage is deprecated because it is using MakeClass(). + Please use MessageFactory.GetPrototype() instead. + + Args: + descriptor: Protobuf Descriptor object + byte_str: Serialized protocol buffer byte string + + Returns: + Newly created protobuf Message object. + """ + result_class = MakeClass(descriptor) + new_msg = result_class() + new_msg.ParseFromString(byte_str) + return new_msg + + +# Deprecated. Please NEVER use reflection.MakeClass(). +def MakeClass(descriptor): + """Construct a class object for a protobuf described by descriptor. + + DEPRECATED: use MessageFactory.GetPrototype() instead. + + Args: + descriptor: A descriptor.Descriptor object describing the protobuf. + Returns: + The Message class object described by the descriptor. + """ + # Original implementation leads to duplicate message classes, which won't play + # well with extensions. Message factory info is also missing. + # Redirect to message_factory. 
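+  # (In this protobuf vintage, symbol_database.SymbolDatabase subclasses
+  # message_factory.MessageFactory, so the call below reuses the cached
+  # GetPrototype() logic from message_factory.py against the process-wide
+  # default descriptor pool.)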
+ return symbol_database.Default().GetPrototype(descriptor) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/service.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/service.py new file mode 100644 index 0000000000000000000000000000000000000000..5625246324cad3c71108a4466466d1d3b1568907 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/service.py @@ -0,0 +1,228 @@ +# Protocol Buffers - Google's data interchange format +# Copyright 2008 Google Inc. All rights reserved. +# https://developers.google.com/protocol-buffers/ +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions are +# met: +# +# * Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# * Redistributions in binary form must reproduce the above +# copyright notice, this list of conditions and the following disclaimer +# in the documentation and/or other materials provided with the +# distribution. +# * Neither the name of Google Inc. nor the names of its +# contributors may be used to endorse or promote products derived from +# this software without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +"""DEPRECATED: Declares the RPC service interfaces. + +This module declares the abstract interfaces underlying proto2 RPC +services. These are intended to be independent of any particular RPC +implementation, so that proto2 services can be used on top of a variety +of implementations. Starting with version 2.3.0, RPC implementations should +not try to build on these, but should instead provide code generator plugins +which generate code specific to the particular RPC implementation. This way +the generated code can be more appropriate for the implementation in use +and can avoid unnecessary layers of indirection. +""" + +__author__ = 'petar@google.com (Petar Petrov)' + + +class RpcException(Exception): + """Exception raised on failed blocking RPC method call.""" + pass + + +class Service(object): + + """Abstract base interface for protocol-buffer-based RPC services. + + Services themselves are abstract classes (implemented either by servers or as + stubs), but they subclass this base interface. The methods of this + interface can be used to call the methods of the service without knowing + its exact type at compile time (analogous to the Message interface). 
+ """ + + def GetDescriptor(): + """Retrieves this service's descriptor.""" + raise NotImplementedError + + def CallMethod(self, method_descriptor, rpc_controller, + request, done): + """Calls a method of the service specified by method_descriptor. + + If "done" is None then the call is blocking and the response + message will be returned directly. Otherwise the call is asynchronous + and "done" will later be called with the response value. + + In the blocking case, RpcException will be raised on error. + + Preconditions: + + * method_descriptor.service == GetDescriptor + * request is of the exact same classes as returned by + GetRequestClass(method). + * After the call has started, the request must not be modified. + * "rpc_controller" is of the correct type for the RPC implementation being + used by this Service. For stubs, the "correct type" depends on the + RpcChannel which the stub is using. + + Postconditions: + + * "done" will be called when the method is complete. This may be + before CallMethod() returns or it may be at some point in the future. + * If the RPC failed, the response value passed to "done" will be None. + Further details about the failure can be found by querying the + RpcController. + """ + raise NotImplementedError + + def GetRequestClass(self, method_descriptor): + """Returns the class of the request message for the specified method. + + CallMethod() requires that the request is of a particular subclass of + Message. GetRequestClass() gets the default instance of this required + type. + + Example: + method = service.GetDescriptor().FindMethodByName("Foo") + request = stub.GetRequestClass(method)() + request.ParseFromString(input) + service.CallMethod(method, request, callback) + """ + raise NotImplementedError + + def GetResponseClass(self, method_descriptor): + """Returns the class of the response message for the specified method. + + This method isn't really needed, as the RpcChannel's CallMethod constructs + the response protocol message. It's provided anyway in case it is useful + for the caller to know the response type in advance. + """ + raise NotImplementedError + + +class RpcController(object): + + """An RpcController mediates a single method call. + + The primary purpose of the controller is to provide a way to manipulate + settings specific to the RPC implementation and to find out about RPC-level + errors. The methods provided by the RpcController interface are intended + to be a "least common denominator" set of features which we expect all + implementations to support. Specific implementations may provide more + advanced features (e.g. deadline propagation). + """ + + # Client-side methods below + + def Reset(self): + """Resets the RpcController to its initial state. + + After the RpcController has been reset, it may be reused in + a new call. Must not be called while an RPC is in progress. + """ + raise NotImplementedError + + def Failed(self): + """Returns true if the call failed. + + After a call has finished, returns true if the call failed. The possible + reasons for failure depend on the RPC implementation. Failed() must not + be called before a call has finished. If Failed() returns true, the + contents of the response message are undefined. + """ + raise NotImplementedError + + def ErrorText(self): + """If Failed is true, returns a human-readable description of the error.""" + raise NotImplementedError + + def StartCancel(self): + """Initiate cancellation. + + Advises the RPC system that the caller desires that the RPC call be + canceled. 
+    then cancel it, or may not even cancel the call at all. If the call is
+    canceled, the "done" callback will still be called and the RpcController
+    will indicate that the call failed at that time.
+    """
+    raise NotImplementedError
+
+  # Server-side methods below
+
+  def SetFailed(self, reason):
+    """Sets a failure reason.
+
+    Causes Failed() to return true on the client side. "reason" will be
+    incorporated into the message returned by ErrorText(). If you find
+    you need to return machine-readable information about failures, you
+    should incorporate it into your response protocol buffer and should
+    NOT call SetFailed().
+    """
+    raise NotImplementedError
+
+  def IsCanceled(self):
+    """Checks if the client canceled the RPC.
+
+    If true, indicates that the client canceled the RPC, so the server may
+    as well give up on replying to it. The server should still call the
+    final "done" callback.
+    """
+    raise NotImplementedError
+
+  def NotifyOnCancel(self, callback):
+    """Sets a callback to invoke on cancel.
+
+    Asks that the given callback be called when the RPC is canceled. The
+    callback will always be called exactly once. If the RPC completes without
+    being canceled, the callback will be called after completion. If the RPC
+    has already been canceled when NotifyOnCancel() is called, the callback
+    will be called immediately.
+
+    NotifyOnCancel() must be called no more than once per request.
+    """
+    raise NotImplementedError
+
+
+class RpcChannel(object):
+
+  """Abstract interface for an RPC channel.
+
+  An RpcChannel represents a communication line to a service which can be used
+  to call that service's methods. The service may be running on another
+  machine. Normally, you should not use an RpcChannel directly, but instead
+  construct a stub Service wrapping it. Example:
+
+    channel = rpcImpl.Channel("remotehost.example.com:1234")
+    controller = rpcImpl.Controller()
+    service = MyService_Stub(channel)
+    service.MyMethod(controller, request, callback)
+  """
+
+  def CallMethod(self, method_descriptor, rpc_controller,
+                 request, response_class, done):
+    """Calls the method identified by the descriptor.
+
+    Call the given method of the remote service. The signature of this
+    procedure looks the same as Service.CallMethod(), but the requirements
+    are less strict in one important way: the request object doesn't have to
+    be of any specific class as long as its descriptor is method.input_type.
+    """
+    raise NotImplementedError
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/service_reflection.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/service_reflection.py
new file mode 100644
index 0000000000000000000000000000000000000000..75c51ff3221af114d73e9eca6e74b14e32a59a57
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/service_reflection.py
@@ -0,0 +1,287 @@
+# Protocol Buffers - Google's data interchange format
+# Copyright 2008 Google Inc. All rights reserved.
+# https://developers.google.com/protocol-buffers/
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions are
+# met:
+#
+# * Redistributions of source code must retain the above copyright
+# notice, this list of conditions and the following disclaimer.
+# * Redistributions in binary form must reproduce the above +# copyright notice, this list of conditions and the following disclaimer +# in the documentation and/or other materials provided with the +# distribution. +# * Neither the name of Google Inc. nor the names of its +# contributors may be used to endorse or promote products derived from +# this software without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +"""Contains metaclasses used to create protocol service and service stub +classes from ServiceDescriptor objects at runtime. + +The GeneratedServiceType and GeneratedServiceStubType metaclasses are used to +inject all useful functionality into the classes output by the protocol +compiler at compile-time. +""" + +__author__ = 'petar@google.com (Petar Petrov)' + + +class GeneratedServiceType(type): + + """Metaclass for service classes created at runtime from ServiceDescriptors. + + Implementations for all methods described in the Service class are added here + by this class. We also create properties to allow getting/setting all fields + in the protocol message. + + The protocol compiler currently uses this metaclass to create protocol service + classes at runtime. Clients can also manually create their own classes at + runtime, as in this example:: + + mydescriptor = ServiceDescriptor(.....) + class MyProtoService(service.Service): + __metaclass__ = GeneratedServiceType + DESCRIPTOR = mydescriptor + myservice_instance = MyProtoService() + # ... + """ + + _DESCRIPTOR_KEY = 'DESCRIPTOR' + + def __init__(cls, name, bases, dictionary): + """Creates a message service class. + + Args: + name: Name of the class (ignored, but required by the metaclass + protocol). + bases: Base classes of the class being constructed. + dictionary: The class dictionary of the class being constructed. + dictionary[_DESCRIPTOR_KEY] must contain a ServiceDescriptor object + describing this protocol service type. + """ + # Don't do anything if this class doesn't have a descriptor. This happens + # when a service class is subclassed. + if GeneratedServiceType._DESCRIPTOR_KEY not in dictionary: + return + + descriptor = dictionary[GeneratedServiceType._DESCRIPTOR_KEY] + service_builder = _ServiceBuilder(descriptor) + service_builder.BuildService(cls) + cls.DESCRIPTOR = descriptor + + +class GeneratedServiceStubType(GeneratedServiceType): + + """Metaclass for service stubs created at runtime from ServiceDescriptors. + + This class has similar responsibilities as GeneratedServiceType, except that + it creates the service stub classes. + """ + + _DESCRIPTOR_KEY = 'DESCRIPTOR' + + def __init__(cls, name, bases, dictionary): + """Creates a message service stub class. + + Args: + name: Name of the class (ignored, here). 
+ bases: Base classes of the class being constructed. + dictionary: The class dictionary of the class being constructed. + dictionary[_DESCRIPTOR_KEY] must contain a ServiceDescriptor object + describing this protocol service type. + """ + super(GeneratedServiceStubType, cls).__init__(name, bases, dictionary) + # Don't do anything if this class doesn't have a descriptor. This happens + # when a service stub is subclassed. + if GeneratedServiceStubType._DESCRIPTOR_KEY not in dictionary: + return + + descriptor = dictionary[GeneratedServiceStubType._DESCRIPTOR_KEY] + service_stub_builder = _ServiceStubBuilder(descriptor) + service_stub_builder.BuildServiceStub(cls) + + +class _ServiceBuilder(object): + + """This class constructs a protocol service class using a service descriptor. + + Given a service descriptor, this class constructs a class that represents + the specified service descriptor. One service builder instance constructs + exactly one service class. That means all instances of that class share the + same builder. + """ + + def __init__(self, service_descriptor): + """Initializes an instance of the service class builder. + + Args: + service_descriptor: ServiceDescriptor to use when constructing the + service class. + """ + self.descriptor = service_descriptor + + def BuildService(self, cls): + """Constructs the service class. + + Args: + cls: The class that will be constructed. + """ + + # CallMethod needs to operate with an instance of the Service class. This + # internal wrapper function exists only to be able to pass the service + # instance to the method that does the real CallMethod work. + def _WrapCallMethod(srvc, method_descriptor, + rpc_controller, request, callback): + return self._CallMethod(srvc, method_descriptor, + rpc_controller, request, callback) + self.cls = cls + cls.CallMethod = _WrapCallMethod + cls.GetDescriptor = staticmethod(lambda: self.descriptor) + cls.GetDescriptor.__doc__ = "Returns the service descriptor." + cls.GetRequestClass = self._GetRequestClass + cls.GetResponseClass = self._GetResponseClass + for method in self.descriptor.methods: + setattr(cls, method.name, self._GenerateNonImplementedMethod(method)) + + def _CallMethod(self, srvc, method_descriptor, + rpc_controller, request, callback): + """Calls the method described by a given method descriptor. + + Args: + srvc: Instance of the service for which this method is called. + method_descriptor: Descriptor that represent the method to call. + rpc_controller: RPC controller to use for this method's execution. + request: Request protocol message. + callback: A callback to invoke after the method has completed. + """ + if method_descriptor.containing_service != self.descriptor: + raise RuntimeError( + 'CallMethod() given method descriptor for wrong service type.') + method = getattr(srvc, method_descriptor.name) + return method(rpc_controller, request, callback) + + def _GetRequestClass(self, method_descriptor): + """Returns the class of the request protocol message. + + Args: + method_descriptor: Descriptor of the method for which to return the + request protocol message class. + + Returns: + A class that represents the input protocol message of the specified + method. + """ + if method_descriptor.containing_service != self.descriptor: + raise RuntimeError( + 'GetRequestClass() given method descriptor for wrong service type.') + return method_descriptor.input_type._concrete_class + + def _GetResponseClass(self, method_descriptor): + """Returns the class of the response protocol message. 
+
+    Args:
+      method_descriptor: Descriptor of the method for which to return the
+        response protocol message class.
+
+    Returns:
+      A class that represents the output protocol message of the specified
+      method.
+    """
+    if method_descriptor.containing_service != self.descriptor:
+      raise RuntimeError(
+          'GetResponseClass() given method descriptor for wrong service type.')
+    return method_descriptor.output_type._concrete_class
+
+  def _GenerateNonImplementedMethod(self, method):
+    """Generates and returns a method that can be set for a service method.
+
+    Args:
+      method: Descriptor of the service method for which a method is to be
+        generated.
+
+    Returns:
+      A method that can be added to the service class.
+    """
+    return lambda inst, rpc_controller, request, callback: (
+        self._NonImplementedMethod(method.name, rpc_controller, callback))
+
+  def _NonImplementedMethod(self, method_name, rpc_controller, callback):
+    """The body of all methods in the generated service class.
+
+    Args:
+      method_name: Name of the method being executed.
+      rpc_controller: RPC controller used to execute this method.
+      callback: A callback which will be invoked when the method finishes.
+    """
+    rpc_controller.SetFailed('Method %s not implemented.' % method_name)
+    callback(None)
+
+
+class _ServiceStubBuilder(object):
+
+  """Constructs a protocol service stub class using a service descriptor.
+
+  Given a service descriptor, this class constructs a suitable stub class.
+  A stub is just a type-safe wrapper around an RpcChannel which emulates a
+  local implementation of the service.
+
+  One service stub builder instance constructs exactly one class. It means
+  all instances of that class share the same service stub builder.
+  """
+
+  def __init__(self, service_descriptor):
+    """Initializes an instance of the service stub class builder.
+
+    Args:
+      service_descriptor: ServiceDescriptor to use when constructing the
+        stub class.
+    """
+    self.descriptor = service_descriptor
+
+  def BuildServiceStub(self, cls):
+    """Constructs the stub class.
+
+    Args:
+      cls: The class that will be constructed.
+    """
+
+    def _ServiceStubInit(stub, rpc_channel):
+      stub.rpc_channel = rpc_channel
+    self.cls = cls
+    cls.__init__ = _ServiceStubInit
+    for method in self.descriptor.methods:
+      setattr(cls, method.name, self._GenerateStubMethod(method))
+
+  def _GenerateStubMethod(self, method):
+    return (lambda inst, rpc_controller, request, callback=None:
+            self._StubMethod(inst, method, rpc_controller, request, callback))
+
+  def _StubMethod(self, stub, method_descriptor,
+                  rpc_controller, request, callback):
+    """The body of all service methods in the generated stub class.
+
+    Args:
+      stub: Stub instance.
+      method_descriptor: Descriptor of the invoked method.
+      rpc_controller: RPC controller to execute the method.
+      request: Request protocol message.
+      callback: A callback to execute when the method finishes.
+    Returns:
+      Response message (in case of blocking call).
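The builders above, together with service.py, make a generated stub a thin wrapper that forwards every method to its RpcChannel. A hedged sketch of the flow, assuming a hypothetical search_pb2 module compiled with generic services enabled (so it provides SearchService_Stub and SearchRequest); LoopbackChannel is illustrative only, not part of protobuf:

from google.protobuf import service

import search_pb2  # hypothetical generated module with generic services


class LoopbackChannel(service.RpcChannel):
  """Toy channel: never goes over the wire, returns a default response."""

  def CallMethod(self, method_descriptor, rpc_controller,
                 request, response_class, done):
    response = response_class()
    if done is None:
      return response  # blocking call, mirroring _StubMethod above
    done(response)     # asynchronous completion


stub = search_pb2.SearchService_Stub(LoopbackChannel())
reply = stub.Search(None, search_pb2.SearchRequest())  # callback=None: blocking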
+ """ + return stub.rpc_channel.CallMethod( + method_descriptor, rpc_controller, request, + method_descriptor.output_type._concrete_class, callback) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/source_context_pb2.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/source_context_pb2.py new file mode 100644 index 0000000000000000000000000000000000000000..cfcd9e49a931325d2f6761a4e808ce3500242ae2 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/source_context_pb2.py @@ -0,0 +1,71 @@ +# -*- coding: utf-8 -*- +# Generated by the protocol buffer compiler. DO NOT EDIT! +# source: google/protobuf/source_context.proto +"""Generated protocol buffer code.""" +from google.protobuf import descriptor as _descriptor +from google.protobuf import message as _message +from google.protobuf import reflection as _reflection +from google.protobuf import symbol_database as _symbol_database +# @@protoc_insertion_point(imports) + +_sym_db = _symbol_database.Default() + + + + +DESCRIPTOR = _descriptor.FileDescriptor( + name='google/protobuf/source_context.proto', + package='google.protobuf', + syntax='proto3', + serialized_options=b'\n\023com.google.protobufB\022SourceContextProtoP\001Z6google.golang.org/protobuf/types/known/sourcecontextpb\242\002\003GPB\252\002\036Google.Protobuf.WellKnownTypes', + create_key=_descriptor._internal_create_key, + serialized_pb=b'\n$google/protobuf/source_context.proto\x12\x0fgoogle.protobuf\"\"\n\rSourceContext\x12\x11\n\tfile_name\x18\x01 \x01(\tB\x8a\x01\n\x13\x63om.google.protobufB\x12SourceContextProtoP\x01Z6google.golang.org/protobuf/types/known/sourcecontextpb\xa2\x02\x03GPB\xaa\x02\x1eGoogle.Protobuf.WellKnownTypesb\x06proto3' +) + + + + +_SOURCECONTEXT = _descriptor.Descriptor( + name='SourceContext', + full_name='google.protobuf.SourceContext', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='file_name', full_name='google.protobuf.SourceContext.file_name', index=0, + number=1, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=None, + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=57, + serialized_end=91, +) + +DESCRIPTOR.message_types_by_name['SourceContext'] = _SOURCECONTEXT +_sym_db.RegisterFileDescriptor(DESCRIPTOR) + +SourceContext = _reflection.GeneratedProtocolMessageType('SourceContext', (_message.Message,), { + 'DESCRIPTOR' : _SOURCECONTEXT, + '__module__' : 'google.protobuf.source_context_pb2' + # @@protoc_insertion_point(class_scope:google.protobuf.SourceContext) + }) +_sym_db.RegisterMessage(SourceContext) + + +DESCRIPTOR._options = None +# @@protoc_insertion_point(module_scope) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/struct_pb2.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/struct_pb2.py new file mode 100644 index 
0000000000000000000000000000000000000000..efc8facad7fbd57d05066fdf282ef8fefe88c495 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/struct_pb2.py @@ -0,0 +1,287 @@ +# -*- coding: utf-8 -*- +# Generated by the protocol buffer compiler. DO NOT EDIT! +# source: google/protobuf/struct.proto +"""Generated protocol buffer code.""" +from google.protobuf.internal import enum_type_wrapper +from google.protobuf import descriptor as _descriptor +from google.protobuf import message as _message +from google.protobuf import reflection as _reflection +from google.protobuf import symbol_database as _symbol_database +# @@protoc_insertion_point(imports) + +_sym_db = _symbol_database.Default() + + + + +DESCRIPTOR = _descriptor.FileDescriptor( + name='google/protobuf/struct.proto', + package='google.protobuf', + syntax='proto3', + serialized_options=b'\n\023com.google.protobufB\013StructProtoP\001Z/google.golang.org/protobuf/types/known/structpb\370\001\001\242\002\003GPB\252\002\036Google.Protobuf.WellKnownTypes', + create_key=_descriptor._internal_create_key, + serialized_pb=b'\n\x1cgoogle/protobuf/struct.proto\x12\x0fgoogle.protobuf\"\x84\x01\n\x06Struct\x12\x33\n\x06\x66ields\x18\x01 \x03(\x0b\x32#.google.protobuf.Struct.FieldsEntry\x1a\x45\n\x0b\x46ieldsEntry\x12\x0b\n\x03key\x18\x01 \x01(\t\x12%\n\x05value\x18\x02 \x01(\x0b\x32\x16.google.protobuf.Value:\x02\x38\x01\"\xea\x01\n\x05Value\x12\x30\n\nnull_value\x18\x01 \x01(\x0e\x32\x1a.google.protobuf.NullValueH\x00\x12\x16\n\x0cnumber_value\x18\x02 \x01(\x01H\x00\x12\x16\n\x0cstring_value\x18\x03 \x01(\tH\x00\x12\x14\n\nbool_value\x18\x04 \x01(\x08H\x00\x12/\n\x0cstruct_value\x18\x05 \x01(\x0b\x32\x17.google.protobuf.StructH\x00\x12\x30\n\nlist_value\x18\x06 \x01(\x0b\x32\x1a.google.protobuf.ListValueH\x00\x42\x06\n\x04kind\"3\n\tListValue\x12&\n\x06values\x18\x01 \x03(\x0b\x32\x16.google.protobuf.Value*\x1b\n\tNullValue\x12\x0e\n\nNULL_VALUE\x10\x00\x42\x7f\n\x13\x63om.google.protobufB\x0bStructProtoP\x01Z/google.golang.org/protobuf/types/known/structpb\xf8\x01\x01\xa2\x02\x03GPB\xaa\x02\x1eGoogle.Protobuf.WellKnownTypesb\x06proto3' +) + +_NULLVALUE = _descriptor.EnumDescriptor( + name='NullValue', + full_name='google.protobuf.NullValue', + filename=None, + file=DESCRIPTOR, + create_key=_descriptor._internal_create_key, + values=[ + _descriptor.EnumValueDescriptor( + name='NULL_VALUE', index=0, number=0, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + ], + containing_type=None, + serialized_options=None, + serialized_start=474, + serialized_end=501, +) +_sym_db.RegisterEnumDescriptor(_NULLVALUE) + +NullValue = enum_type_wrapper.EnumTypeWrapper(_NULLVALUE) +NULL_VALUE = 0 + + + +_STRUCT_FIELDSENTRY = _descriptor.Descriptor( + name='FieldsEntry', + full_name='google.protobuf.Struct.FieldsEntry', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='key', full_name='google.protobuf.Struct.FieldsEntry.key', index=0, + number=1, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='value', full_name='google.protobuf.Struct.FieldsEntry.value', index=1, + number=2, type=11, cpp_type=10, 
label=1, + has_default_value=False, default_value=None, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=b'8\001', + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=113, + serialized_end=182, +) + +_STRUCT = _descriptor.Descriptor( + name='Struct', + full_name='google.protobuf.Struct', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='fields', full_name='google.protobuf.Struct.fields', index=0, + number=1, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[_STRUCT_FIELDSENTRY, ], + enum_types=[ + ], + serialized_options=None, + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=50, + serialized_end=182, +) + + +_VALUE = _descriptor.Descriptor( + name='Value', + full_name='google.protobuf.Value', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='null_value', full_name='google.protobuf.Value.null_value', index=0, + number=1, type=14, cpp_type=8, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='number_value', full_name='google.protobuf.Value.number_value', index=1, + number=2, type=1, cpp_type=5, label=1, + has_default_value=False, default_value=float(0), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='string_value', full_name='google.protobuf.Value.string_value', index=2, + number=3, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='bool_value', full_name='google.protobuf.Value.bool_value', index=3, + number=4, type=8, cpp_type=7, label=1, + has_default_value=False, default_value=False, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='struct_value', full_name='google.protobuf.Value.struct_value', index=4, + number=5, type=11, cpp_type=10, label=1, + has_default_value=False, default_value=None, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='list_value', 
full_name='google.protobuf.Value.list_value', index=5, + number=6, type=11, cpp_type=10, label=1, + has_default_value=False, default_value=None, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=None, + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + _descriptor.OneofDescriptor( + name='kind', full_name='google.protobuf.Value.kind', + index=0, containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[]), + ], + serialized_start=185, + serialized_end=419, +) + + +_LISTVALUE = _descriptor.Descriptor( + name='ListValue', + full_name='google.protobuf.ListValue', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='values', full_name='google.protobuf.ListValue.values', index=0, + number=1, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=None, + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=421, + serialized_end=472, +) + +_STRUCT_FIELDSENTRY.fields_by_name['value'].message_type = _VALUE +_STRUCT_FIELDSENTRY.containing_type = _STRUCT +_STRUCT.fields_by_name['fields'].message_type = _STRUCT_FIELDSENTRY +_VALUE.fields_by_name['null_value'].enum_type = _NULLVALUE +_VALUE.fields_by_name['struct_value'].message_type = _STRUCT +_VALUE.fields_by_name['list_value'].message_type = _LISTVALUE +_VALUE.oneofs_by_name['kind'].fields.append( + _VALUE.fields_by_name['null_value']) +_VALUE.fields_by_name['null_value'].containing_oneof = _VALUE.oneofs_by_name['kind'] +_VALUE.oneofs_by_name['kind'].fields.append( + _VALUE.fields_by_name['number_value']) +_VALUE.fields_by_name['number_value'].containing_oneof = _VALUE.oneofs_by_name['kind'] +_VALUE.oneofs_by_name['kind'].fields.append( + _VALUE.fields_by_name['string_value']) +_VALUE.fields_by_name['string_value'].containing_oneof = _VALUE.oneofs_by_name['kind'] +_VALUE.oneofs_by_name['kind'].fields.append( + _VALUE.fields_by_name['bool_value']) +_VALUE.fields_by_name['bool_value'].containing_oneof = _VALUE.oneofs_by_name['kind'] +_VALUE.oneofs_by_name['kind'].fields.append( + _VALUE.fields_by_name['struct_value']) +_VALUE.fields_by_name['struct_value'].containing_oneof = _VALUE.oneofs_by_name['kind'] +_VALUE.oneofs_by_name['kind'].fields.append( + _VALUE.fields_by_name['list_value']) +_VALUE.fields_by_name['list_value'].containing_oneof = _VALUE.oneofs_by_name['kind'] +_LISTVALUE.fields_by_name['values'].message_type = _VALUE +DESCRIPTOR.message_types_by_name['Struct'] = _STRUCT +DESCRIPTOR.message_types_by_name['Value'] = _VALUE +DESCRIPTOR.message_types_by_name['ListValue'] = _LISTVALUE +DESCRIPTOR.enum_types_by_name['NullValue'] = _NULLVALUE +_sym_db.RegisterFileDescriptor(DESCRIPTOR) + +Struct = _reflection.GeneratedProtocolMessageType('Struct', (_message.Message,), { + + 'FieldsEntry' : _reflection.GeneratedProtocolMessageType('FieldsEntry', (_message.Message,), { + 'DESCRIPTOR' : _STRUCT_FIELDSENTRY, + '__module__' : 
'google.protobuf.struct_pb2' + # @@protoc_insertion_point(class_scope:google.protobuf.Struct.FieldsEntry) + }) + , + 'DESCRIPTOR' : _STRUCT, + '__module__' : 'google.protobuf.struct_pb2' + # @@protoc_insertion_point(class_scope:google.protobuf.Struct) + }) +_sym_db.RegisterMessage(Struct) +_sym_db.RegisterMessage(Struct.FieldsEntry) + +Value = _reflection.GeneratedProtocolMessageType('Value', (_message.Message,), { + 'DESCRIPTOR' : _VALUE, + '__module__' : 'google.protobuf.struct_pb2' + # @@protoc_insertion_point(class_scope:google.protobuf.Value) + }) +_sym_db.RegisterMessage(Value) + +ListValue = _reflection.GeneratedProtocolMessageType('ListValue', (_message.Message,), { + 'DESCRIPTOR' : _LISTVALUE, + '__module__' : 'google.protobuf.struct_pb2' + # @@protoc_insertion_point(class_scope:google.protobuf.ListValue) + }) +_sym_db.RegisterMessage(ListValue) + + +DESCRIPTOR._options = None +_STRUCT_FIELDSENTRY._options = None +# @@protoc_insertion_point(module_scope) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/symbol_database.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/symbol_database.py new file mode 100644 index 0000000000000000000000000000000000000000..fdcf8cf06ced70c01d291c672a08850238e5d3c9 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/symbol_database.py @@ -0,0 +1,194 @@ +# Protocol Buffers - Google's data interchange format +# Copyright 2008 Google Inc. All rights reserved. +# https://developers.google.com/protocol-buffers/ +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions are +# met: +# +# * Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# * Redistributions in binary form must reproduce the above +# copyright notice, this list of conditions and the following disclaimer +# in the documentation and/or other materials provided with the +# distribution. +# * Neither the name of Google Inc. nor the names of its +# contributors may be used to endorse or promote products derived from +# this software without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +"""A database of Python protocol buffer generated symbols. + +SymbolDatabase is the MessageFactory for messages generated at compile time, +and makes it easy to create new instances of a registered type, given only the +type's protocol buffer symbol name. + +Example usage:: + + db = symbol_database.SymbolDatabase() + + # Register symbols of interest, from one or multiple files. 
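+  # (my_proto_pb2 stands in for any module generated by protoc from a
+  # my_proto.proto file; it is an assumed example, not part of this package.)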
+  db.RegisterFileDescriptor(my_proto_pb2.DESCRIPTOR)
+  db.RegisterMessage(my_proto_pb2.MyMessage)
+  db.RegisterEnumDescriptor(my_proto_pb2.MyEnum.DESCRIPTOR)
+
+  # The database can be used as a MessageFactory, to generate types based on
+  # their name:
+  types = db.GetMessages(['my_proto.proto'])
+  my_message_instance = types['MyMessage']()
+
+  # The database's underlying descriptor pool can be queried, so it's not
+  # necessary to know a type's filename to be able to generate it:
+  filename = db.pool.FindFileContainingSymbol('MyMessage')
+  my_message_instance = db.GetMessages([filename])['MyMessage']()
+
+  # This functionality is also provided directly via a convenience method:
+  my_message_instance = db.GetSymbol('MyMessage')()
+"""
+
+
+from google.protobuf.internal import api_implementation
+from google.protobuf import descriptor_pool
+from google.protobuf import message_factory
+
+
+class SymbolDatabase(message_factory.MessageFactory):
+  """A database of Python generated symbols."""
+
+  def RegisterMessage(self, message):
+    """Registers the given message type in the local database.
+
+    Calls to GetSymbol() and GetMessages() will return messages registered here.
+
+    Args:
+      message: A :class:`google.protobuf.message.Message` subclass (or
+        instance); its descriptor will be registered.
+
+    Returns:
+      The provided message.
+    """
+
+    desc = message.DESCRIPTOR
+    self._classes[desc] = message
+    self.RegisterMessageDescriptor(desc)
+    return message
+
+  def RegisterMessageDescriptor(self, message_descriptor):
+    """Registers the given message descriptor in the local database.
+
+    Args:
+      message_descriptor (Descriptor): the message descriptor to add.
+    """
+    if api_implementation.Type() == 'python':
+      # pylint: disable=protected-access
+      self.pool._AddDescriptor(message_descriptor)
+
+  def RegisterEnumDescriptor(self, enum_descriptor):
+    """Registers the given enum descriptor in the local database.
+
+    Args:
+      enum_descriptor (EnumDescriptor): The enum descriptor to register.
+
+    Returns:
+      EnumDescriptor: The provided descriptor.
+    """
+    if api_implementation.Type() == 'python':
+      # pylint: disable=protected-access
+      self.pool._AddEnumDescriptor(enum_descriptor)
+    return enum_descriptor
+
+  def RegisterServiceDescriptor(self, service_descriptor):
+    """Registers the given service descriptor in the local database.
+
+    Args:
+      service_descriptor (ServiceDescriptor): the service descriptor to
+        register.
+    """
+    if api_implementation.Type() == 'python':
+      # pylint: disable=protected-access
+      self.pool._AddServiceDescriptor(service_descriptor)
+
+  def RegisterFileDescriptor(self, file_descriptor):
+    """Registers the given file descriptor in the local database.
+
+    Args:
+      file_descriptor (FileDescriptor): The file descriptor to register.
+    """
+    if api_implementation.Type() == 'python':
+      # pylint: disable=protected-access
+      self.pool._InternalAddFileDescriptor(file_descriptor)
+
+  def GetSymbol(self, symbol):
+    """Tries to find a symbol in the local database.
+
+    Currently, this method only returns message.Message instances; however, it
+    may be extended in the future to support other symbol types.
+
+    Args:
+      symbol (str): a protocol buffer symbol.
+
+    Returns:
+      A Python class corresponding to the symbol.
+
+    Raises:
+      KeyError: if the symbol could not be found.
+    """
+
+    return self._classes[self.pool.FindMessageTypeByName(symbol)]
+
+  def GetMessages(self, files):
+    # TODO(amauryfa): Fix the differences with MessageFactory.
+    """Gets all registered messages from a specified file.
+
+    Only messages already created and registered will be returned (this is
+    the case for imported _pb2 modules). Unlike MessageFactory, this version
+    also returns already-defined nested messages, but it does not register
+    any message extensions.
+
+    Args:
+      files (list[str]): The file names to extract messages from.
+
+    Returns:
+      A dictionary mapping proto names to the message classes.
+
+    Raises:
+      KeyError: if a file could not be found.
+    """
+
+    def _GetAllMessages(desc):
+      """Walks a message Descriptor and recursively yields all descriptors."""
+      yield desc
+      for msg_desc in desc.nested_types:
+        for nested_desc in _GetAllMessages(msg_desc):
+          yield nested_desc
+
+    result = {}
+    for file_name in files:
+      file_desc = self.pool.FindFileByName(file_name)
+      for msg_desc in file_desc.message_types_by_name.values():
+        for desc in _GetAllMessages(msg_desc):
+          try:
+            result[desc.full_name] = self._classes[desc]
+          except KeyError:
+            # This descriptor has no registered class, skip it.
+            pass
+    return result
+
+
+_DEFAULT = SymbolDatabase(pool=descriptor_pool.Default())
+
+
+def Default():
+  """Returns the default SymbolDatabase."""
+  return _DEFAULT
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/text_encoding.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/text_encoding.py
new file mode 100644
index 0000000000000000000000000000000000000000..39898765f28f2ad2149813e892f4cc6f92adb57d
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/text_encoding.py
@@ -0,0 +1,117 @@
+# Protocol Buffers - Google's data interchange format
+# Copyright 2008 Google Inc. All rights reserved.
+# https://developers.google.com/protocol-buffers/
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions are
+# met:
+#
+#     * Redistributions of source code must retain the above copyright
+# notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above
+# copyright notice, this list of conditions and the following disclaimer
+# in the documentation and/or other materials provided with the
+# distribution.
+#     * Neither the name of Google Inc. nor the names of its
+# contributors may be used to endorse or promote products derived from
+# this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+"""Encoding related utilities."""
+import re
+
+import six
+
+_cescape_chr_to_symbol_map = {}
+_cescape_chr_to_symbol_map[9] = r'\t'  # optional escape
+_cescape_chr_to_symbol_map[10] = r'\n'  # optional escape
+_cescape_chr_to_symbol_map[13] = r'\r'  # optional escape
+_cescape_chr_to_symbol_map[34] = r'\"'  # necessary escape
+_cescape_chr_to_symbol_map[39] = r"\'"  # optional escape
+_cescape_chr_to_symbol_map[92] = r'\\'  # necessary escape
+
+# Lookup table for unicode
+_cescape_unicode_to_str = [chr(i) for i in range(0, 256)]
+for byte, string in _cescape_chr_to_symbol_map.items():
+  _cescape_unicode_to_str[byte] = string
+
+# Lookup table for non-utf8, with necessary escapes at (o >= 127 or o < 32)
+_cescape_byte_to_str = ([r'\%03o' % i for i in range(0, 32)] +
+                        [chr(i) for i in range(32, 127)] +
+                        [r'\%03o' % i for i in range(127, 256)])
+for byte, string in _cescape_chr_to_symbol_map.items():
+  _cescape_byte_to_str[byte] = string
+del byte, string
+
+
+def CEscape(text, as_utf8):
+  # type: (...) -> str
+  """Escape a bytes string for use in a text protocol buffer.
+
+  Args:
+    text: A byte string to be escaped.
+    as_utf8: Specifies if result may contain non-ASCII characters.
+        In Python 3 this allows unescaped non-ASCII Unicode characters.
+        In Python 2 the return value will be valid UTF-8 rather than only ASCII.
+  Returns:
+    Escaped string (str).
+  """
+  # Python's text.encode() 'string_escape' or 'unicode_escape' codecs do not
+  # satisfy our needs; they encode unprintable characters using two-digit hex
+  # escapes whereas our C++ unescaping function allows hex escapes to be any
+  # length. So, "\0011".encode('string_escape') ends up being "\\x011", which
+  # will be decoded in C++ as a single-character string with char code 0x11.
+  if six.PY3:
+    text_is_unicode = isinstance(text, str)
+    if as_utf8 and text_is_unicode:
+      # We're already unicode, no processing beyond control char escapes.
+      return text.translate(_cescape_chr_to_symbol_map)
+    ord_ = ord if text_is_unicode else lambda x: x  # bytes iterate as ints.
+  else:
+    ord_ = ord  # PY2
+  if as_utf8:
+    return ''.join(_cescape_unicode_to_str[ord_(c)] for c in text)
+  return ''.join(_cescape_byte_to_str[ord_(c)] for c in text)
+
+
+_CUNESCAPE_HEX = re.compile(r'(\\+)x([0-9a-fA-F])(?![0-9a-fA-F])')
+
+
+def CUnescape(text):
+  # type: (str) -> bytes
+  """Unescape a text string with C-style escape sequences to UTF-8 bytes.
+
+  Args:
+    text: The data to parse in a str.
+  Returns:
+    A byte string.
+  """
+
+  def ReplaceHex(m):
+    # Only replace the match if the number of leading back slashes is odd,
+    # i.e. the slash itself is not escaped.
+    if len(m.group(1)) & 1:
+      return m.group(1) + 'x0' + m.group(2)
+    return m.group(0)
+
+  # This is required because the 'string_escape' encoding doesn't
+  # allow single-digit hex escapes (like '\xf').
+  result = _CUNESCAPE_HEX.sub(ReplaceHex, text)
+
+  if six.PY2:
+    return result.decode('string_escape')
+  return (result.encode('utf-8')  # PY3: Make it bytes to allow decode.
+          .decode('unicode_escape')
+          # Make it bytes again to return the proper type.
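+          # (raw_unicode_escape maps code points below 256 straight back to
+          # single bytes, undoing the latin-1 reading that unicode_escape
+          # applied to non-ASCII bytes above.)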
+ .encode('raw_unicode_escape')) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/text_format.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/text_format.py new file mode 100644 index 0000000000000000000000000000000000000000..c376c7bac78aa4109fbeb4f3dd8bb08805b066d3 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/text_format.py @@ -0,0 +1,1828 @@ +# Protocol Buffers - Google's data interchange format +# Copyright 2008 Google Inc. All rights reserved. +# https://developers.google.com/protocol-buffers/ +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions are +# met: +# +# * Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# * Redistributions in binary form must reproduce the above +# copyright notice, this list of conditions and the following disclaimer +# in the documentation and/or other materials provided with the +# distribution. +# * Neither the name of Google Inc. nor the names of its +# contributors may be used to endorse or promote products derived from +# this software without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +"""Contains routines for printing protocol messages in text format. + +Simple usage example:: + + # Create a proto object and serialize it to a text proto string. + message = my_proto_pb2.MyMessage(foo='bar') + text_proto = text_format.MessageToString(message) + + # Parse a text proto string. + message = text_format.Parse(text_proto, my_proto_pb2.MyMessage()) +""" + +__author__ = 'kenton@google.com (Kenton Varda)' + +# TODO(b/129989314) Import thread contention leads to test failures. 
+import encodings.raw_unicode_escape # pylint: disable=unused-import +import encodings.unicode_escape # pylint: disable=unused-import +import io +import math +import re +import six + +from google.protobuf.internal import decoder +from google.protobuf.internal import type_checkers +from google.protobuf import descriptor +from google.protobuf import text_encoding + +if six.PY3: + long = int # pylint: disable=redefined-builtin,invalid-name + +# pylint: disable=g-import-not-at-top +__all__ = ['MessageToString', 'Parse', 'PrintMessage', 'PrintField', + 'PrintFieldValue', 'Merge', 'MessageToBytes'] + +_INTEGER_CHECKERS = (type_checkers.Uint32ValueChecker(), + type_checkers.Int32ValueChecker(), + type_checkers.Uint64ValueChecker(), + type_checkers.Int64ValueChecker()) +_FLOAT_INFINITY = re.compile('-?inf(?:inity)?f?$', re.IGNORECASE) +_FLOAT_NAN = re.compile('nanf?$', re.IGNORECASE) +_QUOTES = frozenset(("'", '"')) +_ANY_FULL_TYPE_NAME = 'google.protobuf.Any' + + +class Error(Exception): + """Top-level module error for text_format.""" + + +class ParseError(Error): + """Thrown in case of text parsing or tokenizing error.""" + + def __init__(self, message=None, line=None, column=None): + if message is not None and line is not None: + loc = str(line) + if column is not None: + loc += ':{0}'.format(column) + message = '{0} : {1}'.format(loc, message) + if message is not None: + super(ParseError, self).__init__(message) + else: + super(ParseError, self).__init__() + self._line = line + self._column = column + + def GetLine(self): + return self._line + + def GetColumn(self): + return self._column + + +class TextWriter(object): + + def __init__(self, as_utf8): + if six.PY2: + self._writer = io.BytesIO() + else: + self._writer = io.StringIO() + + def write(self, val): + if six.PY2: + if isinstance(val, six.text_type): + val = val.encode('utf-8') + return self._writer.write(val) + + def close(self): + return self._writer.close() + + def getvalue(self): + return self._writer.getvalue() + + +def MessageToString( + message, + as_utf8=False, + as_one_line=False, + use_short_repeated_primitives=False, + pointy_brackets=False, + use_index_order=False, + float_format=None, + double_format=None, + use_field_number=False, + descriptor_pool=None, + indent=0, + message_formatter=None, + print_unknown_fields=False, + force_colon=False): + # type: (...) -> str + """Convert protobuf message to text format. + + Double values can be formatted compactly with 15 digits of + precision (which is the most that IEEE 754 "double" can guarantee) + using double_format='.15g'. To ensure that converting to text and back to a + proto will result in an identical value, double_format='.17g' should be used. + + Args: + message: The protocol buffers message. + as_utf8: Return unescaped Unicode for non-ASCII characters. + In Python 3 actual Unicode characters may appear as is in strings. + In Python 2 the return value will be valid UTF-8 rather than only ASCII. + as_one_line: Don't introduce newlines between fields. + use_short_repeated_primitives: Use short repeated format for primitives. + pointy_brackets: If True, use angle brackets instead of curly braces for + nesting. + use_index_order: If True, fields of a proto message will be printed using + the order defined in source code instead of the field number, extensions + will be printed at the end of the message and their relative order is + determined by the extension number. By default, use the field number + order. 
+    float_format (str): If set, use this to specify float field formatting
+      (per the "Format Specification Mini-Language"); otherwise, the shortest
+      float that has the same value in wire will be printed. This also affects
+      the double field if double_format is not set but float_format is set.
+    double_format (str): If set, use this to specify double field formatting
+      (per the "Format Specification Mini-Language"); if it is not set but
+      float_format is set, use float_format. Otherwise, use ``str()``.
+    use_field_number: If True, print field numbers instead of names.
+    descriptor_pool (DescriptorPool): Descriptor pool used to resolve Any types.
+    indent (int): The initial indent level, in terms of spaces, for pretty
+      print.
+    message_formatter (function(message, indent, as_one_line) -> unicode|None):
+      Custom formatter for selected sub-messages (usually based on message
+      type). Use to pretty print parts of the protobuf for easier diffing.
+    print_unknown_fields: If True, unknown fields will be printed.
+    force_colon: If set, a colon will be added after the field name even if the
+      field is a proto message.
+
+  Returns:
+    str: A string of the text formatted protocol buffer message.
+  """
+  out = TextWriter(as_utf8)
+  printer = _Printer(
+      out,
+      indent,
+      as_utf8,
+      as_one_line,
+      use_short_repeated_primitives,
+      pointy_brackets,
+      use_index_order,
+      float_format,
+      double_format,
+      use_field_number,
+      descriptor_pool,
+      message_formatter,
+      print_unknown_fields=print_unknown_fields,
+      force_colon=force_colon)
+  printer.PrintMessage(message)
+  result = out.getvalue()
+  out.close()
+  if as_one_line:
+    return result.rstrip()
+  return result
+
+
+def MessageToBytes(message, **kwargs):
+  # type: (...) -> bytes
+  """Convert protobuf message to encoded text format. See MessageToString."""
+  text = MessageToString(message, **kwargs)
+  if isinstance(text, bytes):
+    return text
+  codec = 'utf-8' if kwargs.get('as_utf8') else 'ascii'
+  return text.encode(codec)
+
+
+def _IsMapEntry(field):
+  return (field.type == descriptor.FieldDescriptor.TYPE_MESSAGE and
+          field.message_type.has_options and
+          field.message_type.GetOptions().map_entry)
+
+
+def PrintMessage(message,
+                 out,
+                 indent=0,
+                 as_utf8=False,
+                 as_one_line=False,
+                 use_short_repeated_primitives=False,
+                 pointy_brackets=False,
+                 use_index_order=False,
+                 float_format=None,
+                 double_format=None,
+                 use_field_number=False,
+                 descriptor_pool=None,
+                 message_formatter=None,
+                 print_unknown_fields=False,
+                 force_colon=False):
+  printer = _Printer(
+      out=out, indent=indent, as_utf8=as_utf8,
+      as_one_line=as_one_line,
+      use_short_repeated_primitives=use_short_repeated_primitives,
+      pointy_brackets=pointy_brackets,
+      use_index_order=use_index_order,
+      float_format=float_format,
+      double_format=double_format,
+      use_field_number=use_field_number,
+      descriptor_pool=descriptor_pool,
+      message_formatter=message_formatter,
+      print_unknown_fields=print_unknown_fields,
+      force_colon=force_colon)
+  printer.PrintMessage(message)
+
+
+def PrintField(field,
+               value,
+               out,
+               indent=0,
+               as_utf8=False,
+               as_one_line=False,
+               use_short_repeated_primitives=False,
+               pointy_brackets=False,
+               use_index_order=False,
+               float_format=None,
+               double_format=None,
+               message_formatter=None,
+               print_unknown_fields=False,
+               force_colon=False):
+  """Print a single field name/value pair."""
+  printer = _Printer(out, indent, as_utf8, as_one_line,
+                     use_short_repeated_primitives, pointy_brackets,
+                     use_index_order, float_format, double_format,
+                     message_formatter=message_formatter,
+                     print_unknown_fields=print_unknown_fields,
+                     force_colon=force_colon)
+  printer.PrintField(field, value)
+
+
+def PrintFieldValue(field,
+                    value,
+                    out,
+                    indent=0,
+                    as_utf8=False,
+                    as_one_line=False,
+                    use_short_repeated_primitives=False,
+                    pointy_brackets=False,
+                    use_index_order=False,
+                    float_format=None,
+                    double_format=None,
+                    message_formatter=None,
+                    print_unknown_fields=False,
+                    force_colon=False):
+  """Print a single field value (not including name)."""
+  printer = _Printer(out, indent, as_utf8, as_one_line,
+                     use_short_repeated_primitives, pointy_brackets,
+                     use_index_order, float_format, double_format,
+                     message_formatter=message_formatter,
+                     print_unknown_fields=print_unknown_fields,
+                     force_colon=force_colon)
+  printer.PrintFieldValue(field, value)
+
+
+def _BuildMessageFromTypeName(type_name, descriptor_pool):
+  """Returns a protobuf message instance.
+
+  Args:
+    type_name: Fully-qualified protobuf message type name string.
+    descriptor_pool: DescriptorPool instance.
+
+  Returns:
+    A Message instance of type matching type_name, or None if no Descriptor
+    matching type_name was found.
+  """
+  # pylint: disable=g-import-not-at-top
+  if descriptor_pool is None:
+    from google.protobuf import descriptor_pool as pool_mod
+    descriptor_pool = pool_mod.Default()
+  from google.protobuf import symbol_database
+  database = symbol_database.Default()
+  try:
+    message_descriptor = descriptor_pool.FindMessageTypeByName(type_name)
+  except KeyError:
+    return None
+  message_type = database.GetPrototype(message_descriptor)
+  return message_type()
+
+
+# These values must match WireType enum in google/protobuf/wire_format.h.
+WIRETYPE_LENGTH_DELIMITED = 2
+WIRETYPE_START_GROUP = 3
+
+
+class _Printer(object):
+  """Text format printer for protocol message."""
+
+  def __init__(
+      self,
+      out,
+      indent=0,
+      as_utf8=False,
+      as_one_line=False,
+      use_short_repeated_primitives=False,
+      pointy_brackets=False,
+      use_index_order=False,
+      float_format=None,
+      double_format=None,
+      use_field_number=False,
+      descriptor_pool=None,
+      message_formatter=None,
+      print_unknown_fields=False,
+      force_colon=False):
+    """Initialize the Printer.
+
+    Double values can be formatted compactly with 15 digits of precision
+    (which is the most that IEEE 754 "double" can guarantee) using
+    double_format='.15g'. To ensure that converting to text and back to a proto
+    will result in an identical value, double_format='.17g' should be used.
+
+    Args:
+      out: To record the text format result.
+      indent: The initial indent level for pretty print.
+      as_utf8: Return unescaped Unicode for non-ASCII characters.
+          In Python 3 actual Unicode characters may appear as is in strings.
+          In Python 2 the return value will be valid UTF-8 rather than ASCII.
+      as_one_line: Don't introduce newlines between fields.
+      use_short_repeated_primitives: Use short repeated format for primitives.
+      pointy_brackets: If True, use angle brackets instead of curly braces for
+        nesting.
+      use_index_order: If True, print fields of a proto message using the order
+        defined in source code instead of the field number. By default, use the
+        field number order.
+      float_format: If set, use this to specify float field formatting
+        (per the "Format Specification Mini-Language"); otherwise, the
+        shortest float that has the same value in wire will be printed. This
+        also affects the double field if double_format is not set but
+        float_format is set.
+ double_format: If set, use this to specify double field formatting + (per the "Format Specification Mini-Language"); if it is not set but + float_format is set, use float_format. Otherwise, str() is used. + use_field_number: If True, print field numbers instead of names. + descriptor_pool: A DescriptorPool used to resolve Any types. + message_formatter: A function(message, indent, as_one_line): unicode|None + to custom format selected sub-messages (usually based on message type). + Use to pretty print parts of the protobuf for easier diffing. + print_unknown_fields: If True, unknown fields will be printed. + force_colon: If set, a colon will be added after the field name even if + the field is a proto message. + """ + self.out = out + self.indent = indent + self.as_utf8 = as_utf8 + self.as_one_line = as_one_line + self.use_short_repeated_primitives = use_short_repeated_primitives + self.pointy_brackets = pointy_brackets + self.use_index_order = use_index_order + self.float_format = float_format + if double_format is not None: + self.double_format = double_format + else: + self.double_format = float_format + self.use_field_number = use_field_number + self.descriptor_pool = descriptor_pool + self.message_formatter = message_formatter + self.print_unknown_fields = print_unknown_fields + self.force_colon = force_colon + + def _TryPrintAsAnyMessage(self, message): + """Serializes if message is a google.protobuf.Any field.""" + if '/' not in message.type_url: + return False + packed_message = _BuildMessageFromTypeName(message.TypeName(), + self.descriptor_pool) + if packed_message: + packed_message.MergeFromString(message.value) + colon = ':' if self.force_colon else '' + self.out.write('%s[%s]%s ' % (self.indent * ' ', message.type_url, colon)) + self._PrintMessageFieldValue(packed_message) + self.out.write(' ' if self.as_one_line else '\n') + return True + else: + return False + + def _TryCustomFormatMessage(self, message): + formatted = self.message_formatter(message, self.indent, self.as_one_line) + if formatted is None: + return False + + out = self.out + out.write(' ' * self.indent) + out.write(formatted) + out.write(' ' if self.as_one_line else '\n') + return True + + def PrintMessage(self, message): + """Convert protobuf message to text format. + + Args: + message: The protocol buffers message. + """ + if self.message_formatter and self._TryCustomFormatMessage(message): + return + if (message.DESCRIPTOR.full_name == _ANY_FULL_TYPE_NAME and + self._TryPrintAsAnyMessage(message)): + return + fields = message.ListFields() + if self.use_index_order: + fields.sort( + key=lambda x: x[0].number if x[0].is_extension else x[0].index) + for field, value in fields: + if _IsMapEntry(field): + for key in sorted(value): + # This is slow for maps with submessage entries because it copies the + # entire tree. Unfortunately this would take significant refactoring + # of this file to work around. + # + # TODO(haberman): refactor and optimize if this becomes an issue. 
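+          # GetEntryClass() yields the synthetic map-entry message type, so
+          # each key/value pair can be printed like an ordinary submessage.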
+          entry_submsg = value.GetEntryClass()(key=key, value=value[key])
+          self.PrintField(field, entry_submsg)
+      elif field.label == descriptor.FieldDescriptor.LABEL_REPEATED:
+        if (self.use_short_repeated_primitives
+            and field.cpp_type != descriptor.FieldDescriptor.CPPTYPE_MESSAGE
+            and field.cpp_type != descriptor.FieldDescriptor.CPPTYPE_STRING):
+          self._PrintShortRepeatedPrimitivesValue(field, value)
+        else:
+          for element in value:
+            self.PrintField(field, element)
+      else:
+        self.PrintField(field, value)
+
+    if self.print_unknown_fields:
+      self._PrintUnknownFields(message.UnknownFields())
+
+  def _PrintUnknownFields(self, unknown_fields):
+    """Print unknown fields."""
+    out = self.out
+    for field in unknown_fields:
+      out.write(' ' * self.indent)
+      out.write(str(field.field_number))
+      if field.wire_type == WIRETYPE_START_GROUP:
+        if self.as_one_line:
+          out.write(' { ')
+        else:
+          out.write(' {\n')
+          self.indent += 2
+
+        self._PrintUnknownFields(field.data)
+
+        if self.as_one_line:
+          out.write('} ')
+        else:
+          self.indent -= 2
+          out.write(' ' * self.indent + '}\n')
+      elif field.wire_type == WIRETYPE_LENGTH_DELIMITED:
+        try:
+          # If this field is parseable as a Message, it is probably
+          # an embedded message.
+          # pylint: disable=protected-access
+          (embedded_unknown_message, pos) = decoder._DecodeUnknownFieldSet(
+              memoryview(field.data), 0, len(field.data))
+        except Exception:  # pylint: disable=broad-except
+          pos = 0
+
+        if pos == len(field.data):
+          if self.as_one_line:
+            out.write(' { ')
+          else:
+            out.write(' {\n')
+            self.indent += 2
+
+          self._PrintUnknownFields(embedded_unknown_message)
+
+          if self.as_one_line:
+            out.write('} ')
+          else:
+            self.indent -= 2
+            out.write(' ' * self.indent + '}\n')
+        else:
+          # A string or bytes field. self.as_utf8 may not work.
+          out.write(': \"')
+          out.write(text_encoding.CEscape(field.data, False))
+          out.write('\" ' if self.as_one_line else '\"\n')
+      else:
+        # varint, fixed32, fixed64
+        out.write(': ')
+        out.write(str(field.data))
+        out.write(' ' if self.as_one_line else '\n')
+
+  def _PrintFieldName(self, field):
+    """Print field name."""
+    out = self.out
+    out.write(' ' * self.indent)
+    if self.use_field_number:
+      out.write(str(field.number))
+    else:
+      if field.is_extension:
+        out.write('[')
+        if (field.containing_type.GetOptions().message_set_wire_format and
+            field.type == descriptor.FieldDescriptor.TYPE_MESSAGE and
+            field.label == descriptor.FieldDescriptor.LABEL_OPTIONAL):
+          out.write(field.message_type.full_name)
+        else:
+          out.write(field.full_name)
+        out.write(']')
+      elif field.type == descriptor.FieldDescriptor.TYPE_GROUP:
+        # For groups, use the capitalized name.
+        out.write(field.message_type.name)
+      else:
+        out.write(field.name)
+
+    if (self.force_colon or
+        field.cpp_type != descriptor.FieldDescriptor.CPPTYPE_MESSAGE):
+      # The colon is optional in this case, but our cross-language golden files
+      # don't include it. Here, the colon is only included if force_colon is
+      # set to True.
+      out.write(':')
+
+  def PrintField(self, field, value):
+    """Print a single field name/value pair."""
+    self._PrintFieldName(field)
+    self.out.write(' ')
+    self.PrintFieldValue(field, value)
+    self.out.write(' ' if self.as_one_line else '\n')
+
+  def _PrintShortRepeatedPrimitivesValue(self, field, value):
+    """Prints short repeated primitives value."""
+    # Note: this is called only when value has at least one element.
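+    # Emits e.g. "foo: [1, 2, 3]" instead of one "foo: N" line per element,
+    # mirroring the short repeated form accepted by the parser below.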
+ self._PrintFieldName(field) + self.out.write(' [') + for i in six.moves.range(len(value) - 1): + self.PrintFieldValue(field, value[i]) + self.out.write(', ') + self.PrintFieldValue(field, value[-1]) + self.out.write(']') + if self.force_colon: + self.out.write(':') + self.out.write(' ' if self.as_one_line else '\n') + + def _PrintMessageFieldValue(self, value): + if self.pointy_brackets: + openb = '<' + closeb = '>' + else: + openb = '{' + closeb = '}' + + if self.as_one_line: + self.out.write('%s ' % openb) + self.PrintMessage(value) + self.out.write(closeb) + else: + self.out.write('%s\n' % openb) + self.indent += 2 + self.PrintMessage(value) + self.indent -= 2 + self.out.write(' ' * self.indent + closeb) + + def PrintFieldValue(self, field, value): + """Print a single field value (not including name). + + For repeated fields, the value should be a single element. + + Args: + field: The descriptor of the field to be printed. + value: The value of the field. + """ + out = self.out + if field.cpp_type == descriptor.FieldDescriptor.CPPTYPE_MESSAGE: + self._PrintMessageFieldValue(value) + elif field.cpp_type == descriptor.FieldDescriptor.CPPTYPE_ENUM: + enum_value = field.enum_type.values_by_number.get(value, None) + if enum_value is not None: + out.write(enum_value.name) + else: + out.write(str(value)) + elif field.cpp_type == descriptor.FieldDescriptor.CPPTYPE_STRING: + out.write('\"') + if isinstance(value, six.text_type) and (six.PY2 or not self.as_utf8): + out_value = value.encode('utf-8') + else: + out_value = value + if field.type == descriptor.FieldDescriptor.TYPE_BYTES: + # We always need to escape all binary data in TYPE_BYTES fields. + out_as_utf8 = False + else: + out_as_utf8 = self.as_utf8 + out.write(text_encoding.CEscape(out_value, out_as_utf8)) + out.write('\"') + elif field.cpp_type == descriptor.FieldDescriptor.CPPTYPE_BOOL: + if value: + out.write('true') + else: + out.write('false') + elif field.cpp_type == descriptor.FieldDescriptor.CPPTYPE_FLOAT: + if self.float_format is not None: + out.write('{1:{0}}'.format(self.float_format, value)) + else: + if math.isnan(value): + out.write(str(value)) + else: + out.write(str(type_checkers.ToShortestFloat(value))) + elif (field.cpp_type == descriptor.FieldDescriptor.CPPTYPE_DOUBLE and + self.double_format is not None): + out.write('{1:{0}}'.format(self.double_format, value)) + else: + out.write(str(value)) + + +def Parse(text, + message, + allow_unknown_extension=False, + allow_field_number=False, + descriptor_pool=None, + allow_unknown_field=False): + """Parses a text representation of a protocol message into a message. + + NOTE: for historical reasons this function does not clear the input + message. This is different from what the binary msg.ParseFrom(...) does. + If text contains a field already set in message, the value is appended if the + field is repeated. Otherwise, an error is raised. + + Example:: + + a = MyProto() + a.repeated_field.append('test') + b = MyProto() + + # Repeated fields are combined + text_format.Parse(repr(a), b) + text_format.Parse(repr(a), b) # repeated_field contains ["test", "test"] + + # Non-repeated fields cannot be overwritten + a.singular_field = 1 + b.singular_field = 2 + text_format.Parse(repr(a), b) # ParseError + + # Binary version: + b.ParseFromString(a.SerializeToString()) # repeated_field is now "test" + + Caller is responsible for clearing the message as needed. + + Args: + text (str): Message text representation. + message (Message): A protocol buffer message to merge into. 
+    allow_unknown_extension: if True, skip over missing extensions and keep
+      parsing
+    allow_field_number: if True, both field number and field name are allowed.
+    descriptor_pool (DescriptorPool): Descriptor pool used to resolve Any types.
+    allow_unknown_field: if True, skip over unknown fields and keep
+      parsing. Avoid using this option if possible. It may hide some
+      errors (e.g. spelling error on field name)
+
+  Returns:
+    Message: The same message passed as argument.
+
+  Raises:
+    ParseError: On text parsing problems.
+  """
+  return ParseLines(text.split(b'\n' if isinstance(text, bytes) else u'\n'),
+                    message,
+                    allow_unknown_extension,
+                    allow_field_number,
+                    descriptor_pool=descriptor_pool,
+                    allow_unknown_field=allow_unknown_field)
+
+
+def Merge(text,
+          message,
+          allow_unknown_extension=False,
+          allow_field_number=False,
+          descriptor_pool=None,
+          allow_unknown_field=False):
+  """Parses a text representation of a protocol message into a message.
+
+  Like Parse(), but allows repeated values for a non-repeated field, and uses
+  the last one. This means any non-repeated, top-level fields specified in text
+  replace those in the message.
+
+  Args:
+    text (str): Message text representation.
+    message (Message): A protocol buffer message to merge into.
+    allow_unknown_extension: if True, skip over missing extensions and keep
+      parsing
+    allow_field_number: if True, both field number and field name are allowed.
+    descriptor_pool (DescriptorPool): Descriptor pool used to resolve Any types.
+    allow_unknown_field: if True, skip over unknown fields and keep
+      parsing. Avoid using this option if possible. It may hide some
+      errors (e.g. spelling error on field name)
+
+  Returns:
+    Message: The same message passed as argument.
+
+  Raises:
+    ParseError: On text parsing problems.
+  """
+  return MergeLines(
+      text.split(b'\n' if isinstance(text, bytes) else u'\n'),
+      message,
+      allow_unknown_extension,
+      allow_field_number,
+      descriptor_pool=descriptor_pool,
+      allow_unknown_field=allow_unknown_field)
+
+
+def ParseLines(lines,
+               message,
+               allow_unknown_extension=False,
+               allow_field_number=False,
+               descriptor_pool=None,
+               allow_unknown_field=False):
+  """Parses a text representation of a protocol message into a message.
+
+  See Parse() for caveats.
+
+  Args:
+    lines: An iterable of lines of a message's text representation.
+    message: A protocol buffer message to merge into.
+    allow_unknown_extension: if True, skip over missing extensions and keep
+      parsing
+    allow_field_number: if True, both field number and field name are allowed.
+    descriptor_pool: A DescriptorPool used to resolve Any types.
+    allow_unknown_field: if True, skip over unknown fields and keep
+      parsing. Avoid using this option if possible. It may hide some
+      errors (e.g. spelling error on field name)
+
+  Returns:
+    The same message passed as argument.
+
+  Raises:
+    ParseError: On text parsing problems.
+  """
+  parser = _Parser(allow_unknown_extension,
+                   allow_field_number,
+                   descriptor_pool=descriptor_pool,
+                   allow_unknown_field=allow_unknown_field)
+  return parser.ParseLines(lines, message)
+
+
+def MergeLines(lines,
+               message,
+               allow_unknown_extension=False,
+               allow_field_number=False,
+               descriptor_pool=None,
+               allow_unknown_field=False):
+  """Parses a text representation of a protocol message into a message.
+
+  See Merge() for more details.
+
+  Args:
+    lines: An iterable of lines of a message's text representation.
+    message: A protocol buffer message to merge into.
+    allow_unknown_extension: if True, skip over missing extensions and keep
+      parsing
+    allow_field_number: if True, both field number and field name are allowed.
+    descriptor_pool: A DescriptorPool used to resolve Any types.
+    allow_unknown_field: if True, skip over unknown fields and keep
+      parsing. Avoid using this option if possible. It may hide some
+      errors (e.g. spelling error on field name)
+
+  Returns:
+    The same message passed as argument.
+
+  Raises:
+    ParseError: On text parsing problems.
+  """
+  parser = _Parser(allow_unknown_extension,
+                   allow_field_number,
+                   descriptor_pool=descriptor_pool,
+                   allow_unknown_field=allow_unknown_field)
+  return parser.MergeLines(lines, message)
+
+
+class _Parser(object):
+  """Text format parser for protocol message."""
+
+  def __init__(self,
+               allow_unknown_extension=False,
+               allow_field_number=False,
+               descriptor_pool=None,
+               allow_unknown_field=False):
+    self.allow_unknown_extension = allow_unknown_extension
+    self.allow_field_number = allow_field_number
+    self.descriptor_pool = descriptor_pool
+    self.allow_unknown_field = allow_unknown_field
+
+  def ParseLines(self, lines, message):
+    """Parses a text representation of a protocol message into a message."""
+    self._allow_multiple_scalars = False
+    self._ParseOrMerge(lines, message)
+    return message
+
+  def MergeLines(self, lines, message):
+    """Merges a text representation of a protocol message into a message."""
+    self._allow_multiple_scalars = True
+    self._ParseOrMerge(lines, message)
+    return message
+
+  def _ParseOrMerge(self, lines, message):
+    """Converts a text representation of a protocol message into a message.
+
+    Args:
+      lines: Lines of a message's text representation.
+      message: A protocol buffer message to merge into.
+
+    Raises:
+      ParseError: On text parsing problems.
+    """
+    # Tokenize expects native str lines.
+    if six.PY2:
+      str_lines = (line if isinstance(line, str) else line.encode('utf-8')
+                   for line in lines)
+    else:
+      str_lines = (line if isinstance(line, str) else line.decode('utf-8')
+                   for line in lines)
+    tokenizer = Tokenizer(str_lines)
+    while not tokenizer.AtEnd():
+      self._MergeField(tokenizer, message)
+
+  def _MergeField(self, tokenizer, message):
+    """Merges a single protocol message field into a message.
+
+    Args:
+      tokenizer: A tokenizer to parse the field name and values.
+      message: A protocol message to record the data.
+
+    Raises:
+      ParseError: In case of text parsing problems.
+    """
+    message_descriptor = message.DESCRIPTOR
+    if (message_descriptor.full_name == _ANY_FULL_TYPE_NAME and
+        tokenizer.TryConsume('[')):
+      type_url_prefix, packed_type_name = self._ConsumeAnyTypeUrl(tokenizer)
+      tokenizer.Consume(']')
+      tokenizer.TryConsume(':')
+      if tokenizer.TryConsume('<'):
+        expanded_any_end_token = '>'
+      else:
+        tokenizer.Consume('{')
+        expanded_any_end_token = '}'
+      expanded_any_sub_message = _BuildMessageFromTypeName(packed_type_name,
+                                                           self.descriptor_pool)
+      if not expanded_any_sub_message:
+        raise ParseError('Type %s not found in descriptor pool' %
+                         packed_type_name)
+      while not tokenizer.TryConsume(expanded_any_end_token):
+        if tokenizer.AtEnd():
+          raise tokenizer.ParseErrorPreviousToken('Expected "%s".'
% + (expanded_any_end_token,)) + self._MergeField(tokenizer, expanded_any_sub_message) + deterministic = False + + message.Pack(expanded_any_sub_message, + type_url_prefix=type_url_prefix, + deterministic=deterministic) + return + + if tokenizer.TryConsume('['): + name = [tokenizer.ConsumeIdentifier()] + while tokenizer.TryConsume('.'): + name.append(tokenizer.ConsumeIdentifier()) + name = '.'.join(name) + + if not message_descriptor.is_extendable: + raise tokenizer.ParseErrorPreviousToken( + 'Message type "%s" does not have extensions.' % + message_descriptor.full_name) + # pylint: disable=protected-access + field = message.Extensions._FindExtensionByName(name) + # pylint: enable=protected-access + + + if not field: + if self.allow_unknown_extension: + field = None + else: + raise tokenizer.ParseErrorPreviousToken( + 'Extension "%s" not registered. ' + 'Did you import the _pb2 module which defines it? ' + 'If you are trying to place the extension in the MessageSet ' + 'field of another message that is in an Any or MessageSet field, ' + 'that message\'s _pb2 module must be imported as well' % name) + elif message_descriptor != field.containing_type: + raise tokenizer.ParseErrorPreviousToken( + 'Extension "%s" does not extend message type "%s".' % + (name, message_descriptor.full_name)) + + tokenizer.Consume(']') + + else: + name = tokenizer.ConsumeIdentifierOrNumber() + if self.allow_field_number and name.isdigit(): + number = ParseInteger(name, True, True) + field = message_descriptor.fields_by_number.get(number, None) + if not field and message_descriptor.is_extendable: + field = message.Extensions._FindExtensionByNumber(number) + else: + field = message_descriptor.fields_by_name.get(name, None) + + # Group names are expected to be capitalized as they appear in the + # .proto file, which actually matches their type names, not their field + # names. + if not field: + field = message_descriptor.fields_by_name.get(name.lower(), None) + if field and field.type != descriptor.FieldDescriptor.TYPE_GROUP: + field = None + + if (field and field.type == descriptor.FieldDescriptor.TYPE_GROUP and + field.message_type.name != name): + field = None + + if not field and not self.allow_unknown_field: + raise tokenizer.ParseErrorPreviousToken( + 'Message type "%s" has no field named "%s".' % + (message_descriptor.full_name, name)) + + if field: + if not self._allow_multiple_scalars and field.containing_oneof: + # Check if there's a different field set in this oneof. + # Note that we ignore the case if the same field was set before, and we + # apply _allow_multiple_scalars to non-scalar fields as well. + which_oneof = message.WhichOneof(field.containing_oneof.name) + if which_oneof is not None and which_oneof != field.name: + raise tokenizer.ParseErrorPreviousToken( + 'Field "%s" is specified along with field "%s", another member ' + 'of oneof "%s" for message type "%s".' % + (field.name, which_oneof, field.containing_oneof.name, + message_descriptor.full_name)) + + if field.cpp_type == descriptor.FieldDescriptor.CPPTYPE_MESSAGE: + tokenizer.TryConsume(':') + merger = self._MergeMessageField + else: + tokenizer.Consume(':') + merger = self._MergeScalarField + + if (field.label == descriptor.FieldDescriptor.LABEL_REPEATED and + tokenizer.TryConsume('[')): + # Short repeated format, e.g. 
"foo: [1, 2, 3]" + if not tokenizer.TryConsume(']'): + while True: + merger(tokenizer, message, field) + if tokenizer.TryConsume(']'): + break + tokenizer.Consume(',') + + else: + merger(tokenizer, message, field) + + else: # Proto field is unknown. + assert (self.allow_unknown_extension or self.allow_unknown_field) + _SkipFieldContents(tokenizer) + + # For historical reasons, fields may optionally be separated by commas or + # semicolons. + if not tokenizer.TryConsume(','): + tokenizer.TryConsume(';') + + + def _ConsumeAnyTypeUrl(self, tokenizer): + """Consumes a google.protobuf.Any type URL and returns the type name.""" + # Consume "type.googleapis.com/". + prefix = [tokenizer.ConsumeIdentifier()] + tokenizer.Consume('.') + prefix.append(tokenizer.ConsumeIdentifier()) + tokenizer.Consume('.') + prefix.append(tokenizer.ConsumeIdentifier()) + tokenizer.Consume('/') + # Consume the fully-qualified type name. + name = [tokenizer.ConsumeIdentifier()] + while tokenizer.TryConsume('.'): + name.append(tokenizer.ConsumeIdentifier()) + return '.'.join(prefix), '.'.join(name) + + def _MergeMessageField(self, tokenizer, message, field): + """Merges a single scalar field into a message. + + Args: + tokenizer: A tokenizer to parse the field value. + message: The message of which field is a member. + field: The descriptor of the field to be merged. + + Raises: + ParseError: In case of text parsing problems. + """ + is_map_entry = _IsMapEntry(field) + + if tokenizer.TryConsume('<'): + end_token = '>' + else: + tokenizer.Consume('{') + end_token = '}' + + if field.label == descriptor.FieldDescriptor.LABEL_REPEATED: + if field.is_extension: + sub_message = message.Extensions[field].add() + elif is_map_entry: + sub_message = getattr(message, field.name).GetEntryClass()() + else: + sub_message = getattr(message, field.name).add() + else: + if field.is_extension: + if (not self._allow_multiple_scalars and + message.HasExtension(field)): + raise tokenizer.ParseErrorPreviousToken( + 'Message type "%s" should not have multiple "%s" extensions.' % + (message.DESCRIPTOR.full_name, field.full_name)) + sub_message = message.Extensions[field] + else: + # Also apply _allow_multiple_scalars to message field. + # TODO(jieluo): Change to _allow_singular_overwrites. + if (not self._allow_multiple_scalars and + message.HasField(field.name)): + raise tokenizer.ParseErrorPreviousToken( + 'Message type "%s" should not have multiple "%s" fields.' % + (message.DESCRIPTOR.full_name, field.name)) + sub_message = getattr(message, field.name) + sub_message.SetInParent() + + while not tokenizer.TryConsume(end_token): + if tokenizer.AtEnd(): + raise tokenizer.ParseErrorPreviousToken('Expected "%s".' % (end_token,)) + self._MergeField(tokenizer, sub_message) + + if is_map_entry: + value_cpptype = field.message_type.fields_by_name['value'].cpp_type + if value_cpptype == descriptor.FieldDescriptor.CPPTYPE_MESSAGE: + value = getattr(message, field.name)[sub_message.key] + value.MergeFrom(sub_message.value) + else: + getattr(message, field.name)[sub_message.key] = sub_message.value + + @staticmethod + def _IsProto3Syntax(message): + message_descriptor = message.DESCRIPTOR + return (hasattr(message_descriptor, 'syntax') and + message_descriptor.syntax == 'proto3') + + def _MergeScalarField(self, tokenizer, message, field): + """Merges a single scalar field into a message. + + Args: + tokenizer: A tokenizer to parse the field value. + message: A protocol message to record the data. + field: The descriptor of the field to be merged. 
+  def _MergeScalarField(self, tokenizer, message, field):
+    """Merges a single scalar field into a message.
+
+    Args:
+      tokenizer: A tokenizer to parse the field value.
+      message: A protocol message to record the data.
+      field: The descriptor of the field to be merged.
+
+    Raises:
+      ParseError: In case of text parsing problems.
+      RuntimeError: On runtime errors.
+    """
+    _ = self.allow_unknown_extension
+    value = None
+
+    if field.type in (descriptor.FieldDescriptor.TYPE_INT32,
+                      descriptor.FieldDescriptor.TYPE_SINT32,
+                      descriptor.FieldDescriptor.TYPE_SFIXED32):
+      value = _ConsumeInt32(tokenizer)
+    elif field.type in (descriptor.FieldDescriptor.TYPE_INT64,
+                        descriptor.FieldDescriptor.TYPE_SINT64,
+                        descriptor.FieldDescriptor.TYPE_SFIXED64):
+      value = _ConsumeInt64(tokenizer)
+    elif field.type in (descriptor.FieldDescriptor.TYPE_UINT32,
+                        descriptor.FieldDescriptor.TYPE_FIXED32):
+      value = _ConsumeUint32(tokenizer)
+    elif field.type in (descriptor.FieldDescriptor.TYPE_UINT64,
+                        descriptor.FieldDescriptor.TYPE_FIXED64):
+      value = _ConsumeUint64(tokenizer)
+    elif field.type in (descriptor.FieldDescriptor.TYPE_FLOAT,
+                        descriptor.FieldDescriptor.TYPE_DOUBLE):
+      value = tokenizer.ConsumeFloat()
+    elif field.type == descriptor.FieldDescriptor.TYPE_BOOL:
+      value = tokenizer.ConsumeBool()
+    elif field.type == descriptor.FieldDescriptor.TYPE_STRING:
+      value = tokenizer.ConsumeString()
+    elif field.type == descriptor.FieldDescriptor.TYPE_BYTES:
+      value = tokenizer.ConsumeByteString()
+    elif field.type == descriptor.FieldDescriptor.TYPE_ENUM:
+      value = tokenizer.ConsumeEnum(field)
+    else:
+      raise RuntimeError('Unknown field type %d' % field.type)
+
+    if field.label == descriptor.FieldDescriptor.LABEL_REPEATED:
+      if field.is_extension:
+        message.Extensions[field].append(value)
+      else:
+        getattr(message, field.name).append(value)
+    else:
+      if field.is_extension:
+        if (not self._allow_multiple_scalars and
+            not self._IsProto3Syntax(message) and
+            message.HasExtension(field)):
+          raise tokenizer.ParseErrorPreviousToken(
+              'Message type "%s" should not have multiple "%s" extensions.' %
+              (message.DESCRIPTOR.full_name, field.full_name))
+        else:
+          message.Extensions[field] = value
+      else:
+        duplicate_error = False
+        if not self._allow_multiple_scalars:
+          if self._IsProto3Syntax(message):
+            # Proto3 does not track scalar field presence, so as a best effort
+            # we detect duplicates by comparing against the default value.
+            duplicate_error = bool(getattr(message, field.name))
+          else:
+            duplicate_error = message.HasField(field.name)
+
+        if duplicate_error:
+          raise tokenizer.ParseErrorPreviousToken(
+              'Message type "%s" should not have multiple "%s" fields.' %
+              (message.DESCRIPTOR.full_name, field.name))
+        else:
+          setattr(message, field.name, value)
+
+
+def _SkipFieldContents(tokenizer):
+  """Skips over the contents (value or message) of a field.
+
+  Args:
+    tokenizer: A tokenizer to parse the field name and values.
+  """
+  # Try to guess the type of this field.
+  # If this field is not a message, there should be a ":" between the field
+  # name and the field value, and the value should not start with "{" or "<",
+  # which mark the beginning of a message body.
+  # If there is no ":", or there is a "{" or "<" after ":", this field has to
+  # be a message or the input is ill-formed.
+  if tokenizer.TryConsume(':') and not tokenizer.LookingAt(
+      '{') and not tokenizer.LookingAt('<'):
+    _SkipFieldValue(tokenizer)
+  else:
+    _SkipFieldMessage(tokenizer)
+
+
+def _SkipField(tokenizer):
+  """Skips over a complete field (name and value/message).
+
+  Args:
+    tokenizer: A tokenizer to parse the field name and values.
+  """
+  if tokenizer.TryConsume('['):
+    # Consume extension name.
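+    # -------------------------------------------------------------------
+    # Editor's note: illustrative input forms (not part of the library)
+    # that the text-format parser above accepts; the extension, type and
+    # field names are hypothetical.
+    #
+    #   [my.pkg.ext_field]: 42                  # extension field
+    #   any_field { [type.googleapis.com/my.pkg.Msg] { foo: 1 } }  # Any
+    #   repeated_field: [1, 2, 3]               # short repeated form
+    # -------------------------------------------------------------------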
+ tokenizer.ConsumeIdentifier() + while tokenizer.TryConsume('.'): + tokenizer.ConsumeIdentifier() + tokenizer.Consume(']') + else: + tokenizer.ConsumeIdentifierOrNumber() + + _SkipFieldContents(tokenizer) + + # For historical reasons, fields may optionally be separated by commas or + # semicolons. + if not tokenizer.TryConsume(','): + tokenizer.TryConsume(';') + + +def _SkipFieldMessage(tokenizer): + """Skips over a field message. + + Args: + tokenizer: A tokenizer to parse the field name and values. + """ + + if tokenizer.TryConsume('<'): + delimiter = '>' + else: + tokenizer.Consume('{') + delimiter = '}' + + while not tokenizer.LookingAt('>') and not tokenizer.LookingAt('}'): + _SkipField(tokenizer) + + tokenizer.Consume(delimiter) + + +def _SkipFieldValue(tokenizer): + """Skips over a field value. + + Args: + tokenizer: A tokenizer to parse the field name and values. + + Raises: + ParseError: In case an invalid field value is found. + """ + # String/bytes tokens can come in multiple adjacent string literals. + # If we can consume one, consume as many as we can. + if tokenizer.TryConsumeByteString(): + while tokenizer.TryConsumeByteString(): + pass + return + + if (not tokenizer.TryConsumeIdentifier() and + not _TryConsumeInt64(tokenizer) and not _TryConsumeUint64(tokenizer) and + not tokenizer.TryConsumeFloat()): + raise ParseError('Invalid field value: ' + tokenizer.token) + + +class Tokenizer(object): + """Protocol buffer text representation tokenizer. + + This class handles the lower level string parsing by splitting it into + meaningful tokens. + + It was directly ported from the Java protocol buffer API. + """ + + _WHITESPACE = re.compile(r'\s+') + _COMMENT = re.compile(r'(\s*#.*$)', re.MULTILINE) + _WHITESPACE_OR_COMMENT = re.compile(r'(\s|(#.*$))+', re.MULTILINE) + _TOKEN = re.compile('|'.join([ + r'[a-zA-Z_][0-9a-zA-Z_+-]*', # an identifier + r'([0-9+-]|(\.[0-9]))[0-9a-zA-Z_.+-]*', # a number + ] + [ # quoted str for each quote mark + # Avoid backtracking! https://stackoverflow.com/a/844267 + r'{qt}[^{qt}\n\\]*((\\.)+[^{qt}\n\\]*)*({qt}|\\?$)'.format(qt=mark) + for mark in _QUOTES + ])) + + _IDENTIFIER = re.compile(r'[^\d\W]\w*') + _IDENTIFIER_OR_NUMBER = re.compile(r'\w+') + + def __init__(self, lines, skip_comments=True): + self._position = 0 + self._line = -1 + self._column = 0 + self._token_start = None + self.token = '' + self._lines = iter(lines) + self._current_line = '' + self._previous_line = 0 + self._previous_column = 0 + self._more_lines = True + self._skip_comments = skip_comments + self._whitespace_pattern = (skip_comments and self._WHITESPACE_OR_COMMENT + or self._WHITESPACE) + self._SkipWhitespace() + self.NextToken() + + def LookingAt(self, token): + return self.token == token + + def AtEnd(self): + """Checks the end of the text was reached. + + Returns: + True iff the end was reached. + """ + return not self.token + + def _PopLine(self): + while len(self._current_line) <= self._column: + try: + self._current_line = next(self._lines) + except StopIteration: + self._current_line = '' + self._more_lines = False + return + else: + self._line += 1 + self._column = 0 + + def _SkipWhitespace(self): + while True: + self._PopLine() + match = self._whitespace_pattern.match(self._current_line, self._column) + if not match: + break + length = len(match.group(0)) + self._column += length + + def TryConsume(self, token): + """Tries to consume a given piece of text. + + Args: + token: Text to consume. + + Returns: + True iff the text was consumed. 
+ """ + if self.token == token: + self.NextToken() + return True + return False + + def Consume(self, token): + """Consumes a piece of text. + + Args: + token: Text to consume. + + Raises: + ParseError: If the text couldn't be consumed. + """ + if not self.TryConsume(token): + raise self.ParseError('Expected "%s".' % token) + + def ConsumeComment(self): + result = self.token + if not self._COMMENT.match(result): + raise self.ParseError('Expected comment.') + self.NextToken() + return result + + def ConsumeCommentOrTrailingComment(self): + """Consumes a comment, returns a 2-tuple (trailing bool, comment str).""" + + # Tokenizer initializes _previous_line and _previous_column to 0. As the + # tokenizer starts, it looks like there is a previous token on the line. + just_started = self._line == 0 and self._column == 0 + + before_parsing = self._previous_line + comment = self.ConsumeComment() + + # A trailing comment is a comment on the same line than the previous token. + trailing = (self._previous_line == before_parsing + and not just_started) + + return trailing, comment + + def TryConsumeIdentifier(self): + try: + self.ConsumeIdentifier() + return True + except ParseError: + return False + + def ConsumeIdentifier(self): + """Consumes protocol message field identifier. + + Returns: + Identifier string. + + Raises: + ParseError: If an identifier couldn't be consumed. + """ + result = self.token + if not self._IDENTIFIER.match(result): + raise self.ParseError('Expected identifier.') + self.NextToken() + return result + + def TryConsumeIdentifierOrNumber(self): + try: + self.ConsumeIdentifierOrNumber() + return True + except ParseError: + return False + + def ConsumeIdentifierOrNumber(self): + """Consumes protocol message field identifier. + + Returns: + Identifier string. + + Raises: + ParseError: If an identifier couldn't be consumed. + """ + result = self.token + if not self._IDENTIFIER_OR_NUMBER.match(result): + raise self.ParseError('Expected identifier or number, got %s.' % result) + self.NextToken() + return result + + def TryConsumeInteger(self): + try: + # Note: is_long only affects value type, not whether an error is raised. + self.ConsumeInteger() + return True + except ParseError: + return False + + def ConsumeInteger(self, is_long=False): + """Consumes an integer number. + + Args: + is_long: True if the value should be returned as a long integer. + Returns: + The integer parsed. + + Raises: + ParseError: If an integer couldn't be consumed. + """ + try: + result = _ParseAbstractInteger(self.token, is_long=is_long) + except ValueError as e: + raise self.ParseError(str(e)) + self.NextToken() + return result + + def TryConsumeFloat(self): + try: + self.ConsumeFloat() + return True + except ParseError: + return False + + def ConsumeFloat(self): + """Consumes an floating point number. + + Returns: + The number parsed. + + Raises: + ParseError: If a floating point number couldn't be consumed. + """ + try: + result = ParseFloat(self.token) + except ValueError as e: + raise self.ParseError(str(e)) + self.NextToken() + return result + + def ConsumeBool(self): + """Consumes a boolean value. + + Returns: + The bool parsed. + + Raises: + ParseError: If a boolean value couldn't be consumed. 
+ """ + try: + result = ParseBool(self.token) + except ValueError as e: + raise self.ParseError(str(e)) + self.NextToken() + return result + + def TryConsumeByteString(self): + try: + self.ConsumeByteString() + return True + except ParseError: + return False + + def ConsumeString(self): + """Consumes a string value. + + Returns: + The string parsed. + + Raises: + ParseError: If a string value couldn't be consumed. + """ + the_bytes = self.ConsumeByteString() + try: + return six.text_type(the_bytes, 'utf-8') + except UnicodeDecodeError as e: + raise self._StringParseError(e) + + def ConsumeByteString(self): + """Consumes a byte array value. + + Returns: + The array parsed (as a string). + + Raises: + ParseError: If a byte array value couldn't be consumed. + """ + the_list = [self._ConsumeSingleByteString()] + while self.token and self.token[0] in _QUOTES: + the_list.append(self._ConsumeSingleByteString()) + return b''.join(the_list) + + def _ConsumeSingleByteString(self): + """Consume one token of a string literal. + + String literals (whether bytes or text) can come in multiple adjacent + tokens which are automatically concatenated, like in C or Python. This + method only consumes one token. + + Returns: + The token parsed. + Raises: + ParseError: When the wrong format data is found. + """ + text = self.token + if len(text) < 1 or text[0] not in _QUOTES: + raise self.ParseError('Expected string but found: %r' % (text,)) + + if len(text) < 2 or text[-1] != text[0]: + raise self.ParseError('String missing ending quote: %r' % (text,)) + + try: + result = text_encoding.CUnescape(text[1:-1]) + except ValueError as e: + raise self.ParseError(str(e)) + self.NextToken() + return result + + def ConsumeEnum(self, field): + try: + result = ParseEnum(field, self.token) + except ValueError as e: + raise self.ParseError(str(e)) + self.NextToken() + return result + + def ParseErrorPreviousToken(self, message): + """Creates and *returns* a ParseError for the previously read token. + + Args: + message: A message to set for the exception. + + Returns: + A ParseError instance. + """ + return ParseError(message, self._previous_line + 1, + self._previous_column + 1) + + def ParseError(self, message): + """Creates and *returns* a ParseError for the current token.""" + return ParseError('\'' + self._current_line + '\': ' + message, + self._line + 1, self._column + 1) + + def _StringParseError(self, e): + return self.ParseError('Couldn\'t parse string: ' + str(e)) + + def NextToken(self): + """Reads the next meaningful token.""" + self._previous_line = self._line + self._previous_column = self._column + + self._column += len(self.token) + self._SkipWhitespace() + + if not self._more_lines: + self.token = '' + return + + match = self._TOKEN.match(self._current_line, self._column) + if not match and not self._skip_comments: + match = self._COMMENT.match(self._current_line, self._column) + if match: + token = match.group(0) + self.token = token + else: + self.token = self._current_line[self._column] + +# Aliased so it can still be accessed by current visibility violators. +# TODO(dbarnett): Migrate violators to textformat_tokenizer. +_Tokenizer = Tokenizer # pylint: disable=invalid-name + + +def _ConsumeInt32(tokenizer): + """Consumes a signed 32bit integer number from tokenizer. + + Args: + tokenizer: A tokenizer used to parse the number. + + Returns: + The integer parsed. + + Raises: + ParseError: If a signed 32bit integer couldn't be consumed. 
+ """ + return _ConsumeInteger(tokenizer, is_signed=True, is_long=False) + + +def _ConsumeUint32(tokenizer): + """Consumes an unsigned 32bit integer number from tokenizer. + + Args: + tokenizer: A tokenizer used to parse the number. + + Returns: + The integer parsed. + + Raises: + ParseError: If an unsigned 32bit integer couldn't be consumed. + """ + return _ConsumeInteger(tokenizer, is_signed=False, is_long=False) + + +def _TryConsumeInt64(tokenizer): + try: + _ConsumeInt64(tokenizer) + return True + except ParseError: + return False + + +def _ConsumeInt64(tokenizer): + """Consumes a signed 32bit integer number from tokenizer. + + Args: + tokenizer: A tokenizer used to parse the number. + + Returns: + The integer parsed. + + Raises: + ParseError: If a signed 32bit integer couldn't be consumed. + """ + return _ConsumeInteger(tokenizer, is_signed=True, is_long=True) + + +def _TryConsumeUint64(tokenizer): + try: + _ConsumeUint64(tokenizer) + return True + except ParseError: + return False + + +def _ConsumeUint64(tokenizer): + """Consumes an unsigned 64bit integer number from tokenizer. + + Args: + tokenizer: A tokenizer used to parse the number. + + Returns: + The integer parsed. + + Raises: + ParseError: If an unsigned 64bit integer couldn't be consumed. + """ + return _ConsumeInteger(tokenizer, is_signed=False, is_long=True) + + +def _TryConsumeInteger(tokenizer, is_signed=False, is_long=False): + try: + _ConsumeInteger(tokenizer, is_signed=is_signed, is_long=is_long) + return True + except ParseError: + return False + + +def _ConsumeInteger(tokenizer, is_signed=False, is_long=False): + """Consumes an integer number from tokenizer. + + Args: + tokenizer: A tokenizer used to parse the number. + is_signed: True if a signed integer must be parsed. + is_long: True if a long integer must be parsed. + + Returns: + The integer parsed. + + Raises: + ParseError: If an integer with given characteristics couldn't be consumed. + """ + try: + result = ParseInteger(tokenizer.token, is_signed=is_signed, is_long=is_long) + except ValueError as e: + raise tokenizer.ParseError(str(e)) + tokenizer.NextToken() + return result + + +def ParseInteger(text, is_signed=False, is_long=False): + """Parses an integer. + + Args: + text: The text to parse. + is_signed: True if a signed integer must be parsed. + is_long: True if a long integer must be parsed. + + Returns: + The integer value. + + Raises: + ValueError: Thrown Iff the text is not a valid integer. + """ + # Do the actual parsing. Exception handling is propagated to caller. + result = _ParseAbstractInteger(text, is_long=is_long) + + # Check if the integer is sane. Exceptions handled by callers. + checker = _INTEGER_CHECKERS[2 * int(is_long) + int(is_signed)] + checker.CheckValue(result) + return result + + +def _ParseAbstractInteger(text, is_long=False): + """Parses an integer without checking size/signedness. + + Args: + text: The text to parse. + is_long: True if the value should be returned as a long integer. + + Returns: + The integer value. + + Raises: + ValueError: Thrown Iff the text is not a valid integer. + """ + # Do the actual parsing. Exception handling is propagated to caller. + orig_text = text + c_octal_match = re.match(r'(-?)0(\d+)$', text) + if c_octal_match: + # Python 3 no longer supports 0755 octal syntax without the 'o', so + # we always use the '0o' prefix for multi-digit numbers starting with 0. 
+def _ParseAbstractInteger(text, is_long=False):
+  """Parses an integer without checking size/signedness.
+
+  Args:
+    text: The text to parse.
+    is_long: True if the value should be returned as a long integer.
+
+  Returns:
+    The integer value.
+
+  Raises:
+    ValueError: Raised if the text is not a valid integer.
+  """
+  # Do the actual parsing. Exception handling is propagated to the caller.
+  orig_text = text
+  c_octal_match = re.match(r'(-?)0(\d+)$', text)
+  if c_octal_match:
+    # Python 3 no longer supports 0755 octal syntax without the 'o', so
+    # we always use the '0o' prefix for multi-digit numbers starting with 0.
+    text = c_octal_match.group(1) + '0o' + c_octal_match.group(2)
+  try:
+    # We force 32-bit values to int and 64-bit values to long to make
+    # alternate implementations where the distinction is more significant
+    # (e.g. the C++ implementation) simpler.
+    if is_long:
+      return long(text, 0)
+    else:
+      return int(text, 0)
+  except ValueError:
+    raise ValueError('Couldn\'t parse integer: %s' % orig_text)
+
+
+def ParseFloat(text):
+  """Parses a floating point number.
+
+  Args:
+    text: Text to parse.
+
+  Returns:
+    The number parsed.
+
+  Raises:
+    ValueError: If a floating point number couldn't be parsed.
+  """
+  try:
+    # Assume Python compatible syntax.
+    return float(text)
+  except ValueError:
+    # Check alternative spellings.
+    if _FLOAT_INFINITY.match(text):
+      if text[0] == '-':
+        return float('-inf')
+      else:
+        return float('inf')
+    elif _FLOAT_NAN.match(text):
+      return float('nan')
+    else:
+      # Assume '1.0f' format.
+      try:
+        return float(text.rstrip('f'))
+      except ValueError:
+        raise ValueError('Couldn\'t parse float: %s' % text)
+
+
+def ParseBool(text):
+  """Parses a boolean value.
+
+  Args:
+    text: Text to parse.
+
+  Returns:
+    The boolean value parsed.
+
+  Raises:
+    ValueError: If text is not a valid boolean.
+  """
+  if text in ('true', 't', '1', 'True'):
+    return True
+  elif text in ('false', 'f', '0', 'False'):
+    return False
+  else:
+    raise ValueError('Expected "true" or "false".')
+
+
+def ParseEnum(field, value):
+  """Parses an enum value.
+
+  The value can be specified by a number (the enum value), or by
+  a string literal (the enum name).
+
+  Args:
+    field: Enum field descriptor.
+    value: String value.
+
+  Returns:
+    Enum value number.
+
+  Raises:
+    ValueError: If the enum value could not be parsed.
+  """
+  enum_descriptor = field.enum_type
+  try:
+    number = int(value, 0)
+  except ValueError:
+    # Identifier.
+    enum_value = enum_descriptor.values_by_name.get(value, None)
+    if enum_value is None:
+      raise ValueError('Enum type "%s" has no value named %s.' %
+                       (enum_descriptor.full_name, value))
+  else:
+    # Numeric value.
+    if hasattr(field.file, 'syntax'):
+      # Attribute is checked for compatibility.
+      if field.file.syntax == 'proto3':
+        # Proto3 accepts unknown numeric enum values.
+        return number
+    enum_value = enum_descriptor.values_by_number.get(number, None)
+    if enum_value is None:
+      raise ValueError('Enum type "%s" has no value with number %d.' %
+                       (enum_descriptor.full_name, number))
+  return enum_value.number
diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/timestamp_pb2.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/timestamp_pb2.py
new file mode 100644
index 0000000000000000000000000000000000000000..6fb22d2c0e6be48524b9923ae39a06480627b8fa
--- /dev/null
+++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/timestamp_pb2.py
@@ -0,0 +1,78 @@
+# -*- coding: utf-8 -*-
+# Generated by the protocol buffer compiler. DO NOT EDIT!
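+# ---------------------------------------------------------------------------
+# Editor's note on the text_format helpers that end just above (illustrative
+# values only):
+#
+#   ParseFloat('inf')    # -> float('inf'); '-inf' and 'nan' also accepted
+#   ParseFloat('1.5f')   # -> 1.5, a trailing 'f' is tolerated
+#   ParseBool('t')       # -> True; 'true', '1' and 'True' also accepted
+#   # ParseEnum accepts either the enum value name or its number; under
+#   # proto3 an unknown number is returned as-is rather than rejected.
+# ---------------------------------------------------------------------------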
+# source: google/protobuf/timestamp.proto +"""Generated protocol buffer code.""" +from google.protobuf import descriptor as _descriptor +from google.protobuf import message as _message +from google.protobuf import reflection as _reflection +from google.protobuf import symbol_database as _symbol_database +# @@protoc_insertion_point(imports) + +_sym_db = _symbol_database.Default() + + + + +DESCRIPTOR = _descriptor.FileDescriptor( + name='google/protobuf/timestamp.proto', + package='google.protobuf', + syntax='proto3', + serialized_options=b'\n\023com.google.protobufB\016TimestampProtoP\001Z2google.golang.org/protobuf/types/known/timestamppb\370\001\001\242\002\003GPB\252\002\036Google.Protobuf.WellKnownTypes', + create_key=_descriptor._internal_create_key, + serialized_pb=b'\n\x1fgoogle/protobuf/timestamp.proto\x12\x0fgoogle.protobuf\"+\n\tTimestamp\x12\x0f\n\x07seconds\x18\x01 \x01(\x03\x12\r\n\x05nanos\x18\x02 \x01(\x05\x42\x85\x01\n\x13\x63om.google.protobufB\x0eTimestampProtoP\x01Z2google.golang.org/protobuf/types/known/timestamppb\xf8\x01\x01\xa2\x02\x03GPB\xaa\x02\x1eGoogle.Protobuf.WellKnownTypesb\x06proto3' +) + + + + +_TIMESTAMP = _descriptor.Descriptor( + name='Timestamp', + full_name='google.protobuf.Timestamp', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='seconds', full_name='google.protobuf.Timestamp.seconds', index=0, + number=1, type=3, cpp_type=2, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='nanos', full_name='google.protobuf.Timestamp.nanos', index=1, + number=2, type=5, cpp_type=1, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=None, + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=52, + serialized_end=95, +) + +DESCRIPTOR.message_types_by_name['Timestamp'] = _TIMESTAMP +_sym_db.RegisterFileDescriptor(DESCRIPTOR) + +Timestamp = _reflection.GeneratedProtocolMessageType('Timestamp', (_message.Message,), { + 'DESCRIPTOR' : _TIMESTAMP, + '__module__' : 'google.protobuf.timestamp_pb2' + # @@protoc_insertion_point(class_scope:google.protobuf.Timestamp) + }) +_sym_db.RegisterMessage(Timestamp) + + +DESCRIPTOR._options = None +# @@protoc_insertion_point(module_scope) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/type_pb2.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/type_pb2.py new file mode 100644 index 0000000000000000000000000000000000000000..bec1a1e105bd7e36d63830d345623c40085e6acf --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/type_pb2.py @@ -0,0 +1,573 @@ +# -*- coding: utf-8 -*- +# Generated by the protocol buffer compiler. DO NOT EDIT! 
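+# ---------------------------------------------------------------------------
+# Editor's note on timestamp_pb2 above: Timestamp is a well-known type, so
+# the generated class gains helper methods from
+# google.protobuf.internal.well_known_types (a sketch, values illustrative):
+#
+#   from google.protobuf import timestamp_pb2
+#
+#   ts = timestamp_pb2.Timestamp()
+#   ts.GetCurrentTime()          # sets seconds/nanos from the system clock
+#   ts.ToJsonString()            # e.g. '2021-01-01T00:00:00Z'
+#   ts.FromJsonString('1970-01-01T00:00:00Z')  # seconds == 0, nanos == 0
+# ---------------------------------------------------------------------------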
+# source: google/protobuf/type.proto +"""Generated protocol buffer code.""" +from google.protobuf.internal import enum_type_wrapper +from google.protobuf import descriptor as _descriptor +from google.protobuf import message as _message +from google.protobuf import reflection as _reflection +from google.protobuf import symbol_database as _symbol_database +# @@protoc_insertion_point(imports) + +_sym_db = _symbol_database.Default() + + +from google.protobuf import any_pb2 as google_dot_protobuf_dot_any__pb2 +from google.protobuf import source_context_pb2 as google_dot_protobuf_dot_source__context__pb2 + + +DESCRIPTOR = _descriptor.FileDescriptor( + name='google/protobuf/type.proto', + package='google.protobuf', + syntax='proto3', + serialized_options=b'\n\023com.google.protobufB\tTypeProtoP\001Z-google.golang.org/protobuf/types/known/typepb\370\001\001\242\002\003GPB\252\002\036Google.Protobuf.WellKnownTypes', + create_key=_descriptor._internal_create_key, + serialized_pb=b'\n\x1agoogle/protobuf/type.proto\x12\x0fgoogle.protobuf\x1a\x19google/protobuf/any.proto\x1a$google/protobuf/source_context.proto\"\xd7\x01\n\x04Type\x12\x0c\n\x04name\x18\x01 \x01(\t\x12&\n\x06\x66ields\x18\x02 \x03(\x0b\x32\x16.google.protobuf.Field\x12\x0e\n\x06oneofs\x18\x03 \x03(\t\x12(\n\x07options\x18\x04 \x03(\x0b\x32\x17.google.protobuf.Option\x12\x36\n\x0esource_context\x18\x05 \x01(\x0b\x32\x1e.google.protobuf.SourceContext\x12\'\n\x06syntax\x18\x06 \x01(\x0e\x32\x17.google.protobuf.Syntax\"\xd5\x05\n\x05\x46ield\x12)\n\x04kind\x18\x01 \x01(\x0e\x32\x1b.google.protobuf.Field.Kind\x12\x37\n\x0b\x63\x61rdinality\x18\x02 \x01(\x0e\x32\".google.protobuf.Field.Cardinality\x12\x0e\n\x06number\x18\x03 \x01(\x05\x12\x0c\n\x04name\x18\x04 \x01(\t\x12\x10\n\x08type_url\x18\x06 \x01(\t\x12\x13\n\x0boneof_index\x18\x07 \x01(\x05\x12\x0e\n\x06packed\x18\x08 \x01(\x08\x12(\n\x07options\x18\t \x03(\x0b\x32\x17.google.protobuf.Option\x12\x11\n\tjson_name\x18\n \x01(\t\x12\x15\n\rdefault_value\x18\x0b \x01(\t\"\xc8\x02\n\x04Kind\x12\x10\n\x0cTYPE_UNKNOWN\x10\x00\x12\x0f\n\x0bTYPE_DOUBLE\x10\x01\x12\x0e\n\nTYPE_FLOAT\x10\x02\x12\x0e\n\nTYPE_INT64\x10\x03\x12\x0f\n\x0bTYPE_UINT64\x10\x04\x12\x0e\n\nTYPE_INT32\x10\x05\x12\x10\n\x0cTYPE_FIXED64\x10\x06\x12\x10\n\x0cTYPE_FIXED32\x10\x07\x12\r\n\tTYPE_BOOL\x10\x08\x12\x0f\n\x0bTYPE_STRING\x10\t\x12\x0e\n\nTYPE_GROUP\x10\n\x12\x10\n\x0cTYPE_MESSAGE\x10\x0b\x12\x0e\n\nTYPE_BYTES\x10\x0c\x12\x0f\n\x0bTYPE_UINT32\x10\r\x12\r\n\tTYPE_ENUM\x10\x0e\x12\x11\n\rTYPE_SFIXED32\x10\x0f\x12\x11\n\rTYPE_SFIXED64\x10\x10\x12\x0f\n\x0bTYPE_SINT32\x10\x11\x12\x0f\n\x0bTYPE_SINT64\x10\x12\"t\n\x0b\x43\x61rdinality\x12\x17\n\x13\x43\x41RDINALITY_UNKNOWN\x10\x00\x12\x18\n\x14\x43\x41RDINALITY_OPTIONAL\x10\x01\x12\x18\n\x14\x43\x41RDINALITY_REQUIRED\x10\x02\x12\x18\n\x14\x43\x41RDINALITY_REPEATED\x10\x03\"\xce\x01\n\x04\x45num\x12\x0c\n\x04name\x18\x01 \x01(\t\x12-\n\tenumvalue\x18\x02 \x03(\x0b\x32\x1a.google.protobuf.EnumValue\x12(\n\x07options\x18\x03 \x03(\x0b\x32\x17.google.protobuf.Option\x12\x36\n\x0esource_context\x18\x04 \x01(\x0b\x32\x1e.google.protobuf.SourceContext\x12\'\n\x06syntax\x18\x05 \x01(\x0e\x32\x17.google.protobuf.Syntax\"S\n\tEnumValue\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x0e\n\x06number\x18\x02 \x01(\x05\x12(\n\x07options\x18\x03 \x03(\x0b\x32\x17.google.protobuf.Option\";\n\x06Option\x12\x0c\n\x04name\x18\x01 \x01(\t\x12#\n\x05value\x18\x02 
\x01(\x0b\x32\x14.google.protobuf.Any*.\n\x06Syntax\x12\x11\n\rSYNTAX_PROTO2\x10\x00\x12\x11\n\rSYNTAX_PROTO3\x10\x01\x42{\n\x13\x63om.google.protobufB\tTypeProtoP\x01Z-google.golang.org/protobuf/types/known/typepb\xf8\x01\x01\xa2\x02\x03GPB\xaa\x02\x1eGoogle.Protobuf.WellKnownTypesb\x06proto3' + , + dependencies=[google_dot_protobuf_dot_any__pb2.DESCRIPTOR,google_dot_protobuf_dot_source__context__pb2.DESCRIPTOR,]) + +_SYNTAX = _descriptor.EnumDescriptor( + name='Syntax', + full_name='google.protobuf.Syntax', + filename=None, + file=DESCRIPTOR, + create_key=_descriptor._internal_create_key, + values=[ + _descriptor.EnumValueDescriptor( + name='SYNTAX_PROTO2', index=0, number=0, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + _descriptor.EnumValueDescriptor( + name='SYNTAX_PROTO3', index=1, number=1, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + ], + containing_type=None, + serialized_options=None, + serialized_start=1413, + serialized_end=1459, +) +_sym_db.RegisterEnumDescriptor(_SYNTAX) + +Syntax = enum_type_wrapper.EnumTypeWrapper(_SYNTAX) +SYNTAX_PROTO2 = 0 +SYNTAX_PROTO3 = 1 + + +_FIELD_KIND = _descriptor.EnumDescriptor( + name='Kind', + full_name='google.protobuf.Field.Kind', + filename=None, + file=DESCRIPTOR, + create_key=_descriptor._internal_create_key, + values=[ + _descriptor.EnumValueDescriptor( + name='TYPE_UNKNOWN', index=0, number=0, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + _descriptor.EnumValueDescriptor( + name='TYPE_DOUBLE', index=1, number=1, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + _descriptor.EnumValueDescriptor( + name='TYPE_FLOAT', index=2, number=2, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + _descriptor.EnumValueDescriptor( + name='TYPE_INT64', index=3, number=3, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + _descriptor.EnumValueDescriptor( + name='TYPE_UINT64', index=4, number=4, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + _descriptor.EnumValueDescriptor( + name='TYPE_INT32', index=5, number=5, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + _descriptor.EnumValueDescriptor( + name='TYPE_FIXED64', index=6, number=6, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + _descriptor.EnumValueDescriptor( + name='TYPE_FIXED32', index=7, number=7, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + _descriptor.EnumValueDescriptor( + name='TYPE_BOOL', index=8, number=8, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + _descriptor.EnumValueDescriptor( + name='TYPE_STRING', index=9, number=9, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + _descriptor.EnumValueDescriptor( + name='TYPE_GROUP', index=10, number=10, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + _descriptor.EnumValueDescriptor( + name='TYPE_MESSAGE', index=11, number=11, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + _descriptor.EnumValueDescriptor( + name='TYPE_BYTES', index=12, number=12, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + _descriptor.EnumValueDescriptor( + 
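+    # Editor's note (illustrative, assuming standard generated-code behavior):
+    # the EnumDescriptors registered in this module are exposed through
+    # EnumTypeWrapper, e.g.:
+    #
+    #   from google.protobuf import type_pb2
+    #
+    #   type_pb2.Syntax.Name(1)                    # -> 'SYNTAX_PROTO3'
+    #   type_pb2.SYNTAX_PROTO3                     # -> 1
+    #   type_pb2.Field.Kind.Value('TYPE_UINT32')   # -> 13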
name='TYPE_UINT32', index=13, number=13, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + _descriptor.EnumValueDescriptor( + name='TYPE_ENUM', index=14, number=14, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + _descriptor.EnumValueDescriptor( + name='TYPE_SFIXED32', index=15, number=15, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + _descriptor.EnumValueDescriptor( + name='TYPE_SFIXED64', index=16, number=16, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + _descriptor.EnumValueDescriptor( + name='TYPE_SINT32', index=17, number=17, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + _descriptor.EnumValueDescriptor( + name='TYPE_SINT64', index=18, number=18, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + ], + containing_type=None, + serialized_options=None, + serialized_start=610, + serialized_end=938, +) +_sym_db.RegisterEnumDescriptor(_FIELD_KIND) + +_FIELD_CARDINALITY = _descriptor.EnumDescriptor( + name='Cardinality', + full_name='google.protobuf.Field.Cardinality', + filename=None, + file=DESCRIPTOR, + create_key=_descriptor._internal_create_key, + values=[ + _descriptor.EnumValueDescriptor( + name='CARDINALITY_UNKNOWN', index=0, number=0, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + _descriptor.EnumValueDescriptor( + name='CARDINALITY_OPTIONAL', index=1, number=1, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + _descriptor.EnumValueDescriptor( + name='CARDINALITY_REQUIRED', index=2, number=2, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + _descriptor.EnumValueDescriptor( + name='CARDINALITY_REPEATED', index=3, number=3, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + ], + containing_type=None, + serialized_options=None, + serialized_start=940, + serialized_end=1056, +) +_sym_db.RegisterEnumDescriptor(_FIELD_CARDINALITY) + + +_TYPE = _descriptor.Descriptor( + name='Type', + full_name='google.protobuf.Type', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='name', full_name='google.protobuf.Type.name', index=0, + number=1, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='fields', full_name='google.protobuf.Type.fields', index=1, + number=2, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='oneofs', full_name='google.protobuf.Type.oneofs', index=2, + number=3, type=9, cpp_type=9, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + 
_descriptor.FieldDescriptor( + name='options', full_name='google.protobuf.Type.options', index=3, + number=4, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='source_context', full_name='google.protobuf.Type.source_context', index=4, + number=5, type=11, cpp_type=10, label=1, + has_default_value=False, default_value=None, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='syntax', full_name='google.protobuf.Type.syntax', index=5, + number=6, type=14, cpp_type=8, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=None, + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=113, + serialized_end=328, +) + + +_FIELD = _descriptor.Descriptor( + name='Field', + full_name='google.protobuf.Field', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='kind', full_name='google.protobuf.Field.kind', index=0, + number=1, type=14, cpp_type=8, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='cardinality', full_name='google.protobuf.Field.cardinality', index=1, + number=2, type=14, cpp_type=8, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='number', full_name='google.protobuf.Field.number', index=2, + number=3, type=5, cpp_type=1, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='name', full_name='google.protobuf.Field.name', index=3, + number=4, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='type_url', full_name='google.protobuf.Field.type_url', index=4, + number=6, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + 
name='oneof_index', full_name='google.protobuf.Field.oneof_index', index=5, + number=7, type=5, cpp_type=1, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='packed', full_name='google.protobuf.Field.packed', index=6, + number=8, type=8, cpp_type=7, label=1, + has_default_value=False, default_value=False, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='options', full_name='google.protobuf.Field.options', index=7, + number=9, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='json_name', full_name='google.protobuf.Field.json_name', index=8, + number=10, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='default_value', full_name='google.protobuf.Field.default_value', index=9, + number=11, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + _FIELD_KIND, + _FIELD_CARDINALITY, + ], + serialized_options=None, + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=331, + serialized_end=1056, +) + + +_ENUM = _descriptor.Descriptor( + name='Enum', + full_name='google.protobuf.Enum', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='name', full_name='google.protobuf.Enum.name', index=0, + number=1, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='enumvalue', full_name='google.protobuf.Enum.enumvalue', index=1, + number=2, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='options', full_name='google.protobuf.Enum.options', index=2, + number=3, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + 
_descriptor.FieldDescriptor( + name='source_context', full_name='google.protobuf.Enum.source_context', index=3, + number=4, type=11, cpp_type=10, label=1, + has_default_value=False, default_value=None, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='syntax', full_name='google.protobuf.Enum.syntax', index=4, + number=5, type=14, cpp_type=8, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=None, + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=1059, + serialized_end=1265, +) + + +_ENUMVALUE = _descriptor.Descriptor( + name='EnumValue', + full_name='google.protobuf.EnumValue', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='name', full_name='google.protobuf.EnumValue.name', index=0, + number=1, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='number', full_name='google.protobuf.EnumValue.number', index=1, + number=2, type=5, cpp_type=1, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='options', full_name='google.protobuf.EnumValue.options', index=2, + number=3, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=None, + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=1267, + serialized_end=1350, +) + + +_OPTION = _descriptor.Descriptor( + name='Option', + full_name='google.protobuf.Option', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='name', full_name='google.protobuf.Option.name', index=0, + number=1, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='value', full_name='google.protobuf.Option.value', index=1, + number=2, type=11, cpp_type=10, label=1, + has_default_value=False, default_value=None, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, 
create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=None, + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=1352, + serialized_end=1411, +) + +_TYPE.fields_by_name['fields'].message_type = _FIELD +_TYPE.fields_by_name['options'].message_type = _OPTION +_TYPE.fields_by_name['source_context'].message_type = google_dot_protobuf_dot_source__context__pb2._SOURCECONTEXT +_TYPE.fields_by_name['syntax'].enum_type = _SYNTAX +_FIELD.fields_by_name['kind'].enum_type = _FIELD_KIND +_FIELD.fields_by_name['cardinality'].enum_type = _FIELD_CARDINALITY +_FIELD.fields_by_name['options'].message_type = _OPTION +_FIELD_KIND.containing_type = _FIELD +_FIELD_CARDINALITY.containing_type = _FIELD +_ENUM.fields_by_name['enumvalue'].message_type = _ENUMVALUE +_ENUM.fields_by_name['options'].message_type = _OPTION +_ENUM.fields_by_name['source_context'].message_type = google_dot_protobuf_dot_source__context__pb2._SOURCECONTEXT +_ENUM.fields_by_name['syntax'].enum_type = _SYNTAX +_ENUMVALUE.fields_by_name['options'].message_type = _OPTION +_OPTION.fields_by_name['value'].message_type = google_dot_protobuf_dot_any__pb2._ANY +DESCRIPTOR.message_types_by_name['Type'] = _TYPE +DESCRIPTOR.message_types_by_name['Field'] = _FIELD +DESCRIPTOR.message_types_by_name['Enum'] = _ENUM +DESCRIPTOR.message_types_by_name['EnumValue'] = _ENUMVALUE +DESCRIPTOR.message_types_by_name['Option'] = _OPTION +DESCRIPTOR.enum_types_by_name['Syntax'] = _SYNTAX +_sym_db.RegisterFileDescriptor(DESCRIPTOR) + +Type = _reflection.GeneratedProtocolMessageType('Type', (_message.Message,), { + 'DESCRIPTOR' : _TYPE, + '__module__' : 'google.protobuf.type_pb2' + # @@protoc_insertion_point(class_scope:google.protobuf.Type) + }) +_sym_db.RegisterMessage(Type) + +Field = _reflection.GeneratedProtocolMessageType('Field', (_message.Message,), { + 'DESCRIPTOR' : _FIELD, + '__module__' : 'google.protobuf.type_pb2' + # @@protoc_insertion_point(class_scope:google.protobuf.Field) + }) +_sym_db.RegisterMessage(Field) + +Enum = _reflection.GeneratedProtocolMessageType('Enum', (_message.Message,), { + 'DESCRIPTOR' : _ENUM, + '__module__' : 'google.protobuf.type_pb2' + # @@protoc_insertion_point(class_scope:google.protobuf.Enum) + }) +_sym_db.RegisterMessage(Enum) + +EnumValue = _reflection.GeneratedProtocolMessageType('EnumValue', (_message.Message,), { + 'DESCRIPTOR' : _ENUMVALUE, + '__module__' : 'google.protobuf.type_pb2' + # @@protoc_insertion_point(class_scope:google.protobuf.EnumValue) + }) +_sym_db.RegisterMessage(EnumValue) + +Option = _reflection.GeneratedProtocolMessageType('Option', (_message.Message,), { + 'DESCRIPTOR' : _OPTION, + '__module__' : 'google.protobuf.type_pb2' + # @@protoc_insertion_point(class_scope:google.protobuf.Option) + }) +_sym_db.RegisterMessage(Option) + + +DESCRIPTOR._options = None +# @@protoc_insertion_point(module_scope) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/util/__init__.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/util/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/util/json_format_pb2.py 
b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/util/json_format_pb2.py new file mode 100644 index 0000000000000000000000000000000000000000..72be1609f36e42f94ebf9c83459cb18d7ea6857f --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/util/json_format_pb2.py @@ -0,0 +1,983 @@ +# -*- coding: utf-8 -*- +# Generated by the protocol buffer compiler. DO NOT EDIT! +# source: google/protobuf/util/json_format.proto +"""Generated protocol buffer code.""" +from google.protobuf import descriptor as _descriptor +from google.protobuf import message as _message +from google.protobuf import reflection as _reflection +from google.protobuf import symbol_database as _symbol_database +# @@protoc_insertion_point(imports) + +_sym_db = _symbol_database.Default() + + + + +DESCRIPTOR = _descriptor.FileDescriptor( + name='google/protobuf/util/json_format.proto', + package='protobuf_unittest', + syntax='proto2', + serialized_options=None, + create_key=_descriptor._internal_create_key, + serialized_pb=b'\n&google/protobuf/util/json_format.proto\x12\x11protobuf_unittest\"\x89\x01\n\x13TestFlagsAndStrings\x12\t\n\x01\x41\x18\x01 \x02(\x05\x12K\n\rrepeatedgroup\x18\x02 \x03(\n24.protobuf_unittest.TestFlagsAndStrings.RepeatedGroup\x1a\x1a\n\rRepeatedGroup\x12\t\n\x01\x66\x18\x03 \x02(\t\"!\n\x14TestBase64ByteArrays\x12\t\n\x01\x61\x18\x01 \x02(\x0c\"G\n\x12TestJavaScriptJSON\x12\t\n\x01\x61\x18\x01 \x01(\x05\x12\r\n\x05\x66inal\x18\x02 \x01(\x02\x12\n\n\x02in\x18\x03 \x01(\t\x12\x0b\n\x03Var\x18\x04 \x01(\t\"Q\n\x18TestJavaScriptOrderJSON1\x12\t\n\x01\x64\x18\x01 \x01(\x05\x12\t\n\x01\x63\x18\x02 \x01(\x05\x12\t\n\x01x\x18\x03 \x01(\x08\x12\t\n\x01\x62\x18\x04 \x01(\x05\x12\t\n\x01\x61\x18\x05 \x01(\x05\"\x89\x01\n\x18TestJavaScriptOrderJSON2\x12\t\n\x01\x64\x18\x01 \x01(\x05\x12\t\n\x01\x63\x18\x02 \x01(\x05\x12\t\n\x01x\x18\x03 \x01(\x08\x12\t\n\x01\x62\x18\x04 \x01(\x05\x12\t\n\x01\x61\x18\x05 \x01(\x05\x12\x36\n\x01z\x18\x06 \x03(\x0b\x32+.protobuf_unittest.TestJavaScriptOrderJSON1\"$\n\x0cTestLargeInt\x12\t\n\x01\x61\x18\x01 \x02(\x03\x12\t\n\x01\x62\x18\x02 \x02(\x04\"\xa0\x01\n\x0bTestNumbers\x12\x30\n\x01\x61\x18\x01 \x01(\x0e\x32%.protobuf_unittest.TestNumbers.MyType\x12\t\n\x01\x62\x18\x02 \x01(\x05\x12\t\n\x01\x63\x18\x03 \x01(\x02\x12\t\n\x01\x64\x18\x04 \x01(\x08\x12\t\n\x01\x65\x18\x05 \x01(\x01\x12\t\n\x01\x66\x18\x06 \x01(\r\"(\n\x06MyType\x12\x06\n\x02OK\x10\x00\x12\x0b\n\x07WARNING\x10\x01\x12\t\n\x05\x45RROR\x10\x02\"T\n\rTestCamelCase\x12\x14\n\x0cnormal_field\x18\x01 \x01(\t\x12\x15\n\rCAPITAL_FIELD\x18\x02 \x01(\x05\x12\x16\n\x0e\x43\x61melCaseField\x18\x03 \x01(\x05\"|\n\x0bTestBoolMap\x12=\n\x08\x62ool_map\x18\x01 \x03(\x0b\x32+.protobuf_unittest.TestBoolMap.BoolMapEntry\x1a.\n\x0c\x42oolMapEntry\x12\x0b\n\x03key\x18\x01 \x01(\x08\x12\r\n\x05value\x18\x02 \x01(\x05:\x02\x38\x01\"O\n\rTestRecursion\x12\r\n\x05value\x18\x01 \x01(\x05\x12/\n\x05\x63hild\x18\x02 \x01(\x0b\x32 .protobuf_unittest.TestRecursion\"\x86\x01\n\rTestStringMap\x12\x43\n\nstring_map\x18\x01 \x03(\x0b\x32/.protobuf_unittest.TestStringMap.StringMapEntry\x1a\x30\n\x0eStringMapEntry\x12\x0b\n\x03key\x18\x01 \x01(\t\x12\r\n\x05value\x18\x02 \x01(\t:\x02\x38\x01\"\xc4\x01\n\x14TestStringSerializer\x12\x15\n\rscalar_string\x18\x01 \x01(\t\x12\x17\n\x0frepeated_string\x18\x02 \x03(\t\x12J\n\nstring_map\x18\x03 
\x03(\x0b\x32\x36.protobuf_unittest.TestStringSerializer.StringMapEntry\x1a\x30\n\x0eStringMapEntry\x12\x0b\n\x03key\x18\x01 \x01(\t\x12\r\n\x05value\x18\x02 \x01(\t:\x02\x38\x01\"$\n\x18TestMessageWithExtension*\x08\x08\x64\x10\x80\x80\x80\x80\x02\"z\n\rTestExtension\x12\r\n\x05value\x18\x01 \x01(\t2Z\n\x03\x65xt\x12+.protobuf_unittest.TestMessageWithExtension\x18\x64 \x01(\x0b\x32 .protobuf_unittest.TestExtension' +) + + + +_TESTNUMBERS_MYTYPE = _descriptor.EnumDescriptor( + name='MyType', + full_name='protobuf_unittest.TestNumbers.MyType', + filename=None, + file=DESCRIPTOR, + create_key=_descriptor._internal_create_key, + values=[ + _descriptor.EnumValueDescriptor( + name='OK', index=0, number=0, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + _descriptor.EnumValueDescriptor( + name='WARNING', index=1, number=1, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + _descriptor.EnumValueDescriptor( + name='ERROR', index=2, number=2, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + ], + containing_type=None, + serialized_options=None, + serialized_start=691, + serialized_end=731, +) +_sym_db.RegisterEnumDescriptor(_TESTNUMBERS_MYTYPE) + + +_TESTFLAGSANDSTRINGS_REPEATEDGROUP = _descriptor.Descriptor( + name='RepeatedGroup', + full_name='protobuf_unittest.TestFlagsAndStrings.RepeatedGroup', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='f', full_name='protobuf_unittest.TestFlagsAndStrings.RepeatedGroup.f', index=0, + number=3, type=9, cpp_type=9, label=2, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=None, + is_extendable=False, + syntax='proto2', + extension_ranges=[], + oneofs=[ + ], + serialized_start=173, + serialized_end=199, +) + +_TESTFLAGSANDSTRINGS = _descriptor.Descriptor( + name='TestFlagsAndStrings', + full_name='protobuf_unittest.TestFlagsAndStrings', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='A', full_name='protobuf_unittest.TestFlagsAndStrings.A', index=0, + number=1, type=5, cpp_type=1, label=2, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='repeatedgroup', full_name='protobuf_unittest.TestFlagsAndStrings.repeatedgroup', index=1, + number=2, type=10, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[_TESTFLAGSANDSTRINGS_REPEATEDGROUP, ], + enum_types=[ + ], + serialized_options=None, + is_extendable=False, + syntax='proto2', + extension_ranges=[], + oneofs=[ + ], + serialized_start=62, + serialized_end=199, +) + + +_TESTBASE64BYTEARRAYS = _descriptor.Descriptor( + 
name='TestBase64ByteArrays', + full_name='protobuf_unittest.TestBase64ByteArrays', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='a', full_name='protobuf_unittest.TestBase64ByteArrays.a', index=0, + number=1, type=12, cpp_type=9, label=2, + has_default_value=False, default_value=b"", + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=None, + is_extendable=False, + syntax='proto2', + extension_ranges=[], + oneofs=[ + ], + serialized_start=201, + serialized_end=234, +) + + +_TESTJAVASCRIPTJSON = _descriptor.Descriptor( + name='TestJavaScriptJSON', + full_name='protobuf_unittest.TestJavaScriptJSON', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='a', full_name='protobuf_unittest.TestJavaScriptJSON.a', index=0, + number=1, type=5, cpp_type=1, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='final', full_name='protobuf_unittest.TestJavaScriptJSON.final', index=1, + number=2, type=2, cpp_type=6, label=1, + has_default_value=False, default_value=float(0), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='in', full_name='protobuf_unittest.TestJavaScriptJSON.in', index=2, + number=3, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='Var', full_name='protobuf_unittest.TestJavaScriptJSON.Var', index=3, + number=4, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=None, + is_extendable=False, + syntax='proto2', + extension_ranges=[], + oneofs=[ + ], + serialized_start=236, + serialized_end=307, +) + + +_TESTJAVASCRIPTORDERJSON1 = _descriptor.Descriptor( + name='TestJavaScriptOrderJSON1', + full_name='protobuf_unittest.TestJavaScriptOrderJSON1', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='d', full_name='protobuf_unittest.TestJavaScriptOrderJSON1.d', index=0, + number=1, type=5, cpp_type=1, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + 
name='c', full_name='protobuf_unittest.TestJavaScriptOrderJSON1.c', index=1, + number=2, type=5, cpp_type=1, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='x', full_name='protobuf_unittest.TestJavaScriptOrderJSON1.x', index=2, + number=3, type=8, cpp_type=7, label=1, + has_default_value=False, default_value=False, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='b', full_name='protobuf_unittest.TestJavaScriptOrderJSON1.b', index=3, + number=4, type=5, cpp_type=1, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='a', full_name='protobuf_unittest.TestJavaScriptOrderJSON1.a', index=4, + number=5, type=5, cpp_type=1, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=None, + is_extendable=False, + syntax='proto2', + extension_ranges=[], + oneofs=[ + ], + serialized_start=309, + serialized_end=390, +) + + +_TESTJAVASCRIPTORDERJSON2 = _descriptor.Descriptor( + name='TestJavaScriptOrderJSON2', + full_name='protobuf_unittest.TestJavaScriptOrderJSON2', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='d', full_name='protobuf_unittest.TestJavaScriptOrderJSON2.d', index=0, + number=1, type=5, cpp_type=1, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='c', full_name='protobuf_unittest.TestJavaScriptOrderJSON2.c', index=1, + number=2, type=5, cpp_type=1, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='x', full_name='protobuf_unittest.TestJavaScriptOrderJSON2.x', index=2, + number=3, type=8, cpp_type=7, label=1, + has_default_value=False, default_value=False, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='b', full_name='protobuf_unittest.TestJavaScriptOrderJSON2.b', index=3, + number=4, type=5, cpp_type=1, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, 
create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='a', full_name='protobuf_unittest.TestJavaScriptOrderJSON2.a', index=4, + number=5, type=5, cpp_type=1, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='z', full_name='protobuf_unittest.TestJavaScriptOrderJSON2.z', index=5, + number=6, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=None, + is_extendable=False, + syntax='proto2', + extension_ranges=[], + oneofs=[ + ], + serialized_start=393, + serialized_end=530, +) + + +_TESTLARGEINT = _descriptor.Descriptor( + name='TestLargeInt', + full_name='protobuf_unittest.TestLargeInt', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='a', full_name='protobuf_unittest.TestLargeInt.a', index=0, + number=1, type=3, cpp_type=2, label=2, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='b', full_name='protobuf_unittest.TestLargeInt.b', index=1, + number=2, type=4, cpp_type=4, label=2, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=None, + is_extendable=False, + syntax='proto2', + extension_ranges=[], + oneofs=[ + ], + serialized_start=532, + serialized_end=568, +) + + +_TESTNUMBERS = _descriptor.Descriptor( + name='TestNumbers', + full_name='protobuf_unittest.TestNumbers', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='a', full_name='protobuf_unittest.TestNumbers.a', index=0, + number=1, type=14, cpp_type=8, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='b', full_name='protobuf_unittest.TestNumbers.b', index=1, + number=2, type=5, cpp_type=1, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='c', full_name='protobuf_unittest.TestNumbers.c', index=2, + number=3, type=2, cpp_type=6, label=1, + has_default_value=False, default_value=float(0), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, 
file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='d', full_name='protobuf_unittest.TestNumbers.d', index=3, + number=4, type=8, cpp_type=7, label=1, + has_default_value=False, default_value=False, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='e', full_name='protobuf_unittest.TestNumbers.e', index=4, + number=5, type=1, cpp_type=5, label=1, + has_default_value=False, default_value=float(0), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='f', full_name='protobuf_unittest.TestNumbers.f', index=5, + number=6, type=13, cpp_type=3, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + _TESTNUMBERS_MYTYPE, + ], + serialized_options=None, + is_extendable=False, + syntax='proto2', + extension_ranges=[], + oneofs=[ + ], + serialized_start=571, + serialized_end=731, +) + + +_TESTCAMELCASE = _descriptor.Descriptor( + name='TestCamelCase', + full_name='protobuf_unittest.TestCamelCase', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='normal_field', full_name='protobuf_unittest.TestCamelCase.normal_field', index=0, + number=1, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='CAPITAL_FIELD', full_name='protobuf_unittest.TestCamelCase.CAPITAL_FIELD', index=1, + number=2, type=5, cpp_type=1, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='CamelCaseField', full_name='protobuf_unittest.TestCamelCase.CamelCaseField', index=2, + number=3, type=5, cpp_type=1, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=None, + is_extendable=False, + syntax='proto2', + extension_ranges=[], + oneofs=[ + ], + serialized_start=733, + serialized_end=817, +) + + +_TESTBOOLMAP_BOOLMAPENTRY = _descriptor.Descriptor( + name='BoolMapEntry', + full_name='protobuf_unittest.TestBoolMap.BoolMapEntry', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='key', full_name='protobuf_unittest.TestBoolMap.BoolMapEntry.key', index=0, + number=1, type=8, cpp_type=7, label=1, + 
has_default_value=False, default_value=False, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='value', full_name='protobuf_unittest.TestBoolMap.BoolMapEntry.value', index=1, + number=2, type=5, cpp_type=1, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=b'8\001', + is_extendable=False, + syntax='proto2', + extension_ranges=[], + oneofs=[ + ], + serialized_start=897, + serialized_end=943, +) + +_TESTBOOLMAP = _descriptor.Descriptor( + name='TestBoolMap', + full_name='protobuf_unittest.TestBoolMap', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='bool_map', full_name='protobuf_unittest.TestBoolMap.bool_map', index=0, + number=1, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[_TESTBOOLMAP_BOOLMAPENTRY, ], + enum_types=[ + ], + serialized_options=None, + is_extendable=False, + syntax='proto2', + extension_ranges=[], + oneofs=[ + ], + serialized_start=819, + serialized_end=943, +) + + +_TESTRECURSION = _descriptor.Descriptor( + name='TestRecursion', + full_name='protobuf_unittest.TestRecursion', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='value', full_name='protobuf_unittest.TestRecursion.value', index=0, + number=1, type=5, cpp_type=1, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='child', full_name='protobuf_unittest.TestRecursion.child', index=1, + number=2, type=11, cpp_type=10, label=1, + has_default_value=False, default_value=None, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=None, + is_extendable=False, + syntax='proto2', + extension_ranges=[], + oneofs=[ + ], + serialized_start=945, + serialized_end=1024, +) + + +_TESTSTRINGMAP_STRINGMAPENTRY = _descriptor.Descriptor( + name='StringMapEntry', + full_name='protobuf_unittest.TestStringMap.StringMapEntry', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='key', full_name='protobuf_unittest.TestStringMap.StringMapEntry.key', index=0, + number=1, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + 
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='value', full_name='protobuf_unittest.TestStringMap.StringMapEntry.value', index=1, + number=2, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=b'8\001', + is_extendable=False, + syntax='proto2', + extension_ranges=[], + oneofs=[ + ], + serialized_start=1113, + serialized_end=1161, +) + +_TESTSTRINGMAP = _descriptor.Descriptor( + name='TestStringMap', + full_name='protobuf_unittest.TestStringMap', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='string_map', full_name='protobuf_unittest.TestStringMap.string_map', index=0, + number=1, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[_TESTSTRINGMAP_STRINGMAPENTRY, ], + enum_types=[ + ], + serialized_options=None, + is_extendable=False, + syntax='proto2', + extension_ranges=[], + oneofs=[ + ], + serialized_start=1027, + serialized_end=1161, +) + + +_TESTSTRINGSERIALIZER_STRINGMAPENTRY = _descriptor.Descriptor( + name='StringMapEntry', + full_name='protobuf_unittest.TestStringSerializer.StringMapEntry', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='key', full_name='protobuf_unittest.TestStringSerializer.StringMapEntry.key', index=0, + number=1, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='value', full_name='protobuf_unittest.TestStringSerializer.StringMapEntry.value', index=1, + number=2, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=b'8\001', + is_extendable=False, + syntax='proto2', + extension_ranges=[], + oneofs=[ + ], + serialized_start=1113, + serialized_end=1161, +) + +_TESTSTRINGSERIALIZER = _descriptor.Descriptor( + name='TestStringSerializer', + full_name='protobuf_unittest.TestStringSerializer', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='scalar_string', full_name='protobuf_unittest.TestStringSerializer.scalar_string', index=0, + number=1, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, 
extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='repeated_string', full_name='protobuf_unittest.TestStringSerializer.repeated_string', index=1, + number=2, type=9, cpp_type=9, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='string_map', full_name='protobuf_unittest.TestStringSerializer.string_map', index=2, + number=3, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[_TESTSTRINGSERIALIZER_STRINGMAPENTRY, ], + enum_types=[ + ], + serialized_options=None, + is_extendable=False, + syntax='proto2', + extension_ranges=[], + oneofs=[ + ], + serialized_start=1164, + serialized_end=1360, +) + + +_TESTMESSAGEWITHEXTENSION = _descriptor.Descriptor( + name='TestMessageWithExtension', + full_name='protobuf_unittest.TestMessageWithExtension', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=None, + is_extendable=True, + syntax='proto2', + extension_ranges=[(100, 536870912), ], + oneofs=[ + ], + serialized_start=1362, + serialized_end=1398, +) + + +_TESTEXTENSION = _descriptor.Descriptor( + name='TestExtension', + full_name='protobuf_unittest.TestExtension', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='value', full_name='protobuf_unittest.TestExtension.value', index=0, + number=1, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + _descriptor.FieldDescriptor( + name='ext', full_name='protobuf_unittest.TestExtension.ext', index=0, + number=100, type=11, cpp_type=10, label=1, + has_default_value=False, default_value=None, + message_type=None, enum_type=None, containing_type=None, + is_extension=True, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + nested_types=[], + enum_types=[ + ], + serialized_options=None, + is_extendable=False, + syntax='proto2', + extension_ranges=[], + oneofs=[ + ], + serialized_start=1400, + serialized_end=1522, +) + +_TESTFLAGSANDSTRINGS_REPEATEDGROUP.containing_type = _TESTFLAGSANDSTRINGS +_TESTFLAGSANDSTRINGS.fields_by_name['repeatedgroup'].message_type = _TESTFLAGSANDSTRINGS_REPEATEDGROUP +_TESTJAVASCRIPTORDERJSON2.fields_by_name['z'].message_type = _TESTJAVASCRIPTORDERJSON1 +_TESTNUMBERS.fields_by_name['a'].enum_type = _TESTNUMBERS_MYTYPE +_TESTNUMBERS_MYTYPE.containing_type = _TESTNUMBERS +_TESTBOOLMAP_BOOLMAPENTRY.containing_type = _TESTBOOLMAP +_TESTBOOLMAP.fields_by_name['bool_map'].message_type = _TESTBOOLMAP_BOOLMAPENTRY +_TESTRECURSION.fields_by_name['child'].message_type = _TESTRECURSION 
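+# Editorial note (not protoc output): the assignments in this block resolve
+# cross-references that cannot be filled in while the Descriptor objects above
+# are being constructed -- e.g. pointing a field's message_type at a nested
+# map-entry descriptor, or a nested type back at its containing_type -- before
+# the file descriptor is registered with the default symbol database.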
+_TESTSTRINGMAP_STRINGMAPENTRY.containing_type = _TESTSTRINGMAP +_TESTSTRINGMAP.fields_by_name['string_map'].message_type = _TESTSTRINGMAP_STRINGMAPENTRY +_TESTSTRINGSERIALIZER_STRINGMAPENTRY.containing_type = _TESTSTRINGSERIALIZER +_TESTSTRINGSERIALIZER.fields_by_name['string_map'].message_type = _TESTSTRINGSERIALIZER_STRINGMAPENTRY +DESCRIPTOR.message_types_by_name['TestFlagsAndStrings'] = _TESTFLAGSANDSTRINGS +DESCRIPTOR.message_types_by_name['TestBase64ByteArrays'] = _TESTBASE64BYTEARRAYS +DESCRIPTOR.message_types_by_name['TestJavaScriptJSON'] = _TESTJAVASCRIPTJSON +DESCRIPTOR.message_types_by_name['TestJavaScriptOrderJSON1'] = _TESTJAVASCRIPTORDERJSON1 +DESCRIPTOR.message_types_by_name['TestJavaScriptOrderJSON2'] = _TESTJAVASCRIPTORDERJSON2 +DESCRIPTOR.message_types_by_name['TestLargeInt'] = _TESTLARGEINT +DESCRIPTOR.message_types_by_name['TestNumbers'] = _TESTNUMBERS +DESCRIPTOR.message_types_by_name['TestCamelCase'] = _TESTCAMELCASE +DESCRIPTOR.message_types_by_name['TestBoolMap'] = _TESTBOOLMAP +DESCRIPTOR.message_types_by_name['TestRecursion'] = _TESTRECURSION +DESCRIPTOR.message_types_by_name['TestStringMap'] = _TESTSTRINGMAP +DESCRIPTOR.message_types_by_name['TestStringSerializer'] = _TESTSTRINGSERIALIZER +DESCRIPTOR.message_types_by_name['TestMessageWithExtension'] = _TESTMESSAGEWITHEXTENSION +DESCRIPTOR.message_types_by_name['TestExtension'] = _TESTEXTENSION +_sym_db.RegisterFileDescriptor(DESCRIPTOR) + +TestFlagsAndStrings = _reflection.GeneratedProtocolMessageType('TestFlagsAndStrings', (_message.Message,), { + + 'RepeatedGroup' : _reflection.GeneratedProtocolMessageType('RepeatedGroup', (_message.Message,), { + 'DESCRIPTOR' : _TESTFLAGSANDSTRINGS_REPEATEDGROUP, + '__module__' : 'google.protobuf.util.json_format_pb2' + # @@protoc_insertion_point(class_scope:protobuf_unittest.TestFlagsAndStrings.RepeatedGroup) + }) + , + 'DESCRIPTOR' : _TESTFLAGSANDSTRINGS, + '__module__' : 'google.protobuf.util.json_format_pb2' + # @@protoc_insertion_point(class_scope:protobuf_unittest.TestFlagsAndStrings) + }) +_sym_db.RegisterMessage(TestFlagsAndStrings) +_sym_db.RegisterMessage(TestFlagsAndStrings.RepeatedGroup) + +TestBase64ByteArrays = _reflection.GeneratedProtocolMessageType('TestBase64ByteArrays', (_message.Message,), { + 'DESCRIPTOR' : _TESTBASE64BYTEARRAYS, + '__module__' : 'google.protobuf.util.json_format_pb2' + # @@protoc_insertion_point(class_scope:protobuf_unittest.TestBase64ByteArrays) + }) +_sym_db.RegisterMessage(TestBase64ByteArrays) + +TestJavaScriptJSON = _reflection.GeneratedProtocolMessageType('TestJavaScriptJSON', (_message.Message,), { + 'DESCRIPTOR' : _TESTJAVASCRIPTJSON, + '__module__' : 'google.protobuf.util.json_format_pb2' + # @@protoc_insertion_point(class_scope:protobuf_unittest.TestJavaScriptJSON) + }) +_sym_db.RegisterMessage(TestJavaScriptJSON) + +TestJavaScriptOrderJSON1 = _reflection.GeneratedProtocolMessageType('TestJavaScriptOrderJSON1', (_message.Message,), { + 'DESCRIPTOR' : _TESTJAVASCRIPTORDERJSON1, + '__module__' : 'google.protobuf.util.json_format_pb2' + # @@protoc_insertion_point(class_scope:protobuf_unittest.TestJavaScriptOrderJSON1) + }) +_sym_db.RegisterMessage(TestJavaScriptOrderJSON1) + +TestJavaScriptOrderJSON2 = _reflection.GeneratedProtocolMessageType('TestJavaScriptOrderJSON2', (_message.Message,), { + 'DESCRIPTOR' : _TESTJAVASCRIPTORDERJSON2, + '__module__' : 'google.protobuf.util.json_format_pb2' + # @@protoc_insertion_point(class_scope:protobuf_unittest.TestJavaScriptOrderJSON2) + }) +_sym_db.RegisterMessage(TestJavaScriptOrderJSON2) + 
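+# Editorial sketch (assumed usage, not part of the generated file): classes
+# produced by _reflection.GeneratedProtocolMessageType behave like ordinary
+# protobuf messages, e.g.:
+#   msg = TestBoolMap()
+#   msg.bool_map[True] = 1
+#   data = msg.SerializeToString()
+#   assert TestBoolMap.FromString(data).bool_map[True] == 1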
+TestLargeInt = _reflection.GeneratedProtocolMessageType('TestLargeInt', (_message.Message,), { + 'DESCRIPTOR' : _TESTLARGEINT, + '__module__' : 'google.protobuf.util.json_format_pb2' + # @@protoc_insertion_point(class_scope:protobuf_unittest.TestLargeInt) + }) +_sym_db.RegisterMessage(TestLargeInt) + +TestNumbers = _reflection.GeneratedProtocolMessageType('TestNumbers', (_message.Message,), { + 'DESCRIPTOR' : _TESTNUMBERS, + '__module__' : 'google.protobuf.util.json_format_pb2' + # @@protoc_insertion_point(class_scope:protobuf_unittest.TestNumbers) + }) +_sym_db.RegisterMessage(TestNumbers) + +TestCamelCase = _reflection.GeneratedProtocolMessageType('TestCamelCase', (_message.Message,), { + 'DESCRIPTOR' : _TESTCAMELCASE, + '__module__' : 'google.protobuf.util.json_format_pb2' + # @@protoc_insertion_point(class_scope:protobuf_unittest.TestCamelCase) + }) +_sym_db.RegisterMessage(TestCamelCase) + +TestBoolMap = _reflection.GeneratedProtocolMessageType('TestBoolMap', (_message.Message,), { + + 'BoolMapEntry' : _reflection.GeneratedProtocolMessageType('BoolMapEntry', (_message.Message,), { + 'DESCRIPTOR' : _TESTBOOLMAP_BOOLMAPENTRY, + '__module__' : 'google.protobuf.util.json_format_pb2' + # @@protoc_insertion_point(class_scope:protobuf_unittest.TestBoolMap.BoolMapEntry) + }) + , + 'DESCRIPTOR' : _TESTBOOLMAP, + '__module__' : 'google.protobuf.util.json_format_pb2' + # @@protoc_insertion_point(class_scope:protobuf_unittest.TestBoolMap) + }) +_sym_db.RegisterMessage(TestBoolMap) +_sym_db.RegisterMessage(TestBoolMap.BoolMapEntry) + +TestRecursion = _reflection.GeneratedProtocolMessageType('TestRecursion', (_message.Message,), { + 'DESCRIPTOR' : _TESTRECURSION, + '__module__' : 'google.protobuf.util.json_format_pb2' + # @@protoc_insertion_point(class_scope:protobuf_unittest.TestRecursion) + }) +_sym_db.RegisterMessage(TestRecursion) + +TestStringMap = _reflection.GeneratedProtocolMessageType('TestStringMap', (_message.Message,), { + + 'StringMapEntry' : _reflection.GeneratedProtocolMessageType('StringMapEntry', (_message.Message,), { + 'DESCRIPTOR' : _TESTSTRINGMAP_STRINGMAPENTRY, + '__module__' : 'google.protobuf.util.json_format_pb2' + # @@protoc_insertion_point(class_scope:protobuf_unittest.TestStringMap.StringMapEntry) + }) + , + 'DESCRIPTOR' : _TESTSTRINGMAP, + '__module__' : 'google.protobuf.util.json_format_pb2' + # @@protoc_insertion_point(class_scope:protobuf_unittest.TestStringMap) + }) +_sym_db.RegisterMessage(TestStringMap) +_sym_db.RegisterMessage(TestStringMap.StringMapEntry) + +TestStringSerializer = _reflection.GeneratedProtocolMessageType('TestStringSerializer', (_message.Message,), { + + 'StringMapEntry' : _reflection.GeneratedProtocolMessageType('StringMapEntry', (_message.Message,), { + 'DESCRIPTOR' : _TESTSTRINGSERIALIZER_STRINGMAPENTRY, + '__module__' : 'google.protobuf.util.json_format_pb2' + # @@protoc_insertion_point(class_scope:protobuf_unittest.TestStringSerializer.StringMapEntry) + }) + , + 'DESCRIPTOR' : _TESTSTRINGSERIALIZER, + '__module__' : 'google.protobuf.util.json_format_pb2' + # @@protoc_insertion_point(class_scope:protobuf_unittest.TestStringSerializer) + }) +_sym_db.RegisterMessage(TestStringSerializer) +_sym_db.RegisterMessage(TestStringSerializer.StringMapEntry) + +TestMessageWithExtension = _reflection.GeneratedProtocolMessageType('TestMessageWithExtension', (_message.Message,), { + 'DESCRIPTOR' : _TESTMESSAGEWITHEXTENSION, + '__module__' : 'google.protobuf.util.json_format_pb2' + # 
@@protoc_insertion_point(class_scope:protobuf_unittest.TestMessageWithExtension) + }) +_sym_db.RegisterMessage(TestMessageWithExtension) + +TestExtension = _reflection.GeneratedProtocolMessageType('TestExtension', (_message.Message,), { + 'DESCRIPTOR' : _TESTEXTENSION, + '__module__' : 'google.protobuf.util.json_format_pb2' + # @@protoc_insertion_point(class_scope:protobuf_unittest.TestExtension) + }) +_sym_db.RegisterMessage(TestExtension) + +_TESTEXTENSION.extensions_by_name['ext'].message_type = _TESTEXTENSION +TestMessageWithExtension.RegisterExtension(_TESTEXTENSION.extensions_by_name['ext']) + +_TESTBOOLMAP_BOOLMAPENTRY._options = None +_TESTSTRINGMAP_STRINGMAPENTRY._options = None +_TESTSTRINGSERIALIZER_STRINGMAPENTRY._options = None +# @@protoc_insertion_point(module_scope) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/util/json_format_proto3_pb2.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/util/json_format_proto3_pb2.py new file mode 100644 index 0000000000000000000000000000000000000000..c3d4e484cbd71fe32e0a056902b1ecd41bea55e9 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/util/json_format_proto3_pb2.py @@ -0,0 +1,2031 @@ +# -*- coding: utf-8 -*- +# Generated by the protocol buffer compiler. DO NOT EDIT! +# source: google/protobuf/util/json_format_proto3.proto +"""Generated protocol buffer code.""" +from google.protobuf.internal import enum_type_wrapper +from google.protobuf import descriptor as _descriptor +from google.protobuf import message as _message +from google.protobuf import reflection as _reflection +from google.protobuf import symbol_database as _symbol_database +# @@protoc_insertion_point(imports) + +_sym_db = _symbol_database.Default() + + +from google.protobuf import any_pb2 as google_dot_protobuf_dot_any__pb2 +from google.protobuf import duration_pb2 as google_dot_protobuf_dot_duration__pb2 +from google.protobuf import field_mask_pb2 as google_dot_protobuf_dot_field__mask__pb2 +from google.protobuf import struct_pb2 as google_dot_protobuf_dot_struct__pb2 +from google.protobuf import timestamp_pb2 as google_dot_protobuf_dot_timestamp__pb2 +from google.protobuf import wrappers_pb2 as google_dot_protobuf_dot_wrappers__pb2 +from google.protobuf import unittest_pb2 as google_dot_protobuf_dot_unittest__pb2 + + +DESCRIPTOR = _descriptor.FileDescriptor( + name='google/protobuf/util/json_format_proto3.proto', + package='proto3', + syntax='proto3', + serialized_options=b'\n\030com.google.protobuf.utilB\020JsonFormatProto3', + create_key=_descriptor._internal_create_key, + serialized_pb=b'\n-google/protobuf/util/json_format_proto3.proto\x12\x06proto3\x1a\x19google/protobuf/any.proto\x1a\x1egoogle/protobuf/duration.proto\x1a google/protobuf/field_mask.proto\x1a\x1cgoogle/protobuf/struct.proto\x1a\x1fgoogle/protobuf/timestamp.proto\x1a\x1egoogle/protobuf/wrappers.proto\x1a\x1egoogle/protobuf/unittest.proto\"\x1c\n\x0bMessageType\x12\r\n\x05value\x18\x01 \x01(\x05\"\x94\x05\n\x0bTestMessage\x12\x12\n\nbool_value\x18\x01 \x01(\x08\x12\x13\n\x0bint32_value\x18\x02 \x01(\x05\x12\x13\n\x0bint64_value\x18\x03 \x01(\x03\x12\x14\n\x0cuint32_value\x18\x04 \x01(\r\x12\x14\n\x0cuint64_value\x18\x05 \x01(\x04\x12\x13\n\x0b\x66loat_value\x18\x06 \x01(\x02\x12\x14\n\x0c\x64ouble_value\x18\x07 \x01(\x01\x12\x14\n\x0cstring_value\x18\x08 \x01(\t\x12\x13\n\x0b\x62ytes_value\x18\t 
\x01(\x0c\x12$\n\nenum_value\x18\n \x01(\x0e\x32\x10.proto3.EnumType\x12*\n\rmessage_value\x18\x0b \x01(\x0b\x32\x13.proto3.MessageType\x12\x1b\n\x13repeated_bool_value\x18\x15 \x03(\x08\x12\x1c\n\x14repeated_int32_value\x18\x16 \x03(\x05\x12\x1c\n\x14repeated_int64_value\x18\x17 \x03(\x03\x12\x1d\n\x15repeated_uint32_value\x18\x18 \x03(\r\x12\x1d\n\x15repeated_uint64_value\x18\x19 \x03(\x04\x12\x1c\n\x14repeated_float_value\x18\x1a \x03(\x02\x12\x1d\n\x15repeated_double_value\x18\x1b \x03(\x01\x12\x1d\n\x15repeated_string_value\x18\x1c \x03(\t\x12\x1c\n\x14repeated_bytes_value\x18\x1d \x03(\x0c\x12-\n\x13repeated_enum_value\x18\x1e \x03(\x0e\x32\x10.proto3.EnumType\x12\x33\n\x16repeated_message_value\x18\x1f \x03(\x0b\x32\x13.proto3.MessageType\"\x8c\x02\n\tTestOneof\x12\x1b\n\x11oneof_int32_value\x18\x01 \x01(\x05H\x00\x12\x1c\n\x12oneof_string_value\x18\x02 \x01(\tH\x00\x12\x1b\n\x11oneof_bytes_value\x18\x03 \x01(\x0cH\x00\x12,\n\x10oneof_enum_value\x18\x04 \x01(\x0e\x32\x10.proto3.EnumTypeH\x00\x12\x32\n\x13oneof_message_value\x18\x05 \x01(\x0b\x32\x13.proto3.MessageTypeH\x00\x12\x36\n\x10oneof_null_value\x18\x06 \x01(\x0e\x32\x1a.google.protobuf.NullValueH\x00\x42\r\n\x0boneof_value\"\xe1\x04\n\x07TestMap\x12.\n\x08\x62ool_map\x18\x01 \x03(\x0b\x32\x1c.proto3.TestMap.BoolMapEntry\x12\x30\n\tint32_map\x18\x02 \x03(\x0b\x32\x1d.proto3.TestMap.Int32MapEntry\x12\x30\n\tint64_map\x18\x03 \x03(\x0b\x32\x1d.proto3.TestMap.Int64MapEntry\x12\x32\n\nuint32_map\x18\x04 \x03(\x0b\x32\x1e.proto3.TestMap.Uint32MapEntry\x12\x32\n\nuint64_map\x18\x05 \x03(\x0b\x32\x1e.proto3.TestMap.Uint64MapEntry\x12\x32\n\nstring_map\x18\x06 \x03(\x0b\x32\x1e.proto3.TestMap.StringMapEntry\x1a.\n\x0c\x42oolMapEntry\x12\x0b\n\x03key\x18\x01 \x01(\x08\x12\r\n\x05value\x18\x02 \x01(\x05:\x02\x38\x01\x1a/\n\rInt32MapEntry\x12\x0b\n\x03key\x18\x01 \x01(\x05\x12\r\n\x05value\x18\x02 \x01(\x05:\x02\x38\x01\x1a/\n\rInt64MapEntry\x12\x0b\n\x03key\x18\x01 \x01(\x03\x12\r\n\x05value\x18\x02 \x01(\x05:\x02\x38\x01\x1a\x30\n\x0eUint32MapEntry\x12\x0b\n\x03key\x18\x01 \x01(\r\x12\r\n\x05value\x18\x02 \x01(\x05:\x02\x38\x01\x1a\x30\n\x0eUint64MapEntry\x12\x0b\n\x03key\x18\x01 \x01(\x04\x12\r\n\x05value\x18\x02 \x01(\x05:\x02\x38\x01\x1a\x30\n\x0eStringMapEntry\x12\x0b\n\x03key\x18\x01 \x01(\t\x12\r\n\x05value\x18\x02 \x01(\x05:\x02\x38\x01\"\x85\x06\n\rTestNestedMap\x12\x34\n\x08\x62ool_map\x18\x01 \x03(\x0b\x32\".proto3.TestNestedMap.BoolMapEntry\x12\x36\n\tint32_map\x18\x02 \x03(\x0b\x32#.proto3.TestNestedMap.Int32MapEntry\x12\x36\n\tint64_map\x18\x03 \x03(\x0b\x32#.proto3.TestNestedMap.Int64MapEntry\x12\x38\n\nuint32_map\x18\x04 \x03(\x0b\x32$.proto3.TestNestedMap.Uint32MapEntry\x12\x38\n\nuint64_map\x18\x05 \x03(\x0b\x32$.proto3.TestNestedMap.Uint64MapEntry\x12\x38\n\nstring_map\x18\x06 \x03(\x0b\x32$.proto3.TestNestedMap.StringMapEntry\x12\x32\n\x07map_map\x18\x07 \x03(\x0b\x32!.proto3.TestNestedMap.MapMapEntry\x1a.\n\x0c\x42oolMapEntry\x12\x0b\n\x03key\x18\x01 \x01(\x08\x12\r\n\x05value\x18\x02 \x01(\x05:\x02\x38\x01\x1a/\n\rInt32MapEntry\x12\x0b\n\x03key\x18\x01 \x01(\x05\x12\r\n\x05value\x18\x02 \x01(\x05:\x02\x38\x01\x1a/\n\rInt64MapEntry\x12\x0b\n\x03key\x18\x01 \x01(\x03\x12\r\n\x05value\x18\x02 \x01(\x05:\x02\x38\x01\x1a\x30\n\x0eUint32MapEntry\x12\x0b\n\x03key\x18\x01 \x01(\r\x12\r\n\x05value\x18\x02 \x01(\x05:\x02\x38\x01\x1a\x30\n\x0eUint64MapEntry\x12\x0b\n\x03key\x18\x01 \x01(\x04\x12\r\n\x05value\x18\x02 \x01(\x05:\x02\x38\x01\x1a\x30\n\x0eStringMapEntry\x12\x0b\n\x03key\x18\x01 \x01(\t\x12\r\n\x05value\x18\x02 
\x01(\x05:\x02\x38\x01\x1a\x44\n\x0bMapMapEntry\x12\x0b\n\x03key\x18\x01 \x01(\t\x12$\n\x05value\x18\x02 \x01(\x0b\x32\x15.proto3.TestNestedMap:\x02\x38\x01\"{\n\rTestStringMap\x12\x38\n\nstring_map\x18\x01 \x03(\x0b\x32$.proto3.TestStringMap.StringMapEntry\x1a\x30\n\x0eStringMapEntry\x12\x0b\n\x03key\x18\x01 \x01(\t\x12\r\n\x05value\x18\x02 \x01(\t:\x02\x38\x01\"\xee\x07\n\x0bTestWrapper\x12.\n\nbool_value\x18\x01 \x01(\x0b\x32\x1a.google.protobuf.BoolValue\x12\x30\n\x0bint32_value\x18\x02 \x01(\x0b\x32\x1b.google.protobuf.Int32Value\x12\x30\n\x0bint64_value\x18\x03 \x01(\x0b\x32\x1b.google.protobuf.Int64Value\x12\x32\n\x0cuint32_value\x18\x04 \x01(\x0b\x32\x1c.google.protobuf.UInt32Value\x12\x32\n\x0cuint64_value\x18\x05 \x01(\x0b\x32\x1c.google.protobuf.UInt64Value\x12\x30\n\x0b\x66loat_value\x18\x06 \x01(\x0b\x32\x1b.google.protobuf.FloatValue\x12\x32\n\x0c\x64ouble_value\x18\x07 \x01(\x0b\x32\x1c.google.protobuf.DoubleValue\x12\x32\n\x0cstring_value\x18\x08 \x01(\x0b\x32\x1c.google.protobuf.StringValue\x12\x30\n\x0b\x62ytes_value\x18\t \x01(\x0b\x32\x1b.google.protobuf.BytesValue\x12\x37\n\x13repeated_bool_value\x18\x0b \x03(\x0b\x32\x1a.google.protobuf.BoolValue\x12\x39\n\x14repeated_int32_value\x18\x0c \x03(\x0b\x32\x1b.google.protobuf.Int32Value\x12\x39\n\x14repeated_int64_value\x18\r \x03(\x0b\x32\x1b.google.protobuf.Int64Value\x12;\n\x15repeated_uint32_value\x18\x0e \x03(\x0b\x32\x1c.google.protobuf.UInt32Value\x12;\n\x15repeated_uint64_value\x18\x0f \x03(\x0b\x32\x1c.google.protobuf.UInt64Value\x12\x39\n\x14repeated_float_value\x18\x10 \x03(\x0b\x32\x1b.google.protobuf.FloatValue\x12;\n\x15repeated_double_value\x18\x11 \x03(\x0b\x32\x1c.google.protobuf.DoubleValue\x12;\n\x15repeated_string_value\x18\x12 \x03(\x0b\x32\x1c.google.protobuf.StringValue\x12\x39\n\x14repeated_bytes_value\x18\x13 \x03(\x0b\x32\x1b.google.protobuf.BytesValue\"n\n\rTestTimestamp\x12)\n\x05value\x18\x01 \x01(\x0b\x32\x1a.google.protobuf.Timestamp\x12\x32\n\x0erepeated_value\x18\x02 \x03(\x0b\x32\x1a.google.protobuf.Timestamp\"k\n\x0cTestDuration\x12(\n\x05value\x18\x01 \x01(\x0b\x32\x19.google.protobuf.Duration\x12\x31\n\x0erepeated_value\x18\x02 \x03(\x0b\x32\x19.google.protobuf.Duration\":\n\rTestFieldMask\x12)\n\x05value\x18\x01 \x01(\x0b\x32\x1a.google.protobuf.FieldMask\"e\n\nTestStruct\x12&\n\x05value\x18\x01 \x01(\x0b\x32\x17.google.protobuf.Struct\x12/\n\x0erepeated_value\x18\x02 \x03(\x0b\x32\x17.google.protobuf.Struct\"\\\n\x07TestAny\x12#\n\x05value\x18\x01 \x01(\x0b\x32\x14.google.protobuf.Any\x12,\n\x0erepeated_value\x18\x02 \x03(\x0b\x32\x14.google.protobuf.Any\"b\n\tTestValue\x12%\n\x05value\x18\x01 \x01(\x0b\x32\x16.google.protobuf.Value\x12.\n\x0erepeated_value\x18\x02 \x03(\x0b\x32\x16.google.protobuf.Value\"n\n\rTestListValue\x12)\n\x05value\x18\x01 \x01(\x0b\x32\x1a.google.protobuf.ListValue\x12\x32\n\x0erepeated_value\x18\x02 \x03(\x0b\x32\x1a.google.protobuf.ListValue\"\x89\x01\n\rTestBoolValue\x12\x12\n\nbool_value\x18\x01 \x01(\x08\x12\x34\n\x08\x62ool_map\x18\x02 \x03(\x0b\x32\".proto3.TestBoolValue.BoolMapEntry\x1a.\n\x0c\x42oolMapEntry\x12\x0b\n\x03key\x18\x01 \x01(\x08\x12\r\n\x05value\x18\x02 \x01(\x05:\x02\x38\x01\"+\n\x12TestCustomJsonName\x12\x15\n\x05value\x18\x01 \x01(\x05R\x06@value\"J\n\x0eTestExtensions\x12\x38\n\nextensions\x18\x01 \x01(\x0b\x32$.protobuf_unittest.TestAllExtensions\"\x84\x01\n\rTestEnumValue\x12%\n\x0b\x65num_value1\x18\x01 \x01(\x0e\x32\x10.proto3.EnumType\x12%\n\x0b\x65num_value2\x18\x02 
\x01(\x0e\x32\x10.proto3.EnumType\x12%\n\x0b\x65num_value3\x18\x03 \x01(\x0e\x32\x10.proto3.EnumType*\x1c\n\x08\x45numType\x12\x07\n\x03\x46OO\x10\x00\x12\x07\n\x03\x42\x41R\x10\x01\x42,\n\x18\x63om.google.protobuf.utilB\x10JsonFormatProto3b\x06proto3' + , + dependencies=[google_dot_protobuf_dot_any__pb2.DESCRIPTOR,google_dot_protobuf_dot_duration__pb2.DESCRIPTOR,google_dot_protobuf_dot_field__mask__pb2.DESCRIPTOR,google_dot_protobuf_dot_struct__pb2.DESCRIPTOR,google_dot_protobuf_dot_timestamp__pb2.DESCRIPTOR,google_dot_protobuf_dot_wrappers__pb2.DESCRIPTOR,google_dot_protobuf_dot_unittest__pb2.DESCRIPTOR,]) + +_ENUMTYPE = _descriptor.EnumDescriptor( + name='EnumType', + full_name='proto3.EnumType', + filename=None, + file=DESCRIPTOR, + create_key=_descriptor._internal_create_key, + values=[ + _descriptor.EnumValueDescriptor( + name='FOO', index=0, number=0, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + _descriptor.EnumValueDescriptor( + name='BAR', index=1, number=1, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + ], + containing_type=None, + serialized_options=None, + serialized_start=4849, + serialized_end=4877, +) +_sym_db.RegisterEnumDescriptor(_ENUMTYPE) + +EnumType = enum_type_wrapper.EnumTypeWrapper(_ENUMTYPE) +FOO = 0 +BAR = 1 + + + +_MESSAGETYPE = _descriptor.Descriptor( + name='MessageType', + full_name='proto3.MessageType', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='value', full_name='proto3.MessageType.value', index=0, + number=1, type=5, cpp_type=1, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=None, + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=277, + serialized_end=305, +) + + +_TESTMESSAGE = _descriptor.Descriptor( + name='TestMessage', + full_name='proto3.TestMessage', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='bool_value', full_name='proto3.TestMessage.bool_value', index=0, + number=1, type=8, cpp_type=7, label=1, + has_default_value=False, default_value=False, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='int32_value', full_name='proto3.TestMessage.int32_value', index=1, + number=2, type=5, cpp_type=1, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='int64_value', full_name='proto3.TestMessage.int64_value', index=2, + number=3, type=3, cpp_type=2, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + 
_descriptor.FieldDescriptor( + name='uint32_value', full_name='proto3.TestMessage.uint32_value', index=3, + number=4, type=13, cpp_type=3, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='uint64_value', full_name='proto3.TestMessage.uint64_value', index=4, + number=5, type=4, cpp_type=4, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='float_value', full_name='proto3.TestMessage.float_value', index=5, + number=6, type=2, cpp_type=6, label=1, + has_default_value=False, default_value=float(0), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='double_value', full_name='proto3.TestMessage.double_value', index=6, + number=7, type=1, cpp_type=5, label=1, + has_default_value=False, default_value=float(0), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='string_value', full_name='proto3.TestMessage.string_value', index=7, + number=8, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='bytes_value', full_name='proto3.TestMessage.bytes_value', index=8, + number=9, type=12, cpp_type=9, label=1, + has_default_value=False, default_value=b"", + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='enum_value', full_name='proto3.TestMessage.enum_value', index=9, + number=10, type=14, cpp_type=8, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='message_value', full_name='proto3.TestMessage.message_value', index=10, + number=11, type=11, cpp_type=10, label=1, + has_default_value=False, default_value=None, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='repeated_bool_value', full_name='proto3.TestMessage.repeated_bool_value', index=11, + number=21, type=8, cpp_type=7, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + 
_descriptor.FieldDescriptor( + name='repeated_int32_value', full_name='proto3.TestMessage.repeated_int32_value', index=12, + number=22, type=5, cpp_type=1, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='repeated_int64_value', full_name='proto3.TestMessage.repeated_int64_value', index=13, + number=23, type=3, cpp_type=2, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='repeated_uint32_value', full_name='proto3.TestMessage.repeated_uint32_value', index=14, + number=24, type=13, cpp_type=3, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='repeated_uint64_value', full_name='proto3.TestMessage.repeated_uint64_value', index=15, + number=25, type=4, cpp_type=4, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='repeated_float_value', full_name='proto3.TestMessage.repeated_float_value', index=16, + number=26, type=2, cpp_type=6, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='repeated_double_value', full_name='proto3.TestMessage.repeated_double_value', index=17, + number=27, type=1, cpp_type=5, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='repeated_string_value', full_name='proto3.TestMessage.repeated_string_value', index=18, + number=28, type=9, cpp_type=9, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='repeated_bytes_value', full_name='proto3.TestMessage.repeated_bytes_value', index=19, + number=29, type=12, cpp_type=9, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='repeated_enum_value', full_name='proto3.TestMessage.repeated_enum_value', index=20, + number=30, type=14, cpp_type=8, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, 
extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='repeated_message_value', full_name='proto3.TestMessage.repeated_message_value', index=21, + number=31, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=None, + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=308, + serialized_end=968, +) + + +_TESTONEOF = _descriptor.Descriptor( + name='TestOneof', + full_name='proto3.TestOneof', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='oneof_int32_value', full_name='proto3.TestOneof.oneof_int32_value', index=0, + number=1, type=5, cpp_type=1, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='oneof_string_value', full_name='proto3.TestOneof.oneof_string_value', index=1, + number=2, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='oneof_bytes_value', full_name='proto3.TestOneof.oneof_bytes_value', index=2, + number=3, type=12, cpp_type=9, label=1, + has_default_value=False, default_value=b"", + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='oneof_enum_value', full_name='proto3.TestOneof.oneof_enum_value', index=3, + number=4, type=14, cpp_type=8, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='oneof_message_value', full_name='proto3.TestOneof.oneof_message_value', index=4, + number=5, type=11, cpp_type=10, label=1, + has_default_value=False, default_value=None, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='oneof_null_value', full_name='proto3.TestOneof.oneof_null_value', index=5, + number=6, type=14, cpp_type=8, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=None, + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + _descriptor.OneofDescriptor( + 
name='oneof_value', full_name='proto3.TestOneof.oneof_value', + index=0, containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[]), + ], + serialized_start=971, + serialized_end=1239, +) + + +_TESTMAP_BOOLMAPENTRY = _descriptor.Descriptor( + name='BoolMapEntry', + full_name='proto3.TestMap.BoolMapEntry', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='key', full_name='proto3.TestMap.BoolMapEntry.key', index=0, + number=1, type=8, cpp_type=7, label=1, + has_default_value=False, default_value=False, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='value', full_name='proto3.TestMap.BoolMapEntry.value', index=1, + number=2, type=5, cpp_type=1, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=b'8\001', + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=1557, + serialized_end=1603, +) + +_TESTMAP_INT32MAPENTRY = _descriptor.Descriptor( + name='Int32MapEntry', + full_name='proto3.TestMap.Int32MapEntry', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='key', full_name='proto3.TestMap.Int32MapEntry.key', index=0, + number=1, type=5, cpp_type=1, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='value', full_name='proto3.TestMap.Int32MapEntry.value', index=1, + number=2, type=5, cpp_type=1, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=b'8\001', + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=1605, + serialized_end=1652, +) + +_TESTMAP_INT64MAPENTRY = _descriptor.Descriptor( + name='Int64MapEntry', + full_name='proto3.TestMap.Int64MapEntry', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='key', full_name='proto3.TestMap.Int64MapEntry.key', index=0, + number=1, type=3, cpp_type=2, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='value', full_name='proto3.TestMap.Int64MapEntry.value', index=1, + number=2, type=5, cpp_type=1, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, 
extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=b'8\001', + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=1654, + serialized_end=1701, +) + +_TESTMAP_UINT32MAPENTRY = _descriptor.Descriptor( + name='Uint32MapEntry', + full_name='proto3.TestMap.Uint32MapEntry', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='key', full_name='proto3.TestMap.Uint32MapEntry.key', index=0, + number=1, type=13, cpp_type=3, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='value', full_name='proto3.TestMap.Uint32MapEntry.value', index=1, + number=2, type=5, cpp_type=1, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=b'8\001', + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=1703, + serialized_end=1751, +) + +_TESTMAP_UINT64MAPENTRY = _descriptor.Descriptor( + name='Uint64MapEntry', + full_name='proto3.TestMap.Uint64MapEntry', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='key', full_name='proto3.TestMap.Uint64MapEntry.key', index=0, + number=1, type=4, cpp_type=4, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='value', full_name='proto3.TestMap.Uint64MapEntry.value', index=1, + number=2, type=5, cpp_type=1, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=b'8\001', + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=1753, + serialized_end=1801, +) + +_TESTMAP_STRINGMAPENTRY = _descriptor.Descriptor( + name='StringMapEntry', + full_name='proto3.TestMap.StringMapEntry', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='key', full_name='proto3.TestMap.StringMapEntry.key', index=0, + number=1, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='value', full_name='proto3.TestMap.StringMapEntry.value', index=1, + number=2, 
type=5, cpp_type=1, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=b'8\001', + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=1803, + serialized_end=1851, +) + +_TESTMAP = _descriptor.Descriptor( + name='TestMap', + full_name='proto3.TestMap', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='bool_map', full_name='proto3.TestMap.bool_map', index=0, + number=1, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='int32_map', full_name='proto3.TestMap.int32_map', index=1, + number=2, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='int64_map', full_name='proto3.TestMap.int64_map', index=2, + number=3, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='uint32_map', full_name='proto3.TestMap.uint32_map', index=3, + number=4, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='uint64_map', full_name='proto3.TestMap.uint64_map', index=4, + number=5, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='string_map', full_name='proto3.TestMap.string_map', index=5, + number=6, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[_TESTMAP_BOOLMAPENTRY, _TESTMAP_INT32MAPENTRY, _TESTMAP_INT64MAPENTRY, _TESTMAP_UINT32MAPENTRY, _TESTMAP_UINT64MAPENTRY, _TESTMAP_STRINGMAPENTRY, ], + enum_types=[ + ], + serialized_options=None, + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=1242, + serialized_end=1851, +) + + +_TESTNESTEDMAP_BOOLMAPENTRY = _descriptor.Descriptor( + name='BoolMapEntry', + full_name='proto3.TestNestedMap.BoolMapEntry', + filename=None, + file=DESCRIPTOR, + containing_type=None, + 
create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='key', full_name='proto3.TestNestedMap.BoolMapEntry.key', index=0, + number=1, type=8, cpp_type=7, label=1, + has_default_value=False, default_value=False, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='value', full_name='proto3.TestNestedMap.BoolMapEntry.value', index=1, + number=2, type=5, cpp_type=1, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=b'8\001', + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=1557, + serialized_end=1603, +) + +_TESTNESTEDMAP_INT32MAPENTRY = _descriptor.Descriptor( + name='Int32MapEntry', + full_name='proto3.TestNestedMap.Int32MapEntry', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='key', full_name='proto3.TestNestedMap.Int32MapEntry.key', index=0, + number=1, type=5, cpp_type=1, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='value', full_name='proto3.TestNestedMap.Int32MapEntry.value', index=1, + number=2, type=5, cpp_type=1, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=b'8\001', + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=1605, + serialized_end=1652, +) + +_TESTNESTEDMAP_INT64MAPENTRY = _descriptor.Descriptor( + name='Int64MapEntry', + full_name='proto3.TestNestedMap.Int64MapEntry', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='key', full_name='proto3.TestNestedMap.Int64MapEntry.key', index=0, + number=1, type=3, cpp_type=2, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='value', full_name='proto3.TestNestedMap.Int64MapEntry.value', index=1, + number=2, type=5, cpp_type=1, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=b'8\001', + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=1654, + serialized_end=1701, +) 
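+
+# NOTE (illustrative comment, not protoc output): every proto3 map<K, V>
+# field compiles to a nested *MapEntry message like the ones above, with
+# `key` (field 1) and `value` (field 2); serialized_options=b'8\001' is the
+# wire encoding of MessageOptions.map_entry = true (field 7, varint 1).
+# A minimal usage sketch, assuming this module is importable as
+# json_format_proto3_pb2 and the generated classes below are in scope:
+#
+#   >>> m = TestNestedMap()
+#   >>> m.int32_map[1] = 2                     # map fields act like dicts
+#   >>> m.map_map['inner'].bool_map[True] = 3  # value type is TestNestedMap
+#   >>> dict(m.int32_map)
+#   {1: 2}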
+ +_TESTNESTEDMAP_UINT32MAPENTRY = _descriptor.Descriptor( + name='Uint32MapEntry', + full_name='proto3.TestNestedMap.Uint32MapEntry', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='key', full_name='proto3.TestNestedMap.Uint32MapEntry.key', index=0, + number=1, type=13, cpp_type=3, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='value', full_name='proto3.TestNestedMap.Uint32MapEntry.value', index=1, + number=2, type=5, cpp_type=1, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=b'8\001', + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=1703, + serialized_end=1751, +) + +_TESTNESTEDMAP_UINT64MAPENTRY = _descriptor.Descriptor( + name='Uint64MapEntry', + full_name='proto3.TestNestedMap.Uint64MapEntry', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='key', full_name='proto3.TestNestedMap.Uint64MapEntry.key', index=0, + number=1, type=4, cpp_type=4, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='value', full_name='proto3.TestNestedMap.Uint64MapEntry.value', index=1, + number=2, type=5, cpp_type=1, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=b'8\001', + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=1753, + serialized_end=1801, +) + +_TESTNESTEDMAP_STRINGMAPENTRY = _descriptor.Descriptor( + name='StringMapEntry', + full_name='proto3.TestNestedMap.StringMapEntry', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='key', full_name='proto3.TestNestedMap.StringMapEntry.key', index=0, + number=1, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='value', full_name='proto3.TestNestedMap.StringMapEntry.value', index=1, + number=2, type=5, cpp_type=1, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), 
+ ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=b'8\001', + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=1803, + serialized_end=1851, +) + +_TESTNESTEDMAP_MAPMAPENTRY = _descriptor.Descriptor( + name='MapMapEntry', + full_name='proto3.TestNestedMap.MapMapEntry', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='key', full_name='proto3.TestNestedMap.MapMapEntry.key', index=0, + number=1, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='value', full_name='proto3.TestNestedMap.MapMapEntry.value', index=1, + number=2, type=11, cpp_type=10, label=1, + has_default_value=False, default_value=None, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=b'8\001', + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=2559, + serialized_end=2627, +) + +_TESTNESTEDMAP = _descriptor.Descriptor( + name='TestNestedMap', + full_name='proto3.TestNestedMap', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='bool_map', full_name='proto3.TestNestedMap.bool_map', index=0, + number=1, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='int32_map', full_name='proto3.TestNestedMap.int32_map', index=1, + number=2, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='int64_map', full_name='proto3.TestNestedMap.int64_map', index=2, + number=3, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='uint32_map', full_name='proto3.TestNestedMap.uint32_map', index=3, + number=4, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='uint64_map', full_name='proto3.TestNestedMap.uint64_map', index=4, + number=5, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + 
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='string_map', full_name='proto3.TestNestedMap.string_map', index=5, + number=6, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='map_map', full_name='proto3.TestNestedMap.map_map', index=6, + number=7, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[_TESTNESTEDMAP_BOOLMAPENTRY, _TESTNESTEDMAP_INT32MAPENTRY, _TESTNESTEDMAP_INT64MAPENTRY, _TESTNESTEDMAP_UINT32MAPENTRY, _TESTNESTEDMAP_UINT64MAPENTRY, _TESTNESTEDMAP_STRINGMAPENTRY, _TESTNESTEDMAP_MAPMAPENTRY, ], + enum_types=[ + ], + serialized_options=None, + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=1854, + serialized_end=2627, +) + + +_TESTSTRINGMAP_STRINGMAPENTRY = _descriptor.Descriptor( + name='StringMapEntry', + full_name='proto3.TestStringMap.StringMapEntry', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='key', full_name='proto3.TestStringMap.StringMapEntry.key', index=0, + number=1, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='value', full_name='proto3.TestStringMap.StringMapEntry.value', index=1, + number=2, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=b'8\001', + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=2704, + serialized_end=2752, +) + +_TESTSTRINGMAP = _descriptor.Descriptor( + name='TestStringMap', + full_name='proto3.TestStringMap', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='string_map', full_name='proto3.TestStringMap.string_map', index=0, + number=1, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[_TESTSTRINGMAP_STRINGMAPENTRY, ], + enum_types=[ + ], + serialized_options=None, + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=2629, + serialized_end=2752, +) + + +_TESTWRAPPER = _descriptor.Descriptor( + name='TestWrapper', + full_name='proto3.TestWrapper', + 
filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='bool_value', full_name='proto3.TestWrapper.bool_value', index=0, + number=1, type=11, cpp_type=10, label=1, + has_default_value=False, default_value=None, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='int32_value', full_name='proto3.TestWrapper.int32_value', index=1, + number=2, type=11, cpp_type=10, label=1, + has_default_value=False, default_value=None, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='int64_value', full_name='proto3.TestWrapper.int64_value', index=2, + number=3, type=11, cpp_type=10, label=1, + has_default_value=False, default_value=None, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='uint32_value', full_name='proto3.TestWrapper.uint32_value', index=3, + number=4, type=11, cpp_type=10, label=1, + has_default_value=False, default_value=None, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='uint64_value', full_name='proto3.TestWrapper.uint64_value', index=4, + number=5, type=11, cpp_type=10, label=1, + has_default_value=False, default_value=None, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='float_value', full_name='proto3.TestWrapper.float_value', index=5, + number=6, type=11, cpp_type=10, label=1, + has_default_value=False, default_value=None, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='double_value', full_name='proto3.TestWrapper.double_value', index=6, + number=7, type=11, cpp_type=10, label=1, + has_default_value=False, default_value=None, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='string_value', full_name='proto3.TestWrapper.string_value', index=7, + number=8, type=11, cpp_type=10, label=1, + has_default_value=False, default_value=None, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='bytes_value', full_name='proto3.TestWrapper.bytes_value', index=8, + number=9, type=11, cpp_type=10, label=1, + has_default_value=False, default_value=None, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + 
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='repeated_bool_value', full_name='proto3.TestWrapper.repeated_bool_value', index=9, + number=11, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='repeated_int32_value', full_name='proto3.TestWrapper.repeated_int32_value', index=10, + number=12, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='repeated_int64_value', full_name='proto3.TestWrapper.repeated_int64_value', index=11, + number=13, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='repeated_uint32_value', full_name='proto3.TestWrapper.repeated_uint32_value', index=12, + number=14, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='repeated_uint64_value', full_name='proto3.TestWrapper.repeated_uint64_value', index=13, + number=15, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='repeated_float_value', full_name='proto3.TestWrapper.repeated_float_value', index=14, + number=16, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='repeated_double_value', full_name='proto3.TestWrapper.repeated_double_value', index=15, + number=17, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='repeated_string_value', full_name='proto3.TestWrapper.repeated_string_value', index=16, + number=18, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='repeated_bytes_value', full_name='proto3.TestWrapper.repeated_bytes_value', index=17, + number=19, type=11, cpp_type=10, label=3, + has_default_value=False, 
default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=None, + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=2755, + serialized_end=3761, +) + + +_TESTTIMESTAMP = _descriptor.Descriptor( + name='TestTimestamp', + full_name='proto3.TestTimestamp', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='value', full_name='proto3.TestTimestamp.value', index=0, + number=1, type=11, cpp_type=10, label=1, + has_default_value=False, default_value=None, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='repeated_value', full_name='proto3.TestTimestamp.repeated_value', index=1, + number=2, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=None, + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=3763, + serialized_end=3873, +) + + +_TESTDURATION = _descriptor.Descriptor( + name='TestDuration', + full_name='proto3.TestDuration', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='value', full_name='proto3.TestDuration.value', index=0, + number=1, type=11, cpp_type=10, label=1, + has_default_value=False, default_value=None, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='repeated_value', full_name='proto3.TestDuration.repeated_value', index=1, + number=2, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=None, + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=3875, + serialized_end=3982, +) + + +_TESTFIELDMASK = _descriptor.Descriptor( + name='TestFieldMask', + full_name='proto3.TestFieldMask', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='value', full_name='proto3.TestFieldMask.value', index=0, + number=1, type=11, cpp_type=10, label=1, + has_default_value=False, default_value=None, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + 
serialized_options=None, + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=3984, + serialized_end=4042, +) + + +_TESTSTRUCT = _descriptor.Descriptor( + name='TestStruct', + full_name='proto3.TestStruct', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='value', full_name='proto3.TestStruct.value', index=0, + number=1, type=11, cpp_type=10, label=1, + has_default_value=False, default_value=None, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='repeated_value', full_name='proto3.TestStruct.repeated_value', index=1, + number=2, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=None, + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=4044, + serialized_end=4145, +) + + +_TESTANY = _descriptor.Descriptor( + name='TestAny', + full_name='proto3.TestAny', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='value', full_name='proto3.TestAny.value', index=0, + number=1, type=11, cpp_type=10, label=1, + has_default_value=False, default_value=None, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='repeated_value', full_name='proto3.TestAny.repeated_value', index=1, + number=2, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=None, + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=4147, + serialized_end=4239, +) + + +_TESTVALUE = _descriptor.Descriptor( + name='TestValue', + full_name='proto3.TestValue', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='value', full_name='proto3.TestValue.value', index=0, + number=1, type=11, cpp_type=10, label=1, + has_default_value=False, default_value=None, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='repeated_value', full_name='proto3.TestValue.repeated_value', index=1, + number=2, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + 
], + nested_types=[], + enum_types=[ + ], + serialized_options=None, + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=4241, + serialized_end=4339, +) + + +_TESTLISTVALUE = _descriptor.Descriptor( + name='TestListValue', + full_name='proto3.TestListValue', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='value', full_name='proto3.TestListValue.value', index=0, + number=1, type=11, cpp_type=10, label=1, + has_default_value=False, default_value=None, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='repeated_value', full_name='proto3.TestListValue.repeated_value', index=1, + number=2, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=None, + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=4341, + serialized_end=4451, +) + + +_TESTBOOLVALUE_BOOLMAPENTRY = _descriptor.Descriptor( + name='BoolMapEntry', + full_name='proto3.TestBoolValue.BoolMapEntry', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='key', full_name='proto3.TestBoolValue.BoolMapEntry.key', index=0, + number=1, type=8, cpp_type=7, label=1, + has_default_value=False, default_value=False, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='value', full_name='proto3.TestBoolValue.BoolMapEntry.value', index=1, + number=2, type=5, cpp_type=1, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=b'8\001', + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=1557, + serialized_end=1603, +) + +_TESTBOOLVALUE = _descriptor.Descriptor( + name='TestBoolValue', + full_name='proto3.TestBoolValue', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='bool_value', full_name='proto3.TestBoolValue.bool_value', index=0, + number=1, type=8, cpp_type=7, label=1, + has_default_value=False, default_value=False, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='bool_map', full_name='proto3.TestBoolValue.bool_map', index=1, + number=2, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, 
extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[_TESTBOOLVALUE_BOOLMAPENTRY, ], + enum_types=[ + ], + serialized_options=None, + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=4454, + serialized_end=4591, +) + + +_TESTCUSTOMJSONNAME = _descriptor.Descriptor( + name='TestCustomJsonName', + full_name='proto3.TestCustomJsonName', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='value', full_name='proto3.TestCustomJsonName.value', index=0, + number=1, type=5, cpp_type=1, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, json_name='@value', file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=None, + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=4593, + serialized_end=4636, +) + + +_TESTEXTENSIONS = _descriptor.Descriptor( + name='TestExtensions', + full_name='proto3.TestExtensions', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='extensions', full_name='proto3.TestExtensions.extensions', index=0, + number=1, type=11, cpp_type=10, label=1, + has_default_value=False, default_value=None, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=None, + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=4638, + serialized_end=4712, +) + + +_TESTENUMVALUE = _descriptor.Descriptor( + name='TestEnumValue', + full_name='proto3.TestEnumValue', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='enum_value1', full_name='proto3.TestEnumValue.enum_value1', index=0, + number=1, type=14, cpp_type=8, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='enum_value2', full_name='proto3.TestEnumValue.enum_value2', index=1, + number=2, type=14, cpp_type=8, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='enum_value3', full_name='proto3.TestEnumValue.enum_value3', index=2, + number=3, type=14, cpp_type=8, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=None, + 
is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=4715, + serialized_end=4847, +) + +_TESTMESSAGE.fields_by_name['enum_value'].enum_type = _ENUMTYPE +_TESTMESSAGE.fields_by_name['message_value'].message_type = _MESSAGETYPE +_TESTMESSAGE.fields_by_name['repeated_enum_value'].enum_type = _ENUMTYPE +_TESTMESSAGE.fields_by_name['repeated_message_value'].message_type = _MESSAGETYPE +_TESTONEOF.fields_by_name['oneof_enum_value'].enum_type = _ENUMTYPE +_TESTONEOF.fields_by_name['oneof_message_value'].message_type = _MESSAGETYPE +_TESTONEOF.fields_by_name['oneof_null_value'].enum_type = google_dot_protobuf_dot_struct__pb2._NULLVALUE +_TESTONEOF.oneofs_by_name['oneof_value'].fields.append( + _TESTONEOF.fields_by_name['oneof_int32_value']) +_TESTONEOF.fields_by_name['oneof_int32_value'].containing_oneof = _TESTONEOF.oneofs_by_name['oneof_value'] +_TESTONEOF.oneofs_by_name['oneof_value'].fields.append( + _TESTONEOF.fields_by_name['oneof_string_value']) +_TESTONEOF.fields_by_name['oneof_string_value'].containing_oneof = _TESTONEOF.oneofs_by_name['oneof_value'] +_TESTONEOF.oneofs_by_name['oneof_value'].fields.append( + _TESTONEOF.fields_by_name['oneof_bytes_value']) +_TESTONEOF.fields_by_name['oneof_bytes_value'].containing_oneof = _TESTONEOF.oneofs_by_name['oneof_value'] +_TESTONEOF.oneofs_by_name['oneof_value'].fields.append( + _TESTONEOF.fields_by_name['oneof_enum_value']) +_TESTONEOF.fields_by_name['oneof_enum_value'].containing_oneof = _TESTONEOF.oneofs_by_name['oneof_value'] +_TESTONEOF.oneofs_by_name['oneof_value'].fields.append( + _TESTONEOF.fields_by_name['oneof_message_value']) +_TESTONEOF.fields_by_name['oneof_message_value'].containing_oneof = _TESTONEOF.oneofs_by_name['oneof_value'] +_TESTONEOF.oneofs_by_name['oneof_value'].fields.append( + _TESTONEOF.fields_by_name['oneof_null_value']) +_TESTONEOF.fields_by_name['oneof_null_value'].containing_oneof = _TESTONEOF.oneofs_by_name['oneof_value'] +_TESTMAP_BOOLMAPENTRY.containing_type = _TESTMAP +_TESTMAP_INT32MAPENTRY.containing_type = _TESTMAP +_TESTMAP_INT64MAPENTRY.containing_type = _TESTMAP +_TESTMAP_UINT32MAPENTRY.containing_type = _TESTMAP +_TESTMAP_UINT64MAPENTRY.containing_type = _TESTMAP +_TESTMAP_STRINGMAPENTRY.containing_type = _TESTMAP +_TESTMAP.fields_by_name['bool_map'].message_type = _TESTMAP_BOOLMAPENTRY +_TESTMAP.fields_by_name['int32_map'].message_type = _TESTMAP_INT32MAPENTRY +_TESTMAP.fields_by_name['int64_map'].message_type = _TESTMAP_INT64MAPENTRY +_TESTMAP.fields_by_name['uint32_map'].message_type = _TESTMAP_UINT32MAPENTRY +_TESTMAP.fields_by_name['uint64_map'].message_type = _TESTMAP_UINT64MAPENTRY +_TESTMAP.fields_by_name['string_map'].message_type = _TESTMAP_STRINGMAPENTRY +_TESTNESTEDMAP_BOOLMAPENTRY.containing_type = _TESTNESTEDMAP +_TESTNESTEDMAP_INT32MAPENTRY.containing_type = _TESTNESTEDMAP +_TESTNESTEDMAP_INT64MAPENTRY.containing_type = _TESTNESTEDMAP +_TESTNESTEDMAP_UINT32MAPENTRY.containing_type = _TESTNESTEDMAP +_TESTNESTEDMAP_UINT64MAPENTRY.containing_type = _TESTNESTEDMAP +_TESTNESTEDMAP_STRINGMAPENTRY.containing_type = _TESTNESTEDMAP +_TESTNESTEDMAP_MAPMAPENTRY.fields_by_name['value'].message_type = _TESTNESTEDMAP +_TESTNESTEDMAP_MAPMAPENTRY.containing_type = _TESTNESTEDMAP +_TESTNESTEDMAP.fields_by_name['bool_map'].message_type = _TESTNESTEDMAP_BOOLMAPENTRY +_TESTNESTEDMAP.fields_by_name['int32_map'].message_type = _TESTNESTEDMAP_INT32MAPENTRY +_TESTNESTEDMAP.fields_by_name['int64_map'].message_type = _TESTNESTEDMAP_INT64MAPENTRY 
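+# NOTE (illustrative comment, not protoc output): the descriptors above are
+# constructed with message_type=None / enum_type=None and then cross-linked
+# in this second pass, which is what allows self-referential types -- e.g.
+# _TESTNESTEDMAP_MAPMAPENTRY.fields_by_name['value'] points back at
+# _TESTNESTEDMAP itself. Oneof members are likewise appended to their
+# OneofDescriptor here; a minimal sketch of the resulting semantics:
+#
+#   >>> m = TestOneof()
+#   >>> m.WhichOneof('oneof_value') is None
+#   True
+#   >>> m.oneof_int32_value = 1
+#   >>> m.oneof_string_value = 'x'   # setting one member clears the others
+#   >>> m.WhichOneof('oneof_value')
+#   'oneof_string_value'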
+_TESTNESTEDMAP.fields_by_name['uint32_map'].message_type = _TESTNESTEDMAP_UINT32MAPENTRY +_TESTNESTEDMAP.fields_by_name['uint64_map'].message_type = _TESTNESTEDMAP_UINT64MAPENTRY +_TESTNESTEDMAP.fields_by_name['string_map'].message_type = _TESTNESTEDMAP_STRINGMAPENTRY +_TESTNESTEDMAP.fields_by_name['map_map'].message_type = _TESTNESTEDMAP_MAPMAPENTRY +_TESTSTRINGMAP_STRINGMAPENTRY.containing_type = _TESTSTRINGMAP +_TESTSTRINGMAP.fields_by_name['string_map'].message_type = _TESTSTRINGMAP_STRINGMAPENTRY +_TESTWRAPPER.fields_by_name['bool_value'].message_type = google_dot_protobuf_dot_wrappers__pb2._BOOLVALUE +_TESTWRAPPER.fields_by_name['int32_value'].message_type = google_dot_protobuf_dot_wrappers__pb2._INT32VALUE +_TESTWRAPPER.fields_by_name['int64_value'].message_type = google_dot_protobuf_dot_wrappers__pb2._INT64VALUE +_TESTWRAPPER.fields_by_name['uint32_value'].message_type = google_dot_protobuf_dot_wrappers__pb2._UINT32VALUE +_TESTWRAPPER.fields_by_name['uint64_value'].message_type = google_dot_protobuf_dot_wrappers__pb2._UINT64VALUE +_TESTWRAPPER.fields_by_name['float_value'].message_type = google_dot_protobuf_dot_wrappers__pb2._FLOATVALUE +_TESTWRAPPER.fields_by_name['double_value'].message_type = google_dot_protobuf_dot_wrappers__pb2._DOUBLEVALUE +_TESTWRAPPER.fields_by_name['string_value'].message_type = google_dot_protobuf_dot_wrappers__pb2._STRINGVALUE +_TESTWRAPPER.fields_by_name['bytes_value'].message_type = google_dot_protobuf_dot_wrappers__pb2._BYTESVALUE +_TESTWRAPPER.fields_by_name['repeated_bool_value'].message_type = google_dot_protobuf_dot_wrappers__pb2._BOOLVALUE +_TESTWRAPPER.fields_by_name['repeated_int32_value'].message_type = google_dot_protobuf_dot_wrappers__pb2._INT32VALUE +_TESTWRAPPER.fields_by_name['repeated_int64_value'].message_type = google_dot_protobuf_dot_wrappers__pb2._INT64VALUE +_TESTWRAPPER.fields_by_name['repeated_uint32_value'].message_type = google_dot_protobuf_dot_wrappers__pb2._UINT32VALUE +_TESTWRAPPER.fields_by_name['repeated_uint64_value'].message_type = google_dot_protobuf_dot_wrappers__pb2._UINT64VALUE +_TESTWRAPPER.fields_by_name['repeated_float_value'].message_type = google_dot_protobuf_dot_wrappers__pb2._FLOATVALUE +_TESTWRAPPER.fields_by_name['repeated_double_value'].message_type = google_dot_protobuf_dot_wrappers__pb2._DOUBLEVALUE +_TESTWRAPPER.fields_by_name['repeated_string_value'].message_type = google_dot_protobuf_dot_wrappers__pb2._STRINGVALUE +_TESTWRAPPER.fields_by_name['repeated_bytes_value'].message_type = google_dot_protobuf_dot_wrappers__pb2._BYTESVALUE +_TESTTIMESTAMP.fields_by_name['value'].message_type = google_dot_protobuf_dot_timestamp__pb2._TIMESTAMP +_TESTTIMESTAMP.fields_by_name['repeated_value'].message_type = google_dot_protobuf_dot_timestamp__pb2._TIMESTAMP +_TESTDURATION.fields_by_name['value'].message_type = google_dot_protobuf_dot_duration__pb2._DURATION +_TESTDURATION.fields_by_name['repeated_value'].message_type = google_dot_protobuf_dot_duration__pb2._DURATION +_TESTFIELDMASK.fields_by_name['value'].message_type = google_dot_protobuf_dot_field__mask__pb2._FIELDMASK +_TESTSTRUCT.fields_by_name['value'].message_type = google_dot_protobuf_dot_struct__pb2._STRUCT +_TESTSTRUCT.fields_by_name['repeated_value'].message_type = google_dot_protobuf_dot_struct__pb2._STRUCT +_TESTANY.fields_by_name['value'].message_type = google_dot_protobuf_dot_any__pb2._ANY +_TESTANY.fields_by_name['repeated_value'].message_type = google_dot_protobuf_dot_any__pb2._ANY +_TESTVALUE.fields_by_name['value'].message_type = 
google_dot_protobuf_dot_struct__pb2._VALUE +_TESTVALUE.fields_by_name['repeated_value'].message_type = google_dot_protobuf_dot_struct__pb2._VALUE +_TESTLISTVALUE.fields_by_name['value'].message_type = google_dot_protobuf_dot_struct__pb2._LISTVALUE +_TESTLISTVALUE.fields_by_name['repeated_value'].message_type = google_dot_protobuf_dot_struct__pb2._LISTVALUE +_TESTBOOLVALUE_BOOLMAPENTRY.containing_type = _TESTBOOLVALUE +_TESTBOOLVALUE.fields_by_name['bool_map'].message_type = _TESTBOOLVALUE_BOOLMAPENTRY +_TESTEXTENSIONS.fields_by_name['extensions'].message_type = google_dot_protobuf_dot_unittest__pb2._TESTALLEXTENSIONS +_TESTENUMVALUE.fields_by_name['enum_value1'].enum_type = _ENUMTYPE +_TESTENUMVALUE.fields_by_name['enum_value2'].enum_type = _ENUMTYPE +_TESTENUMVALUE.fields_by_name['enum_value3'].enum_type = _ENUMTYPE +DESCRIPTOR.message_types_by_name['MessageType'] = _MESSAGETYPE +DESCRIPTOR.message_types_by_name['TestMessage'] = _TESTMESSAGE +DESCRIPTOR.message_types_by_name['TestOneof'] = _TESTONEOF +DESCRIPTOR.message_types_by_name['TestMap'] = _TESTMAP +DESCRIPTOR.message_types_by_name['TestNestedMap'] = _TESTNESTEDMAP +DESCRIPTOR.message_types_by_name['TestStringMap'] = _TESTSTRINGMAP +DESCRIPTOR.message_types_by_name['TestWrapper'] = _TESTWRAPPER +DESCRIPTOR.message_types_by_name['TestTimestamp'] = _TESTTIMESTAMP +DESCRIPTOR.message_types_by_name['TestDuration'] = _TESTDURATION +DESCRIPTOR.message_types_by_name['TestFieldMask'] = _TESTFIELDMASK +DESCRIPTOR.message_types_by_name['TestStruct'] = _TESTSTRUCT +DESCRIPTOR.message_types_by_name['TestAny'] = _TESTANY +DESCRIPTOR.message_types_by_name['TestValue'] = _TESTVALUE +DESCRIPTOR.message_types_by_name['TestListValue'] = _TESTLISTVALUE +DESCRIPTOR.message_types_by_name['TestBoolValue'] = _TESTBOOLVALUE +DESCRIPTOR.message_types_by_name['TestCustomJsonName'] = _TESTCUSTOMJSONNAME +DESCRIPTOR.message_types_by_name['TestExtensions'] = _TESTEXTENSIONS +DESCRIPTOR.message_types_by_name['TestEnumValue'] = _TESTENUMVALUE +DESCRIPTOR.enum_types_by_name['EnumType'] = _ENUMTYPE +_sym_db.RegisterFileDescriptor(DESCRIPTOR) + +MessageType = _reflection.GeneratedProtocolMessageType('MessageType', (_message.Message,), { + 'DESCRIPTOR' : _MESSAGETYPE, + '__module__' : 'google.protobuf.util.json_format_proto3_pb2' + # @@protoc_insertion_point(class_scope:proto3.MessageType) + }) +_sym_db.RegisterMessage(MessageType) + +TestMessage = _reflection.GeneratedProtocolMessageType('TestMessage', (_message.Message,), { + 'DESCRIPTOR' : _TESTMESSAGE, + '__module__' : 'google.protobuf.util.json_format_proto3_pb2' + # @@protoc_insertion_point(class_scope:proto3.TestMessage) + }) +_sym_db.RegisterMessage(TestMessage) + +TestOneof = _reflection.GeneratedProtocolMessageType('TestOneof', (_message.Message,), { + 'DESCRIPTOR' : _TESTONEOF, + '__module__' : 'google.protobuf.util.json_format_proto3_pb2' + # @@protoc_insertion_point(class_scope:proto3.TestOneof) + }) +_sym_db.RegisterMessage(TestOneof) + +TestMap = _reflection.GeneratedProtocolMessageType('TestMap', (_message.Message,), { + + 'BoolMapEntry' : _reflection.GeneratedProtocolMessageType('BoolMapEntry', (_message.Message,), { + 'DESCRIPTOR' : _TESTMAP_BOOLMAPENTRY, + '__module__' : 'google.protobuf.util.json_format_proto3_pb2' + # @@protoc_insertion_point(class_scope:proto3.TestMap.BoolMapEntry) + }) + , + + 'Int32MapEntry' : _reflection.GeneratedProtocolMessageType('Int32MapEntry', (_message.Message,), { + 'DESCRIPTOR' : _TESTMAP_INT32MAPENTRY, + '__module__' : 'google.protobuf.util.json_format_proto3_pb2' 
+ # @@protoc_insertion_point(class_scope:proto3.TestMap.Int32MapEntry) + }) + , + + 'Int64MapEntry' : _reflection.GeneratedProtocolMessageType('Int64MapEntry', (_message.Message,), { + 'DESCRIPTOR' : _TESTMAP_INT64MAPENTRY, + '__module__' : 'google.protobuf.util.json_format_proto3_pb2' + # @@protoc_insertion_point(class_scope:proto3.TestMap.Int64MapEntry) + }) + , + + 'Uint32MapEntry' : _reflection.GeneratedProtocolMessageType('Uint32MapEntry', (_message.Message,), { + 'DESCRIPTOR' : _TESTMAP_UINT32MAPENTRY, + '__module__' : 'google.protobuf.util.json_format_proto3_pb2' + # @@protoc_insertion_point(class_scope:proto3.TestMap.Uint32MapEntry) + }) + , + + 'Uint64MapEntry' : _reflection.GeneratedProtocolMessageType('Uint64MapEntry', (_message.Message,), { + 'DESCRIPTOR' : _TESTMAP_UINT64MAPENTRY, + '__module__' : 'google.protobuf.util.json_format_proto3_pb2' + # @@protoc_insertion_point(class_scope:proto3.TestMap.Uint64MapEntry) + }) + , + + 'StringMapEntry' : _reflection.GeneratedProtocolMessageType('StringMapEntry', (_message.Message,), { + 'DESCRIPTOR' : _TESTMAP_STRINGMAPENTRY, + '__module__' : 'google.protobuf.util.json_format_proto3_pb2' + # @@protoc_insertion_point(class_scope:proto3.TestMap.StringMapEntry) + }) + , + 'DESCRIPTOR' : _TESTMAP, + '__module__' : 'google.protobuf.util.json_format_proto3_pb2' + # @@protoc_insertion_point(class_scope:proto3.TestMap) + }) +_sym_db.RegisterMessage(TestMap) +_sym_db.RegisterMessage(TestMap.BoolMapEntry) +_sym_db.RegisterMessage(TestMap.Int32MapEntry) +_sym_db.RegisterMessage(TestMap.Int64MapEntry) +_sym_db.RegisterMessage(TestMap.Uint32MapEntry) +_sym_db.RegisterMessage(TestMap.Uint64MapEntry) +_sym_db.RegisterMessage(TestMap.StringMapEntry) + +TestNestedMap = _reflection.GeneratedProtocolMessageType('TestNestedMap', (_message.Message,), { + + 'BoolMapEntry' : _reflection.GeneratedProtocolMessageType('BoolMapEntry', (_message.Message,), { + 'DESCRIPTOR' : _TESTNESTEDMAP_BOOLMAPENTRY, + '__module__' : 'google.protobuf.util.json_format_proto3_pb2' + # @@protoc_insertion_point(class_scope:proto3.TestNestedMap.BoolMapEntry) + }) + , + + 'Int32MapEntry' : _reflection.GeneratedProtocolMessageType('Int32MapEntry', (_message.Message,), { + 'DESCRIPTOR' : _TESTNESTEDMAP_INT32MAPENTRY, + '__module__' : 'google.protobuf.util.json_format_proto3_pb2' + # @@protoc_insertion_point(class_scope:proto3.TestNestedMap.Int32MapEntry) + }) + , + + 'Int64MapEntry' : _reflection.GeneratedProtocolMessageType('Int64MapEntry', (_message.Message,), { + 'DESCRIPTOR' : _TESTNESTEDMAP_INT64MAPENTRY, + '__module__' : 'google.protobuf.util.json_format_proto3_pb2' + # @@protoc_insertion_point(class_scope:proto3.TestNestedMap.Int64MapEntry) + }) + , + + 'Uint32MapEntry' : _reflection.GeneratedProtocolMessageType('Uint32MapEntry', (_message.Message,), { + 'DESCRIPTOR' : _TESTNESTEDMAP_UINT32MAPENTRY, + '__module__' : 'google.protobuf.util.json_format_proto3_pb2' + # @@protoc_insertion_point(class_scope:proto3.TestNestedMap.Uint32MapEntry) + }) + , + + 'Uint64MapEntry' : _reflection.GeneratedProtocolMessageType('Uint64MapEntry', (_message.Message,), { + 'DESCRIPTOR' : _TESTNESTEDMAP_UINT64MAPENTRY, + '__module__' : 'google.protobuf.util.json_format_proto3_pb2' + # @@protoc_insertion_point(class_scope:proto3.TestNestedMap.Uint64MapEntry) + }) + , + + 'StringMapEntry' : _reflection.GeneratedProtocolMessageType('StringMapEntry', (_message.Message,), { + 'DESCRIPTOR' : _TESTNESTEDMAP_STRINGMAPENTRY, + '__module__' : 'google.protobuf.util.json_format_proto3_pb2' + # 
@@protoc_insertion_point(class_scope:proto3.TestNestedMap.StringMapEntry) + }) + , + + 'MapMapEntry' : _reflection.GeneratedProtocolMessageType('MapMapEntry', (_message.Message,), { + 'DESCRIPTOR' : _TESTNESTEDMAP_MAPMAPENTRY, + '__module__' : 'google.protobuf.util.json_format_proto3_pb2' + # @@protoc_insertion_point(class_scope:proto3.TestNestedMap.MapMapEntry) + }) + , + 'DESCRIPTOR' : _TESTNESTEDMAP, + '__module__' : 'google.protobuf.util.json_format_proto3_pb2' + # @@protoc_insertion_point(class_scope:proto3.TestNestedMap) + }) +_sym_db.RegisterMessage(TestNestedMap) +_sym_db.RegisterMessage(TestNestedMap.BoolMapEntry) +_sym_db.RegisterMessage(TestNestedMap.Int32MapEntry) +_sym_db.RegisterMessage(TestNestedMap.Int64MapEntry) +_sym_db.RegisterMessage(TestNestedMap.Uint32MapEntry) +_sym_db.RegisterMessage(TestNestedMap.Uint64MapEntry) +_sym_db.RegisterMessage(TestNestedMap.StringMapEntry) +_sym_db.RegisterMessage(TestNestedMap.MapMapEntry) + +TestStringMap = _reflection.GeneratedProtocolMessageType('TestStringMap', (_message.Message,), { + + 'StringMapEntry' : _reflection.GeneratedProtocolMessageType('StringMapEntry', (_message.Message,), { + 'DESCRIPTOR' : _TESTSTRINGMAP_STRINGMAPENTRY, + '__module__' : 'google.protobuf.util.json_format_proto3_pb2' + # @@protoc_insertion_point(class_scope:proto3.TestStringMap.StringMapEntry) + }) + , + 'DESCRIPTOR' : _TESTSTRINGMAP, + '__module__' : 'google.protobuf.util.json_format_proto3_pb2' + # @@protoc_insertion_point(class_scope:proto3.TestStringMap) + }) +_sym_db.RegisterMessage(TestStringMap) +_sym_db.RegisterMessage(TestStringMap.StringMapEntry) + +TestWrapper = _reflection.GeneratedProtocolMessageType('TestWrapper', (_message.Message,), { + 'DESCRIPTOR' : _TESTWRAPPER, + '__module__' : 'google.protobuf.util.json_format_proto3_pb2' + # @@protoc_insertion_point(class_scope:proto3.TestWrapper) + }) +_sym_db.RegisterMessage(TestWrapper) + +TestTimestamp = _reflection.GeneratedProtocolMessageType('TestTimestamp', (_message.Message,), { + 'DESCRIPTOR' : _TESTTIMESTAMP, + '__module__' : 'google.protobuf.util.json_format_proto3_pb2' + # @@protoc_insertion_point(class_scope:proto3.TestTimestamp) + }) +_sym_db.RegisterMessage(TestTimestamp) + +TestDuration = _reflection.GeneratedProtocolMessageType('TestDuration', (_message.Message,), { + 'DESCRIPTOR' : _TESTDURATION, + '__module__' : 'google.protobuf.util.json_format_proto3_pb2' + # @@protoc_insertion_point(class_scope:proto3.TestDuration) + }) +_sym_db.RegisterMessage(TestDuration) + +TestFieldMask = _reflection.GeneratedProtocolMessageType('TestFieldMask', (_message.Message,), { + 'DESCRIPTOR' : _TESTFIELDMASK, + '__module__' : 'google.protobuf.util.json_format_proto3_pb2' + # @@protoc_insertion_point(class_scope:proto3.TestFieldMask) + }) +_sym_db.RegisterMessage(TestFieldMask) + +TestStruct = _reflection.GeneratedProtocolMessageType('TestStruct', (_message.Message,), { + 'DESCRIPTOR' : _TESTSTRUCT, + '__module__' : 'google.protobuf.util.json_format_proto3_pb2' + # @@protoc_insertion_point(class_scope:proto3.TestStruct) + }) +_sym_db.RegisterMessage(TestStruct) + +TestAny = _reflection.GeneratedProtocolMessageType('TestAny', (_message.Message,), { + 'DESCRIPTOR' : _TESTANY, + '__module__' : 'google.protobuf.util.json_format_proto3_pb2' + # @@protoc_insertion_point(class_scope:proto3.TestAny) + }) +_sym_db.RegisterMessage(TestAny) + +TestValue = _reflection.GeneratedProtocolMessageType('TestValue', (_message.Message,), { + 'DESCRIPTOR' : _TESTVALUE, + '__module__' : 
'google.protobuf.util.json_format_proto3_pb2' + # @@protoc_insertion_point(class_scope:proto3.TestValue) + }) +_sym_db.RegisterMessage(TestValue) + +TestListValue = _reflection.GeneratedProtocolMessageType('TestListValue', (_message.Message,), { + 'DESCRIPTOR' : _TESTLISTVALUE, + '__module__' : 'google.protobuf.util.json_format_proto3_pb2' + # @@protoc_insertion_point(class_scope:proto3.TestListValue) + }) +_sym_db.RegisterMessage(TestListValue) + +TestBoolValue = _reflection.GeneratedProtocolMessageType('TestBoolValue', (_message.Message,), { + + 'BoolMapEntry' : _reflection.GeneratedProtocolMessageType('BoolMapEntry', (_message.Message,), { + 'DESCRIPTOR' : _TESTBOOLVALUE_BOOLMAPENTRY, + '__module__' : 'google.protobuf.util.json_format_proto3_pb2' + # @@protoc_insertion_point(class_scope:proto3.TestBoolValue.BoolMapEntry) + }) + , + 'DESCRIPTOR' : _TESTBOOLVALUE, + '__module__' : 'google.protobuf.util.json_format_proto3_pb2' + # @@protoc_insertion_point(class_scope:proto3.TestBoolValue) + }) +_sym_db.RegisterMessage(TestBoolValue) +_sym_db.RegisterMessage(TestBoolValue.BoolMapEntry) + +TestCustomJsonName = _reflection.GeneratedProtocolMessageType('TestCustomJsonName', (_message.Message,), { + 'DESCRIPTOR' : _TESTCUSTOMJSONNAME, + '__module__' : 'google.protobuf.util.json_format_proto3_pb2' + # @@protoc_insertion_point(class_scope:proto3.TestCustomJsonName) + }) +_sym_db.RegisterMessage(TestCustomJsonName) + +TestExtensions = _reflection.GeneratedProtocolMessageType('TestExtensions', (_message.Message,), { + 'DESCRIPTOR' : _TESTEXTENSIONS, + '__module__' : 'google.protobuf.util.json_format_proto3_pb2' + # @@protoc_insertion_point(class_scope:proto3.TestExtensions) + }) +_sym_db.RegisterMessage(TestExtensions) + +TestEnumValue = _reflection.GeneratedProtocolMessageType('TestEnumValue', (_message.Message,), { + 'DESCRIPTOR' : _TESTENUMVALUE, + '__module__' : 'google.protobuf.util.json_format_proto3_pb2' + # @@protoc_insertion_point(class_scope:proto3.TestEnumValue) + }) +_sym_db.RegisterMessage(TestEnumValue) + + +DESCRIPTOR._options = None +_TESTMAP_BOOLMAPENTRY._options = None +_TESTMAP_INT32MAPENTRY._options = None +_TESTMAP_INT64MAPENTRY._options = None +_TESTMAP_UINT32MAPENTRY._options = None +_TESTMAP_UINT64MAPENTRY._options = None +_TESTMAP_STRINGMAPENTRY._options = None +_TESTNESTEDMAP_BOOLMAPENTRY._options = None +_TESTNESTEDMAP_INT32MAPENTRY._options = None +_TESTNESTEDMAP_INT64MAPENTRY._options = None +_TESTNESTEDMAP_UINT32MAPENTRY._options = None +_TESTNESTEDMAP_UINT64MAPENTRY._options = None +_TESTNESTEDMAP_STRINGMAPENTRY._options = None +_TESTNESTEDMAP_MAPMAPENTRY._options = None +_TESTSTRINGMAP_STRINGMAPENTRY._options = None +_TESTBOOLVALUE_BOOLMAPENTRY._options = None +# @@protoc_insertion_point(module_scope) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/wrappers_pb2.py b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/wrappers_pb2.py new file mode 100644 index 0000000000000000000000000000000000000000..6e5e2bc74256c02bdeb02275473749a07860c882 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/google/protobuf/wrappers_pb2.py @@ -0,0 +1,391 @@ +# -*- coding: utf-8 -*- +# Generated by the protocol buffer compiler. DO NOT EDIT! 
+# source: google/protobuf/wrappers.proto +"""Generated protocol buffer code.""" +from google.protobuf import descriptor as _descriptor +from google.protobuf import message as _message +from google.protobuf import reflection as _reflection +from google.protobuf import symbol_database as _symbol_database +# @@protoc_insertion_point(imports) + +_sym_db = _symbol_database.Default() + + + + +DESCRIPTOR = _descriptor.FileDescriptor( + name='google/protobuf/wrappers.proto', + package='google.protobuf', + syntax='proto3', + serialized_options=b'\n\023com.google.protobufB\rWrappersProtoP\001Z1google.golang.org/protobuf/types/known/wrapperspb\370\001\001\242\002\003GPB\252\002\036Google.Protobuf.WellKnownTypes', + create_key=_descriptor._internal_create_key, + serialized_pb=b'\n\x1egoogle/protobuf/wrappers.proto\x12\x0fgoogle.protobuf\"\x1c\n\x0b\x44oubleValue\x12\r\n\x05value\x18\x01 \x01(\x01\"\x1b\n\nFloatValue\x12\r\n\x05value\x18\x01 \x01(\x02\"\x1b\n\nInt64Value\x12\r\n\x05value\x18\x01 \x01(\x03\"\x1c\n\x0bUInt64Value\x12\r\n\x05value\x18\x01 \x01(\x04\"\x1b\n\nInt32Value\x12\r\n\x05value\x18\x01 \x01(\x05\"\x1c\n\x0bUInt32Value\x12\r\n\x05value\x18\x01 \x01(\r\"\x1a\n\tBoolValue\x12\r\n\x05value\x18\x01 \x01(\x08\"\x1c\n\x0bStringValue\x12\r\n\x05value\x18\x01 \x01(\t\"\x1b\n\nBytesValue\x12\r\n\x05value\x18\x01 \x01(\x0c\x42\x83\x01\n\x13\x63om.google.protobufB\rWrappersProtoP\x01Z1google.golang.org/protobuf/types/known/wrapperspb\xf8\x01\x01\xa2\x02\x03GPB\xaa\x02\x1eGoogle.Protobuf.WellKnownTypesb\x06proto3' +) + + + + +_DOUBLEVALUE = _descriptor.Descriptor( + name='DoubleValue', + full_name='google.protobuf.DoubleValue', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='value', full_name='google.protobuf.DoubleValue.value', index=0, + number=1, type=1, cpp_type=5, label=1, + has_default_value=False, default_value=float(0), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=None, + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=51, + serialized_end=79, +) + + +_FLOATVALUE = _descriptor.Descriptor( + name='FloatValue', + full_name='google.protobuf.FloatValue', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='value', full_name='google.protobuf.FloatValue.value', index=0, + number=1, type=2, cpp_type=6, label=1, + has_default_value=False, default_value=float(0), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=None, + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=81, + serialized_end=108, +) + + +_INT64VALUE = _descriptor.Descriptor( + name='Int64Value', + full_name='google.protobuf.Int64Value', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='value', full_name='google.protobuf.Int64Value.value', index=0, + number=1, type=3, 
cpp_type=2, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=None, + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=110, + serialized_end=137, +) + + +_UINT64VALUE = _descriptor.Descriptor( + name='UInt64Value', + full_name='google.protobuf.UInt64Value', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='value', full_name='google.protobuf.UInt64Value.value', index=0, + number=1, type=4, cpp_type=4, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=None, + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=139, + serialized_end=167, +) + + +_INT32VALUE = _descriptor.Descriptor( + name='Int32Value', + full_name='google.protobuf.Int32Value', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='value', full_name='google.protobuf.Int32Value.value', index=0, + number=1, type=5, cpp_type=1, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=None, + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=169, + serialized_end=196, +) + + +_UINT32VALUE = _descriptor.Descriptor( + name='UInt32Value', + full_name='google.protobuf.UInt32Value', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='value', full_name='google.protobuf.UInt32Value.value', index=0, + number=1, type=13, cpp_type=3, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=None, + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=198, + serialized_end=226, +) + + +_BOOLVALUE = _descriptor.Descriptor( + name='BoolValue', + full_name='google.protobuf.BoolValue', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='value', full_name='google.protobuf.BoolValue.value', index=0, + number=1, type=8, cpp_type=7, label=1, + has_default_value=False, default_value=False, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, 
create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=None, + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=228, + serialized_end=254, +) + + +_STRINGVALUE = _descriptor.Descriptor( + name='StringValue', + full_name='google.protobuf.StringValue', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='value', full_name='google.protobuf.StringValue.value', index=0, + number=1, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=None, + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=256, + serialized_end=284, +) + + +_BYTESVALUE = _descriptor.Descriptor( + name='BytesValue', + full_name='google.protobuf.BytesValue', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='value', full_name='google.protobuf.BytesValue.value', index=0, + number=1, type=12, cpp_type=9, label=1, + has_default_value=False, default_value=b"", + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=None, + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=286, + serialized_end=313, +) + +DESCRIPTOR.message_types_by_name['DoubleValue'] = _DOUBLEVALUE +DESCRIPTOR.message_types_by_name['FloatValue'] = _FLOATVALUE +DESCRIPTOR.message_types_by_name['Int64Value'] = _INT64VALUE +DESCRIPTOR.message_types_by_name['UInt64Value'] = _UINT64VALUE +DESCRIPTOR.message_types_by_name['Int32Value'] = _INT32VALUE +DESCRIPTOR.message_types_by_name['UInt32Value'] = _UINT32VALUE +DESCRIPTOR.message_types_by_name['BoolValue'] = _BOOLVALUE +DESCRIPTOR.message_types_by_name['StringValue'] = _STRINGVALUE +DESCRIPTOR.message_types_by_name['BytesValue'] = _BYTESVALUE +_sym_db.RegisterFileDescriptor(DESCRIPTOR) + +DoubleValue = _reflection.GeneratedProtocolMessageType('DoubleValue', (_message.Message,), { + 'DESCRIPTOR' : _DOUBLEVALUE, + '__module__' : 'google.protobuf.wrappers_pb2' + # @@protoc_insertion_point(class_scope:google.protobuf.DoubleValue) + }) +_sym_db.RegisterMessage(DoubleValue) + +FloatValue = _reflection.GeneratedProtocolMessageType('FloatValue', (_message.Message,), { + 'DESCRIPTOR' : _FLOATVALUE, + '__module__' : 'google.protobuf.wrappers_pb2' + # @@protoc_insertion_point(class_scope:google.protobuf.FloatValue) + }) +_sym_db.RegisterMessage(FloatValue) + +Int64Value = _reflection.GeneratedProtocolMessageType('Int64Value', (_message.Message,), { + 'DESCRIPTOR' : _INT64VALUE, + '__module__' : 'google.protobuf.wrappers_pb2' + # @@protoc_insertion_point(class_scope:google.protobuf.Int64Value) + }) +_sym_db.RegisterMessage(Int64Value) + +UInt64Value = _reflection.GeneratedProtocolMessageType('UInt64Value', (_message.Message,), { + 'DESCRIPTOR' : _UINT64VALUE, + '__module__' : 
'google.protobuf.wrappers_pb2' + # @@protoc_insertion_point(class_scope:google.protobuf.UInt64Value) + }) +_sym_db.RegisterMessage(UInt64Value) + +Int32Value = _reflection.GeneratedProtocolMessageType('Int32Value', (_message.Message,), { + 'DESCRIPTOR' : _INT32VALUE, + '__module__' : 'google.protobuf.wrappers_pb2' + # @@protoc_insertion_point(class_scope:google.protobuf.Int32Value) + }) +_sym_db.RegisterMessage(Int32Value) + +UInt32Value = _reflection.GeneratedProtocolMessageType('UInt32Value', (_message.Message,), { + 'DESCRIPTOR' : _UINT32VALUE, + '__module__' : 'google.protobuf.wrappers_pb2' + # @@protoc_insertion_point(class_scope:google.protobuf.UInt32Value) + }) +_sym_db.RegisterMessage(UInt32Value) + +BoolValue = _reflection.GeneratedProtocolMessageType('BoolValue', (_message.Message,), { + 'DESCRIPTOR' : _BOOLVALUE, + '__module__' : 'google.protobuf.wrappers_pb2' + # @@protoc_insertion_point(class_scope:google.protobuf.BoolValue) + }) +_sym_db.RegisterMessage(BoolValue) + +StringValue = _reflection.GeneratedProtocolMessageType('StringValue', (_message.Message,), { + 'DESCRIPTOR' : _STRINGVALUE, + '__module__' : 'google.protobuf.wrappers_pb2' + # @@protoc_insertion_point(class_scope:google.protobuf.StringValue) + }) +_sym_db.RegisterMessage(StringValue) + +BytesValue = _reflection.GeneratedProtocolMessageType('BytesValue', (_message.Message,), { + 'DESCRIPTOR' : _BYTESVALUE, + '__module__' : 'google.protobuf.wrappers_pb2' + # @@protoc_insertion_point(class_scope:google.protobuf.BytesValue) + }) +_sym_db.RegisterMessage(BytesValue) + + +DESCRIPTOR._options = None +# @@protoc_insertion_point(module_scope) diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/html5lib-1.0.1.dist-info/AUTHORS.txt b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/html5lib-1.0.1.dist-info/AUTHORS.txt new file mode 100644 index 0000000000000000000000000000000000000000..72c87d7d38ae7bf859717c333a5ee8230f6ce624 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/html5lib-1.0.1.dist-info/AUTHORS.txt @@ -0,0 +1,562 @@ +A_Rog +Aakanksha Agrawal <11389424+rasponic@users.noreply.github.com> +Abhinav Sagar <40603139+abhinavsagar@users.noreply.github.com> +ABHYUDAY PRATAP SINGH +abs51295 +AceGentile +Adam Chainz +Adam Tse +Adam Tse +Adam Wentz +admin +Adrien Morison +ahayrapetyan +Ahilya +AinsworthK +Akash Srivastava +Alan Yee +Albert Tugushev +Albert-Guan +albertg +Aleks Bunin +Alethea Flowers +Alex Gaynor +Alex Grönholm +Alex Loosley +Alex Morega +Alex Stachowiak +Alexander Shtyrov +Alexandre Conrad +Alexey Popravka +Alexey Popravka +Alli +Ami Fischman +Ananya Maiti +Anatoly Techtonik +Anders Kaseorg +Andreas Lutro +Andrei Geacar +Andrew Gaul +Andrey Bulgakov +Andrés Delfino <34587441+andresdelfino@users.noreply.github.com> +Andrés Delfino +Andy Freeland +Andy Freeland +Andy Kluger +Ani Hayrapetyan +Aniruddha Basak +Anish Tambe +Anrs Hu +Anthony Sottile +Antoine Musso +Anton Ovchinnikov +Anton Patrushev +Antonio Alvarado Hernandez +Antony Lee +Antti Kaihola +Anubhav Patel +Anuj Godase +AQNOUCH Mohammed +AraHaan +Arindam Choudhury +Armin Ronacher +Artem +Ashley Manton +Ashwin Ramaswami +atse +Atsushi Odagiri +Avner Cohen +Baptiste Mispelon +Barney Gale +barneygale +Bartek Ogryczak +Bastian Venthur +Ben Darnell +Ben Hoyt +Ben Rosser +Bence Nagy +Benjamin Peterson +Benjamin VanEvery +Benoit Pierre +Berker Peksag +Bernardo B. Marques +Bernhard M. 
Wiedemann +Bertil Hatt +Bogdan Opanchuk +BorisZZZ +Brad Erickson +Bradley Ayers +Brandon L. Reiss +Brandt Bucher +Brett Randall +Brian Cristante <33549821+brcrista@users.noreply.github.com> +Brian Cristante +Brian Rosner +BrownTruck +Bruno Oliveira +Bruno Renié +Bstrdsmkr +Buck Golemon +burrows +Bussonnier Matthias +c22 +Caleb Martinez +Calvin Smith +Carl Meyer +Carlos Liam +Carol Willing +Carter Thayer +Cass +Chandrasekhar Atina +Chih-Hsuan Yen +Chih-Hsuan Yen +Chris Brinker +Chris Hunt +Chris Jerdonek +Chris McDonough +Chris Wolfe +Christian Heimes +Christian Oudard +Christopher Hunt +Christopher Snyder +Clark Boylan +Clay McClure +Cody +Cody Soyland +Colin Watson +Connor Osborn +Cooper Lees +Cooper Ry Lees +Cory Benfield +Cory Wright +Craig Kerstiens +Cristian Sorinel +Curtis Doty +cytolentino +Damian Quiroga +Dan Black +Dan Savilonis +Dan Sully +daniel +Daniel Collins +Daniel Hahler +Daniel Holth +Daniel Jost +Daniel Shaulov +Daniele Esposti +Daniele Procida +Danny Hermes +Dav Clark +Dave Abrahams +Dave Jones +David Aguilar +David Black +David Bordeynik +David Bordeynik +David Caro +David Evans +David Linke +David Pursehouse +David Tucker +David Wales +Davidovich +derwolfe +Desetude +Diego Caraballo +DiegoCaraballo +Dmitry Gladkov +Domen Kožar +Donald Stufft +Dongweiming +Douglas Thor +DrFeathers +Dustin Ingram +Dwayne Bailey +Ed Morley <501702+edmorley@users.noreply.github.com> +Ed Morley +Eitan Adler +ekristina +elainechan +Eli Schwartz +Eli Schwartz +Emil Burzo +Emil Styrke +Endoh Takanao +enoch +Erdinc Mutlu +Eric Gillingham +Eric Hanchrow +Eric Hopper +Erik M. Bray +Erik Rose +Ernest W Durbin III +Ernest W. Durbin III +Erwin Janssen +Eugene Vereshchagin +everdimension +Felix Yan +fiber-space +Filip Kokosiński +Florian Briand +Florian Rathgeber +Francesco +Francesco Montesano +Frost Ming +Gabriel Curio +Gabriel de Perthuis +Garry Polley +gdanielson +Geoffrey Lehée +Geoffrey Sneddon +George Song +Georgi Valkov +Giftlin Rajaiah +gizmoguy1 +gkdoc <40815324+gkdoc@users.noreply.github.com> +Gopinath M <31352222+mgopi1990@users.noreply.github.com> +GOTO Hayato <3532528+gh640@users.noreply.github.com> +gpiks +Guilherme Espada +Guy Rozendorn +gzpan123 +Hanjun Kim +Hari Charan +Harsh Vardhan +Herbert Pfennig +Hsiaoming Yang +Hugo +Hugo Lopes Tavares +Hugo van Kemenade +hugovk +Hynek Schlawack +Ian Bicking +Ian Cordasco +Ian Lee +Ian Stapleton Cordasco +Ian Wienand +Ian Wienand +Igor Kuzmitshov +Igor Sobreira +Ilya Baryshev +INADA Naoki +Ionel Cristian Mărieș +Ionel Maries Cristian +Ivan Pozdeev +Jacob Kim +jakirkham +Jakub Stasiak +Jakub Vysoky +Jakub Wilk +James Cleveland +James Cleveland +James Firth +James Polley +Jan Pokorný +Jannis Leidel +jarondl +Jason R. 
Coombs +Jay Graves +Jean-Christophe Fillion-Robin +Jeff Barber +Jeff Dairiki +Jelmer Vernooij +jenix21 +Jeremy Stanley +Jeremy Zafran +Jiashuo Li +Jim Garrison +Jivan Amara +John Paton +John-Scott Atlakson +johnthagen +johnthagen +Jon Banafato +Jon Dufresne +Jon Parise +Jonas Nockert +Jonathan Herbert +Joost Molenaar +Jorge Niedbalski +Joseph Long +Josh Bronson +Josh Hansen +Josh Schneier +Juanjo Bazán +Julian Berman +Julian Gethmann +Julien Demoor +jwg4 +Jyrki Pulliainen +Kai Chen +Kamal Bin Mustafa +kaustav haldar +keanemind +Keith Maxwell +Kelsey Hightower +Kenneth Belitzky +Kenneth Reitz +Kenneth Reitz +Kevin Burke +Kevin Carter +Kevin Frommelt +Kevin R Patterson +Kexuan Sun +Kit Randel +kpinc +Krishna Oza +Kumar McMillan +Kyle Persohn +lakshmanaram +Laszlo Kiss-Kollar +Laurent Bristiel +Laurie Opperman +Leon Sasson +Lev Givon +Lincoln de Sousa +Lipis +Loren Carvalho +Lucas Cimon +Ludovic Gasc +Luke Macken +Luo Jiebin +luojiebin +luz.paz +László Kiss Kollár +László Kiss Kollár +Marc Abramowitz +Marc Tamlyn +Marcus Smith +Mariatta +Mark Kohler +Mark Williams +Mark Williams +Markus Hametner +Masaki +Masklinn +Matej Stuchlik +Mathew Jennings +Mathieu Bridon +Matt Good +Matt Maker +Matt Robenolt +matthew +Matthew Einhorn +Matthew Gilliard +Matthew Iversen +Matthew Trumbell +Matthew Willson +Matthias Bussonnier +mattip +Maxim Kurnikov +Maxime Rouyrre +mayeut +mbaluna <44498973+mbaluna@users.noreply.github.com> +mdebi <17590103+mdebi@users.noreply.github.com> +memoselyk +Michael +Michael Aquilina +Michael E. Karpeles +Michael Klich +Michael Williamson +michaelpacer +Mickaël Schoentgen +Miguel Araujo Perez +Mihir Singh +Mike +Mike Hendricks +Min RK +MinRK +Miro Hrončok +Monica Baluna +montefra +Monty Taylor +Nate Coraor +Nathaniel J. Smith +Nehal J Wani +Neil Botelho +Nick Coghlan +Nick Stenning +Nick Timkovich +Nicolas Bock +Nikhil Benesch +Nitesh Sharma +Nowell Strite +NtaleGrey +nvdv +Ofekmeister +ofrinevo +Oliver Jeeves +Oliver Tonnhofer +Olivier Girardot +Olivier Grisel +Ollie Rutherfurd +OMOTO Kenji +Omry Yadan +Oren Held +Oscar Benjamin +Oz N Tiram +Pachwenko <32424503+Pachwenko@users.noreply.github.com> +Patrick Dubroy +Patrick Jenkins +Patrick Lawson +patricktokeeffe +Patrik Kopkan +Paul Kehrer +Paul Moore +Paul Nasrat +Paul Oswald +Paul van der Linden +Paulus Schoutsen +Pavithra Eswaramoorthy <33131404+QueenCoffee@users.noreply.github.com> +Pawel Jasinski +Pekka Klärck +Peter Lisák +Peter Waller +petr-tik +Phaneendra Chiruvella +Phil Freo +Phil Pennock +Phil Whelan +Philip Jägenstedt +Philip Molloy +Philippe Ombredanne +Pi Delport +Pierre-Yves Rofes +pip +Prabakaran Kumaresshan +Prabhjyotsing Surjit Singh Sodhi +Prabhu Marappan +Pradyun Gedam +Pratik Mallya +Preet Thakkar +Preston Holmes +Przemek Wrzos +Pulkit Goyal <7895pulkit@gmail.com> +Qiangning Hong +Quentin Pradet +R. David Murray +Rafael Caricio +Ralf Schmitt +Razzi Abuissa +rdb +Remi Rampin +Remi Rampin +Rene Dudfield +Riccardo Magliocchetti +Richard Jones +RobberPhex +Robert Collins +Robert McGibbon +Robert T. 
McGibbon +robin elisha robinson +Roey Berman +Rohan Jain +Rohan Jain +Rohan Jain +Roman Bogorodskiy +Romuald Brunet +Ronny Pfannschmidt +Rory McCann +Ross Brattain +Roy Wellington Ⅳ +Roy Wellington Ⅳ +Ryan Wooden +ryneeverett +Sachi King +Salvatore Rinchiera +Savio Jomton +schlamar +Scott Kitterman +Sean +seanj +Sebastian Jordan +Sebastian Schaetz +Segev Finer +SeongSoo Cho +Sergey Vasilyev +Seth Woodworth +Shlomi Fish +Shovan Maity +Simeon Visser +Simon Cross +Simon Pichugin +sinoroc +Sorin Sbarnea +Stavros Korokithakis +Stefan Scherfke +Stephan Erb +stepshal +Steve (Gadget) Barnes +Steve Barnes +Steve Dower +Steve Kowalik +Steven Myint +stonebig +Stéphane Bidoul (ACSONE) +Stéphane Bidoul +Stéphane Klein +Sumana Harihareswara +Sviatoslav Sydorenko +Sviatoslav Sydorenko +Swat009 +Takayuki SHIMIZUKAWA +tbeswick +Thijs Triemstra +Thomas Fenzl +Thomas Grainger +Thomas Guettler +Thomas Johansson +Thomas Kluyver +Thomas Smith +Tim D. Smith +Tim Gates +Tim Harder +Tim Heap +tim smith +tinruufu +Tom Forbes +Tom Freudenheim +Tom V +Tomas Orsava +Tomer Chachamu +Tony Beswick +Tony Zhaocheng Tan +TonyBeswick +toonarmycaptain +Toshio Kuratomi +Travis Swicegood +Tzu-ping Chung +Valentin Haenel +Victor Stinner +victorvpaulo +Viktor Szépe +Ville Skyttä +Vinay Sajip +Vincent Philippon +Vinicyus Macedo <7549205+vinicyusmacedo@users.noreply.github.com> +Vitaly Babiy +Vladimir Rutsky +W. Trevor King +Wil Tan +Wilfred Hughes +William ML Leslie +William T Olson +Wilson Mo +wim glenn +Wolfgang Maier +Xavier Fernandez +Xavier Fernandez +xoviat +xtreak +YAMAMOTO Takashi +Yen Chi Hsuan +Yeray Diaz Diaz +Yoval P +Yu Jian +Yuan Jing Vincent Yan +Zearin +Zearin +Zhiping Deng +Zvezdan Petkovic +Łukasz Langa +Семён Марьясин diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/html5lib-1.0.1.dist-info/INSTALLER b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/html5lib-1.0.1.dist-info/INSTALLER new file mode 100644 index 0000000000000000000000000000000000000000..a1b589e38a32041e49332e5e81c2d363dc418d68 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/html5lib-1.0.1.dist-info/INSTALLER @@ -0,0 +1 @@ +pip diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/html5lib-1.0.1.dist-info/LICENSE.txt b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/html5lib-1.0.1.dist-info/LICENSE.txt new file mode 100644 index 0000000000000000000000000000000000000000..737fec5c5352af3d9a6a47a0670da4bdb52c5725 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/html5lib-1.0.1.dist-info/LICENSE.txt @@ -0,0 +1,20 @@ +Copyright (c) 2008-2019 The pip developers (see AUTHORS.txt file) + +Permission is hereby granted, free of charge, to any person obtaining +a copy of this software and associated documentation files (the +"Software"), to deal in the Software without restriction, including +without limitation the rights to use, copy, modify, merge, publish, +distribute, sublicense, and/or sell copies of the Software, and to +permit persons to whom the Software is furnished to do so, subject to +the following conditions: + +The above copyright notice and this permission notice shall be +included in all copies or substantial portions of the Software. 
+ +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE +LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION +OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION +WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. diff --git a/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/html5lib-1.0.1.dist-info/METADATA b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/html5lib-1.0.1.dist-info/METADATA new file mode 100644 index 0000000000000000000000000000000000000000..a9b0bba23f0362b9da9b1a7d64575188cf076aa4 --- /dev/null +++ b/SOL004/hackfest_firewall_pnf/charms/vyos-config/venv/lib/python3.8/site-packages/html5lib-1.0.1.dist-info/METADATA @@ -0,0 +1,526 @@ +Metadata-Version: 2.1 +Name: html5lib +Version: 1.0.1 +Summary: HTML parser based on the WHATWG HTML specification +Home-page: https://github.com/html5lib/html5lib-python +Maintainer: James Graham +Maintainer-email: james@hoppipolla.co.uk +License: MIT License +Platform: UNKNOWN +Classifier: Development Status :: 5 - Production/Stable +Classifier: Intended Audience :: Developers +Classifier: License :: OSI Approved :: MIT License +Classifier: Operating System :: OS Independent +Classifier: Programming Language :: Python +Classifier: Programming Language :: Python :: 2 +Classifier: Programming Language :: Python :: 2.7 +Classifier: Programming Language :: Python :: 3 +Classifier: Programming Language :: Python :: 3.3 +Classifier: Programming Language :: Python :: 3.4 +Classifier: Programming Language :: Python :: 3.5 +Classifier: Programming Language :: Python :: 3.6 +Classifier: Topic :: Software Development :: Libraries :: Python Modules +Classifier: Topic :: Text Processing :: Markup :: HTML +Provides-Extra: all +Requires-Dist: chardet (>=2.2) ; extra == 'all' +Requires-Dist: genshi ; extra == 'all' +Requires-Dist: datrie ; (platform_python_implementation == 'CPython') and extra == 'all' +Requires-Dist: lxml ; (platform_python_implementation == 'CPython') and extra == 'all' +Provides-Extra: chardet +Requires-Dist: chardet (>=2.2) ; extra == 'chardet' +Provides-Extra: datrie +Requires-Dist: datrie ; (platform_python_implementation == 'CPython') and extra == 'datrie' +Provides-Extra: genshi +Requires-Dist: genshi ; extra == 'genshi' +Provides-Extra: lxml +Requires-Dist: lxml ; (platform_python_implementation == 'CPython') and extra == 'lxml' + +html5lib +======== + +.. image:: https://travis-ci.org/html5lib/html5lib-python.png?branch=master + :target: https://travis-ci.org/html5lib/html5lib-python + +html5lib is a pure-python library for parsing HTML. It is designed to +conform to the WHATWG HTML specification, as is implemented by all major +web browsers. + + +Usage +----- + +Simple usage follows this pattern: + +.. code-block:: python + + import html5lib + with open("mydocument.html", "rb") as f: + document = html5lib.parse(f) + +or: + +.. code-block:: python + + import html5lib + document = html5lib.parse("
<p>
Hello World!")
+
+By default, the ``document`` will be an ``xml.etree`` element instance.
+Whenever possible, html5lib chooses the accelerated ``ElementTree``
+implementation (i.e. ``xml.etree.cElementTree`` on Python 2.x).
+
+Two other tree types are supported: ``xml.dom.minidom`` and
+``lxml.etree``. To use an alternative format, specify the name of
+a treebuilder:
+
+.. code-block:: python
+
+  import html5lib
+  with open("mydocument.html", "rb") as f:
+      lxml_etree_document = html5lib.parse(f, treebuilder="lxml")
+
+When using with ``urllib2`` (Python 2), the charset from HTTP should be
+passed into html5lib as follows:
+
+.. code-block:: python
+
+  from contextlib import closing
+  from urllib2 import urlopen
+  import html5lib
+
+  with closing(urlopen("http://example.com/")) as f:
+      document = html5lib.parse(f, transport_encoding=f.info().getparam("charset"))
+
+When using with ``urllib.request`` (Python 3), the charset from HTTP
+should be passed into html5lib as follows:
+
+.. code-block:: python
+
+  from urllib.request import urlopen
+  import html5lib
+
+  with urlopen("http://example.com/") as f:
+      document = html5lib.parse(f, transport_encoding=f.info().get_content_charset())
+
+To have more control over the parser, create a parser object explicitly.
+For instance, to make the parser raise exceptions on parse errors, use:
+
+.. code-block:: python
+
+  import html5lib
+  with open("mydocument.html", "rb") as f:
+      parser = html5lib.HTMLParser(strict=True)
+      document = parser.parse(f)
+
+When you're instantiating parser objects explicitly, pass a treebuilder
+class as the ``tree`` keyword argument to use an alternative document
+format:
+
+.. code-block:: python
+
+  import html5lib
+  parser = html5lib.HTMLParser(tree=html5lib.getTreeBuilder("dom"))
+  minidom_document = parser.parse("
<p>
Hello World!")
+
+More documentation is available at https://html5lib.readthedocs.io/.
+
+
+Installation
+------------
+
+html5lib works on CPython 2.7+, CPython 3.3+ and PyPy. To install it,
+use:
+
+.. code-block:: bash
+
+  $ pip install html5lib
+
+
+Optional Dependencies
+---------------------
+
+The following third-party libraries may be used for additional
+functionality:
+
+- ``datrie`` can be used under CPython to improve parsing performance
+  (though in almost all cases the improvement is marginal);
+
+- ``lxml`` is supported as a tree format (for both building and
+  walking) under CPython (but *not* PyPy where it is known to cause
+  segfaults);
+
+- ``genshi`` has a treewalker (but not builder); and
+
+- ``chardet`` can be used as a fallback when character encoding cannot
+  be determined.
+
+
+Bugs
+----
+
+Please report any bugs on the `issue tracker
+<https://github.com/html5lib/html5lib-python/issues>`_.
+
+
+Tests
+-----
+
+Unit tests require the ``pytest`` and ``mock`` libraries and can be
+run using the ``py.test`` command in the root directory.
+
+Test data are contained in a separate `html5lib-tests
+<https://github.com/html5lib/html5lib-tests>`_ repository and included
+as a submodule, thus for git checkouts they must be initialized::
+
+  $ git submodule init
+  $ git submodule update
+
+If you have all compatible Python implementations available on your
+system, you can run tests on all of them using the ``tox`` utility,
+which can be found on PyPI.
+
+
+Questions?
+----------
+
+There's a mailing list available for support on Google Groups,
+`html5lib-discuss <http://groups.google.com/group/html5lib-discuss>`_,
+though you may get a quicker response asking on IRC in `#whatwg on
+irc.freenode.net `_.
+
+Change Log
+----------
+
+1.0.1
+~~~~~
+
+Released on December 7, 2017
+
+Breaking changes:
+
+* Drop support for Python 2.6. (#330) (Thank you, Hugo, Will Kahn-Greene!)
+* Remove ``utils/spider.py`` (#353) (Thank you, Jon Dufresne!)
+
+Features:
+
+* Improve documentation. (#300, #307) (Thank you, Jon Dufresne, Tom Most,
+  Will Kahn-Greene!)
+* Add iframe seamless boolean attribute. (Thank you, Ritwik Gupta!)
+* Add itemscope as a boolean attribute. (#194) (Thank you, Jonathan Vanasco!)
+* Support Python 3.6. (#333) (Thank you, Jon Dufresne!)
+* Add CI support for Windows using AppVeyor. (Thank you, John Vandenberg!)
+* Improve testing and CI and add code coverage (#323, #334), (Thank you, Jon
+  Dufresne, John Vandenberg, Geoffrey Sneddon, Will Kahn-Greene!)
+* Semver-compliant version number.
+
+Bug fixes:
+
+* Add support for setuptools < 18.5 to support environment markers. (Thank you,
+  John Vandenberg!)
+* Add explicit dependency for six >= 1.9. (Thank you, Eric Amorde!)
+* Fix regexes to work with Python 3.7 regex adjustments. (#318, #379) (Thank
+  you, Benedikt Morbach, Ville Skyttä, Mark Vasilkov!)
+* Fix alphabeticalattributes filter namespace bug. (#324) (Thank you, Will
+  Kahn-Greene!)
+* Include license file in generated wheel package. (#350) (Thank you, Jon
+  Dufresne!)
+* Fix annotation-xml typo. (#339) (Thank you, Will Kahn-Greene!)
+* Allow uppercase hex characters in CSS colour check. (#377) (Thank you,
+  Komal Dembla, Hugo!)
+
+
+1.0
+~~~
+
+Released and unreleased on December 7, 2017. Badly packaged release.
+
+
+0.999999999/1.0b10
+~~~~~~~~~~~~~~~~~~
+
+Released on July 15, 2016
+
+* Fix attribute order going to the tree builder to be document order
+  instead of reverse document order(!).
+ + +0.99999999/1.0b9 +~~~~~~~~~~~~~~~~ + +Released on July 14, 2016 + +* **Added ordereddict as a mandatory dependency on Python 2.6.** + +* Added ``lxml``, ``genshi``, ``datrie``, ``charade``, and ``all`` + extras that will do the right thing based on the specific + interpreter implementation. + +* Now requires the ``mock`` package for the testsuite. + +* Cease supporting DATrie under PyPy. + +* **Remove PullDOM support, as this hasn't ever been properly + tested, doesn't entirely work, and as far as I can tell is + completely unused by anyone.** + +* Move testsuite to ``py.test``. + +* **Fix #124: move to webencodings for decoding the input byte stream; + this makes html5lib compliant with the Encoding Standard, and + introduces a required dependency on webencodings.** + +* **Cease supporting Python 3.2 (in both CPython and PyPy forms).** + +* **Fix comments containing double-dash with lxml 3.5 and above.** + +* **Use scripting disabled by default (as we don't implement + scripting).** + +* **Fix #11, avoiding the XSS bug potentially caused by serializer + allowing attribute values to be escaped out of in old browser versions, + changing the quote_attr_values option on serializer to take one of + three values, "always" (the old True value), "legacy" (the new option, + and the new default), and "spec" (the old False value, and the old + default).** + +* **Fix #72 by rewriting the sanitizer to apply only to treewalkers + (instead of the tokenizer); as such, this will require amending all + callers of it to use it via the treewalker API.** + +* **Drop support of charade, now that chardet is supported once more.** + +* **Replace the charset keyword argument on parse and related methods + with a set of keyword arguments: override_encoding, transport_encoding, + same_origin_parent_encoding, likely_encoding, and default_encoding.** + +* **Move filters._base, treebuilder._base, and treewalkers._base to .base + to clarify their status as public.** + +* **Get rid of the sanitizer package. Merge sanitizer.sanitize into the + sanitizer.htmlsanitizer module and move that to sanitizer. This means + anyone who used sanitizer.sanitize or sanitizer.HTMLSanitizer needs no + code changes.** + +* **Rename treewalkers.lxmletree to .etree_lxml and + treewalkers.genshistream to .genshi to have a consistent API.** + +* Move a whole load of stuff (inputstream, ihatexml, trie, tokenizer, + utils) to be underscore prefixed to clarify their status as private. + + +0.9999999/1.0b8 +~~~~~~~~~~~~~~~ + +Released on September 10, 2015 + +* Fix #195: fix the sanitizer to drop broken URLs (it threw an + exception between 0.9999 and 0.999999). + + +0.999999/1.0b7 +~~~~~~~~~~~~~~ + +Released on July 7, 2015 + +* Fix #189: fix the sanitizer to allow relative URLs again (as it did + prior to 0.9999/1.0b5). + + +0.99999/1.0b6 +~~~~~~~~~~~~~ + +Released on April 30, 2015 + +* Fix #188: fix the sanitizer to not throw an exception when sanitizing + bogus data URLs. + + +0.9999/1.0b5 +~~~~~~~~~~~~ + +Released on April 29, 2015 + +* Fix #153: Sanitizer fails to treat some attributes as URLs. Despite how + this sounds, this has no known security implications. No known version + of IE (5.5 to current), Firefox (3 to current), Safari (6 to current), + Chrome (1 to current), or Opera (12 to current) will run any script + provided in these attributes. + +* Pass error message to the ParseError exception in strict parsing mode. + +* Allow data URIs in the sanitizer, with a whitelist of content-types. 
+
+* Add support for Python implementations that don't support lone
+  surrogates (read: Jython). Fixes #2.
+
+* Remove localization of error messages. This functionality was totally
+  unused (and untested that everything was localizable), so we may as
+  well follow numerous browsers in not supporting translating technical
+  strings.
+
+* Expose treewalkers.pprint as a public API.
+
+* Add a documentEncoding property to HTML5Parser, fix #121.
+
+
+0.999
+~~~~~
+
+Released on December 23, 2013
+
+* Fix #127: add work-around for CPython issue #20007: .read(0) on
+  http.client.HTTPResponse drops the rest of the content.
+
+* Fix #115: lxml treewalker can now deal with fragments containing, at
+  their root level, text nodes with non-ASCII characters on Python 2.
+
+
+0.99
+~~~~
+
+Released on September 10, 2013
+
+* No library changes from 1.0b3; released as 0.99 as pip has changed
+  behaviour from 1.4 to avoid installing pre-release versions per
+  PEP 440.
+
+
+1.0b3
+~~~~~
+
+Released on July 24, 2013
+
+* Removed ``RecursiveTreeWalker`` from ``treewalkers._base``. Any
+  implementation using it should be moved to
+  ``NonRecursiveTreeWalker``, as everything bundled with html5lib has
+  for years.
+
+* Fix #67 so that ``BufferedStream`` correctly returns a bytes
+  object, thereby fixing any case where html5lib is passed a
+  non-seekable RawIOBase-like object.
+
+
+1.0b2
+~~~~~
+
+Released on June 27, 2013
+
+* Removed reordering of attributes within the serializer. There is now
+  an ``alphabetical_attributes`` option which preserves the previous
+  behaviour through a new filter. This allows attribute order to be
+  preserved through html5lib if the tree builder preserves order.
+
+* Removed ``dom2sax`` from DOM treebuilders. It has been replaced by
+  ``treeadapters.sax.to_sax`` which is generic and supports any
+  treewalker; it also resolves all known bugs with ``dom2sax``.
+
+* Fix treewalker assertions on hitting bytes strings on
+  Python 2. Previous to 1.0b1, treewalkers coped with mixed
+  bytes/unicode data on Python 2; this reintroduces this prior
+  behaviour on Python 2. Behaviour is unchanged on Python 3.
+
+
+1.0b1
+~~~~~
+
+Released on May 17, 2013
+
+* Implementation updated to implement the `HTML specification
+  `_ as of 5th May
+  2013 (`SVN `_ revision r7867).
+
+* Python 3.2+ supported in a single codebase using the ``six`` library.
+
+* Removed support for Python 2.5 and older.
+
+* Removed the deprecated Beautiful Soup 3 treebuilder.
+  ``beautifulsoup4`` can use ``html5lib`` as a parser instead. Note that
+  since it doesn't support namespaces, foreign content like SVG and
+  MathML is parsed incorrectly.
+
+* Removed ``simpletree`` from the package. The default tree builder is
+  now ``etree`` (using the ``xml.etree.cElementTree`` implementation if
+  available, and ``xml.etree.ElementTree`` otherwise).
+
+* Removed the ``XHTMLSerializer`` as it never actually guaranteed its
+  output was well-formed XML, and hence provided little of use.
+
+* Removed default DOM treebuilder, so ``html5lib.treebuilders.dom`` is no
+  longer supported. ``html5lib.treebuilders.getTreeBuilder("dom")`` will
+  return the default DOM treebuilder, which uses ``xml.dom.minidom``.
+
+* Optional heuristic character encoding detection now based on
+  ``charade`` for Python 2.6 - 3.3 compatibility.
+
+* Optional ``Genshi`` treewalker support fixed.
+
+* Many bugfixes, including:
+
+  * #33: null in attribute value breaks XML AttValue;
+
+  * #4: nested, indirect descendant,